\section{Introduction}
\label{sec:intro}
The bending of light by structures along its path is a significant method to study the distribution of matter in the universe. The deflection is independent of the nature of the intervening matter, whether dark or baryonic, and hence this phenomenon, referred to as gravitational lensing, provides a unique tool to map the dark side of the universe. Under controlled systematics, weak gravitational lensing, where the deflection of light rays is not strong enough to produce multiple images of the source but strong enough to deform its shape, is a very powerful probe of the nature of dark energy \citep{2006astro.ph..9591A}. Future sky surveys, like Euclid \citep{2011arXiv1110.3193L, 2009ExA....23...17R, 2009ExA....23...39C}, are expected to provide maps of the sky with unprecedented accuracy and resolution \citep{2013LRR....16....6A}. Such high-quality data offer an opportunity to address the most important questions in cosmology: the energy content of the universe, its dynamics, its evolution and the formation of structure. Weak gravitational lensing is an ideal tool for such data and can deliver measurements of the main cosmological parameters with sub-percent accuracy.
The deformation of the shapes of observed galaxies by the intervening matter is referred to as {\it shear}. This signal is very small, nearly 1\% of the intrinsic ellipticity of the source galaxies, but it can be measured statistically under the assumption that the intrinsic ellipticities of the background galaxies have no preferred direction. There are a number of interpretations of the two-point shear statistics based on dark-matter-only (collisionless) simulations, which are a good approximation in the linear regime. However, at non-linear scales baryonic physics becomes important and can introduce a bias of 5 to 20 percent in the interpretation of the measurements, which in turn can bias the cosmological constraints. So, in the era of precision cosmology, it is very important to quantify the effect of baryonic physics on the two-point shear statistics or the power spectrum.
Baryons account for nearly 20\% of the matter content of the universe. Their distribution depends on the dark matter potential wells, AGN feedback, supernovae, the structure formation history and radiative cooling. The baryonic distribution affects the matter power spectrum at small scales, which, by extension, affects the two-point shear statistics. The effect of baryons on several statistics relevant for cosmology has already been studied by various authors. For instance, \cite{2009MNRAS.394L..11S,2012MNRAS.423.2279C,2014MNRAS.440.2290M} and \cite{2014MNRAS.439.2485C} focused on the effects on the halo mass function. The effect of baryonic processes on the power spectrum and on the weak gravitational lensing shear signal has been studied too \citep{2004APh....22..211W, 2004ApJ...616L..75Z, 2006ApJ...640L.119J, 2008ApJ...672...19R, 2010MNRAS.405..525G, 2011MNRAS.417.2020S, 2011MNRAS.415.3649V, 2014ApJ...783..118R, 2014arXiv1407.0060M}.
In most of the previous works (see references above), the approach was based on simulations, which suffer from finite volume and finite resolution effects and are performed using only one cosmology and baryonic model. They do, however, capture the non-linear physics of gravitational collapse and the associated baryonic effects. In this work, we employ the halo model, an analytical approach, to build two-point shear statistics with and without baryons. This allows one to explore different realizations of any cosmological model. We also compare our results with simulations at various stages to validate our main assumptions.
The outline of the paper is as follows. In section \ref{sec:theory}, we review the necessary concepts of the halo model and introduce our baryonic model as a modification of the radial density profiles of the halos. We compare the model to simulations with AGN feedback, review the modelling of the shear power spectrum, and describe the Gaussian and non-Gaussian parts of the covariance matrix of the $C_{\ell}$. In section \ref{sec:comparison}, we compare the dark-matter-only model (DMO) with our baryonic model (BAR) and show the behaviour of the baryonic correction as a function of our main AGN-feedback parameter, $M_{\rm crit}$. We introduce our fiducial model and the mock datasets used in the likelihood analysis in section \ref{sec:fiducial}. In section \ref{sec:cosmology}, we discuss the cosmological implications of these baryonic corrections and the forecasts on the cosmological parameters, their accuracy and precision. Finally, in section \ref{sec:discussion}, we discuss the implications of our results and propose possible strategies for future work.
\section{Theoretical model - a short review}
\label{sec:theory}
We employ an analytic approach to model the effects of baryonic physics on the matter power spectrum and, by extension, on the shear power spectrum. The model has two broad parts: $(i)$ the dark-matter-only model (DMO), and $(ii)$ the modified model with baryonic physics (BAR). The two differ in the density profile of dark matter halos. We use the halo model \citep{2000MNRAS.318..203S,2000MNRAS.318.1144P,2000ApJ...543..503M,2002PhR...372....1C} to construct the matter power spectrum from the density profiles of halos of mass $M$ at redshift $z$.
\subsection{The halo model}\label{sec:halomodel}
We employed the halo model \citep{1977ApJ...217..331M,2000MNRAS.318..203S,2000ApJ...543..503M,2000MNRAS.318.1144P,2002PhR...372....1C} approach to calculate the matter power spectrum given the density profile of the halos. The halo model assumes that all the matter in the universe resides in spherical halos, with mass defined by a density threshold as:
\begin{equation}
M_{\bigtriangleup} = \dfrac{4}{3} \pi R_{\bigtriangleup}^3 \bigtriangleup \bar{\rho}_{m}
\end{equation}
\\
where $M_{\bigtriangleup}$ is the mass of the halo and $R_{\bigtriangleup}$ is the radius within which the mean density of the halo is $\bigtriangleup$ times the mean matter density of the Universe, $\bar{\rho}_{m}$. We use $\bigtriangleup=200$ throughout this paper, unless stated otherwise. We define the virial radius of the halo, $R_{\rm vir}$, to be $R_{200}$.
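As a minimal numerical illustration of this mass--radius relation (not part of the analysis pipeline), the sketch below inverts the definition of $M_{\bigtriangleup}$; the mean matter density is an assumed value using $\Omega_m = 0.272$ and $\rho_{\rm crit} = 2.775\times 10^{11}\ h^2 M_{\odot}\,{\rm Mpc}^{-3}$:

```python
import math

# Assumed mean matter density in h^2 M_sun / Mpc^3
# (rho_crit = 2.775e11, Omega_m = 0.272).
RHO_MEAN = 2.775e11 * 0.272

def r_delta(m_delta_val, delta=200.0, rho_mean=RHO_MEAN):
    """Radius [Mpc/h] within which the mean enclosed density equals
    delta times the mean matter density of the Universe."""
    return (3.0 * m_delta_val / (4.0 * math.pi * delta * rho_mean)) ** (1.0 / 3.0)

def m_delta(r, delta=200.0, rho_mean=RHO_MEAN):
    """Inverse relation: halo mass [M_sun/h] inside radius r [Mpc/h]."""
    return 4.0 / 3.0 * math.pi * r**3 * delta * rho_mean
```

For a $10^{14}\ h^{-1} M_{\odot}$ halo this gives $R_{200}$ of roughly a Mpc/h, as expected for cluster-scale objects.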
In this framework, the matter power spectrum can be split into two parts:
\begin{equation}
P(k) = P_{1h}(k) + P_{2h}(k),
\end{equation}
\\
where the two terms on the right-hand side are the 1-halo term, describing the correlation between dark matter particles within the same halo, and the 2-halo term, describing the halo-halo correlation. These terms are given by
\begin{equation}
P_{1h} = \int d\nu (f_{dm}+f_{gas}(\nu)) f(\nu) \dfrac{M}{\rho} |u(k|\nu)|^2,
\end{equation}
\begin{equation}
P_{2h} = \left(f_0b_0 + \int d\nu (f_{dm}+f_{gas}(\nu)) f(\nu) u(k|\nu) b(\nu) \right)^2 P_{\rm lin}(k),
\end{equation}
\\
where $M$ is the mass of the halo and $\nu = \delta_c/\sigma(M,z)$ with $\delta_c = 1.686$. The term $f(\nu)$ is the functional form of the mass function, for which we use the fitting formula from \cite{2008ApJ...688..709T}. The term $b(\nu)$ represents the bias of the dark matter halos, for which we use the fitting formula of \cite{2010ApJ...724..878T}. To fulfil the underlying assumptions of the halo model, these two functions, $f(\nu)$ and $b(\nu)$, have to satisfy the following relations:
\begin{equation}
\int_{0}^{\infty} f(\nu)d\nu = 1
\end{equation}
\begin{equation}
\int_{0}^{\infty} f(\nu)b(\nu)d\nu = 1
\end{equation}
\\
However, assuming a lower mass cut corresponding to $\nu_{\rm min}$, we introduce new background factors $f_0$ and $b_0$ such that:
\begin{equation}
f_0 + \int_{\nu_{\rm min}}^{\infty} (f_{dm}+f_{gas}(\nu)) f(\nu)d\nu = 1
\end{equation}
\begin{equation}
f_0b_0 + \int_{\nu_{\rm min}}^{\infty} (f_{dm}+f_{gas}(\nu)) f(\nu)b(\nu)d\nu = 1
\end{equation}
\\
Additionally, $f_{dm}+f_{gas} = 1$ for simple models without feedback, but for models with AGN feedback or other baryonic physics this term may deviate from unity. This will prove useful, as explained in section \ref{sec:baryonicmodel}.
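To illustrate how the background factor $f_0$ arises from the lower mass cut, the sketch below uses a toy Press--Schechter-like multiplicity function $f(\nu) = \sqrt{2/\pi}\,e^{-\nu^2/2}$ and assumes $f_{dm}+f_{gas}=1$; this is an assumption for illustration only, not the Tinker et al. fit used in the paper:

```python
import math

def f_nu(nu):
    # Toy multiplicity function, normalized so that its integral
    # over nu in (0, inf) equals 1 (illustration only; the paper
    # uses the Tinker et al. 2008 fit).
    return math.sqrt(2.0 / math.pi) * math.exp(-0.5 * nu * nu)

def f0_background(nu_min, n=20000, nu_max=10.0):
    """Background factor f0 = 1 - int_{nu_min}^{inf} f(nu) dnu,
    assuming f_dm + f_gas = 1 (trapezoidal integration)."""
    h = (nu_max - nu_min) / n
    s = 0.5 * (f_nu(nu_min) + f_nu(nu_max))
    for i in range(1, n):
        s += f_nu(nu_min + i * h)
    return 1.0 - h * s
```

With this toy choice, $f_0(\nu_{\rm min})$ equals ${\rm erf}(\nu_{\rm min}/\sqrt{2})$ analytically: it vanishes as $\nu_{\rm min}\to 0$ and grows as more low-mass halos are excluded from the integral.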
\begin{figure*}
\centering
\subfigure{\includegraphics[width=0.48\textwidth]{figures/conc_mhalo.eps}}
\subfigure{\includegraphics[width=0.48\textwidth]{figures/bcg_mhalo.eps}}
\subfigure{\includegraphics[width=0.48\textwidth]{figures/fgas_mhalo.eps}}
\subfigure{\includegraphics[width=0.48\textwidth]{figures/rho_nfw_gas_z0.eps}}
\caption{Top left: concentration parameter as a function of halo mass for different redshifts. Top right: mass of the central galaxy as a function of halo mass for different redshifts. Bottom left: gas mass fraction as a function of halo mass for different values of $M_{\rm crit}$. Bottom right: density profiles for NFW (solid lines) and intra-cluster gas (dashed lines) for different halo masses at redshift 0.}
\label{fig:halo}
\end{figure*}
We use the \cite{1998ApJ...496..605E, 1999ApJ...511....5E} transfer function to compute the linear matter power spectrum, $P_{\rm lin}(k)$. The term $u(k|M)$ is the Fourier transform of the normalized density profile and is given by
\begin{equation}
u(k|M) = \dfrac{4 \pi}{M} \int_0^{R_{\rm vir}}dr\ r^2\ \rho(r|M)\ \dfrac{\sin(kr)}{kr}.
\end{equation}
\\
where $\rho(r|M)$ is the density profile of a halo of mass $M$. The function $u(k|M)$ is normalised such that $u(k=0|M)=1$. The dispersion of the smoothed density field, $\sigma(M,z)$, is given by
\begin{equation}
\sigma^2(M,z) = \dfrac{1}{2 \pi^2} \int P_{\rm lin}(k) k^2 |\tilde{W}(R,k)|^2 dk,
\end{equation}
\\
where $\tilde{W}(R,k)$ is the Fourier transform of the top-hat filter function, given by
\begin{equation}
\tilde{W}(R,k) = 3 \dfrac{\textrm{sin}(kR) - kR \textrm{cos}(kR)}{(kR)^3}
\end{equation}
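The definition of $\sigma^2(M,z)$ can be checked numerically with a toy power-law spectrum $P_{\rm lin}(k)\propto k^{-2}$ (an assumption for illustration only), for which $\sigma^2 \propto 1/R$ analytically:

```python
import math

def w_tophat(x):
    """Fourier transform of the top-hat filter; series expansion
    near x = 0 avoids numerical cancellation."""
    if x < 1e-3:
        return 1.0 - x * x / 10.0
    return 3.0 * (math.sin(x) - x * math.cos(x)) / x**3

def sigma2(R, p_lin=lambda k: k**-2.0, kmin=1e-4, kmax=1e3, n=4000):
    """sigma^2(R) = (1/2 pi^2) int P_lin(k) k^2 |W(kR)|^2 dk,
    trapezoidal rule on a log-spaced k grid (dk = k dlnk)."""
    h = math.log(kmax / kmin) / n
    total = 0.0
    for i in range(n + 1):
        k = kmin * math.exp(i * h)
        f = p_lin(k) * k**3 * w_tophat(k * R) ** 2  # extra k from dlnk
        total += 0.5 * f if i in (0, n) else f
    return total * h / (2.0 * math.pi**2)
```

For the $k^{-2}$ toy spectrum, doubling $R$ halves $\sigma^2$, which provides a quick sanity check of the integration.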
This framework is applied to both the DMO and the BAR model, which differ in the halo density profiles and in the normalization of the mass function. The following two sections explain the corresponding profiles.
\subsection{Dark matter only}\label{sec:darkmatteronly}
We started with the radial density profile of dark matter halos given by the functional form:
\begin{equation}
\rho(r|M) = \dfrac{\rho_s}{(r/R_s)^\alpha (1+r/R_s)^\beta},
\end{equation}
\\
where $R_s$ is the characteristic radius, related to the concentration parameter ($c$) and the virial radius of the halo ($R_{\rm vir}$) through $c = R_{\rm vir}/R_s$. We set the two parameters $\alpha$ and $\beta$ to 1 and 2 respectively, corresponding to the Navarro-Frenk-White (NFW) profile \citep{1997ApJ...490..493N}. The characteristic density $\rho_s$ is strongly degenerate with $R_s$ and proportional to the critical density of the Universe at the time the halo formed. So, the NFW profile of a dark matter halo is completely described by its concentration.
The concentration parameter $c$ carries information about the environment, i.e. the mean background density, during the formation of the halo. A number of $N$-body simulations \citep{1997ApJ...490..493N, 1999MNRAS.310..527A, 2000ApJ...535...30J, 2001MNRAS.321..559B, 2001ApJ...554..114E, 2003ApJ...597L...9Z, 2007MNRAS.381.1450N, 2007MNRAS.378...55M, 2008MNRAS.390L..64D, 2008MNRAS.387..536G, 2014MNRAS.441.3359D} have prescribed various power laws relating the mass of the halo ($M$) to its concentration parameter $c$ at redshift $z$. We use the fitting formula given in \cite{2011MNRAS.411..584M}:
\begin{equation}
\log(c) = a(z)\log(M_{\rm vir}/[h^{-1} M_{\odot}]) + b(z)
\end{equation}
\\
where,
\begin{equation}
a(z) = \omega z - m
\end{equation}
\\
and
\begin{equation}
b(z) = \dfrac{\alpha}{z+\gamma} + \dfrac{\beta}{(z+\gamma)^2}
\end{equation}
\\
The fitting parameters $\omega$, $m$, $\alpha$, $\beta$ and $\gamma$ are 0.029, 0.097, $-110.001$, 2469.720 and 16.885, respectively.
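This fitting formula can be sketched as follows; base-10 logarithms are assumed, as in \cite{2011MNRAS.411..584M}, and the floor $c \geq 4$ anticipates the minimum concentration discussed below:

```python
import math

# Fitting parameters quoted in the text (Munoz-Cuartas et al. 2011).
OMEGA, M_PAR, ALPHA, BETA, GAMMA = 0.029, 0.097, -110.001, 2469.720, 16.885

def concentration(m_vir, z, c_min=4.0):
    """c(M, z) from log10(c) = a(z) log10(M) + b(z), M in h^-1 M_sun.
    Base-10 logarithm is an assumption; the floor c >= c_min follows
    the minimum concentration adopted in the text."""
    a = OMEGA * z - M_PAR
    b = ALPHA / (z + GAMMA) + BETA / (z + GAMMA) ** 2
    c = 10.0 ** (a * math.log10(m_vir) + b)
    return max(c, c_min)
```

At $z=0$ this gives $c \sim 10$ for a Milky-Way-mass halo and $c \sim 5$ at cluster masses, reproducing the anti-correlation visible in Figure \ref{fig:halo}.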
Figure \ref{fig:halo} (top-left panel) shows the behaviour of the concentration parameter as a function of halo mass at different redshifts. There is an anti-correlation between the mass of a halo and its concentration. Also, for a given halo mass, the concentration decreases with redshift. We limit the minimum concentration to 4 (dashed line in figure \ref{fig:halo}, upper-left panel). This is because the higher-mass halos have not yet reached their redshift of maximum formation efficiency and will reach it in the future; so, on average, their concentration should not fall below a few. A very recent study by \cite{2014MNRAS.441.3359D} shows that this behaviour is consistent and that the minimum concentration is very close to 4.
\subsection{A baryonic model}\label{sec:baryonicmodel}
Our baryonic model accounts, within each halo, for: 1) a central galaxy, the major stellar component, whose properties are derived from abundance matching techniques; 2) a hot plasma in hydrostatic equilibrium; and 3) an adiabatically-contracted (AC) dark matter component. This analytic approach allows us to compare our model to the DMO case. Apart from the normalization of the mass function, the only term affected by these baryonic components is the density profile of the halo, which no longer follows the NFW profile. We can write the modified NFW (BAR) profile as:
\begin{equation}
\rho_{\rm BAR}(r|M) = f_{\rm dm}\rho_{\rm NFW}^{\rm AC}(r) + \rho_{\rm BCG}(r) + f_{\rm gas}(M)\rho_{\rm gas}(r),
\end{equation}
\\
We discuss each of these terms in more detail below.
\subsubsection{Stellar component}\label{sec:stellar}
We use the fitting function from \cite{2013MNRAS.428.3121M}, based on abundance matching, to map the stellar mass of the central galaxy, $M_{\rm CentralGalaxy}$, which is the major component of the stellar mass in a cluster, to the mass of the halo ($M_{\rm halo}$). Figure \ref{fig:halo} (top-right panel) shows the mapping between halo mass and the stellar mass fraction associated with the central galaxy for a variety of redshifts. The relation has a positive slope for low-mass halos; however, at about the mass of the Milky Way halo, the slope turns negative. At this peak, the central galaxy stellar mass contributes about 4--5\% of the total mass of the halo. This peak shifts to higher masses at higher redshifts, while contributing a lower fraction.
The actual distribution of stellar mass in galaxy groups and clusters can be quite complex. The total stellar mass budget can be decomposed into three components: satellite galaxies, the Brightest Cluster Galaxy (BCG, the massive elliptical galaxy dominating the cluster centre) and the Intra-Cluster Light (ICL, an extended stellar halo surrounding the BCG). The BCG and ICL represent $\sim 40\%$ of the stellar mass in clusters, with this ratio decreasing with total cluster mass \citep{2007ApJ...666..147G}. However, the BCG+ICL dominate the inner part of the cluster and constitute $\sim 70\%$ of the total stellar mass within 0.1 $R_{200}$. This fact is particularly relevant for computing the effect of baryon condensation on the dark matter profiles (see Subsection \ref{sec:adcon}). The BCG+ICL component is usually modelled as a superposition of fitting functions, typically multiple S\'ersic profiles. Given that we are not interested in a detailed modelling of the stellar distribution, we consider a simplified model for the BCG+ICL.
We adopt a radial density profile for the BCG in which the enclosed mass grows linearly with the radius,
\begin{equation}
M_{\star}(<r) = M_{\rm CentralGalaxy} \dfrac{r}{2R_{1/2}}
\end{equation}
\\
which gives
\begin{equation}
\rho(r) = \dfrac{M_{\rm CentralGalaxy}}{8 \pi R_{1/2} r^2},\ r<2R_{1/2}
\end{equation}
\\
where $R_{1/2}$ is the half-mass radius. We use $R_{1/2} = 0.015 R_{\rm vir}$, which is a good fit to observations \citep{2014arXiv1401.7329K}. We force the density profile to drop exponentially beyond $2R_{1/2}$.
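A quick numerical check of this stellar profile (a sketch only; the exponential truncation beyond $2R_{1/2}$ is replaced here by a hard cut) confirms that the enclosed mass grows linearly and integrates to $M_{\rm CentralGalaxy}$:

```python
import math

def rho_bcg(r, m_cg, r_half):
    """Stellar density of the central galaxy, rho = M_cg/(8 pi R_half r^2)
    for r < 2 R_half, else 0 (hard cut instead of the exponential
    truncation used in the paper)."""
    if r >= 2.0 * r_half:
        return 0.0
    return m_cg / (8.0 * math.pi * r_half * r * r)

def enclosed_mass(r_max, m_cg, r_half, n=100000):
    """M(<r_max) = int 4 pi r^2 rho(r) dr via the midpoint rule."""
    h = r_max / n
    return sum(4.0 * math.pi * ((i + 0.5) * h) ** 2
               * rho_bcg((i + 0.5) * h, m_cg, r_half) * h
               for i in range(n))
```

By construction $4\pi r^2 \rho(r)$ is constant inside $2R_{1/2}$, so half of $M_{\rm CentralGalaxy}$ lies within $R_{1/2}$ and the full mass within $2R_{1/2}$.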
\subsubsection{Intra-cluster plasma}\label{sec:gas}
\label{sec:icm}
The major component of the baryonic matter in a galaxy cluster is the hot intra-cluster gas. It is mainly ionized hydrogen at very high temperature and low density. This plasma radiates in X-rays and can safely be assumed to be in hydrostatic equilibrium. We assume the gas is distributed in the halo according to the hydrostatic equilibrium solution given in \cite{2013MNRAS.432.1947M},
\begin{equation}
\rho(x) = \rho_0 \left[\dfrac{\ln(1+x)}{x} \right]^{\dfrac{1}{\Gamma -1}}
\end{equation}
\\
where $x$ is the distance from the centre of the halo in units of the scale radius $R_s$. The effective polytropic index $\Gamma$ is given by
\begin{equation}
\Gamma = 1+ \dfrac{(1+x_{eq})\ln(1+x_{eq}) - x_{eq}}{(1+3x_{eq})\ln(1+x_{eq})}
\end{equation}
\\
where $x_{eq}=c/\sqrt{5}$. Figure \ref{fig:halo} (bottom-right panel, dashed lines) shows the density profile of the hot gas for different halo masses at redshift 0, together with the corresponding NFW profiles (solid lines). For $x>x_{eq}$, the gas density profile follows the NFW profile; however, it approaches a nearly constant value near the centre of the halo.
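The gas profile and its effective polytropic index can be sketched as follows (illustration only; the normalization $\rho_0$ is left free, since it is fixed separately by the gas fraction):

```python
import math

def gamma_eff(c):
    """Effective polytropic index, evaluated at x_eq = c / sqrt(5)."""
    x = c / math.sqrt(5.0)
    lg = math.log(1.0 + x)
    return 1.0 + ((1.0 + x) * lg - x) / ((1.0 + 3.0 * x) * lg)

def rho_gas(x, rho0, c):
    """Hydrostatic gas profile rho(x) = rho0 [ln(1+x)/x]^{1/(Gamma-1)},
    with x = r / R_s; rho0 is a free normalization."""
    g = gamma_eff(c)
    return rho0 * (math.log(1.0 + x) / x) ** (1.0 / (g - 1.0))
```

Since $\ln(1+x)/x \to 1$ as $x \to 0$, the profile flattens to $\rho_0$ at the centre, reproducing the constant-density core seen in Figure \ref{fig:halo}.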
The normalization of the gas density profile, $\rho_0$, is fixed by the gas fraction $f_{\rm gas}$. If we assume no feedback from the baryonic component of the halo, this number can be a constant; however, many hydrodynamical simulations \citep{2005MNRAS.356..107R, 2005MNRAS.360..892D, 2006Natur.442..539M, 2012MNRAS.421.3464P, 2013MNRAS.429.3068T, 2013MNRAS.432.1947M} show signatures of the expulsion of gas from the halo. This expulsion is stronger in low-mass halos than in high-mass halos, so low-mass halos are generally deficient in this hot plasma component. Following this physical motivation, we model the gas mass fraction as a function of the halo mass with the parametric form:
\begin{equation}
f_{\rm{gas}}(M_{\rm halo}) = \dfrac{\Omega_b/\Omega_m}{1 + \left (\dfrac{M_{\rm{crit}}}{M_{\rm{halo}}} \right) ^{\beta}}
\end{equation}
\\
where $M_{\rm crit}$ is a free parameter and $\beta$ is fixed to 2. This parameter controls the gas fraction in halos of different mass: a higher value of $M_{\rm crit}$ means that gas is depleted up to higher halo masses. It can also be interpreted as controlling the strength of AGN feedback. Figure \ref{fig:halo} (bottom-left panel) shows the variation of $f_{\rm gas}$ with halo mass for a variety of $M_{\rm crit}$. We choose $M_{\rm crit} = 10^{13} h^{-1} M_{\odot}$ as the most realistic model. In this case, all halos with mass lower than $\sim 2\times 10^{12} h^{-1} M_{\odot}$ have expelled all their gas to the background (outside $R_{\rm vir}$) and all halos with mass larger than $\sim 2\times 10^{13} h^{-1} M_{\odot}$ retain all their gas inside the halo. Intermediate-mass halos show a very smooth transition from no gas to all gas inside the halo. This behaviour matches well a recent study by \cite{2014arXiv1409.8617S}. We study this case in detail for all its cosmological implications at different scales. We also study one optimistic\footnote{Optimistic in the sense of less AGN feedback, which makes the baryonic corrections less troublesome.} model, where the feedback is not as strong as in our realistic model, with $M_{\rm crit} = 10^{12} h^{-1} M_{\odot}$.
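The parametric gas fraction is straightforward to evaluate; the cosmological parameters below default to the values of Table \ref{tab:cosm_par} (an assumption for illustration):

```python
def f_gas(m_halo, m_crit=1e13, beta=2.0, omega_b=0.045, omega_m=0.272):
    """Gas mass fraction of a halo of mass m_halo [h^-1 M_sun].
    Defaults: M_crit = 1e13 (the 'realistic' model), beta = 2,
    Omega_b and Omega_m from Table 1 (assumed values)."""
    return (omega_b / omega_m) / (1.0 + (m_crit / m_halo) ** beta)
```

At $M_{\rm halo} = M_{\rm crit}$ the fraction is exactly half the cosmic baryon fraction $\Omega_b/\Omega_m$, and it interpolates smoothly between gas-free and gas-rich halos on either side.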
\subsubsection{Adiabatic contraction}\label{sec:ac}
\label{sec:adcon}
In the DMO model, we adopted the NFW profile for the distribution of dark matter in the halo, which is nearly scale-free and completely described by the concentration parameter. However, in the presence of baryons, the dark matter component follows NFW only in the outskirts of the halo; in the very centre the dark matter profile becomes steeper and deviates from a pure NFW profile. This is because the baryons, which dominate the centre of the halo, drag some extra matter from the surroundings towards the centre. The total distribution of matter is expected to respond dynamically to the condensation of baryons at the centre of the halo in a way that approximately conserves the value of the adiabatic ``invariant'' $R\times M(R)$, where $R$ is the distance from the halo centre and $M(R)$ is the mass enclosed in a sphere of radius $R$ \citep{1986ApJ...301...27B, 2004ApJ...616...16G}. We adopt a simplified model for this effect following the appendix of \cite{2011MNRAS.414..195T}, where the adiabatic contraction (AC) of the dark matter profile is solely governed by the central galactic disk.
\subsection{Comparison with simulations}\label{sec:simulations}
\begin{figure*}
\includegraphics[width=0.48\textwidth]{figures/AGN00001.eps}
\includegraphics[width=0.48\textwidth]{figures/AGN00003.eps}\\
\includegraphics[width=0.48\textwidth]{figures/AGN00004.eps}
\includegraphics[width=0.48\textwidth]{figures/AGN00005.eps}\\
\includegraphics[width=0.48\textwidth]{figures/AGN00006.eps}
\includegraphics[width=0.48\textwidth]{figures/AGN00008.eps}
\caption{A comparison of our model density profiles (dashed lines) with the hydrodynamical simulations of \cite{2014MNRAS.440.2290M} (solid lines). There is a remarkable agreement, except at the very centre of the halo.}
\label{fig:compare}
\end{figure*}
\begin{table}
\begin{center}
\begin{tabular}{|l|c|c|c|c|c|c|}
\hline
\hline
Type & $H_0$ & $\sigma_{\rm 8}$ & $n_{\rm s} $ & $\Omega_\Lambda$ & $\Omega_{\rm m}$ & $\Omega_{\rm b}$ \\
\hline
\hline
DMO & 70.4 & 0.809 & 0.963 & 0.728 & 0.272 & - \\
BAR & 70.4 & 0.809 & 0.963 & 0.728 & 0.272 & 0.045 \\
\hline
\hline
\end{tabular}
\caption{Cosmological parameters adopted in our simulations. }\label{tab:cosm_par}
\end{center}
\end{table}
\begin{table}
\begin{center}
\begin{tabular}{|l|c|c|c|}
\hline
\hline
{\itshape Type} & $m_{\rm cdm}$& $m_{\rm gas}$ & $\Delta x_{\rm min}$ \\
& $[10^{8}$ M$_\odot$/h] & $[10^{7}$ M$_\odot$/h] & [kpc/h] \\
\hline
\hline
Original box & $ 15.5 $ & n.a. & $2.14$ \\
DMO zoom-in & $1.94$ & n.a. & $1.07$ \\
BAR zoom-in & $1.62$ & $3.22$ & $1.07$ \\
\hline
\hline
\end{tabular}
\end{center}
\caption{Mass resolution for dark matter particles and gas cells, and spatial resolution (in physical units), for our simulations. }\label{tab:mass_par}
\end{table}
We consider data from a set of cosmological re-simulations performed with the {\scshape ramses} code \citep{2002A&A...385..337T}. These simulations are part of a larger set recently used by \cite{2014MNRAS.440.2290M} to study the baryonic effects on the halo mass function. Thanks to the adaptive mesh refinement capability of the {\scshape ramses} code, the resolution achieved in these simulations is sufficient to study the properties of low-redshift BCGs.
In these calculations, the cosmological parameters are: matter density parameter $\Omega_{\rm m}=0.272$, cosmological constant density parameter $\Omega_\Lambda=0.728$, baryonic matter
density parameter $\Omega_{\rm b}=0.045$, power spectrum normalization $\sigma_{\rm 8}=0.809$, primordial power spectrum index $n_{\rm s}=0.963 $ and Hubble constant $H_0=70.4$ km/s/Mpc
(Table~\ref{tab:cosm_par}). We generated initial conditions for the simulations using the \cite{1998ApJ...496..605E} transfer function and the {\scshape grafic++} code\footnote{http://sourceforge.net/projects/grafic/}, based on the original {\scshape grafic} code \citep{2001ApJS..137....1B}. These simulations come in two flavours: DMO (dark matter only)
which only follow the evolution of dark matter, BAR which include baryons and galaxy formation prescriptions.
The technique we adopted to perform the zoom-ins is described in the following. First, we ran a dark matter only simulation with particle mass $m_{\rm cdm}=1.55\times 10^9$~M$_\odot$/h and box size $144$~Mpc/h. The initial level of refinement was $\ell=9$ ($512^3$), but as the simulation evolved more levels of refinement were allowed. At redshift $z=0$ the grid was refined down to a maximum level $\ell_{\rm max}=16$. Subsequently, we applied the AdaptaHOP algorithm \citep{2004MNRAS.352..376A} to identify the positions and masses of dark matter halos. We selected 51 halos whose {\it total} masses satisfy $M_{\rm tot}>10^{14}$~M$_\odot$ and whose neighbouring halos do not have masses larger than $M/2$ within a spherical region of five times their virial radius. We determined that only 25 of these clusters are relaxed. High-resolution initial conditions were extracted for each of the 51 halos and used to run zoom-in re-simulations. Three different re-simulations per halo have been performed: (I) including dark matter and neglecting baryons; (II) including dark matter, baryons and stellar feedback; (III) additionally including AGN feedback. In this paper we focus on cases (I) and (III), labelled DMO and BAR, respectively.
In the DMO re-simulations, the dark matter particle mass is $m_{\rm cdm}=1.94\times 10^{8}$~M$_\odot$/h. In the BAR re-simulations, the dark matter particle mass is $m_{\rm cdm}=1.62\times 10^{8}$~M$_\odot$/h, while the baryon resolution element has a mass of $m_{\rm gas}=3.22\times 10^{7}$~M$_\odot$. The maximum refinement level was set to $\ell_{\rm max}=17$, corresponding to a minimum cell size $\Delta x_{\rm min} = L/2^{\ell_{\rm max}}\simeq 1.07$ kpc/h. The grid was dynamically refined using a quasi-Lagrangian approach: when the dark matter or baryonic mass in a cell reaches 8 times the initial mass resolution, the cell is split into 8 children cells. Table~\ref{tab:mass_par} summarizes the particle masses and spatial resolution achieved in the simulations.
The physical prescriptions implemented in the code to perform the BAR simulations are briefly described here. In {\scshape ramses}, gas dynamics is solved via a second-order unsplit Godunov scheme \citep{2002A&A...385..337T} based on various Riemann solvers (we adopted the HLLC solver) and the MinMod slope limiter. The gas is described by a perfect gas equation of state (EOS) with polytropic index $\gamma=5/3$. Gas cooling is modelled with the \cite{1993ApJS...88..253S} cooling function, which accounts for H, He and metals. Star formation, supernova feedback (``delayed cooling'' scheme, \cite{2006MNRAS.373.1074S}) and metal enrichment have been included in the calculations. AGN feedback has been included too, using a method inspired by the \cite{2009MNRAS.398...53B} model. In this scheme, super-massive black holes (SMBHs) are modelled as sink particles and AGN feedback is provided in the form of thermal energy injected in a sphere surrounding each SMBH. More details about the AGN feedback scheme and about the tuning of the galaxy formation prescriptions can be found in \cite{2011MNRAS.414..195T} and \cite{2012MNRAS.422.3081M}.
Figure~\ref{fig:compare} shows the comparison between the dark matter, gas, stellar and total mass density profiles of 6 halos in the \cite{2014MNRAS.440.2290M} catalogue and the mass model described in Section \ref{sec:halomodel}. The model for the adiabatically contracted dark matter profile (red dashed lines) fits the simulations well down to scales of $\sim 10$ kpc. The model for the intra-cluster plasma (green dashed lines) fits the results of the simulations well down to scales of $\sim 50$ kpc. The relation between the mass of the central galaxy and that of the halo has a lot of scatter; so, to compare with simulations, we use the stellar mass from the simulation itself for each halo, which defines the normalisation of our stellar model. The model (blue dashed lines) is a good fit to the results of the simulations except in the outskirts. This is expected, since the data from the simulations include BCG, ICL and satellite galaxies, whereas the model is constructed in such a way that the stellar mass expected from abundance matching is associated with the central regions of the halos. The overall result is that the model for the total mass (black dashed lines) provides an excellent match to the results of cosmological simulations down to a scale of $\sim 10$ kpc. We therefore conclude that the mass model is good enough to be adopted for the purposes of this paper.
\subsection{From $P(k)$ to $C(\ell)$}
\label{sec:pk2cl}
In this section we develop the mapping from the 3D matter power spectrum $P(k,z)$ to the 2D projected shear angular power spectrum $C_{\ell}$, following the theoretical framework explained in \cite{2009MNRAS.395.2065T}.
The distortion of the source shape due to weak gravitational lensing can be quantified with two quantities: shear $\gamma$ and convergence $\kappa$. The convergence $\kappa$ is the local isotropic part of the deformation matrix and can be expressed as:
\begin{equation}
\kappa(\vec{\theta}) = \dfrac{1}{2} \vec{\nabla}\cdot\vec{\alpha}(\vec{\theta})
\end{equation}
\\
where $\vec{\alpha}$ is the deflection angle. If we know the redshifts of the source galaxies, additional information can be gained by dividing the sources into different redshift bins. This process is referred to as lensing tomography and is very useful to gain extra constraints on cosmology from the evolution of the weak lensing power spectrum \citep{1999ApJ...522L..21H, 2002PhRvD..65f3001H, 2004MNRAS.348..897T}. In a cosmological context, the convergence field can be expressed as the weighted projection of the mass distribution integrated along the line of sight in the $i$th redshift bin,
\begin{equation}
\kappa_i(\vec{\theta}) = \int_0^{\chi_H} g_i(\chi) \delta(\chi \vec{\theta},\chi)d\chi,
\end{equation}
\\
where $\delta$ is the three-dimensional matter overdensity, $\chi$ is the comoving distance and $\chi_H$ is the comoving distance to the horizon. For a complete review see \cite{1999ARA&A..37..127M, 2001PhR...340..291B, 2006glsw.conf..269S}. The lensing weights $g_i(\chi)$ in the $i$th redshift bin, with comoving distances between $\chi_i$ and $\chi_{i+1}$, are given by:
\begin{equation}
g_i(\chi) = \begin{cases} \dfrac{g_0}{\bar{n}_i} \dfrac{\chi}{a(\chi)} \int_{\chi_i}^{\chi_{i+1}} n_s(\chi^{\prime})\dfrac{dz}{d\chi^{\prime}} \dfrac{(\chi^{\prime} - \chi) }{\chi^{\prime}} d\chi^{\prime}, & \chi \le \chi_{i+1} \\
0, & \chi > \chi_{i+1} \end{cases}
\end{equation}
\\
where $a(\chi)$ is the scale factor at comoving distance $\chi$ and, in units where $c=1$,
\begin{equation}
g_0 = \dfrac{3}{2} \Omega_m H_0^2
\end{equation}
\\
and,
\begin{equation}
\bar{n}_i = \int_{\chi_i}^{\chi_{i+1}} n_s(z(\chi^{\prime})) \dfrac{dz}{d\chi^{\prime}} d\chi^{\prime}.
\end{equation}
\\
where $n_s(z)$ is the distribution of sources in redshift. We assume a source distribution along the line of sight of the form:
\begin{equation}
n_s(z) = n_0 \times 4z^2 \exp\left(-\dfrac{z}{z_0} \right)
\end{equation}
\\
with $n_0 = 1.18 \times 10^{9}$ per steradian, where $z_0$ is fixed such that the corresponding projected source density matches the targeted experiment, such as Euclid:
\begin{equation}
\int_0^{\infty} n_s(z)dz = \bar{n}_g.
\end{equation}
\\
For a Euclid-like survey, we choose $z_0$ such that $\bar{n}_g=50$ sources per arcmin$^2$ \citep{2008ARNPS..58...99H}.
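Since $\int_0^{\infty} 4z^2 e^{-z/z_0}\,dz = 8z_0^3$, the normalization condition can be solved for $z_0$ in closed form. The sketch below assumes $\bar{n}_g = 50$ arcmin$^{-2}$ and the quoted $n_0$, and yields $z_0 \approx 0.4$:

```python
import math

# Number of arcmin^2 in one steradian: (180 * 60 / pi)^2.
ARCMIN2_PER_SR = (180.0 * 60.0 / math.pi) ** 2

def z0_for_density(n_bar_arcmin2=50.0, n0=1.18e9):
    """Solve int_0^inf n0 * 4 z^2 exp(-z/z0) dz = 8 n0 z0^3 = n_bar
    (per steradian) for z0; defaults follow the text."""
    n_bar_sr = n_bar_arcmin2 * ARCMIN2_PER_SR
    return (n_bar_sr / (8.0 * n0)) ** (1.0 / 3.0)
```

A direct numerical integration of $n_s(z)$ with this $z_0$ recovers the target source density, which is a useful consistency check of the unit conversion from arcmin$^{-2}$ to sr$^{-1}$.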
Finally the shear power spectrum between redshift bins $i$ and $j$ can be computed as:
\begin{equation}
C_{ij}(\ell) = \int_0^{\chi_H} \dfrac{g_i(\chi) g_j(\chi)}{ \chi^2} P\left(\dfrac{\ell}{\chi},\chi \right)d \chi
\end{equation}
\\
where $P$ is the 3D matter power spectrum calculated using the halo model framework described in section \ref{sec:halomodel}. Larger $\ell$ corresponds to smaller scales, and the dominant contribution to $C_{\ell}$ at high $\ell$ comes from non-linear clustering.
We divide the survey volume into three redshift bins with boundaries 0.01, 0.8, 1.5 and 4.0, and thus calculate six convergence spectra in total (three auto-spectra and three cross-spectra).
The auto-spectra are contaminated by intrinsic ellipticity noise. Assuming the intrinsic ellipticities of different source galaxies to be completely uncorrelated, the observed power spectrum $C_{ij}^{\rm obs}(\ell)$ is given by
\begin{equation}
C_{ij}^{\rm{obs}}(\ell) = C_{ij}(\ell) + \delta_{ij}\dfrac{\sigma_{\epsilon}^2}{\bar{n}_i},
\end{equation}
\\
where we choose $\sigma_{\epsilon}=0.33$ for the RMS intrinsic ellipticity. The cross-spectra are not contaminated by shot noise.
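The shot-noise term can be sketched as below. The equal split of the 50 sources per arcmin$^2$ over the three tomographic bins is an illustrative assumption, not a number taken from the text.

```python
import numpy as np

# Shot-noise level sigma_eps^2 / nbar_i added only to the auto-spectra.
sigma_eps = 0.33
arcmin2_per_sr = (180.0 * 60.0 / np.pi) ** 2   # arcmin^2 in one steradian
# ASSUMPTION: the 50 gal/arcmin^2 are split equally over the 3 bins
nbar_bin = (50.0 / 3.0) * arcmin2_per_sr       # sources per steradian per bin

def C_obs(C_signal, i, j):
    """Observed spectrum: add delta_ij * sigma_eps^2 / nbar to auto-spectra."""
    return C_signal + (sigma_eps**2 / nbar_bin if i == j else 0.0)
```

For these numbers the noise floor on an auto-spectrum is of order $5\times10^{-10}$, which only matters at high $\ell$ where the signal has dropped.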
The covariance matrix of $C_{\ell}$ has two contributions: Gaussian and non-Gaussian (NG). In this work we only consider the Gaussian contribution to the covariance matrix which is given by the following expression,
\begin{align}
{\rm Cov}_{ij,mn}(\ell,\ell^{\prime})& = \dfrac{\delta_{\ell\ell^{\prime}}}
{\Delta\ell(2\ell+1) \rm{f_{sky}}} \times \nonumber\\
&\qquad \left(C_{im}^{\rm{obs}}(\ell)C_{jn}^{\rm{obs}}(\ell)+
C_{in}^{\rm{obs}}(\ell)C_{jm}^{\rm{obs}}(\ell) \right),
\end{align}
\\
where $\Delta\ell$ is the bin width in $\ell$ and $f_{sky}$ is the sky fraction of the targeted experiment. This term is dominated by cosmic variance at lower $\ell$ and by shot noise at higher $\ell$; however, for a large number of sources, as in the case of Euclid, and larger bin sizes $(\Delta \ell)$ towards the high-$\ell$ end, the shot noise can be significantly reduced.
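The Gaussian covariance above can be implemented directly; the sketch below assumes the observed spectra are stored as arrays over the $\ell$ grid, keyed by ordered bin pairs (an implementation choice, not the paper's).

```python
import numpy as np

# Gaussian covariance of C_l between bin pairs (i,j) and (m,n); C_obs maps an
# ordered pair (i,j) with i <= j to the observed spectrum on the ell grid.
def gaussian_cov(C_obs, i, j, m, n, ells, delta_ell, f_sky):
    get = lambda a, b: C_obs[(min(a, b), max(a, b))]   # C_ij is symmetric
    prefac = 1.0 / (delta_ell * (2.0 * ells + 1.0) * f_sky)
    return prefac * (get(i, m) * get(j, n) + get(i, n) * get(j, m))

# toy check: identical spectra give Cov = 2 C^2 / (dl (2l+1) fsky)
ells = np.array([1000.0])
C = {(1, 1): np.array([1e-9]), (1, 2): np.array([5e-10]),
     (2, 2): np.array([8e-10])}
cov = gaussian_cov(C, 1, 1, 1, 1, ells, delta_ell=100.0, f_sky=0.55)
```

Note the covariance is diagonal in $\ell$ by construction here, which is exactly what the non-Gaussian corrections discussed below would break.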
The NG contribution to the covariance matrix of $C_{\ell}$ is rather complicated to calculate. It introduces correlations between different $\ell$. At the matter power spectrum level, this term depends on the matter trispectrum. To compute the NG covariance of the lensing spectra, we need to integrate the trispectrum in redshift and angle on the sky and then compute this quantity for various pairs $\ell$ and $\ell^{\prime}$. This is a 4D calculation of the trispectrum, which is computationally very expensive. \cite{2012PhRvD..86h3504Y} show that these NG corrections to the covariance become significant for $\ell$ of a few thousand, and \cite{2002PhR...372....1C} show that neglecting them can bias the cosmological parameters by up to 20\%. In this work, we do not take these corrections into account and perform our analysis for different $\ell_{max}$: 1000, 2000, 3000, 4000, 5000, 6000, 8000, 10000 and 20000. We discuss the NG covariance further in section \ref{sec:NG}.
\section{Comparing BAR and DMO model}
\label{sec:comparison}
\begin{figure*}
\includegraphics[width=0.48\textwidth]{figures/pk_ratio_z0.0.eps}
\includegraphics[width=0.48\textwidth]{figures/pk_ratio_Mcrit_1d13.eps}
\includegraphics[width=0.48\textwidth]{figures/cl_ratio_nobin.eps}
\includegraphics[width=0.48\textwidth]{figures/cl_ratio_bins_Mcrit_1e13.eps}
\caption{Top row: Relative deviation of the matter power spectrum predicted by the BAR model from the DMO model predictions as a function of $k$ for different $M_{\rm crit}$ at redshift zero (left) and for fixed $M_{\rm crit} = 10^{13} h^{-1} M_{\bigodot}$ and different redshifts (right). Bottom row: Relative deviation of the shear power spectrum ($C_{\ell}$) predicted by the BAR model from DMO model predictions for different $M_{\rm crit}$ in one big redshift bin (left) and for three tomographic redshift bins and fixed $M_{\rm crit} = 10^{13} h^{-1} M_{\bigodot}$ (right). Dashed lines show calculations without adiabatic contraction (AC) and solid lines with AC. The horizontal dashed line shows the cosmic baryon fraction.}
\label{fig:pkandcl}
\end{figure*}
In this section we compare the baryonic model (BAR) and the dark-matter-only (DMO) model. We would like to establish an understanding of the scales at which the baryonic corrections become important, and how these scales change with redshift and with the only free parameter, $M_{\rm crit}$.
Figure \ref{fig:pkandcl} (top-left panel) shows the relative differences between the BAR and DMO predictions for the matter power spectrum, also referred to as the {\it boost} in this article. The baryonic model has only one free parameter, $M_{\rm crit}$, which regulates the amount of AGN feedback and is introduced in section \ref{sec:icm}. The overall shape of the deviation is similar for all $M_{\rm crit}$ and redshifts: the BAR model follows the DMO model on large scales, suffers a deficit in power at intermediate scales due to the flatter gas profile compared to the NFW profile, and finally the power shoots up due to the central stellar component. Without adiabatic contraction (AC) the rise in the matter power spectrum occurs at very small scales, whereas including the AC effect this rise appears at comparatively lower $k$, i.e., larger scales. This is because AC makes the profile steeper in the centre and shallower in the outskirts.
At redshift 0 (top-left panel of figure \ref{fig:pkandcl}), the baryonic correction starts showing up (more than 1\%) at $k \sim 5\ h/$Mpc for models with negligible AGN feedback (lower $M_{\rm crit}$), whereas for more extreme AGN feedback models (higher $M_{\rm crit}$) this correction is important at much larger scales, $k \sim 0.1\ h/$Mpc. In our fiducial BAR model with $M_{\rm crit} = 10^{13} h^{-1} M_{\bigodot}$, the baryonic effects become significant, i.e., more than 1 percent, at $k \sim 0.5\ h/$Mpc. The maximum dip at intermediate scales varies with $M_{\rm crit}$: for the most extreme models, where AGN feedback can push all the gas out of the halo, this dip approaches the cosmic baryon fraction, $\Omega_b/\Omega_m$. However, for a more realistic model ($M_{\rm crit} = 10^{13} h^{-1} M_{\bigodot}$) this dip is nearly 7-8\%, and for more optimistic models like $M_{\rm crit} = 10^{12} h^{-1} M_{\bigodot}$ it is even smaller, nearly 4-5\%. Therefore, we conclude that the more extreme AGN feedback models trigger the deviation of the matter power spectrum from the DMO model at larger scales, and the dip in power at intermediate scales can be as large as the cosmic baryon fraction when all the gas is expelled by AGN feedback; for more realistic and optimistic models, the deviation starts at relatively small scales and the maximum dip is comparatively smaller.
Figure \ref{fig:pkandcl} (top-right panel) shows the same quantity for a fixed $M_{\rm crit} = 10^{13} h^{-1} M_{\bigodot}$ at different redshifts. At higher redshift, the overall shape of the deviation of the BAR matter power spectrum from the DMO prediction (the boost) is nearly the same as at redshift zero; however, the scales involved and the maximum dip amplitude change with redshift. At higher redshifts, the dip sets in at larger scales and its maximum amplitude converges to the cosmic baryon fraction.
In figure \ref{fig:pkandcl} (bottom-left panel), the baryonic correction to $C_{\ell}$ is shown in one big redshift bin ($z=0.01-4.0$). Here, the shear power spectrum starts to deviate from DMO predictions at about $\ell =100$ for the most extreme AGN feedback models and at $\ell$ of about several thousands for models with weak AGN feedback. For our realistic model (green curve), this deviation occurs at about $\ell \sim 700$. The maximum dip in power is very similar to that of the matter power spectrum explained above. It is worth noticing that for $\ell=10000$ the deviation is very significant for the realistic model ($M_{\rm crit} = 10^{13} h^{-1} M_{\bigodot}$), however, it is negligible for the optimistic model ($M_{\rm crit} = 10^{12} h^{-1} M_{\bigodot}$). Because these are the cases that we study in our likelihood analysis, we will show in section \ref{sec:cosmology} that this behaviour is consistent with the cosmological parameter estimation with these models.
\section{Fiducial model and mock datasets}
\label{sec:fiducial}
In this section, we describe two ingredients that are important for our experiments: the fiducial parameters and the mock datasets. The fiducial parameters assumed in this work, concerning the cosmology, the baryonic model and the Euclid mission, are standard, and the mock datasets are contaminated with appropriate random noise. The key numbers and assumptions are the following:
\begin{enumerate}
\item We used the WMAP 5-year cosmology as our fiducial model, with $[\Omega_m, \Omega_b, h, n_s, \sigma_8, w_0, w_a] = [0.279, 0.0462, 0.701, 0.96, 0.817, -1.0, 0.0]$. We assume a redshift-dependent dark-energy equation of state \citep{2001IJMPD..10..213C,2003PhRvL..90i1301L},
\begin{equation}
w(a) = w_0 + (1-a)w_a
\end{equation}
\\
where, $a = 1/(1+z)$ is the scale factor at redshift $z$.
\item We used three redshift bins to do the tomographic analysis with boundaries $[0.01, 0.8, 1.5, 4.0]$. So we calculated a total of six spectra - three auto-spectra between bins 1-1, 2-2 and 3-3 and three cross-spectra between bins 1-2, 1-3 and 2-3.
\item We performed the likelihood analysis for different $\ell_{max}$, with $\ell_{min}=10$ and 100 equally spaced logarithmic bins; the bin sizes therefore differ between analyses with different $\ell_{max}$.
\item We assumed the mean redshift of the source distribution to be nearly 1.0, which gives approximately 50 galaxies per arcmin$^2$, and $f_{\rm sky}=0.55$, resembling a Euclid-like survey.
\item For the baryonic model, we used the realistic AGN feedback model $M_{\rm crit}= 10^{13} h^{-1}M_{\bigodot}$ as the fiducial value for a total of nine $\ell_{max}$ (1000, 2000, 3000, 4000, 5000, 6000, 8000, 10000, 20000). We also considered one case with the more optimistic model $M_{\rm crit} = 10^{12} h^{-1}M_{\bigodot}$ for $\ell_{max}=10000$, giving ten cases in total.
\item We used the fiducial model stated above to generate the shear power spectra $C_{\ell}$ for these ten cases and perturbed each $C_{\ell}$ with multivariate normal random numbers drawn from a distribution with mean $C_{\ell}$ and the corresponding covariance matrix. These perturbed $C_{\ell}$ constitute the mock datasets, ten in total. In figure \ref{fig:bestfit} we show the mock datasets up to $\ell_{max}=20000$ for the six spectra, together with the best fits (discussed in section \ref{subsec:goodness}).
\end{enumerate}
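The multipole binning and the mock-data draw described above can be sketched as follows; the spectrum and covariance here are placeholders, not the paper's halo-model quantities.

```python
import numpy as np

# Build 100 log-spaced ell bins between ell_min = 10 and ell_max, then draw
# one noisy mock realization from a multivariate normal with mean C_l and a
# toy diagonal covariance (2% errors, purely illustrative).
rng = np.random.default_rng(0)

edges = np.logspace(np.log10(10.0), np.log10(10000.0), 101)  # ell_max = 10000
ell = np.sqrt(edges[:-1] * edges[1:])                        # geometric centers

C_mean = 1e-9 * (ell / 1000.0) ** (-0.5)      # placeholder spectrum
cov = np.diag((0.02 * C_mean) ** 2)           # placeholder diagonal covariance
C_mock = rng.multivariate_normal(C_mean, cov) # one noisy mock data vector
```

In the actual analysis the covariance would be the full Gaussian expression of the previous section, correlating the six spectra at each $\ell$.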
\begin{figure*}
\includegraphics[width=0.48\textwidth]{figures/bestfit_1-1.eps}
\includegraphics[width=0.48\textwidth]{figures/bestfit_1-2.eps}
\includegraphics[width=0.48\textwidth]{figures/bestfit_2-2.eps}
\includegraphics[width=0.48\textwidth]{figures/bestfit_1-3.eps}
\includegraphics[width=0.48\textwidth]{figures/bestfit_3-3.eps}
\includegraphics[width=0.48\textwidth]{figures/bestfit_2-3.eps}
\caption{Mock datasets (including random noise) for $\ell_{max}=20000$ in all six spectra (in black). The left column shows the three auto-spectra and the right column shows the three cross-spectra. Solid lines show the best fit for the DMO (red) and BAR (green) models.}
\label{fig:bestfit}
\end{figure*}
For each bin combination (1-1, 1-2, etc.), the length of the data vector is 100, so the total number of data points in each dataset is 600. However, the two cross-spectra, 1-2 and 1-3, are highly correlated, which effectively leaves only five degrees of freedom for each $\ell$. Therefore, the total number of degrees of freedom in each dataset is about 492 (500 minus 8 free parameters). Hence, the best fit to each dataset should have a $\chi^2$ in the range $492 \pm \sqrt{2\times 492}$, i.e., between 461 and 523.
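As a quick arithmetic check of the quoted interval, the $\chi^2$ distribution with $k$ degrees of freedom has mean $k$ and standard deviation $\sqrt{2k}$:

```python
import math

# Expected best-fit chi^2 range for ~492 degrees of freedom:
# mean dof, standard deviation sqrt(2 * dof).
dof = 492
sigma = math.sqrt(2.0 * dof)          # ~31.4
lo, hi = dof - sigma, dof + sigma     # ~461 to ~523
```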
In figure \ref{fig:pkandcl} (bottom-right panel), we show the boost for the unperturbed (noise-free) mock datasets up to very high $\ell_{max}$ with respect to the corresponding DMO model. In all six curves of this figure, we keep $M_{\rm crit}=10^{13} h^{-1} M_{\bigodot}$. The auto-spectrum of the first bin (1-1) starts deviating (by more than 1\%) from the DMO model at about $\ell=300$, whereas the auto-spectrum of the third bin (3-3) starts showing a deviation at nearly $\ell=800$. All other auto- and cross-spectra lie between these two extremes. This behaviour is explained by the upper-right panel of the same figure, which shows the redshift evolution of the correction for the same $M_{\rm crit}$: at higher redshifts, the BAR matter power spectrum starts to deviate from DMO at smaller scales but also exhibits a larger dip at intermediate scales due to gas expulsion. Correspondingly, in the bottom-right panel the $C_{\ell}$ of the lower redshift bin (1-1) starts deviating from DMO at larger scales than that of the higher redshift bin (3-3), while the maximum dip occurs in the higher redshift bin (3-3). Comparing with the bottom-left panel of the same figure, one can see that the baryonic correction becomes even more important when binning in redshift rather than using one big redshift bin. Performing the analysis in tomographic bins therefore provides additional constraints on $M_{\rm crit}$, compared to the poorer constraints obtained when only one bin is used.
\begin{figure*}
\centering
\includegraphics[width=0.8\textwidth]{figures/errors_euclid.eps}
\caption{Relative 1$\sigma$ errors on different cosmological parameters as a function of $\ell_{max}$ for $M_{\rm crit}=10^{13} h^{-1} M_{\bigodot}$. Solid lines are for the BAR model and dashed curves for the DMO model. Horizontal black dashed lines mark the $\pm 1$ level and vertical black dashed lines mark important scales.}
\label{fig:errors}
\end{figure*}
\section{Likelihood analysis and cosmological implications}
\label{sec:cosmology}
We performed a likelihood analysis using MCMC to explore the cosmological parameter space for nine different $\ell_{max}$ (1000, 2000, 3000, 4000, 5000, 6000, 8000, 10000, 20000) using $M_{\rm crit}=10^{13} h^{-1} M_{\bigodot}$, which is our most realistic model, and for $\ell_{max}=10000$ using $M_{\rm crit}=10^{12} h^{-1} M_{\bigodot}$ which is our optimistic model.
We run MCMC on the ten mock datasets, fitting both the DMO and BAR models, for a total of 20 MCMC runs. Each run is performed using the publicly available code COSMOMC \citep{2002PhRvD..66j3511L}, with 16 chains in each case. In total, 320 CPUs were used for nearly 10 days to reach the desired convergence; the whole analysis required about 76800 CPU hours.
We present the results of the MCMC analysis and their interpretation in the following two sections, focusing in particular on the precision and the accuracy of the recovered cosmological parameters.
\begin{figure*}
\centering
\includegraphics[width=0.48\textwidth]{figures/Om_s8.eps}
\includegraphics[width=0.48\textwidth]{figures/w0_wa.eps}\\
\includegraphics[width=0.48\textwidth]{figures/Om_s8_1d12.eps}
\includegraphics[width=0.48\textwidth]{figures/w0_wa_1d12.eps}
\caption{Top row: 1$\sigma$ 2D error ellipses for different cosmological parameters using mock datasets with $M_{\rm crit}=10^{13} h^{-1} M_{\bigodot}$ and different $\ell_{max}=$ 3000 (red), 5000 (blue), 8000 (green), 10000 (magenta), 20000 (cyan). Bottom row: 1$\sigma$ 2D error ellipses using mock datasets with $M_{\rm crit}=10^{12} h^{-1} M_{\bigodot}$ for $\ell_{max}=$ 10000. All solid curves are for the BAR model and dashed curves are for the DMO model.}
\label{fig:mcmc}
\end{figure*}
\subsection{Precision in cosmology}
Future experiments, like Euclid, are expected to provide very tight constraints on cosmological parameters. Here we show the constraints expected from the weak lensing shear power spectrum as a function of $\ell_{max}$. Figure \ref{fig:errors} shows the relative 1$\sigma$ errors on four cosmological parameters and one baryonic parameter using both models, BAR (solid curves) and DMO (dashed curves). The matter density of the Universe ($\Omega_m$) and the amplitude of fluctuations ($\sigma_8$) are the most constrained parameters, whereas other parameters, like the equation of state of dark energy today ($w_0$), are relatively less constrained. The overall behaviour of all parameters is the same: weak constraints for small $\ell_{max}$, better constraints with increasing $\ell_{max}$, and a flattening beyond $\ell_{max}\sim 8000$. The constraints derived from the BAR model are relatively weaker than those derived from the DMO model, a consequence of the extra parameter, $M_{\rm crit}$.
The normalized matter density of the Universe, $\Omega_m$, can already be determined to 5\% at $\ell_{max}$=1000, improving to 2-3\% at $\ell_{max}$=8000, whereas the amplitude of fluctuations, $\sigma_8$, is determined even better at the corresponding scales: at $\ell_{max}$=1000, $\sigma_8$ can be determined to 3\%, and this improves to better than 1\% at $\ell_{max}$=8000. Beyond $\ell_{max}$=8000, the errors on both parameters remain the same and no further constraints can be gained by going to smaller scales (higher $\ell_{max}$). There is a degeneracy between these two parameters, which can be seen in the upper-left panel of figure \ref{fig:mcmc}, where different colours represent different $\ell_{max}$.
The constraints on the two parameters describing the redshift evolution of the equation of state of dark energy, $w_0$ and $w_a$, can also be improved with this kind of experiment. At $\ell_{max}$=1000, $w_0$ can only be determined to 12\%, whereas for $\ell_{max}$=8000 it can be constrained to 6-7\%, with the same precision at higher $\ell_{max}$. The constraints on $w_a$ are much weaker: the absolute error on $w_a$ is nearly 0.35 for $\ell_{max}$=1000, $\sim$0.18 for $\ell_{max}$=8000, and the same afterwards.
The flattening of the relative errors on the parameters indicates that there is no gain in the precision of cosmological parameter estimation beyond a certain threshold, $\ell_{max}\sim 8000$. In practice, an experiment like Euclid may provide data of high enough quality to resolve and measure the shear power spectrum at $\ell_{max}$ as high as $10^5$, but our analysis shows that the constraints become constant beyond $\ell_{max}\sim 8000$ and no further improvement can be achieved.
This forecast suggests that by measuring the $C_{\ell}$s up to $\ell_{max}\sim 8000$, one can constrain $\Omega_m$ to about 2\% precision and $\sigma_8$ to about 0.5\% precision, without any loss of information from higher $\ell$s and including baryonic physics. However, $w_0$ can only be constrained to 6-7\%, with some information about $w_a$, the time derivative of the equation of state of dark energy.
\begin{figure*}
\centering
\includegraphics[width=0.8\textwidth]{figures/bias_lmax.eps}
\caption{The ratio of the bias to the 1$\sigma$ error of various cosmological parameters as a function of $\ell_{max}$ for $M_{\rm crit}=10^{13} h^{-1} M_{\bigodot}$. Solid lines are for the BAR model and dashed curves for the DMO model. Horizontal black dashed lines mark the $\pm 1$ level and vertical black dashed lines mark important scales.}
\label{fig:bias}
\end{figure*}
\subsection{Accuracy in cosmology}
When precision cosmology is the goal, one should also take into account the ability to recover the cosmological parameters accurately. If there are systematic errors in the model, one can still derive very tight constraints from the wrong model, but the recovered parameters will be wrong, or biased, compared to the true values. In this section we present the results of our analysis of the bias in the cosmological parameters due to the lack of baryonic physics in DMO models, and we assess whether these biases are significant. We define the bias as the difference between the mean value of the parameter in the MCMC and its fiducial, or true, value.
Figure \ref{fig:mcmc} shows the 1$\sigma$ error ellipses of the cosmological parameters for the BAR model (solid curves) and the DMO model (dashed curves). For small $\ell_{max}$, the two models are indistinguishable, a consequence of the fact that baryonic physics only becomes important at smaller scales. But as we go to higher and higher $\ell_{max}$, the posterior of the DMO model shifts further from the true values, whereas the BAR model remains centred at the correct location. We find that for all BAR models this bias is smaller than the 1$\sigma$ error on the parameter, whereas the bias in the parameters obtained by fitting the DMO model increases with increasing $\ell_{max}$.
Figure \ref{fig:bias} shows the ratio of these biases to the 1$\sigma$ error on the cosmological parameters as a function of $\ell_{max}$ for the two models, BAR (solid curves) and DMO (dashed curves). The bias never exceeds the 1$\sigma$ error for the BAR models, but it does for the DMO models beyond $\ell_{max} \sim 4000$. This is again a consequence of the fact that baryonic physics is only important at smaller scales. It indicates that if we only perform our experiment up to $\ell_{max}$=4000, no baryonic physics needs to be taken into account; however, if one is interested in $\ell_{max}>4000$, baryonic physics becomes very important. Beyond $\ell_{max}$=4000 the bias increases with $\ell_{max}$, growing to as much as 10$\sigma$ at $\ell_{max}$=10000 and remaining flat after that. We saw in the previous section that the constraints on the cosmological parameters can still be improved up to $\ell_{max}$=8000, but with the wrong model, DMO, the recovered parameters will be 5-10$\sigma$ away from the true values. So, in order to obtain the best constraints on cosmology, baryonic physics must be taken into account.
\subsection{An optimistic model}
We analysed the $\ell_{max}=10000$ case for our optimistic model with $M_{\rm crit}=10^{12} h^{-1} M_{\bigodot}$. As in our previous analysis, we performed two MCMC runs, fitting for the BAR model and for the DMO model. Figure \ref{fig:mcmc} (bottom row) shows the 1$\sigma$ error ellipses of the cosmological parameters. In this case the bias in the cosmological parameters does not exceed the $1\sigma$ error, and hence this is not a troubling case. This was expected: for lower $M_{\rm crit}$, baryonic physics is less important, even at comparatively small scales, than for higher $M_{\rm crit}$. For example, figure \ref{fig:pkandcl} (bottom-left panel) shows that for $M_{\rm crit}=10^{12} h^{-1} M_{\bigodot}$ the deviation of $C_{\ell}$ from the DMO model is negligible at $\ell=10000$; hence we indeed expect a small or vanishing bias.
\section{Discussion and conclusions}
\label{sec:discussion}
In this work we first reviewed the theoretical framework necessary to calculate the matter power spectrum using the halo model and to compute the shear angular power spectrum in different redshift bins. We presented an analytic prescription to distribute baryons into two components -- the intra-cluster plasma in hydrostatic equilibrium within the halo, and the BCG, which dominates the mass distribution in the centre of the halo and whose properties are well measured using abundance matching techniques. We also took into account the adiabatic contraction of the dark matter due to the central condensation of baryons. We compared these analytic density profiles to the simulations of \cite{2014MNRAS.440.2290M}, both dark-matter-only and baryonic with AGN feedback, and found remarkable agreement.
We modelled the shear power spectrum in the two models, BAR and DMO, and found that baryonic corrections become important beyond $k \sim 0.5\ h/$Mpc in the matter power spectrum at redshift 0 for our most realistic AGN feedback model, which translates into $\ell \sim 800$ for the shear power spectrum in one big redshift bin. However, when binned in redshift (lensing tomography), these corrections become larger in each bin and for each auto- and cross-spectrum. These baryonic corrections have one free parameter, $M_{\rm crit}$, which regulates AGN feedback, i.e., it controls how much gas remains inside the halo as a function of the halo mass. We believe the most realistic value of this parameter is near $10^{13} h^{-1}M_{\bigodot}$, which sets the most likely magnitude of the baryonic corrections.
We performed the likelihood analysis using MCMC for a total of ten different datasets. Nine of them assume our realistic model for the AGN feedback with $M_{\rm crit} = 10^{13} h^{-1}M_{\bigodot}$ but different $\ell_{max}$, and one assumes a less extreme (optimistic) model with $M_{\rm crit} = 10^{12} h^{-1}M_{\bigodot}$. For each mock dataset, we performed MCMC fits for both models, BAR and DMO.
The main results of the likelihood analysis are summarized in figures \ref{fig:mcmc}, \ref{fig:errors} and \ref{fig:bias}. They are interesting in two respects. First, we found that the constraints on all cosmological parameters improve with increasing $\ell_{max}$, but beyond $\ell_{max} \sim 8000$ the variance of each parameter becomes nearly constant. This indicates that even going to higher $\ell_{max}$ (smaller scales) yields no additional constraints on the cosmological parameters. Second, if the wrong model, in this case DMO, is fitted to the data, beyond $\ell_{max}=4000$ the mean recovered values of the parameters start moving away from their true values. We refer to the difference between the true value and the recovered mean value as the bias in the cosmological parameter. The bias exceeds 1$\sigma$ beyond $\ell_{max}=5000$ and grows up to 10$\sigma$ for $\ell_{max}=10000$, remaining flat afterwards. So, there is a very interesting window, $\ell_{max}=4000-8000$, which is useful for improving the constraints on cosmology; but if a wrong model like DMO is chosen, the recovered cosmology can be biased by a few to 10$\sigma$.
\subsection{Goodness of fit}
\label{subsec:goodness}
In the previous sections we saw that for $\ell_{max}<4000$ there is no significant bias in the determination of the cosmological parameters in our analysis, whereas for $\ell_{max}>5000$ the bias exceeds 1$\sigma$ and keeps increasing with $\ell_{max}$, up to 10$\sigma$. The question is: can we discard these biased models by looking at the goodness of fit? The answer lies in figure \ref{fig:chi2}, where we show the ratio between the best-fit $\chi^2$ of the DMO model and that of the BAR model as a function of $\ell_{max}$. This ratio exceeds unity by as little as 5-10\% up to $\ell_{max} \sim 5000$, and even at $\ell_{max}=20000$, where the bias is more than 10$\sigma$, it only rises to 25\%. A reduced $\chi^2$ of 1.25 does not appear to be such a bad fit for our cosmological measurements, so by looking at the $\chi^2$ alone it is not really possible to discard a model. The same conclusion can be drawn from figure \ref{fig:bestfit}, where we show the mock datasets of the six spectra (between different bins) for $\ell_{max}=20000$, together with the best fits of the DMO model (in red) and the BAR model (green). As expected, the green curve is a better fit to the data than the red curve; but were the green curve absent from this figure, the red curve would not appear to be a very bad fit. So, when deriving constraints on cosmology from this kind of experiment, one should be extremely careful about the possible magnitude of baryonic effects at small scales: although the results obtained with the wrong model may appear to be a good fit, the corresponding bias can in fact amount to many $\sigma$. Moreover, the parameters recovered with the wrong model (DMO) move away from the true values with increasing $\ell_{max}$.
This suggests a potential test for a given model: the recovered cosmological parameters should not move significantly when the analysis is extended to different scales; the difference should appear only in the variance of the parameters, not in their mean values.
\subsection{$M_{\rm crit}$ parameter}
The only free parameter in our BAR model, $M_{\rm crit}$, regulates the amount of gas inside the halo as a function of halo mass. We explored the consequences of what we believe to be a realistic model ($M_{\rm crit} = 10^{13} h^{-1}M_{\bigodot}$) in detail, considering nine different $\ell_{max}$. At $\ell_{max}=1000$ weak lensing provides hardly any constraint on this parameter, but as we increase $\ell_{max}$, baryonic physics becomes more and more important and constraints can be put on $M_{\rm crit}$. In fact, the constraints on this parameter tighten rapidly, from 15\% at $\ell_{max}=1000$ to 1-2\% at $\ell_{max}=4000$. Beyond this, no significant further improvement can be gained; the variance of $M_{\rm crit}$ becomes constant after nearly $\ell_{max}=8000$, as happens for the other cosmological parameters. So, with this kind of weak lensing experiment, $M_{\rm crit}$ (or $\log(M_{\rm crit})$) could be constrained to 1-2\%, which is quite impressive.
\begin{figure}
\includegraphics[width=0.48\textwidth]{figures/chi2.eps}
\caption{The ratio of the best-fit $\chi^2$ of the DMO model to that of the BAR model as a function of $\ell_{max}$.}
\label{fig:chi2}
\end{figure}
\subsection{Non-Gaussian covariance vs baryonic corrections}
\label{sec:NG}
Extracting cosmological information from clustering data at the level of a few percent accuracy can be considered very optimistic, and can be jeopardized by many unresolved issues. The two most important ones are (i) baryonic physics at small scales, and (ii) non-Gaussian effects in the covariance matrix of the power spectrum. Both can be quantified for projected weak-lensing statistics such as the shear power spectrum. In this work, we primarily address the effect of baryonic physics at small scales on the shear power spectrum and its cosmological implications, while ignoring the effect of non-Gaussianity (NG) on the covariance matrix.
The NG contribution to the covariance becomes more important at small scales, just like baryonic physics \citep{2009MNRAS.395.2065T, 2013PhRvD..87l3504T}. The question then is which one is more important to deal with, and which one appears first when going towards smaller scales. This question does not have a straightforward answer, but ignoring either contribution may result in highly biased estimates of the cosmological parameters.
\cite{2012PhRvD..86h3504Y} (figure 9, right panel) show the constraints on the amplitude of fluctuations ($\sigma_8$) as one goes to smaller scales. If one considers only Gaussian errors, the constraints continue to improve until the instrumental shot noise kicks in; however, the NG contribution is likely to dominate over the Gaussian errors beyond $\ell=700$. We cannot compare directly to these plots, as the constraints depend on many other details, but we can still compare the ratios of the NG and Gaussian contributions. At $\ell_{max}=10000$, the NG covariance is six times the Gaussian covariance; on the other hand, figure \ref{fig:bias} shows that the bias in $\sigma_8$ grows to nearly 10$\sigma$ at $\ell=10000$. This suggests that the NG corrections are subdominant compared to the baryonic effects. However, this comparison is rather qualitative and requires further study.
\subsection{The ideal configuration}
We explored the baryonic effects on cosmological parameter estimation and found a large bias in the cosmological parameters if the analysis includes $\ell>4000$. Beyond this limit, the cosmological parameters become biased, misleading the constraints; however, the constraints themselves keep improving up to $\ell\sim 8000$. So the question arises: what is the ideal configuration for a weak-lensing power spectrum analysis to put useful constraints on cosmology with Euclid-like surveys?
We explored this question in our analysis and stated our results in the previous sections. To summarize, the ideal configuration is to go as high as $\ell=8000$, including baryonic physics and marginalizing over the baryonic parameters, in our case $M_{\rm crit}$. In this configuration, one obtains unbiased estimates of the cosmological parameters, and the cosmological parameter space can be constrained with much better precision than before: $\Omega_m$ and $\sigma_8$ can be estimated to nearly 2\% and 0.5\% respectively, while the 1$\sigma$ errors on the two parameters defining the redshift evolution of the equation of state of dark energy, $w_0$ and $w_a$, are 0.07 and 0.15 respectively. Along with the cosmological parameters, the baryonic parameter $M_{\rm crit}$ can also be estimated to very high precision, as good as 1-2\%.
When dealing with real clustering datasets, we can also use independent constraints on the baryonic parameters, such as abundance-matching data and/or X-ray data on individual halos, providing a solid understanding of the overall signal and the underlying baryonic effects.
\section{Acknowledgements}
I.M. would like to thank Prasenjit Saha, Uros Seljak and Ravi Sheth for useful discussions about the topic and their suggestions.
\bibliographystyle{mn2e}
\defApJ{ApJ}
\defApJL{ApJL}
\defApJS{ApJS}
\defAJ{AJ}
\defPRD{PRD}
\defMNRAS{MNRAS}
\defA\&A{A\&A}
\defPhysicsReports{PhysicsReports}
\defNature{Nature}
\defARAA{ARAA}
\section{Introduction}
The first detection of a baryon $\Xi^{++}_{cc}$ containing two charm quarks (i.e., having the structure {\it ccu}) was made at CERN by the LHCb collaboration last summer \cite{01}. Its mass was measured to be $3621$ MeV. The existence of such a baryon may be considered an inevitable consequence of the existence of the $c$-quark itself. Calculations of the $\Xi^{++}_{cc}$-baryon mass started almost thirty years ago \cite{02,03,04}. Probably the first review paper on this subject dates back to 2002 \cite{05}. A reliable prediction of the double charmed baryon mass and properties turned out to be a difficult task -- see the list of references in \cite{01}. This is not surprising, since the theoretical description of the QCD ``hydrogen atom'' -- charmonium -- to a great extent relies on phenomenological input rather than on the first principles of QCD. The connections between different approaches to the double charmed baryons, e.g., the quark-diquark model, the potential model, and the QCD sum rules, their interrelation and their relation to the fundamental QCD Lagrangian, are obscure. The aim of the present note is to investigate the $\Xi^{++}_{cc}$ mass and the structure of the wave function, taking our works \cite{03,04} as a starting point for the discussion. In \cite{03,04} the spin-averaged masses and the wave functions of multiquark systems made of up to 12 quarks were calculated in the potential model through the Green Function Monte Carlo, or random walks, method -- see below.
Despite the fact that QCD gives no sound arguments in favor of an effective potential with only two-body forces, the constituent quark model has given results in surprisingly good agreement with experimental hadron spectroscopy. Attempts to apply this approach to multiquark systems encounter serious difficulties, even if one ignores problems like the complicated QCD structure in the infrared region. With a growing number of particles, especially with unequal masses, and for a complicated form of the potential, the traditional methods -- variational, integral equations, hyperspherical functions -- face difficulties. The accuracy, in particular that of the many-body wave function determination, becomes uncontrollable and the computation time increases catastrophically. This is true even for the three-quark system. The convergence for the wave function is much slower than for the binding energy, both in the harmonic oscillator expansion and in the hyperspherical formalism \cite{02}. In \cite{03,04} the spectrum and the wave functions of multiquark systems were investigated using the Green Function Monte Carlo (GFMC) method. The GFMC is based on the idea, attributed to Fermi, that the imaginary-time Schr\"odinger equation is equivalent to a diffusion equation with branching (sources and sinks). {The GFMC makes it possible to calculate easily and with high accuracy the spectrum and, most importantly, the wave function of a multiquark system in various models.} Originally GFMC was used to calculate the ground-state properties of a variety of systems in statistical and atomic physics \cite{06}. At present it is the most powerful method to solve the many-body problem \cite{07}. Similar methods have been used in Quantum Field Theory under the names Projector Monte Carlo \cite{08} and Guided Random Walks \cite{09}.
A detailed description of the GFMC is beyond the scope of this paper. We only stress that the method does not require solving differential or integral equations for the wave function; there is no need even to write down such equations. In GFMC the exact many-body Schr\"odinger equation is represented by a random walk in a many-dimensional space in such a way that physical averages are calculated exactly, given sufficient computational resources. {At this point we emphasize the need to disentangle the accuracy of the computations from the possibly large uncertainty related to the use of a particular model.} When we tried to apply GFMC to multiquark systems, i.e., to systems of fermions, we encountered a difficulty: for a fermionic system the kernel of the Green function may take a negative value at a certain step of the sampling procedure. A recipe to circumvent this difficulty was proposed in \cite{03}. We note in passing that a similar problem prevents lattice Monte Carlo investigations of quark matter properties at finite density.
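The branching-random-walk idea can be illustrated on a minimal toy problem (this sketch is not the production code of \cite{03,04}; the potential, time step, and walker numbers are purely illustrative): a single particle in a harmonic well $V(x)=x^2/2$, whose exact ground-state energy is $0.5$ in oscillator units.

```python
import numpy as np

def gfmc_ground_state(n_walkers=4000, n_steps=2000, dt=0.01, seed=1):
    """Branching random walk for the imaginary-time Schroedinger equation
    of a single particle in V(x) = x^2/2 (exact ground-state energy: 0.5)."""
    rng = np.random.default_rng(seed)
    V = lambda x: 0.5 * x * x
    x = rng.normal(size=n_walkers)                  # initial walker ensemble
    e_ref = float(V(x).mean())                      # reference energy (sources/sinks)
    samples = []
    for step in range(n_steps):
        x = x + np.sqrt(dt) * rng.standard_normal(x.size)  # free diffusion
        w = np.exp(-dt * (V(x) - e_ref))                   # branching weights
        # resample a fixed-size population with probability proportional to w
        x = x[rng.choice(x.size, size=n_walkers, p=w / w.sum())]
        e_ref = float(V(x).mean())                  # mixed estimator (trial psi = 1)
        if step > n_steps // 2:                     # discard the equilibration phase
            samples.append(e_ref)
    return float(np.mean(samples))
```

With a constant trial function, the mixed energy estimator reduces to the average of the potential over the walker ensemble, so the returned value converges to $0.5$ up to time-step and population-control bias.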
Now we come to the description of the interquark interaction chosen for the calculation of the ground state mass of \textit{(ccu)} system and evaluation of the wave function. For a system of $N$ quarks the interaction between quarks is taken in the form
\begin{equation}
V_{ij}(r_{ij})=\lambda_i\lambda_j V_{8}(r_{ij}),
\label{eq:01}
\end{equation}
{
\noindent with $\lambda_i$ being the Gell-Mann matrices. Solving the quark problem with $N \geq 4$ with the interaction (\ref{eq:01}) one encounters a problem that the coupling of colors of the constituents into a color singlet is not unique -- see \cite{04} for a discussion. For $V_{8}(r)$ we have taken the well-known Cornell potential \cite{10}
\begin{equation} V_{8}(r)=-\frac{3}{16}\left(-\frac{\varkappa}{r}+\frac{r}{a^2}+C_f \right), \label{eq:02} \end{equation}
where $\varkappa=0.52$ and $a=2.34$ GeV$^{-1}$, which corresponds to the string tension $\sigma = a^{-2} = 0.18$ GeV$^2$. For a baryon $\left<\lambda_i\lambda_j\right>=-8/3$. We would like to present two more arguments, in addition to the well-known ones \cite{10}, in support of the Cornell potential. Slightly varying its parameters, one may obtain a good fit of the lattice simulation of the quark-antiquark static potential \cite{11}. Cornell-like behavior also arises from the AdS/CFT correspondence \cite{12}.}
In Ref. \cite{13} an excellent fit of both the meson and baryon sectors was obtained under the assumption that the constant $C_f$ is weakly flavor dependent. Following \cite{13}, we have chosen the following input parameters for the \textit{(ccu)} baryon (all values in GeV units)
\begin{equation}
m_u=0.33,\,\,\,m_c=1.84,\,\,\,C_{uc}=-0.92.
\label{eq:03}
\end{equation}
The value of $m_c$ might look too high, but it corresponds to the classical Cornell set of parameters \cite{10}.
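A few of the quoted numbers can be cross-checked directly (the script below only re-derives quantities already stated in the text):

```python
kappa = 0.52                        # Coulomb coefficient of the Cornell potential
a = 2.34                            # GeV^-1, confinement scale

# Colour contraction <lambda_i lambda_j> = -8/3 for a quark pair in a baryon,
# combined with the -3/16 in V_8, gives half of the qqbar Cornell strength.
print(round((-8.0 / 3.0) * (-3.0 / 16.0), 10))   # -> 0.5

# String tension sigma = a^{-2}, as quoted in the text.
print(round(a ** -2, 2))                         # -> 0.18

# Cornell value of the strong coupling, alpha_S = (3/4) * kappa.
print(round(0.75 * kappa, 2))                    # -> 0.39
```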
Next one has to evaluate the contribution from spin-spin splitting \cite{14}
\begin{equation}
V_{s_i s_j} = \frac{16\pi}{9}\,\alpha_S\,\frac{\bm{s}_i\bm{s}_j}{m_i m_j}\,\delta^{(3)}(\bm{r}_{ij}).
\label{eq:04}
\end{equation}
We have calculated the ground-state expectation values $\delta_{ij} = \left<\delta(\bm{r}_{ij})\right>$, where $\left<\, \ldots \,\right>$ denotes the average over the ground-state wave function. To obtain $\delta_{ij}$ we smeared over a small sphere around the origin and then averaged over a sequence of such spheres. For the \textit{(ccu)} baryon we obtain the following results for $10^3\,\delta_{ij}$ (GeV$^3$)
\begin{equation}
\delta_{cc}=45.25 \pm 2.90,\quad \delta_{cu}=11.28 \pm 1.94.
\label{eq:05}
\end{equation}
The GFMC method allows one to obtain the wave function and its arbitrary moments with an accuracy restricted only by the computational resources. In particular, there is no need to introduce the quark-diquark structure as a forced ansatz: the system will form such a configuration by itself if it corresponds to the physical picture. This turned out to be the case for the \textit{(ccu)} baryon. Indeed, the ground-state expectation values of $\left< r_{ij}^{2} \right>^{1/2}$ in GeV$^{-1}$ are
\begin{equation}
\left< r_{cc}^{2} \right>^{1/2} = 2.322\pm 0.024,\quad
\left< r_{cu}^{2} \right>^{1/2} = 3.407 \pm 0.035.
\label{eq:06}
\end{equation}
If we identify $\left< r_{cc}^{2} \right>^{1/2} \simeq 0.46$ fm with the size of a diquark, then it is more compact than the diquark with $r_d=0.6$ fm introduced ad hoc in \cite{15}. {It is instructive to look at the interquark distances (\ref{eq:06}) from the angle of the $\Xi^{++}_{cc} - \Xi^{+}_{cc}$ isospin splitting. This is a long-standing puzzle. More than a decade ago the SELEX Collaboration reported \cite{16} the observation of the $\Xi^{+}_{cc}$ \textit{ccd} baryon with a mass of $3519$ MeV. However, this result was not confirmed by other experiments (see \cite{01} for references). The isospin splitting of about $100$ MeV between the \textit{ccu} and \textit{ccd} states, and its sign, are hardly possible to explain. The $d$-quark is heavier than the $u$-quark, and to overcome the ``wrong'' sign of a splitting of about $100$ MeV the electromagnetic mass difference should be very large. This in turn requires the $\Xi^{++}_{cc}$ baryon to be very compact \cite{17}. To obtain a $9$ MeV splitting the radius should satisfy $\sqrt{\left< r^2 \right>} < 0.26$ fm \cite{17}. From (\ref{eq:06}) we see that the diquark size is $0.46$ fm. If we identify the baryon radius with the three-quark hyperradius $\rho = \sqrt{\bm{\eta}^2 + \bm{\xi}^2}$, with $\bm{\eta}$ and $\bm{\xi}$ being the Jacobi coordinates \cite{02}, the result is $\rho=(0.53-0.57)$ fm, depending on the value of the Delves angle $\tan \varphi = {\xi}/{\eta}$. Therefore our result on the $\Xi^{++}_{cc}$ wave function strongly contradicts the abnormal value and the sign of the conjectural isospin splitting.}
{
Our result for the center-of-gravity (spin-averaged) mass of the $\Xi^{++}_{cc}$ baryon is \begin{equation} m\left[ \Xi^{++}_{cc} \right] = 3632.8 \pm 2.4 \text{ MeV}.\label{eq:07} \end{equation}
The error characterizes the accuracy of the GFMC calculations with the Cornell potential parameters specified above. We did not vary these parameters since they were fitted to a great body of observables. Next we take into account the hyperfine interaction (\ref{eq:04}) which induces the splitting between the lowest state with $S=1/2$ and its $S=3/2$ partner.
Taking into account that statistics requires the \textit{cc} pair to be in a spin-1 state, we can write the following expression for the hyperfine energy shift
\begin{equation}
\Delta E_{hf} = \frac{4\pi}{9}\,\alpha_S\,\left\{\frac{\delta_{cc}}{m_{c}^2} + \frac{2\delta_{cu}}{m_{c}m_{u}}\left[ S(S+1) - \frac{11}{4}\right]\right\}.
\label{eq:08}
\end{equation}
We used the Cornell value of $\alpha_S=\frac34\,\varkappa=0.39.$ With this value of $\alpha_S$ baryon magnetic moments were successfully described \cite{18}. Equation (\ref{eq:08}) yields
\begin{equation} \Delta E_{hf}\left(S=1/2\right)=-32.2\text{ MeV}, \qquad \Delta E_{hf}\left(S=3/2\right)=26.9\text{ MeV}. \label{eq:09} \end{equation} and \begin{equation}
m\left[ \Xi^{1/2\,++}_{cc} \right] = 3601 \text{ MeV},\qquad m\left[ \Xi^{3/2\,++}_{cc} \right] = 3660 \text{ MeV}.\label{eq:10} \end{equation}
In (\ref{eq:10}) the uncertainty of the GFMC calculations, which is about $(2-3)$ MeV, is not quoted since, as repeatedly stated above, the model-dependent theoretical uncertainty may be much larger, as one can see from the theoretical predictions listed in the reference list of \cite{01}.
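Equation (\ref{eq:08}) with the inputs quoted above can be evaluated directly; the small residual differences with respect to (\ref{eq:09}) reflect rounding of the quoted $\delta_{ij}$ at the MeV level.

```python
import math

alpha_s = 0.39                    # Cornell value, (3/4) * kappa
m_c, m_u = 1.84, 0.33             # quark masses in GeV
d_cc, d_cu = 45.25e-3, 11.28e-3   # contact densities in GeV^3, from eq. (5)

def delta_e(S):
    """Hyperfine shift of eq. (8), in GeV, for total spin S (cc pair in spin 1)."""
    bracket = S * (S + 1) - 11.0 / 4.0
    return (4.0 * math.pi / 9.0) * alpha_s * (
        d_cc / m_c ** 2 + 2.0 * d_cu / (m_c * m_u) * bracket)

for S in (0.5, 1.5):
    print(S, round(1e3 * delta_e(S), 1), "MeV")
```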
Another doubly charmed baryon which may be observed soon is $\Omega_{cc}$, with quark content {(ccs)}. Our result for its c.o.g. mass is \begin{equation} m\left[ \Omega_{cc} \right] = 3760.7 \pm 2.4 \text{ MeV}.\label{eq:11} \end{equation}
}
\noindent\makebox[\linewidth]{\resizebox{0.4\linewidth}{1.2pt}{$\bullet$}}\bigskip
The author is grateful to Yu.S.~Kalashnikova for enlightening discussions, and to M.~Karliner, V.~Novikov, J.-M.~Richard, and M.I.~Vysotsky for questions and remarks. The interest in this work from V.Yu.~Egorychev is gratefully acknowledged. The work was supported by the Russian Science Foundation grant, project \#16-12-10414.
\section{Introduction}
In the context of complex networks, the Laplacian formalism can be used to find many useful properties of the underlying graph
\cite{MERRIS1994143, Chung97, benzi, biggs93, MR1324340, Mitrovic2009}.
In particular, the idea of spectral clustering is to extract important information on the network structure from the
matrices associated with the network, by considering one or a few of the leading eigenvectors \cite{Boccaletti2006175}. \\
According to the Fiedler theory, a bipartition of a graph can be obtained from the second eigenvector of both the
Laplacian matrix \cite{fiedler73, fiedler75, donath1973lower} and the normalized Laplacian matrix \cite{Chung97}.
More precisely, one can obtain a good ratio cut of the graph from any vector orthogonal to the all-ones vector with a small Rayleigh quotient \cite{conf/focs/Mihail89}.\\
In general, a different number of clusters can be obtained by means of the following strategies:
\begin{description}
\item[a)] by a Recursive Spectral Bisection (RSB) \cite{journals/concurrency/BarnardS94, I91partitioningof, CPE:CPE4330070103}: after using the Fiedler
eigenvector to split the graph into two subgraphs, one can find the Fiedler eigenvector in each of these subgraphs, and continue recursively until
some a-priori criterion is satisfied;
\item[b)] by using the first $k$ eigenvectors related to the smallest eigenvalues, to induce further partitions through clustering algorithms applied
to the corresponding invariant subspace \cite{journals/dam/AlpertKY99, journals/tcad/ChanSZ94}.
\end{description}
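Both strategies rest on the same primitive: the sign pattern of a low-lying Laplacian eigenvector splits the graph. A minimal numpy sketch (the toy graph below is chosen by us purely for illustration) on two triangles joined by a single bridge edge:

```python
import numpy as np

# Toy graph: two triangles {0,1,2} and {3,4,5} joined by the bridge edge (2,3).
edges = [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5), (2, 3)]
A = np.zeros((6, 6))
for i, j in edges:
    A[i, j] = A[j, i] = 1.0

L = np.diag(A.sum(axis=1)) - A            # combinatorial Laplacian
eigvals, eigvecs = np.linalg.eigh(L)      # eigenvalues in ascending order
fiedler = eigvecs[:, 1]                   # eigenvector of the second eigenvalue

# The sign pattern of the Fiedler vector recovers the two triangles.
part = [sorted(int(v) for v in np.where(fiedler > 0)[0]),
        sorted(int(v) for v in np.where(fiedler <= 0)[0])]
print(sorted(part))                       # -> [[0, 1, 2], [3, 4, 5]]
```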
We consider the second approach, recalling that the optimal number $k$ of clusters is often indicated by a large gap between the $k$-th and the $(k+1)$-th eigenvalues
for both the Laplacian and normalized Laplacian matrices \cite{Lee:2012:MSP:2213977.2214078}.\\
Within this framework, we are interested in considering the algebraic multiplicity of the Laplacian eigenvalues, since the corresponding eigenvectors
can be considered equivalent in a graph partition procedure. In the presence of multiple eigenvalues, we investigate the possibility of reducing the dimensionality
of the original graph (i.e. of removing some of its nodes) while keeping its spectral properties fixed
\cite{onred11, DBLP:conf/cdc/BeckLLW09, DBLP:conf/cdc/DengMM09,sonin1999state, Sadiq:2000:APM:344358.344369}.
After some preliminary remarks (section \ref{sec:2}), in section \ref{sec:3} we define two classes of graphs by giving conditions on the graph structure
which imply the presence of multiple eigenvalues. Then we propose a reduction of the number of nodes, such that it is possible to get an identical spectrum
for the Laplacian matrices of the original and the reduced graphs (up to the eigenvalue multiplicity) with respect to a suitable diagonal
\textit{mass matrix}, which changes the link weights and plays the role of a metric matrix.
Furthermore, we obtain a connection between the eigenvectors of the primary and the reduced graphs.
Thanks to these results it is possible to perform a partition of the primary and the reduced graphs using the same procedure.
Finally, in section \ref{sec:4} we draw some conclusions and give an outlook on future developments.\\
\section{Premises}\label{sec:2}
We consider an undirected weighted connected graph $\mathcal G:=(\mathcal V, \mathcal E, w)$, where $\mathcal V$ is the set of $n$ vertices, $\mathcal E$
the set of edges, and $w:\mathcal E\rightarrow \mathbb R^{+}$ the weight function.
Let $A$ be the weighted adjacency matrix, which is symmetric since the graph is undirected ($A\in Sym_n(\mathbb R^+)$),
$$A_{ij} = \begin{cases} w(i,j), & \mbox{if $i$ is connected to $j$ } (i\sim j) \\
0 & \mbox{otherwise } \end{cases}$$
where $i,j\in\mathcal V$;
the Laplacian matrix $L\in Sym_n (\mathbb R)$ and the normalized Laplacian matrix $\hat L\in Sym_n (\mathbb R)$ are defined respectively as
$$L_{ij} = \begin{cases} -w(i,j), & \mbox{if } i\sim j \\
\sum_{k=1}^n w(i,k), & \mbox{if } i= j\\
0 & \mbox{otherwise } \end{cases}$$
$$\hat L_{ij} = \begin{cases} -\displaystyle \frac{w(i,j)}{\sqrt{\sum_{k=1}^n w(i,k)\sum_{k=1}^n w(k,j)}}, & \mbox{if } i\sim j \\
1, & \mbox{if } i= j\\
0 & \mbox{otherwise }. \end{cases}$$
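Both matrices can be assembled directly from the weighted adjacency matrix; the sketch below (with an arbitrary illustrative weight matrix) also checks the standard facts that $L$ has zero row sums and that both matrices have smallest eigenvalue $0$.

```python
import numpy as np

# Illustrative symmetric weighted adjacency matrix of a connected 4-vertex graph.
A = np.array([[0.0, 2.0, 1.0, 0.0],
              [2.0, 0.0, 0.0, 3.0],
              [1.0, 0.0, 0.0, 1.0],
              [0.0, 3.0, 1.0, 0.0]])

d = A.sum(axis=1)                          # strengths sum_k w(i,k)
L = np.diag(d) - A                         # Laplacian matrix
L_hat = np.eye(len(A)) - A / np.sqrt(np.outer(d, d))   # normalized Laplacian

assert np.allclose(L.sum(axis=1), 0.0)             # zero row sums
assert np.isclose(np.linalg.eigvalsh(L)[0], 0.0)   # smallest eigenvalue is 0
assert np.isclose(np.linalg.eigvalsh(L_hat)[0], 0.0)
```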
Whenever we refer to the $k$-th eigenvalue of a Laplacian matrix, we will mean the $k$-th nonzero eigenvalue in increasing order.
For the classical results on Laplacian matrices theory, one may refer to \cite{Chung97,doi:10.1080/03081088508817681, MERRIS1994143}.
\section{Eigenvalues multiplicity theorems}\label{sec:3}
The first result is an extension of Theorem {(4)} in \cite{grone94} to weighted graphs: by defining the weighted $(m,k)$-stars in a graph,
we are able to give a condition on both the structure and the edge weights of a graph in order to get the eigenvalue multiplicity.
{As we will see later, an $(m,k)$-star is nothing else than the union of a $k$-cluster of order $m$ and its $k$ neighbours.}\\
The second result, which is the main result of this work, is a further extension of the previous theorem, clarifying the relation between
the eigenvalue multiplicity and the structure of the weights of a graph.\\
The third result concerns the reduction of graphs with one or more $(m,k)$-stars under some conditions, and possible applications
to spectral graph partitioning.
\subsection{$(m,k)$-star and $l$-dependent: eigenvalues multiplicity}
We recall that a vertex of a graph is said to be \textit{pendant} if it has exactly one neighbour, and \textit{quasi pendant}
if it is adjacent to a pendant vertex. It is possible to prove that the multiplicity $m_{L}(1)$ of the eigenvalue $\lambda=1$ of the Laplacian of an
unweighted graph is greater than or equal to the number of pendant vertices minus the number of quasi pendant vertices of the graph \cite{FARIA1985255}.\\
To extend these definitions to vertices with $k$ neighbours, we define a $(m,k)$-star:
\begin{figure}[!!h]
\begin{subfigure}{}
\includegraphics[width=6cm]{S63.png}\end{subfigure}
\begin{subfigure}{}
\includegraphics[width=6cm]{S36_2.png}\end{subfigure}
\caption{On the left, an $S_{6,3}$ graph; on the right, an $S_{3,6}$ graph.}
\end{figure}
\begin{defn}[$(m,k)$-star: $S_{m,k}$ ]
A $(m,k)$-star is a graph $\mathcal G=(\mathcal V, \mathcal E,w)$ whose vertex set $\mathcal V$ has a bipartition $(\mathcal V_1,\mathcal V_2)$
of cardinalities $m$ and $k$ respectively, such that the vertices in $\mathcal V_1$ have no connections among them, and each of these vertices is connected
with all the vertices in $\mathcal V_2$: i.e
$$\forall i\in \mathcal V_1,\forall j\in \mathcal V_2,\quad (i,j)\in \mathcal E$$
$$\forall i,j\in \mathcal V_1, \quad (i,j)\notin \mathcal E$$
We denote a $(m,k)$-star graph with partitions of cardinality $|\mathcal V_1|=m$ and $|\mathcal V_2|=k$ by $S_{m,k}.$
\end{defn}
We define a \textit{$(m,k)$-star of a graph} $\mathcal G=(\mathcal V, \mathcal E,w)$ as the $(m,k)$-star with partitions $\mathcal V_1$, $\mathcal V_2\subset \mathcal V$,
both univocally determined, such that the vertices in $\mathcal V_1$ have no connection with vertices
in $\mathcal V\setminus \mathcal V_2$ in $\mathcal G$, i.e.\\
$$\forall i\in \mathcal V_1,\forall j\in \mathcal V_2,\quad (i,j)\in \mathcal E$$
$$\forall i \in \mathcal V_1, \forall j\in \mathcal V\setminus\mathcal V_2 \quad (i,j)\notin \mathcal E$$
{ \begin{obs} In \cite{grone94} a $k$-cluster of $\mathcal G$ is defined as an independent set of $m$ vertices of $\mathcal G$, $m>1$,
each of which has the same set of neighbours.
The order of a $k$-cluster is the number of vertices in the $k$-cluster.
Therefore, the set $\mathcal V_1$ of the $(m,k)$-star is a $k$-cluster of order $m$ and the set $\mathcal V_2$ is the set of the $k$ neighbour vertices.
An $(m,k)-$star of a graph $\mathcal G$ is the union of a $k$-cluster (i.e. $\mathcal V_1$) and its neighbour vertices (i.e. $\mathcal V_2$).
\end{obs}}
By defining the degree and the weight of a $(m,k)$-star we simplify the statement of the theorems on eigenvalue multiplicity.
\begin{defn}[Degree of a $(m,k)$-star: $deg(S_{m,k})$]
The \textit{degree} of a $(m,k)$-star is $deg(S_{m,k}):=m-1$, and the degree of a set $\mathcal S$ of $(m,k)$-stars, as $m$ and $k$ vary in $\mathbb N$,
such that $|\mathcal S|=l,$ is defined as the sum of the degrees of its $(m,k)$-stars, i.e.
$$deg(\mathcal S):=\sum_{i=1}^l deg(S_{m_i,k_i}).$$
\end{defn}
\begin{defn}[Weight of a $(m,k)$-star: $w(S_{m,k})$]
The \textit{weight} of a $(m,k)$-star of vertices set $\mathcal V_1\cup\mathcal V_2$ is defined as the strength of the vertices in $\mathcal V_1$,
provided that the following condition holds:\\
let $\{i_1,...,i_m\}=\mathcal V_1$; then $w(i_1,j)=...=w(i_m,j), \ \forall j\in\mathcal V_2.$ More precisely, the
weight of a $(m,k)$-star is $$w(S_{m,k}):=\sum_{j\in \mathcal V_2}w(i,j)\mbox{ for any }i\in\mathcal V_1.$$
\end{defn}
We are now ready to state the first theorem, which is an extension to weighted graphs of the theorem in \cite{grone94}.
Given a graph $\mathcal G=(\mathcal V,\mathcal E,w)$ with associated Laplacian matrix $L$, and denoting by $\sigma(L)$ the set of the
eigenvalues of $L$ and by $m_L(\lambda)$ the algebraic multiplicity of the eigenvalue $\lambda$ in $L$, the following theorem holds.\\
\begin{thm}\label{th:one}
Let
\begin{itemize}
\item $s$ be the number of all the $(m,k)$-stars $S_{m,k}$ of $\mathcal G$, as $m$ and $k$ vary in $\mathbb N$ with $m+k\leq n$;
\item $r$ be the number of distinct weights $w_1,...,w_r$ of such stars, i.e. $w_i\neq w_j$ for each $i\neq j,$ where $ i,j\in\{1,...,r\};$\\
\end{itemize}
then for any $i\in\{1,...,r\},$
$$\exists \lambda\in{\sigma(L)} \mbox{ such that } \lambda=w_i \mbox{ and } m_{L}(\lambda)\geq deg(\mathcal S_{w_i})$$
where $\mathcal S_{w_i}:=\{S_{m,k}\in \mathcal G | w(S_{m,k})=w_i\}$.
\end{thm}
{
Before proving Theorem \ref{th:one}, we introduce some useful definitions.
\begin{defn}[$k$-pendant vertex]
A vertex of a graph is said to be \textit{$k$-pendant} if its neighborhood contains exactly $k$ vertices.
\end{defn}
\begin{defn}[$k$-quasi pendant vertex]
A vertex of a graph is said to be \textit{$k$-quasi pendant} if it is adjacent to a $k$-pendant vertex.
\end{defn}
We remark that in the definition of an $(m,k)-$star, the vertices in $\mathcal V_1$ are $k-$pendant vertices, and
vertices in $\mathcal V_2$ are $k-$quasi pendant vertices.\\
\begin{proof}[Proof of Theorem \ref{th:one}] \\
We consider connected graphs; indeed, if a graph is not connected the same result holds, since the $(m,k)$-star degree
of the graph is the sum of the star degrees of the connected components, and
the characteristic polynomial of $L$ is the product of the characteristic polynomials of the connected components. \\
Consider a $(m,k)$-star of the graph $\mathcal G$.\\
Under a suitable permutation of the rows and columns of the weighted adjacency matrix $A$, we can label the $k$-pendant vertices with the indices $1,...,m$,
and the $k$-quasi pendant vertices with the indices $m+1,...,m+k$.\\
We call $v_1,...,v_m$ the rows corresponding to the $k$-pendant vertices; then the adjacency matrix has the following form
\[ A = \left( \begin{array}{ccc|ccccccc}
0 & ... & 0 & w(1,m+1) & w(1,m+2) &...& w(1,m+k)& 0&...&0\\
\vdots & ... & \vdots & \vdots & \vdots &...& \vdots & 0&...&0\\
0 & ... & 0 & w(m,m+1) & w(m,m+2) &...& w(m,m+k)& 0&...&0\\
\hline
w(1,m+1)& ... & w(m,m+1) & & & & & & &\\
\vdots & ... & \vdots & & & & & & &\\
w(1,m+k)& ... & w(m,m+k) & & & & & & & \\
0& ... & 0 & & & & & & & \\
\vdots& ... & \vdots & & & A_{22} & & & \\
0& ... & 0 & & & & & & & \\
\end{array} \right)\]
where the block $A_{22}$ is any $(n-m)\times(n-m)$ symmetric matrix.\\
The $m$ rows (and $m$ columns) $v_1,...,v_m$ are equal, $v_1=...=v_m$; hence the $m-1$ linearly independent vectors $e_1-e_2,\ldots,e_1-e_m$, with $e_i$ the $i$-th canonical basis vector, belong to $ker(A)$.\\
Hence
$$\exists \mu_1,...,\mu_{m-1}\in\sigma(A)\quad \mbox{ such that }\quad \mu_1=...=\mu_{m-1}=0.$$
The Laplacian matrix $L$ has at least $m$ diagonal entries with value $\sum_{j=1}^k w(1,m+j)=w(S_{m,k}):=w_1$.\\
The first $m$ rows of the matrix $(L-w_1 I)$ are again equal, hence $e_1-e_2,\ldots,e_1-e_m\in ker(L-w_1 I)$ and
$$\exists \mu_1,...,\mu_{m-1}\in\sigma(L-w_1 I)\quad \mbox{ such that }\quad \mu_1=...=\mu_{m-1}=0.$$
Let $\mu_i$ be one of these eigenvalues, then
$$0=det((L-w_1 I)-\mu_i I)=det(L-(w_1 +\mu_i )I)$$
so that $\lambda:=w_1\in\sigma (L)$ with multiplicity greater or equal to $deg(S_{m,k})$.\\
Let us now consider a number $s$ of $(m,k)$-stars in $\mathcal G$, namely $S_{m_1,k_1},...,S_{m_s,k_s}$.
Denoting by $w_1,...,w_r$ the distinct weights of such $(m,k)$-stars, with $r\leq s$,
we prove that for any $i\in\{1,...,r\},$
$$\exists \lambda\in{\sigma(L)} \mbox{ such that } \lambda=w_i \mbox{ and the multiplicity of } \lambda\geq deg(\mathcal S_{w_i})=
\sum_{S_{m_j,k_j}\in\mathcal S_{w_i}} deg(S_{m_j,k_j}),$$
where $\mathcal S_{w_i}:=\{S_{m,k}\in \mathcal G | w(S_{m,k})=w_i\}$.
Let $i\in\{1,...,r\}$ and let $R_i\leq s$ be the number of $(m,k)$-stars in $\mathcal S_{w_i}$, with $\sum_{i=1}^r R_i=s$;
we assume that the first $R_1$ indexes refer to the $(m,k)$-stars in $\mathcal S_{w_1}$, the indexes $R_1+1,...,R_1+R_2$ refer to the $(m,k)$-stars in
$\mathcal S_{w_2}$, and so on.
We focus on the $R_i$ $(m,k)$-stars in $\mathcal S_{w_i}$.
The rows in $A$ corresponding to the $k_j$-pendant vertices with $j\in\{\sum_{q=1}^{i-1} R_q+1,...,\sum_{q=1}^{i} R_q\}$
are $m_j$ vectors $(v^{(j)}_{j_1},...,v^{(j)}_{j_{m_j}})$, linearly dependent and such that $v^{(j)}_{j_1}=...=v^{(j)}_{j_{m_j}}$,
whose indexes are
$$j_1=\sum_{p=1}^{j-1} m_{p}+1,...,{j_{m_j}}=\sum_{p=1}^{j-1} m_{p}+m_j$$
when $j>1$, or
$$j_1=1,...,{j_{m_j}}=m_j$$
when $j=1$.\\
Then we get
$$v^{(j)}_{j_1},...,v^{(j)}_{j_{{m_j}-1}}\in ker(A),\quad \forall j\in\{\sum_{q=1}^{i-1} R_q+1,...,\sum_{q=1}^{i} R_q\}.$$
and
$$\exists \mu_{j_1},...,\mu_{j_{{m_j}-1}}\in\sigma(A)\quad \mbox{ such that }\quad \mu_{j_1}=...=\mu_{j_{{m_j}-1}}=0.$$
This is true for each $j\in\{\sum_{q=1}^{i-1} R_q+1,...,\sum_{q=1}^{i} R_q\}$, so that
$$\exists \mu_1,...,\mu_{deg(\mathcal S_{w_i})} \in\sigma(A)\quad \mbox{ such that }\quad \mu_1=...=\mu_{deg(\mathcal S_{w_i})}=0.$$
and the Laplacian matrix $L$ has at least $deg(\mathcal S_{w_i})+R_i$ diagonal entries with value $w_i$.\\
In the matrix $(L-w_i I)$, for each $j$ the vectors $v^{(j)}_{j_q}, \ q\in\{1,...,m_j\},$ are linearly dependent; as a consequence
$v^{(j)}_{j_1},...,v^{(j)}_{j_{m_j-1}}\in ker(L-w_i I)$ and
$$\exists \mu_1,...,\mu_{deg(\mathcal S_{w_i})} \in\sigma(L-w_i I)\quad \mbox{ such that }\quad \mu_1=...=\mu_{deg(\mathcal S_{w_i})} =0.$$
Finally, let $\mu_p$ be one of these eigenvalues, then
$$0=det((L-w_i I)-\mu_p I)=det(L-(w_i +\mu_p )I)$$
and $\lambda:=w_i\in\sigma (L)$ with multiplicity greater or equal to $deg(\mathcal S_{w_i})$.\\
\end{proof}
}
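Theorem \ref{th:one} can also be checked numerically on a small example (the weights below are illustrative): a weighted $S_{3,2}$-star with $w(i,j_1)=1.5$ and $w(i,j_2)=2.5$ for every $i\in\mathcal V_1$ has weight $w(S_{3,2})=4$ and degree $deg(S_{3,2})=2$.

```python
import numpy as np

# Weighted S_{3,2}: V1 = {0,1,2} independent, V2 = {3,4};
# w(i,3) = 1.5 and w(i,4) = 2.5 for every i in V1, so w(S_{3,2}) = 4.
A = np.zeros((5, 5))
for i in (0, 1, 2):
    A[i, 3] = A[3, i] = 1.5
    A[i, 4] = A[4, i] = 2.5
A[3, 4] = A[4, 3] = 1.0        # an extra edge inside V2 does not affect the claim

L = np.diag(A.sum(axis=1)) - A
eigs = np.linalg.eigvalsh(L)
mult = int(np.sum(np.isclose(eigs, 4.0)))
print(mult)                     # multiplicity of lambda = w(S) = 4, at least deg = 2
```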
Some corollaries on the signless and normalized Laplacian matrices can be obtained with similar proofs.
Let $B$ and $\hat L$ be respectively the signless and normalized Laplacian matrices of $\mathcal G=(\mathcal V,\mathcal E,w)$,
and let $\sigma(B)$, $\sigma(\hat L)$ denote the sets of eigenvalues of $B$ and $\hat L$, with algebraic multiplicities
$m_B(\lambda)$, $m_{\hat L}(\lambda)$ for the eigenvalue $\lambda$ in $B$ and $\hat L$ respectively.\\
\begin{cor}
If
\begin{itemize}
\item $s$ is the number of all the $S_{m,k}$ as $m$ and $k$ vary in $\mathbb N$ and $m+k\leq n,$ of $\mathcal G$,
\item $r$ is the number of $S_{m,k}$ with different weights, $w_1,...,w_r$,\\
\end{itemize}
then for any $i\in\{1,...,r\},$
$$\exists \lambda\in{\sigma(B)} \mbox{ such that } \lambda=w_i \mbox{ and } m_B(\lambda)\geq deg(\mathcal S_{w_i})$$
where $\mathcal S_{w_i}:=\{S_{m,k}\in \mathcal G | w(S_{m,k})=w_i\}$.
\end{cor}
\begin{cor}
If
\begin{itemize}
\item $s$ is the number of all the $S_{m,k}$ as $m$ and $k$ vary in $\mathbb N$ and $m+k\leq n,$ of $\mathcal G$,
\item $r$ is the number of $S_{m,k}$ with different weights, $w_1,...,w_r$,\\
\end{itemize}
then for any $i\in\{1,...,r\},$
$$\exists \lambda\in{\sigma(\hat L)} \mbox{ such that } \lambda=1 \mbox{ and } m_{\hat L}(\lambda)\geq \sum_{i=1}^r deg(\mathcal S_{w_i})$$
where $\mathcal S_{w_i}:=\{S_{m,k}\in \mathcal G | w(S_{m,k})=w_i\}$.
\end{cor}
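The second corollary can be illustrated on a small unweighted example containing a single $S_{3,2}$-star (the graph is chosen by us for illustration):

```python
import numpy as np

# Unweighted graph containing an S_{3,2}-star (V1 = {0,1,2}, V2 = {3,4})
# plus one edge inside V2; deg(S_{3,2}) = 2.
A = np.zeros((5, 5))
for i in (0, 1, 2):
    for j in (3, 4):
        A[i, j] = A[j, i] = 1.0
A[3, 4] = A[4, 3] = 1.0

d = A.sum(axis=1)
L_hat = np.eye(5) - A / np.sqrt(np.outer(d, d))    # normalized Laplacian
m1 = int(np.sum(np.isclose(np.linalg.eigvalsh(L_hat), 1.0)))
print(m1)                       # multiplicity of lambda = 1, at least deg = 2
```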
A wider class of graphs to which the previous results can be extended is the class of $l$-dependent graphs, defined as follows:
\begin{defn}[$l$-dependent graph: $D^l$]
An $l$-dependent graph is a graph $(\mathcal V,\mathcal E, w)$ whose vertices can be partitioned into four subsets:
the independent set $\mathcal V_1$, the central set $\mathcal V_2$, the independent set $\mathcal V_3$ and the set
$\mathcal V\setminus (\mathcal V_1\cup\mathcal V_2 \cup\mathcal V_3)$ such that
\begin{enumerate}
\item each vertex in $\mathcal V_1$ has at least one edge in $\mathcal V_2$ and vice versa, i.e.
$$\forall i\in \mathcal V_1,\exists j\in \mathcal V_2\ \mbox{ such that } \ (i,j)\in \mathcal E$$
$$\forall j\in \mathcal V_2,\exists i\in \mathcal V_1\ \mbox{ such that } \ (i,j)\in \mathcal E$$
\item vertices in $\mathcal V_1$ and $\mathcal V_3$ have edges only in $\mathcal V_2$, i.e.
$$\forall i\in \mathcal V_1\cup\mathcal V_3,\forall j\in \mathcal V\setminus\mathcal V_2, \quad (i,j)\notin \mathcal E$$
\item vertices in $\mathcal V_3$ have only edges that are a linear combination of all the edges of some vertices in $\mathcal V_1$, i.e.
$$\forall i\in\mathcal V_3, \exists j_1,...,j_{l_i}\in \mathcal V_1 \mbox{ such that }$$
$$\forall j\in\{j_1,...,j_{l_i}\}, \ \forall z \ \mbox{ such that } \ (j,z)\in\mathcal E, z\in\mathcal V_2\Rightarrow $$
$$\exists a(j)\in \mathbb R^{> 0} \mbox{ and } (i,z)\in\mathcal E, \ \mbox{ such that }\ w(i,z)=a(j)w(j,z).$$
\item $\mathcal V_1, \mathcal V_2, \mathcal V_3\subseteq \mathcal V$ are chosen so as to satisfy the following
condition $$l:=\max_{\mathcal V_1,\mathcal V_2, \mathcal V_3\subseteq \mathcal V}|\mathcal V_3|.$$
\end{enumerate}
An $l$-dependent graph with $|\mathcal V_3|=l$ is denoted by $D^l.$
\end{defn}
\begin{figure}[!!h]
\centering
\includegraphics[width=6cm]{dependent_6.png}\caption{A $D^l(\tilde w)$ graph, where the subsets $\mathcal V_1$
(for example the green vertices), $\mathcal V_2$ (the yellow vertices), $\mathcal V_3$ (for example the red vertex) and
$\mathcal V\setminus (\mathcal V_1\cup\mathcal V_2 \cup\mathcal V_3)$ have cardinalities $\bar m=\underline m=2$,
$\bar k=\underline k=3$, $l=1$ and $|\mathcal V\setminus (\mathcal V_1\cup\mathcal V_2 \cup\mathcal V_3)|=0$, respectively.
The Laplacian matrix has the eigenvalue $\lambda=\tilde w=6$ with multiplicity 1.}
\end{figure}
\begin{obs}
First of all, we remark that neither the uniqueness of the partition nor the cardinalities of the sets $\mathcal V_1$ and $\mathcal V_2$ are guaranteed.
If we require uniqueness of the cardinalities, further conditions are necessary: for instance
\begin{enumerate}
\item[5.*] maximum cardinality of the sets $\mathcal V_1,\mathcal V_2$
$$\bar{m}:=\max_{\mathcal V_1,\mathcal V_2 \subseteq \mathcal V\setminus\mathcal V_3}|\mathcal V_1|$$
$$\bar k:=\max_{\mathcal V_1,\mathcal V_2 \subseteq \mathcal V\setminus\mathcal V_3}|\mathcal V_2|$$
\item[5.**] minimum cardinality of the sets $\mathcal V_1,\mathcal V_2$
$$\underline m:=\min_{\mathcal V_1,\mathcal V_2 \subseteq \mathcal V\setminus\mathcal V_3}|\mathcal V_1|$$
$$\underline k:=\min_{\mathcal V_1,\mathcal V_2 \subseteq \mathcal V\setminus\mathcal V_3}|\mathcal V_2|.$$
\end{enumerate}
Even by requiring the maximum or minimum cardinality of both the $\mathcal V_1$ and $\mathcal V_2$ sets, the partition is not univocally determined.\\
The uniqueness of the set $\mathcal V_2$ is satisfied whenever one of the conditions 5. holds.
We notice that, according to 5.**, the set $\mathcal V_2$ is defined as the set of all the
vertices $i\in\mathcal V$ such that $(i,j)\in\mathcal E,\ j\in\mathcal V_3.$
\end{obs}
\begin{figure}[!!h
\centering
\includegraphics[width=6cm]{dependent_10.png}
\includegraphics[width=6cm]{dependent_10b.png}
\caption{$D^l(\tilde w)$ graph, where the subsets $\mathcal V_1$ (green vertices) and $\mathcal V_3$ (red vertices) can be chosen differently.
The cardinalities of the sets are respectively $\bar m=\underline m=3$, $\bar k=\underline k=4$, $l=3$ and
$|\mathcal V\setminus (\mathcal V_1\cup\mathcal V_2 \cup\mathcal V_3)|=0$.
In the Laplacian matrix there is the eigenvalue $\lambda=\tilde w=4$ with multiplicity 3.}
\end{figure}
\begin{obs}
Whenever in the condition [3.] the set $\{j_1,...,j_{l_i}\}$ coincides with the set $\mathcal V_1$,
then the $l$-dependent graph is also a graph with an (m,k)-star, with m=l+1.
\end{obs}
We define an $l$-dependent graph of weight $\tilde w$, $D^l(\tilde w)$ as the $l$-dependent graph such that each vertex $i\in\mathcal V_1\cup\mathcal V_3$
has strength $\tilde w$.\\
Now we can extend the Theorem \ref{th:one} on graphs with $(m,k)$-star to $l$-dependent graphs, that is one of the main results of this work.\\
Let $\mathcal G=(\mathcal V,\mathcal E,w)$ be a graph, and $L$ the Laplacian matrix of $\mathcal G$.
\begin{thm}\label{th:main}
If $\mathcal G$ be a $D^l(\tilde w)$ graph, with $\tilde w\in\mathbb(R^{>0})$ and $l\in \mathbb N$,\\
then
$$\exists \lambda\in{\sigma(L)} \mbox{ such that } \lambda=\tilde w \mbox{ and } m_L(\lambda)\geq l.$$
\end{thm}
\begin{proof}
The proof is similar to Theorem \ref{th:one}. By definition of $D^l(\tilde w)$, each vertex $i\in\mathcal V_3$ has a corresponding
row in the adjacency matrix $A$, that is a linear combination of the rows of some vertices $j_1,...,j_{l_i}\in\mathcal V_1$.
Therefore the adjacency matrix $A$ has an eigenvalue $\mu=0$ of multiplicity at least $l$.
Since each vertex $i\in\mathcal V_1\cup\mathcal V_3$ has strength $\tilde w$ we can conclude the proof.
\end{proof}
\begin{obs}
The previous result does not require the conditions 5.
\end{obs}
We observe that a $D^{l}(\tilde w)$ graph, with $l\in\mathbb{N},\ \tilde w\in\mathbb{R}^+$, could be also a $D^{l_*}(\tilde w_*)$ graph,
for any $\ l_*\in\mathbb{N},\ \tilde w_*\in\mathbb{R}^+$.
As for the Theorem \ref{th:one}, some corollaries on the signless and normalized Laplacian matrices can be obtained by means of similar proofs.
Let $\mathcal G=(\mathcal V,\mathcal E,w)$ be a graph, and $B$ and $\hat L$ the signless and normalized Laplacian matrices respectively.
\begin{cor}\label{cor:main}
If
$\tilde w_1,...,\tilde w_m\in\mathbb(R^{>0})$ and $l_1,...,l_m\in \mathbb N$ such that $\mathcal G$ is a $D^l_i(\tilde w_i)$ graph, $ i\in\{1,...,m\}$;\\
then
$$\exists \lambda \in{\sigma(\hat L)} \mbox{ such that } \lambda=1 \mbox{ and } m_{\hat L}(\lambda)\geq \sum_{i=1}^m l_i.$$
\end{cor}
\subsection{(m,k)-star graph reduction}
According to the previous results, we have defined a class of graphs whose Laplacian matrices have an eigenvalues spectrum with known multiplicities and values.
Now, our aim is to simplify the study of such graphs by collapsing these vertices into a single vertex replacing the original graph with a reduced graph.
\\
At this purpose, the following definitions are useful:
\begin{defn}[$(m,k)$-star $q$-reduced: $S^q_{m,k}$]
A $q$-reduced $(m,k)$-star is a $(m,k)$-star of vertex sets $\{\mathcal V_1,\mathcal V_2\}$, such that $q$ of its vertices in $\mathcal V_1$ are removed.
Hence the order and degree of the $S^q_{m,k}$ are $m+k-q$ and $m-q-1$ respectively.\\
\end{defn}
\begin{defn}[$q$-reduced graph: $\mathcal G^q$]
A $q$-reduced graph $\mathcal G^q$ is obtained from a graph $\mathcal G$ with some $(m,k)$-stars removing $q$ of the vertices in the set $\mathcal V_1$
of $\mathcal G.$
\end{defn}
We derive a spectrum correspondence between graphs $\mathcal G$ and $\mathcal G^q$
\begin{defn}[Mass matrix of a $S_{m,k}^q$]
Let $\mathcal V_1$ and $\mathcal V_2$ be the vertex sets of the graph $S_{m,k}^q,\ q< m$.\\
Let $B$ be the adjacency matrix of $S_{m,k}^q$. The mass matrix of a $S_{m,k}^q$, $M$ is a diagonal matrix of order $m+k-q$ such that
\begin{equation}\label{eq:diagM}
M_{ii} = \begin{cases} \frac{m}{m-q}, & \mbox{if } i\in\mathcal V_1 \\ 1 & \mbox{otherwise } \end{cases},
\end{equation}
\end{defn}
The mass matrix $M$ can be defined in the same way also for a graph $\mathcal G^q$, with one (or more) $S_{m,k}^q$ by means
of a matrix of order $n-q$, whenever the graph $\mathcal G^q$ is composed by $n-q$ vertices.\\
Now we state the second main result of this paper.
\begin{thm}[$(m,k)$-star adjacency matrix reduction theorem]\label{th:reduction1}
Let
\begin{itemize}
\item $\mathcal G$ be a graph, of n vertices, with a $S_{m,k}, \ m+q\leq n$,
\item $\mathcal G^q$ be the reduced graph with a $S_{m,k}^q$ instead of $S_{m,k}$, of $n-q$ vertices,\\
\item $A$ be the adjacency matrix of $\mathcal G$,
\item $B$ be the adjacency matrix of $\mathcal G^q$,
\item $M$ be the diagonal mass matrix of $\mathcal G^q$,
\end{itemize}
then
\begin{enumerate}
\item $\sigma(A)=\sigma(MB),$
\item There exists a matrix $K\in\mathbb R^{n\times (n-q)}$ such that $M^{1/2}BM^{1/2}=K^TAK$ and $K^TK=I$. Therefore, if $v$ is an eigenvector of $M^{1/2}BM^{1/2}$ for an
eigenvalue $\mu$, then Kv is an eigenvector of A for the same eigenvalue $\mu$.
\end{enumerate}
\end{thm}
{ Before proving Theorem \ref{th:reduction1}, we recall the well known result for eigenvalues of
symmetric matrices, \cite{Hwang2004}.
\begin{lem}[Interlacing theorem]
Let $A\in Sym_n(\mathbb R)$ with eigenvalues $\mu_1(A)\geq...\geq \mu_n(A).$ For $m<n$, let $S\in\mathbb R^{n,m}$ be a
matrix with orthonormal columns, $K^TK=I$, and consider the $B=K^TAK$ matrix, with eigenvalues $\mu_1(B)\geq...\geq \mu_m(B).$
If
\begin{itemize}
\item the eigenvalues of $B$ interlace those of $A$, that is,
$$\mu_i(A)\geq\mu_i(B)\geq\mu_{n_A-n_B+i}(A), \quad i=1,...,n_B,$$
\item if the interlacing is tight, that is, for some $0\leq k\leq n_B,$
$$\mu_i(A)=\mu_i(B), \ i=1,...,k\ \mbox{ and } \ \mu_i(B)=\mu_{n_A-n_B+i}(A), \ i=k+1,...,n_B$$
then $KB=AK.$
\end{itemize}
\end{lem}
\begin{proof}
First we prove the existence of the $K$ matrix:\\
let $\mathcal P=\{P_1,...,P_{n_B}\}$ be a partition of the vertex set $\{1,...,n_A\}$, where $n_B=n_A-q.$
The \textit{ characteristic matrix H } is defined as the matrix where the $j$-th column is the characteristic vector of $P_j$ ($j=1,...,n_B$).\\
Let A be partitioned according to $\mathcal P$
\[
A=\left(
\begin{array}{ccc}
A_{1,1} & \dots & A_{1,n_B} \\
\vdots & & \vdots \\
A_{n_B,1} & \dots & A_{n_B,n_B}
\end{array}
\right),\]
where $A_{i,j}$ denotes the block with rows in $P_i$ and columns in $P_j$.
The matrix $B=(b_{ij})$ whose entries $b_{ij}$ are the averages of the $A_{i,j}$ rows, is called the \textit{quotient matrix} of $A$ with respect $\mathcal P$,
i.e. $b_{ij}$ denote the average number of neighbours in $P_j$ of the vertices in $P_i$.\\
The partition is equitable if for each $i,j$, any vertex in $P_i$ has exactly $b_{ij}$ neighbours in $P_j$.
In such a case, the eigenvalues of the quotient matrix $B$ belong to the spectrum of $A$ ($\sigma(B)\subset\sigma(A)$) and the spectral radius
of $B$ equals the spectral radius of $A$: for more details cfr. \cite{brouwer12}, chapter 2.\\
Then we have the relations
$$MB=H^TAH, \quad H^TH=M.$$
Considering a $q$-reduced $(m,k)-$star with adjacency matrix $B$, we weight it by a diagonal mass matrix $M$ whose diagonal entries
are one except for the $m-q$ entries of the vertices in $\mathcal V_1$,
\begin{equation}\label{eq:diagM}
M_{ii} = \begin{cases} \frac{m}{m-q}, & \mbox{if } i\in\mathcal V_1 \\ 1 & \mbox{otherwise } \end{cases},
\end{equation}
and we get
$$MB\sim M^{1/2}BM^{1/2}=K^TAK, \quad K^TK=I,$$
where $K:=HM^{1/2}.$
In addition to the th.(\ref{th:one}), the eigenvalues of $MB$ are a subset of the eigenvalues of $A$,
the adjacency matrix of the corresponding $S_{m,k}$ graph
$$
\sigma(MB)\subset\sigma(A).
$$
Whenever $q<m-1$, we get $\sigma(MB)=\sigma(A)$, up to the multiplicity of the eigenvalue $\mu=0$.\\
Finally, if $v$ is an eigenvector of $M^{1/2}BM^{1/2}$ with eigenvalue $\mu$, then $Kv$ is an eigenvector of $A$ with the same eigenvalue $\mu$.\\
Indeed form the equation
$$\tilde Bv=\mu v$$
an taking into account that the partition is equitable, we have $K\tilde B=AK,$ and
$$AKv=K\tilde Bv=\mu Kv.$$
\end{proof}}
We obtain a similar result for the Laplacian matrix.
\begin{thm}[$(m,k)$-star Laplacian matrix reduction theorem]\label{th:reduction2}
If
\begin{itemize}
\item $\mathcal G$ be a graph, of n vertices, with a $S_{m,k}, \ m+q\leq n$,
\item $\mathcal G^q$ be the reduced graph with a $S_{m,k}^q$ instead of $S_{m,k}$, of $n-q$ vertices,
\item $L(A)$ be the Laplacian matrix of $\mathcal G$,
\item $L(B)$ be the Laplacian matrix of $\mathcal G^q$. Let $M$ the diagonal mass matrix of $\mathcal G^q$,
\end{itemize}
then
\begin{enumerate}
\item $\sigma(L(A)=\sigma(L(MB))$
\item There exists a matrix $K\in\mathbb R^{n\times (n-q)}$ such that $M^{1/2}BM^{1/2}=K^TAK$ and $K^TK=I$.
Therefore, if $v$ is an eigenvector of $\tilde L(M B):=diag(MB)-M^{1/2}BM^{1/2}$ for an eigenvalue $\lambda$, then Kv is
an eigenvector of L(A) for the same eigenvalue $\lambda$.
\end{enumerate}
\end{thm}
{The proof for the Laplacian version of the Reduction Theorem \ref{th:reduction1} is similar to that for the adjacency matrix,
in fact using the same arguments as in the proof of
\ref{th:reduction1}, we can say that 1. is true and that the $K$ matrix exists. So we prove directly only the second part of point 2. of the theorem.
\begin{proof}
Let $v$ be an eigenvector of $L(\tilde B):=diag(MB)-M^{1/2}BM^{1/2}$ for an eigenvalue $\lambda$, then
$$L(\tilde B)v=\lambda v.$$
Because of $K\tilde B=AK$ and $diag(A)K=Kdiag(MB)$, we obtain
$$L(A)Kv=diag(A)Kv-AKv=Kdiag(MB)v-K\tilde B v=\lambda Kv.$$
\end{proof}}
According to the previous results, graphs with $(m,k)$-stars and graphs $q$-reduced can be partitioned in the same way, up to the removed vertices.\\
\begin{cor}\label{cor:reduction}
Under the hypothesis of theorem \ref{th:reduction2}, if $v$ is a (left or right) eigenvector of $L(MB)$ with eigenvalue $\lambda$,
then its entries have the same signs of the entries of the eigenvector $u$ of $L(A)$ with the same eigenvalue $\lambda$. \end{cor}
Indeed, the matrices $L(MB)$ and $\tilde L(MB)$ are similar, by means of the non singular matrix $M^{1/2}$.
Furthermore, since the similarity matrix $M^{1/2}$ is diagonal with all positive elements on the diagonal,
then both left and right eigenvectors of $ L(MB)$ preserve the sign of the eigenvectors of $\tilde L(MB)$. { We formally prove the Corollary.
\begin{proof}
$\tilde L(MB)$ and $L(MB)$ are similar by means of the matrix $M^{1/2}$, in fact
\begin{eqnarray}
M^{-1/2}L(MB)M^{1/2}&=&M^{-1/2}diag(MB)M^{1/2}-M^{-1/2}MBM^{1/2}\nonumber\\
&=&diag(MB)-M^{1/2}BM^{1/2}\nonumber\\
&=&\tilde L(MB). \nonumber
\end{eqnarray}
$L(MB)$ preserves the sign of the eigenvectors of $\tilde L(MB)$.\\
If $\tilde v$ an eigenvector of $\tilde L(MB)$ of the eigenvalue $\lambda\in\sigma(\tilde L(MB))$, then
\begin{eqnarray}
\tilde L(MB) \tilde v=\lambda \tilde v & \Leftrightarrow & M^{-1/2}L(MB)M^{1/2} \tilde v=\lambda \tilde v\nonumber\\
& \Leftrightarrow & L(MB)M^{1/2} \tilde v=\lambda M^{1/2} \tilde v\nonumber
\end{eqnarray}
As a consequence $v:=M^{1/2} \tilde v$ is the eigenvector of $L(MB)$ of the eigenvalue $\lambda,$ and $v_i=(M\tilde v)_i$,
$$v_i=\sum_{r=1}^{n-q} M_{ir}\tilde v_r=M_{ii}\tilde v_i.$$
\end{proof}}
Thanks to the previous result, we can partition the primary graph $\mathcal G$ containing the $(m,k)$-star and the $q$-reduced graph $\mathcal G^q$, weighted
by the matrix $M$, in the same way except for the removed vertices.\\
\section{Concluding remarks}\label{sec:4}
In this work we considered the problem of spectral partitioning of weighted graphs that contain $(m,k)$-stars.
We showed that, under some hypotheses on edge weights, the Laplacian matrix of graphs with $(m,k)$-stars has eigenvalues of multiplicity at least $m-1$
and computable values.\\
We proved that it is possible to reduce the node cardinality of these graphs by a suitable equivalence relation, keeping the same eigenvalues
on the adjacency and Laplacian matrices up to their multiplicity.\\
Furthermore, we have shown that Laplacian matrices of both the original and reduced graphs have the same signs of the eigenvectors entries,
so that it is possible to partition both graphs in the same way, up to removed vertices.\\
According to these results, whenever a weighted graph is composed by one or more $(m,k)$-star subgraphs, it is possible to collapse some of its vertices
into one, and to reduce the dimension of the matrices associated to these graphs, preserving the spectral properties.\\
These results can be relevant for applications to the network partitioning problems, or whenever a sort of node summarization is sought, merging nodes
with similar spectral properties.
These nodes could share similar functional properties, e. g. in the case of proteins with a similar neighborhood structure in interactome networks\cite{bioplex15},
with implications on biomedical and Systems Biology applications \cite{menche15}.
Moreover, the possibility to reduce network dimensionality by an equivalence relation among nodes can possibly be extended in a perturbative approach,
performing network reduction whenever the conditions of our theorems are 'almost satisfied', that is if some eigenvalues are sufficiently close.
\section{Acknowledgments}
The authors thank Nicola Guglielmi (University of L'Aquila, Italy), and E. A. also thanks
Domenico Felice (Max Planck Institute of Leipzig, Germany) and Carmela Scalone (University of L'Aquila, Italy) for useful discussions.
|
\section{Introduction}
In this paper we study numerical integration of smooth functions defined over the $s$-dimensional unit cube.
For an integrable function $f\colon [0,1)^s\to \mathbb{R}$, we denote the integral of $f$ by
\[ I(f) := \int_{[0,1)^s}f(\boldsymbol{x})\, \mathrm{d} \boldsymbol{x}. \]
We approximate $I(f)$ by a linear algorithm of the form
\[ I(f;P_N,W_N) = \sum_{n=0}^{N-1}w_nf(\boldsymbol{x}_n), \]
where $P_N=\{\boldsymbol{x}_n\colon 0\leq n<N\}\subset [0,1)^s$ is the set of quadrature nodes and $W_N=\{w_n\colon 0\leq n<N\}\subset \mathbb{R}$ is the set of associated weights.
A quasi-Monte Carlo (QMC) rule is an equal-weight quadrature rule where the weights sum up to 1, i.e., a linear algorithm with the special choice $w_n=1/N$ for all $n$.
Thus, $I(f)$ is simply approximated by
\[ I(f;P_N) = \frac{1}{N}\sum_{n=0}^{N-1}f(\boldsymbol{x}_n). \]
We refer to \cite{DKS13,DPbook,Nbook,SJbook} for comprehensive information on QMC integration.
The quality of a given quadrature rule is often measured by the worst-case error, that is, the worst absolute integration error in the unit ball of a normed function space $V$:
\[ e^{\mathrm{wor}}(V; P_N,W_N) := \sup_{\substack{f\in V\\ \|f\|_V\leq 1}}|I(f;P_N,W_N)-I(f)|, \]
for a general linear algorithm, and
\[ e^{\mathrm{wor}}(V; P_N) := \sup_{\substack{f\in V\\ \|f\|_V\leq 1}}|I(f;P_N)-I(f)|, \]
for a QMC algorithm.
In this paper, we consider weighted unanchored Sobolev spaces with dominating mixed smoothness $\alpha\geq 2$ as introduced in \cite{DKLNS14}, see Section~\ref{subsec;sobolev} for the details.
For such function spaces consisting of smooth functions, it is possible to construct good QMC integration rules achieving the almost optimal order of convergence $O(N^{-\alpha+\epsilon})$ with arbitrarily small $\epsilon>0$, see for instance \cite{BD09,BDGP11,BDLNP12,D08,G15,GD15,GSY16}.
In particular, so-called interlaced polynomial lattice rules have been recently applied in the context of partial differential equations with random coefficients, see for instance \cite{DKLNS14,DLS16}, due to their low construction cost and weak dependence of the worst-case error on the dimension.
In this paper, we propose an alternative QMC-based quadrature rule, named \emph{extrapolated polynomial lattice rule}, which achieves the almost optimal order of convergence with weak dependence on the dimension and can be constructed at a low computational cost.
Roughly speaking, extrapolated polynomial lattice rules are given by constructing classical polynomial lattice rules with consecutive sizes of nodes and then applying Richardson extrapolation in a recursive way.
Therefore, the resulting quadrature rule is a linear algorithm but not equally weighted.
Our motivation behind introduction of extrapolated polynomial lattice rules lies in so-called fast QMC matrix-vector multiplication which is briefly explained below.
Recently in \cite{DKLS15}, Dick et al.\ consider the problem of approximating integrals of the form
\[ \int_{[0,1)^s}f(\boldsymbol{x} A) \, \mathrm{d} \boldsymbol{x},\]
where $\boldsymbol{x}$ is an $1\times s$ row vector, and $A$ is an $s\times t$ real matrix.
They design QMC quadrature nodes $\boldsymbol{x}_0,\ldots,\boldsymbol{x}_{N-1}\in [0,1)^s$ suitably such that the matrix-vector product $XA$, where $X=(\boldsymbol{x}_0^{\top},\ldots,\boldsymbol{x}_{N-1}^{\top})^{\top}$, can be computed in $O(N\log N)$ arithmetic operations by using the fast Fourier transform without requiring any structure in the matrix $A$. This is done by choosing the quadrature nodes such that $X = CP$, where $C$ is a circulant matrix and the matrix $P$ reorders and extends the vector $\mathbf{a}$ when multiplied with $P$.
The resulting vector $XA=Y=(\boldsymbol{y}_0^{\top},\ldots,\boldsymbol{y}_{N-1}^{\top})^{\top}$ is used to approximate $I(f)$ by
\[ \frac{1}{N}\sum_{n=0}^{N-1}f(\boldsymbol{y}_n). \]
Their proposed method can be applied to classical polynomial lattice rules, but not to interlaced polynomial lattice rules, since the interlacing destroys the circulant structure.
In fact, it has been an open question whether it is possible to design QMC quadrature nodes which achieve higher order of convergence of the integration error for sufficiently smooth functions, and at the same time, can be used in fast QMC matrix-vector multiplication.
Since extrapolated polynomial lattice rules are just given by a linear combination of classical polynomial lattice rules, we can apply fast QMC matrix-vector multiplication to extrapolated polynomial lattice rules in a straightforward manner, which gives an affirmative solution to the above question.
The remainder of this paper is organized as follows.
In the next section we describe the necessary background and notation, namely, weighted unanchored Sobolev spaces with dominating mixed smoothness, Walsh functions, polynomial lattice rules, and Richardson extrapolation.
In Section~\ref{sec:explr}, we first give the key ingredient for introducing extrapolated polynomial lattice rules, and then study their worst-case error in Sobolev spaces with general weights as well as their dependence on the worst-case error bound on the dimension.
Here we prove the existence of good extrapolated polynomial lattice rules achieving the almost optimal order of convergence.
In Section~\ref{sec:cbc}, we restrict ourselves to the case of product weights and show that the so-called fast component-by-component construction algorithm works for finding good extrapolated polynomial lattice rules.
We conclude this paper with numerical experiments in Section~\ref{sec:experiment}.
\section{Preliminaries}
Throughout this paper, let $\mathbb{N}$ denote the set of positive integers and $\mathbb{N}_0=\mathbb{N}\cup \{0\}$.
Let $b$ be a prime, and $\mathbb{F}_b$ be the finite field with $b$ elements which is identified with the set $\{0,1,\ldots,b-1\}\subset \mathbb{Z}$ equipped with addition and multiplication modulo $b$.
Further, we denote by $\mathbb{F}_b[x]$ the set of all polynomials over $\mathbb{F}_b$ and by $\mathbb{F}_b((x^{-1}))$ the field of formal Laurent series over $\mathbb{F}_b$.
For $m\in \mathbb{N}$, we write
\[ G_{b,m} = \{q\in \mathbb{F}_b[x]\colon \deg(m)<m \}\quad \text{and}\quad G^*_{b,m}=G_{b,m}\setminus \{0\}. \]
It is obvious that $|G_{b,m}|=b^m$ and $|G^*_{b,m}|=b^m-1$.
With a slight abuse of notation, we often identify $n\in \mathbb{N}_0$, whose finite $b$-adic expansion is given by $n=\nu_0+\nu_1b+\cdots$, with the polynomial over $\mathbb{F}_b$ given by $n(x)=\nu_0+\nu_1x+\cdots$.
\subsection{Sobolev spaces with dominating mixed smoothness}\label{subsec;sobolev}
We give the definition of weighted Sobolev spaces with dominating mixed smoothness.
Let $\alpha,s\in \mathbb{N}$, $\alpha\geq 2$, $1\leq q,r\leq \infty$, and let $\boldsymbol{\gamma}=(\gamma_u)_{u\subset \mathbb{N}}$ be a set of non-negative real numbers called weights, which plays a role in moderating the importance of different variables or groups of variables in the function space \cite{SW98}.
Assume that $f\colon [0,1)^s\to \mathbb{R}$ has partial mixed derivatives up to order $\alpha$ in each variable.
We define the norm on the weighted unanchored Sobolev space with dominating mixed smoothness $\alpha$ by
\begin{align*}
\|f\|_{s,\alpha,\boldsymbol{\gamma},q,r} & := \Bigg( \sum_{u\subseteq \{1,\ldots,s\}}\Bigg( \gamma_u^{-q}\sum_{v\subseteq u}\sum_{\boldsymbol{\tau}_{u\setminus v}\in \{1,\ldots,\alpha\}^{|u\setminus v|}} \\
& \qquad \qquad \int_{[0,1)^{|v|}}\left| \int_{[0,1)^{s-|v|}}f^{(\boldsymbol{\tau}_{u\setminus v},\boldsymbol{\alpha}_v,\boldsymbol{0})}(\boldsymbol{x})\, \mathrm{d} \boldsymbol{x}_{-v}\right|^q \, \mathrm{d} \boldsymbol{x}_v\Bigg)^{r/q}\Bigg)^{1/r},
\end{align*}
with the obvious modifications if $q$ or $r$ is infinite.
Here $(\boldsymbol{\tau}_{u\setminus v},\boldsymbol{\alpha}_v,\boldsymbol{0})$ denotes a sequence $\boldsymbol{\beta}=(\beta_j)_{1\leq j\leq s}$ with $\beta_j=\tau_j$ if $j\in u\setminus v$, $\beta_j=\alpha$ if $j\in v$, and $\beta_j=0$ if $j\notin u$.
Further, $f^{(\boldsymbol{\tau}_{u\setminus v},\boldsymbol{\alpha}_v,\boldsymbol{0})}$ denotes the partial mixed derivative of order $(\boldsymbol{\tau}_{u\setminus v},\boldsymbol{\alpha}_v,\boldsymbol{0})$ of $f$, and we write $\boldsymbol{x}_v=(x_j)_{j\in v}$ and $\boldsymbol{x}_{-v}=(x_j)_{j\in \{1,\ldots,s\}\setminus v}$.
We denote the Banach-Sobolev space of all such functions with finite norm $\|\cdot\|_{s,\alpha,\boldsymbol{\gamma},q,r}$ by $W_{s,\alpha,\boldsymbol{\gamma},q,r}$.
In what follows, let $B_{\tau}(\cdot)$ denote the Bernoulli polynomial of degree $\tau$.
We put $b_{\tau}(\cdot)=B_{\tau}(\cdot)/\tau!$ and $b_{\tau}=b_{\tau}(0)$.
Further, let $\tilde{b}_{\tau}(\cdot)\colon \mathbb{R}\to \mathbb{R}$ denote the one-periodic extension of the polynomial $b_{\tau}(\cdot)\colon [0,1)\to \mathbb{R}$.
Then, as shown in the proof of \cite[Theorem~3.5]{DKLNS14} we have the following.
\begin{lemma}\label{lem:func_represent}
For any $f\in W_{s,\alpha,\boldsymbol{\gamma},q,r}$, we have a pointwise representation
\[ f(\boldsymbol{y}) = \sum_{u\subseteq \{1,\ldots,s\}}f_u(\boldsymbol{y}_u), \]
where each function $f_u$ depends only on a subset of variables $\boldsymbol{y}_u=(y_j)_{j\in u}$ and is explicitly given by
\begin{align*}
f_u(\boldsymbol{y}_u) & = \sum_{v\subseteq u}\sum_{\boldsymbol{\tau}_{u\setminus v}\in \{1,\ldots,\alpha\}^{|u\setminus v|}}\prod_{j\in u\setminus v}b_{\tau_j}(y_j) \\
& \qquad \times (-1)^{(\alpha+1)|v|}\int_{[0,1)^s} f^{(\boldsymbol{\tau}_{u\setminus v},\boldsymbol{\alpha}_v,\boldsymbol{0})}(\boldsymbol{x}) \prod_{j\in v}\tilde{b}_{\alpha}(x_j-y_j) \, \mathrm{d} \boldsymbol{x}.
\end{align*}
Furthermore we have
\[ \|f\|_{s,\alpha,\boldsymbol{\gamma},q,r} = \left( \sum_{u\subseteq \{1,\ldots,s\}}\|f_u\|_{s,\alpha,\boldsymbol{\gamma},q,r}^r\right)^{1/r}. \]
\end{lemma}
\subsection{Walsh functions}\label{subsec:walsh}
Here we introduce the definition of Walsh functions and state the result on the decay of Walsh coefficients for functions in $W_{s,\alpha,\boldsymbol{\gamma},q,r}$.
\begin{definition}
For a prime $b$, put $\omega_b=\exp(2\pi i/b)$.
For $k\in \mathbb{N}_0$ with finite $b$-adic expansion $k=\kappa_0+\kappa_1b+\cdots$, the $k$-th Walsh function ${}_{b}\mathrm{wal}_k\colon [0,1)\to \{1,\omega_b,\ldots,\omega_b^{b-1}\}$ is defined by
\[ {}_{b}\mathrm{wal}_k(x) := \omega_{b}^{\kappa_0 \xi_1+\kappa_1 \xi_2+\cdots},\]
for $x\in [0,1)$ with $b$-adic expansion $x=\xi_1/b+\xi_2/b^2+\cdots$, where this expansion is understood to be unique in the sense that infinitely many of the $\xi_i$ are different from $b-1$.
For $s\geq 2$ and $\boldsymbol{k}=(k_1,\ldots,k_s)\in \mathbb{N}_0^s$, the $\boldsymbol{k}$-th Walsh function ${}_{b}\mathrm{wal}_{\boldsymbol{k}}\colon [0,1)^s\to \{1,\omega_b,\ldots,\omega_b^{b-1}\}$ is defined by
\[ {}_{b}\mathrm{wal}_{\boldsymbol{k}}(\boldsymbol{x}) := \prod_{j=1}^{s}{}_{b}\mathrm{wal}_{k_j}(x_j) ,\]
for $\boldsymbol{x}=(x_1,\ldots,x_s)\in [0,1)^s$.
\end{definition}
\noindent
Since we shall use Walsh functions in a fixed prime base $b$ in this paper, we omit the subscript and simply write $\mathrm{wal}_k$ or $\mathrm{wal}_{\boldsymbol{k}}$. Note that the system $\{\mathrm{wal}_{\boldsymbol{k}}\colon \boldsymbol{k}\in \mathbb{N}_0^s\}$ is a complete orthonormal system in $L^2([0,1)^s)$, see \cite[Theorem~A.11]{DPbook}. Thus for $f\in L^2([0,1)^s)$, we have the Walsh expansion of $f$:
\[ \sum_{\boldsymbol{k}\in \mathbb{N}_0^s}\hat{f}(\boldsymbol{k})\mathrm{wal}_{\boldsymbol{k}}(\boldsymbol{x}), \]
where $\hat{f}(\boldsymbol{k})$ denotes the $\boldsymbol{k}$-th Walsh coefficient of $f$ defined by
\[ \hat{f}(\boldsymbol{k}):=\int_{[0,1)^s}f(\boldsymbol{x})\overline{\mathrm{wal}_{\boldsymbol{k}}(\boldsymbol{x})}\, \mathrm{d} \boldsymbol{x}. \]
Here we note that the integral of $f$ is given by $I(f)=\hat{f}(\boldsymbol{0})$.
The Walsh coefficients of a function $f\in W_{s,\alpha,\boldsymbol{\gamma},q,r}$ are bounded as follows, see \cite[Theorem~14]{D09} and \cite[Theorem~3.5]{DKLNS14} for the proof.
\begin{lemma}\label{lem:walsh_bound}
For $k\in \mathbb{N}$, we denote the $b$-adic expansion $k$ by $k=\kappa_1b^{a_1-1}+\cdots+\kappa_vb^{a_v-1}$ with $a_1>\cdots>a_v>0$ and $\kappa_1,\ldots,\kappa_v\in \{1,\ldots,b-1\}$.
We define the metric $\mu_{\alpha}:\mathbb{N}_0\to \mathbb{N}_0$ by
\[ \mu_{\alpha}(k):=a_1+\cdots+a_{\min(v,\alpha)}, \]
and $\mu_{\alpha}(0):=0$. In case of a vector $\boldsymbol{k}=(k_1,\ldots,k_s)\in \mathbb{N}_0^s$, we define
\[ \mu_{\alpha}(\boldsymbol{k}) := \sum_{j=1}^{s}\mu_{\alpha}(k_j). \]
For a subset $u\subseteq \{1,\ldots,s\}$ and $\boldsymbol{k}_u\in \mathbb{N}^{|u|}$, the $(\boldsymbol{k}_u,\boldsymbol{0})$-th Walsh coefficient of a function $f\in W_{s,\alpha,\boldsymbol{\gamma},q,r}$ is bounded by
\[ |\hat{f}(\boldsymbol{k}_u,\boldsymbol{0})| \leq \gamma_uC_{\alpha}^{|u|}b^{-\mu_{\alpha}(\boldsymbol{k}_u)}\|f_u\|_{s,\alpha,\boldsymbol{\gamma},q,r}, \]
where
\begin{align*}
C_{\alpha} & = \max\left( \frac{2}{(2\sin\frac{\pi}{b})^{\alpha}}, \max_{1\leq z\leq \alpha-1}\frac{1}{(2\sin\frac{\pi}{b})^{z}}\right) \\
& \qquad \times \left( 1+\frac{1}{b}+\frac{1}{b(b+1)}\right)^{\alpha-2}\left( 3+\frac{2}{b}+\frac{2b+1}{b-1}\right).
\end{align*}
\end{lemma}
\begin{remark}\label{rem:walsh_bound}
For the special but important case $b=2$, Yoshiki \cite{Y15} proved that the constant $C_{\alpha}$ can be improved to $C_{\alpha}=2^{-1/p'}$ where $p'$ denotes the H\"older conjugate of $q$, i.e., $1\leq q'\leq \infty$ which satisfies $1/q+1/q'=1$.
\end{remark}
\subsection{Polynomial lattice rules}\label{subsec:poly_lattice}
Polynomial lattice point sets are a special construction of QMC quadrature nodes introduced by Niederreiter in \cite{N88}, which are defined as follows.
\begin{definition}
Let $p\in \mathbb{F}_{b}[x]$ with $\deg(p)=m$ and $\boldsymbol{q}=(q_1,\ldots,q_s)\in (G^*_{b,m})^s$.
We define the map $v_m\colon \mathbb{F}_b((x^{-1}))\to [0,1)$ by
\[ v_m\left( \sum_{i=w}^{\infty}a_i x^{-i}\right) := \sum_{i=\max\{1,w\}}^{m}a_ib^{-i}. \]
For $0\leq n< b^m$, which is identified with a polynomial over $\mathbb{F}_b$, put
\[ \boldsymbol{x}_n = \left( v_m\left(\frac{nq_1}{p}\right),\ldots, v_m\left(\frac{nq_s}{p}\right)\right)\in [0,1)^s. \]
Then the point set $P(p,\boldsymbol{q})=\{\boldsymbol{x}_0,\ldots,\boldsymbol{x}_{b^m-1}\}$ is called a polynomial lattice point set (with modulus $p$ and generating vector $\boldsymbol{q}$).
A QMC rule using the point set $P(p,\boldsymbol{q})$ as quadrature nodes is called a polynomial lattice rule.
\end{definition}
The concept of dual polynomial lattice plays a key role in the error analysis of polynomial lattice rules.
\begin{definition}
For $k\in \mathbb{N}_0$ with finite $b$-adic expansion $k=\kappa_0+\kappa_1b+\cdots$, we define the map $\mathrm{tr}_m\colon \mathbb{N}_0\to G_{b,m}$ by
\[ \mathrm{tr}_m(k) = \kappa_0+\kappa_1x+\cdots +\kappa_{m-1}x^{m-1}. \]
For $p\in \mathbb{F}_{b}[x]$ with $\deg(p)=m$ and $\boldsymbol{q}=(q_1,\ldots,q_s)\in (G^*_{b,m})^s$, the dual polynomial lattice of $P(p,\boldsymbol{q})$ is defined by
\[ P^{\perp}(p,\boldsymbol{q}) := \left\{\boldsymbol{k}\in \mathbb{N}_0^s\colon \mathrm{tr}_m(\boldsymbol{k})\cdot \boldsymbol{q} \equiv 0 \pmod p \right\}. \]
\end{definition}
\begin{remark}\label{rem:poly_lattice}
For $\boldsymbol{k}\in \mathbb{N}_0^s$ such that $b^m\mid k_j$ for all $j$, we have $\mathrm{tr}_m(\boldsymbol{k})=\boldsymbol{0}$.
Thus, regardless of the choice on $p$ and $\boldsymbol{q}$, such $\boldsymbol{k}$ is always included in the dual polynomial lattice $P^{\perp}(p,\boldsymbol{q})$.
\end{remark}
The following lemma shows the character property of polynomial lattice point sets, see for instance \cite[Lemmas~4.75 and 10.6]{DPbook} for the proof.
\begin{lemma}\label{lem:character}
Let $p\in \mathbb{F}_{b}[x]$ with $\deg(p)=m$ and $\boldsymbol{q}=(q_1,\ldots,q_s)\in (G^*_{b,m})^s$.
For $\boldsymbol{k}\in \mathbb{N}_0^s$, we have
\begin{align*}
\sum_{\boldsymbol{x}\in P(p,\boldsymbol{q})}\mathrm{wal}_{\boldsymbol{k}}(\boldsymbol{x}) = \begin{cases}
b^m & \text{if $\boldsymbol{k}\in P^{\perp}(p,\boldsymbol{q})$}, \\
0 & \text{otherwise.}
\end{cases}
\end{align*}
\end{lemma}
\noindent
By considering the Walsh expansion of a continuous function $f\colon [0,1)^s \to \mathbb{R}$ with $\sum_{\boldsymbol{k}\in \mathbb{N}_0^s}|\hat{f}(\boldsymbol{k})|<\infty$ and using Lemma~\ref{lem:character}, we obtain
\begin{align}
I(f; P(p,\boldsymbol{q})) & = \frac{1}{b^m}\sum_{\boldsymbol{x}\in P(p,\boldsymbol{q})}\sum_{\boldsymbol{k}\in \mathbb{N}_0^s}\hat{f}(\boldsymbol{k})\mathrm{wal}_{\boldsymbol{k}}(\boldsymbol{x}) \nonumber \\
& = \frac{1}{b^m}\sum_{\boldsymbol{k}\in \mathbb{N}_0^s}\hat{f}(\boldsymbol{k})\sum_{\boldsymbol{x}\in P(p,\boldsymbol{q})}\mathrm{wal}_{\boldsymbol{k}}(\boldsymbol{x}) \nonumber \\
& = \sum_{\boldsymbol{k}\in P^{\perp}(p,\boldsymbol{q})}\hat{f}(\boldsymbol{k}) = I(f)+\sum_{\boldsymbol{k}\in P^{\perp}(p,\boldsymbol{q})\setminus \{\boldsymbol{0}\}}\hat{f}(\boldsymbol{k}) .\label{eq:poly_lattice_error}
\end{align}
\subsection{Richardson extrapolation}\label{subsec:extrapolation}
Richardson extrapolation is a classical technique to speed up the convergence of a sequence by exploiting the asymptotic expansion of each term, see for instance \cite[Section~1.4]{DRbook} and \cite[Section~3.2.7]{Gbook}. In our current setting, we may have a sequence of polynomial lattice rules with the consecutive sizes of nodes, $b^1,b^2,\ldots$, which means that each term of the sequence corresponds to the approximate value $I(f;P(p,\boldsymbol{q}))$ for some $m\in \mathbb{N}$.
To simplify the situation, instead of an infinite sequence, let us consider a chain of $\alpha$ reals $(I^{(1)}_n)_{m-\alpha+1\leq n \leq m}$ with each given by
\begin{align}\label{eq:extra_seq}
I^{(1)}_n = c_0 + \frac{c_1}{b^n}+\cdots + \frac{c_{\alpha-1}}{b^{(\alpha -1)n}} + R_{\alpha,b^n},
\end{align}
where $b>1$, $c_0,\ldots,c_{\alpha-1}\in \mathbb{R}$ and $R_{\alpha,n}\in O(b^{-\alpha n})$. As shown later in \eqref{eq:decomp}, $I(f;P(p,\boldsymbol{q}))$ has actually such an expansion. In standard notation for extrapolation methods, the reciprocal $1/b^n$ should be regarded as a so-called admissible value of the step parameter $h$ for each term $I^{(1)}_n$. The aim here is to approximate $c_0$ as precisely as possible from the chain $(I^{(1)}_n)_{m-\alpha+1\leq n \leq m}$ without knowing the coefficients $c_1,\ldots,c_{\alpha-1}$.
To do so, let us consider the following recursive application of Richardson extrapolation of successive orders:
For $1\leq \tau< \alpha$, compute
\begin{align*}
I^{(\tau+1)}_{n} = \frac{b^{\tau} I^{(\tau)}_{n}-I^{(\tau)}_{n-1}}{b^{\tau}-1} \qquad \text{for $m-\alpha+\tau < n\leq m$}.
\end{align*}
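The recursion above is straightforward to run numerically. The following Python sketch (an illustration only; the coefficients $c_w$ and the perturbation are chosen arbitrarily) applies it to a chain of the form \eqref{eq:extra_seq} and recovers $c_0$ far more accurately than the best single term:

```python
# Recursive Richardson extrapolation for terms I_n = c0 + c1/b^n + c2/b^(2n) + ...
# Each column of the table is obtained from the previous one via
#   I^(tau+1)_n = (b^tau * I^(tau)_n - I^(tau)_(n-1)) / (b^tau - 1).

def richardson(values, b):
    """values = [I^(1)_{m-alpha+1}, ..., I^(1)_m]; returns I^(alpha)_m."""
    col = list(values)
    tau = 1
    while len(col) > 1:
        col = [(b**tau * col[i] - col[i - 1]) / (b**tau - 1)
               for i in range(1, len(col))]
        tau += 1
    return col[0]

# Example chain: c0 = 3, expansion 3 + 2/b^n - 5/b^(2n) plus a term of order b^(-3n)
b, m, alpha = 2, 10, 3
seq = [3 + 2 / b**n - 5 / b**(2 * n) + 1e-3 / b**(3 * n)
       for n in range(m - alpha + 1, m + 1)]
approx = richardson(seq, b)
print(abs(approx - 3))  # of order b^(-alpha*m), far below the ~2e-3 error of seq[-1]
```

The extrapolation eliminates the $c_1$ and $c_2$ terms exactly, leaving only a weighted combination of the order-$b^{-3n}$ remainders.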
Regarding this recursion, the following result holds. Although a similar result has been shown, for instance, in \cite{LP17}, we include the proof to keep the paper self-contained.
\begin{lemma}\label{lem:extra}
For a given $1\leq \tau \leq \alpha$, let
\[ a_{\nu}^{(\tau)} := \prod_{j=1}^{\nu-1}\left(\frac{-1}{b^j-1}\right)\prod_{j=1}^{\tau-\nu}\left(\frac{b^j}{b^j-1}\right) \qquad \text{for $1\leq \nu\leq \tau$}, \]
where the empty product is set to $1$.
Then we have
\[ I^{(\tau)}_{n} = \sum_{\nu=1}^{\tau}a_{\nu}^{(\tau)} I_{n+1-\nu}^{(1)} \qquad \text{for $m-\alpha+\tau \leq n\leq m$}. \]
\end{lemma}
\begin{proof}
We prove the lemma by induction on $\tau$.
As $a_1^{(1)}=1$, the case $\tau=1$ is trivial.
Let $1\leq \tau<\alpha$ and suppose that the equality
\[ I^{(\tau)}_{n} = \sum_{\nu=1}^{\tau}a_{\nu}^{(\tau)} I_{n+1-\nu}^{(1)} \]
holds for all $m-\alpha+\tau \leq n\leq m$.
It follows from the definition of $I^{(\tau+1)}_{n}$ that
\begin{align*}
I^{(\tau+1)}_{n} & = \frac{b^{\tau} I^{(\tau)}_{n}-I^{(\tau)}_{n-1}}{b^{\tau}-1} \\
& = \frac{b^\tau}{b^\tau-1} \sum_{\nu=1}^{\tau}a_{\nu}^{(\tau)} I_{n+1-\nu}^{(1)} - \frac{1}{b^\tau-1} \sum_{\nu=1}^{\tau}a_{\nu}^{(\tau)} I_{n-\nu}^{(1)} \\
& = \frac{b^\tau}{b^\tau-1} a_1^{(\tau)} I_{n}^{(1)} - \frac{1}{b^\tau-1} a_{\tau}^{(\tau)} I_{n-\tau}^{(1)} + \sum_{\nu=2}^{\tau}\left(\frac{b^\tau}{b^\tau-1}a_{\nu}^{(\tau)}- \frac{1}{b^\tau-1}a_{\nu-1}^{(\tau)}\right)I_{n+1-\nu}^{(1)},
\end{align*}
for $m-\alpha+\tau < n\leq m$.
For each term on the right-most side above, we have
\begin{align*}
\frac{b^\tau}{b^\tau-1} a_1^{(\tau)} & = \frac{b^\tau}{b^\tau-1}\prod_{j=1}^{\tau-1}\left(\frac{b^j}{b^j-1}\right) = \prod_{j=1}^{\tau}\left(\frac{b^j}{b^j-1}\right) = a_1^{(\tau+1)}, \\
- \frac{1}{b^\tau-1} a_{\tau}^{(\tau)} & = \frac{-1}{b^\tau-1}\prod_{j=1}^{\tau-1}\left(\frac{-1}{b^j-1}\right) = \prod_{j=1}^{\tau}\left(\frac{-1}{b^j-1}\right) = a_{\tau+1}^{(\tau+1)} ,
\end{align*}
and for $2\leq \nu\leq \tau$
\begin{align*}
& \frac{b^\tau}{b^\tau-1}a_{\nu}^{(\tau)}- \frac{1}{b^\tau-1}a_{\nu-1}^{(\tau)} \\
& = \frac{b^\tau}{b^\tau-1} \prod_{j=1}^{\nu-1}\left(\frac{-1}{b^j-1}\right)\prod_{j=1}^{\tau-\nu}\left(\frac{b^j}{b^j-1}\right) - \frac{1}{b^\tau-1} \prod_{j=1}^{\nu-2}\left(\frac{-1}{b^j-1}\right)\prod_{j=1}^{\tau-\nu+1}\left(\frac{b^j}{b^j-1}\right) \\
& = \prod_{j=1}^{\nu-1}\left(\frac{-1}{b^j-1}\right)\prod_{j=1}^{\tau+1-\nu}\left(\frac{b^j}{b^j-1}\right) = a_{\nu}^{(\tau+1)}.
\end{align*}
Thus we have
\begin{align*}
I^{(\tau+1)}_{n} = a_1^{(\tau+1)} I_{n}^{(1)} + a_{\tau+1}^{(\tau+1)} I_{n-\tau}^{(1)} + \sum_{\nu=2}^{\tau}a_{\nu}^{(\tau+1)} I_{n+1-\nu}^{(1)} = \sum_{\nu=1}^{\tau+1}a_{\nu}^{(\tau+1)} I_{n+1-\nu}^{(1)},
\end{align*}
which proves the lemma.
\end{proof}
In particular, this lemma shows that the final value $I^{(\alpha)}_{m}$ is given by
\begin{align}\label{eq:extra_final}
I^{(\alpha)}_{m} = \sum_{\tau=1}^{\alpha}a_{\tau}^{(\alpha)} I_{m-\tau+1}^{(1)}.
\end{align}
Regarding the coefficients $a_{\nu}^{(\tau)}$, the following property holds:
\begin{lemma}\label{lem:extra2}
For any $1\leq \tau \leq \alpha$, we have
\[ \sum_{\nu=1}^{\tau}a_{\nu}^{(\tau)} = 1\qquad \text{and}\qquad \sum_{\nu=1}^{\tau}a_{\nu}^{(\tau)}b^{w(\nu-1)} = 0 \qquad \text{for $1\leq w\leq \tau-1$}. \]
\end{lemma}
\begin{proof}
We prove the lemma by induction on $\tau$.
As $a_1^{(1)}=1$, the case $\tau=1$ is trivial.
Suppose that the claim of this lemma holds for some $1\leq \tau<\alpha$.
Using the recursions appearing in the proof of Lemma~\ref{lem:extra}, we have
\begin{align*}
\sum_{\nu=1}^{\tau+1}a_{\nu}^{(\tau+1)} & = a_{1}^{(\tau+1)}+\sum_{\nu=2}^{\tau}a_{\nu}^{(\tau+1)}+a_{\tau+1}^{(\tau+1)} \\
& = \frac{b^\tau}{b^\tau-1} a_1^{(\tau)}+\sum_{\nu=2}^{\tau}\left( \frac{b^\tau}{b^\tau-1}a_{\nu}^{(\tau)}- \frac{1}{b^\tau-1}a_{\nu-1}^{(\tau)} \right) - \frac{1}{b^\tau-1} a_{\tau}^{(\tau)}\\
& = \sum_{\nu=1}^{\tau}\left( \frac{b^\tau}{b^\tau-1}-\frac{1}{b^\tau-1}\right)a_{\nu}^{(\tau)} = \sum_{\nu=1}^{\tau}a_{\nu}^{(\tau)}=1.
\end{align*}
Similarly, for $1\leq w\leq \tau$, we have
\begin{align*}
\sum_{\nu=1}^{\tau+1}a_{\nu}^{(\tau+1)}b^{w(\nu-1)} & = a_{1}^{(\tau+1)}+\sum_{\nu=2}^{\tau}a_{\nu}^{(\tau+1)}b^{w(\nu-1)}+a_{\tau+1}^{(\tau+1)}b^{w\tau} \\
& = \frac{b^\tau}{b^\tau-1} a_1^{(\tau)}+\sum_{\nu=2}^{\tau}\left( \frac{b^{\tau+w(\nu-1)}}{b^\tau-1}a_{\nu}^{(\tau)}- \frac{b^{w(\nu-1)}}{b^\tau-1}a_{\nu-1}^{(\tau)} \right) - \frac{b^{w\tau}}{b^\tau-1} a_{\tau}^{(\tau)}\\
& = \sum_{\nu=1}^{\tau}\left( \frac{b^{\tau+w(\nu-1)}}{b^\tau-1}-\frac{b^{w\nu}}{b^\tau-1}\right)a_{\nu}^{(\tau)} \\
& = \frac{b^\tau-b^w}{b^\tau-1}\sum_{\nu=1}^{\tau}a_{\nu}^{(\tau)}b^{w(\nu-1)}=0,
\end{align*}
where the last equality follows from the induction assumption for $1\leq w\leq \tau-1$, and is trivial for $w=\tau$.
\end{proof}
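The closed-form coefficients and both identities of Lemma~\ref{lem:extra2} are easy to check in exact rational arithmetic; a small Python verification (illustration only):

```python
# Verify that the weights a_nu^(tau) sum to 1 and annihilate the lower-order
# terms b^(w(nu-1)) for 1 <= w <= tau-1, using exact fractions.
from fractions import Fraction
from math import prod

def a(nu, tau, b):
    """Closed-form coefficient a_nu^(tau); empty products equal 1."""
    left = prod((Fraction(-1, b**j - 1) for j in range(1, nu)),
                start=Fraction(1))
    right = prod((Fraction(b**j, b**j - 1) for j in range(1, tau - nu + 1)),
                 start=Fraction(1))
    return left * right

b = 3
for tau in range(1, 6):
    assert sum(a(nu, tau, b) for nu in range(1, tau + 1)) == 1
    for w in range(1, tau):
        assert sum(a(nu, tau, b) * b**(w * (nu - 1))
                   for nu in range(1, tau + 1)) == 0
print("identities hold exactly")
```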
Using these results, we further have the following.
\begin{corollary}\label{cor:extra}
Using the notation above, we have
\[ I^{(\alpha)}_{m} = c_0 + \sum_{\tau=1}^{\alpha}a_{\tau}^{(\alpha)}R_{\alpha,b^{m-\tau+1}}. \]
\end{corollary}
\begin{proof}
Plugging the expression \eqref{eq:extra_seq} into \eqref{eq:extra_final} and then using Lemma~\ref{lem:extra2}, we have
\begin{align*}
I^{(\alpha)}_{m} & = \sum_{\tau=1}^{\alpha}a_{\tau}^{(\alpha)} \left( c_0 + \sum_{w=1}^{\alpha-1}\frac{c_w}{b^{w(m-\tau+1)}} + R_{\alpha,b^{m-\tau+1}}\right) \\
& = c_0 \sum_{\tau=1}^{\alpha}a_{\tau}^{(\alpha)} + \sum_{w=1}^{\alpha-1}\sum_{\tau=1}^{\alpha}a_{\tau}^{(\alpha)}\frac{c_w}{b^{w(m-\tau+1)}}+ \sum_{\tau=1}^{\alpha}a_{\tau}^{(\alpha)}R_{\alpha,b^{m-\tau+1}} \\
& = c_0 \sum_{\tau=1}^{\alpha}a_{\tau}^{(\alpha)} + \sum_{w=1}^{\alpha-1}\frac{c_w}{b^{wm}}\sum_{\tau=1}^{\alpha}a_{\tau}^{(\alpha)}b^{w(\tau-1)}+ \sum_{\tau=1}^{\alpha}a_{\tau}^{(\alpha)}R_{\alpha,b^{m-\tau+1}}\\
& = c_0 + \sum_{\tau=1}^{\alpha}a_{\tau}^{(\alpha)}R_{\alpha,b^{m-\tau+1}}.
\end{align*}
This completes the proof.
\end{proof}
\section{Extrapolated polynomial lattice rules}\label{sec:explr}
The main idea behind extrapolated polynomial lattice rules is to view the approximate value of a polynomial lattice rule, as given in \eqref{eq:poly_lattice_error}, in the following way:
\begin{align*}
I(f; P(p,\boldsymbol{q})) & = I(f)+ \sum_{\substack{\boldsymbol{k}\in P^{\perp}(p,\boldsymbol{q})\setminus \{\boldsymbol{0}\}\\ \exists j\colon b^m\nmid k_j}} \hat{f}(\boldsymbol{k})+\sum_{\substack{\boldsymbol{k}\in P^{\perp}(p,\boldsymbol{q})\setminus \{\boldsymbol{0}\}\\ \forall j\colon b^m\mid k_j}} \hat{f}(\boldsymbol{k}) \\
& = I(f) + \sum_{\substack{\boldsymbol{k}\in P^{\perp}(p,\boldsymbol{q})\setminus \{\boldsymbol{0}\}\\ \exists j\colon b^m\nmid k_j}} \hat{f}(\boldsymbol{k})+\sum_{\boldsymbol{k}\in \mathbb{N}_0^s\setminus \{\boldsymbol{0}\}} \hat{f}(b^m\boldsymbol{k}),
\end{align*}
where the second equality follows from Remark~\ref{rem:poly_lattice}.
By considering the character property of regular grids
\[ P_{\mathrm{grid},b^m} = \left\{ \left( \frac{n_1}{b^m},\ldots,\frac{n_s}{b^m}\right)\in [0,1)^s\colon 0\leq n_1,\ldots,n_s< b^m\right\},\]
we see that the third term in the last expression is nothing but the approximation error of $f$ when using $P_{\mathrm{grid},b^m}$ as quadrature nodes in a QMC integration.
Therefore we have
\[ I(f; P(p,\boldsymbol{q})) = I(f) + \sum_{\substack{\boldsymbol{k}\in P^{\perp}(p,\boldsymbol{q})\setminus \{\boldsymbol{0}\}\\ \exists j\colon b^m\nmid k_j}} \hat{f}(\boldsymbol{k})+ \left(I(f;P_{\mathrm{grid},b^m}) -I(f)\right). \]
Plugging the Euler-Maclaurin formula for $I(f;P_{\mathrm{grid},b^m})$, shown later in Theorem~\ref{thm:euler-maclaurin}, into the right-hand side above, we obtain
\begin{align}\label{eq:decomp}
I(f; P(p,\boldsymbol{q})) = I(f) + \sum_{\substack{\boldsymbol{k}\in P^{\perp}(p,\boldsymbol{q})\setminus \{\boldsymbol{0}\}\\ \exists j\colon b^m\nmid k_j}} \hat{f}(\boldsymbol{k})+ \sum_{\tau=1}^{\alpha -1}\frac{c_{\tau}(f)}{b^{\tau m}} + R_{s,\alpha,b^m},
\end{align}
where $c_{\tau}(f)$ depends only on $f$ and $\tau$, and the remainder term $R_{s,\alpha,b^m}$ is proven to decay with order $b^{-\alpha m}$.
Now suppose that we have polynomial lattice rules with consecutive sizes of nodes, $b^{m-\alpha+1},b^{m-\alpha+2},\ldots,b^m$.
For ease of notation, we denote by $P_{b^n}$ a polynomial lattice point set with the number of nodes equal to $b^n$, and by $P^{\perp}_{b^n}$ the dual polynomial lattice of $P_{b^n}$.
Then we can obtain a chain of $\alpha$ approximate values of the integral, i.e., $I(f;P_{b^{m-\alpha+1}}),\ldots,I(f;P_{b^m})$.
By applying Richardson extrapolation in a recursive way as described in Section~\ref{subsec:extrapolation}, it follows from Lemma~\ref{lem:extra}, Corollary~\ref{cor:extra} and \eqref{eq:decomp} that the final value is given by
\begin{align}\label{eq:extra_approximation}
\sum_{\tau=1}^{\alpha}a_{\tau}^{(\alpha)} I(f;P_{b^{m-\tau+1}}) = I(f) + \sum_{\tau=1}^{\alpha}a_{\tau}^{(\alpha)}\left( \sum_{\substack{\boldsymbol{k}\in P^{\perp}_{b^{m-\tau+1}}\setminus \{\boldsymbol{0}\}\\ \exists j\colon b^{m-\tau+1}\nmid k_j}} \hat{f}(\boldsymbol{k})+ R_{s,\alpha,b^{m-\tau+1}}\right).
\end{align}
If we can construct good polynomial lattice rules such that the inner sum on the right-hand side of \eqref{eq:extra_approximation} decays with order $b^{-(\alpha-\epsilon)m}$ (with arbitrarily small $\epsilon >0$) for any function $f\in W_{s,\alpha,\boldsymbol{\gamma},q,r}$, the integration error
\[\sum_{\tau=1}^{\alpha}a_{\tau}^{(\alpha)} I(f;P_{b^{m-\tau+1}})-I(f)\]
decays with the almost optimal order.
(Note that we use $N=b^{m-\alpha+1}+\cdots + b^m$ quadrature nodes in total, which does not affect the order of convergence.)
This is our key observation for introducing extrapolated polynomial lattice rules.
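As a one-dimensional illustration of this observation, the following Python sketch applies the extrapolation weights to regular grid rules, whose error admits the expansion in powers of $1/N$ given by the classical Euler-Maclaurin formula; the integrand and parameters are chosen purely for illustration, and no polynomial lattice is involved:

```python
# The left-endpoint grid rule has error c1/N + c2/N^2 + ..., so combining the
# rules of sizes b^(m-alpha+1), ..., b^m by recursive Richardson extrapolation
# boosts the convergence order from 1 to alpha.
import math

def grid_rule(f, N):
    return sum(f(n / N) for n in range(N)) / N

def extrapolated(f, b, m, alpha):
    vals = [grid_rule(f, b**n) for n in range(m - alpha + 1, m + 1)]
    while len(vals) > 1:                     # recursive Richardson step
        tau = alpha - len(vals) + 1
        vals = [(b**tau * vals[i] - vals[i - 1]) / (b**tau - 1)
                for i in range(1, len(vals))]
    return vals[0]

f, exact = math.exp, math.e - 1.0
b, m, alpha = 2, 12, 3
err_plain = abs(grid_rule(f, b**m) - exact)            # order N^(-1)
err_extra = abs(extrapolated(f, b, m, alpha) - exact)  # order N^(-alpha)
print(err_plain, err_extra)
```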
In what follows, we first establish the worst-case error bound for extrapolated polynomial lattice rules, and then, in Section~\ref{subsec:euler-maclaurin}, we prove the Euler-Maclaurin formula for regular grid quadrature.
In Section~\ref{subsec:existence}, we prove the existence of such good polynomial lattice rules for $W_{s,\alpha,\boldsymbol{\gamma},q,r}$ with general weights $\boldsymbol{\gamma}=(\gamma_u)_{u\subset \mathbb{N}}$.
In Section~\ref{sec:cbc}, by restricting to product weights, i.e., the case where the weights are given by the form $\gamma_u=\prod_{j\in u}\gamma_j$ for a sequence of reals $(\gamma_j)_{j\in \mathbb{N}}$, we show that good polynomial lattice rules can be constructed by the fast component-by-component (CBC) algorithm.
\subsection{Worst-case error bound}\label{subsec:worst-case}
Using the equality \eqref{eq:extra_approximation}, the absolute integration error of an extrapolated polynomial lattice rule is bounded by
\begin{align}\label{eq:bound_error}
\left| \sum_{\tau=1}^{\alpha}a_{\tau}^{(\alpha)} I(f;P_{b^{m-\tau+1}}) -I(f)\right| \leq \sum_{\tau=1}^{\alpha}|a_{\tau}^{(\alpha)}|\left( \sum_{\substack{\boldsymbol{k}\in P^{\perp}_{b^{m-\tau+1}}\setminus \{\boldsymbol{0}\}\\ \exists j\colon b^{m-\tau+1}\nmid k_j}} |\hat{f}(\boldsymbol{k})|+ |R_{s,\alpha,b^{m-\tau+1}}|\right).
\end{align}
In the following, we write
\[ P_{b^{m-\tau+1},u}^{\perp}= \left\{\boldsymbol{k}_u\in \mathbb{N}^{|u|} \colon (\boldsymbol{k}_u,\boldsymbol{0})\in P_{b^{m-\tau+1}}^{\perp} \right\}, \]
for a subset $\emptyset \neq u\subseteq \{1,\ldots,s\}$. Note that we have
\[ P_{b^{m-\tau+1}}^{\perp}\setminus \{\boldsymbol{0}\} = \bigcup_{\emptyset \neq u\subseteq \{1,\ldots,s\}}P_{b^{m-\tau+1},u}^{\perp}. \]
We now obtain a worst-case error bound as follows.
\begin{theorem}\label{thm:bound_wrst_error}
Let $\alpha,s\in \mathbb{N}$, $\alpha\geq 2$, $1\leq q,r\leq \infty$, and let $\boldsymbol{\gamma}=(\gamma_u)_{u\subset \mathbb{N}}$ be a set of weights.
Let $q'$ and $r'$ be the H\"older conjugates of $q$ and $r$, respectively.
For $m\geq \alpha$, we have
\begin{align*}
& \sup_{\substack{f\in W_{s,\alpha,\boldsymbol{\gamma},q,r}\\ \|f\|_{s,\alpha,\boldsymbol{\gamma},q,r}\leq 1}}\left| \sum_{\tau=1}^{\alpha}a_{\tau}^{(\alpha)} I(f;P_{b^{m-\tau+1}}) -I(f)\right| \\
& \qquad \qquad \leq \sum_{\tau=1}^{\alpha}|a_{\tau}^{(\alpha)}| \left( B_{\boldsymbol{\gamma},r}(P_{b^{m-\tau+1}})+\frac{H_{s,\boldsymbol{\gamma},q,r}}{b^{\alpha(m-\tau+1)}}\right),
\end{align*}
where
\[ B_{\boldsymbol{\gamma},r}(P_{b^{m-\tau+1}}) = \Bigg(\sum_{\emptyset \neq u\subseteq \{1,\ldots,s\}}\Bigg(\gamma_uC_{\alpha}^{|u|}\sum_{\substack{\boldsymbol{k}_u\in P_{b^{m-\tau+1},u}^{\perp}\\ \exists j\in u\colon b^{m-\tau+1}\nmid k_j}}b^{-\mu_{\alpha}(\boldsymbol{k}_u)}\Bigg)^{r'} \Bigg)^{1/r'}, \]
and
\[ H_{s,\boldsymbol{\gamma},q,r} = \Bigg(\sum_{u\subseteq \{1,\ldots,s\}}\gamma_u^{r'}(\alpha+1)^{|u|r'/q'}D_{\alpha}^{r'|u|}\Bigg)^{1/r'}, \]
with $D_{\alpha}=\max\left\{|b_1|,\ldots,|b_{\alpha-1}|,\sup_{x\in [0,1)}|\tilde{b}_{\alpha}(x)|\right\}.$
\end{theorem}
\begin{proof}
Let us consider the inner sum on the right-hand side of \eqref{eq:bound_error} first.
Using the bound on the Walsh coefficients in Lemma~\ref{lem:walsh_bound} and H\"older's inequality, we have
\begin{align*}
\sum_{\substack{\boldsymbol{k}\in P^{\perp}_{b^{m-\tau+1}}\setminus \{\boldsymbol{0}\}\\ \exists j\colon b^{m-\tau+1} \nmid k_j}} |\hat{f}(\boldsymbol{k})| & = \sum_{\emptyset \neq u\subseteq \{1,\ldots,s\}}\sum_{\substack{\boldsymbol{k}_u\in P^{\perp}_{b^{m-\tau+1},u}\setminus \{\boldsymbol{0}\}\\ \exists j\in u\colon b^{m-\tau+1}\nmid k_j}} |\hat{f}(\boldsymbol{k}_u,\boldsymbol{0})| \\
& \leq \sum_{\emptyset \neq u\subseteq \{1,\ldots,s\}}\|f_u\|_{s,\alpha,\boldsymbol{\gamma},q,r}\gamma_uC_{\alpha}^{|u|} \sum_{\substack{\boldsymbol{k}_u\in P^{\perp}_{b^{m-\tau+1},u}\setminus \{\boldsymbol{0}\}\\ \exists j\in u\colon b^{m-\tau+1}\nmid k_j}} b^{-\mu_{\alpha}(\boldsymbol{k}_u)} \\
& \leq \Bigg( \sum_{\emptyset \neq u\subseteq \{1,\ldots,s\}}\|f_u\|_{s,\alpha,\boldsymbol{\gamma},q,r}^r\Bigg)^{1/r} \\
& \qquad \times \Bigg( \sum_{\emptyset \neq u\subseteq \{1,\ldots,s\}}\Bigg(\gamma_uC_{\alpha}^{|u|} \sum_{\substack{\boldsymbol{k}_u\in P^{\perp}_{b^{m-\tau+1},u}\setminus \{\boldsymbol{0}\}\\ \exists j\in u\colon b^{m-\tau+1}\nmid k_j}} b^{-\mu_{\alpha}(\boldsymbol{k}_u)}\Bigg)^{r'}\Bigg)^{1/r'} \\
& \leq \|f\|_{s,\alpha,\boldsymbol{\gamma},q,r}B_{\boldsymbol{\gamma},r}(P_{b^{m-\tau+1}}).
\end{align*}
Regarding the bound on $R_{s,\alpha,b^{m-\tau+1}}$, it follows from Theorem~\ref{thm:euler-maclaurin} below that
\[ |R_{s,\alpha,b^{m-\tau+1}}| \leq \frac{\|f\|_{s,\alpha,\boldsymbol{\gamma},q,r} H_{s,\boldsymbol{\gamma},q,r}}{b^{\alpha(m-\tau+1)}}.\]
Plugging these bounds into the right-hand side of \eqref{eq:bound_error} and then taking the supremum among $f\in W_{s,\alpha,\boldsymbol{\gamma},q,r}$ such that $\|f\|_{s,\alpha,\boldsymbol{\gamma},q,r}\leq 1$, the result follows.
\end{proof}
\begin{remark}
As already pointed out in \cite{DKLNS14}, since we have $B_{\boldsymbol{\gamma},r}(P_{b^{m-\tau+1}})\leq B_{\boldsymbol{\gamma},\infty}(P_{b^{m-\tau+1}})$ and $H_{s,\boldsymbol{\gamma},q,r}\leq H_{s,\boldsymbol{\gamma},q,\infty}$ for any $r$, it is convenient to work with the upper bound obtained by setting $r=\infty$ and thus $r'=1$. In the rest of this paper, we always consider the case $r=\infty$. The bound $B_{\boldsymbol{\gamma}, r}$ is used below to construct good generating vectors for polynomial lattice rules, and the choice $r'=1$ simplifies its computation.
\end{remark}
\subsection{Euler-Maclaurin formula for regular grid quadrature}\label{subsec:euler-maclaurin}
Here we show the Euler-Maclaurin formula for $I(f;P_{\mathrm{grid},N})$, where
\[ P_{\mathrm{grid},N} = \left\{ \left( \frac{n_1}{N},\ldots,\frac{n_s}{N}\right)\in [0,1)^s\colon 0\leq n_1,\ldots,n_s< N\right\}. \]
As preparation, we prove the following lemma.
\begin{lemma}\label{lem:bernoulli_sum}
For $\tau, N\in \mathbb{N}$ and $x\in [0,1)$, we have
\[ \frac{1}{N}\sum_{n=0}^{N-1}b_{\tau}\left( \frac{n}{N}\right) = \frac{b_{\tau}}{N^{\tau}} \quad \text{and} \quad \frac{1}{N}\sum_{n=0}^{N-1}\tilde{b}_{\tau}\left( x-\frac{n}{N}\right) = \frac{\tilde{b}_{\tau}(Nx)}{N^{\tau}}.\]
\end{lemma}
\begin{proof}
For $\tau=1$, both equalities follow from a direct calculation, which we omit here.
In the following we assume $\tau\ge 2$.
By using the Fourier series of $b_{\tau}$, we have
\begin{align*}
\frac{1}{N}\sum_{n=0}^{N-1}b_{\tau}\left( \frac{n}{N}\right) & = \frac{1}{N}\sum_{n=0}^{N-1}\frac{-1}{(2\pi i)^{\tau}}\sum_{h\in \mathbb{Z}\setminus \{0\}}\frac{e^{2\pi i hn/N}}{h^{\tau}} \\
& = \frac{-1}{(2\pi i)^{\tau}}\sum_{h\in \mathbb{Z}\setminus \{0\}}\frac{1}{h^{\tau}}\left(\frac{1}{N}\sum_{n=0}^{N-1}e^{2\pi i hn/N}\right) \\
& = \frac{-1}{(2\pi i)^{\tau}}\sum_{\substack{h\in \mathbb{Z}\setminus \{0\}\\ N\mid h}}\frac{1}{h^{\tau}} = \frac{-1}{(2\pi i)^{\tau}}\sum_{h\in \mathbb{Z}\setminus \{0\}}\frac{1}{(hN)^{\tau}}= \frac{b_{\tau}}{N^{\tau}},
\end{align*}
which completes the proof of the first equality.
Since the second equality can be proven in exactly the same way by using the Fourier series of $\tilde{b}_{\tau}$, we omit the proof.
\end{proof}
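Both identities can also be checked exactly with rational arithmetic. A small Python verification for $\tau=2$ (illustration only; we assume the normalization $b_2(x)=B_2(x)/2!=(x^2-x+1/6)/2$ suggested by the Fourier series used in the proof, with $\tilde{b}_2$ the $1$-periodic extension of $b_2$):

```python
# Exact check of both identities of the lemma for tau = 2.
from fractions import Fraction

def b2(x):
    return (x * x - x + Fraction(1, 6)) / 2

def b2_tilde(x):            # 1-periodic extension (valid for tau >= 2)
    return b2(x - (x // 1))

x = Fraction(1, 3)
for N in (1, 2, 3, 4, 7, 16):
    # first identity: (1/N) sum_n b_2(n/N) = b_2 / N^2 with b_2 = b2(0)
    assert sum(b2(Fraction(n, N)) for n in range(N)) / N == b2(0) / N**2
    # second identity: (1/N) sum_n b~_2(x - n/N) = b~_2(N x) / N^2
    lhs = sum(b2_tilde(x - Fraction(n, N)) for n in range(N)) / N
    assert lhs == b2_tilde(N * x) / N**2
print("both identities hold exactly")
```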
As shown in Lemma~\ref{lem:func_represent}, we have the following pointwise representation for a function $f\in W_{s,\alpha,\boldsymbol{\gamma},q,r}$:
\begin{align}\label{eq:func_represent}
f(\boldsymbol{y}) & = \sum_{u\subseteq \{1,\ldots,s\}}\sum_{v\subseteq u}\sum_{\boldsymbol{\tau}_{u\setminus v}\in \{1,\ldots,\alpha\}^{|u\setminus v|}}\prod_{j\in u\setminus v}b_{\tau_j}(y_j) \nonumber \\
& \qquad \times (-1)^{(\alpha+1)|v|}\int_{[0,1)^s} f^{(\boldsymbol{\tau}_{u\setminus v},\boldsymbol{\alpha}_v,\boldsymbol{0})}(\boldsymbol{x}) \prod_{j\in v}\tilde{b}_{\alpha}(x_j-y_j) \, \mathrm{d} \boldsymbol{x}.
\end{align}
By using Lemma~\ref{lem:bernoulli_sum}, we obtain the Euler-Maclaurin formula for $I(f;P_{\mathrm{grid},N})$.
\begin{theorem}\label{thm:euler-maclaurin}
For $f\in W_{s,\alpha,\boldsymbol{\gamma},q,r}$, we have
\[ I(f;P_{\mathrm{grid},N}) = I(f)+\sum_{\tau=1}^{\alpha -1}\frac{c_{\tau}(f)}{N^{\tau}} + R_{s,\alpha,N}, \]
where $c_{\tau}(f)$ depends only on $f$ and $\tau$, and is given by
\[ c_{\tau}(f) = \sum_{\substack{\boldsymbol{\tau} \in \{0,1,\ldots,\alpha-1\}^{s}\\ |\boldsymbol{\tau}|_1=\tau}}\prod_{j=1}^{s}b_{\tau_j} \int_{[0,1)^{s}}f^{(\boldsymbol{\tau})}(\boldsymbol{x})\, \mathrm{d} \boldsymbol{x} \]
with $ |\boldsymbol{\tau}|_1 = \sum_{j=1}^{s}|\tau_j|$. Further we have
\[ |R_{s,\alpha,N}| \leq \frac{\|f\|_{s,\alpha,\boldsymbol{\gamma},q,r} H_{s,\boldsymbol{\gamma},q,r}}{N^{\alpha}},\]
where $H_{s,\boldsymbol{\gamma},q,r}$ is given as in Theorem~\ref{thm:bound_wrst_error}.
\end{theorem}
\begin{proof}
Plugging the representation \eqref{eq:func_represent} into $I(f;P_{\mathrm{grid},N})$ and using Lemma~\ref{lem:bernoulli_sum}, we have
\begin{align*}
I(f;P_{\mathrm{grid},N}) & = \frac{1}{N^s}\sum_{n_1=0}^{N-1}\cdots\sum_{n_s=0}^{N-1}f\left( \frac{n_1}{N},\ldots,\frac{n_s}{N}\right) \\
& = \sum_{u\subseteq \{1,\ldots,s\}}\sum_{v\subseteq u}\sum_{\boldsymbol{\tau}_{u\setminus v}\in \{1,\ldots,\alpha\}^{|u\setminus v|}}\prod_{j\in u\setminus v}\frac{1}{N}\sum_{n_j=0}^{N-1}b_{\tau_j}\left(\frac{n_j}{N}\right) \\
& \qquad \times (-1)^{(\alpha+1)|v|}\int_{[0,1)^s}f^{(\boldsymbol{\tau}_{u\setminus v},\boldsymbol{\alpha}_{v},\boldsymbol{0})}(\boldsymbol{x}) \prod_{j\in v}\frac{1}{N}\sum_{n_j=0}^{N-1}\tilde{b}_{\alpha}\left(x_j-\frac{n_j}{N}\right)\, \mathrm{d} \boldsymbol{x} \\
& = \sum_{u\subseteq \{1,\ldots,s\}}\sum_{v\subseteq u}\sum_{\boldsymbol{\tau}_{u\setminus v}\in \{1,\ldots,\alpha\}^{|u\setminus v|}}\frac{1}{N^{|\boldsymbol{\tau}_{u\setminus v}|_1+\alpha|v|}}\prod_{j\in u\setminus v}b_{\tau_j} \\
& \qquad \times (-1)^{(\alpha+1)|v|}\int_{[0,1)^{s}}f^{(\boldsymbol{\tau}_{u\setminus v},\boldsymbol{\alpha}_{v},\boldsymbol{0})}(\boldsymbol{x})\prod_{j\in v}\tilde{b}_{\alpha}(Nx_j)\, \mathrm{d} \boldsymbol{x}.
\end{align*}
Let us reorder the summands with respect to the value of $|\boldsymbol{\tau}_{u\setminus v}|_1+\alpha|v|$, which appears in the exponent of $N$. If $|\boldsymbol{\tau}_{u\setminus v}|_1+\alpha|v|=0$, we must have $u=v=\emptyset$ and the corresponding summand is nothing but $I(f)$. If $|\boldsymbol{\tau}_{u\setminus v}|_1+\alpha|v|=\tau$ with $1\leq \tau<\alpha$, we must have $v=\emptyset$ and thus
\begin{align*}
c_{\tau}(f) & = \sum_{\emptyset \neq u\subseteq \{1,\ldots,s\}}\sum_{\substack{\boldsymbol{\tau}_u\in \{1,\ldots,\alpha-1\}^{|u|}\\ |\boldsymbol{\tau}_u|_1=\tau}}\prod_{j\in u}b_{\tau_j} \int_{[0,1)^s}f^{(\boldsymbol{\tau}_u,\boldsymbol{0})}(\boldsymbol{x})\, \mathrm{d} \boldsymbol{x} \\
& = \sum_{\substack{\boldsymbol{\tau} \in \{0,1,\ldots,\alpha-1\}^{s}\\ |\boldsymbol{\tau}|_1=\tau}}\prod_{j=1}^{s}b_{\tau_j} \int_{[0,1)^{s}}f^{(\boldsymbol{\tau})}(\boldsymbol{x})\, \mathrm{d} \boldsymbol{x}.
\end{align*}
The other summands have the exponents $|\boldsymbol{\tau}_{u\setminus v}|_1+\alpha|v| \geq \alpha$ and belong to $R_{s,\alpha,N}$.
Next we prove the bound on $R_{s,\alpha,N}$.
From the above argument, it is obvious that $R_{s,\alpha,N}$ is bounded by
\begin{align*}
|R_{s,\alpha,N}| & \leq \frac{1}{N^{\alpha}}\Bigg|\sum_{u\subseteq \{1,\ldots,s\}}\sum_{v\subseteq u}\sum_{\boldsymbol{\tau}_{u\setminus v}\in \{1,\ldots,\alpha\}^{|u\setminus v|}}\prod_{j\in u\setminus v}b_{\tau_j} \\
& \qquad \times (-1)^{(\alpha+1)|v|}\int_{[0,1)^{s}}f^{(\boldsymbol{\tau}_{u\setminus v},\boldsymbol{\alpha}_{v},\boldsymbol{0})}(\boldsymbol{x})\prod_{j\in v}\tilde{b}_{\alpha}(Nx_j)\, \mathrm{d} \boldsymbol{x} \Bigg| .
\end{align*}
By applying H\"older's inequality, we have
\begin{align*}
& \Bigg|\int_{[0,1)^{s}}f^{(\boldsymbol{\tau}_{u\setminus v},\boldsymbol{\alpha}_{v},\boldsymbol{0})}(\boldsymbol{x})\prod_{j\in v}\tilde{b}_{\alpha}(Nx_j)\, \mathrm{d} \boldsymbol{x} \Bigg| \\
& \quad \leq \int_{[0,1)^{|v|}}\Bigg|\int_{[0,1)^{s-|v|}} f^{(\boldsymbol{\tau}_{u\setminus v},\boldsymbol{\alpha}_{v},\boldsymbol{0})}(\boldsymbol{x})\, \mathrm{d} \boldsymbol{x}_{-v}\Bigg| \cdot \Bigg|\prod_{j\in v}\tilde{b}_{\alpha}(Nx_j)\Bigg|\, \mathrm{d} \boldsymbol{x}_v \\
& \quad \leq D_{\alpha}^{|v|}\Bigg(\int_{[0,1)^{|v|}}\Bigg|\int_{[0,1)^{s-|v|}} f^{(\boldsymbol{\tau}_{u\setminus v},\boldsymbol{\alpha}_{v},\boldsymbol{0})}(\boldsymbol{x})\, \mathrm{d} \boldsymbol{x}_{-v}\Bigg| ^q\, \mathrm{d} \boldsymbol{x}_v\Bigg)^{1/q},
\end{align*}
for $1\leq q\leq \infty$. Using the above inequality and H\"older's inequality twice, we obtain
\begin{align*}
& |R_{s,\alpha,N}| \\
& \leq \frac{1}{N^{\alpha}}\sum_{u\subseteq \{1,\ldots,s\}}\sum_{v\subseteq u}\sum_{\boldsymbol{\tau}_{u\setminus v}\in \{1,\ldots,\alpha\}^{|u\setminus v|}}D_{\alpha}^{|u|} \\
& \qquad \times \Bigg(\int_{[0,1)^{|v|}}\Bigg|\int_{[0,1)^{s-|v|}} f^{(\boldsymbol{\tau}_{u\setminus v},\boldsymbol{\alpha}_{v},\boldsymbol{0})}(\boldsymbol{x})\, \mathrm{d} \boldsymbol{x}_{-v}\Bigg| ^q\, \mathrm{d} \boldsymbol{x}_v\Bigg)^{1/q} \\
& \leq \frac{1}{N^{\alpha}}\sum_{u\subseteq \{1,\ldots,s\}}\Bigg(\sum_{v\subseteq u}\sum_{\boldsymbol{\tau}_{u\setminus v}\in \{1,\ldots,\alpha\}^{|u\setminus v|}}\gamma_u^{q'}D_{\alpha}^{q'|u|}\Bigg)^{1/q'} \\
& \qquad \times \Bigg(\sum_{v\subseteq u}\sum_{\boldsymbol{\tau}_{u\setminus v}\in \{1,\ldots,\alpha\}^{|u\setminus v|}}\gamma_u^{-q}\int_{[0,1)^{|v|}}\Bigg|\int_{[0,1)^{s-|v|}} f^{(\boldsymbol{\tau}_{u\setminus v},\boldsymbol{\alpha}_{v},\boldsymbol{0})}(\boldsymbol{x})\, \mathrm{d} \boldsymbol{x}_{-v}\Bigg| ^q\, \mathrm{d} \boldsymbol{x}_v\Bigg)^{1/q} \\
& \leq \frac{1}{N^{\alpha}}\Bigg(\sum_{u\subseteq \{1,\ldots,s\}}\gamma_u^{r'}(\alpha+1)^{|u|r'/q'}D_{\alpha}^{r'|u|}\Bigg)^{1/r'} \\
& \qquad \times \Bigg(\sum_{u\subseteq \{1,\ldots,s\}} \Bigg(\gamma_u^{-q}\sum_{v\subseteq u}\sum_{\boldsymbol{\tau}_{u\setminus v}\in \{1,\ldots,\alpha\}^{|u\setminus v|}} \\
& \qquad \qquad \qquad \int_{[0,1)^{|v|}}\Bigg|\int_{[0,1)^{s-|v|}} f^{(\boldsymbol{\tau}_{u\setminus v},\boldsymbol{\alpha}_{v},\boldsymbol{0})}(\boldsymbol{x})\, \mathrm{d} \boldsymbol{x}_{-v}\Bigg| ^q\, \mathrm{d} \boldsymbol{x}_v\Bigg)^{r/q}\Bigg)^{1/r} \\
& = \frac{\|f\|_{s,\alpha,\boldsymbol{\gamma},q,r}H_{s,\boldsymbol{\gamma},q,r}}{N^{\alpha}}.
\end{align*}
This completes the proof of this theorem.
\end{proof}
\subsection{Existence results}\label{subsec:existence}
Here we prove the existence of good extrapolated polynomial lattice rules which achieve the almost optimal order of convergence.
Since each point set $P_{b^{m-\tau+1}}$ can be constructed independently, it suffices to prove the existence of a good polynomial lattice rule of size $b^m$ which achieves the almost optimal order of decay of the term $B_{\boldsymbol{\gamma},\infty}(P_{b^m})$ for any $m\in \mathbb{N}$.
In order to emphasize the role of the modulus $p$ and generating vector $\boldsymbol{q}$, instead of $B_{\boldsymbol{\gamma},\infty}(P_{b^m})$ we write
\[ B_{\boldsymbol{\gamma}}(p,\boldsymbol{q}) = \sum_{\emptyset \neq u\subseteq \{1,\ldots,s\}}\gamma_uC_{\alpha}^{|u|}\sum_{\substack{\boldsymbol{k}_u\in P_u^{\perp}(p,\boldsymbol{q})\\ \exists j\in u\colon b^m\nmid k_j}}b^{-\mu_{\alpha}(\boldsymbol{k}_u)}, \]
where $m=\deg(p)$.
First we recall the following auxiliary result. See \cite[Lemma~7]{G16} for the proof.
\begin{lemma}\label{lem:sum_mu_alpha}
For $\alpha\geq 2$ and $1/\alpha<\lambda\leq 1$, we have
\[ \sum_{k=1}^{\infty}b^{-\lambda\mu_{\alpha}(k)} = \sum_{w=1}^{\alpha-1}\prod_{i=1}^{w}\left( \frac{b-1}{b^{\lambda i}-1}\right)+ \left( \frac{b^{\lambda \alpha}-1}{b^{\lambda \alpha}-b}\right)\prod_{i=1}^{\alpha}\left( \frac{b-1}{b^{\lambda i}-1}\right) =: E_{\alpha,\lambda}. \]
\end{lemma}
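As a quick numerical sanity check (illustration only), the closed form $E_{\alpha,\lambda}$ can be compared against a truncation of the series, here for $b=2$, $\alpha=2$ and $\lambda=1$, where $E_{2,1}=3/2$:

```python
# Truncated check of the lemma: sum over k < 2^K of b^(-lambda*mu_alpha(k))
# should approach E_{alpha,lambda}; the tail is of order K * 2^(-K).
from math import prod

def mu(k, b, alpha):
    """mu_alpha(k): sum of the alpha largest positions of nonzero base-b digits."""
    pos, i = [], 1
    while k:
        if k % b:
            pos.append(i)
        k //= b
        i += 1
    return sum(sorted(pos)[-alpha:])

def E(alpha, lam, b):
    first = sum(prod((b - 1) / (b**(lam * i) - 1) for i in range(1, w + 1))
                for w in range(1, alpha))
    last = ((b**(lam * alpha) - 1) / (b**(lam * alpha) - b)
            * prod((b - 1) / (b**(lam * i) - 1) for i in range(1, alpha + 1)))
    return first + last

b, alpha, lam, K = 2, 2, 1.0, 16
truncated = sum(b**(-lam * mu(k, b, alpha)) for k in range(1, b**K))
print(truncated, E(alpha, lam, b))   # the two values agree to about 1e-4
```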
Now we prove the existence result.
\begin{theorem}\label{thm:existence}
Let $p\in \mathbb{F}_b[x]$ with $\deg(p)=m$ be irreducible.
For a set of weights $\boldsymbol{\gamma}=(\gamma_u)_{u\subset \mathbb{N}}$, there exists at least one $\boldsymbol{q}^*=(q_1^*,\ldots,q_s^*)\in (G^*_{b,m})^s$ such that
\begin{align*}
B_{\boldsymbol{\gamma}}(p,\boldsymbol{q}^*) \leq \frac{1}{(b^m-1)^{1/\lambda}}\left[ \sum_{\emptyset \neq u\subseteq \{1,\ldots,s\}}\gamma_u^{\lambda}C_{\alpha}^{\lambda|u|}E_{\alpha,\lambda}^{|u|}\right]^{1/\lambda}
\end{align*}
holds for any $1/\alpha<\lambda\leq 1$.
\end{theorem}
\begin{proof}
Let $\boldsymbol{q}^*$ be given by
\[ \boldsymbol{q}^* = \arg\min_{\boldsymbol{q} \in (G^*_{b,m})^s}B_{\boldsymbol{\gamma}}(p,\boldsymbol{q}). \]
Using Jensen's inequality, for any $1/\alpha < \lambda\leq 1$ we have
\begin{align*}
(B_{\boldsymbol{\gamma}}(p,\boldsymbol{q}^*))^{\lambda} & \leq \frac{1}{(b^m-1)^s}\sum_{\boldsymbol{q} \in (G^*_{b,m})^s}(B_{\boldsymbol{\gamma}}(p,\boldsymbol{q}))^{\lambda} \\
& \leq \frac{1}{(b^m-1)^s}\sum_{\boldsymbol{q} \in (G^*_{b,m})^s}\sum_{\emptyset \neq u\subseteq \{1,\ldots,s\}}\gamma_u^{\lambda}C_{\alpha}^{\lambda |u|}\sum_{\substack{\boldsymbol{k}_u\in P_u^{\perp}(p,\boldsymbol{q})\\ \exists j\in u\colon b^m\nmid k_j}}b^{-\lambda \mu_{\alpha}(\boldsymbol{k}_u)} \\
& = \sum_{\emptyset \neq u\subseteq \{1,\ldots,s\}}\gamma_u^{\lambda}C_{\alpha}^{\lambda |u|}\sum_{\substack{\boldsymbol{k}_u\in \mathbb{N}^{|u|}\\ \exists j\in u\colon b^m\nmid k_j}}b^{-\lambda \mu_{\alpha}(\boldsymbol{k}_u)}\\
& \qquad \times \frac{1}{(b^m-1)^{|u|}}\sum_{\substack{\boldsymbol{q}_u \in (G^*_{b,m})^{|u|}\\ \mathrm{tr}_m(\boldsymbol{k}_u)\cdot \boldsymbol{q}_u=0 \pmod p}}1.
\end{align*}
If there exists at least one component $k_j$ with $j\in u$ such that $b^m\nmid k_j$, the number of $\boldsymbol{q}_u\in (G^*_{b,m})^{|u|}$ that satisfy $\mathrm{tr}_m(\boldsymbol{k}_u)\cdot \boldsymbol{q}_u=0\pmod p$ is $(b^m-1)^{|u|-1}$. Thus we obtain
\begin{align*}
(B_{\boldsymbol{\gamma}}(p,\boldsymbol{q}^*))^{\lambda} & \leq \frac{1}{b^m-1}\sum_{\emptyset \neq u\subseteq \{1,\ldots,s\}}\gamma_u^{\lambda}C_{\alpha}^{\lambda |u|}\sum_{\substack{\boldsymbol{k}_u\in \mathbb{N}^{|u|}\\ \exists j\in u\colon b^m\nmid k_j}}b^{-\lambda \mu_{\alpha}(\boldsymbol{k}_u)} \\
& \leq \frac{1}{b^m-1}\sum_{\emptyset \neq u\subseteq \{1,\ldots,s\}}\gamma_u^{\lambda}C_{\alpha}^{\lambda |u|}E_{\alpha,\lambda}^{|u|}.
\end{align*}
This completes the proof.
\end{proof}
\subsection{Dependence of the upper bound on the dimension}\label{subsec:dependence}
Here we study the dependence of the worst-case error bound on the dimension.
For $1/\alpha < \lambda<1$, we write
\[ J_{s,\lambda,\boldsymbol{\gamma}}=\left[ \sum_{\emptyset \neq u\subseteq \{1,\ldots,s\}}\gamma_u^{\lambda}C_{\alpha}^{\lambda|u|}E_{\alpha,\lambda}^{|u|}\right]^{1/\lambda}. \]
From Theorem~\ref{thm:bound_wrst_error} together with Theorem~\ref{thm:existence}, we have
\begin{align*}
& \sup_{\substack{f\in W_{s,\alpha,\boldsymbol{\gamma},q,\infty}\\ \|f\|_{s,\alpha,\boldsymbol{\gamma},q,\infty}\leq 1}}\left| \sum_{\tau=1}^{\alpha}a_{\tau}^{(\alpha)} I(f;P_{b^{m-\tau+1}}) -I(f)\right| \\
& \qquad \leq \sum_{\tau=1}^{\alpha}|a_{\tau}^{(\alpha)}| \left( \frac{J_{s,\lambda,\boldsymbol{\gamma}}}{(b^{m-\tau+1}-1)^{1/\lambda}}+\frac{H_{s,\boldsymbol{\gamma},q,\infty}}{b^{\alpha(m-\tau+1)}}\right) \\
& \qquad \leq \sum_{\tau=1}^{\alpha}|a_{\tau}^{(\alpha)}| \frac{b^{\tau/\lambda}J_{s,\lambda,\boldsymbol{\gamma}}+b^{\alpha(\tau-1)}H_{s,\boldsymbol{\gamma},q,\infty}}{(b^m-1)^{1/\lambda}} \leq \alpha |a_{1}^{(\alpha)}|\frac{b^{\alpha/\lambda}J_{s,\lambda,\boldsymbol{\gamma}}+b^{\alpha(\alpha-1)}H_{s,\boldsymbol{\gamma},q,\infty}}{(b^m-1)^{1/\lambda}},
\end{align*}
for any $1/\alpha < \lambda<1$. Here we recall
\[ H_{s,\boldsymbol{\gamma},q,\infty} = \sum_{u\subseteq \{1,\ldots,s\}}\gamma_u(\alpha+1)^{|u|/q'}D_{\alpha}^{|u|}. \]
The dependence of the upper bound on the dimension can be stated as follows.
\begin{corollary}\label{cor:dependence}
Let $\alpha > 1$ be an integer and $N = b^m + b^{m-1} + \cdots + b^{m-\alpha+1}$ be the number of function evaluations used in the extrapolated polynomial lattice rule.
\begin{enumerate}
\item For general weights, assume that
\[ \lim_{s\to \infty}J_{s,\lambda,\boldsymbol{\gamma}}<\infty\quad \text{and}\quad \lim_{s\to \infty}H_{s,\boldsymbol{\gamma},q,\infty} <\infty, \]
for some $1/\alpha<\lambda\leq 1$. Then the worst-case error for extrapolated polynomial lattice rules converges with order $\mathcal{O}(N^{-1/\lambda})$ with the constant bounded independently of the dimension.
\item For general weights, assume that there exists a positive real $t$ such that
\[ \limsup_{s\to \infty}\frac{J_{s,\lambda,\boldsymbol{\gamma}}}{s^t}<\infty\quad \text{and}\quad \limsup_{s\to \infty}\frac{H_{s,\boldsymbol{\gamma},q,\infty}}{s^t} <\infty \]
hold for some $1/\alpha<\lambda\leq 1$. Then the worst-case error bound for extrapolated polynomial lattice rules converges with order $\mathcal{O}(N^{-1/\lambda})$ with the constant depending polynomially on the dimension.
\item For product weights $\gamma_u=\prod_{j\in u}\gamma_j$, assume that
\[ \sum_{j=1}^{\infty}\gamma_j^{\lambda}<\infty, \]
for some $1/\alpha<\lambda\leq 1$. Then the worst-case error for extrapolated polynomial lattice rules converges with order $\mathcal{O}(N^{-1/\lambda})$ with the constant bounded independently of the dimension.
\item For product weights $\gamma_u=\prod_{j\in u}\gamma_j$, assume that
\[ \limsup_{s\to \infty}\frac{\sum_{j=1}^{s}\gamma_j^{\lambda}}{\log (s+1)}<\infty, \]
for some $1/\alpha<\lambda\leq 1$. Then the worst-case error bound for extrapolated polynomial lattice rules converges with order $\mathcal{O}(N^{-1/\lambda})$ with the constant depending polynomially on the dimension.
\end{enumerate}
\end{corollary}
\begin{proof}
The results for general weights follow immediately.
The proof of the results for product weights can be completed by following essentially the same argument as in \cite[Proof of Theorem~5.3]{DP07}.
\end{proof}
\begin{remark}
For product weights, good extrapolated polynomial lattice rules can be constructed as discussed in the next section. As can be seen from the error bound obtained in Theorem~\ref{thm:cbc}, if the same condition as in Item~3 or~4 of Corollary~\ref{cor:dependence} holds, exactly the same statements on the dimension dependence of the worst-case error bound carry over.
\end{remark}
\section{Component-by-component construction}\label{sec:cbc}
\subsection{Convergence analysis}
Here we only consider the case of product weights and prove that the CBC construction algorithm can find a good polynomial lattice rule which achieves the almost optimal order bound on the criterion $B_{\boldsymbol{\gamma}}(p,\boldsymbol{q})$. Remark~\ref{rem_prod} below points out the challenge in generalizing the result to general weights.
The CBC construction algorithm proceeds as follows:
\begin{algorithm}
\label{alg:cbc}
Let $m,s\in \mathbb{N}$, $\alpha\geq 2$ and weights $\boldsymbol{\gamma}=(\gamma_j)_{j\in \mathbb{N}}$ be given.
\begin{enumerate}
\item Choose an irreducible polynomial $p\in \mathbb{F}_b[x]$ with $\deg(p)=m$.
\item Set $q_1^*=1$.
\item For $2\leq d \leq s$, find $q_d^*\in G^*_{b,m}$ which minimizes $$B_{\boldsymbol{\gamma}}(p,(q^*_1,\ldots,q^*_{d-1},q_d))=\sum_{\emptyset \neq u\subseteq \{1,\ldots,d \}}\gamma_uC_{\alpha}^{|u|}\sum_{\substack{\boldsymbol{k}_u\in P_u^{\perp}(p,(q^*_1,\ldots,q^*_{d-1},q_d))\\ \exists j\in u\colon b^m\nmid k_j}}b^{-\mu_{\alpha}(\boldsymbol{k}_u)}$$ as a function of $q_d$.
\end{enumerate}
\end{algorithm}
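The greedy structure of Algorithm~\ref{alg:cbc} can be summarised in a short sketch. The following Python code (base $b=2$, with polynomials over $\mathbb{F}_2$ encoded as integer bitmasks, and a user-supplied \texttt{criterion} standing in for $B_{\boldsymbol{\gamma}}$, whose computable form is derived in Section~\ref{sec_fast}) is illustrative only, not a production implementation:

```python
# Structural sketch of the CBC search, assuming a callable
# `criterion(p, qs)` that scores a partial generating vector
# (q_1, ..., q_d); here p is the modulus polynomial as a bitmask.

def cbc_search(p, m, s, criterion):
    """Greedy component-by-component minimisation of the quality criterion.

    p         : irreducible modulus polynomial, deg(p) = m (bitmask encoding)
    criterion : callable scoring a partial generating vector
    Returns the generating vector (q_1*, ..., q_s*).
    """
    # G*_{b,m}: nonzero polynomials of degree < m, encoded as 1 .. 2^m - 1
    candidates = range(1, 2**m)
    qs = [1]                      # Step 2: q_1* = 1
    for _ in range(2, s + 1):     # Step 3: extend one component at a time
        best = min(candidates, key=lambda q: criterion(p, qs + [q]))
        qs.append(best)
    return qs
```

Note that earlier components are never revisited: each step minimises the criterion only over the newest component, which is exactly the property exploited in the induction of Theorem~\ref{thm:cbc}.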
\noindent In Section~\ref{sec_fast} we simplify the formula for $B_{\boldsymbol{\gamma}}(p,(q^*_1,\ldots,q^*_{d-1},q_d))$ to obtain a criterion which can be computed efficiently.
\begin{theorem}\label{thm:cbc}
Let $p\in \mathbb{F}_b[x]$ with $\deg(p)=m$ and $\boldsymbol{q}_s^*=(q_1^*,\ldots,q_s^*)\in (G^*_{b,m})^s$ be found by Algorithm~\ref{alg:cbc}. Then for $1\leq d \leq s$ we have
\begin{align*}
B_{\boldsymbol{\gamma}}(p,\boldsymbol{q}_d^*)\leq \frac{1}{(b^m-1)^{1/\lambda}}\prod_{j=1}^{d}\left[ 1+ \gamma_j^{\lambda} C_{\alpha}^{\lambda}E_{\alpha,\lambda}\right]^{1/\lambda}
\end{align*}
holds for any $1/\alpha<\lambda\leq 1$.
\end{theorem}
\begin{proof}
Without loss of generality, we assume that the modulus $p$ is monic.
We prove the theorem by induction on $d$.
First let $d=1$.
Since we assume $q_1^*=1$, the dual polynomial lattice is given by
\[ P^{\perp}(p,1) = \{k\in \mathbb{N}_0\colon \mathrm{tr}_m(k)=0\pmod p\} = \{k\in \mathbb{N}_0\colon b^m\mid k\}. \]
Thus we have
\[ B_{\boldsymbol{\gamma}}(p,1)= C_{\alpha}\gamma_1 \sum_{\substack{k\in P^{\perp}(p,1)\setminus \{0\}\\ b^m\nmid k}}b^{-\mu_{\alpha}(k)}=0 \leq \frac{1}{(b^m-1)^{1/\lambda}}\left(1+\gamma^{\lambda}_1C^{\lambda}_{\alpha}E_{\alpha,\lambda}\right)^{1/\lambda}, \]
for any $1/\alpha<\lambda\leq 1$.
Next suppose that we have already found the first $d-1$ components of the generating vector $\boldsymbol{q}_{d-1}^*=(q^*_1,\ldots,q^*_{d-1})\in (G^*_{b,m})^{d-1}$ such that
\begin{align*}
B_{\boldsymbol{\gamma}}(p,\boldsymbol{q}_{d-1}^*)\leq \frac{1}{(b^m-1)^{1/\lambda}}\prod_{j=1}^{d-1}\left[ 1+ \gamma_j^{\lambda} C_{\alpha}^{\lambda}E_{\alpha,\lambda}\right]^{1/\lambda}
\end{align*}
holds for any $1/\alpha<\lambda\leq 1$.
Putting $\boldsymbol{q}_d=(\boldsymbol{q}_{d-1}^*,q_d)$ with $q_d \in G^*_{b,m}$ we have
\begin{align}
B_{\boldsymbol{\gamma}}(p,\boldsymbol{q}_d) & = \sum_{\emptyset \neq u\subseteq \{1,\ldots,d-1\}}\gamma_u C_{\alpha}^{|u|}\sum_{\substack{\boldsymbol{k}_u\in P_u^{\perp}(p,\boldsymbol{q}_d)\\ \exists j\in u\colon b^m\nmid k_j}}b^{-\mu_{\alpha}(\boldsymbol{k}_u)} \nonumber \\
& \qquad + \sum_{\emptyset \neq u\subseteq \{1,\ldots,d-1\}}\gamma_{u\cup\{d\}} C_{\alpha}^{|u|+1}\sum_{\substack{\boldsymbol{k}_{u\cup\{d\}}\in P_{u\cup\{d\}}^{\perp}(p,\boldsymbol{q}_d)\\ \exists j\in u\colon b^m\nmid k_j\\ b^m\mid k_d}}b^{-\mu_{\alpha}(\boldsymbol{k}_{u\cup\{d\}})} \nonumber \\
& \qquad + \sum_{u\subseteq \{1,\ldots,d-1\}}\gamma_{u\cup\{d\}}C_{\alpha}^{|u|+1}\sum_{\substack{\boldsymbol{k}_{u\cup\{d\}}\in P_{u\cup\{d\}}^{\perp}(p,\boldsymbol{q}_d)\\ b^m\nmid k_d}}b^{-\mu_{\alpha}(\boldsymbol{k}_{u\cup\{d\}})} \nonumber \\
& = B_{\boldsymbol{\gamma}}(p,\boldsymbol{q}^*_{d-1}) + \sum_{\emptyset \neq u\subseteq \{1,\ldots,d-1\}}\gamma_{u\cup\{d\}}C_{\alpha}^{|u|+1}\sum_{\substack{\boldsymbol{k}_u\in P_u^{\perp}(p,\boldsymbol{q}_{d-1}^*)\\ \exists j\in u\colon b^m\nmid k_j}}\sum_{\substack{k_d\in \mathbb{N}\\ b^m\mid k_d}}b^{-\mu_{\alpha}(\boldsymbol{k}_u,k_d)} \nonumber \\
& \qquad + \sum_{u\subseteq \{1,\ldots,d-1\}}\gamma_{u\cup\{d\}}C_{\alpha}^{|u|+1}\sum_{\substack{\boldsymbol{k}_{u\cup\{d\}}\in P_{u\cup\{d\}}^{\perp}(p,\boldsymbol{q}_d)\\ b^m\nmid k_d}}b^{-\mu_{\alpha}(\boldsymbol{k}_{u\cup\{d\}})} \nonumber \\
& = B_{\boldsymbol{\gamma}}(p,\boldsymbol{q}^*_{d-1}) \left(1+ \gamma_d C_{\alpha}\sum_{\substack{k_d\in \mathbb{N}\\ b^m\mid k_d}}b^{-\mu_{\alpha}(k_d)}\right) \nonumber \\
& \qquad + \sum_{u\subseteq \{1,\ldots,d-1\}}\gamma_{u\cup\{d\}}C_{\alpha}^{|u|+1}\sum_{\substack{\boldsymbol{k}_{u\cup\{d\}}\in P_{u\cup\{d\}}^{\perp}(p,\boldsymbol{q}_d)\\ b^m\nmid k_d}}b^{-\mu_{\alpha}(\boldsymbol{k}_{u\cup\{d\}})} , \label{eq:decomp_error}
\end{align}
where the second equality stems from the fact that since $b^m\mid k_d$, we have $\mathrm{tr}_m(k_d)=0$ and thus $\mathrm{tr}_m(\boldsymbol{k}_{u\cup \{d\}})\cdot (\boldsymbol{q}_u^*, q_d)=\mathrm{tr}_m(\boldsymbol{k}_u)\cdot \boldsymbol{q}_u^*$, which yields
\[ \{\boldsymbol{k}_{u\cup \{d\}}\in P_{u\cup \{d\}}^{\perp}(p,\boldsymbol{q}_d)\colon b^m\mid k_d\}= \{(\boldsymbol{k}_u,k_d)\in \mathbb{N}^{|u|+1}\colon \boldsymbol{k}_u\in P_u^{\perp}(p,\boldsymbol{q}_{d-1}^*), b^m \mid k_d\}. \]
It is clear that the first term of \eqref{eq:decomp_error} does not depend on the choice of $q_d$.
Thus denoting the second term of \eqref{eq:decomp_error} by
\[ \psi_{p,\boldsymbol{q}_{d-1}^*}(q_d):= \sum_{u\subseteq \{1,\ldots,d-1\}}\gamma_{u\cup\{d\}}C_{\alpha}^{|u|+1}\sum_{\substack{\boldsymbol{k}_{u\cup\{d\}}\in P_{u\cup\{d\}}^{\perp}(p,\boldsymbol{q}_d)\\ b^m\nmid k_d}}b^{-\mu_{\alpha}(\boldsymbol{k}_{u\cup\{d\}})}, \]
we have
\[ q_d^*=\arg\min_{q_d\in G^*_{b,m}} B_{\boldsymbol{\gamma}}(p,\boldsymbol{q}_d) = \arg\min_{q_d\in G^*_{b,m}} \psi_{p,\boldsymbol{q}_{d-1}^*}(q_d) . \]
Using Jensen's inequality, as long as $1/\alpha<\lambda\leq 1$, we have
\begin{align*}
& (\psi_{p,\boldsymbol{q}_{d-1}^*}(q_d^*))^{\lambda} \\
& \leq \frac{1}{b^m-1}\sum_{q_d\in G^*_{b,m}}(\psi_{p,\boldsymbol{q}_{d-1}^*}(q_d))^{\lambda} \\
& \leq \frac{1}{b^m-1}\sum_{q_d\in G^*_{b,m}}\sum_{u\subseteq \{1,\ldots,d-1\}}\gamma^{\lambda}_{u\cup\{d\}}C_{\alpha}^{\lambda(|u|+1)}\sum_{\substack{\boldsymbol{k}_{u\cup\{d\}}\in P_{u\cup\{d\}}^{\perp}(p,\boldsymbol{q}_d)\\ b^m\nmid k_d}}b^{-\lambda\mu_{\alpha}(\boldsymbol{k}_{u\cup\{d\}})} \\
& = \frac{1}{b^m-1}\sum_{u\subseteq \{1,\ldots,d-1\}}\gamma^{\lambda}_{u\cup\{d\}}C_{\alpha}^{\lambda(|u|+1)}\sum_{\substack{\boldsymbol{k}_{u\cup\{d\}}\in \mathbb{N}^{|u|+1}\\ b^m\nmid k_d}}b^{-\lambda\mu_{\alpha}(\boldsymbol{k}_{u\cup\{d\}})} \\
& \qquad \times \sum_{\substack{q_d\in G^*_{b,m}\\ \mathrm{tr}_m(\boldsymbol{k}_u)\cdot \boldsymbol{q}_u^{*}+\mathrm{tr}_m(k_d)q_d = 0\pmod p}}1.
\end{align*}
Since $b^m\nmid k_d$, we have $\mathrm{tr}_m(k_d)\neq 0$.
For $\boldsymbol{k}_u\in P_u^{\perp}(p,\boldsymbol{q}_{d-1}^*)$, it follows from the definition of the dual polynomial lattice that $\mathrm{tr}_m(\boldsymbol{k}_u)\cdot \boldsymbol{q}_u^{*}=0\pmod p$, and thus there is no polynomial $q_d\in G^*_{b,m}$ such that the condition $\mathrm{tr}_m(k_d)q_d = 0\pmod p$ is satisfied.
For $\boldsymbol{k}_u\notin P_u^{\perp}(p,\boldsymbol{q}_{d-1}^*)$, there exists exactly one $q_d\in G^*_{b,m}$ such that $\mathrm{tr}_m(k_d)q_d = -\mathrm{tr}_m(\boldsymbol{k}_u)\cdot \boldsymbol{q}_u^{*} \pmod p$. From these facts and Lemma~\ref{lem:sum_mu_alpha}, we obtain
\begin{align*}
(\psi_{p,\boldsymbol{q}_{d-1}^*}(q_d^*))^{\lambda} & \leq \frac{1}{b^m-1}\sum_{u\subseteq \{1,\ldots,d-1\}}\gamma^{\lambda}_{u\cup\{d\}}C_{\alpha}^{\lambda(|u|+1)}\sum_{\substack{\boldsymbol{k}_u\in \mathbb{N}^{|u|}\\ \boldsymbol{k}_u\notin P_u^{\perp}(p,\boldsymbol{q}_{d-1}^*)}}\sum_{\substack{k_d\in \mathbb{N}\\ b^m\nmid k_d}}b^{-\lambda\mu_{\alpha}(\boldsymbol{k}_u,k_d)} \\
& \leq \frac{1}{b^m-1}\sum_{u\subseteq \{1,\ldots,d-1\}}\gamma^{\lambda}_{u\cup\{d\}}C_{\alpha}^{\lambda(|u|+1)}\sum_{\boldsymbol{k}_u\in \mathbb{N}^{|u|}}b^{-\lambda\mu_{\alpha}(\boldsymbol{k}_u)}\sum_{\substack{k_d\in \mathbb{N}\\ b^m\nmid k_d}}b^{-\lambda\mu_{\alpha}(k_d)} \\
& = \frac{1}{b^m-1}\prod_{j=1}^{d-1}\left[ 1+ \gamma_j^{\lambda} C_{\alpha}^{\lambda}E_{\alpha,\lambda}\right]\cdot \gamma_d^{\lambda} C_{\alpha}^{\lambda}\sum_{\substack{k_d\in \mathbb{N}\\ b^m\nmid k_d}}b^{-\lambda\mu_{\alpha}(k_d)}.
\end{align*}
Finally by applying Jensen's inequality to \eqref{eq:decomp_error} and using Lemma~\ref{lem:sum_mu_alpha}, we have
\begin{align*}
(B_{\boldsymbol{\gamma}}(p,\boldsymbol{q}^*_d))^{\lambda} & \leq (B_{\boldsymbol{\gamma}}(p,\boldsymbol{q}^*_{d-1}))^{\lambda}\left(1+ \gamma_d^{\lambda}C^{\lambda}_{\alpha}\sum_{\substack{k_d\in \mathbb{N}\\ b^m\mid k_d}}b^{-\lambda\mu_{\alpha}(k_d)}\right) \\
& \qquad + \frac{1}{b^m-1}\prod_{j=1}^{d-1}\left[ 1+ \gamma_j^{\lambda} C_{\alpha}^{\lambda}E_{\alpha,\lambda}\right]\cdot \gamma_d^{\lambda} C_{\alpha}^{\lambda}\sum_{\substack{k_d\in \mathbb{N}\\ b^m\nmid k_d}}b^{-\lambda\mu_{\alpha}(k_d)} \\
& \leq \frac{1}{b^m-1}\prod_{j=1}^{d-1}\left[ 1+ \gamma_j^{\lambda} C_{\alpha}^{\lambda}E_{\alpha,\lambda}\right] \cdot \left[ 1+\gamma_d^{\lambda}C^{\lambda}_{\alpha}\sum_{k_d\in \mathbb{N}}b^{-\lambda\mu_{\alpha}(k_d)}\right] \\
& = \frac{1}{b^m-1}\prod_{j=1}^{d}\left[ 1+ \gamma_j^{\lambda} C_{\alpha}^{\lambda}E_{\alpha,\lambda}\right] .
\end{align*}
This completes the proof.
\end{proof}
\begin{remark}\label{rem_prod}
In the above proof, we use the property of product weights to obtain the equality \eqref{eq:decomp_error}.
In fact, this is a crucial step to get the almost optimal order upper bound on $B_{\boldsymbol{\gamma}}(p,\boldsymbol{q})$.
Thus it is an open question whether a similar proof goes through for general weights.
\end{remark}
\subsection{Fast construction algorithm}\label{sec_fast}
In the convergence analysis above, we used the criterion $B_{\boldsymbol{\gamma}}(p,\boldsymbol{q}_d)$.
However, since the quantity
\[ \sum_{\emptyset \neq u\subseteq \{1,\ldots,d\}}\gamma_uC_{\alpha}^{|u|}\sum_{\substack{\boldsymbol{k}_u\in P_u^{\perp}(p,\boldsymbol{q}_d)\\ \forall j\in u\colon b^m\mid k_j}}b^{-\mu_{\alpha}(\boldsymbol{k}_u)} = \sum_{\emptyset \neq u\subseteq \{1,\ldots,d\}}\gamma_uC_{\alpha}^{|u|}\sum_{\boldsymbol{k}_u\in \mathbb{N}^{|u|}}b^{-\mu_{\alpha}(b^m\boldsymbol{k}_u)} \]
does not depend on the choice of generating vector $\boldsymbol{q}_d$, we can add this quantity to the criterion $B_{\boldsymbol{\gamma}}(p,\boldsymbol{q}_d)$ to get another criterion
\begin{align*}
\tilde{B}_{\boldsymbol{\gamma}}(p,\boldsymbol{q}_d) & = B_{\boldsymbol{\gamma}}(p,\boldsymbol{q}_d) + \sum_{\emptyset \neq u\subseteq \{1,\ldots,d\}}\gamma_u C_{\alpha}^{|u|}\sum_{\substack{\boldsymbol{k}_u\in P_u^{\perp}(p,\boldsymbol{q}_d)\\ \forall j\in u\colon b^m\mid k_j}}b^{-\mu_{\alpha}(\boldsymbol{k}_u)} \\
& = \sum_{\emptyset \neq u\subseteq \{1,\ldots,d\}}\gamma_u C_{\alpha}^{|u|}\sum_{\boldsymbol{k}_u\in P_u^{\perp}(p,\boldsymbol{q}_d)}b^{-\mu_{\alpha}(\boldsymbol{k}_u)} \\
& = -1+\frac{1}{b^m}\sum_{n=0}^{b^m-1}\sum_{u\subseteq \{1,\ldots,d\}}\gamma_u C_{\alpha}^{|u|}\sum_{\boldsymbol{k}_u\in \mathbb{N}^{|u|}}b^{-\mu_{\alpha}(\boldsymbol{k}_u)}\mathrm{wal}_{(\boldsymbol{k}_u,\boldsymbol{0})}(\boldsymbol{x}_n) \\
& = -1+\frac{1}{b^m}\sum_{n=0}^{b^m-1}\prod_{j=1}^{d}\left[ 1+ \gamma_j C_{\alpha} w_{\alpha}\left(v_m\left( \frac{nq_j}{p}\right) \right)\right],
\end{align*}
where we used Lemma~\ref{lem:character} in the third equality, and the function $w_{\alpha}\colon [0,1)\to \mathbb{R}$ is defined by
\[ w_{\alpha}(x) = \sum_{k=1}^{\infty}b^{-\mu_{\alpha}(k)}\mathrm{wal}_k(x). \]
As shown in \cite[Theorem~2]{BDLNP12}, one can compute $w_{\alpha}$ efficiently when $x$ is a $b$-adic rational.
More precisely, if $x$ is of the form $a/b^m$ for $m\in \mathbb{N}$ and $0\leq a< b^m$, $w_{\alpha}(x)$ can be computed in at most $O(\alpha m)$ operations.
Furthermore, in case of $b=2$, we have explicit formulas for $w_2$ and $w_3$, see \cite[Corollary~1]{BDLNP12}.
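As an independent cross-check of such evaluations, $w_{\alpha}$ can also be approximated directly from its defining Walsh series by naive truncation. The following sketch works in base $b=2$; the truncation parameter $K$ is an illustrative choice, and the cost is proportional to $K$, not the $O(\alpha m)$ of the evaluation in \cite{BDLNP12}:

```python
def mu_alpha(k, alpha):
    # Dick weight mu_alpha(k): sum of the alpha largest positions of
    # nonzero binary digits of k (positions counted from 1).
    positions = [i + 1 for i in range(k.bit_length()) if (k >> i) & 1]
    return sum(sorted(positions, reverse=True)[:alpha])

def wal(k, a, m):
    # Walsh function wal_k at x = a / 2^m in base 2:
    # (-1)^{sum_i kappa_i xi_{i+1}}, where kappa_i is bit i of k and
    # xi_j is the j-th binary digit of x after the point.
    sign = 0
    for i in range(k.bit_length()):
        if (k >> i) & 1:
            digit = (a >> (m - 1 - i)) & 1 if i < m else 0
            sign ^= digit
    return -1 if sign else 1

def w_alpha_truncated(alpha, a, m, K=2**12):
    # Naive truncation of w_alpha(x) = sum_{k >= 1} 2^{-mu_alpha(k)} wal_k(x)
    # at x = a / 2^m; converges for alpha >= 2.
    return sum(2.0**(-mu_alpha(k, alpha)) * wal(k, a, m) for k in range(1, K + 1))
```

For instance, at $x=0$ all Walsh values equal $1$ and a direct calculation gives $w_2(0)=3/2$, which the truncated sum reproduces.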
In what follows, we show how one can use the fast CBC construction algorithm to find suitable polynomials $q_1^*,\ldots,q_s^*\in G^*_{b,m}$ by employing $\tilde{B}_{\boldsymbol{\gamma}}(p,\boldsymbol{q})$ as a quality measure.
Assume that $q^*_1=1,q^*_2,\ldots,q^*_{d-1}$ are already found.
Let
\[ P_{n,d-1}=\prod_{j=1}^{d-1}\left[ 1+ \gamma_j C_{\alpha} w_{\alpha}\left(v_m\left( \frac{nq^*_j}{p}\right) \right)\right], \]
for $0\leq n<b^m$. Note that we have
\[ P_{0,d-1}=\prod_{j=1}^{d-1}\left[ 1+ \gamma_j C_{\alpha} w_{\alpha}\left(0 \right)\right] , \]
regardless of the choice $q^*_1,q^*_2,\ldots,q^*_{d-1}$.
Now the criterion $\tilde{B}_{\boldsymbol{\gamma}}(p,\boldsymbol{q}_d)$ is given by
\begin{align*}
\tilde{B}_{\boldsymbol{\gamma}}(p,\boldsymbol{q}_d) & = -1+\frac{1}{b^m}\sum_{n=0}^{b^m-1}P_{n,d-1}\left[ 1+ \gamma_d C_{\alpha} w_{\alpha}\left(v_m\left( \frac{nq_d}{p}\right) \right)\right] \\
& = -1+\frac{P_{0,d}}{b^m} + \frac{1}{b^m}\sum_{n=1}^{b^m-1}P_{n,d-1}\left[ 1+ \gamma_d C_{\alpha} w_{\alpha}\left(v_m\left( \frac{nq_d}{p}\right) \right)\right] \\
& = -1+\frac{P_{0,d}}{b^m} + \frac{1}{b^m}\sum_{n=1}^{b^m-1}P_{n,d-1}+ \frac{\gamma_d C_{\alpha}}{b^m}\sum_{n=1}^{b^m-1}P_{n,d-1} w_{\alpha}\left(v_m\left( \frac{nq_d}{p}\right) \right).
\end{align*}
Thus it is obvious that the CBC algorithm finds a component $q_d^*$ which minimizes the last sum.
Since the modulus $p$ is assumed to be irreducible, there exists a primitive polynomial $g\in \mathbb{F}_b[x]/p$ for which we have $\{g^0=g^{b^m-1}=1, g^{1},\ldots,g^{b^m-2}\}=(\mathbb{F}_b[x]/p)\setminus \{0\}$, and then the last sum for a polynomial $q_d=g^{z}$ with $1\leq z\leq b^m-1$ can be rewritten as
\begin{align*}
\sum_{n=1}^{b^m-1}P_{n,d-1} w_{\alpha}\left(v_m\left( \frac{nq_d}{p}\right) \right) = \sum_{n=1}^{b^m-1} P_{g^{-n},d-1} w_{\alpha}\left(v_m\left( \frac{g^{z-n}}{p}\right) \right) =: \eta_z,
\end{align*}
where we note that the subscript $g^{-n}$ appearing in $P_{g^{-n},d-1}$ is identified with the integer in $\{1,\ldots,b^m-1\}$.
We define the circulant matrix
\[ A = \left( w_{\alpha}\left( v_m\left( \frac{g^{z-n}}{p}\right)\right)\right)_{1\leq z,n\leq b^m-1},\]
and compute
\[ (\eta_1,\ldots,\eta_{b^m-1})^{\top} = A \cdot (P_{g^{-1},d-1},P_{g^{-2},d-1},\ldots,P_{g^{-b^m+1},d-1})^{\top}.\]
Let $z_0$ be an integer such that $\eta_{z_0}\leq \eta_{z}$ holds for any $1\leq z\leq b^m-1$. Then we set $q_d^*=g^{z_0}$.
Since the matrix $A$ is circulant, the matrix-vector multiplication above can be done by using the fast Fourier transform in $O(mb^m)$ arithmetic operations with $O(b^m)$ memory space for $P_{n,d-1}$, see \cite{NC06a,NC06b}.
Therefore, we can compute the vector $(\eta_1,\ldots,\eta_{b^m-1})$ in a fast way.
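The reason the fast Fourier transform applies is that every circulant matrix is diagonalised by the discrete Fourier transform, so the product $A\boldsymbol{x}$ is a circular convolution of the first column of $A$ with $\boldsymbol{x}$. A minimal numpy sketch (with a generic first column, not tied to our specific matrix $A$):

```python
import numpy as np

def circulant_matvec(c, x):
    """Multiply the circulant matrix with first column c by the vector x
    in O(n log n) time via the FFT, without forming the n x n matrix.

    Uses A = F^{-1} diag(F c) F, where F is the discrete Fourier transform,
    i.e. A @ x is the circular convolution of c and x."""
    return np.real(np.fft.ifft(np.fft.fft(c) * np.fft.fft(x)))
```

The result agrees with forming $A$ explicitly via $A_{ij}=c_{(i-j) \bmod n}$ and multiplying, but at $O(n\log n)$ instead of $O(n^2)$ cost.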
After finding $q_d^*=g^{z_0}$, each $P_{n,d-1}$ is updated simply by
\[ P_{g^{-n},d} = P_{g^{-n},d-1} \left( 1+ \gamma_d C_{\alpha} w_{\alpha}\left(v_m\left( \frac{g^{z_0-n}}{p}\right) \right)\right). \]
Since each element of the circulant matrix $A$ can be calculated in at most $O(\alpha m)$ arithmetic operations, calculating one row (or one column) of $A$ requires $O(\alpha mb^m)$ arithmetic operations as the first step of the CBC algorithm.
Then the CBC algorithm proceeds in an inductive way as described above, yielding $O((s+\alpha)mb^m)$ arithmetic operations with $O(b^m)$ memory space for finding the generating vector $\boldsymbol{q}^*\in (G^*_{b,m})^s$.
Further, for an extrapolated polynomial lattice rule, we need to construct polynomial lattice rules with $\alpha$ consecutive numbers of points, $b^{m-\alpha+1},\ldots,b^{m}$, implying that the total number of points is $N=b^{m-\alpha+1}+\cdots+b^{m}$.
The obvious inequality
\[ \sum_{\tau=1}^{\alpha}(s+\alpha)(m-\tau+1)b^{m-\tau+1} \leq (s+\alpha)m N \leq (s+\alpha)N\log_b N\]
shows that the total construction cost is $O((s+\alpha)N \log N)$ together with $O(N)$ memory space, which improves on the currently known result for interlaced polynomial lattice rules, which require $O(s\alpha N\log N)$ arithmetic operations with $O(N)$ memory space \cite{G15,GD15}.
\section{Numerical experiments}\label{sec:experiment}
As a low-dimensional problem, let us consider a simple bi-variate test function
\[ f(x,y)=\frac{ye^{xy}}{e-2}, \]
for which the exact value $I(f)$ equals 1. This function has often been used in the literature; see for instance \cite[Chapter~8]{SJbook}. We approximate $I(f)$ by using extrapolated polynomial lattice rules over $\mathbb{F}_2$, and also by using interlaced polynomial lattice rules over $\mathbb{F}_2$ for comparison. Here extrapolated polynomial lattice rules are constructed by the fast CBC algorithm as described in Section~\ref{sec_fast} with the constant $C_{\alpha}=1$, which is justified as mentioned in Remark~\ref{rem:walsh_bound}, whereas interlaced polynomial lattice rules are constructed by the fast CBC algorithm based on a computable quality criterion given in \cite[Corollary~3]{G15}. For both rules, we set $\gamma_1=\gamma_2=1$ within the CBC algorithm.
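Indeed, $\int_0^1 y e^{xy}\,dx = e^y - 1$ and $\int_0^1 (e^y - 1)\,dy = e - 2$, so $I(f)=1$. A crude numerical sanity check of this value (tensorised midpoint rule; the grid size is an arbitrary illustrative choice):

```python
import math

def I_midpoint(n=512):
    # Tensorised midpoint rule for f(x, y) = y * exp(x*y) / (e - 2)
    # over the unit square; converges at rate O(n^{-2}) to I(f) = 1.
    h = 1.0 / n
    total = 0.0
    for i in range(n):
        x = (i + 0.5) * h
        for j in range(n):
            y = (j + 0.5) * h
            total += y * math.exp(x * y)
    return total * h * h / (math.e - 2)
```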
\begin{figure}[!b]
\centering
\includegraphics[width=0.49\textwidth]{testfunc0_order2.eps}
\includegraphics[width=0.49\textwidth]{testfunc0_order3.eps}
\caption{The results for $f(x,y)=ye^{xy}/(e-2)$ by using extrapolated polynomial lattice rules (solid) and interlaced polynomial lattice rules (dashed) with $\alpha=2$ (left) and $\alpha=3$ (right).}
\label{fig:test0}
\end{figure}
Figure~\ref{fig:test0} shows the results for the cases $\alpha=2$ (left) and $\alpha=3$ (right). The absolute integration errors as functions of $\log_2 N$ are shown in each graph. The solid lines denote the results for extrapolated polynomial lattice rules and the dashed lines those for interlaced polynomial lattice rules. For reference, the dotted lines correspond to $O(N^{-1})$ and $O(N^{-2})$ convergences for $\alpha=2$, and to $O(N^{-2})$ and $O(N^{-3})$ convergences for $\alpha=3$. For the case $\alpha=2$, both rules perform comparably and achieve approximately the desired rate of error convergence $O(N^{-2})$. For the case $\alpha=3$, although interlaced polynomial lattice rules outperform extrapolated polynomial lattice rules, the rate of error convergence for extrapolated polynomial lattice rules asymptotically improves towards the expected $O(N^{-3})$, which supports our theoretical findings.
Next let us consider the following high-dimensional test integrands
\begin{align*}
f_1(\boldsymbol{x}) & = \prod_{j=1}^{s}\left[ 1+\gamma_j\left(x_j^{c_1}-\frac{1}{1+c_1}\right)\right] ,\\
f_2(\boldsymbol{x}) & = \prod_{j=1}^{s}\left[ 1+\frac{\gamma_j}{1+\gamma_j x_j^{c_2}} \right] ,
\end{align*}
for positive constants $c_1,c_2>0$.
Note that the exact values of the integrals for $f_1$ and for $f_2$ with the special cases $c_2=1$ and $c_2=2$ are known. We put $s=100$ and $\gamma_j=j^{-2}$. We construct both extrapolated polynomial lattice rules and interlaced polynomial lattice rules by using the fast CBC algorithm with the same choice of the weights $\gamma_j=j^{-2}$. Note that, in our experiments, we do not observe the phenomenon that the same elements of the generating vector repeat as pointed out in \cite{GS16}.
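The reference values used to measure the error admit closed forms: each factor of $f_1$ integrates to $1$, while for $f_2$ one has $\int_0^1 \gamma/(1+\gamma x)\,dx = \log(1+\gamma)$ and $\int_0^1 \gamma/(1+\gamma x^2)\,dx = \sqrt{\gamma}\arctan\sqrt{\gamma}$. A sketch computing these reference values (assuming the weights $\gamma_j=j^{-2}$ of our experiments):

```python
import math

def exact_f1():
    # Each factor of f_1 integrates to 1 on [0, 1]
    # (the constant 1/(1 + c_1) is exactly the mean of x^{c_1}),
    # so I(f_1) = 1 for any c_1 > 0 and any weights.
    return 1.0

def exact_f2(s=100, c2=1):
    # c2 = 1: each factor integrates to 1 + log(1 + gamma_j);
    # c2 = 2: each factor integrates to 1 + sqrt(gamma_j)*atan(sqrt(gamma_j)).
    val = 1.0
    for j in range(1, s + 1):
        g = j**-2.0
        if c2 == 1:
            val *= 1.0 + math.log(1.0 + g)
        else:
            val *= 1.0 + math.sqrt(g) * math.atan(math.sqrt(g))
    return val
```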
\begin{figure}[!p]
\centering
\includegraphics[width=0.49\textwidth]{testfunc1_order2.eps}
\includegraphics[width=0.49\textwidth]{testfunc1_order3.eps}\\
\includegraphics[width=0.49\textwidth]{testfunc2_1_order2.eps}
\includegraphics[width=0.49\textwidth]{testfunc2_1_order3.eps}\\
\includegraphics[width=0.49\textwidth]{testfunc2_2_order2.eps}
\includegraphics[width=0.49\textwidth]{testfunc2_2_order3.eps}
\caption{The results for $f_1$ with $c_1=1.3$ (top), $f_2$ with $c_2=1$ (middle), and $f_2$ with $c_2=2$ (bottom) by using extrapolated polynomial lattice rules (solid) and interlaced polynomial lattice rules (dashed) with $\alpha=2$ (left) and $\alpha=3$ (right).}
\label{fig:test1}
\end{figure}
Figure~\ref{fig:test1} shows the results for the cases $\alpha=2$ (left column) and $\alpha=3$ (right column). Each row corresponds to the results for $f_1$ with $c_1=1.3$, $f_2$ with $c_2=1$, and $f_2$ with $c_2=2$, respectively. Again, for reference, the dotted lines correspond to $O(N^{-1})$ and $O(N^{-2})$ convergences for $\alpha=2$, and to $O(N^{-2})$ and $O(N^{-3})$ convergences for $\alpha=3$. For the case $\alpha=2$, extrapolated polynomial lattice rules perform competitively with interlaced polynomial lattice rules and achieve approximately the desired rate of the error convergence $O(N^{-2})$. For the case $\alpha=3$, similarly to the result for the bi-variate test function, interlaced polynomial lattice rules outperform extrapolated polynomial lattice rules, but the rate of the error convergence for extrapolated polynomial lattice rules improves as the number of points increases.
These numerical results indicate that extrapolated polynomial lattice rules can be quite useful in fast QMC matrix-vector multiplication with higher order convergence, a direction we shall pursue in the near future.
\section*{Acknowledgments}
The second author would like to thank Professor Josef Dick for his hospitality while visiting the University of New South Wales where most of this research was carried out.
\section{Introduction.}
In this paper we study the evolution of a classical Newtonian particle moving
in three-dimensional space, in the field of forces produced by a random
distribution of fixed sources (scatterers). We will assume that the
force field acting over a particle at the position $x$ due to a scatterer
centered at $x_{j}$ is given by $-Q_j\nabla\Phi\left(x-x_{j}\right)$, $Q_j\in\mathbb{R}$, where the potential
$\Phi(x)$ is radially symmetric and behaves typically as $\left\vert x\right\vert^{-s}$ for large $\left\vert x\right\vert $.
The case of gravitational or Coulombian scatterers corresponds to potentials
proportional to $\left\vert x\right\vert^{-1}$.
The dynamics of tagged particles in fixed centers of scatterers has been
extensively studied, see \cite{Al78, vB82, Ha74, LL2} for classical surveys on the topic.
Such systems are known generally as Lorentz gases,
since they were proposed by Lorentz in 1905 to explain the motion of
electrons in metals \cite{L}.
The scatterers are assumed to be short ranged and,
in the classical setting, they are modelled as elastic hard spheres. A case of
weak, long range field with diffusion has been studied in \cite{Pi81}.
The statistical properties of the gravitational field generated by a
Poisson repartition of point masses with finite density are also well known. The associated
distribution, known as Holtsmark field, has been investigated in connection with
several applications in spectroscopy and astronomy \cite{CH,F, H, K}. It is a symmetric stable
distribution with parameter $3/2$, skewness parameter $0$ and semiexplicit form of the density function.
In the present paper, we intend to study generalizations of such
distribution for a large class of random potentials $\Phi$. The main motivation is to
clarify how the tracer particle dynamics depends on the details of $\Phi$.
It is not an easy task to construct the dynamics of a tagged particle for
arbitrary $\Phi$.
Nevertheless, it is possible to obtain rather detailed information in the `kinetic limit'.
We focus on a family of potentials $\left\{ \Phi\left(x,\varepsilon\right) \right\} _{\varepsilon>0}$
and denote by $\ell_\varepsilon$ the {\em mean free path}, namely the characteristic length travelled by a particle before
its velocity vector $v$ changes by an amount of the same order of magnitude as the velocity itself.
We will denote by $d$ the typical distance between scatterers and we will use this distance as unit of length.
We will then fix the unit of time in order to make the speed of the tagged particle of order one.
We are interested in potentials for which
\begin{equation}
\lim_{\varepsilon\rightarrow0}\frac{\ell_{\varepsilon}}{d}=\lim_{\varepsilon
\rightarrow0}\ell_{\varepsilon}=\infty\label{KL}\;.%
\end{equation}
In particular, the forces produced are small at distances of order one from the scatterers
(but possibly of order 1 at small distances).
Since the location of the centers of force is random, the position of
the particle is also a random variable. To describe the evolution, we therefore
compute the probability density $f_{\varepsilon}$
of finding the particle at a given point $x$ with a given value of the
velocity $v$ at time $t \geq 0$. If an additional condition concerning the independence of the deflections in
macroscopic time scales $\ell_{\varepsilon}$ holds
as $\varepsilon\to 0$, then on a macroscopic scale of space and time $f_{\varepsilon}$ approximates a function
$f$ governed by a kinetic equation. More precisely, $f_{\varepsilon}\left(\ell_{\varepsilon}t,\ell_{\varepsilon}x,v\right)\to f(t,x,v)$ which solves one of the following equations:
\smallskip
\noindent
1) a linear Boltzmann equation of the form
\begin{equation}
\left( \partial_{t}f+v\partial_{x}f\right) \left( t,x,v\right)
=\int_{S^{2}}B\left( v;\omega \right) \left[ f\left( t,x,\left\vert
v\right\vert \omega\right) -f\left( t,x,v\right) \right] d\omega\;, \label{BE}%
\end{equation}
where $\partial_{x}$ denotes the three-dimensional gradient and $B$ is a nonnegative collision kernel (depending on $\Phi$);
\noindent
2) a linear Landau type equation
\begin{equation}
\left( \partial_{t}f+v\partial_{x}f\right) \left( t,x,v\right)
=\kappa\Delta_{v_{\bot}}f\left( t,x,v\right)\;,\label{LE}%
\end{equation}
where $\kappa>0$ depends on $\Phi$.
\smallskip
\noindent
The form of the equation derived, as well as the collision kernel $B$ and the diffusion coefficient $\kappa$ depend on the specific families of potentials $\left\{ \Phi\left(x,\varepsilon\right) \right\} _{\varepsilon>0}$ under consideration (see Section \ref{GenKinEq} for further details).
Additionally, there are families of potentials $\left\{ \Phi\left(
x,\varepsilon\right) \right\} _{\varepsilon>0}$ for which the evolution
equation for $f$ contains a sum of both a Boltzmann term as in (\ref{BE}), and other
terms as in (\ref{LE}). In other cases the model describing the dynamics of the tagged particle must take into account the non-negligible correlations between the velocities at points placed within a distance of the order of the mean free path.
Heuristically, (\ref{BE}) describes dynamics in which the main deflections
are binary collisions taking place when the tagged particle approaches within
a distance $\lambda_{\varepsilon}$ of one of the scatterer centers, with
$\lambda_{\varepsilon}>0$ converging to $0$ as $\varepsilon\rightarrow0$.
This does not imply that the interaction potential $\Phi$ should be
compactly supported or have a very fast decay within distances of order
$\lambda_{\varepsilon}$ (cf.\,Remark \ref{BoSD}).
The quantity $\lambda_{\varepsilon}$ is a fundamental length of the
problem and we shall refer to it (when it exists) as {\em collision length}
associated to $\left\{ \Phi\left(x,\varepsilon\right) \right\} _{\varepsilon>0}$.
The kinetic equation (\ref{LE}) characterizes cases in which $\lambda_{\varepsilon}$
is not reached, meaning that
particle deflections are only due to the addition of very small forces produced by
the scatterers. In turn, the latter process can arise in different ways, depending on the
specific $\Phi$. For instance, at any given time,
one can have a huge number of scatterers
producing a relevant (small) force on the tagged particle,
or no scatterer or just one scatterer producing a relevant force, in such a way that the
accumulation of many of these small, binary interactions yields an important deflection
of the trajectory on the large time scale. We will discuss several of these
possibilities in Section \ref{Examples}.
A mathematical derivation of the kinetic equations (\ref{BE}) and (\ref{LE})
has been provided in cases of compactly supported potentials,
in the so-called low density and weak coupling limits respectively.
We refer to \cite{DGL, G, KP} for first results in these directions, to
\cite{S1} (Chapter I.8) and to \cite{S2} for an account of the subsequent literature.
An alternative way of deriving a linear Landau equation is to consider multiple weak interactions of the tagged particle with the scatterers whose density is intermediate between the Boltzmann-Grad regime and the weak-coupling regime; see for instance \cite{BNP, DR, MN} (and Section \ref{DiffLandau} below).
As shown in
\cite{DP}, it is furthermore possible to derive Boltzmann equations in cases of
potentials with diverging support. It is however unclear, even at a formal level,
what kinetic behaviour has to be expected for generic $\left\{ \Phi\left(x,\varepsilon\right) \right\} _{\varepsilon>0}$
and in particular for potentials of the form
\begin{equation}
\Phi\left( x,\varepsilon\right) =\frac{\varepsilon^{s}}{\left\vert
x\right\vert ^{s}}\label{SingP}%
\end{equation}
(for which $\lambda_{\varepsilon}=\varepsilon$).
We sketch now the main ideas behind the derivation of (\ref{BE}) and (\ref{LE})
(or combinations of them) for generic $\Phi$. We construct the
generalized Holtsmark field associated to $\left\{ \Phi\left( x,\varepsilon\right) \right\}
_{\varepsilon>0}$.
These are random force fields obtained as the sum of the forces generated by a Poisson distribution of points.
Their analysis was initiated by Holtsmark in \cite{H}.
We assume condition (\ref{KL}) and also
that the particle deflections are independent in the macroscopic time scale.
We split (in a smooth manner) the potential
$\Phi\left( x,\varepsilon\right) $ as
\[
\Phi\left( x,\varepsilon\right) =\Phi_{B}\left( x,\varepsilon\right)
+\Phi_{L}\left( x,\varepsilon\right)
\]
where $\Phi_{B}\left( x,\varepsilon\right) $ is supported in a ball of
radius $M\lambda_{\varepsilon},$ with $M$ of order one but large, and
$\Phi_{L}\left( x,\varepsilon\right) $ is supported at distances larger than
$\frac{M\lambda_{\varepsilon}}{2}.$
At distances of order $\lambda_{\varepsilon}$, the particle is deflected
by an amount of order one through the interaction $\Phi_{B}\left( x,\varepsilon\right) .$
Since the scatterers are distributed according to a Poisson repartition with finite density,
the time between two such consecutive collisions is of the order of the
`Boltzmann-Grad time scale'
\begin{equation}
T_{BG}=\frac{1}{\left( \lambda_{\varepsilon}\right) ^{2}}\;. \label{BGT}%
\end{equation}
We compute next the time scale $T_{L}$ (`Landau time scale') in which
the deflections produced by $\Phi_{L}$ become relevant.
Due to (\ref{KL}), we have
$T_{BG}\rightarrow\infty$ and $T_{L}\rightarrow\infty$ as $\varepsilon
\rightarrow0.$ We shall have then three different possibilities:
(i) if $T_{BG}\ll T_{L}$ as $\varepsilon\rightarrow0$, the
evolution of $f$ will be given by (\ref{BE}); (ii) if $T_{BG}\gg T_{L}$ as
$\varepsilon\rightarrow0$, $f$ will evolve according to
(\ref{LE}); (iii) if $T_{BG}$ and $T_{L}$ are of the same order of
magnitude, $f$ will be driven by a Boltzmann-Landau equation. In the case of the
potentials (\ref{SingP}) it turns out that $T_{BG}\ll T_{L}$ as
$\varepsilon\rightarrow0$ if $s>1$ and $T_{BG}\gg T_{L}$ as $\varepsilon
\rightarrow0$ if $1/2< s \leq 1$. The limitation $s>\frac 1 2$ is necessary to ensure that the random fields are not almost constant over distances of order $d$ (see Remark \ref{rem:s=1/2}).
Our analysis relies on the smallness of the total force field at distances of order one from
the scatterers.
For slowly decaying potentials, these fields can be still constructed in the
system of infinitely many scatterers, thanks to the mutual
cancellations of forces produced by a random, spatially homogeneous distribution
of sources (notice that in Lorentz gases, contrarily to plasmas, no screening effects occur,
even when the scatterers contain charges with different signs). However, some
extra conditions must be imposed, depending on the decay. Let us restrict to (\ref{SingP}) for definiteness.
Then, if $s>2$ the random force field can be written as a convergent series of the binary forces
produced by the scatterers. If $\frac{1}{2}<s\leq2$, it is possible to obtain
the random force fields at a given point as the limit of forces produced by
scatterers in finite clouds whose size tends to infinity.
Nevertheless, the average value of the force field at a given point will generally depend on the
geometry of the finite clouds. For $s>1$, a translation invariant field can still be obtained by imposing a symmetry condition on the cloud.
Finally, if $s \leq 1$, the force fields become spatially inhomogeneous,
unless we consider ``neutral'' distributions of scatterers (e.g. fields produced by charges $Q_j = \pm 1$ yielding
attractive and repulsive forces and with average zero charge).
In this paper we restrict the analysis to the three-dimensional case. We
also comment on the analogue of our main results in two dimensions
(see Remarks \ref{rem:2D1}, \ref{rem:2D2}).
The paper is organized as follows. In the first part (Section
\ref{Holtsm}) we construct the generalized
Holtsmark fields.
In the second part (Sections \ref{GenKinEq} and \ref{Examples}) we study the deflections
experienced by a tagged particle in several families of such distributions and deduce the
resulting kinetic equation for $f$. A discussion on inhomogeneous cases and concluding remarks
are collected in the last two sections (Sections \ref{Nonhomog} and \ref{sec:CR}).
\section{Generalized Holtsmark fields.\label{Holtsm}}
\subsection{Definitions.}
In this section we characterize in a precise mathematical form the random distribution
of forces in which the dynamics of the tagged particle will take place.
We will call {\em scatterer configuration} any countable collection of infinitely many points
$\left\{x_{n}\right\} _{n\in\mathbb{N}},$ $x_{n}\in\mathbb{R}^{3}$. We will denote by
$\Omega$ the set of {\em locally finite} scatterer configurations, i.e.\,such that $\#\left\{
x_{n}\right\} _{n\in\mathbb{N}}\cap K<\infty$ for any compact $K\subset\mathbb{R}^{3}$.
The associated $\sigma-$algebra $\mathcal{B}$ is generated by the sets
$
\mathcal{U}_{U,m}=\Big\{\left\{ x_{n}\right\}
_{n\in\mathbb{N}},\ \#\left\{ x_{n}\right\} _{n\in\mathbb{N}}\cap
U=m\Big\}
$
for all bounded open $U\subset\mathbb{R}^{3}$ and $m=0,1,2,\cdots$. We
then have a probability space $\left( \Omega,\mathcal{B},\nu\right) $
where $\nu$ is the probability measure associated to the Poisson distribution with
intensity one.
Given any finite set
\begin{equation}
I=\left\{ Q_{1},Q_{2},Q_{3},...,Q_{L}\right\}, \quad Q_{j}\in{\mathbb R} \label{def:set_I}%
\end{equation}
we introduce a probability measure $\mu$ on
the set $I$.
We define the set of charged scatterer configurations $\Omega \otimes I $ as%
\[
\Omega \otimes I =\left\{ \left\{ \left( x_{n},Q_{n}\right) \right\} _{n\in
\mathbb{N}},\ \left\{ x_{n}\right\} _{n\in\mathbb{N}}\in{\Omega
},\ Q_{n}\in I\right\}\;.
\]
We define the $\sigma-$algebra $\mathcal{B}$ of subsets of $\Omega \otimes I$ as the one
generated by the sets $$\mathcal{U}_{U,m,j}=\Big\{\left\{ \left( x_{n}%
,Q_{n}\right) \right\} _{n\in\mathbb{N}},\ \#\left\{ \left( x_{n}%
,Q_{n}\right) :Q_{n}=Q_{j},\ x_{n}\in U\right\} =m\Big\},$$ where
$U\subset\mathbb{R}^{3}$ and $Q_{j}\in I.$
We then define a probability measure $\nu
\otimes\mu$ by means of%
\begin{equation}
(\nu\otimes\mu)\left( \bigcap_{j=1}^{L}\left[ \mathcal{U}_{U,n_{j},j}\right] \right)
=\frac{\exp\left( -\left\vert U\right\vert \right) \prod_{j=1}^{L}\left[
\mu\left( Q_{j}\right) \left\vert U\right\vert \right] ^{n_{j}}}%
{\prod_{j=1}^{L}\left( n_{j}\right) !} \;.\label{def:measure}%
\end{equation}
Notice that (\ref{def:measure}) defines a probability measure because $\sum_{j=1}%
^{L}\mu\left( Q_{j}\right) =1.$
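Indeed, summing \eqref{def:measure} over all occupation numbers $n_{1},\cdots,n_{L}\geq0$ we obtain (we sketch the elementary check):
\[
\sum_{n_{1},\cdots,n_{L}\geq0}\frac{e^{-\left\vert U\right\vert }\prod_{j=1}^{L}\left[ \mu\left( Q_{j}\right) \left\vert U\right\vert \right] ^{n_{j}}}{\prod_{j=1}^{L}n_{j}!}
=e^{-\left\vert U\right\vert }\prod_{j=1}^{L}e^{\mu\left( Q_{j}\right) \left\vert U\right\vert }
=e^{-\left\vert U\right\vert }\,e^{\left\vert U\right\vert \sum_{j=1}^{L}\mu\left( Q_{j}\right) }=1\;.
\]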
\begin{definition}
\label{RandForcField}Suppose that $I$ is a finite set as in \eqref{def:set_I} and let $\mu$ be a
probability measure on $I.$ We consider the measure $\nu\otimes\mu$ on $\Omega\otimes I$.
We call random force field (associated to $\nu\otimes\mu$)
a set of measurable mappings $\left\{ F\left( x\right)
:x\in\mathbb{R}^{3}\right\} $:
\begin{equation}
F\left( x\right) :\Omega\otimes I\rightarrow \mathbb{R}^3\cup \{\infty\}
\ \ ,\ \ \ \ \omega\rightarrow F\left( x\right)
\omega\label{S1E2}%
\end{equation}
satisfying $\left( \nu\otimes\mu\right) \left( \left( F\left( x\right)
\right) ^{-1}\left( \left\{ \infty\right\} \right) \right) =0$ for any
$x\in\mathbb{R}^{3}.$
\end{definition}
\noindent
When $I$ has just one element, we shorten the notation $\Omega\otimes I$
by $\Omega$ and $\nu\otimes\mu$ by $\nu$.
Let $\mathcal{B}\left( \mathbb{R}^{3}\right)$ be the Borel algebra of $\mathbb{R}^{3}$.
We are interested in translation invariant fields, defined as follows:
\begin{definition}
\label{InvTrans}The random force field $\left\{ F\left(
x\right) :x\in\mathbb{R}^{3}\right\} $ is invariant under translations if
for any collection of points $x_{1},x_{2},\cdots\in\mathbb{R}^{3}$, for
any collection $A_{1},A_{2},\cdots\in\mathcal{B}\left(
\mathbb{R}^{3}\right) $ and any $a\in\mathbb{R}^{3}$ the following identity
holds:%
\[
\left( \nu\otimes\mu\right) \left( \bigcap_{k}\left( F\left(
x_{k}+a\right) \right) ^{-1}\left( A_{k}\right) \right) =\left(
\nu\otimes\mu\right) \left( \bigcap_{k}\left( F\left( x_{k}\right)
\right) ^{-1}\left( A_{k}\right) \right)\;.
\]
\end{definition}
Moreover, we focus on additive force fields generated by the scatterers in configurations $\omega\in\Omega$.
More precisely:
\begin{definition}
\label{Holt}Assign the potential $\Phi\in C^{2}\left( \mathbb{R}^{3}\setminus\left\{
0\right\}; \mathbb{R}\right) $ and let $I=\left\{ Q_{1},Q_{2},...,Q_{L}\right\} $ be
a finite set with probability measure $\mu.$ An element
$\omega\in\Omega\otimes I$ is then characterized by the sequence
$\omega=\left\{ \left( x_{n},Q_{j_{n}}\right) \right\} _{n\in\mathbb{N}}.$
We say that the random force field $\left\{ F\left( x\right)
:x\in\mathbb{R}^{3}\right\} $ is a generalized Holtsmark field
associated to $\Phi$ if there exists an open set $U\subset
\mathbb{R}^{3}$ with $0\in U$ such that, for any $x\in\mathbb{R}^{3}$, the
following identity holds:%
\begin{equation}
F\left( x\right) \omega=F_{U}\left( x\right) \omega=\lim_{R\rightarrow
\infty}F^{(R)}_U(x) \omega\;,
\label{S1E3}%
\end{equation}
where
\begin{equation}
F^{(R)}_U(x)\,\omega:=-\sum_{x_{n}\in RU}Q_{j_{n}}\nabla\Phi\left( x-x_{n}\right)
\label{eq:FRUx}
\end{equation}
and the convergence in \eqref{S1E3} takes place in law.
Namely:
\[
\lim_{R \to \infty}\left( \nu\otimes\mu\right)\left( \left\{ \omega:-\sum_{x_{n}\in RU}Q_{j_{n}}\nabla\Phi\left(
x-x_{n}\right) \in A\right\} \right) = \left( \nu\otimes\mu\right)\Big( \left\{
\omega:F_{U}\left( x\right) \omega\in A\right\} \Big)
\]
for all $A \in \mathcal{B}\left( \mathbb{R}^{3}\right)$.
\end{definition}
\noindent
Notice that we do not assume the absolute
convergence of the series $\sum_{x_{n}}Q_{j_{n}}\nabla\Phi\left(
x-x_{n}\right)$. As we shall see in the rest of this section, the limits $F\left( x\right) \omega$
might in general depend on the choice of the domains $U$.
Notice also that the
generalized Holtsmark fields
are not necessarily invariant under translations in the sense of Definition \ref{InvTrans}.
Finally, we mention that the $C^2-$regularity could be relaxed (it will be used in Section \ref{GenKinEq}
to ensure a well defined dynamics for one particle moving in the Holtsmark field).
We shall refer to the numbers $Q_{j}$ as scatterer {\em charges}.
Moreover, for brevity, we shall call {\em scatterer distribution} the joint
distribution $\nu\otimes\mu$ of scatterer configurations and scatterer charges on $\Omega\otimes I$,
and indicate by $\mathbb{E}\,[\cdot]$ the corresponding expectation.
In particular, the condition $\sum_{j=1}^{L}Q_{j}\mu\left(Q_{j}\right) =0$ models a {\em neutral} system.
In the Coulomb case, we shall call this condition `electroneutrality'.
Neutral systems display special properties. In fact, we will show that
when the potential $\Phi$ decays slowly (including the Coulombian case),
neutrality becomes necessary for the existence of a translation invariant $F$.
Finally, in the case of scatterers with only one charge, neutrality
can be replaced by the assumption of a ``background charge'' with opposite sign:
\begin{definition}
\label{HoltBack}Assign the potential $\Phi\in C^{2}\left( \mathbb{R}^{3}\setminus\left\{
0\right\}; \mathbb{R}\right) $ such that $\int_{|y|<1}\left\vert \nabla\Phi\left( y\right) \right\vert dy<\infty.$ We
will say that the random force field $\left\{ F\left( x\right) :x\in
\mathbb{R}^{3}\right\} $ is a generalized Holtsmark field with
background associated to $\Phi$ if there exists an open set
$U\subset\mathbb{R}^{3}$ with $0\in U$ such that, for any $x\in\mathbb{R}^{3}$,
\eqref{S1E3} holds (in the sense of convergence in law) with $F^{(R)}_{U}=F^{(R), 0}_{U}$ given by
\begin{equation}
F^{(R), 0}_{U}\left( x\right) \omega:=-\left[ \sum_{x_{n}\in RU}\nabla\Phi\left( x-x_{n}\right) -\int
_{RU}\nabla\Phi\left( x-y\right) dy\right]\;. \label{S1E4}%
\end{equation}
\end{definition}
\noindent
Note that in this case we assume $L=1$ and $Q_1=1$,
although more general cases could be easily included.
The above negative background can be thought of as a form of the so-called `Jeans swindle',
which has been often used in cosmological problems to study the stability properties of gravitational
systems \cite{BT,Ki03}.
\begin{remark}[Historical comment]
The goal of Holtsmark's paper (cf. \cite{H}) was
to describe the broadening of the spectral lines in gases. This broadening of
spectral lines is induced by the electrical fields acting on the molecules of
the gas (Stark effect).
On the other hand the electrical field acting on each individual molecule is a
random variable which depends on the specific distribution of the surrounding
gas molecules. The properties of such a random field were computed in \cite{H}
in the cases in which the fields induced by the gas molecules
can be approximated by either point charges (ions), dipoles or quadrupoles.
Combining the properties of the resulting random electrical fields with the
physical laws describing the Stark effect it is possible to prove that the
broadening of the spectral lines scales like a power law of the gas density.
The exponents characterizing these laws are different for ions, dipoles and
quadrupoles (cf. \cite{H}).
\end{remark}
In the rest of this section, we construct several examples of generalized Holtsmark fields,
and study their basic properties.
\subsection{Construction.} \label{sec:costr}
Let $s > 1/2 $. We will consider potentials $\Phi$ in the following classes
\begin{eqnarray}
{\cal C}_s &:=& \Big\{ \;\; \Phi\in C^{2}\left( \mathbb{R}^{3}\setminus\left\{
0\right\}; \mathbb{R}\right)
\;\;\;\ \Big|\ \;\;\;\Phi\left( x\right) =\Phi\left( \left\vert x\right\vert \right)
\nonumber\\
&&\;\;\;\; \mbox{and }\;\; \exists\, A \neq 0, \; r>\max(s,2) \mbox{ \;\;s.t., for $|x|\geq 1$,}\nonumber\\
&&\;\;\;\; \left\vert \Phi\left( x\right) -\frac{A}{\left\vert x\right\vert^s
}\right\vert +\left\vert x\right\vert \left\vert \nabla\left( \Phi\left(
x\right) -\frac{A}{\left\vert x\right\vert^s }\right) \right\vert
\leq \frac{C}{\left\vert x\right\vert ^{r}}\;\; \Big\}\;. \label{eq:defCCs}
\end{eqnarray}
where $C>0$ is a given constant.
The potentials in ${\cal C}_s$ decay as $\vert x\vert ^{-s}$ with an explicit error determined
by $C,r$.
The singularity at the origin, possibly strong, is not relevant in the discussions of this section.
The assumption of radial symmetry might be relaxed. The condition $r>\max(s,2)$
is technically helpful in the proofs that follow and could also be relaxed, at the price of additional
estimates of the remainder.
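For instance (an illustrative example), the smooth potential
\[
\Phi\left( x\right) =\frac{A}{\left( 1+\left\vert x\right\vert ^{2}\right) ^{s/2}}
\]
belongs to ${\cal C}_s$: for $\left\vert x\right\vert \geq1$,
\[
\Phi\left( x\right) -\frac{A}{\left\vert x\right\vert ^{s}}=\frac{A}{\left\vert x\right\vert ^{s}}\left[ \left( 1+\left\vert x\right\vert ^{-2}\right) ^{-s/2}-1\right] =O\left( \left\vert x\right\vert ^{-\left( s+2\right) }\right) \;,
\]
and the gradient term in \eqref{eq:defCCs} satisfies an analogous bound, so that one can take $r=s+2>\max\left( s,2\right) $.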
Let us denote by $m^{(R)}\left(\eta_{1},\cdots,\eta_{J};y_1,\cdots,y_J\right)$ the $J-$point
characteristic function of the random field $F^{(R)}_U(x)$, that is:
\begin{equation}
m^{(R)}\left(\eta_{1},\cdots,\eta_{J};y_1,\cdots,y_J\right) :=
\mathbb{E}\left[ \exp\left( i\sum_{k=1}^{J}\eta_{k}\cdot F^{\left(
R\right) }_U\left( y_{k}\right) \right) \right]
\label{eq:mpf}
\end{equation}
for all $J \geq 1$, $\eta_{1},\cdots,\eta_{J}\in \mathbb{R}^{3}$ and $y_1,\cdots,y_J \in \mathbb{R}^{3}$.
Notice that we are dropping from the notation the possible dependence on $U$. We will further abbreviate $m^{(R)}\left(\eta_{1},\cdots,\eta_{J}\right)$
in statements where $y_1,\cdots,y_J$ are fixed.
Suppose that $\Phi \in {\cal C}_s$. Let $\xi\in C^{\infty}\left( \mathbb{R}^{3}\right)$ be an
arbitrary cutoff function satisfying $\xi\left( y\right) =1$ for $\left\vert
y\right\vert \geq1,$ $\xi\left( y\right) =0$ for $\left\vert y\right\vert
\leq\frac{1}{2}$ and $\xi\left( y\right) =\xi\left( \left\vert y\right\vert
\right)$ when $|\nabla\Phi|$ is non-integrable at the origin, and $\xi \equiv 1$ otherwise.
Then, we will show (see the proof of the theorem below) that,
as $R \to \infty$, $m^{(R)}$ converges pointwise to
\begin{eqnarray}
&& m\left(\eta_{1},\cdots,\eta_{J}\right) := \exp\Big(i \sum_{k=1}^{J}\eta_{k}\cdot M_U(y_k)\Big)
\label{eq:mpm}\\
&&\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \cdot\exp\Big( \sum_{j=1}^{L}\mu\left( Q_{j}\right) \int_{\mathbb{R}^{3}%
}\Big[ \exp\Big( -iQ_{j}\sum_{k=1}^{J}\eta_{k}\cdot\nabla\Phi\left(
y_{k}-y\right) \Big) \nonumber\\
&&\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ -1 +iQ_{j}\sum_{k=1}^{J}\eta_{k}\cdot\nabla\Phi\left(
y_{k}-y\right) \xi\left( y_{k}-y\right) \Big] dy\Big) \nonumber
\end{eqnarray}
where $ M_U: \mathbb{R}^{3}\to \mathbb{R}^3\cup \{\infty\} $ is given by
\begin{equation}
M_U(x):= -A \lim_{R \to \infty} \sum_{j=1}^{L}\mu\left( Q_{j}\right) Q_{j}%
\int_{RU}\nabla\left(\frac{1}{|x-y|^s}\right) \xi\left( x-y\right) dy
\label{eq:MUx}
\end{equation}
(and by $0$ in the case of the field \eqref{S1E4}).
Formula \eqref{eq:mpm} is a generalization of the formulas for the Holtsmark field for
the Coulomb potential in \cite{CH}. As we shall explain below, Eq.\,\eqref{eq:MUx}
determines the (non)invariance properties of the limiting field as well as its (in)dependence
on the features of the geometry $U$.
The integral in the last two lines of \eqref{eq:mpm} converges absolutely for $s>1/2$.
Moreover, it is $o(\eta)$ for small values of the $\eta-$variables.
In particular, if $|M_U(x)|< \infty$, then the limit field $F_U$ in \eqref{S1E3} is well defined and
\begin{equation}
\mathbb{E}\left[ F_U(x)\right] = M_U(x)\;.
\label{eq:MUxexp}
\end{equation}
However, the limit integral in \eqref{eq:MUx} is only conditionally convergent
when $s\leq 2$, so that the existence and the properties of $F_U$ depend strongly on $\mu, U, s$ through $M_U(x)$.
The results
are summarized by the following statement.
\begin{theorem} \label{thm:costruzione}
Suppose that $\Phi \in {\cal C}_s$. Let $U \subset \mathbb{R}^{3}$ be a bounded open set with $0 \in U$
and $\partial U \in C^1$. Then, the right-hand side of \eqref{S1E3} converges in law and defines
a random field $F$ (in the sense of Definition \ref{RandForcField}) in the following cases:
\begin{itemize}
\item[\emph{\textbf{a.1}}] $s > 2$\;;
\item[\emph{\textbf{a.2}}] $\sum_{j=1}^{L}Q_{j}\mu\left(Q_{j}\right) =0$ (`neutrality') and $s>1/2$;
\item[\emph{\textbf{a.3}}] $F^{(R)}_{U}=F^{(R), 0}_{U}$ (given by \eqref{S1E4}),
$\int_{|y|<1}\left\vert \nabla\Phi\left( y\right) \right\vert dy<\infty$ and $s > 1/2$\;;
\item[\emph{\textbf{b.}}] $1 < s \leq 2$ and $\int_{U \setminus \{|y|<\frac{1}{2}\}}\nabla\left( \frac{1}{\left\vert y\right\vert ^{s}}\right) dy=0$\,;
\item[\emph{\textbf{c.}}] $s = 1$ and $\int_{U}\nabla\left( \frac{1}{\left\vert y\right\vert }\right) dy=0$\,.
\end{itemize}
Moreover, the following properties hold in the specific cases:
\begin{itemize}
\item[ \emph{\textbf{a.1-2-3}}]
$F$ is independent of the domain $U$, it is invariant under translations in the sense of Definition
\ref{InvTrans}, and it has characteristic function given by \eqref{eq:mpm} independent of $\xi$ and with $M_U = 0$;
\item[\emph{\textbf{b.}}] $F$ is independent of the domain $U$, it is invariant under translations in the sense of Definition
\ref{InvTrans}, and it has characteristic function given by \eqref{eq:mpm} independent of $\xi$ and with $M_U = 0$;
however, if $RU$ is replaced by $RU - R^{s-1}e$ in \eqref{eq:FRUx} with $e\in\mathbb{R}^3$ and $|e|$ small enough,
then one can have $M_U(x) \neq 0$;
\item[\emph{\textbf{c.}}]
$F = F_U(x)$ depends on the domain $U$, it is not invariant under translations, and it has characteristic function given by \eqref{eq:mpm} independent of $\xi$ and with
\begin{equation}
M_U(x) = -A\sum_{j=1}^{L}\mu\left( Q_{j}\right) Q_{j}
\int_{\partial U}\frac{\left( x\cdot y\right) n\left(
y\right) }{\left\vert y\right\vert ^{3}}dS\left( y\right)
\label{eq:MUxs1}
\end{equation}
where $n(y)$ is the outer normal to $\partial U$ at $y$.
\end{itemize}
\end{theorem}
Some remarks on this result follow.
\begin{remark}
Note that $s \leq 2$ implies that $\sum_{x_{n}}Q_{j_{n}}\nabla\Phi\left(x-x_{n}\right)$
can be no more than conditionally convergent. In this case, if the
neutrality condition is not satisfied, we need a rather
stringent assumption on the domain (items {\bf b} and {\bf c}),
which can be thought of as a geometrical condition on the cloud scatterer distribution.
To show that such a condition is critical, we prove (see Section \ref{eq:depgeom} in the proof)
in case {\bf b} that small displacements of the domain $U$ (i.e.\,slight asymmetries of the cloud) can
yield, in the limit, random force fields with a non-zero component in one particular direction.
\end{remark}
\begin{remark}
The case $s=1$ is of course particularly relevant, because it corresponds to gravitational and Coulombian forces.
The major distinguishing feature of this case is that, in the absence of neutrality,
the limiting force field is not invariant under translations (item {\bf c.}).
Roughly speaking, a density of charges of order one with
only one sign yields a change in the average force of order one over
distances of the order of the scatterer distance.
Additional features of this case are discussed in Section \ref{sec:diffeq} and in the next remark.
\end{remark}
\begin{remark}
In the case of potentials $\Phi$ given exactly by a
power law, the random variable $F_{U}\left( x\right) $ is given by $M_U(x)+\zeta$ (cf.\,\eqref{eq:MUxexp})
where $\zeta=\left( \zeta_{1},\zeta_{2},\zeta_{3}\right)$
and each of the random variables $\zeta_{i}$ is a multiple of a symmetric stable
L\'evy distribution \cite{F}. The characteristic exponent is $3/2$ in the case $s=1$. For this particular
potential and in the electroneutral case, we can take the limit $\xi \to 1$ of
the cutoff, and the 1-point characteristic function $m\left(
\eta\right) $ becomes:%
\[
m\left( \eta\right) =\exp\left( -C_{0}\left\vert \eta\right\vert ^{\frac
{3}{2}}\sum_{j=1}^{L}\mu\left( Q_{j}\right) \left\vert Q_{j}\right\vert
^{\frac{3}{2}}\right) \ ,\ \eta\in\mathbb{R}^{3}%
\]
where the constant $C_{0}$ takes the
value $2\pi\int_{0}^{\infty}\left[ 1-\frac{\sin\left( x\right) }{x}\right] x^{-5/2}
dx$ (see \cite{CH} for the computation).
Moreover, using the inversion formula for the Fourier transform we can compute the
probability density of the force field $F.$ In particular, elementary
computations yield the probability density $\tilde{p}\left(
\left\vert F\right\vert \right) $ of the absolute value $|F|$. It
turns out that $\tilde{p}\left( \left\vert F\right\vert \right) $ is a
bounded function which behaves proportionally to $ |F|^{-5/2}$ as $\left\vert F\right\vert \rightarrow\infty.$
In particular, $\mathbb{E}\left[ \left\vert F\right\vert \right] <\infty.$
\end{remark}
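For the reader's convenience, we note that the constant $C_{0}$ can be evaluated in closed form. Using the identity $\int_{0}^{\infty}x^{\nu-1}\left( \sin x-x\right) dx=\Gamma\left( \nu\right) \sin\left( \pi\nu/2\right) $, valid by analytic continuation for $-3<\nu<-1$, with $\nu=-5/2$ (a sketch of the classical computation for the Holtsmark distribution):
\[
\int_{0}^{\infty}\left[ 1-\frac{\sin x}{x}\right] x^{-5/2}dx=-\int_{0}^{\infty}x^{-7/2}\left( \sin x-x\right) dx=-\Gamma\left( -\tfrac{5}{2}\right) \sin\left( -\tfrac{5\pi}{4}\right) =\frac{8\sqrt{\pi}}{15}\cdot\frac{\sqrt{2}}{2}=\frac{4}{15}\sqrt{2\pi}\;,
\]
so that $C_{0}=\frac{4}{15}\left( 2\pi\right) ^{3/2}$.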
\begin{remark} \label{rem:s=1/2}
Suppose that $s<\frac{1}{2}$ in items {\bf a.2} and {\bf a.3} and assume for definiteness that $\Phi(x)=\frac{\varepsilon}{|x|^s}$, $s<\frac 1 2$. In this case, we obtain a completely different picture of the
resulting random force fields, because the field is due
mostly to the particles placed very far away. Assuming
neutrality (necessary to deal with spatially homogeneous force fields),
one can compute the limit of the fields generated by a
cloud of particles contained in $RU,$ where $U$ is the unit ball. One sees that, in order to
obtain force fields of order one, $\varepsilon$ has to be rescaled with $R$ as
$\varepsilon=R^{1-\frac{1}{2s}}.$ Then, the resulting force field obtained as
$R\rightarrow\infty$ is constant in regions where $\left\vert x\right\vert $
is bounded and it is given by a Gaussian distribution. Since the dynamics
of a tagged particle would greatly differ from the one taking place in the
generalized Holtsmark fields obtained in Theorem \ref{thm:costruzione}, we do not
pursue this case further in this paper.
\end{remark}
\begin{remark} \label{rem:2D1}
In two dimensions, the critical value separating absolutely and conditionally convergent
cases is $s=1$ (cf.\,\eqref{eq:MUx}) and the Coulombian case would correspond to
a logarithmic potential. One can introduce ${\cal C}_0$ defined as the class of potentials close to $A\log\left\vert x\right\vert $
for $|x|$ large (as in \eqref{eq:defCCs}). However, a straightforward adaptation
of Theorem \ref{thm:costruzione} is valid only for $s>0$, with the following identification of cases: {\bf a.1} $s>1$;
{\bf a.2} `neutrality' and $s>0$; {\bf a.3} `existence of background' and $s>0$; {\bf b.}
$0<s\leq 1$ and $\int_{U \setminus \{|y|<\frac{1}{2}\}}\nabla\left( \frac{1}{\left\vert y\right\vert ^{s}}\right) dy=0$.
Note that every power law decay leads to spatially
homogeneous generalized Holtsmark fields. If
$s\leq 1$, a nontrivial condition on the geometry of the finite clouds is required.
\end{remark}
\subsection{Proof of Theorem \ref{thm:costruzione}.}
\subsubsection{Convergence.}
The proof relies on a classical computation in probability theory \cite{F}. It is enough to prove
convergence of the $J-$point characteristic function of the field $F^{(R)}_U(x)$, defined
by \eqref{eq:FRUx} or by \eqref{S1E4}.
Let us assume \eqref{eq:FRUx}.
Fix $y_{1},\cdots,y_{J}\in \mathbb{R}^{3}$, $\eta_{1},\cdots,\eta_{J}\in \mathbb{R}^{3}$. By definition \eqref{eq:mpf}
\begin{equation}
m^{(R)} = \mathbb{E}\left[ \exp\left( i\sum_{k=1}^{J}\eta_{k}\cdot F^{\left(
R\right) }_U\left( y_{k}\right) \right) \right] =\mathbb{E}\left[\, \prod_{k=1}^J\prod_{x_{n}\in RU}%
\exp\left( -iQ_{j_{n}}\eta_k\cdot\nabla\Phi\left( y_k-x_{n}\right) \right)
\right]
\end{equation}
and the properties of
the scatterer distribution $\nu\otimes\mu$ imply
\begin{align}
& m^{(R)} =\sum_{M=0}^{\infty}\frac{e^{-\left\vert RU\right\vert }}{M!}\left(
\sum_{j=1}^{L}\mu\left( Q_{j}\right) \int_{RU}\exp\left( -iQ_{j
}\sum_{k=1}^J\eta_k\cdot\nabla\Phi\left( y_k-y\right) \right) dy\right) ^{M}\nonumber\\
& =\exp\Big( \sum_{j=1}^{L}\mu\left( Q_{j}\right) \int_{RU}\Big[
\exp\Big( -iQ_{j}\sum_{k=1}^J\eta_k\cdot\nabla\Phi\left( y_k-y\right) \Big)\nonumber\\
& \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ -1+iQ_{j}\sum_{k=1}^J\eta_k\cdot\nabla\Phi\left( y_k-y\right) \xi\left( y_k-y\right)
\Big] dy\Big) \nonumber\\
& \;\;\;\cdot\exp\Big( -i\sum_{j=1}^{L}\mu\left( Q_{j}\right) Q_{j}\int
_{RU}\sum_{k=1}^J\eta_k\cdot\nabla\Phi\left( y_k-y\right) \xi\left( y_k-y\right) dy\Big)\;,
\label{eq:mRcfirst}
\end{align}
where $\xi$ has been introduced before \eqref{eq:mpm}.
Note that the term in the square brackets is of order $(1+\left\vert y\right\vert^{2s+2})^{-1}$,
which is integrable in $\mathbb{R}^{3}$ for $s > 1/2$.
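This estimate follows from the elementary bound $\left\vert e^{-i\theta}-1+i\theta\right\vert \leq\theta^{2}/2$, $\theta\in\mathbb{R}$: for $\left\vert y\right\vert $ large we have $\xi\left( y_{k}-y\right) =1$ and, since $\Phi\in{\cal C}_{s}$,
\[
\theta=Q_{j}\sum_{k=1}^{J}\eta_{k}\cdot\nabla\Phi\left( y_{k}-y\right) =O\left( \left\vert y\right\vert ^{-\left( s+1\right) }\right) \;,
\]
so that the term in the square brackets is $O\left( \left\vert y\right\vert ^{-2\left( s+1\right) }\right) $, integrable at infinity in $\mathbb{R}^{3}$ precisely when $2\left( s+1\right) >3$, i.e.\,$s>1/2$.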
In the neutral case (item {\bf a.2}), the last exponential in \eqref{eq:mRcfirst} is trivially equal to $1$.
Furthermore for $s>2$ (item {\bf a.1}) all the integrals are absolutely convergent
and the last exponential in \eqref{eq:mRcfirst} converges to $1$ by the dominated convergence
theorem, because of the symmetry of $\Phi$ and $\xi$.
Otherwise (items {\bf b} and {\bf c}) we write
\begin{eqnarray}
&&\int_{RU}\eta_k\cdot\nabla\Phi\left( y_k-y\right) \xi\left( y_k-y\right)
dy\label{S8E3}\\
&&=A\int_{RU}\eta_k\cdot\nabla\left( \frac{1}{\left\vert y_k-y\right\vert ^{s}%
}\right) \xi\left(
y_k-y\right) dy+\int_{RU}\eta_k\cdot\nabla\rho\left(
y_k-y\right) \xi\left( y_k-y\right) dy \nonumber
\end{eqnarray}
where $A$ is the constant appearing in \eqref{eq:defCCs}
and $\rho\left( y\right) :=\Phi\left( y\right) -\frac{A}{\left\vert
y\right\vert ^{s}}.$
Using $\Phi \in {\cal C}_s$ as well as the fact that $r>2$ in \eqref{eq:defCCs}, we
obtain that $\left\vert \nabla\rho\left( y\right) \right\vert $ is
integrable in $\mathbb{R}^{3}$ and therefore the symmetry of $\Phi$ and $\xi$ imply
$\int_{RU}\eta_k\cdot\nabla\rho\left(
y_k-y\right) \xi\left( y_k-y\right) dy \to \int
_{\mathbb{R}^{3}}\eta_k\cdot\nabla\rho\left( y\right) \xi\left( y\right)
dy=0$ in the limit $R \to \infty$.
In the case $F^{(R)}_{U}=F^{(R), 0}_{U}$ given by \eqref{S1E4} (item {\bf a.3}),
the computation is simpler since we do not need to add/subtract
the last exponential in \eqref{eq:mRcfirst}.
By dominated convergence, we conclude that $$\lim_{R \to \infty}m^{(R)} = m$$
holds in all the cases considered with $m$ given by \eqref{eq:mpm}
and $M_U(x)=0$ in cases {\bf a.1-2-3}, while
we are left with the evaluation of $M_U(x)$ defined by the limit \eqref{eq:MUx} in cases {\bf b} and {\bf c}.
Let us focus on case {\bf b}. Using the hypothesis on $U$, the symmetry of $\xi$
and a change of variables $y \to Ry$ we find
\begin{eqnarray}
&&\lim_{R \to \infty}\Big| \int_{RU}\nabla\left( \frac{1}{\left\vert x-y\right\vert ^{s}%
}\right) \xi\left( x-y\right) dy\Big|\nonumber\\
&&=\lim_{R \to \infty} \Big|\int_{RU}\left[ \nabla\left( \frac{1}{\left\vert x-y\right\vert
^{s}}\right) \xi\left( x-y\right) -\nabla\left( \frac{1}{\left\vert
y\right\vert ^{s}}\right) \xi\left( y\right) \right] dy\Big|\nonumber\\
&& = \lim_{R \to \infty} \Big|\int_{RU-x} \nabla\left( \frac{1}{\left\vert y\right\vert
^{s}}\right) \xi\left( y\right) dy -\int_{RU}\nabla\left( \frac{1}{\left\vert
y\right\vert ^{s}}\right) \xi\left( y\right) dy\Big|\nonumber\\
&& \leq \lim_{R \to \infty} C \frac{R^2}{R^{s+1}} = 0
\label{S8E2}%
\end{eqnarray}
since $s>1$. Therefore $M_U(x)=0$. Notice that, in the estimate, $C$ is a positive constant depending on $x$, and we used the regularity of $\partial U$.
Moreover, the cutoff has been used only to treat the non-integrable singularity of the case $s=2$.
Similarly, in case {\bf c} we get
\begin{eqnarray}
&&\int_{RU}\nabla\left( \frac{1}{\left\vert x-y\right\vert %
}\right) \xi\left( x-y\right) dy\nonumber\\
&&= \int_{RU}\nabla\left( \frac{1}{\left\vert x-y\right\vert }-\frac
{1}{\left\vert y\right\vert }\right) dy\nonumber\\
&& = \int_{R\partial
U}\left( \frac{1}{\left\vert x-y\right\vert }-\frac{1}{\left\vert
y\right\vert }\right) n\left( y\right)
dS\left( y\right)
\end{eqnarray}
by using the divergence theorem, where $n\left( y\right) $ is the outer normal at $y\in\partial U.$
Hence, for any given $x$,
\begin{eqnarray}
&&\int_{RU}\nabla\left( \frac{1}{\left\vert x-y\right\vert %
}\right) \xi\left( x-y\right) dy\nonumber\\
&& = R\int_{\partial
U}\left( \frac{1}{\left\vert \frac{x}{R}-y\right\vert }-\frac{1}{\left\vert
y\right\vert }\right) n\left( y\right)
dS\left( y\right) \nonumber\\
&& = \int_{\partial
U}\frac{\left( x\cdot y\right) n\left( y\right)
}{\left\vert y\right\vert ^{3}}dS\left( y\right) +o\left(
1\right) \quad \text{as}\quad R \to \infty
\end{eqnarray}
where $o\left(1\right)$ is the remainder of the Taylor expansion in $x/R$.
This proves that the limit \eqref{eq:MUx} reduces to \eqref{eq:MUxs1} in this case.
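Explicitly, since $0\in U$ and $U$ is open, $\partial U$ is at positive distance from the origin, and for $y\in\partial U$ the first-order Taylor expansion
\[
\frac{1}{\left\vert \frac{x}{R}-y\right\vert }=\frac{1}{\left\vert y\right\vert }+\frac{x\cdot y}{R\left\vert y\right\vert ^{3}}+O\left( \frac{\left\vert x\right\vert ^{2}}{R^{2}\left\vert y\right\vert ^{3}}\right)
\]
holds uniformly on $\partial U$, whence $R\left( \frac{1}{\left\vert \frac{x}{R}-y\right\vert }-\frac{1}{\left\vert y\right\vert }\right) =\frac{x\cdot y}{\left\vert y\right\vert ^{3}}+O\left( R^{-1}\right) $.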
All the properties stated in the theorem follow from the explicit computation
of the characteristic function $m$ performed above.
In particular, we readily check that $$m\left(\eta_{1},\cdots,\eta_{J};y_1,\cdots,y_J\right) = m\left(\eta_{1},\cdots,\eta_{J};y_1+a,\cdots,y_J+a\right)$$
for any $a \in \mathbb{R}^3$, from which translation invariance of $F$ follows in the cases {\bf a.1-2-3} and {\bf b}.
\subsubsection{Dependence on the geometry.\label{eq:depgeom}}
In this section we prove the statement concerning small asymmetries of the cloud of scatterers in the case {\bf b}.
For convenience, we restrict to $1<s<2$ and to $L=1$ and consider the following random field
\begin{equation}
\bar F^{(R)}_U(x)\,\omega:=-\sum_{x_{n}\in RU-R^{s-1}e}\nabla\Phi\left( x-x_{n}\right)\;,
\end{equation}
where $e \in \mathbb{R}^{3}$ is small (so that $x \in RU-R^{s-1}e$ for any $R$ large enough).
Notice that the displacement of the domain, which is of order $R^{s-1}$, tends
to infinity (since $s>1$), but it is smaller than the size of the domain. As will become clear, choosing displacements
of the domain of order one does not change the value of
$F_{U}\left( x\right)$.
Arguing exactly as in the previous section we obtain $\lim_{R \to \infty}m^{(R)} = m$
with $m$ given by \eqref{eq:mpm} and the limit in \eqref{eq:MUx} to be determined.
The computation in \eqref{S8E2} is now replaced by
\begin{eqnarray}
&& \int_{\left[ RU-R^{s-1}e\right] }\nabla\left( \frac{1}{\left\vert x-y\right\vert ^{s}%
}\right) \xi\left( x-y\right) dy\nonumber\\
&& = -s\int_{\left[ RU-R^{s-1}e-x\right] }\frac{y}{\left\vert y\right\vert
^{s+2}}\xi\left( y\right) dy+s\int_{RU}\frac{y}{\left\vert y\right\vert
^{s+2}}\xi\left( y\right) dy \nonumber\\
&& =-s\int_{\left[ RU-R^{s-1}e-x\right] \setminus RU}\frac{y}{\left\vert
y\right\vert ^{s+2}}dy+s\int_{RU\setminus\left[ RU-R^{s-1}e-x\right] }%
\frac{y}{\left\vert y\right\vert ^{s+2}}dy\nonumber\\
&& =-sR^{2-s}\left[ \int_{\left[ U-R^{s-2}e-\frac{x}{R}\right] \setminus
U}\frac{y}{\left\vert y\right\vert ^{s+2}}dy-\int_{U\setminus\left[
U-R^{s-2}e-\frac{x}{R}\right] }\frac{y}{\left\vert y\right\vert ^{s+2}%
}dy\right] \;.
\end{eqnarray}
Neglecting the contribution $x/R$ and parametrizing the boundary, we obtain
\begin{equation}
\int_{\left[ RU-R^{s-1}e\right] }\nabla\left( \frac{1}{\left\vert x-y\right\vert ^{s}%
}\right) \xi\left( x-y\right) dy = s \int_{\partial U}\frac{(e\cdot y) n\left( y\right)}{\left\vert y\right\vert ^{s+2}}dS\left( y\right)+ o(1)
\end{equation}
as $R \to \infty$, where $n\left( y\right)$ is the outer normal at $y\in\partial U.$ Therefore
\begin{eqnarray}
M_U(x)&=& -A \lim_{R \to \infty}
\int_{\left[ RU-R^{s-1}e\right] }\nabla\left(\frac{1}{|x-y|^s}\right) \xi\left( x-y\right) dy\nonumber\\
&=& -A s \int_{\partial U}\frac{(e\cdot y) n\left( y\right)}{\left\vert y\right\vert ^{s+2}}dS\left( y\right)
\end{eqnarray}
which is a nontrivial vector depending on $e$.
\subsection{Case $\Phi\left( x\right) =\frac{1}{\left\vert
x\right\vert }$: differential equations. \label{sec:diffeq}}
In the particular case in which $\Phi$ is the Coulomb or the Newton potential, the
random force field $F_U(x)$ described by Theorem \ref{thm:costruzione} satisfies a system of
(Maxwell) differential equations. In this section we derive such equations (Theorem \ref{NewtonEquation}).
Then, we use them to show that electroneutrality is necessary in order to obtain translation invariance
(Theorem \ref{eq:NtiCp}).
\begin{theorem}
\label{NewtonEquation}Suppose that $\Phi\left( x\right) =\frac{1}{\left\vert
x\right\vert }$ and let $\left\{ F_{U}\left( x\right) :x\in\mathbb{R}%
^{3}\right\} $ be the corresponding random force field constructed by means
of Theorem \ref{thm:costruzione}.
(i) In the cases {\bf a.2} and {\bf c}, for almost every
$\omega\in\Omega\otimes I$ with the form $\omega=\left\{ \left(
x_{n},Q_{j_{n}}\right) \right\} _{n\in\mathbb{N}}$ we have that the function
$\psi\left( x\right) : =F_{U}\left( x\right)\omega$ is a weak solution of (i.e.\;it satisfies in the sense of
distributions) the equation:%
\begin{equation}
\operatorname{div}\psi=\sum_{n}Q_{j_{n}}\delta\left( \cdot-x_{n}\right)
\ \ ,\ \ \operatorname{curl}\psi=0\;.\label{S2E5}%
\end{equation}
(ii) In the case {\bf a.3}, for almost
every $\omega\in\Omega$ with the form $\omega=\left\{ x_{n}\right\}
_{n\in\mathbb{N}}$ we have that the function $\psi\left(
x\right) :=F_{U}\left( x\right)\omega$ is a weak solution
of (i.e.\;it satisfies in the sense of distributions) the equation:%
\begin{equation}
\operatorname{div}\psi=\sum_{n}\delta\left( \cdot-x_{n}\right)
-1\ \ ,\ \ \operatorname{curl}\psi=0\;.\label{S2E6}%
\end{equation}
\end{theorem}
\begin{proof} The proof is very similar for the two cases and we perform the computations for (ii) only.
Let $B_a(y)$ be the ball of radius $a$ centered at $y$. For any positive integer $n$ we write
\begin{align*}
F_{n}( x)\omega & :=\sum_{\left\{ \left\vert x_{j}\right\vert
\leq2^{n}\right\} }\frac{x-x_{j}}{\left\vert x-x_{j}\right\vert ^{3}}%
-\int_{B_{2^{n}}\left( 0\right) }\frac{x-y}{\left\vert x-y\right\vert ^{3}%
}dy\\
& =\sum_{\left\{ \left\vert x_{j}\right\vert \leq1\right\} }\frac{x-x_{j}%
}{\left\vert x-x_{j}\right\vert ^{3}}-\int_{B_{1}\left( 0\right) }\frac
{x-y}{\left\vert x-y\right\vert ^{3}}dy\\
& +\sum_{\ell=1}^{n}\left[ \sum_{\left\{ 2^{\ell-1}<\left\vert
x_{j}\right\vert \leq2^{\ell}\right\} }\frac{x-x_{j}}{\left\vert
x-x_{j}\right\vert ^{3}}-\int_{B_{2^{\ell}}\left( 0\right) \setminus
B_{2^{\ell-1}}\left( 0\right) }\frac{x-y}{\left\vert x-y\right\vert ^{3}%
}dy\right] \\
& \equiv\left[ \sum_{\left\{ \left\vert x_{j}\right\vert \leq1\right\} }%
\frac{x-x_{j}}{\left\vert x-x_{j}\right\vert ^{3}}-\int_{B_{1}\left(
0\right) }\frac{x-y}{\left\vert x-y\right\vert ^{3}}dy\right] +\sum_{\ell
=1}^{n}f_{\ell}( x)\omega\;.
\end{align*}
We want to prove that the quantities $|f_{\ell}( x)|$ decay to zero as $\ell\rightarrow\infty$
fast enough to ensure that, for any fixed $x$, the series converges with probability one as $n \to \infty$.
Set $\Omega_{\ell}=B_{2^{\ell}}\left( 0\right) \setminus B_{2^{\ell-1}}(0)$ and $I_{\ell}\left( x\right) =\int_{\Omega_{\ell}}\frac
{x-y}{\left\vert x-y\right\vert ^{3}}dy$. Note that $I_{\ell}\left( 0\right) = 0$ and (since the gradient is bounded)
$I_{\ell}\left( x\right)$ is bounded uniformly in compact sets. By straightforward computation we find that the variance is
\begin{eqnarray}
\mathbb{E}\left[ \left( f_{\ell}\left( x\right) \right)
^{2}\right] & =&\sum_{J=0}^{\infty}\frac{1}{J!}e^{-\left\vert \Omega_{\ell
}\right\vert }\int_{\Omega_{\ell}}dx_{1}\int_{\Omega_{\ell}}dx_{2}%
...\int_{\Omega_{\ell}}dx_{J}\nonumber\\
&& \cdot\left( \sum_{\left\{ 2^{\ell-1}<\left\vert x_{j}\right\vert \leq2^{\ell
}\right\} }\frac{x-x_{j}}{\left\vert x-x_{j}\right\vert ^{3}}-I_\ell(x)\right)
\left( \sum_{\left\{ 2^{\ell-1}<\left\vert x_{k}\right\vert
\leq2^{\ell}\right\} }\frac{x-x_{k}}{\left\vert x-x_{k}\right\vert ^{3}}%
-I_\ell(x)\right)\nonumber\\
&= & \left( I_{\ell}\left( x\right) \right) ^{2}\sum_{J=0}^{\infty}%
\frac{\left\vert \Omega_{\ell}\right\vert ^{J-2}}{J!}e^{-\left\vert
\Omega_{\ell}\right\vert }\left[ \left( J\left( J-1\right) -2J\left\vert
\Omega_{\ell}\right\vert +\left\vert \Omega_{\ell}\right\vert ^{2}\right)
\right] \nonumber\\
&& +\left[ \sum_{J=1}^{\infty}\frac{1}{\left( J-1\right) !}e^{-\left\vert
\Omega_{\ell}\right\vert }\left\vert \Omega_{\ell}\right\vert ^{J-1}\right]
\int_{\Omega_{\ell}}\frac{1}{\left\vert x-y\right\vert ^{4}}dy\nonumber\\
&=&
\int_{\Omega_{\ell}}\frac{1}{\left\vert x-y\right\vert ^{4}}dy\;,
\end{eqnarray}
which is of order $2^{-\ell}$.
If $A_{\ell}:=\left\{ \omega:\left\vert f_{\ell}(x)
\right\vert \geq\frac{1}{\ell^{2}}\right\}$, then by Chebyshev's inequality
the probability of $A_{\ell}$ is at most $C\ell^{4}2^{-\ell}$ for some $C>0$, which is summable over $\ell$.
Borel-Cantelli implies that, with probability one, there are at most a
finite number of values of $\ell$ for which $\omega\in A_{\ell}.$ Thus, the series $\sum_{\ell
\geq 1}f_{\ell}( x)\omega$ converges absolutely for any given $x$, with probability one.
In the same way, one estimates the gradients
\begin{align*}
\nabla F_{n}( x)\omega & =\nabla\left[ \sum_{\left\{
\left\vert x_{j}\right\vert \leq1\right\} }\frac{x-x_{j}}{\left\vert
x-x_{j}\right\vert ^{3}}-\int_{B_{1}\left( 0\right) }\frac{x-y}{\left\vert
x-y\right\vert ^{3}}dy\right] +\sum_{\ell
=1}^{n}\nabla f_{\ell}( x)\omega\;.
\end{align*}
We conclude that, with probability one, $F_{n}(x)$ and $\nabla F_n(x)$ converge uniformly on compact sets, away from the finitely many singularities contained in each such set.
We can then pass to the limit in the weak formulation of the equation
\begin{equation}
\operatorname{div}F_{n}=\sum_{\left\{ \left\vert x_{j}\right\vert \leq
2^{n}\right\} }\delta\left( \cdot-x_{j}\right)
-1\ \ ,\ \ \operatorname{curl}F_{n}=0\;.
\end{equation}
\end{proof}
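The $2^{-\ell}$ decay of the variance used in the proof above is easy to verify: at $x=0$ the integral $\int_{\Omega_{\ell}}|x-y|^{-4}dy$ reduces to the radial integral $4\pi\int_{2^{\ell-1}}^{2^{\ell}}r^{-2}dr=4\pi\,2^{-\ell}$. A minimal numerical sketch of this computation (illustrative only; the function name is ours):

```python
import math

def shell_variance(ell, n=100_000):
    # E[f_ell(x)^2] at x = 0 reduces to the radial integral
    #   int_{Omega_ell} |y|^{-4} dy = 4*pi * int_{2^(ell-1)}^{2^ell} r^{-2} dr,
    # computed here with the midpoint rule.
    a, b = 2.0 ** (ell - 1), 2.0 ** ell
    h = (b - a) / n
    return 4.0 * math.pi * sum((a + (k + 0.5) * h) ** -2 for k in range(n)) * h
```

Each dyadic shell halves the variance (the exact value is $4\pi\,2^{-\ell}$), which is exactly the summability exploited in the Chebyshev and Borel-Cantelli step.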
\begin{theorem} \label{eq:NtiCp}
Suppose that $\left\{ F\left( x\right) :x\in\mathbb{R}^{3}\right\} $ is a
random force field in the sense of Definition \ref{RandForcField} with
$I=\left\{ Q_{1},Q_{2},...,Q_{L}\right\} $ and such that for almost every
$\omega\in\Omega\otimes I$ with the form $\omega=\left\{ \left(
x_{n},Q_{j_{n}}\right) \right\} _{n\in\mathbb{N}}$ the function $\psi\left(
x\right) :=F\left( x\right) \omega $ satisfies (\ref{S2E5}) in the weak formulation.
Suppose that the random force field $\left\{ F\left( x\right)
:x\in\mathbb{R}^{3}\right\} $ is invariant under translations and that the
average $\mathbb{E}\left[ \left\vert F\left( x\right)\right\vert\right]
$ is finite for any point $x\in\mathbb{R}^{3}$. Then $\sum_{j=1}^{L}Q_{j}%
\mu\left( Q_{j}\right) =0.$
\end{theorem}
\begin{proof}
Let $\varphi\in C_{0}^{\infty}\left( \mathbb{R}^{3}\right) $ be a test
function. Let $\overline\psi := \mathbb{E}\left[ \psi\right]$.
Then, by averaging \eqref{S2E5} with respect to $\nu\otimes\mu$ (cf.\,Definition \ref{RandForcField})
we have
\[
-\int_{\mathbb{R}^{3}}\nabla\varphi\cdot \overline\psi\left( x\right) dx=\left(
\sum_{j=1}^{L}\mu\left( Q_{j}\right) Q_{j}\right) \int_{\mathbb{R}^{3}%
}\varphi,\ \ \ \ \ \ \operatorname{curl}\overline\psi=0
\]
where we have used that $\mathbb{E}\left[ \sum_{n}\varphi\left(
x_{n}\right) \right] =\int_{\mathbb{R}^{3}}\varphi.$
Then, there exists $\phi$ such that $\overline\psi=\nabla\phi$
and $\phi$ is a weak solution of %
\begin{equation}
\Delta\phi=\sum_{j=1}^{L}\mu\left( Q_{j}\right) Q_{j}\ \ \text{in\ \ }%
\mathbb{R}^{3}\label{T1E9}\;.%
\end{equation}
On the other hand, since the random field is invariant under
translations, $\overline\psi$ must be constant. Taking $\phi$ as a
linear function, the left-hand side of (\ref{T1E9}) vanishes and
the theorem follows.
\end{proof}
\begin{remark}
The Holtsmark field for Newtonian potentials has been studied in connection with astrophysics. Several statistical properties of these random forces can be found in \cite{CH, CH1, CH2, CH3, CH4}.
\end{remark}
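Away from the scatterers, each summand $\frac{x-x_{j}}{|x-x_{j}|^{3}}$ appearing in the construction of $\psi$ is divergence- and curl-free, which is the pointwise content of \eqref{S2E5}-\eqref{S2E6} outside the singular set. A finite-difference sanity check on the single-scatterer kernel (an illustrative sketch, not part of the argument):

```python
def kernel(x, y, z):
    # single-scatterer Coulomb field x/|x|^3 (scatterer at the origin)
    r3 = (x * x + y * y + z * z) ** 1.5
    return (x / r3, y / r3, z / r3)

def divergence(x, y, z, h=1e-5):
    # central differences; div should vanish away from the origin
    return ((kernel(x + h, y, z)[0] - kernel(x - h, y, z)[0])
            + (kernel(x, y + h, z)[1] - kernel(x, y - h, z)[1])
            + (kernel(x, y, z + h)[2] - kernel(x, y, z - h)[2])) / (2 * h)

def curl_z(x, y, z, h=1e-5):
    # z-component of the curl, dF_y/dx - dF_x/dy; should also vanish,
    # since the kernel is the gradient of -1/|x|
    return ((kernel(x + h, y, z)[1] - kernel(x - h, y, z)[1])
            - (kernel(x, y + h, z)[0] - kernel(x, y - h, z)[0])) / (2 * h)
```

Both quantities are zero up to the $O(h^{2})$ discretization error at any point away from the origin.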
\section{Conditions on the potentials yielding Boltzmann and Landau equations
as limit equations for $f$. \label{GenKinEq}}
In the rest of this paper we discuss the dynamics of a tagged particle in some
families of generalized Holtsmark fields as those constructed in Section \ref{sec:costr}.
We shall consider families of potentials of the form%
\begin{equation} \label{Pot}
\left\{ \Phi\left( x,\varepsilon\right) ;\ \varepsilon>0\right\}
\end{equation}
where $\varepsilon$ is a small parameter tuning the mean free path $\ell_{\varepsilon}$ (cf.\,Introduction).
The latter is defined as the typical length that the
tagged particle must travel in order to have a change in its velocity
comparable to the absolute value of the velocity itself.
We recall that, in our units, the average distance $d$ between the scatterers is normalized to one,
and the characteristic speed of the tagged particle is of order one.
We will be interested in the dynamics of the tagged particle in the so called kinetic
limit. One of the assumptions that we need in order to derive such a limit
is %
\begin{equation}
1=d\ll \ell_{\varepsilon} \ \ \text{as\ \ }\varepsilon\rightarrow 0.
\label{KinLim}%
\end{equation}
A second condition is the statistical independence of the
particle deflections experienced over distances of the
order of $\ell_\varepsilon$. This condition will be discussed in more detail
in Section \ref{GenLandCase}.
As argued in the Introduction, assumption (\ref{KinLim}) can be obtained
in two different ways. A first possibility is that the deflections are small except
at rare collisions over distances of order $\lambda_{\varepsilon}\ll d.$ If such rare
deflections are the main cause for the change
of velocity of the tagged particle, we will obtain that the dynamics is
given by a linear Boltzmann equation. A second possibility is that the potentials in (\ref{Pot})
are very weak, but the interaction with many scatterers of the background
yields eventually a change of the velocity of order one when the
particle moves over distances $\ell_{\varepsilon}\gg d.$
The force acting over the tagged particle at any given time is a random variable depending on the
(random) scatterer configuration, leading to a diffusive process in the space of velocities.
The dynamics of the tagged particle is then described by a linear Landau
equation (if the deflections are uncorrelated in a time
scale of order $\ell_{\varepsilon}$).
We now make precise the concept of collision length (sometimes also
termed `Landau length' in the literature), namely the characteristic distance for which the deflections
experienced by the tagged particle are of order one.
\begin{definition}
\label{LandLength}We will say that a family of radially symmetric potentials
\eqref{Pot}
has a well defined collision length $\lambda_{\varepsilon}$ if there exists a family of positive
numbers $\left\{ \lambda_{\varepsilon}\right\} $ such that $\lambda
_{\varepsilon}\rightarrow0$ as $\varepsilon\rightarrow0$ and
\[
\lim_{\varepsilon\rightarrow0}\Phi\left( \lambda_{\varepsilon}y,\varepsilon
\right) =\Psi\left( y\right) =\Psi\left( \left\vert y\right\vert \right)
\ \text{uniformly in compact sets of }y\in\mathbb{R}^{3}\;,%
\]
where $\Psi\in C^{2}\left( \mathbb{R}^{3}\setminus\left\{ 0\right\}
\right) $ is not identically zero and satisfies%
\[
\lim_{\left\vert y\right\vert \rightarrow\infty}\Psi\left( y\right) =0.
\]
In this case, the characteristic time between collisions
(Boltzmann-Grad time scale) is defined by%
\begin{equation}
T_{BG}=\frac{1}{\lambda_{\varepsilon}^{2}}\;. \label{BG}%
\end{equation}
\end{definition}
For instance, families of potentials behaving as in \eqref{SingP}
have a collision length $\lambda_\varepsilon=\varepsilon$.
On the contrary, a family of potentials like
$\Phi\left( x,\varepsilon\right) =\varepsilon G\left( x\right) ,$
where $G$ is globally bounded, does not have a collision length.
Notice that $T_{BG}\rightarrow\infty$ as $\varepsilon
\rightarrow0.$ In the kinetic regime (\ref{KinLim}),
Boltzmann terms can appear only if the family of potentials in (\ref{Pot}) admits
a collision length. If a family of potentials does not have a collision length we
will set $T_{BG}=\infty,\ \lambda_{\varepsilon}=0.$
Later on we will further assume that the potential $\Psi$ yields a well
defined scattering problem between the tagged particle and one single scatterer, in the precise sense
discussed in Section~\ref{ScattPb}.
Next, we recall the class of potentials (\ref{Pot}) for which we assume (\ref{KinLim}).
We will restrict to radially symmetric functions $\Phi$ which are either globally smooth, or singular
at the origin. Moreover, we will be interested in random force fields which are defined in the whole
space and are spatially homogeneous. As explained in
Section \ref{Holtsm}, this requires assuming either that there are different types of
charges and a neutrality condition holds, or that a background charge is present, depending on
the long range decay of the potential.
More precisely, the above assumptions are satisfied by
the generalized Holtsmark fields as constructed in Theorem \ref{thm:costruzione},
items {\bf a.1-2-3} and {\bf b}, by assuming
\begin{equation}
\Phi(\cdot,\varepsilon) \in {\cal C}_s
\label{eq:PeCs}
\end{equation}
for some $s > 1/2$ (cf.\,\eqref{eq:defCCs}). Clearly the constant $A=A(\varepsilon)$ in \eqref{eq:defCCs}
depends now on $\varepsilon$.
Let $F\left( x,\varepsilon\right)$ be such a generalized Holtsmark field.
Let $(x(t),v(t))$ be the position and velocity of the
tagged particle moving in the field. For each given scatterer configuration
$\omega \in \Omega \otimes I$ with the form $\omega=\left\{ \left(
x_{n},Q_{j_{n}}\right) \right\} _{n\in\mathbb{N}}$, the evolution
is given by the ODE:%
\begin{equation}
\frac{dx}{dt}=v\ \ ,\ \ \ \frac{dv}{dt}=F\left( x,\varepsilon\right) \omega \label{S3E2}%
\end{equation}
with initial data
\begin{equation}
x\left( 0\right) =x_{0},\ \ v\left( 0\right) =v_{0} \label{S3E3}%
\end{equation}
for some $x_{0}\in\mathbb{R}^{3},\ v_{0}\in\mathbb{R}^{3}$.
Since the vector fields $F\left( x,\varepsilon\right)\omega$ are singular at the points $\left\{ x_{n}\right\}$, given $\left(
x_{0},v_{0}\right) $ we do not have global well posedness of solutions for
all $\omega\in{\Omega}$.
However, under \eqref{eq:PeCs} we assume that global existence holds with
probability one, i.e.\,the fields are locally
Lipschitz away from the points $\left\{x_{n}\right\}$ and
the tagged particle does not collide with any of the scatterers.
Let us denote by $T^{t}\left( x_{0},v_{0};\varepsilon;\omega\right)
$ the Hamiltonian flow associated to the equations (\ref{S3E2})-(\ref{S3E3}).
By assumption this flow is defined for all $t\in\mathbb{R}$ and a.e.\;$\omega$.
Suppose that $f_{0}\in\mathcal{M}_{+}\left(
\mathbb{R}^{3}\times\mathbb{R}^{3}\right) ,$ where $\mathcal{M}_{+}$ denotes
the set of nonnegative
Radon measures. Our goal is to study the asymptotics of
the following quantity as $\varepsilon$ tends to zero:%
\begin{equation}
f_{\varepsilon}\left( \ell_\varepsilon t,\ell_\varepsilon x,v \right) =\mathbb{E}[f_{0}(T^{-\ell_\varepsilon t}\left(
\ell_\varepsilon x,v;\varepsilon;\cdot\right) )] \label{eq:exp}%
\end{equation}
where the expectation is taken with respect to the scatterer distribution.
In order to check if it is possible to have a kinetic regime described
by a Landau equation, we must examine the contribution to the deflections of
the tagged particle due to the action of the potentials $\Phi\left(
x,\varepsilon\right) $ at distances larger than the collision length
$\lambda_{\varepsilon}.$ To this end we split $\Phi\left( x,\varepsilon
\right) $ as follows. We introduce a cutoff $\eta\in C^{\infty}\left(
\mathbb{R}^{3}\right) $ such that $\eta\left( x\right) =\eta\left(
\left\vert x\right\vert \right) ,$ $0\leq\eta\leq1,$ $\eta\left( x\right)
=1$ if $\left\vert x\right\vert \leq1,$ $\eta\left( x\right) =0$ if
$\left\vert x\right\vert \geq2.$ We then write%
\begin{equation}
\Phi\left( x,\varepsilon\right) =\Phi_{B}\left( x,\varepsilon\right)
+\Phi_{L}\left( x,\varepsilon\right) \ \label{S4E4}%
\end{equation}
with%
\begin{equation}
\Phi_{B}\left( x,\varepsilon\right) :=\Phi\left( x,\varepsilon\right)
\eta\left( \frac{\left\vert x\right\vert }{M\lambda_{\varepsilon}}\right)
\ \ ,\ \ \Phi_{L}\left( x,\varepsilon\right) :=\Phi\left( x,\varepsilon
\right) \left[ 1-\eta\left( \frac{\left\vert x\right\vert }{M\lambda
_{\varepsilon}}\right) \right]\;. \ \label{S4E5}%
\end{equation}
Here $M>0$ is a large real number which eventually will be sent to
infinity at the end of the argument.
If the family of potentials does not have a
collision length we just set $\Phi_{L}\left( x,\varepsilon\right)
=\Phi\left( x,\varepsilon\right) .$ In the above definitions
$B$ stands for `Boltzmann' and $L$ for `Landau'. Indeed the potential $\Phi_{B}$ yields the
big deflections experienced by the tagged particle within distances of order
$\lambda_{\varepsilon}$ of one single scatterer and $\Phi_{L}$ accounts for the
deflections induced by the scatterers which remain at distances much larger
than $\lambda_{\varepsilon}$ from the particle trajectory.
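The splitting \eqref{S4E4}-\eqref{S4E5} can be sketched concretely as follows; the particular smooth bump profile for $\eta$ below is our own choice for illustration (any $C^{\infty}$ cutoff with the stated properties would do):

```python
import math

def eta(r):
    # smooth radial cutoff: eta = 1 for r <= 1, eta = 0 for r >= 2,
    # built from the standard C^infinity profile f(t) = exp(-1/t), t > 0
    if r <= 1.0:
        return 1.0
    if r >= 2.0:
        return 0.0
    f = lambda t: math.exp(-1.0 / t)
    return f(2.0 - r) / (f(2.0 - r) + f(r - 1.0))

def split_potential(phi_value, x_norm, M, lam_eps):
    # Phi_B keeps the short-range part (supported in |x| <= 2 M lambda_eps),
    # Phi_L the long-range remainder (vanishing for |x| <= M lambda_eps);
    # by construction Phi_B + Phi_L = Phi at every point
    w = eta(x_norm / (M * lam_eps))
    return phi_value * w, phi_value * (1.0 - w)
```

The two pieces always sum back to $\Phi$, and only $\Phi_{B}$ survives within distance $M\lambda_{\varepsilon}$ of a scatterer, while only $\Phi_{L}$ survives beyond $2M\lambda_{\varepsilon}$.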
Note that, if the potentials $\Phi\left( x,\varepsilon\right) $
satisfy the above explained conditions allowing to define spatially homogeneous generalized Holtsmark fields,
then $\Phi_{B}\left( x,\varepsilon\right) $ and $\Phi_{L}\left(
x,\varepsilon\right) $ satisfy similar conditions and we can define random
force fields $\left\{ F_{B}\left( x,\varepsilon\right) :x\in\mathbb{R}%
^{3}\right\} ,\ \left\{ F_{L}\left( x,\varepsilon\right) :x\in
\mathbb{R}^{3}\right\} $ associated respectively to $\Phi
_{B}\left( x,\varepsilon\right) $ and $\Phi_{L}\left( x,\varepsilon\right).$
To understand the deflections produced by $\Phi_{L}$ we have to study
the ODE
\begin{equation}
\frac{dx}{dt}=v\ \ ,\ \ \ \frac{dv}{dt}=F_L\left( x,\varepsilon\right) \omega \label{S9E1}%
\end{equation}
for $0 \leq t \leq T$, with initial data $x\left( 0\right) =x_{0}, v\left( 0\right) =v_{0}$.
Due to the invariance under translations, we can assume
$x_0 =0.$ The time scale $T$ is chosen sufficiently small to guarantee that the deflection
experienced by the tagged particle in the time interval $t\in\left[
0,T\right] $ is much smaller than $\left\vert v_{0}\right\vert .$ Then, it is
reasonable to use the approximation%
\[
x\left( t\right) \simeq v_{0}t\ \ ,\ \ t\in\left[ 0,T\right]
\ \ \text{as\ \ }\varepsilon\rightarrow0
\]
whence%
\[
\frac{dv}{dt}\simeq F_{L}\left( v_{0}t,\varepsilon\right)\omega
\ \ \text{for }t\in\left[ 0,T\right] \ \ \text{as\ \ }\varepsilon
\rightarrow 0
\]
and the change of velocity in $[0,T]$ can be approximated as $\varepsilon\rightarrow0$ by the random
variable%
\begin{equation}
D_{T}\left( \varepsilon\right)\omega :=\int_{0}^{T}F_{L}\left( v_{0}t,\varepsilon\right)\omega\, dt\; .\label{S9E2}%
\end{equation}
As in Section \ref{sec:costr}, we may study these random variables by computing
the characteristic function:
\begin{equation}
m_{T}^{(\varepsilon)}\left( \theta \right) =\mathbb{E}\left[ \exp\left(
i\theta \cdot D_{T}\left( \varepsilon\right)\omega \right) \right]
\ \ ,\ \ \theta\in\mathbb{R}^{3}. \label{S9E3}
\end{equation}
The following result is a corollary of Theorem \ref{thm:costruzione}.
\begin{corollary}
\label{CharFunctions}
Suppose that $\Phi(\cdot,\varepsilon)\in {\cal C}_s$. Then, we can define spatially homogeneous random
force fields $ F_{L}(\cdot,\varepsilon)$ associated to $\Phi_L(\cdot,\varepsilon)$, by means of Theorem \ref{thm:costruzione}
(items {\bf a.1-2-3} and {\bf b}).
The characteristic function \eqref{S9E3}
is given by
\begin{eqnarray}
m_{T}^{(\varepsilon)}\left( \theta \right) &=& \exp\Big( \sum_{j=1}^{L}\mu\left( Q_{j}\right) \int_{\mathbb{R}^{3}%
}\Big[ \exp\Big( -iQ_{j}\theta\cdot\int_0^T dt\, \nabla_x\Phi_L\left(
v_0 t-y,\varepsilon\right) \Big) \label{eq:CFmeTt} \\
&&\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ -1 +iQ_{j}\theta\cdot\int_0^T dt \,\nabla_x\Phi_L\left(
v_0 t-y,\varepsilon\right) \Big] dy\Big)\;. \nonumber
\end{eqnarray}
\end{corollary}
We focus now on the magnitude of the deflections due to $\Phi_{L}.$
We assume that
$\left\vert \theta\right\vert $ is of order one.
We are interested in time scales $T=T_{\varepsilon}$ for
which $D_{T}\left( \varepsilon\right)\omega$ is small, which means
$|\int_0^T dt\, \nabla_x\Phi_L\left(
v_0 t-y,\varepsilon\right)| \ll1$ as
$\varepsilon\rightarrow0$ for the range of values of $y \in\mathbb{R}^{3}$
contributing to the integrals in \eqref{eq:CFmeTt}.
We can then approximate the characteristic function as:
\begin{equation}
m_{T}^{(\varepsilon)}\left( \theta \right) =\exp\left( -\frac{1}{2}\sum_{j=1}%
^{L}\mu\left( Q_{j}\right) Q_{j}^{2}\int_{\mathbb{R}^{3}%
}\left( \theta\cdot\int_0^T dt \,\nabla_x\Phi_L\left(
v_0 t-y,\varepsilon\right)\right) ^{2} dy \right)\;. \label{S4E9}%
\end{equation}
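The passage from \eqref{eq:CFmeTt} to \eqref{S4E9} rests on the Taylor expansion $e^{-iz}-1+iz=-\frac{z^{2}}{2}+O(z^{3})$ applied to the small quantity $z=Q_{j}\,\theta\cdot\int_{0}^{T}\nabla_{x}\Phi_{L}\,dt$. A numerical check of the size of the remainder (illustrative only):

```python
import cmath

def exact_term(z):
    # integrand of the exponent in the characteristic function (single charge)
    return cmath.exp(-1j * z) - 1 + 1j * z

def gaussian_term(z):
    # second-order Taylor approximation used in the Landau regime
    return -z * z / 2.0

# For small real z the remainder is i z^3/6 + O(z^4), so the quadratic
# (Gaussian) term dominates exactly when the deflections are small.
```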
This formula suggests the following way of defining a characteristic time for the deflections.
Setting
\begin{equation}
\sigma\left( T;\varepsilon\right) :=\sup_{\left\vert \theta\right\vert
=1}\int_{\mathbb{R}^{3}}dy\left( \theta\cdot\int_{0}^{T}\nabla_{x}\Phi
_{L}\left( vt-y,\varepsilon\right) dt\right) ^{2}\;, \label{S4E8a}%
\end{equation}
we define the Landau time scale $T_{L}$ as the solution of the equation%
\begin{equation}
\sigma\left( T_{L};\varepsilon\right) =1\;. \label{S4E8}%
\end{equation}
Notice that $T_{L}$ is a function of $\varepsilon$ and that we can assume,
without loss of generality, that $|v|=1$ (we will do so in the following). If there is no solution
of (\ref{S4E8}) for small $\varepsilon$ we set $T_{L}=\infty.$
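Given $\sigma\left( \cdot;\varepsilon\right)$, equation \eqref{S4E8} can be solved by any one-dimensional root-finder, assuming $\sigma$ is nondecreasing in $T$. A sketch using bisection, with a toy model $\sigma(T)=\varepsilon^{2}T$ (the linear growth in $T$ one expects for weak potentials $\Phi=\varepsilon G$; both the model and the function name are ours, for illustration):

```python
def landau_time(sigma, t_hi=1e12, tol=1e-12):
    # solve sigma(T_L) = 1 by bisection, assuming sigma is nondecreasing in T;
    # if there is no solution below t_hi we set T_L = infinity, as in the text
    if sigma(t_hi) < 1.0:
        return float("inf")
    t_lo = 0.0
    while t_hi - t_lo > tol * max(t_hi, 1.0):
        mid = 0.5 * (t_lo + t_hi)
        if sigma(mid) < 1.0:
            t_lo = mid
        else:
            t_hi = mid
    return 0.5 * (t_lo + t_hi)

# toy model: sigma(T) = eps^2 * T gives T_L ~ eps^(-2), which indeed
# diverges in the kinetic limit eps -> 0, consistent with (KinLim)
```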
Using the time scales $T_{BG},\ T_{L}$, we reformulate condition \eqref{KinLim} as
\begin{equation}
\ell_\varepsilon = \min\left\{ T_{BG},T_{L}\right\} \gg1\ \ \text{as\ \ }\varepsilon
\rightarrow 0\label{S9E4}%
\end{equation}
and we deduce whether the evolution of $f := \lim_{\varepsilon \to 0} f_\varepsilon$ is described by a Landau or a Boltzmann equation.
In fact the relevant time scale to describe the evolution of $f$ is the shortest among $T_{BG},\ T_{L},$
and the condition \eqref{S9E4} can take place in different ways:
\begin{align}
T_{L} & \gg T_{BG}\ \ \text{as\ \ }\varepsilon\rightarrow0\label{S9E5a}\\
T_{L} & \ll T_{BG}\ \ \text{as\ \ }\varepsilon\rightarrow0\label{S9E5b}\\
\frac{T_{L}}{T_{BG}} & \rightarrow C_{\ast}\in\left( 0,\infty\right)
\ \text{as\ \ }\varepsilon\rightarrow0\;. \label{S9E5c}%
\end{align}
If (\ref{S9E5a}) holds the dynamics of $f$ will be
described by a linear Boltzmann equation.
If (\ref{S9E5b}) holds, the small deflections produced in the trajectories of the
tagged particle by the part $\Phi_{L}$ of the potential modify $f\left(
t,x,v\right) $ faster than the binary encounters with scatterers yielding
deflections of order one. In this case, if in addition the deflections of the tagged particle are uncorrelated on time scales of order $T_L$,
the evolution will be given by a suitable linear Landau equation.
Finally, if
(\ref{S9E5c}) takes place then both processes, binary collisions and collective
small deflections of the tagged particle, are relevant in the evolution of $f$,
and we can have combinations of the above equations.
A technical point must be addressed here. Due to the presence of the
cutoff $M$ in (\ref{S4E4})-(\ref{S4E5}) some care is needed concerning the
precise meaning of (\ref{S4E8}). Indeed, $\Phi_{L}$ yields also contributions due to binary collisions within
distances of order $M\lambda_{\varepsilon}$ from the scatterers. This
implies that, if (\ref{S9E5a}) holds, we have
$\sigma\left( T_{BG};\varepsilon\right) \simeq\delta\left( M\right) >0.$
The natural way of giving a precise meaning to the condition (\ref{S9E5a})
will be then the following. The dynamics of $f$ will be
described by the linear Boltzmann equation if we have%
\begin{equation}
\limsup_{\varepsilon\rightarrow0}\sigma\left( T_{BG};\varepsilon\right)
\leq\delta\left( M\right) \ \ \text{with\ }\lim_{M\rightarrow\infty}%
\delta\left( M\right) =0\;, \label{S9E6}%
\end{equation}
that is, the small deflections due to interactions between the
tagged particle and the scatterers at distances larger than $M\lambda
_{\varepsilon}$ become irrelevant as $M\rightarrow\infty$ in the time scale
$T_{BG}$.
In the rest of this section, we discuss the specific form of the kinetic equations obtained in the
different cases.
\subsection{Kinetic equations describing the evolution of the distribution
function $f:$ the Boltzmann case.}
In this section we describe the evolution of the function $f_{\varepsilon}$ defined in (\ref{eq:exp}) as
$\varepsilon\rightarrow0,$ assuming \eqref{S9E4} and (\ref{S9E6}) (i.e.\,\eqref{S9E5a}). Before doing that,
we briefly review the associated two-body problem. The following discussion
is classical. For further details we refer to \cite{LL1}.
\subsubsection{Scattering problem for $\Phi_{B}$.\label{ScattPb}}
We consider the mechanical problem of the deflection of a
single particle of mass $m=1$ and initial ($t \to -\infty$) velocity $v \neq 0$ moving in a field $\Phi_{B}$, whose centre
(scatterer source) is
at rest. Due to Definition \ref{LandLength}, it is natural to use here $\lambda_{\varepsilon}$ as unit
of length. We define $y=\frac{x}{\lambda_{\varepsilon}}$ and
focus on the scattering problem associated to the potential $\Psi_{B}\left(
y\right) : =\Psi\left( y\right) \eta\left( \frac{y}{M}\right) .$ We write $V = |v|$ and $r = |y|$.
\begin{figure}[th]
\centering
\includegraphics [scale=0.4]{scattering.pdf}
\caption{
The two-body scattering.
The solution of the two-body problem lies in a plane, which is taken to be the plane
of the page. The scatterer lies at the origin. The scalar $b$ is the impact parameter and $\chi=\chi(b, V)$ is the scattering angle.
\label{fig:scattering}
}
\end{figure}
\noindent
The path of the particle in the central field is
symmetrical about a line from the centre to the nearest point in the orbit,
hence the two asymptotes to the orbit make equal angles $\phi_{0}$ with this line (see e.g.\,Figure \ref{fig:scattering}).
The angle of scattering is seen from Fig.\,\ref{fig:scattering} to be%
\begin{equation}
\chi\left( b,V\right) =\pi-2\phi_{0}\;.\label{DA}%
\end{equation}
We will say that the scattering problem is well defined for a given value of
$V$ and $b$ if the solution of the equations\
\begin{equation}
\frac{dy}{dt}=v\ ,\ \ \frac{dv}{dt}=-\nabla\Psi_{B}\left( y\right)
\label{S9E1a}%
\end{equation}
satisfies:
\begin{equation}
\lim_{t\rightarrow-\infty}\left\vert y\left( t\right) \right\vert
=\lim_{t\rightarrow\infty}\left\vert y\left( t\right) \right\vert
=\infty,\ \lim_{t\rightarrow-\infty}\left\vert v\left( t\right) \right\vert
=\lim_{t\rightarrow\infty}\left\vert v\left( t\right) \right\vert =V.
\label{P1E6}%
\end{equation}
The effective potential reads%
\begin{equation}
\Psi_{eff}\left( r\right) =\Psi_{B}(r)+\frac{b^{2}V^2}{2r^{2}} \label{P1E6a}%
\end{equation}
where $r=\left\vert y\right\vert .$ A sufficient condition for the scattering
problem to be well defined for a given value of $V$ and almost all the values
of $b$ is that the set of nontrivial solutions $r$ of the simultaneous equations%
\begin{equation}
\frac{\Psi_{B}{}^{\prime}(r)}{V^{2}}=\frac{b^{2}}{r^{3}}\ ,\ \ \frac{\Psi
_{B}(r)}{V^{2}}+\frac{b^{2}}{2r^{2}}=\frac{1}{2}%
\label{eq:sing2bP}
\end{equation}
is nonempty only for a finite set of values $b>0.$ We will assume that
this condition is satisfied for all the families of potentials considered.
A standard analysis of Newton equations shows that the scattering angle is given by
\begin{equation}
\chi\left( b,V\right) =\pi-2\phi_{0}=\pi-2\int_{r_{\ast}}^{+\infty
}\frac{b\,dr}{r^{2}\sqrt{1-2\Psi_{eff}\left( r\right)/V^2 }} \label{eq:SE_0a}%
\end{equation}
where $r_{\ast}$ is the distance of nearest approach of the particle to the
scatterer (defined as the largest solution to the second equation in \eqref{eq:sing2bP}).
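As a concrete test of formula \eqref{eq:SE_0a}, one can take the repulsive Coulomb potential $\Psi_{B}(r)=1/r$ without cutoff (our choice of example), for which the Rutherford formula $\tan\left(\chi/2\right)=1/(bV^{2})$ gives the angle in closed form. The substitutions $r=r_{\ast}/s$ and then $s=1-u^{2}$ turn the integrand into a bounded smooth function, after which a midpoint rule suffices:

```python
import math

def chi_coulomb(b, V, n=100_000):
    # scattering angle chi(b, V) for Psi_B(r) = 1/r, with effective potential
    # Psi_eff(r) = 1/r + b^2 V^2 / (2 r^2); r_* is the nearest approach,
    # i.e. the largest root of 1 - 2 Psi_eff(r)/V^2 = 0
    rstar = 1.0 / V**2 + math.sqrt(1.0 / V**4 + b * b)

    def g(s):  # 1 - 2 Psi_eff(r_*/s) / V^2, so g(1) = 0 by construction
        return 1.0 - 2.0 * s / (V**2 * rstar) - (b * s / rstar) ** 2

    # phi_0 = (b/r_*) int_0^1 ds / sqrt(g(s)); substitute s = 1 - u^2 so the
    # integrand 2u/sqrt(g(1 - u^2)) is bounded, then apply the midpoint rule
    h = 1.0 / n
    total = sum(2.0 * (k + 0.5) * h / math.sqrt(g(1.0 - ((k + 0.5) * h) ** 2))
                for k in range(n)) * h
    return math.pi - 2.0 * (b / rstar) * total

# Rutherford's closed form for comparison: chi = 2 arctan(1 / (b V^2))
```

For instance $b=V=1$ gives $\chi=\pi/2$, in agreement with $2\arctan(1)$.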
Using spherical coordinates with north pole $\frac{v}{\left\vert v\right\vert }$ and
azimuth angle $\varphi$ characterizing the plane of scattering,
we define a mapping
\begin{equation}
\left( b, \varphi\right) \rightarrow\omega=\omega\left( b,\varphi;v\right)
\in S^{2}, \label{map}%
\end{equation}
where $\omega$ is the unit vector in the direction of the velocity of the
particle as $t \to +\infty$.
Let $\Sigma\left( v\right) \subset S^{2}$ be the image
of this mapping. Due to the symmetry of the potential $\Psi$ the
set $\Sigma\left( v\right) $ is invariant under rotations around $\frac
{v}{\left\vert v\right\vert }.$
We do not need to assume that the mapping \eqref{map} is injective
in the variable $b$.
In particular, a point $\omega\in\Sigma\left(
v\right) $ can have different preimages. We will consider only potentials
$\Psi$ for which the number of these preimages is finite. We can then define a
family of inverse functions%
\[
\omega\rightarrow\left( b_{j}\left( \omega\right) ,\varphi_{j}\left(
\omega\right) \right) \ \ ,\ \ j\in J\left( \omega\right)
\]
where $J\left( \omega\right) $ is a set of indexes which characterizes the
number of preimages of $\omega$ for each $\omega\in\Sigma\left( v\right).$
We classify the points of $\Sigma\left( v\right) $ by means of the
number of preimages, i.e.\,we write
$
\bigcup_{k=1}^{\infty}A_{k}\left( v\right) =\Sigma\left( v\right)
$
with
\begin{equation}
A_{k}\left( v\right) :=\left\{ \omega\in\Sigma\left( v\right) :\#J\left(
\omega\right) =k\right\} \ ,\ \ k=1,2,3,... \label{P3E8}%
\end{equation}
Let $\chi_{A_{k}\left(
v\right) }\left( \omega\right)$ be the characteristic function of the set $A_k(v)$.
We define the differential cross-section of the scattering problem as $\frac{1}{|v|}B$
where
\begin{align}
B\left( v;
\omega\right) & =\sum_{k=1}^{\infty}B_{k}\left( v;\omega\right) \chi_{A_{k}\left(
v\right) }\left( \omega\right) \ \ \ \ ,\ \ \ \ \omega\in\Sigma\left( v\right)\;,
\label{P1E8}\\
\frac{1}{|v|}B_{k}\left( v;\omega\right) & =\sum_{j\in J\left( \omega\right) }%
\frac{b_{j}}{|\sin\chi|}\Big|\left( \frac{\partial\chi\left( b_{j},V\right) }{\partial b}\right)
^{-1}\Big|\ \ \ \ \text{for }\omega\in A_{k}\left( v\right)\;.
\label{P1E8a}
\end{align}
Since the dynamics defined by the equations (\ref{S9E1a}) is invariant under time
reversal $\left( y,v,t\right) \rightarrow\left( y,-v,-t\right)$, the following detailed balance condition holds:%
\begin{equation}
B_{k}\left( v; \omega\right) =B_{k}\left(\left\vert v\right\vert \omega;
\frac{v}{\left\vert v\right\vert
}\right)\;. \label{P1E7}%
\end{equation}
With a slight abuse of notation, in what follows we will use the same symbols for the potential with and without the cutoff (i.e.\,letting $M \to \infty$).
\subsubsection{Kinetic equations.} \label{ss:GenBoltzCase}
We focus first on the case of Holtsmark fields with one single charge ($L=1$, $Q_1=1$).
\begin{claim}
\label{BoltGen}Suppose that (\ref{S9E6}) holds. Let us assume that $\Psi$ is
as in Definition \ref{LandLength}. Suppose that the following limit exists:%
\begin{equation}
f_{\varepsilon}\left( T_{BG}t,T_{BG}x,v\right) \rightarrow f\left(
t,x,v\right) \text{ as }\varepsilon\rightarrow0 \;. \label{P1E5a}%
\end{equation}
Then $f$
solves the linear Boltzmann equation%
\begin{equation}
\left( \partial_{t}f+v\partial_{x}f\right) \left( t,x,v\right)
=\int_{S^{2}}B\left( v;\omega \right) \left[ f\left( t,x,\left\vert
v\right\vert \omega\right) -f\left( t,x,v\right) \right] d\omega\label{P1E5}%
\end{equation}
where $B$ is as in (\ref{P1E8}).
\end{claim}
\begin{proofof}[Justification of Claim \ref{BoltGen}]
We introduce a time scale $t_{\ast}$ satisfying $1\ll t_{\ast}\ll T_{BG}.$ We
define the domain $D_{\varepsilon}\left( vt_{\ast}\right) \subset
\mathbb{R}^{3}$ as the set swept out by the sphere of radius $M\lambda
_{\varepsilon}$ initially centered at the tagged particle, moving in the
direction of $v$ during the time $t_{\ast}$ (cf.\,\eqref{S4E5}).
The motion of the tagged particle is rectilinear between collisions and is
affected by the interaction $\Phi_B$ if the
domain $D_{\varepsilon}\left( vt_{\ast}\right) $ contains one or more
scatterers. Notice that the volume of $D_{\varepsilon}\left( vt_{\ast}\right)
$ satisfies $\left\vert D_{\varepsilon}\left( vt_{\ast}\right) \right\vert
\simeq \pi V M^2\lambda_\varepsilon^{2}t_{\ast}\ll1$ where $V = |v|$.
Using the properties of the Poisson distribution it follows that the probability of finding one scatterer in
$D_{\varepsilon}\left( vt_{\ast}\right) $ is approximately $\left\vert
D_{\varepsilon}\left( vt_{\ast}\right) \right\vert $ and the probability of
finding two or more scatterers is proportional to $\left\vert D_{\varepsilon
}\left( vt_{\ast}\right) \right\vert ^{2}$ which can be neglected. We
introduce a system of spherical coordinates having $\frac{v}{\left\vert
v\right\vert }$ as north pole and we denote by $\varphi$ the azimuth angle
(as in Section \ref{ScattPb}). Assuming that there
is one scatterer in the domain $D_{\varepsilon}\left( vt_{\ast}\right) $,
the conditional probability that the scatterer has azimuth angle in the interval $\left[ \varphi,\varphi
+d\varphi\right] $ and the impact parameter of the collision is in
the interval $\left[ \bar{b},\bar{b}+d\bar{b}\right] $ can
be approximated by $\frac{Vt_*\bar{b}d\bar{b}d\varphi}{\left\vert
D_{\varepsilon}\left( vt_{\ast}\right) \right\vert }$.
We obtain deflections in the velocity $v$ of order one if $\bar{b}$ is of
order $\lambda_{\varepsilon},$ therefore it is natural to introduce the change of
variables $\bar{b}=\lambda_{\varepsilon}b.$ We conclude that the probability of a
collision in a time interval of length $t_{\ast}$ with rescaled impact parameter $b$
and azimuth angle $\varphi$ is:
\begin{equation}
\left( \lambda_{\varepsilon}\right) ^{2}Vt_{\ast}bdbd\varphi. \label{P1E1}%
\end{equation}
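The Poisson estimate above can be illustrated numerically. The following is a hedged sketch (the function name and the value $\mu=0.01$ are ours, not from the text): for a unit-intensity Poisson field, the number of scatterers in a region of small volume $\mu\simeq\left\vert D_{\varepsilon}\left( vt_{\ast}\right)\right\vert$ satisfies $P(N=1)\simeq\mu$, while $P(N\geq2)=O(\mu^{2})$ and is therefore negligible.

```python
import math, random

# Hedged numerical sketch (not part of the argument): for a unit-intensity
# Poisson field, the number of scatterers in a region of small volume mu
# satisfies P(N = 1) ~ mu and P(N >= 2) = O(mu^2), as claimed in the text.
def occupancy_probs(mu, samples=200_000, seed=0):
    rng = random.Random(seed)
    one = two_plus = 0
    for _ in range(samples):
        # sample a Poisson(mu) count by inversion (adequate since mu << 1)
        u, k, p, c = rng.random(), 0, math.exp(-mu), math.exp(-mu)
        while u > c:
            k += 1
            p *= mu / k
            c += p
        if k == 1:
            one += 1
        elif k >= 2:
            two_plus += 1
    return one / samples, two_plus / samples

p1, p2 = occupancy_probs(0.01)
# p1 is close to mu = 0.01, while p2 is of order mu^2 / 2 = 5e-5
```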
In order to derive the evolution equation for the function $f\left( t,x,v\right) $ it is convenient to compute the limit behaviour of (cf.\,(\ref{eq:exp}))
\begin{equation}
\psi_{\varepsilon}\left( t,x,v \right) :=\mathbb{E}\left[ \psi_{0,\varepsilon}\left(
T^{t}\left( x,v;\varepsilon;\cdot\right) \right) \right] \label{P3E1}%
\end{equation}
where $\psi_{0,\varepsilon}=\psi_{0,\varepsilon}\left( x,v\right) $ is a smooth test function. We
have the following duality formula%
\begin{equation}
\int\int f_{\varepsilon}\left( t,x,v\right) \psi_{0,\varepsilon}\left( x,v\right)
dxdv=\int\int f_{0}(x,v)\psi_{\varepsilon}\left( t,x,v\right) dxdv\ \ ,\ \ t>0
\label{P3E2}%
\end{equation}
which follows using (\ref{eq:exp}), the change of variables $T^{-t_{0}}\left(
x,v;\varepsilon;\cdot\right) =\left( y,w\right),$ and (\ref{P3E1}). We compute the differential equation
satisfied by the function $\psi\left( t,x,v\right)
=\lim_{\varepsilon\rightarrow0}\psi_{\varepsilon}\left( T_{BG}t,T_{BG}x,{v}\right) $ with $\psi_{0}\left( x,v \right)
=\lim_{\varepsilon\rightarrow0}\psi_{0,\varepsilon}\left( T_{BG}x,v\right) .$
Supposing that $h>0$ is small but such that $hT_{BG}\gg1$ and using the semigroup property of $T^{t}$ we obtain%
\begin{align}
\psi_{\varepsilon}\left( T_{BG}( t+h),T_{BG} x,v\right) &=\mathbb{E}\left[ \psi
_{0,\varepsilon}\left( T^{T_{BG}(t+h)}\left( T_{BG} x,v;\varepsilon;\cdot\right)
\right) \right] \nonumber \\&
=\mathbb{E}\left[ \psi_{0,\varepsilon}\left( T^{T_{BG}h}%
T^{T_{BG}t}\left( T_{BG} x,v;\varepsilon;\cdot\right) \right) \right].
\end{align}
We assume now that the deflection of the
particle during $\left[ T_{BG} t,T_{BG}(t+h)\right] $ is
independent from its previous evolution in $\left[ 0,T_{BG}t\right]$
(in particular, recollisions of the particle with the scatterers happen with small probability).
To prove this independence would be a crucial step of any rigorous proof of
the Claim \ref{BoltGen} (notice that this implies also the Markovianity of the
limit process). Then
\begin{equation}
\psi_{\varepsilon}\left( T_{BG}(t+h),T_{BG}x,v \right) \simeq \mathbb{E}\left[
\psi_{\varepsilon}\left( T_{BG} t,T^{T_{BG}h}\left( T_{BG}x,v;\varepsilon;\cdot\right)
\right) \right] \label{P3E3}%
\end{equation}
for small $\varepsilon>0$.
If the state of the particle is $\left( T_{BG} x,v\right) $
at the time $T_{BG}t,$ its new position at the time $T_{BG}(t+h)$ is
$T_{BG}x+vT_{BG}h.$ Recalling the expression (\ref{P1E1}) for the probability of a collision
with given impact parameter and azimuth angle, we deduce
\begin{align*}
& \psi_{\varepsilon}\left( T_{BG}(t+h),T_{BG}x,v \right) \\
& \simeq \psi_{\varepsilon}\left( T_{BG}t ,T_{BG}x+vT_{BG}h,v\right) \left[ 1-\left(
\lambda_{\varepsilon}\right) ^{2}VhT_{BG}\int_{0}^{2\pi}d\varphi\int
_{0}^{M}bdb\right] +\\
& +\left( \lambda_{\varepsilon}\right) ^{2}VT_{BG}h\int_{0}^{2\pi}%
d\varphi\int_{0}^{M}bdb\, \psi_{\varepsilon}\left(T_{BG} t ,T_{BG} x,\left\vert
v\right\vert \omega\left( b,\varphi;v\right) \right)
\end{align*}
where $\omega\left( b,\varphi;v\right) $ is
as in (\ref{map}). Here we neglected the probability of having more than one collision in the time interval $[T_{BG} t , T_{BG}( t +h)]$, since $h$ is sufficiently small. Using $T_{BG}\lambda_\varepsilon^{2}=1$ and a Taylor expansion in $h$,
we obtain, in the limit $h\rightarrow0$,
\begin{equation}
\frac{\partial\psi\left( t,x,v\right) }{\partial t}%
=v\frac{\partial\psi\left( t,x,v\right) }{\partial x%
}+V\int_{0}^{2\pi}d\varphi\int_{0}^{M}bdb\left[
\psi\left( t,x,\left\vert v \right\vert \omega\left(
b,\varphi;v\right) \right) -\psi\left( t,x,v\right)
\right] . \label{P3E4}%
\end{equation}
Finally, we pass to the limit in (\ref{P3E2}).
We set $f(t,x,v) =\lim_{\varepsilon
\rightarrow0}f_\varepsilon\left( T_{BG}t,T_{BG}x,v\right) $, $f_{0}(x,v)=f\left( 0,x,v\right) $
and $\bar{\psi}\left( t,x,v \right) =\psi\left( t_{0}-t
,x,v\right)$ for $t_0>0$. We get
\[
\int\int f\left( t_{0},x,v\right) \bar{\psi}\left( t
_{0},x,v\right) dxdv=\int\int f_{0}(x,v
)\bar{\psi}\left( 0,x,v\right) dxdv%
\ \ \ ,\ t_{0}>0
\]
which implies
\begin{equation}
\partial_{t}\left( \int\int f\left(t,x,v\right)
\bar{\psi}\left( t,x,v\right) dxdv\right) =0\;.
\label{P3E6}%
\end{equation}
Note that, by (\ref{P3E4}), for $0<t<t_0$ we have%
\begin{align}
\frac{\partial\bar{\psi}\left( t,x,v \right) }{\partial t}
& =-v\frac{\partial\bar{\psi}\left( t,x,v\right) }%
{\partial x}-V\int_{0}^{2\pi}d\varphi\int_{0}^{M}bdb\left[ \bar{\psi
}\left( t,x,\left\vert v\right\vert \omega\left(
b,\varphi;v\right) \right) -\bar{\psi}\left( t,x,v\right) \right]\;,\nonumber\\
\bar{\psi}\left( t_{0}, x,v\right) & =\psi_{0}\left(x,v\right) . \label{P3E7}%
\end{align}
Using
(\ref{P3E6}) and (\ref{P3E7}) and integrating by parts in the term containing
$\partial_{x}$ we obtain%
\begin{align}
0 & =\int\int\bar{\psi}\left( t,x,v\right) \partial_{t}f\left(
t,x,v\right) dxdv+\int\int\bar{\psi}\left( t,x,v\right) v\partial
_{x}f\left( t,x,v\right) dxdv-\nonumber\\
& -\int\int dxdv\int_{0}^{2\pi}d\varphi\left\vert v\right\vert \int_{0}%
^{M}bdb\left[ \bar{\psi}\left( t,x,\left\vert v\right\vert \omega\left(
b,\varphi;v\right) \right) -\bar{\psi}\left( t,x,v\right) \right]
f\left( t,x,v\right). \label{P1E3}%
\end{align}
Performing the change of variables in (\ref{map}) (cf.\,(\ref{P1E8a})) and taking then the limit $M\rightarrow\infty$ we can write
the last integral term in (\ref{P1E3}) as%
\begin{equation}
-\int\int dxdv\, \bar{\psi}\left( t,x,v\right)
\sum_{k=1}^{\infty}\int_{A_{k}\left( v\right) }\left[ B_{k}\left(\left\vert v\right\vert \omega;
\frac{v}{\left\vert v\right\vert
}\right) f\left(
t,x,\left\vert v\right\vert \omega\right) -B_{k}\left(v;\omega\right)
f\left( t,x,v\right) \right] d\omega\;.
\end{equation}
From (\ref{P1E8}), (\ref{P1E7}) and the arbitrariness of $\bar\psi$, we get \eqref{P1E5}.
\end{proofof}
\begin{remark}
The above way of obtaining the Boltzmann equation is reminiscent of the cutoff procedure used in \cite{DP} to derive the Boltzmann equation
rigorously for potentials of the form $|x|^{-s}$ for $s>2$ in two space dimensions.
\end{remark}
\begin{remark}
The condition (\ref{S9E6}) enters in the argument because we assume that the
trajectories of the particles between collisions are rectilinear. This is due
to the fact that the time $T_{L}$ required to produce deflections between collisions is much larger than the scale $T_{BG}$.
\end{remark}
\begin{remark}
If the Holtsmark field in which the particle evolves has different
types of charges we must replace (\ref{P1E5}) by the equation
\begin{equation}
\left( \partial_{t}f+v\partial_{x}f\right) \left( t,x,v\right)
=\sum_{j=1}^{L}\mu\left( Q_{j}\right)\int_{S^{2}}B\left( v;\omega;Q_j \right) \left[ f\left( t,x,\left\vert
v\right\vert \omega\right) -f\left( t,x,v\right) \right] d\omega\nonumber
\end{equation}
where $B\left( v;\omega;Q_j \right)$ is the scattering kernel obtained computing the
deflections for each type of charge. Notice that the form of $\Psi_{eff}$\ in
(\ref{P1E6a}) yields the following functional dependence for the differential cross-section
$\Sigma = B/|v|$: %
\[
\Sigma\left( v;\omega;Q_j \right)=\Sigma\left( \frac{v}{\sqrt{\left\vert Q_{j}\right\vert }};\omega;
\operatorname*{sgn}\left( Q_{j}\right)\right)
\]
i.e.\,we can reduce the computation of the scattering kernel to just two values
of the charge $\pm1$ and arbitrary particle velocities. Notice that there is
no reason to expect $B$ to take the same value for positive and negative
charges and a given value of the velocity, although it turns out that this
happens in the particular case of Coulomb potentials.
\end{remark}
\subsection{Kinetic equations describing the evolution of the distribution
function $f:$ the Landau case. \label{GenLandCase}}
In this section we consider the evolution of the function $f_{\varepsilon}$ defined in (\ref{eq:exp}) as
$\varepsilon\rightarrow0,$ assuming \eqref{S9E4} and (\ref{S9E5b}).
The latter condition is not sufficient to
obtain a Landau equation, since we also need independent deflections on
time scales of order $T_{L}.$
Under the conditions yielding
the Landau equation the deflections in times of order $T_{L}$ are gaussian
variables and the independence condition reads (cf.\,\eqref{S9E2})
\begin{equation}
\mathbb{E}\left( D(0) \, D(\tilde{T}_{L})\right)
\ll\sqrt{\mathbb{E}\left( D(0)^2 \right)
\mathbb{E}\left( D(\tilde{T}_{L})^{2}\right)
}\ \ \text{as\ \ }\varepsilon\rightarrow0 \label{I1E1}%
\end{equation}
where $\tilde{T}_{L}$ is some time scale much smaller than $T_{L}$, and we
denoted $D\left( 0\right) $ and $D( \tilde{T}_{L}) $ the deflections experienced during the time intervals $\left[0,\tilde{T}_{L}\right] ,\ \left[ \tilde{T}_{L},2\tilde{T}_{L}\right] $ respectively.
Furthermore, in order to have a well defined probability distribution
for the deflections we need the convergence as $\varepsilon\to 0$ of the
characteristic function $m_{T}^{(\varepsilon)}\left( \theta \right) $ in (\ref{S4E9}).
More precisely, restricting for simplicity to the case of one single charge
and assuming $|v|=1$, we have
\begin{equation}
\frac{1}{2}\int_{\mathbb{R}^{3}%
}\left( \theta\cdot\int_0^{\zeta T_L} dt \,\nabla_x\Phi_L\left(
vt -y,\varepsilon\right)\right) ^{2} dy \rightarrow\kappa\,\zeta\left\vert \theta_{\bot}\right\vert ^{2}%
\ \text{as\ }\varepsilon\rightarrow0 \label{I1E2}%
\end{equation}
for every $\zeta>0$ and for some constant $\kappa>0,$ where $\theta_{\bot}%
=\theta-\frac{\theta\cdot v}{\left\vert v\right\vert }\frac{v}{\left\vert
v\right\vert }.$ In particular,
\begin{equation}
m_{\zeta T_L}^{(\varepsilon)}\left( \theta \right) \rightarrow\exp\left(
-\kappa\zeta\left\vert \theta_{\bot}\right\vert ^{2}\right) \ \ \text{as\ }%
\varepsilon\rightarrow 0. \label{I1E3}%
\end{equation}
We will not try to formulate the most general set of conditions under which
(\ref{I1E2}) takes place. However, we can expect this formula to be a
consequence of the smallness of the deflections, the independence condition
(\ref{I1E1}) and the central limit theorem. We will show in
Section \ref{Examples} examples of families of potentials for which the
left-hand side of (\ref{I1E2}) converges to a different
quadratic form. For those families of potentials (\ref{I1E1}) also fails.
Moreover, the following simple argument sheds some light on the relation between
(\ref{I1E1}) and (\ref{I1E2}). Suppose that the deflections of the tagged particle
in the time intervals $\left[ 0,\zeta_{1}T_{L}\right] $ and $\left[ \zeta
_{1}T_{L},\left( \zeta_{1}+\zeta_{2}\right) T_{L}\right] $, denoted by
$D_1$ and $D_2$, are independent (at least asymptotically as $\varepsilon
\rightarrow0$). Then the characteristic function
$m_{(\zeta_1+\zeta_2) T_L}^{(\varepsilon)}\left( \theta \right)$ of the total deflection $D=D_{1}+D_{2}$
is close to a product $m_{\zeta_1 T_L}^{(\varepsilon)}\left( \theta \right) m_{\zeta_2 T_L}^{(\varepsilon)}\left( \theta \right)$.
This is possible only if the function on the right-hand side of
(\ref{I1E2}) is linear in $\zeta$ (cf.\,\eqref{I1E3}).
In the following we assume both (\ref{I1E1}) and (\ref{I1E2}) and we derive a linear Landau equation.
\begin{claim}
\label{ClaimLand}Assume that (\ref{S9E5b}) holds and suppose that
(\ref{I1E1}), (\ref{I1E2}) are also satisfied. Suppose that the
following limit exists:%
\begin{equation}
f_{\varepsilon}\left( T_{L}t,T_{L}x,v\right) \rightarrow f\left(
t,x,v\right) \text{ as }\varepsilon\rightarrow 0. \label{LimEx}%
\end{equation}
Then%
\begin{equation}
\left( \partial_{t}f+v\partial_{x}f\right) \left( t,x,v\right) =\kappa\Delta_{v_{\perp}}f\left( t,x,v \right) \label{GenLanEq}
\end{equation}
where $\Delta_{v_{\perp}}$ is the Laplace--Beltrami operator on $S^2$ (the sphere of radius $|v|=1$) and the diffusion coefficient $\kappa>0$ is defined by \eqref{I1E2}.
\end{claim}
\begin{remark}
Using Cartesian coordinates, the diffusion term in \eqref{GenLanEq} reads as
\begin{equation}
\sum_{i,j=1}^3 \frac{\partial}{\partial v_i} A_{i,j}(v) \frac{\partial}{\partial v_j} f\left(t,x,v\right)
\end{equation}
where the diffusion matrix $ A_{i,j}(v)$ is given by
\begin{equation}
A_{i,j}(v)= \kappa \left(\delta_{ij}-\frac{v_iv_j}{|v|^2}\right)\;.
\end{equation}
We refer to \cite{LL2} and \cite{S} for further details.
\end{remark}
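As a quick consistency check (a numerical sketch with our own naming and parameter values, not taken from \cite{LL2} or \cite{S}): the matrix $A_{i,j}(v)$ is $\kappa$ times the orthogonal projection onto the plane orthogonal to $v$, so it annihilates $v$ and acts as $\kappa\,\mathrm{Id}$ on $v^{\perp}$; the diffusion in \eqref{GenLanEq} therefore moves $v$ only tangentially to the sphere $\left\vert v\right\vert=\mathrm{const}$.

```python
# Hedged sketch: A_ij = kappa * (delta_ij - v_i v_j / |v|^2) is kappa times
# the orthogonal projector onto the plane v_perp: it kills v and acts as
# kappa * Id on any vector orthogonal to v.
def diffusion_matrix(v, kappa=1.0):
    n2 = sum(c * c for c in v)
    return [[kappa * ((1.0 if i == j else 0.0) - v[i] * v[j] / n2)
             for j in range(3)] for i in range(3)]

def apply(A, x):
    return [sum(A[i][j] * x[j] for j in range(3)) for i in range(3)]

v = [1.0, 2.0, 2.0]           # |v| = 3
A = diffusion_matrix(v, kappa=0.5)
Av = apply(A, v)              # -> (0, 0, 0): no diffusion along v
w = [2.0, -1.0, 0.0]          # w . v = 0
Aw = apply(A, w)              # -> 0.5 * w: isotropic diffusion in v_perp
```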
\begin{proofof}
[Justification of the Claim \ref{ClaimLand}]
Using (\ref{I1E3}) and the Fourier inversion
formula, we can compute the probability density for the transition from
$\left( T_{L}x,v\right) $ to $\left( T_{L}y,v+D\right) $ in a time interval of
length $T_{L}h $ with $D\in\mathbb{R}^{3}:$
\begin{align*}
p\left( T_{L}y,v+D;T_{L} x,v;T_{L}h\right) & =\frac{\delta\left( T_{L}y-T_{L}x-v
T_{L}h\right) T_{L}^3 }{\left( 2\pi\right) ^{3}}\int_{\mathbb{R}^{3}}\exp\left(
-\kappa h \left\vert \theta_{\bot}\right\vert ^{2}\right) \exp\left(
iD\cdot\theta\right) d\theta\\
& =\frac{\delta\left( T_{L}y-T_{L}x-v T_{L}h\right) \delta\left( D_{\parallel
}\right) T_{L}^3 }{4\kappa\pi h}\exp\left( -\frac{\left\vert D_{\perp}\right\vert
^{2}}{4\kappa h}\right)\;.
\end{align*}
Here we write $\theta=\left( \theta_{\parallel},\theta_{\bot}\right)
$ with %
\begin{equation}
\theta_{\parallel}=\theta\cdot\frac{v}{\left\vert v\right\vert },\ \ \ \
\theta_{\bot}=\theta-\left( \theta\cdot\frac{v}{\left\vert v\right\vert
}\right) \frac{v}{\left\vert v\right\vert } \label{eq:notTPTP}
\end{equation}
and use a similar
decomposition for $D=\left( D_{\parallel},D_{\perp}\right)$ and other
vectors appearing later. That is, the probability density yielding the transition
from $\left(T_{L} x,v\right) $ to $\left( T_{L} y,w\right) $ is%
\begin{equation}
p\left( T_{L} y,w;T_{L} x,v;T_{L} h\right) =\frac{\delta\left( x-y-v h \right)
\delta\left( v_{\parallel}-w_{\parallel}\right) }{4\kappa\pi h}%
\exp\left( -\frac{\left\vert v_{\perp}-w_{\perp}\right\vert ^{2}}{4\kappa
h}\right) \equiv G(y,w; x,v;h) \;.
\label{I1E4}%
\end{equation}
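As a numerical consistency check (a sketch with our own parameter choices, not part of the proof): the Fourier inversion of the Gaussian characteristic function $\exp\left(-\kappa h\left\vert\theta_{\bot}\right\vert^{2}\right)$ yields the two-dimensional heat kernel $g(D)=\frac{1}{4\pi\kappa h}\exp\left(-\frac{\left\vert D\right\vert^{2}}{4\kappa h}\right)$, which integrates to one and has second moment $2\kappa h$ per component.

```python
import math

# Hedged numerical sketch: g(D) = exp(-|D|^2 / (4 kappa h)) / (4 pi kappa h)
# on R^2 is the 2d heat kernel. Midpoint quadrature over a box of a few
# standard deviations confirms unit mass and per-component variance 2*kappa*h.
def moments(kappa=0.3, h=0.05, R=4.0, n=400):
    s2 = 4.0 * kappa * h          # exponential scale: |D|^2 / s2
    L = R * math.sqrt(s2)         # box half-width (tails beyond are ~e^{-16})
    dx = 2.0 * L / n
    mass = var = 0.0
    for i in range(n):
        x = -L + (i + 0.5) * dx
        for j in range(n):
            y = -L + (j + 0.5) * dx
            g = math.exp(-(x * x + y * y) / s2) / (4.0 * math.pi * kappa * h)
            mass += g * dx * dx
            var += x * x * g * dx * dx
    return mass, var

mass, var = moments()
# mass is close to 1, var is close to 2 * kappa * h = 0.03
```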
Using the independence assumption, we obtain the following approximation for $h$ small%
\[
f_{\varepsilon}\left( T_{L}(t+h) ,T_{L} x,v\right) \simeq\int_{\mathbb{R}^{3}}%
dy\int_{\mathbb{R}^{3}}dw\, f_{\varepsilon}\left( T_{L} t,T_{L} y,w\right) p\left(
T_{L}y,w;T_{L}x,v;T_{L}h\right)
\]
whence, using (\ref{LimEx}),
\[
f\left( t+h,x,v\right) =\int_{\mathbb{R}^{3}}dy
\int_{\mathbb{R}^{3}}dw f\left( t,y,w\right) G\left( y,w;x,v;h\right)
\]
and by (\ref{I1E4})
\begin{align*}
f\left( t+h,x,v\right) & =
\frac{1}{4\kappa\pi h}\int_{\mathbb{R}^{2}}dw_{\bot}f\left( t
,x-vh,v_{\parallel},w_{\bot}\right) \exp\left( -\frac{\left\vert
v_{\perp}-w_{\perp}\right\vert ^{2}}{4\kappa h}\right)\;.
\end{align*}
Approximating $f\left( t,y,w\right) $ by its Taylor
expansion up to second order around $w_{\bot}=v_{\bot}$ and to first order around
${y}={x}$, as well as $f\left( t+h,x,v\right) $ by its
first order Taylor expansion around $h=0$, we obtain (\ref{GenLanEq}).
\end{proofof}
\subsection{The case of deflections with correlations of order one.} \label{ss:CorrCase}
If \eqref{S9E4} and $T_{L}\ll T_{BG}$ hold but the condition (\ref{I1E1}) fails, then
the dynamics of the distribution function $f_{\varepsilon}$ cannot be
approximated by means of a Landau equation. We shall not consider this case in
detail in this paper. However, it is interesting to formulate the type of mathematical problem
describing the dynamics of the tagged particle. We discuss such a formulation in the present
section.
We denote the deflection experienced by the tagged particle at the point
$x,$ with initial velocity $v$ during a small (possibly infinitesimal) time
$h$ as $D\left( x,v;h\right) .$ We use here macroscopic variables for space and
time. As $\varepsilon\to 0$, the characteristic function of $D$ approaches
the exponential of a quadratic function and the
deflections become gaussian variables with zero average. For these variables, the form of the
correlation might be strongly dependent on the family of
potentials considered, but some general features might be expected.
First of all, due to the invariance under translation of the Holtsmark field,
the correlation functions will take the form
\begin{equation}
\mathbb{E}\left[ D\left( x_{1},v_{1};h\right) \otimes D\left(
x_{2},v_{2};h\right) \right] =K\left( x_{1}-x_{2},v_{1},v_{2}%
;h\right) \neq 0. \label{ST1}%
\end{equation}
Furthermore, we will obtain (cf.\,examples in Section \ref{Examples})
\begin{equation}
\int_{0}^{1} K\left( y(s),v_{1},v_{2};h\right)
ds<\infty\label{ST2}%
\end{equation}
for any curve $y(s)$ of class $C^1$. This integrability condition might be expected if (\ref{I1E1}) fails, because
otherwise one could have large deflections at small distances and $T_L$ would not coincide with the scale of the
mean free path (cf.\,\eqref{S9E4}).
Finally, the equation yielding the evolution of the tagged particle can be written
as
\begin{equation}
x\left( \tau+d\tau\right) -x\left( \tau\right) =v(\tau)d\tau\ \ ,\ \ v\left(
\tau+d\tau\right) -v\left( \tau\right) =D\left( x\left( \tau\right)
,v\left( \tau\right) ;d\tau\right)\;,\label{ST3}%
\end{equation}
where the order of
magnitude of $D$ is not necessarily $d\tau$, but it might
be of order $(d\tau)^{\alpha'}$ for some $0<\alpha' < 1$ (see e.g.\,Section \ref{subsec:KE1}, $\frac{1}{2}<s<1$).
A typical example which can be derived for a family of power law potentials is the following
(cf.\,Theorem \ref{CorrPowLaw}-{\em (ii)}):
\begin{equation}
K\left( y,v_{1},v_{2};d\tau\right) =\frac{1}{\left\vert y\right\vert
^{\alpha}}\Lambda\left( \frac{y}{\left\vert y\right\vert} ,v_{1},v_{2}\right) \left( d\tau\right) ^{2}\ \ ,\ y\neq0\ ,\ \ \ 0<\alpha
<1\ \ ,\ \ \label{ST6}%
\end{equation}%
\begin{equation}
K\left( 0,v_{1},v_{2};d\tau\right) =\Lambda\left( v_{1},v_{2}\right)
\left( d\tau\right) ^{2-\alpha}\;,\label{ST7}
\end{equation}
where $\Lambda$ is a matrix valued function. Note that, due to the integrability of the factor $\frac{1}{\left\vert y\right\vert^{\alpha}}$ in \eqref{ST6},
the condition \eqref{ST7} likely does not play a relevant role.
It would be interesting to clarify
if (\ref{ST1})-(\ref{ST7}) forms a well defined mathematical problem which can be
solved for a suitable choice of initial values $x\left( 0\right)
=x_{0},\ v\left( 0\right) =v_{0}.$\ Note that this is not a standard
stochastic differential equation, but rather
a stochastic differential equation with correlated noise.
The dynamics \eqref{ST3} has some analogies with fractional Brownian motion \cite{MvN68}.
\section{Examples of kinetic equations derived for different families of
potentials.\label{Examples}}
We now apply the formalism of Section \ref{GenKinEq} to different families
of potentials. First we check if the families of potentials considered have an associated collision length,
then we examine the behaviour of the functions $\sigma\left( T;\varepsilon
\right) $ in (\ref{S4E8a}). We compute the time scales
$T_{BG},\ T_{L}$ defined in (\ref{BG}), (\ref{S4E8}) and check if (\ref{S9E4})
and some of the conditions (\ref{S9E5a})-(\ref{S9E6}) and \eqref{I1E1}-\eqref{I1E2} hold. Finally,
we write the corresponding kinetic equations.
\subsection{Kinetic time scales for potentials with the form $\Phi\left( x,\varepsilon\right)
=\Psi\left( \frac{\left\vert x\right\vert }{\varepsilon}\right)
.$\label{Psiovereps}}
We first consider the family of potentials
\begin{equation}
\left\{ \Phi\left( x,\varepsilon\right) ;\ \varepsilon>0\right\} =\left\{
\Psi\left( \frac{\left\vert x\right\vert }{\varepsilon}\right)
;\ \varepsilon>0\right\} \label{S9E7}%
\end{equation}
where $\Phi(\cdot,\varepsilon) \in {\cal C}_s$, $s > 1/2$. We have
$\Psi\in C^{2}\left( \mathbb{R}^{3}\setminus\left\{ 0\right\}
\right) $ and
\begin{equation}
\Psi\left( y\right) \sim\frac{A}{\left\vert y\right\vert ^{s}}%
\ \ \text{as\ \ }\left\vert y\right\vert \rightarrow\infty\ \ ,\ \ \ \nabla
\Psi\left( y\right) \sim-\frac{sAy}{\left\vert y\right\vert ^{s+2}%
}\ \ \text{as\ \ }\left\vert y\right\vert \rightarrow\infty\label{PsiAs}%
\end{equation}
for some real number $A\neq0$ (cf.\,\eqref{eq:defCCs}).
By Definition \ref{LandLength},
the collision length is $\lambda_{\varepsilon}=\varepsilon$ and
\begin{equation}
T_{BG}=\frac{1}{\varepsilon^{2}}\;. \label{TG1}%
\end{equation}
It is possible to state a general result for these potentials which depends
only on the power-law asymptotics of $\Psi\left( y\right) $ as
$\left\vert y\right\vert \rightarrow\infty.$
\begin{theorem}
\label{ProofGenPow} Consider the family of potentials
(\ref{S9E7}) with $\Phi(\cdot,\varepsilon) \in {\cal C}_s$, $s>1/2$.
Suppose that the corresponding Holtsmark field
defined in Section \ref{Holtsm} is spatially homogeneous.
Then
\begin{equation}
\limsup_{\varepsilon\rightarrow0}\sigma\left( T_{BG};\varepsilon\right)
\leq\delta\left( M\right) \ \ \text{with\ }\lim_{M\rightarrow\infty}%
\delta\left( M\right) =0 \quad \text{if} \quad s>1 \label{S5E2a}%
\end{equation}
where $\sigma\left( T;\varepsilon
\right) $ is defined in (\ref{S4E8a}).
If we define $T_{L}$ by means of (\ref{S4E8}) we have
\begin{equation}
T_{L}\sim\frac{1}{4\pi A^{2}\varepsilon^{2}\log\left( \frac{1}{\varepsilon
}\right) }\ \ \text{as\ \ }\varepsilon\rightarrow0 \quad \text{if } \quad s=1
\label{P4E1}
\end{equation}
and
\begin{equation}
T_{L}\sim\left( \frac{1}{W_{s}A^{2}\varepsilon^{2s}}\right) ^{\frac{1}%
{3-2s}}\ \ \text{as\ \ }\varepsilon\rightarrow0 \quad \text{if} \quad \frac{1}%
{2}<s<1 \label{P4E2}
\end{equation}
with
\begin{equation}
W_{s}=\sup_{|\theta|=1} \int_{\mathbb{R}^{3}}d\xi \left[s^2\left( \theta_{\bot}\cdot\xi_{\bot}\right)
^{2}\left( \int_{0}^{1}\frac{d\tau}{\left\vert v\tau-\xi\right\vert ^{s+2}%
}\right) ^{2}+ \theta_{\parallel}^2\,
\left( \frac{1 }{\left\vert
\xi\right\vert ^{s}}-\frac{1 }{\left\vert \xi-v\right\vert ^{s}}\right) ^{2}
\right] \label{P7E1}
\end{equation}
(cf.\,\eqref{eq:notTPTP}), whence
\[
T_{L}\ll T_{BG}\ \ \text{as } \ \ \varepsilon\rightarrow0 \quad \text{if }\quad
s\leq1 .
\]
\end{theorem}
\begin{remark}
We recall that the assumption of spatial homogeneity for the Holtsmark
field means that we need to consider neutral distributions of charges
or distributions with a background charge if $s\leq1.$
\end{remark}
\subsubsection{Proof of Theorem \ref{ProofGenPow}: general strategy.}
We use the splitting
(\ref{S4E4}) which in the case of the family (\ref{S9E7}) becomes
$\Phi\left( x,\varepsilon\right) =\Phi_{B}\left( x,\varepsilon\right)
+\Phi_{L}\left( x,\varepsilon\right)$ with
\begin{equation}
\Phi_{B}\left( x,\varepsilon\right) =\Psi_{B,M}\left( \frac{x}{\varepsilon
}\right) \ \ ,\ \ \Phi_{L}\left( x,\varepsilon\right) =\Psi_{L,M}\left(
\frac{ x}{\varepsilon}\right) \;,
\ \label{S5E0}%
\end{equation}
\begin{equation}
\Psi_{B,M}\left( y\right) =\Psi\left( y\right)
\eta\left( \frac{y}{M}\right) \ \ ,\ \ \Psi
_{L,M}\left( y\right) =\Psi\left( y \right) \left[
1-\eta\left( \frac{ y }{M}\right) \right] . \label{S5E0a}%
\end{equation}
We study the asymptotic
properties of the function%
\begin{equation}
\sigma\left( T;\varepsilon\right) =\sup_{\left\vert
\theta\right\vert =1}\int_{\mathbb{R}^{3}}dy\left( \theta\cdot\int_{0}%
^{T}\nabla_{x}\Phi_{L}\left( vt-y,\varepsilon\right) dt\right) ^{2}
\label{S5E1}%
\end{equation}
separately for the three different ranges $s>1,\ s=1$ and $\frac{1}{2}<s<1$
and assuming $|v|=1$.
\subsubsection{Proof of Theorem \ref{ProofGenPow}: the case $s>1$.%
\label{PowSLarge}}
Our goal is to prove (\ref{S5E2a}). To this end we first notice that
(\ref{S4E8a}) and (\ref{TG1}) imply%
\[
\sigma\left( T_{BG};\varepsilon\right) =\frac{1}{\varepsilon^{8}}%
\sup_{\left\vert \theta\right\vert =1}\int_{\mathbb{R}^{3}}d\xi\left(
\frac{\theta}{\varepsilon^{2}}\cdot\int_{0}^{1}\nabla\Psi_{L,M}\left(
\frac{v\tau-\xi}{\varepsilon^{3}}\right) d\tau\right) ^{2}\;.%
\]
We use that $\left\vert \nabla\Psi_{L,M}\right\vert \leq\frac
{C\chi_{\left[ M,\infty\right) }\left( \left\vert y\right\vert \right)
}{\left\vert y\right\vert ^{s+1}}$ for some $C>0$, where we denote as $\chi_{A}$ the
characteristic function of the set $A.$ Then%
\begin{equation}
\sigma\left( T_{BG};\varepsilon\right) \leq\frac{C}{\varepsilon^{8}}%
\int_{\mathbb{R}^{3}}d\xi\left( \frac{\varepsilon^{3\left( s+1\right) }%
}{\varepsilon^{2}}\int_{0}^{1}\frac{\chi_{\left[ M,\infty\right) }\left(
\frac{\left\vert v\tau-\xi\right\vert }{\varepsilon^{3}}\right) }{\left\vert
v\tau-\xi\right\vert ^{s+1}}d\tau\right) ^{2} . \label{P4E4a}%
\end{equation}
We split the integral as $\int_{\mathbb{R}^{3}}\left[ \cdot\cdot
\cdot\right] d\xi=\int_{\left\{ \left\vert \xi\right\vert \geq2\right\}
}\left[ \cdot\cdot\cdot\right] d\xi+\int_{\left\{ \left\vert
\xi\right\vert <2\right\} }\left[ \cdot\cdot\cdot\right] d\xi$
and notice that the first term is
\begin{equation}
\int_{\left\{ \left\vert \xi\right\vert \geq2\right\} }\left[ \cdot
\cdot\cdot\right] d\xi\leq C\varepsilon^{6\left( s-1\right) }%
\int_{\left\{ \left\vert \xi\right\vert \geq2\right\} }\frac{d\xi}{\left\vert \xi\right\vert ^{2\left(
s+1\right) }}\leq C\varepsilon^{6\left( s-1\right) } .\label{P4E4}%
\end{equation}
We split again the second domain as
$$\left\{ \left\vert \xi_{\bot}\right\vert
\geq\frac{M\varepsilon^{3}}{2},\ \left\vert \xi\right\vert <2\right\}
\cup\left\{ \left\vert \xi_{\bot}\right\vert <\frac{M\varepsilon^{3}}%
{2},\ \left\vert \xi\right\vert <2\right\} $$
where $\xi=\left( \xi_{\parallel},\xi_{\bot}\right)$ (as in \eqref{eq:notTPTP}).
Note that
$\left\vert v\tau-\xi\right\vert ^{s+1}=\left(
\left( \tau-\xi_{\parallel}\right) ^{2}+\left( \xi_{\bot}\right)^{2}\right)
^{\frac{s+1}{2}}$.
If $\left\vert \xi_{\bot
}\right\vert \geq\frac{M\varepsilon^{3}}{2}$ we use%
\begin{equation}
\int_{0}^{1}\frac{\chi_{\left[ M,\infty\right) }\left( \frac{\left\vert
v\tau-\xi\right\vert }{\varepsilon^{3}}\right) }{\left\vert v\tau
-\xi\right\vert ^{s+1}}d\tau\leq\int_{-\infty}^{\infty}\frac{d\tau}{\left(
\tau^{2}+\left( \xi_{\bot}\right) ^{2}\right) ^{\frac{s+1}{2}}}\leq
\frac{C}{\left\vert \xi_{\bot}\right\vert ^{s}}. \label{P4E5}%
\end{equation}
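The bound \eqref{P4E5} rests on the exact scaling $\int_{-\infty}^{\infty}\frac{d\tau}{\left(\tau^{2}+b^{2}\right)^{\frac{s+1}{2}}}=\frac{C_{s}}{b^{s}}$, obtained by substituting $\tau=bu$. A small numerical check (our own sketch; the values $s=3/2$ and $b=1/2$ are hypothetical choices):

```python
import math

# Hedged check of the scaling behind (P4E5): for s > 0,
#   I(b) = \int_{-inf}^{inf} dt / (t^2 + b^2)^{(s+1)/2} = C_s / b^s,
# since t = b * u gives I(b) = b^{-s} * I(1). Midpoint rule on [-T, T].
def I(b, s, T=200.0, n=200_000):
    dt = 2.0 * T / n
    return sum(dt / ((-T + (k + 0.5) * dt) ** 2 + b * b) ** ((s + 1) / 2)
               for k in range(n))

s = 1.5
r = I(0.5, s) / I(1.0, s)
# r is close to (1 / 0.5)^s = 2**1.5
```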
Otherwise if $\left\vert \xi_{\bot}\right\vert <\frac{M\varepsilon
^{3}}{2},$ since the integrand is different from zero only if
$\frac{\left\vert v\tau-\xi\right\vert }{\varepsilon^{3}}\geq M,$ we must
have $\left\vert \tau-\xi_{\parallel}\right\vert \geq\frac{M\varepsilon^{3}}{2}$ to
have a nontrivial contribution, hence
\begin{align}
\int_{0}^{1}\frac{\chi_{\left[ M,\infty\right) }\left( \frac{\left\vert
v\tau-\xi\right\vert }{\varepsilon^{3}}\right) }{\left\vert v\tau
-\xi\right\vert ^{s+1}}d\tau & \leq\int_{\left[ 0,1\right] \cap
\left\{ \left\vert \tau-\xi_{\parallel}\right\vert \geq\frac{M\varepsilon^{3}}%
{2}\right\} }\frac{d\tau}{\left( \left( \tau-\xi_{\parallel}\right) ^{2}+\left( \xi_{\bot}\right)^{2}\right) ^{\frac{s+1}{2}}}
\nonumber\\
& \leq \frac{1}{\left\vert \xi_{\bot}\right\vert ^{s}}\int_{\left\{ \left\vert
\tau\right\vert \geq\frac{M\varepsilon^{3}}{2\left\vert \xi_{\bot}\right\vert
}\right\} }\frac{d\tau}{\left( \tau^{2}+1\right) ^{\frac{s+1}{2}}}
\leq\frac{C}{M^{s}\varepsilon^{3s}}. \label{P4E6}
\end{align}
Combining (\ref{P4E5}) and (\ref{P4E6}) we get
\[
\int_{0}^{1}\frac{\chi_{\left[ M,\infty\right) }\left( \frac{\left\vert
v\tau-\xi\right\vert }{\varepsilon^{3}}\right) }{\left\vert v\tau
-\xi\right\vert ^{s+1}}d\tau\leq\frac{C}{\left( \max\left\{ \left\vert
\xi_{\bot}\right\vert ,M\varepsilon^{3}\right\} \right) ^{s}}.%
\]
Plugging this estimate into the term $\int_{\left\{ \left\vert \xi
\right\vert <2\right\} }\left[ \cdot\cdot\cdot\right] d\xi$ (cf.\,(\ref{P4E4a})) we arrive at
\begin{align*}
& \frac{1}{\varepsilon^{8}}\int_{\left\{ \left\vert \xi\right\vert
<2\right\} }d\xi\left( \frac{\varepsilon^{3\left( s+1\right) }%
}{\varepsilon^{2}}\int_{0}^{1}\frac{\chi_{\left[ M,\infty\right) }\left(
\frac{\left\vert v\tau-\xi\right\vert }{\varepsilon^{3}}\right) }{\left\vert
v\tau-\xi\right\vert ^{s+1}}d\tau\right) ^{2}\\
& \leq\frac{C}{\varepsilon^{8}}\int_{-2}^{2}d\xi_{\parallel}\int_{\left\{
\left\vert \xi_{\bot}\right\vert \leq M\varepsilon^{3}\right\} }d\xi_{\bot
}\left( \frac{\varepsilon^{3\left( s+1\right) }}{\varepsilon^{2}}\frac
{1}{\left( M\varepsilon^{3}\right) ^{s}}\right) ^{2}+\\
& +\frac{C}{\varepsilon^{8}}\int_{-2}^{2}d\xi_{\parallel}\int_{\left\{ \left\vert
\xi_{\bot}\right\vert >M\varepsilon^{3}\right\} }d\xi_{\bot}\left(
\frac{\varepsilon^{3\left( s+1\right) }}{\varepsilon^{2}}\frac{1}{
\left\vert \xi_{\bot}\right\vert ^{s}}\right) ^{2}\leq \frac{C}{M^{2\left(
s-1\right) }}\;.
\end{align*}
Using this and (\ref{P4E4}) we obtain%
\[
\sigma\left( T_{BG};\varepsilon\right) \leq C\varepsilon^{6\left(
s-1\right) }+\frac{C}{M^{2\left( s-1\right) }}.
\]
Taking first the limit $\varepsilon\rightarrow0$ (using $s>1$) and then
$M\rightarrow\infty$ we obtain (\ref{S5E2a}). This gives the result of Theorem
\ref{ProofGenPow} for $s>1.$
\subsubsection{Proof of Theorem \ref{ProofGenPow}: the case $s=1$.%
\label{TimeScaleCoulomb}}
Our goal is to prove the existence of $T_{L}\ll T_{BG}$ such that
$\sigma\left( T_{L};\varepsilon\right) \sim1$ as $\varepsilon\rightarrow0$,
using (\ref{PsiAs}) with $s=1$. We assume that $A=1,$
since $A$ can be simply absorbed as a rescaling factor.
Changing variables $y=T\xi,\ t=T\tau$ in (\ref{S4E8a}) we can
write:%
\begin{align*}
\sigma\left( T;\varepsilon\right)
=\frac{T^3}{\varepsilon^{2}}%
\sup_{\left\vert \theta\right\vert =1}\int_{\mathbb{R}^{3}}d\xi\left(
T\theta \cdot\int_{0}^{1}\nabla\Psi_{L,M}\left(
\frac{T(v\tau-\xi)}{\varepsilon}\right) d\tau\right) ^{2}\;.%
\end{align*}
Let $Z = 1 - \eta$ (cf.\,\eqref{S5E0a}).
Using (\ref{PsiAs}) we obtain%
\begin{equation} \label{P4E8}
\sigma\left( T;\varepsilon\right) =T\varepsilon^{2}\sup
_{\left\vert \theta_{\bot}\right\vert =1}\int_{\mathbb{R}^{3}}d\xi\left(
\left( \theta_{\bot}\cdot\xi_{\bot}\right) \int_{0}^{1}\frac{Z\left(
\frac{T\left\vert v\tau-\xi\right\vert }{M\varepsilon}\right)
}{\left\vert v\tau-\xi\right\vert ^{3}}d\tau\right) ^{2}\left[ 1+\zeta\left(
M;\varepsilon\right) \right] %
\end{equation}
where $\limsup_{\varepsilon\rightarrow0}|\zeta\left( M;\varepsilon\right)|
\leq\zeta\left( M\right) $ and $\zeta\left( M\right) \rightarrow0$ as
$M\rightarrow\infty.$ In $\zeta\left( M;\varepsilon\right)$ we collect:
(i) the errors coming from the computation of the gradient via \eqref{S5E0a}
with the approximation \eqref{eq:defCCs} (with $s=1$ and $A=1$), which can be
estimated as in the previous section; (ii) the contribution of the longitudinal component
$\theta_\parallel$. Note that the latter yields a term of order
\begin{equation}\label{eq:longcont}
T\varepsilon^{2}\int_{\mathbb{R}^{3}}d\xi\left( \frac{Z\left(
\frac{T\left\vert \xi\right\vert }{M\varepsilon}\right) }{\left\vert
\xi\right\vert }-\frac{Z\left( \frac{T\left\vert \xi-v\right\vert
}{M\varepsilon}\right) }{\left\vert \xi-v\right\vert }\right) ^{2}\;,
\end{equation}
which can be estimated by $CT\varepsilon^{2}.$ Since the integral in \eqref{P4E8} will produce an additional contribution of the order $\log\big(\frac{1}{\varepsilon}\big)$, \eqref{eq:longcont} can be absorbed into $\zeta\left(
M;\varepsilon\right)$.
We decompose the integral $\int_{\mathbb{R}^{3}}\left[ \cdot\cdot
\cdot\right] d\xi$ in (\ref{P4E8}) as $\int_{\left\{ \left\vert
\xi\right\vert \geq2\right\} }\left[ \cdot\cdot\cdot\right] d\xi
+\int_{\left\{ \left\vert \xi\right\vert <2\right\} }\left[ \cdot
\cdot\cdot\right] d\xi$ and notice that the first one is bounded, so that
\begin{equation}
\sigma\left( T;\varepsilon\right) =T\varepsilon^{2}\sup
_{\left\vert \theta_{\bot}\right\vert =1}\int_{\left\{ \left\vert \xi\right\vert <2\right\} }d\xi\left(
\left( \theta_{\bot}\cdot\xi_{\bot}\right) \int_{0}^{1}\frac{Z\left(
\frac{T\left\vert v\tau-\xi\right\vert }{M\varepsilon}\right)
}{\left\vert v\tau-\xi\right\vert ^{3}}d\tau\right) ^{2}\left[ 1+\zeta\left(
M;\varepsilon\right) \right] + O(T\varepsilon^2)\;.\nonumber%
\end{equation}
Arguing similarly we obtain that the main contribution to the integral is due to a cylinder with principal axis $(\xi_{\parallel}\in [0,1],\xi_{\perp}=0)$ and radius much smaller than $1$. In particular
\begin{equation}
\label{eq:intEsth}
\sigma\left( T;\varepsilon\right) \sim T\varepsilon^{2}\int_{0}%
^{1}d\xi_{\parallel}\int_{\left\{ \left\vert \xi_{\bot}\right\vert \leq1\right\}
}d\xi_{\bot}\left( \theta_{\bot}\cdot\xi_{\bot}\right) ^{2}\left( \int_{0}^{1}\frac{Z\left(
\frac{T\left\vert v\tau-\xi\right\vert }{M\varepsilon}\right)
}{\left\vert v\tau-\xi\right\vert ^{3}}d\tau\right)^{2}\;,
\end{equation}
where the error is negligible in the limit $\varepsilon \to 0 $ and then $M \to \infty$.
Note that the region with $\left\vert \xi_{\bot}\right\vert \leq\frac{2M\varepsilon}{T}$
yields also a contribution of order $T\varepsilon^{2}$ (as it might be seen
estimating the integral $\int_{0}^{1}\frac{d\tau}{\left\vert v\tau
-\xi\right\vert ^{3}}$ as $\frac{C}{\left( \frac{M\varepsilon}{T}\right)
^{2}}$\ if $\frac{M\varepsilon}{T}\leq\left\vert \xi_{\bot}\right\vert
\leq\frac{2M\varepsilon}{T}$).
We then have the approximation%
\[
\sigma\left( T;\varepsilon\right) \sim T\varepsilon^{2}\int_{0}%
^{1}d\xi_{\parallel}\int_{\left\{ \frac{2M\varepsilon}{T}\leq\left\vert \xi_{\bot
}\right\vert \leq1\right\} }d\xi_{\bot}\left( \theta_{\bot}\cdot\xi_{\bot}\right)
^{2}\left( \int_{0}^{1}\frac{Z\left( \frac{T\left\vert v\tau-\xi\right\vert
}{M\varepsilon}\right) }{\left\vert v\tau-\xi\right\vert ^{3}}d\tau\right)
^{2}\;.
\]
Finally, for any $\xi_{\parallel}\in\left( 0,1\right) $ we have%
\begin{equation}
\int_{0}^{1}\frac{Z\left( \frac{T\left\vert v\tau-\xi\right\vert
}{M\varepsilon}\right) }{\left\vert v\tau-\xi\right\vert ^{3}}d\tau\sim
\frac{1}{\left\vert \xi_{\bot}\right\vert ^{2}}\int_{-\infty}^{\infty}%
\frac{d\tau}{\left( \tau^{2}+1\right) ^{\frac{3}{2}}}=\frac{2}{\left\vert
\xi_{\bot}\right\vert ^{2}}\ \ \text{as }\left\vert \xi_{\bot}\right\vert
\rightarrow0,\ \ \left\vert \xi_{\bot}\right\vert \geq \frac{M\varepsilon}{T}\;.
\label{P4E7}%
\end{equation}
The approximation is not uniform in $\xi_{\parallel}$ when $\xi_{\parallel}$ is close to $0$ or $1,$ but the
contributions of those regions (which yield terms bounded by the right-hand
side of (\ref{P4E7})) are negligible compared with those due to the region
$\xi_{\parallel}\in\left( \varepsilon_{0},1-\varepsilon_{0}\right)
,\ \varepsilon_{0}>0$ small.
Therefore%
\[
\sigma\left( T;\varepsilon\right) \sim4T\varepsilon^{2}\int_{0}^{1}d\xi
_{\parallel}\int_{\left\{ \frac{2M\varepsilon}{T}\leq\left\vert \xi_{\bot}\right\vert
\leq1\right\} }\frac{\left( \theta_{\bot}\cdot\xi_{\bot}\right) ^{2}%
}{\left\vert \xi_{\bot}\right\vert ^{4}}d\xi_{\bot}\]
and, computing the integral in $\xi_{\bot}$ using polar coordinates, we obtain%
\begin{equation}
\label{eq:frpT41c}
\sigma\left( T;\varepsilon\right) \sim 4\pi T\varepsilon^{2}\log\left(
\frac{1}{\varepsilon}\right) \left[ 1+o\left( 1\right) \right]
\ \ \text{as\ \ }\varepsilon\rightarrow0 \;.
\end{equation}
Using (\ref{S4E8}), Eq.\,(\ref{P4E1}) follows.
\subsubsection{Proof of Theorem \ref{ProofGenPow}: the case $\frac{1}{2}%
<s<1$.} \label{sss:criticaltimescale}
We proceed as in the previous section, assuming
$\frac{1}{2}<s<1$ and $A=1$.
The analogous of \eqref{P4E8}-\eqref{eq:longcont} is%
\begin{equation}
\sigma\left( T;\varepsilon\right) =s^{2}T^{3-2s}\varepsilon^{2s}%
\sup_{\left\vert \theta\right\vert =1}
(J_1({\theta_{\bot}})+J_2(\theta_\parallel)) \left[ 1+\zeta\left(
M;\varepsilon\right) \right]
\label{P4E9}%
\end{equation}
where
\begin{equation}
J_{1}({\theta_{\bot}}):=\int_{\mathbb{R}^{3}}d\xi \left(
\theta_{\bot}\cdot\xi_{\bot}\right) ^{2}\left( \int_{0}^{1}\frac{Z\left(
\frac{T\left\vert v\tau-\xi\right\vert }{M\varepsilon}\right) d\tau
}{\left\vert v\tau-\xi\right\vert ^{s+2}}\right) ^{2}
\label{P4E9_1}%
\end{equation}
and
\begin{equation}
J_{2}({\theta_{\parallel}}):=\frac{1}{s^2}\int_{\mathbb{R}^{3}}d\xi \,
\theta_{\parallel}^2\,
\left( \frac{Z\left(
\frac{T\left\vert \xi\right\vert }{M\varepsilon}\right) }{\left\vert
\xi\right\vert^s }-\frac{Z\left( \frac{T\left\vert \xi-v\right\vert
}{M\varepsilon}\right) }{\left\vert \xi-v\right\vert^s }\right) ^{2}\;.
\label{P4E9_2}%
\end{equation}
Notice however that, for this range of values of $s$, the
region where $\left\vert v\tau-\xi\right\vert \leq
\frac{M\varepsilon}{T}$ gives a negligible contribution, because the resulting
integral is finite, differently from the previous case. Thus we can replace
$Z$ by $1$ introducing a negligible error.
The integral $W_{s}$ in (\ref{P7E1}) is a
numerical constant depending only on $s$ and%
\[
\sigma\left( T;\varepsilon\right) \sim W_{s}T^{3-2s}\varepsilon^{2s}%
\]
as $\varepsilon\rightarrow0$, from which
(\ref{P4E2}) follows.
This concludes the proof of Theorem \ref{ProofGenPow}.
\subsection{Computation of the correlations for potentials
$\Phi\left( x,\varepsilon\right) =\Psi\left( \frac{\left\vert x\right\vert
}{\varepsilon}\right) $.}
We now estimate the correlations of the deflections for families of potentials
with the form $\Phi\left( x,\varepsilon\right) =\Psi\left( \frac{\left\vert
x\right\vert }{\varepsilon}\right) .$ We restrict to the cases where
$T_{L}\ll T_{BG},$ i.e.\,to potentials with the asymptotics (\ref{PsiAs}) with
$s\leq1$. We indicate in this section
the deflection vector during the time interval $\left[ 0,\tilde{T}%
_{L}\right] $ as%
\begin{equation}
D\left( x_{0},v;\tilde{T}_{L}\right) =\int_{0}^{\tilde{T}_{L}}\nabla_x\Phi_{L}\left(
x_{0}+vt,\varepsilon\right)\omega \, dt \;,
\label{eq:DevCorr}
\end{equation}
where $\tilde{T}_{L}=h T_{L},$ $h>0$, $x_{0}\in
\mathbb{R}^{3}$ and $v\in\mathbb{R}^{3}$ with $\left\vert v\right\vert =1$.
\begin{theorem}
\label{CorrPowLaw}Suppose that the assumptions in Theorem \ref{ProofGenPow} hold.
\begin{itemize}
\item[(i)] Let us assume that $s=1$ and $T_{L}$ is as in (\ref{P4E1}). Then (cf.\,\eqref{I1E1})
\begin{align}
&\mathbb{E}\left( D\left( x_{0},v;\tilde{T}_{L}\right) D\left(
x_{0}+v\tilde{T}_{L},v;\tilde{T}_{L}\right) \right)\nonumber \\& \ll\sqrt{\mathbb{E}%
\left( \left( D\left( x_{0},v;\tilde{T}_{L}\right) \right) ^{2}\right)
\mathbb{E}\left( \left( D\left( x_{0}+v\tilde{T}_{L},v;\tilde{T}%
_{L}\right) \right) ^{2}\right) }\ \ \text{as\ \ }\varepsilon\rightarrow 0.
\label{I1E1a}%
\end{align}
\item[(ii)] Suppose that $\frac{1}{2}<s<1$ and $T_{L}$ is as in (\ref{P4E2}). Let
$x_{1},x_{2}\in\mathbb{R}^{3}$, $\left( x_{2}-x_{1}\right)
=T_{L}y$ with $y\in\mathbb{R}^{3}$ and
$v_1,v_2\in S^2$.
Then (cf.\,Section \ref{ss:CorrCase})%
\[
\mathbb{E}\left( D\left( x_{1},v_{1};\tilde{T}_{L}\right) D\left(
x_{2},v_{2};\tilde{T}_{L}\right) \right) \sim K\left(y,v_{1}%
,v_{2};h\right) \ \ \text{as\ \ }\varepsilon\rightarrow 0\;,
\]
where:
\begin{equation}
K\left(y,v_{1}
,v_{2};h\right) \sim\frac{\Lambda\left(e\right) h^{2}}{\left\vert y\right\vert ^{2s-1}}\text{\ \ as\ \ }%
\left\vert y\right\vert \rightarrow\infty \label{I1E1b}%
\end{equation}
with $e = \frac{y}{|y|}$ and
\[
\Lambda\left(e\right) :=\frac{s^2}{W_s}\int_{\mathbb{R}^{3}}d\eta\frac{\left[
\eta\otimes \left( \eta-e\right) \right]}{\left\vert \eta\right\vert ^{s+2}\left\vert \eta-e\right\vert ^{s+2}%
}\;
\]
and
$K( 0,v_{1},v_{2};h) =O(h^{3-2s})$. Moreover, as $\varepsilon \to 0$ the correlation matrix is
\begin{equation}
\label{eq:corrmatex}
\frac{\mathbb{E}\left( D\left( x_{1},v_{1};\tilde{T}_{L}\right) \otimes
D\left( x_{2},v_{2};\tilde{T}_{L}\right) \right) }{\sqrt{\mathbb{E}\left(
\left( D\left( x_{1},v_{1};\tilde{T}_{L}\right) \right) ^{2}\right)
\mathbb{E}\left( \left( D\left( x_{2},v_{2};\tilde{T}_{L}\right) \right)
^{2}\right) }} \sim\frac{K\left( y,v_{1},v_{2};h\right) }{C_{s}h^{3-2s}}
\end{equation}
where $C_s>0$ is given by \eqref{eq:CsCorr} below.
\end{itemize}
\end{theorem}
\begin{remark}
Notice that we approximate in the case (i) the trajectory of the particle by
rectilinear ones. In an analogous manner, we could prove that
the correlations also tend to zero if we consider particles separated by
distances larger than $\tilde{T}_{L}.$ In case (ii) we obtain that the
correlations between the particles at distances of the order of the mean free
path do not vanish as $\varepsilon\rightarrow0.$
\end{remark}
\subsubsection{Proof of Theorem \ref{CorrPowLaw}: the case $s=1$.}
We assume, without loss of generality, that $x_{0}=0.$
The result \eqref{eq:frpT41c} in Section \ref{TimeScaleCoulomb} shows that the asymptotic
behaviour in the right-hand side of \eqref{I1E1a} is given, up to
a multiplicative constant, by $\tilde{T}_{L}\varepsilon^{2}\log\left( \frac
{1}{\varepsilon}\right) ,$ which is of order $h$ as $\varepsilon
\rightarrow0$\ if $\tilde{T}_{L}=h T_{L}.$ Therefore, we need to prove
that $\mathbb{E}\left( D\left( 0,v;\tilde{T}_{L}\right) D\left( v\tilde
{T}_{L},v;\tilde{T}_{L}\right) \right) \ll1$ as $\varepsilon\rightarrow0.$
We get%
\begin{align}
&\mathbb{E}\left( D\left( 0,v;\tilde{T}_{L}\right) D\left( v\tilde{T}%
_{L},v;\tilde{T}_{L}\right) \right) \nonumber\\&=\int_{\mathbb{R}^{3}}d\xi\left(
\int_{0}^{\tilde{T}_{L}}\nabla_{x}\Phi_{L}\left( vt_{1}-\xi,\varepsilon
\right) dt_{1}\right) \otimes\left( \int_{\tilde{T}_{L}}^{2\tilde{T}_{L}%
}\nabla_{x}\Phi_{L}\left( vt_{2}-\xi,\varepsilon\right) dt_{2}\right)\;.
\label{A1}%
\end{align}
Arguing as in Section \ref{TimeScaleCoulomb} (cf.\,\eqref{P4E8}) we can prove that the longitudinal contributions (i.e.\,those parallel to $v$) are negligible. Therefore we only consider the components on the plane orthogonal to $v$.
Using the rescaling of variables $t_{1}=T_{L}\tau_1,\ t_{2}=T_{L}\tau_{2},\ \xi=T_{L} y$, we obtain
that
\eqref{A1} is bounded by $\frac{\left( T_{L}\right) ^{5}}{\varepsilon^{2}}$ times
(cf.\,\eqref{S5E0})
$$
\int_{0}^{h
}d\tau_{1}\int_{h}^{2h}d\tau_{2}\int_{\mathbb{R}^{3}}dy\left\vert
\Psi_{L,M}^{\prime}\left( \frac{T_{L}\left( v\tau_{1}-y\right)
}{\varepsilon}\right) \right\vert \left\vert \Psi_{L,M}^{\prime}\left(
\frac{T_{L}\left( v\tau_{2}-y\right) }{\varepsilon}\right) \right\vert
\frac{\left\vert y_{\bot}\right\vert }{\left\vert v\tau_{1}-y\right\vert
}\frac{\left\vert y_{\bot}\right\vert }{\left\vert v\tau_{2}-y
\right\vert }%
$$
where $y_{\bot}$ denotes the orthogonal projection of $y$ in the plane
orthogonal to $v.$ That is, for some $C>0$ (cf.\,\eqref{S5E0a}, \eqref{PsiAs}),
\begin{align*}
& \mathbb{E}\left( D\left( 0,v;\tilde{T}_{L}\right) D\left( v\tilde {T}_{L},v;\tilde{T}_{L}\right) \right)\nonumber\\& \leq
CT_{L}\varepsilon^{2}\int_{0}^{h}d\tau_{1}\int_{h}^{2h}%
d\tau_{2}\int_{\mathbb{R}^{3}}dy \left\vert y_{\bot}\right\vert ^{2}
\frac{\chi_{\left\{ \left\vert y\right\vert \geq\frac{\varepsilon}{T_{L}%
}\right\} }}{\left\vert y\right\vert ^{3}}\frac{\chi_{\left\{ \left\vert
v\left( \tau_{2}-\tau_{1}\right) -y\right\vert \geq\frac{\varepsilon
}{T_{L}}\right\} }}{\left\vert v\left( \tau_{2}-\tau_{1}\right)
-y\right\vert ^{3}}\\
& =
CT_{L}\varepsilon^{2}\int_{-h}^{0}d\tau_{1}\int_{0}^{h}%
d\tau_{2}\int_{\mathbb{R}^{2}}dy_\bot \left\vert y_{\bot}\right\vert ^{2}
\int_{\mathbb{R}} dy_\parallel \frac{\chi_{\left\{ \left\vert y\right\vert \geq\frac{\varepsilon}{T_{L}%
}\right\} }}{(y^2_\parallel+y^2_\bot)^{\frac{3}{2}}}\frac{\chi_{\left\{ \left\vert
v\left( \tau_{2}-\tau_{1}\right) -y\right\vert \geq\frac{\varepsilon
}{T_{L}}\right\} }}{((y_\parallel-(\tau_2-\tau_1))^2+y^2_\bot)^{\frac{3}{2}}}\;.
\end{align*}
We now use the change of variables $y_\parallel=\left\vert y_{\bot}\right\vert
X.$ Estimating also the characteristic functions by
$1$, we find
\begin{align*}
\mathbb{E}\left( D\left( 0,v;\tilde{T}_{L}\right) D\left( v\tilde
{T}_{L},v;\tilde{T}_{L}\right) \right)
\leq CT_{L}\varepsilon^{2}\int_{-h}^{0}d\tau_{1}\int_{0}^{h}d\tau
_{2}\int_{\mathbb{R}^{2}}\frac{dy_{\bot}}{\left\vert y_{\bot}\right\vert
^{3}}Q\left( \frac{\tau_{2}-\tau_{1}}{\left\vert y_{\bot}\right\vert
}\right)
\end{align*}
where
\[
Q\left( s\right) =\int_{\mathbb{R}}\frac{dX}{\left( X^{2}+1\right)
^{\frac{3}{2}}}\frac{1}{\left( \left( X-s\right) ^{2}+1\right) ^{\frac
{3}{2}}}.
\]
We remark that $Q\left( s\right) \leq\frac{C}{1+\left\vert s\right\vert
^{3}}.$ Then%
\begin{align*}
\mathbb{E}\left( D\left( 0,v;\tilde{T}_{L}\right) D\left( v\tilde{T}%
_{L},v;\tilde{T}_{L}\right) \right) & \leq CT_{L}\varepsilon^{2}%
\int_{-h}^{0}d\tau_{1}\int_{0}^{h}d\tau_{2}\int_{\mathbb{R}^{2}}%
\frac{dy_{\bot}}{\left\vert y_{\bot}\right\vert ^{3}+\left( \tau
_{2}-\tau_{1}\right) ^{3}}\\
& \leq CT_{L}\varepsilon^{2}\int_{-h}^{0}d\tau_{1}\int_{0}^{h}%
\frac{d\tau_{2}}{\left( \tau_{2}-\tau_{1}\right) }
\leq CT_{L}\varepsilon^{2}
\end{align*}
which is negligible as $\varepsilon\rightarrow0,$ by using the formula \eqref{P4E1}.
\subsubsection{Proof of Theorem \ref{CorrPowLaw}: the case $s<1$.} \label{subsec:PrThCPLs<1}
By definition
\begin{align*}
& \mathbb{E}\left( D\left( x_{1},v_{1};\tilde{T}_{L}\right) \otimes
D\left( x_{2},v_{2};\tilde{T}_{L}\right) \right) \\
& =\int_{0}^{\tilde{T}_{L}}dt_{1}\int_{0}^{\tilde{T}_{L}}dt_{2}%
\int_{\mathbb{R}^{3}}d\xi\,\nabla_{x}\Phi_{L}\left( \xi-v_{1}t_{1}%
,\varepsilon\right) \otimes\nabla_{x}\Phi_{L}\left( \xi-\left( x_{2}%
-x_{1}\right) -v_{2}t_{2},\varepsilon\right)
\end{align*}
and we are interested in a situation where $\left( x_{2}-x_{1}\right)=T_{L}y, y\in\mathbb{R}^{3}$.
We use again the change of variables $\xi=T_{L}%
\eta,\ t_{j}=T_{L}\tau_{j},\ j=1,2$.
Then:%
\begin{align*}
& \mathbb{E}\left( D\left( x_{1},v_{1};\tilde{T}_{L}\right) \otimes
D\left( x_{2},v_{2};\tilde{T}_{L}\right) \right) \\
& =\frac{\left( T_{L}\right) ^{5}}{\varepsilon^{2}}\int_{0}^{h}%
d\tau_{1}\int_{0}^{h}d\tau_{2}\int_{\mathbb{R}^{3}}d\eta\,\Psi_{L,M}^{\prime
}\left( \frac{T_{L}\left( \eta-v_{1}\tau_{1}\right) }{\varepsilon}\right)
\Psi_{L,M}^{\prime}\left( \frac{T_{L}\left( \eta-y-v_{2}\tau_{2}\right)
}{\varepsilon}\right) \\&
\quad\left[ \frac{\left( \eta-v_{1}\tau_{1}\right) }{\left\vert
\eta-v_{1}\tau_{1}\right\vert }\otimes\frac{\left( \eta-y-v_{2}\tau
_{2}\right) }{\left\vert \eta-y-v_{2}\tau_{2}\right\vert }\right] \;.
\end{align*}
Using (\ref{PsiAs}) (with $A=1$) we obtain that, up to an arbitrarily
small error $\zeta\left(
M;\varepsilon\right) $ (as in \eqref{P4E8}),
\begin{align*}
& \mathbb{E}\left( D\left( x_{1},v_{1};\tilde{T}_{L}\right) \otimes
D\left( x_{2},v_{2};\tilde{T}_{L}\right) \right) \\
& \sim s^{2}\left( T_{L}\right) ^{3-2s}\varepsilon^{2s}\int_{0}^{h}%
d\tau_{1}\int_{0}^{h}d\tau_{2}\int_{\mathbb{R}^{3}}d\eta\frac{\left[ \left( \eta-v_{1}\tau_{1}\right)
\otimes\left( \eta-y-v_{2}\tau_{2}\right) \right] }{\left\vert
\eta-v_{1}\tau_{1}\right\vert ^{s+2}\left\vert \eta-y-v_{2}\tau
_{2}\right\vert ^{s+2}} \\
& =\frac{s^2}{W_{s}}\int_{0}^{h}d\tau_{1}\int_{0}^{h}d\tau_{2}\int_{\mathbb{R}^{3}}d\eta\frac{\left[ \left( \eta-v_{1}\tau_{1}\right)
\otimes\left( \eta-y-v_{2}\tau_{2}\right) \right] }{\left\vert
\eta-v_{1}\tau_{1}\right\vert ^{s+2}\left\vert \eta-y-v_{2}\tau
_{2}\right\vert ^{s+2}} \\
& =\frac{s^2}{W_{s}}\int_{0}^{h}d\tau_{1}\int_{0}^{h}d\tau_{2} \int_{\mathbb{R}^{3}} d\eta\frac{\left[ \eta\otimes\left( \eta-U\right)
\right] }{\left\vert \eta\right\vert ^{s+2}\left\vert \eta-U\right\vert
^{s+2}}\;,
\end{align*}
where we have used that $T_{L}\sim\left( \frac{1}{W_{s}\varepsilon^{2s}%
}\right) ^{\frac{1}{3-2s}}$ (cf.\,(\ref{P4E2})) and
that $U:=\left[ y+v_{2}\tau_{2}-v_{1}\tau_{1}\right] .$ We remark that the integral in the variable $\eta$
is well defined for each $U\in\mathbb{R}^{3},$ $U\neq0$ since $\frac{1}{2}<s<1.$
Let
$e=\frac{U}{\left\vert U\right\vert }$ be a unit vector in the direction of $U.$
Then, rescaling we obtain
\begin{equation}
\label{eq:proThmCorr-i}
\mathbb{E}\left( D\left( x_{1},v_{1};\tilde{T}_{L}\right)
\otimes D\left( x_{2},v_{2};\tilde{T}_{L}\right) \right) \sim
\int_{0}^{h}d\tau_{1}\int_{0}^{h}d\tau_{2}\frac{\Lambda(e)}{\left\vert
y+v_{2}\tau_{2}-v_{1}\tau_{1}\right\vert ^{2s-1}}
\end{equation}
where
\[
\Lambda\left( e\right) =\frac{s^2}{W_{s}}\int_{\mathbb{R}^{3}}d\eta\frac{\left[ \eta
\otimes\left( \eta-e\right) \right] }{\left\vert \eta\right\vert
^{s+2}\left\vert \eta-e\right\vert ^{s+2}}\ \ ,\ \ \ \left\vert e\right\vert
=1\;.
\]
We are interested now in taking $h$ much smaller than
$\left\vert y\right\vert.$ Then the following approximation holds:
\[
\mathbb{E}\left( D\left( x_{1},v_{1};\tilde{T}_{L}\right)
\otimes D\left( x_{2},v_{2};\tilde{T}_{L}\right) \right)
\sim\frac{h^{2}\Lambda(e)}{\left\vert y\right\vert ^{2s-1}}
\]
with $e = \frac{y}{|y|}$. Therefore \eqref{I1E1b} is proved.
Notice also that \eqref{eq:proThmCorr-i} implies
\[
\left\| K\left(y,v_{1}%
,v_{2};h\right) \right\| \leq\frac
{Ch^{2}}{\left( \left\vert y\right\vert + h
\right) ^{2s-1}}\;.
\]
Similarly, we can compute the typical deflection from a
given point $x_1,v_1$:
\begin{eqnarray}
&&\mathbb{E}\left( D\left( x_{1},v_{1};\tilde{T}_{L}\right) \otimes D\left(
x_{1},v_{1};\tilde{T}_{L}\right) \right) \\
&& \sim\frac{s^2}{W_{s}}\int_{0}^{h}d\tau_{1}\int_{0}^{h}d\tau_{2}\int_{\mathbb{R}^{3}}d\eta\frac{\left[ \left( \eta-v_{1}\tau_{1}\right)
\otimes\left( \eta-v_{1}\tau_{2}\right) \right] }{\left\vert
\eta-v_{1}\tau_{1}\right\vert ^{s+2}\left\vert \eta-v_{1}\tau
_{2}\right\vert ^{s+2}}\nonumber\\
&& =\frac{s^2}{W_{s}}\int_{0}^{h}d\tau_{1}\int_{0}^{h}d\tau_{2}\int_{\mathbb{R}^{3}}d\eta\frac{\left[ \eta
\otimes\left( \eta-v_{1}(\tau_{2}-\tau_1)\right) \right] }{\left\vert
\eta\right\vert ^{s+2}\left\vert \eta-v_{1}(\tau
_{2}-\tau_1)\right\vert ^{s+2}}\nonumber\\
&& =\frac{s^2}{W_{s}}\int_{0}^{h}d\tau_{1}\int_{0}^{h}\frac{d\tau_{2}}{|\tau_2-\tau_1|^{2s-1}}\int_{\mathbb{R}^{3}}d\eta\frac{\left[ \eta
\otimes\left( \eta-v_{1}\right) \right] }{\left\vert
\eta\right\vert ^{s+2}\left\vert \eta-v_{1}\right\vert ^{s+2}}\;.\nonumber
\end{eqnarray}
The last integral
is a matrix and it remains invariant under rotations $v_1\rightarrow Rv_1,$ where
$R\in O\left( 3\right) ,$ whence it is a multiple of the identity
$\sigma_{s}I$. Since the matrix is positive definite we have $\sigma_{s}>0.$
Then the integral above becomes%
\[
\frac{s^2\sigma_{s}}{W_{s}}\int_{0}^{h}d\tau_{1}\int_{0}^h \frac
{d\tau_{2}}{\left\vert \tau_{2}-\tau_{1}\right\vert ^{2s-1}}=C_{s}h^{3-2s}%
\]
where
\begin{equation}
\label{eq:CsCorr}
C_{s}=\frac{s^2\sigma_{s}}{W_{s}}\int_{0}^{1}%
d\tau_{1}\int_{0}^{1}\frac{d\tau_{2}}{\left\vert \tau_{2}-\tau_{1}\right\vert
^{2s-1}}\;.
\end{equation}
We have then obtained
\begin{align*}
\frac{\mathbb{E}\left( D\left( x_{1},v_{1};\tilde{T}_{L}\right) \otimes
D\left( x_{2},v_{2};\tilde{T}_{L}\right) \right) }{\sqrt{\mathbb{E}\left(
\left( D\left( x_{1},v_{1};\tilde{T}_{L}\right) \right) ^{2}\right)
\mathbb{E}\left( \left( D\left( x_{2},v_{2};\tilde{T}_{L}\right) \right)
^{2}\right) }}\sim \mathcal{C}\left( y,v_1,v_2;h\right)
\end{align*}
where the correlation function is given by \eqref{eq:corrmatex} and
\[
\| \mathcal{C}\left( y,v_1,v_2;h\right) \|
\leq\frac{C}{\left( 1+\frac{\left\vert y\right\vert }{
h}\right) ^{2s-1}}\;.%
\]
\subsubsection{Kinetic equations.} \label{subsec:KE1}
We can now argue as in Section \ref{GenKinEq} to write the kinetic equations
yielding the evolution for the function $f\left( t,x,v\right)$, for families of potentials
with the form $\Phi\left( x,\varepsilon\right) =\Psi\left( \frac{\left\vert
x\right\vert }{\varepsilon}\right)$. Recall that the long range behaviour is given by (\ref{PsiAs}).
\paragraph{The case $s>1$: Boltzmann equation.}
Let us assume that the scattering problem associated to the potential $\Psi$ is well posed for
every $V$ and almost every impact parameter $b$ (cf.\,Section \ref{ScattPb}). Then, since (\ref{S5E2a}) holds, Claim
\ref{BoltGen} yields that the function defined by means of (\ref{P1E5a})
solves%
\begin{equation}
\left( \partial_{t}f+v\partial_{x}f\right) \left( t,x,v\right)
=\int_{S^{2}}B\left( v;\omega \right) \left[ f\left( t,x,\left\vert
v\right\vert \omega\right) -f\left( t,x,v\right) \right] d\omega \nonumber%
\end{equation}
if there are only charges of one type and%
\begin{equation}
\left( \partial_{t}f+v\partial_{x}f\right) \left( t,x,v\right)
=\sum_{j=1}^{L}\mu\left( Q_{j}\right)\int_{S^{2}}B\left( v;\omega;Q_j \right) \left[ f\left( t,x,\left\vert
v\right\vert \omega\right) -f\left( t,x,v\right) \right] d\omega\nonumber
\end{equation}
if the distribution of scatterers contains more than one type of charges. In
these equations the scattering kernel $B$ is given by (\ref{P1E8})-(\ref{P1E8a}).
A particular case is $\Psi\left( y\right) :=\frac{1}{\left\vert
y\right\vert ^{s}},\ s>1.$ A rescaling argument allows to
restrict to $V=1$ and the expression for the kernel (with $|v|=1$) is
\[
B\left( v;\omega\right) =\frac{b}{|\sin\chi|}\Big|\left( \frac{\partial\chi\left( b\right)
}{\partial b}\right) ^{-1}\Big|%
\]
where the scattering angle $\chi\left( b\right) =\chi\left( b,1\right) $ is a monotone function of
$b$ given by%
\begin{equation}
\chi\left( b\right) =\pi-2\int_{r_{\ast}}^{+\infty
}\frac{b\,dr}{r^{2}\sqrt{1-2\Psi_{eff}\left( r\right)}}\;, \label{eq:SE_0}%
\end{equation}
with%
\[
\Psi_{eff}(r)=\frac{1}{r^s}+\frac{b^{2}}{2r^{2}}
\]
and $r_*$ the unique solution of $2\Psi_{eff}(r_*)=1$.
One finds
\begin{equation}
\frac{\partial\chi\left( b\right) }{\partial b}=\frac{2s}{b^{s+1}}\int
_{0}^{\frac{\pi}{2}}\frac{\sin(\xi)u^{s-1}}{\left( u+\frac{s}{b^{s}}%
u^{s-1}\right) ^{2}}\left\{ (s-1)\frac{\frac{1}{s-1}+\frac{su^{s-2}}{b^{s}}%
}{\left( 1+\frac{su^{s-2}}{b^{s}}\right) }-s\right\} d\xi\;,\label{eq:SE_3}%
\end{equation}
where $u$ and $\xi$ are related by%
\[
\sin^{2}\left( \xi\right) =u^{2}+2\left( \frac{u}{b}\right) ^{s}\;.%
\]
\paragraph{The case $s=1$: Landau equation.}
Combining Theorems \ref{ProofGenPow} and \ref{CorrPowLaw} we obtain that
the function $f\left( t,x,v\right) $ in (\ref{LimEx}) satisfies the
linear Landau equation, which in the case of charges of a single type has the form
\begin{equation}
\left( \partial_{t}f+v\partial_{x}f\right) \left( t,x,v\right) =\kappa\,\Delta_{v_{\perp}}f\left( t,x,v \right) \label{P4E3
\end{equation}
where $\kappa=\frac{1}{2}$ (since we have absorbed all the numerical constants in the formula for
$T_{L}$, see Section \ref{TimeScaleCoulomb}). If we have charges of different
types (cf.\,\eqref{S4E9}), the same definition of $T_{L}$ in (\ref{P4E1}) leads to
\begin{equation}
\kappa=\frac{1}{2}\sum_{j=1}^{L}\mu\left( Q_{j}\right) Q_{j}^{2}\;.
\end{equation}
Coulombian potentials, i.e.\,$\Psi\left( y\right) :=\frac{1}{\left\vert
y\right\vert }$, are particularly relevant in plasma physics and in
astrophysics where kinetic equations are used to
describe the relaxation to equilibrium. The
presence of the logarithmic term in (\ref{P4E1}) is well known in both
fields \cite{BT,LL2}. As explained in \cite{BT}, in systems where
the particles interact by means of Coulombian potentials the scatterers at
distances between $R$ and $2R$ of the trajectory with $R$
larger than the collision length contribute equally to the deflections.
This is the reason for the onset of the
logarithmic term, and also for the fact that the large amount of small
deflections yields a larger effect than Boltzmann-type collisions with individual scatterers.
\paragraph{The case $\frac{1}{2}<s<1$: correlated deflections in times of the
order of $T_L$.}
In this case we have $T_{L}\ll T_{BG}.$ However, due to Theorem
\ref{CorrPowLaw}, the correlations between the deflections in times of the order of $T_{L}$ are of order one.
Therefore the dynamics of
the distribution function $f$ cannot be approximated by means of the Landau
equation. In fact the probability distribution for the deflection in a time
$h T_{L}$ is a gaussian distribution with zero average and typical deviation of order $h
^{\frac{3-2s}{2}}$ in the limit $\varepsilon\rightarrow0$, i.e.\,we obtain (cf.\,Section \ref{sss:criticaltimescale})%
\[
m_{h T_L}^{(\varepsilon)}\left( \theta \right) \rightarrow\exp\left(
-\kappa \,h^{3-2s}\theta^{2}\right) \text{ as
}\varepsilon\rightarrow0, \quad \kappa>0\;.
\]
Diffusive processes (in the space of velocities) like the ones given by
the Landau equation are characterized by typical deviations of order
$\sqrt{h}$ which only take place for $s=1.$ Therefore, a diffusive process
cannot be expected if $\frac{1}{2}<s<1$, but rather a stochastic differential equation with correlations as explained in Section \ref{ss:CorrCase}, see \eqref{ST1}-\eqref{ST7}.
\begin{remark}
\label{RangeInter} The analysis of the function $\sigma\left( T;\varepsilon\right)$
given by \eqref{S5E1} allows to determine the set of scatterers which
influence the dynamics of the tagged particle. We will denote this set
as `domain of influence'. This corresponds to the regions in the $y$ variable which
determine the asymptotics of the function $\sigma\left( T_{L};\varepsilon
\right) $ as $\varepsilon\rightarrow0$ if $T_{L}\lesssim T_{BG}.$ Assume
that $|v|=1$ and that the tagged particle is in the origin at time zero.
For the potentials with the form (\ref{S9E7}) considered in this section,
we obtain that in the case $s=1$ the domain of influence are the scatterers
located in $\left( x_{\parallel},x_{\bot}\right) $ with
$x_{\parallel}\in\left[ 0,T_{L}\right] $ and $k_{1}\leq\left\vert x_{\bot
}\right\vert \leq\frac{T_{L}}{k_{1}},$ where $k_{1}$ is a large number. These
scatterers are responsible for the logarithmic correction which determines the
time scale $T_{L}$ (see Section \ref{TimeScaleCoulomb}). If $s<1,$ the domain of influence is
$x_{\parallel}\in\left[ 0,T_{L}\right] $ and
$\left\vert x_{\bot}\right\vert \leq k_{1}T_L$ (see Section \ref{sss:criticaltimescale}).
\end{remark}
\begin{remark} \label{rem:2D2}
In the two-dimensional case, we may consider families of potentials of the form
$\Phi\left( x,\varepsilon\right)
=\Psi\left( \frac{\left\vert x\right\vert }{\varepsilon}\right)$ with
$\Phi(\cdot,\varepsilon) \in {\cal C}_s$ for any $s>0$ and we always obtain
spatially homogeneous Holtsmark fields (see Remark \ref{rem:2D1}).
Nevertheless, unlike in three dimensions, the Coulombian decay does not correspond
to the crossing of the Boltzmann and the Landau time scales. Indeed, $T_{BG} = \frac{1}{\varepsilon}$,
\eqref{S5E2a} holds if $s>\frac{1}{2}$ and $T_L$ diverges as $\frac{1}{\varepsilon\log\frac{1}{\varepsilon}}$ if $s=\frac{1}{2}$.
Moreover, $T_{L}\sim\left( \frac{1}{W_{s}A^{2}\varepsilon^{2s}}\right) ^{\frac{1}%
{2-2s}}\ \ \text{as\ \ }\varepsilon\rightarrow0\ \ \text{if\ \ }0<s<\frac{1}{2}\;.$ Therefore
$$ T_{L}\ll T_{BG}\ \ \text{as\ \ }\varepsilon\rightarrow0\ \ \text{if\ \ }%
s\leq \frac{1}{2} \ \ \text{in two dimensions}.$$
Moreover, \eqref{I1E1a} is valid for $s = \frac{1}{2}$ and a Landau equation is expected to hold.
Instead, for $0<s<\frac{1}{2}$ the correlations do not vanish on the scale of the mean free path
and a set of equations with memory arises as in \eqref{ST1}-\eqref{ST7} (with $\alpha = 2s$ and $\alpha' = 1-s$).
\end{remark}
\subsection{Potentials with the form $\Phi\left( x,\varepsilon\right)
=\varepsilon \,G\left( \left\vert x\right\vert \right) $.}
We will now consider families of potentials with a form different from
(\ref{S9E7}). We shall see how sensitively the kinetic time scales $T_{BG},\ T_{L}$
and the resulting limiting kinetic equation can depend on the
specific details of the interaction.
Let us consider%
\begin{equation}
\left\{ \Phi\left( x,\varepsilon\right) ;\varepsilon>0\right\} =\left\{
\varepsilon G\left( \left\vert x\right\vert \right) ;\varepsilon>0\right\}
\label{T1E1}%
\end{equation}
where $G \in {\cal C}_s$, $s > 1/2$. We have
$G\in C^{2}\left( \mathbb{R}^{3}\setminus\left\{ 0\right\}\right)$ and
\begin{equation}
G\left( x\right) \sim\frac{A}{\left\vert x\right\vert ^{s}}%
\ \ \text{as\ \ }\left\vert x\right\vert \rightarrow\infty,\ \ A\neq0\;.
\label{T1E2}%
\end{equation}
Note that these potentials have an intrinsic length scale of order one, i.e.\,the
order of magnitude of the average distance between scatterers. Other types of
potentials with different or additional length scales might be considered
with analogous types of arguments, but we restrict to the present case for simplicity.
Moreover, we restrict to classes of functions satisfying
\begin{equation}
G\left( x\right) \sim\frac{B}{\left\vert x\right\vert ^{r}}\ \ \text{as\ \ }%
\left\vert x\right\vert \rightarrow0\ \ ,\ \ B\neq0,\ \ r \geq 0\;. \label{T1E4}%
\end{equation}
The case $r=0$ corresponds to
\begin{equation}
G\in C^{2}\left( \mathbb{R}^{3}\right) \ \ ,\ \ G\text{ bounded near the
origin}\;. \label{T1E3}%
\end{equation}
We remark that in the case (\ref{T1E3}) the family of potentials (\ref{T1E1})
does not have a collision length (or equivalently $\lambda_{\varepsilon}=0$, $T_{BG} = +\infty$).
On the other hand, in the case (\ref{T1E4}) the collision length is%
\begin{equation}
\lambda_{\varepsilon}=\varepsilon^{\frac{1}{r}} \label{T1E5}%
\end{equation}
and the Boltzmann-Grad time scale is then (cf.\,(\ref{BG}))%
\begin{equation}
T_{BG}=\frac{1}{\varepsilon^{\frac{2}{r}}} . \label{T1E6}%
\end{equation}
\subsubsection{Kinetic time scales.}
We now study the properties of the function $\sigma\left( T;\varepsilon
\right) $ in (\ref{S4E8a}) and compare the time scale $T_{L}$ defined
by means of (\ref{S4E8}) with $T_{BG}$ given by \eqref{T1E6}.
\begin{theorem}
\label{GDiffScales} Consider the family of potentials (\ref{T1E1}) with
$G \in {\cal C}_s$, $s > 1/2$ and satisfying (\ref{T1E4}).
Suppose that the corresponding Holtsmark field
defined in Section \ref{Holtsm} is spatially homogeneous.
\begin{itemize}
\item[(i)] \ If $s>1$ and $r>1$, then $\lim\sup_{\varepsilon\rightarrow
0}\sigma\left( T_{BG};\varepsilon\right) \leq\delta\left( M\right)$ with
$\delta\left( M\right) \rightarrow0$ as $M\rightarrow\infty.$
\item[(ii)] If $s>1$, then $T_{L}\sim\frac
{1}{4\pi B^2 \varepsilon^{2}\left\vert \log\left( \varepsilon\right) \right\vert }$
as $\varepsilon\rightarrow0$ if $r=1$ and $T_{L}\sim\frac{C}{\varepsilon^{2}}$
for some $C>0$
as $\varepsilon\rightarrow0$ if $r<1$. In both cases $T_{L}\ll
T_{BG}$ as $\varepsilon\rightarrow0.$
\item[(iii)] Suppose that $s=1.$ If $r>1$ we have $\lim\sup_{\varepsilon\rightarrow
0}\sigma\left( T_{BG};\varepsilon\right) \leq\delta\left( M\right)$ with
$\delta\left( M\right) \rightarrow0$ as $M\rightarrow\infty.$ If $r=1$ we
obtain $T_{L}\sim\frac{C_{1}}{\varepsilon^{2}\left\vert \log\left( \varepsilon\right)
\right\vert }$ for some $C_{1}>0$ and therefore $T_{L}\ll T_{BG}$ as $\varepsilon\rightarrow0.$
If $r<1$ we
obtain $T_{L}\sim\frac
{C_{2}}{\varepsilon^{2}\left\vert\log\left( \varepsilon \right) \right\vert }$ for some
$C_{2}>0$ and therefore $T_{L}\ll T_{BG}$
as $\varepsilon\rightarrow0.$
\item[(iv)] Suppose that $s<1.$ If $r+2s>3$ we have $\lim\sup_{\varepsilon
\rightarrow0}\sigma\left( T_{BG};\varepsilon\right) \leq\delta\left(
M\right) $ with $\delta\left( M\right) \rightarrow0$ as $M\rightarrow
\infty.$ If $r+2s<3$ we obtain $T_{L}\sim\frac{C_{0}}{\varepsilon^{\frac{2}{3-2s}}}$ as
$\varepsilon\rightarrow0$ and then $T_{L}\ll T_{BG}=\frac{1}{\varepsilon
^{\frac{2}{r}}}$ as $\varepsilon\rightarrow0.$ If $r+2s=3$ we obtain that
$T_{L}$ and $T_{BG}$ are comparable as $\varepsilon\rightarrow0.$
\end{itemize}
\end{theorem}
\begin{proof}
We will assume in all the proof that $v=\left( 1,0,0\right) .$
We use the splitting (\ref{S4E4}) which becomes here
$\Phi\left( x,\varepsilon\right) =\Phi_{B}\left( x,\varepsilon\right)
+\Phi_{L}\left( x,\varepsilon\right)$ with
\begin{equation}
\Phi_{B}\left( x,\varepsilon\right) =\varepsilon G\left( |x|\right)
\eta\left( \frac{|x|}{M \lambda_\varepsilon}\right) \ \ ,\ \ \Phi
_{L}\left( x,\varepsilon\right) =\varepsilon G\left( |x|\right) \left[
1-\eta\left( \frac{ |x| }{M\lambda_\varepsilon}\right) \right] \;. \label{T1E7}%
\end{equation}
\medskip
\noindent
{\em Proof of (i).}
\smallskip
\noindent
Suppose that $s>1.$ Then using \eqref{S4E8a} and the fact that $\left\vert \theta_{\bot}\right\vert \leq1$
we have (arguing as in the proof of Theorem \ref{ProofGenPow})%
\begin{equation}\sigma\left( T;\varepsilon\right) =\sup_{\left\vert \theta\right\vert
=1}\int_{\mathbb{R}^{3}}d\xi\left( \theta\cdot\int_{0}^{T}\nabla_{x}\Phi
_{L}\left( vt-\xi,\varepsilon\right) dt\right) ^{2} \leq J_1 + J_2\label{P5E1}
\end{equation}
where
\begin{align}
& J_1 = C\varepsilon^{2}\int_{\mathbb{R}^{3}}d\xi
\left\vert \xi_{\bot}\right\vert
^{2}\left( \int_{0}^{T}dt\,\frac{\chi_{\left\{ \left\vert vt-\xi\right\vert
>1\right\} }}{\left\vert vt-\xi\right\vert ^{s+2}}\right) ^{2}\;,\nonumber \\%
& J_2 = C\varepsilon^{2}\int_{\mathbb{R}^{3}}d\xi
\left\vert \xi_{\bot}\right\vert
^{2}\left( \int_{0}^{T}dt\,\frac{\chi_{\left\{ M\varepsilon^{\frac{1}{r}%
}\leq\left\vert vt-\xi\right\vert \leq1\right\} }}{\left\vert
vt-\xi\right\vert ^{r+2}}\right) ^{2}\nonumber
\end{align}
for some $C>0$.
We estimate $J_{1}$ as%
\begin{align}
J_{1} & \leq C\varepsilon^{2}\int_{\mathbb{R}^{3}}d\xi\left\vert \xi_{\bot
}\right\vert ^{2}\left( \int_{0}^{T}dt\frac{\chi_{\left\{ \left\vert
\xi-vt\right\vert >1\right\} }}{\left( \left( \xi_{1}-t\right)
^{2}+\left\vert \xi_{\bot}\right\vert ^{2}\right) ^{\frac{s_{\ast}+2}{2}}}\right)
^{2}\nonumber\\
& \leq C\varepsilon^{2}\int_{-T}^{T}dt_{1}\int_{-T}^{T}H_{1}\left( t\right)
dt=CT\varepsilon^{2}\int_{-T}^{T}H_{1}\left( t\right) dt \label{P5E2}%
\end{align}
where $s_{\ast}:=\min\{s ,2\}$ and
\begin{align}
H_{1}\left( t\right) & :=\int_{\mathbb{R}^{2}}d\xi_{\bot}\left\vert \xi_{\bot
}\right\vert ^{2}\int_{-\infty}^{\infty}d\xi_{1}\frac{\chi_{\left\{
\left\vert \xi\right\vert >1\right\} }\chi_{\left\{ \left\vert
\xi-vt\right\vert >1\right\} }}{\left( \left( \xi_{1}\right)
^{2}+\left\vert \xi_{\bot}\right\vert ^{2}\right) ^{\frac{s_{\ast}+2}{2}}\left(
\left( \xi_{1}-t\right) ^{2}+\left\vert \xi_{\bot}\right\vert ^{2}\right)
^{\frac{s_{\ast}+2}{2}}}\nonumber\\
&=\frac{1}{\left\vert t\right\vert ^{2s_{\ast}-1}}\int_{\mathbb{R}^{2}}\left\vert
\xi_{\bot}\right\vert ^{2}d\xi_{\bot}\int_{-\infty}^{\infty}\frac
{\chi_{\left\{ \left\vert \xi\right\vert >\frac{1}{t}\right\} }%
\chi_{\left\{ \left\vert \xi-v\right\vert >\frac{1}{t}\right\} }d\xi_{1}%
}{\left( \left( \xi_{1}\right) ^{2}+\left\vert \xi_{\bot}\right\vert
^{2}\right) ^{\frac{s_{\ast}+2}{2}}\left( \left( \xi_{1}-1\right) ^{2}+\left\vert
\xi_{\bot}\right\vert ^{2}\right) ^{\frac{s_{\ast}+2}{2}}}. \label{P5E3}
\end{align}
In the case $t\geq \frac 1 2$ we estimate the characteristic functions by one.
Suppose first that $\left\vert \xi_{\bot}\right\vert \geq1.$ We split the
integral in $\xi_{1}$ in the regions $\left\vert \xi_{1}\right\vert
\leq\left\vert \xi_{\bot}\right\vert $ and $\left\vert \xi_{1}\right\vert
>\left\vert \xi_{\bot}\right\vert .$ The resulting contribution to the
integral in $\xi_{1}$ is of order
$\frac{1}{\left\vert \xi_{\bot}\right\vert ^{2s_{\ast}+3}}$ in the first region,
and the second region yields a similar contribution. This gives an integrable contribution in the
region $\left\vert \xi_{\bot}\right\vert \geq1.$ Suppose now that $\left\vert
\xi_{\bot}\right\vert <1.$ We first estimate the integral%
\[
\int_{-\infty}^{\infty}\frac{d\xi_{1}}{\left( \left( \xi_{1}\right)
^{2}+\left\vert \xi_{\bot}\right\vert ^{2}\right) ^{\frac{s_{\ast}+2}{2}}\left(
\left( \xi_{1}-1\right) ^{2}+\left\vert \xi_{\bot}\right\vert ^{2}\right)
^{\frac{s_{\ast}+2}{2}}}%
\]
for $\left\vert \xi_{\bot}\right\vert $ small. We separate the regions close
to $\xi_{1}=0$ and $\xi_{1}=1.$ The rest gives a bounded contribution. The
contributions near these two points are similar and can be bounded by:%
\[
\int_{-\infty}^{\infty}\frac{d\xi_{1}}{\left( \left( \xi_{1}\right)
^{2}+\left\vert \xi_{\bot}\right\vert ^{2}\right) ^{\frac{s_{\ast}+2}{2}}}\leq
\frac{C}{\left\vert \xi_{\bot}\right\vert ^{s_{\ast}+1}}.
\]
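Indeed, this bound follows from the scaling substitution $\xi_{1}=\left\vert \xi_{\bot}\right\vert u$, which gives
\[
\int_{-\infty}^{\infty}\frac{d\xi_{1}}{\left( \left( \xi_{1}\right)
^{2}+\left\vert \xi_{\bot}\right\vert ^{2}\right) ^{\frac{s_{\ast}+2}{2}}}
=\frac{1}{\left\vert \xi_{\bot}\right\vert ^{s_{\ast}+1}}\int_{-\infty
}^{\infty}\frac{du}{\left( u^{2}+1\right) ^{\frac{s_{\ast}+2}{2}}}\leq
\frac{C}{\left\vert \xi_{\bot}\right\vert ^{s_{\ast}+1}}\;,
\]
since the last integral is finite for every $s_{\ast}>-1.$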
Then, the contribution to the integral $\int_{|\xi_{\bot}|< 1} (\dots) \left\vert
\xi_{\bot}\right\vert ^{2}d\xi_{\bot}$ is bounded by $\int_{|\xi_{\bot}|< 1}\frac
{d\xi_{\bot}}{\left\vert \xi_{\bot}\right\vert ^{s_{\ast}-1}}<\infty $.
Therefore, for $t \geq \frac 1 2$, we have that:
\[
0\leq H_{1}\left( t\right) \leq\frac{C}{\left\vert t\right\vert ^{2s_{\ast}-1}}.
\]
Suppose now that $t <\frac 1 2.$ Then, since $\left\vert
\xi\right\vert >1$ and $\left\vert \xi-vt\right\vert >1$ we obtain that:%
\[
\left( \left( \xi_{1}\right) ^{2}+\left\vert \xi_{\bot}\right\vert
^{2}\right) ^{\frac{s_{\ast}+2}{2}}\left( \left( \xi_{1}-t\right) ^{2}+\left\vert
\xi_{\bot}\right\vert ^{2}\right) ^{\frac{s_{\ast}+2}{2}}\geq C\left[ 1+\left(
\left( \xi_{1}\right) ^{2}+\left\vert \xi_{\bot}\right\vert ^{2}\right)
^{s_{\ast}+2}\right]
\]
whence we obtain the following estimate:
\begin{align*}
H_{1}\left( t\right) & \leq C\int_{\mathbb{R}^{2}}d\xi_{\bot} \int_{{\mathbb R}} d\xi_1 \frac{\left\vert \xi_{\bot}\right\vert ^{2}%
}{\left[ 1+\left( \left( \xi_{1}\right) ^{2}+\left\vert \xi_{\bot
}\right\vert ^{2}\right) ^{s_{\ast}+2}\right] }\leq C<\infty
\end{align*}
since $s_{\ast}>1>\frac{1}{2}$. Hence, we obtain
\[
0\leq H_{1}\left( t\right) \leq\frac{C}{1+\left\vert t\right\vert
^{2s_{\ast}-1}}
\]
and, using (\ref{P5E2}), we get
\begin{equation}
J_{1}\leq CT\varepsilon^{2}\int_{-T}^{T}\frac{dt}{1+\left\vert t\right\vert
^{2s_{\ast}-1}}\leq CT\varepsilon^{2}\label{P5E4}
\end{equation}
if $s>1$.
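The convergence of the time integral in the last bound is immediate, since $s>1$ implies $s_{\ast}=\min\{s,2\}>1$ and hence $2s_{\ast}-1>1$:
\[
\int_{-T}^{T}\frac{dt}{1+\left\vert t\right\vert ^{2s_{\ast}-1}}\leq
\int_{-\infty}^{\infty}\frac{dt}{1+\left\vert t\right\vert ^{2s_{\ast}-1}%
}<\infty\;.
\]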
We have several possibilities for $J_{2}$ depending on the values of $r.$
Here we suppose that $r>1.$ A computation similar to the one yielding
(\ref{P5E2}) gives%
\begin{equation}
J_{2}\leq CT\varepsilon^{2}\int_{-T}^{T}H_{2}\left( t\right) dt
\label{P5E4a}%
\end{equation}
with%
\[
H_{2}\left( t\right) :=\int_{\mathbb{R}^{2}}d\xi_{\bot}\left\vert \xi_{\bot}\right\vert
^{2}\int_{-\infty}^{\infty}d\xi_{1}\frac{\chi_{\left\{ M\varepsilon
^{\frac{1}{r}}\leq\left\vert \xi\right\vert \leq1\right\} }\chi_{\left\{
M\varepsilon^{\frac{1}{r}}\leq\left\vert \xi-vt\right\vert \leq1\right\} }%
}{\left( \left( \xi_{1}\right) ^{2}+\left\vert \xi_{\bot}\right\vert
^{2}\right) ^{\frac{r+2}{2}}\left( \left( \xi_{1}-t\right) ^{2}+\left\vert
\xi_{\bot}\right\vert ^{2}\right) ^{\frac{r+2}{2}}}\;.
\]
If $\left\vert t\right\vert \geq2$
we have
\begin{align*}
& \frac{\chi_{\left\{ M\varepsilon^{\frac{1}{r}}\leq\left\vert
\xi\right\vert \leq1\right\} }\chi_{\left\{ M\varepsilon^{\frac{1}{r}}%
\leq\left\vert \xi-vt\right\vert \leq1\right\} }}{\left( \left( \xi
_{1}\right) ^{2}+\left\vert \xi_{\bot}\right\vert ^{2}\right) ^{\frac
{r+2}{2}}\left( \left( \xi_{1}-t\right) ^{2}+\left\vert \xi_{\bot
}\right\vert ^{2}\right) ^{\frac{r+2}{2}}}\\
& \leq C\frac{\chi_{\left\{ M\varepsilon^{\frac{1}{r}}\leq\left\vert
\xi\right\vert \leq1\right\} }\chi_{\left\{ M\varepsilon^{\frac{1}{r}}%
\leq\left\vert \xi-vt\right\vert \leq1\right\} }}{\left\vert t\right\vert
^{r+2}}\left[ \frac{1}{\left( \left( \xi_{1}\right) ^{2}+\left\vert
\xi_{\bot}\right\vert ^{2}\right) ^{\frac{r+2}{2}}}+\frac{1}{\left( \left(
\xi_{1}-t\right) ^{2}+\left\vert \xi_{\bot}\right\vert ^{2}\right)
^{\frac{r+2}{2}}}\right]
\end{align*}
whence the contribution of this term to the integral in (\ref{P5E4a}) can be
estimated as
\begin{align*}
& \int_{\left[ -T,T\right] \setminus\left[ -2,2\right] }H_{2}\left(
t\right) dt\\
& \leq C\int_{\left[ -T,T\right] \setminus\left[ -2,2\right] }\frac
{dt}{\left\vert t\right\vert ^{r+2}}\int_{\mathbb{R}^{2}}d\xi_{\bot}\left\vert \xi_{\bot
}\right\vert ^{2}\int_{-\infty}^{\infty}\frac{\chi_{\left\{
M\varepsilon^{\frac{1}{r}}\leq\left\vert \xi\right\vert \leq1\right\} }%
d\xi_{1}}{\left( \left( \xi_{1}\right) ^{2}+\left\vert \xi_{\bot
}\right\vert ^{2}\right) ^{\frac{r+2}{2}}}\leq C\int_{\left\{ M\varepsilon
^{\frac{1}{r}}\leq\left\vert \xi\right\vert \leq1\right\} }\frac{d^{3}\xi
}{\left\vert \xi\right\vert ^{r}}%
\end{align*}
which implies%
\begin{equation}
\int_{\left[ -T,T\right] \setminus\left[ -2,2\right] }H_{2}\left(
t\right) dt\leq C\left[ \left( \frac{1}{M\varepsilon^{\frac{1}{r}}}\right)
^{r-3+\delta}+1\right] \label{P5E7a}%
\end{equation}
(where the coefficient $\delta>0$ has been introduced to include the case $r=3$
in which the integral diverges logarithmically).
On the other hand the contribution to the integral in (\ref{P5E4a}) due to the
region $\left\{ \left\vert t\right\vert <2\right\} $ can be estimated as%
\begin{align*}
& \int_{\left[ -T,T\right] \cap\left[ -2,2\right] }H_{2}\left( t\right)
dt\\
& \leq C\int_{\left\{ M\varepsilon^{\frac{1}{r}}\leq\left\vert
\xi\right\vert \leq 1\right\} }\frac{d\xi_{\bot}}{\left\vert \xi_{\bot
}\right\vert ^{r}}\int_{-\infty}^{\infty}\frac{d\xi_{1}}{\left( \left(
\xi_{1}\right) ^{2}+\left\vert \xi_{\bot}\right\vert ^{2}\right)
^{\frac{r+2}{2}}}\leq \frac{C}{\left( M\varepsilon^{\frac{1}{r}}\right)
^{2\left( r-1\right) }}\;.
\end{align*}
Combining this inequality with (\ref{P5E7a}) we obtain%
\[
\int_{-T}^{T}H_{2}\left( t\right) dt\leq C\left[ \frac{1}{\left(
M\varepsilon^{\frac{1}{r}}\right) ^{2\left( r-1\right) }}+1\right] .
\]
Using now (\ref{P5E4a}), as well as $r>1$, we obtain%
\begin{equation}
J_{2}\leq
\frac{CT\varepsilon^{2}}{\left( M\varepsilon^{\frac{1}{r}}\right) ^{2\left(
r-1\right) }}=\frac{CT\varepsilon^{\frac{2}{r}}}{M^{2\left( r-1\right) }}.
\label{P5E8}
\end{equation}
Thanks to (\ref{P5E1}), (\ref{P5E4}), (\ref{P5E8}) as well as (\ref{T1E6}) we obtain%
\[
\sigma\left( T_{BG};\varepsilon\right) \leq CT_{BG}\varepsilon^{2}%
+\frac{CT_{BG}\varepsilon^{\frac{2}{r}}}{M^{2\left( r-1\right) }}=
C\varepsilon^{2\left( 1-\frac{1}{r}\right) }+\frac{C}{M^{2\left(
r-1\right) }}
\]
so that item (i) in Theorem \ref{GDiffScales} is proved.
\medskip
\noindent
{\em Proof of (ii).}
\smallskip
\noindent
Suppose now that $s>1$ and $r\leq1.$ We can still use formula (\ref{P5E4}), but the above
estimate for $J_{2}$ is no longer sufficient. Instead, we need to approximate the
integral%
\[
\int_{\mathbb{R}^{3}}d\xi\left( \theta\cdot\int_{0}^{T}\nabla_{x}\Phi
_{L}\left( vt-\xi,\varepsilon\right) \chi_{\left\{ M\varepsilon
^{\frac{1}{r}}\leq\left\vert vt-\xi\right\vert \leq1\right\} }dt\right)
^{2}%
\]
as $\varepsilon\rightarrow0$ for $T$ large.
Proceeding as above, we obtain that the integral is approximated by%
\begin{equation}
\varepsilon^2B^2\int_{0}%
^{T}dt_{1}\int_{-t_{1}}^{T-t_{1}}W\left( t\right) dt \label{P5E9}%
\end{equation}
where%
\begin{align*}
W\left( t\right) :=\int_{\mathbb{R}^{2}}d\xi_{\bot}(\theta_\perp \cdot \xi_\perp)^2\int_{-\infty}^{\infty
}d\xi_{1}\frac{\chi_{\left\{ M\varepsilon
^{\frac{1}{r}}\leq\left\vert \xi\right\vert \leq1\right\} }\,\chi_{\left\{
M\varepsilon^{\frac{1}{r}}\leq\left\vert \xi-vt\right\vert
\leq1\right\} }}{\left( \left( \xi
_{1}\right) ^{2}+\left\vert \xi_{\bot}\right\vert ^{2}\right) ^{\frac
{r+2}{2}}\left( \left( \xi_{1}-t\right) ^{2}+\left\vert \xi_{\bot
}\right\vert ^{2}\right) ^{\frac{r+2}{2}}}
\;.
\end{align*}
We are interested in computing (\ref{P5E9}) for $T\gg1.$ Note then that
most of the contribution is due to the region $t_{1}\in\left[
L,T-L\right] $ with $L$ large:%
\begin{equation}
\int_{0}^{T}dt_{1}\int_{-t_{1}}^{T-t_{1}}W\left( t\right) dt=\int_{L}%
^{T-L}dt_{1}\int_{-t_{1}}^{T-t_{1}}W\left( t\right) dt+O\left( L\right)
\int_{-\infty}^{\infty}W\left( t\right) dt\;. \label{P6E2}%
\end{equation}
Moreover, it turns out that $\int_{-\infty}^{\infty}\left\vert
W\left( t\right) \right\vert dt<\infty.$ The main contribution to
the first integral on the right-hand side of (\ref{P6E2}) is due to the strip $\left\{
\left\vert t\right\vert <L'\right\} $ where we can assume that $L'<L.$ We then
get
\begin{equation}
\int_{0}^{T}dt_{1}\int_{-t_{1}}^{T-t_{1}}W\left( t\right) dt=T\left[
1+o\left( 1\right) \right] \int_{-\infty}^{\infty}W\left( t\right) dt
\label{P6E2a}%
\end{equation}
where $o\left( 1\right) \rightarrow0$ as $T\rightarrow\infty.$ We have%
\begin{align}
& \int_{-\infty}^{\infty}W\left( t\right) dt\nonumber\\
& =\int_{\mathbb{R}^{2}}d\xi_{\bot
}(\theta_\perp \cdot \xi_\perp)^2\int_{-\infty}^{\infty}d\xi_1\frac{\chi_{\left\{ M\varepsilon^{\frac{1}{r}}
\leq\left\vert \xi\right\vert \leq1\right\} }}{\left( \left( \xi_{1}\right) ^{2}+\left\vert
\xi_{\bot}\right\vert ^{2}\right) ^{\frac{r+2}{2}}}\int_{-\infty}^{\infty
}dt\frac{\chi_{\left\{ M^2\varepsilon^{\frac{2}{r}}%
\leq t^2+\left\vert \xi_\perp\right\vert^2 \leq1\right\} }}{\left( t^{2}+\left\vert \xi_{\bot}\right\vert ^{2}\right)
^{\frac{r+2}{2}}}\;.\label{P6E3}%
\end{align}
We now deal separately with the cases $r=1$ and $r<1.$ In the first case,
we are going to show that \eqref{P6E3}
diverges logarithmically as $\varepsilon\rightarrow0.$
The main contribution is due to the
region where $\left\vert \xi_{\bot}\right\vert \rightarrow0$.
The characteristic function $\chi_{\left\{ M^2\varepsilon^{2}%
\leq t^2+\left\vert \xi_\perp\right\vert^2 \leq1\right\} }$
can be replaced by $ \chi_{\left\{ \left\vert t\right\vert \leq1\right\}}$ by making an error of order $O\left( \frac{M\varepsilon}{\left\vert \xi_{\bot}\right\vert ^{3}}\right)$ in the integral $\int_{-\infty}^{+\infty} dt \cdots$, and the resulting
contribution to the whole integral is bounded by a constant:
\begin{align*}
\int_{-\infty}^{\infty}W\left( t\right) dt
=\int_{\mathbb{R}^{2}}d\xi_{\bot
} \frac{(\theta_\perp \cdot \xi_\perp)^2}{|\xi_\perp|^2}
\int_{-\infty}^{\infty}d\xi_1\frac{\chi_{\left\{ M\varepsilon
\leq\left\vert \xi\right\vert \leq1\right\} }}{\left( \left( \xi_{1}\right) ^{2}+\left\vert
\xi_{\bot}\right\vert ^{2}\right) ^{\frac{3}{2}}}\int_{-\frac{1}{\left\vert \xi_{\bot}\right\vert }}^{\frac
{1}{\left\vert \xi_{\bot}\right\vert }}\frac{dt}{\left( t^{2}+1\right)
^{\frac{3}{2}}}+ O(1)\;.
\end{align*}
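As $\left\vert \xi_{\bot}\right\vert \rightarrow0$ the inner $t$-integral in the last display converges to the full-line integral, which can be evaluated explicitly from the antiderivative $\frac{t}{\sqrt{t^{2}+1}}$ of $\left( t^{2}+1\right) ^{-\frac{3}{2}}$:
\[
\int_{-\infty}^{\infty}\frac{dt}{\left( t^{2}+1\right) ^{\frac{3}{2}}%
}=\left[ \frac{t}{\sqrt{t^{2}+1}}\right] _{-\infty}^{\infty}=2\;.
\]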
For $\left\vert \xi_{\bot}\right\vert \rightarrow0$, using $\int_{-\infty}^{\infty}\frac{dt}{\left( t^{2}+1\right)
^{\frac{3}{2}}}=2$, we can approximate this by
\begin{align*}
& 2\int_{\mathbb{R}^{2}}d\xi_{\bot}
\frac{(\theta_\perp \cdot \xi_\perp)^2}{|\xi_\perp|^2}
\int_{-\infty}^{\infty}\chi_{\left\{ M\varepsilon\leq\left\vert
\xi\right\vert \leq1\right\} }\frac{d\xi_{1}}{\left( \left( \xi_{1}\right)
^{2}+\left\vert \xi_{\bot}\right\vert ^{2}\right) ^{\frac{3}{2}}}\\
& = 2\int_{\mathbb{R}^{2}}d\xi_{\bot}
\frac{(\theta_\perp \cdot \xi_\perp)^2}{|\xi_\perp|^4}
\int_{-\infty}^{\infty}%
\chi_{\left\{ \frac{M\varepsilon}{\left\vert \xi_{\bot}\right\vert }%
\leq\sqrt{X^{2}+1}\leq\frac{1}{\left\vert \xi_{\bot}\right\vert }\right\}
}\frac{dX}{\left( X^{2}+1\right) ^{\frac{3}{2}}} \\
& \sim 4 \pi\left\vert \theta_{\bot}\right\vert ^{2}\log\left( \frac{1}{\varepsilon
}\right)
\end{align*}
as $\varepsilon \to 0$. We have then obtained
\begin{equation}
\int_{\mathbb{R}^{3}}d\xi\left( \theta\cdot\int_{0}^{T}\nabla_{x}\Phi
_{L}\left( vt-\xi,\varepsilon\right) \chi_{\left\{ M\varepsilon
\leq\left\vert vt-\xi\right\vert \leq1\right\} }dt\right)
^{2}\sim4\pi B^2 T\left\vert \theta_{\bot}\right\vert ^{2}\varepsilon^2\log\left( \frac
{1}{\varepsilon}\right) \ \ \text{as\ }\varepsilon\rightarrow0 \label{P6E4a}%
\end{equation}
whence the asymptotics in Theorem \ref{GDiffScales}-(ii) for $r=1$ follow.
In the case $r<1$, (\ref{P6E3}) converges to a
finite integral as $\varepsilon\rightarrow0.$ The regions
$\left\{ \left\vert vt-\xi\right\vert >1\right\} $ and $\left\{ \left\vert
vt-\xi\right\vert \leq1\right\} $ give contributions of the same order of
magnitude and
\begin{align}
& \int_{\mathbb{R}^{3}}d\xi\left( \theta\cdot\int_{0}^{T}\nabla_{x}\Phi
_{L}\left( vt-\xi,\varepsilon\right) dt\right) ^{2}\nonumber\\
& \sim \varepsilon^2 B^2 T\int_{\mathbb{R}^{2}%
}d\xi_{\bot}(\theta_\perp \cdot \xi_\perp)^2
\int_{-\infty}^{\infty}d\xi_1
\frac{\chi_{\left\{ \left\vert \xi\right\vert
\leq1\right\} }}{\left(
\left( \xi_{1}\right) ^{2}+\left\vert \xi_{\bot}\right\vert ^{2}\right)
^{\frac{r+2}{2}}}\int_{-\infty}^{\infty}dt\frac{\chi_{\left\{ \sqrt{t^2+|\xi_\perp|^2}\leq1\right\} }}{\left( t^{2}+\left\vert
\xi_{\bot}\right\vert ^{2}\right) ^{\frac{r+2}{2}}}+\nonumber\\
& +\varepsilon^2 A^2 T\int_{\mathbb{R}^{2}%
}d\xi_{\bot}(\theta_\perp \cdot \xi_\perp)^2
\int_{-\infty}^{\infty}d\xi_1
\frac{\chi_{\left\{ \left\vert \xi\right\vert
>1\right\} }}{\left(
\left( \xi_{1}\right) ^{2}+\left\vert \xi_{\bot}\right\vert ^{2}\right)
^{\frac{s+2}{2}}}\int_{-\infty}^{\infty}dt\frac{\chi_{\left\{ \sqrt{t^2+|\xi_\perp|^2} >1\right\} }}{\left( t^{2}+\left\vert
\xi_{\bot}\right\vert ^{2}\right) ^{\frac{s+2}{2}}} \label{P6E4}%
\end{align}
as $\varepsilon\rightarrow0$ for $T$ large. In particular this implies the asymptotics of
$T_{L}$ in Theorem \ref{GDiffScales}-(ii) for $r<1.$
\medskip
\noindent
{\em Proof of (iii).}
\smallskip
\noindent
We assume that $s=1$ and $r>1.$ We can use the decomposition
(\ref{P5E1}) and bound $J_{2}$ using (\ref{P5E8}). Concerning $J_{1},$ we notice that (\ref{P5E2}), (\ref{P5E3}), (\ref{P5E3a}) are
valid for $s\leq1.$ Then
\[
\sigma\left( T;\varepsilon\right) \leq CT\log\left( T\right)
\varepsilon^{2}+\frac{CT\varepsilon^{\frac{2}{r}}}{M^{2\left( r-1\right) }}\;,
\]
therefore \eqref{T1E6} implies
$$
\sigma\left( T;\varepsilon\right) \leq
C\log\left( \frac{1}{\varepsilon^{\frac{2}{r}}%
}\right) \varepsilon^{2\left( 1-\frac{1}{r}\right) }
+\frac{C}{M^{2\left(
r-1\right) }}
$$
which proves the first
statement of (iii) in Theorem \ref{GDiffScales}.
Suppose now that $s=1$ and $r\leq1.$ The case $r=1$ is already
included in the results of Theorem \ref{ProofGenPow}. For $r<1,$ we consider the asymptotics of the quadratic form
appearing in the definition of $\sigma\left( T;\varepsilon\right) .$
Computing the contribution due to the region $\left\{ M\varepsilon
^{\frac{1}{r}}\leq\left\vert vt-\xi\right\vert \leq1\right\} $ we can
argue as in the proof of point (ii) above and obtain a term identical
to the first one on the right-hand side of (\ref{P6E4}). We are left with the contribution due to the region $\left\{ \left\vert vt-\xi
\right\vert >1\right\} .$ We remark as before that%
\begin{align}
\int_{\mathbb{R}^{3}}d\xi\left( \theta\cdot\int_{0}^{T}\nabla_{x}\Phi
_{L}\left( vt-\xi,\varepsilon\right) \chi_{\left\{ \left\vert
vt-\xi\right\vert >1\right\} }dt\right) ^{2} \sim \varepsilon^2A^2\int_{0}^{T}dt_{1}\int
_{-t_{1}}^{T-t_{1}}\tilde{W}\left( t\right) dt \label{P6E5}%
\end{align}
where, using that $s=1$, we get
\[
\tilde{W}\left( t\right) :=\int_{\mathbb{R}^{2}}d\xi_{\bot}(\theta_\perp \cdot \xi_\perp)^2\int_{-\infty}^{\infty
}d\xi_{1}\frac{\chi_{\left\{ \left\vert \xi\right\vert >1\right\} }\,\chi_{\left\{
\left\vert \xi-vt\right\vert
>1\right\} }}{\left( \left( \xi
_{1}\right) ^{2}+\left\vert \xi_{\bot}\right\vert ^{2}\right) ^{\frac
{3}{2}}\left( \left( \xi_{1}-t\right) ^{2}+\left\vert \xi_{\bot
}\right\vert ^{2}\right) ^{\frac{3}{2}}}
\;.
\]
Arguing as in the derivation of (\ref{P6E2a}) we see that the main
contribution to (\ref{P6E5}) as $T\rightarrow\infty$ is due to the regions
where $t_{1}\gg1$ and $\left( T-t_{1}\right) \gg1.$ However we cannot
derive an approximation like (\ref{P6E2a}) because the integral of $\tilde{W}$
is not finite. Indeed, proceeding as in the proof of (\ref{P5E3a}) we obtain the asymptotics%
\begin{equation}
\tilde{W}\left( t\right) \sim\frac{C'}{\left\vert t\right\vert
}\ \ \text{as\ }\left\vert t\right\vert \rightarrow\infty\label{P6E6}%
\end{equation}
with%
\[
C'=\int_{\mathbb{R}^{2}}d\eta_{\bot}\frac
{(\eta_\perp\cdot\theta_\perp)^2}{\left\vert \eta_{\bot}\right\vert ^{5}}\int_{-\infty}^{\infty
}\frac{d\eta_{1}}{\left( \left( \eta_{1}\right) ^{2}+1\right) ^{\frac
{3}{2}}\left( \left( \eta_{1}-\frac{1}{\left\vert \eta_{\bot}\right\vert
}\right) ^{2}+1\right) ^{\frac{3}{2}}}\;.
\]
Moreover, $\tilde{W}\left( t\right) $ is bounded if $\left\vert t\right\vert
$ is bounded.
Writing
\[
\int_{0}^{T}dt_{1}\int_{-t_{1}}^{T-t_{1}}\tilde{W}\left( t\right)
dt=T^{2}\int_{0}^{1}d\tau_{1}\int_{-\tau_{1}}^{1-\tau_{1}}\tilde{W}\left(
T\tau\right) d\tau\;,
\]
we obtain that the region where $\left\vert \tau\right\vert \leq\frac{L}{T}$ with $L$ large
(independent of $T$) yields a contribution of order $O(LT).$ On the
other hand, in the region where $\left\vert \tau\right\vert >\frac{L}{T}$ we
can use (\ref{P6E6}). It follows that
\begin{align}
&T^{2}\int_{0}^{1}d\tau_{1}\int_{-\tau_{1}}^{1-\tau_{1}}\tilde{W}\left(
T\tau\right) d\tau \nonumber\\&
\quad =C'T\int_{0}^{1}d\tau_{1}\int_{-\tau_{1}}^{1-\tau_{1}%
}\chi_{\left\{ \left\vert \tau\right\vert >\frac{L}{T}\right\} }\frac{d\tau
}{\left\vert \tau\right\vert }+O\left( T\right) \sim 2C'T\log\left(
T\right) +O\left( T\right) \text{ as }T\rightarrow\infty\;.
\end{align}
Therefore $\sigma(T;\varepsilon)$ is, up to a multiplicative constant, asymptotic to $\varepsilon^2 T \log T$
for $T$ large, whence the last statement of (iii) in Theorem \ref{GDiffScales} follows.
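To make this last step explicit, write (with a slight abuse of notation) $\sigma\left( T;\varepsilon\right) \sim c\,\varepsilon^{2}T\log T$ for some $c>0.$ Then the solution of $\sigma\left( T_{L};\varepsilon\right) =1$ satisfies $\log T_{L}\sim2\left\vert \log\left( \varepsilon\right) \right\vert $ as $\varepsilon\rightarrow0,$ whence
\[
T_{L}\sim\frac{1}{c\,\varepsilon^{2}\log T_{L}}\sim\frac{C_{2}}%
{\varepsilon^{2}\left\vert \log\left( \varepsilon\right) \right\vert
}\ \ \text{with\ }C_{2}=\frac{1}{2c}\;,
\]
in agreement with the statement of (iii).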
\medskip
\noindent
{\em Proof of (iv).}
\smallskip
\noindent
We finally consider the case $\frac{1}{2}<s<1.$ Suppose first that $r+2s>3.$
Then $r>3-2s>1.$ We use the splitting (\ref{P5E1}) and bound $J_{1}$
using (\ref{P5E2}), (\ref{P5E3a}) which are valid for $\frac{1}{2}<s<1.$ We
then obtain%
\[
J_{1}\leq C\varepsilon^{2}T^{3-2s}\;.
\]
We estimate $J_{2}$ using (\ref{P5E8}) with $r>1$, which yields
$J_{2}\leq\frac{CT\varepsilon^{\frac{2}{r}}}{M^{2\left( r-1\right) }}$.
Using (\ref{T1E6}) we arrive at
\[
\sigma\left( T_{BG};\varepsilon\right) \leq C\varepsilon^{\frac{2\left(
r+2s-3\right) }{r}}+\frac{C}{M^{2\left( r-1\right) }}%
\]
which proves the first statement of case (iv).
\smallskip
Suppose next that $\frac{1}{2}<s<1$ and $r+2s \leq 3.$ If $r>1$ we can use
(\ref{P5E8}) to prove $J_{2}\leq\frac{CT\varepsilon^{\frac{2}{r}}}{M^{2\left(
r-1\right) }}.$ If $r=1$, (\ref{P6E4a}) yields $J_{2}\leq
CT\varepsilon^{2}\log\left( \frac{1}{\varepsilon}\right) .$ If $r<1$ we
argue as in the derivation of (\ref{P6E4}) to obtain $J_{2}\leq CT\varepsilon
^{2}.$ We combine all these estimates into a single formula:%
\begin{equation}
J_{2}\leq CT\left( \frac{\varepsilon^{\frac{2}{r}}}{M^{2\left( r-1\right)
}}+\varepsilon^{2}\log\left( \frac{1}{\varepsilon}\right) \right)\;.
\label{P6E7}
\end{equation}
On the other hand, we can compute the contribution of the region $\left\{
\left\vert vt-\xi\right\vert >1\right\} $ to $\sigma\left( T;\varepsilon
\right) $ using (\ref{P6E5}) with $\tilde{W}\left( t\right) $ replaced by%
\[
\bar{W}\left( t\right) :=\int_{\mathbb{R}^{2}}d\xi_{\bot}(\theta_\perp \cdot \xi_\perp)^2\int_{-\infty}^{\infty
}d\xi_{1}\frac{\chi_{\left\{ \left\vert \xi\right\vert >1\right\} }\,\chi_{\left\{
\left\vert \xi-vt\right\vert
>1\right\} }}{\left( \left( \xi
_{1}\right) ^{2}+\left\vert \xi_{\bot}\right\vert ^{2}\right) ^{\frac
{s+2}{2}}\left( \left( \xi_{1}-t\right) ^{2}+\left\vert \xi_{\bot
}\right\vert ^{2}\right) ^{\frac{s+2}{2}}}
\;.
\]
We have the asymptotics%
\[
\bar{W}\left( t\right) \sim\frac{C''}{\left\vert t\right\vert ^{2s-1}%
}\ \ \text{as\ \ }\left\vert t\right\vert \rightarrow\infty
\]
with
\[
C''=\int_{\mathbb{R}^{2}}d\eta_{\bot}\frac
{(\eta_\perp\cdot\theta_\perp)^2}{\left\vert \eta_{\bot}\right\vert ^{2s+3}}\int_{-\infty}^{\infty
}\frac{d\eta_{1}}{\left( \left( \eta_{1}\right) ^{2}+1\right) ^{\frac
{s+2}{2}}\left( \left( \eta_{1}-\frac{1}{\left\vert \eta_{\bot}\right\vert
}\right) ^{2}+1\right) ^{\frac{s+2}{2}}}\;,%
\]
whence%
\[
\int_{0}^{T}dt_{1}\int_{-t_{1}}^{T-t_{1}}\bar{W}\left( t\right) dt\sim
C''T^{3-2s}\int_{0}^{1}d\tau_{1}\int_{-\tau_{1}}^{1-\tau_{1}}\frac{d\tau
}{\left\vert \tau\right\vert ^{2s-1}}\;.
\]
Then, (\ref{P6E5}) and (\ref{P6E7}) imply
\begin{equation}
\sigma\left( T;\varepsilon\right) \sim \varepsilon^{2}\left(\frac{T}{C_0}\right)^{3-2s}+O\left(
\frac{T\varepsilon^{\frac{2}{r}}}{M^{2\left( r-1\right) }}\right) +O\left(
T\varepsilon^{2}\log\left( \frac{1}{\varepsilon}\right) \right).
\label{P6E8}%
\end{equation}
Using that $3-2s>1$ and $r+2s \leq 3$ we obtain that the solution of the equation
$\sigma\left( T_{L};\varepsilon\right) =1$ satisfies%
\[
T_{L}\sim\frac{C_{0}}{\varepsilon^{\frac{2}{3-2s}}}\ \ \text{as\ \ }\varepsilon\rightarrow0\;,
\]
whence case (iv) in Theorem \ref{GDiffScales} follows.
\end{proof}
\subsubsection{Computation of the correlations.}
We now discuss under which conditions the correlations of deflections
in times of order of $T_{L}$ are negligible. We restrict our analysis
to the case $T_{L}\ll T_{BG}$.
We shall use the notation \eqref{eq:DevCorr} for the deflection vector.
\begin{theorem}
\label{CorrG}Suppose that the assumptions of the
Theorem \ref{GDiffScales} hold.
\begin{itemize}
\item[(i)] Suppose that $s>1$ and $r\leq1.$ Let $T_{L}$ be as in Theorem
\ref{GDiffScales}, case (ii). Then%
\begin{align}
&\mathbb{E}\left( D\left( x_{0},v;\tilde{T}_{L}\right) D\left(
x_{0}+v\tilde{T}_{L},v;\tilde{T}_{L}\right) \right) \nonumber \\&
\ll \sqrt{\mathbb{E}%
\left( \left( D\left( x_{0},v;\tilde{T}_{L}\right) \right) ^{2}\right)
\mathbb{E}\left( \left( D\left( x_{0}+v\tilde{T}_{L},v;\tilde{T}%
_{L}\right) \right) ^{2}\right) }\ \ \text{as\ \ }\varepsilon\rightarrow0.
\label{A1E1}%
\end{align}
\item[(ii)] Suppose that $s=1$ and $r\leq1.$ Let $T_{L}$ be as in Theorem
\ref{GDiffScales}, case (iii). Then (\ref{A1E1}) holds.
\item[(iii)] Suppose that $s<1$ and $r+2s\leq3.$ Let $T_{L}$ be as in Theorem
\ref{GDiffScales}, case (iv). Then%
\[
\liminf_{\varepsilon\rightarrow0}\frac{\mathbb{E}\left( D\left(
x_{0},v;\tilde{T}_{L}\right) D\left( x_{0}+v\tilde{T}_{L},v;\tilde{T}%
_{L}\right) \right) }{\sqrt{\mathbb{E}\left( \left( D\left(
x_{0},v;\tilde{T}_{L}\right) \right) ^{2}\right) \mathbb{E}\left( \left(
D\left( x_{0}+v\tilde{T}_{L},v;\tilde{T}_{L}\right) \right) ^{2}\right) }%
}>0
\]
for each fixed $h$.
\end{itemize}
\end{theorem}
\begin{proof}
The proof is similar to that of Theorem \ref{CorrPowLaw}, so we only
sketch the details. In the case (i) the main contribution to the deflections
is due to the region where $\left\vert \xi-vt\right\vert \leq1$ or at least
the region where $\left\vert \xi-vt\right\vert $ is bounded. On the other hand,
the computation of the correlations requires taking into account the
contribution to the integrals from regions where $\left\vert \xi-vt\right\vert $ is large.
These contributions are negligible and (\ref{A1E1}) follows. The case (ii) with $r=1$
is similar to the case (i) of Theorem \ref{CorrPowLaw}. If
$r<1$ we can use analogous arguments to show that (\ref{A1E1}) holds, since the contribution of the region $\left\{ \left\vert
\xi-vt\right\vert \leq1\right\} $ is negligible. Finally, in the case
(iii) we argue as in the case (ii) of Theorem \ref{CorrPowLaw}, since the
largest contribution to the deflections is due to the region $\left\vert
\xi-vt\right\vert >1.$
\end{proof}
\subsubsection{Kinetic equations.}
Combining Theorems \ref{GDiffScales} and \ref{CorrG} we can write the kinetic
equations yielding the evolution of the distribution $f$ using the arguments
in Section \ref{GenKinEq}.
We then obtain the following list of cases, assuming that $\Phi\left( x,\varepsilon\right)
=\varepsilon \,G\left( \left\vert x\right\vert \right) $ with $G$ satisfying
(\ref{T1E2})-(\ref{T1E4}) and restricting for simplicity to the case of a single charge.
\begin{itemize}
\item If $s>1$ and $r>1$ we claim that%
\begin{equation}
f_{\varepsilon}\left( T_{BG}t,T_{BG}x,v\right) \rightarrow f\left(
t,x,v\right) \text{ as }\varepsilon\rightarrow0 \label{A2}%
\end{equation}
with $T_{BG}=\frac{1}{\varepsilon^{\frac{2}{r}}}$ where $f$ solves the linear
Boltzmann equation%
\begin{equation}
\left( \partial_{t}f+v\partial_{x}f\right) \left( t,x,v\right)
=\int_{S^{2}}B\left( v;\omega \right) \left[ f\left( t,x,\left\vert
v\right\vert \omega\right) -f\left( t,x,v\right) \right] d\omega \label{A3}%
\end{equation}
with $B$ as in (\ref{P1E8})-(\ref{P1E8a}).
\item If $s>1$ and $r\leq1$ we have%
\begin{equation}
f_{\varepsilon}\left( T_{L}t,T_{L}x,v\right) \rightarrow f\left(
t,x,v\right) \text{ as }\varepsilon\rightarrow0 \label{A4}%
\end{equation}
where $f$ solves the Landau equation%
\begin{equation}
\left( \partial_{t}f+v\partial_{x}f\right) \left( t,x,v\right) =\kappa\,\Delta_{v_{\perp}}f\left( t,x,v \right)
\label{A5}
\end{equation}
with $\kappa$ as in \eqref{I1E2} and where $T_L$ is as in Theorem \ref{GDiffScales},
case (ii).
\item If $s=1$ and $r>1$ we obtain (\ref{A2}) with $T_{BG}=\frac
{1}{\varepsilon^{\frac{2}{r}}}$ where $f$ solves (\ref{A3}) and $B$ is as in
(\ref{P1E8})-(\ref{P1E8a}).
\item If $s=1$ and $r\leq1$ we obtain (\ref{A4}) where $f$ solves (\ref{A5})
and $T_L$ is as in Theorem \ref{GDiffScales},
case (iii).
\item Suppose that $s<1$ and $r+2s>3.$ Then $r>1$ and we obtain (\ref{A2})
with $T_{BG}=\frac{1}{\varepsilon^{\frac{2}{r}}}$ where $f$ solves the linear
Boltzmann equation (\ref{A3}) with the kernel $B$ given by (\ref{P1E8})-(\ref{P1E8a}).
\item Suppose that $s<1$ and $r+2s<3.$ Then the particle trajectories are
given by a probability measure with correlations, as described in Section \ref{ss:CorrCase}.
If $r+2s=3,$ the trajectories are given by a measure with correlations, together with pointwise large deflections described by the Boltzmann equation.
\end{itemize}
Notice that in all the examples mentioned above where the dynamics is given by
the Boltzmann equation we only need to compute the collision kernel for cross
sections associated to the potential $\frac{1}{\left\vert x\right\vert ^{r}}$
with $r>1.$
We could use similar methods to study more complicated classes of potentials,
for instance
$\Phi\left( x,\varepsilon\right) =\varepsilon^{a_{1}}\Psi_{1}\left( x\right)
+\varepsilon^{a_{2}}\Psi_{2}\left( x\right) $ or
$\Phi\left( x,\varepsilon\right) =
\Psi_{1}\left( \frac{x}{\lambda_{1,\varepsilon}}\right) +\varepsilon\Psi
_{2}\left( \frac{x}{\lambda_{2,\varepsilon}}\right)
.$ However, we will not pursue the analysis of these cases or of similar generalizations further.
\begin{remark}
\label{MixEq} We have found examples of families of potentials for
which the resulting kinetic equation contains both a Boltzmann term and terms associated with correlations over
distances of order $T_{L}$. This phenomenon takes place when the
time scale $T_{L}$ associated with the deflections produced by the collective
effect of many scatterers is comparable to the Boltzmann-Grad time scale $T_{BG}.$
In the family of potentials (\ref{T1E1})-(\ref{T1E4}) we have obtained that
all the potentials satisfying $r+2s=3,\ \frac{1}{2}< s<1$ yield such evolution. In the case
of Coulomb potentials the Landau term is the only one which appears in the
limit $\varepsilon\rightarrow0,$ due to the presence of a logarithmically
small factor in the Boltzmann-type term. It might be possible to
modify the Coulomb potential, replacing it by terms like $\frac{1}{\left\vert
x\right\vert \left( \log\left( 1+\left\vert x\right\vert \right) \right)
^{\alpha}}$, to obtain an evolution of the tagged particle described
by an equation containing both Boltzmann and Landau terms.
\end{remark}
\begin{remark}
\label{BoSD}
It is interesting to remark that whether the dynamics is described by a
Landau or a Boltzmann equation depends not only on the decay properties of
the potential but also on the size of the coefficients describing this decay.
Suppose for instance that we consider the family of potentials $\Phi\left(
x,\varepsilon\right) =\frac{\varepsilon^{r}}{\left\vert x\right\vert ^{r}%
}+\frac{\varepsilon^{\alpha}}{\left\vert x\right\vert ^{s}}$ where
$r>1,\ \alpha+2s>3,\ \frac{1}{2}<s<1.$ Then, $T_{BG}=\frac{1}{\varepsilon^{2}%
}.$ Arguing as in the derivation of (4.4), it can be seen that the possible
contribution to the Landau time of $\frac{\varepsilon^{r}}{\left\vert
x\right\vert ^{r}}$ would be much larger than $\frac{1}{\varepsilon^{2}}.$ On
the other hand the Landau time scale associated to $\frac{\varepsilon^{\alpha
}}{\left\vert x\right\vert ^{s}}$ is of order $\frac{1}{\varepsilon
^{\frac{2\alpha}{3-2s}}}$, which is much larger than $\frac
{1}{\varepsilon^{2}}$ since $\frac{2\alpha}{3-2s}>2.$ Therefore the dynamics
of $f\left( t,x,v\right) $ is given by the linear Boltzmann equation with
the cross section associated to the scattering potential $\frac{1}{\left\vert
x\right\vert ^{r}}$ in spite of the fact that the potential $\Phi\left(
x;\varepsilon\right) $ behaves for large values of $\left\vert x\right\vert $
as $\frac{\varepsilon^{\alpha}}{\left\vert x\right\vert ^{s}}$ with $s<1.$
\end{remark}
\begin{remark}
Let us consider the domain of influence as defined in Remark \ref{RangeInter}.
For the potentials with the form (\ref{T1E1}) considered in this section, assuming
$v = (1,0,0)$ and that the tagged particle is in the origin at time zero,
we obtain that in the case $s=1,\ r<1$ the domain of influence consists of the scatterers
located at $\left( x_{1},x_{\bot}\right) $ with
$x_{1}\in\left[ 0,T_{L}\right] $ and $k_{1}\leq\left\vert x_{\bot
}\right\vert \leq\frac{T_{L}}{k_{1}},$ where $k_{1}$ is a large number. If $s>1,\ r<1,$ the domain of influence is given by
$x_{1}\in\left[ 0,T_{L}\right] $ and
$\left\vert x_{\bot}\right\vert \leq k_{1}$. Finally
if $s<1$ and $r<3-2s,$ then $\left\vert x_{\bot
}\right\vert \leq k_{1}T_{L}.$
\end{remark}
\begin{remark}
In the two-dimensional case, similar computations
lead to kinetic equations as established in this section, with the following differences.
The Boltzmann-Grad time scale is $T_{BG} = \frac{1}{\varepsilon^{\frac{1}{r}}}$. The critical value
of $r$ separating the Boltzmann and the Landau behaviour is $r= \frac{1}{2}$
(instead of $1$) for $s \geq \frac{1}{2}$. For $0<s<\frac{1}{2}$ and $r>1-s$, a linear Boltzmann
equation is expected to hold, while for $r \leq 1-s$ we find that $T_L$
grows as $\varepsilon^{-\frac{1}{1-s}}$ and the correlations do not vanish on the macroscopic scale.
\end{remark}
\subsection{Two different ways of deriving Landau equations with finite range
potentials.\label{DiffLandau}}
We now discuss two classes of potentials which can be studied with the
formalism developed above yielding in both cases Landau kinetic equations, but
for which the interaction has a very different form.
We consider
\begin{equation}
\Phi\left( x,\varepsilon\right) =\varepsilon\Psi\left( \frac{x}
{L_{\varepsilon}}\right) \ \ \text{with\ }L_{\varepsilon}\geq 1,
\label{S5E3}%
\end{equation}%
\begin{equation}
\Phi\left( x,\varepsilon\right) =\varepsilon\Psi\left( \frac{x}%
{L_{\varepsilon}}\right) \ \ \text{with\ }L_{\varepsilon}%
\rightarrow0\text{ as }\varepsilon\rightarrow 0 \label{S5E4}
\end{equation}
and we assume that the functions $\Psi$ are bounded and smooth in
$\mathbb{R}^{3}$, so that the collision length associated to
$\left\{ \Phi\left( x,\varepsilon\right) ;\varepsilon
>0\right\} $ is $\lambda_{\varepsilon}=0$ and $$T_{BG}=\infty\;.$$ For the sake
of definiteness we assume also that the potentials $\Psi\left( y\right)
=\Psi\left( \left\vert y\right\vert \right) $ are compactly supported or
decay very fast (say exponentially) as $\left\vert y\right\vert \rightarrow
\infty,$ but the results described in the following would remain valid if
$\Psi\left( \left\vert y\right\vert \right) \sim\frac{1}{\left\vert
y\right\vert ^{s}}$ at least if $s>1$. Rigorous derivations of a Landau
equation in the case (\ref{S5E3}) have been obtained in \cite{Pi81} if $L_{\varepsilon}\rightarrow\infty$, as $\varepsilon\rightarrow 0$, and in \cite{DGL,KP, KR} for $L_{\varepsilon}$ of order $1$.
The case (\ref{S5E4}) has been considered in \cite{DR}.
Notice that there is a difference between the dynamics of the
tagged particle in the cases (\ref{S5E3}),\ (\ref{S5E4}). Indeed, in the case
(\ref{S5E3}) the tagged particle interacts at any time with a large number of
scatterers (of the order of $L_{\varepsilon}^{3}$). These
interactions are very weak, but the randomness in the distribution of
scatterers has as a consequence that the force acting on the tagged particle
at a given time is a random variable and this yields, under suitable
assumptions on $\varepsilon,\ L_{\varepsilon}$ a diffusive dynamics for the
velocity, or more precisely a Landau equation for $f.$
In the case of potentials as in (\ref{S5E4}) the tagged particle does not
interact with any scatterer during most of the time, but meets one scatterer
in times of order $\frac{1}{L_{\varepsilon}^{2}}$ much in
the same manner as in the Boltzmann-Grad limit. The main difference with the
Boltzmann-Grad case is that in these collisions the velocity of the tagged
particle is deflected a very small amount. The accumulation of many
independent random deflections yields also a diffusive behaviour for the
velocity of the tagged particle due to the central limit theorem.
In spite of these differences, we obtain the same type of Landau
equation in both cases. This is due to the fact that the relevant variable is the deflection of the
particle velocity. In the case in which these deflections
are small, they are additive, and there is no important difference if many
scatterers act on the particle at a given time or if only one of them
acts rarely.
\begin{theorem} We have the following cases.
\label{LandTimes}
\begin{itemize}
\item[(i)] Suppose that we consider potentials with the form
(\ref{S5E3}) with $\varepsilon^{-\frac{2}{5}}\ll L_{\varepsilon}%
\ll\varepsilon^{-\frac{2}{3}}\ \ $as\ \ $\varepsilon\rightarrow0.$ Then
$T_{L}\sim\frac{1}{\varepsilon\left( L_{\varepsilon
}\right) ^{\frac{3}{2}}}\rightarrow\infty$ as $\varepsilon\rightarrow0.$
\item[(ii)] Suppose that we consider potentials with the form (\ref{S5E3}) with
$\varepsilon\left( L_{\varepsilon}\right) ^{\frac{5}{2}}\rightarrow
C_{\ast}\in\left( 0,\infty\right) .$ Then $T_{L}%
\sim\frac{1}{\varepsilon\left( L_{\varepsilon}\right) ^{\frac{3}{2}}%
}=\frac{L_{\varepsilon}}{C_{\ast}}$ as $\varepsilon\rightarrow0.$
\item[(iii)] Suppose that we consider potentials with the form (\ref{S5E3}) or
(\ref{S5E4}) and that $L_{\varepsilon}\ll\varepsilon^{-\frac{2}{5}}$
(notice that this includes the case (\ref{S5E4})). Then $T_{L}\sim\frac{1}{\varepsilon^{2}\left( L_{\varepsilon}\right) ^{4}}$ as
$\varepsilon\rightarrow0.$
\end{itemize}
\end{theorem}
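As an informal consistency check of the exponents in Theorem \ref{LandTimes} (not part of the proof), one can parametrize $L_{\varepsilon}=\varepsilon^{-\beta}$, so that case (i) gives $T_{L}\sim\varepsilon^{-(1-\frac{3\beta}{2})}$ and case (iii) gives $T_{L}\sim\varepsilon^{-(2-4\beta)}$. The two scalings match at the crossover $\beta=\frac{2}{5}$, and case (i) yields $T_{L}\rightarrow\infty$ precisely for $\beta<\frac{2}{3}$. The following short script performs this bookkeeping with exact rational arithmetic:

```python
from fractions import Fraction

def exponent_case_i(beta):
    # T_L ~ 1/(eps * L^{3/2}) with L = eps^{-beta}  =>  T_L ~ eps^{-(1 - 3*beta/2)}
    return 1 - Fraction(3, 2) * beta

def exponent_case_iii(beta):
    # T_L ~ 1/(eps^2 * L^4) with L = eps^{-beta}  =>  T_L ~ eps^{-(2 - 4*beta)}
    return 2 - 4 * beta

# The two scalings agree at the crossover beta = 2/5 separating cases (i) and (iii).
crossover = Fraction(2, 5)
assert exponent_case_i(crossover) == exponent_case_iii(crossover) == Fraction(2, 5)

# In case (i) the kinetic limit (T_L -> infinity, i.e. a positive exponent of 1/eps)
# holds exactly for beta < 2/3, matching the condition L_eps << eps^{-2/3}.
assert exponent_case_i(Fraction(2, 3)) == 0
assert exponent_case_i(Fraction(3, 5)) > 0 > exponent_case_i(Fraction(7, 10))
```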
\begin{remark}
The condition $L_{\varepsilon}\ll\varepsilon^{-\frac{2}{3}}$ which is
assumed in Theorem\ \ref{LandTimes} is required in order to have a kinetic
limit. It is possible to have potentials for which this condition fails and where $T_{L}\leq C$. Then (\ref{KinLim})
would also fail.
\end{remark}
\begin{proof}
Using (\ref{S5E3}),\ (\ref{S5E4}) and the fact that $\Phi = \Phi_L$, we can write the function $\sigma\left(
T;\varepsilon\right) $ in (\ref{S4E8a}) as%
\begin{eqnarray}
\sigma\left( T;\varepsilon\right) & =&\varepsilon^{2}\sup_{\left\vert
\theta\right\vert =1}\int_{\mathbb{R}^{3}}d\xi\left( \theta\cdot\int_{0}%
^{T}\nabla_{x}\Psi\left( \frac{vt-\xi}{L_{\varepsilon}}\right) dt\right)
^{2} \nonumber\\
& = &\varepsilon^{2}\left( L_{\varepsilon
}\right) ^{3}\sup_{\left\vert \theta\right\vert =1}\int_{\mathbb{R}^{3}}%
d\eta\left( L_{\varepsilon}\theta\cdot\int_{0}^{\frac{T}{L
_{\varepsilon}}}\nabla_{x}\Psi\left( v\tau-\eta\right) d\tau\right) ^{2}
\label{B1} \;.
\end{eqnarray}
As usual we assume $v=\left( 1,0,0\right)$ and study the parallel and transversal components separately.
We have
\begin{equation}
\theta_{1}\int_{0}^{\frac{T}{L_{\varepsilon}}%
}\frac{\partial\Psi}{\partial x_{1}}\left( v\tau-\eta\right) d\tau
=\theta_{1}\left[ \Psi\left( \frac{T}{L_{\varepsilon}}v-\eta\right)
-\Psi\left( -\eta\right) \right] . \label{B2}%
\end{equation}
The transversal contribution is
\begin{equation}
\theta_{\bot}\cdot\int_{0}^{\frac{T}{L_{\varepsilon}}}\nabla_{x}\Psi\left(
v\tau-\eta\right) d\tau=-\left( \theta_{\bot}\cdot\eta_{\bot}\right)
\int_{0}^{\frac{T}{L_{\varepsilon}}}\frac{\partial\Psi}{\partial\left(
\left\vert x\right\vert \right) }\left( \left\vert v\tau-\eta\right\vert
\right) \frac{d\tau}{\left\vert v\tau-\eta\right\vert } \;. \label{B3}%
\end{equation}
The integral on the right of (\ref{B3}) can be approximated in the form
$\frac{T}{L_{\varepsilon}}Q\left( \eta\right) $ where $Q\left(
\eta\right) $ decreases fast as $\left\vert \eta\right\vert \rightarrow
\infty$ if $\frac{T}{L_{\varepsilon}}\leq1.$ If $\frac{T}{L
_{\varepsilon}}\geq1$ we can approximate the integral by a function
$W=W\left( \eta_{\bot}\right) $ which is rather independent of $\frac
{T}{L_{\varepsilon}}$ for the values of $\eta_{1}$ in the interval $\left[
0,\frac{T}{L_{\varepsilon}}\right] .$ On the other hand the right-hand
side of (\ref{B2}) can be estimated by a function
decreasing fast if $\frac{T}{L_{\varepsilon}}\geq1$ and behaving as $\frac{T}{L_{\varepsilon}}$ if $\frac{T}{L_{\varepsilon}}\leq1.$
Suppose first that $\frac{T}{L_{\varepsilon}}\leq1.$ Then $\sigma\left( T;\varepsilon\right) $ can be approximated as
$C\varepsilon^{2}\left( L_{\varepsilon}\right) ^{3}T^{2}.$ In order to
have a kinetic limit we need to have $T_{L}\gg1,$ therefore
$1=\sigma\left( T_{L};\varepsilon\right) $ yields $L_{\varepsilon}\ll\varepsilon^{-\frac{2}{3}}$.
Moreover we have $T_{L}\sim\frac{C}{\varepsilon\left( L_{\varepsilon
}\right) ^{\frac{3}{2}}}$ if $\frac{T_{L}}{L_{\varepsilon}}\sim\frac{C%
}{\varepsilon\left( L_{\varepsilon}\right) ^{\frac{5}{2}}}\leq1.$ This
gives the results in the cases (i) and (ii) of the theorem.
If
$\frac{T}{L_{\varepsilon}}\geq1,$ then the contribution to
$\sigma\left( T;\varepsilon\right) $ of the term proportional to $\theta
_{1}$ in (\ref{B2}) is negligible and we obtain, using also (\ref{B3}), the
following approximation for $\sigma\left( T;\varepsilon\right) $:%
\[
\varepsilon^{2}\left( L_{\varepsilon}\right) ^{5} \sup_{\left\vert
\theta\right\vert =1} \int
_{0}^{\frac{T}{L_{\varepsilon}}}d\eta_{1}\int d\eta_{\bot}(\theta_{\bot}
\cdot\eta_{\bot})^2\left( Q\left( \eta_{\bot}\right) \right)
^{2}=C_{1}\varepsilon^{2}\left( L_{\varepsilon}\right) ^{5}\frac{T}%
{L_{\varepsilon}}=C_{1}\varepsilon^{2}\left( L_{\varepsilon}\right)
^{4}T
\]
whence $T_{L}\sim\frac{C_{2}}{\varepsilon^{2}\left( L_{\varepsilon
}\right) ^{4}}$ as $\varepsilon\rightarrow0.$ Notice that
$\frac{T_{L}}{L_{\varepsilon}}\geq1$ if $\frac{1}{\varepsilon^{2}\left(
L_{\varepsilon}\right) ^{5}}\geq c_{0}>0,$ whence case (iii) follows.
\end{proof}
We can now examine in which cases we can derive a Landau equation for
$f.$ This is not possible if $\frac
{T_{L}}{L_{\varepsilon}}\leq C$ because in that case the form of the
interaction potential in (\ref{S5E3}) implies that deflections separated by
times of order $T_{L}$ have correlations of order one and (\ref{I1E1}) would
not be satisfied. In this case a correlated model in the spirit
of the one discussed in Section \ref{ss:CorrCase}
would make it possible to describe the trajectories of the tagged particle.
We then restrict our attention to the case in which $L_{\varepsilon
}\ll\varepsilon^{-\frac{2}{5}}.$ Suppose that $T=h T_{L},$ $h>0$.
In this case we have the approximation %
\begin{align*}
& \varepsilon^{2}\left( L_{\varepsilon}\right) ^{3}\int_{\mathbb{R}^{3}%
}d\eta\left( L_{\varepsilon}\theta\cdot\int_{0}^{\frac{T}{L
_{\varepsilon}}}\nabla_{x}\Psi\left( v\tau-\eta\right) d\tau\right) ^{2}\\
& \sim\varepsilon^{2}\left( L_{\varepsilon}\right) ^{3}\int_{0}%
^{\frac{h}{\varepsilon^{2}\left( L_{\varepsilon}\right) ^{5}}}%
d\eta_{1}\int_{\mathbb{R}^{2}}d\eta_{\bot}(L_\varepsilon\theta_{\bot}
\cdot\eta_{\bot})^2\left( Q\left( \eta_{\bot}\right) \right)
^{2}\\
& \rightarrow h \int_{\mathbb{R}^{2}}d\eta_{\bot}(\theta_{\bot}
\cdot\eta_{\bot})^2\left( Q\left( \eta_{\bot}\right) \right)
^{2}=2\kappa h\left\vert \theta_{\bot
}\right\vert ^{2}%
\end{align*}
where $\kappa>0.$ Then, using Claim \ref{ClaimLand} we
obtain that $f$ satisfies%
\[
\left( \partial_{t}f+v\partial_{x}f\right) \left( t,x,v\right) =\kappa\Delta_{v_{\perp}}f\left( t,x,v \right) \;.
\]
\begin{remark}
Clearly it is possible to describe the difference between the
cases $L_{\varepsilon}\ll1$ and $L_{\varepsilon}\gtrsim1$ in terms of the domain of influence as
introduced in Remark \ref{RangeInter}. In the case of the potentials
having the forms (\ref{S5E3}), (\ref{S5E4}) this domain of
influence consists of the points $x=\left( x_{1},x_{\bot
}\right) $ with $x_{1}\in\left[ 0,T_{L}\right] ,$ $\left\vert x_{\bot
}\right\vert \leq C_{1}L_{\varepsilon}.$
If $L_{\varepsilon}\gtrsim1$, as in (\ref{S5E3}), the tagged particle interacts at any given time with a large number
of scatterers. On the contrary, if we assume (\ref{S5E4}), the
tagged particle at a given time would interact typically with zero scatterers,
and occasionally would interact with one scatterer.
These rare interactions are weak
collisions, and the accumulation of many of them yields
the deflection of the particle velocity.
\end{remark}
\begin{remark}
The impossibility to obtain a Landau equation if $\varepsilon^{-\frac{2}{5}%
}\lesssim L_{\varepsilon}\ll\varepsilon^{-\frac{2}{3}}$ can be seen also in
the fact that the characteristic function for the deflections takes the form
\[
m_{hT_L}^{(\varepsilon)}\left( \theta \right)
\rightarrow\exp\left( -h^{2}%
\int_{\mathbb{R}^{3}}d\eta\left( \theta\cdot\nabla_{x}\Psi\left(
\eta\right) \right) ^{2}\right) \text{ as }\varepsilon\rightarrow0\;.
\]
The characteristic size of the deflections in this case is $h$
instead of the parabolic rescaling $\sqrt{h}$ which takes place in the
diffusive limit.
\end{remark}
\section{Spatially nonhomogeneous distributions of scatterers} \label{Nonhomog}
\subsection{Dynamics of a tagged particle in a spherical scatterer cloud with
Newtonian interactions.}\label{ss:clouds}
We have seen in Theorem \ref{eq:NtiCp} that it is not possible to have
spatially homogeneous generalized Holtsmark fields for Newtonian scatterers, i.e.\,having just
one sign for the charges and generating potentials of the form $\Phi\left(
x,\varepsilon\right) =\frac{\varepsilon}{\left\vert x\right\vert }$. In this case it is natural to examine the dynamics of
a tagged particle in the field generated by random scatterer distributions in
bounded clouds. One of the simplest examples that we might consider is the
dynamics of a tagged particle in a spherical cloud of scatterers uniformly distributed. Theorem \ref{NewtonEquation}
ensures that in this case we cannot ignore the macroscopic average force
acting on the tagged particle due to the overall mass distribution on the
sphere. Let us assume that the cloud has a radius $R$ and that the tagged
particle moves inside this cloud in an orbit with a characteristic semiaxis of
order $\frac{R}{2}.$ Then we shall argue that a kinetic
description for the dynamics of the tagged particle is not possible, due to the
onset of correlations between the forces in macroscopic times.
Assume that $N=\frac{4\pi R^{3}}{3}$ scatterers are distributed
independently and uniformly in the ball $B_{R}\left(
0\right) ,$ with density one. A scatterer located in $x_{j}$ yields a potential
$\frac{\varepsilon}{\left\vert x-x_{j}\right\vert }.$ Since all the forces are
attractive, it follows that in the limit $N\rightarrow\infty$ there is a mean
force at each point $x\in B_{R}\left( 0\right),$ directed towards
the centre of the sphere and proportional to $\left\vert x\right\vert =r.$ Let the tagged particle
have unit mass and move
in an orbit around the center with characteristic length $r$ of order $R$, say $\frac{R}{2}$. The orbit can be expected to
experience random deflections due to the discreteness of the scatterer
distribution. To estimate the time scale in which these deflections take place,
we first remark that the potential energy is of order $C_{1}\varepsilon r^{2}$
where $C_{1}$ is just a numerical constant.
Since the kinetic energy is $\frac
{V^{2}}{2}$, it follows that the velocity of the tagged particle is of order
$C_{2}\sqrt{\varepsilon}r.$ We recall that the Landau time scale $T_L$ is defined as the time in which
the velocity experiences a deflection comparable to itself. In the case of Coulomb potentials,
we have seen that $T_L$ differs from the
Boltzmann-Grad time scale $T_{BG}$ only by a logarithmic factor. The collision length
$\lambda_{\varepsilon}$ for a particle with velocity $V$ is given by
$\lambda_{\varepsilon}=\frac{\varepsilon}{V^{2}}=\frac{C_{3}}{r^{2}},$
therefore
$T_{BG}=\frac{1}{V\left( \lambda_{\varepsilon
}\right) ^{2}}$. Including the effect of the Coulombian logarithm we would
obtain $T_{L}=\frac{1}{V\left( \lambda_{\varepsilon}\right) ^{2}}%
\log\left( \frac{R}{\lambda_{\varepsilon}}\right) .$ The mean free path is
then approximated if $R\rightarrow\infty$ as
$\ell_{\varepsilon}\simeq\frac{1}{\left( \lambda_{\varepsilon}\right) ^{2}%
}\log\left( \frac{R}{\lambda_{\varepsilon}}\right) =C_{4}R^{4}\log\left(
R\right) $ and
\[
\frac{\ell_{\varepsilon}}{R}\simeq C_{4}R^{3}\log\left( R\right) \simeq
C_{5}N\log\left( N\right) \ \ \text{as\ \ }N\rightarrow\infty\;,
\]
where $C_{5}$ is just a numerical constant (see \cite{BT} for a similar
estimate). Namely the mean free path is much larger than the length of the orbit.
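The identification $C_{4}R^{3}\log R\simeq C_{5}N\log N$, with $N=\frac{4\pi R^{3}}{3}$, follows from $\log N=3\log R+\log\frac{4\pi}{3}\sim3\log R$. A small numerical illustration (not part of the argument) confirms that the ratio of the two quantities approaches the constant $4\pi$:

```python
import math

def ratio(R):
    # compare N log N with R^3 log R for N = 4*pi*R^3/3
    N = 4.0 * math.pi * R ** 3 / 3.0
    return (N * math.log(N)) / (R ** 3 * math.log(R))

# As R grows the ratio approaches the constant (4*pi/3) * 3 = 4*pi,
# since log N = 3 log R + log(4*pi/3) ~ 3 log R.
values = [ratio(10.0 ** k) for k in (2, 4, 6, 8)]
assert all(abs(v - 4.0 * math.pi) / (4.0 * math.pi) < 0.35 for v in values)
assert abs(values[-1] - 4.0 * math.pi) < abs(values[0] - 4.0 * math.pi)  # monotone approach
```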
If the deviations in one orbit were described by a Landau equation, then they should be approximated
by the sum of $\frac{\ell_{\varepsilon}}{R}\sim N\log\left( N\right) $ independent random deflections with zero average.
Denoting by $\sigma$ the relative change of velocity in each deflection,
we would need $N\log\left( N\right) \sigma^{2}\sim 1$, whence the typical deviation in one
orbit would be $\frac{1}{\sqrt{N\log\left( N\right) }}$
and the corresponding change of velocity $\frac
{V}{\sqrt{N\log\left( N\right) }}\simeq\frac{\sqrt{\varepsilon}r}{\sqrt
{N\log\left( N\right) }}.$ But the period of the orbit is $\frac{1}%
{\sqrt{\varepsilon}}$ and then the typical change of position of the tagged particle
in one period would be $\frac
{R}{\sqrt{R^{3}\log\left( R\right) }}\rightarrow 0$.
We finally remark that the onset of large correlations in the forces acting on
the tagged particle in times of the order of the orbit period would also take place if the scatterers move, since they would
move in elliptical orbits with the same period, given that the density of
scatterers is constant in the cloud. The situation could change, however, for a
nonspherical cloud, but we will not pursue this analysis here.
\subsection{On the derivation of kinetic equations with a Vlasov term.}
If we assume that the scatterers are not distributed in a
spatially homogeneous way, then it is possible to obtain limit equations for $f$ containing
Vlasov terms.
We will restrict here the analysis to particles interacting by means
of Coulomb potentials $\Phi\left( x,\varepsilon\right) =\frac{\varepsilon}{\left\vert x\right\vert
}$. We have shown that, in order to have random force fields which are spatially
homogeneous, we need to assume electroneutrality. Let us restrict ourselves to
the case in which there are only two types of charges, $+1$ and $-1$, and
the scatterers carry each of them with probability $\frac{1}{2}$,
independently of the probability measure yielding their spatial distribution.
We will assume that the two types of scatterers are distributed in space according to inhomogeneous
Poisson measures with densities
\begin{equation}
\rho_{+}\left( x\right) =\frac{1}{2}+\delta_{\varepsilon}F_{+}\left(
\frac{x}{\ell_{\varepsilon}}\right) \ ,\ \ \rho_{-}\left( x\right)
=\frac{1}{2}+\delta_{\varepsilon}F_{-}\left( \frac{x}{\ell_{\varepsilon}%
}\right) \label{Ch1}%
\end{equation}
respectively. Here $\ell_{\varepsilon}$ denotes the corresponding mean free path for one tagged particle moving
in the field of scatterers with density approximately equal to one. By \eqref{S9E4}, \eqref{TG1} and \eqref{P4E1} we
may assume
\[
\ell_{\varepsilon}=\frac{1}{\varepsilon^{2}\log\left( \frac{1}{\varepsilon
}\right) }.
\]
Moreover, the parameter $\delta_{\varepsilon}>0$ converges to zero as $\varepsilon
\rightarrow0.$ Its precise dependence on $\varepsilon$ will be fixed below. Finally we
assume also that the functions $F_{+}\left( y\right) ,\ F_{-}\left(
y\right) $ decay fast for large values of $\left\vert y\right\vert $ (we
could assume for instance that these functions are compactly supported).
The force produced by a given configuration has the form%
\[
-\frac{\varepsilon}{2}\sum_{j,k}\left[ \frac{\left( x-x_{j}^{+}\right)
}{\left\vert x-x_{j}^{+}\right\vert ^{3}}-\frac{\left( x-x_{k}^{-}\right)
}{\left\vert x-x_{k}^{-}\right\vert ^{3}}\right]\;,
\]
where $x_j^+, x_k^-$ are the locations of scatterers with charges $+1$, $-1$.
These forces yield deflections described by a Landau equation. In addition, the slight fluctuations of the
density yield a nonvanishing mean field which can be approximated by
\begin{align*}
& -\varepsilon\delta_{\varepsilon}\int_{\mathbb{R}^{3}}\left[
F_{+}\left( \frac{y}{\ell_{\varepsilon}}\right) -F_{-}\left( \frac{y}%
{\ell_{\varepsilon}}\right) \right] \frac{\left( x-y\right) }{\left\vert
x-y\right\vert ^{3}}dy\\
& =-\varepsilon\delta_{\varepsilon}\ell_{\varepsilon}\int
_{\mathbb{R}^{3}}\left[ F_{+}\left( \xi\right) -F_{-}\left( \xi\right)
\right] \frac{\left( \frac{x}{\ell_{\varepsilon}}-\xi\right) }{\left\vert
\frac{x}{\ell_{\varepsilon}}-\xi\right\vert ^{3}}d\xi \;.
\end{align*}
The mean field variation is of order one in regions with
macroscopic size $\ell_{\varepsilon}.$ The macroscopic time scale is also
$\ell_{\varepsilon}.$ Therefore, the change induced by these terms in the
macroscopic time scale is of order $\varepsilon\delta_{\varepsilon}%
\ell_{\varepsilon}^{2}.$
We select $\delta_{\varepsilon}$ in order to make
this quantity of order one, i.e. $\varepsilon\delta_{\varepsilon}%
\ell_{\varepsilon}^{2}=1,$ whence:%
\begin{equation}
\delta_{\varepsilon}=\varepsilon^{3}\left( \log\left( \frac{1}{\varepsilon
}\right) \right) ^{2} \;.\label{Ch2}%
\end{equation}
We then obtain that $f_{\varepsilon}\left( \ell_{\varepsilon}t,\ell
_{\varepsilon}x,v\right) \rightarrow f\left( t,x,v\right) $ where $f$
solves the Vlasov-Landau equation:%
\begin{align}
\partial_{t}f+v\partial_{x}f+g\partial_{v}f & =\kappa\Delta_{v^{\bot}%
}f\ ,\ \kappa>0\label{Ch3}\\
g\left( x\right) & :=-\int_{\mathbb{R}^{3}}\left[ F_{+}\left(
\xi\right) -F_{-}\left( \xi\right) \right] \frac{\left( x-\xi\right)
}{\left\vert x-\xi\right\vert ^{3}}d\xi \;.\label{Ch4}%
\end{align}
Notice that the distributions of charges $F_{+},F_{-}$ must be chosen in such
a way that we do not have periodic orbits for the tagged particle, since then
there would be correlations and the Landau
equation would fail as in Section \ref{ss:clouds}.
It is possible to derive Vlasov-Boltzmann or Vlasov-Landau equations for other types of
long range potentials like the ones considered in this paper. One obtains
mean field forces of the same order of magnitude as the
Landau or Boltzmann terms if the size of the inhomogeneities is chosen in a
suitable way as indicated above. An attempt at a rigorous derivation of this type of equations
is provided in \cite{DSS17}.
\section{Concluding remarks.} \label{sec:CR}
We have developed a formalism which allows us to obtain the kinetic equation
describing the evolution of a tagged particle moving in a field of fixed
scatterers (Lorentz gas) distributed in the whole three-dimensional space according to
a Poisson measure with density of order one. Each scatterer is the centre of an interaction potential
which decays at infinity as a power law $\frac{1}{\left\vert x\right\vert
^{s}}$ with $s>\frac{1}{2}.$
We have first studied the properties of the random force field generated by
the scatterers and, in particular, the conditions under which this field is invariant under translations. In the case of potentials decreasing
for large $\left\vert x\right\vert $ as $\frac{1}{\left\vert x\right\vert
^{s}}$ with $s\leq1$ some ``electroneutrality'' of the system must be imposed,
either by means of the addition of a background with opposite charge density
or using charges with positive and negative signs.
We have then studied the conditions on the interactions which allow us to
obtain a kinetic description for the dynamics of the tagged particle. To this
end, the interaction between the tagged particle and the scatterers must be
weak enough to ensure that the mean free path
is much larger than the typical distance among the scatterers.
Under this assumption, we have three main
possibilities. If the fastest process yielding particle
deflections consists of binary collisions with single
scatterers, the resulting equation is the linear Boltzmann equation. If, on the contrary, the deflections
due to the accumulation of a large number of small interactions yield a
relevant change in the direction of the velocity before
a binary collision takes place, then we can have a Landau type dynamics.
We have denoted as Landau time $T_{L}$ the time scale in
which such macroscopic deflections take place. In order to be able
to describe the evolution of the tagged particle by means of a Landau equation,
we have shown also that deflections experienced by the
particle over times of order $T_{L}$ must be uncorrelated. We have provided
examples of potentials for which this lack of correlations does not take
place. In such cases, we cannot expect to have a single PDE describing the
probability distribution in the particle phase space. Instead, the correlations between
macroscopic deflections must be taken into account.
\bigskip
\noindent
\textbf{Acknowledgment.}
We thank Chiara Saffirio and Mario Pulvirenti for interesting discussions on the topic. The authors acknowledge support through the CRC 1060
\textit{The mathematics of emergent effects}
at the University of Bonn that is funded through the German Science
Foundation (DFG). S.\,acknowledges hospitality at HCM Bonn as well as support from the DFG grant 269134396.
\section{Introduction}
Two codes $C,D\subseteq{\mathbb F}_q^n$ are said to be $x$-isometric, for $x\in{\mathbb F}_q^n$, if and only if the map $\chi_x:{\mathbb F}_q^n\rightarrow{\mathbb F}_q^n$ given by the component-wise product $\chi_x(v)=x * v$ satisfies
$\chi_x(C)=D$. Then, a sequence of codes $(C^i)_{i=0,\dots,n}$ is said to satisfy the {\it isometry-dual condition} if there exists $x\in({\mathbb F}_q^*)^n$ such that $C^i$ is $x$-isometric to $(C^{n-i})^\perp$ for all $i=0,1,\dots,n$.
Sequences of one-point AG codes satisfying the isometry-dual condition
were characterized in \cite{GMRT} in terms of the Weierstrass
semigroup at the defining point. The result can be stated in terms of
maximum sparse ideals of numerical semigroups, that is, ideals whose gaps
are maximal.
In this contribution we analyze the effect of puncturing
a sequence of AG codes satisfying the isometry-dual property in terms
of the inheritance of this property.
In Section 2 we introduce and characterize maximum sparse ideals.
In Section 3 we analyze the inclusion among maximum sparse ideals. In
Section 4 we prove that the set of leaders of maximum sparse ideals is itself
an ideal of the semigroup. In Section 5 we prove that the isometry-dual property can be inherited after puncturing a sequence of
codes only if the number of punctured coordinates is a non-gap of the
Weierstrass semigroup at the defining point. In particular, the
property is not inherited in general if one only takes out one coordinate.
\section{Maximum sparse ideals}
A {\em numerical semigroup} $S$ is a subset of ${\mathbb N}_0$ that contains $0$, is closed under addition and has a finite complement in ${\mathbb N}_0$.
An {\em ideal} $I$ of a numerical semigroup $S$ is a subset of $S$
such that $I+S\subseteq I$. We say that $I$ is a {\em proper} ideal of $S$ if $I\neq S$.
Denote the elements of $S$, in increasing order, by $\lambda_0=0,
\lambda_1,\dots$ and call {\em genus} the number $g=\#\left({\mathbb
N}_0\setminus S\right)$. The {\em conductor} $c$ of the semigroup is the smallest integer such that $c+{\mathbb N}_0\subseteq S$.
It is proved in \cite{BLV} that the largest integer not belonging to
an ideal, which is called the {\em Frobenius number} of the ideal, is at most $2g-1+\# (S\setminus I)$.
The ideals whose Frobenius number attains this bound will be called {\it maximum sparse ideals}.
It is also proved in \cite{BLV} that, letting $G(i)$ be the number of pairs of gaps adding up to $\lambda_i$ and $D(i)=\{\lambda_j\in S:\lambda_i-\lambda_j\in S\}$ the set of divisors of $\lambda_i$ in $S$, a proper ideal $I$ is maximum sparse if and only if $I=S\setminus D(i)$ for some $i$ with $G(i)=0$. In this case we call $\lambda_i$ the {\em leader} of the maximum sparse ideal $I$, and it coincides with the Frobenius number of $I$ (the maximum gap of the ideal).
\begin{lemma}
The leaders of proper maximum sparse ideals are always at least the conductor.
\end{lemma}
\begin{proof}
The Frobenius number of the maximum sparse ideal $I$ is $2g-1+\#(S\setminus I)\geq 2g$, which is larger than any gap of $S$. Since the integers not belonging to $I=S\setminus D(i)$ are the gaps of $S$ together with the elements of $D(i)$, the Frobenius number equals the maximum of $D(i)$. But the maximum of $D(i)$ is $\lambda_i$, that is, the leader of the ideal.
Then $\lambda_i=2g-1+\#(S\setminus I)$. Since the ideal is proper, $\#(S\setminus I)\geq 1$, and so $\lambda_i=2g-1+\#(S\setminus I)\geq 2g\geq c$. Hence, $\lambda_i\geq c$.
\end{proof}
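These notions can be explored computationally. The following sketch (an illustration only, with ad hoc helper names) builds the numerical semigroup $S=\langle 3,5\rangle$, whose gaps are $\{1,2,4,7\}$ (so $g=4$ and $c=8$), lists the leaders $\lambda_i$ with $G(i)=0$, and checks both the equality $\lambda_i=2g-1+\#(S\setminus I)$ and the statement of the lemma:

```python
LIMIT = 60  # work inside the finite window [0, LIMIT); ample for S = <3, 5>

# S = <3, 5> as a set of small representatives.
S = {3 * a + 5 * b for a in range(LIMIT) for b in range(LIMIT) if 3 * a + 5 * b < LIMIT}
gaps = [n for n in range(LIMIT) if n not in S]          # [1, 2, 4, 7]
g = len(gaps)                                            # genus, here 4
c = max(gaps) + 1                                        # conductor, here 8

def G(lam):
    # number of (unordered) pairs of gaps adding up to lam
    return sum(1 for a in gaps for b in gaps if a <= b and a + b == lam)

def D(lam):
    # divisors of lam in S: elements s of S with lam - s also in S
    return {s for s in S if lam - s in S}

# Leaders: non-zero non-gaps with G = 0 (kept away from the window boundary).
leaders = [lam for lam in sorted(S) if 0 < lam < LIMIT // 2 and G(lam) == 0]
assert leaders[:4] == [10, 12, 13, 15]

for lam in leaders:
    I_complement = D(lam)                  # S \ I for the ideal I = S \ D(lam)
    frobenius = max(n for n in range(LIMIT) if n not in S - I_complement)
    assert frobenius == lam == 2 * g - 1 + len(I_complement)
    assert lam >= c                        # the lemma: leaders are at least the conductor
```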
\section{Inclusion among maximum sparse ideals}
\begin{lemma}
\label{l:diffnongaps}
If the proper ideals $I,I'$ are maximum sparse, with leaders $\lambda_i,\lambda_{i'}$, and $I'\supseteq I$, then $\lambda_i-\lambda_{i'}\in S$.
\end{lemma}
\begin{proof}
The inclusion $I'\supseteq I$ implies $(S\setminus I')\subseteq (S\setminus I)$. We know that $S\setminus I=D(i)$ and $S\setminus I'=D(i')$, so $D(i')\subseteq D(i)$. This is equivalent to $\lambda_{i'}\in D(i)$, which means that $\lambda_i-\lambda_{i'}\in S.$
\end{proof}
\begin{lemma}
\label{l:diffcards}
If the proper ideals $I,I'$ are maximum sparse and $I'\supseteq I$, then $\#(S\setminus I)-\#(S\setminus I')\in S$.
\end{lemma}
\begin{proof}
Suppose that the leaders of $I,I'$ are $\lambda_i,\lambda_{i'}$.
Since both $I$ and $I'$ are maximum sparse, $\#(S\setminus I)=(\lambda_i-(2g-1))$ and $\#(S\setminus I')=(\lambda_{i'}-(2g-1))$. Consequently,
$\#(S\setminus I)-\#(S\setminus I')=\lambda_i-\lambda_{i'}$ which, by Lemma~\ref{l:diffnongaps}, belongs to $S$.
\end{proof}
\begin{remark}
The converse is not true in general. Suppose that $\lambda_i$ is the leader of a maximum sparse ideal and that $\lambda_{i'}$, which is at least the conductor, satisfies $\lambda_i-\lambda_{i'}\in S$. This does not imply that $\lambda_{i'}$ is the leader of any maximum sparse ideal, unless $G(i')=0$.
\end{remark}
The previous results lead to the next theorem.
\begin{theorem}
For two proper maximum sparse ideals $I,I'$ with leaders $\lambda_i,\lambda_{i'}$, the following are equivalent:
\begin{itemize}
\item $I'\supseteq I$
\item $\lambda_i-\lambda_{i'}\in S$.
\item $S\setminus I'\subseteq S\setminus I$.
\item $\#(S\setminus I)-\#(S\setminus I')\in S$.
\end{itemize}
\end{theorem}
\section{The ideal of sparse ideal leaders}
\begin{lemma}
The set $L$ of non-zero non-gaps $\lambda_i$ such that $G(i)=0$ is an ideal of $S$.
\end{lemma}
\begin{proof}
First of all, notice that the non-zero non-gaps smaller than the conductor do not satisfy $G(i)=0$. Indeed, suppose $0<\lambda_i<c$ and that there is no gap $a<\lambda_i$ with $\lambda_i-a<\lambda_1$. Then $\lambda_i-1,\dots,\lambda_i-\lambda_1+1$ all belong to $S$, and adding multiples of $\lambda_1$ to these $\lambda_1$ consecutive elements we deduce that every integer larger than $\lambda_i-\lambda_1$ belongs to $S$, so $c\leq\lambda_i$, a contradiction. Hence there is a gap $a<\lambda_i$ such that $\lambda_i-a$ is a positive integer smaller than $\lambda_1$, hence a gap, and $a$ together with $\lambda_i-a$ adds up to $\lambda_i$. So, $G(i)\neq 0$, and all the elements in $L$ are at least equal to the conductor.
We need to prove that if $\lambda_i\in L$ then $\lambda_i+\lambda_j\in L$ for any $\lambda_j\in S$. We can assume that $\lambda_j\neq 0$ because otherwise it is obvious. Let $k$ be such that $\lambda_i+\lambda_j=\lambda_k$.
Suppose that $\lambda_k\not\in L$, so that $G(k)\neq 0$. This means that there exist two gaps $a$, $b$ such that $a+b=\lambda_k$. Since $\lambda_k=\lambda_i+\lambda_j$ with $\lambda_i\geq c$, we have $\lambda_k-1,\lambda_k-2,\lambda_k-3,\dots,\lambda_k-\lambda_j\in S$, since they all are at least $\lambda_i$. Hence both $a$ and $b$ are smaller than $\lambda_k-\lambda_j$ and so, since $a+b=\lambda_k$, they both are larger than $\lambda_j$. Then $a-\lambda_j$ is a gap of $S$ since, otherwise, $a=(a-\lambda_j)+\lambda_j\in S+S\subseteq S$.
In particular, $(a-\lambda_j)+b$ is a sum of two gaps which adds up to $a+b-\lambda_j=\lambda_k-\lambda_j=\lambda_i$, a contradiction since $\lambda_i$ is assumed to belong to $L$ and so $G(i)=0$.
\end{proof}
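The lemma can be tested on small examples. For $S=\langle 3,5\rangle$, with gaps $\{1,2,4,7\}$, the set $L$ starts $10,12,13,15,16,\dots$ (the non-gaps $11$ and $14$ are excluded since $4+7=11$ and $7+7=14$ are sums of two gaps); the following minimal sketch (an illustration only) checks the ideal property $L+S\subseteq L$ inside a finite window:

```python
LIMIT = 40  # finite window; S = <3, 5> has conductor 8, so this is ample

S = {3 * a + 5 * b for a in range(LIMIT) for b in range(LIMIT) if 3 * a + 5 * b < LIMIT}
gaps = [n for n in range(LIMIT) if n not in S]   # [1, 2, 4, 7]

def G(lam):
    # number of pairs of gaps (a, b), a <= b, with a + b = lam
    return sum(1 for a in gaps for b in gaps if a <= b and a + b == lam)

# L: non-zero non-gaps lam with G(lam) = 0.
L = {lam for lam in S if lam > 0 and G(lam) == 0}
assert sorted(L)[:5] == [10, 12, 13, 15, 16]

# Ideal property L + S ⊆ L, checked inside the window.
for lam in L:
    for s in S:
        if lam + s < LIMIT:
            assert lam + s in L
```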
\section{Puncturing sequences of isometry-dual one-point AG codes}
Let $P_1,\dots,P_n, Q$ be distinct rational points of a (projective, non-singular, geometrically irreducible) curve of genus $g$
and define $C_m=\{(f(P_1),\dots,f(P_n)):f\in L(mQ)\}$.
Note that it can be the case that $C_m=C_{m-1}$.
Let $W=\{m\in{\mathbb N}: L(mQ)\neq L((m-1)Q)\}$ be the Weierstrass
semigroup at $Q$ and let $W^*=\{0\}\cup\{m\in{\mathbb N}, m>0:C_m\neq
C_{m-1}\}=\{m_1=0,m_2,\dots,m_n\}$.
Then, if $n>2g+2$, the set $W\setminus W^*$ is an ideal of $W$ (this is stated in different words in \cite[Corollary 3.3.]{GMRT}).
In particular, $C^0=\{0\}$ together with $C^1=C_{m_1},C^2=C_{m_2},\dots,C^n=C_{m_n}$ satisfy the isometry-dual condition if and only if $n+2g-1\in W^*$, that is, if and only if $W\setminus W^*$ is maximum sparse. This is proved in \cite[Proposition 4.3.]{GMRT}.
\begin{theorem}\label{t:inh}Suppose that the sequence $C^0,C_{m_1},C_{m_2},\dots,C_{m_n}$ as just defined satisfies the isometry-dual condition.
Let $\{P_{i_1},\dots,P_{i_{n'}}\}\subseteq\{P_1,\dots,P_n\}$,
with $2g+2<n'<n$. Define $C'_m=\{(f(P_{i_1}),\dots,f(P_{i_{n'}})):f\in
L(mQ)\}$
and $(W^*)'=\{0\}\cup\{m\in{\mathbb N}, m>0:C'_m\neq C'_{m-1}\}=\{m'_1=0,m'_2,\dots,m'_{n'}\}$.
If the sequence $\{0\},C'_{m'_1},C'_{m'_2},\dots,C'_{m'_{n'}}$
satisfies the isometry-dual condition, then $n-n'\in W$.
\end{theorem}
\begin{proof}
Since the sequence $C^0,C_{m_1},C_{m_2},\dots,C_{m_n}$ satisfies the
isometry-dual condition, the ideal $W\setminus W^*$ is maximum
sparse.
If the sequence
$\{0\},C'_{m'_1},C'_{m'_2},\dots,C'_{m'_{n'}}$ also satisfies the
isometry-dual condition, then $W\setminus (W^*)'$ is also sparse.
Since $C'_m\neq C'_{m-1}$ implies $C_m\neq C_{m-1}$, we have $(W^*)'\subseteq W^*$
and so $W\setminus (W^*)'\supseteq W\setminus W^*$.
Now, by Lemma~\ref{l:diffcards}, $\# W^*-\# (W^*)'=n-n'\in W$.
\end{proof}
The conclusion of Theorem~\ref{t:inh} is that, given a sequence of AG
codes obtained by evaluating, at a set of evaluation points $P_1,\dots,P_n$, the
functions having poles only at a defining point $Q$, with the sequence
satisfying the isometry-dual condition, one needs to take out a number
of evaluation points at least equal to the multiplicity (the smallest
non-zero non-gap) of the Weierstrass semigroup at $Q$ in order to obtain another punctured sequence satisfying the isometry-dual property.
\begin{example}
The Hermitian curve over ${\mathbb F}_{q^2}$, with affine equation $x^{q+1}=y^q+y$, has $q^3$ affine points and one point $P_\infty$ at infinity.
A basis of $\cup_{m\geq 0}L(mP_\infty)$ is given in increasing order by the list
$\left((x^{d-i}y^i)_{i\in \{0,\dots,\min(q-1,d)\}}\right)_{d\geq 0}$.
Consider the Hermitian curve over ${\mathbb F}_4$.
For this, let $\alpha$ be the class of $x$ in ${\mathbb F}_4=({\mathbb Z}_2[x])/(x^2+x+1)$.
The $8$ points of the Hermitian curve over ${\mathbb F}_4$ are
$P_1=(0, 0), P_2=(\alpha,\alpha), P_3=(\alpha + 1, \alpha), P_4=(1, \alpha), P_5=(\alpha, \alpha + 1), P_6=(\alpha + 1, \alpha + 1), P_7=(1, \alpha + 1), P_8=(0, 1)$. The curve has genus $g=1$.
The Weierstrass semigroup at $P_\infty$ is $W=\{0,2,3,4,5,6,\dots\}$ and one basis for $\cup_{m\geq 0}L(mP_\infty)$ is
$${\mathcal B}=\{1, x, y, x^2, xy, x^3, x^2y, x^4, x^3y, x^5, x^4y, x^6, x^5y, x^7, x^6y, x^8, x^7y,\dots\}.$$
For each subset of points $\mathcal P\subseteq\{P_1,\dots,P_8\}$ we analyzed whether $\#{\mathcal P}+2g-1\in(W^*)'$,
that is, whether the evaluation vector of the $(\#{\mathcal P}+g)$th function of ${\mathcal B}$ at the points of ${\mathcal P}$ is
linearly independent of the evaluation vectors of the previous functions in ${\mathcal B}$ at the points of ${\mathcal P}$.
In Figure~\ref{fig} we depict all sets of points satisfying $\#{\mathcal P}+2g-1\in(W^*)'$, denoting the set $P_{i_1},P_{i_2},\dots,P_{i_s}$ simply by
$i_1i_2\dots i_s$. We draw an edge for every inclusion relation among sets of points.
According to the assumption $n>2g+2$, we draw a line separating the sets of points such that $\#{\mathcal P}>4$. For the sets of points to the left of this line, being in the graph is equivalent to satisfying the isometry-dual condition (by \cite[Proposition 4.3.]{GMRT}).
One can check that there are only edges between sets of points whose cardinality difference is at least two. This is consistent with Theorem~\ref{t:inh} and the fact that $W=\{0,2,3,4,5,6,\dots\}$.
\end{example}
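The point count in the example above can be checked with a small sketch (not part of the paper) that enumerates the affine points of the Hermitian curve $x^{q+1}=y^q+y$ over ${\mathbb F}_4$ for $q=2$, representing ${\mathbb F}_4$ as pairs $(a,b)$ meaning $a+b\alpha$ with $\alpha^2=\alpha+1$ over ${\mathbb Z}_2$:

```python
# Sketch: field arithmetic in F_4 = Z_2[x]/(x^2 + x + 1), elements
# encoded as (a, b) meaning a + b*alpha.
def add(u, v):
    return ((u[0] + v[0]) % 2, (u[1] + v[1]) % 2)

def mul(u, v):
    a1, b1 = u
    a2, b2 = v
    # (a1 + b1*alpha)(a2 + b2*alpha), reduced with alpha^2 = alpha + 1
    return ((a1 * a2 + b1 * b2) % 2, (a1 * b2 + a2 * b1 + b1 * b2) % 2)

F4 = [(0, 0), (1, 0), (0, 1), (1, 1)]   # 0, 1, alpha, alpha + 1

# For q = 2 the curve equation reads x^3 = y^2 + y.
points = [(x, y) for x in F4 for y in F4
          if mul(mul(x, x), x) == add(mul(y, y), y)]
print(len(points))   # 8, matching the eight points P_1, ..., P_8
```

Indeed, $x=0$ forces $y\in\{0,1\}$, while each of the three non-zero $x$ satisfies $x^3=1$ and yields $y\in\{\alpha,\alpha+1\}$, giving $2+3\cdot 2=8$ points.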
\begin{figure}
\caption{Hierarchy of sets of points of the Hermitian curve over ${\mathbb F}_4$.}
\label{fig}
\resizebox{\textwidth}{!}{
\begin{tikzpicture}
{\Huge\bfseries
\SetGraphUnit{3.75}
{ \Vertex[x=0,y=0]{12345678}
\Vertices[x=10,y=0,dir=\SO]{line}{123568,124578,134678,234567}
\Vertices[x=20,y=30,dir=\SO]{line}{12348,12357,12467,13456,15678,23678,24568,34578}
\draw[dashed] (25, 35) -- (25, -25);
\Vertices[x=30,y=0,dir=\SO]{line}{1258,1368,1478,2356,2457,3467}
\Vertices[x=40,y=30,dir=\SO]{line}{126,137,145,234,278,358,468,567}
\Vertices[x=50,y=0,dir=\SO]{line} {18,47,36,25}
}
\Edge(18)(1258)
\Edge(18)(1368)
\Edge(18)(1478)
\Edge(25)(1258)
\Edge(25)(2356)
\Edge(25)(2457)
\Edge(36)(1368)
\Edge(36)(2356)
\Edge(36)(3467)
\Edge(47)(1478)
\Edge(47)(2457)
\Edge(47)(3467)
\Edge(18)(12348)
\Edge(18)(15678)
\Edge(25)(12357)
\Edge(25)(24568)
\Edge(36)(13456)
\Edge(36)(23678)
\Edge(47)(12467)
\Edge(47)(34578)
\Edge(126)(12467)
\Edge(137)(12357)
\Edge(145)(13456)
\Edge(234)(12348)
\Edge(278)(23678)
\Edge(358)(34578)
\Edge(468)(24568)
\Edge(567)(15678)
\Edge(126)(123568)
\Edge(358)(123568)
\Edge(145)(124578)
\Edge(278)(124578)
\Edge(137)(134678)
\Edge(468)(134678)
\Edge(234)(234567)
\Edge(567)(234567)
\Edge(1258)(123568)
\Edge(1368)(123568)
\Edge(2356)(123568)
\Edge(1258)(124578)
\Edge(1478)(124578)
\Edge(2457)(124578)
\Edge(1368)(134678)
\Edge(1478)(134678)
\Edge(3467)(134678)
\Edge(2356)(234567)
\Edge(2457)(234567)
\Edge(3467)(234567)
\Edge(12467)(12345678)
\Edge(12357)(12345678)
\Edge(13456)(12345678)
\Edge(12348)(12345678)
\Edge(23678)(12345678)
\Edge(34578)(12345678)
\Edge(24568)(12345678)
\Edge(15678)(12345678)
\Edge(123568)(12345678)
\Edge(124578)(12345678)
\Edge(134678)(12345678)
\Edge(234567)(12345678)
}
\end{tikzpicture}}
\end{figure}
\section{Acknowledgment}
The author was supported by the Spanish government under grant TIN2016-80250-R and by the Catalan government under grant 2014 SGR 537.
\bibliographystyle{plain}
\section{Introduction and related work}
\subsection*{Tournament}
A tournament ${\cal T}$ on $n$ vertices is an orientation of the edges of
the complete undirected graph $K_n$. Thus, given a tournament
${\cal T}=(V,A)$, where $V = \{v_i, i\in [n]\}$, for each $i,j \in [n]$,
either $v_iv_j \in A$ or $v_jv_i \in A$. A tournament ${\cal T}$ can
alternatively be defined by an ordering $\sigma({\cal T})=(v_1,\dots,v_n)$ of
its vertices and a set of \emph{backward arcs} $\bA{A}_{\sigma}({\cal T})$
(which will be denoted $\bA{A}({\cal T})$ when the considered ordering is
unambiguous), where each arc $a \in \bA{A}({\cal T})$ is of the form
$v_{i_1}v_{i_2}$ with $i_2 < i_1$. Indeed, given $\sigma({\cal T})$ and
$\bA{A}({\cal T})$, we can define $V = \{v_i, i\in [n]\}$ and $A= \bA{A}({\cal T})
\cup \fA{A}({\cal T})$ where $\fA{A}({\cal T}) = \{v_{i_1}v_{i_2} : (i_1 <
i_2) \mbox{ and } v_{i_2}v_{i_1} \notin \bA{A}({\cal T})\}$ is the set of
forward arcs of ${\cal T}$ in the given ordering $\sigma({\cal T})$. In the following,
$(\sigma({\cal T}), \bA{A}({\cal T}))$ is called a \emph{linear representation} of the
tournament ${\cal T}$. For a backward arc $e=v_jv_i$ of $\sigma({\cal T})$ the
\emph{span value} of $e$ is $j-i-1$. Then $\mathtt{minspan}(\sigma({\cal T}))$
(resp. $\mathtt{maxspan}(\sigma({\cal T}))$) is simply the minimum
(resp. maximum) of the span values of the backward arcs of
$\sigma({\cal T})$.\\ A set $A'\subseteq A$ of arcs of ${\cal T}$ is a \emph{feedback
arc set} (or \emph{FAS}) of ${\cal T}$ if every directed cycle of ${\cal T}$
contains at least one arc of $A'$. It is clear that for any linear
representation $(\sigma({\cal T}), \bA{A}({\cal T}))$ of ${\cal T}$ the set $\bA{A}({\cal T})$ is
a FAS of ${\cal T}$. A tournament is \emph{sparse} if it admits a FAS which
is a matching.
We denote by {\sc $C_3$-Packing-T}~the problem of packing the maximum number of vertex
disjoint triangles in a given tournament, where a triangle is a
directed 3-cycle. More formally, an input of {\sc $C_3$-Packing-T}~is a tournament ${\cal T}$,
an output is a set (called a \emph{triangle packing}) $\S=\{t_i, i \in
[|\S|]\}$ where each $t_i$ is a triangle and for any $i \neq j$ we have
$V(t_i) \cap V(t_j) = \emptyset$, and the objective is to maximize
$|\S|$. We denote by $opt({\cal T})$ the optimal value of ${\cal T}$. We denote by
{\sc $C_3$-Perfect-Packing-T}~the decision problem associated to {\sc $C_3$-Packing-T}~where an input ${\cal T}$
is positive iff there is a triangle packing $\S$ such that
$V(\S)=V({\cal T})$ (which is called a \emph{perfect triangle packing}).
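These definitions can be made concrete with a toy sketch (not the paper's algorithm): we build a small sparse tournament from a hypothetical linear representation (ordering $0,\dots,5$ with backward arcs $v_3v_0$ and $v_5v_2$), enumerate its triangles, and compute $opt({\cal T})$ by exhaustive search.

```python
from itertools import combinations

# Toy linear representation: ordering 0..n-1 plus backward arcs (tail, head).
n = 6
backward = {(3, 0), (5, 2)}   # a matching, so the tournament is sparse

def arc(u, v):
    """True iff the tournament contains the arc u -> v."""
    if (u, v) in backward:
        return True
    if (v, u) in backward:
        return False
    return u < v              # otherwise it is a forward arc of the ordering

def is_triangle(a, b, c):
    """Directed 3-cycle on {a, b, c}, in either cyclic orientation."""
    return (arc(a, b) and arc(b, c) and arc(c, a)) or \
           (arc(b, a) and arc(a, c) and arc(c, b))

triangles = [t for t in combinations(range(n), 3) if is_triangle(*t)]

# Maximum vertex-disjoint packing by brute force (exponential time,
# suitable for toy instances only).
best = 0
for k in range(len(triangles), 0, -1):
    if any(len({v for t in pack for v in t}) == 3 * k
           for pack in combinations(triangles, k)):
        best = k
        break
print(triangles, best)
```

On this instance the triangles are $(0,1,3)$, $(0,2,3)$, $(2,3,5)$ and $(2,4,5)$, each using one backward arc, and an optimal packing takes the two disjoint triangles $(0,1,3)$ and $(2,4,5)$.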
\subsection*{Related work}
We refer the reader to the Appendix, where we recall the definitions of the
problems mentioned below as well as the standard definitions of
parameterized complexity and approximation. A first natural related
problem is {\sc 3-Set-Packing},
as we can reduce {\sc $C_3$-Packing-T}~to {\sc 3-Set-Packing} by creating a hyperedge
for each triangle.
\paragraph*{Classical complexity / approximation.}
It is known that {\sc $C_3$-Packing-T}~is polynomial if the tournament does not contain
the three forbidden sub-tournaments described in~\cite{cai2002min}.
From the point of view of approximability, the best approximation
algorithm is the $\frac{4}{3}+\epsilon$ approximation
of~\cite{cygan2013improved} for {\sc 3-Set-Packing}, implying the same
result for {\sc $K_3$-packing}
and {\sc $C_3$-Packing-T}. Concerning negative results, it is
known~\cite{guruswami1998vertex} that even {\sc $K_3$-packing} is {\sf
MAX SNP}-hard on graphs with maximum degree four. We can also
mention the related ``dual'' problems {\sc FAST} and {\sc FVST} that received a lot of attention,
with for example the {\sf NP}-hardness and the {\sf PTAS} for {\sc FAST}
in~\cite{charbit2007minimum} and~\cite{kenyon2007rank} respectively,
and the $\frac{7}{3}$ approximation and inapproximability results for
{\sc FVST} in~\cite{73approx}.
\paragraph*{Kernelization.}
We point out that the implicitly considered parameter here is the size
of the solution. There is a $\O(k^2)$ vertex kernel for {\sc
$K_3$-packing} in~\cite{moser2009problem}, and even a $\O(k^2)$
vertex kernel for {\sc 3-Set-Packing} in~\cite{abu2009quadratic},
which is obtained by only removing vertices of the ground set. This remark is important as it directly
implies a $\O(k^2)$ vertex kernel for {\sc $C_3$-Packing-T}~(see
Section~\ref{sec:kernel}). Let us now turn to negative results. There
is a whole line of research dedicated to finding lower bounds on the
size of polynomial kernels. The main tool involved in these bounds is
the weak composition introduced in~\cite{hermelin2012weak} (whose
definition is recalled in Appendix). Weak composition allowed several
almost tight lower bounds, with for example the $\O(k^{d-\epsilon})$
for {\sc $d$-Set-Packing} and $\O(k^{d-4-\epsilon})$ for {\sc
$K_d$-packing} of~\cite{hermelin2012weak}. These results were
improved in~\cite{dell2014kernelization} to $\O(k^{d-\epsilon})$ for
\textsc{perfect} $d$-\textsc{Set-Packing}, $\O(k^{d-1-\epsilon})$ for {\sc $K_d$-packing}, and
leading to $\O(k^{2-\epsilon})$ for {\sc perfect $K_3$-packing}. Notice that negative results for
the "perfect" version of problems (where parameter $k=\frac{n}{d}$,
where $d$ is the number of vertices of the packed structure) are
stronger than for the classical version where $k$ is arbitrary.
Kernel lower bounds for these ``perfect'' versions are sometimes referred to
as \emph{sparsification lower bounds}.
\subsection*{Our contributions}
Our objective is to study the approximability and kernelization of
{\sc $C_3$-Packing-T}. On the approximation side, a natural question is a possible
improvement of the $\frac{4}{3}+\epsilon$ approximation implied by
{\sc 3-Set-Packing}. We show that, unlike {\sc FAST}, {\sc $C_3$-Packing-T}~does not
admit a {\sf PTAS} unless {\sf P}={\sf NP}, even if the tournament is
sparse. We point out that, surprisingly, we were not able to find any
reference establishing a negative result for {\sc $C_3$-Packing-T}, even for the {\sf
NP}-hardness. As these results show that there is not much room for
improving the approximation ratio, we focus on sparse tournaments and
follow a different approach by looking for a condition that would
allow a ratio arbitrarily close to $1$. In that spirit, we provide a
$(1+\frac{6}{c-1})$ approximation algorithm for sparse tournaments
having a linear representation with $\mathtt{minspan}$ at least
$c$.
Concerning kernelization, we complete the panorama of sparsification
lower bounds of~\cite{jansen2015sparsification} by proving that
{\sc $C_3$-Perfect-Packing-T}~does not admit a kernel of (total bit) size
$\O(n^{2-\epsilon})$ unless ${\sf NP} \subseteq {\sf coNP / Poly}$.
This implies that {\sc $C_3$-Packing-T}~does not admit a kernel of (total bit) size
$\O(k^{2-\epsilon})$ unless ${\sf NP} \subseteq {\sf coNP / Poly}$.
We also prove that {\sc $C_3$-Packing-T}~admits a kernel of $\O(m)$ vertices, where $m$
is the size of a given FAS of the instance, and that
{\sc $C_3$-Packing-T}~restricted to sparse instances has a kernel in $\O(k)$ vertices
(and so of total bit size $\O(k\log (k))$). The existence of a kernel
in $\O(k)$ vertices for the general {\sc $C_3$-Packing-T}~remains our main open
question.
\section{Specific notations and observations}
\label{sec:notation}
Given a linear representation $(\sigma({\cal T}),\bA{A}({\cal T}))$ of a tournament
${\cal T}$, a triangle $t$ in ${\cal T}$ is a triple $t=(v_{i_1},v_{i_2},v_{i_3})$
with $i_l < i_{l+1}$ such that either $v_{i_3}v_{i_1} \in \bA{A}({\cal T})$,
$v_{i_3}v_{i_2} \notin \bA{A}({\cal T})$ and $v_{i_2}v_{i_1} \notin
\bA{A}({\cal T})$ (in this case we call $t$ a \emph{triangle with backward
arc} $v_{i_3}v_{i_1}$), or $v_{i_3}v_{i_1} \notin \bA{A}({\cal T})$,
$v_{i_3}v_{i_2} \in \bA{A}({\cal T})$ and $v_{i_2}v_{i_1} \in \bA{A}({\cal T})$
(in this case we call $t$ a \emph{triangle with two backward arcs}
$v_{i_3}v_{i_2}$ and $v_{i_2}v_{i_1}$).
Given two tournaments ${\cal T}_1, {\cal T}_2$ defined by $\sigma({\cal T}_l)$ and
$\bA{A}({\cal T}_l)$ we denote by ${\cal T}={\cal T}_1{\cal T}_2$ the tournament called the
concatenation of ${\cal T}_1$ and ${\cal T}_2$, where $\sigma({\cal T}) = \sigma({\cal T}_1)\sigma({\cal T}_2)$
is the concatenation of the two sequences, and $\bA{A}({\cal T}) =
\bA{A}({\cal T}_1) \cup \bA{A}({\cal T}_2)$. Given a tournament ${\cal T}$ and a subset
of vertices $X$, we denote by ${\cal T} \setminus X$ the tournament
${\cal T}[V({\cal T}) \setminus X]$ induced by vertices $V({\cal T}) \setminus X$, and
we call this operation \emph{removing $X$ from ${\cal T}$}. Given an arc
$a=uv$ we define $h(a)=v$ as the head of $a$ and $t(a)=u$ as the tail
of $a$. Given a linear representation $(V({\cal T}),\bA{A}({\cal T}))$ and an arc
$a \in \bA{A}({\cal T})$, we define $s(a) = \{v : h(a) < v < t(a)\}$ as the
\emph{span} of $a$. Notice that the span value of $a$ is then exactly
$|s(a)|$. \\ Given a linear representation $(V({\cal T}),\bA{A}({\cal T}))$ and a
vertex $v \in V({\cal T})$, we define the degree of $v$ by $d(v)=(a,b)$,
where $a = |\{vu \in \bA{A}({\cal T}) : u < v\}|$ is called the \emph{left
degree} of $v$ and $b = |\{uv \in \bA{A}({\cal T}) : u > v\}|$ is called
the \emph{right degree} of $v$. We also define \mbox{$V_{(a,b)} = \{v
\in V({\cal T})| d(v)=(a,b)\}$}. Given a set of pairwise distinct pairs
$D$, we denote by {\sc $C_3$-Packing-T}$^D$ the problem {\sc $C_3$-Packing-T}~restricted to tournaments
such that there exists a linear representation where $d(v) \in D$ for
all $v$. Notice that when $D_{M}=\{(0,1),(1,0),(0,0)\}$, instances of
{\sc $C_3$-Packing-T}$^{D_M}$ are the sparse tournaments.\\ Finally let us point out
that it is easy to decide in polynomial time if a tournament is sparse or not, and if so,
to give a linear representation whose FAS is a
matching. The corresponding algorithm is detailed in Appendix in Lemma~\ref{lem:faslinear}.
Thus, in the
following, when considering a sparse tournament we will assume that a
linear ordering of it where backward arcs form a matching is also
given.
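The degree notion above can be illustrated with a short sketch (not from the paper) on a hypothetical linear representation, the ordering $0,\dots,5$ with backward arcs $v_3v_0$ and $v_5v_2$; since the backward arcs form a matching, every degree should lie in $D_M=\{(0,1),(1,0),(0,0)\}$.

```python
# Toy linear representation: backward arcs stored as (tail, head).
n = 6
backward = {(3, 0), (5, 2)}

def degree(v):
    """d(v) = (left degree, right degree) w.r.t. the backward arcs."""
    left = sum(1 for (t, h) in backward if t == v and h < v)   # arcs vu with u < v
    right = sum(1 for (t, h) in backward if h == v and t > v)  # arcs uv with u > v
    return (left, right)

degrees = {v: degree(v) for v in range(n)}
D_M = {(0, 1), (1, 0), (0, 0)}
print(all(d in D_M for d in degrees.values()))   # True: a sparse instance
```

Here the heads $v_0,v_2$ get degree $(0,1)$, the tails $v_3,v_5$ get $(1,0)$, and the unmatched vertices $v_1,v_4$ get $(0,0)$.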
\section{Approximation for sparse tournaments}
\label{sec:approx}
\subsection{{\sf APX}-hardness for sparse tournaments}
In this subsection we prove that {\sc $C_3$-Packing-T}$^{D_M}$ is {\sf APX}-hard by providing a
$L$-reduction (see Definition~\ref{def:L} in appendix) from Max
2-SAT(3), which is known to be {\sf APX}-hard~\cite{ausiello2012complexity,berman1999some}. Recall that in
the {\sc Max 2-SAT(3)} problem each clause contains exactly $2$
literals and each variable appears in at most $3$ clauses (and at
most twice positively and once negatively).
\paragraph*{Definition of the reduction}
\label{subsec:reduction2}
Let $\cal{F}$ be an instance of {\sc Max 2-SAT(3)}. In the following,
we will denote by $n$ the number of variables in $\cal{F}$ and $m$ the
number of clauses. Let $\{x_i, i \in [n]\}$ be the set of variables of
$\cal{F}$ and $\{C_j, j \in [m]\}$ its set of clauses.
We now define a reduction $f$ which maps an instance ${\cal F}$ of {\sc Max 2-SAT(3)} to an instance ${\cal T}$ of {\sc $C_3$-Packing-T}$^{D_M}$.
For each variable $x_i$ with $i \in [n]$, we create a tournament $L_i$ as follows and we call it \emph{variable gadget}. We refer the reader to Figure~\ref{fig:li} where an example of variable gadget is depicted.
Let $\sigma(L_i) = (X_i, X'_i, \overline{X_i}, \overline{X_i}', \{\beta_i\}, \{\beta'_i\}$ $, A_i, B_i, \{\alpha_i\}, A'_i, B'_i)$.
We define \mbox{$C=\{ X_i,X'_i,\overline{X_i},\overline{X_i}',A_i,B_i,A'_i,B'_i\}$}. All sets of $C$ have size $4$. We denote $X_i = (x_i^1,x_i^2,x_i^3,x_i^4)$, and we extend the notation in a straightforward manner to the other sets of $C$.
Let us now define $\bA{A}(L_i)$. For each set of $C$, we add a backward arc whose head is the first element and the tail is the last element (for example for $X_i$ we add the arc $x_i^4x_i^1$).
Then, we add to $\bA{A}(L_i)$ the set $\{e_1,e_2,e_3,e_4\}$ where $e_1=x_i^3a_i^3$, $e_2 = x_i^{'3}a_i^{'3}$, $e_3 = \overline{x_i^3} b_i^3$, $e_4 = \overline{x_i^{'3}} b_i^{'3}$
and the set $\{m_1,m_2\}$ where $m_1 = a_i^{'2}a_i^2$, $m_2 = b_i^{'2}b_i^2$ called the two \emph{medium arcs} of the variable gadget.
This completes the description of tournament $L_i$. Let $L = L_1 \dots L_n$ be the concatenation of the $L_i$.
\begin{figure}[!htbp]
\begin{center}
\includegraphics[width=0.75\textwidth]{def_Li.pdf}
\end{center}
\caption{Example of a variable gadget $L_i$.}
\label{fig:li}
\end{figure}
For each clause $C_j$ with $j \in [1,m]$, we create a tournament $K_j$ with ordering $\sigma(K_j) = (\theta_j, d^1_j,c^1_j,c^2_j,d^2_j)$ and $\bA{A}(K_j) = \{d^2_jd^1_j\}$.
We also define $K = K_1\dots K_m$.
Let us now define ${\cal T} = LK$. We add to $\bA{A}({\cal T})$ the following backward arcs from $V(K)$ to $V(L)$. If $C_j = l_{i_1} \vee l_{i_2}$ is a clause in $\cal{F}$ then we add the arcs $c_j^1v_{i_1}, c_j^2v_{i_2}$ where $v_{i_c}$ is the vertex in $\{x_{i_c}^2,x_{i_c}^{'2},\overline{x_{i_c}^2}\}$ corresponding to $l_{i_c}$: if $l_{i_c}$ is a positive occurrence of variable $i_c$ we choose
$v_{i_c} \in \{x_{i_c}^2,x_{i_c}^{'2}\}$, otherwise we choose $v_{i_c} = \overline{x_{i_c}^2}$. Moreover, we choose the vertices $v_{i_c}$ in such a way that for any $i \in [n]$ and each $v \in \{x_i^2,x_i^{'2},\overline{x_i^2}\}$ there exists a unique arc $a \in \bA{A}({\cal T})$ such that $h(a)=v$. This is always possible as each variable has at most two positive occurrences and one negative occurrence.
Thus, $x_i^2$ represents the first positive occurrence of variable $i$, and $x_i^{'2}$ the second one. We refer the reader to Figure~\ref{fig:LetK} where an example of the connection between variable and clause gadgets is depicted.
\begin{figure}[!htbp]
\begin{center}
\includegraphics[width=0.75\textwidth]{def_LetK.pdf}
\end{center}
\caption{Example showing how a clause gadget is attached to variable gadgets.}
\label{fig:LetK}
\end{figure}
Notice that vertices of $\overline{X'_i}$ are never linked to the clause gadgets. However, we need this set to keep the variable gadget symmetric, so that setting $x_i$ to true or false leads to the same number of triangles inside $L_i$. This completes the description of ${\cal T}$. Notice that the degree of any vertex is in $\{(0,1),(1,0),(0,0)\}$, and thus ${\cal T}$ is an instance of {\sc $C_3$-Packing-T}$^{D_M}$.
Let us now distinguish three different types of triangles in ${\cal T}$. A triangle $t=(v_1,v_2,v_3)$ of ${\cal T}$ is called an \emph{outer} triangle iff $\exists j \in [m]$ such that
$v_2 = \theta_j$ and $v_3 = c^l_j$ (implying that $v_1 \in V(L)$), \emph{variable inner} iff $\exists i \in [n]$ such that $V(t) \subseteq V(L_i)$,
and \emph{clause inner} iff $\exists j \in [m]$ such that $V(t) \subseteq V(K_j)$.
Notice that a triangle $t=(v_1,v_2,v_3)$ of ${\cal T}$ which is neither outer, variable inner, nor clause inner necessarily has $v_3 = c^l_j$ for some $j$, and $v_2 \neq \theta_j$ ($v_2$ could be in $V(L)$ or $V(K)$).
In the following definition, for any $Y \in C$ (where $C=\{ X_i,X'_i,\overline{X_i},\overline{X_i}',A_i,B_i,A'_i,B'_i\}$) with $Y=(y^1,y^2,y^3,y^4)$, we define $t_Y^2 = (y^1,y^2,y^4)$ and $t_Y^3 = (y^1,y^3,y^4)$. For example, $t_{X'_i}^2 = (x_i^{'1},x_i^{'2},x_i^{'4})$.
For any $i\in [n]$, we define $P_i$ and $\overline{P_i}$, two sets of vertex disjoint variable inner triangles of $V(L_i)$, by:
\begin{itemize}
\item $P_i = \{t_{X_i}^3, t_{X'_i}^3, t_{\overline{X_i}}^2, t_{\overline{X'_i}}^2, t_{A_i}^3, t_{B_i}^2, t_{A'_i}^3, t_{B'_i}^2, (h(e_3),\beta_i,t(e_3)), (h(e_4),\beta'_i,t(e_4)), (h(m_1),\alpha_i,t(m_1))\}$
\item $\overline{P_i} = \{t_{X_i}^2, t_{X'_i}^2, t_{\overline{X_i}}^3, t_{\overline{X'_i}}^3, t_{A_i}^2, t_{B_i}^3, t_{A'_i}^2, t_{B'_i}^3, (h(e_1),\beta_i,t(e_1)), (h(e_2),\beta'_i,t(e_2)), (h(m_2),\alpha_i,t(m_2))\}$
\end{itemize}
Notice that $P_i$ (resp. $\overline{P_i}$) uses all vertices of $L_i$ except $\{x_i^2,x_i^{'2}\}$ (resp. $\{\overline{x_i^2},\overline{x_i^{'2}}\}$).
For any $j \in [m]$ and $x \in [2]$, we define the set of clause inner triangles of $K_j$ by $Q^x_j = \{(d^1_j,c^x_j,d^2_j)\}$.
Informally, setting variable $x_i$ to true corresponds to creating the $11$ triangles of $P_i$ in $L_i$ (as leaving the vertices $\{x^2_i,x^{2'}_i\}$ available allows us to create outer triangles corresponding to satisfied clauses), and setting it to false corresponds to creating the $11$ triangles of $\overline{P_i}$. Satisfying a clause $j$ using its $x^{th}$ literal (represented by a vertex $v \in V(L)$) corresponds to creating the triangle in $Q^{3-x}_j$, as it leaves $c^x_j$ available to create the triangle $(v,\theta_j,c^x_j)$. Our final objective (in Lemma~\ref{lem:Lreducv2}) is to prove that satisfying $k$ clauses is equivalent to finding $11n+m+k$ vertex disjoint triangles.
\paragraph*{Restructuring lemmas}
Given a solution $\S$, let $I^{L}_i =\{t \in \S : V(t) \subseteq V(L_i)\}$, $I^{K}_j =\{t \in \S : V(t) \subseteq V(K_j)\}$,
$I^{L} = \cup_{i \in [n]} I^L_i$ be the set of variable inner triangles of $\S$, $I^{K} = \cup_{j \in [m]} I^K_j$ be the set of clause inner triangles of $\S$, and
$O = \{t \in \S : t \mbox{ is an outer triangle}\}$ be the set of outer triangles of $\S$. Notice that \textit{a priori} $I^L,I^K,O$ do not necessarily form a partition of $\S$.
However, we will show in the next lemmas how to restructure $\S$ so that $I^L,I^K,O$ becomes a partition.
\begin{lemma}\label{lem:intv2}
For any $\S$ we can compute in polynomial time a solution $\S' =
\{t'_l, l\in [k]\}$ such that $|\S'| \ge |\S|$ and for all $j\in[m]$
there exists $x \in [2]$ such that $I^{'K}_j=Q^x_j$ and
\begin{itemize}
\item either $\S'$ does not use any other vertex of $K_j$ ($V(\S') \cap V(K_j) = V(Q^x_j)$),
\item or $\S'$ contains an outer triangle $t'_l=(v,\theta_j, c^{3-x}_j)$ with $v \in V(L)$ (implying $V(\S') \cap V(K_j) = V(K_j)$).
\end{itemize}
\end{lemma}
\begin{proof}
Consider a solution $\S = \{t_l,l \in [k]\}$.
Let us suppose that $\S$ does not verify the desired property.
We say that $j \in [m]$ satisfies $(\star)$ iff there exists $x \in [2]$ such that $I^{K}_j=Q^x_j$ and
either $\S$ does not use any other vertex of $K_j$, or $\S$ contains an outer triangle $t_l=(v,\theta_j, c^{3-x}_j)$ with $v \in V(L)$.
Let us restructure $\S$ to increase the number of $j$ satisfying
$(\star)$, which will be sufficient to prove the lemma.
Consider the largest $j\in [m]$ which does not satisfy $(\star)$.
Let $c = |I^{K}_j|$. Notice that the only possible triangle of $I^{K}_j$ contains $a=d^2_jd^1_j$, implying $c \le 1$.
If $c=1$, let $t \in I^K_j$ and $v_0 = \{c^1_j,c^2_j\} \setminus V(t)$.
If $v_0 \notin V(\S)$, then let us prove that $\theta_j \notin V(\S)$. Indeed, by contradiction, if $\theta_j \in V(\S)$, let $t' \in \S$ such that $\theta_j \in V(t')$. As $d(\theta_j)=(0,0)$ we necessarily have
$t'=(u,\theta_j,w)$ with $w = c^{x'}_{j'}$ with $j' \ge j$, which contradicts the maximality of $j$.
Otherwise ($v_0 \in V(\S)$), denoting by $t'$ the triangle of $\S$ which contains $v_0$, we must have $t'=(u,v,v_0)$.
Indeed, we cannot have (for some $u', v'$) $t'=(v_0,u',v')$ as there is no backward arc $a$ with $h(a)=v_0$, and we cannot have either $t'=(u',v_0,v')$ as this would imply $v'=c^{x'}_{j'}$ for $j' > j$ and again contradict the definition of $j$. As, again, by maximality of $j$ we get $\theta_j \notin V(\S)$ (and since $u\theta_j$ and $\theta_jv_0$ are forward arcs), we can replace $t'$ by the triangle $(u,\theta_j,v_0)$, which is disjoint from the other triangles of $\S$.
Suppose now that $c=0$. Notice first that, by maximality of $j$, $d^2_j \notin V(\S)$, as $d^2_j$ could only be used in a triangle $t=(v,d^2_j,c^x_{j'})$ with $j' > j$.
Let $Z = V(\S) \cap \{c^1_j,c^2_j\}$.
If $|Z|=0$, then by maximality of $j$ we get $d^1_j \notin V(\S)$ and $\theta_j \notin V(\S)$, and thus we add to $\S$ triangle $(d^1_j,c^1_j,d^2_j)$.
If $|Z|=1$, let $c^x_j \in Z$ and $t \in \S$ such that $c^x_j \in V(t)$. By maximality of $j$ we necessarily have $t=(u,v,c^x_j)$ for some $u,v$.
If $v \neq \theta_j$ then by maximality of $j$ we have $\theta_j \notin V(\S)$, and thus we swap $v$ and $\theta_j$ in $t$ and now suppose that $\theta_j \in V(t)$. This implies that $d^1_j \notin V(\S)$
(before the swap we could have had $v = d^1_j$, but now by maximality of $j$ we know that $d^1_j$ is unused), and we add $(d^1_j,c^{3-x}_j,d^2_j)$ to $\S$.
It only remains to handle the case where $|Z|=2$. If there exists $t \in \S$ with $Z \subseteq V(t)$, then $t=(u,c^1_j,c^2_j)$. Using the same arguments as above we get that $\{\theta_j,d^1_j\} \cap V(\S) = \emptyset$,
and thus we swap $c^1_j$ with $\theta_j$ in $t$ and add $(d^1_j,c^1_j,d^2_j)$ to $\S$.
Otherwise, let $t_x \in \S$ such that $c^x_j \in V(t_x)$ for $x \in [2]$. This implies that $t_x = (u_x,v_x,c^x_j)$. If $\theta_j \notin V(t_1) \cup V(t_2)$ then $\theta_j \notin V(\S)$ and we swap $v_1$ with $\theta_j$. Therefore, from
now on we can suppose that $\theta_j \in V(t_x)$ for some $x \in [2]$. Then, if $d^1_j \notin V(t_{3-x})$ then $d^1_j \notin V(\S)$ and thus we swap $v_{3-x}$ with $d^1_j$ and we now assume that
$d^1_j \in V(t_{3-x})$. Finally, we remove $t_{3-x}$ from $\S$ and add instead $(d^1_j,c^{3-x}_j,d^2_j)$.
\end{proof}
\begin{corollary}\label{cor:outerinnerv2}
For any $\S$ we can compute in polynomial time a solution $\S'$ such that $|\S'| \ge |\S|$, and $\S'$
only contains outer, variable inner, and clause inner triangles. Indeed, in the solution $\S'$ of Lemma~\ref{lem:intv2}, given any $t \in \S'$, either
$V(t)$ intersects $V(K_j)$ for some $j$ and then $t$ is an outer or a clause inner triangle, or $V(t) \subseteq V(L_i)$ for some $i \in [n]$, as there is no backward arc $uv$ with $u \in V(L_{i_1})$ and $v \in V(L_{i_2})$ with $i_1 \neq i_2$.
\end{corollary}
\begin{lemma}
\label{lem:goodpatternv2}
For any $\S$ we can compute in polynomial time a solution $\S'$ such
that $|\S'| \ge |\S|$, $\S'$ satisfies Lemma~\ref{lem:intv2}, and for
every $i \in[n]$, $I^{'L}_i = P_i$ or $I^{'L}_i = \overline{P_i}$.
\end{lemma}
\begin{proof}
Let $\S_0$ be an arbitrary solution, and $\S$ be the solution obtained
from $\S_0$ after applying Lemma~\ref{lem:intv2}.
By Corollary~\ref{cor:outerinnerv2}, we partition $\S$ into $\S = I^L \cup I^K \cup O$.
Let us say that $i \in [n]$ satisfies $(\star)$ if $I^L_i = P_i$ or $I^L_i = \overline{P_i}$.
Let us suppose that $\S$ does not verify the desired property, and show how to restructure $\S$ to increase the number of $i$ satisfying $(\star)$ while still satisfying Lemma~\ref{lem:intv2}, which will prove the lemma.
Let $Lft_i = X_i \cup X'_i \cup \overline{X_i} \cup \overline{X'_i}$ and $Rgt_i = A_i \cup B_i \cup \{\alpha_i\} \cup A'_i \cup B'_i$ be two subsets of vertices of $V(L_i)$.
Given any solution $\tilde{\S}$ satisfying Lemma~\ref{lem:intv2}, we define the following sets.
Let $\tilde{\S}^{Lft_i} = \{t \in \tilde{I}^L_i : V(t) \subseteq Lft_i \}$, $\tilde{\S}^{Rgt_i} = \{t \in \tilde{I}^L_i : V(t) \subseteq Rgt_i \}$, and
\mbox{$\tilde{\S}^{Lft_iRgt_i} = \{t \in \tilde{I}^L_i : V(t) \cap Lft_i \neq \emptyset \mbox{ and } V(t) \cap Rgt_i \neq \emptyset\}$}. Observe that these three sets define a partition of $\tilde{I}^L_i$,
and that each triangle of $\tilde{\S}^{Lft_i}$ is in fact included in some $W \in \{X_i, X'_i , \overline{X_i}, \overline{X_i}'\}$.
Let $\tilde{\S}^{O_i} = \{t \in \tilde{O} : V(t) \cap V(L_i) \neq \emptyset\}$ be the set of outer triangles of $\tilde{\S}$ intersecting $L_i$.
We also define $g_i(\tilde{\S})=(|\tilde{\S}^{Lft_i}|,|\tilde{\S}^{Lft_iRgt_i}|,|\tilde{\S}^{Rgt_i}|,|\tilde{\S}^{O_i}|)$ and
$h_i(\tilde{S})=|\tilde{\S}^{Lft_i}|+|\tilde{\S}^{Lft_iRgt_i}|+|\tilde{\S}^{Rgt_i}|+|\tilde{\S}^{O_i}|=|\tilde{I}^L_i \cup \tilde{\S}^{O_i}|$.
Our objective is to restructure $\S$ into a solution $\S'$ with $\S' = (\S \setminus (I^L_i \cup \S^{O_i})) \cup (I^{'L}_i \cup \S^{'O_i})$. We will
define $I^{'L}_i$ and $\S^{'O_i}$ verifying the following properties $(\triangle)$:
\begin{description}
\item[$\triangle_1:$] $I^{'L}_i = P_i$ or $I^{'L}_i=\overline{P_i}$,
\item[$\triangle_2:$] $\S^{'O_i} \subseteq \S^{O_i}$,
\item[$\triangle_3:$] $|(I^{'L}_i \cup \S^{'O_i})| \ge |(I^L_i \cup \S^{O_i})| $ (which is equivalent to $h_i(\S') \ge h_i(\S) $),
\item[$\triangle_4:$] triangles of $I^{'L}_i \cup \S^{'O_i}$ are vertex disjoint.
\end{description}
Notice that $\triangle_2$ and $\triangle_4$ imply that all triangles of $\S'$ are still vertex disjoint. Indeed, as $\S$ satisfies Lemma~\ref{lem:intv2}, the only triangles of $\S$ intersecting
$L_i$ are $I^L_i \cup \S^{O_i}$, and thus replacing them with $I^{'L}_i \cup \S^{'O_i}$ satisfying the above property implies that all triangles of $\S'$ are vertex disjoint. Moreover, $\S'$ will still satisfy Lemma~\ref{lem:intv2} even with $\S^{'O_i} \subseteq \S^{O_i}$ as removing outer triangles cannot violate property of Lemma~\ref{lem:intv2}.
Finally $\triangle_3$ implies that $|\S'| \ge |\S|$. Thus, defining $I^{'L}_i$ and $\S^{'O_i}$ satisfying $(\triangle)$ will be sufficient to prove the lemma. Let us now state some useful properties.
\begin{description}
\item[$p_1:$] $|\S^{Lft_i}| \le 4$
\item[$p_2:$] $|\S^{Lft_iRgt_i}| \le 4$ as for any $t \in \S^{Lft_iRgt_i}$ there exists $l \in [4]$ such that $V(t) \supseteq V(e_l)$.
\item[$p_3:$] $|\S^{Rgt_i}| \le 5$ (as $|Rgt_i| = 17$).
Let $Z = V(\S^{Lft_iRgt_i}) \cap Rgt_i$.
Let us also prove that if $Z \supseteq \{a^3_i,b^3_i\}$, $Z \supseteq \{a^{'3}_i,b^{'3}_i\}$, $Z \supseteq \{a^3_i,b^{'3}_i\}$ or $Z \supseteq \{a^{'3}_i,b^3_i\}$ then $|\S^{Rgt_i}| \le 4$.
For any $W \in \{A_i, B_i, A'_i, B'_i\}$, let $s_W$ be the unique arc $a$ of ${\cal T}$ such that $V(a) \subseteq W$ and let $m_W$ be the unique medium arc $a$ such that $V(a) \cap W \neq \emptyset$.
Let us call the $\{s_W\}$ the four small arcs of the tournament induced by $Rgt_i$.
Let $\bA{A}(\S^{Rgt_i}) = \{a \in \bA{A}(L_i) : \exists t \in \S^{Rgt_i}$ \mbox{such that } $V(a) \subseteq V(t) \}$ be the set of backward arcs used by $\S^{Rgt_i}$.
Observe that arcs of $\bA{A}(\S^{Rgt_i})$ are small or medium arcs. Let us bound $|\bA{A}(\S^{Rgt_i})|=|\S^{Rgt_i}|$.
Notice that for any $W \in \{A_i, B_i, A'_i, B'_i\},$ $W \cap Z \neq \emptyset$ implies that $\bA{A}(\S^{Rgt_i})$ cannot contain both $s_W$ and $m_W$.
If $\S^{Rgt_i}$ contains the $4$ small arcs then by the previous remark $\S^{Rgt_i}$ cannot contain any medium arc,
and thus $|\S^{Rgt_i}| \le 4$. If $\S^{Rgt_i}$ contains $3$ small arcs then it can only contain one medium arc, implying $|\S^{Rgt_i}| \le 4$. Obviously, if $\S^{Rgt_i}$ contains at most $2$ small arcs then $|\S^{Rgt_i}| \le 4$.
\item[$p_4:$] property $p_3$ implies that if $|\S^{Lft_iRgt_i}| \ge 3$, or if $|\S^{Lft_iRgt_i}|=2$ and the triangles of $\S^{Lft_iRgt_i}$ contain $\{e_1,e_3\}$, $\{e_1,e_4\}$, $\{e_2,e_3\}$ or $\{e_2,e_4\}$,
then $|\S^{Rgt_i}| \le 4$ (where ``the triangles of $\S^{Lft_iRgt_i}$ contain $\{e_i,e_j\}$'' means that there exist $t_1,t_2$ in $\S^{Lft_iRgt_i}$ such that $V(t_1) \supseteq V(e_i)$ and $V(t_2) \supseteq V(e_j)$).
\item[$p_5:$] $|\S^{O_i}| \le 3$. Moreover, if $|\S^{Lft_i}|=4$ then $|\S^{O_i}| \le 4 - |\S^{Lft_iRgt_i}|$, and if $|\S^{Lft_i}|=3$ and $|\S^{Lft_iRgt_i}|=4$ then $|\S^{O_i}| \le 1$. The last two inequalities come from the fact that for any $W \in \{X_i, X'_i, \overline{X_i}, \overline{X'_i}\}$, we cannot have both $t_1 \in \S^{O_i}$, $t_2 \in \S^{Lft_iRgt_i}$
and $t_3 \in \S^{Lft_i}$ with $V(t_i) \cap W \neq \emptyset$.
\end{description}
Notice that if a solution $\S'$ satisfies $I^{'L}_i = P_i$ or $I^{'L}_i=\overline{P_i}$ then $g_i(\S')=(4,2,5,z)$ where $z \in [2]$, and $h_i(\S')=11+z$.
In the following we write $(u^1_1,u^1_2,u^1_3,u^1_4) \le (u^2_1,u^2_2,u^2_3,u^2_4)$ iff $u^1_i \le u^2_i$ for any $i \in [4]$.
Let us describe informally the following argument, which will be used several times. Let $z=|\S^{O_i}|$. If $z \le 1$, or if $z = 2$ but the two corresponding outer triangles do not use one vertex in $X_i \cup X'_i$ and one vertex in $\overline{X_i}$, then we will be able to ``save'' all these outer triangles (while creating the optimal number of variable inner triangles in $L_i$), meaning that $\S^{'O_i} = \S^{O_i}$, as either $P_i$ or $\overline{P_i}$ will leave the vertices of $V(\S^{O_i}) \cap Lft_i$ available for outer triangles. Let us proceed by case analysis according to the value of $|\S^{Lft_iRgt_i}|$. Remember that $|\S^{Lft_iRgt_i}| \le 4$ according to $p_2$.
Case 1: $|\S^{Lft_iRgt_i}| \le 1$. According to $p_1, p_3$ we get $g_i(\S) \le (4,1,5,z)$ where $z \in [3]$.
In this case, $\S^{'O_i} = \S^{O_i} \setminus \{t \in \S : V(t) \ni \overline{x^2_i}\}$ and $I^{'L}_i = P_i$ verify $(\triangle)$.
In particular, we have $h_i(\S') \ge h_i(\S)$ as $g_i(\S') \ge (4,2,5,z-1)$.
Case 2: $|\S^{Lft_iRgt_i}|=2$. Let $g_i(\S) = (x,2,y,z)$. If $x \le 3$, then $g_i(\S) \le (3,2,5,z)$ by $p_3$ and we set
$\S^{'O_i} = \S^{O_i} \setminus \{t \in \S : V(t) \ni \overline{x^2_i}\}$ and $I^{'L}_i = P_i$. This satisfies $(\triangle)$
as in particular we have $h_i(\S') \ge h_i(\S)$ as $g_i(\S') \ge (4,2,5,z-1)$.
Let us now turn to the case where $x=4$. Let $\S^{Lft_iRgt_i}=\{t_1,t_2\}$. Let us first suppose that the triangles of $\S^{Lft_iRgt_i}$ contain $\{e_i,e_j\}$ with
$\{e_i,e_j\} \in \{\{e_1,e_3\},\{e_1,e_4\},\{e_2,e_3\},\{e_2,e_4\}\}$.
By $p_4$ we get $y \le 4$, implying $g_i(\S) \leq (4,2,4,z)$.
In this case, $\S^{'O_i} = \S^{O_i} \setminus \{t \in \S : V(t) \ni \overline{x^2_i}\}$ and $I^{'L}_i = P_i$ verify $(\triangle)$.
In particular, we have $h_i(\S') \ge h_i(\S)$ as $g_i(\S') \ge (4,2,5,z-1)$. Let us suppose now that $t_1$ contains $e_1$ and $t_2$ contains $e_2$ (case (2a)), or
$t_1$ contains $e_3$ and $t_2$ contains $e_4$ (case (2b)). In both cases we have $g_i(\S) \le (4,2,5,z)$ where $z \in [2]$ by $p_5$.
More precisely, $p_5$ implies that $\{W \in \{X_i, X'_i, \overline{X_i}, \overline{X'_i}\} : W \cap V(\S^{O_i}) \neq \emptyset\}$ is included in $\{X_i,X'_i\}$ (case 2b) or in $\{\overline{X_i}\}$ (case 2a).
Thus, in case (2a) we define $\S^{'O_i} = \S^{O_i}$ and $I^{'L}_i = \overline{P_i}$. In case (2b) we define $\S^{'O_i} = \S^{O_i}$ and $I^{'L}_i = P_i$.
In both cases these sets verify $(\triangle)$ as in particular $g_i(\S') = (4,2,5,z)$.
\begin{sloppypar}
Case 3: $|\S^{Lft_iRgt_i}|=3$. In this case $g_i(\S) \le (x,3,4,z)$ by $p_4$.
If $x \le 3$, the sets \mbox{$\S^{'O_i} = \S^{O_i} \setminus \{t \in \S : V(t) \ni \overline{x^2_i}\}$} and $I^{'L}_i = P_i$ verify $(\triangle)$.
In particular, we have \mbox{$h_i(\S') \ge h_i(\S)$} as $g_i(\S') \ge (4,2,5,z-1)$.
If $x = 4$ then $z \le 1$ by $p_5$. Thus, we define $I^{'L}_i = P_i$ if $V(\S^{O_i}) \cap (X_i \cup X'_i) \neq \emptyset$, and
$I^{'L}_i = \overline{P_i}$ otherwise, and $\S^{'O_i} = \S^{O_i}$. These sets satisfy $(\triangle)$ as in particular $g_i(\S') = (4,2,5,z)$.
\end{sloppypar}
Case 4: $|\S^{Lft_iRgt_i}|=4$. Let $g_i(\S) = (x,4,y,z)$.
If $x=4$ then $z \le 0$ by $p_5$ and $y \le 3$ as $x+4+y \le \frac{|V(L_i)|}{3}$.
Thus, we set $\S^{'O_i} = \S^{O_i} = \emptyset$, $I^{'L}_i = P_i$ (which is arbitrary in this case), and we have property $(\triangle)$ as $g_i(\S') \ge (4,2,5,0)$.
If $x=3$ (this case is depicted in Figure~\ref{fig:ex_restruct_Li}) then $y \le 4$ by $p_4$ and $z \le 1$ by $p_5$, implying $g_i(\S) \le (3,4,4,z)$. Thus, we define $I^{'L}_i = P_i$ if $V(\S^{O_i}) \cap (X_i \cup X'_i) \neq \emptyset$, and
$I^{'L}_i = \overline{P_i}$ otherwise, and $\S^{'O_i} = \S^{O_i}$. These sets satisfy $(\triangle)$ as in particular $g_i(\S') = (4,2,5,z)$.
Finally, if $x \le 2$ then $g_i(\S) \le (2,4,4,z)$ by $p_4$. In this case, $\S^{'O_i} = \S^{O_i} \setminus \{t \in \S : V(t) \ni \overline{x^2_i}\}$ and $I^{'L}_i = P_i$ verify $(\triangle)$.
In particular, we have $h_i(\S') \ge h_i(\S)$ as $g_i(\S') \ge (4,2,5,z-1)$.
\begin{figure}[!htbp]
\begin{center}
\includegraphics[width=0.80\textwidth]{ex_restruct_Li.pdf}
\end{center}
\caption{Example showing a ``bad shaped'' solution of case $4$ with $g_i(\S) = (3,4,4,1)$. We have $\S^{Lft_iRgt_i} = \{t_1,t_2,t_3,t_4\}$, $\S^{O_i} = \{t_5\}$, $\S^{Lft_i} = \{t_6,t_7,t_8\}$ and
$\S^{Rgt_i} = \{t_9,t_{10},t_{11},t_{12}\}$. The three vertices of triangle $t_l$ are annotated with label $l$.}
\label{fig:ex_restruct_Li}
\end{figure}
\end{proof}
\paragraph*{Proof of the L-reduction}
We are now ready to prove the main lemma (recall that $f$ is the
reduction from {\sc Max 2-SAT(3)} to {\sc $C_3$-Packing-T}$^{D_M}$ described in
Section~\ref{subsec:reduction2}), and also the main theorem of the section.
\begin{lemma}\label{lem:Lreducv2}
Let $\cal{F}$ be an instance of {\sc Max 2-SAT(3)}. For any $k$, there
exists an assignment $a$ of $\cal{F}$ satisfying at least $k$ clauses
if and only if there exists a solution $\S$ of $f(\cal{F})$ with $|\S|
\geq 11n+m+k$, where $n$ and $m$ are respectively the number of
variables and clauses in $\cal{F}$. Moreover, in the $\Leftarrow$
direction, assignment $a$ can be computed from $\S$ in polynomial
time.
\end{lemma}
\begin{proof}
For any $i \in [n]$, let $A_i =P_i$ if $x_i$ is set to true in $a$,
and $A_i=\overline{P_i}$ otherwise. We first add to $\S$ the set
$\cup_{i \in [n]}A_i$. Then, let $\{C_{j_l}, l \in [k]\}$ be $k$
clauses satisfied by $a$. For any $l \in [k]$, let $i_l$ be the index
of a literal satisfying $C_{j_l}$, let $x \in [2]$ such that
$c^x_{j_l}$ corresponds to this literal, and let $Z_l =
\{x^2_{i_l},x^{'2}_{i_l}\}$ if literal $i_l$ is positive, and $Z_l =
\{\overline{x^2_{i_l}}\}$ otherwise. For any
$j \in [m]$, if $j=j_l$ for some $l$ (meaning that $j$ corresponds to
a satisfied clause), we add to $\S$ the triangle in $Q^{3-x}_j$, and
otherwise we arbitrarily add the triangle in $Q^1_j$. Finally, for any
$l \in [k]$ we add to $\S$ triangle $t_l =
(y_l,\theta_{j_l},c^x_{j_l})$ where $y_l \in Z_l$ is such that $y_l$
is not already used in another triangle. Notice that such a $y_l$
always exists as the triangles of $A_{i}, i \in [n]$ do not intersect
$Z_l$ (by definition of the $A_i$), and as there are at most two
clauses that are true due to a positive literal, and one clause that is
true due to a negative literal.
Thus, $\S$ has $11n+m+k$ vertex disjoint triangles.
Conversely, let $\S$ be a solution of $f(\cal{F})$ with $|\S| \geq
11n+m+k$. By Lemma~\ref{lem:goodpatternv2} we can construct in
polynomial time a solution $\S'$ from $\S$ such that $|\S'| \ge |\S|$,
$\S'$ only contains outer, variable or clause inner triangles, for
each $j \in [m]$ there exists $x \in [2]$ such that $I^{'K}_j=Q^x_j$,
and for each $i\in[n], I^{'L}_i = P_i$ or $I^{'L}_i =
\overline{P_i}$. This implies that the $k' \ge k$ remaining triangles
must be outer triangles. Let $\{t'_l, l \in [k']\}$ be these $k'$
outer triangles, with $t'_l = (y_l,\theta_{j_l},c^{x_l}_{j_l})$.
Let us define the following assignment $a$: for each $i\in[n]$, we set $x_i$ to true if $I^{'L}_i = P_i$, and to false otherwise.
This implies that $a$ satisfies at least the $k'$ clauses $\{C_{j_l}, l \in [k']\}$.
\end{proof}
\begin{theorem}\label{thm:apxhv2}
{\sc $C_3$-Packing-T}$^{D_M}$ is {\sf APX}-hard, and thus does not admit a {\sf PTAS} unless $P={\sf NP}$.
\end{theorem}
\begin{proof}
Let us check that Lemma~\ref{lem:Lreducv2} implies an $L$-reduction (whose definition is recalled in Definition~\ref{def:L} of the appendix).
Let $OPT_1$ (resp. $OPT_2$) be the optimal value of $\cal{F}$ (resp. $f(\cal{F})$).
Notice that Lemma~\ref{lem:Lreducv2} implies that $OPT_2 = OPT_1+11n+m$.
It is known that $OPT_1 \ge \frac{3}{4}m$ (where $m$ is the number of clauses of ${\cal{F}}$). As $n\le m$ (each variable has at least one positive and one negative occurrence),
we get $OPT_2 = OPT_1+11n+m \le \alpha OPT_1$ for an appropriate constant $\alpha$, and thus point $(a)$ of the definition is verified.
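Concretely, combining $n \le m$ with $m \le \frac{4}{3}OPT_1$:
\[
OPT_2 = OPT_1 + 11n + m \le OPT_1 + 12m \le OPT_1 + 12\cdot\frac{4}{3}OPT_1 = 17\,OPT_1,
\]
so $\alpha = 17$ is an appropriate constant.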
Then, given a solution $\S'$ of $f(\cal{F})$, according to Lemma~\ref{lem:Lreducv2} we can construct in polynomial time an assignment $a$ satisfying $c(a)$ clauses with $c(a) \ge |\S'| - 11n-m$.
Thus, the inequality $(b)$ of Definition~\ref{def:L} with $\beta=1$ becomes $OPT_1-c(a) \le OPT_2 - |\S'| = OPT_1+11n+m - |\S'|$, which is true.
\end{proof}
Reduction of Theorem~\ref{thm:apxhv2} does not imply the {\sf
NP}-hardness of {\sc $C_3$-Perfect-Packing-T}~as there remain some unused vertices.
However, it is straightforward to adapt the reduction by adding
backward arcs whose head (resp. tail) are before (resp. after) ${\cal T}$ to
consume the remaining vertices. This leads to the following result.
\begin{theorem}\label{thm:nphperfectv2dm}
{\sc $C_3$-Perfect-Packing-T}$^{D_M}$ is {\sf NP}-hard.
\end{theorem}
\begin{proof}
Let $({\cal F},k)$ be an instance of the decision problem of
$MAX-2-SAT(3)$ and let ${\cal T} = f({\cal F})$ be the tournament defined in
Section~\ref{subsec:reduction2}. Recall that we have ${\cal T} = LK$. Let
$N = |V({\cal T})| = 35n+5m$, $x^* = 33n+3m+3k$ and $n' = N-x^*$. We now
define ${\cal T}'$ by adding $2n'$ new vertices in ${\cal T}$ as follows: $V({\cal T}')
= R_1V({\cal T})R_2$ with $R_i = \{r_i^l, l \in [n']\}$. We add to
$\bA{A}({\cal T}')$ the set of arcs $R=\{(r_2^lr_1^l), l \in [n']\}$ which
are called the dummy arcs.
We say that a triangle $t=(u,v,w)$ is dummy iff $(wu) \in R$ and $v \in V({\cal T})$.
Let us prove that there are at least $k$ clauses satisfiable in $\cal{F}$ iff there exists a perfect packing in ${\cal T}'$.
$\Rightarrow$\\ Given an assignment satisfying $k$ clauses, we define a
solution $\S$ with $V(\S) \subseteq V({\cal T})$ as in
Lemma~\ref{lem:Lreducv2} (triangles of $P_i$ or $\overline{P_i}$ for
each $i \in [n]$, a triangle $Q^x_j$ for each $j \in [m]$, and an
outer triangle $t_l$ with $l \in [k]$ for each satisfied clause). We
have $|\S|=11n+m+k$. This implies that $|V({\cal T}) \setminus V(\S)|=n'$,
and thus we use $n'$ remaining vertices of $V({\cal T})$ by adding to $\S$
$n'$ dummy triangles.
$\Leftarrow$\\ Let $\S'$ be a perfect packing of ${\cal T}'$. Let $\S = \{t
\in \S' : V(t) \subseteq V({\cal T})\}$. Let $X = V({\cal T}) \setminus
V(\S)$. As $\S'$ is a perfect packing of ${\cal T}'$, vertices of $X$ must
be used by $|X|$ dummy triangles of $\S'$, implying $|X| \le n'$ and
$|\S| \ge 11n+m+k$. As $\S$ is a set of vertex disjoint triangles of
${\cal T}$ of size at least $11n+m+k$, this implies by
Lemma~\ref{lem:Lreducv2} that at least $k$ clauses are satisfiable in
$\cal{F}$.
\end{proof}
To establish the kernel lower bound of Section~4, we also need the {\sf NP}-hardness
of {\sc $C_3$-Perfect-Packing-T}~where instances have a slightly simpler structure (at the
price of losing the property that there exists a FAS which is a
matching).
\begin{theorem}\label{thm:nphperfectv2}
{\sc $C_3$-Perfect-Packing-T}~remains {\sf NP}-hard even when restricted to tournaments ${\cal T}$ admitting the following linear ordering.
\begin{itemize}
\item ${\cal T} = LK$ where $L$ and $K$ are two tournaments
\item tournaments $L$ and $K$ are ``fixed'':
\begin{itemize}
\item $K = K_1\dots K_m$ for some $m$, where for each $j \in [m]$ we have $V(K_j) = (\theta_j,c_j)$
\item $L=R_1L_1 \dots L_n R_2$, where each $L_i$ is a copy of the variable gadget of Section~\ref{subsec:reduction2}, $R_i = \{r_i^l, l \in [n']\}$ where $n'=2n-m$, and in addition $\bA{L}$ also contains $R =\{(r_2^lr_1^l), l \in [n']\}$ which are called the dummy arcs.
\end{itemize}
\item for any $a \in \bA{A}({\cal T})$, $V(a) \cap V(K) \neq \emptyset$ implies $a=c_jv$ for some $v \in V(L)$ (there are no backward arcs included in $K$, and all the $\theta_j$ have degree $(0,0)$)
\end{itemize}
\end{theorem}
\begin{proof}
We adapt the reduction of Section~\ref{subsec:reduction2}, reducing now from 3-SAT(3) instead of MAX 2-SAT(3).
Let $\cal{F}$ be an instance of {\sc 3-SAT(3)} with $n$ variables $\{x_i\}$ and $m$ clauses $\{C_j\}$.
For each variable $x_i$ with $i \in [n]$, we create a tournament $L_i$ exactly as in Section~\ref{subsec:reduction2} and we define $L=L_1 \dots L_n$.
For each clause $C_j$ with $j \in [m]$, we create a tournament $K_j$ with $V(K_j) = (\theta_j,c_j)$, and we define $K = K_1\dots K_m$.
Let us now define ${\cal T} = LK$. Now, we add to $\bA{A}({\cal T})$ the following backward arcs from $V(K)$ to $V(L)$ (again, we follow the construction of Section~\ref{subsec:reduction2} except
that now each $c_j$ has degree $(3,0)$). If $C_j = l_{i_1} \vee l_{i_2} \vee l_{i_3}$ is a clause in $\cal{F}$ then we add the arcs
$c_jv_{i_1}, c_jv_{i_2}, c_jv_{i_3}$ where $v_{i_c}$ is the vertex in $\{x_{i_c}^2,x_{i_c}^{'2},\overline{x_{i_c}^2}\}$ corresponding to $l_{i_c}$: if $l_{i_c}$ is a positive occurrence of variable $i_c$ we choose
$v_{i_c} \in \{x_{i_c}^2,x_{i_c}^{'2}\}$, otherwise we choose $v_{i_c} = \overline{x_{i_c}^2}$. Moreover, we choose the vertices $v_{i_c}$ in such a way that for any $i \in [n]$, for each $v \in \{x_i^2,x_i^{'2},\overline{x_i^2}\}$ there exists a unique arc $a \in \bA{A}({\cal T})$ such that $h(a)=v$. This is always possible as each variable has at most $2$ positive occurrences and $1$ negative one.
Finally, we add $2n'$ new vertices in ${\cal T}$ as follows: $V({\cal T}) = R_1V(L)R_2V(K)$, $R_i = \{r_i^l, l \in [n']\}$ where $n'=2n-m$.
We add to $\bA{A}({\cal T})$ the set of arcs $R =\{(r_2^lr_1^l), l \in [n']\}$ which are called the dummy arcs.
Notice that ${\cal T}$ satisfies the claimed structure (defining the left part as $R_1LR_2$ and not only $L$).
We define outer and variable inner triangles as in Section~\ref{sec:approx} (there are no more clause inner triangles), and in addition we say that a triangle $t=(u,v,w)$ is dummy iff $(wu) \in R$ and $v \in V(L)$.
Let us prove that there is an assignment satisfying the $m$ clauses of $\cal{F}$ iff ${\cal T}$ has a perfect packing.
$\Rightarrow$\\
Given an assignment satisfying the $m$ clauses, we define a solution $\S$ containing only outer, variable inner and dummy triangles.
The variable inner triangles are defined as in Lemma~\ref{lem:Lreducv2} (triangles of $P_i$ or $\overline{P_i}$ for each $i \in [n]$).
For each clause $j \in [m]$ satisfied by a literal $l_{i_x}$ we create an outer triangle $(v_{i_x},\theta_j,c_j)$.
There now remain $n'=2n-m$ vertices of $L$, which we use by adding $n'$ dummy triangles to $\S$.
$\Leftarrow$\\
Let $\S$ be a perfect packing of ${\cal T}$.
Notice that the restructuring lemmas of Section~\ref{sec:approx} do not directly remain true because of the dummy arcs. However, we can adapt the arguments of these lemmas in a straightforward manner,
using the fact that $\S$ is moreover a perfect packing.
Given a solution $\S$, we define as in Section~\ref{sec:approx} the sets $I^{L}_i =\{t \in \S : V(t) \subseteq V(L_i)\}$,
$I^{L} = \cup_{i \in [n]} I^L_i$, $O = \{t \in \S : t \mbox{ is an outer triangle}\}$, and $D = \{t \in \S : t \mbox{ is a dummy triangle}\}$.
Again, we do not claim (at this point) that $\S$ does not contain other triangles. Given any perfect packing $\S$ of ${\cal T}$, we can prove the following properties.
\begin{itemize}
\item $\S$ must contain exactly $m$ outer triangles ($|O| =m$). Indeed, for any $j$ from $m$ to $1$, the only way to use $\theta_j$ is to create
an outer triangle $(u_j,\theta_j,c_j)$. This implies that triangles of $O$ consume exactly $m$ disjoint vertices in $L$.
\item for any $i \in [n]$, we must have $|I^L_i|=11$.
Indeed, let $x$ be the number of vertices of $L$ used in $\S$ (as $\S$ is a perfect packing we know that $x=|L|=35n$).
The only triangles of $\S$ that can use a vertex of $L$ are the outer, the variable inner and the dummy triangles, and each outer or dummy triangle uses exactly one vertex of $L$, implying $x \le 3(\sum_{i \in [n]}|I^L_i|)+m+n'$
as $|D| \le n'$, and thus $\sum_{i \in [n]}|I^L_i| \ge \frac{35n-m-n'}{3} = 11n$. As $|V(L_i)| = 35$ we have $|I^L_i| \le 11$, and thus we must have $|I^L_i|=11$ for any $i$.
\end{itemize}
Let us now consider the tournament ${\cal T}_0 = {\cal T}[V({\cal T}) \setminus V(R)]$ without the dummy arcs, and $\S_0 = \{t \in \S : V(t) \subseteq V({\cal T}_0)\}$.
We adapt in a straightforward way the notion of variable inner and outer triangle in ${\cal T}_0$.
Observe that the variable inner and outer triangles of $\S$ and $\S_0$ are the same, and thus are denoted respectively $I^L_i$ and $\S^{O_i}$ in both cases.
In particular, $\S_0$ still contains $m$ outer triangles of ${\cal T}_0$.
Now we simply apply the proof of Lemma~\ref{lem:goodpatternv2} on $\S_0$. More precisely, Lemma~\ref{lem:goodpatternv2} restructures $\S_0$ into a solution $\S_0'$ with $\S_0' = (\S_0 \setminus (I^L_i \cup \S^{O_i})) \cup (I^{'L}_i \cup \S^{'O_i})$,
where $I^{'L}_i$ and $\S^{'O_i}$ satisfy properties $(\triangle)$. In particular, as $|I^L_i|=|I^{'L}_i|=11$, $\triangle_3$ implies that $|\S_0^{'O_i}| \ge |\S_0^{O_i}|$,
and thus that $|\S_0^{'O}| \ge |\S_0^{O}| = m$. Thus, $\S'_0$ satisfies $I^{'L}_i = P_i$ or $I^{'L}_i = \overline{P_i}$ for any $i$, and has $m$ outer triangles. We can now define
from $\S'_0$, as in Lemma~\ref{lem:Lreducv2}, an assignment satisfying the $m$ clauses.
\end{proof}
\subsection{$(1+\frac{6}{c-1})$-approximation when backward arcs have large minspan}
Given a set of pairwise distinct pairs $D$ and an integer $c$, we denote by {\sc $C_3$-Packing-T}$^D_{\ge c}$ the problem {\sc $C_3$-Packing-T}$^D$ restricted to tournaments such that
there exists a linear representation of minspan at least $c$ and where $d(v) \in D$ for all $v$.
Throughout this section we consider an instance ${\cal T}$ of {\sc $C_3$-Packing-T}$^{D_M}_{\ge c}$ with a given linear ordering $(V({\cal T}),\bA{A}({\cal T}))$ of minspan at least $c$ and whose degrees belong to $D_M$.
The motivation for studying the approximability of this special case comes from the situation of MAX-SAT(c), whose approximability becomes easier as $c$ grows, as the derandomized uniform assignment provides a $\frac{2^c}{2^c-1}$ approximation algorithm. Somehow, one could claim that MAX-SAT(c) becomes easy to approximate for large $c$ as there are many ways to satisfy a given clause.
As the same intuition applies to tournaments admitting an ordering with large minspan (as there are $c-1$ different ways to use a given backward arc in a triangle), our objective was to
find a polynomial approximation algorithm whose ratio tends to $1$ when $c$ increases.
Let us now define algorithm $\Phi$.
We define a bipartite graph $G = (V_1,V_2,E)$ with $V_1 = \{v^1_{a} : a \in \bA{A}({\cal T})\}$ and
$V_2 =\{v^2_l : v_l \in V_{(0,0)}\}$.
Thus, to each backward arc we associate a vertex in $V_1$ and to each vertex $v_l$ with $d(v_l) = (0,0)$ we associate a vertex in $V_2$.
Then, $\{v^1_{a},v^2_l\} \in E$ iff $(h(a),v_l,t(a))$ is a triangle in ${\cal T}$.
In phase $1$, $\Phi$ computes a maximum matching $M = \{e_l, l \in [|M|]\}$ in $G$. For every $e_l = \{v^1_{ij},v^2_l\} \in M$ it creates a triangle $t^1_l = (v_j,v_l,v_i)$.
Let $S^1 = \{t^1_l, l \in [|M|]\}$. Notice that triangles of $S^1$ are vertex disjoint. Let us now turn to phase $2$.
Let ${\cal T}^2$ be the tournament ${\cal T}$ where we removed all vertices $V(S^1)$.
Let $(V({\cal T}^2),\bA{A}({\cal T}^2))$ be the linear ordering of ${\cal T}^2$ obtained by
removing $V(S^1)$ in $(V({\cal T}),\bA{A}({\cal T}))$.
We say that three distinct backward arcs $\{a_1,a_2,a_3\} \subseteq \bA{A}({\cal T}^2)$ can be packed into triangles $t_1$ and $t_2$ iff $V(\{t_1,t_2\}) = V(\{a_1,a_2,a_3\})$ and the $t_i$ are vertex disjoint.
For example, if $h(a_1) < h(a_2) < t(a_1) < h(a_3) < t(a_2) < t(a_3)$, then $\{a_1,a_2,a_3\}$ can be packed into $(h(a_1),h(a_2),t(a_1))$ and $(h(a_3),t(a_2),t(a_3))$ (recall that
when $\bA{A}({\cal T})$ forms a matching, $(u,v,w)$ is a triangle iff $wu \in \bA{A}({\cal T})$ and $u<v<w$), and if $h(a_1) < h(a_2) < t(a_2) < h(a_3) < t(a_3) < t(a_1)$, then $\{a_1,a_2,a_3\}$ cannot be packed into two triangles. In phase $2$, while it is possible, $\Phi$ finds a triplet $Y \subseteq \bA{A}({\cal T}^2)$ of arcs that can be packed into triangles, creates the two corresponding triangles, and removes $V(Y)$.
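As a sanity check, this packability condition can be tested by brute force. In the sketch below (our own helper, not part of the construction), an arc $a$ is encoded as the position pair $(t(a),h(a))$ in the linear ordering; since the FAS is a matching, two vertex disjoint triangles covering the six endpoints must use two of the arcs as $(h(\cdot),\cdot,t(\cdot))$ backbones and the two endpoints of the remaining arc as middle vertices.

```python
from itertools import permutations

def can_pack(a1, a2, a3):
    """Return True iff the three backward arcs (given as (tail, head)
    position pairs, with head < tail) can be packed into two triangles."""
    arcs = [a1, a2, a3]
    for i, j, k in permutations(range(3)):
        (tx, hx), (ty, hy), (tz, hz) = arcs[i], arcs[j], arcs[k]
        # arcs i and j are the backbones; the endpoints of arc k are
        # distributed as the two middle vertices, one in each triangle
        for m1, m2 in ((hz, tz), (tz, hz)):
            if hx < m1 < tx and hy < m2 < ty:
                return True
    return False
```

On the two examples above (positions $0,\dots,5$), `can_pack((2, 0), (4, 1), (5, 3))` holds while `can_pack((5, 0), (2, 1), (4, 3))` does not.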
Let $S^2$ be the set of triangles created in phase $2$ and let $S = S^1 \cup S^2$.
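Phase 1 itself is a maximum bipartite matching computation. A minimal sketch under the same (tail, head) position encoding, using Kuhn's augmenting-path algorithm (the name `phase1` is ours):

```python
def phase1(n, backward_arcs):
    """Phase 1 of Phi: match each backward arc a with a degree-(0,0)
    vertex v inside its span, so that (h(a), v, t(a)) is a triangle.
    Returns the list of phase-1 triangles (head, middle, tail)."""
    used = {v for a in backward_arcs for v in a}
    free = [v for v in range(n) if v not in used]  # vertices of degree (0,0)
    # candidate middle vertices for each arc: free vertices inside its span
    adj = {i: [v for v in free if h < v < t]
           for i, (t, h) in enumerate(backward_arcs)}
    match = {}  # middle vertex -> index of the arc it is matched with

    def augment(i, seen):
        # try to match arc i, possibly re-matching previously matched arcs
        for v in adj[i]:
            if v not in seen:
                seen.add(v)
                if v not in match or augment(match[v], seen):
                    match[v] = i
                    return True
        return False

    for i in range(len(backward_arcs)):
        augment(i, set())
    return [(backward_arcs[i][1], v, backward_arcs[i][0])
            for v, i in match.items()]
```

For instance, `phase1(4, [(3, 0)])` matches the single arc with the free vertex $1$ and returns the triangle `(0, 1, 3)`.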
\begin{observation}\label{obs:arc}
For any $a \in \bA{A}({\cal T})$, either $V(a) \subseteq V(S)$ or $V(a) \cap V(S) = \emptyset$. Equivalently, no backward arc has one endpoint in $V(S)$ and the other outside $V(S)$.
\end{observation}
According to Observation~\ref{obs:arc}, we can partition $\bA{A}({\cal T}) = \bA{A}_0 \cup \bA{A}_1 \cup \bA{A}_2$, where for $i \in \{1,2\}$, $\bA{A}_i = \{a \in \bA{A}({\cal T}) : V(a) \subseteq V(S^i)\}$ is the set of arcs used in phase $i$,
and $\bA{A}_0 =_{def} \{b_i, i \in [x] \}$ is the set of remaining unused arcs. Let $\bA{A}_\Phi = \bA{A}_1 \cup \bA{A}_2$, $m_i = |\bA{A}_i|$, $m = m_0+m_1+m_2$ and $m_{\Phi} = m_1+m_2$ the number of arcs (entirely) consumed by $\Phi$.
To prove the desired $1+\frac{6}{c-1}$ approximation ratio, we will first prove in Lemma~\ref{lemma:numberarcs} that $\Phi$ uses almost all the arcs ($m_\Phi \ge (1-\frac{6}{c+5})m$), and then in Theorem~\ref{thm:approxc} that the number of triangles made with these arcs is ``optimal''. Notice that the latter condition is mandatory: if $\Phi$ used its $m_\Phi$ arcs to only create $\frac{2}{3}m_\Phi$ triangles in phase 2
instead of creating $m' \approx m_\Phi$ triangles with $m'$ backward arcs and $m'$ vertices of degree $(0,0)$, we would only get a $\frac{3}{2}$ approximation ratio.
\begin{lemma}\label{lemma:numberarcs}
For any $c\ge 2$, $m_\Phi \ge (1-\frac{6}{c+5})m$.
\end{lemma}
\begin{proof}
Throughout this proof, the span $s(a)$ is always considered in the initial input ${\cal T}$, and not in ${\cal T}^2$.
For any $i \in [x]$, let us associate to each $b_i \in \bA{A}_0$ a set $B_i \subseteq \bA{A}_\Phi$ defined as follows (see Figure~\ref{fig:B_i} for an example).
Let $b_j \in \bA{A}_0$ be such that $s(b_j) \subseteq s(b_i)$ and there does not exist a $b_k \in \bA{A}_0$ such that
$s(b_k)$ is strictly included in $s(b_j)$ (we may have $b_j = b_i$).
Let $Z = V(\bA{A}_0) \cap s(b_j)$. Notice that $|Z| \le 1$, meaning that there is at most one endpoint of a $b_l, l\neq j$, in $s(b_j)$, as otherwise there would be three arcs in $\bA{A}_0$ that could be packed into two triangles. If there exists $a \in \bA{A}_{\Phi}$ with $s(a) \subseteq s(b_j)$ we define $a_0 = a$, and otherwise we define $a_0 = b_j$.
Now, let $v \in s(a_0) \setminus Z$.
Observe that $V({\cal T})$ is partitioned into $V(\bA{A}_0) \cup V(\bA{A}_{\Phi}) \cup V_{(0,0)}$. If $v \in V_{(0,0)}$, then there exists $t^1_l = (u,v,w)$ with $wu \in \bA{A}_1$ (as otherwise the matching computed in phase 1 would not be maximum, since we could add the edge corresponding to $b_j$ and $v$),
and we add $wu$ to $B_i$. Otherwise, $v \in V(a)$ with $a \in \bA{A}_{\Phi}$ (this arc could have been used in phase $1$ or phase $2$), and we add $a$ to $B_i$. Notice that as $a_0$ does not properly contain another arc of $\bA{A}_{\Phi}$, all the added arcs are pairwise distinct, and thus $|B_i| = |s(a_0) \setminus Z| \ge c-1$.
\begin{figure}[!htbp]
\begin{center}
\includegraphics[width=0.75\textwidth]{def_Bi.pdf}
\end{center}
\caption{In this example, white vertices represent $V({\cal T}) \setminus V(S)$ (vertices not used by $\Phi$), and black ones represent $V(S)$. In this case
we have $B_i = \{a_l, l \in [3]\}$. Indeed, each $v_l \in s(a_0) \setminus Z$, for $l \in [3]$, brings $a_l$ in $B_i$. In particular $v_2 \in V_{(0,0)}$ and was used with $a_2$ to create
a triangle in phase 1.}
\label{fig:B_i}
\end{figure}
\begin{figure}[!htbp]
\begin{center}
\includegraphics[width=0.75\textwidth]{ex_Bi_tight.pdf}
\end{center}
\caption{Example where $|B(a)|=6$ for $a \in \bA{A}_\Phi$, where $B(a)=\{b_l, l \in [6]\}$.}
\label{fig:B_i_tight}
\end{figure}
Given $a \in \bA{A}_{\Phi}$, let $B(a) = \{B_i, a \in B_i\}$. Let us
prove that $|B(a)| \le 6$ for any $a \in \bA{A}_{\Phi}$. For any $v
\in V(S)$, let $d_B(v) = |\{b_i : v \in s(b_i)\}|$. Observe that
$d_B(v) \le 2$, as otherwise any triplet of arcs containing $v$ in
their span could be packed into two triangles (there are only $6$
cases to check according to the $3!$ possible orderings of the tails of
these $3$ arcs).
For any $a \in \bA{A}_1$, let $V'(a) = V(t^a)$ where $t^a \in S$ is the triangle containing $a$,
and for any $a \in A_2$, let $V'(a) = V(a)$.
Observe that by definition of the $B_i$, $a \in B_i$ implies that $b_i$ contributes to the degree $d_B(v)$ for some $v \in V'(a)$. As $|V'(a)| \le 3$ and $d_B(v) \le 2$ for any $v \in V'(a)$,
this implies by the pigeonhole principle that $|B(a)| \le 6$ (notice that this bound is tight, as depicted in Figure~\ref{fig:B_i_tight}).
Thus, if we consider the bipartite graph with vertex set $(\bA{A}_0,\bA{A}_{\Phi})$ and an edge between $b_i \in \bA{A}_0$ and $a \in \bA{A}_{\Phi}$ iff $a \in B_i$, the number of edges $x$ of this graph satisfies
$|\bA{A}_0|(c-1) \le x \le 6|\bA{A}_{\Phi}|$, implying the desired inequality as $m_\Phi = m - m_0$.
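Spelling out this last step:
\[
m_0(c-1) \;\le\; x \;\le\; 6\,m_\Phi = 6(m-m_0)
\;\Longrightarrow\;
m_0 \le \frac{6}{c+5}m
\;\Longrightarrow\;
m_\Phi = m - m_0 \ge \Big(1-\frac{6}{c+5}\Big)m.
\]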
\end{proof}
\begin{theorem}\label{thm:approxc}
For any $c \ge 2$, $\Phi$ is a polynomial $(1+\frac{6}{c-1})$ approximation algorithm for {\sc $C_3$-Packing-T}$^{D_M}_{\ge c}$.
\end{theorem}
\begin{proof}
Let $OPT$ be an optimal solution. Let us define set $OPT_i \subseteq OPT$ and $\bA{A}^*_i \subseteq \bA{A}({\cal T})$ as follows.
Let $t=(u,v,w) \in OPT$. As the FAS of the instance is a matching, we know that $wu \in \bA{A}({\cal T})$ as we cannot have a triangle with two backward arcs.
If $d(v)=(0,0)$ then we add $t$ to $OPT_1$ and $wu$ to $\bA{A}^*_1$.
Otherwise, let $v'$ be the other endpoint of the unique arc $a$ containing $v$. If $v' \notin V(OPT)$, then we add $t$ to $OPT_3$ and $\{wu,a\}$ to $\bA{A}^*_3$.
Otherwise, let $t' \in OPT$ such that $v' \in V(t')$. As the FAS of the instance is a matching we know that $v'$ is the middle point of $t'$, or more formally that
$t' = (u',v',w')$ with $u'w' \in \bA{A}({\cal T})$. We add $\{t,t'\}$ to $OPT_2$ and $\{wu,a,w'u'\}$ to $\bA{A}^*_2$. Notice that the $OPT_i$ form a partition of $OPT$, and that the $\bA{A}^*_i$ have pairwise empty intersection, implying $|\bA{A}^*_1|+|\bA{A}^*_2|+|\bA{A}^*_3| \le m$. Notice also that as triangles of $OPT_1$ correspond to a matching of size $|OPT_1|$ in the bipartite graph defined in phase $1$ of algorithm $\Phi$, we have $|OPT_1|=|\bA{A}^*_1| \le |\bA{A}_1|$.
Putting pieces together we get (recall that $S$ is the solution computed by $\Phi$): $|OPT| = |OPT_1|+|OPT_2|+|OPT_3| = |\bA{A}^*_1|+\frac{2}{3}|\bA{A}^*_2|+\frac{1}{2}|\bA{A}^*_3|
\le |\bA{A}^*_1|+\frac{2}{3}(|\bA{A}^*_2|+|\bA{A}^*_3|) \le |\bA{A}^*_1|+\frac{2}{3}(m-|\bA{A}^*_1|) \le \frac{1}{3}|\bA{A}_1|+\frac{2}{3}m $ and
$|S| = |S^1|+|S^2|
= |\bA{A}_1|+\frac{2}{3}|\bA{A}_2|
\ge |\bA{A}_1|+\frac{2}{3}((1-\frac{6}{c+5})m - |\bA{A}_1|)
= \frac{1}{3}|\bA{A}_1|+\frac{2}{3}(1-\frac{6}{c+5})m $
which implies the desired ratio.
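Indeed, comparing the two bounds, the worst case is $|\bA{A}_1| = 0$:
\[
\frac{|OPT|}{|S|}
\;\le\;
\frac{\frac{1}{3}|\bA{A}_1|+\frac{2}{3}m}{\frac{1}{3}|\bA{A}_1|+\frac{2}{3}\big(1-\frac{6}{c+5}\big)m}
\;\le\;
\frac{1}{1-\frac{6}{c+5}}
\;=\;
\frac{c+5}{c-1}
\;=\;
1+\frac{6}{c-1}.
\]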
\end{proof}
\section{Kernelization}
\label{sec:kernel}
Throughout this section we consider the decision problem {\sc $C_3$-Packing-T}~parameterized by the size of the solution. Thus, an input is a pair $I=({\cal T},k)$, and we say that $I$ is
positive iff there exists a set of $k$ vertex disjoint triangles in ${\cal T}$.
\subsection{Positive results for sparse instances}
Observe first that the kernel in $\O(k^2)$ vertices for $3$-{\sc Set Packing} of~\cite{abu2009quadratic} directly implies a kernel in $\O(k^2)$ vertices for {\sc $C_3$-Packing-T}.
Indeed, given an instance $({\cal T}=(V,A),k)$ of {\sc $C_3$-Packing-T}, we create an instance $(I'=(V,C),k)$ of $3$-{\sc Set Packing} by creating a hyperedge $c \in C$ for each triangle of ${\cal T}$.
Then, as the kernel of~\cite{abu2009quadratic} only removes vertices, it outputs an induced instance $(\overline{I'}=I'[V'],k')$ of $I$ with $V' \subseteq V$, and thus this induced instance can be interpreted
as a subtournament, and the corresponding instance $({\cal T}[V'],k')$ is an equivalent tournament with $\O(k^2)$ vertices.
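This translation is immediate to implement. A sketch (the function name is ours), where the tournament is given by a boolean adjacency matrix with `adj[u][v]` true iff there is an arc from $u$ to $v$:

```python
from itertools import combinations

def c3_hyperedges(adj):
    """Build the 3-Set Packing instance: one hyperedge per directed
    triangle of the tournament given by adjacency matrix adj."""
    n = len(adj)
    return [frozenset({u, v, w})
            for u, v, w in combinations(range(n), 3)
            # a triple spans a triangle iff it induces a directed 3-cycle
            if (adj[u][v] and adj[v][w] and adj[w][u])
            or (adj[v][u] and adj[w][v] and adj[u][w])]
```

A set of $k$ vertex disjoint triangles in ${\cal T}$ then corresponds exactly to $k$ disjoint hyperedges of the produced instance.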
As shown in the next theorem, and as one could expect, it is also possible to obtain a kernel bounded by the number of backward arcs.
\begin{theorem}\label{thm:kernelm}
{\sc $C_3$-Packing-T}~admits a polynomial kernel with $\O(m)$ vertices, where $m$ is the number of arcs in a given FAS of the input.
\end{theorem}
\begin{proof}
Let $I=({\cal T},k)$ be an input of the decision problem associated to {\sc $C_3$-Packing-T}. Observe first that we can build in polynomial time a linear ordering $\sigma({\cal T})$ whose backward arcs $\bA{A}({\cal T})$ correspond to the given FAS. We will obtain the kernel by removing useless vertices of degree $(0,0)$.
Let us define a bipartite graph $G = (V_1,V_2,E)$ with $V_1 = \{v^1_{a} : a \in \bA{A}({\cal T})\}$ and $V_2 =\{v^2_l : v_l \in V_{(0,0)}\}$.
Thus, to each backward arc we associate a vertex in $V_1$ and to each vertex $v_l$ with $d(v_l) = (0,0)$ we associate a vertex in $V_2$.
Then, $\{v^1_{a},v^2_l\} \in E$ iff $(h(a),v_l,t(a))$ is a triangle in ${\cal T}$.
By Hall's theorem, we can in polynomial time partition $V_1$ and $V_2$ into $V_1=A_1 \cup A_2$, $V_2=B_0 \cup B_1 \cup B_2$ such that $N(A_2) \subseteq B_2$, $|B_2| \le |A_2|$, and
there is a perfect matching between vertices of $A_1$ and $B_1$ ($B_0$ is simply defined by $B_0 = V_2 \setminus (B_1 \cup B_2)$).
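This partition can be computed from any maximum matching by an alternating-path search, as in the standard crown-decomposition construction. A sketch (our own helper; we assume the matching is given as a dict containing both directions):

```python
from collections import deque

def crown_partition(V1, V2, E, match):
    """Split V1 into A1, A2 and V2 into B0, B1, B2 so that N(A2) is
    contained in B2, |B2| <= |A2|, and `match` restricted to A1 is a
    perfect matching between A1 and B1."""
    adj = {u: [] for u in V1}
    for u, v in E:
        adj[u].append(v)
    # alternating BFS from the unmatched vertices of V1:
    # non-matching edges go V1 -> V2, matching edges go V2 -> V1
    A2 = {u for u in V1 if u not in match}
    B2 = set()
    queue = deque(A2)
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in B2:
                B2.add(v)  # v is matched, else the matching was not maximum
                w = match.get(v)
                if w is not None and w not in A2:
                    A2.add(w)
                    queue.append(w)
    A1 = set(V1) - A2
    B1 = {match[u] for u in A1}
    B0 = set(V2) - B1 - B2
    return A1, A2, B0, B1, B2
```

Each $b \in B_2$ is matched into $A_2$ (otherwise an augmenting path would exist), which gives $|B_2| \le |A_2|$.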
For any $i, 0 \le i \le 2$, let $X_i = \{v_l \in V_{(0,0)} : v^2_l \in B_i\}$ be the vertices of ${\cal T}$ corresponding to $B_i$.
Let $V_{\neq(0,0)} = V({\cal T}) \setminus V_{(0,0)}$. Notice that $|V_{\neq(0,0)}| \le 2m$.
We define ${\cal T}' = {\cal T}[V_{\neq(0,0)} \cup X_1 \cup X_2]$ the sub-tournament obtained from ${\cal T}$ by removing vertices of $X_0$,
and $I' = ({\cal T}',k)$. We point out that this definition of ${\cal T}'$ is similar to the final step of the kernel of~\cite{abu2009quadratic} as our partition of $V_1$ and $V_2$ (more precisely
$(A_1,B_0 \cup B_1)$) corresponds in fact to the crown decomposition of~\cite{abu2009quadratic}. Observe that $|V({\cal T}')| \le 2m+|A_1|+|A_2| \le 3m$, implying the desired bound of the number of vertices of the kernel.
It remains to prove that $I$ and $I'$ are equivalent. Let $k \in \mathbb{N}$, and let us prove that there exists a solution $\S$ of ${\cal T}$ with $|\S| \ge k$ iff
there exists a solution $\S'$ of ${\cal T}'$ with $|\S'| \ge k$. Observe that the $\Leftarrow$ direction is obvious as ${\cal T}'$ is a subtournament of ${\cal T}$. Let us now prove the $\Rightarrow$ direction.
Let $\S$ be a solution of ${\cal T}$ with $|\S| \ge k$. Let $\S = \S_{(0,0)} \cup \S_1$ with $\S_{(0,0)} = \{t \in \S : t=(h(a),v,t(a))\mbox{ with } v \in V_{(0,0)}, a \in \bA{A}({\cal T})\}$
and $\S_1 = \S \setminus \S_{(0,0)}$. Observe that $V(\S_1) \cap V_{(0,0)} = \emptyset$, implying $V(\S_1) \subseteq V_{\neq(0,0)}$.
For any $i \in [2]$, let $\S^i_{(0,0)} = \{t \in \S_{(0,0)} : t=(h(a),v,t(a))\mbox{ with } v \in V_{(0,0)}, v^1_a \in A_i\}$ be a partition of $\S_{(0,0)}$.
We define $\S' = \S_1 \cup \S^2_{(0,0)} \cup \S^{'1}_{(0,0)}$, where $\S^{'1}_{(0,0)}$ is defined as follows. For any $v^1_a \in A_1$, let $v^2_{\mu(a)} \in B_1$ be the vertex associated to $v^1_a$
in the $(A_1,B_1)$ matching. To any triangle $t=(h(a),v,t(a)) \in \S^1_{(0,0)}$ we associate a triangle $f(t)=(h(a),v_{\mu(a)},t(a)) \in \S^{'1}_{(0,0)}$, where by definition $v_{\mu(a)} \in X_1$.
For the sake of uniformity we also say that any $t \in \S_1 \cup \S^2_{(0,0)}$ is associated to $f(t)=t$.
Let us now verify that triangles of $\S'$ are vertex disjoint by verifying that triangles of $\S^{'1}_{(0,0)}$ do not intersect another triangle of $\S'$.
Let $f(t)=(h(a),v_{\mu(a)},t(a)) \in \S^{'1}_{(0,0)}$. Observe that $h(a)$ and $t(a)$ cannot belong to any other triangle $f(t')$ of $\S'$ as for any $f(t'') \in \S'$, $V(f(t'')) \cap V_{\neq(0,0)} = V(t'') \cap V_{\neq(0,0)}$
(remember that we use the same notation $V_{\neq(0,0)}$ to denote the vertices whose degree differs from $(0,0)$ in ${\cal T}$ and ${\cal T}'$).
Let us now consider $v_{\mu(a)}$. For any $f(t') \in \S_1$, as $V(f(t')) \cap V_{(0,0)} = \emptyset$ we have $v_{\mu(a)} \notin V(f(t'))$.
For any $f(t')=(h(a'),v_l,t(a')) \in \S^{2}_{(0,0)}$, we know by definition that $v^1_{a'} \in A_2$, implying that $v^2_l \in B_2$ (and $v_l \in X_2$) as $N(A_2) \subseteq B_2$ and thus that $v_l \neq v_{\mu(a)}$.
Finally, for any $f(t')=(h(a'),v_l,t(a')) \in \S^{'1}_{(0,0)}$, we know that $v_l = v_{\mu(a')}$, where $a \neq a'$, leading to $v_l \neq v_{\mu(a)}$ as $\mu$ is a matching.
\end{proof}
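The matching-based partition used in the proof above can be computed, for instance, with Kuhn's augmenting-path algorithm followed by an alternating-path search, as in the standard constructive proof of K\H{o}nig's theorem. The following Python sketch is purely illustrative (it is not part of the paper); it assumes the vertices of $V_1$ and $V_2$ are integers and that \texttt{adj[u]} lists the neighbours in $V_2$ of $u \in V_1$, and returns sets playing the roles of $A_1$, $A_2$, $B_1$ and $B_2$:

```python
from collections import deque

def kuhn_matching(adj, n1):
    """Maximum matching in a bipartite graph; adj[u] lists neighbours in V2."""
    match1 = [None] * n1            # partner in V2 of each u in V1
    match2 = {}                     # partner in V1 of each matched v in V2
    def try_augment(u, seen):
        for v in adj[u]:
            if v in seen:
                continue
            seen.add(v)
            if v not in match2 or try_augment(match2[v], seen):
                match1[u], match2[v] = v, u
                return True
        return False
    for u in range(n1):
        try_augment(u, set())
    return match1, match2

def crown_partition(adj, n1):
    """Partition V1 = A1 + A2 and part of V2 into B1, B2 such that
    N(A2) is contained in B2, |B2| <= |A2|, and the matching restricted
    to A1 x B1 is perfect; B0 is the rest of V2."""
    match1, match2 = kuhn_matching(adj, n1)
    # A2: V1-vertices reachable by alternating paths from unmatched V1-vertices.
    A2 = {u for u in range(n1) if match1[u] is None}
    B2 = set()
    queue = deque(A2)
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in B2:
                # v is matched (else there would be an augmenting path),
                # and its matching partner also joins A2.
                B2.add(v)
                w = match2[v]
                if w not in A2:
                    A2.add(w)
                    queue.append(w)
    A1 = [u for u in range(n1) if u not in A2]
    B1 = [match1[u] for u in A1]
    return A1, sorted(A2), B1, sorted(B2)
```

Every vertex of $B_2$ is matched into $A_2$, which gives $|B_2| \le |A_2|$, and a neighbour of $A_2$ outside $B_2$ would extend an alternating path, so $N(A_2) \subseteq B_2$.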
Using the previous result we can provide a kernel with $\O(k)$ vertices for
{\sc $C_3$-Packing-T}~restricted to sparse tournaments.
\begin{theorem}\label{thm:kernel-for-sparse}
{\sc $C_3$-Packing-T}~restricted to sparse tournaments admits a polynomial kernel with
$\O(k)$ vertices, where $k$ is the size of the solution.
\end{theorem}
\begin{proof}
Let $I=({\cal T},k)$ be an input of the decision problem associated to
{\sc $C_3$-Packing-T}~such that ${\cal T}$ is a sparse tournament. We say that an arc $a$ is
a \emph{consecutive backward arc} of $\sigma({\cal T})$ if it is a backward arc
of ${\cal T}$ and $a=v_{i+1}v_i$ with $v_i$ and $v_{i+1}$ being consecutive
in $\sigma({\cal T})$. If ${\cal T}$ admits a consecutive backward arc $v_{i+1}v_i$
then we can exchange $v_i$ and $v_{i+1}$ in ${\cal T}$. The backward arcs of
the new linear ordering are exactly $\bA{A}({\cal T})\setminus \{v_{i+1}v_i\}$
and thus still form a matching. Hence we can assume that ${\cal T}$ does not
contain any consecutive backward arc. Now if $|\bA{A}({\cal T})|< 5k$ then
by Theorem~\ref{thm:kernelm} we have a kernel with $\O(k)$
vertices. Otherwise, if $|\bA{A}({\cal T})|\ge 5k$ we will prove that ${\cal T}$ is
a {\sc yes} instance of {\sc $C_3$-Packing-T}. Indeed, we can greedily produce a family
of $k$ vertex-disjoint triangles in ${\cal T}$. To that end, consider a backward
arc $v_jv_i$ of ${\cal T}$, with $i<j$. As $v_jv_i$ is not consecutive there
exists $l$ with $i<l<j$, and we select the triangle $v_iv_lv_j$ and
remove the vertices $v_i$, $v_l$ and $v_j$ from ${\cal T}$. Denote by ${\cal T}'$
the resulting tournament and let $\sigma({\cal T}')$ be the order induced by
$\sigma({\cal T})$ on ${\cal T}'$. We lose at most 2 backward arcs in $\sigma({\cal T}')$
($v_jv_i$ and a possible backward arc containing $v_l$) and create at
most 3 consecutive backward arcs by removing $v_i$, $v_l$ and
$v_j$. Reducing these consecutive backward arcs as previously, we can
assume that $\sigma({\cal T}')$ does not contain any consecutive backward arc
and satisfies $|\bA{A}({\cal T}')|\ge |\bA{A}({\cal T})|-5 \ge 5(k-1)$. Finally,
repeating this argument inductively, we obtain the desired family of
$k$ vertex-disjoint triangles in ${\cal T}$, and ${\cal T}$ is a {\sc yes}
instance of {\sc $C_3$-Packing-T}.
\end{proof}
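The greedy procedure in the proof above can be sketched as follows (a hypothetical Python illustration, not from the paper). The sparse tournament is given by its linear ordering \texttt{order} and the set \texttt{backward} of backward arcs, where an arc \texttt{(u, v)} has its tail \texttt{u} placed after its head \texttt{v}; the matching property of the backward arcs guarantees that the chosen triple really induces a directed triangle:

```python
def greedy_triangles(order, backward, k):
    """Greedily extract up to k vertex-disjoint triangles from a sparse
    tournament (backward arcs form a matching of the linear ordering)."""
    order = list(order)
    backward = set(backward)
    triangles = []
    while len(triangles) < k and backward:
        pos = {v: i for i, v in enumerate(order)}
        # First eliminate consecutive backward arcs by swapping their endpoints.
        swapped = True
        while swapped:
            swapped = False
            for (u, v) in list(backward):
                if pos[u] == pos[v] + 1:
                    order[pos[v]], order[pos[u]] = u, v
                    pos[u], pos[v] = pos[v], pos[u]
                    backward.remove((u, v))
                    swapped = True
        if not backward:
            break
        # Pick any backward arc u -> v; it is not consecutive, so the successor
        # w of v lies strictly between v and u, and (v, w, u) is a triangle:
        # the arcs v -> w and w -> u are forward because the backward arcs form
        # a matching and u, v are already saturated by the arc u -> v.
        u, v = next(iter(backward))
        w = order[pos[v] + 1]
        triangles.append((v, w, u))
        for x in (u, v, w):
            order.remove(x)
        backward = {(a, b) for (a, b) in backward
                    if {a, b}.isdisjoint({u, v, w})}
    return triangles
```

As in the proof, each extracted triangle destroys at most 5 backward arcs overall, so $|\bA{A}({\cal T})| \ge 5k$ suffices for the loop to produce $k$ triangles.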
\subsection{No (generalised) kernel in ${\cal O}(k^{2-\epsilon})$}
In this section we provide an OR-cross composition (see
Definition~\ref{def:orcompo} in Appendix) from {\sc $C_3$-Perfect-Packing-T}~restricted
to instances of Theorem~\ref{thm:nphperfectv2} to {\sc $C_3$-Perfect-Packing-T}~
parameterized by the total number of vertices.
\paragraph*{Definition of the instance selector}
The next lemma builds a special tournament, called an \emph{instance
selector}, that will be useful in designing the cross composition.
\begin{lemma}\label{lem:path}
For any $\g =2^{\gp}$ and $\m$ we can construct in polynomial time (in
$\g$ and $\m$) a tournament $\Pg$ such that
\begin{itemize}
\item there exist $\g$ subsets of $\m$ vertices $\Xg{i}=\{x^i_j : j
\in [\m] \}$, that we call the distinguished set of vertices, such
that
\begin{itemize}
\item the $\Xg{i}$ have pairwise empty intersection
\item for any $i \in [\g]$, there exists a packing $\S$ of triangles
of $\Pg$ such that $V(\Pg) \setminus V(\S) = \Xg{i}$ (using this
packing of $\Pg$ corresponds to select instance $i$)
\item for any packing $\S$ of triangles of $\Pg$ with
$|V(\S)|=|V(\Pg)|-\m$ there exists $i \in [\g]$ such that $V(\Pg)
\setminus V(\S) \subseteq \Xg{i}$
\end{itemize}
\item $|V(\Pg)|=\O(\m\g)$.
\end{itemize}
\end{lemma}
\begin{proof}
Let us first describe vertices of $\Pg$. For any $i \in [\g-1]_0$
(where $[x]_0$ denotes $\{0,\dots,x\}$) let $\Xg{i}=\{x^i_j : j \in
[\m] \}$, and let $X = \cup_{i\in [\g-1]_0}\Xg{i}$. For any $l \in
[\gp-1]_0$, let $\Vg{l}=\{v^l_k,k \in [|\Vg{l}|]\}$ be the vertices of
level $l$ with $|\Vg{l}|= \m \g/2^{l} +2$, and $V=\cup_{l \in
[\gp-1]_0}\Vg{l}$. Finally, we add a set $\Alg{}=\{\Alg{l} :
l\in[\gp-1]_0\}$ containing one dummy vertex for each level and
finally set $V(\Pg)= X\cup V\cup \Alg{}$. Observe that
$|V(\Pg)|=\m\g+\sum_{l \in [\gp-1]_0}(|\Vg{l}|+1)=\O(\m\g)$. Let us
now describe $\sigma(\Pg)$ and $\bA{A}(\Pg)$ recursively. Let $\Pgc{0}$ be
the tournament such that $\sigma(\Pgc{0})=(v^0_1, x^0_1, v^0_2, x^1_1,
\dots , v^0_\g, x^{\g-1}_1)$ $(v^0_{\g+1}, x^0_2, \dots, v^0_{2\g},
x^{\g-1}_2)$ $\dots$ $(v^0_{(\m-1)\g+1}$ $,x^0_\m,\dots,
v^0_{\m\g},x^{\g-1}_\m)$ $(v^0_{\m\g+1}, \Alg{1}, v^0_{\m\g+2})$ and
$\bA{A}(\Pgc{0})=Z^0_P$ where $Z^0_P=A^0_P \cup A^{'0}_P$ with
$A^0_P=\{v^0_{k+1}v^0_{k} : k \in [|\Vg{0}|-2]\}$ and $A^{'0}_P =
\{v^0_{|\Vg{0}|}v^0_{|\Vg{0}|-1},v^0_{|\Vg{0}|}v^0_1\}$.
Then, given a tournament $\Pgc{l}$ with $0 \leq l <\gp -1$, we
construct the tournament $\Pgc{l+1}$ such that the vertices of
$\Pgc{l+1}$ are those of $\Pgc{l}$ to which are added the set
$\Vg{l+1}$. For $j \in [|\Vg{l+1}|-2]$, we add the vertex $v^{l+1}_j$
of $\Vg{l+1}$ just after the vertex $v^l_{2j-1}$ in the order of
$\Pgc{l+1}$, and for $i \in \{0,1\}$ we add the vertex
$v^{l+1}_{|\Vg{l+1}|-i}$ just after $v^{l}_{|\Vg{l}|-i}$. Similarly,
we add the vertex $\Alg{l+1}$ just after the vertex $\Alg{l}$. The
backward arcs of $\Pgc{l+1}$ are defined by: $\bA{A}(\Pgc{l+1}) =
\bA{A}(\Pgc{l}) \cup Z^{l+1}_P$ where $Z^{l+1}_P=A^{l+1}_P \cup
$A^{'l+1}_P$ are called the \emph{arcs of level $l+1$}, with
$A^{l+1}_P=\{v^{l+1}_{k+1}v^{l+1}_{k} : k \in [|\Vg{l+1}|-2]\}$ and
$A^{'l+1}_P=\{v^{l+1}_{|\Vg{l+1}|}v^{l+1}_{|\Vg{l+1}|-1},v^{l+1}_{|\Vg{l+1}|}v^{l+1}_1\}$. We
can now define our gadget tournament $\Pg$ as the tournament
corresponding to $\Pgc{\gp-1}$. We refer the reader to Figure~\ref{fig:Pathgadget} where an example of the gadget is depicted, where $\m = 3$ and $\g=4$.
\begin{figure}%
\centering
\includegraphics[width=\textwidth]{Pathgadget.pdf}
\caption{An example of the instance selector, where $\m = 3$ and $\g=4$. All depicted arcs are backward arcs.}%
\label{fig:Pathgadget}%
\end{figure}
In all the following, given $i \in [\g-1]_0$, we call the last $x$ bits
(resp.\ the $x^{th}$ bit) of $i$ its $x$ rightmost bits (resp.\ its
$x^{th}$ bit, counting from the right) in the binary representation of $i$.
Let us now state the following observations.
\begin{itemize}
\item[$\triangle_1$] The vertices of $X$ have degree $(0,0)$ in $\Pg$.
\item[$\triangle_2$] For any $l \in [\gp-1]_0$, the extremities of the
arcs of level $l$ are exactly $V^l$ ($V(Z^l_P) = V^l$) and the arcs
of $Z^l_P$ induce an even circuit on $V^l$.
\item[$\triangle_3$] For any $a \in A^l_P$, the span of $a$ contains
$2^l$ consecutive vertices of $X$, more precisely $s(a) \cap X =
\{x^i_j,\dots,x^{i+2^l-1}_j\}$ for $j \in [m]$ and $i$ such that the
$l-1$ last bits of $i$ are equal to $0$.
\item[$\triangle_4$] There is a unique partition $Z^l_P = Z^{l,0}_P
\cup Z^{l,1}_P$ such that $|Z^{l,0}_P|=|Z^{l,1}_P|=\mm{l}$, the size
of a maximum matching of backward arcs in $\Pg[\Vg{l}]$, such that
each $Z^{l,x}_P$ is a matching (for any $a,a' \in Z^{l,x}_P, V(a)
\cap V(a') = \emptyset$), and such that $\cup_{a \in Z^{l,x}_P
\setminus A^{'l}_P} s(a) \cap X$ is the set of all vertices
$x^i_j$ of $X$ whose $l^{th}$ bit of $i$ is $x$.
\end{itemize}
Now let us first prove that for any $i \in [\g-1]_0$, there exists a
packing $\S$ of $\Pg$ such that $V(\Pg) \setminus V(\S) = \Xg{i}$.
Let $(x_{\gp-1} \dots x_0)$ be the binary representation of $i$. Let
us define recursively $\S=\cup_{l \in [\gp-1]_0}\S_l$ in the following
way. We maintain the invariant that for any $l$, the remaining
vertices of $X$ after defining $\cup_{z \in [l]_0}\S_z$ are all the
vertices of $X$ having their $l$ last bits equal to
$(x_{l-1},\dots,x_0)$. We define $\S_l$ as the $\mm{l}-1$ triangles
$\{(h(a),x_a,t(a)) : a \in Z^{l,1-x_l}_P \setminus A^{'l}_P \}$ such
that $x_a$ is the unique remaining vertex of $X$ in $s(a)$ (by
$\triangle_3$ and our invariant on the $\S_{\le l}$, there remains
exactly one vertex in $s(a)$, and by $\triangle_4$ these $\mm{l}-1$
triangles consume all remaining vertices of $X$ whose $l^{th}$ bit is
$1-x_l$), and a last triangle using an arc in $A^{'l}_P$, with
$t=(v^l_{|\Vg{l}|-1},\Alg{l},v^l_{|\Vg{l}|})$ if $x_l = 1$ and
$t=(v^l_{1},\Alg{l},v^l_{|\Vg{l}|})$ otherwise. Thus, by our
invariant, the remaining vertices of $X$ after defining $\S$ are
exactly $\Xg{i}$. As $\S$ also consumes $\alpha$ and $V$ we have
$V(\Pg) \setminus V(\S) = \Xg{i}$. Notice that this definition of $\S$
shows that $|V(\Pg)|-\m = |V(\S)| = 3\sum_{l \in [\gp-1]_0}\mm{l}$.
Let us now prove that for any packing $\S$ of $\Pg$ with
$|V(\S)|=|V(\Pg)|-\m=3\sum_{l \in [\gp-1]_0}\mm{l}$, there exists $i \in
[\g-1]_0$ such that $V(\Pg) \setminus V(\S) \subseteq \Xg{i}$. Let $t_1,
\dots, t_\mm{}$ be the triangles of $\S$. For any $t_k$ of $\S$, we
associate one backward arc $a_k$ of $t_k$ (if there are two backward
arcs, we pick one arbitrarily). Let $Z=\{a_k : k \in [|\S|]\}$ and
for every $l \in [\gp-1]_0$ let $Z^l=\{a_k\in Z : V(a_k) \subset
\Vg{l}\}$ be the set of the backward arcs which are between two
vertices of level $l$. Notice that the $Z^l$'s form a partition of
$Z$. For any $l \in [\gp-1]_0$, we have $|Z^l| \leq \mm{l}$ as two
arcs of $Z^l$ correspond to two different triangles of $\S$,
implying that $Z^l$ is a matching. Furthermore, as
$|\S|=|Z|=\sum_{l \in [\gp-1]_0}|Z^l|=\mm{} =
\sum_{l\in[\gp-1]_0}{\mm{l}}$, we get the equality $|Z^l| = \mm{l}$ for
any $l \in [\gp-1]_0$. This implies that for each $Z^l$ there
exists $x$ such that $Z^l=Z^{l,x}_P$, implying also that
$V(Z^l)=\Vg{l}$, and $V(Z)=\Vg{}$.
Let $A^l = Z^l \setminus A^{'l}_P$, $\S^{l} = \{t_k \in \S : a_k \in
A^l\}$.
We can now prove by induction that all the remaining vertices $R_l=X
\setminus V(\cup_{x \in [l]_0} \S^{x})$ have the same $l$ last bits.
Notice that since all vertices of $\Vg{}$ are already used, and as
triangles of $\S^l$ cannot use a dummy vertex in $\alpha$, all
triangles of $\S^l$ must be of the form $(h(a_k),x,t(a_k))$ with $x
\in X$. As $A^l= Z^{l,x}_P \setminus A^{'l}_P$, by $\triangle_4$ we
know that $\cup_{a \in A^l} s(a) \cap X$ contains all the remaining
vertices of $X$, and thus of $R_{l-1}$, whose $l^{th}$ bit is
$x$. Moreover, by $\triangle_3$ we know that for any $a \in A^l$ we
have $|R_{l-1} \cap s(a)| \le 1$: as $a \in A^l_P$ we know
exactly the structure of $s(a) \cap X$, and if there remained two
vertices in $s(a) \cap X$ then their $l-1$ last bits would be
different. Thus, as triangles of $\S^l$ remove one vertex in the span
of each $a \in A^l$, they remove all vertices of $R_{l-1}$ whose
$l^{th}$ bit is $x$, implying the desired result.
\end{proof}
\paragraph*{Definition of the reduction}
Suppose we are given a family of $t$ instances $F=\{{\cal I}_l, l \in [t]\}$ of
{\sc $C_3$-Perfect-Packing-T}~restricted to instances of Theorem~\ref{thm:nphperfectv2}
where ${\cal I}_l$ asks if there is a perfect packing in ${\cal T}_l=L_lK_l$. We
choose our equivalence relation in Definition~\ref{def:orcompo} such
that there exist $n$ and $m$ such that for any $l \in [t]$ we have
$|V(L_l)|=n$ and $|V(K_l)|=m$. We may also duplicate some of the $t$
instances so that $t$ is a square number and $g = \sqrt{t}$ is a
power of two. We reorganize our instances into $F=\{ {\cal I}_{(p,q)} : 1
\leq p,q \leq g \}$ where ${\cal I}_{(p,q)}$ asks if there is a perfect
packing in ${\cal T}_{(p,q)}=L_pK_q$. Remember that according to
Theorem~\ref{thm:nphperfectv2}, all the $L_p$ are equal, and all the
$K_q$ are equal. We point out that the idea of using a problem on
``bipartite'' instances to allow encoding $t$ instances on a ``meta''
bipartite graph $G=(A,B)$ (with $A=\{A_i, i \in [\sqrt{t}]\}$, $B=\{B_i,
i \in [\sqrt{t}]\}$) such that each instance $(p,q)$ is encoded in the
graph induced by $G[A_p \cup B_q]$ comes from~\cite{dell2014kernelization}.
We refer the reader to Figure~\ref{fig:Reduc2} which represents the
different parts of the tournament. We define a tournament
$G=L M_G \tilde{L} \tilde{M}_G P_{(n,g)}$, where $L = L_1 \dots L_g$,
$\tilde{M}_G$ is a set of $n$ vertices of degree $(0,0)$, $M_G$ is a set
of $(g-1)n$ vertices of degree $(0,0)$, $\tilde{L} = \tilde{L}_1 \dots \tilde{L}_g$
where each $\tilde{L}_p$ is a set of $n$ vertices, and $P_{(n,g)}$ is a copy
of the instance selector of Lemma~\ref{lem:path}. Then, for every $p
\in [g]$ we add to $G$ all the possible $n^2$ backward arcs going from
$\tilde{L}_p$ to $L_p$. Finally, for every distinguished set $X^p$ of
$P_{(n,g)}$ (see in Lemma~\ref{lem:path}), we add all the possible
$n^2$ backward arcs from $X^p$ to $\tilde{L}_p$.
Now, in a symmetric way we define a tournament $D=K M_D \tilde{K} \tilde{M}_D
P'_{(m,g)}$, where $K = K_1 \dots K_g$, $\tilde{M}_D$ is a set of $m$
vertices of degree $(0,0)$, $M_D$ is a set of $(g-1)m$ vertices of
degree $(0,0)$, $\tilde{K} = \tilde{K}_1 \dots \tilde{K}_g$ where each $\tilde{K}_q$ is a set
of $m$ vertices, and $P'_{(m,g)}$ is a copy of the instance selector
of Lemma~\ref{lem:path}. Then, for every $q \in [g]$ we add to $D$
all the $m^2$ possible backward arcs going from $\tilde{K}_q$ to
$K_q$. Finally, for every distinguished set $X^{'q}$ of $P'_{(m,g)}$
we add all the possible $m^2$ backward arcs from $X^{'q}$ to $\tilde{K}_q$.
Finally, we define ${\cal T} = GD$. Let us add some backward arcs from $D$
to $G$. For any $p$ and $q$ with $1\leq p, q \leq g$, we add backward
arcs from $K_q$ to $L_p$ such that ${\cal T}[K_qL_p]$ corresponds to
${\cal T}_{(p,q)}$. Notice that this is possible as for any fixed $p$, all
the ${\cal T}_{(p,q)}, q \in [g]$ have the same left part $L_p$, and the
same goes for any fixed right part.
\begin{figure}%
\centering
\includegraphics[width=\textwidth]{def_compo.pdf}
\caption{An example of the weak composition. All depicted arcs are backward arcs. Bold arcs represent the $n^2$ (or $m^2$) possible arcs between the two groups.}%
\label{fig:Reduc2}%
\end{figure}
\paragraph*{Restructuring lemmas}
Given a set of triangles $\S$ we define $\S_{\subseteq P'}=\{t \in \S
| V(t) \subseteq P'_{(m,g)}\}$, $\S_{\subseteq P}=\{t \in \S : V(t)
\subseteq P_{(n,g)}\}$, $\S_{\tilde{M}_D}= \{t \in \S : V(t) \mbox{
intersects $\tilde{K}$, $\tilde{M}_D$ and $P'_{m,g}$}\}$, $\S_{M_D}= \{t \in
\S : V(t) \mbox{ intersects $K$, $M_D$ and $\tilde{K}$}\}$, $\S_{\tilde{M}_G}=
\{t \in \S : V(t) \mbox{ intersects $\tilde{L}$, $\tilde{M}_G$ and $P_{n,g}$}\}$,
$\S_{M_G}= \{t \in \S : V(t) \mbox{ intersects $L$, $M_G$ and
$\tilde{L}$}\}$, $\S_D = \{t \in \S : V(t) \subseteq V(D)\}$, $\S_G = \{t
\in \S : V(t) \subseteq V(G)\}$, and $\S_{GD} = \{t \in \S : V(t)
\mbox{ intersects $V(G)$ and $V(D)$}\}$. Notice that $\S_G$, $\S_{GD}$
and $\S_D$ form a partition of $\S$.
\begin{claim}
\label{cl:D2goodtriangles}
If there exists a perfect packing $\S$ of ${\cal T}$, then $|\S_{\tilde{M}_D}| =
m$ and $|\S_{M_D}| = (g-1)m$. This implies that $V(\S_{\tilde{M}_D} \cup
\S_{M_D}) \cap V(\tilde{K}) = V(\tilde{K})$, meaning that the vertices of $\tilde{K}$
are entirely used by $\S_{\tilde{M}_D} \cup \S_{M_D}$.
\end{claim}
\begin{proof}
We have $|\S_{\tilde{M}_D}| \leq m$ since $|\tilde{M}_D|= m$. We obtain the
equality since the vertices of $\tilde{M}_D$ only lie in the span of
backward arcs from $P'_{m,g}$ to $\tilde{K}$, and they are not the head or
the tail of a backward arc in ${\cal T}$. Thus, the only way to use vertices
of $\tilde{M}_D$ is to create triangles in $\S_{\tilde{M}_D}$, implying
$|\S_{\tilde{M}_D}| \ge m$. Using the same kind of arguments we also get
$|\S_{M_D}| = (g-1)m$. As $|V(\tilde{K})|=gm$ we get the last part of the
claim.
\end{proof}
\begin{claim}
\label{cl:tildeKgoodtriangle}
If there exists a perfect packing $\S$ of ${\cal T}$, then there exists $q_0
\in [g]$ such that $\tilde{K}_\S=\tilde{K}_{q_0}$, where $\tilde{K}_\S=\tilde{K} \cap
V(\S_{\tilde{M}_D})$.
\end{claim}
\begin{proof}
Let $\S_{P'}$ be the triangles of $\S$ with at least one vertex in
$P'_{m,g}$. As according to Claim~\ref{cl:D2goodtriangles} vertices
of $\tilde{K}$ are entirely used by $\S_{\tilde{M}_D} \cup \S_{M_D}$, the only
way to consume vertices of $P'_{m,g}$ is by creating local triangles
in $P'_{m,g}$ or triangles in $\S_{\tilde{M}_D}$. In particular, we cannot
have a triangle $(u,v,w)$ with $\{u,v\} \subseteq \tilde{K}$ and $w \in
P'_{m,g}$, or with $u \in \tilde{K}$ and $\{v,w\} \subseteq P'_{m,g}$. More
formally, we get the partition $\S_{P'}=\S_{\subseteq P'} \cup
\S_{\tilde{M}_D}$. As $\S$ is a perfect packing and uses in particular all
vertices of $P'_{m,g}$ we get $|V(\S_{P'})|=|V(P'_{m,g})|$, implying
$|V(\S_{\subseteq P'})|=|V(P'_{m,g})|-m$ by
Claim~\ref{cl:D2goodtriangles}. By Lemma~\ref{lem:path}, this implies
that there exists $q_0 \in [g]$ such that $X' \subseteq X^{'q_0}$
where $X'=V(P'_{m,g}) \setminus V(\S_{\subseteq P'})$. As $X'$ are
the only remaining vertices that can be used by triangles of
$\S_{\tilde{M}_D}$, we get that the $m$ triangles of $\S_{\tilde{M}_D}$ are of the
form $(u,v,w)$ with $u \in \tilde{K}_{q_0}$, $v \in \tilde{M}_D$, and $w \in X'$.
\end{proof}
\begin{claim}
\label{cl:Dgoodremainingtriangles}
If there exists a perfect packing $\S$ of ${\cal T}$, then there exists $q_0
\in [g]$ such that $V(\S_{P'} \cup \S_{\tilde{M}_D} \cup \S_{M_D}) = V(D)
\setminus K_{q_0}$.
\end{claim}
\begin{proof}
By Claim~\ref{cl:D2goodtriangles} we know that $|\S_{M_D}| =
(g-1)m$. As by Claim~\ref{cl:tildeKgoodtriangle} there exists $q_0 \in
[g]$ such that $\tilde{K}_\S=\tilde{K}_{q_0}$, we get that the $(g-1)m$ triangles
of $\S_{M_D}$ are of the form $(u,v,w)$ with $u \in K \setminus
K_{q_0}$, $v \in M_D$, and $w \in \tilde{K} \setminus \tilde{K}_{q_0}$.
\end{proof}
\begin{lemma}
\label{cl:gd}
If there exists a perfect packing $\S$ of ${\cal T}$, then $V(\S_{GD}) \cap
V(G) \subseteq V(L)$. Informally, triangles of $\S_{GD}$ do not use
any vertex of $M_G, \tilde{L}, \tilde{M}_G$ and $P_{n,g}$.
\end{lemma}
\begin{proof}
By Claim~\ref{cl:Dgoodremainingtriangles}, there exists $q_0 \in [g]$
such that $V(\S_{P'} \cup \S_{\tilde{M}_D} \cup \S_{M_D}) = V(D) \setminus
K_{q_0}$. By Theorem~\ref{thm:nphperfectv2} we know that $K_{q_0}
= K_{(q_0,1)}\dots K_{(q_0,m')}$ for some $m'$ (we even have $m' =
\frac{m}{2}$), where for each $j \in [m']$ we have $V(K_{(q_0,j)}) =
(\theta_j,c_j)$. Moreover, for any $p \in [g]$, the last property of
Theorem~\ref{thm:nphperfectv2} ensures that for any $a \in
\bA{A}({\cal T}_{(p,q_0)})$, $V(a) \cap V(K_{q_0}) \neq \emptyset$ implies
$a=vc_j$ for $v \in L_p$. So no arc of $\bA{A}({\cal T}_{(p,q_0)})$, and
thus no arc of ${\cal T}$ is entirely included in $K_{q_0}$. This implies
that $\S$ cannot cover the vertices of $K_{q_0}$ using triangles $t$
with $V(t) \subseteq V(K_{q_0})$, and thus that all these vertices
must be used by triangles of $\S_{GD}$, implying that $V(\S_{GD}) \cap
V(D) = K_{q_0}$. The last property of Theorem~\ref{thm:nphperfectv2}
also implies that all the $\theta_j$ have a left degree equal to $0$
in ${\cal T}$, or equivalently that there is no arc $a$ of ${\cal T}$ such that
$t(a)=\theta_j$ and $h(a) < \theta_j$. Thus, by induction for any $j$
from $m'$ to $1$, we can prove that the only way for triangles of
$\S_{GD}$ to use $\theta_j$ is to create a triangle
$t_j=(v,\theta_j,c_j)$ with necessarily $v \in V(L)$.
\end{proof}
Lemma~\ref{cl:gd} will allow us to prove
Claims~\ref{cl:G2goodtriangles}, ~\ref{cl:tildeLgoodtriangle}
and~\ref{cl:Ggoodremainingtriangles} using the same arguments as in
the right part ($D$) of the tournament, as all vertices of $M_G, \tilde{L},
\tilde{M}_G$ and $P_{n,g}$ must be used by triangles in $\S_G$.
\begin{claim}
\label{cl:G2goodtriangles}
If there exists a perfect packing $\S$ of ${\cal T}$, then $|\S_{\tilde{M}_G}| =
n$ and $|\S_{M_G}| = (g-1)n$. This implies that $V(\S_{\tilde{M}_G} \cup
\S_{M_G}) \cap V(\tilde{L}) = V(\tilde{L})$, meaning that vertices of $\tilde{L}$ are
entirely used by $\S_{\tilde{M}_G} \cup \S_{M_G}$.
\end{claim}
\begin{proof}
We have $|\S_{\tilde{M}_G}| \leq n$ since $|\tilde{M}_G|= n$. Lemma~\ref{cl:gd} implies that all vertices of $\tilde{M}_G$ must be used by triangles of $\S_G$, and thus using arcs whose both endpoints lie in $V(G)$.
As vertices of $\tilde{M}_G$ are not the head or the tail of a backward arc in ${\cal T}$, we get that the only way for $\S_G$ to use vertices of $\tilde{M}_G$ is to create triangles in $\S_{\tilde{M}_G}$, implying $|\S_{\tilde{M}_G}| \ge n$.
Using the same kind of arguments (and as all vertices of $M_G$ must also be used by triangles of $\S_G$) we also get $|\S_{M_G}| = (g-1)n$.
As $|V(\tilde{L})|=gn$ we get the last part of the claim.
\end{proof}
\begin{claim}
\label{cl:tildeLgoodtriangle}
If there exists a perfect packing $\S$ of ${\cal T}$, then there exists $p_0
\in [g]$ such that $\tilde{L}_\S=\tilde{L}_{p_0}$, where $\tilde{L}_\S=\tilde{L} \cap
V(\S_{\tilde{M}_G})$.
\end{claim}
\begin{proof}
Lemma~\ref{cl:gd} implies that all vertices of $\tilde{M}_G$ and $P_{(n,g)}$ must be used by triangles in $\S_G$.
Let $\S_{P}$ be the triangles of $\S_G$ with at least one vertex in $P_{n,g}$.
As according to Claim~\ref{cl:G2goodtriangles} vertices of $\tilde{L}$ are entirely used by $\S_{\tilde{M}_G} \cup \S_{M_G}$, the only way for $\S_G$ to consume vertices of $P_{n,g}$
is by creating local triangles in $P_{n,g}$ or triangles in $\S_{\tilde{M}_G}$. In particular, we cannot have a triangle $(u,v,w)$ with $\{u,v\} \subseteq \tilde{L}$ and $w \in P_{n,g}$, or with $u \in \tilde{L}$ and $\{v,w\} \subseteq P_{n,g}$. More formally, we get the partition $\S_{P}=\S_{\subseteq P} \cup \S_{\tilde{M}_G}$.
As $\S_G$ uses in particular all vertices of $P_{n,g}$ we get $|V(\S_{P})|=|V(P_{n,g})|$, implying $|V(\S_{\subseteq P})|=|V(P_{n,g})|-n$ by Claim~\ref{cl:G2goodtriangles}.
By Lemma~\ref{lem:path}, this implies that there exists $p_0 \in [g]$ such that $X \subseteq X^{p_0}$ where $X=V(P_{n,g}) \setminus V(\S_{\subseteq P})$.
As $X$ are the only remaining vertices that can be used by triangles of $\S_{\tilde{M}_G}$, we get that the $n$ triangles of $\S_{\tilde{M}_G}$ are of the form $(u,v,w)$ with $u \in \tilde{L}_{p_0}$, $v \in \tilde{M}_G$, and $w \in X$.
\end{proof}
\begin{claim}
\label{cl:Ggoodremainingtriangles}
If there exists a perfect packing $\S$ of ${\cal T}$, then there exists $p_0
\in [g]$ such that $V(\S_{P} \cup \S_{\tilde{M}_G} \cup \S_{M_G}) = V(G)
\setminus L_{p_0}$.
\end{claim}
\begin{proof}
By Claim~\ref{cl:G2goodtriangles} we know that $|\S_{M_G}| = (g-1)n$. As by Claim~\ref{cl:tildeLgoodtriangle} there exists $p_0 \in [g]$ such that $\tilde{L}_\S=\tilde{L}_{p_0}$,
we get that the $(g-1)n$ triangles of $\S_{M_G}$ are of the form $(u,v,w)$ with $u \in L \setminus L_{p_0}$, $v \in M_G$, and $w \in \tilde{L} \setminus \tilde{L}_{p_0}$.
\end{proof}
We are now ready to state our final claim, whose proof is now
straightforward: according to Claims~\ref{cl:Dgoodremainingtriangles}
and~\ref{cl:Ggoodremainingtriangles} we can define $\S_{(p_0,q_0)}=\S
\setminus ((\S_{P'} \cup \S_{\tilde{M}_D} \cup \S_{M_D}) \cup (\S_{P} \cup
\S_{\tilde{M}_G} \cup \S_{M_G}))$.
\begin{claim}
\label{cl:mainclaim}
If there exists a perfect packing $\S$ of ${\cal T}$, there exists $p_0, q_0
\in [g]$ and $\S_{(p_0,q_0)} \subseteq \S$ such that
$V(\S_{(p_0,q_0)}) = V({\cal T}_{(p_0,q_0)})$ (or equivalently such that
$\S_{(p_0,q_0)}$ is a perfect packing of ${\cal T}_{(p_0,q_0)}$).
\end{claim}
\paragraph*{Proof of the weak composition}
\begin{theorem}
For any $\epsilon>0$, {\sc $C_3$-Perfect-Packing-T}~(parameterized by the total number of
vertices $N$) does not admit a polynomial (generalized) kernelization
with size bound $\O(N^{2-\epsilon})$ unless ${\sf NP} \subseteq {\sf coNP / Poly}$.
\end{theorem}
\begin{proof}
Given $t$ instances $\{{\cal I}_l\}$ of {\sc $C_3$-Perfect-Packing-T}~restricted to instances
of Theorem~\ref{thm:nphperfectv2}, we define an instance ${\cal T}$ of
{\sc $C_3$-Perfect-Packing-T}~ as defined in Section~\ref{sec:kernel}. We recall that
$g = \sqrt{t}$, and that for any $l \in [t]$, $|V(L_l)|=n$ and
$|V(K_l)|=m$. Let $N = |V({\cal T})|$. As
$N=|V(P'_{(m,g)})|+m+(g-1)m+2mg+|V(P_{(n,g)})|+n+(g-1)n+2ng$ and
$|V(P_{(\m,\g)})| = \O(\m\g)$ by Lemma~\ref{lem:path}, we get $N =
\O(g(n+m))=\O(t^{\frac{1}{2}} \max_{l \in [t]}(|{\cal I}_l|))$. Let us now verify
that there exists $l \in [t]$ such that ${\cal I}_l$ admits a perfect
packing iff ${\cal T}$ admits a perfect packing.
First assume that there exist $p_0,q_0 \in [g]$ such that ${\cal I}_{(p_0,q_0)}$
admits a perfect packing. By Lemma~\ref{lem:path}, there is a
packing $\S_{P'}$ of $P'_{(m,g)}$ such that $V(\S_{P'})= V(P'_{(m,g)})
\setminus X^{'q_0}$. We define a set $\S_{\tilde{M}_D}$ of $m$ vertex-disjoint
triangles of the form $(u,v,w)$ with $u \in \tilde{K}_{q_0}, v \in
\tilde{M}_D, w \in X^{'q_0}$. Then, we define a set $\S_{M_D}$ of
$(g-1)m$ vertex-disjoint triangles of the form $(u,v,w)$ with $u \in
K \setminus K_{q_0}, v \in M_D, w \in \tilde{K} \setminus \tilde{K}_{q_0}$.
In the same way we define $\S_{P}$, $\S_{\tilde{M}_G}$ and
$\S_{M_G}$. Observe that $V({\cal T}) \setminus V((\S_{P'} \cup \S_{\tilde{M}_D}
\cup \S_{M_D}) \cup (\S_{P} \cup \S_{\tilde{M}_G} \cup
\S_{M_G}))=K_{q_0} \cup L_{p_0}$, and thus we can complete our
packing into a perfect packing of ${\cal T}$, as ${\cal I}_{(p_0,q_0)}$ admits a
perfect packing.
Conversely if there exists a perfect packing $\S$ of ${\cal T}$, then by
Claim~\ref{cl:mainclaim} there exists $p_0, q_0 \in [g]$ and
$\S_{(p_0,q_0)} \subseteq \S$ such that $V(\S_{(p_0,q_0)}) =
V({\cal T}_{(p_0,q_0)})$, implying that ${\cal I}_{(p_0,q_0)}$ admits a perfect
packing.
\end{proof}
\begin{corollary}
\label{theo:noKernel}
For any $\epsilon>0$, {\sc $C_3$-Packing-T}~(parameterized by the size $k$ of the solution)
does not admit a polynomial kernel with size
$\O(k^{2-\epsilon})$ unless ${\sf NP} \subseteq {\sf coNP / Poly}$.
\end{corollary}
\section{Conclusion and open questions}
Concerning approximation algorithms for {\sc $C_3$-Packing-T}~restricted to sparse
instances, we have provided a $(1+\frac{6}{c+5})$-approximation
algorithm, where $c$ is a lower bound on the ${\mathtt minspan}$ of the
instance. On the other hand, it is not hard to solve
{\sc $C_3$-Packing-T}~by dynamic programming for instances where ${\mathtt maxspan}$ is
bounded from above. Using these two opposite approaches, it could be
interesting to derive an approximation algorithm for {\sc $C_3$-Packing-T}~with a
factor better than $4/3$, even for sparse tournaments.
Concerning {\sf FPT} algorithms, the approach we used for sparse
tournaments (reducing to the case where $m=\O(k)$ and applying the
kernel with $\O(m)$ vertices) cannot work in the general case. Indeed,
if we were able to sparsify the initial input such that
$m'=\O(k^{2-\epsilon})$, applying the kernel with $\O(m')$ vertices
would lead to a tournament of total bit size (by encoding the two
endpoints of each arc) $\O(m'\log(m'))=\O(k^{2-\epsilon'})$ for some
$\epsilon' > 0$, contradicting Corollary~\ref{theo:noKernel}. Thus the
situation for {\sc $C_3$-Packing-T}~could be as for {\sc Vertex Cover}, where there
exists a kernel with $\O(k)$ vertices, derived from~\cite{linearVc74},
but the resulting instance cannot have $\O(k^{2-\epsilon})$
edges~\cite{dell2014kernelization}. So it is a challenging question to
provide a kernel with $\O(k)$ vertices for the general
{\sc $C_3$-Packing-T}~problem.
\section{Introduction}
The Michaelis-Menten mechanism \cite{michaelismenten} is probably the best known model for an enzyme-catalyzed reaction. In this reaction network, a substrate $S$ and an enzyme $E$ combine to form a complex $C$, which degrades back to substrate and enzyme, or to product $P$ and enzyme. In the reversible setting there is also a back reaction combining $E$ and $P$ to complex. The reaction scheme thus reads
\[
E+S \xrightleftharpoons[k_{-1}]{k_{1}} C \xrightleftharpoons[k_{-2}]{k_{2}} E+P.
\]
In the irreversible case one assumes that product and enzyme cannot combine to form complex, i.e.\ one has $C\xrightarrow{k_{2}}E+P$. Typically, no complex or product is assumed to be present initially.
Assuming mass action kinetics and spatially homogeneous concentrations, the evolution of the concentrations $(s,e,c,p)$ of $S,E,C,P$ can be described by a system of four ordinary differential equations, from which by stoichiometry one obtains a two-dimensional system (first discussed from a mathematical perspective by Briggs and Haldane \cite{briggshaldane}). Employing the familiar quasi-steady state (QSS) assumption for complex, based on small initial concentration of enzyme, further reduces the system to dimension one.\\
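For reference, the mass action equations for the reversible scheme read as follows (a standard sketch consistent with the notation above; the irreversible case corresponds to $k_{-2}=0$):
\begin{align*}
\dot s &= -k_1 e s + k_{-1} c, &
\dot e &= -k_1 e s + (k_{-1}+k_2)\, c - k_{-2}\, e p,\\
\dot c &= k_1 e s - (k_{-1}+k_2)\, c + k_{-2}\, e p, &
\dot p &= k_2 c - k_{-2}\, e p,
\end{align*}
with the stoichiometric first integrals $e+c=e_0$ and $s+c+p=s_0$; substituting $e=e_0-c$ and $p=s_0-s-c$ yields the two-dimensional system in $(s,c)$ mentioned above.\\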
For reaction systems, quasi-steady state (QSS) assumptions frequently lead to singular perturbation problems for which the classical theories of Tikhonov \cite{tikh} and Fenichel \cite{fenichel} are applicable. (Moreover, one should note results by Hoppensteadt \cite{Hoppensteadt} on unbounded time intervals; see also \cite{lws}.) \\
For spatially inhomogeneous concentrations in a reaction vessel, thus for reaction-diffusion systems, Tikhonov's and Fenichel's theory is not applicable since their fundamental results are limited to finite dimensional systems. Therefore, reaction-diffusion systems are much more difficult to analyze, and only partial results are known. As for the Michaelis-Menten reaction with diffusion and small initial enzyme concentration, Britton \cite{britton} and Yannacopoulos et al. \cite{Yannacopoulos} derived QSS reductions with the additional assumptions of immobile complex and enzyme. Kalachev et al. \cite{kkkpz} used asymptotic expansions with respect to a small parameter to obtain results about the behavior of the solutions under different time scales for diffusion, with the diffusion time scale different from the time scale for the slow reaction part. (As \cite{kkkpz} indicates, even finding candidates for reduced reaction-diffusion systems may be a nontrivial task.) Starting from different assumptions about the reaction mechanism (viz., smallness of certain rate constants), Bothe and Pierre \cite{BothePierre1} as well as Bisi et al. \cite{bisi} discussed reductions for a related system, including convergence proofs. \\
In the present paper we will discuss QSS reductions for both the irreversible and the reversible Michaelis-Menten reaction with diffusion, under the conditions of small initial enzyme concentration and slow diffusion. Our work is based on a heuristic method described in \cite{laxgoeke}, which utilizes a spatial discretization to obtain an ordinary differential equation which admits reduction by Tikhonov-Fenichel theory. In many relevant cases, the reduced ODE system can, in turn, be identified as the spatial discretization of another partial differential equation system. This resulting PDE is a candidate for the reduced system and, as pointed out in \cite{laxgoeke}, it is the only possible candidate. In the present paper we will not discuss convergence issues, which seem to be quite technically involved, but we provide numerical simulations that support the accuracy of the reduction. \\
The plan of the paper is as follows. In Section 2 we will briefly recall the most important aspects of the spatially homogeneous system and moreover note some general features of the inhomogeneous case. \\
In Section 3 the ``classical'' QSS assumption is discussed, i.e. we assume small initial enzyme concentration and slow diffusion. We first review some relevant results from the literature, and give an informal description of the reduction procedure from \cite{laxgoeke}. Following a (degenerate) scaling similar to the one in Heineken, Tsuchiya and Aris \cite{hta} we derive a reduction via the approach in \cite{laxgoeke}; to the authors' knowledge, the form of the reduced PDE system has not been known in the literature to date. \\ The reduction is consistent with the spatially homogeneous case, thus setting the diffusion constants equal to zero yields the usual Michaelis-Menten equation. The degenerate scaling seems unavoidable in the PDE case (while one can circumvent it for the ODE), thus we need to go beyond the classical singular perturbation reduction due to Tikhonov and Fenichel. The scaling requires a consistency condition which is intuitively likely to hold in general; we can justify it mathematically in the case when enzyme and complex diffuse at the same rate. In Section 4 we present numerical simulations which exhibit very good agreement with the reduced system. \\
In the Appendix, employing the heuristic method from \cite{laxgoeke}, we carry out the necessary computations for the reductions and also determine suitable initial values for the reduced system.
\section{Preliminaries}
\subsection{The spatially homogeneous setting}
We recall some facts about the Michaelis-Menten reaction with homogeneously distributed concentrations. The evolution of the concentrations $(s,e,c,p)$ of $S,E,C,P$ is governed by the four-dimensional ordinary differential equation
\begin{align*}
&\dot s=-k_1es+k_{-1}c\\
&\dot e=-k_1es+(k_{-1}+k_2)c-k_{-2}ep\\
&\dot c=k_1es-(k_{-1}+k_2)c+k_{-2}ep\\
&\dot p=k_2c-k_{-2}ep.
\end{align*}
This system admits the (stoichiometric) first integrals $\Psi_1(s,e,c,p)=e+c$ and $\Psi_2(s,e,c,p)=s+c+p$. Therefore a two-dimensional system remains:
\begin{align}
&\dot s=-k_1e_0s+(k_1s+k_{-1})c\label{mm1}\\
&\dot c=k_1e_0s-(k_1s+k_{-1}+k_2)c+k_{-2}(e_0-c)(s_0-s-c)\label{mm2},
\end{align}
where $s_0$ and $e_0$ are the initial concentrations of $S$ and $E$, and initially no product $P$ or complex $C$ is present. The system is called irreversible whenever $k_{-2}=0$, and reversible otherwise. The most common quasi-steady state assumption is that the initial enzyme concentration is small; one considers $e_0=\varepsilon e_0^*$ in the asymptotic limit $\varepsilon\to 0$, for the irreversible system.\\
Heineken, Tsuchiya and Aris \cite{hta} were the first to discuss the Michaelis-Menten system from the perspective of singular perturbations, and Segel and Slemrod \cite{ss} were the first to directly prove a rigorous convergence result for the unbounded time interval: Writing \eqref{mm1}--\eqref{mm2} in the slow time scale $\tau=\varepsilon t$
\begin{align}
&s'=-k_1se_0^*+\varepsilon^{-1}(k_1s+k_{-1})c\label{mm1slow}\\
&c'=k_1se_0^*-\varepsilon^{-1}(k_1s+k_{-1}+k_2)c\label{mm2slow},
\end{align}
the solutions of \eqref{mm1slow}--\eqref{mm2slow} converge for all $t_0>0$ uniformly on $[t_0,\infty)$ to the solutions of
\begin{equation}\label{red}
s'= - \frac{k_1k_2se_0^*}{k_1s+k_{-1}+k_2}
\end{equation}
on the asymptotic slow manifold $\mathcal V=\{(s,0),\ s\geq0\}$ as $\varepsilon\to0$. (Below we will sometimes change between time scales without mentioning this explicitly.) \\
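This convergence statement can be checked numerically. The following Python sketch (all rate constants and $e_0^*$ set to $1$ purely for illustration; these values are our choice, not taken from the works cited) integrates the slow-time system \eqref{mm1slow}--\eqref{mm2slow} with a stiff solver and compares the substrate concentration with the solution of the reduced equation \eqref{red}.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative parameter values (not from the paper): all rate constants
# and e_0^* set to 1; eps is the small parameter with e_0 = eps * e_0^*.
k1, km1, k2, e0s = 1.0, 1.0, 1.0, 1.0
eps = 1e-3
s0, T = 1.0, 2.0            # initial substrate, final slow time

def full(tau, u):
    # slow-time system (mm1slow)-(mm2slow), irreversible case
    s, c = u
    return [-k1 * s * e0s + (k1 * s + km1) * c / eps,
            k1 * s * e0s - (k1 * s + km1 + k2) * c / eps]

def reduced(tau, u):
    # reduced equation (red) on the slow manifold c = 0
    s = u[0]
    return [-k1 * k2 * s * e0s / (k1 * s + km1 + k2)]

sol_full = solve_ivp(full, (0.0, T), [s0, 0.0], method="Radau",
                     rtol=1e-10, atol=1e-12)
sol_red = solve_ivp(reduced, (0.0, T), [s0], method="Radau",
                    rtol=1e-10, atol=1e-12)
s_full_T = sol_full.y[0, -1]
s_red_T = sol_red.y[0, -1]
```

For $\varepsilon=10^{-3}$ the two substrate values agree to roughly the order of $\varepsilon$ after the initial layer, in line with the convergence statement.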
Both the approach by Heineken et al. \cite{hta} and the proof by Segel and Slemrod \cite{ss} use appropriate scalings of the variables, in particular they introduce $z:=c/e_0$. It is possible to avoid such a scaling, which becomes degenerate as $e_0\to 0$, in the spatially homogeneous case (see e.g. \cite{gw}) but as it turns out we will need to utilize such a degenerate scaling to obtain a reduction when concentrations are not homogeneously distributed in the reaction vessel.\\
We will also discuss the reversible Michaelis-Menten system, which appears less frequently in the literature; in part this may be due to the unwieldy expression for the QSS reduction; see Miller and Alberty \cite{MiAl}. The singular perturbation reduction (see \cite{nw11} and \cite{gwz2}) of \eqref{mm1slow}--\eqref{mm2slow} for $k_{-2}>0$ and $e_0=\varepsilon e_0^*$ leads to
\begin{equation}\label{redrev}
s'= - \frac{(k_1k_2s+k_{-1}k_{-2}(s-s_0))e_0^*}{k_1s+k_{-2}(s_0-s)+k_{-1}+k_2}
\end{equation}
on the asymptotic slow manifold $\mathcal V=\{(s,0),\ s\geq0\}$ as $\varepsilon\to0$. (Here, uniform convergence again holds on $[t_0,\infty)$; see \cite{lws}). Both the QSS and the singular perturbation reductions agree up to first order in the small parameter; see \cite{gwz2}.
\subsection{The spatially inhomogeneous setting}
When the concentrations are inhomogeneously distributed and diffusion is present then the system is described by a reaction-diffusion equation. Thus, let $\Omega$ be a bounded region with a smooth boundary and let $\delta_s,\delta_e,\delta_c,\delta_p\geq0$ denote the diffusion constants. The governing equations are
\begin{align}
\partial_{t} s&= \delta_s \Delta s-k_1se+k_{-1} c, &\text{in } (0,\infty)\times \Omega \label{mm1diff}\\
\partial_{t} e&= \delta_e\Delta e -k_1se+ (k_{-1}+k_2) c-k_{-2}ep, &\text{in } (0,\infty)\times \Omega \label{mm2diff} \\
\partial_{t} c&= \delta_c\Delta c +k_1se-(k_{-1}+k_2) c+k_{-2}ep, &\text{in } (0,\infty)\times \Omega \label{mm3diff} \\
\partial_{t} p&= \delta_p\Delta p+k_2 c -k_{-2}ep, &\text{in } (0,\infty)\times \Omega \label{mm4diff}
\end{align}
with continuous initial values
\[
s(0,x)=s_0(x),\quad e(0,x)=e_0(x),\quad c(0,x)=c_0(x),\quad p(0,x)=p_0(x),\quad \text{in } \Omega
\]
and one has Neumann boundary conditions
\[
\frac{\partial s}{\partial \nu}=\frac{\partial e}{\partial \nu}=\frac{\partial c}{\partial \nu}=\frac{\partial p}{\partial \nu}=0,\quad \text{in } (0,\infty)\times \partial\Omega
\]
with $\frac{\partial }{\partial \nu}$ denoting the outer normal derivative.
We collect a few general properties.
\begin{remark}\label{remark1}
\begin{itemize}
\item From Smith \cite{Smith}, Ch.~7, Thm.~3.1 and Cor.~3.2--3.3 one sees that all the solution entries remain nonnegative for all $t>0$ whenever they are nonnegative at $t=0$. Moreover, Bothe and Rolland \cite{BotheRolland1} (see in particular Remark 1) have shown that there exists a classical solution of {class $C^{\infty}$} whenever one has initial values of class $W^{s,p}(\Omega;\mathbb{R}^4_+)$ for $p>1$, $s>0$.
\item When $\delta_e=\delta_c$ then
\[
\partial_t(e+c)=\delta_e\Delta(e+c)
\]
and as a consequence of the strong maximum principle (see Smith \cite{Smith} Theorem 2.2) $e+c$ is uniformly bounded by ${\rm max}(e_0+c_0)$ for all $t\geq 0$.\\
Furthermore, in the case that $\delta_s=\delta_e=\delta_c=\delta_p$ one gets
\[
\partial_t(s+e+2c+p)=\delta_e\Delta(s+e+2c+p),
\]
whence $s+e+2c+p$ is bounded by ${\rm max}(s_0+e_0+2c_0+p_0)$ for all $t\geq 0$; in particular nonnegativity implies that every component is bounded.
\item The stoichiometric first integrals of the spatially homogeneous setting survive as conservation laws
\[\frac{1}{\abs{\Omega}}\int_{\Omega}e(t,x)+c(t,x)\: dx=\frac{1}{\abs{\Omega}}\int_{\Omega}e_0(x)+c_0(x)\: dx\]
resp.
\[\frac{1}{\abs{\Omega}}\int_{\Omega}s(t,x)+c(t,x)+p(t,x)\: dx=\frac{1}{\abs{\Omega}}\int_{\Omega}s_0(x)+c_0(x)+p_0(x)\: dx \quad\text{for all }t\geq 0,\]
but a reduction of dimension (i.e., elimination of certain variables) is no longer possible.
\item In the irreversible case one may consider only the first three equations as their right-hand sides do not depend on $p$.
\item Results regarding the long time behavior of solutions of the reversible Michaelis-Menten reaction can be found in Elia\v{s} \cite{elias}.
\end{itemize}
\end{remark}
\section{Reduction given slow diffusion and small initial enzyme concentration}
\subsection{Review of results in the literature}
As noted above, there exists no counterpart to Tikhonov's and Fenichel's theorems for infinite dimensional systems, hence the reduction of reaction-diffusion equations is not possible in a similarly direct manner.\\
Regarding the reduction of the Michaelis-Menten reaction with diffusion, one sometimes finds the one-dimensional equation \eqref{red} augmented by a diffusion term for substrate, with no further argument given. This ad-hoc method is problematic, since it amounts to ignoring diffusion in the reduction step.
The appropriate approach is to start with the full system \eqref{mm1diff}--\eqref{mm4diff} and consider possible reductions in the limiting case of small initial concentration for enzyme, with slow diffusion. This will be the vantage point in the present paper.\\
With regard to such an approach, the authors are aware of only three papers for the irreversible system (i.e. \eqref{mm1diff}--\eqref{mm3diff} with $k_{-2}=0$). Yannacopoulos et al. \cite{Yannacopoulos} assumed $E$ and $C$ to be immobile (i.e. $\delta_e=\delta_c=0$; see their equation (71)) and gave a second order approximation for the case of a one-dimensional domain (see in particular equation (80), which in lowest order reduces to the Michaelis-Menten equation for substrate, augmented by diffusion). Britton \cite{britton}, Ch.~8 gave the first order approximation
\begin{equation}\label{immobile}
\partial_{\tau} s= \delta_s \Delta s- \frac{k_1k_2s{(e_0+c_0)}}{k_1s+k_{-1}+k_2}
\end{equation}
which is in agreement with the lowest order terms given in \cite{Yannacopoulos}. Britton made no assumptions on diffusion constants for enzyme or complex, and instead started with system \eqref{mm1}--\eqref{mm2}, augmented by diffusive terms for $s$ and $c$. This is problematic because the elimination of $e$ via stoichiometry is no longer possible when diffusion is present. Therefore Britton's approach is limited to the case considered by Yannacopoulos et al. \cite{Yannacopoulos}. \\
Kalachev et al. \cite{kkkpz} started from \eqref{mm1diff}--\eqref{mm3diff} and considered up to three time scales, with the slow reaction part of order $\varepsilon$ (the total initial mass of enzyme divided by the total initial mass of substrate), a fast reaction part, and diffusion of order $\delta$, deriving asymptotic expansions for the solutions and reductions in different time regimes. They did not discuss the case that slow reaction and diffusion are in the same time scale (i.e., $\delta=\varepsilon$) which we will consider. (In \cite{kkkpz}, Remark 1.2 further work was announced for this case, but apparently this has not been published yet.)
\subsection{Informal review of the reduction heuristics}\label{heuristic}
We will employ a heuristic method to construct a candidate for a reduced system that was introduced in \cite{laxgoeke}. In contrast to the convergence property for the ODE after discretization (which is a consequence of Tikhonov's and Fenichel's theorems) we will not prove convergence here; generally this seems a very hard task (see Section \ref{conclrem}). However, as remarked in \cite{laxgoeke}, Proposition 4.3, the reduced PDE determined by the heuristics represents the only possible reduction of the reaction-diffusion system as $\varepsilon\to 0$. \\
Briefly the heuristics can be described as follows: By spatial discretization of a reaction-diffusion system which depends on a small parameter $\varepsilon$, one obtains a system of ordinary differential equations depending on $\varepsilon$. If the ODE system admits a Tikhonov-Fenichel reduction and the reduced ODE is the spatial discretization of another partial differential equation system, then we will call the latter {\em the reduced PDE of the reaction-diffusion system}. (The conditions stated above are frequently satisfied; see e.g.\ \cite{Lax}.) The following results are in part based on the second author's doctoral thesis \cite{Lax}. Detailed computations will be presented in the Appendix.
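The first step of this heuristic, the spatial discretization, can be sketched as follows in Python (grid size and all parameter values are arbitrary illustrative choices, not data from the paper): finite differences with Neumann boundary conditions turn the irreversible system \eqref{mm1diff}--\eqref{mm3diff} into an $\varepsilon$-dependent ODE system, and for $\delta_e=\delta_c$ the discrete analogue of the total enzyme mass is conserved exactly.

```python
import numpy as np

# Illustrative discretization of the irreversible system (all values chosen
# arbitrarily for this sketch).
n = 50
h = 1.0 / (n - 1)            # grid spacing on [0, 1]
k1, km1, k2 = 1.0, 1.0, 1.0  # rate constants
ds, de, dc = 1.0, 1.0, 1.0   # diffusion constants (here delta_e = delta_c)

def lap(u):
    # 3-point Laplacian with Neumann boundary via mirrored ghost points;
    # its entries sum to zero, so diffusion conserves the discrete mass
    v = np.empty_like(u)
    v[1:-1] = (u[:-2] - 2 * u[1:-1] + u[2:]) / h**2
    v[0] = (u[1] - u[0]) / h**2
    v[-1] = (u[-2] - u[-1]) / h**2
    return v

def rhs(t, u):
    # method-of-lines right-hand side for (s, e, c), irreversible case
    s, e, c = u.reshape(3, n)
    reac = k1 * s * e - km1 * c
    return np.concatenate([
        ds * lap(s) - reac,            # substrate
        de * lap(e) - reac + k2 * c,   # enzyme
        dc * lap(c) + reac - k2 * c,   # complex
    ])

x = np.linspace(0.0, 1.0, n)
u0 = np.concatenate([1 + 0.5 * np.cos(np.pi * x),  # s_0
                     0.1 * np.ones(n),             # e_0
                     np.zeros(n)])                 # c_0
dsdt, dedt, dcdt = rhs(0.0, u0).reshape(3, n)
```

The resulting ODE system is precisely the kind of $\varepsilon$-dependent system to which the Tikhonov-Fenichel reduction of the heuristic is applied.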
\subsection{The irreversible case}\label{mainresultirrev}
In order to determine the reduced PDE systems, we need some preparations.
We consider first the irreversible reaction-diffusion system \eqref{mm1diff}--\eqref{mm3diff}.
Defining total enzyme concentration $y:=e+c$, we get
\begin{align}
\partial_{t} s&= \delta_s \Delta s-k_1s(y-c)+k_{-1} c, &\text{in } (0,\infty)\times \Omega \label{mm1diffy}\\
\partial_{t} c&= \delta_c\Delta c +k_1s(y-c)-(k_{-1}+k_2) c, &\text{in } (0,\infty)\times \Omega\label{mm2diffy}\\
\partial_{t} y&= \delta_c\Delta c + \delta_e(\Delta y -\Delta c), &\text{in } (0,\infty)\times \Omega\label{mm3diffy}
\end{align}
with initial values $s(0,x)=s_0(x)$, $c(0,x)=c_0(x)$, $y(0,x)=e_0(x)+c_0(x)$. Our basic assumptions are:
\begin{itemize}
\item Diffusion is slow, and therefore we introduce the scaling
\[
\delta_z=\varepsilon \delta_z^*\text{ for }z=s,e,c.
\]
\item Total enzyme concentration is small for all $t\geq 0$, and therefore we set
\[
y=\varepsilon y^*{\text{ and } c=\varepsilon c^*},\text{ and also }e_0=\varepsilon e_0^*,\quad c_0=\varepsilon c_0^*.
\]
\end{itemize}
Incorporating these assumptions we have
\begin{align*}
\partial_{t} s&= \varepsilon\delta_s^* \Delta s+\varepsilon(k_1s+k_{-1}) c^*- \varepsilon k_1s y^*, &\text{in } (0,\infty)\times \Omega \\
\partial_{t} c^*&= \varepsilon\delta_c^* \Delta c^*-(k_1s+k_{-1}+k_2) c^* + k_1s y^*, &\text{in } (0,\infty)\times \Omega\\
\partial_{t} y^*&= \varepsilon\delta_e^*\Delta y^*+\varepsilon(\delta_c^*-\delta_e^*) \Delta c^*, &\text{in } (0,\infty)\times \Omega
\end{align*}
with initial values \[s(0,x)=s_0(x),\quad c^*(0,x)=c_0^*(x),\quad y^*(0,x)=y^*_0(x):=e_0^*(x)+c_0^*(x).\] In slow time $\tau=\varepsilon t$ one now finds
\begin{align}
\partial_{\tau} s&= \delta_s^* \Delta s+(k_1s+k_{-1}) c^*- k_1s y^*, &\text{in } (0,\infty)\times \Omega \label{mm1diffskal}\\
\partial_{\tau} c^*&= \delta_c^* \Delta c^*-\varepsilon^{-1}(k_1s+k_{-1}+k_2) c^* + \varepsilon^{-1} k_1s y^*, &\text{in } (0,\infty)\times \Omega\label{mm2diffskal}\\
\partial_{\tau} y^*&= \delta_e^*\Delta y^*+\delta \Delta c^*, &\text{in } (0,\infty)\times \Omega\label{mm3diffskal}
\end{align}
with the abbreviation
\begin{equation}\label{delteq}
\delta:=\delta_c^*-\delta_e^*.
\end{equation}
We will discuss two different cases: If the diffusion constants $\delta_e^*$ and $\delta_c^*$ are close in the sense that $\delta=\varepsilon \delta^*$, then equation \eqref{mm3diffskal} reads
\begin{equation}
\partial_{\tau} y^*= \delta_e^*\Delta y^*+\varepsilon\delta^* \Delta c^*
\end{equation}
and the reduced system for $\varepsilon\to 0$ is again a reaction-diffusion system (with a rational reaction term). Otherwise, the reduced system becomes highly nonlinear.
\begin{remark}
The argument is based on the critical assumption that the ``degenerate'' scalings $c^*=\varepsilon^{-1} c$ and $y^*=\varepsilon^{-1} y$ hold for all $t\geq 0$; to state it more precisely, one needs a uniform bound (with respect to $\varepsilon$) for $c^*$ and $y^*$.
In the special case $\delta_c^*=\delta_e^*$ (e.g. if the molecules of enzyme and complex are of the same size; see Keener and Sneyd \cite{ksI}, Subsection 2.2.2), Remark \ref{remark1} implies that $c^*$ and $y^*$ are uniformly bounded by $e_0^*+c_0^*$. We are not able to extend this property to the case $\delta_c^*\neq\delta_e^*$, but we will verify in the Appendix that the corresponding uniform boundedness property holds for the ODEs obtained via discretization. Furthermore, numerical results indicate that degenerate scaling poses no problem for the Michaelis-Menten system (see Section \ref{numeric}).
\end{remark}
\subsubsection{Irreversible case with $\delta_c^*-\delta_e^*=\mathcal O(\varepsilon)$}\label{mainresultirrevsub1}
In this case the reduced PDE (as defined in subsection \ref{heuristic}) is given by
\begin{align}
\partial_{\tau} s&= \delta_s^* \Delta s-\frac{k_1k_2y^*s}{k_1s+k_{-1}+k_2}, &\text{in } (0,T)\times \Omega\label{mm1diffred1} \\
\partial_{\tau} y^*&= \delta_e^*\Delta y^*, &\text{in } (0,T)\times \Omega \label{mm3diffred1}
\end{align}
on the asymptotic slow manifold \[\mathcal V=\left\{(s,c^*,y^*)\in\mathbb{R}^3_+,\ c^*=\frac{k_1s y^*}{k_1s+k_{-1}+k_2}\right\}.\]
Appropriate initial values on $\mathcal V$ are given by
$(\tilde s_0,\tilde y_0^*)=\left(s_0,y_0^*\right)$.
This assertion is a direct consequence of Proposition \ref{irrevdiscred} in the Appendix.\\
Total enzyme concentration in the reduced equation is subject only to diffusion, and there remains a reaction-diffusion equation for substrate, with the reaction part similar to the usual Michaelis-Menten term. It is worth looking at some special cases: When $\delta_e^*=\delta_c^*=0$, $y^*=y_0^*$ is constant in time and we have
\begin{equation*}
\partial_{\tau} s= \delta_s^* \Delta s- \frac{k_1k_2sy_0^*}{k_1s+k_{-1}+k_2}
\end{equation*}
as in Yannacopoulos et al. \cite{Yannacopoulos}, Equation (80) and in Britton \cite{britton}, Ch.~8. Moreover, setting all diffusion constants to zero (and assuming $c_0^*=0$ as well as constant $y_0^*$) leads to the usual spatially homogeneous reduction as given in \eqref{red}.\\
As far as the authors know, this reduced system has not appeared in the literature so far. The numerical simulations in Section \ref{numeric} indicate convergence.
\subsubsection{Irreversible case with $\delta_c^*-\delta_e^*=\mathcal O(1)$}\label{mainresultirrevsub2}
In this case the reduction is given by
\begin{align}
\partial_{\tau} s&= \delta_s^* \Delta s-\frac{k_1k_2y^*s}{k_1s+k_{-1}+k_2}, &\text{in } (0,T)\times \Omega\label{mm1diffred} \\
\partial_{\tau} y^*&= \delta_e^*\Delta y^*+\delta \Delta\left(\frac{k_1y^*s}{k_1s+k_{-1}+k_2}\right), &\text{in } (0,T)\times \Omega \label{mm3diffred}
\end{align}
on the asymptotic slow manifold \[\mathcal V=\left\{(s,c^*,y^*)\in\mathbb{R}^3_+,\ c^*=\frac{k_1s y^*}{k_1s+k_{-1}+k_2}\right\}.\]
The appropriate initial values are as before (also following from Proposition \ref{irrevdiscred}).\\
This case may be said to correspond to the one mentioned but not treated in Kalachev et al. \cite{kkkpz}; there seems to be no discussion of this in the literature. Note that now the equations for $s$ and $y^*$ are fully coupled; this is a more complex situation than before. Again, numerical simulations (Section \ref{numeric}) are in good agreement with the reduction.
\subsection{The reversible case}\label{mainresultrev}
We will determine a reduced system for the reversible Michaelis-Menten reaction with diffusion, i.e.,
\begin{align*}
\partial_{\tau} s&= \delta_s^* \Delta s+(k_1s+k_{-1}) c^*- k_1s y^*, &\text{in } (0,T)\times \Omega \\
\partial_{\tau} c^*&= \delta_c^* \Delta c^*-\varepsilon^{-1}[(k_1s+k_{-1}+k_{-2}p+k_2) c^* - (k_1s+k_{-2}p) y^*], &\text{in } (0,T)\times \Omega\\
\partial_{\tau} y^*&= \delta_e^*\Delta y^*+\delta \Delta c^*, &\text{in } (0,T)\times \Omega\\
\partial_{\tau} p&= \delta_p^* \Delta p+(k_{-2}p+k_{2}) c^*- k_{-2}py^*, &\text{in } (0,T)\times \Omega
\end{align*}
Here we choose the same scaling as in \eqref{mm1diffskal}--\eqref{mm3diffskal} and additionally we let $\delta_p=\varepsilon \delta_p^*$. If $\delta_c^*-\delta_e^*=\mathcal O(\varepsilon)$ then we get
\begin{align}
\partial_{\tau} s&= \delta_s^* \Delta s-\frac{(k_1k_2s-k_{-1}k_{-2}p)y^*}{k_1s+k_{-2}p+k_{-1}+k_2}, &\text{in } (0,T)\times \Omega\label{mm1diffred3} \\
\partial_{\tau} y^*&= \delta_e^*\Delta y^*, &\text{in } (0,T)\times \Omega \label{mm3diffred3}\\
\partial_{\tau} p&= \delta_p^*\Delta p+\frac{(k_1k_2s-k_{-1}k_{-2}p)y^*}{k_1s+k_{-2}p+k_{-1}+k_2}, &\text{in } (0,T)\times \Omega \label{mm4diffred3}
\end{align}
on the asymptotic slow manifold \[\mathcal V=\left\{(s,c^*,y^*,p)\in\mathbb{R}^4_+,\ c^*=\frac{(k_1s+k_{-2}p) y^*}{k_1s+k_{-2}p+k_{-1}+k_2}\right\}.\] Note that \eqref{mm3diffred3} is uncoupled from the remaining system.
In case $\delta_c^*-\delta_e^*=\mathcal O(1)$ we get
\begin{align}
\partial_{\tau} s&= \delta_s^* \Delta s-\frac{(k_1k_2s-k_{-1}k_{-2}p)y^*}{k_1s+k_{-2}p+k_{-1}+k_2}, &\text{in } (0,T)\times \Omega\label{mm1diffred4} \\
\partial_{\tau} y^*&= \delta_e^*\Delta y^*+\delta\Delta\left(\frac{(k_1s+k_{-2}p)y^*}{k_1s+k_{-2}p+k_{-1}+k_2}\right), &\text{in } (0,T)\times \Omega \label{mm3diffred4}\\
\partial_{\tau} p&= \delta_p^*\Delta p+\frac{(k_1k_2s-k_{-1}k_{-2}p)y^*}{k_1s+k_{-2}p+k_{-1}+k_2}, &\text{in } (0,T)\times \Omega \label{mm4diffred4}
\end{align}
on the same asymptotic slow manifold,
but here one has a fully coupled system for $s$, $y^*$ and $p$.\\
The proofs follow from Proposition \ref{revdiscred}.
In both settings, appropriate initial values are given by
$(\tilde s_0,\tilde y_0^*,\tilde p_0)=\left(s_0,y_0^*,p_0\right)$.\\
Again, setting all diffusion constants to zero (and assuming $c_0^*=p_0=0$ as well as constant $y_0^*$ and $s_0$) leads to $s(\tau,x)+p(\tau,x)=s_0$ and thus to the usual reduction as given in \eqref{redrev}.
\section{Numerical simulations}\label{numeric}
In the following we will provide numerical results that are in good agreement with the reduction given above.
The solutions have been obtained using MATLAB's \texttt{pdepe} function. This function solves an initial-boundary value problem for spatially one-dimensional systems of parabolic and elliptic partial differential equations in the self-adjoint form
$$
C(x,t,u,\partial_x u)\partial_t u = x^{-m} \partial_x (x^m F(x,t,u,\partial_x u)) + S(x,t,u,\partial_x u).
$$
In our case, $m=0$ and $C$ is the identity matrix.
For system \eqref{mm1diffskal}--\eqref{mm3diffskal}, using the unknown $u = ( s, c^*, y^*)^T$, the flux $F$ and the source $S$ become
$$
F = \begin{pmatrix} \delta_s^*\partial_x s \\ \delta_c^*\partial_x c^* \\ \delta_e^*\partial_x y^* +\delta\partial_x c^* \end{pmatrix}, \quad
S = \begin{pmatrix} (k_1s +k_{-1})c^* -k_1s y^* \\ -\varepsilon^{-1}(k_1s +k_{-1}+k_2) c^* + \varepsilon^{-1} k_1s y^* \\ 0 \end{pmatrix}.
$$
For the reduced system \eqref{mm1diffred}--\eqref{mm3diffred}, using the unknown $u = ( s, y^*)^T$, the flux $F$ and the source $S$ become
$$
F = \begin{pmatrix} \delta_s^*\partial_x s \\ \frac{\delta k_1y^*(k_{-1}+k_2)}{(k_1s+k_{-1}+k_2)^2}\partial_xs+\left(\frac{\delta k_1s}{k_1s+k_{-1}+k_2}+\delta_e^*\right)\partial_x y^* \end{pmatrix}, \quad
S = \begin{pmatrix} -\frac{k_1 k_2 s y^*}{k_1 s+k_{-1}+k_2} \\ 0 \end{pmatrix}.
$$
As boundary conditions, we use homogeneous Neumann boundary conditions, i.e., for each unknown we set the spatial derivative equal to zero at the boundary.
The \texttt{pdepe} function uses a self-adjoint finite difference semi-discretization in space, and solves the obtained system of ordinary differential equations by the implicit, adaptive multistep solver \texttt{ode15s}. In all our experiments we have set the tolerances to values below the accuracy we intend to observe (absolute tolerance $10^{-14}$, relative tolerance $10^{-10}$). We have used 100 equidistant grid cells.
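A reader without MATLAB can reproduce the qualitative behaviour with a short method-of-lines script. The following Python sketch solves the scaled full system \eqref{mm1diffskal}--\eqref{mm3diffskal} and the reduced system \eqref{mm1diffred}--\eqref{mm3diffred} and compares the substrate profiles; the coarse grid and the smooth initial data (started on the slow manifold) are our own choices for speed, not the data of the experiments reported below.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Method-of-lines sketch on [0, 1]; grid and initial data are illustrative.
n = 24
x = np.linspace(0.0, 1.0, n)
h = x[1] - x[0]
k1, km1, k2 = 1.0, 1.0, 1.0
dss, des, dcs = 1.0, 1.0, 2.0   # delta_s^*, delta_e^*, delta_c^*
delta = dcs - des               # = 1, i.e. the case delta = O(1)
eps = 1e-3
T = 0.05

def lap(u):
    # Neumann Laplacian via mirrored ghost points
    v = np.empty_like(u)
    v[1:-1] = (u[:-2] - 2 * u[1:-1] + u[2:]) / h**2
    v[0] = (u[1] - u[0]) / h**2
    v[-1] = (u[-2] - u[-1]) / h**2
    return v

def qss(s, y):
    # slow-manifold expression for c^*
    return k1 * s * y / (k1 * s + km1 + k2)

s0 = 1.0 + 0.5 * np.cos(np.pi * x)
y0 = 1.0 + 0.2 * np.cos(np.pi * x)
c0 = qss(s0, y0)                # start on the slow manifold

def full(t, u):
    # scaled full system in slow time
    s, c, y = u.reshape(3, n)
    fast = k1 * s * y - (k1 * s + km1 + k2) * c
    return np.concatenate([
        dss * lap(s) + (k1 * s + km1) * c - k1 * s * y,
        dcs * lap(c) + fast / eps,
        des * lap(y) + delta * lap(c),
    ])

def red(t, u):
    # reduced system for substrate and total enzyme
    s, y = u.reshape(2, n)
    c = qss(s, y)
    return np.concatenate([
        dss * lap(s) - k2 * c,
        des * lap(y) + delta * lap(c),
    ])

sol_f = solve_ivp(full, (0.0, T), np.concatenate([s0, c0, y0]),
                  method="Radau", rtol=1e-7, atol=1e-9)
sol_r = solve_ivp(red, (0.0, T), np.concatenate([s0, y0]),
                  method="Radau", rtol=1e-7, atol=1e-9)
s_f, s_r = sol_f.y[:n, -1], sol_r.y[:n, -1]
err = np.max(np.abs(s_f - s_r))
```

Since both systems are discretized with the same Laplacian, the difference `err` isolates the reduction error, which is of the order of $\varepsilon$.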
Figure \ref{fig:InitialCondition} shows the initial condition we have used; a step function in $s$, a smooth cosine profile for $c^*$, and a cosine profile with an additional Gaussian bump for $y^*$. We have set $\delta_s=\delta_e=k_1=k_{-1}=k_2=1$ and $\delta_c=2$ (so $\delta=1$; see case \ref{mainresultirrevsub2}).
Figure \ref{fig:sol10} shows the solutions at time $T=0.005$ for $\varepsilon=1.0$. Already one can see that the concentration $s$ is described well by the reduced system, whereas there is a discrepancy in $y^*$. For $\varepsilon=0.0001$, shown in Figure \ref{fig:sol00001}, there is no visible difference between the solutions of the original and the reduced systems. In Figure \ref{fig:convergence} we investigate the convergence of the solution of the full system to the solution of the reduced system. The error is measured in the $L^\infty$ norm in all three solution components. As $\varepsilon\to 0$, we observe rather clean first-order convergence in the double-logarithmic plot. Finally, we also set $\delta_c=1$ (so $\delta=0$; see case \ref{mainresultirrevsub1}) and again measure the error, in Figures \ref{fig:convergenceglsc} and \ref{fig:convergencegly}. This confirms the theoretical prediction.
\begin{figure}
\centering\includegraphics[width=0.8\linewidth]{initial.pdf}
\caption{Initial condition for $s$, $c^*$ and $y^*$.}
\label{fig:InitialCondition}
\end{figure}
\begin{figure}
\centering\includegraphics[width=0.8\linewidth]{solungle1.pdf}
\caption{Solutions $s$ and $y^*$ at time $T=0.005$. Comparison between Michaelis-Menten and reduced system for $\varepsilon=1.0$.}
\label{fig:sol10}
\end{figure}
\begin{figure}
\centering\includegraphics[width=0.8\linewidth]{solungle00001.pdf}
\caption{Solutions $s$ and $y^*$ at time $T=0.005$. Comparison between Michaelis-Menten and reduced system for $\varepsilon=0.0001$.}
\label{fig:sol00001}
\end{figure}
\begin{figure}
\centering\includegraphics[width=0.8\linewidth]{convergenceungl.pdf}
\caption{Convergence of the full solution to the reduced solution as $\varepsilon\to 0$. Error measured in the $L^\infty$ norm.}
\label{fig:convergence}
\end{figure}
\begin{figure}
\centering\includegraphics[width=0.8\linewidth]{convergenceglsc.pdf}
\caption{Convergence of the full solution to the reduced solution for equal diffusion constants as $\varepsilon\to 0$. Error measured in the $L^\infty$ norm.}
\label{fig:convergenceglsc}
\end{figure}
\begin{figure}
\centering\includegraphics[width=0.8\linewidth]{convergencegly.pdf}
\caption{Convergence of the full solution to the reduced solution for equal diffusion constants as $\varepsilon\to 0$. Error measured in the $L^\infty$ norm.}
\label{fig:convergencegly}
\end{figure}
\section{Concluding remarks}\label{conclrem}
\begin{itemize}
\item As already noted, we do not discuss convergence results. But it is easy to see that the uniform bound for $y^*$ implies that $c$ converges uniformly to 0 as $\varepsilon\to0$. Moreover, up to taking a subsequence, $c^*:=\varepsilon^{-1}c$ and $y^*$ converge $\text{weakly}^*$ in $C^{0}$ and weakly in $L^{p}$ for all $1<p<\infty$. This may be a starting point for a convergence proof.
\item As already mentioned, the reductions above can be obtained only after a degenerate scaling of certain variables; then a Tikhonov-Fenichel reduction is applicable. (The corresponding scaling by Heineken et al. \cite{hta} in the ODE case is convenient, but not necessary.) This may also be the underlying reason why the approach by Yannacopoulos et al. \cite{Yannacopoulos} was not directly applicable to the given setting. The scaled quantities $y^*$ and $c^*$ can be seen as first order approximations of $y$ and $c$ in the solution of \eqref{mm1diffy}--\eqref{mm3diffy} (with respect to the assumptions regarding slow diffusion and small total initial enzyme concentration), where the zero order terms are equal to zero. The effect of degenerate scalings in general is investigated in a forthcoming paper \cite{lw2}.
\item A reduction similar to the one above was already given in the dissertation \cite{Lax}, but it was based on writing the system in $(s,e,c,p)$ and scaling both $e=\varepsilon e^*$ and $c=\varepsilon c^*$. The reduced system is equivalent to the reduced system given here. We chose to change the variables to $(s,c,y,p)$ in order to emphasize the resemblance to the non-diffusive case which is otherwise lost.
\item It is also possible to only scale $y$ instead of both $c$ and $y$ (and still obtain that $c$ will be of order $\varepsilon$). But there are some disadvantages: The computation of the reduced system gets more involved as the results of \cite{laxgoeke} cannot be used directly. Moreover, we only get a zero order approximation to the slow manifold, given by $c=0$.
\item Different QSS assumptions are also being discussed in the literature. Various choices of small rate constants can be found in \cite{laxgoeke,Lax}; for example the assumptions of slow product formation ($k_2=\varepsilon k_2^*$) and slow diffusion ($\delta_z=\varepsilon \delta_z^*\text{ for }z=s,e,c,p$) as well as only slow product formation are discussed.\\
Moreover, the assumption of slow complex formation ($k_1=\varepsilon k_1^*$ and $k_{-2}=\varepsilon k_{-2}^*$) and slow diffusion can be discussed by employing the method developed in \cite{laxgoeke}. A reduced system is given by
\begin{align*}
\partial_{\tau} s&= \delta_s \Delta s-\frac{k_1k_2}{k_{-1}+k_2}se+\frac{k_{-1}k_{-2}}{k_{-1}+k_2} ep\\
\partial_{\tau} e&= \delta_e\Delta e \\
\partial_{\tau} p&= \delta_p\Delta p+\frac{k_1k_2}{k_{-1}+k_2}se-\frac{k_{-1}k_{-2}}{k_{-1}+k_2} ep
\end{align*}
on the slow manifold defined by $c=0$. This corresponds to the convergence results of Bothe and Pierre \cite{BothePierre1} and Bisi et al. \cite{bisi} for a related system which is defined by the reaction $A_1+A_2 \rightleftharpoons A_3 \rightleftharpoons A_4+A_5$. (Note that the latter reaction is easier to analyze, due to the structure of the conservation laws; see Elia\v{s} \cite{elias}). In all cases, the numerical results are in good agreement with the reduction.
\item By analogous methods one can derive a reduction given the assumption of small total initial enzyme concentration, but with fast diffusive terms. Scaling again $y=\varepsilon y^*$ and $c=\varepsilon c^*$ and using results of \cite{Lax} one obtains the classical reduction: the fast diffusion yields a homogenization of the concentrations, enzyme and complex are in QSS and the reduced dynamics of the substrate are described by \eqref{red} (again, the reduction is in good agreement with numerical results). We omit details here.
\end{itemize}
\section{Acknowledgement}
The second-named author was supported by the DFG Research Training Group ``Experimental and Constructive Algebra'' (GRK 1632).
\section{Introduction \label{intro}}
The discrete time quantum walk (QW) was introduced as a quantum version of the classical random walk, whose time evolution is defined by a unitary evolution of amplitudes. The distribution of the QW on the one-dimensional lattice differs from that of the random walk \cite{Konno2002,Konno2005}. Reviews and books on QWs include, for example, Venegas-Andraca \cite{Venegas2013}, Konno \cite{Konno2008b}, Cantero et al. \cite{CanteroEtAl2013}, Portugal \cite{P2013}, and Manouchehri and Wang \cite{MW2013}. QWs have been investigated in quantum information and computation since around 2000. Owing to their characteristic properties, QWs have recently been studied by a number of groups in connection with various topics, for example, separation of radioisotopes \cite{MatsuokaEtAl2011}, energy transfer in photosynthetic complexes \cite{MohseniEtAl2008}, and topological insulators \cite{ObuseEtAl2015}.
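As an illustration of this unitary evolution (the parameters and the symmetric initial coin state below are our own illustrative choices, not taken from the works cited), the following Python sketch simulates the standard Hadamard walk on $\mathbb{Z}$: a coin operation at every site followed by a conditional shift.

```python
import numpy as np

# Minimal sketch of the one-dimensional Hadamard walk.
T = 50                      # number of time steps
n = 2 * T + 1               # positions -T, ..., T
psi = np.zeros((2, n), dtype=complex)
psi[:, T] = np.array([1.0, 1.0j]) / np.sqrt(2)   # symmetric coin state at the origin

H = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)  # Hadamard coin

for _ in range(T):
    phi = H @ psi                    # apply the coin at every site
    new = np.zeros_like(psi)
    new[0, :-1] = phi[0, 1:]         # left-moving component shifts left
    new[1, 1:] = phi[1, :-1]         # right-moving component shifts right
    psi = new

prob = (np.abs(psi) ** 2).sum(axis=0)   # position distribution at time T
pos = np.arange(-T, T + 1)
mean = float((pos * prob).sum())
std = float(np.sqrt(((pos - mean) ** 2 * prob).sum()))
```

In contrast to the classical random walk, whose standard deviation after $T$ steps is $\sqrt{T}$, the standard deviation here grows linearly in $T$ (ballistic spreading), one of the characteristic properties mentioned above.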
There are two types of QWs: homogeneous and inhomogeneous. ``Inhomogeneity'' means that the quantum coin of the QW depends on time and/or space \cite{Ahl201214,Ahl201211,Ahl201152,joye2011}. We focus on space-inhomogeneous QWs in one dimension. One of the basic interests is to obtain measures induced by the unitary evolution of a QW, e.g., the stationary measure, the (time-averaged) limit measure and the rescaled weak-limit measure. In this paper, we consider stationary measures of QWs on $\mathbb{Z}$, where $\mathbb{Z}$ is the set of integers; a stationary measure is a measure that does not depend on time. In particular, we obtain stationary measures of two-state space-inhomogeneous QWs.
We briefly review the background on stationary measures for space-inhomogeneous models, i.e., QWs with defects. As for stationary measures of two-state QWs with one defect at the origin, Konno et al. \cite{KLS2013} showed that a stationary measure with exponential decay with respect to the position, for the QW starting from infinitely many sites, is identical to a time-averaged limit measure for the same QW starting from just the origin. We call such a stationary measure an exponential-type measure. One of our results contains the stationary measure shown by Konno et al. \cite{KLS2013}. Endo et al. \cite{ekst2014} obtained a stationary measure of the QW with one defect whose coin matrices are given by the Hadamard matrix at $x \not= 0$ and a rotation matrix at $x=0$. Endo and Konno \cite{ek2014} calculated a stationary measure of the QW with one defect which was introduced and studied by W\'ojcik et al. \cite{WojcikEtAl2004}. Moreover, Endo et al. \cite{eko2015} and Endo et al. \cite{eekst2015} obtained stationary measures of the two-phase QW without defect and with one defect, respectively. Our result includes the stationary measures of the two-phase QW without defect and with one defect studied by Endo et al. \cite{eekst2015, eko2015}.
Konno and Takei \cite{kt2015} considered stationary measures of QWs and gave non-uniform stationary measures expressed as quadratic polynomials. We call such a stationary measure a quadratic-polynomial-type measure. Moreover, they proved that, in general, the set of stationary measures contains the uniform measures for the QW. Our aim is therefore to find non-trivial stationary measures of two-state QWs with multiple defects on $\mathbb{Z}$. One of our results includes the quadratic-polynomial-type stationary measure given by Konno and Takei \cite{kt2015}.
Stationary measures for other QW models have also been investigated, for example, the three-state QW on $\mathbb{Z}$ \cite{EndoEtAl2016, EndoEtAl20162, Kawai2017, Konno2014, WangEtAl2015} and higher-dimensional QWs \cite{Komatsu2017}. In order to analyze the details of QWs, a method based on transfer matrices is one of the common approaches; see, for example, Ahlbrecht et al. \cite{Ahl2011} and Bourget et al. \cite{Bou2003}. In this paper, we apply this method to two-state space-inhomogeneous QWs to obtain their stationary measures.
This paper is organized as follows. In Section \ref{Model}, we give the definition of the two-state inhomogeneous QW with multiple defects on $\mathbb{Z}$. In Section \ref{result}, we present our results. Section \ref{proof} gives the proofs of the results shown in the previous section by solving the corresponding eigenvalue problem. In Section \ref{exam}, we deal with typical examples of two-state space-inhomogeneous QWs. Finally, Section \ref{summ} is devoted to the summary.
\section{Model and method}\label{Model}
We introduce a discrete-time space-inhomogeneous QW on the line, which is a quantum version of the classical random walk with an additional coin state. The particle has a coin state at time $n$ and position $x$ described by a two-dimensional vector:
\begin{align*}
\Psi_n (x)=
\begin{bmatrix}
\Psi_{n}^{L}(x)\\
\Psi_{n}^{R}(x)
\end{bmatrix}\ \ (x\in\mathbb{Z}).
\end{align*}
The upper and lower elements express the left and right chiralities, respectively. The time evolution is determined by $2\times2$ unitary matrices $U_x$, called coin matrices here:
\begin{align*}
U_x=
\begin{bmatrix}
a_x&b_x\\
c_x&d_x
\end{bmatrix}
\ \ (x\in\mathbb{Z}).
\end{align*}
The subscript $x$ stands for the location. We divide $U_x$
into $U_x=P_x+Q_x$ with
\begin{align*}
P_x=
\begin{bmatrix}
a_x&b_x\\
0&0
\end{bmatrix},\ \
Q_x=
\begin{bmatrix}
0&0\\
c_x&d_x
\end{bmatrix}.
\end{align*}
The $2\times2$ matrix $P_x$ (resp. $Q_x$) represents the movement of the walker to the left (resp. right) at position $x$ at each time step. Then the time evolution of the walk is defined by
\begin{align*}
\Psi_{n+1}(x)\equiv U^{(s)}\Psi_n(x)=P_{x+1}\Psi_n(x+1)+Q_{x-1}\Psi_n(x-1)\hspace{5mm}(x\in\mathbb{Z}).
\end{align*}
That is,
\begin{align*}
\begin{bmatrix}
\Psi_{n+1}^L (x)\\
\Psi_{n+1}^R (x)
\end{bmatrix}=
\begin{bmatrix}
a_{x+1}\Psi^L_n (x+1)+b_{x+1}\Psi^R_n (x+1)\\
c_{x-1}\Psi^L_n (x-1)+d_{x-1}\Psi^R_n (x-1)
\end{bmatrix}.
\end{align*}
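As a minimal numerical sketch of this update (assuming, purely to keep the lattice finite, a ring of $L$ sites with periodic boundaries and one illustrative rotation coin at every site), one time step can be written as follows; the total probability is conserved because the walk operator is unitary on the ring:

```python
import numpy as np

L, theta = 8, np.pi / 3
U = np.array([[np.cos(theta), np.sin(theta)],
              [np.sin(theta), -np.cos(theta)]])   # one illustrative coin
P = np.array([U[0], [0, 0]])    # left-moving part  (top row of U)
Q = np.array([[0, 0], U[1]])    # right-moving part (bottom row of U)

rng = np.random.default_rng(1)
Psi = rng.normal(size=(L, 2)) + 1j * rng.normal(size=(L, 2))
Psi /= np.sqrt((np.abs(Psi) ** 2).sum())          # total probability 1

def step(Psi):
    # Psi_{n+1}(x) = P Psi_n(x+1) + Q Psi_n(x-1), indices taken mod L
    return np.roll(Psi, -1, axis=0) @ P.T + np.roll(Psi, 1, axis=0) @ Q.T

Psi1 = step(Psi)
mu = (np.abs(Psi1) ** 2).sum(axis=1)   # |Psi^L(x)|^2 + |Psi^R(x)|^2
assert np.isclose(mu.sum(), 1.0)       # unitarity on the ring
```

The ring geometry is only a convenience for finite arrays; on $\mathbb{Z}$ the update rule is the same.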
Now let
\begin{align*}
\Psi_n= {}^T[\cdots,\Psi_n^L (-1),\Psi_n^R (-1),\Psi_n^L (0),\Psi_n^R (0),
\Psi_n^L (1),\Psi_n^R (1),\cdots],
\end{align*}
\begin{align*}
U^{(s)}=
\begin{bmatrix}
\ddots&\vdots&\vdots&\vdots&\vdots&\vdots&\ldots\\
\ldots&O&P_{-1}&O&O&O&\ldots\\
\ldots&Q_{-2}&O&P_0&O&O&\ldots\\
\ldots&O&Q_{-1}&O&P_1&O&\ldots\\
\ldots&O&O&Q_0&O&P_2&\ldots\\
\ldots&O&O&O&Q_1&O&\ldots\\
\ldots&\vdots&\vdots&\vdots&\vdots&\vdots&\ddots
\end{bmatrix}
\ \ \ \text{with}\ O=
\begin{bmatrix}
0&0\\
0&0
\end{bmatrix},
\end{align*}
\noindent where $T$ denotes transposition and the superscript $(s)$ stands for ``system''. Then the state of the QW at time $n$ is given by $\Psi_n =(U^{(s)})^n \Psi_0$ for any $n \geq 0$. Let $\mathbb{R}_{+} =[0,\infty)$. Here we introduce a map $\phi:(\mathbb{C}^2)^{\mathbb{Z}} \to \mathbb{R}^{\mathbb{Z}}_{+}$ such that for
\begin{align*}
\Psi={}^T\bigg[\cdots,
\begin{bmatrix}
\Psi^L (-1)\\
\Psi^R (-1)
\end{bmatrix},
\begin{bmatrix}
\Psi^L (0)\\
\Psi^R (0)
\end{bmatrix},
\begin{bmatrix}
\Psi^L (1)\\
\Psi^R (1)
\end{bmatrix},
\cdots\bigg]\in(\mathbb{C}^2)^{\mathbb{Z}},
\end{align*}
we define the measure of the QW by
$
\mu:\mathbb{Z} \to \mathbb{R}_{+}
$
satisfying
\begin{align*}
\mu(x)=\phi(\Psi)(x)=|\Psi^L(x)|^2+|\Psi^R(x)|^2 \ \ \ (x\in \mathbb{Z}).
\end{align*}
We should note that $\mu(x)$ gives the measure of the QW at position $x$. Our model is considered on the set of all $\mathbb{C}^2$-valued functions on $\mathbb{Z}$, with inner product $\langle\Psi,\Phi\rangle =\sum_{x\in\mathbb{Z}}\langle\Psi(x),\Phi(x)\rangle_{\mathbb{C}^2}$, where $\langle\cdot,\cdot\rangle_{\mathbb{C}^2}$ denotes the standard inner product on $\mathbb{C}^2$. We do not make any assumption on the norm of the states. In this paper, we consider the stationary measures of QWs in this framework.
Let $\mathcal{M}(U^{(s)})$ be the set of measures of the QW. To explain our results, we introduce three classes of measures for the QW. The first is the set of exponential-type measures:
\begin{equation*}
\begin{split}
&\mathcal{M}_{et}(U^{(s)})=\Big\{\mu\in\mathcal{M}(U^{(s)})\ ;\ \text{there exist}\ c_{+},\ c_{-}>0\ (c_{+}, c_{-}\ne1)\ \text{such that}\\
&\hspace{4.0cm}0<\lim_{x\to+\infty}\frac{\mu(x)}{c_{+}^x}<+\infty,\quad 0<\lim_{x\to-\infty}\frac{\mu(x)}{c_{-}^x}<+\infty\Big\},
\end{split}
\end{equation*}
where $\mathcal{M}(U^{(s)})$ is the set of measures on $\mathbb{Z}$. The second is the set of quadratic-polynomial-type measures:
\begin{equation*}
\begin{split}
&\mathcal{M}_{qpt}(U^{(s)})=\Big\{\mu\in\mathcal{M}(U^{(s)})\ ;\ 0<\lim_{x\to\pm\infty}\frac{\mu(x)}{|x|^2}<+\infty\Big\}.
\end{split}
\end{equation*}
The last is the set of stationary measures:
\begin{equation*}
\begin{split}
&\mathcal{M}_s(U^{(s)})=\Big\{\mu\in\mathcal{M}(U^{(s)})\ ;\ \text{there exists}\ \Psi_0\in\left(\mathbb{C}^2\right)^{\mathbb{Z}}\ \text{such that}\\
&\hspace{6.0cm}\phi{((U^{(s)}})^n\Psi_0)=\mu\ (n=0,1,2,\ldots)\Big\}
\end{split}
\end{equation*}
and we call an element of ${\cal{M}}_s(U^{(s)})$ a stationary measure of the QW. In general, if unitary operators $U^{(s)}_1$ and $U^{(s)}_2$ are different, then the sets of stationary measures $\mathcal{M}_s(U^{(s)}_1)$ and $\mathcal{M}_s(U^{(s)}_2)$ are different. For example, if we take the unitary operators $U^{(s)}_1$ and $U^{(s)}_2$ corresponding to the following matrices $U_1$ and $U_2$, respectively:
\begin{align*}
U_1=
\begin{bmatrix}
1&0\\
0&1
\end{bmatrix}, \ \ \
U_2=\frac{1}{\sqrt{2}}
\begin{bmatrix}
1&1\\
1&-1
\end{bmatrix},
\end{align*}
then we have
\begin{align*}
\mathcal{M}_s(U^{(s)}_1)=\mathcal{M}_{unif}(U^{(s)}),\ \ \
\mathcal{M}_s(U^{(s)}_2)\supsetneq\mathcal{M}_{unif}(U^{(s)}).
\end{align*}
These results were given by Konno and Takei \cite{kt2015}. Here $\mathcal{M}_{unif}(U^{(s)})$ is the set of uniform measures defined by
\[
\mathcal{M}_{unif}(U^{(s)})=\Big\{\mu_{c}\in\mathcal{M}(U^{(s)})\ ;\ \mu_{c}(x)=c,\ c>0 \Big\}.
\]
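For instance, the inclusion of the uniform measures among the stationary measures of a space-homogeneous walk can be seen directly: a spatially constant state $\Psi(x)\equiv v$ is mapped to the constant state $(P+Q)v=U_2 v$, so its measure stays uniform. A small numerical sketch (on a ring of sites, an assumption made only for finiteness):

```python
import numpy as np

L = 10
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)      # the coin U_2 (Hadamard)
P = np.array([H[0], [0, 0]])                      # left-moving part
Q = np.array([[0, 0], H[1]])                      # right-moving part

v = np.array([0.6, 0.8j])                         # any spinor with |v|^2 = 1
Psi = np.tile(v, (L, 1))                          # spatially constant state
Psi1 = np.roll(Psi, -1, axis=0) @ P.T + np.roll(Psi, 1, axis=0) @ Q.T

# a constant profile is mapped to the constant profile H v
assert np.allclose(Psi1, np.tile(H @ v, (L, 1)))
mu = (np.abs(Psi1) ** 2).sum(axis=1)
assert np.allclose(mu, mu[0])                     # the measure stays uniform
```

The same computation goes through for any homogeneous unitary coin, since $\|U_2 v\|=\|v\|$.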
Let us consider the eigenvalue problem:
\begin{align*}
U^{(s)}\Psi=\lambda\Psi,
\end{align*}
where $\lambda\in\mathbb{C}$ with $|\lambda|=1$ and $U^{(s)}$ is a doubly infinite unitary matrix. If we take the initial state $\Psi_0$ to be a solution of this problem, then we have
\begin{align*}
\Psi_n =(U^{(s)})^n \Psi_0 =\lambda^n \Psi_0.
\end{align*}
Noting that $|\lambda|=1$, we see
\begin{align*}
\mu_n (x) =||\Psi_n (x)||^2=|\lambda|^{2n}||\Psi_0 (x)||^2 =\mu_0 (x)\ \ (x\in\mathbb{Z}).
\end{align*}
Therefore $\mu_0 (x)=\phi(\Psi_0)(x)$ gives the stationary measure.
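This argument can be checked numerically on a finite analogue (assuming, purely for finiteness, a ring of $L$ sites rather than $\mathbb{Z}$): build the walk unitary, take any of its eigenvectors, and confirm that the induced measure does not change under one step.

```python
import numpy as np

L, theta = 6, np.pi / 4
U = np.array([[np.cos(theta), np.sin(theta)],
              [np.sin(theta), -np.cos(theta)]])

# assemble the finite walk unitary (2L x 2L) on a ring of L sites
W = np.zeros((2 * L, 2 * L), dtype=complex)
for x in range(L):
    xp, xm = (x + 1) % L, (x - 1) % L
    W[2 * x, 2 * xp:2 * xp + 2] = U[0]        # L-component comes from x+1
    W[2 * x + 1, 2 * xm:2 * xm + 2] = U[1]    # R-component comes from x-1
assert np.allclose(W.conj().T @ W, np.eye(2 * L))   # W is unitary

lam, V = np.linalg.eig(W)                # every eigenvalue has |lam[k]| = 1
Psi0 = V[:, 0]                           # an eigenvector of W
Psi1 = W @ Psi0                          # = lam[0] * Psi0
mu0 = (np.abs(Psi0.reshape(L, 2)) ** 2).sum(axis=1)
mu1 = (np.abs(Psi1.reshape(L, 2)) ** 2).sum(axis=1)
assert np.allclose(mu0, mu1)             # the measure is stationary
```

The key point is exactly the one above: $|\lambda|=1$ forces $\mu_n(x)=|\lambda|^{2n}\mu_0(x)=\mu_0(x)$.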
\section{Results}\label{result}
Applying the method introduced in Section \ref{Model} to the space-inhomogeneous QW, we solve the eigenvalue problem $U^{(s)}\Psi=\lambda\Psi$ as follows. From now on, we put $\alpha=\Psi^L (0)$ and $\beta=\Psi^R (0)$.
\begin{thm}\label{thm:3.1}
Let $\Psi(x)={}^T[\Psi^L (x),\Psi^R (x)]$ be the amplitude.
Let the coin matrices be defined by
\begin{align*}
U_x=
\begin{bmatrix}
a_x&b_x\\
c_x&d_x
\end{bmatrix}\ \ (x\in\mathbb{Z}),
\end{align*}
where $a_x b_x c_x d_x\neq 0$. Then a solution of the following eigenvalue problem:
\begin{align*}
U^{(s)}\Psi=\lambda \Psi
\end{align*}
is given by
\begin{eqnarray*}
\Psi(x)=
\begin{cases}
\displaystyle \prod_{y=1}^{x} D^{+}_{y} \Psi(0)&(x\geq 1),\\
\Psi(0)&(x=0),\\
\displaystyle \prod_{y=-1}^{x} D^{-}_{y} \Psi(0)&(x\leq -1),
\end{cases}
\end{eqnarray*}
where
\begin{eqnarray*}
D^{+}_{x}=\begin{bmatrix}
\dfrac{\lambda^{2}-b_x c_{x-1}}{\lambda a_x}&-\dfrac{b_x d_{x-1}}{\lambda a_x}\\
\dfrac{c_{x-1}}{\lambda}&\dfrac{d_{x-1}}{\lambda}
\end{bmatrix},\ \
D^{-}_{x}=\begin{bmatrix}
\dfrac{a_{x+1}}{\lambda}&\dfrac{b_{x+1}}{\lambda}\\
-\dfrac{a_{x+1}c_x}{\lambda d_x}&\dfrac{\lambda^2-b_{x+1} c_{x}}{\lambda d_x}
\end{bmatrix}.
\end{eqnarray*}
Moreover a stationary measure $\mu$ is given by
\begin{align*}
\mu (x)=\phi(\Psi)(x)=||\Psi(x)||^2\ \ \ (x\in\mathbb{Z}).
\end{align*}
\end{thm}
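As a numerical sanity check of Theorem \ref{thm:3.1} (a sketch with arbitrarily chosen rotation-type coins and an arbitrary $\lambda$ on the unit circle, both choices made purely for illustration), one can build $\Psi$ from the products of $D^{\pm}_y$ and verify the eigenvalue relation at every interior site of a truncated lattice:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 6                                         # check sites x = -N..N

def coin(t, p):
    # an illustrative unitary coin with all entries nonzero
    return np.exp(1j * p) * np.array([[np.cos(t), np.sin(t)],
                                      [np.sin(t), -np.cos(t)]])

U = {x: coin(rng.uniform(0.2, 1.3), rng.uniform(0, 2 * np.pi))
     for x in range(-N - 1, N + 2)}
lam = np.exp(1j * rng.uniform(0, 2 * np.pi))  # any lambda with |lambda| = 1

def Dp(x):   # D^+_x from Theorem 3.1
    (a, b), (c, d) = U[x][0], U[x - 1][1]
    return np.array([[(lam**2 - b * c) / (lam * a), -b * d / (lam * a)],
                     [c / lam, d / lam]])

def Dm(x):   # D^-_x from Theorem 3.1
    (a, b), (c, d) = U[x + 1][0], U[x][1]
    return np.array([[a / lam, b / lam],
                     [-a * c / (lam * d), (lam**2 - b * c) / (lam * d)]])

Psi = {0: np.array([0.3 + 0.4j, 0.5 - 0.2j])}   # Psi(0) = (alpha, beta)
for x in range(1, N + 1):
    Psi[x] = Dp(x) @ Psi[x - 1]
for x in range(-1, -N - 1, -1):
    Psi[x] = Dm(x) @ Psi[x + 1]

# eigenvalue relation: lam Psi(x) = P_{x+1} Psi(x+1) + Q_{x-1} Psi(x-1)
for x in range(-N + 1, N):
    rhs = np.array([U[x + 1][0] @ Psi[x + 1], U[x - 1][1] @ Psi[x - 1]])
    assert np.allclose(lam * Psi[x], rhs)
```

The stationary measure is then $\mu(x)=\|\Psi(x)\|^2$, computed from the resulting amplitudes.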
We introduce some notations
\begin{eqnarray*}
&\mathbb{Z}_{\geq}=\{0,1,2,\ldots \},&\\
&\mathbb{Z}_{>}=\{1,2,3,\ldots \},&\\
&\mathbb{Z}_{[a,b]}=\{a, a+1, \ldots, b-1, b\}&(a,b\in\mathbb{Z}\ \text{with}\ a < b).
\end{eqnarray*}
Next we consider a special case of Theorem \ref{thm:3.1} in which a sequence of coin matrices $\{U_x\}$ is defined by
\begin{align*}
U_x=U\ \ (x\notin\mathbb{Z}_{[-m,n]})
\end{align*}
for $m,n\in\mathbb{Z}_>$. Here $U$ is a $2\times2$ unitary matrix. The model can be regarded as a QW with $(m+n+1)$ defects. The following result is a direct consequence of Theorem \ref{thm:3.1}.
\begin{pro}\label{pro:3.2}
Put $m,n\in\mathbb{Z}_{>}$.
Let $\Psi(x)={}^T[\Psi^L (x),\Psi^R (x)]$ be the amplitude.
Define a sequence of coin matrices $\{U_x\}$ as follows:
\begin{align*}
&U_x=
\begin{cases}
\begin{bmatrix}
a_x&b_x\\
c_x&d_x
\end{bmatrix}&(x\in\mathbb{Z}_{[-m,n]}),\vspace{3mm}\\
\begin{bmatrix}
a&b\\
c&d
\end{bmatrix}&(x\notin\mathbb{Z}_{[-m,n]}),
\end{cases}
\end{align*}
where $a_x b_x c_x d_x\neq0$ and $abcd\neq0$. Then a solution of the following eigenvalue problem:
\begin{align*}
U^{(s)}\Psi=\lambda \Psi
\end{align*}
is given by
\begin{align*}
\Psi(x)=
\begin{cases}
\bigg\{ (D^{+}) ^{x-(n+1)}\bigg\}\displaystyle \prod_{y=1}^{n+1}D^{+}_{y} \Psi(0)&(n+2\leq x),\\
\displaystyle \prod_{y=1}^{x}D^{+}_{y} \Psi(0)&(1\leq x \leq n+1),\\
\Psi(0)&(x=0),\\
\displaystyle \prod_{y=-1}^{x}D^{-}_{y} \Psi(0)&(-(m+1) \leq x \leq -1),\\
\bigg\{ (D^{-})^{-x-(m+1)} \bigg\}\displaystyle \prod_{y=-1}^{-(m+1)}D^{-}_{y} \Psi(0)&(x\leq -(m+2)),
\end{cases}
\end{align*}
where
\begin{eqnarray*}
D^{+}=
\begin{bmatrix}
\dfrac{\lambda^2-bc}{\lambda a}&-\dfrac{bd}{\lambda a}\\
\dfrac{c}{\lambda}&\dfrac{d}{\lambda}
\end{bmatrix},\
D^{+}_{n+1}=
\begin{bmatrix}
\dfrac{\lambda^{2}-b c_{n}}{\lambda a}&-\dfrac{b d_{n}}{\lambda a}\\
\dfrac{c_{n}}{\lambda}&\dfrac{d_{n}}{\lambda}
\end{bmatrix},\\\\
D^{-}_{-(m+1)}=
\begin{bmatrix}
\dfrac{a_{-m}}{\lambda}&\dfrac{b_{-m}}{\lambda}\\
-\dfrac{a_{-m} c}{\lambda d}&\dfrac{\lambda^2-b_{-m} c}{\lambda d}
\end{bmatrix},\
D^{-}=
\begin{bmatrix}
\dfrac{a}{\lambda}&\dfrac{b}{\lambda}\\
-\dfrac{ac}{\lambda d}&\dfrac{\lambda^2-bc}{\lambda d}
\end{bmatrix}.
\end{eqnarray*}
Furthermore, a stationary measure $\mu$ is determined by
\begin{align*}
\mu (x)=\phi(\Psi)(x)=||\Psi(x)||^2\ \ \ (x\in\mathbb{Z}).
\end{align*}
\end{pro}
The following result is obtained by solving the recurrence relations of $D^+$ and $D^-$, respectively.
\begin{pro}\label{pro:3.3}
We put $\Psi^{L}(x)=S_x$, $\Psi^{R}(x)=T_x$ and $\Delta=ad-bc$.
\begin{description}
\item[(1)]
For $x\geq n+1$, we have
\begin{description}
\item[(i)] $\lambda^2 \neq ad+bc\pm2\sqrt{abcd}$ case
\begin{align*}
S_x&=\dfrac{1}{\Lambda_{+} -\Lambda_{-}}\Big\{
\Lambda^{x-n}_{+} ( S_{n+1}-\Lambda_{-} S_{n} ) -
\Lambda^{x-n}_{-} ( S_{n+1}-\Lambda_{+} S_{n} )\Big\},&\\\\
T_{x}&
=\dfrac{1}{\Lambda_{+} -\Lambda_{-}}\Big\{
\Lambda^{x-(n+1)}_{+} ( T_{n+2}-\Lambda_{-} T_{n+1} ) -\\
&\hspace{50mm}\Lambda^{x-(n+1)}_{-} ( T_{n+2}-\Lambda_{+} T_{n+1} )\Big\},
\end{align*}
where
\begin{align*}
\Lambda_{\pm}=\dfrac{\lambda^2+\Delta\pm\sqrt{(\lambda^2+\Delta)^2-4\lambda^{2}ad}}{2a\lambda}.
\end{align*}
\item[(ii)] $\lambda^2 = ad+bc\pm2\sqrt{abcd}$ case
\begin{align*}
S_x &= \Lambda^{x-(n+1)} \Big[S_{n+1}+\{x-(n+1)\}{(S_{n+1}-\Lambda S_n)}\Big]\vspace{2mm},&\\
T_x &= \Lambda^{x-(n+2)} \Big[T_{n+2}+\{x-(n+2)\}{(T_{n+2}-\Lambda T_{n+1})}\Big],&
\end{align*}
where\begin{align*}
\Lambda=\dfrac{\lambda^2+\Delta}{2a\lambda}.
\end{align*}
\end{description}
\item[(2)]
For $x\leq -(m+1)$, we have
\begin{description}
\item[(i)] $\lambda^2 \neq ad+bc\pm2\sqrt{abcd}$ case
\begin{align*}
S_{x}&=\dfrac{1}{\Gamma_{+}-\Gamma_{-}}\bigg\{
\Gamma^{-x-(m+1)}_{+} ( S_{-(m+2)}-\Gamma_{-} S_{-(m+1)} )\\
&\ \ \ \ \ \ \ \ \ \ \ \ \ \hspace{22mm}-
\Gamma^{-x-(m+1)}_{-} ( S_{-(m+2)}-\Gamma_{+} S_{-(m+1)} )\bigg\},
\\\\
T_{x}&=\dfrac{1}{\Gamma_{+}-\Gamma_{-}}
\bigg\{ \Gamma^{-x-m}_{+} ( T_{-(m+1)}-\Gamma_{-} T_{-m} )\\
&\hspace{50mm}-
\Gamma^{-x-m}_{-} ( T_{-(m+1)}-\Gamma_{+} T_{-m} )\bigg\},
\end{align*}
where
\begin{align*}
\Gamma_{\pm}=\dfrac{\lambda^2+\Delta\pm\sqrt{(\lambda^2+\Delta)^2-4\lambda^{2}ad}}{2d\lambda}=\dfrac{d}{a} \Lambda_{\pm}.
\end{align*}
\item[(ii)] $\lambda^2 = ad+bc\pm2\sqrt{abcd}$ case
\begin{align*}
S_x &=\Gamma^{-x-(m+2)} \Big[ S_{-(m+2)}+\{-x-(m+2)\} (S_{-(m+2)}-\Gamma S_{-(m+1)})\Big]\vspace{2mm},&\\
T_x &= \Gamma^{-x-(m+1)} \Big[T_{-(m+1)}+\{-x-(m+1)\} (T_{-(m+1)}-\Gamma T_{-m})\Big],&
\end{align*}
where
\begin{align*}
\Gamma=\dfrac{\lambda^2+\Delta}{2d\lambda}.
\end{align*}
\end{description}
\end{description}
\end{pro}
From this proposition we can obtain the following result for the space-homogeneous case.
\begin{cor}\label{cor:3.4}
Let $\Psi(x)={}^T[\Psi^L (x),\Psi^R (x)]$ be the amplitude.
Put
\begin{align*}
U_x=U=
\begin{bmatrix}
a&b\\
c&d
\end{bmatrix}\ \ (x\in\mathbb{Z}),
\end{align*}
with $abcd\neq0$. Then a solution of the following eigenvalue problem:
\begin{align*}
U^{(s)}\Psi=\lambda \Psi
\end{align*}
is given by
\begin{description}
\item[(i)] $\lambda^2 \neq ad+bc\pm2\sqrt{abcd}$ case
\begin{align*}
&\!\!\!\!\!\begin{bmatrix}
\Psi^{L}(x)\\
\Psi^{R}(x)
\end{bmatrix}\\
&=
\begin{cases}
\dfrac{1}{\Lambda_{+} -\Lambda_{-}}
\begin{bmatrix}
\Lambda^{x}_{+} (\Psi^{L}(1)-\Lambda_{-}\alpha)-
\Lambda^{x}_{-}(\Psi^{L}(1)-\Lambda_{+}\alpha)\vspace{3mm}\\
\Lambda^{x}_{+} (\Psi^{R}(1)-\Lambda_{-}\beta)-
\Lambda^{x}_{-}(\Psi^{R}(1)-\Lambda_{+}\beta)
\end{bmatrix}&(x\geq1),\\\\
\dfrac{1}{\Gamma_{+} -\Gamma_{-}}
\begin{bmatrix}
\Gamma^{-x}_{+} (\Psi^{L}(-1)-\Gamma_{-}\alpha)-
\Gamma^{-x}_{-}(\Psi^{L}(-1)-\Gamma_{+}\alpha)\vspace{3mm}\\
\Gamma^{-x}_{+} (\Psi^{R}(-1)-\Gamma_{-}\beta)-
\Gamma^{-x}_{-}(\Psi^{R}(-1)-\Gamma_{+}\beta)
\end{bmatrix}&(x\leq-1),
\end{cases}
\end{align*}\\
\item[(ii)]$\lambda^2 = ad+bc\pm2\sqrt{abcd}$ case
\begin{align*}
&\!\!\!\!\!\begin{bmatrix}
\Psi^{L}(x)\\
\Psi^{R}(x)
\end{bmatrix}\\
&=
\begin{cases}
\bigg(\dfrac{\lambda^2+\Delta}{2a\lambda}\bigg)^x \dfrac{1}{\lambda^2+\Delta}
\begin{bmatrix}
\alpha(1+x)\lambda^2-(\alpha\nabla+2bd\beta)x+\alpha\Delta\vspace{3mm}\\
\beta(1-x)\lambda^2+(\beta\nabla+2ac\alpha)x+\beta\Delta
\end{bmatrix}&(x\geq1),\\\\
\bigg(\dfrac{\lambda^2+\Delta}{2d\lambda}\bigg)^{-x} \dfrac{1}{\lambda^2+\Delta}
\begin{bmatrix}
\alpha(1+x)\lambda^2-(\alpha\nabla+2bd\beta)x+\alpha\Delta\vspace{3mm}\\
\beta(1-x)\lambda^2+(\beta\nabla+2ac\alpha)x+\beta\Delta
\end{bmatrix}&(x\leq-1),
\end{cases}
\end{align*}
\end{description}
where
\begin{align*}
&\Delta=ad-bc,\
\nabla=ad+bc,&\\
&\Lambda_{\pm}=\dfrac{\lambda^2+\Delta\pm\sqrt{(\lambda^2+\Delta)^2-4\lambda^{2}ad}}{2a\lambda},\
\Gamma_{\pm}=\dfrac{\lambda^2+\Delta\pm\sqrt{(\lambda^2+\Delta)^2-4\lambda^{2}ad}}{2d\lambda},&\\
&\Psi^{L}(1)=\dfrac{\alpha \lambda^2-b(d\beta+c\alpha)}{a\lambda},\
\Psi^{R}(1)=\dfrac{d\beta+c\alpha}{\lambda},&\\
&\Psi^{L}(-1)=\dfrac{b\beta+a\alpha}{\lambda},\
\Psi^{R}(-1)=\dfrac{\beta \lambda^2-c(b\beta+a\alpha)}{d\lambda}.&
\end{align*}
Furthermore, a stationary measure $\mu$ is given by
\begin{align*}
\mu (x)=\phi(\Psi)(x)=||\Psi(x)||^2\ \ \ (x\in\mathbb{Z}).
\end{align*}
\end{cor}
\noindent A stationary measure $\mu\in\mathcal{M}(U^{(s)})$ in case (i) of Corollary \ref{cor:3.4} becomes an exponential-type measure, i.e., $\mu\in\mathcal{M}_s(U^{(s)})\cap\mathcal{M}_{et}(U^{(s)})$. On the other hand, a stationary measure $\mu\in\mathcal{M}(U^{(s)})$ in case (ii) of Corollary \ref{cor:3.4} becomes a quadratic-polynomial-type measure, i.e., $\mu\in\mathcal{M}_s(U^{(s)})\cap\mathcal{M}_{qpt}(U^{(s)})$, which was given in Konno and Takei \cite{kt2015}.
\section{Proofs}\label{proof}
In this section, we prove Theorem \ref{thm:3.1} and Proposition \ref{pro:3.3}.
\subsection{Proof of Theorem \ref{thm:3.1}}
We focus on the space-inhomogeneous QW whose coin matrix is determined by
\begin{align*}
U_x=
\begin{bmatrix}
a_x&b_x\\
c_x&d_x
\end{bmatrix}\ \ (x\in\mathbb{Z}),
\end{align*}
where $a_x b_x c_x d_x\neq 0 $. We consider the solution of
\begin{align*}
U^{(s)}\Psi=\lambda\Psi.
\end{align*}
Then $\Psi(x)={}^T[\Psi^L (x),\Psi^R (x)]$ satisfies
\begin{align}
\lambda
\begin{bmatrix}
\Psi^{L}(x)\\
\Psi^{R}(x)
\end{bmatrix}=
\begin{bmatrix}
a_{x+1}\Psi^{L}(x+1)+b_{x+1}\Psi^{R}(x+1)\\
c_{x-1}\Psi^{L}(x-1)+d_{x-1}\Psi^{R}(x-1)
\end{bmatrix}\ \ (x\in\mathbb{Z})\label{nagi1}.
\end{align}
From now on, we will obtain $D^+$ and $D^-$. Eq.(\ref{nagi1}) gives
\begin{align}
\Psi^L(x-1)=\dfrac{a_{x}}{\lambda}\Psi^L(x)+\dfrac{b_x}{\lambda}\Psi^R(x).\label{nagi2}
\end{align}
Then Eq.(\ref{nagi2}) can be rewritten as
\begin{align*}
\Psi^L(x-1)=\dfrac{a_x}{\lambda}\Psi^L(x)+\dfrac{b_x}{\lambda}
\bigg\{ \dfrac{c_{x-1}}{\lambda}\Psi^L(x-1)+\dfrac{d_{x-1}}{\lambda}\Psi^R(x-1) \bigg\}.
\end{align*}
Hence, we get
\begin{align*}
\Psi^L(x)=\dfrac{1}{a_x}\left( \lambda-\dfrac{b_x c_{x-1}}{\lambda}\right) \Psi^L(x-1)
-\dfrac{b_x d_{x-1}}{a_x \lambda}\Psi^R(x-1).
\end{align*}
From now on, we put
\begin{align*}
D^+_x=
\begin{bmatrix}
\dfrac{\lambda^2-{b_x c_{x-1}}}{a_x \lambda}&-\dfrac{b_x d_{x-1}}{a_x \lambda}\\
\dfrac{c_{x-1}}{\lambda}&\dfrac{d_{x-1}}{\lambda}
\end{bmatrix}.
\end{align*}
Therefore Eq.(\ref{nagi1}) becomes
\begin{align*}
\begin{bmatrix}
\Psi^L(x)\\
\Psi^R(x)
\end{bmatrix}&=
\begin{bmatrix}
\dfrac{1}{a_x}\left(\lambda-\dfrac{b_x c_{x-1}}{\lambda}\right)\Psi^L(x-1)
-\dfrac{b_x d_{x-1}}{a_x \lambda}\Psi^R(x-1)\\
\dfrac{c_{x-1}}{\lambda}\Psi^{L}(x-1)+\dfrac{d_{x-1}}{\lambda}\Psi^{R}(x-1)
\end{bmatrix}&
\\&=\begin{bmatrix}
\dfrac{\lambda^2-{b_x c_{x-1}}}{a_x \lambda}&-\dfrac{b_x d_{x-1}}{a_x \lambda}\\
\dfrac{c_{x-1}}{\lambda}&\dfrac{d_{x-1}}{\lambda}
\end{bmatrix}
\begin{bmatrix}
\Psi^L(x-1)\\
\Psi^R(x-1)
\end{bmatrix}&\\
&=D^+_x
\begin{bmatrix}
\Psi^L(x-1)\\
\Psi^R(x-1)
\end{bmatrix}.&
\end{align*}
Thus we get
\begin{align}
\Psi(x)=D^+_x\Psi(x-1)\ \ \ (x\in\mathbb{Z})\label{nagi3}.
\end{align}
From Eq.(\ref{nagi3}), we obtain
\begin{align*}
\Psi(x)=\displaystyle \prod_{y=1}^{x}D^+_y \Psi(0)\ \ \ (x\geq1).
\end{align*}
We put $(D^+_x)^{-1}=D^-_{x-1}$. We should remark that $D^{+}_{x}$ is invertible, since $\det D^{+}_{x}=d_{x-1}/a_x\neq 0\ (x\in\mathbb{Z})$. Then Eq.(\ref{nagi3}) gives
\begin{align}
\Psi(x)=D^-_{x} \Psi(x+1)\ \ \ (x\in\mathbb{Z}).\label{nagi4}
\end{align}
By Eq.(\ref{nagi4}), we have
\begin{align*}
\Psi(x)=\displaystyle \prod_{y=-1}^{x}D^-_y \Psi(0)\ \ \ (x\leq-1).
\end{align*}
\subsection{Proof of Proposition \ref{pro:3.3}}
From now on, we focus on the space-inhomogeneous QW whose coin matrix is determined by
\begin{align*}
U_x=
\begin{cases}
\begin{bmatrix}
a_x&b_x\\
c_x&d_x
\end{bmatrix}&(x\in\mathbb{Z}_{[-m,n]}),\vspace{3mm}\\
\begin{bmatrix}
a&b\\
c&d
\end{bmatrix}&(x\notin\mathbb{Z}_{[-m,n]}),
\end{cases}
\end{align*}
where $a_x b_x c_x d_x\neq 0$ and $abcd\neq0$. We consider the solution of
\begin{align*}
U^{(s)}\Psi=\lambda\Psi.
\end{align*}
Then $\Psi(x)={}^T[\Psi^L (x),\Psi^R (x)]$ satisfies
\begin{align*}
\lambda
\begin{bmatrix}
\Psi^L (x)\\
\Psi^R (x)
\end{bmatrix}=
\begin{bmatrix}
0&0\\
c_{x-1}&d_{x-1}
\end{bmatrix}
\begin{bmatrix}
\Psi^L (x-1)\\
\Psi^R (x-1)
\end{bmatrix}+
\begin{bmatrix}
a_{x+1}&b_{x+1}\\
0&0
\end{bmatrix}
\begin{bmatrix}
\Psi^L (x+1)\\
\Psi^R (x+1)
\end{bmatrix}\ \ \ (x\in\mathbb{Z}).
\end{align*}
First we consider the region $x\geq0$. In particular, we have
\begin{align*}
\lambda
\begin{bmatrix}
\Psi^L (x)\\
\Psi^R (x)
\end{bmatrix}=
\begin{bmatrix}
0&0\\
c&d
\end{bmatrix}
\begin{bmatrix}
\Psi^L (x-1)\\
\Psi^R (x-1)
\end{bmatrix}+
\begin{bmatrix}
a&b\\
0&0
\end{bmatrix}
\begin{bmatrix}
\Psi^L (x+1)\\
\Psi^R (x+1)
\end{bmatrix}\hspace{5mm}(x\geq n+2).
\end{align*}
From now on, we put $S_x =\Psi^L (x)$ and $T_x=\Psi^R (x)$; then the above equation becomes
\begin{eqnarray}
&\lambda S_x=a S_{x+1}+ b T_{x+1}&(x\geq n)\label{ukiyoe1},\\
&\lambda T_x=c S_{x-1}+d T_{x-1}&(x\geq n+2)\label{ukiyoe2}.
\end{eqnarray}
Then Eq.(\ref{ukiyoe1}) and Eq.(\ref{ukiyoe2}) can be rewritten as
\begin{eqnarray}
&\lambda S_{x-1}=a S_x+b T_x &(x\geq n+1)\label{ukiyoe3},\\
&T_x=\dfrac{c}{\lambda}S_{x-1} +\dfrac{d}{\lambda}T_{x-1}&(x\geq n+2)\label{ukiyoe4}.
\end{eqnarray}
From Eq.(\ref{ukiyoe3}), we have
\begin{eqnarray}
&T_x=\dfrac{\lambda}{b} S_{x-1}-\dfrac{a}{b}S_x &(x\geq n+1)\label{ukiyoe5},\\
&T_{x-1}=\dfrac{\lambda}{b}S_{x-2}-\dfrac{a}{b}S_{x-1} &(x\geq n+2)\label{ukiyoe6}.
\end{eqnarray}
By substituting Eqs. (\ref{ukiyoe5}) and (\ref{ukiyoe6}) into Eq. (\ref{ukiyoe4}), we see that $S_x$ satisfies
\begin{eqnarray}
\lambda \dfrac{a}{b} S_x+\left( c-\dfrac{ad}{b}-\dfrac{\lambda^2}{b} \right) S_{x-1} +\lambda\dfrac{ d}{b}S_{x-2} =0\hspace{5mm}(x\geq n+2)\label{ukiyoe7}.
\end{eqnarray}
If the characteristic equation of Eq.(\ref{ukiyoe7}) has two distinct roots, $\Lambda_+$ and $\Lambda_-$, then we obtain
\begin{eqnarray}
\begin{cases}
S_x-\Lambda_+ S_{x-1} =\Lambda_- (S_{x-1} - \Lambda_+ S_{x-2})\\
S_x-\Lambda_- S_{x-1} =\Lambda_+ (S_{x-1} - \Lambda_- S_{x-2})
\end{cases}\hspace{5mm}(x\geq n+2)\label{ukiyoe8},
\end{eqnarray}
where
\begin{align*}
\Lambda_{\pm}=\dfrac{\lambda^2+\Delta\pm\sqrt{(\lambda^2+\Delta)^2-4\lambda^{2}ad}}{2a\lambda},
\end{align*}
with
\begin{align*}
\Delta=ad-bc.
\end{align*}
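In fact, $\Lambda_{\pm}$ are the two roots of the characteristic equation of Eq.(\ref{ukiyoe7}), which, after multiplying through by $b$, reads
\begin{align*}
a\lambda \Lambda^{2}-(\lambda^{2}+\Delta)\Lambda+d\lambda=0.
\end{align*}
Its discriminant is $(\lambda^{2}+\Delta)^{2}-4\lambda^{2}ad=\lambda^{4}-2(ad+bc)\lambda^{2}+\Delta^{2}$, which vanishes if and only if $\lambda^{2}=ad+bc\pm2\sqrt{abcd}$; this is the origin of the case distinction between (i) and (ii).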
Then Eq.(\ref{ukiyoe8}) can be rewritten as
\begin{eqnarray*}
\begin{cases}
S_{x+1}-\Lambda_+ S_x =\Lambda^{x-n}_- (S_{n+1}-\Lambda_+ S_n)\\
S_{x+1}-\Lambda_- S_x =\Lambda^{x-n}_+ (S_{n+1}-\Lambda_- S_n)
\end{cases}\hspace{5mm}(x\geq n+1),
\end{eqnarray*}
hence
\begin{eqnarray*}
S_x&=\dfrac{1}{\Lambda_{+} -\Lambda_{-}}\Big\{
\Lambda^{x-n}_{+} ( S_{n+1}-\Lambda_{-} S_{n} ) -
\Lambda^{x-n}_{-} ( S_{n+1}-\Lambda_{+} S_{n} )\Big\}\hspace{5mm}(x\geq n).&
\end{eqnarray*}
Next, if the characteristic equation of Eq.(\ref{ukiyoe7}) has a multiple root, $\Lambda$, then we have
\begin{eqnarray}
S_x -\Lambda S_{x-1}=\Lambda (S_{x-1}-\Lambda S_{x-2})\hspace{5mm}(x\geq n+2)\label{DRI1},
\end{eqnarray}
where
\begin{align*}
\Lambda=\dfrac{\lambda^2+\Delta}{2a\lambda}.
\end{align*}
Then Eq.(\ref{DRI1}) implies
\begin{eqnarray}
S_x -\Lambda S_{x-1} =\Lambda^{x-(n+1)} (S_{n+1}-\Lambda S_n)\hspace{5mm}(x\geq n+2)\label{DRI2}.
\end{eqnarray}
From Eq.(\ref{DRI2}), we get
\begin{eqnarray}
S_x = \Lambda^{x-(n+1)} \Big[S_{n+1}+\{x-(n+1)\}{(S_{n+1}-\Lambda S_n)}\Big]\hspace{5mm}(x\geq n).
\end{eqnarray}
Therefore
\begin{description}
\item[(i)] $\lambda^2 \neq ad+bc\pm2\sqrt{abcd}$ case
\begin{align*}
S_x&=\dfrac{1}{\Lambda_{+} -\Lambda_{-}}\Big\{
\Lambda^{x-n}_{+} ( S_{n+1}-\Lambda_{-} S_{n} ) -
\Lambda^{x-n}_{-} ( S_{n+1}-\Lambda_{+} S_{n} )\Big\}\hspace{5mm}(x\geq n)\label{ukiyoe9},&
\end{align*}
where
\begin{align*}
\Lambda_{\pm}=\dfrac{\lambda^2+\Delta\pm\sqrt{(\lambda^2+\Delta)^2-4\lambda^{2}ad}}{2a\lambda}.
\end{align*}
\item[(ii)] $\lambda^2 = ad+bc\pm2\sqrt{abcd}$ case
\begin{align*}
S_x &= \Lambda^{x-(n+1)} \Big[S_{n+1}+\{x-(n+1)\}{(S_{n+1}-\Lambda S_n)}\Big],\vspace{2mm}&
\end{align*}
where\begin{align*}
\Lambda=\dfrac{\lambda^2+\Delta}{2a\lambda}.
\end{align*}
\end{description}
In a similar fashion, we obtain
\begin{eqnarray*}
\lambda \dfrac{a}{b} T_{x+1}+\left(c-\dfrac{ad}{b}-\dfrac{\lambda^2}{b}\right)T_{x} +\lambda\dfrac{ d}{b}T_{x-1} =0\hspace{5mm}(x\geq n+2).
\end{eqnarray*}
Then we have
\begin{description}
\item[(i)] $\lambda^2 \neq ad+bc\pm2\sqrt{abcd}$ case
\begin{align*}
T_{x}&=\dfrac{1}{\Lambda_{+} -\Lambda_{-}}\Big\{
\Lambda^{x-(n+1)}_{+} ( T_{n+2}-\Lambda_{-} T_{n+1} ) -\\
&\hspace{50mm}
\Lambda^{x-(n+1)}_{-} ( T_{n+2}-\Lambda_{+} T_{n+1} )\Big\}\hspace{5mm}(x\geq n+1),&
\end{align*}
where
\begin{align*}
\Lambda_{\pm}=\dfrac{\lambda^2+\Delta\pm\sqrt{(\lambda^2+\Delta)^2-4\lambda^{2}ad}}{2a\lambda}.
\end{align*}
\item[(ii)] $\lambda^2 = ad+bc\pm2\sqrt{abcd}$ case
\begin{align*}
T_x &= \Lambda^{x-(n+2)} \Big[T_{n+2}+\{x-(n+2)\}{(T_{n+2}-\Lambda T_{n+1})}\Big]\hspace{5mm}(x\geq n+1),&
\end{align*}
where\begin{align*}
\Lambda=\dfrac{\lambda^2+\Delta}{2a\lambda}.
\end{align*}
\end{description}
\section{Examples}\label{exam}
In this section, we give two examples. The first model is a QW with two defects. The second model is the Hadamard walk with three defects, which is a generalization of the model proposed by W\'ojcik et al. \cite{WojcikEtAl2004}.
\subsection{QW with two defects}
From now on, we consider the space-inhomogeneous QW whose quantum coin is determined by
\begin{align*}
U_x=
\begin{cases}
\begin{bmatrix}
\cos{\theta}&\sin{\theta}\\
\sin{\theta}&-\cos{\theta}
\end{bmatrix}&(x=-m,m),\\\\
\begin{bmatrix}
1&0\\
0&-1
\end{bmatrix}&(x\neq -m,m),
\end{cases}
\end{align*}
for $m\in\mathbb{Z}_>$ and $\theta\neq\frac{\pi}{2}$. From Theorem \ref{thm:3.1} and Proposition \ref{pro:3.2}, we have
\begin{align*}
D^+ =
\begin{bmatrix}
\lambda&0\\
0&-\frac{1}{\lambda}
\end{bmatrix},\
D^{+}_{m}=
\begin{bmatrix}
\frac{\lambda}{\cos{\theta}}&\frac{\sin{\theta}}{\lambda \cos{\theta}}\\
0&-\frac{1}{\lambda}
\end{bmatrix},\
D^{+}_{m+1}=
\begin{bmatrix}
\lambda&0\\
\frac{\sin{\theta}}{\lambda}&-\frac{\cos{\theta}}{\lambda}
\end{bmatrix},\\\\
D^{-}_{-m}=
\begin{bmatrix}
\frac{1}{\lambda}&0\\
\frac{\sin{\theta}}{\lambda \cos{\theta}}&-\frac{\lambda}{\cos{\theta}}
\end{bmatrix},\
D^{-}_{-(m+1)}=
\begin{bmatrix}
\frac{\cos{\theta}}{\lambda}&\frac{\sin{\theta}}{\lambda}\\
0&-\lambda
\end{bmatrix},\
D^- =
\begin{bmatrix}
\frac{1}{\lambda}&0\\
0&-\lambda
\end{bmatrix}.
\end{align*}
By Theorem \ref{thm:3.1}, the amplitude becomes
\begin{eqnarray*}
\Psi(x)=
\begin{cases}
\displaystyle \prod_{y=1}^{x} D^{+}_{y} \Psi(0)&(x\geq 1),\\
\Psi(0)&(x=0),\\
\displaystyle \prod_{y=-1}^{x} D^{-}_{y} \Psi(0)&(x\leq -1),
\end{cases}
\end{eqnarray*}
i.e.,
\begin{align*}
\begin{bmatrix}
\Psi^L (x)\\
\Psi^R (x)
\end{bmatrix}=
\begin{cases}
\begin{bmatrix}
\lambda^x \alpha\\
(-\frac{1}{\lambda})^x \beta
\end{bmatrix}&(0\leq x \leq m-1),\\\\
\begin{bmatrix}
\frac{1}{\cos{\theta}} \left\{ \lambda^m \alpha-\sin\theta(-\frac{1}{\lambda})^m\beta \right\}\\
(-\frac{1}{\lambda})^m\beta
\end{bmatrix}&(x=m),\\\\
\begin{bmatrix}
\frac{\lambda}{\cos\theta} \left\{ \lambda^m \alpha - \sin\theta (-\frac{1}{\lambda})^m \beta \right\}\\
\frac{1}{\lambda\cos\theta} \left\{ \sin\theta \lambda^m \alpha - (-\frac{1}{\lambda})^m \beta \right\}
\end{bmatrix}&(x=m+1),\\\\
\begin{bmatrix}
\lambda^{x-(m+1)} \Psi^L (m+1)\\
(-\frac{1}{\lambda})^{x-(m+1)} \Psi^R (m+1)
\end{bmatrix}&(x\geq m+2),
\end{cases}
\end{align*}
\begin{align*}
\begin{bmatrix}
\Psi^L (x)\\
\Psi^R (x)
\end{bmatrix}=
\begin{cases}
\begin{bmatrix}
(\frac{1}{\lambda})^{-x}\alpha\\
(-\lambda)^{-x}\beta
\end{bmatrix}&(-m+1\leq x \leq 0),\\\\
\begin{bmatrix}
\frac{1}{\lambda^m} \alpha\\
\frac{1}{\cos\theta} \left\{ \sin\theta \frac{1}{\lambda^m}\alpha + (-\lambda)^m\beta \right\}
\end{bmatrix}&(x=-m),\\\\
\begin{bmatrix}
\frac{1}{\lambda\cos\theta} \left\{ \frac{1}{\lambda^m}\alpha+\sin\theta (-\lambda)^m \beta \right\}\\
-\frac{\lambda}{\cos\theta} \left\{ \sin\theta \frac{1}{\lambda^m}\alpha +(-\lambda)^m \beta \right\}
\end{bmatrix}&(x=-m-1),\\\\
\begin{bmatrix}
(\frac{1}{\lambda})^{-x-(m+1)}\Psi^L (-(m+1))\\
(-\lambda)^{-x-(m+1)}\Psi^R (-(m+1))
\end{bmatrix}&(x\leq-m-2).
\end{cases}
\end{align*}
Furthermore, a stationary measure $\mu$ is given by
\begin{align*}
\mu(x)=\phi(\Psi)(x)=||\Psi(x)||^2\ \ \ (x\in\mathbb{Z}).
\end{align*}
For example, if $\alpha=1/\sqrt{2}, \beta=i/\sqrt{2}$, $\theta=\pi/4$ and $\lambda=1$, we get the stationary measure $\mu$ given by
\begin{align*}
\mu(x)=
\begin{cases}
1&(x\in\mathbb{Z}_{[-(m-1),m-1]}),\\
2&(x=\pm m),\\
3&(x\notin\mathbb{Z}_{[-m,m]}).
\end{cases}
\end{align*}
This stationary measure is not a uniform measure.
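The values $1$, $2$ and $3$ can be reproduced numerically from the transfer matrices of Theorem \ref{thm:3.1}. The following sketch takes $m=2$ and the eigenvalue $\lambda=1$ (a concrete choice for which the stated values hold):

```python
import numpy as np

m, theta, lam = 2, np.pi / 4, 1.0
alpha, beta = 1 / np.sqrt(2), 1j / np.sqrt(2)

def coin(x):
    if abs(x) == m:                        # the two defect sites x = +-m
        return np.array([[np.cos(theta), np.sin(theta)],
                         [np.sin(theta), -np.cos(theta)]])
    return np.array([[1.0, 0.0], [0.0, -1.0]])

def Dp(x):   # D^+_x from Theorem 3.1
    (a, b), (c, d) = coin(x)[0], coin(x - 1)[1]
    return np.array([[(lam**2 - b * c) / (lam * a), -b * d / (lam * a)],
                     [c / lam, d / lam]])

def Dm(x):   # D^-_x from Theorem 3.1
    (a, b), (c, d) = coin(x + 1)[0], coin(x)[1]
    return np.array([[a / lam, b / lam],
                     [-a * c / (lam * d), (lam**2 - b * c) / (lam * d)]])

N = 6
Psi = {0: np.array([alpha, beta])}
for x in range(1, N + 1):
    Psi[x] = Dp(x) @ Psi[x - 1]
for x in range(-1, -N - 1, -1):
    Psi[x] = Dm(x) @ Psi[x + 1]

mu = {x: float(np.linalg.norm(Psi[x]) ** 2) for x in Psi}
assert all(np.isclose(mu[x], 1) for x in range(-m + 1, m))      # inside
assert np.isclose(mu[m], 2) and np.isclose(mu[-m], 2)           # defects
assert all(np.isclose(mu[x], 3) for x in range(m + 1, N + 1))   # outside
```

Note that the identity-type coin used away from the defects has $b_x=c_x=0$; the transfer matrices remain well defined because only $a_x$ and $d_x$ appear in denominators.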
\subsection{Hadamard walk with three defects}
Next, we consider the space-inhomogeneous QW whose quantum coin is determined by
\begin{align*}
U_x=
\begin{cases}
\omega H&(x\in\mathbb{Z}_{[-1,1]}),\vspace{3mm}\\
H&(x\notin\mathbb{Z}_{[-1,1]}),
\end{cases}
\end{align*}
with $\omega=e^{2i\pi \phi}\ (\phi\in[0,1))$, where
\begin{align*}
H=\frac{1}{\sqrt{2}}
\begin{bmatrix}
1&1\\
1&-1
\end{bmatrix}.
\end{align*}
In particular, if $\phi=0$ (i.e., $\omega=1$), then this space-homogeneous QW is called the Hadamard walk.
Endo and Konno \cite{ek2014} investigated the stationary measures of the Hadamard walk with one defect introduced by W\'ojcik et al. \cite{WojcikEtAl2004} via a different approach, i.e., the splitted generating function method.\\
From Theorem \ref{thm:3.1} and Proposition \ref{pro:3.2}, we have
\begin{align*}
D^+=
\begin{bmatrix}
\frac{2\lambda^2-1}{\sqrt{2}\lambda}&\frac{1}{\sqrt{2}\lambda}\\
\frac{1}{\sqrt{2}\lambda}&-\frac{1}{\sqrt{2}\lambda}
\end{bmatrix},\
D^{+}_2 =
\begin{bmatrix}
\frac{2\lambda^2-\omega}{\sqrt{2}\lambda}&\frac{\omega}{\sqrt{2}\lambda}\\
\frac{\omega}{\sqrt{2}\lambda}&-\frac{\omega}{\sqrt{2}\lambda}
\end{bmatrix},\
D^{+}_1 =
\begin{bmatrix}
\frac{2\lambda^2-\omega^2}{\sqrt{2}\lambda}&\frac{\omega}{\sqrt{2}\lambda}\\
\frac{\omega}{\sqrt{2}\lambda}&-\frac{\omega}{\sqrt{2}\lambda}
\end{bmatrix},\\
D^{-}_{-1}=
\begin{bmatrix}
\frac{\omega}{\sqrt{2}\lambda}&\frac{\omega}{\sqrt{2}\lambda}\\
\frac{\omega}{\sqrt{2}\lambda}&-\frac{2\lambda^2-\omega^2}{\sqrt{2}\omega\lambda}
\end{bmatrix},\
D^{-}_{-2}=
\begin{bmatrix}
\frac{\omega}{\sqrt{2}\lambda}&\frac{\omega}{\sqrt{2}\lambda}\\
\frac{\omega}{\sqrt{2}\lambda}&-\frac{2\lambda^2-\omega}{\sqrt{2}\omega\lambda}
\end{bmatrix},\
D^{-}=
\begin{bmatrix}
\frac{1}{\sqrt{2}\lambda}&\frac{1}{\sqrt{2}\lambda}\\
\frac{1}{\sqrt{2}\lambda}&-\frac{2\lambda^2-1}{\sqrt{2}\lambda}
\end{bmatrix}.
\end{align*}
Furthermore, the amplitude is given by
\begin{align*}
\Psi(x)=
\begin{cases}
\bigg\{ (D^{+}) ^{x-2}\bigg\}\displaystyle \prod_{y=1}^{2}D^{+}_{y} \Psi(0)&(3\leq x),\\
\displaystyle \prod_{y=1}^{x}D^{+}_{y} \Psi(0)&(1\leq x \leq 2),\\
\Psi(0)&(x=0),\\
\displaystyle \prod_{y=-1}^{x}D^{-}_{y} \Psi(0)&(-2 \leq x \leq -1),\\
\bigg\{ (D^{-})^{-x-2} \bigg\}\displaystyle \prod_{y=-1}^{-2}D^{-}_{y} \Psi(0)&(x\leq -3).
\end{cases}
\end{align*}
Next, by Proposition \ref{pro:3.3}, we consider the amplitude for $3\leq x$ and $x\leq-3$. Since $a=b=c=1/\sqrt{2}$ and $d=-1/\sqrt{2}$, we have $ad+bc=0$ and $abcd=-1/4$, so the characteristic equation of Eq.(\ref{ukiyoe7}) has a multiple root if and only if $\lambda^2=\pm i$, that is,
$
\lambda=
e^{\pm \frac{\pi i}{4}}, -e^{\pm \frac{\pi i}{4}}. \label{tokyo1}
$
Therefore the amplitude is given by
\begin{description}
\item[(i)]$\lambda \neq
e^{\pm \frac{\pi i}{4}}, -e^{\pm \frac{\pi i}{4}}$ (the case of distinct roots)\\
\begin{align*}
&\begin{bmatrix}
\Psi^{L}(x)\\
\Psi^{R}(x)
\end{bmatrix}\\
&=
\begin{cases}
A
\begin{bmatrix}
\Lambda^{x-1}_{+} (\Psi^{L}(2)-\Lambda_{-}\Psi^{L}(1))-
\Lambda^{x-1}_{-}(\Psi^{L}(2)-\Lambda_{+}\Psi^{L}(1))\vspace{3mm}\\
\Lambda^{x-2}_{+} (\Psi^{R}(3)-\Lambda_{-}\Psi^{R}(2))-
\Lambda^{x-2}_{-}(\Psi^{R}(3)-\Lambda_{+}\Psi^{R}(2))
\end{bmatrix}\ \ (x\geq3),\\\\
A
\begin{bmatrix}
\Gamma^{-x-2}_{+} (\Psi^{L}(-3)-\Gamma_{-}\Psi^{L}(-2))-
\Gamma^{-x-2}_{-}(\Psi^{L}(-3)-\Gamma_{+}\Psi^{L}(-2))\vspace{3mm}\\
\Gamma^{-x-1}_{+} (\Psi^{R}(-2)-\Gamma_{-}\Psi^{R}(-1))-
\Gamma^{-x-1}_{-}(\Psi^{R}(-2)-\Gamma_{+}\Psi^{R}(-1))
\end{bmatrix}\\\hspace{90mm}(x\leq-3),
\end{cases}
\end{align*}\\
\item[(ii)]$\lambda=
e^{\pm \frac{\pi i}{4}}, -e^{\pm \frac{\pi i}{4}}$ (the case of a multiple root).
\begin{align*}
&\begin{bmatrix}
\Psi^{L}(x)\\
\Psi^{R}(x)
\end{bmatrix}\\
&=
\begin{cases}
\begin{bmatrix}
B^{x-2}\left\{ \Psi^L (2)+(x-2)(\Psi^L (2)-B\Psi^L (1)) \right\}\vspace{3mm}\\
B^{x-3}\left\{ \Psi^R (3)+(x-3)(\Psi^R (3)-B\Psi^R (2)) \right\}
\end{bmatrix}&(x\geq3),\\\\
\begin{bmatrix}
(-B)^{-x-3}\left\{ \Psi^L (-3)+(-x-3)(\Psi^L (-3)+B\Psi^L (-2)) \right\}\vspace{3mm}\\
(-B)^{-x-2}\left\{ \Psi^R (-2)+(-x-2)(\Psi^R (-2)+B\Psi^R (-1)) \right\}
\end{bmatrix}&(x\leq-3),
\end{cases}
\end{align*}
\end{description}
where
\begin{align*}
\Lambda_{\pm}=&\frac{\lambda^2-1\pm\sqrt{\lambda^4+1}}{\sqrt{2}\lambda},\ \Gamma_{\pm}=-\Lambda_{\mp},\ A=\dfrac{\lambda}{\sqrt{2(\lambda^4+1)}},\
B=\frac{\lambda^2-1}{\sqrt{2}\lambda},\\
\begin{bmatrix}
\Psi^{L}(1)\\
\Psi^{R}(1)
\end{bmatrix}=&
\begin{bmatrix}
\frac{2 \alpha {{\lambda}^{2}}+\beta {{\omega}^{2}}-\alpha {{\omega}^{2}}}{\sqrt{2} \omega \lambda}\\\\
-\frac{\left( \beta-\alpha\right) \omega}{\sqrt{2} \lambda}
\end{bmatrix},\
\begin{bmatrix}
\Psi^L (2)\\
\Psi^R (2)
\end{bmatrix}=
\begin{bmatrix}
\frac{2\alpha \lambda^4-\left\{ (\alpha-\beta)\omega^2+\alpha\omega \right\}\lambda^2+(\alpha-\beta)\omega^3}{\omega\lambda^2}\\\\
\frac{\alpha\lambda^2-(\alpha-\beta)\omega^2}{\lambda^2}
\end{bmatrix},\\
\begin{bmatrix}
\Psi^L (-1)\\
\Psi^R (-1)
\end{bmatrix}=&
\begin{bmatrix}
\frac{(\alpha+\beta)\omega}{\sqrt{2}\lambda}\\\\
-\frac{2\beta\lambda^2-\beta\omega^2-\alpha\omega^2}{\sqrt{2}\omega\lambda}
\end{bmatrix},\
\begin{bmatrix}
\Psi^L (-2)\\
\Psi^R (-2)
\end{bmatrix}=
\begin{bmatrix}
-\frac{\beta\lambda^2-(\alpha+\beta)\omega^2}{\lambda^2}\\\\
\frac{2\beta\lambda^4-\left\{ (\alpha+\beta)\omega^2+\beta\omega \right\}\lambda^2+(\alpha+\beta)\omega^3}{\omega\lambda^2}
\end{bmatrix},\\
\end{align*}
and
\begin{align*}
\Psi^R (3)=&
\frac{2\alpha\lambda^4 - \left\{ (\alpha-\beta)\omega^2+2\alpha\omega \right\}\lambda^2 +2(\alpha-\beta)\omega^3 }{\sqrt{2}\omega\lambda^3},\\
\Psi^L (-3)=&
\frac{2\beta\lambda^4-\left\{ (\alpha+\beta)\omega^2+2\beta\omega \right\}\lambda^2+2(\alpha+\beta)\omega^3}{\sqrt{2}\omega\lambda^3}.
\end{align*}
Furthermore, a stationary measure $\mu$ is given by
\begin{align*}
\mu(x)=\phi(\Psi)(x)=||\Psi(x)||^2\ \ \ (x\in\mathbb{Z}).
\end{align*}
\section{Summary}\label{summ}
In this paper, we obtained stationary measures for the two-state space-inhomogeneous QWs on $\mathbb{Z}$ by solving the corresponding eigenvalue problem (Theorem 3.1). Several interesting corollaries follow from this result. For example, we obtained a stationary measure $\mu\in\mathcal{M}_s(U^{(s)})\cap\mathcal{M}_{et}(U^{(s)})$ in Corollary \ref{cor:3.4} (i). This case is a generalization of the model studied by Konno et al. \cite{KLS2013}. On the other hand, we obtained a stationary measure $\mu\in\mathcal{M}_s(U^{(s)})\cap\mathcal{M}_{qpt}(U^{(s)})$ in Corollary \ref{cor:3.4} (ii). This case is a generalization of the model considered by Konno and Takei \cite{kt2015}.
As a future work, it would be fascinating to investigate the stationary measure of the multi-state space-inhomogeneous QW on general graphs.
\begin{small}
\bibliographystyle{jplain}
\section{Introduction}\label{section_abstr}
Assume that an $n$-tuple of positive numbers $L=(l_1,...,l_n)$ is fixed. We associate with it a \textit{flexible polygon}, that is, $n$ rigid bars of lengths $l_i$ connected in a cyclic chain by revolving joints. A \textit{configuration} of $L$ is an $n$-tuple of points $(q_1,...,q_n)$ with $|q_iq_{i+1}|=l_i$ for $i=1,\dots,n-1$, and $|q_nq_1|=l_n$.
It has long been traditional (see \cite{fa}, \cite{HausmannKnu}, \cite{Kam} and many other papers and authors)
to study the following two spaces:
\begin{dfn}\label{defConfSpace} \textit{The moduli space
$M_2(L)$} is the set of all planar configurations of $L$
modulo isometries of $\mathbb{R}^2$.
\textit{The moduli space
$M_3(L)$} is the set of all configurations of $L$ lying in $\mathbb{R}^3$
modulo orientation preserving isometries of $\mathbb{R}^3$.
\end{dfn}
\begin{dfn}\label{SecondDfn}
Equivalently, one defines
$$M_2(L)=\{(u_1,...,u_n) \in (S^1)^n : \sum_{i=1}^n l_iu_i=0\}/O(2), \hbox{ and} $$
$$M_3(L)=\{(u_1,...,u_n) \in (S^2)^n : \sum_{i=1}^n l_iu_i=0\}/SO(3). $$
\end{dfn}
The second definition shows that $M_{2}(L)$ and $M_{3}(L)$ do not depend on the
ordering of $\{l_1,...,l_n\}$; however, they do depend on the values
of $l_i$.
Throughout the paper we assume that no configuration of $L$ fits in a
straight line. This assumption implies that the moduli spaces $M_{2}(L)$ and $M_{3}(L)$
are smooth closed manifolds.
In more detail, let us take all subsets $I\subset \{1,...,n\}$.
The associated hyperplanes
$$\sum_{i\in I}l_i =\sum_{i\notin I}l_i,$$
called \textit{walls}, subdivide the parameter space $\mathbb{R}_+^n$
into a number of \textit{chambers}. The topological type of $M_2(L)$ and $M_3(L)$ depends only on the chamber containing $L$;
this becomes clear in view of the representation via stable configurations given below.
For $L$ lying strictly inside a chamber, the spaces $M_{2}(L)$ and $M_{3}(L)$ are smooth manifolds.
Let us make an additional assumption: throughout the paper we assume that $\sum_{i\in I}\pm l_i$ never vanishes, for every non-empty $I\subset \{1,...,n\}$ and every choice of signs.
This assumption does not restrict generality: one may perturb the edge lengths while staying in the same chamber.
So, for instance, when we write $L=(3,2,2,1,1)$, we mean $L=(3+\varepsilon_1,2+\varepsilon_2,2+\varepsilon_3,1+\varepsilon_4,1+\varepsilon_5)$ for some generic small $\varepsilon_i$.
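The genericity assumption is easy to test mechanically. The following sketch (a hypothetical helper, not part of the paper) checks all signed subset sums; it reports that $(3,2,2,1,1)$ itself violates the assumption (e.g. $l_2-l_3=0$), while a generic perturbation satisfies it.

```python
from itertools import product

def is_generic(lengths, tol=1e-12):
    """True iff no signed sum over a non-empty subset of edge lengths vanishes."""
    n = len(lengths)
    for signs in product((-1, 0, 1), repeat=n):
        if any(signs) and abs(sum(s * l for s, l in zip(signs, lengths))) < tol:
            return False
    return True

assert not is_generic([3, 2, 2, 1, 1])       # l_2 - l_3 = 0: lies on a wall
assert is_generic([3.01, 2.002, 2.0003, 1.00004, 1.000005])  # generic perturbation
```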
The space $M_{2}(L)$ is an $(n-3)$-dimensional manifold. In most cases it is non-orientable,
so we work with the cohomology ring with coefficients in $\mathbb{Z}_2$.
The space $M_{3}(L)$ is a $(2n-6)$-dimensional complex-analytic manifold.\footnote{Moreover, Klyachko \cite{Kl} showed that it is an algebraic variety.} So we work with the cohomology ring with integer coefficients. Since $M_3(L)$ has a canonical orientation coming from the complex structure, we canonically identify $H^{2n-6}(M_3(L),\mathbb{Z})$ with $\mathbb{Z}$.
\subsection*{Stable configuration of points}
We make use of yet another representation of $M_{2}(L)$ and $M_{3}(L)$.
Following the paper \cite{KapovichMillson}, consider configurations of $n$ (not necessarily all distinct)
points
$p_i$ in the real
projective line $\mathbb{R}P^1$ (respectively, the complex projective line). Each point $p_i$ is assigned the weight
$l_i$. A
configuration of (weighted) points is called {\em
stable} if the sum of the weights of any group of coinciding points is
less than half the total weight.
Denote by $S_\mathbb{R}(L)$ (respectively, $S_\mathbb{C}(L)$) the space of stable configurations in the real projective (respectively, complex projective) line.
The group $PGL(2,\mathbb{R})$ (respectively, $PSL(2,\mathbb{C})$) acts naturally on this space.
In this setting we have:
$$M_2(L)=S_\mathbb{R}(L)/PGL(2,\mathbb{R}), \hbox{ and}$$
$$M_3(L)=S_\mathbb{C}(L)/PSL(2,\mathbb{C}).$$
Therefore we think of $M_{2}(L)$ and $M_{3}(L)$ as compactifications of the spaces of $n$-tuples of distinct points on the projective line (either complex or real). That is, for each $n$ we have a finite series of compactifications of ${\mathcal{M}}_{0,n}$ (respectively, ${\mathcal{M}}_{0,n}(\mathbb{R})$) depending on the particular choice of the lengths $L$.
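The stability condition is easy to test on an affine chart. The helper below is hypothetical (not from the paper); it treats points within a tolerance as coinciding, and the weights play the role of the lengths $l_i$.

```python
from collections import defaultdict

def is_stable(points, weights, tol=1e-9):
    """A weighted point configuration is stable iff every group of coinciding
    points carries less than half of the total weight."""
    groups = defaultdict(float)
    for p, w in zip(points, weights):
        groups[round(p / tol)] += w   # points closer than tol count as coinciding
    return max(groups.values()) < sum(weights) / 2

L = [3, 1, 1, 1, 1]
assert is_stable([0.0, 1.0, 2.0, 3.0, 4.0], L)      # all distinct: stable (3 < 7/2)
assert not is_stable([0.0, 0.0, 1.0, 2.0, 3.0], L)  # group of weight 4 >= 7/2
```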
For the Deligne-Mumford compactification $\overline{\mathcal{M}}_{0,n}$ and its real part $\overline{\mathcal{M}}_{0,n}(\mathbb{R})$
M. Kontsevich introduced the \textit{tautological line bundles} $L_i, \ i=1,...,n$. Their first Chern classes (or Euler classes
for the space $\overline{\mathcal{M}}_{0,n}(\mathbb{R})$) are called $\psi$-\textit{classes} $\psi_1, \dots ,\psi_{n}$. It is known \cite{Kon} that the top degree monomials in $\psi$-classes equal the multinomial coefficients. That is,
$$\psi_1^{d_1} \smile \dots \smile \psi_{n}^{d_n}=\binom{n-3}{d_1\ d_2\ ...\ d_n}\hbox{ \ \ \ for } \sum_{i=1}^n d_i=n-3$$
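Kontsevich's formula can be sanity-checked numerically: summing the multinomial coefficients over all top-degree monomials must give $n^{n-3}$, by the multinomial theorem. A quick sketch (helper names are ours):

```python
from math import factorial
from itertools import product

def multinomial(top, ds):
    """(n-3)! / (d_1! ... d_n!) for a tuple ds with sum(ds) == top."""
    assert sum(ds) == top
    out = factorial(top)
    for d in ds:
        out //= factorial(d)
    return out

# Sum of all top-degree intersection numbers of psi-classes for n = 5:
n = 5
total = sum(multinomial(n - 3, d)
            for d in product(range(n - 2), repeat=n) if sum(d) == n - 3)
assert total == n ** (n - 3)   # multinomial theorem: sum of coefficients = n^{n-3}
```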
In the present paper we mimic the definition of tautological line bundles and define $n$ similar tautological line bundles over the spaces $M_2(L)$ and $M_3(L)$, compute their Euler and Chern classes (Section \ref{SectTaut}), and study their intersection numbers (Section \ref{SecMon}).
The latter depend on the chamber containing the length vector $L$ and amount to counting some special types of triangular configurations\footnote{Precise meaning is clarified later.} of the flexible polygon.
Besides, we show that the Chern classes almost never vanish (Proposition \ref{PropNonVanish}).
Informally speaking, the computation of Euler classes and their cup-products is a baby version of the computation of Chern classes: the two computations are the same, except that for the Euler classes one does not care about orientations, since the coefficients are in $\mathbb{Z}_2$.
Throughout the paper all the cup-product computations take place in the ring generated by (the Poincar\'{e} duals of) \textit{nice submanifolds} (Section \ref{SectNice}). As a technical tool, we present multiplication rules in the ring (Proposition \ref{computation_rules}).
\medskip
Let us fix some notation. We say that some of the edges $\{l_iu_i\}_{i\in I}$ of a configuration are \textit{parallel} if $u_i=\pm u_j$ for all $i,j \in I$. If $I=\{k,k+1,...,k+l\}$ is a consecutive family of indices, this means that these edges lie on one line.
Two parallel edges are either codirected or oppositely directed.
\medskip
\medskip
\textbf{Acknowledgement.} This research is supported by the Russian Science
Foundation under grant 16-11-10039.
\section{Intersections of nice manifolds.}\label{SectNice}
The cohomology rings $H^*(M_2(L),\mathbb{Z}_2)$ and $H^*(M_3(L),\mathbb{Z})$ are described in \cite{HausmannKnu}.
However, for the sake of the subsequent computations we develop the intersection theory in different terms. The (Poincar\'{e} duals of the) nice submanifolds introduced below generate a ring which is sufficient for our goals.
\subsection*{Nice submanifolds of $M_2(L)$}
Let $i\neq j$ belong to $[n]=\{1,...,n\}$.
Denote by $(ij)_{2,L}$ the image of the natural embedding of the space \newline $M_2(l_i+l_j,l_1,...,\hat{l}_i,...,\hat{l}_j,...,l_n)$ into
the space $M_2(L)$.
That is, we think of the configurations of the new $(n-1)$-gon as the configurations of $L$ with parallel codirected edges $i$ and $j$ \textit{frozen} together to a single edge of length $l_i+l_j$. Since the moduli space does not depend on the ordering of the edges, it is convenient to think that $i$ and $j$ are consecutive indices. The space $(ij)_{2,L}$ is a (possibly empty) smooth closed submanifold of $M_2(L)$. We identify it with the Poincar\'{e}
dual cocycle and write for short
$$(ij)_{2,L} \in H^1(M_2(L),\mathbb{Z}_2).$$
Denote by $(i\,\overline{j})_{2,L}$ the image of the natural embedding of the space \newline $M_2(|l_i-l_j|,l_1,...,\hat{l}_i,...,\hat{l}_j,...,l_n)$ into
the space $M_2(L)$. Again we have a smooth closed (possibly empty) submanifold.
Now we think of the configurations of the new polygon as the configurations of $L$ with parallel oppositely directed edges $i$ and $j$ {frozen} together to a single edge of length $|l_i-l_j|$.
We can freeze several collections of edges, and analogously define a nice submanifold labeled by the formal product $$ (l\overline{m})_{2,L}\cdot(ij\overline{k})_{2,L}=(ij\overline{k})_{2,L}\cdot (l\overline{m})_{2,L}\in H^3(M_2(L),\mathbb{Z}_2).$$ All submanifolds arising this way are called \textit{nice submanifolds of $M_2(L)$, } or just \textit{nice manifolds} for short.
Putting the above more formally,
each nice manifold is labeled by an unordered formal product $$(I_1\overline{J}_1)_{2,L}\cdot ...\cdot(I_k\overline{J}_k)_{2,L},$$ where $I_1,...,I_k, J_1,...,J_k$ are some disjoint subsets of $[n]$ such that each set $I_i\cup {J}_i$ has at least one element.\footnote{For further computations it is convenient to also define nice manifolds with $I_i\cup {J}_i$ consisting of one element. That is, we set $(1)_{2,L}=M_2(L),\ \ (1)_{2,L}\cdot (23)_{2,L}=(23)_{2,L}$, etc. }
{By definition, the manifold $(I_1\overline{J}_1)_{2,L}\cdot ...\cdot(I_k\overline{J}_k)_{2,L}$ is the subset of $M_2(L)$ defined by the conditions:}
(1) $i,j \in I_m$ for some $m$ implies $u_i=u_j$, and
(2) $i \in I_m,\ j\in J_m$ for some $m$ implies $u_i=-u_j$.
{Note that $(I_1\overline{J}_1)_{2,L}\cdot ...\cdot(I_k\overline{J}_k)_{2,L}$ is the intersection of $(I_i\overline{J}_i)$, $i=1,...,k$.}
\medskip
Some of the nice manifolds may be empty and thus represent the zero cocycle; this depends on the values of the $l_i$.
\subsection*{Nice submanifolds of $M_3(L)$}
By literally repeating the above we define nice submanifolds of $M_3(L)$ as point sets. Since a complex-analytic manifold has a fixed orientation coming from the complex structure, each nice manifold has a \textit{canonical} orientation.
By definition, the \textit{relative orientation} of a nice manifold $(I\overline{J})_{3,L}$ coincides with its canonical orientation iff $$\sum _{i\in I} l_i>\sum _{j\in J} l_j.\ \ \ \ \ \ \ \ (*)$$
Further, the two orientations (the canonical one and the relative one) of a nice manifold $$(I_1\overline{J}_1)_{3,L}\cdot ...\cdot (I_k\overline{J}_k)_{3,L}$$ coincide iff the above inequality $(*)$ fails for an even number of the factors $(I_i\overline{J_i})$.
From now on, by $(I\overline{J})_{3,L} \in H^*(M_3(L),\mathbb{Z})$ we mean (the Poincar\'{e} dual of) the nice manifold taken with its relative orientation, whereas $(I\overline{J})_{3,L}^{can}$ denotes the nice manifold with the canonical orientation.
\medskip
To compute cup-products of these cocycles (that is, intersections of nice manifolds) we need the following rules. Since the rules are the same for all
$L$ and for both dimensions $2$ and $3$ of the ambient Euclidean space, we omit subscripts in the following proposition. In the sequel, we use the subscripts only if the dimension of the ambient space matters.
\begin{prop}\label{rules}\textbf{(Computation rules)}\label{computation_rules}
The following rules are valid for nice submanifolds of $M_2(L)$ and $M_3(L)$.
\begin{enumerate}
\item The cup-product is a commutative operation.
\item $(I\overline{J})=-(J\overline{I}).$
\item If the factors have no common entries, the cup-product equals the formal product, e.g.:
$$(12) \smile (34)=(12)\cdot (34).$$
\item If $I_1\cap I_2=\{i\},\ \ I_1\cap J_2= \emptyset, \ \ I_2\cap J_1= \emptyset,\ \ I_2\cap J_2= \emptyset, \hbox{and} \ \ J_1\cap J_2= \emptyset$, then
$$(I_1\overline{J}_1)\smile (I_2\overline{J}_2)=
(I_1\cup I_2 \ \overline{J_1\cup J_2}).
$$
Examples: $$(123)\smile (345)=(12345),$$$$(123)\smile (34\overline{5})=(1234\overline{5}).$$
\item If $J_1\cap J_2=\{i\},\ \ I_1\cap J_2= \emptyset, \ \ I_2\cap J_1= \emptyset,\ \ I_2\cap J_2= \emptyset, \hbox{and} \ \ I_1\cap I_2= \emptyset,$ then
$$(I_1\overline{J}_1)\smile (I_2\overline{J}_2)=-
(I_1\cup I_2 \ \overline{J_1\cup J_2}).
$$
Example:
$$(12\overline{3})\smile (45\overline{3})= -(1245\overline{3}).$$
\end{enumerate}
\end{prop}
\begin{proof} The statement (1) is true for $M_2$ since we work over $\mathbb{Z}_2$. (1) is true also for $M_3$ since the dimension of a nice manifold is even. The statement (2) follows from the definition. The statement (5) follows from (2) and (4).
The statement (3) follows from $(I_1\overline{J}_1)^{can} \smile (I_2\overline{J}_2)^{can}= ((I_1\overline{J}_1)\cdot (I_2\overline{J}_2))^{can}$, which is true by reasons of toric geometry, see \cite{HausmannKnu}.
So it remains to prove (4).
In notation of Definition \ref{SecondDfn}, take $(u_1,...,u_{n-3})\in (S^2)^{n-3}$ as a coordinate system on $M_3(L)$. It is well-defined on some connected dense subset of $ M_3(L)$. Taken together, the standard orientations of each of the copies of $S^2$ give the canonical
orientation on $M_3(L)$. In other words, the basis of the tangent space $(d u_1,du_2,du_3,...,du_{n-3})$ yields the canonical orientation.
Let us start with two examples.
(A) The nice manifold $(12)_{3,L}$ embeds
as $$(u_1,u_3,...,u_{n-3})\rightarrow (u_1,u_1, u_3,...,u_{n-3}).$$ The relative orientation is defined by the basis $(d u_1,d u_3,...,d u_{n-3})$ of the tangent space. It always coincides with the canonical orientation.
(B) The nice manifold $(1\overline{2})_{3,L}$ embeds
as $$(u_1,u_3,...,u_{n-3})\rightarrow (u_1,-u_1, u_3,...,u_{n-3}).$$ The relative orientation is defined by the basis $(d u_1,d u_3,...,d u_{n-3})$ of the tangent space. It coincides with the canonical orientation iff $l_1>l_2$.
Note that the nice manifold $(2\overline{1})_{3,L}$ coincides with $(1\overline{2})_{3,L}$ as a point set, but comes with the opposite relative orientation defined by the basis $(du_2,du_3,...,du_{n-3})$.
A nice manifold $(I\overline{J})$ such that none of $n,n-1,n-2$ belongs to $I\cup J$ embeds in a similar way.
Assuming that $1\in I$, the relative orientation is defined by the basis $$(d u_1,\{du_i\}_{i\notin I\cup J\cup\{n,n-1,n-2\}}).$$
For a nice manifold $(I\overline{J})$ such that $I\cup J$ has less than $n-3$ elements, one chooses another coordinate system obtained by renumbering of the edges.
The statement (4) is now straightforward if there are at least three edges that do not participate in the labels of nice manifolds.
Let us now prove (4) in the general case. We may assume that $n\notin I_1\cup I_2\cup J_1\cup J_2$.
\textit{Defreeze} the edge $n$: cut it into three smaller edges and join the pieces by additional revolving joints.
Thus we obtain a new linkage $$L'=(l_1,...,l_{n-1}, \frac{1}{3}l_n,\frac{1}{3}l_n,\frac{1}{3}l_n).$$
The nice manifolds $(I_1\overline{J}_1)_{3,L}\smile (I_2\overline{J}_2)_{3,L}$ and \newline $(I_1\overline{J}_1)_{3,L'}\smile (I_2\overline{J}_2)_{3,L'}\smile(n\ n+1\ n+2)_{3,L'}$ have one and the same relative orientation. For $(I_1\overline{J}_1)_{3,L'}\smile (I_2\overline{J}_2)_{3,L'}$ we can apply (4) and write
$$(I_1\overline{J}_1)_{3,L}\smile (I_2\overline{J}_2)_{3,L}=(I_1\overline{J}_1)_{3,L'}\smile (I_2\overline{J}_2)_{3,L'}\smile(n\ n+1\ n+2)_{3,L'}=$$
$$ \Big((I_1\overline{J}_1)_{3,L'}\cdot (I_2\overline{J}_2)_{3,L'}\Big)\smile(n\ n+1\ n+2)_{3,L'}=$$
$$(I_1\cup I_2\ \overline{J_1\cup J_2})_{3,L'}\smile(n\ n+1\ n+2)_{3,L'}=(I_1\cup I_2\ \overline{J_1\cup J_2})_{3,L}.$$
\end{proof}
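The computation rules (3)--(5) admit a small symbolic implementation. The sketch below is a hypothetical helper, not part of the paper: a class $(I\overline{J})$ is represented as a pair of index sets, and the routine returns the sign together with the merged label (or the formal product when the labels are disjoint).

```python
def cup(t1, t2):
    """Cup product of two nice classes (I1, J1), (I2, J2) following rules 3-5.
    Each class is a pair of frozensets; the result is (sign, list of classes)."""
    (I1, J1), (I2, J2) = t1, t2
    if not (I1 | J1) & (I2 | J2):                       # rule 3: formal product
        return (1, [t1, t2])
    if len(I1 & I2) == 1 and not (I1 & J2 or I2 & J1 or J1 & J2):
        return (1, [(I1 | I2, J1 | J2)])                # rule 4: merge, sign +1
    if len(J1 & J2) == 1 and not (I1 & J2 or I2 & J1 or I1 & I2):
        return (-1, [(I1 | I2, J1 | J2)])               # rule 5: merge, sign -1
    raise ValueError("not covered by rules 3-5")

f = frozenset
# (123) cup (345) = (12345)
assert cup((f({1, 2, 3}), f()), (f({3, 4, 5}), f())) == (1, [(f({1, 2, 3, 4, 5}), f())])
# (12 3bar) cup (45 3bar) = -(1245 3bar)
assert cup((f({1, 2}), f({3})), (f({4, 5}), f({3}))) == (-1, [(f({1, 2, 4, 5}), f({3}))])
```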
\section{Tautological line bundles over $M_{2}$ and $M_{3}$. Euler and Chern classes.}\label{SectTaut}
Let us give the main definition in the notation of Definition \ref{SecondDfn}:
\begin{dfn}
\begin{enumerate}
\item The tautological line bundle $E_{2,i}(L) $ is the real line bundle over the space $M_2(L)$ whose fiber over a point $(u_1,...,u_n)\in (\mathbb{R}P^1)^n$ is the tangent line to $\mathbb{R}P^1$
at the point $u_i$.
\item
Analogously, the tautological line bundle
$E_{3,i}(L)$ is the complex line bundle over the space $M_3(L)$ whose fiber over a point $(u_1,...,u_n)\in (\mathbb{C}P^1)^n$ is the complex tangent line to the complex projective line $\mathbb{C}P^1$
at the point $u_i$.
\end{enumerate}
\end{dfn}
\begin{lemma}The bundles $E_{2,i}(L)$ and $E_{2,j}(L)$ are isomorphic for any $i,j$.
\end{lemma} \textit{Proof.} {Define $\widetilde{M}_2(L):=\{(u_1,...,u_n) \in (S^1)^n : \sum_{i=1}^n l_iu_i=0\} $. That is, we have $M_2(L)=\widetilde{M}_2(L)/O(2)$. The space $\widetilde{M}_2(L)$ comes equipped with the line bundles $\widetilde{E}_{2,i}(L)$: the fiber of $\widetilde{E}_{2,i}(L)$ over a point $(u_1,...,u_n)$ is the tangent line to $\mathbb{R}P^1$ at the point $u_i$. Clearly, we have $E_{2,i}(L)=\widetilde{E}_{2,i}(L)/O(2).$ Let $\rho$ be the (unique) rotation of the circle $S^1$ which takes $u_i$ to $u_j$. Its pushforward $d\rho: \widetilde{E}_{2,i}(L)\rightarrow \widetilde{E}_{2,j}(L)$ is an isomorphism. Since $d\rho$ commutes with the action of $O(2)$, it yields an isomorphism between $E_{2,i}(L)$ and $E_{2,j}(L)$. \qed}
\medskip
The bundles $E_{3,i}(L)$ and $E_{3,j}(L)$ are (in general) not isomorphic, see Lemma \ref{Lemma_for_quadrilateral} and further examples.
\medskip
\begin{thm}\label{ThmEulerChern}For $n\geq 4$ we have:\begin{enumerate}
\item
\begin{enumerate}
\item The Euler class of $E_{2,i}(L)$ does not depend on $i$ and equals\footnote{We omit the dependence on $L$ in the notation, although $Ch(i)$ as well as $e$ depend on the vector $L$.} $$e:=e(E_{2,i}(L))=(12)_{2,L}+(1\overline{2})_{2,L}=(ij)_{2,L}+(i\,\overline{j})_{2,L} \in H^1(M_2(L),\mathbb{Z}_2) \hbox{ for any} \ i \neq j.$$
\item The alternative expression for the Euler class is: $$e=(jk)_{2,L}+(kr)_{2,L}+(jr)_{2,L} \in H^1(M_2(L),\mathbb{Z}_2) \hbox{ for any distinct } j,k,r.$$
\end{enumerate}
\item The first Chern class of $E_{3,i}(L)$ equals $$Ch(i):=Ch(E_{3,i}(L))=
(ij)_{3,L}-(i\,\overline{j})_{3,L} \in H^2(M_3(L),\mathbb{Z}) \hbox{ for any } j \neq i.$$
\end{enumerate}
\end{thm}
\begin{proof}
Let us briefly remind the reader of \textbf{a recipe} for
computing the first Chern class of a complex line bundle. The background for the recipe comes from \cite{MilSt} and its explicit account in \cite{Kaz}: in our case, the first Chern class equals the Euler class of the complex line bundle, so it is represented by the zero locus of a generic smooth section. Assume we have a complex line bundle $E$ over an oriented smooth base $B$.
Replace $E$ by an $S^1$-bundle by taking the oriented unit circle in each of the fibers.
\begin{itemize}
\item If the dimension of the base is $2$, the choice of orientation identifies $H^2(B,\mathbb{Z})$ with $\mathbb{Z}$. So the Chern class is identified with some integer $Ch(E)$, which is called the Chern number, or the Euler-Chern number. Choose a section $s$ of the $S^1$-bundle which is discontinuous at a finite number of singular points. The singularities of the section correspond to the zeros of the associated section of $E$. Each such point contributes a summand to $Ch(E)$. Take one of these points $p$ and a small positively oriented circle $\omega \subset B$ embracing $p$. We may assume that the circle is small enough to fit in a neighborhood of $p$ where the bundle is trivializable. This means that in the neighborhood there exists a continuous section $t$. Besides $t$, at each point of $\omega$ we have the section $s$.
The point $p$ contributes to $Ch(E)$ the winding number of $s$ with respect to $t$. For removable singularities this contribution is zero.
\item The general case reduces to the two-dimensional one. Let the dimension of the base be greater than $2$. Choose a section $s$ of the $S^1$-bundle which is discontinuous at a finite number of (not necessarily disjoint) oriented submanifolds $m_i$\footnote{The zero locus of a generic smooth section of $E$ gives an example of such $m_i$.} of real codimension $2$. In our case, we present these submanifolds explicitly; they are disjoint. Each $m_i$ contributes to the cocycle $Ch(E)$ the summand $c_i\cdot m_i$, where the integer $c_i$ is computed as follows. Take a $2$-dimensional smooth oriented disc $\mathcal{D}\subset B$ transversally intersecting the manifold $m_i$ at a smooth point $p$. We assume that, taken together, the orientations of $m_i$ and $\mathcal{D}$ yield the (already fixed) orientation of $B$. The pullback of the line bundle to $\mathcal{D}$ comes with the restriction of the section $s$, which we denote by $\overline{s}$. Let $c_i$ be the number computed as in the previous item, applied to $\mathcal{D}$ and the section $\overline{s}$ at the point $p$. Then the first Chern class equals (the Poincar\'{e} dual of) \ $Ch(E) = \sum_i c_im_i$.
\end{itemize}
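The winding-number count in the first item of the recipe can be sketched numerically. The example below is illustrative only (not tied to the paper's bundles): it accumulates the angle swept by a plane vector field along a closed loop, and recovers the local contribution $2$ for a section with an isolated double zero, as in the classical count for the tangent bundle of $S^2$.

```python
import numpy as np

def winding_number(vectors):
    """Total angle swept by complex-valued plane vectors along a closed loop, in turns."""
    ang = np.angle(vectors)
    jumps = np.diff(ang)
    jumps = (jumps + np.pi) % (2*np.pi) - np.pi   # unwrap each step to (-pi, pi]
    return int(round(jumps.sum() / (2*np.pi)))

# A section with an isolated double zero: s(z) = z^2 along a small circle
theta = np.linspace(0.0, 2*np.pi, 401)
z = np.exp(1j*theta)
assert winding_number(z**2) == 2      # contributes 2 to the Chern number
assert winding_number(z.conj()) == -1  # an oppositely-winding singularity
```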
\medskip
The proof starts similarly for all the cases (1,a), (1,b) and (2). Let us describe a section of $E_{2,i}(L)$ and $E_{3,i}(L)$ respectively. Its zero locus (taken with a weight $\pm 1$, coming from mutual orientations) is the desired class, either $e(i)$ or $Ch(i)$.
Fix any $j\neq i$, and consider the following (discontinuous) section of $E_{2,i}(L)$ or $E_{3,i}(L)$ respectively: at a point $(u_1,...,u_n)$ we take the unit vector in the tangent line (either real or complex) at $u_i$ pointing in the direction of the shortest arc connecting $u_i$ and $u_j$. The section is well-defined except for the points of $M(L)$ with $u_i=\pm u_j$, that is, except for nice manifolds $(ij)$ and $(i\,\overline{j})$. The section is transversal to the zero section.
(1,a) To compute $e(i)$, it suffices to observe that the section has no continuous extension to either $(ij)_{2,L}$ or $(i\,\overline{j})_{2,L}$, so the statement is proven.
(1,b) Each (discontinuous) choice of orientation of $S^1$ induces a discontinuous section of $E_{2,i}(L)$. An orientation of the circle $S^1$ is defined by any three distinct points, say, $(u_j,u_k,u_r)$. Consider the section whose orientation agrees with the orientation of the ordered triple $(u_j,u_k,u_r)$. {Whenever any two of $u_j,u_k,u_r$ coincide, we obtain a non-removable singularity of the section, so the statement follows.}
(2) To compute $Ch(i)$, we need to take orientations into account. We already know that $Ch(E_{3,i}(L))=A\cdot (ij)_{3,L}-B\cdot (i\,\overline{j})_{3,L}$ for some (possibly zero) integers $A$ and $B$ that may depend on $L$, $i$, and $j$.
\medskip
We are going to apply the above recipe. Therefore we first look at the case when the base has real dimension two, which corresponds to $n=4$.
All existing cases are described in the following lemma:
\begin{lemma}\label{Lemma_for_quadrilateral}
{Theorem \ref{ThmEulerChern} (2) is valid for $n=4$. More precisely,}
\begin{enumerate}
\item For $l_1,l_2,l_3,l_4$ such that $l_1>l_2>l_3>l_4$ and $l_3+l_2>l_4+l_1$, we have
$Ch(1)=Ch(2)=Ch(3)=0$, and \ \ $Ch(4)=2$.
\item For $l_1,l_2,l_3,l_4$ such that $l_1>l_2>l_3>l_4$ and $l_3+l_2<l_4+l_1$, we have
$Ch(2)=Ch(3)=Ch(4)=1$, and \ \ $Ch(1)=-1$.
\end{enumerate}
\end{lemma}\textit{Proof of the lemma.}
{Let us compute $Ch(1)$. Consider the section
of $E_{3,1}(L)$ equal to the unit vector in $T_{u_1}(S^2)$ pointing in the direction of the shortest arc connecting $u_1$ and $u_2$.
The section is well-defined everywhere except for the points of $(12)_{3,L}$ and $(1\overline{2})_{3,L}$. The manifold $M_3(L)$ is diffeomorphic to $S^2$ and can be parameterized by the coordinates $(d, \alpha)$, where $d$ is the length of the diagonal $q_1q_3$ in a configuration $(q_1,q_2,q_3,q_4)$, and $\alpha$ is the angle between the (affine hulls of the) triangles $q_1q_2q_3$ and $q_3q_4q_1$.
The contribution of $(12)_{3,L}$ to the Chern number is computed according to the above recipe. Take a small circle on $M_3(L)$ embracing $(12)_{3,L}$ as follows: make the first two edges almost parallel and codirected, and rotate the (almost degenerate) triangle $q_1q_2q_3$ with respect to $q_3q_4q_1$ keeping the diagonal $q_1q_3$ fixed. It remains to count the winding number. The contribution of $(1\overline{2})_{3,L}$ to the Chern number is found analogously. }
{Although the lemma is proven, let us give one more insight. Assume that $l_1>l_2>l_3>l_4$ and $l_3+l_2>l_4+l_1$. Let us make use of the representation via configurations of stable points and factor out the action of $PSL(2,\mathbb{C})$ by assuming that the first three points $p_{1}, p_{2}$ and $p_{3}$ are $0,1$ and $\infty$ respectively. This is always possible since $p_1,p_2$, and $p_3$ never coincide pairwise. Thus, $M_3(L)$ is the two-sphere, and $E_{3,4}(L)$ is the tangent bundle over $S^2$, whose Chern number count is a classical exercise in topology courses. Let us just repeat that in this representation we take the (same as above) section of $E_{3,4}(L)$: the unit vector in the tangent plane pointing in the direction of the point $p_1 = 0$. \qed}
\medskip
Now we prove the statement (2) of the theorem for the general case.
Without loss of generality, we may assume that $i=1$.
Choose a generic point $p\in (12)_{3,L}$ (or a generic point $p\in (1\overline{2})_{3,L}$). It is some configuration $(q_1,\dots,q_n)$. Freeze the edges $4,5,...,n$. Fix also the rotation parameter with respect to the diagonal $q_1q_4$ (for instance, one may fix the angle between the planes $(q_1q_2q_3)$ and $(q_1q_4q_5)$; generically these triples are not collinear). Keeping in mind the recipe, let { $\widetilde{\mathcal{D}}\subset M_3(L)$ be the set of configurations satisfying all these freezing conditions. By construction, $p$ lies in $\widetilde{\mathcal{D}}$. The manifold $\widetilde{\mathcal{D}}$ amounts to the configuration space of a $4$-gon constituted of the first three edges and the diagonal $q_1q_4$. Let the two-dimensional disc $\mathcal{D}$ be a neighborhood of $p$ in $\widetilde{\mathcal{D}}$. }
It remains to refer to the case $n=4$. The theorem is proven.
Let us comment on the following alternative proof of $(1,a)$. The space $M_2(L)$ embeds in $M_3(L)$. The representation via stable points configurations shows that $M_2(L)$ is the space of real points of the complex manifold $M_3(L)$; see \cite{HausmannKnu, Kl} for more details.
The bundle $E_{2,i}(L)$ is the real part of the pullback of $E_{3,i}(L)$. Therefore, the Euler class $e$ is the real part of the pullback of $Ch(i)$. In our framework, to take the pullback of $Ch(i)$ means to intersect its dual class with $M_2(L)$. It remains to observe that $(IJ)_{3,L}\cap M_2(L)=(IJ)_{2,L}$ and $(I\overline{J})_{3,L}\cap M_2(L)=(I\overline{J})_{2,L}$. One should keep in mind that passing to $M_2(L)$ we replace the coefficient ring $\mathbb{Z}$ by $\mathbb{Z}_2$.
\end{proof}
\medskip
\textbf{Remark.} We have a natural inclusion $incl:M_3(L)\rightarrow (\mathbb{C}P^1)^n/PSL(2,\mathbb{C})$. Define the analogous line bundles over $(\mathbb{C}P^1)^n/PSL(2,\mathbb{C})$ and their first Chern classes $\mathbf{Ch}(i)$. The above computation of the Chern class merely takes the pullback $Ch(i)=incl^*\mathbf{Ch}(i)$.
\medskip
Let us now examine the cases when the Chern class $Ch(i)$ is zero.
\begin{prop}\label{PropNonVanish} Assume that $l_1>l_2>...>l_n$. \begin{enumerate}
\item If $$l_2+l_3<l_1+l_4+l_5+...+l_n, \ \ \ \ \ \ (**)$$ then the Chern class $Ch(i)$ does not vanish for any $i=1,...,n$.
\item The above condition $(**)$ holds true for all the chambers (that correspond to non-empty moduli space) except the unique one represented by $L=(1,1,1,\varepsilon,...,\varepsilon)$.
In this exceptional case $Ch(1)=Ch(2)=Ch(3)=0$, and $Ch(i)\neq 0$ for $i>3$.
\end{enumerate}
\end{prop}
\begin{proof}(1) Take a maximal (by inclusion) set $I$ containing $1$ with the property $\sum_{i\in I} l_i<\sum_{i\notin I} l_i.$ \footnote{Such a set may not be unique; take any of them.} The condition $ (**) $ implies that its complement has at least three elements. Make all the edges from $I$ codirected and freeze them together to a single edge. Also freeze (if necessary) some of the remaining edges to get a flexible $4$-gon out of the initial $n$-gon. It is convenient to think that the edges that get frozen are consecutive ones. We get a new flexible polygon $L'$ whose moduli space $M_3(L')$ embeds in $M_3(L)$. For a given $i$ choose $j\in [n]$ in such a way that the edges $i$ and $j$ are not frozen together, and write $Ch(i)=(ij)-(i\,\overline{j})$.
(a) If none of $i,j$ is frozen with the edge $1$, then $(i\,\overline{j})$ does not intersect $M_3(L')$, whereas $(i{j})$ intersects $M_3(L')$ transversally at exactly one point.
(b) If one of $i,j$ is frozen with the edge $1$, then $(i{j})$ does not intersect $M_3(L')$, whereas $(i\,\overline{j})$ intersects $M_3(L')$ transversally at exactly one point.
In both cases the product of $Ch(i)$ with the class of $M_3(L')$ is non-zero, which completes the proof.
(2) Clearly, $Ch(1)=(12)-(1\overline{2})=0$ since both summands are empty manifolds.
Further, $$Ch(4)\smile...\smile Ch(n)=\left[(41)+(1\overline{4})\right]\smile...\smile \left[(n1)+(1\overline{n})\right]=2^{n-3}\neq 0,$$ which proves the statement.
\end{proof}
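Condition $(**)$ can be tested numerically. The following sketch (Python, for illustration only and not part of the paper) checks a generic chamber and a representative of the exceptional chamber $L=(1,1,1,\varepsilon,\dots,\varepsilon)$; the lengths are perturbed slightly so that they are strictly decreasing, which does not leave the chamber.

```python
# Sketch: check condition (**), l_2 + l_3 < l_1 + l_4 + ... + l_n,
# for a length vector sorted as l_1 > l_2 > ... > l_n.

def condition_star_star(l):
    """Return True iff l_2 + l_3 < l_1 + l_4 + ... + l_n (1-based indices)."""
    assert all(l[i] > l[i + 1] for i in range(len(l) - 1)), "need l_1 > ... > l_n"
    return l[1] + l[2] < l[0] + sum(l[3:])

# Generic chamber: condition holds, so no Chern class vanishes.
print(condition_star_star([3, 2.5, 2, 1.5, 1]))              # True
# Exceptional chamber L = (1, 1, 1, eps, eps), slightly perturbed: it fails.
print(condition_star_star([1.02, 1.01, 1.0, 0.02, 0.01]))    # False
```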
\section{Monomials in Euler and Chern classes}\label{SecMon}
Let us start with small examples.
Table \ref{Tab_Pentagon1} represents the multiplication table for the five Chern classes for the flexible pentagon $L=(3,1, 1,1,1)$.
\begin{table}[h] \caption{Multiplication table for $L=(3,1, 1,1,1)$}
\label{Tab_Pentagon1}
\begin{tabular}{cccccc}
& $Ch(1)$ & $Ch(2)$ & $Ch(3)$ & $Ch(4)$ & $Ch(5)$ \\
$Ch(1)$ & 1 & -1 & -1 & -1& -1\\
$Ch(2)$ & -1 & 1 & 1 & 1& 1\\
$Ch(3)$ & -1 & 1 & 1 & 1& 1\\
$Ch(4)$ & -1 & 1 & 1 & 1& 1\\
$Ch(5)$ & -1 & 1 & 1 & 1 & 1\\
\end{tabular}
\end{table}
Here is a detailed computation of $Ch(1)\smile Ch(2)$:
$$Ch(1)\smile Ch(2)=[(13)+(\overline{1}3)]\smile [(23)+(\overline{2}3)]=$$
$$(123)+(1\overline{2}3)+(\overline{1}23)+(\overline{1}\overline{2}3)=0+0-1+0=-1.$$
Tables \ref{Tab_Pentagon2}, \ref{Tab_Pentagon3}, \ref{Tab_Pentagon4}, \ref{Tab_Pentagon5}, \ref{Tab_Pentagon6} represent the multiplication tables for all the remaining cases of flexible pentagons (listed in \cite{Zvonk}).
\begin{table}[h] \caption{Multiplication table for $L=(2,1, 1,1, \varepsilon)$}
\label{Tab_Pentagon2}
\begin{tabular}{cccccc}
& $Ch(1)$ & $Ch(2)$ & $Ch(3)$ & $Ch(4)$ & $Ch(5)$ \\
$Ch(1)$ & 0 & 0 & 0 & 0& -2\\
$Ch(2)$ & 0 & 0 & 0 & 0& 2\\
$Ch(3)$ & 0 & 0 & 0 & 0& 2\\
$Ch(4)$ & 0 & 0 & 0 & 0& 2\\
$Ch(5)$ & -2 & 2 & 2 & 2 & 0\\
\end{tabular}
\end{table}
\begin{table}[h] \caption{Multiplication table for $L=(3,2, 2, 1, 1)$.}
\label{Tab_Pentagon3}
\begin{tabular}{cccccc}
& $Ch(1)$ & $Ch(2)$ & $Ch(3)$ & $Ch(4)$ & $Ch(5)$ \\
$Ch(1)$ & -1 & 1 & 1 & -1& -1\\
$Ch(2)$ & 1 & -1 & -1 & 1& 1\\
$Ch(3)$ & 1 & -1 & -1 & 1& 1\\
$Ch(4)$ & -1 & 1 & 1 & -1& 3\\
$Ch(5)$ & -1 & 1 & 1 & 3 & -1\\
\end{tabular}
\end{table}
\begin{table}[h]\caption{Multiplication table for $L=(2, 2,1,1,1)$.}
\label{Tab_Pentagon4}
\begin{tabular}{cccccc}
& $Ch(1)$ & $Ch(2)$ & $Ch(3)$ & $Ch(4)$ & $Ch(5)$ \\
$Ch(1)$ & -2 & 2 & 0 & 0& 0\\
$Ch(2)$ & 2 & -2 & 0 & 0& 0\\
$Ch(3)$ & 0 & 0 & -2 & 2& 2\\
$Ch(4)$ & 0 & 0 & 2& -2& 2\\
$Ch(5)$ & 0 & 0 & 2 & 2& -2\\
\end{tabular}
\end{table}
\begin{table}[h]\caption{Multiplication table for $L=(1,1,1,1,1)$.}
\label{Tab_Pentagon5}
\begin{tabular}{cccccc}
& $Ch(1)$ & $Ch(2)$ & $Ch(3)$ & $Ch(4)$ & $Ch(5)$ \\
$Ch(1)$ & -3 & 1 & 1 & 1& 1\\
$Ch(2)$ & 1 & -3 & 1 & 1& 1\\
$Ch(3)$ & 1 & 1 & -3 & 1& 1\\
$Ch(4)$ & 1 & 1 & 1 & -3& 1\\
$Ch(5)$ & 1 & 1 & 1 & 1& -3\\
\end{tabular}
\end{table}
\begin{table}[h] \caption{Multiplication table for $L=(1,1, 1,\varepsilon, \varepsilon)$}
\label{Tab_Pentagon6}
\begin{tabular}{cccccc}
& $Ch(1)$ & $Ch(2)$ & $Ch(3)$ & $Ch(4)$ & $Ch(5)$ \\
$Ch(1)$ & 0 & 0 & 0 & 0& 0\\
$Ch(2)$ & 0 & 0 & 0 & 0& 0\\
$Ch(3)$ & 0 & 0 & 0 & 0& 0\\
$Ch(4)$ & 0 & 0 & 0 & 0& 4\\
$Ch(5)$ & 0 & 0 & 0 & 4 & 0\\
\end{tabular}
\end{table}
\medskip
\newpage
\textbf{Remarks.} In each of the tables, all the entries have the same parity. Indeed, modulo $2$ these computations are the computations of the squared Euler class.
Table \ref{Tab_Pentagon6} illustrates the case $2$ of Proposition \ref{PropNonVanish}.
\subsection{First computation of $e^{n-3}$}
If the dimension of the ambient space is $2$, all the tautological linear bundles are isomorphic, so we have the unique top degree monomial
$e^{n-3} \in \mathbb{Z}_2$. Let us compute it using the rules from Proposition \ref{rules}:$$e^2=[(12)+(1\overline{2})]\smile [(23)+(2\overline{3})]=(123)+(1\overline{2}\overline{3})+(12\overline{3})+(1\overline{2}{3}),$$
$$e^3=e^2\smile [(34)+(3\overline{4})]=(1234)+(1\overline{2}34)+(12\overline{3}4)+$$$$(123\overline{4})+(1\overline{23}4)+
(12\overline{34})+(1\overline{2}3\overline{4})+(1\overline{234}).$$
Proceeding this way one concludes:
\begin{prop} \label{PropEulMon} \begin{enumerate}
\item The top power of the Euler class $e^{n-3}$ (as an element of $\mathbb{Z}_2$)
equals the number of triangular configurations of the flexible polygon $L$ such that all the edges $1,...,n-2$ are parallel.
\item Choose any three vertices of the flexible polygon $L$. Let them be, say, $q_i,q_j,$ and $q_k$ for some $i<j<k$.
The top power of the Euler class $e^{n-3}$
equals the number of triangular configurations of the flexible polygon $L$ with the vertices $q_i,q_j$, and $q_k$. More precisely, we count configurations such that
\begin{enumerate}
\item the edges $i+1,...,j$ are parallel,
\item the edges $j+1,...,k$ are parallel,
\item the edges $k+1,...,n$ and $1,...,i$ are parallel.\qed
\end{enumerate}
\end{enumerate}
\end{prop}
\begin{Ex} Let $L=(1,1,\dots,1)$, that is, we have a flexible equilateral $(2s+3)$-gon.
The number of triangles indicated in Proposition \ref{PropEulMon}, (1) is $\binom{2s+1}{s}$. By Lucas' theorem, modulo $2$ it equals $$ \prod_{t \geq 0} (s_{t} - s_{t-1} +1),$$ where $\{s_{t}\}_{t\geq 0}$ are the binary digits of $s$.
Finally, we get $$e^{2s}=\left\{
\begin{array}{ll}
1, & \hbox{if } s=2^r-1;\\
0, & \hbox{otherwise.}
\end{array}
\right.
$$
\end{Ex}
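The parity claim of the example is easy to confirm by direct computation; the sketch below (an illustration, not part of the paper) checks that $\binom{2s+1}{s}$ is odd exactly when $s=2^r-1$, using the bitwise characterization $s \mathbin{\&} (s+1)=0$ of such $s$.

```python
# Check: for an equilateral (2s+3)-gon the count of triangles in
# Proposition PropEulMon(1) is binomial(2s+1, s), and its parity
# e^{2s} is 1 exactly when s = 2^r - 1.
from math import comb

def e_top_power(s):
    return comb(2 * s + 1, s) % 2

for s in range(1, 40):
    is_power_of_two_minus_one = (s & (s + 1)) == 0   # s = 2^r - 1 in binary
    assert e_top_power(s) == (1 if is_power_of_two_minus_one else 0)
print("verified for s = 1..39")
```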
\subsection{Second computation of $e^{n-3}$}
Now we make use of Theorem \ref{ThmEulerChern}(1,b):
$$ e = (i\; i+1)+(i+1\; i+2)+(i\; i+2).$$
\begin{prop}\begin{enumerate}
\item We have
$$e^{k} = \sum_{T_1\cup T_2= [k+2]} (T_{1})\cdot (T_{2}),$$
where the sum runs over all unordered partitions of the set $[k+2]=\{1,...,k+2\}$ into two nonempty sets $T_{1}$ and $T_{2}$.
\item In particular,
$$e^{n-3} = \sum_{T_1\cup T_2= [n-1]} (T_{1})\cdot (T_{2}),$$ where the sum runs over all unordered partitions of $[n-1]$ into two nonempty disjoint sets $T_{1}$ and $T_{2}$.
\end{enumerate}
\end{prop}
\begin{proof}
Let us first prove the following lemma:
\begin{lemma} \label{transposition_square}
We have
$$(ij)\smile (ij) = (ijk) + (ijl) + (ij)\cdot (kl) = (ij)\smile e.$$
\end{lemma}
\textit{Proof of the lemma.} Perturb the manifold $(ij)$ by keeping $u_i$ as it is and pushing $u_j$ in the direction defined by the orientation of the triple $(u_i, u_k, u_l)$. We arrive at a manifold $\widehat{(ij)}_{k,l}$ which represents the same cohomology class. The manifolds $\widehat{(ij)}_{k,l}$ and $(ij)$ intersect transversally, and their intersection is the subset of $(ij)$ where the above orientation is not defined. The last equation follows from the representation $e = (jk)+(jl)+(kl)$. The lemma is proven. \qed
Now we prove the proposition by induction on $k$. The base comes from Theorem \ref{ThmEulerChern}(1,b):
$$e = (12)+(13)+(23).$$
The last expression equals exactly all two-component partitions of the set $\{1,2,3\} = [3]$.
The induction step:
$$e^{k+1} = e^{k}\smile e =$$
$$= \left(\sum_{T_1\cup T_2=\{1, 2, \dots , k+2\}}(T_{1})\cdot (T_{2})\right) \smile \Big((k+1 \; k+2)+(k+1 \; k+3)+(k+2 \; k+3)\Big).$$
The statement now follows from Lemma \ref{transposition_square}.
Let us illustrate the second induction step:
\begin{align*}
e^2 = &[(12)+(23)+(13)] \smile [(23)+(24)+(34)] =\\
&(12)\smile (23)+(12)\smile (24)+(12)\smile (34)+(23)\smile (23)+(23)\smile (24)+\\
&+(23)\smile (34)+(13)\smile (23)+(13)\smile (24)+(13)\smile (34) =\\
&(123)+(124)+(12)\smile (34)+(23)\smile (23)+(234)+\\
&+(234)+(123)+(13)\smile (24)+(134) =\\
&(124)+(12)\cdot (34)+(234)+ (123)+(14)\cdot (23)+\\
&+(13)\cdot (24)+(134).\\
\end{align*}
Each summand in the last expression corresponds to a two-element partition of the set $\{1,2,3,4\}$.
\end{proof}
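The count of summands in the proposition is easy to verify by enumeration; the sketch below (plain enumeration for illustration, not the authors' code) checks that the number of unordered two-part partitions of $[k+2]$ is $2^{k+1}-1$, matching the seven summands in the displayed expansion of $e^2$.

```python
# Enumerate unordered partitions {T1, T2} of {1,...,n} into two nonempty sets.
from itertools import combinations

def two_part_partitions(n):
    elements = list(range(1, n + 1))
    parts = []
    # Fix the element 1 inside T1 to avoid counting {T1,T2} and {T2,T1} twice;
    # r <= n-2 keeps T2 nonempty.
    for r in range(0, n - 1):
        for rest in combinations(elements[1:], r):
            t1 = frozenset((1,) + rest)
            parts.append((t1, frozenset(elements) - t1))
    return parts

for k in (1, 2, 3, 4):
    assert len(two_part_partitions(k + 2)) == 2 ** (k + 1) - 1
print(len(two_part_partitions(4)))   # → 7
```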
\medskip
\subsection{Monomials in Chern classes} From now on, we consider $\mathbb{R}^{3}$ as the ambient space. Since we have different representations for the Chern class, there are different ways of computing the intersection numbers. They lead to combinatorially different counts for one and the same monomial and one and the same $L$, but of course, all these counts yield the same number.
The canonical orientation of $M_3(L)$ identifies $H^{2n-6}(M_3(L),\mathbb{Z})$ with $\mathbb{Z}$. Therefore the top monomials in Chern classes can be viewed as integer numbers.
\begin{thm}
The top power of the Chern class $Ch^{n-3}(1)$ equals the signed number of triangular configurations of the flexible polygon $L$ such that all the edges $1,...,n-2$ are parallel.\footnote{Recall that the edges $\{l_iu_i\}_{i\in I}$ are \textit{parallel} if $u_i=\pm u_j$ for $i,j \in I$.} Each such triangle represents a one-point nice manifold $(I\overline{J})$ for some partition $[n-2]=I\cup J$ { with $1 \in I$}. Each triangle is counted with the sign
$$(-1)^N \cdot \epsilon ,$$
where $N= |J|$ is the cardinality of $J$, and
$$\epsilon= \left\{
\begin{array}{ll}
1, & \hbox{if \ \ } \sum_I l_i> \sum_Jl_j;\\
-1, & \hbox{otherwise.}
\end{array}
\right.$$
The expressions for the other top powers $Ch^{n-3}(i)$ come from renumbering.
\end{thm}
\begin{proof} We have $$Ch(1)=(12)-(1\overline{2})=(13)-(1\overline{3})=(14)-(1\overline{4}), \hbox{ etc.}$$ Therefore
$$Ch^2(1)= [(12)-(1\overline{2})]\smile [(13)-(1\overline{3})]= (123)-(12\overline{3})-(1\overline{2}3)+(1\overline{23}).$$
$$Ch^3(1)= [(14)-(1\overline{4})]\smile Ch^2 ( 1)=$$$$(1234)-(1\overline{2}34)-(12\overline{3}4)-(123\overline{4})+(12\overline{34})+(13\overline{24})+(14\overline{23})-(1\overline{234}).$$
At the final stage we arrive at a number of zero-dimensional nice manifolds. Each comes with the sign $(-1)^N$.
Further, each of them corresponds either to $1$ if the relative orientation agrees with the canonical orientation, or to $-1$.
This gives us $\epsilon$.
\end{proof}
\medskip
Figure \ref{FigInters1} represents all the triangular configurations that arise in the computation of $Ch^2(1)$ for the equilateral pentagon. The values of $N$ and $\epsilon$ are: \newline (a) $N=2$, $\epsilon =-1$, (b) $N=1$, $\epsilon =1$, (c) $N=1$, $\epsilon =1$.
\begin{figure}[h]
\centering \includegraphics[width=10 cm]{FigInters1.eps}
\caption{Triangular configurations of the equilateral pentagon.}\label{FigInters1}
\end{figure}
\medskip
\begin{Ex} For the equilateral $(2k+3)$-gon we have $$Ch^{2k}(i) = (-1)^{k} \binom{2k}{k}-(-1)^{k+1} \binom{2k}{k-1}=(-1)^k\cdot\binom{2k+1}{k}.$$ Indeed, we count equilateral triangle configurations with either $k$ or $k-1$ edges codirected with the first edge.
\end{Ex}
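The closed form in the example can be confirmed mechanically; the following sketch (an illustration, not part of the paper) verifies the binomial identity $(-1)^{k}\binom{2k}{k}-(-1)^{k+1}\binom{2k}{k-1}=(-1)^k\binom{2k+1}{k}$ for a range of $k$.

```python
# Verify the closed form for Ch^{2k}(i) of an equilateral (2k+3)-gon.
from math import comb

def chern_top_power(k):
    return (-1) ** k * comb(2 * k, k) - (-1) ** (k + 1) * comb(2 * k, k - 1)

for k in range(1, 20):
    assert chern_top_power(k) == (-1) ** k * comb(2 * k + 1, k)
print(chern_top_power(2))   # → 10, the value Ch^4(i) for the equilateral 7-gon
```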
For a three-term monomial the above technique gives:
\begin{thm} Assume that $d_1+d_2+d_3=n-3.$ The monomial
$$Ch^{d_1}(1)\smile Ch^{d_2}(d_1+2)\smile Ch^{d_3 }(d_1+d_2+3)$$ equals the signed number of triangular configurations of the flexible polygon $L$ such that \begin{enumerate}
\item the edges $1,...,d_1+1$ are parallel,
\item the edges $d_1+2,...,d_1+d_2+2$ are parallel,
\item the edges $d_1+d_2+3,...,n$ are parallel.
\end{enumerate}
Each triangle is counted with the sign
$$(-1)^{N_1+N_2+N_3} \cdot \epsilon_1 \cdot \epsilon_2\cdot \epsilon_3,$$
{where $N_i$ and $\epsilon_i$ refer to the $i$-th side of the triangular configuration.}
More precisely, each triangle represents a one-point nice manifold \newline $(I_1\overline{J}_1)\cdot(I_2\overline{J}_2)\cdot(I_3\overline{J}_3)$
with $1\in I_1,\ d_1+2 \in I_2,\ \ d_1+d_2+3 \in I_3$.
\newline Here $N_i=|J_i|,$ and $$\epsilon_i= \left\{
\begin{array}{ll}
1, & \hbox{if \ \ } \sum_{I_i} l_k> \sum_{J_i}l_k;\\
-1, & \hbox{otherwise.}
\end{array}
\right.$$
The expressions for the other three-term top monomials come from renumbering.
\qed
\end{thm}
\medskip
Figure \ref{FigInters2} represents two triangular configurations that arise in the computation of $Ch^2(1)\smile Ch^2(4)\smile Ch^2(7)$ for the equilateral $9$-gon. Here we have \newline (a) $N_1=0 , N_2=1, N_3=0$, $\epsilon_1 =1,\epsilon_2 =1, \epsilon_3 =1$, \newline (b) $N_1=1 , N_2=1, N_3=1$, $\epsilon_1 =1,\epsilon_2 =1, \epsilon_3 =1$.
\begin{figure}[h]
\centering \includegraphics[width=8 cm]{FigInters2.eps}
\caption{Some triangular configurations of the equilateral $9$-gon.}\label{FigInters2}
\end{figure}
\medskip
The general case is:
\begin{thm}\label{main_theorem} Modulo renumbering, any top monomial in Chern classes has the form $$Ch^{d_1}(1)\smile ...\smile Ch^{d_k}(k)$$ with $\sum_{i=1}^k d_i=n-3$ and $d_i \neq 0$ for $i=1,...,k$. Its value equals the signed number of triangular configurations of the flexible polygon $L$ such that all the edges $1,...,n-2$ are parallel. Each such triangle represents a one-point nice manifold $(I\overline{J})$ for some partition $$[n-2]=I\cup J$$ with $k+1 \in I$. Each triangle is counted with the sign
$$(-1)^N \cdot \epsilon ,$$
where $$N= |J|+\sum_{i \in J \cap [k]} d_i,$$ and
$$\epsilon= \left\{
\begin{array}{ll}
1, & \hbox{if \ \ } \sum_I l_i> \sum_Jl_j;\\
-1, & \hbox{otherwise.}
\end{array}
\right.$$
\end{thm}
\begin{proof}
We choose the special way of representing the Chern classes which is encoded in the graph depicted in Fig. \ref{FigGraph}. The vertices of the graph are labeled by elements of the set $[n]$. Each vertex $i=1,\dots,k$ has $d_i$ emanating directed edges.
A bold edge $\overrightarrow{ij}$ means that we choose the representation $$Ch(i)=(ij)-(i\overline{j}),$$
a dashed edge $\overrightarrow{ij}$ means that we choose the representation $$Ch(i)=(ij)+(\overline{i}j).$$
\begin{figure}[h]
\centering \includegraphics[width=10 cm]{graph.eps}
\caption{Encoded representations of $Ch(i)$}\label{FigGraph}
\end{figure}
Denote by $$S_i=\left\{ k+\sum_{j =1}^{i-1} d_j+2, \dots, k+\sum_{j =1}^{i} d_j+1 \right\}, $$
the set of vertices connected with the vertex $i$ by bold edges.
We multiply the Chern classes in three steps. Firstly, we multiply those that correspond to dashed edges and get
\begin{equation}\label{a}
Ch(1) \smile Ch(2)\smile ... \smile Ch(k)= \sum_{I \cup J = [k+1],\ k+1 \in I} (I \overline{J}). \tag{A}
\end{equation}
Secondly, for every $i\in [k]$ we multiply the $d_{i}-1$ representations (of one and the same $Ch(i)$) that correspond to bold edges. This gives
$k$ relations
\begin{equation}\label{b}
Ch^{d_i-1}(i)=\sum_{I \cup J = S_i,\ i \in I} (-1)^{|J|} (I \overline{J}). \tag{B}
\end{equation}
Before we proceed note that:
\begin{enumerate}
\item Pick two nice manifolds, one from the sum (\ref{a}), and the other $(I \overline{J})$ from the sum (\ref{b}). Assuming that $I \cup J = S_i$, the unique common entry of the labels is $i$.
\item The labels of any two nice manifolds from the sum (\ref{b}) which are associated with different $i$'s are disjoint.
\end{enumerate}
Finally, one computes the product of (\ref{a}) and (\ref{b}) using the rules from Proposition \ref{computation_rules}. Every summand $(I \overline{J})$ in the result is a product of one nice manifold $(I_0 \overline{J_0})$ corresponding to a dashed edge, and $k$ nice manifolds $(I_1 \overline{J_1}), \dots,(I_k \overline{J_k})$ corresponding to the bold edges.
\medskip
Let us exemplify the last computation.
\begin{itemize}
\item For $n=6$:
$$Ch^2(1)\smile Ch(2)=[(13)+(\overline{1}3)]\smile [(23)+(\overline{2}3)] \smile[(14)-(1\overline{4}) ] =$$$$
[(123)+(\overline{1}23)+(1\overline{2}3)+(\overline{1}\overline{2}3)] \smile[(14)-(1\overline{4})] =$$$$(1234)-(123\overline{4})+(1\overline{2}34)-(1\overline{2}3\overline{4})-(\overline{1}23\overline{4})
+(\overline{1}234)-(\overline{1}\overline{2}3\overline{4})+(\overline{1}\overline{2}34).$$
\item For $n=7$:
$$Ch^2(1)\smile Ch^2(2)=$$$$[(13)+(\overline{1}3)]\smile[(23)+(\overline{2}3)] \smile [(14)-(1\overline{4}) ] \smile [(25)-(2\overline{5})]=$$$$[(1234)-(123\overline{4})+(1\overline{2}34)-(1\overline{2}3\overline{4})-(\overline{1}23\overline{4})
+(\overline{1}234)-(\overline{1}\overline{2}3\overline{4})+(\overline{1}\overline{2}34)] \smile [(25)-(2\overline{5})]=$$$$(12345)-(1234\overline{5})-(123\overline{4}5)+(123\overline{4}\overline{5})-
(1\overline{2}34\overline{5})+(1\overline{2}345)+$$$$(1\overline{2}3\overline{4}\overline{5})-
(1\overline{2}3\overline{4}5)-(\overline{1}23\overline{4}5)+(\overline{1}23\overline{4}\overline{5})+(\overline{1}2345)-
(\overline{1}234\overline{5})+$$$$(\overline{1}\overline{2}3\overline{4}\overline{5})-
(\overline{1}\overline{2}3\overline{4}5)-(\overline{1}\overline{2}34\overline{5})+(\overline{1}\overline{2}345).$$
\end{itemize}
\end{proof}
\section{Introduction}
Runaway electrons (RE), thermal electrons accelerated to relativistic energies during the rapid termination of a magnetic confinement fusion (MCF) plasma, pose a threat to ITER: if they are not avoided or mitigated before they hit the wall, they can damage plasma facing components \cite{Hender2007,Boozer2015}.
Various strategies to avoid or mitigate RE in MCF plasmas have been proposed, e.g. using resonant magnetic perturbations (RMPs) to deconfine RE before they reach high energies~\cite{Papp2011}, or using either massive gas injection (MGI) or shattered pellet injection (SPI) of high $Z$ impurities to slow down RE through collisional drag and by enhancing synchrotron radiation losses of RE through pitch angle scattering driven by collisions~\cite{Lehnen2011,Hollmann2015,Commaux2016}.
An accurate diagnostic capability that allows good estimates of the RE parameters in MCF plasmas is needed to gain a better understanding of the underlying physics of the RE dynamics, as well as to guide the development of the avoidance and mitigation strategies.
Synchrotron radiation (SR) of RE in MCF plasmas is important for two reasons: SR is one of the main damping mechanisms for RE in the high energy relativistic regime, limiting the maximum energy that RE can reach during a disruption~\cite{Martin-Solis1998,Andersson2001}, and substantially reducing the runaway electron rate for weak (near critical) $E$ fields~\cite{Stahl2015}.
On the other hand, SR is routinely measured in current MCF experiments to infer the RE energy and pitch angle distribution function. The latter is done by interpreting the measured SR spectra and geometric features of the radiation spatial patterns seen by the visible and infrared cameras~\cite{Finken1990,Jaspers1995,Entrop1999,Jaspers2001,Kudyakov2008,DeAngelis2010,Shi2010}.
Most recently, SR was used to infer the characteristic energy and pitch angle of RE in the DIII-D tokamak when MGI was used as the mitigation mechanism~\cite{Yu2013,Hollmann2013}.
In these plasmas the RE parameters were obtained by fitting the measured SR spectra with the single-particle spectrum, that is, the spectrum of a single electron calculated with a single energy and pitch angle, and using characteristic parameters of the plasma as measured at the magnetic axis.
In Ref.~\cite{Stahl2013} the authors showed that using single-particle spectra overestimates the SR by orders of magnitude, which can be misleading when inferring the RE parameters; in general, one should iteratively use the SR spectrum of a trial distribution function for the RE until the best fit to the experimental data is found.
This overestimation, which depending on the wavelength can reach several orders of magnitude, arises from assuming that all runaways emit as much synchrotron radiation as the most strongly emitting particle in the actual runaway distribution function.
Later, the study of Ref.~\cite{Landreman2014} went a step further by solving the Fokker-Planck equation to obtain the RE distribution function for a given set of plasma parameters, and then calculating the corresponding SR spectrum. Again, the authors found that the single-particle spectra can be misleading when used to infer the RE parameters of more realistic RE distribution functions.
Importantly, the above studies did not include any information of the electrons' orbits, thus ignoring confinement and collisionless pitch angle scattering that can substantially modify the SR spectra \cite{Carbajal2017}.
In the present paper we address the long-standing question of the relationship between different runaway electron distribution functions and their corresponding synchrotron emission, simultaneously including: full-orbit effects, the spectral and angular distribution of the synchrotron radiation of each electron, and the basic geometric optics of a camera.
We follow the full-orbit dynamics of ensembles of runaway electrons in DIII-D-like plasmas using the new Kinetic Orbit Runaway electrons Code (KORC) to generate synthetic data to calculate different aspects of their synchrotron radiation.
First, we use mono-energy and mono-pitch angle distributions of runaways as initial conditions in our simulations to study the spatial distribution of the synchrotron radiation on the poloidal plane, and the statistical properties of the expected value of the synchrotron spectra of runaway electrons.
Then, we find relations between the runaway electrons' parameters and both the spatial distribution of the synchrotron emission and the synchrotron spectra as observed by a camera placed at the outer midplane of the plasma.
Finally, we use these results to interpret the synchrotron emission for an avalanche RE distribution function.
In our simulations we observe a strong dependence of the spatial distribution of the radiation on the pitch angle distribution of the runaways. Also, we find that the synchrotron spectrum is very sensitive to oversimplifications of the angular distribution of the synchrotron radiation, which dramatically change its shape and amplitude.
The rest of the paper is organised as follows: in Sec.~\ref{theory} we present a brief summary of the theory of the synchrotron radiation used throughout the paper. In Sec.~\ref{sim_setup} we describe the parameters used in our simulations. In Sec.~\ref{results} we present the study of the relationship between various distribution functions of runaway electrons and their synchrotron emission on the poloidal plane and as measured by a camera. Finally, in Sec.~\ref{conclusions} we summarise our results and discuss their implications in the interpretation of experimental data. Details on the synthetic camera diagnostic are provided in the appendix.
\section{Synchrotron radiation theory}
\label{theory}
In our full-orbit simulations of runaway electrons we calculate the total radiated power, the synchrotron radiation spectra, and the spectral and angular distribution of the radiation.
The total instantaneous synchrotron radiated power of a relativistic electron moving in an arbitrary orbit with velocity $v$ is given by:
\begin{equation}
P_{T}= \frac{e^2}{6\pi \epsilon_0 c^3} \gamma^4 v^4 \kappa^2,
\label{Ptot}
\end{equation}
\noindent
where $\kappa$ is the instantaneous curvature of the electron orbit, $\gamma=1/\sqrt{1 - v^2/c^2}$ is the relativistic factor, $e$ is the magnitude of the electron charge, $c$ is the speed of light, and $\epsilon_0$ is the vacuum permittivity. For a relativistic electron moving in an electric $\bm{E}$ and magnetic $\bm{B}$ field, the instantaneous curvature $\kappa$ is given by:
\begin{equation}
\kappa = \frac{e}{\gamma m_e v^3} |\bm{v}\times \left( \bm{E} + \bm{v}\times\bm{B} \right)| \, .
\label{kappa}
\end{equation}
In the case when $|\bm{E}| \ll |\bm{v}\times \bm{B}|$, the curvature can be approximated as $\kappa \approx eB \sin{\theta}/\gamma m_e v$, where $\theta$ is the pitch angle of the electron, that is, the angle between the vectors $\bm{v}$ and $\bm{B}$.
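This small-field approximation is easy to check numerically. The sketch below (illustrative values, not output of KORC) evaluates both the exact curvature of Eq.~(\ref{kappa}) with $\bm{E}=0$ and the approximation $\kappa \approx eB\sin\theta/\gamma m_e v$ for a 30 MeV electron with a $10^\circ$ pitch angle in a 2.19 T field.

```python
# Compare the exact curvature kappa = e|v x (E + v x B)|/(gamma*m_e*v^3)
# with the approximation kappa ~ e*B*sin(theta)/(gamma*m_e*v) for E = 0.
import math

E_CHARGE, M_E, C = 1.602176634e-19, 9.1093837015e-31, 2.99792458e8

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0])

def norm(a):
    return math.sqrt(sum(x * x for x in a))

gamma = 30e6 * E_CHARGE / (M_E * C**2)       # total energy 30 MeV
v_mag = C * math.sqrt(1.0 - 1.0 / gamma**2)
theta = math.radians(10.0)
B = (0.0, 0.0, 2.19)                         # tesla, field along z
v = (v_mag * math.sin(theta), 0.0, v_mag * math.cos(theta))
E = (0.0, 0.0, 0.0)                          # no electric field

vxB = cross(v, B)
lorentz = tuple(E[i] + vxB[i] for i in range(3))
kappa_exact = E_CHARGE * norm(cross(v, lorentz)) / (gamma * M_E * v_mag**3)
kappa_approx = E_CHARGE * B[2] * math.sin(theta) / (gamma * M_E * v_mag)
print(kappa_exact, kappa_approx)             # agree to rounding, ~3.8 1/m
```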
The spectral distribution of the synchrotron radiation of relativistic electrons is given by \cite{Schwinger1949}:
\begin{equation}
P_R(\lambda) = \frac{1}{\sqrt{3}} \frac{ce^2}{\epsilon_0 \lambda^3} \left( \frac{mc^2}{\mathcal{E}}\right)^2\int_{\lambda_c/\lambda}^\infty K_{5/3}(\eta)d\eta
\label{P_lambda}
\end{equation}
\noindent
Here, $\mathcal{E}=\gamma m_e c^2$ is the relativistic electron's energy, $K_{5/3}(\eta)$ is the modified Bessel function of the second kind of order $5/3$, and $\lambda$ is the wavelength at which the electron is radiating. The critical wavelength $\lambda_c = 4\pi/(3\kappa \gamma^3)$ is the wavelength characterizing $P_R(\lambda)$, dividing the spectrum into two parts of equal radiated power, that is, half the total power is radiated at wavelengths $\lambda > \lambda_c$, and the rest is radiated at wavelengths $\lambda < \lambda_c$ \cite{Mobilio2015}. We should note that Eq.~(\ref{P_lambda}) is completely general and can be used for calculating the synchrotron spectrum of a relativistic electron moving in an arbitrary orbit with radius of curvature $1/\kappa$. In Ref.~\cite{Pankratov1999} an approximate expression for the spectral distribution of the synchrotron radiation of runaway electrons with small pitch angle in tokamaks was derived, and used in Refs.~\cite{Jaspers2001,Yu2013,Stahl2013}.
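As an illustration (a back-of-the-envelope estimate, not output of the paper's code), the critical wavelength for $\mathcal{E}=30$ MeV, $\theta=10^\circ$, and $B=2.19$ T falls in the infrared, consistent with the visible and infrared cameras mentioned in the introduction.

```python
# Estimate lambda_c = 4*pi/(3*kappa*gamma^3) using the small-field
# approximation kappa ~ e*B*sin(theta)/(gamma*m_e*c) with v ~ c.
import math

E_CHARGE, M_E, C = 1.602176634e-19, 9.1093837015e-31, 2.99792458e8

gamma = 30e6 * E_CHARGE / (M_E * C**2)                   # total energy 30 MeV
kappa = E_CHARGE * 2.19 * math.sin(math.radians(10.0)) / (gamma * M_E * C)
lambda_c = 4.0 * math.pi / (3.0 * kappa * gamma**3)
print(f"lambda_c = {lambda_c * 1e6:.1f} um")             # a few microns (IR)
```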
The most detailed level of description for the synchrotron radiation emitted by a relativistic electron is given by its spectral and full angular distribution, which, when the angle between the direction of emission and the direction of motion is small, is given by:
\begin{eqnarray}
P_R(\lambda,\psi,\chi) & = & \frac{c e^2}{\sqrt{3}\epsilon_0 \kappa \lambda^4\gamma^4} \left( 1 + \gamma^2 \psi^2 \right)^2 \times \nonumber \\
& & \left\lbrace \frac{\gamma^2 \psi^2}{1 + \gamma^2 \psi^2} K_{1/3}(\xi) \cos\left[\frac{3}{2}\xi\left( z + \frac{1}{3}z^3\right) \right] \right. \nonumber \\
& & \left. -\frac{1}{2}K_{1/3}(\xi)\left( 1 + z^2 \right)\cos\left[ \frac{3}{2}\xi\left( z + \frac{1}{3}z^3\right) \right] \right. \nonumber \\
& & \left. + K_{2/3}(\xi)z\sin\left[\frac{3}{2}\xi\left( z + \frac{1}{3}z^3\right) \right] \right\rbrace \, ,
\label{P_ang}
\end{eqnarray}
\noindent
where $\xi = 2\pi \left( 1/\gamma^2 + \psi^2 \right)^{3/2}/3\lambda\kappa$ and $z = \gamma \chi/\sqrt{1 + \gamma^2 \psi^2}$. Here $\psi$ is the angle between the direction of emission $\hat{\bm{n}}$ and the instantaneous orbital plane containing the tangent and normal vectors, that is, $\psi$ is the complementary angle to the angle between $\hat{\bm{n}}$ and the binormal vector defined below; $\chi$ is the angle between the projection of $\hat{\bm{n}}$ on the instantaneous orbital plane and the instantaneous direction of motion $\bm{v}$. The unit vector $\hat{\bm{n}}$ defining the direction of emission points from the electron's position towards the observer measuring the radiation.
It is worth mentioning that Eq.~(\ref{P_ang}) is obtained when going from Eq.~(II.31) to Eq.~(II.32) of Ref.~\cite{Schwinger1949}.
In Eq.~(\ref{P_ang}) it is assumed that the synchrotron radiation is emitted mainly along $\bm{v}$, that is, small $\psi$ and $\chi$. $K_{1/3}$ and $K_{2/3}$ are the modified Bessel functions of second kind of order $1/3$ and $2/3$, respectively.
The instantaneous orbital plane of the electron is uniquely determined by the tangent vector $\hat{\bm{\mathcal{T}}}$, which corresponds to the unit electron velocity $\hat{\bm{v}} = \bm{v}/v$, the normal vector $\hat{\bm{\mathcal{N}}}$, and the binormal vector $\hat{\bm{\mathcal{B}}} = \hat{\bm{\mathcal{T}}} \times \hat{\bm{\mathcal{N}}}$, which is perpendicular to the instantaneous orbital plane. For a relativistic electron moving in an arbitrary electric and magnetic field
\begin{equation}
\hat{\bm{\mathcal{B}}} = \frac{\bm{v}\times{\dot{\bm{v}}}}{|\bm{v}\times{\dot{\bm{v}}}|} = \frac{\bm{v}\times \left( \bm{E} + \bm{v}\times \bm{B}\right)}{|\bm{v}\times \left( \bm{E} + \bm{v}\times \bm{B}\right)|}\, .
\end{equation}
\noindent
$P_R(\lambda,\psi,\chi)$ in Eq.~(\ref{P_ang}) decreases exponentially as a function of $\psi$ through the function $\xi$; this is due to $K_{1/3}(\xi)$ and $K_{2/3}(\xi)$. On the other hand, Eq.~(\ref{P_ang}) oscillates as a function of $\chi$ through the function $z$, and can become negative for large values of $\chi$ or $\psi$.
In order to make an efficient search on the $\psi\chi$-plane where $P_R(\lambda,\psi,\chi)$ is positive and thus physically meaningful, we restrict our search to a rectangular domain containing the region of validity defined by the values \cite{jackson2007classical}:
\begin{equation}
\psi_c = \left( \frac{3\kappa \lambda}{4\pi} \right)^{1/3} \, ,
\label{psi_c}
\end{equation}
\noindent
and $\chi_c$, which is a solution of the equation:
\begin{equation}
\frac{\gamma^3}{3}\chi_c^3 + \gamma \chi_c - \frac{\pi}{3\xi} = 0\, .
\label{chi_c}
\end{equation}
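A useful consistency check (our observation for illustration, with the same assumed parameters as above): at $\lambda=\lambda_c$ the bound $\psi_c$ of Eq.~(\ref{psi_c}) reduces exactly to the natural aperture $1/\gamma$, since $\lambda_c=4\pi/(3\kappa\gamma^3)$, while at longer wavelengths the emission cone opens as $\lambda^{1/3}$.

```python
# psi_c = (3*kappa*lambda/(4*pi))^(1/3); at lambda = lambda_c this is 1/gamma.
import math

E_CHARGE, M_E, C = 1.602176634e-19, 9.1093837015e-31, 2.99792458e8

gamma = 30e6 * E_CHARGE / (M_E * C**2)                   # total energy 30 MeV
kappa = E_CHARGE * 2.19 * math.sin(math.radians(10.0)) / (gamma * M_E * C)
lambda_c = 4.0 * math.pi / (3.0 * kappa * gamma**3)

def psi_c(lam):
    return (3.0 * kappa * lam / (4.0 * math.pi)) ** (1.0 / 3.0)

print(abs(psi_c(lambda_c) - 1.0 / gamma) < 1e-12)        # True
# At longer wavelengths the cone is wider than the natural aperture:
print(psi_c(10 * lambda_c) / (1.0 / gamma))              # 10**(1/3) ~ 2.15
```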
Fig.~\ref{P_ang_contours}(a) shows an example of $P_R(\lambda,\psi,\chi)$ in the domain defined by $\psi_c$ and $\chi_c$ for a relativistic electron with energy $\mathcal{E} = 30$ MeV and pitch angle $\theta_0 = 10^\circ$ at the high-field side (HFS) of a DIII-D-like magnetic field.
From this figure we observe that the synchrotron radiation is emitted within an ellipse in the $\psi\chi$-plane with major and minor radii bounded by $\psi_c$ and $\chi_c$. This means that the radiation is emitted within an elliptic cone with its axis along $\hat{\bm{v}}$.
Previous studies where synchrotron emission has been used to diagnose runaway electrons \cite{Finken1990,Jaspers1995,Yu2013,Wongrach2014} have simplified the synchrotron angular distribution to either a $\delta$ function in space, that is, $P_{R}^\delta(\lambda) = P_R(\lambda)\cdot \delta (\psi)\cdot \delta (\chi)$, or to a circular cone with ``natural aperture'' $\alpha = 1/\gamma$, that is, $P_{R}^{\Omega_\alpha}(\lambda) = P_R(\lambda)/\Omega_\alpha$ for $\psi^2 + \chi^2 \leq \alpha^2$, and $P_{R}^{\Omega_\alpha}(\lambda) =0$ otherwise.
In the former case, all the radiation $P_R(\lambda)$ of Eq.~(\ref{P_lambda}) is emitted along the velocity of the particle, while in the latter case $P_R(\lambda)$ is allowed to ``spread'' uniformly within the solid angle subtended by $\alpha$, that is, $\Omega_\alpha = 2\pi [1 - \cos(\alpha) ]$. Throughout this paper, we will refer to $P_{R}^{\Omega_\alpha}(\lambda)$ as the simplified model for the angular distribution of the synchrotron radiation. In Fig.~\ref{P_ang_contours}(b) we show the corresponding $P_{R}^{\Omega_\alpha}(\lambda)$ for the same values of Fig.~\ref{P_ang_contours}(a).
\begin{figure}[ht!]
\begin{center}
\includegraphics[scale=0.475]{Fig1.pdf}
\end{center}
\caption{Filled contours of the angular distribution of the synchrotron radiation emitted by a relativistic electron with $\mathcal{E} = 30$ MeV, and $\theta_0 = 10^\circ$ at the HFS of a DIII-D-like magnetic field. The magnetic field at the magnetic axis is $B_0 = 2.19$ T and the safety factor at the magnetic axis and the edge are $q=1$, and $q=3$, respectively. Panel a): full spectral and angular distribution $P_R(\lambda,\psi,\chi)$ of Eq.~(\ref{P_ang}). Panel b): simplified model for the angular distribution $P_{R}^{\Omega_\alpha}(\lambda)$. Panel c): upper limits of the angles $\psi$ and $\chi$ of the spectral and angular distribution of Eq.~(\ref{P_ang}) as a function of the wavelength. The horizontal black line shows the values of $\psi_c$ and $\chi_c$ for the simplified model for the angular distribution $P_{R}^{\Omega_\alpha}(\lambda)$. The dashed, vertical line shows the wavelength at which panels a) and b) are computed.}
\label{P_ang_contours}
\end{figure}
\section{Simulations setup}
\label{sim_setup}
In order to study the relationship between the runaway electron distribution functions and different aspects of their synchrotron radiation we have used the new Kinetic Orbit Runaway electrons Code (KORC). This code is a parallel Fortran 95 code that follows large ensembles of runaway electrons in the full 6-D phase space. KORC efficiently exploits shared-memory computational systems by using the hybrid OpenMP + MPI paradigm for parallelisation, showing nearly ideal weak and strong scaling. KORC incorporates the Landau-Lifshitz formulation of the radiation reaction force \cite{Carbajal2017}, and Coulomb collisions of RE with the thermal plasma using the model of Ref.~\cite{Papp2011}.
In all the simulations reported in this paper we have used the analytical field of Ref.~\cite{Carbajal2017} with DIII-D-like plasma parameters where RE occur, that is, the magnitude of the magnetic field at the magnetic axis $B_0=2.1$ T, safety factor $q_0 = 1$ at the magnetic axis and $q_{edge} = 3$ at the plasma edge, which is located at $r_{edge} = 0.5$ m. The major radius of the plasma is $R_0 = 1.5$ m. The magnetic field model in toroidal coordinates is given by:
\begin{equation}
\bm{B}(r,\vartheta) = \frac{1}{1 + \eta \cos{\vartheta}} \left[ B_0 \hat{\bm{e}}_\zeta + B_\vartheta(r) \hat{\bm{e}}_\vartheta \right] \, ,
\label{Eq3}
\end{equation}
\noindent
where $\eta = r/R_0$ is the inverse aspect ratio and $B_\vartheta(r) = \eta B_0/q(r)$ is the poloidal magnetic field. The safety factor is
\begin{equation}
q(r) = q_0\left( 1 + \frac{r^2}{\varepsilon^2} \right) \, .
\label{Eq4}
\end{equation}
\noindent
The constant $\varepsilon$ is obtained from the values of $q_0$ and $q(r)$ at the plasma edge $r=r_{edge}$. The coordinates $(r,\vartheta, \zeta)$ are
defined as $x= \left( R_0 + r \cos \vartheta \right ) \sin \zeta$, $y =\left( R_0 + r \cos \vartheta \right ) \cos \zeta$, and $z =r \sin \vartheta$, where $(x,y,z)$ are the Cartesian coordinates. In these coordinates, $r$ denotes the minor radius, $\vartheta$ the poloidal angle, and $\zeta$ the toroidal angle. Note that in this right-handed toroidal coordinate system, the toroidal angle $\zeta$ rotates clockwise, that is, it is anti-parallel to the azimuthal angle, $\phi=\pi/2-\zeta$, of the standard cylindrical coordinate system.
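As an illustration (not part of KORC itself), the analytical field model of Eqs.~(\ref{Eq3}) and (\ref{Eq4}) can be evaluated with a few lines of Python; the parameter values are those quoted above, and the function names are ours:

```python
import numpy as np

# DIII-D-like parameters from the text.
B0, R0 = 2.1, 1.5                   # T, m
q0, q_edge, r_edge = 1.0, 3.0, 0.5  # safety factors and edge radius (m)

# The constant epsilon in Eq. (Eq4) follows from requiring q(r_edge) = q_edge.
eps = r_edge / np.sqrt(q_edge / q0 - 1.0)

def q(r):
    """Safety factor profile of Eq. (Eq4)."""
    return q0 * (1.0 + r**2 / eps**2)

def B_field(r, theta):
    """Toroidal and poloidal components of Eq. (Eq3) at (r, vartheta)."""
    eta = r / R0                            # inverse aspect ratio
    B_pol = eta * B0 / q(r)                 # poloidal field B_vartheta(r)
    factor = 1.0 / (1.0 + eta * np.cos(theta))
    return factor * B0, factor * B_pol      # (B_zeta, B_vartheta)
```

At the magnetic axis ($r=0$) this returns $(B_0, 0)$, and the field magnitude is larger at the HFS ($\vartheta=\pi$) than at the LFS ($\vartheta=0$), consistent with the $1/(1+\eta\cos\vartheta)$ dependence of Eq.~(\ref{Eq3}).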
Throughout the paper we use different distribution functions for the runaway electrons in the energy and pitch-angle space $f_{RE}(\mathcal{E},\theta)$. We use $5\times 10^6$ computational particles uniformly distributed in a torus as the spatial distribution of the different $f_{RE}(\mathcal{E},\theta)$, see for example Fig.~\ref{PR_poloidal_plane_30MeV}(c) or Fig.~\ref{camera_setup}(a). The major radii of the torus and the radii of the RE beam used for the spatial initial condition are specified in each section, and are chosen so that all the runaways remain confined during the simulation \cite{Carbajal2017}.
In our simulations the plasma current is anti-parallel to the toroidal electric field.
The simulation time $t_{sim} \sim 10 \ \mu$s is set so that the least energetic RE considered in our simulations undergo 30 poloidal turns. Because this time is much smaller than both the collisional time $\tau_{coll} = 4\pi \epsilon_0^2 m_e^2 c^3/(n_e e^4 \log{\Lambda})\sim 10 \ \mbox{ms}$ and the characteristic time for radiation losses $\tau_{R} = 6\pi\epsilon_0(m_ec)^3/(e^4B^2) \sim 1\ \mbox{s}$, we have turned off the radiation reaction force and collisions in KORC, so the energy is conserved in these simulations. This simulation time is observed to be sufficient to reach a collisionless steady-state distribution function, that is, a time-independent solution of the full orbit $f_{RE}(\mathcal{E},\theta)$.
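The two timescales quoted above are straightforward to evaluate; the following Python sketch uses rounded SI constants, an assumed Coulomb logarithm $\log\Lambda = 15$, and the density $n_e=3.9\times10^{20}$ m$^{-3}$ used later in Sec.~\ref{avalanching_runaways_poloidal}:

```python
import math

# Rounded SI constants.
eps0 = 8.854e-12   # vacuum permittivity, F/m
m_e = 9.109e-31    # electron mass, kg
c = 2.998e8        # speed of light, m/s
e = 1.602e-19      # elementary charge, C

def tau_collisional(n_e, log_lambda=15.0):
    """Collisional time: 4*pi*eps0^2*m_e^2*c^3 / (n_e*e^4*log_lambda)."""
    return 4.0 * math.pi * eps0**2 * m_e**2 * c**3 / (n_e * e**4 * log_lambda)

def tau_radiation(B):
    """Characteristic radiation-loss time: 6*pi*eps0*(m_e*c)^3 / (e^4*B^2)."""
    return 6.0 * math.pi * eps0 * (m_e * c)**3 / (e**4 * B**2)

tau_coll = tau_collisional(3.9e20)   # of order milliseconds
tau_R = tau_radiation(2.1)           # of order one second
```

Both results are several orders of magnitude above $t_{sim} \sim 10\ \mu$s, which justifies turning off collisions and the radiation reaction force.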
\section{Full-orbit effects on synchrotron emission of various RE distribution functions}
\label{results}
In this section we study the collisionless pitch angle dispersion effects on the synchrotron radiation spectra emitted by various runaway electron distribution functions.
Previous studies using a full-orbit description of RE in toroidal plasmas \cite{Liu2016,Wang2016,Carbajal2017} have shown that, due to the variation of the magnetic field seen by runaways along their orbits, they experience collisionless pitch angle dispersion, even in the case where collisions or synchrotron radiation losses are not included. Because the synchrotron radiation of each electron strongly depends on its pitch angle, it is expected that the resulting synchrotron emission of different ensembles of runaway electrons will show non-trivial changes with respect to the results inferred from distributions that do not take into account collisionless pitch angle dispersion effects. The aim of this section is to study these changes in detail.
\subsection{Synchrotron emission of mono-energetic and mono-pitch angle RE distributions on the poloidal plane}
\label{mono-section_poloidal}
\begin{figure*}[ht!]
\begin{center}
\includegraphics[scale=0.55]{Fig2.pdf}
\end{center}
\caption{Spatial distribution on the poloidal plane of the total and integrated synchrotron radiated power of a simulated ensemble of runaway electrons with initial $\mathcal{E}=30$ MeV and $\theta_0=10^\circ$. Panel a): spatial distribution of the total synchrotron radiated power $P_T$ of Eq.~(\ref{Ptot}). The radiation is more intense at the HFS and less intense at the LFS. An up-down symmetry is observed. Panel b): spatial distribution of the radiated power $P_R(\lambda)$ of Eq.~(\ref{P_lambda}) integrated over $\lambda\in (100,10000)$ nm. The same qualitative features of $P_T$ are observed. Panel c): spatial distribution of the full orbit RE distributions. These same features of $P_T$ and the integrated synchrotron radiation power are observed in all the other simulations of initially mono-energetic and mono-pitch angle distributions of runaway electrons. For producing these figures we computed the histograms of each quantity using a grid of $75\times 75$ bins.}
\label{PR_poloidal_plane_30MeV}
\end{figure*}
We start our study of the collisionless pitch angle dispersion effects on synchrotron radiation emission by using mono-energetic and mono-pitch angle runaway electron distributions as the initial condition of KORC simulations. The kinetic energies (i.e. not including the rest mass energy $m_ec^2$) of the simulated runaways are $\mathcal{E}_0 = 10$ MeV and 30 MeV, and the initial pitch angles are $\theta_0 = 5^\circ, 10^\circ, 15^\circ$, and $20^\circ$. This means that our initial distribution functions are delta functions in the energy and pitch angle, $f_{RE}(\mathcal{E},\theta,t=0) = \delta (\theta - \theta_0)\delta (\mathcal{E} - \mathcal{E}_0)$.
The major radii of the torus used for the spatial initial condition are $R=1.475$ m and $R=1.43$ m for RE with $\mathcal{E} = 10$ MeV and 30 MeV, respectively. In all cases we use the radius of the RE beam $r=0.2$ m. In our simulations we evolve the runaway electrons by $t\sim 10 \ \mu$s, which is enough for reaching a steady-state distribution function.
In Fig.~\ref{PR_poloidal_plane_30MeV}(a) we show the spatial distribution of the total synchrotron radiated power $P_T$ of Eq.~(\ref{Ptot}) for the ensemble of runaway electrons with $\mathcal{E}=30$ MeV and $\theta_0=10^\circ$. The intensity of the radiation is higher at the high-field side (HFS) and lower at the low-field side (LFS), and the spatial distribution shows an up-down symmetry. Fig.~\ref{PR_poloidal_plane_30MeV}(b) shows the spatial distribution of the radiated power $P_R(\lambda)$ of Eq.~(\ref{P_lambda}) integrated over $\lambda\in (100,10000)$ nm.
This range of wavelengths encompasses the visible and a part of the infrared portions of the electromagnetic spectrum, usually used in experimental studies.
The same qualitative features of $P_T$ are observed.
Fig.~\ref{PR_poloidal_plane_30MeV}(c) shows the spatial distribution of runaways on the poloidal plane of the simulation of panels (a) and (b). For producing these figures we computed the histograms of each quantity using a grid of $75\times 75$ bins.
These same features of $P_T$ and the integrated synchrotron radiation power are observed in all the other simulations of initially mono-energetic and mono-pitch angle distributions of runaway electrons.
In Fig.~\ref{spectra_poloidal_plane_30MeV} we show the comparison between the expected value of the synchrotron radiation spectra for different full orbit $f_{RE}(\mathcal{E},\theta)$, that is,
\begin{equation}
\mathcal{P}_R(\lambda) = \int \int f_{RE}(\mathcal{E},\theta) P_R(\lambda,\mathcal{E},\theta) d \mathcal{E} d \theta\ ,
\label{<PR>}
\end{equation}
\noindent
and the so-called single-particle spectrum, namely, the synchrotron spectrum of Eq.~(\ref{P_lambda}) computed using the initial values for the energy and pitch angle of the runaways, and characteristic values for the magnetic field (taken at the magnetic axis). In this figure we only show the simulations with $\mathcal{E}_0 = 30$ MeV and $\theta_0=5^\circ,10^\circ,$ and $20^\circ$. The other simulations show similar results.
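Since $f_{RE}(\mathcal{E},\theta)$ is represented by an ensemble of computational particles, the double integral of Eq.~(\ref{<PR>}) reduces in practice to an average of $P_R(\lambda,\mathcal{E},\theta)$ over the particles. The Python sketch below illustrates this Monte Carlo estimate; the spectrum function and the Gaussian ensemble are placeholders of our own, not the actual Eq.~(\ref{P_lambda}) or KORC output:

```python
import numpy as np

rng = np.random.default_rng(0)

def P_R(lmbda, energy, pitch):
    """Placeholder single-particle spectrum standing in for Eq. (P_lambda);
    the real expression involves an integral of Bessel functions."""
    lam_c = 1.0e-6 / (energy * (1.0 + pitch))  # hypothetical critical wavelength
    x = lam_c / lmbda
    return x**3 * np.exp(-x)

# Ensemble of computational particles sampled from f_RE(E, theta)
# (Gaussian stand-ins for the full-orbit distribution):
energies = rng.normal(30.0, 1.0, 10000)   # MeV
pitches = rng.normal(0.17, 0.05, 10000)   # rad

def ensemble_spectrum(lmbda):
    """Monte Carlo estimate of Eq. (<PR>): the integral against f_RE
    becomes an average over particles sampled from f_RE."""
    return float(P_R(lmbda, energies, pitches).mean())
```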
Among the differences between $\mathcal{P}_R(\lambda)$ and the corresponding single-particle spectra we observe that the maximum of $\mathcal{P}_R(\lambda)$ tends to move towards smaller wavelengths, and its magnitude is larger in all cases. These changes in the shape of $\mathcal{P}_R(\lambda)$ are particularly important because the runaway electrons' parameters are usually inferred by fitting the experimentally measured synchrotron spectrum with the single-particle spectrum.
In Ref.~\cite{Stahl2013} the authors used pre-computed distribution functions for the runaways to show that $\mathcal{P}_R(\lambda)$ can be very different from what is called the single-particle spectrum. This was also shown in Ref.~\cite{Landreman2014} for distribution functions of runaways obtained from solving the Fokker-Planck equation with radiation losses and collisions in 0-D simulations, that is, not including spatial information.
In our simulations any departure of $\mathcal{P}_R(\lambda)$ from the single-particle spectra results from allowing the magnetic field to have a spatial dependence, which in turn translates into collisionless pitch angle dispersion. In Fig.~\ref{pitch_stats}(b) we show the full orbit, steady state distribution functions. We observe that as the value of the relative dispersion of the pitch angle $\sigma_\theta/\mu_\theta$ (Fig.~\ref{pitch_stats}(a)) increases, the departure of $\mathcal{P}_R(\lambda)$ from the single-particle spectra becomes larger. We measure this departure using the relative difference between the integrated power of the two spectra in the range of wavelengths $\lambda\in (100,10000)$ nm, shown on the right axis of Fig.~\ref{pitch_stats}(a) as $\Delta P_R$.
Here $\mu_\theta$ and $\sigma_\theta$ are the mean and standard deviation of the full orbit $f_{RE}(\mathcal{E},\theta)$.
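For reference, $\Delta P_R$ is obtained by integrating the two spectra over a common wavelength grid and taking their relative difference; a minimal sketch (with our own function names) is:

```python
import numpy as np

def integrated_power(spec, lmbda):
    """Trapezoidal integral of a spectrum over the wavelength grid."""
    return float(np.sum(0.5 * (spec[1:] + spec[:-1]) * np.diff(lmbda)))

def delta_P_R(lmbda, spec_full, spec_single):
    """Relative difference between the integrated powers of the full-orbit
    and single-particle spectra, here over lambda in (100, 10000) nm."""
    P_full = integrated_power(spec_full, lmbda)
    P_single = integrated_power(spec_single, lmbda)
    return (P_full - P_single) / P_single
```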
\begin{figure}[ht!]
\begin{center}
\includegraphics[scale=0.525]{Fig3.pdf}
\end{center}
\caption{Comparison between the synchrotron radiation spectra $\mathcal{P}_R(\lambda)$ and the corresponding single-particle spectra. Panel a): $\mathcal{P}_R(\lambda)$ in Eq.~(\ref{<PR>}) calculated for a simulation with $\mathcal{E}_0 = 30$ MeV and $\theta_0 = 5^\circ$. The corresponding single-particle spectrum is calculated using the above values for the energy and pitch angle and the value of the magnetic field at the magnetic axis $B = 2.1$ T. Panel b): same as panel a) for $\theta_0 = 10^\circ$. Panel c): same as panel a) for $\theta_0 = 20^\circ$.}
\label{spectra_poloidal_plane_30MeV}
\end{figure}
\begin{figure}[ht!]
\begin{center}
\includegraphics[scale=0.53]{Fig4.pdf}
\end{center}
\caption{Collisionless steady state distribution functions of runaway electrons. Panel a): left axis, relative dispersion of the pitch angle $\sigma_\theta/\mu_\theta$; right axis, the relative difference between the integrated power of the two spectra $\Delta P_R$ in the range of wavelengths $\lambda\in (100,10000)$ nm. Here $\mu_\theta$ and $\sigma_\theta$ are the mean and standard deviation of the full orbit $f_{RE}(\mathcal{E},\theta)$. Panel b): collisionless, steady state distribution functions of simulated runaway electrons for various initial pitch angles and the two energies $\mathcal{E}_0=10$ MeV (dashed lines), and $30$ MeV (solid lines). The departure of $\mathcal{P}_R(\lambda)$ from the single-particle spectra becomes larger as $\sigma_\theta/\mu_\theta$ becomes larger.}
\label{pitch_stats}
\end{figure}
\subsection{Synchrotron emission of mono-energetic and mono-pitch angle RE distributions as measured by a camera}
\label{mono-section_camera}
We now go a step further and calculate the spatial distribution and spectra of the synchrotron radiation as measured by a camera placed at the outer midplane of the plasma. In this calculation each pixel of the camera measures the synchrotron radiation integrated along the corresponding line of sight.
To the best of our knowledge this calculation is the first of its kind, including the exact full-orbit dynamics of runaway electrons in toroidal magnetic fields and the basic geometric optics of a camera.
In the Appendix we describe in detail the set-up of the camera in the simulations. For this calculation we have used the full orbit information of each electron in our simulations and two models for the angular distribution, namely, the full angular distribution $P_R(\lambda,\psi,\chi)$ in Eq.~(\ref{P_ang}) and the simplified model for the angular distribution $P_{R}^{\Omega_\alpha}(\lambda) = P_R(\lambda)/\Omega_\alpha$.
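The simplified model amounts to dividing the spectral power by the solid angle $\Omega_\alpha$ of the emission cone; assuming a circular cone of half-aperture $\alpha = 1/\gamma$, so that $\Omega_\alpha = 2\pi(1-\cos\alpha)$, a minimal sketch is:

```python
import math

def solid_angle(alpha):
    """Solid angle (in sr) of a circular cone with half-aperture alpha."""
    return 2.0 * math.pi * (1.0 - math.cos(alpha))

def P_simplified(P_R_lambda, gamma):
    """Simplified angular model: the spectral power P_R(lambda) is assumed
    to be emitted isotropically within a cone of natural aperture 1/gamma."""
    return P_R_lambda / solid_angle(1.0 / gamma)
```

For a 30 MeV electron ($\gamma \approx 60$) the cone solid angle is below $10^{-3}$ sr, so this model concentrates all of the radiated power into a very narrow cone.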
In Fig.~\ref{camera_30MeV} we show the spatial distribution of the integrated synchrotron emission of simulations with $\mathcal{E}_0=30$ MeV and $\theta_0=5^\circ, 10^\circ$, and $20^\circ$ calculated with $P_R(\lambda,\psi,\chi)$. We have integrated the radiation over the range of wavelengths $\lambda \in(100,10000)$ nm. No significant difference is observed if a visible or infrared filter is used for the synchrotron radiation. Using $P_R^{\Omega_\alpha}(\lambda)$ for calculating the spatial distribution of the synchrotron emission yields qualitatively similar results, showing the same spatial features but with an intensity one order of magnitude larger.
Contrary to the spatial distribution of the synchrotron emission on the poloidal plane (cf. Fig.~\ref{PR_poloidal_plane_30MeV}), the spatial distribution of the synchrotron emission seen by the camera shows a variety of non-symmetric shapes, transitioning from a crescent shape to an ellipse shape as the mean pitch angle increases. For distributions of runaway electrons with $\mathcal{E}_0 < 30$ MeV and with pitch angles in the range $\theta_0 \leq 20^\circ$ we always observe crescent shapes.
In addition to the different shapes of the radiation seen by the camera, we observe a shift of the bright regions towards the HFS as we increase the pitch angle, even though the actual spatial distribution of the runaways remains fairly symmetric and localised around the magnetic axis, see Fig.~\ref{PR_poloidal_plane_30MeV}(c). This shift of the bright regions towards the HFS strongly depends on the pitch angle of the electrons, becoming larger as we increase the pitch angle; its dependence on energy is observed to be rather weak, increasing with energy only for $\theta_0\geq 20^\circ$.
\begin{figure*}[ht!]
\begin{center}
\includegraphics[scale=0.55]{Fig5.pdf}
\end{center}
\caption{Spatial distribution of the integrated synchrotron radiation of simulated runaway electrons with energy $\mathcal{E}_0 = 30$ MeV and various initial pitch angles as measured by a camera. Panel a): spatial distribution of the integrated synchrotron radiation for the simulation with initial pitch angle $\theta_0=5^\circ$. Panel b): same as panel a) for $\theta_0=10^\circ$. Panel c): same as panel a) for $\theta_0=20^\circ$. For this calculation the camera has been placed at the outer midplane, at a radial distance $R_{sc}=2.4$ m from the center of the plasma. The other parameters of the camera are described in the appendix of Sec.~\ref{Apendix1}. For this calculation we have integrated the radiation over the range of wavelengths $\lambda \in(100,10000)$ nm. We observe a transition from a crescent shape to an ellipse shape for the spatial distribution of the radiation as we go from small to large initial pitch angles.}
\label{camera_30MeV}
\end{figure*}
\begin{figure}[ht!]
\begin{center}
\includegraphics[scale=0.525]{Fig6.pdf}
\end{center}
\caption{Synchrotron radiation spectra of simulated runaway electrons with $\mathcal{E}_0=30$ MeV as measured by the camera. For comparison purposes we show the spectra calculated using $P_R(\lambda,\psi,\chi)$ (solid blue line) and $P_R^{\Omega_\alpha}(\lambda)$ (dashed red line). Panel a): synchrotron radiation spectra for an ensemble of runaways with $\theta_0=5^\circ$. Panel b): same as panel a) but for $\theta_0=10^\circ$. Panel c): same as panel a) but for $\theta_0=20^\circ$. The amplitude of the spectra is approximately sixty times larger when calculated using $P_R^{\Omega_\alpha}(\lambda)$ with respect to the result obtained with $P_R(\lambda,\psi,\chi)$. Also, the maximum of the spectra is shifted towards larger wavelengths when using $P_R^{\Omega_\alpha}(\lambda)$. These large differences may result in underestimating the density and pitch angles of the runaway electrons if $P_R^{\Omega_\alpha}(\lambda)$ is used to interpret the experimental measurements.}
\label{spectra_camera_30MeV}
\end{figure}
Finally, we calculate the synchrotron radiation spectra of the simulated distributions of runaways as measured by the camera. In this case we regard the camera as one big spectrometer, merging the information of all the pixels of the camera. This calculation can be done using only one or a small subset of pixels of the camera if needed.
In Fig.~\ref{spectra_camera_30MeV} we show the spectra of simulated runaway electrons with $\mathcal{E}_0 = 30$ MeV and various pitch angles. We calculate the spectra using both the full angular distribution $P_R(\lambda,\psi,\chi)$, and the simplified angular distribution $P_R^{\Omega_\alpha}(\lambda)$.
The spectra calculated using the full angular distribution $P_R(\lambda,\psi,\chi)$ show the same features as the spectra of Fig.~\ref{spectra_poloidal_plane_30MeV}, namely, the amplitude of the spectra becomes larger and the maximum of the spectra shifts towards smaller wavelengths as the pitch angle increases.
The spectra calculated with $P_R(\lambda,\psi,\chi)$ and $P_R^{\Omega_\alpha}(\lambda)$ differ in their magnitude, the latter being approximately sixty times larger, and in their shape, with the maximum of the spectra shifted towards larger wavelengths when $P_R^{\Omega_\alpha}(\lambda)$ is used. These large differences may result in underestimating the density and pitch angles of the runaway electrons if $P_R^{\Omega_\alpha}(\lambda)$ is used to interpret the experimental measurements.
We have explored the case in which the ``natural aperture'' $\alpha$ of the cone defining the emission region of $P_R^{\Omega_\alpha}(\lambda)$ becomes smaller than $1/\gamma$. In this case, the spatial distribution and the shape of the spectra of the synchrotron emission measured by the camera remain practically unchanged, but the amplitude of the synchrotron spectra becomes even larger than in the case where $\alpha=1/\gamma$.
\subsection{Synchrotron emission of avalanching RE on the poloidal plane}
\label{avalanching_runaways_poloidal}
Now we consider a more realistic distribution function for runaway electrons that might occur during the early times of a runaway disruption in tokamak plasmas, that is, the avalanche distribution function \cite{Rosenbluth1997,Fulop2006,Stahl2013}. This distribution function describes the exponential growth in time of the runaway density during this early phase of the disruption and is given by:
\begin{equation}
f_{RE}(p,\eta) = \frac{\hat{E} p}{2\pi C_z \eta} \exp{\left( -\frac{p\eta}{C_z} - \frac{\hat{E}p}{2\eta}(1-\eta^2) \right)}\ ,
\label{avalanchePDF}
\end{equation}
\noindent
where $p = \gamma m_e v$ is the relativistic momentum of an electron, $\eta = \cos\theta$, $\hat{E} = (\bar{E}-1)/(1+Z_{eff})$, $Z_{eff}$ is the effective ion charge, $\bar{E} = E_\parallel/E_c$ is the parallel electric field $E_\parallel$ normalised to the critical electric field $E_c = m_ec/(e\tau_{coll})$, and $C_z=\sqrt{3(Z_{eff} + 5)/\pi} \log{\Lambda}$.
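For completeness, Eq.~(\ref{avalanchePDF}) can be implemented directly; in the sketch below $p$ is taken in normalised units and $\log\Lambda = 15$ is an assumed value for the Coulomb logarithm:

```python
import math

def f_avalanche(p, eta, E_bar, Z_eff, log_lambda=15.0):
    """Avalanche distribution f_RE(p, eta) of Eq. (avalanchePDF).
    p: relativistic momentum (normalised units); eta = cos(theta);
    E_bar = E_parallel / E_c; log_lambda is an assumed Coulomb logarithm."""
    E_hat = (E_bar - 1.0) / (1.0 + Z_eff)
    C_z = math.sqrt(3.0 * (Z_eff + 5.0) / math.pi) * log_lambda
    return (E_hat * p / (2.0 * math.pi * C_z * eta)
            * math.exp(-p * eta / C_z - E_hat * p * (1.0 - eta**2) / (2.0 * eta)))
```

With the parameters used later in this section, $\bar{E} = E_\parallel/E_c = 0.74/0.15 \approx 4.9$.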
\begin{figure}[ht!]
\begin{center}
\includegraphics[scale=0.54]{Fig7.pdf}
\end{center}
\caption{Filled contours of the analytical and simulated avalanche distribution function for runaway electrons with $Z_{eff}=1$. Panel a): filled contours of the analytical $f_{RE}(\mathcal{E},\theta)$. Panel b): simulated distribution function by the end of the simulation. We infer a linear relation between the energy of the bulk of the distribution and the pitch angle given by $\mathcal{E} \approx 10\times \theta$.}
\label{avalanche_PDF}
\end{figure}
\begin{figure}[ht!]
\begin{center}
\includegraphics[scale=0.525]{Fig8.pdf}
\end{center}
\caption{Expected value of the synchrotron radiation spectra of simulated avalanche distribution functions for runaway electrons (a)-(b), and synchrotron radiation spectra as measured by a camera placed at the outer midplane of the plasma (c)-(d). Panel a): synchrotron radiation spectra of Eq.~(\ref{<PR>}) (solid red line) for the avalanche distribution function with $Z_{eff}=1$. The dashed black line shows the approximate analytical spectra using directly Eq.~(\ref{avalanchePDF}) into Eq.~(\ref{<PR>}). Panel b): same as panel a) for $Z_{eff}=10$. Panel c): synchrotron radiation spectra as measured by the camera for the case $Z_{eff}=1$. Panel d): same as panel c) for the case with $Z_{eff}=10$.}
\label{spectra_avalanche}
\end{figure}
We use Eq.~(\ref{avalanchePDF}) as the initial condition of our simulations with $n_e=3.9\times10^{20}$ m$^{-3}$, which results in $\tau_{coll} \sim 10$ ms and $E_c=0.15$ V/m, and we consider $Z_{eff}=1$ and $Z_{eff}=10$ to simulate a hydrogenic plasma and a plasma with a high concentration of impurities, respectively.
We use $E_\parallel = 0.74$ V/m so that it is in agreement with typical values of the loop voltage measured in DIII-D plasmas during runaway disruptions~\cite{Stahl2013,Hollmann2013,Yu2013}. Larger (smaller) values of $E_\parallel$ result in narrower (wider) pitch-angle distributions and longer (shorter) tails of the energy distribution of avalanching runaways. Therefore, different values of $E_\parallel$ lead to different avalanche distributions and hence modify the corresponding synchrotron emission.
The major radius of the torus used for the spatial initial condition is $R=1.37$ m, and the radius of the RE beam is set to $r=0.2$ m. In Fig.~\ref{avalanche_PDF}(a) we show the filled contours of $f_{RE}(\mathcal{E},\theta)$ using Eq.~(\ref{avalanchePDF}) with $Z_{eff}=1$; using $Z_{eff}=10$ results in a wider distribution in pitch angle space at low energies $\mathcal{E}\sim 10$ MeV. Here, $\mathcal{E} = c\sqrt{p^2 + m_e^2 c^2}$ and $\theta=\arccos{\eta}$. We sample $f_{RE}(p,\eta)$ using the Metropolis-Hastings algorithm, and we observe only small fluctuations for the difference between the analytical distribution and the initial condition of our simulations, that is, $\sqrt{(f_{RE}-f_{sim})^2 }\sim 0.01$, where $f_{sim}$ is the sampled distribution function used as the initial condition of our simulations. By the end of the simulations $f_{RE}(\mathcal{E},\theta)$ has reached a steady state; in Fig.~\ref{avalanche_PDF}(b) we show the simulated distribution function, which shows departures from the initial condition, especially at large energies $\mathcal{E}\geq 20$ MeV. We infer a linear relation between the energy of the bulk of the distribution and the pitch angle given by $\mathcal{E} \approx 10\times \theta$.
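The Metropolis-Hastings sampling mentioned above can be sketched as follows; this is a generic random-walk implementation with step sizes of our own choosing and a toy target density in the usage example, not the actual KORC sampler:

```python
import math
import random

random.seed(1)

def metropolis_hastings(f, n_samples, x0, step=(1.0, 0.05)):
    """Random-walk Metropolis-Hastings sampling of an unnormalised
    2-D density f(p, eta) with p > 0 and eta in (0, 1)."""
    samples, x = [], x0
    fx = f(*x)
    for _ in range(n_samples):
        # Gaussian proposal; eta is clamped to its physical range.
        y = (x[0] + random.gauss(0.0, step[0]),
             min(max(x[1] + random.gauss(0.0, step[1]), 1e-6), 1.0 - 1e-6))
        if y[0] > 0.0:
            fy = f(*y)
            # Accept with probability min(1, f(y)/f(x)).
            if fy > 0.0 and random.random() < min(1.0, fy / fx):
                x, fx = y, fy
        samples.append(x)
    return samples

# Usage with a toy density (NOT the avalanche distribution):
samples = metropolis_hastings(lambda p, eta: p * math.exp(-p * eta / 10.0),
                              2000, (5.0, 0.9))
```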
As for the case of the mono-energy and mono-pitch angle distributions, we first calculate the spatial distribution of the total and the integrated synchrotron radiated power for the avalanche distribution function. This is shown in Fig.~\ref{PR_poloidal_plane_Zeff1} for the case with $Z_{eff}=1$; we obtain the same qualitative results for $Z_{eff}=10$. This time, the spatial distribution on the poloidal plane of $P_T$ and the integrated $P_R(\lambda)$ shows more structure, with a bright region of radiation with a crescent shape at the HFS. Notice that the bright regions of radiation do not necessarily correspond to the denser regions, see Fig.~\ref{PR_poloidal_plane_Zeff1}(c).
In Fig.~\ref{spectra_avalanche}(a)-(b) we show (red solid line) the spectra $\mathcal{P}_R(\lambda)$ in Eq.~(\ref{<PR>}) of the simulated avalanche distribution functions with $Z_{eff}=1$ and $Z_{eff}=10$, respectively. We also show for comparison the spectra computed directly using Eq.~(\ref{avalanchePDF}) (dashed black line). As can be seen, the spectra of the simulated avalanche distributions show the same trends as the mono-energy and mono-pitch angle distributions: a larger amplitude, and the shift of the maxima of $\mathcal{P}_R(\lambda)$ towards smaller wavelengths. However, as we increase $Z_{eff}$ the differences between the approximate analytical and the full orbit $\mathcal{P}_R(\lambda)$ become smaller.
\begin{figure*}[ht!]
\begin{center}
\includegraphics[scale=0.55]{Fig9.pdf}
\end{center}
\caption{Spatial distribution of the total and integrated synchrotron radiated power of the simulated avalanche distribution function for runaway electrons by the end of the simulation.
Panel a): spatial distribution of the total synchrotron radiated power $P_T$ of Eq.~(\ref{Ptot}). Panel b): spatial distribution of the radiated power $P_R(\lambda)$ of Eq.~(\ref{P_lambda}) integrated over $\lambda\in (100,10000)$ nm. Panel c): the spatial distribution of the simulated runaways by the end of the simulation. Notice that the bright regions of radiation do not necessarily correspond to the denser regions. For producing these figures we computed the histograms of each quantity using a grid of $75\times 75$ bins.}
\label{PR_poloidal_plane_Zeff1}
\end{figure*}
\subsection{Synchrotron emission of avalanching RE as measured by a camera}
\label{avalanching_runaways_camera}
Next, we compute the spatial distribution and the spectra of the synchrotron radiation as measured by a camera placed at the outer midplane of the plasma. For these calculations the parameters of the camera are the same as in Sec.~\ref{mono-section_camera} and in the appendix. In Fig.~\ref{camera_avalanche} we show the spatial distribution of the integrated synchrotron radiation calculated using the full angular distribution $P_R(\lambda,\psi,\chi)$. We have integrated the radiation over the range of wavelengths $\lambda \in(100,10000)$ nm. No significant difference is observed if a visible or infrared filter is used for the synchrotron radiation. Using the simplified angular distribution $P_R^{\Omega_\alpha}(\lambda)$ results in similar features of the spatial distribution of the radiation.
Consistent with the results of Sec.~\ref{mono-section_camera}, we observe the transition from a crescent to an ellipse shape for the spatial distribution of the radiation as we increase $Z_{eff}$, as we are effectively increasing the pitch angle of the bulk of the runaway distribution function.
The crescent shape of the spatial distribution of the synchrotron radiation observed in Fig.~\ref{camera_avalanche}(a) and \ref{camera_30MeV}(a) results from the contribution of runaway electrons with small pitch angle that follow the winding of the magnetic field lines.
In Fig.~\ref{camera_avalanche_sections} we show the contribution of different toroidal sectors of the runaway beam to Fig.~\ref{camera_avalanche}(a). As can be seen, the largest contribution to the crescent shape of the synchrotron radiation spatial distribution comes from the toroidal sector with $\varphi \in (40^\circ,70^\circ)$, where $\varphi$ is the toroidal angle as defined in Fig.~\ref{camera_setup}(c).
As the pitch angle of the runaways increases, their velocity vector no longer points along the magnetic field lines, resulting in shapes similar to Fig.~\ref{camera_avalanche}(b) and \ref{camera_30MeV}(c).
Finally, we calculate the spectra of the synchrotron radiation as measured by the camera; these are shown in Fig.~\ref{spectra_avalanche}(c)-(d). As for the simulations of Sec.~\ref{mono-section_camera}, we observe large differences between the spectra calculated using the two different angular distributions for the radiation: the magnitude of the spectra calculated using $P_R^{\Omega_\alpha}(\lambda)$ is approximately twenty times larger than when using $P_R(\lambda,\psi,\chi)$, and the maximum of the spectra is shifted towards larger wavelengths when $P_R^{\Omega_\alpha}(\lambda)$ is used. As discussed before, this may result in underestimating the density and pitch angles of the runaway electrons if $P_R^{\Omega_\alpha}(\lambda)$ is used to interpret the experimental measurements.
\begin{figure}[ht!]
\begin{center}
\includegraphics[scale=0.53]{Fig10.pdf}
\end{center}
\caption{Spatial distribution of the integrated synchrotron radiation of the full orbit avalanche distribution function for runaway electrons as measured by a camera. Panel a): spatial distribution of the integrated synchrotron radiation for the avalanche distribution with $Z_{eff}=1$. For this calculation we have used the full angular distribution $P_R(\lambda,\psi,\chi)$. Panel b): same as panel a) for $Z_{eff}=10$. The parameters of the camera are the same as in Fig.~\ref{camera_30MeV}. We observe a transition from a crescent shape to an ellipse shape for the bright regions of the radiation as we go from small to large values of $Z_{eff}$.}
\label{camera_avalanche}
\end{figure}
\section{Discussion and conclusions}
\label{conclusions}
In this paper we have addressed the long-standing question of the relationship between different runaway electron distribution functions and their corresponding synchrotron emission, including: full-orbit effects, information on the spectral and angular distribution of the synchrotron radiation of each electron, and the basic geometric optics of a camera.
We performed kinetic simulations of the full-orbit dynamics of different ensembles of runaway electrons in DIII-D-like magnetic fields to study in detail various aspects of their synchrotron emission.
In Sec.~\ref{mono-section_poloidal} and \ref{mono-section_camera}, we used mono-energetic and mono-pitch angle distribution functions as the initial conditions of the simulations. For these simulations we calculated the spatial distribution on the poloidal plane of the total and the integrated synchrotron radiated power, which show bright regions of radiation at the HFS and up-down symmetry. Then we compared the synchrotron spectra of the full orbit distributions of runaways with the so-called single-particle spectra, showing that full orbit effects and in particular collisionless pitch angle dispersion effects cause the former to depart from the single-particle spectra. These effects become more evident as the relative dispersion of the pitch angle $\sigma_\theta/\mu_\theta$ increases, see Fig.~\ref{spectra_poloidal_plane_30MeV} and \ref{pitch_stats}.
Then, we calculated the spatial distribution and spectra of the synchrotron radiation as measured by a camera placed at the outer midplane of the plasma. To the best of our knowledge this calculation is the first of its kind, including the exact full-orbit dynamics of runaway electrons in toroidal magnetic fields and the basic geometric optics of a camera.
We used two models for the angular distribution of the synchrotron radiation, namely, the full spectral and angular distribution of Eq.~(\ref{P_ang}), and a simplified model where the radiation is emitted isotropically within a circular cone with ``natural aperture'' $\alpha = 1/\gamma$.
Using either model for the angular distribution we observed a rich variety of non-symmetric shapes for the spatial distribution of the radiation that strongly depend on the pitch angle distribution of the runaways, and weakly depend on the runaways energy distribution, value of the $q$-profile at the plasma edge, and the chosen range of wavelengths.
We noticed a transition from a crescent shape to an ellipse shape as the mean pitch angle increases, see Fig.~\ref{camera_30MeV}. On the other hand, we found that the magnitude of the synchrotron spectra measured by the camera is overestimated by approximately a factor of 60 when the angular distribution is oversimplified, and that its shape is affected too, with the maximum moving to larger wavelengths when we use the simplified angular distribution $P_R^{\Omega_\alpha}(\lambda)$, see Fig.~\ref{spectra_camera_30MeV}. This may result in underestimating the density and pitch angles of the runaway electrons if $P_R^{\Omega_\alpha}(\lambda)$ is used to interpret the experimental measurements.
In Secs.~\ref{avalanching_runaways_poloidal} and \ref{avalanching_runaways_camera} we repeated the analysis of the previous sections for an avalanche RE distribution function. We studied the case of a hydrogenic plasma ($Z_{eff}=1$) and of a plasma with a high content of impurities ($Z_{eff}=10$).
We find that collisionless pitch-angle dispersion modifies the initial distribution function (cf. Fig.~\ref{avalanche_PDF}), so that there exists a deviation of the pitch angle of the bulk distribution as a function of the runaways' energy, that is, $\mathcal{E} \approx 10\times \theta$.
In this case we also observed a complex structure of the spatial distribution of the synchrotron radiation on the poloidal plane, with a non-trivial relation to the spatial density of runaway electrons, see Fig.~\ref{PR_poloidal_plane_Zeff1}. As in the simulations of Sec.~\ref{mono-section_poloidal}, the synchrotron spectra of the full-orbit avalanche distributions depart from the analytical approximation, with larger departures for the case of $Z_{eff}=1$, see Fig.~\ref{spectra_avalanche}(a)-(b).
On the other hand, the spatial distribution of the synchrotron emission measured by the camera in our simulations showed a transition from a crescent shape to an ellipse shape as we increased $Z_{eff}$, due to the effective increase of the pitch angle of the bulk distribution as $Z_{eff}$ becomes larger, cf. Fig.~\ref{camera_30MeV}.
We expect that on longer time scales, especially in plasmas containing high-Z impurities, the collisionless pitch-angle dispersion will be modified by collisions. This is a problem that we plan to address in a future publication.
Regarding the synchrotron spectra measured by the camera, {\bf similarly to the simulations of Sec.~\ref{mono-section_camera}, we found that their amplitude is overestimated by approximately a factor of 20 when $P_R^{\Omega_\alpha}(\lambda)$ is used}, and their maximum is shifted to larger wavelengths with respect to the spectra of Eq.~(\ref{P_ang}), see Fig.~\ref{spectra_avalanche}(c)-(d).
The results reported in this paper show a weak dependence on the value of the $q$-profile at the plasma edge, remaining qualitatively and quantitatively similar. A more detailed analysis of the dependence of the runaways' synchrotron emission on different shapes of the $q$-profile is beyond the scope of the present study.
These results shed light on the relationship between a given runaway distribution function and its corresponding synchrotron emission in magnetically confined plasmas. This might help to find better ways to interpret experimental measurements of synchrotron radiation, to obtain better estimates of the runaway electron parameters, and thus help both to formulate better theoretical descriptions of runaways in these plasmas and to improve the mechanisms for avoiding and/or mitigating runaway electrons.
\begin{figure*}[!ht]
\begin{center}
\includegraphics[scale=0.55]{Fig11.pdf}
\end{center}
\caption{Contribution of different toroidal sectors to the spatial distribution of the integrated synchrotron radiation of the full orbit runaways avalanche distribution function with $Z_{eff}=1$. Panel a): contribution of some toroidal sectors to the synchrotron spectra of Fig.~\ref{spectra_avalanche}(c). Panels b) to d): synchrotron radiation as measured by the camera of the toroidal sectors $\varphi\in(40^\circ,50^\circ)$, $\varphi\in(50^\circ,60^\circ)$, and $\varphi\in(60^\circ,70^\circ)$, respectively.
The crescent shape of the spatial distribution of the synchrotron radiation observed in Fig.~\ref{camera_avalanche}(a) results from the contribution of runaways with small pitch angle that follow the winding of the magnetic field lines.}
\label{camera_avalanche_sections}
\end{figure*}
\section{Acknowledgments}
This material is based upon work supported by the U.S. Department of Energy, Office of Science, Office of Fusion Energy Sciences under Contract No. DE-AC05-00OR22725.
Research sponsored by the Laboratory Directed Research and Development Program of Oak Ridge National Laboratory, managed by UT-Battelle, LLC, for the U. S. Department of Energy.
This research used resources of the National Energy Research Scientific Computing Center, a DOE Office of Science User Facility supported by the Office of Science of the U.S. Department of Energy under Contract No. DE-AC02-05CH11231.
\section{Appendix: Setup of the synthetic camera diagnostic}
\label{Apendix1}
The camera diagnostic in KORC consists of an array of pixels on a rectangular detector placed at $\bm{R}_{sc} = (R_{sc},Z_{sc})$, where in a cylindrical coordinate system with origin at the center of the tokamak, $R_{sc}$ is the cylindrical radial position of the camera, and $Z_{sc}$ the corresponding camera position along the $z$-axis.
For simplicity we assume that the camera is placed along the $x$-axis of a Cartesian coordinate system, and that the radial camera position $R_{sc}$ also defines the outer wall at the plasma midplane.
The horizontal and vertical size of the camera detector determine the optics of the camera, that is, the horizontal and vertical angles of view of the camera.
We assume that the camera has a single lens located at $\bm{R}_{sc}$, so that each pixel has a single line of sight that connects the center of the pixel to the center of the lens and then extends into the plasma.
In Fig.~\ref{camera_setup}(a) we show the setup of the synthetic camera placed at $R_{sc} = 2.4$ and $Z_{sc} =0$.
The size of the detector is $40\ \mbox{cm}\times 40\ \mbox{cm}$, and the pixel array is made of $75 \times 75$ pixels.
The blue lines show the horizontal angle of view, while the green line shows the main line of sight, that is, the line of sight joining the center of the detector and the lens. In Fig.~\ref{camera_setup}(c) we show the top view of the camera setup. In this figure the dotted lines show some lines of sight of different pixels of the camera.
Another parameter of the camera is its focal length $f$, which is the distance between the lens and the center of the camera detector and is chosen to be $f=50$ cm.
Finally, the incline of the camera $\vartheta$, which is the angle between the main line of sight (green line) and the solid horizontal red line in Fig.~\ref{camera_setup}(c), can be used to aim the camera. We choose $\vartheta = 55^\circ$ for all the simulations in this work.
In this way, the size of the detector, the focal length, and the incline of the camera determine the camera's field of view (see Fig.~\ref{camera_setup}(b)).
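As an illustration of this geometry, the full angle of view follows from the detector size $d$ and the focal length $f$ as $2\arctan(d/2f)$. A minimal sketch of ours (not part of KORC), evaluated for the values quoted above:

```python
import math

def angle_of_view(detector_size_cm, focal_length_cm):
    """Full angle of view (in degrees) of a single-lens camera:
    2 * arctan(d / 2f), with d the detector size and f the focal length."""
    return math.degrees(2.0 * math.atan(0.5 * detector_size_cm / focal_length_cm))

# Detector of 40 cm x 40 cm with focal length f = 50 cm (values used in this work):
print(round(angle_of_view(40.0, 50.0), 1))  # -> 43.6 degrees
```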
The frequency at which the camera can take snapshots is equal to or lower than the inverse of the time step used in a KORC simulation, with an exposure time that depends on how many snapshots are used to produce the final picture of the synchrotron radiation.
Each pixel of the camera measures the line integrated synchrotron emission over the whole exposure time.
In axisymmetric plasmas the electromagnetic fields and particles' variables are independent of the cylindrical azimuthal angle $\phi$ (or the toroidal angle $\zeta$ in toroidal coordinates). Thus, any rigid rotation of the electron's variables by an arbitrary angle in the azimuthal direction is a possible realization of an electron in the plasma. This implies that an electron can be detected by more than one pixel of the camera. In the camera setup of Fig.~\ref{camera_setup} the azimuthal angle $\phi$ is measured anticlockwise.
A potential complication is that for every snapshot taken by the camera the radiation spectra would have to be calculated for each electron of the simulation--in a typical KORC simulation we simultaneously follow hundreds of thousands ($\sim 10^5$) of runaway electrons. It can be seen that for an array of $100\times100$ pixels the number of computations involved is larger than $10^9$, increasing quadratically with the number of pixels of the camera detector.
This calculation can become costly if it is not done efficiently. In order to reduce the number of computations involved, we pre-select those runaways that are more likely to be seen by the camera.
The pre-selection of the electrons is done as follows: for each electron with velocity $\bm{v}_i$ and position $\bm{R}_i$, we extend $\bm{v}_i$ and calculate $\bm{R}_i^*=(R_{sc},Z_i^*)$, the point at which $\bm{v}_i$ intersects the outer wall. Here, the outer wall is modeled as an infinitely long cylindrical shell with inner radius $R_{sc}$. Note that $\bm{v}_i$ is a vector with origin at the electron position $\bm{R}_i$.
Then, we measure the angle between the electron's velocity and the vector $\bm{R}^*_i - \bm{R}_i$, which is given by $\cos{\varsigma_i} = \bm{v}_i \cdot (\bm{R}^*_i - \bm{R}_i)/|\bm{v}_i| |\bm{R}^*_i - \bm{R}_i|$.
In the simplest approximation for the angular distribution of the synchrotron radiation, the radiation is emitted within a circular cone with its axis along $\bm{v}_i$, and aperture $\alpha = 1/\gamma$, where $\gamma$ is the relativistic gamma factor of the particle. See Sec.~\ref{theory} for details. Only electrons with $\varsigma _i\leq \alpha$ are kept for the calculation of the camera snapshot.
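The aperture test above can be sketched as follows. This is an illustrative reimplementation of ours (function and variable names are not KORC's), assuming the wall intersection point $\bm{R}^*_i$ has already been computed:

```python
import math

def within_aperture(v, r_e, r_wall, gamma):
    """Keep an electron if the angle varsigma between its velocity v and the
    vector (R* - R), from its position r_e to the wall point r_wall, satisfies
    varsigma <= alpha = 1/gamma (the natural aperture of the emission cone)."""
    d = [rw - re for rw, re in zip(r_wall, r_e)]
    dot = sum(vi * di for vi, di in zip(v, d))
    norm_v = math.sqrt(sum(vi * vi for vi in v))
    norm_d = math.sqrt(sum(di * di for di in d))
    varsigma = math.acos(max(-1.0, min(1.0, dot / (norm_v * norm_d))))
    return varsigma <= 1.0 / gamma

# An electron with gamma = 59.7 at (1, 0, 0) moving along x, tested against
# a wall point nearly aligned with its direction of motion:
print(within_aperture([1.0, 0.0, 0.0], [1.0, 0.0, 0.0], [2.4, 0.01, 0.0], 59.7))  # -> True
```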
Next, we iterate over each pixel of the camera detector and calculate the contribution of each electron to the line integrated emission measured by that pixel.
The process for computing the radiation emitted by the $i$-th electron and measured by each pixel has two steps. The first step is to find the columns of pixels that detect the $i$-th electron. We note that the pixels in the same column of the camera detector share the same line of sight when the camera setup is seen from the top, see Fig.~\ref{camera_setup}(c).
We say that the $i$-th electron is detected by the $j$-th column of pixels when the circle with radius $R_i = \sqrt{x_i^2 + y_i^2}$ defined by the position of the electron intersects the line of sight of that column of pixels. Here $\bm{x}_i = (x_i,y_i,z_i)$ is the position of the $i$-th electron. For the $i$-th electron seen by the $j$-th column of pixels we calculate the angle $\varphi_{i,j}$, which is the angle between the camera position and the position at which the circle with radius $R_i$ intersects the $j$-th line of sight. This angle is measured anticlockwise from the solid red line of Fig.~\ref{camera_setup}(a).
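The column test reduces to elementary geometry: by axisymmetry the electron can sit anywhere on the circle of radius $R_i$ centered on the $z$-axis, so a pixel column sees it whenever the perpendicular distance from the $z$-axis to that column's line of sight (in the top view) does not exceed $R_i$. A hedged sketch of ours, not the KORC source:

```python
import math

def column_sees_electron(R_i, p0, direction):
    """True if the circle of radius R_i about the z-axis intersects the line
    of sight passing through p0 = (x0, y0) with direction (ux, uy), both given
    in the top view of the camera setup."""
    ux, uy = direction
    norm = math.hypot(ux, uy)
    # Perpendicular distance from the origin to the line (2D cross product).
    dist = abs(p0[0] * uy / norm - p0[1] * ux / norm)
    return dist <= R_i

# Camera at (2.4, 0) looking into the plasma along a slightly tilted sightline:
print(column_sees_electron(1.7, (2.4, 0.0), (-1.0, 0.2)))  # -> True
print(column_sees_electron(0.3, (2.4, 0.0), (-1.0, 0.2)))  # -> False
```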
In the second step we identify the row of pixels that detects the $i$-th electron. This is done by finding the row of pixels that the unit vector $\hat{\bm{n}}_i$, the direction of emission of the $i$-th electron, hits when it is extended from the electron's position to the plane of the camera detector. Here $\hat{\bm{n}}_i = \hat{T}_{-\varphi_{i,j}}\hat{T}_{\phi_0}\bm{x}_i - \hat{\bm{R}}_{sc}$, $\hat{T}_\varphi$ denotes a rigid rotation about the $z$-axis by an angle $\varphi$, and $\phi_0$ is the azimuthal angle defined by the position of the particle $\bm{x}_i$.
Once we have identified which pixels detect which electrons, we compute their contribution to the measured synchrotron emission using either model for the angular distribution of the synchrotron radiation of Sec.~\ref{theory}.
\begin{figure*}[ht!]
\begin{center}
\includegraphics[scale=0.575]{Fig12.pdf}
\end{center}
\caption{Camera setup in KORC simulations. Panel a): schematic representation of the camera setup showing the horizontal angle of view of the camera (blue lines), the main line of sight of the camera (green line), the position of the camera (black square), the synchrotron emission at the poloidal plane and at the detector plane (a.k.a. pixel plane), and an example of the initial spatial distribution of the simulated runaway electrons (black dots).
Panel b): zoom of the detector plane of the camera showing an example of the measured synchrotron emission in a KORC simulation.
Panel c): top view of the camera setup showing some examples of lines of sight of the camera in a KORC simulation. The toroidal sectors used in Fig.~\ref{camera_avalanche_sections} are highlighted in magenta.}
\label{camera_setup}
\end{figure*}
\section{Introduction}
Understanding the properties of the interstellar medium (ISM) of primeval galaxies is a fundamental challenge of physical cosmology.
The high sensitivity and spatial resolution of current observations have dramatically improved our understanding of the ISM of local and moderate-redshift ($z=2-3$) galaxies \citep{osterbrock:1989book,stasinska:2007,perezmontero:2017,stanway:2017}. We now have a clearer picture of the gas phases and thermodynamics \citep{daddi:2010apj,carilli:2013ara&a}, particularly concerning the molecular component, which represents the stellar birth environment \citep{klessen:2014review,krumholz:2015review}.
For galaxies located in the Epoch of Reionization (EoR, $5 \lsim z \lsim 15$), optical/near-infrared (IR) surveys have been very successful in their identification and characterization in terms of stellar mass and star formation rate \citep{Dunlop13,Madau14,Bouwens:2015}. However, only recently have we started to probe the internal structure of such objects. With the advent of the Atacama Large Millimeter/Submillimeter Array (ALMA) it is now possible to access the far-infrared (FIR) band at $\mbox{high-}z$~ with unprecedented resolution and sensitivity. Excitingly, this enables for the first time studies of the ISM energetics, structure and composition of such pristine objects.
Since \hbox{C~$\scriptstyle\rm II $}~is one of the major coolants of the ISM, \hbox{[C~$\scriptstyle\rm II $]}~ALMA detections (and upper limits) have so far mostly used this line for the above purposes \citep{maiolino:2015arxiv,willott:2015arxiv15,capak:2015arxiv} and to determine the sizes of early galaxies \citep{fujimoto:2017}. Line emission from different species (e.g. \OIII) has been used to derive the interstellar radiation field (ISRF) intensity \citep[][]{inoue:2016sci,carniani:2017bdf3299}, while continuum detections give us a measure of the dust content and properties \citep[][]{watson:2015nature,laporte:2017apj}. Finally, some observations are beginning to resolve different ISM components and their dynamics by detecting spatial offsets and kinematic shifts between different emission lines, e.g. \hbox{[C~$\scriptstyle\rm II $]}~and optical-ultraviolet (UV) emission \citep{maiolino:2015arxiv,capak:2015arxiv}, \hbox{[C~$\scriptstyle\rm II $]}~and Ly$\alpha$ \citep{pentericci:2016apj,bradac:2017}, and \hbox{[C~$\scriptstyle\rm II $]}~and \OIII~\citep{carniani:2017bdf3299}.
In spite of this progress, several pressing questions remain unanswered. A partial list includes: {(a)} What is the chemical composition and thermodynamic state of the ISM in $\mbox{high-}z$~ galaxies? {(b)} How does the molecular gas turn into stars and regulate the evolution of these systems? {(c)} What are the optimal observational strategies to better constrain the properties of these primeval objects?
Theoretically, cosmological numerical simulations have been used to attack some of these problems. The key idea is to produce a coherent physical framework within which the observed properties can be understood. Such a learning strategy is also of fundamental importance to devise efficient observations with current (e.g. HST/ALMA), planned (JWST) and proposed (SPICA) instruments.
Before this strategy can be implemented, though, it is necessary to develop reliable numerical schemes capturing all the relevant physical processes. While the overall performance of the most widely used schemes has been extensively benchmarked \citep{aquila:2012MNRAS,agora:2013arxiv,kim:2016apj}, high-resolution simulations of galaxy formation introduce a new challenge: they are very sensitive to the implemented physical models, particularly those acting on small scales.
Among these, the role of feedback, i.e. how stars affect their own formation history via energy injection in the surrounding gas by supernova (SN) explosions, stellar winds and radiation, is far from being completely understood, despite considerable efforts to improve its modeling \citep{agertz:2015apj,martizzi:2015mnras} and understand its consequences on $\mbox{high-}z$~ galaxy evolution \citep{ceverino:2014,oshea:2015apj,barai:2015mnras,pallottini:2017dahlia,fiacconi:2017mnras,hopkins:2017}.
Additionally, we are still lacking a completely self-consistent treatment of radiation transfer. This is an area in which intensive work is ongoing in terms of faster numerical schemes \citep{wise:2012radpres,rosdahl:2015mnras,katz:2016arxiv}, or improved physical modelling \citep{petkova:2012mnras,roskar:2014,maio:2016mnras}.
A third aspect has so far received comparatively less attention in $\mbox{high-}z$~ galaxy formation studies: the implementation of adequate chemical networks. While various models have been proposed and tested \citep{krumholz:2009apj,bovino:2016aa,grassi:2017dust}, the galaxy-scale consequences of the different prescriptions are still largely unexplored \citep{tomassetti:2015MNRAS,maio:2015,smith:2017mnras}. Moreover, there is no clear consensus on the minimal set of physical ingredients required to produce reliable simulations.
The purpose of this paper is to analyze the impact of ${\rm {H_2}}$~chemistry on the internal structure of $\mbox{high-}z$~ galaxies. To this aim, we simulate two prototypical $M_{\star}\simeq 10^{10}{\rm M}_{\odot}$ Lyman Break Galaxies (LBG) at $z=6$, named \quotes{Dahlia} and \quotes{Alth{\ae}a}. The two simulations differ in the implementation of ${\rm {H_2}}$~formation: equilibrium vs. non-equilibrium. We show that chemistry has a strong impact on the observed properties of early galaxies.
The paper is organized as follows. In Sec.~\ref{sec_numerical} we describe the two simulations highlighting common features (Sec.~\ref{sec_common_pre}), separately discussing the different chemical models used for Dahlia (Sec.~\ref{sec_chem_eq}) and Alth{\ae}a (Sec.~\ref{sec_chem_noneq}).
Results are presented as follows. First we benchmark the chemical models (Sec.~\ref{sec_chem_test}), and compare the star formation and feedback histories of the two galaxies (Sec.~\ref{sec_sfr_feed}). Next, we characterize their differences in terms of morphology (Sec.~\ref{sec_morphology}), thermodynamical state of the ISM (Sec.~\ref{sec_thermo_state}), and predicted \hbox{[C~$\scriptstyle\rm II $]}~and ${\rm {H_2}}$~(Sec.~\ref{sec_obs_prop}) emission line properties. Our conclusions are summarized in Sec.~\ref{sec_conclusione}.
\section{Numerical simulations}\label{sec_numerical}
To assess the impact of ${\rm {H_2}}$~chemistry on the internal structure of $\mbox{high-}z$~ galaxies, we compare two zoom-in simulations adopting different chemical models. Both simulations follow the evolution of a prototypical $z=6$ LBG galaxy hosted by a $M_{\rm h}\simeq 10^{11} {\rm M}_{\odot}$ dark matter (DM) halo (virial radius $r_{\rm vir}\simeq 15 \,{\rm kpc}$).
The first simulation has been presented in \citet[][hereafter \citetalias{pallottini:2017dahlia}]{pallottini:2017dahlia}. The targeted galaxy (which also includes about 10 satellites) is called \quotes{Dahlia} \citep[see also][for an analysis of its infall/outflow structure]{gallerani:2016outflow}. In that work we showed that Dahlia's specific SFR (sSFR) is in agreement both with analytical calculations \citep{behroozi:2013apj} and with $z=7$ observations \citep[][see also Sec.~\ref{sec_sfr_feed}]{gonzalez:2010}.
In the second, new simulation we follow the evolution of \quotes{Alth{\ae}a}, by using improved thermo-chemistry, but keeping everything else (initial conditions, resolution, star formation and feedback prescriptions) unchanged with respect to the Dahlia simulation. We describe the implementation of these common processes in the following Section. Next we describe separately the chemical model used for Dahlia (Sec.~\ref{sec_chem_eq}) and Alth{\ae}a (Sec.~\ref{sec_chem_noneq}).
\subsection{Common physical models}\label{sec_common_pre}
Both simulations are performed with a customized version of the Adaptive Mesh Refinement (AMR) code \textlcsc{ramses} \citep{teyssier:2002}. Starting from cosmological IC\footnote{We assume cosmological parameters compatible with \emph{Planck} results: $\Lambda$CDM model with total matter, vacuum and baryonic densities in units of the critical density $\Omega_{\Lambda}= 0.692$, $\Omega_{m}= 0.308$, $\Omega_{b}= 0.0481$, Hubble constant $\rm H_0=100\,{\rm h}\,{\rm km}\,{\rm s}^{-1}\,{\rm Mpc}^{-1}$ with ${\rm h}=0.678$, spectral index $n=0.967$, $\sigma_{8}=0.826$ \citep[][]{planck:2013_xvi_parameters}.} generated with \textlcsc{music} \citep{hahn:2011mnras}, we zoom-in the $z\simeq 6$ DM halo hosting the targeted galaxy.
The total simulation volume is $(20\,{\rm Mpc}/{\rm h})^{3}$ and is evolved with a base grid of 8 levels (gas mass $6\times 10^6{\rm M}_{\odot}$); the zoom-in region has a volume of $(2.1\,{\rm Mpc}/{\rm h})^{3}$ and is resolved with 3 additional levels of refinement, thus yielding a gas mass resolution of $m_b = 1.2\times 10^4 {\rm M}_{\odot}$. In such region, we allow for 6 additional levels of refinement, which allow us to follow the evolution of the gas down to scales of $l_{\rm cell}\simeq 30\,{\rm pc}$ at $z=6$, i.e. the refined cells have the mass and size typical of Galactic molecular clouds \citep[MC, e.g.][]{federrath:2013}. The refinement is performed with a Lagrangian mass threshold-based criterion, i.e. a cell is refined if its total (DM+baryonic) mass exceeds the mass resolution by a factor of 8.
Metallicity ($Z$) is followed as the sum of heavy elements, assumed to have solar abundance ratios \citep{asplund:2009ara&a}. We impose an initial metallicity floor $Z_{\rm floor}=10^{-3}{\rm Z}_{\odot}$, since at $z \gsim 40$ our resolution is still insufficient to catch the metal enrichment by the first stars \citep[e.g.][]{oshea:2015apj}. Such a floor is compatible with the metallicity found at $\mbox{high-}z$~ in cosmological simulations for diffuse enriched gas \citep{dave:2011mnras,pallottini:2014sim,maio:2015}; it only marginally affects the gas cooling time.
Dust evolution is not explicitly tracked during the simulations. However, we make the simple assumption that the dust-to-gas mass ratio scales with metallicity, i.e. $\mathcal{D} = \dust_{\odot} (Z/{\rm Z}_{\odot})$, where $\dust_{\odot}/{\rm Z}_{\odot} = 0.3$ for the Milky Way (MW, e.g. \citealt{hirashita:2002mnras,asano:2013}).
\subsubsection{Star formation}
Stars form according to a linearly ${\rm {H_2}}$-dependent Schmidt-Kennicutt relation \citep[][]{schmidt:1959apj,kennicutt:1998apj} i.e.
\be\label{eq_sk_relation}
\dot{\rho}_{\star}= \zeta_{\rm sf} f_{\rm H2} {\rho \over t_{\rm ff}},
\ee
where $\dot{\rho}_{\star}$ is the local SF rate density, $\zeta_{\rm sf}$ the SF efficiency, $f_{\rm H2}$ the ${\rm {H_2}}$~mass fraction, $t_{\rm ff}$ the local free-fall time, and $\rho =\mu m_p n$ the gas density, with $\mu$ the mean molecular weight, $m_p$ the proton mass, and $n$ the gas number density.
Eq. \ref{eq_sk_relation} is solved stochastically, by drawing the mass of the new star particles from a Poisson distribution \citep{rasera:2006,dubois:2008,pallottini:2014sim}.
In detail, in a star formation event we create a star particle with mass $Nm_b$, with $N$ an integer drawn from
\be
P(N) = {\langle N\rangle^{N} \over N!} \exp\left(-\langle N\rangle\right)\,,
\ee
where the mean of the Poisson distribution is
\be
\langle N\rangle = {f_{\rm H2} \rho l_{\rm cell}^{3}\over m_b} { \zeta_{\rm sf} \delta t\over t_{\rm ff}}\,,
\ee
with $\delta t$ the simulation time step. For numerical stability, no more than half of the cell mass is allowed to turn into stars. Since we prevent the formation of star particles with mass less than $m_b$, cells with density lower than $\sim 15\,{\rm cm}^{-3}$ (for $l_{\rm cell}\simeq 30\,\rm pc$) are not allowed to form stars.
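The stochastic recipe above can be sketched as follows; this is a simplified stand-alone version of ours (names are our own choice, not the actual \textlcsc{ramses} implementation):

```python
import math
import random

def sample_poisson(lam, rng):
    """Draw an integer from a Poisson distribution (Knuth's method;
    adequate for the small means relevant here)."""
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

def new_star_mass(f_h2, cell_gas_mass, m_b, zeta_sf, dt, t_ff, rng):
    """Mass of the star particle formed in one time step: N*m_b with
    N ~ Poisson(<N>), <N> = (f_H2 * M_cell/m_b) * (zeta_sf * dt/t_ff),
    capped at half the cell gas mass for numerical stability."""
    mean_N = (f_h2 * cell_gas_mass / m_b) * (zeta_sf * dt / t_ff)
    N = sample_poisson(mean_N, rng)
    return min(N * m_b, 0.5 * cell_gas_mass)
```

Note that cells where $\langle N\rangle$ is very small almost always draw $N=0$ and thus form no stars, which is how the effective density threshold quoted above emerges.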
We set $\zeta_{\rm sf}=0.1$, in accordance with the average values inferred from MC observations \citep[][see also \citealt{agertz:2012arxiv}]{murray:2011apj}; $f_{\rm H2}$ depends on the adopted thermo-chemical model, as described later in Sec.~\ref{sec_chem_eq} and Sec.~\ref{sec_chem_noneq}.
\subsubsection{Feedback}
Similarly to \citet[][]{agora:2013arxiv}, we account for stellar energy inputs and chemical yields that depend both on time and stellar populations by using \textlcsc{starburst99} \citep{starburst99:1999}. Stellar tracks are taken from the {\tt padova} \citep{padova:1994} library with stellar metallicities in the range $0.02 \leq Z_{\star}/{\rm Z}_{\odot} \leq 1$, and we assume a \citet{kroupa:2001} initial mass function.
Stellar feedback includes SNs, winds from massive stars and radiation pressure \citep[][]{agertz:2012arxiv}. We model the thermal and turbulent energy content of the gas according to the prescriptions by \citet{agertz:2015apj}.
The turbulent (or non-thermal) energy is dissipated as $\dot{e}_{\rm nth} = -e_{\rm nth}/t_{\rm diss}$ \citep[][see eq. 2]{teyssier:2013mnras}, where, following \citet{maclow1999turb}, the dissipation time scale can be written as
\be
t_{\rm diss} = 9.785 \left(l_{\rm cell}\over 100\,{\rm pc}\right)\left(\sigma_{\rm turb}\over 10\,{\rm km}\,{\rm s}^{-1}\right) ^{-1}\,\rm Myr\,,
\ee
where $\sigma_{\rm turb}$ is the turbulent velocity dispersion. Adopting the SN blastwave models and OB/AGB stellar winds from \citet{ostriker:1988rvmp} and \citet{weaver:1977apj}, respectively, we account for the dissipation of energy in MCs as detailed in Sec.~2.4 and App. A of \citetalias{pallottini:2017dahlia}.
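The numerical prefactor in the expression for $t_{\rm diss}$ is simply the turbulent crossing time $l_{\rm cell}/\sigma_{\rm turb}$ for the reference values, expressed in Myr. A short sanity check of ours, assuming standard unit conversions:

```python
PC_KM = 3.0857e13   # 1 pc in km
MYR_S = 3.1557e13   # 1 Myr in s

def t_diss_myr(l_cell_pc, sigma_turb_kms):
    """Turbulent dissipation time of the fitting formula above, in Myr."""
    return 9.785 * (l_cell_pc / 100.0) * (sigma_turb_kms / 10.0) ** -1

# The prefactor equals the crossing time of 100 pc at 10 km/s:
crossing_myr = (100.0 * PC_KM / 10.0) / MYR_S
print(round(crossing_myr, 3))  # -> 9.778
```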
\subsection{Dahlia: equilibrium thermo-chemistry}\label{sec_chem_eq}
In the Dahlia simulation we compute $f_{\rm H2}$ by adopting the \citetalias{krumholz:2009apj}~analytical prescription \citep{krumholz:2008apj,krumholz:2009apj,mckee:2010apj}. In \citetalias{krumholz:2009apj}, the ${\rm {H_2}}$~abundance is derived by modelling the radiative transfer on an idealized MC and by assuming equilibrium between the formation of ${\rm {H_2}}$~on dust grains and its dissociation. For each gas cell, $f_{\rm H2}$ can then be written as a function of $n$, $Z$ and the hydrogen column density ($N_{\rm H}$). By further assuming pressure equilibrium between CNM and WNM \citep{krumholz:2009apj}, $f_{\rm H2}$ turns out to be independent of the intensity of the ISRF, and can be written as
\begin{subequations}\label{eq_fh2_anal}
\begin{align}
f_{\rm H2} &= \left[1 -0.75\,s/(1+0.25\,s) \right]\Theta(2-s)\,,\\ \intertext{with}
s &= \ln\left(1+0.6\,\chi +0.01\chi^{2}\right) /(0.6\,\tau_{\rm UV})\\
\chi &= 0.75\,\left[1+3.1\,(Z/{\rm Z}_{\odot})^{0.365}\right]\,,
\end{align}
\end{subequations}
and where $\Theta$ is the Heaviside function; $\tau_{\rm UV}$ is the dust UV optical depth and it can be calculated by linearly rescaling the MW value,
\be
\tau_{\rm UV} = \left({N_{\rm H}\over 1.6 \times 10^{21}{\rm cm}^{-2}}\right)\left({\mathcal{D}\over\dust_{\odot}}\right).
\ee
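For reference, Eqs.~(\ref{eq_fh2_anal}) can be evaluated with a few lines of code. The following is a sketch of ours (not the actual simulation patch), using the linear dust-to-metallicity scaling $\mathcal{D}/\dust_{\odot} = Z/{\rm Z}_{\odot}$ adopted in this work:

```python
import math

def f_h2_equilibrium(N_H_cm2, Z_rel):
    """Equilibrium H2 mass fraction from the equations above, with
    Z_rel = Z/Z_sun and N_H_cm2 the hydrogen column density in cm^-2."""
    tau_uv = (N_H_cm2 / 1.6e21) * Z_rel          # dust UV optical depth
    chi = 0.75 * (1.0 + 3.1 * Z_rel ** 0.365)
    s = math.log(1.0 + 0.6 * chi + 0.01 * chi ** 2) / (0.6 * tau_uv)
    return 1.0 - 0.75 * s / (1.0 + 0.25 * s) if s < 2.0 else 0.0

# A high column density, solar-metallicity cell is mostly molecular...
print(round(f_h2_equilibrium(1e22, 1.0), 2))  # -> 0.8
# ...while a low column density cell has no H2 (s >= 2):
print(f_h2_equilibrium(1e20, 1.0))            # -> 0.0
```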
In Dahlia cooling/heating rates are computed using \textlcsc{grackle} 2.1\footnote{\url{https://grackle.readthedocs.org/}} \citep{bryan:2014apjs}. We use a \hbox{H}~and \hbox{He}~primordial network, and tabulated metal cooling/photo-heating rates from \textlcsc{cloudy} \citep{cloudy:2013}. Inverse Compton cooling is also present, and we consider heating from a redshift-dependent ionizing UV background \citep[UVB, ][]{Haardt:2012}. Since ${\rm {H_2}}$~is not explicitly included in the network, we do not include the corresponding cooling contribution.
\subsection{Alth{\ae}a: non-equilibrium thermo-chemistry}\label{sec_chem_noneq}
In Alth{\ae}a we implement a non-equilibrium chemical network by using \textlcsc{krome}\footnote{\url{https://bitbucket.org/tgrassi/krome}} \citep{grassi:2014mnras}. Given a set of species and their reactions, \textlcsc{krome} can generate the code needed to solve the system of coupled ordinary differential equations that describe the gas thermo-chemical evolution.
\subsubsection{Chemical network}
Similarly to \citet[][hereafter \citetalias{bovino:2016aa}]{bovino:2016aa}, our network includes H, H$^{+}$, H$^{-}$, He, He$^{+}$, He$^{++}$, H$_{2}$, H$_{2}^{+}$ and electrons. Metal species are not followed individually in the network, as for instance done in Model IV from \citetalias{bovino:2016aa}; therefore, we use an equilibrium metal line cooling calculated via \textlcsc{cloudy} tables\footnote{As a caveat, we point out that there is a formal inconsistency in the modelling. Metal line cooling tables are usually calculated with \textlcsc{cloudy} by assuming a \citet{Haardt:2012} UV background, while the ISRF spectral energy density we adopt is MW-like. To remove such inconsistency one should explicitly track metal species, adopt a non-equilibrium metal line cooling and include radiative transfer. As noted in \citetalias{bovino:2016aa} (see their Fig.~16), using non-equilibrium metal line cooling can typically change the cooling function by a factor $\lsim 2$. This will be addressed in future work.}.
The adopted network contains a total of 37 reactions, including photo-chemistry (Sec.~\ref{sec_sub_photo}), dust processes (Sec.~\ref{sec_sub_dust}) and cosmic rays (CR, Sec.~\ref{sec_sub_cr}). The reactions, their rates, and corresponding references are listed in App. B of \citetalias{bovino:2016aa}: specifically we use reactions from 1 to 31 (Tab. B.1 in \citetalias{bovino:2016aa}), 53, 54, and from 58 to 61 (Tab. B.2 in \citetalias{bovino:2016aa}).
\subsubsection{Photo-chemistry}\label{sec_sub_photo}
Photo-chemistry cross sections are taken from \citet{verner:1996apjs} and from the SWRI\footnote{\url{http://phidrates.space.swri.edu}.} and Leiden\footnote{\url{http://home.strw.leidenuniv.nl/~ewine/photo/}.} databases.
In the present simulation, the ISRF is not evolved self-consistently and it is approximated as follows.
For the spectral energy density (SED), we assume a MW-like spectrum \citep{black:1987,draine:1978apjs}, and we specify the SED using 10 energy bins from $0.75$ eV to $14.16$ eV. Beyond 13.6 eV the flux drops to zero, i.e. we do not include ionizing radiation.
We consider a spatially uniform ISRF whose intensity is rescaled with the SFR such that $G = G_{0} ({\rm SFR}/\msun\,{\rm yr}^{-1})$, where $G_{0}=1.6\times 10^{-3} {\rm erg}\,{\rm cm}^{-2}\,{\rm s}^{-1}$ is the far-UV (FUV) flux in the Habing band ($6-13.6\,{\rm eV}$) normalized to the average MW value \citep{habing:1968}. Because of their sub-kpc sizes \citep{shibuya:2015apjs,fujimoto:2017}, high $G_0$ values are expected in typical LBGs at $z\simeq 6$, as also inferred by \citet{carniani:2017bdf3299}. A similar situation is seen in some local dwarf galaxies \citep{cormier:2015} that are generally considered as local counterparts of $\mbox{high-}z$~ galaxies.
It is worth noting that the spatial variation of $G$ is very small in the MW, with an r.m.s. value $\simeq 3\,G_0$ \citep{habing:1968,wolfire:2003apj}. Nonetheless, spatial fluctuations of the ISRF, if present, might play some role in
the evolution of $\mbox{high-}z$~ galaxies \citep[e.g.][]{katz:2016arxiv}. We will analyze this effect in future work.
On top of the ISRF, we consider the cosmic microwave background (CMB), that effectively sets a temperature floor for the gas. Additionally, we neglect the cosmic UVB, since the typical ISM densities are sufficiently large to ensure an efficient self-shielding \citep[e.g.][]{gnedin:2010}. For example, \citet{rahmati:2013mnras} have shown that at $z\simeq 5$ the hydrogen ionization due to the UVB is negligible for $n \gsim 10^{-2}{\rm cm}^{-3}$, the typical density of diffuse ISM.
The self-shielding of ${\rm {H_2}}$~against photo-dissociation is accounted for by using the \citet{richings:2014} prescription\footnote{The self-shielding formulation by \citet{richings:2014} does not account for a directional dependence as done in more computationally costly models \citep{hartwig:2015apj}.}; thus in each gas cell the shielding can be expressed as an analytical function of its ${\rm {H_2}}$~column density, temperature and turbulence \citep[cf.][]{wolcottgreen:2011}.
\subsubsection{Dust processes}\label{sec_sub_dust}
As for Dahlia, the dust mass is proportional to the metal mass. Here we also specify the dust size distribution to be the one appropriate for $\mbox{high-}z$~ galaxies, i.e. that of the Small Magellanic Cloud, following \citet{weingartner:2001apj}.
Dust grains can affect the chemistry through cooling\footnote{Dust cooling is not included in the current model, as it gives only a minor contribution for $n<10^4{\rm cm}^{-3}$, i.e. see Fig.~3 in \citetalias{bovino:2016aa}.} \citep{hollenbach1979apjs}, photoelectric-heating \citep{bakes:1994apj}, and by mediating the formation of molecules \citep{cazaux2009aa}.
In particular, the formation rate of ${\rm {H_2}}$~on dust grains is approximated following \citet{Jura:1975apj}
\be\label{eq_jura}
R_{\rm H2-dust} = 3\times 10^{-17}n\,n_{\rm H} (\mathcal{D}/\dust_{\odot})\,{\rm cm}^{-3}\,{\rm s}^{-1}\,,
\ee
where $n_{\rm H}$ is the hydrogen density. Note that for $\mathcal{D}\gsim 10^{-2}\dust_{\odot}$ this dust channel dominates over gas-phase formation (e.g. reactions 6--7 and 9--10 in B.1 of \citetalias{bovino:2016aa}).
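As a numerical illustration of eq.~\ref{eq_jura}, the sketch below (in Python; the function name \texttt{h2\_dust\_rate} is ours, not part of the simulation code) evaluates the dust-channel rate and its linear scaling with the dust-to-gas ratio:

```python
def h2_dust_rate(n, n_h, d_ratio):
    """H2 formation rate on dust grains, R = 3e-17 n n_H (D/D_sun),
    in cm^-3 s^-1 (Jura-like rate of the equation above).

    n       : total gas density [cm^-3]
    n_h     : hydrogen density [cm^-3]
    d_ratio : dust-to-gas ratio in units of the MW value, D/D_sun
    """
    return 3e-17 * n * n_h * d_ratio

# At MW-like dust content the rate is ~3e-13 cm^-3 s^-1 for n = n_H = 100 cm^-3;
# at a dust content of 1e-3 D_sun it drops by three orders of magnitude.
rate_mw = h2_dust_rate(100.0, 100.0, 1.0)
rate_floor = h2_dust_rate(100.0, 100.0, 1e-3)
```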
\subsubsection{Cosmic rays}\label{sec_sub_cr}
CR ionization can become important in regions shielded from radiation, like MC interiors. We assume a CR hydrogen ionization rate $\propto$ SFR \citep{valle:2002apj} and normalized to the MW value \citep{webber:1998apj}:
\be
\zeta_{\rm cr} = 3\times 10^{-17} ({\rm SFR}/\msun\,{\rm yr}^{-1})\, {\rm s}^{-1}.
\ee
The rate $\zeta_{\rm cr}$ includes the flux of CR and secondary electrons \citep[][]{richings:2014}. In the network, CR ionizations are proportional to $\zeta_{\rm cr}$ and to coupling constants that depend on the specific ions; such couplings are taken from the \textlcsc{kida} database \citep{kida:2012apjs}.
Additionally, we account for Coulomb heating by assuming that every CR ionization releases an energy\footnote{For a more accurate treatment of Coulomb heating refer to \citet[][]{glassgold:2012apj}.} of $20$ eV.
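A minimal sketch of the adopted CR scalings (function names are ours; the ion-specific coupling constants from the \textlcsc{kida} database are omitted, and the only assumption beyond the text is that the heating is proportional to the hydrogen density):

```python
EV_IN_ERG = 1.602e-12  # 1 eV in erg

def zeta_cr(sfr):
    """CR hydrogen ionization rate [s^-1], scaled to the MW value,
    for SFR given in Msun/yr."""
    return 3e-17 * sfr

def coulomb_heating_rate(sfr, n_h):
    """Volumetric Coulomb heating [erg cm^-3 s^-1], assuming each
    ionization of the n_h [cm^-3] hydrogen deposits 20 eV."""
    return 20.0 * EV_IN_ERG * zeta_cr(sfr) * n_h
```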
\subsubsection{Initial abundances of the species}
Finally, following \citet{galli:1998AA}, we calculate the initial conditions (IC) for the various species by accounting for the primordial chemistry\footnote{For a possible implementation of the \citet{galli:1998AA} chemical network see the \quotes{earlyUniverse} test contained in \textlcsc{krome}.} at $z\gsim 100$, for a density and temperature evolution corresponding to gas at the mean cosmic density.
\subsection{Benchmark of ${\rm {H_2}}$~formation models}\label{sec_chem_test}
As a benchmark for our simulations, we compare the formation of ${\rm {H_2}}$~in different physical environments. For the Dahlia \citetalias{krumholz:2009apj}~model we compute $f_{\rm H2}$ from eq. \ref{eq_fh2_anal} as a function of $n$ and $Z$. We choose an expression for $N_{\rm H} = n\,l_{\rm cell}\mu\propto n^{2/3}$ resulting from the mass threshold-based AMR refinement criterion, for which $l_{\rm cell} \propto n^{-1/3}$. We restate that the equilibrium \citetalias{krumholz:2009apj}~model is independent of $G$ and the gas temperature $T$.
For the Alth{\ae}a \citetalias{bovino:2016aa}~model we use \textlcsc{krome} to perform single-zone tests varying $n, Z$ and $G$. In this case we assume an initial temperature\footnote{The initial temperature corresponds to the virial temperature of the first star-forming halos present in the zoomed region. The results depend very weakly on this assumption.} $T= 5\times 10^3 {\rm K}$, and we let the gas patch evolve at constant density until thermo-chemical equilibrium is reached. This typically takes 100 Myr.
The comparison between the two models is shown in Fig.~\ref{fig_chimica_krome} as a function of $n$ for different metallicities.
For $G>0$ and $Z<{\rm Z}_{\odot}$, ${\rm {H_2}}$~formation is hindered in \citetalias{bovino:2016aa}~with respect to \citetalias{krumholz:2009apj}, i.e. higher $n$ are needed to reach similar $f_{\rm H2}$ fractions. For $G = G_{0}$ and $Z={\rm Z}_{\odot}$ the two models are roughly in agreement: this is expected since \citetalias{krumholz:2009apj}~is calibrated on the MW environment. Finally, for $G>0$ and at $Z=10^{-3}{\rm Z}_{\odot}$ (the metallicity floor in our simulation set) the ${\rm {H_2}}$~formation in the \citetalias{bovino:2016aa}~model is strongly suppressed, e.g. $f_{\rm H2}\simeq 10^{-3}$ for $n\simeq 10^3{\rm cm}^{-3}$. Note that these fractions are comparable to the ones expected for ${\rm {H_2}}$~formation in a pristine environment where ${\rm {H_2}}$~formation proceeds via gas-phase reactions.
As noted in \citetalias{pallottini:2017dahlia}, Dahlia's star formation (SF) model (eqs \ref{eq_sk_relation} and \ref{eq_fh2_anal}) is roughly equivalent to a density threshold criterion with metallicity-dependent critical density $n_{c} \simeq 26.45 \, (Z/Z_\odot)^{-0.87} {\rm cm}^{-3}$. Physically this corresponds to the density at which $f_{\rm H2} \geq 0.5$ (see also \citealt{agertz:2012arxiv}). Thus, Fig.~\ref{fig_chimica_krome} quantifies the density threshold required to spawn stars in the simulation.
Dahlia forms stars in gas with $n\simeq 30\,{\rm cm}^{-3}$ and $Z\simeq 0.5\,{\rm Z}_{\odot}$ at a rate of about $10^2\msun\,{\rm yr}^{-1}$ at $z=6$. If Alth{\ae}a has a similar SFR history (this is checked a posteriori, see Fig.~\ref{fig_sfr_comparison}), the resulting metallicity and ISRF intensity ($G\simeq 10^2 G_{0}$) should also be similar. Then, by inspecting Fig.~\ref{fig_chimica_krome} (middle-left panel) one can conclude that Alth{\ae}a forms stars in much denser environments where $n > n_{c} \simeq 263 \, (Z/Z_\odot)^{-1.19} {\rm cm}^{-3}$ for $G= 10^2 G_{0}$.
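The two effective thresholds quoted above can be compared directly; a sketch (the helper names are ours), evaluated at the typical metallicity $Z\simeq 0.5\,{\rm Z}_{\odot}$ of the star-forming gas:

```python
def n_crit_dahlia(Z):
    """Effective SF density threshold [cm^-3] in Dahlia (K09-based);
    Z in solar units. Corresponds to the density where f_H2 >= 0.5."""
    return 26.45 * Z**-0.87

def n_crit_althaea(Z):
    """Effective SF density threshold [cm^-3] in Althaea (B16-based),
    valid for an ISRF intensity G = 100 G0; Z in solar units."""
    return 263.0 * Z**-1.19

# At Z = 0.5 Zsun: ~48 cm^-3 (Dahlia) vs ~600 cm^-3 (Althaea),
# i.e. roughly an order of magnitude difference.
```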
As noted in \citet{hopkins:2013arxiv}, although variations in the density threshold lead to similar total SFR, they might severely affect the galaxy morphology. We will return to this point in the next Section.
We recall that in both simulations we use the cell radius to calculate the column densities used, e.g., for the gas self-shielding. This is done mainly to ensure a fair comparison between the two simulations.
In other simulations, e.g. of MCs illuminated by an external radiation field, the adopted prescriptions account for the contribution to the column density from nearby cells, i.e. by using a Jeans or Sobolev-like length (see e.g. \citealt{hartwig:2015apj} for a comparison between different prescriptions). However, in our simulations we expect stars to be very close to, or embedded in, potential star-forming regions. Including the contribution to the column density from the surrounding gas would then overestimate the self-shielding effect. Such modelling uncertainty would be resolved by including radiative transfer in the simulation.
However, we note that at $z=6$ the radius of our cells as a function of density can be approximated as $r_{\rm cell} = 154.1\,(n/{\rm cm}^{-3})^{-1/3}{\rm pc}$, while the Jeans length is $l_J = 15.6\, (n/{\rm cm}^{-3})^{-1/2} (T/{\rm K})^{1/2}{\rm pc}$. Thus, for the typical values found for the molecular gas in Alth{\ae}a ($n \simeq 300 \,{\rm cm}^{-3}$ and $T\simeq 100 \,\rm K$, see later Fig.~\ref{fig_eos_h2}), the two prescriptions give similar results, i.e. $r_{\rm cell} \sim 20 \,\rm pc$ and $l_J \sim 10\,\rm pc$.
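This comparison of the two scales can be checked numerically (a sketch with our own function names, using the fitted coefficients quoted above):

```python
def r_cell(n):
    """AMR cell radius [pc] at z = 6 as a function of gas density [cm^-3]."""
    return 154.1 * n ** (-1.0 / 3.0)

def jeans_length(n, T):
    """Jeans length [pc] for density n [cm^-3] and temperature T [K]."""
    return 15.6 * n ** -0.5 * T ** 0.5

# Typical molecular gas in Althaea: r_cell ~ 23 pc, l_J ~ 9 pc,
# i.e. the two prescriptions agree to within a factor ~2-3.
r = r_cell(300.0)
l_j = jeans_length(300.0, 100.0)
```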
\begin{figure}
\centering
\includegraphics[width=0.49\textwidth]{plots_pdf/test_krome_Z.pdf}
\caption{Benchmark of the formation of ${\rm {H_2}}$~for the model used in Dahlia (\citetalias{krumholz:2009apj}, Sec.~\ref{sec_chem_eq}) and in Alth{\ae}a (\citetalias{bovino:2016aa}, Sec.~\ref{sec_chem_noneq}).
In each panel we plot the ${\rm {H_2}}$~mass fraction $f_{\rm H2}$ as a function of density ($n$), with different panels showing the results for different metallicities ($Z$). In each panel, the dashed grey line indicates the \citetalias{krumholz:2009apj}~model, while the \citetalias{bovino:2016aa}~models are plotted with solid lines, with different colours indicating a different impinging ISRF flux ($G$). On the upper axis we indicate the free-fall times ($t_{\rm ff}$) corresponding to $n$.
\label{fig_chimica_krome}
}
\end{figure}
\section{Results}
We now turn to a detailed analysis of the two zoomed galaxies, Dahlia and Alth{\ae}a\footnote{We refrain from the analysis of the satellite population of the two galaxies due to the oversimplifying assumption of a spatially uniform ISRF artificially suppressing star formation in environments with metallicity close to the floor value $Z_{\rm floor}=10^{-3}{\rm Z}_{\odot}$.}. We start by studying the star formation and the build-up of the stellar mass from $z \simeq 15$ to $z=6$ (Sec.~\ref{sec_sfr_feed}). We then specialize at $z=6$ to inspect the galaxy morphology (Sec.~\ref{sec_morphology}), the ISM multiphase structure (Sec.~\ref{sec_thermo_state}) and the predicted observable properties (Sec.~\ref{sec_obs_prop}). An overall summary of the properties of the two galaxies is given in Tab.~\ref{tab_summary}.
\subsection{Star formation history}\label{sec_sfr_feed}
\begin{figure}
\centering
\includegraphics[width=0.49\textwidth]{plots_pdf/comparison_althaea_dahlia_sfr.pdf}
\caption{
Star formation rate (SFR) as a function of galaxy age ($t_{\star}$) for Dahlia (black line and hatched region) and Alth{\ae}a (orange line and transparent region).
Also shown (grey dashed line) is an analytical approximation (within a factor 2 for both galaxies) to the average SFR trend.
The redshift ($z$) corresponding to $t_{\star}$ is plotted on the upper axis; $t_{\star}=0$ corresponds to the first star formation event in Dahlia, and the plotted SFRs are averaged over $4\,\rm Myr$.
\label{fig_sfr_comparison}
}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.49\textwidth]{plots_pdf/comparison_sfr_mass.pdf}
\caption{
SFR vs stellar mass ($M_{\star}$) for Alth{\ae}a (circles) and Dahlia (squares), with symbols coloured according to the age $t_{\star}$. With crosses we overplot the SFR and $M_{\star}$ inferred from 27 galaxies observed at $z\simeq 6$ by \citetalias{jiang:2016apj}. Following the \citetalias{jiang:2016apj}~analysis\textsuperscript{\ref{foot_j16}}, galaxies identified as young and old are plotted in blue and red, respectively. To guide the eye, the linear correlations of the two subsamples are also shown with dashed lines. See the text for more details.
\label{fig_sfr_mass_obs_comparison}
}
\end{figure}
In Fig.~\ref{fig_sfr_comparison} we plot the SFR history as a function of ``galaxy age'' ($t_{\star}$) for Dahlia and Alth{\ae}a; $t_{\star}=0$ marks the first star formation event in Dahlia\footnote{Note that even with the same modelling and IC, differences in the SFR may arise as a result of stochasticity in the star formation prescription eq. \ref{eq_sk_relation}. Such differences vanish once the SFR is averaged on timescales longer than the typical free-fall time of the star forming gas.}.
For both Dahlia and Alth{\ae}a the SFR has an increasing trend, which can be approximated with good accuracy (within a factor of 2) as ${\rm SFR} = 1.5 \log(t_{\star}/(30\,\rm Myr))$. However, the SFR in Dahlia is larger by a factor $\simeq 1.5 \pm 0.6$ when averaged over the entire SFR history ($\simeq 700\,\rm Myr$).
Thus, in spite of very different chemical prescriptions, the SFR in the two galaxies shows very little variation. Stated differently, the higher critical density for star formation arising from non-equilibrium chemistry does not significantly alter the rate at which stars form, as already noticed in Sec.~\ref{sec_chem_test}. This also entails a comparable metallicity, and we note that in both galaxies most of the metal mass is locked in stars (see Tab. \ref{tab_summary}), as they are typically formed from the most enriched regions.
It is interesting to check the evolutionary paths of Dahlia and Alth{\ae}a (Fig.~\ref{fig_sfr_mass_obs_comparison}) in the standard SFR vs. stellar mass ($M_{\star}$) diagram, and compare them with data\footnote{SFR and $M_{\star}$ have been derived by assuming an exponentially \textit{increasing} SFR, consistent with the history of both our simulated galaxies (Fig.~\ref{fig_sfr_comparison}).\label{foot_j16}} inferred from $z\simeq 6$ observations of 27 Lyman Alpha Emitters (LAE) and LBGs \citep[][hereafter \citetalias{jiang:2016apj}]{jiang:2016apj}. By using multi-band data, precise redshift determinations, and an estimate of nebular emission from Ly$\alpha$, \citetalias{jiang:2016apj}~were able to distinguish between a young ($t_{\star}\lsim 30\,\rm Myr$) and an old ($t_{\star}\gsim 100\,\rm Myr$) subsample. Each subsample exhibits a linear correlation in $\log {\rm SFR} - \log M_*$, albeit with a different normalization: the young (old) subsample has a ${\rm sSFR} = {\rm SFR}/M_{\star} = 39.7\, {\rm Gyr}^{-1} (4.1\, {\rm Gyr}^{-1})$.
The SFR vs stellar mass of our simulated galaxies for $M_{\star}\lsim 10^{8.5} {\rm M}_{\odot}$ ($t_{\star}\lsim 100\, \rm Myr$) is fairly consistent with the young subsample relation (keeping in mind stochasticity effects at low stellar masses). At later evolutionary stages ($t_{\star}\gsim 300\, \rm Myr$ or $M_{\star}\gsim 10^{9.5} {\rm M}_{\odot}$), Dahlia and Alth{\ae}a nicely shift to the lower sSFR values characterizing the old \citetalias{jiang:2016apj} subsample data. This shift must be understood as a result of increasing stellar feedback: as galaxies grow, the larger energy input from the accumulated stellar populations hinders subsequent SFR events.
Note that at late times ($t_{\star}\gsim 300$ Myr), when $M_{\star} = 5\times 10^9 M_\odot$, the sSFRs of Dahlia and Alth{\ae}a are in agreement with analytical results by \citet{behroozi:2013apj}, and with $z = 7$ observations by \citet{gonzalez:2010}.
\begin{figure}
\centering
\includegraphics[width=0.49\textwidth]{plots_pdf/comparison_althaea_dahlia_e_input_ratio.pdf}
\caption{
Ratio of mechanical ($\dot{E}_{\rm sn+w}$) and radiative ($\dot{E}_{\rm rad}$) energy deposition rates by stars as a function of galaxy age ($t_{\star}$) for Dahlia (black line/hatched area) and Alth{\ae}a (orange/transparent). Dashed lines indicate the $\simeq 700\,\rm Myr$ time-averaged mean of the ratios for each galaxy. To guide the eye we plot the unity value (dotted grey line). Similar to Fig.~\ref{fig_sfr_comparison} the ratios are averaged over $4\,\rm Myr$. The upper horizontal axis indicates redshift.
\label{fig_energy_comparison}
}
\end{figure}
\begin{table*}
\centering
\begin{tabular}{l|c|c|c|c}
\hline
Property & Symbol & Dahlia & Alth{\ae}a & [units] \\
\hline
\hline
Star formation rate & $\rm SFR$ & $156.19$ & $136.50$ & ${\rm M}_{\odot}/{\rm yr}$ \\
Specific SFR & $\rm sSFR$ & $4.45 $ & $5.23 $ & ${\rm Gyr}^{-1}$ \\
Stellar mass & $M_{\star}$ & $3.51 $ & $2.61 $ & $10^{10}{\rm M}_{\odot}$ \\
Metal mass in stars & $M_{\star}^Z$ & $8.20 $ & $5.87 $ & $10^{8}{\rm M}_{\odot}$ \\
\hline
Gas mass & $M_{g}$ & $1.23 $ & $2.72 $ & $10^9{\rm M}_{\odot}$ \\
H$_2$ mass & $M_{\rm H2}$ & $17.01$ & $4.76 $ & $10^7{\rm M}_{\odot}$ \\
Metal mass & $M_{Z}$ & $1.41 $ & $2.48 $ & $10^7{\rm M}_{\odot}$ \\
Disk radius & $r_d$ & $610 $ & $504 $ & $\rm pc$ \\
Disk scale height & $H$ & $224 $ & $191 $ & $\rm pc$ \\
\hline
Gas density & $\langle n\rangle$ & $23.89$ & $164.41$ & ${\rm cm}^{-3}$ \\
H$_2$ density & $\langle n_{\rm H2}\rangle$ & $6.62 $ & $4.95$ & ${\rm cm}^{-3}$ \\
Metallicity & $\langle Z\rangle$ & $0.57 $ & $0.46$ & ${\rm Z}_{\odot}$ \\
\hline
Gas surface density & $\langle\Sigma\rangle$ & $37.89$ & $222.02$ & ${\rm M}_{\odot}/{\rm pc}^{2}$ \\
Star formation surface density & $\langle\dot{\Sigma}_{\star}\rangle$ & $0.40 $ & $0.83 $ & ${\rm M}_{\odot}/{\rm pc}^{2}/\rm Myr$ \\
Luminosity \hbox{[C~$\scriptstyle\rm II $]}~$157.74\,\mu{\rm m}$ & $L_{\rm CII}$ & $3.39 $ & $21.08 $ & $10^7{\rm L}_{\odot}$ \\
Luminosity ${\rm {H_2}}$~$17.03\,\mu{\rm m}$ & $L_{\rm H2}$ & $2.31 $ & $33.24 $ & $10^5{\rm L}_{\odot}$ \\
\end{tabular}
\caption{
Physical properties of Dahlia and Alth{\ae}a at $z=6$. The values refer to gas and stars within $2.5\,{\rm kpc}$ from the galaxy center (similar to the field of view in Fig.s \ref{fig_mappe_comparison_1} and \ref{fig_mappe_comparison_2}). The effective radius, $r_d$, and gas scale height, $H$, are calculated from the principal component analysis of the density field. Values for $n$, $n_{\rm H2}$, $Z$, $\Sigma$, and $\dot{\Sigma}_{\star}$ represent mass-weighted averages.
\label{tab_summary}
}
\end{table*}
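As a sanity check on Tab.~\ref{tab_summary}, the sSFR entries follow directly from the SFR and stellar-mass rows (a trivial sketch, with our own function name):

```python
def ssfr_gyr(sfr, mstar):
    """Specific SFR [Gyr^-1] from SFR [Msun/yr] and M_star [Msun]."""
    return sfr / mstar * 1e9

ssfr_dahlia = round(ssfr_gyr(156.19, 3.51e10), 2)   # 4.45 Gyr^-1
ssfr_althaea = round(ssfr_gyr(136.50, 2.61e10), 2)  # 5.23 Gyr^-1
```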
As feedback clearly plays a major role in the overall evolution of early galaxies, we turn to a more in-depth analysis of its energetics.
This can be quantified in terms of the stellar energy deposition rates in \emph{mechanical} (SN explosions + OB/AGB winds\footnote{On average OB/AGB winds account only for $\lsim 10\%$ of the SN power.}, $\dot{E}_{\rm sn+w}$) and \emph{radiative} ($\dot{E}_{\rm rad}$) forms. These are shown as a function of time in Fig.~\ref{fig_energy_comparison}. The $\dot{E}_{\rm sn+w}/\dot{E}_{\rm rad}$ ratio shows short-term ($\simeq 20 \,\rm Myr$) fluctuations corresponding to coherent bursts of star formation/SN activity.
Barring this time modulation, on average the mechanical/radiative energy ratio increases up to $\simeq 250\,\rm Myr$, when it suddenly drops and reaches an equilibrium value. This implies that radiation pressure dominates the energy input; consequently it represents the major factor in quenching star formation. While this is true throughout the evolution, it becomes even more evident after $\simeq 250\,\rm Myr$, when the first stellar populations with $Z_{\star}\gsim 10^{-1}{\rm Z}_{\odot}$ enter the AGB phase. At that time, winds from AGBs enrich their surroundings with metals and dust. As dust produced by AGBs remains more confined than SN dust around the production region, it provides a higher opacity, thus boosting radiation pressure via a more efficient dust-gas coupling (see also \citetalias{pallottini:2017dahlia}).
For Dahlia the radiative energy input rate is about 20 times larger than the mechanical one; in Alth{\ae}a the mechanical-to-radiative ratio $\dot{E}_{\rm sn+w}/\dot{E}_{\rm rad}$ is on average $8$ times higher than in Dahlia, and larger fluctuations are present. The latter are caused by the occurrence of more frequent and powerful bursts of SN events in Alth{\ae}a. Why does this happen?
The answer has to do with the different gas morphology. As already noted when discussing Fig.~\ref{fig_chimica_krome}, the higher critical density for star formation imposed by non-equilibrium chemistry has a number of consequences: (a) each formation event can produce a star cluster with a higher mass;
(b) star formation is more likely hosted in isolated high-density clumps (see later, particularly Fig.~\ref{fig_morfologia}); (c) in a clumpier disk, SN explosions can more easily break into more diffuse regions. The combination of (a) and (b) increases the probability of spatially coherent explosions having a stronger impact on the surrounding gas; due to (c), the blastwaves suffer strongly reduced radiative losses \citep{gatto:2015mnras}, and affect larger volumes.
Similar effects have been also noted in the context of single giant MCs ($\sim10^6{\rm M}_{\odot}$), where unless the SNs explode coherently, their energy is quickly radiated away because of the very high gas densities \citep{reyraposo:2017}.
For the remainder of this work we focus on $z=6$, when the galaxies have an age of $t_{\star}\simeq 700\, \rm Myr$.
\subsection{Galaxy morphology}\label{sec_morphology}
\begin{figure*}
~\hfill\includegraphics[width=0.96\textwidth]{plots_pdf/maps/density_27.pdf}\hfill~\\
~\hfill\includegraphics[width=0.96\textwidth]{plots_pdf/maps/temperature_27.pdf}\hfill~\\
~\hfill\includegraphics[width=0.96\textwidth]{plots_pdf/maps/dens_H2_27.pdf}\hfill~\\
\caption{
(Caption next page.) %
\label{fig_mappe_comparison_1}
}
\end{figure*}
\addtocounter{figure}{-1}
\begin{figure*}
\caption{(Previous page.)
Face-on maps\textsuperscript{\ref{footnote_pymses}} of Dahlia (left panels) and Alth{\ae}a (right) at age $t_{\star}\simeq 700\, \rm Myr$ ($z=6$). Shown are line-of-sight mass-weighted averages of the gas density (upper panels), temperature (middle), and ${\rm {H_2}}$~density (lower) fields, with amplitude given by the colorbar. The maps are $6.31\,{\rm kpc}$ on a side.
}
\end{figure*}
\begin{figure}
\centering
\includegraphics[width=0.49\textwidth]{plots_pdf/minkowsky_compare.pdf}
\caption{
Morphological comparison of the molecular gas at $z=6$.
In the four panels we plot the Minkowski functionals ($V_{0},V_{1},V_{2},V_{3}$) of the ${\rm {H_2}}$~density field ($n_{\rm H2}/{\rm cm}^{-3}$).
Functionals are plotted with a black line and hatched regions for Dahlia, and with an orange line and transparent region for Alth{\ae}a.
Note that the Minkowski functionals are given in comoving units.
For details on the calculation of the Minkowski functionals see App.~\ref{sec_app_minchioschi} (in particular Fig.~\ref{fig_morfologia_test}).
\label{fig_morfologia}
}
\end{figure}
Dahlia and Alth{\ae}a sit at the centre of a cosmic web knot and accrete mass from the intergalactic medium mainly via 3 filaments of length $\simeq 100\,{\rm kpc}$. In both simulations the large scale structure is similar, and we refer the reader to Sec.~3.1 of \citetalias{pallottini:2017dahlia} for its analysis. Differences between the simulations are expected to arise on the ISM scale, whose structure is visible on $\simeq 7\,{\rm kpc}$ scales. In Fig.~\ref{fig_mappe_comparison_1} we show the gas density, temperature, and ${\rm {H_2}}$~density ($n_{\rm H2} = f_{\rm H2}\,n\,\mu$) fields for Dahlia and Alth{\ae}a. The map\footnote{The maps of this work are rendered by using a customized version of \textlcsc{pymses} \citep{labadens:2012aspc}, a visualization software that implements optimized techniques for the AMR of \textlcsc{ramses}.\label{footnote_pymses}} centers coincide with Dahlia's stellar center-of-mass.
\subsubsection{Overview}
Qualitatively, both galaxies show a clearly defined, albeit somewhat perturbed, spiral disk of radius $\simeq 0.5\,\rm kpc$, embedded in a lower density ($n\simeq 0.1\,{\rm cm}^{-3}$) medium. However, the mean disk gas density for Dahlia is $\langle n\rangle=24\,{\rm cm}^{-3}$, while for Alth{\ae}a $\langle n\rangle = 164\,{\rm cm}^{-3}$ (see Tab.~\ref{tab_summary}). The temperature structure shows fewer differences, i.e. the inner disk is slightly hotter for Dahlia ($T\simeq 300\,\rm K$) than for Alth{\ae}a ($T\simeq 100\,\rm K$), which features instead slightly more abundant and extended pockets of shock-heated gas ($T\gsim 10^6\,\rm K$). Such high-$T$ regions are produced by both accretion shocks and SN explosions. In both galaxies the typical ${\rm {H_2}}$~density is the same, i.e. $\langle n_{\rm H2}\rangle = 5\,{\rm cm}^{-3}$; however, with respect to Dahlia, Alth{\ae}a shows a slightly smaller disk that also appears clumpier.
To summarize, the galaxies differ by an order of magnitude in atomic density, but have the same molecular density. In spite of this difference, the SFRs are roughly similar. This can be explained as follows. To first order, in our model ${\rm SFR} \propto n_{\rm H2}\, n^{1/2}\, V$, where $V = 2\pi r_d^2 H$ is the galaxy volume (Tab.~\ref{tab_summary}). It follows that Alth{\ae}a's larger density is largely compensated by its smaller volume.
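This compensation can be verified with the values of Tab.~\ref{tab_summary} (a rough order-of-magnitude sketch; the function name and the unit normalization are ours, and the proportionality constant is arbitrary):

```python
import math

def sfr_proxy(n_h2, n, r_d, H):
    """SFR proxy: SFR ~ n_H2 * n^0.5 * V with V = 2 pi r_d^2 H.
    Densities in cm^-3, lengths in pc; normalization is arbitrary."""
    return n_h2 * math.sqrt(n) * 2.0 * math.pi * r_d**2 * H

# Althaea-to-Dahlia ratio of the proxy, using the tabulated averages:
ratio = sfr_proxy(4.95, 164.41, 504.0, 191.0) / sfr_proxy(6.62, 23.89, 610.0, 224.0)
# ratio ~ 1.1, i.e. of order unity, as is the measured SFR ratio 136.5/156.2.
```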
\subsubsection{In-depth analysis}
Fig.~\ref{fig_mappe_comparison_1} visually illustrates the morphological differences between the two galaxies. The gas in Alth{\ae}a appears clumpier than in Dahlia. To quantify this statement we start by introducing the ${\rm {H_2}}$~clumping factor on the smoothing scale $r$, which is defined as\footnote{To calculate the clumping factor, first we construct the 3D unigrid cube of the ${\rm {H_2}}$~mass field, then we smooth it with a Gaussian kernel of scale $r$ and finally we calculate the mass-weighted average and variance of the smoothed ${\rm {H_2}}$~density field.}
\be
C(r) = \langle n_{\rm H2}^{2} \rangle_{r}/\langle n_{\rm H2} \rangle_{r}^{2}\,.
\ee
For Dahlia, $C(r)$ decreases from $10^3$ to 10 as $r$ increases from 30 pc to 1 kpc, while for Alth{\ae}a $C(r)$ is $\gsim 2$ times larger on all scales.
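The clumping factor defined above can be sketched as follows (our own implementation, assuming a unigrid cube and the Gaussian smoothing plus mass-weighted averages described in the footnote; \textlcsc{scipy} is used for the smoothing):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def clumping_factor(n_h2, cell_size, r):
    """C(r) = <n^2>_r / <n>_r^2 of a 3D density cube: the field is
    smoothed on scale r, and the averages are mass-weighted, with
    weights proportional to the smoothed density itself."""
    s = gaussian_filter(np.asarray(n_h2, dtype=float), sigma=r / cell_size)
    w = s / s.sum()                # mass weights for equal-volume cells
    mean = (w * s).sum()
    mean_sq = (w * s**2).sum()
    return mean_sq / mean**2

# C = 1 for a uniform field; C > 1 for any clumpy distribution.
```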
A more in-depth analysis can be performed using the Minkowski functionals \citep[][App.~\ref{sec_app_minchioschi}]{schmalzing:1998,gleser:2006mnras,yoshiura:2017mnras}, which give a complete description of the morphological structure of the molecular gas. For a 3-dimensional field, 4 independent Minkowski functionals can be defined. Each functional $V_{i}(n_{\rm H2})$ $(i=0,\dots,3)$ characterizes a different morphological property of the excursion set with ${\rm {H_2}}$~density $>n_{\rm H2}$: $V_{0}$ gives the volume filling factor, $V_{1}$ measures the total enclosed surface, $V_{2}$ is the mean curvature, quantifying the sphericity/concavity of the set, and $V_{3}$ estimates the Euler characteristic (i.e. multiple isolated components vs. a single connected one). Appendix \ref{sec_app_minchioschi} gives more rigorous definitions with an illustrative application (Fig.~\ref{fig_morfologia_test}).
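The simplest of these functionals, the filling factor $V_0$, reduces on a grid to the fraction of cells above threshold; a toy sketch (our own helper; the remaining three functionals require discretized surface integrals, see the appendix referenced in the text):

```python
import numpy as np

def v0_filling_factor(field, thresholds):
    """V0(t): volume filling factor of the excursion set {field > t}."""
    field = np.asarray(field)
    return [float((field > t).mean()) for t in thresholds]

# Toy lognormal-like field: V0 decreases monotonically with the threshold,
# as in the upper-left panel of the Minkowski-functional figure.
rng = np.random.default_rng(0)
f = np.exp(rng.normal(size=(16, 16, 16)))
v0 = v0_filling_factor(f, [0.5, 1.0, 2.0])
```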
In Fig.~\ref{fig_morfologia} we plot the Minkowski functionals ($V_{0},V_{1},V_{2},V_{3}$) calculated for the ${\rm {H_2}}$~density field of Dahlia and Alth{\ae}a.
The $V_{0}$ functional analysis shows that Alth{\ae}a is more compact, i.e. for each $n_{\rm H2}$ value Dahlia's excursion set volume is larger and it plummets rapidly at large densities. On the other hand, the set surface of Alth{\ae}a is larger by about a factor of 5, implying that this galaxy is fragmented into multiple, disconnected components.
This is confirmed also by Alth{\ae}a's larger ($10\times$) Euler characteristic measure, $V_{3}$, an indication of the prevalence of isolated structures. This feature becomes more evident towards larger densities, as expected if ${\rm {H_2}}$~is concentrated in molecular clouds\footnote{$V_{3}>0$ values at $\log(n_{\rm H2}/{\rm cm}^{-3})\simeq 1.2$ in Dahlia are mainly due to the presence of the 3 satellites/clumps outside the disk.}.
Further, in Dahlia most of the molecular gas resides in connected ($V_{3}\lsim 0$) disk regions with a concave shape ($V_{2}<0$). For Alth{\ae}a there is a transition: for $\log(n_{\rm H2}/{\rm cm}^{-3})\lsim 1$ the gas has a concave ($V_{2}<0$), disjointed ($V_{3}>0$), filamentary structure, while for $\log(n_{\rm H2}/{\rm cm}^{-3})\gsim 1$ the galaxy is composed of spherical clumps ($V_{2}>0$).
\subsection{ISM thermodynamics}\label{sec_thermo_state}
\begin{figure*}
\centering
\includegraphics[width=0.49\textwidth]{plots_pdf/eos/eos_x_dens_y_temp_z_mass_dahlia_27.pdf}
\includegraphics[width=0.49\textwidth]{plots_pdf/eos/eos_x_dens_y_temp_z_mass_althaea_28.pdf}
\caption{
Equation of state (EOS) of the gas within $30\,{\rm kpc}$ for Dahlia (left panel) and Alth{\ae}a (right panel) at $t_{\star}\simeq 700\, \rm Myr$ ($z=6$).
EOS are shown as mass-weighted probability distribution functions (PDF) in the density-temperature ($n-T$) plane, as specified by the colorbar. For both galaxies, the EOS projection on the $n$ ($T$) axis is additionally shown as a horizontal (vertical) inset. The 2D EOS are normalized such that the integral over the $n-T$ plane is unity; the projected EOS are normalized such that the sum of the bins is equal to $100\%$.
\label{fig_eos}
}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[width=0.49\textwidth]{plots_pdf/eos/eos_x_dens_y_temp_z_H2_KTM_mass_dahlia_27.pdf}
\includegraphics[width=0.49\textwidth]{plots_pdf/eos/eos_x_dens_y_temp_z_H2_krome_mass_althaea_28.pdf}
\caption{
EOS of the molecular (${\rm {H_2}}$) gas for Dahlia (left panel) and Alth{\ae}a (right panel), i.e. the ${\rm {H_2}}$~mass-weighted PDF in the $n-T$ plane. Notation is similar to Fig.~\ref{fig_eos}, albeit a different region of the $n-T$ plane is shown.
\label{fig_eos_h2}
}
\end{figure*}
The thermodynamical state of the ISM can be analyzed by studying the probability distribution function (PDF) of the gas in the density-temperature plane, i.e. the equation of state (EOS). In Fig.~\ref{fig_eos} we plot the mass-weighted EOS for Dahlia and Alth{\ae}a at $z=6$. We include gas within $30\,{\rm kpc}$, or $\simeq 2\, r_{\rm vir}$, from the galaxy center.
From the EOS we can see that in both galaxies $70\%$ of the gas is in a photoionized state ($T\sim 10^4\rm K$), which in Dahlia is induced by the \citet[][]{Haardt:2012} UVB, while in Alth{\ae}a it is mainly due to photo-electric heating of dust grains illuminated by the uniform ISRF of intensity $G$. Only $\simeq 10\%$ of the gas is in a hot ($10^6$ K) component produced by accretion shocks and SN explosions. A relatively minor difference descends from Alth{\ae}a's more effective mechanical feedback, already noted when discussing Fig.~\ref{fig_energy_comparison}: small pockets of freshly produced, very hot ($\ge 10^6\,\rm K$) and diffuse ($0.1\, {\rm cm}^{-3}$) gas are twice as abundant in Alth{\ae}a, as can be appreciated from a visual inspection of the temperature maps in Fig.~\ref{fig_mappe_comparison_1}.
Fig.~\ref{fig_eos} (in particular, compare the upper horizontal panels) shows that the density PDF is remarkably different in the two galaxies. In Dahlia the distribution peaks at $0.1\,{\rm cm}^{-3}$; Alth{\ae}a instead features a bi-modal PDF with a second, similar-amplitude peak at $n\simeq 100\,{\rm cm}^{-3}$. As a consequence, dense ($\gsim 10\,{\rm cm}^{-3}$) gas is about 2 times more abundant in the latter system. In addition, the very dense gas ($n\gsim 300\,{\rm cm}^{-3}$), only present in Alth{\ae}a, can cool to temperatures of 30 K, not far from the CMB one.
The high-density part of the PDF is worth some more insight as it describes the gas that ultimately regulates star formation. This gas is largely in molecular form, and accounts (see Tab.~\ref{tab_summary}) for $1.7\%$ ($13.8\%$) of the total gas mass in Dahlia (Alth{\ae}a). Its ${\rm {H_2}}$~density-weighted distribution in the $n-T$ plane is reported in Fig.~\ref{fig_eos_h2}.
On average, the ${\rm {H_2}}$~gas in Dahlia is 10 times less dense than in Alth{\ae}a, as a result of the new non-equilibrium prescription requiring higher gas densities to reach the same $f_{\rm H2}$ fraction; at the same time, the warm ($T\gsim 10^{3}\rm K$) ${\rm {H_2}}$~fraction drops from $20\%$ (Dahlia) to an almost negligible value. Clearly, the warm component was a spurious result, as (a) ${\rm {H_2}}$~cooling was not included, and (b) $f_{\rm H2}$ was considered to be independent of gas temperature (see eq. \ref{eq_fh2_anal}).
Note that in Alth{\ae}a traces of warm ${\rm {H_2}}$~are only found at large densities, in virtually metal-free gas in which ${\rm {H_2}}$~production must proceed via much less efficient gas-phase reactions rather than on dust surfaces. This tiny fraction of molecular gas can survive only if densities large enough to provide a sufficient ${\rm {H_2}}$~self shielding against photodissociation are present.
Finally, the sharp EOS cutoff at $n\gsim 10^{2}{\rm cm}^{-3}$ in Dahlia is caused by the density-threshold behavior mimicked by the enforced chemical equilibrium: above $n_{c} \simeq 26.45 \, (Z/{\rm Z}_{\odot})^{-0.87} {\rm cm}^{-3}$ (Sec.~\ref{sec_chem_eq}) the gas is rapidly turned into stars. This spurious effect disappears in Alth{\ae}a, which implements a full time-dependent chemical network.
\section{Observational properties}\label{sec_obs_prop}
As we already mentioned, the strongest impact of different chemistry implementations is on the gas properties, and consequently
on ISM-related observables. In the following, we highlight the most important among these aspects.
\begin{figure}
\centering
\includegraphics[width=0.49\textwidth]{plots_pdf/sk_global.pdf}
\caption{
Comparison of the observed and simulated Schmidt-Kennicutt relation expressed in terms of $\dot{\Sigma}_{\star}$ - $\Sigma/t_{\rm ff}$.
Observations are taken from single MCs \citep{heiderman:2010,lada:2010apj}, local unresolved galaxies \citep{kennicutt:1998apj}, and moderate redshift unresolved galaxies \citep{bouche2007apj,daddi:2010apj,daddi:2010b,tacconi:2010Natur,genzel:2010mnras}; the correlation (dispersion) for the observation found by \citet[][see the text for details]{krumholz:2012apj} is plotted with a black dashed line (grey shaded region).
Dahlia and Alth{\ae}a averaged values are plotted with black and orange stars, respectively (see Fig.~\ref{fig_eos_sk} for the complete distribution in the simulated galaxies).
\label{fig_riassunto_sk}
}
\end{figure}
\begin{figure*}
\centering
\includegraphics[width=0.49\textwidth]{plots_pdf/eos/eos_x_surfd_dens_y_surfd_star_KTM_z_H2_KTM_mass_dahlia_27.pdf}
\includegraphics[width=0.49\textwidth]{plots_pdf/eos/eos_x_surfd_dens_y_surfd_star_z_H2_krome_mass_althaea_28.pdf}
\caption{
Schmidt-Kennicutt relation in Dahlia (left panel) and Alth{\ae}a (right panel) at $t_{\star}\simeq 700\, \rm Myr$ ($z=6$).
The relation is plotted using the ${\rm {H_2}}$~mass weighted PDF of the instantaneous SFR surface density $(\dot{\Sigma}_{\star}/{\rm M}_{\odot}\,{\rm pc}^{-2}\,\rm Myr^{-1})$ versus the total gas surface density ($\Sigma/({\rm M}_{\odot}\,{\rm pc}^{-2})$).
On both panels, dashed grey lines overplot the relation observed by \citet{Kennicutt:2012}, i.e. $\dot{\Sigma}_{\star} \propto \Sigma^{1.4}$, for several normalizations written inline.
Otherwise the notation is similar to Figs \ref{fig_eos_h2} and \ref{fig_eos}.
\label{fig_eos_sk}
}
\end{figure*}
\begin{figure*}
~\hfill\includegraphics[width=0.97\textwidth]{plots_pdf/maps/emi_CII_27.pdf}\hfill~\\
~\hfill\includegraphics[width=0.97\textwidth]{plots_pdf/maps/emi_H2_emi_27.pdf}\hfill~\\
\caption{
Synthetic emission maps\textsuperscript{\ref{footnote_pymses}} of the simulated galaxies Dahlia (left panels) and Alth{\ae}a (right panels) at age $t_{\star}\simeq 700\, \rm Myr$ ($z=6$).
Integrated surface brightness of \hbox{[C~$\scriptstyle\rm II $]}~($S_{\rm [CII]}/(\lsun\,{\rm kpc}^{-2})$) and ${\rm {H_2}}$~($S_{\rm H2}/(\lsun\,{\rm kpc}^{-2})$) are shown in the upper and lower panels, respectively.
The field of view is the same as in Fig.~\ref{fig_mappe_comparison_1}.
\label{fig_mappe_comparison_2}
}
\end{figure*}
\begin{figure}
\includegraphics[width=0.49\textwidth]{plots_pdf/CII_SFR_AlthaeaDahlia.pdf}
\caption{
The \hbox{[C~$\scriptstyle\rm II $]}-SFR relation. Shown are Alth{\ae}a (orange star) and Dahlia (black) at 700 Myr or $z=6$; the errors refer to the r.m.s. variation in the last $50\,\rm Myr$.
Lines refer to results from the \citetalias{vallini:2015}~model: constant metallicity models with $Z={\rm Z}_{\odot}$ (solid black), $Z=0.2\,{\rm Z}_{\odot}$ (solid orange), $Z=0.05\,{\rm Z}_{\odot}$ (pink dashed), and a model with mean $\langle Z/{\rm Z}_{\odot} \rangle = 0.05$ + density-metallicity relation extracted from cosmological simulations \citep[][blue dot-dashed]{pallottini:2014cgmh}.
Data for local dwarf galaxies \citep{delooze:2014aa} are plotted with small circles, and the grey hatched region gives the mean and r.m.s. variation in the sample.
For high$-z$ galaxies, detections (upper limits) are plotted with filled (empty) symbols, according to the inset legend. The high$-z$ sample includes individual galaxies such as BDF-3299 \citep{maiolino:2015arxiv,carniani:2017bdf3299}, HCM6A \citep{kanekar2013}, Himiko \citep{ouchi2013,ota:2014apj}, IOK-1 \citep{ota:2014apj}, and data from \citet[][$z\simeq 5.5$]{capak:2015arxiv}, \citet[][$z\simeq 6$]{willott:2015arxiv15}, \citet[][$z\simeq 7$]{schaerer:2015}, \citet[][$z\simeq 7$]{pentericci:2016apj}, \citet[][$z\simeq 8$]{gonzalezlopez:2014apj}, and lensed $z\simeq 6.5$ galaxies from \citet{knudsen:2016} and \citet[][]{bradac:2017}.
\label{fig_cii_sfr}
}
\end{figure}
\subsection{Schmidt-Kennicutt relation}
We start by analyzing the classical Schmidt-Kennicutt (SK) relation. This comparison should be interpreted as a consistency check of the balance between SF and feedback, since in the model we assume a SFR law that mimics a SK relation (eq. \ref{eq_sk_relation}).
The SK relation, in its most modern \citep{krumholz:2012apj} formulation, links the SFR ($\dot{\Sigma}_{\star}$) and total gas ($\Sigma$) surface density per unit free-fall time, $\dot{\Sigma}_{\star} = \epsilon_{\star}^{\rm ff} \Sigma/t_{\rm ff}$. The proportionality constant, often referred to as the efficiency per free-fall time following eq. \ref{eq_sk_relation}, is simply $\epsilon_{\star}^{\rm ff} = \zeta_{\rm sf} f_{\rm H2}$. Experimentally, \citet{krumholz:2012apj} find $\epsilon_{\star}^{\rm ff} = 0.015$ (see \citealt{krumholz:2015review} for a complete review on the subject). This result is supported also by a larger set of observations including single MCs \citep{heiderman:2010,lada:2010apj}, local unresolved galaxies \citep{kennicutt:1998apj}, and moderate redshift, unresolved galaxies \citep{bouche2007apj,daddi:2010apj,daddi:2010b,tacconi:2010Natur,genzel:2010mnras}. The SK relation is shown in Fig.~\ref{fig_riassunto_sk}, along with the location of Dahlia and Alth{\ae}a at $z=6$.
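As a quick numerical illustration of the relation $\dot{\Sigma}_{\star} = \epsilon_{\star}^{\rm ff} \Sigma/t_{\rm ff}$ just quoted, the sketch below evaluates it for illustrative input values (the surface density and free-fall time are assumptions for the example, not taken from the simulations):

```python
# Schmidt-Kennicutt relation in the Krumholz et al. (2012) form:
# SFR surface density = eps_ff * (gas surface density) / (free-fall time),
# with the observed efficiency eps_ff ~ 0.015 quoted in the text.
def sk_sfr_surface_density(sigma_gas, t_ff, eps_ff=0.015):
    """Return the SFR surface density [Msun pc^-2 Myr^-1] given the gas
    surface density sigma_gas [Msun pc^-2] and free-fall time t_ff [Myr]."""
    return eps_ff * sigma_gas / t_ff

# Illustrative values: Sigma = 100 Msun/pc^2, t_ff = 10 Myr
rate = sk_sfr_surface_density(100.0, 10.0)  # -> 0.15 Msun pc^-2 Myr^-1
```

A galaxy lying $3\sigma$ above this locus, as Dahlia does, forms stars faster than this scaling predicts for its gas content.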
Dahlia appears to be over-forming stars with respect to its gas mass, and is therefore located about $3\sigma$ above the SK relation. As Alth{\ae}a needs about 10 times higher density to sustain the same SFR, its location is closer to expectations from the SK relation. We have checked that the agreement is even better if we use only data relative to MC complexes \citep[e.g.][]{heiderman:2010,murray:2011apj}.
Dahlia's $\epsilon_{\star}^{\rm ff} = \zeta_{\rm sf} f_{\rm H2}$ is similar to the analogous values found by \citet{semenov:2015}, who compute this efficiency using a turbulent eddy approach \citep{padoan:2012}, with no notion of molecular hydrogen fraction. The difference is that Dahlia misses the high density gas. Alth{\ae}a instead matches both the $\epsilon_{\star}^{\rm ff}$ and the amount of high density gas found by \citet{semenov:2015}. Also, its $\dot{\Sigma}_{\star}-\Sigma$ relation is consistent with \citet{torrey:2016arxiv}, who use a star formation recipe involving self-gravitating gas with a local, ${\rm {H_2}}$-dependent SK relation.
From our simulations it is also possible to perform a cell-by-cell analysis of the SK relation (Fig.~\ref{fig_eos_sk}). As expected, the results show a substantial spread in the local efficiency values, which, however, has a different origin in Dahlia and Alth{\ae}a. While in the former the variation is mostly due to different enrichment levels affecting the ${\rm {H_2}}$~abundance (eq. \ref{eq_fh2_anal}), in Alth{\ae}a the spread is larger because it also results from the individual evolutionary histories of the cells.
As noted by \citet{rosdahl:2017mnras}, for galaxy simulations with a SF model based on SK-like relation (eq. \ref{eq_sk_relation}), the resulting $\epsilon_{\star} =\epsilon_{\star}^{\rm ff}/t_{\rm ff}$ depends on how the feedback is implemented.
However, here we show that Alth{\ae}a has a lower $\epsilon_{\star}$ in spite of the fact that it implements exactly the same feedback prescription as Dahlia. This prescription is qualitatively similar to the delayed cooling scheme used by \citet{rosdahl:2017mnras} and others \citep{stinson:2006mnras,teyssier:2013mnras}.
The lower efficiency $\epsilon_{\star}$ is a consequence of chemistry. As under non-equilibrium conditions the gas must be denser to form ${\rm {H_2}}$, the ISM becomes more clumpy (Fig.~\ref{fig_morfologia}). These clumps can form massive clusters of OB stars which, acting coherently, yield stronger feedback and may completely disrupt the star-forming site.
\subsection{Far and mid infrared emission}
A meaningful way to compare the two galaxies is to predict their \hbox{C~$\scriptstyle\rm II $}~and ${\rm {H_2}}$~line emission, which can be observed at $\mbox{high-}z$~ with ALMA and, possibly, with SPICA \citep[][in preparation]{spinoglio:2017,egami:2017}, respectively. Similarly to \citetalias{pallottini:2017dahlia}, we use a modified version of the \hbox{[C~$\scriptstyle\rm II $]}~emission model from \citet[][hereafter \citetalias{vallini:2015}]{vallini:2015}. The model is based on temperature, density and metallicity grids built using \textlcsc{cloudy} \citep{cloudy:2013}, as detailed in App.~\ref{sez_cloudy_model}.
In Fig.~\ref{fig_mappe_comparison_2} we plot the \hbox{[C~$\scriptstyle\rm II $]}~$157.74\,\mu{\rm m}$ and ${\rm {H_2}}$~$17.03\,\mu{\rm m}$ surface brightness maps ($S/(\lsun\,{\rm kpc}^{-2})$); the field of view is the same as in Fig.~\ref{fig_mappe_comparison_1}.
\subsubsection{Far infrared emission}
Let us analyze first the \hbox{C~$\scriptstyle\rm II $}~emission. Dahlia has a \hbox{[C~$\scriptstyle\rm II $]}~luminosity of $\log (L_{\rm CII}/{\rm L}_{\odot}) \simeq 7.5$, which is about 7 times smaller than Alth{\ae}a's, i.e. $\log (L_{\rm CII}/{\rm L}_{\odot}) \simeq 8.3$. Fig.~\ref{fig_mappe_comparison_2} shows that the surface brightness morphology in the two galaxies is similar. Dahlia's emission is concentrated in the disk, featuring an average surface brightness of $\log\langle S_{\rm [CII]}/(\lsun\,{\rm kpc}^{-2}) \rangle\simeq 6.4$ with peaks up to $\log(S_{\rm [CII]}/(\lsun\,{\rm kpc}^{-2}))\simeq 7.4$ along the spiral arms. The analogous values for Alth{\ae}a are $7.3$ and $8.7$, respectively.
This can be explained as follows.
FIR emission from the warm ($\simeq 10^{4} \rm K$), low density ($\lsim 0.1\,{\rm cm}^{-3}$) component of the ISM is suppressed at $\mbox{high-}z$~ by the CMB (\citealt[][]{Gong:2012ApJ,dacunha:2013apj,pallottini:2015cmb}; \citetalias{vallini:2015}; App. \ref{sez_cloudy_model}), as the upper levels of the \hbox{[C~$\scriptstyle\rm II $]}~transition cannot be efficiently populated through collisions and the spin temperature of the transition approaches the CMB one \citep[see][for possibility of \hbox{[C~$\scriptstyle\rm II $]}~detection from low density gas via CMB distortions]{pallottini:2015cmb}.
Thus, $\simeq 95\%$ of the \hbox{[C~$\scriptstyle\rm II $]}~emission comes from dense ($\gsim 10\,{\rm cm}^{-3}$), cold ($\simeq 100\,\rm K$), mostly molecular disk gas.
As noted in \citetalias{vallini:2015} (see in particular their Fig. 4), even when the CMB effect is neglected, the diffuse gas ($\lsim 0.1\,{\rm cm}^{-3}$) accounts for only $\lsim 5\%$ of the emission for galaxies with ${\rm SFR}\sim 100 \msun\,{\rm yr}^{-1}$ and $Z\sim {\rm Z}_{\odot}$, while it can be important in smaller objects \citep[][]{olsen:2015apj}.
The emissivity (in ${\rm L}_{\odot}/{\rm M}_{\odot}$) of such gas can be written as in eq. 8 of \citetalias{pallottini:2017dahlia} \citep[see also][]{Vallini:2013MNRAS,vallini:2017,goicoechea:2015apj}:
\be\label{eq_emission_cii}
\epsilon_{[\rm CII]} \simeq 0.1\, \left({n\over 10^{2} {\rm cm}^{-3}}\right)\left({Z\over {\rm Z}_{\odot}}\right)\,,
\ee
for $n\lsim 10^3{\rm cm}^{-3}$, i.e. the critical density for \hbox{[C~$\scriptstyle\rm II $]}~emission\footnote{As the CMB suppression affects only the diffuse component ($\lsim 0.1\,{\rm cm}^{-3}$), no significant difference is expected in the emissivity from the disks of the two galaxies (eq. \ref{eq_emission_cii}), which are composed of much higher density ($\gsim 20\,{\rm cm}^{-3}$) material.}.
As the metallicity in the disks of the two galaxies is roughly similar ($\langle Z\rangle\simeq0.5\,{\rm Z}_{\odot}$, see Tab.~\ref{tab_summary}), the difference in luminosity is entirely explained by the larger density in Alth{\ae}a.
We stress once again that this density variation results from a more precise, non-equilibrium chemical network that requires the gas to reach much higher densities before it is converted into stars. It is precisely this dense gas that accounts for the larger FIR line emissivity from PDRs.
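To make the density argument concrete, eq. \ref{eq_emission_cii} can be evaluated directly; the disk densities and metallicity below are illustrative round numbers in the spirit of the values quoted in the text, not measurements from the simulations:

```python
# [CII] emissivity from the scaling above: eps ~ 0.1 (n / 100 cm^-3)(Z / Zsun)
# in Lsun/Msun, valid below the [CII] critical density (~1e3 cm^-3).
def cii_emissivity(n_cm3, z_zsun):
    """Return the [CII] emissivity in Lsun/Msun for gas of hydrogen
    density n_cm3 [cm^-3] and metallicity z_zsun [Zsun]."""
    return 0.1 * (n_cm3 / 1.0e2) * z_zsun

# At fixed Z ~ 0.5 Zsun, a ~10x denser disk is ~10x more emissive per
# unit gas mass -- the linear-in-n scaling behind the luminosity gap.
eps_low = cii_emissivity(30.0, 0.5)    # -> 0.015 Lsun/Msun
eps_high = cii_emissivity(300.0, 0.5)  # -> 0.15  Lsun/Msun
```

Because the emissivity is linear in $n$ at fixed $Z$, the luminosity ratio between the two disks tracks their density ratio almost directly.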
We can also compare the calculated synthetic \hbox{[C~$\scriptstyle\rm II $]}~emission vs. SFR with observations (Fig.~\ref{fig_cii_sfr}) of dwarf galaxies \citep{delooze:2014aa} and with available $\mbox{high-}z$~ detections or upper limits. The \hbox{[C~$\scriptstyle\rm II $]}~emission from Dahlia is lower than expected from the local \hbox{[C~$\scriptstyle\rm II $]}-SFR relation; its luminosity is also well below all upper limits for $\mbox{high-}z$~ galaxies. Although Alth{\ae}a is $\simeq 10$ times more luminous, even this object lies below the local relation, albeit only by $1.3\sigma$. We believe that the reduced luminosity is caused by the combined effects of CMB suppression and a relatively lower $Z$. Note, however, that the predicted luminosity exceeds the upper limits derived for LAEs \citep[e.g.][]{ouchi2013,ota:2014apj}, but is broadly consistent with that of the handful of LBGs detected so far, like e.g. the four galaxies in \citet{pentericci:2016apj}.
In general, observations are still rather sparse, with few \hbox{[C~$\scriptstyle\rm II $]}~detections at SFRs comparable to Alth{\ae}a's \citep[e.g.][]{capak:2015arxiv}. Also unclear is the scatter of the relation for $\mbox{high-}z$~ objects compared with local ones.
Improvements in the understanding of the ISM structure are expected from deeper observations and/or other ions \citep[e.g. \OIII,][]{inoue:2016sci,carniani:2017bdf3299}. Also helpful would be a larger catalogue of simulated galaxies \citep[cfr.][]{ceverino:2017}, to control for environmental effects.
\subsubsection{Mid infrared emission}
By inspecting the lower panel of Fig.~\ref{fig_mappe_comparison_2}, showing the predicted ${\rm {H_2}}$~$17.03\,\mu{\rm m}$~line emission, we come to conclusions similar to those for \hbox{[C~$\scriptstyle\rm II $]}. Alth{\ae}a outshines Dahlia by $\simeq 15 \times$, delivering a total line luminosity of $\log (L_{\rm H2}/{\rm L}_{\odot}) \simeq 6.5$. Differently from the \hbox{[C~$\scriptstyle\rm II $]}~case, the deviations from the mean are also much more marked in Alth{\ae}a, as can be appreciated from the Figure.
Note that the ${\rm {H_2}}$~$17.03\,\mu{\rm m}$~line emissivity is enhanced in high density, high temperature regions. Indeed, ${\rm {H_2}}$~emission mostly arises from shock-heated molecular gas, with $100\lsim n/{\rm cm}^{-3}\lsim 10^5$ and $10 \lsim T/{\rm K}\lsim 3000$ (see App.~\ref{sez_cloudy_model}).
For Dahlia, the disk density is relatively low, $n\simeq 30\,{\rm cm}^{-3}$; in addition, only $20\%$ of the gas is warm enough to allow some ($\epsilon_{\rm H2}\simeq 0.01\,{\rm L}_{\odot}/{\rm M}_{\odot}$) emission. In practice, such emission predominantly occurs along the outer spiral arms of the galaxy, where these conditions are met due to the heating produced by SN explosions. In the denser Alth{\ae}a disk, the gas emissivity can attain $\epsilon_{\rm H2}\simeq 10^{-3} - 0.01$ already at moderate $T = 200 \,\rm K$. The brightness peaks are associated with a few ($\lsim 1\%$) pockets of thousand-degree gas; they can be clearly identified in the map.
This is particularly interesting because a galaxy like Alth{\ae}a might be detectable at very $\mbox{high-}z$~ with SPICA, as suggested by \citet[][in preparation]{egami:2017}.
\section{Conclusions}\label{sec_conclusione}
To improve our understanding of $\mbox{high-}z$~ galaxies, we have studied the impact of ${\rm {H_2}}$~chemistry on their evolution, morphology and observed properties. To this end, we compare two zoom-in galaxy simulations implementing different chemical modelling. Both simulations start from the same cosmological initial conditions, and follow the evolution of a prototypical $M_{\star}\simeq 10^{10}{\rm M}_{\odot}$ galaxy at $z=6$ resolved at the scale of giant molecular clouds (30 pc). Stars are formed according to an ${\rm {H_2}}$-dependent Schmidt-Kennicutt relation. We also account for winds from massive stars, SN explosions and radiation pressure in a stellar age/metallicity dependent fashion (see Sec.~\ref{sec_common_pre}). In the first galaxy, named Dahlia, ${\rm {H_2}}$~formation is computed from the \citet{krumholz:2009apj} equilibrium model; {Alth{\ae}a} instead implements a non-equilibrium chemistry network, following \citet{bovino:2016aa}. The key results can be summarized as follows:
\begin{itemize}
\item[\bf (a)] The star formation rate of the two galaxies is similar, and increases with time reaching values close to $100\, \msun\,{\rm yr}^{-1}$ at $z=6$ (see Fig.~\ref{fig_sfr_comparison}). However, Dahlia forms stars at a rate that is on average $1.5\pm 0.6$ times larger than Alth{\ae}a; it also shows a less prominent burst structure.
\item[\bf (b)] Both galaxies at $z=6$ have a SFR-stellar mass relation compatible with \citet[][]{jiang:2016apj} observations (Fig.~\ref{fig_sfr_mass_obs_comparison}). Moreover, they both show a continuous time evolution from specific SFR of ${\rm sSFR}\simeq 40\,{\rm Gyr}^{-1}$ to $5\,{\rm Gyr}^{-1}$. This is understood as an effect of the progressively increasing impact of stellar feedback hindering subsequent star formation events.
\item[\bf (c)] The non-equilibrium chemical model implemented in Alth{\ae}a causes the atomic-to-molecular hydrogen transition to occur at densities exceeding 300 ${\rm cm}^{-3}$, i.e. about 10 times larger than predicted by the equilibrium model used for Dahlia (Fig.~\ref{fig_chimica_krome}). As a result, Alth{\ae}a features a more clumpy and fragmented morphology (Fig.~\ref{fig_morfologia}). This configuration makes SN feedback more effective, as noted in point {(a)} above (Fig.~\ref{fig_energy_comparison}).
\item[\bf (d)] Because of the lower density and weaker feedback, Dahlia sits $3\sigma$ away from the Schmidt-Kennicutt relation; Alth{\ae}a, instead, nicely agrees with observations (Fig.~\ref{fig_riassunto_sk}). Note that although the SF efficiency is similar in the two galaxies and consistent with other simulations \citep{semenov:2015}, Dahlia is off the relation because of its insufficient molecular gas content (Fig.~\ref{fig_eos_h2}).
\item[\bf (e)] We confirm that most of the \hbox{C~$\scriptstyle\rm II $}~and ${\rm {H_2}}$~emission is due to the dense gas forming the disks of the two galaxies. Because of Dahlia's lower average density, Alth{\ae}a outshines Dahlia by a factor of $7$ ($15$) in \hbox{[C~$\scriptstyle\rm II $]}~$157.74\,\mu{\rm m}$ (${\rm {H_2}}$~$17.03\,\mu{\rm m}$) line emission (Fig.~\ref{fig_mappe_comparison_2}). Yet, Alth{\ae}a has a 10 times lower \hbox{[C~$\scriptstyle\rm II $]}~luminosity than expected from the locally observed \hbox{[C~$\scriptstyle\rm II $]}-SFR relation (Fig.~\ref{fig_cii_sfr}). Whether this relation does not apply at $\mbox{high-}z$~ or the line luminosity is reduced by CMB and metallicity effects remains an open question which can be investigated with future deeper observations.
\end{itemize}
To conclude, both Dahlia and Alth{\ae}a follow the observed $\mbox{high-}z$~ SFR-$M_{\star}$ relation. However, many other observed properties (Schmidt-Kennicutt relation, \hbox{C~$\scriptstyle\rm II $}~and ${\rm {H_2}}$~emission) are very different. This shows the importance of accurate, non-equilibrium implementation of chemical networks in early galaxy numerical studies.
\section*{Acknowledgments}
We are grateful to the participants of \emph{The Cold Universe} program held in 2016 at the KITP, UCSB, for discussions during the workshop.
We thank P. Capelo, D. Celoria, E. Egami, D. Galli, T. Grassi, L. Mayer, S. Riolo, J. Wise for interesting and stimulating discussions.
We thank the authors and the community of \textlcsc{ramses} and \textlcsc{pymses} for their work.
AP acknowledges support from Centro Fermi via the project CORTES, \quotes{Cosmological Radiative Transfer in Early Structures}.
AF acknowledges support from the ERC Advanced Grant INTERSTELLAR H2020/740120.
RM acknowledges support from the ERC Advanced Grant 695671 \quotes{QUENCH} and from the Science and Technology Facilities Council (STFC).
SS acknowledges support from the European Commission through a Marie Sk{\l}odowska-Curie Fellowship, program PRIMORDIAL, Grant No. 700907.
This research was supported in part by the National Science Foundation under Grant No. NSF PHY11-25915.
\bibliographystyle{mnras}
\section{Introduction}
\begin{figure}
\begin{center}
\includegraphics[scale=.32]{figura}
\caption{(Color online) Crystalline structure of ZrSe$_2$ and deficiencies. Panel a) shows Zr atoms in Se positions; panel b) shows the stacking structure of the compound, with the ZrSe$_6$ octahedron shown in the middle. This stacking structure permits the intercalation of small atoms. }
\end{center}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[scale=.4]{Fig1.pdf}
\caption{(Color online) Diffractograms of four samples and compositions; the bottom diffractogram, with the maximum Zr content of about 0.96, was not superconducting. The next three diffractograms, with Zr contents of 0.90, 0.94, and 0.86, were superconducting. The crystalline structure was Rietveld refined and shows complete agreement with ZrSe$_2$ close to the stoichiometric composition, without impurities. Slightly above 0.94, the compound is semiconducting. The small black dots in the figures are the observed data, and the red lines are the Rietveld fitting. }
\end{center}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[scale=.4]{Fig2}
\caption{(Color online) Superconducting transition temperature determined by magnetic measurements with a field of 10 Oe in two modes of measurement: Zero Field Cooling (ZFC) and Field Cooling (FC). The curves show small but appreciable differences in transition temperature. Arrows mark the onset temperatures of 8.14 K, 8.31 K, 8.34 K and 8.59 K. The fraction of superconducting material was very small; only in panel C was the amount about 10.2\%, the others were about 1\% or smaller. }
\end{center}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[scale=.4]{Fig6}
\caption{(Color online) Critical magnetic fields H$_{C1}$ and H$_{C2}$ versus temperature. H$_{C1}$ at the lowest temperature is about 170 Oe, whereas H$_{C2}$ is about 2400 Oe. H$_{C1}$ fits quite well to a parabolic behavior with a transition temperature of 8.8 K, whereas the best fit for H$_{C2}$ was a linear function. }
\end{center}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[scale=.36]{Fig5.pdf}
\caption{(Color online) Possible defect or vacancy configurations in Zr$_x$Se$_2$ with Zr deficiencies, according to the analysis of Gleizes and Jeannin \cite{gleit}. Blue circles correspond to Zr atoms; Se atoms are shown in yellow. Vacancies can occur at different sites and in different numbers. }
\end{center}
\end{figure}
ZrSe$_2$ crystallizes in the C6-CdI$_2$ type structure. It frequently exhibits deviations from stoichiometry that change its physical properties. Many of those studies were performed many years ago by Van Arkel, MacTaggart and Wadsley, Han and Ness, and Gleizes and Jeannin \cite{van, Mac, Han, gleit}. The crystalline structure of ZrSe$_2$ is chain-like, formed by a sequence of stacked atoms in the unit cell; in Fig 1 we show the structure of this compound with details related to vacancies and stacking. As mentioned by these authors, the stacking variants in C6-type compounds may present vacancies of Se or Zr depending on the Se/Zr ratio. The crystal structure of the compound has parallel layers, each consisting of two hexagonal chalcogen planes with one hexagonal Zr plane. The intralayer bonds are strong, whereas the interlayer bonds are quite weak. The compound is highly anisotropic, as seen in its physical properties, i.e. electrical and thermal conductivity, among many other properties \cite{iso}.
In fact, it is important to mention that defects play an important role in ZrSe$_2$, as Gleizes and Jeannin found \cite{gleit}. They observed that two types of defects or vacancies can occur depending on the Se/Zr ratio. One important change we found is the transition from semiconducting behavior to superconducting character. These new characteristics reported here are the main result of our investigations. Thus we found that, on decreasing the Zr content below a certain small level, ZrSe$_2$ becomes a type-II superconductor.
\section{Experimental details}
Synthesis of ZrSe$_2$ was carried out by solid state reactions. The Zr and Se reagents were Alpha Aldrich with purities of 99.9\% and 99.999\%, respectively. The powders were mixed in an agate mortar inside a glove box. The powders with the appropriate stoichiometry were sealed in a quartz tube under a clean atmosphere free of oxygen, first filled with argon and then evacuated. The synthesis was performed at a temperature of 800 Celsius for different periods of time. In general we noted that 5 or 6 days of heating is enough to obtain a pure compound. Several samples with different stoichiometries Zr$_x$Se$_2$ were produced. The Zr content was changed from x = 0.70 to 0.97 according to the formula Zr$_x$Se$_2$. At these two limits the resulting compound is semiconducting, whereas in between the material becomes superconducting with small variations in the transition temperature.
Chemical analysis for all samples was performed by Rutherford Backscattering Spectrometry (RBS) using a Pelletron NEC accelerator (National Electrostatics Corp, Middleton, WI) with a 3 MeV beam of alpha particles. RBS spectra were collected with a detector at an angle of 12 degrees with respect to the incident beam. The sample surface was normal to the incident beam. The experimental chemical analysis was fitted to a theoretically calculated curve by means of the program XRUMP.
\section{X-Ray Diffraction Analysis and Structural Characterization}
Powders of each sample for X-ray determination were measured at room temperature in a Bruker D8 Advance diffractometer (Bruker AXS GmbH, Karlsruhe, Germany; CuK$_{\alpha1}$ radiation, $\lambda = 1.5405$ $\AA$) with a goniometer equipped with a Lynx-Eye detector (Bruker AXS GmbH). Data were collected in 2$\theta$ from 6 to 90 degrees, with 30 kV and 40 mA in the X-ray generator. Figure 2 displays the X-ray data and the Rietveld analyses. The structural information for each of the identified phases was obtained from the Inorganic Crystal Structure Database (ICSD) databank \cite{crystal}. Cell parameters, crystal symmetry, and atomic coordinates were introduced in the Rietveld program GSAS \cite{larson} using the graphical interface EXPGUI \cite{toby}. A modified pseudo-Voigt function was chosen to generate the peak shapes of the diffraction reflections. The refined parameters were zero point and scale factors, cell parameters, half-width, preferred orientation and atomic occupancy factors. The two types of defects proposed by Gleizes \& Jeannin \cite{gleit} for the ZrSe$_2$ structure were considered in the Rietveld refinements.
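For reference, the conversion between the measured $2\theta$ peak positions and lattice $d$-spacings follows Bragg's law with the CuK$_{\alpha1}$ wavelength quoted above; the angle used in the snippet is only an example, not a reflection from these samples.

```python
import math

# Bragg's law, n*lambda = 2*d*sin(theta): d-spacing from a measured
# 2-theta angle (degrees) for the Cu K-alpha1 wavelength used here.
WAVELENGTH_A = 1.5405  # Angstrom (Cu K-alpha1)

def d_spacing(two_theta_deg, wavelength=WAVELENGTH_A, order=1):
    """Return the lattice spacing d [Angstrom] for a reflection observed
    at two_theta_deg [degrees] at the given diffraction order."""
    theta = math.radians(two_theta_deg / 2.0)
    return order * wavelength / (2.0 * math.sin(theta))

# Example: a hypothetical reflection at 2-theta = 30 deg gives d ~ 2.98 A
d = d_spacing(30.0)
```

The Rietveld refinement fits the whole pattern at once, but this per-peak conversion is the basic geometry underlying the reported cell parameters.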
\section{Results and discussion}
A single phase of ZrSe$_2$ was found in all samples with the exception of the sample with a high superconducting fraction of about 10.3\%, where selenium appears as a secondary phase. This is shown in the first column of the Table. In the crystallographic data for ZrSe$_2$ reported by Van Arkel \cite{van}, Zr is at the origin of the unit cell, while Se occupies the 2d position at (1/3, 2/3, 1/4), forming a layered structure; details are clearly shown in Fig 1. Each layer has a sandwich-like structure in which Zr atoms form a two-dimensional hexagonal close-packed plane and six Se atoms octahedrally coordinate each of them (Fig 1b).
Of the two types of defects proposed by Gleizes \& Jeannin \cite{gleit}, only one (that referring to vacancies of Zr atoms, located at the origin of the unit cell) led to good results in our Rietveld refinements; however, as mentioned by Ikari \cite{ikari}, Zr may also be found in interstitial sites of the crystal structure. The second type of defect has to do with the substitution of Se by Zr in the (1/3, 2/3, 1/4) position; since in the refinements the occupancy factors for Se and Zr gave practically 1 and 0 respectively, this defect was considered absent in the samples. With this dominant defect we found that the lattice parameters vary from 3.7722 to 3.7576 $\AA$ for $a$, and from 6.1297 to 6.135 $\AA$ for $c$, as the Se/Zr ratio increases from 2.06 to 2.47 and the superconducting fraction increases from 0\% to 10.3\%, as shown in the Table.
\begin{table*}[t]
\caption{ Characteristics of the studied samples. Samples S1, S2, and S3 are superconducting; sample S4 is not. The Table lists the detected phases, the ICSD code, the space group and the cell parameters; only in S1 do we see a drastic reduction of volume. The details were determined by Rietveld analyses. However, the most important change related to superconductivity was the Se/Zr ratio. The main phase was modeled with the structural type of ZrSe$_2$, with the ICSD database code listed in the Table. **Calculated from the quantification of identified phases by Rietveld refinement; ***calculated from RBS results.}
\resizebox{\textwidth}{!}{%
\begin{tabular}{|l|c|c|c|c|c|}
\hline
Samples & \multicolumn{2}{c|}{S1} & S2 & S3 & S4 \\ \hline
\begin{tabular}[c]{@{}l@{}}Superconducting\\ fraction (\%)\end{tabular} & \multicolumn{2}{c|}{10.3} & 1.3 & 0.7 & 0.0 \\ \hline
Crystal Phases & Zr$_{0.81}$Se$_2$ & Se & Zr$_{0.93}$Se$_2$ & Zr$_{0.93}$Se$_2$ & Zr$_{0.97}$Se$_2$ \\ \hline
ICSD code* & 652244 & 53801 & 652244 & 652244 & 652244 \\ \hline
Weight (\%) & 99.76(5) & 0.24(4) & 100 & 100 & 100 \\ \hline
Space group & P-3m1 & P3$_1$21 & P-3m1 & P-3m1 & P-3m1 \\ \hline
Cell parameters ($\AA$) & \begin{tabular}[c]{@{}c@{}}a = 3.7576(6)\\ c = 6.135(1)\end{tabular} & \begin{tabular}[c]{@{}c@{}}a = 4.40(1)\\ c = 4.948(9)\end{tabular} & \begin{tabular}[c]{@{}c@{}}a = 3.7695(1)\\ c = 6.1437(2)\end{tabular} & \begin{tabular}[c]{@{}c@{}}a = 3.7701(2)\\ c = 6.1354(4)\end{tabular} & \begin{tabular}[c]{@{}c@{}}a = 3.7722(3)\\ c = 6.1297(4)\end{tabular} \\ \hline
Volume ( $\AA$3) & 75.01(3) & 83.0(4) & 75.60(1) & 75.52(1) & 75.54(1) \\ \hline
Se/Zr ratio** & \multicolumn{2}{c|}{2.47} & 2.15 & 2.15 & 2.06 \\ \hline
Zr/Se ratio*** & \multicolumn{2}{c|}{2.44} & 1.54 & 2.38 & 1.54 \\ \hline
R$_{wp}$ & \multicolumn{2}{c|}{0.038} & 0.064 & 0.113 & 0.109 \\ \hline
R$_p$ & \multicolumn{2}{c|}{0.028} & 0.043 & 0.082 & 0.074 \\ \hline
\end{tabular}}
\end{table*}
The crystal structure was determined from the X-ray diffraction data described above, collected at room temperature with the Bruker D8 Advance diffractometer (Bruker AXS GmbH, Karlsruhe, Germany; Cu$K_{\alpha1}$ radiation, $\lambda = 1.5405$ $\AA$). A glass sample holder was used to perform the characterization of the powders. The X-ray analysis and structural characterization information for each sample was obtained and identified using the ICSD databank (see Table). Cell parameters, crystal symmetry, and atomic coordinates were introduced to fit the crystal structure using the Rietveld program. It is interesting to mention that the cell parameters determined for this compound coincide with the calculations of other authors; see for instance Hussain et al. \cite{hussain}. A modified pseudo-Voigt function was chosen to generate the peak shapes of the reflections. The refined parameters were zero point and scale factors, cell parameters, half-width, atomic coordinates, and isotropic thermal coefficients for each phase. For ZrSe$_2$, vacancies were considered on the Zr sites and the occupancy of this site was refined.
\section{Superconducting characteristics}
Figures 3 and 4 show the superconducting behavior of the compounds. Fig. 3 displays magnetization versus temperature curves of four different samples with different Zr vacancy contents; the four samples show small but clearly defined variations of the critical transition temperature, with onsets at 8.14, 8.31, 8.34, and 8.59 K. These curves were measured in ZFC and FC modes in order to determine the superconducting fraction. In Fig. 4 we present the behavior of the two critical fields.
As mentioned before, one important aspect of this study is related to the defect configuration in the crystalline structure of the compound. The defects on the (100) plane, as found by Gleizes and Jeannin \cite{gleit}, are quite important because some of the physical properties depend on them, for instance through the Se/Zr ratio. The number of defects (vacancies) is a critical aspect of the physical behavior, and it makes the elucidation of the involved physics very complicated. The electronic properties depend on which atom is missing and on its position in the cell. The superconducting characteristics also depend on the number of vacancies. Accordingly, the compound can be a semiconductor, a metal, or a superconductor.
Onuki et al.\ \cite{onuki} and Thompson \cite{tom} have studied the electrical properties of TiS$_2$; they found that above 50~K the resistivity follows a $T^2$ behavior, as in a degenerate semiconductor, similarly to what occurs in ZrSe$_2$. This is a consequence of the fact that the number of defects changes the size of the band gap. Our Fig.~5 shows a similar picture of the distribution of vacancies in the (100) plane, considering only Zr atoms, for our compositions with Zr vacancies of about 0.75, 0.80, and 0.85; this figure is quite similar to the one in the paper of Gleizes and Jeannin. It is worth mentioning that Onuki et al.\ \cite{onuki} report similar deficiencies in TiS$_2$: the density of states changes with the number of missing Ti atoms, and the physical properties therefore change from weakly metallic to semiconducting and back to metallic. In our study the Zr deficiencies modify the filling of the t$_{2g}$ band, which is our main explanation for the change from semiconductor to superconductor. It is important to mention that in this compound the six valence bands are primarily derived from Se 4$p$ orbitals while the conduction bands are derived from the Zr 4$d$ orbitals \cite{brauer}.
It is worth stressing the importance of the band filling in these compounds, as discussed by Brauer et al.\ \cite{brauer} and Onuki et al.\ \cite{onuki}. The six valence bands are primarily derived from chalcogen $p$ orbitals, while the conduction bands are derived from the transition-metal $d$ orbitals. The valence band can hold 12 electrons per unit cell, so after filling the valence band no $d$ electrons remain; TiS$_2$ and ZrSe$_2$ are therefore semiconductors with indirect band gaps of about $E_g \sim 0.2$~eV and $E_g \sim 1$~eV, respectively. Friend and co-workers \cite{friend} found that TiS$_2$ is a degenerate semiconductor whose carriers do not arise from $p$--$d$ band overlap but from partial occupation of the t$_{2g}$ band caused by Ti excess; the high conductivity can be attributed to charge transfer from self-intercalated Ti atoms in the van der Waals interlayer space. In our compound, once the band filling is altered by the Zr vacancies, superconducting characteristics arise; otherwise the compound behaves as a semiconductor.
It is important to mention that the superconducting signal shows only a small diamagnetic fraction, so the superconducting fraction is very small, or filamentary. Fig.~3 shows these measurements as magnetization--temperature (M--T) curves, determined in zero-field cooling (ZFC) and field cooling (FC) at 10~Oe. Fig.~4 shows the critical fields: the top panel shows $H_{c1}$ fitted to a parabolic function with critical temperature $T_c = 8.8$~K, and the bottom panel displays $H_{c2}$, fitted only to a linear function. The magnitudes of the two critical fields are of the order of 170 and 2500~Oe, respectively. The superconductor is type II, with a thermodynamic field of about $H_c = 650$~Oe, so the Ginzburg--Landau parameter is $\kappa \approx 2.70$, well above the type-II threshold $1/\sqrt{2}$ \cite{kittel}.
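As a quick consistency check on the quoted numbers, the Ginzburg--Landau parameter follows from the two reported fields through the standard relation $H_{c2} = \sqrt{2}\,\kappa H_c$; a minimal sketch using the values above:

```python
import math

# Field values reported above (in Oe)
H_c2 = 2500.0   # upper critical field
H_c = 650.0     # thermodynamic critical field

# Standard type-II Ginzburg-Landau relation: H_c2 = sqrt(2) * kappa * H_c
kappa = H_c2 / (math.sqrt(2.0) * H_c)

# kappa > 1/sqrt(2) marks a type-II superconductor; the value obtained
# here is close to the 2.70 quoted in the text.
```
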
In order to gain more insight into the characteristics of this new superconductor, the specific heat of various samples was measured as a function of temperature close to the superconducting transition; we never observed the specific-heat jump associated with the transition. This clearly indicates that the superconductivity represents only a very small fraction of the sample, and is therefore filamentary. Resistivity measurements were not performed because the compound was a powder and it was almost impossible to form a compact bulk specimen.
\section{Conclusions}
We have found and studied a new superconducting compound with composition ZrSe$_2$, which presents a transition temperature of about 8.14--8.59~K for Zr vacancies of about 0.75, 0.80, and 0.85;
outside this small window the compound is semiconducting, as already established.
\section*{Acknowledgments}
We thank to DGAPA UNAM project IT106014, and to A. L\'opez-Vivas, and A. Pompa-Garc\'ia (IIM-UNAM), for help in technical problems.
\thebibliography{99}
\bibitem{van}A. E. van Arkel, Physica 4, 286 (1924).
\bibitem{Mac}K. F. MacTaggart and A. D. Wadsley, Australian J. Chem. 11, 445 (1958).
\bibitem{Han}H. Hahn and P. Ness, Naturwissenschaften 44, 534 (1957).
\bibitem{gleit}A. Gleizes and Y. Jeannin, Journal of Solid State Chemistry 1, 180 (1970).
\bibitem{iso}H. Isomaki and J. von Boehm, Physica Scripta 24, 465 (1981).
\bibitem{crystal}ICSD, Inorganic Crystal Structure Database, Fachinformationszentrum Karlsruhe and the U.S. Secretary of Commerce on behalf of the United States (2013).
\bibitem{larson}A. C. Larson and R. B. Von Dreele, General Structure Analysis System (GSAS), Los Alamos National Laboratory Report LAUR 86-748 (2000).
\bibitem{toby}B. H. Toby, EXPGUI, a graphical user interface for GSAS, J. Appl. Cryst. 34, 210--213 (2001).
\bibitem{ikari}T. Ikari, K. Maeda, Futagami, and A. Nakashima, Jpn. J. Appl. Phys. 34, 1442 (1995).
\bibitem{hussain}A. H. Reshak and S. Auluck, Physica B 353, 230--237 (2004).
\bibitem{onuki}Y. Onuki, R. Inada, and S. Tanuma, Journal of the Physical Society of Japan 51, 1223 (1982); Y. Onuki et al., Synthetic Metals 5, 245--255 (1983).
\bibitem{tom}A. H. Thompson, Phys. Rev. Lett. 35, 1786 (1975).
\bibitem{brauer}H. E. Brauer et al., J. Phys.: Condens. Matter 6, 7741 (1995); H. E. Brauer, H. I. Starnberg, L. J. Holleboom, and H. P. Hughes, Surface Science 331--333, 419--424 (1995).
\bibitem{friend}P. C. Klipstein, A. G. Bagnall, W. Y. Liang, E. A. Marseglia, and R. H. Friend, J. Phys. C: Solid State Phys. 14, 4067--4081 (1981).
\bibitem{kittel}C. Kittel, Introduction to Solid State Physics, John Wiley and Sons, New York (1971).
\end{document}
\section{Introduction}
The high prevalence and rate of mortality of cardiac diseases have driven the development of methods to gain
quantitative insight into cardiac function and, in particular, the function of the left ventricle (LV). Among the
different aspects of LV biomechanics, intracavitary fluid dynamics plays a pivotal role and can provide a noninvasive
indicator of different pathological conditions. In healthy LVs, the haemodynamics are characterized by: (i)
\emph{high intraventricular pressure gradients} during the early diastole (E-wave) that give rise to a 3-D vortex
ring forming immediately downstream of the mitral valve (MV) leaflets and help generate a strong jet that extends all
the way to the apex with minimal viscous dissipation; (ii) \emph{sustained rotational flow} during late-diastolic atrial
contraction (A-wave); and (iii) \emph{rapid contraction} during systole that redirects the blood to the LV outflow tract and
through the aortic valve.
These features contribute to optimizing the LV pumping efficiency, as well as to the haemodynamics in the proximal aorta,
and their alteration is usually associated with pathological conditions.
Computational fluid dynamics simulations can be useful for analyzing LV fluid dynamics to overcome the limitations of current imaging
techniques. Due to the importance of the motion of the LV cavity and the MV leaflets, these fluid dynamics simulations should ideally
be extended to fluid-structure interaction (FSI) simulations taking also into account the interplay between the biological
tissue and moving blood. The setting of a fully realistic FSI model of the LV is particularly challenging, since it has to account for:
i) the complex LV geometry;
ii) the presence of heart valves, whose modeling is non-trivial and is required to have realistic inflow and outflow boundary conditions;
iii) the complex motion of the LV wall during the cardiac cycle. Such motion can be imposed either directly by means of
kinematic boundary conditions on the boundary of the LV cavity, or through the explicit modeling of
the LV myocardium, which requires the modeling of passive and active mechanical properties of myocardial tissue, of the
laminar structure of the LV and its myocardial fiber architecture, and ultimately of the propagation of contraction. The latter
approach is more demanding since it requires the identification of a large number of model parameters through complex experimental
set-ups and procedures, as was done in \cite{Sermesant2011} for patient-specific electromechanical models and in \cite{Wenk2010} for
valve models.
Current medical imaging technology can yield the information required to reconstruct LV 3D geometry. For instance, cardiac magnetic resonance
imaging (cMRI) can be performed with different acquisition sequences to quantify i) LV anatomy, time-dependent volume,
and wall motion from cine images, ii) regional 2D wall motion and strains from tagged images, iii) local tissue necrosis/fibrosis
from late gadolinium enhanced images, iv) myocardial fiber architecture from diffusion tensor cMRI, and v) blood velocity
fields from phase contrast images. These data can be used to feed image-based, patient-specific FSI models, which can be
used to study different aspects of LV biomechanics by means of morphologically realistic 3D models, as well as to test the
suitability of different medical and surgical treatments on a patient-specific basis to support clinical planning. Still,
only cine-cMRI and late gadolinium enhanced acquisitions are routinely performed in clinics, while the acquisition of the
other sequences is usually limited to research activities due to their complex implementation and their excessive time for
analysis.
With increased availability of image-based detailed information, many FSI studies have been performed in realistic LV
geometries (see \cite{cheng2005fluid,doenst2009fluid,Long2008,nakamura2006influence,Tang2010}), accounting also for LV
wall electro-mechanics in some cases (see \cite{Nordsletten2011,watanabe2004multiphysics}). In such studies it is
typically assumed that the effect of the MV leaflets is negligible and the MV is modelled by inflow boundary conditions
that try to represent the time-varying shape and orientation of the MV orifice. Fully three-dimensional
fluid dynamics studies in patient-specific LV geometries incorporating the leaflet dynamics have appeared in the literature only recently
(see \cite{mangual2013comparative,mihalef2011patient,votta2013toward}),
although a fully realistic model coupling unsteady haemodynamics effects with anisotropic material models for the leaflets
with contact modelling and inclusion of the effects of chordae tendinae and the papillary muscles still seems out of reach.
This aspect may appear of minor relevance, but is not: experimental evidence strongly suggests that motion of the MV
leaflets and LV vorticity influence each other during diastole (see \cite{charonko2013vortices, kim1995}), while computational
studies indicate that the systolic configuration of the anterior MV leaflet plays a role in LV ejection efficiency \cite{dimasi2012influence}.
In this work we present a novel approach to LV FSI modeling. Based on standard short-axis cine cMRI images an in vivo LV
model-based geometry is reconstructed and regional LV wall 3D displacements are identified from the cMRI by an algorithm
of \cite{Conti2011}. Space- and time-dependent 3D-displacements are imposed to the LV endocardial surface
through a thin fictitious elastic solid, so as to avoid difficulties related to exact volume conservation during the isovolumic phases.
A MV model is introduced through three different approaches. In the first
case the mitral valve is treated as an idealized diode that is either fully open or fully closed and provides no resistance
to the flow. In the second case the ideal diode is modified to allow for regurgitation according to suitable and the valve
resistance. In the third case a lumped parameter model accounts for the opening dynamics of the valve as proposed by
\cite{Mynard2011} so that the resistance offered by the valve changes in time according to the pressure gradient across it.
All three models also account for the up-and-down motion of the mitral annulus, but not for its resizing nor the effect of the
immersed leaflets. A comparison between the fluid dynamics predictions is made in order to understand which implications
the choice of the model has on the fluid dynamics predictions.
\section{Methods} \label{sec:methods}
\subsection{Image acquisition and motion reconstruction of the LV}
The methodology was tested on imaging data from a 65-year-old female patient who had a hibernating myocardium in the LAD
territory and volume overload due to mitral regurgitation; the septal side of the LV was severely akinetic. Using a
$1.5$~T whole-body Siemens Avanto MRI scanner, equipped with a commercial cardiac coil, electrocardiogram-gated breath-hold
cine images of the LV were acquired in multiple short axes using steady-state free precession sequences ($20$~time frames/cardiac
cycle, reconstruction matrix $256\times256$~pixels, in plane resolution $1.719\times1.719$~mm$^2$, slice thickness $8$~mm,
gap $1.6$~mm). The valves and LV apex were not captured, due to the inherent limitations of the short-axis sequence.
The imaging and reconstruction method are described in detail in \cite{Conti2011}, where a validation of the methodology
was done by comparison to commercial software for CMR analysis, and is briefly summarized here. For every time frame and on
each short-axis slice the LV endocardial contour was
semi-automatically detected through the Chan-Vese approach and integrated with a priori knowledge of the statistical
distribution of gray levels in medical images. Contours were regularized using a curvature-based motion algorithm designed to
disallow curvatures above the mean Euclidean value. For each time frame, the smooth LV endocardial surface was obtained by
biplanar cubic spline approximation of previously detected contours. The surface was discretized into approximately $2$\,$000$
three-node triangular elements. In the end-diastolic frame, the endocardial surface was divided into six longitudinal sections
and three circumferential sections, thus obtaining $18$ sectors. For each sector, nine points forming a $3\times3$ mapped grid
were identified and the corresponding local principal curvatures calculated as in \cite{Vieira2005}. Then, each point of each
section was tracked throughout the subsequent time-points by means of a nearest neighbor search, based on the minimization of
the frame-by-frame variations in spatial position and local curvature. The result of this procedure was the time-dependent position of
$84$ landmark points for the entire endocardial surface.
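The frame-to-frame tracking step described above can be sketched as a nearest-neighbor search over a combined position-plus-curvature cost; the weight balancing the two terms and the sample data below are illustrative placeholders, not the study's values.

```python
import numpy as np

def match_landmarks(prev_pts, prev_curv, cand_pts, cand_curv, w=1.0):
    """Nearest-neighbor matching of landmarks between consecutive frames,
    minimizing the combined frame-by-frame change in spatial position and
    local curvature. w weights the curvature term (illustrative value)."""
    d_pos = ((prev_pts[:, None, :] - cand_pts[None, :, :]) ** 2).sum(-1)
    d_cur = (prev_curv[:, None] - cand_curv[None, :]) ** 2
    return (d_pos + w * d_cur).argmin(axis=1)

# Two landmarks, two candidates in the next frame: each landmark picks
# the candidate that is close in both position and curvature.
prev_pts = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
prev_curv = np.array([0.10, 0.20])
cand_pts = np.array([[1.05, 0.0, 0.0], [0.02, 0.0, 0.0]])
cand_curv = np.array([0.21, 0.09])
matches = match_landmarks(prev_pts, prev_curv, cand_pts, cand_curv)
```
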
\subsection{Patient-specific LV geometry for FSI simulations}
A model-based approach for constructing a computational LV geometry was used. The LV intracavitary volume mesh was obtained
by starting from an idealized tetrahedral mesh representing the general LV shape and then morphing it by nonrigid deformations
to fit the short-axis landmark points in the end-diastolic configuration. To mimic the LV we used a truncated ellipsoid with short extruded
sections extending from both valves, which were modelled as ellipsoidal surfaces. By an extrusion procedure a thin fictitious
elastic structure around the endocardium was generated for the purposes of imposing the motion of the LV. The dimensions and
alignment of the mitral valve long and short axis were fitted to those observed in the long-axis sequence. The mitral valve
annulus was approximated by an ellipsoid with major axis $3.5$~cm and minor axis $2.6$~cm at peak systole, whereas the aortic
valve was approximated by a circle of diameter $1.8$~cm.
The idealized LV was aligned and resized to match the position of the landmarks at end-diastolic configuration of the
anatomically correct LV reconstructed from MRI. First, we applied a rigid transformation to the idealized LV to align
the vertices on the endocardium surface with the MRI-derived set of landmark points. Then, we defined a least-squares
error functional measuring the discrepancy between the two sets of points and solved a least-square minimization
problem to find the optimal scaling of the idealized LV in each of the three major axis directions. During this minimization
process the volume of the LV was constrained to equal the MRI-based approximation and the total length of the LV was
constrained to equal that observed in the long-axis sequence. The top-most short-axis slice was assumed to be located
at a distance of $2$~cm from the valvular plane.
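The resizing step can be sketched as follows. This simplified version fits a per-axis scale factor by least squares and then applies a uniform correction to meet the volume constraint, rather than solving the fully constrained minimization described above; all data below are synthetic.

```python
import numpy as np

def fit_axis_scaling(model_pts, target_pts, volume_ratio):
    """Per-axis least-squares scales s mapping model_pts onto target_pts,
    followed by a uniform factor enforcing prod(s) = volume_ratio
    (the ratio of the MRI-based volume to the idealized-model volume).
    A simplification of the constrained fit described in the text."""
    s = (model_pts * target_pts).sum(axis=0) / (model_pts ** 2).sum(axis=0)
    c = (volume_ratio / np.prod(s)) ** (1.0 / 3.0)
    return c * s

# If the targets are an exactly scaled copy of the model, the true
# scales are recovered and the volume correction factor is 1.
rng = np.random.default_rng(1)
model = rng.normal(size=(84, 3))          # 84 points, as for the landmarks
true_s = np.array([1.2, 0.9, 1.1])
scales = fit_axis_scaling(model, model * true_s, np.prod(true_s))
```
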
After alignment and resizing, the ideal LV geometry was nonrigidly deformed to fit the landmark points.
First, for each landmark point we identified the closest boundary mesh nodal point on the endocardium at end-diastole.
Using this point-to-point identification an initial deformation was applied to the idealized LV using radial basis
function interpolation that warped the idealized LV endocardium to match the landmark points. This deformed configuration
was then taken as the initial end-diastolic configuration. After the nonrigid deformation was applied to obtain the
end-diastolic configuration that matched the position of the landmarks at the end-diastolic instant, the motion of the
landmark points was extrapolated and used to drive the finite element simulation of the ventricular haemodynamics. The
positions of the landmark points on the endocardium were chosen as interpolation centers, and the motion of the
landmarks in space and time was extrapolated in the space-time domain by performing radial basis function interpolation
in space and trigonometric interpolation in time. In order to regularize the motion, the five highest temporal modes
were neglected so that the resulting LV volume reconstruction was monotone increasing during diastole. This resulted in
a globally smooth and time-periodic extension field of the LV motion throughout one heartbeat. The procedure is illustrated
in Fig.~\ref{fig:fitting}.
\begin{figure}
\centering
\includegraphics[width=\textwidth]{mesh_procedure}
\caption{The idealized ellipsoidal LV geometry (left) is first aligned to the short-axis planes and landmarks. An affine
transformation that minimizes the discrepancy between the landmarks and the endocardial surface is then sought by least-squares
fitting (middle). In the final step a nonrigid radial basis function deformation is applied to
both fluid and solid geometries to obtain the computational mesh (right).}
\label{fig:fitting}
\end{figure}
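The radial-basis-function warp used above can be sketched in a few lines; the Gaussian kernel, its shape parameter, and the landmark data below are illustrative choices, not the ones used in the study.

```python
import numpy as np

def rbf_warp(centers, displacements, queries, eps=1.0):
    """Gaussian radial-basis-function interpolation of landmark
    displacements: centers are the landmark positions, queries the
    mesh nodes to be warped. Kernel and eps are illustrative choices."""
    def kernel(a, b):
        d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
        return np.exp(-eps * d2)
    weights = np.linalg.solve(kernel(centers, centers), displacements)
    return queries + kernel(queries, centers) @ weights

# By construction the interpolant reproduces the prescribed displacement
# exactly at every landmark (interpolation property of RBFs).
centers = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.], [0., 0., 1.]])
disp = np.array([[0.1, 0., 0.], [0., 0.2, 0.], [0., 0., 0.3], [0.1, 0.1, 0.]])
warped_landmarks = rbf_warp(centers, disp, centers)
```

Evaluating the same interpolant at arbitrary mesh nodes yields the globally smooth extension field used to drive the simulation.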
A separate long-axis sequence was used to register the mitral leaflets relative to the position of the aortic
root. Two phases of the long-axis sequence were used for the mitral leaflet registration at diastolic (fully open)
and systolic (closed but regurgitant) positions. This was performed manually by identifying $42$--$46$ landmark points split
between the two leaflets and least-squares fitting of two bivariate polynomial surfaces of total degree four
for both leaflet surfaces. From this reconstruction the maximum opening area of the mitral valve was estimated
at $7.22$~cm$^2$ and the mitral regurgitant area was estimated at $0.07$~cm$^2$.
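The leaflet-surface fit described above amounts to a linear least-squares problem in the monomial coefficients. A minimal sketch for a total-degree-four bivariate polynomial follows, using synthetic sample points rather than the clinical landmarks:

```python
import numpy as np

def fit_poly_surface(x, y, z, degree=4):
    """Least-squares fit z ~ sum_{i+j<=degree} c_ij x^i y^j."""
    terms = [(i, j) for i in range(degree + 1)
                    for j in range(degree + 1 - i)]
    A = np.column_stack([x**i * y**j for i, j in terms])
    coeffs, *_ = np.linalg.lstsq(A, z, rcond=None)
    return terms, coeffs

def eval_poly_surface(terms, coeffs, x, y):
    return sum(c * x**i * y**j for (i, j), c in zip(terms, coeffs))

# 45 sample points (comparable to the 42-46 landmarks above): a surface
# that is itself polynomial is recovered to machine precision.
rng = np.random.default_rng(0)
x, y = rng.uniform(-1, 1, 45), rng.uniform(-1, 1, 45)
z = 0.5 + 0.3 * x - 0.2 * y**2 + 0.1 * x**2 * y
terms, coeffs = fit_poly_surface(x, y, z)
z_hat = eval_poly_surface(terms, coeffs, x, y)
```

A total degree of four gives 15 monomial terms, so the 42--46 landmarks per leaflet make the system comfortably overdetermined.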
\subsection{Finite element modelling of LV haemodynamics}
Ventricular haemodynamics were modelled with a finite element FSI model. The fluid part
of the FSI modeling problem consisted of the Navier-Stokes equations (NSE) for incompressible Newtonian fluids written
in the arbitrary Lagrangian-Eulerian (ALE) formulation. The thin structure surrounding the LV was only used to impose
the motion of the LV in such a way as to obtain smooth LV pressure fields both in space and time, and was thus
modelled as a thin pseudo-incompressible linear elastic and isotropic material. In order to impose the reconstructed
motion of the LV in the haemodynamics simulation, we used the extrapolated space-time motion field as a boundary
condition on the external surface of the thin extruded structure surrounding the LV. The current study only focuses
on LV haemodynamics and so no predictions of myocardial strains or stresses were needed, though these could have been
obtained by replacing the thin linear isotropic structure with a physiologically motivated nonlinear orthotropic
material of realistic thickness. While there is extra computational cost related to the fluid-solid coupled problem,
this formulation allowed both the recovery of a spatio-temporally smooth LV pressure field as well as the ability
to seamlessly simulate both isovolumic phases, which are a known difficulty for the pure NSE-in-moving-domains
formulation (e.g.\ \cite{chnafa2014image} eliminate the isovolumic phases completely in an otherwise high-fidelity
simulation of full-heart fluid dynamics).
The resulting FSI problem reads
\begin{equation}
\BraceBracketLeft{\begin{array}{r@{\,\,}c@{\,\,}l@{\quad}l}
\VerticalBracketRight{\dfrac{\partial {\boldsymbol u}\xspace_\mathrm{F}\xspace}{\partial t}}_{{\boldsymbol x}\xspace^0} + \RoundBracket{\RoundBracket{{\boldsymbol u}\xspace_\mathrm{F}\xspace - \VerticalBracketRight{\dfrac{\partial {\boldsymbol d}\xspace_\mathrm{F}\xspace}{\partial t}}_{{\boldsymbol x}\xspace^0}} \Dot {\boldsymbol\nabla}\xspace} {\boldsymbol u}\xspace_\mathrm{F}\xspace -\dfrac{1}{\rho_\mathrm{F}\xspace}{\boldsymbol\nabla}\xspace \Dot \sigma_\mathrm{F}\xspace &=& {\boldsymbol 0}\xspace & \textrm{in } \Omega^t_\mathrm{F}\xspace \times (0,T],\\[2ex]
{\boldsymbol\nabla}\xspace \Dot {\boldsymbol u}\xspace_\mathrm{F}\xspace &=& 0 & \textrm{in } \Omega^t_\mathrm{F}\xspace \times (0,T],\\[1ex]
\rho_\mathrm{S}\xspace \dfrac{\partial^2 {\boldsymbol d}\xspace_\mathrm{S}\xspace}{\partial t^2} - {\boldsymbol\nabla}\xspace \Dot \sigma_\mathrm{S}\xspace &=& {\boldsymbol 0}\xspace & \textrm{in } \Omega^0_\mathrm{S}\xspace \times (0,T],\\[2ex]
-{\boldsymbol\Delta}\xspace {\boldsymbol d}\xspace_\mathrm{F}\xspace &=& {\boldsymbol 0}\xspace & \textrm{in } \Omega^0_\mathrm{F}\xspace \times (0,T],\\[1ex]
{\boldsymbol u}\xspace_\mathrm{F}\xspace \circ {\mathcal{M}}\xspace^t - \dfrac{\partial {\boldsymbol d}\xspace_\mathrm{S}\xspace}{\partial t} &=& {\boldsymbol 0}\xspace & \textrm{on } \Gamma^0_\mathrm{I}\xspace \times (0,T],\\[2ex]
\sigma_\mathrm{S}\xspace \Dot {\boldsymbol n}\xspace_\mathrm{S}\xspace - J_\mathrm{S}\xspace {\mathrm{G}}\xspace_\mathrm{S}\xspace^{-\mathsf{T}} \RoundBracket{\sigma_\mathrm{F}\xspace \circ {\mathcal{M}}\xspace^t} \Dot {\boldsymbol n}\xspace_\mathrm{S}\xspace &=& {\boldsymbol 0}\xspace & \textrm{on } \Gamma^0_\mathrm{I}\xspace \times (0,T],\\[1ex]
{\boldsymbol d}\xspace_\mathrm{F}\xspace - {\boldsymbol d}\xspace_\mathrm{S}\xspace &=& {\boldsymbol 0}\xspace & \textrm{on } \Gamma^0_\mathrm{I}\xspace \times (0,T],
\end{array}}
\label{eq:GlobalFSI}
\end{equation}
where $(0,T]$ is the time interval, ${\boldsymbol u}\xspace_\mathrm{F}\xspace$ the fluid velocity, $\rho_\mathrm{F}\xspace$ and $\rho_\mathrm{S}\xspace$ are
the fluid and solid density, respectively, ${\boldsymbol n}\xspace_\mathrm{S}\xspace$ is the outgoing normal direction applied to the
solid domain, ${\mathrm{G}}\xspace_\mathrm{S}\xspace = \textrm{I} + {\boldsymbol\nabla}\xspace {\boldsymbol d}\xspace_\mathrm{S}\xspace$ the solid deformation
gradient (with $\textrm{I}$ the identity matrix), and $J_\mathrm{S}\xspace = \textrm{det}\xspace\RoundBracket{{\mathrm{G}}\xspace_\mathrm{S}\xspace}$.
In addition, $\sigma_\mathrm{F}\xspace$ and $\sigma_\mathrm{S}\xspace$ are the fluid Cauchy stress tensor and the solid first Piola--Kirchhoff stress
tensor, respectively. The motion of the interior vertices of the fluid mesh was obtained by harmonic extension from
the FSI interface by solving an elliptic PDE.
\subsection{Modelling of insufficient mitral valve dynamics}
For the purposes of this study we performed the LV haemodynamics simulations with three different models
for the mitral valve.
\textbf{Model A:} The classical model for cardiac valves is the ideal diode model, which offers no resistance
to the blood flow and opens and closes instantaneously in response to the changing of the pressure gradient
sign and flow direction:
\begin{equation} \label{eq:diode_valve}
Q_{\textrm{mv}} =
\left\{
\begin{aligned}
\frac{p_{\textrm{pv}} - p_{\textrm{lv}}}{R_{\textrm{la}}}, \quad &\textrm{ if } p_{\textrm{pv}} > p_{\textrm{lv}} \\
0, \quad &\textrm{ if } p_{\textrm{pv}} \leq p_{\textrm{lv}} \\
\end{aligned}
\right. ,
\end{equation}
where $p_{\textrm{pv}}$ and $p_{\textrm{lv}}$ are the pulmonary and LV pressure respectively. In this model the mitral
inflow rate $Q_{\textrm{mv}}$ was imposed as a boundary condition on the FSI LV problem using Lagrange multipliers
to enforce the defective boundary condition (see e.g.~\cite{Formaggia2002}),
which has the benefit that no explicit velocity profile needs to be imposed at the inflow. In order to stabilize the
velocity at the mitral valve due to flow reversal effects, the tangential component of the velocity field at the inlet
was further constrained to zero (see \cite{moghadam11} and the discussion therein), leading to the inflow boundary condition:
\begin{equation}
\int_{\Gamma_{\textrm{in}}} {\boldsymbol u}\xspace_\mathrm{F}\xspace \cdot {\boldsymbol n}\xspace \: d\Gamma = 0, \quad (\textrm{I} - {\boldsymbol n}\xspace\nb^T) {\boldsymbol u}\xspace = \boldsymbol{0} \textrm{ on } \Gamma_{\textrm{in}}.
\end{equation}
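Model A can be stated in a few lines. The sketch below implements the piecewise flow-rate law \eqref{eq:diode_valve}; the resistance value used in the example is a placeholder, not a calibrated parameter.

```python
def mitral_flow_diode(p_pv, p_lv, R_la):
    """Ideal diode valve (Model A): conducts with linear resistance R_la
    when the pressure gradient is forward, and blocks flow entirely
    otherwise, switching instantaneously."""
    return (p_pv - p_lv) / R_la if p_pv > p_lv else 0.0

# A forward gradient drives inflow; any reverse gradient gives exactly
# zero flow, so no regurgitation is possible in this model.
```
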
\textbf{Model B:} Is an extension of Model A that incorporates regurgitation and inertial effects of the valve on the
fluid dynamics. In this model the flow rate is given by the Bernoulli's equation for flow through an orifice:
\begin{equation} \label{eq:bernoulliValve}
p_{\textrm{pv}} - p_{\textrm{lv}} = R_{\textrm{la}} Q_{\textrm{mv}} + B Q_{\textrm{mv}} |Q_{\textrm{mv}}| + L \frac{dQ_{\textrm{mv}} }{dt}
\end{equation}
where $B = \rho / (2 A_{\textrm{eff}}^2)$ is the Bernoulli resistance of the valve and
$L = \rho \ell_{\textrm{eff}} / A_{\textrm{eff}}$ the blood inertance. The coefficients
$L$ and $B$ are determined by the effective orifice area $A_{\textrm{eff}}$ that switches
between open and closed valve configurations similarly to the ideal diode case:
\begin{equation}
A_{\textrm{eff}} =
\left\{
\begin{aligned}
A_{\max}, \quad &\textrm{ if } p_{\textrm{pv}} > p_{\textrm{lv}} \\
A_{\min}, \quad &\textrm{ if } p_{\textrm{pv}} \leq p_{\textrm{lv}} \\
\end{aligned}
\right. .
\end{equation}
For $A_{\min} > 0$ the model allows regurgitation to take place.
The maximum and minimum orifice area were calibrated from the long-axis segmentation of the valve geometry and were estimated as
$A_{\max}=7.22$~cm$^2$ and $A_{\min}=0.07$~cm$^2$ respectively, for the case studied. A numerically stable time discretization
was obtained for \eqref{eq:bernoulliValve} by using the semi-implicit scheme
\begin{equation}
Q^{n,k}_{\textrm{mv}} = \frac{Q^{n-1}_{\textrm{mv}} + \frac{\Delta t}{L} \left( p_{\textrm{pv}}^{n,k-1} - p_{\textrm{lv}}^{n,k-1} \right)}{1 + \Delta t B / L
\, |Q^{n-1}_{\textrm{mv}}|}.
\end{equation}
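The semi-implicit update above can be sketched directly. In the snippet below, the Bernoulli coefficients follow from the effective area as defined earlier; the blood density, effective length, and pressure drop are illustrative values only.

```python
def valve_coefficients(A_eff, rho=1.06, l_eff=1.0):
    """Bernoulli resistance B = rho / (2 A^2) and inertance
    L = rho * l_eff / A for an orifice of effective area A_eff.
    rho and l_eff here are illustrative, not calibrated, values."""
    return rho / (2.0 * A_eff ** 2), rho * l_eff / A_eff

def flow_update(Q_prev, dp, dt, B, L):
    """One semi-implicit step of L dQ/dt + B Q|Q| = dp, matching the
    scheme quoted above (the nonlinear term is lagged in |Q|)."""
    return (Q_prev + dt / L * dp) / (1.0 + dt * B / L * abs(Q_prev))

# Under a constant forward pressure drop the flow relaxes toward the
# steady Bernoulli value Q* = sqrt(dp / B), the scheme's fixed point.
B, L = valve_coefficients(7.22)   # maximum orifice area from the text
Q = 0.0
for _ in range(20000):
    Q = flow_update(Q, dp=8.0, dt=0.001, B=B, L=L)
```

One can verify from the update formula that $Q^* = \sqrt{dp/B}$ is an exact fixed point, which is why the scheme is robust for this stiff valve law.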
In this model the pressure $p_{\textrm{lv}}$ was imposed as a normal stress boundary condition on the FSI LV problem along
with the aforementioned tangential velocity stabilization condition:
\begin{equation} \label{eq:bcModelB}
\left(\sigma_\mathrm{F}\xspace + p_{\textrm{lv}} \textrm{I} \right) {\boldsymbol n}\xspace = \boldsymbol{0} \textrm{ on } \Gamma_{\textrm{in}}, \quad (\textrm{I} - {\boldsymbol n}\xspace\nb^T) {\boldsymbol u}\xspace = \boldsymbol{0} \textrm{ on } \Gamma_{\textrm{in}}.
\end{equation}
\textbf{Model C:}
To model more precisely the valve opening dynamics we used a lumped parameter model proposed by
\cite{Mynard2011}, which prescribes simple and smooth opening and closing dynamics for $A_{\textrm{eff}}$
without explicitly modeling the valve leaflets. In this model the flow rate through the mitral valve is
again given by Bernoulli's equation \eqref{eq:bernoulliValve} for flow through an orifice, which in turn
depends on an internal variable $\zeta \in [0,1]$ according to
\begin{equation}
A_{\textrm{eff}}(t) = \left[ A_{\max} - A_{\min} \right] \zeta(t) + A_{\min},
\end{equation}
where the internal variable evolves according to the rate equation
\begin{equation} \label{eq:leaflet_momentum_equation}
\dfrac{d\zeta}{dt} =
\left\{
\begin{aligned}
(1-\zeta) K_{vo} \left( p_{\textrm{pv}} - p_{\textrm{lv}} \right), &\quad \textrm{ if } p_{\textrm{pv}} \geq p_{\textrm{lv}} \\
\zeta K_{vc} \left( p_{\textrm{pv}} - p_{\textrm{lv}} \right), &\quad \textrm{ if } p_{\textrm{pv}} < p_{\textrm{lv}}
\end{aligned}
\end{aligned}
\right. .
\end{equation}
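The opening dynamics can be integrated with a simple forward-Euler step. The rate constants below are illustrative, not the calibrated values of the study; dp denotes the pressure gradient across the valve.

```python
def zeta_step(zeta, dp, dt, K_vo=0.2, K_vc=0.2):
    """One forward-Euler step of the lumped valve state equation: the
    valve opens for a forward pressure gradient dp > 0 and closes
    otherwise. K_vo, K_vc are illustrative rate constants."""
    rate = (1.0 - zeta) * K_vo * dp if dp >= 0.0 else zeta * K_vc * dp
    return min(1.0, max(0.0, zeta + dt * rate))

# A sustained forward gradient drives zeta smoothly from 0 toward 1,
# i.e. the effective orifice area grows from A_min to A_max; a reverse
# gradient makes the valve close gradually rather than instantaneously.
zeta = 0.0
for _ in range(2000):
    zeta = zeta_step(zeta, dp=10.0, dt=0.01)
```

The factors $(1-\zeta)$ and $\zeta$ make the rate vanish at the fully open and fully closed states, which keeps $\zeta$ inside $[0,1]$.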
This model captures the valve opening dynamics and represents mitral insufficiency, but does not model the
effect of the leaflets on the local flow pattern. The boundary conditions applied on the FSI LV problem
were identical to \eqref{eq:bcModelB}.
\subsection{Ventricular pre- and afterload}
A scenario of chronic mitral regurgitation with a constant pulmonary pressure of $p_{\textrm{pv}} = 10$~mmHg
was chosen for the preload.
For the ventricular afterload we used a standard three-element windkessel model with the parameters given by
\cite{Stergiopulos1999} to determine the aortic flow rate $Q_{\textrm{ao}}$ and aortic pressure $p_{\textrm{ao}}$ as:
\begin{equation} \label{eq:windkessel3}
\frac{dQ_{\textrm{ao}}}{dt} = \frac{1}{R_{\textrm{ao}} R_{\textrm{pe}} C_{\textrm{ao}}} \left[ R_{\textrm{pe}} C_{\textrm{ao}} \frac{d}{dt}(p_{\textrm{ao}} - p_{\textrm{ve}}) + (p_{\textrm{ao}} - p_{\textrm{ve}}) - (R_{\textrm{pe}} + R_{\textrm{ao}}) Q_{\textrm{ao}} \right],
\end{equation}
where $R_{\textrm{ao}}$ is the aortic resistance, $R_{\textrm{pe}}$ is the peripheral resistance, and
$C_{\textrm{ao}}$ is the aortic compliance. The venous pressure $p_{\textrm{ve}}$ was fixed at $5$~mmHg.
All values of the model parameters used are listed in Table~\ref{Tab:HeartParameters}.
Model A was used for the aortic valve in all three cases.
It is also possible to model the aortic valve similarly to the mitral one, though this was not deemed
necessary in the presence of a healthy aortic valve. For the coupling algorithm between the 3D LV model
with the windkessel model for the ventricular afterload, we refer to
\cite{Malossi2011,Malossi2011Algorithms1D,Malossi2011Algorithms3D1DFSI}.
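The afterload coupling can be sketched with a simple explicit time-stepping of \eqref{eq:windkessel3}; the parameter values and the constant-pressure test below are illustrative placeholders, not the \cite{Stergiopulos1999} values used in the study.

```python
def windkessel_step(Q_ao, p_ao, dp_ao_dt, dt,
                    R_ao=0.05, R_pe=1.0, C_ao=1.0, p_ve=5.0):
    """One forward-Euler step of the three-element windkessel ODE:
    R_ao*R_pe*C_ao dQ/dt = R_pe*C_ao d(p_ao - p_ve)/dt
                           + (p_ao - p_ve) - (R_pe + R_ao) Q.
    Parameter values are placeholders, not those of the study."""
    dQ_dt = (R_pe * C_ao * dp_ao_dt + (p_ao - p_ve)
             - (R_pe + R_ao) * Q_ao) / (R_ao * R_pe * C_ao)
    return Q_ao + dt * dQ_dt

# With a constant aortic pressure (dp_ao_dt = 0, since p_ve is fixed)
# the flow relaxes to the purely resistive steady state
# Q* = (p_ao - p_ve) / (R_pe + R_ao).
Q = 0.0
for _ in range(1000):
    Q = windkessel_step(Q, p_ao=100.0, dp_ao_dt=0.0, dt=0.01)
```

In the actual coupled simulation $p_{\textrm{ao}}$ is provided by the 3D FSI model at each step, rather than prescribed.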
\section{Results}
The LV FSI simulation was initialized at rest with zero velocity and pressure and driven for a few heartbeats at 75 bpm
until pressure conditions stabilized into periodicity. The pulmonary pressure was ramped to $10$~mmHg in
the course of the first $100$~ms of the simulation to provide an impulse-free initialization, then kept constant
for the rest of the run. No further initializations or regularizations needed to be performed. A fixed time step
of $1$~ms was used for the solution of the
FSI problem with second-order backward differentiation formula in time. The finite element problem was discretized
using 124$\,$942 tetrahedral elements in the fluid domain and piecewise linear basis functions for both velocity
and pressure approximation. Well-posedness of the problem was guaranteed by convective and pressure stabilization
performed with the interior penalty method by \cite{Burman2006}. The peak Reynolds number inside the LV during the
diastolic phase was around $2$\,$000$, indicating transitional but not fully turbulent flow, and therefore no turbulence
modelling was performed.
The velocity field and the diastolic jet for each of the three valve models A, B and C are presented
in Fig.~\ref{fig:vortices} for three different time instances: early diastole, late diastole and early
systole. In all three cases the diastolic jet is strongly driven towards the lateral wall and generates a large
vortex near the aortic root that expands to fill the entire LV during the late diastolic A-wave. These features
are independent of the inflow boundary condition applied (flow rate in Model A and pressure in Models B and C)
and of the opening dynamics of the MV. All three models exhibited vortical flow at the mitral inlet during early
systole, but the numerical simulation remained stable and convergent throughout, indicating successful stabilization
of the inlet boundary condition during flow reversal.
The mitral valve opening ratio, LV pressure and LV volume in time for the three different valve Models A, B and C
are presented in Fig.~\ref{fig:valve_behavior}. Model A stands apart from the other two due to the absence of
mitral regurgitation, leading to a larger systolic pressure and a delayed opening of the MV by about $10$~ms.
The MV inflow is strongly bimodal and the A-wave is considerably stronger than the E-wave, which is consistent
with clinical findings in chronic or compensated mitral regurgitation, where the left atrium has to compensate for the diminished
filling of the LV.
Again, very little quantitative difference between Models B and C can be observed in terms of pressure and flow rate.
This can be explained by the fact that the inflow/outflow volumetric flow rates are largely constrained by the imposed
motion of the fictitious elastic structure that follows from the 4-D reconstruction.
Table~\ref{Tab:MRIndicators} shows the regurgitant volume and its fraction of the total systolic outflow, the
ejection fraction and the viscous dissipation. The predicted regurgitant volume is 9\% higher in Model C,
mainly due to the slower closure of the mitral valve. Peak viscous dissipation during systole was slightly
higher in Model A without regurgitation, but almost identical across all three models during diastole. The
prediction of viscous dissipation depends mainly on whether or not regurgitation is considered.
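The indicators of Table~\ref{Tab:MRIndicators} can be illustrated with a small sketch. The definitions below (in particular, taking the EDV--ESV stroke volume as the total systolic outflow) and the toy input curves are our own simplifications, not the paper's post-processing pipeline.

```python
def cardiac_indices(volume, flow_mv, dt):
    """Ejection fraction, regurgitant volume and regurgitant fraction from
    sampled LV volume and mitral flow-rate curves. Toy definitions: positive
    mitral flow = filling, negative = regurgitation; total systolic outflow
    is taken as the EDV - ESV stroke volume."""
    edv, esv = max(volume), min(volume)
    stroke = edv - esv                                # total systolic outflow
    ef = stroke / edv                                 # ejection fraction
    rv = -sum(min(q, 0.0) for q in flow_mv) * dt      # regurgitant volume
    rf = rv / stroke                                  # regurgitant fraction
    return ef, rv, rf

# Toy example: EDV 120 mL, ESV 60 mL, 10 mL regurgitated over 100 ms.
volume = [120.0, 60.0]
flow_mv = [-100.0] * 100                              # mL/s, 100 steps of 1 ms
ef, rv, rf = cardiac_indices(volume, flow_mv, dt=1e-3)
print(ef, rv, rf)  # EF = 0.5, RV = 10 mL, RF ~ 0.17
```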
\begin{figure}
\centering
\includegraphics[height=8cm]{valves_comparison_ED}
\includegraphics[height=8cm]{valves_comparison_LD}
\includegraphics[height=8cm]{valves_comparison_ES}
\caption{Comparison of vortex jets for the Models A, B and C. Top row: early diastolic velocity ($t = 550$~ms).
Middle row: late diastolic velocity ($t = 750$~ms). Bottom row: early systolic velocity ($t = 800$~ms). Color bar
ranges from $0$ to $40$~cm/s.}
\label{fig:vortices}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[height=4.25cm]{MitralValve}
\includegraphics[height=4.25cm]{MitralPressure}
\includegraphics[height=4.25cm]{MitralFlow}
\caption{Comparison of mitral valve characteristics for the valve Models A, B and C. The absence of mitral regurgitation
in Model A leads to an increase of 19\% in the predicted peak systolic pressure. The difference between
the simple regurgitant valve (Model B) and the dynamic regurgitant valve (Model C) is negligible in terms of pressure and flow rate.}
\label{fig:valve_behavior}
\end{figure}
\section{Discussion}
In this work we presented a computational method starting from a standard short-axis MRI sequence
and proceeding to a 4-D reconstruction of LV motion combined with FSI simulations using
a fictitious elastic structure for regularization. A model-based approach was used to generate the
LV computational geometry. The use of short-axis images permitted a streamlined workflow from
images to simulations with minimal user intervention. The motion of the basal cutplane was added
in order to obtain sufficient ejection fraction and to recover the downward motion of the LV that
was missing from the short-axis images. The mitral annulus diameter and orifice area were calibrated
according to a manual long-axis segmentation of the LV.
Compared to standard Navier-Stokes-in-moving-domain formulations, the FSI formulation seamlessly
treats the isovolumic phases without the need for explicit volume preservation constraints on the
imposed LV motion. Thus the entire cardiac cycle was simulated in one continuous run, without
changing boundary conditions or enforcing isovolumic constraints when switching from
systole to diastole and vice versa.
The three different simplified MV models produced qualitatively similar diastolic
flow patterns. A nonsymmetric fluid jet created a two-phase vorticity pattern in which a small vortex
was generated near the posterior mitral leaflet during the E-wave, and during the A-wave a large
vortex developed to fill the entire LV cavity. The addition of the lumped-parameter valve dynamics by
itself had little effect on the observed LV flow, provided that mitral regurgitation was properly
accounted for, and neither did the change of the LV FSI boundary condition from an average flow
rate condition to a pressure condition. The fact that consistent results were obtained across all the
valve models used indicates that LV vortical flow patterns may be simulated to some extent with
knowledge of the LV wall motion alone.
Limitations of the current simulation study include the lack of explicit leaflet modelling and of the
leaflets' effect on the inflow jet, the lack of variability of the orifice shape and size, and the missing information
about the orientation of the left atrium with respect to the MV, which influences the diastolic jet
orientation and consequently the vorticity pattern (as shown by \cite{seo2013}). While
an approximate configuration of the mitral valve geometry can be obtained from the MRI in the
fully open and fully closed positions, extending this information to the intermediate configurations
and properly simulating the effect of the leaflet motion on the flow requires further work.
Furthermore, solving the problem in the FSI formulation introduces certain additional complexities
in the preconditioning and solution algorithms that may not always be available in commercial
software.
\section*{Acknowledgements}
Clinical patient data for this study was provided by the team of Prof. O.~Parodi at Ospedale Niguarda, Milan, Italy.
S.~Deparis, T.~Lassila, A.~Redaelli, M.~Stevanella, and E.~Votta acknowledge the support of the European Community
7${}^\textrm{th}$ Framework Programme, Project FP7-224635 VPH2 (`Virtual Pathological Heart of the Virtual Physiological
Human'). A.~C.~I.~Malossi and S.~Deparis acknowledge the European Research Council Advanced
Grant `Mathcard, Mathematical Modeling and Simulation of the Cardiovascular System', Project ERC-2008-AdG 227058, as well
as the Swiss Platform for High-Performance and High-Productivity Computing (HP2C). All of the numerical results
presented in this paper have been computed using the LGPL \texttt{LifeV} library (\url{www.lifev.org}). All authors
disclose no conflicts of interest.
{\small
\bibliographystyle{alpha}
\section{Introduction}
\label{section:Introduction}
\subsection{Motivation}
With an increasing number of users and things being connected to each
other, not only does the overall amount of communication increase, but
so does the amount of private and personal information being
transferred. This information needs to be protected from various
attacks. For some potential applications, such as emerging e-health
technologies where sensitive medical data is transmitted using a Body
Area Network, providing secrecy guarantees is a key
issue. As discovered by Csiszár~\cite{CsiszarSecrecy} and later more
explicitly by Bloch and Laneman~\cite{BlochStrongSecrecy} and investigated by Yassaee and Aref \cite{YassaeeMACWiretap} for the multiple-access case, the concept
of channel resolvability can be applied to provide such guarantees; it
can further be of use as a means of exploiting channel noise in order
to convey randomness to a receiver, where the observed distribution
can be accurately controlled at the transmitter. In this paper, we
explore channel resolvability in a multiple-access setting in which
there is no communication between the transmitters, yet they can
control the distribution observed at the receiver in a non-cooperative
manner.
\subsection{Literature}
\label{section:literature}
To the best of our knowledge, the concept of approximating a desired output distribution over a
communication channel using as little randomness as possible at the
transmitter was first introduced by
Wyner~\cite{WynerCommonInformation}, who used normalized
Kullback-Leibler divergence to measure how close the actual and the
desired output distribution are. The term \emph{channel resolvability}
for a similar concept was introduced by Han and
Verdú~\cite{HanApproximation}, who however used variational distance
as a metric. In particular, they showed the existence of a codebook
that achieves an arbitrarily small variational distance by studying
the expected variational distance of a random codebook.
Resolvability for MACs has been explored by Steinberg~\cite{SteinbergResolvability} and later by Oohama~\cite{OohamaConverse}. Explicit low-complexity codebooks for the special case of symmetric MACs have been proposed by Chou, Bloch and Kliewer~\cite{ChouLowComplexity}.
A stronger result stating that the probability of drawing an
unsuitable random codebook is doubly exponentially small is due to
Cuff~\cite{CuffSoftCovering}. Related results were proposed before by
Csiszár~\cite{CsiszarSecrecy} and by
Devetak~\cite{DevetakPrivateCapacity} for the quantum setting, who
based his work on the non-commutative Chernoff
bound~\cite{AhlswedeIdentification}. Further secrecy results based on
or related to the concept of channel resolvability are due to
Hayashi~\cite{HayashiResolvability}, Bloch and Laneman
\cite{BlochStrongSecrecy}, Hou and Kramer~\cite{HouEffectiveSecrecy},
and Wiese and Boche~\cite{WieseWiretap}, who applied Devetak's
approach to a multiple-access setting.
Cuff~\cite{CuffSoftCovering} also gave a result on the second-order
rate; a related result was proposed by Watanabe and
Hayashi~\cite{WantanabeSecondOrder}.
\subsection{Overview and Outline}
In this work, we revisit the proof in~\cite{WieseWiretap}, focusing on channel resolvability. We use a slightly different technique from the one in~\cite{CuffSoftCovering}, which we extend to the multiple-access case to provide an explicit statement and a more intuitive proof for a result only implicitly contained in~\cite{WieseWiretap}, and we further extend it by providing a second-order result.
In the following section, we state definitions and prior results that we will be using in our proofs in Section~\ref{section:main}.
\section{Notation, Definitions and Prerequisites}
\label{section:preliminaries}
The operations $\log$ and $\exp$ use Euler's number as their base, and all information quantities are given in nats. $\positive{\cdot}$ denotes the maximum of its argument and $0$.
A \emph{channel}
$\mathcal{W} = (\mathcal{X}, \mathcal{Y}, \mathcal{Z}, q_{Z | X, Y})$
is given by finite input alphabets $\mathcal{X}$ and $\mathcal{Y}$, a finite output alphabet $\mathcal{Z}$ and a collection of probability mass functions $q_{Z | X, Y}$ on $\mathcal{Z}$ for each pair $(x,y) \in \mathcal{X} \times \mathcal{Y}$. The random variables $X$, $Y$ and $Z$ represent the two channel inputs and the channel output, respectively. \emph{Input distributions} for the channel are probability mass functions on $\mathcal{X}$ and $\mathcal{Y}$ denoted by $q_{X}$ and $q_{Y}$, respectively. We define an \emph{induced joint distribution} $q_{X, Y, Z}$ on $\mathcal{X} \times \mathcal{Y} \times \mathcal{Z}$ by
$q_{X, Y, Z}(x,y,z) := q_{X}(x) q_{Y}(y) q_{Z | X, Y}(z | x,y)$
and the \emph{output distribution}
$
q_Z(z)
:=
\sum_{x \in \mathcal{X}}
\sum_{y \in \mathcal{Y}}
q_{X, Y, Z}(x, y, z)
$
is the marginal distribution of $Z$.
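As a concrete illustration of this marginalization, the following sketch computes $q_Z$ for a toy binary adder MAC. The channel and the input distributions are arbitrary choices, used only to show the definition at work.

```python
import itertools

# Output pmf q_Z(z) = sum_{x,y} q_X(x) q_Y(y) q_{Z|X,Y}(z|x,y) for a
# toy modulo-2 adder MAC with arbitrary input distributions.
q_X = {0: 0.5, 1: 0.5}
q_Y = {0: 0.25, 1: 0.75}

def q_Z_given_XY(z, x, y):  # deterministic toy channel q_{Z|X,Y}
    return 1.0 if z == (x + y) % 2 else 0.0

q_Z = {z: sum(q_X[x] * q_Y[y] * q_Z_given_XY(z, x, y)
              for x, y in itertools.product(q_X, q_Y))
       for z in (0, 1)}
print(q_Z)  # uniform: X uniform makes X + Y mod 2 uniform
```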
By a pair of \emph{codebooks} of block length $n$ and rates $R_1$ and $R_2$, we mean finite sequences
$\mathcal{C}_1 = (\codebookOneWord{m})_{m = 1}^{\exp(\codebookBlocklengthR_1)}$
and
$\mathcal{C}_2 = (\codebookTwoWord{m})_{m = 1}^{\exp(\codebookBlocklengthR_2)}$,
where the \emph{codewords} $\codebookOneWord{m} \in \mathcal{X}^n$ and $\codebookTwoWord{m} \in \mathcal{Y}^n$ are finite sequences of elements of the input alphabets. We define a probability distribution $\mathbb{P}_{\mathcal{C}_1, \mathcal{C}_2}$ on these codebooks as i.i.d. drawings in each component of each codeword according to $q_X$ and $q_Y$, respectively. Accordingly, we define the \emph{output distribution induced by $\mathcal{C}_1$ and $\mathcal{C}_2$} on $\mathcal{Z}^n$ by
\begin{multline*}
p_{Z^n | \mathcal{C}_1, \mathcal{C}_2}(z^n) :=
\exp(-n(R_1+R_2))
\\ \cdot
\sum\limits_{m_1=1}^{\exp(\codebookBlocklengthR_1)}
\sum\limits_{m_2=1}^{\exp(\codebookBlocklengthR_2)}
q_{Z^n | X^n, Y^n}(z^n | \codebookOneWord{m_1}, \codebookTwoWord{m_2}).
\end{multline*}
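The soft-covering effect behind this definition can be observed numerically: even at a small block length, the output distribution induced by randomly drawn codebooks is already fairly close to the i.i.d. target. The channel (a noisy binary adder), the rates and the seed below are arbitrary toy choices.

```python
import itertools, math, random

# Induced output distribution p_{Z^n|C1,C2} for a toy MAC: Z = X xor Y,
# flipped with probability eps. With uniform q_X and q_Y, the target
# q_{Z^n} is uniform on {0,1}^n.
random.seed(1)
n, R1, R2, eps = 4, 1.0, 1.0, 0.1  # rates in nats per symbol
M1, M2 = round(math.exp(n * R1)), round(math.exp(n * R2))
C1 = [[random.randint(0, 1) for _ in range(n)] for _ in range(M1)]
C2 = [[random.randint(0, 1) for _ in range(n)] for _ in range(M2)]

def W(z, x, y):  # single-letter q_{Z|X,Y}
    return 1.0 - eps if z == x ^ y else eps

p = {}
for zn in itertools.product((0, 1), repeat=n):
    total = 0.0
    for xn in C1:
        for yn in C2:
            prob = 1.0
            for z, x, y in zip(zn, xn, yn):
                prob *= W(z, x, y)
            total += prob
    p[zn] = total / (M1 * M2)  # p_{Z^n|C1,C2}(z^n)

tv = 0.5 * sum(abs(v - 2.0 ** -n) for v in p.values())
print(f"variational distance to uniform q_Z^n: {tv:.4f}")
```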
Given probability distributions $P$ and $Q$ on a finite set $\mathcal{A}$ with mass functions $p$ and $q$, respectively, and positive $\alpha \neq 1$, the \emph{Rényi divergence of order $\alpha$ of $P$ from $Q$} is defined as
\[
\renyidiv{\alpha}{P}{Q}
:=
\frac{1}{\alpha-1}
\log
\sum\limits_{a \in \mathcal{A}}
p(a)^\alpha
q(a)^{1-\alpha}.
\]
Furthermore, we define the \emph{variational distance} between $P$ and $Q$ (or between their mass functions) as
\[
\totalvariation{p - q}
:=
\frac{1}{2} \sum\limits_{a \in \mathcal{A}} \absolute{p(a) - q(a)}
=
\sum\limits_{a \in \mathcal{A}} \positive{p(a) - q(a)}.
\]
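Both quantities are straightforward to compute for pmfs on a finite set. The sketch below uses an arbitrary pair of binary pmfs and also checks that the two expressions in the definition of the variational distance agree.

```python
import math

def renyi_div(alpha, p, q):
    """Renyi divergence of order alpha (in nats) of pmf p from pmf q,
    both given as dicts over the same finite set; alpha > 0, alpha != 1."""
    s = sum(p[a] ** alpha * q[a] ** (1.0 - alpha) for a in p if p[a] > 0)
    return math.log(s) / (alpha - 1.0)

def tv_dist(p, q):
    """Variational distance; the two expressions in the definition agree."""
    half_l1 = 0.5 * sum(abs(p[a] - q[a]) for a in p)
    pos_part = sum(max(p[a] - q[a], 0.0) for a in p)
    assert abs(half_l1 - pos_part) < 1e-12
    return half_l1

p = {0: 0.5, 1: 0.5}
q = {0: 0.25, 1: 0.75}
print(tv_dist(p, q))         # 0.25
print(renyi_div(2.0, p, q))  # log(4/3), approx 0.2877
```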
Given random variables $A$, $B$ and $C$ distributed according to $r_{A, B, C}$, we define the \emph{(conditional) information density} as
\[
\informationDensity{a}{b} := \log \frac{r_{B | A}(b | a)}{r_{B}(b)}
,~~
\informationDensityConditional{a}{b}{c} := \log \frac{r_{B | A, C}(b | a, c)}{r_{B | C}(b | c)}.
\]
The (conditional) mutual information is the expected value of the (conditional) information density.
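For a toy joint pmf, the following sketch evaluates the information density and confirms that its expectation is the mutual information; the joint distribution is an arbitrary example.

```python
import math

# Information density i(a;b) = log r_{B|A}(b|a)/r_B(b)
#                            = log r(a,b)/(r_A(a) r_B(b)),
# since r_{B|A} = r_{A,B}/r_A. Its expectation is I(A;B).
r = {(0, 0): 0.4, (0, 1): 0.1, (1, 0): 0.2, (1, 1): 0.3}  # toy r_{A,B}
r_A = {a: sum(r[a, b] for b in (0, 1)) for a in (0, 1)}
r_B = {b: sum(r[a, b] for a in (0, 1)) for b in (0, 1)}

def info_density(a, b):
    return math.log(r[a, b] / (r_A[a] * r_B[b]))

mi = sum(r[ab] * info_density(*ab) for ab in r)
print(f"I(A;B) = {mi:.4f} nats")
```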
The following inequality was introduced in~\cite{Berry} and~\cite{Esseen}; we use a refinement here which follows e.g. from~\cite{BeekBerryEsseen}.
\begin{theorem}[Berry-Esseen Inequality]
\label{theorem:berry-esseen}
Given a sequence $(A_k)_{k=1}^{n}$ of i.i.d. copies of a random variable $A$ on the reals with $\mathbb{E} A = 0$ and finite $\mathbb{E} A^2 = \sigma^2$ and $\mathbb{E} \absolute{A}^3 = \rho$, define $\bar{A} := (A_1 + \dots + A_n)/n$. Then the cumulative distribution functions $F(a) := \mathbb{P}(\bar{A}\sqrt{n}/\sigma \leq a)$ of $\bar{A}\sqrt{n}/\sigma$ and $\Phi(a) := \int_{-\infty}^a \frac{1}{\sqrt{2\pi}} \exp(-x^2/2) \, d x$ of the standard normal distribution satisfy for all real numbers $a$
\[
\absolute{F(a) - \Phi(a)}
\leq
\frac{\rho}
{\sigma^3 \sqrt{n}}.
\]
\end{theorem}
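The bound can be verified exactly for centered Bernoulli summands, since $F$ is then available in closed form from the binomial distribution; the parameters below are arbitrary.

```python
import math

# Exact check of the Berry-Esseen bound for centered Bernoulli(p) summands:
# F is computed exactly from the binomial pmf, Phi via the error function.
pb, n = 0.3, 100
sigma = math.sqrt(pb * (1 - pb))
rho = pb * (1 - pb) ** 3 + (1 - pb) * pb ** 3  # E|X - p|^3
bound = rho / (sigma ** 3 * math.sqrt(n))

def Phi(a):
    return 0.5 * (1.0 + math.erf(a / math.sqrt(2.0)))

pmf = [math.comb(n, k) * pb ** k * (1 - pb) ** (n - k) for k in range(n + 1)]
worst, cdf = 0.0, 0.0
for k in range(n + 1):
    cdf += pmf[k]                             # F at the jump point a_k
    a_k = (k - n * pb) / (sigma * math.sqrt(n))
    worst = max(worst, abs(cdf - Phi(a_k)))
print(f"max |F - Phi| = {worst:.4f} <= bound = {bound:.4f}")
```

Since only the jump points of $F$ are sampled, the computed maximum is a lower bound on the true supremum, which the theorem guarantees is below `bound`.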
\begin{shownto}{arxiv}
We further use variations of the concentration bounds introduced in~\cite{HoeffdingInequalities}.
\begin{theorem}[Chernoff-Hoeffding Bound]
\label{theorem:hoeffding}
Suppose $A = \sum_{k=1}^{n} A_k$, where the random variables in the sequence $(A_k)_{k=1}^n$ are independently distributed with values in $[0,1]$ and $\mathbb{E} A \leq \mu$. Then for $0 < \delta < 1$,
\[
\mathbb{P}(A > \mu(1+\delta)) \leq \exp\left(-\frac{\delta^2}{3} \mu \right).
\]
\end{theorem}
This version can be found, e.g., in~\cite[Ex. 1.1]{ConcentrationTextbook}. We will also be using an extension of the Chernoff-Hoeffding bound to dependent variables due to Janson~\cite[Theorem 2.1]{JansonLargeDeviations}, of which we state only a specialized instance that is used in this paper.
\begin{theorem}[Janson~\cite{JansonLargeDeviations}]
\label{theorem:janson}
Suppose $A = \sum_{k=1}^{n} A_k$, where the random variables in the sequence $(A_k)_{k=1}^n$ take values in $[0,1]$ and can be partitioned into $\chi \geq 1$ sets such that the random variables in each set are independently distributed. Then, for $\delta > 0$,
\[
\mathbb{P}(A \geq \mathbb{E} A + \delta)
\leq
\exp\left(
-2 \frac{\delta^2}
{\chi \cdot n}
\right).
\]
\end{theorem}
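The simpler bound of Theorem~\ref{theorem:hoeffding} can be checked exactly for a binomial random variable, for which the tail probability is computable in closed form; the parameters below are arbitrary.

```python
import math

# Exact check of the Chernoff-Hoeffding bound for A ~ Binomial(n, q):
# P(A > mu(1 + delta)) <= exp(-delta^2 mu / 3), with mu = E[A] = n q.
n, q, delta = 200, 0.3, 0.3
mu = n * q
tail = sum(math.comb(n, k) * q ** k * (1 - q) ** (n - k)
           for k in range(n + 1) if k > mu * (1 + delta))
bound = math.exp(-delta ** 2 * mu / 3.0)
print(f"P(A > {mu * (1 + delta):.0f}) = {tail:.5f} <= {bound:.5f}")
```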
\end{shownto}
\section{Main Results}
\label{section:main}
\begin{theorem}
\label{theorem:soft-covering-two-transmitters}
Suppose
$\mathcal{W} = (\mathcal{X}, \mathcal{Y}, \mathcal{Z}, q_{Z | X, Y})$
is a channel, $q_X$ and $q_Y$ are input distributions, $R_1 > \mutualInformationConditional{X}{Z}{Y}$ and $R_2 > \mutualInformation{Y}{Z}$.
Then there exist $\gamma_1, \gamma_2 > 0$ such that for large enough block length $n$, the codebook distributions of block length $n$ and rates $R_1$ and $R_2$ satisfy
\begin{multline}
\label{theorem:soft-covering-two-transmitters-probability-statement}
\mathbb{P}_{\mathcal{C}_1, \mathcal{C}_2} \left(
\totalvariation{
p_{Z^n | \mathcal{C}_1, \mathcal{C}_2} - q_{Z^n}
}
>
\exp(-\finalconstOnen)
\right)
\\
\leq
\exp\left(-\exp\left(\finalconstTwon\right)\right).
\end{multline}
\end{theorem}
Observing that this theorem can be applied with the roles of $X$ and $Y$ reversed and that time sharing is possible, we obtain the following corollary.
\begin{cor}
Theorem~\ref{theorem:soft-covering-two-transmitters} holds for all interior points in the convex closure of
\begin{align*}
\{
(R_1, R_2)
:
&(
R_1 \geq \mutualInformationConditional{X}{Z}{Y}
\wedge
R_2 \geq \mutualInformation{Y}{Z}
)
\\
&\vee
(
R_1 \geq \mutualInformation{X}{Z}
\wedge
R_2 \geq \mutualInformationConditional{Y}{Z}{X}
)
\}.
\end{align*}
\end{cor}
\begin{theorem}
\label{theorem:soft-covering-two-transmitters-second-order}
Given a channel
$\mathcal{W} = (\mathcal{X}, \mathcal{Y}, \mathcal{Z}, q_{Z | X, Y})$,
input distributions $q_X$ and $q_Y$, $\varepsilon \in (0,1)$, let the central second and absolute third moment of $\informationDensityConditional{X}{Z}{Y}$ be $\channelDispersion{1}$ and $\channelThirdMoment{1}$, respectively; analogously, we use $\channelDispersion{2}$ and $\channelThirdMoment{2}$ to denote the central second and absolute third moment of $\informationDensity{Y}{Z}$. Suppose the rates $R_1, R_2$ depend on $n$ in the following way:
\begin{alignat}{3}
\label{theorem:soft-covering-two-transmitters-second-order-rate-one}
R_1
&=
\mutualInformationConditional{X}{Z}{Y}&
&+
\sqrt{\frac{\channelDispersion{1}}{n}} \mathcal{Q}^{-1}(\varepsilon)&
&+
c\frac{\log n}
{n} \\
\label{theorem:soft-covering-two-transmitters-second-order-rate-two}
R_2
&=
\mutualInformation{Y}{Z}&
&+
\sqrt{\frac{\channelDispersion{2}}{n}} \mathcal{Q}^{-1}(\varepsilon)&
&+
c\frac{\log n}
{n},
\end{alignat}
where $\mathcal{Q} := 1 - \Phi$ with $\Phi$ as defined in the statement of Theorem~\ref{theorem:berry-esseen}, and $c>1$. Then, for any $d \in (0, c-1)$, we have
\begin{multline*}
\begin{aligned}
\mathbb{P}_{\mathcal{C}_1, \mathcal{C}_2}
\left( \vphantom{\frac{1}{\sqrt{n}}}
\right.
&\totalvariation{p_{Z^n | \mathcal{C}_1, \mathcal{C}_2} - q_{Z^n}}
>
\\ &\left.
(\secondOrderAtypicalProbability{1} + \secondOrderAtypicalProbability{2})
\left(1+\frac{1}{\sqrt{n}}\right)
+
\frac{3}{\sqrt{n}}
\right)
\end{aligned}
\\
\leq
2\exp\left(
-\frac{2\min(\secondOrderAtypicalProbability{1}^2,\secondOrderAtypicalProbability{2}^2)}
{n}
\exp(n \min(R_1,R_2))
\right) \\
+
2\exp\left(
n(\log \cardinality{\mathcal{Z}} + \log \cardinality{\mathcal{Y}})
-\frac{1}{3}
n^{c - d - 1}
\right),
\end{multline*}
where for both $k=1$ and $k=2$,
\begin{align*}
\secondOrderAtypicalProbability{k}
:=
\mathcal{Q}\left(
\mathcal{Q}^{-1}(\varepsilon)
+
\frac{d \log n}
{\sqrt{n\channelDispersion{k}}}
\right)
+
\frac{\channelThirdMoment{k}}
{\channelDispersion{k}^{\frac{3}{2}} \sqrt{n}}
\end{align*}
tends to $\varepsilon$ for $n \rightarrow \infty$.
\end{theorem}
Again, observing that this theorem can be applied with the roles of $X$ and $Y$ reversed, we have
\begin{cor}
\label{cor:soft-covering-two-transmitters-second-order}
Theorem~\ref{theorem:soft-covering-two-transmitters-second-order} holds with (\ref{theorem:soft-covering-two-transmitters-second-order-rate-one}) and (\ref{theorem:soft-covering-two-transmitters-second-order-rate-two}) replaced by
\begin{align*}
R_1
&=
\mutualInformation{X}{Z}
+
\sqrt{\frac{\channelDispersion{1}}{n}} \mathcal{Q}^{-1}(\varepsilon)
+
c\frac{\log n}
{n} \\
R_2
&=
\mutualInformationConditional{Y}{Z}{X}
+
\sqrt{\frac{\channelDispersion{2}}{n}} \mathcal{Q}^{-1}(\varepsilon)
+
c\frac{\log n}
{n}
\end{align*}
and $\channelDispersion{1}$, $\channelThirdMoment{1}$, $\channelDispersion{2}$ and $\channelThirdMoment{2}$ redefined to be the central second and absolute third moments of $\informationDensity{X}{Z}$ and $\informationDensityConditional{Y}{Z}{X}$, respectively.
\end{cor}
\begin{remark}
The question of how the achievable second-order rates behave near the line connecting the two corner points should be a subject of further research.
\end{remark}
In the proofs of these theorems, we consider two types of typical sets:
\begin{align*}
\typicalSetIndex{\varepsilon}{n}{1}
&:=
\{
(x^n, y^n,z^n)
:
\informationDensityConditional{x^n}{z^n}{y^n}
\leq
n(\mutualInformationConditional{X}{Z}{Y}+\varepsilon)
\}
\\
\typicalSetIndex{\varepsilon}{n}{2}
&:=
\{
(y^n,z^n)
:
\informationDensity{y^n}{z^n}
\leq
n(\mutualInformation{Y}{Z}+\varepsilon)
\}.
\end{align*}
We split the variational distance into atypical and typical parts as follows, where $P_{\mathrm{atyp}, 1}$, $P_{\mathrm{atyp}, 2}$ and $\totvarTypical{z^n}$ are defined in~(\ref{def:soft-covering-atypical-term-one}), (\ref{def:soft-covering-atypical-term-two}) and (\ref{def:soft-covering-typical-term}), shown on the next page.
\begin{figure*}
\normalsize
\begin{align}
\label{def:soft-covering-atypical-term-one}
P_{\mathrm{atyp}, 1}
&:=
\sum\limits_{z^n \in \mathcal{Z}^n}
\exp(-n(R_1+R_2))
\sum\limits_{m_1=1}^{\exp(\codebookBlocklengthR_1)}
\sum\limits_{m_2=1}^{\exp(\codebookBlocklengthR_2)}
q_{Z^n | X^n, Y^n}(z^n | \codebookOneWord{m_1}, \codebookTwoWord{m_2})
\indicator{(\codebookOneWord{m_1}, \codebookTwoWord{m_2}, z^n) \notin \typicalSetIndex{\varepsilon}{n}{1}}
\\
\label{def:soft-covering-atypical-term-two}
P_{\mathrm{atyp}, 2}
&:=
\sum\limits_{z^n \in \mathcal{Z}^n}
\exp(-n(R_1+R_2))
\sum\limits_{m_1=1}^{\exp(\codebookBlocklengthR_1)}
\sum\limits_{m_2=1}^{\exp(\codebookBlocklengthR_2)}
q_{Z^n | X^n, Y^n}(z^n | \codebookOneWord{m_1}, \codebookTwoWord{m_2})
\indicator{(\codebookTwoWord{m_2}, z^n) \notin \typicalSetIndex{\varepsilon}{n}{2}}
\\
\label{def:soft-covering-typical-term}
\totvarTypical{z^n}
&:=
\sum\limits_{m_1=1}^{\exp(\codebookBlocklengthR_1)}
\sum\limits_{m_2=1}^{\exp(\codebookBlocklengthR_2)}
\exp(-n(R_1+R_2))
\frac{q_{Z^n | X^n, Y^n}(z^n | \codebookOneWord{m_1}, \codebookTwoWord{m_2})}
{q_{Z^n}(z^n)}
\indicator{(\codebookTwoWord{m_2}, z^n) \in \typicalSetIndex{\varepsilon}{n}{2}}
\indicator{(\codebookOneWord{m_1}, \codebookTwoWord{m_2}, z^n) \in \typicalSetIndex{\varepsilon}{n}{1}}
\end{align}
\hrulefill
\end{figure*}
\begin{align}
\notag
&\totalvariation{ p_{Z^n | \mathcal{C}_1, \mathcal{C}_2} - q_{Z^n}}
\\
\notag
=
&\sum\limits_{z^n \in \mathcal{Z}^n}
q_{Z^n}(z^n)
\positive{\frac{p_{Z^n | \mathcal{C}_1, \mathcal{C}_2}(z^n)}
{q_{Z^n}(z^n)}
- 1
}
\\
\label{proof:soft-covering-two-transmitters-typical-split}
\leq
&P_{\mathrm{atyp}, 1} + P_{\mathrm{atyp}, 2}
+
\sum\limits_{z^n \in \mathcal{Z}^n}
q_{Z^n}(z^n)
\positive{\totvarTypical{z^n} - 1}.
\end{align}
\begin{remark}
The denominator of the fraction is almost surely not equal to $0$ as long as the numerator is not equal to $0$. We implicitly let the summation range only over the support of the denominator, as we do in all further summations.
\end{remark}
So the theorems can be proven by considering typical and atypical terms separately.
\showto{arxiv}{But first, we prove two lemmas to help us to bound the typical and the atypical terms.}
\showto{conference}{But first, we state two lemmas to help us bound these terms. The proofs use Chernoff-Hoeffding concentration bounds introduced in~\cite{HoeffdingInequalities} and an extension thereof for dependent variables by Janson~\cite[Theorem 2.1]{JansonLargeDeviations}. We omit the proofs here due to lack of space, however, they can be found in the extended version of this paper~\cite{arxivVersion}.}
\begin{lemma}[Bound for typical terms]
\label{lemma:soft-covering-two-transmitters-typical}
Given a block length $n$, $\varepsilon > 0$, $0 < \delta < 1$, random variables $A$, $B$ and $C$ on finite alphabets $\mathcal{A}$, $\mathcal{B}$ and $\mathcal{C}$ respectively with joint probability mass function $r_{A, B, C}$, a rate $R$ and a codebook
$\mathcal{C} = (\codebookWord{m})_{m=1}^{\exp(\codebookBlocklengthR)}$ with each component of each codeword drawn i.i.d. according to $r_A$, for any $b^n \in \mathcal{B}^n$ and $c^n \in \mathcal{C}^n$, we have
\begin{multline*}
\showto{arxiv}{\check{\mathbb{P}} :=}
\mathbb{P}_{\mathcal{C}}\left(
\sum\limits_{m=1}^{\exp(\codebookBlocklengthR)}
\exp(-\codebookBlocklengthR)
\frac{r_{C^n | A^n, B^n}(c^n | \codebookWord{m}, b^n)}
{r_{C^n | B^n}(c^n | b^n)}
\right.
\\
\left.
\vphantom{\sum\limits_{m=1}^{\exp(\codebookBlocklengthR)}}
\cdot
\indicator{(\codebookWord{m}, b^n, c^n) \in \typicalSet{\varepsilon}{n}}
>
1 + \delta
\right) \\
\leq
\exp\left(
-\frac{\delta^2}{3} \exp(-n (\mutualInformationConditional{A}{C}{B} + \varepsilon - R))
\right),
\end{multline*}
where the typical set is defined as
\begin{shownto}{arxiv}
\begin{align}
\label{lemma:soft-covering-two-transmitters-typical-def}
\typicalSet{\varepsilon}{n}
:=
\{
(a^n, b^n, c^n)
:
\informationDensityConditional{a^n}{c^n}{b^n}
\leq
n(\mutualInformationConditional{A}{C}{B}+\varepsilon)
\}.
\end{align}
\end{shownto}
\begin{shownto}{conference}
\begin{align*}
\typicalSet{\varepsilon}{n}
:=
\{
(a^n, b^n, c^n)
:
\informationDensityConditional{a^n}{c^n}{b^n}
\leq
n(\mutualInformationConditional{A}{C}{B}+\varepsilon)
\}.
\end{align*}
\end{shownto}
\end{lemma}
\begin{shownto}{arxiv}
\begin{proof}
We have
\begin{multline*}
\check{\mathbb{P}} =
\mathbb{P}_{\mathcal{C}}\left(
\sum\limits_{m=1}^{\exp(\codebookBlocklengthR)}
\exp(-n (\mutualInformationConditional{A}{C}{B} + \varepsilon))
\right.
\\
\cdot
\frac{r_{C^n | A^n, B^n}(c^n | \codebookWord{m}, b^n)}
{r_{C^n | B^n}(c^n | b^n)}
\cdot
\indicator{(\codebookWord{m}, b^n, c^n) \in \typicalSet{\varepsilon}{n}}
\\
>
\left. \vphantom{\sum\limits_{m=1}^{\exp(\codebookBlocklengthR)}}
\exp(-n (\mutualInformationConditional{A}{C}{B} + \varepsilon - R))
(1 + \delta)
\right).
\end{multline*}
By the definition of $\typicalSet{\varepsilon}{n}$ in~(\ref{lemma:soft-covering-two-transmitters-typical-def}), the summands are at most $1$, and furthermore, the expectation of the sum can be bounded as
\begin{align*}
&
\begin{aligned}
\mathbb{E}_{\mathcal{C}}\left(
\sum\limits_{m=1}^{\exp(\codebookBlocklengthR)}
\right.
&\exp(-n (\mutualInformationConditional{A}{C}{B} + \varepsilon))
\\
&\cdot
\frac{r_{C^n | A^n, B^n}(c^n | \codebookWord{m}, b^n)}
{r_{C^n | B^n}(c^n | b^n)}
\indicator{(\codebookWord{m}, b^n, c^n) \in \typicalSet{\varepsilon}{n}}
\left.
\vphantom{\sum\limits_{m=1}^{\exp(\codebookBlocklengthR)}}
\right)
\end{aligned}
\\
&
\begin{aligned}
\leq
\sum\limits_{m=1}^{\exp(\codebookBlocklengthR)}
&\exp(-n (\mutualInformationConditional{A}{C}{B} + \varepsilon))
\\
&\cdot
\mathbb{E}_{\mathcal{C}}\left(
\frac{r_{C^n | A^n, B^n}(c^n | \codebookWord{m}, b^n)}
{r_{C^n | B^n}(c^n | b^n)}
\right)
\end{aligned}
\\
&=
\exp(-n (\mutualInformationConditional{A}{C}{B} + \varepsilon - R)).
\end{align*}
Now applying Theorem~\ref{theorem:hoeffding} to the above shows the desired probability statement and completes the proof.
\end{proof}
\end{shownto}
\begin{lemma}[Bound for atypical terms]
\label{lemma:soft-covering-two-transmitters-atypical}
Given a channel
$\mathcal{W} = (\mathcal{X}, \mathcal{Y}, \mathcal{Z}, q_{Z | X, Y})$,
input distributions $q_X$ and $q_Y$, some set $A \subseteq \mathcal{X}^n \times \mathcal{Y}^n \times \mathcal{Z}^n$, $\delta > 0$, $\mu \geq \mathbb{P}((X^n, Y^n, Z^n) \in A)$ as well as rates $R_1$ and $R_2$ and codebooks distributed according to $\mathbb{P}_{\mathcal{C}_1, \mathcal{C}_2}$ defined in Section~\ref{section:preliminaries}, we have
\begin{multline*}
\showto{arxiv}{\hat{\mathbb{P}} :=}
\mathbb{P}_{\mathcal{C}_1,\mathcal{C}_2}\left(
\sum\limits_{z^n \in \mathcal{Z}^n}
\exp(-n(R_1+R_2))
\right.
\\
\sum\limits_{m_1=1}^{\exp(\codebookBlocklengthR_1)}
\sum\limits_{m_2=1}^{\exp(\codebookBlocklengthR_2)}
q_{Z^n | X^n, Y^n}(z^n | \codebookOneWord{m_1}, \codebookTwoWord{m_2})
\\
\indicator{(\codebookOneWord{m_1}, \codebookTwoWord{m_2}, z^n) \in A}
\left. \vphantom{\sum\limits_{z^n \in \mathcal{Z}^n}} >
\mu(1+\delta)
\right) \\
\leq
\exp(-2 \delta^2 \mu^2 \exp(n\min(R_1,R_2))).
\end{multline*}
\end{lemma}
\begin{shownto}{arxiv}
\begin{proof}
We have
\begin{align*}
&
\begin{aligned}
\hat{\mathbb{P}} =
&\mathbb{P}_{\mathcal{C}_1,\mathcal{C}_2}\left(
\sum\limits_{m_1=1}^{\exp(\codebookBlocklengthR_1)}
\sum\limits_{m_2=1}^{\exp(\codebookBlocklengthR_2)}
\sum\limits_{z^n \in \mathcal{Z}^n}
\right.
\\
&~
q_{Z^n | X^n, Y^n}(z^n | \codebookOneWord{m_1}, \codebookTwoWord{m_2})
\indicator{(\codebookOneWord{m_1}, \codebookTwoWord{m_2}, z^n) \in A}
\\
&\left. \vphantom{\sum\limits_{z^n \in \mathcal{Z}^n}} >
\exp(n(R_1+R_2))
(
\mu
+
\mu
\delta
)
\right)
\end{aligned}
\\
&
\begin{aligned}
\leq
&\mathbb{P}_{\mathcal{C}_1,\mathcal{C}_2}\left(
\sum\limits_{m_1=1}^{\exp(\codebookBlocklengthR_1)}
\sum\limits_{m_2=1}^{\exp(\codebookBlocklengthR_2)}
\sum\limits_{z^n \in \mathcal{Z}^n}
\right.
\\
&~
q_{Z^n | X^n, Y^n}(z^n | \codebookOneWord{m_1}, \codebookTwoWord{m_2})
\indicator{(\codebookOneWord{m_1}, \codebookTwoWord{m_2}, z^n) \in A}
\\
&\left. \vphantom{\sum\limits_{z^n \in \mathcal{Z}^n}} >
\exp\Big(n(R_1+R_2)\Big)
\Big(
\mathbb{P}((X^n, Y^n, Z^n) \in A)
+
\mu
\delta
\Big)
\right)
\end{aligned}
\\
&\leq
\exp\left(
-2\frac{\exp(2n(R_1+R_2))\mu^2\delta^2}
{\exp(n\max(R_1,R_2)) \exp(n(R_1 + R_2))}
\right)
\\
&=
\exp(-2 \delta^2 \mu^2 \exp(n\min(R_1,R_2))),
\end{align*}
where the inequality follows from Theorem~\ref{theorem:janson} by observing that the innermost sum is confined to $[0,1]$, the two outer summations together have $\exp(n(R_1+R_2))$ summands which can be partitioned into $\exp(n\max(R_1,R_2))$ sets with $\exp(n\min(R_1,R_2))$ independently distributed elements each, and the overall expectation of the term is $\exp(n(R_1+R_2))\mathbb{P}((X^n, Y^n, Z^n) \in A)$.
\end{proof}
\end{shownto}
\begin{proof}[Proof of Theorem~\ref{theorem:soft-covering-two-transmitters}]
In order to bound $P_{\mathrm{atyp}, 1}$, we observe that for any $\alpha > 1$, we can bound
$\mathbb{P}_{X^n, Y^n, Z^n}((X^n, Y^n, Z^n) \notin \typicalSetIndex{\varepsilon}{n}{1})$
as shown in~(\ref{proof:soft-covering-two-transmitters-probability-bound-start}) to~(\ref{proof:soft-covering-two-transmitters-probability-bound-end}) in the appendix,
where the inequality in (\ref{proof:soft-covering-two-transmitters-probability-bound-end}) holds as long as ${\beta} < (\alpha-1)(\mutualInformationConditional{X}{Z}{Y}+\varepsilon-\renyidiv{\alpha}{\mathbb{P}_{X, Y, Z}}{\mathbb{P}_{X | Y}\mathbb{P}_{Z | Y}\mathbb{P}_Y})$. We can achieve this for sufficiently small ${\beta} > 0$ as long as $\alpha>1$ and $\mutualInformationConditional{X}{Z}{Y}+\varepsilon-\renyidiv{\alpha}{\mathbb{P}_{X, Y, Z}}{\mathbb{P}_{X | Y}\mathbb{P}_{Z | Y}\mathbb{P}_Y} > 0$. In order to choose an $\alpha > 1$ such that the latter requirement holds, note that since our alphabets are finite, the Rényi divergence is also finite and thus it is continuous and approaches the Kullback-Leibler divergence for $\alpha$ tending to $1$~\cite{RenyiDiv}, which is in this case equal to the mutual information term.
We apply Lemma~\ref{lemma:soft-covering-two-transmitters-atypical} with $A = (\mathcal{X}^n \times \mathcal{Y}^n \times \mathcal{Z}^n) \setminus \typicalSetIndex{\varepsilon}{n}{1}$ and $\delta = 1$ to obtain
\begin{multline}
\label{proof:soft-covering-two-transmitters-atypical-bound-1}
\mathbb{P}_{\mathcal{C}_1, \mathcal{C}_2}\left(
P_{\mathrm{atyp}, 1}
>
2\exp(-n{\beta})
\right)
\\
\leq
\exp(
-2\exp(
n(
\min(R_1,R_2) - 2{\beta}
)
)
).
\end{multline}
Proceeding along similar lines of reasoning including another application of Lemma~\ref{lemma:soft-covering-two-transmitters-atypical} with $A = \mathcal{X}^n \times ((\mathcal{Y}^n \times \mathcal{Z}^n) \setminus \typicalSetIndex{\varepsilon}{n}{2})$ and $\delta=1$, we show that if ${\beta}>0$ is small enough,
\begin{multline}
\label{proof:soft-covering-two-transmitters-atypical-bound-2}
\mathbb{P}_{\mathcal{C}_1, \mathcal{C}_2}\left(
P_{\mathrm{atyp}, 2}
>
2\exp(-n{\beta})
\right)
\\
\leq
\exp(
-2\exp(
n(
\min(R_1,R_2) - 2{\beta}
)
)
).
\end{multline}
As for the typical term, we first observe that for any fixed $y^n$ and $z^n$, we can apply Lemma~\ref{lemma:soft-covering-two-transmitters-typical} with $A=X$, $B=Y$, $C=Z$ and $\delta=\exp(-n{\beta})$ to obtain
\begin{multline}
\label{proof:soft-covering-two-transmitters-typical-bound}
\mathbb{P}_{\mathcal{C}_1}\left(
\totvarTypicalOne{y^n}{z^n}
>
1 + \exp(-n{\beta})
\right)
\\
\leq
\exp\left(
-\frac{1}{3} \exp(-n (\mutualInformationConditional{X}{Z}{Y} + \varepsilon + 2{\beta} - R_1))
\right),
\end{multline}
where we used
\begin{multline}
\label{def:soft-covering-typical-term-one}
\totvarTypicalOne{y^n}{z^n}
:=
\sum\limits_{m_1=1}^{\exp(n R_1)}
\exp(-n R_1)
\\
\cdot \frac{q_{Z^n | X^n, Y^n}(z^n | \codebookOneWord{m_1}, y^n)}
{q_{Z^n | Y^n}(z^n | y^n)}
\indicator{(\codebookOneWord{m_1}, y^n, z^n) \in \typicalSetIndex{\varepsilon}{n}{1}}.
\end{multline}
We define a set of codebooks
\begin{align}
\label{def:soft-covering-good-codebooks}
\mathbb{C}_{z^n}
:=
\bigcap\limits_{y^n \in \mathcal{Y}^n}
\left\{
\mathcal{C}_1:
\totvarTypicalOne{y^n}{z^n}
\leq
1 + \exp(-n{\beta})
\right\}
\end{align}
and bound for arbitrary but fixed $z^n$
\begin{align*}
\tilde{\mathbb{P}}
:=
\mathbb{P}_{\mathcal{C}_1, \mathcal{C}_2}\left(
\totvarTypical{z^n}
>
1 + 3\exp(-n{\beta})
~|~
\mathcal{C}_1 \in \mathbb{C}_{z^n}
\right)
\end{align*}
in~(\ref{proof:soft-covering-two-transmitters-totalprob1}) to~(\ref{proof:soft-covering-two-transmitters-lemmaapplication2}) in the appendix, where~(\ref{proof:soft-covering-two-transmitters-totalprob1}) follows from the law of total probability,~(\ref{proof:soft-covering-two-transmitters-lemmaapplication1}) is a consequence of the condition $\mathcal{C}_1 \in \mathbb{C}_{z^n}$,~(\ref{proof:soft-covering-two-transmitters-totalprob2-and-bound}) results from an application of the law of total probability and the assumption that $n$ is sufficiently large such that $\exp(-n{\beta}) \leq 1$. Finally,~(\ref{proof:soft-covering-two-transmitters-lemmaapplication2}) follows from Lemma~\ref{lemma:soft-covering-two-transmitters-typical} with $A=Y$, $C=Z$, $B$ a deterministic random variable with only one possible realization and $\delta=\exp(-n{\beta})$.
We can now put everything together as shown in (\ref{proof:soft-covering-two-transmitters-union-bound-start}) to (\ref{proof:soft-covering-two-transmitters-union-bound-substitutions}) in the appendix, where~(\ref{proof:soft-covering-two-transmitters-union-bound-application}) follows from~(\ref{proof:soft-covering-two-transmitters-typical-split}) and the union bound and~(\ref{proof:soft-covering-two-transmitters-union-bound-substitutions}) is a substitution of~(\ref{proof:soft-covering-two-transmitters-atypical-bound-1}), (\ref{proof:soft-covering-two-transmitters-atypical-bound-2}), (\ref{proof:soft-covering-two-transmitters-typical-bound}) and~(\ref{proof:soft-covering-two-transmitters-lemmaapplication2}).
What remains is to choose $\gamma_1$ and $\gamma_2$ such that (\ref{theorem:soft-covering-two-transmitters-probability-statement}) holds. First, we have to choose $\varepsilon$ and ${\beta}$ small enough such that the terms $\min(R_1,R_2)-2{\beta}$, $R_1 - 2{\beta} - \varepsilon - \mutualInformationConditional{X}{Z}{Y}$ and $R_2 - 2{\beta} - \varepsilon - \mutualInformation{Y}{Z}$ are all positive. Since there have so far been no constraints on ${\beta}$ and $\varepsilon$ except that they are positive and sufficiently small, such a choice is possible provided $R_1 > \mutualInformationConditional{X}{Z}{Y}$ and $R_2 > \mutualInformation{Y}{Z}$. The theorem then follows for large enough $n$ by choosing $\gamma_1$ positive but smaller than the minimum of these three positive terms, and $\gamma_2 < {\beta}$.
\end{proof}
\begin{proof}[Proof of Theorem~\ref{theorem:soft-covering-two-transmitters-second-order}]
We consider the typical sets $\typicalSetIndex{\varepsilon_1}{n}{1}$ and $\typicalSetIndex{\varepsilon_2}{n}{2}$, where for $k=1,2$, we choose $\varepsilon_k >0$ to be
\begin{align}
\label{proof:soft-covering-two-transmitters-second-order-typicalityparam}
\varepsilon_k
:=
\sqrt{\frac{\channelDispersion{k}}
{n}
}
\mathcal{Q}^{-1}(\varepsilon)
+
d
\frac{\log n}{n}.
\end{align}
The definitions~(\ref{def:soft-covering-atypical-term-one}), (\ref{def:soft-covering-atypical-term-two}) and (\ref{def:soft-covering-typical-term}) change accordingly.
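The scaling of this choice can be sketched numerically: $\varepsilon_k$ combines a $\Theta(1/\sqrt{n})$ dispersion term with a $\Theta(\log n / n)$ correction. The dispersion value, the constant $d$, and the target $\varepsilon$ below are illustrative assumptions; $\mathcal{Q}^{-1}(\varepsilon) = \Phi^{-1}(1-\varepsilon)$ is evaluated via the standard-normal inverse CDF.

```python
import math
from statistics import NormalDist

def typicality_param(n, V, d, eps):
    """eps_k = sqrt(V/n) * Qinv(eps) + d * log(n) / n, with Qinv(e) = Phi^{-1}(1 - e)."""
    q_inv = NormalDist().inv_cdf(1.0 - eps)
    return math.sqrt(V / n) * q_inv + d * math.log(n) / n

# Illustrative values: dispersion V = 1, constant d = 1, target eps = 0.05.
for n in (100, 1000, 10000):
    print(n, typicality_param(n, V=1.0, d=1.0, eps=0.05))
```

The printed sequence shrinks roughly like $1/\sqrt{n}$, with the logarithmic correction becoming negligible for large blocklengths.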
In order to bound $P_{\mathrm{atyp}, 1}$, we use Theorem~\ref{theorem:berry-esseen} to obtain
\showto{arxiv}{\pagebreak}
\begin{align*}
&\phantom{{}={}}
\mathbb{P}_{X^n, Y^n, Z^n}((X^n, Y^n, Z^n) \notin \typicalSetIndex{\varepsilon_1}{n}{1})
\\
&
=
\mathbb{P}_{X^n, Y^n, Z^n}\left(
\frac{1}{n}
\sum\limits_{k=1}^n
\left(
\informationDensityConditional{X_k}{Z_k}{Y_k}
-
\mutualInformationConditional{X}{Z}{Y}
\right)
>
\varepsilon_1
\right)
\\
&\leq
\mathcal{Q}\left(
\varepsilon_1
\sqrt{\frac{n}
{\channelDispersion{1}}
}
\right)
+
\frac{\channelThirdMoment{1}}
{\channelDispersion{1}^{\frac{3}{2}} \sqrt{n}}
=
\secondOrderAtypicalProbability{1}.
\end{align*}
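The resulting Berry-Esseen bound can be evaluated numerically; the following sketch uses illustrative values for the dispersion and third absolute moment (these are assumptions, not values from the paper), with $\mathcal{Q}(x) = 1 - \Phi(x)$.

```python
import math
from statistics import NormalDist

def q_func(x):
    """Gaussian tail function Q(x) = 1 - Phi(x)."""
    return 1.0 - NormalDist().cdf(x)

def atypical_prob(n, eps1, V1, T1):
    """Berry-Esseen style bound: Q(eps1 * sqrt(n / V1)) + T1 / (V1**1.5 * sqrt(n))."""
    return q_func(eps1 * math.sqrt(n / V1)) + T1 / (V1 ** 1.5 * math.sqrt(n))

# Illustrative: eps1 = 0.1, dispersion V1 = 1, third moment T1 = 1.
for n in (100, 1000, 10000):
    print(n, atypical_prob(n, eps1=0.1, V1=1.0, T1=1.0))
```

For fixed $\varepsilon_1$ both terms vanish as $n$ grows; with the $\varepsilon_1 \sim 1/\sqrt{n}$ choice above, the Gaussian term instead converges to the fixed target probability.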
An application of Lemma~\ref{lemma:soft-covering-two-transmitters-atypical} with $\delta = 1/\sqrt{n}$ yields
\begin{multline}
\label{proof:soft-covering-two-transmitters-second-order-atypical-bound-1}
\mathbb{P}_{\mathcal{C}_1, \mathcal{C}_2} \left(
P_{\mathrm{atyp}, 1}
>
\secondOrderAtypicalProbability{1}\left(
1+\frac{1}{\sqrt{n}}
\right)
\right)
\\
\leq
\exp\left(
-\frac{2\secondOrderAtypicalProbability{1}^2}
{n}
\exp(n \min(R_1,R_2))
\right).
\end{multline}
Reasoning along similar lines shows
\begin{align*}
\mathbb{P}_{Y^n, Z^n}((Y^n, Z^n) \notin \typicalSetIndex{\varepsilon_2}{n}{2})
\leq
\secondOrderAtypicalProbability{2}
\end{align*}
so that a renewed application of Lemma~\ref{lemma:soft-covering-two-transmitters-atypical} gives
\begin{multline}
\label{proof:soft-covering-two-transmitters-second-order-atypical-bound-2}
\mathbb{P}_{\mathcal{C}_1, \mathcal{C}_2}\left(
P_{\mathrm{atyp}, 2}
>
\secondOrderAtypicalProbability{2}\left(
1+\frac{1}{\sqrt{n}}
\right)
\right)
\\
\leq
\exp\left(
-\frac{2\secondOrderAtypicalProbability{2}^2}
{n}
\exp(n \min(R_1,R_2))
\right).
\end{multline}
For the typical term, we use the definitions~(\ref{def:soft-covering-typical-term-one}) and~(\ref{def:soft-covering-good-codebooks}) with the typical set $\typicalSetIndex{\varepsilon_1}{n}{1}$, and observe that for any fixed $y^n$ and $z^n$, we can apply Lemma~\ref{lemma:soft-covering-two-transmitters-typical} with $A=X$, $B=Y$, $C=Z$ and $\delta=1/\sqrt{n}$ to obtain
\showto{conference}{\pagebreak}
\begin{multline}
\label{proof:soft-covering-two-transmitters-second-order-typical-bound}
\mathbb{P}_{\mathcal{C}_1}\left(
\totvarTypicalOne{y^n}{z^n}
>
1 + \frac{1}{\sqrt{n}}
\right) \\
\leq
\exp\left(
-\frac{1}{3n} \exp(-n (\mutualInformationConditional{X}{Z}{Y} + \varepsilon_1 - R_1))
\right).
\end{multline}
Now proceeding in a similar manner as in~(\ref{proof:soft-covering-two-transmitters-totalprob1}) to~(\ref{proof:soft-covering-two-transmitters-lemmaapplication2}) shows
\begin{multline*}
\mathbb{P}_{\mathcal{C}_1, \mathcal{C}_2}\left(
\totvarTypical{z^n}
>
1 + \frac{3}{\sqrt{n}}
~|~
\mathcal{C}_1 \in \mathbb{C}_{z^n}
\right)
\\
\leq
\exp\left(
-\frac{1}{3n} \exp(-n (\mutualInformation{Y}{Z} + \varepsilon_2 - R_2))
\right),
\end{multline*}
where there is no assumption on $n$ because $1/\sqrt{n} \leq 1$ for all $n \geq 1$.
The theorem then follows from (\ref{proof:soft-covering-two-transmitters-second-order-union-bound-start}) to (\ref{proof:soft-covering-two-transmitters-second-order-union-bound-end}) in the appendix,
\showto{arxiv}{
\vfill\null
\pagebreak
\noindent
}
where~(\ref{proof:soft-covering-two-transmitters-second-order-union-bound-application}) results from~(\ref{proof:soft-covering-two-transmitters-typical-split}) and the union bound, (\ref{proof:soft-covering-two-transmitters-second-order-union-bound-substitutions}) follows by substituting (\ref{proof:soft-covering-two-transmitters-second-order-atypical-bound-1}), (\ref{proof:soft-covering-two-transmitters-second-order-atypical-bound-2}) and~(\ref{proof:soft-covering-two-transmitters-second-order-typical-bound}), and (\ref{proof:soft-covering-two-transmitters-second-order-union-bound-end}) follows by substituting (\ref{theorem:soft-covering-two-transmitters-second-order-rate-one}), (\ref{theorem:soft-covering-two-transmitters-second-order-rate-two}) and (\ref{proof:soft-covering-two-transmitters-second-order-typicalityparam}), as well as elementary operations.
\end{proof}
\bibliographystyle{plain}
\section{Introduction}
\textit{Background}: Transport properties in materials with spin-orbit coupling (SOC) are of great interest for potential spintronic applications, especially because of the unique spin-momentum locking (SML) observed in diverse classes of materials such as topological insulator (TI) \cite{Hasan_RMP_2010, Zhang_RMP_2011, Wang_SPIN_2016}, heavy metals \cite{Miron_Nat_2011, Ohno_APL_2011, Ralph_APL_2012, Ralph_Science_2012, Felser_NatComm_2015}, oxide interfaces \cite{FertLAOSTO2016, Songe1602312}, and narrow bandgap semiconductors \cite{Johnson_PRB_2000, Koo_JAppPhys_2012, SilsbeeJPhys2004}. There has been an immense effort to model the interplay between spin and charge in such materials using time-dependent classical \cite{CosimoPRB2017} or quantum Boltzmann equation \cite{SinovaPRB2012,SinovaPRL2013}, nonequilibrium Green's function \cite{HalperinPRL2004, BurkovPRB2004, BurkovPRL2010, ZainuddinPRB2011, Hong_PRB_2012}, phenomenological equations coupled to magnet dynamics \cite{TserkovnyakPRB2014}, and time-independent diffusion equation used to explain bulk spin Hall effect \cite{BauerPRB2013}.
\textit{Four-Component Diffusion Equation}: In this paper, we propose a time-dependent four-component diffusion equation that can be used for transport analysis on multi-contact based structures implemented with materials exhibiting SML. The model is obtained from the Boltzmann transport equation assuming linear response and elastic scattering processes in the channel. The basic approach is to assign one electrochemical potential $\mu(\vec{p},s)$ to each of the eigenstates ($\vec{p},s$) where $\vec{p}$ is the momentum confined in the $z$-$x$ plane and $s=\pm1$ is the spin index with $+1$ and $-1$ denoting the up ($U$) and the down ($D$) spins with respect to the spin quantization axis $\hat{y}\times\left(\vec{p}-q\vec{A}\right)$ ($\vec{A}$ is the magnetic vector potential).
We then classify the eigenstates into four groups ($U^+$, $D^+$, $U^-$, and $D^-$) based on the spin index ($U$, $D$) and the sign of the $x$-component of the group velocity ($+,-$) and define an average electrochemical potential corresponding to each of the four groups, resulting in a four-component diffusion equation. This can be viewed as an extension of the Valet-Fert equation which uses two electrochemical potentials for $U$ and $D$ states \cite{Valet_Fert_1993}. The four-component diffusion equation in steady-state reduces to our prior model \cite{Sayed_SciRep_2016, Hong_SciRep_2016} that we used to predict a unique three-resistance state on SML materials with two FM contacts in a multi-terminal spin valve structure \cite{Sayed_SciRep_2016}. The prediction has been observed recently on Pt \cite{Pham_NanoLett_2016, Pham_APL_2016} and InAs \cite{Koo_New} up to room temperature. We expect the prediction to be observed on any channel exhibiting SML.
In our generalized view, $U^+$ (and $U^-$) states have the same number of modes $M$ (and $N$) as $D^-$ (and $D^+$) states due to time-reversal symmetry. The degree of SML in our model is given by \cite{Sayed_SciRep_2016, Hong_SciRep_2016}
\begin{equation}
\label{degSML}
p_0=\dfrac{M-N}{M+N},
\end{equation}
where $M$ and $N$ are evaluated at Fermi energy for zero temperature and in general require thermal averaging. For normal metal (NM) channels $p_0=0$ i.e. $M=N$. For a perfect topological insulator (TI) $N=0$ leading to $p_0=1$, however, $p_0$ gets effectively lowered by the presence of parallel channels. $p_0$ has been quantified for different TIs by a number of groups \cite{JonkerNatNano2014, KLWangNanoLett2014, DasNanoLett2015, SamarthPRB2015, YPChenSciRep2015, Samarth_PRB_2015, YoichiPRB2016} by measuring the charge current induced spin voltage using a ferromagnetic (FM) contact, motivated by a theoretical proposal \cite{Hong_PRB_2012}. For a Rashba channel with a coupling coefficient $\alpha_R$, $p_0 \approx \alpha_R k_F /(2E_F)\ll 1$ \cite{SilsbeeJPhys2004, Hong_PRB_2012} which can be quantified with similar spin voltage measurements \cite{Johnson_PRB_2000,Koo_JAppPhys_2012}. $k_F$ and $E_F$ are the Fermi wave vector and Fermi energy respectively. Recently, spin voltage measurements have been reported on heavy metals like platinum \cite{Pham_NanoLett_2016, Pham_APL_2016} and gold \cite{AppelbaumPRB2016}. These experiments can also be quantified by $p_0$, though the underlying mechanism is subject to active debate \cite{StilesPRB2013, SinovaRevModPhys2015, HoffmannIEETMAG2013} and could involve a bulk spin Hall effect \cite{Ralph_Science_2012, BauerPRB2013, WangPRL2014} or interface Rashba-like channel \cite{Felser_NatComm_2015, Saitoh_SciRep_2015, HoeschPRB2004, Tamai_PRB_2013}.
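Eq. \eqref{degSML} and the small-$p_0$ Rashba estimate can be sketched numerically as follows; the mode counts and Rashba parameters used are illustrative assumptions, not fits to a specific material.

```python
def degree_of_sml(M, N):
    """Degree of spin-momentum locking, p0 = (M - N) / (M + N)."""
    return (M - N) / (M + N)

def p0_rashba(alpha_R, k_F, E_F):
    """Small-p0 Rashba estimate, p0 ~ alpha_R * k_F / (2 * E_F)."""
    return alpha_R * k_F / (2.0 * E_F)

print(degree_of_sml(M=100, N=100))  # normal metal: p0 = 0
print(degree_of_sml(M=100, N=0))    # ideal TI surface: p0 = 1
# Illustrative Rashba numbers (alpha_R in eV*m, k_F in 1/m, E_F in eV):
print(p0_rashba(alpha_R=1e-11, k_F=1e8, E_F=0.1))
```

The Rashba example yields $p_0 \sim 10^{-3}$, consistent with the $p_0 \ll 1$ regime quoted above, while the TI limit saturates at $p_0 = 1$ before parallel channels dilute it.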
\textit{Transmission Line Model}: We translate our four-component ($U^+$, $D^+$, $U^-$, and $D^-$) semiclassical model into a transmission line model with two-component (charge and $z$-component of spin) voltages and currents, where the coupling between charge and spin in a SML channel is characterized by $p_0$ in Eq. \eqref{degSML}. The model is compatible with a standard circuit simulator such as the Simulation Program with Integrated Circuit Emphasis (SPICE), which enables straightforward analysis of complex geometries. The transmission line model is a new addition to our multi-physics spin-circuit framework \cite{ModApp}, which has been previously used to explain experiments and evaluate spin-based device proposals \cite{Camsari_SciRep_2015, Sayed_SciRep2_2016}.
For NM channels (i.e. $p_0=0$), the proposed transmission line model decouples into the well-known model for charge that has been previously used to analyze transport in quantum wires \cite{Burke_TNANO_2002, Burke_TNANO_2003, Sayeef_IEEE_2005} and a time-dependent version of the Valet-Fert equation \cite{Valet_Fert_1993} for spin. For SML channels (i.e. $p_0\neq 0$), our model leads to several prior results on charge-spin interconversion in the steady-state limit \cite{Hong_PRB_2012, Hong_SciRep_2016} that have been previously used by a number of experimental groups \cite{JonkerNatNano2014, KLWangNanoLett2014, DasNanoLett2015, SamarthPRB2015, YPChenSciRep2015, Samarth_PRB_2015, YoichiPRB2016, Koo_New} to quantify their spin voltage measurements using potentiometric ferromagnetic contacts. We further derive a simple expression and present SPICE simulation results for a parameter that has been widely used to quantify the inverse Rashba-Edelstein effect (IREE) in 2D channels, which are in good agreement with existing experiments \cite{Fert_NatComm_2016,IssaCuBi2016,FertLAOSTO2016,SmarthPRL2016} on diverse materials.
We then use the full time-dependent transmission line model to study the spin-charge separation in the presence of SML in materials with SOC, a subject that has been controversial in the past (see, for example, \cite{BalseiroPRL2002, CalzonaPRB2015, Stauber_PRB_2013,BarnesPRL2000}). Our model suggests that depending on the channel cross-section, the charge signal can travel faster than the spin signal, resulting in spin-charge separation, which is well-known for materials without SOC \cite{HalperinJAP2007, PoliniPRL2007, SchroerPRL2014, Burke_TNANO_2002} based on the Luttinger liquid theory. We argue using our model that the spin-charge separation persists even in SOC materials exhibiting SML (i.e. $p_0\neq0$). Similar arguments have been made in the past considering the presence of spin-orbit coupling (SOC) \cite{BalseiroPRL2002, CalzonaPRB2015, Stauber_PRB_2013}, although there exist counterarguments that the presence of SOC destroys the spin-charge separation \cite{BarnesPRL2000}. However, we predict that the high-velocity charge signal in SML channels is accompanied by an additional spin component proportional to $p_0$ that travels at the same velocity as the charge, which has not been discussed before.
Note that the proposed model does not take into account effects such as spin precession, which involve the off-diagonal elements of the density matrix and which we assume to be negligible. An extension of this model to include the $x$ and $y$ components of spin could possibly address such issues, as done earlier for materials without SOC (see \cite{Camsari_SciRep_2015}, and references therein). The assumptions made to derive the model are discussed in detail in Section \ref{sec_semi}. Several predictions from our model for steady-state \cite{Sayed_SciRep_2016} have already received support from experiments \cite{Pham_APL_2016,Pham_NanoLett_2016,Koo_New}, suggesting that the assumptions are within reasonable limits. The assumptions can be revisited as the field evolves, leading to revised model parameters, but the basic model should remain valid.
\textit{Outline}: The paper is organized as follows. In Section II, we describe the transmission line model for SML channels and show that special cases lead to prior well-known models. In Section III, we derive several results on charge-spin interconversion from our transmission line model in steady-state and present comparison with SPICE simulations using the full model. We obtain a simple expression for a parameter that has been widely used to quantify IREE and show that it is in good agreement with available experiments on diverse materials. In Section IV, we study the spin-charge separation in terms of spin and charge signal velocities obtained from our time-dependent transmission line equations. We show that the separation persists even in SML channels; however, an additional spin component accompanies the high-velocity charge signal. In Section V, we derive the transmission line model starting from the Boltzmann transport equation with all the assumptions clearly stated. We discuss different scattering mechanisms in the channels and their effects on charge and spin transport. Finally, in Section VI, we end with a brief summary.
\section{Transmission Line Model}
\subsection{Model Description}
We consider the structure and axes in Fig. \ref{1}(a) to derive the transmission line model. The model has two components: charge and $z$-component of spin with coupling between them characterized by $p_0$ in Eq. \eqref{degSML}. The charge model is given by
\begin{equation}
\label{charge_TL}
\begin{aligned}
&\left(\dfrac{1}{C_E}+\dfrac{1}{C_Q}\right)^{-1}\;\dfrac{\partial}{{\partial t}}{V_c} = - \dfrac{\partial}{{\partial x}}I_c,\\
&\left(L_K+L_M\right)\;\dfrac{\partial}{{\partial t}}I_c + R_c\,I_c = - \dfrac{\partial}{\partial x}{V_c} + {p_0}{\eta_c}{V_s},
\end{aligned}
\end{equation}
where $I_c$ and $V_c$ are the charge current and voltage along the $\hat{x}$-direction, $C_E$ and $C_Q$ are the electrostatic and quantum capacitances per unit length, $L_M$ and $L_K$ are the magnetic and kinetic inductances per unit length, and $R_c$ is the charge resistance per unit length.
\begin{figure}
\includegraphics[width=0.47 \textwidth]{Figure1.png}
\caption{(a) Structure and corresponding axes of the channel with spin-momentum locking (SML) under consideration, for which a transmission line model is derived. The model has two components: (b) charge (corresponds to Eq. \eqref{charge_TL}) and (c) spin (corresponds to Eq. \eqref{spin_TL}), with coupling between them described by degree of SML $p_0$ (see Eq. \eqref{degSML}). Coupling between charge and spin are modeled by dependent voltage and current sources with $V_m=\eta_c V_s$, $V_n=\eta_s V_c+r_m I_{em}$, and $I_n=\gamma_s I_c - g_m V_{em}$.}\label{1}
\end{figure}
The spin model is given by
\begin{equation}
\label{spin_TL}
\begin{aligned}
&\dfrac{C_Q}{\alpha^2}\dfrac{\partial}{{\partial t}}{V_s} + G_{sh}{V_s} + p_0 g_m V_{em} = - \dfrac{\partial}{{\partial x}}{I_s} + p_0 \gamma_s I_c,\\
&\alpha^2{L_K}\dfrac{\partial}{{\partial t}}{I_s} + {R_s}{I_s} - p_0 r_m I_{em} = - \dfrac{\partial}{{\partial x}}{V_s} + p_0 \eta_s {V_c},
\end{aligned}
\end{equation}
where $I_s$ and $V_s$ are spin current and voltage along $\hat{x}$-direction with spin polarization along the $\hat{z}$-direction. In this discussion, $\hat{y}$-direction is out-of-plane. Here, $\alpha=2/\pi$ is an angular averaging factor, $R_s$ is the spin resistance per unit length of the channel and $G_{sh}$ is the shunt conductance per unit length that captures the spin lost in the channel due to the spin relaxation. Detailed derivation of Eqs. \eqref{charge_TL} and \eqref{spin_TL} from the Boltzmann transport equation will be discussed in Section \ref{sec_semi} with clearly stated assumptions.
Distributed circuit models for charge and spin are shown in Fig. \ref{1}(b) and (c), which are based on Eqs. \eqref{charge_TL} and \eqref{spin_TL} respectively. The dependent sources proportional to $p_0$ represent charge-spin inter-coupling between the two models. The dependent source parameters in Fig. \ref{1} are given by
\begin{subequations}
\begin{equation}
\label{dep_Vm}
V_m=\eta_c V_s,
\end{equation}
\begin{equation}
\label{dep_Vn}
V_n=\eta_s V_c+r_m I_{em},
\end{equation}
\begin{equation}
\text{and,}\;\, I_n=\gamma_s I_c - g_m V_{em}.
\end{equation}
\end{subequations}
The parameters of Eqs. \eqref{charge_TL} and \eqref{spin_TL} are given by
\begin{subequations}
\label{params}
\begin{alignat}{10}
\label{cq}
&C_Q = \dfrac{2}{R_B \left| {\left\langle {{v_x}} \right\rangle } \right|},\\
\label{lk}
&L_K=\dfrac{R_B}{2 \left| {\left\langle {{v_x}} \right\rangle } \right|},\\
\label{rc}
&R_c = \dfrac{R_B}{\lambda},\\
\label{rs}
&R_s=\dfrac{\alpha^2 R_B}{\lambda_0},\\
\label{gsh}
&G_{sh}=\dfrac{4}{\alpha^2 R_B\lambda_s},\\
&g_m=\dfrac{2}{\alpha R_B},\\
&r_m=\dfrac{\alpha}{\left| {\left\langle {{v_x}} \right\rangle } \right|C_E},\\
\label{ets}
&\eta_s=\dfrac{{2\alpha}}{{{\lambda _0}}},\\
\label{etc}
&\eta_c=\dfrac{2}{\alpha {\lambda_r}},\\
\label{gamas}
&\gamma_s=\dfrac{{2}}{{{\alpha\lambda_t}}},\\
\label{bal_res}
\text{and,}\;\;&R_B=\dfrac{h}{q^2}\dfrac{1}{M+N}.
\end{alignat}
\end{subequations}
Here, $\lambda$, $\lambda_0$, and $\lambda_s$ are three distinct mean free paths that determine $R_c$, $R_s$, and $G_{sh}$ respectively. $|\langle v_x\rangle|$ is the magnitude of the thermally averaged electron velocity $\langle{v}_x\rangle$. $\eta_c$ represents the spin-to-charge conversion coefficient and $\eta_s$, $\gamma_s$ represent the charge-to-spin conversion coefficients. These coefficients depend on different scattering mechanisms in the channel, which will be discussed later in Section \ref{sec_semi}. $r_m$ and $g_m$ are transient charge-spin coupling coefficients in units of resistance per unit length and conductance per unit length respectively.
$R_B$ is the ballistic resistance of the channel, $h$ is Planck's constant, and $q$ is the electron charge. $R_B$ is inversely proportional to the total number of modes ($M + N$) in the channel; it represents a material property and does not imply ballistic transport. The models and related results discussed in this paper are valid all the way from the ballistic to the diffusive regime of operation.
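The parameters in Eqs. \eqref{cq}, \eqref{lk}, and \eqref{bal_res} can be evaluated with a short numerical sketch; the mode count and averaged velocity below are illustrative assumptions. Note that $C_Q L_K = 1/|\langle v_x\rangle|^2$, so the product of the two per-unit-length quantities fixes the electron velocity.

```python
h = 6.62607015e-34   # Planck constant, J*s
q = 1.602176634e-19  # electron charge, C

def ballistic_resistance(M_plus_N):
    """R_B = (h / q^2) / (M + N), Eq. (bal_res); h/q^2 ~ 25.8 kOhm."""
    return (h / q ** 2) / M_plus_N

def quantum_capacitance(R_B, v_avg):
    """C_Q = 2 / (R_B * |<v_x>|) per unit length, Eq. (cq)."""
    return 2.0 / (R_B * v_avg)

def kinetic_inductance(R_B, v_avg):
    """L_K = R_B / (2 * |<v_x>|) per unit length, Eq. (lk)."""
    return R_B / (2.0 * v_avg)

R_B = ballistic_resistance(100)  # ~258 Ohm for M + N = 100 modes
v = 1e6                          # illustrative |<v_x>|, m/s
print(R_B, quantum_capacitance(R_B, v), kinetic_inductance(R_B, v))
```

With $M+N=100$, the same mode count used in the SPICE simulations later, $R_B \approx 258\,\Omega$.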
\begin{figure}
\includegraphics[width=0.49 \textwidth]{Figure3.png}
\caption{(a) Structure of the spin-momentum locked (SML) channel in the presence of an external contact. Both (b) charge and (c) spin transmission line models are modified as compared to Fig. \ref{1}, which now correspond to Eqs. \eqref{charge_TL_cont} and \eqref{spin_TL_cont} respectively. The dependent sources are $V_m '=\eta_c V_s+\frac{G_0 R_B}{2}\left(\frac{v_s}{\alpha}+p_f v_c\right)$, $V_n '=\eta_s V_c+r_m I_{em}+\frac{G_0 R_B}{2}\left(\alpha v_c+p_f v_s\right)$, and $I_n=\gamma_s I_c - g_m V_{em}$. Note that the model reduces to that shown in Fig. \ref{1} in the limit $G_0 \rightarrow 0$.}\label{3}
\end{figure}
\subsection{Presence of an External Contact}
In the presence of an external contact on the channel (see Fig. \ref{3}(a)), the charge model in Eq. \eqref{charge_TL} is modified as
\begin{equation}
\label{charge_TL_cont}
\begin{aligned}
&\left(\dfrac{1}{C_E}+\dfrac{1}{C_Q}\right)^{-1}\;\dfrac{\partial}{{\partial t}}{V_c} = - \dfrac{\partial}{{\partial x}}I_c+i^c,\\
&\left(L_K+L_M\right)\;\dfrac{\partial}{{\partial t}}I_c + R_c\,I_c = - \dfrac{\partial}{\partial x}{V_c} + {p_0}{\eta_c}{V_s}+\Delta v^c,
\end{aligned}
\end{equation}
and the spin model in Eq. \eqref{spin_TL} is modified as
\begin{equation}
\label{spin_TL_cont}
\begin{aligned}
&\dfrac{C_Q}{\alpha^2}\dfrac{\partial}{{\partial t}}{V_s} + G_{sh}{V_s} + p_0 g_m V_{em} = - \dfrac{\partial}{{\partial x}}{I_s} + p_0 \gamma_s I_c+i^s,\\
&\alpha^2 {L_K}\dfrac{\partial}{{\partial t}}{I_s} + {R_s}{I_s} - p_0 r_m I_{em} = - \dfrac{\partial}{{\partial x}}{V_s} + p_0 \eta_s {V_c}+\Delta v^s,
\end{aligned}
\end{equation}
where $i^c$ and $i^s$ represent charge and spin currents entering into the channel per unit length from the external contact, $\Delta v^c$ and $\Delta v^s$ represent the change in channel charge and spin voltages per unit length in the region under the external contact. They are given as
\begin{equation}
\label{contact_charge_spin}
\left\{ {\begin{array}{*{20}{c}}
{{i^c}}\\
{{i^s}}
\end{array}} \right\} = {G_0}\left[ {\begin{array}{*{20}{c}}
1&{\dfrac{p_f}{\alpha}}\\\\
\dfrac{p_f}{\alpha} &{\dfrac{1}{\alpha^2}}
\end{array}} \right]\left\{ {\begin{array}{*{20}{c}}
{{v_c} - {V_c}}\\
{{v_s} - {V_s}}
\end{array}} \right\}, \text{ and}
\end{equation}
\begin{equation}
\label{contact_charge_spin1}
\begin{array}{l}
\left\{ {\begin{array}{*{20}{c}}
{\Delta {v^c}}\\
{\Delta {v^s}}
\end{array}} \right\} = - \dfrac{{{G_0}R_B^2}}{4}\left[ {\begin{array}{*{20}{c}}
1&{\alpha {p_f}}\\
{\alpha {p_f}}&{{\alpha ^2}}
\end{array}} \right]\left\{ {\begin{array}{*{20}{c}}
{{I_c}}\\
{{I_s}}
\end{array}} \right\}\\
\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;+ \dfrac{{{p_0}{G_0}{R_B}}}{2}\left[ {\begin{array}{*{20}{c}}
{{p_f}}&{\dfrac{1}{\alpha }}\\
\alpha &{{p_f}}
\end{array}} \right]\left\{ {\begin{array}{*{20}{c}}
{{v_c}}\\
{{v_s}}
\end{array}} \right\},
\end{array}
\end{equation}
where $v_c=(v_u+v_d)/2$ and $v_s=\alpha(v_u-v_d)/2$ are the charge and spin voltages applied at the external contact, $G_0$ is the contact conductance per unit length, and $p_f$ is the contact polarization, with $p_f=0$ indicating a normal metal contact and $p_f\neq 0$ indicating a ferromagnetic contact. The derivation of the model starting from the Boltzmann transport equation is given in Section \ref{sec_semi}, with clearly stated assumptions organized under subheadings.
Modified distributed circuit models for charge and spin are shown in Fig. \ref{3}(b) and (c), which are based on Eqs. \eqref{charge_TL_cont}-\eqref{spin_TL_cont} respectively. The presence of a contact with conductance $G_0$ adds series resistances $R_{cont}^c=\dfrac{G_0R_B^2}{4}$ and $R_{cont}^s=\dfrac{\alpha^2 G_0R_B^2}{4}$ in the charge and spin models as shown in Fig. \ref{3}. This effect exists even if the channel is NM. The presence of the external contact also modulates the dependent sources in Eqs. \eqref{dep_Vm} and \eqref{dep_Vn} as
\begin{subequations}
\begin{equation}
V_m '=\eta_c V_s+\dfrac{G_0 R_B}{2}\left(\dfrac{v_s}{\alpha}+p_f v_c\right),
\end{equation}
\begin{equation}
V_n '=\eta_s V_c+r_m I_{em}+\dfrac{G_0 R_B}{2}\left(\alpha v_c+p_f v_s\right),
\end{equation}
\end{subequations}
with $V_a$ and $V_b$ representing the voltage drop across $R_{cont}^c$ and $R_{cont}^s$ in charge and spin models in Fig. \ref{3} respectively. Note that the additional terms are proportional to $G_0$ and negligible for potentiometric contacts where $G_0$ is very low.
\subsection{Special Case: Normal Metals ($p_0=0$)}
We consider a special case of Eqs. \eqref{charge_TL} and \eqref{spin_TL} for a normal metal (NM) channel, i.e. $p_0=0$. For an NM channel, the charge and spin models decouple into well-known models, as described below.
\subsubsection{Charge Model: Transport Model for Quantum Wires}
For NM channels ($p_0=0$), the charge model in Eq. \eqref{charge_TL} reduces to the well-known transmission line model for charge that has been previously used to analyze transport in quantum wires, given by
\begin{equation}
\label{Qwire}
\begin{aligned}
&\left(\dfrac{1}{C_E}+\dfrac{1}{C_Q}\right)^{-1}\;\dfrac{\partial}{{\partial t}}{V_c} = - \dfrac{\partial}{{\partial x}}I_c,\\
&\left(L_K+L_M\right)\;\dfrac{\partial}{{\partial t}}I_c + R_c\,I_c = - \dfrac{\partial}{\partial x}{V_c}.
\end{aligned}
\end{equation}
The model was first derived from Luttinger liquid theory \cite{Burke_TNANO_2002, Burke_TNANO_2003} and then from the Boltzmann transport equation with one electrochemical potential \cite{Sayeef_IEEE_2005}. In the quantum wire limit $L_K\gg L_M$ and $C_Q\ll C_E$, while in the classical transmission line limit $L_K\ll L_M$ and $C_Q\gg C_E$ \cite{Burke_TNANO_2002,Sayeef_IEEE_2005}.
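The lossless wave velocity implied by Eq. \eqref{Qwire} is $v = 1/\sqrt{(L_K+L_M)\,C}$ with the series capacitance $C = (1/C_E + 1/C_Q)^{-1}$, so in the quantum wire limit the smaller of the two terms dominates each bracket and $v \approx 1/\sqrt{L_K C_Q}$, i.e.\ the averaged electron velocity. The per-unit-length values below are illustrative assumptions chosen only to realize the two limits.

```python
import math

def wave_velocity(L_K, L_M, C_E, C_Q):
    """Lossless signal velocity of Eq. (Qwire): 1/sqrt((L_K + L_M) * series(C_E, C_Q))."""
    C_series = 1.0 / (1.0 / C_E + 1.0 / C_Q)
    return 1.0 / math.sqrt((L_K + L_M) * C_series)

# Quantum-wire limit: L_K >> L_M and C_Q << C_E (illustrative per-unit-length values).
v_quantum = wave_velocity(L_K=1e-4, L_M=1e-7, C_E=1e-10, C_Q=1e-13)
# Classical transmission-line limit: L_K << L_M and C_Q >> C_E.
v_classical = wave_velocity(L_K=1e-7, L_M=1e-4, C_E=1e-13, C_Q=1e-10)
print(v_quantum, v_classical)
```

In the quantum-wire limit the computed velocity matches $1/\sqrt{L_K C_Q}$ to well under a percent, confirming that the kinetic inductance and quantum capacitance set the signal speed there.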
In steady-state ($\partial/\partial t \rightarrow 0$), we get the diffusion equation for charge from Eq. \eqref{Qwire}, given by
\begin{equation}
\dfrac{d^2}{dx^2}V_c=0.
\end{equation}
\subsubsection{Spin Model: Valet-Fert Equation}
For NM channels ($p_0=0$), the spin model in Eq. \eqref{spin_TL} becomes a time-dependent spin-diffusion equation, given by
\begin{equation}
\label{time_Valet_Fert}
\begin{aligned}
&\dfrac{C_Q}{\alpha^2}\dfrac{\partial}{{\partial t}}{V_s} + G_{sh}{V_s} = - \dfrac{\partial}{{\partial x}}{I_s},\\
&\alpha^2{L_K}\dfrac{\partial}{{\partial t}}{I_s} + {R_s}{I_s} = - \dfrac{\partial}{{\partial x}}{V_s},
\end{aligned}
\end{equation}
which in steady-state ($\partial/\partial t \rightarrow 0$), becomes the well-known Valet-Fert equation \cite{Valet_Fert_1993}, given by
\begin{equation}
\dfrac{d^2}{dx^2}V_s=\dfrac{V_s}{\lambda_{sf}^2},
\end{equation}
with the spin diffusion length given by
\begin{equation}
\lambda_{sf}=\dfrac{1}{\sqrt{R_s G_{sh}}}=\dfrac{\sqrt{\lambda_0 \lambda_s}}{2}.
\end{equation}
A spin model similar to Eq. \eqref{time_Valet_Fert} has been discussed previously based on Luttinger liquid theory \cite{Burke_TNANO_2002}; however, that model did not take into account the spin relaxation processes in the channel (the shunt conductance $G_{sh}$).
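The identity $\lambda_{sf} = 1/\sqrt{R_s G_{sh}} = \sqrt{\lambda_0 \lambda_s}/2$ follows directly from Eqs. \eqref{rs} and \eqref{gsh}, since the factors of $\alpha^2 R_B$ cancel in the product $R_s G_{sh}$. A minimal sketch, with illustrative mean free paths (assumed values, not measured ones), verifies the cancellation and the resulting Valet-Fert decay of an injected spin voltage:

```python
import math

alpha = 2.0 / math.pi        # angular averaging factor
R_B = 258.0                  # illustrative ballistic resistance, Ohm
lam0, lam_s = 50e-9, 200e-9  # illustrative mean free paths, m

R_s = alpha ** 2 * R_B / lam0             # Eq. (rs), per unit length
G_sh = 4.0 / (alpha ** 2 * R_B * lam_s)   # Eq. (gsh), per unit length

lam_sf = 1.0 / math.sqrt(R_s * G_sh)
print(lam_sf, math.sqrt(lam0 * lam_s) / 2.0)  # the two forms agree

# Valet-Fert decay of an injected spin voltage along the channel:
V_s0 = 1e-3  # V, illustrative
for x in (0.0, lam_sf, 3 * lam_sf):
    print(x, V_s0 * math.exp(-x / lam_sf))
```

Note that $R_B$ drops out of $\lambda_{sf}$ entirely: the spin diffusion length is set by the two mean free paths alone.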
\section{Steady-State Transport Results}
\label{sec_Steady}
In this section, we discuss several established steady-state results on charge-spin interconversion in the potentiometric limit. We start from the steady-state ($\partial/\partial t \rightarrow 0$) form of the transmission line model with external contact in Eqs. \eqref{charge_TL_cont} and \eqref{spin_TL_cont}, given by
\begin{subequations}
\label{steady_state_TL}
\begin{alignat}{4}
&\dfrac{d}{{d x}}I_c = i^c,\\
&\dfrac{d}{d x}{V_c} = - R_c\,I_c + {p_0}{\eta_c}{V_s} + \Delta v^c,\\
&\dfrac{d}{{d x}}{I_s} = -G_{sh}{V_s} + p_0 \gamma_s I_c + i^s,\\
\text{and}\;\;&\dfrac{d}{{d x}}{V_s} = - {R_s}{I_s} + p_0 \eta_s {V_c} + \Delta v^s.
\end{alignat}
\end{subequations}
Under the steady-state condition ($\partial/\partial t \rightarrow 0$), the capacitors and inductors in Fig. \ref{3} become open and short circuits respectively. The steady-state form in Eq. \eqref{steady_state_TL} is equivalent to our prior time-independent semiclassical equations with four electrochemical potentials \cite{Sayed_SciRep_2016} and all our previous results can be reproduced using Eq. \eqref{steady_state_TL}.
We first derive a resistance matrix for a three terminal setup (two charge and one spin) with potentiometric contacts (see Fig. \ref{4}) and assuming reflection with spin-flip to be the dominant scattering mechanism in the channel. We present dc simulation results on charge-spin interconversion in the SML channel using the full model (Eqs. \eqref{charge_TL_cont} and \eqref{spin_TL_cont}) in SPICE and compare with the results from the resistance matrix. We then derive a simple expression for a parameter that has been widely used to quantify inverse Rashba-Edelstein effect (IREE) in 2D channels. We compare the expression with available experiments on diverse materials as well as dc SPICE simulation results using the full model.
\subsection{Resistance Matrix for Potentiometric Setup}
\begin{figure}
\includegraphics[width=0.49 \textwidth]{Figure4.png}
\caption{(a) Setup to observe charge current $I_c$ induced spin voltage $v_s$ in the SML channel. Charge terminal of contact 3 is kept open and spin terminals of contacts 1 and 2 are grounded. (b) $v_s/I_c$ vs. $p_0$ for $i^s_{tot}=0$ from SPICE simulation and comparison with Eq. \eqref{Hong_Eqn}. (c) Setup to observe spin current $i^s_{tot}$ induced charge voltage difference $\Delta V_c$ across the SML channel. Charge terminal of contact 3 is kept open and spin terminals of contacts 1 and 3 are grounded. (d) $\Delta V_c/i^s_{tot}$ vs. $p_0$ for $I_c=0$ from SPICE simulation and comparison with Eq. \eqref{Hong_Inv}. The SPICE setup is shown in Fig. \ref{3_7} of Appendix \ref{App_Hong}. Parameters: $\alpha=2/\pi$, $G_0\approx0.05G_B$, $M+N=100$, and scattering rate per unit mode $=0.04$ per lattice point. We assumed reflection with spin-flip to be the dominant scattering process in the channel.}\label{4}
\end{figure}
We consider a structure with three NM contacts ($p_f=0$) on top of a SML channel, as shown in Fig. \ref{4}. Contacts 1 and 2 are the charge terminals and contact 3 is the spin terminal with no charge current flowing through it (i.e. $i^c=0$). We start from Eq. \eqref{steady_state_TL} and make two assumptions to derive the resistance matrix: (i) the contacts are potentiometric, i.e. the contact conductance per unit length $G_0$ is low enough that the following condition is satisfied
\begin{equation}
\label{pot_cond}
\dfrac{1}{\lambda},\dfrac{1}{\lambda_0}, \dfrac{1}{\lambda_r}\gg \dfrac{G_0 R_B}{4},
\end{equation}
and (ii) the reflection with spin-flip is the dominant scattering mechanism in the channel. The details of the derivation are given in Appendix \ref{App_Rmat}.
The resistance matrix is given by
\begin{equation}
\label{R_Mat}
\left\{ {\begin{array}{*{20}{c}}
\Delta V_c\\
{{v_{s}}}
\end{array}} \right\} = \left[ {\begin{array}{*{20}{c}}
{\dfrac{R_B L}{{\lambda }}}+\dfrac{2}{G_0''}&{ - \dfrac{{\alpha {p_0} R_B}}{{2}}}\\
{\dfrac{{\alpha {p_0} R_B}}{{2}}}&{ \dfrac{{{\alpha ^2}}}{{{G_0}^\prime }}}
\end{array}} \right]\left\{ {\begin{array}{*{20}{c}}
{{I_c}}\\
{i_{tot}^s}
\end{array}} \right\},
\end{equation}
where $\Delta V_c$ is the charge voltage difference between contacts 1 and 2, $I_c$ is the charge current flowing in the channel with length $L$, $v_s$ is the spin voltage at contact 3, $i^s_{tot}=i^sL$ is the spin current at contact 3, contacts 1 and 2 have equal conductance $G_0''$, and $G_0'=G_0 L$ is the contact conductance of contact 3. Eq. \eqref{R_Mat} is similar to that previously reported in Ref. \cite{Hong_SciRep_2016}, with corrections in the diagonal components. The diagonal components $(1,1)$ and $(2,2)$ of the matrix represent the charge resistance between contacts 1 and 2 and the spin resistance at contact 3, respectively.
Note that Eq. \eqref{R_Mat} is derived under the assumption that reflection with spin-flip is the dominant scattering mechanism. In the presence of other scattering mechanisms, the basic structure of Eq. \eqref{R_Mat} remains the same, however, additional factors related to scattering rates multiply each of the components in the matrix (see Appendix \ref{App_Rmat}).
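As a numerical illustration, the short Python sketch below (with arbitrary, non-material-specific values for $R_B$, $p_0$, and the contact conductances) builds the resistance matrix of Eq. \eqref{R_Mat} and evaluates its two open-circuit conditions: the spin voltage generated by a charge current at $i^s_{tot}=0$, and the charge voltage generated by a spin current at $I_c=0$, which are reciprocal up to a sign:

```python
import numpy as np

# Illustrative (not material-specific) parameters
alpha = 2 / np.pi       # angular averaging factor
p0 = 0.5                # degree of spin-momentum locking
R_B = 1.0               # ballistic resistance, R_B = 1/G_B
L, lam = 10.0, 2.0      # channel length and mean free path
G0pp, G0p = 0.05, 0.05  # contact conductances G_0'' and G_0'

# Resistance matrix of Eq. (R_Mat): {dV_c, v_s} = R @ {I_c, i_s_tot}
R = np.array([
    [R_B * L / lam + 2 / G0pp, -alpha * p0 * R_B / 2],
    [alpha * p0 * R_B / 2,      alpha**2 / G0p      ],
])

# Direct effect: open-circuit spin voltage for a unit charge current
v_s = R[1] @ [1.0, 0.0]
assert np.isclose(v_s, alpha * p0 * R_B / 2)    # Eq. (Hong_Eqn), G_B = 1/R_B

# Inverse effect: open-circuit charge voltage for a unit spin current
dV_c = R[0] @ [0.0, 1.0]
assert np.isclose(dV_c, -v_s)                   # reciprocal, opposite sign
```

The off-diagonal antisymmetry of the matrix is what produces the sign-reversed reciprocity between the direct and inverse effects.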
\subsection{Direct Effects: Charge Current Induced Spin Voltage}
For a charge current $I_c$ flowing through the SML channel, the open circuit spin voltage $v_s$ at contact 3 (i.e. $i^s_{tot}=0$) can be derived from Eq. \eqref{R_Mat} as
\begin{equation}
\label{Hong_Eqn}
v_s = \dfrac{\alpha p_0}{2 G_B}I_c.
\end{equation}
Eq. \eqref{Hong_Eqn} was originally proposed in Ref. \cite{Hong_PRB_2012} and was later confirmed by a number of experiments \cite{JonkerNatNano2014, KLWangNanoLett2014, DasNanoLett2015, SamarthPRB2015, YPChenSciRep2015, Samarth_PRB_2015, YoichiPRB2016, Koo_New} on different TI materials using potentiometric ferromagnetic contacts. Note that the spin voltage measured using a contact with higher conductance will be lower than Eq. \eqref{Hong_Eqn} predicts, due to the current shunting effect in the contact \cite{Sayed_SciRep_2016}. The full model in Eqs. \eqref{charge_TL_cont} and \eqref{spin_TL_cont} takes such contact-conductance effects into account.
We have simulated the structure in Fig. \ref{4}(a) by connecting the full two-component circuit models given in Figs. \ref{1} and \ref{3} in a distributed manner using the standard circuit rules. The details of the simulation are discussed in Appendix \ref{App_Hong}. The simulation results for $v_s / I_c$ as a function of $p_0$ are shown in Fig. \ref{4}(b) and are in good agreement with Eq. \eqref{Hong_Eqn}.
\subsection{Inverse Effects: Spin Current Induced Charge Voltage}
The reciprocal of the effect in Eq. \eqref{Hong_Eqn} is the spin current $i^s_{tot}$ induced open circuit charge voltage difference $\Delta V_c$ across the SML channel \cite{Hong_SciRep_2016}. For $I_c=0$, Eq. \eqref{R_Mat} gives
\begin{equation}
\label{Hong_Inv}
\Delta V_c = - \dfrac{\alpha p_0}{2 G_B}i^s_{tot}.
\end{equation}
The reciprocal relation between Eqs. \eqref{Hong_Eqn} and \eqref{Hong_Inv}, including the negative sign, has been observed experimentally \cite{SamarthPRB2015, Pham_NanoLett_2016, Koo_New}. Note that Eq. \eqref{Hong_Inv} is exact for a potentiometric contact. For a contact with higher conductance, $\Delta V_c$ will be lower than Eq. \eqref{Hong_Inv} predicts due to current shunting, by the same amount as for the direct effect (Eq. \eqref{Hong_Eqn}) \cite{Sayed_SciRep_2016}; this can be analyzed using the full model in Eqs. \eqref{charge_TL_cont} and \eqref{spin_TL_cont}. We have simulated the structure shown in Fig. \ref{4}(c) using the full two-component models in Figs. \ref{1} and \ref{3}. The simulation results for $\Delta V_c / i^s_{tot}$ as a function of $p_0$ are shown in Fig. \ref{4}(d) and show good agreement with Eq. \eqref{Hong_Inv}. The details of the simulation are discussed in Appendix \ref{App_Hong}.
\begin{table}
\begin{center}
\caption{Estimated Material Parameters.}
\label{table_IREE}
\begin{tabular}{||c | c | c | c | c ||}
\hline
Material & $\lambda$ [nm] & $p_0$ & $\lambda_{IREE}$ [nm] & $\lambda_{IREE}$ [nm]\\&&&(Eq. \eqref{IREE_length})&(measured)\\ [0.5ex]
\hline\hline
Ag$|$Bi & 22.6$^\dag$ & 0.05$^{\dag\dag}$ & 0.36 & 0.3 \cite{Fert_NatComm_2016}\\
\hline
Cu$|$Bi & 0.88$^\dag$ & 0.05$^{\dag\dag}$ & 0.014 & 0.009 \cite{IssaCuBi2016}\\
\hline
LAO$|$STO & 180.8$^\dag$ & 0.0616$^{\dag\dag}$ & 3.55 & 6.4 \cite{FertLAOSTO2016}\\
\hline
Bi$_2$Se$_3$ & 3.2$^\dag$ & 0.025$^{\ddagger}$ & 0.026 & 0.035 \cite{SmarthPRL2016}\\
\hline
\end{tabular}
\end{center}
\begin{flushleft}
$^\dag${\footnotesize Estimated from the measured sheet resistance of the samples.}\\$^{\dag\dag}${\footnotesize Estimated from the Rashba coupling coefficient and the Fermi velocity of the materials.}\\$^{\ddagger}${\footnotesize Estimated from the spin-pumping induced voltage and Eq. \eqref{Hong_Inv}.}\\{\footnotesize The estimations are discussed in detail in Appendix \ref{App_IREE}.}
\end{flushleft}
\end{table}
\subsection{Inverse Rashba-Edelstein Effect (IREE) Length}
The phenomenon described by Eq. \eqref{Hong_Inv} is often known as the inverse Rashba-Edelstein effect (IREE) for 2D channels. IREE is often quantified by a parameter called the IREE length, defined as
\begin{equation}
\label{defIREE}
\lambda_{IREE}=\dfrac{J_c}{J_s},
\end{equation}
where $J_c$ is the longitudinal short circuit charge current density in A/m induced by the injected transverse spin current density $J_s$ in A/m$^2$.
We derive a simple expression for the IREE length starting from the first row of Eq. \eqref{R_Mat} under the short-circuit condition between contacts 1 and 2 (i.e. $v_{c1}=v_{c2}$), assuming a channel resistance large compared to the contact resistance (i.e. $L/(G_B\lambda)\gg2/G_0''$). The result is
\begin{equation}
\label{IREE_length1}
\lambda_{IREE} = \dfrac{\alpha p_0 \lambda}{2}.
\end{equation}
The derivation is given in Appendix \ref{App_Rmat}. Note that $p_0$ and $\lambda$ are two completely independent parameters.
Here, $\alpha$ is an angular averaging factor that can vary between 0 and 1 depending on the angular variation of the spin polarization of the eigenstates and on the details of the scattering mechanism. We assume a distribution in which the angle between the $z$-axis and the spin polarization of the eigenstates with a particular group velocity sign ($+$ or $-$) varies between $-\pi/2$ and $+\pi/2$, which yields $\alpha=2/\pi$. Thus, from Eq. \eqref{IREE_length1} we have the following expression
\begin{equation}
\label{IREE_length}
\lambda_{IREE}=\dfrac{p_0\lambda}{\pi},
\end{equation}
which has been previously reported in Ref. \cite{Sayed_SciRep_2016}. We have simulated the setup in Fig. \ref{5}(a) in order to estimate $\lambda_{IREE}$ over diverse ranges of $p_0$ and $\lambda$. The setup in Fig. \ref{5}(a) is the same as that in Fig. \ref{4}(c), except that we observe the short-circuit charge current $I_c$ between the charge terminals of contacts 1 and 2 induced by the spin current $i^s_{tot}$ injected through the spin terminal of contact 3.
\begin{figure}
\includegraphics[width=0.4 \textwidth]{Figure5.png}
\caption{ (a) Setup similar to that in Fig. \ref{4}(c), but the short-circuit charge current between contacts 1 and 2 is observed. (b) Inverse Rashba Edelstein effect (IREE) length ($\lambda_{IREE}$) vs. $p_0\lambda$ from SPICE simulation and comparison to Eq. \eqref{IREE_length} and experiments on different interfaces with Rashba SOC: Ag$|$Bi \cite{Fert_NatComm_2016}, Cu$|$Bi \cite{IssaCuBi2016}, LaAlO$_3$$|$SrTiO$_3$ (LAO$|$STO) \cite{FertLAOSTO2016}, and Bi$_2$Se$_3$ \cite{SmarthPRL2016}. The back scattering length $\lambda$ is estimated from the measured sheet resistance or resistivity. The degree of spin-momentum locking $p_0$ is estimated from the Rashba coupling coefficient and the Fermi velocity using Eq. \eqref{RashbaSML}. The details of the estimations are given in Appendix \ref{App_IREE}. SPICE simulation parameters: $\alpha=2/\pi$, $G_0\approx0.05G_B$, and $M+N=100$. We assumed reflection with spin-flip to be the dominant scattering process in the channel.}\label{5}
\end{figure}
We have compared the simulation results with Eq. \eqref{IREE_length} as well as available measurements from spin-pumping and lateral spin valve experiments on different Rashba interfaces: Ag$|$Bi \cite{Fert_NatComm_2016}, Cu$|$Bi \cite{IssaCuBi2016}, LaAlO$_3$$|$SrTiO$_3$ (LAO$|$STO) \cite{FertLAOSTO2016}, and Bi$_2$Se$_3$ \cite{SmarthPRL2016}. The comparison is shown in Fig. \ref{5}(b). We have estimated $\lambda$ from the reported sheet resistance or resistivity of the samples. $p_0$ for Ag$|$Bi, Cu$|$Bi, and LAO$|$STO are estimated using the Rashba coupling coefficient ($v_0$) and the Fermi velocity ($v_F$) quoted in the literature, using the following expression:
\begin{equation}
\label{RashbaSML}
p_0 = \dfrac{{{v_0}}}{{\sqrt {v_0^2 + v_F^2}}}.
\end{equation}
For Bi$_2$Se$_3$, we have estimated $p_0$ from the spin-pumping induced inverse voltage and Eq. \eqref{Hong_Inv}. Note that the $p_0$ estimated for the sample in Ref. \cite{SmarthPRL2016} is much lower than in previous reports \cite{JonkerNatNano2014, KLWangNanoLett2014, DasNanoLett2015, SamarthPRB2015, YPChenSciRep2015, Samarth_PRB_2015, YoichiPRB2016}, which may be due to the presence of a large number of parallel channels. We expect a higher $\lambda_{IREE}$ for Bi$_2$Se$_3$ samples with higher $p_0$.
The derivation of Eq. \eqref{RashbaSML} from the Rashba Hamiltonian is shown in Appendix \ref{App_IREE}. The estimates are summarized in Table \ref{table_IREE} and the details are given in Appendix \ref{App_IREE}. These two independent estimates of $p_0$ and $\lambda$, when applied to Eq. \eqref{IREE_length}, agree very well with the experimentally reported $\lambda_{IREE}$, as shown in Fig. \ref{5} and Table \ref{table_IREE}.
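The comparison in Table \ref{table_IREE} can be reproduced directly from Eq. \eqref{IREE_length}. The Python sketch below uses the $\lambda$ and $p_0$ values estimated above and checks the predicted $\lambda_{IREE}$ against the measured values; the agreement is within roughly a factor of two for all four materials, consistent with Fig. \ref{5}(b):

```python
import math

# p0 from the Rashba coefficient v0 and Fermi velocity vF, Eq. (RashbaSML)
def p0_rashba(v0, vF):
    return v0 / math.sqrt(v0**2 + vF**2)

assert 0 <= p0_rashba(0.1, 1.0) <= 1   # p0 is bounded by 1 by construction

# (lambda [nm], p0, measured lambda_IREE [nm]) from Table (table_IREE)
samples = {
    "Ag|Bi":   (22.6,  0.05,   0.3),
    "Cu|Bi":   (0.88,  0.05,   0.009),
    "LAO|STO": (180.8, 0.0616, 6.4),
    "Bi2Se3":  (3.2,   0.025,  0.035),
}

for name, (lam, p0, measured) in samples.items():
    predicted = p0 * lam / math.pi     # Eq. (IREE_length), with alpha = 2/pi
    print(f"{name:8s} predicted {predicted:.3f} nm, measured {measured} nm")
    assert 0.5 < predicted / measured < 2.0  # agreement within a factor of two
```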
\section{Time-Dependent Transport Results}
In this section, we use the full time-dependent model in Eqs. \eqref{charge_TL} and \eqref{spin_TL} to discuss a well-known phenomenon called spin-charge separation. Spin-charge separation in the presence of spin-orbit coupling has been a controversial subject in the past. Our model shows that charge and spin propagate with two distinct velocities, and that this separation persists even in materials with spin-orbit coupling exhibiting spin-momentum locking ($p_0\neq 0$). However, we show that the lower velocity signal is purely spin, while the higher velocity signal is largely charge with an additional spin component proportional to $p_0$.
\subsection{Velocities of Charge and Spin Signals}
The velocities for charge and spin signals can be derived by finding the eigenvalues of Eqs. \eqref{charge_TL} and \eqref{spin_TL} assuming the low loss limit and constant coefficients. In addition, we find the corresponding eigenvectors as well to analyze the coupling between spin and charge in the channel due to SML. The details of the derivation are given in Appendix \ref{AppA}.
The lower velocity eigenvalue is given by
\begin{equation}
\label{spin_vel}
v_{g,s}=\pm\dfrac{1}{\sqrt{L_K C_Q}}= \pm\langle {v}_x\rangle,
\end{equation}
which is determined by the quantum capacitance $C_Q$ and the kinetic inductance $L_K$, and equals the thermally averaged electron velocity $\langle{v}_x\rangle$. The corresponding eigenvector is given by
\begin{equation}
\label{spin_eig}
\left\{ {\begin{array}{*{20}{c}}
{{V_c}}\\
{{I_c}}
\end{array}} \right\} = \left\{ {\begin{array}{*{20}{c}}
0\\
0
\end{array}} \right\}, \text{ and } \left\{ {\begin{array}{*{20}{c}}
{{V_s}}\\
{{I_s}}
\end{array}} \right\} = \left\{ {\begin{array}{*{20}{c}}
{ \pm \alpha^2 \sqrt {\dfrac{{{L_K}}}{{{C_Q}}}} }\\
1
\end{array}} \right\},
\end{equation}
which shows that the lower velocity signal is purely spin and no charge accompanies the signal, even in channels with SML, i.e. $p_0 \neq 0$.
The higher velocity eigenvalue is given by
\begin{equation}
\label{charge_vel}
v_{g,c}=\pm\dfrac{1}{\sqrt{L_{eff} C_{eff}}},
\end{equation}
where $C_{eff}$ is a series combination of $C_E$ and $C_Q$ and $L_{eff}$ is a series combination of $L_M$ and $L_K$. The corresponding eigenvector is given by
\begin{equation}
\label{charge_eig}
\begin{array}{*{20}{c}}
\left\{ {\begin{array}{*{20}{c}}
{{V_c}}\\
{{I_c}}
\end{array}} \right\} = \left\{ {\begin{array}{*{20}{c}}
{ \pm \sqrt {\dfrac{{{L_{eff}}}}{{{C_{eff}}}}} }\\
1
\end{array}} \right\}, \text{ and}\\
\left\{ {\begin{array}{*{20}{c}}
{{V_s}}\\
{{I_s}}
\end{array}} \right\} = \dfrac{{{p_0}}}{{v_{g,c}^2 - v_{g,s}^2}}\left\{ {\begin{array}{*{20}{c}}
{ - \alpha^2 {g_m}\dfrac{{{L_M}}}{{{C_Q}}}v_{g,c}^2 + {r_m}v_{g,s}^2}\\
{ \pm {{{v_{g,c}}{v_{g,s}^2}}}{{{C_Q}}}\left( { - {g_m}\dfrac{{{L_M}}}{{{C_Q}}} + \dfrac{r_m}{\alpha^2}} \right)}
\end{array}} \right\},
\end{array}
\end{equation}
which shows that the higher velocity signal is largely charge, accompanied by an additional spin component proportional to $p_0$, which has not been discussed before. This additional spin component vanishes in a NM channel where there is no SML (i.e. $p_0=0$), and the signal is then purely charge. We leave further evaluation of this high velocity spin component as future work.
The quantum capacitance $C_Q$ is proportional to the total number of modes $(M+N)$ in the channel (see Eq. \eqref{cq}) while the kinetic inductance $L_K$ is inversely proportional to $M+N$ (see Eq. \eqref{lk}). $M+N$ is proportional to the channel width (for 2D) or cross-sectional area (for 3D) \cite{Datta_LNE_2012}.
For a conductor with very large cross-section, we may have $C_E\ll C_Q$ and $L_M\gg L_K$ which is the standard transmission line limit. In this limit, the velocity in Eq. \eqref{charge_vel} becomes
\begin{equation*}
c=\dfrac{1}{\sqrt{L_MC_E}},
\end{equation*}
which is the velocity predicted by standard transmission line theory and can be as high as the speed of light.
For a conductor with very small cross-section, like quantum wires, we may have $C_E\gg C_Q$ and $L_M\ll L_K$. In this limit, the velocity in Eq. \eqref{charge_vel} becomes the thermally averaged electron velocity, given by
\begin{equation*}
\langle{v}_x\rangle=\dfrac{1}{\sqrt{L_KC_Q}},
\end{equation*}
which is the same as Eq. \eqref{spin_vel}. Note that the two velocity eigenvalues are equal in this limit.
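The two limits above follow directly from the series combinations entering Eq. \eqref{charge_vel}. A minimal numeric check (with arbitrary normalized circuit parameters chosen only to make each limit apparent) is:

```python
import math

def v_charge(L_M, C_E, L_K, C_Q):
    """Eq. (charge_vel): series inductance (sum) and series capacitance
    (reciprocal sum) set the charge-signal velocity."""
    L_eff = L_M + L_K
    C_eff = C_E * C_Q / (C_E + C_Q)
    return 1 / math.sqrt(L_eff * C_eff)

def v_spin(L_K, C_Q):
    """Eq. (spin_vel): set only by kinetic inductance and quantum capacitance."""
    return 1 / math.sqrt(L_K * C_Q)

# Large cross-section: C_E << C_Q and L_M >> L_K  ->  v_gc -> 1/sqrt(L_M C_E)
big = v_charge(L_M=1.0, C_E=1.0, L_K=1e-6, C_Q=1e6)
assert math.isclose(big, 1 / math.sqrt(1.0 * 1.0), rel_tol=1e-3)

# Small cross-section: C_E >> C_Q and L_M << L_K  ->  v_gc -> v_gs
small = v_charge(L_M=1e-6, C_E=1e6, L_K=1.0, C_Q=1.0)
assert math.isclose(small, v_spin(L_K=1.0, C_Q=1.0), rel_tol=1e-3)
```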
\subsection{Spin-Charge Separation}
From Eqs. \eqref{spin_vel} and \eqref{charge_vel} we have
\begin{equation}
\dfrac{v_{g,s}}{v_{g,c}}=\sqrt{\dfrac{1+\delta \dfrac{C_Q}{C_E}}{1+ \dfrac{C_Q}{C_E}}},
\end{equation}
where $\delta = \left(L_M C_E\right)/\left(L_K C_Q\right) = \left(\langle v_x\rangle/c\right)^2$ (see Eq. 6 of Ref. \cite{Sayeef_IEEE_2005}), which is usually much less than one, making the spin signal slower than the charge signal:
\begin{equation}
\label{SCS}
v_{g,s} < v_{g,c}.
\end{equation}
This results in spin-charge separation, which is well-established for channels without SML (i.e. $p_0=0$) from Luttinger liquid theory (see, for example, Refs. \cite{HalperinJAP2007, PoliniPRL2007, SchroerPRL2014, Burke_TNANO_2002} and references therein). The electron charge excites $C_E$ and $L_M$; hence the charge signal velocity is determined by $C_{eff}$ and $L_{eff}$, as given by Eq. \eqref{charge_vel}. A pure spin signal, however, does not excite $C_E$ and $L_M$, so its velocity is determined by $C_Q$ and $L_K$ only, as given by Eq. \eqref{spin_vel}. Similar arguments have been made in the past \cite{Burke_TNANO_2002, Burke_TNANO_2003} in the context of carbon nanotubes without SOC.
Note that the argument in Eq. \eqref{SCS} is independent of $p_0$ (see Appendix \ref{AppA}), which indicates that the spin-charge separation persists even in channels with SOC exhibiting SML (i.e. $p_0\neq 0$). Similar arguments have been made previously for channels with SOC \cite{BalseiroPRL2002, CalzonaPRB2015, Stauber_PRB_2013}, although it has also been argued that the presence of SOC may destroy the spin-charge separation \cite{BarnesPRL2000}.
In SML channels, an additional spin signal proportional to $p_0$ accompanies the charge signal at the charge velocity $v_{g,c}$, as seen from Eq. \eqref{charge_eig}. This additional spin component is induced by the instantaneous voltage drop across $L_M$ and the instantaneous current through $C_E$ of the channel. However, the lower velocity signal (see Eq. \eqref{spin_vel}) remains purely spin, since the spin signal does not excite $L_M$ and $C_E$ \cite{Burke_TNANO_2002,Sayeef_IEEE_2005} to induce a similar accompanying charge component.
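The velocity ratio identity above can also be checked numerically. The sketch below (arbitrary normalized circuit parameters with $\delta<1$) confirms that the closed form with $\delta=(L_MC_E)/(L_KC_Q)$ matches the direct ratio of the two eigenvalue velocities and that the spin signal is indeed the slower one:

```python
import math

# Arbitrary normalized circuit parameters chosen so that delta < 1
L_M, C_E, L_K, C_Q = 0.4, 0.3, 1.0, 2.0

v_gs = 1 / math.sqrt(L_K * C_Q)                              # Eq. (spin_vel)
v_gc = 1 / math.sqrt((L_M + L_K) * C_E * C_Q / (C_E + C_Q))  # Eq. (charge_vel)

delta = (L_M * C_E) / (L_K * C_Q)                  # (<v_x>/c)^2
ratio = math.sqrt((1 + delta * C_Q / C_E) / (1 + C_Q / C_E))

assert math.isclose(v_gs / v_gc, ratio)  # closed form matches the eigenvalues
assert v_gs < v_gc                       # spin-charge separation, Eq. (SCS)
```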
\section{Transmission Line Model from Boltzmann Formalism}
\label{sec_semi}
In this section, we derive the transmission line model in Eqs. \eqref{charge_TL_cont} and \eqref{spin_TL_cont} starting from the time-dependent Boltzmann transport equation under several clearly stated assumptions, which allow us to obtain the simple expressions for the model parameters stated in Eq. \eqref{params}. Several of our predictions for steady-state \cite{Sayed_SciRep_2016} have already received experimental support \cite{Pham_APL_2016,Pham_NanoLett_2016,Koo_New} suggesting that the assumptions are reasonable, but they could be revisited as the field evolves.
\subsection{Boltzmann Transport Equation}
We assume a structure where the spatial variations and the applied fields are along $\hat{x}$-direction. The time-dependent Boltzmann transport equation is given by
\begin{equation}
\label{BTE_1D}
\begin{aligned}
\frac{\partial f}{{\partial t}} + {v}_x \frac{\partial f}{{\partial x}} + {F}_x \frac{\partial f}{{\partial p_x}} = \displaystyle \sum_{\vec{p'}, s '} S(\vec{p},s\leftrightarrow\vec{p'},s') \left(f-f'\right),
\end{aligned}
\end{equation}
where we have assumed elastic scattering, so that the scattering rates are the same in both directions, i.e.
\begin{equation*}
S(\vec{p},s\rightarrow\vec{p'},s')=S(\vec{p'},s'\rightarrow\vec{p},s)
\equiv S(\vec{p},s\leftrightarrow\vec{p'},s').
\end{equation*}
Here, $f\equiv f(x,t,\vec{p},s)$ is the occupation factor of a state for a particular position $x$, time $t$, momentum $\vec{p}$, and spin index $s=\pm1$, $f'\equiv f(x,t,\vec{p'},s')$ with momentum $\vec{p'}$ and spin index $s'=\pm1$, ${v}_x=\partial E / \partial p_x$ is the $x$-component of the group velocity, ${F}_x=-\partial E/\partial x$ is the force on electrons along the $\hat{x}$-direction, and $E$ is the total energy. Note that the spin indices $+1$ and $-1$ correspond to the up ($U$) and down ($D$) spin polarized states for a particular $\vec{p}$.
\subsection{Occupation Factor}
We write the occupation factor $f$ in terms of an electrochemical potential $\mu\equiv\mu(x,t,\vec{p},{s})$, in the form
\begin{equation}
\label{occup_fact}
f(x,t,\vec{p},{s})=\dfrac{1}{1 + \exp\left(\dfrac{E(x,t,\vec{p},{s})-\mu(x,t,\vec{p},s)}{k_BT} \right)},
\end{equation}
where $k_B$ is the Boltzmann constant and $T$ is the temperature.
\subsection{Linearization}
We apply the variable transform $\xi= E-\mu$ to the left hand side of Eq. \eqref{BTE_1D}. On the right hand side of Eq. \eqref{BTE_1D}, we expand both $f$ and $f'$ in Taylor series around
\begin{equation}
\label{eq_occup_fact}
f_0 = \dfrac{1}{1+\exp\left(\dfrac{(E(x,\vec{p},{s})-\mu_0)}{k_BT}\right)},
\end{equation}
with a constant electrochemical potential $\mu_0$, and apply the linear response approximation. Thus Eq. \eqref{BTE_1D} can be written as (see Appendix \ref{AppB} for the details of the derivation)
\begin{equation}
\label{semi_1D_full}
\begin{array}{l}
\left(-\dfrac{\partial f_0}{\partial E}\right)\left(\dfrac{\partial \mu}{{\partial t}} - \dfrac{\partial E }{{\partial t}} + {v}_x \dfrac{\partial \mu}{{\partial x}} + {F}_x \dfrac{\partial \mu}{{\partial p_x}}\right) \\\quad\quad = -\displaystyle \sum_{\vec{p'},s'}S(\vec{p},s\leftrightarrow\vec{p'},s') \left(-\dfrac{\partial f_0}{\partial E}\right){\left(\mu - \mu ' \right)}.
\end{array}
\end{equation}
We assume that there are no internal fields in the present discussion. Hence, $F_x$ comes from the applied voltage, and the term $F_x(\partial \mu /\partial p_x)$ is of higher order in the applied voltage, so it can be neglected in the linear response regime \cite{Datta_LNE_2012}. Thus Eq. \eqref{semi_1D_full} reduces to
\begin{equation}
\label{semi_1D}
\begin{array}{l}
\left(-\dfrac{\partial f_0}{\partial E}\right)\left(\dfrac{\partial \mu}{{\partial t}} - \dfrac{\partial E }{{\partial t}} + {v}_x \dfrac{\partial \mu}{{\partial x}} \right) \\\quad\quad= -\displaystyle \sum_{\vec{p'},s'}S(\vec{p},s\leftrightarrow\vec{p'},s') \left(-\dfrac{\partial f_0}{\partial E}\right){\left(\mu - \mu ' \right)}.
\end{array}
\end{equation}
The term $\partial E / \partial t$ can be evaluated from the dispersion relation of a given Hamiltonian in the semiclassical approximation as discussed below.
\subsection{Dispersion Relation}
We start from the following Rashba Hamiltonian
\begin{equation}
\label{Rashba}
\mathcal{H}=\dfrac{|\vec p -q \vec A|^2}{2m}I_{2\times2}-v_0 \left(\vec \sigma \times (\vec p-q\vec A)\right)\cdot\hat{y} + U_E I_{2\times2}.
\end{equation}
Here, $I_{2\times2}$ is the $2\times2$ identity matrix, $\vec p$ and $\vec{A}$ are the momentum and the magnetic vector potential, respectively, in the $z$-$x$ plane, $\vec{\sigma}$ is the vector of Pauli matrices, $v_0$ is the Rashba coefficient, $U_E$ is the electrostatic potential, $m$ is the electron mass, and $q$ is the electron charge.
The eigenenergies of Eq. \eqref{Rashba} are given by
\begin{equation}
\label{E_eig}
E(\vec{p},{s})=\dfrac{|\vec{p}-q\vec{A}|^2}{2m}-s\, v_0 {|\vec{p}-q\vec{A}|} + U_E,
\end{equation}
with $\vec{p}$ confined to the $z$-$x$ plane.
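The dispersion in Eq. \eqref{E_eig} can be checked by direct diagonalization of Eq. \eqref{Rashba}. The short numeric sketch below (normalized units, with $\vec{A}=0$ and $U_E=0$ for simplicity) confirms that the two eigenvalues of the $2\times2$ Rashba Hamiltonian are $|\vec{p}|^2/2m \mp v_0|\vec{p}|$:

```python
import numpy as np

# Normalized units; A = 0 and U_E = 0 for simplicity
m, v0 = 1.0, 0.7
px, pz = 0.6, -0.8            # momentum in the z-x plane (p_y = 0)
p = np.hypot(px, pz)          # |p| = 1 for this choice

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

# (sigma x p) . y_hat = sigma_z p_x - sigma_x p_z for p in the z-x plane
H = (p**2 / (2 * m)) * I2 - v0 * (sz * px - sx * pz)

E = np.linalg.eigvalsh(H)     # eigenvalues in ascending order
expected = np.sort([p**2 / (2 * m) - s * v0 * p for s in (+1, -1)])
assert np.allclose(E, expected)   # matches Eq. (E_eig)
```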
We assume that $U_E$ and $\vec{A}$ vary slowly with $x$ and $t$, so that in the semiclassical approximation we have
\begin{equation}
\label{E_rel}
E(x,t,\vec{p},{s})=\dfrac{|\vec{p}-q\vec{A}(x,t)|^2}{2m}-s\, v_0 {|\vec{p}-q\vec{A}(x,t)|} + U_E(x,t).
\end{equation}
Differentiating Eq. \eqref{E_rel} with respect to $t$ yields
\begin{equation}
\label{dE_rel}
\dfrac{\partial E}{\partial t} = \vec{v}\cdot \left(-q\dfrac{\partial \vec{A}}{\partial t}\right) + \dfrac{\partial U_E}{\partial t},
\end{equation}
where $\vec{v}=\nabla_{\vec{p}}E$ (see Appendix \ref{AppC} for the derivation).
The electrostatic potential $U_E$ and the vector magnetic potential $\vec A$ on the structure of interest can be evaluated from the theory of electromagnetism.
\subsection{From Potentials to Charge and Current}
The electrostatic potential $U_E$ is related to the total charge $Q$ in the channel by the electrostatic capacitance $C_E$ of the structure under consideration, given by
\begin{equation}
\dfrac{U_E}{q}=\dfrac{Q}{C_E}.
\end{equation}
We assume that the charge current $I_c$ flows along the $\hat{x}$-direction and is uniform in the channel. The magnetic vector potential $\vec{A}$ is related to $I_c$ by the magnetic inductance $L_M$ of the channel, given by
\begin{equation}
\label{vec_mag_assum}
\vec{A}\equiv \hat{x}A_x = \hat{x} L_MI_c.
\end{equation}
Thus Eq. \eqref{dE_rel} can be written as
\begin{equation}
\label{dE_rel2}
\dfrac{\partial E }{\partial t} = -q v_x L_M \dfrac{\partial I_c}{\partial t} + \dfrac{q}{C_E}\dfrac{\partial Q}{\partial t}.
\end{equation}
We combine Eq. \eqref{semi_1D} with Eq. \eqref{dE_rel2} to get
\begin{equation}
\label{semi_1mu}
\begin{aligned}
\left(\dfrac{\partial f_0}{\partial E}\right)&\left(\frac{\partial \mu}{{\partial t}} + {v}_x \frac{\partial \mu}{{\partial x}} + q v_x L_M \dfrac{\partial I_c}{\partial t} - \dfrac{q}{C_E}\dfrac{\partial Q}{\partial t}\right) \\& = -\displaystyle \sum_{\vec{p'},s'}S(\vec{p},s\leftrightarrow\vec{p'},s') \left(\dfrac{\partial f_0}{\partial E}\right){\left(\mu - \mu ' \right)}.
\end{aligned}
\end{equation}
\subsection{Classification}
We classify all $\vec{p},s$ states into four groups based on the sign of $v_x$ ($+$ or $-$) and the spin index $s=\pm1$, given by
\begin{equation}
\label{classification}
\begin{aligned}
\Re:
\begin{cases}
&U^+\in\{\vec{p},s\,\,\,|\,\,\,v_x>0,\,\,\,s=+1\},\\
&D^-\in\{\vec{p},s\,\,\,|\,\,\,v_x<0,\,\,\,s=-1\},\\
&U^-\in\{\vec{p},s\,\,\,|\,\,\,v_x<0,\,\,\,s=+1\},\text{ and}\\
&D^+\in\{\vec{p},s\,\,\,|\,\,\,v_x>0,\,\,\,s=-1\}.
\end{cases}
\end{aligned}
\end{equation}
where $s=+1$ and $-1$ denote up ($U$) and down ($D$) spins with respect to the spin quantization axis defined by $\hat{y}\times\left(\vec{p}-q\vec{A}\right)$, which is different for each direction of $\vec{p}$.
This classification can be mapped onto the two Fermi circles of a Rashba channel (see Fig. \ref{2}(a)). The large circle corresponds to the $U^+$ and $D^-$ groups, which share the same number of modes $n_m(U^+)=n_m(D^-)=M$, satisfying time-reversal symmetry. Similarly, the small circle corresponds to the $U^-$ and $D^+$ groups, which share the same number of modes $n_m(U^-)=n_m(D^+)=N$. Note that the eigenstates belonging to each of the four half Fermi circles in Fig. \ref{2}(a) have an average spin polarization along the $\hat{z}$-direction with an averaging factor of $\alpha = 2/\pi$, which we will use later when writing spin currents and voltages.
\subsection{Averaging}
We define the thermal average of a variable $\psi\equiv\psi(\vec{p},s)$ within each group $\vec{p},s\in\Re$ as
\begin{equation}
\label{th_avg}
\langle\psi\rangle_{\vec{p},s\in\Re} = \dfrac{\displaystyle\sum_{\vec{p},s\in\Re}\left(-\dfrac{\partial f_0}{\partial E}\right)\psi(\vec{p},s)}{\displaystyle\sum_{\vec{p},s\in\Re}\left(-\dfrac{\partial f_0}{\partial E}\right)}.
\end{equation}
We sum both sides of Eq. \eqref{semi_1mu} over all $\vec p$ states within the range $\vec{p},s\in\Re$ in the $z$-$x$ plane as
\begin{equation}
\label{semi_sum}
\begin{aligned}
&\dfrac{D_0(\Re)}{2}\dfrac{\partial \left\langle \mu \right\rangle }{\partial t} + \dfrac{D_0(\Re)}{2}\dfrac{\partial \langle {v_x\mu} \rangle}{\partial x} \\&+ \dfrac{qD_0(\Re)}{2}\langle v_x\rangle L_M \dfrac{\partial I_c}{\partial t}
- \dfrac{q}{C_E}\dfrac{D_0(\Re)}{2}\dfrac{\partial Q}{\partial t} \\&= -\displaystyle \sum_{\vec{p},s\in\Re}\,\, \displaystyle \sum_{\vec{p'},s'}S(\vec{p},s\leftrightarrow\vec{p'},s') \left(-\dfrac{\partial f_0}{\partial E}\right){\left(\mu - \mu ' \right)}.
\end{aligned}
\end{equation}
Here, $D_0(\Re)$ is the thermally averaged density of states within $\vec{p},s\in\Re$ given by
\begin{equation}
\label{DOS}
\dfrac{D_0(\Re)}{2} = \displaystyle \sum_{\vec{p},s\in\Re} \left(-\dfrac{\partial f_0}{\partial E}\right),
\end{equation}
where the factor of 2 appears because we sum over all $s$ states. Note that $D_0(U^+)=D_0(D^-)$ and $D_0(U^-)=D_0(D^+)$ due to time-reversal symmetry.
We make the following assumption in Eq. \eqref{semi_sum}
\begin{equation}
\label{cond}
\langle v_x \mu \rangle \approx \langle v_x \rangle \langle \mu \rangle,
\end{equation}
which yields
\begin{equation}
\label{semi_dos}
\begin{aligned}
&\dfrac{D_0(\Re)}{2}\dfrac{\partial \left\langle \mu \right\rangle }{\partial t} + \dfrac{D_0(\Re)}{2}\langle v_x\rangle\dfrac{\partial \langle {\mu} \rangle}{\partial x} \\&+ \dfrac{qD_0(\Re)}{2}\langle v_x\rangle L_M \dfrac{\partial I_c}{\partial t}
- \dfrac{q}{C_E}\dfrac{D_0(\Re)}{2}\dfrac{\partial Q}{\partial t} \\&= -\displaystyle \sum_{\vec{p},s\in\Re}\,\, \displaystyle \sum_{\vec{p'},s'}S(\vec{p},s\leftrightarrow\vec{p'},s') \left(-\dfrac{\partial f_0}{\partial E}\right){\left(\mu - \mu ' \right)}+\dfrac{i_{ext}}{q}.
\end{aligned}
\end{equation}
Note that the term $i_{ext}/q$ on the right hand side has been added to take into account the total current entering the $\vec{p},s\in\Re$ states of the channel from an external contact.
\begin{figure}
\includegraphics[width=0.35 \textwidth]{Figure2.png}
\caption{(a) Two Fermi circles of a Rashba channel with spin-momentum locking, having opposite spin polarizations at a given energy $E_F$. The left (right) half of each circle represents negative (positive) group velocity. The large circle corresponds to the larger number of modes $M$ in the channel and has a net spin polarization along $+\hat{z}$ ($-\hat{z}$) on the right (left) side. The small circle corresponds to the smaller number of modes $N$ and has a net spin polarization along $-\hat{z}$ ($+\hat{z}$) on the right (left) side. We classify all electronic states into four groups based on the $z$-component of the spin polarization (up ($U$), down ($D$)) and the sign of the $x$-component of the group velocity ($+$, $-$). (b) Assuming high scattering among the states in each individual group, we assign four electrochemical potentials to these four groups: $\mu(U^+)$, $\mu(U^-)$, $\mu(D^+)$, and $\mu(D^-)$. An external contact is modeled as up ($v_u$) and down ($v_d$) spin voltages applied to the up and down states of the channel through up ($g_u$) and down ($g_d$) spin conductances per mode, respectively. Four different currents ($i_U^+$, $i_D^+$, $i_U^-$, and $i_D^-$) enter the four different groups in the channel. The contact can be either a normal metal ($g_u = g_d$) or a ferromagnet ($g_u \neq g_d$).}\label{2}
\end{figure}
The number of modes within $\vec{p},s\in\Re$ in the channel is given by \cite{Datta_LNE_2012}
\begin{equation}
\label{NOM}
n_m(\Re)=\dfrac{h D_0|\langle v_x\rangle|}{2L},
\end{equation}
where $|\langle v_x\rangle|$ is the magnitude of $\langle v_x\rangle$ and $L$ is the channel length.
Note that $|\langle v_x\rangle|$ is the same in all four groups for the Rashba Hamiltonian considered here (see Appendix \ref{AppC}). Thus Eq. \eqref{semi_dos} can be written as
\begin{equation}
\label{semi1Dfinal}
\begin{aligned}
&\dfrac{{n_m\left(\Re \right)}}{{\left| {\left\langle {{v_x}} \right\rangle } \right|}}\dfrac{{\partial \left\langle \mu \right\rangle }}{{\partial t}} + {\mathop{\rm sgn}} \left( {\left\langle {{v_x}} \right\rangle } \right) n_m\left( { \Re } \right) \dfrac{{\partial \langle \mu \rangle }}{{\partial x}} - \dfrac{q}{{{C_E}}}\frac{{n_m\left( { \Re } \right)}}{{\left| {\left\langle {{v_x}} \right\rangle } \right|}}\frac{{\partial Q}}{{\partial t}}\\&+ {\mathop{\rm sgn}} \left( {\left\langle {{v_x}} \right\rangle } \right) q \, {n_m\left( {\Re } \right)} {L_M}\dfrac{{\partial {I_c}}}{{\partial t}} = \tilde{S}(\Re) + \frac{h}{q}\frac{{{i_{ext}}}}{L},
\end{aligned}
\end{equation}
where
\begin{equation}
\label{scatter1}
\tilde{S}(\Re)=-\frac{h}{L}\sum\limits_{\vec p,s \in \Re } {\mkern 1mu} {\mkern 1mu} \sum\limits_{\vec p',s'} S (\vec p,s \leftrightarrow \vec p',s')\left( { - \frac{{\partial {f_0}}}{{\partial E}}} \right)\left( {\mu - \mu '} \right).
\end{equation}
Eq. \eqref{semi1Dfinal} applies to each group in Eq. \eqref{classification} with an average electrochemical potential given by $\left\langle \mu \right\rangle_{\vec{p},s \in U^+} \equiv \mu\left(U^+\right)$, $\left\langle \mu \right\rangle_{\vec{p},s \in D^-} \equiv \mu\left(D^-\right)$, $\left\langle \mu \right\rangle_{\vec{p},s \in U^-} \equiv \mu\left(U^-\right)$, and $\left\langle \mu \right\rangle_{\vec{p},s \in D^+} \equiv \mu\left(D^+\right)$ respectively.
\subsection{Scattering Matrix}
We make the following assumption:
\begin{equation}
\label{cond2}
\langle S \mu \rangle \approx \langle S \rangle \langle \mu \rangle,
\end{equation}
which when applied to the term related to the scattering rate $\tilde{S}(\Re)$ in Eq. \eqref{scatter1} becomes
\begin{equation}
\label{scatter}
\begin{aligned}
\tilde{S}(\Re)= &{\hat S_{\Re \leftrightarrow {U^ + }}}\left( {\left\langle {\mu \left( {{U^ + }} \right)} \right\rangle - \left\langle {\mu \left( \Re \right)} \right\rangle } \right) \\
&+ {\hat S_{\Re \leftrightarrow {D^ - }}}\left( {\left\langle {\mu \left( {{D^ - }} \right)} \right\rangle - \left\langle {\mu \left( \Re \right)} \right\rangle } \right)\\
&+ {\hat S_{\Re \leftrightarrow {U^ - }}}\left( {\left\langle {\mu \left( {{U^ - }} \right)} \right\rangle - \left\langle {\mu \left( \Re \right)} \right\rangle } \right)\\
&+ {\hat S_{\Re \leftrightarrow {D^ + }}}\left( {\left\langle {\mu \left( {{D^ + }} \right)} \right\rangle - \left\langle {\mu \left( \Re \right)} \right\rangle } \right),
\end{aligned}
\end{equation}
with
\begin{equation}
{\hat S_{\Re \leftrightarrow \Re '}}=\frac{h}{L}\sum\limits_{\vec p,s \in \Re } \sum\limits_{\vec p',s'\in\Re '} \left( { - \frac{{\partial {f_0}}}{{\partial E}}} \right) S (\vec p,s \leftrightarrow \vec p',s').
\end{equation}
We can evaluate Eq. \eqref{scatter} for each of the groups in Eq. \eqref{classification}, i.e., $\Re \equiv U^+$, $D^-$, $U^-$, and $D^+$, which together, in the $\{\mu(U^+),\mu(D^-),\mu(U^-),\mu(D^+)\}^T$ basis, form the following matrix (see Appendix \ref{AppE} for details)
\begin{equation}
\label{scatter_mat}
\begin{aligned}
\left[ S \right] = \left[ {\begin{array}{*{20}{c}}
{ - {u_1}}&{{{\hat S}_{{U^ + } \leftrightarrow {D^ - }}}}&{{{\hat S}_{{U^ + } \leftrightarrow {U^ - }}}}&{{{\hat S}_{{U^ + } \leftrightarrow {D^ + }}}}\\
{{{\hat S}_{{U^ + } \leftrightarrow {D^ - }}}}&{ - {u_1 '}}&{{{\hat S}_{{D^ - } \leftrightarrow {U^ - }}}}&{{{\hat S}_{{D^ - } \leftrightarrow {D^ + }}}}\\
{{{\hat S}_{{U^ + } \leftrightarrow {U^ - }}}}&{{{\hat S}_{{D^ - } \leftrightarrow {U^ - }}}}&{ - {u_2}}&{{{\hat S}_{{U^ - } \leftrightarrow {D^ + }}}}\\
{{{\hat S}_{{U^ + } \leftrightarrow {D^ + }}}}&{{{\hat S}_{{D^ - } \leftrightarrow {D^ + }}}}&{{{\hat S}_{{U^ - } \leftrightarrow {D^ + }}}}&{ - {u_2 '}}
\end{array}} \right],
\end{aligned}
\end{equation}
where
\begin{equation*}
\begin{aligned}
&{u_1} = {{\hat S}_{{U^ + } \leftrightarrow {D^ - }}} + {{\hat S}_{{U^ + } \leftrightarrow {U^ - }}} + {{\hat S}_{{U^ + } \leftrightarrow {D^ + }}},\\
&u_1' = {{\hat S}_{{U^ + }\leftrightarrow {D^ - }}} + {{\hat S}_{{D^ - } \leftrightarrow {U^ - }}} + {{\hat S}_{{D^ - } \leftrightarrow {D^ + }}},\\
&{u_2} = {{\hat S}_{{U^ + } \leftrightarrow {U^ - }}} + {{\hat S}_{{D^ - } \leftrightarrow {U^ - }}} + {{\hat S}_{{U^ - } \leftrightarrow {D^ + }}},\\
\text{and }\;&{{u_2'}} = {{\hat S}_{{U^ + } \leftrightarrow {D^ + }}} + {{\hat S}_{{D^ - } \leftrightarrow {D^ + }}} + {{\hat S}_{{U^ - } \leftrightarrow {D^ + }}}.
\end{aligned}
\end{equation*}
The scattering matrix is such that each column sums to zero, satisfying charge conservation, and each row sums to zero, ensuring zero current when all potentials are equal.
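For instance, the first column of $[S]$ in Eq. \eqref{scatter_mat} sums to
\begin{equation*}
-u_1 + \hat S_{U^+ \leftrightarrow D^-} + \hat S_{U^+ \leftrightarrow U^-} + \hat S_{U^+ \leftrightarrow D^+} = 0
\end{equation*}
by the definition of $u_1$; the remaining columns (and, by the symmetry of $[S]$, the rows) vanish analogously by the definitions of $u_1'$, $u_2$, and $u_2'$.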
In addition, the time-reversal symmetry requires that
\begin{equation}
{{\hat S}_{{U^ + } \leftrightarrow {U^ - }}}={{\hat S}_{{D^ - } \leftrightarrow {D^ + }}} \text{ and } \hat{S}_{U^+ \leftrightarrow D^+}=\hat{S}_{D^- \leftrightarrow U^-}.
\end{equation}
There are three types of scattering processes considered in the channel: (a) reflection with spin flip $r_{s1} = \hat{S}_{U^+ \leftrightarrow D^-}$ and $r_{s2} = \hat{S}_{U^-\leftrightarrow D^+}$, (b) reflection without spin flip $r=\hat{S}_{U^+ \leftrightarrow U^-}=\hat{S}_{D^- \leftrightarrow D^+}$, and (c) transmission with spin flip $t_s=\hat{S}_{U^+ \leftrightarrow D^+}=\hat{S}_{D^- \leftrightarrow U^-}$. They are given by
\begin{subequations}
\begin{equation}
\label{rs1}
r_{s1} = \frac{h}{L}\sum\limits_{\vec p,s \in U^+ } \sum\limits_{\vec p',s'\in D^-} \left( { - \frac{{\partial {f_0}}}{{\partial E}}} \right) S (\vec p,s \leftrightarrow \vec p',s'),
\end{equation}
\begin{equation}
\label{rs2}
r_{s2} = \frac{h}{L}\sum\limits_{\vec p,s \in U^- } \sum\limits_{\vec p',s'\in D^+} \left( { - \frac{{\partial {f_0}}}{{\partial E}}} \right) S (\vec p,s \leftrightarrow \vec p',s'),
\end{equation}
\begin{equation}
\label{r}
\begin{aligned}
r &= \frac{h}{L}\sum\limits_{\vec p,s \in U^+} \sum\limits_{\vec p',s'\in U^-} \left( { - \frac{{\partial {f_0}}}{{\partial E}}} \right) S (\vec p,s \leftrightarrow \vec p',s'),\\
&= \frac{h}{L}\sum\limits_{\vec p,s \in D^-} \sum\limits_{\vec p',s'\in D^+} \left( { - \frac{{\partial {f_0}}}{{\partial E}}} \right) S (\vec p,s \leftrightarrow \vec p',s'),
\end{aligned}
\end{equation}
and
\begin{equation}
\label{ts}
\begin{aligned}
t_s &= \frac{h}{L}\sum\limits_{\vec p,s \in U^+} \sum\limits_{\vec p',s'\in D^+} \left( { - \frac{{\partial {f_0}}}{{\partial E}}} \right) S (\vec p,s \leftrightarrow \vec p',s'),\\
&= \frac{h}{L}\sum\limits_{\vec p,s \in D^-} \sum\limits_{\vec p',s'\in U^-} \left( { - \frac{{\partial {f_0}}}{{\partial E}}} \right) S (\vec p,s \leftrightarrow \vec p',s').
\end{aligned}
\end{equation}
\end{subequations}
Writing Eq. \eqref{semi1Dfinal} for each group in Eq. \eqref{classification}, we obtain
\begin{equation}
\label{semi_modelx}
\begin{aligned}
&\frac{1}{\left| {\left\langle {{v_x}} \right\rangle } \right|}\frac{\partial }{{\partial t}}\left\{ {\begin{array}{*{20}{c}}
{M\tilde \mu \left( {{U^ + }} \right)}\\
{M\tilde \mu \left( {{D^ - }} \right)}\\
{N\tilde \mu \left( {{U^ - }} \right)}\\
{N\tilde \mu \left( {{D^ + }} \right)}
\end{array}} \right\} + \frac{\partial}{{\partial x}}\left\{ {\begin{array}{*{20}{c}}
{M\tilde \mu \left( {{U^ + }} \right)}\\
{ - M\tilde \mu \left( {{D^ - }} \right)}\\
{ - N\tilde \mu \left( {{U^ - }} \right)}\\
{N\tilde \mu \left( {{D^ + }} \right)}
\end{array}} \right\} \\&= \left[ {\begin{array}{*{20}{c}}
{ - {u_1}}&{{r_{s1}}}&r&{{t_s}}\\
{{r_{s1}}}&{ - {u_1}}&{{t_s}}&r\\
r&{{t_s}}&{ - {u_2}}&{{r_{s2}}}\\
{{t_s}}&r&{{r_{s2}}}&{ - {u_2}}
\end{array}} \right]\left\{ {\begin{array}{*{20}{c}}
{\tilde \mu \left( {{U^ + }} \right)}\\
{\tilde \mu \left( {{D^ - }} \right)}\\
{\tilde \mu \left( {{U^ - }} \right)}\\
{\tilde \mu \left( {{D^ + }} \right)}
\end{array}} \right\}\\& - q L_M { \dfrac{{\partial I_c}}{{\partial t}}} \left\{ {\begin{array}{*{20}{c}}
{M}\\
{-M}\\
{-N}\\
{N}
\end{array}} \right\} + \dfrac{q}{\left| {\left\langle {{v_x}} \right\rangle } \right| C_E}\dfrac{{\partial Q}}{{\partial t}}\left\{ {\begin{array}{*{20}{c}}
M\\ M\\ N\\ N
\end{array}} \right\} + \frac{h}{q}\left\{ {\begin{array}{*{20}{c}}
{i_U^ + }\\
{i_D^ - }\\
{i_U^ - }\\
{i_D^ + }
\end{array}} \right\}.
\end{aligned}
\end{equation}
Note that the electrochemical potentials are referenced to the constant $\mu_0$, i.e., $\tilde{\mu} = \mu - \mu_0$. Here, $i_U^+$, $i_D^-$, $i_U^-$, and $i_D^+$ are the currents per unit length entering the four groups from an external contact (see Fig. \ref{2}(b)), given by \cite{Sayed_SciRep_2016}
\begin{equation}
\label{contact_NM_FM}
\begin{aligned}
&i_U^+ = \frac{q^2}{h} M g_u \left(v_u - \frac{\tilde{\mu}(U^+)}{q}\right),\\ &i_D^+ = \frac{q^2}{h} N g_d \left(v_d - \frac{\tilde{\mu}(D^+)}{q}\right),\\ &i_U^- = \frac{q^2}{h} N g_u \left(v_u -\frac{\tilde{\mu}(U^-)}{q}\right),\\\text{and}\;\;\; &i_D^- = \frac{q^2}{h} M g_d \left(v_d - \frac{\tilde{\mu}(D^-)}{q}\right).
\end{aligned}
\end{equation}
Here, $v_u$ and $v_d$ are the up- and down-spin voltages at the external contact, respectively, and $g_u$ and $g_d$ are the up- and down-spin conductances per mode per unit length of the contact. The contact can be either an NM ($g_u=g_d$) or an FM ($g_u \neq g_d$). In steady state, Eq. \eqref{semi_modelx} reduces to our prior model \cite{Sayed_SciRep_2016}.
\subsection{Conversion to Charge-Spin Basis}
The charge and spin voltages and currents in the channel are defined in terms of the four average electrochemical potentials as
\begin{equation}
\label{transform}
\left\{ \begin{array}{l}
I_c{R_B}\\
2{V_s}\\
{I_s}{R_B}\\
2{V_c}
\end{array} \right\}\; = \dfrac{q}{h}R_B\left[ {\begin{array}{*{20}{c}}
1&-1&-1&1\\
\alpha &-\alpha &{\alpha }&{ - \alpha }\\
{\dfrac{1}{\alpha }}&{\dfrac{1}{\alpha }}&{-\dfrac{1}{\alpha }}&{ - \dfrac{1}{\alpha }}\\
1&{ 1}&{ 1}&1
\end{array}} \right]\;\left\{ \begin{array}{l}
M\tilde \mu ({U^ + })\\
M\tilde \mu ({D^ - })\\
N\tilde \mu ({U^ - })\\
N\tilde \mu ({D^ + })
\end{array} \right\},
\end{equation}
where $R_B$ is the ballistic resistance of the channel given in Eq. \eqref{bal_res} and $\alpha$ is an angular averaging factor. The derivation of Eq. \eqref{transform} is given in Appendix \ref{AppF}.
We have multiplied the second row of Eq. \eqref{transform} by a factor $0\leq\alpha\leq1$ to account for the angular distribution of the spin polarization of the eigenstates on the half Fermi circles labeled $U^+$, $U^-$, $D^+$, and $D^-$ in Fig. \ref{2}(a). The net $z$-spin polarization (or $z$-spin voltage) is expected to be lowered by the factor $\alpha$ \cite{Hong_PRB_2012, Hong_SciRep_2016, Sayed_SciRep_2016}, which depends on the distribution of the spin polarization of the eigenstates and on the details of the scattering mechanisms. The $\alpha$ factor introduced in Eq. \eqref{transform} appears in Eq. \eqref{Hong_Eqn}, indicating that the charge-current-induced spin voltage in the channel is lowered from its ideal value by this angular distribution. Onsager reciprocity requires that the spin-current-induced charge voltage in the channel be lowered by the same factor $\alpha$, as shown in Eq. \eqref{Hong_Inv}; this is taken into account by multiplying the third row of Eq. \eqref{transform} by $1/\alpha$. In the simplest approximation, the angle between the $z$-axis and the spin polarization of the eigenstates on a given half Fermi circle in Fig. \ref{2}(a) varies from $-\pi/2$ to $+\pi/2$, which yields $\alpha=2/\pi$.
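The value $\alpha=2/\pi$ quoted above follows from averaging the $z$-projection of the spin polarization, $\cos\theta$, over a half Fermi circle (a sketch under the simplest assumption of uniform angular weighting):
\begin{equation*}
\alpha = \frac{1}{\pi}\int_{-\pi/2}^{+\pi/2} \cos\theta \, d\theta = \frac{1}{\pi}\Big[\sin\theta\Big]_{-\pi/2}^{+\pi/2} = \frac{2}{\pi}.
\end{equation*}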
Combining Eq. \eqref{semi_modelx} with Eq. \eqref{transform} yields
\begin{equation}
\label{charge_spin_eqn}
\begin{array}{l}
\dfrac{1}{{\left| {\left\langle {{v_x}} \right\rangle } \right|}}\dfrac{\partial}{{\partial t}}\;\left[ {\begin{array}{*{20}{c}}
0&0&0&1\\
0&0&\alpha^2&0\\
0&\dfrac{1}{\alpha^2}&0&0\\
1&0&0&0
\end{array}} \right]\left\{ \begin{array}{l}
I_c{R_B}\\
2{V_s}\\
{I_s}{R_B}\\
2{V_c}
\end{array} \right\} + \dfrac{\partial}{{\partial x}}\left\{ \begin{array}{l}
I_c{R_B}\\
2{V_s}\\
{I_s}{R_B}\\
2{V_c}
\end{array} \right\}\; \\ = \;\left[ {\begin{array}{*{20}{c}}
0&0&0&0\\
0&0&{ - \dfrac{2 \alpha^2}{{{\lambda _0}}}}&{\dfrac{2\alpha p_0}{{{\lambda _0} }}}\\
{\dfrac{2}{\alpha\lambda_s '}}&{ - \dfrac{2}{{{\alpha^2\lambda _s}}}}&0&0\\
{ - \dfrac{2}{\lambda }}&{\dfrac{2}{{\alpha \lambda '}}}&0&0
\end{array}} \right]\left\{ \begin{array}{l}
I_c{R_B}\\
2{V_s}\\
{I_s}{R_B}\\
2{V_c}
\end{array} \right\}\\ - 2{L_M}\dfrac{{\partial I_c}}{{\partial t}}\left\{ \begin{array}{l}
0\\
0\\
\dfrac{p_0}{\alpha}\\
1
\end{array} \right\} + \dfrac{1}{{\left| {\left\langle {{v_x}} \right\rangle } \right|}}\dfrac{2}{{{C_E}}}\dfrac{{\partial Q}}{{\partial t}}\left\{ \begin{array}{l}
1\\
\alpha {p_0}\\
0\\
0
\end{array} \right\} + \left\{ \begin{array}{l}
{R_B i^c}\\
2\Delta v^s\\
R_B i^s\\
2\Delta v^c
\end{array} \right\},
\end{array}
\end{equation}
with the external contact terms given by
\begin{equation}
\label{cont}
\begin{aligned}
&i^c = i_U^+ + i_D^- + i_U^- + i_D^+,\\
&i^s = \dfrac{1}{\alpha} \left(i_U^+ - i_D^- + i_U^- - i_D^+\right),\\
&\Delta v^c = \frac{R_B}{2}\left( i_U^+ - i_D^- - i_U^- + i_D^+ \right), \text{ and}\\
&\Delta v^s = \frac{\alpha R_B}{2}\left(i_U^+ + i_D^- - i_U^- - i_D^+ \right).
\end{aligned}
\end{equation}
Eq. \eqref{cont} combined with Eq. \eqref{contact_NM_FM} yields Eqs. \eqref{contact_charge_spin} and \eqref{contact_charge_spin1}.
\subsection{Continuity Equation}
The term $\partial Q / \partial t$ on the right hand side of Eq. \eqref{charge_spin_eqn} is related to the charge currents according to the continuity equation given by
\begin{equation}
\label{cont_eqn}
\dfrac{\partial Q}{\partial t} + \dfrac{\partial I_c}{\partial x}=i^c.
\end{equation}
The first row of Eq. \eqref{charge_spin_eqn} combined with Eq. \eqref{cont_eqn} becomes
\begin{equation}
\label{sup_eqn}
\frac{\partial }{{\partial t}}{V_c} = \left( {\frac{{\left| {\left\langle {{v_x}} \right\rangle } \right|{R_B}}}{2} + \frac{1}{{{C_E}}}} \right)\left({i^c}-\frac{\partial }{{\partial x}}{I_c}\right),
\end{equation}
which is the first equation in Eq. \eqref{charge_TL_cont}.
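For completeness, we note the intermediate step: the first row of Eq. \eqref{charge_spin_eqn} reads
\begin{equation*}
\frac{2}{\left| \left\langle v_x \right\rangle \right|}\frac{\partial V_c}{\partial t} + R_B\frac{\partial I_c}{\partial x} = \frac{2}{\left| \left\langle v_x \right\rangle \right| C_E}\frac{\partial Q}{\partial t} + R_B \, i^c,
\end{equation*}
and substituting $\partial Q/\partial t = i^c - \partial I_c/\partial x$ from Eq. \eqref{cont_eqn} collects the right-hand side into the common factor $\left(i^c - \partial I_c/\partial x\right)$.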
The second row of Eq. \eqref{charge_spin_eqn} combined with Eq. \eqref{cont_eqn} becomes
\begin{equation}
\begin{aligned}
\alpha^2 \frac{{{R_B}}}{{2\left| {\left\langle {{v_x}} \right\rangle } \right|}}\frac{\partial }{{\partial t}}{I_s} &+ \frac{\partial }{{\partial x}}{V_s} = - \frac{{\alpha^2{R_B}}}{{{\lambda _0}}}{I_s} + \frac{{2\alpha {p_0}}}{{{\lambda _0}}}{V_c} \\&+ \alpha {p_0}\frac{1}{{\left| {\left\langle {{v_x}} \right\rangle } \right|}}\frac{1}{{{C_E}}}\left( {{i^c} - \frac{{\partial {I_c}}}{{\partial x}}} \right) + \Delta {v^s}.
\end{aligned}
\end{equation}
We substitute the expression for ${{i^c} - \dfrac{{\partial I_c}}{{\partial x}}}$ from Eq. \eqref{sup_eqn} to obtain the second equation in Eq. \eqref{spin_TL_cont}. For contact conductance $G_0\rightarrow0$, Eqs. \eqref{charge_TL_cont} and \eqref{spin_TL_cont} reduce to Eqs. \eqref{charge_TL} and \eqref{spin_TL}, respectively.
\subsection{Mean Free Paths}
We have three distinct mean free paths in Eq. \eqref{charge_spin_eqn}, given by
\begin{equation}
\label{mfps}
\begin{aligned}
&\dfrac{1}{\lambda } = \dfrac{1}{2}\left( {\dfrac{{{r_{s2}}}}{N} + \dfrac{{{r_{s1}}}}{M}} \right) + \dfrac{r}{2}\left( {\dfrac{1}{N}\; + \dfrac{1}{M}} \right),\\
&\dfrac{1}{{{\lambda _0}}} = \dfrac{r + {t_s}}{2} \left( {\dfrac{1}{N} + \dfrac{1}{M}} \right), \;\text{and}\\
&\dfrac{1}{{{\lambda _s}}} = \dfrac{1}{2}\left( {\dfrac{{{r_{s2}}}}{N} + \dfrac{{{r_{s1}}}}{M}} \right) + \dfrac{{t_s}}{2}\left( {\dfrac{1}{N} + \dfrac{1}{M}} \right),\,
\end{aligned}
\end{equation}
where $\lambda$, $\lambda_0$, and $\lambda_s$ determine the series charge resistance $R_c$ (see Eq. \eqref{rc}), the series spin resistance $R_s$ (see Eq. \eqref{rs}), and the shunt spin conductance $G_{sh}$ (see Eq. \eqref{gsh}), respectively. Note that $\lambda_s$ depends on the spin-flip processes in the channel and determines the shunt conductance $G_{sh}$, which accounts for spin relaxation.
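As a quick consistency check, consider a channel with no spin-flip scattering, i.e.~$r_{s1}=r_{s2}=t_s=0$. Eq. \eqref{mfps} then gives
\begin{equation*}
\frac{1}{\lambda} = \frac{1}{\lambda_0} = \frac{r}{2}\left(\frac{1}{N}+\frac{1}{M}\right) \quad \text{and} \quad \frac{1}{\lambda_s} = 0,
\end{equation*}
so that spin relaxation is switched off ($\lambda_s\rightarrow\infty$) while ordinary momentum relaxation, set by the spin-conserving reflection rate $r$, survives.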
\subsection{Charge-Spin Coupling Coefficients}
The other terms of Eq. \eqref{charge_spin_eqn} are given by
\begin{equation}
\label{spin_charge_coupling}
\begin{array}{l}
\dfrac{1}{{\lambda '}} = \dfrac{1}{2}\left( {\dfrac{{{r_{s2}}}}{N} - \dfrac{{{r_{s1}}}}{M}} \right) + \dfrac{r}{2}\left( {\dfrac{1}{N}\; - \dfrac{1}{M}} \right),\;\text{and}\\
\dfrac{1}{{{{\lambda_s '}}}} = \dfrac{1}{2}\left( {\dfrac{{{r_{s2}}}}{N} - \dfrac{{{r_{s1}}}}{M}} \right) + \dfrac{t_s}{2}\left( {\dfrac{1}{N} - \dfrac{1}{M}} \right),
\end{array}
\end{equation}
representing the coupling coefficients between charge and spin: $\lambda_s '$ causes a charge-induced spin signal and $\lambda '$ a spin-induced charge signal. Note that the first terms of Eq. \eqref{spin_charge_coupling} indicate a purely scattering-induced spin-charge coupling even if $M=N$ (i.e. $p_0=0$), since $r_{s1}$ and $r_{s2}$ are two independent parameters and there can be situations where $r_{s1}\neq r_{s2}$.
In this paper, we restrict ourselves to SML caused by the difference between $M$ and $N$ (i.e. $p_0\neq0$). We can eliminate the first terms in Eq. \eqref{spin_charge_coupling} by assuming either of the following:
\begin{subequations}
\label{assump12}
\begin{equation}
\label{assump1}
r_{s1}=r_{s2}=r_s,
\end{equation}
\begin{equation}
\label{assump2}
\text{or,}\;\;\dfrac{r_{s1}}{M}=\dfrac{r_{s2}}{N}.
\end{equation}
\end{subequations}
The first assumption Eq. \eqref{assump1} when applied in Eq. \eqref{spin_charge_coupling} yields
\begin{equation}
\label{spin_charge_coupling1}
\begin{array}{l}
\dfrac{1}{{\lambda '}} = \dfrac{r_s+r}{2} \left( {\dfrac{{{1}}}{N} - \dfrac{{{1}}}{M}} \right)=\dfrac{p_0}{\lambda},\;\text{and}\\
\dfrac{1}{{{{\lambda_s '}}}} = \dfrac{r_s + t_s}{2} \left( {\dfrac{{{1}}}{N} - \dfrac{{{1}}}{M}} \right)=\dfrac{p_0}{\lambda_s},
\end{array}
\end{equation}
which in turn gives $\lambda_r=\lambda$ and $\lambda_t=\lambda_s$ in Eqs. \eqref{etc} and \eqref{ets} respectively.
The second assumption Eq. \eqref{assump2} when applied in Eq. \eqref{spin_charge_coupling} yields
\begin{equation}
\begin{aligned}
\label{spin_charge_coupling2}
&\dfrac{1}{\lambda '} = \dfrac{r}{2}\left( {\dfrac{1}{N}\; - \dfrac{1}{M}} \right) = \dfrac{p_0}{\lambda _r},\;\text{and}\\
&\dfrac{1}{\lambda_s '} = \dfrac{t_s}{2}\left( {\dfrac{1}{N} - \dfrac{1}{M}} \right) = \dfrac{p_0}{\lambda _t}.
\end{aligned}
\end{equation}
\subsection{Comments on the Assumptions}
The assumptions in Eqs. \eqref{cond}, \eqref{cond2}, and \eqref{assump12} result in an effective change of the transmission line model parameters in Eq. \eqref{params}, but do not change the models themselves in Eqs. \eqref{charge_TL}, \eqref{spin_TL} (Fig. \ref{1}) and Eqs. \eqref{charge_TL_cont}, \eqref{spin_TL_cont} (Fig. \ref{3}). The assumptions made to derive the model can be revisited as the field evolves. However, several predictions of our steady-state model \cite{Sayed_SciRep_2016} have already received support from experiments \cite{Pham_APL_2016,Pham_NanoLett_2016,Koo_New}, suggesting that the assumptions are within reasonable limits.
\section{Summary}
We have proposed a two-component (charge and $z$-component of spin) transmission line model for channels with spin-momentum locking (SML), which is a new addition to our SPICE-compatible multi-physics model library \cite{ModApp, Camsari_SciRep_2015, Sayed_SciRep2_2016}. The model enables straightforward analysis of complex geometries involving materials with spin-orbit coupling (SOC), observed in diverse classes of materials, e.g. topological insulators, heavy metals, oxide interfaces, and narrow-bandgap semiconductors. The model is derived from a four-component diffusion equation obtained from the Boltzmann transport equation assuming linear response and elastic scattering in the channel. The four-component diffusion equation uses four average electrochemical potentials based on a classification of states by the sign of the $z$-component of spin (up ($U$) or down ($D$)) and the sign of the $x$-component of group velocity ($+$ or $-$). This classification can be viewed as an extension of the Valet-Fert equation \cite{Valet_Fert_1993}, which uses two electrochemical potentials for $U$ and $D$ states. For a normal metal channel, the time-dependent model presented here decouples into (i) the well-known transmission line model for charge transport in quantum wires \cite{Burke_TNANO_2002, Burke_TNANO_2003, Sayeef_IEEE_2005} and (ii) a time-dependent version of the Valet-Fert equation \cite{Valet_Fert_1993} for spin transport. We first derive several results on charge-spin interconversion starting from our model in steady state. The steady-state results show good agreement with existing experiments on diverse materials. We then study the phenomenon of spin-charge separation using our full time-dependent model, especially in materials with SOC exhibiting SML. Our model shows the expected spin-charge separation with two distinct velocities for charge and spin, which persists even in channels exhibiting SML.
However, we show that the lower-velocity signal is purely spin while the higher-velocity signal is largely charge, with an additional spin component proportional to the degree of SML.
\begin{acknowledgments}
This work was in part supported by FAME, one of six centers of STARnet, a Semiconductor Research Corporation (SRC) program sponsored by MARCO and DARPA and in part by ASCENT, one of six centers in JUMP, a SRC program sponsored by DARPA.
\end{acknowledgments}
\section{Introduction}
A \emph{solvmanifold} is a simply-connected solvable Lie group $\Ss$
endowed with a left-invariant Riemannian metric.
It is called of \emph{real type} if $\Ss$ is non-abelian and if for each element of its Lie algebra the corresponding adjoint map is either
nilpotent or has an eigenvalue with non-zero real part.
Nilmanifolds are of real type,
and so are --up to isometry--
all homogeneous manifolds with negative sectional curvature, see \cite{Hei74} and \cite{Jbl13c}, but flat solvmanifolds are not \cite{Mln}. More interestingly,
by the deep structure results of \cite{Heber1998}, \cite{standard} and \cite{solvsolitons},
examples of solvmanifolds of real type include all non-flat
Einstein solvmanifolds and \emph{solvsolitons}.
Recall that a solvsoliton is a solvmanifold which
is also an expanding Ricci soliton, and whose corresponding Ricci flow evolution
is driven by diffeomorphisms which are Lie group automorphisms.
Thus, up to isometry, solvmanifolds of real type
contain all known examples of non-compact, non-flat homogeneous Ricci solitons.
Since the homogeneous Ricci flow solution starting at a solvmanifold exists for all positive times \cite{scalar}, any sequence of blow-downs subconverges to a
homogeneous limit Ricci soliton \cite{BL17}.
Our first main result addresses the question of uniqueness of such limits:
\begin{teointro}\label{mainthm_uniq}
On a sim\-ply-connec\-ted solvable Lie group $\Ss$ of real type,
any scalar-curvature-normalized homogeneous Ricci flow solution converges in Cheeger-Gromov topology to a non-flat solvsoliton $\big(\bar \Ss, \bgsol\big)$, which does not
depend on the initial metric.
\end{teointro}
The Lie group $\Ss$ itself is of course called of real type if it satisfies the above condition.
Notice that the limit transitive group $\bar \Ss$ may be non-isomorphic to $\Ss$, but must still be of real type: see Remarks \ref{rem_solvsolreal} and \ref{rem_realtype}.
Theorem \ref{mainthm_uniq} was known for $\dim \Ss = 3$ and partially for $\dim \Ss = 4$ \cite{Lot07}, for nilpotent Lie groups \cite{nilRF}, \cite{Jbl11}, and for unimodular, almost-abelian Lie groups \cite{Arroyo2013}.
In the compact case, such a uniqueness result does not hold in general,
since most compact Lie groups admit non-isometric Einstein metrics.
A first immediate consequence of Theorem \ref{mainthm_uniq} is
\begin{corintro}\label{maincor_globalstability}
Let $(\Ss,\gsol)$ be a non-flat solvsoliton. Then, any scalar-curvature-normalized homogeneous Ricci flow solution on $\Ss$ converges in
Cheeger-Gromov topology to $(\Ss,\gsol)$.
\end{corintro}
We not only recover the fact, proved in \cite{solvsolitons}, that solvsolitons on such $\Ss$
are unique up to isometry, but also show
that homogeneous Ricci solitons on a solvable Lie group of real type are pairwise
equivariantly isometric: see Corollary \ref{cor_uniqsolvsolmetric}.
Recall that along a homogeneous Ricci flow solution
the full isometry group remains unchanged \cite{Kot}.
However, this does not imply in general that this group
will be a subgroup of the full isometry group of the limit: see \cite[Example 1.6]{GJ15}. The main reason for this is that, in the non-Einstein solvsoliton case,
we only have convergence in Cheeger-Gromov topology: see Remark \ref{rem:solnonCinfty}.
Still, Theorem \ref{mainthm_uniq} yields
\begin{corintro}
For a non-flat solvsoliton $(\Ss,\gsol)$,
the dimension of its isometry group $\Iso(\Ss,\gsol)$ is maximal among all left-invariant metrics on $\Ss$.
\end{corintro}
In the Einstein case, the convergence can be improved as follows:
\begin{teointro}\label{mainthm_Einstein}
Let $(\Ss,g_E)$ be a non-flat Einstein solvmanifold. Then,
any scalar-curvature-normalized homogeneous Ricci flow solution on $\Ss$ converges in $C^\infty$ topology to $\psi^* g_E$, for some $\psi \in \Aut(\Ss)$.
\end{teointro}
Notice that Theorem \ref{mainthm_Einstein} gives in particular a dynamical proof of the recent result of Gordon and Jablonski on the maximal symmetry of Einstein solvmanifolds \cite{GJ15}.
Several results on the stability of certain non-compact homogeneous Einstein metrics and Ricci solitons already exist in the literature; see for instance \cite{SSS11},
\cite{Bam15}, \cite{JPW16}, \cite{WW16}. Even though more general, non-homogeneous variations are considered in these articles, none of them implies Theorem \ref{mainthm_Einstein}, since two different homogeneous metrics on $\RR^n$ are in general not within bounded distance of each other.
Our last main result provides a geometric characterization of solvmanifolds of real type in terms of the behavior of homogeneous Ricci flow solutions. More precisely, recall that given a Cheeger-Gromov-convergent sequence $(M^n_k,g_k)_{k\in\NN}$ of homogeneous manifolds, each of which has an $N$-dimensional transitive group of isometries $\G_k$, there is a natural way of making sense of an $N$-dimensional limit group of isometries $\bar \G$, by taking limits of appropriately rescaled Killing fields
(see \cite[$\S$9]{BL17}). The sequence is then called \emph{algebraically non-collapsed} if the action of $\bar \G$ is transitive on the limit space $(\bar M^n, \bar g)$, and collapsed otherwise. A homogeneous Ricci flow solution is algebraically non-collapsed if any convergent sequence of parabolic blow-downs has that property.
\begin{teointro}\label{mainthm_algnonc}
On a simply-connected, non-abelian, solvable Lie group $\Ss$, a homogeneous Ricci flow solution is algebraically non-collapsed if and only if $\Ss$ is of real type.
\end{teointro}
A first consequence of Theorem \ref{mainthm_algnonc} is that for Ricci flow solutions on a simply-connected, non-abelian solvable Lie group which are \emph{not} of real type, the dimension of the isometry group must always jump in the limit: see Section \ref{sec_nocollapse}.
But even more importantly, studying Ricci flow solutions on such Lie groups
would in general involve understanding algebraic collapse,
which cannot be achieved using the moving-brackets framework alone.
We turn to the content of the paper and
the proofs of the above results. In Section \ref{sec_real} we
discuss algebraic properties of solvmanifolds of real type. In Section
\ref{sec_stratif} we recall the GIT stratification of the space of brackets
and state in Theorem \ref{thm_GITuniq} a uniqueness result
for critical points of the energy map.
This is the main ingredient in the proof of Theorem \ref{thm_uniqsolvsol}, a uniqueness result for solvsoliton brackets lying in the intersection of the closure of an orbit and the stratum containing that orbit. Finally,
using the equivalence of the Ricci flow and the gauged bracket flow,
we show in Section \ref{sec_thmA} that on a solvmanifold of real type
any scalar curvature normalized bracket flow converges to a unique solvsoliton bracket in the closure of the corresponding orbit. The proof uses essentially
a Lyapunov function for the bracket flow, described in \cite{BL17}.
Then Theorem \ref{mainthm_uniq} follows immediately. The proof of Theorem \ref{mainthm_algnonc} is given in Section \ref{sec_nocollapse}. Finally, in Section \ref{sec_Einstein} we prove Theorem \ref{mainthm_Einstein},
using the computations of the linearization of the gauged bracket flow
at an Einstein bracket given in Section \ref{sec_linear}.
\section{Solvable Lie groups of real type}\label{sec_real}
In this section we discuss algebraic properties of solvable Lie groups of real type.
Let $(\sg, \mu)$ be the Lie algebra of a solvable Lie group $\Ss$. The Lie bracket
$\mu$ is a skew-symmetric bilinear map, i.e.~ an element of the vector space
of \emph{brackets}
\begin{equation}\label{eqn_brackets}
\Vs := \Lambda^2 \sg^* \otimes \sg,
\end{equation}
that satisfies the Jacobi identity. We denote by
$\ad_\mu(X)(Y) := \mu(X, Y) \in \sg$, $X,Y\in \sg$, the
corresponding adjoint maps.
For convenience, we also fix a scalar product $\ip$ on $\sg$.
\begin{definition}\label{def_realtype}
A non-abelian solvable Lie algebra $(\sg,\mu)$ is called
\begin{itemize}
\item[(i)] of \emph{imaginary type}, if $\spec( \ad_\mu(X)) \subset i \, \RR$ for all $X\in \sg$;
\item[(ii)] of \emph{real type}, if all its subalgebras of imaginary type are nilpotent.
\end{itemize}
A solvable Lie group is called of imaginary or real type if its Lie algebra is of that type.
\end{definition}
In the above definition, $\spec(E)$ denotes
the spectrum of an endomorphism $E$ of $\sg$.
Notice that $\sg$ is of real type if and only if for all $X\in \sg$ the adjoint map
$\ad_\mu(X)$ is either nilpotent or has an eigenvalue $\lambda \notin i \RR$.
Recall also that there exist non-abelian solvable Lie groups admitting flat left-in\-variant metrics.
By \cite{Mln}, they are all of imaginary type and have abelian nilradical; we call the corresponding Lie brackets \emph{flat brackets}.
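A standard illustration is the Lie algebra $\mathfrak{e}(2)$ of the Euclidean motion group of the plane, with basis $\{X,Y,Z\}$ and non-zero brackets
\begin{equation*}
\mu(X,Y)=Z, \qquad \mu(X,Z)=-Y.
\end{equation*}
For $W = aX+bY+cZ$ one has $\spec(\ad_\mu(W)) = \{0, \pm a \, i\}$, so $\mathfrak{e}(2)$ is of imaginary type but not nilpotent, and hence not of real type; its nilradical $\mathrm{span}\{Y,Z\}$ is abelian, and by \cite{Mln} the corresponding simply-connected Lie group admits a flat left-invariant metric.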
\begin{remark}\label{rem_solvsolreal}
In \cite{Jbl13c}, Lie algebras of real type are called \emph{almost completely solvable}.
Completely solvable Lie algebras, characterized by $\spec ( \ad(X) ) \subset \RR$ for all $X\in \sg$, are of course of real type. Moreover, by \cite[Prop.8.4]{Jbl2015},
any solvable Lie group $\Ss$ admitting a solvsoliton metric is of real type.
\end{remark}
Considering $\sg$ as a real vector space, there is a natural `change of basis' linear action of the group $\Gl(\sg)$ on $\Vs$, given by
\begin{equation}\label{eqn_Gsaction}
\big(h \cdot \mu\big)(\cdot, \cdot) := h \mu (h^{-1} \, \cdot, h^{-1} \, \cdot), \qquad h\in \Gl(\sg), \quad \mu \in \Vs\,.
\end{equation}
The orbit $\Gl(\sg) \cdot \mu$ is precisely the set of brackets on $\sg$ whose corresponding Lie algebra is isomorphic to $(\sg,\mu)$. Since
by \eqref{eqn_Gsaction} we have
\begin{eqnarray}\label{eqn_adconj}
\ad_{h\cdot \mu} (X)= {h \ad_\mu(h^{-1}X)h^{-1}}\,,
\end{eqnarray}
the type of a Lie bracket is constant on each $\Gl(\sg)$-orbit.
Next, for $\mu \in V(\sg)$, $X\in \sg$ we set
\begin{align*}
\varphi(\mu,X) = \max \big\{ \, |\Re(\lambda) | : \lambda \in \spec( \ad_\mu(X) ) \big\}\,,
\quad
\psi(\mu,X) = \max \big\{ \, |\lambda | : \lambda \in \spec(\ad_\mu(X) ) \big\}\,.
\end{align*}
\begin{lemma}\label{lem_solveigenvaluesad}
Let $(\sg,\mu_0)$ be a solvable Lie algebra with nilradical $\ngo$, and let $\ag$ be the orthogonal complement
of $\ngo $ in $\sg$. If $\mu =h \cdot \mu_0$ for $h \in \Gl(\sg)$, then
for all $X\in \sg$ we have
\begin{equation*}
\varphi(h \cdot \mu_0, X) = \varphi(\mu_0, (h^{-1} X)_\ag )\quad \textrm{ and }\quad
\psi(h \cdot \mu_0, X) = \psi(\mu_0, (h^{-1} X)_\ag )\,,
\end{equation*}
where the $\ag$-subscript denotes orthogonal projection onto $\ag$.
Moreover, if $\ngo \neq \sg$ then $\mu_0$ is of real type if and only if $\sigma_\ag(\mu_0) := \min_{X\in \ag, \Vert X \Vert = 1} \varphi (\mu_0,X) > 0$.
\end{lemma}
\begin{proof}
By \eqref{eqn_adconj} we have $\varphi(h \cdot \mu_0, X)=
\varphi(\mu_0,h^{-1} X)$ and $\psi(h \cdot \mu_0, X)=\psi(\mu_0,h^{-1} X)$, and we obtain the first two claims, since
for any $X\in \sg$ and $Y\in \ngo$ the adjoint maps
$\ad_{\mu_0}(X+Y)$ and $\ad_{\mu_0}(X)$ share the same eigenvalues.
This follows from \cite[Theorem 3.7.3]{Varad84}, since there is a basis for the complexified
Lie algebra $\sg^\CC$ such that all the adjoint maps are upper triangular, and strictly upper
triangular for $Y\in \ngo$.
Regarding the last claim, $\varphi$ is continuous in
$X$ and $\mu$, because the eigenvalues of linear maps vary continuously. Since for $X \in \ag \setminus \{ 0 \}$ the endomorphism $\ad_{\mu_0}(X)$ is not nilpotent, it immediately follows that $\mu_0$ is not of real type if and only if
$\sigma_\ag(\mu_0) = 0$.
\end{proof}
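\begin{remark}
As a simple illustration of these quantities, consider the $3$-dimensional brackets of Table \ref{s3} with $\ag = \RR e_1$. For $\mu = \mu^{\eg(2)}$ the spectrum of $\ad_\mu(e_1)$ is $\{0, \pm i\}$, so $\varphi(\mu, e_1) = 0$ while $\psi(\mu, e_1) = 1$; in particular $\sigma_\ag(\mu) = 0$, and $\mu$ is not of real type. For $\mu = \mu^{\sg_3}$ the spectrum of $\ad_\mu(e_1)$ is $\{0,1,1\}$, hence $\varphi(\mu, e_1) = \psi(\mu, e_1) = 1$ and $\sigma_\ag(\mu) = 1 > 0$, so $\mu$ is of real type, in accordance with Table \ref{s3}.
\end{remark}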
\begin{lemma}\label{lem_realnonflat}
Let $\mu_0\in V(\sg)$ be a solvable Lie bracket of real type. Then, there are no non-zero flat brackets in $\overline{\Gl(\sg) \cdot \mu_0}$.
\end{lemma}
\begin{proof}
Since $(\sg,\mu_0)$ is of real type,
$\varphi(\mu_0,Y) > 0$ for all $Y\in \ag\backslash\{0 \}$. Therefore,
using that the continuous
maps $(\mu,X)\mapsto \varphi(\mu,X),\,\, \psi(\mu,X)$ are homogeneous of degree one in $X$, and that the unit sphere of $\ag$ is compact,
we see that there exists a constant $C_{\mu_0}>0$ such that
$ \psi(\mu_0,Y) \leq C_{\mu_0} \cdot \varphi(\mu_0, Y)$
for all $Y\in \ag$.
Suppose now that $h_k\cdot \mu_0$ converges as $k \to \infty$
to a flat bracket $\bar \mu \in V(\sg)$. Recall that by
\cite{Mln} the bracket $\bar \mu$ is of imaginary type, that is,
$\varphi(\bar \mu, \cdot ) \equiv 0$.
For any fixed $X\in \sg$
we then obtain by Lemma \ref{lem_solveigenvaluesad}
\[
\psi(h_k \cdot \mu_0, X) = \psi(\mu_0, (h_k^{-1} X)_\ag) \leq C_{\mu_0} \cdot \varphi(\mu_0, (h_k^{-1} X)_\ag) = C_{\mu_0} \cdot \varphi(h_k\cdot \mu_0, X) \underset{k\to\infty}\longrightarrow 0\,.
\]
Therefore, $\psi(\bar \mu, \cdot ) \equiv 0$ and $\bar \mu$ is nilpotent by Engel's theorem.
Thus $\bar \mu = 0$ by \cite{Mln}.
\end{proof}
Recall that the \emph{rank} of a solvable Lie algebra is the codimension of its nilradical.
\begin{lemma}\label{lem_realtype}
Among solvable Lie brackets of fixed rank, those of real type form an open set.
\end{lemma}
\begin{proof}
If the rank is zero, the claim is obvious.
Let now $(\mu_k)_{k\in\NN} \subset \Vs$ be a sequence of rank $a \in \NN$
solvable Lie brackets which are not of real type, converging as $k\to \infty$ to a bracket $\mu_0$, also of rank $a$. After acting with suitable orthogonal maps on each $\mu_k$ and passing to a convergent subsequence, we may assume that the nilradicals of $\mu_k, \mu_0$ are a fixed subspace
$\ngo \neq \sg$. The claim now follows by Lemma \ref{lem_solveigenvaluesad},
since $0=\lim_{k\to \infty}\sigma_{\ag}(\mu_k)=\sigma_{\ag}(\mu_0)$.
\end{proof}
\begin{remark}\label{rem_realtype}
There is only one $2$-dimensional non-abelian solvable Lie algebra, and it is of real type.
Table \ref{s3} contains, up to isomorphism, all non-abelian solvable Lie algebras of dimension $3$, according to \cite{ABDO}.
After fixing an orthonormal basis $\{e_1, e_2, e_3\}$, they are described by
$(\ad e_1)|_\ngo \in \glg_2(\RR)$, where $\ngo = \operatorname{span}_\RR\{e_2, e_3\}$ is an abelian ideal.
Since $\mu^{\hg_3} \in \overline{\Gl_3(\RR)\cdot \mu^{\eg(2)}}$,
real type is not an open condition in the space of brackets. Here $\eg(2)$ denotes
the Lie algebra of rigid motions of the Euclidean plane and
$\hg_3$ the $3$-dimensional Heisenberg Lie algebra.
On $\Ss_3$,
the simply-connected solvable Lie group with Lie algebra $\sg_3$,
any homogeneous Ricci flow solution
converges to the soliton on $\Ss_{3,1}$ in Cheeger-Gromov topology. This follows from Remark \ref{rmk_limitsoliton}, since the Lie brackets $\mu^{\sg_3}$
and $\mu^{\sg_{3,1}}$ lie in the same stratum, $\mu^{\sg_{3,1}}\in \overline{\Gl_3(\RR)\cdot \mu^{\sg_3}}$,
and $\Ss_{3,1}$ admits a solvsoliton (isometric to hyperbolic $3$-space).
In dimension $4$, solvable Lie algebras have rank at most $2$. The Lie algebra $\mathfrak{aff}(\CC)$ is the unique example of rank $2$ which is not of real type. From Lemma \ref{lem_realtype} and the fact that the rank is lower semi-continuous on the space of brackets, it follows that there is no sequence of brackets of real type converging to a bracket of type $\mathfrak{aff}(\CC)$. Thus, the set of solvable Lie brackets of real type is not dense in the space of all solvable Lie brackets.
\end{remark}
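\begin{remark}
The degeneration $\mu^{\hg_3} \in \overline{\Gl_3(\RR)\cdot \mu^{\eg(2)}}$ used above can be verified by a direct computation with \eqref{eqn_Gsaction}: for $h_t := {\rm diag}(t^{-1}, 1, t^{-1}) \in \Gl_3(\RR)$, $t > 0$, one has
\[
\big(h_t \cdot \mu^{\eg(2)}\big)(e_1, e_2) = -e_3, \qquad \big(h_t \cdot \mu^{\eg(2)}\big)(e_1, e_3) = t^2 \, e_2, \qquad \big(h_t \cdot \mu^{\eg(2)}\big)(e_2, e_3) = 0.
\]
Letting $t\to 0$ we obtain a limit bracket $\bar \mu$ whose only non-trivial bracket is $\bar\mu(e_1,e_2) = -e_3$, so that $(\sg, \bar\mu)$ is isomorphic to $\hg_3$.
\end{remark}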
{\small
\begin{table}[h]
\[
\begin{array}{ccccc}
& (\ad{e_1})|_{\ngo} & \mbox{constraints} & \mbox{real type} & \mbox{flat} \\
\hline \\[-0.3cm]
\hg_3 & \left[\begin{smallmatrix} 0&0 \\1 &0\end{smallmatrix}\right] & - & \checkmark & - \\[0.1cm]
\sg_3 & \left[\begin{smallmatrix} 1& 0\\1 &1\end{smallmatrix}\right] & - & \checkmark & - \\[0.1cm]
\sg_{3,\lambda} & \left[\begin{smallmatrix} 1&0 \\ 0 &\lambda\end{smallmatrix}\right] & -1\leq\lambda\leq 1 & \checkmark & - \\[0.1cm]
\sg_{3,\lambda}' & \left[\begin{smallmatrix} \lambda&1\\ -1&\lambda\end{smallmatrix}\right] & 0<\lambda & \checkmark & -
\\[0.1cm]
\eg(2) & \left[\begin{smallmatrix} 0&1\\ -1& 0\end{smallmatrix}\right] & - & - & \checkmark
\\[0.1cm]
\hline
\end{array}
\]
\caption{$3$-dimensional solvable Lie algebras}\label{s3}
\end{table}}
\section{Stratification and uniqueness results for the moment map}\label{sec_stratif}
We review in this section the GIT stratification of the space of brackets $\Vs$ with respect to the linear action \eqref{eqn_Gsaction} of the real reductive Lie group $\Gs$. We also recall a uniqueness result for critical points of the moment map which will play a key role in the proof of Theorem \ref{mainthm_uniq}. The reader is referred to \cite{BL17}, \cite{GIT} for a more thorough presentation.
Let us fix a Euclidean vector space $(\sg, \ip)$. Denote also by $\ip$ the induced scalar products on $\glg(\sg) \simeq \sg^* \otimes \sg$ and $\Vs$.
If $\Or(\sg)$ denotes the orthogonal group of $(\sg, \ip)$, $\sog(\sg)$ its Lie algebra, and $\pg := \Sym(\sg,\ip)\subset \glg(\sg)$ the subspace of $\ip$-symmetric endomorphisms, then there is a Cartan decomposition
\begin{equation}\label{eqn_Cartandec}
\Gl(\sg) = \Os \exp(\pg), \qquad \glg(\sg) = \sog(\sg) \oplus \pg\,.
\end{equation}
The maximal compact subgroup $\Os$ of $\Gs$ acts orthogonally on $\Vs$ via \eqref{eqn_Gsaction}, and the elements in $\exp(\pg)$ act by symmetric maps.
At the linear level, there is a corresponding $\glg(\sg)$-representation $\pi : \glg(\sg) \to \End(\Vs)$ given by
\begin{equation}\label{eqn_gsrep}
\big(\pi(A) \mu \big) (\cdot, \cdot) := A \mu(\cdot,\cdot) - \mu(A\cdot, \cdot) - \mu(\cdot, A \cdot), \qquad A\in \glg(\sg), \quad \mu\in \Vs\,,
\end{equation}
which yields $\pi(\sog(\sg)) \subset \sog(\Vs)$, $\pi(\pg) \subset \Sym(\Vs,\ip)$.
Thus, $\Gs$ is a \emph{real reductive Lie group} in the sense of \cite{GIT}, and one can study its linear action on $\Vs$ using real geometric invariant theory.
The \emph{moment map} $\mmm : \Vs \backslash \{0 \} \to \pg$ and its \emph{energy} $\normmm : \Vs\backslash \{ 0\} \to \RR$ are respectively defined by
\begin{equation}\label{eqn_defmmb}
\la \mmm(\mu), \Alpha \ra
= \tfrac1{\Vert \mu\Vert^2} \cdot \la \pi(\Alpha) \mu, \mu\ra \,, \qquad \normmm(\mu) = \Vert {\mmm(\mu)} \Vert^2,
\end{equation}
for all $\Alpha\in \pg$, $\mu \in \Vs \backslash \{ 0\}$.
Notice that the moment map is $\Os$-equivariant:
\begin{equation}\label{eqn_Kequivmm}
\mmm(k \cdot \mu) = k \, \mmm(\mu) \, k^{-1}, \qquad k\in \Os, \quad \mu \in \Vs \backslash \{ 0\}.
\end{equation}
The energy $\normmm$ is therefore $\Os$-invariant. The following theorem shows how $\normmm$ determines a $\Gs$-invariant, ``Morse-type'' stratification of $\Vs \backslash \{0 \}$:
\begin{theorem}\label{thm_stratifb}\cite{GIT}
There exists a finite subset $\bca \subset \pg$
and a collection of smooth, $\Gs$-invariant submanifolds
$\{ \sca_\Beta \}_{\Beta \in \bca}$ of $\Vs$, with the following properties:
\begin{itemize}
\item[(i)]
We have $\Vs\backslash \{ 0\} = \bigcup_{\Beta\in \bca} \sca_\Beta$
and $\sca_\Beta \cap \sca_{\Beta'}=\emptyset$ for $\Beta \neq \Beta'$.
\item[(ii)] We have
$\overline{\sca_\Beta} \backslash \sca_\Beta \subset \bigcup_{\Beta'\in \bca, \Vert \Beta'\Vert > \Vert \Beta \Vert} \sca_{\Beta'}$ (the closure taken in $\Vs \backslash \{0\}$).
\item[(iii)] A bracket $\mu$ is contained in
$\sca_\Beta$ if and only if the negative gradient
flow of $\normmm$ starting at $\mu$
converges to a critical point $\mu_C$ of $\normmm$
with $\mmm(\mu_C) \in \Os \cdot \Beta$.
\end{itemize}
\end{theorem}
We now describe the strata in more detail. For $\Beta\in \pg$ we set
\[
\Beta^+ := \Beta + \Vert \Beta \Vert^2 \Id_\sg\,.
\]
Denote by $\Vr \subset \Vs$ the eigenspace of $\pi(\Beta^+) = \pi(\Beta) - \Vert \Beta \Vert^2 \, \Id_{\Vs}$ corresponding to the eigenvalue $r\in \RR$ (recall that $\pi(\Id_\sg) = - \Id_\Vs$), and consider the subspace
\begin{equation}\label{eqn_defVnn}
\Vnn := \bigoplus_{r \geq 0} \Vr\,.
\end{equation}
There exist subgroups of $\Gs$ adapted to these subspaces. In order to describe them, since
the linear map $\ad(\Beta) : \glgs \to \glgs$, $A \mapsto [\Beta, A]$ is self-adjoint,
we may decompose $\glg(\sg) = \bigoplus_{r\in \RR} \glg(\sg)_r$ as a sum of $\ad(\Beta)$-eigenspaces, and set accordingly
\begin{equation}\label{eqn_guqbeta}
\ggo_\Beta := \glg(\sg)_0 = \ker (\ad(\Beta) ), \qquad \ug_\Beta := \bigoplus_{r> 0} \glg(\sg)_r, \qquad \qg_\Beta := \ggo_\Beta \oplus \ug_\Beta.
\end{equation}
We then denote by
\[
\Gb := \{ g \in \Gs : g \Beta g^{-1} = \Beta \}, \qquad \Ub := \exp(\ug_\Beta), \qquad \Qb := \Gb \Ub,
\]
the centralizer of $\Beta$ in $\Gs$, the unipotent subgroup associated with $\Beta$, and the parabolic subgroup associated with $\Beta$, respectively. Set $\Kb := \Os \cap \Gb$ and consider also
\[
\Hb := \Kb \, \exp(\pg \cap \hg_\Beta), \qquad \hg_\Beta := \{ \Alpha \in \ggo_\Beta : \langle \Alpha, \Beta\rangle = 0\},
\]
a codimension-one reductive subgroup (resp.~ subalgebra) of $\Gb$ (resp.~ $\ggo_\Beta$).
The groups $\Gb$, $\Ub$ and $\Qb$ are closed in $\Gs$, and $\Ub$ is normal in $\Qb$. They satisfy $\Gb \cdot \Vr \subset \Vr$ for all $r$, and $\Qb \cdot \Vnn \subset \Vnn$. The subgroup $\Gb$ is real reductive, with Cartan decomposition $\Gb = \Kb \exp(\pg_\Beta)$, $\pg_\Beta = \pg \cap \ggo_\Beta$, induced from that of $\Gs$. The same holds for $\Hb$, and in fact $\Gb = \exp(\RR \Beta) \times \Hb$ is a direct product. A key property of $\Qb$ is that it is large enough so that we have
\begin{equation}\label{eqn_GOQ}
\Gs = \Os \Qb.
\end{equation}
For a critical point $\mu_C$ of $\normmm$ we set $\Beta := \mmm(\mu_C)$ and define
\[
\Vzeross := \big\{ \mu \in \Vzero : 0\notin \overline{\Hb\cdot \mu} \big\}\,, \qquad \Vnnss := p_\Beta^{-1} \big(\Vzeross\big),
\]
where $p_\Beta : \Vnn \to \Vzero$ denotes the orthogonal projection. The set $\Vzeross$ (resp.~ $\Vnnss$) is open and dense in $\Vzero$ (resp.~ in $\Vnn$).
The proof of Theorem \ref{thm_stratifb} in fact shows that
\begin{equation}
\sca_\Beta =\Os \cdot \Vnnss. \label{eqn_SbetaKU}
\end{equation}
In particular, for any $\mu \in \sca_\Beta$ one can always find $k\in \Os$ such that $k \cdot \mu \in \Vnn$.
\begin{definition}\label{def_gauged}
We say a bracket $\mu \in \Sb \subset \Vs$ is \emph{gauged correctly}
w.r.t.~$\Beta$ if $\mu \in \Vnnss$.
\end{definition}
Later on, we will fix a stratum label $\Beta$ and then exploit \eqref{eqn_SbetaKU} to gauge everything, in order to work on the set $\Vnnss$, which is better adapted to $\Beta$.
Finally, we recall the following uniqueness result for critical points of $\normmm$. We think of it as a generalization of the Kempf-Ness theorem, which gives `uniqueness' of minimal vectors (zeros
of the moment map).
\begin{theorem}\label{thm_GITuniq}\cite{GIT}
For $\mu \in \sca_\Beta$, the set of critical points of $\normmm$ contained in $\overline{\Gs \cdot \mu} \cap \sca_\Beta$ equals $\RR_{>0} \cdot \Os \cdot \mu_C$, where
$\mu_C \in \sca_\Beta$ is a critical point of $\normmm$ with $\mmm(\mu_C) = \Beta$.
\end{theorem}
\section{Uniqueness of solvsolitons}\label{sec_uniq}
The main result of this section is Theorem \ref{thm_uniqsolvsol}, a `solvsoliton analogue' of Theorem \ref{thm_GITuniq}. Before turning to that, we recall the correspondence between left-invariant metrics on
a Lie group and Lie brackets.
Let $\Ss$ be a simply-connected Lie group whose Lie algebra $\sg$ is endowed with a fixed background scalar product $\ip$. Denote by $\mu^\sg \in \Vs$ the Lie bracket of $\sg$. The set $\mca_{\mathsf{left}}(\Ss)$ of left-invariant Riemannian metrics on $\Ss$ can be parameterized by the orbit $\Gs \cdot \mu^\sg \subset \Vs$ as follows: any $g' \in \mca_{\mathsf{left}}(\Ss)$ is determined by a scalar product $\ip'$ on $\sg$, which may be written as $ \ip' = \la h \, \cdot, h \, \cdot\ra$ for some $h\in \Gs$. We then associate to $g'$ the bracket $h \cdot \mu^\sg$. Recall that $\Gl(\sg)$ acts on $\Vs$ via \eqref{eqn_Gsaction}.
Conversely, to a bracket $h \cdot \mu^\sg \in \Gs \cdot \mu^\sg$ we associate the left-invariant metric on $\Ss$ determined by the scalar product $\la h \,\cdot , h\,\cdot\ra$ on $\sg$. Notice that in both directions the map $h$ is not unique; thus this correspondence is one-to-one only when we take into account the action of the groups $\Aut(\sg,\mu^\sg) \simeq \Aut(\Ss)$ and $\Os$:
\begin{equation}\label{eqn_metricsbrackets}
\mca_{\mathsf{left}}(\Ss) \, / \, \Aut(\Ss) \quad \leftrightsquigarrow \quad \Gs \cdot \mu^\sg \, / \, \Os.
\end{equation}
Here $\Aut(\Ss)$ acts by pull-back on $\mca_{\mathsf{left}}(\Ss)$, each orbit consisting of pairwise isometric metrics.
To every Lie bracket $\mu \in \Vs$ there corresponds a Riemannian manifold $(\Ss_\mu, g_\mu)$, where $\Ss_\mu$ is the simply-connected Lie group with Lie algebra $(\sg,\mu)$, and the metric
$g_\mu \in \mca_{\mathsf{left}}(\Ss_\mu)$ is determined by $\ip$ on $\sg \simeq T_ e \Ss_\mu$.
\begin{definition}
A \emph{solvsoliton} $(\Ss, g_{\mathsf{sol}})$ is a solvmanifold for which
\begin{equation}\label{eqn_solvsoliton}
\Ricci_{g_{\mathsf{sol}}} = c \cdot \Id_\sg + D, \qquad c\in \RR, \qquad D\in \Der(\sg),
\end{equation}
where $\Ricci_{g_{\mathsf{sol}}}$ denotes the Ricci endomorphism at $e\in \Ss$ and $\Der(\sg)$ is the Lie algebra of derivations of $\sg$.
The corresponding Lie bracket $ \mu \in \Gs \cdot \mu^\sg$ is also called a solvsoliton.
\end{definition}
We now turn to the main result of this section.
\begin{theorem}\label{thm_uniqsolvsol}
Let $(\sg,\mu)$ be a solvable Lie algebra of real type with $\mu \in \sca_\Beta$. Then, up to scaling
the set of solvsolitons in $\overline{ \Gl(\sg) \cdot \mu}\cap \sca_\Beta$
is contained in a unique $\Or(\sg)$-orbit.
\end{theorem}
This immediately implies
\begin{corollary}\label{cor_uniqsolvsolmetric}
Let $(\Ss, \gsol)$ be a non-flat solvsoliton. Then, any other left-invariant Ricci soliton metric on $\Ss$ is of the form $\alpha \cdot \psi^* \gsol$, for some $\alpha > 0$, $\psi\in \Aut(\Ss)$.
\end{corollary}
\begin{proof}
The group $\Ss$ is of real type by Remark \ref{rem_solvsolreal}. Hence by \cite[Prop.~8.4]{Jbl2015} any left-invariant Ricci soliton $g$ on $\Ss$ is a solvsoliton.
After rescaling $g$, by Theorem \ref{thm_uniqsolvsol} we may assume that $\gsol$ and $g$ have associated solvsoliton brackets $\musol, \mu \in V(\sg)$ with
$\musol = k\cdot \mu$ for some $k\in \Or(\sg)$.
Thus, by \eqref{eqn_metricsbrackets} the metrics $\gsol$ and $g$ are isometric via an automorphism.
\end{proof}
The uniqueness of solvsolitons up to equivariant isometry was known for completely solvable groups: see \cite{Heber1998} for the Einstein case and \cite{solvsolitons} for solvsolitons.
We now work towards a proof of Theorem \ref{thm_uniqsolvsol}. The idea is that starting with a solvsoliton bracket one can explicitly construct a bracket on the same $\Gl(\sg)$-orbit which is a critical point of the energy map $\normmm$, and then Theorem \ref{thm_GITuniq} can be applied.
Let us recall the following formula for the Ricci endomorphism $\Ricci_\mu \in \Syms$ of
a Lie bracket $\mu \in V(\sg)$:
\begin{equation}\label{eqn_Ricmu}
\Ricci_\mu \, = \, \mm_\mu - \unm \kf_\mu - \unm \left(\ad_\mu \mcv_\mu + (\ad_\mu \mcv_\mu) ^t \right)\,.
\end{equation}
Here, $\mm_\mu = \unc \cdot \mmm(\mu) \cdot \Vert \mu \Vert^2$ is a multiple of the moment map $\mmm(\mu)$ defined in \eqref{eqn_defmmb} and
$\la \kf_\mu X, Y \ra = \tr (\ad_\mu X \ad_\mu Y)$ is the endomorphism
associated to the Killing form. The mean curvature vector $\mcv_\mu$ is implicitly defined by $\la \mcv_\mu, X \ra = \tr \ad_\mu X$ for all $X\in\sg$, and $(\cdot)^t$ denotes the transpose with respect to $\ip$. The \emph{modified Ricci curvature} is defined by
\begin{equation}\label{eqn_Ricmod}
\Riccim_\mu \, := \, \mm_\mu - \unm \kf_\mu.
\end{equation}
Moreover, we set $\scalm(\mu)=\tr \Riccim_\mu$. Notice that
for non-flat solvmanifolds $\scalm(\mu)<0$, by Lemmas 3.5 and 3.6 in \cite{BL17}.
In terms of the stratification from Section \ref{sec_stratif}, by \cite[Thm.~6.4 $\&$ Cor.~C.3]{BL17} we have
\begin{proposition}\cite{BL17}\label{prop_solvsol}
Let $\musol \! \in \Vnnss \subset \! \sca_\Beta$ be a solvsoliton with $\scalm(\musol) = -1$. Then,
\[
\Riccim_\musol \, = \, \Beta \, = \, c \cdot \Id_\sg + D, \qquad \, c = - \Vert \Beta \Vert^2, \quad D = \Beta^+ \in \Der(\sg, \musol).
\]
Moreover, $\Beta^+ = \Beta + \Vert \Beta \Vert^2 \cdot \Id_\sg \geq 0$, and its image is the nilradical of $(\sg,\musol)$.
\end{proposition}
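\begin{remark}
As a check of Proposition \ref{prop_solvsol} in the simplest non-trivial case, consider $\musol := 2^{-1/2} \cdot \mu^{\sg_{3,1}}$ with $\lambda = 1$ (notation as in Table \ref{s3}), whose associated solvmanifold is homothetic to hyperbolic $3$-space. A direct computation gives $\mm_{\musol} = {\rm diag}(-\tfrac12, 0, 0)$ and $\kf_{\musol} = {\rm diag}(1,0,0)$, whence
\[
\Riccim_{\musol} = {\rm diag}(-1,0,0) = \Beta, \qquad \scalm(\musol) = -1.
\]
Accordingly, $\Beta = c \cdot \Id_\sg + D$ with $c = -\Vert \Beta \Vert^2 = -1$ and $D = \Beta^+ = {\rm diag}(0,1,1) \in \Der(\sg, \musol)$, and the image of $\Beta^+ \geq 0$ is the nilradical $\ngo = \operatorname{span}_\RR\{e_2, e_3\}$.
\end{remark}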
Next, let us briefly review some of the structural results for solvsolitons from \cite{solvsolitons}. Given a solvsoliton bracket $\mu\in \Vs$ with nilradical $\ngo$, consider the orthogonal decomposition $\sg = \ag \oplus\ngo$. We have that $\mu(\ag,\ag) = 0$, and for all $Y\in \ag$, $X\in \ngo$, it holds that
\begin{equation}\label{eqn_properties}
\left[\ad_\mu Y, (\ad_\mu Y)^t\right] = 0, \qquad \tr \left((\ad_\mu Y) \, (\ad_\mu X)^t \right) = 0\,.
\end{equation}
Moreover, the symmetric endomorphism $\mm_\mu$ defined after equation \eqref{eqn_Ricmu} satisfies
\begin{equation}\label{eqn_mmmuan}
\la \mm_\mu Y, Y \ra = -\unm \Vert {\ad_\mu Y} \Vert^2, \qquad \la \mm_\mu Y, X\ra = 0, \qquad \la \mm_\mu X, X \ra= \la \mm_\nu X, X\ra,
\end{equation}
for all $Y\in \ag$, $X\in \ngo$. Here $\nu : \ngo\wedge \ngo \to \ngo$ denotes the restriction of $\mu$ to $\ngo$ (see \cite[Prop. 4.13]{LafuenteLauret2014b}).
This yields in particular $\mm_\mu|_\ag < 0$, since $\ad_\mu Y = 0$ implies $Y\in \ngo$.
The next lemma shows how to modify a solvsoliton to obtain a critical point of $\normmm$.
\begin{lemma}\label{lem_solvsolmmsol}
Let $\mu\in \sca_\Beta$ be a solvsoliton Lie bracket with $\Riccim_\mu = \Beta$, and let $\sg = \ag \oplus \ngo$ be the orthogonal decomposition where $\ngo$ is the nilradical of $\mu$.
\begin{itemize}
\item[(i)] If $h = \minimatrix{h_\ag}{0}{0}{\Id_\ngo} \in \Gl(\sg)$, then
\[
\mm_{h\cdot \mu} = (h^{-1})^t \, \mm_\mu \, h^{-1}, \qquad \Riccim_{h\cdot \mu} = (h^{-1})^t \, \Riccim_\mu \, h^{-1}.
\]
\item[(ii)] There exists $h = \minimatrix{h_\ag}{0}{0}{\Id_\ngo} \in \Gl(\sg)$ such that $h \cdot \mu$ is a critical point of the norm squared of the moment map $\normmm$, with $\mmm(h\cdot \mu) = \Beta$.
\end{itemize}
\end{lemma}
\begin{proof}
Set $\mub := h \cdot \mu$. Notice that $\mub(\ag, \ag) = 0$ and $\mub|_{\ngo \wedge \ngo} = \nu$. By \cite[Lemma 4.4]{LafuenteLauret2014b}, if $\{ Y_i\}$ is an orthonormal basis of $\ag$ then for $Y\in \ag$, $X\in \ngo$ we have that
\begin{align*}
\langle \mm_\mub Y, Y \rangle &= -\unm \tr \ad_\mub Y (\ad_\mub Y)^t, \\
\langle \mm_\mub X, X \rangle &= \langle \mm_\nu X, X\rangle + \unm \sum \big\langle [\ad_\mub Y_i, (\ad_\mub Y_i)^t] X, X \big\rangle, \\
\langle \mm_\mub Y, X \rangle &= -\unm \tr \ad_\mub Y (\ad_\mub X)^t.
\end{align*}
Since $(\ad_\mu Y)(\ngo) \subset \ngo$ and $\ad_\mu Y|_\ag = 0$, \eqref{eqn_adconj} implies that $\ad_{\mub} Y = \ad_\mu (h^{-1} Y)$ for any $Y\in \ag$. Using that and \eqref{eqn_properties}, \eqref{eqn_mmmuan} one can easily verify the formula for $\mm_\mub$.
Since $\kf_{\mub} = (h^{-1})^t \kf_\mu h^{-1}$ (see \cite[Lemma 3.7]{homRF}), the formula for the modified Ricci curvature also follows.
To prove (ii), we look for a map $h = \minimatrix{h_\ag}{0}{0}{\Id_\ngo} \in \Gl(\sg)$ such that $\mm_{h\cdot \mu} = \Beta$. It will then follow that $h\cdot \mu$ is a critical point of the energy map $\normmm$. Indeed, $\mm_{h\cdot \mu}=\tfrac{1}{4}\mmm(h\cdot \mu)\cdot \Vert h\cdot \mu\Vert^2$, and $\tr \mmm(h\cdot \mu) = -1 = \tr\Beta$ by \cite[Lemma 3.7]{homRF},
from which we deduce $\mmm(h\cdot \mu) = \Beta$. But $\normmm \geq \Vert \Beta \Vert^2$ on $\sca_\Beta$ by Theorem \ref{thm_stratifb}, (iii).
To that end, let $h = \minimatrix{h_\ag}{0}{0}{\Id_\ngo} \in \Gl(\sg)$ satisfy
\[
h^t h = \Id_\sg - \tfrac{1}{2 \Vert \Beta \Vert^2} \cdot \kf_\mu.
\]
Such an $h$ exists if and only if the right-hand side is positive definite. But $\kf_\mu |_\ngo = 0$, thus the restriction of the right-hand side to $\ngo$ equals $\Id_\ngo$. And its restriction to $\ag$ equals $- \Vert \Beta\Vert^{-2} \mm_\mu|_\ag > 0$, by the fact that $\Riccim_\mu = \Beta$ and Proposition \ref{prop_solvsol}. Hence there is at least one such $h$.
Using (i) we obtain
\begin{align*}
\mm_{h\cdot \mu} =& \, \,(h^{-1})^t \, \mm_\mu \, h^{-1} = (h^{-1})^t \, \big( \Riccim_\mu + \unm \cdot \kf_\mu \big) \, h^{-1} \\
=& \, \, (h^{-1})^t \, \big( \Beta^+ - \Vert \Beta \Vert^2 \cdot \Id_\sg + \unm \cdot \kf_\mu \big) \, h^{-1} \\
=& \, \, (h^{-1})^t \, \big( \Beta^+ - \Vert \Beta \Vert^2 \cdot h^t h \big) \, h^{-1} = \Beta,
\end{align*}
where in the last step we are using the identity $(h^{-1})^t \, \Beta^+ \, h^{-1} = \Beta^+$, which follows at once from the special form of $h$ and the properties of $\Beta^+$ stated in Proposition \ref{prop_solvsol}.
\end{proof}
\begin{proof}[Proof of Theorem \ref{thm_uniqsolvsol}]
Let $\mu_1, \mu_2 \in \overline{\Gl(\sg) \cdot \mu^\sg} \cap \sca_\Beta$ be two solvsoliton brackets. By \eqref{eqn_SbetaKU} and Proposition \ref{prop_solvsol}, after acting with $\Or(\sg)$, we may assume that
$\Riccim_{\mu_1} = \Riccim_{\mu_2} = \Beta$
and that the nilradicals of $\mu_1$ and $\mu_2$ both equal $\ngo$. Setting $\sg = \ag\oplus \ngo$,
by Lemma \ref{lem_solvsolmmsol}, (ii)
there exist maps $h_i = \minimatrix{(h_i)_\ag}{0}{0}{\Id_\ngo} \in \Gs$, $i=1,2$, such that $h_i\cdot \mu_i \in \overline{\Gl(\sg) \cdot \mu^\sg}$ are critical points of $\normmm$ with $\mmm(h_i \cdot \mu_i) = \Beta$. Theorem \ref{thm_GITuniq} then yields $h_1\cdot \mu_1 = (kh_2) \cdot \mu_2$ for some $k\in \Or(\sg)$. This implies that $\Beta = \mmm(h_1 \cdot \mu_1) = k \mmm(h_2 \cdot \mu_2) \, k^{-1} = k \, \Beta \, k^{-1}$ by \eqref{eqn_Kequivmm}, and hence $k$ commutes with $\Beta$ and $\Beta^+$, thus
$k = \minimatrix{k_\ag}{0}{0}{k_\ngo}$. After acting on $\mu_2$ with $\minimatrix{\Id_\ag}{0}{0}{k_\ngo^{-1}}$ we may assume $k_\ngo = \Id_\ngo$. Hence,
$ \mu_1 = h \cdot \mu_2$ for $ h = \minimatrix{h_\ag}{0}{0}{\Id_\ngo} \in \Gl(\sg)$.
Finally, using $\Riccim_{\mu_1} = \Riccim_{\mu_2} = \Beta$,
the fact that $\Beta|_\ag = c \cdot \Id_\ag$, see Proposition \ref{prop_solvsol}, and Lemma \ref{lem_solvsolmmsol}, (i),
we get $h_\ag^t h_\ag = \Id_\ag$, from which it follows that $h\in \Or(\sg)$.
\end{proof}
\section{Proof of Theorem \ref{mainthm_uniq}}\label{sec_thmA}
Before turning to the proof of Theorem \ref{mainthm_uniq}, we discuss here one of our main tools, an ODE on the space of brackets which is equivalent to the Ricci flow of left-invariant metrics.
It was shown in \cite{homRF} that each Ricci flow solution of left-invariant metrics on $\Ss$ corresponds to a curve of brackets in $\Vs$ solving the \emph{bracket flow} $\mu'=-\pi(\Ricci_\mu)\mu$. Since the right-hand side is always tangent to the $\Gs$-orbit, the flow preserves orbits. Hence if the initial bracket $\mu(0)$ belongs to a stratum $\Sb$, then also $\mu(t) \in \Sb$ for all $t$ for which the solution exists, since $\Sb$ is $\Gs$-invariant.
Moreover, the flow can be `gauged' so that it consists of brackets which are gauged correctly w.r.t.~$\Beta$ (Def.~ \ref{def_gauged}). To that end, consider the subspace $\kg_{\ug_\Beta} = \{A - A^t : A\in \ug_\Beta \} \subset \sog(\sg)$, a direct complement for $\qg_\Beta$:
\begin{equation}\label{eqn_qku}
\glg(\sg) = \qg_\Beta \oplus \kg_{\ug_\Beta}\,.
\end{equation}
Denote by $(\cdot)_{\qg_\Beta}$ the corresponding linear projection onto $\qg_\Beta$.
\begin{remark}\label{rmk_projqbeta}
In general \eqref{eqn_qku} is not an orthogonal decomposition. We do have an orthogonal decomposition $\glgs = \ug_\Beta \oplus \ggo_\Beta \oplus \ug_\Beta^t$, and for $A = A_{\ug_\Beta} + A_{\ggo_\Beta} + A_{\ug_\Beta^t}\in \glgs$
the projection according to \eqref{eqn_qku} is given by $A_{\qg_\Beta} = A_{\ggo_\Beta} + A_{\ug_\Beta}+(A_{\ug_\Beta^t} )^t$. In particular, for $A \in \Syms$
we have $A_{\qg_\Beta} = A_{\ggo_\Beta} + 2\cdot A_{\ug_\Beta}$
and
\[
\Vert A \Vert \leq \Vert A_{\qg_\Beta}\Vert \leq 2 \cdot \Vert A \Vert\,.
\]
\end{remark}
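\begin{remark}
For a toy instance of the above, let $\sg = \RR^2$ and $\Beta = {\rm diag}(1,0) \in \pg$. Then $\ggo_\Beta$ consists of the diagonal matrices, $\ug_\Beta = \RR \, E_{12}$ and $\ug_\Beta^t = \RR \, E_{21}$, where $E_{ij}$ denote the elementary matrices. For a symmetric matrix $A = \left[\begin{smallmatrix} a& b\\ b& d\end{smallmatrix}\right]$ we get $A_{\qg_\Beta} = \left[\begin{smallmatrix} a& 2b\\ 0& d\end{smallmatrix}\right]$, so that
\[
\Vert A \Vert^2 = a^2 + 2b^2 + d^2 \, \leq \, \Vert A_{\qg_\Beta} \Vert^2 = a^2 + 4b^2 + d^2 \, \leq \, 4 \cdot \Vert A \Vert^2\,,
\]
in accordance with the estimate above.
\end{remark}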
Given a Ricci flow solution $(g(t))\subset \mca_{\mathsf{left}}(\Ss)$ with $g(0) = g_0$, we write $g_0$ at the point $e \in \Ss$ with respect to the fixed scalar product $\ip$ on $\sg \simeq T_e \Ss$,
that is $(g_0)_e(\cdot,\cdot) = \la h_0 \, \cdot, h_0 \, \cdot \ra$, and we may assume that $h_0 \in \Qb$ by \eqref{eqn_GOQ}. We call the following ODE the \emph{gauged bracket flow}:
\begin{equation}\label{eqn_QbgaugedBF}
\frac{{\rm d} \mu}{{\rm d} t} = - \pi \big( ( \Riccim_{\mu} )_{\qg_\Beta} \big) \, \mu, \qquad \mu(0) = h_0\cdot \mu^\sg \in \Qb\cdot \mu^\sg .
\end{equation}
\begin{theorem}\cite{homRF,BL17}\label{thm_BFRFequiv}
Let $(\Ss, g_0)$ be a non-abelian, simply-connected Lie group with left-invariant metric $g_0$ and Lie algebra $(\sg, \mu^\sg)$, and consider a fixed scalar product $\ip$ on $\sg$. Then, the solution $(g(t))_{t \in [0,\infty)}$ to the Ricci flow starting at $g_0$ and the solution $(\mu(t))_{t \in [0,\infty)}$ to the gauged bracket flow \eqref{eqn_QbgaugedBF} starting at $h_0 \cdot \mu^\sg$
differ only by pull-back by time-dependent diffeomorphisms. Here, $h_0\in \Qb$ is such that $(g_0)_e(\cdot,\cdot) = \la h_0\, \cdot, h_0\,\cdot \ra$.
\end{theorem}
The proof shows the existence of Lie group isomorphisms $\varphi_t : \Ss \to \Ss_{\mu(t)}$ such that $g(t) = \varphi_t^* \, g_{\mu(t)}$. They can be obtained from the corresponding Lie algebra isomorphisms $h(t) : (\sg, \mu^\sg) \to (\sg, \mu(t))$, which satisfy the ODE
\begin{equation}\label{eqn_ODEh}
h' = - \big( {\Riccim_{\mu(t)} } \big)_{\qg_\Beta} \cdot h ,\qquad h(0) = h_0.
\end{equation}
The gauging is chosen so that for $\mu(0) \in \Vnnss$ we have $\mu(t) \in \Qb \cdot \mu^\sg \subset \Vnnss$ for all $t \in [0,\infty)$.
\begin{remark}\label{rmk_gbfequiv}
Unlike the bracket flow, the gauged bracket flow \eqref{eqn_QbgaugedBF} is not $\Os$-equivariant. However, it is still $\Kb$-equivariant: for $k\in \Kb$ one has that
\begin{align*}
k \cdot \left( -\pi \big( ( \Riccim_{\mu} )_{\qg_\Beta} \big) \mu \right) =& -\pi \big( k ( \Riccim_{\mu} )_{\qg_\Beta} k^{-1} \big) (k \cdot \mu) = - \pi \big( (k \Riccim_\mu k^{-1})_{\qg_\Beta} \big) (k\cdot \mu) \\
=& -\pi \big( (\Riccim_{k\cdot\mu})_{\qg_\Beta}\big) (k\cdot \mu).
\end{align*}
The second identity follows from Remark \ref{rmk_projqbeta}, since conjugation by $k\in \Kb$ preserves $\ggo_\Beta$ and $\ug_\Beta$, thus for $A\in \Sym(\sg)$ we have that $k A_{\qg_\Beta} k^{-1} = (k A k^{-1})_{\qg_\Beta}$.
\end{remark}
Next, we recall a scale-invariant Lyapunov function,
which is monotone along immortal solutions to \eqref{eqn_QbgaugedBF}:
see \cite[$\S$7]{BL17}.
Consider the following codimension-one subgroup of $\Qb$,
\[
\Slb := \Hb \Ub \subset \Qb,
\]
and let $\slgb$ be its Lie algebra. Assume without loss of generality that $\mu^\sg \in \Vnnss$. Since
$\Qb \cdot \mu^\sg$ is a cone over $\Slb \cdot \mu^\sg$
by \cite{BL17}, for every $\mu \in \Qb \cdot \mu^\sg$ there exists a unique scalar $v_\Beta(\mu) \in \RR_{>0}$ such that
\[
v_\Beta (\mu) \, \mu \in \Slb \cdot \mu^\sg.
\]
We call $v_\Beta$ the \emph{$\Beta$-volume functional}; notice that it depends on the `base bracket' $\mu^\sg$. It has the property that for some constant $C_{\mu^\sg} >0$ and for all $\mu \in \Qb\cdot \mu^\sg$ we have
\begin{equation}\label{eqn_lowbdvbeta}
v_\Beta(\mu) \geq C_{\mu^\sg} \cdot \Vert \mu\Vert^{-1}.
\end{equation}
\begin{theorem}\cite{BL17}\label{thm_lyapunov}
Let $(\mu(t))_{t\in [0,\infty)}$ be a solution to \eqref{eqn_QbgaugedBF} with $\mu(0) \in \Qb\cdot \mu^\sg $. Then,
\[
F_\Beta : \Qb \cdot \mu^\sg \to \RR, \qquad \mu \mapsto v_\Beta(\mu)^2 \cdot \scalm(\mu),
\]
is scale-invariant and evolves along \eqref{eqn_QbgaugedBF} by
\begin{equation}\label{eqn_Fbetanondec}
\ddtbig \, F_\Beta( \mu)\, = \,
2 \cdot v_\Beta(\mu)^2 \cdot \Big( \left\Vert {\Riccim_{\mu} } \right\Vert^2 + \scalm({\mu}) \cdot \langle \Riccim_\mu, \Beta \rangle \Big) \geq 0\,.
\end{equation}
Equality holds for some $t>0$ if and only if $\mu(0)$ is a solvsoliton.
\end{theorem}
The monotonicity of $F_\Beta$ follows from the Cauchy--Schwarz inequality and the estimate
\begin{equation}\label{eqn_ricbetapgeq0}
\la \Riccim_\mu , \Beta \ra \geq \vert \scalm(\mu) \vert \cdot \Vert \Beta\Vert^2,
\end{equation}
which holds on $\Qb\cdot \mu^\sg$. Moreover, even though $F_\Beta$ is defined only on one orbit, the rigidity statement also holds for all potential limits:
\begin{proposition}\cite{BL17}\label{prop_rigidity}
Let $\bar \mu \in \overline{\Qb \cdot \mu^\sg}$ with $\scalm(\bar \mu) = -1$. Then,
\[
\left\Vert {\Riccim_{\bar\mu} } \right\Vert^2 - \langle \Riccim_{\bar\mu}, \Beta\rangle = 0
\]
if and only if $\bar \mu$ is a solvsoliton with $\Riccim_{\bar \mu} = \Beta$ and $\bar \mu \in \sca_\Beta$.
\end{proposition}
We are now in a position to prove the main result of the article.
\begin{proof}[Proof of Theorem \ref{mainthm_uniq}]
Let $(\Ss,g_0)$ be a solvmanifold of real type, with Lie algebra $(\sg, \mu^\sg)$. By Theorem \ref{thm_BFRFequiv}, since $\mu^\sg \neq 0$, to the Ricci flow solution $g(t)$ with $g(0) = g_0$ there corresponds a solution $\mu(t)$ to the gauged bracket flow \eqref{eqn_QbgaugedBF}. Recall that $\mu(t) \in \Qb \cdot \mu^\sg$ for all $t\geq 0$, and that $\Qb\cdot \mu^\sg \subset \sca_\Beta$ for some $\Beta$.
For non-compact homogeneous spaces there exists a normalized bracket
flow keeping the modified scalar curvature $\scalm$ constant. More precisely, since $\scalm(\mu(t)) < 0$ for all $t\geq 0$, as explained in \cite[$\S$3.3]{homRF}, after an appropriate time reparameterization the corresponding $\scalm$-normalized family $\nu(t) := \vert {\scalm ({\mu(t)}) } \vert^{-1/2} \cdot \mu(t)$ solves
\begin{equation}\label{eqn_normalizedgaugedBF}
\frac{{\rm d} \nu}{{\rm d} t} = - \pi \Big( ( \Riccim_{\nu} )_{\qg_\Beta} + \Vert{ \Riccim_\nu}\Vert^2 \cdot \Id_\sg \Big) \, { \nu}, \qquad { \nu}(0) = \nu_0 \in \Qb\cdot \mu^\sg \,.
\end{equation}
Indeed, recall that by \cite{homRF}
we have ${\rm d} \scalm |_\mu (\pi(A)\mu) = - 2 \cdot \la\Riccim_\mu, A\ra$.
Thus $\scalm(\nu(t)) \equiv -1$, since
$ ( \Riccim_{\nu} )_{\qg_\Beta}= \Riccim_{\nu} - (\Riccim_{\nu} )_{ \kg_{\ug_\Beta}}$
and $\Riccim_\nu \perp ( \Riccim_{\nu} )_{ \kg_{\ug_\Beta}}$, see \eqref{eqn_qku}.
We first show that there exist $c_{\mu_0}, C_{\mu_0} > 0$ such that for all $t\geq 0$ it holds
\begin{equation}\label{eqn_nubounded}
0 < c_{\mu_0} \leq \Vert \nu(t) \Vert \leq C_{\mu_0}\,.
\end{equation}
The existence of $c_{\mu_0}$ is clear, since $\scalm$ is continuous, $\scalm(0)=0$ and $\scalm(\nu(t)) \equiv -1$.
On the other hand, if some subsequence $\nu(t_k)$ is unbounded, then
the sequence $\tilde \nu_k := \nu(t_k) / \Vert \nu(t_k) \Vert$ satisfies
$\scalm(\tilde \nu_k) \to 0$ as $k\to\infty$. A subsequential limit would then contradict the real type hypothesis: see Lemma \ref{lem_realnonflat}.
Next, we claim that the $\omega$-limit of the solution $(\nu(t))_{t\in [0,\infty)}$ consists entirely of solvsoliton brackets lying in the same stratum $\sca_\Beta$. This will follow from Proposition \ref{prop_rigidity}, once we show that
the non-negative function
$f(t):= \Vert {\Riccim_{\nu(t)} }\Vert^2 - \langle \Riccim_{\nu(t)}, \Beta\rangle$ tends to $0$ as $t\to \infty$. To see that, notice first that by scale-invariance of the Lyapunov function $F_\Beta$ (Theorem \ref{thm_lyapunov})
we have that $(\grad F_\Beta)_\nu \perp \RR \,\nu$.
Hence,
along the normalized bracket flow \eqref{eqn_normalizedgaugedBF} $F_\Beta$ satisfies the same evolution equation \eqref{eqn_Fbetanondec}. Together with \eqref{eqn_nubounded} and \eqref{eqn_lowbdvbeta} this implies
\[
\ddt F_\Beta(\nu) \geq C'_{\mu_0} \cdot f(t) \geq 0,
\]
for some constant $C'_{\mu_0} >0$. Since $F_\Beta$ is monotone non-decreasing and
$F_\Beta(\nu(t)) < 0$ for all $t\geq 0$, it follows that
$\int_0^\infty f(t) \, dt <\infty$.
On the other hand, notice that with respect to a fixed orthonormal basis of $V(\sg)$
the entries of $\ddt \nu$ are polynomials in the entries of $\nu$. By using again the upper bound in \eqref{eqn_nubounded} we deduce that $f'(t)\leq D_{\mu_0}$
for all $t \geq 0$. It is now clear that $ \lim_{t\to\infty} f(t) = 0$, which proves our claim.
Applying Theorem \ref{thm_uniqsolvsol} we conclude that the $\omega$-limit is contained in a single $\Os$-orbit of non-flat solvsolitons $\Os \cdot \nu_{\mathsf{sol}}$. From this we deduce that one may equivalently normalize by the scalar curvature. Indeed, by $\Os$-invariance of $\scal$, we have
$0 > s_\infty = \lim_{t\to \infty} \scal(\nu(t)) $. Hence, the $\omega$-limit of the $\scal$-normalized bracket family
\begin{eqnarray}
\big(\vert \scal(\mu(t))\vert^{-1/2} \, \mu(t) \big)_{t\in [0,\infty)}=
\big(\vert \scal(\nu(t)) \vert^{-1/2} \, \nu(t) \big)_{t\in[0,\infty)}\label{eqn_normconvergence}
\end{eqnarray}
is contained in $\Or(\sg) \cdot \tilde\nu_{\mathsf{sol}}$, where $ \tilde \nu_{\mathsf{sol}} := \vert s_\infty \vert^{-1/2} \, \nu_{\mathsf{sol}}$.
The bracket $ \tilde \nu_{\mathsf{sol}}$ corresponds to a solvmanifold $(\bar \Ss, \bgsol)$,
which by Theorem \ref{thm_uniqsolvsol} does not depend (up to isometry) on the initial metric $g_0$. By \cite[Corollary 6.20]{Lauret2012}, bracket convergence implies Cheeger-Gromov subconvergence to a space locally isometric to
$(\bar \Ss, \bgsol)$.
If we set $\bar g(t) := \vert \scal(t)\vert \cdot g(t)$, this says that any sequence $(\Ss, \bar g(t_k))_{k\in\NN}$, $t_k\to\infty$, has a subsequence converging in Cheeger-Gromov topology to a Riemannian manifold locally isometric to $(\bar \Ss, \bgsol)$.
By Theorem D.2 in \cite{BL17}, the limit is in fact simply-connected, and hence equal to $(\bar\Ss,\bgsol)$, as claimed.
\end{proof}
\begin{remark}\label{rmk_limitsoliton}
The above proof shows that in fact the $\scal$-normalized Ricci flow converges to the unique soliton whose bracket lies in $\overline{\Gl(\sg)\cdot \mu} \cap \sca_\Beta$, see Theorem \ref{thm_uniqsolvsol}.
\end{remark}
\section{No algebraic collapsing}\label{sec_nocollapse}
Let $(M^n,g_k)_{k\in \NN}$ be a sequence of Riemannian metrics converging in
pointed Cheeger-Gromov topology to a limit space $\big(\bar M^n, \bar g\big)$. Assume that for each $k \in \NN$ there is a connected, $n$-dimensional Lie group $\G_k$ of $g_k$-isometries acting transitively on $M^n$. A natural way of obtaining an isometric group action in the limit is by arguing at the infinitesimal level, as follows: for each $k\in \NN$ one considers $n$ linearly independent $g_k$-Killing fields, which after a suitable normalization subconverge in $C^1$-topology to $n$ linearly independent $\bar g$-Killing fields on $\bar M^n$; see \cite[$\S$6.2]{Heber1998} and \cite[$\S$9]{BL17}. The sequence $(M^n, g_k)_{k\in \NN}$ is called \emph{algebraically non-collapsed}, if the $n$ limit Killing fields span the tangent space at each point of $\bar M^n$. Notice that if this is the case, then after lifting them to the universal cover $(X^n, \bar g)$ of $(\bar M^n,\bar g)$, they can be `integrated' to a simply-transitive, $\bar g$-isometric action
of a simply-connected Lie group $\bar \G$ on $X^n$ \cite[Ch.VI, Thm.3.4]{KobNom96}.
\begin{definition}\label{def_homRFalgnonc}
An immortal homogeneous Ricci flow solution $(M^n,g(t))_{t\in[0,\infty)}$ is called \emph{algebraically non-collapsed}, if any Cheeger-Gromov-convergent sequence of parabolic blow-downs
\[
g_{s_k}(t) := \tfrac{1}{s_k} \cdot g(s_k \, t)\,, \qquad s_k \to \infty,
\]
is algebraically non-collapsed in the above sense.
\end{definition}
We now work towards a proof of Theorem \ref{mainthm_algnonc}. Let $(\Ss, g(t))_{t\in [0,\infty)}$ be an immortal homogeneous Ricci flow solution of left-invariant metrics on a simply-connected solvable Lie group $\Ss$. Recall that $\Ss$ is diffeomorphic to $\RR^n$.
Consider the associated bracket flow solution $(\mu(t))_{t\in[0,\infty)}$, $\mu(0) = \mu_0$ and recall that for a blown-down solution $\tfrac{1}{s} \cdot g(s t)$, $s>0$, the corresponding brackets $\mu_s(t)$ scale like
\begin{equation}\label{eqn_bracketblowdown}
\Vert \mu_s(1) \Vert = \sqrt{s} \cdot \Vert \mu( s)\Vert.
\end{equation}
By \cite{Bhm2014} the solution is Type-III, and we have injectivity radius estimates
by \cite[Thm.~ 8.2]{BL17}.
Hence, Hamilton's compactness theorem \cite{Ham95b}
implies that any sequence of blow-downs $\big(\Ss, (g_{s_k}(t))_{t\in[1,\infty)} \big)_{k\in\NN}$ subconverges to a limit Ricci flow solution in Cheeger-Gromov topology, uniformly over compact subsets of $\Ss \times [1,\infty)$.
We claim that this limit Ricci flow solution may be written as $\big(\bar \Ss, (\bar g(t))_{t\in [1,\infty)} \big)$,
where $\bar \Ss$ is a simply-connected solvable Lie group, in general not isomorphic to $\Ss$.
To that end, notice that by Theorem D.2 in \cite{BL17} the limit is simply connected. Thus we are in a position to apply the results in \cite[$\S$6]{Heber1998} and conclude that there is a solvable Lie group of isometries acting transitively on $(\bar M^n,\bar g)$. More precisely, one may use items (i), (ii) of Step 1 in the proof of Theorem 6.6 from that paper. A quick inspection of the proof shows that the Einstein hypothesis is not used at all for these items, and indeed all that is needed is that the limit space is simply-connected.
By \cite[Lemma 1.2]{GrdWls}, there is also a simply-transitive solvable group of isometries, hence the limit is a solvmanifold.
\begin{lemma}\label{lem_varphibdd}
Suppose that the solution $(g(t))_{t \in [0,\infty)}$ is algebraically non-collapsed.
Then, there exist $0< c_{\mu_0}< C_{\mu_0}$ such that $c_{\mu_0} \leq t \cdot \Vert \mu(t)\Vert^2 \leq C_{\mu_0}$ for all $t\geq 1$.
\end{lemma}
\begin{proof}
To prove the upper bound, assume on the contrary that $s_k \cdot \Vert \mu(s_k) \Vert^2 \to +\infty$ for some sequence $s_k \to \infty$. After extracting a convergent subsequence of blow-downs and using the algebraic non-collapsedness, we may apply \cite[Thm.~ 9.2]{BL17} and conclude that the corresponding brackets are bounded. This is a contradiction: see \eqref{eqn_bracketblowdown}.
The lower bound holds even without the non-collapsedness assumption. To see that, notice that the vector field defining the bracket flow, $\mu \mapsto -\pi(A_\mu) \mu$, can be extended to a smooth vector field on $\Vs$, which is homogeneous of degree $3$. By compactness of the sphere in $\Vs$ we conclude that there is a uniform bound
$ \Vert \pi(A_\mu) \mu \Vert \leq C \cdot \Vert \mu \Vert^3$ for some
$ C>0$, see also \cite[$\S$3]{scalar}. This implies that
$ \ddt \Vert \mu\Vert^2 = 2 \, \big\la \mu, \ddt \mu \big\ra \geq - 2\, C \cdot \Vert \mu \Vert^4$,
which by integrating yields
$ \Vert \mu(t) \Vert^2 \geq \big( 2 \, C \, t + \Vert \mu_0 \Vert^{-2} \big)^{-1}$
for all $t\geq0$. The desired lower bound for $t \geq 1$ now follows.
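In more detail, setting $u(t) := \Vert \mu(t) \Vert^2$, the above reads $u' \geq -2\,C\,u^2$, and hence
\[
\big( u^{-1} \big)' = - \frac{u'}{u^2} \leq 2\,C \qquad \Longrightarrow \qquad \Vert \mu(t) \Vert^2 \geq \frac{1}{2\,C\,t + \Vert \mu_0 \Vert^{-2}}\,,
\]
so that $t \cdot \Vert \mu(t) \Vert^2 \geq \big( 2\,C + \Vert \mu_0 \Vert^{-2} \big)^{-1}$ for all $t \geq 1$.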
\end{proof}
Let $C_n$ denote the norm of the linear map $\pi : \glgs \to \End(\Vs)$ defined in \eqref{eqn_gsrep}.
The next lemma says that the Ricci curvature cannot be too small for a very long time
in the algebraically non-collapsed case.
\begin{lemma}\label{lem_Riccismall}
Suppose that the solution $(g(t))_{t \in [1,\infty)}$ is algebraically non-collapsed.
Then, there exists $\alpha_{\mu_0}>0$ such that if $t \cdot \Vert {\Ricci_{\mu(t)} }\Vert \leq
\frac{1}{8C_n }$ holds for all $t\in [t_1, t_2]$, then $t_2 \leq \alpha_{\mu_0} \cdot t_1$.
\end{lemma}
\begin{proof}
By Lemma \ref{lem_varphibdd}, the function $\varphi:[1,\infty)\to \RR\,;\,\,t \mapsto
t \cdot \Vert \mu(t) \Vert^2$ is bounded. Moreover, if $A_\mu := (\Riccim_\mu)_{\qg_\Beta}$ then by Cauchy-Schwarz and Remark \ref{rmk_projqbeta} we have
\[
t \cdot \la \mu , \pi(A_\mu) \mu \ra
\leq C_n \cdot t \cdot \Vert \mu \Vert^2 \, \Vert A_\mu \Vert
\leq 2 \, C_n \cdot t \cdot \Vert \Riccim_\mu \Vert \, \Vert \mu\Vert^2
\leq 2 \, C_n \cdot t \cdot \Vert \Ricci_\mu \Vert \, \Vert \mu \Vert^2.
\]
Using that, for $t\in [t_1,t_2]$ we obtain
\begin{align*}
\ddt \varphi &= \Vert \mu \Vert^2 + 2 \, t \cdot \la \mu, \ddt \mu \ra = \Vert \mu\Vert^2 - 2 \, t \cdot \la \mu , \pi(A_\mu) \mu \ra\\
& \geq \Vert \mu\Vert^2 - 4 \, C_n \cdot t \cdot \Vert \Ricci_\mu \Vert \, \Vert \mu \Vert^2 \geq \unm \Vert \mu\Vert^2 = \tfrac{1}{2\,t} \cdot \varphi.
\end{align*}
Integrating on $[t_1,t_2]$ one gets $\varphi(t_2) /\varphi(t_1) \geq \sqrt { {t_2}/{t_1}}$,
and the lemma follows.
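Explicitly, by Lemma \ref{lem_varphibdd} we have $c_{\mu_0} \leq \varphi(t) \leq C_{\mu_0}$ for all $t\geq 1$, so that
\[
\sqrt{t_2/t_1} \leq \varphi(t_2)/\varphi(t_1) \leq C_{\mu_0}/c_{\mu_0}\,,
\]
and the statement holds with $\alpha_{\mu_0} := \big( C_{\mu_0}/c_{\mu_0} \big)^2$.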
\end{proof}
We now show that the blow-down limits cannot be flat.
\begin{lemma}\label{lem_Riclowerbd}
Suppose that the solution $(g(t))_{t \in [1,\infty)}$ is algebraically non-collapsed.
Then, there exists $\delta_{\mu_0} > 0$ such that $t \cdot \Vert{ \Ricci_{\mu(t)} }\Vert \geq \delta_{\mu_0}$ for all $t\geq 1$.
\end{lemma}
\begin{proof}
Assume that this is not the case and let $s_k \to \infty$ be a sequence of times with $s_k \cdot \Vert{ \Ricci_{\mu(s_k)} }\Vert \to 0$ as $k\to\infty$. Any convergent subsequence of the corresponding sequence of blow-downs $g_{s_k}(t) = \tfrac{1}{s_k}g (s_k \, t)$ must have a Ricci-flat limit. After passing to such a subsequence, it follows
that there exists $k_0$ such that for all $k\geq k_0$
and all $t\in [1,1+\alpha_{\mu_0}]$ we have $ \left\Vert \Ricci(g_{s_k}(t)) \right\Vert
\leq \tfrac{1}{8C_n(1+\alpha_{\mu_0}) }$. This yields
\[
(s_k \, t) \cdot \Vert \Ricci(g(s_k \, t))\Vert = t \cdot \left\Vert \Ricci(g_{s_k}(t)) \right\Vert \leq \tfrac{1}{8C_n},
\]
for all $ t\in [1,1+\alpha_{\mu_0}]$, thus
$\tilde t \cdot \Vert {\Ricci_{\mu(\tilde t)} }\Vert \leq \tfrac{1}{8C_n} $ for all $\tilde t\in [s_k, (1+\alpha_{\mu_0})s_k]$. But this contradicts Lemma \ref{lem_Riccismall}.
\end{proof}
We are finally in a position to prove Theorem \ref{mainthm_algnonc}:
\begin{proof}[Proof of Theorem \ref{mainthm_algnonc}]
Let $(\Ss, g(t))_{t\in [0,\infty)}$ be an algebraically non-collapsed Ricci flow solution of left-invariant metrics, and let $0 \neq \mu^\sg \in\sca_\Beta$ correspond to the initial metric $g(0)$.
As in the proof of Theorem \ref{mainthm_uniq}, let $\mu(t)$ be the corresponding solution to the gauged bracket flow and $\nu(t) := \vert {\scalm ({\mu(t)}) } \vert^{-1/2} \cdot \mu(t)$ the $\scalm$-normalized solution, which after a time reparameterization solves \eqref{eqn_normalizedgaugedBF}.
Assume that $\Vert \nu(t_k) \Vert \to \infty$ for some sequence $t_k \to \infty$, and let $\tilde \nu_k := \nu(t_k) / \Vert \nu(t_k)\Vert = \mu(t_k) / \Vert \mu(t_k) \Vert$. Then any subsequential limit $\bar \nu$ is a solvable Lie bracket with $\scalm(\bar \nu) = 0$, hence flat (see \cite[Rmk.~3.2(b)]{Heber1998}). On the other hand, $\Vert \mu(t) \Vert \sim 1/\sqrt{t}$ by Lemma \ref{lem_varphibdd}, thus
\[
\big\Vert {\Ricci_{\mu/\Vert \mu \Vert} }\big\Vert \sim \Vert \Ricci_{\sqrt{t} \cdot \mu}\Vert = t \cdot \Vert \Ricci_{\mu}\Vert \geq \delta_{\mu_0} > 0,
\]
thanks to Lemma \ref{lem_Riclowerbd}. This implies that $\bar \nu$ cannot be flat, a contradiction.
Precisely as in the proof of Theorem \ref{mainthm_uniq} it
follows that for some subsequence $s_k \to \infty$ we have $\nu(s_k) \to \nu_{\mathsf{sol}} \in \sca_\Beta$, a solvsoliton. By Remark \ref{rem_solvsolreal}, $\nu_{\mathsf{sol}}$ is of real type. Since the nilradical of solvable brackets in $\sca_\Beta$ has constant dimension (equal to $\rank(\Beta^+)$), for $k$ large enough $\nu(s_k)$ is of real type by Lemma \ref{lem_realtype}. Since $\mu^\sg$ and $\nu(t)$ are isomorphic for all $t$, it follows that $\Ss$ is of real type.
Conversely, assume that $\Ss$ is of real type and let $\mu(t)$ be a bracket flow solution corresponding to a Ricci flow of left-invariant metrics on $\Ss$. By Corollary 9.13 in \cite{BL17} it suffices to show that $ t \cdot \Vert \mu(t) \Vert^2 \leq C_{\mu_0}$ for some constant $C_{\mu_0} > 0$. But this
follows immediately from
\eqref{eqn_normconvergence} and the Type-III behavior
of homogeneous Ricci flow solutions.
\end{proof}
\section{The Einstein case}\label{sec_Einstein}
In this section we prove Theorem \ref{mainthm_Einstein}, an improvement of the above convergence results in the Einstein case, made possible by the linearization computations from Section \ref{sec_linear}.
\begin{proof}[Proof of Theorem \ref{mainthm_Einstein}]
Let $(\nu^*(t))_{t \in [0,\infty)}$ denote a solution to the $\scalm$-normalized gauged
bracket flow (\ref{eqn_normalizedgaugedBF}) keeping $\scalm(\nu^*(t)) \equiv -1$, with
$\nu^*(0) \in \Qb \cdot \mu^\sg$.
By Theorem \ref{mainthm_uniq} we may assume
that for a large time we are as close to an Einstein bracket $\mu_E$ as we like.
The set of Einstein brackets in $\Qb \cdot \mu^\sg$ with $\scalm \equiv -1$
equals $\Kb \cdot \mu_E$ by Theorem \ref{thm_uniqsolvsol} and \cite[Cor.~8.4]{GIT}. Moreover,
by Theorem \ref{thm_BFlin} for such an Einstein bracket $\mu_E$
the tangent space to its $\Slb$-orbit may be decomposed as
$T_{\mu_E}(\Slb \cdot \mu_E) = T_{\mu_E}(\Kb \cdot \mu_E) \oplus V_{\mu_E}$,
where $V_{\mu_E}$ denotes the sum of the eigenspaces of the linearization of \eqref{eqn_normalizedgaugedBF} with negative eigenvalues.
This decomposition is $\Kb$-equivariant, since the gauged bracket flow is so by Remark \ref{rmk_gbfequiv}. Using the normal exponential map of the compact orbit $\Kb\cdot \mu_E$
in $\Slb \cdot \mu_E$ in direction of $V_{\mu_E}$,
we can find coordinates $(x,y)\in U:=(1,3)^k \times (-1,1)^l$
of the orbit $\Slb\cdot \mu_E$ close to $\mu_E$,
where $(x,0)$ parametrizes the $\Kb$-orbit of $\mu_E$ locally and
$(0,y)$ the transversal slice given by $V_{\mu_E}$. In these coordinates
the differential equation (\ref{eqn_normalizedgaugedBF}) reads as $(x,y)' = F(x,y)=(F_1(x,y),F_2(x,y))$ with
$F(x,0)=0$ and
\[
(dF)_{(x,0)} = \left( \begin{array}{cc} 0 & \tfrac{\partial F_1}{\partial y} \\
0 & \tfrac{\partial F_2}{\partial y} \end{array} \right)_{(x,0)}\,,
\]
where $\big(\tfrac{\partial F_2}{\partial y} \big)_{(x,0)}$ has only
eigenvalues with negative real part, say bounded from above by $-\epsilon<0$.
It is easy to see that choosing $y(0)$ small enough one can conclude that
\begin{equation}\label{eqn_expfast}
\nu^*(t) \underset{t\to\infty}\longrightarrow \mu_E, \qquad \hbox{exponentially fast.}
\end{equation}
Next, consider the solution $(\nu(t))_{t\in [0,\infty)}$ to the $\scalm$-normalized bracket flow
\begin{equation*}
\frac{\rm d \nu}{ {\rm d} t} = - \pi \big( \Ricci_{\nu} + \Vert \Riccim_{\nu}\Vert^2 \cdot \Id_\sg \big) { \nu}, \qquad { \nu}(0) = \nu^*(0) \in \Qb\cdot \mu^\sg \,.
\end{equation*}
Since $\nu^*(t)$ is obtained by `gauging' $\nu(t)$, by \cite[$\S$3]{BL17} there exists a smooth family of orthogonal maps $(k(t)) \subset \Or(\sg)$ such that
\begin{equation}\label{eqn_nugaugednu*}
\nu(t) = k(t) \cdot \nu^*(t), \qquad \forall \, \, t\in [0,\infty).
\end{equation}
It might be the case that the $\omega$-limit of $(\nu(t))_{t\in[0,\infty)}$ is not a single bracket. However, by \eqref{eqn_expfast} and compactness of $\Or(\sg)$, it must be contained in $\Or(\sg) \cdot \mu_E$. In particular, since the function $\mu \mapsto \Vert {\Riccim_\mu} \Vert^2$ is $\Or(\sg)$-invariant, there exists a limit $\Vert {\Riccim_{\nu(t)} } \Vert^2 \to c_1$ as $t \to \infty$.
Also by \eqref{eqn_expfast}, and using that the entries of $\Ricci_\mu$ are quadratic in the entries of $\mu$, we have that $\Ricci_{\nu^*(t)} \to c \cdot \Id_\sg$ exponentially fast, since $\mu_E$ is Einstein. Hence, from \eqref{eqn_nugaugednu*} and the $\Or(\sg)$-equivariance of $\mu \mapsto \Ricci_\mu$, we deduce that $\Ricci_{\nu(t)} \to c_2 \cdot \Id_\sg$ as $t\to \infty$, exponentially fast. We thus get exponentially fast convergence
\begin{equation}\label{eqn_vftozero}
\Ricci_{\nu(t)} + \Vert {\Riccim_{\nu(t)} }\Vert^2 \cdot \Id_\sg \underset{t\to\infty}\longrightarrow \alpha \cdot \Id_\sg.
\end{equation}
Taking scalar products against $\Riccim_{\nu(t)}$ we get $\alpha = 0$,
since $\scalm \equiv -1$ and
$\la \Riccim_{\mu}, \Ricci_\mu\ra = \Vert {\Riccim_\mu}\Vert^2$ (see \cite[Lemma 2.1]{warped}).
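Indeed, pairing both sides of \eqref{eqn_vftozero} with $\Riccim_{\nu(t)}$ and letting $t\to\infty$, the left-hand side gives
\[
\la \Ricci_{\nu}, \Riccim_{\nu} \ra + \Vert \Riccim_{\nu} \Vert^2 \cdot \la \Id_\sg, \Riccim_{\nu} \ra = \Vert \Riccim_{\nu} \Vert^2 + \Vert \Riccim_{\nu} \Vert^2 \cdot \scalm(\nu) = 0\,,
\]
while the right-hand side gives $\alpha \cdot \scalm(\nu) = -\alpha$, whence $\alpha = 0$.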
Recall now that by Theorem \ref{thm_BFRFequiv}, for a curve
$(h(t)) \subset \Gl(\sg)$ solving the linear equation
\begin{eqnarray}
h'(t) \,\, = \,\,
- \big( \Ricci_{\nu(t)} + \Vert{ \Riccim_{\nu(t)} }\Vert^2 \cdot \Id_\sg \big)
\cdot h(t)\,, \quad h(0)=\Id_\sg, \label{Uhlnorm1}
\end{eqnarray}
one can recover the corresponding $\scalm$-normalized Ricci flow solution.
By \eqref{eqn_vftozero},
the differential equation (\ref{Uhlnorm1}) can be rewritten as
\begin{eqnarray}
h'(t) &= & \delta(t) \cdot h(t)
\,, \quad h(0)=\Id_\sg\,, \label{Uhlnorm2}
\end{eqnarray}
where $\delta(t)\in \Syms$ converges exponentially fast to $0$ for $t\to \infty$.
We denote by $\sigma(t)\geq 0$ the maximum between $0$ and the largest eigenvalue of $\delta(t)$,
and we set $f(t) := \tr( h(t) \cdot h(t)^T ) = \Vert h(t) \Vert ^2$.
Since $\sigma(t)$ is integrable over $[0,\infty)$, from
\[
\tr( h'(t)h(t)^T ) = \tr \big( \delta(t) \cdot h(t) \cdot h(t)^T \big),
\]
we get a differential inequality $f'(t) \leq 2 \sigma(t) f(t)$, thus
$f(t)$ is bounded above on $[0,\infty)$.
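Indeed, Gr\"onwall's inequality yields
\[
f(t) \leq f(0) \cdot \exp\Big( 2 \int_0^t \sigma(s) \, {\rm d}s \Big) \leq \tr(\Id_\sg) \cdot \exp\Big( 2 \int_0^\infty \sigma(s) \, {\rm d}s \Big) < \infty\,.
\]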
The function $g(t) :=\det(h(t))$ satisfies an
equation $g'(t) = g(t) \cdot s(t)$,
where $\vert s(t)\vert $ is again integrable over $[0,\infty)$.
Thus, there exists a limit $\lim_{t \to \infty}g(t)>0$. Consequently,
we can find a subsequence $(t_i)_{ i \in \NN}$ of times
converging to infinity,
such that $h(t_i) \to h_\infty \in \Gl(\sg)$ as $i \to \infty$.
From \eqref{Uhlnorm2} it follows that $\Vert h'(t)\Vert$ is integrable over $[0,\infty)$, hence the curve $h : [0,\infty) \to \Gs$ has finite length and we must have $\lim_{t \to \infty}h(t)=h_\infty$.
Having established that the $\scalm$-normalized Ricci flow has a non-flat limit, the same holds for the scalar-curvature-normalized solution, as they differ only by scaling. The theorem now follows using the uniqueness of Einstein metrics stated in Corollary \ref{cor_uniqsolvsolmetric}.
\end{proof}
\begin{remark}\label{rem:solnonCinfty}
If the limit bracket is not Einstein but a non-trivial solvsoliton,
then the endomorphism
$\Ricci_{\nu(t)} + \Vert{ \Riccim_{\nu(t)} }\Vert^2 \cdot \Id_\sg$ converges exponentially fast to a derivation $D \neq 0$ of the limit bracket $\musol$. Thus equation (\ref{Uhlnorm2}) becomes
\[
h'(t) = - (\delta(t) + D) \, \cdot \, h(t), \qquad h(0) = \Id_\sg,
\]
with $\delta(t) \to 0$.
It follows that the solution $h(t)$ does not converge.
\end{remark}
\section{The linearization of the bracket flow at a solvsoliton}\label{sec_linear}
We finally compute the linearization of the $\scalm$-normalized gauged bracket flow
\begin{equation}\label{eqn_scalmgaugedBF}
\frac{\rm d \nu}{ {\rm d} t} = - \pi \Big( ( \Riccim_{\nu} )_{\qg_\Beta} + r_\nu \cdot \Id_\sg \Big) \, { \nu}, \qquad { \nu}(0) = \nu_0 \in \Qb\cdot \mu^\sg \,,
\end{equation}
at a solvsoliton bracket $\bar \mu$ which is gauged correctly w.r.t.\ $\Beta$. Here, $r_\nu = \Vert{ \Riccim_\nu}\Vert^2$. Recall that for such a bracket $\mub \in \Vnnss \subset \sca_\Beta$ we have $\Riccim_\mub = \Beta$ and
\begin{equation}\label{eqn_betapder}
(\Riccim_{\mub} )_{\qg_\Beta} + r_\mub \cdot \Id_\sg = \Beta + \Vert \Beta \Vert^2 \cdot \Id_\sg = \Beta^+ \in \Der(\mub)\,,
\end{equation}
thanks to Proposition \ref{prop_solvsol}. Thus, $\mub$ is a fixed point of \eqref{eqn_scalmgaugedBF}.
The evolution equations for $\Riccim_\mu$ stated in \cite[pp.~390]{homRF},
applied to (\ref{eqn_scalmgaugedBF}), imply that
\begin{eqnarray*}
T_\mub \big( (\Qb\cdot \mub) \cap \{ \scalm=-1\}\big) = \big\{ \pi(A)\mub : A\in \qg_\Beta, \langle A, \Riccim_\mub \rangle = 0 \big\}\,.
\end{eqnarray*}
In particular if $\mub$ is a solvsoliton with $\Riccim_\mub = \Beta$ then
\begin{eqnarray}\label{eqn_qbetascalone}
T_\mub \big( (\Qb\cdot \mub) \cap \{ \scalm=-1\} \big) = T_\mub\big(\Slb \cdot \mub\big).
\end{eqnarray}
\begin{theorem}\label{thm_BFlin}
Let $\mub \in \Vnnss \subset \sca_\Beta$ be a solvsoliton bracket with $\Riccim_\mub = \Beta$. Then, the linearization of the $\scalm$-normalized gauged bracket flow \eqref{eqn_scalmgaugedBF} at $\mub$,
\[
L_\mub : T_\mub \big( \Slb \cdot \mub \big) \to T_\mub \big( \Slb \cdot \mub \big),
\]
has kernel given by $\pi(\kg_\Beta)\mub$, and its non-zero eigenvalues are negative.
\end{theorem}
\begin{proof}
We apply the formula for $L_\mub$ given in Lemma \ref{lem_Lmub}. Lemma \ref{lem_Pmupositive} implies that the kernel of $L_\mub$ is contained in $\pi(\kg_\Beta)\mub$. By Lemmas \ref{lem_propertiesPmub} and \ref{lem_Pmupositive}, we may now choose an eigenvector $A\in \slgb$ of both $P_\mub$ and $\ad(\Beta^+)$, with eigenvalues adding up to $c>0$. Then,
\[
L_\mub(\pi(A)\mub) = -\pi \big(P_\mub(A) + [\Beta^+, A] \big) \mub = - c \cdot \pi(A)\mub,
\]
hence $\pi(A)\mub$ is an eigenvector of $L_\mub$ with negative eigenvalue. The theorem follows.
\end{proof}
In the rest of this section $\mub$ will denote a solvsoliton bracket as in Theorem \ref{thm_BFlin}.
\begin{lemma}\label{lem_Lmub}
If $A\in \slgb$ then $L_\mub \big( \pi(A)\mub \big) = -\pi\big( P_\mub(A) + [\Beta^+, A] \big)\mub$,
where
\begin{equation}\label{eqn_defPmu}
P_\mub : \slgb \to \slgb\,; \qquad
A \mapsto \Big( {\rm d} \, {\Riccim} \big|_\mub (\pi(A)\mub) \Big)_{\qg_\Beta} .
\end{equation}
\end{lemma}
\begin{proof}
Since $( \Riccim_{\mub} )_{\qg_\Beta} + r_\mub \cdot \Id_\sg = \Beta^+$ by \eqref{eqn_betapder},
a direct computation yields
\begin{eqnarray}\label{eqn_formulaLmub}
L_\mub \left( \pi(A)\mub \right)
&=&
- \pi\left( P_\mub (A) \right) \mub
- \pi \left( \left({\rm d} r |_\mub (\pi(A) \mub) \right) \cdot \Id_\sg \right) \mub - \pi(\Beta^+) \pi(A)\mub \, .
\end{eqnarray}
Using that $\Beta^+ \in \Der(\mub)$, we obtain
\[
\pi\big( [\Beta^+, A] \big)\mub = \pi(\Beta^+) \pi(A) \mub - \pi(A)\pi(\Beta^+)\mub = \pi(\Beta^+)\pi(A)\mub\,.
\]
On the other hand, by \eqref{eqn_Fbetanondec} we know that $\Vert {\Riccim_\mu} \Vert \geq \vert \scalm_\mu \vert \cdot \Vert \Beta \Vert$ for all $\mu \in \Qb \cdot \mub$, and at $\mub$ equality holds. Thus the first variation of $ \Vert {\Riccim_\mu} \Vert$ at $\mub$ along directions tangent to the subset of brackets with $\scalm = -1$ must vanish, and this amounts to saying that $\mub$ is a critical point for $r_\mu$ restricted to ${\Slb \cdot \mub}$. Therefore, the second term in \eqref{eqn_formulaLmub} vanishes.
Finally, note that the image of $P_\mub$ is contained in $\qg_\Beta$. But since by \eqref{eqn_ricbetapgeq0} $\mub$ is also a minimum for the functional $\mu \mapsto \la \Riccim_\mu, \Beta \ra$ restricted to $\Slb \cdot \mub$,
it follows that the image is contained in the subalgebra
$\slgb$ by its very definition.
\end{proof}
Recall from \eqref{eqn_guqbeta} that $\ad(\Beta) = \ad(\Beta^+) : \glgs \to \glgs$ is a symmetric map, and if $(\glg(\sg)_r)_{r\in \RR}$ denote its pairwise orthogonal eigenspaces, then $\hg_\Beta \subset \ggo_\Beta = \glg(\sg)_0$ and $\ug_\Beta = \bigoplus_{r>0} \glg(\sg)_r$.
\begin{lemma}\label{lem_piAmueigenv}
For $A\in \glg(\sg)_r$ we have that $\pi(A)\mub \in V_{\Beta^+}^r$ (see paragraph before \eqref{eqn_defVnn}).
\end{lemma}
\begin{proof}
Since $\mub \in \Vzero$,
we have $\pi(\Beta^+) \pi(A) \mub = \pi([\Beta^+, A]) \mub + \pi(A) \pi(\Beta^+) \mub = r \cdot \pi(A) \mub.$
\end{proof}
\begin{lemma}\label{lem_propertiesPmub}
The linear maps $P_\mub$, $\ad(\Beta^+) : \slgb \to \slgb$ commute. In particular, $P_\mub$ preserves $\hg_\Beta$ and $\ug_\Beta$, and it satisfies
\[
P_\mub (A) =
\begin{cases}
\unm \cdot \big(S \circ \delta_\mub^t \delta_\mub (A) + A^t \kf_\mub + \kf_\mub A \big), & \qquad A\in \hg_\Beta; \\
\unm \cdot \delta_\mub^t \delta_\mub (A), &\qquad A\in \ug_\Beta.
\end{cases}
\]
Here, $S(A) := \unm (A+A^t)$, and
\[
\delta_\mub : \glg(\sg) \to V(\sg)\,\,;\,\,\, A \mapsto -\pi(A)\mub\,,
\]
and $\delta_\mub^t : (V(\sg),\ip) \to (\glg(\sg),\ip)$ is the usual adjoint map.
\end{lemma}
\begin{proof}
Using the formula for ${\rm d} \, {\Riccim} \big|_\mub (\pi(A)\mub)$ given in (36) and (37) in \cite{homRF} we have
\[
P_\mub(A) = \unm \big( S\circ \delta_\mub^t \delta_\mub(A) \big)_{\qg_\Beta} + \unm \big(A^t \kf_\mub + \kf_\mub A\big)_{\qg_\Beta} \, .
\]
We show that $P_\mub$ preserves the eigenspaces of $\ad(\Beta^+)$,
recalling that by Lemma \ref{lem_Lmub} $P_\mub$ preserves $\slgb$.
First, we claim that the linear map $A \mapsto \delta_\mub^t \delta_\mub(A)$ preserves the eigenspaces of $\ad(\Beta^+)$.
Indeed, if $A_1, A_2 \in \slgb$ are eigenvectors of $\ad(\Beta^+)$ with eigenvalues $r_1\neq r_2$, then
\[
\la \delta_\mub^t \delta_\mub (A_1), A_2 \ra = \la \pi(A_1) \mub, \pi(A_2) \mub\ra = 0
\]
by Lemma \ref{lem_piAmueigenv}, since two different eigenspaces of $\pi(\Beta^+)$ are orthogonal.
For $A\in \ggo_\Beta$ this implies that $S \circ \delta_\mub^t \delta_\mub(A) \in \ggo_\Beta$, and the projection $(\cdot)_{\qg_\Beta}$ is the identity when restricted to $\ggo_\Beta$ (see Remark \ref{rmk_projqbeta}).
For $A\in \glg(\sg)_r \subset \ug_\Beta$, $r>0$, we have that $\delta_\mub^t \delta_\mub(A) \in \glg(\sg)_r$ as well, and the map $(S(\cdot))_{\qg_\Beta}$ is the identity on $\ug_\Beta$ (see again Remark \ref{rmk_projqbeta}). The statement for the first summand thus follows.
Regarding the second term, the decomposition $\sg = \ag \oplus \ngo$ induces natural inclusions $\glg(\ag), \glg(\ngo) \subset \glgs$. By Proposition \ref{prop_solvsol}, we have that $\glg(\ag) \subset \ggo_\beta$. Since the Killing form is trivial on the nilradical, we also have $\kf_\mub \in \glg(\ag)$, or informally, $\kf_\mub = \minimatrix{\star}{0}{0}{0}$. On the other hand, any $A\in \ug_\Beta$ is of the form $A = \minimatrix{0}{0}{\star}{\star}$, thus the map $A \mapsto (A^t \kf_\mub + \kf_\mub A)$ vanishes on $\ug_\Beta$. It clearly preserves $\ggo_\Beta$.
Finally, the formula stated for $P_\mub$ now follows from the previous discussion.
\end{proof}
\begin{lemma}\label{lem_Pmupositive}
The linear map $P_\mub : \slgb \to \slgb$ defined in \eqref{eqn_defPmu} is symmetric, positive semi-definite, and its kernel is given by $\Der(\mub) + \kg_\Beta$.
\end{lemma}
\begin{proof}
For $A\in \kg_\Beta$, by $\Or(\sg)$-equivariance of $\mu\mapsto \Riccim_\mu$ we have that
\[
\Riccim_{\exp(s A) \cdot \mub} = \exp(s A) \, \Riccim_\mub \, \exp(-s A) = \exp(s A) \Beta \exp(-s A) = \Beta,
\]
thus $P_\mub(\kg_\Beta) = 0$.
Recall also that since $\mub \in \Vnnss$, by \cite[Cor.~4.11]{BL17} we have $\Der(\mub) \subset \slgb$.
It remains to show that on the orthogonal complement of
$\Der(\mub) + \kg_\Beta$ in $\slgb$
the map $P_\mub$ is symmetric and positive definite. By Lemma \ref{lem_propertiesPmub} we may argue on $\hg_\Beta$ and $\ug_\Beta$ separately. The formula given in that lemma for $P_\mub |_{\ug_\Beta}$ immediately implies the claim in this case (recall that $\ker \delta_\mub = \Der(\mub)$).
Regarding the restriction to $\hg_\Beta$, using that $P_\mub(\kg_\Beta) = 0$ and the formula from Lemma \ref{lem_propertiesPmub} we need to worry only about the restriction $P_\mub : \pg_\Beta \to \pg_\Beta$, where $\pg_\Beta = \ggo_\Beta \cap \Sym(\sg)$. Using $\sg = \ag \oplus \ngo$, by Proposition \ref{prop_solvsol} $\pg_\beta$ decomposes as
\[
\pg_\Beta = \Sym(\ag) \oplus \pg_\Beta^\ngo, \qquad \pg_\Beta^\ngo := \pg_\Beta \cap \glg(\ngo).
\]
Let us first see that $P_\mub$ maps these two subspaces onto orthogonal subspaces. For the Killing form term this is clear, since $\kf_\mub \in \Sym(\ag)$.
We must thus show that for symmetric maps $A_\ag\in \glg(\ag)$ and
$A_\ngo \in \pg_\Beta^\ngo$ we have $\la \pi(A_\ag) \mub, \pi(A_\ngo) \mub\ra = 0$.
By linearity we may assume that the rank of $A_\ag$ is one, and that $A_\ag \, e_1 = e_1$ for some vector $e_1 \in \ag$ of norm one. Then,
\begin{align*}
\left\la \pi(A_\ag) \mub\, , \, \pi(A_\ngo) \mub \, \right\ra =& \,\, 2 \cdot \left\la \ad_{\pi(A_\ag)\mub} e_1 \, , \, \ad_{\pi(A_\ngo)\mub} e_1 \right\ra \\
=& -2 \cdot \left\la \ad_\mub e_1, [A_\ngo, \ad_\mub e_1] \right\ra \\
=& -2 \tr A_\ngo \, [\ad_\mub e_1, (\ad_\mub e_1)^t],
\end{align*}
and the last expression vanishes since for a solvsoliton $\mub$ we have that $\ad_\mub e_1$ is a normal operator, by \cite[Theorem 4.8]{solvsolitons}.
Having this at hand, we may prove the statement of the lemma separately for $\Sym(\ag)$ and $\pg_\Beta^\ngo$. On $A_\ngo \in \pg_\Beta^\ngo$ we have that $P_\mub = \unm S\circ \delta_\mub^t \delta_\mub$, and the claim is clear. Finally, for $A_\ag \in \Sym(\ag)$, we may apply Lemma \ref{lem_solvsolmmsol}, (i) to obtain
\[
P_\mub(A_\ag) = \tfrac{\rm d}{\rm d s}\big|_0 \Riccim_{\exp(s A_\ag) \cdot \mub} = \tfrac{\rm d}{\rm d s}\big|_0 \exp(-s A^t_\ag) \Riccim_\mub \exp(-s A_\ag) = - A_\ag \Beta - \Beta A_\ag.
\]
Thus, $P_\mub (A_\ag) = 2 \cdot \Vert \Beta \Vert^2 \cdot A_\ag$, since
$\Beta^+|_\ag = 0$, and $A_\ag$ is an eigenvector.
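Indeed, $\Beta^+|_\ag = 0$ means that $\Beta|_\ag = -\Vert \Beta \Vert^2 \cdot \Id_\ag$, and since $A_\ag$ preserves $\ag$ we get
\[
- A_\ag \Beta - \Beta A_\ag = 2 \, \Vert \Beta \Vert^2 \cdot A_\ag\,.
\]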
\end{proof}
\def\acknowledgements#1{\subsubsection*{Acknowledgments}#1}
\def\keywords#1{\vskip.2in\begin{minipage}[t]{1in}%
{\bf Keywords:}\end{minipage}\begin{minipage}[t]{4in}#1\end{minipage}}
\usepackage{caption}
\usepackage{amsmath}
\usepackage{arydshln}
\RequirePackage{lineno}
\usepackage[numbers]{natbib}
\usepackage{graphics}
\pagenumbering{arabic}
\usepackage{siunitx}
\newcommand{\beginsupplement}{%
\setcounter{table}{0}
\renewcommand{\thetable}{S\arabic{table}}%
\setcounter{figure}{0}
\renewcommand{\thefigure}{S\arabic{figure}}%
}
\setlength{\droptitle}{-4\baselineskip}
\pretitle{\begin{center}\Huge\bfseries}
\posttitle{\end{center}}
\title{One-shot learning and behavioral
eligibility traces in sequential decision making}
\author[1,*]{Marco P. Lehmann}
\author[2]{He A. Xu}
\author[1]{Vasiliki Liakoni}
\author[2,**]{Michael H. Herzog}
\author[1,**]{Wulfram Gerstner }
\author[3,**]{Kerstin Preuschoff}
\affil[1]{\normalsize School of Computer and Communication Sciences and Brain-Mind-Institute, School of Life Sciences, \'{E}cole Polytechnique F\'{e}d\'{e}rale de Lausanne, CH-1015 Lausanne EPFL}
\affil[2]{Laboratory of Psychophysics, School of Life Sciences, \'{E}cole Polytechnique F\'{e}d\'{e}rale de Lausanne, CH-1015 Lausanne EPFL}
\affil[3]{Swiss Center for Affective Sciences, University of Geneva, CH-1211 Gen\`{e}ve}
\affil[*]{Corresponding author: <EMAIL>}
\affil[**]{Equal contribution}
\renewcommand\Authands{ and }
\renewcommand{\maketitlehookd}{%
\begin{abstract}
\noindent
\normalsize
In many daily tasks we make multiple decisions before reaching a goal.
In order to learn such sequences of decisions, a mechanism to link earlier actions to later reward is necessary.
Reinforcement learning theory suggests two classes of algorithms solving this credit assignment problem:
In classic temporal-difference learning, earlier actions receive reward information only after multiple repetitions of the task,
whereas models with eligibility traces reinforce entire sequences of actions from a single experience (one-shot).
Here we asked whether humans use eligibility traces.
We developed a novel paradigm to \textit{directly} observe which actions and states along a multi-step sequence are reinforced after a single reward.
By focusing our analysis on those states for which RL with and without eligibility trace make qualitatively distinct predictions, we find direct behavioral (choice probability) and physiological (pupil dilation) signatures of reinforcement learning with eligibility trace across multiple sensory modalities.
\end{abstract}
}
\begin{document}
\captionsetup[figure]{labelfont={bf},name={Fig.},labelsep=period}
\captionsetup[table]{labelfont={bf},name={Table},labelsep=period}
\maketitle
\keywords{ eligibility trace, human learning, sequential decision making, pupillometry, Reward Prediction Error }
\acknowledgements{
This research was supported by
Swiss National Science Foundation (no. CRSII2 147636 and no. 200020 165538),
by the European Research Council (grant agreement no. 268 689, MultiRules),
and
by the European Union Horizon 2020 Framework Program under grant agreement no. 720270 and no. 785907 (Human Brain Project, SGA1 and SGA2)}
\section{Introduction}
In games, such as chess or backgammon,
the players have to perform a sequence of many actions before a reward is received (win, loss).
Likewise in many sports, such as tennis, a sequence of muscle movements is performed until, for example, a successful hit is executed.
In both examples it is impossible to immediately
evaluate the goodness of a single action. Hence the question arises:
How do humans learn sequences of actions from reward provided at the very end of the sequence?
Reinforcement learning (RL) models
\cite{Sutton18}
have been successfully used to describe reward-based learning in humans
\citep{Pessiglione06, Gläscher10, Daw11, Niv12, ODoherty17, Tartaglia17}.
In RL, an action (e.g., moving a token or swinging the arm) leads from an old state
(e.g., configuration of the board, or position of the body)
to a new one.
RL theories can be grouped into two different classes.
In classic one-step algorithms of Temporal-Difference learning (such as TD-0 \cite{Sutton88a}),
information about reward "travels", after each step,
from the new state to the immediately preceding state or action.
Consequently, when exploring a new sequence with a single reward in the final state,
only the last action, from the penultimate state to the last state,
is rewarded.
Because, after a first reward, the reward information cannot reach states or actions that are
two or more steps away from the reward, one-step algorithms are intrinsically slow.
Rapid learning of multi-step sequences, ideally after a single episode ('one-shot' learning), requires an algorithm to keep a memory of past states and actions, making them eligible for later reinforcement.
Such a memory is a key feature of the second class of RL theories, called {\em RL with eligibility trace}, which includes
algorithms with explicit eligibility traces
\citep{Sutton88a,Watkins89,Williams92,Peng96,Singh96} and related reinforcement learning models
\citep{Sutton18,Watkins89,Mnih16,Moore93,Blundell16}.
Eligibility traces are well-established in computational models \citep{Sutton18}, and supported by synaptic plasticity experiments
\citep{Yagishita14, He15, Bittner17, Fisher17, Gerstner18}.
However, it is unclear whether humans use an eligibility trace when learning multi-step decision tasks.
In general, human learning is well described by both classes of reinforcement learning, whereby models with eligibility trace tend to statistically outperform those without \citep{Daw11, Tartaglia17,Bogacz07b, Walsh11}.
However, a direct test between the classes of RL models with and without eligibility trace has never been performed.
Multi-step sequence learning with delayed feedback \citep{Gläscher10, Daw11,Tartaglia17}
offers a way to directly compare the two.
Our question can therefore be reformulated more precisely: Is there evidence for RL with eligibility trace in the form of one-shot learning?
In other words, are actions and states more than one step away from the reward reinforced after a single reward?
And if eligibility traces play a role, how many states and actions are reinforced by a single reward?
To answer these questions, we designed a novel sequential learning task to directly observe which actions and states of a multi-step sequence are reinforced.
We exploit that after a single reward,
models of learning without eligibility traces (our null hypothesis) and with eligibility traces (alternative hypothesis) make qualitatively distinct predictions about changes in behavior and in state evaluation (Fig.~\ref{fig:F1_TaskAndHyp}).
We measure changes in action-selection bias from behavior, and changes in state evaluation from a physiological signal, namely the pupil dilation.
Pupil responses have been previously linked to decision making, and in particular to variables that reflect changes in state value such as expected reward, reward prediction error, surprise, and risk \citep{ODoherty03, Jepma11, Otero11,Preuschoff11}.
By focusing our analysis on those states for which the two hypotheses make distinct predictions after a \textit{single} reward ('one-shot') we find clear behavioral and physiological signatures of reinforcement learning with eligibility trace.
\section{Results}
\begin{figure*}
\centering
\includegraphics[width=0.9\textwidth]{figures/Fig1_TaskAndHyp.pdf}
\caption{\textbf{Experimental design and Hypothesis:}
\textbf{[a]}
Typical state-action sequences of the first two episodes.
At each state, participants execute one of two actions, 'a' or 'b', leading to the next state.
Here, the participant discovered the goal state after randomly choosing three actions:
'b' in state S (Start),
'a' in D2 (two actions from the goal),
and 'b' in D1 (one action from the goal).
Episode 1 terminated at the rewarding goal state.
Episode 2 started in a new state, Y.
Note that D2 and D1 already occurred in episode 1.
In this example, the participant repeated in each state the action which led to the goal in episode 1.
\textbf{[b]}
Reinforcement learning models make predictions about such behavioral biases,
and about learned properties (such as action value $Q$, state value $V$ or TD-errors, denoted as $x$) presumably observable as changes in a physiological measure (e.g. pupil dilation).
\textbf{Null Hypothesis:}
In RL without eligibility traces, only the state-action pair immediately preceding a reward is reinforced, leading to a bias at state D1, but not at D2 (50\%-line).
Similarly, the state value of D2 does not change and therefore the physiological response at the D2 in episode 2 (solid red line) should not differ from episode 1 (dashed black line).
\textbf{Alternative Hypothesis:}
RL with eligibility traces reinforces decisions further back in the state-action history.
These models predict a behavioral bias at D1 and D2, and a learning-related physiological response at the onset of these states after a single reward.
}
\label{fig:F1_TaskAndHyp}
\end{figure*}
Since we were interested in one-shot learning, we needed an experimental multi-step action paradigm that allowed a comparison of behavioral and physiological measures between episode 1 (before any reward) and episode 2 (after a single reward).
Our learning environment had six states plus a goal G (Fig.~\ref{fig:F1_TaskAndHyp} and \ref{fig:F2_Task_Cond_Behav}),
identified by clip-art images shown on a computer screen in front of the participants.
It was designed such that participants were likely to encounter in
episode 2 the same states D1 (one step away from the goal) and/or D2 (two steps away) as in episode 1 (Fig.~\ref{fig:F1_TaskAndHyp} [a]).
In each state, participants chose one out of two actions, 'a' or 'b', and explored the environment until they discovered the goal G (the image of a reward) which terminated the episode.
The participants were instructed to complete as many episodes as possible within a limited time of 12 minutes (Methods).
The first set of predictions applied to the state D1, which served as a control to verify that participants were able to learn, and assign value to, states or actions.
Both classes of algorithms, with or without eligibility trace,
predicted
that effects of learning after the first reward should be reflected in the action choice probability during a subsequent visit of state D1 (Fig.~\ref{fig:F1_TaskAndHyp}[b]).
Furthermore,
any physiological variable that correlates with variables of reinforcement learning theories, such as action value Q, state value V, or TD-error, should increase at the second encounter of D1.
We measured pupil dilation, a known marker for learning-related signals \citep{ODoherty03, Jepma11, Otero11,Preuschoff11}, to assess this effect of learning, and predicted a change in pupil dilation in episode 2 as compared to episode 1 (Fig.~\ref{fig:F1_TaskAndHyp}[b]).
Our second set of predictions concerned state D2.
RL without eligibility trace (null hypothesis), such as TD-0, predicted that the action choice probability at D2 during episode 2 should be at 50 percent, since information about the reward at the goal state G cannot "travel" two steps.
However, the class of RL with eligibility trace (alternative hypothesis) predicted an increase in the probability of choosing the correct action, i.e., the one leading toward the goal.
The two hypotheses also made different predictions about the pupil response to the onset of state D2.
Under the null hypothesis, the evaluation of the state D2 could not change after a single reward.
In contrast, learning with eligibility trace predicted a change in state evaluation, presumably reflected in pupil dilation (Fig.~\ref{fig:F1_TaskAndHyp}[b]).
Participants could freely choose actions, but in order to maximize
encounters with states D1 and D2,
we assigned actions to state transitions 'on the fly'.
In the first episode, all participants started in state $S$ (Figs. \ref{fig:F1_TaskAndHyp} and \ref{fig:F2_Task_Cond_Behav}[a])
and chose either action $a$ or $b$.
Independently of their choice and unbeknownst to the participants,
the first action always brought them to state D2, two steps away from the goal.
Similarly, in D2, participants could freely choose an action but always transitioned to D1,
and with their third action, to G.
These initial actions
determined the assignment of state-action pairs to state transitions for all remaining episodes in this environment.
For example, if, during the first episode, a participant had chosen action $a$ in state D2 to initiate the
transition to D1, then action $a$ brought this participant in all future encounters of D2 to D1 whereas action $b$ brought her from D2 to Z (Fig \ref{fig:F2_Task_Cond_Behav}).
In episode 2, half of the participants started from state Y.
Their first action always brought them
to D2, which they had already seen once during the first episode.
The other half of the participants started in state X and their first action brought them to D1 (Fig.~\ref{fig:F2_Task_Cond_Behav}[b]).
Participants who started episode 2 in state X started episode 3 in state Y and {\em vice versa}.
In episodes 4 to 7, the starting states were randomly chosen from $\{$S, D2, X, Y, Z$\}$.
After 7 episodes, we considered the task as solved, and the same procedure started again in a new environment (see Methods for the special cases of repeated action sequences).
This task design allowed us to study human learning in specific and controlled state sequences, without interfering with the participant's free choices.
\subsection{Behavioral evidence for one-shot learning}
As expected, we found that the action taken in state D1 that led to the rewarding state G was reinforced after episode 1.
Reinforcement was visible as an action bias toward the correct action when D1 was seen again in episode 2 (Fig.~\ref{fig:F2_Task_Cond_Behav}[e]).
This action bias is predicted by many different RL algorithms including the early theories of Rescorla and Wagner
\cite{Rescorla72}.
Importantly, we also found a strong action bias in state D2 in episode 2:
participants repeated the correct action (the one leading toward the goal) in 85\% of the cases.
This strong bias is significantly different from the 50\% chance level ($p<0.001$; Fig.~\ref{fig:F2_Task_Cond_Behav}[f]) and indicates that participants learned to assign a positive value
to the correct state-action pair after a {\em single exposure} to state D2 and a {\em single reward}
at the end of episode 1.
In other words, we found evidence for one-shot learning in a state two steps away from the goal in a multi-step decision task.
This is compatible with our alternative hypothesis, i.e., the broad class of RL 'with eligibility trace'
\citep{Sutton88a, Watkins89,Williams92, Peng96,Singh96,Sutton18,Mnih16,Moore93,Blundell16},
whose algorithms keep explicit or implicit memories of past state-action pairs (see Discussion).
However, it is not compatible with the null hypothesis, i.e., RL 'without eligibility trace'.
In both classes of algorithms, action biases or values that reflect the expected future reward are assigned to states.
In RL 'without eligibility trace', however,
value information collected in a single action step is shared only between neighboring states (for example between states G and D1), whereas in RL 'with eligibility trace' value information can reach state D2 after a single episode.
Importantly, the above argument is both fundamental and qualitative in the sense that it does
not rely on any specific choice of parameters or implementation details of an algorithm.
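This contrast in credit assignment can be illustrated with a toy tabular simulation. The sketch below is not the authors' code; the parameter values, state names, and the simplification that all non-terminal values start at zero are illustrative assumptions.

```python
# Toy illustration (hypothetical parameters): after a single rewarded
# episode S -> D2 -> D1 -> G, one-step Q-learning (Q-0) credits only the
# last state-action pair, whereas Q-lambda also credits pairs further
# back via the eligibility trace.
alpha, gamma, lam = 0.5, 0.9, 0.8
episode = [("S", "b"), ("D2", "a"), ("D1", "b")]  # reward only at the end

def run_episode(use_trace):
    q = {sa: 0.0 for sa in episode}
    trace = {sa: 0.0 for sa in episode}
    for i, sa in enumerate(episode):
        trace = {k: gamma * lam * v for k, v in trace.items()}  # decay
        trace[sa] = 1.0          # current pair becomes fully eligible
        r = 1.0 if i == len(episode) - 1 else 0.0
        delta = r + gamma * 0.0 - q[sa]  # successor values are still 0
        if use_trace:
            for k in q:          # credit every eligible pair
                q[k] += alpha * delta * trace[k]
        else:
            q[sa] += alpha * delta  # credit only the current pair
    return q

q0 = run_episode(use_trace=False)
ql = run_episode(use_trace=True)
print(q0)  # only ('D1', 'b') is non-zero: no action bias expected at D2
print(ql)  # ('D2', 'a') and ('S', 'b') are also credited: bias at D2
```

Whatever the values of `alpha`, `gamma`, and `lam` (with $\lambda>0$), the qualitative outcome is the same: only the model with a trace assigns non-zero value at D2 after a single reward.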
Our finding can be interpreted as
a signature of a behavioral eligibility trace in human multi-step decision making and complements the well-established synaptic eligibility traces observed in animal models
\citep{Yagishita14, He15, Bittner17, Fisher17, Gerstner18}.
We wondered whether the observed one-shot learning in our multi-step decision task depended
on the choice of stimuli. If clip-art images helped participants to construct an imaginary story
(e.g., with the method of loci \citep{Yates66}) in order to rapidly memorize state-action associations, the effect should disappear with other stimuli.
We tested participants in environments where states were defined by acoustic stimuli (2nd experiment: 'sound' condition)
or by the spatial location of a black-and-white rectangular grid on the grey screen
(3rd experiment: 'spatial' condition; see Fig.~\ref{fig:F2_Task_Cond_Behav} and Methods).
Across all conditions, results were qualitatively similar (Fig.~\ref{fig:F2_Task_Cond_Behav}[f]):
not only the action directly leading to the goal (i.e., the action in D1) but also
the correct action in state D2 was chosen in episode 2 with a probability significantly different from that of a random choice.
This behavior is consistent with the class of RL with eligibility trace, and excludes all algorithms
in the class of RL without eligibility trace.
\begin{figure*}
\centering
\includegraphics[width=0.55\textwidth]{figures/Fig2_StructureAndResultBehavior.pdf}
\caption{ \textbf{A single delayed reward reinforces state-action associations.}
\textbf{[a]}
Structure of the environment: 6 states, 2 actions, rewarded goal 'G'.
Transitions (arrows) were predefined, but actions were attributed to transitions {\em during} the experiment.
Unbeknownst to the participants, the first actions always led through the sequence 'S' (Start), 'D2' (2 steps before goal), 'D1' (1 step before goal) to 'G' (Goal).
Here, the participant chose actions 'b', 'a', 'b' (underlined boldface).
\textbf{[b]}
Half of the experiments started episode 2 in X, always leading to D1, where we tested whether the action rewarded in episode 1 was repeated.
\textbf{[c]}
In the other half of experiments, we tested the decision bias in episode 2 at D2 ('a' in this example) by starting from Y.
\textbf{[d]}
The same structure was implemented in three conditions.
States are identified by location (\textit{Spatial} condition, 22 participants, \textit{top} row in Figures [d], [e] and [f]),
by unique short sounds (\textit{Sound} condition, 15 participants, \textit{middle} row),
or by unique images (\textit{Clip-art} condition, 12 participants, \textit{bottom} row).
Red arrows in the \textit{Spatial} condition illustrate an example sequence S, D2, D1, G.
\textbf{[e]}
Action selection bias in state D1, in episode 2, averaged across all participants.
\textbf{[f]}
In all three conditions the action choices at D2 were significantly different from chance level (dashed horizontal line) and biased toward the actions leading to reward in episode 1.
Error bars: SEM, $^{*} p<0.05$, $^{***} p<0.001$.
For clarity, actions are labeled 'a' and 'b' in [e] and [f], consistent with panels [a] - [c], even though actual choices of participants varied.
}
\label{fig:F2_Task_Cond_Behav}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[width=\textwidth]{figures/Fig3_ResultPupilE1E2.pdf}
\caption{
\textbf{Pupil dilation reflects one-shot learning.}
\textbf{[a]} Pupil responses to state D1 are larger during episode 2 (red curve)
than during episode 1 (black).
\textbf{[b]} Pupil responses to state D2 are larger during episode 2 (red curve)
than during episode 1 (black).
\textit{Top row}: spatial, \textit{middle row}: sound, \textit{bottom row}: clip-art condition.
Pupil diameter
averaged across all participants in units of standard deviation (z-score, see Methods),
aligned at stimulus onset and plotted as a function of time since stimulus onset.
Thin lines indicate the pupil signal $\pm$SEM.
Green lines indicate the time interval during which the two curves differ significantly ($p<FDR_{\alpha}=0.05$).
Significance was reached at a time $t_{min}$, which depends on the condition and the state:
\textit{spatial D1:} $t_{min}= 730$ ms (22, 131, 85);
\textit{spatial D2:} $t_{min}= 1030$ ms (22, 137, 130);
\textit{sound D1:} $t_{min}= 1470$ ms (15, 34, 19);
\textit{sound D2:} $t_{min}= 1280$ ms (15, 35, 33);
\textit{clip-art D1:} $t_{min}= 970$ ms (12, 39, 19);
\textit{clip-art D2:} $t_{min}= 980$ ms (12, 45, 41)
(numbers in brackets: number of participants, number of pupil traces in episodes 1 and 2, respectively).
\textbf{[c]}
Participant-specific mean pupil dilation at state D2 (averaged over the interval [1000ms, 2500ms]) before (black dot) and after (red dot) the first reward.
Grey lines connect values of the same participant. Differences between episodes are significant
(paired t-test, p-values indicated in the Figure).
}
\label{fig:PupilDiameter_GrandAvg}
\end{figure*}
\subsection{Reinforcement learning with eligibility trace is reflected in pupil dilation}
We then investigated the time-series of the pupil diameter.
Both the null and the alternative hypotheses predict a change in the evaluation of state D1 when comparing the second with the first encounter.
Therefore, if the pupil dilation indeed serves as a proxy for a learning-related state evaluation (be it Q-value, V-value, or TD-error), we should observe a difference between the pupil response to the onset of state D1 before (episode 1) and after (episode 2) a single reward.
We extracted (Methods) the time-series of the pupil diameter,
focused on the interval [0s, 3s] after the onset of
states D2 or D1, and
averaged the data across participants and environments
(Fig.~\ref{fig:PupilDiameter_GrandAvg}, black traces).
We observed a significant change in the pupil dilatory response to stimulus D1 between
episode 1 (black curve) and episode 2 (red curve).
The difference was computed per time point (paired samples t-test);
significance levels were adjusted to control for false discovery rate (FDR, \cite{Benjamini95}) which is a conservative measure given the temporal correlations of the pupillometric signal.
This result suggests that participants change the evaluation of D1 after a single reward, and that this change is reflected in pupil dilation.
Importantly, the pupil dilatory response to the state D2 was also significantly stronger in episode 2 than in episode 1.
Therefore, if pupil diameter is correlated with the state value $V$, the action value $Q$, the TD-error, or a combination thereof, then the class of
RL without eligibility trace must be excluded as an explanation of the pupil response (i.e., we can reject the null hypothesis in Fig.~\ref{fig:F1_TaskAndHyp}).
However, before drawing such a conclusion we controlled for correlations of pupil response with other parameters of the experiment.
First, for visual stimuli, pupil responses changed with stimulus luminance.
The rapid initial contraction of the pupil observed in the clip-art condition (bottom row in Fig.~\ref{fig:PupilDiameter_GrandAvg}) was a response to the 300 ms display of the images.
In the spatial condition, this initial transient was absent, but the difference in state D2 between episode 1 and episode 2 was equally significant.
For the {\em sound} condition, in which stimuli were longer on average (Methods), the significant separation of the curves occurred slightly later than in the other two conditions.
A paired t-test of differences showed that, across all three conditions,
pupil dilation changes significantly between episodes 1 and 2 (Fig.~\ref{fig:PupilDiameter_GrandAvg}[c];
paired t-test, p<0.001 for the \textit{spatial} condition, p<0.01 for the two others).
Since in all three conditions luminance is identical in episodes 1 and 2, luminance cannot explain the observed differences.
Second, we checked whether
the differences in the pupil traces could be explained by the novelty of a state during episode 1, or familiarity with the state in episode 2 \citep{Otero11}, rather than by reward-based learning.
In a control experiment,
a different set of participants saw a sequence of states, replayed from the main experiment.
In order to ensure that participants were focusing on the state sequence and engaged in the task, they had to push a button in each state (freely choosing either 'a' or 'b'), and count the number of states from start to goal.
Stimuli, timing and data analysis were the same as in the main experiment.
The strong difference after $1000\,ms$ in state D2 that we observed in Fig.~\ref{fig:PupilDiameter_GrandAvg}[b] was absent in the control experiments (Fig.~\ref{fig:Control_PupilDiameter}),
indicating that the significant differences in pupil dilation in response to state D2 cannot be explained by novelty or familiarity alone. The findings in the control experiment also exclude other interpretations of correlations of pupil diameter, such as memory formation in the absence of reward.
In summary, across three different stimulus modalities, the single reward received at the end of the first episode strongly influenced the pupil responses to the same stimuli later in episode 2.
Importantly, this effect was observed not only in state D1 (one step before the goal)
but also in state D2 (two steps before the goal).
Furthermore, a mere engagement in button presses while observing a sequence of stimuli, as in the control experiment,
did not evoke the same pupil responses as the main task.
Together these results suggested
that the single reward at the end of the first episode triggered
increases in pupil diameter during later encounters of the same state.
The increases observed in state D1 are consistent with an interpretation that pupil diameter reflects state value $V$, action value $Q$, or TD-error,
but they do not inform us whether the $Q$-value, $V$-value, or TD-error is estimated by the brain using RL with or without eligibility trace.
However,
the fact that very similar changes are also observed in state D2 excludes the possibility that the learning-related contribution to the pupil diameter
can be predicted by RL without eligibility trace.
\subsection{Estimation of the time scale of the behavioral eligibility trace using Reinforcement Learning Models}
Given the behavioral and physiological evidence for RL 'with eligibility trace', we wondered whether our findings are consistent with earlier studies \citep{Daw11, Tartaglia17, Bogacz07b} where several variants of reinforcement learning algorithms were fitted to the experimental data.
We considered algorithms with and (for comparison) without eligibility trace.
Eligibility traces $e_n(s,a)$ can be modeled as a memory of past state-action pairs $(s,a)$ in an episode.
At each discrete time step $n$, the eligibility of the current state-action pair was set to 1,
while that of all others decayed by a factor $\gamma\lambda$ according to \citep{Singh96}
\begin{align}
e_{n}(s,a) &= \left\{\begin{array} {l l}
1 &\text{if } s = s_n, a = a_n \\
\gamma \lambda e_{n-1}(s,a) &\text{otherwise}.\\
\end{array}\right.
\label{eq:ET_lambda}
\end{align}
The parameter $\gamma \in [0,1]$ exponentially discounts a distal reward, as commonly described in neuroeconomics \citep{Glimcher13} and machine learning \cite{Sutton18}, and $\lambda \in [0,1]$ controls the decay of the eligibility trace, where the limit case $\lambda =0$ can be interpreted as no memory (no eligibility trace).
At the beginning of each episode all twelve eligibility trace values
(two actions for each of the six decision states) were set to $e_n(s,a)=0$.
We considered eight common algorithms to explain the behavioral data:
Four algorithms belonged to the class of RL with eligibility traces.
The first two, \textit{SARSA-$\lambda$} and \textit{Q-$\lambda$}
(see Methods, Eq. \ref{eq:QLearnQupd})
implement a memory of past state-action pairs by
an eligibility trace as defined in Eq. \ref{eq:ET_lambda};
as a member of the Policy-Gradient family, we implemented a variant of \textit{Reinforce} \citep{Williams92, Sutton18}, which memorizes all state-action pairs of an episode.
A fourth algorithm with eligibility trace is the 3-step Q-learning algorithm
\citep{Watkins89,Sutton18,Mnih16},
which keeps memory of past states and actions over three steps (see Discussion and Methods).
From the model-based family of RL, we chose the \textit{Forward Learner} \cite{Gläscher10}, which does not memorize state-action pairs but learns a state-action-next-state model and uses it for offline updates of action values.
The \textit{Hybrid Learner} \citep{Gläscher10}
combines the \textit{Forward Learner} with \textit{SARSA-0}.
As a control, two algorithms belonged to the class of RL without eligibility traces (thus modeling the null hypothesis): \textit{SARSA-$0$} and \textit{Q-$0$}.
We found that the four RL algorithms with eligibility trace explained human behavior better than the
\textit{Hybrid Learner}, which was the top-scoring among all other RL algorithms.
Cross-validation confirmed that our ranking based on
the Akaike Information Criterion (AIC, \cite{Akaike74}; see Methods) was robust.
According to the Wilcoxon rank-sum test,
the probability that the \textit{Hybrid Learner} ranks better than one of the three RL algorithms with explicit eligibility traces was below 14$\%$ in each of the conditions and below 0.1$\%$ for the aggregated data ($p<0.001$, Table \ref{tab:AIC_CV_pVals} and Methods). The models \textit{Q}-$\lambda$ and \textit{SARSA}-$\lambda$ with eligibility trace
each performed significantly better than the corresponding models
\textit{Q-$0$} and \textit{SARSA-$0$} without eligibility trace.
Since the ranks of the four RL algorithms with eligibility traces were not significantly different, we focused on one of these, viz. \textit{Q-$\lambda$}. We
wondered whether the parameter $\lambda$ that characterizes the decay of the eligibility trace in Eq. \ref{eq:ET_lambda}
could be linked to a time scale.
To answer this question, we proceeded in two steps.
First, we analyzed the human behavior in discrete
time steps corresponding to state transitions.
We found that the best fitting values (maximum likelihood, see Methods) of the eligibility trace parameter $\lambda$ were 0.81 in the \textit{clip-art}, 0.96 in the \textit{sound}, and 0.69 in the \textit{spatial} condition (see Fig.~\ref{fig:MCMC_Posterior_ETonly}).
These values
are all significantly larger than zero (p<0.001) indicating the presence of an eligibility trace consistent with our findings in the previous subsections.
In a second step,
we modeled the same action sequence in continuous time, taking into account the measured inter-stimulus interval (see Methods).
In this continuous-time version of the eligibility trace model, both the discount factor $\gamma$ and the decay factor $\lambda$ were integrated into a single time constant $\tau$ that describes the decay of the memory of past state-action associations in continuous time.
We found maximum likelihood values for $\tau$ around 10 seconds (Fig.~\ref{fig:MCMC_Posterior_ETonly}), which implies that an action taken 10 seconds before a reward was reinforced and
associated with the state in which it was taken -- even if one or several decisions happened in between (see Discussion).
Thus eligibility traces, i.e. memories of past state-action pairs, decay over about 10 seconds and
can be linked to a reward occurring during that time span.
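The continuous-time reading can be sketched as follows. This is a hedged illustration assuming a simple exponential decay; the exact parameterization in Methods (Eq.~\ref{eq:ET_tau}) may differ, and $\tau = 10$ s is taken as the order of magnitude reported by the fits.

```python
import math

# Sketch of a continuous-time eligibility trace: the memory of a past
# state-action pair decays as exp(-dt/tau), with tau ~ 10 s as suggested
# by the behavioral model fits (assumed form, not the exact Methods eq.).
TAU = 10.0  # seconds

def eligibility(delay_s, tau=TAU):
    """Remaining eligibility of a pair visited delay_s seconds ago."""
    return math.exp(-delay_s / tau)

# A decision taken 10 s before the reward retains exp(-1) ~ 37% of its
# eligibility and can still be reinforced, even if other decisions
# happened in between; after 30 s little eligibility remains (~5%).
print(round(eligibility(10.0), 2))
print(round(eligibility(30.0), 2))
```

Note that the decay depends only on elapsed time, not on the number of intervening decisions, which is what distinguishes this reading from the per-step decay of Eq.~\ref{eq:ET_lambda}.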
\begin{table}
\centering
\begin{tabular}{c}
\includegraphics[width=0.95\textwidth]{figures/Tbl_AICandCV.pdf}
\end{tabular}
\caption{ \textbf{Models with eligibility trace explain behavior significantly better than alternative models}.
Four reinforcement learning models with eligibility trace (Q-$\lambda$, REINFORCE, SARSA-$\lambda$, 3-step-Q), two model-based algorithms (Hybrid, Forward Learner), two RL models without eligibility trace (Q-0, SARSA-0), and a null-model (Biased Random, Methods) were fitted to the human behavior, separately for each experimental condition (spatial, sound, clip-art).
Models with eligibility trace ranked higher than those without
(lower Akaike Information Criterion, AIC, evaluated on all participants performing the condition).
The ranking is stable as indicated by the sum of $k$ rankings (column \textit{rank sum})
on test data, in $k$-fold crossvalidation (Methods).
P-values refer to the following comparisons:
P(a): Each model in the \textit{with eligibility trace} group was compared with the best model \textit{without eligibility trace} (Hybrid in all conditions); models for which the comparison is significant are shown in bold.
P(b): \textit{Q-0} compared with \textit{Q-$\lambda$}.
P(c): \textit{SARSA-0} compared with \textit{SARSA-$\lambda$}.
P(d): \textit{Biased Random} compared with the second-to-last model, which is the \textit{Forward Learner} in the clip-art condition and \textit{SARSA-0} in the two others.
In the \textbf{Aggregated} column, we compare the same pairs of models, taking into account all ranks across the three conditions.
All algorithms with eligibility trace explain the human behavior better than algorithms without eligibility trace.
Differences among the four models with eligibility trace are not significant.
In each comparison, $k$ pairs of individual ranks are used to compare pairs of models and obtain the indicated p-values (Wilcoxon rank-sum test, Methods).
} \label{tab:AIC_CV_pVals}
\end{table}
\begin{figure}
\centering
\includegraphics[width=0.9\textwidth]{figures/MCMC_Posterior_ETonly.pdf}
\caption{\textbf{Eligibility for reinforcement decays with a time-scale $\tau$ in the order of 10 seconds.}
The behavioral data of each experimental condition constrain the free parameters of the model \textit{Q-$\lambda$} to the ranges indicated by the blue histograms (see Methods and Fig.~\ref{fig:QLearn_MCMC_posterior}).
\textbf{[a]}
Distribution over the eligibility trace parameter $\lambda$ in Eq. \ref{eq:ET_lambda} (discrete time steps).
Vertical black lines indicate the values that best explain the data (maximum likelihood, see Methods).
All values are significantly different from zero.
\textbf{[b]}
Modeling eligibility in continuous time with
a time-dependent decay (Methods, Eq. \ref{eq:ET_tau}), instead of a discrete per-step decay.
The behavioral data constrain the time-scale parameter $\tau$ to around 10 seconds.
Values in the column \textit{All} are obtained by fitting $\lambda$ and $\tau$ to the aggregated data of all conditions.
}
\label{fig:MCMC_Posterior_ETonly}
\end{figure}
\section{Discussion}
Eligibility traces provide a mechanism for learning temporally extended action sequences from a single reward (one-shot).
While one-shot learning is a well-known phenomenon for tasks such as image recognition
\cite{Standing73,Brady08} and one-step decision making \cite{Duncan16, Greve17, Rouhani18},
it has so far not been linked to Reinforcement Learning (RL) with eligibility traces in multi-step decision making.
In this study, we asked whether humans use eligibility traces when learning long sequences from delayed feedback.
We formulated mutually exclusive hypotheses, which predict directly observable changes in behavior and in physiological measures when learning with or without eligibility traces.
Using a novel paradigm, we could reject the null hypothesis of learning without eligibility trace in favor of the alternative hypothesis of learning with eligibility trace.
Our multi-step decision task shares aspects
with earlier work in the neurosciences
\citep{Pessiglione06, Gläscher10, Daw11, Niv12, ODoherty17},
but overcomes their limitations (i) by using a recurrent graph structure of the environment
that enables relatively long episodes \citep{Tartaglia17},
and (ii) by implementing an ``on-the-fly'' assignment rule for state-action transitions during the first episodes.
This novel design allows the study of human learning in specific and controlled conditions, without interfering with the participant's free choices.
In the quantitative analysis, RL models with eligibility trace explained the behavioral data significantly better than the best tested RL models without.
However, the reinforcement learning literature contains several alternative algorithms that would also account for one-shot learning without relying on the explicit eligibility traces formulated in Eq. \ref{eq:ET_lambda}.
First, $n$-step reinforcement learning algorithms \cite{Sutton18,Watkins89,Mnih16}
compare the value of a state not with that of its direct neighbor but of neighbors that are $n$ steps away. These algorithms are closely related to eligibility traces and in certain cases even mathematically equivalent \cite{Sutton18}.
Second, reinforcement learning algorithms with storage of past sequences \cite{Moore93,Blundell16,Mnih16}
enable the offline replay of the first episode so as to update values of states far away from the goal.
While these approaches are formally different from eligibility traces, they nevertheless implement the idea of eligibility traces
as memory of past state-action pairs \cite{Crow68,Fremaux16}, albeit
in a different algorithmic framework.
For example, prioritized sweeping with small backups \cite{Seijen13} is an offline algorithm that is, if applied
to our deterministic environment after the end of the first episode, equivalent to both episodic control \cite{Brea17a} and an eligibility trace.
Interestingly, the two model-based algorithms (\textit{Forward Learner} and \textit{Hybrid}) would in principle be able to explain one-shot learning since reward information is spread, after the first episode, throughout the model, via offline Q-value updates. Nevertheless, when behavioral data from our experiments were fitted across all 7 episodes, the two model-based algorithms performed significantly worse than the RL models with explicit eligibility traces.
Since our experimental design does not allow us to distinguish between these different algorithmic implementations of closely related ideas, we put them all in the class of RL with eligibility traces.
Importantly, RL algorithms with explicit eligibility traces
\citep{Sutton88a,Williams92, Peng96,Fremaux16,Izhikevich07}
can be mapped to known synaptic and circuit mechanisms
\citep{Yagishita14, He15, Bittner17, Fisher17, Gerstner18}.
A time scale of the eligibility trace of about 10 seconds in our experiments is in the range of, but slightly longer than, the time scales observed for dopamine-modulated plasticity in the striatum \citep{Yagishita14},
serotonin- and norepinephrine-modulated plasticity in the cortex \citep{He15},
or complex-spike plasticity in hippocampus \citep{Bittner17},
but shorter than the time scales of minutes reported in hippocampus \citep{Brzosko17}.
The basic idea for the relation of eligibility traces as in Eq. \ref{eq:ET_lambda} to
experiments on synaptic plasticity is that choosing
action $a$ in state $s$ leads to co-activation of neurons and leaves a trace at the synapses connecting
those neurons.
A later phasic neuromodulator signal will transform the trace into a change of the synapses so that taking action $a$ in state $s$ becomes more likely in the future \cite{Crow68,Izhikevich07,Sutton18,Gerstner18}.
Neuromodulator signals could include dopamine \cite{Schultz15}, but reward-related signals could also be conveyed, together with novelty or attention-related signals, by other modulators \citep{Fremaux16}.
Since in our paradigm the inter-stimulus interval (ISI) was not systematically varied, we cannot distinguish between an eligibility trace with purely time-dependent, exponential decay, and one that decays discretely, triggered by events such as states or actions.
Future research needs to show whether the decay is event-triggered or defined by molecular characteristics, independent of the experimental paradigm.
Our finding that changes of pupil dilation correlate with reward-driven variables of reinforcement learning (such as value or TD error)
goes beyond the changes linked to state recognition reported earlier \cite{Otero11,Kucewicz18}.
Also, since non-luminance related pupil diameter is influenced by the neuromodulator norepinephrine \cite{Joshi16}
while reward-based learning is associated with the neuromodulator dopamine \cite{Schultz15}, our findings suggest that the roles, and regions of influence, of neuromodulators could be mixed \cite{Berke18,Fremaux16} and less well segregated than suggested by earlier theories.
From the qualitative analysis of the pupillometric data of the main experiment (Fig.~\ref{fig:PupilDiameter_GrandAvg}), together with those of the control experiment (Fig.~\ref{fig:Control_PupilDiameter}),
we concluded that changes in pupil dilation
reflected a learned, reward-related property of the state.
In the context of decision making and learning, pupil dilation is most frequently associated with violation of an expectation in the form of a reward prediction error or stimulus prediction error as in an oddball-task \citep{Nieuwenhuis11}.
However,
our experimental paradigm was not designed to decide whether pupil diameter correlates stronger with state values or TD-errors.
Nevertheless, a more systematic analysis (see Methods) suggests that correlation of pupil dilation with TD-errors is stronger than correlation with state values:
First,
we extracted all pupil responses after the onset of non-goal states and calculated the TD-error (according to the best-fitting model, \textit{Q-$\lambda$}) of the corresponding state transition.
We found that the pupil dilation was much larger after transitions with high TD-error compared to transitions with zero TD-error (Fig.~\ref{fig:RPE_Regression_Raw}[a] and Methods).
Importantly,
these temporal profiles of the pupil responses to states with high TD-error had
striking similarities across the three experimental conditions,
whereas the mean response time course
was different across the three conditions (Fig.~\ref{fig:RPE_Regression_Raw}[c]).
This suggests that the underlying physiological process causing the TD-error-driven component in the
pupil responses was invariant to stimulation details.
Second, a statistical analysis including all data confirmed the correlation of pupil dilation with TD error (Fig.~\ref{fig:RegressionAndPermTest}).
Third, a further qualitative analysis revealed that TD-error, rather than value itself, was a factor modulating pupil dilation (Fig.~\ref{fig:RPE_Regression_Raw}[b]).
\subsection{Conclusion}
Eligibility traces are a fundamental factor underlying the human capability of quick learning and adaptation.
They implement a memory of past state-action associations and are a crucial element to efficiently solve the credit assignment problem in complex tasks \citep{Sutton18, Gerstner18, Izhikevich07}.
The present study provides direct evidence for human learning with eligibility traces.
The correlation of the pupillometric signals with an RL algorithm with eligibility traces suggests
that humans not only exploit memories of past state-action pairs in behavior
but also assign reward-related values to these memories.
The consistency and similarity of our findings across three experimental conditions suggests that the underlying cognitive, or neuromodulatory, processes are independent of the stimulus modality.
It is an interesting question for future research to actually identify the neural implementation of these memory traces.
\clearpage
\beginsupplement
\section{Materials and Methods (Supplementary) }
\subsection{Experimental conditions}
We implemented three different experimental conditions based on the same Markov Decision Process (MDP) of Fig.~\ref{fig:F2_Task_Cond_Behav}[a].
The conditions only differed in the way the states were presented to the participant.
Furthermore, in order to collect enough samples from early trials, where the learning effects are strongest, participants did not perform one long experiment.
Instead, after completing seven episodes in the same environment, the experiment paused for 45 seconds while participants were instructed to close and relax their eyes.
Then the experiment restarted with a new environment: the transition graph was reset, a different, unused, stimulus was assigned to each state, and the participant had to explore and learn the new environment.
In the \textit{spatial} condition, each state was defined by the location (on an invisible circle) on the screen of a $100\times260$ pixel checkerboard image, flashed for $100ms$ (Fig.~\ref{fig:F2_Task_Cond_Behav}[d]).
The goal state was represented by the same rectangular checkerboard, but rotated by 90 degrees.
The checkerboard had the same average luminance as the grey background screen.
In each new environment, the states were randomly assigned to locations and
the checkerboards were rotated (states: $260\times100$ pixel checkerboards, goal: $100\times260$).
In the \textit{sound} condition each state was represented by a unique acoustic stimulus (tones and natural sounds) of $300ms$ to $600ms$ duration.
New, randomly chosen, stimuli were used in each environment.
At the goal state an applause was played.
An experimental advantage of the \textit{sound} condition is that a change in the pupil dilation cannot stem from a luminance change but must be due to a task-specific condition.
In the \textit{clip-art} condition, each state was represented by a unique $100\times100$ pixel clip-art image that appeared for $300ms$ in the center of the screen.
For each environment, a new set of images was used, except for the goal state which was always the same (a person holding a trophy) in all experiments.
The screen resolution was $1920\times1080$ pixels.
In all three conditions, the background screen was grey with a fixation cross in the center of the screen.
It was rotated from $+$ to $\times$ to signal to the participants when to enter their decision by pressing one of two push-buttons (one in the left and the other in the right hand).
No lower or upper bound was imposed on the reaction time.
The next state appeared after a random delay of $2.5$ to $4$ seconds after a push-button was pressed.
Participants were instructed to reach the goal state as often as possible within a limited time (12 minutes).
Prior to the actual learning task, they performed a few trials to check they all understood the instructions.
While the participants performed the \textit{sound}- and \textit{clip-art} conditions, we recorded the pupil
diameter using an SMI iViewX high speed video-based eye tracker (recorded at $500Hz$, down-sampled to $100Hz$ for the analysis by averaging over 5 samples).
From participants performing the \textit{spatial} condition, we recorded the pupil diameter using a $60Hz$ Tobii Pro tracker.
An eye tracker calibration protocol was run for each participant.
All experiments were implemented using the Psychophysics Toolbox \citep{Brainard97}.
The number of participants performing the task was: \textit{sound} (SMI): 15; \textit{clip-art} (SMI): 12; \textit{spatial} (TET): 22 participants; Control \textit{sound} (SMI): 7; Control \textit{clip-art} (SMI): 10; Control \textit{spatial} (SMI): 10.
All participants were recruited from the EPFL students pool; all provided written, informed consent.
The experiment was approved by the EPFL Human Research Ethics Committee.
\begin{figure*}
\centering
\includegraphics[width=\textwidth]{figures/FigS1_ControlExpPupil.pdf}
\caption{\textbf{Pupil dilation during the control experiment.} In the control experiment, different participants passively observed state sequences which were recorded during the main experiment. Data analysis was the same as for the main experiment. \textbf{[a]} Pupil time course after state onset ($t=0$) of state D1 (before goal). \textbf{[b]} State D2 (two before goal).
Black traces show the pupil dilation during episode one, red traces during episode two.
At state D1 in the \textit{clip-art} condition the pupil time course shows a separation similar to the one observed in the main experiment. This suggests that participants may recognize the clip-art image that appears just before the final image. Importantly, in state D2, the pupil time course during episode two is qualitatively different from the one in the main experiment (Fig.~\ref{fig:PupilDiameter_GrandAvg}).}
\label{fig:Control_PupilDiameter}
\end{figure*}
\subsection{Pupil data processing}
Our data processing pipeline followed recommendations described in \citep{Mathot17}.
Eye blinks (including $100ms$ before, and $150ms$ after) were removed and short blocks without data (up to $500ms$) were linearly interpolated.
In all experiments, participants were looking at a fixation cross which reduces artifactual pupil-size changes \citep{Mathot17}.
For each environment, the time-series of the pupil diameter during the 7 episodes was extracted and then normalized to zero-mean, unit variance.
This step renders the measurements comparable across participants and environments.
We then extracted the pupil recordings at each state from $200ms$ before to $3000ms$ after each state onset and applied subtractive baseline correction where the baseline was taken as the mean in the interval [$-100ms$, $+100ms$].
Taking the $+100ms$ into account does not interfere with event-specific effects because these develop only later ($>220ms$ according to \cite{Mathot17}), but a symmetric baseline reduces small biases when different traces have different slopes around $t=0ms$.
The analysis in Fig.~\ref{fig:PupilDiameter_GrandAvg} compared trials from episodes one and two.
In the case of D2, trials from episode 3 were included if participants observed \textit{exactly} the following sequences: S-D2-D1-G in episode 1, X-D1-G in episode 2, and Y-D2.
No trials from episode 3 were used in the comparison at state D1.
We excluded event-locked pupil responses with less than 50\% eye-tracker data or with z-values outside $\pm 3$ within the time window of interest.
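The normalization, epoching, and baseline-correction steps above can be sketched as follows (an illustrative Python fragment with hypothetical helper names, not the authors' analysis code; it assumes a single continuous trace sampled at $100Hz$):

```python
import numpy as np

def preprocess_pupil(trace, onsets, fs=100):
    """Sketch of the preprocessing described above (hypothetical helper):
    z-score the per-environment trace, cut epochs from -200ms to +3000ms
    around each state onset, and subtract a baseline taken as the mean
    over the symmetric window [-100ms, +100ms]."""
    trace = (trace - trace.mean()) / trace.std()        # zero mean, unit variance
    pre, post, base = int(0.2 * fs), int(3.0 * fs), int(0.1 * fs)
    epochs = []
    for t0 in onsets:
        epoch = trace[t0 - pre:t0 + post].copy()        # -200ms .. +3000ms
        baseline = epoch[pre - base:pre + base].mean()  # mean over [-100ms, +100ms]
        epochs.append(epoch - baseline)
    return np.array(epochs)
```

The exclusion of epochs with too little eye-tracker data or out-of-range z-values would then be applied to the returned array.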
\subsection{Action assignment in the Markov Decision Process}
Actions in the graph of Fig.~\ref{fig:F2_Task_Cond_Behav} were assigned to transitions during the first few actions as explained in the main text.
However, our learning experiment would be corrupted if participants discovered that in the first episode any three actions lead to the goal.
First, such knowledge would bypass the need to actually learn state-action associations, and second, the knowledge of ``distance-to-goal'' implicitly provides reward information even before seeing the goal state.
We avoided the learning of the latent structure by two manipulations:
First, if a participant repeated the exact same action sequence as in the previous environment, or if they tried trivial action sequences (a-a-a or b-b-b), the assignment of the third action led from state D1 to Z, rather than to the Goal.
This manipulation further implied that participants had to make decisions against their potential left/right bias.
Second, an additional state H (not shown in Fig.~\ref{fig:F2_Task_Cond_Behav}) was added in some environments.
Participants then started from H (always leading to S) and the path length to goal was four steps.
Interviews after the experiment showed that no participant became aware of the experimental manipulation and, importantly, they did not notice that they could reach the goal with a random action sequence in episode one.
\subsection{Reinforcement Learning Models}
For the RL algorithm $Q-\lambda$, four quantities are important: the reward $r$;
the value $Q(s,a)$
of a state-action association such as taking action 'b' in state D2;
the value $V(s)$ of the state itself, defined as
the larger of the two $Q$-values in that state, i.e., $V(s) = {\rm max}_{\tilde{a}} Q(s,\tilde{a})$;
and the TD-error (also called Reward Prediction Error or RPE) calculated at the end of the $n^{th}$ action
after the transition from state $s_n$ to $s_{n+1}$
\begin{equation}
{\rm RPE}(n \rightarrow n+1) = r_{n+1} + \gamma \cdot V(s_{n+1}) - Q(s_{n}, a_{n})
\label{eq:RewardPredictionError}
\end{equation}
Here $\gamma$ is the discount factor and $V(s)$ is the estimate of the discounted future reward that can maximally be collected when starting from state $s$.
Note that the RPE is different from the reward: in our environment, a reward occurs only at the transition from state D1 to state G, whereas reward prediction errors occur in episodes 2--7 also several steps before the reward location is reached.
The table of values $Q(s,a)$ is initialized at the beginning of an experiment
and then updated by combining the RPE and the
eligibility traces $e_n(s, a)$ defined in the main text (Eq. \ref{eq:ET_lambda}),
\begin{equation}
Q(s, a) \leftarrow Q(s, a) + \alpha \cdot {\rm RPE}(n \rightarrow n+1) \cdot e_n(s, a) \, ,
\label{eq:QLearnQupd}
\end{equation}
where $\alpha$ is the learning rate.
Note that \textit{all} Q-values are updated, but the change in each $Q(s,a)$ is proportional to the eligibility $e_n(s, a)$ of that state-action pair.
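The RPE of Eq. \ref{eq:RewardPredictionError} and the update of Eq. \ref{eq:QLearnQupd} can be sketched in Python (an illustration, not the authors' code; since Eq. \ref{eq:ET_lambda} is defined in the main text, the sketch assumes the standard per-step decay $e \leftarrow \gamma \lambda e$ with a replacing trace at the current pair):

```python
import numpy as np

def q_lambda_step(Q, e, s, a, r, s_next, alpha=0.2, gamma=0.9, lam=0.8):
    """One step of tabular Q-lambda: decay all eligibilities, mark the
    current state-action pair, compute the TD error (RPE), and move every
    Q-value in proportion to its eligibility."""
    e *= gamma * lam                             # per-step decay (assumed form)
    e[s, a] = 1.0                                # current pair fully eligible
    rpe = r + gamma * Q[s_next].max() - Q[s, a]  # Eq. (RewardPredictionError)
    Q += alpha * rpe * e                         # Eq. (QLearnQupd), all entries
    return rpe
```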
In the literature the table $Q(s, a)$ is often initialized with zero, but
since some participants pressed the left (or right) button more often than the other one, we identified for each participant the preferred action $a_{pref}$ and initialized $Q(s, a_{pref})$ with a small bias $b$, adapted to the data.
Action selection exploits the Q-values of Eq. \ref{eq:QLearnQupd} using a softmax criterion with temperature $T$:
\begin{align}
p(s,a) = \frac{\exp(Q(s,a)/T)}{\sum_{\tilde{a}} \exp(Q(s,\tilde{a})/T)} \, .
\label{eq:psa_softmax}
\end{align}
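Eq. \ref{eq:psa_softmax} corresponds to the following numerically stable sketch (subtracting the maximum before exponentiating leaves the probabilities unchanged):

```python
import numpy as np

def softmax_policy(q_values, T=0.5):
    """Softmax action selection: p(a) proportional to exp(Q(s,a)/T).
    Subtracting the max keeps exp() from overflowing for small T."""
    z = np.exp((q_values - q_values.max()) / T)
    return z / z.sum()
```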
As an alternative to the eligibility trace defined in Eq. \ref{eq:ET_lambda}, where the eligibility decays at each discrete time-step, we also modeled a decay in continuous time, defined as
\begin{equation}
e_t(s,a) = \exp \left(-\frac{t - B(s,a)}{\tau} \right) \quad {\rm if} \; t>B(s,a)
\label{eq:ET_tau}
\end{equation}
and zero otherwise.
Here, $t$ is the time stamp of the current discrete step,
and $B(s,a)$ is the time stamp of the last time a state-action pair $(s,a)$ has been selected.
The discount factor $\gamma$ in Eq. \ref{eq:RewardPredictionError} is kept, while in Eq. \ref{eq:ET_tau} a potential discounting is absorbed into the single parameter $\tau$.
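Eq. \ref{eq:ET_tau} can be sketched directly; representing never-visited pairs by $B(s,a) = -\infty$ is an implementation choice of this sketch, not specified in the text:

```python
import numpy as np

def eligibility_tau(t, last_visit, tau=10.0):
    """Continuous-time eligibility (Eq. ET_tau): exponential decay with
    time constant tau since the last selection of each state-action pair;
    zero if the pair was never selected or was selected at time t itself."""
    dt = t - last_visit
    return np.where(dt > 0, np.exp(-dt / tau), 0.0)
```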
Our implementation of \textit{Reinforce} followed the pseudo-code of \textit{REINFORCE: Monte-Carlo Policy-Gradient Control (without baseline)} (\cite{Sutton18}, Chapter 13.3) which updates the action-selection probabilities at the end of each episode.
This requires the algorithm to keep a (non-decaying) memory of the complete state-action history of each episode.
We refer to \citep{Sutton18}, \citep{Gläscher10} and \citep{Peng96} for the pseudo-code and in-depth discussions of all algorithms.
\subsection{Parameter Fit and Model Selection}
Each learning model $m$ is characterized by a set of parameters $\theta^m = [\theta^m_1, \theta^m_2, ...]$.
For example, our implementation of the \textit{Q-$\lambda$} algorithm has five free parameters:
the eligibility trace decay $\lambda$;
the learning rate $\alpha$;
the discount rate $\gamma$;
the softmax temperature $T$;
and the bias $b$ for the preferred action.
To find the most likely values of those parameters, we pooled the behavioral recordings of all participants into one data set $D$.
For each model $m$, we were interested in the posterior distribution $P(\theta^m | D)$ over the free parameters $\theta^m$, conditioned on the behavioral data of all participants $D$.
This distribution was approximated by sampling using the Metropolis-Hastings Markov Chain Monte Carlo (MCMC) algorithm \citep{Hastings70}.
For sampling, MCMC requires a function $f(\theta^m, D)$ which is proportional to $P(\theta^m | D)$.
Choosing a uniform prior $P(\theta^m) = const$, and exploiting that $P(D)$ is independent of $\theta^m$, we can directly use the model likelihood $P(D | \theta^m)$:
\begin{align}
P( \theta^m | D) = \frac{P(D | \theta^m) P(\theta^m)}{P(D)} \propto P(D | \theta^m) := f(\theta^m, D).
\label{eq:MCMCposterior}
\end{align}
We calculated the likelihood $P(D | \theta^m)$ of the data as the joint probability of all action selection probabilities obtained by evaluating the model (Eqs. \ref{eq:ET_lambda}, \ref{eq:RewardPredictionError}, \ref{eq:QLearnQupd}, and \ref{eq:psa_softmax} in the case of $Q(\lambda)$) given a parameter sample $\theta^m$.
The log likelihood (LL) of the data under the model is
\begin{equation}
LL(D|\theta^m) = \sum_{p=1}^N \, \sum_{j=1}^{E_p} \, \sum_{t=1}^{T_j} \log( p(a_t | s_t ; \theta^m)) \, ,
\end{equation}
where the sum is taken over all participants $p$, all environments $j$, and all actions $a_t$ a participant has taken in environment $j$.
For each model, we collected $100{,}000$ parameter samples
(burn-in: 1500;
keeping only every $10^{th}$ sample;
50 random start positions;
proposal density: Gaussian with $\sigma=0.004$ for temperature $T$ and bias $b$, and $\sigma=0.008$ for all other parameters).
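The sampler can be sketched as a plain Metropolis-Hastings loop with a Gaussian proposal; under the uniform prior of Eq. \ref{eq:MCMCposterior}, the acceptance ratio involves only the likelihood. This is an illustration with hypothetical function names, not the fitting code used for the study:

```python
import numpy as np

def metropolis_hastings(log_lik, theta0, sigma, n_samples, rng):
    """Draw samples from a density proportional to exp(log_lik(theta)),
    i.e. the posterior under a uniform prior (Eq. MCMCposterior)."""
    theta = np.asarray(theta0, dtype=float)
    ll = log_lik(theta)
    samples = []
    for _ in range(n_samples):
        proposal = theta + rng.normal(0.0, sigma, size=theta.shape)
        ll_prop = log_lik(proposal)
        # accept with probability min(1, exp(ll_prop - ll))
        if np.log(rng.uniform()) < ll_prop - ll:
            theta, ll = proposal, ll_prop
        samples.append(theta.copy())
    return np.array(samples)
```

Burn-in, thinning, multiple start positions, and per-parameter proposal widths, as described above, would be added on top of this basic loop.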
From the samples we chose the $\hat{\theta}^m$ which maximizes the log likelihood (LL), calculated the $AIC_m$ and ranked the models accordingly.
Note that the parameter vector $\hat{\theta}^m$ could be found by a hill-climbing algorithm towards the optimum, but such an algorithm does not give any indication about the uncertainty.
Here we obtained an approximate conditional posterior distribution $p(\theta^m_{i} | D, \hat{\theta}^m_{j \neq i})$ for each component $i$ of the parameter vector $\theta^m$ (cf. Fig.~\ref{fig:QLearn_MCMC_posterior}).
We estimated this posterior for a given parameter $i$ by selecting only the $1\%$ of all samples falling into a small neighborhood: $\hat{\theta}^m_{j} - \epsilon^m_j \leq \theta_{j} \leq \hat{\theta}^m_{j} + \epsilon^m_j$ for all $j \neq i$.
We determined $\epsilon^m_j$ such that along each dimension $j$, the same percentage of samples was kept (about 22\%) and the overall number of samples was 1000.
\begin{figure*}
\centering
\includegraphics[width=\textwidth]{figures/MCMC_PosteriorQLambdaAndTau.pdf}
\caption{ \textbf{Fitting results: behavioral data constrained the free parameters of \textit{Q-$\lambda$}.}
\textbf{[a]} For each experimental condition a distribution over the five free parameters is estimated by sampling.
The blue histograms show the approximate conditional posterior for each parameter (see methods).
Vertical black lines indicate the values of the 5-parameter sample that best explains the data (maximum likelihood, ML).
The bottom row (All) shows the distribution over $\lambda$ when fitted to the aggregated data of all conditions, with other parameters fixed to the indicated value (mean over the three conditions).
\textbf{[b]} Estimation of a time dependent decay ($\tau$ instead of $\lambda$) as defined in equation \ref{eq:ET_tau}.
}
\label{fig:QLearn_MCMC_posterior}
\end{figure*}
One problem with using the AIC for model selection stems from the considerable behavioral differences across participants: the AIC-based model ranking might change for a different set of participants.
This is why we validated the model ranking using $K$-fold cross-validation.
The same procedure as before (fitting, then ranking according to AIC) was repeated $K$ times, but now we used only a subset of participants (training set) to fit $\hat{\theta}^m_k$ and then calculated the $LL^m_{k}$ and the $AIC^m_{k}$ on the remaining participants (test set).
We created the $K$ folds such that each participant appears in exactly one test set and in $K-1$ training sets.
Also, we kept these splits fixed across models, and evaluated each model on the same split into training and test set.
In each fold $k$, the models were sorted with respect to $AIC^m_{k}$, yielding $K$ lists of ranks.
In order to evaluate whether the difference between two models is significant, we compared their ranking in each fold (Wilcoxon rank-sum test on K matched pairs, p-values shown in Table \ref{tab:AIC_CV_pVals}).
The cross-validation results were summarized by summing the $K$ ranks (Table \ref{tab:AIC_CV_pVals}).
The best rank sum a model can obtain is $K$, achieved if it ranks first in each of the $K$ folds.
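The per-fold ranking and rank-sum aggregation can be sketched as follows (hypothetical helper; ties in AIC are assumed absent, since the double-argsort trick does not average tied ranks):

```python
import numpy as np

def rank_models(aic_folds):
    """aic_folds: array of shape (K, M), test-set AIC of M models in K folds.
    Rank models within each fold (rank 1 = lowest AIC), then sum the ranks
    over folds; the best possible rank sum is K."""
    ranks = aic_folds.argsort(axis=1).argsort(axis=1) + 1
    return ranks.sum(axis=0)
```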
\subsection{Regression Analysis}
\begin{figure*}
\centering
\includegraphics[width=\textwidth]{figures/FigS3_RPE_Value_Mean_BLACK.pdf}
\caption{\textbf{Reward prediction error (RPE) at non-goal states modulates pupil dilation.}
Pupil traces (in units of standard deviation) from all states except G
were aligned at state onset ($t=0ms$) and the mean pupil response $\mu_t$ was subtracted (see Methods).
\textbf{[a]}
The deviation from the mean is shown for states with $RPE = 0$ (black, dashed) and for states with $RPE \geq 80^{th}$ percentile (solid, blue).
Shaded areas: $\pm$ SEM.
The pupil dilation reflects the RPE caused by spreading of value information to nonrewarded states.
\textbf{[b]}
To qualitatively distinguish pupil correlations with RPE from correlations with state values $V(s)$, we started from the following observation:
over the course of learning, the RPE decreases, while the state value $V(s)$ increases.
We wanted to observe this qualitative difference in the pupil dilations of subsequent visits of the \textit{same} state.
We selected pairs of visits $n$ and $n+1$ for which the RPE decreased while $V(s)$ increased and extracted the pupil measurements of the two visits (again, mean $\mu_t$ is subtracted).
The dashed, black curves show the average pupil trace during the $n^{th}$ visit of a state.
The solid, blue curves correspond to the next visit ($n+1$) of the same state.
In the spatial condition, the two curves significantly ($p<FDR_{\alpha}=0.05$) separate at $t>1s$ (indicated by the green line).
All three conditions show the same trend (with strong significance in the spatial condition), compatible with a correlation of pupil response with RPE, but not with state value $V(s)$.
\textbf{[c]} The mean pupil dilation $\mu_t$ is different in each condition, whereas the learning related deviations from the mean (in [a] and [b]) have similar shapes.
}
\label{fig:RPE_Regression_Raw}
\end{figure*}
The reward prediction error (RPE, Eq. \ref{eq:RewardPredictionError}) used for a comparison with pupil data
was obtained by applying the algorithm \textit{Q-$\lambda$} with the optimal (maximum likelihood) parameters.
We chose \textit{Q-$\lambda$} for regression because, first, it explained the behavior best across the three conditions and, second, it evaluates the outcome of an action at the onset of the next state (rather than at the selection of the next action as in \textit{SARSA-$\lambda$}), which enabled us to compare the model with the pupil traces triggered at the onset of the next state.
In a first, qualitative, analysis, we split the data of all state transitions of all participants into two groups:
all the state transitions where the model predicts an RPE of zero;
and the twenty percent of state transitions where the model predicts the largest RPE (Fig.~\ref{fig:RPE_Regression_Raw}[a]).
We found that the pupil responses looked very different in the two groups, across all three modalities.
In a second, rigorous, statistical analysis, we tested whether pupil responses were correlated with the RPE across all RPE values, not just those in the two groups with zero and very high RPE.
In our experiment, only state G was rewarded; at non-goal states, the RPE depended solely on learned $Q$-values ($r_{n+1} = 0$ in Eq. \ref{eq:RewardPredictionError}).
Note that at the first state of each episode the RPE is not defined.
We distinguished these three cases in the regression analysis by defining two events "Start" and "Goal", as well as a parametric modulation by the reward prediction error at intermediate states.
From Figure \ref{fig:PupilDiameter_GrandAvg} we expected significant modulations in the time window $t \in [500ms, 2500ms]$ after stimulus onset.
We mapped $t$ to $t' = (t-1500ms)/1000ms$ and used orthogonal Legendre polynomials $P_k(t')$ up to order $k=5$ (Fig.~\ref{fig:RegressionAndPermTest}) as basis functions on the interval $-1 < t' < 1$.
We use the indices $p$ for participant and $n$ for the $n^{th}$ state-onset event. With a noise term $\epsilon$ and $\mu_t$ for the overall mean pupil dilation at time $t$, the regression model for the pupil measurements $y$ is
\begin{equation}
y_{p, n+1, t} = \mu_t + \sum_{k=0}^{5} RPE_{p}(n \rightarrow n+1) \times P_{k}(t') \times \beta_k \; + \epsilon_{p,n+1,t} \, ,
\label{eq:LegendreRegression}
\end{equation}
where the participant-independent parameters $\beta_k$ were fitted to the experimental data (one independent analysis for each experimental condition).
The models for ``start state'' and ``goal state'' are analogous and obtained by replacing the real-valued $RPE_{p}(n \rightarrow n+1)$ by a 0/1 indicator for the respective events.
By this design we obtained three uncorrelated regressors with six parameters each.
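For the RPE regressor alone, the design of Eq. \ref{eq:LegendreRegression} can be sketched as an ordinary least-squares fit (an illustration with hypothetical names; the published analysis additionally includes the start- and goal-event regressors and permutation tests):

```python
import numpy as np
from numpy.polynomial import legendre

def rpe_regressor_fit(y, rpe, t, mu_t):
    """y: pupil epochs, shape (n_events, n_times); rpe: one RPE per event;
    t: time axis in ms; mu_t: grand-mean pupil trace. Maps t to
    t' = (t - 1500ms)/1000ms and regresses y - mu_t on RPE * P_k(t')."""
    tp = (t - 1500.0) / 1000.0
    P = np.stack([legendre.Legendre.basis(k)(tp) for k in range(6)])  # (6, n_times)
    X = np.einsum('i,kj->ijk', rpe, P).reshape(-1, 6)  # rows: (event, time) pairs
    Y = (y - mu_t).ravel()
    beta, *_ = np.linalg.lstsq(X, Y, rcond=None)
    return beta
```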
Using the regression analysis sketched here, we quantified the qualitative observations
suggested by (Fig.~\ref{fig:RPE_Regression_Raw})
and found a significant parametric modulation of the pupil dilation by reward prediction errors at non-goal states (Fig.~\ref{fig:RegressionAndPermTest}).
The extracted modulation profile reached a maximum at around $1$--$1.5s$ ($1300ms$ in the \textit{clip-art}, $1100ms$ in the \textit{sound}, and $1400ms$ in the \textit{spatial} condition), with a strong mean effect size ($\beta_0$ in Fig.~\ref{fig:RegressionAndPermTest}) of $0.48$ ($p<0.001$), $0.41$ ($p=0.008$) and $0.35$ ($p<0.001$), respectively.
We interpret the pupil traces at the start and the end of each episode (Fig.~\ref{fig:RegressionAndPermTest})
as markers for additional cognitive processes beyond reinforcement learning
which could include correlations
with cognitive load \citep{Kahneman66, Beatty82}, recognition memory \citep{Otero11}, attentional effort \citep{Alnaes14}, exploration \citep{Jepma11}, and encoding of memories \citep{Kucewicz18}.
\begin{figure*}
\centering
\includegraphics[width=\textwidth]{figures/RegressionAndPermutationResults_3Cond.pdf}
\caption{\textbf{Detailed results of regression analysis and permutation tests.}
The regressors are \textit{top}: Start state event, \textit{middle}: Goal state event and \textit{bottom}: Reward Prediction Error.
We extracted the time course of the pupil dilation in [$500ms$, $2500ms$] after state onset for each of the conditions, \textit{clip-art}, \textit{sound} and \textit{spatial}, using Legendre polynomials $P_k(t)$ of orders k=0 to k=5 (top row) as basis functions.
The extracted weights $\beta_k$ (cf. Eq. \ref{eq:LegendreRegression}) are shown in each column below the corresponding Legendre polynomial as vertical bars with color indicating the level of significance
(red: statistically significant at $p<0.05/6$ (Bonferroni); orange: $p<0.05$; black: not significant).
Blue histograms summarize shuffled samples obtained by 1000 permutations.
Black curves in the leftmost column show the fits with all 6 Legendre Polynomials,
while the red curve is obtained by summing only over the few Legendre Polynomials with significant $\beta$.
Note the similarity of the pupil responses across conditions.}
\label{fig:RegressionAndPermTest}
\end{figure*}
\bibliographystyle{naturemag}
\section{Introduction }
For centuries, Euclidean geometry was thought to be the only geometric system, until the discovery of hyperbolic geometry, a non-Euclidean geometry. In 1870, Cayley and Klein showed that there are nine different geometries in the plane, including Euclidean geometry. These geometries are distinguished by parabolic, elliptic, and hyperbolic measures of angles and lengths.
The main aim of this work is to study some special curves in Galilean geometry, which is also among the foregoing geometries. The conventional view is that Galilean geometry is simpler than Euclidean geometry. Some problems that cannot be solved in Euclidean geometry are an easy matter in Galilean geometry; the determination of the position vector of an arbitrary curve, as well as the problem studied in this article, are good examples. Another advantage of Galilean geometry is that it is associated with the Galilean principle of relativity. For more details about Galilean geometry, we refer the interested reader to the book by Yaglom \cite{yag}.
The theory of curves forms an important and useful part of differential geometry. Curves emerge from the solutions of important physical problems, and mathematical models are often used to describe complicated systems arising in many different branches of science, such as engineering, chemistry, and biology \cite{biyo, kimya}.
A curve in space is studied by assigning a moving frame at each of its points.
The method of moving frames is a central tool in the study of curves and surfaces. The fundamental theorem of curves states that a curve is determined by its curvatures and Frenet vectors \cite{krey}. Thus, the curvature functions provide special and important information about a curve; lines, circles, helices (circular or generalized), Salkowski curves, geodesics, asymptotic curves, lines of curvature, etc., are all characterized by specific conditions imposed on their curvatures. To examine the characteristics of these curves, it is important to express their position vectors in terms of the curvature functions. However, this is not possible in all geometries. For example, in Euclidean or Minkowski spaces the problem of determining the position vector of a curve can be solved only for some special curves, such as plane curves, helices, and slant helices.
In Galilean space, by contrast, this problem can be solved independently of the type of the curve \cite{ali,buket}.
Curves can also be produced in many different ways, for example as solutions of physical problems or as the trajectory of a moving particle \cite{krey}. In addition, one can produce a new curve from the Frenet vector fields of a given curve, as with evolutes and involutes, spherical indicatrices, and Smarandache curves.
If the position vector of a curve $\alpha$ is formed by the frame vectors of a curve $\beta$, then $\alpha$ is called a Smarandache curve of $\beta$ \cite{suha}.
Recently, many researchers have studied special Smarandache curves with respect to different frames in different spaces. In \cite{suha}, the authors introduced a special case of Smarandache curves in the space $E_4^1$; \cite{ali10} studied special Smarandache curves in the Euclidean space $E^3$. In \cite{yuce, cetin}, the authors investigated such curves with respect to the Bishop and Darboux frames in $E^3$, respectively, and \cite{saad17} investigated them with respect to the Darboux frame in Minkowski $3$-space.
Among these studies, only \cite{saad} used the general position vector with respect to the Frenet frame of a curve to obtain Smarandache curves in Galilean space.
The main aim of this paper is to determine the position vectors of Smarandache curves of an arbitrary curve on a surface in $G_3$ in terms of the geodesic curvature, normal curvature, and geodesic torsion with respect to the standard frame. Our results include the Smarandache curves of some special curves on a surface in $G_3$, such as geodesics, asymptotic curves, and lines of curvature, as well as Smarandache curves for special cases of these curves, e.g., Smarandache curves of geodesics that are circular helices, generalized helices, or Salkowski curves. Finally, we elaborate on some special curves by giving their graphs.
\section{Preliminaries}
The Galilean space $G^{3}$ is one of the Cayley-Klein spaces associated with the projective metric of signature $\left( 0,0,+,+\right) $ \cite{mol}. The absolute figure of the Galilean space is the ordered triple $\{w,f,I\}$, where $w$ is an ideal (absolute) plane,
$f$ is a line (absolute line) in $w$, and $I$ is a fixed elliptic involution of points of $f$.
In non-homogeneous coordinates the group of isometries of $G^{3}$ has the following form:
\begin{eqnarray}
\overline{x} &=&a_{11}+x, \notag \\
\overline{y} &=&a_{21}+a_{22}x+y\cos \varphi +z\sin \varphi , \\
\overline{z} &=&a_{31}+a_{32}x-y\sin \varphi +z\cos \varphi, \notag
\end{eqnarray}%
where $a_{11}, a_{21}, a_{22}, a_{31}, a_{32}$, and $\varphi$ are real numbers \cite{pav}.
If the first component of a vector is zero, then the vector is called isotropic; otherwise, it is called non-isotropic \cite{pav}.
In $G^{3}$, the scalar product of two vectors $\mathbf{v}=(v_{1},v_{2},v_{3})$ and $\mathbf{w}=(w_{1},w_{2},w_{3})$ is defined by
$$\mathbf{v}\cdot _{G}\mathbf{w} = \left\{
\begin{array}{lr}
v_{1}w_{1} , & \text{if } v_{1}\neq 0 \text{ or } w_{1}\neq 0\, \ \ \ \\
v_{2}w_{2}+v_{3}w_{3} ,& \text{if } v_{1}=0 \text{ and } w_{1}=0\,.
\end{array}\right.$$
The Galilean cross product of these vectors is defined by
\begin{eqnarray}
\mathbf v\times _{G}\mathbf w=%
\begin{vmatrix}
0 & \mathbf e_{2} &\mathbf {e_{3}} \\
v_{1} & v_{2} & v_{3} \\
w_{1} & w_{2} & w_{3}%
\end{vmatrix}.%
\end{eqnarray}
If $\mathbf{v}\cdot _{G}\mathbf{w}=0$, then $\mathbf{v}$ and $\mathbf{w}$
are perpendicular.
The norm of $\mathbf{v}$ is defined by
$$\Vert \mathbf{v}\Vert_{G}=\sqrt{\vert\mathbf{v}\cdot_{G}\mathbf{v}\vert}.$$
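The Galilean scalar product, cross product, and norm above can be sketched in a few lines of Python (an illustrative helper of ours, not part of the paper; vectors are plain 3-tuples):

```python
import math

def dot_g(v, w):
    # Galilean scalar product: v1*w1 when either first component is nonzero,
    # otherwise the Euclidean product of the remaining two components
    if v[0] != 0 or w[0] != 0:
        return v[0] * w[0]
    return v[1] * w[1] + v[2] * w[2]

def cross_g(v, w):
    # Galilean cross product: determinant with first row (0, e2, e3)
    return (0,
            v[2] * w[0] - v[0] * w[2],
            v[0] * w[1] - v[1] * w[0])

def norm_g(v):
    # ||v||_G = sqrt(|v .G v|)
    return math.sqrt(abs(dot_g(v, v)))
```

For instance, `norm_g((0, 3, 4))` gives `5.0` for the isotropic vector $(0,3,4)$, while `norm_g((2, 5, 7))` gives `2.0`.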
Let $I\subset \mathbb R$ and let $\gamma :I\rightarrow G^{3}$ be a unit speed curve
with curvature $\kappa>0$ and torsion $\tau$.
Then the curve $\gamma$ can be written as
\begin{eqnarray*}
\gamma \left( x\right) =\left( x,y\left( x\right) ,z\left( x\right) \right) ,
\end{eqnarray*}
and that the Frenet frame fields are given by
\begin{eqnarray}{\label{fframe}}
T\left(x\right) &=&\gamma ^{\prime }\left( x\right),
\notag \\
N\left( x\right) &=& \frac{\gamma''(x)}{\Vert \gamma''(x)\Vert_{G}}
\\
B\left( x\right) &=&T(x)\times _{G}N(x) \notag \\&=&\frac{1}{\kappa \left( x\right) }\left( 0,
-z^{\prime \prime }\left( x\right) , y^{\prime \prime }\left(
x\right) \right), \notag
\end{eqnarray}%
where
\begin{equation}
\kappa \left( x\right) ={\Vert \gamma''(x) \Vert }_{G} \quad \text{and}\quad \tau
\left( x\right) =\frac{\det \left( \gamma ^{\prime }\left( x\right) ,\gamma
^{\prime \prime }\left( x\right) ,\gamma ^{\prime \prime \prime }\left(
x\right) \right) }{\kappa ^{2}\left( x\right) }\,.
\end{equation}%
The vector fields $\mathbf{T, N} $ and $\mathbf{B}$ are called the tangent vector field, the principal normal
and the binormal vector field, respectively \cite{pav}. Therefore, the Frenet-Serret formulas can be written in matrix form as
\begin{eqnarray}
\begin{bmatrix}
\mathbf{T} \\
\mathbf{N} \\
\mathbf{B}%
\end{bmatrix}%
^{\prime }=%
\begin{bmatrix}
0 & \kappa & 0 \\
0 & 0 & \tau \\
0 &- \tau & 0%
\end{bmatrix}%
\begin{bmatrix}
\mathbf{T} \\
\mathbf{N} \\
\mathbf{B}%
\end{bmatrix}\,.
\end{eqnarray}%
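As a quick numeric sanity check of the curvature and torsion formulas above, one can evaluate them for a concrete unit-speed curve; here we take $\gamma(x)=(x,\cos x,\sin x)$, a W-curve with $\kappa=\tau=1$ (the choice of curve and the helper names are ours, not the paper's):

```python
import math

# Hand-coded derivatives of gamma(x) = (x, cos x, sin x)
def d1(x): return (1.0, -math.sin(x),  math.cos(x))   # gamma'
def d2(x): return (0.0, -math.cos(x), -math.sin(x))   # gamma''
def d3(x): return (0.0,  math.sin(x), -math.cos(x))   # gamma'''

def norm_g(v):
    # Galilean norm: |v1| for non-isotropic v, Euclidean norm of (v2, v3) otherwise
    return abs(v[0]) if v[0] != 0 else math.hypot(v[1], v[2])

def det3(a, b, c):
    return (a[0] * (b[1] * c[2] - b[2] * c[1])
          - a[1] * (b[0] * c[2] - b[2] * c[0])
          + a[2] * (b[0] * c[1] - b[1] * c[0]))

def kappa(x):
    return norm_g(d2(x))                       # kappa = ||gamma''||_G

def tau(x):
    return det3(d1(x), d2(x), d3(x)) / kappa(x) ** 2
```

Evaluating `kappa` and `tau` at any $x$ returns $1$, as expected for a circular helix.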
There is another useful frame for studying curves on a surface; for easy reference we call this surface $M$. This frame is formed from two basic vectors: the unit tangent vector field $\mathbf{T}$ of the curve $\gamma$ on $M$ and the unit normal vector field $\mathbf{n}$ of $M$ at the point $\gamma(x)$. The resulting frame field $\{\mathbf{T, Q, n}\}$, where $\mathbf{Q=n}\times_{G}\mathbf{T}$, is called the Darboux frame or the tangential-normal frame field.
\begin{theorem}Let $\gamma :I\subset \mathbb{R}\rightarrow M\subset G^{3}$ be a unit-speed curve, and let $\{\mathbf{T, Q, n}\}$ be the Darboux frame field of $\gamma$ with respect to $M$. Then the derivative formulas in matrix form are given by
\begin{eqnarray}\label{Darboux}
\begin{bmatrix}
\mathbf{T} \\
\mathbf{Q} \\
\mathbf{n}
\end{bmatrix}
^{\prime }=
\begin{bmatrix}
0 & \kappa_g & \kappa_n \\
0 & 0 & \tau_g \\
0 & -\tau_g & 0
\end{bmatrix}
\begin{bmatrix}
\mathbf{T} \\
\mathbf{Q} \\
\mathbf{n}
\end{bmatrix}\, ,
\end{eqnarray}
where $\kappa_g$, $\kappa_n$ and $\tau_g$ are called geodesic curvature, normal curvature and geodesic torsion, respectively.
\end{theorem}
\begin{proof} It follows from a componentwise computation; see \cite{buket, tevfik}.
\end{proof}
Also, (\ref{Darboux}) implies the important relations
\begin{eqnarray}\label{kt}
\kappa^2(x)=\kappa^2_g(x)+\kappa^2_n(x), \hskip .5cm \tau(x)=-\tau_g(x)+\frac{\kappa'_g(x)\kappa_n(x)-\kappa_g(x)\kappa'_n(x)}{\kappa^2_g(x)+\kappa^2_n(x)}
\end{eqnarray}
where $\kappa(x)$ and $\tau(x)$ are the curvature and the torsion of $\gamma$, respectively.
We refer to \cite{pav, ros, yag} for detailed treatment of Galilean and pseudo-Galilean geometry.
\section{Special Smarandache Curves with Darboux Apparatus with Respect to Frenet Frame in $G_{3}$}
In this section, we give special Smarandache curves with Darboux apparatus with respect to the Frenet frame of a curve on a surface in $G_3$. To do so, we use the position vector of an arbitrary curve with geodesic curvature $\kappa_{g}$, normal curvature $\kappa_{n}$ and geodesic torsion $\tau_{g}$ on a surface in $G_{3}$ given in \cite{buket}.
Based on the definition of Smarandache curve in \cite{saad,suha}, we will state the following definition.
\begin{definition}\label{smadef}
Let $\gamma(x)$ be a unit speed curve in $G_3$ and $\{\mathbf{T, N, B}\}$ be the Frenet frame field along $\gamma$. The special Smarandache $\mathbf{TN, TB}$ and $\mathbf{TNB}$ curves of $\gamma$ are, respectively, defined by
\begin{eqnarray}
\gamma_{\mathbf{TN}}&=&\mathbf{T}+\mathbf{N}\\
\gamma_{\mathbf{TB}}&=&\mathbf{T}+\mathbf{B}\\
\gamma_{\mathbf{TNB}}&=&\mathbf{T}+\mathbf{N}+\mathbf{B}.
\end{eqnarray}
\end{definition}
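The definition amounts to simple vector addition of the frame fields. A minimal numeric sketch of ours, using the Frenet frame of the illustrative W-curve $\gamma(x)=(x,\cos x,\sin x)$, for which $\mathbf T=(1,-\sin x,\cos x)$, $\mathbf N=(0,-\cos x,-\sin x)$ and $\mathbf B=(0,\sin x,-\cos x)$:

```python
import math

# Frenet frame of the unit-speed W-curve gamma(x) = (x, cos x, sin x) in G_3
def T(x): return (1.0, -math.sin(x),  math.cos(x))
def N(x): return (0.0, -math.cos(x), -math.sin(x))
def B(x): return (0.0,  math.sin(x), -math.cos(x))

def vadd(*vs):
    # componentwise sum of 3-vectors
    return tuple(sum(c) for c in zip(*vs))

def gamma_TN(x):  return vadd(T(x), N(x))
def gamma_TB(x):  return vadd(T(x), B(x))
def gamma_TNB(x): return vadd(T(x), N(x), B(x))
```

At $x=0$ this gives $\gamma_{\mathbf{TN}}(0)=(1,-1,1)$, $\gamma_{\mathbf{TB}}(0)=(1,0,0)$ and $\gamma_{\mathbf{TNB}}(0)=(1,-1,0)$.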
The following theorem is the main result of this article.
\begin{theorem}\label{posmat}
The $\mathbf{TN}$, $\mathbf{TB}$ and $\mathbf{TNB}$ special Smarandache curves with Darboux apparatus of $\gamma$ with respect to Frenet frame are, respectively, written as
\begin{eqnarray}{\label{posma}}
\gamma_{\mathbf{TN}}&=&\left(
\begin{array}{c}
1,\int N_1 dx+ \frac{1}{\sqrt{{\kappa_g}^2+{\kappa_n}^2}}N_1, \,
\int N_2 dx+\frac{1}{\sqrt{{\kappa_g}^2+{\kappa_n}^2}}N_2
\end{array}\notag
\right) \\\notag
&& \\
\gamma_{\mathbf{TB}} &=&\left(
\begin{array}{c}
\ 1\ ,\ \int N_1 dx - \frac{1}{\sqrt{{\kappa_g}^2+{\kappa_n}^2}}N_2,\,
\int N_2 dx+\frac{1}{\sqrt{{\kappa_g}^2+{\kappa_n}^2}}N_1
\end{array}
\right) \\\notag
&& \\\notag
\gamma_{\mathbf{TNB}} &=&\ \left(
\begin{array}{c}
1, \, \int N_1 dx+\frac{1}{\sqrt{{\kappa_g}^2+{\kappa_n}^2}}(N_{1}-N_{2}),\,
\int N_2 dx+\frac{1}{\sqrt{{\kappa_g}^2+{\kappa_n}^2}}(N_{1}+N_{2})%
\end{array}%
\right)
\end{eqnarray}
where
\begin{eqnarray*}
{N_1}&=&\kappa _{g}\sin \Big(\int\tau _{g}dx\Big)+\kappa _{n}\cos
\Big(\int \tau _{g}dx\Big),\\
N_2&=&\kappa _{g}\cos \Big(\int \tau _{g}dx\Big)-\kappa _{n}\sin \Big(\int \tau
_{g}dx\Big).
\end{eqnarray*}
\end{theorem}
\begin{proof}
The position vector of an arbitrary curve with geodesic curvature $\kappa_{g}$, normal curvature $\kappa_{n}$ and geodesic torsion $\tau_{g}$ on a surface in $G_{3}$ was introduced in \cite{buket} as follows:
\begin{eqnarray}\label{pos}
\gamma(x)\ =\left(
\begin{array}{c}
x,\, \int (\int (\kappa _{g}(x)\sin (\int \tau _{g}(x)dx)-\kappa _{n}(x)\int \tau
_{g}(x)\sin (\int \tau _{g}(x)dx)dx)dx)dx,\\
\\
\int (\int (\kappa _{g}\cos (\int \tau _{g}dx)-\kappa _{n}\int \tau _{g}\cos
(\int \tau _{g}dx)dx)dx)dx
\end{array}
\right)
\end{eqnarray}
The derivatives of this curve are, respectively, given by
\begin{eqnarray}
\notag
\gamma^{\prime}(x) &=&\left(
\begin{array}{c}
1, \, \int (\kappa _{g}\sin (\int \tau _{g}dx)-\kappa _{n}\int \tau _{g}\sin
(\int \tau _{g}dx)dx)dx, \\
\\
\ \int (\kappa _{g}\cos (\int \tau _{g}dx)-\kappa _{n}\int \tau _{g}\cos(\int \tau _{g}dx)dx)dx%
\end{array}
\right)\notag \\
&& \notag \\
\gamma^{\prime \prime }(x) &=&\left(
\begin{array}{c}
0, \, \kappa _{g}\sin (\int \tau _{g}dx)-\kappa _{n}\int \tau _{g}\sin
(\int \tau _{g}dx)dx, \\
\\
\kappa _{g}\cos (\int \tau _{g}dx)-\kappa _{n}\int \tau _{g}\cos (\int \tau
_{g}dx)dx
\end{array}
\right) \notag
\end{eqnarray}
The Frenet frame vector fields with Darboux apparatus of $\gamma$ are determined as follows
\begin{eqnarray*}
\mathbf{T} &=&\left(
\begin{array}{c}
1, \, \int (\kappa _{g}\sin (\int \tau _{g}dx)-\kappa _{n}\int \tau _{g}\sin
(\int \tau _{g}dx)dx)dx, \\
\\
\ \int (\kappa _{g}\cos (\int \tau _{g}dx)-\kappa _{n}\int \tau _{g}\cos
(\int \tau _{g}dx)dx)dx%
\end{array}%
\right) \\
&& \\
\mathbf{N} &=&\frac{1}{\sqrt{{\kappa_g}^2+{\kappa_n}^2}}\left(
\begin{array}{c}
0,\, \kappa _{g}\sin(\int \tau _{g}dx)+\kappa _{n}\cos(\int \tau _{g}dx),
\\ \\ \kappa _{g}\cos(\int \tau _{g}dx)-\kappa _{n}\sin(\int \tau _{g}dx)
\end{array}
\right)\\
&& \\
\mathbf{B} &=&\frac{1}{\sqrt{{\kappa_g}^2+{\kappa_n}^2}}\left(
\begin{array}{c}
0,\, -\kappa _{g}\cos(\int \tau _{g}dx)+\kappa _{n}\sin(\int \tau _{g}dx),
\\ \\ \kappa _{g}\sin(\int \tau _{g}dx)+\kappa _{n}\cos(\int \tau _{g}dx)
\end{array}
\right)
\end{eqnarray*}
Using Definition \ref{smadef}, we obtain the desired results.
\end{proof}
We now provide some applications of this theorem for some special curves.
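The theorem can also be checked numerically in its simplest special case: a line of curvature ($\tau_g\equiv 0$) with constant $\kappa_g=3$, $\kappa_n=4$, taking all integration constants to be zero (an illustrative check of ours, not part of the proof). Then $N_1=4$, $N_2=3$, $\sqrt{\kappa_g^2+\kappa_n^2}=5$, and $\gamma_{\mathbf{TN}}(x)=(1,\,4x+4/5,\,3x+3/5)$:

```python
import math

kg, kn = 3.0, 4.0                    # constant kappa_g, kappa_n; tau_g = 0
root = math.hypot(kg, kn)            # sqrt(kg^2 + kn^2) = 5
phi = 0.0                            # int tau_g dx, with integration constant 0
N1 = kg * math.sin(phi) + kn * math.cos(phi)   # = 4
N2 = kg * math.cos(phi) - kn * math.sin(phi)   # = 3

def gamma_TN(x):
    # int N1 dx = N1*x and int N2 dx = N2*x, since N1, N2 are constants here
    return (1.0, N1 * x + N1 / root, N2 * x + N2 / root)
```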
\section{Applications}
We begin by studying Smarandache curves of important special curves lying on surfaces, namely geodesics, asymptotic curves, and lines of curvature (principal lines). We also consider special cases of these curves, such as helices and Salkowski curves.
Let $\gamma$ be a regular curve on a surface in $G^3$ with curvature $\kappa$, torsion $\tau$, geodesic curvature $\kappa_g$, normal curvature $\kappa_n$ and geodesic torsion $\tau_g$.
\begin{definition}\label{defgap}
\cite{krey} We can say that $\gamma$ is
\begin{eqnarray*}
\begin{split}
\text{geodesic curve} &\Longleftrightarrow \kappa_g\equiv 0,
\\ \text{asymptotic curve} & \Longleftrightarrow \kappa_n\equiv 0,
\\ \text{line of curvature} & \Longleftrightarrow \tau_g\equiv 0.
\end{split}
\end{eqnarray*}
Also, we say that $\gamma$ is:
\begin{eqnarray}\label{helsal}
\begin{array}{ccc}
\kappa, \tau & \hskip 1cm& \gamma\\
\hline
\kappa\equiv0 &\Longleftrightarrow &\textbf{a straight line.}\\
\tau\equiv0 &\Longleftrightarrow &\textbf{a plane curve.}\\
\kappa\equiv\textit{const.$>$0},\tau\equiv\textit{const.$>$0} &\Longleftrightarrow &\textbf{a circular helix or W-curve.}\\
\frac{\tau}{\kappa}\equiv\textit{const.} &\Longleftrightarrow &\textbf{a generalized helix.}\\
\kappa\equiv\textit{const.}, \tau\neq\textit{const.} &\Longleftrightarrow &\textbf{a Salkowski curve \cite{mon,sal}.}\\
\kappa\neq\textit{const.}, \tau\equiv\textit{const.} &\Longleftrightarrow &\textbf{an anti-Salkowski curve \cite{sal}.}\\
\end{array}
\end{eqnarray}
\end{definition}
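The classification in (\ref{helsal}) is a straightforward case analysis; a small Python transcription may serve as a convenient reference (the flag names are ours):

```python
# Transcription of the classification table; flags describe whether
# kappa and tau vanish or are constant, and whether tau/kappa is constant.
def classify(kappa_zero=False, tau_zero=False,
             kappa_const=False, tau_const=False, ratio_const=False):
    if kappa_zero:
        return "straight line"
    if tau_zero:
        return "plane curve"
    if kappa_const and tau_const:
        return "circular helix (W-curve)"
    if ratio_const:
        return "generalized helix"
    if kappa_const:
        return "Salkowski curve"
    if tau_const:
        return "anti-Salkowski curve"
    return "general curve"
```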
\subsection{The position vectors of Smarandache curves of a general geodesic curve in $G_3$}
\begin{theorem}{\label{thmgeo}}
The position vectors $\alpha^g(x)$ of Smarandache curves of a family of geodesic curves in $G_3$ are given by
%
%
\begin{eqnarray*}
\alpha^g _{\mathbf{TN}} &=&\left(
\begin{array}{c}
1, \, \int \kappa _{n}\cos (\int \tau _{g}dx)dx+\cos (\int \tau _{g}dx),
-\int \kappa _{n}\sin (\int \tau _{g}dx)dx-\sin (\int \tau _{g}dx)
\end{array}
\right) \\
&& \\
\alpha^g _{\mathbf{TB}} &=&\left(
\begin{array}{c}
1, \, \int \kappa _{n}\cos (\int \tau _{g}dx)dx+\sin (\int \tau _{g}dx),
-\int \kappa _{n}\sin (\int \tau _{g}dx)dx+\cos (\int \tau _{g}dx)
\end{array}
\right)\\
&&\\
\alpha^g _{\mathbf{TNB}} &=&\left(
\begin{array}{c}
1, \, \int \kappa _{n}\cos (\int \tau _{g}dx)dx+\cos (\int \tau _{g}dx)+\sin (\int \tau _{g}dx),\\
\\
-\int \kappa _{n}\sin (\int \tau _{g}dx)dx+\cos (\int \tau _{g}dx)-\sin (\int \tau _{g}dx)
\end{array}%
\right) \notag
\end{eqnarray*}
\end{theorem}
\begin{proof}
The above equations are obtained as the general position vectors of the $\mathbf{TN}, \, \mathbf{TB}$ and $\mathbf{TNB}$ special Smarandache curves with Darboux apparatus of a geodesic curve on a surface in $G_3$ by using Definition \ref{defgap} and Theorem \ref{posmat}.
\end{proof}
Now, we will give the position vectors for special Smarandache curves of some special cases of a geodesic curve in $G_3$.
\begin{corollary}
The position vectors of special Smarandache curves of a family of geodesic curves that are circular helices in $G_3$ are given by the equations
%
%
%
%
%
\begin{eqnarray*}
\alpha^g_{ch}{_\mathbf{TN}}(x) &=& \left(
\begin{array}{c}
1, \, \frac{e}{c}\sin(cx+c_{1})+\cos(cx+c_{1}), \\ \\
\frac{e}{c}\cos(cx+c_{1})-\sin(cx+c_{1})
\end{array}
\right)\\
\\
\alpha^g_{ch}{_\mathbf{TB}}(x) &=& \left(
\begin{array}{c}
1, \, (\frac{e+c}{c})\sin(cx+c_{1}), \, (\frac{e+c}{c})\cos(cx+c_{1})
\end{array}
\right)\\
\\
\alpha^g_{ch}{_\mathbf{TNB}}(x) &=&\left(
\begin{array}{c}
1, \,
(\frac{e+c}{c})\sin(cx+c_{1})+\cos(cx+c_{1}), \\
\\
(\frac{e+c}{c})\cos(cx+c_{1})-\sin(cx+c_{1}) \\
\\
\end{array}%
\right)
\end{eqnarray*}
where $c, \, c_1$ and $e$ are constants.
\end{corollary}
\begin{corollary}
The position vectors of special Smarandache curves of a family of geodesic curves that are generalized helices in $G_3$ are given by the equations
%
%
%
%
%
\begin{eqnarray*}
\alpha^g_{gh}{_\mathbf{TN}}(x)&=&\left(
\begin{array}{c}
1, \, \frac{1}{d}\sin(d\int\kappa_{n}dx)+\cos(d\int\kappa_{n}dx), \\
\\ \frac{1}{d}\cos(d\int\kappa_{n}dx)-\sin(d\int\kappa_{n}dx)\\
\end{array}
\right)\\
\\
\alpha^g_{gh}{_\mathbf{TB}}(x)&=&\left(
\begin{array}{c}
1, \, \frac{1}{d}\sin(d\int\kappa_{n}dx)+\sin(d\int\kappa_{n}dx), \\
\\ \frac{1}{d}\cos(d\int\kappa_{n}dx)+\cos(d\int\kappa_{n}dx)\\
\end{array}%
\right) \\
\\
\alpha^g_{gh}{_\mathbf{TNB}}(x)&=&\left(
\begin{array}{c}
1, \, \frac{d+1}{d}\sin(d\int\kappa_{n}dx)+\cos(d\int\kappa_{n}dx), \\
\\
\frac{d+1}{d}\cos(d\int\kappa_{n}dx)-\sin(d\int\kappa_{n}dx) \\
\end{array}
\right)
\end{eqnarray*}
where $d$ is a constant.
\end{corollary}
\begin{corollary}
The position vectors of Smarandache curves of a family of geodesics that are Salkowski curves in $G_3$ are given by the equations
%
%
%
%
%
%
\begin{eqnarray*}
\alpha^g_{s}{_\mathbf{TN}}(x)&=&\left(
\begin{array}{c}
1, \, m\int\cos(\int\tau_{g}dx)dx+\cos(\int\tau_{g}dx), \\
\\ -m\int\sin(\int\tau_{g}dx)dx-\sin(\int\tau_{g}dx) \\
\end{array}%
\right) \\
\\
\alpha^g_{s}{_\mathbf{TB}}(x)&=&\left(
\begin{array}{c}
1, \, m\int\cos(\int\tau_{g}dx)dx+\sin(\int\tau_{g}dx), \\
\\ -m\int\sin(\int\tau_{g}dx)dx+\cos(\int\tau_{g}dx) \\
\end{array}%
\right)\\
\\
\alpha^g_{s}{_\mathbf{TNB}}(x)&=&\left(
\begin{array}{c}
1, \, m\int\cos(\int\tau_{g}dx)dx+\cos(\int\tau_{g}dx)+\sin(\int\tau_{g}dx),\\
\\ -m\int\sin(\int\tau_{g}dx)dx+\cos(\int\tau_{g}dx)-\sin(\int\tau_{g}dx) \\
\end{array}%
\right)
\end{eqnarray*}
where $m$ is a constant.
\end{corollary}
\begin{corollary}
The position vectors of Smarandache curves of a family of geodesics that are anti-Salkowski curves in $G_3$ are given by the equations
%
%
%
\begin{eqnarray*}
\alpha^g_{as}{_\mathbf{TN}}(x)&=&\left(
\begin{array}{c}
1, \, \int\kappa_{n}\cos(cx+c_{1})dx+\cos(cx+c_{1}),\\
\\ -\int\kappa_{n}\sin(cx+c_{1})dx-\sin(cx+c_{1})
\end{array}%
\right)\\
\\
\alpha^g_{as}{_\mathbf{TB}}(x)&=&\left(
\begin{array}{c}
1, \, \int\kappa_{n}\cos(cx+c_{1})dx+\sin(cx+c_{1}) , \\
\\ -\int\kappa_{n}\sin(cx+c_{1})dx+\cos(cx+c_{1})
\end{array}%
\right)\\
\alpha^g_{as}{_\mathbf{TNB}}(x)&=&\left(
\begin{array}{c}
1, \, \int\kappa_{n}\cos(cx+c_{1})dx+\cos(cx+c_{1})+\sin(cx+c_{1}), \\
\\ -\int\kappa_{n}\sin(cx+c_{1})dx+\cos(cx+c_{1})-\sin(cx+c_{1})
\end{array}%
\right)
\end{eqnarray*}
where $c$ and $c_{1}$ are constants.
\end{corollary}
We note that the above corollaries can be proved by using equations (\ref{kt}), (\ref{helsal}) and Theorem \ref{thmgeo}.
\subsection{The position vectors of Smarandache curves of a general asymptotic curve in $G_3$}
\begin{theorem}{\label{thmasy}}
The position vectors $\beta^a(x)$ of Smarandache curves of a family of asymptotic curves in $G_3$ are given by
%
%
%
%
%
%
%
%
\begin{eqnarray*}
\beta^a _{\mathbf{TN}} &=&\left( 1, \, \int \kappa _{g}\sin (\int \tau _{g}dx)dx+\sin
\int \tau _{g}dx\ ,\ \int \kappa _{g}\cos (\int \tau _{g}dx)dx+\cos \int
\tau _{g}dx\right) \\
&& \notag \\
\beta^a _{\mathbf{TB}} &=&\left( 1, \, \int \kappa _{g}\sin (\int \tau _{g}dx)dx-\cos
\int \tau _{g}dx\ ,\ \int \kappa _{g}\cos (\int \tau _{g}dx)dx+\sin \int
\tau _{g}dx\right) \notag \\
&& \notag \\
\beta^a _{\mathbf{TNB}} &=&\left(
\begin{array}{c}
1, \, \int \kappa _{g}\sin (\int \tau _{g}dx)dx+\sin \int \tau _{g}dx-\cos
\int \tau _{g}dx, \, \\ \\
\int \kappa _{g}\cos (\int \tau _{g}dx)dx+\cos \int \tau _{g}dx+\sin \int
\tau _{g}dx%
\end{array}%
\right) \notag
\end{eqnarray*}
\end{theorem}
\begin{proof}
By using Definition \ref{defgap} in Theorem \ref{posmat}, the above equations are obtained as the general position vectors of the $\mathbf{TN}, \, \mathbf{TB}$ and $\mathbf{TNB}$ special Smarandache curves with Darboux apparatus of an asymptotic curve on a surface in $G_3$.
\end{proof}
Now, we give the position vectors of Smarandache curves of some special cases of an asymptotic curve in $G_3$.
\begin{corollary}
The position vectors of Smarandache curves of a family of asymptotic curves that are circular helices in $G_3$ are given by the equations
%
%
%
%
\begin{eqnarray*}
\beta^a_{ch}{_\mathbf{TN}}(x)&=&\left(
\begin{array}{c}
1, \, -\frac{f}{c}\cos(cx+c_{1})+\sin(cx+c_{1})\ , \\
\\ \frac{f}{c}\sin(cx+c_{1})+\cos(cx+c_{1}) \\
\end{array}%
\right)\\ \\
\beta^a_{ch}{_\mathbf{TB}}(x)&=&\left(
\begin{array}{c}
1, \, -(\frac{c+f}{c})\cos(cx+c_{1}),
\, (\frac{c+f}{c})\sin(cx+c_{1}) \\
\end{array}
\right)\\ \\
\beta^a_{ch}{_\mathbf{TNB}}(x)&=&\left(
\begin{array}{c}
1, \, -(\frac{c+f}{c})\cos(cx+c_{1})+\sin(cx+c_{1}),\\
\\ (\frac{c+f}{c})\sin(cx+c_{1})+\cos(cx+c_{1})
\end{array}%
\right)
\end{eqnarray*}
where $c, c_{1}$ and $f$ are constants.
\end{corollary}
\begin{corollary}
The position vectors of Smarandache curves of a family of asymptotic curves that are generalized helices in $G_3$ are given by the equations
%
%
\begin{eqnarray*}
\beta^a_{gh}{_\mathbf{TN}}(x)&=&\left(
\begin{array}{c}
1, \, -\frac{1}{k}\cos(k\int \kappa_{g}dx)+\sin(k \int \kappa_{g}dx), \\
\\
\frac{1}{k}\sin(k\int \kappa_{g}dx)+\cos(k \int \kappa_{g}dx) \\
\end{array}%
\right) \\
\\
\beta^a_{gh}{_\mathbf{TB}}(x)&=&\Big(
\begin{array}{c}
1, \, -\frac{k+1}{k}\cos(k\int \kappa_{g}dx), \,
\frac{k+1}{k}\sin(k\int \kappa_{g}dx) \\
\end{array}%
\Big)\\
\\
\beta^a_{gh}{_\mathbf{TNB}}(x)&=&\left(
\begin{array}{c}
1, \, -\frac{k+1}{k}\cos(k\int \kappa_{g}dx)+\sin(k\int \kappa_{g}dx), \\
\\
\frac{k+1}{k}\sin(k\int \kappa_{g}dx)+\cos(k\int \kappa_{g}dx) \\
\end{array}%
\right)
\end{eqnarray*}
where $k$ is a constant.
\end{corollary}
\begin{corollary}
The position vectors of Smarandache curves of a family of asymptotic curves that are Salkowski curves in $G_3$ are given by the equations
\begin{eqnarray*}
\beta^a_{s}{_\mathbf{TN}}(x)&=&\left(
\begin{array}{c}
1, \, \int(f\sin(\int \tau_{g}dx))dx+\sin(\int \tau_{g}dx), \\
\\
\int(f\cos(\int \tau_{g}dx))dx+\cos(\int \tau_{g}dx) \\
\end{array}%
\right)\\
\\
\beta^a_{s}{_\mathbf{TB}}(x)&=&\left(
\begin{array}{c}
1, \, \int(f\sin(\int \tau_{g}dx))dx-\cos(\int \tau_{g}dx), \\
\\
\int(f\cos(\int \tau_{g}dx))dx+ \sin(\int \tau_{g}dx) \\
\end{array}%
\right)\\
\\
\beta^a_{s}{_\mathbf{TNB}}(x)&=&\left(
\begin{array}{c}
1, \, \int(f\sin(\int \tau_{g}dx))dx+\sin(\int \tau_{g}dx)-\cos(\int \tau_{g}dx), \\
\\
\int(f\cos(\int \tau_{g}dx))dx+\cos(\int \tau_{g}dx)+ \sin(\int \tau_{g}dx) \\
\end{array}%
\right)
\end{eqnarray*}
where $f$ is a constant.
\end{corollary}
\begin{corollary}
The position vectors of Smarandache curves of a family of asymptotic curves that are anti-Salkowski curves in $G_3$ are given by the equations
%
%
\begin{eqnarray*}
\beta^a_{as}{_\mathbf{TN}}(x)&=&\left(
\begin{array}{c}
1, \, \int(\kappa_{g}\sin(cx+c_{1}))dx+\sin(cx+c_{1}),\\ \\ \int(\kappa_{g}\cos(cx+c_{1}))dx+\cos(cx+c_{1}) \\
\end{array}
\right)\\
\\
\beta^a_{as}{_\mathbf{TB}}(x)&=&\left(
\begin{array}{c}
1, \, \int(\kappa_{g}\sin(cx+c_{1}))dx-\cos(cx+c_{1}), \\ \\
\int(\kappa_{g}\cos(cx+c_{1}))dx+\sin(cx+c_{1}) \\
\end{array}
\right)\\
\\
\beta^a_{as}{_\mathbf{TNB}}(x)&=&\left(
\begin{array}{c}
1, \, \int(\kappa_{g}\sin(cx+c_{1}))dx+\sin(cx+c_{1})-\cos(cx+c_{1}), \\
\\
\int(\kappa_{g}\cos(cx+c_{1}))dx+\cos(cx+c_{1})+\sin(cx+c_{1})
\end{array}
\right)
\end{eqnarray*}
where $c$ and $c_1$ are constants.
\end{corollary}
\begin{proof}
The above corollaries can be proved by using equations (\ref{kt}), (\ref{helsal}) and Theorem \ref{thmasy}.
\end{proof}
\subsection{The position vectors of Smarandache curves of a general curvature line in $G_3$}
\begin{theorem}{\label{thmcur}}
The position vectors $\gamma^c(x)$ of Smarandache curves of a family of curvature lines in $G_3$ are given by
%
%
%
%
%
%
%
%
%
\begin{eqnarray*}
\gamma^c _{_\mathbf{TN}} &=&\left(
\begin{array}{c}
1,\, \int (\kappa _{g}\sin a + \kappa_n \cos a)dx+\frac{1}{\sqrt{\kappa_g^2+\kappa_n^2}}(\kappa _{g}\sin a + \kappa_n \cos a),\\
\, \int (\kappa _{g}\cos a - \kappa_n \sin a)dx+\frac{1}{\sqrt{\kappa_g^2+\kappa_n^2}}(\kappa _{g}\cos a - \kappa_n \sin a)
\end{array}
\right) \\
&& \notag \\
\gamma^c _{_\mathbf{TB}} &=&\left(
\begin{array}{c}
1,\, \int (\kappa _{g}\sin a + \kappa_n \cos a)dx-\frac{1}{\sqrt{\kappa_g^2+\kappa_n^2}}(\kappa _{g}\cos a - \kappa_n \sin a),\\
\, \int (\kappa _{g}\cos a - \kappa_n \sin a)dx+\frac{1}{\sqrt{\kappa_g^2+\kappa_n^2}}(\kappa _{g}\sin a + \kappa_n \cos a)
\end{array}
\right) \\
&& \notag \\
\gamma^c _{_\mathbf{TNB}} &=&\left(
\begin{array}{c}
1,\, \int (\kappa _{g}\sin a + \kappa_n \cos a)dx\\+\frac{1}{\sqrt{\kappa_g^2+\kappa_n^2}}(\kappa _{g}(\sin a-\cos a) + \kappa_n(\sin a+\cos a)),\\
\, \int (\kappa _{g}\cos a - \kappa_n \sin a)dx\\+\frac{1}{\sqrt{\kappa_g^2+\kappa_n^2}}(\kappa _{g}(\sin a+\cos a) + \kappa_n (\cos a-\sin a))
\end{array}
\right) \notag
\end{eqnarray*}
\end{theorem}
\begin{proof}
By using the definition (\ref{defgap}) in Theorem \ref{posmat}, then the above equations are obtained as general position vectors for $\mathbf{TN}, \, \mathbf{TB}$ and $\mathbf{TNB}$ special smarandache curves with Darboux apparatus of a curvature line on a surface in $G_3$.
\end{proof}
Now, we give the position vectors of Smarandache curves of some special cases of a curvature line in $G_3$.
\begin{corollary}
The position vectors of Smarandache curves of a family of curvature lines with $\kappa_{g}\equiv$ const. and $\kappa_{n}\equiv$ const. that are circular helices in $G_3$ are given by
%
%
%
%
\begin{eqnarray*}
\gamma^c_{ch}{_\mathbf{TN}}(x)&=&\left(
\begin{array}{c}
1, \, (a_{1}\sin a+a_{2}\cos a)x+\frac{1}{\sqrt{a_1^2+a_2^2}}(a_{1}\sin a+a_2\cos a), \\
\\ (a_{1}\cos a-a_{2}\sin a)x+\frac{1}{\sqrt{a_1^2+a_2^2}}(a_{1}\cos a-a_2\sin a)
\end{array} \right) \\
\\
\gamma^c_{ch}{_\mathbf{TB}}(x)&=&\left(
\begin{array}{c}
1, \, (a_{1}\sin a+a_{2}\cos a)x-\frac{1}{\sqrt{a_1^2+a_2^2}}(a_{1}\cos a-a_2\sin a), \\
\\ (a_{1}\cos a-a_{2}\sin a)x+\frac{1}{\sqrt{a_1^2+a_2^2}}(a_{1}\sin a+a_2\cos a)
\end{array}
\right)\\
\\
\gamma^c_{ch}{_\mathbf{TNB}}(x)&=&\left(
\begin{array}{c}
1, \, (a_{1}\sin a+a_{2}\cos a)x\\+\frac{1}{\sqrt{a_1^2+a_2^2}}(a_{1}(\sin a-\cos a)+a_2(\cos a+\sin a)), \\
\\ (a_{1}\cos a-a_{2}\sin a)x\\+\frac{1}{\sqrt{a_1^2+a_2^2}}(a_{1}(\sin a+\cos a)+a_2(\cos a-\sin a))
\end{array}
\right)
\end{eqnarray*}
%
\end{corollary}
\begin{proof}
By using the equations (\ref{kt}) and (\ref{helsal}) in Theorem \ref{thmcur}, we obtain the above equation.
\end{proof}
We now provide some illustrative examples for an arbitrary curve on a surface.
\begin{example}
In \eqref{pos}, if we let $\kappa_g(x)=\sin x, \, \kappa_n(x)=\cos x$ and $\tau_g(x)=x$, we obtain the following curve:
\begin{eqnarray}
\gamma(x)= \left(
\begin{array}{c}
x, \\\\ \sqrt{\pi } \left( x\cos \left( 1/2 \right) -\cos \left( 1/2 \right) \right) {\it FresnelC} \left( {\frac {x-1}{ \sqrt{\pi }}} \right) \\
\mbox{}+ \sqrt{\pi } \left( x\sin \left( 1/2 \right) -\sin \left( 1/2 \right) \right) {\it FresnelS} \left( {\frac {x-1}{ \sqrt{\pi }}} \right) \\
\mbox{}-\cos \left( 1/2 \right) \sin \left( 1/2\, \left( x-1 \right) ^{2} \right) +\sin \left( 1/2 \right) \cos \left( 1/2\, \left( x-1 \right) ^{2} \right),\\\\
- \sqrt{\pi } \left( \sin \left( 1/2 \right) -x\sin \left( 1/2 \right) \right) {\it FresnelC} \left( {\frac {x-1}{ \sqrt{\pi }}} \right) \\
\mbox{}- \sqrt{\pi } \left( x\cos \left( 1/2 \right) -\cos \left( 1/2 \right) \right) {\it FresnelS} \left( {\frac {x-1}{ \sqrt{\pi }}} \right) \\-\cos \left( 1/2\, \left( x-1 \right) ^{2} \right) \cos \left( 1/2 \right)
\mbox{}-\sin \left( 1/2\, \left( x-1 \right) ^{2} \right) \sin \left( 1/2 \right)
\end{array}
\right)
\end{eqnarray}
where $$FresnelS(x)=\int \sin\left(\frac{\pi x^2}{2}\right) dx, \hskip .5cm FresnelC(x)=\int \cos\left(\frac{\pi x^2}{2}\right) dx.$$
The special Smarandache curves of $\gamma$ can be obtained directly from Definition \ref{smadef}, or by replacing $\kappa_g(x)$, $\kappa_n(x)$ and $\tau_g(x)$ by $\sin x$, $\cos x$ and $x$ in Theorem \ref{posmat}, respectively.
In this case, the graphs of the curve $\gamma$ and of its $\mathbf{TN, TB, TNB}$ special Smarandache curves are given in Figure \ref{fig1}.
\end{example}
\begin{figure}[h]
\begin{center}
\includegraphics[width=0.4 \textwidth]{alfa_Tg=x,Kg=sinx,Kn=cosx_.png}
\includegraphics[width=0.4\textwidth]{TNB-TB-TN.png}
\caption{Left: the curve $\gamma$. Right: the Smarandache curves $\gamma_{TNB}, \gamma_{TB}, \gamma_{TN}$ of $\gamma$, plotted from the outside in.} \label{fig1}
\end{center}
\end{figure}
We now consider another example, of a geodesic curve on a surface, along with its graphs.
\begin{example} Let the surface $M$ be defined by
$$\phi(u,v)=\Bigg(u+v, \frac{u-\sin(u+v)\cos(u+v)}{4}, \frac{\sin(u+v)^2-u^2}{4}\Bigg)$$
and define the curve $\gamma$ lying on the surface $M$ by
$$\displaystyle \gamma(x) = \Bigg(x, \frac{x-\sin(x)\cos(x)}{4}, \frac{\sin(x)^2-x^2}{4}\Bigg).$$
Thus, $\gamma$ is a geodesic curve on $M$ in $G_3$ with $\kappa(x)=\sin x$ and $\tau(x)\equiv1$. The vector fields $\mathbf {T, Q, n}$ and the curvatures $\kappa_n(x), \tau_g(x)$ are obtained by using equations (\ref{Darboux}) and (\ref{kt}). Using these curvatures in Theorem \ref{thmgeo}, we derive the special Smarandache curves of $\gamma$.
\end{example}
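The claim $\kappa(x)=\sin x$ in this example can be verified directly from $\kappa=\Vert\gamma''\Vert_G$ using the hand-computed second derivatives $y''(x)=\sin x\cos x$ and $z''(x)=-\sin^2 x$ (a small numeric check of ours):

```python
import math

# Second derivatives of gamma(x) = (x, (x - sin x cos x)/4, (sin^2 x - x^2)/4)
def y2(x):
    # y'(x) = sin^2(x)/2, hence y''(x) = sin x cos x
    return math.sin(x) * math.cos(x)

def z2(x):
    # z'(x) = (sin x cos x - x)/2, hence z''(x) = -sin^2(x)
    return -math.sin(x) ** 2

def kappa(x):
    # gamma'' = (0, y'', z'') is isotropic, so the Galilean norm is Euclidean
    return math.hypot(y2(x), z2(x))  # = |sin x|
```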
\begin{figure}[h]
\begin{center}
\includegraphics[width=0.6\textwidth]{buketsekil1.png}%
\includegraphics[width=0.4\textwidth]{buketsekil2.png}
\caption{$\phi(u,v)$ surface and $\gamma(x)$ curve}
\end{center}
\end{figure}
\begin{figure}[h]
\begin{center}
\includegraphics[width=0.3\textwidth]{BuketTN.png}%
\includegraphics[width=0.3\textwidth]{BuketTB.png}%
\includegraphics[width=0.3\textwidth]{BuketTNB.png}
\caption{$\gamma_\mathbf{TN}, \gamma_\mathbf{TB}, \gamma_\mathbf{TNB}$ special Smarandache curves of $\gamma$, respectively.} \label{fig3}
\end{center}
\end{figure}
\begin{figure}[h]
\begin{center}
\includegraphics[width=0.3\textwidth]{alfa_Tn.png}%
\includegraphics[width=0.3\textwidth]{alfa_TQ.png}%
\includegraphics[width=0.3\textwidth]{alfa_TQn.png}
\caption{$\gamma_\mathbf{Tn}, \gamma_\mathbf{TQ}, \gamma_\mathbf{TQn}$ special Smarandache curves with respect to the Darboux frame of $\gamma$, respectively.} \label{fig4}
\end{center}
\end{figure}
\newpage
\section{Conclusion}
In this work, we studied the general position vectors of special Smarandache curves with Darboux apparatus of an arbitrary curve on a surface in the three-dimensional Galilean space $G^{3}$. As a result, we also provided the special Smarandache curves of geodesics, asymptotic curves and lines of curvature on a surface in $G^{3}$, and gave related examples of special Smarandache curves with respect to the Frenet and Darboux frames of an arbitrary curve on a surface. We emphasize that one can investigate position vectors of elastic curves on a surface using the general position vectors of curves on a surface in Galilean space. Last but not least, we point out that the results of this study can easily be generalized to families of surfaces that have common Smarandache curves.
\section*{Acknowledgements}
This study was supported financially by the Research Centre of Amasya University (Project No: FMB-BAP16-0213).
\section{An Interpretation of DST}
Let
us assume that objects of a population can
be described by an intrinsic attribute $X$ taking
exclusively one of the $n$ discrete values from its
domain $\Xi=\{v_1,v_2,...,v_n\}$. Let us assume furthermore
that, to learn the actual value taken by an object,
we must apply a measurement method (a system of tests) $M$.
\begin{df} \label{MDef}
Let $X$ be a set-valued attribute taking as its values non-empty
subsets of a finite domain $\Xi$.
By a measurement method
of the value of the attribute $X$
we understand a function
$$M: \Omega \times 2^\Xi \rightarrow \{TRUE,FALSE\},$$
where $\Omega$ is the set (or population) of objects,
such that
\begin{itemize}
\item
$ \forall_{\omega; \omega \in \Omega} \quad
M(\omega,\Xi)=TRUE$ (X takes at least one of values from $\Xi$)
\item
$ \forall_{\omega; \omega \in \Omega} \quad
M(\omega,\emptyset)=FALSE$
\item
whenever
$M(\omega,A)=TRUE$
for $\omega \in \Omega$, $A \subseteq \Xi$
then for any $B$ such that $A \subset B$, $M(\omega,B)=TRUE$
holds,
\item
whenever
$M(\omega,A)=TRUE$
for $\omega \in \Omega$, $A \subseteq \Xi$ and if $card(A)>1$ then there
exists $B$, $B \subset A$ such that $M(\omega,B)=TRUE$ holds.
\item
for every $\omega$ and every $A$
either
$M(\omega,A)=TRUE$ or
$M(\omega,A)=FALSE$ (but never both).
\end{itemize}
$M(\omega,A)$ tells us whether or not any of the elements of the set $A$
belong to the actual value of the attribute $X$ for the object $\omega$.
\end{df}
The measuring function $M(\omega,A)$, if it takes the value TRUE,
states for an object $\omega$ and a set $A$ of values from the domain of $X$
that $X$
takes for this object (at least) one of the values in $A$.
With each application of the
measurement procedure some cost is associated, increasing
roughly as the size of the tested set $A$ decreases, so that we
are ready to accept results of previous measurements in the form
of a pre-labeling of the population. So
\begin{df}
A {\em label} $L$ of an object $\omega \in \Omega$ is a subset of the domain
$\Xi$ of the attribute $X$. \\
A {\em labeling} under the measurement method $M$ is a function $l: \Omega
\rightarrow 2^\Xi$ such that for any object $\omega \in \Omega$ either
$l(\omega)=\emptyset$ or $M(\omega,l(\omega))=TRUE$.\\
Each {\em labelled object} (under the labeling $l$)
consists of a
pair $(O_j,L_j)$, $O_j$ - the j$^{th}$ object, $L_j=l(O_j)$ - its label.\\
By a {\em population under the labeling $l$} we understand the predicate
$P:\Omega \rightarrow \{TRUE,FALSE\}$ of the form
$P(\omega)=TRUE$ iff $l(\omega) \neq \emptyset$
(or, alternatively, the set of objects for which this predicate is true). \\
If for every object of the
population the label is equal
to $\Xi$ then we talk of an {\em unlabeled population} (under the
labeling $l$), otherwise of a {\em pre-labelled} one.
\end{df}
Let us assume that in practice we apply a modified
measurement method
$M_l$, defined as follows:
\begin{df}
Let $l$ be a labeling under the measurement method $M$.
Let us consider the population under this labeling.
The modified measurement method
$$M_l:
\Omega \times 2^\Xi \rightarrow
\{TRUE,FALSE\}$$
where $\Omega$ is the set of objects,
is defined as
$$M_l(\omega,A)= M(\omega,A \cap l(\omega) )$$
(Notice that
$M_l(\omega,A)=FALSE$ whenever $A \cap l(\omega)= \emptyset$.)
\end{df}
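The definitions above can be sketched in a few lines of Python. This is a toy illustration, not part of the formal development: the population, the hidden values in `true_value`, the labeling, and all names are hypothetical.

```python
# Toy model of the measurement method M and the modified method M_l.
XI = frozenset({"v1", "v2", "v3"})

# Hypothetical population: object -> actual (non-empty) value of X
true_value = {
    "o1": frozenset({"v1"}),
    "o2": frozenset({"v2", "v3"}),
    "o3": frozenset({"v3"}),
}

def measure(omega, A):
    """M(omega, A): TRUE iff some element of A belongs to X(omega)."""
    return bool(frozenset(A) & true_value[omega])

# A labeling: each label is either empty or confirmed by the measurement
labeling = {"o1": frozenset({"v1", "v2"}), "o2": XI, "o3": frozenset({"v3"})}

def measure_labeled(omega, A):
    """Modified method M_l(omega, A) = M(omega, A intersected with l(omega))."""
    return measure(omega, frozenset(A) & labeling[omega])
```

One can check directly that `measure` satisfies the defining conditions ($M(\omega,\Xi)=TRUE$, $M(\omega,\emptyset)=FALSE$, monotonicity), and that `measure_labeled` returns FALSE whenever the tested set misses the label.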
For a labeled object $(O_j,L_j)$ ($O_j$ - the proper object,
$L_j$ -
its label) and a set $A$ of values from the domain of $X$,
the modified measurement method tells us
that $X$ takes one of the values in $A$ if and only if it in fact takes
a value from the intersection of $A$ and $L_j$.
Expressed differently, we
discard a priori any attribute value not in the label.
Please pay attention also to the fact that, given a population P for which
the measurement method $M$ is defined, the labeling $l$ (according to its
definition) selects a subset of this population, possibly a proper subset,
namely the population P'
under this labeling:
$P'(\omega)=P(\omega) \land M(\omega,l(\omega))$.
Hence $M_l$ is possibly defined for a ``smaller''
population P' than $M$ is. \\
Let us define the following functions, referred to as the
labelled Belief, labelled Plausibility and labelled Mass
Functions respectively, for the labeled population P.
The predicate $\Prob{\omega}{P}\alpha(\omega)$ shall denote
the probability of truth of the expression $\alpha(\omega)$ over $\omega$ given
the population $P(\omega)$.
\begin{df}
Let P be a population and $l$ its labeling. Then
$$Bel_P ^{M_l}(A)=\Prob{\omega}{P} \lnot M_l(\omega,\Xi-A)$$
$$Pl_P ^{M_l}(A)=\Prob{\omega}{P} M_l(\omega,A)$$
$$m_P ^{M_l}(A)=\Prob{\omega}{P} (\bigwedge_{B;B=\{v_i\}\subseteq A}
M_l(\omega,B)
\land \bigwedge_{B;B=\{v_i\}\subseteq \Xi-A} \lnot
M_l(\omega,B))$$
\end{df}
\begin{th}
$m_P ^{M_l}$, $Bel_P ^{M_l}$, $Pl_P ^{M_l}$ and the induced $Q_P ^{M_l}$ are the mass,
belief, plausibility and commonality functions in the sense of DST.
\end{th}
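A minimal computational sketch of these labelled functions follows. It assumes a toy population given directly by the effective values $X(\omega) \cap l(\omega)$ of its objects; the data and the names `effective`, `mass`, `belief`, `plausibility` are hypothetical.

```python
from fractions import Fraction
from itertools import chain, combinations

XI = frozenset({"v1", "v2", "v3"})

# Hypothetical labelled population: one effective value X(omega) ∩ l(omega)
# per object (objects with empty intersection were discarded beforehand).
effective = [
    frozenset({"v1"}),
    frozenset({"v1", "v2"}),
    frozenset({"v2"}),
    frozenset({"v1"}),
]

def mass(A):
    """m_P(A): fraction of objects whose effective value is exactly A."""
    A = frozenset(A)
    return Fraction(sum(e == A for e in effective), len(effective))

def subsets(S):
    S = list(S)
    return (frozenset(c) for c in chain.from_iterable(
        combinations(S, r) for r in range(len(S) + 1)))

def belief(A):
    """Bel_P(A): probability that the effective value lies entirely in A."""
    return sum(mass(B) for B in subsets(frozenset(A)) if B)

def plausibility(A):
    """Pl_P(A): probability that the effective value intersects A."""
    A = frozenset(A)
    return sum(mass(B) for B in subsets(XI) if B & A)
```

On this toy data $Bel(\{v_1\})=1/2$ and $Pl(\{v_1\})=3/4$, consistent with the definitions above.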
Let us now assume we run a ``(re-)labelling process'' on the
(pre-labelled or unlabeled)
population P.
\begin{df}
Let $M$ be a measurement method, $l$ be a labeling under this measurement
method, and P be a population under this labeling (Note that the population
may also be unlabeled).
The {\em (simple) labelling process} on
the
population P
is defined as a functional
$LP: 2^\Xi \times \Gamma \rightarrow \Gamma$, where $\Gamma$ is the set of
all possible labelings under $M$,
such that for the given labeling $l$ and a given nonempty
set of attribute values $L$ ($L \subseteq \Xi$),
it delivers a new labeling $l'$ ($l'=LP(L,l)$) such that for every object
$\omega \in \Omega$:
1. if $M_l(\omega,L)=FALSE$ then
$l'(\omega)=\emptyset$\\
(that is, $l'$ discards a
labeled
object $(\omega,l(\omega))$ if $M_l(\omega,L)=FALSE$);\\
2. otherwise $l'(\omega)=l(\omega) \cap L$
(that is, $l'$ labels the object with $l(\omega) \cap L$).
\end{df}
Remark: It is immediately obvious that the population obtained as the sample
fulfills the requirements of the definition of a labeled population.
The labeling process clearly induces from P another population P' (a
population under the labeling $l'$), a subset of P (hence perhaps
``smaller''
than P), labelled a
bit differently. If we retain the primary measurement
method M, then a new modified measurement method
$M_{l'}$ is induced by the new labeling.
\begin{df}
The {\em labelling process function}
$m ^{LP;L}: 2 ^\Xi \rightarrow [0,1]$
is defined as:
$$m ^{LP;L}(L)=1,$$
$$\forall_{B; B \in 2^\Xi, B \ne L} \quad m ^{LP;L}(B)=0.$$
\end{df}
It is immediately obvious that:
\begin{th}
$m ^{LP;L }$ is a Mass Function in the sense of DST.
\end{th}
Let $Bel ^{LP;L}$ be the belief and $Pl ^{LP;L}$ the
plausibility function corresponding to $m ^{LP;L}$. Now let us pose the
question: what is the relationship between $Bel_{P'} ^{M_{l'}}$,
$Bel_P ^{M_l}$, and $Bel ^{LP;L}$?
\begin{th}
\label{thSimpleLab}
Let $M$ be a measurement function, $l$ a labeling, P a population under
this labeling. Let $L $ be a subset of $\Xi$.
Let $LP$ be a labeling process and let $l'=LP(L ,l)$.
Let P' be a population under the labeling $l'$.
Then
$Bel_{P'} ^{M_{l'}}$ is the combination via Dempster's
rule of $Bel_P ^{M_l}$ and $Bel ^{LP;L}$, that is:
$$Bel_{P'} ^{M_{l'}} = Bel_P ^{M_l} \oplus Bel ^{LP;L}.$$
\end{th}
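The theorem can be illustrated numerically. The sketch below applies Dempster's rule to an assumed population mass and the deterministic labelling mass $m^{LP;L}$; the masses are toy values and the names are hypothetical.

```python
from fractions import Fraction

def combine(m1, m2):
    """Dempster's rule: intersect focal elements, multiply masses,
    drop the empty intersection and renormalize."""
    raw = {}
    for B, p in m1.items():
        for C, q in m2.items():
            I = B & C
            if I:
                raw[I] = raw.get(I, Fraction(0)) + p * q
    k = sum(raw.values())              # k = 1 - conflict mass
    return {A: p / k for A, p in raw.items()}

# Hypothetical population mass
m_pop = {frozenset({"v1"}): Fraction(1, 2),
         frozenset({"v1", "v2"}): Fraction(1, 4),
         frozenset({"v2", "v3"}): Fraction(1, 4)}

# Deterministic relabelling with L = {v1, v2}: m^{LP;L}(L) = 1
L = frozenset({"v1", "v2"})
m_label = {L: Fraction(1)}

m_new = combine(m_pop, m_label)        # mass of the relabelled population
```

Here each focal element of the population mass is simply intersected with $L$, which is exactly what the simple labelling process does to each object's label.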
Let us define a more general (re-)labeling process.
Instead of a single set of
attribute values, let us take a set of sets of attribute
values $L ^1, L ^2, ...,L ^k$ (not necessarily disjoint) and
assign to each one a probability
$m ^{LP, L ^1, L ^2, ...,L ^k}(L ^i)$
of selection.
\begin{df}
Let $M$ be a measurement method, $l$ be a labeling under this measurement
method, and P be a population under this labeling (Note that the population
may also be unlabeled).
Let us take a set of (not necessarily disjoint) nonempty sets of
attribute values $\{L ^1, L ^2, ...,L ^k\}$ and
let us define the probability of selection as a function
$m ^{LP, L ^1, L ^2, ...,L ^k}: 2 ^\Xi \rightarrow [0,1]$ such that
$$\sum_{A;A \subseteq \Xi}m ^{LP, L ^1, L ^2, ...,L ^k}(A)=1$$
$$\forall_{A; A \in \{ L ^1, L ^2, ...,L ^k\}}
m ^{LP, L ^1, L ^2, ...,L ^k}(A)>0$$
$$\forall_{A; A \not\in \{ L ^1, L ^2, ...,L ^k\}}
m ^{LP, L ^1, L ^2, ...,L ^k}(A)=0$$
The {\em (general) labelling process} on
the
population P
is defined as a (randomized) functional
$LP: 2^{2^\Xi} \times \Delta
\times \Gamma \rightarrow \Gamma$, where $\Gamma$ is the set
of all possible labelings under $M$, and $\Delta$ is
a set of all possible probability of selection functions,
such that for the given labeling $l$ and a given
set of (not necessarily disjoint) nonempty sets of
attribute values $\{L ^1, L ^2, ...,L ^k\}$ and
a given probability of selection
$m ^{LP, L ^1, L ^2, ...,L ^k}$
it delivers a new labeling $l"$ such that for every object
$\omega \in \Omega$:
1. a label $L$, an element of the set $\{ L ^1, L ^2, ...,L ^k\}$,
is sampled randomly according to the probability distribution
$m ^{LP, L ^1, L ^2, ...,L ^k}$;
this sampling is done independently for each individual object;\\
2. if $M_l(\omega,L)=FALSE$ then
$l"(\omega)=\emptyset$\\
(that is, $l"$ discards an object $(\omega,l(\omega))$ if
$M_l(\omega,L)=FALSE$);\\
3. otherwise $l"(\omega)=l(\omega) \cap L$
(that is, $l"$ labels the object with $l(\omega) \cap L$).
\end{df}
Again we obtain another (``smaller'') population P" under the labeling $l"$,
labelled
a bit differently. Also a new modified measurement method
$M_{l"}$ is induced by the ``re-labelled'' population.
Please notice that $l"$ is not derived deterministically.
Another run of the general (re-)labeling process LP may result in a different
final labeling of the population, and hence a different subpopulation under
this new labeling.
Clearly:
\begin{th}
$m ^{LP,L ^1,...,L ^k}$ is a Mass Function in the sense of DST.
\end{th}
Let $Bel ^{LP,L ^1,...,L ^k}$ be the belief and $Pl
^{LP,L ^1,...,L ^k}$ the
plausibility function corresponding to $m ^{LP,L ^1,...,L ^k}$. Now let us pose the
question: what is the relationship between $Bel_{P"} ^{M_{l"}}$,
$Bel_P ^{M_l}$, and $Bel ^{LP,L ^1,...,L ^k}$?
\begin{th}
Let $M$ be a measurement function, $l$ a labeling, P a population under
this labeling.
Let $LP$ be a generalized labeling process and let $l"$
be the result of application of $LP$ to $l$, with labels from the set
$\{ L ^1, L ^2, ...,L ^k\}$
sampled randomly according to the probability distribution
$m ^{LP, L ^1, L ^2, ...,L ^k}$.
Let P" be a population under the labeling $l"$.
Then the expected value
(or, more precisely, value vector),
taken over the set of all possible resultant labelings $l"$ (and hence
populations P"), of
$Bel_{P"} ^{M_{l"}}$ is the combination via Dempster's
rule of $Bel_P ^{M_l}$ and $Bel ^{LP,L ^1,...,L ^k}$, that is:
$$E(Bel_{P"} ^{M_{l"}}) = Bel_P ^{M_l} \oplus Bel ^{LP,L ^1,...,L ^k}.$$
\end{th}
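This expected-value statement can be checked by simulation. The Monte Carlo sketch below relabels a toy population (assumed effective-value masses $1/2$, $1/2$ and uniform label selection, so all numbers are hypothetical) and compares the surviving mass on $\{v_1\}$ with the value $2/3$ predicted by Dempster's rule.

```python
import random

random.seed(0)

# Assumed base population: effective values with proportions 1/2, 1/2
base = [frozenset({"v1"}), frozenset({"v1", "v2"})]
# Labels L^1, L^2, each selected with probability 1/2
labels = [frozenset({"v1"}), frozenset({"v2"})]

survivors = []
for _ in range(20000):
    eff = random.choice(base)      # draw an object's effective value
    L = random.choice(labels)      # sample a label independently per object
    new = eff & L
    if new:                        # empty intersection: object is discarded
        survivors.append(new)

# Dempster's rule predicts m({v1}) = (1/4 + 1/4) / (3/4) = 2/3,
# after renormalizing away the conflict mass 1/4.
m_v1 = sum(s == frozenset({"v1"}) for s in survivors) / len(survivors)
```

With 20000 simulated objects the empirical mass on $\{v_1\}$ lands close to the predicted $2/3$, in line with the theorem.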
Let us now introduce the notion of quantitative independence for DS-Theory.
We will fix the measurement method M and the population P under
consideration, so that the respective indices will usually be dropped.
\begin{df}
Two variables $X_1,X_2$ are (mutually, marginally) independent when for
objects of the population
knowledge of the truth value of $M_l ^{\downarrow X_1}
(\omega,A ^{\downarrow X_1})$ for all
$A \subseteq \Xi_1 \times \Xi_2$ does not change our prediction capability
of the values of $M_l ^{\downarrow X_2}
(\omega,B ^{\downarrow X_2})$ for any
$B \subseteq \Xi_1 \times \Xi_2$, that is
$$\Prob{\omega}{P} M_l ^{\downarrow X_2}(\omega,B ^{\downarrow X_2})=
\Prob{\omega}{P}( M_l ^{\downarrow X_2}(\omega,B ^{\downarrow X_2})
| M_l ^{\downarrow X_1}(\omega,A ^{\downarrow X_1}) )
$$
\end{df}
\begin{th}
If variables $X_1,X_2$ are quantitatively independent, then
for any $B \subseteq \Xi_2$, $A \subseteq \Xi_1$
$$
m ^{\downarrow X_2}(B) \cdot m ^{\downarrow X_1}(A) =
\sum_{F; F ^{\downarrow X_1}=A,F ^{\downarrow X_2}=B} m(F) $$
\end{th}
\begin{th}
If variables $X_1,X_2$ are quantitatively independent, then
for any $B \subseteq \Xi_2$, $A \subseteq \Xi_1$
$$
Bel ^{\downarrow X_2}(B) \cdot Bel ^{\downarrow X_1}(A) = Bel(A \times B)$$
\end{th}
This actually expresses the relationship between marginals of two
independent variables and their joint belief distribution. This
relationship has one dismaying aspect: in general, we cannot
calculate the joint distribution from independent marginals
(contrary to our intuition connected with joint probability
distribution).
In practical settings, however, we frequently deal
with some kind
of composite measurement method, that is:
\begin{df} Two variables $X_1,X_2$ are measured compositely iff
for all $A \subseteq \Xi_1, C \subseteq \Xi_2$:
$$M(\omega,A \times C) = M(\omega, A \times \Xi_2) \land
M(\omega, \Xi_1 \times C) $$
and whenever $M(\omega, B)$ for $B \subseteq \Xi_1 \times \Xi_2$ is sought,
$$M(\omega, B) = \bigvee_{A,C; A \subseteq \Xi_1, C \subseteq \Xi_2,
A\times C \subseteq B}
M(\omega,A \times C)$$
\end{df}
Under these circumstances it is easily shown that
whenever $m(B) > 0$, there exist
$A$ and $C$ such that $B = A \times C$.
So we obtain:
\begin{th}
If variables $X_1,X_2$ are quantitatively independent and measured
compositely, then
$$m(A \times C)= m ^{\downarrow X_1}(A) \cdot m ^{\downarrow X_2}(C)
$$
\end{th}
Hence the Belief function can be calculated from the Belief functions
of independent variables under composite measurement:
\begin{th}
If variables $X_1,X_2$ are quantitatively independent and measured
compositely, then
$$Bel=Bel ^{\downarrow X_1} \oplus Bel ^{\downarrow X_2}
$$
\end{th}
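Under composite measurement the joint mass of independent variables is thus a product over rectangles, and marginalization recovers the factors. A sketch with assumed toy marginals (all values and names hypothetical):

```python
from fractions import Fraction
from itertools import product

# Hypothetical independent marginals over Xi1 = {a, b} and Xi2 = {x, y}
m1 = {frozenset({"a"}): Fraction(1, 2), frozenset({"a", "b"}): Fraction(1, 2)}
m2 = {frozenset({"x"}): Fraction(3, 4), frozenset({"x", "y"}): Fraction(1, 4)}

def rectangle(A, C):
    """A x C as a set of value pairs."""
    return frozenset(product(A, C))

# Every focal element is a rectangle and m(A x C) = m1(A) * m2(C)
joint = {rectangle(A, C): p * q for A, p in m1.items() for C, q in m2.items()}

def project_1(F):
    """Coordinate projection of a focal element onto the first variable."""
    return frozenset(pair[0] for pair in F)

# Marginalizing the joint mass back onto X1 recovers m1
m1_back = {}
for F, p in joint.items():
    A = project_1(F)
    m1_back[A] = m1_back.get(A, Fraction(0)) + p
```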
Let us now justify the notion of empty extension:
\begin{df}
The joint distribution over $X=X_1 \times X_2$
in variables $X_1,X_2$ is
independent of the variable $X_1$ when for objects of the population
for every A,$A \subseteq \Xi_1 \times \Xi_2$
knowledge of the truth value of $M_l ^{\downarrow X_1}
(\omega,A ^{\downarrow X_1})$
does not change our prediction capability
of the values of $M_l(\omega,A )$, that is
$$\Prob{\omega}{P} (M_l (\omega,A ))=
\Prob{\omega}{P}( M_l (\omega,A)
| M_l ^{\downarrow X_1}(\omega,A ^{\downarrow X_1}) )
$$
\end{df}
\begin{th}
The joint distribution over $X=X_1 \times X_2$
in variables $X_1,X_2$, measured
compositely,
is
independent of the variable $X_1$ only if
$m ^{\downarrow X_2}(\Xi_2)=1$
that is the whole mass of the marginalized distribution onto $X_2$ is
concentrated at the only focal point
$\Xi_2$.
\end{th}
\begin{th}
If for $X=X_1 \times X_2$
$Bel=(Bel ^{\downarrow X_2})^{\uparrow X}$
that is
Bel is the empty extension of some Bel defined only over $X_2$,
then the Bel is independent of the variable $X_2$.\\
If for a Bel over $X=X_1 \times X_2$ with $X_1,X_2$ measured
compositely
Bel is independent of $X_2$, then $Bel=(Bel ^{\downarrow X_2})^{\uparrow X}$.
\end{th}
\begin{df}
Let Bel be defined over $X=X_1 \times X_2$.
We shall say that Bel is {\em compressibly independent} of $X_2$ iff
$Bel=(Bel ^{\downarrow X_1}) ^{\uparrow X}$.
\end{df}
REMARK: $m ^{\downarrow X_1}(\Xi_1)=1$ does not imply empty extension as
such, especially for non-singleton values of the variable $X_2$. As previously
with marginal independence, it is the composite measurement that
makes the empty extension a practical notion.\\
Let us introduce a concept of conditionality related to the
above definition of independence. Traditionally, conditionality
is introduced to obtain a kind of independence between variables
de facto dependent on one another. So let us define:
\begin{df}
For discourse spaces of the form $X=X_1 \times ... \times X_n$
we define the (anti-)conditional belief function
$Bel ^{X | X_i}$ by
$$Bel=Bel ^{\downarrow X_i} \oplus Bel ^{X | X_i}$$
\end{df}
Let us notice at this point that the (anti-)conditional belief as defined
above does not need to be unique, hence we have here a kind of pseudoinversion
of the $\oplus$ operator. Furthermore, the conditional belief does not need
to
be a belief function at all, because some values of m may be negative.
It is then a pseudo-belief function in the sense of the DS-theory, as
the Q-measure remains positive.
Please recall the fact that if $Bel_{12}=Bel_1 \oplus Bel_2$ then
$Q_{12}(A)=c\cdot Q_1(A)\cdot Q_2(A)$, c being a proportionality factor
(as all supersets of a set are contained in all intersections of its
supersets
and vice versa). Hence also for
our conditional belief definition:
$$Q(A)=c \cdot (Q ^{\downarrow X_i}) ^{\uparrow X}(A) \cdot Q ^{X |
X_i}(A)$$ We shall later talk of the {\em unnormalized conditional belief}, defined by\\
$$ Q_* ^{X | X_i}(A) = Q(A)/(Q ^{\downarrow X_i}) ^{\uparrow X}(A) $$
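The multiplicativity of the commonality function under $\oplus$, which this definition of conditionality rests on, is easy to verify numerically. A sketch with assumed toy masses over a two-element frame (all values hypothetical):

```python
from fractions import Fraction
from itertools import chain, combinations

XI = frozenset({"a", "b"})

def subsets(S):
    """All non-empty subsets of S."""
    S = list(S)
    return (frozenset(c) for c in chain.from_iterable(
        combinations(S, r) for r in range(1, len(S) + 1)))

def commonality(m, A):
    """Q(A) = sum of m(B) over supersets B of A."""
    A = frozenset(A)
    return sum(p for B, p in m.items() if A <= B)

def combine(m1, m2):
    """Dempster's rule with renormalization."""
    raw = {}
    for B, p in m1.items():
        for C, q in m2.items():
            I = B & C
            if I:
                raw[I] = raw.get(I, Fraction(0)) + p * q
    k = sum(raw.values())
    return {A: p / k for A, p in raw.items()}

m1 = {frozenset({"a"}): Fraction(1, 2), XI: Fraction(1, 2)}
m2 = {frozenset({"b"}): Fraction(1, 4), XI: Fraction(3, 4)}
m12 = combine(m1, m2)

# Q12(A) = c * Q1(A) * Q2(A) with one constant c for every non-empty A
ratios = {A: commonality(m12, A) / (commonality(m1, A) * commonality(m2, A))
          for A in subsets(XI)}
```

On this toy example the ratio $Q_{12}(A)/(Q_1(A)\,Q_2(A))$ equals the same constant for every non-empty $A$, as the text asserts.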
Let us now reconsider the problem of independence, this time of a
conditional distribution of $(X_1 \times X_2 \times X_3 | X_1 \times X_3)$
from the third variable $X_3$.
\begin{th}
Let $X =X_1 \times X_2 \times X_3 $ and let $Bel$ be defined
over X.
Furthermore let $Bel ^{X|X_1 \times X_3}$ be a conditional Belief conditioned
on variables $X_1,X_3$. Let this conditional distribution be
compressibly
independent of
$X_3$. Let $Bel ^{\downarrow X_1 \times X_2}$ be the projection of $Bel$ onto
the subspace spanned by $X_1,X_2$. Then there exists
$Bel ^{\downarrow X_1 \times X_2 | X_1}$ being a conditional belief of that
projected belief conditioned on the variable $X_1$ such that this $Bel
^{X|X_1 \times
X_3}$ is the empty extension of $Bel ^{\downarrow X_1 \times X_2 | X_1}$
$$Bel ^{X|X_1 \times X_3} =
(Bel ^{\downarrow X_1 \times X_2 | X_1}) ^{\uparrow X}$$
\end{th}
Let us notice that under the conditions of the above theorem
$$ Bel =
Bel ^{X|X_1 \times X_3}\oplus Bel ^{\downarrow X_1 \times X_3 } =
Bel ^{\downarrow X_1 \times X_2 | X_1}
\oplus Bel ^{\downarrow X_1 \times X_3 }
$$
and hence for any $Bel ^{\downarrow X_1 \times X_3 | X_1}$
$$Bel =
Bel ^{\downarrow X_1 \times X_2 | X_1}
\oplus Bel ^{\downarrow X_1}
\oplus Bel ^{\downarrow X_1 \times X_3 | X_1 }
$$
and therefore
$$Bel =
Bel ^{\downarrow X_1 \times X_2}
\oplus Bel ^{\downarrow X_1 \times X_3 | X_1 }
$$
This means that whenever the conditional
$Bel ^{X_1 \times X_2 \times X_3|X_1 \times X_3}$
is compressibly independent of $X_3$,
then there exists a
conditional
$Bel ^{X_1 \times X_2 \times X_3|X_1 \times X_2}$
compressibly independent of $X_2$.
But this fact combined with the previous theorem results in
\begin{th}
Let $X =X_1 \times X_2 \times X_3$ and let $Bel$ be defined
over X.
Furthermore let $Bel ^{X|X_1 \times X_3}$ be a conditional Belief conditioned
on variables $X_1,X_3$. Let this conditional distribution be
compressibly independent of
$X_3$.
Then the empty extension onto $X$ of any
$Bel ^{\downarrow X_1 \times X_2 | X_1}$ being a conditional belief of
projected belief conditioned on the variable $X_1$
is a conditional belief function of $X$ conditioned
on variables $X_1,X_3$. Hence for every $A\subseteq \Xi$
$$ \frac{ Q(A) }
{ Q ^{\downarrow X_1 \times X_3}(A ^{\downarrow X_1 \times X_3} ) }
= \frac { Q ^{\downarrow X_1 \times X_2}(A ^{\downarrow X_1 \times X_2} ) }
{ Q ^{\downarrow X_1}(A ^{\downarrow X_1} ) }
$$
\end{th}
In this way we have obtained a notion of conditionality suitable for
the decomposition of a joint belief distribution.
\subsection{Independence from Data}
The preceding sections defined precisely what is meant by marginal
independence of two variables in terms of the relationship between marginals
and the joint distribution, as well as concerning the independence of a joint
distribution from a single variable.\\
For the former case we can establish frequency tables with rows
and columns
corresponding to the focal points of the first and the second
marginal, and inner elements being the counts corresponding to the respective sums of
DS-masses of
the joint distribution. Clearly, cases falling into different inner
categories of the table are different, and hence the $\chi ^2$ test is
applicable.\\
The match can be $\chi ^2$-tested; the following
formula is used for the calculation:
$$\sum_{A;A \subseteq \Xi_1, m ^{\downarrow X_1}(A)>0}
\sum_{B;B \subseteq \Xi_2, m ^{\downarrow X_2}(B)>0}
\frac { ((
\sum_{C;C \subseteq \Xi, A = C ^{\downarrow X_1},
B=C ^{\downarrow X_2}}
m(C))
- m ^{\downarrow X_1}(A)\cdot m ^{\downarrow X_2}(B))^2
}
{ m ^{\downarrow X_1}(A)\cdot m ^{\downarrow X_2}(B)}$$
The number of degrees of freedom is calculated as
$$ (card(\{A;A \subseteq \Xi_1, m ^{\downarrow X_1}(A)>0\}) -1)\cdot
(card(\{B;B \subseteq \Xi_2, m ^{\downarrow X_2}(B)>0\})-1)$$
In the case of independence of a distribution from one variable, one needs to
calculate the marginal of the distribution onto that variable, say $X_i$.
Then the measure of discrepancy from the assumption of independence is given
as
$$ 1 - m ^{\downarrow X_i}(\Xi_i)$$
Statistically we can test, based on the Bernoulli distribution, what are the
lowest and the highest possible values of $ 1 - m ^{\downarrow
X_i}(\Xi_i)$ for a given significance level of the true underlying
distribution.\\
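The discrepancy formula above can be computed directly from the focal elements. The sketch below uses an assumed joint mass that happens to factor exactly into its marginals, so the statistic is zero; all masses and names are hypothetical, and multiplying the discrepancy by the sample size would give the usual $\chi^2$ statistic on counts.

```python
# Hypothetical joint mass over Xi1 x Xi2; under composite measurement every
# focal element is a rectangle, stored here as an (A, B) pair of frozensets.
joint = {
    (frozenset({"a"}), frozenset({"x"})): 0.30,
    (frozenset({"a"}), frozenset({"x", "y"})): 0.20,
    (frozenset({"a", "b"}), frozenset({"x"})): 0.30,
    (frozenset({"a", "b"}), frozenset({"x", "y"})): 0.20,
}

def marginal(m, axis):
    """Project each rectangular focal element onto one coordinate."""
    out = {}
    for F, p in m.items():
        out[F[axis]] = out.get(F[axis], 0.0) + p
    return out

m1, m2 = marginal(joint, 0), marginal(joint, 1)

def chi2_discrepancy(m, m1, m2):
    """Sum over focal pairs of (joint mass - product of marginals)^2
    divided by the product, as in the formula above."""
    return sum((m.get((A, B), 0.0) - p * q) ** 2 / (p * q)
               for A, p in m1.items() for B, q in m2.items())

# Degrees of freedom per the formula above
dof = (len(m1) - 1) * (len(m2) - 1)
```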
\subsection{Conditional Independence from Data}
In the case of
independence between the conditional distribution and one of the conditioning
variables, however, it is useless to calculate the pseudoinversion of
$\oplus$, as we are then working with a population and a sample whose size
is not properly defined (by the ``anti-labeling'').
But we can build the contingency table of the unconditional joint
distribution for the independent variable on
the one hand and the remaining variables on the other hand, and compare the
respective cells on how well they match the distribution we would obtain
assuming independence. The number of degrees of freedom for the
$\chi ^2$
test would then be the number of focal points of the joint distribution,
minus
the number of focal points within each of the multi-variable marginals, plus
one (for covering twice the total sum of 1 over all focal points).
If
we test conditional independence of variables $X$ and $Y$ on the set of
variables $Z$, then we have to compare empirical distribution $Bel
^{\downarrow X,Y,Z}$ with $Bel ^{\downarrow X,Z|Z} \oplus Bel ^{\downarrow Y,
Z|Z} \oplus Bel ^{\downarrow Z}$. The traditional $\chi ^2$ statistic is
computed (treating the latter distribution as the expected one). If the
hypothesis of equality is rejected at significance level $\alpha=0.05$ then
X and Y are considered dependent, otherwise independent.
\section{Introduction}
The Dempster-Shafer Theory, or the Mathematical Theory of Evidence (DST)
\cite{Shafer:76},
\cite{Dempster:67},
shows one of the possible ways of applying mathematical probability to
subjective evaluation and is intended to be a generalization of the Bayesian
theory of subjective probability \cite{Shafer:90ijar}.
This theory offers a number of methodological advantages, like: the capability of
representing ignorance in a simple and direct way, compatibility with the
classical probability theory, compatibility with boolean logic, and feasible
computational complexity \cite{Ruspini:92ijar}.
DST may be applied for (1) representation of incomplete knowledge, (2) belief
updating, and (3) combination of evidence \cite{Provan:92}.
DST covers the statistics of random sets and may be applied for the representation
of incomplete statistical knowledge. Random set statistics is quite popular in
the analysis of opinion polls whenever partial indecisiveness of respondents is
allowed \cite{Dubois:92}.
Practical applications of DST include: integration of knowledge from
heterogeneous sources for object identification \cite{deKorvin:93},
technical diagnosis under unreliable measuring devices \cite{Durham:92},
and medical applications \cite{Gordon:90}, \cite{Zarley:88b}.
Relationships between DST and network reliability computation have been
investigated \cite{Provan:90}.
In spite of its indicated merits, DST has experienced sharp criticism from many sides.
The basic line of criticism is connected with the relationship between the
belief function (the basic concept of DST) and frequencies.
The problem of frequencies is not solely a scholarly problem. It has significant
knowledge engineering (expressive power, sources of knowledge,
knowledge acquisition strategies, learning algorithms) and software
engineering
implications (internal representation, measure transformation procedures).
First of all, one should realize that a computer-based advisory system is
rarely made for a single consultation. Hence we may (at least theoretically)
obtain statistics of the cases to which the system has been applied. Life
also verifies frequently enough the advice obtained from the advisory system.
Hence after a long enough time one may pose, at least partially, the question
whether or not the advice has been correct. A belief function (the basic
concept of DST) without a modest frequentist interpretation may be treated as
a void answer in this context.
Therefore numerous probabilistic interpretations have been attempted since the
early days of DST.
Dempster \cite{Dempster:67} initiated the interval interpretation of DST.
Kyburg \cite{Kyburg:87} showed that the belief function may be
represented
by an envelope of a family of traditional probability functions and claimed
that the behaviour of combining evidence via belief functions may be properly
explained in statistics under proper independence assumptions.
Hummel and Landy \cite{Hummel:88} considered DST as a ``statistics of expert
opinions'', so that it
``contains nothing more than Bayes' formula applied
to Boolean assertions, .... (and) tracks multiple opinions as opposed to a
single probabilistic assessment''.
Pearl \cite{Pearl:90} and Provan \cite{Provan:90} considered belief functions
as ``probabilities of provability''.
Still another view has been developed in connection with rough set theory
\cite{Grzymala:91},
\cite{Skowron:93}, \cite{Skowron:93b}. The belief function is considered as the
lower approximation of the set of possible decisions.
Fagin and Halpern \cite{Fagin:91} postulated a probabilistic interpretation of
DST in terms of lower and upper probability measures defined over a probability
structure (rather than a space).
Halpern and Fagin \cite{Halpern:92} proposed to treat Bel as a generalized
probability and proposed a rule of combination of evidence differing from that
of Dempster and Shafer.
The list of other attempts is quite long; still,
a number of attempts to interpret belief functions in terms of probabilities
have so far failed to produce an interpretation fully compatible with DST - see
e.g. \cite{Kyburg:87}, \cite{Halpern:92}, \cite{Fagin:91} etc. Shafer
\cite{Shafer:90ijar} and Smets \cite{Smets:92}, in defense of DST, dismissed
every attempt to interpret DST frequentistically. Shafer stressed that
even modern (that is, Bayesian) statistics is not frequentistic at all
(Bayesian theory assigns subjective probabilities), hence frequencies should not
matter at all. Smets stated that the domains of DST applications are those where
``we are ignorant of the existence of probabilities'',
and warns that DST is
``not a
model for poorly known probabilities'' (\cite{Smets:92}, p.~324). Smets states
further: ``Far too often, authors concentrate on the static component (how
beliefs are
allocated?) and discover many relations between TBM (the transferable belief
model of Smets)
and ULP (upper/lower probability) models, inner and outer measures
(Fagin and Halpern \cite{Fagin:89}), random sets (Nguyen \cite{Nguyen:78}),
probabilities of provability
(Pearl \cite{Pearl:88}), probabilities of necessity (Ruspini
\cite{Ruspini:86}), etc. But these authors
usually do not explain or justify the dynamic component (how are beliefs
updated?), that is, how updating (conditioning) is to be handled (except in
some cases by defining conditioning as a special case of combination). So I
(that is, Smets) feel that these partial comparisons are incomplete,
especially
as all these interpretations lead to different updating rules.''
(\cite{Smets:92}, pp.~324-325).
Shafer in \cite{Shafer:90ijar} claims that probability theory has developed over
the last years from old-style frequencies towards the modern subjective
probability theory within the framework of Bayesian theory. By analogy, he
claims that the very attempt to consider the relation between DST and frequencies
is old-fashioned and out of date and should be at least forbidden - for the
sake of the progress of humanity.
Wasserman opposes this view (\cite{Wasserman:92ijar}, p.~371),
reminding of the ``major
success story in Bayesian theory'', the exchangeability theory of
de Finetti \cite{deFinetti:64}. It treats frequencies as a special case of
Bayesian belief. ``The Bayesian
theory contains within it a definition of frequency probability and a
description of the exact assumptions necessary to invoke that
definition'' \cite{Wasserman:92ijar}. Wasserman dismisses Shafer's suggestion
that probability relies on an analogy to frequency.
Though the need for a frequentist interpretation of DST is obvious, the
critical remarks of Smets cannot,
however, be ignored. Therefore still another attempt at a probabilistic
interpretation is made in this paper. Within this paper we assume a strong
mutual relationship between the way an inference engine reasons, the way one
understands the inputs and outputs, and the way knowledge is represented and
acquired. Section 2 recalls some basic terms of DST. Section 3 presents the
way we understand the belief function and the relationship between reasoning
system input and output. In section 4 we assume a particular type of inference
engine - the Shenoy-Shafer method of local computations (for a presentation of this
method the reader should consult their original paper
\cite{Shenoy:90}). Then we demonstrate how our understanding of the belief
function and this particular inference mechanism imply knowledge
representation in terms of a new DS belief network. Sections 5 and 6 show how
the results of section 4 lead to two learning algorithms recovering a
tree-structured and a poly-tree-structured belief network, respectively, from
data.
Section 7 makes use of the independence results obtained in section 3 to develop
an algorithm recovering general-type DS belief networks from data. Some
implications of this uniform view of belief functions, reasoning,
representation and acquisition are discussed in section 8. Conclusions are
summarized in section 9.
\section{Basics of DST}
Let us first recall the basic definitions of DST:
\begin{df} (see \cite{Provan:90})
Let $\Xi$ be a finite set of elements called elementary events.
Any subset of $\Xi$ is a composite event. $\Xi$ is called also the
frame of discernment.\\
A basic probability assignment function is any function m:$2^\Xi \rightarrow
[0, 1]$ such that $$ \sum_{A \in 2^\Xi } |m(A)|=1, \qquad
m(\emptyset)=0, \qquad
\forall_{A \in 2^\Xi} \quad 0 \leq \sum_{A \subseteq B} m(B)$$
($|.|$ - absolute value).\\
A belief function is defined as Bel:$2^\Xi \rightarrow [0,1]$ so that
$Bel(A) = \sum_{B \subseteq A} m(B)$.\\
A plausibility function is Pl:$2^\Xi \rightarrow [ 0,1]$ with
$\forall_{A \in 2^\Xi} \ Pl(A) = 1-Bel(\Xi-A )$.\\
A commonality function is Q:$2^\Xi-\{\emptyset\} \rightarrow [0,1]$ with
$\forall_{A \in 2^\Xi-\{\emptyset\}} \quad Q(A) = \sum_{A \subseteq B}
m(B)$. \end{df}
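A direct transcription of these definitions, restricted for simplicity to an ordinary non-negative mass assignment (the frame and the masses below are hypothetical toy values):

```python
XI = frozenset({"a", "b", "c"})

# Hypothetical non-negative basic probability assignment summing to 1
m = {frozenset({"a"}): 0.5, frozenset({"a", "b"}): 0.3, XI: 0.2}

def bel(A):
    """Bel(A) = sum of m(B) over all B contained in A."""
    A = frozenset(A)
    return sum(p for B, p in m.items() if B <= A)

def pl(A):
    """Pl(A) = 1 - Bel(complement of A)."""
    A = frozenset(A)
    return 1 - bel(XI - A)

def q(A):
    """Q(A) = sum of m(B) over all supersets B of A."""
    A = frozenset(A)
    return sum(p for B, p in m.items() if A <= B)
```

For instance, on this toy assignment $Bel(\{a\})=0.5$ while $Pl(\{c\})=1-Bel(\{a,b\})=0.2$.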
Furthermore, a Rule of Combination of two Independent Belief Functions
$Bel_1$,
$Bel_2$ over the same Frame of Discernment (the so-called Dempster Rule),
denoted
$$Bel_{E_1,E_2}=Bel_{E_1} \oplus Bel_{E_2},$$
is defined as follows:
$$m_{E_1,E_2}(A)=c \cdot \sum_{B,C; A= B \cap C} m_{E_1}(B) \cdot
m_{E_2}(C)$$ (c - a constant normalizing the sum of $|m|$ to 1).
Furthermore, let the frame of discernment $\Xi$ be structured in that it is
identical to the cross product of the domains $\Xi_1$, $\Xi_2$, \dots $\Xi_n$ of n
discrete variables $X_1, X_2, \dots X_n$, which span the space $\Xi$. Let
$(x_1, x_2, \dots x_n)$ be a vector in the space spanned by the variables
$X_1, X_2, \dots X_n$. Its projection onto the subspace spanned by variables
$X_{j_1}, X_{j_2}, \dots X_{j_k}$ ($j_1, j_2,\dots j_k$ distinct indices from
the set 1,2,\dots,n) is then the vector $(x_{j_1}, x_{j_2}, \dots x_{j_k})$.
$(x_1, x_2, \dots x_n)$ is also called an extension of $(x_{j_1}, x_{j_2},
\dots x_{j_k})$. A projection of a set $A$ of such vectors is the set
$A ^{\downarrow X_{j_1}, X_{j_2}, \dots X_{j_k}}$
of
projections of all individual vectors from A onto $X_{j_1}, X_{j_2}, \dots
X_{j_k}$. A is also called an extension of $A ^{\downarrow X_{j_1}, X_{j_2},
\dots X_{j_k}}$. A is called the vacuous extension of $A ^{\downarrow
X_{j_1},
X_{j_2}, \dots X_{j_k}}$ iff A contains all possible extensions of each
individual vector in $A ^{\downarrow X_{j_1}, X_{j_2}, \dots X_{j_k}}$ .
The fact, that A is a vacuous extension of B onto space $X_1,X_2,\dots\,
X_n$ is denoted by $A=B ^{\uparrow X_1,X_2,\dots\,X_n}$
\begin{df} (see \cite{Shenoy:90})
Let m be a basic probability assignment function on the space of discernment
spanned by variables $X_1,X_2,\dots\,X_n$. $m ^{\downarrow X_{j_1},
X_{j_2}, \dots X_{j_k}}$ is called the projection of m onto
subspace spanned by
$X_{j_1}, X_{j_2}, \dots X_{j_k}$ iff
$$m ^{\downarrow X_{j_1}, X_{j_2}, \dots X_{j_k}}(B)= c \cdot
\sum_{A; B=A ^{\downarrow X_{j_1}, X_{j_2}, \dots X_{j_k}} } m(A)$$
(c is a normalizing factor).
\end{df}
\begin{df} (see \cite{Shenoy:90})
Let m be a basic probability assignment function on the space of discernment
spanned by variables $ X_{j_1},
X_{j_2}, \dots X_{j_k} $. $m ^{\uparrow X_1,X_2,\dots\,X_n}$ is called
the vacuous extension
of m onto superspace spanned by $X_1,X_2,\dots\,X_n$
iff
$$m ^{\uparrow X_1, X_2, \dots X_n}(B ^{\uparrow X_1,X_2,\dots\,X_n})=m(B)$$
and $m ^{\uparrow X_1, X_2, \dots X_n}(A)=0$ for any other A. \\
We say that a belief function is vacuous iff $m(\Xi)=1$ and $m(A)=0$ for any A
different from $\Xi$.
\end{df}
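These two definitions can be illustrated on a small product space; below, configurations are tuples indexed by variable position and focal elements are sets of tuples (a sketch with our own helper names):

```python
from itertools import product

def project_m(m, idx):
    """Definition 2: project each focal element coordinate-wise onto
    the variable positions in idx, then renormalize."""
    out = {}
    for a, v in m.items():
        b = frozenset(tuple(vec[i] for i in idx) for vec in a)
        out[b] = out.get(b, 0.0) + v
    c = sum(out.values())
    return {b: v / c for b, v in out.items()}

def vacuous_extension(m, idx, domains):
    """Definition 3: each focal element is replaced by the set of
    all its extensions to the full product space of `domains`."""
    full = list(product(*domains))
    return {frozenset(vec for vec in full
                      if tuple(vec[i] for i in idx) in b): v
            for b, v in m.items()}

domains = [(0, 1), (0, 1)]            # two binary variables X1, X2
m = {frozenset({(0, 0), (0, 1)}): 0.7, frozenset({(1, 1)}): 0.3}
m_down = project_m(m, [0])            # marginal on X1
m_up = vacuous_extension(m_down, [0], domains)
```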
Projections and vacuous extensions of Bel, Pl and Q functions are defined
with respect to the operations on the m function. Notice that, by convention, if we want to
combine by the Dempster rule two belief functions not sharing the frame of
discernment, we look for the closest common vacuous extension of their
frames of discernment without stating this explicitly.
\begin{df} (See \cite{Shafer:90b}) Let B be a subset of $\Xi$, called
evidence,
$m_B$ be a basic probability assignment such that $m_B(B)=1$ and $m_B(A)=0$
for any A different from B. Then the conditional belief function $Bel(.||B)$
representing the belief function $Bel$ conditioned on evidence B
is defined
as: $Bel(.||B)=Bel \oplus Bel_B$.
\end{df}
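Since $m_B$ puts all its mass on $B$, combining with it by the Dempster rule reduces to intersecting every focal element with the evidence and renormalizing; a sketch (the helper name `condition` is ours), assuming classical non-negative masses:

```python
def condition(m, b_ev):
    """Bel(.||B) = Bel combined with Bel_B, where m_B(B) = 1:
    intersect each focal element with the evidence B, drop empty
    intersections, and renormalize."""
    out = {}
    for a, v in m.items():
        c = a & b_ev
        if c:
            out[c] = out.get(c, 0.0) + v
    z = sum(out.values())
    return {a: v / z for a, v in out.items()}

m = {frozenset({"x"}): 0.4, frozenset({"y"}): 0.4,
     frozenset({"x", "y", "z"}): 0.2}
m_cond = condition(m, frozenset({"x", "z"}))
```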
The subsequent definitions of hypergraphs and operations on them are due to
\cite{Shenoy:90}.
{\em Hypergraphs}: A nonempty set H of nonempty subsets of a finite set S is
called a hypergraph on S. The elements of H are called hyperedges. Elements of S are
called vertices. If H and H' are both hypergraphs on S, then we call the
hypergraph H' a {\em reduced hypergraph} of the
hypergraph H iff for every $h'\in H'$ also $h'\in H$ holds, and for
every $h \in H$ there exists an $h' \in H'$ such that $h \subseteq h'$.
A hypergraph H {\em covers} a hypergraph H' iff for every $h'\in H'$ there
exists an $h\in H$ such that $h'\subseteq h$.
{\em Hypertrees}: Let t and b be distinct hyperedges in a hypergraph H with $t \cap
b\neq \emptyset$, such that b contains every vertex of t that is contained in a hyperedge
of H other than t; that is, if $X\in t$ and $X\in h$, where $h\in H$ and $h\neq t$,
then $X\in b$. Then we call t a twig of H, and we call b a branch for t. A
twig may have more than one branch.
We call a hypergraph a hypertree if there is an ordering of its hyperedges,
say $h_1,h_2,...,h_n$, such that $h_k$ is a twig in the hypergraph
$\{h_1,h_2,...,h_k\}$ whenever $2 \leq k \leq n$. We call any such
ordering of hyperedges a hypertree construction sequence for the hypertree. The first
hyperedge in the hypertree construction sequence is called the root of the
hypertree construction sequence.
{\em Variables and valuations}: Let {\V} be a finite set. The elements of {\V}
are called variables. For each $h \subseteq \V$ there is a set $VV_h$. The elements of
$VV_h$ are called valuations. Let VV=$\bigcup \{ VV_h | h \subseteq \V \}$
be the set of all valuations.
In the case of probabilities, a valuation on h is a non-negative, real-valued
function on the set of all configurations of h (a configuration on h is a
vector of possible values of the variables in h). In the belief function case, a
valuation is a non-negative, real-valued function on the set of all
subsets of configurations of h.
{\em Proper valuation}: for each $h \subseteq \V$ there is a subset $P_h$ of
$VV_h$
elements of which are called proper valuations on h. Let P be the set of all
proper valuations.
{\em Combination}: We assume that there is a mapping $\odot: VV \times VV
\rightarrow VV$ called combination such that:\\
(i) if G and H are valuations on g and h respectively, then $G \odot H$ is a
valuation on $g \cup h$; \\
(ii) if either G or H is not a proper valuation then $G \odot H$ is not a
proper valuation; \\
(iii) if both G and H are proper valuations then $G \odot H$ may or may not
be a proper valuation.
{\em Factorization}: Suppose A is a valuation on a finite set of variables \V,
and suppose HV is a hypergraph on \V. If A is equal to the combination of
valuations of all hyperedges h of HV then we say that A factorizes on HV.
\input FCEINTER.tex
\section{A Concept of Belief Network}
The axiomatization system of Shenoy/Shafer refers to the notion of
factorization along a hypergraph.
On the other hand, other authors insisted on a decomposition
into a belief network. We investigate below the implications
of this disagreement. BEL shall denote the general belief function
as considered in \cite{Shenoy:90}, of which the DS belief function Bel
and the probability function are specializations.
\begin{df} \cite{Klopotek:93f}
We define a mapping $\oantidot: VV \times VV
\rightarrow VV$ called decombination such that:
if $BEL_{12}=BEL_1 \oantidot BEL_2$ then $BEL_1=BEL_2 \odot BEL_{12}$
\end{df}
In the case of probabilities, decombination means memberwise division:
$Pr_{12}(A)=Pr_1(A)/Pr_2(A)$. In the case of DS pseudo-belief functions it means
the operator $\ominus$ yielding a DS pseudo-belief function such that
whenever $Bel_{12}=Bel_1 \ominus Bel_2$,
then $Q_{12}(A)=c \cdot Q_1(A)/Q_2(A)$. Both for probabilities and for DS belief
functions, decombination may not be uniquely determined. Moreover, for DS
belief functions a decombined DS belief function does not always exist. But
the domain of DS pseudo-belief functions is closed under this
operator. We claim here without a proof (which is simple) that DS
pseudo-belief
functions fit the axiomatic framework of Shenoy/Shafer. Moreover, we claim
that if an (ordinary) DS belief function is represented by a
factorization in
DS pseudo-belief functions, then any propagation of uncertainty yields the
very same results as if it had been factorized into ordinary DS belief
functions.
\begin{df} \cite{Klopotek:93f}
By anti-conditioning $|$ of a belief function $BEL$ on a set of variables
$h$
we understand the transformation: $BEL ^{|h}= BEL \oantidot BEL ^{\downarrow
h}$.
\end{df}
Notably, in the case of probability functions anti-conditioning amounts to
proper conditioning. In the case of DS pseudo-belief functions, the operator
$|$ means the DS anti-conditioning from the previous section.
Its meaning is entirely different from the traditionally used
notion of conditionality due to Shafer
(compare
\cite{Klopotek:93p4}) - anti-conditioning is a technical term used
exclusively
for the valuation of nodes in belief networks. Notice that some other authors,
e.g. \cite{Cano:93}, also recognized the necessity of introducing two
different notions in the context of the Shenoy/Shafer axiomatic framework
(compare a priori and a posteriori conditionals in \cite{Cano:93}).
\cite{Cano:93} introduces 3 additional axioms governing the 'a priori'
conditionality to enable propagation with them.
Our
anti-conditionality is bound only to the assumption of executability of the
$\oantidot$ operation and does not assume any further properties of it.
Let us now define the general notion of a belief network.
\begin{df} \cite{Klopotek:93f}
A
belief
network is a pair (D,BEL) where D is a dag (directed acyclic graph)
and BEL is a belief
distribution called the {\em underlying distribution}. Each node i in D
corresponds to a variable $X_i$ in BEL, a set of nodes I corresponds to a
set of variables $X_I$ and $x_i, x_I$
denote values drawn from the domain of $X_i$
and from the (cross product) domain of $X_I$ respectively. Each node in the
network is regarded as a storage cell for any distribution
$BEL ^{\downarrow \{X_i\} \cup X_{\pi (i)} | X_{\pi (i)} }$
where $X_{\pi (i)}$ is a set of nodes corresponding to
the
parent nodes $\pi(i)$ of i. The underlying distribution represented by a
belief network is computed via
$$BEL = \bigodot_{i=1}^{n}BEL ^{\downarrow \{X_i\} \cup X_{\pi (i)} |
X_{\pi (i)} }$$
\end{df}
Please notice the local character of the valuation of a node:
to valuate the node $i$ corresponding to variable $X_i$, only
the marginal $BEL ^{\downarrow \{X_i\} \cup X_{\pi (i)}}$ needs to be known
(e.g. from data), and not the entire belief distribution.
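For the probabilistic specialization, the defining product is the familiar chain-rule factorization; a minimal sketch for a three-node chain $A\rightarrow B\rightarrow C$ with binary variables (the numbers are an arbitrary illustration):

```python
# Each node stores only its own conditional given its parents.
p_a = {0: 0.6, 1: 0.4}
p_b_given_a = {(0, 0): 0.9, (1, 0): 0.1, (0, 1): 0.2, (1, 1): 0.8}
p_c_given_b = {(0, 0): 0.7, (1, 0): 0.3, (0, 1): 0.5, (1, 1): 0.5}

def joint(a, b, c):
    """P(a, b, c) = P(a) * P(b|a) * P(c|b)."""
    return p_a[a] * p_b_given_a[(b, a)] * p_c_given_b[(c, b)]

# the node-wise conditionals recombine into a proper joint distribution
total = sum(joint(a, b, c)
            for a in (0, 1) for b in (0, 1) for c in (0, 1))
```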
There exists a straightforward transformation of a belief network structure
into a hypergraph, and hence of a belief network into a valuated hypergraph:
for every node i of the underlying dag, define a hyperedge as the set
$\{X_i\} \cup X_{\pi(i)}$; then define the valuation of this hyperedge as
$BEL ^{\downarrow \{X_i\} \cup X_{\pi(i)} | X_{\pi(i)}}$. We say that the
hypergraph obtained in this way is {\em induced} by the belief network.
Let us consider now the inverse operation: transformation of a valuated
hypergraph into a belief network.
As the first stage we consider the structures of a hypergraph and of a
belief network (the underlying dag). We say that a belief network is
{\em compatible} with a hypergraph
iff the reduced set of hyperedges induced by the belief network is
identical with the reduced hypergraph.
\begin{Bsp}
Let us consider the following hypergraph
\{\{A,B,C\}, \{C,D\}, \{D,E\}, \{A, E\}\}.
The following belief network structures are compatible with this hypergraph:
\{$A,C\rightarrow B$, $C\rightarrow D$, $D\rightarrow E$, $E\rightarrow A$\}
\{$A,C\rightarrow B$, $D\rightarrow C$, $D\rightarrow E$, $E\rightarrow A$\},
\{$A,C\rightarrow B$, $D\rightarrow C$, $E\rightarrow D$, $E\rightarrow A$\},
\{$A,C\rightarrow B$, $D\rightarrow C$, $E\rightarrow D$, $A\rightarrow E$\}.
\end{Bsp}
\begin{Bsp}
Let us consider the following hypergraph
\{\{A,B,C\}, \{C,D\}, \{D,E\}, \{A, E\}, \{B,F\}, \{F,D\}\}.
No belief network structure is compatible with it.
\end{Bsp}
The missing compatibility is connected with the fact that
a hypergraph may represent a cyclic graph.
Even if a compatible belief network has been found, we may have trouble with
valuations. In Example 1, an unfriendly valuation of the hyperedge
\{A,C,B\} may require an edge AC in a belief network representing the same
distribution, but this would make the
hypergraph incompatible (as e.g. the hyperedge \{A,C,E\} would be induced). This
may be demonstrated as follows.
\begin{df} \cite{Klopotek:93f}
If $X_J,X_K,X_L$ are three disjoint sets of variables of a distribution BEL,
then $X_J,X_K$ are said to be conditionally independent given $X_L$ (denoted
\linebreak
$I(X_J,X_K |X_L)_{BEL}$) iff
$$BEL ^{\downarrow X_J \cup X_K \cup X_L | X_L}
\odot BEL ^{\downarrow X_L } =
BEL ^{\downarrow X_J \cup X_L | X_L} \odot
BEL ^{\downarrow X_K \cup X_L | X_L}
\odot BEL ^{\downarrow X_L } $$
$I(X_J,X_K |X_L)_{BEL}$ is called a {\em
conditional independence statement}.
Let $I(J,K|L)_D$ denote d-separation in a graph \cite{Geiger:90}:
\begin{th} \label{IDIBEL}
\cite{Klopotek:93f}
Let $BEL_D=\{BEL \mid (D,BEL) \mbox{ is a belief network}\}$. Then:\\
$I(J,K|L)_D$ iff $I(X_J,X_K |X_L)_{BEL}$ for all $BEL \in BEL_D$
\end{th}
Now we see in the above example that nodes D and E d-separate nodes A and C.
Hence, within any belief network based on one of the dags mentioned, A will
be conditionally independent of C given D and E. But one can easily check
that with a general type of hypergraph valuation, nodes A and C may be rendered
dependent.
\begin{th} \cite{Klopotek:93f}
Hypergraphs considered by
Shenoy/Shafer \cite{Shenoy:90}
may for a given joint belief distribution have simpler structure than
(be properly covered by)
the closest hypergraph induced by a
belief network.
\end{th}
Notably, though the axiomatic system of Shenoy/Shafer refers to hypergraph
factorization of a joint belief distribution, the actual propagation is run
on a hypertree (or more precisely, on one construction sequence of a
hypertree, that is on Markov tree) covering that hypergraph \cite{Shenoy:90}.
\Bem{
Covering a
hypergraph with a hypertree is a trivial task, yet finding the optimal one
(with as small a number of variables in each hyperedge of the hypertree as
possible) may be very difficult \cite{Shenoy:90}.
}
Let us look closer at the outcome of the process of covering with a reduced
hypertree factorization, or more precisely, at the relationship between a
hypertree construction
sequence and a belief network constructed out of it in the following way:
If $h_k$ is a twig in the sequence $\{h_1,...,h_k\}$ and $h_{i_k}$ its branch
with $i_k<k$, then let us span the following directed edges in a belief
network: First make a complete directed acyclic graph out of nodes
$h_k-h_{i_k}$. Then add edges $Y_l \rightarrow X_j$ for every $Y_l \in
h_k \cap h_{i_k}$ and every $X_j \in h_k-h_{i_k}$.
Repeat this for every k=2,..,n.
\Bem{(Note: no connection is introduced between
nodes contained in $h_1$).
} For k=1 proceed as if $h_1$ were a twig with an
empty set as a branch for it.
It is easily checked that
the hypergraph induced by a belief network structure obtained in this way is
in fact a hypertree (if reduced, then exactly the original reduced
hypertree). Let us turn now to valuations.
Let $BEL_i$ be the valuation originally attached to the hyperedge $h_i$.
Then $BEL = BEL_1 \odot ...\odot BEL_n$.
What conditional belief is to be
attached to $h_n$ ? First marginalize: $BEL'_n = BEL_1^{\downarrow h_1 \cap
h_n} \odot \dots \odot BEL_{n-1}^{\downarrow h_{n-1} \cap
h_n} \odot BEL_n$.
Now calculate: $BEL"_n={BEL'}_n ^{|h_n \cap h_{i_n}}$, and
$BEL"'_n={BEL'}_n ^{\downarrow h_n \cap h_{i_n}}$.
Let $BEL_{*k}= BEL_k\oantidot BEL_k ^{\downarrow h_k \cap
h_n}$ for k=1,...,$i_n$-1,$i_n$+1,...,(n-1),
and
let $BEL_{*i_n}= (BEL_{i_n}\oantidot BEL_{i_n} ^{\downarrow h_{i_n} \cap
h_n}) \odot BEL"'_n$.
Obviously, $BEL=BEL_{*1}
\odot
\dots \odot BEL_{*(n-1)} \odot BEL"_n$.
Now let us consider a new hypertree with only the hyperedges $h_1,\dots
h_{n-1}$, and
with valuations equal to those marked with an asterisk (*), and repeat the
process
until only one hyperedge is left, the new valuation of which is considered
as $BEL"_1$. In the process, a new factorization is
obtained: $BEL=BEL"_1 \odot \dots \odot BEL"_n$. \\
If now for a hyperedge $h_k$ $card(h_k-h_{i_k})=1$, then we assign $BEL"_k$
to
the node of the belief network corresponding to $h_k-h_{i_k}$. If for a
hyperedge $h_k$ $card(h_k-h_{i_k})>1$, then we split $BEL"_k$ as follows:
Let $h_k-h_{i_k}=\{X_{k1},X_{k2},....,X_{km}\}$ and the indices shall
correspond to the order in the belief network induced by the above
construction procedure. Then
$$BEL"_k=BEL ^{\downarrow h_k|h_k \cap h_{i_k}}=
\bigodot_{j=1}^{m} BEL ^{\downarrow (h_k \cap h_{i_k}) \cup
\{X_{k1},...,X_{kj}\} | (h_k \cap h_{i_k}) \cup
\{X_{k1},...,X_{kj}\}-\{X_{kj}\}}$$
and we assign valuation $BEL ^{\downarrow (h_k \cap h_{i_k}) \cup
\{X_{k1},...,X_{kj}\} | (h_k \cap h_{i_k}) \cup
\{X_{k1},...,X_{kj}\}-\{X_{kj}\}}$ to the node corresponding to $X_{kj}$ in
the network structure. It is easily checked that
\begin{th} \label{xxxx} \cite{Klopotek:93f}
(i) The network obtained by the above construction of its structure and
valuation from hypertree factorization is a belief network.\\
(ii) This belief network represents exactly the joint belief distribution of
the hypertree\\
(iii) This belief network induces exactly the original reduced hypertree
structure.
\end{th}
The above theorem implies that any hypergraph suitable for
propagation must have a compatible
belief network. Hence seeking belief network decompositions of joint
belief distributions is sufficient for finding any suitable
factorization.
\section{Recovery of Tree-structured Belief Networks}
Let us now consider a special class of hypertrees: connected hypertrees with
the cardinality of each hyperedge equal to 2. It is easy to demonstrate that such
hypertrees correspond exactly to directed trees. Furthermore, valuated
hypergraphs of this form correspond to belief networks with directed trees as
underlying dag structures. Hence we can conclude that any factorization in the
form of connected hypertrees with
the cardinality of each hyperedge equal to 2 may be recovered from data by
algorithms recovering belief trees from data.
This does not hold e.g. for poly-trees.
Let us assume that there
exists a measure $\delta(BEL_1,BEL_2)$ equal to zero whenever both belief
distributions $BEL_1,BEL_2$ are identical and being positive otherwise.
Furthermore, we assume that $\delta$ grows with stronger deviation of both
distributions without specifying it further.
The algorithm of Chow/Liu \cite{Chow:68} for recovery of the tree structure of a
probability distribution is well known and has been deeply investigated, so
we omit its description.
It requires a
distance measure DEP(X,Y) between any two variables X, Y, rooted in
empirical data, and
spans a maximum-weight unoriented spanning tree over the
nodes. Then any orientation of the tree is the underlying dag structure, where
valuations are calculated as conditional probabilities.
To accommodate it for general belief trees one needs a proper measure of
distance between variables. As claimed earlier in \cite{Acid:91},
this distance measure has to fulfill the following
requirement:
$ \min(DEP(X,Y),DEP(Y,Z))>DEP(X,Z)$ for any X, Y,
Z such that there exists
a directed path between X and Y, and between Y and Z.
For probabilistic belief networks one such function is known
to be the Kullback-Leibler distance:
$$ DEP0(X,Y)=\sum_{x,y} P(x,y)\cdot \log
\frac{P(x,y)} {P(x)\cdot P(y)}
$$
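This quantity is the mutual information of X and Y; a sketch computing it from a joint probability table (the helper name `dep0` is ours):

```python
from math import log

def dep0(p_xy):
    """DEP0(X,Y): Kullback-Leibler distance between the joint
    P(x,y) and the product of its marginals P(x)P(y)."""
    p_x, p_y = {}, {}
    for (x, y), p in p_xy.items():
        p_x[x] = p_x.get(x, 0.0) + p
        p_y[y] = p_y.get(y, 0.0) + p
    return sum(p * log(p / (p_x[x] * p_y[y]))
               for (x, y), p in p_xy.items() if p > 0)

independent = {(0, 0): 0.25, (0, 1): 0.25, (1, 0): 0.25, (1, 1): 0.25}
dependent = {(0, 0): 0.5, (1, 1): 0.5}
```

The measure vanishes exactly when X and Y are independent and reaches ln 2 for two perfectly correlated binary variables.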
If we have the measure $\delta$ available,
we can construct the measure DEP as follows:
By the ternary joint distribution of the variables $X_1,X_2$ with background
$X_3$ we understand the function:\\
$$BEL ^{\downarrow X_1 \times X_2[X_3]}
=(BEL ^{\downarrow X_1 \times X_3 | X_3} \odot
BEL ^{\downarrow X_2 \times X_3 | X_3} \odot BEL ^{\downarrow X_3})
^{\downarrow X_1 \times X_2}$$
Then we introduce:\\
$$DEP_{BN}(X_1,X_2)=
\min(\delta(BEL ^{\downarrow X_1}\odot BEL ^{\downarrow X_2}, BEL
^{\downarrow X_1 \times X_2}),$$
$$\min_{X_3 \in \V-\{X_1,X_2\}}
\quad \delta(BEL ^{\downarrow X_1 \times
X_2[X_3]}, BEL ^{\downarrow X_1 \times X_2})) $$\\
with {\V} being the set of all variables.
The following theorem is easy to prove:
\begin{th} \cite{Klopotek:93f}
$ \min(DEP_{BN}(X,Y),DEP_{BN}(Y,Z))>DEP_{BN}(X,Z)$ for any X, Y, Z such that
there exists a directed path between X and Y, and between Y and Z.
\end{th}
This suffices to extend the Chow/Liu algorithm to recover general belief tree
networks from data.
The general algorithm is of the following form:
\begin{itemize}
\item[A)] Let E be the set of unoriented edges, V the set of all (at least 3)
variables,
$V_c$ the set of connected variables, and $V_u$ the set of not yet connected
variables. Find the variables X,Y from V maximizing the function
$DEP_{BN}(X,Y)$ over all pairs (X,Y) of distinct X,Y from V. \\
Initialize: $E=\{(X,Y)\}$, $V_c=\{X,Y\}$, $V_u=V-\{X,Y\}$.
\item[B)] repeat\\
Find variables P,R from V maximizing the function
$DEP_{BN}(P,R)$ over all pairs (P,R) with $P\in V_c$, $R\in V_u$.\\
Substitute: $E=E \cup \{(P,R)\}$, $V_c=V_c\cup\{R\}$, $V_u=V_u-\{R\}$.\\
until $V_u$ is an empty set.
\item[C)] Pick one of the variables from V and orient all the edges in E away
from this variable.
\end{itemize}
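Steps A)-C) amount to a greedy maximum-weight spanning-tree construction followed by an orientation away from a chosen root; a sketch, assuming `dep` is any measure satisfying the requirement above (function and variable names are ours):

```python
from collections import deque

def recover_tree(variables, dep):
    v = list(variables)
    # A) the strongest pair starts the tree
    x, y = max(((a, b) for i, a in enumerate(v) for b in v[i + 1:]),
               key=lambda e: dep(*e))
    edges, connected = [(x, y)], {x, y}
    unconnected = [w for w in v if w not in connected]
    # B) repeatedly attach the strongest connected-unconnected pair
    while unconnected:
        p, r = max(((a, b) for a in connected for b in unconnected),
                   key=lambda e: dep(*e))
        edges.append((p, r))
        connected.add(r)
        unconnected.remove(r)
    # C) orient every edge away from one arbitrarily chosen root
    adj = {w: [] for w in v}
    for a, b in edges:
        adj[a].append(b)
        adj[b].append(a)
    oriented, seen, queue = [], {v[0]}, deque([v[0]])
    while queue:
        a = queue.popleft()
        for b in adj[a]:
            if b not in seen:
                oriented.append((a, b))
                seen.add(b)
                queue.append(b)
    return oriented

# a toy DEP that decays with distance along the true chain A-B-C-D,
# so the requirement min(DEP(X,Y),DEP(Y,Z)) > DEP(X,Z) holds
pos = {"A": 0, "B": 1, "C": 2, "D": 3}
dep = lambda a, b: 4 - abs(pos[a] - pos[b])
tree = recover_tree(["A", "B", "C", "D"], dep)
```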
To demonstrate the validity of this general theorem, its specialization was
implemented
for Dempster-Shafer belief networks. The following $\delta$ function was
used: let $Bel_1$ be a DS belief function and $Bel_2$ a DS
pseudo-belief
function approximating it. Let
$$\delta(Bel_2,Bel_1)= \sum_{A; m_1(A)>0} m_1(A) \cdot| \ln
\frac{Q_1(A)}{Q_2(A)}|$$
where the assumption is made that the natural logarithm of a non-positive number
is plus infinity, and $|.|$ is the absolute value operator. The values of
$\delta$ in the variable $Bel_2$ with parameter $Bel_1$ range over
$[0,+\infty)$.
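A sketch of this $\delta$, computing the commonalities $Q$ directly from the mass functions (helper names are ours; `inf` stands for the $+\infty$ convention above):

```python
from math import inf, log

def commonality(m, a):
    """Q(A) = sum of m(B) over supersets B of A."""
    return sum(v for b, v in m.items() if a <= b)

def delta(m2, m1):
    """delta(Bel_2, Bel_1) = sum over focal elements A of m_1 of
    m_1(A) * |ln(Q_1(A) / Q_2(A))|, with the logarithm of a
    non-positive number taken as +infinity."""
    total = 0.0
    for a, v in m1.items():
        if v <= 0:
            continue
        q1, q2 = commonality(m1, a), commonality(m2, a)
        if q1 <= 0 or q2 <= 0:
            return inf
        total += v * abs(log(q1 / q2))
    return total

m1 = {frozenset({"x"}): 0.5, frozenset({"x", "y"}): 0.5}
m2 = {frozenset({"x", "y"}): 1.0}
```

As required, `delta(m1, m1)` is 0 for identical functions and positive otherwise.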
For randomly generated tree-like DS belief distributions, if we worked
directly with these distributions, the algorithm, as expected, yielded a perfect
decomposition into the original tree. For random samples generated from such
distributions, the structure was recovered properly for reasonable sample
sizes (200 for up to 8 variables). Recovery of the joint distribution was not
perfect, as the space of possible value combinations is tremendous and
probably quite large sample sizes would be necessary. It is worth mentioning
that, even with some departures from a truly tree-shaped structure, a distribution
could be obtained which reasonably approximated the original one.
\section{Recovery of Polytree-structured Belief Networks}
A well-known algorithm for recovery of a polytree from data for probability
distributions is that of Pearl \cite{Pearl:88}, \cite{Rebane:89};
we refrain from describing it
here. To accommodate it for usage with DS belief distributions, we had to
change the dependence criterion for two variables given a third one:
$$Criterion(X_1\rightarrow X_3, X_2 \rightarrow X_3) =
\delta(BEL ^{\downarrow X_1 \times
X_2[X_3]}, BEL ^{\downarrow X_1 \times X_2})
- $$
$$- \alpha \cdot
\delta(BEL ^{\downarrow X_1}\odot BEL ^{\downarrow X_2}, BEL
^{\downarrow X_1 \times X_2})
$$
If the above function $Criterion$ is negative, we assume a
head-to-head meeting
of the edges $(X_1,X_3)$ and $(X_2,X_3)$. The rest of the algorithm runs as that of
Pearl.
The general algorithm is of the following form:
\begin{itemize}
\item[A)] Let E be the set of unoriented edges, V the set of all (at least 3)
variables,
$V_c$ the set of connected variables, and $V_u$ the set of not yet connected
variables. Find the variables X,Y from V maximizing the function
$DEP_{BN}(X,Y)$ over all pairs (X,Y) of distinct X,Y from V. \\
Initialize: $E=\{(X,Y)\}$, $V_c=\{X,Y\}$, $V_u=V-\{X,Y\}$.
\item[B)] repeat\\
Find variables P,R from V maximizing the function
$DEP_{BN}(P,R)$ over all pairs (P,R) with $P\in V_c$, $R\in V_u$.\\
Substitute: $E=E \cup \{(P,R)\}$, $V_c=V_c\cup\{R\}$, $V_u=V_u-\{R\}$.\\
until $V_u$ is an empty set.
\item[C)]
For every pair of edges ((X,Z),(Y,Z)) from E sharing an edge end, calculate
$Criterion(X\rightarrow Z, Y\rightarrow Z )$ (the head-to-head test). If the
result is negative,
orient both edges as $(X\rightarrow Z)$, $(Y\rightarrow Z)$; otherwise
do nothing for them.
\item[D)] Orient the remaining unoriented edges so as not to cause new
head-to-head meetings.
\end{itemize}
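Steps C) and D) can be sketched as follows, assuming `criterion(x, y, z)` returns the value of $Criterion(x\rightarrow z, y\rightarrow z)$ (all names are ours; the conflict-resolution heuristics mentioned below are omitted):

```python
def orient_polytree(edges, criterion):
    """C) mark head-to-head meetings where the criterion is
    negative; D) orient the remaining edges away from any node that
    already has an incoming arrow, so that no new head-to-head
    meeting arises."""
    oriented = set()
    undirected = {frozenset(e) for e in edges}
    nodes = {n for e in edges for n in e}
    for z in sorted(nodes):
        nbrs = sorted(next(iter(e - {z})) for e in undirected if z in e)
        for i, x in enumerate(nbrs):
            for y in nbrs[i + 1:]:
                if criterion(x, y, z) < 0:
                    oriented.update({(x, z), (y, z)})
    for a, b in oriented:
        undirected.discard(frozenset({a, b}))
    changed = True
    while changed:
        changed = False
        heads = {b for _, b in oriented}
        for e in list(undirected):
            a, b = sorted(e)
            if a in heads or b in heads:
                tail = a if a in heads else b
                head = b if tail == a else a
                oriented.add((tail, head))
                undirected.remove(e)
                changed = True
    return oriented, undirected

# toy polytree X -> Z <- Y, Z -> W: only (X, Y) meet head-to-head at Z
edges = [("X", "Z"), ("Y", "Z"), ("Z", "W")]
crit = lambda x, y, z: -1 if {x, y} == {"X", "Y"} and z == "Z" else 1
arcs, left = orient_polytree(edges, crit)
```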
Steps C) and D) may result in conflicts concerning edge orientations if
the true underlying joint belief distribution was not poly-tree shaped.
Special heuristic procedures need to be applied to resolve them reasonably (in
the actual implementation both steps are in fact intermixed).
For randomly generated polytree-like DS belief distributions, if we worked
directly with these distributions, the algorithm, as expected, yielded a perfect
decomposition into the original polytree. For random samples generated from
such
distributions, the structure was recovered properly only for very
large sample
sizes (5000 for 6 variables), with smaller sample sizes leading to
spurious indications of head-to-head meetings not present in the original
distribution. Recovery of the joint distribution was also not
perfect, due to the immense size of the space of possible value combinations.
\section{Recovery of General Type Belief Networks}
Hidden (latent) variables are a source of trouble both for the
identification of
causal relationships (well-known confounding effects) and
for the construction of
a belief network (an ill-recognized direction of causal
influence may lead to
the assumption of independence of variables not present in the
real
distribution). Hence much research has been devoted to the
construction of models
with hidden variables. It is a trivial task to construct
a belief network with hidden
variables correctly reflecting the measured joint
distribution: one can
consider a single hidden variable upon which all the
measurable variables depend.
But such a model would neither meet the requirements put
on belief network
(space saving representation of distribution, efficient
computation of
marginals and conditionals) nor those for causal networks
(prediction
capability under control of some variables). Therefore,
criteria like minimal
latent model (IC algorithm \cite{Pearl:91}) or maximally informative
partially
oriented path graph (CI algorithm
and its accelerator FCI algorithm
\cite{Spirtes:93}) have been proposed.
As the IC algorithm
for learning minimal latent model \cite{Pearl:91} is
known to be
wrong \cite{Spirtes:93},
and a failure of FCI has also been reported \cite{Klopotek:93i},
let us consider the CI algorithm from
\cite{Spirtes:93}.
In \cite{Spirtes:93} the concept of including path graph
is introduced and
studied. Given a directed acyclic graph G with the set of
hidden nodes $V_h$
and visible nodes $V_s$ representing a causal network CN,
an including path
between nodes A and B belonging to $V_s$ is a path in the
graph G such that
the only visible nodes (except for A and B) on the path
are those where edges
of the path meet head-to-head and there exists a directed
path in G from such a node
to either A or B. An including path graph for G is such a
graph over $V_s$ in
which if nodes A and B are connected by an including path
in G ingoing into A
and B, then A and B are connected by a bidirectional edge
$A<->B$. Otherwise
if they are connected by an including path in G outgoing
from A and ingoing
into B then A and B are connected by an unidirectional
edge $A->B$. As the set
$V_h$ is generally unknown, the including path graph (IPG)
for G is the best we can
ever know about G. However, given an empirical
distribution (a sample), though
we may be able to detect the presence/absence of edges from the
IPG, we may fail to
uniquely decide the orientation of all edges in the IPG.
Therefore, the concept of a
partial including path graph was considered in
\cite{Spirtes:93}.
A partially oriented including path graph contains the
following types of
edges: unidirectional $A->B$, bidirectional $A<->B$,
partially oriented
$Ao->B$ and non-oriented $Ao-oB$, as well as some local
constraint information $A*-\underline{*B*}-*C$,
meaning that the edges between A and B and
between B and C cannot meet head to head at B.
(Subsequently an asterisk (*)
means any orientation of an edge end: e.g. $A*->B$ means
either $A->B$ or $Ao->B$ or $A<->B$.)
A partial including path graph (PIPG) would be maximally
informative if all
definite edge orientations in it (e.g. $A-*B$ or $A<-*B$
at A) would be
shared by all candidate IPG for the given sample and vice
versa (shared
definite orientations in candidate IPG also present in
maximally informative
PIPG), the same should hold for local constraints.
Recovery of the maximally informative PIPG is considered
in \cite{Spirtes:93}
as too ambitious and a less ambitious algorithm CI has
been developed therein
producing a PIPG where only a subset of edge end
orientations of the maximally
informative PIPG are recovered. Authors of CI claim such
an output to be
still useful when considering direct and indirect causal
influence among
visible variables as well as some prediction tasks.
However, the CI algorithm is known to be of high computational complexity even for
probabilistic variables. Therefore, we developed a modified version of it
\cite{Klopotek:94}, \cite{Klopotek:93h} to reduce its complexity and to
provide a bridge towards application to DS empirical distributions. \\
We cite below some useful definitions from
\cite{Spirtes:93} and then present our Fr(k)CI algorithm.
In a partially oriented including path graph $\pi$:
\begin{itemize}
\item[(i)] A is a parent of B if and only if the edge $A->B$
is in $\pi$
\item[(ii)] B is a collider along the path $<A,B,C>$ if
and only if $A*->B<-*C$ is in $\pi$
\item[(iii)] An edge between B and A is into A iff $A<-*B$
is in $\pi$
\item[(iv)] An edge between B and A is out of A iff $A->B$
is in $\pi$
\item[(v)] In a partially oriented including path graph
$\pi$, U is a definite
discriminating path for B if and only if U is an
undirected path between X and
Y containing B, $B \neq X, B \neq Y$, every vertex on U
except for B and the
endpoints is a collider or a definite non-collider on U
and:\\
(a) if V and V" are adjacent on U, and V" is between V and
B on U, then $V*->V"$ on U,\\
(b) if V is between X and B on U and V is a collider on U,
then $V->Y$ in $\pi$, else $V<-*Y$ on $\pi$\\
(c) if V is between Y and B on U and V is a collider on U,
then $V->X$ in $\pi$, else $V<-*X$ on $\pi$\\
(d) X and Y are not adjacent in $\pi$.\\
(e) Directed path U: from X to Y: if V is adjacent to X on
U then $X->V$ in
$\pi$, if $V$ is adjacent to Y on V, then $V->Y$, if V and
V" are adjacent on U
and V is between X and V" on U, then $V->V"$ in $\pi$
\end{itemize}
Let us introduce some notions specific for Fr(k)CI:
\begin{itemize}
\item[(i)] A is r(k)-separated from B given set S
($card(S)\leq k$) iff A and
B are conditionally independent given S
\Bem{- conditional independence means that the $\chi
^2$-test does not deny the thesis of independence of
variables A and B given S.}
\item[(ii)] In a partially oriented including path graph
$\pi$,
a node A is called {\em legally removable} iff there
exists no local constraint
information $B*-\underline{*A*}-*C$ for any nodes B and C
and there exists no
edge of the form $A*->B$ for any node B.
\end{itemize}
{\noindent \bf The Fast Restricted-to-k-Variables Causal
Inference Algorithm (Fr(k)CI):}\\
Input: Empirical joint probability distribution\\
Output: Belief network
\begin{description}
\item[A)] Form the complete undirected graph Q on the
vertex set V
\item[B')]
for j=0 step 1 to k\\
do:
if A and B are r(k)-separated given any subset S of the
neighbours of A or of
B with card(S)=j, remove the edge between
A and B, and record S in Sepset(A,B) and Sepset(B,A).
\item[B'')] if A and B are r(k)-separated given any subset
S of V ($card(S)>0$), remove the edge between
A and B, and record S in Sepset(A,B) and Sepset(B,A).
\item[C)] Let F be the graph resulting from step B).
Orient each edge as
$o-o$ (unoriented at both ends). For each
triple of vertices A,B,C such that the pair A,B and the
pair B,C are each
adjacent in F, but the pair A,C are not adjacent in F,
orient \Bem{$(C)$} A*-*B*-*C as
$A*->B<-*C$ if and only if B is not in Sepset(A,C), and
orient A*-*B*-*C
as $A*-\underline{*B*}-*C$ if and only if B is in
Sepset(A,C).
\item[D)] Repeat
\begin{description}
\item[(D1) if] there is a directed path from A to B, and
an edge A*-*B, orient \Bem{$(D_p)$} A*-*B
as $A*->B$
\item[(D2) else if] B is a collider along $<A,B,C>$ in
$\pi$, B is adjacent
to D, A and C are not adjacent, and there exists
local constraint $A*-\underline{*D*}-*C$, then orient
\Bem{$(D_s)$} $B*-*D$ as $B<-*D$
\item[(D4) else if] $P*-\underline{>M*}-*R$ then orient
\Bem{$(D_c)$} as $P*->M->R$
\item[(D3) else if] U is a definite discriminating path
between A and B for M in $\pi$ and
P and R are adjacent to M on U, and P-M-R is a triangle,
then\\
if M is in Sepset(A,B) then M is marked as a non-collider on the
subpath $P*-\underline{*M*}-*R$\\
else $P*-*M*-*R$ is oriented \Bem{$(D_d)$} as
$P*->M<-*R$
\item[until] no more edges can be oriented
\item[E)] Orient every edge $Ao->B$ as $A->B$
\item[F)]
\Bem{
Orient all the edges of type $Ao-oB$ either as $A<-B$ or
$A->B$ so as not to
violate $P*-\underline{*M*}-*R$ constraints as follows:
}
Copy the partially oriented including path graph $\pi$
onto $\pi'$. \\
Repeat: \\
In $\pi'$ identify a legally removable node A. Remove it
from $\pi'$
together with every edge $A*-*B$ and every constraint
with A involved in it. Whenever an edge $Ao-oB$ is
removed from $\pi'$, orient
edge $Ao-oB$ in $\pi$ as $A<-B$. \\
Until no more node is left in $\pi'$.
\item[G)] Remove every bidirectional edge $A<->B$ and
insert instead
parentless hidden variable $H_{AB}$ adding edges
$A<-H_{AB}->B$
\end{description}
\item[End of Fr(k)CI]
\end{description}
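For concreteness, the collider-orientation rule of step C) can be sketched in Python as follows. This is a minimal illustration, not the authors' implementation; \texttt{adjacent} maps each node to its neighbours in the graph F left by step B), and \texttt{sepset} is the record built there.

```python
def orient_colliders(adjacent, sepset):
    """Step C) sketch: for every unshielded triple A*-*B*-*C (A,B and B,C
    adjacent, A,C not adjacent), orient A*->B<-*C iff B is absent from
    Sepset(A,C); otherwise record the non-collider constraint."""
    colliders, noncolliders = set(), set()
    for b in adjacent:
        nbrs = sorted(adjacent[b])
        for i in range(len(nbrs)):
            for j in range(i + 1, len(nbrs)):
                a, c = nbrs[i], nbrs[j]
                if c in adjacent[a]:
                    continue  # shielded triple: A and C are adjacent
                if b not in sepset.get((a, c), set()):
                    colliders.add((a, b, c))      # orient A*->B<-*C
                else:
                    noncolliders.add((a, b, c))   # non-collider constraint
    return colliders, noncolliders
```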
The CI algorithm has, first of all, been moved onto DST grounds by using DST
independence tests instead of probabilistic ones.
Steps E) and F) constitute an extension of \Bem{(are not
present in)} the original CI algorithm of
\cite{Spirtes:93}, bridging the gap between partial
including path graph and the belief network (see also \cite{Klopotek:93g}).
Conditional belief functions, also in presence of hidden variables are
calculated according to \cite{Klopotek:93h}.
Step B) was modified by substituting the term
"d-separation" with
"r(k)-separation" \cite{Klopotek:94}. This means that not all possible
subsets S of the set of
all nodes V (with card(S) up to card(V)-2) are tested on
rendering nodes A and
B independent, but only those with cardinality
0,1,2,...,k. If one takes into
account that higher order conditional independencies
require larger amounts of
data to remain stable, superior stability of this step in
Fr(k)CI becomes
obvious. Furthermore, this step was subdivided into two
substeps, B') and
B"). The first substep corresponds to the technique used by
FCI: restricting
candidate sets of potential d-separators to the
neighbourhood established so far. This substep is followed by the full search
over all nodes of
V - but only for edges left by B' - this is in contrast to
FCI which omits
step B) of the original CI, and thus runs into the
troubles described in \cite{Klopotek:93i}.
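As a concrete illustration, substeps B') and B'') can be sketched as follows. This is a simplified Python rendering; the function \texttt{independent(a, b, s)} is a hypothetical oracle standing in for the statistical r(k)-separation test on the DST sample.

```python
from itertools import combinations

def remove_edges(nodes, adjacent, independent, k):
    """Sketch of substeps B') and B'') of Fr(k)CI: edge removal by
    r(k)-separation, with conditioning sets of cardinality at most k.
    `adjacent` is a dict mapping each node to its current neighbours."""
    sepset = {}
    # B') restrict candidate separators to current neighbourhoods (as in FCI)
    for j in range(k + 1):
        for a in nodes:
            for b in list(adjacent[a]):
                candidates = (adjacent[a] | adjacent[b]) - {a, b}
                for s in combinations(sorted(candidates), j):
                    if independent(a, b, set(s)):
                        adjacent[a].discard(b)
                        adjacent[b].discard(a)
                        sepset[(a, b)] = sepset[(b, a)] = set(s)
                        break
    # B'') full search over all nodes of V, but only for edges left by B')
    for j in range(1, k + 1):
        for a in nodes:
            for b in list(adjacent[a]):
                candidates = set(nodes) - {a, b}
                for s in combinations(sorted(candidates), j):
                    if independent(a, b, set(s)):
                        adjacent[a].discard(b)
                        adjacent[b].discard(a)
                        sepset[(a, b)] = sepset[(b, a)] = set(s)
                        break
    return adjacent, sepset
```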
Step D2) has been modified in that the term "not
d-connected" of CI was substituted
by reference to local constraints. In this way results of
step B) are
exploited more thoroughly and in step D) no further
reference is made to the original
body of data (which clearly accelerates the algorithm).
This modification is
legitimate since all the other cases covered by the
concept of "not
d-connected" of CI would have resulted in orientation of
$D*->B$ already in
step C). Hence the generality of step D2) of the original
CI algorithm is not needed here.
Steps D3) and D4) were interchanged, as step D3) of CI
is quite time-consuming
and should be postponed until no alternative
substep can do anything.\\
\section{Discussion}
The particular feature of the presented algorithms for identification of
belief structure in DST is the close relationship between target knowledge
representation scheme, the reasoning scheme and the data viewing scheme. This
unity made it possible to adapt some probabilistic belief network recovery
algorithms for the purpose of DS belief network recovery from data.
If, for example, no frequentist interpretation for DST existed (as required by
Smets \cite{Smets:92}), then the last algorithm (Fr(k)CI) would be pointless,
as it relies on statistical tests over a sample. Furthermore, if the reasoning
scheme were not that of Shenoy-Shafer \cite{Shenoy:90}, then clearly
tree-generation and polytree-generation algorithms would make little sense, as
the knowledge representation scheme might require more general belief functions
than those utilized by Shenoy and Shafer. It may also prove pointless to represent a
joint belief distribution as a belief network as
local influences among variables may lose any meaning
(being replaced by some global fixpoint state of all variables).
One should also notice that the possibility of
utilizing statistical independence tests for simplification of a joint
belief distribution (its decomposition) is bound to separate measurability
of attributes. This separate measurability is obvious for the Shenoy-Shafer
uncertainty propagation scheme, but does not need to be so for other schemes.
On the other hand, the notion of statistical independence is strongly related
to the interpretational attitude of the researcher. E.g. if we consider DST
as a calculus over lower and upper probability bounds, as done e.g. in
\cite{Halpern:92}, then not only the reasoning scheme would have to be changed
(as indicated in \cite{Halpern:92}) but also the understanding of independence
of variables: instead of missing mutual influence, the possibility of missing
mutual influence would have to be considered. The application of the Chow/Liu
algorithm \cite{Chow:68} would then be connected with notions of lower
and upper entropy, etc. \\
Some researchers do not even consider whether their notion of independence has
any empirical sense at all. E.g. Hummel and Landy \cite{Hummel:88} talk about
"independent opinions of experts" within their probabilistic interpretation.
But how is such independence to be understood? Opinions of
experts on a subject cannot be independent, as they have a common cause: the
subject on which these opinions are issued. Were the opinions of experts really
independent, then at least one of them would have to be unrelated to
the subject of expertise, hence devoid of any useful content. Another
strange approach is exhibited in a recent paper by Zhu and
Lee \cite{Zhu:93}
where it is a priori assumed that premise and conclusion of an expert rule are
statistically independent. Under these circumstances the value of a
reasoning rule and hence of the whole reasoning system is questionable (we
can infer a posteriori beliefs without knowledge of observables).
One of the few interpretations of belief functions possessing intrinsic
physical relevance is the rough set based interpretation presented by Skowron
\cite{Skowron:93} and Grzyma{\l}a-Busse \cite{Grzymala:91}. This rough set
approach explains a possible physical source of Dempster-Shafer
uncertainty
(an incomplete observable set for the decision table). However, it
could not be used in
combination with our algorithms, as it does not fit the separate measurability
requirement (enforcing separate measurability would cause loss of
information
otherwise present in the decision table). The rough set approach also allows
for a more precise representation of conditional relationships between decision
variables than that actually imposed by Shafer's conditioning. Therefore the
Shenoy-Shafer uncertainty propagation scheme is also not suitable for
application within the rough set framework, as it would deteriorate the
decision table's capabilities.
The algorithms developed constitute in some sense extensions of known
algorithms from the bayesian belief network literature. However, these
extensions were not straightforward ones. First of all the
empirical meaning
of independence and conditional independence from data for DS theory had to be
established. This required imposition of a compatible probabilistic
interpretation of DST. However, as just stated, no such completely useful
interpretation was available from literature. Hence one had to be developed.
As far as the tree-recovery algorithm is concerned, the general framework of
the Chow/Liu algorithm \cite{Chow:68} could be adopted, with the exception of
the distance measure. This measure had to be invented completely from scratch,
as the intuition behind Chow/Liu's original measure is that of probabilistic
composition, which has properties contrary to DS composition (compare the
effects of decompositions). Similarly, a conditional distance had to be
designed anew for the polytree-recovery algorithm of Pearl \cite{Rebane:89}. \\
In the case of a general network with variable hiding, the adaptation of the
CI algorithm of Spirtes et al. \cite{Spirtes:93} involved more complex work.
As stated already,
the notion of conditional independence from data for DST had to be invented.
Furthermore, a result in form of a belief network instead of CI's partial
including path graph was required \cite{Klopotek:93g} and later adopted for
DST \cite{Klopotek:93h}. But the time complexity of CI was already high for
probabilistic networks and explodes for DS networks. Therefore some
simplifications and algorithmic improvements had to be carried out (compare
\cite{Klopotek:94}). Last but not least, some heuristics for the calculation of
marginal and conditional distributions from data in the
presence of hidden
variables had to be elaborated (this task was not reported here).
\section{Conclusions}
Within this paper belief network discovery algorithms for three different
classes of Dempster-Shafer
belief networks (tree-structured, poly-tree-structured, general ones with
variable hiding) have been presented. Close relationship between utility of
these algorithms and the usage of a particular uncertainty propagation scheme
(Shenoy-Shafer local computation method \cite{Shenoy:90}) has been
demonstrated. Also a new frequentist interpretation of DST has been described
and shown to be a prerequisite for the application of the developed algorithms.
Though in their basic ideas these algorithms resemble ones known from the
bayesian network literature, considerable effort was required for settling
various details, like measures of distance between variables,
instantiation of causal
networks as belief networks, etc.\\
It is hoped that this research may contribute to the adaptation of further
bayesian network recovery algorithms for DS belief networks and/or outline
procedures for the development of complex probabilistic models for other known
uncertainty propagation schemes of DST.
\input FCEBIB.tex
\end{document}
\section{Introduction}
\label{s:intro}
The relationship between supermassive black holes and their host galaxies continues to generate much interest in the literature. Since the discovery of the relationship between the mass of the black hole (M$_{\rm BH}$) and the galaxy luminosity \citep[$L$,][]{1995ARA&A..33..581K}, it has been clear that these objects must be linked in an intricate way \citep[for a review of the development of ideas about the scaling relations between black holes and galaxies see][]{2016ASSL..418..263G}. The subsequent discoveries of the relations between M$_{\rm BH}$ and galaxy mass \citep[M$_\ast$,][]{1998AJ....115.2285M}, velocity dispersion \citep{2000ApJ...539L..13G, 2000ApJ...539L...9F}, circular velocity \citep{2002ApJ...578...90F}, and galaxy concentration \citep{2001ApJ...563L..11G, 2003RMxAC..17..196G}, as well as several secondary scaling relations \citep{2010ApJ...720..516B, 2011MNRAS.410.2347H, 2008ApJ...678L..93S}, only deepened the interest. These relations, their tightness and their dynamic range covering several orders of magnitude, indicate that the growth of supermassive black holes in the centres of galaxies, and of the galaxies themselves, must be closely related. The hope is that understanding the properties of these relations (their universality, shape, tightness and related uncertainties) will also highlight and untangle the relevant processes involved in the growth of black holes and galaxies.
The number of galaxies with measured black hole masses has dramatically increased over the last ten years \citep{2005SSRv..116..523F, 2013ARA&A..51..511K}, approaching a hundred estimates of M$_{\rm BH}$ based on dynamical models of stellar or gaseous motions \citep{2016ApJ...818...47S}. Once these are combined with estimates based on reverberation mapping for active galactic nuclei (AGN) and upper limit measurements, the sample comprises more than 200 galaxies \citep{2016ApJ...831..134V}. Such numbers, while not yet of sufficient size for purely statistical studies, allow a more complex analysis of the scaling relations, specifically to investigate which of the various relations has the smallest scatter (and is therefore more fundamental), whether the data actually support two (or more) power-law relations, and whether there is a third parameter which could make the scaling relations even tighter. The main limitation of the sample, next to the relatively low number of galaxies, is that it does not represent the complete population of galaxies \citep{2007ApJ...660..267B}. The sample is biased towards bright (massive), nearby early-type galaxies (ETGs), with possibly more massive black holes than the average of the population, as these are easier to probe directly with dynamical models given the spatial resolution achieved by observations \citep{2010ApJ...711L.108B,2016MNRAS.460.3119S}.
Indications of the non-universality of the M$_{\rm BH}$ scaling relations come from the demographics of host galaxies, for example by investigating whether spiral galaxies, galaxies with bars, pseudo-bulges or AGNs satisfy the same relations as more massive early-type galaxies \citep[e.g.][]{2006MNRAS.365.1082W,2009ApJ...695.1577G,2013ApJ...764..184M}. Although samples of non-ETG galaxies are relatively small \citep[e.g.][]{2016ApJ...818...47S}, there are clear indications that they do not necessarily follow the same scaling relations as the more massive ETGs. Barred galaxies, galaxies hosting masers or AGNs, or pseudo-bulges seem to be offset from the main relation \citep{2008ApJ...680..143G, 2008MNRAS.386.2242H, 2009ApJ...698..812G, 2010ApJ...721...26G, 2011Natur.469..374K, 2014MNRAS.441.1243H,2016ApJ...826L..32G}. Scaling relations for early-type and spiral galaxies are also offset with respect to each other \citep{2013ApJ...764..184M}. It is, however, difficult to establish black hole scaling relations based on morphological classifications, as the classification can be difficult \citep[e.g. recognising pseudo-bulges,][]{2015HiA....16..360G}, and the samples are small, or span a limited range in both M$_{\rm BH}$ and galaxy stellar mass.
Nevertheless, there are several arguments that offer tantalising indications that the M$_{\rm BH}$ scaling relations are not universal. \citet{2007ApJ...662..808L} showed that the predictions from the M$_{\rm BH} - \sigma$ and M$_{\rm BH} - L$ relations differ for high mass galaxies, and brightest cluster galaxies in particular. The main issue is that the M$_{\rm BH} - \sigma$ relation, using measured velocity dispersions of galaxies, predicts M$_{\rm BH}$ that are rarely larger than a few times $10^9$ M$_\odot$, while the relation with the luminosity predicts M$_{\rm BH}$ in excess of $10^{10}$ M$_\odot$. The origin of this tension is in the curvature of the $L - \sigma$ relation, which for the most luminous systems departs from the \citet{1976ApJ...204..668F} relation; galaxies with velocity dispersion larger than about 300 km/s are very rare \citep{2003ApJ...594..225S,2006AJ....131.2018B}. Crucially, as \citet{2007ApJ...662..808L} pointed out, there is a difference in the $L - \sigma$ relations for galaxies with and without cores in their nuclear profiles. The relation is much steeper for core galaxies, following $\sim\sigma^7$, compared to the canonical $\sim\sigma^4$. This is true regardless of whether the ``Nuker'' \citep{1995AJ....110.2622L} or (core-)Sersic \citep{2003AJ....125.2951G} parametrisation of the nuclear profiles is used \citep{2013ApJ...769L...5K}.
More recently, the data from the complete ATLAS$^{3D}$ survey \citep{2011MNRAS.413..813C} revealed that the previously reported major break in the $L-\sigma$ relation is not related to the transition between core and core-less galaxies. Instead, the break is clearly observed, consistently in both the $M_\ast-R_e$ and $M_\ast-\sigma$ relations, around a mass $M_\ast\approx3\times10^{10}$ M$_\odot$, and is present even when all core galaxies are removed \citep{2013MNRAS.432.1862C}. It appears related to the transition between a sequence of bulge growth and dry merger growth. However, a much subtler change \citep{2009MNRAS.394.1978H} in the $L-\sigma$ relation is observed around $M_\ast\approx2\times10^{11}$ M$_\odot$. Crucially, this characteristic mass also marks a transition between (core-less) fast rotators and (core) slow rotators \citep{2013MNRAS.433.2812K}, indicating a transition in the dominant assembly process \citep[for a review see section 4.3 in][]{2016ARA&A..54..597C}.
Distinguishing between core and core-less galaxies in the black hole scaling relations is still of potentially great significance, as cores are predicted to be created by black holes. When massive galaxies (harbouring massive black holes) merge, their black holes will eventually spiral down to the bottom of the potential well and form a binary \citep{1980Natur.287..307B,1991Natur.354..212E}. The decay of the binary orbit will be enabled by the removal of stars that cross it \citep{1996NewA....1...35Q, 2001ApJ...563...34M}, resulting in a central region devoid of stars, a core, compared to the initial steep power-law light profile. As the removed stars were mostly on radial orbits, this process introduces a strong tangential anisotropy, significantly larger than that expected for adiabatic black hole growth \citep{1995ApJ...440..554Q,2014ApJ...782...39T}. The mergers are dissipation-less (dry) and there is no significant star formation which could refill the core. An implication of this effect is that there should also exist a relation between M$_{\rm BH}$ and the size of the core region \citep{2007ApJ...662..808L}, or the missing stellar mass \citep{2004ApJ...613L..33G}, in the most massive galaxies. The uncertainties in these relations are, however, no smaller than in other relations, also because there is no unique way to measure the size of the core \citep[e.g.][]{2007ApJ...662..808L,2009ApJ...691L.142K, 2010MNRAS.407..447H, 2013AJ....146..160R, 2014MNRAS.444.2700D}.
Dividing galaxies into Sersic and core-Sersic (or power-law and core) is significantly different from looking at the difference between various morphological types (late or early-type galaxies, classical or pseudo-bulges, barred and non-barred, etc). A key point here is that the property on which the sample is divided is based on a physical process of core scouring by black holes \citep{1997AJ....114.1771F}. Therefore, there is a working paradigm supporting the possibility of non-universality of the M$_{\rm BH}$ scaling relations. This was explored by \citet{2012ApJ...746..113G}, \citet{2013ApJ...768...76S} and \citet{2013ApJ...764..151G}, who showed that galaxies with and without cores (using the Sersic and core-Sersic parametrisation) have different M$_{\rm BH}$ - M$_\ast$ relations. Adding AGNs (all with Sersic profiles), which extends the sample of galaxies to lower masses, gives even more support to such a break in the scaling relation \citep{2015ApJ...798...54G, 2015ApJ...813...82R}. Another physically motivated separation of galaxies is to divide them into star forming and quiescent galaxies \citep{2016ApJ...830L..12T,2017arXiv170701097T}, prompted by the need to suppress star formation in galaxy evolution models, where the activity of the central black hole provides a ready feedback mechanism. Furthermore, the expectation is that quiescent galaxies will have larger black hole masses. This is similar to dividing galaxies into early- and late-types as done by \citet{2013ApJ...764..184M}, who also found evidence that early-type galaxies harbour more massive black holes. Neither of these studies, however, reported a break in the relations. These divisions are only approximately similar to the Sersic/core-Sersic division, as many early-type galaxies do not have cores but are quiescent, and further work is needed to describe the shape of the scaling relations across galaxy properties.
Adding more parameters to the correlations with M$_{\rm BH}$ could, in principle, result in tighter relations. This was investigated by a number of studies over the past decade \citep[e.g.][]{2003ApJ...589L..21M,2007ApJ...665..120A, 2016ApJ...818...47S,2016ApJ...831..134V}. Such attempts, however, generally conclude that the decrease of the scatter when adding an additional parameter is not substantial, and M$_{\rm BH} -\sigma$ is still considered the tightest and perhaps the most fundamental relation, in spite of the intrinsic scatter no longer being considered consistent with zero \citep[e.g.][]{2009ApJ...698..198G}.
In this work we introduce a different approach. Instead of looking for the best scaling relation and then inferring the possible formation scenarios, we start from the emerging paradigm of the two-phase formation of galaxies, from both a theoretical \citep{2010ApJ...725.2312O} and observational \citep{2013MNRAS.432.1862C,2015ApJ...813...23V} point of view \citep[as reviewed in][]{2016ARA&A..54..597C}. Assuming that black holes evolve in sync with galaxies, and are modified through similar processes, which are dependent on the galaxy mass and environment \citep[e.g.][]{2010ApJ...721..193P}, we search for the records of these processes in the dependence of black hole masses on galaxy properties. In particular, we consider the distribution of galaxies with black hole measurements in the mass - size diagram, an orthogonal projection of the thin Mass Plane \citep{2013MNRAS.432.1862C}.
After defining the sample of galaxies with black hole masses that we will use (Section~\ref{s:obs}), we present the mass - size diagram and analyse its distribution of black hole masses (Section~\ref{s:mass-size}). In Section~\ref{ss:phot} we use the stellar photometry and sizes from the latest compilation of galaxy black hole mass measurements from the literature \citep{2016ApJ...831..134V}, while in Section~\ref{ss:2mass} we repeat the exercise using 2MASS catalog values to estimate stellar masses and sizes, and provide a scaling relation based on 2MASS photometry alone. In Section~\ref{s:discs}, we present a toy model that reproduces what is seen in the data and discuss the implications of our results, before concluding with a brief summary of the main results (Section~\ref{s:con}).
\section{A compilation of black hole masses}
\label{s:obs}
We make use of the most recent compilation of M$_{\rm BH}$ measurements presented in Table 2 of \citet{2016ApJ...831..134V}, with black hole masses obtained from dynamical models and reverberation mapping. We do not include objects excluded from the regression fits in that paper, and we have also removed 49 upper limits. This results in 181 objects. Next to the black hole masses listed in \citet{2016ApJ...831..134V}, we also use the listed estimates for the size (effective or half-light radius, $R_e$) and the total $K_s$-band luminosity (Vega magnitudes) of these galaxies. We estimate the mass of these galaxies (M$_\ast$) using the relation between the $K_s$-band mass-to-light ratio and the velocity dispersion given by eq.~(24) of \citet{2016ARA&A..54..597C}. The velocity dispersions are also taken from \citet{2016ApJ...831..134V}. We stress that the velocity dispersion is the so-called {\it effective velocity dispersion} $\sigma_e$, which incorporates both the mean and random motions within an aperture the size of the effective radius. Unfortunately, the measurements of $\sigma_e$ are not uniform across the sample, as $\sigma_e$ can be measured directly only for the subset of galaxies observed with integral-field units. Figure~\ref{f:ms1} presents the mass - size diagram. Note that in the figure we do not plot three galaxies (NGC\,0221, NGC\,0404 and NGC\,4486B), which have significantly lower masses and sizes than the majority of the sample. We keep these objects in the calculations, but do not show them for presentation purposes. The only notable difference between this and the mass - size diagram in fig. 8 of \citet{2016ApJ...831..134V} is that the masses of the galaxies increased by about 15 per cent due to our slightly different conversion to mass.
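For illustration, the conversion from the tabulated $K_s$-band magnitude and $\sigma_e$ to a stellar mass can be sketched as below. The adopted solar $K_s$ magnitude and, especially, the mass-to-light relation are placeholders: the actual analysis uses eq.~(24) of \citet{2016ARA&A..54..597C}, whose coefficients are not reproduced here.

```python
M_K_SUN = 3.28  # adopted absolute K_s-band magnitude of the Sun (Vega system)

def stellar_mass(abs_K_mag, sigma_e, ml_func):
    """Convert a total absolute K_s magnitude (Vega) to a luminosity in
    solar units, then multiply by a sigma-dependent mass-to-light ratio.
    `ml_func` stands in for the published M/L - sigma relation."""
    lum_K = 10.0 ** (0.4 * (M_K_SUN - abs_K_mag))  # luminosity in L_sun,K
    return ml_func(sigma_e) * lum_K

def placeholder_ml(sigma_e):
    # placeholder M/L rising slowly with sigma (illustrative values only)
    return 0.8 * (sigma_e / 130.0) ** 0.5
```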
\begin{figure}
\includegraphics[width=0.5\textwidth]{figure1.pdf}
\caption{Mass - size diagram for galaxies with black hole mass measurements from the latest compilation presented in \citet[][see text for galaxies that were excluded]{2016ApJ...831..134V}. Filled circles are galaxies with black hole mass estimates, while their colour indicates the type of method used for the black hole mass determination, as shown in the legend. Small open (red) circles and (blue) squares are ETGs and spirals from the ATLAS$^{\rm 3D}$ sample, respectively. The solid red line shows the "zone-of-exclusion" \citep[ZOE,][]{2013MNRAS.432.1862C} and the dash-dotted diagonal lines show constant velocity dispersion, for the values shown at the top of the diagram. Note that the mass and size estimates for the spirals, ETGs and the black hole sample have different origins, but the values for the black hole sample are internally consistent and uniformly measured.}
\label{f:ms1}
\end{figure}
\begin{figure*}
\includegraphics[width=0.495\textwidth]{figure2a.pdf}
\includegraphics[width=0.495\textwidth]{figure2b.pdf}
\caption{Mass - size diagram as in Fig.~\ref{f:ms1}, but showing only galaxies with measured black hole masses. The colour of the symbols indicates the black hole masses within the range given on the colour-bar. M$_{\rm BH}$ values were smoothed using the LOESS method assuming constant errors. For an edge-on projection of the plot highlighting the scatter of M$_{\rm BH}$ see fig.~7 in \citet{2016ApJ...831..134V}. In the right panel, we show a continuous colour surface obtained by interpolating between the LOESS smoothed M$_{\rm BH}$ values. The red solid line is the ZOE as in Fig.~\ref{f:ms1}. Diagonal dashed lines are lines of constant velocity dispersion. Note that the contours depart from the constant velocity dispersion lines for masses M$_{crit}> 2\times 10^{11}$ M$_\odot$.}
\label{f:ms2}
\end{figure*}
In Fig.~\ref{f:ms1} we also plot galaxies from the ATLAS$^{\rm 3D}$ survey, comprising early-type galaxies with masses measured via dynamical models \citep{2013MNRAS.432.1709C} and spiral galaxies from the parent sample. The mass estimates for the latter are based on their $K_s$ magnitudes, following eq.~(2) of \citet{2013ApJ...778L...2C}. As already remarked by several authors \citep[e.g.][]{2016ApJ...831..134V}, galaxies with measured M$_{\rm BH}$ occupy a special place in this parameter space, typically being the most massive for a given radius and the smallest for a given mass. The inclusion of galaxies with M$_{\rm BH}$ measured using reverberation mapping, or galaxies with central masers, significantly extends the distribution towards the low mass and low velocity dispersion regimes. The available M$_{\rm BH}$ determinations are also biased towards massive galaxies: many black hole masses have been measured in massive galaxies (e.g. $>5\times10^{11}$ M$_\odot$), where only a few such galaxies exist in the ATLAS$^{\rm 3D}$ volume limited sample. The black hole sample, while still relatively small and non-representative of the general galaxy population, spans a large range in the effective velocity dispersion (70 - 300 km/s), and is appropriate for the following analysis. We note that the results in this paper do not depend on the details of the photometric parameters, and we address this in Section~\ref{ss:2mass}.
\section{Black hole masses on the mass - size diagram}
\label{s:mass-size}
\subsection{Photometry from van den Bosch (2016)}
\label{ss:phot}
In the left panel of Fig.~\ref{f:ms2} we plot the mass - size diagram for galaxies with measured black hole masses. Now we also add M$_{\rm BH}$ as a third dimension, shown by the colour. The data (M$_{\rm BH}$) are adaptively smoothed using the Locally Weighted Regression \citep[LOESS,][]{Clev:1979}. As shown by \citet{Clev:Devl:1988}, who also generalised the method to two dimensions, LOESS increases the visual information of a scatterplot and can be used for data exploration, such as uncovering underlying trends which would be easier to observe in a much larger sample. In practice, we use the two-dimensional LOESS algorithm of \citet{Clev:Devl:1988}, as implemented in \citet{2013MNRAS.432.1862C}\footnote{Available from https://purl.org/cappellari/software}. We adopt a linear local approximation and a regularising factor f=0.5. To deal with the different scales of the axes (log($R_e$) and log(M$_\ast$)), the software performs a robust estimation of the moment of inertia of the distribution and then performs a change of coordinates to transform the inertia ellipse into a circle. We assign a constant fractional error to all black holes and do not use the tabulated uncertainties on black hole masses, as they differ greatly from case to case and generally ignore systematic uncertainties, which dominate the error budget. We can, however, confirm that the conclusions of this work do not change when using the reported uncertainties to weight the linear regression of the LOESS method.
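A minimal version of the two-dimensional LOESS smoothing can be sketched as follows. This is a simplification of the implementation used here, not that code itself: the axes are simply standardised rather than rotated to the inertia ellipse, and no robust re-weighting iterations are performed.

```python
import numpy as np

def loess_2d(x, y, z, frac=0.5):
    """Minimal 2D LOESS sketch: for each point, fit a weighted linear
    plane z ~ 1 + x + y to its `frac`*N nearest neighbours, with tricube
    weights, and evaluate the fit at that point."""
    pts = np.column_stack([(x - x.mean()) / x.std(),
                           (y - y.mean()) / y.std()])
    n = len(z)
    m = max(int(frac * n), 3)            # neighbours used per local fit
    z_smooth = np.empty(n)
    for i in range(n):
        d = np.linalg.norm(pts - pts[i], axis=1)
        idx = np.argsort(d)[:m]
        w = (1 - (d[idx] / d[idx].max()) ** 3) ** 3   # tricube kernel
        # weighted least-squares linear fit on the m nearest neighbours
        A = np.column_stack([np.ones(m), pts[idx]]) * np.sqrt(w)[:, None]
        coef, *_ = np.linalg.lstsq(A, z[idx] * np.sqrt(w), rcond=None)
        z_smooth[i] = coef[0] + coef[1] * pts[i, 0] + coef[2] * pts[i, 1]
    return z_smooth
```

With the linear local approximation, a noiseless linear trend is recovered exactly; on real data the local fits average out the scatter in log M$_{\rm BH}$.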
The most striking feature of these plots is a change in black hole mass (LOESS smoothed and denoted by colour) from low to high (blue to red), which closely follows the diagonal lines of constant (virial) velocity dispersion. This is, of course, expected from the M$_{\rm BH} - \sigma$ relation, but the lines of constant velocity dispersion ($\sigma$) are actually predicted by the virial mass estimator $R_e=G\times$M$_\ast/(5\times \sigma^2)$, where G is the gravitational constant \citep{2006MNRAS.366.1126C}. To make the trend more obvious, we also interpolate across the region spanned by the galaxies from our sample, predicting the M$_{\rm BH}$ at every position in this plane. This results in a coloured surface plot as the background of the right panel of Fig.~\ref{f:ms2}. The interpolation was done based on the LOESS smoothed black hole masses of the sample galaxies.
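The iso-$\sigma$ lines follow directly from this estimator. A minimal numerical sketch, using the standard value of $G$ in units of pc\,M$_\odot^{-1}$\,(km\,s$^{-1}$)$^2$:

```python
import numpy as np

G = 4.301e-3  # gravitational constant in pc Msun^-1 (km/s)^2

def sigma_virial(mstar, re_kpc):
    """Velocity dispersion (km/s) implied by the virial estimator
    M_* = 5 sigma^2 R_e / G, with M_* in Msun and R_e in kpc."""
    return np.sqrt(G * mstar / (5.0 * re_kpc * 1e3))

def re_at_sigma(mstar, sigma):
    """Effective radius (kpc) along a line of constant sigma:
    R_e = G M_* / (5 sigma^2)."""
    return G * mstar / (5.0 * sigma**2) / 1e3
```

For example, at M$_\ast = 2\times10^{11}$ M$_\odot$ and $\sigma = 200$ km/s this gives $R_e \approx 4.3$ kpc.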
In addition to highlighting the underlying trends, the LOESS method provides a nonparametric regression to a surface, which has the following significance: by using the LOESS smoothing we are, essentially, producing a non-parametric surface, which is defined locally. For the present purpose of investigating the relation between M$_{\rm BH}$ and other galaxy properties, this differs from what has been done previously. Other searches focused on defining a plane within the three-dimensional space of black hole mass, galaxy size and mass (or other parameters) \citep[e.g.][]{2003ApJ...589L..21M, 2016ApJ...818...47S, 2016ApJ...831..134V}. We are, however, not looking for one plane that best describes all parameters, but allow for possible local bends of the plane, i.e. a change in the orientation of the plane within the space defined by $R_e$, M$_\ast$ and M$_{\rm BH}$. This LOESS fitted (non-parametric) surface is essentially shown by the contours in the right panel of Fig.~\ref{f:ms2}.
\citet{2013MNRAS.432.1862C} showed that the lines of constant velocity dispersion trace the mass concentration and the mass density (or bulge mass fraction) of galaxies below M$_{\rm crit} \approx 2\times10^{11}$ M$_\odot$. Therefore, Fig.~\ref{f:ms2} shows that M$_{\rm BH}$ behaves similarly to a variety of galaxy properties linked to the stellar populations, such as strength of H$\beta$ and Mg$b$ absorption, optical colour, molecular gas fraction, dynamical M/L, initial mass function normalisation, age, metallicity and $\alpha$-element abundance. This is summarised in \citet{2016ARA&A..54..597C}, using results from \citet{2011MNRAS.414..940Y}, \citet{2011ApJS..193...29A}, \citet{2013MNRAS.432.1709C}, \citet{2013MNRAS.432.1862C} and \citet{2015MNRAS.448.3484M}. We urge the reader to compare our Fig.~\ref{f:ms2} with fig. 22 from \citet{2016ARA&A..54..597C}. It is striking that for the majority of galaxies, their black hole masses follow the same trend as galaxy properties arising from star formation. This implies that the black hole growth is strongly related to the growth of the galaxy's stellar populations.
The adaptively smoothed mass-size diagram reveals another striking characteristic. At some point above $\approx2\times10^{11}$ M$_\odot$, the black hole masses cease to follow closely the lines of constant velocity dispersion. The iso-colour lines (the lines of constant M$_{\rm BH}$) change in slope from one that is the same as that of the iso-$\sigma$ lines to a steeper one, more closely following the increase in mass. The change is gradual and subtle. It can be seen by following the change in the colour along the lines of constant velocity dispersion, for example for $\sigma = 200$ or 250 km/s. Along those lines, the symbol colours change from yellow to orange (for $\sigma=200$ km/s) and from orange to red (for $\sigma=250$ km/s) with increasing mass. This effect is more visible when comparing the lines of constant velocity dispersion with the coloured contours\footnote{Note that for $\sigma<70$ km/s there is also a change in the shape of the contours, but this effect is based on 3-4 galaxies at the edge of the distribution and is not robust.} in the right panel of Fig.~\ref{f:ms2}.
The effect indicates that beyond a certain galaxy mass the black hole masses no longer follow only the changes in velocity dispersion, but also the changes in galaxy mass. The detection of this transition is remarkable, especially when one considers that the galaxies for which M$_{\rm BH}$ does not seem to follow the iso-$\sigma$ lines closely span only a factor of about 3-4 in galaxy mass. At a given velocity dispersion in that mass range, the observed range of black hole masses is approximately an order of magnitude, but taking into account a realistic factor of two uncertainty in black hole masses \citep[i.e. depending on the type of data and type of models used,][]{2006MNRAS.370..559S, 2009MNRAS.399.1839K, 2010MNRAS.401.1770V,2011ApJ...729..119G,2013ApJ...770...86W}, it is not surprising that the effect is marginal and difficult to see in the current data. Furthermore, the visualisation of the effect is hindered by the increasing closeness of the iso-$\sigma$ lines and the scarcity of galaxies with masses M$_\ast>10^{12}$ M$_\odot$ and sizes $R_e>20$ kpc, as will be discussed later. Nevertheless, we will attempt to reproduce the effect with a simple model in the next section, but we first look for it using independently determined luminosities and sizes of galaxies.
\subsection{Photometry from 2MASS All Sky Extended Source Catalog}
\label{ss:2mass}
\begin{figure*}
\includegraphics[width=0.49\textwidth]{figure3a.pdf}
\includegraphics[width=0.49\textwidth]{figure3b.pdf}
\caption{Mass - size diagrams as in Fig.~\ref{f:ms2}, showing galaxies with measured black hole masses, with the galaxy luminosities and sizes obtained from the 2MASS All Sky Extended Source Catalog. The conversion to mass was achieved following two prescriptions, based either on the mass - luminosity relation of \citet{2013ApJ...778L...2C} or on the mass-to-light ratio - velocity dispersion relation of \citet{2016ARA&A..54..597C}, shown in the left and right panels, respectively (see text for details). The symbols indicate the location of the galaxies in the mass - size plane. Their black hole masses were LOESS smoothed and then interpolated into the continuous colour surface, showing the variation of the black hole masses in the mass - size plane, within the range given on the colour-bars. The red solid line is the ZOE. Two galaxies that fall strongly below the ZOE, due to the contribution of the active nucleus to the total luminosity, were excluded from the LOESS fit. Diagonal dashed lines are lines of constant velocity dispersion. In both panels it is possible to see the change in the M$_{\rm BH}$ correlation from the velocity dispersion to the mass, but the changes are not identical. This indicates a systematic uncertainty in the recovery of this effect, arising from the choice of photometry and the conversion to mass.}
\label{f:2mass}
\end{figure*}
The location of galaxies on the mass - size plot depends on the global parameters, which for this study were obtained from \citet{2016ApJ...831..134V}. That study used 2MASS \citep{2000AJ....119.2498J} imaging and derived its own sizes and total K$_s-$band luminosities for all galaxies, parameterising the surface brightness with \citet{1968adga.book.....S} profiles and using the growth curve approach. One of the reasons for this is that the 2MASS catalog values are typically found to underestimate the actual sizes and luminosities. Furthermore, \citet{2016ApJ...831..134V} also used a detailed parametrisation of the point-spread function to account for the active galactic nuclei. Here we show that it is possible to qualitatively recover the same results by using 2MASS All Sky Extended Source Catalog \citep[XSC,][]{2000AJ....119.2498J,2006AJ....131.1163S} data as they are.
We query the XSC for the sizes and extrapolated total magnitudes of the galaxies from Table 2 of \citet{2016ApJ...831..134V}. Of the 230 galaxies, the search in the XSC returned 228 sources, from which we removed galaxies with upper limits on the black hole masses. We follow \citet{2013ApJ...778L...2C} and use the major axis of the isophote enclosing half of the total galaxy light in the J-band (XSC keyword j\_r\_eff), which has a better S/N than the K$_s-$band equivalent. Following \citet{2013MNRAS.432.1709C}, we define the size as $R_e = 1.61\times \rm j\_r\_eff$ and use the distances from \citet{2016ApJ...831..134V} to convert to physical units.
Galaxy stellar masses (M$_\ast$) are estimated from the total magnitude (XSC keyword k\_m\_ext) in two different ways, based on the mass - luminosity and the mass-to-light ratio - velocity dispersion relations. In {\it Approach 1}, we used the prescription from eq.~(2) of \citet{2013ApJ...778L...2C}, which relates the K$_s-$band absolute magnitude to the stellar mass and is calibrated on the ATLAS$^{\rm 3D}$ sample of early-type galaxies, given their 2MASS K$_s-$band magnitudes and masses from \citet{2013MNRAS.432.1709C}. In {\it Approach 2}, we estimated the M/L ratio using eq.~(24) of \citet{2016ARA&A..54..597C} and the velocity dispersions from the compilation of \citet{2016ApJ...831..134V}. The obtained M/L is then multiplied by the $K$-band luminosity of each galaxy $L_K$, which was obtained from the 2MASS magnitudes using the absolute magnitude of the Sun in the K-band (M$_{\odot,K} = 3.29$) from \citet{2007AJ....133..734B}, as well as the distances from \citet{2016ApJ...831..134V}. These two approaches produce similar stellar masses, with a mean difference of the logarithms of 0.07 and a standard deviation of less than 0.1.
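The unit conversions above can be sketched in a few lines. This is an illustrative outline only, assuming the standard distance modulus and the small-angle approximation; the mass - luminosity coefficients are quoted as commonly cited for eq.~(2) of \citet{2013ApJ...778L...2C} and should be verified against that paper before quantitative use:

```python
import numpy as np

ARCSEC_TO_RAD = np.pi / (180.0 * 3600.0)
M_SUN_K = 3.29  # absolute K-band magnitude of the Sun (Blanton & Roweis 2007)

def effective_radius_kpc(j_r_eff_arcsec, dist_mpc):
    """Size from the XSC j_r_eff keyword: R_e = 1.61 x j_r_eff, in kpc."""
    r_arcsec = 1.61 * j_r_eff_arcsec
    return r_arcsec * ARCSEC_TO_RAD * dist_mpc * 1e3  # small-angle, Mpc -> kpc

def luminosity_K(k_m_ext, dist_mpc):
    """K-band luminosity (in L_sun) from the XSC total magnitude k_m_ext."""
    abs_mag = k_m_ext - 5.0 * np.log10(dist_mpc * 1e5)  # distance modulus
    return 10.0 ** (0.4 * (M_SUN_K - abs_mag))

def log_mass_approach1(k_m_ext, dist_mpc):
    """Approach 1: log10 stellar mass from the K-band mass - luminosity
    relation (coefficients quoted from memory for eq. 2 of Cappellari
    et al. 2013b; check against the original)."""
    abs_mag = k_m_ext - 5.0 * np.log10(dist_mpc * 1e5)
    return 10.58 - 0.44 * (abs_mag + 23.0)
```

Approach 2 would replace the last step with the mass-to-light ratio predicted from the velocity dispersion, multiplied by `luminosity_K`.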
In Fig.~\ref{f:2mass} we show the mass - size diagrams, equivalent to the right-hand panel of Fig.~\ref{f:ms2}, but now using the sizes and the two different mass estimates described above. We perform the LOESS fit on both distributions and compare them side by side. We exclude from the fit two galaxies which fall significantly below the ZOE\footnote{These are Ark120 and Mrk0509, both known active galaxies with black hole estimates based on reverberation mapping. Their 2MASS luminosities are likely biased.}. The circles show the distribution of the galaxies with measured black hole masses, while the underlying coloured surface represents the continuous variation of the black hole masses interpolated between the LOESS smoothed M$_{\rm BH}$ values.
The panels in Fig.~\ref{f:2mass} differ from the right-hand panel of Fig.~\ref{f:ms2}, as the galaxies have different mass and size measurements. However, the panels in Fig.~\ref{f:2mass} show the same trends seen in Fig.~\ref{f:ms2}. M$_{\rm BH}$ values closely follow the lines of constant velocity dispersion for galaxies with masses below a few times $10^{11}$ M$_\odot$, following the behaviour of other properties of galaxies related to star formation \citep[as in fig.~22 of][]{2016ARA&A..54..597C}. For the highest mass galaxies the black hole masses deviate from the tight relation with the velocity dispersion\footnote{The deviation is also present for $\sigma<70$ km/s in the right panel, but it is due to the lack of data points for robust interpolation.}. The departure from a relation with velocity dispersion occurs above a mass of a few times $10^{11}$ M$_\odot$. It is remarkable that by using the catalog values as they are one can produce a plot qualitatively similar to Fig.~\ref{f:ms2}. This adds weight to the robustness of the result presented in Section~\ref{ss:phot}.
For completeness of this section, and to provide ready values for readers interested in using the XSC catalog, we present the best fit relation between M$_{\rm BH}$, the stellar mass M$_\ast$ and the effective radius $R_e$. We use the {\it Approach 1} values shown in the left panel of Fig.~\ref{f:2mass}, based on the compilation of M$_{\rm BH}$ from \citet{2016ApJ...831..134V} and M$_\ast$ estimated using the \citet{2013MNRAS.432.1709C} prescription, which relies only on the XSC values and the distances from \citet{2016ApJ...831..134V}. We fit a relation of the form:
\begin{equation}
\label{eq:2mass}
\begin{split}
\log \bigg( \frac {M_{\rm BH}}{M_\odot} \bigg) = a + b \log \bigg(\frac{M_\ast}{10^{11} M_\odot} \bigg) + c \log \bigg(\frac{R_e}{5\,{\rm kpc}} \bigg).
\end{split}
\end{equation}
\noindent The fit was performed using the least trimmed squares fitting method LTS\_PLANEFIT\footnote{Available at http://purl.org/cappellari/software} of \citet{2013MNRAS.432.1709C}, which incorporates the Least Trimmed Squares robust technique of \citet{ROUSSEEUW2006} into a least-squares fitting algorithm that allows for errors in all variables and intrinsic scatter. We used the tabulated errors for M$_{\rm BH}$ from \citet{2016ApJ...831..134V}, while for the galaxy sizes we follow \citet{2013MNRAS.432.1709C} in assuming 10 per cent errors, and we approximate the uncertainty on the mass to be of the order of 10 per cent. The fit is shown in Fig.~\ref{f:fits}. As a consistency check, we also fitted the relation from {\it Approach 2}, where M$_\ast$ is obtained using eq.~(24) of \citet{2016ARA&A..54..597C} and the velocity dispersion compilation from \citet{2016ApJ...831..134V}. The results, given in Table~\ref{t:fits}, are mutually consistent. Both fits are also consistent with the galaxy mass - size - black hole mass relation of \citet{2016ApJ...831..134V}.
The scatters reported in Table~\ref{t:fits} are somewhat higher than those from recent detailed studies of the scaling relations \citep{2016ApJ...831..134V,2016ApJ...818...47S}. This is expected, as we did not perform any cleaning of the XSC values and used approximate errors. It is actually remarkable that the best fit parameters are consistent with estimates based on different photometry, and that even the scatters are comparable. Although the {\it Approach 2} fit has a smaller scatter, we nevertheless recommend using the {\it Approach 1} relation, as it does not depend on the still rather uncertain velocity dispersion measurements available for galaxies with black hole masses.
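As an illustration, the {\it Approach 1} plane can be evaluated directly from the coefficients in Table~\ref{t:fits}. In this sketch we read the left-hand side of eq.~\ref{eq:2mass} as $\log_{10}({\rm M}_{\rm BH}/{\rm M}_\odot)$, the normalisation consistent with the best-fitting $a\approx7.66$; the function name is ours:

```python
import numpy as np

# Approach 1 best-fitting coefficients from Table (t:fits)
A, B, C = 7.66, 2.7, -2.9

def log_mbh_predicted(mstar_msun, re_kpc):
    """Predicted log10(M_BH / M_sun) from the best-fitting plane,
    with pivots of 1e11 M_sun in stellar mass and 5 kpc in size."""
    return A + B * np.log10(mstar_msun / 1e11) + C * np.log10(re_kpc / 5.0)
```

At fixed size a larger stellar mass raises the prediction, while at fixed mass a larger $R_e$ (i.e. a lower virial velocity dispersion) lowers it, as expected from the signs of $b$ and $c$.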
\begin{figure}
\includegraphics[width=0.5\textwidth]{figure4.pdf}
\caption{Black hole mass - stellar mass - effective size relation with the best fit of the form given by eq.~\ref{eq:2mass}. Photometric properties are taken from the 2MASS XSC catalog and the mass is estimated using the mass - luminosity relation from \citet[][eq. (2)]{2013MNRAS.432.1709C} and distances from \citet{2016ApJ...831..134V}. Best fit parameters are given in the legend, where $\Delta$ is the root-mean-square ({\it rms}) scatter of the fit and the intrinsic scatter around the M$_{\rm BH}$ axis is given by $\epsilon_z$. Dashed and dotted lines are one and two times the {\it rms} scatter, respectively. Green symbols are data points rejected during the fit. }
\label{f:fits}
\end{figure}
\begin{table}
\caption{Black hole scaling relations based on 2MASS XSC data.}
\label{t:fits}
$$
\begin{array}{c c c c c c }
\hline
\noalign{\smallskip}
$fit$ & a &b & c &\epsilon_z & \Delta\\
\noalign{\smallskip} \hline \hline \noalign{\smallskip}
$Approach $ 1 & 7.66 \pm 0.06 & 2.7 \pm 0.2 & -2.9 \pm 0.3 & 0.64 \pm 0.04 & 0.7\\
$Approach $ 2 & 7.77 \pm 0.05 & 2.6 \pm 0.2 & -2.7 \pm 0.3 & 0.54 \pm 0.04 & 0.6\\
\noalign{\smallskip}
\hline
\end{array}
$$
{Notes -- The form of the relation is given in eq.~\ref{eq:2mass}, while $\epsilon_z$ is the intrinsic scatter around the M$_{\rm BH}$ axis of the plane and $\Delta$ is the root-mean-square scatter. For {\it Approach 1}, the stellar mass was estimated using the XSC k\_m\_ext keyword, distances from \citet{2016ApJ...831..134V} and the mass - luminosity relation from \citet[][eq. (2)]{2013MNRAS.432.1709C}. For {\it Approach 2}, the stellar mass was estimated using the mass-to-light ratio - velocity dispersion relation from \citet[][eq. (24)]{2016ARA&A..54..597C}, the velocity dispersion compilation from \citet{2016ApJ...831..134V} and the luminosities of galaxies, converted from the XSC k\_m\_ext values using the absolute magnitude of the Sun in the K-band (M$_{\odot,K} = 3.29$) \citep{2007AJ....133..734B} and distances from \citet{2016ApJ...831..134V}. For both fits, sizes were estimated from the XSC keyword j\_r\_eff, as $R_e = 1.61\times \rm j\_r\_eff$.
}
\end{table}
\section{Discussion}
\label{s:discs}
\begin{figure*}
\includegraphics[width=0.49\textwidth]{figure5a.pdf}
\includegraphics[width=0.49\textwidth]{figure5b.pdf}
\caption{A toy model simulation of M$_{\rm BH}$ on the mass - size plane (left) and the same model with over-plotted LOESS smoothed black hole masses (right). The toy model M$_{\rm BH}$ are based on a simple prescription in which black hole masses are calculated from the M$_{\rm BH} - \sigma$ relation for galaxies less massive than $2\times10^{11}$ M$_\odot$, and from an M$_{\rm BH} - \sigma$ relation modulated by the galaxy mass for more massive galaxies (see eqs.~\ref{eq:sig} and~\ref{eq:mass}). The colour scale of the model is limited to the same range as the LOESS smoothed black hole masses of the sample galaxies, as shown on the colour bars. The red solid lines show the ZOE. Diagonal dashed lines are lines of constant velocity dispersion. We also restricted the model to $\sigma>70$ km/s, as there are no galaxies in that region to compare with.}
\label{f:ms3}
\end{figure*}
\subsection{A toy model for evolution of black hole masses}
\label{ss:toy}
Here we want to understand what kind of signature we should expect to observe, based on simple assumptions about the growth of supermassive black holes. We construct a toy model based on the assumption that below a critical mass of $2\times10^{11}$ M$_\odot$ the black hole mass can be predicted from the M$_{\rm BH} - \sigma$ relation, while above this mass the M$_{\rm BH} - \sigma$ relation is modulated by an additional term depending explicitly on the stellar mass. Our assumption follows the physically motivated distinction in black hole growth. Galaxies with masses less than $2\times10^{11}$ M$_\odot$ follow the channel of gas accretion, bulge growth and quenching, while more massive galaxies are descendants of galaxies formed in intense star-bursts at high redshift and have since grown through a channel of dissipation-less (dry) merging.
We construct a toy model by defining a galaxy sample in the mass - size plane, spanning between 0.5 and 30 kpc in effective radius and between $10^9$ and $3\times10^{12}$ M$_\odot$ in mass, but restricted to lie above the ZOE. This distribution of points is similar to the one in Fig.~\ref{f:ms1}, including spiral galaxies, which occupy the large size - low mass part of the plane. The velocity dispersion of each galaxy is estimated using the virial relation $\sigma_e^2 = (M_\ast \times G)/(5\times R_e)$ \citep{2006MNRAS.366.1126C}. For each galaxy, depending on its stellar mass, we then calculate the black hole mass using the relations:
\begin{equation}
\label{eq:sig}
\begin{split}
\log \bigg( \frac {M_{\rm BH}}{M_\odot} \bigg) = \alpha + \beta \log \bigg(\frac{\sigma_e}{200 {\rm km/s}} \bigg), \\
{\rm for }~M_{\ast} < 2\times10^{11} M_\odot,
\end{split}
\end{equation}
and
\begin{equation}
\label{eq:mass}
\begin{split}
\log \bigg( \frac {M_{\rm BH}}{M_\odot} \bigg) = \alpha + \beta \log \bigg(\frac{\sigma_e}{200 {\rm km/s}} \bigg) +\log \bigg(\frac{M_\ast}{2\times10^{11} M_\odot} \bigg),\\
{\rm for}~M_{\ast} > 2\times10^{11} M_\odot
\end{split}
\end{equation}
\noindent where $\alpha=8.22$ and $\beta=5.22$ are taken from the M$_{\rm BH} - \sigma$ relation for all non-barred galaxies from \citet{2013ApJ...764..151G}. The exact choice of the M$_{\rm BH} - \sigma$ relation is not important. As we only assume that there is a trend with the velocity dispersion for low mass galaxies, we use a relation fitted to galaxies with typically low velocity dispersions and masses. In our toy model, M$_{\rm BH}$ varies smoothly from being $\sigma$ dominated to being M$_\ast$ dominated. Galaxies with masses less than $2\times10^{11}$ M$_\odot$ are fully in the regime of the M$_{\rm BH} - \sigma$ dependence. As the stellar velocity dispersion saturates above 300 km/s, the stellar mass eventually becomes the dominant contributor, but this happens significantly only for galaxies of the highest masses.
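In practice, the toy model amounts to only a few lines of code. A minimal sketch, using $G = 4.301\times10^{-6}$ kpc\,km$^2$\,s$^{-2}$\,M$_\odot^{-1}$ for the virial estimate and the values of $\alpha$, $\beta$ and M$_{\rm crit}$ quoted above (function names are ours):

```python
import numpy as np

G = 4.301e-6          # gravitational constant in kpc (km/s)^2 / M_sun
ALPHA, BETA = 8.22, 5.22  # M_BH - sigma zero-point and slope used in the text
M_CRIT = 2e11         # critical stellar mass in M_sun

def sigma_virial(mstar_msun, re_kpc):
    """Virial estimate sigma_e = sqrt(G M / (5 R_e)), in km/s."""
    return np.sqrt(G * mstar_msun / (5.0 * re_kpc))

def log_mbh_toy(mstar_msun, re_kpc):
    """Toy-model log10(M_BH/M_sun): pure M_BH - sigma below M_CRIT
    (eq. sig), modulated by the stellar mass above it (eq. mass)."""
    log_mbh = ALPHA + BETA * np.log10(sigma_virial(mstar_msun, re_kpc) / 200.0)
    if mstar_msun > M_CRIT:
        log_mbh += np.log10(mstar_msun / M_CRIT)
    return log_mbh
```

Because the extra term vanishes at M$_\ast =$ M$_{\rm crit}$, the prescription is continuous across the transition, consistent with the gradual change described in the text.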
We plot the results in Fig.~\ref{f:ms3}, showing our model predictions in the mass - size plane. The colour of the contours specifies the variation of black hole masses. We limit the colour scale to the range of the LOESS smoothed values of the sample galaxies, which are over-plotted as coloured circles in the right panel. The colours specified by the model follow a trend similar to that in Fig.~\ref{f:ms2}. For low mass galaxies, the black hole mass follows the lines of constant velocity dispersion, but black hole masses in galaxies with M$_\ast>2\times10^{11}$ M$_\odot$ and $\sigma>200$ km/s start departing from the constant velocity dispersion lines, being higher than expected from the pure M$_{\rm BH} -\sigma$ relation. The predictive power of our toy model with no free parameters is surprising. It is best seen when the model contours are compared with the LOESS smoothed black hole masses of the sample galaxies (right panel of the figure), which closely follow the changes in the underlying colour (M$_{\rm BH}$) of the toy model.
The toy model also justifies the choice of the critical mass M$_{\rm crit}= 2\times10^{11}$ M$_\odot$ for the transition between the models. The choice is primarily physically motivated, as this mass separates core-less fast rotators from core slow rotators \citep{2013MNRAS.433.2812K,2013MNRAS.432.1862C} and, therefore, it is the mass above which dissipation-less mergers (and likely black hole mergers) start dominating the galaxy (black hole) evolution. The change is gradual, reflecting the dominant contribution of the velocity dispersion even for galaxies above M$_{\rm crit}= 2\times10^{11}$ M$_\odot$, but also the expectation that only the most massive galaxies will experience a sufficient number of major dry mergers to significantly modify their black hole masses via direct black hole mergers.
Fig.~\ref{f:ms3} also shows why the dependence of M$_{\rm BH}$ on M$_\ast$ at high masses is difficult to detect. As noted before, the current sample of galaxies with measured black hole masses traces only the edges of the galaxy distribution. The large region populated by spiral galaxies and low mass fast rotators is poorly explored. As it is not constrained, we limit the prediction of the toy model to $\sigma> 70$ km/s.
Two regions are, however, particularly significant for the detection of the bend in the behaviour of M$_{\rm BH}$ in the mass - size plane. The first is centred on large ($R_e>10$ kpc) and relatively less massive galaxies, around the line of constant velocity dispersion of about 150 km/s. In the nearby universe, where black hole masses can be measured with dynamical methods, these galaxies are rare and are typically spirals, as shown by the ATLAS$^{\rm 3D}$ Survey (see Fig.~\ref{f:ms1}). The rarity of these systems currently excludes a direct comparison with the proposed model, but further determination of black hole masses using molecular gas kinematics \citep[e.g.][]{2013Natur.494..328D, 2016ApJ...822L..28B, 2017MNRAS.468.4663O, 2017MNRAS.468.4675D} offers a possible route to exploring this range.
The second region is the branch of the most massive galaxies, with sizes of 20 kpc or more, masses in excess of $10^{12}$ M$_\odot$ and $\sigma>250$ km/s. These galaxies are also rare in the nearby universe, but they are present in the form of brightest cluster galaxies or cD galaxies. In order to improve on the current description of the M$_{\rm BH}$ dependence, more galaxies of the highest masses and largest sizes need to be probed \citep[e.g.][]{2011Natur.480..215M, 2012ApJ...756..179M, 2016Natur.532..340T}, preferably through dedicated surveys \citep[e.g.][]{2014ApJ...795..158M}.
As mentioned above, the change in the M$_{\rm BH}$ dependence implies that the plane defined by M$_{\rm BH}$, M$_\ast$ and R$_e$ has a change of curvature at high masses. Its significance can be tested by comparing the scatter from a fit defined by eqs.~\ref{eq:sig} and \ref{eq:mass} with that from a standard M$_{\rm BH} - \sigma$ regression. We used a least squares method to fit a linear regression to eq.~\ref{eq:sig} for all galaxies plotted in Fig.~\ref{f:ms2}, regardless of their mass. We then repeated the fit using eqs.~\ref{eq:sig} and~\ref{eq:mass}, taking into account the mass dependence as described by the equations. We did not take into account the observed uncertainties on any variable, as our intention was not to find the best fit relation, but simply to check whether the scatter of the residuals decreases. Comparing the standard deviations of the residuals of the fits to these two equations, we found no improvement going from the first ($\Delta=0.533$) to the second ($\Delta=0.532$) fit. Equivalent results are obtained if, instead of fitting, we use the values of $\alpha$ and $\beta$ derived in \citet{2016ApJ...831..134V}. Therefore, with the current database of black hole masses, the model described by eqs.~\ref{eq:sig} and~\ref{eq:mass} is not necessarily warranted in terms of providing a tighter correlation. This is not surprising given that, especially for lower mass galaxies, the total galaxy mass is not always found to be the best predictor of the black hole mass \citep[e.g.][]{2011Natur.469..374K,2016ApJS..222...10S,2016ApJ...818...47S}, although decomposing galaxies and using the masses of individual components is difficult and uncertain \citep[e.g.][]{2014ApJ...780...70L, 2016ApJS..222...10S}. \citet{2016ApJ...818...47S} find that introducing a bivariate correlation with bulge mass and velocity dispersion reduces the overall scatter, even when core ellipticals are removed from the sample.
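Because the mass term in eq.~\ref{eq:mass} enters with a fixed unit slope, the second fit reduces to an ordinary linear regression after subtracting that term. A minimal sketch of the scatter comparison follows; the function names are ours and any input arrays are purely illustrative:

```python
import numpy as np

M_CRIT = 2e11  # critical stellar mass in M_sun

def mass_term(mstar_msun):
    """The fixed, unit-slope mass modulation of eq. (mass); zero below M_CRIT."""
    m = np.asarray(mstar_msun, dtype=float)
    return np.where(m > M_CRIT, np.log10(m / M_CRIT), 0.0)

def residual_scatters(log_mbh, log_sig200, mstar_msun):
    """Standard deviations of residuals for (1) a pure M_BH - sigma linear
    fit and (2) the same fit after removing the fixed mass term."""
    c1 = np.polyfit(log_sig200, log_mbh, 1)
    s1 = (log_mbh - np.polyval(c1, log_sig200)).std()
    y = log_mbh - mass_term(mstar_msun)
    c2 = np.polyfit(log_sig200, y, 1)
    s2 = (y - np.polyval(c2, log_sig200)).std()
    return s1, s2
```

Applied to the observed sample, the two returned scatters correspond to the $\Delta$ values compared in the text.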
On the other hand, \citet{2016ApJS..222...10S}, by decomposing galaxies observed at 3.6 $\mu$m and using bulge masses, were able to fit different scaling relations for early- and late-type galaxies. Our model, by concentrating on the total mass, nevertheless remains physically motivated, describes the behaviour of the data in the mass - size diagram and, crucially, is easily testable with the better samples of black hole masses that will likely become available with time.
\subsection{Origin for the non-universality of the M$_{\rm BH}$ scaling relations}
\label{s:theor}
The transition of the dependence of black hole mass from velocity dispersion (following the trends of other star formation related properties) to galaxy mass supports current ideas on the growth of galaxies and black holes \citep[e.g.][]{2013ApJ...768...76S}. The main processes regulating the growth of galaxies can be separated into those related to in-situ star formation and those related to the assembly of galaxy mass by accretion of stars created elsewhere. Black holes also grow via two types of processes: accretion of gas or merging \citep{2010A&ARv..18..279V}. Accretion of material onto a black hole (e.g. gas originating from gas clouds or from the tidal disruption of passing stars) converts gravitational energy into radiation and results in active galactic nuclei, or quasars when the radiation is particularly strong. Moreover, this process influences the growth of both the black hole and the galaxy.
Current models predict that M$_{\rm BH}$ growth by accretion is proportional to $\sigma^4$ \citep{1999MNRAS.308L..39F,2005ApJ...618..569M} or $\sigma^5$ \citep{1998A&A...331L...1S, 1998MNRAS.300..817H}, establishing the relations with the host galaxy properties. The scatter of the M$_{\rm BH} - \sigma$ relation is too large to distinguish between these cases, partially due to difficulties in reducing the systematic errors in measuring black hole masses. However, at least part of the scatter comes from genuine outliers to the relation \citep[see fig. 1 of ][]{2016ApJ...831..134V}; special systems which probably did not follow the same evolution as the majority of galaxies \citep[e.g.][]{2015ApJ...808...79F}. The same reservoir of gas that fuels AGN or quasars maintains the star formation responsible for the growth of galaxies. This connection is evident from similar trends in the rates of star formation and black hole accretion with redshift \citep[e.g.][]{2004MNRAS.354L..37M,2014ARA&A..52..415M}, even though the actual growth of the black hole and the build up of the galaxy mass do not have to be concurrent, as the AGN duty cycles are relatively short and the responses to the feedback are of different duration.
Accretion models predict that black holes can reach masses of $10^{10}$ M$_\odot$, and such black holes have been detected in high redshift quasars \citep[e.g.][]{2001AJ....122.2833F,2003AJ....125.1649F,2011Natur.474..616M,2015Natur.518..512W}. While such objects could explain the population of the highest mass black holes without the need for further growth via merging, they are not very common. Furthermore, there seems to exist an upper limit to black hole growth via accretion of material \citep{2009MNRAS.393..838N}. The limit could be set by the feedback induced via mechanically or radiatively driven winds from the accretion disc around the black hole \citep{1998MNRAS.300..817H, 1998A&A...331L...1S, 1999MNRAS.308L..39F,2005ApJ...618..569M}. This means that black holes with masses in excess of these predictions, if they exist, had to continue growing through mergers.
Once the AGN activity expels all the gas, or the galaxy is massive enough and resides in a massive dark halo that hinders the cooling of gas (and its accretion onto the central black hole), star formation is shut down and the mass growth of galaxies is possible only through dry merging. The accretion of small mass satellites changes the sizes of galaxies \citep[e.g.][]{2009ApJ...699L.178N}, but a significant increase of galaxy mass is only possible through major (similar mass) mergers. Such mergers are also characterised by eventual collisions of the massive black holes residing in the progenitors, such that the fractional growth of black hole mass equals the fractional growth of galaxy mass. Still, the contribution of mergers to the total growth of black holes has to be relatively small. \citet{2007ApJ...667..813Y} showed that mergers can increase the masses of black holes by up to a factor of two, but this only happens in massive galaxies residing in galaxy clusters. \citet{2015ApJ...799..178K} confirmed this result, showing that black hole growth through mergers is only relevant for the most massive galaxies (and black holes), while accretion is the main channel of growth.
The channels of galaxy growth are essentially the same as those for black holes: one is dominated by the consumption of accreted gas in star-bursts and the other by the accretion of mass through dissipation-less mergers. Numerical simulations show that these phases of growth can be separated in time \citep{2010ApJ...725.2312O}, where the early phase is characterised by in-situ formation of stars fuelled by cold flows \citep{2005MNRAS.363....2K,2009ApJ...703..785D}, while the later phase is dominated by accretion of stars formed elsewhere \citep{2012MNRAS.425..641L}. There is also a critical dependence on the mass of the galaxy, as less massive galaxies grow mostly through in-situ star formation, while only massive galaxies grow significantly by late stellar assembly \citep{2016MNRAS.458.2371R, 2017MNRAS.464.1659Q}. This supports the postulation that these are actually separate channels and galaxies follow either one or the other \citep{2016ARA&A..54..597C}.
The emerging paradigm of the growth of galaxies can be illustrated in the mass - size diagram \citep[see fig. 29 of ][]{2016ARA&A..54..597C}. For low mass galaxies, the redshift evolution of the distribution of galaxies in the mass - size diagram \citep{2014ApJ...788...28V} is explained by an inside-out growth of small star-forming galaxies. This phase of evolution increases the stellar mass within a fixed physical radius through star formation, until the onset of quenching processes, which happen when galaxies reach a stellar density or a velocity dispersion threshold \citep{2015ApJ...813...23V}. At that moment galaxies transform from star forming spirals into fast rotator ETGs \citep{2013MNRAS.432.1862C}. This is characterised by a structural compaction of galaxies: a decrease in the size, an increase in the concentration parameter (or an increase in the S\'ersic index of the light profiles), a buildup of the bulge, and an increase in the velocity dispersion. While the quenching of star formation and the buildup of bulges (or compacting processes) could be diverse \citep[e.g.][]{2014MNRAS.444.3408Y}, the main consequence is that galaxy properties related to the star formation history vary, on average, along lines of nearly constant velocity dispersion \citep[e.g.][]{2013MNRAS.432.1862C, 2015MNRAS.448.3484M,2016ARA&A..54..597C}. Given that black hole growth is linked to the gas supply and the growth of the host, it is natural to expect that the black hole mass will closely follow the characteristic velocity dispersion of the host \citep{1998A&A...331L...1S,1999MNRAS.308L..39F}, where the details of the shape and scatter of the scaling relation depend on the details of the feeding of black holes \citep[e.g. how the gas is transported to the black hole,][]{2015ApJ...800..127A,2017MNRAS.464.2840A} and the feedback type \citep[e.g.][]{2005ApJ...618..569M}.
The evidence shown in Fig.~\ref{f:ms2} supports a close relation between star formation and the growth of black holes, as well as between the cessation of star formation and the final M$_{\rm BH}$.
The picture is somewhat different for the very massive galaxies ($M_\ast \ga 2\times10^{11}$ M$_\odot$). They occupy a special region of the mass - size diagram; they are found along a relatively narrow ``arm'' extending both in mass and size from the distribution of other ETGs \citep{2013MNRAS.432.1862C}. The narrowness of this arm is indicative of the small range in velocity dispersion that the most massive galaxies span, which is directly linked to the findings of \citet{2007ApJ...662..808L} and \citet{2007ApJ...660..267B} regarding the predictions for M$_{\rm BH}$ based on galaxy $\sigma$ or luminosity. This, in turn, provides a strong constraint on the processes that form massive galaxies: they have to increase both the mass and the size, but keep the velocity dispersion relatively unchanged. This particular property is characteristic of dissipation-less mergers of similar size galaxies \citep{1992ApJ...393..484B, 2003MNRAS.342..501N,2009ApJ...697.1290B, 2009ApJ...699L.178N}. Furthermore, as expected from such mergers, galaxies along this arm of the mass - size diagram have low angular momentum \citep{2011MNRAS.414..888E}, show no evidence for containing stellar disks \citep{2011MNRAS.414.2923K}, harbour core-like profiles \citep{2013MNRAS.433.2812K}, are made of old stars \citep{2015MNRAS.448.3484M}, and are found at the density peaks of group or cluster environments \citep{2011MNRAS.416.1680C,2013ApJ...778L...2C, 2013MNRAS.436...19H, 2014MNRAS.441..274S,2014MNRAS.443..485F,2017arXiv170401169B}. For massive galaxies, black hole growth can be linked to the growth of galaxies via dry mergers and should therefore, unlike for low mass galaxies, be less strongly dependent on the velocity dispersion.
The expectation that the growth of black holes and galaxies is connected implies that in star-formation driven build-up of galaxy mass (fuelled by direct accretion of gas or by wet, dissipational mergers) black holes grow by feeding, while in merger driven growth of galaxies (via accretion of smaller objects onto massive galaxies or major dry mergers) black holes increase their mass through coalescence with other black holes of similar size. The consequence is that at low galaxy masses the black hole mass should correlate with the galaxy velocity dispersion, while at high masses M$_{\rm BH}$ should be more closely related to the galaxy mass. The transition, however, is not sudden, and one can expect a persistence of the M$_{\rm BH} - \sigma$ relation to high galaxy masses. The reason is that high mass galaxies are not expected to experience many similar mass dry mergers \citep[a few at most,][]{2009MNRAS.397..506K}. This means that only the most massive galaxies will go through a sufficient number of black hole mergers to increase M$_{\rm BH}$ disproportionately with respect to the galaxy velocity dispersion. Therefore, the transition between the two regimes of M$_{\rm BH}$ dependence should be contingent on the galaxy mass, but it should be gradual and only visible at the highest masses, beyond the critical mass of $2\times10^{11}$ M$_\odot$.
There are challenges to this simple scenario already present in the literature. At low masses there are indications that different galaxy types follow different scaling relations with black hole mass when either bulge mass (luminosity) or effective velocity dispersion are taken into account \citep{2013ApJ...764..151G,2015ApJ...798...54G,2016ApJ...818...47S, 2016ApJS..222...10S}. At high masses, \citet{2015MNRAS.446.2330S} found that the most massive black holes reside in galaxies that seem to have undergone only a limited number of dissipation-less mergers (as measured by a proxy, the ratio between the missing light converted to mass and the black hole mass). It is, however, clear that low and high mass systems go through different evolutionary paths, which might also be more diverse at lower masses. Confirming or disproving specific scenarios requires larger samples of galaxies with reliable black hole masses at both the low and the high galaxy mass range.
\section{Conclusions}
\label{s:con}
We used a recent compilation of black hole measurements, enhanced by a uniform determination of their sizes and total K-band luminosities, to show the variation of black hole masses in a mass - size diagram. As shown by previous studies \citep{2016ApJ...818...47S, 2016ApJ...831..134V}, black hole masses can be predicted from a combination of M$_\ast$ and R$_e$. In this study we show two additional characteristics of black hole masses. Firstly, black hole mass closely follows the changes in effective velocity dispersion in the mass - size plane, a behaviour similar to that of almost all galaxy properties linked with star formation \citep{2016ARA&A..54..597C}. Secondly, there is tentative evidence that at higher masses (above $\approx2\times 10^{11}$ M$_\odot$) the black hole mass is progressively more correlated with the galaxy mass than with the velocity dispersion.
We consider a physically motivated toy model in which black holes below a critical galaxy mass grow by accretion and follow the M$_{\rm BH} - \sigma$ relation. Above the critical mass, black holes grow via mergers of similar-size black holes. As these mergers are enabled by major dry mergers of galaxies, the black hole growth implicitly depends on the galaxy mass. As the critical galaxy mass we choose M$_{\rm crit} = 2\times10^{11}$ M$_\odot$, which also roughly separates the regions in the mass - size plane populated by axisymmetric fast rotators and spiral galaxies from the slow rotators with cores in their central surface brightness profiles \citep{2013MNRAS.432.1862C}.
Assuming a M$_{\rm BH} - \sigma$ relation for Sersic galaxies \citep[][but other relations would give similar results]{2013ApJ...764..151G}, our toy model has no free parameters and is able to qualitatively reproduce the trend in the data. While it does not provide a relation with less scatter than the standard M$_{\rm BH} - \sigma$ relation, it is physically motivated by the current paradigm of galaxy formation. The most massive galaxies, such as the central galaxies in groups and clusters, evolve through a different process than the bulk of the galaxy population. Namely, they experience multiple dissipation-less mergers, of which some (a few) are major and responsible for an equal increase of galaxy and black hole masses (through black hole binary mergers), while the stellar velocity dispersion remains unchanged. The consequence is a departure of black hole masses from the M$_{\rm BH} - \sigma$ relation for massive galaxies, in particular brightest cluster galaxies and massive slow rotators (with cores). This suggests that there should be a break in the M$_{\rm BH} - \sigma$ and M$_{\rm BH} - $M$_\ast$ relations, similar to the one reported by \citet{2013ApJ...764..151G,2015ApJ...798...54G}, although the detection of the break, or the need for more than a single power-law, also depends on the choice of considered galaxies \citep[e.g. including or excluding galaxies of a certain bulge type;][]{2016ApJ...818...47S}. Irrespective of the chosen sample, the expected effect of the modulation of M$_{\rm BH}$ is small, as galaxies with a suitable mass assembly history are rare and dry major mergers occur infrequently.
The results presented here imply that there is no universal black hole - host galaxy scaling relation; rather, the relation depends on the channel of formation that galaxies follow. The proposed model is simple and can easily be tested, but the black hole sample will have to include a larger number of massive and large galaxies than are currently available. Upcoming facilities such as the JWST and the E-ELT will allow us to reach such objects.
\section*{Acknowledgements}
DK acknowledges support from BMBF grant no. 05A14BA1 and thanks Alister Graham for pointing out some relevant references in the literature. MC acknowledges support from a Royal Society University Research Fellowship. RMcD is the recipient of an Australian Research Council Future Fellowship (project number FT150100333). DK thanks Jakob Walcher for comments on an earlier version of the manuscript.
\section{Introduction}
\label{S:intro}
\subsection*{The general framework}
$\phantom{ab}$
\nopagebreak
Constant Mean Curvature (CMC) (hyper)surfaces in a Riemannian manifold
can be described variationally as critical points of
the induced intrinsic volume (or area in dimension two) functional,
subject to an enclosed volume constraint.
Alternatively they can be described as soap films (or fluid interfaces)
in equilibrium under only the forces of surface tension and uniform
enclosed pressure.
In both cases the geometric condition is that the mean curvature $H$
of the hypersurface is constant, as the name suggests.
Of particular interest are the complete CMC (hyper)surfaces of finite topological type
smoothly immersed in Euclidean spaces and in particular in Euclidean three-space.
The only classically known such examples were the round spheres, the cylinders,
and more generally the rotationally invariant surfaces discovered by Delaunay in 1841 \cite{Delaunay}.
Two major results were proved in the 1950's characterizing the
round two-spheres as the only closed CMC surfaces in Euclidean three-space,
under the assumption of embeddedness (by Alexandrov \cite{Alexandrov}),
or the assumption of zero genus (by Hopf \cite{Hopf}).
These results and their methods of proof had a profound influence on mathematics.
They also led to the celebrated conjecture (or question according to some)
by Hopf on whether the only immersed closed CMC surfaces in Euclidean three-space are round spheres.
In 1986 Wente disproved the Hopf conjecture by constructing genus one closed immersed examples \cite{Wente}.
At that stage the only examples of finite topological type in Euclidean three-space
were the classical ones and the Wente tori.
Following a general gluing methodology developed by Schoen \cite{schoen} and N.K. \cite{KapAnn},
and using the Delaunay surfaces as building blocks,
most of the possible finite topological types were realized as immersed (or Alexandrov embedded)
CMC surfaces for the first time \cite{KapAnn,KapJDG}.
\cite{KapJDG} in particular settled the Hopf question for high genus closed surfaces
by providing examples of any genus $g\ge3$.
In spite of its success, the use of Delaunay pieces as building blocks has the limitation that it
does not allow the construction of closed genus two CMC examples.
In \cite{KapWente} a systematic and detailed refinement of the original gluing methodology
made it possible to construct genus two (actually any genus $g\ge2$)
closed examples with the Wente tori as building blocks.
Since then, many other gluing problems have been successfully resolved by using this refined approach.
These results include gluing constructions for special Lagrangian cones \cite{HaskKap,HaskKap2,HaskKap3}
and various gluing constructions for minimal surfaces
\cite{Yang,KapYang,KapSurvey,KapClay,KapJDGd,KapI,KapII}.
It is worth pointing out that the constructions in \cite{schoen,KapAnn,KapJDG}
are quite general in two ways:
First, in that each construction is reduced to finding graphs satisfying some rather general conditions.
There is an abundance of such graphs and so a plethora of examples can be produced.
Second, in that no symmetry is required---although it can be imposed in special cases---and indeed most examples constructed do not satisfy any symmetries.
These constructions can serve then as a prototype for general constructions in other geometric settings---see \cite{KapClay,KapSurvey,KapG}.
We briefly mention that much progress has been made in
the case of embedded, or more generally Alexandrov embedded, complete CMC surfaces
of finite genus $g$ with $k$ ends.
Meeks \cite{MeeksCMC} proved that such (noncompact) surfaces have at least two ends and all their ends are cylindrically bounded.
Motivated by \cite{MeeksCMC,KapAnn},
Korevaar, Kusner, and Solomon \cite{KKS} showed that each end converges
exponentially fast to a Delaunay surface and if there are only two ends then the surface is Delaunay.
Further progress in this direction was made in \cite{KoKu1,KoKu2}
and also in understanding the moduli space of these surfaces as for example in \cite{KuMaPo}.
Moreover a significant success was that in some cases of genus zero,
complete classification results were obtained with a satisfactory understanding of the surfaces involved \cite{GBKS,GBKSII,GBKKRS}.
We briefly also mention that various constructions extended the results of \cite{KapAnn}:
Gro{\ss}e-Brauckmann \cite{GB} used a conjugate surface construction to produce genus zero examples with $k$ ends
under maximal ($k$-fold dihedral) symmetry, including examples with large neck size for the first time.
Various gluing constructions related to non-degeneracy \cite{MaPa,Ratzkin,MaPaPo,MaPaPoClay,RosCosin,JleliPacard} were developed in certain cases,
which allowed some new examples,
in particular examples with asymptotically cylindrical ends \cite{MaPaPo},
with noncatenoidal necks used as nodes instead of spheres \cite{MaPa},
and a modified construction (end-to-end gluing) of the closed CMC examples \cite{Ratzkin,JleliPacard}.
Recently the construction and estimates in \cite{KapAnn} were refined in \cite{BKLD}
by applying the improved methodology of \cite{KapWente}.
This way a large class of embedded examples was produced.
\cite{BKLD} also served as an intermediate step in developing the high-dimensional constructions presented in this article.
Contrary to the case of Euclidean three-space, very little is currently known in the case of higher-dimensional Euclidean spaces:
Rotationally invariant CMC hypersurfaces analogous to the ones found by Delaunay have been constructed \cite{Kenmotsu}.
In 1982 Hsiang \cite{Hsiang} demonstrated that the theorem of Hopf does not extend to higher dimensions by constructing immersed CMC hyperspheres that are not round.
Jleli has studied moduli spaces \cite{JleliMS} and has developed an end-to-end gluing construction
\cite{JleliE2E} which will provide new symmetric closed examples \cite{JleliCompact} when \cite{JleliToappear} appears.
He also constructed examples bifurcating from the Delaunay-like ones \cite{Jleli_bifurcate}.
Finally we briefly mention that constructions of CMC hypersurfaces have also been carried out in compact ambient
manifolds under certain metric restrictions.
Ye \cite{Ye} provided the first such example,
proving that there exists a foliation by CMC hyperspheres in a neighborhood
of a non-degenerate critical point of the scalar curvature.
Pacard and Xu \cite{PacardXu} partially extended Ye's result.
Mazzeo and Pacard extended Ye's result to geodesic tubes \cite{MazzeoPacardTubes}.
Further constructions of
CMC surfaces (two-dimensional) condensing around geodesic intervals or rays
were provided in \cite{ButscherMazzeo},
and for CMC hypersurfaces condensing around higher dimensional submanifolds
in \cite{MahMazPa}.
\subsection*{Brief discussion of the results}
$\phantom{ab}$
\nopagebreak
In this article we extend the results of \cite{KapAnn} to higher dimensions,
that is to the construction of CMC $n$-dimensional hypersurfaces in Euclidean $(n+1)$-space for $n\ge3$.
Note that although the present proof and construction work for $n=2$ with small appropriate modifications,
we restrict our attention to $n>2$ to simplify the presentation.
For the same reason we restrict our attention to the construction of CMC hypersurfaces of finite topological type.
Our constructions as in \cite{KapAnn,BKLD} are based on a suitable family of graphs $\mathcal{F}$ which consists
of small perturbations of a central graph $\Gamma$ (see \ref{FamilyDefinition}).
Our graphs have vertices, edges, rays, and nonzero weights assigned to the edges and the rays (see \ref{D:graph}).
$\Gamma$ is balanced in the sense that the resultant forces exerted on the vertices by the edges and rays vanish
(see \ref{Vpdefn} and \ref{deltadefn}), and moreover its edges have even integer lengths.
The other graphs in $\mathcal{F}$ have approximately prescribed resultant forces (unbalancing condition) and
prescribed small changes of the lengths of the edges (flexibility condition).
Given $\mathcal{F}$ and a small nonzero $\underline{\tau}$ a family of initial immersions is constructed,
where the image of each such immersion is built around a properly chosen $\Gamma'\in\mathcal{F}$,
and consists of unit spheres (with small geodesic balls removed) centered at the vertices of $\Gamma'$,
and appropriately perturbed Delaunay pieces of parameter $\underline{\tau}$ times the corresponding weight of $\Gamma'$.
We have then the following.
\begin{theorem}[Main Theorem]
Given a family of graphs $\mathcal F$, there exists $\maxT(\mathcal F) >0$ such that for all $0<|\underline{\tau}| \leq \maxT$,
there exists a $\Gamma' \in \mathcal F$ and an immersion built around $\Gamma'$ as outlined above
which admits a small graphical perturbation which has mean curvature $H\equiv1$.
Moreover the immersion is an embedding if the central graph $\Gamma$ satisfies certain conditions (see \ref{D:pre})
and $\underline{\tau}>0$.
\end{theorem}
Note that the conditions in \ref{D:pre} are the expected ones,
that is they ensure that the various pieces stay away from each other and that the Delaunay pieces are embedded.
It is easy then to realize infinitely many topological types as immersed complete CMC surfaces with $k$ ends,
where any $k\ge2$ can be given in advance.
These constructions (when no symmetries are imposed)
have $(k-1)(n+1)-\binom{n+1}2+\binom{n+1-k}2$ continuous parameters,
reflecting thus the asymptotics of the $k$ Delaunay ends.
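As a quick arithmetic check of this count (our own illustration, applying the formula above; it is not part of the original text), take $n=3$ (hypersurfaces in Euclidean four-space) with $k=3$ ends:

```latex
\[
(k-1)(n+1)-\binom{n+1}{2}+\binom{n+1-k}{2}
  = 2\cdot 4-\binom{4}{2}+\binom{1}{2}
  = 8-6+0
  = 2 ,
\]
```

so such a three-ended construction carries two continuous parameters.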
Moreover there is further great variety in the immersions of a given number of ends and topological type reflected by the central graphs $\Gamma$ we can choose.
We can restrict our attention to embedded examples.
In this case we can find examples with $k\ge3$, and then there are only finitely many topological
types for each $k$, with the number of topological types tending to $\infty$ as $k\to\infty$.
Finally we remark that in ongoing work we plan to extend these results to the compact case
in the manner of \cite{KapJDG} extending \cite{KapAnn}.
\subsection*{Outline of the approach}
$\phantom{ab}$
\nopagebreak
The construction in this article is an extension to high dimensions of the constructions in \cite{KapAnn,BKLD}
with \cite{BKLD} serving also as an intermediate step in the development of this article.
The main difficulties and their resolution in extending to high dimensions are the following:
\\
(1). A careful understanding of the geometry and analysis of the Delaunay hypersurfaces in
high dimensions is needed,
which to the best of our knowledge is new at least at this level of detail.
In particular understanding their periods requires some work and is similar to
work for special Legendrian submanifolds \cite{HaskKapCAGpq,HaskKap3}.
\\
(2). The conformal covariance of the Laplacian in dimension two is no longer available.
Moreover the linearized operator in dimension two can be formulated with respect to
a conformal metric $h=\frac12 |A|^2g$ which compactifies the catenoidal necks in the limit
and actually converts the catenoidal necks of the Delaunay surfaces into
spherical regions isometric to the actual spherical regions,
introducing thus new symmetries which did not exist in the induced metric $g$;
all of this is unavailable in high dimensions.
We resolved this difficulty by
understanding the linearized equation on the catenoidal necks
using Fourier decompositions on the meridians and some $L^2$ estimates.
This is a simpler version of the approach in the analysis of the linearized
equation on the (complicated and only approximately rotationally invariant)
necks in \cite{KapWente}.
Note also that since we cannot compactify the necks we use appropriate weighted estimates.
\\
(3).
Since we do not use the end-to-end gluing idea which simplifies at the expense
of limiting the scope of the construction,
we still have to use the ideas of \cite{KapAnn}, modified for high dimensions,
to understand the linearized equation on the central spherical regions,
where the fusion with the Delaunay pieces occurs.
We also use semi-localization, that is studying the linearized equation
on the extended standard regions and combining the results.
\\
(4).
Because of the generality of the construction the whole scheme is quite involved.
We tried to carefully organize the various steps so the whole structure of
the proof is conceptually clear and easy to follow.
\\
(5). We remark also that motivated by the geometric principle we achieve much
faster decay away from the central spherical regions (compared to \cite{KapAnn}),
by introducing simple dislocations between the central spherical regions and
the Delaunay pieces attached.
\\
(6). Finally we remark that instead of monitoring the use of the extended substitute
kernel at each step we chose to use a balancing formula \cite{KKS}
on the final hypersurface to estimate the unbalancing error because this seems
to provide better control.
\subsection*{Organization of the presentation}
$\phantom{ab}$
\nopagebreak
Appendix \ref{DelSection} contains a thorough treatment of the essential information about the geometry of Delaunay surfaces.
Appendix \ref{quadapp} provides standard background on the quadratic error estimates.
Finally, in Appendix \ref{annuli} we study the Dirichlet problem on a flat annulus.
Section \ref{graphs} contains a description of the family of graphs which provides the structure for the immersion of the initial surfaces.
We discuss the unbalancing and flexibility conditions and we associate to each graph in the family two parameters $(\tilde d, \tilde \ell)$
which give quantitative meaning to these conditions.
In Section \ref{BuildingBlocks}, we describe the building blocks of the construction,
spheres with balls removed and Delaunay pieces with perturbations near their boundaries.
The Delaunay building blocks are not necessarily CMC near their boundaries;
the estimates are controlled by the parameters describing the perturbation.
We are careful to describe these building blocks independently of any reference to a family of graphs;
they depend only on general parameters and not on the structure of a graph.
In Section \ref{DelaunayLinear} we study the linear operator $\mathcal L_g$ on compact pieces of Delaunay surfaces.
At this stage we choose a fixed large constant $\underline{b} \gg1$ and a small $\maxT>0$ depending on $\underline{b}$.
For any $0<|\tau|<\maxT$, we consider regions on a Delaunay immersion with parameter $\tau$.
The size of the regions considered depends upon $\tau$ and $\underline{b}$, and the choice of $\underline{b}$ and $\tau$,
along with our understanding of the geometry of Delaunay surfaces, provides good geometric estimates.
Again,
the statements and proofs of this section do not reference or rely on a graph or family of graphs.
In Section \ref{InitialSurface} we construct a family of initial surfaces which depend upon a parameter $\underline{\tau}$
and a pair of parameters $(d,\boldsymbol \zeta)$.
We presume a given family of graphs $\mathcal F$.
The parameter $\underline{\tau}$ satisfies $0<|\underline{\tau}| <\maxTG$ where $\maxTG$ depends upon $T$ and the graph $\Gamma$ but not on the structure of $\Gamma$.
The parameters $(d,\boldsymbol \zeta)$ and $\underline{\tau}$ determine $(\tilde d, \tilde \ell)$ and thus a graph in the family $\Gamma'$.
We build the initial surface by positioning and fusing building blocks at designated locations given by the structure of $\Gamma'$.
The parameters describing the building blocks are encoded in $\underline{\tau}$, $(d,\boldsymbol \zeta)$ and the graph (but not the structure) of $\Gamma$.
In Section \ref{GlobalSection} we study the linearized operator on the family of initial surfaces.
We define the extended substitute kernel and solve the modified linear problem.
Section \ref{GeometricPrinciple} contains the prescribing of substitute and extended substitute kernel.
We prove the Main Theorem in Section \ref{MThm} using a fixed point theorem.
\subsection*{Preliminaries}
\begin{definition}\label{scalednorms}
For $k \in \mathbb N\cup \{0\}$, $\beta \in (0,1)$, a domain $\Omega$ in a Riemannian manifold, $u \in C^{k,\beta}_{\mathrm{loc}}(\Omega)$, and $f,\rho:\Omega \to \mathbb R^+$ we define the norm
\[
\|u:C^{k,\beta}(\Omega, \rho, g, f)\| :=\sup_{x \in \Omega}f(x)^{-1}\|u:C^{k,\beta}(B_x\cap \Omega, \rho^{-2}(x)g)\|.
\]
Here $B_x$ is a geodesic ball centered at $x$ with radius $1/10$ in the metric $\rho^{-2}(x)g$.
For simplicity, when $\rho=1$ or $f=1$ we may omit them from the notation.
\end{definition}
Note from the definition that
\[
\|\nabla u:C^{k-1,\beta}(\Omega, \rho,g,\rho^{-1}f)\| \leq \|u:C^{k,\beta}(\Omega, \rho,g,f)\|
\]and
\[
\|u_1u_2:C^{k,\beta}(\Omega, \rho,g, f_1f_2)\| \leq C(k)\|u_1:C^{k,\beta}(\Omega, \rho,g,f_1)\| \, \|u_2:C^{k,\beta}(\Omega,\rho,g,f_2)\|.
\]
\begin{definition}
If $a,b>0$ and $c>1$, then we write
\[
a \sim_c b
\]if $a \leq cb$ and $b \leq ca$.
\end{definition}
Throughout this paper we make extensive use of cut-off functions, and thus we adopt the following notation: Let $\Psi:\mathbb R \to [0,1]$ be a smooth function such that
\begin{enumerate}
\item $\Psi$ is non-decreasing
\item $\Psi \equiv 1$ on $[1,\infty)$ and $\Psi \equiv 0$ on $(-\infty, -1]$
\item $\Psi-1/2$ is an odd function.
\end{enumerate}
For $a,b \in \mathbb R$ with $a \neq b$, let $\psi[a,b]:\mathbb R \to [0,1]$ be defined by $\psi[a,b]=\Psi \circ L_{a,b}$, where $L_{a,b}:\mathbb R \to \mathbb R$ is the linear function with $L_{a,b}(a)=-3$ and $L_{a,b}(b)=3$, that is $L_{a,b}(t)=\frac{6(t-a)}{b-a}-3$.
Then $\psi[a,b]$ has the following properties:
\begin{enumerate}
\item $\psi[a,b]$ is weakly monotone.
\item $\psi[a,b]=1$ on a neighborhood of $b$ and $\psi[a,b]=0$ on a neighborhood of $a$.
\item $\psi[a,b]+\psi[b,a]=1$ on $\mathbb R$.
\end{enumerate}
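As a concrete illustration (our own numerical sketch, not part of the paper; the paper only requires the listed properties of $\Psi$, so the particular $\Psi$ below is one admissible choice built from the standard smooth bump function), the properties of $\psi[a,b]$ can be verified directly:

```python
import math

def f(t):
    # Standard smooth ingredient: f(t) = exp(-1/t) for t > 0, and 0 otherwise.
    return math.exp(-1.0 / t) if t > 0 else 0.0

def Psi(t):
    # Smooth and non-decreasing, with Psi = 0 on (-inf,-1], Psi = 1 on [1,inf);
    # Psi - 1/2 is odd since Psi(-t) = 1 - Psi(t).
    return f(1.0 + t) / (f(1.0 + t) + f(1.0 - t))

def psi(a, b):
    # psi[a,b] = Psi composed with the linear map L_{a,b} sending a -> -3, b -> 3.
    return lambda t: Psi(6.0 * (t - a) / (b - a) - 3.0)

# Property (3): psi[a,b] + psi[b,a] = 1 on all of R (since L_{b,a} = -L_{a,b}).
p, q = psi(0.0, 1.0), psi(1.0, 0.0)
print(all(abs(p(t) + q(t) - 1.0) < 1e-12
          for t in [-2.0, 0.0, 0.3, 0.5, 0.9, 3.0]))  # prints True
```

The key point is that $L_{a,b}+L_{b,a}\equiv 0$, so property (3) follows from the oddness of $\Psi-1/2$.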
\begin{notation}
\label{NT}
For $X$ a subset of
a Riemannian manifold $(M,g)$ we write
${\mathbf{d}}^{M,g}_X$ for the distance function from $X$ in $(M,g)$.
For $\delta>0$ we define a tubular neighborhood of $X$ by
$$
D^{M,g}_X(\delta):=\left \{p\in M:{\mathbf{d}}^{M,g}_X(p)<\delta\right\}.
$$
In both cases we may omit $M$ or $g$ if understood from the context and
if $X$ is finite we may just enumerate its points.
\end{notation}
\subsection*{Acknowledgments}
CB was supported in part by National Science Foundation grants DMS-1308420 and DMS-1609198.
This material is also based upon work supported by the National Science Foundation under Grant No. DMS-1440140
while CB was in residence at the Mathematical Sciences Research Institute in Berkeley, California, during the Spring 2016 semester.
NK would like to thank the Mathematics Department and the MRC at Stanford University
for providing a stimulating mathematical environment and generous financial support during Fall 2011, Winter 2012 and Spring 2016.
NK was also partially supported by NSF grants DMS-1105371 and DMS-1405537.
\section{Finite Graphs}
\label{graphs}
The gluing construction carried out in this article uses round spheres and pieces of Delaunay surfaces
to build initial hypersurfaces which are then perturbed to become CMC hypersurfaces.
The parameters of the Delaunay pieces and the positioning of the spheres and the Delaunay pieces
are naturally encoded by graphs.
In this article for simplicity we restrict ourselves to finite graphs which we discuss in this section.
The initial graph we use should satisfy all of the relations one expects for a singular CMC surface
and thus we impose a balancing restriction on each vertex and a restriction on the length of each edge.
We first define the kind of graphs we will be using:
\begin{definition}[Graphs]
\label{D:graph}
We define a finite graph in $\Rn$ for some $n>2$ to be a collection
$\{V(\Gamma),E(\Gamma), R(\Gamma), \hat \tau\}$ such that
\begin{enumerate}
\item $V(\Gamma) \subset \Rn$ is a finite collection of vertices.
\item $E(\Gamma)$ is a finite collection of edges in $\Rn$, each with its two endpoints in $V(\Gamma)$.
\item $R(\Gamma)$ is a finite collection of rays in $\Rn$, each with its one endpoint in $V(\Gamma)$.
\item $\hat \tau: E(\Gamma) \cup R(\Gamma) \to \mathbb R \backslash \{0\}$ is a function.
\end{enumerate}
\end{definition}
\begin{notation}
Given a finite graph $\Gamma$, the input of a function or vector-valued function on $V(\Gamma), E(\Gamma), R(\Gamma)$ will be given by $[\cdot ]$.
\end{notation}
\begin{definition}[Edge and Vertex Relations]
\label{vedef}
Let $E_p$ denote the collection of edges and rays that have $p \in V(\Gamma)$ as an endpoint.
We have then
\[
\bigcup_{p \in V(\Gamma)}E_p = E(\Gamma) \cup R(\Gamma).\]
We also define the set of \emph{attachments}
\begin{equation}
A(\Gamma) := \{ \pe \in V(\Gamma) \times \left(E(\Gamma) \cup R(\Gamma)\right) \, : \, e \in E_p \} .
\end{equation}
Finally for each
$[p,e]\in A(\Gamma)$ we denote the unit vector pointing away from $p$ and in the direction of $e$
by $\mathbf{v}\pe$.
\end{definition}
\begin{definition}
\label{Def:dlz}
For a graph $\Gamma$, let $L(\Gamma)$ denote the space of functions from $E(\Gamma)$ to $\mathbb R$, let $D(\Gamma)$ denote the space of functions from $V(\Gamma)$ to $\Rn$, and let $Z(\Gamma)$ denote the space of functions from $A(\Gamma)$ to $\Rn$.
Equip each of these spaces with the maximum norm.
\end{definition}
\begin{definition}\label{Vpdefn}
We define $\hd[\Gamma,\cdot] =\hd\in D(\Gamma)$ such that
\begin{equation}
\label{hd_gamma_def}
\hd[\Gamma,p] = \hd[p] := \left(\frac{\omega_{n-1}}n\right)\left(\frac{n+1}{\omega_n}\right)^{1/2}\sum_{e \in E_p} \hat \tau[e]\Bv\pe =
\frac{\widetilde \omega_{n-1}}{\widetilde \omega_{n}^{\frac 12}}\sum_{e \in E_p} \hat \tau[e]\Bv\pe
\end{equation}
measures the deviation from balancing at the vertex $p$.
Here $\widetilde \omega_{k-1}:= \frac{\omega_{k-1}}{k}$
and $\omega_k$ denotes as usual the $k$-dimensional volume of $\mathbb S^k \subset \mathbb R^{k+1}$.
We let $l[\Gamma, \cdot] = l \in L(\Gamma)$ such that for $e \in E(\Gamma)$, $2l[e]$ equals the length of $e$.
\end{definition}
\begin{remark}\label{Cndefn}
The constant $\widetilde \omega_{k-1}$ will arise because of various normalizations throughout the argument.
Absorbing it into the definition of $\hd$ will be convenient later.
\end{remark}
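For intuition, the balancing condition $\hd[p]=0$ can be checked numerically on a simple vertex configuration (a sketch of ours, not part of the paper; the names `omega` and `hd` are our own, and `omega(k)` implements the $k$-dimensional volume $\omega_k = 2\pi^{(k+1)/2}/\Gamma((k+1)/2)$ of $\mathbb S^k$):

```python
import math

def omega(k):
    # k-dimensional volume of the unit sphere S^k in R^{k+1}.
    return 2.0 * math.pi ** ((k + 1) / 2.0) / math.gamma((k + 1) / 2.0)

def hd(n, weights, directions):
    # hd[p] = (omega_{n-1}/n) * ((n+1)/omega_n)^(1/2) * sum_e tau[e] v[p,e],
    # for unit vectors v[p,e] in R^{n+1} and weights tau[e].
    c = (omega(n - 1) / n) * ((n + 1) / omega(n)) ** 0.5
    dim = len(directions[0])
    return [c * sum(t * v[i] for t, v in zip(weights, directions))
            for i in range(dim)]

# Three coplanar unit vectors at 120 degrees with equal weights (n = 3, so the
# ambient space is R^4): the resultant vanishes, i.e. the vertex is balanced.
dirs = [(math.cos(a), math.sin(a), 0.0, 0.0)
        for a in (0.0, 2.0 * math.pi / 3.0, 4.0 * math.pi / 3.0)]
print(max(abs(x) for x in hd(3, [1.0, 1.0, 1.0], dirs)))  # ~ 0 (balanced)
```

Note that the prefactor equals $\widetilde\omega_{n-1}/\widetilde\omega_n^{1/2}$, since $\widetilde\omega_{n-1}=\omega_{n-1}/n$ and $\widetilde\omega_n=\omega_n/(n+1)$.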
Our construction will be based on a family of graphs that are perturbations of some fixed graph
which we will call the central graph $\Gamma$ (see \ref{deltadefn}).
The idea of the construction is to replace
each edge or ray $e$ of $\Gamma$ by a Delaunay piece of parameter
$\underline{\tau} \hat \tau[e]$, where $\underline{\tau}$ is a sufficiently small global parameter.
(See Section \ref{BuildingBlocks} for a description of the Delaunay pieces.)
The construction of the initial surfaces
requires appropriate small perturbations of $\Gamma$
depending on $\underline{\tau}$ and on other parameters.
The central graph $\Gamma$ will be the limit of the graphs employed as $\underline{\tau}\to0$.
In this limit our surfaces will tend to tangentially touching unit spheres.
Correspondingly,
the period of the Delaunay surfaces will tend to $2$.
Therefore $\Gamma$ has to satisfy the condition that its edges have even integer length.
Moreover the balancing conditions satisfied by CMC surfaces
(see \ref{unbalancinglemma}, \eqref{forcevec}, \eqref{force})
imply the vanishing of $\hd$ on $\Gamma$.
These considerations motivate the following definition.
\begin{definition}
\label{deltadefn}
Let $\Gamma$ be a finite graph.
If $\hd[p] = 0$ for all $p \in V(\Gamma)$, we say $\Gamma$ is a \emph{balanced} graph.
We call $\Gamma$ a \emph{central} graph if $\Gamma$ is balanced and $l[e] \in \mathbb N$ for all $e \in E(\Gamma)$.
\end{definition}
Finally, we define central graphs that guarantee that our construction produces an \emph{embedded} CMC hypersurface:
\begin{definition}[Pre-embedded graphs]
\label{D:pre}
We say $\Gamma$ is \emph{pre-embedded} if it is a central graph with $\hat \tau :E(\Gamma)\cup R(\Gamma) \to \mathbb R^+$ and
\begin{enumerate}
\item For all $p \in V(\Gamma)$ and all $e_i \neq e_j \in E_p$, $\angle(\Bv[p,e_i] , \Bv[p,e_j]) \geq \pi/3$, where $\angle(\xX,\yY)$
measures the angle between the two vectors $\xX,\yY$.
\item For all $e,e' \in E(\Gamma) \cup R(\Gamma)$ that do not share any common endpoints, the Euclidean distance between $e,e'$ is greater than $2$.
\item For any two distinct rays $e, e' \in R(\Gamma)$ with endpoints $p, p'$ respectively, $1-\Bv\pe \cdot \Bv[p',e'] > 0$.
\end{enumerate}
\end{definition}
For a pre-embedded $\Gamma$ and sufficiently small $\underline{\tau}$, each of the initial surfaces constructed from one of the possible perturbations of $\Gamma$ is embedded. In the singular setting, when $\underline{\tau}=0$, the angle condition between edges and rays about a fixed vertex allows for a singular surface with unit spheres touching tangentially. We do not require a strict inequality for this condition since the change in the period for small $\underline{\tau}$ (on the order $|\underline{\tau}|^{\frac 1{n-1}}$) dominates both the radius change and the changes we allow via unbalancing and dislocation (on the order $|\underline{\tau}|$). The second item requires a strict inequality as the maximum radius of an embedded Delaunay surface is on the order $1- \underline{\tau}\hat \tau + O(\underline{\tau}^2)$ but we allow for the edges to move with order $\underline C |\underline{\tau} \hat \tau|$ where $\underline C$ can be quite large.
The final condition also requires a strict inequality. Indeed if the central graph $\Gamma$ has two parallel rays pointing into the same half-plane, then the family of graphs on which we base our initial surfaces may include graphs with intersecting rays.
\subsection*{Deforming the graphs}
Given a central graph $\Gamma$, we will consider perturbations of this graph subject to parameters $\tilde d,\tilde \ell$.
We need the perturbations to be smoothly dependent on the parameters and are thus interested in graphs $\Gamma$
which can be deformed in this way.
\begin{definition}[Isomorphic graphs]
\label{n1}
We define two graphs as isomorphic if there exists a one-to-one correspondence between the vertices, edges, and rays,
such that corresponding rays and edges emanate from the corresponding vertices.
For convenience we will often use the same letter to denote corresponding objects for isomorphic graphs.
Using this correspondence, for $\tilde \Gamma$ isomorphic to $\Gamma$,
we identify $D(\tilde \Gamma), L(\tilde \Gamma), Z(\tilde \Gamma)$ with $D(\Gamma), L(\Gamma), Z(\Gamma)$ respectively.
\end{definition}
We proceed to define the function $\ell$, which quantifies the length change of each edge for a perturbation of $\Gamma$.
\begin{definition}
Given a graph $\Gamma_1$ isomorphic to a central graph $\Gamma$,
we define $\ell[\Gamma_1,\cdot] \in L(\Gamma)$ such that (following \ref{n1}) for all $e \in E(\Gamma) \approx E(\Gamma_1)$,
\begin{equation}\label{first_ell_def}
\ell[\Gamma_1,e] := l[\Gamma_1,e] - l[\Gamma,e],
\end{equation}
and therefore the length of the edge of $\Gamma_1$ corresponding to $e\in E(\Gamma)$ is
$$
2l[\Gamma_1,e] = 2 l[\Gamma,e] + 2\ell[\Gamma_1,e].
$$
\end{definition}
\begin{definition}[Families of graphs]
\label{FamilyDefinition}
We define a {\it family of graphs}
$\mathcal{F}$ to be a collection of graphs
parametrized by $(\tilde d,\tilde\ell)\in B_{\mathcal{F}}$
such that the following hold:
\begin{enumerate}
\item $\Gamma := \Gamma(0,0)$ is a central graph in the sense of \ref{deltadefn} and
$B_{\mathcal{F}}$ is a small ball about $(0,0)$ in $D(\Gamma) \times L(\Gamma)$.
\item $\Gamma(\tilde d,\tilde\ell)$ is isomorphic to $\Gamma(0,0)$ and depends smoothly on $(\tilde d,\tilde\ell)$.
\item Following \ref{n1}, $ \hd[\Gamma(\tilde d,0) , \cdot]=\tilde d[\cdot] $ (unbalancing condition).
\item Following \ref{n1}, $\ell[\Gamma (\tilde d,\tilde\ell) , \cdot] =\tilde \ell[\cdot]$ (flexibility condition).
\item
$ \hat \tau[\Gamma(\tilde d,0),.] = \hat \tau[\Gamma(\tilde d,\tilde\ell),.]$.
\end{enumerate}
\end{definition}
Note that by the above definition each $\Gamma(\tilde d,0)$ with $\tilde d \neq 0$
is a modification of the central graph that is unbalanced as prescribed by $\tilde d$ while the lengths
of the edges remain unchanged.
Perturbing $\Gamma(\tilde d,0)$ to $\Gamma(\tilde d,\tilde \ell)$ is achieved by changing the lengths of the edges as prescribed by $\tilde \ell$.
Note that by \ref{FamilyDefinition}.5
$\hat\tau$ is unmodified under this perturbation.
However, $\hd[\Gamma(\tilde d,0), \cdot]$ is not necessarily equal to $\hd[\Gamma(\tilde d, \tilde \ell), \cdot]$,
as the edges may rotate to accommodate the changes in edge length.
\begin{definition}\label{Rnframe}
Throughout the paper, let $\{\Be_1, \dots, \Be_{n+1}\}$ denote the standard orthonormal basis of $\mathbb R^{n+1}$.
\end{definition}
We now choose a frame associated to each edge in the graph $\Gamma$ and use this frame to determine a frame on each edge for any graph in $\mathcal{F}$.
\begin{definition}
\label{gammaframe}
For $e \in E(\Gamma)$ we choose once and for all one of its endpoints to call $p^+[e]$.
We then call its other endpoint $p^-[e]$
and we define $\mathrm{sgn}[p^{\pm} [e] ,e]:=\pm1$.
For $e \in E(\Gamma) \cup R(\Gamma)$
we choose once and for all an ordered, positively oriented orthonormal frame
$F_\Gamma [e]=\{\mathbf{v}_{1}[e], \dots, \mathbf{v}_{n+1}[e]\}$,
such that $\Bv_1[e]=\Bv\pe $,
where $p$ is the endpoint of $e$ if $e \in R(\Gamma)$
and $p=p^+[e]$ if $e \in E(\Gamma)$.
Therefore, when $[p,e]\in A(\Gamma)$ and $e \in E(\Gamma)$, we have
$$
\Bv[p^+[e],e] = \Bv_1[e]=-\Bv[p^-[e],e]
\quad\text{ and } \quad
\mathrm{sgn}[p,e]= \Bv\pe \cdot \Bv_1[e].
$$
\end{definition}
\begin{definition}\label{rotationdefn}
Given two unit vectors $\xX, \yY \in \Rn$ such that $\angle(\xX,\yY) < \pi/2$,
let $\RRR[\xX, \yY]$ denote the unique rotation defined in the following manner.
\begin{itemize}
\item
If $\xX = \yY$, take $\RRR[\xX,\yY]$ to be the identity.
\item If $\xX \neq \yY$, set $\xX \cdot \yY = \cos a$ and $\mathbf v_y:= \frac{\yY-\xX\cos a }{\sin a}$.
We define $\RRR[\xX,\yY]$ to be the rotation in the plane given by $\xX,\yY$ that rotates $\xX$ to $\yY$,
that is in closed form
\[
\RRR[\xX,\yY] = I + \sin a \left(\mathbf v_y \xX^T - \xX \mathbf v_y^T\right) + (\cos a - 1)\left(\xX \xX^T+\mathbf v_y \mathbf v_y^T\right). \]
\end{itemize}
\end{definition}
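For instance (an illustration only, not needed in what follows), take $\xX = \Be_1$ and $\yY = \cos a \, \Be_1 + \sin a \, \Be_2$ with $0<a<\pi/2$. Then $\mathbf v_y = \Be_2$ and $\RRR[\xX,\yY]$ is the rotation by angle $a$ in the $(\Be_1,\Be_2)$-plane fixing the orthogonal complement:
\[
\RRR[\xX,\yY]\,\Be_1 = \cos a\,\Be_1 + \sin a\,\Be_2,
\qquad
\RRR[\xX,\yY]\,\Be_2 = -\sin a\,\Be_1 + \cos a\,\Be_2,
\qquad
\RRR[\xX,\yY]\,\Be_i = \Be_i \;\text{ for } i\geq 3.
\]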
\begin{lemma}\label{smoothrotation} The rotation $\RRR[\xX,\yY]$ depends smoothly on $\xX$ and $\yY$.
\end{lemma}
\begin{proof}
Simplifying the expression, using the definition of $\mathbf v_y$, we observe that for $\xX \neq\yY$,
\[
\RRR[\xX,\yY]=I +(\yY\xX^T-\xX\yY^T) - (1-\cos a)\,\xX\xX^T - \frac {1}{1+\cos a}\left(\yY\yY^T - \cos a\,( \yY\xX^T +\xX \yY^T) + \cos^2 a\, \xX\xX^T\right).
\]
This expression is clearly smooth in $\xX,\yY$.
\end{proof}
For $\pe \in A(\Gamma)$ and $[p',e']$ the corresponding attachment on an isomorphic graph, let
\[
\angle(e,e'):= \arccos(\Bv\pe\cdot \Bv[p',e']).
\]
We use the rotation defined above to describe an orthonormal frame on the edges and rays of any graph in the family $\mathcal{F}$.
By the smooth dependence on $\tilde d,\tilde \ell$, and the presumed smallness of their norms,
$\angle(e,e') <\pi/2$ for $e \in E(\Gamma) \cup R(\Gamma)$ and $e'$ a corresponding edge or ray on any graph in the family.
It follows that the rotation we need will always be well-defined.
\begin{definition}
\label{FrameLemma}
For $\Gamma(\tilde d, \tilde \ell) \in \mathcal{F}$
with $\mathcal{F}$ as in
\ref{FamilyDefinition},
given $e \in E(\Gamma) \cup R(\Gamma)$ we define an orthonormal frame $F_{\Gamma(\tilde d,\tilde\ell)}[e]=\{\Bvp_1\epdl,
\dots, \Bvp_{n+1}\epdl\}$
uniquely by requiring the following:
\begin{enumerate}
\item $\Bvp_1\epdl=\Bv[\Gtdtl, p^+[e],e]$.
\item $\Bvp_i\epdl=\RRR[\Bv_1[e], \Bvp_1\epdl](\Bv_i[e])$ for $i=2, \dots, n+1$.
\end{enumerate}
\end{definition}
\begin{remark}
$F_{\Gtdtl}[e]$ depends smoothly on $\tilde d,\tilde \ell$.
\end{remark}
\section{The Building Blocks}
\label{BuildingBlocks}
The initial hypersurfaces we construct will be built out of appropriately fused pieces of spheres and perturbed Delaunay hypersurfaces.
The positioning of these pieces and the parameter of each Delaunay piece is determined by the graphs of $\mathcal{F}$ and the parameters ${d,\boldsymbol \zeta}$.
The building blocks however can be described independently of any reference to the graphs of $\mathcal{F}$.
To highlight this fact, we first develop the immersions of the building blocks to depend upon other general parameters not related to any graph.
In Section \ref{InitialSurface} we use these immersions to produce a family of hypersurfaces from a family of graphs $\mathcal{F}$,
where each hypersurface will depend on the central graph $\Gamma$ of $\mathcal{F}$ as well as the parameters $d, \zetabold$.
\subsection*{Spherical building blocks}
Let $Y_0: \mathbb R \times \Ssn \to \Ss^n \subset \mathbb R^{n+1}$
be as in \ref{Y0}.
Immediately we see that
\[
g_0 = \sech^2 t(dt^2 + g_{\Ssn}), \qquad \: |A|^2 = n, \qquad \: H \equiv 1.
\]
\begin{definition}
\label{adef}
Let $\delta'$ be a small positive constant which we will choose in
\ref{def:a2}
and define $a>0$ by
$\tanh (a+1) = \cos \left(\delta'\right)$.
Note that $Y_0(\{a+1\}\times \Ssn) = \partial D_{(1,\mathbf 0)}^{\mathbb{S}^{n}} (\delta') \subset \mathbb{S}^{n}$ (recall \ref{NT}).
\end{definition}
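For orientation (an aside, not used in any proof), note that $a = \operatorname{artanh}(\cos \delta') - 1$, and since $\cos\delta' = 1 - \frac{(\delta')^2}{2} + O((\delta')^4)$,
\[
a + 1 = \frac 12 \log \frac{1+\cos\delta'}{1-\cos\delta'} = \log\frac{2}{\delta'} + O((\delta')^2),
\]
so $a$ grows only logarithmically as $\delta' \to 0$.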
We determine now sphere diffeomorphisms that will be used to guarantee that the immersion is well-defined.
First we define
a rotation $\hat \RRR[F,F']$ which maps $F$ to $F'$
for a given orthonormal frame $F$ and a perturbation $F'$ of $F$.
\begin{definition}
Let $F:=\{\xX_1, \dots, \xX_{n+1}\}, F':=\{\yY_1, \dots, \yY_{n+1}\}$ be two orthonormal frames of $\mathbb R^{n+1}$ with the same orientation. We define
$\hat \RRR[F, F']:\Rn \to \Rn$ to be the unique rotation such that
\begin{align*}
\hat \RRR[ F, F']( \xX_i) = \yY_i.
\end{align*}
\end{definition}
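In closed form (a standard observation, recorded here for convenience), the rotation above is the sum of rank-one maps
\[
\hat \RRR[F,F'] = \sum_{i=1}^{n+1} \yY_i \xX_i^T,
\]
which is orthogonal since $\left(\sum_i \yY_i\xX_i^T\right)^T \left(\sum_j \yY_j\xX_j^T\right) = \sum_{i} \xX_i\xX_i^T = I$, and has determinant $+1$ because the frames share an orientation. In particular $\hat \RRR[F,F']$ depends smoothly on the two frames.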
We now define a map on $\mathbb S^n$ that consists of $m$ local frame transformations and smoothly transits to the identity map away from these transformations. In application, the first vector in each frame will describe the positioning of an edge on a graph $\Gtdtl \in \mathcal{F}$.
\begin{definition}[Spherical Building Blocks]
\label{defn:sphere}
We assume given two sets of positively oriented ordered orthonormal frames
$W=\{F_1 , F_2,\dots , F_m\}$ and $W' = \{F'_1, F'_2, \dots, F'_m\}$,
where
\begin{equation*}
\begin{gathered}
F_i=\{\xX_{1,i},\xX_{2,i}, \dots, \xX_{n+1,i}\},
\qquad
F_i' =\{\yY_{1,i}, \dots, \yY_{n+1,i}\},
\\
\angle(\xX_{1,i}, \xX_{1,j})>16\delta' \quad \forall i \neq j,
\qquad
\angle (\xX_{1,i}, \yY_{1,i}) \leq (\delta')^2 \quad \forall i .
\end{gathered}
\end{equation*}
That is, the first vectors of the frames in $W$ are pairwise well-separated,
while the first vectors in each pair of frames $F_i, F_i'$ are close.
We define then a family of diffeomorphisms $\hat Y[ W, W']:\Ss^n\to \Ss^n \subset \mathbb R^{n+1}$,
smoothly dependent on $ W, W'$,
by
\begin{equation*}
\hat Y[ W,W'](x):=
\left\{\begin{array}{ll}x& \text{for } x \in \Ss^n\backslash \bigsqcup_i D^{\Ss^n}_{\xX_{1,i}}({4\delta'}),
\\
\frac{ \psi_W(x) \, x + (1-\psi_W(x)) \, \hat \RRR[F_i, F_i'](x) }
{ \,\left| \psi_W(x) \, x + (1-\psi_W(x)) \, \hat \RRR[F_i, F_i'](x) \right|\, }
& \text{for } x \in D^{\Ss^n}_{\xX_{1,i}}({4\delta'})\backslash D^{\Ss^n}_{\xX_{1,i}}({3\delta'}),
\\
\hat \RRR[F_i, F_i'](x)& \text{for } x \in D^{\Ss^n}_{\xX_{1,i}}({3\delta'}),
\end{array}\right.
\end{equation*}
where $\psi_W:=\psi[3\delta',4\delta']\circ {\mathbf{d}}^{\mathbb{S}^n}_{\{\xX_{1,1},\, \xX_{1,2},\, \dots,\, \xX_{1,m}\}}$.
\end{definition}
\noindent
\subsection*{Delaunay building blocks}
We now describe a general immersion of an appropriately perturbed Delaunay piece.
For a description of Delaunay immersions, see Section \ref{DelSection}.
Throughout this subsection, let $a$ be the value defined in \ref{adef}, let $l \in \mathbb Z^+$,
and let $\Pdo$ and $\Pim$ be as in \ref{dPim} so that
$2\Pdo$ is the domain period and $2\Pim$ the translational period of a Delaunay hypersurface of parameter $\tau$.
We presume throughout that $0<\maxT \ll 1$ is a constant chosen sufficiently small to guarantee
that all immersions are smooth and well-defined and that all error estimates will hold as stated.
Finally, we let $\underline C$ denote a possibly large constant that is independent of $\maxT$.
\begin{definition}
Let
$\psi_{\mathrm{dislocation}^\pm}, \psi_{\mathrm{gluing}^\pm}: [a, 2\Pdo l -a] \to\mathbb R$ be cutoff functions such that:
\begin{itemize}
\item $\psi_{\mathrm{dislocation}^+}=\psi[a+2,a+1]$,
\item $\psi_{\mathrm{dislocation}^-}=\psi[2\Pdo l-(a+2),2\Pdo l-(a+1)]$,
\item $\psi_{\mathrm{gluing}^+}=\psi[a+3,a+4]$,
\item $\psi_{\mathrm{gluing}^-}=\psi[2\Pdo l-(a+3),2\Pdo l-(a+4)]$.
\end{itemize}
\end{definition}
With these cutoff functions, we define the building blocks.
Notice that $Y_0$ is the embedding of $\Ss^n$ defined in \eqref{Y0} and $Y_\tau$ is the Delaunay immersion defined in \eqref{DelImm}.
\begin{definition}
\label{defn:Yedge}
Given $\tau,l,a$ with $0<|\tau|\leq \maxT$ and $\boldsymbol\zeta^\pm\in \mathbb R^{n+1}$ with $0\leq| \boldsymbol\zeta^\pm|\leq\underline C |\tau|$, we define two smooth immersions
$ Y_{\mathrm{edge}}[\tau,l,\boldsymbol \zeta^+,\boldsymbol \zeta^-]:[a, 2\Pdo l -a] \times \Ssn\to\Rn$ and $ Y_{\mathrm{ray}}[\tau,\boldsymbol \zeta^+]:[a, \infty) \times \Ssn\to\Rn$ such that, for $x=(t,\bt)$,
\begin{align*} Y_{\mathrm{edge}}[\tau,l,\boldsymbol \zeta^+,\boldsymbol \zeta^-](x)=& \psi_{\mathrm{dislocation}^+}(t) \cdot \left( Y_0(x)+\boldsymbol \zeta^+\right)\\
&+(1-\psi_{\mathrm{dislocation}^+}(t))(1-\psi_{\mathrm{gluing}^+}(t))Y_0(x)\\
&+\psi_{\mathrm{gluing}^+}(t) \cdot \psi_{\mathrm{gluing}^-}(t) \cdot Y_\tau(x)\\
&+(1-\psi_{\mathrm{dislocation}^-}(t))(1-\psi_{\mathrm{gluing}^-}(t))Y_0^-(x) \\
& + \psi_{\mathrm{dislocation}^-}(t) \cdot \left(Y_0^-(x)+ \boldsymbol \zeta^-\right)
\end{align*}
\begin{align*}
Y_{\mathrm{ray}}[\tau,\boldsymbol \zeta^+](x)= &\psi_{\mathrm{dislocation}^+}(t) (Y_0(x)+ {\boldsymbol \zeta}^+ )+(1-\psi_{\mathrm{dislocation}^+}(t))(1-\psi_{\mathrm{gluing}^+}(t))Y_0(x)\\
&+\psi_{\mathrm{gluing}^+}(t) \cdot Y_\tau(x)
\end{align*}
where $Y_0^-(x)= Y_0(t-2\Pdo l,\bt) +(2+2\Pim)l \Be_1$.
\end{definition}
To aid the reader, we describe the geometry of the $Y_{\mathrm{edge}}$ immersion in some detail. For $t \in [a,a+1]$, the image is a geodesic
hyperannulus sitting on a unit sphere with the sphere centered at $\boldsymbol \zeta^+$. The annulus is centered at $\boldsymbol \zeta^+ + \Be_1$ with inner radius $\delta'$. When $t \in [a+1,a+2]$, the immersion
smoothly interpolates between
the annular region on the dislocated sphere and an annular region centered at $\Be_1$ on a unit sphere centered at the origin.
For $t \in [a+2,a+3]$, the immersion remains on the unit sphere centered at the origin, while for $t \in [a+3,a+4]$, the immersion smoothly transits between this sphere and
a Delaunay piece with parameter $\tau$. The same procedure happens toward the other end. First, the Delaunay piece transits back to a unit sphere centered
at $\left(Y_\tau(2\Pdo l,\bt)\cdot \Be_1\right) \Be_1$. This position represents the location of the end of
a Delaunay piece with parameter $\tau$ and $l$ periods, with initial end at the origin. Finally,
this sphere transits to a unit sphere centered at $\boldsymbol \zeta^- + \left(Y_\tau(2\Pdo l,\bt)\cdot \Be_1\right) \Be_1$, a dislocation of $\boldsymbol \zeta^-$ from the previously
described sphere.
Of course, the $Y_{\mathrm{ray}}$ immersion has the same behavior as $Y_{\mathrm{edge}}$ near the origin. The only difference is that the Delaunay immersion continues out to infinity and there is no transiting back to a sphere.
\begin{prop}\label{geopropcentral}
Let $g:= Y_{\mathrm{edge}}^*(g_{\Rn})$ or $g:=Y_{\mathrm{ray}}^*(g_{\Rn})$ as the situation dictates. For a fixed, large constant $x>a+5$,
\[
\| Y_{\mathrm{edge}}[\tau,l,{\boldsymbol \zeta}^+,{\boldsymbol \zeta}^-]-Y_0:C^k((a, x) \times \Ssn, g)\| \leq C(k,x)(|\boldsymbol \zeta^+| + |\tau|)
\]
\[
\| Y_{\mathrm{edge}}[\tau,l,{\boldsymbol \zeta}^+,{\boldsymbol \zeta}^-]-Y_0^-:C^k((2\Pdo l -x, 2\Pdo l-a) \times \Ssn, g)\| \leq C(k,x)(|\boldsymbol \zeta^-| + |\tau|)
\]and
\[
\| Y_{\mathrm{ray}}[\tau,{\boldsymbol \zeta}^+]-Y_0:C^k((a, x) \times \Ssn, g)\| \leq C(k,x)(|\boldsymbol \zeta^+| + |\tau|).
\]
\end{prop}
\begin{proof}
On the region where $t \in [a+1, a+2] \cup[2\Pdo l-(a+2),2\Pdo l-(a+1)]$,
the only difference between the immersions comes from the cutoff function applied to $\boldsymbol \zeta^\pm$,
where the $\pm$ is appropriate for the domain.
Thus the $C^k$ estimates on these regions are immediate.
For the other regions, we first note that the immersion $Y_0$ defines $\tanh(s)=x_1$, $\sinh(s)=\rho_0(x_1)$ from \ref{radiuslemma}.
Using an ODE comparison for $k(t)$ and $\tanh(t)$, we can appeal to \ref{radiuslemma} to get the $C^k$ estimates for the remaining regions.
\end{proof}
\begin{definition}
\label{Defn:Herror}
Let $H_X$ denote the mean curvature of the immersion $X:\Omega \subset \mathbb R \times \Ssn \to \mathbb R^{n+1}$.
Let
\[
H_{\mathrm{dislocation}}[\tau, l,\boldsymbol \zeta^+, \boldsymbol \zeta^-],H_{\mathrm{gluing}}[\tau, l,\boldsymbol \zeta^+, \boldsymbol \zeta^-]: [a, 2\mathbf p_\tau l -a]\times \Ssn \to \mathbb R,
\]
\[
H_{\mathrm{dislocation}}[\tau, \boldsymbol \zeta^+], H_{\mathrm{gluing}}[\tau, \boldsymbol \zeta^+]:[a,\infty) \times \Ssn\to\mathbb R
\] such that
\begin{align*}
H_{\mathrm{dislocation}}[\tau, l,\boldsymbol \zeta^+, \boldsymbol \zeta^-](x)&:=\left\{ \begin{array}{ll} H_{ Y_{\mathrm{edge}}[\tau,l,{\boldsymbol \zeta}^+,{\boldsymbol \zeta}^-]} - 1&\text{if } x\in \left([a,a+2] \cup [2\mathbf p_\tau l -(a+2), 2\mathbf p_\tau l -a] \right) \times \Ssn,\\
0&\text{otherwise},\end{array}\right.
\\
H_{\mathrm{gluing}}[\tau, l,\boldsymbol \zeta^+, \boldsymbol \zeta^-](x)&:=\left\{ \begin{array}{ll} H_{ Y_{\mathrm{edge}}[\tau,l,{\boldsymbol \zeta}^+,{\boldsymbol \zeta}^-]} - 1& \text{if } x \in [a+3,a+5] \times \Ssn, \\
& \text{or if } x \in [2\mathbf p_\tau l -(a+5),2\mathbf p_\tau l -(a+3)] \times \Ssn \\
0&\text{otherwise},\end{array}\right.
\\
H_{\mathrm{dislocation}}[\tau, \boldsymbol \zeta^+](x)&:=\left\{ \begin{array}{ll} H_{ Y_{\mathrm{ray}}[\tau,{\boldsymbol \zeta}^+]} - 1&\text{if } x\in [a,a+2] \times \Ssn,\\
0&\text{otherwise},\end{array}\right.
\\
H_{\mathrm{gluing}}[\tau,\boldsymbol \zeta^+](x)&:=\left\{ \begin{array}{ll} H_{ Y_{\mathrm{ray}}[\tau,{\boldsymbol \zeta}^+]} - 1&\text{if } x \in [a+3,a+5] \times \Ssn, \\
0&\text{otherwise}.\end{array}\right.
\end{align*}
\end{definition}
From these definitions and \ref{geopropcentral} we immediately bound the error on the mean curvature.
\begin{corollary}\label{Cor:Herror}
\begin{align*}
&\|H_{\mathrm{dislocation}}[\tau, l,\boldsymbol \zeta^+,\boldsymbol \zeta^-]:C^{0, \beta}([a,2\mathbf p_\tau l-a]\times \Ssn,g)\|\leq C(\beta)\left( |\boldsymbol \zeta^+|+ |\boldsymbol \zeta^-|\right)\\
&\|H_{\mathrm{gluing}}[\tau, l,\boldsymbol \zeta^+,\boldsymbol \zeta^-]:C^{0, \beta}([a,2\mathbf p_\tau l -a]\times \Ssn,g)\|\leq C(\beta) |\tau|\\
&\|H_{\mathrm{dislocation}}[\tau, \boldsymbol \zeta^+]:C^{0, \beta}([a,\infty)\times \Ssn,g)\|\leq C(\beta) |\boldsymbol \zeta^+|\\
&\|H_{\mathrm{gluing}}[\tau, \boldsymbol \zeta^+]:C^{0, \beta}([a,\infty)\times \Ssn,g)\|\leq C(\beta) |\tau|
\end{align*}
\end{corollary}
\begin{lemma}\label{Lemma:Hdis}
For $g$ as in \ref{geopropcentral}, $N_X$ denoting the unit normal of the immersion $X$, and $b \in (a+3, \mathbf p_\tau)$,
\begin{align*}
&\int_{[a,b] \times \Ssn}H_{\mathrm{dislocation}}[\tau, l,\boldsymbol \zeta^+, \boldsymbol \zeta^-] N_{ Y_{\mathrm{edge}}[\tau,l, {\boldsymbol \zeta}^+, \boldsymbol \zeta^-]} dg = 0,\\
&\int_{[2\mathbf p_\tau l -b,2 \mathbf p_\tau l -a] \times \Ssn}H_{\mathrm{dislocation}}[\tau, l,\boldsymbol \zeta^+, \boldsymbol \zeta^-] N_{ Y_{\mathrm{edge}}[\tau,l, {\boldsymbol \zeta}^+, \boldsymbol \zeta^-]} dg = 0,\\
&\int_{[a,b] \times \Ssn}H_{\mathrm{dislocation}}[\tau, \boldsymbol \zeta^+] N_{ Y_{\mathrm{ray}}[\tau,{\boldsymbol \zeta}^+]} dg = 0.
\end{align*}
\end{lemma}
\begin{proof}
We prove the result for the ray immersion as the others follow identically. For convenience we also remove the notation $[\tau, \boldsymbol \zeta^+]$.
First recall that $H_{\mathrm{dislocation}}$ is supported on $[a+1, a+2]\times \Ssn$. Thus
\[
n\int_{[a,b] \times \Ssn}H_{\mathrm{dislocation}} N_{ Y_{\mathrm{ray}}} dg =\int_{[a+ 1/2,a+5/2] \times \Ssn} nH_{ Y_{\mathrm{ray}}} N_{ Y_{\mathrm{ray}}} dg -n \int_{[a+1/2,a+5/2] \times \Ssn} N_{ Y_{\mathrm{ray}}} dg.
\]By the divergence theorem and since ${ Y_{\mathrm{ray}}}= Y_0 + \boldsymbol \zeta^+$ on $[a,a+1]\times \Ssn$, ${ Y_{\mathrm{ray}}}= Y_0$ on $[a+2, a+3] \times \Ssn$ the first term can be rewritten as
\begin{align*}
\int_{[a+ 1/2,a+5/2] \times \Ssn} \sum_{i=1}^{n+1} \Delta_g x_i \Be_i dg
&= \int_{\partial([a+ 1/2,a+5/2] \times \Ssn)} \sum_{i=1}^{n+1} (\nabla_g x_i \cdot \eta_{Y_{\mathrm{ray}}}) \Be_i d\sigma_g\\
&= \int_{\partial([a+ 1/2,a+5/2] \times \Ssn)} \sum_{i=1}^{n+1} (\nabla_{g_0} x_i \cdot \eta_{Y_{0}}) \Be_i d\sigma_{g_0}\\
&=\int_{[a+ 1/2,a+5/2] \times \Ssn} nH_{ Y_0} N_{Y_0} dg_0
\end{align*}
where $d\sigma_g$ denotes the measure induced on the boundary and $\eta$ the outward conormal. By similar logic, we note that
\[
\int_{[a+1/2,a+5/2] \times \Ssn} N_{ Y_{\mathrm{ray}}} dg= \int_{[a+1/2,a+5/2] \times \Ssn} N_{ Y_{0}} dg_0
\] and thus
\[
n\int_{[a,b] \times \Ssn}H_{\mathrm{dislocation}} N_{ Y_{\mathrm{ray}}} dg= n \int_{[a,b] \times \Ssn}(H_{Y_0}-1)N_{Y_0} dg_0 =0.
\]
\end{proof}
\section{Linear theory on Delaunay hypersurfaces}
\label{DelaunayLinear}
In this section, we solve semi-local linear problems on Delaunay surfaces with small parameter.
Throughout the paper we denote the linearized operator in the induced metric by $\mathcal L_g$.
On a Delaunay immersion as described in Appendix \ref{DelSection}, by \eqref{FF} and \eqref{modA} the operator takes the form
\begin{equation}
\label{Lg}
\mathcal L_g:=\Delta_g+|A_g|^2 = \frac 1{r^2} \partial_{tt} + \frac{n-2}{r^2}w'\partial_t + \frac 1{r^2} \Delta_{\Ssn}+n(1+(n-1)\tau^2 r^{-2n}).
\end{equation}
\begin{assumption}
\label{ass:b}
Throughout this section,
we will assume $\underline{b}\gg1$ is a fixed constant,
chosen as large as necessary and depending only on $n$ and $\epsilon_1$,
where $\epsilon_1$ is a small constant which depends on $\gamma \in (1,2), \beta \in (0,1)$.
In particular, $\underline{b}$ is independent of the constant $\maxT>0$, which will be chosen as small as needed, in terms of $\underline{b}$.
We also assume given $b\in \left(\frac9{10}\underline{b},\frac{11}{10}\underline{b}\right)$.
Unless otherwise stated we will denote by $C$ positive constants which depend on $\underline{b}$ but not on $b, \maxT$.
\end{assumption}
\begin{definition}
\label{domaindefinitions}
Given $0<|\tau| < \maxT$ and a Delaunay immersion $Y_\tau:\mathbb R\times \Ssn\to \mathbb R^{n+1}$ defined as in Appendix \ref{DelSection},
we define the following regions on the domain:
\begin{enumerate}
\item $\Lambda_{x,y}:= [b+x,\Pdo-(b+y)] \times \Ssn$
\item $\Cout_x := \{b+x\}\times \Ssn$
\item $\Cin_y:=\{\Pdo-(b+y)\} \times \Ssn$
\item $\Sm_x:= [\Pdo-(b+x), \Pdo +(b+x)]\times \Ssn$
\item $\Smext_x:=[b+x, 2\Pdo-(b+x)]\times \Ssn$
\item $\Sp_x := [2\Pdo -(b+x),2\Pdo+(b+x)]\times \Ssn$
\item $\Spext_x:= [\Pdo+(b+x), 3\Pdo-(b+x)]\times \Ssn$
\end{enumerate}
Here $0\le x,y < \Pdo-b$, where $\Pdo -b>0$ is guaranteed by the smallness of $\maxT$ in terms of $\underline{b}$. When $x=y=0$ we may drop the subscript.
\end{definition}
Notice that for $\maxT$ small enough, by \ref{radiuslemma} and \ref{Cat_lemma},
the immersion of the region $\Sp$ has geometry roughly like $\Ss^n$, while the immersion of the region $\Sm$,
after an appropriate rescaling, looks roughly like a catenoid.
Following usual terminology
we refer to these regions as \emph{standard regions}
and we refer to $\Spext, \Smext$ as \emph{extended standard regions}.
The extended standard regions contain one standard region and two adjacent regions with $t$-coordinate length $\Pdo -2b$.
We have labeled one such region $\Lambda$ and we refer to $\Lambda$ as a \emph{transition} or \emph{intermediate region}.
\subsection*{The linearized equation on the transition region}
Let $\rout, \rin$ denote the radius of the meridian spheres at $\Cout, \Cin$ respectively, in the induced metric. That is
$$\rout = r_{\tau}(b) \text{ and }\rin = r_{\tau}(\Pdo-b).$$
We consider a flat metric on $\Lambda$ given by
\begin{equation}\label{D:s}
g_A:= ds^2 + s^2 g_{\Ssn}
\text{ where $s:[b,\Pdo-b] \to \mathbb R^+$ satisfies }
\left\{\begin{array}{l}\frac {ds}{dt} = r(t)\\ s(\Pdo-b) = \rin
\end{array} \right.
\end{equation}
\begin{lemma}\label{lemma:rvss}
Let $\gamma\in (1,2), \beta \in (0,1)$. Given $0<\delta<\min\{\frac 1{100}, \frac 1{10n}\}$, there exists $\underline{b}$ large enough and $\maxT>0$ small enough depending on $\underline{b}$ such that for all $0<|\tau|<\maxT$, for $\Lambda$ defined by $\tau$ and $b$ satisfying \ref{ass:b}:
\begin{align}\label{r_metric_equiv}
\|1: C^{0, \beta}(\Lambda, r, g, r^{-2})\| &\sim_{10} \|r^{2}:C^{0,\beta}(\Lambda, r, g)\|,\notag \\
\|r^{-2n}: C^{0, \beta}(\Lambda, r, g, r^{-2})\| &\sim_{10} \|r^{2-2n}:C^{0,\beta}(\Lambda, r, g)\|
\end{align}
Moreover, for $s$ defined by \eqref{D:s},
\begin{equation}\label{soverr}
\left| \frac sr - 1\right| \leq 5\delta,
\qquad
\left| \frac{ds}{dr}-1\right|\leq 4\delta,
\qquad
\left|\frac{d^2s}{dr^2}\right| \leq \frac{C(n)}r\delta.
\end{equation}As a consequence, for any $v\in C^{k, \beta}(\Lambda)$, for $0 \leq k \leq 2$,
\begin{align}\label{uniformnorms}
{\|v:C^{k,\beta}(\Lambda,r,g, r^{\gamma-2})\|}&\sim_{10}{\|v:C^{k,\beta}(\Lambda,s,g_A,s^{\gamma-2})\|},\notag\\
{\|v:C^{k,\beta}(\Lambda,r,g, r^{-n-\gamma})\|}&\sim_{10}{\|v:C^{k,\beta}(\Lambda,s,g_A,s^{-n-\gamma})\|}.
\end{align}
\end{lemma}
\begin{proof}
Notice that the geometry of $Y_\tau$ near $t=0$ and $t=\Pdo$ (see \ref{radiuslemma}, \ref{Cat_lemma})
implies that by picking $\underline{b}$ large, independent of $\maxT$ and $\maxT$ sufficiently small, for all $0<|\tau|<\maxT$ we have the bound $r \in \left( |\tau|^{\frac 1{n-1}}/\delta, \delta\right)$ on $\Lambda$.
To prove \eqref{r_metric_equiv}, consider a fixed $(u, \bt) \in \Lambda$ and note that $r^{-2}(u)g= \frac{r^2}{r^2(u)}\left(dt^2+g_{\Ss^{n-1}}\right)$. Observe that as $\frac{r( u)}{r(t)} = e^{w( u)-w(t)}$ and $|w'| \in(1-3\delta^2, 1]$ by the choice of $\underline{b}, \maxT$,
\[
\frac{99}{100}|t-u|< (1-3\delta^2)|t-u| \leq |w( t)-w(u)| = \left| \int_u^{ t} w'( \tilde t) \, d \tilde t\right|.
\]
Therefore, if $|u-t|> \frac 15$ then the length of a curve connecting $(u,\bt)$, $(t,\bt)$ in the metric $\frac{r^2}{r^2(u)}dt^2$ is at least $\frac 1{10}$ as
\[
\int_u^t e^{|w(u)-w(s)|}ds \geq \int_u^t e^{99|u-s|/100}ds = \frac{100}{99}({e^{99|u-t|/100}-1}).
\]
It follows that
a ball of radius $1/10$ about $(u,\bt)$ in the metric $r^{-2}(u) g$, is contained in the cylinder $[u-1/5, u+1/5] \times \Ss^{n-1}$.
Now for $m=0$ or $m= -2n$,
\[
\frac{r^{-m}(t)r^2(u)} {r^{2-m}(t)} = \frac{ \tau^{\frac{2-m}{n}}e^{2 w( u) - m w(t)}} { \tau^{\frac{2-m}n} e^{(2-m)w(t)}}= e^{2w(u)-2w(t)}.
\]The $C^0$ equivalence follows as $(1-3\delta^2)|t-u| \leq |w(t) - w(u)| \leq |t-u|$ on $\Lambda$ and $|t-u| \leq \frac 15$ for every comparison in the weighted metric. To get the $C^{0,\beta}$ equivalence, first observe that
\[
\frac{r^2(u)}{r^3(t)}(r^2(t))' = \frac{2w'(t)r^2(u)}{r(t)} = 2w'(t) r(u) e^{w(u)-w(t)}.
\]For $|u-t| < \frac 15$, the above is bounded by $4\delta$ on $\Lambda$ and the first equivalence holds.
In the other case,
\[
\frac{\frac d{dt}r^{-2n}(t)r^2(u)}{\frac d{dt}r^{2-2n}(t)}= \frac{2nr^{-2n-1}(t)r^2(u)}{(2n-2)r^{1-2n}(t)}= \frac {2n}{2n-2}\cdot\frac{r^2(u)}{r^2(t)}
\]so the same comparisons as in the $C^0$ case give bounds on the ratio here, which implies the second equivalence in \eqref{r_metric_equiv}.
Recall that by \eqref{req}, $r'(t) = w'(t)r(t)$ where
$w = \log \left(|\tau|^{-1/n}r\right)$. Substituting into \eqref{weq},
\[
\frac{dr}{d t} = r\sqrt{1 - \left(r+ \tau r^{1-n}\right)^2}.
\]Therefore,
\[
\frac{ds}{dr} = \left(1 - \left(r+ \tau r^{1-n}\right)^2\right)^{-1/2}.
\]
As the maximum of the function $|r+\tau r^{1-n}|$, restricted to $\Lambda$, occurs on $\partial \Lambda$, by choosing $\maxT>0$ perhaps smaller, when $0<|\tau|<\maxT$ we can bound
\begin{equation}\label{delta_Lambda}
|\delta + \delta^{1-n} \tau\:| \leq 2\delta, \; \; \; |\delta^{-1}|\tau|^{\frac 1{n-1}}+ |\tau|^{-1} \delta^{n-1}\tau\:| \leq 2\delta.
\end{equation}
The derivative estimates then in \ref{soverr} follow from \eqref{delta_Lambda} and the observation that
\begin{align*}
\left|\frac{d^2s}{dr^2}\right| = &\left|\frac{(r+ r^{1-n}\tau)(1+(1-n)r^{-n}\tau)}{(1-(r+ r^{1-n}\tau)^2)^{3/2}}\right|\notag\\
=&\left|\frac 1r\cdot\frac{(r+ r^{1-n}\tau)(r+(1-n)r^{1-n}\tau)}{(1-(r+ r^{1-n}\tau)^2)^{3/2}}\right| .
\end{align*}
By the fact that $\frac{ds}{dr}>0$ and the estimate on $ds/dr$ we conclude the proof of \eqref{soverr} by
\begin{align*}
(1-4 \delta) (r - \rin )\leq s(r)-\rin&=\int_{\rin}^{r}\frac{ds}{d\tilde r}d\tilde r \leq (1+4 \delta) (r - \rin ).
\end{align*}
The $C^0$ equivalence of the norms in \eqref{uniformnorms} follows immediately from \eqref{soverr},
and indeed the ratio of the weight functions will always contribute error ratios no worse than $(1+10\delta n) \leq 2$.
To prove equivalence up to higher derivatives,
observe that for a fixed $(u,\bt) \in \Lambda$,
\[
s^{-2}(u)g_A = \frac{1}{s^2(u)}\left( ds^2+ s^2g_{\Ssn}\right)= \frac{r^2}{s^2(u)}dt^2 + \frac{s^2}{s^2(u)}g_{\Ssn}.
\]
Fix a point $(u, \bt) \in \Lambda$ and consider the ball of radius $1/10$ about this point with respect to the metric $s^{-2}(u)g_A$. In the $t$-direction, the inequality
\[
\frac{r(t)}{r(u)}(1-5\delta) \leq \frac{r(t)}{s(u)} = \frac{r(t)}{r(u)}\frac{r(u)}{s(u)}\leq \frac{r(t)}{r(u)}(1+5\delta)
\]implies that it is enough to consider $|t-u|\leq \frac 25$. Moreover, as
\begin{equation}\label{eq:thetaderiv}
\frac {99}{100} \leq \frac{s(u)}{s(t)}\cdot\frac{r(t)}{r(u)} \leq \frac{100}{99},
\end{equation}$|t-u|<\frac 25$ is sufficient for the $\Ssn$ direction as well.
All derivatives purely in the $\Ssn$ direction are comparable in the norms as indicated by \eqref{eq:thetaderiv}. So we consider only partial derivatives involving $t$. The $C^1$ comparison is straightforward since (presuming $\partial_t v \neq 0$)
\[
1-5\delta\leq \left(\frac{r(u)}{r(t)}\partial_t v \right)\cdot \left( \frac{s(u)}{r(t)}\partial_t v \right)^{-1} \leq 1+5 \delta.
\]Second derivatives in the $t$ direction then follow since
\[
\left(\frac{r^2(u)r'(t)}{r^3(t)}\right) \cdot \left( \frac{s^2(u)r'(t)}{r^3(t)}\right)^{-1}, \quad \quad \left( \frac{r^2(u)}{r^2(t)}\right)\cdot \left(\frac{s^2(u)}{r^2(t)} \right) ^{-1}
\]satisfy equally good inequalities. The mixed partials and third derivatives again satisfy equal ratio estimates and the result follows.
\end{proof}
We define operators
\begin{equation}\label{D:Lgl}
\mathcal L_{g_A}:= \Delta_{g_A} = \partial_{ss} +\frac{n-1}s \partial_s + \frac 1{s^2}\Delta_{\Ssn},
\qquad
\mathcal L_g^\lambda:= \mathcal L_g + \lambda.
\end{equation}
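It may help to recall the standard separation of variables for $\mathcal L_{g_A}$ (a routine computation, included for the reader's convenience): if $\phi$ is an eigenfunction on $\Ssn$ with $\Delta_{\Ssn}\phi = -\mu\phi$, then
\[
\mathcal L_{g_A}\left(s^\alpha \phi\right) = \left(\alpha(\alpha+n-2) - \mu\right) s^{\alpha-2}\phi,
\]
so on the $k$-th eigenspace, where $\mu = k(k+n-2)$, the homogeneous solutions of $\mathcal L_{g_A} u = 0$ are $s^k\phi$ and $s^{2-n-k}\phi$ (with $s^{2-n-k}$ replaced by $\phi\log s$ in the degenerate case $n=2$, $k=0$).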
We first demonstrate that in an appropriately weighted metric, for sufficiently small $\lambda$, the operator $\mathcal L_g^\lambda$ is close to $\mathcal L_{g_A}$:
\begin{lemma}
\label{annularlemma}
Let $\gamma\in (1,2), \beta \in (0,1)$. For $\epsilon_1>0$ there exists $\underline{b}$ large enough depending on $\epsilon_1$ and $\maxT>0$ small enough depending on $\underline{b}$ such that for all $0<|\tau|<\maxT$ the following holds:
Consider $\Lambda$ defined by $\tau$ and $b$ satisfying \ref{ass:b}. Let $0\leq |\lambda| < (2\rout)^{-1}$. Then for all $V \in C^{2,\beta}(\Lambda)$
\begin{equation}\notag
\begin{aligned}
\|\mathcal L_g^\lambda V - \mathcal L_{g_A} V:C^{0,\beta}(\Lambda, r,g, r^{-n-\gamma})\| \leq&
\epsilon_1\|V: C^{2,\beta}(\Lambda, r,g, r^{2-n-\gamma})\|,
\\
\|\mathcal L_{g}^\lambda V - \mathcal L_{g_A}V:C^{0,\beta}(\Lambda,r,g,r^{\gamma-2})\| \leq &
\epsilon_1 \|V:C^{2,\beta}(\Lambda,r,g,r^{\gamma})\|.
\end{aligned}
\end{equation}
\end{lemma}
\begin{proof}Choose $\delta>0$ small enough so that $C(n) \delta <\epsilon_1/2$ where $C(n)$ is a fixed constant depending only on $n$. Decrease $\delta$ if necessary so that it also satisfies the hypotheses in \ref{lemma:rvss}. Then choose $\underline{b}, \maxT$ as in \ref{lemma:rvss} for this $\delta$.
Applying \eqref{r_metric_equiv} and recalling \eqref{modA},
\begin{align*}
\|\, |A_g|^2+ \lambda:C^{0,\beta}(\Lambda, r,g, r^{-2})\|& =\|n(1+(n-1)\tau^2 r^{-2n})+\lambda:C^{0,\beta}(\Lambda, r,g, r^{-2})\| \\
& \leq 100\|nr^2+n(n-1)\tau^2 r^{2-2n}+\lambda r^2:C^{0,\beta}(\Lambda, r,g)\| \leq C(n)\delta.
\end{align*}
By calculation,
\[
\mathcal L_{g_A} - \mathcal L_g^\lambda =\frac 1{r} \left(\frac{n-1}{s}-\frac{n-1}{r}w'\right) \partial_t+ \left(\frac 1{s^2}-\frac 1{r^2}\right)\Delta_{\Ssn}-n(1+(n-1)\tau^2r^{-2n})- \lambda.
\]
Given the constraints on $\delta$, the estimates of \ref{lemma:rvss} and multiplicative properties of H\"older norms imply the result.
\end{proof}
\begin{definition}\label{defn:fhat}We define $\hf_0: \mathbb R\times \Ssn \to \mathbb R$ (recall \ref{FF}) such that
\begin{equation}\notag
\begin{aligned}
\hf_0:= & \nu_\tau \cdot \Be_1 = \frac{\ovr'}{\ovr} = \pm \sqrt{1-\left(\ovr+ \tau \ovr^{1-n}\right)^2}.
\end{aligned}
\end{equation}
Here the sign of $\hf_0$ depends on the domain of definition; note that $\hf_0$ is odd about $t=0$.
\end{definition}
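The identity $\mathcal L_g \hf_0 = 0$, used in the next lemma, can be seen geometrically (a standard observation we record for convenience): the translated immersions $Y_s := Y_\tau + s\Be_1$ all have the same mean curvature, and the normal component of the variation field $\Be_1$ is exactly $\hf_0 = \nu_\tau\cdot\Be_1$. Since the linearization of $nH$ under a normal variation $u\,\nu_\tau$ is $\mathcal L_g u = \Delta_g u + |A_g|^2 u$, differentiating $nH[Y_s]\equiv n$ at $s=0$ gives
\[
0 = \frac{d}{ds}\Big|_{s=0} nH[Y_s] = \mathcal L_g\left(\nu_\tau\cdot\Be_1\right) = \mathcal L_g \hf_0.
\]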
\begin{lemma}
The lowest eigenvalue for the Dirichlet problem for $\mathcal L_g$ on $\Lambda$ is bounded below by $(2\rout)^{-1}$.
\end{lemma}
\begin{proof}
First notice that $\mathcal L_g \hf_0 =0$ and $\hf_0(0)=\hf_0(\Pdo)=0$. Moreover, by definition, $\hf_0 <0$ on $(0,\Pdo)\times \Ssn$. Classical theory implies that on $(0, \Pdo) \times \Ssn$, the lowest eigenvalue for the Dirichlet problem for $\mathcal L_g$ is $0$.
Domain monotonicity then implies that on $\Lambda \subset (0, \Pdo) \times \Ssn$, the lowest eigenvalue for the Dirichlet problem for $\mathcal L_g$ is positive. Suppose $\lambda_1$ is the lowest eigenvalue for the Dirichlet problem on $\Lambda$ and that $0< \lambda_1<(2\rout)^{-1}$. For any $0<\lambda < (2\rout)^{-1}$, \ref{annularlemma} applies to the operators $\mathcal L^\lambda_g, \mathcal L_{g_A}$. Let $\widetilde V$ satisfy $\mathcal L_{g_A}\widetilde V =0$ on $\Lambda$, $\widetilde V|_{\Cout}=1$, $\widetilde V|_{\Cin}=0$. By inspection, one determines the estimate
\[
\|\widetilde V:C^{2,\beta}(\Lambda,r,g)\| \leq C(\beta).
\] Using \ref{annularlemma} with the weaker decay estimate $r^{-2}$, we may iterate to produce $V$ such that
$\mathcal L_g^{\lambda_1} V =0$ with the same boundary data and $\|V:C^{2,\beta}(\Lambda,r,g)\| \leq C(\beta)$. Let $f$ be the lowest eigenfunction for $\mathcal L_g$. Then $\mathcal L_g f= -\lambda_1 f$ and $f>0$ on $\Lambda$ with $f=0$ on $\partial \Lambda$. Since $f \not\equiv 0$, there exists $C$ sufficiently large such that $Cf>V$ on a domain $\Omega \subset \Lambda$. Then $\mathcal L_g(V-Cf) = -\lambda_1(V-Cf)$ and $Cf-V>0$ on $\Omega \subset \Lambda$.
Domain monotonicity then implies $\lambda_1$ is not the lowest eigenvalue, giving a contradiction.
\end{proof}
\begin{corollary}\begin{enumerate}
\item The Dirichlet problem on $\Lambda$ for $\mathcal L_g^\lambda$ with $0\leq |\lambda| <(4\rout)^{-1}$ and given $C^{2,\beta}$ Dirichlet data has a unique solution.
\item For $E \in C^{0, \beta}(\Lambda)$ there exists a unique $\varphi \in C^{2,\beta}(\Lambda)$ such that $\mathcal L_g^\lambda \varphi =E$ and $\varphi|_{\partial \Lambda}=0$. Moreover
\[
\|\varphi:C^{2,\beta}(\Lambda,g)\| \leq C(\beta,\gamma)\rout \|E:C^{0,\beta}(\Lambda,g)\|.\]
\end{enumerate}
\end{corollary}
\begin{proof}
The first item follows immediately from the lemma and by noting that if $|\lambda|<(4\rout)^{-1}$ then the lowest eigenvalue for $\mathcal L_g^\lambda$ is greater than $(4\rout)^{-1}$. The second follows from the Rayleigh quotient and standard techniques.
\end{proof}
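\begin{remark}
To make the eigenvalue shift in the first item explicit, assume the convention $\mathcal L^\lambda_g = \mathcal L_g + \lambda$, which is consistent with the estimates above. If $\mathcal L_g f = -\mu f$ with $f|_{\partial\Lambda}=0$, then $\mathcal L^\lambda_g f = -(\mu-\lambda)f$, so the lemma implies that the lowest Dirichlet eigenvalue of $\mathcal L^\lambda_g$ on $\Lambda$ is at least
\[
(2\rout)^{-1} - |\lambda| > (2\rout)^{-1} - (4\rout)^{-1} = (4\rout)^{-1}.
\]
\end{remark}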
We now use \ref{annularlemma} and \ref{flatannuluslinear} to prove the decay estimates we desire. Note that \ref{flatannuluslinear} gives the analogous decay estimates for solutions to $\mathcal L_{g_A} V =E$ on flat annuli.
\begin{definition}\label{phidef}
For $i =1, \dots, n$, let $\phi_i$ denote the $i$-th component of the canonical immersion of $\Ssn$ into $\mathbb R^n$. For convenience going forward, let $\phi_0\equiv 1$.
\end{definition}
Note that $\Delta_{\Ssn} \phi_i = - (n-1) \phi_i$ for $i=1,\dots,n$ and $\Delta_{\Ssn} \phi_0=0$,
and that these functions are $L^2$-orthogonal, though we have chosen not to normalize them.
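\begin{remark}
For the reader's convenience we recall where these eigenvalue identities come from: a homogeneous harmonic polynomial of degree $d$ on $\mathbb R^n$ restricts to $\Ssn$ to an eigenfunction of $\Delta_{\Ssn}$ with eigenvalue $-d(d+n-2)$. Thus
\[
\Delta_{\Ssn}\phi_0 = 0 \quad (d=0), \qquad \Delta_{\Ssn}\phi_i = -(n-1)\phi_i \quad (d=1,\ i=1,\dots,n).
\]
\end{remark}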
Since we will be particularly interested in understanding the low harmonics of a function on the boundary of $\Lambda$, we introduce the following notation.
\begin{definition}\label{LowHdef}
Let $\mathcal H_k[C]$ denote the finite dimensional space of spherical harmonics on the meridian sphere at $C$
that includes all of those up to (and including) the $k$-th eigenspace.
That is, $\mathcal H_0[C]$ is the span of $\{\phi_0\}$ and
$\mathcal H_1[C]$ is the span of $\{\phi_0, \dots, \phi_n\}$.
\end{definition}
\begin{prop}\label{RLambda}Given $\beta \in(0,1)$ and $\gamma \in (1,2)$, there exists $\underline{b}$ large enough depending on $\beta,\gamma$ and $\maxT>0$ small depending on $\underline{b}$ such that the following holds.\\
For $0<|\tau|<\maxT$ and $b$ satisfying \ref{ass:b} and any $|\lambda| <(4\rout)^{-1}$, there are linear maps $\mathcal{R}^{\mathrm{out}}_{\Lambda,\lambda}, \mathcal R^{\mathrm{in}}_{\Lambda,\lambda}:C^{0,\beta}(\Lambda) \to C^{2, \beta}(\Lambda)$ such that, given $E \in C^{0,\beta}(\Lambda)$:
\begin{enumerate}[(i)]
\item if $V^{\mathrm{out}} := \mathcal{R}^{\mathrm{out}}_{\Lambda,\lambda}( E)$ then
\begin{itemize}
\item $ \mathcal{L}_g^\lambda V^{\mathrm{out}} =E \text{ on }\Lambda.$
\item $V^{\mathrm{out}}|_{\Cout} \in \mathcal H_1[\Cout]$ and vanishes on $\Cin$.
\item $\|V^{\mathrm{out}}:C^{2,\beta}(\Lambda,r,g, r^{\gamma}) \|\leq C(\beta, \gamma)\|E:C^{0,\beta}(\Lambda,r,g,r^{\gamma-2}) \|$.
\end{itemize}
\item if $V^{\mathrm{in}} := \mathcal{R}^{\mathrm{in}}_{\Lambda,\lambda}( E)$ then
\begin{itemize}
\item $ \mathcal{L}_g^\lambda V^{\mathrm{in}} =E \text{ on }\Lambda.$
\item $V^{\mathrm{in}}|_{\Cin} \in \mathcal H_1[\Cin]$ and vanishes on $\Cout$.
\item $\|V^{\mathrm{in}}:C^{2,\beta}(\Lambda,r,g, r^{2-n-\gamma}) \|\leq C(\beta, \gamma)\|E:C^{0,\beta}(\Lambda,r,g, r^{-n-\gamma}) \|$.
\end{itemize}
\end{enumerate}
Both $\mathcal{R}^{\mathrm{out}}_{\Lambda,\lambda}$ and $\mathcal R^{ \mathrm{in}}_{\Lambda,\lambda}$ depend continuously on the choice of $\tau, b$.
\end{prop}
\begin{proof}We prove the result for $\mathcal R^{\mathrm{out}}_{\Lambda,\lambda}$ as the other argument follows similarly.
Fix $\epsilon_1<1/(20C(\beta,\gamma))$; this choice determines $\underline{b}, \maxT$ via \ref{annularlemma}. Let $E \in C^{0, \beta}(\Lambda)$. We now apply \ref{flatannuluslinear} with $s$ defined as a function of $t$ as in \eqref{D:s} and with domain of definition equal to $\Lambda$. Thus, there exists $V_0= \mathcal R^{\mathrm{out}}_A(E)$ such that
\begin{enumerate}
\item $\mathcal L_{g_A} V_0 = E$,
\item $V_0|_{\Cout} \in \mathcal H_1[\Cout]$ and vanishes on $\Cin$,
\item $\|V_0:C^{2,\beta}(\Lambda,s,g_A, s^{\gamma}) \|\leq C(\beta, \gamma)\|E:C^{0,\beta}(\Lambda,s,g_A,s^{\gamma-2}) \|$.
\end{enumerate} \ref{uniformnorms} and \ref{annularlemma} together imply that
\[
\|\mathcal L_{g}^\lambda V_0 - E:C^{0,\beta}(\Lambda,r,g, r^{\gamma-2}) \|\leq 10\epsilon_1C(\beta, \gamma)\|E:C^{0,\beta}(\Lambda,r,g,r^{\gamma-2}) \|.
\]
We complete the proof by iteration.
\end{proof}In a similar fashion, we can prove the following corollary.
\begin{corollary}\label{linearcor}
Assuming $\epsilon_1$ of \ref{annularlemma} is small enough in terms of $\epsilon_2$ and $\beta\in(0,1), \gamma \in(1,2)$, for any $0\leq |\lambda| < (4\rout)^{-1}$, there are two linear maps:
\begin{equation}\notag
\begin{aligned}
\mathcal{R}^{\mathrm{out}}_{\partial,\lambda} :\{u \in C^{2, \beta}(\Cout): u \text{ is } L^2(\Cout, g_{\Ssn}) \text{-orthogonal to } \mathcal H_1[\Cout]
\} \to C^{2, \beta}(\Lambda),
\\
\mathcal{R}^{\mathrm{in}}_{\partial,\lambda} :\{u \in C^{2, \beta}(\Cin): u \text{ is } L^2(\Cin, g_{\Ssn}) \text{-orthogonal to } \mathcal H_1[\Cin]
\} \to C^{2, \beta}(\Lambda)
\end{aligned}
\end{equation}
such that the following hold:
\begin{enumerate}
\item If $u$ is in the domain of $\mathcal R^{\mathrm{out}}_{\partial,\lambda}$ and $V^{\mathrm{out}}:= \mathcal{R}^{\mathrm{out}}_{\partial,\lambda}( u)$ then
\begin{itemize}
\item $\mathcal{L}_g^\lambda V^{\mathrm{out}}=0 \text{ on } \Lambda$.
\item $V^{\mathrm{out}}|_{\Cout}-u \in \mathcal H_1[\Cout]$ and $V^{\mathrm{out}}$ vanishes on $\Cin$.
\item $\|V^{\mathrm{out}}|_{\Cout} - u:C^{2, \beta}(\Cout, g_{\Ssn})\| \leq \epsilon_2\|u:C^{2,\beta}(\Cout, g_{\Ssn})\|$.
\item $\|V^{\mathrm{out}}:C^{2,\beta}(\Lambda, r,g,(r/\rout)^{\gamma})\| \leq C(\beta, \gamma)\|u:C^{2, \beta}(\Cout, g_{\Ssn})\|$.
\end{itemize}
\item If $u$ is in the domain of $\mathcal R^{\mathrm{in}}_{\partial,\lambda}$ and $V^{\mathrm{in}}:= \mathcal{R}^{\mathrm{in}}_{\partial,\lambda}( u)$ then
\begin{itemize}
\item $\mathcal{L}_g^\lambda V^{\mathrm{in}}=0 \text{ on } \Lambda$.
\item $V^{\mathrm{in}}|_{\Cin}-u \in \mathcal H_1[\Cin]$ and $V^{\mathrm{in}}$ vanishes on $\Cout$.
\item $\|V^{\mathrm{in}}|_{\Cin} - u:C^{2, \beta}(\Cin, g_{\Ssn})\| \leq \epsilon_2\|u:C^{2,\beta}(\Cin, g_{\Ssn})\|$.
\item $\|V^{\mathrm{in}}:C^{2,\beta}(\Lambda, r,g,(\rin/r)^{n-2+\gamma})\| \leq C(\beta, \gamma)\|u:C^{2, \beta}(\Cin, g_{\Ssn})\|$.
\end{itemize}
\end{enumerate} In either case $\mathcal{R}^{\mathrm{out}}_{\partial,\lambda}, \mathcal R^{ \mathrm{in}}_{\partial,\lambda}$ depend continuously on $\tau,b$.
\end{corollary}
\begin{proof}Again, we prove the result only for $\mathcal R^{\mathrm{out}}_{{\partial,\lambda}}$.
We first note that as an immediate corollary to \ref{flatannuluslinear}, we may define a linear map
\[\mathcal R^{\mathrm{out}}_{\partial,A}:\{u \in C^{2, \beta}(\Cout): u \text{ is } L^2(\Cout, g_{\Ssn}) \text{-orthogonal to } \mathcal H_1[\Cout]
\} \to C^{2, \beta}(\Lambda)
\]such that if $u$ is in the domain of $\mathcal R^{\mathrm{out}}_{\partial,A}$ and $\tilde V^{\mathrm{out}}:= \mathcal R^{\mathrm{out}}_{\partial, A}(u)$ then
\begin{enumerate}
\item $\mathcal L_{g_A} \tilde V^{\mathrm{out}} = 0$ on $\Lambda$,
\item $\tilde V^{\mathrm{out}}|_{\Cout}=u$ and $\tilde V^{\mathrm{out}}|_{\Cin}=0$,
\item $\|\tilde V^{\mathrm{out}}:C^{2,\beta}(\Lambda, s,g_A,s^{\gamma})\| \leq C(\beta, \gamma)\sout^{-\gamma}\|u:C^{2, \beta}(\Cout, g_{\Ssn})\|$.
\end{enumerate}Set
\[
\mathcal R^{\mathrm{out}}_{\partial,\lambda} = \mathcal R^{\mathrm{out}}_{\partial,A} - \mathcal R^{\mathrm{out}}_{\Lambda,\lambda} \mathcal L_g^\lambda\mathcal R^{\mathrm{out}}_{\partial,A}.
\]The previous estimates immediately imply the result.
\end{proof}
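\begin{remark}
To see that the composition above has the stated properties, note that since $\mathcal R^{\mathrm{out}}_{\Lambda,\lambda}$ is a right inverse for $\mathcal L^\lambda_g$,
\[
\mathcal L^\lambda_g \mathcal R^{\mathrm{out}}_{\partial,\lambda}(u) = \mathcal L^\lambda_g \tilde V^{\mathrm{out}} - \mathcal L^\lambda_g \mathcal R^{\mathrm{out}}_{\Lambda,\lambda}\left(\mathcal L^\lambda_g \tilde V^{\mathrm{out}}\right) = \mathcal L^\lambda_g \tilde V^{\mathrm{out}} - \mathcal L^\lambda_g \tilde V^{\mathrm{out}} = 0,
\]
while by \ref{RLambda} the correction term $\mathcal R^{\mathrm{out}}_{\Lambda,\lambda}\left(\mathcal L^\lambda_g \tilde V^{\mathrm{out}}\right)$ restricts to $\mathcal H_1[\Cout]$ on $\Cout$ and vanishes on $\Cin$. This accounts for the boundary behavior and, via the smallness provided by \ref{annularlemma}, for the estimate on $V^{\mathrm{out}}|_{\Cout}-u$.
\end{remark}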
We now introduce Dirichlet solutions to $\mathcal L_g^\lambda$ for $\lambda$ in the specified region. These solutions will allow us to understand the behavior of the low harmonics of any function defined on $\Lambda$.
\begin{definition}\label{annulardecaydef}For any $0\leq |\lambda| <(4\rout)^{-1}$ and $i=0, \dots, n$, let $V_i^\lambda[\Lambda, a_1, a_2]$, $\widetilde V_i[\Lambda, a_1, a_2]$ denote solutions to the Dirichlet problem given by
\[
\mathcal L_g^\lambda V_i^\lambda[\Lambda, a_1, a_2] = 0, \qquad \mathcal L_{g_A} \widetilde V_i[\Lambda, a_1, a_2]=0
\]with boundary conditions
\begin{align*}
V_0^\lambda[\Lambda, a_1, a_2] &= \widetilde V_0[\Lambda, a_1, a_2] =a_1 \text{ on } C^{\mathrm{out}}\\
V_0^\lambda[\Lambda, a_1, a_2] &= \widetilde V_0[\Lambda, a_1, a_2] =a_2 \text{ on } C^{\mathrm{in}}\\
V_i^\lambda[\Lambda, a_1, a_2] &= \widetilde V_i[\Lambda, a_1, a_2] = a_1 \phi_i \text{ on } C^{\mathrm{out}}, \text{ for } i =1, \dots, n\\
V_i^\lambda[\Lambda, a_1, a_2] &= \widetilde V_i[\Lambda, a_1, a_2] = a_2 \phi_i \text{ on } C^{\mathrm{in}}, \text{ for } i =1, \dots, n.
\end{align*}
\end{definition}
Recall that $s(\rin) = \rin =: \ssin$ and set $\sout := s(\rout)$.
By \eqref{soverr}, $|\sout/\rout -1|\leq 4\delta$.
Recalling \ref{phidef}, we observe that
\[
\widetilde V_0[\Lambda, 1, 0] = \frac{s^{2-n} - s^{2-n}_{\mathrm{in}}}{s^{2-n}_{\mathrm{out}} - s^{2-n}_{\mathrm{in}}},\qquad \widetilde V_0[\Lambda, 0, 1] = \frac{s^{2-n} - s^{2-n}_{\mathrm{out}}}{s^{2-n}_{\mathrm{in}}-s^{2-n}_{\mathrm{out}}},
\]
\[
\widetilde V_i[\Lambda, 1, 0]= \frac{s-s^n_{\mathrm{in}}s^{1-n}}{s_{\mathrm{out}} -s^n_{\mathrm{in}}s^{1-n}_{\mathrm{out}}} \phi_i , \qquad \widetilde V_i[\Lambda, 0,1]=\frac{s-s^n_{\mathrm{out}}s^{1-n}}{s_{\mathrm{in}} -s^n_{\mathrm{out}}s^{1-n}_{\mathrm{in}}}\phi_i.
\]
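\begin{remark}
These formulas can be verified directly, assuming (as the notation suggests) that $\mathcal L_{g_A}$ agrees up to a positive factor with the Euclidean Laplacian on the flat annulus in the coordinates $(s,\bt)$. The radial functions $1$ and $s^{2-n}$ are harmonic, as are $s\,\phi_i$ and $s^{1-n}\phi_i$ since $\Delta_{\Ssn}\phi_i = -(n-1)\phi_i$; each displayed combination is then the unique such solution equal to $1$ (respectively $\phi_i$) on the prescribed boundary sphere and vanishing on the other. For instance, at $s=s_{\mathrm{in}}$ the numerator of $\widetilde V_i[\Lambda,1,0]$ is $s_{\mathrm{in}} - s_{\mathrm{in}}^n s_{\mathrm{in}}^{1-n}=0$, while at $s=s_{\mathrm{out}}$ it equals the denominator.
\end{remark}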
\begin{lemma}\label{annulardecaylemma}
For each $0 \leq |\lambda| <(4\rout)^{-1}$,
$V_0^\lambda$ is constant on each meridian sphere and each $V_i^\lambda$ is a multiple of $\phi_i$ on each meridian sphere.
Moreover, there exists a choice as in \ref{annularlemma} of $\epsilon_1>0$ small enough so the following hold:
\begin{enumerate}
\item $\|V_0^\lambda[\Lambda,1,0]-\widetilde V_0[\Lambda,1,0]:C^{2,\beta}(\Lambda, r,g)\|\leq C(\beta)\epsilon_1$.
\item $\|V_0^\lambda[\Lambda,0,1]-\widetilde V_0[\Lambda,0,1]:C^{2,\beta}(\Lambda, r,g, (\rin/r)^{n-2})\|\leq C(\beta)\epsilon_1$.
\item $\|V_i^\lambda[\Lambda,1,0]-\widetilde V_i[\Lambda,1,0]:C^{2,\beta}(\Lambda, r,g,r)\|\leq C(\beta)\epsilon_1$.
\item $\|V_i^\lambda[\Lambda,0,1]-\widetilde V_i[\Lambda,0,1]:C^{2,\beta}(\Lambda, r,g,(\rin/r)^{n-1})\|\leq C(\beta)\epsilon_1$.
\end{enumerate}
\end{lemma}
\begin{proof}
By inspection $\widetilde V_0[\Lambda, 1,0], \widetilde V_0[\Lambda, 0,1]$ satisfy the estimates
\[
\|\widetilde V_0[\Lambda,1,0]:C^{2,\beta}(\Lambda,r,g)\| \leq C(\beta),
\]
\[
\|\widetilde V_0[\Lambda,0,1]:C^{2,\beta}(\Lambda,r,g,r^{2-n})\| \leq C(\beta) \rin^{n-2}.
\]
By \ref{annularlemma},
\[
\|\mathcal L^\lambda_g \widetilde V_0[\Lambda,1,0]:C^{0,\beta}(\Lambda,r,g, r^{-2})\| \leq C(\beta)\epsilon_1,
\]
\[
\|\mathcal L^\lambda_g \widetilde V_0[\Lambda,0,1]:C^{0,\beta}(\Lambda,r,g,r^{-n})\| \leq C(\beta )\rin^{n-2}\epsilon_1.
\]
Using \ref{RLambda} applied to the operator $\mathcal L_g^\lambda$ (with $\gamma=0$), let $\widehat V_{\mathrm{out}}=\mathcal R_\Lambda^{\mathrm{out}}(\mathcal L^\lambda_g \widetilde V_0[\Lambda,1,0])$ and $\widehat V_{\mathrm{in}}=\mathcal R_\Lambda^{\mathrm{in}}(\mathcal L^\lambda_g \widetilde V_0[\Lambda,0,1])$.
Then
\[
\| \widehat V_{\mathrm{out}}:C^{2,\beta}(\Lambda,r,g)\| \leq C(\beta)\epsilon_1,
\qquad
\| \widehat V_{\mathrm{in}}:C^{2,\beta}(\Lambda,r,g,r^{2-n})\| \leq C(\beta)\rin^{n-2}\epsilon_1.
\]Note that the boundary data is in $\mathcal H_0[\Cout], \mathcal H_0[\Cin]$.
Set
\[
V:=\widetilde V_0[\Lambda,A_{\mathrm{out}},A_{\mathrm{in}}] - A_{\mathrm{out}}\widehat V_{\mathrm{out}}- A_{\mathrm{in}}\widehat V_{\mathrm{in}},
\]where $A_{\mathrm{out}}, A_{\mathrm{in}}$ are chosen such that
\[
V |_{\Cout} = 1, \quad V |_{\Cin} =0.
\]Then $\mathcal L^\lambda_g V=0$ by construction and since the Dirichlet problem has a unique solution
\[
V^\lambda_0[\Lambda,1,0] = V.
\] By definition,
\[
\left\{\begin{array}{ll} 1&= A_\mathout - A_\mathout \widehat V_\mathout(\rout) - A_\mathin \widehat V_\mathin (\rout)\\
0&= A_\mathin - A_\mathout \widehat V_\mathout(\rin) - A_\mathin \widehat V_\mathin(\rin).
\end{array}\right.
\]Inspection of the estimates implies that $|1-A_\mathout| \leq C(\beta)\epsilon_1$ and $|A_\mathin| \leq C(\beta) \epsilon_1$. Item (1) then follows from the triangle inequality and all previous estimates.
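\begin{remark}
The coefficient bounds can be seen as follows. Writing the system as $(I-M)\begin{pmatrix}A_\mathout\\ A_\mathin\end{pmatrix} = \begin{pmatrix}1\\0\end{pmatrix}$, the entries of $M$ are the boundary values $\widehat V_\mathout(\rout)$, $\widehat V_\mathin(\rout)$, $\widehat V_\mathout(\rin)$, $\widehat V_\mathin(\rin)$, each of size at most $C(\beta)\epsilon_1$ by the estimates above (note $|\widehat V_\mathin(\rout)| \leq C(\beta)\epsilon_1\rin^{n-2}\rout^{2-n} \leq C(\beta)\epsilon_1$). For $\epsilon_1$ small the Neumann series $(I-M)^{-1} = I + M + M^2 + \cdots$ converges, so
\[
\begin{pmatrix}A_\mathout\\ A_\mathin\end{pmatrix} = \begin{pmatrix}1\\0\end{pmatrix} + O(\epsilon_1),
\]
which gives $|1-A_\mathout| \leq C(\beta)\epsilon_1$ and $|A_\mathin| \leq C(\beta)\epsilon_1$.
\end{remark}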
For item (2), set
\[
V:=\widetilde V_0[\Lambda,A_{\mathrm{out}},A_{\mathrm{in}}] - A_{\mathrm{out}}\widehat V_{\mathrm{out}}- A_{\mathrm{in}}\widehat V_{\mathrm{in}},
\]where $A_{\mathrm{out}}, A_{\mathrm{in}}$ are now chosen such that
\[
V |_{\Cout} = 0, \quad V |_{\Cin} =1.
\]As before, the choice of boundary data and uniqueness of Dirichlet solutions implies that
\[
V^\lambda_0[\Lambda,0,1]= V.
\]
Note that in this case
\[
\left\{\begin{array}{ll} 0&= A_\mathout - A_\mathout \widehat V_\mathout(\rout) - A_\mathin \widehat V_\mathin (\rout)\\
1&= A_\mathin - A_\mathout \widehat V_\mathout(\rin) - A_\mathin \widehat V_\mathin(\rin).
\end{array}\right.
\]
Again, the estimates imply that $|A_\mathout| \leq C(\beta) \epsilon_1(\frac{\rin}{\rout})^{n-2}$ and $|1-A_\mathin| \leq C(\beta) \epsilon_1$.
For the estimates on $V_i^\lambda[\Lambda,1,0]$, $V_i^\lambda[\Lambda,0,1]$ we note that
\[
\|\mathcal L^\lambda_g \widetilde V_i[\Lambda,1,0]:C^{0,\beta}(\Lambda,r,g, r^{-1})\| \leq C(\beta)\epsilon_1,
\]
\[
\|\mathcal L^\lambda_g \widetilde V_i[\Lambda,0,1]:C^{0,\beta}(\Lambda,r,g,(\rin/r)^{n-1}r^{-2})\| \leq C(\beta)\epsilon_1.
\]For $\widehat V_i[1,0] := \mathcal R_\Lambda^{\mathrm{out}}(\mathcal L^\lambda_g \widetilde V_i[\Lambda,1,0])$ and $\widehat V_i[0,1]:= \mathcal R_\Lambda^{\mathrm{in}}(\mathcal L^\lambda_g \widetilde V_i[\Lambda,0,1])$ we have the estimates
\[
\| \widehat V_i[1,0]:C^{2,\beta}(\Lambda,r,g,r)\| \leq C(\beta)\epsilon_1,
\]
\[\| \widehat V_i[0,1]:C^{2,\beta}(\Lambda,r,g,(\rin/r)^{n-1})\| \leq C(\beta)\epsilon_1.
\]Note that the boundary data is in $\mathcal H_1[\Cout], \mathcal H_1[\Cin]$.
Using these estimates with the same techniques previously outlined implies the result.
\end{proof}
\subsection*{Solving the linearized equation semi-locally on $\Spext, \Smext$}
The goal of this subsection is to prove \ref{linearpluslemma} and \ref{linearminuslemma} which provide
semi-local estimates on $\Spext$ and $\Smext$.
In contrast to \cite{BKLD}
we do not attempt to solve a Dirichlet problem with zero boundary data.
Instead, we solve an ODE where solutions to the ODE are allowed to grow at a particular rate back toward the nearest central sphere.
Throughout the subsection we will decompose functions by their projections onto the various eigenspaces of the operator $\Delta_{\Ssn}$.
For this reason, we introduce the following notation.
\begin{definition}
Let $L_k$ denote the projection of the operator $\mathcal L_g$ onto the $k$-th space of eigenfunctions for the operator $\Delta_{\Ssn}$. That is,
\[
L_k:= \frac 1{r^2} \partial_{tt} + \frac{n-2}{r^2}\frac {r'}r \partial_t +\left[n(1+(n-1)\tau^2 r^{-2n})-\frac{k}{r^2}(n-2+k)\right].
\]
\end{definition}
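\begin{remark}
The formula for $L_k$ encodes the usual separation of variables. Assuming
\[
\mathcal L_g = \frac 1{r^2}\left(\partial_{tt} + (n-2)\frac{r'}{r}\,\partial_t + \Delta_{\Ssn}\right) + n\left(1+(n-1)\tau^2 r^{-2n}\right),
\]
which is consistent term by term with the expression for $L_k$ above, if $\varphi$ satisfies $\Delta_{\Ssn}\varphi = -k(n-2+k)\varphi$ then $\mathcal L_g\left(f(t)\,\varphi\right) = (L_k f)\,\varphi$, so $\mathcal L_g$ acts diagonally with respect to the eigenspace decomposition.
\end{remark}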
We will use the projected operators to decompose the local linear problems and determine separate estimates for the high and the low eigenvalues.
For ease of notation, we introduce the following decomposition which we will use throughout this subsection.
\begin{definition}\label{f_decomp}For $j=0, \dots, n$, let $\phi_j$ be defined as in \ref{phidef}. For $j \geq n+1$, choose $\phi_j$ such that $\{\phi_{n+1}, \phi_{n+2}, \dots\}$ is an $L^2$ orthonormal basis for the remaining eigenspaces of $\Delta_{\Ssn}$. (Recall that $\{\phi_0, \dots, \phi_n\}$ is an $L^2$-orthogonal basis for the lowest two eigenspaces of $\Delta_{\Ssn}$.)
Let $f \in C^{k,\beta}$ on $\Spext$ or $\Smext$.
We define the decompositions
\[
f(t,\bt)= \sum_{i=0}^\infty f_i(t) \phi_i = f_0 + f_1 + f_{\mathrm{high}}
\
\text{ where }
\
f_1 := \sum_{i=1}^n f_i(t) \phi_i,
\text{ }
f_{\mathrm{high}} := \sum_{i=n+1}^\infty f_i(t) \phi_i.
\]
\end{definition}
We first consider the linear problem for functions with no low harmonics.
\begin{lemma}\label{highharmonicslemma}
Let $\beta \in (0,1)$, $\gamma \in (1,2)$. For $\underline{b}$ chosen as in \ref{annularlemma} and $\maxT>0$ satisfying the requirements of \ref{annularlemma} and the inequality $\maxT^{\frac 1{n-1}} \leq 1/(2n^2)$, let $b$ satisfy \ref{ass:b}. Then
there exist linear maps $\mathcal R_{\mathrm{high}}^+, \mathcal R_{\mathrm{high}}^-$ where
\[
\mathcal R_{\mathrm{high}}^\pm:\{ E^\pm \in C^{0,\beta}(\widetilde S^\pm): E^\pm = E_{\mathrm{high}}^\pm, \supp(E^\pm) \subset S^\pm_1\} \to C^{2,\beta}(\widetilde S^\pm)
\]such that for $E^\pm$ in the domain of $\mathcal R_{\mathrm{high}}^\pm$, and $f^\pm:=\mathcal R^\pm_{\mathrm{high}}(E^\pm)$,
\begin{enumerate}
\item \label{onef}$\mathcal L_g f^\pm = E^\pm$.
\item \label{twof}$f^\pm = f_{\mathrm{high}}^\pm$.
\item \label{threef}$f^\pm = 0$ on $\partial \widetilde S^\pm$.
\item $\|f^+:C^{2,\beta}(\Spext,r,g,r^\gamma)\|\leq C(\underline{b},\beta,\gamma)\|E^+:C^{0, \beta}(\Sp_1,r,g)\|$.
\item $\|f^-:C^{2,\beta}(\Smext,r,g,(\rin/r)^{n-2+\gamma})\|\leq C(\beta,\gamma)\|E^-:C^{0, \beta}(\Sm_1,r,g)\|$.
\end{enumerate}
Finally, $\mathcal R_{\mathrm{high}}^\pm$ depend continuously on $\tau$.
\end{lemma}
The proof will follow from the decay estimates determined on $\Lambda$ and the following lemma.
\begin{lemma}\label{highprojlemma}
For a fixed $n \in \mathbb N$, $n>2$, consider $\underline{b}$ chosen as in \ref{annularlemma} and $\maxT>0$ satisfying the requirements of \ref{annularlemma} and the inequality $\maxT^{\frac 1{n-1}} \leq 1/(2n^2)$. Then the following holds:
For any $0<|\tau|<\maxT$ and $b$ satisfying \ref{ass:b}, let $\widetilde S^\pm$ be the domain defined by $\tau$ and $b$ as in \ref{domaindefinitions}.
Consider the two sets of functions
$X^\pm:=\{ f\in L^2(\widetilde S^\pm) \, : \, f=f_{\mathrm{high}}, f|_{\partial \widetilde S^\pm}=0,\int_{\widetilde S^\pm} f^2 =1\}$.
Then
\[
\inf_{f \in X^\pm}{-\int_{\widetilde S^\pm} f \mathcal L_g f}
=
\inf_{f \in X^\pm}{ \int_{\widetilde S^\pm} |\nabla f|^2 - |A|^2 f^2 }
\geq 1.
\]
\end{lemma}
\begin{proof}
Let $f= \sum_{i=n+1}^\infty f_i \phi_i$. Since $i \geq n+1$,
\[\int_{\Ssn}|\nabla_{\Ssn}\phi_i|^2dg_{\Ssn}\geq 2n\int_{\Ssn}\phi_i^2dg_{\Ssn} = 2n.
\]Therefore, recalling \eqref{modA},
\begin{align}
\notag \int_{\widetilde S^\pm} |\nabla f|^2 - |A|^2 f^2 dx &=\int_{\widetilde S^\pm} \frac 1{r^2}|\sum_i (f_i'\phi_i+ f_i \nabla_{\Ssn}\phi_i)|^2 - |A|^2 f^2 \, dx\\
& \geq \int r^{n-2}\sum_i (f_i')^2 + nr^{n-2}\sum_i f_i^2(2-r^2-(n-1)\tau^2r^{2-2n}) \, dt.\label{k1}
\end{align}On $\widetilde S^+$, $r \in (|\tau|^{\frac 1{n-1}}/\delta,1+ O(|\tau|))$. Therefore, by the bound on $\delta>0$ imposed in \ref{annularlemma}, \eqref{k1} is bounded below by
\[
n\int r^{n-2} \sum_i f_i^2 \, dt\geq \frac n2 \int r^n \sum_i f_i^2 \, dt \geq 1.
\]The last inequality follows since $\|f\|_{L^2(\widetilde S^+)} = 1$.
It remains to show the estimate on $\widetilde S^-$. We demonstrate that the positive terms on the right hand side of \eqref{k1} are large enough to more than overcome the negative contribution. First observe that
\[
-2 \int r^{n-2} w' f_i f_i' \, dt =- \int r^{n-2}w'(f_i^2)'\, dt = \int (r^{n-2} w')' f_i^2 \, dt = \int r^{n-2}f_i^2(w''+ (n-2)(w')^2)\, dt.
\]
Now we use Cauchy-Schwarz and an absorbing inequality to note that
\begin{align*}
-2 \int r^{n-2} w' f_i f_i' \, dt & \leq 2\left( \int r^{n-2}(w')^2 f_i^2 \, dt \cdot \int r^{n-2} (f_i')^2 \, dt\right)^{1/2} \\
& \leq (n-2) \int r^{n-2}(w')^2 f_i^2 \, dt + \frac 1{n-2}\int r^{n-2} (f_i')^2 \, dt.
\end{align*}
Combining the above and simplifying,
\[
(n-2) \int r^{n-2}f_i^2w'' \, dt \leq \int r^{n-2} (f_i')^2 \, dt .
\]
Thus, recalling \eqref{w_derivs}
\begin{align*}
\int r^{n-2} (f_i')^2 &+ nr^{n-2} f_i^2(2-r^2-(n-1)\tau^2r^{2-2n}) \, dt\\
& \geq \int r^{n-2}f_i^2 \left((n-2)w'' + 2n-nr^2 -n(n-1)\tau^2 r^{2-2n} \right)\, dt\\
&= \int r^{n-2}f_i^2\left( 2n-2(n-1)r^2 +(n-2)^2 \tau r^{2-n} -2(n-1) \tau^2r^{2-2n}\right)\, dt.
\end{align*}
We simplify the above expression by using \eqref{w_derivs} to note that
\[
(2n-2)(w')^2 = (2n-2) - 2(n-1)r^2 - 4(n-1)\tau r^{2-n} - 2(n-1)\tau^2 r^{2-2n}.
\]Thus,
\begin{align*}
\int r^{n-2} (f_i')^2+ nr^{n-2} f_i^2(2-r^2-(n-1)\tau^2r^{2-2n}) \, dt &\geq \int r^{n-2}f_i^2\left( (2n-2)(w')^2 + 2+ n^2 \tau r^{2-n} \right) \, dt \\
& \geq \int r^{n-2}f_i^2\left( 2+ n^2 \tau r^{2-n} \right) \, dt .
\end{align*}
Now, since $r \geq |\tau|^{\frac 1{n-1}}$, the hypothesis on $\maxT$ implies that
\[
2+ n^2 \tau r^{2-n} \geq 1.5 + \left(0.5 -n^2|\tau|^{1-\frac{n-2}{n-1}}\right) \geq 1.5 \geq r^2.
\]
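\begin{remark}
Spelling out the arithmetic: since $2-n<0$ and $r \geq |\tau|^{\frac 1{n-1}}$,
\[
n^2|\tau|\, r^{2-n} \leq n^2 |\tau|\cdot |\tau|^{\frac{2-n}{n-1}} = n^2|\tau|^{\frac 1{n-1}} \leq n^2 \maxT^{\frac 1{n-1}} \leq \frac 12,
\]
where the last inequality is the hypothesis $\maxT^{\frac 1{n-1}} \leq 1/(2n^2)$.
\end{remark}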
Immediately we observe that
\[
-\int_{\Smext} f \mathcal L_g f \geq \int r^n \sum_if_i^2 \, dt= 1.
\]
\end{proof}
We can now complete the proof of \ref{highharmonicslemma}.
\begin{proof}
Given $E=E_{\mathrm{high}}$, the existence of $f$ satisfying items \eqref{onef}, \eqref{twof}, \eqref{threef} follows from standard theory using the coercivity estimate provided. We determine the decay estimates in the following manner. First, the coercivity estimate implies that $\|f\|_{L^2(\widetilde S)} \leq C \|E\|_{L^2(\widetilde S)}$. The uniform geometry on $S_2$ (in the natural scaling, which is the metric we use) allows us to boost these $L^2$ estimates to $C^{k,\beta}$ estimates using Schauder theory and De Giorgi--Nash--Moser techniques. Thus, for $S=S^\pm$,
\[
\|f:C^{2,\beta}(S_2,r,g)\|\leq C(\beta)\|E:C^{0,\beta}(S_1,r,g,r^{-2})\|\leq C(\underline{b},\beta)\|E:C^{0,\beta}(S_1,r,g)\|
\]where the second inequality follows from \ref{radiuslemma}. Using the estimates of \ref{linearcor}, since $\mathcal L_g f = 0$ on $\widetilde S \backslash S_1$, we then determine for $S=S^+$,
\[\|f:C^{2,\beta}(\Lambda, r,g,r^{\gamma})\| \leq \rout^{-\gamma}C(\beta, \gamma)\|f:C^{2, \beta}(\Cout_1, g_{\Ssn})\|,\]and for $S=S^-$,
\[\|f:C^{2,\beta}(\Lambda,r, g,(\rin/r)^{n-2+\gamma})\| \leq C(\beta, \gamma)\|f:C^{2, \beta}(\Cin_1, g_{\Ssn})\|.\]Combining these estimates as appropriate, and noting that $\rout^{-\gamma} \leq C(\underline{b}, \gamma)$, gives the result.
\end{proof}
As the previous lemma provides solvability for high harmonics and good estimates on the solutions, we solve the semi-local linearized problem by appealing to \ref{annulardecaylemma} to understand the behavior of $f_0, f_1$.
To easily adapt this argument to the global problem at hand, we will use notation that, in the setting of a single Delaunay surface, makes little sense. We presume that $\Spext:= \Lc \cup \Sp \cup \Lf$ and do not explain the definitions of $\Lc, \Lf$ until they are needed later. Suffice it to say that on $\Lc$ we allow our solution to grow toward the boundary but on $\Lf$ we force the solution to decay to the boundary at a prescribed rate.
\begin{lemma}\label{linearpluslemma}Given $\beta \in (0,1), \gamma \in (1,2)$,
for each $\Sp$ there exists a linear map
\[
\mathcal R_{\Spext}:\{E\in C^{0,\beta}(\Spext): E \text{ is supported on } \Sp_1\} \to C^{2,\beta}(\Spext, g)
\]such that the following hold for $E$ in the domain of $\mathcal R_{\Spext}$ and $\varphi = \mathcal R_{\Spext}(E)$:
\begin{enumerate}
\item $\mathcal L_g \varphi = E$ on $\Spext$.
\item $\|\varphi:C^{2,\beta}(\Sp_1,r,g)\| \leq C(\underline{b},\beta)\|E:C^{0,\beta}(\Sp_1, r,g)\|$.
\item $\|\varphi:C^{2,\beta}(\Lf,r,g,r^\gamma)\| \leq C(\underline{b},\beta, \gamma)\|E:C^{0,\beta}(\Sp_1,r,g)\|$.
\item $\|\varphi:C^{2,\beta}(\Lc,r,g,(\rin/r)^{n-1})\| \leq C(\underline{b},\beta)\rin^{1-n}\|E:C^{0,\beta}(\Sp_1,r,g)\|$.
\item $\mathcal R_{\Spext}$ depends continuously on $\tau$.
\end{enumerate}
\end{lemma}
\begin{proof}Consider $E$ in the domain of $\mathcal R_{\Spext}$ and decompose $E = E_0 + E_1 + E_{\mathrm{high}}$. For $E_0$, there exists a unique $\varphi_0(t)$ such that $L_0 \varphi_0 = E_0$ and $\varphi_0(2\Pdo + b+1)=\varphi_0'(2\Pdo +b+1)=0$. Since $E_0 \equiv 0$ on $\Spext \backslash \Sp_1$, $\varphi_0 \equiv 0$ on $\Lf \backslash \Sp_1=: \Lf_{1,0}$. By standard ODE theory, we note that
\[
\|\varphi_0:C^{2,\beta}(\Sp_2,r,g)\|\leq C(\beta)\|E_0:C^{0,\beta}(\Sp_1,r,g,r^{-2})\|\leq C(\underline{b},\beta)\|E_0:C^{0,\beta}(\Sp_1,r,g)\|
\]where the final inequality uses \ref{radiuslemma}. At $t = 2\Pdo -(b+1)$, determine the unique $a_0, b_0$ such that
\[
\varphi_0(2\Pdo -(b+1)) = a_0 V_0[\Lc,1,0] (2\Pdo -(b+1))+ b_0V_0[\Lc,0,1](2\Pdo -(b+1))
\] where $V_0$ are the functions defined in \ref{annulardecaydef}. Then on $\Lc \backslash \Sp_1=: \Lc_{0,1}$
\[
\varphi_0= a_0V_0[\Lc,1,0] + b_0V_0[\Lc,0,1].
\] Combining the estimates of \ref{annulardecaylemma} with the ones above implies
\[
\|\varphi_0:C^{2,\beta}(\Lc,r,g, (\rin/r)^{n-1})\|\leq C(\beta)\rin^{1-n}\|E_0:C^{0,\beta}(\Sp_1,r,g)\|.
\]For $E_1$ we proceed in a similar fashion and produce $\varphi_1$ such that $L_1 \varphi_1 = E_1$, $\varphi_1 \equiv 0$ on $\Lf_{1,0}$ and
\[
\|\varphi_1:C^{2,\beta}(\Sp_1,r,g)\|\leq C(\underline{b},\beta)\|E_1:C^{0,\beta}(\Sp_1,r,g)\|.
\]Using this estimate and the fact that $L_1\varphi_1 \equiv 0$ on $\Lc_{0,1}$, we determine $a_i, b_i$ such that
$\varphi_1 = \sum_{i=1}^n V_i[\Lc,a_i,b_i]$ on $\Lc_{0,1}$. The estimate
\[
\|\varphi_1:C^{2,\beta}(\Lc,r,g, (\rin/r)^{n-1})\|\leq C(\beta)\rin^{1-n}\|E_1:C^{0,\beta}(\Sp_1,r,g)\|
\]follows again by combining the estimates on the coefficients with the estimates of \ref{annulardecaylemma}.
Finally, for $E_{\mathrm{high}}$ we determine $\varphi_{\mathrm{high}} = \mathcal R_{\mathrm{high}}^+(E_{\mathrm{high}})$ by \ref{highharmonicslemma} which provides the decay estimate on $\Lf$ and does not contribute to growth on $\Lc$.
Setting
\[
\varphi:= \varphi_0 + \varphi_1 + \varphi_{\mathrm{high}}
\]implies the result.
\end{proof}
\begin{lemma}\label{linearminuslemma}Given $\beta \in (0,1), \gamma \in (1,2)$,
for each $\Sm$ there exists a linear map
\[
\mathcal R_{\Smext}:\{E\in C^{0,\beta}(\Smext): E \text{ is supported on } \Sm_1\} \to C^{2,\beta}(\Smext, g)
\]such that the following hold for $E$ in the domain of $\mathcal R_{\Smext}$ and $\varphi = \mathcal R_{\Smext}(E)$:
\begin{enumerate}
\item $\mathcal L_g \varphi = E$ on $\Smext$.
\item $\|\varphi:C^{2,\beta}(\Sm_1,r,g)\| \leq C(\beta,\gamma)\|E:C^{0,\beta}(\Sm_1, r,g, r^{-2})\|$.
\item $\|\varphi:C^{2,\beta}(\Lf,r,g,(\rin/r)^{n-2+\gamma})\| \leq C(\beta,\gamma)\|E:C^{0,\beta}(\Sm_1,r,g,r^{-2})\|$.
\item $\|\varphi:C^{2,\beta}(\Lc,r,g,r)\| \leq C(\beta,\gamma)\rin^{-1}\|E:C^{0,\beta}(\Sm_1,r,g,r^{-2})\|$.
\item $\mathcal R_{\Smext}$ depends continuously on $\tau$.
\end{enumerate}
\end{lemma}
\begin{proof}
The proof is essentially identical to the proof for $\Spext$, though we use the estimates appropriate for growth away from $\rin$ on $\Lc$.
We skip the details.
\end{proof}
\section{The Initial Hypersurfaces}\label{InitialSurface}
In this section we assume we are given a family of graphs $\mathcal{F}$---defined as in
\ref{FamilyDefinition}---and we construct families of initial immersions which depend
on a parameter $\underline{\tau}$ which determines an overall scaling for the weights.
The first step in the construction is to describe an abstract surface $M$ based on the central graph $\Gamma$ of $\mathcal{F}$.
At the same time we construct parametrizations for $M$ which depend on $\Gamma$ and $\underline{\tau}$.
We then define a family of immersions of $M$ into $\Rn$ which
depends on $\underline{\tau}$ and is parametrized by parameters $(d,\boldsymbol \zeta)$.
The construction of each initial immersion is based on one of the graphs of $\mathcal{F}$ chosen on the basis of $(d,\boldsymbol \zeta)$ and $\underline{\tau}$.
\begin{assumption}\label{ass:tgamma}
In what follows $\underline{b} \gg 1$ will be as in Section \ref{DelaunayLinear},
large enough to invoke all of the results of that section,
but independent of the small constant $\maxT>0$.
In this section,
we choose a small constant $\maxTG>0$ which will depend on $\maxT>0$ (and thus on $\underline{b}$) and
on $\max_{e \in E(\Gamma) \cup R(\Gamma)} \left|\hat \tau[e]\right|$ but not on the structure of the graph $\Gamma$ or on the parameters $d,\boldsymbol \zeta$.
Note in particular that $\underline{b}$ will be independent of $\maxTG$.
While we are free to decrease $\maxTG$ as necessary, we presume throughout this section that
\begin{equation}
\max_{e \in E(\Gamma) \cup R(\Gamma)} \left|\hat \tau[\Gamma(0,0),e]\right|\maxTG <\maxT/2.
\end{equation}
Moreover, the constant $\underline{\tau}$ will be chosen so that
\[
0<|\underline{\tau}| < \maxTG.
\]
\end{assumption}
Let $\tz: E(\Gamma) \cup R(\Gamma)\to \mathbb R\backslash\{0\}$ be such that
\[\tz:= \underline{\tau} \hat \tau[\Gamma(0,0),e].
\]
\begin{remark}
Note that the choice of $\maxTG$ implies that
\[
0<|\tz|<\maxT/2.
\]
\end{remark}
\subsection*{The abstract surface $M$}
Given a flexible, central graph $\Gamma$ with the rescaled function $\tau_0$, we determine an abstract surface which will be mapped into $\mathbb R^{n+1}$ by translating and rotating the maps described in Section \ref{BuildingBlocks}. We construct $M$ in the following manner, noting that $M$ depends only on $\Gamma$ and $\underline{\tau}$ and not on $d,\boldsymbol \zeta$.
\begin{definition}
\label{def:a2}
We choose $\delta'>0$, depending only on $\Gamma$,
such that for each $p \in V(\Gamma)$ and all
$e\ne e' \in E_p$ we have $| \mathbf v[p,e] - \mathbf v[p,e'] | >50\delta'$.
Recall that by \eqref{adef} this defines also a constant $a$ such that
$\tanh(a+1)=\cos(\delta')$.
\end{definition}
\begin{definition}
For $p \in V(\Gamma)$
define
\begin{equation}
M[p]= \mathbb S^n_{V_p}:=\mathbb S^n \backslash D^{\Ssn}_{V_p}(\delta')
\quad \text{ where } \quad
V_p := \{\Bv\pe \,:\, e \in E_p\}.
\end{equation}
As the length of each edge domain depends upon the period and the number of periods, we set
\begin{equation*}
\RH:= 2\Pe l[e].
\end{equation*}
For $e \in E(\Gamma)$, let
\begin{equation*}
M[e] = [a,\RH-a]\times \Ssn
\end{equation*}
while for $e \in R(\Gamma)$, let
\begin{equation*}
M[e] = [a,\infty)\times \Ssn.
\end{equation*}
\end{definition}
\begin{definition}\label{ReDef}
For $e \in E(\Gamma) \cup R(\Gamma)$, let $\RRR[e]:\Rn \to \Rn$ denote the rotation such that
\[
\RRR[e](\Be_i)=\Bv_i[e]
\]for $i=1,\dots, n+1$, where here the $\Bv_i[e]$ refer to the ordered orthonormal frame chosen in \ref{gammaframe}. (The existence of such a rotation
follows precisely because we chose an ordered frame.)
\end{definition}
\begin{definition}
Let
\[
M'= \left(\bigsqcup_{p \in V(\Gamma)}M[p]\right) \bigsqcup\left( \bigsqcup_{e \in E(\Gamma) \cup R(\Gamma)} M[e]\right)\]
and let
\begin{equation}\label{eq:sim}
M=M'/\sim
\end{equation}
where we make the following identifications:\\
For $\pe \in A(\Gamma)$ with $p=p^+[e]$ and $x \in M[e] \cap \left([a,a+1] \times \Ssn\right)$,
\[
x \sim \left(\RRR[e] \circ Y_0(x)\right) \cap M[p].
\]
For $\pe \in A(\Gamma)$ with $p=p^-[e]$ and $ x\in M[e] \cap \left([\RH-(a+1),\RH-a] \times \Ssn\right)$,
\[
x \sim \left( \RRR[e] \circ Y_0 (t-\RH,\bt )\right) \cap M[p].
\]
\end{definition}
\subsection*{Standard and transition regions}In enumerating the important regions of the graph, we frequently reference the triple $[p,e,\cdot]$ where the third
component will be described below. For each $e\in E(\Gamma) \cup R(\Gamma)$, we enumerate the standard and transition regions along
the Delaunay piece by counting upward
as we move away from each central sphere. As in \cite{BKLD}, we denote a region as \emph{standard} if the limiting geometry as $\underline{\tau} \to 0$ is well understood and a region as \emph{transition} otherwise. See Section \ref{DelSection} for a more complete description of these regions. Recall that $2l[e]$ denotes the length
of an edge $e$. Thus, an edge $e$ will have
$2l[e]-1$ standard regions and $2l[e]$ transition regions. We make precise the following definition.
\begin{definition}
\label{pendef}
We define
\begin{align*}
V_S(\Gamma) := &\{[p,e,m] :e \in E(\Gamma),\ppe\in A(\Gamma), m \in \{1, 2, \dots, l[e]\}\}\\&\bigcup\{\pen : e \in E(\Gamma),\pme\in A(\Gamma), m \in\{1, 2, \dots, l[e]-1\} \}
\\&\bigcup \{\pen : e \in R(\Gamma), [p,e] \in A(\Gamma), m \in \mathbb N\}, \\
V_S^+(\Gamma) := &\{[p,e,m] \in V_S(\Gamma) : m\text{ is even}\,\}, \\
V_S^-(\Gamma) := &\{[p,e,m] \in V_S(\Gamma) : m\text{ is odd}\,\}, \\
V_{\Lambda}(\Gamma):=&\{ [p,e,m']:e \in E(\Gamma),[p^\pm[e],e] \in A(\Gamma), m' \in \{1, 2, \dots, l[e]\}\} \\
&\bigcup \{[p,e,m'] : e \in R(\Gamma), [p,e] \in A(\Gamma), m' \in \mathbb N\}.
\end{align*}
\end{definition}
We choose this notation so that the set $V_S(\Gamma)$ enumerates every standard region on an edge or ray exactly once.
Moreover, the enumeration of the standard regions is such that it increases along $M[e]$ as one moves further away from the nearest boundary.
$V_S^+(\Gamma)$ and $V_S^-(\Gamma)$ enumerate the spherical and catenoidal regions respectively.
$V_{\Lambda}(\Gamma)$ enumerates every transition region exactly once.
Notice that $V_S(\Gamma) \subset V_{\Lambda}(\Gamma)$ and $V_{\Lambda}(\Gamma) \backslash V_S(\Gamma)= \{ [p^-[e],e,l[e]] : e \in E(\Gamma)\}$.
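As a concrete check of these counts, consider an edge $e \in E(\Gamma)$ with $l[e]=2$. Then $V_S(\Gamma)$ contains the three triples
\[
[p^+[e],e,1],\quad [p^+[e],e,2],\quad [p^-[e],e,1],
\]
matching the count of $2l[e]-1=3$ standard regions, while $V_{\Lambda}(\Gamma)$ contains the four triples $[p^\pm[e],e,1]$, $[p^\pm[e],e,2]$, matching the $2l[e]=4$ transition regions. The only triple in $V_{\Lambda}(\Gamma)$ but not in $V_S(\Gamma)$ is $[p^-[e],e,2]=[p^-[e],e,l[e]]$.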
We now define regions of particular importance.
A verbal description of these regions follows.
\begin{figure}[h]
\begin{center}
\includegraphics[width=5in]{STRPthin}
\caption{Two schematic renderings of regions of $M$. The top one is near a vertex $p$ with $|E_p|=3$ and the bottom one at a standard region associated to the center of an edge $e$. Note that, in the figure, standard regions appear spherical and transition regions appear cylindrical.}\label{STRPthin}
\end{center}
\end{figure}
Recall that $a$ is determined by \ref{def:a2}.
The constant $\underline{b}$ determines the size of each standard and transition region.
We use $x,y$ in subscripts to modify the size of the regions and the boundary circles.
For example, $S[p] \subset S_x[p]$ while $\widetilde S_x[p] \subset \widetilde S[p]$.
\begin{definition}
\label{regions}
For $p \in V(\Gamma)$,
$\pen\in V_S(\Gamma)$,
and $[p,e,m']\in V_{\Lambda}(\Gamma)$, (recall \ref{pendef}),
we define the following.
\begin{enumerate}
\item \label{centstand} $S_x[p]:=M[p] \bigsqcup_{\{e|p=p^+[e]\}}\left(M[e]\cap[a, \underline{b}+x]\times \Ssn\right)$\\
\indent \indent \indent \indent $\bigsqcup_{\{e|p=p^-[e]\}}\left(M[e] \cap [\RH-(\underline{b}+x),\RH-a] \times \Ssn\right)$
\item \label{centextstand}$\widetilde S_x[p]:=M[p]\bigsqcup_{\{e|p=p^+[e]\}}\left(M[e]\cap[a, \Pe-( \underline{b}+x)]\times \Ssn\right)$
\\ \indent \indent \indent\indent $\bigsqcup_{\{e|p=p^-[e]\}}\left(M[e] \cap [\RH-(\Pe-( \underline{b}+x)),\RH-a] \times \Ssn\right)$
\item \label{stand1}$S_x[p^+[e],e,m]:= M[e]\cap [m\Pe -(\underline{b}+x),m\Pe +(\underline{b}+x)]\times \Ssn$
\item \label{stand2}$S_x[p^-[e],e,m]:= M[e]\cap [\RH-(m\Pe +(\underline{b}+x)),\RH-(m\Pe -(\underline{b}+x))]\times \Ssn$
\item \label{extstand1}$\widetilde S_x[p^+[e],e,m]:= M[e]\cap [(m-1)\Pe +(\underline{b}+x),(m+1)\Pe -(\underline{b}+x)]\times \Ssn$
\item \label{extstand2}$\widetilde S_x[p^-[e],e,m]:= M[e]\cap [\RH-((m+1)\Pe -(\underline{b}+x)),\RH-((m-1)\Pe +(\underline{b}+x))]\times \Ssn$
\item\label{neckregion1} $\Lambda_{x,y}[p^+[e],e,m']:= M[e]\cap [(m'-1)\Pe +(\underline{b}+x),m'\Pe-(\underline{b}+y)]\times \Ssn$
\item\label{neckregion2} $\Lambda_{x,y}[p^-[e],e,m']:= M[e]\cap [\RH-(m'\Pe-(\underline{b}+y)),\RH-((m'-1)\Pe +(\underline{b}+x))]\times \Ssn$
\item $\Cout_x[p^+[e],e,m']:= M[e]\cap \{(m'-1)\Pe +(\underline{b}+x)\}\times \Ssn$ for $m'$ odd,
\item $\Cout_x[p^+[e],e,m']:= M[e]\cap \{m'\Pe -(\underline{b}+x)\}\times \Ssn$ for $m'$ even,
\item $\Cout_x[p^-[e],e,m']:= M[e]\cap \{\RH-((m'-1)\Pe +(\underline{b}+x))\}\times \Ssn$ for $m'$ odd,
\item $\Cout_x[p^-[e],e,m']:= M[e]\cap \{\RH-(m'\Pe -(\underline{b}+x))\}\times \Ssn$ for $m'$ even,
\item $\Cin_x[p^+[e],e,m']:= M[e]\cap \{m'\Pe -(\underline{b}+x)\}\times \Ssn$ for $m'$ odd,
\item $\Cin_x[p^+[e],e,m']:= M[e]\cap \{(m'-1)\Pe +(\underline{b}+x)\}\times \Ssn$ for $m'$ even,
\item $\Cin_x[p^-[e],e,m']:= M[e]\cap \{\RH -(m'\Pe -(\underline{b}+x))\}\times \Ssn$ for $m'$ odd,
\item $\Cin_x[p^-[e],e,m']:= M[e]\cap \{\RH-((m'-1)\Pe +(\underline{b}+x))\}\times \Ssn$ for $m'$ even,
\end{enumerate}
The constant $\underline{b}>a+5$ was initially determined in Section \ref{DelaunayLinear} but may be further increased in forthcoming sections as necessary. We let $0\leq x,y<\Pe-\underline{b}$ where positivity of $\Pe-\underline{b}$ is guaranteed by the smallness of $\maxTG$ in relation to $\underline{b}$.
We set the convention to drop the subscript $x$ when $x=0$; i.e. $S[p]=S_0[p]$. Moreover, we denote $\Lambda_{x,x}=\Lambda_x$.
\end{definition}
Notice that unlike in the case $n=2$, not all of the regions $S\pen$ have the same geometric limit as $\underline{\tau} \to 0$. With this notation, each $S\pen\subset M$ with $\pen \in V_S^+(\Gamma)$ will correspond to a \emph{standard region} or \emph{almost spherical region}. For $\pen \in V_S^-(\Gamma)$, $S\pen$ corresponds to a \emph{standard region} or \emph{almost catenoidal region}. Each $\Lambda[p,e,m']$ will correspond to a \emph{transition} or \emph{neck} region. For $e \in E(\Gamma)$, the middle standard region on $M[e]$ bears the label $S[p^+[e],e,l[e]]$. Each $\widetilde S\pen$ is an \emph{extended standard region} and contains both the standard region and the two adjacent transition regions. The $\widetilde S[p]$ are \emph{central extended standard regions} and contain all adjacent transition regions, where adjacency is determined by $e \in E_p$.
Finally, note that the spheres $\Cout, \Cin$ are enumerated so that
\[
\partial \Lambda_{x,y}[p,e,m']=\Cout_x[p,e,m'] \cup \Cin_y[p,e,m'] \text{ for } m' \text{ odd},
\]
\[
\partial \Lambda_{x,y}[p,e,m']=\Cin_x[p,e,m'] \cup \Cout_y[p,e,m'] \text{ for } m' \text{ even}.
\]The superscripts $\mathrm{out,in}$ are used to match those that are used throughout Section \ref{DelaunayLinear}.
We extend the definition here to include all meridian spheres that exhibit the same behavior under an immersion as those from \ref{domaindefinitions}.
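As a concrete instance of this enumeration, take $m'=1$ (so $m'$ is odd) and $p = p^+[e]$. Definition \ref{regions} gives
\[
\Lambda_{x,y}[p^+[e],e,1]= M[e]\cap [\underline{b}+x,\Pe-(\underline{b}+y)]\times \Ssn,
\]
whose boundary spheres are $\Cout_x[p^+[e],e,1]=M[e]\cap \{\underline{b}+x\}\times \Ssn$ and $\Cin_y[p^+[e],e,1]=M[e]\cap\{\Pe-(\underline{b}+y)\}\times \Ssn$, in agreement with the boundary identity for $m'$ odd.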
\subsection*{The graph $\Gamma(\tilde d,\tilde \ell)$}
We use the parameters ${d,\boldsymbol \zeta}$ to determine a graph in $\mathcal{F}$.
Recall that by assumption $\Gamma$ is a central graph in a family $\mathcal{F}$.
We presume throughout that $d: V(\Gamma)\to \mathbb R^{n+1}$ where
\begin{equation}\label{drestriction}
\left|d[p ]\right| \leq |\underline{\tau}|^{1 + \frac 1{n-1}} \text{ for all } p \in V(\Gamma).
\end{equation}
Choose $\tilde d \in D(\Gamma)$ (recall \ref{n1}, \ref{Def:dlz}) such that
\begin{equation}\label{Find_d}
\tilde d [\cdot]= \frac 1{\underline{\tau}}d[\cdot].
\end{equation}
Choose $\Gamma(\tilde d,0) \in \mathcal{F}$ and let
\[
\taue:= \underline{\tau} \hat \tau[\Gamma(\tilde d,0),e].
\]
\begin{remark}The smooth dependence of $\Gamma(\tilde d,0)$ on $\tilde d$ implies that
\[
0<|\td|< |\tz|(1+C|\underline{\tau}|^{\frac 1{n-1}})<\maxT.
\]
\end{remark}
We now determine the value of the function $\tilde \ell \in L(\Gamma)$ (recall \ref{n1}, \ref{Def:dlz}) that will rely -- for each $e$ -- on $l[e], \taue,$ and two vectors $\boldsymbol \zeta[p^\pm,e]\in \Rn$.
The maps $\boldsymbol \zeta[p^\pm,e]$ will effectively describe the dislocation of each attached Delaunay piece from its central sphere. Though rays are not in the domain
of $\tilde \ell$, they can be dislocated from their vertex, and thus when describing $\boldsymbol \zeta[p,e]$ we must include rays in the domain.
\begin{definition}\label{zetadef}Let $\boldsymbol \zeta \in Z(\Gamma)$ (recall \ref{n1}, \ref{Def:dlz}) such that
\begin{equation}
\boldsymbol \zeta[p,e]=\sum_{i=0}^{n} \zeta_i[p,e]\mathbf e_{i+1}.
\end{equation}
\end{definition}
\noindent As we will see, the norm of $\boldsymbol\zeta$ can be quite large compared to the norm of $d$. Throughout the paper, we allow
\begin{equation}\label{zetarestriction}
\left| \boldsymbol\zeta\right|\leq \underline C |\underline{\tau}|
\end{equation}where $\underline C$ is a large, universal constant that is independent of $\underline{\tau}$.
Let $\widetilde l \in L(\Gamma)$ such that
\begin{equation}
\widetilde l[e]:= \left(2+2\Pimd\right) l[e].
\end{equation}Thus, a Delaunay piece with $l[e]$ periods and parameter $\taue$ will have length -- i.e. axial length -- equal to $\widetilde l[e]$.
Recall \eqref{first_ell_def} which informs our choice of $\tilde \ell$.
\begin{definition}
Choose $\tilde \ell\in L(\Gamma)$ such that
\begin{equation}\label{ellprimedefinition}
2 (l[e]+\tilde \ell[e])=\left|{\boldsymbol \zeta}\ppe - \left( {\boldsymbol \zeta}\pme + (\widetilde l[e],\boldsymbol 0) \right) \right|.
\end{equation}
\end{definition}
\begin{figure}[h]
\begin{center}
\includegraphics[width=5in]{DislocationFig}
\caption{In the figure, we let $\zeta^+, \zeta^-$ correspond to $\boldsymbol \zeta[p^+[e],e], \boldsymbol \zeta[p^-[e],e]$ respectively. Also, notice that $Y_0^-$ is defined so that its center is at $(2+2\Pimd) l[e]\Be_1$.}\label{Dis}
\end{center}
\end{figure}
For clarity, we provide a systematic description of $\tilde \ell$. First, we position a segment of length $\widetilde l[e]$ so that it sits on
the positive $x_1$-axis with one end fixed at the origin. Then we dislocate the two ends of this segment corresponding to $\boldsymbol \zeta\ppe$ and $\boldsymbol \zeta\pme$
where $\boldsymbol \zeta\ppe$ is the dislocation of the origin. We then measure the length of the segment connecting these two points. Finally,
we compare that length with the length of the edge $e$ in the graph $\Gamma$.
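As a consistency check, note that in the undislocated case $\boldsymbol \zeta\ppe = \boldsymbol \zeta\pme$ the right hand side of \eqref{ellprimedefinition} is exactly $\widetilde l[e]$, and so
\[
\tilde \ell[e] = \frac{\widetilde l[e]}{2} - l[e] = \Pimd\, l[e]
\]
by the definition of $\widetilde l[e]$.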
\begin{lemma}
For $\tilde \ell$ defined as in \eqref{ellprimedefinition}, we may decrease $\maxTG>0$ so that for all $0<|\underline{\tau}|<\maxTG$, there exists $C>0$ depending on $\Gamma$ but independent of $\underline{\tau}$, such that for all $e \in E(\Gamma)$,
\begin{equation}\label{lrestriction}
\left|\tilde \ell[e]\right| \leq {C}|\underline{\tau}|^{\frac 1{n-1}}.
\end{equation}
\end{lemma}
\begin{proof}We immediately get the bounds
\[
\widetilde l[e]-2 \left|\boldsymbol \zeta\right| \leq \left|{\boldsymbol \zeta}\ppe - \left( {\boldsymbol \zeta}\pme + (\widetilde l[e],\boldsymbol 0) \right) \right| \leq \widetilde l[e] + 2\left|\boldsymbol \zeta\right|.
\]Thus,
\[
{ \frac{\widetilde l[e] - 2\left|\boldsymbol \zeta\right|}{2}}-{l[e]} \leq{\tilde \ell[e]}\leq \frac { \widetilde l[e] + 2\left|\boldsymbol \zeta\right|}{2}-{l[e]}.
\]The definition of $\widetilde l[e]$, the bound on $\boldsymbol \zeta$ given by \eqref{zetarestriction}, and the estimates of \eqref{Pim_est} then immediately imply the result.
\end{proof}
\begin{lemma}
For a central graph $\Gamma$ with an associated family $\mathcal{F}$,
we may decrease $\maxTG>0$ so that for all $0<|\underline{\tau}|<\maxTG$ and $d, \boldsymbol \zeta$ as in \eqref{drestriction}, \eqref{zetarestriction},
there exists $\Gamma(\tilde d, \tilde \ell) \in \mathcal{F}$ with $\tilde d$,
$\tilde \ell$ given by \eqref{Find_d} and \eqref{ellprimedefinition} respectively, and a constant $C>0$ depending on $\Gamma$ but independent of $\underline{\tau}$ such that
\begin{enumerate}\item
\begin{equation}\label{tauratio}
\frac{\tau_d[e]}{\tau_0[e]}\in \left(1-C|\underline{\tau}|^{\frac 1{n-1}}, 1+C|\underline{\tau}|^{\frac 1{n-1}}\right),
\end{equation} and
\begin{equation}\label{diffeodifference}
\left| 1 - \frac{\Pde}{\Pe}\right| \leq -C \frac{|\underline{\tau}|^{\frac 1{n-1}}}{\log (|\tz|)} \leq -C \frac{|\underline{\tau}|^{\frac 1{n-1}}}{\log (C|\underline{\tau}|)}.
\end{equation}
\item Recalling \ref{FrameLemma},
\begin{equation}\label{tauratio2}
\angle(\Bv_1[e;0,0], \Bv_1[e;\tilde d,0]) \leq C|\underline{\tau}|^{\frac 1{n-1}}
\end{equation}
\item
\begin{equation}\label{ddifftau}
\angle(\Bv_1[e;\tilde d,0], \Bv_1[e;\tilde d,\tilde \ell]) \leq C \underline C|\underline{\tau}|.
\end{equation}
\end{enumerate}
\end{lemma}
\begin{proof}
The smooth dependence of $\Gamma(\tilde d, \tilde \ell)$ on $(\tilde d, \tilde \ell)$ and \eqref{drestriction} together imply \eqref{tauratio} and \eqref{tauratio2}.
To see \eqref{diffeodifference}, note that by \eqref{tauratio} and \ref{periodasymptotics}, there exists $\tau'$ between $\td, \tz$ such that
\[
\left| 1- \frac{\Pde}{\Pe}\right| =\frac{|\td-\tz||\frac{d}{d\tau}|_{\tau =\tau'}\mathbf p_{\tau'}|}{|\Pe|}\leq C \frac{\left| 1- \frac{\td}{\tz}\right|}{|\log |\tz||}\leq -C\frac{|\underline{\tau}|^{\frac 1{n-1}}}{\log |\tz|}.
\]
Finally, to see \eqref{ddifftau} let $\theta[e]:= \angle(e,e')$. At worst,
\[
\sin \theta[e] \leq \frac{2 \left|\boldsymbol \zeta\right|}{\sqrt{\widetilde l[e]^2 + 4\left|\boldsymbol \zeta\right|^2}} \leq C\left|\boldsymbol \zeta\right|.
\]Thus, $\theta[e] \leq C \underline C |\underline{\tau}|$.
\end{proof}
\begin{remark}Since $\tz/\underline{\tau} = \hat \tau[\Gamma(0,0),e]$, the finiteness of the graph $\Gamma$ and \eqref{tauratio} imply that there exists $C$ depending only on $\Gamma$ such that ${|\tau_d[e]|}\sim_C{|\underline{\tau}|}$. This gives us the freedom to replace any bounds in $|\tau_d[e]|^{\pm 1}$ by $C|\underline{\tau}|^{\pm 1}$, reducing notation and bookkeeping.
\end{remark}
\subsection*{The smooth immersion}
The immersion we describe is an appropriate positioning of the building blocks described in Section \ref{BuildingBlocks}. Notice that the building blocks depend upon $\Gamma$ and the parameters $d, \boldsymbol \zeta$ and on $\underline{\tau}$, but the immersions describing the building blocks are determined prior to any positioning.
For each $e \in E(\Gamma)$, the positioning of the associated Delaunay building block will depend upon a rotation that takes an orthonormal frame of the segment connecting the points $(\widetilde l[e],\boldsymbol 0) + \boldsymbol \zeta[p^-[e],e]$ and $\boldsymbol \zeta[p^+[e],e]$ to the orthonormal frame of the corresponding edge $e' \in E(\Gtdtl)$. We first prove that this rotation is well defined and determine the estimates we will need.
\begin{prop}\label{zetaframe}
For $\boldsymbol \zeta$ as defined in \ref{zetadef} and each $e \in E(\Gamma)$ there exists a unique orthonormal frame $F_{\boldsymbol \zeta}[e]=\{\Be_1[e],
\dots, \Be_{n+1}[e]\}$,
depending smoothly on $\boldsymbol \zeta$,
such that
\begin{enumerate}
\item $\Be_1[e]$ is the unit vector parallel to $\boldsymbol \zeta\pme + (\widetilde l[e],\boldsymbol 0) -\boldsymbol \zeta\ppe$ such that $\Be_1[e] \cdot \mathbf e_1>0$.
\item For $i=2, \dots, n+1$, $\Be_i[e]=\RRR[\Be_1, \Be_1[e]](\Be_i)$.
\item For $\Bv \in \Rn$,
\begin{equation}\label{zetaframebound}
|\Bv - \RRR[\Be_1, \Be_1[e]](\Bv)| \leq C \left| \boldsymbol \zeta \right| \, |\Bv|.
\end{equation}
\end{enumerate}
\end{prop}
\begin{proof}
The first two items are by definition. If $\Be_1 = \Be_1[e]$ then the third item is immediately true as the rotation is the identity matrix. Now suppose $\Be_1 \neq \Be_1[e]$. By \ref{rotationdefn}, for $\Bv = \Bv^T +\Bv^\perp$ where $\Bv^T$ is the projection onto the $2$-plane spanned by $\Be_1,\Be_1[e]$, $\RRR[\Be_1,\Be_1[e]](\Bv)=\RRR[\Be_1,\Be_1[e]](\Bv^T)+ \Bv^\perp$.
Writing $\Bv^T = a_1 \Be_1 + a_2 \left(\frac{\Be_1[e] - \Be_1 \cos \theta[e]}{\sin \theta[e]} \right):= a_1\Be_1 + a_2 \mathbf z$, the definition of the rotation implies that
\[
\RRR[\Be_1,\Be_1[e]](\Bv^T)-\Bv^T = \sin \theta[e](a_1 \mathbf z - a_2 \Be_1) + (1-\cos \theta[e]) \Bv^T
\]where $\theta[e]$ is the smallest angle between $\Be_1, \Be_1[e]$. Recall, in the proof of \eqref{ddifftau}, we observed that $\sin \theta[e] \leq C\left| \boldsymbol \zeta \right|$.
Therefore, $\cos \theta[e] \geq 1- C\left| \boldsymbol \zeta \right|^2$. It follows that
\[
\left|\RRR[\Be_1,\Be_1[e]](\Bv^T)-\Bv^T \right| \leq C\left| \boldsymbol \zeta \right|\,|\Bv^T|.
\]
\end{proof}
\begin{definition}
For $e \in R(\Gamma)$ we simply let
$\Be_i[e]:=\Be_i$.
\end{definition}
Using the frame previously defined, we describe the rigid motion that will position each Delaunay building block.
\begin{definition}
\label{defn:RT}
For each $e \in E(\Gamma)\cup R(\Gamma)$, with $e'$ denoting the corresponding edge on the graph $\Gamma(\tilde d,\tilde \ell)$,
let $\RRR[e;{d,\boldsymbol \zeta}]$ denote the rotation in $\Rn$ such that $\RRR[e;{d,\boldsymbol \zeta}] (\Be_i[e])=\Bvp_i[e; \tilde d,\tilde \ell]$ for $i=1,\dots, n+1$ (recall \ref{FrameLemma}).
Let $\TTT[e;{d,\boldsymbol \zeta}]$ denote the translation in $\Rn$ such that $\TTT[e;{d,\boldsymbol \zeta}](\RRR[e;{d,\boldsymbol \zeta}](\boldsymbol \zeta\ppe))=p^+[e']$.
Letting $\UUU[e; {d,\boldsymbol \zeta}]= \TTT[e;{d,\boldsymbol \zeta}] \circ \RRR[e;{d,\boldsymbol \zeta}]$ we see that for all $c_i \in \mathbb R$,
\begin{equation}
\UUU[e; {d,\boldsymbol \zeta}]\left(\boldsymbol \zeta\ppe + c_i \Be_i[e]\right)= p^+[e'] + c_i \Bvp_i[e; \tilde d,\tilde \ell].
\end{equation}
\end{definition}
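For completeness, we record the one-line verification of the displayed identity: since $\RRR[e;{d,\boldsymbol \zeta}]$ is linear and $\TTT[e;{d,\boldsymbol \zeta}]$ is a translation,
\[
\UUU[e; {d,\boldsymbol \zeta}]\left(\boldsymbol \zeta\ppe + c_i \Be_i[e]\right)= \TTT[e;{d,\boldsymbol \zeta}]\left(\RRR[e;{d,\boldsymbol \zeta}](\boldsymbol \zeta\ppe)\right) + c_i \Bvp_i[e; \tilde d,\tilde \ell] = p^+[e'] + c_i \Bvp_i[e; \tilde d,\tilde \ell].
\]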
At each $p' \in V(\Gamma(\tilde d,\tilde \ell))$, we position a spherical building block. The rigid motion required for positioning these building blocks is simply a translation. The immersion of the building block associated with $p'$ depends upon a diffeomorphism determined by the frames $F_\Gamma[e]$ and the frames $F_{\boldsymbol \zeta}[e]$, for $e \in E_p$ where $p \in V(\Gamma)$ corresponds to $p'$.
For each $p \in V(\Gamma)$, let $\{e_1, \dots, e_{|E_p|}\}$ be an ordering of the edges and rays that have $p$ as an endpoint. For $i=1, \dots, |E_p|$, let
\[
F_i[p] = \{\Bv[p,e_i], \Bv_2[e_i], \dots, \Bv_{n+1}[e_i]\}.\]
Notice that $F_i[p]$ is a set of vectors where the first vector represents the direction the edge or ray $e$ emanates from $p$ in the graph $\Gamma$ and the next $n$ vectors complete the orthonormal frame $F_\Gamma[e_i]$ given in \ref{gammaframe}.
Recalling \ref{Rnframe}, let
\[
F_{\boldsymbol \zeta,i} [p]=\{\mathrm{sgn}[p,e_i] \RRR[e_i;{d,\boldsymbol \zeta}](\Be_1),
\dots, \RRR[e_i;{d,\boldsymbol \zeta}](\Be_{n+1})\}.\]
This set of vectors almost corresponds to rotating the elements of the standard frame in $\Rn$ by $\RRR[e_i;{d,\boldsymbol \zeta}]$.
The only change from the rotation is on the first element, which will differ from the rotation by a minus sign if $p = p^-[e_i]$.
For the reader, it may be useful to note that in general $\RRR[e;{d,\boldsymbol \zeta}](\Be_i)\neq \Bv_i[e;\tilde d,\tilde \ell]$.
See Figure \ref{EmbeddingPic}.
\begin{figure}[h]
\begin{center}
\includegraphics[width=5in]{EmbeddingPic}
\caption{A rough idea of the immersion of one edge. Notice that the transformation $\UUU[e; {d,\boldsymbol \zeta}]$ sends the dislocated spheres to the vertices of the graph. The bold segment in the bottom picture corresponds to the positioning of the edge on the graph $\Gamma(\tilde d,\tilde \ell)$. The Delaunay piece has axis parallel to $\RRR[e;{d,\boldsymbol \zeta}]\Be_1= \Bv_1[e;\tilde d, 0]$, which is parallel to the corresponding edge on the graph $\Gamma(\tilde d,0)$.}\label{EmbeddingPic}
\end{center}
\end{figure}
These sets of vectors will determine the diffeomorphisms describing the spherical building blocks. The geodesic disks removed from each $M[p]$ will be repositioned under the diffeomorphism. The centers of the repositioned disks do not correspond to the vectors $\mathbf v[\Gtdtl,p,e]$. Rather, the repositioned disks will be centered at the vectors $\mathrm{sgn}[p,e] \RRR[e;{d,\boldsymbol \zeta}](\Be_1)$. The diffeomorphism $\hat Y$ defined in Section \ref{BuildingBlocks} will guarantee that the immersion is well-defined.
Let
\[
W[p] :=\{ F_1[p] , \dots , F_{|E_p|}[p]\}, \; W'[p] := \{F_{\boldsymbol \zeta,1}[p] , \dots , F_{\boldsymbol \zeta,|E_p|}[p]\}.
\]
\begin{definition}\label{tddefn}Let $\tsd:\bigsqcup_{e \in E(\Gamma) \cup R(\Gamma)} M[e] \to \mathbb R$ such that for $e \in E(\Gamma)$,
\begin{align}
\notag\tsd|_{M[e]}(t,\bt):= &\psi[a+3,a+2](t)\cdot t+\psi[\RH-(a+3),\RH-(a+2)](t)\cdot t \\&
+ \psi[a+2,a+3](t) \cdot \psi[\RH-(a+2),\RH-(a+3)](t) \cdot \left( \frac{\Pde}{\Pe}t\right)
\end{align}and for $e \in R(\Gamma)$,
\begin{equation}
\tsd|_{M[e]}(t,\bt):= \psi[a+3,a+2](t)\cdot t+ \psi[a+2,a+3](t) \cdot \left( \frac{\Pde}{\Pe}t\right).
\end{equation}
Note that $t_0(t,\bt)=t$.
\end{definition}
\begin{definition}
\label{immersiondef}
Let $\hYtdz:M \to \mathbb R^{n+1}$ be defined so that, recall \ref{defn:sphere}, \ref{defn:Yedge}, \ref{defn:RT},
\begin{equation*}
\hYtdz(x):=\left\{ \begin{array}{ll}
p'+ \hat Y[ W[p], W'[p]](x)& x \in M[p]\\
\UUU[e;{d,\boldsymbol \zeta}] \circ Y_{\mathrm{edge}}[ \taue, l[e], \boldsymbol \zeta\ppe, \boldsymbol \zeta\pme](\tsd(x),\bt)& x=(t,\bt) \in M[e], e \in E(\Gamma)\\
\UUU[e;{d,\boldsymbol \zeta}] \circ Y_{\mathrm{ray}}[ \taue, \boldsymbol \zeta\ppe]( \tsd(x),\bt)& x=(t,\bt) \in M[e], e \in R(\Gamma)
\end{array}\right.
\end{equation*}
where $p'\in V(\Gtdtl)$ is the vertex corresponding to $p $.
Let $\Htdz\in C^\infty(M)$ denote the mean curvature of $\hYtdz(M)$.
\end{definition}
Notice that a Delaunay building block will only be positioned parallel to the associated edge of $\Gtdtl$ if $\boldsymbol \zeta[p^+[e],e]=\boldsymbol \zeta[p^-[e],e]$ as in that case $\Be_1[e]=\Be_1$.
\begin{definition}\label{defn:H}
Recalling \ref{Defn:Herror}, define $H_{\mathrm{dislocation}}[{d,\boldsymbol \zeta}],H_{\mathrm{gluing}}[{d,\boldsymbol \zeta}]:M \to \mathbb R$ in the following manner:
\begin{equation*}
H_{\mathrm{dislocation}}[{d,\boldsymbol \zeta}](x):=\left\{ \begin{array}{ll} H_{\mathrm{dislocation}}[\td,l[e], \boldsymbol \zeta^+[p^+[e],e], \boldsymbol \zeta^-[p^-[e],e]](\tsd(x),\bt)\\ \qquad \qquad \qquad \qquad \qquad \qquad x=(t,\bt) \in M[e], e \in E(\Gamma),\\
H_{\mathrm{dislocation}}[\td, \boldsymbol \zeta^+[p^+[e],e]](\tsd(x),\bt)\\ \qquad \qquad \qquad \qquad \qquad \qquad x=(t,\bt) \in M[e] , e \in R(\Gamma),\\
0\text{ otherwise},\end{array}\right.
\end{equation*}
\begin{equation*}
H_{\mathrm{gluing}}[{d,\boldsymbol \zeta}](x):=\left\{ \begin{array}{ll} H_{\mathrm{gluing}}[\td,l[e], \boldsymbol \zeta^+[p^+[e],e], \boldsymbol \zeta^-[p^-[e],e]](\tsd(x),\bt)& x=(t,\bt) \in M[e], e \in E(\Gamma)\\
H_{\mathrm{gluing}}[\td, \boldsymbol \zeta^+[p^+[e],e]](\tsd(x),\bt)& x=(t,\bt) \in M[e] , e \in R(\Gamma),\\
0&\text{otherwise}.\end{array}\right.
\end{equation*}
\end{definition}
As an immediate consequence of the immersion and the definitions, and using the estimates of \ref{Cor:Herror}, we have the following characterization of the global mean curvature error function.
\begin{corollary}
\label{Hbounds}
All of the functions described above are smooth. Moreover the smooth function $H_{\mathrm{error}}[\dzeta]:=\Htdz -1:M \to \mathbb R$ can be decomposed as
\[
H_{\mathrm{error}}[\dzeta] = H_{\mathrm{dislocation}}[\dzeta] + H_{\mathrm{gluing}}[\dzeta].
\]Moreover,
\begin{enumerate}
\item $\|H_{\mathrm{gluing}}[{d,\boldsymbol \zeta}]:{C^k}( M[e] ,g)\| \leq C(a,k)|\underline{\tau}|.$
\item $\|H_{\mathrm{dislocation}}[{d,\boldsymbol \zeta}]:C^k( M[e],g)\| \leq C(a,k)\left| \boldsymbol \zeta \right|\leq C(a,k)\underline C |\underline{\tau}|$.
\end{enumerate}
\end{corollary}
\section{The linearized equation}\label{GlobalSection}
The goal of this section is to state and prove \ref{LinearSectionProp}. We demonstrate that for any immersion $\hYtdz$ with ${d,\boldsymbol \zeta}$ satisfying \eqref{drestriction}, \eqref{zetarestriction} respectively, and with $0<|\underline{\tau}|<\maxTG$, we can modify the inhomogeneous term in such a way as to solve the linear problem in weighted H\"older spaces. While the construction of
an initial surface in $\Rn$ is fairly similar for $n=2, n>2$, solving the linear problem for $n=2$ is much simpler than for $n>2$. There
are a few reasons for this, not the least being that in two dimensions the Laplace operator simply scales by the conformal change.
\begin{assumption}\label{ass:tgamma6}
We presume throughout this section that $\maxTG>0$ and $\underline{b} \gg1$ satisfy the assumptions of the previous sections. Moreover, while $\underline{b}$ is fixed by the assumptions in Section \ref{DelaunayLinear}, we may further decrease $\maxTG$ as $\underline{b}$ is independent of $\maxTG$.
The constant $\underline{\tau}$ will always satisfy $0<|\underline{\tau}|<\maxTG$ and ${d,\boldsymbol \zeta}$ will satisfy \eqref{drestriction}, \eqref{zetarestriction} respectively for this fixed $\underline{\tau}$. The immersion $\hYtdz$ will be as described in \ref{immersiondef}.
\end{assumption}
\begin{definition}
Let $\ur_d:\bigsqcup_{e \in E(\Gamma) \cup R(\Gamma)}M[e]\to \mathbb R$ such that
\begin{equation}
\ur_d|_{M[e]}:= r_{\td} \circ \tsd|_{M[e]} \text{ (recall \ref{tddefn})}.
\end{equation}
Moreover, let
\begin{equation}
\begin{gathered}
\urout[e;d]:=\ur_{d}(\underline{b},\bt)=r_{\td}(b) , \qquad
\urin[e;d]:= \ur_{d}(\Pe-\underline{b},\bt) =r_{\td}(\, \Pde-b\, ) ,
\end{gathered}
\end{equation}
where $b:= t_d (\underline{b},\bt)$
and $\underline{b}$ is as in \ref{ass:b}.
\end{definition}
Note that by \eqref{diffeodifference} $b$ is then as in \ref{ass:b}.
\begin{remark} On $M[e]\cap ([a+4,\RH - (a+4)]\times \Ssn)$ for $e \in E(\Gamma)$ and on $M[e]\cap ([a+4,\infty) \times \Ssn)$ for $e \in R(\Gamma)$,
\begin{equation}
g=\hYtdz^*(g_{\mathbb R^{n+1}}) = \ur_d^2(d\tsd^2 + g_{\Ssn}).
\end{equation}
\end{remark}
\begin{lemma}\label{rratiolemma}On $\bigsqcup_{e \in E(\Gamma) \cup R(\Gamma)}M[e]$,
\[
\ur_d \sim_{C(\underline{b})} \ur_0.
\]
\end{lemma}
\begin{proof}
First observe that by \ref{rmaxminlemma} and \eqref{tauratio}, for any $e \in E(\Gamma) \cup R(\Gamma)$,
\[
\frac{\ur_{d}(\Pe, \bt)}{\ur_{0}(\Pe, \bt)} = \left(\frac{|\td|}{|\tz|}\right)^{\frac 1{n-1}}\left(1+O(|\underline{\tau}|^{\frac 1{n-1}})\right) = 1+O(|\underline{\tau}|^{\frac 1 {n-1}}),
\]and
\[
\frac{\ur_{d}(2\Pe, \bt)}{\ur_{0}(2\Pe, \bt)} = 1+O(|\underline{\tau}|).
\]Therefore, by the uniform geometry on each $S_{\underline{b}}\pen$, the previous estimates imply that for all $x \in S\pen$,
\[
\frac{\ur_d(x)}{\ur_0(x)} \sim_{C(\underline{b})} 1.
\]
We will improve this estimate at $x = (\underline{b}, \bt)$ and use this improvement as the starting point to produce the equivalence on $\Lambda\pen$. By the triangle inequality, \ref{radiuslemma} and \eqref{diffeodifference},
\begin{align*}
|\ur_0(\underline{b},\bt)-\ur_d(\underline{b},\bt)| &\leq \left|r_{\tz}(\underline{b}) - \sech(\underline{b}) + \sech\left(\underline{b}\cdot \frac{\Pde}{\Pe}\right)-r_{\td}\left(\underline{b}\cdot \frac{\Pde}{\Pe}\right)\right| \\
& \quad \quad+ \left| \sech(\underline{b}) -\sech\left(\underline{b}\cdot \frac{\Pde}{\Pe}\right)\right|\\
& \leq C(\underline{b}) \left(|\underline{\tau}|- \frac{|\underline{\tau}|^{\frac 1{n-1}}}{\log |\tz|}\right).
\end{align*}Thus, we may decrease $\maxTG$ so that
\[
\left| \frac{\ur_d(\underline{b},\bt)}{\ur_0(\underline{b},\bt) }-1\right| \leq |\underline{\tau}|^{\frac 1{2(n-1)}}.
\]Let
\[
f(x):= \log \frac{\ur_d(x)}{\ur_0(x)}
\]and observe that
\[
|f(\underline{b}, \bt)|\leq 2 |\underline{\tau}|^{\frac 1{2(n-1)}}.
\]Going forward, we will always assume that $|f|< \frac 1{10}$ so that we are free to Taylor expand at will. Then, on any $\Lambda \pen$, letting $u_{\tau'}(t,\bt):= r_{\tau'}(\mathbf p_{\tau'}t/\Pe) + \tau' r_{\tau'}^{1-n}(\mathbf p_{\tau'} t/\Pe)$,
\begin{align*}
\frac d{dt} f(t,\bt) &= \frac{\frac{d\ur_d}{dt_d}(t_d(t,\bt),\bt)\cdot\frac{\Pde}{\Pe}}{\ur_d(t,\bt)}- \frac{\frac{d\ur_0}{dt}(t,\bt)}{\ur_0(t,\bt)}\\
&=\sqrt{1- u_{\td}^2(t,\bt)}- \sqrt{1- u_{\tz}^2(t,\bt)} + \sqrt{1- u_{\td}^2(t,\bt)}\left( \frac{\Pde}{\Pe}-1\right)\\
&= -\frac{u_{\tau'}(t,\bt)}{\sqrt{1-u_{\tau'}^2(t,\bt)}}(u_{\td}(t,\bt)-u_{\tz}(t,\bt)) + \sqrt{1- u_{\td}^2(t,\bt)}\left( \frac{\Pde}{\Pe}-1\right)
\end{align*} for some $\tau'$ between $\td, \tz$. As $\ur_d = e^f \ur_0$,
\[
u_{\td} - u_{\tz} = (e^f-1)\ur_0 + (\td-\tz)\ur_0^{1-n} + \td(e^{(1-n)f}-1)\ur_0^{1-n}.
\]
Since we are presuming $|f|$ is small,
\[
u_{\td} - u_{\tz} =(f\cdot h)\ur_0 + \left(\frac{\td}{\tz} -1\right)\tz\ur_0^{1-n} + \td(1-n)(f \cdot h) \ur_0^{1-n}
\]where $|h| \leq 1 + 2|f|$.
Thus,
\[
\frac d{dt}f = fA +B
\]where
\[
A:= -\frac{u_{\tau'}}{\sqrt{1-u_{\tau'}^2}}\left( h \ur_0 + \td(1-n) h \ur_0^{1-n}\right) ,
\]
\[ B:= -\frac{u_{\tau'}}{\sqrt{1-u_{\tau'}^2}}\left(\frac{\td}{\tz} -1\right)\tz\ur_0^{1-n} + \sqrt{1- u_{\td}^2}\left( \frac{\Pde}{\Pe}-1\right) .
\]Since we are presuming that $|f|$ is small, $| u_{\tau'} | \leq C(\ur_0+|\tz| \ur_0^{1-n})$. Moreover, since $\frac d{dt}\ur_0 \sim_{C(\underline{b})} \ur_0$ on $\Lambda\pen$ as long as $|f|< 1/10$, we may estimate, for $\ur_0(x) \in [\urin[e;0], \urout[e;0]]$,
\begin{align*}
\left|\int_{\underline{b}}^{t_0(x)} A(t,\bt) \, dt\right| &\leq C(\underline{b}) \int_{\urout[e;0]}^{\ur_0(x)} \left( r + |\underline{\tau}| r^{1-n} + |\underline{\tau}|^2 r^{1-2n}\right) dr\\
& \leq C(\underline{b},n)\left| \left( r^2 + |\underline{\tau}|r^{2-n}+ |\underline{\tau}|^2 r^{2-2n}\right)\big|_{\urout[e;0]}^{\ur_0(x)}\right|\\
& \leq C(\underline{b}, n).
\end{align*}Moreover, for all $\ur_0(x) \in[ \urin[e;0], \urout[e;0]]$,
\begin{align*}
\left|\int_{\underline{b}}^{t_0(x)} B(t, \bt) \, dt\right| & \leq C(\underline{b}) \left|\int_{\urout[e;0]}^{\ur_0(x)} \left((1+ |\underline{\tau}| r^{-n})|\underline{\tau}|^{\frac 1{n-1}} - \frac{|\underline{\tau}|^{\frac 1{n-1}}}{r\log |\tz|} \right) dr\right| \\
& \leq C(\underline{b}) |\underline{\tau}|^{\frac 1{n-1}}.
\end{align*}
It follows that for $x = (t',\bt)$,
\[
f(x) = \exp\left(\int_{\underline{b}}^{t'} A(t) dt\right)\left( f(\underline{b},\bt) + \int_{\underline{b}}^{t'} B(t) dt\right).
\]Thus, as long as $|f|< \frac 1{10}$, the previous estimates imply that
\[
|f(x)| \leq C(\underline{b}, n) |\underline{\tau}|^{\frac 1{2(n-1)}}.
\]Since the result holds for $x=(\underline{b}, \bt)$, it will continue to hold for all $x\in \Lambda\pen$. This implies the $C^0$ equivalence.
\end{proof}
Because the problem proves more tractable when considering norms that allow for the natural scaling, we will record the initial error estimates with respect to this scaling.
\begin{definition}\label{rhodef}
We define the function $\rho_d: M \to \mathbb R$ such that
\begin{equation}\label{rhoeq}
\rho_d(x) = \left\{\begin{array}{ll}
1 & \text{if } x \in M[p], p \in V(\Gamma)\\
\psi[a+4, \underline{b}](t)\cdot \psi[\RH-(a+4),\RH-\underline{b}](t)\cdot \ur_d(x) &\\
\quad + \psi[\underline{b},a+4](t)+\psi[\RH-\underline{b},\RH-(a+4)](t)& \text{if } x=(t, \bt) \in M[e] , e \in E(\Gamma) \\
\psi[a+4, \underline{b}](t)\cdot \ur_d(x)+ \psi[\underline{b},a+4](t)& \text{if } x=(t, \bt) \in M[e], e \in R(\Gamma)
\end{array}\right.
\end{equation}
\end{definition}
Observe that $\rho_d$ is a smooth function that behaves like $\ur_{d}$ on each $M[e]$ and is $1$ on each central sphere. The cutoff functions smooth the transition between these two regimes.
Because the error was previously determined in the standard H\"older norm, we now record the error induced by gluing and dislocation in the chosen weighted metric.
\begin{prop}\label{Hestimates}
\begin{itemize}
\item $\|H_{\mathrm{gluing}}[\dzeta]:{C^{0,\beta}}(M,\rho_d, g)\| \leq C(\beta,\underline{b})| \underline{\tau}|$.
\item $\|H_{\mathrm{dislocation}}[\dzeta]:{C^{0,\beta}}(M,\rho_d, g)\| \leq C(\beta, \underline{b})|{\boldsymbol \zeta}| \leq C(\beta, \underline{b}) \underline C | \underline{\tau}|$.
\end{itemize}
\end{prop}
\begin{proof}
First recall that $H_{\mathrm{gluing}}[\dzeta], H_{\mathrm{dislocation}}[\dzeta]$ are supported on $\cup_pS[p]$. Thus, for all $x \in \supp(H_{\mathrm{gluing}} [\dzeta])\cup \supp(H_{\mathrm{dislocation}}[\dzeta])$, $ \rho_d \sim_{ C(\underline{b})}1$. The bounds then follow immediately from \ref{Hbounds}.
\end{proof}
\begin{prop}
\label{find:c1}
There exists $c_1(\underline{b},k,n)>0$ such that
\begin{equation}\label{rhoest}
\|\rho_d^{\pm 1}:C^{k}(M,\rho_d,g, \rho_d^{\pm1})\| \leq c_1(\underline{b},k,n).
\end{equation}
\end{prop}
\begin{proof}
First note that the uniform geometry of $\Omega = S_{\underline{b}}[p], S^+_{\underline{b}}\pen, S^-_{\underline{b}}\pen$ in the $g$ metric immediately implies the estimate
\[
\|\rho_d^{\pm 1}:C^{k}(\Omega,\rho_d,g, \rho_d^{\pm1})\| \leq C(\underline{b},k).
\]Now for any fixed $x\in \Lambda\pen$, consider the function $\hat \rho(y):= \rho_d(y)/ \rho_d(x)$. Then,
\[
\hat \rho(y) = e^{ w_{\td}(\tsd(y))-w_{\td}(\tsd(x))}.
\]Because of the local nature of the estimate and since $\frac {d}{dt_d}w_{\td}\circ \tsd \in [1-3\delta, 1]$, we are interested in $y$ such that $|\tsd(y) - \tsd(x)| \leq \frac 15$. (Recall the proof of \eqref{r_metric_equiv} and note that $|\tsd(y)-t_0(y)| \leq C|\underline{\tau}|^{\frac 1{n-1}}$ by \ref{periodasymptotics} and \eqref{diffeodifference}.) Thus
\[
| w_{\td}(\tsd(y))-w_{\td}(\tsd(x))| = \left| \int_{\tsd(x)}^{\tsd(y)} \frac{dw_{\td}}{d\tsd}\right| \leq | \tsd(y)-\tsd(x)| \leq \frac 15 .
\]This implies the $C^0$ estimates. Now note that
\[
\frac{\partial}{\partial \tsd} \hat \rho= \frac{d}{d\tsd}w_{\td} \cdot \hat \rho, \quad \quad \frac{\partial^2}{\partial \tsd^2} \hat \rho = \frac{d^2}{d\tsd^2}w_{\td}\cdot \hat \rho+\left ( \frac{d}{d\tsd}w_{\td}\right)^2 \hat \rho.
\]Using the estimates in Appendix \ref{DelSection}, we recall that
\[
-\frac{d^2}{d\tsd^2}w_{\td}= \ur_{d}^2 + (2-n)\td \ur_{d}^{2-n} + (1-n) \td^2 \ur_{d}^{2-2n}.
\] Moreover, $\ur_{d} \geq r_{\td}^{\min} \geq C(\underline{b})|\underline{\tau}|^{\frac 1{n-1}} + O(|\underline{\tau}|^{\frac 2{n-1}})$. Taken together we see that on $\Lambda\pen$,
\[
\left|(2-n)\td \ur_{d}^{2-n} + (1-n) \td^2 \ur_{d}^{2-2n}\right| \leq C(\underline{b},n).
\] It follows from the previous analysis and these new estimates that
\[
\|\rho_d^{\pm 1}:C^2(M,\rho_d,g, \rho_d^{\pm 1})\|\leq C(\underline{b}, n).
\]As $w_{\td}$ satisfies a second order ODE, any higher derivative of $w_{\td}$ can be written in terms of the function and its first and second derivatives. Since $\frac{\partial^k}{\partial \tsd^k} \hat \rho$ can be written in terms of $\hat \rho$ and $\frac{d^{m}}{d\tsd^{m}}w_{\td}$ where $m = 0, \dots, k-2$, the uniform bounds for $\rho_d$ in $C^k$ follow immediately. For $\rho_d^{-1}$ we only need to note that the denominator will contain the power $\hat \rho^{-k-1}$, which is also controlled in terms of some constant $C(k)$.
\end{proof}
\subsection*{Solving the semi-linear problem on $\widetilde S[p]$}
The goal of this subsection is to prove \ref{linearpartp}.
We wish to solve a linearized problem with zero boundary data and fast decay toward all boundary components.
These requirements force us to proceed as in the lower dimensional version \cite{BKLD} and introduce the extended substitute kernel.
We prove that a modified version of the linearized problem is solvable by what are now standard methods (see for example \cite{HaskKap}).
We introduce maps $\widetilde Y[p]$ on $\widetilde S[p]$ which are useful parametrizations of $\Ss^{n}$.
A comparison between these maps and $\hYtdz$ will help us understand the possible obstructions
to solving the linearized problem on these central spheres by considering the linearized operator
in the induced metric of $\widetilde Y$ (which corresponds to the metric on the round sphere).
\begin{definition}
Let $\widetilde Y[p]:\widetilde S[p] \to \mathbb S^n\subset \Rn$ such that (recall \ref{ReDef})
\begin{equation}\label{tildeYp}
\widetilde Y[p](x)=\left\{ \begin{array}{ll}
\hat Y[W,W](x) &\text{if }x \in M[p],\\
\RRR[e]\circ Y_0(x)& \text{if } p=p^+[e], x \in M[e]\cap\left([a,\Pe-\underline{b}]\times \Ssn\right), \\
\RRR[e]\circ Y_0 (t-\RH,\bt )&\text{if } p=p^-[e], x=(t,\bt), \\
&\:\:x \in M[e]\cap\left( [\RH-(\Pe-\underline{b}),\RH-a]\times \Ssn\right).
\end{array}\right.
\end{equation}
Let $\widetilde g:= \widetilde Y[p]^*( g_{\mathbb R^{n+1}})$.
\end{definition}
\begin{prop}
\label{geolimit}
Let $p \in V(\Gamma)$ and let $p'$ be the corresponding vertex in the graph $\Gtdtl$. Then
\[
\|(\hYtdz-p') - \widetilde Y[p]:C^k(S_x[p],\widetilde g)\|\leq C(k,x) \left(|\underline{\tau}|^{\frac 1{n-1}}+ |\boldsymbol \zeta|\right) \leq C(k,x) |\underline{\tau}|^{\frac 1{n-1}}.
\]
\end{prop}
\begin{proof}
Note that $(\widetilde Y[p] + p) \big|_{M[p]}= Y_{0,\mathbf 0}\big|_{M[p]}$. Since the mapping $\hYtdz|_{M[p]}$ depends smoothly on ${d,\boldsymbol \zeta}$ and approaches $Y_{0, \mathbf 0}$ as $|\underline{\tau}| \to 0$, the result on $M[p]$ follows immediately.
Recall \ref{defn:Yedge}, \ref{defn:RT}, \ref{tddefn}. For $e \in E(\Gamma)$, $x \in M[e]\cap \widetilde S[p^+[e]]$, we determine the $C^k$ estimate by considering the norms of the two immersions
\begin{align*}
&\RRR[e;{d,\boldsymbol \zeta}]\circ Y_1(y):=\RRR[e;{d,\boldsymbol \zeta}]\circ \left(Y_{\mathrm{edge}}[\taue, l, {\boldsymbol \zeta}\ppe, {\boldsymbol \zeta}\pme] (\tsd(y),\bt)- Y_0(y)\right),\\
&\left(\RRR[e;{d,\boldsymbol \zeta}] - \RRR[e]\right)\circ Y_0(y)
\end{align*} and applying the triangle inequality.
First, \ref{defn:Yedge}, \ref{tddefn}, \eqref{Y0} imply that, for $y=(t,\bt)$,
\[
\|Y_1:C^0(M[e] \cap \left([a,a+3]\times \Ssn\right), \widetilde g)\|\leq C(\left| \boldsymbol \zeta \right|+ |\tanh(t)-\tanh(\tsd(y))| + |\sech(t)-\sech(\tsd(y))|).
\]By \eqref{diffeodifference},
\[
|\tanh(t)-\tanh(\tsd(y))|+|\sech(t)-\sech(\tsd(y))|\leq -C\frac{|\underline{\tau}|^{\frac 1{n-1}}}{\log(C |\underline{\tau}|)}.
\]The $C^k$ estimates on this region then follow from the definitions and further applications of \eqref{diffeodifference}. On $[a+4,x]\times \Ssn$, for $y=(t,\bt)$,
\[
Y_1(y):= (k_d(t_d(y)) - \tanh(t), (r_d(t_d(y))-\sech(t) )\bt).
\] \eqref{diffeodifference} and \ref{radiuslemma} then imply the $C^k$ bounds on this region. The immersion on $[a+2,a+5]\times \Ssn$ is simply a smooth transition between the immersions at $t=a+2$ and $t=a+5$. Thus, the $C^k$ estimates hold here since the transition function and its derivatives are well controlled.
For the second immersion, note that $\|Y_0:C^k(S_x[p],\widetilde g)\| \leq C(x,k)$. Moreover, by \eqref{zetaframebound},
the smooth dependence of $\mathcal{F}$ on $\tilde d,\tilde \ell$, and
\begin{align*}
\left|(\RRR[e;{d,\boldsymbol \zeta}] - \RRR[e] )\Be_i \right|&\leq \left|\RRR[e;{d,\boldsymbol \zeta}](\Be_i-\Be_i[e]) \right|+\left| \RRR[e;{d,\boldsymbol \zeta}]\Be_i[e] - \RRR[e]\Be_i\right|\\
& = \left|(I - \RRR[\Be_1,\Be_1[e]])\Be_i\right|+ \left|\Bv_i[e;\tilde d,\tilde\ell] - \Bv_i[e]\right|, \text{ recall } \ref{FrameLemma}\\
& \leq C\left| \boldsymbol \zeta \right| + C(|\underline{\tau}|^{\frac 1{n-1}} + |\boldsymbol \zeta|) \text{ by } \eqref{tauratio2}, \eqref{ddifftau}\\
& \leq C\left(|\underline{\tau}|^{\frac 1{n-1}}+|\boldsymbol \zeta|\right).
\end{align*}
Identical arguments hold for $e \in R(\Gamma)$, with the immersion $Y_{\mathrm{ray}}$ replacing $Y_{\mathrm{edge}}$. When $p=p^-[e]$, the only modification in the proof comes from the orientation $\Bv\pe= -\Bv_1[e]$. The definition of $\widetilde Y$ accounts for that modification by taking $\RRR[e]$ of the reflection of $Y_0(t,\bt)$ about the $\Be_1$ axis.
\end{proof}
We now consider the nature of the approximate kernel of $\mathcal L_g$ on $\widetilde S[p]$.
By approximate kernel, we mean the span of eigenfunctions of $\mathcal L_g$ with small eigenvalue.
Following standard methodology
we will use the methods of Appendix B in \cite{KapAnn} and compare each $\widetilde S[p]$ in the induced metric with an appropriate embedding of the round sphere.
The maps $\widetilde Y[p]$ will be used for the comparison.
We also find it helpful to define scaled Jacobi functions, induced by translation vector fields. Notice that for $d=0, \boldsymbol \zeta=0$, these functions behave on $\widetilde Y[p](S[p])$ like an orthonormal basis of eigenfunctions for the lowest two eigenspaces of the operator $\Delta_{\Ss^n}+n$.
\begin{definition}Recalling \ref{Vpdefn},
let $\hF_i[{d,\boldsymbol \zeta}] : M \to \mathbb R$ for $i=0, \dots, n$ be defined by
\begin{equation}
\label{hFdefeq}
\hF_i[{d,\boldsymbol \zeta}](x) := \frac{ N_{{d,\boldsymbol \zeta}}(x)\cdot \mathbf e_{i+1} }{\| N_{\Ss^n}\cdot \Be_{i+1}\|_{L^2(\Ss^n)}}=\widetilde \omega_{n}^{-\frac 12} N_{{d,\boldsymbol \zeta}}(x)\cdot \mathbf e_{i+1}.
\end{equation}Here $N_{{d,\boldsymbol \zeta}}$ is the unit normal to the immersion $\hYtdz$.
\end{definition}
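If, as \eqref{hFdefeq} suggests, $\widetilde \omega_{n}$ denotes the squared $L^2(\Ss^n)$ norm of $N_{\Ss^n}\cdot \Be_{i+1}$, it can be computed by symmetry (here $\mathrm{Vol}(\Ss^n)$ denotes the volume of the unit $n$-sphere):
\[
\widetilde \omega_{n} = \int_{\Ss^n} (N_{\Ss^n}\cdot \Be_{i+1})^2 \, dg_{\Ss^n}
= \frac 1{n+1}\sum_{j=1}^{n+1} \int_{\Ss^n} (N_{\Ss^n}\cdot \Be_{j})^2 \, dg_{\Ss^n}
= \frac{\mathrm{Vol}(\Ss^n)}{n+1},
\]
since $|N_{\Ss^n}|=1$ pointwise. In particular $\widetilde \omega_{n}$ is independent of $i$.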
Before determining the approximate kernel, we prove a technical lemma that provides supremum bounds for eigenfunctions with low eigenvalue.
\begin{lemma}\label{boundedeflemma}
Let $f$ be an eigenfunction for $\mathcal L_g$ on $\widetilde S[p]$ with eigenvalue $0\leq|\lambda| <(4\rout)^{-1}$. Then
\[
\|f:C^{2,\beta}(\widetilde S[p],\rho_d,g)\| \leq C(\beta).
\]
\end{lemma}
\begin{proof}Suppose $\mathcal L_g^\lambda f = 0$ on $\widetilde S[p]$ for some $f$.
We first note that the uniform geometry on $S_5[p]$ (even as $\underline{\tau} \to 0$) implies the boundedness on $S_1[p]$. Next, we note that on each $\Lambda[p,e,1]$ adjoining $S[p]$ the Dirichlet problem for $\mathcal L_g^\lambda$ has a unique solution. At $\Cout_1[p,e,0]$, decompose $f=f_0+f_1+f_{\mathrm{high}}$ following \ref{f_decomp}. Determine $a_i$ such that $(f_0+f_1)\big|_{\Cout_1}= \sum_{i=0}^n a_i V_i^\lambda[\Lambda[p,e,1],1,0]\big|_{\Cout_1}$. Then the equality holds on all of $\Lambda[p,e,1] \backslash S_1[p]$ and the bound follows by \ref{annulardecaylemma}. For $f_{\mathrm{high}}$ on $\Lambda[p,e,1]$, the bounds follow immediately from the estimates of \ref{linearcor}, with $\Cout$ replaced by $\Cout_1$.
\end{proof}
\begin{lemma}
\label{approxkerprop}
For $\underline{b}$ as in \ref{ass:tgamma} and $\epsilon>0$ there exists $\maxTG>0$ sufficiently small such that for each $0<|\underline{\tau}|<\maxTG$:
Let $p \in V(\Gamma)$.
Then the Dirichlet problem for $\mathcal L_g$ on $\widetilde S[p]$ has exactly $n+1$ eigenvalues in $[-\epsilon, \epsilon]$ and no other eigenvalues in $[-1,1]$.
There exists a set $\{f_0[p], \dots, f_{n}[p]\}$ that are an orthonormal basis for the \emph{approximate kernel} for $\widetilde S[p]$ such that
$f_i[p] \in C_0^\infty (S[p])$. Moreover, $f_{i}[p]$ depends continuously on all of the
parameters of the construction and satisfies
\[\|f_{i}[p] - \hF_i[{d,\boldsymbol \zeta}]: C^{2,\beta}(S_5[p],g) \| < \epsilon.
\]
\end{lemma}
\begin{proof}We prove the lemma by modifying the results of \cite{KapAnn}, Appendix B, which determine relationships between eigenfunctions and eigenvalues of the Laplace operator on two Riemannian manifolds that are close in a suitable sense. (Throughout the proof, all references to Appendix B or enumerations with B are references to the paper \cite{KapAnn}.)
In the lower dimensional setting in \cite{BKLD,KapAnn},
the linear problem was solved in a conformal metric so that $\mathcal L_h = \Delta_h + c$ for some constant $c$.
Therefore, it was enough to consider the Laplace operator and compare eigenvalues and eigenfunctions there.
In this new setting, we are not free to choose such a conformal metric and the potential is not constant in the metric $g$.
Therefore, we will adapt the ideas of Appendix B in \cite{KapAnn} to include our non-constant potential.
Let $(\widetilde S[p],g)$ and $(\Ss^n,g_{\Ss^n})$ be the two manifolds under consideration. Note that these manifolds satisfy assumptions (1) and (2) of B.1.4. Also note that assumption (3) was needed
to provide supremum bounds for the eigenfunctions.
We observe that for each $\widetilde S[p]$, \ref{boundedeflemma} implies such a bound exists for eigenfunctions with eigenvalue
$\le (4\max_{e \in E_p}\urout[e;d])^{-1}$. This bound is $>2(n+1)$ and so it will be sufficient for our purposes.
We follow the convention that $\lambda$ is an eigenvalue for the operator $\mathcal L$ if there exists $f \neq 0$ such that $\mathcal Lf=-\lambda f$.
Then the operator $\mathcal L_{g_{\Ss^n}}:=\Delta_{\Ss^n}+n$ has lowest eigenvalues $-n,0,n+2$ and the eigenvalue $0$ has multiplicity $n+1$. Thus, the only eigenvalue for $\mathcal L_{g_{\Ss^n}}$ in the interval $[-1,1]$ is zero with multiplicity $n+1$. Indeed, an orthonormal basis of the kernel of $\mathcal L_{g_{\Ss^n}}$ is given by the functions
\begin{equation}\label{Sphere_kernel}
\hat f_{i, \Ss^n}:= \frac{ N_{\Ss^n}\cdot \Be_{i+1}}{\| N_{\Ss^n}\cdot \Be_{i+1}:L^2(\Ss^n)\|},
\qquad\text{ for } i=0,\dots, n,
\end{equation}
where $N_{\Ss^n}$ is the inward normal to the unit hypersphere.
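The listed eigenvalues can be read off from the standard spectrum of the round sphere: $-\Delta_{\Ss^n}$ has eigenvalues $k(k+n-1)$ for $k \geq 0$, with the $k=1$ eigenspace spanned by the coordinate functions and of dimension $n+1$. With our sign convention this gives, for $\mathcal L_{g_{\Ss^n}}=\Delta_{\Ss^n}+n$,
\[
\lambda_k = k(k+n-1)-n, \qquad \lambda_0=-n, \quad \lambda_1 = 0, \quad \lambda_2 = n+2.
\]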
We now construct two functions $F_1, F_2$ that will satisfy the assumptions B.1.4 and one additional assumption.
Let $\widetilde Y[p]: \widetilde S[p] \to \Ss^n$ be the function defined in \eqref{tildeYp} and recall that $\widetilde g:= \widetilde Y^*( g_{\mathbb S^n})$.
Let $\overline \psi:\widetilde S[p] \to [0,1]$ be a smooth cutoff function on $\widetilde S[p]$ such that $\overline \psi \equiv 1$ on $S[p]$ and, on each adjacent $\Lambda[p,e,1]$, $\overline \psi \equiv 1$ on $\Lambda[p,e,1] \cap ([\underline{b}, d_1] \times \Ssn)$ and $\overline \psi \equiv 0$ on $\Lambda[p,e,1] \cap ([d_2, \Pe-\underline{b}]\times \Ssn)$, for some $d_1< d_2$ chosen so that $\sech(d_1)=\underline \epsilon$ and $\sech(d_2)=\underline \epsilon/2$ for some $\underline \epsilon>0$ to be determined. If $\overline \psi(t,\bt):= \frac 2 {\underline \epsilon}\sech(t)-1$ on $[d_1,d_2] \times \Ssn$, then $|\nabla_{\widetilde g} \overline \psi| \leq 4\underline \epsilon^{-1}$ there and the gradient vanishes elsewhere. Moreover, by \ref{rratiolemma} and \ref{radiuslemma}, $ |\nabla_g \overline \psi| \sim_{C(\underline{b})} |\nabla_{\widetilde g} \overline \psi| $.
Let $F_1:C^\infty_0(\widetilde S[p]) \to C^\infty_0(\Ss^n)$ such that for $f \in C^\infty_0(\widetilde S[p])$,
\[
F_1(f) \circ \widetilde Y[p] :=\overline \psi f .
\] Let $F_2:C^\infty_0(\Ss^n) \to C^\infty_0(\widetilde S[p])$ such that for $f \in C_0^\infty(\Ss^n)$,
\[
F_2(f):= \overline \psi f \circ \widetilde Y[p].
\] For any $\epsilon>0$ there exists $\underline \epsilon$ sufficiently small so that the requirements of B.1.6 are met. In addition, we demonstrate that
\begin{equation}
\label{A_requirement}
\left|\int_{M_i}|A_i|^2F_i(f)^2 \, dg_i - \int_{M_{i'}}|A_{i'}|^2f^2\, dg_{i'} \right|\leq \epsilon\|f\|_{\infty}^2 \text{ for } i \neq i'; i,i' \in\{1,2\}.
\end{equation}Here $(M_i,g_i), (M_{i'},g_{i'})$ correspond to the two manifolds and metrics of interest and $A_i, A_{i'}$ correspond to the second fundamental form on the appropriate manifold. We require an estimate like \eqref{A_requirement} since the Rayleigh quotient now includes such a term in the numerator.
We demonstrate \eqref{A_requirement} and a few of the estimates in B.1.6 and leave the rest to the reader as they can be easily verified.
Note that the first inequality in B.1.6 should read
\[
\|F_if\|_\infty \leq 2 \|f\|_\infty
\]and this is immediately verified by the definitions. Further, since $n \geq 3$, using \ref{geolimit} with $x=d_2$ implies that
\[
\int_{\widetilde S[p]} |\nabla_{ g} \overline \psi|^2 \, dg \leq C(\underline{b})(1+ C(d_2) |\underline{\tau}|^{\frac 1{n-1}}) \int_{\widetilde S[p]} |\nabla_{\widetilde g} \overline \psi|^2 \, d\widetilde g \leq C(n,\underline{b})(1+ C(d_2) |\underline{\tau}|^{\frac 1{n-1}})\underline \epsilon^{n-2}\leq C \underline \epsilon.
\]Again using \ref{geolimit}, by the definition of the $F_i$,
\begin{equation}\begin{split}
\label{middlecomparison}
\int_{M[p]} fg \, dg = (1+O( |\underline{\tau}|^{\frac 1{n-1}}))\int_{\Ss^n \cap \widetilde Y[p](M[p])} F_1(f)F_1(g) \, dg_{\Ss^n},\\
\int_{M[p]} F_2(f)F_2(g) \, dg = (1+O( |\underline{\tau}|^{\frac 1{n-1}}))\int_{\Ss^n \cap \widetilde Y[p](M[p])} fg \, dg_{\Ss^n}.
\end{split}
\end{equation}
Therefore, to demonstrate that orthogonality is almost preserved, we only need consider the behavior on each $\Lambda[p,e,1]$.
In that case, one can verify that
\begin{multline*}
\left|\int_{\Lambda[p,e,1]} F_2(f)F_2(g) \, dg - \int_{\Ss^n \cap \widetilde Y[p](\Lambda[p,e,1])} fg \, dg_{\Ss^n}\right|\leq
\\
\leq (1+ O( |\underline{\tau}|^{\frac 1{n-1}}))\left|\int_{\Lambda[p,e,1]}(f \circ \widetilde Y[p])( g \circ \widetilde Y[p])(1-\overline \psi) \, dg\right|
\leq C\underline \epsilon^n\|f\|_\infty \|g\|_\infty.
\end{multline*}
Other estimates in B.1.6 proceed similarly.
By \eqref{middlecomparison} and the fact that $|A_g| = n$ on $M[p]$, \eqref{A_requirement} holds on the domain $M[p]$. So we consider each $\Lambda[p,e,1]$. Of critical importance is the fact that while $|A_g|^2$ becomes unbounded as $\underline{\tau}\to 0$, we may choose $\underline \epsilon$ small enough so that $\int_{[d_1, \Pe-\underline{b}] \times \Ssn}|A_g|^2 dg$ is as small as we like. To see this, first recall that $|\frac {dw_{\td}}{d\tsd}|\leq 1$ and thus by \eqref{diffeodifference}, $|\frac 1{\ur_{d}}\frac{\partial \ur_{d}}{\partial t}| \leq 1+ |\underline{\tau}|^{\frac 1{2(n-1)}}$.
Therefore, we may make the change of variables
\begin{multline*}
\int_{[d_1, \Pe-\underline{b}]\times \Ssn} |A_g|^2 dg = n\int_{d_1}^{\Pe-\underline{b}} \int_{\Ssn}( 1+(n-1)\td^2 \ur_{d}^{-2n} )\ur_{d}^n \,dt dg_{\Ssn} \leq
\\
\leq n(1+|\underline{\tau}|^{\frac 1{2(n-1)}})\omega_{n-1}\int^{\ur_{d}(d_1)}_{\urin[e;d]}( 1+(n-1)\td^2 r^{-2n} )r^{n-1}\, dr \leq
\\
\leq C(n)\left( \underline \epsilon^n +(n-1)\td^2 \urin[e;d]^{-n} \right)
\leq C(n)\underline \epsilon^n
\end{multline*}
since $\td^2 \urin[e;d]^{-n}= O(|\underline{\tau}|^{\frac{n-2}{n-1}})$.
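The order of this last term can be verified directly: granting $\urin[e;d] \sim_{C(\underline{b})} |\td|^{\frac 1{n-1}}$ (compare \eqref{ddef} and \ref{td_tzcomparison}) and using also $|\td| \leq C|\underline{\tau}|$,
\[
\td^2\, \urin[e;d]^{-n} \sim_{C(\underline{b})} |\td|^{2-\frac n{n-1}} = |\td|^{\frac{2(n-1)-n}{n-1}} = |\td|^{\frac{n-2}{n-1}} \leq C|\underline{\tau}|^{\frac{n-2}{n-1}}.
\]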
Therefore, given $\epsilon>0$ we may increase $d_1$ if necessary (decreasing $\underline \epsilon$) so that $\int_{[d_1,\Pe-\underline{b}]\times \Ssn}|A_g|^2\, dg \leq \epsilon/2$. Thus,
\begin{multline*}
\left|\int_{\Ss^n \cap \widetilde Y[p](\Lambda[p,e,1])}n F_1(f)^2\,dg_{\Ss^n} - \int_{\Lambda[p,e,1]} |A_g|^2 f^2 \, dg\right| \leq
\\
\leq
O( |\underline{\tau}|^{\frac 1{n-1}})\int_{[\underline{b},d_1]\times \Ssn} f^2 \, dg
+ \int_{[d_1, \Pe-\underline{b}] \times \Ssn} |A_g|^2f^2 \, dg
\leq \epsilon\|f\|_\infty^2.
\end{multline*}
On the other hand,
\begin{multline*}
\left|\int_{\Lambda[p,e,1]} F_2(f)^2|A_g|^2 \, dg - \int_{\Ss^n \cap \widetilde Y[p](\Lambda[p,e,1])} nf^2 \, dg_{\Ss^n}\right|=
\\
= O( |\underline{\tau}|^{\frac 1{n-1}})\int_{\Ss^n \cap \widetilde Y[p]([\underline{b},d_1]\times \Ssn)} n f^2\, dg_{\Ss^n} +
\int_{\Ss^n \cap \widetilde Y[p]([d_1, \Pe -\underline{b}]\times \Ssn)} n f^2\, dg_{\Ss^n} \leq
\\
\leq C(n) \underline \epsilon^n\|f\|_\infty^2
\leq \epsilon \|f\|_\infty^2.
\end{multline*}
With the addition of the estimate \eqref{A_requirement},
B.2.2 holds for eigenfunctions and eigenvalues of $\mathcal L_g$.
Perhaps the most difficult estimate to confirm in this new setting is B.2.2 (4).
Using the Rayleigh quotient, we have that (with $\| \cdot \|$ denoting the $L^2$ norm)
\[
\| \, |A|\, f'_n\|^2 + \|f'_n\|^2 \geq \frac {\delta - C \epsilon}{\lambda_{2,n}+ \delta}.
\]When $|A|^2 \equiv n$ (i.e. the manifold is $\Ss^n$), we immediately get the required lower bound on $\|f'_n\|^2$. On the other hand, if the manifold is $\widetilde S$, for each $\Lambda = \Lambda[p,e,1]$,
\[
\int_\Lambda |A_g|^2 f^2 \, dg \leq C(n) \int_\Lambda f^2 \, dg + \epsilon\|f\|_\infty^2.
\]Therefore, the lower bound holds in this case as well. All other applications of the Rayleigh quotient in the proof of B.2.2 are simpler and do not require the small integrability condition on $|A_g|^2$ along $\Lambda$.
Now we may apply the results of Appendix B, appropriately modified, to find an orthonormal basis of $n+1$ eigenfunctions on $\widetilde S$ with small eigenvalue that are $L^2$ close to those described in \eqref{Sphere_kernel}.
We obtain the desired $C^{2,\beta}$ estimate in the following manner. Because of the uniform geometry of $S_6$ for $\maxTG$ sufficiently small, we can use standard linear theory on the interior to upgrade the $L^2$ estimates of Appendix B to $C^{2,\beta}$ estimates. Moreover, on $S_{d_1}$,
\[F_2(\hat f_{i,\Ss^n}) =\hat f_{i,\Ss^n} \circ \widetilde Y.
\] By the definition of the immersions and \ref{geopropcentral} and \ref{geolimit}, for any $\epsilon>0$ we can choose $\maxTG$ sufficiently small so that
\[
\|F_2(\hat f_{i,\Ss^n})-\hF_i[{d,\boldsymbol \zeta}]:C^{2,\beta}( S_6, g)\| < \epsilon/2.
\]To make the dependence continuous, we let $f_{i}$ denote the normalized $L^2(\widetilde S, g)$ projection of $\hF_i[{d,\boldsymbol \zeta}]$ onto the span of $F_2(\hat f_{i, \Ss^n})$.
\end{proof}
Following the general methodology, we introduce the extended substitute kernel. Notice that we have already solved the semi-local linearized problem everywhere except $\widetilde S[p]$. Thus, the extended substitute kernel is a much smaller space of functions than for previous constructions of similar type.
Let $p \in V(\Gamma)$.
We fix $\Be'_i[p]$ for $i=1, \dots, n+1$ depending only on $\Gamma$ and such that
$$
|\Be'_i[p]-\Be_i| < 54 \delta',
\qquad\qquad
\text{ and }
\forall e\in E_p(\Gamma) ,
\qquad
|\Be'_i[p]- \mathbf{v}\pe | >9\delta',
$$
which by the smallness of the parameters implies that $\Be'_i[p]\in S[p]\subset M$.
We have then
\begin{definition}[The substitute {kernel $\mathcal{K}[p]$}]
\label{D:calKp}
We define
$\mathcal{K}[p]$
to be the span of
(recall \ref{hFdefeq})
$\left\{ \, \psi[2\delta',\delta'] \circ {\mathbf{d}}^{M,g}_{\Be'_1[p],\dots,\Be'_{n+1}[p]} \, \hF_i[{d,\boldsymbol \zeta}] \, \right\}_{i=0}^n \subset C^\infty(M) $.
We also define a basis $\{w_i[p]\}_{i=0}^n$ of $\mathcal{K}[p]$ by
\begin{equation}
\label{wFdefeq}
\int_{S[p]} w_i[p] \hF_j[{d,\boldsymbol \zeta}] \; dg = \delta_{ij}.
\end{equation}
\end{definition}
\begin{lemma}
\label{subskernellemma}
\label{extsubslemma}
For each $p$, the following hold:
\begin{enumerate}
\item $w_i[p]$ is supported on $S[p]$.
\item $\|w_i[p]: C^{2, \beta}(M, g)\| \leq C$.
\item For $E \in C^{0,\beta}(\widetilde S[p],g)$ there is a unique $\widetilde w \in \mathcal{K}[p]$ such that $E+\widetilde w$ is $L^2(\widetilde S[p],g)$ orthogonal to the approximate kernel on $\widetilde S[p]$. Moreover, if $E$ is supported on $S_1[p]$, then
\[
\|\widetilde w:C^{2,\beta}(M,g)\|\leq C(\beta)\|E:C^{0,\beta}( S_1[p],g)\|.
\]
\end{enumerate}
\end{lemma}
\begin{definition}[The extended substitute {kernel $\mathcal{K}\pe$}]
\label{D:wi}
For $i=0,\dots, n$, $\pe \in A(\Gamma)$, let $v_i\pe:M \to \mathbb R$ such that (recall \ref{annulardecaydef})
\[v_{i}\pe(x):= \left\{\begin{array}{ll} \widetilde c_i\pe V_i[\Lambda [p,e,1],1,0](x) \psi[\underline{b}, \underline{b}+1]\circ \tsd(x) ,& x\in \Lambda[p,e,1]\\
0, & x \in M \backslash \Lambda[p,e,1]
\end{array}\right.\]
where the $\widetilde c_i\pe$ are normalized constants so that (recall \ref{phidef})
\begin{equation}
\label{tildecs2}
\widetilde c_i\pe V_i[\Lambda[p,e,1],1,0]= \phi_i
\end{equation}
on $\Cout_1[p,e,0]$.
For each $i=0, \dots, n$ define $w_i\pe: M \to \mathbb R$ such that
$
w_i\pe:= \mathcal L_g v_i\pe.
$
For each $\pe \in A(\Gamma)$, let $\mathcal{K}\pe = \langle w_0\pe, \dots, w_{n}\pe \rangle_{\mathbb{R}}$.
\end{definition}
\begin{remark}Note that the $\widetilde c_i\pe$ depend smoothly on $d$.
Moreover, by \ref{ass:b}, \ref{annulardecaydef}, and \ref{annulardecaylemma},
\begin{equation}\label{wide_c_bound}
|\widetilde c_i\pe |\sim_{C(\underline{b})} 1.
\end{equation}
\end{remark}
\begin{definition}[The global extended substitute {kernel $\mathcal{K}$}]
\label{D:K}
We define the extended substitute kernel
\[
\mathcal K := \mathcal K_V \oplus \mathcal K_A, \qquad \text{ where } \qquad
\mathcal{K}_V:= \bigoplus_{p \in V(\Gamma)} \mathcal K[p] , \quad
\mathcal{K}_A:= \bigoplus_{\pe\in A(\Gamma)} \mathcal K\pe.
\]
\end{definition}
We now demonstrate how to solve a modified linear problem on $\widetilde S[p]$ with good estimates.
\begin{lemma}\label{linearpartp}
Let
$\mathcal K_A[p] := \bigoplus_{e\in E_p} \mathcal K\pe$.
Given $ \beta \in (0,1), \gamma \in(1,2)$, there is a linear map
\[
\mathcal R_{\widetilde S[p]}:\{E \in C^{0,\beta}(\widetilde S[p]):E \text{ is supported on } S_1[p]\} \to
C^{2,\beta}(\widetilde S[p]) \oplus \mathcal K[p]\oplus \mathcal K_A[p],
\]
such that the following hold for $E$ in the domain of $\mathcal R_{\widetilde S[p]}$ above and $(\varphi,w_v,w_a)=\mathcal R_{\widetilde S[p]}(E)$:
\begin{enumerate}
\item $\mathcal{L}_g \varphi = E + w_v+w_a$ on $\widetilde S[p]$.
\item $\varphi$ vanishes on $\partial \widetilde S[p]$.
\item $\|w_v,w_a:C^{2,\beta}( S[p],g)\|+ \|\varphi: C^{2, \beta}(\widetilde S[p], \rho_d,g)\| \leq C( \beta)\|E: C^{0,\beta}(S_1[p], \rho_d,g)\|$.
\item $ \|\varphi:C^{2,\beta}(\Lambda[p,e,1], \ur_{d},g , \ur_{d}^{\gamma})\| \leq C( \underline{b},\beta, \gamma)\|E:C^{0,\beta}(S_1[p], \rho_d,g)\|$ for all $e \in E_p$.
\item $\mathcal{R}_{\widetilde S[p]}$ depends continuously on ${d,\boldsymbol \zeta}$.
\end{enumerate}
\end{lemma}
\begin{proof} \ref{subskernellemma} and classical theory together imply that there exist $w_v \in \mathcal K[p]$ and $\widetilde\varphi \in C^{2,\beta}(\widetilde S[p])$ such that $\mathcal L_g \widetilde\varphi = E + w_v$ and $\widetilde\varphi|_{\partial \widetilde S[p]}=0$. For each $\Lambda[p, e,1]$, $e \in E_p$, we modify $\widetilde\varphi$ using the elements $v_i[p,e]$. Let $\widetilde \varphi_e^T$ denote the projection of $\widetilde\varphi$ onto $\mathcal H_1[\Cout_1[p,e,0]]$. Let $\widetilde\varphi_e^\perp = \widetilde\varphi_e-\widetilde\varphi_e^T$ on $\Cout_1[p,e,0]$ and let $V_{\widetilde\varphi_e} := \mathcal R_\partial^{\mathrm{out}} (\widetilde\varphi_e^\perp|_{\Cout_1[p,e,0]})$. Notice that $(\widetilde\varphi - V_{\widetilde\varphi_e})|_{\Cout_1[p,e,0]} \in \mathcal H_1[\Cout_1[p,e,0]]$ and we denote
\[
(\widetilde\varphi - V_{\widetilde\varphi_e})|_{\Cout_1[p,e,0]} = \sum_{i=0}^{n}\alpha_i \phi_{i}|_{\Cout_1[p,e,0]}= \sum_{i=0}^{n} \alpha_i v_i[p,e]|_{\Cout_1[p,e,0]}.
\]
Standard theory implies that $\| \widetilde\varphi:C^{2,\beta}(S_2,\rho_d, g)\|\leq C(\underline{b},\beta) \|E:C^{0,\beta}(S_1,\rho_d,g)\|$. Coupling this with \ref{linearcor} implies that $|\alpha_i| \leq C(\underline{b},\beta) \|E:C^{0,\beta}(S_1,\rho_d,g)\|$. Set
\[
\varphi = \widetilde \varphi - \sum_{e \in E_p}\sum_{i=0}^{n} \alpha_i v_i[p,e], \qquad w_a = - \mathcal L_g\left(\sum_{e \in E_p}\sum_{i=0}^{n} \alpha_i v_i[p,e]\right).
\]
By construction $\varphi_e^\perp = \varphi_e$ and thus on each $\Lambda[p,e,1]$, $\mathcal R_\partial^{\mathrm{out}} ( \varphi_e^\perp) = \varphi$. \ref{linearcor} then provides the necessary decay.
\end{proof}
\subsection*{Solving the linearized equation globally}
We will solve the global problem in a manner analogous to \cite{BKLD}.
The hypotheses of the semi-local lemmas require that the inhomogeneous term on each extended standard region be supported on the enclosed standard region. Thus, to solve the global problem we first use a partition of unity on $M$ to split the inhomogeneous term into pieces, each supported on a region where solvability and good estimates are available. After solving on each region separately, we patch the solutions back together. The partitioning and patching of course introduce error; we demonstrate that the error estimates are sufficiently small to iterate away.
We first introduce the cutoff functions we require.
\begin{definition} For $d$ satisfying \eqref{drestriction}, we uniquely define smooth functions
$\psi_{S[p]}[d]$, $\psi_{\widetilde S[p]}[d]$, $\psi_{S\pen}[d]$, $ \psi_{\widetilde S\pen}[d]$, $\psi_{\Lambda[p,e,m']}[d]$ such that
\begin{enumerate}[(i)]
\item $\psi_{S[p]}[d] = \psi_{\widetilde S[p]}[d] \equiv 1$ on $S[p]$,
\[\psi_{S[p]}[d]=\left\{\begin{array}{ll}
\psi[\underline{b}+1, \underline{b}]\circ \tsd\big|_{M[e]}& \text{if } p=p^+[e]\\
\psi[\RH-(\underline{b}+1), \RH-\underline{b}]\circ \tsd\big|_{M[e]}& \text{if }p=p^-[e],
\end{array}\right.
\]
\[\psi_{\widetilde S[p]}[d]=\left\{\begin{array}{ll}
\psi[\Pe-\underline{b}, \Pe-(\underline{b}+1)] \circ \tsd\big|_{M[e]}& \text{if } p=p^+[e]\\
\psi[\RH-(\Pe -\underline{b}), \RH-(\Pe -(\underline{b}+1))]\circ \tsd\big|_{M[e]}& \text{if }p=p^-[e],
\end{array}\right.
\]and the functions are $0$ elsewhere.
\item
\[\psi_{\Lambda[p^+[e],e,m']}[d]=\left\{\begin{array}{l}
\psi[(m'-1)\Pe+\underline{b},(m'-1)\Pe+( \underline{b}+1)]\circ \tsd\big|_{M[e]}\cdot
\\ \quad \quad \psi[m' \Pe - \underline{b}, m'\Pe -(\underline{b}+1)]\circ \tsd\big|_{M[e]},
\end{array}\right.
\]
\[
\psi_{\Lambda[p^-[e],e,m']}[d]=\left\{\begin{array}{l}
\psi[\RH-(m'\Pe- \underline{b}), \RH-(m'\Pe-(\underline{b}+1))]\circ \tsd\big|_{M[e]}\cdot \\
\quad \quad \psi[\RH - ((m'-1)\Pe + \underline{b}), \RH - ((m'-1)\Pe + (\underline{b} +1))]\circ \tsd\big|_{M[e]},
\end{array}\right.
\]and the functions are $0$ elsewhere.
\item For $m<l[e]$,
$\psi_{S[p,e,m]}[d] = (1-\psi_{\Lambda\pen}[d])(1-\psi_{\Lambda[p,e,m+1]}[d])$ on $S_1\pen$ and is $0$ elsewhere.
\item For $m=l[e]$, $\psi_{S[p,e,l[e]]}[d] = (1-\psi_{\Lambda[p^+[e],e,l[e]]}[d])(1-\psi_{\Lambda[p^-[e],e,l[e]]}[d])$ on $S_1\pen$ and is $0$ elsewhere.
\item For $m<l[e]$, $\psi_{\widetilde S[p,e,m]}[d] = \psi_{S\pen}[d] + \psi_{\Lambda[p,e,m]}[d] + \psi_{\Lambda[p,e,m+1]}[d]$.
\item For $m=l[e]$, $\psi_{\widetilde S[p,e,m]}[d] = \psi_{S[p,e,l[e]]} [d]+ \psi_{\Lambda[p^+[e],e,l[e]]} [d]+ \psi_{\Lambda[p^-[e],e,l[e]-1]}[d]$.
\end{enumerate}
\end{definition}
Observe that $\psi_{S[p]}[d], \psi_{S\pen}[d], \psi_{\Lambda[p,e,m']}[d]$ form a partition of unity on $M$. Also note that each of the functions $\psi_{\widetilde S[p]}[d], \psi_{\widetilde S\pen}[d]$ is identically 1 on almost all of $\widetilde S[p], \widetilde S\pen$, respectively. Near the boundary they transition smoothly to zero. Finally, $\supp(\psi_{S[p]}[d]) \subset S_1[p]$ and $\supp (\psi_{S\pen}[d] )\subset S_1\pen$.
We now set the notation for defining a global $C^{2,\beta}$ function by pasting together appropriately cutoff local functions.
\begin{definition}\label{patching}
Let $u[p] \in C^{k, \beta}( \widetilde S[p])$, $u\pen \in C^{k,\beta}(\widetilde S\pen)$, $p \in V(\Gamma), \pen \in V_S(\Gamma)$, be functions that are
zero in a neighborhood of $\partial \widetilde S[p]$, $\partial \widetilde S\pen$. We define $U=\mathbf U(\{u[p],u\pen\})\in C^{k,\beta}(M)$ to be the unique function such that
\begin{enumerate}[(i)]
\item $U|_{S[p]}=u[p], U|_{S\pen}=u\pen$.
\item $U|_{\Lambda[p,e,1]}=u[p] +u[p,e,1]$.
\item For $m'<l[e]$, $U|_{\Lambda[p,e,m']} = u[p,e,m'-1] + u[p,e,m']$.
\item For $m'=l[e]$, $U|_{\Lambda[p^+[e],e,l[e]]} = u[p^+[e],e,l[e]-1] + u[p^+[e],e,l[e]]$ while $U|_{\Lambda[p^-[e],e,l[e]]}=u[p^-[e],e,l[e]-1]+u[p^+[e],e,l[e]]$.
\end{enumerate}
\end{definition}
Finally, we define the global norm that we will use. In order to close the fixed point argument, the global norm we define must be uniformly equivalent for all immersions $\hYtdz$ that may arise. After defining the global norm, we establish this equivalence in \ref{metricequivalence}. Before we prove this equivalence and before precisely defining the global norm, we give some indication as to why we choose to define the norm in this particular manner.
Given any ${d,\boldsymbol \zeta}$ satisfying \eqref{drestriction}, \eqref{zetarestriction}, it will be straightforward to show that the norms we used in the semi-local settings are uniformly equivalent to the norm given by $d=0,\boldsymbol \zeta= \mathbf 0$. Therefore, on the semi-local level we are free to use those norms already given. On the other hand, the global norm will need to incorporate a decaying weight function. If this weight function is given entirely in terms of ${d,\boldsymbol \zeta}$, then the ratio between two norms for $d\neq 0$ will blow up along a ray of $M$. Therefore, it is convenient to use a decay function that depends upon the immersion given by $d=0, \boldsymbol \zeta =0$. To clearly distinguish the semi-local norms and the decay, we define the global norm by taking the supremum of semi-local norms on overlapping regions.
\begin{definition}[The global norms]
\label{metricdefn}
For $k \in \mathbb N$, $\beta \in (0,1)$, $\gamma \in (1,2)$, and $u\in C^{k,\beta}_{loc}(M)$,
we define $\|u\|_{k,\beta,\gamma;\dzeta}$ to be the supremum of the following semi-norms (when they are finite)
\begin{enumerate}[(i)]
\item $\|u:C^{k,\beta}(S_1[p],\rho_d,g)\| $ for each $p \in V(\Gamma)$,
\item $\de^{-m/2}\|u:C^{k,\beta}(\Sp_1\penp,\rho_d,g)\| $ for each $\pen \in V_S^+(\Gamma)$,
\item $\de^{-(m-1)/2}\|u:C^{k,\beta}( \Smext \penm, \rho_d,g, f_d\,\ur_{d}^{k-2})\|$ for each $\pen \in V_S^-(\Gamma)$,
\end{enumerate}
where
$f_d:\cup_{\penm}\Smext\penm \to \mathbb R$ is such that, for $m \neq l[e]$,
\begin{equation}\label{fdef}
f_d(x) = \left\{ \begin{array}{ll}
\ur_{d}(x)^\gamma,& x\in \Lambda\penm\\
\urin[e;d]^{\gamma},& x \in \Sm\penm\\
\urin[e;d]^\gamma(\urin[e;d]/\ur_{d}(x))^{n-2+ \gamma}, & x\in \Lambda[p,e,m+1]
\end{array}\right.
\end{equation}
and when $m=l[e]$ define
\begin{equation}\label{fdef2}
f_d(x) = \left\{ \begin{array}{ll}
\ur_{d}(x)^\gamma,& x \in \Lambda[p^+[e],e,l[e]]\\
\urin[e;d]^{\gamma},& x \in \Sm[p,e,l[e]]\\
\ur_{d}(x)^\gamma, & x\in \Lambda[p^-[e],e,l[e]]
\end{array}\right.
\end{equation}
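As a quick consistency check (assuming, as in the geometric setup, that $\ur_d$ is comparable to its minimum value $\urin[e;d]$ at the interfaces between the $\Lambda$ and $\Sm$ regions), the branches of \eqref{fdef} agree up to controlled constants across these interfaces:
\[
\ur_{d}^\gamma \sim_{C(\underline{b})} \urin[e;d]^{\gamma} \quad \text{where } \Lambda\penm \text{ meets } \Sm\penm,
\]
\[
\urin[e;d]^\gamma\left(\urin[e;d]/\ur_{d}\right)^{n-2+\gamma} \sim_{C(\underline{b})} \urin[e;d]^{\gamma} \quad \text{where } \Sm\penm \text{ meets } \Lambda[p,e,m+1].
\]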
Also, set
\begin{equation}\label{ddef}
\de:= \urin[e;0]^{2\gamma+n-2} \sim_{C(\underline{b})}| \tau_0[e]|^{1+(2\gamma-1)/(n-1)}.
\end{equation}
For $w=w_v+w_a \in \mathcal K$ such that
\[
w_v= \sum_{i=0}^{n} \sum_{p \in V(\Gamma)}\mu_i[p] w_i[p], \quad\quad w_a= \sum_{i=0}^{n} \sum_{\pe \in A(\Gamma)} \mu_{i}\pe w_i\pe,
\] we define the norms on $\mathcal K_V, \mathcal K_A$ such that
\begin{equation}
\left|w_v\right|_V^2:= \max_{p \in V(\Gamma) } \left\{\sum_{i=0}^n(\mu_i[p])^2\right\}, \; \:\; \left|w_a\right|_A^2:=\max_{[p,e] \in A(\Gamma) } \left\{{\sum_{i=0}^n(\mu_i[p,e])^2}\right\}.
\end{equation}
\end{definition}
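The second relation in \eqref{ddef} is elementary exponent arithmetic: assuming the neck-size relation $\urin[e;0] \sim_{C(\underline{b})} |\tau_0[e]|^{1/(n-1)}$ from the geometric setup,
\[
\urin[e;0]^{2\gamma+n-2} \sim_{C(\underline{b})} |\tau_0[e]|^{\frac{2\gamma+n-2}{n-1}} = |\tau_0[e]|^{1+\frac{2\gamma-1}{n-1}},
\]
since $2\gamma+n-2 = (n-1)+(2\gamma-1)$.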
Notice that since $\rho_d \sim_{C(\underline{b})} 1$ on $S_1[p], \Sp_1\penm$, the semi-norms on these regions in the definition above are uniformly equivalent to those taken with respect to the unscaled metric $g$.
\begin{remark}\label{td_tzcomparison}
The total decay for the function $f_d$ moving through one $\Smext$ region is:
\begin{equation*}
\mathfrak t_d[e]= \urin[e;d]^{2\gamma + n-2} \sim_{C(\underline{b})} |\td|^{1+(2\gamma-1)/(n-1)}.
\end{equation*}
The global decay factor given by $\de$ does not correspond to this value. However, since
\[
\frac{\mathfrak t_d[e]}{\mathfrak t_0[e]} \sim_{ C(\underline{b})} \left(\frac{\td}{\tz}\right)^{1+(2\gamma-1)/(n-1)} \sim_{2} 1,
\]where the second relation follows by \eqref{tauratio}, we have that
\[
\de \sim_{C(\underline{b})} \mathfrak t_d[e]
\] on each $\Smext\penm$.
\end{remark}
\begin{remark}
While the definition of norms for $w$ might appear unnatural, the choice is motivated by the nature of the construction. Because the functions $w_i[p], w_i[p,e_j]$ have uniform $C^{2,\beta}$ bounds and are supported on $S_1[p]$,
\[
\|\cdot\|_{2,\beta,\gamma;\dzeta}\sim_{C(\underline{b})} \left|\cdot\right|_V, \left|\cdot\right|_A
\] for elements of $\mathcal K_V, \mathcal K_A$.
\end{remark}
\begin{remark}
Observe that the exponent of $\de$ chosen in the definition corresponds -- in absolute value -- to the number of extended standard regions $\Smext$ between the region on which the norm is being determined and the closest central sphere. (Recall \ref{pendef}.)
\end{remark}
\begin{lemma}
\label{metricequivalence}
There exists $\widetilde C(\underline{b})>0$, independent of $\maxTG$, such that if $\maxTG$ is sufficiently small then for any $0<|\underline{\tau}|< \maxTG$ and ${d,\boldsymbol \zeta}$ satisfying \eqref{drestriction}, \eqref{zetarestriction} the following holds:
If $u : M \to \mathbb R$ such that $ \|u\|_{2,\beta,\gamma;\dzeta} <\infty$, then
\begin{equation}\label{Meq}
\|u\|_{2,\beta,\gamma;\dzeta} \sim_{\widetilde C(\underline{b})}\|u\|_{2,\beta,\gamma;{0,\boldsymbol 0}} .
\end{equation}
\end{lemma}
\begin{proof}
The definition of the global norm reduces the claim to the equivalence of the corresponding semi-local norms.
Let $g_0:=Y_{0, \mathbf 0}^*(g_{\mathbb R^{n+1}})$. By \ref{geolimit}
\[
\|(Y_{0,\mathbf 0}-p)-(\hYtdz-p'):C^k(S_{\underline{b}}[p],\rho_0,g_{0})\| \leq C(k,\underline{b}) |\underline{\tau}|^{\frac 1{n-1}}
\]
for each $p \in V(\Gamma)$ and corresponding $p' \in V(\Gtdtl)$. Thus,
\[
\|u:C^{k,\beta}(S_{\underline{b}}[p],\rho_d,g)\| \sim_{ C(k, \underline{b})} \|u:C^{k,\beta}(S_{\underline{b}}[p],\rho_0,g_0)\|.
\]
Now consider the comparison for each $e \in E(\Gamma)$ and $x \in M[e] \cap [\underline{b}, \RH - \underline{b}] \times \Ssn$. On these regions, we consider $C^{k,\beta}$ norms on balls of radius $1/10$ with respect to the metrics
\[
\rho_0(x)^{-2}g_0= \left(\frac{\ur_0}{\ur_0(x)}\right)^2(dt_0^2 + g_{\Ssn}), \quad \quad \rho_d(x)^{-2}g = \left(\frac{\ur_{d}}{\ur_{d}( x)}\right)^2\left({d\tsd^2} + g_{\Ssn}\right).
\]
The equivalence of the weights and the metrics is immediately given by \ref{diffeodifference} and \ref{rratiolemma}.
The argument for $e \in R(\Gamma)$ is identical, so the proof is complete.
\end{proof}
We are now ready to state and prove the main proposition of this section.
The strategy is as follows.
We first presume that the inhomogeneous term is supported on the standard regions.
Using the semi-local lemmas, we solve the problem on each extended standard region.
We then patch together cutoffs of these semi-local solutions, which introduces error that can be removed by iteration.
For the more general case, we first partition the inhomogeneous term and use \ref{RLambda} to solve the problem on each $\Lambda$.
We then show that the remaining problem can be reduced to the first case.
\begin{prop}\label{LinearSectionProp}For each ${d,\boldsymbol \zeta}$ satisfying \eqref{drestriction}, \eqref{zetarestriction} respectively,
there exists a linear map $\mathcal{R}_{{d,\boldsymbol \zeta}}: C^{0, \beta}(M) \to C^{2,\beta}(M)\oplus \mathcal{K}_V \oplus \mathcal K_A$ such that for $E \in C^{0,\beta}(M)$ and $(\varphi, w_v,w_a)=\mathcal{R}_{{d,\boldsymbol \zeta}}(E)$ the following hold:
\begin{enumerate}
\item $\mathcal{L}_g \varphi = E+ w_v+w_a$ on $M$.
\item $\|\varphi\|_{2,\beta,\gamma;{d,\boldsymbol \zeta}}+\left|w_v\right|_{V}+\left|w_a\right|_A\leq C( \beta,\gamma)\|E\|_{0, \beta, \gamma;{d,\boldsymbol \zeta}}$
\item $\mathcal{R}_{{d,\boldsymbol \zeta}}$ depends continuously on ${d,\boldsymbol \zeta}$.
\end{enumerate}
\end{prop}
\begin{proof}
We begin by presuming that $\supp(E) \subset\left( \cup_{V(\Gamma)} S_1[p] \bigcup \cup_{V_S(\Gamma)} S_1\pen\right)$. Let $\varphi\pen:=\mathcal R_{\widetilde S\pen}(E|_{S_1\pen})$ where $\mathcal R_{\widetilde S\pen}$ denotes the linear map from \ref{linearpluslemma} or \ref{linearminuslemma} as appropriate.
We will directly apply the results of Section \ref{DelaunayLinear} using the decay and metric dilation in terms of $\ur_d$ rather than $r$ to account for the coordinate change induced by the map $\tsd$.
Let $(\varphi[p], w_v[p], w_a[p]):= \mathcal R_{S[p]}(E|_{S_1[p]})$, defined by \ref{linearpartp}. Let
\[
\mathcal R^0 E:= \mathbf U\left( \{ \psi_{\widetilde S[p]}[d]\varphi[p], \psi_{\widetilde S\pen}[d]\varphi\pen\}\right) \in C^{2,\beta}(M),
\]
\[
\mathcal W_v^0E:= \sum_{p \in V(\Gamma)} w_v[p] \in \mathcal K_V,
\]
\[
\mathcal W_a^0E:= \sum_{p \in V(\Gamma)} w_a[p] \in \mathcal K_A,
\]\[
\mathcal{E}E:=\mathbf U(\{[[\psi_{\widetilde S[p]}[d],\mathcal L_g]]\varphi[p], [[\psi_{\widetilde S\pen}[d],\mathcal L_g]]\varphi\pen\})\in C^{0,\beta}(M),
\]where here $[[\cdot,\cdot]]$ denotes the commutator. That is, $[[\psi_{\widetilde S[p]}[d],\mathcal L_g]]\varphi[p]= \psi_{\widetilde S[p]}[d]\mathcal L_ g \varphi[p] -\mathcal L_g (\psi_{\widetilde S[p]}[d]\varphi[p])$ and the like for $\varphi\pen$.
One easily checks that, since the support of $E$ implies $\psi_{\widetilde S[p]}[d] \mathcal L_g \varphi[p]= \mathcal L_g \varphi[p]$ (and likewise for $\varphi\pen$),
\begin{equation}
\mathcal L_g \mathcal{R}^0E + \mathcal{E}E = E+ \mathcal{W}_v^0E+ \mathcal{W}_a^0E \qquad \text{on } M.
\end{equation}Notice that by construction, on regions where $\psi_{\widetilde S[\cdot]}[d]$ is not constant, $|\partial_{\tsd}^k \psi_{\widetilde S[\cdot]}[d]| \leq C$. We will frequently use, without further reference, the fact that
\begin{align*}
\|\varphi \psi:C^{2,\beta}(\Omega \cap \supp(|\nabla \psi|), \rho_d,g,f_d) \|&\leq C\|\psi:C^3(\Omega \cap \supp(|\nabla \psi|), \rho_d,g)\| \|\varphi:C^{2,\beta}(\Omega, \rho_d, g, f_d)\| \\
&\leq C \|\varphi:C^{2,\beta}(\Omega, \rho_d, g, f_d)\|.
\end{align*}
Using the estimates of \ref{linearpluslemma}, \ref{linearminuslemma}, \ref{linearpartp} and the inequalities above, we quickly verify that\begin{align*}
\|\mathcal R^0E\|_{2,\beta,\gamma;{d,\boldsymbol \zeta}} &\leq C(\beta, \gamma)\|E\|_{0, \beta,\gamma;{d,\boldsymbol \zeta}},\\
\|\mathcal W_v^0E\|_{2,\beta,\gamma;{d,\boldsymbol \zeta}} &\leq C(\beta, \gamma)\|E\|_{0, \beta,\gamma;{d,\boldsymbol \zeta}},\\
\|\mathcal W_a^0E\|_{2,\beta,\gamma;{d,\boldsymbol \zeta}}& \leq C(\beta,\gamma)\|E\|_{0, \beta,\gamma;{d,\boldsymbol \zeta}}.
\end{align*}
Only the first estimate is at all tedious to verify, and even it is straightforward: by construction, while $\varphi\pen$ is allowed to grow in the direction of the nearest central sphere, its rate of growth is much smaller than the growth permitted in that direction by the definition of the global norm.
We can finish this step of the proof by iteration, once we determine the estimate
\begin{equation}\label{iterationest}
\|\mathcal EE\|_{0,\beta,\gamma;{d,\boldsymbol \zeta}} \leq \frac 12\|E\|_{0, \beta,\gamma;{d,\boldsymbol \zeta}}.
\end{equation}
To prove this, first note that $\supp(\mathcal EE) \subset \left( \left(\cup_{p } S_1[p] \backslash S[p]\right) \bigcup\left( \cup_{\pen}S_1\pen \backslash S\pen\right)\right)$. We now consider the estimates. On any $S_1[p]$,
\begin{align*}
\|\mathcal EE:C^{0,\beta}(S_1[p],\rho_d,g)\| &\leq C(\underline{b},\beta,\gamma)\max_{e_j \in E_p}\|\varphi[p,e_j,1]:C^{2,\beta}(S_1[p],\rho_d,g)\|\\
& \leq C(\underline{b},\beta,\gamma)\max_{e_j \in E_p}\urin[e_j,d]^{-1}\|E:C^{0,\beta}(S_1[p,e_j,1],\rho_d,g,\ur_{d}^{-2})\| \\
& \qquad \qquad \qquad\qquad\text{ by } \ref{linearminuslemma} \, (4)\\
& \leq C(\underline{b},\beta,\gamma)\max_{e_j \in E_p}\urin[e_j,d]^{\gamma-1}\|E\|_{0,\beta,\gamma;{d,\boldsymbol \zeta}} \text{ by } \eqref{fdef}.
\end{align*}
Now consider the estimates on $S_1\pen$ for $m$ odd (a catenoidal type region). First note that for $\gamma' \in (\gamma, 2)$ one can apply \ref{linearpluslemma} (3) to produce the estimate
\[
\|\varphi[p,e,m-1]:C^{2,\beta}(S_1\pen,\rho_d,g,\ur_d^{\gamma})\| \leq C(\underline{b},\beta, \gamma, \gamma')\urin[e;d]^{\gamma'-\gamma}\|E:C^{0,\beta}(S[p,e,m-1],\rho_d,g)\|.
\]Note that we are using the fact that $\mathrm{supp}(\varphi[p,e,m-1]) \cap S_1\penm$ is on $\Lf$ with respect to the domain $S[p,e,m-1]$. Also note that if $m=1$, one uses \ref{linearpartp} (4) instead.
Then
\begin{align*}
\de^{-\frac{m-1}{2}}\|\mathcal EE:C^{0,\beta}&(\Lambda[p,e,m] \cap S_1\pen,\rho_d,g,\ur_d^{\gamma-2})\| \\
&\leq C(\underline{b},\beta,\gamma)\de^{-\frac{m-1}{2}} \|\varphi[p,e,m-1]:C^{2,\beta}(S_1\pen,\rho_d,g,\ur_d^{\gamma})\|\\
&\leq C(\underline{b},\beta,\gamma, \gamma')\de^{-\frac{m-1}{2}}\urin[e;d]^{\gamma'-\gamma}\|E:C^{0,\beta}(S[p,e,m-1],\rho_d,g,\ur_d^{-2})\|\\
& \leq C(\underline{b},\beta, \gamma, \gamma') \urin[e;d]^{\gamma'-\gamma} \|E\|_{0,\beta,\gamma;{d,\boldsymbol \zeta}}.
\end{align*}On the other adjoining transition region we note that for $m<l[e]$,
\begin{align*}
\de^{-\frac{m-1}{2}}\left\|\mathcal EE:C^{0,\beta}\right.&\left.\left(\Lambda[p,e,m+1] \cap S_1\pen,\rho_d,g,\urin[e;d]^{\gamma}\left(\frac{\urin[e;d]}{\ur_d}\right)^{n-2+\gamma}\ur_d^{-2}\right)\right\| \\
&\leq C(\underline{b},\beta,\gamma) \de^{-\frac{m-1}{2}}\urin[e;d]^{-\gamma} \cdot \\
& \quad \quad \quad \quad\left\|\varphi[p,e,m+1]:C^{2,\beta}\left(S_1\pen,\rho_d,g,\left(\frac{\urin[e;d]}{\ur_d}\right)^{n-1}\right)\right\|\\
&\leq C(\underline{b},\beta,\gamma)\de^{-\frac{m+1}{2}}\de\urin[e;d]^{1-n-\gamma}\|E:C^{0,\beta}(S[p,e,m+1],\rho_d,g)\| \\
& \qquad \qquad \qquad\qquad\text{ by } \ref{linearpluslemma} \, (4)\\
& \leq C(\underline{b},\beta, \gamma) \urin[e;d]^{\gamma-1} \|E\|_{0,\beta,\gamma;{d,\boldsymbol \zeta}} \text{ by } \eqref{tauratio}, \eqref{ddef}.
\end{align*}For $m=l[e]$, we just use the previous estimate twice, once on each $\Lambda[p^+[e],e,l[e]]$ and $\Lambda[p^-[e],e,l[e]]$. Given the definition of $f_d$ for $m=l[e]$, this proves the result.
One can perform similar estimates on $S_1\pen$ for $m$ even. Adapting the argument for this setting, we apply the results of \ref{linearminuslemma} for $\gamma' \in (\gamma,2)$ to see
\begin{align*}
\de^{-\frac {m-2}2} & \left\|\mathcal EE:C^{0,\beta}\left(\Lambda[p,e,m] \cap S_1\pen,\rho_d,g,\urin[e;d]^{\gamma}\left(\frac{\urin[e;d]}{\ur_d}\right)^{n-2+\gamma}\ur_d^{-2}\right)\right\| \\
& \leq C(\underline{b},\beta,\gamma) \de^{-\frac {m-2}2}\urin[e;d]^{-\gamma} \|\varphi[p,e,m-1]:C^{2,\beta}\left(S_1\pen,\rho_d,g,(\urin[e;d]/\ur_d)^{n-2+\gamma}\right)\|\\
&\leq C(\underline{b},\beta,\gamma, \gamma')\de^{-\frac {m-2}2}\urin[e;d]^{\gamma'-\gamma}\|E:C^{0,\beta}(S_1[p,e,m-1],\rho_d,g,\ur_d^{-2}\urin[e;d]^\gamma)\|
\\
& \qquad \qquad \qquad\qquad\text{ by } \ref{linearminuslemma} \, (3)\\
&\leq C(\underline{b},\beta,\gamma, \gamma')\urin[e;d]^{\gamma'-\gamma}\|E\|_{0,\beta,\gamma;{d,\boldsymbol \zeta}}.
\end{align*}And finally
\begin{align*}
\de^{-\frac{m}{2}}\|\mathcal EE:C^{0,\beta}&(\Lambda[p,e,m+1] \cap S_1\pen,\rho_d,g,\ur_d^{\gamma-2})\| \\
&\leq C(\underline{b},\beta,\gamma)\de^{-\frac{m}{2}} \|\varphi[p,e,m+1]:C^{2,\beta}(S_1\pen,\rho_d,g,\ur_d)\|\\
&\leq C(\underline{b},\beta,\gamma)\de^{-\frac{m}{2}}\urin[e;d]^{\gamma-1}\|E:C^{0,\beta}(S_1[p,e,m+1],\rho_d,g,\urin[e;d]^\gamma \ur_d^{-2})\|\\
& \qquad \qquad \qquad\qquad\text{ by } \ref{linearminuslemma} \, (4)\\
& \leq C(\underline{b},\beta,\gamma)\urin[e;d]^{\gamma-1}\|E\|_{0,\beta,\gamma;{d,\boldsymbol \zeta}}.
\end{align*}Since $\gamma-1>0$ and $\gamma'-\gamma>0$, for $\maxTG>0$ sufficiently small,
\[
\max_{e \in E(\Gamma) \cup R(\Gamma)}\left\{C(\underline{b},\beta,\gamma) \urin[e;d]^{\gamma -1} +C(\underline{b},\beta,\gamma, \gamma')\urin[e;d]^{\gamma'-\gamma}\right\} < \frac 12.
\] This implies \eqref{iterationest}. As $\supp(\mathcal EE) \subset \supp(E)$ we can apply the same procedure and produce $\mathcal R^1E, \mathcal W^1_vE, \mathcal W^1_aE, \mathcal E^1E$ such that
\[
\mathcal L_g \mathcal R^1 E + \mathcal E^1E = \mathcal EE + \mathcal W^1_vE+ \mathcal W^1_aE
\]with
\[\|\mathcal R^1E\|_{2,\beta,\gamma;{d,\boldsymbol \zeta}}+ \|\mathcal W^1_vE\|_{2,\beta,\gamma; {d,\boldsymbol \zeta}}+ \|\mathcal W^1_aE\|_{2,\beta,\gamma;{d,\boldsymbol \zeta}}+ \|\mathcal E^1E\|_{0,\beta,\gamma;{d,\boldsymbol \zeta}} \leq C(\beta,\gamma) \frac 12 \|E\|_{0,\beta,\gamma;{d,\boldsymbol \zeta}}.
\]We continue by induction and produce, for all $k \in \mathbb Z^+$,
\[
\mathcal L_g \mathcal R^k E + \mathcal E^kE = \mathcal E^{k-1}E + \mathcal W^k_vE+ \mathcal W^k_aE \text{ on } M.
\]Set
\[
\varphi:= \sum_{k=0}^\infty \mathcal R^kE, \quad w_v:= \sum_{k=0}^\infty \mathcal W_v^kE, \quad w_a:= \sum_{k=0}^\infty \mathcal W_a^kE.
\]The estimates imply that all three of the series converge and we have proven the proposition in the first case.
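For instance, iterating \eqref{iterationest} gives $\|\mathcal E^kE\|_{0,\beta,\gamma;{d,\boldsymbol \zeta}} \leq 2^{-k}\|E\|_{0,\beta,\gamma;{d,\boldsymbol \zeta}}$, so the series for $\varphi$ is dominated by a geometric series:
\[
\|\varphi\|_{2,\beta,\gamma;{d,\boldsymbol \zeta}} \leq \sum_{k=0}^\infty C(\beta,\gamma)\,2^{-k}\|E\|_{0,\beta,\gamma;{d,\boldsymbol \zeta}} = 2\,C(\beta,\gamma)\|E\|_{0,\beta,\gamma;{d,\boldsymbol \zeta}},
\]
and likewise for $w_v$ and $w_a$, which yields the estimate in (2) after relabeling the constant.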
We now move to the general case. First apply \ref{RLambda} to each $\Lambda[p,e,m']$ such that
\begin{align*}
V^{\mathrm{out}}[p,e,m']&:= \mathcal R_\Lambda^{\mathrm{out}}E|_{\Lambda[p,e,m']} \text{ for }m'\text{ odd}, \\
V^{\mathrm{in}}[p,e,m']&:= \mathcal R_\Lambda^{\mathrm{in}}E|_{\Lambda[p,e,m']} \text{ for }m'\text{ even}.
\end{align*}By the proposition,
\[
V^{\mathrm{out}}[p,e,m']|_{\Cout[p,e,m']} \in \mathcal H^1[\Cout[p,e,m']], V^{\mathrm{out}}[p,e,m']|_{\Cin[p,e,m']} \equiv 0 \text{ for } m' \text{ odd, while}
\]
\[
V^{\mathrm{in}}[p,e,m']|_{\Cin[p,e,m']} \in \mathcal H^1[\Cin[p,e,m']], V^{\mathrm{in}}[p,e,m']|_{\Cout[p,e,m']} \equiv 0 \text{ for } m' \text{ even}.
\]Let
\[
\widetilde E:= \mathbf U\left(\left\{\psi_{S[p]}[d]E, \psi_{S\pen}[d]E\right\}\right) + \mathbf U\left(\left\{0, [\psi_{\Lambda[p,e,m']}[d],\mathcal L_g]V[p,e,m']\right\}\right) \in C^{0,\beta}(M).
\]Note that $\supp(\widetilde E) \subset \left(\cup_{V(\Gamma)} S_1[p] \bigcup \cup_{V_S(\Gamma)} S_1\pen\right)$, so we can apply the initial argument of the proof with $\widetilde E$ in place of $E$. Thus there exists $(\widetilde \varphi, w_v,w_a) \in C^{2,\beta}(M) \times \mathcal K_V \times \mathcal K_A$ such that $\mathcal L_g \widetilde \varphi = \widetilde E + w_v+w_a$ on $M$. Set
\[
\varphi := \widetilde \varphi + \mathbf U\left( \left\{0, \psi_{\Lambda[p,e,m']}[d] V[p,e,m']\right\}\right).
\]Then by the definition of the cutoff functions, $\mathcal L_g \varphi = E + w_v+w_a$. Moreover, the estimates from \ref{RLambda} and the work done above imply that $(\varphi, w_v,w_a)$ satisfies the necessary estimates.
\end{proof}
\section{The Geometric Principle}
\label{GeometricPrinciple}
Throughout this section, let $\gamma \in (1,2), \beta \in (0,1), \gamma' \in (\gamma,2),$ and $\beta' \in (0,\beta)$.
Any constant depending on $\underline{b},\beta,\gamma,\beta',\gamma',n$ we simply denote as $C$.
The goal of this section is to prove \ref{prescribexi}.
\subsection*{Prescribing the substitute kernel}
Throughout this subsection, we consider ``super-extended" central standard regions. For each $p \in V(\Gamma)$, these regions will be determined by immersions of the domain
\begin{align*} S^+[p]:=M[p]&\bigsqcup_{\{e|p=p^+[e]\}}\left(M[e]\cap[a, \Pe]\times \Ssn\right)\\&\bigsqcup_{\{e|p=p^-[e]\}}\left(M[e] \cap [\RH-\Pe,\RH-a] \times \Ssn\right).
\end{align*}
\begin{lemma}\label{unbalancinglemma}
Let $d, \boldsymbol \zeta$ satisfy \eqref{drestriction}, \eqref{zetarestriction} respectively.
Then, (recall \ref{Vpdefn}, \ref{gammaframe}, \ref{defn:RT}, \ref{defn:H})
\[
\int_{S^+[p]} H_{\mathrm{gluing}}[{d,\boldsymbol \zeta}]\Ntdz\, dg =\widetilde \omega_n \sum_{e \in E_p} \td\mathrm{sgn}[p,e] \RRR[e;{d,\boldsymbol \zeta}]\Be_1.
\]
\end{lemma}
\begin{proof}This is an easy calculation involving the force vector. Let $\partial S^+[p] := \bigsqcup_{e \in E_p}\widetilde \Gamma_e$ and let $K_e \subset \mathbb R^{n+1}$ be hypersurfaces such that $\partial K_e =\widetilde \Gamma_e$. Then, (recalling \ref{Lemma:Hdis} and \ref{Hbounds})
\begin{align*}
n\int_{S^+[p]}H_{\mathrm{gluing}}[{d,\boldsymbol \zeta}]\Ntdz\, dg &=n\int_{S^+[p]}H_{\mathrm{error}}[{d,\boldsymbol \zeta}]\Ntdz\, dg\\
& =\int_{S^+[p]}\sum_{i=1}^{n+1} (\Delta_g x_i)\Be_i -n \int_{S^+[p]} \Ntdz\\
&=\sum_{e \in E_p} \left(\int_{\widetilde\Gamma_e} \sum_{i=1}^{n+1} \nabla_g x_i \cdot \eta_e \Be_i - n \int_{K_e} \nu_e \right) \\
&=\sum_{e \in E_p}\left( \int_{\widetilde\Gamma_e} \eta_e- n \int_{K_e} \nu_e \right).
\end{align*}
Here $\eta_e$ is the conormal to $\widetilde\Gamma_e$ and $\nu_e$ is the normal to $K_e$, with the appropriate orientation.
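The first two equalities in the chain above combine two standard facts; assuming the normalization $H_{\mathrm{error}}[{d,\boldsymbol \zeta}] = H-1$ (cf. \ref{Lemma:Hdis}), the identity $\Delta_g \hYtdz = nH\Ntdz$, written componentwise, gives
\[
n\,H_{\mathrm{error}}[{d,\boldsymbol \zeta}]\,\Ntdz = n(H-1)\Ntdz = \sum_{i=1}^{n+1}(\Delta_g x_i)\Be_i - n\,\Ntdz,
\]
and the divergence theorem converts the integrals of $\Delta_g x_i$ over $S^+[p]$ into boundary terms over the $\widetilde\Gamma_e$, while the caps $K_e$ close up the boundary so that the integral of the unit normal over the resulting closed hypersurface vanishes.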
Applying \eqref{forcevec} implies the result.
\end{proof}
\begin{definition}
Given $d,\boldsymbol \zeta $ satisfying \eqref{drestriction}, \eqref{zetarestriction}, let $\hYtdz$ denote the corresponding immersion of $M$. For $\|f\|_{2,\beta,\gamma;{d,\boldsymbol \zeta}}<\infty$, consider the immersion $(\hYtdz)_f:M \to \mathbb R^{n+1}$. Let $\mathcal S[p]:=(\hYtdz)_f(S^+[p])$ and let $\underline{d}[(\hYtdz)_f,\cdot] \in D(\Gamma)$ such that
\begin{equation}\label{greatdest}
\underline{d}[(\hYtdz)_f,p]:=\widetilde \omega_n^{-\frac 12} \int_{ S^+[p]}(H_{\mathcal S[p]}-1) N_{\mathcal S[p]} \, dg_{\mathcal S[p]}.
\end{equation}
\end{definition}
\begin{prop}\label{greatdprop}
Given $d,\boldsymbol \zeta $ satisfying \eqref{drestriction}, \eqref{zetarestriction}, let $\hYtdz$ denote the corresponding immersion of $M$. If $\|f\|_{2,\beta,\gamma;{d,\boldsymbol \zeta}}<C\underline C|\underline{\tau}|$, then for $\underline{d}[(\hYtdz)_f,\cdot]$ defined by \eqref{greatdest},
\[
\left|\underline{d}[(\hYtdz)_f,p] - d[p]\right| \leq C\underline C|\underline{\tau}|^{1+(n-3+\gamma)/(n-1)}
\]for all $p \in V(\Gamma)$.
\end{prop}
\begin{proof}We will use the notation of \ref{unbalancinglemma}, but the domains $K_e$ will refer now to the parametrizing domain rather than the immersion itself.
More specifically, for $p \in V(\Gamma)$ and each $e \in E_p$, let $K_e$ denote the domain $[0, r_{\td}^{\mathrm{min}}] \times \Ssn$ and let $g_e:= ds^2 + s^2 g_{\Ssn}$ denote the standard polar metric on $K_e$. Let $\nu_e:=\mathrm{sgn}[p,e] \RRR[e;{d,\boldsymbol \zeta}]\Be_1$. Let $\widetilde \Gamma_e:= \partial K_e$ and note that the metric $g_e$ restricted to $\widetilde\Gamma_e$ takes the form $\sigma_e:=(r_{\td}^{\mathrm{min}})^{n-1} g_{\Ssn}$. Let $\eta_e:= \nu_e$.
Then we may consider each $\nu_e$ as the outward pointing normal of an immersion of $K_e$ into $\mathbb R^{n+1}$ such that these immersions are the ``caps" of the immersion of $S^+[p]$ by the map $\hYtdz$. Moreover, each $\eta_e$ represents the conormal of the immersion of $S^+[p]$ by $\hYtdz$ along $\widetilde\Gamma_e$.
Using \eqref{forcevec} and the calculation in the proof of \ref{unbalancinglemma}, we observe that
\begin{align*}
\widetilde \omega_n^{- \frac 12}\int_{S^+[p]}(H_{d,\boldsymbol \zeta} -1)N_{d, \boldsymbol \zeta} dg &=
\widetilde \omega_n^{- \frac 12}\sum_{e \in E_p} \left((r_{\td}^{\mathrm{min}})^{n-1} \int_{\widetilde\Gamma_e} \eta_e dg_{\Ssn} - n \int_{K_e} \nu_e dg_e \right)
\\&=\frac{\widetilde \omega_{n-1}}{\widetilde \omega_{n}^{\frac 12}} \sum_{e \in E_p} \td\mathrm{sgn}[p,e] \RRR[e;{d,\boldsymbol \zeta}]\Be_1\\
&:= d[\hYtdz,p].
\end{align*}
By definition, (recall \ref{defn:RT})
\begin{align*}
d[\hYtdz,p] - d[p]&= \frac{\widetilde \omega_{n-1}}{\widetilde \omega_{n}^{\frac 12}}\sum_{e\in E_p} \td \mathrm{sgn}[p,e] \left(\RRR[e;{d,\boldsymbol \zeta}]\Be_1-\RRR[e;{d,\boldsymbol \zeta}]\Be_1[e] + \RRR[e;{d,\boldsymbol \zeta}]\Be_1[e] - \Bv_1[e;\tilde d,0]\right)\\
&= \frac{\widetilde \omega_{n-1}}{\widetilde \omega_{n}^{\frac 12}}\sum_{e\in E_p} \td \mathrm{sgn}[p,e] \left(\RRR[e;{d,\boldsymbol \zeta}](\Be_1-\Be_1[e] )+ \Bv_1[e;\tilde d,\tilde \ell]- \Bv_1[e;\tilde d,0]\right).
\end{align*}
Using \eqref{ddifftau} and \ref{zetaframe} we see
\begin{equation}\label{dbzEst}
|d[\hYtdz,p] - d[p]|\leq C \underline C \,|\underline{\tau}|^2.
\end{equation}
We will get the full estimate by comparing $d[\hYtdz,p]$ and $\underline{d}[(\hYtdz)_f,p]$.
Note that the first quantity depends upon $d, \boldsymbol \zeta, \Gamma$ while the second depends again on these and also on $f$.
Again using the proof of \ref{unbalancinglemma},
\[
\underline{d}[(\hYtdz)_f,p] =\widetilde \omega_{n}^{-\frac 12}\sum_{e \in E_p}\left( \int_{\widetilde\Gamma_e} \eta_{e,f} d\sigma_{e,f} - n\int_{K_e} \nu_{e,f} dg_{e,f}\right).
\]Here $\eta_{e,f}, \sigma_{e,f}$ represent the modified conormal and metric, respectively, for the immersion of $\widetilde\Gamma_e$ using the map $(\hYtdz)_f$, and $dg_{e,f}$ represents the corresponding volume element for the immersion of $K_e$ by the same map. Note that since $N_{d,\boldsymbol \zeta} \cdot \nu_e = 0$ and $f$ is a normal graph over $\hYtdz$, the normal to the immersion of $K_e$ by the map $(\hYtdz)_f$ is the same as the normal of the immersion by $\hYtdz$. That is, $\nu_{e,f} = \nu_e$.
We compare $\underline{d}[(\hYtdz)_f,p], d[\hYtdz,p]$ by components. For a fixed $e$,
\begin{align*}
\left|\int_{K_e} \nu_{e,f} dg_{e,f}-\int_{K_e} \nu_e dg_e\right| &=\left| \int_{K_e}\nu_e dg_{e,f}- \int_{K_e}\nu_e dg_e\right|
\\
&\leq \left|\mathrm{Vol}_{g_{e,f}}(K_e) - \mathrm{Vol}_{g_{e}}(K_e)\right|
\end{align*}where $\mathrm{Vol}_g(\Omega)$ represents the volume of $\Omega$ with respect to the metric $g$.
By definition
\[
\mathrm{Vol}_{g_e}(K_e) = (r_{\td}^{\mathrm{min}})^n\omega_{n-1}.
\]On the other hand, we calculate
\[
\mathrm{Vol}_{g_{e,f}}(K_e) \leq \int_0^{r_\td^{\mathrm{min}}+ |f|_{C^0(\widetilde\Gamma_e)}} \int_{\Ssn}r^{n-1} d\Theta dr \leq (|f|_{C^0(\widetilde\Gamma_e)} + r_{\td}^{\mathrm{min}})^n\omega_{n-1}.
\]Since $\|f\|_{2,\beta,\gamma;{d,\boldsymbol \zeta}} \leq C\underline C|\underline{\tau}|$ implies that $|f|_{C^0(\widetilde\Gamma_e)} \leq C \underline C|\underline{\tau}| {\urin[e;d]}^\gamma$,
\begin{multline}
\label{KeuEst}
\left|\int_{K_e} \nu_{e,f} dg_{e,f}-\int_{K_e} \nu_e dg_e\right| \leq C|f|_{C^0(\widetilde\Gamma_e)} (r_\td^{\mathrm{min}})^{n-1} \\
\leq C\underline C|\underline{\tau}|{\urin[e;d]}^\gamma(r_\td^{\mathrm{min}})^{n-1}
\leq C\underline C|\underline{\tau}|^{2+ \gamma/(n-1)}.
\end{multline}
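The exponent in the last step of \eqref{KeuEst} is bookkeeping: assuming the neck-size relations $\urin[e;d] \sim_{C(\underline{b})} |\td|^{1/(n-1)}$ and $(r_\td^{\mathrm{min}})^{n-1} \sim_{C(\underline{b})} |\td|$ from the geometric setup,
\[
|\underline{\tau}|\,{\urin[e;d]}^\gamma(r_\td^{\mathrm{min}})^{n-1} \leq C(\underline{b})\,|\underline{\tau}|\cdot|\td|^{\gamma/(n-1)}\cdot|\td| \leq C(\underline{b})\,|\underline{\tau}|^{2+\gamma/(n-1)}.
\]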
The other estimate is a bit more delicate. We will use the triangle inequality and consider
\[
\left|\int_{\widetilde\Gamma_e} \eta_{e} d\sigma_{e} -\int_{\widetilde\Gamma_e} \eta_{e,f}d\sigma_{e,f} \right| \leq \left|\int_{\widetilde\Gamma_e} \eta_{e,f} d\sigma_{e} -\int_{\widetilde\Gamma_e} \eta_{e,f}d\sigma_{e,f} \right| +\int_{\widetilde\Gamma_e} |\eta_{e,f}-\eta_e| d\sigma_{e} .
\]Note that along $\widetilde\Gamma_e$, $\frac \partial{\partial t} \hYtdz$ is parallel to $\frac \partial{\partial t}N_{d,\boldsymbol \zeta}$. Therefore, $\angle(\eta_{e} , \eta_{e,f})$ is maximized if $f=0$ on $\widetilde\Gamma_e$. In that case,
\[
\eta_{e,f}\cdot \eta_e = \frac{\frac \partial{\partial t} \hYtdz+ \left(\frac \partial{\partial t} f \right)N_{d,\boldsymbol \zeta}}{\sqrt{\left(r_\td^{\mathrm{min}}\right)^2+ \left|\frac \partial{\partial t} f\right|^2}} \cdot (1,\boldsymbol 0)= \frac{r_\td^{\mathrm{min}}}{\sqrt{\left(r_\td^{\mathrm{min}}\right)^2+ \left|\frac \partial{\partial t} f\right|^2}}
\]
By the definition of the global norm, $|f|_{C^1(\widetilde\Gamma_e)} \leq C \underline C|\underline{\tau}|{\urin[e;d]}^{\gamma-1}\leq C \underline C |\underline{\tau}| \left(r_\td^{\mathrm{min}}\right)^{\gamma-1}$. Thus,
\[
\frac {\left|\frac \partial{\partial t} f\right|}{r_\td^{\mathrm{min}}} \leq C \underline C| \underline{\tau}| \left(r_\td^{\mathrm{min}}\right)^{\gamma-2}
\]and therefore
\[
|\eta_{e,f}-\eta_e| = \sqrt 2 \sqrt{1- \eta_{e,f}\cdot \eta_e} \leq C\underline C|\underline{\tau}| \left(r_\td^{\mathrm{min}}\right)^{\gamma-2}.
\]
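The equality and the final bound above follow from two elementary facts: for unit vectors $a,b$,
\[
|a-b|^2 = 2\left(1-a\cdot b\right), \qquad 1-\frac{1}{\sqrt{1+x^2}} \leq \frac{x^2}{2} \quad \text{for } x\geq 0,
\]
applied with $x = \left|\frac{\partial}{\partial t}f\right|\big/r_\td^{\mathrm{min}}$.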
It follows that
\begin{equation}\label{FirstGeuEst}
\int_{\widetilde\Gamma_e} |\eta_{e,f}-\eta_e| d\sigma_{e} \leq C\underline C|\underline{\tau}| \left(r_\td^{\mathrm{min}}\right)^{\gamma-2}\mathrm{Vol}_{\sigma_e}(\widetilde\Gamma_e) \leq C \underline C |\underline{\tau}| \left(r_\td^{\mathrm{min}}\right)^{n-3+\gamma}\leq C\underline C|\underline{\tau}|^{2+(\gamma-2)/(n-1)}.
\end{equation}For the last estimate, we note that
\[
\sigma_{e,f} = \sum_{i=1}^n\left(r^2 + f_i^2+ f^2-2rf\right) d\Theta_i^2= \sum_{i=1}^n\left((r-f)^2 + f_i^2\right) d\Theta_i^2
\] where $\sum_{i=1}^n d\Theta_i^2$ represents the standard metric on $\Ssn$. Thus
\begin{align*}
\int_{\widetilde\Gamma_e} d\sigma_{e,f} &\leq \omega_{n-1}\left((r_\td^{\mathrm{min}}+|f|_{C^0(\widetilde\Gamma_e)})^2 + |f|^2_{C^1(\widetilde\Gamma_e)}\right)^{(n-1)/2}\\
& \leq \omega_{n-1}\left(r_\td^{\mathrm{min}}\right)^{n-1} + C\underline C |\underline{\tau}| \left(r_\td^{\mathrm{min}}\right)^{n-3+\gamma}.
\end{align*}Here we used the estimate $(a^2+b^2)^{1/2} \leq a+b$ for $a,b\geq 0$ and the fact that the worst remaining term that appears in the expansion has the form of the second term above.
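To make the expansion explicit: by $(a^2+b^2)^{1/2}\leq a+b$,
\[
\left(\left(r_\td^{\mathrm{min}}+|f|_{C^0(\widetilde\Gamma_e)}\right)^2 + |f|^2_{C^1(\widetilde\Gamma_e)}\right)^{\frac{n-1}{2}} \leq \left(r_\td^{\mathrm{min}}+|f|_{C^0(\widetilde\Gamma_e)}+|f|_{C^1(\widetilde\Gamma_e)}\right)^{n-1},
\]
and since $|f|_{C^1(\widetilde\Gamma_e)} \leq C\underline C|\underline{\tau}|\left(r_\td^{\mathrm{min}}\right)^{\gamma-1}$ dominates the $C^0$ bound, the leading correction in the binomial expansion is $(n-1)\left(r_\td^{\mathrm{min}}\right)^{n-2}|f|_{C^1(\widetilde\Gamma_e)} \leq C\underline C|\underline{\tau}|\left(r_\td^{\mathrm{min}}\right)^{n-3+\gamma}$.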
Applying this estimate reveals that
\begin{align}
\left|\int_{\widetilde\Gamma_e} \eta_{e,f} d\sigma_{e} -\int_{\widetilde\Gamma_e} \eta_{e,f}d\sigma_{e,f} \right| &\leq |\mathrm{Vol}_{\sigma_{e,f}}(\widetilde\Gamma_e) - \mathrm{Vol}_{\sigma_e}(\widetilde\Gamma_e)| \notag\\& \leq C\underline C |\underline{\tau}| \left(r_\td^{\mathrm{min}}\right)^{n-3+\gamma} \notag\\& \leq C\underline C|\underline{\tau}|^{2+(\gamma-2)/(n-1)}.\label{SecondGeuEst}
\end{align}
Combining the estimates of
\eqref{dbzEst}, \eqref{KeuEst}, \eqref{FirstGeuEst}, and \eqref{SecondGeuEst} implies the result.
\end{proof}
\begin{prop}\label{subsprescribep}
For $d, \boldsymbol \zeta$ satisfying \eqref{drestriction}, \eqref{zetarestriction} respectively, we have the following:
For each $p \in V(\Gamma)$, there exist $\varphi_{\mathrm{gluing}}[p] \in C^{2,\beta}(\widetilde S[p]), \mu_i'[p]$, and $\mu_i'\pe$, where $i=0, \dots, n$, such that
\begin{enumerate}
\item $\mathcal L_g \varphi_{\mathrm{gluing}}[p] + H_{\mathrm{gluing}}[{d,\boldsymbol \zeta}] = \sum_{i=0}^n \mu_i'[p]w_i[p] + \sum_{e \in E_p}\sum_{i=0}^n \mu_i'\pe w_i\pe$ on $\widetilde S[p]$.
\item $\varphi_{\mathrm{gluing}}[p] =0$ on $\partial \widetilde S[p]$.
\item $\|\varphi_{\mathrm{gluing}}[p]:C^{2,\beta}(\widetilde S[p],g)\| \leq C|\underline{\tau}|$.
\item $\|\varphi_{\mathrm{gluing}}[p]:C^{2,\beta}(\Lambda[p,e,1],\rho_d,g,\ur_d^\gamma)\| \leq C|\underline{\tau}|$ for all $e \in E_p$.
\item $\left|\mu_i'\pe\right| \leq C|\underline{\tau}|$.
\item Each of $\varphi_{\mathrm{gluing}}[p], \mu_i'[p], \mu_i'\pe$ is unique by its construction and depends continuously on ${d,\boldsymbol \zeta}$.
\end{enumerate}
\end{prop}
\begin{proof}
Determine $\mu_i'[p]$ such that for $j=0,\dots, n$,
\[
\int_{\widetilde S[p]} \left(\sum_{i=0}^{n} \mu_i' [p]w_i[p] - H_{\mathrm{gluing}}[{d,\boldsymbol \zeta}]\right) f_j[p] \: dg=0.
\]
The estimates follow immediately from \ref{linearpartp} and the bound on $H_{\mathrm{gluing}}[{d,\boldsymbol \zeta}]$ given in \ref{Hestimates}.
\end{proof}
\subsection*{Prescribing the extended substitute kernel}
\begin{prop}\label{localquadprop}
Let $\hYtdz:M \to \mathbb R^{n+1}$ be the immersion of \ref{immersiondef}.
Let $x \in M$ and $D \subset M$ be a disk of radius $1/10$ in the metric $\rho_d^{-2}(x)g$, centered at $x$. Let $c_1>0$ denote the constant found in \ref{find:c1}.
If $v \in C^{2, \beta}(D,\rho_d^{-2}(x)g)$ satisfies
\[
\|v:C^{2,\beta}(D,\rho_d^{-2}(x)g)\| \leq {\rho_d(x)}{\epsilon(c_1)}
\]for $\epsilon(c_1)$ given by \ref{quadboundsunscaled}, then
\[
\rho_d(x)\|\left(\hYtdz\right)_v - \hYtdz - v\Ntdz: C^{1,\beta}(D,\rho_d^{-2}(x)g)\| \leq C(c_1) \|v:C^{2,\beta}(D,\rho_d^{-2}(x)g)\|^2
\]
\[
\rho_d^{2}(x)\|H_v-\Htdz-\mathcal L_g v:C^{0,\beta}(D,\rho_d^{-2}(x)g)\| \leq C(c_1)\rho_d^{-1}(x)\|v:C^{2,\beta}(D,\rho_d^{-2}(x)g)\|^2.
\]
\end{prop}
\begin{proof}
We wish to apply \ref{quadboundsunscaled} and to that end, we rescale the target by $\rho_d^{-1}(x)$.
By \eqref{rhoest}, the conditions of \eqref{quadconditions} hold, so the hypothesis of \ref{quadboundsunscaled} is satisfied for the rescaled function $\rho_d^{-1}(x)v$. Under this rescaling, $H_{\rho_d^{-1}(x)v} = {\rho_d(x)}H_v$ and $\mathcal L_{(\rho_d^{-1}(x)\hYtdz)^*(\rho_d^{-2}(x)g)} = \rho_d^{2} (x)\mathcal L_{g}$. Thus, \ref{quadboundsunscaled} implies
\[
\|\rho_d(x)\left(H_v - \Htdz- \mathcal L_{g}v\right) :C^{0,\beta}(D,\rho_d^{-2}(x)g)\|\leq C(c_1)\|\rho_d^{-1}(x) v: C^{2, \beta}(D,\rho_d^{-2}(x)g)\|^2.
\] Simplifying implies the second estimate. The first estimate follows directly from the scaling of the target and the function.
\end{proof}
For ease of presentation, we define rotations of the components of the normal vector.
\begin{definition}\label{tildefdefn}
Let $\tilde f_i:\bigsqcup_{e \in E(\Gamma) \cup R(\Gamma)} M[e] \to \mathbb R$ for $i=0, \dots, n$ such that for $x \in M[e]$,
\[
\tilde f_i(x):= {\left(\RRR[e;{d,\boldsymbol \zeta}]^{-1}\Ntdz(x)\right)\cdot \Be_{i+1} }.
\]
\end{definition}Note that $\mathcal L_g \tilde f_i=0$.
For $e \in E(\Gamma)\cup R(\Gamma)$, let $f^+[e]:M[e] \cap\left( [a,a+3] \times \Ssn\right)\to \mathbb R$ and for $e \in E(\Gamma)$ let $f^-[e]: M[e] \cap\left( [\RH -(a+3),\RH -a] \times \Ssn\right)\to \mathbb R$ such that on these regions
\begin{align}
\label{one}\UUU[e;{d,\boldsymbol \zeta}]^{-1}\circ\hYtdz\circ D^+ &=\left(Y_0 + \boldsymbol \zeta\ppe\right)_{f^+[e]}\\
\UUU[e;{d,\boldsymbol \zeta}]^{-1}\circ\hYtdz \circ D^-& =\left(Y_0 + \boldsymbol \zeta\pme\right)_{f^-[e]}
\end{align}
where $D^\pm$ are small perturbations of the identity map.
We prove estimates for $f^+[e]$ and note that the same estimates hold for $f^-[e]$,
once we account for appropriate changes to the domain.
\begin{lemma}\label{quadflemma}
For $f^+[e]$ as above, we have the following:
\begin{enumerate}
\item \label{qf1}$f^+[e]= 0$ on $M[e] \cap \left([a,a+1]\times \Ssn\right)$.
\item \label{qf2}$\|f^+[e]:C^{2,\beta}(M[e] \cap \left([a,a+3] \times \Ssn\right),g)\|\leq C\left| \boldsymbol \zeta \right|$.
\item \label{qf4}For $\tilde f_i$ described above,
\[\|f^+[e](x) - \sum_{i=0}^{n} \zeta_i\ppe\tilde f_i(x):C^{1,\beta}(M[e] \cap \left([a+2,a+3]\times \Ssn\right), g)\|\leq C\left| \boldsymbol \zeta \right|^2.\]
\item \label{qf3}$\|\mathcal L_{g} f^+[e] - H_{\mathrm{dislocation}}[{d,\boldsymbol \zeta}]:C^{0,\beta}(M[e] \cap \left([a,a+3] \times \Ssn\right),g)\|\leq C\left| \boldsymbol \zeta \right| \, |\underline{\tau}|^{\frac 1{n-1}}$.
\end{enumerate}
\end{lemma}
\begin{remark}
Note that we could have stated the previous lemma to fit with the global norm on $S_1[p]$, but since $\rho_d \sim_{C(\underline{b})}1$ on $S_1[p]$, the norm bounds given above can be used to bound the global norm.
\end{remark}
\begin{proof}
Items \eqref{qf1} and \eqref{qf2} follow immediately from the definition of $f^+[e]$ and the behavior of the immersion $\hYtdz$ on each of the domains. For items \eqref{qf4} and \eqref{qf3}, we note that item \eqref{qf2} and the uniform estimates on $\rho_d$ allow us to invoke \ref{localquadprop}, which we do with $Y_0 +\boldsymbol \zeta\ppe$ in place of $\hYtdz$. Then item \eqref{qf4} follows from the definition of the functions $\tilde f_i$ and the linear error estimate of \ref{localquadprop} since $\UUU[e;{d,\boldsymbol \zeta}]^{-1}\circ\hYtdz=\RRR[e;{d,\boldsymbol \zeta}]^{-1}N_{{d,\boldsymbol \zeta}} =Y_0$ for $t \in [a+2,a+3]$. Item \eqref{qf3} follows from the quadratic error estimate of \ref{localquadprop} applied to $Y_0 +\boldsymbol \zeta\ppe$ and by recalling \ref{geolimit} to compare $\mathcal L_gf^+[e]$ and $\mathcal L_{Y_0}f^+[e]$.
\end{proof}
\begin{definition}\label{underlinef}Let $S^x[p] \subset M$ such that
\begin{align*}
S^{x}[p]:=M[p] &\bigsqcup_{\{e|p=p^+[e]\}}\left(M[e] \cap [a,x] \times \Ssn\right)\\& \bigsqcup_{\{e|p=p^-[e]\}} \left(M[e] \cap [\RH -x,\RH -a]\times \Ssn\right)
\end{align*}with the appropriate regions identified as in \eqref{eq:sim}.
For each $p \in V(\Gamma)$, let
\[
\underline f[p]: S^{a+3}[p]\to \mathbb R
\]
such that
\begin{equation*}
\underline f[p](x)= \left\{\begin{array}{ll}
0,& \text{if } x \in M[p],\\
f^+[e](x),& \text{if } p=p^+[e],x \in M[e]\cap [a,a+3] \times \Ssn,\\
f^-[e](x),&\text{if } p=p^-[e],x\in M[e] \cap[\RH -(a+3),\RH -a]\times \Ssn.
\end{array}\right.
\end{equation*}
\end{definition}
The definition of $\underline f[p]$ immediately implies the following corollary.
\begin{corollary}\label{corundf}
\begin{enumerate}
\item $\underline f[p] =0$ on $S^{a+1}[p]$.
\item $\|\mathcal L_g \underline f[p]-H_{\mathrm{dislocation}}[{d,\boldsymbol \zeta}]:C^{0,\beta}(S^{a+3}[p],g)\|\leq C\left| \boldsymbol \zeta \right|\, |\underline{\tau}|^{\frac 1{n-1}}$.
\item if $p=p^+[e]$,
\[\|\underline f[p]- \sum_{i=0}^{n} \zeta_i\ppe\tilde f_i(x):C^{1,\beta}(M[e] \cap \left([a+2,a+3]\times \Ssn\right), g)\|\leq C\left| \boldsymbol \zeta \right|^2.\]
\item if $p=p^-[e]$,
\[\|\underline f[p]- \sum_{i=0}^{n} \zeta_i\ppe\tilde f_i(x):C^{1,\beta}(M[e] \cap \left([\RH-(a+3),\RH-(a+2)]\times \Ssn\right), g)\|\leq C\left| \boldsymbol \zeta \right|^2.
\]
\end{enumerate}
\end{corollary}\noindent
We use these functions to prescribe the dislocation on each central sphere. For convenience, we normalize the functions $\tilde f_i$ on the meridian circle $C_1^{\mathrm{out}}[p,e,0]$.
\begin{definition}\label{wfdef}For $\pe \in A(\Gamma)$, choose $c'_i[p,e]$ such that for each $i=0,\dots, n$, on $C_1^{\mathrm{out}}[p,e,0]$, (recall \ref{phidef})
\[
c'_i[p,e] \tilde f_i =\phi_{i}.
\]
\end{definition}
\begin{remark}\label{ref:cprime}
Notice that while $c'_i\pe$ depends on $d$, these values are independent of $\boldsymbol \zeta$ since $\tilde f_i$ is independent of $\boldsymbol \zeta$ at $(\underline{b}+1,\bt)$.
Moreover, by the asymptotic geometric behavior at $\underline{b}+1$ (recall \ref{radiuslemma}),
\[
|c'_i\pe| \sim_{C(\underline{b})} 1.
\]
\end{remark}
\begin{assumption}
We now choose a constant $c'\geq 1$, independent of ${d,\boldsymbol \zeta}$ and of $\underline{\tau}$ but depending on $\underline{b}$ such that for all $d$ satisfying \eqref{drestriction} and corresponding $c'_i \pe$,
\begin{equation}\label{cprimedef}
c' \geq \max_{\substack{i=0,\dots,{n},\\ [p,e] \in A(\Gamma)}} |c'_i[p,e]|.
\end{equation}
\end{assumption}
Recalling \eqref{tildecs2} and \ref{D:wi}, the normalization we choose implies that
\begin{equation}\label{eq:vanish}
c'_i[p,e]\tilde f_i - v_i\pe|_{C_1^{\mathrm{out}}[p,e,0]}=0.
\end{equation}This normalization will be convenient for estimating item $(3)$ in the proposition below.
\begin{prop}\label{prescribequad}
Let ${d,\boldsymbol \zeta}$ satisfy \eqref{drestriction}, \eqref{zetarestriction}, respectively.
For each $p \in V(\Gamma)$ there exist $\phi_{\mathrm{dislocation}}[p]\in C^{2,\beta}(\widetilde S[p])$, $\mu_i''[p]$, and $\mu_i''[p,e]$, where $i=0, \dots, n$, such that
\begin{enumerate}
\item $\mathcal L_g \phi_{\mathrm{dislocation}}[p] + H_{\mathrm{dislocation}}[{d,\boldsymbol \zeta}] = \sum_{i=0}^{n} \left( \mu_i''[p] w_i[p]+ \sum_{e \in E_p} \mu_i''[p,e]w_i[p,e]\right)$ on $\widetilde S[p]$.
\item $\phi_{\mathrm{dislocation}}[p] = 0$ on $\partial \widetilde S[p]$.
\item $|\mu_i''[p]| + | \zeta_i[p,e]/c_i'\pe-\mu_{i}''[p,e]|\leq C\left| \boldsymbol \zeta \right| \, |\underline{\tau}|^{\frac 1{n-1}}$
for all $i = 0, \dots, n$.
\item $\| \phi_{\mathrm{dislocation}}[p]:C^{2,\beta}(\widetilde S[p], g)\|\leq C\left| \boldsymbol \zeta \right|$.
\item $\|\phi_{\mathrm{dislocation}}[p]:C^{2,\beta}(\Lambda[p, e, 1], \rho_d,g, \ur_d^\gamma)\|\leq C\left| \boldsymbol \zeta \right| \, |\underline{\tau}|^{\frac 1{n-1}}$ for all $e \in E_p$.
\item $\phi_{\mathrm{dislocation}}[p], \mu_i''[p], \mu_{i}''[p,e]$ are all unique by their construction and depend continuously on the parameters of the construction.
\end{enumerate}
\end{prop}
\newcommand{\uphipp}[0]{\phi_{\mathrm{dislocation}}'[p] }
\newcommand{\uphip}[0]{\phi_{\mathrm{dislocation}}''[p]}
\begin{proof}
Let $\underline f[p]$ represent the function from \ref{underlinef}. On $M[e] \cap S[p]$, for $p=p^+[e]$ set
$\check \psi_e:=\psi[a+3,a+2](t)$ and for $p=p^-[e]$ set $\check \psi_e:= \psi[\RH-(a+3), \RH-(a+2)](t)$. On each $\Lambda[p,e,1]$, find $\underline V_e$ such that $\mathcal L_g \underline V_e =0$, $\underline V_e = -\sum_{i=0}^n \zeta_i[p,e] \tilde f_i$ on $C^{\mathrm{in}}[p,e,1]$ and $\underline V_e = 0$ on $C^{\mathrm{out}}[p,e,0]$.
We construct $\uphipp \in C^{2, \beta} (\widetilde S[p])$ in the following way. Let
\[
\uphipp = \left\{ \begin{array}{ll}
\underline f[p]& \text{on } M[p],\\
\check \psi_e \underline f[p]+(1-\check \psi_e) \sum_{i=0}^n \zeta_i[p,e] \tilde f_i&\text{on } M[e] \cap S[p], \pe \in A(\Gamma),\\
\sum_{i=0}^n \zeta_i[p,e] \tilde f_i+ (1-\psi_{S[p]}[d])\underline V_e& \text{on } \Lambda[p,e,1].
\end{array}\right.
\]
The construction of $\underline V_e$ and the estimates of \ref{annulardecaylemma} imply that
\[\|\underline V_e:C^{2,\beta}([\underline{b},\underline{b}+2] \times \Ss^{n-1} \cap M[e],g)\|\leq C\left| \boldsymbol \zeta \right| (\urin[e;d])^{n-2}. \]
Noting the estimates provided by \ref{corundf}, we have the following for $\uphipp$:
\begin{enumerate}
\item $\mathcal L_g \uphipp$ is supported on $S_1[p]$ and $\uphipp=0$ on $\partial \widetilde S[p]$.
\item For each $e \in E_p$, $$\|\mathcal L_g \uphipp - H_{\mathrm{dislocation}}[{d,\boldsymbol \zeta}]:C^{0,\beta}(S_1[p] \cap M[e], g)\| \leq C\left| \boldsymbol \zeta \right|(|\underline{\tau}|^{\frac 1{n-1}}+ (\urin[e;d])^{n-2}).$$
\end{enumerate}
On $C^{\mathrm{out}}_1[p,e,0]$, $\uphipp = \underline V_e + \sum_{i=0}^n \zeta_i[p,e] \tilde f_i$. To modify $\uphipp$ and prescribe the fast decay, we follow the argument of \ref{linearpartp}. In this case, $\tilde f_i \in \mathcal H_1[C_1^{\mathrm{out}}[p,e,0]]$ so we first find $\mathcal R_\partial^{\mathrm{out}}(\underline V_e^\perp|_{C_1^{\mathrm{out}}[p,e,0]})$. Recall that $\underline V_e^\perp=\underline V_e-\underline V_e^T$ where $\underline V_e^T$ denotes the projection of $\underline V_e$ onto $\mathcal H_1[C_1^{\mathrm{out}}[p,e,0]]$. Thus, we find coefficients $\underline \mu_i[p,e]$ such that (recall \ref{D:wi}, \ref{wfdef})
\begin{align*}
\sum_{i=0}^n{ \underline \mu_i[p,e]} v_i[p,e]& = \uphipp - \mathcal R_\partial^{\mathrm{out}}\left(\underline V_e^\perp|_{C_1^{\mathrm{out}}[p,e,0]}\right)\\
&= \underline V_e+ \sum_{i=0}^n \zeta_i[p,e] \tilde f_i- \mathcal R_\partial^{\mathrm{out}}\left(\underline V_e^\perp|_{C_1^{\mathrm{out}}[p,e,0]}\right).
\end{align*}
Since this implies that on $C_1^{\mathrm{out}}[p,e,0]$,
\[
-\sum_{i=0}^n \zeta_i[p,e] \tilde f_i+\sum_{i=0}^n {\underline \mu_i[p,e]}v_i[p,e]= \underline V_e^T + \underline V_e^\perp - \mathcal R_\partial^{\mathrm{out}}\left(\underline V_e^\perp|_{C_1^{\mathrm{out}}[p,e,0]}\right)
\]we can appeal to the estimates of \ref{linearcor}, exploiting the normalization given in \eqref{eq:vanish}, to conclude that
\[
\left|\frac{\zeta_i[p,e]}{c_i'\pe} - \underline \mu_i[p,e]\right| \leq C\|\underline V_e:C^{2,\beta}([\underline{b},\underline{b}+2] \times \Ss^{n-1} \cap M[e],g)\|\leq C\left| \boldsymbol \zeta \right| (\urin[e;d])^{n-2},
\]
\[
\|\uphipp- \sum_{i=0}^n\underline \mu_{i}[p,e] v_i[p,e]:C^{2,\beta}(\Lambda[p,e,1],\rho_d, g, \ur_d^\gamma)\| \leq C \left| \boldsymbol \zeta \right|( \urin[e;d])^{n-2}.
\] Using \ref{linearpartp} with $E: = \mathcal L_g \uphipp - H_{\mathrm{dislocation}}[{d,\boldsymbol \zeta}]$, let $$(\uphip, w_{\mathrm{dislocation}}):= \mathcal R_{\widetilde S[p]}(E)$$ where
\[
w_{\mathrm{dislocation}} = \sum_i \mu_i''[p] w_i[p] + \sum_{e \in E_p}\sum_{i} \mu_{i}'''[p,e]w_i[p,e].
\]
Then
\[\mathcal L_g \uphip= \mathcal L_g \uphipp - H_{\mathrm{dislocation}}[{d,\boldsymbol \zeta}]+ w_{\mathrm{dislocation}} \text{ on } \widetilde S[p], \text{ and}\]
\[|\mu_i''[p]|, |\mu_{i}'''[p,e]|\leq C\left| \boldsymbol \zeta \right| \, |\underline{\tau}|^{\frac 1{n-1}}.
\]
Set
\[
\phi_{\mathrm{dislocation}}[p] = \uphip - \uphipp + \sum_{e \in E_p}\sum_{i}{\underline \mu_{i}[p,e]}v_i[p,e]
\]and
\[
\mu_{i}''[p,e] = \mu_{i}'''[p,e]+{\underline\mu_{i}[p,e]}.
\]We complete the proof by appealing to all of the estimates above and those of \ref{linearpartp}.
\end{proof}
\subsection*{Prescribing the extended substitute kernel globally}
\newcommand{\boldsymbol \xi}{\boldsymbol \xi}
Choose $d \in D(\Gamma)$ satisfying \eqref{drestriction} such that
\begin{equation}\label{dtaxi}
d[p] = \sum_{i=0}^{n}d_i[p]\Be_{i+1}
\end{equation}
and choose $\boldsymbol \xi \in Z(\Gamma)$ such that (recall \ref{cprimedef}) $|\boldsymbol \xi| \leq \underline C |\underline{\tau}|/c'$ and
\[
\boldsymbol \xi\pe:= \sum_{i=0}^n\xi_i \pe \Be_{i+1}.
\] Let $\boldsymbol \zeta \in Z(\Gamma)$ such that (recall \ref{wfdef})
\begin{equation}\label{zetaxi}
\zeta_i\pe = c_i'\pe \xi_i\pe.
\end{equation}Then $ \boldsymbol \zeta$ satisfies \eqref{zetarestriction}.
Using this ${d,\boldsymbol \zeta}$, find $\phi_{\mathrm{gluing}}[p],\phi_{\mathrm{dislocation}}[p]$ using \ref{subsprescribep}, \ref{prescribequad} and set
\[
\Phi'_{{d,\boldsymbol \zeta}} = {\bf U}\left( \left\{\psi_{\widetilde S[p]}[d] (\phi_{\mathrm{gluing}}[p]+ \phi_{\mathrm{dislocation}}[p]), 0 \right\}\right).
\]
Setting $\mu_i[p]:=\mu_i'[p]+\mu_i''[p], \mu_i\pe:=\mu_{i}'\pe+\mu_i''\pe$ where these coefficients come from \ref{subsprescribep}, \ref{prescribequad}, define
\[
\underline w'_v:=\sum_{i=0}^n \sum_{p \in V(\Gamma)}\mu_i[p]w_i[p] \in \mathcal K_V, \quad \quad \underline w'_a:= \sum_{i=0}^n
\sum_{\pe \in A(\Gamma)} \mu_i\pe w_i\pe \in \mathcal K_A.
\]
Using \ref{LinearSectionProp}, determine
$$(\Phi_{{d,\boldsymbol \zeta}}'' , \underline w''_v, \underline w''_a):=\mathcal R_M\left(-\mathcal L_g\Phi_{{d,\boldsymbol \zeta}}'+\underline w'_v+ \underline w'_a - H_{\mathrm{error}}[{d,\boldsymbol \zeta}]\right).$$ Now set
\[
\Phi_{{d,\boldsymbol \zeta}}:= \Phi_{{d,\boldsymbol \zeta}}'' + \Phi_{{d,\boldsymbol \zeta}}' \: \text{ and } (\underline w_{d})_v:= \underline w''_v + \underline w'_v \: \text{ and } (\underline w_{\boldsymbol \zeta})_a:= \underline w''_a + \underline w'_a.
\]
\begin{prop}
\label{prescribexi}
For ${d,\boldsymbol \zeta}$ chosen as in \eqref{dtaxi}, \eqref{zetaxi} respectively, the functions $\Phi_{{d,\boldsymbol \zeta}}$, $ (\underline w_{d})_v$, and $(\underline w_{\boldsymbol \zeta})_a$ depend continuously on ${d,\boldsymbol \zeta}$ and satisfy (by decreasing $\maxT$ if necessary):
\begin{enumerate}
\item $\mathcal L_g \Phi_{{d,\boldsymbol \zeta}} +H_{\mathrm{error}}[{d,\boldsymbol \zeta}]= (\underline w_{d})_v+(\underline w_{\boldsymbol \zeta})_a$ on $M$
(recall \ref{Hbounds}).
\item $\|\Phi_{{d,\boldsymbol \zeta}}\|_{2,\beta,\gamma; {d,\boldsymbol \zeta}} \leq C(|\underline{\tau}|+\left| \boldsymbol \zeta \right|) \leq C \underline C |\underline{\tau}|$.
\item
$\left|(\underline w_{\boldsymbol \zeta})_a- (w_{\boldsymbol \zeta})_a\right|_{A} \leq C |\underline{\tau}|,$
where
$(w_{\boldsymbol \zeta})_a:=\sum_{i=0}^n\sum_{\pe \in A(\Gamma)} { \xi}_i\pe w_i\pe$
(recall \ref{D:calKp} and \ref{D:wi}).
\end{enumerate}
\end{prop}
\begin{proof}By construction, item $(1)$ is immediately satisfied. Moreover, the estimates for $\phi_{\mathrm{gluing}}[p]$ and $\phi_{\mathrm{dislocation}}[p]$ imply that
\[
\|\Phi_{{d,\boldsymbol \zeta}}'\|_{2,\beta,\gamma;{d,\boldsymbol \zeta}} \leq C\left(|\underline{\tau}| + \left| \boldsymbol \zeta \right|\right).
\]
To determine estimates for $\Phi_{{d,\boldsymbol \zeta}}''$ note that for $E :=-\mathcal L_g\Phi_{{d,\boldsymbol \zeta}}'+\underline w'_v+ \underline w'_a - H_{\mathrm{error}}[{d,\boldsymbol \zeta}]$,
\[
E|_{\widetilde S[p]}=\mathcal L_g\left((1-\psi_{\widetilde S[p]}[d]) (\phi_{\mathrm{gluing}}[p]+ \phi_{\mathrm{dislocation}}[p])\right):=E[p].
\]and
\[
\supp(E) \subset \cup_{\pe \in A(\Gamma)} \left(\Lambda[p,e,1] \cap [\ur_{d}^{-1}(2\urin[e;d]), \Pe-\underline{b}] \times \Ssn\right).
\]
Using the same strategy that produced the estimate \eqref{iterationest}, we note that
\begin{align*}
\|E\|_{0,\beta,\gamma;{d,\boldsymbol \zeta}} \leq \max_{p\in V(\Gamma)}\|E[p]:C^{0,\beta}(S_1[p,e,1] \cap \Lambda[p,e,1],\rho_d, g ,\ur_{d}^{\gamma-2} )\| \leq C\max_{e \in E_p}\urin[e;d]^{\gamma'-\gamma}\left(|\underline{\tau}| + \left| \boldsymbol \zeta \right| \right).
\end{align*}Therefore, the estimates from \ref{LinearSectionProp} imply
\[
\|\Phi_{{d,\boldsymbol \zeta}}''\|_{2,\beta,\gamma;{d,\boldsymbol \zeta}} + \|\underline w''_v\|_{2,\beta,\gamma;{d,\boldsymbol \zeta}}+ \|\underline w''_a\|_{2,\beta,\gamma;{d,\boldsymbol \zeta}} \leq C\max_{e \in E(\Gamma) \cup R(\Gamma)}\urin[e;d]^{\gamma'-\gamma}\left(|\underline{\tau}| + \left| \boldsymbol \zeta \right|\right).
\]
Finally, the estimates on $\mu_i[p,e]$ from \ref{subsprescribep}, \ref{prescribequad} imply
$
\left|\underline w_a'- (w_{\boldsymbol \zeta})_a\right|_{A} \leq C( |\underline{\tau}|+\left| \boldsymbol \zeta \right| |\underline{\tau}|^{\frac 1{n-1}} ).
$
\end{proof}
\section{The Main Theorem}
\label{MThm}
\begin{prop}
\label{globalquadprop}
There exists $\maxTG>0$ sufficiently small such that for all $0<|\underline{\tau}|<\maxTG$,
$\alpha \in \left(0, 1\right)$, and $v \in C^{2, \beta}_{\mathrm{loc}}(M)$ such that $\|v\|_{2,\beta,\gamma;{d,\boldsymbol \zeta}} \leq |\underline{\tau}|^{1-\alpha/2}$,
we have
(recall \ref{metricdefn} and \ref{metricequivalence})
\[
\|H_v - \Htdz- \mathcal L_{g}v\|_{0,\beta,\gamma;{d,\boldsymbol \zeta}} \leq |\underline{\tau}|^{\alpha-1}\|v\|_{2,\beta,\gamma;{d,\boldsymbol \zeta}}^2/\widetilde C.
\]
\end{prop}
\begin{proof}
The estimate $\|v\|_{2,\beta,\gamma;{d,\boldsymbol \zeta}} \leq |\underline{\tau}|^{1-\alpha/2}$ coupled with the definition of $\rho_d$ implies that for every $x \in M$, $v$ satisfies the hypothesis of \ref{localquadprop} on the disk of radius $1/10$ in the metric $\rho_d^{-2}(x)g$, centered at $x$. We refer to it as $D$. The conclusion of the same proposition implies that
\[
\|H_v-\Htdz-\mathcal L_g v:C^{0,\beta}(D,\rho_d,g,f_d\rho_d^{-2})\| \leq C(c_1)f_d(x)\rho_d^{-1}(x)\|v:C^{2,\beta}(D,\rho_d,g,f_d)\|^2,
\]where, for $D \subset S_1[p], S_1^+\penp$, we presume that $f_d\equiv 1$.
The definition of the global norm and the fact that $\rho_d \sim_{C(\underline{b})} 1$ on $S_1[p],
S_1^+\penp$ implies that it is enough to show that the right hand side of the inequality is bounded above by $|\underline{\tau}|^{\alpha-1}\|v\|_{2,\beta,\gamma;{d,\boldsymbol \zeta}}^2/\widetilde C$.
For each $D \subset S_1[p]$ or $D \subset S^+_1\penp$, $f_d(x)\rho_d^{-1}(x) \leq C$ and thus
\begin{multline*}
C(c_1)f_d(x)\rho_d^{-1}(x)\|v:C^{2,\beta}(D,\rho_d,g,f_d)\|^2
\\
\leq C\|v:C^{2,\beta}(D,\rho_d,g,f_d)\|^2 \leq |\underline{\tau}|^{\alpha-1}\|v:C^{2,\beta}(D,\rho_d,g,f_d)\|^2/\widetilde C
\end{multline*}
for sufficiently small $\underline{\tau}$.
Since the weighting $\de^{-m/2}>1$, it follows that $\|v:C^{2,\beta}(D,\rho_d,g,f_d)\| < \|v\|_{2,\beta,\gamma;{d,\boldsymbol \zeta}}$ for any $D \subset S^+_1 \penp$.
On each $M[e]$, the definitions imply that $f_d\rho_d^{-1}$ is maximized at $\Pe$.
By decreasing $\maxTG$ if necessary, depending on $\underline{b}$,
\[
\max_{e \in E(\Gamma) \cup R(\Gamma)}f_d(\Pe)\rho_d^{-1}(\Pe)=
\max_{e \in E(\Gamma) \cup R(\Gamma)} \urin[e;d]^{\gamma}(r_{\td}^{\mathrm{min}})^{-1} \leq C(\underline{b})|\underline{\tau}|^{\frac {\gamma-1}{n-1}} \leq |\underline{\tau}|^{\alpha-1}/\widetilde C.
\]
The result again follows immediately by the definition of the global norms.
\end{proof}
\begin{theorem}
Let $\Gamma$ be a finite central graph with an associated family $\mathcal{F}$.
Then there exist $\underline C, \underline{b}$ sufficiently large and $\maxTG>0$ sufficiently small so that for all $0<|\underline{\tau}|<\maxTG$:
There exist $d,\boldsymbol \zeta$ satisfying \eqref{drestriction},\eqref{zetarestriction} and a function $f \in C^{2,\beta}_{\mathrm{loc}}(M)$ such that $(\hYtdz)_f:M \to \mathbb R^{n+1}$ is an immersed surface with CMC equal to $1$ and $\|f\|_{2,\beta,\gamma;{d,\boldsymbol \zeta}} \leq \widetilde C|\underline{\tau}|$ (recall \ref{Meq}). Moreover, if $\Gamma$ is pre-embedded then $(\hYtdz)_f$ is embedded for $\underline{\tau}>0$.
\end{theorem}
In the statement, $M$ is the abstract surface based on the graph $\Gamma$ and the parameter $\underline{\tau}$. $\hYtdz$ is the immersion described in Section \ref{InitialSurface} depending on $\Gamma, \underline{\tau}, d, \boldsymbol \zeta$. Finally, $(\hYtdz)_f$ is the normal graph over $\hYtdz$ by $f$, as defined in Appendix \ref{quadapp}.
\begin{proof}
Choose $\underline{b}\gg 1$ as in \ref{ass:b}.
Recall that $\widetilde C, c'$ (\ref{Meq}, \ref{cprimedef}) and all $C$ appearing in the statement of \ref{prescribexi} depend only on $\underline{b}$.
Choose $\underline C$ independent of $\maxTG$ so that $\underline C \geq 4c' C$.
Choose
$\maxTG>0$ as in \ref{ass:tgamma6}.
We again point out that $\maxTG$ does not depend upon the structure of $\Gamma$ but only on the function $\hat \tau$ and on various geometric quantities.
Moreover, $\underline{b}$ is independent of $\maxTG$. Fix $\alpha \in \left(0,1\right)$ and $\gamma \in (1,2)$.
Reduce $\maxTG$ if necessary so that
$C\underline C+ \widetilde C \leq \maxTG^{-\alpha/2}$
and $C\underline C \maxTG^{\frac{\gamma- 1}{n-1}} \leq 1$.
For any $0<|\underline{\tau}|<\maxTG$, we define $B$ to be the set
\[
\{u \in C^{2,\beta}_{\mathrm{loc}}(M): \|u\|_{2,\beta,\gamma;0, \boldsymbol 0} \leq |\underline{\tau}|\} \times
\{d \in D(\Gamma):\left|d\right| \leq |\underline{\tau}|^{1+\frac 1{n-1}}\} \times \{\boldsymbol \xi \in Z(\Gamma):\left|\boldsymbol \xi \right| \leq \underline C|\underline{\tau}|/c'\}.
\]
We define $\mathcal J: B \to B$ in the following manner.
For $(u, d, \boldsymbol \xi) \in B$, define $\boldsymbol \zeta$ by \eqref{zetaxi}.
Using this ${d,\boldsymbol \zeta}$, find $\Gtdtl \in \mathcal{F}$ and determine $\hYtdz$ in the manner outlined in Section \ref{InitialSurface}.
Determine $\Phi_{{d,\boldsymbol \zeta}}$ and $(\underline w_{d})_v, (\underline w_{\boldsymbol \zeta})_a$ by \ref{prescribexi} and define a function $\tilde u = \Phi_{{d,\boldsymbol \zeta}} -u$. Then,
\[
\mathcal L_g \tilde u = (1-\Htdz) + (\underline w_{d})_v +(\underline w_{\boldsymbol \zeta})_a-\mathcal L_g u,
\]
\[
\|\tilde u\|_{2,\beta,\gamma;{d,\boldsymbol \zeta}} \leq C\underline C|\underline{\tau}| + \widetilde C |\underline{\tau}|\leq |\underline{\tau}|^{1-\alpha/2}.
\]Using \ref{LinearSectionProp}, with $H_{\tilde u}$ denoting the mean curvature of the surface $(\hYtdz)_{\tilde u}$, define $(u',w'_v, w'_a)= \mathcal R_{{d,\boldsymbol \zeta}}(H_{\tilde u} - \Htdz-\mathcal L_g \tilde u)$. Then
\[
\mathcal L_g u' = H_{\tilde u} -1+ \mathcal L_g u + w'_v-(\underline w_{d})_v+ w'_a-(\underline w_{\boldsymbol \zeta})_a
\]
and by \ref{globalquadprop},
\begin{equation}\label{eq:CCC}
\|u'\|_{2,\beta,\gamma;{d,\boldsymbol \zeta}} +\left|w'_v\right|_V+\left|w'_a\right|_A \leq
|\underline{\tau}|^{\alpha-1}\|\tilde u\|^2_{2,\beta,\gamma;{d,\boldsymbol \zeta}}/\widetilde C \leq
|\underline{\tau}|/\widetilde C.
\end{equation}
By \eqref{Meq}
\begin{equation}\label{wprime}
\|u'\|_{2,\beta,\gamma;{0,\boldsymbol 0}} \leq |\underline{\tau}|.
\end{equation}
Define $\boldsymbol \mu_a \in Z(\Gamma)$ such that
\[
\boldsymbol \mu_a\pe:= \sum_{i=0}^n \mu_i\pe \Be_{i+1}
\] where the coefficients $\mu_i \pe$ are determined to satisfy
\[\sum_{i=0}^n \mu_i\pe w_i\pe =\left( w_a' + (w_{\boldsymbol \zeta})_a-(\underline w_{\boldsymbol \zeta})_a\right)\big|_{S_1[p] \cap \Lambda[p,e,1]}
\]and the definition of $(w_{\boldsymbol \zeta})_a$ is given in \ref{prescribexi}. The estimates from \ref{prescribexi} and \eqref{eq:CCC} imply that
\begin{equation}\label{muest}
\left|\boldsymbol\mu_a\right| \leq |\underline{\tau}|/\widetilde C + C |\underline{\tau}| \leq \underline C|\underline{\tau}|/c'.
\end{equation}
Define $\boldsymbol \mu_v \in D(\Gamma)$ such that $\boldsymbol \mu_v[p] := \underline{d}[(\hYtdz)_{\tilde u},p]$ (recall \eqref{greatdest}). By \ref{greatdprop}, for each $p \in V(\Gamma)$,
\begin{equation}\label{muestd}
\left|d[p] - \boldsymbol \mu_v[p]\right| \leq C\underline C |\underline{\tau}|^{1+(n-3+\gamma)/(n-1)} \leq C \underline C |\underline{\tau}|^{1+ \frac 1{n-1}+ \frac{\gamma-1}{n-1}} \leq |\underline{\tau}|^{1+\frac 1{n-1}}.
\end{equation}
We use the procedure above to define the map
\[
\mathcal J(u, d, \boldsymbol \xi) = (u', d - \boldsymbol \mu_v, \boldsymbol \mu_a).
\] Then by \eqref{wprime}, \eqref{muest}, \eqref{muestd}, $\mathcal J(u, d, \boldsymbol \xi) \in B$ and the map $\mathcal J: B \to B$ is well defined. Moreover, for some $\beta' \in (0, \beta)$, $B$ is a compact, convex subset of $C^{2,\beta'}_{\mathrm{loc}}(M) \times D(\Gamma) \times Z(\Gamma)$ and one can easily check that $\mathcal J$ is continuous in the induced topology.
Thus, Schauder's fixed point theorem \cite[Theorem 11.1]{GiTr}, implies there exists a fixed point $(u', d', \boldsymbol \mu'_a) \in B$.
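As a one-dimensional caricature of this fixed-point step (purely illustrative; the actual map $\mathcal J$ acts on the infinite-dimensional set $B$, and Schauder's theorem asserts only existence of a fixed point, not convergence of iterates), one can iterate a continuous self-map of a compact convex set, here $x \mapsto \cos x$ on $[0,1]$, where the iteration does converge because the map is a contraction near its fixed point:

```python
import math

def iterate(f, x0, steps=200):
    """Naively iterate a self-map. For the map below the iterates converge
    because the map is a contraction near its fixed point; Schauder's theorem
    itself only guarantees existence of a fixed point."""
    x = x0
    for _ in range(steps):
        x = f(x)
    return x

# cos maps the compact convex set [0, 1] continuously into itself,
# so a fixed point exists (here even Banach's theorem applies).
fp = iterate(math.cos, 0.5)
assert abs(fp - math.cos(fp)) < 1e-9
```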
By inspection, at a fixed point one has that
\begin{equation}\label{Hparallel}
1-H_{ u'} = w_v'-(\underline w_{d'})_v
\end{equation}and
\begin{equation}
\label{diszero}
\underline{d}[(Y_{d',\boldsymbol \zeta})_{ u'},p] \equiv 0 \text{ for all } p \in V(\Gamma).
\end{equation}
Recall that the function $w_v'-(\underline w_{d'})_v$ is supported on the interior of $\bigsqcup_{p \in V(\Gamma)} S[p]$. For a fixed $p \in V(\Gamma)$,
\[
\left(w_v'-(\underline w_{d'})_v\right)\big|_{S[p]} = \sum_{i=0}^n \lambda_i w_i[p]
\]for some $\lambda_i \in \mathbb R$.
Let $N_{d',\boldsymbol \zeta, u'}$ denote the normal to the immersion $(Y_{d',\boldsymbol \zeta})_{u'}(M)$
and let $\widetilde F_i:= \widetilde \omega_n^{-\frac 12} N_{d',\boldsymbol \zeta, u'} \cdot \Be_{i+1}$.
The definition of the global norm implies that $\| u':C^{2,\beta}(U[p],g)\| \leq C\underline C|\underline{\tau}|$ and thus on $U[p]$,
\[
|N_{d',\boldsymbol \zeta}- N_{d',\boldsymbol \zeta, u'} | \leq C \underline C |\underline{\tau}|, \quad |\hF_i - \widetilde F_i| \leq C \underline C |\underline{\tau}|.
\]
Then, using \eqref{hFdefeq} and \ref{wFdefeq}, we have
\[
\int_{U[p]} w_i[p] \widetilde F_i\geq \frac 12, \quad \quad \left|\int_{U[p]} w_i[p] \widetilde F_j \right|\leq C\underline C|\underline{\tau}| \text{ for } i \neq j.
\]
Consider the $(n+1)\times (n+1)$ matrix $\mathcal M$ with entries $\mathcal M_{ij} = \int_{U[p]} w_j[p] \widetilde F_i$, $i,j = 0, \dots, n$.
The previous calculations demonstrate that $\mathcal M$ is invertible.
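The invertibility here follows from strict diagonal dominance: each diagonal entry is at least $\tfrac 12$ while the off-diagonal entries are $O(\underline C|\underline{\tau}|)$. A minimal numerical sketch (the dimension and the off-diagonal bound below are placeholder values standing in for $n+1$ and $C\underline C|\underline{\tau}|$):

```python
import random

def strictly_diagonally_dominant(A):
    """Strict row diagonal dominance; such matrices are invertible
    (Levy-Desplanques theorem)."""
    return all(
        abs(A[i][i]) > sum(abs(A[i][j]) for j in range(len(A)) if j != i)
        for i in range(len(A))
    )

random.seed(0)
dim = 4      # stands in for n + 1 (hypothetical choice n = 3)
eps = 1e-3   # stands in for the small off-diagonal bound C * C_bar * |tau|
M = [[0.5 if i == j else random.uniform(-eps, eps) for j in range(dim)]
     for i in range(dim)]
assert strictly_diagonally_dominant(M)
```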
The definition for $\underline{d}[(Y_{d',\boldsymbol \zeta})_{ u'}, \cdot]$ along with \eqref{Hparallel} and \eqref{diszero} together imply that
\[
\int_{U[p]} \left(\sum_{j=0}^n \lambda_j w_j[p]\right) \widetilde F_i = 0 \text{ for all } i = 0, \dots, n.
\]
Since $\mathcal M$ is invertible, this implies that
$\lambda_j=0$ for all $j= 0, \dots, n$ and thus $w_v'-(\underline w_{d'})_v\equiv 0$.
By \eqref{Hparallel}, $1-H_{ u'} \equiv 0$ and thus the immersion $(Y_{d',\boldsymbol \zeta})_{ u'}:M \to \mathbb R^{n+1}$ has mean curvature identically $1$.
Embeddedness follows when $\Gamma$ is pre-embedded and $\underline{\tau}>0$ as in this case $Y_{d',\boldsymbol \zeta}(M)$ is embedded and $\| u'\|_{2,\beta,\gamma;d',\boldsymbol \zeta} \leq \widetilde C|\underline{\tau}|$.
\end{proof}
We do not provide an extensive list of examples but instead point out that those provided in \cite[Section 2.2]{BKLD}
and the finite topology examples in \cite[Section 4]{KapAnn} can easily be modified for the higher dimensional setting.
For example, \cite[Example 4.1]{KapAnn} remains valid and again produces infinitely many topological types with two ends.
Moreover, it is not hard to construct further examples by modifying those graphs to take advantage of the extra dimensions.
In the embedded case, a finite number of topological types can also easily be realized with $k$ ends,
with the number of topological types tending to $\infty$ as $k\to\infty$.
Finally, an easy parameter count demonstrates that there are
$(k-1)(n+1)-\binom{n+1}2+\binom{n+1-k}2$ continuous parameters in these constructions in the absence of symmetry.
Here the first summand reflects the fact that we have $k-1$ Delaunay ends whose direction and $\tau$ parameter can be arbitrarily assigned,
and the remaining terms correct for the trivial changes induced by rotations.
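For concreteness, the count can be evaluated at sample values (chosen here purely for illustration):

```latex
% Sample evaluation of the parameter count, for n = 3 and k = 3 ends:
\[
(k-1)(n+1)-\binom{n+1}{2}+\binom{n+1-k}{2}
  = 2\cdot 4-\binom{4}{2}+\binom{1}{2}
  = 8-6+0
  = 2.
\]
```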
\section{Introduction}
Despite a wealth of evidence for its existence, the nature of the Dark Matter (DM) composing more than 80\% of the total matter content of our universe remains unknown.
Particle candidates---e.g. from supersymmetric extensions of the standard model of particle physics---are still the most explored ones, in particular {\em weakly interacting massive particles} (WIMPs), for which the DM relic density $\Omega_{\rm cdm} h^2= 0.1205$~\cite{Aghanim:2016yuo} is obtained via the standard freeze-out mechanism. However, the lack of a WIMP detection in collider, direct, or indirect experiments is now reviving interest in alternatives. A promising and well-studied {\it macroscopic} alternative to particle DM is primordial black holes (PBH), as recently reviewed in Ref.~\cite{Carr:2016drx}. This scenario has received a lot of attention after the aLIGO discovery of three or four binary black hole (BH) mergers of tens of solar masses~\cite{Abbott:2016blz,Abbott:2016nmj,Abbott:2017vtc}, including one with a progenitor spin misaligned with the orbital momentum. Intriguingly, their merging rate is compatible with the expectation from binaries formed
in present-day halos by a BH population whose density is comparable to the DM one \cite{Bird:2016dcv,Clesse:2016vqa}, although Refs.~\cite{Sasaki:2016jop,Raidal:2017mfl} argue that this rate is significantly lower than the merger rate of binaries formed in the early universe, which would thus overshoot the rate observed by aLIGO.
Black holes in a wide range of masses could have formed in the early universe due to the collapse of $\mathcal{O}(1)$ primordial inhomogeneities \cite{Carr:1974nx,Carr:1975qj,Harada:2013epa}, usually associated with either extended inflationary models (such as hybrid inflation \cite{GarciaBellido:1996qt,2011arXiv1107.1681L,Bugaev:2011wy,Clesse:2015wea}, curvaton scenarios \cite{Kohri:2012yw,Kawasaki:2012wr}, or single-field and multi-field models in various frameworks~\cite{Kawasaki:2016pql,Garcia-Bellido:2016dkw,2017arXiv170203901G,Domcke:2017fix,Germani:2017bcs,Ezquiaga:2017fvi,Kannike:2017bxn,Motohashi:2017kbs}) or with first- and second-order phase transitions \cite{Jedamzik:1999am,Rubin:2001yw}.
PBH with masses $M\lesssim 10^{-17} M_{\odot}$ evaporate into standard model particles with a blackbody spectrum (the so-called Hawking radiation \cite{Hawking:1974sw, Hawking:1974rv}), leading to energetic particle injection which can be searched for in cosmic rays \cite{Barrau:2001ev}, $\gamma$ rays \cite{Carr:2016hva} or CMB analyses \cite{2017JCAP...03..043P}. The intermediate mass range up to stellar masses is covered by a number of lensing constraints. From low to high masses, we mention femtolensing of gamma-ray bursts~\cite{2012PhRvD..86d3001B} and microlensing in high-cadence observations of M31~\cite{Niikura:2017zjd} and of the Magellanic clouds~\cite{PalanqueDelabrouille:1997uj,Alcock:2000ph,Tisserand:2006zx}. The latter are, however, still controversial (e.g. Refs.~\cite{Hawkins:2011qz,Green:2017qoa}), depending on the PBH clustering properties \cite{Clesse:2015wea}; some results even point to a possible detection of anomalous microlensing events \cite{PalanqueDelabrouille:1997uj,Alcock:2000ph}. Additional constraints from neutron stars and white dwarfs in globular clusters also exist in this range~\cite{Capela:2013yf,Capela:2012jz}, but depend on astrophysical assumptions. Stellar-mass or heavier PBH are constrained by dynamical properties of ultra-faint dwarf galaxies \cite{Brandt:2016aco,Green:2016xgy,Li:2016utv,Koushiappas:2017chw}, by halo wide binaries \cite{2014ApJ...790..159M}, by X-ray or radio emission~\cite{Gaggero:2016dpq,Inoue:2017csr}, as well as by the cosmic microwave background (CMB) bounds discussed in the following\footnote{Further constraints exist, e.g. based on the emitted gravitational wave background \cite{Nakama:2016gzw,Clesse:2016ajp,Schutz:2016khr,Cholis:2016xvo} or non-gaussianities in the primordial fluctuations \cite{Tada:2015noa,Young:2015kda}, which---while often quite stringent---are model dependent.}.
Indeed, due to their gravitational attraction on the surrounding medium, such massive objects accrete matter, which heats up, eventually gets ionized, and emits high-energy radiation. In turn, these energetic photons can alter the ionization and thermal history of the universe, affecting the statistical properties of CMB anisotropies. Very stringent constraints (excluding PBH as DM with $M\gtrsim 0.1~M_\odot$) were thus derived on this scenario already a decade ago~\cite{Ricotti:2007au}. These bounds (as well as their update in Ref.~\cite{Horowitz:2016lib}) have recently been revisited and corrected in Ref.~\cite{Ali-Haimoud:2016mbv} (see also Ref.~\cite{Blum:2016cjs}), yielding significantly weaker constraints of $M \lesssim 10$--$100~M_\odot$ if PBH constitute the totality of the DM, depending on the assumptions on radiation feedback.
Although such bounds are usually derived assuming a monochromatic PBH mass function, actual bounds on extended mass functions are typically more stringent~\cite{Green:2016xgy,Kuhnel:2017pwq,Carr:2017jsz}. Also, the time evolution of the initial mass function due to merging events is strongly constrained by purely gravitational CMB bounds: in each merger with comparable BH masses, a few percent of their mass is converted into gravitational waves, i.e. ``dark'' radiation, a phenomenon that cannot involve more than a small fraction of the DM, due to alterations to the Sachs-Wolfe effect. Essentially no more than one merger per PBH on average is allowed between recombination and now~\cite{Poulin:2016nat}.
In this paper, we revisit the CMB anisotropy constraints on the PBH abundance, which until now have been derived assuming {\it spherical} accretion of matter onto BH. We revisit this hypothesis and find plausible arguments suggesting that {\em an accretion disk} generically forms in the dark ages, between recombination and reionization, possibly already at $z\sim {\cal O}$(1000). A firm proof in that sense would require deeper studies of the non-linear growth of structures at small scales, accounting for the peculiarities of PBH clustering and for the time-dependent building-up of the baryonic component of halos. A first step to motivate such studies, however, is to prove that they have a potentially large impact: in the presence of disks, CMB constraints on PBH improve by (at least) two orders of magnitude, excluding the possibility that PBH with masses $M \gtrsim 2~M_\odot$ account for the totality of the DM. As we will argue, we expect the bounds to improve greatly if the baryon velocity at small scales is not coherent and is comparable with (or smaller than) the cosmological thermal velocity, and/or if a sizable baryon filling of the PBH halos is present already at $z\gtrsim {\cal O}$(100).
This article is structured as follows: In Sec.~\ref{sec:essentials}, we provide a short---and necessarily incomplete---review of the current understanding of accretion,
and discuss its applicability in the cosmological context. The crucial arguments for why we deem it plausible that accretion (at least the accretion relevant for CMB bounds) proceeds via disks are discussed in Sec.~\ref{sec:disks}. In Sec.~\ref{Lumin} we review the expected high-energy luminosity associated with these accretion phenomena and describe the benchmark prescriptions used afterwards. Section~\ref{CMBbound} describes our procedure for obtaining CMB bounds. In Sec.~\ref{sec:conclu}, we summarize our results and draw our conclusions.
\section{Accretion in cosmology}\label{sec:basics}
\subsection{Essentials on accretion}\label{sec:essentials}
The problem of accretion of a point mass $M$ moving at a constant speed $v_{\rm rel}$ in a homogeneous gas of number density $n_{\infty}$ (and mass density $\rho_{\infty}$, where the subscript $\infty$ means far away from the point mass)
was first studied by Hoyle and Lyttleton~\cite{1939PCPS...35..405H,1940PCPS...36..424H,1940PCPS...36..325H} in a purely ballistic limit, i.e. accounting only for gravitational effects but no hydrodynamical or thermodynamical considerations. They found the accretion rate (natural units $c=\hbar=k_{\rm B}=1$ are used throughout, unless stated otherwise)
\begin{equation}\label{eq:HLAccRate}
\dot{M}_{\rm HL} \equiv \pi r^2_{\rm HL}\rho_{\infty}v_{\rm rel}\equiv4\pi \rho_{\infty}\frac{(GM)^2}{v_{\rm rel}^3}\,,
\end{equation}
where we introduced the Hoyle-Lyttleton radius $r_{\rm HL}$, the radius of the {\it cylinder} effectively sweeping the medium. This model does not describe the motion of the particles once they reach the (infinitely thin and dense) accretion line in the wake of the point mass, where pressure and dissipation effects prevail. Also, it is clearly meaningless in the limit of very small velocity $v_{\rm rel}$. A first attempt to address the former problem and account for the accretion column was made by Bondi and Hoyle~\cite{1944MNRAS.104..273B}, who suggested a reduction of the accretion rate by up to a factor of two. The second problem is linked to the neglect of pressure. It has only been solved exactly for an accreting body at rest in a homogeneous gas, when the accretion is spherical by symmetry. Its rate has been computed by Bondi~\cite{Bondi:1952ni}, yielding the so-called {\em Bondi accretion rate}:
\begin{equation}\label{eq:BondiAccRate}
\dot{M}_{\rm B} \equiv 4\pi \lambda\, \rho_{\infty}c_{{\rm s},\infty}r^2_{\rm B}\equiv 4\pi \lambda \,\rho_{\infty}\frac{(GM)^2}{c_{{\rm s},\infty}^3}\,,
\end{equation}
where $r_{\rm B}$ is the Bondi radius, i.e. the radius of the equivalent accreting {\it sphere} (as opposed to a cylinder, hence the $4\pi$ geometric factor), $c_{s,\infty}$ is the sound speed far away from the point mass, depending on the pressure ${\rm P}_\infty$ and density $\rho_\infty$, and $\lambda$ is a parameter that describes the deviation of the accretion from the Bondi idealised regime.
In the cosmological plasma, one typically has:
\begin{eqnarray}
&&c_{{\rm s},\infty} = \sqrt{\frac{\gamma{\rm P}_{\infty}}{\rho_{\infty}}}= \sqrt{\frac{\gamma(1+x_{\rm e})T}{m_{\rm p}}}\simeq 6 \frac{\rm km}{\rm s}\sqrt{\frac{1+z}{1000}}\,,\label{cs}\\
&&\Rightarrow r_{\rm B} \equiv \frac{GM}{c_{{\rm s},\infty}^2} \simeq 1.2 \times 10^{-4} {\rm pc} \frac{M}{M_\odot}\frac{10^3}{1+z}\,,\label{rBnum}
\end{eqnarray}
$m_{\rm p}$ being the proton mass and $\gamma$ the polytropic equation-of-state coefficient, equal to $5/3$ for a monoatomic ideal gas. The approximation on the RHS of Eq.~(\ref{cs}) typically holds for $100\lesssim z\lesssim 1000$. The mean cosmic gas density in the early universe is given by:
\begin{equation}\label{eq:n_gas}
n_{\infty} \simeq \frac{\rho_\infty }{m_{\rm p}} \simeq 200 \,{\rm cm}^{-3}\bigg(\frac{1+z}{1000}\bigg)^3\,.
\end{equation}
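The numerical coefficients in Eqs.~(\ref{cs})--(\ref{eq:n_gas}) are easy to verify. The following sketch (ours, not part of the original analysis) evaluates them in SI units, assuming $x_{\rm e}\ll 1$, a gas temperature $T = T_0(1+z)$ with $T_0 = 2.725$ K, and $\Omega_{\rm b}h^2 \simeq 0.022$:

```python
import math

# Physical constants (SI units)
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
k_B = 1.381e-23      # Boltzmann constant, J/K
m_p = 1.673e-27      # proton mass, kg
M_sun = 1.989e30     # solar mass, kg
pc = 3.086e16        # parsec, m

def sound_speed(z, x_e=0.0, gamma=5.0 / 3.0, T0=2.725):
    """Eq. (cs): adiabatic sound speed of the baryon gas at redshift z, in m/s."""
    T = T0 * (1.0 + z)   # gas temperature, still coupled to the CMB at these z
    return math.sqrt(gamma * (1.0 + x_e) * k_B * T / m_p)

def bondi_radius(M, z):
    """Eq. (rBnum): r_B = G M / c_s^2, in meters."""
    return G * M / sound_speed(z)**2

def gas_density(z, Omega_b_h2=0.022):
    """Eq. (n_gas): mean cosmic baryon number density, in m^-3."""
    rho_crit_over_h2 = 1.878e-26   # critical density / h^2, kg/m^3
    rho_b = Omega_b_h2 * rho_crit_over_h2 * (1.0 + z)**3
    return rho_b / m_p

z = 999.0
print(sound_speed(z) / 1e3)          # ~6.1 km/s
print(bondi_radius(M_sun, z) / pc)   # ~1.2e-4 pc
print(gas_density(z) / 1e6)          # ~250 cm^-3
```

The total baryon number density comes out slightly above the quoted $\simeq 200\,{\rm cm}^{-3}$; the exact value depends on how the helium mass fraction is counted.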
Finally, $\lambda$ is a numerical parameter which quantifies non-gravitational forces (pressure, viscosity, radiation feedbacks, etc.) partially counteracting the gravitational attraction of the object. Historically, Bondi computed the maximal value of $\lambda$ as a function of the equation of state of the gas, finding $\lambda\sim{\cal O}$(1), ranging from 0.25 ($\gamma = 5/3$, adiabatic case) to 1.12 ($\gamma =1$, isothermal case).
There is no exact computation of the accretion rate accounting for the finite sound speed and a displacement of the accreting object. However, as argued by Bondi in Ref.~\cite{Bondi:1952ni}, a reasonable proxy can be obtained by the quadratic sum of the relative velocity and the sound speed at infinity, which leads to an effective velocity $v_{\rm eff}^2 = c_{s,\infty}^2+v_{\rm rel}^2$. We thus define the Hoyle-Bondi radius and rate\footnote{Actually, our rate definition is a factor 2 larger than the original proposal, but has been confirmed as more appropriate even with numerical simulations, see Ref.~\cite{1985MNRAS.217..367S}.}
\begin{equation}\label{eq:HBAccRate}
\dot{M}_{\rm HB} \equiv 4\pi\lambda\, \rho_{\infty}v_{\rm eff}r^2_{\rm HB}\equiv 4\pi\lambda\, \rho_{\infty}\frac{(GM)^2}{v_{\rm eff}^3}\,.
\end{equation}
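As a concrete illustration, the Hoyle-Bondi rate of Eq.~(\ref{eq:HBAccRate}) can be encapsulated in a small helper (a sketch of ours; the function name and the default $\lambda=1$ are illustrative choices):

```python
import math

G = 6.674e-11      # m^3 kg^-1 s^-2
m_p = 1.673e-27    # kg
M_sun = 1.989e30   # kg

def mdot_hoyle_bondi(M, rho_inf, v_eff, lam=1.0):
    """Eq. (HBAccRate): mass accretion rate in kg/s for a point mass M (kg)
    in a medium of density rho_inf (kg/m^3), with effective velocity v_eff (m/s)."""
    return 4.0 * math.pi * lam * rho_inf * (G * M)**2 / v_eff**3

# Example: 1 M_sun PBH at z ~ 1000, v_eff = c_s ~ 6.1 km/s, lambda = 1
rho = 200e6 * m_p   # ~200 cm^-3 of hydrogen, converted to kg/m^3
print(mdot_hoyle_bondi(M_sun, rho, 6.1e3))   # ~3e11 kg/s
```

Note the steep $v_{\rm eff}^{-3}$ dependence: a factor of two in the effective velocity changes the rate by nearly an order of magnitude.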
Despite the fact that the Bondi analysis was originally limited to spherical accretion, this formalism is commonly used to treat non-spherical cases, e.g. with formation of an accretion disk, by choosing an appropriate value for $\lambda$. Although it has been shown that the simple analytical formulae can, for instance, overestimate accretion in the presence of vorticity~\cite{Krumholz:2004vj} or underestimate it in the presence of turbulence~\cite{Krumholz:2005pb}, Eq.~(\ref{eq:HBAccRate}) typically provides a reasonable order-of-magnitude description of the simulations (see for instance~\cite{Mellah:2015sja} for a recent simulation and interpolation formulae).
\subsection{Relative baryon-PBH velocity and disk accretion in the early universe}\label{sec:disks}
In the cosmological context, one might naively estimate the relative velocity between DM and baryons to be of the order of the thermal baryon velocity or of the speed of sound, Eq.~(\ref{cs}). In that case, the appropriate accretion rate would be the Bondi one, Eq.~(\ref{eq:BondiAccRate}).
The situation is, however, more complicated: at the time of recombination the sound speed drops abruptly, and the baryons, which were initially tightly coupled to the photons in a standing acoustic wave, acquire an eventually supersonic relative stream with respect to the DM, coherent over scales of tens of Mpc.
In linear theory, one finds that the square root of the variance of the relative baryon-DM velocity is basically constant before recombination and then drops linearly with $z$~\cite{Tseliakhovich:2010bj,Dvorkin:2013cea}:
\begin{equation}
\sqrt{\langle v_{\rm L}^2\rangle}\simeq {\rm min} \left[1, \frac{1+z}{1000}\right]\times 30\, {\rm km/s}\,.\label{vbulk}
\end{equation}
Yet, this is a linear theory result, and it is unclear if it can shed any light on the accretion, which depends on very small, sub-pc scales (Bondi radius, see Eq.~(\ref{rBnum})).
In Ref.~\cite{Tseliakhovich:2010bj}, the authors first studied the problem of small-scale perturbation growth into such a configuration, by a perturbative expansion of the fluid equations for DM, baryons, and the Poisson equation around the exact solution with uniform bulk motion given by Eq.~(\ref{vbulk}), {\it further assuming zero density contrast, and zero Poisson potential.}
Their results suggest that small-scale structure formation and the baryon settling into DM potential wells are significantly delayed with respect to simple expectations. Equation~(\ref{vbulk}) has also entered recent treatments of the Hoyle-Bondi PBH accretion rate, see Ref.~\cite{Ali-Haimoud:2016mbv}, yielding a correspondingly suppressed accretion. In particular, by taking the appropriate moment over the velocity distribution of the velocity-dependent function entering the luminosity of accreting BH, Ref.~\cite{Ali-Haimoud:2016mbv} found
\begin{equation}
v_{\rm eff}\equiv\left\langle\frac{1}{ (c_{s,\infty}^2+v_{\rm L}^2)^3 }\right\rangle^{-1/6}\simeq \sqrt{c_{s,\infty}\sqrt{\langle v_{\rm L}^2\rangle}}\,,\label{veff}
\end{equation}
with the last approximation only valid if $c_{s,\infty}\ll\sqrt{\langle v_{\rm L}^2\rangle}$, which is acceptable at early epochs after recombination, of major interest in the following.
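Numerically, the geometric-mean form on the RHS of Eq.~(\ref{veff}), combined with the fits of Eqs.~(\ref{cs}) and (\ref{vbulk}), gives (our sketch):

```python
import math

def c_s(z):
    """Fit of Eq. (cs): sound speed in km/s, valid for 100 < z < 1000."""
    return 6.0 * math.sqrt((1.0 + z) / 1000.0)

def v_L_rms(z):
    """Eq. (vbulk): rms linear relative baryon-DM velocity in km/s."""
    return 30.0 * min(1.0, (1.0 + z) / 1000.0)

def v_eff(z):
    """RHS of Eq. (veff), valid while c_s << sqrt(<v_L^2>)."""
    return math.sqrt(c_s(z) * v_L_rms(z))

print(v_eff(999.0))   # ~13.4 km/s, well above c_s: the accretion rate is suppressed
```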
The application of the above perturbative (but non-linear) theory to the relative motion between PBH and the baryon fluid down to sub-pc scales appears problematic. A first consideration is that the behavior of an ensemble of PBH of stellar masses is very different from the ``fluid-like'' behavior adopted for microscopic DM candidates like WIMPs. The discreteness of PBHs is associated with a ``Poissonian noise'', enhancing the DM power spectrum at small scales, down to the horizon scale at formation~\cite{Afshordi:2003zb,Chisholm:2005vm,Zurek:2007gn,Gong:2017sie}. Our own computation suggests that a density contrast of $\mathcal{O}(1)$ is attained at $z\simeq 1000$ at a comoving scale as large as $k_{\rm NL}\sim 10^3$ Mpc$^{-1}$ for a population of 1$\,M_\odot$ PBH whose number density is comparable to the DM one. Even allowing for fudge factors (e.g. $f_{\rm PBH}\sim 0.1$, a different mass), the non-linearity scale is unavoidably pertinent to the scales of interest. In fact, the PBH formation mechanism itself {\em is} a non-linear phenomenon, and peaks theory suggests that PBH are likely {\it already born} in clusters, on the verge of forming bound systems~\cite{Chisholm:2005vm,Chisholm:2011kn}. Our first conclusion is that the application of the scenario considered in Refs.~\cite{Tseliakhovich:2010bj,Dvorkin:2013cea} to the PBH case is not at all straightforward. In particular, a more meaningful background solution around which to perturb would be that of vanishing initial baryon perturbations in the presence of an already formed halo (and corresponding gravitational potential) at a scale $k_{\rm NL}\gtrsim 10^3$ Mpc$^{-1}$.
A second caveat is that the treatment in Refs.~\cite{Tseliakhovich:2010bj,Dvorkin:2013cea} uses a fluid approximation, i.e. it does not account for ``kinetic'' effects such as the random (thermal) velocity distribution around the bulk velocity given by Eq.~(\ref{vbulk}). One expects that ``cold'' baryons (statistically colder than the average) would already settle in the existing PBH halo at early times, forming a virialized system---albeit one still under-dense in baryons with respect to the cosmological baryon-to-DM ratio.
One may also worry about other effects, such as shocks and instabilities, which may hamper the applicability of the approach of Ref.~\cite{Tseliakhovich:2010bj} to too small scales and too long times.
Assuming that the overall picture nevertheless remains correct in a more realistic treatment, we expect that the PBH can generically accrete from two components: the high-velocity, free-streaming fraction at cosmological density and the diminished rate of Eqs.~(\ref{eq:HBAccRate}) and (\ref{veff}), as considered in Ref.~\cite{Ali-Haimoud:2016mbv}, and a virialized component, of initially negligible density but growing with time and eventually dominating, with typical relative velocities of the order of the virial ones. If we normalize to the Milky Way halo ($10^{12}\,M_\odot$) value $v_{\rm vir}\sim 10^{-3}\,c$, and adopt the simple scaling of the velocity with the halo mass over size, $v_{\rm vir}(M_{\rm halo})\propto (M_{\rm halo}/d_{\rm halo})^{1/2}\propto M_{\rm halo}^{1/3}$, we estimate $v_{\rm vir}\sim 0.3\,$km/s to $3\,$km/s for a halo mass of $10^{3}\,M_\odot$ to $10^{6}\,M_\odot$. The latter roughly corresponds to the smallest known dwarf galaxies, see e.g.~\cite{Bonnivard:2015xpq}~\footnote{The PBH distribution can hardly be dominated by heavier clumps, or the lack of predicted structures at the dwarf scales would automatically exclude them as dominant DM component.}. At $z\simeq {\cal O}(1000)$, it is likely that the fast, unbound baryons constitute the dominating source of accretion. But at the latest when the density of the virialized baryon component attains values comparable to the {\it cosmological average} density---which, given the $z$-dependences of Eq.~(\ref{cs}) and Eq.~(\ref{vbulk}), appears unavoidable for $z\lesssim {\mathcal O}(100)$---the accretion is dominated by this halo-bound component.
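The virial-velocity estimates quoted above follow from the $M_{\rm halo}^{1/3}$ scaling alone; a minimal sketch (ours) of the arithmetic:

```python
def v_vir(M_halo):
    """Virial velocity in km/s, scaling as M_halo^(1/3) from the Milky Way
    normalization v_vir ~ 1e-3 c at M_halo ~ 1e12 M_sun (order of magnitude only)."""
    return 300.0 * (M_halo / 1e12)**(1.0 / 3.0)

print(v_vir(1e3))   # ~0.3 km/s
print(v_vir(1e6))   # ~3 km/s
```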
After these preliminary considerations, we are ready to discuss disk formation. The basic criterion used to assess if a disk forms is to estimate the angular momentum of the material at the accretion distance: if this is sufficient to keep the matter in Keplerian rotation at a distance $r_{\rm D}\gg 3\, r_{\rm S}$ (i.e. well beyond the innermost stable orbit, where we introduced the Schwarzschild radius $r_{\rm S}\equiv 2\,G\,M$) at least for BH luminosity purposes, dominated by the region close to the BH, a disk will form~\cite{1976ApJ...204..555S,1977ApJ...216..578I, Ruffert:1999ch, 2002MNRAS.334..553A}. To build up angular momentum, the material accreted at the Hoyle-Bondi distance along different directions must have appreciable velocity or density differences. The angular momentum per unit mass of the accreted gas scales like
\begin{equation}
l\simeq\left(\frac{\delta\rho}{\rho}+\frac{\delta v}{v_{\rm eff}}\right)v_{\rm eff}r_{\rm HB}\,,\label{lscale}
\end{equation}
where $\delta\rho/\rho$ represents the typical inhomogeneity at the scale $r_{\rm HB}$ in the direction {\it orthogonal} to the relative PBH-baryon motion, and $\delta v/v_{\rm eff}$
the analogous typical velocity gradient at the same scale (see e.g.~\cite{2002MNRAS.334..553A}).
The above quantity can be compared to the specific angular momentum of a Keplerian orbit,
\begin{equation}
l_{\rm D} \simeq r_{\rm D}v_{\rm Kep}(r_{\rm D}) \simeq \sqrt{GMr_{\rm D}}\,,\label{kepdisk}
\end{equation}
to extract $r_D$. For instance, in the case of inhomogeneities, if we adopt the effective velocity at the RHS of Eq.~(\ref{veff}) as a benchmark, as in Ref.~\cite{Ali-Haimoud:2016mbv}, we obtain:
\begin{equation}
\frac{r_D}{r_S}\simeq \left(\frac{\delta \rho}{\rho}\right)^2 \frac{c^2}{2\,v_{\rm eff}^2}\simeq 2.5\times10^8\left(\frac{\delta \rho}{\rho}\right)^2\left(\frac{1000}{1+z}\,\right)^{3/2}\,,\label{deltarhocond}
\end{equation}
so that, already soon after recombination, gradients $\delta \rho/\rho \gg 10^{-4}$ in the baryon flow on the scale of the Bondi radius are sufficient for a disk to form. We find this to be largely satisfied already at $z\sim 1000$ because of the ``granular'' potential due to neighboring PBHs.
Equivalently, given the similar way the fractional fluctuation of velocity and density enter Eq.~(\ref{lscale}), the condition for a disk to form can be written as a lower limit on the absolute value of the velocity perturbation amounting to
\begin{equation}
\delta v\gg 1.5\, \left(\frac{1+z}{1000}\,\right)^{3/2}\,{\rm m/s}\,.\label{diskcriterion3}
\end{equation}
At least the component of virialized baryons, whose velocity dispersion is $\gtrsim 0.1$ km/s as argued above, should easily match this criterion.
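Both disk-formation thresholds, Eq.~(\ref{deltarhocond}) and Eq.~(\ref{diskcriterion3}), follow from requiring $l > l_{\rm D}$ at $r_{\rm D} = 3\,r_{\rm S}$; the numerical coefficients can be checked as follows (our sketch, using the fits of Eqs.~(\ref{cs}) and (\ref{vbulk})):

```python
import math

c_light = 3.0e5   # speed of light, km/s

def v_eff_sq(z):
    """v_eff^2 in km^2/s^2 from the RHS of Eq. (veff), i.e. c_s * sqrt(<v_L^2>),
    using the fits of Eqs. (cs) and (vbulk)."""
    return 6.0 * math.sqrt((1.0 + z) / 1000.0) * 30.0 * (1.0 + z) / 1000.0

def rD_over_rS(z, drho_over_rho):
    """Eq. (deltarhocond): Keplerian disk radius in units of the Schwarzschild radius."""
    return drho_over_rho**2 * c_light**2 / (2.0 * v_eff_sq(z))

def dv_threshold(z):
    """Eq. (diskcriterion3): minimum velocity perturbation (in m/s) such that
    r_D >> 3 r_S, i.e. delta_v > sqrt(6) v_eff^2 / c."""
    return math.sqrt(6.0) * v_eff_sq(z) / c_light * 1e3   # km/s -> m/s

print(rD_over_rS(999.0, 1.0))   # ~2.5e8, the coefficient quoted in the text
print(dv_threshold(999.0))      # ~1.5 m/s
```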
But even for an ``ideal'', free-streaming homogeneous gas moving with a bulk velocity comparable to Eq.~(\ref{vbulk}) and without any velocity dispersion, the disk formation criterion is likely satisfied once the
non-linear PBH motions at small scales are taken into account. Since this is in general a complicated problem, we cannot provide a cogent proof, but the following argument makes us confident that this is a likely circumstance. In general, the BH motion within its halo at very small scales is influenced by its nearest neighbors. The simplest scenario (see for instance~\cite{Sasaki:2016jop}) amenable to analytical estimates is that a sizable fraction of PBH forms binary systems with their nearest partner, under the tidal effect of the next-to-nearest.
According to~\cite{Sasaki:2016jop}, for PBH constituting a sizable fraction of the DM, it is enough for their distance to be only slightly below the average distance at matter-radiation equality for a binary to form. Under the assumption of an isotropic PBH distribution and monochromatic PBH mass function of mass $M$, this distance can be estimated as
\begin{equation}
d \sim \left(\frac{3M}{4\pi\rho_{\rm PBH}}\right)^{1/3}=\frac{1}{1+z_{\rm eq}}\left(\frac{2G M}{H_0^2 f_{\rm PBH}\Omega_{\rm DM}}\right)^{1/3}\,,
\end{equation}
i.e.
\begin{equation}
d\sim 0.05\,{\rm pc}\left(\frac{M}{f_{\rm PBH}\,M_\odot}\right)^{1/3}\frac{3400}{1+z_{\rm eq}}\,.\label{dfirst}
\end{equation}
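The $0.05$ pc prefactor of Eq.~(\ref{dfirst}) can be checked directly from the definition of $d$ (our sketch, with Planck-like $\Omega_{\rm DM}h^2 \simeq 0.12$ and $z_{\rm eq}\simeq 3400$):

```python
import math

M_sun = 1.989e30   # kg
pc = 3.086e16      # m

def pbh_separation(M, f_pbh=1.0, z_eq=3400.0, Omega_dm_h2=0.12):
    """Mean PBH separation d = (3 M / (4 pi rho_PBH))^(1/3) at matter-radiation
    equality, in pc, for a monochromatic population of mass M (in M_sun)."""
    rho_crit_over_h2 = 1.878e-26   # critical density / h^2, kg/m^3
    rho_pbh = f_pbh * Omega_dm_h2 * rho_crit_over_h2 * (1.0 + z_eq)**3
    return (3.0 * M * M_sun / (4.0 * math.pi * rho_pbh))**(1.0 / 3.0) / pc

print(pbh_separation(1.0))   # ~0.05 pc, as in Eq. (dfirst)
```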
If bound, the two PBH (each of mass $M$) orbit around the common center of mass on an elliptical orbit whose major semi-axis is $a$ with the Keplerian angular velocity
\begin{equation}
\omega = \sqrt{\frac{2\,G\,M}{a^3}}\,.\label{omegaorbit}
\end{equation}
We conservatively assume $a= d/2$ for a quasi-circular orbit, although for
the very elongated orbits usually predicted for PBH a value $a=d/4$ is closer to reality. Note that the orbital size of the order of Eq.~(\ref{dfirst}) is typically larger than (or at most comparable to) the Bondi-Hoyle radius, so that to a good approximation the gas---assumed to have a bulk motion with respect to the PBH pair center of mass---accretes around a single PBH, which is however rotating with respect to it.
In the PBH rest-frame, Eq.~(\ref{lscale}) is simply replaced by
\begin{equation}
l\simeq \omega\,r_{\rm {HB}}^2\,,
\end{equation}
or, equivalently, one can apply Eq.~(\ref{diskcriterion3}) with $\delta v= \omega\,r_{\rm HB}$.
If we adopt the effective velocity at the RHS of Eq.~(\ref{veff}), this leads to the disk formation condition ($z\lesssim 1000$):
\begin{equation}
f_{\rm PBH}^{1/2}\frac{M}{M_\odot}\gg \left(\frac{1+z}{730}\right)^{3} \,.
\end{equation}
Whenever $M\gtrsim M_\odot$ and PBH constitute a sizable fraction of the DM, this is satisfied at the epoch of interest for CMB bounds.
In fact, it has been shown in Refs.~\cite{2017JCAP...03..043P,Slatyer:2016qyl} that most of the constraining power of CMB anisotropies on exotic energy injection {\em does not} come from redshifts $z\gtrsim 1000$, but rather from around a typical redshift of $\sim 300$ for an energy injection rate scaling as $\propto(1+z)^3$. In the problem at hand, the constraining power should be further skewed towards lower redshifts, given the growth of the signal at smaller $z$ due to the virializing component.
We believe that these examples show that disk formation relatively early after recombination is a rather plausible scenario; it is the assumption of spherical accretion that would instead require physical justification.
Note that we have improved upon the earlier discussion of this point in Ref.~\cite{Ricotti:2007au} by taking into account the essential ingredient that stellar mass PBH are clustered in non-linear structures at small scales and early times, greatly differing from WIMPs in that respect.
In the following, we shall assume that the disk forms at all relevant epochs for setting CMB bounds, and deduce the consequences of this Ansatz. In the conclusions, we will comment on the margins for improvements
over the current treatment.
\subsection{Luminosity}\label{Lumin}
In addition to $\dot{M}$, the second crucial quantity for accretion luminosity is the radiative efficiency factor $\epsilon$, which simply relates the {\em accretion luminosity} $L_{\rm acc}$ to the accretion rate in the following way:
\begin{equation}
L_{\rm acc}=\epsilon\dot{M}.
\end{equation}
The radiative efficiency is itself tightly correlated with the accretion geometry and thus the accretion rate, since it directly depends on the temperature, density and optical thickness of the accretion region. Hence, a coherent analysis determines both parameters $\lambda$ and $\epsilon$ jointly. In practice, no complete, first-principle theory exists, although a number of models have been developed to compute $L_{\rm acc}$ (which is the main observable in BH physics) under different assumptions and approximations. A typical fiducial value is $\epsilon=0.1$, to be justified below. A useful benchmark upper limit to $L_{\rm acc}$
is the so-called Eddington luminosity, $L_E=4\pi G M m_p/\sigma_T=1.26\times 10^{38}\,(M/M_\odot)\,$erg/s, which is the luminosity at which electromagnetic radiation pressure (entering via the Thomson cross section $\sigma_T$) balances the inward gravitational force in a hydrogen gas, preventing larger accretion, unless special conditions are realized. In practice, for the parameters of cosmological interest, it turns out that we will always be below $L_E$.
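The quoted Eddington normalization is a one-line check (restoring the factor of $c$ that natural units hide):

```python
import math

G = 6.674e-11         # m^3 kg^-1 s^-2
m_p = 1.673e-27       # kg
M_sun = 1.989e30      # kg
c = 2.998e8           # m/s
sigma_T = 6.652e-29   # Thomson cross section, m^2

def L_eddington(M):
    """Eddington luminosity L_E = 4 pi G M m_p c / sigma_T, in erg/s,
    for a BH of mass M in solar masses."""
    L_watt = 4.0 * math.pi * G * (M * M_sun) * m_p * c / sigma_T
    return L_watt * 1e7   # W -> erg/s

print(L_eddington(1.0))   # ~1.26e38 erg/s
```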
The simplest and most complete theoretical treatment applies to spherical accretion, going back to Shapiro in Refs.~\cite{1973ApJ...180..531S,1973ApJ...185...69S} for the case of non-rotating BHs and Ref.~\cite{1974ApJ...189..343S} for rotating (Kerr) BHs, accounting for relativistic effects. Since we have argued that this case is unlikely to apply to the cosmological context of interest, we will not review it here, but refer the reader for instance to Ref.~\cite{Ali-Haimoud:2016mbv} for a recent and detailed treatment. We will only use this case for comparison purposes, and for those comparisons we follow the equations in Ref.~\cite{Ali-Haimoud:2016mbv}.
For {\it moderate or low disk} accretion rate, which is the case of interest here, there are two main models:
If the radiative cooling of the gas is {\it efficient}, a geometrically thin disk forms, which radiates very efficiently. This is the ``classical'' disk solution obtained almost half a century ago by Shakura and Sunyaev~\cite{Shakura:1972te}. In this case, the maximal energy per unit mass available is uniquely determined by the binding energy at the innermost stable orbit. This can be computed accurately in General Relativity, yielding $\epsilon$ from 0.06 to 0.4 when going from a Schwarzschild to a maximally rotating Kerr BH. This range, which justifies the benchmark value $\epsilon=0.1$ mentioned above, is often an upper limit to the radiative efficiency actually inferred from BH observations. Also note that, since the disk can efficiently emit radiation, the temperatures characterizing the disk emission are relatively low, below a few hundreds of keV.
If the radiative cooling of the gas is {\it inefficient}, then hot and thick/inflated disks (or tori) form, with advection and/or convective motions dominating the gas dynamics and inefficient equilibration of the ion and electron temperatures, the former being much higher and easily reaching tens of MeV. This regime is widely (albeit with a little abuse of notation) known under the acronym ADAF, ``advection-dominated accretion flow'' (see~\cite{Yuan:2014gma} for a review).
It was discovered in the pioneering articles~\cite{Ichimaru:1977uf} and later \cite{Rees:1982pe}, but has been extensively studied only after its ``rediscovery'' and the 1D self-similar analytical treatment of Ref.~\cite{Narayan:1994xi}.
It is worth noting that in the ADAF solution, the viscosity $\alpha$ plays a fundamental role in accretion:
Indeed the viscously liberated energy is not radiated and dissipated away, but instead is conveyed into the optically thick gas towards the center.
As a consequence, the accretion rate is typically diminished by an order of magnitude with respect to the Bondi rate with $\lambda =1$ (see~\cite{Narayan:2002ss} for a short pedagogical overview). In practice, $\alpha$ is degenerate with the previously introduced parameter $\lambda$, so that one might roughly capture this effect by assuming as benchmark $\lambda=0.1$. In ``classical'' ADAF models, the efficiency scales roughly linearly with $\dot{M}$, attaining (and stabilizing at) a value of the order of 0.1 only for a critical accretion which is about $0.1\,L_E$. Overall, this class of models provides a moderately satisfactory description (at least for $\alpha\lesssim 0.1$) of ``median'' X-ray observations of nuclear regions of supermassive black holes, see e.g.~\cite{Pellegrini:2005pi} (in particular the lower dashed curve in Fig. 3).
\begin{figure}
\centering
\includegraphics[scale=0.38]{Fig1a.pdf}
\includegraphics[scale=0.38]{Fig1b.pdf}
\caption{{\em Top panel:} The dimensionless accretion rate $\dot{m}$ as a function of redshift for different accretion modelings and PBH masses. Our benchmark model corresponds to the result of simulations supported by observations. {\em Bottom panel:} The dimensionless luminosity $l$ as a function of redshift for different accretion modelings. The benchmark model corresponds to $\delta = 0.1$, while the low- and high-luminosity scenarios correspond to $\delta = 10^{-3}$ and $\delta = 0.5$, respectively. \label{fig:PBHAccretion}}
\end{figure}
A further refinement takes into account that gas outflows and jets typically accompany this regime, so that the accretion rate becomes in general a function of radius~\cite{Blandford:1998qn}. We will still normalize the (diminished) accretion rate responsible for the bulk of the luminosity to the one at the Bondi radius.
For a specific example, we rely on some recent numerical solutions~\cite{2012MNRAS.427.1580X} which suggest: i) On the one hand, a more significant role of outflows, so that only $\sim 1\%$ of the accretion rate at the Bondi radius is ultimately accreted in the inner region most relevant for the luminosity of the disk. We shall model that by benchmarking $\lambda=0.01$. ii) On the other hand, an increase of the
fraction, $\delta$, of the ion energy shared by electrons. Typically, in classical ADAF models, such a fraction is considered to be very small, $\delta \ll 1$.
A greater efficiency $\delta$ implies a corresponding higher efficiency $\epsilon$, somewhat intermediate between the thin disk and the classical ADAF solution, also scaling with a milder power of the mass accretion ($\epsilon \propto \dot{M}^{0.7}$) at low accretion rates. In Ref.~\cite{2012MNRAS.427.1580X}, suitable fitting formulae have been provided, which we rely upon in the following. In particular, we adopt the parameterization in Eq.~(11), with parameters taken from Tab.~1 for the ADAF accretion rate regime.
In Fig.~\ref{fig:PBHAccretion}, we compare the spherical case with $v_{\rm eff} = \sqrt{c_{s,\infty} \langle v_L\rangle^{1/2}}$ to our benchmark $\delta = 0.1$, as well as a more optimistic $\delta = 0.5$ and a more pessimistic\footnote{It is worth noting that such a low value is reported in Ref.~\cite{2012MNRAS.427.1580X} more for historical reasons, being associated with the early analytical solutions of Ref.~\cite{Narayan:1994xi} and thus an old benchmark, than because of theoretical or observational arguments related e.g. to Sgr $A^*$: The authors of Ref.~\cite{2012MNRAS.427.1580X} make clear that all evidence points to a higher range for $\delta$, with $\delta=0.1$ being on the {\it conservative} side, and any $\delta \lesssim 0.3$ is in agreement with data from Sgr $A^*$~\cite{Narayan:2002ss}.} $\delta = 10^{-3}$: the accretion rate (top panel) is reduced when a disk forms (independently of $\delta$), but the luminosity (bottom panel) is enhanced. Since in the redshift range of interest (blue band in the bottom panel of Fig.~\ref{fig:PBHAccretion}, according to~\cite{2017JCAP...03..043P,Slatyer:2016qyl}) the latter is enhanced despite the reduction of the former (whatever the value of $\delta$), we expect the CMB bound to improve appreciably in our more realistic disk accretion scenario.
\section{Computing the CMB bound}\label{CMBbound}
The total energy injection rate per unit volume is:
\begin{equation}\label{eq:dQdt}
\frac{\mathrm{d} E}{\mathrm{d} V \mathrm{d} t}=L_{\rm acc}n_{\rm pbh}=L_{\rm acc}f_{\rm pbh}\frac{\rho_{\rm DM}}{M}\,.
\end{equation}
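A minimal sketch (ours) of Eq.~(\ref{eq:dQdt}); the numbers passed in the example below are purely illustrative, not benchmark values from the analysis:

```python
def injection_rate(L_acc, M, f_pbh, z, Omega_dm_h2=0.12):
    """Eq. (dQdt): dE/(dV dt) = L_acc * n_pbh = L_acc * f_pbh * rho_DM / M.
    L_acc in erg/s, M in M_sun; returns erg s^-1 cm^-3."""
    M_sun_kg = 1.989e30
    rho_dm = Omega_dm_h2 * 1.878e-26 * (1.0 + z)**3   # mean DM density, kg/m^3
    n_pbh = f_pbh * rho_dm / (M * M_sun_kg)           # PBH number density, m^-3
    return L_acc * n_pbh / 1e6                        # m^-3 -> cm^-3

# Illustrative call: 100 M_sun PBH, each radiating 1e-3 L_E, f_pbh = 1e-2, z = 300
print(injection_rate(1.26e35, 100.0, 1e-2, 300.0))
```

The rate scales as $(1+z)^3$ through $\rho_{\rm DM}$, linearly in $f_{\rm pbh}$, and inversely in $M$ at fixed luminosity per object.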
However, not all radiation is equally effective: to compute the impact on the CMB we need to quantify what amount of this injected energy is deposited into the medium, either through heating, ionization or excitation of the atoms. The modifications of the free electron fraction $x_{\rm e}$ are eventually responsible for the CMB bound. For a given energy differential luminosity spectrum $L_\omega$, the key information is encoded in the {\em energy deposition functions per channel} $f_c(z,x_{\rm e})$ by means of a convolution with the transfer functions $T_c(z',z,E)$ (which we take from Ref. \cite{Slatyer15-2}) according to:
\begin{eqnarray}
f_c(z,x_{\rm e}) & \equiv & \frac{\mathrm{d} E/(\mathrm{d} V \mathrm{d} t)\big|_{{\rm dep},c}}{\mathrm{d} E/(\mathrm{d} V\mathrm{d} t)\big|_{\rm inj}} \label{fzexpr}\\
& = & H(z)\frac{\int \frac{\mathrm{d} \ln(1+z')}{H(z')}\int T_c(z',z,\omega) L_\omega \mathrm{d} \omega}
{\int L_\omega \mathrm{d} \omega}\,.\nonumber
\end{eqnarray}
The only ingredient left is thus the spectrum of the radiation emitted via BH accretion. Note that it is only the shape that enters Eq.~(\ref{fzexpr}), which is indeed an efficiency function, while the overall normalization was discussed in Sec.~\ref{Lumin}. In the spherical accretion scenario (see~\cite{1973ApJ...180..531S,1973ApJ...185...69S,Ali-Haimoud:2016mbv}) the spectrum is dominated by Bremsstrahlung emission, with a mildly decreasing frequency dependence over several decades and a cutoff given by the temperature of the medium near the Schwarzschild radius $T_s$
\begin{equation}\label{eq:L_nu}
L_{\omega}\propto \omega^{-a}\exp(-\omega/T_s)\,,
\end{equation}
where $T_s\sim {\cal O}(m_{\rm e})$ (we used 200 keV in the following for definiteness) and $|a|\lesssim 0.5$ ($a=0$ was used in ~\cite{Ali-Haimoud:2016mbv}).
For consistency with our discussion in Sec.~\ref{Lumin}, we base our disk accretion spectra on the numerical results for ADAF models reported in Ref.~\cite{Yuan:2014gma}, Fig. 1. In particular, we adopt
\begin{equation}\label{eq:L_nudisk}
L_{\omega}\propto \Theta(\omega-\omega_{\rm min})\omega^{-a}\exp(-\omega/T_s)\,,
\end{equation}
with a choice for $T_s$ as above. We ignore the dependence of $T_s$ upon accretion rate and PBH mass, which is very mild in the range of concern for us. We consider $a\in[-1.3;-0.7]\,$, with a hardening linear in the log of $\dot{M}$ (as from the caption in that figure) with $-0.7$ corresponding almost to the limiting case of the thick disk. We take $\omega_{\rm min} =(10\,M_\odot/M)^{1/2}\,$eV.
Note that such a cutoff at low energy only affects the normalization at the denominator of Eq.~(\ref{fzexpr}), i.e. the ``useful'' photon fraction of the bolometric luminosity, normalized as described in Sec.~\ref{Lumin}. On the other hand, the cutoff at the numerator in Eq.~(\ref{fzexpr}) is in principle given by the ionization or excitation threshold (depending on the channel), since photons of lower energy do not contribute to the efficiency. In practice, the transfer functions are only directly available for energy injection above 5 keV. However, we can safely extrapolate the transfer functions down to $\sim$ 100 eV: It has been shown in Ref.~\cite{Galli13} that the energy repartition fractions are, to an extremely good approximation, independent of the initial particle energy in the range between $\sim$ 100 eV and a few keV. In fact, this behaviour is at the heart of the ``low energy code'' used by the authors of Ref.~\cite{Slatyer15-2} to compute their transfer functions. Below $\sim$ 100 eV, the power devoted to ionization starts to drop, and we conservatively cut the integral at the numerator at this energy.
We show the $f_c(z,x_{\rm e})$-functions for the spherical and disk accretion scenarios in the top panel of Fig.~\ref{fig:fzxeAccretion} (we chose a mass which we estimate to be among the least efficient at depositing energy).
We incorporated the effects of accretion into a modified version of the {\sf Recfast} module \cite{Seager:1999bc} of the Boltzmann solver {\sf CLASS} \cite{Blas:2011rf}. It is enough for our purpose to work with a modified {\sf Recfast} that has been fudged to reproduce the more accurate calculation from {\sf CosmoRec} \cite{2011MNRAS.412..748C} and {\sf HyRec} \cite{2011PhRvD..83d3513A}.
The impact of accretion on the free-electron fraction for a PBH mass of $500\,M_\odot$ is shown in the bottom panel of Fig.~\ref{fig:fzxeAccretion}: It is much more pronounced in the disk accretion scenario (for which we chose a PBH fraction $\sim$ 300 times smaller!), even if the energy deposition efficiency is lower.
In Fig.~\ref{fig:Cl}, the corresponding impact on the CMB power spectra is illustrated. The effects are typical of an electromagnetic energy injection (for a detailed review see Ref.~\cite{2017JCAP...03..043P}): The delayed recombination slightly shifts acoustic peaks and thus generates small wiggles at high multipoles $\ell$ in the residuals with respect to a standard $\Lambda$CDM scenario. Meanwhile, the increased freeze-out fraction leads to additional Thomson scattering of photons off free electrons along the line-of-sight, which manifests itself as a damping of temperature anisotropies and an enhanced power in the polarization spectrum. Note that in principle the different accretion recipes could be distinguished via a CMB anisotropy analysis. Indeed, each accretion scenario has a peculiar energy injection history which does not lead to a simple difference in the normalization: the actual shape of the power spectra slightly changes. This behavior is also present when changing the PBH mass, but is much less pronounced, albeit still above cosmic variance in the EE spectrum (not shown here to avoid cluttering). Hence, if a signal were found, it is conceivable that some constraints could be put on the PBH mass and (especially) accretion mechanism, but a strong statement would require better characterization of the signal, which goes beyond our present goals.
\begin{figure}
\centering
\includegraphics[scale=0.38]{Fig2a.pdf}
\includegraphics[scale=0.38]{Fig2b.pdf}
\caption{{\em Top panel:} Energy deposition functions computed following Ref.~\cite{Slatyer15-2} in the case of accreting PBH. {\em Bottom panel:} Comparison of the free-electron fractions obtained for a monochromatic population of PBH with mass $500\,M_\odot$, depending on the accretion recipe used. The curve labelled ``standard'' refers to the prediction in a $\Lambda$CDM model whose parameters have been set to the best fit of Planck 2016 likelihoods high-$\ell$ TT,TE,EE + LOWSim \cite{Aghanim:2016yuo}. \label{fig:fzxeAccretion}}
\end{figure}
\begin{figure}
\centering
\includegraphics[scale=0.38]{Fig3a.pdf}
\includegraphics[scale=0.38]{Fig3b.pdf}
\caption{CMB TT (top panel) and EE (bottom panel) power spectra obtained for a monochromatic population of PBH with mass $500\,M_\odot$, depending on the accretion recipe used. \label{fig:Cl}}
\end{figure}
We compute the 95\% CL bounds using data from {\em Planck} high-$\ell$ TT,TE,EE + lensing \cite{Planck15} and a prior on $\tau_{\rm reio}$ \cite{Aghanim:2016yuo}, by running an MCMC with the {\sf MontePython} package \cite{Audren12} associated with {\sf CLASS}. For ten PBH masses log-spaced in the range $[M_{\rm min},1000\, M_\odot]$, we perform a fit to the data with flat priors on the following set of parameters:
$$
\Lambda{\rm CDM}\equiv\{\omega_b,\theta_s,A_s,n_s,\tau_{\rm reio},\omega_{\rm DM}\}+f_{\rm PBH}\,,
$$
with $M_{\rm min}$ fixed by a preliminary run where $f_{\rm PBH}$ has been set to one, and the PBH mass $M_{\rm PBH}$ has been let free to vary (with a flat prior as well)\footnote{We have checked that making use of a logarithmic prior {\it improves} the bound by roughly $50\%$. We thus conservatively stick to the linear prior, which also eases comparison to previous works.}.
We use a Cholesky decomposition to handle the large number of nuisance parameters in the Planck likelihood \cite{Lewis:2013hha}. We consider chains to have converged when the Gelman-Rubin criterion \cite{Gelman:1992zz} gives $R-1<0.01$.
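For concreteness, the convergence diagnostic can be sketched in a few lines (a minimal numpy implementation of the potential scale reduction factor $R$; the exact estimator implemented in {\sf MontePython} may differ in details):

```python
import numpy as np

def gelman_rubin(chains):
    """Potential scale reduction factor R for m chains of length n."""
    chains = np.asarray(chains)                   # shape (m, n)
    m, n = chains.shape
    W = chains.var(axis=1, ddof=1).mean()         # mean within-chain variance
    B = n * chains.mean(axis=1).var(ddof=1)       # between-chain variance
    var_plus = (n - 1) / n * W + B / n            # pooled variance estimate
    return np.sqrt(var_plus / W)

rng = np.random.default_rng(0)
# Four well-mixed chains drawn from the same distribution: R should be close to 1
chains = rng.normal(size=(4, 10000))
print(gelman_rubin(chains))
```

Chains whose means disagree (e.g. shifted copies of one another) drive $R$ well above 1, which is what the $R-1<0.01$ threshold guards against.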
First, to check our code, we run it {\it under the same hypotheses as}~\cite{Ali-Haimoud:2016mbv}
(the conservative, collisional ionization case), finding the constraint $M_{\rm PBH}< 150\,M_\odot$ for $f_{\rm PBH}=1$, as opposed to their $M_{\rm PBH}\lesssim 100\,M_\odot$. We attribute the 50\% degradation of our bound compared to Ref.~\cite{Ali-Haimoud:2016mbv} to our more refined energy deposition treatment. We checked that an agreement at a similar level with Refs.~\cite{Ricotti:2007jk,Horowitz:2016lib} is obtained if we implement their prescriptions, but since some equations in Ref.~\cite{Ricotti:2007jk} (re-used in Ref.~\cite{Horowitz:2016lib}) have been shown to be erroneous~\cite{Ali-Haimoud:2016mbv}, we do not discuss them further.
Our fiducial conservative constraints (at 95\% C.L.) are represented in Fig.~\ref{fig:monochromatic} by the blue-shaded region in the plane $(M_{\rm PBH},f_{\rm PBH})$: We exclude PBH with masses above $\sim 2\, M_\odot$ as the dominant form of DM. The constraints can be roughly cast in the form:
\begin{equation}
f_{\rm PBH} < \bigg(\frac{2\,M_\odot}{M}\bigg)^{1.6}\bigg(\frac{0.01}{\lambda}\bigg)^{1.6}\,.\label{fidbound}
\end{equation}
This is two orders of magnitude better than the spherical accretion scenario, and it improves significantly over the radio and X-ray constraints from Ref.~\cite{Gaggero:2016dpq}, without the dependence on the DM halo profile that affects those bounds.
Lensing constraints are nominally better only at $M\lesssim 6\,M_\odot$. Note also the importance of the relative velocity between PBH and accreting baryons: If instead of Eq.~(\ref{veff}) we were to adopt $v_{\rm eff}\simeq c_{s,\infty}$---representative of a case where a density of baryons comparable to the cosmological one is captured by halos at high redshift---the bound would improve by a further order of magnitude, to $M\lesssim 0.2\, M_\odot$ (light-red shaded region in Fig.~\ref{fig:monochromatic}). This is also true, by the way, for the spherical accretion scenario, where---all other conditions being the same---adopting $v_{\rm eff}\simeq c_{s, \infty}$ would imply $M\lesssim 15\,M_\odot$, to be compared to the $M\lesssim 150\,M_\odot$ previously quoted. The ``known'' uncertainties in disk accretion physics are probably smaller: When varying---at fixed accretion eigenvalue $\lambda$---the electron heating parameter $\delta$ within the range described in section \ref{sec:disks}, for the $30\,M_\odot$ benchmark case reported in the bottom panel of Fig.~\ref{fig:PBHAccretion}, the radiative efficiency $\epsilon$ varies by a factor $\sim 3$, which reflects correspondingly on the constraints. To help the reader grasp the dependence of the bound upon different parameters,
we also derive a parametric bound, obtained from a run where we assumed that $v_{\rm eff}$ is constant over time (and the accretion rate is always small, i.e. $\dot{M}_{\rm B} < 10^{-3}L_{\rm Ed}$), scaling as
\begin{equation}
f_{\rm PBH} < \bigg(\frac{4\,M_\odot}{M}\bigg)^{1.6}\bigg(\frac{v_{\rm eff}}{10~{\rm km/s}}\bigg)^{4.8}\bigg(\frac{0.01}{\lambda}\bigg)^{1.6}\,.\label{parambound}
\end{equation}
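For convenience, the two scaling relations of Eqs.~(\ref{fidbound}) and (\ref{parambound}) can be evaluated numerically; a minimal sketch (masses in $M_\odot$, $v_{\rm eff}$ in km/s; the relations are only meaningful within the regimes of validity discussed in the text):

```python
def f_pbh_fiducial(M, lam=0.01):
    """Fiducial bound: f_PBH < (2 Msun / M)^1.6 * (0.01 / lambda)^1.6."""
    return (2.0 / M) ** 1.6 * (0.01 / lam) ** 1.6

def f_pbh_parametric(M, v_eff=10.0, lam=0.01):
    """Parametric bound with explicit v_eff scaling (v_eff in km/s)."""
    return (4.0 / M) ** 1.6 * (v_eff / 10.0) ** 4.8 * (0.01 / lam) ** 1.6

# At M = 2 Msun the fiducial bound crosses f_PBH = 1:
# heavier PBH cannot make up the totality of the DM.
print(f_pbh_fiducial(2.0))   # 1.0
```

The strong $v_{\rm eff}^{4.8}$ dependence makes explicit why a lower relative velocity tightens the constraint so dramatically.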
\vspace{-0.9cm}
\begin{figure}[!t]
\hspace{-2.3cm}
\vspace{-1.1cm}
\includegraphics[scale=0.30]{Fig4.pdf}
\caption{Constraints on accreting PBH as DM. Our constraints, derived from a disk accretion history (blue region: Eq.~(\ref{veff}); light-red region: $v_{\rm eff}\simeq c_{s,\infty}$), are compared to: i) the CMB constraints obtained assuming that spherical accretion holds, as in Ref.~\cite{Ali-Haimoud:2016mbv} (red full line); ii) the non-observation of micro-lensing events in the Large Magellanic Cloud, as derived by the EROS-2 collaboration \cite{Tisserand:2006zx} (black dot-dashed line); iii) the non-observation of disk-accreting PBH at the Galactic Center in the radio band, extrapolated from Ref.~\cite{Gaggero:2016dpq} (green long-dashed line); iv) constraints from the disruption of the star cluster in Eridanus II \cite{Green:2016xgy} (blue short-dashed line, see text for details).\label{fig:monochromatic}}
\end{figure}
\vspace{+0.4cm}
\begin{figure}[!hb]
\hspace{-0.5cm}
\includegraphics[scale=0.29]{Fig5.pdf}
\caption{Constraints on the width $\sigma_{\rm pbh}$ of a broad mass spectrum of accreting PBH, as given in Eq.~(\ref{eq:broad_spectrum}), as a function of the mean mass $\mu_{\rm PBH}$, assuming that they represent 100\% of the DM. For comparison, the dashed blue line represents our calculation of the best constraint from the dynamical heating of the star cluster in the faint dwarf Eridanus II, following the method and parameters of Ref.~\cite{Green:2016xgy}. }\label{fig:broad_mass}
\end{figure}
We have also extended the constraints to a broad log-normal mass distribution of the type
\begin{equation}\label{eq:broad_spectrum}
M\frac{\mathrm{d} n}{\mathrm{d} M}=\frac{1}{\sqrt{2\pi}\,\sigma_{\rm pbh} M}\exp\bigg(-\frac{\log_{10}^2(M/\mu_{\rm PBH})}{2\sigma_{\rm pbh}^2}\bigg)\,.
\end{equation}
Here $\mu_{\rm PBH}$ denotes the mean mass and $\sigma_{\rm pbh}$ the width. Our constraints in the plane $(\sigma_{\rm pbh}, \mu_{\rm PBH})$, assuming that PBH represent 100\% of the DM, are shown in Fig.~\ref{fig:broad_mass}. It is clear that the bound on the median PBH mass is robust and can only get more stringent if a broad, log-normal mass function is considered, confirming the overall trend discussed in Ref.~\cite{Carr:2017jsz}. However, we estimate that the tightening of the constraints for a broad mass function is more modest than for some dynamical probes. This is illustrated by the blue dashed line in Fig.~\ref{fig:broad_mass}, which
is the result of our calculation of the constraints from the disruption of the star cluster in Eridanus II,
following the method and parameters of Ref.~\cite{Green:2016xgy} (cluster mass of $3000 \ M_\odot$, timescale of $12$ Gyr, initial and final radius of $2$ pc and $13$ pc respectively and a cored DM density of $\rho_{\rm DM} = 1 M_\odot {\rm pc}^{-3}$).
\section{Conclusions}\label{sec:conclu}
The intriguing possibility that DM is made of PBH is nowadays a subject of intense work in light of the recent gravitational wave detections of merging BH with masses of tens of $M_\odot$. However, high-mass PBH are known to accrete matter, a process that leads to the emission of high-energy radiation able to perturb the thermal and ionization history of the universe, eventually jeopardizing the success of CMB anisotropy studies. In this computation, the geometry of the accretion, namely whether it is spherical or associated with the formation of a disk, is a major ingredient. Until now, studies have focused on the case of spherical accretion. In this work, we argued that, based on a standard criterion for disk formation, all plausible estimates suggest that a disk forms {\em soon after recombination}. This is essentially due to the fact that stellar-mass PBH are in a non-linear regime (i.e. clustered in halos of bound objects, from binaries to clumps of thousands of PBH) at scales encompassing the Bondi radius already {\em before recombination}. This feature was ignored in the pioneering article~\cite{Ricotti:2007au}, which assumed that massive PBH cluster like WIMPs and deduced the adequacy of the spherical accretion approximation, eventually adopted by all subsequent studies.
Then, we have computed the effects of accretion around PBH onto the CMB power spectra, making use of state-of-the-art tools to deal with energy deposition in the primordial gas. Our 95\% CL fiducial bounds preclude PBH with a monochromatic distribution of masses above $\sim 2\, M_\odot$ from accounting for the totality of the DM, the bound on $f_{\rm PBH}$ improving roughly like $M^{1.6}$ with the mass. All in all, the formation of disks improves over the spherical approximation of Ref.~\cite{Ali-Haimoud:2016mbv} by two orders of magnitude. We also checked that the constraints derived for the monochromatic mass function apply to the average mass value of a broad, log-normal mass distribution too, actually becoming more stringent if the distribution is broader than a decade.
A realistic assessment of ``known'' astrophysical uncertainties, like for instance the electron share of the energy in ADAF models, suggests that our quantitative results can only vary within a factor of a few, not enough to change our conclusions qualitatively. Nonetheless, we believe that our constraints are conservative rather than optimistic. In particular, we assumed accretion from an environment at the {\it average cosmological density}: This is less and less true when PBH halos gradually capture baryonic gas in their potential wells. Merely capturing baryons from a pool of density comparable to the cosmological one, but bound to PBH halos, would reduce the relative PBH-baryon velocity and improve the bounds to $\sim 0.2\, M_\odot$. Once baryons accumulate well above the cosmological average, the accretion rate $\dot{M}$ from this bound component grows correspondingly, and the constraining power grows more than linearly with it. It would be interesting to reconsider the CMB bounds on stellar-mass PBH once a better understanding of the halo assembly history in these scenarios is achieved, a task probably requiring dedicated hydrodynamical simulations.
Together with other constraints discussed recently (see for instance~\cite{Koushiappas:2017chw,Brandt:2016aco,2014ApJ...790..159M,Green:2016xgy,Gaggero:2016dpq,Inoue:2017csr}), our bounds suggest that PBH of stellar masses cannot account for an appreciable fraction of the DM. It remains to be seen if the small $f_{\rm PBH}$ allowed by present constraints may still be sufficient to explain LIGO observations in terms of PBH and, in that case, to find signatures of their primordial nature, possibly peculiar to some specific production mechanism: Such signatures become all the more crucial since both the PBH mass (of stellar size) and their small DM fraction (for instance, in a halo of the Milky Way size about 0.1\% of the DM should be made of astrophysical BH) cannot be easily used as diagnostic tools to discriminate PBH from astrophysical BH.
It is worth noting that, based on the recent study \cite{2017JCAP...03..043P}, we expect that forthcoming CMB polarization experiments (very sensitive to energy injection) and 21 cm experiments~\cite{2013MNRAS.435.3001T,Gong:2017sie} (the golden channel for searches targeting energy injection during the Dark Ages) will be able to provide further insight into PBH scenarios, including stellar-mass ones, even if the possibility that they contribute a large fraction of the DM has faded away.
\begin{acknowledgments}
This work is partly supported by the Alexander von Humboldt Foundation (P.S.), JSPS KAKENHI Grant Numbers 26247042,
JP15H05889, JP16H0877, JP17H01131 (K.K.), the Toshiko Yuasa France-Japan Particle Physics Laboratory ``TYL-FJPPL'' (P.S. and K.K.),
as well as ``Investissements d'avenir, Labex ENIGMASS'' of the French ANR (V.P.). The authors warmly thank Yacine Ali-Ha\"imoud, Juan Garc\'ia-Bellido, Mark Kamionkowski, Nagisa Hiroshima, and Ville Vaskonen for useful comments and discussions, and J. Lesgourgues for discussions and technical help with the {\sf CLASS} implementation.
\end{acknowledgments}
\subsection{ISI Grounding Dataset}
We also evaluate our model on the ISI Language Grounding dataset \cite{bisk2016naacl}, which contains human-annotated instructions describing how to arrange blocks identified by numbers and logos. Although it does not contain variable environment maps as in our dataset, it has a larger action space and vocabulary. The caveat is that the task as posed in the original dataset is not compatible with our model. For a policy to be derived from a value map with the same dimension as the state observation, it is implicitly assumed that there is a single controllable agent, whereas the ISI set allows multiple blocks to be moved. We therefore modify the ISI setup using an oracle to determine which block is given agency during each step. This allows us to retain the linguistic variability of the dataset while overcoming the mismatch in task setup. The states are discretized to a $13 \times 13$ map and the instructions are lemmatized.
Performance on the modified ISI dataset is reported in Table~\ref{tbl:bisk} and representative visualizations are shown in Figure~\ref{fig:bisk}. Our model outperforms both baselines by a greater margin in policy quality than on our own dataset.
Misra et al. \shortcite{misra2017mapping} also use this dataset and report results in part by determining the minimum distance between an agent and a goal during an evaluation lasting $N$ steps. This evaluation metric is therefore dependent on the timeout parameter $N$. Because we discretized the state space so as to be able to represent it as a grid of embeddings, the notion of a single step changes, and a direct comparison limited to $N$ steps is ill-defined.\footnote{When a model is available and the states are not overwhelmingly high-dimensional, policy quality is a useful metric that is independent of this type of parameter. As such, it is our default metric here. However, estimating policy quality for environments substantially larger than those investigated here is a challenge in itself.} Hence, due to modifications in the task setup, we cannot compare directly to the results in Misra et al. \shortcite{misra2017mapping}.
\paragraph{Understanding grounding evaluation}
An interesting finding in our analysis was that the difficulty of the language interpretation task is a function of the stage in task execution (Figure~\ref{fig:bisk}(d)). In the ISI Language Grounding set \cite{bisk2016naacl}, each individual instruction (describing where to move a particular block) is a subgoal in a larger task (such as constructing a circle with all of the blocks). The value maps predicted for subgoals occurring later in a task are more accurate than those occurring early in the task.
It is likely that the language plays a less crucial role in specifying the subgoal position in the final steps of a task.
As shown in Figure~\ref{fig:bisk}(a), it may be possible to narrow down candidate subgoal positions just by looking at a nearly-constructed high-level shape. In contrast, this would not be possible early in a task because most of the blocks will be randomly positioned. This finding is consistent with a result from Branavan et al. \shortcite{branavan2011learning}, who reported that strategy game manuals were useful early in the game but became less essential further into play. It appears to be part of a larger trend that the marginal benefit of language in such grounding tasks can vary predictably between individual instructions.
\subsection{Implementation details}
Our model implementation uses an LSTM with a learnable 15-dimensional embedding layer, 30 hidden units, 8-dimensional embeddings $\phi(s)$, and a $3\times3$ kernel applied to the embeddings, giving a dimension of $72$ for $h_2(x)$. The final CNN has layers of $\{3,6,12,6,3,1\}$ channels, all with $3\times3$ kernels and padding of length 1, such that the output value map prediction is equal in size to the input observation.
For each map, a reward of $3$ is given for reaching the correct goal specified by human annotation and a reward of $-1$ is given for falling in a puddle cell. The only terminal state is when the agent is at the goal. Rewards are discounted by a factor of 0.95. We use Adam optimization~\cite{ba2014adam} for training all models.
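The discounting of rewards described above can be illustrated with a short sketch (the trajectory below is hypothetical, chosen only to exercise the reward scheme of $+3$ for the goal and $-1$ per puddle cell):

```python
def discounted_return(rewards, gamma=0.95):
    """Discounted return G = r_0 + gamma*r_1 + gamma^2*r_2 + ...,
    accumulated backwards over a finished trajectory."""
    g = 0.0
    for r in reversed(rewards):
        g = r + gamma * g
    return g

# e.g. two puddle steps followed by reaching the goal
print(discounted_return([-1.0, -1.0, 3.0]))
```

With the discount factor of 0.95, reaching the goal quickly still yields a positive return even after a few puddle penalties.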
\subsection{Architecture}
Generalization over both environment configurations and text instructions requires a model that meets two desiderata. First, it must have a flexible representation of goals, one which can encode both the local structure and global spatial attributes inherent to natural language instructions. Second, it must be compositional, in order to learn a generalizable representation of the language even though each unique instruction will only be observed with a single map during training. Namely, the learned representation for a given instruction should still be useful even if the objects on a map are rearranged or the layout is changed entirely.
To that end, our model combines the textual instructions with the map in a spatially localized manner, as opposed to prior work which joins goal representations and environment observations via simpler functions like an inner product \cite{schaul2015universal}. While our approach can more effectively learn local relations specified by language, it cannot naturally capture descriptions at the global environment level. To address this problem, we also use the language representation to predict coefficients for a basis set of gradient functions which can be combined to encode global spatial relations.
More formally, inputs to our model (see Figure~\ref{fig:model}) consist of an environment observation $s$ and textual description of a goal $x$. For simplicity, we will assume $s$ to be a $2${\sc{D}} matrix, although the model can easily be extended to other input representations. We first convert $s$ to a $3${\sc{D}} tensor by projecting each cell to a low-dimensional embedding ($\phi$) as a function of the objects contained in that cell. In parallel, the text instruction $x$ is passed through an {\sc{LSTM}} recurrent neural network~\cite{hochreiter1997long} to obtain a continuous vector representation $h(x)$. This vector is then split into \emph{local} and \emph{global} components $h(x) = [h_1(x); h_2(x)]$. The local component, $h_2(x)$, is reshaped into a kernel to perform a convolution operation on the state embedding $\phi(s)$ (similar to Chen et al. \shortcite{chen2015abc}):
\begin{dmath}
z_1 = \psi_1(\phi(s); h_2(x))
\end{dmath}
Meanwhile, the three-element global component $h_1(x)$ is used to form the coefficients for a vertical and horizontal gradient along with a corresponding bias term.\footnote{Note that we are referring to gradient filters here, not the gradient calculated during backpropagation in deep learning.} The gradients, denoted $G_1$ and $G_2$ in Figure~\ref{fig:model}, are matrices of the same dimensionality as the state observation with values increasing down the rows and along the columns, respectively. The axis-aligned gradients are weighted by the elements of $h_1(x)$ and summed to give a final global gradient spanning the entire $2${\sc{D}} space, analogous to how steerable filters can be constructed for any orientation using a small set of basis filters \cite{freeman1991steerable}:
\begin{dmath}
z_2 = h_{1}(x)[1] \cdot G_1 + h_{1}(x)[2] \cdot G_2 + h_{1}(x)[3] \cdot J
\end{dmath}
in which $J$ is the all-ones matrix also of the same dimensionality as the observed map.
Finally, the local and global information maps are concatenated into a single tensor, which is then processed by a convolutional neural network (CNN) with parameters $\theta$ to approximate the generalized value function:
\begin{dmath}
\hat{V}(s,x) = \psi_2([z_1; z_2]; \theta)
\end{dmath}
for every state $s$ in the map.
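A minimal numpy sketch of this global component (the coefficient values stand in for the text-derived $h_1(x)$ and are arbitrary; the unnormalized integer-valued basis maps are an illustrative assumption):

```python
import numpy as np

def global_gradient(h1, shape):
    """Combine the axis-aligned gradient basis maps G1, G2 and the
    all-ones bias map J with coefficients h1 = [c_vert, c_horiz, c_bias]."""
    rows, cols = shape
    G1 = np.tile(np.arange(rows, dtype=float)[:, None], (1, cols))  # grows down the rows
    G2 = np.tile(np.arange(cols, dtype=float)[None, :], (rows, 1))  # grows along the columns
    J = np.ones(shape)                                              # bias term
    return h1[0] * G1 + h1[1] * G2 + h1[2] * J

z2 = global_gradient([1.0, -0.5, 2.0], (4, 4))
print(z2)
```

Varying the three coefficients reorients the resulting plane over the map, which is how a phrase like "top-left corner" can be encoded as a global value bias.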
\begin{algorithm}[t]
\caption{Training Procedure}
\label{alg:training}
\small
\begin{algorithmic}[1]
\State Initialize experience memory $\mathcal{D}$
\State Initialize model parameters $\Theta$
\medmuskip=0mu
\thinmuskip=0mu
\thickmuskip=0mu
\For {$ epoch = 1,M $}
\State Sample instruction $x \in X$ and associated environment $E$
\State Predict value map $\hat{V}(s,x;\Theta) \text{ for all } s \in E$
\State Choose start state $s_0$ randomly
\For {$ t = 1, N $ }
\State Select $a_t = \argmax\limits_{a} \sum\limits_{s} T(s | s_{t-1}, a)\hat{V}(s,x;\Theta)$
\State Observe next state $s_t$ and reward $r_t$
\EndFor
\State Store trajectory $(\bm{s}=s_0,s_1,\ldots, \bm{r}=r_0,r_1,\ldots)$ in $\mathcal{D}$
\For {$j=1,J$}
\State Sample random trajectory $(\bm{s}, \bm{r})$ from $\mathcal{D}$
\State Perform gradient descent step on loss $\mathcal{L}(\theta)$
\EndFor
\EndFor
\end{algorithmic}
\normalsize
\end{algorithm}
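For a deterministic grid transition model, the action-selection step of the procedure reduces to moving toward the neighboring cell with the highest predicted value; a minimal sketch (the four-action grid world and the value map are hypothetical):

```python
import numpy as np

def greedy_action(value_map, pos):
    """Pick the action whose deterministic successor state has the
    highest predicted value on a 4-action grid."""
    moves = {"up": (-1, 0), "down": (1, 0), "left": (0, -1), "right": (0, 1)}
    rows, cols = value_map.shape
    best_a, best_v = None, -np.inf
    for a, (dr, dc) in moves.items():
        r, c = pos[0] + dr, pos[1] + dc
        if 0 <= r < rows and 0 <= c < cols and value_map[r, c] > best_v:
            best_a, best_v = a, value_map[r, c]
    return best_a

# Values increase toward the top-right corner of a 3x3 map
V = np.array([[0.2, 0.5, 1.0],
              [0.1, 0.3, 0.6],
              [0.0, 0.1, 0.3]])
print(greedy_action(V, (1, 1)))   # "right"
```

A stochastic transition model would instead weight each successor value by $T(s \mid s_{t-1}, a)$ before taking the argmax, as in line 10 of the procedure.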
\paragraph{Reinforcement Learning}
Given our model's $\hat{V}(s,x)$ predictions, the resulting policy (Equation~\ref{eq:policy}) can be enacted, giving a continuous trajectory of states $\{s_t,s_{t+1},\ldots\}$ on a single map and their associated rewards $\{r_t,r_{t+1},\ldots\}$ at each timestep $t$. We store entire trajectories (as opposed to state-transition pairs) in a replay memory $\mathcal{D}$, as described in Mnih et al. \shortcite{mnih2015dqn}. The model is trained to produce an accurate value estimate by minimizing the following objective:
\begin{dmath}
\mathcal{L}(\Theta) = \mathrm{E}_{s \sim \mathcal{D}} \left[ \hat{V}(s,x;\Theta) - \left(R(s, x) + \\ \gamma \max_{a} \sum_{s'} T(s' | s, a) \hat{V}(s',x;\Theta^{-}) \right) \right] ^2
\label{eq:loss}
\end{dmath}
where $s$ is a state sampled from $\mathcal{D}$, $\gamma$ is the discount factor, $\Theta$ is the set of parameters of the entire model, and $\Theta^{-}$ is the set of parameters of a target network copied periodically from our model. The complete training procedure is shown in Algorithm~\ref{alg:training}.
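The bootstrapped target inside the expectation can be sketched for a deterministic four-neighbor grid (the values are arbitrary; $V_{\rm target}$ plays the role of the frozen network with parameters $\Theta^{-}$):

```python
import numpy as np

def td_target(V_target, pos, reward, gamma=0.95):
    """One-step TD target r + gamma * max_a V(s') for a deterministic
    4-neighbor grid world."""
    moves = [(-1, 0), (1, 0), (0, -1), (0, 1)]
    rows, cols = V_target.shape
    succ = [V_target[pos[0] + dr, pos[1] + dc]
            for dr, dc in moves
            if 0 <= pos[0] + dr < rows and 0 <= pos[1] + dc < cols]
    return reward + gamma * max(succ)

V = np.array([[0.0, 1.0],
              [0.0, 0.5]])
print(td_target(V, (1, 0), -1.0))
```

The squared difference between $\hat{V}(s,x;\Theta)$ and this target is what the gradient descent step of the training procedure minimizes.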
\section{Introduction}
\input{introduction}
\section{Related Work}
\input{relatedwork}
\section{General Framework}
\input{task}
\section{Model}
\input{model}
\section{Experimental Setup}
\input{experiments}
\input{results}
\input{analysis}
\section{Conclusions}
\input{conclusions}
\section*{Acknowledgement}
\input{acknowledgement}
\section{Results}
We present empirical results on two different datasets: our annotated puddle world and an existing block navigation task~\cite{bisk2016naacl}.
\subsection{Puddle world navigation}
\paragraph{Comparison with the state-of-the-art}
We first investigate the ability of our model to learn solely from environment simulation. Figure~\ref{fig:learning_curves} shows the discounted reward achieved by our model as well as the two baselines for both instruction types. In both experiments, our model is the only one of the three to achieve an average nonnegative reward after convergence (0.88 for local instructions and 0.49 for global instructions), signifying that the baselines do not fully learn how to navigate through these environments.
Following Schaul et al. \shortcite{schaul2015universal}, we also evaluated our model using the metric of \emph{policy quality}. This is defined as the expected discounted reward achieved by following a softmax policy of the value predictions.
Policy quality is normalized such that an optimal policy has a score of 1 and a uniform random policy has a score of 0.
Intuitively, policy quality is the true normalized expectation of score over all maps in the dataset, instructions per map, and start states per map-instruction pair. Our model outperforms both baselines on this metric on the test maps as well (Table~\ref{tbl:rl_eval}). We also note that the performances of the baselines flip with respect to each other compared to the training maps (Figure~\ref{fig:learning_curves}). While the UVFA variant learned a better policy on the train set, it did not generalize to new environments as well as the CNN + LSTM.
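The normalization underlying this metric amounts to a simple affine rescaling (the scores below are hypothetical expected discounted rewards):

```python
def policy_quality(score, random_score, optimal_score):
    """Rescale an expected discounted reward so that the optimal policy
    maps to 1 and a uniform random policy maps to 0."""
    return (score - random_score) / (optimal_score - random_score)

# A policy halfway between random and optimal
print(policy_quality(0.5, 0.0, 1.0))   # 0.5
```

A negative policy quality would indicate a policy worse than uniform random action selection.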
Finally, given the nature of our environments, we can use the predicted value maps to infer a goal location by taking the position of the maximum value. We use the Manhattan distance from this predicted position to the actual goal location as a third metric. The accuracy of our model's goal predictions is more than twice that of the baselines on local references and roughly $45\%$ better on global references.
\section{Introduction}
\label{intro}
Fluid matter exhibits a remarkable tendency to build up patterns and organized
dynamical structures. Different stages can appear in a fluid system as we depart
further from equilibrium \citep{G95,K01}: laminar convection ends up developing
turbulence \citep{B82} that produces different characteristic patterns
\citep{HD99} such as plumes, swirls, eddies, vortices \citep[the
vortex inside Saturn's hexagon is a beautiful example of a stable
atmospheric vortex, see for instance the work by][]{G90}, and eventually,
spatio-temporal chaos \citep*{EMPE00}. Furthermore, extensive
spatio-temporal chaos may give way again to equilibrium-like states at
larger time/length scales \citep{E00}.
Since the seminal works by \cite{B00} and \cite{R16}, the fluid flow
in closed circuits (\textit{convection}), caused by the presence of
temperature inhomogeneities, has likely been one of the most studied
problems in science \citep*[the works by][are good reviews on the
subject]{CH93,BPA00,MWG06}. The phenomenon is well known to be ubiquitous
in nature, including biological systems. But let us describe
it again, at its simplest: we consider a real,
experimental system subject to gravity, where a fluid
layer at rest at temperature $T$ is bounded by two horizontal
surfaces. The upper one (in the sense of gravity) is either free
or in contact with a solid surface. Then, by means of some kind of
temperature source at a higher temperature $T_0>T$, the fluid is heated
from below. The difference $T_0-T$ is gradually increased but the
fluid remains static. However, when a critical temperature gradient is
reached, fluid motion sets in, shaping regular patterns throughout
the fluid volume \citep{B00}. According to classical
theory, the Rayleigh number is the convection control parameter in this kind of
problem. This dimensionless parameter is usually defined as $\Ray\equiv \alpha
\Delta T gh^3/(\kappa\nu)$, where $\alpha\equiv (1/V)(\partial V/\partial T)_p$
is the thermal expansion coefficient ($V$ being the fluid volume and $T$ and $p$
the temperature and hydrostatic pressure, respectively), $\Delta T$ is the
temperature difference between the boundaries, $g$ is the gravitational
acceleration, $h$ is the height of the system, and $\kappa$ and $\nu$ are the
thermal diffusivity and kinematic viscosity transport coefficients, respectively. In fact, linear
theory predicts critical values $\Ray_c = 657.5$ and $\Ray_c = 1708$ for the free-surface
and closed-on-top cases, respectively \citep{MWG06}. The theoretical treatment by \cite{R16}
relied on the work by \cite{B03} \citep[see also the book edited by][for a
more recent review]{MWG06}, who defined the relevant
contributions for this convection in the fluid balance equations. These balance
equations, as it is known, were worked out by C.-L. Navier
and G. G. Stokes several decades before \citep{B67}.
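As a numerical illustration of this control parameter, a minimal sketch (the fluid properties below are assumed order-of-magnitude values for a thin water layer, not taken from the text):

```python
def rayleigh(alpha, dT, g, h, kappa, nu):
    """Dimensionless Rayleigh number Ra = alpha * dT * g * h^3 / (kappa * nu)."""
    return alpha * dT * g * h**3 / (kappa * nu)

# Illustrative values: a 1 cm water layer heated by 1 K
# (alpha in 1/K, g in m/s^2, h in m, kappa and nu in m^2/s)
Ra = rayleigh(alpha=2.1e-4, dT=1.0, g=9.81, h=0.01, kappa=1.4e-7, nu=1.0e-6)
print(Ra)
```

With these assumed values the result is of order $10^4$, above the rigid-top critical value $\Ray_c = 1708$, so such a layer would convect. The $h^3$ dependence makes the layer depth the most sensitive knob.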
Let us recall also that a more generic concept of fluid involves
systems where the particles are not necessarily \textit{microscopic};
i.e., the particles can be \textit{macroscopic}, when their typical size is greater than $1~\mu\mathrm{m}$ \citep{B54}. In fact, the dynamical
properties of a set of rigid macroscopic particles in a high state of agitation
were elucidated as a subject of the theory of fluids a long time ago by
\cite{R85}. However, for
macroscopic particles the kinetic energy is partially
transferred, upon collision, to lower dynamical levels (smaller
length scales), never coming back to the upper, granular level. For
instance, it may be transferred into thermal movement of the molecules
that are the constituents of the disk material \citep*{AFP13}. Thus,
unless the system gets an energy input from some kind of source, it
will evolve by continuously decreasing its total kinetic energy;
i.e. lowering the system \textit{granular temperature}
\citep{K79}. This temperature decay rate was calculated, for a
homogeneous and low density granular system (i.e., a homogeneous
\textit{granular gas}), by \cite{H83}.
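The cooling law just referenced takes the form $T(t)=T_0\,(1+t/\tau)^{-2}$, which solves $\dot{T}=-(2/\tau)\,T^{3/2}/T_0^{1/2}$; a minimal sketch cross-checking the closed form against direct Euler integration (the time step and parameters are arbitrary):

```python
def haff_temperature(t, T0, tau):
    """Haff's law for the homogeneous cooling of a granular gas:
    T(t) = T0 / (1 + t/tau)^2."""
    return T0 / (1.0 + t / tau) ** 2

# Forward-Euler integration of dT/dt = -(2/tau) * T^(3/2) / sqrt(T0)
T0, tau, dt = 1.0, 1.0, 1e-4
T = T0
for _ in range(int(2.0 / dt)):          # integrate up to t = 2 tau
    T += dt * (-2.0 / tau) * T**1.5 / T0**0.5
print(T, haff_temperature(2.0, T0, tau))
```

The $T^{3/2}$ on the right-hand side reflects that both the collision frequency and the energy lost per collision scale with the thermal speed.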
Nevertheless, when excited by some persistent external action, stable
granular gas systems are found spontaneously in nature, for instance
in sand storms \citep{B54}, and also in laboratory experiments, where
air flow \citep*{LBLG00} or mechanical vibration \citep*{OU98,PG12}
may be used as energy inputs. Under these conditions, the granular gas
can develop steady laminar flows \citep{VU09}. Unfortunately, the hydrodynamics
of granular fluids is not in the same stage of development as it is for
molecular fluids \citep{P15}. For instance, the corresponding hydrodynamic theory for thermal convection
in a granular gas was developed only very recently
\citep[see for instance the work by][]{KM03} and only in the case of the
academic problem of horizontal (or inclined)
infinite walls. But, obviously real systems are finite. In the case of a gas
heated from two horizontal walls the simplest finite configuration considered is
when the system is closed by adding vertical lateral walls. As is known, finite-size effects affect both the critical Rayleigh number and
the convection scenario in a molecular fluid when a cold vertical
wall is present \citep*{D77,HW77,MWG06}.
The lack of a theoretical analysis of finite-size effects in granular
convection theory \citep[see for instance the works by][and
others]{FP03,KM03,B10,EWea07,EMea10} may have hindered, or even prevented,
a complete interpretation of some previous results from granular dynamics
laboratory experiments. Furthermore, as we
will see, the granular convection observed in some experimental works
\citep*{WHP01,WHP01E,RSGC05,EMea10,WRP13,PGVP16} should be either exclusively or partly due to
the sidewall energy sink, and not of Rayleigh-B\'enard type. And of course, although the no-sidewalls theoretical
approach may be accurate when bulk convection is present \citep{KM03}, in the cases where the
convection is caused only by the sidewall energy sink we may expect
the convection properties to be very different.
Notice that in an enclosed granular gas the
lateral energy sink should always be present, since wall-particle
collisions are inherently inelastic. This implies that the present analysis
should be relevant for many granular convection experiments. Furthermore, as we
will see, lateral wall effects are also
more substantial for the granular gas than for the molecular fluid.
\section{Description of the system and the problem}
Let us consider a system consisting of a large set of circular
particles (disks) in a two-dimensional (2D) system. The particles are
identical inelastic smooth hard disks with mass $m$ and diameter $\sigma$. The
hard collision model works reasonably well at an experimental level for a
variety of materials \citep*{FLCA94}. We
use here this model in the smooth particle approximation; i.e., we neglect the
effects of sliding and friction in the collision. Under the smooth hard particle model for
collisions, the fraction of kinetic energy loss after collision is
characterized by a constant parameter called the coefficient of normal
restitution, $\alpha$ (not to be confused with the thermal expansion coefficient, which is
also commonly denoted by $\alpha$):
\begin{equation}
\boldsymbol{n\cdot v'}_{12} = -\alpha \boldsymbol{n\cdot v}_{12},
\label{collisional_rule}
\end{equation} where $\boldsymbol{v}_{12}, \boldsymbol{v'}_{12}$ are the
collision pair contact
velocities before and after collision, respectively \citep{FLCA94}.
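As a minimal illustration (not part of the cited derivations), the smooth inelastic hard-disk collision rule for equal masses can be written and checked in a few lines of Python; the velocities and contact normal below are arbitrary examples:

```python
import numpy as np

# Hedged illustration (equal masses m): in a smooth inelastic hard-disk
# collision only the normal component of the relative velocity is reversed
# and rescaled by alpha, so that n.v12' = -alpha n.v12 as in the equation
# above. The velocities and contact normal below are arbitrary examples.
def collide(v1, v2, n, alpha):
    n = n / np.linalg.norm(n)          # unit vector along the line of centres
    dv = np.dot(v1 - v2, n)            # normal relative velocity, n.v12
    J = 0.5 * (1.0 + alpha) * dv * n   # impulse per unit mass
    return v1 - J, v2 + J              # momentum-conserving update

v1, v2 = np.array([1.0, 0.0]), np.array([-1.0, 0.5])
n = np.array([1.0, 0.0])
alpha = 0.9
w1, w2 = collide(v1, v2, n, alpha)
```

The update conserves total momentum while dissipating a fraction $\propto(1-\alpha^2)$ of the normal relative kinetic energy, as the collision rule requires.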
The system is under the action of a constant gravitational
field $\boldsymbol{g}=-g\boldsymbol{\hat e}_y$. We will also assume that the system has low particle
density ($n$) everywhere at all times. Therefore, our
fluid is a granular gas. Collisions are
instantaneous \citep[in the sense that the contact time
is very short compared to the average time between
collisions,][]{CC70} and occur only between two particles. Since particles
are inelastic, our theory should take into account the inelastic cooling
term.
The system is provided with either two (top and bottom) or just one (bottom, in
the sense of gravity)
\textit{horizontal walls} (i.e.,
perpendicular to gravity), these being provided with energy
sources. In addition, our granular gas is caged in a finite
rectangular region by two inert \textit{vertical walls} (we call them
lateral walls, or sidewalls). Sidewall-particle
collisions are inherently inelastic, and the degree of inelasticity of these
collisions is characterized by a coefficient of normal restitution
$\alpha_w$ that is, in general, different from the one for particle-particle collisions, $\alpha$. See
Figure~\ref{sketch} for a graphical description.
\begin{figure}
\centerline{\includegraphics[width=.75 \columnwidth]{figs/sketch_fourier_discs}}
\caption{Simple sketch of the system. The system is heated
from the horizontal walls (at $y=\pm h/2$). If the lateral walls (at $x=\pm L/2$)
act as energy surface sinks, bi-dimensional flow occurs.}
\label{sketch}
\end{figure}
We denote the single particle
velocity distribution function as $f(\boldsymbol{r},\boldsymbol{v}|t)$
with $\boldsymbol{r}, \boldsymbol{v}$ being the particle position and
velocity, respectively, at time $t$. The first three
velocity moments of the distribution function
$n(\boldsymbol{r},t)=\int\mathrm{d}\boldsymbol{v}f(\boldsymbol{r},\boldsymbol{v}|t)$,
$\boldsymbol{u}(\boldsymbol{r},t)=(1/n)\int{\mathrm{d}\boldsymbol{v}f(\boldsymbol{r},\boldsymbol{v}|t)}\boldsymbol{v}$,
$T(\boldsymbol{r},t)=(1/dn)\int{\mathrm{d}\boldsymbol{v}f(\boldsymbol{r},\boldsymbol{v}|t)}mV^2$,
define the average fields particle density ($n$), flow velocity
($\boldsymbol{u}$) and temperature ($T$), respectively. Here, $\boldsymbol{V}=\boldsymbol{v}-\boldsymbol{u}$ and $d$ is the
system dimension. In this work we consider only $d=2$ (which is why the
particles are disks).
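These moment definitions can be checked with a quick Monte Carlo estimate (a sketch with synthetic Maxwellian velocities, not data from this work): for $N$ sampled particle velocities, the fields reduce to $\boldsymbol{u}=\langle\boldsymbol{v}\rangle$ and $T=m\langle V^2\rangle/d$.

```python
import numpy as np

# Hedged Monte Carlo illustration of the moment definitions: for N sampled
# velocities, u = <v> and T = m <|v - u|^2> / d (here d = 2). The Maxwellian
# sample and drift are synthetic, not data from this work.
rng = np.random.default_rng(0)
m, d, T_in = 1.0, 2, 0.5
v = rng.normal(0.0, np.sqrt(T_in / m), size=(200000, d)) + np.array([1.0, 0.0])
u = v.mean(axis=0)                       # flow velocity
V = v - u                                # peculiar velocity
T = m * (V**2).sum(axis=1).mean() / d    # granular temperature
```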
For a granular gas, molecular chaos
(i.e., particle velocities are not statistically correlated) also occurs in
most practical situations \citep*{PEU02,BO07}. Therefore, the kinetic
Boltzmann equation \citep{CC70}, may also be used to describe granular
gases \citep*{D01,BDKS98}. The general balance equations that follow from the inelastic Boltzmann equation
have the same form as for a molecular fluid, except for the additional inelastic
cooling term arising in the energy equation. They read \citep{BDKS98,SG98}
\begin{equation}
{D_tn}=-n\nabla\cdot\boldsymbol{u} , \quad {D_t\boldsymbol{u}}=-\frac{1}{mn}\boldsymbol\nabla\cdot{\mathsfbi{P}}+\boldsymbol{g},
\quad {D_tT}+\zeta T=-\frac{2}{dn}\left(\mathsfbi{P}\boldsymbol{:\nabla}\boldsymbol{u}+\boldsymbol\nabla\cdot{\boldsymbol q}\right).
\label{bal_eq}
\end{equation} In the above equations, $D_t\equiv \partial_t+\boldsymbol{u}\cdot\boldsymbol\nabla$ is the material derivative \citep{B67}, and $\mathsfbi{P}, \boldsymbol{q}$ are the momentum and energy fluxes (stress tensor and heat flux), defined respectively by $\mathsfbi{P}=m\int \mathrm{d}\boldsymbol{v}\,\boldsymbol{V}\boldsymbol{V}f(\boldsymbol{v})$ and
$\boldsymbol{q}=(m/2)\int \mathrm{d}\boldsymbol{v}\,V^2\boldsymbol{V}f(\boldsymbol{v})$.
As we said, notice the new term $\zeta T$ in the energy equation of
\eqref{bal_eq}, where $\zeta$ represents the rate of kinetic energy loss, and is
usually called \textit{inelastic cooling rate}. Accurate expressions of the
cooling rate and the inelastic Boltzmann equation are well known and may be
found elsewhere \citep[we use here the one worked out by][]{BDKS98}.
Let us note also that the set of balance equations \eqref{bal_eq} is exact and
always valid. However, in order to close
the system of equations, we need to express the fluxes
$\mathsfbi{P}, \boldsymbol{q}$ and the cooling rate $\zeta$ as functions of the
average fields $n, \boldsymbol{u}, T$. The hydrostatic pressure field $p$ is
defined through the ideal-gas equation of state, $p=nT$. Starting from the
kinetic equation for the gas, this can only be done if the distribution
function spatio-temporal dependence can be expressed through a functional
dependence on the average fields; i.e., if the gas is in a \textit{normal
state} \citep{H12}, this only being true if the spatial gradients vary over
distances greater than the mean free path (that is, the characteristic
microscopic scale). In this case, it is usually said that there is \textit{scale separation} \citep{G03}. Henceforth, we assume this scale separation occurs in all situations considered in this work \citep[see the reference by][for more detail about the conditions for accuracy of this assumption in steady granular gas flows]{VU09}.
The boundary conditions come from usual forms for temperature sources,
no-slip velocity \citep{VSG10}, and controlled pressure at the boundaries:
$T(y=+h/2)=T_+$, (substituted by $[\partial T/\partial y =0]_{y=+h/2}$, with
$h\gg L$, in the case of an open on top system),
$T(y=-h/2)=T_-$, $\boldsymbol{u}(x=\pm L/2)=\boldsymbol{u}(y=\pm
h/2)=\boldsymbol{0}$, $p(y=-h/2)=p_0$. For an enclosed granular gas we also
necessarily need to consider the dissipation at the lateral walls, as we previously
explained. A condition on the horizontal derivative of temperature would suffice to
account for an energy sink at the side walls \citep{HW77}. However, taking into
account that the energy sink comes from wall-particle collision inelasticity, it
is more appropriate to impose a horizontal heat flux proportional to the
degree of inelasticity of the lateral wall-disk collisions,
$\propto (1-\alpha_w^2)$ \citep{JJ87},
\begin{equation}
q_x(x=\pm L/2) = \mathcal{A}(\alpha_w) \left[pT^{3/2}\right]_{x=\pm L/2},
\label{DLW_bc}
\end{equation} where $\mathcal{A}(\alpha_w)=(\pi/2)m(1-\alpha_w^2)$ is given by
the dilute limit of the corresponding expression in the work by
\cite*{NAAJS99}. (Additionally, condition $p(x=-L/2)=p(x=+L/2)$ is also
implicitly used, since in this work we only consider identical lateral walls at
both sides). The detailed balance of fluxes across the boundaries is beyond the
scope of this work, but for a more detailed analysis on realistic boundary conditions the
reader may refer to the work by \cite*{NAAJS99}.
\subsection{Navier-Stokes equations and transport coefficients}
We will assume that the spatial gradients are sufficiently small, which is true for steady laminar flows near the elastic limit \citep{VU09}. Therefore, we use the Navier-Stokes constitutive relations for the fluxes \citep{BDKS98,BC01}
\begin{align}
\mathsfbi{P}&=p\mathsfbi{I}-\eta\left[\boldsymbol\nabla\boldsymbol{u}+\boldsymbol\nabla\boldsymbol{u}^\dagger-\frac{2}{d}\mathsfbi{I}(\boldsymbol\nabla\cdot\boldsymbol{u})\right], \\
\boldsymbol{q}&=-\kappa\boldsymbol\nabla T-\mu\boldsymbol\nabla n.
\label{fluxes_NS}
\end{align}
In \eqref{fluxes_NS} we can find the transport coefficients: $\eta$
(viscosity), $\kappa$ (thermal conductivity), and $\mu$ (a Dufour-like
coefficient coupling the heat flux to the density gradient). Notice that $\mu$ is a new coefficient that arises from inelastic
particle collisions \citep{BDKS98}. The transport coefficients for the granular
gas have been calculated by several authors, with some variations
in the theoretical approach. For instance, \cite{SG98} performed a
Chapman-Enskog-like power series expansion in terms of both the spatial
gradients and inelasticity, up to Burnett order, but limited to quasi-elastic particles whereas \cite{BDKS98} perform the expansion only in the spatial gradients (and the theory is formally valid for all values of inelasticity). Previous works, like the work by \cite{JS83} and by \cite*{LSJC84} obtain the granular gas transport coefficients only in the quasi-elastic limit. These theories will actually yield indistinguishable values of the Navier-Stokes transport coefficients for nearly elastic particles (the case of our interest in the present work).
The essential point to our problem is the scaling with temperature $T$ and particle density $n$. This scaling for hard particles is \citep{CC70,BDKS98}
\begin{equation}
\eta=\eta_0^*T^{1/2}\: , \quad \kappa=\kappa_0^*T^{1/2} \: ,
\quad \mu=\mu_0^*\frac{T^{3/2}}{n} \: , \quad \zeta=\zeta_0^*\frac{p}{T^{1/2}} \:.
\label{coefs0}
\end{equation}
The values of the coefficients for disks are $\eta^*_0\equiv\eta^*(\alpha)\sqrt{m}/(2\sigma\sqrt{\pi}),
\kappa^*_0\equiv 2\kappa^*(\alpha)/(\sqrt{\pi m\sigma}), \mu^*_0\equiv 2\mu^*(\alpha)/(\sqrt{\pi m\sigma}),
\zeta^*_0\equiv(2\sigma\sqrt{\pi/m})\zeta^*(\alpha)$; the dimensionless functions
of the coefficient of restitution can be found in the work by
\cite{BC01}. We write them here for completeness
\begin{subequations}
\begin{align}
& \eta^*(\alpha)=\left[\nu^*_1(\alpha)-\frac{\zeta^*(\alpha)}{2}\right]^{-1}, \\
& \kappa^*(\alpha)=\left[\nu^*_2(\alpha)- \frac{2d}{d-1}\zeta^*(\alpha)\right],
\\
&
\mu^*(\alpha)=2\zeta^*(\alpha)\left[\kappa^*(\alpha)+\frac{(d-1)c^*(\alpha)}{2d\zeta^*(\alpha)}\right],
\\
& \zeta^*(\alpha)= \frac{2+d}{4d}(1-\alpha^2)\left[1+\frac{3}{32}c^*(\alpha)\right],
\end{align}
\label{coefficients}
\end{subequations} where
\begin{subequations}
\begin{align}
&
\nu^*_1(\alpha)=\frac{(3-3\alpha+2d)(1+\alpha)}{4d}\left[1-\frac{1}{64}c^*(\alpha)\right],\\
& \nu^*_2(\alpha)= \frac{1+\alpha}{d-1}\left[ \frac{d-1}{2} +
\frac{3(d+8)(1-\alpha)}{16}+\frac{4+5d-3(4-d)\alpha}{1024}c^*(\alpha)\right],\\
& c^*(\alpha)= \frac{32(1-\alpha)(1-2\alpha^2)}{9+24d+(8d-41)\alpha+30\alpha^2(1-\alpha)},
\end{align}
\label{coefficients_functions}
\end{subequations} and in our system $d=2$ since we deal with disks.
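As a quick sanity check (a sketch; the expressions are transcribed from the equations above, for $d=2$), the dimensionless functions can be evaluated numerically. In the elastic limit $\alpha\to 1$ the cooling rate must vanish and the reduced collision frequencies reduce to unity:

```python
# Hedged numerical evaluation of the dimensionless functions written above,
# transcribed for d = 2 (disks); in the elastic limit alpha -> 1 the cooling
# rate must vanish and the collision frequencies reduce to unity.
d = 2

def c_star(a):
    return 32*(1 - a)*(1 - 2*a**2) / (9 + 24*d + (8*d - 41)*a + 30*a**2*(1 - a))

def zeta_star(a):
    return (2 + d)/(4*d) * (1 - a**2) * (1 + (3/32)*c_star(a))

def nu1_star(a):
    return (3 - 3*a + 2*d)*(1 + a)/(4*d) * (1 - c_star(a)/64)

def nu2_star(a):
    return (1 + a)/(d - 1) * ((d - 1)/2 + 3*(d + 8)*(1 - a)/16
                              + (4 + 5*d - 3*(4 - d)*a)/1024 * c_star(a))

def eta_star(a):
    return 1.0 / (nu1_star(a) - 0.5*zeta_star(a))
```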
\subsection{The heated granular gas: convective base state}
\label{convective}
First, we revisit the general argument which states that a hydrostatic
state is impossible when a temperature gradient is assumed in the
horizontal (transverse to gravity) direction, as is the case with
dissipative lateral walls (DLW) \citep{PGVP16}; i.e., when $T = T(x,y)$. Momentum balance in Eq.~\eqref{bal_eq}
supplemented by ideal equation of state, in the absence of macroscopic
flow (hydrostatic) states that
\begin{subequations}
\label{2d_balance}
\begin{align}
\partial_x p=\partial_x(n(x,y)T(x,y))=0,\\
\partial_y p=\partial_y (n(x,y)T(x,y))=-mgn(x,y).
\end{align}
\end{subequations}
The first equation yields $p(x,y) \equiv p(y)$, which, used in the second equation,
sets $n(x,y) \equiv n(y)$ and, using this back in the first equation above, we get $T(x,y)\equiv T(y)$; i.e.,
$\partial T/\partial x = 0$. This is in contradiction with the horizontal
temperature gradient assumed above. Therefore, the simple hydrostatic system of
equations is not compatible with the condition $\partial T/\partial x\neq 0$,
originated by the energy sinks at the side walls.
We must conclude that a hydrostatic state in the presence of DLW and
gravity is not possible. Thus, a flow with $\boldsymbol{u}\neq \boldsymbol{0}$ must always be present, even at
infinitesimal values of the Rayleigh number; i.e., there is no hydrostatic
solution even as $\mathrm{Ra}\to 0$.
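The contradiction can also be verified symbolically. The following sympy sketch (an illustration of the argument above, not part of the original analysis) imposes the two hydrostatic balance equations and confirms that they leave no room for a lateral temperature gradient:

```python
import sympy as sp

# Hedged symbolic check of the argument above: d(nT)/dx = 0 forces p = p(y);
# then dp/dy = -m g n gives n = n(y); with the ideal-gas law T = p/n, the
# temperature is a function of y only, so dT/dx = 0 -- contradicting any
# assumed lateral temperature gradient.
x, y, m, g = sp.symbols('x y m g', positive=True)
p = sp.Function('p')(y)           # first balance equation: p independent of x
n = -sp.diff(p, y) / (m * g)      # second balance equation
T = p / n                         # ideal-gas equation of state
dTdx = sp.simplify(sp.diff(T, x))
```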
In the terminology of previous
bibliography on molecular fluids enclosed by dissipative sidewalls,
\textit{a smooth
transition occurs so that the concept of a critical Rayleigh number is no
longer tenable} \citep{HW77}.
\section{Extended Boussinesq-like approximation for a granular gas}
\label{boussinesq}
However, our previous analysis does not
explain why convection of the form observed in the experiments
appears. Indeed, our analysis only implies that there is never a hydrostatic
solution: it does not necessarily lead to a flow with one convection cell next
to each inelastic wall, nor to a convection-free region for points sufficiently
far away from the side walls, as seen in the experiments
\citep{WHP01,WHP01E,RSGC05,EMea10,WRP13,PGVP16}. Furthermore,
we also need to rule out that other properties of the experimental
system, beyond sidewall inelasticity (such as particle-bottom plate friction), are important for the appearance
of dissipative-lateral-wall convection (DLWC) in the granular gas.
Therefore, we analyze in this section the minimal system of
differential equations that derives from the general balance equations and that
is able to reproduce a convection with the characteristics of the one observed
in the experiments. Once it
is numerically solved, we will be able to describe in detail the main features
of the non-hydrostatic base state in our system.
According to both experimental and computer simulation (molecular dynamics)
results, this convection shows only one cell per inert wall, independently of
the thermal gradient strength and system size \citep{PGVP16}. The flow in the bulk
of the fluid, for wide systems, appears to be zero or negligible. This is
analogous to the result for molecular fluids where sidewall effects are
important \citep{HW77}. Moreover, the presence of sidewalls introduces two
important effects of different origin.
The first arises from finite-size effects alone and shows up as
a lower critical Rayleigh number for B\'enard convection, even if the lateral
walls are perfectly insulating (i.e., even if they do not convey a lateral energy flux). It is
impossible to escape this effect, both in molecular \citep[as described in the
work by][]{HW77} and in granular fluids enclosed by lateral walls. The second
effect comes properly from energy dissipation at the lateral walls and, as we saw,
produces a non-hydrostatic steady state by default.
We are specifically interested in this second effect that arises only with
dissipative lateral walls and leave for future work the study of the first
effect (that should lead eventually to consider more appropriate theoretical
criteria when comparing with experimental results for the classical bulk
granular convection). Therefore, it is our
aim to elucidate the minimal theoretical framework that is able to take into account the important
experimental evidence of the DLWC in granular gases.
For our theoretical description, let us use as reference units: particle mass $m$ for mass, particle diameter $\sigma$ for length, thermal velocity at the base $v_0=(4\pi
T_0/m)^{1/2}$ for velocity, pressure at the base $p_0=n_0T_0$ for pressure, and $\sigma/v_0$ for time, where $T_0, n_0$ are arbitrary reference values of temperature and particle density, respectively.
A common situation for thermal convection is that all density derivatives are
negligible, except for the spatial dependence of density due to gravity, which appears in the momentum balance equation. This happens when the variation of mechanical energy is small compared to the variation of thermal energy \citep{GG76} and leads to the Boussinesq equations \citep{B78,C81}. This is always true in our system if the reduced gravitational acceleration fulfills $g\sigma/v_0^2\ll 1$. Thus, we restrict our analysis to small values of $g$. Taking this into account, the mass balance equation in \eqref{bal_eq} immediately yields $\partial u_x/\partial x+ \partial u_y/\partial y=0$.
Moreover, for weak convection (as is the case in our experimental results) we can
neglect the advective (nonlinear) terms that emerge in the balance equations
\eqref{bal_eq} \citep{B78}.
Incorporating these approaches into the other balance
equations in \eqref{bal_eq} and for our system geometry (see figure \ref{sketch}),
and with our reference units, we get the following dimensionless Boussinesq equations
for weak convection in the granular gas
\begin{align}
& \eta^*(\alpha)\frac{\partial}{\partial y}\left[\sqrt{ T}\left(\frac{\partial u_x}{\partial y}+\frac{\partial u_y}{\partial x}\right)\right]+
2 \eta^*(\alpha)\frac{\partial}{\partial x}\left[\sqrt{ T}\frac{\partial u_x}{\partial x}\right]-\frac{\partial (n T)}{\partial x} = 0, \label{linear_boussinesq_ux}\\
& \eta^*(\alpha)\frac{\partial}{\partial x}\left[\sqrt{ T}\left(\frac{\partial u_x}{\partial y} + \frac{\partial u_y}{\partial x}\right)\right]+2 \eta^*(\alpha)\frac{\partial}{\partial y}\left[\sqrt{ T}\frac{\partial u_y}{\partial y}\right]-\frac{\partial ( n T)}{\partial y} - n g^* = 0, \label{linear_boussinesq_uy} \\
& n^2 T^{3/2}\zeta^*(\alpha)= \frac{\kappa^*(\alpha)}{\pi}\left[ \frac{\partial}{\partial x}\left(\sqrt{ T} \frac{\partial T}{\partial x}\right)+\frac{\partial}{\partial y}\left(\sqrt{ T} \frac{\partial T}{\partial y}\right)\right], \label{linear_boussinesq_T}
\end{align} with $g^*=4\pi g$. In fact, our Boussinesq-extended approximation
includes additional terms with respect to the classical Boussinesq approximation used
for thermal convection, since we do not neglect the temperature dependence of the transport coefficients, which results in
$\sqrt{T}$ factors inside the bracket terms in equations
\eqref{linear_boussinesq_ux}-\eqref{linear_boussinesq_T}. In a previous work we
noticed that these temperature factors are relevant for important properties of the steady
profiles of the granular temperature, such as the curvature \citep{VU09}. Notice
we also keep the density derivatives in
\eqref{linear_boussinesq_ux}-\eqref{linear_boussinesq_uy}, since the granular
gas is highly compressible. For this reason, we do not strictly consider density
to be constant in the mass balance equation; we neglect density variations along
the flow field lines instead, while keeping density variations in the momentum
balance equation. The corresponding dimensionless forms of the boundary
conditions would be: $T(y=+h/2)=T_+/(mv_0^2/2)$, ($[\partial T/\partial y ]_{y=+h/2}=0$ for
an open on top system),
$T(y=-h/2)=1/\sqrt{4\pi}$, $\boldsymbol{u}(x=\pm L/2)=\boldsymbol{u}(y=\pm
h/2)=\boldsymbol{0}$, $p(y=-h/2)=1$, plus for the lateral walls dissipation
condition $q_x(x=\pm L/2) = \mathcal{A}(\alpha_w) \left[pT^{3/2}\right]_{x=\pm L/2}$.
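To illustrate the structure of the temperature equation \eqref{linear_boussinesq_T}, consider the reduced one-dimensional case with no flow and gravity neglected, so that $p=nT$ is constant and $n=p/T$; writing $s=T^{3/2}$, the equation becomes $(2\kappa^*/3\pi)\,s''=\zeta^* p^2 s^{-1/3}$. The Python sketch below (with illustrative parameter values, not fitted to the experiments) solves it by damped relaxation between two heated walls:

```python
import numpy as np

# Hedged 1D sketch of the steady temperature equation above with u = 0 and
# gravity neglected, so p = nT is constant and n = p/T. With s = T^{3/2},
# sqrt(T) dT/dy = (2/3) ds/dy, and the equation reduces to
#   (2 kappa* / (3 pi)) s'' = zeta* p^2 s^{-1/3}.
# All parameter values are illustrative, not fitted to the experiments.
kappa_s, zeta_s, p = 1.0, 0.05, 1.0
N = 51
y = np.linspace(0.0, 1.0, N)
dy = y[1] - y[0]
s = np.linspace(1.0, 3.0**1.5, N)     # s = T^{3/2}; walls held at T=1 and T=3
for _ in range(20000):                # damped Jacobi relaxation
    rhs = (3 * np.pi) / (2 * kappa_s) * zeta_s * p**2 * s[1:-1] ** (-1.0 / 3.0)
    s_new = 0.5 * (s[2:] + s[:-2]) - 0.5 * dy**2 * rhs
    s[1:-1] += 0.5 * (s_new - s[1:-1])
T = s ** (2.0 / 3.0)                  # cooling pulls T below the zeta*=0 profile
```

The inelastic cooling term acts as a bulk sink, so the converged interior temperature lies below the purely conductive ($\zeta^*=0$) profile.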
\subsection{Numerical solution and comparison with
experiments}
\begin{figure}
\vspace{8pt}
\centerline{
\includegraphics[height=4.0cm]{figs/wide1} \quad
\includegraphics[height=3.5cm]{figs/wide3} \vspace{8pt}\\
}
\centerline{\includegraphics[height=3.5cm]{figs/wide5} }
\caption{DLW convection for systems with a top wall and different widths. Left
to right top to bottom: $L=h$, $L=3h$, $L=5h$. In our dimensionless units:
system height $h=170$. In each plot, thickest stream lines correspond to
$u_0 = (u_x^2+u_y^2)^{1/2} = 3.11\times 10^{-3}$ (in our dimensionless
units). The other relevant parameters in this figure are: $T_0=3T_+$,
$g=0.002\,g_0$, $\alpha=0.9$ (for both particle-particle and
wall-particle collisions).}
\label{fig_wide}
\end{figure}
In order to numerically solve the equations
\eqref{linear_boussinesq_ux}-\eqref{linear_boussinesq_T}, we used the finite volume
method. For this, we wrote a code using the SIMPLE algorithm \citep[in order to avoid numerical decoupling of the pressure field, see][]{FP02} and the FiPy differential equation package with the PySparse solver \citep*{GWW09}. We have seen (see figure \ref{theory-exp}) that the agreement with experiment and MD simulations is qualitatively very good. Moreover, all major properties of the flow are reproduced in the numerical solution obtained from the Boussinesq approximation.
Figure \ref{fig_wide}, which corresponds to systems provided with a top wall,
clearly demonstrates that wider systems do not display more convection
cells. The
number of cells remains always one per dissipative wall. We also notice that
the flow is upwards in the outer part of the cells (towards the system center)
and downwards next to the lateral walls. It is also interesting that,
according to the numerical results, the convection speed $u_0=(u_x^2+u_y^2)^{1/2}$
reaches roughly the same maximum value when varying the system
width. Furthermore, if we define the cell size as the horizontal distance
between the points of maximum convection speed in the upward and downward streams (see figure \ref{fig_wide}), then we see that the size of the cells remains constant. The exception is for systems narrower than twice the cell size, in which case the cells squeeze each other (first panel in figure \ref{fig_wide}). All of these results show the peculiarities of the DLW convection with respect to Rayleigh-B\'enard convection and coincide with the experimental behaviour previously detected \citep{PGVP16}.
In figures \ref{theory-exp} and \ref{theory-exp-2}, which correspond to open
systems (i.e., without a top wall), we see a comparison between theory and
experimental results, for $g=0.016~g_0$ (with $g_0=9.8~\mathrm{m/s^2}$) in the
cases of $N=300$ and $N=1000$ particles in the experimental set up
respectively. In all cases, we may consider the system as dilute, since the
local packing fraction $\nu = n\pi\sigma^2/4$ is never greater than,
approximately, $\nu=0.5\times 10^{-2}$. As we can see, the agreement is good
for the flow field, and more qualitative for the temperature and density
fields. In the case of figure~\ref{theory-exp}, there is some disagreement
between theory and experiment for both temperature and density, which may be
partially explained by the fact that the Knudsen number is not small
($\mathrm{Kn}\sim 10^{-1}$) and therefore there might be non-Newtonian effects
in the experiments that are not captured by our theory. However, it
is clear in both cases that the cold fluid regions next to the lateral walls
are adjacent to the convective
cell centers. We also checked that when dissipation at the lateral walls is
switched off, no DLW convection appears out of the theoretical solution. In
this way, we may conclude that the convection mechanism appears as a
consequence of the combined action of two perpendicular gradients: the density
(and thermal) gradient due to the action of gravity and the horizontal gradient
due to energy dissipation by the lateral walls.
Let us point out that our theory does not take into account all of the
ingredients and details that are present in previous experiments on DLWC in
granular systems \citep*{PGVP16,WHP01,WRP13}, such as plate-particle friction
and/or sliding, or dynamical effects derived from particle sphericity (to
give just two examples). Moreover, as noticed in previous works, there is a
tendency for the volume convection to disappear at not-so-small Knudsen numbers \citep{PGVP16,AA16}. For all of this, a comparison with our previous
experimental results \citep{PGVP16} and the experimental results by others
\citep{WHP01,EMea10,WRP13} should be regarded as qualitative, not
quantitative. The advantage of this procedure is that, because it is
reduced to the essentials, we have finally been able to identify the key
ingredients that produce the DLWC in the granular gas.
\begin{figure}
\begin{center}
\includegraphics[width=3.5cm,height=4.1cm]{figs/flow_exp}
\includegraphics[width=3.5cm,height=4.0cm]{figs/flow3}\\ \vspace{0.25cm}
\includegraphics[width=12.75cm,height=3.5cm]{figs/fields_exp_theor_bar_2}
\caption{Fields from experiments and theory (left \& right panels respectively, for each
pair of panels for the corresponding field), for a system without a top wall (open system). Length unit:
particles diameter ($\sigma=1$ mm). $g=0.016\times 9.8~\mathrm{m/s^2}$. Top
row: flow field $\boldsymbol{u}$. Bottom row, left half: the corresponding temperature $T/m$
for experiment (first panel from the left) and theory (second panel from the
left). Bottom row, right half: packing fraction fields $\nu=n \pi \sigma^2/4$ for
experiment (left) and theory (right). Black stands for lower and red for higher field value. $0<T\le
0.2mv_0^2$, $(v_0=370~\mathrm{mm/s}$); $0.02\times 10^{-2}<\nu\le0.15\times 10^{-2}$, mean packing
fraction: $\nu = 0.049\times 10^{-2}$. Density color bars are in percentage
units. Experiments: with $f=45$ Hz, $A=1.85$ mm, $N=300$ (total number of particles). Theory: coefficient of restitution $\alpha=0.9$ (both for particle-particle and wall-particle collisions).}
\label{theory-exp}
\end{center}
\end{figure}
Furthermore, theoretical procedures (simulations, hydrodynamics) allow
us to separate the two sources of dissipation and make ``ideal''
assumptions. For instance, we can switch off internal dissipation (and reproduce
the molecular fluid limit case) while retaining dissipation
at the walls.
\section{Conclusions}
We have discussed in this work the theoretical framework for a previously observed
experimental phenomenon of granular convection
\citep*{WHP01,WHP01E,RSGC05,EMea10,WRP13,PGVP16}. This convection appears
automatically; i.e., it occurs at arbitrary \Ray~ and, in close analogy to the
convection in molecular fluids
induced by cold sidewalls \citep{HW77}, we have concluded that the new granular
convection is also induced by dissipative vertical
sidewalls and a gravity field. We denote it as DLW convection. We have built a
granular hydrodynamic theoretical framework that explains the physical origin of this
convection, in the context of the Boussinesq equations for the granular gas. To
our knowledge, it is the first time that a Boussinesq-type approach is used for
granular convection. We found that our
theory also explains the main features of the experimental observations.
That is, the DLW convection displays in all cases only one convection cell per
dissipative wall, the width of this cell increasing when the convection intensity
increases. Moreover, the DLW convection intensity is enhanced by increasing the gravity
acceleration, and/or wall dissipation. Conversely, it is decreased by increasing
bottom wall temperature (at fixed gravity and wall dissipation).
\begin{figure}
\begin{center}
\includegraphics[width=3.5cm,height=4.1cm]{figs/flow_exp_N1k} \includegraphics[width=3.5cm,height=4.0cm]{figs/flow4}\\
\includegraphics[width=2.75cm,height=3.25cm]{figs/temp_exp_N1k_bar}
\includegraphics[width=2.75cm,height=3.25cm]{figs/temp4_bar}
\quad
\includegraphics[width=2.75cm,height=3.25cm]{figs/dens_exp_N1k_bar}
\includegraphics[width=2.75cm,height=3.25cm]{figs/dens4_bar}
\caption{Fields from experiments and theory (left \& right panels, for each
pair of panels for the corresponding field,
respectively) for a system without a top wall (open system). Length unit:
particles diameter ($\sigma=1$ mm). $g=0.016\times 9.8~\mathrm{m/s^2}$. Top
row: flow field $\boldsymbol{u}$. Bottom row, left half: the corresponding temperature $T/m$ for
experiment (left) and theory (right). Bottom row, right half: packing fraction
$\nu=n \pi \sigma^2/4$ fields for experiment (left) and theory (right). Black
stands for lower and red for higher field value. $0<T\le 0.2mv_0^2$,
$(v_0=370~\mathrm{mm/s}$); $0.02\times 10^{-2}<\nu\le0.46\times 10^{-2}$,
mean packing fraction: $\nu=0.15\times 10 ^{-2}$. Density color bars are in
percentage units. Experiments: with $f=45$ Hz, $A=1.85$ mm, $N=1000$ (total
number of particles). Theory: coefficient of restitution $\alpha=0.9$ (both
for particle-particle and wall-particle collisions).}
\label{theory-exp-2}
\end{center}
\end{figure}
Notice that sidewalls are inherently inelastic in granular fluid experimental
systems. Thus, the DLW convection is always present at the experimental level, and
our results imply that the classical volume thermal convection in granular gases
does not appear in experimental systems out of a hydrostatic state. Instead, it
develops as a secondary instability out of the DLWC state. Therefore, more
theoretical work is needed in general to correctly describe the experimental
instability criteria for the volume thermal convection in granular gases. This
also implies that the accuracy of previous hydrodynamic theory for the volume
thermal convection could presumably be improved if the prior existence of the DLW convection
is taken into account.
Finally, our present work constitutes another strong evidence that steady granular flows can be correctly described with a standard hydrodynamic theory \citep{P15} \citep[see also the recent work by][with results in the same line]{VL17}.
F. V. R. acknowledges support from grants ``Salvador
Madariaga'' No. PRX14/00137, FIS2016-76359-P (both from
Spanish Government) and No.GR15104 (Regional
Government of Extremadura, Spain). Use of computing facilities from the Extremadura Research Centre for Advanced Technologies (CETA-CIEMAT), partially financed by the ERDF is also acknowledged.
\bibliographystyle{jfm}
\section{Introduction}
Phase imaging plays a crucial role in the fields of optical, X-ray and electron microscopy \cite{optical1,optical2,x-ray,electron}. In microscopic imaging, the phase of biological cells and tissues carries important information about their structure and intrinsic optical properties. Although this information cannot be directly recorded by a digital detector (CCD or CMOS), Zernike phase contrast microscopy \cite{PCM} and differential interference contrast (DIC) microscopy \cite{DIC} can provide reliable phase contrast for transparent cells and weakly absorbing objects by converting phase into intensity. However, these techniques can only be used for visual, qualitative imaging rather than giving quantitative maps of phase change, which makes quantitative data interpretation and phase reconstruction difficult.
Quantitative phase imaging (QPI) is a powerful tool for wide-ranging biomedical research and the characterization of optical elements: it is label-free and can quantitatively reconstruct the phase, or the optical path thickness, of cells, tissues, and optical fibers. As the conventional interferometric approach to QPI, off-axis digital holographic microscopy (DHM) \cite{DMH1,DMH2} quantitatively measures the phase delay introduced by the heterogeneous refractive index distribution within the specimen. Such a method requires a coherent illumination source and a relatively complicated, vibration-sensitive optical system, and the speckle noise of the laser degrades the spatial resolution of the phase image. By contrast, non-interferometric QPI approaches based on common-path geometries and white-light illumination \cite{wl1,wl2,wl3} have been developed to alleviate coherent noise and improve robustness against mechanical vibrations, thereby greatly improving the spatial resolution and imaging quality of the phase measurement. Nevertheless, these QPI techniques still rely on spatially coherent illumination, so the maximum achievable resolution of phase imaging depends only on the numerical aperture (NA) of the objective and is restricted by the coherent diffraction limit.
On the other hand, deterministic phase retrieval can also be realized with the transport of intensity equation (TIE) \cite{TIE1,TIE2,TIE3}, using only object field intensities at multiple axially displaced planes. The TIE linearizes the relationship between the phase and the derivative of intensity along the axis of propagation \cite{TIE1}; the phase can then be uniquely determined by solving the TIE with intensity images and the longitudinal intensity derivative on the in-focus plane. QPI based on the TIE has been increasingly investigated for micro-optics inspection and dynamic phase imaging of biological processes in recent years, owing to its unique advantages over interferometric techniques: quantitative reconstruction without complicated interferometric optical configurations, a reference beam, laser illumination sources, or phase unwrapping \cite{TIE_Appl1,TIE_Appl2,TIE_Appl3}. It has been demonstrated that non-interferometric phase retrieval based on the TIE adapts well to partially coherent illumination \cite{PC_TIE0,PC_TIE1,PC_TIE2,PC_TIE3}, even though the original derivation of the TIE rests on the paraxial approximation and coherent illumination. Because of the nonlinear relationship among the intensity image of the object, the illumination source, and the optical system under a partially coherent field, the imaging process and its mathematical modeling are more complicated than in the coherent case \cite{Partial_Con1,Partial_Con2}. Nevertheless, the phase retrieval of the TIE can be reformulated using the concept of the weak object transfer function (WOTF), under weak defocus assumptions and ignoring the bilinear terms originating from the self-interference of scattered light \cite{WOTF1,WOTF2,WOTF3,WOTF4}.
The WOTF describes the frequency-domain response of phase and absorption for a given optical imaging system; it is also called the contrast transfer function (CTF) in the field of propagation-based X-ray phase imaging \cite{CTF1,CTF2}.
Although the phase reconstructed from the TIE is not well defined over a partially coherent field, this ``phase'' has been shown to be the weighted average of the phases obtained under the constituent coherent illuminations, using the theory of coherent mode decomposition \cite{Partial_decomp}, and it can be converted to the well-defined optical path length of the sample \cite{PC_TIE2}. The physical meaning of phase for a partially coherent field is also related to the transverse Poynting vector \cite{PC_TIE0} and to the Wigner distribution moment \cite{Winger}. Under coherent illumination, the transfer function is truncated at the objective NA, and the poor response of the TIE at low spatial frequencies amplifies noise and leads to cloud-like artifacts superimposed on the reconstructed phases \cite{TIE3,TIE_Appl1}. With partially coherent light, by contrast, the maximum achievable resolution of phase imaging is extended beyond the coherent case to the sum of the objective NA and the illumination NA, where the ratio of illumination NA to objective NA is called the coherence parameter $s = N{A_{ill}}/N{A_{obj}}$. As the parameter $s$ increases (with $N{A_{ill}} \le N{A_{obj}}$ in practice), the phase contrast of the defocused intensity image vanishes dramatically because of the attenuated response of the transfer function. When the illumination NA approaches the objective NA, the spatial cutoff frequency increases to twice the objective NA, as predicted by the WOTF \cite{WOTF1,OTF2}, but the low-contrast intensity images then have a signal-to-noise ratio (SNR) too poor to recover the phase from the defocused intensity images. At low spatial frequencies near zero, the imaginary part of the WOTF rises faster for large defocus distances than for small ones, so most multi-plane phase retrieval methods based on the TIE select the low frequencies of the large-defocus images as the optimal ones \cite{WOTF4,PC_TIE3}.
However, the phase transfer function at large defocus distances contains many zero-crossings due to the oscillation of the sine function, and these points make it almost impossible to recover the high-frequency information of the phase.
In this paper, we present a highly efficient QPI approach that combines an annular aperture with programmable LED illumination, replacing the traditional halogen illumination source with an LED array within a conventional transmission microscope. An annular illumination pattern matched to the objective pupil is displayed on the LED array, and each isolated LED is treated as a coherent source. The WOTF for an axisymmetric oblique source at an arbitrary position on the source pupil plane is derived, and the principle of the discrete annular LED illumination pattern is validated. Not only can the spatial resolution of the final reconstructed phase be extended to 2 NA of the objective, but the phase contrast of the defocused intensity image is also strong, because the response of the phase transfer function (PTF) with an annular source is roughly constant across a wide range of frequencies, an ideal form for noise-robust, high-resolution, and well-posed phase reconstruction.
Although a TIE-based QPI approach using annular illumination was reported by our group in an earlier paper \cite{AI_TIE}, and LED arrays have also been employed for Fourier ptychography \cite{FP1,FP2} and other QPI modalities \cite{QP_LED1,QP_LED2}, the novelty of this work is to derive the WOTF for an axisymmetric oblique source and to extend this discrete source to the superposition of arbitrary illumination patterns, such as circular illumination, annular illumination, or any other axisymmetric illumination. Furthermore, the combination of annular illumination and a programmable LED array makes the modulation of the illumination more flexible and compatible, without the need for anodized and dyed circular glass plates or customized 3D-printed annuli \cite{AI_TIE}. These advantages make it a competitive and powerful alternative to traditional bright-field illumination for a wide variety of biomedical investigations, micro-optics inspection, and biophotonics. Noise-free and noisy simulation results validate the applicability of the discrete annular source, and quantitative phase measurements of a micro polystyrene bead and a visible blazed transmission grating demonstrate the accuracy of the method. Experimental investigations of unstained human cancer cells using different types of objectives are presented, and these results show the potential for widespread adoption of QPI in morphological studies of cellular processes and in the biomedical community.
\section{Principle}
\subsection{WOTF for axisymmetric oblique source}
In the standard 6$f$ optical configuration, illustrated in Figure 1 of \cite{WOTF1}, an object is illuminated by a K\"ohler illumination source and imaged via an objective lens. The image formation of this telecentric microscopic imaging system can be described by a Fourier transform and a linear filtering operation in the pupil plane \cite{Partial_Con1}. In the incoherent case, the intensity image is given by the convolution $I\left( \mathbf{r} \right)={{\left| h\left( \mathbf{r} \right) \right|}^{2}}\otimes {{\left| t\left( \mathbf{r} \right) \right|}^{2}}={{\left| h\left( \mathbf{r} \right) \right|}^{2}}\otimes {{I}_{u}}\left( \mathbf{r} \right)$, where $h$ denotes the amplitude point spread function (PSF) of the imaging system, $t$ is the complex amplitude, and ${I_u}$ represents the intensities of the coherent partial images arising from all light source points. In the coherent case, by contrast, it obeys $I\left( \mathbf{r} \right) = {\left| {h\left( \mathbf{r} \right) \otimes t\left( \mathbf{r} \right)} \right|^2}$. Thus, the incoherent system is linear in intensity, whereas the coherent system is highly nonlinear in that quantity \cite{Partial_Con1}. More information on how the intensity is obtained under partially coherent illumination can be found in Appendix A.
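As a minimal numerical illustration of this distinction (a hypothetical 1D example, not the code used in this work), one can check that the intensities of partial images add in the incoherent case but not in the coherent case, where the fields interfere:

```python
import numpy as np

# Hypothetical 1D sketch: incoherent imaging is linear in intensity,
# coherent imaging is not.
n = 256
x = np.arange(n)
h = np.exp(-0.5 * ((x - n // 2) / 3.0) ** 2)      # amplitude PSF (Gaussian)
psf2 = np.abs(h) ** 2 / np.sum(np.abs(h) ** 2)    # normalized intensity PSF

def conv(a, b):
    """Circular convolution via FFT; b is centered at n//2."""
    return np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(np.fft.ifftshift(b))))

# two non-overlapping slits as objects
t1 = np.where((x > 100) & (x < 110), 1.0, 0.0)
t2 = np.where((x > 114) & (x < 124), 1.0, 0.0)

# incoherent: intensities of the coherent partial images simply add
I_inc_sum = conv(np.abs(t1) ** 2, psf2) + conv(np.abs(t2) ** 2, psf2)
I_inc_both = conv(np.abs(t1 + t2) ** 2, psf2)
print(np.allclose(I_inc_both, I_inc_sum))         # True: linear in intensity

# coherent: the blurred fields overlap and interfere, so intensities do not add
I_coh_sum = np.abs(conv(t1, h)) ** 2 + np.abs(conv(t2, h)) ** 2
I_coh_both = np.abs(conv(t1 + t2, h)) ** 2
print(np.allclose(I_coh_both, I_coh_sum))         # False: cross term appears
```

The discrepancy in the coherent case is exactly the interference cross term $2\,\mathrm{Re}[(h\otimes t_1)(h\otimes t_2)^*]$, which is the bilinear contribution neglected later for weak objects.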
Because the image formation described above is linear in neither amplitude nor intensity, the mathematical derivation of phase recovery becomes more complicated for a partially coherent system \cite{Partial_Con1,Partial_Con2}. To simplify the theoretical modeling, one approach is to assume that the observed sample is a weak phase object, whose complex amplitude can be expanded to first order as:
\begin{equation}\label{1}
t\left( {\bf{r}} \right) \equiv a\left( {\bf{r}} \right)\exp \left[ {i\phi \left( {\bf{r}} \right)} \right] \approx \left[ {{a_0} + \Delta a\left( {\bf{r}} \right)} \right]\left[ {1 + i\phi \left( {\bf{r}} \right)} \right] \approx {a_0} + \Delta a\left( {\bf{r}} \right) + i{a_0}\phi \left( {\bf{r}} \right)
\end{equation}
where $a\left( {\bf{r}} \right) = {a_0} + \Delta a\left( {\bf{r}} \right)$ is the amplitude with mean value ${a_0}$, and $\phi \left( {\bf{r}} \right)$ is the phase distribution. Taking the Fourier transform of $t$ and multiplying it by its conjugate, the interference terms of the object function (bilinear terms) can be neglected, because the scattered light is weak compared with the un-scattered light for a weak phase object. The complex conjugate multiplication can then be approximated as:
\begin{equation}\label{2}
\begin{aligned}
T\left( {{{\bf{u}}_1}} \right){T^*}\left( {{{\bf{u}}_2}} \right) = & a_0^2\delta \left( {{{\bf{u}}_1}} \right)\delta \left( {{{\bf{u}}_2}} \right) + {a_0}\delta \left( {{{\bf{u}}_2}} \right)\left[ {\Delta \widetilde a\left( {{{\bf{u}}_1}} \right) + i{a_0}\widetilde \phi \left( {{{\bf{u}}_{{1}}}} \right)} \right] \\
&+ {a_0}\delta \left( {{{\bf{u}}_1}} \right)\left[ {\Delta {{\widetilde a}^*}\left( {{{\bf{u}}_2}} \right) - i{a_0}{{\widetilde \phi }^*}\left( {{{\bf{u}}_2}} \right)} \right].
\end{aligned}
\end{equation}
The approximation used in Eq. (\ref{2}) corresponds to the first-order Born approximation, which is commonly used in optical diffraction tomography \cite{ODT0,ODT1}. When the two cross-correlated points coincide in the frequency domain, the intensity image under a partially coherent field for a weak object can be rewritten as the following equation by substituting Eq. (\ref{2}) into Eq. (\ref{27}) in Appendix A:
\begin{equation}\label{3}
I\left( {\bf{r}} \right) = a_0^2TCC\left( {0;0} \right) + 2{a_0}{\mathop{\rm Re}\nolimits} \left\{ {\int {TCC\left( {{\bf{u}};0} \right)\left[ {\Delta \widetilde a\left( {\bf{u}} \right) + i{a_0}\widetilde \phi \left( {\bf{u}} \right)} \right]\exp \left( {i2\pi {\bf{ru}}} \right)d{\bf{u}}} } \right\}
\end{equation}
where $TCC^{\rm{*}}\left( {0;{\bf{u}}} \right)$ equals $TCC\left( {{\bf{u}};0} \right)$ owing to the conjugate symmetry of the transmission cross-coefficient (TCC). The intensity contributions of the various system components (e.g., source and object) are separated and decoupled in Eq. (\ref{3}), and $TCC\left( {{\bf{u}};0} \right)$ can be expressed as the WOTF:
\begin{equation}\label{4}
WOTF\left( \mathbf{u} \right)\equiv TCC\left( \mathbf{u};0 \right)=\iint{S\left( {{\mathbf{u}}^{'}} \right)}P^*\left( {{\mathbf{u}}^{'}} \right)P\left( {{\mathbf{u}}^{'}}+\mathbf{u} \right)d{{\mathbf{u}}^{'}}
\end{equation}
where ${\mathbf{u}}^{'}$ represents the variable in Fourier polar coordinates. The WOTF is real and even as long as the distribution of the source $S\left( {\bf{u}} \right)$ or the objective pupil $P\left( {\bf{u}} \right)$ is axisymmetric; thus the intensity image on the in-focus plane gives no phase contrast, only absorption contrast. Some asymmetric illumination methods produce phase contrast in the in-focus intensity image by breaking the symmetry of $S\left( {\bf{u}} \right)$ or $P\left( {\bf{u}} \right)$; prominent examples are differential phase contrast microscopy \cite{Axisys,QP_LED1} and partitioned or programmable aperture microscopy \cite{Program_micro1,Program_micro2}. Defocusing the optical system along the $z$ axis, another and more convenient way to produce phase contrast, introduces an imaginary part into the pupil function:
\begin{equation}\label{5}
P\left( {\bf{u}} \right) = \left| {P\left( {\bf{u}} \right)} \right|{e^{ikz\sqrt {1 - {\lambda ^2}{{\left| {\bf{u}} \right|}^2}} }}, \left| {\bf{u}} \right|\lambda \le 1
\end{equation}
where $z$ is the defocus distance along the optical axis. Substituting the complex pupil function into Eq. (\ref{4}) yields a complex WOTF:
\begin{equation}\label{6}
WOTF\left( {\bf{u}} \right) = \iint{ S\left( {{{\bf{u}}^{'}}} \right)\left| {{P^*}\left( {{{\bf{u}}^{'}}} \right)} \right|\left| {P\left( {{{\bf{u}}^{'}} + {\bf{u}}} \right)} \right|\exp \left[ {ikz\left( { - \sqrt {1 - {\lambda ^2}{{\left| {{{\bf{u}}^{'}}} \right|}^2}} {\rm{ + }}\sqrt {1 - {\lambda ^2}{{\left| {{\bf{u}}{\rm{ + }}{{\bf{u}}^{'}}} \right|}^2}} } \right)} \right]d{{\bf{u}}^{'}}}
\end{equation}
The transfer functions of the amplitude and phase components correspond to the real and imaginary parts of the WOTF, respectively:
\begin{equation}\label{7}
\begin{aligned}
& {{H}_{A}}\left( \mathbf{u} \right)=2{{a}_{0}}\operatorname{Re}\left[ WOTF\left( \mathbf{u} \right) \right] \\
& {{H}_{P}}\left( \mathbf{u} \right)=2{{a}_{0}}\operatorname{Im}\left[ WOTF\left( \mathbf{u} \right) \right].
\end{aligned}
\end{equation}
\begin{figure}[!b]
\centering
\includegraphics[width=11.5cm]{Fig1.jpg}
\caption{2D images of the PTF for different types of axisymmetric sources under weak defocusing conditions, and the line profiles of the TIE and the PTF for various defocus distances.}
\label{}
\end{figure}
Considering that an upright incident coherent source is a special case of oblique illumination, the WOTF for an oblique source will be derived within the same framework for these two types of illumination. Consider a pair of symmetric ideal light spots on the source pupil plane, each at a distance ${\bm{\rho}_s}$ (normalized spatial frequency) from the center point. The intensity distribution of this source pupil can be expressed as:
\begin{equation}\label{8}
S\left( \mathbf{u} \right)=\delta \left( \mathbf{u}-{{\bm{\rho }}_{s}} \right)\text{+}\delta \left( \mathbf{u}+{{\bm{\rho }}_{s}} \right)
\end{equation}
Substituting this source pupil function into Eq. (\ref{6}) results in a complex (but even) WOTF for the oblique case
\begin{equation}\label{9}
\begin{aligned}
WOT{{F}_{obl}}\left( \mathbf{u} \right)\text{=} & \left| P\left( \mathbf{u}-{{\bm{\rho }}_{{s}}} \right) \right|{{e}^{ikz\left( -\sqrt{1-{{\lambda }^{2}}{{\left| {{\bm{\rho }}_{{s}}} \right|}^{2}}}\text{+}\sqrt{1-{{\lambda }^{2}}{{\left| \mathbf{u}-{{\bm{\rho }}_{{s}}} \right|}^{2}}} \right)}} \\
& + \left| P\left( \mathbf{u}+{{\bm{\rho }}_{{s}}} \right) \right|{{e}^{ikz\left( -\sqrt{1-{{\lambda }^{2}}{{\left| {{\bm{\rho }}_{{s}}} \right|}^{2}}}\text{+}\sqrt{1-{{\lambda }^{2}}{{\left| \mathbf{u}+{{\bm{\rho }}_{{s}}} \right|}^{2}}} \right)}}
\end{aligned}
\end{equation}
where $\left| {P\left( {{\bf{u}} - {{\bm{\rho }}_{s}}} \right)} \right|$ and $\left| {P\left( {{\bf{u}} + {{\bm{\rho }}_{s}}} \right)} \right|$ are the pair of aperture functions shifted by the oblique coherent source in Fourier space. The aperture function for a circular objective pupil with normalized spatial radius ${{\bm{\rho }}_p}$ is given by
\begin{equation}\label{10}
\left| P\left( \mathbf{u} \right) \right|=
\left\{
\begin{aligned}
& 1,\quad \text{if }\left| \mathbf{u} \right|\le {{\bm{\rho }}_{p}} \\
& 0, \quad \text{if }\left| \mathbf{u} \right|>{{\bm{\rho }}_{p}}.
\end{aligned}
\right.
\end{equation}
In the coherent case (${{\bm{\rho }}_{{s}}} = 0$), the WOTF simplifies greatly:
\begin{equation}\label{11}
WOT{{F}_{coh}}\left( \mathbf{u} \right)\text{=}\left| P\left( \mathbf{u} \right) \right|{{e}^{ikz\left( -1\text{+}\sqrt{1-{{\lambda }^{2}}{{\left| \mathbf{u} \right|}^{2}}} \right)}}.
\end{equation}
The two aperture functions overlap each other in this situation, so the final coherent WOTF takes only half the value. The absorption contrast and phase contrast are given by the real and imaginary parts of $WOT{F_{coh}}$ via Euler's formula, as shown in Eq. (\ref{7}). By further invoking the paraxial approximation and replacing $\sqrt {1 - {\lambda ^2}{{\bf{u}}^2}}$ with $1 - {\lambda ^2}{{\bf{u}}^2}/2$, the imaginary part of $WOT{F_{coh}}$ can be written as the sine term $\sin \left( {\pi \lambda z{{\left| {\bf{u}} \right|}^2}} \right)$. Under weak defocusing, this transfer function can be further approximated by a parabolic function
\begin{equation}\label{12}
{{H}_{p}}{{\left( \mathbf{u} \right)}_{TIE}}\text{=} \left| P\left( \mathbf{u} \right) \right| \sin \left( {\pi \lambda z{{\left| {\bf{u}} \right|}^2}} \right) \approx \left| P\left( \mathbf{u} \right) \right| \pi \lambda z{\left| {\bf{u}} \right|^2}
\end{equation}
This Laplacian operator corresponds to the PTF of the TIE in the Fourier domain, and the two-dimensional (2D) image of the WOTF for a coherent source under weak defocusing is shown in Fig. 1(a1). The line profiles of the TIE and the PTF for various defocus distances are illustrated in Fig. 1(a2). The transfer function profile of the TIE is consistent with the PTF at low frequency for a weak defocus distance (0.5 $\mu$m), and the coherent transfer function approaches the TIE as the defocus distance decreases. In other words, the TIE is the weak-defocus limit of the coherent transfer function.
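This weak-defocus limit is easy to check numerically; the following sketch (wavelength and defocus values assumed for illustration) compares the sine-type coherent PTF with the TIE parabola at low spatial frequencies:

```python
import numpy as np

# Numerical check of the weak-defocus limit: the coherent PTF
# sin(pi*lambda*z*|u|^2) reduces to the TIE parabola pi*lambda*z*|u|^2
# at low spatial frequency. Parameters are assumed for illustration.
lam = 0.53e-6                       # wavelength [m]
z = 0.5e-6                          # weak defocus distance [m]
u = np.linspace(0.0, 0.5e6, 200)    # low spatial frequencies [1/m]

ptf_coh = np.sin(np.pi * lam * z * u ** 2)   # sine term of the coherent PTF
ptf_tie = np.pi * lam * z * u ** 2           # TIE parabola

# the sine argument stays well below 1 rad, so the two curves nearly coincide
print(np.pi * lam * z * u.max() ** 2)              # small argument
print(np.max(np.abs(ptf_coh - ptf_tie)) < 1e-2)    # True
```

Since $\sin x - x = O(x^3)$, the deviation grows with defocus distance and spatial frequency, which is why the agreement in Fig. 1(a2) degrades away from the origin.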
On the other hand, these two coherent points do not coincide with each other at the center of the source plane, as shown in Figs. 1(b1) and (c1). The imaginary part of Eq. (\ref{9}) is limited by the respective pupil functions, so the PTF for an oblique point source can be written as:
\begin{equation}\label{13}
\begin{aligned}
{{H}_{p}}{{\left( \mathbf{u} \right)}_{obl}}\text{=}& \frac{1}{2} \left| P\left( \mathbf{u}-{{\bm{\rho }}_{{s}}} \right) \right|\sin \left[ kz\left( \sqrt{1-{{\lambda }^{2}}{{\left| \mathbf{u}-{{\bm{\rho }}_{{s}}} \right|}^{2}}}-\sqrt{1-{{\lambda }^{2}}{{\left| {{\bm{\rho }}_{{s}}} \right|}^{2}}} \right) \right] \\
& + \frac{1}{2}\left| P\left( \mathbf{u}+{{\bm{\rho }}_{{s}}} \right) \right|\sin \left[ kz\left( \sqrt{1-{{\lambda }^{2}}{{\left| \mathbf{u}+{{\bm{\rho }}_{{s}}} \right|}^{2}}}-\sqrt{1-{{\lambda }^{2}}{{\left| {{\bm{\rho }}_{{s}}} \right|}^{2}}} \right) \right]
\end{aligned}
\end{equation}
Figures 1(b2) and (c2) additionally show the PTF curves for different ${{\bm{\rho }}_s}$ and defocus distances. The cutoff frequency of the transfer function is determined by the shifted aperture functions, and the achievable imaging resolution, equal to ${{\bm{\rho }}_{{p}}} + {{\bm{\rho }}_{{s}}}$, grows with ${{\bm{\rho}}_{s}}$ in the oblique direction. Nevertheless, the transfer function profile has two jump edges due to the overlap and superposition of the two shifted objective pupil functions. These jump edges induce zero-crossings and degrade the frequency response around those points, so they should be avoided as much as possible. When this pair of point sources matches the objective pupil (${{\bm{\rho }}_{{p}}} \approx {{\bm{\rho }}_{{s}}}$), not only can the cutoff frequency of the PTF be extended to twice the coherent diffraction limit, but the frequency response of the PTF is also roughly constant in a specific direction under this axisymmetric oblique illumination.
\subsection{Validation of discrete annular LED illumination}
\begin{figure}[!b]
\centering
\includegraphics[width=12.5cm]{Fig2.jpg}
\caption{(a-c) 2D images of the PTF and line profiles for three different types of discrete annular illumination patterns at various defocus distances. (d) Traditional circular diaphragm aperture and the corresponding PTF.}
\label{}
\end{figure}
For any axisymmetric shape of partially coherent illumination, a given illumination pattern can be discretized into many coherent point sources of finite physical size, including oblique and upright incident light points. The image formation of an optical microscopic system under a partially coherent field can be simply understood as a convolution with a magnified replica of each discrete coherent source. Moreover, for optical imaging with K\"ohler illumination, this process coincides with the incoherent superposition of the intensities of the coherent partial images arising from all discrete light source points \cite{Partial_Con1,Partial_decomp}. As the condenser aperture iris diaphragm is opened, the maximum achievable imaging resolution of the intensity image increases and the depth of field (DOF) becomes shallower. However, the phase contrast (as well as the absorption contrast) of the defocused image becomes weak, and the attenuated phase effect in the captured intensity image reduces the SNR of the phase reconstruction as the coherence parameter $s$ continues to grow \cite{WOTF1,AI_TIE}. Most microscope instruction manuals therefore recommend setting the parameter $s$ between 0.7 and 0.8 for proper image resolution and contrast.
To overcome this tradeoff between image contrast and resolution, we present a highly efficient programmable annular illumination, which differs from the traditional circular diaphragm aperture for QPI microscopy. The LED array is placed at the front focal plane of the condenser to illuminate the specimen, and each single LED can be controlled separately. A test image of 512 $\times$ 512 pixels with a pixel size of 0.176 $\mu$m $\times$ 0.176 $\mu$m, used to simulate the discrete LED array, and an objective with 0.75 NA are employed for the validation of annular LED illumination. When a pair of oblique illumination points lies on the edge of the source pupil, the imaging resolution is twice the objective NA in the oblique direction, as shown in Fig. 1(c). Thus, three different types of discrete annular patterns and one circular pattern are compared through their WOTFs under the same system parameters. The annular source can be written as a summation of delta functions
\begin{equation}\label{14}
S({\bf{u}}) = \sum\limits_{i = 1}^N {\delta ({\bf{u}} - {{\bf{u}}_i})},\quad \left| {{{\bf{u}}_i}} \right| \approx \left| {{{\bm{\rho }}_p}} \right|
\end{equation}
where $N$ is the number of all discrete light points on the source plane.
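As an illustrative sketch (all parameter values assumed, not taken from the experiments), the imaginary part of the WOTF for such a discrete annular source can be evaluated directly from Eq. (\ref{6}) as a sum over the coherent source points; its support indeed extends to twice the objective cutoff frequency:

```python
import numpy as np

# Sketch: imaginary part of the WOTF (the PTF) for a discrete annular
# source of N LEDs on the edge of the objective pupil, evaluated as a
# sum over coherent source points. All parameters are assumed.
lam = 0.53e-6              # wavelength [m]
NA_obj = 0.75
z = 1.0e-6                 # defocus distance [m]
k = 2 * np.pi / lam
rho_p = NA_obj / lam       # pupil cutoff frequency [1/m]

n = 128
f = np.linspace(-2.2 * rho_p, 2.2 * rho_p, n)
fx, fy = np.meshgrid(f, f)

def wotf(src_pts):
    """Sum of defocused, pupil-shifted contributions of each source point."""
    w = np.zeros((n, n), dtype=complex)
    for sx, sy in src_pts:
        root_s = np.sqrt(max(0.0, 1 - lam ** 2 * (sx ** 2 + sy ** 2)))
        shifted = np.hypot(fx + sx, fy + sy)
        inside = shifted <= rho_p                      # shifted pupil support
        root_u = np.sqrt(np.clip(1 - lam ** 2 * shifted ** 2, 0.0, None))
        w += inside * np.exp(1j * k * z * (root_u - root_s))
    return w / len(src_pts)

N = 8                                                  # LEDs on the annulus
angles = 2 * np.pi * np.arange(N) / N
annular = [(rho_p * np.cos(a), rho_p * np.sin(a)) for a in angles]
ptf = np.imag(wotf(annular))

radius = np.hypot(fx, fy)
# PTF support reaches beyond the coherent cutoff, up to 2*rho_p ...
print(np.any(np.abs(ptf[(radius > 1.5 * rho_p) & (radius < 2.0 * rho_p)]) > 1e-6))
```

Replacing `annular` with a single on-axis point recovers the coherent WOTF of Eq. (\ref{11}), and a filled disk of points gives the circular-aperture case, so the same routine covers all the patterns compared in Fig. 2.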
Figure 2 shows the 2D images and line profiles of the imaginary part of the WOTF for various annular illumination patterns and defocus distances. In Fig. 2(a) there are four LEDs at the top-bottom and left-right of the source plane, so double the imaging resolution of the objective NA is obtained in the vertical and horizontal directions. Eight LEDs cover twice the cutoff frequency of the objective in four different directions, and the PTF image for eight LEDs appears as the superposition of the transfer functions of several pairs of axisymmetric oblique sources. For the continuous annular illumination, shown in Fig. 2(c), the final PTF provides isotropic imaging resolution in all directions. In addition to the above three annular shapes, the PTF of a circular illumination aperture is illustrated in Fig. 2(d); its cutoff frequency is also extended to 2 NA of the objective. However, the value of the transfer function for the circular aperture is dramatically diminished compared with the three annular shapes. This corresponds to the phenomenon that a larger aperture diaphragm provides higher imaging resolution, but the phase contrast of the defocused image becomes too weak to capture. The condenser aperture of circular illumination must be stopped down to produce appreciable phase contrast, but this is not necessary for annular illumination. It is worth noting that the number $N$ of LEDs located on the edge of the source pupil should be as large as possible for isotropic imaging resolution in all directions; we chose eight LEDs as the proposed illumination pattern in view of the finite spacing between two adjacent LED elements.
From the plots of the PTF for various aperture shapes and defocus distances, all four illumination patterns have twice the frequency bandwidth of the objective NA, but the response of circular illumination is too weak. The phase information can hardly be transferred into intensity via defocusing when the illumination NA is large, and the weak phase contrast of the defocused intensity image leads to a poor SNR. The PTF for large defocus distances has more zero-crossings than for weak defocusing, owing to the oscillation of the imaginary part of the WOTF, and it is difficult to recover the signal component from the noise around these points. Thereby, the proposed annular LED illumination pattern not only extends the imaging resolution to double the NA in most directions but also provides a robust phase contrast response for the defocused intensity image.
\subsection{QPI via TIE and WOTF inversion }
In the paraxial regime, wave propagation is mathematically described by the Fresnel diffraction integral \cite{Partial_Con1}, and the relationship between intensity and phase during wave propagation is described by the TIE \cite{TIE1}:
\begin{equation}\label{15}
-k\frac{\partial{I(\bm{r})}}{\partial{z}} = \nabla_\perp\bm\cdot[I(\bm{r})\nabla_\perp\phi(\bm{r})]
\end{equation}
where $k$ is the wave number ${2\pi }/{\lambda }$, $I(\bm{r})$ is the intensity image on the in-focus plane, $\nabla_\perp$ denotes the gradient operator over the transverse direction $\bm{r}$, $\bm\cdot$ denotes the dot product, and $\phi(\bm{r})$ represents the phase of the object. The left-hand side of the TIE is the spatial derivative of the intensity along the $z$ axis on the in-focus plane. The longitudinal intensity derivative $\partial{I}/\partial{z}$ can be estimated through the difference formula ${\left( {{I_1} - {I_2}} \right)}$\slash${2\Delta z}$, where $I_1$ and $I_2$ are the two captured defocused intensity images and $\Delta z$ is the defocus distance of the axially displaced images. By introducing Teague's auxiliary function $\nabla_\perp\psi(\bm{r}) = I(\bm{r})\nabla_\perp\phi(\bm{r})$, the TIE is converted into the following two Poisson equations:
\begin{equation}\label{16}
-k\frac{\partial{I(\bm{r})}}{\partial{z}} = {\nabla_\perp}^2\psi
\end{equation}
and
\begin{equation}\label{17}
\nabla_\perp\bm\cdot(I^{-1}\nabla_\perp\psi) = {\nabla_\perp}^2\phi
\end{equation}
The solution for $\psi$ is obtained by solving the first Poisson equation, Eq. (\ref{16}), from which the phase gradient follows. The second Poisson equation, Eq. (\ref{17}), is used for phase integration, and the quantitative phase $\phi(\bm{r})$ can be uniquely determined from these two Poisson equations. For the special case of a pure phase object (generally unstained cells and tissues), the intensity image on the in-focus plane can be treated as a constant because unstained cells are almost transparent, and the TIE simplifies to a single Poisson equation:
\begin{equation}\label{18}
- k\frac{{\partial I\left( {\bf{r}} \right)}}{{\partial z}} = I\left( {\bf{r}} \right){\nabla ^2}\phi \left( {\bf{r}} \right)
\end{equation}
Then, a fast Fourier transform (FFT) solver \cite{TIE_Appl2,TIE_Appl3} is applied to Eq. (\ref{18}), and the forward form of the TIE in the Fourier domain corresponds to a Laplacian filter
\begin{equation}\label{19}
\frac{{{{ \widetilde{I_1} }}\left( {\bf{u}} \right) - {{ \widetilde{I_2} }}\left( {\bf{u}} \right)}}{4{ \widetilde{I} \left( {\bf{u}} \right)}} = \left( { \pi \lambda z{{\left| {\bf{u}} \right|}^2}} \right)\widetilde{\phi}(\bf{u})
\end{equation}
The inverse Laplacian operator $1\slash{\pi \lambda z{{\left| {\bf{u}} \right|}^2}}$ is analogous to an inversion of the weak-defocus CTF or PTF in the coherent limit.
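A minimal numerical sketch of this inversion follows (phase object, pixel size, and regularization value are assumed; the regularization is kept tiny because the simulated data here are noise-free, so its only role is to avoid division by zero at the DC term):

```python
import numpy as np

# Sketch of the FFT-based TIE inversion: forward-simulate the intensity
# difference of a pure phase object with the Laplacian filter of
# Eq. (19), then invert it with Tikhonov-type regularization.
lam, z = 0.53e-6, 0.5e-6           # wavelength and defocus [m] (assumed)
n, dx = 256, 0.176e-6              # grid size and pixel size [m] (assumed)
f = np.fft.fftfreq(n, d=dx)
FX, FY = np.meshgrid(f, f)
lap = np.pi * lam * z * (FX ** 2 + FY ** 2)   # TIE Laplacian filter

# smooth test phase: a Gaussian bump
x = (np.arange(n) - n // 2) * dx
X, Y = np.meshgrid(x, x)
phi = 0.5 * np.exp(-(X ** 2 + Y ** 2) / (2 * (2e-6) ** 2))

# forward model of Eq. (19): (I1~ - I2~)/(4 I~) = lap * phi~
rhs = lap * np.fft.fft2(phi)

alpha = 1e-18                      # tiny: data are noise-free here
phi_rec = np.real(np.fft.ifft2(rhs * lap / (lap ** 2 + alpha)))

# recovery is exact up to the unrecoverable mean (DC) phase level
err = np.max(np.abs(phi_rec - (phi - phi.mean())))
print(err < 1e-6)   # True
```

With noisy data, $\alpha$ would be raised to suppress the low-frequency amplification of $1/(\pi\lambda z|\mathbf{u}|^2)$, at the cost of attenuating genuine low-frequency phase.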
\begin{figure}[!b]
\centering
\includegraphics[width=13.5cm]{Fig3.jpg}
\caption{Various noise-free reconstruction results for a simulated phase resolution target under different illumination patterns. The parameters of the optical system and the pixel size of the camera are set to satisfy the Nyquist sampling criterion, and the sampling frequency of the camera equals twice the imaging resolution of the objective NA. Scale bar, 15 $\mu$m.}
\label{}
\end{figure}
For partially coherent illumination, the traditional form of the TIE is not suitable for phase retrieval, since the equation contains no parameters of the imaging system. To take the effects of partial coherence and the imaging system into account, the Laplacian operator $\pi \lambda z{\left| {\bf{u}} \right|^2}$ of the TIE in Fourier space should be replaced by the PTF of the arbitrary axisymmetric source. The ATF ${H_A}\left( {\bf{u}} \right)$ and the PTF ${H_{\rm{P}}}\left( {\bf{u}} \right)$ are determined by the real and imaginary parts of the WOTF, respectively, as shown in Eq. (\ref{7}). The ATF is an even function of the defocus distance owing to the nature of the cosine function, while the PTF is always an odd function of the defocus distance. Provided that the defocus distances of the two captured intensity images are equal and their defocus directions are opposite, the subtraction of the two intensity images gives no amplitude contrast but twice the phase contrast. Therefore, the in-focus image ${I(\bm{r})}$ is treated as the background intensity, and the forward form of the WOTF can be expressed as:
\begin{equation}\label{20}
\frac{{{{ \widetilde{I_1} }}\left( {\bf{u}} \right) - {{ \widetilde{I_2} }}\left( {\bf{u}} \right)}}{4{ \widetilde{I} \left( {\bf{u}} \right)}} = {\mathop{\rm Im}\nolimits} \left[ {WOTF\left( {\bf{u}} \right)} \right] \widetilde{\phi}(\bf{u})
\end{equation}
Equation (\ref{20}) makes the relationship between the phase and the PTF linear, and QPI can then be realized by the inversion of the WOTF in Fourier space
\begin{equation}\label{21}
\phi \left( {\bf{r}} \right) = {{\mathscr{F}}^{ - 1}} \left\{{ \frac{{{{ \widetilde{I_1} }}\left( {\bf{u}} \right) - {{ \widetilde{I_2} }}\left( {\bf{u}} \right)}}{4{ \widetilde{I} \left( {\bf{u}} \right)}} {\frac{{{\mathop{\rm Im}\nolimits} \left[ {WOTF\left( {\bf{u}} \right)} \right]}}{{{{\left| {{\mathop{\rm Im}\nolimits} \left[ {WOTF\left( {\bf{u}} \right)} \right]} \right|}^2} + \alpha }}} } \right\}
\end{equation}
where ${{\mathscr{F}}^{ - 1}}$ denotes the inverse Fourier transform, and $\alpha$ is the Tikhonov regularization parameter, commonly used in Wiener filtering to limit the maximum amplification and avoid division by zero where the WOTF vanishes.
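The inversion in Eq. (\ref{21}) is straightforward to implement numerically. The following numpy sketch (our illustration, not the authors' code) applies the Tikhonov-regularized filter to the difference spectrum; for simplicity it divides by the mean in-focus intensity, a common approximation when the background is nearly uniform:

```python
import numpy as np

def invert_wotf(I1, I2, I0, im_wotf, alpha=1e-3):
    """Sketch of the Tikhonov-regularized phase inversion of Eq. (21).

    I1, I2  : intensity images at equal and opposite defocus
    I0      : in-focus (background) intensity image
    im_wotf : imaginary part of the WOTF on the FFT frequency grid
    alpha   : Tikhonov regularization parameter
    """
    # Divide by the mean in-focus intensity -- a common approximation
    # when the background intensity is nearly uniform.
    spec = (np.fft.fft2(I1) - np.fft.fft2(I2)) / (4.0 * I0.mean())
    # Wiener-type filter: Im[WOTF] / (|Im[WOTF]|^2 + alpha)
    filt = im_wotf / (np.abs(im_wotf) ** 2 + alpha)
    return np.real(np.fft.ifft2(spec * filt))
```

With a well-conditioned (non-vanishing) transfer function the filter reduces to a plain division; $\alpha$ only matters near the zeros of the WOTF.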
First, we apply our method to the phase reconstruction of a simulated resolution target. The resolution test image is used as an example phase object defined on a square region of 512 $\times$ 512 pixels with a pixel size of 0.176 $\mu$m. The illumination wavelength is 530 nm, and the objective NA is 0.75. The captured defocused intensity images are noise-free, and the defocus distance is 0.5 $\mu$m. The WOTF for various illumination patterns can be derived using Eq. (\ref{9}) and Eq. (\ref{11}), and the inversion of the WOTF is applied to the Fourier transform of the captured intensity stack. Detailed comparisons of the reconstruction results of the resolution target under different illumination patterns are shown in Fig. 3. The objective NA and the pixel size of the camera are set to satisfy the Nyquist sampling criterion, so that twice the imaging resolution of the objective NA equals the maximum sampling frequency of the camera. The center region of the simulated resolution target is enlarged and marked with a dashed rectangle. As predicted by the WOTF of the corresponding illumination pattern, the recovered spectrum is determined by the cutoff frequency of the WOTF. In addition, the phase profiles of the resolution elements in the smallest group of the simulated target are plotted in the last row of the sub-figures: the edges of the resolution elements are distorted and blurred under coherent illumination, whereas they remain distinguishable for the other three aperture patterns.
To characterize the noise sensitivity of the proposed method, another simulation is presented as well. The system parameters are the same as in the previous simulation, but each defocused intensity image is corrupted by Gaussian noise with a standard deviation of 0.1. The shape of the reconstructed Fourier spectrum matches the non-zero region of the PTF, and the final retrieved phase is evaluated by the root-mean-square error (RMSE). From this diagram, the cutoff frequency of coherent illumination is restricted to the coherent diffraction limit, but the other three source patterns extend the cutoff frequency to twice the imaging resolution of the objective NA. Although coherent illumination provides the maximum PTF value (approximately unity), the slow rise of the PTF response at low frequencies leads to over-amplification of noise, and cloud-like artifacts are superimposed on the final reconstructed phase. The WOTF values of the traditional circular aperture are too close to zero, resulting in over-amplification of noise at both low and high frequencies. Therefore, the proposed annular illumination method provides not only twice the resolution of the objective NA but also a robust transfer-function response, and an accurate and stable quantitative phase of the test object is finally obtained.
\begin{figure}[!htp]
\centering
\includegraphics[width=13.5cm]{Fig4.jpg}
\caption{Phase reconstruction results under Gaussian noise with a standard deviation of 0.1. For coherent illumination, the transfer-function response rises slowly at low frequencies, leading to over-amplification of noise and cloud-like artifacts superimposed on the reconstructed phases. For the traditional circular aperture, the WOTF values are too close to zero, leading to over-amplification of noise at both low and high frequencies. Scale bar, 15 $\mu$m.}
\label{}
\end{figure}
\section{Experimental setup}
\begin{figure}[!htp]
\centering
\includegraphics[width=13.5cm]{Fig5.jpg}
\caption{(a) Schematic diagram of the highly efficient quantitative phase microscope. (b-c) The annular pattern displayed on the LED array; the size of the annulus is matched to the objective pupil in the back focal plane. (d) Photograph of the whole imaging system, with the crucial parts of the setup marked with yellow boxes. Scale bar represents 300 $\mu$m.}
\label{}
\end{figure}
As depicted in Fig. 5(a), the highly efficient quantitative phase microscope is composed of three major components: a programmable LED array, a microscopic imaging system, and a CMOS camera. The commercial surface-mounted LED array is placed at the front focal plane of the condenser as the illumination source, so the light emitted from the condenser lens for a single LED can be treated approximately as a plane wave. Each LED provides approximately spatially coherent quasi-monochromatic illumination with a narrow bandwidth (central wavelength $\lambda$ = 530 nm, $\sim$ 20 nm bandwidth). The distance between adjacent LED elements is 1.67 mm, and only a fraction of the whole array is used for programmable illumination. The array is driven dynamically by a custom-built LED controller board based on a Field Programmable Gate Array (FPGA) unit, which provides the various illumination patterns.
In our work, the discrete annular LED illumination pattern matched to the objective pupil is displayed on the array, as shown in Fig. 5(b). Figure 5(c) is taken in the objective back focal plane by inserting a Bertrand lens into one of the eyepiece observation tubes or by removing the eyepiece tubes. The microscope is equipped with a scientific CMOS (sCMOS) camera (PCO.edge 5.5, 6.5 $\mu$m pixel pitch) and a universal plan objective (Olympus, UPlan 20$\times$, NA = 0.4). A universal plan super-apochromat objective (Olympus, UPlan SAPO 20$\times$, NA $=$ 0.75) and a higher-sampling-rate detector (2.2 $\mu$m pixel pitch) are also utilized for higher-resolution imaging. A photograph of the whole imaging system is shown in Fig. 5(d), with the crucial parts of the setup marked with yellow boxes.
\section{Results}
\subsection{Quantitative characterization of control samples}
\begin{figure}[!t]
\centering
\includegraphics[width=13cm]{Fig6.jpg}
\caption{(a1-b1) Reconstructed phase distributions of a polystyrene microbead with 8 $\mu$m diameter and a blazed transmission grating with 3.33 $\mu$m period. (a2-b2) Measured quantitative phase line profiles for a single bead and a few grating periods. Theoretical line profiles (assuming 90$^\text{o}$ groove angles) are also plotted for reference. Scale bars denote 10 $\mu$m and 3 $\mu$m, respectively.}
\label{}
\end{figure}
To validate the accuracy of the proposed QPI approach based on annular LED illumination, a polystyrene microbead (Polysciences, $n$=1.59) with 8 $\mu$m diameter immersed in oil (Cargille, $n$=1.58) is measured using the 0.4 NA objective and the sCMOS camera. The sample is slightly defocused, and three intensity images are recorded at the $\pm$ 1 $\mu$m planes and the in-focus plane. By invoking the inversion of the WOTF, the reconstructed quantitative phase image of the bead is obtained, as shown in Fig. 6(a1), which is a sub-region of the whole field of view (FOV). The horizontal line profile through the center of a single bead is shown as the solid brown line in Fig. 6(a2), and the blue dashed line represents the theoretical phase of the microbead. Of interest in these results is the excellent agreement between the magnitude and shape of the compared bead profiles. There is still some slight high-frequency noise in the retrieved phase image because the small WOTF values amplify noise near the cutoff frequency, but these artifacts do not affect the accuracy and feasibility of our method.
Furthermore, a visible blazed transmission grating (Thorlabs GT13-03, grating period $\Lambda$ = 3.33 $\mu$m, blaze angle ${\theta _B}$ = 17.5$^\text{o}$) is measured using the same method and procedures. The grating is made of Schott B270 glass ($n_{glass}$ = 1.5251) and mounted face up on a glass slide with refractive-index-matching water ($n_{water}$ = 1.33) and a thin no. 0 coverslip. Considering the large pixel size of the sCMOS camera and the high density of the grating, a higher-NA objective (NA = 0.75) and a higher-sampling-rate detector (2.2 $\mu$m pixel size) are utilized for imaging this grating. The measured phase image is shown in Fig. 6(b1) for a 23.7 $\mu$m $\times$ 15.6 $\mu$m rectangular patch. The theoretical profile, assuming 90$^\text{o}$ groove angles, is plotted as the blue solid line for reference, and a few periods of the associated measured profile are plotted as the brown dot-solid line with no interpolation. The two curves agree well with each other except at the phase jump edges, owing to the rapid oscillations of the grating. Thus, these two quantitative characterizations of control samples further confirm the success and accuracy of our method.
\subsection{Experimental results of biological specimens}
\begin{figure}[!htp]
\centering
\includegraphics[width=13cm]{Fig7.jpg}
\caption{(a) Quantitative reconstruction results of LC-06 with the 0.4 NA objective and 6.5 $\mu$m pixel pitch camera for coherent and discrete annular illumination. (b-c) Three enlarged sub-regions of the quantitative maps and simplified DIC images. The white arrows show line profiles taken at different positions in the cells. Scale bars equal 50 $\mu$m, 10 $\mu$m and 15 $\mu$m, respectively.}
\label{}
\end{figure}
As demonstrated by the simulation results in subsection 2.3, the developed annular LED illumination provides twice the imaging resolution of the objective NA and a noise-robust WOTF response. We also test the present reconstruction method experimentally in its intended biomedical application: unstained lung cancer cells (LC-06) are first imaged with the 0.4 NA objective and 6.5 $\mu$m pixel pitch camera. Figures 7(a1) and (a2) are the quantitative phase images of LC-06 on a square FOV for the point source and the annular source, respectively. Three representative sub-areas of the whole quantitative map are selected and enlarged for a more detailed view. The phase images of the three enlarged sub-regions are shown with a jet colormap, and the corresponding simplified DIC images are shown in Fig. 7(b) and (c).
From these quantitative phase and phase-gradient images, it is obvious that the phase imaging resolution of the annular illumination source is higher than that of the coherent one, and some tiny grains in the cytoplasm can be observed more clearly and vividly. In addition, the white arrows mark line profiles taken at two different positions in the cells, and the comparative phase profiles are presented in lines of different colors in Fig. 7. The profiles indicate a significant improvement of high-frequency features using the annular aperture compared with coherent illumination. Thus, the highest spatial frequency allowed by QPI based on annular LED illumination is effectively 0.8 NA (0.66 $\mu$m) in the phase reconstruction.
Then, our system is used for QPI of label-free HeLa cells by replacing the objective and the camera with the 0.75 NA objective and the 2.2 $\mu$m pixel size camera. The FOV is 285.1 $\times$ 213.8 $\mu$m$^\text{2}$ with a sampling rate of 0.11 $\mu$m in the object plane. Figures 8(a) and (b) show the high-resolution quantitative phase and the phase gradient in the direction of the image shear (45$^\text{o}$). As can be seen in Fig. 8(c), three sub-regions are selected by solid rectangles so that the phase images are shown without resolution loss. For this group of quantitative results, we do not repeat the resolution enhancement of annular LED illumination but instead point out some defects in the quantitative images. The background of the quantitative phase image is not ``black'' enough, which is caused by the loss of low-frequency components of the Fourier spectrum. The root cause of this problem is the finite spacing between adjacent LED elements, which leads to a mismatch between the objective pupil and the annular LED pattern. Furthermore, the PTF of the system tends to zero near zero frequency, making the recovery of low-frequency information difficult.
\begin{figure}[!t]
\centering
\includegraphics[width=13cm]{Fig8.jpg}
\caption{(a) High-resolution QPI of HeLa cells with the 0.75 NA objective. (b) Simulated DIC image. (c) Three enlarged sub-regions of the quantitative phase of HeLa cells. Scale bars equal 20 $\mu$m, 3 $\mu$m and 5 $\mu$m, respectively.}
\label{}
\end{figure}
\section{Discussion and conclusion}
In summary, we have demonstrated an effective QPI approach based on programmable annular LED illumination that achieves twice the imaging resolution of the objective NA and noise-robust reconstruction of the quantitative phase. The WOTF of an axisymmetric oblique source is derived using the concept of the TCC, and the WOTF of the discrete annular aperture is validated by the incoherent superposition of individual point sources. The inversion of the WOTF is applied to an intensity stack containing three intensity images with equal and opposite defoci, from which the quantitative phase is retrieved. The recovered phases of the simulated resolution target and of the noise-corrupted test image show that the proposed illumination pattern extends the imaging resolution to 2 NA of the objective and provides excellent noise insensitivity. Furthermore, biological samples of human cancer cells are imaged with two different objectives, and the imaging resolution of the retrieved phase is indeed enhanced compared with coherent illumination. Besides, this QPI setup is easily fitted into a conventional optical microscope after small modifications, and the programmable source makes the modulation of the annular pattern flexible and compatible without a custom-built annulus matched to the objective pupil.
There are still some important issues that require further investigation or improvement. Due to the dispersion of the LEDs and the finite spacing between adjacent LED elements, the annular illumination pattern and the pupil of the objective are not perfectly internally tangent. The mismatch between the annular aperture and the objective pupil may cause the loss of low frequencies, owing to the overlap and offset of the PTF near zero frequency; in other words, the missing low frequencies leave the background of the phase images insufficiently ``black''. Another shortcoming of this modified imaging system is that long-term time-lapse imaging of living cells is difficult on relatively low-end bright-field microscopes, such as the Olympus CX22, in contrast to our earlier work based on the IX83 microscope. To solve these problems, a special sample cuvette is required for imaging living biological cells, and additional devices may be needed to modify our setup, such as an LED array with smaller spacing and higher brightness. Despite these drawbacks, the configuration of this system takes full advantage of the compatibility and flexibility of programmable LED illumination and bright-field microscopy, and the annular illumination pattern yields quantitative demonstrations on control samples and promising results on biological specimens.
\section*{APPENDIX}
\subsection*{A. Derivation of Intensity formation under partially coherent illumination using Hopkins' formulae}
In the main text, the standard optical microscope system is simplified as an extended light source, a condenser lens, a sample, an objective lens, and a camera on the image plane. Based on Abbe's theory \cite{Abbe}, the captured image of the object at the image plane can be interpreted as the summation over all source points of the illumination. For each source point, image formation is described as a linear system by Fourier transforms and a linear filtering operation, and the electric field $E\left( {x,y} \right)$ on the camera plane can be expressed as
\begin{equation}
E\left( {x,y;{f_c},{g_c}} \right) = \iint { {t\left( {f,g} \right)h\left( {f + {f_c},g + {g_c}} \right)\exp \left[ { - i2\pi \left( {fx + gy} \right)} \right]dfdg}}
\end{equation}
where $t$ is the complex transmittance of the object, and $h$ represents the amplitude point spread function (PSF) of the imaging system. The intensity on the image plane is proportional to the squared magnitude of the electric field distribution and takes the form
\begin{equation}\label{23}
\begin{aligned}
I\left( {x,y} \right) & = \iint{S\left( {{f_c},{g_c}} \right){{\left| {E\left( {x,y;{f_c},{g_c}} \right)} \right|}^2}d{f_c}d{g_c}}\\
& = \iint{S\left( {{f_c},{g_c}} \right){{\left| {{\mathscr{F}} \left[ {t\left( {f,g} \right)h\left( {f + {f_c},g + {g_c}} \right)} \right]} \right|}^2}d{f_c}d{g_c}}
\end{aligned}
\end{equation}
where $I(x,y)$ is the intensity of the object captured at the image plane, $S( {{f_c},{g_c}})$ is the distribution of the extended light source, and ${\mathscr{F}}$ denotes the Fourier transform. By interchanging the order of integration, we can express Eq. (\ref{23}) according to Hopkins' formulation \cite{Hopk,MBorn}
\begin{equation}
\begin{aligned}
I\left( {x,y} \right) = \iiiint{} & S\left( {{f_c},{g_c}} \right)P\left( {{f^{'}} + {f_c},{g^{'}} + {g_c}} \right){P^*}\left( {{f^{''}} + {f_c},{g^{''}} + {g_c}} \right)T\left( {{f^{'}},{g^{'}}} \right){T^*}\left( {{f^{''}},{g^{''}}} \right) \\
& \exp \left[ { - i2\pi \left( {{f^{'}} - {f^{''}}} \right)x - i2\pi \left( {{g^{'}} - {g^{''}}} \right)y} \right]d{f^{'}}d{g^{'}}d{f^{''}}d{g^{''}}
\end{aligned}
\end{equation}
where $P$ is the coherent transfer function with the objective pupil function $\left| P \right|$, and $T$ is the object spectrum, i.e., the Fourier transform of the object complex transmittance $t$. Here, we separate the contributions of the specimen and the system, and the transmission cross coefficient (TCC) is introduced as a combination of the source and pupil:
\begin{equation}\label{25}
TCC\left( {{f^{'}},{g^{'}};{f^{''}},{g^{''}}} \right) = \iint{{S\left( {{f_c},{g_c}} \right)P\left( {{f^{'}} + {f_c},{g^{'}} + {g_c}} \right){P^*}\left( {{f^{''}} + {f_c},{g^{''}} + {g_c}} \right)d{f_c}d{g_c}}}
\end{equation}
By replacing the variables $\left( {{f^{'}},{g^{'}}} \right)$ and $\left( {{f^{''}},{g^{''}}} \right)$ with two 2D vectors ${{\bf{u}}_1}$ and ${{\bf{u}}_2}$ in the frequency domain, Eq. (\ref{25}) can be simplified as
\begin{equation}
TCC\left( {{{\bf{u}}_1};{{\bf{u}}_2}} \right) = \iint{{S\left( {\bf{u}} \right)P\left( {{\bf{u}} + {{\bf{u}}_1}} \right){P^*}\left( {{\bf{u}} + {{\bf{u}}_2}} \right)d{\bf{u}}}}
\end{equation}
Then, the final intensity of the object on the image plane can be rewritten in terms of the 2D vector variables
\begin{equation}\label{27}
I\left( {\bf{r}} \right) = \iint{{TCC\left( {{{\bf{u}}_1};{{\bf{u}}_2}} \right)T\left( {{{\bf{u}}_1}} \right){T^*}\left( {{{\bf{u}}_2}} \right)\exp \left[ {i2\pi {\bf{r}}\left( {{{\bf{u}}_1} - {{\bf{u}}_2}} \right)} \right]d{{\bf{u}}_1}d{{\bf{u}}_2}}}
\end{equation}
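The TCC integral above can be evaluated numerically once the source and pupil are discretized. The following numpy sketch (an illustration under our own discretization, not the paper's code) computes the TCC on a pixel grid, assuming the grid is large enough that the shifted pupils never wrap around:

```python
import numpy as np

def tcc(S, P, u1, u2):
    """Discrete TCC(u1; u2) = sum_u S(u) P(u + u1) P*(u + u2).

    S, P   : source and pupil sampled on the same N x N frequency grid
    u1, u2 : integer pixel shifts standing in for the frequency vectors
    """
    # np.roll with a negative shift gives P1[i, j] = P[i + u1_x, j + u1_y];
    # the grid is assumed large enough that the support never wraps around.
    P1 = np.roll(P, (-u1[0], -u1[1]), axis=(0, 1))
    P2 = np.roll(P, (-u2[0], -u2[1]), axis=(0, 1))
    return np.sum(S * P1 * np.conj(P2))

# Discretized circular source and pupil (radii in pixels are arbitrary here)
n = 64
y, x = np.mgrid[-n // 2:n // 2, -n // 2:n // 2]
P = (x ** 2 + y ** 2 <= 10 ** 2).astype(float)
S = (x ** 2 + y ** 2 <= 5 ** 2).astype(float)
```

At zero frequency the TCC reduces to the total transmitted source energy $\iint S\,|P|^2$, and the Hermitian symmetry $TCC({\bf u}_1;{\bf u}_2) = TCC^*({\bf u}_2;{\bf u}_1)$ follows directly from the definition.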
\section{Foolbox{} Overview}
\subsection{Structure}
Crafting adversarial examples requires five elements: first, a \textbf{model} that takes an input (e.g. an image) and makes a prediction (e.g. class-probabilities). Second, a \textbf{criterion} that defines what an adversarial is (e.g. misclassification). Third, a \textbf{distance measure} that quantifies the size of a perturbation (e.g. L1-norm). Fourth, an \textbf{attack algorithm} that takes an input and its label as well as the model, the adversarial criterion and the distance measure to generate the fifth element, an \textbf{adversarial perturbation}.
The structure of Foolbox{} naturally follows this layout and implements five Python modules (models, criteria, distances, attacks, adversarial) summarized below.
\paragraph{Models}\mbox{}\\
\code{foolbox.models}\\
This module implements interfaces to several popular machine learning libraries:
\begin{itemize}
\item TensorFlow \citep{tensorflow} \\
\code{foolbox.models.TensorFlowModel}
\item PyTorch \citep{pytorch} \\
\code{foolbox.models.PyTorchModel}
\item Theano \citep{theano} \\
\code{foolbox.models.TheanoModel}
\item Lasagne \citep{lasagne} \\
\code{foolbox.models.LasagneModel}
\item Keras (any backend) \citep{keras} \\
\code{foolbox.models.KerasModel}
\item MXNet \citep{mxnet} \\
\code{foolbox.models.MXNetModel}
\end{itemize}
Each interface is initialized with a framework-specific representation of the model (e.g. symbolic input and output tensors in TensorFlow or a neural network module in PyTorch). The interface provides the adversarial attack with a standardized set of methods to compute predictions and gradients for given inputs. It is straightforward to implement interfaces for other frameworks by providing methods to calculate predictions and gradients in the specific framework.
Additionally, Foolbox{} implements a
\code{CompositeModel}
that combines the predictions of one model with the gradient of another. This makes it possible to attack non-differentiable models using gradient-based attacks and allows transfer attacks of the type described by \citet{compositeattack}.
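The idea behind \code{CompositeModel} can be sketched as a thin wrapper around two callables (hypothetical names and interface, not the actual Foolbox class):

```python
class CompositeModel:
    """Sketch: forward predictions from one model, gradients from another.

    predict  : callable x -> predictions of the (possibly
               non-differentiable) forward model
    gradient : callable (x, label) -> gradient of a differentiable
               substitute model
    """

    def __init__(self, predict, gradient):
        self._predict = predict
        self._gradient = gradient

    def predictions(self, x):
        # the attacked model decides the class
        return self._predict(x)

    def gradient(self, x, label):
        # the substitute model supplies the attack direction
        return self._gradient(x, label)
```

A gradient-based attack that only calls these two methods never notices that they come from different models, which is exactly what enables transfer attacks.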
\paragraph{Criteria}\mbox{}\\
\code{foolbox.criteria}\\
A \textit{criterion} defines under what circumstances an [input, label]-pair is considered an adversarial. The following criteria are implemented:
\begin{itemize}
\item Misclassification \\
\code{foolbox.criteria.Misclassification} \\
Defines adversarials as inputs for which the predicted class is not the original class.
\item Top-k Misclassification \\
\code{foolbox.criteria.TopKMisclassification} \\
Defines adversarials as inputs for which the original class is not one of the top-k predicted classes.
\item Original Class Probability \\
\code{foolbox.criteria.OriginalClassProbability} \\
Defines adversarials as inputs for which the probability of the original class is below a given threshold.
\item Targeted Misclassification \\
\code{foolbox.criteria.TargetClass} \\
Defines adversarials as inputs for which the predicted class is the given target class.
\item Target Class Probability \\
\code{foolbox.criteria.TargetClassProbability} \\
Defines adversarials as inputs for which the probability of a given target class is above a given threshold.
\end{itemize}
Custom adversarial criteria can be defined and employed. Some attacks are inherently specific to particular criteria and thus only work with those.
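Conceptually, each criterion is a predicate over the model's predictions. A minimal sketch of three of the criteria above (plain functions with hypothetical names, not the Foolbox base-class API) could look like:

```python
import numpy as np

def misclassification(probs, original_label):
    """Adversarial iff the predicted class differs from the original class."""
    return int(np.argmax(probs)) != original_label

def top_k_misclassification(probs, original_label, k):
    """Adversarial iff the original class is not among the top-k classes."""
    return original_label not in np.argsort(probs)[-k:]

def target_class_probability(probs, target_label, threshold):
    """Adversarial iff the target-class probability exceeds the threshold."""
    return probs[target_label] > threshold
```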
\paragraph{Distance Measures}\mbox{}\\
\code{foolbox.distances}\\
Distance measures are used to quantify the size of adversarial perturbations. Foolbox{} implements the two commonly employed distance measures and can be extended with custom ones:
\begin{itemize}
\item Mean Squared Distance\\
\code{foolbox.distances.MeanSquaredDistance}\\
Calculates the mean squared error\\$d({\bf x}, {\bf y}) = \frac{1}{N} \sum_i (x_i - y_i)^2$\\between two vectors ${\bf x}$ and ${\bf y}$.
\item Mean Absolute Distance\\
\code{foolbox.distances.MeanAbsoluteDistance}\\
Calculates the mean absolute error\\$d({\bf x}, {\bf y}) = \frac{1}{N} \sum_i |x_i - y_i|$\\between two vectors ${\bf x}$ and ${\bf y}$.
\item $L\infty{}$\\
\code{foolbox.distances.Linfinity}\\
Calculates the $L\infty{}$-norm $d({\bf x}, {\bf y}) = \max_i |x_i - y_i|$ between two vectors ${\bf x}$ and ${\bf y}$.
\item $L0$\\
\code{foolbox.distances.L0}\\
Calculates the $L0$-norm $d({\bf x}, {\bf y}) = \sum_i \mathbbm{1}_{x_i \ne y_i}$ between two vectors ${\bf x}$ and ${\bf y}$.
\end{itemize}
To achieve invariance to the scale of the input values, we normalize each element of ${\bf x, y}$ by the difference between the smallest and largest allowed value (e.g. 0 and 255).
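As a sketch (our own helper names, not the Foolbox API), the four distance measures with the described normalization can be written as:

```python
import numpy as np

def _normalized_diff(x, y, bounds):
    # normalize by the allowed value range to be scale-invariant
    lo, hi = bounds
    return (np.asarray(x, float) - np.asarray(y, float)) / (hi - lo)

def mean_squared_distance(x, y, bounds=(0, 255)):
    return np.mean(_normalized_diff(x, y, bounds) ** 2)

def mean_absolute_distance(x, y, bounds=(0, 255)):
    return np.mean(np.abs(_normalized_diff(x, y, bounds)))

def linfinity(x, y, bounds=(0, 255)):
    return np.max(np.abs(_normalized_diff(x, y, bounds)))

def l0(x, y):
    # counts changed elements; scale-invariant by construction
    return int(np.sum(np.asarray(x) != np.asarray(y)))
```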
\paragraph{Attacks}\mbox{}\\
\code{foolbox.attacks}\\
Foolbox{} implements a large number of adversarial attacks, see Section~\ref{attacks} for an overview. Each attack takes a model for which adversarials should be found and a criterion that defines what an adversarial is. The default criterion is \textit{misclassification}. It can then be applied to a reference input to which the adversarial should be close and the corresponding label. Attacks perform internal hyperparameter tuning to find the minimum perturbation. As an example, our implementation of the fast gradient sign method (FGSM) searches for the minimum step-size that turns the input into an adversarial. As a result, there is no need to specify hyperparameters for attacks like FGSM. For computational efficiency, more complex attacks with several hyperparameters only tune some of them.
\paragraph{Adversarial}\mbox{}\\
\code{foolbox.adversarial}\\
An instance of the adversarial class encapsulates all information about an adversarial: which model, criterion and distance measure were used to find it, the original unperturbed input and its label, and the size of the smallest adversarial perturbation found by the attack.
An adversarial object is automatically created whenever an attack is applied to an [input, label]-pair. By default, only the actual adversarial input is returned.
Calling the attack with \texttt{unpack} set to \texttt{False} returns the full object instead.
Such an adversarial object can then be passed to an adversarial attack instead of the [input, label]-pair, enabling advanced use cases such as pausing and resuming long-running attacks.
\subsection{Reporting Benchmark Results}
\label{benchmarking}
When reporting benchmark results generated with Foolbox{} the following information should be stated:
\vspace*{-10pt}
\begin{itemize}
\itemsep-3pt
\item the version number of Foolbox{},
\item the set of input samples,
\item the set of attacks applied to the inputs,
\item any non-default hyperparameter setting,
\item the criterion and
\item the distance metric.
\end{itemize}
\subsection{Versioning System}
Each release of Foolbox{} is tagged with a version number of the type MAJOR.MINOR.PATCH that follows the principles of semantic versioning\footnote{\url{http://semver.org/}} with some additional precautions for comparable benchmarking. We increment the
\begin{enumerate}
\item MAJOR version when we make changes to the API that break compatibility with previous versions.
\item MINOR version when we add functionality or make backwards compatible changes that can affect the benchmark results.
\item PATCH version when we make backwards compatible bug fixes that do not affect benchmark results.
\end{enumerate}
Thus, to compare the robustness of two models it is important to use the same MAJOR.MINOR version of Foolbox{}. Accordingly, the version number of Foolbox{} should always be reported alongside the benchmark results, see section \ref{benchmarking}.
\section{Implemented Attack Methods}
\label{attacks}
We here give a short overview over each attack method implemented in Foolbox{}, referring the reader to the original references for more details. We use the following notation:
\vspace*{-\baselineskip}
\begin{table}[htbp]
\centering
\begin{tabular}{r p{5cm} }
${\bf x}$ & a model input\\
$\ell$ & a class label\\
${\bf x}_0$ & reference input \\
$\ell_0$ & reference label\\
$L({\bf x}, \ell)$ & loss (e.g. cross-entropy)\\
$[b_{\text{min}}, b_{\text{max}}]$ & input bounds (e.g. 0 and 255)
\end{tabular}
\label{tab:notation}
\end{table}
\vspace*{-20pt}
\subsection{Gradient-Based Attacks}
Gradient-based attacks linearize the loss (e.g. cross-entropy) around an input ${\bf x}$ to find directions $\boldsymbol\rho$ to which the model predictions for class $\ell$ are most sensitive,
\begin{equation}
L({\bf x} + \boldsymbol\rho, \ell) \approx L({\bf x}, \ell) + \boldsymbol\rho^\top\nabla_{\bf x} L({\bf x}, \ell).
\end{equation}
Here $\nabla_{\bf x} L({\bf x}, \ell)$ is referred to as the gradient of the loss w.r.t. the input ${\bf x}$.
\paragraph{Gradient Attack}\mbox{}\\
\code{foolbox.attacks.GradientAttack}\\
This attack computes the gradient ${\bf g}({\bf x}_0) = \nabla_{\bf x} L({\bf x}_0, \ell_0)$ once and then seeks the minimum step size $\epsilon$ such that ${\bf x}_0 + \epsilon {\bf g}({\bf x}_0)$ is adversarial.
\paragraph{Gradient Sign Attack (FGSM)}\mbox{}\\
\code{foolbox.attacks.GradientSignAttack}\\
\code{foolbox.attacks.FGSM}\\
This attack computes the gradient ${\bf g}({\bf x}_0) = \nabla_{\bf x} L({\bf x}_0, \ell_0)$ once and then seeks the minimum step size $\epsilon$ such that ${\bf x}_0 + \epsilon \sign({\bf g}({\bf x}_0))$ is adversarial \citep{fgsm}.
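The minimum-step-size search behind FGSM can be sketched as follows (a toy illustration with a hypothetical model interface, not Foolbox's implementation):

```python
import numpy as np

def fgsm(x0, label, loss_grad, is_adversarial, bounds=(0.0, 1.0)):
    """Find the smallest eps such that x0 + eps*sign(grad) is adversarial."""
    g = np.sign(loss_grad(x0, label))           # gradient is computed only once
    lo, hi = bounds
    for eps in np.linspace(0.0, 1.0, 101)[1:]:  # increasing step sizes
        x = np.clip(x0 + eps * (hi - lo) * g, lo, hi)
        if is_adversarial(x, label):
            return x, eps
    return None, None                           # attack failed
```

A real implementation would refine the step size further (e.g. by bisection between the last failing and first succeeding epsilon), but the structure is the same.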
\paragraph{Iterative Gradient Attack}\mbox{}\\
\code{foolbox.attacks.IterativeGradientAttack}\\
Iterative gradient ascent seeks adversarial perturbations by maximizing the loss along small steps in the gradient direction ${\bf g}({\bf x})=\nabla_{\bf x} L({\bf x}, \ell_0)$, i.e. the algorithm iteratively updates ${\bf x}_{k+1} \leftarrow {\bf x}_k + \epsilon {\bf g}({\bf x}_k)$. The step-size $\epsilon$ is tuned internally to find the minimum perturbation.
\paragraph{Iterative Gradient Sign Attack}\mbox{}\\
\code{foolbox.attacks.IterativeGradientSignAttack}\\
Similar to iterative gradient ascent, this attack seeks adversarial perturbations by maximizing the loss along small steps in the ascent direction $\sign({\bf g}({\bf x})) = \sign\left(\nabla_{\bf x} L({\bf x}, \ell_0)\right)$, i.e. the algorithm iteratively updates ${\bf x}_{k+1} \leftarrow {\bf x}_k + \epsilon \sign({\bf g}({\bf x}_k))$. The step-size $\epsilon$ is tuned internally to find the minimum perturbation.
\paragraph{DeepFool $L2$ Attack}\mbox{}\\
\code{foolbox.attacks.DeepFoolL2Attack}\\
In each iteration DeepFool \citep{deepfool} computes for each class $\ell\ne\ell_0$ the minimum distance $d(\ell, \ell_0)$ that it takes to reach the class boundary by approximating the model classifier with a linear classifier. It then makes a corresponding step in the direction of the class with the smallest distance.
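For an affine classifier $f({\bf x}) = W{\bf x} + {\bf b}$ the linear approximation is exact, and one DeepFool step has a closed form. The following numpy sketch (our illustration, not Foolbox's code) moves the input exactly onto the closest class boundary:

```python
import numpy as np

def deepfool_step(x, W, b, l0):
    """One DeepFool L2 step for the affine classifier f(x) = W @ x + b."""
    f = W @ x + b
    best = None
    for l in range(len(b)):
        if l == l0:
            continue
        w = W[l] - W[l0]                              # boundary normal
        dist = abs(f[l] - f[l0]) / np.linalg.norm(w)  # distance to boundary
        if best is None or dist < best[0]:
            best = (dist, w, abs(f[l] - f[l0]))
    _, w, df = best
    # minimal L2 perturbation that reaches the closest class boundary
    return x + (df / np.dot(w, w)) * w
```

In practice the boundary is reached only approximately for nonlinear models, so DeepFool iterates this step (with a small overshoot) until the criterion is met.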
\paragraph{DeepFool $L\infty{}$ Attack}\mbox{}\\
\code{foolbox.attacks.DeepFoolLinfinityAttack}\\
Like the DeepFool L2 Attack, but minimizes the $L\infty{}$-norm instead.
\paragraph{L-BFGS Attack}\mbox{}\\
\code{foolbox.attacks.LBFGSAttack}\\
L-BFGS-B is a quasi-Newton optimiser that we use here to find the minimum of
\begin{equation*}
L({\bf x} + \boldsymbol\rho, \ell) + \lambda \left\|\boldsymbol\rho\right\|_2^2\quad\text{s.t.}\quad x_i + \rho_i\in [b_{\text{min}}, b_{\text{max}}]
\end{equation*}
where $\ell\ne\ell_0$ is the target class \citep{szegedy2013}. A line-search is performed over the regularisation parameter $\lambda > 0$ to find the minimum adversarial perturbation. If the target class is not specified we choose $\ell$ as the class of the adversarial example generated by the gradient attack.
\paragraph{SLSQP Attack}\mbox{}\\
\code{foolbox.attacks.SLSQPAttack}\\
Compared to L-BFGS-B, SLSQP allows to additionally specify non-linear constraints. This enables us to skip the line-search and to directly optimise
\begin{equation*}
\left\|\boldsymbol\rho\right\|_2^2 \quad\text{s.t.}\quad L({\bf x} + \boldsymbol\rho, \ell) = l \,\,\wedge\,\, x_i + \rho_i\in [b_{\text{min}}, b_{\text{max}}]
\end{equation*}
where $\ell\ne\ell_0$ is the target class. If the target class is not specified we choose $\ell$ as the class of the adversarial example generated by the gradient attack.
\paragraph{Jacobian-Based Saliency Map Attack}\mbox{}\\
\code{foolbox.attacks.SaliencyMapAttack}\\
This targeted attack \citep{papernot15} uses the gradient to compute a \textit{saliency score} for each input feature (e.g. pixel). This saliency score reflects how strongly each feature can push the model classification from the reference to the target class. This process is iterated, and in each iteration only the feature with the maximum saliency score is perturbed.
\subsection{Score-Based Attacks}
Score-based attacks do not require gradients of the model, but they expect meaningful scores such as probabilities or logits, which can be used to approximate gradients.
\paragraph{Single Pixel Attack}\mbox{}\\
\code{foolbox.attacks.SinglePixelAttack}\\
This attack \citep{localsearch} probes the robustness of a model to changes of single pixels by setting a single pixel to white or black. It repeats this process for every pixel in the image.
\paragraph{Local Search Attack}\mbox{}\\
\code{foolbox.attacks.LocalSearchAttack}\\
This attack \citep{localsearch} measures the model's sensitivity to individual pixels by applying extreme perturbations and observing the effect on the probability of the correct class. It then perturbs the pixels to which the model is most sensitive. It repeats this process until the image is adversarial, searching for additional critical pixels in the neighborhood of previously found ones.
\paragraph{Approximate L-BFGS Attack}\mbox{}\\
\code{foolbox.attacks.ApproximateLBFGSAttack}\\
Same as L-BFGS except that gradients are computed numerically. Note that this attack is only suitable if the input dimensionality is small.
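The finite-difference scheme behind this attack can be sketched as below (\code{numerical\_gradient} is an illustrative helper, not a Foolbox function). The cost of two model evaluations per input dimension is exactly why the attack only scales to small inputs.

```python
import numpy as np

def numerical_gradient(f, x, eps=1e-6):
    """Central finite differences: one pair of evaluations per input
    dimension, hence 2 * x.size calls to the model per gradient."""
    grad = np.zeros_like(x)
    for i in range(x.size):
        e = np.zeros_like(x)
        e[i] = eps
        grad[i] = (f(x + e) - f(x - e)) / (2 * eps)
    return grad

# Sanity check on a known function: f(x) = x . x has gradient 2x.
x = np.array([1.0, -2.0, 0.5])
g = numerical_gradient(lambda v: v @ v, x)   # g ~ [2.0, -4.0, 1.0]
```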
\subsection{Decision-Based Attacks}
Decision-based attacks rely only on the class decision of the model. They do not require gradients or probabilities.
\paragraph{Boundary Attack}\mbox{}\\
\code{foolbox.attacks.BoundaryAttack}\\
Foolbox provides the reference implementation for the Boundary Attack~\citep{boundaryattack}. The Boundary Attack is the most effective decision-based adversarial attack to minimize the L2-norm of adversarial perturbations. It finds adversarial perturbations as small as the best gradient-based attacks without relying on gradients or probabilities.
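The core rejection-sampling loop of the Boundary Attack can be sketched on a toy linear model as follows. All names are illustrative, and the real implementation adaptively tunes both the noise scale and the step towards the original; this sketch keeps them fixed.

```python
import numpy as np

rng = np.random.default_rng(2)

def is_adversarial(x, w=np.array([1.0, 1.0]), b=-1.0):
    return w @ x + b > 0            # toy decision: class flips across a line

original = np.array([0.0, 0.0])     # the clean input (non-adversarial)
adv = np.array([2.0, 2.0])          # starting point, already adversarial

for _ in range(500):
    # random perturbation, then a small contraction towards the original
    candidate = adv + rng.normal(scale=0.1, size=2)
    candidate = candidate + 0.05 * (original - candidate)
    if is_adversarial(candidate):
        adv = candidate             # keep only proposals that stay adversarial

dist = np.linalg.norm(adv - original)
```

The walk stays on the adversarial side of the boundary while the accepted steps steadily shrink the L2 distance to the original, which is all the attack needs from the model: class decisions, never gradients.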
\paragraph{Pointwise Attack}\mbox{}\\
\code{foolbox.attacks.PointwiseAttack}\\
Foolbox provides the reference implementation for the Pointwise Attack. The Pointwise Attack is the most effective decision-based adversarial attack to minimize the L0-norm of adversarial perturbations.
\paragraph{Additive Uniform Noise Attack}\mbox{}\\
\code{foolbox.attacks.AdditiveUniformNoiseAttack}\\
This attack probes the robustness of a model to i.i.d. uniform noise. A line-search is performed internally to find minimal adversarial perturbations.
\paragraph{Additive Gaussian Noise Attack}\mbox{}\\
\code{foolbox.attacks.AdditiveGaussianNoiseAttack}\\
This attack probes the robustness of a model to i.i.d. normal noise. A line-search is performed internally to find minimal adversarial perturbations.
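The additive-noise attacks share one pattern, which can be sketched as follows: draw a noise direction, then search over increasing scales for the smallest perturbation that flips the prediction. A toy binary classifier stands in for the model; the names are illustrative, not the Foolbox API.

```python
import numpy as np

rng = np.random.default_rng(3)
w = rng.normal(size=16)

def label(x):
    return int(w @ x > 0)            # toy binary classifier

def additive_noise_attack(x, scales=np.linspace(0.0, 10.0, 101)):
    """Draw one noise direction, then line-search over increasing scales
    for the smallest perturbation that changes the predicted class."""
    y0 = label(x)
    noise = rng.normal(size=x.shape)         # i.i.d. Gaussian direction
    for s in scales:                         # coarse line-search
        candidate = np.clip(x + s * noise, 0.0, 1.0)
        if label(candidate) != y0:
            return candidate
    return None                              # this direction never flipped

x = rng.uniform(size=16)
adv = additive_noise_attack(x)
```

Swapping the sampling line (uniform noise, salt-and-pepper masks) yields the sibling attacks; the surrounding line-search is identical.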
\paragraph{Salt and Pepper Noise Attack}\mbox{}\\
\code{foolbox.attacks.SaltAndPepperNoiseAttack}\\
This attack probes the robustness of a model to i.i.d. salt-and-pepper noise. A line-search is performed internally to find minimal adversarial perturbations.
\paragraph{Contrast Reduction Attack}\mbox{}\\
\code{foolbox.attacks.ContrastReductionAttack}\\
This attack probes the robustness of a model to contrast reduction. A line-search is performed internally to find minimal adversarial perturbations.
\paragraph{Gaussian Blur Attack}\mbox{}\\
\code{foolbox.attacks.GaussianBlurAttack}\\
This attack probes the robustness of a model to Gaussian blur. A line-search is performed internally to find minimal blur needed to turn the image into an adversarial.
\paragraph{Precomputed Images Attack}\mbox{}\\
\code{foolbox.attacks.PrecomputedImagesAttack}\\
A special attack that is initialized with a set of expected input images and corresponding adversarial candidates. When applied to an image, it tests the model's robustness to the precomputed adversarial candidate corresponding to the given image. This can be useful to test a model's robustness against image perturbations created using an external method.
\section{Acknowledgements}
\label{acknowledgements}
This work was supported by the Carl Zeiss Foundation (0563-2.8/558/3), the Bosch Forschungsstiftung (Stifterverband, T113/30057/17), the International Max Planck Research School for Intelligent Systems (IMPRS-IS), the German Research Foundation (DFG, CRC 1233, Robust Vision: Inference Principles and Neural Mechanisms) and the Intelligence Advanced Research Projects Activity (IARPA) via Department of Interior/Interior Business Center (DoI/IBC) contract number D16PC00003. The U.S. Government is authorized to reproduce and distribute reprints for Governmental purposes notwithstanding any copyright annotation thereon. Disclaimer: The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of IARPA, DoI/IBC, or the U.S. Government.
\section{Introduction}
\markboth{\centerline{\it Introduction}}{\centerline{\it D.T.V.~An, J.-C.~Yao,
and N.D.~Yen}} \setcounter{equation}{0}
According to Bryson \cite[p.~27, p.~32]{Bryson}, optimal control had its origins in the calculus of variations in the 17th century. The calculus of variations was developed further in the 18th century by
L.~Euler and J.~L.~Lagrange and in the 19th century by A.~M.~Legendre, C.~G.~J.~Jacobi, W.~R.~Hamilton, and K.~T.~W.~Weierstrass. In 1957, R.~E.~Bellman gave a new view of Hamilton-Jacobi theory which he called \textit{dynamic programming}, essentially a nonlinear feedback control scheme. McShane \cite{McShane} and Pontryagin et al. \cite{Pontryagin_etal.__1962} extended the calculus of variations to handle control variable inequality constraints. \textit{The Maximum Principle} was enunciated by Pontryagin.
\medskip
As noted by Tu \cite[p.~110]{Tu}, although much pioneering work had been carried out by other authors, Pontryagin and his associates were the first to develop and present the Maximum Principle in a unified manner. Their work attracted great attention among mathematicians, engineers, and economists, and spurred wide research activity in the area (see \cite[Chapter~6]{Mordukhovich_2006b}, \cite{Tu,Vinter2000}, and the references therein).
\medskip
Differential stability of parametric optimization problems is an important topic in variational analysis and optimization. In 2009, Mordukhovich et al. \cite{MordukhovichNamYen2009} presented formulas for computing and estimating the Fr\'echet subdifferential, the Mordukhovich subdifferential, and the singular subdifferential of the optimal value function. If the problem in question is convex, An and Yen \cite{AnYen} and then An and Yao \cite{AnYao} gave formulas for computing subdifferentials of the optimal value function by using two versions of the Moreau-Rockafellar theorem and appropriate regularity conditions.
It is worth emphasizing that differential properties of optimal value functions in parametric mathematical programming and the differential stability of optimal value functions in optimal control have attracted the attention of many researchers; see, e.g., \cite{CerneaFrankowska2005,Chieu_Kien_Toan,MoussaouiSeeger1994,TR2004,RockafellarWolenski2000I,RockafellarWolenski2000II,Seeger1996,Toan_2015,Toan_Kien} and the references therein.
\medskip
Very recently, Thuy and Toan \cite{Toan_Thuy} have obtained a formula for computing the subdifferential of the optimal value function of a parametric unconstrained convex optimal control problem with a convex objective function and linear state equations.
\medskip
The aim of this paper is to develop the approach of \cite{Toan_Thuy} to deal with \textit{constrained control problems}. Namely, based on the paper of An and Toan \cite{AnToan} on differential stability of parametric convex mathematical programming problems, we will obtain new results on computing the subdifferential and the singular subdifferential of the optimal value function. Among other things, our result on computing the subdifferential extends and improves the main result of \cite{Toan_Thuy}. Moreover, we also describe in detail the process of finding vectors belonging to the subdifferential (resp., the singular subdifferential) of the optimal value function. Thus, on the one hand, our results have their origin in the study of \cite{Toan_Thuy}. On the other hand, they deepen that study for the case of constrained control problems. Meaningful examples, which have their origin in \cite[Example~1, p.~23]{Pontryagin_etal.__1962}, are designed to illustrate our results. In fact, these examples constitute an indispensable part of the present paper.
\medskip
Note that differentiability properties of the optimal value function in both unconstrained and constrained control problems have been studied from different points of view. For instance, Rockafellar \cite{TR2004} investigated the optimal value of a parametric optimal control problem with a differential inclusion and a pointwise control constraint as a function of the time horizon and the terminal state. Meanwhile, based on an epsilon-maximum principle of Pontryagin type, Moussaoui and Seeger \cite[Theorem~3.2]{MoussaouiSeeger1994} considered an optimal control problem with linear state equations and gave a formula for the subdifferential of the optimal value function without assuming the existence of optimal solutions to the unperturbed problem.
\medskip
The organization of the paper is as follows. Section 2 formulates the parametric convex optimal control problem which we are interested in. The same section reviews some of the standard facts on functional analysis \cite{KF70,Rudin_1991}, convex analysis \cite{Ioffe_Tihomirov_1979}, variational analysis \cite{Mordukhovich_2006a}, and presents one theorem from \cite{AnToan} which is important for our investigations. Formulas for the subdifferential and the singular subdifferential of the optimal value function of the convex optimal control problem are established in Section 3. The final section presents a series of three closely-related illustrative examples.
\section{Preliminaries}
\markboth{\centerline{\it Preliminaries}}{\centerline{\it D.T.V.~An, J.-C.~Yao,
and N.D.~Yen}} \setcounter{equation}{0}
Let $W^{1,p} ([0,1],\Bbb{R}^n)$, $1 \le p < \infty$, be the Sobolev space consisting of absolutely continuous functions $x:[0,1] \rightarrow \Bbb{R}^n$ such that $\dot{x} \in L^p([0,1], \Bbb{R}^n)$. Let there be given
\par - matrix-valued functions $A(t)=(a_{ij}(t))_{n\times n},\, B(t)=(b_{ij}(t))_{n\times m},$ and $ C(t)=(c_{ij}(t))_{n\times k};$
\par - real-valued functions $g: \Bbb{R}^n \to \Bbb{R}$ and $L:[0,1]\times \Bbb{R}^n \times \Bbb{R}^m \times \Bbb{R}^k \to \Bbb{R}$;
\par - a convex set $\mathcal{U} \subset L^p([0,1], \Bbb{R}^m)$;
\par - a pair of parameters $(\alpha, \theta) \in \Bbb{R}^n \times L^p([0,1], \Bbb{R}^k).$
\\Put
\begin{align*}
& X=W^{1,p} ([0,1],\Bbb{R}^n), \ U=L^p([0,1],\Bbb{R}^m), \ Z= X \times U,\\
&\Theta= L^p([0,1],\Bbb{R}^k), \ W=\Bbb{R}^n \times \Theta.
\end{align*}
Consider the constrained optimal control problem which depends on the parameter pair $(\alpha, \theta)$: \textit{Find a pair $(x,u)$, where $x\in W^{1,p}([0,1],\Bbb{R}^n)$ is a trajectory and $u \in L^p([0,1],\Bbb{R}^m)$ is a control function, which minimizes the objective function
\begin{align}\label{objective_Function_Control}
g(x(1)) + \int_0^1 L(t, x(t), u(t),\theta(t))dt
\end{align}
and satisfies the linear ordinary differential equation
\begin{align}
\label{state_equation}
\dot{x}(t)=A(t)x(t)+B(t)u(t)+C(t)\theta(t) \ \, \mbox{a.e.} \ t \in [0,1],
\end{align}
the initial value
\begin{align}
\label{initial_value}
x(0)=\alpha,
\end{align}
and the control constraint
\begin{align}
\label{control_constraint}
u \in \mathcal{U}.
\end{align}}
It is well-known that $X,\, U,\, Z,$ and $\Theta$ are Banach spaces. For each $w=(\alpha, \theta)\in W$, denote by $V(w)$ and $S(w)$, respectively, the optimal value and the solution set of problem~\eqref{objective_Function_Control}--\eqref{control_constraint}. We call $V: W \to \overline{\Bbb{R}}$, where $\overline{\Bbb{R}}=[-\infty,+\infty]$ denotes the extended real line, the \textit{optimal value function} of the problem in question. If for each $w=(\alpha, \theta)\in W$ we put
\begin{align}
\label{opjective_function_J}
J(x,u,w)=g(x(1))+ \int_0^1 L(t, x(t), u(t),\theta(t))dt,
\end{align}
\begin{align}
\label{map_constraint}
G(w)=\big\{z=(x,u)\in X \times U \,\mid \mbox{\eqref{state_equation} and \eqref{initial_value} are satisfied} \big \},
\end{align}
and
\begin{align}
\label{set_constraint}
K=X \times \mathcal{U},
\end{align}
then \eqref{objective_Function_Control}--\eqref{control_constraint} can be written formally as $\min\{J(z, w)\mid z \in G(w)\cap K\}$, and
\begin{align}
\label{Re_optimal_value_function}
V(w)=\inf\{J(z, w)\mid z=(x,u)\in G(w)\cap K\}.
\end{align}
It is assumed that $V$ is finite at $\bar w=(\bar\alpha,\bar\theta)\in W$ and that $(\bar x, \bar u)$ is a solution of the corresponding problem, that is, $(\bar x, \bar u)\in S(\bar w)$.
\medskip
Following \cite[p.~310]{KF70}, we say that a vector-valued function $g:[0,1]\to Y$, where $Y$ is a normed space, is essentially bounded if there exists a constant $\gamma >0$ such that the set $T:=\left\{t \in [0,1] \mid ||g(t)|| \le \gamma \right\}$ is of full Lebesgue measure. The latter means $\mu([0,1] \setminus T )=0,$ with $\mu$ denoting the Lebesgue measure on $[0,1].$
\medskip
Consider the following assumptions:
\begin{description}
\item[(A1)] The matrix-valued functions $A:[0,1] \to M_{n,n}(\Bbb{R}),$ $B:[0,1] \to M_{n,m}(\Bbb{R}),$ and $C:[0,1] \to M_{n,k}(\Bbb{R}),$ are measurable and essentially bounded.
\item[(A2)] The functions $g: \Bbb{R}^n \to \Bbb{R}$ and $L:[0,1]\times \Bbb{R}^n \times \Bbb{R}^m \times \Bbb{R}^k \to \Bbb{R}$ are such that $g(\cdot)$ is convex and continuously differentiable on $\Bbb{R}^n$, $L(\cdot,x,u,v)$ is measurable for all $(x,u,v) \in \Bbb{R}^n \times \Bbb{R}^m \times \Bbb{R}^k$, $L(t,\cdot,\cdot,\cdot)$ is convex and continuously differentiable on $\Bbb{R}^n \times \Bbb{R}^m \times \Bbb{R}^k $ for almost every $t \in [0,1]$, and there exist constants $c_1>0,\, c_2>0,\, r \ge 0,\, p \ge p_1 \ge 0,\, p-1 \ge p_2 \ge 0$, and a nonnegative function $w_1\in L^p([0,1], \Bbb{R})$, such that $|L(t,x,u,v)| \le c_1\big (w_1(t) +||x||^{p_1}+ ||u||^{p_1}+||v||^{p_1}\big ),$
$$ \max \big \{ |L_x(t,x,u,v)|,|L_u(t,x,u,v)|,|L_v(t,x,u,v)| \big \} \le c_2\big (||x||^{p_2}+||u||^{p_2}+||v||^{p_2} \big )+r$$
for all $(t,x,u,v) \in [0,1] \times \Bbb{R}^n \times \Bbb{R}^m \times \Bbb{R}^k$.
\end{description}
We now recall some results from functional analysis related to Banach spaces. The results can be found in \cite[pp.~20--22]{Ioffe_Tihomirov_1979}.
For every $p\in [1,\infty)$, the symbol $L^p([0,1],\Bbb{R}^n)$ denotes the Banach space of Lebesgue measurable functions $x$ from $[0,1]$ to $\Bbb{R}^n$ for which the integral $\displaystyle\int_0^1 \|x(t)\|^p dt$ is finite. The norm in $L^p([0,1],\Bbb{R}^n)$ is given by
$$||x||_p
=\bigg(\int_0^1 \|x(t)\|^p dt \bigg)^{\frac{1}{p}}.$$
The dual space of $L^p([0,1],\Bbb{R}^n)$ is $L^q([0,1],\Bbb{R}^n)$, where $\frac{1}{p}+\frac{1}{q}=1.$ This means that, for every continuous linear functional $\varphi$ on the space $L^p([0,1],\Bbb{R}^n)$, there exists a unique element $x^* \in L^q([0,1],\Bbb{R}^n)$ such that $\varphi(x)=\langle \varphi, x \rangle =\int_0^1 x^*(t)x(t) dt$
for all $x\in L^p([0,1],\Bbb{R}^n)$. One has $||\varphi||=||x^*||_q.$
\medskip
The Sobolev space $W^{1,p} ([0,1],\Bbb{R}^n)$ is equipped with the norm
$$||x||^1_{1,p}=\|x(0)\|+||\dot x||_p,$$ or the norm $||x||^2_{1,p}=||x||_p+||\dot x||_p.$ The norms $||x||^1_{1,p}$ and $||x||^2_{1,p}$ are equivalent (see, e.g., \cite[p.~21]{Ioffe_Tihomirov_1979}). Every continuous linear functional $\varphi$ on $W^{1,p} ([0,1],\Bbb{R}^n)$ can be represented in the form
$$\langle \varphi, x \rangle = \langle a, x(0)\rangle + \int_0^1 \dot {x}(t)u(t) dt,$$
where the vector $a\in \Bbb{R}^n$ and the function $ u\in L^{q} ([0,1],\Bbb{R}^n)$ are uniquely defined. In other words, $\big(W^{1,p} ([0,1],\Bbb{R}^n) \big)^*= \Bbb{R}^n\times L^{q} ([0,1],\Bbb{R}^n),$ where $\ \frac{1}{p}+\frac{1}{q}=1;$ see, e.g., \cite[p.~21]{Ioffe_Tihomirov_1979}. In the case of $p=2$, $W^{1,2} ([0,1],\Bbb{R}^n)$ is a Hilbert space with the inner product being given by
$$\langle x, y \rangle = \langle x(0), y(0) \rangle + \int_0^1 \dot x (t) \dot y (t)dt,$$
for all $x,y \in W^{1,2} ([0,1],\Bbb{R}^n).$
\medskip
We shall need two dual constructions from convex analysis \cite{Ioffe_Tihomirov_1979}. Let $X$ and $Y$ be Hausdorff locally convex topological vector spaces with the duals denoted, respectively, by $X^*$ and $Y^*$. For a convex set $\Omega\subset X$ and $\bar x\in \Omega$, the set
\begin{align}\label{normals_convex_analysis} N(\bar x; \Omega)=\{x^*\in X^* \mid \langle x^*, x-\bar x \rangle \leq 0, \ \, \forall x \in \Omega\}\end{align}
is called the \textit{normal cone} of $\Omega$ at $\bar {x}.$
Given a function $f: X\rightarrow \overline{\mathbb{R}}$, one says that $f$ is \textit{proper} if ${\rm{dom}}\, f:=\{ x \in X \mid f(x) < +\infty\} $
is nonempty, and if $f(x) > - \infty$ for all $x \in X$. If $ {\rm{epi}}\, f:=\{ (x, \alpha) \in X \times \mathbb{R} \mid \alpha \ge f(x)\}$ is convex, then $f$ is said to be a convex function. The {\it subdifferential} of a proper convex function $f: X\rightarrow \overline{\mathbb{R}}$ at a point $\bar x \in {\rm dom}\,f$ is defined by
\begin{align}\label{subdifferential_convex_analysis}
\partial f(\bar x)=\{x^* \in X^* \mid \langle x^*, x- \bar x \rangle \le f(x)-f(\bar x), \ \forall x \in X\}.
\end{align}
It is clear that
$\partial f(\bar x)=\{x^* \in X^* \mid (x^*,-1) \in N(( \bar x, f(\bar x)); {\rm{\mbox{\rm epi}\,}}\, f ) \}. $
\medskip
In the spirit of \cite[Definition~1.77]{Mordukhovich_2006a}, we define
the {\it singular subdifferential} of a convex function $f$ at a point $\bar x \in {\rm dom}\,f$ by
\begin{align}\label{singular_subdifferential_convex}
\partial^\infty f(\bar x)=\{x^* \in X^* \mid (x^*,0)\in N ( (\bar x, f(\bar x)); {\rm{epi}}\, f)\}.
\end{align}
For any convex function $f$, one has $\partial^\infty f(x)=N(x;{\rm dom}\,f)$ (see, e.g., \cite{AnYen}). If $\bar x\notin {\rm dom}\,f$, then we put $\partial f(\bar x)=\emptyset$ and $\partial^\infty f(\bar x)=\emptyset$.
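These two constructions can be compared on a small worked example (the example is ours, not taken from the cited references): let $X=\Bbb{R}$ and let $f(x)=x$ for $x\le 0$, $f(x)=+\infty$ for $x>0$.

```latex
% dom f and epi f for f(x) = x on (-infty, 0], +infty elsewhere:
\mathrm{dom}\, f = (-\infty,0], \qquad
\mathrm{epi}\, f = \{(x,\alpha)\in\Bbb{R}^2 \mid x \le 0,\ \alpha \ge x\}.
% At \bar x = 0, the normal cone of the epigraph at (0, f(0)) = (0,0) is
% generated by the outward normals (1,0) and (1,-1) of the two active
% constraints:
N\big((0,0);\mathrm{epi}\, f\big)
  = \{(a+b,\,-b) \mid a \ge 0,\ b \ge 0\}.
% Selecting the elements of the form (x^*,-1) and (x^*,0), respectively:
\partial f(0) = [1,+\infty), \qquad
\partial^\infty f(0) = [0,+\infty) = N\big(0;\mathrm{dom}\, f\big).
```

In particular, the singular subdifferential ignores the slope of $f$ and records only the geometry of $\mathrm{dom}\, f$, in agreement with the identity $\partial^\infty f(x)=N(x;{\rm dom}\,f)$.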
\medskip
In the remaining part of this section, we present, without proof, one theorem from \cite{AnToan} which is crucial for the subsequent proofs, thus making our exposition self-contained and easy to follow.
\medskip
Suppose that $X$, $W$, and $Z$ are Banach spaces with the dual spaces $X^*$, $W^*$, and $Z^*$, respectively. Assume that $M: Z\to X$ and $T: W\to X$
are continuous linear operators. Let $M^*: X^*\to Z^*$ and $T^*: X^*\to W^*$ be the adjoint operators of $M$ and $T$, respectively. Let $f:Z\times W\to\overline{\Bbb{R}}$ be a convex function and $\Omega$ a convex subset of $Z$ with nonempty interior. For each $w\in W$, put
$ H(w)=\big\{z\in Z\,|\, Mz=Tw\big\}$ and consider the optimization problem
\begin{eqnarray} \label{12b}
\min\{f(z,w) \, \mid \,z\in H(w)\cap \Omega \}.
\end{eqnarray}
It is of interest to compute the subdifferential and the singular subdifferential of the optimal value function
\begin{eqnarray} \label{12}
h(w):=\inf_{z\in H(w)\cap \Omega} f(z,w)
\end{eqnarray} of \eqref{12b} where $w$ is subject to change. Denote by $\widehat S(w)$ the solution set of that problem. For our convenience,
we define the linear operator $\Phi: Z\times W \rightarrow X$ by setting $\Phi(z,w)=Mz-Tw$ for all $(z,w)\in Z \times W$. Note that
the subdifferential $\partial h(w)$ has been computed in \cite{Chieu_Kien_Toan,AnToan}, while the singular subdifferential $\partial^\infty h(w)$ has been evaluated in \cite{AnToan}.
\medskip
The following result of \cite{AnToan} will be used intensively in this paper.
\begin{theorem}{\rm(See \cite[Theorem 2]{AnToan})}\label{asprogramingproblem}
Suppose that $\Phi$ has closed range and ${\rm ker}\, T^* \subset {\rm ker}\,M^*$. If the optimal value function $h$ in \eqref{12} is finite at $\bar w \in {\rm dom}\, \widehat{S}$ and $f$ is continuous at $(\bar z,\bar w) \in (\Omega \times W) \cap {\rm gph}\,H ,$ then
\begin{eqnarray}\label{BT1}
\partial h(\bar w)= \bigcup_{(z^*, w^*)\in \partial f(\bar z, \bar w)}\;\bigcup_{v^*\in N(\bar z;
\Omega)}\big[ w^*+ T^*\big((M^*) ^{-1}(z^* +v^*)\big)\big]
\end{eqnarray}
and
\begin{eqnarray}\label{BT1'}
\partial^\infty h(\bar w)= \bigcup_{(z^*, w^*)\in \partial^\infty f(\bar z, \bar w)}\;\bigcup_{v^*\in N(\bar z;
\Omega)}\big[ w^*+ T^*\big((M^*) ^{-1}(z^* +v^*)\big)\big],
\end{eqnarray}
where $\big(M^*) ^{-1}(z^* +v^*)=\{x^*\in X^*\mid M^*x^*=z^* +v^*\}$.
\end{theorem}
When $f$ is Fr\'echet differentiable at $(\bar z, \bar w)$, it holds that $\partial f(\bar z, \bar w)=\{\nabla f(\bar z, \bar w) \}$. Hence~\eqref{BT1} has a simpler form. Namely, the following statement is valid.
\begin{theorem}\label{Frechet differentiable}
Under the assumptions of Theorem \ref{asprogramingproblem}, suppose additionally that the function $f$ is Fr\'echet differentiable at $(\bar z,\bar w)$. Then
\begin{eqnarray}\label{BT1a}
\partial h(\bar w)= \bigcup_{v^*\in N(\bar z;
\Omega)}\big[ \nabla_w f(\bar z, \bar w)+ T^*\big((M^*) ^{-1}(\nabla_z f(\bar z, \bar w) +v^*)\big)\big],
\end{eqnarray}
where $\nabla_z f(\bar z, \bar w)$ and $\nabla_w f(\bar{z}, \bar w)$, respectively, stand for the Fr\'echet derivatives of $f(\cdot,\bar w)$ at $\bar z$ and of $f(\bar z,\cdot)$ at $\bar w$.
\end{theorem}
\section{The main results} \label{section3}
\markboth{\centerline{\it The main results}}{\centerline{\it D.T.V.~An, J.-C.~Yao,
and N.D.~Yen}} \setcounter{equation}{0}
Keeping the notation of Section 2, we consider the linear mappings $\mathcal{A}: X \to X$, $\mathcal{B}: U \to X,$ $\mathcal{M}: X \times U \to X,$ and $\mathcal{T}: W \to X$ which are defined by setting
\begin{align}
\label{mappingA}
\mathcal{A}x:= x- \int_0^{(.)} A(\tau) x(\tau) d \tau,
\end{align}
\begin{align}
\label{mappingB}
\mathcal{B}u:= - \int_0^{(.)} B(\tau) u(\tau) d \tau,
\end{align}
\begin{align}
\label{mappingM}
\mathcal{M}(x,u):=\mathcal{A}x +\mathcal{B}u,
\end{align}
and
\begin{align}
\label{mappingT}
\mathcal{T}(\alpha, \theta):= \alpha+ \int_0^{(.)} C(\tau) \theta(\tau) d \tau,
\end{align}
where, for a function $g\in L^p([0,1], \Bbb{R}^n )$, the symbol $\displaystyle\int_0^{(.)} g (\tau)d \tau$ denotes the function $t \mapsto \displaystyle\int_0^{t} g (\tau)d \tau$, which belongs to $X=W^{1,p}([0,1], \Bbb{R}^n ). $
\medskip
Under the assumption $\mathbf{(A1)}$, we can write the linear ordinary differential equation~\eqref{state_equation} in the integral form
$$x=x(0)+\int_0^{(.)} A(\tau) x(\tau) d \tau+\int_0^{(.)} B(\tau) u(\tau) d \tau+\int_0^{(.)} C(\tau) \theta(\tau) d \tau. $$
Combining this with the initial value in \eqref{initial_value}, one gets
$$x=\alpha+\int_0^{(.)} A(\tau) x(\tau) d \tau+\int_0^{(.)} B(\tau) u(\tau) d \tau+\int_0^{(.)} C(\tau) \theta(\tau) d \tau. $$
Thus, in accordance with \eqref{mappingA}--\eqref{mappingT}, \eqref{map_constraint} can be written as
\begin{equation}
\begin{split}
G(w)&=\left\{(x,u)\in X \times U \mid x= \alpha +\int_0^{(.)} A x d \tau+\int_0^{(.)} B u d \tau+\int_0^{(.)} C \theta d \tau \right\} \\
&=\left\{(x,u)\in X \times U \mid x- \int_0^{(.)} A x d \tau-\int_0^{(.)} B u d \tau= \alpha +\int_0^{(.)} C \theta d \tau \right\}\\
&=\left\{(x,u)\in X \times U \mid \mathcal{M}(x,u)=\mathcal{T}(w)\right\}.
\end{split}
\end{equation}
Hence, the control problem \eqref{objective_Function_Control}--\eqref{control_constraint} reduces to the mathematical programming problem \eqref{12b}, where the function $J(\cdot)$, the multifunction $G(\cdot)$, and the set $K$ defined by \eqref{opjective_function_J}--\eqref{set_constraint}, play the roles of $f(\cdot)$, $H(\cdot)$, and $\Omega$.
\medskip
We shall need several lemmas.
\begin{lemma}{\rm{(See \cite[Lemma 2.3]{Toan_Kien}})}\label{Au_lemma1} Under the assumption $\mathbf{(A1)}$, the following are valid:
\\{\rm(i)} The linear operators $\mathcal{M}$ in \eqref{mappingM} and $\mathcal{T}$ in \eqref{mappingT} are continuous;
\\{\rm(ii)} $\mathcal{T}^*(a,u)=(a, C^Tu)$ for every $(a,u) \in \Bbb{R}^n \times L^q ([0,1], \Bbb{R}^n);$
\\{\rm(iii)} $\mathcal{M}^*(a,u)= (\mathcal{A}^*(a,u), \mathcal{B}^*(a,u ))$, where
\begin{align}\label{formula_new}
\mathcal{A}^*(a,u)= \bigg( a- \int_0^{1} A^T(t) u(t) dt,\, u+ \int_0^{(.)} A^T(\tau) u(\tau) d \tau - \int_0^{1} A^T(t) u(t) dt \bigg),
\end{align}
and $\mathcal{B}^*(a,u)=-B^Tu$
for every $(a,u)\in \Bbb{R}^n \times L^q ([0,1], \Bbb{R}^n).$
\end{lemma}
\begin{lemma} {\rm{(See \cite[Lemma 3.1 (b)]{Toan_Kien}})}
\label{Au_Lemma2} If $\mathbf{(A1)}$ and $\mathbf{(A2)}$ are satisfied, then the functional $J: X \times U\times W \to \Bbb{R}$ is Fr\'echet differentiable at $(\bar z, \bar w)$ and $\nabla J(\bar z, \bar w)$ is given by
$$\nabla_w J(\bar z, \bar w)= \big (0_{\Bbb{R}^n}, L_\theta(\cdot, \bar x(\cdot), \bar u(\cdot), \bar \theta(\cdot) )\big ),$$
$$\nabla_z J(\bar z, \bar w)= \big(J_x(\bar x, \bar u,\bar \theta), J_u( \bar x, \bar u, \bar \theta )\big),$$
with
\begin{align*} J_x( \bar x, \bar u, \bar \theta )&=\bigg( g'(\bar x(1)) + \int_0^1 L_x(t,\bar x(t),\bar u(t), \bar \theta(t))dt ,\\
& \quad \quad g'(\bar x(1)) + \int_{(.)}^1 L_x(\tau,\bar x(\tau),\bar u(\tau), \bar \theta(\tau))d\tau \bigg),
\end{align*}
and $J_u( \bar x, \bar u, \bar \theta )=L_u(\cdot, \bar x(\cdot), \bar u(\cdot), \bar \theta(\cdot) ).$
\end{lemma}
Let
\begin{align*}
& \varPsi_A: L^q([0,1], \Bbb{R}^n ) \to \Bbb{R}, \quad \varPsi_B: L^q([0,1], \Bbb{R}^n ) \to L^q([0,1], \Bbb{R}^m ), \\
&\varPsi_C: L^q([0,1], \Bbb{R}^n ) \to L^q([0,1], \Bbb{R}^k ),\quad \varPsi :L^q([0,1], \Bbb{R}^n) \to L^q([0,1], \Bbb{R}^n )
\end{align*}
be defined by
\begin{align*}
\varPsi_A(v)= \int_0^1 A^T(t) v(t)dt,\quad \varPsi_B(v)(t)=-B^T(t)v(t) \ \, \mbox{a.e.}\ t \in [0,1],\\
\varPsi_C(v)(t)=C^T(t)v(t)\ \, \mbox{a.e.} \ t \in [0,1], \quad \varPsi(v)=-\int_0^{(.)} A^T(\tau) v(\tau) d\tau.
\end{align*}
We will employ the following two assumptions.
\begin{description}
\item[(A3)]
Suppose that
\begin{align}\label{re_con}
{\rm \mbox{\rm ker}\,}\, \varPsi_C \subset \big({\rm \mbox{\rm ker}\,}\, \varPsi_A \cap {\rm \mbox{\rm ker}\,}\, \varPsi_B \cap {\rm Fix}\, \varPsi\big),
\end{align}
where ${\rm Fix}\, \varPsi:=\{x \in X \mid \varPsi(x)=x\}$ is the set of the fixed points of $\varPsi$, and ${\rm \mbox{\rm ker}\,}\, \varPsi_A$ (resp., ${\rm \mbox{\rm ker}\,}\, \varPsi_B$, ${\rm \mbox{\rm ker}\,}\, \varPsi_C$) denotes the kernel of $\varPsi_A$ (resp., $\varPsi_B$, $\varPsi_C$).
\item[(A4)] The operator $\varPhi: W \times Z \to X$, which is given by
$$\varPhi(w,z)=x - \int_0^{(.)} A(\tau) x(\tau) d \tau - \int_0^{(.)} B(\tau) v(\tau) d \tau -\alpha - \int_0^{(.)} C(\tau) \theta(\tau) d \tau $$
for every $ w=(\alpha, \theta) \in W$ and $z=(x,v) \in Z,$
has closed range.
\end{description}
\begin{lemma}
\label{Au_lemma3}
If $\mathbf{(A3)}$ is satisfied, then ${\rm \mbox{\rm ker}\,}\, \mathcal{T}^* \subset {\rm \mbox{\rm ker}\,}\, \mathcal{M}^*.$
\end{lemma}
\begin{proof}
For any $(a,v) \in {\rm \mbox{\rm ker}\,}\, \mathcal{T}^*$, it holds that $(a,v) \in L^q ([0,1], \Bbb{R}^n)$ and $\mathcal{T}^*(a,v)=0$. Then $(a, C^Tv)=0$. So $a=0$ and $C^Tv=0$. The latter means that $C^T(t)v(t)=0$ a.e. on $[0,1]$. Hence $v \in {\rm \mbox{\rm ker}\,}\, \varPsi_C.$ By \eqref{re_con}, $$ v \in {\rm \mbox{\rm ker}\,}\, \varPsi_A \cap {\rm \mbox{\rm ker}\,}\, \varPsi_B \cap {\rm Fix}\, \varPsi. $$
The condition $ \varPsi_B(v)=0$ implies that $-B^T(t)v(t)=0$ a.e. on $[0,1]$. As $\mathcal{B}^*(a,v)=-B^Tv$ by Lemma \ref{Au_lemma1}, this yields
\begin{align}
\label{re_con1}
\mathcal{B}^*(a,v)=0.
\end{align}
According to the condition $v \in {\rm \mbox{\rm ker}\,}\, \varPsi_A$, we have $\displaystyle\int_0^1 A^T(t)v(t)dt=0$. Lastly, the condition $v \in {\rm Fix}\, \varPsi $ implies
$v=-\displaystyle\int_0^{(.)} A^T(\tau) v(\tau) d \tau,$
hence $v+\displaystyle\int_0^{(.)} A^T(\tau) v(\tau) d \tau=0$. Consequently, using formula \eqref{formula_new}, we can assert that
\begin{align}
\label{re_con2}
\mathcal{A}^*(a,v)=0.
\end{align}
Since $\mathcal{M}^*(a,v)=(\mathcal{A}^*(a,v), \mathcal{B}^*(a,v) )$ by Lemma \ref{Au_lemma1}, from \eqref{re_con1} and \eqref{re_con2} it follows that $\mathcal{M}^*(a,v)=0$. Thus, we have shown that ${\rm \mbox{\rm ker}\,}\, \mathcal{T}^* \subset {\rm \mbox{\rm ker}\,}\, \mathcal{M}^*.$
\end{proof}
The assumption $(H_3)$ in \cite{Toan_Thuy} can be stated as follows:
\begin{description}
\item[(A5)] There exists a constant $c_3>0$ such that, for every $ v \in \Bbb{R}^n$,
$$||C^T(t)v|| \ge c_3 ||v||\ \, \mbox{a.e.} \ t \in [0,1]. $$
\end{description}
\begin{Proposition}\label{Proposition_condition}
If $\mathbf{(A5)}$ is satisfied, then $\mathbf{(A3)}$ and $\mathbf{(A4)}$ are fulfilled.
\end{Proposition}
\begin{proof}
By $\mathbf{(A5)}$ and the definition of $\varPsi_C$, for any $v \in L^q([0,1], \Bbb{R}^n)$, one has
\begin{align*}
|| \varPsi_C (v)(t)|| \ge c_3 ||v(t)||\ \;\mbox{a.e.} \ t \in [0,1].
\end{align*}
So, if $\varPsi_C(v)=0$, i.e., $\varPsi_C(v)(t)=0$ a.e. $t \in [0,1]$, then $v(t)=0$ a.e. $t \in [0,1]$.
This means that ${\rm \mbox{\rm ker}\,}\, \varPsi_C=\{0\}$. Therefore, the condition \eqref{re_con} in $\mathbf{(A3)}$ is satisfied.
By Lemma \ref{Au_lemma1}, we have $\mathcal{T}^*(a,v)=(a, C^Tv)$ for every $(a,v) \in \Bbb{R}^n \times L^q ([0,1], \Bbb{R}^n)$. It follows that
\begin{align}
\label{formula2n}
||\mathcal{T}^*(a,v)||=||a||+||C^Tv||_q.
\end{align}
Using $\mathbf{(A5)}$, we get
\begin{align*}
||\mathcal{T}^*(a,v)||&=||a||+||C^Tv||_q \ge c_1 (||a||+||v||_q ),
\end{align*}
where $c_1=\min\{1, c_3\}.$ This means $||\mathcal{T}^*w^*|| \ge c_1 ||w^*||$ for every $w^* \in W^*$. By \cite[Theorem 4.13, p. 100]{Rudin_1991}, $\mathcal{T}:W \to X$ is surjective. Since $\varPhi(w,z)$ can be rewritten as $\varPhi(w,z)=-\mathcal{T}w+ \mathcal{M}z,$ the surjectivity of $\mathcal{T}$ implies that $\{\varPhi (w,0)\mid w\in W \}=X$. Hence $\varPhi$ has closed range.
\end{proof}
We are now in a position to formulate our main results on differential stability of problem \eqref{objective_Function_Control}--\eqref{control_constraint}. The following theorems not only completely describe the subdifferential and the singular subdifferential of the optimal value function, but also explain in detail the process of finding vectors belonging to the subdifferentials. In particular, from the results it follows that each subdifferential is either a singleton or an empty set.
\begin{theorem}
\label{Control_main_theorem}
Suppose that the optimal value function $V$ in \eqref{Re_optimal_value_function} is finite at $\bar w=(\bar \alpha, \bar \theta)$, ${\rm{int}\,} \mathcal{U} \not= \emptyset$, and $\mathbf{(A1)}-\mathbf{(A4)}$ are fulfilled. In addition, suppose that problem \eqref{objective_Function_Control}--\eqref{control_constraint}, with $\bar w=(\bar \alpha, \bar \theta)$ playing the role of $w=(\alpha, \theta)$, has a solution $(\bar x, \bar u).$ Then, a vector $(\alpha^*, \theta^*) \in \Bbb{R}^n \times L^q ([0,1], \Bbb{R}^k)$ belongs to $\partial \, V(\bar\alpha, \bar \theta)$ if and only if
\begin{align}
\label{eq1}
\alpha^*= g'(\bar x(1)) + \int_0^1 L_x(t,\bar x(t),\bar u(t), \bar \theta(t))dt-\int_0^1 A^T(t)y(t)dt,
\end{align}
\begin{align}
\label{eq5}
\theta^* (t) =-C^T (t)y(t)+L_\theta(t,\bar x(t),\bar u(t), \bar \theta(t))\ \, \mbox{a.e.}\ t\in [0,1],
\end{align}
where $y \in W^{1,q} ([0,1],\Bbb{R}^n)$ is the unique solution of the system
\begin{align}\label{eq3}
\begin{cases}
\dot y (t) +A^T (t)y(t)=L_x(t,\bar x(t),\bar u(t), \bar \theta(t)) \ \rm\mbox{a.e.}\ t\in [0,1],\\
y(1)=-g'( \bar x(1)),
\end{cases}
\end{align}
such that the function $u^* \in L^q([0,1], \Bbb{R}^m)$ defined by
\begin{align}
\label{eq4}
u^*(t)=B^T (t)y(t)-L_u(t,\bar x(t),\bar u(t), \bar \theta(t)) \ \mbox{a.e.}\ t\in [0,1]
\end{align}
satisfies the condition $u^* \in N(\bar u; \mathcal{U}).$
\end{theorem}
\begin{proof}
We apply Theorem \ref{Frechet differentiable} in the case where $J(z, w)$, $K$ and $ V(w)$, respectively, play the roles of $f(z,w)$, $\Omega$ and $h(w)$. By Lemmas \ref{Au_Lemma2} and \ref{Au_lemma3}, the conditions $\mathbf{(A1)}-\mathbf{(A4)}$ guarantee that all the assumptions of Theorem \ref{Frechet differentiable} are satisfied. So, we have
\begin{eqnarray}\label{BT1aa}
\partial V(\bar w)= \bigcup_{v^*\in N(\bar z;K)}\big[ \nabla_w J(\bar z, \bar w)+ \mathcal{T}^*\big((\mathcal{M}^*) ^{-1}(\nabla_z J(\bar z, \bar w) +v^*)\big)\big].
\end{eqnarray}
From \eqref{BT1aa}, $(\alpha^*, \theta^*) \in \partial V(\bar w)$ if and only if
\begin{align}
\label{17new}
(\alpha^*, \theta^*) - \nabla_w J(\bar z, \bar w) \in \mathcal{T}^*\big((\mathcal{M}^*) ^{-1}(\nabla_z J(\bar z, \bar w) +v^*)\big)
\end{align}
for some $v^* \in N(\bar z; K).$ Note that $\nabla_w J(\bar z, \bar w)=(0_{\Bbb{R}^n},J_\theta(\bar z, \bar w) )$ and $v^*=(0_{\Bbb{R}^n},u^*)$ for some $u^* \in N(\bar u; \mathcal{U})$. Hence, from \eqref{17new} we get
$$(\alpha^*, \theta^* - J_\theta(\bar z, \bar w)) \in \mathcal{T}^*\big((\mathcal{M}^*) ^{-1}(\nabla_z J(\bar z, \bar w) +v^*)\big).$$
Thus, there exists $(a,v) \in \Bbb{R}^n \times L^q ([0,1], \Bbb{R}^n)$ such that
\begin{align}
\label{formula2}
(\alpha^*, \theta^* - J_\theta(\bar z, \bar w)) \in \mathcal{T}^*(a,v) \ \; \mbox{and}\ \;\nabla_z J(\bar z, \bar w) +v^*= \mathcal{M}^*(a,v).
\end{align}
By virtue of Lemma \ref{Au_lemma1}, we see that \eqref{formula2} is equivalent to the following
\begin{align}
& \begin{cases}
\alpha^*=a, \ \theta^* -J_\theta (\bar z, \bar w)=C^T(\cdot )v(\cdot ),\\
\big (J_x(\bar x, \bar u, \bar w), J_u(\bar x, \bar u,\bar w)+v^* \big)=( \mathcal{A}^*(a,v), \mathcal{B}^*(a,v) ).
\end{cases}\nonumber
\end{align}
Invoking Lemma \ref{Au_Lemma2}, we can rewrite this system as
\begin{align}
\begin{cases}
\alpha^*=a,\, \theta^*= L_\theta(\cdot, \bar x(\cdot ), \bar u(\cdot ), \bar \theta(\cdot )) + C^T(\cdot )v(\cdot ),\\
g'( \bar x(1)) + \displaystyle\int_0^1L_x(t, \bar x(t), \bar u(t), \bar \theta (t)) dt =a - \displaystyle\int_0^1 A^T (t) v(t)dt,\\
g'(\bar x(1) ) + \displaystyle\int _{(.)}^1 L_x (\tau, \bar x(\tau), \bar u (\tau ), \bar \theta(\tau))d \tau
= v(\cdot )+ \displaystyle\int_0^{(.)} A^T (\tau) v(\tau) d \tau-\displaystyle\int_0^1 A^T(t) v(t) dt,\\
L_u(\cdot ,\bar x(\cdot ), \bar u(\cdot ), \bar \theta(\cdot)) +u^* =-B^T(\cdot )v(\cdot).
\end{cases}\nonumber
\end{align}
Clearly, the latter is equivalent to
\begin{align} \label{formula3}
\begin{cases}
\alpha^*=a,\,
\theta^*= L_\theta(\cdot , \bar x(\cdot ), \bar u(\cdot ), \bar \theta(\cdot )) + C^T(\cdot )v(\cdot ),\\
g'( \bar x(1)) + \displaystyle\int_0^1L_x(t, \bar x(t), \bar u(t), \bar \theta (t)) dt =a - \displaystyle\int_0^1 A^T (t) v(t)dt,\\
g'(\bar x(1) ) - \displaystyle\int ^{(.)}_1 L_x (\tau, \bar x(\tau), \bar u (\tau ), \bar \theta(\tau))d \tau
= v(\cdot)+ \displaystyle\int_1^{(.)} A^T (\tau) v(\tau) d \tau,\\
L_u(\cdot ,\bar x(\cdot ), \bar u(\cdot ), \bar \theta(\cdot )) +u^* =-B^T(\cdot )v(\cdot).
\end{cases}
\end{align}
The third equality in \eqref{formula3} and the condition $v(\cdot)\in L^q([0,1], \Bbb{R}^n)$ imply that $v(\cdot)$ is absolutely continuous on $[0,1]$ and, moreover, $\dot v(\cdot)\in L^q([0,1], \Bbb{R}^n)$. Hence $v(\cdot)\in W^{1,q}([0,1], \Bbb{R}^n)$. In addition, the third equality in \eqref{formula3} implies that $v(1)=g'(\bar x(1) )$. Moreover, by differentiating, we get $-\dot v(\cdot) - A^T (\cdot)v(\cdot) =L_x(\cdot, \bar x(\cdot), \bar u(\cdot), \bar \theta(\cdot)).$ Therefore,~\eqref{formula3} can be rewritten as
\begin{align}\label{formula4}
\begin{cases}
\alpha^*=a,\,
v\in W^{1,q}([0,1], \Bbb{R}^n),\\
\alpha^*= g'( \bar x(1)) + \displaystyle\int_0^1L_x(t, \bar x(t), \bar u(t), \bar \theta (t)) dt + \displaystyle\int_0^1 A^T (t) v(t)dt,\\
\theta^*= L_\theta(\cdot, \bar x(\cdot), \bar u(\cdot), \bar \theta(\cdot)) + C^T(\cdot)v(\cdot) ,\\
v(1)=g'(\bar x(1)),\\
-\dot v(\cdot) - A^T (\cdot)v(\cdot) =L_x(\cdot, \bar x(\cdot), \bar u(\cdot), \bar \theta(\cdot)),\\
-B^T(\cdot)v(\cdot)= L_u(\cdot,\bar x(\cdot), \bar u(\cdot), \bar \theta(\cdot)) +u^* .
\end{cases}
\end{align}
Defining $y:=-v$ and omitting the vector $a=\alpha^* \in \Bbb{R}^n$, we can put \eqref{formula4} in the form
\begin{align*}
\begin{cases}
y\in W^{1,q}([0,1], \Bbb{R}^n),\\
\alpha^*= g'( \bar x(1)) + \displaystyle\int_0^1L_x(t, \bar x(t), \bar u(t), \bar \theta (t)) dt - \displaystyle\int_0^1 A^T (t) y(t)dt,\\
\theta^*(t)= L_\theta(t, \bar x(t), \bar u(t), \bar \theta(t)) - C^T(t)y(t) \ \,\mbox{a.e.}\ t\in [0,1],\\
\dot y(t) + A^T (t)y(t) =L_x(t, \bar x(t), \bar u(t), \bar \theta(t))\ \, \mbox{a.e.}\ t\in [0,1],\\
y(1)=-g'(\bar x(1)),\\
B^T(t)y(t)-u^*(t)= L_u(t,\bar x(t), \bar u(t), \bar \theta(t)) \ \, \mbox{a.e.}\ t\in [0,1]. \\
\end{cases}
\end{align*}
The assertion of the theorem follows easily from this system.
\end{proof}
Next, let us show how the singular subdifferential of $V(\cdot)$ can be computed.
\begin{theorem}
\label{control_singular_subdifferential} Suppose that all the assumptions of Theorem \ref{Control_main_theorem} are satisfied. Then, a vector $(\alpha^*, \theta^*) \in \Bbb{R}^n \times L^q ([0,1], \Bbb{R}^k)$ belongs to $\partial^\infty V(\bar w)$ if and only if
\begin{align}
\label{equa1}
\alpha^*=\int_0^1 A^T(t) v(t)dt,
\end{align}
\begin{align}
\label{equa2}
\theta^*(t)=C^T(t)v(t)\ \, \mbox{a.e.}\ t\in[0,1],
\end{align}
where $v \in W^{1,q}([0,1], \Bbb{R}^n )$ is the unique solution of the system
\begin{align}
\label{equa3}
\begin{cases}
\dot v(t)=-A^T(t)v(t) \ \mbox{a.e.} \ t\in[0,1], \\
v(0)=\alpha^*,
\end{cases}
\end{align}
such that the function $u^* \in L^q([0,1], \Bbb{R}^m)$ given by
\begin{align}
\label{equa4}
u^*(t)=-B^T(t)v(t) \ \mbox{a.e.} \ t\in[0,1]
\end{align}
belongs to $ N(\bar u; \mathcal{U}).$
\end{theorem}
\begin{proof}
We apply Theorem \ref{Frechet differentiable} in the case where $J(z, w)$, $K$ and $ V(w)$, respectively, play the roles of $f(z,w)$, $\Omega$ and $h(w)$. By Lemmas \ref{Au_Lemma2} and \ref{Au_lemma3}, the conditions $\mathbf{(A1)}-\mathbf{(A4)}$ guarantee that all the assumptions of Theorem \ref{asprogramingproblem} are satisfied. Hence, by \eqref{BT1'} we have
\begin{eqnarray}\label{BT1'a}
\partial^\infty V(\bar w)= \bigcup_{(w^*, z^*)\in \partial^\infty J(\bar z, \bar w)}\;\bigcup_{v^*\in N(\bar z;
K)}\big[ w^*+ \mathcal{T}^*\big((\mathcal{M}^*) ^{-1}(z^* +v^*)\big)\big].
\end{eqnarray}
Since ${\rm \mbox{\rm dom}\,\,}J= Z \times W$ and $\partial^\infty J(\bar z, \bar w)=N((\bar z, \bar w);{\rm \mbox{\rm dom}\,\,}J )$ by \cite[Proposition 4.2]{AnYen}, it holds that $\partial^\infty J(\bar z, \bar w)=\{(0_{Z^*}, 0_{W^*}) \}$. Therefore, from \eqref{BT1'a} one gets
\begin{eqnarray}\label{BT1'ab}
\partial^\infty V(\bar w)= \bigcup_{v^*\in N(\bar z;K)}\big[ \mathcal{T}^*\big((\mathcal{M}^*) ^{-1}(v^*)\big)\big].
\end{eqnarray}
Thus, a vector $(\alpha^*, \theta^*)\in \Bbb{R}^n \times L^q ([0,1], \Bbb{R}^k)$ belongs to $ \partial^\infty V(\bar w)$ if and only if one can find $v^* \in N(\bar z;K)$ and $(a,v)\in \Bbb{R}^n \times L^q([0,1], \Bbb{R}^n)$ such that
\begin{align}
\label{equa5} \mathcal{M}^*(a,v)=v^* \ \;\mbox{and} \ \;\mathcal{T}^* (a,v)=(\alpha^*, \theta^*).
\end{align}
Since $N(\bar z; K)=\{0_{\Bbb{R}^n}\} \times N(\bar u; \mathcal{U})$, we must have $v^*=(0_{\Bbb{R}^n}, u^*)$ for some $u^*\in N(\bar u; \mathcal{U})$. By Lemma \ref{Au_lemma1}, we can rewrite \eqref{equa5} equivalently as
\begin{align*}
\begin{cases}
a-\displaystyle\int_0^1 A^T(t) v(t) dt=0,\\
v(\cdot)+ \displaystyle\int_0^{(.)} A^T(\tau) v(\tau) d \tau - \displaystyle\int_0^1 A^T(t) v(t) dt=0,\\
-B^T(\cdot)v(\cdot)=u^*(\cdot),\\
a=\alpha^*,\\
C^T(\cdot)v(\cdot)=\theta^*(\cdot).
\end{cases}
\end{align*}
Omitting the vector $a \in \Bbb{R}^n$, we transform this system to the form
\begin{align*}
\begin{cases}
\alpha^*=\displaystyle\int_0^1 A^T(t) v(t) dt,\\
v(\cdot)=- \displaystyle\int_0^{(.)} A^T(\tau) v(\tau) d \tau + \displaystyle\int_0^1 A^T(t) v(t) dt,\\
u^*(\cdot)=-B^T(\cdot)v(\cdot),\\
\theta^*(\cdot)=C^T(\cdot)v(\cdot).
\end{cases}
\end{align*}
The second equality of the last system implies that $v \in W^{1,q} ([0,1], \Bbb{R}^n )$ (see the detailed explanation in the proof of Theorem \ref{Control_main_theorem}). Hence, that system is equivalent to the following
\begin{align*}
\begin{cases}
v\in W^{1,q}([0,1], \Bbb{R}^n ),\\
\alpha^*=\displaystyle\int_0^1 A^T(t) v(t) dt,\\
\dot v(t)= -A^T(t) v(t) \ \,\mbox{a.e.} \ t \in [0,1],\\
v(0)=\displaystyle\int_0^1 A^T(t) v(t) dt,
\\
u^*(t)=-B^T(t)v(t) \ \,\mbox{a.e.} \ t \in [0,1],\\
\theta^*(t)=C^T(t)v(t) \ \,\mbox{a.e.} \ t \in [0,1].
\end{cases}
\end{align*}
These properties and the inclusion $u^*\in N(\bar u; \mathcal{U})$ show that the conclusion of the theorem is valid.
\end{proof}
\section{Illustrative examples}
\markboth{\centerline{\it Illustrative examples}}{\centerline{\it D.T.V.~An, J.-C.~Yao, and N.D.~Yen}} \setcounter{equation}{0}
We shall apply the results obtained in Theorems \ref{Control_main_theorem} and \ref{control_singular_subdifferential} to an optimal control problem which has a clear mechanical interpretation.
\medskip
Following Pontryagin et al. \cite[Example~1, p.~23]{Pontryagin_etal.__1962}, we consider a vehicle of mass~1 moving without friction on a straight road, marked by an origin, under the impact of a force $u(t)\in \mathbb{R}$ depending on time $t\in [0,1]$. Denote the coordinate of the vehicle at time $t$ by $x_1(t)$ and its velocity by $x_2(t)$. According to Newton's second law, we have $u(t)=1\times \ddot{x}_1(t)$; hence
\begin{align}\label{P_EX}
\begin{cases}
\dot x_1(t)=x_2(t),\\
\dot x_2 (t)=u(t).
\end{cases}
\end{align} Suppose that the vehicle's initial coordinate and velocity are, respectively, $x_1(0)=\bar\alpha_1$ and $x_2(0)=\bar\alpha_2$.
The problem is to minimize both the distance of the vehicle to the origin and its velocity at the terminal time $t=1$. Formally, it is required that the sum of squares $[x_1(1)]^2+[x_2(1)]^2$ be minimized while the measurable control $u(\cdot)$ satisfies the constraint $\displaystyle\int_0^1 |u(t)|^2dt\leq 1$ (\textit{an energy-type control constraint}).
\medskip
It is worth stressing that the above problem differs from the one considered in \cite[Example~1, p.~23]{Pontryagin_etal.__1962}, where \textit{the pointwise control constraint} $u(t)\in [-1,1]$ was imposed and the authors' objective was to minimize the terminal time $T\in [0,\infty)$ at which $x_1(T)=0$ and $x_2(T)=0$. The latter conditions mean that the vehicle arrives at the origin with velocity 0. As far as we know, the classical Maximum Principle \cite[Theorem~1, p.~19]{Pontryagin_etal.__1962} cannot be applied to our problem.
\medskip
We will analyze the model \eqref{P_EX} with the control constraint $\displaystyle\int_0^1 |u(t)|^2dt\leq 1$ by using the results of the preceding section. Let $X= W^{1,2}([0,1], \Bbb{R}^2 )$, $ U= L^2([0,1], \Bbb{R} )$, $\Theta= L^2([0,1], \Bbb{R}^2 ).$ Choose $A(t)=A$, $B(t)=B$, $C(t)=C$ for all $t\in [0,1]$, where $$
A=\left(
\begin{array}{ll}
0& 1\\
0& 0\\
\end{array} \right),
\quad
B=\left(
\begin{array}{l}
0\\
1\\
\end{array} \right),
\quad C=\left(
\begin{array}{ll}
1& 0\\
0 & 1\\
\end{array} \right).
$$
Put $g(x)=\|x\|^2$ for $x\in\mathbb{R}^2$ and $L(t,x,u,\theta)=0$ for $(t,x,u,\theta)\in [0,1]\times\mathbb{R}^2\times\mathbb{R}\times\mathbb{R}^2$.
Let $\mathcal{U}=\left\{u \in L^2([0,1], \Bbb{R} ) \mid ||u||_2 \le 1\right\}.$ With the above described data set, the optimal control problem \eqref{objective_Function_Control}--\eqref{control_constraint} becomes
\begin{align}\label{E_problem1n}
\begin{cases}
J(x,u,w)=x_1^2(1)+ x_2^2(1)\to {\rm \inf}\\
\dot x_1(t)=x_2(t)+\theta_1(t),\
\dot x_2(t)=u(t)+\theta_2(t),\\
x_1(0)=\alpha_1, \,
x_2(0)=\alpha_2,\,
u \in \mathcal{U}.
\end{cases}
\end{align}
The perturbation $\theta_1(t)$ may represent a noise in the velocity caused, for instance, by a light wind. Similarly, the perturbation $\theta_2(t)$ may represent a noise in the force, caused by an imperfect response of the vehicle's engine to a human control decision. We define the function $\bar\theta\in\Theta$ by setting $\bar\theta(t)=(0,0)$ for all $t\in [0,1]$. The vector $\bar\alpha=(\bar\alpha_1,\bar\alpha_2)\in\mathbb{R}^2$ will be chosen in several ways.
\medskip
In the next example, optimal solutions of \eqref{E_problem1n} are sought for $\theta=\bar\theta$ and $\alpha=\bar\alpha$, where $\bar\alpha$ is taken from a certain subset of $\mathbb{R}^2$. These optimal solutions are used in the subsequent two examples, where we compute the subdifferential and the singular subdifferential of the optimal value function $V(w)$, $w=(\alpha,\theta)\in\mathbb R^2\times \Theta$, of \eqref{E_problem1n} at $\bar w=(\bar\alpha,\bar\theta)$ by applying Theorems \ref{Control_main_theorem} and \ref{control_singular_subdifferential}.
\begin{ex}\label{EX1} \rm Consider the parametric problem \eqref{E_problem1n} at the parameter $w=\bar w$:
\begin{align}\label{E_problem1n(1)}
\begin{cases}
J(x,u, \bar w)=x_1^2(1)+ x_2^2(1)\to {\rm \inf}\\
\dot x_1(t)=x_2(t),\ \,
\dot x_2(t)=u(t),\\ x_1(0)=\bar\alpha_1, \,
x_2(0)=\bar\alpha_2,\,
u \in \mathcal{U}.
\end{cases}
\end{align}
In the notation of Section \ref{section3}, we interpret \eqref{E_problem1n(1)} as the parametric optimization problem
\begin{align}\label{E_problem1n(2)}
\begin{cases}
J(x,u, \bar w)=x_1^2(1)+ x_2^2(1)\to {\rm \inf}\\
(x,u)\in G(\bar w) \cap K,
\end{cases}
\end{align}
where $G(\bar w)=\left\{(x,u)\in X \times U \mid \mathcal{M}(x,u)=\mathcal{T}(\bar w)\right\}$ and $K=X \times \mathcal{U}.$
Then, in accordance with \cite[Proposition~2, p.~81]{Ioffe_Tihomirov_1979}, $(\bar x, \bar u)$ is a solution of \eqref{E_problem1n(1)} if and only if
\begin{align}
\label{1plus}
(0_{X^*},0_{U^*})\in \partial_{x,u} J(\bar x, \bar u, \bar w) +N((\bar x, \bar u); G(\bar w) \cap K ).
\end{align}
\textit{Step 1} \big (computing the cone $N((\bar x, \bar u); G(\bar w))$\big). We have
\begin{align}
\label{normal_cone_G}
N((\bar x, \bar u);G(\bar w))={\rm rge}(\mathcal{M^*}):=\{\mathcal{M^*}x^*\mid x^* \in X^* \}.
\end{align}
Indeed, since $G(\bar w)=\left\{(x,u)\in X \times U \mid \mathcal{M}(x,u)=\mathcal{T}(\bar w)\right\}$ is an affine manifold, \begin{align}\label{ker_mathcal_M} N((\bar x, \bar u);G(\bar w))&=({\rm ker}\mathcal{M})^\perp\\ \nonumber
& =\{(x^*,u^*)\in X^* \times U^* \mid \langle (x^*,u^*), (x,u)\rangle=0\ \forall (x,u)\in {\rm ker}\mathcal{M}\}.\end{align}
For any $z(\cdot)=(z_1(\cdot),z_2(\cdot))\in X$, if we choose $x_2(t)=z_2(0)$ and $x_1(t)=z_1(t)+z_2(0)t$ for all $t\in [0,1]$, and $u(t)=\dot x_2(t)$ for a.e. $t\in [0,1]$, then $(x,u)\in X\times U$ and $\mathcal{M}(x,u)=z$. This shows that the continuous linear operator $\mathcal{M}:X \times U\to X$ is surjective. In particular, $\mathcal{M}$ has closed range.
Therefore, by \cite[Proposition~2.173]{Bonnans_Shapiro_2000}, from \eqref{ker_mathcal_M} we get $$ N((\bar x, \bar u);G(\bar w))=({\rm ker}\mathcal{M})^\perp={\rm rge}(\mathcal{M^*})=\{\mathcal{M^*}x^*\mid x^* \in X^* \};$$ so \eqref{normal_cone_G} is valid.
\textit{Step 2} \big(decomposing the cone $N((\bar x, \bar u); G(\bar w ) \cap K)$\big).
To prove that
\begin{align}
\label{decompose_normal}
N((\bar x, \bar u); G(\bar w ) \cap K)
=\{0_{X^*}\} \times N(\bar u; \mathcal{U}) + N((\bar x, \bar u);G(\bar w)),
\end{align}
we first notice that
\begin{align}
\label{normal_times}
N((\bar x, \bar u); K)
=\{0_{X^*}\} \times N(\bar u; \mathcal{U}).
\end{align}
Next, let us verify the normal qualification condition
\begin{align}
\label{Q-C}
N((\bar x, \bar u); K) \cap [- N((\bar x, \bar u);G(\bar w))]=\{(0,0)\}
\end{align}
for the convex sets $K$ and ${\rm gph}\, G$. Take any $(x_1^*,u_1^*) \in N((\bar x, \bar u); K) \cap [- N((\bar x, \bar u);G(\bar w))].$ On one hand, by \eqref{normal_times} we have $x_1^*=0$ and $u_1^*\in N(\bar u; \mathcal{U})$. On the other hand, by \eqref{normal_cone_G} and the third assertion of Lemma~\ref{Au_lemma1}, we can find an element $$x^*=(a,v) \in X^*=\mathbb{R}^2 \times L^2([0,1], \mathbb{R}^2)$$ such that $x_1^*= -\mathcal{A}^* (a,v)$ and $u_1^*=-\mathcal{B}^*(a,v).$ Then
\begin{align}\label{system_n}
0= \mathcal{A}^* (a,v),\ \, u_1^*= -\mathcal{B}^*(a,v).
\end{align}
Write $a=(a_1,a_2)$, $v=(v_1,v_2)$ with $a_i\in\mathbb R$ and $v_i\in L^2([0,1], \mathbb{R})$, $i=1,\, 2$. According to Lemma~\ref{Au_lemma1}, \eqref{system_n} is equivalent to the following system
\begin{align}\label{system2}
\begin{cases}
a_1=0,\, a_2-\displaystyle\int_0^1 v_1(t)dt=0,\\
v_1=0, \\
v_2+ \displaystyle\int_0^{(.)} v_1(\tau)d\tau-\int_0^1 v_1(t)dt=0,\\
u_1^*=v_2.
\end{cases}
\end{align}
From \eqref{system2} it follows that $(a_1, a_2)=(0,0),$ $(v_1,v_2)=(0,0)$ and $u_1^*=0$. Thus $(x_1^*, u_1^*)=(0,0).$ Hence,~\eqref{Q-C} is fulfilled.
Furthermore, since $\mathcal{U}=\left\{u \in L^2([0,1], \Bbb{R} ) \mid ||u||_2 \le 1\right\}$, we have ${\rm int}\, \mathcal{U}\not=\emptyset$; so $K$ is a convex set with nonempty interior. Due to \eqref{Q-C}, one cannot find any $(x_0^*, u_0^*) \in N((\bar x, \bar u); K)$ and $(x_1^*,u_1^*)\in N((\bar x, \bar u);G(\bar w))$, not all zero, with $(x_0^*,u_0^*)+(x_1^*, u_1^*)=0.$ Hence, by \cite[Proposition~3, p.~206]{Ioffe_Tihomirov_1979}, $G(\bar w) \cap {\rm int}\, K\not=\emptyset$. Moreover, according to \cite[Proposition~1, p.~205]{Ioffe_Tihomirov_1979}, we have
$$ N((\bar x, \bar u); G(\bar w ) \cap K)
=N((\bar x, \bar u);K) + N((\bar x, \bar u);G(\bar w)).$$
Hence, combining the last equation with \eqref{normal_times} yields \eqref{decompose_normal}.
\textit{ Step 3} \big(computing the partial subdifferentials of $J( \cdot , \cdot, \bar w)$ at $(\bar x, \bar u)$\big ).
We first note that $J(x,u,\bar w)$ is a convex function.
Clearly, the assumptions $\mathbf{(A1)}$ and $\mathbf{(A2)}$ are satisfied. Hence, by Lemma \ref{Au_Lemma2}, the function $J(x, u, \bar w )=g( x (1))=x_1^2(1)+ x_2^2(1)$ is Fr\'echet differentiable at $(\bar x,\bar u)$, $J_u(\bar x, \bar u, \bar w)=0_{U^*}$, and \begin{align}\label{derivativeJ_0}J_x(\bar x, \bar u, \bar w)=\big(g'( \bar x(1)), g'(\bar x(1)) \big) =\big( (2 \bar x_1(1), 2 \bar x_2(1)), (2 \bar x_1(1), 2 \bar x_2(1)) \big),\end{align} where the first symbol $(2 \bar x_1(1), 2 \bar x_2(1)) $ is a vector in $\mathbb R^2$, while the second symbol $(2 \bar x_1(1), 2 \bar x_2(1)) $ signifies the constant function $t\mapsto (2 \bar x_1(1), 2 \bar x_2(1)) $ from $[0,1]$ to $\mathbb R^2$.
Therefore, one has
\begin{equation}\label{derivativeJ}
\partial_{x,u} J(\bar x, \bar u, \bar w)= \left\{\big (J_x(\bar x, \bar u, \bar w), 0_{U^*} \big )\right\}
\end{equation} with $J_x(\bar x, \bar u, \bar w)$ being given by \eqref{derivativeJ_0}.
\smallskip
\textit{Step 4} \big(solving the optimality condition\big).
By \eqref{normal_cone_G}, \eqref{decompose_normal}, and \eqref{derivativeJ}, we can assert that \eqref{1plus} is fulfilled if and only if there exist $u^* \in N(\bar u; \mathcal{U})$ and $x^*=(a,v) \in \mathbb{R}^2 \times L^2([0,1], \mathbb{R}^2 )$ with $a=(a_1,a_2)\in \mathbb{R}^2$, $v=(v_1,v_2)\in L^2([0,1], \mathbb{R}^2 )$, such that
\begin{align}
\label{4plus}
\big( \big ((-2 \bar x_1(1), -2 \bar x_2(1)), (-2 \bar x_1(1),- 2 \bar x_2(1))\big ), -u^* \big) =\mathcal{M^*}(a,v).
\end{align}
According to Lemma \ref{Au_lemma3}, we have $\mathcal{M^*}(a,v)=(\mathcal{A^*}(a,v), \mathcal{B^*} (a,v) ),$ where
\begin{align*}
\mathcal{A}^*(a,v)= \bigg( a- \int_0^{1} A^T(t) v(t) dt,\, v+ \int_0^{(.)} A^T(\tau) v(\tau) d \tau - \int_0^{1} A^T(t) v(t) dt \bigg),
\end{align*}
and $\mathcal{B}^*(a,v)=-B^Tv.$ Combining this with \eqref{4plus} gives
\begin{align}\label{4plusn}
\begin{cases}
-2 \bar x_1(1)=a_1,\ -2 \bar x_2(1)=a_2 - \displaystyle \int_0^1 v_1(t)dt,
\\
-2 \bar x_1(1)=v_1,\, -2 \bar x_2(1)=v_2+\displaystyle\int_0^{(.)} v_1(\tau )d \tau - \displaystyle\int_0^1 v_1(t)dt,\\
u^*=v_2.
\end{cases}
\end{align}
If we can choose $a=0$ and $v=0$ for \eqref{4plusn}, then $u^*=0$; so $u^*\in N(\bar u; \mathcal{U})$. Moreover,~\eqref{4plusn} reduces to
\begin{align}\label{initial_constraints}
\bar x_1(1)=0,\ \,\bar x_2(1)=0.
\end{align}
Besides, we observe that $(\bar x, \bar u) \in G(\bar w)$ if and only if
\begin{align}\label{5plus}
\begin{cases}
\dot {\bar x}_1(t)=\bar x_2(t),\ \,
\dot {\bar x}_2(t)=\bar u(t),\\ \bar x_1(0)=\bar\alpha_1, \,
\bar x_2(0)=\bar\alpha_2,\,
\bar u\in \mathcal{U}.
\end{cases}
\end{align}
Combining \eqref{initial_constraints} with \eqref{5plus} yields
\begin{align}\label{6plus}
\begin{cases}
\dot {\bar x}_1(1)=0,\ \,
\dot {\bar x}_1(0)=\bar \alpha_2,\\ \bar x_1(0)=\bar\alpha_1, \,
\bar x_1(1)=0,\\
\dot {\bar x}_1(t)=\bar x_2(t),\ \,
\dot {\bar x}_2(t)=\bar u(t),\\
\bar u\in \mathcal{U}.
\end{cases}
\end{align}
We shall find $\bar x_1(t)$ in the form $\bar x_1(t)=at^3+bt^2+ct+d$. Substituting this $\bar x_1(t)$ into the first four equalities in \eqref{6plus}, we get
$$\begin{cases}
3a+2b+c=0,\ c=\bar \alpha_2,\\
d=\bar \alpha_1,\ a+b+c+d=0.
\end{cases}$$
Solving this system, we have
$ a= 2\bar \alpha_1 + \bar \alpha_2,$ $
b=-3 \bar \alpha_1 -2 \bar \alpha_2,$ $
c=\bar \alpha_2,$ $
d=\bar \alpha_1.
$
Then $ \bar x_1(t)=(2\bar \alpha_1 + \bar \alpha_2)t^3- (3 \bar \alpha_1 +2 \bar \alpha_2 ) t^2 +\bar\alpha_2 t +\bar\alpha_1.$ So, from the fifth and the sixth equalities in \eqref{6plus} it follows that
\begin{align*}
\begin{cases}
\bar x_2(t)=\dot {\bar{x}}_1(t)=3(2\bar \alpha_1 + \bar \alpha_2)t^2- 2(3\bar \alpha_1 +2\bar \alpha_2 ) t +\bar\alpha_2 ,\\
\bar u(t)=\dot {\bar{x}}_2(t)=(12\bar \alpha_1 + 6\bar \alpha_2)t- (6 \bar \alpha_1 +4\bar \alpha_2 ) .
\end{cases}
\end{align*}
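As a quick sanity check (ours, not part of the original derivation), plain Python arithmetic confirms that the computed coefficients make $\bar x_1$ satisfy the four boundary conditions in \eqref{6plus} for arbitrary initial data:

```python
# Sanity check: x1(t) = a t^3 + b t^2 + c t + d with the coefficients
# a = 2*alpha1 + alpha2, b = -3*alpha1 - 2*alpha2, c = alpha2, d = alpha1
# satisfies x1(0) = alpha1, x1'(0) = alpha2, x1(1) = 0, x1'(1) = 0.
def check_boundary_conditions(alpha1, alpha2, tol=1e-12):
    a = 2 * alpha1 + alpha2
    b = -3 * alpha1 - 2 * alpha2
    c = alpha2
    d = alpha1
    x1 = lambda t: a * t**3 + b * t**2 + c * t + d
    dx1 = lambda t: 3 * a * t**2 + 2 * b * t + c        # velocity x2(t)
    return (abs(x1(0) - alpha1) < tol and abs(dx1(0) - alpha2) < tol
            and abs(x1(1)) < tol and abs(dx1(1)) < tol)

assert check_boundary_conditions(0.3, -0.2)
assert check_boundary_conditions(-0.1, 0.25)
```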
Now, the condition $\bar u \in \mathcal{U}$ in \eqref{6plus} means that
\begin{align}\label{7plus}
1 \ge \int_0^1 |\bar u(t)|^2dt &= \int_0^1 \left[(12\bar \alpha_1 +6 \bar \alpha_2)t- (6 \bar \alpha_1 +4\bar \alpha_2 ) \right]^2 dt.
\end{align}
By simple computation, we see that \eqref{7plus} is equivalent to
\begin{align}
\label{8plus}
12 \bar \alpha_1^2 + 12 \bar \alpha_1 \bar \alpha_2 +4 \bar \alpha_2^2 -1 \le 0.
\end{align}
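The passage from \eqref{7plus} to \eqref{8plus} can also be verified numerically; since the integrand is quadratic in $t$, Simpson's rule on $[0,1]$ evaluates the integral exactly (a sketch in Python; all names are ours):

```python
# Check that the integral of [(12a1 + 6a2)t - (6a1 + 4a2)]^2 over [0, 1]
# equals 12*a1^2 + 12*a1*a2 + 4*a2^2.  Simpson's rule is exact for
# polynomials of degree <= 3, and the integrand here is quadratic in t.
def energy_integral(a1, a2):
    u = lambda t: (12 * a1 + 6 * a2) * t - (6 * a1 + 4 * a2)
    f = lambda t: u(t) ** 2
    return (f(0.0) + 4 * f(0.5) + f(1.0)) / 6.0   # Simpson's rule

def closed_form(a1, a2):
    return 12 * a1**2 + 12 * a1 * a2 + 4 * a2**2

for a1, a2 in [(0.2, 0.0), (0.0, 0.5), (0.1, -0.3)]:
    assert abs(energy_integral(a1, a2) - closed_form(a1, a2)) < 1e-12
```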
Clearly, the set $\Omega$ of all the points $\bar\alpha=(\bar \alpha_1, \bar \alpha_2)\in\mathbb{R}^2$ satisfying \eqref{8plus} is the closed region bounded by an ellipse.
We have shown that for every $\bar\alpha=(\bar \alpha_1, \bar \alpha_2)$ from $\Omega$, problem \eqref{E_problem1n(1)} has an optimal solution $(\bar x, \bar u)$, where
\begin{align}\label{optimal_solution}
\begin{cases}
\bar x_1(t)=(2\bar \alpha_1 + \bar \alpha_2)t^3-(3 \bar \alpha_1 +2 \bar \alpha_2 ) t^2 +\bar\alpha_2 t +\bar\alpha_1,\\
\bar x_2(t)=3(2\bar \alpha_1 + \bar \alpha_2)t^2- 2(3\bar \alpha_1 +2\bar \alpha_2 ) t +\bar\alpha_2 ,\\
\bar u(t)=(12\bar \alpha_1 +6 \bar \alpha_2)t- (6 \bar \alpha_1 +4\bar \alpha_2 ) .
\end{cases}
\end{align}
In this case, the optimal value is $J(\bar x,\bar u,\bar w)=0.$
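As an independent consistency check (a sketch, not part of the paper's argument), one can integrate the dynamics $\dot x_1=x_2$, $\dot x_2=u$ numerically with the control from \eqref{optimal_solution} and confirm that the terminal state vanishes:

```python
# Integrate the double-integrator dynamics  x1' = x2, x2' = u(t)  with the
# candidate optimal control by classical RK4 and check that x(1) = (0, 0).
# Since the exact solution is polynomial, RK4 reproduces it to roundoff.
def simulate(alpha1, alpha2, n=1000):
    u = lambda t: (12 * alpha1 + 6 * alpha2) * t - (6 * alpha1 + 4 * alpha2)
    h = 1.0 / n
    x1, x2, t = alpha1, alpha2, 0.0
    for _ in range(n):
        # RK4 step for the system (x1' = x2, x2' = u(t))
        k1 = (x2, u(t))
        k2 = (x2 + h / 2 * k1[1], u(t + h / 2))
        k3 = (x2 + h / 2 * k2[1], u(t + h / 2))
        k4 = (x2 + h * k3[1], u(t + h))
        x1 += h / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0])
        x2 += h / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1])
        t += h
    return x1, x2

x1_final, x2_final = simulate(0.2, -0.1)
assert abs(x1_final) < 1e-9 and abs(x2_final) < 1e-9
```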
\end{ex}
In the forthcoming two examples, we will use Theorems \ref{Control_main_theorem} and \ref{control_singular_subdifferential} to compute the subdifferential and the singular subdifferential of the optimal value function $V(w)$ of \eqref{E_problem1n} at $\bar w=(\bar\alpha,\bar\theta)$, where $\bar\alpha$ satisfies condition \eqref{8plus}. Recall that the set of all the points $\bar\alpha=(\bar \alpha_1, \bar \alpha_2)\in\mathbb{R}^2$ satisfying \eqref{8plus} is the closed region bounded by an ellipse, which has been denoted by $\Omega$.
\begin{ex} \label{EX3} \rm \textit{(Optimal trajectory is implemented by an internal optimal control)}
For $\alpha=\bar \alpha:=\left(\frac{1}{5}, 0 \right)$, which belongs to ${\rm int}\,\Omega$, and $\theta=\bar\theta$ with $\bar\theta(t)=(0,0)$ for all $t\in[0,1]$,
the control problem \eqref{E_problem1n} becomes
\begin{align}\label{E_problem1}
\begin{cases}
J(x,u)=||x(1)||^2 \to {\rm \inf}\\
\dot x_1(t)=x_2(t),\
\dot x_2(t)=u(t),\\
x_1(0)=\frac{1}{5}, \,
x_2(0)=0,\,
u \in \mathcal{U}.
\end{cases}
\end{align}
For the parametric problem \eqref{E_problem1n}, it is clear that the assumptions $\mathbf{(A1)}$ and $\mathbf{(A2)}$ are satisfied. As $C(t)=\left(
\begin{array}{ll}
1& 0\\
0& 1\\
\end{array}
\right)$ for $t\in\ [0,1]$, one has for every $v \in \Bbb{R}^2$ the following
$$||C^T(t)v||=||v||\ \, {\rm \mbox{a.e.}}\ t\in [0,1].$$ Hence, assumption $\mathbf{(A5)}$ is also satisfied. Then, by Proposition \ref{Proposition_condition}, the assumptions $\mathbf{(A3)}$ and $\mathbf{(A4)}$ are fulfilled. According to \eqref{optimal_solution} and the analysis given in Example \ref{EX1}, the pair $(\bar x, \bar u)\in X\times U$, where $\bar x(t)= \left( \frac{2}{5} t^3-\frac{3}{5}t^2 +\frac{1}{5} , \frac{6}{5}t^2 -\frac{6}{5}t \right)$ and $\bar u(t)=\frac{12}{5}t-\frac{6}{5}$ for $t\in [0,1]$, is a solution of \eqref{E_problem1}. In this case, $\bar u(t)$ is an interior point of $\mathcal{U}$ since $\int_0^1|\bar u(t)|^2dt =\frac{12}{25}<1$.
Thus, by Theorem \ref{Control_main_theorem}, a vector $(\alpha^*, \theta^*) \in \Bbb{R}^2 \times L^2 ([0,1], \Bbb{R}^2)$ belongs to $\partial \, V(\bar\alpha, \bar \theta)$ if and only if
\begin{align}
\label{eq11}
\alpha^*= g'(\bar x(1)) -\int_0^1 A^T(t)y(t)dt
\end{align}
and
\begin{align}
\label{eq55}
\theta^* (t) =-C^T (t)y(t)\ \,\mbox{a.e.}\ t\in [0,1],
\end{align}
where $y \in W^{1,2} ([0,1],\Bbb{R}^2)$ is the unique solution of the system
\begin{align}\label{eq33}
\begin{cases}
\dot y (t) =-A^T (t)y(t)\ \, \mbox{a.e.}\ t\in [0,1],\\
y(1)=-g'( \bar x(1)),
\end{cases}
\end{align}
such that the function $u^* \in L^2([0,1], \Bbb{R})$ defined by
\begin{align}
\label{eq44}
u^*(t)=B^T (t)y(t)\ \,\mbox{a.e.}\ t\in [0,1]
\end{align}
satisfies the condition $u^* \in N(\bar u; \mathcal{U}).$
Since $\bar x(1)= \left(0,0 \right)$,
we have $g'( \bar x(1))=(0,0)$.
So, \eqref{eq33} can be rewritten as
\begin{align*}
\begin{cases}
\dot y_1 (t) =0,\
\dot y_2(t)=-y_1(t),\\
y_1(1)=0,\
y_2(1)=0.
\end{cases}
\end{align*}
Clearly, $y(t)=(0,0)$ is the unique solution of this terminal value problem. Combining this with \eqref{eq11}, \eqref{eq55} and \eqref{eq44}, we obtain $\alpha^*= \left(0,0 \right)$, $\theta^*(t)=(0,0)$ a.e. $t\in [0,1]$, and $u^*(t)=0$ a.e. $t \in [0,1]$. Since
$u^*(t)=0$ satisfies the condition $u^* \in N(\bar u; \mathcal{U})$, we have $\partial V(\bar w)=\{(\alpha^*, \theta^*)\}$, where $\alpha^*= \left(0,0 \right)$ and $\theta^*=(0,0)$.
We now compute $\partial^\infty V(\bar \alpha, \bar \theta)$. By Theorem \ref{control_singular_subdifferential}, a vector $(\tilde\alpha^*, \tilde\theta^*) \in \Bbb{R}^2 \times L^2 ([0,1], \Bbb{R}^2)$ belongs to $\partial^\infty V(\bar w)$ if and only if
\begin{align}
\label{equa11}
\tilde\alpha^*=\int_0^1 A^T(t) v(t)dt,
\end{align}
\begin{align}
\label{equa22}
\tilde\theta^*(t)=C^T(t)v(t)\ \,\mbox{a.e.}\ t\in[0,1],
\end{align}
where $v \in W^{1,2}([0,1], \Bbb{R}^2 )$ is the unique solution of the system
\begin{align}
\label{equa33}
\begin{cases}
\dot v(t)=-A^T(t)v(t)\ \,\mbox{a.e.} \ t\in[0,1],\\
v(0)=\tilde\alpha^*,
\end{cases}
\end{align}
such that the function $\tilde u^* \in L^2([0,1], \Bbb{R})$ given by
\begin{align}
\label{equa44}
\tilde u^*(t)=-B^T(t)v(t)\ \;\mbox{a.e.} \ t\in[0,1]
\end{align}
belongs to $ N(\bar u; \mathcal{U}).$ Thanks to \eqref{equa11}, we can rewrite \eqref{equa33} as
\begin{align*}
\begin{cases}
\dot v_1 (t) =0,\
\dot v_2(t)=-v_1(t),\\
v_1(0)=0,\
v_2(0)=\displaystyle\int_0^1 v_1(t)dt.
\end{cases}
\end{align*}
It is easy to show that $v(t)=(0,0)$ is the unique solution of this system. Hence, \eqref{equa11}, \eqref{equa22} and \eqref{equa44} imply that $\tilde\alpha^*=(0,0)$, $\tilde\theta^*=(0,0)$ and $\tilde u^*=0$. Since $\tilde u^* \in N(\bar u; \mathcal{U})$, we have $\partial^\infty V(\bar w)=\{(\tilde\alpha^*,\tilde\theta^*)\}$, where $\tilde\alpha^*= \left(0,0 \right)$ and $\tilde\theta^*=(0,0)$.
\end{ex}
\begin{ex} \label{EX4} \rm \textit{(Optimal trajectory is implemented by a boundary optimal control)}
For $\alpha=\bar \alpha:=\left(0, \frac{1}{2}\right)$, which belongs to the boundary ${\rm \partial}\,\Omega$ of $\Omega$, and $\theta=\bar\theta$ with $\bar\theta(t)=(0,0)$ for all $t\in[0,1]$, problem \eqref{E_problem1n} becomes
\begin{align}\label{E_problem4}
\begin{cases}
J(x,u)=||x(1)||^2 \to {\rm \inf}\\
\dot x_1(t)=x_2(t),\
\dot x_2(t)=u(t),\\
x_1(0)=0, \,
x_2(0)=\frac{1}{2},\,
u \in \mathcal{U}.
\end{cases}
\end{align}
As it has been shown in Example \ref{EX1}, the pair $(\bar x, \bar u)$ with $\bar x(t) =\left(\frac{1}{2} t^3-t^2 +\frac{1}{2}t,\ \frac{3}{2}t^2-2t + \frac{1}{2}\right)$ and $\bar u(t)=3t-2$ is a solution of \eqref{E_problem4}. In this case, we have $\int_0^1 |\bar u(t)|^2dt=\int_0^1 (3t-2)^2dt=1$. This means that $\bar u$ is a boundary point of $\mathcal{U}.$ So, $N(\bar u; \mathcal{U})=\{\lambda \bar u\mid \lambda\geq 0\}$. Since $\bar x(1)= \left(0,0 \right)$, arguing in the same manner as in Example \ref{EX3}, we obtain $\partial V(\bar w)=\left\{ \left(\alpha^*,\theta^*\right)\right\}$ and $\partial^\infty V(\bar w)=\left\{\left(\tilde\alpha^*,\tilde\theta^*\right)\right\}$, where $\alpha^*=\tilde\alpha^*=\left(0, 0\right)$ and $\theta^*=\tilde\theta^*=\left(0, 0\right)$.
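That the energy constraint is active in this example can be confirmed numerically (a sketch; Simpson's rule is exact for the quadratic integrand):

```python
# For alpha = (0, 1/2) the control is u(t) = 3t - 2, and the constraint is
# active: the integral of (3t - 2)^2 over [0, 1] equals (1 + 8)/9 = 1.
u_sq = lambda t: (3 * t - 2) ** 2
simpson = (u_sq(0.0) + 4 * u_sq(0.5) + u_sq(1.0)) / 6.0   # exact here
assert abs(simpson - 1.0) < 1e-12

# The same value from the constraint formula 12*a1^2 + 12*a1*a2 + 4*a2^2:
assert abs(12 * 0**2 + 12 * 0 * 0.5 + 4 * 0.5**2 - 1.0) < 1e-12
```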
\end{ex}
\section*{Acknowledgements}
\noindent
This work was supported by College of Sciences, Thai Nguyen University (Vietnam), the Grant MOST 105-2221-E-039-009-MY3 (Taiwan), and National Foundation for Science $\&$ Technology Development (Vietnam).
\section{Introduction}
Ammonia molecules have been detected in the gas phase of molecular clouds such as Taurus Molecular Cloud-1 (TMC1-N)
\citep{1968Cheung}, with relatively high abundances of 10$^{-7}$-10$^{-8}$ relative to H$_2$ molecules \citep{Hama2013}. Solid NH$_3$ has been detected through infrared absorption in different astrophysical environments: high-mass
protostars \citep{2004Gibb}, low-mass protostars \citep{2010Bottinelli}, comets \citep{2004BockeleeMorvan}, and in dense molecular
clouds \citep{2001Dartois}. The interstellar grain mantles in dense molecular clouds are predominantly composed of ${\rm H_2O}$ ice, combined with
other molecules such as CO, ${\rm CO_2}$, ${\rm NH_3}$, ${\rm H_2CO}$, and ${\rm CH_3OH}$ \citep{1982Tielens,2010Bottinelli}.
The abundance of ammonia (NH$_3$) in the icy mantles is 1 to 15~\% with respect to water (H$_2$O) ice
\citep{2000Gibb,2004Gibb}, while in the cold dust envelopes of young stellar objects, the ammonia ice fraction is 5~\% or less
\citep{2001Dartois}. In comets, ammonia is present at the 1~\% level
relative to water ice \citep{2004BockeleeMorvan}.
Deuterated ammonia, NH$_2$D, was first detected by Rodriguez Kuiper et al.~\citep{1978Rodriguez} in high-temperature molecular clouds such as
the Orion-KL Nebula region (T=50-150~K).
NH$_2$D molecules have also been observed in many sources: dark molecular clouds \citep{2000Saito},
galactic protostellar cores~\citep{2001Shah}, and interstellar dense cores (L134N) \citep{2000Tine}. The [NH$_2$D]/[NH$_3$]
ratio in the gas phase varies from 0.02 to 0.1. These abundance ratios are larger than the cosmic abundance of elemental deuterium relative to hydrogen (D/H),
which is expected from Big-Bang nucleosynthesis to be 1.5$\times$10$^{-5}$~\citep{2003Linsky}.
Observations in low mass protostellar cores showed the highest [NH$_2$D]/[NH$_3$] ratios (0.3), indicating that deuterium fractionation of ammonia increases towards protostellar regions \citep{2003Hatchell}. Chemical models explained this high fractionation ratio by the gas-phase
ion-molecule chemistry with depletion of C, O and CO from the gas phase \citep{2000Tine,2003Hatchell}.
Doubly deuterated ammonia, ND$_2$H, was first detected in cold (10~K) dense cores (L134N) by~Roueff et al.~\citep{2000Roueff}.
The fractionation ratio [ND$_2$H]/[NH$_3$] expected from models is 0.03 \citep{2003Hatchell}.
The Caltech Submillimeter Observatory (CSO) has detected triply deuterated ammonia, ND$_3$, through its J$_K$ emission
transition near 310~GHz \citep{2002Van} in cold clouds (10~K). The observed [ND$_3$]/[NH$_3$] ratio in very cold clouds of gas and dust is found to be close to 0.001.
Such a high isotopic ratio between ND$_3$ and NH$_3$ suggests that the deuteration of NH$_3$ is likely to occur by ion-molecule reactions in the gas phase, in which deuteron transfer reactions are much faster than proton transfer \citep{2002Van,2002Lis}.
Theoretical models of pure gas-phase chemistry \citep{2000Robert,2000Saito,2005Roueff} explained relatively well the abundances of singly
and multiply deuterated ammonia molecules in dense cores. According to Tielens et al.~\citep{1983Tielens}, grain-surface chemistry would also build deuterated
molecules through the deuteration of grain mantles by D atoms. The deuterated species trapped on grains are eventually released into the gas phase by heating from a nearby star in its formation stage. Recent chemical models of cold dark clouds \citep{2004Roberts} have shown that desorption of species into the gas phase via thermal evaporation is negligible for dark clouds with temperatures of 10~K.
The deuteration of solid ammonia by D atoms has already been investigated by two astrophysical groups, the Watanabe group (Nagaoka et al. \cite{2005Nagaoka}) and the Leiden group (Fedoseev et al. \cite{2015Fedoseev}), using mainly infrared spectroscopy. The experimental studies of Nagaoka et al.~\cite{2005Nagaoka} have shown an efficient deuteration of CH$_3$OH ice by D-atom addition at low surface temperature. The deuterated methanol species are formed via an H-abstraction and D-addition mechanism, through quantum tunneling reactions. These authors reported that no deuterated species of ammonia are observed upon exposure of pre-deposited NH$_3$ ice to D atoms at temperatures below 15~K. The experimental results of Fedoseev et al. \cite{2015Fedoseev} likewise showed that the deuteration of solid NH$_3$ by D atoms does not take place at temperatures lower than 15~K, either by depositing D atoms on ammonia ice or by co-depositing NH$_3$ molecules with D atoms on a cold gold surface.
Based on the previous experimental results of Fedoseev et al. \citep{2015Fedoseev} and Nagaoka et al. \citep{2005Nagaoka}, one may wonder about the dramatic difference observed between the deuteration of ammonia and that of methanol by D atoms in the solid phase. If these authors \citep{2015Fedoseev,2005Nagaoka} did not observe the deuteration of NH$_3$ by D atoms in their experiments, it is probably because of the very high activation energy barrier of the reaction NH$_3$+D in the solid phase in comparison to that of ${\rm CH_3OH}$+D. The activation energy barrier of the reaction (NH$_3$~+~D~$\rightarrow$~NH$_2$D~+~H) has been estimated from earlier experimental \citep{1969Kurylo} and theoretical \citep{2005Moyano} works in the gas phase to be ${\rm 11~kcal\cdot mol^{-1}}$ (${\rm 46~kJ\cdot mol^{-1}}$), while the activation energy barrier of the abstraction reaction (${\rm CH_3OH~+~D}$~${\longrightarrow}$~${\rm CH_2OH +HD}$) has been reported from gas-phase estimations to be lower (${\rm 27~kJ\cdot mol^{-1}}$) \cite{Hama2013}. Up to now, however, there are no laboratory studies providing activation energy barriers for the reactions ${\rm CH_3OH+D}$ and ${\rm NH_3+D}$ in the solid phase.
Laboratory experiments are thus important for understanding the deuteration reactions occurring on cold grain surfaces between condensed molecules and impinging D atoms. However, several factors, such as the flux of deuterium atoms, the thickness of the ices on the grains, and the fluence of atomic species on the surface, may affect the progress and the evolution of these reactions.
In the previous works of Nagaoka et al. \citep{2005Nagaoka} and Fedoseev et al. \citep{2015Fedoseev}, the authors performed experiments in the multilayer regime, covering the aluminium surface with 10~ML of solid NH$_3$ \citep{2005Nagaoka} and the gold surface with 50~ML of ammonia ice \citep{2015Fedoseev}, and irradiating the corresponding ices with D-fluxes of 1-4$\times$10$^{13}$ ${\rm atoms\cdot cm^{-2}\cdot s^{-1}}$ and 3.7$\times$10$^{14}$~${\rm atoms\cdot cm^{-2}\cdot s^{-1}}$, respectively (see Table~\ref{tablea}).
First of all, the use of a high flux of D atoms in their experiments favors D+D recombination reactions on the surface or in the bulk of the ices, and therefore reduces the reaction efficiency of D atoms with the adsorbed ${\rm CH_3OH}$ and ${\rm NH_3}$ species. However, in the experiments of Nagaoka et al. \citep{2005Nagaoka}, the deuteration reaction (${\rm CH_3OH+D}$) seems not to be affected by the high flux of D atoms, probably because the activation energy barrier of the H-D exchange reaction between D and ${\rm CH_3OH}$ is lower than that between D and ${\rm NH_3}$.
On the other hand, as reported by Fedoseev et al. \citep{2015Fedoseev}, the use of a thick layer of ammonia ice favors the formation of hydrogen bonds (N-H), which can strengthen the ${\rm NH_3}$-${\rm NH_3}$ interaction and prevent H-D exchange between D atoms and adsorbed ${\rm NH_3}$ molecules.
In this work, we performed deuteration experiments of solid ${\rm NH_3}$ by D atoms in the sub-monolayer regime, and with a low D-flux, on an oxidized highly oriented pyrolytic graphite (HOPG) surface held at 10~K, partly covered with less than 0.5~ML of amorphous solid water (ASW) ice to simulate water-ice contamination.
We deposited only a fraction of one monolayer of solid ammonia (0.8~ML) on the substrate in order to study the effect of the grain surface on the efficiency of the deuteration reaction between D atoms and the adsorbed ${\rm NH_3}$ molecules. In this work, we consider that physisorption of species on the oxidized HOPG surface dominates over chemisorption.
We also used a low D-flux in comparison to the previous works \citep{2015Fedoseev,2005Nagaoka}, in order to reduce the recombination efficiency of D atoms on the surface and increase the probability of the H-D substitution reaction. As shown in Table~\ref{tablea}, under our experimental conditions, even with the D-atom fluence reduced by factors of 100 and 10 with respect to those of Nagaoka et al. \cite{2005Nagaoka} and Fedoseev et al. \cite{2015Fedoseev}, respectively, the total amount of D atoms (53.5~ML) sent onto the surface is sufficient for the D-fractionation of solid ammonia and the formation of the deuterated species ${\rm NH_2D}$, ${\rm NHD_2}$, and ${\rm ND_3}$.
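The fluence bookkeeping above can be cross-checked with a minimal sketch, assuming the usual surface-science convention ${\rm 1~ML = 10^{15}~particles\cdot cm^{-2}}$ adopted in this paper; the total exposure duration is a derived estimate from the quoted flux, not a value stated in the text.

```python
# Fluence-to-monolayer conversion (convention: 1 ML = 1e15 particles/cm^2).
ML = 1.0e15  # atoms cm^-2 per monolayer

def fluence_in_ml(fluence):
    """Convert an atom fluence (atoms cm^-2) into monolayers."""
    return fluence / ML

d_fluence = 5.35e16  # atoms cm^-2, total D fluence of this work
d_flux = 3.7e12      # atoms cm^-2 s^-1, D flux of this work

coverage = fluence_in_ml(d_fluence)            # 53.5 ML, as quoted in the text
exposure_time_h = d_fluence / d_flux / 3600.0  # ~4 h of cumulative D exposure
print(f"{coverage:.1f} ML delivered in about {exposure_time_h:.1f} h")
```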
For comparison, similar D-atom addition experiments were performed with ${\rm CH_3OH}$ molecules to corroborate the findings of Nagaoka et al.~\citep{2007Nagaoka} and validate the deuteration method governed by the abstraction-addition mechanism. D-atom addition and H-atom abstraction may not be the only mechanism for deuterating molecules on ices: direct H-D substitution reactions could also proceed at low temperatures to fractionate astrophysical molecules.
\begin{table*}
\centering \caption{Comparison of the experimental conditions and results for the (${\rm NH_3+D}$) system in different works. \label{tablea}}
\begin{tabular}{cccccc}
\hline\hline
Article & D Fluence & D Thickness & D-Flux & ${\rm NH_3}$ Thickness & Results \\
\hline
 & ${\rm atoms\cdot cm^{-2}}$ & ML & ${\rm atoms\cdot cm^{-2}\cdot s^{-1}}$ & ML & \\
\hline
Nagaoka et al. \cite{2005Nagaoka} (2005) & $\leq$10$^{18}$ & $\leq$1000 & 10$^{14}$ & 10 & no deuteration \\
Fedoseev et al. \cite{2015Fedoseev} (2015) & 8$\times$10$^{16}$-3$\times$10$^{17}$ & $\leq$100 & 1-4$\times$10$^{13}$ & 50 & no deuteration \\
This work & $\leq$5.35$\times$10$^{16}$ & $\leq$53.5 & 3.7$\times$10$^{12}$ & 0.8 & deuteration \\
\hline\hline
\end{tabular}
\end{table*}
The paper is organized as follows: in section~\ref{exp} we describe the experimental setup and the procedures of the deuteration experiments; section~\ref{results} presents the experimental results for the ${\rm NH_3~+~D}$ and ${\rm CH_3OH~+~D}$ reactions; and in section~\ref{model and discussion} we propose a kinetic model to estimate the activation energy barriers of the successive H-D substitution reactions. We make some concluding remarks in the final section.
\section{Experimental}\label{exp}
The experiments were performed with the FORMOLISM (FORmation of MOLecules in the InterStellar Medium)
apparatus.
The experimental setup is briefly described here and more details are given in a previous work \citep{2007Amiaud}.
The apparatus is composed of an ultra-high vacuum (UHV) stainless steel chamber with a base pressure of about $10^{-11}$ mbar.
The sample holder is located in the center of the main chamber. It is thermally connected to a cold finger of a closed-cycle Helium cryostat.
The temperature of the sample is measured in the range 6~K-350~K. The sample holder is made of a 1~cm diameter copper block covered with a highly
oriented pyrolytic graphite (HOPG, ZYA-grade) substrate. HOPG is a model of an ordered carbonaceous material mimicking interstellar dust grain analogues in astrophysical environments. It is characterized by an arrangement
of carbon atoms in a hexagonal lattice. The HOPG sample (10~mm diameter $\times$ 2~mm thickness) was first dried in an oven at about 100~$^{\circ}$C for two hours, and then cleaved several times using the ``Scotch tape'' method at room temperature to yield several large terraces (micron scale) containing few defects and step edges. The HOPG was cleaved in air immediately prior to being inserted into the vacuum chamber. It was mounted directly onto the copper finger by means of a glue (ARMECO Product INC CERAMA BOND 571-P).
In the chamber, the HOPG sample was annealed to 300~K under UHV to remove any contaminants. In this work, we used an oxidized HOPG sample, which had previously been exposed to an atomic oxygen beam under UHV for several exposure doses and then warmed from 10~K to 300~K to desorb oxygen and other species, mainly water molecules, from the substrate. The oxidation phase was considered complete once the dangling bonds, defects and step edges of the sample were saturated, as deduced from the absence of further modification in the temperature-programmed desorption profiles of the adsorbates. Prior oxidation of the HOPG is expected to give a stable surface, whose structure cannot be modified by other adsorbates.
FORMOLISM is equipped with a quadrupole mass spectrometer (QMS), which routinely allows the simultaneous detection
of several gas-phase species by their masses. The QMS can be placed either in front of the surface, for the detection of species desorbing into the gas phase during the warming-up of the sample, or in front of the beam-line, for the characterization and calibration of the NH$_3$, CH$_3$OH, and D-atom beams. The experimental setup is also equipped with a Fourier transform infrared spectrometer (FTIR) for in-situ solid-phase measurements by reflection absorption infrared spectroscopy (RAIRS) in the spectral range 4000-700~${\rm cm^{-1}}$ \cite{2012Chaabouni}.
The D atomic jet is prepared in a triply differentially pumped beam-line aimed at the sample holder. It is composed of three vacuum chambers connected together by tight diaphragms of 3~mm diameter. The beam-line is equipped with a quartz tube of 4~mm inner diameter, surrounded by a microwave cavity for the dissociation of D$_2$ molecules. When the microwave source (Sairem) is turned on, the cavity is cooled by a pressurized air jet, and D atoms are produced from the D$_2$ molecular plasma. The D$_2$ plasma is generated by a microwave power supply coupled into a Surfatron cavity operating at 2.45~GHz and producing up to 300~W. The warm D atoms undergo several collisions with the inner walls of the tube and finally thermalize at about 350~K before they reach the surface. The charged particles, excited atoms, ions and electrons produced in the plasma quickly recombine within the tight quartz tube \cite{2010Ioppolo, 2011Theule}. Because of the high microwave frequency, the hot energetic particles cannot leave the discharge pipe, as reported in other astrophysical laboratory works \cite{2010Ioppolo}.
The dissociation rate of the deuterium beam, measured with the quadrupole mass spectrometer from the ${\rm D_2}$ signals (m/z=4) with the microwave discharge ON and OFF, is calculated from the relation ${\rm \tau=\frac{D_2(OFF)-D_2(ON)}{D_2(OFF)}}$. In this work, the dissociation rate ${\rm \tau}$ of the ${\rm D_2}$ beam reaches a high value of 85~\% at an effective microwave power of 50~W.
The flux of the dissociated D atoms coming from the gas phase and hitting the surface is
${\rm \Phi_{D, ON}}$ = ${\rm (3.7 \pm 0.5)}$ ${\rm \times10^{12}}$ ${\rm atom\cdot cm^{-2}\cdot s^{-1}}$. It is defined as ${\rm \Phi_{D, ON}}$ = ${\rm 2\tau~\Phi_{D_2,Off}}$,
where ${\rm \Phi_{D_2,Off}}$ = ${\rm (2.2 \pm 0.4)}$ ${\rm \times10^{12}}$ ${\rm molecule\cdot cm^{-2}\cdot s^{-1}}$ is the
flux of D$_2$ beam before running the microwave discharge.
The flux of D$_2$ molecules coming from the beam-line is determined by the so-called King and Wells method \citep{1972King}, which is generally used to evaluate the sticking coefficient of particles incident on a cold surface. This method consists in measuring by mass spectrometry, in real time, the indirect D$_2$ signal in the vacuum chamber during the exposure of the oxidized HOPG surface to D$_2$.
${\rm \Phi_{D_2}}$ is calculated as the ratio of the exposure dose of D$_2$ molecules that saturates the graphitic surface, expressed in ${\rm molecule\cdot cm^{-2}}$, to the corresponding exposure time, expressed in seconds. According to the estimation made by Amiaud et al. \cite{2007Amiaud}, a compact ice layer begins to saturate after an exposure of 0.45~ML of D$_2$ (i.e. ${\rm 0.45\times10^{15}}$ ${\rm molecule\cdot cm^{-2}}$). A more detailed description of the D-flux estimation and its errors is given in reference \cite{2007Amiaud}.
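The beam-calibration arithmetic above can be summarized in a short sketch; the QMS count values used to illustrate ${\rm \tau}$ are hypothetical placeholders, while the two fluxes are the values quoted in the text.

```python
# Beam calibration: dissociation rate tau and atomic D flux.
def dissociation_rate(d2_off, d2_on):
    """tau = (D2(OFF) - D2(ON)) / D2(OFF), from the m/z=4 QMS signals."""
    return (d2_off - d2_on) / d2_off

def atomic_flux(tau, phi_d2_off):
    """Phi_D,ON = 2 * tau * Phi_D2,OFF (each dissociated D2 yields two D atoms)."""
    return 2.0 * tau * phi_d2_off

tau = dissociation_rate(d2_off=1000.0, d2_on=150.0)  # hypothetical counts -> 0.85
phi_d2_off = 2.2e12        # molecules cm^-2 s^-1, King & Wells calibration
phi_d = atomic_flux(tau, phi_d2_off)  # ~3.7e12 atoms cm^-2 s^-1, as quoted
```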
In this study, we deposited all species (NH$_3$, CH$_3$OH, H$_2$O molecules, and D atoms) using a single beam-line, oriented at 45$^{\circ}$ relative to the surface of the sample. This guarantees a quasi-perfect match between the effective areas on which the particles are deposited.
In our NH$_3$+D experiments, the beam-line is pumped out to evacuate the residual ammonia gas after the deposition phase of the ${\rm NH_3}$ ice on the cold surface. The D atoms are then generated in the same beam-line by the microwave dissociation of D$_2$ molecules. We checked with the QMS, placed in front of the beam-line, that no deuterated species (${\rm NH_{2}D}$, ${\rm ND_2H}$, ${\rm HDO}$, ${\rm ND_3}$, ${\rm D_{2}O}$) or radicals (ND$_2$, OD, OH) come from the D beam as contaminants. Figure~\ref{Fig8} shows the signal of m/z=4 before the dissociation of the deuterium molecules (discharge OFF) and during the dissociation phase (discharge ON). There is no increase in the signal of mass 18 during the discharge ON, which would correspond to ${\rm NH_{2}D}$ and ${\rm ND_2}$ species formed from NH$_3$ and D atoms within the beam-line. The small signal of mass 18 is the background signal of
${\rm H_2O}$ contaminant molecules in the main chamber. Moreover, the absence of the signals m/z=19 and m/z=20 excludes any formation of ${\rm NHD_{2}}$ and ${\rm ND_3}$ species in the D beam.
\begin{figure}
\centering
\includegraphics[width=8cm]{Figure8.eps}
\caption{The QMS signals (in counts/second) as a function of time (s) for m/z=2 (D), m/z=4 (D$_2$), m/z=17 (NH$_3$), m/z=18 (NH$_2$D, ND$_2$), m/z=19 (ND$_2$H), and m/z=20 (ND$_3$), recorded with the QMS placed in front of the D beam after the deposition of the NH$_3$ molecules on the oxidized HOPG surface using the same beam-line.}
\label{Fig8}
\end{figure}
Ammonia and methanol ice films with a thickness of 0.8~ML
were grown on the oxidized HOPG surface held at 10~K by beam-line vapor deposition
of NH$_3$ molecules (from a Eurisotop bottle with 99.9~\% purity) and CH$_3$OH molecules (from liquid methanol with 99.5~\% purity).
The monolayer surface coverage corresponds to the number density of molecules populating ${\rm 10^{15}}$ adsorption sites per cm$^{2}$ on the surface; it is defined as ${\rm 1~ML}$ = ${\rm 10^{15}}$ ${\rm molecules\cdot cm^{-2}}$.
In this work, the fluxes of ammonia and methanol species that hit the surface, are defined as the amounts of these species that saturate the surface per unit time ${\rm (\Phi}$ ${\rm =\frac{exposure~dose~for~1~ML}{exposure~time~for~1~ML}}$). The values of the fluxes are found to be ${\rm \Phi_{NH_3}}$ ${\rm =2.1\times10^{12}}$ ${\rm molecules\cdot cm^{-2}\cdot s^{-1}}$
and ${\rm \Phi_{CH_{3}OH}}$ ${\rm = 1.7\times10^{12}}$ ${\rm molecules\cdot cm^{-2}\cdot s^{-1}}$.
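A minimal sketch of the deposition times implied by these fluxes, using the ${\rm 1~ML = 10^{15}~molecules\cdot cm^{-2}}$ convention above; these times are derived estimates, not values quoted in the text.

```python
# Exposure time needed to deposit a given sub-monolayer coverage,
# given the molecular flux of the beam.
ML = 1.0e15  # molecules cm^-2 per monolayer

def exposure_time(coverage_ml, flux):
    """Time (s) to deposit coverage_ml monolayers at the given flux."""
    return coverage_ml * ML / flux

t_nh3 = exposure_time(0.8, 2.1e12)    # ~381 s (~6.4 min) of NH3 deposition
t_ch3oh = exposure_time(0.8, 1.7e12)  # ~471 s (~7.8 min) of CH3OH deposition
```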
Because water is always present as a contaminant in the ultra-high vacuum chamber and can condense on the cold surface at 10~K, we performed experiments to study the effect of water ice on the deuteration of solid ammonia. In order to simulate the small amount of water ice that can condense on the surface during the exposure phase of the reactants at 10~K, we deposited a very thin film ($\sim$0.5~ML) of porous amorphous solid water (ASW) ice on the oxidized graphite surface at 10~K, by H$_2$O vapor deposition for 5 minutes, using the same beam-line as for ammonia and D atoms. The water vapor was obtained from deionized water purified by several pumping cycles under cryogenic vacuum.
We estimated the thickness in ML of the amorphous water ice film grown on the surface at 10~K by beam-line (${\rm H_2O}$) vapor deposition using reflection absorption infrared spectroscopy (RAIRS). We deposited water ice on the surface at 10~K for different exposure times, recorded the RAIR spectra, and measured the integrated areas $\int A(\nu)\, d\nu$ (in ${\rm cm^{-1}}$) of the IR absorption band of water ice at about 3200~${\rm cm^{-1}}$. Using the formula ${\rm S=\frac{\ln 10 \int A(\nu) d\nu}{N}}$ \cite{2007Bisschop}, where N is the column density of water ice in molecule~$\cdot$~cm$^{-2}$ and S is the band strength of ${\rm H_2O}$ at 3200~${\rm cm^{-1}}$ in cm~$\cdot$~molecule$^{-1}$, we estimated the exposure time required to form 1~ML surface coverage of ASW ice at 10~K. The first monolayer of amorphous water ice covering the surface at 10~K is reached after about 10~minutes of water deposition at a flux of ${\rm 10^{12}~molecules\cdot cm^{-2}\cdot s^{-1}}$. The absorbance of the corresponding ${\rm H_2O}$ infrared band at 3200~${\rm cm^{-1}}$ is very low (0.0005). Because of the low surface coverage of solid ${\rm NH_3}$ (0.8~ML) on the oxidized graphite, and the low absorbance (0.0002) of the ammonia infrared band at 1106~${\rm cm^{-1}}$ for the ${\rm \nu_2}$ vibrational mode, the deuteration experiments of ammonia by D atoms on the oxidized graphite surface were analyzed in this work only by the TPD-QMS method.
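The RAIRS column-density estimate can be sketched as follows; the band strength S and the integrated band area used here are hypothetical illustrative values (S is of the order of published band strengths for the 3200~${\rm cm^{-1}}$ water feature, but neither number is taken from this paper).

```python
import math

# Column density from an integrated RAIRS absorbance, N = ln(10) * Int(A dnu) / S.
def column_density(integrated_absorbance, band_strength):
    """N (molecule cm^-2) from the integrated absorbance (cm^-1)
    and the band strength S (cm molecule^-1)."""
    return math.log(10) * integrated_absorbance / band_strength

S_H2O = 2.0e-16  # cm molecule^-1, assumed literature-style band strength
area = 0.087     # cm^-1, hypothetical integrated band area

N = column_density(area, S_H2O)  # ~1.0e15 molecule cm^-2, i.e. about 1 ML
```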
For the deuteration experiments of solid NH$_3$ (or CH$_3$OH), we first deposited 0.8~ML of NH$_3$ (or CH$_3$OH) ice
on the HOPG surface held at 10~K, and then exposed the ammonia (or methanol) film to the D atomic beam at the same surface temperature.
After the exposure phases, we used the TPD technique, warming the sample
from 10~K to 210~K with a linear heating rate of ${\rm 0.17~K\cdot s^{-1}}$ until the
sublimation of the ices from the surface. The species desorbing into the gas phase are then detected and identified by mass spectrometry.
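For reference, the duration of one linear TPD ramp follows directly from the heating rate, ${\rm T(t) = 10~K + 0.17~K\cdot s^{-1}\, t}$:

```python
# Duration of the linear TPD ramp from 10 K to 210 K at 0.17 K/s.
def ramp_time(t_start, t_end, rate):
    """Time (s) for a linear temperature ramp at the given rate (K/s)."""
    return (t_end - t_start) / rate

t_total = ramp_time(10.0, 210.0, 0.17)  # ~1176 s, i.e. about 20 min per TPD run
```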
\section{Results}\label{results}
\subsection{Co-deposition of H$_{2}$O-NH$_{3}$}\label{H2O-NH3}
In order to study the effect of water on the adsorption and desorption of ammonia molecules, two kinds of experiments were performed on the graphite surface held at 10~K using ${\rm H_2O}$ and NH$_3$ molecules.
In the first experiment, we grew $\sim$0.5~ML of amorphous solid water (ASW) ice on the oxidized graphite surface by exposing the sample held at 10~K to the ${\rm H_2O}$ beam for 5 minutes. The TPD curve of the water ice desorbing from the surface during the heating phase is displayed in the bottom panel of Figure~\ref{Fig1} for mass m/z=18 between 10~K and 200~K. The maximum of this desorption peak is centered at about 147~K.
In the second experiment, we deposited 0.8~ML of solid NH$_3$ on top of a 0.5~ML surface coverage of water (${\rm H_2O}$) ice grown on the oxidized graphite surface at 10~K.
The top panel of Figure~\ref{Fig1} shows the signals of m/z=17 during the warming-up of pure ${\rm~H_2O}$ and (${\rm~H_2O-NH_3}$) ices from 10~K to 200~K. The small TPD peaks (black and magenta lines) at 147~K correspond to the cracking fragment ${\rm OH^+}$ (m/z=17) of ionized ${\rm H_2O}$ molecules desorbing from the oxidized graphite surface, while the TPD peak (magenta line, m/z=17) at 106~K corresponds to the desorption of pure ionized NH$_3$ molecules from the surface.
The bottom panel of Figure~\ref{Fig1} compares the TPD signals of m/z=18 before and after the addition of solid ammonia on top of the ${\rm H_2O}$ ice. For both the ${\rm~H_2O}$ and (${\rm~H_2O-NH_3}$) ices, it shows only one desorption peak, at around 147~K, slightly more intense for the ${\rm~H_2O-NH_3}$ ice (magenta line) than for the pure ${\rm H_2O}$ ice (black line). The small increase in the area below the desorption curve of m/z=18 at 147~K for the ${\rm~H_2O-NH_3}$ ice may result either from the instability of the ${\rm H_2O}$ flux, or from the desorption of the ${\rm NH_4^{+}}$ (m/z=18) compound formed by the protonation reaction between NH$_3$ and ${\rm H_2O}$ molecules, as observed by Souda \cite{2016Souda} in his recent work on the interaction of NH$_3$ with porous ASW ice.
\begin{figure}
\centering
\includegraphics[width=8cm]{Figure1.eps}
\caption{TPD signals of mass m/z=17 (Top panel) and mass m/z=18 (bottom panel) between 10~K and 200~K. Black curve: ($\sim$~0.5)~ML of pure porous amorphous water (${\rm H_2O}$) ice deposited on the oxidized graphite surface at 10~K. Magenta curve: ${\rm ~0.8~ML}$ of NH$_3$ ice deposited on top of ($\sim$~0.5)~ML of ${\rm H_2O}$ ice pre-deposited on the oxidized graphite surface at 10~K.} \label{Fig1}
\end{figure}
\subsection{Exposure of NH$_{3}$ and D atoms on graphite surface}\label{NH3+D}
\begin{figure*}
\centering
\includegraphics[width=18cm]{Figure2.eps}
\caption{TPD signals between 50~K and 200~K of masses: a) m/z=17, b) m/z=18, c) m/z=19, and d) m/z=20. Black curve: deposition of 0.8~ML of NH$_3$ ice; blue curve: deposition of 15.5~ML of D atoms; magenta curve: deposition of film 1 (15.5~ML~D~atoms~+~0.8~ML~NH$_3$); green curve: deposition of film 2 (0.8~ML~NH$_3$~+~15.5~ML D atoms). The deposition of the species is performed on an oxidized HOPG surface held at 10~K. Panel e) gives the difference between the green curve (film 2) and the magenta curve (film 1) for mass 18; it illustrates the desorption peak, at around 96~K, of the NH$_2$D (m/z=18) actually formed by the surface reaction NH$_3$~+~D.} \label{Fig2}
\end{figure*}
In the first experiment, we prepared film~1 (${\rm 15.5}$~ML~(D)~+~${\rm 0.8~ML}$~(NH$_3$)) by first exposing the oxidized HOPG surface at 10~K to the D beam up to a ${\rm 15.5~ML}$ surface coverage, and then to 0.8~ML of NH$_3$ ice at the same surface temperature of 10~K. In the second experiment, film~2 (${\rm 0.8~ML}$ (NH$_3$)~+~${\rm 15.5~ML}$ (D)) was prepared by exposing ${\rm 15.5~ML}$ of D atoms on top of 0.8~ML of NH$_3$ ice pre-deposited on the oxidized HOPG surface at 10~K.
Two TPD control experiments were also performed in addition to the previous ones, by depositing separately 0.8~ML of solid NH$_3$ and ${\rm 15.5~ML}$ of D atoms on the oxidized HOPG surface. The TPD curves of all these experiments are displayed in Figure~\ref{Fig2} for masses 17, 18, 19 and 20.
The TPD curves of the films of NH$_3$ (0.8~ML) and of D atoms (${\rm 15.5~ML}$) deposited separately on the surface show small peaks at around 152~K for masses 18, 19 and 20 (see panels (b), (c) and (d) of Figure~\ref{Fig2}). These peaks correspond to the desorption of water contaminants, such as H$_2$O (m/z=18), HDO (m/z=19), and D$_2$O (m/z=20). These water impurities came either from the beam-line during the exposure phases of NH$_3$ and D atoms, or from the ultra-high vacuum chamber.
In the case of film~1, where the D atoms are deposited on the surface before NH$_3$, the TPD curve of mass 18 in Figure~\ref{Fig2}b shows a small desorption peak at around 96~K overlapping a large peak at 130~K-150~K, while in panels (c) and (d) of Figure~\ref{Fig2} we observe only the desorption peaks at 152~K for masses 19 and 20, respectively. The desorption peaks at the higher temperatures of 150~K and 152~K for masses 18, 19 and 20 likely originate from the water impurities H$_2$O, HDO and D$_2$O, respectively.
Based on the computational results of Burke et al. \cite{1983Burke}, the sticking coefficients of impinging D atoms coming from the gas phase at room temperature onto graphite and ASW ice (held at 10~K) are 90~\% and 60~\%, respectively, whereas the experimental studies of Matar et al. \citep{2010Matar} estimated the sticking coefficient of D atoms on non-porous ASW ice to be 30~\%. Since our substrate consists of oxidized HOPG partly covered with ASW ice contaminants (H$_2$O, HDO and D$_2$O), most of the D atoms sent onto the substrate for film~1 will stick both on the oxidized graphite and on the water-ice adsorption sites. These atoms promptly form D$_2$ molecules by D~+~D surface recombination, either through the Langmuir-Hinshelwood mechanism, based on the diffusion of two adsorbed D atoms on the surface, or via Eley-Rideal abstraction reactions between adsorbed D atoms and incoming D atoms from the gas phase \citep{2002Zecho,2005Cazaux}. Moreover, the experimental and theoretical studies of Horneker et al. \cite{2006Horneker} revealed a possible route for D$_2$ formation on the HOPG surface through D adsorbate clusters. The D$_2$ molecules formed on the graphitic surface cannot react with the NH$_3$ molecules adsorbed on the surface, and therefore cannot be involved in the formation of the new isotopic species of ammonia.
We suggest that water contaminants present on the surface, such as HDO and D$_2$O, may react with the deposited NH$_3$ molecules and form NH$_2$D species (m/z=18) through the following exothermic reactions (\ref{Eq1a}) and (\ref{Eq1b}), taken from the NIST WebBook~\citep{Nist}.
\begin{equation}\label{Eq1a}
{\rm NH_3+HDO~\rightarrow~NH_2D~+~H_2O},
\end{equation}
\begin{equation}\label{Eq1b}
{\rm NH_3+D_2O~\rightarrow~NH_2D~+~HDO},
\end{equation}
The presence of a very small desorption peak (m/z=20) of deuterated water at 150~K following the exposure to ammonia (magenta trace in Figure~\ref{Fig2}d) may support this suggestion. The NH$_2$D molecules that can be formed by isotopic exchange between ${\rm NH_3}$ molecules and the HDO and D$_2$O species on the oxidized graphite surface may desorb between 50~K and 120~K.
Moreover, the exposure of the oxidized graphite surface to D atoms may create new functional groups or intermediates, such as (-OD). These reactive intermediate species may interact with ${\rm NH_3}$ and form NH$_2$D through the exothermic reaction (\ref{Eq1c}), also taken from the NIST WebBook~\citep{Nist}.
\begin{equation}\label{Eq1c}
{\rm NH_3~+~OD~\rightarrow~NH_2D~+~OH},
\end{equation}
All these suggestions for the formation of NH$_2$D by heavy-water contaminants or by -OD intermediates could explain the small desorption peak (magenta line) observed for mass 18 at around 96~K in Figure~\ref{Fig2}b for film~1, where the NH$_3$ molecules are deposited on top of D atoms on the oxidized graphite surface.
In the case of film~2 (${\rm 0.8~ML~NH_3}$~+~${\rm 15.5~ML}$~of~D), where D atoms are deposited on top of the solid NH$_3$ film, the desorption peak (green line) at 96~K for mass 18 becomes larger than previously (see Figure~\ref{Fig2}b). The increase in the TPD area of the peak at 96~K for mass 18 likely results from the reaction of D atoms with NH$_3$ molecules on the surface. The TPD peak at 96~K of the NH$_2$D molecules produced only by the NH$_3$~+~D reaction on the oxidized HOPG substrate with water deposits is shown in Figure~\ref{Fig2}e. The desorption curve (orange line) is the difference (film~2~$-$~film~1) between the TPD signal of NH$_2$D (m/z=18) expected to be formed on the surface in film~2 and the TPD signal of NH$_2$D (m/z=18) produced by the reaction of NH$_3$ with -OD, HDO and/or D$_2$O contaminants on the surface in film~1.
\subsection{Kinetics of NH$_3$+D reaction}\label{Kinetic NH3+D}
The kinetics of the ${\rm NH_3+D}$ reaction were studied by exposing 0.8~ML of solid NH$_3$ to different doses of D atoms
(0~ML, ${\rm 1.0~ML}$, ${\rm 6.6~ML}$, ${\rm 15.5~ML}$, ${\rm 31.0~ML}$ and ${\rm 53.2~ML}$).
TPD curves of species with masses 17, 18, 19 and 20 are shown in Figure~\ref{Fig3} between 50~K and 210~K for each film of ${\rm NH_3}$ and D atoms. As shown in Figure~\ref{Fig3}a, the maximum of the TPD peak of NH$_3$ shifts slightly toward higher temperatures, from 96~K to 104~K, with the D-exposure time, and in parallel we observe the disappearance of a second desorption peak, a shoulder at about 150~K. These differences in desorption temperature can be explained in terms of reaction sites and/or surface contamination, such as water molecules; in these experiments, the amount of water-ice contaminants on the surface is negligible.
\begin{figure*}
\centering
\includegraphics[width=18cm]{Figure3.eps}
\caption{TPD desorption curves of ammonia species between 60~K and 220~K as a function of the D-atom exposure dose (0~ML, ${\rm 6.6~ML}$, ${\rm 15.5~ML}$, ${\rm 26.6~ML}$, ${\rm 31.0~ML}$ and ${\rm 53.2~ML}$) on 0.8~ML of solid NH$_3$ ice grown on the oxidized HOPG surface held at 10~K with amorphous water ice contaminants: a) m/z=17 (${\rm NH_3}$), b) m/z=18 (${\rm NH_2D}$), c) m/z=19 (${\rm NHD_2}$), and d) m/z=20 (${\rm ND_3}$).}
\label{Fig3}
\end{figure*}
In addition, panels (b), (c) and (d) of Figure~\ref{Fig3} show the growth of three double desorption peaks, at 96~K and 150~K, for masses 18, 19 and 20, respectively.
The desorption peak at about 96~K in Figure~\ref{Fig3}b is likely attributable to NH$_2$D (m/z=18) species, mainly produced by the reaction between ${\rm NH_3}$ and D atoms. Similarly, the desorption peaks observed at 96~K in Figures~\ref{Fig3}c and d are attributed to the doubly deuterated species ${\rm NHD_2}$ (m/z=19) and the triply deuterated ammonia ${\rm ND_3}$ (m/z=20), formed mainly by the reactions NH$_2$D+D and NHD$_2$+D, respectively. We neglected the contribution of NH$_2$D, ${\rm NHD_2}$ and ${\rm ND_3}$ formed from the contaminants on the surface, mainly water ices, in these experiments.
Furthermore, the cracking patterns of the ionized ammonia molecules ${\rm ND_3^{+}}$ (m/z=20), ${\rm NHD_2^{+}}$ (m/z=19), ${\rm NH_2D^{+}}$ (m/z=18), and ${\rm NH_3^{+}}$ (m/z=17), produced by electron impact in the ion source of the QMS, are ${\rm ND_2^{+}}$, ${\rm NHD^{+}}$, ${\rm NH_2^{+}}$, ${\rm ND^{+}}$, ${\rm NH^{+}}$, ${\rm D_2^{+}}$, ${\rm H_2^{+}}$, ${\rm D^{+}}$, ${\rm H^{+}}$, and ${\rm N^{+}}$.
The ion fragments ${\rm NHD^{+}}$ (m/z=17) and ${\rm ND_2^{+}}$ (m/z=18) detected by the QMS in the vacuum chamber during the warming-up phase thus add to the TPD signals of the ionized NH$_3^{+}$ (m/z=17) and ${\rm NH_2D^{+}}$ (m/z=18) molecules, respectively.
This means that the TPD curves in the Figure~\ref{Fig3}a (m/z=17) peaking at 96~K and 150~K are the mixture of the ionized ${\rm NH_3^{+}}$, and the cracking pattern ${\rm NHD^{+}}$ of the ionized ${\rm NHD_2^{+}}$ (m/z=19) and ${\rm NH_2D^{+}}$ (m/z=18).
Similarly, the TPD curves in the Figure~\ref{Fig3}b (m/z=18), peaking at 96~K and 150~K are the mixture of the ionized ${\rm NH_2D^{+}}$ molecules, and the cracking pattern ${\rm ND_2^{+}}$ of ionized ${\rm NHD_2^{+}}$ (m/z=19) and ${\rm ND_3^{+}}$ (m/z=20) of the deuterated molecules.
In our experimental conditions, the electron energy of the QMS ion source is 32~eV. At this energy, only 30~$\%$ of the molecules desorbing from the surface are ionized in the head of the QMS. We therefore cannot determine the precise contribution of species having the same mass m/z to the QMS data, but we can assume that most of the ammonia molecules desorbing from the surface are not fragmented in the QMS head but only ionized.
As previously discussed in section \ref{NH3+D}, the TPD peaks observed at 150~K in Figure~\ref{Fig3}, panels b, c, and d, match well the desorption of the water impurities H$_2$O, HDO, and D$_2$O, respectively.
The deuterated ammonia species observed in the TPD spectra are likely formed by the H-D substitution reaction between the impinging D atoms and the ammonia adsorbed on the oxidized graphite surface. We exclude any role of energetic particles (photons, electrons, and ions) in the formation of the deuterated ammonia species. Previous control experiments performed in the laboratory rule out any possible interaction of electrons with the surface, and the energetic particles produced in the microwave plasma of the D-atom beam line cannot reach the cold sample surface during the D exposure phase; they therefore cannot dissociate the adsorbed ${\rm NH_3}$ molecules and cause their deuteration.
In Figure~\ref{Fig3}a, the strong TPD peak at 96~K (black line) shows the behavior of multilayer desorption of ${\rm NH_3}$ ice, where ${\rm NH_3}$ is probably bound to adsorbed ${\rm NH_3}$ by hydrogen bonds, while the TPD peak at 150~K (black line) corresponds to ${\rm NH_3}$ molecules physisorbed on sites of the oxidized HOPG surface and on contaminants (OD, CD...).
In Figure~\ref{Fig3}b (m/z=18), the TPD peaks at 96~K, which grow with increasing D-atom exposure dose, are likely due to ${\rm NH_2D}$ molecules (m/z=18) formed by the reaction (${\rm NH_3+D}$) on the ammonia ice deposited on the surface. In the same Figure~\ref{Fig3}b, the growth of the TPD peaks at 150~K with D exposure dose appears consistent with the decrease of the TPD peaks at 150~K in Figure~\ref{Fig3}a. The desorption peaks at 150~K in Figure~\ref{Fig3}b are likely due to ${\rm NH_2D}$ formed by the reaction (${\rm NH_3+D}$) on the surface of the oxidized HOPG. The maximum of these TPD peaks shifts towards lower temperature as the peak height, and hence the ${\rm NH_2D}$ coverage on the surface, increases. This means that the interaction of D atoms with the ${\rm NH_3}$ multilayer (${\rm NH_3-NH_3}$) leads to the formation of the deuterated species ${\rm NH_2D}$, ${\rm NHD_2}$, and ${\rm ND_3}$, desorbing into the gas phase at 96~K, as seen in the TPD curves of Figure~\ref{Fig3}b, c, and d, respectively, while the successive deuteration of ${\rm NH_3}$ adsorbed on the oxidized graphite surface produces deuterated ammonia molecules on stronger binding sites, which desorb from the surface at 150~K (see Figure~\ref{Fig3}b, c, and d).
Nagaoka et al. \cite{2005Nagaoka}, from the astrophysics group of Watanabe, demonstrated experimentally the efficient formation of deuterated isotopologues of methanol at low surface temperature (10~K) by exposing ${\rm CH_3OH}$ ice to D atoms. The isotopic species were detected by infrared spectroscopy during the exposure of the adsorbates at 10~K. As for methanol, we believe that the deuteration reaction ${\rm NH_3+D}$ proceeds during the exposure phase of ${\rm NH_3}$ and D atoms on the oxidized HOPG substrate at 10~K, thanks to tunneling. At 10~K, D atoms are mobile \citep{2008Matar} and can diffuse on the surface to react with solid ammonia molecules. However, since in our experiments the deuterated ammonia species are detected by TPD measurements between 10~K and 200~K, it is possible that their formation proceeds during the warm-up phase of the sample rather than during the exposure of the reactants on the surface at 10~K. Such deuteration of ammonia by D atoms at higher surface temperatures is not considered in this work.
\subsection{Kinetics of CH$_3$OH+D reaction}\label{discussion}
In this section, we compare the kinetics of the ${\rm NH_3+D}$ reaction to that of the ${\rm CH_3OH+D}$ reaction in the sub-monolayer regime. We performed deuteration experiments of solid ${\rm CH_3OH}$ by D atoms similar to those with ammonia. The experiments were carried out under the same conditions: same low surface coverage ($\sim$ 0.8~ML), same
D-atom flux ${\rm \phi (D)=3.7\times10^{12}}{\rm ~atoms\cdot cm^{-2}\cdot s^{-1}}$, and same surface temperature (10~K).
First, we deposited 0.8~ML of solid ${\rm CH_3OH}$ on the HOPG surface at 10~K, and then added 6.6~ML of D atoms in the first experiment and 15.5~ML of D atoms in the second. After the D-addition phase, each ${\rm CH_3OH+D}$ film was heated linearly from 10~K to 210~K at the same heating rate of 0.17~K$\cdot$s$^{-1}$. Figure~\ref{Fig4} shows the TPD curves of ${\rm CH_3OH}$ (m/z=32) and of the newly formed
isotopic species ${\rm CH_2DOH}$ (m/z=33), ${\rm CHD_2OH}$ (m/z=34), and ${\rm CD_3OH}$ (m/z=35) between 100~K and 200~K.
According to Nagaoka et al. \cite{2007Nagaoka} and Hiraoka et al. \cite{2005Hiraoka}, the H-abstraction of ${\rm CH_3OH}$ by D atoms
is likely to occur in the methyl ${\rm -CH_3}$ group rather than the hydroxyl ${\rm -OH}$ group of the ${\rm CH_3OH}$ (m/z=32) molecules.
We have thus attributed the TPD signals at masses m/z=33, m/z=34, and m/z=35 in Figure~\ref{Fig4} to the newly formed deuterated species ${\rm CH_2DOH}$, ${\rm CHD_2OH}$, and ${\rm CD_3OH}$, respectively, which are deuterated in the methyl group.
The formation of species deuterated in the hydroxyl group, such as ${\rm CH_3OD}$ (m/z=33), ${\rm CH_2DOD}$ (m/z=34),
and ${\rm CHD_2OD}$ (m/z=35), by the reaction system (${\rm CH_3OH+D}$) is expected to be negligible in this work. However, these hydroxyl-deuterated methanol species can be formed by D-H isotopic exchange between the species (${\rm CH_3OH}$, ${\rm CH_2DOH}$, and ${\rm CHD_2OH}$) and the deuterated ${\rm D_2O}$ water-ice contaminants, during the transition of the water ice from the amorphous to the crystalline state at 120~K \citep{2009Ratajczak}.
\begin{figure*}
\centering
\includegraphics[width=17cm]{Figure4.eps}
\caption{TPD curves of ${\rm CH_3OH}$ (m/z=32), ${\rm CH_2DOH}$ (m/z=33), ${\rm CHD_2OH}$ (m/z=34) and ${\rm CD_3OH}$ (m/z=35) between
100~K and 200~K. Black curve: 0.8 ML of ${\rm CH_3OH}$ ice pre-deposited on the oxidized graphite surface held at 10~K; Red curve: after the exposure of 6.6~ML of D atoms at 10~K on 0.8 ML of ${\rm CH_3OH}$ ice; Blue curve: after the exposure of 15.5~ML of D atoms at 10~K on 0.8 ML of ${\rm CH_3OH}$ ice.} \label{Fig4}
\end{figure*}
\section{Analysis}\label{model and discussion}
\subsection{Rate equations of NH$_3$+D system reactions}\label{rate constant NH3}
We suggest that the reaction between ${\rm NH_3}$ and D atoms on the oxidized graphite surface held at 10~K proceeds through a direct hydrogen-deuterium substitution process, via an H-abstraction and D-addition mechanism, as proposed by
Nagaoka et al.~\citep{2005Nagaoka,2006Nagaoka} for the ${\rm H_2CO~+~D}$ and ${\rm CH_3OH~+~D}$ reactions. In fact, the direct H-D substitution reaction (\ref{Eq2a}) leading to the formation of the ${\rm NH_2D}$ species is
slightly exothermic, with a reaction enthalpy $\Delta${\rm H$^{0}$}= -${\rm 781.8~K}$.
\begin{equation}\label{Eq2a}
{\rm NH_3}~+~{\rm D}~{\longrightarrow}~{\rm NH_2D} + {\rm H},
\end{equation}
In the case of the H-abstraction and D-addition mechanism of ${\rm NH_3}$, the indirect H-D substitution process is described by the
following reactions (\ref{Eq2}) and (\ref{Eq3}).
\begin{equation}\label{Eq2}
{\rm NH_3~+~D}~{\longrightarrow}~{\rm NH_2~+~HD},
\end{equation}
\begin{equation}\label{Eq3}
{\rm NH_2~+~D}~{\longrightarrow}~{\rm NH_2D},
\end{equation}
The first, H-abstraction reaction~(\ref{Eq2}) of the ${\rm NH_3}$ molecule by a D atom leads to the formation of an HD molecule and the ${\rm NH_2}$ radical. This reaction is endothermic, with a reaction enthalpy of
${\rm \Delta H^{0}}$=${\rm +1527.5~K}$, and
needs excess thermal energy to proceed, while the second, D-addition reaction~(\ref{Eq3}), leading to the first
isotopologue ${\rm NH_2D}$, is strongly exothermic, with a heat of formation $\Delta$${\rm H^{0}}$ = ${\rm-54480~K}$.
All the standard reaction enthalpies involving ammonia species and D atoms are taken from the NIST database \citep{Nist}.
The same endothermic behavior takes place in the H-abstraction reactions~(\ref{Eq4}) and (\ref{Eq6}) of ${\rm NH_2D}$ and ${\rm NHD_2}$
species by D atoms, respectively.
\begin{equation}\label{Eq4}
{\rm NH_2D+D}~{\longrightarrow}~{\rm NHD+HD} ~~~~({\rm \Delta{\rm H^{0}=+1455.3~K}})
\end{equation}
\begin{equation}\label{Eq4a}
{\rm NHD+D}~{\longrightarrow}~{\rm NHD_2}
\end{equation}
and
\begin{equation}\label{Eq6}
{\rm NHD_2+D}~{\longrightarrow}~{\rm ND_2+HD} ~~~~({\rm \Delta{\rm H^{0}=+1527.5~K}})
\end{equation}
\begin{equation}\label{Eq6a}
{\rm ND_2+D}~{\longrightarrow}~{\rm ND_3}
\end{equation}
In order to fit the TPD experimental data of the ${\rm NH_3}$, ${\rm NH_2D}$, ${\rm NHD_2}$, and ${\rm ND_3}$
species shown in Figure~\ref{Fig5}, we used a kinetic model described by the following exothermic reactions
(\ref{Eq8}-\ref{Eq10}) for the three direct H-D substitution steps.
\begin{equation}\label{Eq8}
{\rm NH_3}+{\rm D} {\longrightarrow}{\rm NH_2D+H}~~~~(\Delta{\rm H^{0}}=-{\rm 781.8~K})
\end{equation}
\begin{equation}\label{Eq9}
{\rm NH_2D}+{\rm D}{\longrightarrow}{\rm NHD_2+H}~~~~(\Delta{\rm H^{0}}=-{\rm 938.1~K})
\end{equation}
\begin{equation}\label{Eq10}
{\rm NHD_2}+{\rm D}{\longrightarrow}{\rm ND_3+H}~~~~(\Delta{\rm H^{0}}=-{\rm 1131~K})
\end{equation}
These reactions are in competition with the exothermic D+D surface reaction leading to the formation of ${\rm D_2}$ molecules.
\begin{equation}\label{Eq11}
{\rm D}+{\rm D}{\longrightarrow}{\rm D_2}~~~~(\Delta{\rm H^{0}}=-{\rm 52440~K})
\end{equation}
Our model includes both the Eley-Rideal (ER) and Langmuir-Hinshelwood (LH) mechanisms for the reactions of a D atom either with another
D atom on the surface or with an ammonia species already adsorbed on the surface at 10~K. The Eley-Rideal mechanism occurs when a species already adsorbed on the surface promptly reacts with a particle coming from the gas phase, before the latter is adsorbed on the
surface. The Langmuir-Hinshelwood mechanism describes the formation of molecules on the surface when two adsorbed reaction partners diffuse on the surface: D atoms thermalize with the surface and react with ammonia molecules thanks to surface diffusion. The ER mechanism is independent of the surface temperature ${\rm T_s}$, while the LH mechanism is very sensitive to ${\rm T_s}$, since it depends on diffusion coefficients.
Moreover, LH is more efficient than the ER mechanism at low surface coverage~\citep{2013Minissale}.
In our experiment, a D atom coming from the gas phase can hit an ammonia species already adsorbed on the surface, react, and form a new isotopic species of ammonia through the ER mechanism. If the adsorbed D atom does not react through the ER mechanism, it can diffuse on the surface from one site to a neighboring one. The diffusing D atom can react either with another D atom on the surface to form a ${\rm D_2}$ molecule, or with an adsorbed NH$_3$, NH$_2$D, or NHD$_2$ molecule to form NH$_2$D, NHD$_2$, or ND$_3$, respectively, through the LH mechanism. All species except D atoms are immobile on the surface at 10~K.
\subsubsection{Kinetic model}\label{model}
The model used to fit our experimental data is very similar to the one described by Minissale et al.~\citep{2013Minissale,2015Minissale}.
It is composed of six differential equations, one for each of the species considered:
D atoms, coming exclusively from the beam; NH$_3$ molecules, deposited on the surface; NH$_2$D, NHD$_2$, and ND$_3$, formed on
the surface; and finally D$_2$, coming both from the beam and formed on the surface. Each differential equation is composed of positive
and negative terms, indicating respectively an increase (i.e. species arriving from the gas phase or formed on the surface), or a
decrease (i.e. species reacting on the surface) in the surface coverage of the species.
The terms involving the ER and LH mechanisms are independent of one another, thus we are able to determine the amount of a
species formed (or consumed) via ER or LH mechanism.
Below, we present the list of differential equations governing the NH$_3$ deuteration:
\begin{align}\label{Eq12}
\frac {\rm d[{\rm D}]}{\rm {dt}}=&{\rm \phi_D} \biggl ({\rm 1-2 p_{1ER}[D]-p_{2ER}[NH_3]}-\nonumber \\
& {\rm p_{3ER}[{\rm NH_2D}]-p_{4ER}[NHD_2]}\biggr)-\nonumber \\
&{\rm k_{diff}}\biggl ({\rm 4p_{1LH}[D][D]+p_{2LH}[D][NH_3]}+\nonumber\\
&{\rm p_{3LH}[D][NH_2D]+p_{4LH}[D][NHD_2]}\biggr)
\end{align}
\begin{align}\label{Eq13}
\frac {\rm d[{\rm D_2}]}{\rm {dt}}=&{\rm \phi_{D_2}+2\phi_D (1-e_1)}{\rm p_{1ER}~[D]}+\nonumber \\
&2(1-e_1) {\rm k_{diff}}\cdot {\rm p_{1LH}[D][D]}
\end{align}
\begin{align}\label{Eq14}
\frac {\rm d[{NH_3}]}{\rm dt}=& -{\rm \phi_D} {\rm p_{2ER}}[{\rm NH_3}]-{\rm k_{diff}}\cdot {\rm p_{2LH}[D]}[{\rm NH_3}]
\end{align}
\begin{align}\label{Eq15}
\frac {\rm d[{NH_2D}]}{\rm dt}=&{\rm \phi_D} \biggl({\rm p_{2ER}}[{\rm NH_3}]- {\rm p_{3ER}}[{\rm NH_2D}]\biggr)+\nonumber\\
&{\rm k_{diff}}\biggl({\rm p_{2LH}} {\rm [D][NH_3]}-{\rm p_{3LH}}[{\rm D}][{\rm {NH_2D}}]\biggr)
\end{align}
\begin{align}\label{Eq16}
\frac {\rm d[{\rm NHD_2}]}{\rm dt}=&{\rm \phi_D} \biggl({\rm p_{3ER}}[{\rm NH_2D}]- {\rm p_{4ER}} [{\rm NHD_2}]\biggr)+\nonumber\\
&{\rm k_{diff}} \biggl({\rm p_{3LH}}[{\rm D}][{\rm NH_2D}]-{\rm p_{4LH}}[{\rm D}][{\rm NHD_2}]\biggr)
\end{align}
\begin{align}\label{Eq17}
\frac {\rm d[{ND_3}]}{\rm dt}={\rm \phi_D} \cdot {\rm p_{4ER}}[{\rm {NHD_2}}]+{\rm k_{diff}}\cdot {\rm p_{4LH}}[{\rm D}][\rm {NHD_2}]
\end{align}
The [${\rm D}$], [${\rm D_2}$], [${\rm NH_3}$], [${\rm NH_2D}$], [${\rm NHD_2}$], and [${\rm ND_3}$] quantities are the surface coverages of the species. [X] is dimensionless and represents the fraction of the surface covered with the species X. For each species, [X] ranges between 0 and 1. This condition holds for all species except D$_2$, whose surface coverage can exceed one. We stress that this is not a problem for evaluating the activation barriers, since D$_2$ is an inert species and has no effect on the reaction kinetics. The initial conditions at t=0 simulate the experimental conditions: $[\rm {NH_3}](t=0)$=~0.8 and ${\rm [NH_2D]=[NHD_2]=[ND_3]=0}$. Furthermore, we impose that at any time:
\begin{align}
[\rm {NH_3}](t)+[\rm {NH_2D}](t)+[\rm {NHD_2}](t)+[\rm {ND_3}](t) \nonumber \\
=[\rm {NH_3}](t=0)
\end{align}
The dimensionless surface coverage is then converted into ML (or ${\rm molecules\cdot cm^{-2}}$) by multiplying [X] by the number of adsorption sites on our surface ($10^{15}$~${\rm sites \cdot cm^{-2}}$) and compared with the experimental results.
${\rm \phi_{X}}$ represents the fraction of the surface covered per second by the species X coming from the gas phase. Under our experimental conditions, the total number density of impinging D atoms coming from the gas phase and hitting the surface is given by the flux of D atoms in the beam line: ${\rm \phi_{D}}$~=~${\rm 3.7\times10^{12}~atoms \cdot cm^{-2} \cdot s^{-1}}$. Considering again that the surface contains 10$^{15}$~${\rm sites \cdot cm^{-2}}$, the flux of D atoms landing on the surface is
${\rm \phi_{D}}$=${\rm 3.7\times10^{-3}~s^{-1}}$.
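Putting the pieces together, the system of equations (\ref{Eq12}-\ref{Eq17}) is straightforward to integrate numerically. The sketch below is an illustrative re-implementation, not our fitting code: apart from ${\rm \phi_D}$ and ${\rm p_{1ER}=p_{1LH}=1}$, the probabilities ${\rm p_{iER}}$, ${\rm p_{iLH}}$, the hopping rate ${\rm k_{diff}}$, and ${\rm e_1}$ are placeholder values chosen only for demonstration.

```python
from scipy.integrate import solve_ivp

# Illustrative sketch of the rate-equation system (Eqs. 12-17).
# Only p_1ER = p_1LH = 1 and phi_D = 3.7e-3 s^-1 come from the text;
# the remaining parameters are placeholders, NOT the fitted values.
phi_D  = 3.7e-3    # beam flux 3.7e12 atoms cm^-2 s^-1 over 1e15 sites cm^-2
phi_D2 = 0.0       # no D2 assumed in the beam for this sketch
e1     = 0.5       # assumed fraction of D2 promptly desorbing on formation
k_diff = 1.0e2     # assumed D-atom hopping rate (sites per second)
p_ER   = [1.0, 1e-3, 1e-3, 1e-3]   # p_1ER..p_4ER (placeholders for i >= 2)
p_LH   = [1.0, 1e-3, 1e-3, 1e-3]   # p_1LH..p_4LH (placeholders for i >= 2)

def rhs(t, y):
    D, D2, NH3, NH2D, NHD2, ND3 = y
    dD = (phi_D * (1 - 2*p_ER[0]*D - p_ER[1]*NH3 - p_ER[2]*NH2D
                   - p_ER[3]*NHD2)
          - k_diff * (4*p_LH[0]*D*D + p_LH[1]*D*NH3
                      + p_LH[2]*D*NH2D + p_LH[3]*D*NHD2))
    dD2 = phi_D2 + 2*phi_D*(1 - e1)*p_ER[0]*D + 2*(1 - e1)*k_diff*p_LH[0]*D*D
    dNH3  = -phi_D*p_ER[1]*NH3 - k_diff*p_LH[1]*D*NH3
    dNH2D = (phi_D*(p_ER[1]*NH3 - p_ER[2]*NH2D)
             + k_diff*(p_LH[1]*D*NH3 - p_LH[2]*D*NH2D))
    dNHD2 = (phi_D*(p_ER[2]*NH2D - p_ER[3]*NHD2)
             + k_diff*(p_LH[2]*D*NH2D - p_LH[3]*D*NHD2))
    dND3  = phi_D*p_ER[3]*NHD2 + k_diff*p_LH[3]*D*NHD2
    return [dD, dD2, dNH3, dNH2D, dNHD2, dND3]

y0 = [0.0, 0.0, 0.8, 0.0, 0.0, 0.0]        # 0.8 ML of NH3 at t = 0
sol = solve_ivp(rhs, (0.0, 240*60.0), y0,  # 240 min of D exposure
                method="LSODA", rtol=1e-8, atol=1e-12)
```

By construction the four ammonia terms telescope, so the total ammonia coverage [NH$_3$]+[NH$_2$D]+[NHD$_2$]+[ND$_3$] stays at its initial value of 0.8 ML throughout the integration.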
The terms describing the chemical desorption of the ammonia species ${\rm NH_2D}$, ${\rm NHD_2}$, and ${\rm ND_3}$ (formed by reaction with D atoms) are not included in the model, since the desorption of these species is negligible at 10~K.
Despite the various heats of formation of the ${\rm NH_3}$ isotopologues ($\Delta$${\rm H^{0}}$ up to 1131~K), no desorption of the newly formed ammonia species has been observed experimentally at 10~K from the graphite surface. This is because the local heats of formation released by the exothermic reactions (\ref{Eq8}-\ref{Eq10}) do not exceed the desorption energy of these ammonia species (${\rm E_{des}}$=2300~K)~\cite{2005Bolina}. Once formed, the isotopologue ammonia species therefore stay in the solid phase on the graphite surface at 10~K, because their binding energy of about 2300~K is higher than their excess energies of formation.
The absence of chemical desorption of these molecules at 10~K is also confirmed experimentally using the DED (During Exposure Desorption) method~\citep{2013Dulieu}, which consists of monitoring, with the QMS placed in front of the sample, the signal of the deuterated molecules released into the gas phase during the deposition phase.
However, the parameter $e_1$ characterizing the prompt desorption of some ${\rm D_2}$ molecules, upon formation on the surface at 10~K, through the very exothermic reaction (\ref{Eq11}), is expected to be non negligible.
We point out that this term ${\rm e_1}$ does not influence the surface coverage of the ammonia species.
In fact, as already noted, ${\rm D_2}$ is a non-reactive species and can consume neither D atoms nor ${\rm NH_xD_y}$ species on the surface. However, Amiaud et al. \cite{2007Amiaud} have demonstrated that the presence of D$_2$ molecules already adsorbed on water ice increases the recombination efficiency of D atoms through the barrierless ${\rm D+D {\longrightarrow} D_2}$ reaction, by increasing the sticking coefficient of deuterium atoms on the surface. This behaviour may have an important impact on the deuteration experiments, since condensed inert D$_2$ may separate D atoms from ${\rm NH_3}$, decreasing the reaction efficiency of D atoms with adsorbed ${\rm NH_3}$ and reducing the H-D substitution reaction.
The ${\rm p_{1ER}}$ and ${\rm p_{1LH}}$ parameters are the (dimensionless) reaction probabilities of the D+D surface reaction (\ref{Eq11}), and their values are fixed to one.
Similarly, the ${\rm p_{2ER}}$, ${\rm p_{3ER}}$, ${\rm p_{4ER}}$, and ${\rm p_{2LH}}$, ${\rm p_{3LH}}$, ${\rm p_{4LH}}$ parameters represent the probabilities of reactions (\ref{Eq8}-\ref{Eq10}) occurring via the ER and LH mechanisms, respectively. Ammonia species and D atoms can react together either by overcoming the activation barrier (Arrhenius term) or by tunneling through it (tunneling term), as expressed by the following equations (\ref{Eq18}) and (\ref{Eq19}):
\begin{equation}\label{Eq18}
{\rm p_{iER}=e^{-Ea_i/(k_{B}\times T_{eff})} + e^{-(2~Z_r/\hbar) \times \sqrt{2~\mu\times Ea_i\times k_{B}}}}
\end{equation}
and
\begin{equation}\label{Eq19}
{\rm p_{iLH}=e^{-Ea_i/(k_{B} \times T_{s})} + e^{-(2~Z_r/\hbar) \times \sqrt{2~\mu\times Ea_i\times k_{B}}}}
\end{equation}
Where ${\rm k_B}$ is the Boltzmann constant, $\hbar$ the reduced Planck constant, ${\rm Ea_i}$ (i=2-4) the activation energy barriers of the
reactions (\ref{Eq8}-\ref{Eq10}), ${\rm Z_r}$ the width of the (rectangular) activation barrier, and ${\rm \mu}$ the tunneling mass, described by the reduced mass of the system involved in the bi-molecular atom-transfer reaction. This tunneling mass is defined as:
\begin{equation}
{\rm \mu=\frac{m_{NH_{x}D_{y}}\times m_D}{m_{NH_{x}D_{y}}+m_D}},~ \text{with x,y=0-3 and x+y=3}
\end{equation}
${\rm T_{eff}}$ is the effective temperature of the reactions between the ammonia species (${\rm NH_3}$, ${\rm NH_2D}$, ${\rm NHD_2}$) and D atoms, given by:
\begin{equation}
{\rm T_{eff}= \mu(\frac{T_{solid}}{m_{NH_{3}}}+\frac{T_{gas}}{m_D})=314~K}
\end{equation}
${\rm T_{s}}$ (=10~K) is the surface temperature.
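As a minimal numerical sketch of equations (\ref{Eq18}) and (\ref{Eq19}), the snippet below evaluates the reduced mass, the effective temperature, and the two reaction-probability terms for the first deuteration step, using the fitted values ${\rm Ea_2=1840}$~K and ${\rm Z_r=0.83}$~\AA{} from Table~\ref{table1}. The beam temperature ${\rm T_{gas}=350}$~K is our assumption, chosen so that the expression reproduces the quoted ${\rm T_{eff}=314}$~K, and the tunneling exponent is written in the dimensionally consistent WKB form $(2Z_r/\hbar)\sqrt{2\mu E_a k_B}$.

```python
import math

kB   = 1.380649e-23       # Boltzmann constant (J/K)
hbar = 1.054571817e-34    # reduced Planck constant (J s)
amu  = 1.66053906660e-27  # atomic mass unit (kg)

m_NH3, m_D = 17.0, 2.0                     # masses in amu
mu_amu = m_NH3 * m_D / (m_NH3 + m_D)       # reduced mass, ~1.79 amu
mu = mu_amu * amu                          # in kg

T_s   = 10.0                               # surface temperature (K)
T_gas = 350.0                              # assumed beam temperature (K)
T_eff = mu_amu * (T_s / m_NH3 + T_gas / m_D)   # ~314 K, as quoted

def probability_terms(Ea_K, Zr_m, T_K):
    """Arrhenius term and rectangular-barrier tunneling term, with the
    activation energy Ea expressed in kelvin."""
    arrhenius = math.exp(-Ea_K / T_K)
    tunneling = math.exp(-(2.0 * Zr_m / hbar)
                         * math.sqrt(2.0 * mu * Ea_K * kB))
    return arrhenius, tunneling

# First deuteration step, NH3 + D -> NH2D + H (Ea2 = 1840 K, Zr = 0.83 A)
arr_LH, tun_LH = probability_terms(1840.0, 0.83e-10, T_s)
# At T_s = 10 K the Arrhenius term (~e^-184) vanishes, so the LH
# probability is set entirely by tunneling (tun_LH of order 1e-9).
```

This makes explicit why tunneling is indispensable in the model: the classical over-barrier term is effectively zero at 10~K.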
The parameter ${\rm k_{diff}}$ is the diffusion coefficient of D atoms between sites on the surface. It represents the number of surface sites scanned per second by a D atom, and is defined by the following equation (\ref{Eq20}):
\begin{align}\label{Eq20}
{\rm k_{diff}}=&{\rm \nu}~\biggl({\rm e^{-E_{diff}/(k_{B}\times T_{s})}}+{\rm e^{-(2~Z_d/\hbar) \times \sqrt{2~\mu \times E_{diff}\times k_{B}}}}\biggr)
\end{align}
Where ${\rm \nu=10^{12}~s^{-1}}$ is the attempt frequency for overcoming the diffusion barrier of D atoms and ${\rm E_{diff}}$ is the energy barrier for the diffusion of D atoms on cold surfaces held at 10~K. Bonfant et al. \cite{2007Bonfant} have reported an extremely low diffusion barrier of 4~meV for hydrogen atoms on a graphite surface, meaning that hydrogen atoms physisorbed on graphite are highly mobile at low surface temperatures. However, for irregular surfaces such as ASW, the diffusion energy barrier of D atoms does not have a single value but follows a distribution, because there are many potential sites of different depths. Since the substrate used in our experiments is an oxidized HOPG surface mixed with ASW ice deposits, the D-atom diffusion energy used in this model is the value estimated on ASW ice at low surface coverage, ${\rm E_{diff}=(22\pm2)~meV~or~(255~\pm22)~K}$ \citep{2008Matar}. Even though this diffusion energy differs from that calculated for the graphite surface, its higher value does not affect the modeling results. The parameter ${\rm Z_d}$ is the width of the (rectangular) diffusion barrier, which we have fixed to 1~$\textup{\AA}$, a value commonly used to describe H- or D-atom diffusion on surfaces.
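With the adopted values (${\rm \nu=10^{12}~s^{-1}}$, ${\rm E_{diff}=255}$~K, ${\rm Z_d=1}$~\AA), the hopping rate of equation (\ref{Eq20}) can be evaluated directly. The sketch below writes the tunneling exponent in the dimensionally consistent form $(2Z_d/\hbar)\sqrt{2\mu E_{diff} k_B}$ and keeps the same tunneling mass $\mu$ as in the reaction terms:

```python
import math

kB   = 1.380649e-23       # Boltzmann constant (J/K)
hbar = 1.054571817e-34    # reduced Planck constant (J s)
amu  = 1.66053906660e-27  # atomic mass unit (kg)

nu     = 1.0e12                             # attempt frequency (s^-1)
E_diff = 255.0                              # D diffusion barrier on ASW (K)
Z_d    = 1.0e-10                            # diffusion-barrier width (m)
T_s    = 10.0                               # surface temperature (K)
mu     = (17.0 * 2.0) / (17.0 + 2.0) * amu  # tunneling mass of Eq. (20)

thermal   = math.exp(-E_diff / T_s)         # ~e^-25.5: negligible at 10 K
tunneling = math.exp(-(2.0 * Z_d / hbar)
                     * math.sqrt(2.0 * mu * E_diff * kB))
k_diff = nu * (thermal + tunneling)         # of order 1e8 site hops per second
```

At 10~K the thermal hop is completely suppressed, so the D-atom mobility in the model is carried by the tunneling term.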
In our kinetic model, the tunneling probability term ${\rm e^{-(2~Z_r/\hbar)\sqrt{2~\mu~Ea_i~k_B}}}$ for crossing the rectangular activation barrier depends on the tunneling mass of the reaction. This tunneling mass is the reduced mass $\mu$ of the system composed of an ammonia species ${\rm NH_{x}D_{y}}$ (x,y=0-3, x+y=3) and a D atom. The value of $\mu$ for each reaction system is 1.8 amu, close to the mass of the D atom (2 amu). This means that, for the direct D-H exchange reaction between D atoms and ${\rm NH_3}$ molecules, the D atom can be considered the tunneling particle that carries the ${\rm NH_3+D}$ system across the rectangular barrier by quantum tunneling.
According to Hidaka et al. \cite{2009hidaka}, the tunneling mass depends significantly on the reaction mechanism. For an addition reaction (${\rm AX+B~\rightarrow~AXB}$), the tunneling mass along the reaction coordinate is simply the reduced mass of the two-body system. However, for an abstraction reaction (${\rm AX+B~\rightarrow~A+BX}$), which involves three free particles, the tunneling mass is described by the effective mass defined in the papers of Hidaka et al. \cite{2009hidaka}. For the direct H-D exchange reaction (\ref{Eq8}) between ${\rm NH_3}$ and D atoms, the description of the tunneling mass is not straightforward~\cite{2009hidaka}. However, if we assume that the H-D substitution reaction (\ref{Eq8}) occurs via an intermediate ${\rm NH_3D}$ with a tetrahedral geometry, as demonstrated by ab-initio calculations~\citep{2005Moyano}, we can apply the reduced mass $\mu$ to describe the tunneling mass of the addition reaction (\ref{Eq21}).
\begin{equation}\label{Eq21}
{\rm NH_3+D {\longrightarrow} NH_3D},
\end{equation}
\subsubsection{Activation energy barriers of the reactions}
In our kinetic model, we have four free parameters: the activation barriers (E$_{a2}$, E$_{a3}$, E$_{a4}$) of the reactions and the width of the reaction barrier ${\rm Z_r}$. The latter can be constrained between 0.7 and 0.9~$\textup{\AA}$. Indeed, solid-state chemistry at low temperatures should be dominated by quantum tunneling, according to Harmony~\citep{1971Harmony} and
Goldanskii~\citep{1976Goldanskii}. In particular, H-abstraction and D-substitution should be ruled by tunneling, as shown by Goumans et al.~\citep{2011Goumans} in the case of CH$_3$OH deuteration. The NH$_3$~+~D reaction was studied experimentally in the gas phase long ago by Kurylo et al. \cite{1969Kurylo} over the temperature range 423-741~K. These authors found that the H-D exchange between NH$_3$ and D may proceed through an intermediate NH$_3$D, following the reaction ${\rm NH_3+D}$ ${\rightarrow}$ [${\rm NH_3D}$] ${\rightarrow}$ ${\rm NH_2D+H}$. As mentioned previously, the ${\rm NH_3+D}$ reaction system has also been studied theoretically \cite{2005Moyano} using ab-initio interpolated potential-energy-surface calculations. In both papers \citep{2005Moyano,1969Kurylo}, the activation barrier of the H-D exchange reaction (\ref{Eq8}) is reported to be ${\rm E_a=11~kcal/mol}$, or 5540~K.
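The unit conversion quoted above is easy to verify, since 1~kcal/mol corresponds to about 503~K:

```python
R_kcal = 1.987204e-3        # gas constant in kcal mol^-1 K^-1
Ea_kcal = 11.0              # gas-phase barrier from the literature
Ea_K = Ea_kcal / R_kcal     # ~5535 K, consistent with the quoted 5540 K
```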
However, some works (i.e., Bell~\citep{1980Bell} and references therein, Chapter 6: Tunneling in molecular spectra, the inversion of ammonia and related processes, page 153) argue that in the case of ammonia inversion, tunneling should be the dominant process, with a typical reaction-barrier width (${\rm Z_r}$) of 0.7-0.8~$\textup{\AA}$. This reaction is considered the prototype of processes involving tunneling in a symmetrical (or quasi-symmetrical) potential energy curve. To the best of our knowledge, no experimental or theoretical work has dealt with the width of the ammonia deuteration barrier. We have therefore used a barrier width for deuteration similar to that of ammonia inversion, aware that the two reactions involve chemical processes that are not identical. For the sake of simplicity, we have used a common ${\rm Z_r}$ value for the deuteration reactions (\ref{Eq8}-\ref{Eq10}), instead of one value per reaction. We suggest that tunneling is necessary for ammonia deuteration, in analogy with methanol deuteration. For this reason we include quantum tunneling in our model, but we point out that our simple formulation of tunneling is useful only for a qualitative evaluation of our experimental results. Quantum tunneling, the quantum-mechanical phenomenon whereby a particle crosses a barrier that it classically could not surmount, is known to be an important process for molecular synthesis on interstellar grains at very low temperatures \citep{2008Watanabe}. A detailed description of tunneling falls outside the scope of this work.
Figure~\ref{Fig5} shows the surface densities of the NH$_3$, ${\rm NH_2D}$, NHD$_2$, and ND$_3$ species as a function of D-atom fluence.
These surface densities, expressed as fractions of a monolayer (ML), are the integrated areas below the TPD curves of NH$_3$, ${\rm NH_2D}$, NHD$_2$, and ND$_3$ peaking at 96~K for each D-atom fluence, normalized to the TPD integrated area of NH$_3$ for one monolayer coverage.
As previously explained in section \ref{Kinetic NH3+D}, we assumed that all the deuterated ammonia species are ionized by electron impact in the ion source of the QMS during the TPD experiments. However, Rejoub et al. \cite{2011Rejoub} have reported that the ionization cross-section of light ${\rm NH_3}$ molecules is twice as large as that of ${\rm ND_3}$, meaning that the cross-sections for the formation of ion fragments from heavy deuterated molecules are much smaller than those from NH$_3$. Because we neglected the contribution of the cracking patterns in the TPD experiments, we did not consider the different ionization cross-sections of the deuterated species when measuring the areas of the TPD profiles.
As shown in Figure~\ref{Fig5}, there is a good correlation between the experimental data and the model fits for the exponential decay of NH$_3$ and the growth of the ${\rm NH_2D}$, NHD$_2$, and ND$_3$ species on the surface as the amount of D atoms on the surface increases. The plots in Figure~\ref{Fig5} show that for the longest D-irradiation time of 240 minutes, corresponding to the highest D fluence of ${\rm 5.34 \times 10^{16}~atoms \cdot cm^{-2}}$, about 20~\% of the adsorbed NH$_3$ molecules are deuterated, mainly into ${\rm NH_2D}$ and ${\rm NHD_2}$ species with traces of ${\rm ND_3}$. The formation yields of the singly, doubly, and triply deuterated ammonia species are approximately 14~$\%$, 5~$\%$, and 1~$\%$, respectively.
\begin{figure}
\centering
\includegraphics[width=8cm]{Figure6.eps}
\caption{Kinetic evolutions of ${\rm NH_3}$, ${\rm NH_2D}, {\rm NHD_2}$, and ${\rm ND_3}$ species present on the surface
as a function of D atoms exposure doses, and D fluences on 0.8~ML coverage of solid ammonia already deposited on the oxidized graphite substrate at 10~K. Black squares, red circles, blue triangles, green diamonds are the TPD experimental data
of ${\rm NH_3}$, ${\rm NH_2D}$, ${\rm NHD_2}$, and ${\rm ND_3}$, respectively. Solid lines are the fits
obtained from the model. The uncertainties are given by the error bars.}
\label{Fig5}
\end{figure}
Thanks to our model, we have tested different scenarios: we used three values (150~K, 250~K, 350~K) for the diffusion barrier ${\rm E_{diff}}$ of D atoms, and for each value we varied ${\rm Z_r}$ from 0.7 to 0.9~$\textup{\AA}$ (in steps of 0.01~$\textup{\AA}$).
In the case of ${\rm E_{diff}= 250~K}$, the activation energy barriers of the successive H-D substitution reactions of ammonia species by
D atoms are found to be ${\rm Ea_2=(1840\pm270)}$~K for the reaction~(\ref{Eq8}), ${\rm Ea_3=(1690\pm245)}$~K for the reaction~(\ref{Eq9}), and ${\rm Ea_4}$ = ${\rm (1670\pm230)}$~K for the reaction~(\ref{Eq10}).
In Table \ref{table1}, we list the width and the activation energy barrier of the successive H-D substitution reactions leading to the ${\rm NH_2D}$, NHD$_2$, and ND$_3$ species. The listed values of the activation energies ${\rm Ea_i}$ minimize the $\chi^2$ value of our fit with respect to the experimental data.
The statistical parameter $\chi^2$ is obtained for each set of parameters by using the following formula:
\begin{equation}
{\rm \chi^2=\Sigma_{ML, mol} [S_t(ML, mol)-S_e(ML, mol)]^2/ S_e(ML, mol)}
\end{equation}
Where ${\rm S_t(ML, mol)}$ and ${\rm S_e(ML, mol)}$ are, respectively, the theoretical and experimental surface densities for each molecule at a given D fluence.
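This statistic can be sketched in a few lines; the nested dictionaries below hold toy numbers chosen purely for illustration, with the theoretical coverages standing in for the output of the kinetic model of Section~\ref{model}:

```python
def chi2(S_theory, S_exp):
    """Chi-squared of the formula above: sum over molecules and D fluences
    of (S_t - S_e)^2 / S_e, with surface densities expressed in ML."""
    total = 0.0
    for mol in S_exp:                         # "NH3", "NH2D", "NHD2", "ND3"
        for fluence, Se in S_exp[mol].items():
            St = S_theory[mol][fluence]
            total += (St - Se) ** 2 / Se
    return total

# Hypothetical coverages (ML) at two D fluences, for illustration only
S_exp    = {"NH3": {1.0: 0.75, 5.0: 0.66}, "NH2D": {1.0: 0.04, 5.0: 0.10}}
S_theory = {"NH3": {1.0: 0.74, 5.0: 0.67}, "NH2D": {1.0: 0.05, 5.0: 0.09}}
value = chi2(S_theory, S_exp)
```

In the actual fit this quantity is evaluated on the grid of (${\rm Ea_2}$, ${\rm Ea_3}$, ${\rm Ea_4}$, ${\rm Z_r}$) values and the minimum is retained.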
In order to obtain a good correlation between the model and the experiments, Figure~\ref{Fig6} shows how the $\chi^2$ value can be minimized by fixing a pair of activation energies (${\rm Ea_2,~Ea_3}$) for the deuteration reactions~(\ref{Eq8}) and (\ref{Eq9}) and varying only the third energy ${\rm Ea_4}$ of the reaction~(\ref{Eq10}).
Our activation energy barriers for the direct H-D substitution reactions (\ref{Eq8}-\ref{Eq10}) between ammonia species and D atoms (see Table~\ref{table1}) are smaller than the gas-phase activation energy barrier ${\rm E_a=5540~K}$ reported in the two references \cite{1969Kurylo, 2005Moyano}. The low activation energies obtained in this work can be explained by the catalytic effect of the ASW ice~+~oxidized HOPG substrate on the deuteration reaction ${\rm NH_3+D}$: this substrate favors the diffusion of D atoms and therefore increases the reactivity between ${\rm NH_3}$ molecules and D atoms on the surface.
\begin{figure}
\centering
\includegraphics[width=8cm]{Figure5.eps}
\caption{The curves minimizing the $\chi^2$ value between the kinetic model and the experimental measurements for the reaction between ${\rm NH_3}$ and D atoms, by setting a couple of activation energies barriers (${\rm Ea_2,~Ea_3}$) for the deuteration reactions~(\ref{Eq8}) and (\ref{Eq9}) and varying the value of third one (${\rm Ea_4}$) for the reaction~(\ref{Eq10}). ${\rm E_{diff}}$ and Z$_r$ are respectively 250~K and 0.83~$\textup{\AA}$. }
\label{Fig6}
\end{figure}
As shown in Table~\ref{table1}, the width ${\rm Z_r}$ and the height ${\rm E_a}$ of the activation barriers depend
on the diffusion energy ${\rm E_{diff}}$ of D atoms on the surface. One can note that the higher the diffusion energy ${\rm E_{diff}}$ of D atoms, the lower the width ${\rm Z_r}$ of the energy barriers. The diffusion of D atoms on the cold surface increases the probability of the H-D substitution reactions of ammonia in the solid phase.
Table \ref{table1} also shows that for each ${\rm E_{diff}}$, the activation energy barrier is always highest for the first deuteration reaction ${\rm NH_3+D}$, and then decreases by almost $10~\%$ for the second (${\rm NH_2D+D}$) and third (${\rm NHD_2+D}$) deuteration reactions. Our activation energy barriers for the deuteration reaction ${\rm NH_3+D}$ in the solid phase are much lower than the value (${\rm 46~kJ\cdot mol^{-1}}$) given in the gas phase \citep{2005Moyano,1969Kurylo}. This large difference can be explained by the catalytic effect of the substrate composed of oxidized graphite and ASW ice deposits.
\begin{table}
\centering \caption{The width Z$_r$ and the height of the energy barriers E$_a$, expressed in ($\textup{\AA}$) and in kelvin (K), respectively, of the successive H-D substitution reactions of ${\rm NH_3}$ molecules by D atoms on the oxidized, partly ASW covered graphite surface at 10~K, for a fixed value of D-atom diffusion energy ${\rm E_{diff}}$. The minimum $\chi^2$ value of the fits is found to vary between 0.1 and 0.3.}\label{table1}
\begin{tabular}{c|c|c|ccc}
\hline\hline
Reactions & ${\rm E_{diff}}$ & Z$_r$ & E$_a$ \\
\hline
units & K & $\textup{\AA}$ & K& \\
\hline
${\rm NH_3}$+D $\overset{p_{2}}{\longrightarrow}$ ${\rm NH_2D}$+H &150 & 0.86 & 1950 $\pm$ 250& \\
&250 & 0.83 & 1840 $\pm$ 270& \\
&350 & 0.81 & 1750 $\pm$ 320& \\
\hline
${\rm NH_2D}$+D $\overset{p_{3}}{\longrightarrow}$ ${\rm NHD_2}$+H &150 & 0.86 & 1820 $\pm$ 220& \\
&250 & 0.83 & 1690 $\pm$ 245& \\
&350 & 0.81 & 1610 $\pm$ 290& \\
\hline
${\rm NHD_2}$+D $\overset{p_{4}}{\longrightarrow}$ ${\rm ND_3}$+H &150 & 0.86 & 1800 $\pm$ 210&\\
&250 & 0.83 & 1670 $\pm$ 230& \\
&350 & 0.81 & 1600 $\pm$ 250& \\
\hline\hline
\end{tabular}
\end{table}
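For orientation, a fitted pair $({\rm Z_r,~E_a})$ can be translated into a tunnelling transmission probability if one assumes a simple rectangular barrier, a common approximation in such kinetic models; the exact barrier shape used in our model is not restated here, so the sketch below is only illustrative. The values ${\rm Z_r=0.83}$~$\textup{\AA}$ and ${\rm E_a=1840~K}$ are taken from Table~\ref{table1} for ${\rm NH_3+D}$ with ${\rm E_{diff}=250~K}$, and the tunnelling mass is assumed to be that of a single D atom.

```python
# Hedged illustration (not the model itself): transmission probability through
# a rectangular barrier of width Z_r and height E_a, with the mass of a D atom
# assumed as the tunnelling mass.
import math

KB = 1.380649e-23        # Boltzmann constant, J/K
HBAR = 1.054571817e-34   # reduced Planck constant, J s
AMU = 1.66053906660e-27  # atomic mass unit, kg

def square_barrier_tunneling(E_a_K, Z_r_angstrom, mass_amu):
    """exp(-2 Z_r sqrt(2 m E_a) / hbar) for a rectangular barrier."""
    E = E_a_K * KB
    m = mass_amu * AMU
    Z = Z_r_angstrom * 1e-10
    return math.exp(-2.0 * Z * math.sqrt(2.0 * m * E) / HBAR)

# Table 1 values for NH3 + D at E_diff = 250 K; D-atom mass ~ 2.014 amu
p = square_barrier_tunneling(1840.0, 0.83, 2.014)
print(f"{p:.2e}")  # of order 1e-9: reaction is tunnelling-dominated at 10 K
```

Under this crude approximation, the probability is tiny but nonzero, consistent with a reaction proceeding at 10~K only through tunnelling assisted by D-atom diffusion.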
\subsection{Rate equations of CH$_3$OH + D system reactions }\label{barrier CH3OH}
As has been suggested by Nagaoka et al.~\citep{2007Nagaoka}, the deuteration of ${\rm CH_3OH}$, ${\rm CH_2DOH}$ and ${\rm CHD_2OH}$ species by D atoms on cold surfaces occurs through the successive H-abstraction and D-addition mechanism, as follows:
\begin{equation}\label{Eq25}
{\rm CH_3OH~+~D}~{\longrightarrow}~{\rm CH_2OH +HD}
\end{equation}
\begin{equation}\label{Eq26}
{\rm CH_2OH~+~D}~{\longrightarrow}~{\rm CH_2DOH}
\end{equation}
\begin{equation}\label{Eq27}
{\rm CH_2DOH+D}~{\longrightarrow}~{\rm CHDOH +HD}
\end{equation}
\begin{equation}\label{Eq28}
{\rm CHDOH +D}~{\longrightarrow}~{\rm CHD_2OH}
\end{equation}
\begin{equation}\label{Eq29}
{\rm CHD_2OH +D}~{\longrightarrow}~{\rm CD_2OH+HD}
\end{equation}
\begin{equation}\label{Eq30}
{\rm CD_2OH~+~D}~{\longrightarrow}~{\rm CD_3OH}
\end{equation}
where the H-abstraction reactions (\ref{Eq25}, \ref{Eq27} and \ref{Eq29}) are exothermic, with small activation barriers in comparison to the direct H-D substitution reactions, while the D-addition reactions (\ref{Eq26}, \ref{Eq28} and \ref{Eq30}) are exothermic and barrier-less. In the review of Hama et al.~\citep{Hama2013}, the direct H-D substitution reaction
${\rm CH_3OH+D}$~${\longrightarrow}$~${\rm CH_2DOH +H}$ has a very large activation energy barrier of ${\rm 169~kJ\cdot mol^{-1}}$ (or 20330~K), in comparison to the H-abstraction reaction
${\rm CH_3OH~+~D}$~${\longrightarrow}$~${\rm CH_2OH +HD}$, which has an activation energy of ${\rm 27~kJ\cdot mol^{-1}}$ (or 3250~K), estimated from gas-phase calculations.
In this work, the successive deuteration reactions of methanol species by D atoms are described by the following simple reactions
(\ref{Eq31}-\ref{Eq33}).
\begin{equation}\label{Eq31}
{\rm CH_3OH+D} \overset{p'_{2}}{\longrightarrow}.....{\longrightarrow}{\rm CH_2DOH}
\end{equation}
\begin{equation}\label{Eq32}
{\rm CH_2DOH+D} \overset{p'_{3}}{\longrightarrow}.....{\longrightarrow}{\rm CHD_2OH}
\end{equation}
\begin{equation}\label{Eq33}
{\rm CHD_2OH+D} \overset{p'_{4}}{\longrightarrow}.....{\longrightarrow}{\rm CD_3OH}
\end{equation}
where the parameters ${\rm p'_2, p'_3~and~p'_4}$ are the reaction probabilities of the H-abstraction reactions (\ref{Eq25}, \ref{Eq27} and \ref{Eq29}) of ${\rm CH_3OH}$, ${\rm CH_2DOH}$ and ${\rm CHD_2OH}$ by D atoms, respectively.
Using the same kinetic model previously described for ${\rm NH_3+D}$ system reactions in solid phase, we have
estimated the activation energy barriers of the H-abstraction reactions (\ref{Eq25}, \ref{Eq27} and \ref{Eq29}) for ${\rm CH_3OH+D}$ system reactions.
Table~\ref{table2} summarizes the values of the width Z$_r$ and the activation energy barriers E$_a$ for each H-D substitution reaction of methanol species.
Our kinetic model provides an activation energy barrier E$_a$=(1450~$\pm$~210)~K or (12.1~$\pm$~1.7)~${\rm kJ\cdot mol^{-1}}$ for the first abstraction reaction (${\rm CH_3OH+D}$) given by equation (\ref{Eq25}). This value is more than a factor of two smaller than the activation energy (${\rm 27~kJ\cdot mol^{-1}}$ or ${\rm 3250~K}$) reported by \cite{Hama2013} from theoretical estimations in the gas phase. Once again, the catalytic role of the surface can explain the difference between gas-phase and solid-phase activation barriers.
The activation energy barriers of the successive deuteration reactions of methanol species
decrease significantly with increasing diffusion energy ${\rm E_{diff}}$ of D atoms on the surface.
Nevertheless, the values of the activation energy barriers for the ${\rm CH_3OH+D}$ system reactions (see Table~\ref{table2})
are always smaller than those of the ${\rm NH_3+D}$ system reactions (see Table~\ref{table1}).
\begin{table}
\centering \caption{The width Z$_r$ and the height of the energy barriers E$_a$, expressed in ($\textup{\AA}$) and in kelvin (K), respectively, of the successive H-D substitution reactions of ${\rm CH_3OH}$ molecules by D atoms on the oxidized, partly ASW covered graphite surface at 10~K, for a fixed value of D-atom diffusion energy ${\rm E_{diff}}$. The $\chi^2$ value of the fits varies between 0.1 and 0.3. \label{table2}}
\begin{tabular}{c|c|c|ccc}
\hline\hline
Reactions & ${\rm E_{diff}}$ & Z$_r$ & E$_a$ \\
\hline
units & K & $\textup{\AA}$ & K& \\
\hline
${\rm CH_3OH}$+D $\overset{p'_{2}}{\longrightarrow}$ ${\rm CH_2DOH}$+H &150 & 0.86 & 1450 $\pm$ 210& \\
&250 & 0.85 & 1080 $\pm$ 180& \\
&350 & 0.84 & 860 $\pm$ 120& \\
\hline
${\rm CH_2DOH}$+D $\overset{p'_{3}}{\longrightarrow}$ ${\rm CHD_2OH}$+H &150 & 0.86 & 1330 $\pm$ 200&\\
&250 & 0.85 & 990 $\pm$ 180&\\
&350 & 0.84 & 770 $\pm$ 145&\\
\hline
${\rm CHD_2OH}$+D $\overset{p'_{4}}{\longrightarrow}$ ${\rm CD_3OH}$+H &150 & 0.86 & 1300 $\pm$ 205&\\
&250 & 0.85 & 980 $\pm$ 170&\\
&350 & 0.84 & 780 $\pm$ 150&\\
\hline\hline
\end{tabular}
\end{table}
Figure~\ref{Fig7} shows the best fit of the data for the exponential decay of ${\rm CH_3OH}$,
and the increase of the surface densities of ${\rm CH_2DOH}$, ${\rm CHD_2OH}$ and ${\rm CD_3OH}$ with the increasing time and fluence of D atoms exposure on 0.8~ML of solid methanol ${\rm CH_3OH}$ pre-deposited on the oxidized HOPG surface.
We note that after 70~minutes of D-atom exposure, about ${\rm 0.44~ML}$ of the adsorbed ${\rm CH_3OH}$ molecules are deuterated into three isotopologue species, with formation yields of $\sim22~\%$ for ${\rm CH_2DOH}$, $\sim15~\%$ for ${\rm CHD_2OH}$, and $\sim8~\%$ for ${\rm CD_3OH}$. The comparison with the previous kinetic results given in Figure~\ref{Fig6} shows that when 0.8~ML of solid NH$_3$ is irradiated with D atoms during the same exposure time of 70 minutes, only $10~\%$ of the NH$_3$ molecules are deuterated, into ${\rm NH_2D}$ ($\sim8~\%$) with traces of ${\rm NHD_2}$ ($\sim2~\%$) and ${\rm ND_3}$ ($<1~\%$). This means that during the 70 minutes of D-atom exposure, 0.44~ML (or 55~\%) of the adsorbed ${\rm CH_3OH}$ molecules are consumed by D atoms, while only 0.08~ML (or 10~\%) of the solid ammonia is consumed.
We define the deuteration rate ${\rm \upsilon_X=\frac{d[X]}{dt}}$ of an adsorbed species X by D atoms as the amount [X] of this species (in ${\rm molecules\cdot cm^{-2}}$) consumed per unit time (in ${\rm min}$) of D-atom exposure on the surface. The deuteration rate of ${\rm CH_3OH}$ is estimated at ${\rm \upsilon_{CH3OH}}$ ${\rm\backsimeq 0.005\times10^{15}}$ ${\rm molecules\cdot cm^{-2}\cdot min^{-1}}$, while that of NH$_3$ is found to be ${\rm \upsilon_{NH3}}$ ${\rm \backsimeq 0.001\times10^{15}}$ ${\rm molecules\cdot cm^{-2}\cdot min^{-1}}$, and can slightly decrease for extended irradiation up to 240 minutes. The ratio of the two deuteration rates is ${\rm \frac{\upsilon_{CH3OH}}{\upsilon_{NH3}}\simeq5}$, meaning that the deuteration of ${\rm CH_3OH}$ molecules by D atoms on cold, oxidized graphite HOPG surfaces with ASW ice deposits is about five times faster than that of NH$_3$.
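The quoted rates follow directly from the consumed amounts and the exposure time. The short sketch below redoes this arithmetic, assuming the usual conversion 1~ML~$\simeq 10^{15}$~molecules$\,\cdot\,$cm$^{-2}$; small differences with the rounded values quoted in the text are expected.

```python
# Back-of-the-envelope check of the deuteration rates, assuming
# 1 ML ~ 1e15 molecules/cm^2 (assumed calibration) over a 70 min exposure.
ML = 1e15                    # molecules per cm^2 in one monolayer (assumed)
t = 70.0                     # D-atom exposure time, minutes

consumed_ch3oh = 0.44 * ML   # ML of CH3OH consumed after 70 min
consumed_nh3 = 0.08 * ML     # ML of NH3 consumed after 70 min

v_ch3oh = consumed_ch3oh / t  # ~0.006e15 molecules cm^-2 min^-1
v_nh3 = consumed_nh3 / t      # ~0.001e15 molecules cm^-2 min^-1

# ratio of the two deuteration rates, of order 5 as quoted in the text
print(round(v_ch3oh / v_nh3, 1))
```

The ratio of roughly five comes out of the consumed fractions alone (0.44/0.08), independently of the monolayer calibration.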
\begin{figure}
\centering
\includegraphics[width=8cm]{Figure7.eps}
\caption{The variation of the surface densities of ${\rm CH_3OH}$ (m/z=32), ${\rm CH_2DOH}$ (m/z=33), ${\rm CHD_2OH}$
(m/z=34) and ${\rm CD_3OH}$ (m/z=35) as a function of D atoms exposure times and D fluences on 0.8~ML coverage of ${\rm CH_3OH}$ film pre-deposited on the surface of an oxidized graphite substrate held at 10~K. Full squares, circles and triangles are the TPD experimental data. Solid lines are the fits obtained from the model. The uncertainties are given by the errors bars.} \label{Fig7}
\end{figure}
\section{Discussion and conclusions}
In this work, we demonstrated experimentally the deuteration of ${\rm NH_3}$ molecules by D atoms on a cold oxidized HOPG surface, partly covered with ASW ice. The deuteration experiments of solid ammonia were performed at low surface coverage and a low temperature of 10~K, using mass spectroscopy and temperature programmed desorption (TPD).
The isotopologue ammonia species NH$_2$D, NHD$_2$ and ND$_3$ desorbing from the surface at 96~K and 150~K are likely to be formed by direct exothermic H-D substitution reactions between the adsorbed ammonia species on the surface and the impinging D atoms. A kinetic model taking into account the diffusion of D atoms on the surface provides the activation energy barriers of the deuteration reactions ${(\rm NH_3+D)}$ in the solid phase. We found that the energy barrier for the D-H exchange reaction ${(\rm NH_3+D)}$ is (1840~$\pm$~270~K) or (${\rm 15.4~\pm~2.5~kJ\cdot mol^{-1}}$), three times lower than that predicted in the gas phase (5530~K or ${\rm 46~kJ\cdot mol^{-1}}$) \citep{1969Kurylo,2005Moyano}. Our results also show that the activation energy barrier for the first deuteration reaction ${(\rm NH_3+D)}$ of ammonia is almost two times higher than that of the abstraction reaction (\ref {Eq25}) of ${\rm CH_3OH}$ (1080~$\pm$~180~K) or (9.0~$\pm$~1.2~${\rm kJ\cdot mol^{-1}}$).
Our experimental results showed that the deuteration reaction (${\rm NH_3+D}$) occurs through quantum tunneling, and that it is about five times slower than the methanol (${\rm CH_3OH}$) deuteration process.
Our laboratory experiments lead to the formation of deuterated ammonia species, in contrast to the previous experiments of Nagaoka et al. \citep{2005Nagaoka} and Fedoseev et al. \citep{2015Fedoseev}. This is because our experimental conditions are different from those works, and help to overcome the classical, quantum-tunneling, and diffusion activation barriers of the reactions between ${\rm NH_3}$ and D atoms on the oxidized graphite surface. It seems that the main factors that enhance the deuteration reactions between ammonia and D atoms in our experiments are the low D flux and the low thickness of solid ${\rm NH_3}$. The effect of the ASW water-ice contaminants on the formation of the observed deuterated ammonia species seems to be negligible.
By lowering significantly the D-atom flux in this work with respect to the previous works of Fedoseev et al. \citep{2015Fedoseev} and Nagaoka et al. \citep{2005Nagaoka}, we increase the density of D atoms available to diffuse on the surface and interact efficiently with the ammonia species physisorbed on the oxidized HOPG surface. However, the D-H exchange reaction between ${\rm NH_3}$ and D atoms is in competition with the barrier-less recombination reaction ${\rm D}+{\rm D}{\longrightarrow}{\rm D_2}$. We note that the D flux used here could have been suitable for H-D substitution reactions in the experiments of Fedoseev et al. \citep{2015Fedoseev} and Nagaoka et al. \citep{2005Nagaoka}, both for methanol and ammonia, had they reduced the thickness of their ices to a fraction of one monolayer. These authors did not try to reduce simultaneously the thickness of the ices and the flux of D atoms to study the effect of these two parameters on the efficiency of the deuteration reactions at low temperatures.
Other factors related to a specific orientation of the graphitic surface, or a possible interaction of ammonia with the substrate through chemisorption at the low surface temperature of 10~K, could induce the H-D exchange between ${\rm NH_3}$ and D atoms. Some experimental \citep{2013Yeh} and theoretical \citep{2012Tang}
works reported in the literature have demonstrated the possible dissociative chemisorption of NH$_3$ molecules on the oxidized graphite surface at the epoxy functional groups created by the oxidation of the HOPG surface at room temperature \citep{2012Larciprete}. The dissociation of the adsorbed ammonia leads to the formation of ${\rm C-NH_2}$ radicals, which can react with deuterium atoms. If chemisorption occurred for NH$_3$ in our experiments, CH$_3$OH molecules would also be chemisorbed on the oxidized graphite surface, leading to the formation of the CH$_2$OH radical. However, all the molecules in our experiments desorb at physisorption temperatures: 140~K for methanol, 96~K for ammonia species, and 150~K for traces of water ice. In addition, the energy barrier for the dissociative chemisorption of NH$_3$ on the oxidized HOPG surface has been predicted to be 97.90~${\rm kJ\cdot mol^{-1}}$ \citep{2012Tang}. This activation energy barrier is too high to be overcome, in comparison to the activation energy barriers found in this work ($\sim$16~${\rm kJ\cdot mol^{-1}}$), which makes chemisorption improbable in our experiments.
Our results on the deuteration of small molecules, such as NH$_3$, by surface chemistry are important in the fields of astronomy, astrochemistry and low-temperature physics.
The formation of the isotopologue ammonia species on cold grain surfaces can contribute to the D-enrichment of ammonia in the interstellar medium, and therefore help explain the observed ratios of H- and D-bearing ammonia molecules in dark clouds. However, the amount of NH$_3$ molecules expected to be deuterated in dense molecular clouds on a timescale of ${\rm 10^{5}-10^{6}}$ years is not large enough to reproduce the large gas-phase interstellar abundances of deuterated ammonia molecules.
In space, the low reaction probability of NH$_3$ molecules with D atoms on interstellar grain mantles results from the competitive surface reactions of D atoms with the accreted species (D, N, O, O$_2$...) from the gas phase, leading to the formation of D$_2$ molecules through D+D recombination \cite{2006Horneker}, NH$_2$D and ND$_3$ through D+N addition reactions \cite{2015Fedoseev}, and heavy D$_2$O water ices through D+O and D+O$_2$ chemical reactions \cite{2010Ioppolo,2013Dulieu, 2013Chaabouni}.
Observational studies towards cold regions (dark molecular clouds and dense cores such as L134N) \citep{2000Roueff, 2000Saito,2002Van} show deuterated ammonia species in the gas phase. In these cold environments of the interstellar medium, grain mantles are exposed to cosmic rays and UV irradiation fields originating from hot stars. These energetic UV photons may induce the desorption into the gas phase of the newly formed deuterated species NH$_2$D, NHD$_2$ and ND$_3$ on the grain surfaces by a non-thermal photo-desorption process \cite{1990Hartquist}. The desorption of the deuterated ammonia species in cold regions may also result from exothermic reactions occurring on the grain, in particular the formation of molecular hydrogen (${\rm H_2}$) \cite{1993Duley} by H bombardment. This reaction releases 4.5~eV of excess energy, which can be transferred to the grain surface and cause local heating of the deuterated species. It has been demonstrated that such local heating can reorganize the local structure of the ice mantle \cite{2011Accolla}, although it can hardly induce indirect desorption \cite{2016Minissale}.
Because the abundance of ammonia (NH$_3$) in icy mantles is nearly 15~\% with respect to water (H$_2$O) ice
\citep{2000Gibb,2004Gibb}, it is interesting to study the efficiency of the deuteration reaction ${\rm NH_3+D}$ on amorphous solid water ASW ice surfaces, and explore the role of the ice grain chemistry in the interstellar deuterium fractionation of ammonia molecules.
\section{Acknowledgements}
The authors thank the editor and the anonymous reviewers for their valuable and useful comments. The authors also thank Dr Paola Caselli (Center for Astrochemical Studies, Max Planck Institute for Extraterrestrial Physics MPE) for the relevant discussions about the deuteration of ammonia in the interstellar medium.
\bibliographystyle{spphys}
\section{Introduction}
Computers are more and more present in everyday life, and they often perform tasks that were previously reserved to human beings. In particular, the rise of Artificial Intelligence in recent years showed that many decision-making situations can be handled by computers in a way comparable to, or more efficient than, that of humans. However, the processes used by computers are often very different from the ones used by human beings. These different processes can affect the decision-making in ways that are difficult to assess, but should be explored to better understand the limitations and advantages of the computer approach. A particularly spectacular way of testing these differences was put forward by Alan Turing: in order to distinguish a human from a computer, one could ask a person to dialog with both anonymously and try to assess which one is the biological agent. As the question of human-computer interaction becomes more pressing, there is an ever growing need to understand these differences \cite{hci14}. Many complex problems can illustrate the deep differences between human reasoning and the computer approach. Board games such as chess or go, which are perfect-information zero-sum games, provide an interesting testbed for such investigations. The complexity of these games is such that computers cannot use brute force and have to rely on refined algorithms from Artificial Intelligence. Indeed, the number of legal positions is about $10^{50}$ in chess and $10^{171}$ in go \cite{TroFar07}, and the number of possible games of go was recently estimated to be at least $10^{10^{108}}$ \cite{WalTro16}. This makes any exhaustive analysis impossible, even for machines, and pure computer power is not enough to beat humans. Indeed, the most recent go program, AlphaGo \cite{alphago16}, used state-of-the-art tools such as deep learning neural networks in order to beat world champions.
Various approaches were considered to overcome the vastness of configuration space. A cornerstone of the computer approach to board games is a statistical physics treatment of game features. A first possibility is to explore the tree of all games stochastically, an approach which allowed for instance to investigate the topological structure of the state space of chess \cite{sequencingchess16}. A second option is to consider only opening sequences in the game tree. This allowed e.g.~to identify Zipf's law in the tree of openings in chess \cite{chess} and in go \cite{Weiqi15}. A third possibility is to restrict oneself to local features of the game, by considering only local patterns. This approach was taken for instance in \cite{LiuDouLu08}, where the frequency distribution of patterns in professional go game records was investigated.
Local patterns play an essential part in the most recent approaches to computer go simulators \cite{stern2006bayesian}. Pioneering software was based on deterministic algorithms \cite{computers,BouCaz01}. Today, computer algorithms implement Monte-Carlo go \cite{MonteCarloGo, Mogo} or Monte-Carlo tree search techniques \cite{progressive, Brown12, Cou07, GelKoc12}, which are based on a statistical approach: typically, the value of each move is estimated by playing a game at random until its end and by assigning to the move the average winning probability. The random part relies on a playout policy which tells how to weight each probabilistic move. Such a playout policy rests on properties of local features, e.g.~$3\times 3$ patterns with atari status \cite{Coulom2}. The most recent computer go approaches such as AlphaGo \cite{alphago16}, which famously defeated a world champion in 2016 and 2017, also incorporate local pattern-based features such as $3\times 3$ patterns and diamond-shaped patterns.
In the present work, we investigate the differences between human and computer players of go using statistical properties of complex networks built from local patterns of the game. We consider networks whose nodes correspond to patterns describing the local situation on the $19\times 19$ goban (board). In the original setting \cite{goGG}, we introduced a network based on $3\times 3$ patterns of moves. We then extended it to larger, diamond-shaped patterns and explored the community structure of the network for human players \cite{goKGG}. Here we will focus on the differences between networks obtained from games played by humans and games played by computers. To the best of our knowledge, this study is new. In a parallel way, there have been previous studies distinguishing amateur and professional human players by looking at statistical differences between their games. For instance, professional moves in a fixed region of the goban were shown to be less predictable than amateur ones, and this predictability turned out to evolve as a function of the degree of expertise of the professional \cite{Harre11}. Differences between amateur players of different levels were also identified in \cite{goKGG}. Here we will show that there are clear differences between complex networks based on human games and those based on computer games. These differences, which appear at a statistical level, can be seen as a signature of the nature of the players involved in the game, and reveal the different processes and strategies at work. We will specify which quantities can be used to detect these differences, and how large a sample of games should be for them to be statistically significant. Additionally, we will show that this technique allows one to distinguish between computer games played with different types of algorithms.
\section{Construction of the networks}
Our network describing local moves in the game of go is constructed in the following way \cite{goGG}. Nodes correspond to $3\times 3$ intersection patterns in the $19\times 19$ goban with an empty intersection at its centre. Since an intersection can be empty, black or white, there are $3^8$ such patterns. Taking into account the existence of borders and corners on the goban, and considering as identical the patterns equivalent under any symmetry of the square as well as colour swap, we end up with $N=1107$ non-equivalent configurations, which are the nodes of our graph. Let $i$ and $j$ be two given nodes. In the course of a game, it may happen that some player plays at a position $(a,b)$ which is the centre of the pattern labeled by $i$, and that some player (possibly the same) plays later in the same game at some position $(a',b')$ which is the centre of the pattern labeled by $j$. If this happens in such a way that the distance between $(a,b)$ and $(a',b')$ is smaller than some fixed distance $d_s$, and that the move at $(a',b')$ is the first one to be played at a distance less than $d_s$ since $(a,b)$ has been played, then we put a directed link between nodes $i$ and $j$. Since part of the go game corresponds to local fights, the distance $d_s$ allows us to connect moves that are most likely to be strategically related. Following \cite{goGG} we choose this strategic distance to be $d_{s}=4$. We thus construct from a database a weighted directed network, where the weight is given by the number of occurrences of the link in the games of the database.
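A minimal sketch of this link-construction rule, assuming Euclidean distance on the goban and taking the pattern extraction as given (the function name and the toy moves below are hypothetical):

```python
# Illustrative sketch (not the authors' code): given one game as a sequence of
# (pattern_id, position) moves, link each move to the first later move played
# within the strategic distance d_s = 4, accumulating weighted directed links.
from collections import defaultdict
from math import dist  # Euclidean distance (assumed metric)

D_S = 4.0  # strategic distance d_s from the text

def add_game_links(moves, weights):
    """moves: list of (pattern_id, (row, col)); weights: dict of link counts."""
    for k, (i, pos) in enumerate(moves):
        # first later move played at distance < d_s from this one
        for j, pos2 in moves[k + 1:]:
            if dist(pos, pos2) < D_S:
                weights[(i, j)] += 1  # directed, weighted link i -> j
                break
    return weights

# toy game: the second move is within d_s of the first, the third is far away
w = add_game_links([(5, (3, 3)), (17, (4, 4)), (2, (15, 15))], defaultdict(int))
print(dict(w))  # → {(5, 17): 1}
```

Accumulating such counts over all games of a database yields the weighted adjacency matrix used in the following sections.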
In what follows, we will use three databases. The first one corresponds to 8000 games played by amateur humans, and is available online in sgf format \cite{database}. The two other databases correspond to games played by computer programs, using either a deterministic approach or a Monte-Carlo approach. To our knowledge, there is no freely available database for computer games, and therefore we opted for free go simulators. As a deterministic computer player we chose the software Gnugo \cite{gnugo}. Although this program is relatively weak compared to more recent computer programs, it is easy to handle and a seed taken as an input number in the program allows to deterministically reproduce a game. Using 8000 different seeds and letting the program play against itself we constructed a database of 8000 distinct computer-generated games. As a computer player implementing the Monte-Carlo approach we chose the software Fuego \cite{fuego}, placing very well in computer go tournaments in the past few years, with which we generated a database of 8000 games. These databases allowed us to construct three distinct networks, one from the human database and one for each computer-generated one. In order to investigate the role of the database size, we also consider graphs constructed from smaller subsets of these databases (with networks constructed from 1000 to 8000 games).
\section{General structure of the networks}
We first investigate the general structure of the three networks built from all 8000 games for each database.
Taking into account the degeneracies of the links, each node has a total of $K_{\textrm{\scriptsize in}}$ incoming links and $K_{\textrm{\scriptsize out}}$ outgoing links. The (normalized) integrated distribution of $K_{\textrm{\scriptsize in}}$ and $K_{\textrm{\scriptsize out}}$, displayed in Fig.~\ref{figlinks} for each network, shows that general features are similar. In all cases, the distribution of outgoing links is very similar to the distribution of ingoing links. This symmetry is due to the fact that the way of constructing the networks from sequentially played games ensures that in most cases an ingoing link is followed by an outgoing link to the next move. The distributions of links are close to power-law distributions, with a decrease in $1/K^{\gamma}$ with $\gamma \approx 1.0$. Networks displaying such a power-law scaling of the degree distribution have been called scale-free networks \cite{AB99}. Many real-world networks (from ecological webs to social networks) possess this property, with an exponent $\gamma$ typically around 1 (see Table II of \cite{AB02}). Our networks belong to this class, which indicates the presence of hubs (patterns with large numbers of incoming or outgoing links), and more generally a hierarchical structure between patterns appearing very commonly and others which are scarce in the database.
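The normalized integrated distribution used here can be sketched as follows, with a toy degree sequence standing in for the real in- and out-degree data (the helper name is hypothetical):

```python
# Sketch of the normalized integrated distribution of Fig. 1: P(K*) is the
# proportion of nodes having more than K = k_tot * K* links.
import numpy as np

def integrated_distribution(degrees):
    degrees = np.asarray(degrees, dtype=float)
    k_tot = degrees.sum()            # total number of links in the network
    ks = np.unique(degrees)          # distinct degree values, sorted
    # fraction of nodes with strictly more than k links, for each k
    P = np.array([(degrees > k).mean() for k in ks])
    return ks / k_tot, P             # abscissa K* = K/k_tot, ordinate P(K*)

# toy degree sequence; a power-law tail would give P(K*) ~ 1/K*^gamma
k_star, P = integrated_distribution([1, 1, 2, 3, 5, 8, 13])
print(np.round(k_star, 3), np.round(P, 3))
```

By construction $P(K^{*})$ is nonincreasing and vanishes at the maximal degree, matching the leftmost and rightmost points described in the caption of Fig.~\ref{figlinks}.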
While the distributions for the three networks are very similar, the power-law scaling ends (on the right of the plot) at a smaller value of $K$ in the case of both computer go networks, with strong oscillations. The rightmost points correspond to hubs. The figure shows that such hubs are slightly rarer in the computer case than in the human one. This may indicate that certain moves are preferred by some human players independently of the global strategy, while computers play in a more even fashion. However, at the level of these distributions the differences between databases seem too weak to lead to reliable indicators.
\begin{figure}
\begin{center}
\includegraphics*[width=0.99\linewidth]{fig1.pdf}
\end{center}
\caption{Integrated distribution of ingoing links $K_{\textrm{\scriptsize in}}^{*}=K_{\textrm{\scriptsize in}}/k_{\textrm{tot}}$ and outgoing links $K_{\textrm{\scriptsize out}}^{*}=K_{\textrm{\scriptsize out}}/k_{\textrm{tot}}$. Here $k_{\textrm{tot}}$ is the total number of links in the network (from top to bottom $1\,589\,729$, $2\,046\,260$ and $1\,527\,421$). $P(K^{*})$ is defined as the proportion of nodes having more than $K=k_{\textrm{tot}}K^{*}$ links. From top to bottom, deterministic computer (Gnugo, empty blue $K_{\textrm{\scriptsize in}}^{*}$, filled maroon $K_{\textrm{\scriptsize out}}^{*}$), Monte-Carlo computer (Fuego, empty orange $K_{\textrm{\scriptsize in}}^{*}$, filled grey $K_{\textrm{\scriptsize out}}^{*}$) and humans (empty red $K_{\textrm{\scriptsize in}}^{*}$, filled black $K_{\textrm{\scriptsize out}}^{*}$), shifted down by respectively $0$, $-1/2$ and $-1$ for clarity. The leftmost point corresponds to the nodes with minimal number of links $k_{\textrm{min}}$ (here $k_{\textrm{min}}=1$ or $2$), with abscissa $k_{\textrm{min}}/k_{\textrm{tot}}$ and ordinate $1-N_{0}/N$, $N_{0}$ being the number of nodes with no link. The rightmost point corresponds to the node with maximal number of links $k_{\textrm{max}}$ (which happens to be also the node with highest PageRank shown in Fig.~\ref{figPR}), with abscissa $k_{\textrm{max}}/k_{\textrm{tot}}$ and ordinate $1/N$. The networks are all built from 8000 games. Black dashed lines have slope $-1$. \label{figlinks}}
\end{figure}
\section{PageRank}
Each directed network constructed above can be described by its $N\times N$ weighted adjacency matrix $(A_{ij})_{1\leq i,j\leq N}$, with $N=1107$, such that $A_{ij}$ is the number of directed links between $i$ and $j$ as encountered in the database. The PageRank vector, defined below, takes the network structure into account in order to rank all nodes according to their significance within the network. It goes beyond the mere frequency ranking of the nodes, where nodes would be ordered by the frequency of their occurrence in the database. Physically speaking, the significance of a node is determined by the average time that would be spent on it by a random walker moving on the network by one step per time unit and choosing a neighbouring node at random with a probability proportional to the number of links to this neighbour. Such a walker would play a virtual game where, at each step, it can play any move authorized by the network, with some probability given by the network. The PageRank vector assigns to any node $i$ a nonnegative value corresponding to the equilibrium probability of finding this virtual player on node $i$.
More precisely, the PageRank vector is obtained from the Google matrix $G$, defined as $G_{ij}=\alpha S_{ij}+(1-\alpha)/N$, with $S$ the matrix obtained by normalizing the weighted adjacency matrix so that each column sums up to 1 (any column of 0 being replaced by a column of $1/N$), and $\alpha$ a parameter in $[0,1]$. Since $G$ is a stochastic matrix (all its columns sum up to 1), there is a vector $p$ such that $Gp=p$ and $p_i\geq 0$ for $1\leq i\leq N$. This right eigenvector of $G$, associated with the eigenvalue 1, is called the PageRank vector. We can then define the corresponding ranking vector $(a_k)_{1\leq k\leq N}$, with $1\leq a_k\leq N$, as the permutation of integers from 1 to $N$ obtained by ranking nodes in decreasing order according to the entries $p_i$ of the PageRank vector, namely $p_{a_1}\geq p_{a_2}\geq \ldots\geq p_{a_N}$. As an illustration, we show in Fig.~\ref{figPR}, for each network, the 20 nodes $a_1,\ldots,a_{20}$ with largest PageRank vector entry $p_i$. The distinction clearly appears. For instance, among the 20 entries of the human network only 12 appear in the Gnugo PageRank vector (18 in the Fuego one).
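A minimal sketch of this construction, with a 3-node toy adjacency matrix standing in for the real $1107$-node network:

```python
# Sketch of the PageRank construction described above: column-stochastic
# matrix S from the weighted adjacency matrix A, Google matrix
# G = alpha*S + (1-alpha)/N, and the eigenvector p with Gp = p obtained by
# power iteration.
import numpy as np

def pagerank(A, alpha=0.85, tol=1e-12):
    N = A.shape[0]
    col_sums = A.sum(axis=0)
    # normalize each column to sum to 1; empty columns become uniform 1/N
    S = np.where(col_sums > 0, A / np.where(col_sums == 0, 1, col_sums), 1.0 / N)
    G = alpha * S + (1 - alpha) / N
    p = np.full(N, 1.0 / N)
    while True:
        p_new = G @ p
        if np.abs(p_new - p).sum() < tol:
            return p_new / p_new.sum()
        p = p_new

# toy weighted adjacency matrix (columns are sources, rows are targets)
A = np.array([[0., 1., 1.],
              [1., 0., 0.],
              [1., 1., 0.]])
p = pagerank(A)
ranking = np.argsort(-p)  # ranking vector: nodes in decreasing PageRank order
print(np.round(p, 3), ranking)
```

The ranking vector obtained this way is the object compared between human and computer networks in the next paragraphs.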
\begin{figure}
\begin{center}
\includegraphics*[width=.99\linewidth]{fig2.pdf}
\end{center}
\caption{First 20 patterns as ranked by the PageRank vector, for the
networks generated by Gnugo (top), Fuego (middle) and humans (bottom), each built from 8000 games, for $\alpha=0.85$ (see text). Black plays at the cross. \label{figPR}}
\end{figure}
In order to quantify more accurately the discrepancy between the human and
computer PageRank vectors, and between PageRank vectors obtained for different database sizes, we consider the correlations between their associated ranking vectors. If $p$ and $q$ are two PageRank vectors, let $A=(a_k)_{1\leq k\leq N}$ and $B=(b_k)_{1\leq k\leq N}$ be their respective ranking vectors, with $1\leq a_k,b_k\leq N$. The correlations are estimated from the discrepancy between pairs $(a_k,b_k)$ and the line $y=x$. As an illustration, such correlation plots are shown in Fig.~\ref{correlPR}, where pairs $(a_k,b_k)$ are plotted for the Gnugo and human databases. While correlation between two human PageRanks or two Gnugo PageRanks is quite good, the correlation between human and computer-generated networks is very poor. This observation does not depend on the choice or the size of the database: indeed, as appears in Fig.~\ref{correlPR}, several different databases of different sizes all give comparable results. In order to be more quantitative, we introduce the dispersion
\begin{equation}
\label{sigmadef}
\sigma(A,B)=\left(\frac{\sum_{k=1}^{\lfloor N/2 \rfloor} (a_k-b_k)^{2}}{\lfloor N/2 \rfloor} \right)^{1/2},
\end{equation}
where we restrict ourselves to the first half of the entries, corresponding to the largest
values of the $p_i$ (this truncation to $\lfloor N/2 \rfloor$ amounts to neglecting entries smaller than $p_{a_{554}}$, which, for all samples and database sizes investigated, is of order $3\cdot 10^{-4}$ for a PageRank vector normalized by $\sum_ip_i=1$). The dispersion gives the (quadratic) mean distance from the perfect-correlation line $y=x$ to the
points $(a_k,b_k)$ in the plot of Fig.~\ref{correlPR}; for two random permutations, $\sigma \approx 450$ on average. In the case of two groups of 4000 games, the human-human dispersion is 43.66, the computer-computer one is 24.04, while the human-computer one is 192.58. A similar discrepancy is measured for the 1000-game groups, with a dispersion $\sigma$ (averaged over the different samples) given by $\sigma = 66.71$ for human-human, $\sigma = 44.14$ for computer-computer, and $\sigma = 199.27$ for human-computer. The plot at the bottom of Fig.~\ref{correlPR} is a PageRank correlation plot between human and computer for the whole database of 8000 games each, giving $\sigma = 193.48$.
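The dispersion (\ref{sigmadef}) is simple to compute from two ranking vectors. The following Python sketch is our own illustration (not the authors' code); it also reproduces the order of magnitude of the random-permutation baseline quoted above, for $N=1107$ nodes:

```python
import numpy as np

def dispersion(A, B):
    """sigma(A, B) of Eq. (sigmadef): quadratic mean distance of the pairs
    (a_k, b_k) to the line y = x, over the first floor(N/2) ranks."""
    A, B = np.asarray(A, dtype=float), np.asarray(B, dtype=float)
    half = len(A) // 2
    return float(np.sqrt(np.mean((A[:half] - B[:half]) ** 2)))

# Baseline: two unrelated rankings of N = 1107 nodes give sigma ~ 450.
rng = np.random.default_rng(0)
N = 1107
sigma_rand = np.mean([dispersion(rng.permutation(N) + 1,
                                 rng.permutation(N) + 1)
                      for _ in range(20)])
```

For independent uniform ranks the expected squared distance is $\approx N^2/6$, so $\sigma \approx N/\sqrt{6} \approx 452$ for $N=1107$, consistent with the value quoted in the text.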
\begin{figure}
\begin{center}
\includegraphics*[width=.99\linewidth]{fig3.pdf}
\end{center}
\caption{PageRank-PageRank correlation for networks built from subsets of
the database of computer (Gnugo) or human games ($\alpha=0.85$). First line: databases are split into 2 groups of 4000 games, yielding two distinct networks, and the correlator between the two PageRanks is plotted. Second line: databases are split into 8 groups of 1000 games, and 4 correlators are plotted: group 1 vs 2 (black), 3 vs 4 (red), 5 vs 6 (green), 7 vs 8 (blue). Middle column corresponds to human vs computer, left column to human vs human, right column to computer vs computer. Bottom panel: same for networks constructed from the whole database (8000 human vs 8000 computer games). Nodes are ranked according to the PageRank of each network. Only the first $\lfloor N/2\rfloor=553$ nodes are plotted. \label{correlPR}}
\end{figure}
\section{Spectrum of the Google matrix}
The PageRank vector is the right eigenvector of $G$ associated with the largest eigenvalue $\lambda=1$. It already shows some clear differences between the networks built from computer-played games and from human-played games. We now turn to subsequent eigenvalues and eigenvectors. In Fig.~\ref{figlambda} we display the distribution of eigenvalues of the Google matrix in the complex plane for the three networks of 8000 games. The properties of the matrix impose that all eigenvalues lie inside the unit disk, with one of them (associated with the PageRank) exactly at 1, and that complex eigenvalues occur in complex-conjugate pairs. The spectra obtained from different networks are clearly very different: eigenvalues for the network built from computer-played games using Gnugo are much less concentrated around zero, with many eigenvalues at a distance 0.2--0.6 from zero which are absent in the other networks. Moreover, while the bulk of eigenvalues looks similar for games played by Fuego or by humans, many outlying eigenvalues are present in the case of Fuego.
To make these observations more quantitative, we plot in the main panel of Fig.~\ref{figlambda} the radius $\lambda_{c}(x)$ of the minimal circle centred at 0 and containing a certain percentage $x$ of eigenvalues. The difference between the two behaviours is striking. Considering plots obtained from subsets of the databases, we see that the result is robust: although $\lambda_{c}(x)$ depends much more on the size of the subset used to build the network for Gnugo than for the other two networks, the difference between the plots remains clear.
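The quantity $\lambda_c(x)$ is straightforward to extract from a full diagonalization of $G$. A minimal Python sketch (our own illustration; a random column-stochastic matrix stands in for the game networks, and all names are ours):

```python
import numpy as np

def lambda_c(G, fractions=(0.5, 0.6, 0.7, 0.8, 0.9)):
    """Radius of the minimal circle centred at 0 containing a given
    fraction of the eigenvalues of G."""
    radii = np.sort(np.abs(np.linalg.eigvals(G)))
    n = len(radii)
    return {x: radii[int(np.ceil(x * n)) - 1] for x in fractions}

# Stand-in for a game network: a random column-stochastic matrix (alpha = 1).
rng = np.random.default_rng(1)
A = rng.random((50, 50))
G = A / A.sum(axis=0)
lc = lambda_c(G)
```

Since $G$ is column-stochastic, its spectral radius is 1 (the Perron eigenvalue), so $\lambda_c(x)\leq 1$ for every $x$, and $\lambda_c$ is non-decreasing in $x$ by construction.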
In fact, the presence of eigenvalues with large absolute value has been related to the existence of parts of the network which are weakly linked to the rest (``communities''). Eigenvalues lying out of the bulk in the Fuego network would mean that more communities are present in that network than in the human case, and even more in the Gnugo network. Note however that the average number of moves per game is larger for Fuego, which is reflected in the total number of links in the network (see caption of Fig.~\ref{figlinks}), and may introduce a small bias in the comparison. The outlying eigenvalues seem to indicate that the deterministic program, and less markedly the Monte-Carlo one, can create different groups of moves linked to each other and not much linked to the other moves, i.e.~different strategies relatively independent from each other. This can be related to the results displayed in Fig.~\ref{figlinks}, which show the presence of more hubs with a large number of links in the network generated by human-played games.
\begin{figure}
\begin{center}
\includegraphics*[width=0.99\linewidth]{fig4.pdf}
\end{center}
\caption{Top row: spectrum of the Google matrix for $\alpha=1$ (see text) for the network generated by computers (Gnugo left, Fuego middle) and humans (right). Main plot: $\lambda_{c}(x)$ for $x = 50, 60, 70, 80, 90$ (see text), for Gnugo (blue circles), Fuego (orange squares) and humans (red diamonds), averaged over $m$ networks built from 1000 games ($m=8$, dashed line), 4000 games ($m=2$, long dashed line) and 8000 games ($m=1$, full line).\label{figlambda}}
\end{figure}
\section{Other eigenvectors of the Google matrix}
The analysis of the PageRank vector, which is the eigenvector associated with the largest eigenvalue, has shown (see Fig.~\ref{figPR}) that the most significant moves differ between the three networks. When eigenvalues are ordered according to their modulus, as $1=\lambda_1>|\lambda_2|\geq|\lambda_3|\ldots$, the right eigenvectors associated with eigenvalues $\lambda_2,\lambda_3,\ldots$ may be expected to display more refined differences between the networks.
In order to quantify the difference between eigenvectors of the
Google matrix, we consider two vectors $\phi$ and $\psi$ of
components respectively $\phi_i$ and $\psi_i$, normalized in such a way that
$\sum_i|\phi_i|^2=\sum_i|\psi_i|^2=1$, and we introduce (following the usual definition from quantum mechanics) the fidelity
\begin{equation}
\label{fidelity}
F=|\sum_{i=1}^{N}\phi_i^*\psi_i|,
\end{equation}
where $*$ denotes complex conjugation. The fidelity is $F=1$ for two identical vectors, and $F=0$ for orthogonal ones. In Fig.~\ref{figfidelity} (top panel) we plot the fidelity of the 7 right eigenvectors of $G$ corresponding to the largest eigenvalues $1=\lambda_1>|\lambda_2|\geq|\lambda_3|\geq\ldots\geq |\lambda_7|$. It shows that the fidelity decreases much faster in the computer/human comparison than in the human/human and computer/computer subgroup comparisons, where it remains very close to 1 for the first 4 eigenvectors; remarkably, these eigenvectors thus depend only weakly on the choice of the data set but strongly on the nature of the players. Interestingly enough, in the computer/computer case there is also a drop-off starting from the fifth eigenvector, possibly due to inversions of nearly degenerate eigenvalues between two realizations of the subgroups.
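The fidelity (\ref{fidelity}) is a one-liner in practice. The following Python sketch is our own illustration (not the authors' code); it enforces the unit normalization before taking the overlap:

```python
import numpy as np

def fidelity(phi, psi):
    """Fidelity F = |sum_i conj(phi_i) * psi_i| for vectors normalized
    so that sum |phi_i|^2 = sum |psi_i|^2 = 1."""
    phi = np.asarray(phi, dtype=complex)
    psi = np.asarray(psi, dtype=complex)
    phi = phi / np.linalg.norm(phi)     # enforce unit normalization
    psi = psi / np.linalg.norm(psi)
    return float(abs(np.vdot(phi, psi)))  # vdot conjugates its first argument
```

Note that a global phase on either vector leaves $F$ unchanged, which is why $F$ is a sensible comparison of eigenvectors defined only up to phase.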
\begin{figure}
\begin{center}
\includegraphics*[width=0.95\linewidth]{fig5.pdf}
\end{center}
\caption{Top: Fidelity (\ref{fidelity}) for the 7 first right eigenvectors
(corresponding to the largest eigenvalues of $G$, including
the PageRank, for $\alpha=0.85$), for computer (Gnugo) and human networks, in decreasing order of the norm of the associated eigenvalue, for human-human (red diamonds), computer-computer (blue circles), computer-human (green triangles).
Each point is an average over $N_S$ different choices of pairs of groups.
Networks are built from groups of 4000 games (solid line, $N_S= 30$), 2000 games
(dashed line, $N_S=180$) and 1000 games (dotted lines, $N_S=840$) (for computer/human resp. $N_S=120, 480, 1920$). Standard deviation is comparable to symbol size for the 3 first eigenvectors, and is much larger for
subsequent ones. Middle and bottom: Ordered Vector Similarity (\ref{OVS}) and Non-ordered Vector Similarity (\ref{NVS}) respectively for the 7 first eigenvectors, same conventions and datasets as above. \label{figfidelity}}
\end{figure}
In order to compare more accurately the eigenvectors at the level of patterns, one can define quantities based on ranking vectors, in line with the ranking of nodes that can be obtained from the PageRank vector. For any eigenvector, we define a ranking vector $A=(a_i)_{1\leq i\leq N}$ with $1\leq a_i\leq N$, where nodes are ordered by decreasing values of the modulus of the components of the vector. We thus define the Ordered Vector Similarity $S_O$, which takes the value 1 if two ranking vectors are identical in their first 30 entries (this choice of cut-off is arbitrary but keeps only the most important nodes). Namely, if $A=(a_i)_{1\leq i\leq N}$ and $B=(b_i)_{1\leq i\leq N}$ are two ranking vectors, $S_O$ is defined through
\begin{equation}
\label{OVS}
S_O(A,B)=\sum_{i=1}^{30}\frac{f(i)}{30},
\qquad
f(i) = \left\{
\begin{array}{ll}
1 & \mbox{if } a_{i}=b_{i}\\
0 & \mbox{otherwise.}
\end{array}
\right.
\end{equation}
The similarity $S_O$ gives the proportion of moves which are exactly at the same rank in both ranking vectors within the first 30 entries. This quantity is shown in Fig.~\ref{figfidelity} (middle panel).
Again the data single out the computer/human similarity as being the weakest. However, the choice of the data set affects the results: the dependence on the number of games used to build the networks within each database is relatively large, which makes the results less statistically significant than for the fidelity. This is because some components of the vectors can have very similar values, so that a small perturbation can shuffle the ranking of components. To make this effect less important, we define a Non-ordered Vector Similarity $S_N$ for two ranking vectors $A$ and $B$ through a new similarity function $f_{bis}$:
\begin{equation}
f_{bis}(i) = \left\{
\begin{array}{ll}
1 & \mbox{if } \exists\, j \in [1;30] \mbox{ such that } a_{i}=b_{j}, \\
0 & \mbox{otherwise.}
\end{array}
\right.
\end{equation}
$S_N$ is thus defined as:
\begin{equation}
\label{NVS}
S_N(A,B)=\sum_{i=1}^{30}\frac{f_{bis}(i)}{30} .
\end{equation}
This quantity gives the proportion of moves common to both lists of the $30$ most important moves, irrespective of their exact rank in either vector.
The data for this quantity are displayed in Fig.~\ref{figfidelity} (bottom panel). The dispersion between different choices of subgroup sizes within the same database is now much smaller, and the results for the human vs computer case are clearly separated from those within each individual database.
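The two similarities (\ref{OVS}) and (\ref{NVS}) can be sketched directly from their definitions. The following Python snippet is our own illustration (function names are ours):

```python
def ordered_similarity(A, B, k=30):
    """S_O of Eq. (OVS): fraction of the first k ranks at which the two
    ranking vectors A and B carry exactly the same node."""
    return sum(a == b for a, b in zip(A[:k], B[:k])) / k

def nonordered_similarity(A, B, k=30):
    """S_N of Eq. (NVS): fraction of A's first k nodes that appear anywhere
    among B's first k nodes, irrespective of their exact rank."""
    top_b = set(B[:k])
    return sum(a in top_b for a in A[:k]) / k
```

As in the text, $S_N \geq S_O$ always, since an exact rank match is in particular a membership match; $S_N$ is therefore the more stable of the two under small perturbations of nearly equal components.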
\section{Towards a Turing test for go simulators}
Figures~\ref{figlinks}--\ref{figfidelity} illustrate the differences between networks built from computer-played games and those built from human-played games. These differences are relatively difficult to characterize at the level of the distributions of links of Fig.~\ref{figlinks}. On the other hand, Figs.~\ref{correlPR} and \ref{figfidelity} show that the eigenvectors associated with the largest eigenvalues allow one to distinguish more clearly between the types of players, with statistical differences
visibly stronger than those between networks built from different subgroups of the same database. This indicates that it may be possible to conceive an indicator differentiating a group of human-played games from a group of computer-played games, without any previous knowledge of the players. This would be similar to the famous Turing test of Artificial Intelligence, where a person tries to differentiate a human from a computer from answers to questions, without prior knowledge of which interlocutor is human. In our case, confronted with databases of games from both types of players, it could be possible to differentiate the human from the computer through statistical tests on the network. To construct such an indicator, we focus on the PageRank, which corresponds to the largest eigenvalue of the Google matrix, and use the three quantities that best distinguish the different types of players, namely
the fidelity $F$, the Non-ordered Vector Similarity $S_N$, and the dispersion $\sigma$. The first two quantities describe discrepancies mostly for the largest values of the PageRank, while the dispersion is dominated by intermediate values (see Fig.~\ref{correlPR}). In order to synthesize the results from these two kinds of quantities, we present in Fig.~\ref{figfidsig} the pairs $(F,\sigma)$ and the pairs $(S_N,\sigma)$ for PageRank vectors constructed from games played by humans or computers.
\begin{figure}
\begin{center}
\includegraphics*[width=0.99\linewidth]{fig6.pdf}
\end{center}
\caption{Fidelity (\ref{fidelity}) (top) and Non-ordered Vector Similarity (\ref{NVS}) (bottom) of pairs of PageRanks as a function of the dispersion $\sigma$ of (\ref{sigmadef}), for databases of human players and three computer programs: Gnugo, Fuego and AlphaGo. We use $\alpha=0.85$. For each case, one PageRank corresponds to an 8000-game network, and the other one to several choices of networks built from smaller samples; empty symbols correspond to averages over 1000-game networks, checkerboard ones to 2000-game networks, filled ones to 4000-game networks. Averages are made respectively over 240 instances, 120 instances, 60 instances. Error bars are standard deviations of these averages. For AlphaGo, a 50-game network was used. \label{figfidsig}}
\end{figure}
The data displayed in Fig.~\ref{figfidsig} show that there is some variability of these quantities if subgroups from the same databases are compared, indicated by the error bars. However, the difference between the computer- and
human-generated networks is much larger than this variability, indicating that there is a statistically significant difference between them. Interestingly enough, it is also possible to distinguish between the different types of algorithms used in the computer games:
differences between Fuego and Gnugo are larger than the variability, both when each is compared to humans and when they are compared to each other. We have included the result obtained for games played by AlphaGo, based on the small 50-game database available \cite{alphabase}; despite the smallness of the database, the points obtained also seem to be statistically well separated from the human ones.
\section{Conclusion}
Our results show that the networks built from computer-played games and from human-played games display statistically significant differences in several respects: in the spectrum of the Google matrix, in the PageRank vector, and in the first eigenvectors of the matrix. There are also differences between the different types of algorithms which can be detected statistically, from the deterministic one to the Monte Carlo one and even (although the database is smaller) the recent AlphaGo. In general, the computer has a tendency to play using a more varied set of most played moves, but with more correlations between different games for the deterministic program (Gnugo) and fewer for the stochastic one (Fuego).
These statistical differences could be used to devise a Turing test for go simulators, enabling one to differentiate between the human and the computer player. Interestingly enough, reaching statistical significance does not seem to require very large databases. We note that a manifestation of these differences was observed during the games played by AlphaGo against world champions in 2016 and 2017: the computer program used very surprising strategies that were difficult to understand for the human analysts following the games.
The results of this study show that computer programs simulating complex human activities proceed in a different way from human beings, with characteristics that can be detected with statistical significance using the tools of network theory. It would be very interesting to probe other complex human activities with these tools, to determine whether the differences between humans and computers can be quantified statistically, and to deduce from them the fundamental differences between human information processing and computer programming.
\acknowledgments We thank Vivek Kandiah for help with the computer programming and scientific discussions. We thank Calcul en Midi-Pyr\'en\'ees (CalMiP) for access to its supercomputers. OG thanks the LPT Toulouse for hospitality.
\section{Introduction}
Number theorists are often concerned with integer powers, with Fermat's
``last theorem'' and Waring's problem being the two most prominent
examples.
Another classic problem from number theory is the
{\it Nagell-Ljunggren problem}: for which integers
$n, q \geq 2$ does the Diophantine equation
\begin{equation}
y^q = {{b^n - 1} \over {b-1}}
\label{nle}
\end{equation}
have positive integer solutions $(y,b)$?
See, for example,
\cite{Nagell:1920,Nagell:1921,Ljunggren:1943a,Ljunggren:1943b,Oblath:1956,Shorey:1986,Le:1994,Hirata-Kohno&Shorey:1997,Bugeaud&Mignotte:1999a,Bugeaud&Mignotte:1999b,Bugeaud&Mignotte&Roy&Shorey:1999,Bugeaud&Mignotte&Roy:2000,Shorey:2000,Bennett:2001,Bugeaud&Hanrot&Mignotte:2002,Bugeaud:2002,Bugeaud&Mignotte:2002,Mihailescu:2007,Bugeaud&Mihailescu:2007,Mihailescu:2008,Browkin:2008,Kihel:2009,Laishram&Shorey:2012,Li&Li:2014,Bennett&Levin:2015}.
On the other hand, in combinatorics on words, repetitions of
strings play a large
role (e.g., \cite{Thue:1906,Thue:1912,Berstel:1995}). If $w$ is a word (i.e., a
string or block of symbols chosen from a finite alphabet $\Sigma$), then
by $w \uparrow n$ we mean the concatenation
$\overbrace{ww\cdots w}^n$. (This is ordinarily written $w^n$, but we have
chosen a different notation to avoid any possible confusion with the power of
an integer.) For example,
${\tt (mur)}\uparrow 2 = {\tt murmur}$.
In this paper we combine both these definitions of powers and examine
the consequences.
In terms of the base-$b$ representation of both sides,
the Nagell-Ljunggren equation \eqref{nle} can be viewed as asking
when a power of an integer has base-$b$ representation
of the form $1\uparrow n$ for some integer $n \geq 2$; such a number
is sometimes called a ``repunit'' \cite{Yates:1978}.
An obvious generalization is to consider those powers of integers
with base-$b$ representation $a \uparrow n$ for a single digit $a$;
such a number is sometimes called a ``repdigit'' \cite{Broughan:2012}.
This suggests an obvious further generalization of \eqref{nle}:
when does the power of an integer have a base-$b$
representation of the form $w\uparrow n$ for some $n \geq 2$ and some
arbitrary word $w$ (of some given nonzero length $\ell$)?
In this paper we investigate this problem.
\begin{remark}
A related topic, which we do not examine here, is integer powers that
have base-$b$ representations that are palindromes. See, for
example, \cite{Korec:1991,Hernandez&Luca:2006,Cilleruelo&Luca&Shparlinski:2009}.
\end{remark}
We introduce some notation. Let $\Sigma_b = \{ 0,1,\ldots, b-1 \}$.
Let $b \geq 2$ be an
integer. For an integer $n \geq 0$, we let
$(n)_b$ represent the canonical representation of $n$ in base $b$ (that is,
the one having no leading zeroes).
For a word $w = a_1 a_2 \cdots a_n \in \Sigma_b^n$ we define
$[w]_b$ to be $\sum_{1 \leq i \leq n} a_i b^{n-i}$, the value of the
word $w$ interpreted as an integer in base $b$, and
we define $|w|$ to be the length of the word $w$
(number of alphabet symbols in it).
Using this notation, we can express the class of equations we are
interested in: they are of the form
\begin{equation}
(y^q)_b = w\uparrow n ,
\label{first}
\end{equation}
where $y, q, b, n \geq 2$ and $w \in \Sigma_b^*$.
Here we are thinking of $q$ and $n$ as given, and our goal is to
determine for which $b$ there exist
solutions $y$ and $w$. Furthermore, we may classify
solutions $w$ according to their length $\ell = |w|$.
Alternatively, we can ask about the solutions to the equation
\begin{equation}
y^q = c {{b^{n\ell} - 1} \over {b^\ell - 1}} ,
\label{equiva}
\end{equation}
with $b^{\ell-1} \leq c < b^\ell$. The correspondence of this equation with
Eq.~\eqref{first} is that $w = (c)_b$.
The inequality $b^{\ell-1} \leq c < b^\ell$ guarantees that
the base-$b$ representation of $y^q$ is indeed an
$\ell$-digit string that does not start with the digit $0$.
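The correspondence between Eq.~\eqref{first} and Eq.~\eqref{equiva} is easy to check numerically. The following Python sketch (our own illustration; all function names are ours) brute-forces small solutions for given $(q,n,\ell)$ and verifies the digit-repetition structure:

```python
def to_base(m, b):
    """Canonical base-b digits of m, most significant first, i.e. (m)_b."""
    digits = []
    while m:
        m, r = divmod(m, b)
        digits.append(r)
    return digits[::-1] or [0]

def iroot(m, q):
    """Largest integer y with y**q <= m (binary search)."""
    hi = 1
    while hi ** q <= m:
        hi *= 2
    lo = hi // 2
    while lo < hi:
        mid = (lo + hi + 1) // 2
        if mid ** q <= m:
            lo = mid
        else:
            hi = mid - 1
    return lo

def solutions(q, n, ell, b_max):
    """Triples (b, y, c) with y^q = c (b^{n ell} - 1)/(b^ell - 1) and
    b^{ell-1} <= c < b^ell, i.e. (y^q)_b = w repeated n times, w = (c)_b."""
    found = []
    for b in range(2, b_max + 1):
        rep = (b ** (n * ell) - 1) // (b ** ell - 1)
        for c in range(b ** (ell - 1), b ** ell):
            m = c * rep
            y = iroot(m, q)
            if y ** q == m:
                found.append((b, y, c))
    return found

sols = solutions(2, 2, 1, 10)
```

For $(q,n,\ell)=(2,2,1)$ and $b\leq 10$ this finds, for instance, $(b,y,c)=(3,2,1)$, since $(4)_3=11$, and $(8,6,4)$, since $(36)_8=44$.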
Our results can be summarized as follows. We call a triple of
integers $(q,n, \ell)$ for $q, n \geq 2$ and $\ell \geq 1$
{\it admissible} if either
\begin{itemize}
\item $(q,n) = (2,2)$,
\item $(n,\ell) = (2,1)$, or
\item $(q,n,\ell) \in \{ (2,3,1),(2,3,2),(3,2,2),(3,2,3),(3,3,1),(2,4,1),(4,2,2) \}$.
\end{itemize}
Otherwise $(q,n,\ell)$ is {\it inadmissible}.
Here is our main result:
\begin{theorem}
\leavevmode
\begin{itemize}
\item[(a)]
Assuming the $abc$ conjecture, there are only finitely many solutions
$(q,n,\ell,b,y,c)$ to \eqref{equiva} such that the triple $(q,n,\ell)$
is inadmissible.
\item[(b)]
For each admissible triple $(q,n,\ell)$, there are infinitely many solutions
$(b,y)$ to the equation $(y^q)_b = w \uparrow n$ for $|w| = \ell$.
\end{itemize}
\label{main-thm}
\end{theorem}
In Section~\ref{abc-section} we prove (a) (as Theorem~\ref{abc}) and in
Section~\ref{admiss-section} we prove (b).
One appealing distinction
between the Nagell-Ljunggren problem and the variant considered here is
that, for fixed $n$ and $q$, finding solutions to the classical
Eq.~\eqref{nle} amounts to finding the integral points on a single
affine curve. Provided that $(q,n) \notin \{ (2,2), (2,3), (3,2) \}$,
the genus of this curve is positive, so Siegel's theorem implies that
it has only finitely many integer points. On the other hand, in the
variant considered here, for fixed $n$, $q$, and $\ell$, finding
solutions to Eq.~\eqref{equiva} amounts to finding integral points of controlled
height on a family of twists of a single curve, which is well known to
be a hard problem. Moreover, there is an established literature of
using the $abc$ conjecture to attack such problems; for example, see
\cite{Granville:2007}.
We comment briefly on our representation of words. In some cases,
particularly if $b \leq 10$, we write a word as a concatenation of digits.
For example, $1234$ is a word of length $4$. However, if $b > 10$, this
becomes infeasible. Therefore, for $b > 10$, we write
a word using parentheses and commas. For example,
$(11,12,13,14)$ is a word of length $4$ representing
$40034$ in base $15$.
\section{Implications of the $abc$ conjecture}
\label{abc-section}
Let $\rad(n) = \prod_{p|n} p$ be the radical function, the product
of distinct primes dividing $n$.
We recall the $abc$ conjecture of Masser and Oesterl\'e
\cite{Masser:1985,Oesterle:1988}, as follows
(see, e.g., \cite{Stewart&Tijdeman:1986,Nitaj:1996,Browkin:2000,Granville&Tucker:2002,Robert&Stewart&Tenenbaum:2014}):
\begin{conjecture}
For all $\epsilon>0$, there exists a constant $C_\epsilon$ such that for all $a,b,c\in{\mathbb{Z}}^+$ with $a+b=c$ and $\gcd(a,b)=1$, we have
$$c\leq C_\epsilon (\rad (abc))^{1+\epsilon}.$$
\end{conjecture}
We will need the following technical lemma. Its purpose will become clear in the proof of Theorem \ref{abc}. The proof is a straightforward manipulation of inequalities, but we include it for the sake of completeness.
\begin{lemma}\label{F positive}
Suppose that $q,n,\ell$ are positive integers with $q\geq 2$, $n\geq 2$, and $\ell\geq 1$. Further suppose that $(q,n,\ell)$ is not an admissible triple. Define
$$F(q,n,\ell)=\frac{24}{25}n\ell-1-\frac{n\ell}{q}-\ell.$$
Then $F(q,n,\ell)>0$.
\end{lemma}
\begin{proof}
Rearranging, we see that $F(q,n,\ell)>0$ if and only if
\[
n>T(q,\ell):=\frac{\ell q+q}{\frac{24}{25}\ell q -\ell}= 25\left(\frac{\ell+1}{\ell}\right)\left(\frac{q}{24 q-25}\right).
\]
Both factors on the right are positive and decreasing, so $T$ is decreasing in $\ell$ for fixed $q$, and decreasing in $q$ for fixed $\ell$. Explicit values include
\[
T(2,1)=\tfrac{100}{23}<5,\quad
T(3,1)=\tfrac{150}{47}<4,\quad
T(4,1)=\tfrac{200}{71}<3,\quad
T(2,2)=\tfrac{75}{23}<4,\quad
T(2,3)=\tfrac{200}{69}<3,
\]
\[
T(3,2)=\tfrac{225}{94}<3,\quad
T(3,4)=\tfrac{375}{188}<2,\quad
T(4,3)=\tfrac{400}{213}<2,\quad
T(5,2)=\tfrac{375}{190}<2.
\]
Now let $(q,n,\ell)$ be inadmissible; by the monotonicity of $T$ it suffices to bound $n$ below by a suitable value of $T$.

First suppose $\ell=1$. Then inadmissibility forces $n\geq 3$ (as $(q,2,1)$ is always admissible); moreover $n\geq 5$ if $q=2$ and $n\geq 4$ if $q=3$, since $(2,3,1)$, $(2,4,1)$ and $(3,3,1)$ are admissible. Hence $n\geq 5>T(2,1)$ if $q=2$, $n\geq 4>T(3,1)$ if $q=3$, and $n\geq 3>T(4,1)\geq T(q,1)$ if $q\geq 4$.

Next suppose $\ell\geq 2$ and $q=2$. Then inadmissibility forces $n\geq 3$, and even $n\geq 4$ when $\ell=2$ (as $(2,3,2)$ is admissible). Hence $n\geq 4>T(2,2)$ if $\ell=2$, and $n\geq 3>T(2,3)\geq T(2,\ell)$ if $\ell\geq 3$.

Finally suppose $\ell\geq 2$ and $q\geq 3$. If $n\geq 3$, then $n\geq 3>T(3,2)\geq T(q,\ell)$. If $n=2$, inadmissibility excludes $(q,\ell)\in\{(3,2),(3,3),(4,2)\}$, so $(q,\ell)$ dominates componentwise one of $(3,4)$, $(4,3)$, $(5,2)$; hence $n=2>\max\left(T(3,4),T(4,3),T(5,2)\right)\geq T(q,\ell)$.

In every case $n>T(q,\ell)$, and therefore $F(q,n,\ell)>0$.
\end{proof}
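The sign pattern asserted by the lemma, together with the complementary fact (used below) that $F<0$ on admissible triples, can also be checked mechanically. The following Python sketch in exact rational arithmetic is our own verification aid, not part of the proof:

```python
from fractions import Fraction

def F(q, n, ell):
    """F(q, n, ell) = (24/25) n ell - 1 - n ell / q - ell, exactly."""
    return Fraction(24, 25) * n * ell - 1 - Fraction(n * ell, q) - ell

def admissible(q, n, ell):
    """Admissible triples as defined in the main text."""
    return ((q, n) == (2, 2) or (n, ell) == (2, 1) or
            (q, n, ell) in {(2, 3, 1), (2, 3, 2), (3, 2, 2), (3, 2, 3),
                            (3, 3, 1), (2, 4, 1), (4, 2, 2)})

# The sign of F separates admissible from inadmissible triples in this box.
checked = all((F(q, n, ell) > 0) == (not admissible(q, n, ell))
              for q in range(2, 20)
              for n in range(2, 20)
              for ell in range(1, 20))
```

The tightest inadmissible case in this box is $(q,n,\ell)=(3,2,4)$, where $F=\tfrac{1}{75}$.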
\begin{theorem}
Assume the $abc$ conjecture.
There are only finitely many solutions $(q,n,\ell,b,y,c)$ to the
generalized Nagell-Ljunggren equation such that $(q,n,\ell)$ is an inadmissible triple.
\label{abc}
\end{theorem}
\begin{proof}
The equation $(y^q)_b=w\uparrow n$ can be written
\[
y^q = c\left(\frac{b^{n\ell}-1}{b^\ell-1}\right)
\]
for $c\in{\mathbb{Z}}$ such that $(c)_b=w$.
Note that $c\leq b^\ell-1$, so $y< b^{n\ell/q}$.
Suppose $p$ is a prime that divides $\frac{b^{n\ell}-1}{b^\ell-1}$. Then $p$ divides $y^q$, and thus $p^q$ divides $y^q$. Therefore
\[
y^q\geq \left(\rad\left(\frac{b^{n\ell}-1}{b^\ell-1}\right)\right)^q\geq
\frac{(\rad(b^{n\ell}-1))^q}{(\rad(b^\ell-1))^q},
\]
where we have used the obvious inequality $\rad(a/b)\geq \rad(a)/\rad(b)$. So
\[
\rad(b^{n\ell}-1)\leq y\rad(b^\ell-1)< yb^\ell<b^{n\ell/q+\ell},
\]
using $y< b^{n\ell/q}$ and $\rad(b^\ell-1)<b^\ell$.
Now consider the equation
\[
(b^{n\ell}-1)+1=b^{n\ell}.
\]
By the $abc$ conjecture, for all $\epsilon>0$, there is some positive constant $C_\epsilon$ such that
\[
b^{n\ell}\leq C_\epsilon (\rad((b^{n\ell}-1)(1)b^{n\ell}))^{1+\epsilon}\leq
C_\epsilon(b\rad(b^{n\ell}-1))^{1+\epsilon},
\]
using that $b^{n\ell}$ and $b^{n\ell}-1$ are coprime and that $\rad(b^{n\ell})=\rad(b)\leq b$. We rewrite this inequality as
\[
b^{n\ell/(1+\epsilon)-1}\leq C_\epsilon^{1/(1+\epsilon)}\rad(b^{n\ell}-1).
\]
Set $C_\epsilon'=C_\epsilon^{1/(1+\epsilon)}$.
Combining the upper and lower bounds on $\rad(b^{n\ell}-1)$, we get
\[
b^{n\ell/(1+\epsilon)-1}\leq C_\epsilon' b^{n\ell/q+\ell}.
\]
Rearranging this, we have
\[
b^{n\ell /(1+\epsilon)-1-n\ell/q-\ell}\leq C_\epsilon',
\]
or equivalently,
\begin{equation}\label{fundamental inequality}
\frac{n\ell}{(1+\epsilon)}-1-\frac{n\ell}{q}-\ell \leq\frac{\log(C_\epsilon')}{\log(b)}.
\end{equation}
Recall that $y<b^{n\ell/q}$, or equivalently
\[
\frac{1}{\log(b)}<\frac{n\ell}{q\log(y)}.
\]
Therefore
\begin{equation}\label{inequality 2}
\frac{n\ell}{(1+\epsilon)}-1- \frac{n\ell}{q} -\ell \leq \log(C_\epsilon')\frac{n\ell}{q \log(y)}.
\end{equation}
In order for the triple $(q,n,\ell)$ to give rise to a solution of $(y^q)_b=w\uparrow n$,
it is necessary that inequalities (\ref{fundamental inequality}) and (\ref{inequality 2}) are
both satisfied. This puts restrictions on $b$ and $y$, respectively.
From this point forward, fix $\epsilon=\frac{1}{24}$. (Any fixed choice of $\epsilon<\frac{1}{23}$ would work for our purposes.) Let
\[
F(q,n,\ell)=\frac{n\ell}{(1+\epsilon)}-1-\frac{n\ell}{q}-\ell .
\]
It is easy to see that $F$ is increasing in $q$. We will soon see
that $F$ is also increasing in $n$ and $\ell$ when $(q,n,\ell)$ is inadmissible.
It can be verified by an explicit calculation that $F(q,n,\ell)<0$ for all admissible triples $(q,n,\ell)$, including the infinite families with $(q,n)=(2,2)$ and $\ell$ arbitrary or $(n,\ell)=(2,1)$ and $q$ arbitrary. By Lemma \ref{F positive}, for every inadmissible triple $(q,n,\ell)$ we have $F(q,n,\ell)>0$, so there are only finitely many $b$ that satisfy inequality (\ref{fundamental inequality}). We will show that for large values of $n$ or $\ell$, no bases $b\geq 2$ satisfy (\ref{fundamental inequality}), and for large values of $q$, no $y\geq 2$ satisfy (\ref{inequality 2}) (clearly $y=1$ never gives a solution). Therefore, conditional on the $abc$ conjecture, there are only finitely many solutions to the generalized Nagell-Ljunggren equation that come from inadmissible parameters.
First we consider large values of $n$ or $\ell$ by computing lower bounds on the partial derivatives of $F$.
Assume that $(q,n,\ell)$ is not admissible,
and therefore either $n\geq 3$ and $q\geq 2$ or
$n\geq 2$ and $q\geq 3$. Then we have lower bounds on the partial derivatives as follows:
\begin{align*}
\frac{\partial F}{\partial n} & = \ell\left(\frac{1}{1+\epsilon} - \frac{1}{q}\right)\geq
1\left(\frac{1}{1+\epsilon} - \frac{1}{2}\right)= \frac{23}{50}\\
\frac{\partial F}{\partial \ell} & = n\left(\frac{1}{1+\epsilon}-\frac{1}{q}\right)-1\geq
\min\left(\frac{19}{75},\frac{19}{50}\right)=\frac{19}{75}
\end{align*}
If $n\geq 5$, then we have
\[
F(q,n,\ell)\geq F(2,n,1)\geq\frac{23}{50}(n-5)+F(2,5,1)>\frac{23}{50}(n-5).
\]
If $\ell\geq 5$, we have
\begin{align*}
F(q,n,\ell) & \geq \min(F(3,2,\ell),F(2,3,\ell))\\
& \geq\min\left((\ell-4)\frac{19}{75}+F(3,2,4),(\ell-5)\frac{19}{75}+F(2,3,3) \right) \\
& > \frac{19}{75}(\ell-5).
\end{align*}
Importantly, in the above calculations we have used both that $F$ is positive at the anchor triples $(2,5,1)$, $(3,2,4)$, and $(2,3,3)$, and that $F$ is increasing in $q$, $n$, and $\ell$ along inadmissible triples. So $F(q,n,\ell)\to\infty$ as either $n\to\infty$ or $\ell\to\infty$. Thus for large values of either $n$ or $\ell$,
inequality (\ref{fundamental inequality}) is not satisfied for any $b\geq 2$,
and there are no solutions to $(y^q)_b=w\uparrow n$.
It remains to show that large values of $q$ cannot be used in solutions.
First we rewrite inequality (\ref{inequality 2}) as
\[
q\left(\frac{1}{1+\epsilon}-\frac{1}{n\ell}-\frac{1}{n}\right) \leq \frac{\log(C_\epsilon')}{\log(y)}+1.
\]
If $(q,n,\ell)$ is inadmissible, then either $n\geq 3$ and $\ell\geq 1$ or $n\geq 2$ and $\ell\geq 2$. So
\[
\frac{1}{n\ell}+\frac{1}{n} = \frac{1}{n}\left(1+\frac{1}{\ell}\right)\leq \frac{3}{4}
\]
and
\[
\frac{21q}{100}=q\left(\frac{24}{25}-\frac{3}{4}\right)\leq \frac{\log(C_\epsilon')}{\log(y)}+1\leq\frac{\log(C_\epsilon')}{\log(2)}+1,
\]
where we have replaced $y$ with $2$, which is the smallest value of $y$ that can be used in a solution. So for inadmissible triples $(q,n,\ell)$ with large values of $q$, inequality (\ref{inequality 2}) is not satisfied, and there are no solutions.
We have shown that there are only finitely many inadmissible triples that admit any solutions. By Lemma \ref{F positive} and inequality (\ref{fundamental inequality}), there are only finitely many bases $b$ that can appear in a solution corresponding to each such triple, and thus only finitely many solutions for each such triple. So the set of all inadmissible triples contributes in total only finitely many solutions.
\end{proof}
\begin{remark}
Shinichi Mochizuki, in a series of papers released in 2016, has
recently claimed a proof of the $abc$ conjecture. If the proof is ultimately
verified, then Theorem~\ref{abc} will hold unconditionally.
\end{remark}
\section{Admissible triples}
\label{admiss-section}
In this section we examine each admissible triple and prove there are
infinitely many solutions.
\subsection{The case $(q,n) = (2,2)$}
\begin{theorem}
For each length $\ell \geq 1$, there are infinitely
many $b \geq 2$ such that the equation
$(y^2)_b = w \uparrow 2$ has a solution with $|w| = \ell$.
\label{c22}
\end{theorem}
We need a lemma.
\begin{lemma}
For each integer $t \geq 0$ there exist infinitely
many integer pairs $(p,b)$ where $p \geq 2$ is prime and
$b \geq 2$ such that
$b^{2^t} \equiv \modd{-1} {p^2}$. Furthermore, among
these pairs there are infinitely many distinct $b$.
\label{pbl}
\end{lemma}
\begin{proof}
By Dirichlet's theorem on primes in arithmetic
progressions, there are infinitely many primes
$p \equiv \modd{1} {2^{t+1}}$. The multiplicative group $G$ of units
modulo $p^2$ is cyclic, of order $p(p-1)$.
Since $2^{t+1} { \, | \,} p-1$, there is an element
$b$ of order $2^{t+1}$ in $G$. For this element
$b$ we have $b^{2^t} \equiv \modd{-1} {p^2}$.
To prove the last claim, note that for each fixed $t$
and fixed $b$ there are only finitely many prime divisors
of $b^{2^t} + 1$. If there were only finitely many distinct
$b$ among those pairs $(p,b)$ with
$b^{2^t} \equiv \modd{-1} {p^2}$, then there would only
be, in total, finitely many pairs $(p,b)$, contradicting
what we just proved.
\end{proof}
Now we can prove Theorem~\ref{c22}.
\begin{proof}
Let $\ell = r \cdot 2^t$, where $r$ is odd.
By Lemma~\ref{pbl} we know there exist infinitely
many $p$ and $b$ such that $b^{2^t} \equiv
\modd{-1} {p^2}$. Then
$b^\ell = b^{r \cdot 2^t} \equiv \modd{-1} {p^2}$.
Now write $b^\ell + 1 = m p^2$. Then
$m (b^\ell +1) = m^2 p^2$. Choose
$v = \lceil {p \over {\sqrt{b}}} \rceil$.
Then
$$ {p \over {\sqrt{b}}} \leq v \leq {p \over {\sqrt{b}}} + 1,$$
so
$$ {{p^2} \over b} m \leq m v^2 \leq m \left({p \over {\sqrt{b}}} + 1 \right)^2 .$$
Hence
$$mv^2 \geq mp^2/b = {{{b^\ell} + 1} \over b} \geq b^{\ell - 1}.$$
Similarly, since $b \geq 2$, if $p \geq 5$ then
${p \over {\sqrt{b}}} + 1 \leq {p \over {\sqrt{2}}} + 1 \leq {p \over {1.1}},$
so
$$ m v^2 \leq m \left({p \over {\sqrt{b}}} + 1 \right)^2
\leq m \left( {p \over {1.1}} \right)^2 \leq mp^2 - 1.$$
Then $(mvp)^2 = (mv^2) (b^\ell + 1)$. The inequalities obtained
above imply that $mv^2$ in base $b$ is an $\ell$-digit number, so
the base-$b$ representation of
$(mvp)^2$ consists of two copies of $(mv^2)_b$, as desired.
From the second part of the Lemma, we get that there
are infinitely many $b$ corresponding to each length $\ell$.
\end{proof}
\begin{example}
Take $\ell = 12$. Then $r = 3$ and $t = 2$. If
$b = 110$ and $p = 17$, then $b^4 \equiv \modd{-1} {17^2}$.
Write $b^\ell + 1 = m \cdot p^2$, where $m = 10859613760280276816609$.
Let $v = \lceil {p \over {\sqrt{b}}} \rceil = 2$.
Then $mvp = 369226867849529411764706$ and
$((mvp)^2)_{b} = w\uparrow 2$
where $$ w = [1, 57, 52, 15, 108, 52, 57, 94, 1, 57, 52, 16].$$
\end{example}
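The arithmetic in this example can be checked mechanically; the following Python sketch, with the values of the example hard-coded, verifies each step of the construction.

```python
# Mechanical check of the example (l = 12, b = 110, p = 17, t = 2).
from math import ceil

b, p, ell, t = 110, 17, 12, 2
assert pow(b, 2**t, p * p) == p * p - 1     # b^(2^t) = -1 (mod p^2)

m, rem = divmod(b**ell + 1, p * p)
assert rem == 0                             # b^l + 1 = m p^2
assert m == 10859613760280276816609

v = ceil(p / b**0.5)                        # v = ceil(p / sqrt(b))
assert v == 2

y = m * v * p
assert y == 369226867849529411764706
assert y * y == (m * v * v) * (b**ell + 1)  # (mvp)^2 = (mv^2)(b^l + 1)

# base-b digits of y^2, most significant first: a repetition w w
digits, n = [], y * y
while n:
    n, r = divmod(n, b)
    digits.append(r)
digits.reverse()
assert len(digits) == 2 * ell and digits[:ell] == digits[ell:]
print(digits[:ell])
```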
We now examine this case from a different angle, considering
$b$ to be fixed and examining for which pairs $(y,w)$ there
are solutions to $(y^2)_b = w\uparrow 2$.
\begin{theorem}
For each base $b\geq 2$, the equation
$$ (y^2)_b = w\uparrow 2$$
has infinitely many solutions $(y,w)$.
\label{baseb}
\end{theorem}
First, we need a lemma:
\begin{lemma}
For all integers $b \geq 2$, there exists a prime $p \geq 5$ such
that $b$ has even order in the multiplicative group of
integers modulo $p^2$.
\label{p2}
\end{lemma}
\begin{proof}
First observe that if $b$ has even order mod $p$, then it must have
even order mod $p^2$.
If there is some prime $p\geq 5$ that divides $b+1$, then $p$ cannot
also divide $b-1$. Then $b^2\equiv 1\pmod{p}$ and $b\not\equiv
1\pmod{p}$, so $b$ has order 2 mod $p$, and we are done. Therefore it
suffices to prove the Lemma in the case that the primes dividing $b+1$
are a subset of $\{2,3\}$.
We aim to show that there is some prime $p\geq 5$ that divides $b^2+1$.
Then $p$ cannot also divide $b^2-1$, and so
\[b^4-1=(b^2+1)(b^2-1)\equiv 0\pmod{p}\] and $b$ has order 4 mod $p$.
Assume, to get a contradiction, that the only possible prime factors of
$b^2+1$ are $2$ and $3$.
By the Euclidean algorithm, $\gcd(b^2+1,b+1)= \gcd(1-b,b+1)=
\gcd(2,b+1)$, so $2$ is the only possible common prime divisor of both
$b^2+1$ and $b+1$. In particular, it is not possible that both numbers
are divisible by $3$. Therefore one of $b+1$ or $b^2+1$ is a power of $2$;
thus $b$ is odd, and $\gcd(b+1,b^2+1)=2$. This leaves two possibilities:
either $b+1=2^n$ and $b^2+1=2 \cdot 3^m$, or $b+1=2 \cdot 3^m $
and $b^2+1=2^n$,
for some positive integers $n,m$.
If $b+1=2^n$, then $b+1\equiv 1$ or $2\pmod{3}$, so $b\equiv 0$ or
$1\pmod{3}$. But then $b^2+1$ cannot be divisible by 3. So instead we
must have $b+1=2 \cdot 3^m$, and
\[2^n=b^2+1=(2 \cdot 3^m-1)^2+1=2(2 \cdot 3^{2m}-2 \cdot 3^m+1).\] So $2^n$ is twice
an odd number, and $n=1$. But then $b=1$, which is a contradiction.
\end{proof}
We can now prove Theorem~\ref{baseb}:
\begin{proof}
Fix $b$, and
let $p \geq 5$ be a prime satisfying the conclusion of Lemma~\ref{p2}.
Let the order of $b$, modulo $p^2$, be $e' = 2e$ for some
integers $e', e \geq 1$.
First, we claim that for all $b \geq 2$ such a $p$ can be chosen
such that there is an integer $t$
with
\begin{equation}
b^{-1/4} \sqrt{p} < t < \sqrt{9p/10} .
\label{ineq1}
\end{equation}
If $b \geq 16$, then the open interval
$(b^{-1/4} \sqrt{p}, \sqrt{9p/10})$ has length $> 1$ if $p \geq 5$, and
hence contains an integer.
If $2 \leq b < 16$, we can use the $t$ and $p$ in the table below:
\begin{table}[H]
\begin{center}
\begin{tabular}{ccc}
$b$ & $p$ & $t$ \\
\hline
2& 5& 2 \\
3& 5& 2 \\
4& 5& 2 \\
5& 7& 2 \\
6& 7& 2 \\
7& 5& 2 \\
8& 5& 2 \\
9& 5& 2 \\
10& 7& 2\\
11&13& 3\\
12& 5& 2\\
13& 5& 2\\
14& 5& 2\\
15&13& 3\\
\end{tabular}
\end{center}
\end{table}
Hence, from \eqref{ineq1} we get
$$ p^2/b <t^4 < .81 p^2$$
and so
$$ 1/b < t^4/p^2 < .81. $$
Now consider $z = (t^4/p^2)(b^{re} + 1)$ for odd $r \geq 1$.
Since $b$ has order $2e$ (mod $p^2$), we must have
$b^e \equiv \modd{-1} {p^2}$. Then for odd $r \geq 1$ we have
$b^{re} \equiv \modd{-1} {p^2}$, and so
$z = {{t^4} \over {p^2}} (b^{re} + 1)$ is an integer.
From the previous paragraph we have
$$b^{re-1} < (t^4/p^2) b^{re} <
(t^4/p^2) (b^{re} + 1) = z,$$
and
$$ z = {{t^4}\over {p^2}} (b^{re}+1) < 0.81 (b^{re} + 1) < b^{re},$$
where the very last inequality holds provided $b^{re} \geq 5$.
If $b \geq 5$ this inequality holds for all $e$. For smaller
$b$, we can choose $e$ as follows to ensure $b^{re} \geq 5$:
\begin{itemize}
\item if $b = 2$ then $p = 5$ and $e = 10$;
\item if $b = 3$ then $p = 5$ and $e = 10$;
\item if $b = 4$ then $p = 5$ and $e = 5$.
\end{itemize}
It follows that the base-$b$ representation of $z$ has
exactly $re$ digits. Let $w = (z)_b$.
Finally, note that
$$ [ww]_b = {{t^4} \over {p^2}} (b^{re} + 1) (b^{re} + 1) =
({{t^2} \over p} (b^{re} + 1))^2 ,$$
so we can take $y = {{t^2} \over p} (b^{re} + 1)$.
\end{proof}
\begin{remark}
For $b = 2$ the solutions $y$ to the equation
$(y^2)_2 = w\uparrow 2$ are given by the sequence
$$6, 820, 104391567, 119304648, 858993460, 900719925474100, \ldots,$$
which is sequence \seqnum{A271637} in the
OEIS \cite{oeis}.
\end{remark}
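The first terms of this sequence are easy to reproduce by brute force, as in the following sketch.

```python
# Brute-force recovery of the first terms: y such that the binary
# expansion of y^2 is a square word w w.
def is_binary_square(n: int) -> bool:
    s = bin(n)[2:]
    h = len(s) // 2
    return len(s) % 2 == 0 and s[:h] == s[h:]

hits = [y for y in range(2, 1000) if is_binary_square(y * y)]
print(hits)        # [6, 820]
```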
\subsection{The case $(n,\ell) = (2,1)$}
This case, where $n = 2$ and $\ell = 1$, is the least interesting of
all the cases.
\begin{proposition}
The equation $(y^q)_b = w \uparrow 2$, $|w| = 1$, has infinitely
many solutions $b$ for each $q \geq 2$.
\end{proposition}
\begin{proof}
The equation can be rewritten as $y^q = c (b+1)$ for
$1 \leq c < b$. Given $q$, we can take $c = 1$, $y \geq 2$,
and $b = y^q - 1$.
\end{proof}
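A quick sketch confirms the construction for small $q$ and $y$:

```python
# With c = 1 and b = y^q - 1, we get y^q = b + 1, whose base-b
# representation is the one-digit repetition (1, 1).
for q in range(2, 6):
    for y in range(2, 6):
        b = y**q - 1
        n, d = y**q, []
        while n:
            n, r = divmod(n, b)
            d.append(r)
        assert d == [1, 1]
print("ok")
```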
\subsection{The case $(q,n,\ell) = (2,3,1)$}
In this section we show
\begin{theorem}
There are infinitely many bases $b$
for which the equation
$ (y^2)_b = w\uparrow 3$ has a solution with $|w|= 1$.
\end{theorem}
\begin{proof}
We want to show there are infinitely many positive integer solutions to
\begin{equation}\label{2,3,1}
y^2 = c(b^2 + b + 1)
\end{equation}
with $1\leq c < b$.
We show below that there are infinitely many
integral points on the affine curve defined by
\begin{equation}
3y^2 = x^2 + x + 1
\label{y3}
\end{equation}
with $x>0$. Taking such a point, we easily obtain a solution to \eqref{2,3,1} with $c=3$, namely $(3y)^2 = 3(x^2+x+1).$
We rewrite \eqref{y3} as a norm equation in the real
quadratic field $\mathbb{Q}(\sqrt{3})$. In particular, rearranging terms yields
\[
(2x+1)^2 - 12y^2 = -3,
\]
which is equivalent to $N((2x+1) + 2y\sqrt{3})=-3$, where $N$ is the
norm from ${\mathbb{Q}}(\sqrt{3})$ to ${\mathbb{Q}}$. Running this process in reverse, if $\alpha \in \mathbb{Q}(\sqrt{3})$ has norm $-3$ and can be written in the form $\alpha=a+b\sqrt{3}$ for positive integers $a,b$ with $a$ odd and $b$ even, then $x=(a-1)/2$, $y=b/2$ gives an integer point on $3y^2 = x^2+x+1$.
The unit group of $\mathbb{Z}[\sqrt{3}]$ (which is the ring of integers of ${\mathbb{Q}}(\sqrt{3})$) is
generated by $-1$ and the fundamental unit $u=2-\sqrt{3}$, which has $N(u)=1$. If $\alpha$ is any element of the desired form (e.g., $\alpha = 3 + 2\sqrt{3}$, of norm $9 - 12 = -3$), then $\alpha u^{2k} = a_k + b_k\sqrt{3}$ will also have norm $-3$. Moreover, \[u^2=7-4\sqrt{3}\equiv 1\pmod{2{\mathbb{Z}}[\sqrt{3}]},\]
so that $\alpha u^{2k} \equiv \alpha \pmod{2\mathbb{Z}[\sqrt{3}]}$. Thus, $a_k$ is odd and $b_k$ is even for every $k\in\mathbb{Z}$. This gives infinitely many integer solutions to Eq.~\eqref{y3}, which, multiplying $a_k$ and $b_k$ by $-1$ if necessary, we may assume to have $x>0$.
\end{proof}
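The proof translates into a short solution generator: starting from $3+2\sqrt{3}$ (of norm $-3$) and repeatedly multiplying by the norm-$1$ unit $7+4\sqrt{3} = (2+\sqrt{3})^2$, we obtain points on the curve, and each point with $x \geq 4$ yields the digit repetition $(3,3,3)$. A sketch:

```python
# Generate integral points on 3y^2 = x^2 + x + 1 from the norm equation,
# starting from 3 + 2*sqrt(3) (norm -3).
a, b = 3, 2
points = []
for _ in range(5):
    assert a * a - 3 * b * b == -3        # norm is -3
    assert a % 2 == 1 and b % 2 == 0      # a odd, b even
    x, y = (a - 1) // 2, b // 2
    assert 3 * y * y == x * x + x + 1     # point on the curve
    points.append((x, y))
    a, b = 7 * a + 12 * b, 4 * a + 7 * b  # multiply by 7 + 4*sqrt(3)

# each point with x >= 4 gives (3y)^2 = 3(x^2 + x + 1): digits (3,3,3)
for x, y in points[1:]:
    n, d = (3 * y) ** 2, []
    while n:
        n, r = divmod(n, x)
        d.append(r)
    assert d == [3, 3, 3]
print(points)
```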
\begin{remark}
A similar class of solutions can be found for any $c>0$ for which the real quadratic field $\mathbb{Q}(\sqrt{c})$ has an integral element of norm $-3$. Another such field is ${\mathbb{Q}}(\sqrt{7})$, for which the first associated solution is $(49^2)_{18} = [7,7,7]$.
\end{remark}
\subsection{The case $(q,n,\ell) = (2,3,2)$}
\begin{theorem}
There are infinitely many solutions to the case $(q,n,\ell) = (2,3,2)$.
\end{theorem}
\begin{proof}
We would like to find solutions to
\begin{equation}\label{2,3,2}
y^2 = c(b^4+b^2+1)
\end{equation}
in positive integers $b,y,c$ such that $b\leq c < b^2$. (Any integer
solution can be converted to one in positive integers by changing signs, so this restriction loses no generality.)
Notice that $x^4+x^2+1 = (x^2+x+1)(x^2-x+1)$, and suppose that $(x,y)$ is an integral point on the curve $3y^2=x^2+x+1$
such that $x^2-x+1$ is divisible by 49.
Let $c=\frac{3}{49}(x^2-x+1)$, so that $3c(x^2-x+1)$ is a square. We compute
\[
\left(\sqrt{3c(x^2-x+1)}y\right)^2 = 3c(x^2-x+1)y^2 = c(x^2-x+1)(x^2+x+1) = c(x^4+x^2+1),
\]
which gives a solution to Eq.~\eqref{2,3,2} with $b=x$, as long as $x\leq c<x^2$; this inequality holds provided that $x\geq 18$. We now produce infinitely many integral points on the curve $3y^2=x^2+x+1$ such that $49\mid x^2-x+1$; since all but finitely many of them have $x\geq 18$, the inequality is not an issue.
As in the (2,3,1) case, we rewrite $3y^2 = x^2 + x + 1$
as the norm equation \[N((2x+1) + 2y\sqrt{3})=-3,\] where $N$ is the norm from ${\mathbb{Q}}(\sqrt{3})$ to ${\mathbb{Q}}$.
Again as in the (2,3,1) case, if $\alpha\in\mathbb{Q}(\sqrt{3})$ has norm $-3$ and can be written as $\alpha=a+b\sqrt{3}$ for positive integers $a,b$ with $a$ odd and $b$ even, then $x=(a-1)/2$, $y=b/2$ gives
an integer point on $3y^2 = x^2+x+1$. Observe that the polynomial $x^2-x+1$ will be divisible by $49$ if and only if either
$x\equiv 19$ or $x\equiv 31\pmod{49}$. If $a=2x+1$, this occurs if and only if
either $a\equiv 39$ or $a\equiv 63\pmod{98}$.
Observe that $\alpha=627 + 362\sqrt{3}$ satisfies the desired congruence
conditions. Let $u=2-\sqrt{3}$ be the fundamental unit of ${\mathbb{Z}}[\sqrt{3}]$. As $u$ is a unit, some power $u^r$ is necessarily congruent to $1 \pmod{98 \mathbb{Z}[\sqrt{3}]}$; an explicit computation shows that $r=56$ works. Thus, setting $\alpha u^{56k} = a_k + b_k \sqrt{3}$, for every $k\in \mathbb{Z}$ we have $a_k \equiv 39 \pmod{98}$ and $b_k$ even. This produces infinitely many solutions to Eq.~\eqref{2,3,2}. The next one, with $k=1$, is
\begin{multline*}
(b,y,c)=(33519770429365238471302383574583401, \\
19352648480568478024495121554106701, \\
68790306712490710007811612444611710421528067927390557506093905927147).
\end{multline*}
\end{proof}
\subsection{The case $(q,n,\ell) = (3,2,2)$}
\begin{theorem}
There are infinitely many solutions to the case $(q,n,\ell)=(3,2,2)$.
\end{theorem}
\begin{proof}
The equation we want to solve in positive integers is
\begin{equation}\label{3,2,2}
y^3 = c(b^2+1)
\end{equation}
for $b\leq c < b^2$.
We begin by showing that there are infinitely many integral points $(x,y)$ on the curve
\begin{equation}\label{3,2,2 curve}
2y^2 = x^2 + 1,
\end{equation}
where without loss of generality, both $x$ and $y$ are positive. Starting with such a point $(x,y)$ and
rearranging Eq.~\eqref{3,2,2 curve}, we have $(2y)^3=4y(x^2+1)$, which gives a solution
to Eq.~\eqref{3,2,2} with $b=x$ and $c=4y$. As $y=\sqrt{(x^2+1)/2}$, certainly $c\geq x$. The upper bound
$c\leq x^2-1$ is equivalent to $2\sqrt{2(x^2+1)}\leq x^2-1$, which can be verified to hold for all $x\geq 4$,
so all but finitely many of the integral points we find will produce solutions with $c$ in the correct range.
Eq.~\eqref{3,2,2 curve} is easily seen to be equivalent to the norm equation
\[
N(x+y\sqrt{2})=-1
\]
where $N$ is the norm from ${\mathbb{Q}}(\sqrt{2})$ to ${\mathbb{Q}}$.
Let $u=1+\sqrt{2}$ be the fundamental unit of ${\mathbb{Q}}(\sqrt{2})$. Note that $N(u^k)=-1$ for
all odd integers $k$. If we let $u^k=a_k+b_k\sqrt{2}$ for integers $a_k,b_k$, then the
point $(a_k,b_k)$ is an integral point on Eq.~\eqref{3,2,2 curve}. So we have an infinite family of
such points, and thus infinitely many solutions to Eq.~\eqref{3,2,2}.
\end{proof}
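The family is easy to generate: the odd powers of $1+\sqrt{2}$ (each of norm $-1$) give the points $(x,y)$, and the trivial first point $(1,1)$ is skipped since $b = 1$ is not a valid base. A sketch:

```python
# Generate the (3,2,2) family: odd powers of 1 + sqrt(2) give points on
# 2y^2 = x^2 + 1; each with x >= 4 yields (2y)^3 = 4y(x^2 + 1).
x, y = 1, 1
sols = []
for _ in range(6):
    x, y = x + 2 * y, x + y          # multiply by 1 + sqrt(2) ...
    x, y = x + 2 * y, x + y          # ... twice, staying at odd powers
    assert 2 * y * y == x * x + 1
    b, c = x, 4 * y
    assert (2 * y) ** 3 == c * (b * b + 1)
    assert b <= c < b * b            # c has two digits in base b
    sols.append((x, y))
print(sols)
```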
\subsection{The case $(q,n,\ell) = (3,3,1)$}
\begin{theorem}
There are infinitely many solutions to the case $(q,n,\ell) = (3,3,1)$.
\end{theorem}
\begin{proof}
We show that the equation
\begin{equation}\label{(3,3,1) curve}
343y^2 = x^2+x+1
\end{equation}
has infinitely many solutions in integers with $x>y>0$.
It follows that the equation
\begin{equation}\label{3,3,1}
(7y)^3 =c(b^2+b+1)
\end{equation}
has infinitely many solutions with $c=y$, $b=x$, and $1\leq c<b$.
Completing the square on the RHS of Eq.~\eqref{(3,3,1) curve}, multiplying both sides by 4, and
rearranging, we obtain the equivalent equation
\[
(2x+1)^2 - 7(14y)^2 = -3.
\]
We write this as $N((2x+1) + 14y\sqrt{7}) = -3$, where $N$ is
the norm from ${\mathbb{Q}}(\sqrt{7})$ to ${\mathbb{Q}}$. Let $\alpha=a+b\sqrt{7}$ with positive integers $a,b$.
If $N(\alpha)=-3$, $a$ is odd, and $b$ is divisible by 14, then $x=(a-1)/2$ and $y=b/14$ yield
a solution to Eq.~\eqref{(3,3,1) curve}.
As in the previous theorems, we start with a single element with the desired properties, in this case $\alpha=37+14\sqrt{7}$ (of norm $37^2 - 7 \cdot 14^2 = -3$), and use the unit group to produce infinitely many. The fundamental unit
of ${\mathbb{Q}}(\sqrt{7})$ is $u=8-3\sqrt{7}$, which satisfies $u^{14} \equiv 1 \pmod{14 \mathbb{Z}[\sqrt{7}]}$. Thus, any of the elements $\alpha u^{14k}$ will be of the desired form, and there are infinitely many solutions to Eq.~\eqref{3,3,1} as well.
\end{proof}
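A sketch of this construction: we search for the smallest $a+b\sqrt{7}$ of norm $-3$ with $a$ odd and $14 \mid b$, compute $U = (8+3\sqrt{7})^{14}$ (a unit of norm $1$, congruent to $1$ modulo $14\mathbb{Z}[\sqrt{7}]$), and multiply repeatedly by $U$.

```python
# Generate (3,3,1) solutions from the norm equation in Z[sqrt(7)].
from math import isqrt

# smallest a + b*sqrt(7) with a^2 - 7b^2 = -3, a odd, 14 | b
a = b = None
for bb in range(14, 10**4, 14):
    aa = isqrt(7 * bb * bb - 3)
    if aa * aa == 7 * bb * bb - 3 and aa % 2 == 1:
        a, b = aa, bb
        break
assert (a, b) == (37, 14)

# U = (8 + 3*sqrt(7))^14, a unit of norm 1
ua, ub = 1, 0
for _ in range(14):
    ua, ub = 8 * ua + 21 * ub, 3 * ua + 8 * ub

sols = []
for _ in range(3):
    assert a * a - 7 * b * b == -3 and a % 2 == 1 and b % 14 == 0
    x, y = (a - 1) // 2, b // 14
    assert 343 * y * y == x * x + x + 1          # point on the curve
    assert 1 <= y < x
    assert (7 * y) ** 3 == y * (x * x + x + 1)   # (3,3,1) with c = y
    sols.append((x, y))
    a, b = a * ua + 7 * b * ub, a * ub + b * ua  # multiply by U
print(sols)
```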
\subsection{The case $(q,n,\ell) = (3,2,3)$}
The solutions we found to the $(3,3,1)$ case also produce solutions to
the $(3,2,3)$ case by a straightforward algebraic manipulation. Recall that for
the $(3,2,3)$ case, the equation to solve is
\begin{equation}\label{3,2,3}
y^3 = c(b^3+1)
\end{equation}
with $b^2\leq c<b^3$.
\begin{theorem}
There are infinitely many solutions to the case $(q,n,\ell) = (3,2,3)$.
\end{theorem}
\begin{proof}
Let $(y,b,c)$ be a solution to the case $(q,n,\ell) = (3,3,1)$. Then
$y^3 = c(b^2 + b + 1)$ for some integer $c$ satisfying
$1 \leq c < b$. Set $b'=b+1$, $y'=y(b+2)$, and $c'=c(b+2)^2$. We claim
that $(y',b',c')$ is a solution to Eq.~\eqref{3,2,3}. We compute
\begin{align*}
(y')^3 = y^3(b+2)^3 = c(b^2+b+1)(b+2)^3 = c(b+2)^2((b+1)^3+1) = c'((b')^3+1),
\end{align*}
as claimed. The only thing left to check is that $c'$ is in the correct range $(b')^2\leq c'< (b')^3$.
As $1\leq c\leq b-1$ by assumption, we have
\[
(b+2)^2\leq c(b+2)^2\leq (b-1)(b+2)^2.
\]
So $c'\geq (b+2)^2\geq (b+1)^2=(b')^2$,
and the lower bound on $c'$ is satisfied. For the upper bound, we directly compute
\[
(b')^3 -(b-1)(b+2)^2= (b+1)^3 -(b-1)(b+2)^2 = 3b+5,
\]
and $3b+5 > 0$ for all $b$ under consideration. So
$(b-1)(b+2)^2 < (b')^3$; thus $c'<(b')^3$, and $c'$ is in the correct range.
We have shown there are infinitely many solutions to the $(3,3,1)$ case, so
it follows that there are infinitely many solutions to the $(3,2,3)$ case. Note that
$c' = c(b'+1)^2$, so when $2c < b'$ (as holds for the solutions constructed above) we have $w=(c,2c,c)$ in base $b'=b+1$.
\end{proof}
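The transformation can be checked on the smallest $(3,3,1)$ solution, $7^3 = 1\cdot(18^2+18+1)$; the sketch below also confirms the digit pattern of the resulting $(3,2,3)$ solution.

```python
# Check the (3,3,1) -> (3,2,3) transformation on the smallest (3,3,1)
# solution Y^3 = c(b^2 + b + 1), namely Y = 7, b = 18, c = 1.
Y, b, c = 7, 18, 1
assert Y**3 == c * (b * b + b + 1)

b2, Y2, c2 = b + 1, Y * (b + 2), c * (b + 2) ** 2
assert Y2**3 == c2 * (b2**3 + 1)        # the (3,2,3) equation
assert b2**2 <= c2 < b2**3              # c2 lies in the 3-digit range

# the base-b2 digits of Y2^3 form the repetition (c, 2c, c)(c, 2c, c)
n, d = Y2**3, []
while n:
    n, r = divmod(n, b2)
    d.append(r)
d.reverse()
assert d == [c, 2 * c, c] * 2
print(d)
```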
\subsection{The case $(q,n,\ell) = (2,4,1)$}
Here we will show that the equation
$$ (y^2)_b = w\uparrow 4 $$
has solutions for infinitely many bases $b$.
\begin{theorem}
There are infinitely many solutions to the case $(q,n,\ell)=(2,4,1)$.
\end{theorem}
\begin{proof}
The $(2,4,1)$ case requires solving the equation
\begin{equation}\label{2,4,1}
y^2 = c(b^3+b^2+b+1)
\end{equation}
for $1\leq c<b$. As in the (3,2,2) case, we use the
infinitely many integral points on the curve
\begin{equation}\label{2,4,1 curve}
2y^2 = x^2 + 1.
\end{equation}
Notice that any such $x$ is odd, and that we may assume that $x,y>0$ without loss of generality.
Setting $b=x$ and $c=\frac{1}{2}(x+1)$, and multiplying both sides
of Eq.~\eqref{2,4,1 curve} by $\frac{1}{2}(x+1)^2$, we obtain
\[
(y(x+1))^2 = \frac{1}{2}(x+1)(x+1)(x^2+1) = c(b^3+b^2+b+1),
\]
which gives a solution to Eq.~\eqref{2,4,1}. If $x>1$, we have $1\leq c < x$, as required.
\end{proof}
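As in the $(3,2,2)$ case, odd powers of $1+\sqrt{2}$ supply the points; a sketch, starting from $(1+\sqrt{2})^3 = 7+5\sqrt{2}$ to avoid the excluded point $x=1$:

```python
# The (2,4,1) family: points on 2y^2 = x^2 + 1 with x > 1 give
# (y(x+1))^2 = c(b^3 + b^2 + b + 1) with b = x and c = (x+1)/2.
x, y = 7, 5                    # (1 + sqrt(2))^3
sols = []
for _ in range(4):
    assert 2 * y * y == x * x + 1
    b, c = x, (x + 1) // 2     # x is odd, so c is an integer
    assert (y * (x + 1)) ** 2 == c * (b**3 + b**2 + b + 1)
    assert 1 <= c < b
    sols.append((b, c))
    x, y = 3 * x + 4 * y, 2 * x + 3 * y   # multiply by (1 + sqrt(2))^2
print(sols)
```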
\subsection{The case $(q,n,\ell) = (4,2,2)$}
\begin{theorem}
There are infinitely many solutions to the $(q,n,\ell)=(4,2,2)$ case.
\end{theorem}
\begin{proof}
The key equation to solve for the $(4,2,2)$ case is
\begin{equation}\label{4,2,2}
y^4 = c(b^2+1)
\end{equation}
for $b\leq c<b^2$. We begin as in the $(3,2,2)$ case by finding infinitely many integral points
on the curve
\begin{equation}\label{4,2,2 curve}
2y^2 = x^2+1,
\end{equation}
but now also insisting that $y$ be divisible by $13^2 = 169$.
Assuming this is possible for the moment, set $b=x$ and
$c= 2^3 \cdot 3^4 \cdot 13^{-4} \cdot y^2=
2^2 \cdot 3^4 \cdot 13^{-4} \cdot (x^2+1)$,
using $y^2 = (x^2+1)/2$; since $13^4$ divides $y^2$, the quantity $c$ is an integer.
Clearly $c\leq x^2-1$; on the other hand, $c\geq x$ holds for all $x\geq 89$
and thus for all but finitely many of the integral solutions to Eq.~\eqref{4,2,2 curve}.
Multiplying through by $(36y/169)^2$, we have
\[
2\left(\frac{6y}{13}\right)^4 = \frac{6^4}{13^4}y^2(x^2+1)=2c(x^2+1),
\]
so $(6y/13)^4 = c(x^2+1)$, and there are infinitely many solutions to Eq.~\eqref{4,2,2}.
It remains to demonstrate the existence of an infinite family of integral points $(x,y)$ on
the curve Eq.~\eqref{4,2,2 curve} such that $169\mid y$. As in the $(3,2,2)$ case, the equation
defining the curve can be rewritten $N(x+y\sqrt{2}) = -1$ where $N$ is the norm from
${\mathbb{Q}}(\sqrt{2})$ to ${\mathbb{Q}}$. Let $u=1+\sqrt{2}$ be the fundamental unit in ${\mathbb{Q}}(\sqrt{2})$.
We compute $u^7 = 239+169\sqrt{2}$. Writing $u^{7k} = a_k + b_k\sqrt{2}$ for integers $a_k,b_k$,
it is easy to see that $169\mid b_k$ for all $k\geq 1$, and $N(u^{7k})=-1$ for all odd $k$. So the family
$u^{14k+7}$ gives an infinite supply of points of the desired form.
\end{proof}
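This family can also be generated directly; the sketch below starts from $u^7 = 239+169\sqrt{2}$ and repeatedly multiplies by $u^{14} = 114243 + 80782\sqrt{2}$, checking the divisibility and range conditions at each step.

```python
# Generate the (4,2,2) family: points on 2y^2 = x^2 + 1 with 13^2 | y.
x, y = 239, 169                      # u^7, norm -1
sols = []
for _ in range(3):
    assert 2 * y * y == x * x + 1 and y % 169 == 0
    c = (2**3 * 3**4 * y * y) // 13**4
    assert c * 13**4 == 2**3 * 3**4 * y * y       # c is an integer
    assert (6 * y // 13) ** 4 == c * (x * x + 1)  # the (4,2,2) equation
    assert x <= c < x * x                         # valid since x >= 89
    sols.append((x, y, c))
    # multiply by u^14 = 114243 + 80782*sqrt(2)
    x, y = 114243 * x + 2 * 80782 * y, 80782 * x + 114243 * y
print(sols)
```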
We have now considered all admissible triples, and
this completes the proof of Theorem~\ref{main-thm}. \hfill $\qed$
\section{Solutions for inadmissible triples}
As we have seen above, the $abc$ conjecture implies that, over all
inadmissible triples, there are in total only finitely many solutions
to the equation $(y^q)_b = w\uparrow n$ with $|w| = \ell$.
So far we have found $8$ such solutions, and they are given
below in Table~\ref{inadmiss}.
\begin{table}[H]
\begin{center}
\begin{tabular}{ccclll}
$q$ & $n$ & $\ell$ & $b$ & $y$ & $w$ \\
\hline
2 & 5 & 1 & 3 & 11 & (1) \\
4 & 3 & 1 & 18 & 7 & (7) \\
4 & 2 & 3 & 19 & 70 & (9,13,4) \\
4 & 2 & 3 & 23 & 78 & (5,17,6) \\
5 & 2 & 2 & 239 & 52 & (27,203) \\
5 & 2 & 2 & 239 & 78 & (211,115) \\
6 & 2 & 2 & 239 & 26 & (22,150) \\
3 & 2 & 4 & 12400 & 57459558593 & (4208, 7128, 8441, 5457) \\
\end{tabular}
\end{center}
\caption{Known solutions corresponding to inadmissible triples.}
\label{inadmiss}
We searched various ranges for other solutions and our search results
are summarized below.
\begin{center}
\scalebox{0.8}{
\begin{tabular}{cccc}
$q$ & $n$ & $\ell$ & $b$ \\
\hline
2 & 3 & 3 & none $\leq 3764000$ \\
2 & 3 & 4 & none $\leq 486800$ \\
2 & 3 & 5 & none $\leq 486800$ \\
2 & 4 & 2 & none $\leq 486800$ \\
2 & 4 & 3 & none $\leq 486800$ \\
2 & 4 & 4 & none $\leq 486800$ \\
2 & 4 & 5 & none $\leq 486800$ \\
2 & 5 & 1 & \red{one} $\leq {10^7}$ \\
2 & 5 & 2 & none $\leq 486800$ \\
2 & 5 & 3 & none $\leq 486800$ \\
2 & 6 & 1 & none $\leq 486800$ \\
2 & 6 & 2 & none $\leq 486800$ \\
2 & 6 & 3 & none $\leq 486800$ \\
3 & 2 & 4 & \red{one} $\leq {10^7}$ \\
3 & 2 & 5 & none $\leq 486800$ \\
3 & 2 & 6 & none $\leq 486800$ \\
3 & 3 & 2 & none $\leq 5\cdot 10^5$ \\
3 & 3 & 3 & none $\leq 3764000$\\
3 & 3 & 4 & none $\leq 486800$ \\
3 & 3 & 5 & none $\leq 486800$ \\
3 & 4 & 1 & none $\leq 486800$ \\
3 & 4 & 2 & none $\leq 486800$ \\
3 & 4 & 3 & none $\leq 486800$ \\
3 & 4 & 4 & none $\leq 486800$ \\
3 & 4 & 5 & none $\leq 486800$ \\
3 & 5 & 1 & none $\leq 5 \cdot 10^5$ \\
3 & 5 & 2 & none $\leq 486800$ \\
3 & 5 & 3 & none $\leq 486800$ \\
4 & 2 & 3 & \red{two} $\leq {10^7}$ \\
4 & 2 & 4 & none $\leq 5 \cdot 10^5$ \\
\end{tabular}
\quad\quad\quad
\begin{tabular}{cccc}
$q$ & $n$ & $\ell$ & $b$ \\
\hline
4 & 2 & 5 & none $\leq 486800$ \\
4 & 2 & 6 & none $\leq 486800$ \\
4 & 3 & 1 & \red{one} $\leq {10^7}$ \\
4 & 3 & 2 & none $\leq 5 \cdot 10^5$ \\
4 & 3 & 3 & none $\leq 3764000$\\
4 & 3 & 4 & none $\leq 486800$ \\
4 & 3 & 5 & none $\leq 486800$ \\
4 & 4 & 1 & none $\leq 5 \cdot 10^5$ \\
4 & 4 & 2 & none $\leq 486800$ \\
4 & 4 & 3 & none $\leq 486800$ \\
4 & 4 & 4 & none $\leq 486800$ \\
4 & 4 & 5 & none $\leq 486800$ \\
4 & 5 & 1 & none $\leq 5 \cdot 10^5$ \\
4 & 5 & 2 & none $\leq 486800$ \\
4 & 5 & 3 & none $\leq 486800$ \\
5 & 2 & 2 & \red{one} $\leq {10^7}$ \\
5 & 2 & 3 & none $\leq 5 \cdot 10^5$ \\
5 & 3 & 1 & none $\leq 5 \cdot 10^5$ \\
5 & 3 & 2 & none $\leq 5 \cdot 10^5$ \\
5 & 3 & 3 & none $\leq 3764000$ \\
5 & 4 & 1 & none $\leq 486800$ \\
5 & 4 & 2 & none $\leq 486800$ \\
5 & 4 & 3 & none $\leq 486800$ \\
6 & 2 & 2 & \red{one} $\leq {10^7}$\\
6 & 2 & 3 & none $\leq 5 \cdot 10^5$ \\
6 & 3 & 1 & none $\leq 486800$\\
6 & 3 & 2 & none $\leq 486800$\\
6 & 3 & 3 & none $\leq 3764000$ \\
6 & 4 & 1 & none $\leq 486800$ \\
\end{tabular}
}
\end{center}
\subsection{Our search procedure}
Consider Eq.~\eqref{equiva}: $y^q = c {{b^{n\ell} - 1} \over {b^\ell - 1}}$.
We describe a search procedure to find solutions $(b,y)$ to this equation,
which produced the results above.
It has been
implemented in three different languages: APL, Maple, and python. Code
is available from the authors.
Given $(q,n, \ell)$ and $b$,
we start by factoring $ r := (b^{n\ell} - 1)/(b^\ell- 1)$. This prime
factorization
can be sped up using the algebraic factorization of the
polynomial $X^{(n-1)\ell} + \cdots + X^\ell + 1$ over ${\mathbb{Q}}[X]$. For example,
if $n = 3$ and $\ell = 2$, the polynomial
$X^4 + X^2 + 1$ has the factorization $f(X) \cdot g(X)$ where
$f(X) = X^2 + X + 1$ and $g(X) = X^2 - X + 1$.
We therefore can compute $f(b)$ and $g(b)$ and factor each piece
independently and combine the results.
Now we have the prime factorization of $r$, say
$r = p_1^{e_1} \cdots p_t^{e_t}$.
If $cr$ is to be a $q$th power, then we must have that
$p_i^{q \lceil e_i/q \rceil}$ divides $cr$ for $1 \leq i \leq t$.
So $c$ must be a multiple of
$$ d := \prod_{1 \leq i \leq t} p_i^{q \lceil e_i/q \rceil - e_i} ,$$
and $c$ must further satisfy the inequality $b^{\ell - 1} \leq c < b^\ell$.
Writing $c = k^qd$ for some integer $k$, we have
$b^{\ell-1}/d \leq k^q < b^\ell/d$ and so
$(b^{\ell-1}/d)^{1/q} \leq k < (b^\ell/d)^{1/q}$. A solution then exists for
each integer $k$ in this interval, which can be easily checked.
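A direct transcription of this procedure might look like the sketch below, with naive trial division standing in for serious integer factorization (so only small $b$ are practical).

```python
# Search for solutions (b, y, c) to y^q = c (b^(nl) - 1)/(b^l - 1).
def factorize(n):
    f, p = {}, 2
    while p * p <= n:
        while n % p == 0:
            f[p] = f.get(p, 0) + 1
            n //= p
        p += 1
    if n > 1:
        f[n] = f.get(n, 0) + 1
    return f

def solutions(q, n, ell, bmax):
    out = []
    for b in range(2, bmax + 1):
        r = (b**(n * ell) - 1) // (b**ell - 1)
        d = 1                                  # minimal multiplier
        for p, e in factorize(r).items():
            d *= p ** (q * ((e + q - 1) // q) - e)
        k = 1
        while k**q * d < b**(ell - 1):         # reach the digit range
            k += 1
        while k**q * d < b**ell:
            c = k**q * d
            y = round((c * r) ** (1 / q))      # integer q-th root,
            while y**q < c * r: y += 1         # with rounding fixed up
            while y**q > c * r: y -= 1
            if y**q == c * r:
                out.append((b, y, c))
            k += 1
    return out

print(solutions(2, 5, 1, 50))    # recovers the solution b = 3, y = 11
```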
The most time-consuming part of this calculation is the integer
factorization. Typically we were searching some subrange of
the interval $[2, 10^7]$, with $n \leq 6$ and $\ell \leq 5$.
Thus we could be factoring numbers of size as large as
$10^{175}$.
If we want to perform this computation for many different
triples $(q,n,\ell)$ at once, it makes sense to first precompute the
algebraic factorizations described above, next compute
the factorizations of individual pieces, and finally assemble
the needed factorizations from these pieces.
\section{Beyond canonical base-$b$ representation}
One can consider the equation $(y^q) = w \uparrow n$ for a wide variety of
other types of
representations. In this section we consider two such other types of
representations.
\subsection{Bijective base-$b$ representation}
First, we consider the so-called ``bijective base-$b$ representation'';
see, for example, \cite{Foster:1947}; \cite[\S 9, pp.~34--36]{Smullyan:1961};
\cite[Solution to Exercise 4.1-24, p.\ 495]{Knuth:1969};
\cite[Note 9.1, pp.\ 90--91]{Salomaa:1973};
\cite[pp.\ 70--76]{Davis&Weyuker:1983};
\cite{Forslund:1995}; \cite{Boute:2000}.
This representation is like ordinary base-$b$ representation, except
that instead of using the digits $0,1, \ldots, b-1$, we use the
digits $1, 2, \ldots, b$. We use the notation
$\langle x \rangle_b$ to denote this representation.
\begin{theorem}
For all $b, \ell \geq 2$ there exists a word $w$ of length $\ell$
and an integer $y$ such that $\langle y^2 \rangle_b = w \uparrow 2$.
\label{bij1}
\end{theorem}
\begin{proof}
Consider $y =b^\ell + 1$ for $\ell \geq 2$.
Then $y^2 =b^{2\ell} + 2b^\ell + 1$,
which has bijective base-$b$ representation $w\uparrow 2$
for $w = ( (b-1) \uparrow (\ell-2), b, 1 )$.
\end{proof}
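The construction in the proof is easy to verify computationally; the following sketch checks it for small $b$ and $\ell$.

```python
# Check Theorem (bij1): the bijective base-b representation of
# (b^l + 1)^2 is w w with w = ((b-1) repeated l-2 times, b, 1).
def bijective_digits(n, b):
    d = []
    while n:
        r = n % b
        if r == 0:              # digit 0 is replaced by digit b
            r = b
        d.append(r)
        n = (n - r) // b
    d.reverse()
    return d

for b in range(2, 8):
    for ell in range(2, 6):
        y = b**ell + 1
        d = bijective_digits(y * y, b)
        w = [b - 1] * (ell - 2) + [b, 1]
        assert d == w + w, (b, ell, d)
print("ok")
```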
For some specific bases $b$ there are other infinite families of
solutions. For example, the table below summarizes
some of these families, for $n \geq 0$. They are easy to prove by direct
calculation.
\begin{table}[H]
\begin{center}
\begin{tabular}{ccc}
$b$ & $\langle y \rangle_b$ & $\langle y^2 \rangle_b$ \\
\hline
2 & $((12)\uparrow {3n+3}) 212$ & $ ( ((221112)\uparrow n) 221121112)\uparrow 2 $ \\
3 & $((1331)\uparrow {5n+2}) 22 $ & $ (((12132111223231233322)\uparrow n) 1213211131)\uparrow 2$ \\
4 & $((21)\uparrow {5n+2}) 3$ & $ (( (1123421433)\uparrow n) 11241)\uparrow 2 $ \\
4 & $((24) \uparrow {5n+2}) 4$ & $ (( (2143311234)\uparrow n) 21434) \uparrow 2 $ \\
5 & $((31) \uparrow {3n+1}) 4 $ &$ (((155234)\uparrow n) 211)\uparrow 2$ \\
6 & $((41)\uparrow {7n+3} )5$ & $ (((26211162534435)\uparrow n) 2621121) \uparrow 2$ \\
6 & $((46)\uparrow {7n+3}) 6$ & $ (((42236551331456)\uparrow n) 4223656) \uparrow 2$ \\
7 & $ ( 3 \uparrow {2n+2}) 4$ & $ (((15) \uparrow {n+1}) 2)\uparrow 2$ \\
8 & $ ((52) \uparrow {n+1}) 6$ & $ (((34) \uparrow {n+1}) 4)\uparrow 2$ \\
9 & $ ((35) \uparrow {10n+2}) 4$ & $ (((1385674932)\uparrow {2n}) 13857)\uparrow 2 $ \\
9 & $ ((53) \uparrow {10n+2}) 6 $ & $ (((3213856749)\uparrow {2n}) 32139)\uparrow 2$ \\
9 & $ ((71) \uparrow {10n+2}) 8 $ & $ (((5674932138)\uparrow {2n}) 56751)\uparrow 2$
\end{tabular}
\end{center}
\end{table}
\subsection{Fibonacci representation}
Yet another representation for integers involves the Fibonacci numbers.
The so-called Fibonacci or Zeckendorf representation of an integer $n \geq 0$
consists of writing $n$ as the sum of non-adjacent Fibonacci numbers:
$$ n = \sum_{2 \leq i \leq t} e_i F_i $$
where $e_i \in \{ 0, 1 \}$ and $e_i e_{i+1} \not= 1$ for $i \geq 2$;
see \cite{Lekkerkerker:1952,Zeckendorf:1972}.
In this case we write the representation of $n$, starting with the most
significant digit, as the binary word $(n)_F = e_t e_{t-1} \cdots e_2$.
\begin{theorem}
There are infinitely many solutions to the equation
$(y^2)_F = w \uparrow 2$, for integers $y$ and words $w$.
\end{theorem}
\begin{proof}
The proof depends on the following identity:
\begin{multline*}
(F_{4n+3} + F_{4n+6} + F_{8n+8} + F_{8n+11})^2 = \\
F_{4n+2} + F_{4n+5} + F_{4n+8} + F_{4n+10} +
\left(\sum_{1 \leq i < n} F_{4n+4i+10} \right) + F_{8n+11} + \\
F_{12n+12} + F_{12n+15} + F_{12n+18} + F_{12n+20} +
\left( \sum_{1 \leq i < n} F_{12n+4i+20} \right) + F_{16n+21} ,
\end{multline*}
which can be proved with a computer algebra system, such as Maple.
This identity shows that the Fibonacci representation of
$(F_{4n+3} + F_{4n+6} + F_{8n+8} + F_{8n+11})^2$
has the form $w^2$ with
$$ w = (1, 0,0,0,0, ((1, 0,0,0)\uparrow (n-1)), 1,0,1,0,0,1,0,0,1,
(0 \uparrow (4n))) .$$
\end{proof}
\begin{remark}
There are other infinite families of solutions. Here is one:
let $n \geq 1$ and
suppose $(y)_F= ((100)\uparrow (4n+2), 1,0,1,0,0,0)$.
Then $(y^2)_F = ww$ with
$w = ( ((1,0,0,1,(0\uparrow 8))\uparrow n), 1,0,0,1,0,0,0,0,0,0,1,0) $.
\end{remark}
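These representations are straightforward to verify with the greedy algorithm for Zeckendorf representations; the sketch below checks the solutions $y = 4$ and $y = 49$ listed below.

```python
# Greedy computation of the Zeckendorf (Fibonacci) representation,
# returned as a binary string e_t ... e_2 (most significant first).
def zeckendorf(n: int) -> str:
    fibs = [1, 2]                      # F_2, F_3, ...
    while fibs[-1] <= n:
        fibs.append(fibs[-1] + fibs[-2])
    digits = []
    for f in reversed(fibs[:-1]):
        if f <= n:
            digits.append('1')
            n -= f
        else:
            digits.append('0')
    return ''.join(digits).lstrip('0')

assert zeckendorf(4 * 4) == '100' * 2
assert zeckendorf(49 * 49) == '10100100' * 2
```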
A list of all the solutions to $(y^2)_F = w \uparrow 2$
with $y < 34000000$ is given below.
\begin{table}[H]
\begin{center}
\begin{tabular}{rl}
$y$ & $w$ \\
\hline
4 & 100\\
49 & 10100100\\
306 & 100100000010\\
728 & 10000000101000\\
2021 & 1000100000101010\\
3556 & 10010101001000100\\
3740 & 10100101001000010\\
5236 & 100001010010010000\\
21360 & 100000010100101010010\\
35244 & 1000010000001010000000\\
98210 & 100100000000100100000010\\
243252 & 10000100010100100100000000\\
1096099 & 10010000010100100100010101010\\
1625040 & 100000010101001010000100001000\\
1662860 & 100001000100000010100000000000\\
4976785 & 10100010000100000000000010100100\\
5080514 & 10100100101001000000001000010100\\
11408968 & 1000010001000101001001000000000000\\
31622994 & 100100000000100100000000100100000010\\
31831002 & 100100000101010000010000010101000010\\
33587514 & 101000000100100010001000100101000000\\
33599070 & 101000000100101001000010000101000000\\
\end{tabular}
\end{center}
\end{table}
\begin{remark}
The only solutions to $(y^q)_F = w \uparrow n$ other than $(q,n) = (2,2)$
that we found are $(2^4)_F = (100)\uparrow 2$ and $(7^4)_F = (10100100)\uparrow 2$.
\end{remark}
\section{Introduction}
Hypernyms are useful in many natural language processing tasks ranging from construction of taxonomies~\cite{Snow:06,panchenko-EtAl:2016:SemEval} to query expansion~\cite{Gong:05} and question answering~\cite{Zhou:13}. Automatic extraction of hypernyms from text has been an active area of research since manually constructed high-quality resources featuring hypernyms, such as WordNet~\cite{Miller:95}, are not available for many domain-language pairs.
The drawback of pattern-based approaches to hypernymy extraction~\cite{Hearst:92} is their sparsity. Approaches that rely on the classification of pairs of word embeddings~\cite{Levy:15} aim to tackle this shortcoming, but they require candidate hyponym-hypernym pairs. We explore a hypernymy extraction approach that requires no candidate pairs. Instead, the method performs prediction of a hypernym embedding on the basis of a hyponym embedding.
The contribution of this paper is a novel approach for hypernymy extraction based on projection learning. Namely, we present an improved version of the model proposed by~\newcite{Fu:14}, which makes use of both positive and negative training instances enforcing the asymmetry of the projection. The proposed model is generic and could be straightforwardly used in other relation extraction tasks where both positive and negative training samples are available. Finally, we are the first to successfully apply projection learning for hypernymy extraction in a morphologically rich language. An implementation of our approach and the pre-trained models are available online.\footnote{\url{http://github.com/nlpub/projlearn}}
\section{Related Work}
\textbf{Path-based methods} for hypernymy extraction rely on sentences where both hyponym and hypernym co-occur in characteristic contexts, e.g., ``such \textit{cars} as \textit{Mercedes} and \textit{Audi}''. \newcite{Hearst:92} proposed to use hand-crafted lexical-syntactic patterns to extract hypernyms from such contexts. \newcite{Snow:04} introduced a method for learning patterns automatically based on a set of seed hyponym-hypernym pairs. Further examples of path-based approaches include~\cite{TjongKimSang:09} and \cite{Navigli:10}. The inherent limitation of the path-based methods leading to sparsity issues is that hyponym and hypernym have to co-occur in the same sentence.
Methods based on distributional vectors, such as those generated using the \textit{word2vec} toolkit~\cite{Mikolov:13:w2v}, aim to overcome this sparsity issue as they require no hyponym-hypernym co-occurrence in a sentence. Such methods take representations of individual words as an input to predict relations between them. Two branches of methods relying on distributional representations emerged so far.
\textbf{Methods based on word pair classification} take an ordered pair of word embeddings (a candidate hyponym-hypernym pair) as an input and output a binary label indicating a presence of the hypernymy relation between the words. Typically, a binary classifier is trained on concatenation or subtraction of the input embeddings, cf.~\cite{Roller:14}. Further examples of such methods include~\cite{Lenci:12,Weeds:14,Levy:15,Vylomova:16}.
HypeNET~\cite{Shwartz:16:hypenet} is a hybrid approach which is also based on a classifier, but in addition to two word embeddings a third vector is used. It represents path-based syntactic information encoded using an LSTM model~\cite{Hochreiter:97}. Their results significantly outperform the ones from previous path-based work of~\newcite{Snow:04}.
An inherent limitation of classification-based approaches is that they require a list of candidate word pairs. While these are given in evaluation datasets such as BLESS~\cite{Baroni:11}, a corpus-wide classification of relations would need to classify all possible word pairs, which is computationally expensive for large vocabularies. Besides, \newcite{Levy:15} discovered a tendency of such approaches toward lexical memorization, which hampers generalization.
\textbf{Methods based on projection learning} take one hyponym word vector as an input and output a word vector in a topological vicinity of hypernym word vectors. Scaling this to the vocabulary, there is only one such operation per word. \newcite{Mikolov:13:mt} used projection learning for bilingual word translation. \newcite{Vulic:16} presented a systematic study of four classes of methods for learning bilingual embeddings including those based on projection learning.
\newcite{Fu:14} were the first to apply projection learning to hypernymy extraction. Their approach is to learn an affine transformation of a hyponym word vector into a hypernym word vector. The training of their model is performed with stochastic gradient descent. The $k$-means clustering algorithm is used to split the training relations into several groups, and one transformation is learned for each group, which can account for the possibility that the projection depends on the subspace in which the relation lies. This state-of-the-art approach serves as the baseline in our experiments.
\newcite{Nayak:15} performed evaluations of distributional hypernym extractors based on classification and projection methods (yet on different datasets, so these approaches are not directly comparable). The best performing projection-based architecture proposed in this experiment is a four-layered feed-forward neural network. No clustering of relations was used. The author used negative samples in the model by adding a regularization term in the loss function. However, drawing negative examples uniformly from the vocabulary turned out to hamper performance. In contrast, our approach shows significant improvements using manually created synonyms and hyponyms as negative samples.
\newcite{yamane-EtAl:2016:COLING} introduced several improvements of the model of~\newcite{Fu:14}. Their model jointly learns projections and clusters by dynamically adding new clusters during training. They also used automatically generated negative instances via a regularization term in the loss function. In contrast to~\newcite{Nayak:15}, negative samples are selected not randomly, but among nearest neighbors of the predicted hypernym. Their approach compares favorably to~\cite{Fu:14}, yet the contribution of the negative samples was not studied. Key differences of our approach from~\cite{yamane-EtAl:2016:COLING} are (1) use of explicit as opposed to automatically generated negative samples, (2) enforcement of asymmetry of the projection matrix via re-projection. While our experiments are based on the model of~\newcite{Fu:14}, our regularizers can be straightforwardly integrated into the model of~\newcite{yamane-EtAl:2016:COLING}.
\section{Hypernymy Extraction via Regularized Projection Learning}
\subsection{Baseline Approach}
In our experiments, we use the model of \newcite{Fu:14} as the baseline. In this approach, the projection matrix $\mathbf{\Phi}^*$ is obtained analogously to a linear regression problem, i.e., for the given row word vectors $\vec{x}$ and $\vec{y}$ representing the hyponym and the hypernym, respectively, the square matrix $\mathbf{\Phi}^*$ is fit on the training set of positive pairs $\mathcal{P}$:
\begin{equation*}
\mathbf{\Phi}^* = \arg\min_{\mathbf{\Phi}} \frac{1}{|\mathcal{P}|}
\sum_{(\vec{x}, \vec{y}) \in \mathcal{P}} \left\|\vec{x}\mathbf{\Phi} - \vec{y}\right\|^2\text{,}
\label{eq:baseline}
\end{equation*}
where $|\mathcal{P}|$ is the number of training examples and $\|\vec{x}\mathbf{\Phi} - \vec{y}\|$ is the distance between a pair of row vectors $\vec{x}\mathbf{\Phi}$ and $\vec{y}$. In the original method, the $L^2$~distance is used. To improve performance, $k$ projection matrices $\mathbf{\Phi}$ are learned, one for each cluster of relations in the training set, where each training example is represented by its hyponym-hypernym offset. Clustering is performed using the $k$-means algorithm~\cite{MacQueen:67}.
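As an illustration, the baseline objective can be sketched in a few lines of NumPy. Here the least-squares problem is solved in closed form on toy data, rather than with SGD and $k$-means clustering as in the original method; all names and dimensions are hypothetical:

```python
import numpy as np

def fit_projection(X, Y):
    """Fit a square matrix Phi minimizing sum ||x Phi - y||^2 over pairs.

    X, Y: (n, d) arrays of row vectors for hyponyms and hypernyms.
    Solved in closed form here; Fu et al. (2014) use SGD and learn
    one Phi per k-means cluster of relations instead.
    """
    Phi, *_ = np.linalg.lstsq(X, Y, rcond=None)
    return Phi

# Toy data: 100 pairs of 50-dimensional row vectors (hypothetical).
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 50))
true_Phi = rng.normal(size=(50, 50))
Y = X @ true_Phi            # hypernym vectors are exact projections here
Phi = fit_projection(X, Y)
assert np.allclose(X @ Phi, Y)   # the projection is recovered exactly
```

Because the toy system is consistent and overdetermined, the closed-form solution recovers the true matrix; on real embeddings the fit is only approximate, which is why clustering relations helps.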
\subsection{Linguistic Constraints via Regularization}
The nearest neighbors generated using distributional word vectors tend to contain a mixture of synonyms, hypernyms, co-hyponyms and other related words~\cite{wandmacher2005semantic,Heylen:08,panchenko:2011:GEMS}.
In order to explicitly provide examples of undesired relations to the model, we propose two improved versions of the baseline model: \textit{asymmetric regularization} that uses inverted relations as negative examples, and \textit{neighbor regularization} that uses relations of other types as negative examples. For that, we add a regularization term to the loss function:
\begin{equation*}
\mathbf{\Phi}^* = \arg\min_{\mathbf{\Phi}} \frac{1}{|\mathcal{P}|}
\sum_{(\vec{x}, \vec{y}) \in \mathcal{P}} \left\|\vec{x}\mathbf{\Phi} - \vec{y}\right\|^2 + \lambda R\text{,}
\label{eq:regularized}
\end{equation*}
where $\lambda$ is the constant controlling the importance of the regularization term $R$.
\paragraph{Asymmetric Regularization.} As hypernymy is an asymmetric relation, our first method enforces the asymmetry of the projection matrix: applying the same transformation to the predicted hypernym vector $\vec{x}\mathbf{\Phi}$ should not yield a vector similar (in terms of the dot product $\cdot$) to the initial hyponym vector $\vec{x}$. Note that this regularizer requires only positive examples $\mathcal{P}$:
\begin{equation*}
R = \frac{1}{|\mathcal{P}|} \sum_{(\vec{x},\_) \in \mathcal{P}} (\vec{x}\mathbf{\Phi}\mathbf{\Phi} \cdot \vec{x})^2.
\label{eq:hyponym}
\end{equation*}
\vspace{-.75em}\paragraph{Neighbor Regularization.} This approach relies on negative sampling: it explicitly provides examples of semantically related words $\vec{z}$ of the hyponym $\vec{x}$ and penalizes the matrix for producing vectors similar to them:
\begin{equation*}
R = \frac{1}{|\mathcal{N}|} \sum_{(\vec{x}, \vec{z}) \in \mathcal{N}} (\vec{x}\mathbf{\Phi}\mathbf{\Phi} \cdot \vec{z})^2.
\label{eq:synonym}
\end{equation*}
Note that this regularizer requires negative samples $\mathcal{N}$. In our experiments, we use synonyms of hyponyms as $\mathcal{N}$, but other types of relations, such as antonyms, meronyms or co-hyponyms, can also be used. Certain words might have no synonyms in the training set. In such cases, we substitute $\vec{z}$ with $\vec{x}$, gracefully reducing to the previous variation. Otherwise, on each training epoch, we sample a random synonym of the given word.
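Both regularizers can be sketched directly from their definitions; the toy vectors, dimensions, and the regularization weight below are hypothetical and not taken from our experimental setup:

```python
import numpy as np

def asymmetric_reg(X, Phi):
    """R = mean over hyponyms x of (x Phi Phi . x)^2: penalizes the
    re-projected vector for pointing back at the original hyponym."""
    reproj = X @ Phi @ Phi                   # (n, d) rows: x Phi Phi
    dots = np.einsum('ij,ij->i', reproj, X)  # row-wise dot products
    return np.mean(dots ** 2)

def neighbor_reg(X, Z, Phi):
    """R = mean over (x, z) pairs of (x Phi Phi . z)^2: penalizes
    similarity of the re-projection to a sampled synonym z."""
    reproj = X @ Phi @ Phi
    dots = np.einsum('ij,ij->i', reproj, Z)
    return np.mean(dots ** 2)

rng = np.random.default_rng(1)
X = rng.normal(size=(8, 5))        # hyponym vectors (toy)
Z = rng.normal(size=(8, 5))        # one sampled synonym per hyponym (toy)
Phi = 0.1 * rng.normal(size=(5, 5))
lam = 0.5                          # hypothetical regularization weight
penalty = lam * (asymmetric_reg(X, Phi) + neighbor_reg(X, Z, Phi))
assert penalty >= 0.0
```

In training, such a penalty would simply be added to the squared projection loss, as in the regularized objective above.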
\paragraph{Regularizers without Re-Projection.} In addition to the two regularizers described above, which rely on re-projection of the hyponym vector ($\vec{x}\mathbf{\Phi\Phi}$), we also tested two regularizers without re-projection, denoted as $\vec{x}\mathbf{\Phi}$. The neighbor regularizer in this variation is defined as follows:
\begin{equation*}
R = \frac{1}{|\mathcal{N}|} \sum_{(\vec{x}, \vec{z}) \in \mathcal{N}} (\vec{x}\mathbf{\Phi} \cdot \vec{z})^2.
\label{eq:synonymnoreproj}
\end{equation*}
In our case, this regularizer penalizes relatedness of the predicted hypernym $\vec{x}\mathbf{\Phi}$ to the synonym $\vec{z}$. The asymmetric regularizer without re-projection is defined in a similar way.
\subsection{Training of the Models}
To learn the parameters of the considered models, we used the Adam method~\cite{Kingma:14} with the default meta-parameters as implemented in the TensorFlow framework~\cite{Abadi:16}.\footnote{\url{https://www.tensorflow.org}} We ran $700$ training epochs, passing batches of $1024$ examples to the optimizer. We initialized the elements of each projection matrix from the normal distribution $\mathcal{N}(0, 0.1)$.
\section{Results}
\subsection{Evaluation Metrics}
In order to assess the quality of the model, we adopted the $\hitk{l}$ measure proposed by \newcite{Frome:13}, which was originally used for image tagging. For each subsumption pair $(\vec{x}, \vec{y})$ composed of the hyponym $\vec{x}$ and the hypernym $\vec{y}$ in the test set $\mathcal{P}$, we compute the $l$ nearest neighbors of the projected hypernym $\vec{x}\mathbf{\Phi}^*$. The pair is considered matched if the gold hypernym $\vec{y}$ appears in the computed list of the $l$ nearest neighbors $\nn{l}(\vec{x}\mathbf{\Phi}^*)$. To obtain the quality score, we average the matches over the test set $\mathcal{P}$:
\begin{equation*}
\hitk{l} = \frac{1}{|\mathcal{P}|} \sum_{(\vec{x}, \vec{y}) \in \mathcal{P}} \mathbbm{1}\big(
\vec{y} \in \nn{l}(\vec{x}\mathbf{\Phi}^*)
\big)\text{,}
\end{equation*}
where $\mathbbm{1}(\cdot)$ is the indicator function. To also take the rank of the correct answer into account, we compute the area under curve (AUC) measure as the area under the $l-1$ trapezoids:
$$
\ensuremath\operatorname{\text{AUC}} = \frac{1}{2} \sum^{l - 1}_{i=1} (\hitk{(i)} + \hitk{(i+1)}).
$$
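A minimal sketch of both evaluation measures, assuming cosine similarity is used to rank candidate hypernyms (all data below is synthetic):

```python
import numpy as np

def hits_and_auc(pred, gold_ids, vocab, l=10):
    """Compute hit@1..hit@l and the trapezoid AUC.

    pred:     (n, d) predicted hypernym vectors (x Phi*)
    gold_ids: (n,) index of the gold hypernym in `vocab`
    vocab:    (V, d) matrix of candidate hypernym embeddings
    """
    # Rank all candidates by cosine similarity to each prediction.
    pn = pred / np.linalg.norm(pred, axis=1, keepdims=True)
    vn = vocab / np.linalg.norm(vocab, axis=1, keepdims=True)
    order = np.argsort(-(pn @ vn.T), axis=1)          # best first
    # rank[i] = 1-based position of the gold hypernym for pair i
    rank = 1 + np.argmax(order == gold_ids[:, None], axis=1)
    hits = [np.mean(rank <= i) for i in range(1, l + 1)]
    auc = 0.5 * sum(hits[i] + hits[i + 1] for i in range(l - 1))
    return hits, auc

rng = np.random.default_rng(2)
vocab = rng.normal(size=(50, 16))
gold_ids = rng.integers(0, 50, size=20)
pred = vocab[gold_ids] + 0.01 * rng.normal(size=(20, 16))  # near-perfect
hits, auc = hits_and_auc(pred, gold_ids, vocab, l=10)
```

Since hit@$i$ is non-decreasing in $i$, the AUC rewards models that rank the gold hypernym higher even when it misses the top position.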
\begin{figure*}[t]
\centering
\includegraphics[width=.475\textwidth]{ru-sz500-hit10}
\quad
\includegraphics[width=.475\textwidth]{en-sz300-hit10}
\vspace{-1em}
\caption{Performance of our models with re-projection as compared to the baseline approach of~\cite{Fu:14} according to the $\hitk{10}$ measure for Russian (left) and English (right) on the validation set.}
\label{fig:hit10}
\end{figure*}
\begin{table}[t]
\footnotesize
\centering
\scalebox{0.95}{
\begin{tabular}{ll|cccc}
\textbf{Model} & & \textbf{hit@1} &\textbf{hit@5} & \textbf{hit@10} & \textbf{AUC} \\\hline
Baseline & &
$0.209$ & $0.303$ & $0.323$ & $2.665$ \\
Asym. Reg. & $\vec{x}\mathbf{\Phi}$ &
$0.213$ & $0.300$ & $0.322$ & $2.659$ \\
Asym. Reg. & $\vec{x}\mathbf{\Phi\Phi}$ &
$0.212$ & $0.312$ & $0.334$ & $2.743$ \\
Neig. Reg. & $\vec{x}\mathbf{\Phi}$ &
$\mathbf{0.214}$ & $0.304$ & $0.325$ & $2.685$ \\
Neig. Reg. & $\vec{x}\mathbf{\Phi\Phi}$ &
$0.211$ & $\mathbf{0.315}$ & $\mathbf{0.338}$ & $\mathbf{2.768}$ \\
\end{tabular}
}
\caption{Performance of our approach for Russian for $k=20$ clusters compared to~\cite{Fu:14}.}
\label{tab:performance:ru}
\end{table}
\begin{table*}[t]
\centering
\footnotesize
\begin{tabular}{ll|ccccc|ccccc}
& & & \multicolumn{4}{c|}{EVALution}
& \multicolumn{5}{c}{EVALution, BLESS, K\&H+N, ROOT09}\\
\textbf{Model} & & $k$ & \textbf{hit@1} & \textbf{hit@5} & \textbf{hit@10} & \textbf{AUC}&
$k$ & \textbf{hit@1} & \textbf{hit@5} & \textbf{hit@10} & \textbf{AUC}\\\hline
Baseline & & $1$ &
$0.109$ & $0.118$ & $0.120$ & $1.052$ &
$1$ & $0.104$ & $0.247$ & $0.290$ & $2.115$\\
Asymmetric Reg. & $\vec{x}\mathbf{\Phi}$ & $1$ &
$0.116$ & $0.125$ & $0.132$ & $1.140$ &
$1$ & $0.132$ & $0.256$ & $0.292$ & $2.204$\\
Asymmetric Reg. & $\vec{x}\mathbf{\Phi\Phi}$ & $1$ &
$0.145$ & $0.166$ & $0.173$ & $1.466$ &
$1$ & $0.112$ & $\mathbf{0.266}$ & $0.314$ & $2.267$\\
Neighbor Reg. & $\vec{x}\mathbf{\Phi}$ & $1$ &
$0.134$ & $0.141$ & $0.150$ & $1.280$ &
$1$ & $\mathbf{0.134}$ & $0.255$ & $0.306$ & $2.267$ \\
Neighbor Reg. & $\vec{x}\mathbf{\Phi\Phi}$ & $1$ &
$\mathbf{0.148}$ & $\mathbf{0.168}$ & $\mathbf{0.177}$ & $\mathbf{1.494}$&
$1$ & $0.111$ & $0.264$ & $\mathbf{0.316}$ & $\mathbf{2.273}$\\\hline
Baseline & & $30$ &
$0.327$ & $0.339$ & $0.350$ & $3.080$ &
$25$ & $0.546$ & $0.614$ & $0.634$ & $5.481$\\
Asymmetric Reg. & $\vec{x}\mathbf{\Phi}$ & $30$ &
$0.336$ & $0.354$ & $0.366$ & $3.201$ &
$25$ & $0.547$ & $0.616$ & $0.632$ & $5.492$\\
Asymmetric Reg. & $\vec{x}\mathbf{\Phi\Phi}$ & $30$ &
$0.341$ & $0.364$ & $0.368$ & $3.255$ &
$25$ & $\mathbf{0.553}$ & $0.621$ & $\mathbf{0.642}$ & $5.543$\\
Neighbor Reg. & $\vec{x}\mathbf{\Phi}$ & $30$ &
$0.339$ & $0.357$ & $0.364$ & $3.210$ &
$25$ & $0.547$ & $0.617$ & $0.634$ & $5.494$ \\
Neighbor Reg. & $\vec{x}\mathbf{\Phi\Phi}$ & $30$ &
$\mathbf{0.345}$ & $\mathbf{0.366}$ & $\mathbf{0.370}$ & $\mathbf{3.276}$&
$25$ & $\mathbf{0.553}$ & $\mathbf{0.623}$ & $0.641$ & $\mathbf{5.547}$\\
\end{tabular}
\caption{Performance of our approach for English without clustering $(k=1)$ and with the optimal number of cluster on the EVALution datasets ($k=30$) and on the combined datasets ($k=25$).
}
\label{tab:performance:en}
\vspace{-1.25em}
\end{table*}
\subsection{Experiment 1: The Russian Language}
\paragraph{Dataset.} In this experiment, we use word embeddings published as a part of the Russian Distributional Thesaurus~\cite{Panchenko:16}, trained on a $12.9$ billion token collection of Russian books. The embeddings were trained using the skip-gram model~\cite{Mikolov:13:w2v} with $500$ dimensions and a context window of $10$ words.
The dataset used in our experiments has been composed of two sources. We extracted synonyms and hypernyms from the Wiktionary\footnote{\url{http://www.wiktionary.org}} using the Wikokit toolkit~\cite{Krizhanovsky:13}. To enrich the lexical coverage of the dataset, we extracted additional hypernyms from the same corpus using Hearst patterns for Russian using the PatternSim toolkit~\cite{Panchenko:12}.\footnote{\url{https://github.com/cental/patternsim}} To filter noisy extractions, we used only relations extracted more than $100$ times.
As suggested by~\newcite{Levy:15}, we split the train and test sets such that each contains a distinct vocabulary to avoid lexical overfitting. This results in $25\,067$ training, $8\,192$ validation, and $8\,310$ test examples. The validation and test sets contain hypernyms from Wiktionary, while the training set is composed of hypernyms and synonyms coming from both sources.
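One simple way to realize such a lexically disjoint split is to partition the vocabulary first and keep only pairs whose words both fall into the same partition; the function below is an illustrative sketch, not the exact procedure we used:

```python
import random

def lexical_split(pairs, train_frac=0.7, seed=0):
    """Split relation pairs so that train and test vocabularies are
    disjoint, avoiding the lexical memorization noted by Levy et al.
    (2015). Pairs straddling the two vocabularies are discarded."""
    words = sorted({w for pair in pairs for w in pair})
    rng = random.Random(seed)
    rng.shuffle(words)
    cut = int(train_frac * len(words))
    train_vocab = set(words[:cut])
    train = [p for p in pairs if p[0] in train_vocab and p[1] in train_vocab]
    test = [p for p in pairs if p[0] not in train_vocab and p[1] not in train_vocab]
    return train, test

# Toy hyponym-hypernym pairs (hypothetical).
pairs = [("dog", "animal"), ("cat", "animal"),
         ("oak", "tree"), ("rose", "flower")]
train, test = lexical_split(pairs)
```

Discarding straddling pairs loses some data, but guarantees that no word seen during training appears at test time.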
\paragraph{Discussion of Results.}
\figurename~\ref{fig:hit10} (left) shows performance of the three projection learning setups on the validation set: the baseline approach, the asymmetric regularization approach, and the neighbor regularization approach. Both regularization strategies lead to consistent improvements over the non-regularized baseline of~\cite{Fu:14} across various cluster sizes. The method reaches optimal performance for $k=20$ clusters. \tablename~\ref{tab:performance:ru} provides a detailed comparison of the performance metrics for this setting. Our approach based on regularization using synonyms as negative samples outperforms the baseline (all differences between the baseline and our models are significant with respect to the $t$-test). According to all metrics except $\hitk{1}$, for which the results are comparable to those of $\vec{x}\mathbf{\Phi}$, re-projection ($\vec{x}\mathbf{\Phi\Phi}$) improves the results.
\subsection{Experiment 2: The English Language}
We performed the evaluation on two datasets.
\paragraph{EVALution Dataset.} In this evaluation, word embeddings were trained on a $6.3$ billion token text collection composed of Wikipedia, ukWaC~\cite{Ferraresi:08}, Gigaword~\cite{Graff:03}, and news corpora from the Leipzig Collection \cite{Goldhahn:12}. We used the skip-gram model with the context window size of $8$ tokens and $300$-dimensional vectors.
We use the EVALution dataset~\cite{Santus:15} for training and testing the model, composed of $1\,449$ hypernyms and $520$ synonyms, where hypernyms are split into $944$ training, $65$ validation and $440$ test pairs. Similarly to the first experiment, we extracted extra training hypernyms using the Hearst patterns, but in contrast to Russian, they did not improve the results significantly, so we left them out for English. A reason for this difference could be the more complex morphological system of Russian, where each word has more morphological variants than in English; therefore, extra training samples are needed for Russian (the Russian embeddings were trained on a non-lemmatized corpus).
\paragraph{Combined Dataset.} To show the robustness of our approach across configurations, this dataset has more training instances, different embeddings, and both synonyms and co-hyponyms as negative samples. We used hypernyms, synonyms and co-hyponyms from four commonly used datasets: EVALution, BLESS~\cite{Baroni:11}, ROOT09~\cite{Santus:16} and K\&H+N~\cite{Necsulescu:15}.
The obtained $14\,528$ relations were split into $9\,959$ training, $1\,631$ validation and $1\,625$ test hypernyms; $1\,313$ synonyms and co-hyponyms were used as negative samples. We used the standard $300$-dimensional embeddings trained on the $100$ billion token Google News corpus~\cite{Mikolov:13:w2v}.
\paragraph{Discussion of Results.} \figurename~\ref{fig:hit10} (right) shows that, similarly to Russian, both regularization strategies lead to consistent improvements over the non-regularized baseline. \tablename~\ref{tab:performance:en} presents detailed results for both English datasets. Similarly to the first experiment, our approach improves results robustly across various configurations. As we change the number of clusters, the type of embeddings, the size of the training data and the type of relations used for negative sampling, results using our method stay superior to those of the baseline. The regularizers without re-projection ($\vec{x}\mathbf{\Phi}$) obtain lower results in most configurations as compared to the re-projected versions ($\vec{x}\mathbf{\Phi\Phi}$). Overall, neighbor regularization yields slightly better results than asymmetric regularization. We attribute this to the fact that some synonyms $\vec{z}$ are close to the original hyponym $\vec{x}$, while others can be distant; thus, neighbor regularization is able to safeguard the model from more errors during training. This is also a likely reason why the performance of both regularizers is similar: asymmetric regularization makes sure that a re-projected vector does not belong to a semantic neighborhood of the hyponym, which is essentially what neighbor regularization achieves as well. Note, however, that neighbor regularization requires explicit negative examples, while asymmetric regularization does not.
\section{Conclusion}
In this study, we presented a new model for extraction of hypernymy relations based on the projection of distributional word vectors. The model incorporates information about explicit negative training instances represented by relations of other types, such as synonyms and co-hyponyms, and enforces asymmetry of the projection operation. Our experiments in the context of the hypernymy prediction task for English and Russian languages show significant improvements of the proposed approach over the state-of-the-art model without negative sampling.
\section*{Acknowledgments}
We acknowledge the support of the Deutsche For\-schungs\-gemeinschaft (DFG) foundation under the ``JOIN-T'' project, the Deutscher Akademischer Austauschdienst (DAAD), the Russian Foundation for Basic Research (RFBR) under the project no.~16-37-00354 mol\_a, and the Russian Foundation for Humanities under the project no.~16-04-12019 ``RussNet and YARN thesauri integration''. We also thank Microsoft for providing computational resources under the Microsoft Azure for Research award. Finally, we are grateful to Benjamin Milde, Andrey Kutuzov, Andrew Krizhanovsky, and Martin Riedl for discussions and suggestions related to this study.
\section{Introduction}
Randomization-based inference centers around the idea that the treatment assignment mechanism is the only stochastic element in a randomized experiment and thus acts as the basis for conducting statistical inference.\citep{fisher1935design} In general, a central tenet of randomization-based inference is that the analysis of any given experiment should reflect its design: The inference for completely randomized experiments, blocked randomized experiments, and other designs should reflect the actual assignment mechanism that was used during the experiment. The idea that the assignment mechanism is the only stochastic element of an experiment is also commonly employed in the potential outcomes framework,\citep{neyman1923} which is now regularly used when estimating causal effects in randomized experiments and observational studies.\citep{rubin1974estimating,rubin2005causal} While randomization-based inference focuses on estimating causal effects for only the finite sample at hand, it can flexibly incorporate any kind of assignment mechanism without model specifications. Rosenbaum\cite{rosenbaum2002observational} provides a comprehensive review of randomization-based inference.
An essential step to estimating causal effects within the randomization-based inference framework as well as the potential outcomes framework is to state the probability distribution of the assignment mechanism. For simplicity, we focus on treatment-versus-control experiments, but our discussion can be extended to experiments with multiple treatments. Let the vector $\mathbf{W}$ denote the assignment mechanism for $N$ units in an experiment or observational study. It is commonly assumed that the probability distribution of $\mathbf{W}$ can be written as a product of independent Bernoulli trials that may depend on background covariates:\citep{rosenbaum2002covariance,rubin2007design,rubin2008objective}
\begin{align}
P(\mathbf{W} = \mathbf{w} | \mathbf{X}) = \prod_{i=1}^N e(\mathbf{x}_i)^{w_i} [1 - e(\mathbf{x}_i)]^{1 - w_i}, \hspace{0.05 in } \text{where } 0 < e(\mathbf{x}_i) < 1 \hspace{0.05 in} \forall i =1,\dots,N \label{eqn:psModel}
\end{align}
Here, $\mathbf{X}$ is a $N \times p$ covariate matrix with rows $\mathbf{x}_i$, and $e(\mathbf{x}_i)$ denotes the probability that the $i^{\text{th}}$ unit receives treatment conditional on pre-treatment covariates $\mathbf{x}_i$; i.e., $e(\mathbf{x}_i) \equiv P(W_i = 1 | \mathbf{x}_i)$. The probabilities $e(\mathbf{x}_i)$ are commonly known as propensity scores.\citep{rosenbaum1983central} An assignment mechanism that can be written as (\ref{eqn:psModel}) is known as an unconfounded, strongly ignorable assignment mechanism.\citep{rubin2008objective} The assumption of an unconfounded, strongly ignorable assignment mechanism is essential to propensity score analyses and other methodologies (e.g., regression-based methods) for analyzing observational studies.\citep{dehejia2002propensity,sekhon2009opiates,stuart2010matching,austin2011introduction}
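For illustration, the assignment probability in (\ref{eqn:psModel}) can be computed directly; the propensity scores and assignment vector below are hypothetical:

```python
import numpy as np

def assignment_prob(w, e):
    """P(W = w | X) under the Bernoulli-trial assignment mechanism:
    a product of independent Bernoulli(e_i) draws.

    w: (N,) binary assignment vector; e: (N,) propensity scores in (0,1).
    """
    return np.prod(e**w * (1 - e)**(1 - w))

# Completely randomized special case: e_i = 0.5 for all units, so
# every assignment vector is equally likely.
e = np.array([0.5, 0.5, 0.5, 0.5])
w = np.array([1, 0, 1, 0])
assert assignment_prob(w, e) == 0.5**4
```

With unequal propensity scores the same formula applies, each unit simply contributing its own factor $e(\mathbf{x}_i)^{w_i}[1-e(\mathbf{x}_i)]^{1-w_i}$.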
In randomized experiments, the propensity scores are defined by the designer(s) of the experiment and are thus known; this knowledge is all that is needed to construct unbiased estimates for average treatment effects.\citep{rubin2008objective} The propensity score $e(\mathbf{x}_i)$ is not necessarily a function of all or any of the covariates: For example, in completely randomized experiments, $e(\mathbf{x}_i) = 0.5$ for all units; and for blocked-randomized and paired experiments, the propensity scores are equal for all units within the same block or pair.
In observational studies, the propensity scores are not known, and instead must be estimated. The $e(\mathbf{x}_i)$ in (\ref{eqn:psModel}) are often estimated using logistic regression, but any model that estimates conditional probabilities for a binary treatment can be used. These estimates, $\hat{e}(\mathbf{x}_i)$, are commonly employed to ``reconstruct'' a hypothetical experiment that yielded the observed data.\citep{rubin2008objective} For example, matching methodologies are used to obtain subsets of treated and control units that are balanced in terms of pre-treatment covariates; these subsets are then analyzed as if they came from a completely randomized experiment.\citep{ho2007matching,rubin2008objective,stuart2010matching} Others have suggested regression-based adjustments combined with the propensity score\cite{robins1995semiparametric,rubin2000combining} as well as Bayesian modeling.\cite{rubin1978bayesian,rubin2005causal,zigler2014uncertainty} Notably, all of these methodologies implicitly assume the Bernoulli trial assignment mechanism shown in (\ref{eqn:psModel}), but the subsequent analyses reflect a completely randomized, blocked-randomized, or paired assignment mechanism instead. One methodology commonly employed in observational studies that more closely reflects a Bernoulli trial assignment mechanism is inverse propensity score weighting;\citep{hirano2001estimation,hirano2003efficient,lunceford2004stratification,hernan2006estimating} however, the variance of such estimators is unstable, especially when estimated propensity scores are particularly close to 0 or 1, which is an ongoing concern in the literature.\citep{cole2008constructing,austin2015moving} Furthermore, the validity of such point estimates and uncertainty intervals relies on asymptotic arguments and an infinite-population interpretation.
More importantly, all of the above methodologies---matching, frequentist or Bayesian modeling, inverse propensity score weighting, or any combination of them---assume the strongly ignorable assignment mechanism shown in (\ref{eqn:psModel}), but they also intrinsically make additional modeling or asymptotic assumptions. On the other hand, although randomization-based inference methodologies also make the common assumption of the strongly ignorable assignment mechanism, they do not require any additional model specifications or asymptotic arguments.
However, while there is a wide literature on randomization tests, most have focused on assignment mechanisms where the propensity scores are assumed to be the same across units (i.e., completely randomized experiments) or groups of units (i.e., blocked or paired experiments), instead of the more general case where they may differ across all units, as in (\ref{eqn:psModel}). Imbens and Rubin\cite{imbens2015causal} briefly mention Bernoulli trial experiments, but only discuss inference for purely randomized and block randomized designs. Another example is Basu,\cite{basu1980randomization} who thoroughly discusses Fisherian randomization tests and briefly considers Bernoulli trial experiments, but does not provide a randomization-test framework for such experiments. This trend continues for observational studies: Most randomization tests for observational studies utilize permutations of the treatment indicator within covariate strata, and thus reflect a block-randomized assignment mechanism instead of the assumed Bernoulli trial assignment mechanism.\citep{rosenbaum1984conditional,rosenbaum1988permutation,rosenbaum2002covariance} While these tests are valid under certain assumptions, they are not immediately applicable to cases where covariates are not easily stratified (e.g., continuous covariates) or where there is not at least one treated unit and one control unit in each stratum.\cite{rosenbaum2002observational} None of these randomization tests are applicable to cases where the propensity scores (known or unknown) differ across all units.
Most randomization tests that incorporate varying propensity scores focus on the biased-coin design popularized by Efron\cite{efron1971forcing}, where propensity scores are dependent on the order units enter the experiment and possibly pre-treatment covariates as well. Wei\cite{wei1978application} and Soares and Wu\cite{soares1983some} developed extensions for this experimental design, while Smythe and Wei\cite{smythe1983significance}, Wei\cite{wei1988exact}, and Mehta et al.\cite{mehta1988constructing} developed significance tests for such designs. Good\cite{good2013permutation} (Section 4.5) provides further discussion on this literature. The biased-coin design is related to covariate-adaptive randomization schemes in the clinical trial literature, starting with the work of Pocock and Simon.\cite{pocock1975sequential} Covariate-adaptive randomization schemes sequentially randomize units such that the treatment and control groups are balanced in terms of pre-treatment covariates,\cite{loux2013simple,lin2015pursuit,zagoraiou2017choosing} and recent works in the statistics literature have explored valid randomization tests for covariate-adaptive randomization schemes.\cite{simon2011using,shao2013validity} Importantly, the randomization test literature for biased-coin and covariate-adaptive designs differs from the randomization test presented here: All of these works focus on sequential designs, and thus depend on the sequential dependence among units inherent in the randomization scheme. In contrast, we assume that all units are simultaneously assigned to treatment according to the strongly ignorable assignment mechanism (\ref{eqn:psModel}).
To the best of our knowledge, there is no explicit randomization-based inference framework for analyzing Bernoulli trial experiments, let alone observational studies. Here we develop such a framework for randomized experiments characterized by Bernoulli trials, with the implication that this framework can be extended to the observational study literature as well. In particular, we develop rejection-sampling and importance-sampling approaches for conducting conditional randomization-based inference for Bernoulli trial experiments, which has not been previously discussed in the literature. These approaches allow one to conduct randomization tests conditional on statistics of interest for more precise inference.
In Section \ref{s:randomizationInferenceReview}, we review randomization-based inference in general, including randomization tests and how these tests can be inverted to yield point estimates and confidence intervals. In Section \ref{s:bernoulliTrials}, we develop a randomization-based inference framework for Bernoulli trial experiments, first reviewing the case where propensity scores are equal across units, and then extending this framework to the general case where propensity scores differ across units. Furthermore, we establish that randomization tests under this framework are valid tests, both unconditionally and conditional on statistics of interest. In Section \ref{s:simulationExample}, we demonstrate our framework with a simple example and provide simulation evidence for how our rejection-sampling and importance-sampling approaches can yield statistically powerful conditional randomization tests. In Section \ref{s:discussion}, we discuss extensions and implications of this work, particularly for observational studies.
\section{Review of Randomization-Based Inference} \label{s:randomizationInferenceReview}
Randomization-based inference focuses on randomization tests for treatment effects, which can be inverted to obtain both point estimates and confidence intervals. Randomization tests were first proposed by Fisher,\cite{fisher1935design} and foundational theory for these tests was later developed by Pitman\cite{pitman1938significance} and Kempthorne.\cite{kempthorne1952design} We follow the notation of Imbens and Rubin\cite{imbens2015causal} in our discussion of randomization tests for treatment-versus-control experiments.
\subsection{Notation}
Randomization tests utilize the potential outcomes framework, where the only stochastic element of an experiment is the treatment assignment. Let
\begin{align}
W_i = \begin{cases}
1 &\mbox{ if } \text{the $i^{\text{th}}$ unit receives treatment} \\
0 &\mbox{ if } \text{the $i^{\text{th}}$ unit receives control}
\end{cases}
\end{align}
denote the treatment assignment, and let $Y_i(W_i)$ denote the $i^{\text{th}}$ unit's potential outcome, which only depends on the treatment assignment $W_i$. Only $Y_i(1)$ or $Y_i(0)$ is ultimately observed at the end of an experiment---never both. Let
\begin{align}
y_i^{obs} = Y_i(1) W_i + Y_i(0)(1 - W_i)
\end{align}
denote the observed outcomes. Finally, let $\mathbb{W} \equiv \{0, 1\}^N$ denote the set of all possible treatment assignments, and let $\mathbb{W}^+ \subset \mathbb{W}$ denote the subset of $\mathbb{W}$ with positive probability, i.e., $\mathbb{W}^+ = \{\mathbf{w} \in \mathbb{W} : P(\mathbf{W} = \mathbf{w}) > 0\}$.
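A small worked example of this notation (the potential outcomes and assignment values are hypothetical):

```python
import itertools
import numpy as np

# Toy potential outcomes for N = 3 units (hypothetical values).
Y1 = np.array([5.0, 3.0, 7.0])   # Y_i(1)
Y0 = np.array([4.0, 3.0, 6.0])   # Y_i(0)

W = np.array([1, 0, 1])                 # one realized assignment
y_obs = Y1 * W + Y0 * (1 - W)           # observed outcomes
assert np.array_equal(y_obs, [5.0, 3.0, 7.0])

# The assignment space {0,1}^N has 2^N elements; W+ keeps those with
# positive probability (here all of them, since each unit is an
# independent Bernoulli(0.5) draw with 0 < e_i < 1).
W_all = list(itertools.product([0, 1], repeat=3))
assert len(W_all) == 2**3
```

Only `y_obs` is ever observed; the missing potential outcomes are what randomization-based inference reasons about.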
Importantly, the probability distribution of treatment assignments, $P(\mathbf{W})$, fully characterizes the assignment mechanism: Because treatment assignment is the only stochastic element in a randomized experiment, the distribution $P(\mathbf{W})$ specifies the randomness in a randomized experiment. Consequentially, inference within the randomization-based framework is determined by $P(\mathbf{W})$.
We first review how $P(\mathbf{W})$ is used to perform randomization tests. We then discuss how to invert these tests to obtain point estimates and confidence intervals for the average treatment effect.
\subsection{Testing the Sharp Null Hypothesis via Randomization Tests} \label{ss:testingFishersSharpNull}
The most common use of randomization tests is to test the Sharp Null Hypothesis, which is
\begin{align}
H_0: Y_i(1) = Y_i(0) \hspace{0.05 in} \forall i = 1, \dots, N \label{sharpNull}
\end{align}
i.e., the hypothesis that there is no treatment effect. Under the Sharp Null Hypothesis, the outcomes for \textit{any} randomization from the set of all possible randomizations $\mathbb{W}^+$ are known: Regardless of a unit's treatment assignment, its outcome will always equal the observed response $y_i^{obs}$. This knowledge allows one to test the Sharp Null Hypothesis.
To test this hypothesis, one first chooses a suitable test statistic
\begin{align}
t \big(Y(\mathbf{W}), \mathbf{W} \big) \label{testStatistic}
\end{align}
and determines whether the observed test statistic $t^{obs} \equiv t(\mathbf{y}^{obs}, \mathbf{W}^{obs})$ is unlikely to occur according to the randomization distribution of the test statistic (\ref{testStatistic}) under the Sharp Null Hypothesis. For example, one common choice of test statistic is the difference in mean response between treatment and control units, defined as
\begin{align}
t \big(Y(\mathbf{W}), \mathbf{W} \big) = \frac{\sum_{i: W_i = 1} Y_i(1)}{\sum_{i=1}^N W_i} - \frac{\sum_{i: W_i = 0} Y_i(0)}{\sum_{i=1}^N (1-W_i)} \label{eqn:meanDiffEstimator}
\end{align}
Such a test statistic will be powerful in detecting a difference in means between the distributions of $Y_i(1)$ and $Y_i(0)$. In general, one should choose a test statistic according to the possible differences between the distributions of $Y_i(1)$ and $Y_i(0)$ that one is most interested in detecting. See Rosenbaum\cite{rosenbaum2002observational} (Chapter 2) for a discussion of the choice of test statistics for randomization tests.
After a test statistic is chosen, a randomization-test $p$-value can be computed by comparing the observed test statistic $t^{obs}$ to the set of $t \big( Y(\mathbf{W}), \mathbf{W} \big)$ that are possible given the set of possible treatment assignments $\mathbb{W}^+$, assuming the Sharp Null Hypothesis is true. The two-sided randomization-test $p$-value is
\begin{align}
P \big( |t \big(Y(\mathbf{W}), \mathbf{W} \big)| \geq |t^{obs}| \big) &= \sum_{\mathbf{w} \in \mathbb{W}^+} \mathbb{I} \big( \big| t \big(Y(\mathbf{w}), \mathbf{w} \big) \big| \geq | t^{obs} | \big)P(\mathbf{W} = \mathbf{w}) \label{randomizationTestPValue}
\end{align}
where $\mathbb{I}(A) = 1$ if event $A$ occurs and zero otherwise. Importantly, the randomization-test $p$-value (\ref{randomizationTestPValue}) depends on the set of possible treatment assignments $\mathbb{W}^+$, the probability distribution $P(\mathbf{W})$, and the choice of test statistic $t \big( Y(\mathbf{W}), \mathbf{W} \big)$.
Thus, testing the Sharp Null Hypothesis is a three-step procedure:
\begin{enumerate}
\item Specify the distribution $P(\mathbf{W})$ (and, consequentially, the set of possible treatment assignments $\mathbb{W}^+$).
\item Choose a test statistic $t\big(Y(\mathbf{W}), \mathbf{W} \big)$.
\item Compute or approximate the $p$-value (\ref{randomizationTestPValue}).
\end{enumerate}
All randomization tests discussed in this paper follow this three-step procedure, with the only difference among them being the choice of $P(\mathbf{W})$, i.e., the first step. As for the third step, exactly computing the randomization-test $p$-value is often computationally intensive because it requires enumerating all possible $\mathbf{w} \in \mathbb{W}^+$; instead, the $p$-value can be approximated. A typical approximation is to generate a random sample $\mathbf{w}^{(1)}, \dots, \mathbf{w}^{(M)}$ from $P(\mathbf{W})$, and then approximate the $p$-value (\ref{randomizationTestPValue}) by
\begin{align}
P \big( \big| t \big(Y(\mathbf{W}), \mathbf{W} \big) \big| \geq |t^{obs}| \big) &\approx \frac{ \sum_{m=1}^M \mathbb{I} \big( \big| t \big(Y(\mathbf{w}^{(m)}), \mathbf{w}^{(m)} \big) \big| \geq |t^{obs}| \big)}{M} \label{randomizationTestPValueApproximationSimple}
\end{align}
Importantly, the approximation (\ref{randomizationTestPValueApproximationSimple}) still depends on the probability distribution of the assignment mechanism, $P(\mathbf{W})$, because the random samples $\mathbf{w}^{(1)}, \dots, \mathbf{w}^{(M)}$ are generated using $P(\mathbf{W})$. This distinction will be important in our discussion of Bernoulli trial experiments, where the probability of receiving treatment---i.e., the propensity scores---may be equal or non-equal across units. In both cases, the set $\mathbb{W}^+$ is the same, but the probability distribution $P(\mathbf{W})$ is different.
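As a concrete illustration, the following Python sketch approximates the $p$-value (\ref{randomizationTestPValueApproximationSimple}) for a Bernoulli trial with equal propensity scores. The difference-in-means statistic and all function names are illustrative choices, not prescribed by the text.

```python
import numpy as np

def mc_pvalue_sharp_null(y_obs, w_obs, draw_w, n_draws=1000):
    """Monte Carlo approximation of the two-sided randomization-test
    p-value under the Sharp Null: Y(w) = y_obs for every w, so the
    test statistic is recomputed on y_obs for each sampled assignment."""
    def mean_diff(y, w):
        # difference-in-means test statistic
        return y[w == 1].mean() - y[w == 0].mean()

    t_obs = abs(mean_diff(y_obs, w_obs))
    hits = sum(abs(mean_diff(y_obs, draw_w())) >= t_obs
               for _ in range(n_draws))
    return hits / n_draws

def bernoulli_half_draw(rng, n):
    """Draw from P(W): n independent fair coin flips, redrawn whenever
    the degenerate all-0 or all-1 vector occurs."""
    while True:
        w = rng.integers(0, 2, size=n)
        if 0 < w.sum() < n:
            return w
```

With a large observed treatment-control difference, the approximated $p$-value is driven toward zero, since almost no hypothetical assignment reproduces so extreme a statistic.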
Testing the Sharp Null Hypothesis will provide information about the presence of any treatment effect amongst all units in the study. Furthermore, this test can be inverted to obtain point estimates and confidence intervals for the treatment effect.
\subsection{Randomization-based Point Estimates and Confidence Intervals for the Treatment Effect} \label{ss:confidenceIntervals}
A confidence interval can be constructed by inverting a variation of the Sharp Null Hypothesis that assumes an additive treatment effect. A randomization-based confidence interval for the average treatment effect is the set of $\tau \in \mathbb{R}$ such that one fails to reject the hypothesis
\begin{align}
H_0^{\tau}: Y_i(1) = Y_i(0) + \tau \hspace{0.05 in} \forall i = 1, \dots, N \label{sharpNullTau}
\end{align}
The above hypothesis is a sharp hypothesis in the sense that, under $H_0^{\tau}$, every unit's outcome for any treatment assignment is known: Under $H_0^{\tau}$, the missing potential outcome of any treated unit would be $y_i^{obs} - \tau$; likewise, the missing potential outcome of any control unit would be $y_i^{obs} + \tau$. Thus, for any hypothetical treatment assignment $\mathbf{w} \in \mathbb{W}^+$, one can calculate the corresponding potential outcomes $Y(\mathbf{w})$ under $H_0^{\tau}$ in terms of the observed outcomes $\mathbf{y}^{obs}$ and observed treatment assignment $\mathbf{w}^{obs}$:
\begin{align}
Y_i(w_i) &= y_i^{obs} + \tau (w_i - w_i^{obs}), \hspace{0.05 in} \forall i = 1, \dots, N \label{eqn:hypotheticalPotentialOutcomesTau}
\end{align}
Therefore, one can obtain a $p$-value for the hypothesis $H_0^{\tau}$ by drawing many hypothetical randomizations $\mathbf{w}^{(1)},\dots, \mathbf{w}^{(M)}$ from $P(\mathbf{W})$, computing each $Y(\mathbf{w}^{(m)})$ using (\ref{eqn:hypotheticalPotentialOutcomesTau}), and then using (\ref{randomizationTestPValueApproximationSimple}) to approximate the $p$-value for any given test statistic $t(Y(\mathbf{W}), \mathbf{W})$.
To construct a 95\% confidence interval, one considers many $\tau$ (e.g., via a line search), tests the hypothesis $H_0^{\tau}$ for each $\tau$, and defines the confidence interval as the set of $\tau$ with corresponding $p$-values above 0.05.\citep{rosenbaum2002observational,imbens2015causal} Importantly, the confidence interval will depend on the probability distribution $P(\mathbf{W})$ through the draws $\mathbf{w}^{(1)},\dots,\mathbf{w}^{(M)}$ to compute each $p$-value; thus, the confidence interval will reflect a prespecified assignment mechanism. As we discuss in Section \ref{ss:acceptRejectProcedure}, this also allows one to flexibly construct confidence intervals that condition on particular statistics of interest.
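To make the inversion concrete, here is a hedged Python sketch that tests $H_0^{\tau}$ over a grid of $\tau$ values and returns those not rejected at level $\alpha$. The centered difference-in-means statistic $|t - \tau|$ is one common, illustrative choice when inverting additive-effect hypotheses; the function names are not from the text.

```python
import numpy as np

def invert_tau_tests(y_obs, w_obs, taus, n_draws=500, alpha=0.05, seed=0):
    """Approximate (1 - alpha) confidence set for tau: keep each tau in
    the grid whose randomization-test p-value exceeds alpha."""
    rng = np.random.default_rng(seed)
    n = len(y_obs)

    def mean_diff(y, w):
        return y[w == 1].mean() - y[w == 0].mean()

    # Pre-draw assignments from P(W): fair coins, excluding 0_N and 1_N
    draws = []
    while len(draws) < n_draws:
        w = rng.integers(0, 2, size=n)
        if 0 < w.sum() < n:
            draws.append(w)

    kept = []
    for tau in taus:
        t_obs = abs(mean_diff(y_obs, w_obs) - tau)
        hits = 0
        for w in draws:
            y_tau = y_obs + tau * (w - w_obs)  # impute Y(w) under H_0^tau
            hits += abs(mean_diff(y_tau, w) - tau) >= t_obs
        if hits / n_draws > alpha:
            kept.append(tau)
    return kept
```

The $\tau$ in the returned set with the highest $p$-value would serve as the point estimate $\hat{\tau}$ described above.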
Testing the hypothesis $H_0^{\tau}$ also yields a natural point estimate: Define the point estimate $\hat{\tau}$ as the $\tau$ such that the $p$-value for testing the hypothesis $H_0^{\tau}$ is maximized. For example, given a 95\% confidence interval containing $\tau$ with corresponding $p$-values above 0.05, $\hat{\tau}$ is defined as the $\tau$ with the highest $p$-value. The interpretation of such a $\hat{\tau}$ is that this is the ``most probable'' $\tau$ under the assumption of an additive treatment effect. This point estimate is a variant of the Hodges-Lehmann randomization-based point estimate, which equates the test statistic under the hypothesis $H_0^{\tau}$ to its expectation under the randomization distribution.\citep{hodges1963estimates,rosenbaum2002observational}
Some have criticized randomization-based confidence intervals constructed by inverting hypotheses such as (\ref{sharpNullTau}) because such intervals assume a homogeneous treatment effect, which may be an inappropriate assumption. However, in general, confidence intervals can be constructed using any Sharp Null Hypothesis that fully specifies unit-level treatment effects, including sharp null hypotheses that specify heterogeneous treatment effects.\citep{caughey2016beyond} Thus, while we focus on the homogeneous treatment effects assumed in (\ref{sharpNullTau}), the randomization test framework that we present below can be extended to point estimates and confidence intervals that account for treatment effect heterogeneity, to the extent that one can specify sharp null hypotheses incorporating heterogeneous treatment effects.
\section{Randomization-based Inference for Bernoulli Trial Experiments} \label{s:bernoulliTrials}
Here we consider experimental designs that are characterized by Bernoulli trials and develop randomization tests for these designs. First, we review randomization tests for experimental designs where the probability of receiving treatment is the same for all units; this will motivate our development of randomization tests for experimental designs where the probability of receiving treatment differs across units, which is our main contribution. For both cases---first when the propensity scores are equal across units, and then when the propensity scores differ---we will discuss several assignment mechanisms $P(\mathbf{W})$ and sets of possible treatment assignments $\mathbb{W}^+$, which correspond to different randomization tests. Once $P(\mathbf{W})$ and $\mathbb{W}^+$ are specified, the Sharp Null Hypothesis can be tested by following the three-step procedure in Section \ref{ss:testingFishersSharpNull}; furthermore, these tests can be inverted to yield point estimates and confidence intervals, as discussed in Section \ref{ss:confidenceIntervals}. For each test, we will state an explicit form for $P(\mathbf{W} = \mathbf{w})$ for any $\mathbf{w} \in \mathbb{W}^+$ to compute the randomization test $p$-value (\ref{randomizationTestPValue}) exactly, and we will also state how random samples $\mathbf{w}^{(1)}, \dots, \mathbf{w}^{(M)}$ can be generated to approximate this $p$-value using (\ref{randomizationTestPValueApproximationSimple}). In Section \ref{ss:acceptRejectProcedure}, we introduce rejection-sampling and importance-sampling approaches to perform randomization tests conditional on various statistics of interest, which has not been previously considered for randomization-based inference for Bernoulli trial experiments.
\subsection{Case 1: Propensity Scores are Equal Across Units} \label{ss:equalProbabilities}
Let $e(\mathbf{x}_i) = P(W_i = 1 | \mathbf{x}_i)$ denote the propensity score, i.e., the probability that the $i^{\text{th}}$ unit receives treatment, given a vector of pre-treatment covariates $\mathbf{x}_i$. In this section we assume for simplicity that $e(\mathbf{x}_i) = 0.5$ for all $i = 1,\dots,N$; i.e., $P(W_i = 1 | \mathbf{x}_i) = P(W_i = 1) = 0.5$ for all units. We consider several sets of possible treatment assignments $\mathbb{W}^+$ and note the corresponding $P(\mathbf{W} = \mathbf{w})$ for each $\mathbf{w} \in \mathbb{W}^+$, which can be used to compute the $p$-value (\ref{randomizationTestPValue}) for testing the Sharp Null Hypothesis.
First consider the set $\mathbb{W}^+ = \mathbb{W} = \{0,1\}^N$, i.e., experiments that are characterized by independent, unbiased coin flips, where any number of units can receive treatment or control. In this case, $P(\mathbf{W} = \mathbf{w}) = \frac{1}{2^N}$ for all $\mathbf{w} \in \mathbb{W}^+$. To generate random draws $\mathbf{w}^{(1)}, \dots, \mathbf{w}^{(M)}$, one simply flips $N$ unbiased coins to generate an $N$-dimensional vector of 0s and 1s.
However, Imbens and Rubin\cite{imbens2015causal} note that when $\mathbb{W}^+ = \{0, 1\}^N$, there is a non-zero probability of $\mathbf{W} = \mathbf{0}_N \equiv (0, \dots, 0)$ or $\mathbf{W} = \mathbf{1}_N \equiv (1, \dots, 1)$. In these cases, most test statistics are undefined, and so they do not consider this case further. This concern can be addressed either by defining test statistics for these cases (a common choice being zero) or by instead considering the set $\mathbb{W}^+ = \{0, 1\}^N \setminus \{\mathbf{0}_N, \mathbf{1}_N\}$ of possible treatment assignments. In this case, $P(\mathbf{W} = \mathbf{w}) = \frac{1}{2^N - 2}$ for all $\mathbf{w} \in \mathbb{W}^+$. To generate random draws $\mathbf{w}^{(1)}, \dots, \mathbf{w}^{(M)}$, one simply flips $N$ unbiased coins and only accepts a random draw $\mathbf{w}^{(m)}$ if it is not $\mathbf{0}_N$ or $\mathbf{1}_N$. This follows the argument of Imbens and Rubin\cite{imbens2015causal} that preventing ``unhelpful treatment allocations'' will yield more precise inferences for treatment effects.
Indeed, we can even further restrict $\mathbb{W}^+$. It is common to condition on statistics such as the number of units that receive treatment $N_T \equiv \sum_{i=1}^N W_i$. When $\mathbb{W}^+ = \{ \mathbf{W} \in \mathbb{W} | \sum_{i=1}^N W_i = N_T\}$ for some prespecified $N_T$, $P(\mathbf{W} = \mathbf{w}) = \frac{1}{ {N \choose N_T} }$ for all $\mathbf{w} \in \mathbb{W}^+$. To generate random draws $\mathbf{w}^{(1)}, \dots, \mathbf{w}^{(M)}$, one simply flips $N$ unbiased coins and only accepts a random draw $\mathbf{w}^{(m)}$ if $\sum_{i=1}^N w_i^{(m)} = N_T$; equivalently, one can obtain such random draws by randomly permuting the observed treatment assignment $\mathbf{W}^{obs}$. A randomization test that uses such a $\mathbb{W}^+$ and $P(\mathbf{W})$ is the most common randomization test in the literature and corresponds to what is typically referred to as a ``completely randomized'' experimental design.\citep{imbens2015causal} Because of the equivalence to random permutations of $\mathbf{W}^{obs}$, this randomization test is also often called a permutation test.
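A sketch of this classical permutation test in Python (illustrative names; difference-in-means statistic): conditioning on $N_T$ with equal propensity scores, draws from $P(\mathbf{W} | \sum_{i=1}^N W_i = N_T)$ are simply random permutations of the observed assignment.

```python
import numpy as np

def permutation_pvalue(y_obs, w_obs, n_draws=1000, seed=0):
    """Permutation-test p-value: each draw permutes w_obs, which is a
    uniform draw over assignments with the observed number of treated
    units N_T."""
    rng = np.random.default_rng(seed)

    def mean_diff(y, w):
        return y[w == 1].mean() - y[w == 0].mean()

    t_obs = abs(mean_diff(y_obs, w_obs))
    hits = sum(abs(mean_diff(y_obs, rng.permutation(w_obs))) >= t_obs
               for _ in range(n_draws))
    return hits / n_draws
```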
\subsection{Case 2: Propensity Scores Differ Across Units} \label{ss:unequalProbabilities}
Now consider the case where $e(\mathbf{x}_i) \neq e(\mathbf{x}_j)$ for some $i \neq j$, i.e., where the propensity scores differ across units. This may be due to differences in the covariate vectors $\mathbf{x}_i$ and $\mathbf{x}_j$ or some other experimental design prespecification. Again we consider several sets of possible treatment assignments $\mathbb{W}^+$, note the corresponding $P(\mathbf{W} = \mathbf{w} | \mathbf{X})$ for each $\mathbf{w} \in \mathbb{W}^+$, and state how to generate random draws $\mathbf{w}^{(1)}, \dots, \mathbf{w}^{(M)}$, which can be used to compute or approximate the $p$-value for testing the Sharp Null Hypothesis.
First consider the set $\mathbb{W}^+ = \mathbb{W} = \{0,1\}^N$. In this case,
\begin{align}
P(\mathbf{W} = \mathbf{w} | \mathbf{X}) = \prod_{i=1}^N e(\mathbf{x}_i)^{w_i}[1 - e(\mathbf{x}_i)]^{1 - w_i}
\end{align}
which is identical to the assignment mechanism (\ref{eqn:psModel}) typically assumed in observational studies. To generate random draws $\mathbf{w}^{(1)}, \dots, \mathbf{w}^{(M)}$, one simply flips $N$ \textit{biased} coins with probabilities corresponding to the $e(\mathbf{x}_i)$ to generate an $N$-dimensional vector of 0s and 1s.
However, there is still a chance---though small---that a random draw $\mathbf{w}$ from $\mathbb{W}^+ = \{0, 1\}^N$ will be equal to $\mathbf{0}_N$ or $\mathbf{1}_N$, in which case most test statistics will be undefined. Now consider the restricted set $\mathbb{W}^+ = \{0, 1\}^N \setminus \{\mathbf{0}_N, \mathbf{1}_N\}$. In this case,
\begin{align}
P(\mathbf{W} = \mathbf{w} | \mathbf{X}) = \frac{\prod_{i=1}^N e(\mathbf{x}_i)^{w_i}[1 - e(\mathbf{x}_i)]^{1 - w_i}}{1 - \prod_{i=1}^N e(\mathbf{x}_i) - \prod_{i=1}^N [1 - e(\mathbf{x}_i)]} \label{eqn:biasedCoinRestricted01Probabilities}
\end{align}
To arrive at this result, note that when $\mathbb{W}^+ = \{0, 1\}^N \setminus \{\mathbf{0}_N, \mathbf{1}_N\}$,
\begin{align}
\sum_{\mathbf{w} \in \mathbb{W}^+} \prod_{i=1}^N e(\mathbf{x}_i)^{w_i}[1 - e(\mathbf{x}_i)]^{1 - w_i} = 1 - \prod_{i=1}^N e(\mathbf{x}_i) - \prod_{i=1}^N [1 - e(\mathbf{x}_i)]
\end{align}
Thus, the probabilities (\ref{eqn:biasedCoinRestricted01Probabilities}) sum to one. To generate random draws $\mathbf{w}^{(1)}, \dots, \mathbf{w}^{(M)}$, one simply flips $N$ \textit{biased} coins and only accepts a random draw $\mathbf{w}^{(m)}$ if it is not $\mathbf{0}_N$ or $\mathbf{1}_N$.
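For small $N$, this normalization can be verified by brute-force enumeration. A minimal Python sketch (the function name is illustrative, and enumeration is feasible only for small $N$):

```python
import itertools
import numpy as np

def restricted_probs(e):
    """Enumerate W+ = {0,1}^N minus the all-0 and all-1 vectors and
    return the renormalized Bernoulli probability of each assignment,
    as in the restricted-set formula above."""
    e = np.asarray(e, dtype=float)
    N = len(e)
    norm = 1.0 - np.prod(e) - np.prod(1.0 - e)  # normalizing constant
    probs = {}
    for w in itertools.product([0, 1], repeat=N):
        if 0 < sum(w) < N:  # exclude 0_N and 1_N
            w_arr = np.array(w)
            probs[w] = float(np.prod(e**w_arr * (1 - e)**(1 - w_arr))) / norm
    return probs
```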
Again, we can further restrict $\mathbb{W}^+$ to incorporate certain statistics of interest, such as the number of units assigned to treatment. Consider the set $\mathbb{W}^+ = \{\mathbf{W} \in \mathbb{W} | \sum_{i=1}^N W_i = N_T\}$ for some prespecified $N_T$. In this case,
\begin{align}
P(\mathbf{W} = \mathbf{w} | \mathbf{X}) = \frac{\prod_{i=1}^N e(\mathbf{x}_i)^{w_i}[1 - e(\mathbf{x}_i)]^{1 - w_i}}{P(\sum_{i=1}^N W_i = N_T | \mathbf{X}) } \label{eqn:probabilityConditionalNt}
\end{align}
The denominator, $P(\sum_{i=1}^N W_i = N_T | \mathbf{X}) = \sum_{\mathbf{w} \in \mathbb{W}^+} \prod_{i=1}^N e(\mathbf{x}_i)^{w_i} [1 - e(\mathbf{x}_i)]^{1-w_i}$, is seemingly difficult to compute, due to the large number, ${N \choose N_T }$, of possible treatment assignments $\mathbf{w} \in \mathbb{W}^+$. Chen and Liu\cite{chen1997statistical} provide an algorithm to compute $P(\sum_{i=1}^N W_i = N_T | \mathbf{X})$ exactly. Alternatively, this quantity can be estimated in a number of ways. One option is to randomly sample $\mathbf{w}^{(1)}, \dots, \mathbf{w}^{(M)}$ from $\mathbb{W}^+$ and use the unbiased estimator
\begin{align}
\widehat{P} \left(\sum_{i=1}^N W_i = N_T | \mathbf{X} \right) = \frac{ {N \choose N_T} }{M} \sum_{m = 1}^M \prod_{i=1}^N e(\mathbf{x}_i)^{w_i^{(m)}} [1 - e(\mathbf{x}_i)]^{1-w_i^{(m)}} \label{sumApproximation}
\end{align}
which is the typical estimator for a population total seen in the survey sampling literature (e.g., Lohr Page 55).\cite{lohr2009sampling}
However, computing $P\left(\sum_{i=1}^N W_i = N_T | \mathbf{X} \right)$ is only required when one wants to compute the randomization-test $p$-value exactly using (\ref{randomizationTestPValue}). Instead, one can still approximate this $p$-value using (\ref{randomizationTestPValueApproximationSimple}) by generating random draws $\mathbf{w}^{(1)}, \dots, \mathbf{w}^{(M)}$, which is done by flipping $N$ \textit{biased} coins and only accepting a random draw $\mathbf{w}$ if $\sum_{i=1}^N w_i = N_T$.
This rejection step suggests straightforward rejection-sampling and importance-sampling procedures for conducting conditional randomization-based inference for Bernoulli trial experiments, which we develop next.
\subsection{Rejection-Sampling and Importance-Sampling Procedures for Conditional Randomization Tests} \label{ss:acceptRejectProcedure}
As discussed in Section \ref{s:randomizationInferenceReview}, researchers do not typically compute the randomization test $p$-value (\ref{randomizationTestPValue}) exactly, but instead generate random draws $\mathbf{w}^{(1)}, \dots, \mathbf{w}^{(M)}$ from the probability distribution $P(\mathbf{W})$ and then approximate the randomization test $p$-value using (\ref{randomizationTestPValueApproximationSimple}). To conduct conditional randomization-based inference, one generates random draws from conditional probability distributions such as $P(\mathbf{W} | \sum_{i=1}^N W_i = N_T)$ instead of $P(\mathbf{W})$. This is straightforward when the propensity scores are the same across units: For example, as discussed in Section \ref{ss:equalProbabilities}, samples from $P(\mathbf{W} | \sum_{i=1}^N W_i = N_T)$ correspond to random permutations of the observed treatment assignment $\mathbf{W}^{obs}$ when the propensity scores are equal across units. However, sampling from such conditional distributions when the propensity scores differ across units is less trivial. To the best of our knowledge, a strategy for how to sample from such distributions has not been described in the literature.
Conducting conditional randomization-based inference involves focusing only on ``acceptable'' treatment assignments $\mathbf{W}$; e.g., $\mathbf{W}$ that are not $\mathbf{0}_N$ or $\mathbf{1}_N$, or $\mathbf{W}$ such that $\sum_{i=1}^N W_i = N_T$ for some prespecified $N_T$. To formalize this idea, define an acceptance criterion that is a function of the treatment assignment and pre-treatment covariates:
\begin{align}
\phi(\mathbf{W}, \mathbf{X}) = \begin{cases}
1 &\mbox{ if } \mathbf{W} \text{ is an acceptable treatment assignment} \\
0 &\mbox{ if } \mathbf{W} \text{ is not an acceptable treatment assignment.}
\end{cases}
\end{align}
The criterion $\phi(\mathbf{W}, \mathbf{X})$ can encapsulate any statistic of interest, but it should be defined by statistics that are believed to be related to the outcome, such as the number of treated units, the number of treated units with a certain covariate value, forms of covariate balance, or the covariate means in the treatment and control groups. See Hennessy et al.\cite{hennessy2016conditional} for further discussion of the types of statistics that should be conditioned on for conditional randomization-based inference.
Once $\phi(\mathbf{W}, \mathbf{X})$ is defined, one conducts conditional randomization-based inference by performing a randomization test only within the set of randomizations such that the acceptance criterion is satisfied. For example, Sections \ref{ss:equalProbabilities} and \ref{ss:unequalProbabilities} discuss conducting randomization-based inference for the case when $\phi(\mathbf{W}, \mathbf{X}) = 1$ if $\sum_{i=1}^N W_i = N_T$ and 0 otherwise. Thus, the true conditional randomization test $p$-value is
\begin{align}
p_{\phi} \equiv \sum_{\mathbf{w} \in \mathbb{W}_{\phi}^+} \mathbb{I} \big( \big| t \big(Y(\mathbf{w}), \mathbf{w} \big) \big| \geq | t^{obs} | \big)P(\mathbf{W} = \mathbf{w}) \label{eqn:phiPValue}
\end{align}
where $\mathbb{W}^+_{\phi} = \{\mathbf{w}: \phi(\mathbf{w}, \mathbf{X}) = 1\}$ is the set of acceptable randomizations. The $p$-value $p_{\phi}$ is nearly identical to the $p$-value (\ref{randomizationTestPValue}), but using only the set of acceptable randomizations instead of the set of all randomizations. The set of acceptable randomizations is typically large, and thus the $p$-value $p_{\phi}$ cannot always be computed exactly. Instead, it can be unbiasedly estimated using
\begin{align}
\hat{p}_{RS} &= \frac{ \sum_{m=1}^M \mathbb{I} \big( \big| t \big(Y(\mathbf{w}^{(m)}), \mathbf{w}^{(m)} \big) \big| \geq |t^{obs}| \big)}{M}, \text{ where } \mathbf{w}^{(m)} \sim P(\mathbf{W} | \phi(\mathbf{W}, \mathbf{X}) = 1) \label{eqn:rejectionSamplingPValue}
\end{align}
i.e., the approximation presented in (\ref{randomizationTestPValueApproximationSimple}). We propose a rejection-sampling procedure for generating random samples $\mathbf{w}^{(1)},\dots,\mathbf{w}^{(M)} \sim P(\mathbf{W} | \phi(\mathbf{W}, \mathbf{X}) = 1)$: Randomly generate draws from $P(\mathbf{W})$, and only accept a draw $\mathbf{w}$ if $\phi(\mathbf{w}, \mathbf{X}) = 1$. For Bernoulli trials, this involves flipping $N$ coins (biased or unbiased, depending on the experimental design) and only accepting a particular assignment $\mathbf{w}$ if $\phi(\mathbf{w}, \mathbf{X}) = 1$.
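The rejection-sampling step can be sketched in a few lines of Python. Here `phi` stands in for $\phi(\mathbf{W}, \mathbf{X})$, and the cap on the number of tries is a practical safeguard, not part of the procedure described above:

```python
import numpy as np

def rejection_sample(e, phi, M=200, seed=0, max_tries=100000):
    """Draw M assignments from P(W | phi(W) = 1) by flipping N biased
    coins with propensities e and keeping only acceptable draws."""
    rng = np.random.default_rng(seed)
    e = np.asarray(e, dtype=float)
    draws, tries = [], 0
    while len(draws) < M and tries < max_tries:
        tries += 1
        w = (rng.random(len(e)) < e).astype(int)  # N biased coin flips
        if phi(w):
            draws.append(w)
    return draws
```

The accepted draws can then be plugged directly into the estimator $\hat{p}_{RS}$ above.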
While the rejection-sampling estimator $\hat{p}_{RS}$ is unbiased for $p_{\phi}$, it may be computationally intensive to generate random samples $\mathbf{w}^{(m)} \sim P(\mathbf{W} | \phi(\mathbf{W}, \mathbf{X}) = 1)$ if $\phi(\mathbf{W}, \mathbf{X})$ is particularly stringent. As an alternative, one can take an importance-sampling approach that estimates $p_{\phi}$ with some bias but at a much lower computational cost.\cite{kong1992note,christian1999monte,robert2004monte} First, define a proposal distribution $P_q(\mathbf{W})$ whose support includes the support of $P(\mathbf{W} | \phi(\mathbf{W}, \mathbf{X}) = 1)$ but which is less computationally burdensome to sample from. Then, the importance-sampling estimator for $p_{\phi}$ is
\begin{align}
\hat{p}_{IS} &= \frac{ \sum_{m=1}^M \mathbb{I} \big( \big| t \big(Y(\mathbf{w}^{(m)}), \mathbf{w}^{(m)} \big) \big| \geq |t^{obs}| \big) \frac{P(\mathbf{W} = \mathbf{w}^{(m)} | \phi(\mathbf{W}, \mathbf{X}) = 1)}{P_q(\mathbf{W} = \mathbf{w}^{(m)})} }{\sum_{m=1}^M \frac{P(\mathbf{W} = \mathbf{w}^{(m)} | \phi(\mathbf{W}, \mathbf{X}) = 1)}{P_q(\mathbf{W} = \mathbf{w}^{(m)})}}, \text{ where } \mathbf{w}^{(m)} \sim P_q(\mathbf{W})
\end{align}
In other words, the rejection-sampling estimator $\hat{p}_{RS}$ is a simple average based on the random draws $\mathbf{w}^{(m)} \sim P(\mathbf{W} | \phi(\mathbf{W}, \mathbf{X}) = 1)$, whereas the importance-sampling estimator is a weighted average based on the random draws $\mathbf{w}^{(m)} \sim P_q(\mathbf{W})$. Thus, $\hat{p}_{IS}$ will be easier to compute than $\hat{p}_{RS}$ if it is less computationally intensive to sample from the proposal distribution $P_q(\mathbf{W})$ than from the target distribution $P(\mathbf{W} | \phi(\mathbf{W}, \mathbf{X}) = 1)$.
The importance-sampling estimator can be reduced to a simple form by first noting that, under the assumption of a strongly ignorable assignment mechanism (\ref{eqn:psModel}),
\begin{align}
P(\mathbf{W} = \mathbf{w} | \phi(\mathbf{W}, \mathbf{X}) = 1) &= \frac{P(\mathbf{W} = \mathbf{w}, \phi(\mathbf{W}, \mathbf{X}) = 1)}{P(\phi(\mathbf{W}, \mathbf{X}) = 1)} \\
&= \frac{\prod_{i=1}^N e(\mathbf{x}_i)^{w_i} [1 - e(\mathbf{x}_i)]^{1 - w_i}}{P(\phi(\mathbf{W}, \mathbf{X}) = 1)}, \text{ where } \mathbf{w} \in \mathbb{W}^+_{\phi} \\
&\propto \prod_{i=1}^N e(\mathbf{x}_i)^{w_i} [1 - e(\mathbf{x}_i)]^{1 - w_i}, \text{ where } \mathbf{w} \in \mathbb{W}^+_{\phi}
\end{align}
where $\mathbb{W}^+_{\phi} \equiv \{ \mathbf{w} \in \mathbb{W}^+ : \phi(\mathbf{w}, \mathbf{X}) = 1\}$ is the set of acceptable assignments according to the acceptance criterion. Then, if the proposal distribution is uniform across all acceptable assignments, i.e., if $P_q(\mathbf{W} = \mathbf{w}) = c$ for all $\mathbf{w} \in \mathbb{W}^+_{\phi}$, then the importance-sampling $p$-value approximation reduces to
\begin{align}
\hat{p}_{IS} &= \frac{ \sum_{m=1}^M \mathbb{I} \big( \big| t \big(Y(\mathbf{w}^{(m)}), \mathbf{w}^{(m)} \big) \big| \geq |t^{obs}| \big) \prod_{i=1}^N e(\mathbf{x}_i)^{w^{(m)}_i} [1 - e(\mathbf{x}_i)]^{1 - w^{(m)}_i} }{\sum_{m=1}^M \prod_{i=1}^N e(\mathbf{x}_i)^{w^{(m)}_i} [1 - e(\mathbf{x}_i)]^{1 - w^{(m)}_i}}, \text{ where } \mathbf{w}^{(m)} \sim P_q(\mathbf{W}) \label{eqn:importanceSamplingPValue}
\end{align}
where the quantity $\prod_{i=1}^N e(\mathbf{x}_i)^{w^{(m)}_i} [1 - e(\mathbf{x}_i)]^{1 - w^{(m)}_i}$ is easy to compute because the propensity scores $e(\mathbf{x}_i)$ are known.
For example, sampling from the distribution $P(\mathbf{W} | \sum_{i=1}^N W_i = N_T)$ via rejection-sampling may be computationally intensive if the propensity scores differ across units and $N$ is large. One proposal distribution that is uniform across acceptable assignments is the distribution induced by random permutations of $\mathbf{W}^{obs}$: its support equals that of $P(\mathbf{W} | \sum_{i=1}^N W_i = N_T)$, but it is far less computationally intensive to sample from. Thus, one can still utilize random permutations of $\mathbf{W}^{obs}$ to estimate the conditional randomization test $p$-value---as in Case 1 in Section \ref{ss:equalProbabilities}---using the importance-sampling estimator $\hat{p}_{IS}$.
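A sketch of $\hat{p}_{IS}$ with this permutation proposal (difference-in-means statistic; function names illustrative): each permuted draw is weighted by its Bernoulli probability $\prod_{i=1}^N e(\mathbf{x}_i)^{w_i}[1 - e(\mathbf{x}_i)]^{1-w_i}$, which is known because the propensity scores are known by design.

```python
import numpy as np

def importance_sampling_pvalue(y_obs, w_obs, e, n_draws=2000, seed=0):
    """Importance-sampling estimate of the conditional randomization-test
    p-value: the proposal permutes w_obs (uniform over assignments with
    the observed N_T), and draws are weighted by their known Bernoulli
    probabilities under the design."""
    rng = np.random.default_rng(seed)
    e = np.asarray(e, dtype=float)

    def mean_diff(y, w):
        return y[w == 1].mean() - y[w == 0].mean()

    t_obs = abs(mean_diff(y_obs, w_obs))
    num = den = 0.0
    for _ in range(n_draws):
        w = rng.permutation(w_obs)
        weight = np.prod(e**w * (1 - e)**(1 - w))
        num += (abs(mean_diff(y_obs, w)) >= t_obs) * weight
        den += weight
    return num / den
```

Note that when the propensity scores are all equal, the weights are identical across permuted draws and the estimator reduces to the ordinary permutation-test approximation.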
However, as noted earlier, unlike the estimator $\hat{p}_{RS}$, the estimator $\hat{p}_{IS}$ is biased of order $M^{-1}$,\cite{kong1992note} which---as we show in Section \ref{s:simulationExample}---may break the validity of the conditional randomization test. Thus, when it is not computationally prohibitive, we recommend using the rejection-sampling estimator $\hat{p}_{RS}$ to ensure valid inferences from our conditional randomization test. However, if it is computationally intensive to generate draws $\mathbf{w} \sim P(\mathbf{W} | \phi(\mathbf{W}, \mathbf{X}) = 1)$ but easy to generate draws $\mathbf{w} \sim P_q(\mathbf{W})$ for some proposal distribution, then we recommend using the importance-sampling estimator $\hat{p}_{IS}$ with the number of random samples $M$ large enough that the bias of $\hat{p}_{IS}$ is minimal. For an in-depth discussion of rejection-sampling versus importance-sampling, see Robert and Casella (Chapter 3).\cite{christian1999monte}
The above procedure is closely related to the rerandomization framework developed by Morgan and Rubin,\cite{morgan2012rerandomization} who define an assignment criterion $\phi(\mathbf{W}, \mathbf{X})$ in order to ensure a certain level of covariate balance as part of an experimental design. Recent works on rerandomization have shown how $\phi(\mathbf{W}, \mathbf{X})$ can be flexibly defined: Morgan and Rubin\cite{morgan2015rerandomization} defined $\phi(\mathbf{W}, \mathbf{X})$ such that it incorporates tiers of importance for covariates, and Branson et al.\cite{branson2016improving} defined $\phi(\mathbf{W}, \mathbf{X})$ such that it incorporates tiers of importance for both covariates and multiple treatment effects of interest.
However, the purpose of the introduction of $\phi(\mathbf{W}, \mathbf{X})$ here is to conduct a conditional randomization test, rather than yield a desirable experimental design. It is similar to the conditional randomization test of Hennessy et al.,\cite{hennessy2016conditional} who define $\phi(\mathbf{W}, \mathbf{X})$ in terms of categorical covariate balance. However, because Hennessy et al.\cite{hennessy2016conditional} and other conditional randomization tests (e.g., Rosenbaum\cite{rosenbaum1984conditional}) have focused on cases where propensity scores are equal across units or strata, they could sample from $P(\mathbf{W} | \phi(\mathbf{W}, \mathbf{X}) = 1)$ directly via random permutations of $\mathbf{W}^{obs}$. Indeed, both the rerandomization and conditional randomization test literature have focused on cases where the propensity scores are equal across units, whereas our approach addresses the more general case where propensity scores differ across units. Furthermore, if our rejection-sampling approach is computationally intensive, our importance-sampling approach allows one to still utilize random permutations of $\mathbf{W}^{obs}$ to quickly estimate the conditional randomization test $p$-value at the cost of incurring a small bias.
Now we establish that the unconditional and conditional randomization tests (i.e., the randomization test using $p$ in (\ref{randomizationTestPValue}) and the randomization test using $p_{\phi}$ in (\ref{eqn:phiPValue}), respectively) are valid tests for Bernoulli trial experiments. While these are results for the randomization tests that use the exact $p$-values $p$ and $p_{\phi}$, this also suggests that our rejection-sampling approach for unbiasedly estimating $p_{\phi}$ yields valid statistical inferences. In Section \ref{s:simulationExample}, we empirically confirm the validity of these randomization tests, and we discuss to what extent our importance-sampling approach also yields valid statistical inferences.
\subsection{Validity of Unconditional and Conditional Randomization Tests for Bernoulli Trial Experiments}
For both theorems presented below, we assume that the treatment is assigned according to the strongly ignorable assignment mechanism (\ref{eqn:psModel}). First, we establish that the randomization test that uses this assignment mechanism is valid, i.e., that the probability of this $\alpha$-level randomization test falsely rejecting the Sharp Null Hypothesis is no greater than $\alpha$. This result is unsurprising given well-known results about the validity of randomization tests. Then, we establish that the conditional randomization test---i.e., the randomization test that uses the assignment mechanism $P(\mathbf{W} | \phi(\mathbf{W}, \mathbf{X}) = 1)$ for some prespecified criterion $\phi(\mathbf{W}, \mathbf{X})$ instead of the assignment mechanism (\ref{eqn:psModel})---is also valid. This result is slightly surprising in the sense that the validity of the randomization test holds even if the test uses an assignment mechanism other than the one used to conduct the randomized experiment.
\begin{theorem}[Validity of Unconditional Randomization Test]
\label{thm:unconditionalValidity}
Assume that a randomized experiment is conducted using the strongly ignorable assignment mechanism (\ref{eqn:psModel}). Define the two-sided randomization-test $p$-value as
\begin{align}
p \equiv \sum_{\mathbf{w} \in \mathbb{W}^+} \mathbb{I} \big( \big| t \big(Y(\mathbf{w}), \mathbf{w} \big) \big| \geq | t^{obs} | \big)P(\mathbf{W} = \mathbf{w}) \label{eqn:theorem1PValue}
\end{align}
for some test statistic $t \big(Y(\mathbf{W}), \mathbf{W} \big)$, where $\mathbb{W}^+ = \{0,1\}^N$. Then the randomization test that rejects the Sharp Null Hypothesis when $p \leq \alpha$ is a valid test in the sense that
\begin{align}
P(p \leq \alpha | H_0) \leq \alpha
\end{align}
where $H_0$ is the Sharp Null Hypothesis defined in (\ref{sharpNull}). \\
\end{theorem}
\begin{theorem}[Validity of Conditional Randomization Test]
\label{thm:conditionalValidity}
Assume that a randomized experiment is conducted using the strongly ignorable assignment mechanism (\ref{eqn:psModel}). Define the two-sided conditional randomization-test $p$-value as
\begin{align}
p_{\phi} \equiv \sum_{\mathbf{w} \in \mathbb{W}_{\phi}^+} \mathbb{I} \big( \big| t \big(Y(\mathbf{w}), \mathbf{w} \big) \big| \geq | t^{obs} | \big)P(\mathbf{W} = \mathbf{w}) \label{eqn:theorem2PValue}
\end{align}
for some test statistic $t \big(Y(\mathbf{W}), \mathbf{W} \big)$, where $\mathbb{W}_{\phi}^+ = \{\mathbf{w} \in \mathbb{W}^+: \phi(\mathbf{w}, \mathbf{X}) = 1\}$ is the set of acceptable randomizations according to some prespecified criterion $\phi(\mathbf{W}, \mathbf{X})$. Then the randomization test that rejects the Sharp Null Hypothesis when $p_{\phi} \leq \alpha$ is a valid test in the sense that
\begin{align}
P(p_{\phi} \leq \alpha | H_0) \leq \alpha
\end{align}
where $H_0$ is the Sharp Null Hypothesis defined in (\ref{sharpNull}).
\end{theorem}
The proofs for Theorems \ref{thm:unconditionalValidity} and \ref{thm:conditionalValidity} are in the Appendix.
Now we illustrate our randomization test procedure using a simple example where the randomization test $p$-value is computed exactly. Then we conduct a simulation study where the randomization test $p$-value is estimated, and we compare the rejection-sampling and importance-sampling approaches for estimating the $p$-value. Furthermore, we empirically confirm the validity of our randomization tests as established by Theorems \ref{thm:unconditionalValidity} and \ref{thm:conditionalValidity} above, and we demonstrate how conditioning on various statistics of interest can be used to construct statistically powerful randomization tests for Bernoulli trial experiments.
\section{Simulation Study of Unconditional and Conditional Randomization Tests} \label{s:simulationExample}
\subsection{Illustrative Example: Computing the Exact $p$-value}
As discussed in Section \ref{ss:testingFishersSharpNull}, the randomization-test $p$-value is typically approximated using (\ref{randomizationTestPValueApproximationSimple}) by drawing many possible treatment assignments $\mathbf{w}^{(1)}, \dots, \mathbf{w}^{(M)}$. However, for small samples, the $p$-value can be computed exactly using (\ref{randomizationTestPValue}) by examining each $\mathbf{w}$ in the set of possible treatment assignments $\mathbb{W}^+$. Here we explore a small-sample example to illustrate how to conduct randomization tests and construct confidence intervals when propensity scores vary across units. We also discuss how this procedure differs from the typical case where propensity scores are the same across units.
Consider a randomized experiment with $N = 10$ units. The potential outcomes for these units are shown in Table \ref{tab:exampleN10}, where the true treatment effect is $\tau = 0.5$. Say that a randomized experiment characterized by Bernoulli trials has occurred; the corresponding propensity scores, treatment assignment, and observed outcomes are also shown in Table \ref{tab:exampleN10}. For now, assume that the task at hand is to conduct randomization-based inference for the average treatment effect given the treatment assignment, observed outcomes, and propensity scores in Table \ref{tab:exampleN10}.
\begin{table}[H]
\small\sf\centering
\begin{tabular}{cccccc}
\toprule
Unit $i$ & $Y_{i}$(0) & $Y_i$(1) & $W_i^{\text{obs}}$ & $y_{i}^{\text{obs}}$ & $e(\mathbf{x}_i)$\\
\midrule
1 & -0.56 & -0.06 & 0 & -0.56 & 0.1 \\
2 & -0.23 & 0.27 & 1 & 0.27 & 0.2 \\
3 & 1.56 & 2.06 & 1 & 2.06 & 0.3 \\
4 & 0.07 & 0.57 & 0 & 0.07 & 0.4 \\
5 & 0.13 & 0.63 & 0 & 0.13 & 0.5 \\
6 & 1.72 & 2.22 & 1 & 2.22 & 0.5 \\
7 & 0.46 & 0.96 & 1 & 0.96 & 0.6 \\
8 & -1.27 & -0.77 & 1 & -0.77 & 0.7 \\
9 & -0.69 & -0.19 & 0 & -0.69 & 0.8 \\
10& -0.45 & 0.05 & 1 & 0.05 & 0.9 \\
\bottomrule
\end{tabular}\\[2pt]
\caption{Potential outcomes, treatment assignment, observed outcome, and propensity score for 10 units in a hypothetical randomized experiment. Note that the true treatment effect is $\tau = 0.5$.}
\label{tab:exampleN10}
\end{table}
With $N = 10$ units, there are only $2^{10} = 1024$ possible treatment assignments. Excluding the treatment assignments $\mathbf{0}_N$ and $\mathbf{1}_N$ leaves 1022 possible assignments. Under the Sharp Null Hypothesis, the observed outcomes $\mathbf{y}^{obs}$ will be the same as those in Table \ref{tab:exampleN10} for all 1022 of these assignments. We test this hypothesis following the three-step procedure in Section \ref{ss:testingFishersSharpNull}: first choose $\mathbb{W}^+$ and $P(\mathbf{W})$, then choose a test statistic, and finally compute the randomization test $p$-value.
We first consider the set $\mathbb{W}^+ = \{0, 1\}^N \setminus (\mathbf{0}_N \cup \mathbf{1}_N )$ that was used during randomization, where
\begin{align}
P(\mathbf{W} = \mathbf{w} | \mathbf{X}) = \frac{\prod_{i=1}^N e(\mathbf{x}_i)^{w_i}[1 - e(\mathbf{x}_i)]^{1 - w_i}}{1 - \prod_{i=1}^N e(\mathbf{x}_i) - \prod_{i=1}^N [1 - e(\mathbf{x}_i)]}
\end{align}
for each $\mathbf{w} \in \mathbb{W}^+$, as previously shown in (\ref{eqn:biasedCoinRestricted01Probabilities}). We choose the mean-difference estimator---given in (\ref{eqn:meanDiffEstimator})---as the test statistic. We then iterate through each of the $1022$ treatment assignments $\mathbf{w} \in \mathbb{W}^+$ and compute the test statistic assuming the Sharp Null Hypothesis is true. Once this is done, the randomization test $p$-value can be computed exactly using
\begin{align}
P \big( |t \big(Y(\mathbf{W}), \mathbf{W} \big)| \geq |t^{obs}| \big) &= \sum_{\mathbf{w} \in \mathbb{W}^+} \mathbb{I} \big( \big| t \big(Y(\mathbf{w}), \mathbf{w} \big) \big| \geq | t^{obs} | \big)P(\mathbf{W} = \mathbf{w})
\end{align}
as previously shown in (\ref{randomizationTestPValue}). From Table \ref{tab:exampleN10}, one can calculate the observed test statistic, $t^{obs}$, which is equal to 1.06.
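To make this computation concrete, the exact $p$-value can be obtained with a short script. The following is a minimal Python sketch (not the authors' code) that hard-codes the data from Table \ref{tab:exampleN10}, enumerates all 1022 admissible assignments, and weights each by its Bernoulli-trial probability:

```python
import itertools
import numpy as np

# Data hard-coded from Table 1: observed outcomes (unit i shows Y_i(1) if
# treated, Y_i(0) otherwise), treatment assignment, and propensity scores.
y_obs = np.array([-0.56, 0.27, 2.06, 0.07, 0.13, 2.22, 0.96, -0.77, -0.69, 0.05])
w_obs = np.array([0, 1, 1, 0, 0, 1, 1, 1, 0, 1])
e = np.array([0.1, 0.2, 0.3, 0.4, 0.5, 0.5, 0.6, 0.7, 0.8, 0.9])

def mean_diff(y, w):
    """Mean-difference test statistic: treated mean minus control mean."""
    return y[w == 1].mean() - y[w == 0].mean()

t_obs = mean_diff(y_obs, w_obs)  # observed statistic, roughly 1.06

# Enumerate all 2^10 - 2 assignments (excluding all-treated and all-control).
# Each assignment keeps its Bernoulli-trial probability, renormalized over
# this restricted set, matching the displayed formula for P(W = w | X).
num = total = 0.0
for bits in itertools.product([0, 1], repeat=10):
    w = np.array(bits)
    if w.sum() in (0, 10):
        continue
    prob = float(np.prod(e**w * (1 - e)**(1 - w)))
    total += prob
    if abs(mean_diff(y_obs, w)) >= abs(t_obs):
        num += prob
p_exact = num / total  # exact randomization-test p-value
```

Replacing the probability weights with a constant would instead recover the unweighted permutation-test $p$-value discussed below.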
Figure \ref{fig:n10ExampleHistogram} shows the distribution of the absolute value of the test statistic $t \big(Y(\mathbf{w}), \mathbf{w} \big)$ for each $\mathbf{w} \in \mathbb{W}^+$ assuming the Sharp Null Hypothesis is true. The portion of this distribution that corresponds to test statistics larger than the observed one is colored in gray. The randomization test $p$-value is then the probability of any gray treatment assignment occurring, which we find to be 0.12. If the propensity scores were equal across units---which is typically the case in the randomization test literature---then the randomization test $p$-value would simply be the number of gray treatment assignments divided by the total number of treatment assignments, which in this case is $\frac{164}{1022} \approx 0.16$. Thus, importantly, the $p$-value reflects the design of the randomized experiment---i.e., it incorporates the propensity scores that were used to randomize the units during the experiment.
Furthermore, we can obtain a confidence interval for the average treatment effect by inverting this randomization test using the procedure outlined in Section \ref{ss:confidenceIntervals}. We did a line search of values $\tau \in \{-3, -2.9, \dots, 2.9, 3\}$ and defined our 95\% confidence interval as the set of $\tau$'s for which we obtained $p$-values greater than 0.05 when testing the hypothesis (\ref{sharpNullTau}) for each $\tau$. We found the confidence interval to be $(-0.1, 2.4)$. Again, this confidence interval reflects the design of the randomized experiment, because the $p$-values corresponding to each $\tau$ depend on the propensity scores that were used during randomization.
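The test inversion above can be sketched in the same way. This is an illustrative implementation, not the authors' code: it reuses the exact enumeration and, as one common convention, centers the two-sided statistic at the hypothesized $\tau$; the precise form of the hypothesis test in the paper may differ in detail.

```python
import itertools
import numpy as np

y_obs = np.array([-0.56, 0.27, 2.06, 0.07, 0.13, 2.22, 0.96, -0.77, -0.69, 0.05])
w_obs = np.array([0, 1, 1, 0, 0, 1, 1, 1, 0, 1])
e = np.array([0.1, 0.2, 0.3, 0.4, 0.5, 0.5, 0.6, 0.7, 0.8, 0.9])

def mean_diff(y, w):
    return y[w == 1].mean() - y[w == 0].mean()

# Precompute the 1022 admissible assignments and their normalized probabilities.
assignments = [np.array(bits) for bits in itertools.product([0, 1], repeat=10)
               if 0 < sum(bits) < 10]
probs = np.array([np.prod(e**w * (1 - e)**(1 - w)) for w in assignments])
probs /= probs.sum()

def exact_p_value(tau):
    """Exact p-value for the sharp hypothesis Y_i(1) - Y_i(0) = tau.
    Under this hypothesis the control outcomes are imputable as
    y_obs - tau * w_obs, so every counterfactual dataset is known."""
    y0 = y_obs - tau * w_obs
    t_obs = mean_diff(y_obs, w_obs)
    hits = np.array([abs(mean_diff(y0 + tau * w, w) - tau) >= abs(t_obs - tau)
                     for w in assignments])
    return float(probs[hits].sum())

# Invert the test: the 95% interval collects every tau that is not rejected.
grid = np.round(np.arange(-3, 3.01, 0.1), 1)
accepted = [float(tau) for tau in grid if exact_p_value(tau) > 0.05]
ci = (min(accepted), max(accepted))
```

As in the text, the resulting interval depends on the propensity scores through the weights on each assignment.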
Note that Figure \ref{fig:n10ExampleHistogram} displays every possible treatment assignment, including assignments where only one unit is assigned to treatment and the rest to control (and vice versa). However, researchers may want the statistical analysis to only consider treatment assignments similar to the observed one. For example, consider the more stringent set of treatment assignments $\mathbb{W}^+ = \{ \mathbf{w} \in \mathbb{W} : \sum_{i=1}^N w_i = N_T \}$, where in this example the number of treated units $N_T = 6$, as seen in Table \ref{tab:exampleN10}. Figure \ref{fig:n10ExampleConditionalHistogram} shows the distribution of the test statistic for each $\mathbf{w} \in \mathbb{W}^+$ in this case, assuming the Sharp Null Hypothesis is true. Note that there are only ${10 \choose 6} = 210$ such treatment assignments, which form a subset of the assignments displayed in Figure \ref{fig:n10ExampleHistogram}. Again, the randomization test $p$-value is the probability of any gray treatment assignment occurring, but now the probability of any $\mathbf{w} \in \mathbb{W}^+$ is
\begin{align}
P(\mathbf{W} = \mathbf{w} | \mathbf{X}) = \frac{\prod_{i=1}^N e(\mathbf{x}_i)^{w_i}[1 - e(\mathbf{x}_i)]^{1 - w_i}}{P(\sum_{i=1}^N W_i = N_T | \mathbf{X}) }
\end{align}
as previously shown in (\ref{eqn:probabilityConditionalNt}). Because there are only 210 treatment assignments $\mathbf{w}$ such that $\sum_{i=1}^N w_i = N_T$, we can compute the denominator exactly and thus compute the randomization test $p$-value exactly as well, which we find to be equal to 0.17. Furthermore, using the same procedure as above, we found the 95\% confidence interval to be $(-0.1, 2.4)$. Thus, in addition to reflecting the experimental design, randomization-based inference can also reflect particular experiments of interest, such as ones similar to the observed one.
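The conditional computation only requires enumerating the ${10 \choose 6} = 210$ assignments with six treated units. A minimal sketch under the same hard-coded data (again, not the authors' code):

```python
import itertools
import numpy as np

# Same hard-coded data as before (Table 1).
y_obs = np.array([-0.56, 0.27, 2.06, 0.07, 0.13, 2.22, 0.96, -0.77, -0.69, 0.05])
w_obs = np.array([0, 1, 1, 0, 0, 1, 1, 1, 0, 1])
e = np.array([0.1, 0.2, 0.3, 0.4, 0.5, 0.5, 0.6, 0.7, 0.8, 0.9])

def mean_diff(y, w):
    return y[w == 1].mean() - y[w == 0].mean()

t_obs = mean_diff(y_obs, w_obs)
n_t = int(w_obs.sum())  # 6 treated units observed

# Enumerate only assignments with exactly n_t treated units; each keeps its
# Bernoulli-trial probability, renormalized over this restricted set (the
# normalization implements the denominator P(sum W_i = N_T | X)).
num = total = 0.0
n_assignments = 0
for treated in itertools.combinations(range(10), n_t):
    w = np.zeros(10, dtype=int)
    w[list(treated)] = 1
    prob = float(np.prod(e**w * (1 - e)**(1 - w)))
    total += prob
    n_assignments += 1
    if abs(mean_diff(y_obs, w)) >= abs(t_obs):
        num += prob
p_cond = num / total  # exact conditional randomization-test p-value
```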
Now we conduct a simulation study with $N = 100$ units. In this case, it is computationally intensive to compute randomization test $p$-values exactly, and we instead approximate them. Furthermore, because the propensity scores vary across units, it will be difficult to directly sample from conditional probability distributions such as $P(\mathbf{W} | \sum_{i=1}^N W_i = N_T)$, and thus we will need the rejection-sampling procedure from Section \ref{ss:acceptRejectProcedure} to conduct conditional inference.
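The idea behind that rejection-sampling procedure can be sketched in a few lines: redraw independent biased coins until the acceptance criterion $\phi$ holds. The criterion and constants below are illustrative assumptions (conditioning on a hypothetical observed $N_T$ of 54, with $N = 100$ Beta(5, 5) propensity scores), not a prescription from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def rejection_sample(e, phi, max_tries=100_000):
    """Draw W ~ P(W | phi(W) = 1): redraw independent Bernoulli(e_i) coins
    until the acceptance criterion phi holds, then return that draw."""
    for _ in range(max_tries):
        w = rng.binomial(1, e)
        if phi(w):
            return w
    raise RuntimeError("acceptance criterion was never satisfied")

# Hypothetical setup: N = 100 propensity scores from Beta(5, 5), conditioning
# on the event that exactly 54 units are treated.
e = rng.beta(5, 5, size=100)
draws = [rejection_sample(e, lambda w: w.sum() == 54) for _ in range(200)]
```

Each accepted draw is an exact sample from the conditional distribution, but the expected number of coin flips grows as the criterion becomes more restrictive, which is what motivates the importance-sampling alternative discussed earlier.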
\newpage
\thispagestyle{empty}
\begin{figure}[H]
\centering
\begin{subfigure}[t]{.55\textwidth}
\centering
\includegraphics[width=\linewidth]{n10ExampleHistogram.pdf}
\caption{The distribution of $t \big(Y(\mathbf{w}), \mathbf{w} \big)$ for each $\mathbf{w} \in \mathbb{W}^+$, where $\mathbb{W}^+ = \{0, 1\}^N \setminus (\mathbf{0}_N \cup \mathbf{1}_N )$. The observed test statistic is marked by a red vertical line. Assignments corresponding to test statistics larger than the observed one are in gray. }
\label{fig:n10ExampleHistogram}
\end{subfigure}%
\begin{subfigure}[t]{.55\textwidth}
\centering
\includegraphics[width=\linewidth]{n10ExampleConditionalHistogram.pdf}
\caption{The distribution of $t \big(Y(\mathbf{w}), \mathbf{w} \big)$ for each $\mathbf{w} \in \mathbb{W}^+$, where $\mathbb{W}^+ = \{ \mathbf{w} \in \mathbb{W} : \sum_{i=1}^N w_i = N_T \} $. }
\label{fig:n10ExampleConditionalHistogram}
\end{subfigure}
\caption{Unconditional and conditional randomization distributions of the test statistic under the Sharp Null Hypothesis.}
\end{figure}
\newpage
\subsection{Simulation Setup}
Hennessy et al.\cite{hennessy2016conditional} conducted a simulation study to show that their randomization test that conditioned on categorical covariate balance was more powerful than unconditional randomization tests when covariates were associated with the outcome. Hennessy et al.\cite{hennessy2016conditional} considered the case where the propensity scores are the same across units; we modify their simulation study such that units' propensity scores differ. This simulation study serves two purposes:
\begin{enumerate}
\item Confirm the validity of the unconditional and conditional randomization tests discussed in Section \ref{ss:unequalProbabilities}, as established by Theorems \ref{thm:unconditionalValidity} and \ref{thm:conditionalValidity}.
\item Demonstrate how the rejection-sampling and importance-sampling procedures presented in Section \ref{ss:acceptRejectProcedure} can be used to construct statistically powerful conditional randomization tests.
\end{enumerate}
Consider $N = 100$ units with a single covariate $X$, where 50 units have covariate value $X = 1$ and the other 50 units have covariate value $X = 2$. Each unit has two potential outcomes---corresponding to treatment and control---which are generated once from the following:
\begin{equation}
\begin{aligned}
Y_i(0) | X_i &\sim N(\lambda X_i, 1), \hspace{0.1 in} i = 1,\dots,N \\
Y_i(1) &= Y_i(0) + \tau \label{eqn:potentialOutcomesModelSimulation}
\end{aligned}
\end{equation}
The parameter $\lambda$ determines the strength of the association between $X$ and the potential outcomes, while $\tau$ is the treatment effect. Similar to Hennessy et al.,\cite{hennessy2016conditional} we consider the values $\lambda \in \{0, 1.5, 3\}$ and $\tau \in \{0, 0.1, \dots, 1\}$ in our simulation. The previous example from Table \ref{tab:exampleN10} was generated using $\lambda = 0$ and $\tau = 0.5$.
The probability of the $i^{\text{th}}$ unit receiving treatment---i.e., its propensity score---was generated once from the following:
\begin{align}
P(W_i = 1 | X_i) = P(W_i = 1) \sim \text{Beta}(5, 5), \hspace{0.1 in} i = 1,\dots, N \label{eqn:psModelSimulation}
\end{align}
This generating mechanism yields propensity scores centered around 0.5 but with moderate spread. In our simulation, the generated propensity scores ranged from 0.22 to 0.87 with a mean of 0.49.
After the potential outcomes and propensity scores were generated, we randomly assigned units to treatment and control according to the probability distribution $P(\mathbf{W})$ defined by the propensity scores. We prevented any single treatment assignment from being $\mathbf{0}_N$ or $\mathbf{1}_N$; in other words, we considered the set of possible treatment assignments $\mathbb{W}^+ = \{0, 1\}^N \setminus (\mathbf{0}_N \cup \mathbf{1}_N )$ during randomization. In this case, there will always be 50 units with $X = 1$ and 50 units with $X = 2$, but the number of units assigned to treatment and control can vary from randomization to randomization. Any randomization of the 100 units to treatment and control can be summarized by Table \ref{tab:contingencyTable}, which includes the number of units assigned to treatment and control ($N_T$ and $N_C$) and the number of units with covariate values $X = 1$ and $X = 2$ ($N_1$ and $N_2$).
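The data-generating process above can be sketched as follows; the random seed and the particular $(\lambda, \tau)$ setting are illustrative choices, not the ones used to produce the paper's figures.

```python
import numpy as np

rng = np.random.default_rng(0)
N, lam, tau = 100, 3.0, 0.5  # one (lambda, tau) setting from the grid

# Covariate: 50 units with X = 1 and 50 with X = 2.
X = np.repeat([1, 2], 50)

# Potential outcomes: Y(0) ~ N(lambda * X, 1), constant additive effect tau.
y0 = rng.normal(lam * X, 1.0)
y1 = y0 + tau

# Propensity scores drawn once from Beta(5, 5), then one Bernoulli-trial
# randomization, redrawn if it lands on all-treated or all-control.
e = rng.beta(5, 5, size=N)
w = rng.binomial(1, e)
while w.sum() in (0, N):
    w = rng.binomial(1, e)

y = np.where(w == 1, y1, y0)  # observed outcomes
```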
\begin{table}
\centering
\begin{tabular}{ c c | c c | c }
\hline
& & \multicolumn{2}{ c |}{$\mathbf{W}$} & \\
& & 1 & 0 & \\
\hline
\multirow{2}{*}{$X$} & 1 & $N_{T1}$ & $N_{C1}$ & $N_1 = 50$ \\
& 2 & $N_{T2}$ & $N_{C2}$ & $N_2 = 50$ \\
\hline
& & $N_T$ & $N_C$ & $N = 100$ \\
\hline
\end{tabular}
\caption{Contingency table of the number of units assigned to treatment and control ($N_T$ and $N_C$) and the number of units with covariate values $X = 1$ and $X = 2$ ($N_1$ and $N_2$). The values $N_1 = 50$, and $N_2 = 50$ were fixed across randomizations in the simulation study; the other values varied across randomizations.}
\label{tab:contingencyTable}
\end{table}
Before conducting the full simulation, let us first consider one possible treatment assignment that we may observe during this simulation. We then present four randomization tests one could use to test the Sharp Null Hypothesis.
\subsection{Example of One Treatment Assignment} \label{ss:simulationExample}
Consider the case when $\lambda = 3$ and $\tau = 0.5$; i.e., when the covariate is strongly associated with the outcome and the treatment effect is moderate. The potential outcomes were generated using (\ref{eqn:potentialOutcomesModelSimulation}), the propensity scores were generated using (\ref{eqn:psModelSimulation}), and then units were randomized by flipping biased coins corresponding to these propensity scores. Table \ref{tab:exampleContingencyTable} shows the resulting randomization. Given this randomization and the corresponding dataset, how should we test the Sharp Null Hypothesis?
\begin{table}
\centering
\begin{tabular}{ c c | c c | c }
\hline
& & \multicolumn{2}{ c |}{$\mathbf{W}^{obs}$} & \\
& & 1 & 0 & \\
\hline
\multirow{2}{*}{$X$} & 1 & $N_{T1}^{obs} = 30$ & $N_{C1}^{obs} = 20$ & $N_1 = 50$ \\
& 2 & $N_{T2}^{obs} = 24$ & $N_{C2}^{obs} = 26$ & $N_2 = 50$ \\
\hline
& & $N_T^{obs} = 54$ & $N_C^{obs} = 46$ & $N = 100$ \\
\hline
\end{tabular}
\caption{Example of a possible treatment allocation in our simulation study.}
\label{tab:exampleContingencyTable}
\end{table}
Any randomization test should involve generating treatment assignments via biased coins corresponding to the prespecified propensity scores, because this is how the randomization observed in Table \ref{tab:exampleContingencyTable} was generated. However, which set of possible treatment assignments $\mathbb{W}^+$ should one consider during the test? We consider four different $\mathbb{W}^+$ and their associated randomization tests:
\begin{enumerate}
\item An unconditional randomization test (as presented in Section \ref{ss:testingFishersSharpNull}), with $\mathbb{W}^+ = \{0, 1\}^N \setminus (\mathbf{0}_N \cup \mathbf{1}_N )$.
\item A randomization test conditional on the number of units assigned to treatment, with $\mathbb{W}^+ = \{ \mathbf{w} \in \mathbb{W} : \sum_{i=1}^N w_i = N_T^{obs} \}$.
\item A randomization test conditional on the number of units with $X = 1$ assigned to treatment, with $\mathbb{W}^+ = \{ \mathbf{w} \in \mathbb{W} : \sum_{i: X_i = 1} w_i = N_{T1}^{obs} \}$.
\item A randomization test conditional on $N_T$ and $N_{T1}$, with $\mathbb{W}^+ = \big\{ \mathbf{w} \in \mathbb{W} : \sum_{i=1}^N w_i = N_T^{obs} \hspace{0.05 in} \text{and } \sum_{i: X_i = 1} w_i = N_{T1}^{obs} \big\}$.
\end{enumerate}
Arguably, the first randomization test is the most natural choice, because it corresponds to the $\mathbb{W}^+$ that was actually used to generate the randomization observed in Table \ref{tab:exampleContingencyTable}; however, because conditional randomization tests can be more powerful than unconditional randomization tests, researchers may consider the other three tests as well.
The above tests are ordered in terms of the restrictiveness of $\mathbb{W}^+$: The first two randomization tests involve flipping biased coins to generate treatment assignments, where the values $N_{T1}$, $N_{C1}$, $N_{T2}$, and $N_{C2}$ in Table \ref{tab:contingencyTable} can vary across assignments; in the third randomization test, only $N_{T2}$ and $N_{C2}$ can vary; and in the fourth randomization test, none of these values can vary. Because iterating through every possible treatment assignment in $\mathbb{W}^+$ is computationally intensive---for the example in Table \ref{tab:exampleContingencyTable}, $|\mathbb{W}^+| = 2^{100} - 2$ for the first test, and $|\mathbb{W}^+| = {50 \choose 30}{50 \choose 24}$ for the fourth test---we instead generate 1,000 treatment assignments $\mathbf{w}^{(1)}, \dots, \mathbf{w}^{(1000)}$ using our rejection-sampling procedure discussed in Section \ref{ss:acceptRejectProcedure} to approximate the randomization distribution for each test.
The approximate randomization distribution of the mean-difference test statistic $\bar{y}_T - \bar{y}_C$ under the Sharp Null Hypothesis for each of these four tests is shown in Figure \ref{fig:exampleRandomizationDistributions}. The conditional randomization distributions for the third and fourth tests are shifted to the left of the unconditional randomization distribution. This is no coincidence: In Table \ref{tab:exampleContingencyTable}, there are more units with $X = 1$ in the treatment group and more units with $X = 2$ in the control group; as a result, the treatment group will have units with systematically lower potential outcomes, due to the potential outcomes model (\ref{eqn:potentialOutcomesModelSimulation}). This is reflected in the conditional randomization distributions but not the unconditional one. Consequently, the conditional and unconditional randomization tests will give different results: One-sided $p$-values for the four tests are 0.58, 0.57, 0.08, and 0.00, respectively. This suggests that some of these randomization tests may be more powerful at detecting a treatment effect than others, which we further explore below.
\begin{figure}[H]
\centering
\includegraphics[scale = 0.5]{randomizationDistributionExamplePlot.pdf}
\caption{The unconditional and conditional randomization distributions for the mean-difference test statistic under the Sharp Null Hypothesis for the example in Table \ref{tab:exampleContingencyTable}. The observed test statistic for this example dataset is marked by a black vertical line. Each randomization distribution was approximated by drawing $\mathbf{w}^{(1)}, \dots, \mathbf{w}^{(1000)}$ from the corresponding $\mathbb{W}^+$ using the rejection-sampling procedure discussed in Section \ref{ss:acceptRejectProcedure}. }
\label{fig:exampleRandomizationDistributions}
\end{figure}
\subsection{Full Simulation Study}
Now we compare the four randomization tests discussed in Section \ref{ss:simulationExample} in terms of their power. For each combination of $\lambda \in \{0, 1.5, 3\}$ and $\tau \in \{0, 0.1, \dots, 1\}$, the potential outcomes were generated using (\ref{eqn:potentialOutcomesModelSimulation}), the propensity scores were generated using (\ref{eqn:psModelSimulation}), and then units were randomized 1,000 times by flipping biased coins corresponding to these propensity scores.
For each of the 1,000 randomizations, we performed the four randomization tests discussed in Section \ref{ss:simulationExample} using the rejection-sampling approach to unbiasedly estimate each $p$-value using $\hat{p}_{RS}$ given in (\ref{eqn:rejectionSamplingPValue}). For each test, we rejected the Sharp Null Hypothesis if $\hat{p}_{RS} \leq 0.05$. Figure \ref{fig:powerAnalysis} displays the average rejection rate of the Sharp Null Hypothesis---i.e., the power---for each randomization test. When $\tau = 0$, the Sharp Null Hypothesis is true, and all of the randomization tests reject the null approximately 5\% of the time. This confirms the validity of our unconditional and conditional randomization tests, as established by Theorems \ref{thm:unconditionalValidity} and \ref{thm:conditionalValidity}. When $\lambda = 0$, the covariate is not associated with the outcome, and all of the randomization tests are essentially equivalent. As the covariate becomes more associated with the outcome, the third and fourth conditional randomization tests become more powerful than the unconditional test, while the randomization test that only conditions on $N_T$ remains equivalent to the unconditional randomization test. This is because conditioning on $N_{T1}$ accounts for covariate imbalance between the treatment and control groups, which can be confounded with the treatment effect, as in the example presented in Table \ref{tab:exampleContingencyTable} and Figure \ref{fig:exampleRandomizationDistributions}; conditioning on $N_T$ alone does not account for this imbalance.
\begin{figure}[H]
\centering
\includegraphics[scale = 0.75]{bernoullTrialsPowerAnalysisPlot.pdf}
\caption{Average rejection rates for the four randomization tests across 1,000 randomizations for each value of $\lambda$ and $\tau$. As $\lambda$ increases, the covariate becomes more associated with the outcome; as $\tau$ increases, the treatment effect should become easier to detect. The gray horizontal line marks 0.05.}
\label{fig:powerAnalysis}
\end{figure}
However, our rejection-sampling approach can be computationally expensive. Generating 1,000 samples for the unconditional randomization test, the randomization test conditional on $N_T$, the randomization test conditional on $N_{T1}$, and the randomization test conditional on $N_{T}$ and $N_{T1}$ took on average 0.25, 1.22, 2.14, and 34.75 seconds, respectively. As an alternative to the rejection-sampling approach for computing the randomization test $p$-value $\hat{p}_{RS}$ conditional on $N_T$ and $N_{T1}$, we can use the importance-sampling approach discussed in Section \ref{ss:acceptRejectProcedure}. Instead of sampling directly from $P(\mathbf{W} | \phi(\mathbf{W}, \mathbf{X}) = 1)$ via rejection-sampling, we generate $M$ proposals $\mathbf{w}^{(1)},\dots,\mathbf{w}^{(M)}$ uniformly from the set of acceptable randomizations $\{\mathbf{w}: \sum_{i=1}^N w_i = N_T \text{ and } \sum_{i: w_i = 1} \mathbb{I}(X_i = 1) = N_{T1}\}$; this corresponds to random permutations of $\mathbf{W}^{obs}$ within the $X = 1$ and $X = 2$ strata. Then, we compute $\hat{p}_{IS}$ given in (\ref{eqn:importanceSamplingPValue}) and reject if $\hat{p}_{IS} \leq 0.05$.
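The importance-sampling estimator can be sketched as follows. This is a minimal illustration of a self-normalized estimator with the stratified permutation proposal described above; the function and variable names are ours, and the exact form of $\hat{p}_{IS}$ in the paper may differ in normalization details.

```python
import numpy as np

rng = np.random.default_rng(0)

def mean_diff(y, w):
    return y[w == 1].mean() - y[w == 0].mean()

def p_value_is(y, w_obs, e, strata, t=mean_diff, M=5000):
    """Self-normalized importance-sampling estimate of the conditional
    randomization-test p-value. Proposals are drawn uniformly from the
    acceptable set by permuting w_obs within each stratum (which fixes both
    N_T and the per-stratum treated counts); each proposal is then weighted
    by its unnormalized Bernoulli-trial probability prod e^w (1-e)^(1-w)."""
    t_obs = abs(t(y, w_obs))
    num = total = 0.0
    for _ in range(M):
        w = w_obs.copy()
        for s in np.unique(strata):
            idx = np.where(strata == s)[0]
            w[idx] = rng.permutation(w[idx])
        q = float(np.prod(e**w * (1 - e)**(1 - w)))  # importance weight
        total += q
        if abs(t(y, w)) >= t_obs:
            num += q
    return num / total
```

Because the weights are self-normalized, the estimator carries a bias of order $M^{-1}$, which is what motivates using a large $M$ in the comparison below.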
Figure \ref{fig:powerAnalysisRSvsIS} compares the rejection-sampling approach (i.e., rejecting the Sharp Null Hypothesis if $\hat{p}_{RS} \leq 0.05$) with the importance-sampling approach (i.e., rejecting the Sharp Null Hypothesis if $\hat{p}_{IS} \leq 0.05$) for different values of $M$. The importance-sampling approach is computationally less intensive than the rejection-sampling approach: The importance-sampling approach using $M = 1,000$, $M = 5,000$, and $M = 25,000$ took on average 0.68, 3.30, and 16.31 seconds, respectively. Note that even the $M = 25,000$ case required less than half the time of the rejection-sampling approach. However, as noted in Section \ref{ss:acceptRejectProcedure}, $\hat{p}_{IS}$ has a bias of order $M^{-1}$, and thus the $p$-value for the importance-sampling approach may be notably biased for low $M$. This can be seen in Figure \ref{fig:powerAnalysisRSvsIS}: For $M = 1,000$, the importance-sampling approach falsely rejects the Sharp Null Hypothesis when $\tau = 0$ at a substantially higher rate than 0.05; this suggests that the importance-sampling approach has a negative bias in this case. However, as $M$ increases, this bias becomes less substantial, and results using $\hat{p}_{IS}$ approach those using $\hat{p}_{RS}$. Thus, the bias of importance-sampling can break the validity of our randomization test, but this can be alleviated by increasing the number of proposals $M$ at a minimal computational cost.
\begin{figure}[H]
\centering
\includegraphics[scale = 0.75]{bernoullTrialsPowerAnalysisRSvsISPlot.pdf}
\caption{Average rejection rates for the rejection-sampling and importance-sampling approaches conditional on $N_T$ and $N_{T1}$. For importance-sampling, we tried various numbers of proposals $M$. The line for $\hat{p}_{RS}$ (i.e., the rejection-sampling approach) is the same as the line for ``Conditional on $N_T$ and $N_{T1}$'' in Figure \ref{fig:powerAnalysis}. }
\label{fig:powerAnalysisRSvsIS}
\end{figure}
In summary, these results reinforce the finding of Hennessy et al.\cite{hennessy2016conditional} that conditional randomization tests are more powerful than unconditional randomization tests when the acceptance criterion $\phi(\mathbf{W}, \mathbf{X})$ incorporates statistics that are associated with the outcome. Furthermore, they demonstrate how our rejection-sampling procedure can be used to condition on several combinations of statistics of interest, thus yielding statistically powerful randomization tests for Bernoulli trial experiments. Finally, when the rejection-sampling procedure is computationally intensive, our importance-sampling approach is a viable alternative; however, we recommend generating a large number of proposals $M$ so that the bias of the importance-sampling approach is negligible and the resulting inferences remain valid.
\section{Discussion and Conclusion} \label{s:discussion}
Here we presented a randomization-based inference framework for experiments whose assignment mechanism is characterized by independent Bernoulli trials. Our framework and corresponding randomization tests encapsulate all strongly ignorable assignment mechanisms, including experiments based on complete, blocked, and paired randomization, as well as the general case where propensity scores differ across all units. In particular, we introduced rejection-sampling and importance-sampling approaches for obtaining randomization-based point estimates and confidence intervals conditional on any statistics of interest for Bernoulli trial experiments, which has not been previously studied in the literature. We also established that our randomization test is a valid test, and the power of this test can be improved by conditioning on various statistics of interest without sacrificing the validity of the test.
While our discussion of point estimates and confidence intervals is based on a sharp hypothesis that assumes a constant additive treatment effect, our framework can be extended to any sharp hypothesis, including those that incorporate heterogeneous treatment effects. Recent works in the randomization-based inference literature have begun to address treatment effect heterogeneity (e.g., Ding et al.\cite{ding2015randomization} and Caughey et al.\cite{caughey2016beyond}), and our framework can be extended to these discussions.
Throughout, we assumed that the propensity scores are known, as in randomized experiments. In observational studies, the propensity scores are estimated, typically with model-based methodologies like logistic regression. Nonetheless, propensity score methodologies still assume a strongly ignorable assignment mechanism as in (\ref{eqn:psModel}), with the assumption that the estimated propensity scores $\hat{e}(\mathbf{x})$ are ``close'' to the true $e(\mathbf{x})$, i.e., the propensity score model is well-specified. An implication of our randomization-based inference framework is that it can still be applied to observational studies, where estimates $\hat{e}(\mathbf{x})$ are used instead of known $e(\mathbf{x})$. Such a test is valid to the extent that the $\hat{e}(\mathbf{x})$ are ``close'' to the true $e(\mathbf{x})$; this is not a limitation of our framework specifically but of propensity score methodologies in general. Determining when our randomization test is valid for observational studies is future work.
However, our randomization test would seem to be the most natural randomization test to use for observational studies, because it directly reflects the strongly ignorable assignment mechanism (\ref{eqn:psModel}) that is assumed in most of the observational study literature. Other proposed randomization tests for observational studies reflect other assignment mechanisms, such as blocked and paired assignment mechanisms; these randomization tests are not immediately applicable to cases where the propensity score varies across all units.
There are many other methodologies for analyzing randomized experiments and observational studies, such as regression with or without inverse probability weighting, matching, and Bayesian modeling. Importantly, all of these methodologies assume the strongly ignorable assignment mechanism (\ref{eqn:psModel}) in addition to other assumptions about model specification, asymptotics, or units' propensity scores within covariate strata. Our framework only makes the strongly ignorable assignment mechanism assumption, and thus is a minimal-assumption approach while still yielding point estimates and confidence intervals that directly reflect the assignment mechanism. Furthermore, we established the validity of our randomization test and demonstrated how conditioning on relevant statistics of interest can yield powerful randomization tests for Bernoulli trial experiments.
\newpage
\section{Appendix}
\subsection{Proof of Theorem \ref{thm:unconditionalValidity}}
This proof closely follows the proof provided in Hennessy et al. (Page 64),\cite{hennessy2016conditional} but with a focus on Bernoulli trial experiments instead of completely randomized experiments.
Define $T_W$ as a random variable whose distribution is the same as $|t(Y(\mathbf{W}), \mathbf{W})|$, for some test statistic $t(Y(\mathbf{W}), \mathbf{W})$, where $\mathbf{W} \sim P(\mathbf{W} | \mathbf{X})$ specified by the strongly ignorable assignment mechanism (\ref{eqn:psModel}). Furthermore, let $F_{T_W}(\cdot)$ be the CDF of $T_W$. Note that $T_W$ must be defined for all $\mathbf{W} \in \mathbb{W}^+$, including $\mathbf{W} = \mathbf{1}_N$ or $\mathbf{W} = \mathbf{0}_N$; without loss of generality, one can define $T_W = 0$ for these two cases.
Under the Sharp Null Hypothesis $H_0$ defined in (\ref{sharpNull}), $Y(\mathbf{W}) = \mathbf{y}^{obs}$ for all $\mathbf{W} \in \mathbb{W}^+$. Thus, under $H_0$,
\begin{align}
|t(\mathbf{y}^{obs}, \mathbf{W})| \sim T_W \label{eqn:testStatisticNullDistribution}
\end{align}
i.e., the distribution of the observed test statistic $|t^{obs}| \equiv |t(\mathbf{y}^{obs}, \mathbf{W}^{obs})|$ across randomizations is the same as the distribution of $T_W$.
Now note that the randomization test $p$-value defined in (\ref{eqn:theorem1PValue}) of Theorem \ref{thm:unconditionalValidity} is such that, under $H_0$,
\begin{align}
p &= 1 - F_{T_W}(|t^{obs}|)
\end{align}
Furthermore, given (\ref{eqn:testStatisticNullDistribution}), we have that the distribution of $p$ across randomizations is
\begin{align}
p &\sim 1 - F_{T_W}(T_W)
\end{align}
under $H_0$.
If $T_W$ were continuous, then $(1 - F_{T_W}(T_W)) \sim \text{Unif}(0,1)$ by the probability integral transform; however, $T_W$ is discrete due to the discreteness of $\mathbb{W}^+$. Nonetheless, $(1 - F_{T_W}(T_W))$ stochastically dominates $U \sim \text{Unif}(0,1)$, and thus
\begin{align}
P(p \leq \alpha | H_0) &\leq P(U \leq \alpha | H_0) \label{eqn:stochasticDominance} \\
&= \alpha \label{eqn:uniformDistribution}
\end{align}
where (\ref{eqn:stochasticDominance}) follows from the definition of stochastic dominance, and (\ref{eqn:uniformDistribution}) follows from properties of the standard uniform distribution. This concludes the proof of Theorem \ref{thm:unconditionalValidity}.
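To make the object of this proof concrete, the unconditional randomization test can be approximated by Monte Carlo: fix $\mathbf{y}^{obs}$, redraw $\mathbf{W}$ from the Bernoulli assignment mechanism, and compare test statistics. The Python sketch below is illustrative only (it is not the paper's implementation); the function names are ours, the statistic is an absolute difference in means, and degenerate draws $\mathbf{W} = \mathbf{1}_N$ or $\mathbf{0}_N$ are assigned $T_W = 0$ exactly as in the proof.

```python
import random

def diff_in_means(y, w):
    """|t(Y(W), W)|: absolute difference of treated and control means.
    Defined as 0 for the degenerate assignments W = 1_N or W = 0_N,
    as in the proof of Theorem 1."""
    t = [yi for yi, wi in zip(y, w) if wi == 1]
    c = [yi for yi, wi in zip(y, w) if wi == 0]
    if not t or not c:
        return 0.0
    return abs(sum(t) / len(t) - sum(c) / len(c))

def randomization_pvalue(y_obs, w_obs, e, draws=20000, seed=0):
    """Monte Carlo approximation of the unconditional p-value: under the
    sharp null, Y(W) = y_obs for every W, and W_i ~ Bernoulli(e_i)
    independently across units."""
    rng = random.Random(seed)
    t_obs = diff_in_means(y_obs, w_obs)
    hits = 0
    for _ in range(draws):
        w = [1 if rng.random() < ei else 0 for ei in e]
        if diff_in_means(y_obs, w) >= t_obs:
            hits += 1
    return hits / draws
```

With a large observed treatment-control difference the estimated p-value is near zero, while under the sharp null it is (approximately, up to discreteness) super-uniform, matching the theorem.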
\subsection{Proof of Theorem \ref{thm:conditionalValidity}}
Define sets $\mathbb{W}^+_1,\dots,\mathbb{W}^+_B$, where $\mathbb{W}^+_b \cap \mathbb{W}^+_{b'} = \emptyset$ for all $b \neq b'$ and $\cup_{b=1}^B \mathbb{W}^+_b = \mathbb{W}^+ = \{0,1\}^N$. In other words, $\mathbb{W}^+_1,\dots,\mathbb{W}^+_B$ partition the set of possible randomizations under the strongly ignorable assignment mechanism (\ref{eqn:psModel}) into non-overlapping sets. Consider a randomization test that is conducted only within a particular one of these sets; the associated randomization test $p$-value is
\begin{align}
p_b \equiv \sum_{\mathbf{w} \in \mathbb{W}_b^+} \mathbb{I} \big( \big| t \big(Y(\mathbf{w}), \mathbf{w} \big) \big| \geq | t^{obs} | \big)P(\mathbf{W} = \mathbf{w} | \mathbf{W} \in \mathbb{W}_b^+)
\end{align}
Importantly, by Theorem \ref{thm:unconditionalValidity}, for randomizations $\mathbf{W} \in \mathbb{W}^+_b$, the randomization test that rejects the Sharp Null Hypothesis when $p_b \leq \alpha$ is a valid test, i.e.,
\begin{align}
P(p_b \leq \alpha | H_0, \mathbf{W} \in \mathbb{W}^+_b) \leq \alpha \text{ for all } b = 1,\dots,B
\end{align}
The acceptance criterion $\phi(\mathbf{W}, \mathbf{X})$ determines the particular partition in which the conditional randomization test is conducted. Without loss of generality, say that $\phi(\mathbf{W}, \mathbf{X})$ is defined such that
\begin{align}
\phi(\mathbf{W}, \mathbf{X}) \equiv \begin{cases}
1 &\mbox{ if } \mathbf{W} \in \mathbb{W}^+_b \text{ for some } b =1,\dots,B \\
0 &\mbox{ otherwise.}
\end{cases}
\end{align}
Defined this way, $\phi(\mathbf{W}, \mathbf{X})$ varies across randomizations $\mathbf{W} \in \mathbb{W}^+$; as a result, the set of acceptable randomizations $\mathbb{W}^+_{\phi} \equiv \{ \mathbf{w} \in \mathbb{W}^+: \phi(\mathbf{w}, \mathbf{X}) = 1 \}$ varies across $\mathbf{W} \in \mathbb{W}^+$ as well. As an example, consider the criterion $\phi(\mathbf{W}, \mathbf{X})$ defined as equal to $1$ if $\sum_{i=1}^N W_i = N_T$ and equal to $0$ otherwise. The number of treated units $N_T$ can vary across $\mathbf{W} \in \mathbb{W}^+$, and thus $\mathbb{W}^+_{\phi}$ will vary across $\mathbf{W} \in \mathbb{W}^+$ as well, based on the realization of $N_T$. In this case, the partitions $\mathbb{W}^+_1,\dots,\mathbb{W}^+_B$ are defined as the sets of treatment assignments corresponding to the unique values of $N_T$. In general, the criterion $\phi(\mathbf{W}, \mathbf{X})$ will be a function of statistics, and the partitions $\mathbb{W}^+_1,\dots,\mathbb{W}^+_B$ can be defined by the unique values of these statistics. This setup is a generalization of the covariate balance function discussed in Hennessy et al. (Page 67).\cite{hennessy2016conditional}
Thus, for each $b=1,\dots,B$, there is an associated probability $P(\mathbf{W} \in \mathbb{W}^+_b) = P(\mathbb{W}^+_{\phi} = \mathbb{W}^+_b)$, and this probability is determined by the strongly ignorable assignment mechanism (\ref{eqn:psModel}). Once it is determined which partition the set of acceptable randomizations is equal to, the randomization test is conducted within this partition; i.e., the $p$-value $p_b$ is used for the $b$ such that $\mathbb{W}^+_{\phi} = \mathbb{W}^+_b$.
Thus, for the conditional randomization test $p$-value $p_{\phi}$ defined in Theorem \ref{thm:conditionalValidity}, we have that
\begin{align}
P(p_{\phi} \leq \alpha | H_0) &= \sum_{b=1}^B P(p_{\phi} \leq \alpha | H_0, \mathbb{W}^+_{\phi} = \mathbb{W}^+_b) P(\mathbb{W}^+_{\phi} = \mathbb{W}^+_b) \text{ (by law of total probability)} \\
&= \sum_{b=1}^B P(p_b \leq \alpha | H_0, \mathbb{W}^+_{\phi} = \mathbb{W}^+_b) P(\mathbb{W}^+_{\phi} = \mathbb{W}^+_b) \\
&\leq \sum_{b=1}^B \alpha P(\mathbb{W}^+_{\phi} = \mathbb{W}^+_b) \text{ (by Theorem \ref{thm:unconditionalValidity})} \\
&= \alpha \text{ (because $\sum_{b=1}^B P(\mathbb{W}^+_{\phi} = \mathbb{W}^+_b) = 1$)}
\end{align}
which is our desired result.
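The conditional test in this proof can be sketched the same way as the unconditional one, restricting the Monte Carlo draws to the realized partition cell. Below, the acceptance criterion from the example above, $\phi(\mathbf{W}, \mathbf{X}) = \mathbb{I}(\sum_i W_i = N_T)$, is imposed by rejection sampling; the function names are ours and the statistic (an absolute difference in means) is purely illustrative.

```python
import random

def tstat(y, w):
    """Absolute difference in means; 0 for degenerate assignments."""
    t = [yi for yi, wi in zip(y, w) if wi]
    c = [yi for yi, wi in zip(y, w) if not wi]
    return abs(sum(t) / len(t) - sum(c) / len(c)) if t and c else 0.0

def conditional_pvalue(y_obs, w_obs, e, draws=20000, seed=1):
    """Monte Carlo approximation of p_b, conditioning on the observed
    number of treated units: draws with sum(W) != N_T are rejected, so
    accepted draws follow P(W = w | W in the realized partition cell)."""
    rng = random.Random(seed)
    n_t = sum(w_obs)
    t_obs = tstat(y_obs, w_obs)
    hits = accepted = 0
    while accepted < draws:
        w = [1 if rng.random() < ei else 0 for ei in e]
        if sum(w) != n_t:            # outside the partition cell: reject
            continue
        accepted += 1
        if tstat(y_obs, w) >= t_obs:
            hits += 1
    return hits / draws
```

Rejection sampling is adequate here because $P(\sum_i W_i = N_T)$ is not small for moderate $N$; for extreme propensity scores one would enumerate or importance-sample the cell instead.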
\newpage
\section{\label{sec:intro} Introduction}
Spatial patterns frequently emerge in driven fluids in a variety of contexts, including chemistry, biology, and nonlinear optics~\cite{Cross1993}. Pattern-forming instabilities in these systems include Rayleigh-B\'{e}nard convection, Taylor-Couette flow, and parametric surface waves. One of the earliest and best known examples of the latter is the surface waves found by Faraday when a vessel containing a fluid was shaken vertically~\cite{Faraday1831}. The standing wave patterns that appear on the fluid surface arise from parametric excitation of collective modes of the fluid. The Faraday experiment has been repeated in various geometries, where complex patterns were observed for small driving amplitudes~\cite{Douady1988}. Chaotic behavior, such as sub-harmonic bifurcation, is seen when the drive amplitude is strong~\cite{Keolian1981,Ciliberto1984,Douady1988,Ciliberto1991}, and this behavior has been connected to the onset of turbulence~\cite{Feigenbaum1979}.
A model of the Faraday instability has been developed for an inviscid fluid in which the underlying hydrodynamic equations have been linearized~\cite{Benjamin1954}. The linearized dynamics are described by a Mathieu equation, $\ddot{x} + p(t)x = 0$, where $x$ is the displacement and $p(t) = \Omega^2 (1 + \epsilon \cos(\omega t))$ is the drive, representing a parametrically driven (undamped) harmonic oscillator with natural frequency $\Omega$, drive frequency $\omega$, and drive amplitude $\epsilon$. Solving this equation using Floquet analysis yields a series of resonances at $\omega = 2\Omega / n$, where $n$ is an integer~\cite{Bechoefer1996}.
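The principal $n=1$ resonance can be checked numerically with nothing beyond the Mathieu equation quoted above. The sketch below (function name and parameter values are ours) integrates $\ddot{x} + \Omega^2(1+\epsilon\cos\omega t)x = 0$ with a hand-rolled RK4 stepper: at $\omega = 2\Omega$ the amplitude grows exponentially, while at a non-resonant frequency it stays bounded.

```python
import math

def mathieu_max_amplitude(omega, Omega=1.0, eps=0.2, t_end=150.0, dt=0.002):
    """Integrate x'' + Omega^2 (1 + eps*cos(omega*t)) x = 0 with classical
    RK4 and return max |x|.  Parametric resonance at omega = 2*Omega/n
    shows up as exponential growth of the amplitude."""
    def acc(t, x):
        return -Omega**2 * (1.0 + eps * math.cos(omega * t)) * x
    x, v, t = 1.0, 0.0, 0.0
    xmax = abs(x)
    for _ in range(int(t_end / dt)):
        # RK4 for the first-order system (x' = v, v' = acc(t, x))
        k1x, k1v = v, acc(t, x)
        k2x, k2v = v + 0.5*dt*k1v, acc(t + 0.5*dt, x + 0.5*dt*k1x)
        k3x, k3v = v + 0.5*dt*k2v, acc(t + 0.5*dt, x + 0.5*dt*k2x)
        k4x, k4v = v + dt*k3v, acc(t + dt, x + dt*k3x)
        x += dt * (k1x + 2*k2x + 2*k3x + k4x) / 6.0
        v += dt * (k1v + 2*k2v + 2*k3v + k4v) / 6.0
        t += dt
        xmax = max(xmax, abs(x))
    return xmax
```

For $\epsilon = 0.2$, the expected growth rate on resonance is of order $\epsilon\Omega/4$, so over $t_{\mathrm{end}} = 150$ the amplitude grows by several orders of magnitude, whereas a detuned drive (e.g., $\omega = 3\Omega$, between the $n=1$ and $n=2$ tongues) remains of order unity.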
Superfluids are particularly interesting in the context of Faraday waves because the damping of collective modes can be much smaller than in normal fluids, and because patterns may dissipate by the formation of quantized vortices in two or three dimensions. Several theoretical works have investigated Faraday waves in Bose-Einstein condensates (BECs) of atomic gases~\cite{Garcia-Ripoll1999, Staliunas2002, Staliunas2004, Nicolin2007, Nath2010, Nicolin2011, Balaz2014}. To our knowledge, only three experiments on Faraday waves in superfluids have been performed, one in which a vessel containing liquid $^4$He is vertically shaken in a way similar to the original Faraday experiment~\cite{Abe2007}, a pioneering experiment in which Faraday waves were excited by modulation of the transverse trap frequency, $\omega_r$, of an elongated BEC of Rb atoms~\cite{Engels2007}, and another in which a non-destructive imaging technique was used to observe Faraday waves in a BEC of Na atoms~\cite{Groot2015}. In the BEC experiments, the transverse breathing mode, excited at a frequency of $2\omega_r$, strongly couples to the density, and hence, to the nonlinear interactions of the condensate. This coupling produces the longitudinal sound waves responsible for creating Faraday waves~\cite{Engels2007,Groot2015}. The spatial period of the Faraday waves was measured as a function of $\omega$, and the response to the strength $\epsilon$ of the drive was investigated~\cite{Engels2007}. In a related BEC experiment, modulation of the scattering length in a regime of large modulation amplitude and frequency resulted in the stimulated emission of matter-wave jets from a $2$D BEC of Cs atoms~\cite{Clark2017}.
In this paper, we report measurements characterizing the response of an elongated BEC to direct modulation of the interaction parameter using a Feshbach resonance~\cite{Malomed2006, Pollack2010, Vidanovic2011}. For drive frequencies near the first parametric resonance ($\omega$ near $2\omega_r$), we observe robust linear spatial patterns characterized by a spatial period $\lambda_F(\omega)$ consistent with Faraday waves. We also observe the response of the gas to the next lowest ``resonant'' mode ($\omega$ near $\omega_r$)~\cite{Nicolin2011}. We have also investigated how $\lambda_F$ depends on the interaction strength. These measurements are compared with a theory that fully incorporates radial as well as axial dynamics using a variational method~\cite{Nicolin2011}, and, as we will show, the agreement is excellent.
We also explore a different modulation regime, both experimentally and theoretically, where $\omega$ is far from any trap frequency. The behavior in this regime is distinctly different; no clear resonances are observed, and much larger $\epsilon$ and modulation times are needed to obtain a significant response. The response is not regular in this regime, and no clear patterns emerge; rather, modulation produces a series of irregular grains.
Granulation is found in a variety of systems extending over many length and energy scales~\cite{Jaeger1996, Mehta1994}. In quantum gases, granular states have been discussed previously in the context of perturbed atomic BECs and explored theoretically using a mean-field approach~\cite{Yukalov2014, Yukalov2015}. Granular states have been defined to have the following properties~\cite{Yukalov2014}: i) they are dynamical quantum states where particles cluster in higher density grains interleaved by regions of very low density, ii) the spatial distribution of grains is random, and iii) the grain size is variable and of a multiscale nature.
Our theoretical description uses the multiconfigurational time-dependent Hartree method for bosons (MCTDHB)~\cite{Streltsov2007,Alon2008}. MCTDHB captures many of the salient experimental observations and goes systematically beyond a mean-field description obtained from the Gross-Pitaevskii equation. The discrepancies between the Gross-Pitaevskii mean-field description and both the experimental observations and our MCTDHB results hint that granulation emerges concurrently with many-body correlations.
\section{Faraday waves}
In our experiment, we confine a gas of up to $8 \times 10^5$ $^7$Li atoms in a single-beam optical dipole trap and cool them to well below $T_c$, the transition temperature for Bose-Einstein condensation~\cite{Pollack2010}. This configuration results in a highly elongated cylindrical trapping geometry whose corresponding axial and radial harmonic frequencies are $\omega_z = (2\pi) 7\ \mathrm{Hz}$ and $\omega_r = (2\pi) 475\ \mathrm{Hz}$, respectively. The atoms are optically pumped into the lowest ground state hyperfine level, ${|F = 1, m_F = 1\rangle}$, where their $s$-wave scattering length may be controlled using a broad Feshbach resonance located at $737.7\ \mathrm{G}$~\cite{Pollack2009, Gross2011, Navon2011, Dyke2013}. The magnetic field is sinusoidally modulated according to $B(t) = \bar{B} + \Delta B \sin(\omega t)$, resulting in a modulated scattering length, $a(t)$. The modulation amplitude $\Delta B$, modulation time $t_m$, and hold time $t_h$ following $t_m$ are varied for each value of the modulation frequency $\omega$, as necessary to produce a Faraday pattern with similar contrast. After $t_h$, we take a polarization phase contrast image~\cite{Bradley1997} with a probe laser propagating along the $x$-axis, perpendicular to the cylindrical $z$-axis of the trap. These images provide column density distributions that we integrate along the $y$-axis to obtain line density profiles. We apply a fast Fourier transform (FFT) to these profiles in order to determine the spectrum of spatial frequencies exhibited by the BEC following modulation.
A typical image of a single experimental run is shown in Fig.~\ref{fig:fig1}(a). In this example, $\omega=(2\pi)950\ \mathrm{Hz}$ is resonant with the Faraday mode at $\omega=2\omega_r$. A surface wave is generated after $t_m = 5\ \mathrm{ms}$ of modulation followed by $t_h = 20\ \mathrm{ms}$. The FFT, shown in Fig.~\ref{fig:fig1}(b), features a single dominant peak corresponding to a spatial period of $\lambda = 10\ \mathrm{\mu m}$.
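The spectral analysis described above can be sketched in a few lines. In the sketch below (illustrative only, not our analysis code), a plain $O(n^2)$ DFT stands in for the FFT, and the profile, pixel size, and ripple period are invented; the $10\ \mu$m period mirrors the example of Fig.~\ref{fig:fig1}. The lowest few spatial frequencies are skipped because, after DC subtraction, they are dominated by the smooth Thomas-Fermi envelope rather than by the ripple of interest.

```python
import cmath, math

def dominant_period(profile, dz, k_min=5):
    """Return the spatial period (same units as dz) of the strongest
    Fourier component with index k >= k_min.  The lowest few k are
    skipped because they carry the slowly varying envelope."""
    n = len(profile)
    mean = sum(profile) / n
    signal = [p - mean for p in profile]        # subtract the DC component
    best_k, best_amp = k_min, 0.0
    for k in range(k_min, n // 2):              # positive frequencies only
        coeff = sum(s * cmath.exp(-2j * math.pi * k * i / n)
                    for i, s in enumerate(signal))
        if abs(coeff) > best_amp:
            best_k, best_amp = k, abs(coeff)
    return n * dz / best_k                      # period = L / k

# illustrative line density: Thomas-Fermi-like envelope times a 10 um ripple
dz = 0.5                                        # um per pixel (assumed)
z = [(i - 200) * dz for i in range(401)]        # grid from -100 to +100 um
profile = [max(0.0, 1 - (zi / 90.0) ** 2)
           * (1 + 0.2 * math.cos(2 * math.pi * zi / 10.0)) for zi in z]
```

Applied to this synthetic profile, the extracted period is the imposed $10\ \mu$m ripple, up to the spectral resolution set by the finite window.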
\begin{figure}
\includegraphics[width=\linewidth]{figure1.pdf}
\caption{(a): column density image; (b): FFT of the line density. The modulation parameters are: $\omega = (2\pi)950$ Hz, $\bar{B} = 572.5$ G, $\Delta B = 5$ G, corresponding to a mean scattering length $\bar{a} = 4.2 a_0$, and a modulation amplitude $\Delta a = 0.9 a_0$, where $a_0$ is the Bohr radius. In addition, $t_m = 5\ \mathrm{ms}$ and $t_h = 20\ \mathrm{ms}$. The blue arrow indicates the calculated $\lambda_F^{-1}$ for these parameters. The DC component has been subtracted, and the FFT amplitude is normalized by this DC value.}
\label{fig:fig1}
\end{figure}
Figure~\ref{fig:fig2} shows the spatial period of the observed structure as a function of $\omega$. Typically, $t_h=0\ \mathrm{ms}$ and $20<t_m<40\ \mathrm{ms}$, with the exception of $\omega = \omega_r$ and $\omega = 2\omega_r$. Near these resonances, the modulation time was kept short, $t_m = 20\ \mathrm{ms}$ and $t_m=5\ \mathrm{ms}$, respectively, followed by $t_h=20\ \mathrm{ms}$. The blue data points correspond to the spatial period of the primary peak in the FFT spectrum. Except for the point at $\omega =(2\pi)475\ \mathrm{Hz}$, the period monotonically increases with decreasing $\omega$. The blue line in Fig.~\ref{fig:fig2} is the result of a $3$D variational calculation of $\lambda_F$~\cite{Nicolin2011}, which fits the data well. We have verified that the standing-wave amplitude of the surface wave oscillates at $\omega /2$ for $\omega$ near $2\omega_r$, consistent with its identification as a parametrically excited Faraday wave.
\begin{figure}
\includegraphics[width=\linewidth]{figure2.pdf}
\caption{Spatial period vs.~$\omega$. The blue data points are the primary peak of the FFTs, while the red data points correspond to a secondary peak, where one exists. The error bars here, and in each subsequent figure, correspond to the standard error of the mean determined from $10$ different experimental runs for each point. The solid blue line is the calculated spatial period $\lambda_F$ of the Faraday mode, while the red is that of the resonant mode $\lambda_R$~\cite{Nicolin2011}. The resonant mode only dominates when $\omega$ is tuned to resonance at $\omega_r$, producing the observed primary peak.}
\label{fig:fig2}
\end{figure}
The excitation at $\omega = (2\pi)475$ Hz = $\omega_r$ is not a sub-harmonic of the Faraday mode at $2\omega_r$, but rather the next lowest mode in the infinite series of modes, identified as the ``resonant'' mode in Ref.~\citenum{Nicolin2011}. In addition to having a different dispersion relation, this mode is also weaker, and therefore more difficult to excite, except exactly on resonance, $\omega = \omega_r$, where the growth rate of the resonant mode exceeds that of the Faraday mode~\cite{Nicolin2011}. A similar excitation at $\omega_r$ was previously reported \cite{Engels2007}. The theoretical calculation of the period of this mode is indicated in Fig.~\ref{fig:fig2} by the red line, $\lambda_R$~\cite{Nicolin2011}.
We find that as $\omega$ is tuned away from $2 \omega_r=(2\pi)950\ \mathrm{Hz}$, a larger modulation amplitude $\Delta B$ and modulation time $t_m$ are required to obtain a pattern with similar contrast. For example, Fig.~\ref{fig:fig3} displays the spectrum for $\omega = (2\pi)200\ \mathrm{Hz}$, for which $\Delta B = 35\ \mathrm{G}$, $t_{m} = 20\ \mathrm{ms}$, and $t_h=20\ \mathrm{ms}$. Two peaks dominate the spectrum: the primary peak at lower spatial frequency, and a secondary peak at roughly twice this spatial frequency. These secondary peaks only appear for $\omega \lesssim (2\pi) 400\ \mathrm{Hz}$, and are identified by the red data points in Fig.~\ref{fig:fig2}. The appearance of the next lowest mode depends on being sufficiently near its resonance frequency at $\omega = \omega_r$, and far enough off-resonant with the Faraday mode at $\omega=2\omega_r$ that it does not dominate the FFT spectrum. We have looked for additional modes in the data, but the FFT spectrum is dominated by the off-resonant response to the $2\omega_r$ and $\omega_r$ resonances, and we are unable to observe any resonances below $\omega_r$. A comparison of the period of these secondary peaks with the theoretically calculated solid red line indicates that they correspond to the resonant mode $\lambda_R$.
We also explored a more impulsive regime, with short $t_m$, where $\omega$ is kept within $10\%$ of the Faraday resonance at $\omega=2\omega_r$. In this case, we find that the wavelength of the resulting Faraday pattern is constant, independent of $\omega$.
\begin{figure}
\includegraphics[width=\linewidth]{figure3.pdf}
\caption{(a): Image at $\omega = (2\pi)200\ \mathrm{Hz}$. (b): Spectrum showing the primary peak, which corresponds to $\lambda_F$, and the secondary peak due to the resonant mode. The blue and red arrows indicate the calculated values for $\lambda_F^{-1}$ and $\lambda_R^{-1}$, respectively, for these parameters. Here, $\Delta B = 35\ \mathrm{G}$, but since $a(B)$ is a nonlinear function of $\Delta B$, the bounds $a_{+}=12a_0$ and $a_{-}=-0.9a_0$ are not symmetrically located about $\bar{a}=4.2a_0$. Also, $t_m=t_h=20\ \mathrm{ms}$.}
\label{fig:fig3}
\end{figure}
The Faraday period also depends on the strength of the nonlinearity, as shown in Fig.~\ref{fig:fig4}, where both the measured and calculated~\cite{Nicolin2011} values of $\lambda_F$ are plotted vs.~the interaction parameter $\bar{a}\bar{\rho}$, where $\bar{\rho}$ is the line density obtained by integrating the column density along the transverse direction. The measured period is consistent with the $3$D theory of Ref.~\cite{Nicolin2011}.
\begin{figure}
\includegraphics[width=\linewidth]{figure4.pdf}
\caption{Interaction dependence of $\lambda_F$. The relevant interaction parameter is $\bar{a}\bar{\rho}$, where $\bar{\rho}$ is the average line density and $\bar{a}$ varies between $1a_0$ and $26a_0$. Here, $\Delta B= 5\ \mathrm{G}$, corresponding to $\Delta a = 0.7a_0$ for $\bar{a}=1a_0$ and $\Delta a = 3a_0$ for $\bar{a}=26a_0$. The data are indicated by filled squares, while the solid line is the theory of Ref.~\cite{Nicolin2011}. The error bars along the vertical axis correspond to the standard error, determined from $10$ different experimental runs while the error bars along the horizontal axis arise from the systematic uncertainty in determining $\bar{a}$~\cite{Pollack2009}.}
\label{fig:fig4}
\end{figure}
\begin{figure}
\includegraphics[width=\linewidth]{figure5.pdf}
\caption{Growth and suppression of the Faraday pattern. (a) The normalized amplitude of the primary spatial frequency in the FFT spectrum as a function of $t_h$. (b) The fitted axial Thomas-Fermi radius of the central region over the same time interval is shown by the filled circles. The solid line is a sinusoidal fit corresponding to a period of $95\ \mathrm{ms}$. For this data, $\omega = (2\pi)950\ \mathrm{Hz}$ and $t_m = 5\ \mathrm{ms}$.}
\label{fig:fig5}
\end{figure}
We have also explored the dynamics of the emergence of the Faraday pattern and its persistence following a short modulation time interval of $t_m = 5\ \mathrm{ms}$ near $2\omega_r$. Figure \ref{fig:fig5}(a) shows the magnitude of the primary peak in the FFT spectrum vs.~$t_h$. Following modulation, the Faraday pattern forms after $t_h \simeq 20\ \mathrm{ms}$. By $t_h = 50\ \mathrm{ms}$, however, the Faraday pattern vanishes before reemerging at $t_h \simeq 90\ \mathrm{ms}$. A subsequent weaker collapse and revival occur at later $t_h$. We can gain some intuition as to the origins of this behavior by comparing measurements of the condensate length vs.~$t_h$. Figure~\ref{fig:fig5}(b) shows the axial Thomas-Fermi radius during the same $t_h$ interval. It shows that a low frequency collective mode is excited by the coupling to the modulated nonlinearity. The parameters of this condensate place it between the $1$D mean-field and the $3$D cigar regimes~\cite{Menotti2002}. In the $3$D Thomas-Fermi limit, the lowest $m=0$ quadrupolar mode for an elongated condensate has a frequency of $\sqrt{5/2}\ \omega_z$, while in the $1$D limit the collective mode oscillates at $\sqrt{3}\omega_z$~\cite{Stringari1996, Mewes1996,Menotti2002}. For $\omega_z = (2\pi)7\ \mathrm{Hz}$, the corresponding period for this mode is, therefore, ${\sim}90\ \mathrm{ms}$, which is close to the observed oscillation period of $95\ \mathrm{ms}$. We find that the Faraday pattern is suppressed during axial compression, but subsequently revives as the condensate returns to its original size. The phases of the two oscillations, the FFT amplitude and the Thomas-Fermi radius, do not exactly coincide. We attribute this to the delay in the initial growth of the Faraday pattern. We have determined experimentally that the frequency of the collapse and revival of the Faraday pattern scales with the axial trap frequency.
A similar collapse and revival of the Faraday wave was previously observed~\cite{Groot2015}.
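The quoted mode periods reduce to a two-line arithmetic check on the limiting frequencies $\sqrt{5/2}\,\omega_z$ and $\sqrt{3}\,\omega_z$:

```python
import math

omega_z = 7.0                        # axial trap frequency in Hz (from the text)
f_3d = math.sqrt(5.0 / 2.0) * omega_z  # lowest m=0 quadrupole mode, 3D TF limit
f_1d = math.sqrt(3.0) * omega_z        # same mode in the 1D mean-field limit
period_3d = 1e3 / f_3d               # ~90 ms, close to the observed 95 ms
period_1d = 1e3 / f_1d               # ~82 ms
```

The condensate sits between these two limits, so the observed $95\ \mathrm{ms}$ period being slightly above both calculated values is not unreasonable given the fit uncertainty.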
\section{Granulation}
A Faraday pattern is not observed for low frequency modulation, for which $\omega \ll \omega_r$. We find that as $\omega$ is reduced, both the modulation time $t_m$ and the modulation amplitude $\Delta B$ must be increased in order to observe any change. As these parameters are increased, more spatial frequencies contribute (see Fig.~\ref{fig:fig3}), and as $t_m$ and $\Delta B$ are increased further, we observe random patterns spanning a broad spatial frequency range, resembling grains~\cite{Yukalov2014, Yukalov2015}. We do not observe a significant thermal fraction either before or after modulation, and therefore we attribute the observed granular patterns to quantum fluctuations and use a theory applicable to pure states.
In Fig.~\ref{fig:fig6}(a) we show experimental images and compare them to Gross-Pitaevskii (GP) simulations. Note also that the axial and radial trap frequencies in this section are $\omega_z = (2\pi)8\ \mathrm{Hz}$ and $\omega_r = (2\pi)254\ \mathrm{Hz}$, respectively. We observe that granulation is remarkably persistent in time after the modulation is turned off, and that its structure is random between different experimental runs. GP simulations for similar parameters are shown in Fig.~\ref{fig:fig6}(b). In contrast to the experimental images, the GP simulations produce column density distributions that resemble Faraday waves, with a regularly spaced pattern. Without a stochastic component, the GP model is only a crude approximation. The qualitative difference between the observations in Fig.~\ref{fig:fig6}(a) and the GP simulations in Fig.~\ref{fig:fig6}(b) suggests that the observed state of the atoms in the experiment goes beyond what GP mean-field theory can describe.
\begin{figure}
\includegraphics[width=0.49\textwidth]{figure6a.png}
\includegraphics[trim={15cm 0 0 0},clip, width=0.42\textwidth]{figure6b.png}
\caption{(a) Experimental images and (b) GP simulations of column density images for several values of $t_h$, and with $\omega = (2\pi)70\ \mathrm{Hz}$, and $t_m = 250\ \mathrm{ms}$. The axial and radial trap frequencies for the experiments and simulations in this section are $\omega_z = (2\pi)8\ \mathrm{Hz}$ and $\omega_r = (2\pi)254\ \mathrm{Hz}$, respectively. (a) For the experiment, $\bar{B} = 577.4\ \mathrm{G}$ and $\Delta B = 41.3\ \mathrm{G}$, corresponding to $\bar{a} = 5 a_0$, $a_+ = 15 a_0$ and $a_- = -1 a_0$. Each image, with indicated $t_h$, is a separate realization of the experiment. (b) Cylindrically symmetric $3$D GP simulations where the calculated $3$D densities are integrated along one transverse direction to produce $2$D column densities. For the simulations, $a_+ = 20 a_0$ and $a_- = 0.5 a_0$. }
\label{fig:fig6}
\end{figure}
The GP ansatz is a product over $N$ copies of one single-particle state $\phi_{GP}$: $\Psi_{GP} \sim \prod_{k=1}^{N}\phi_{GP}(r_k)$. This is a ``mean-field state'' because all particles in the many-body system occupy the single-particle state $\phi_{GP}(r)$. A GP product state cannot describe correlations, where the properties of one or several particles in the many-body system depend on the properties of other particles in it. We go beyond the mean-field GP theory by employing the multiconfigurational time-dependent Hartree for bosons method (MCTDHB or MB), which can account for many-body correlations. The MCTDHB ansatz incorporates all possible configurations $(n_1, ..., n_M)$ of $N$ particles in $M$ single-particle states, $\vert \Psi \rangle = \sum_{n_1, n_2, ..., n_M} C_{n_1, n_2, ..., n_M} \vert n_1, ..., n_M\rangle$. The MCTDHB ansatz can therefore self-consistently describe correlations in the many-body state~\cite{SupplMat}.
We simulate the \emph{in-situ} single-shot images \cite{Sakmann2016,Lode2017} from the wavefunctions obtained with MCTDHB for the various experimental parameters and for $M=2$ modes (see Supplemental Materials \cite{SupplMat} and Refs. \cite{Lode2016b,Fasshauer2016,ultracold.org,Brezinova2012,Sakmann2008,Penrose1956,Spekkens1999,Bouchoule2012,Roati2008} therein). The simulated single-shot images correspond to drawing random samples from the $N$-particle density $\vert \Psi(r_1,..., r_N) \vert^2$ of the many-body state. Single-shot images thus contain information about quantum fluctuations and correlation functions of all orders, and the average of many such single-shot images corresponds to the density. Due to computational constraints, at present, we can only perform $1$D simulations. Along the axial direction, the experimental data show grains that are typically $4$--$10\ \mu\mathrm{m}$ in length, while granulation is suppressed transversely, justifying the $1$D approximation and our comparison of $1$D theory with the experimental line densities.
The simulation of single-shot images requires a model of the many-body probability distribution $\vert\Psi(r_1, ..., r_N)\vert^2$, as provided by MCTDHB. Classical field methods, in contrast, approximate the time-evolution of expectation values using ``classical-field trajectories'', i.e., solutions of the GP equation with stochastic initial conditions. These classical-field methods, however, do not supply a model of the $N$-particle probability distribution $\vert\Psi(r_1, ..., r_N)\vert^2$ from which single shots can be simulated~\cite{Sakmann2016}.
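The sampling step behind simulated single shots can be illustrated for the simplest case. For a product (mean-field) state, the $N$-particle density factorizes, so a single shot is just $N$ independent draws from the one-body density; the MCTDHB single-shot algorithm of Ref.~\cite{Sakmann2016} instead draws atoms sequentially from conditional densities, which is what allows correlated grains to appear. The sketch below (function names, grid, and atom number are ours) implements only the simplified product-state version via inverse-CDF sampling.

```python
import bisect, math, random

def single_shot(grid, density, n_atoms=1000, bins=60, seed=0):
    """Simulate one in-situ shot for a *product* (mean-field) state:
    each atom's position is an independent draw from the normalized 1D
    density on `grid`.  Returns a binned, simulated line density.
    (A correlated many-body state would require sequential draws from
    conditional densities instead.)"""
    rng = random.Random(seed)
    # build the cumulative distribution for inverse-CDF sampling
    total = sum(density)
    cdf, acc = [], 0.0
    for d in density:
        acc += d / total
        cdf.append(acc)
    # draw atom positions and histogram them
    counts = [0] * bins
    zmin, zmax = grid[0], grid[-1]
    for _ in range(n_atoms):
        i = bisect.bisect_left(cdf, rng.random())
        zpos = grid[min(i, len(grid) - 1)]
        b = min(int((zpos - zmin) / (zmax - zmin) * bins), bins - 1)
        counts[b] += 1
    return counts

# example: Gaussian one-body density on a grid from -50 to +50 um
grid = [(i - 100) * 0.5 for i in range(201)]
density = [math.exp(-(zi / 15.0) ** 2) for zi in grid]
shot = single_shot(grid, density)
```

For such an uncorrelated state the shot fluctuates only at the shot-noise level around the smooth density, which is precisely why grains in individual shots signal physics beyond this product-state picture.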
\begin{figure}
\vspace*{-10mm}\includegraphics[width=1.0\textwidth]{figure7.pdf}
\vspace*{-00mm}
\caption{Experimental and theoretical line density profiles. (a) Experimental data and (b) many-body simulations for different modulation frequencies. (a) The rows show data for three independent experimental images (``shots'') for the indicated $\omega$, where $\omega = 0$ corresponds to no modulation. Here, $\bar{B} = 590.8\ \mathrm{G}, \Delta B = 41.3\ \mathrm{G}$, corresponding to $\bar{a}=8a_0$, $a_+ = 20a_0$, $a_-=0.7a_0$, and $t_m = t_h = 250\ \mathrm{ms}$. (b) The first column shows the density $\rho(z,t)$ as calculated from the $1$D MB theory (see Supplemental Materials) while the second and third columns display two simulated single shots. We observe that granulation is present in single-shot images, but absent in the average, $\rho(z,t)$.}
\label{fig:fig7}
\end{figure}
Figure \ref{fig:fig7}(a) shows the line density for three independent experimental shots, and for four modulation frequencies, $\omega/2\pi = 0, 20, 60,$ and $80\ \mathrm{Hz}$, where $\omega = 0$ corresponds to no modulation. For these data, the time scales, $t_m = t_h = 250\ \mathrm{ms}$, are much longer than for the data discussed in the context of Faraday waves. The $1$D MB simulations of the density and, for comparison to experiment, two single shots are shown in Fig.~\ref{fig:fig7}(b). The single-shot simulations and experimental images are qualitatively similar, in contrast to the densities $\rho(z,t)$ obtained from the MB model. The shot-to-shot fluctuations in the single-shot simulations result from the fact that single shots are random samples distributed according to the many-body probability distribution $\vert \Psi(r_1, ..., r_N;t)\vert^2$. At $\omega = (2\pi) 20\ \mathrm{Hz}$, the experimental line density is somewhat broadened, perhaps indicating an excitation of low-lying quadrupolar oscillations. For $60\ \mathrm{Hz}$ modulation the single-shot images exhibit large minima and maxima, which are even more pronounced at $80\ \mathrm{Hz}$. Thus, we find that there is a threshold modulation frequency $\omega_c$, above which the line density is significantly altered. The density, corresponding to the average of a large number of single shots, does not exhibit grains; they only emerge in single-shot images.
Figure~\ref{fig:fig8} shows the $2^{nd}$ order correlation functions for the experiment $C^{(2)}(z,z^{\prime})$, and MB theory $g^{(2)}(z,z^{\prime})$, where both quantities are defined in the Supplementary Materials~\cite{SupplMat}. $C^{(2)}(z,z^{\prime})$ are evaluated using an average of up to $4$ experimental shots, whereas $g^{(2)}(z,z^{\prime})$ are computed directly from the MCTDHB wavefunctions.
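Since the precise definitions of $C^{(2)}$ and $g^{(2)}$ are deferred to the Supplemental Materials, the sketch below uses a generic shot-averaged estimator of a normalized density-density correlation, $\langle n(z)n(z^{\prime})\rangle / (\langle n(z)\rangle \langle n(z^{\prime})\rangle)$, purely to illustrate how such a map is built from a set of line-density shots; the function name and inputs are ours.

```python
def second_order_correlation(shots):
    """Estimate a normalized density-density correlation matrix
    C2[i][j] = <n_i n_j> / (<n_i> <n_j>), averaging over a list of
    line-density shots (each shot is a list of pixel values).
    Values near 1 indicate no 2nd-order correlations; >1 correlated,
    <1 anti-correlated regions."""
    n_shots = len(shots)
    n_pix = len(shots[0])
    mean = [sum(s[i] for s in shots) / n_shots for i in range(n_pix)]
    c2 = [[0.0] * n_pix for _ in range(n_pix)]
    for i in range(n_pix):
        for j in range(n_pix):
            num = sum(s[i] * s[j] for s in shots) / n_shots
            denom = mean[i] * mean[j]
            c2[i][j] = num / denom if denom > 0 else 0.0
    return c2
```

With only a handful of experimental shots, as here, such an estimator is noisy, which is why the comparison to theory in Fig.~\ref{fig:fig8} is qualitative.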
\begin{figure}
\vspace*{-10mm}\includegraphics[width=0.95\textwidth]{figure8.pdf}
\vspace*{-00mm}
\caption{$2^{nd}$ order correlation functions. (a) Correlation function $C^{(2)}(z,z^{\prime})$ calculated from the experimental data for $\omega = (2\pi) 20\ \mathrm{Hz}$. (b) Correlation function $g^{(2)}(z,z^{\prime})$ calculated from MB theory for the same parameters as (a). (c) $C^{(2)}(z,z^{\prime})$ calculated from the experimental data for $\omega = (2\pi) 80\ \mathrm{Hz}$. (d) $g^{(2)}(z,z^{\prime})$ calculated from MB theory for the same parameters as (c). For the non-granulated states ((a) and (b)), the correlation function is ${\sim}1$, indicating the absence of $2^{nd}$ order correlations. For the granulated states ((c) and (d)), regions with correlations (red hues) and anti-correlations (blue hues) emerge. Theoretical and experimental $2^{nd}$ order correlations qualitatively agree: they are flat for the non-granular states ((a) and (b)) and exhibit patterns of comparable length-scale and magnitude for granular states ((c) and (d)). All images correspond to $t_{h}=t_{m}=250\ \mathrm{ms}$ and $\bar{a} = 8a_0$, $a_{+}=20a_0$, and $a_{-}=0.5a_0$.}
\label{fig:fig8}
\end{figure}
In both the experiment (Fig.~\ref{fig:fig8}(a)) and MB theory (Fig.~\ref{fig:fig8}(b)), we find that when $\omega < \omega_c$ the condensate is practically uncorrelated, as evidenced by $C^{(2)}(z,z^{\prime})\approx g^{(2)}(z,z^{\prime}) \approx 1$. However, when $\omega > \omega_c$ we find that the relatively constant correlation plane evolves into smaller correlated and anti-correlated regions, as shown in Fig.~\ref{fig:fig8}(c) for the experiment and Fig.~\ref{fig:fig8}(d) for the MB theory.
To further characterize the granulated states, we plot the contrast parameter $\mathcal{D}$ at each modulation frequency in Fig.~\ref{fig:fig9}(a). $\mathcal{D}$ quantifies the deviation of a given set of single shots from a parabolic fit -- as discussed in the Supplementary Materials~\cite{SupplMat} and Fig.~S1 therein. A sharp threshold can be seen both in the experimental data and the simulations at $\omega_c \approx (2\pi) 30\ \mathrm{Hz}$, beyond which grains start to form. For $\omega<\omega_c$ the gas oscillates coherently without significant deviation from a Thomas-Fermi envelope.
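A toy version of such a deviation measure can be sketched as follows. The quantity computed here -- the normalized RMS residual of a quadratic fit to a single shot -- is an illustrative proxy, not the exact definition of $\mathcal{D}$ given in the Supplementary Materials.

```python
import numpy as np

def contrast_parameter(shot, z):
    """Toy proxy for the contrast parameter D: normalized RMS deviation of a
    single-shot profile from its best-fit (inverted) parabola. The actual
    definition of D is given in the paper's Supplementary Materials."""
    coeffs = np.polyfit(z, shot, 2)      # quadratic, Thomas-Fermi-like fit
    fit = np.polyval(coeffs, z)
    return np.sqrt(np.mean((shot - fit) ** 2)) / np.mean(shot)

z = np.linspace(-1, 1, 200)
smooth = 1 - 0.8 * z**2                                # near-parabolic: small D
grainy = smooth * (1 + 0.3 * np.cos(20 * np.pi * z))   # granulated: large D
assert contrast_parameter(smooth, z) < contrast_parameter(grainy, z)
```

A threshold behaviour like that in Fig.~\ref{fig:fig9}(a) would then appear as a jump of this residual above its no-modulation baseline.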
\begin{figure}
\includegraphics[width=1\textwidth,angle=-90]{figure9.pdf}
\caption{Granulation vs. $\omega$. (a) Comparison of the deviations from a Thomas-Fermi distribution as quantified by the contrast parameter $\mathcal D=\mathcal D(\omega)$~\cite{SupplMat} for single shots simulated with the MB theory with those taken in experiment (EXP). MB theory predicts the threshold value, $\omega_c \approx (2\pi) 30\ \mathrm{Hz}$, where deviations become large and grains form. Each symbol and its error bar are the mean and standard error of the mean of at least $4$ experimental measurements of $\mathcal D$, while $100$ single shots at each $\omega$ have been used for the MB simulations. (b) Eigenvalues of the first and second order RDM. A growth of all three is observed for $\omega > \omega_c$, indicating the emergence of correlations and fragmentation. The growth of both $n_2^{(1)}$ and $n_2^{(2)}$ occurs at $\omega \approx \omega_c$, with the drop in $n_2^{(2)}$ near $60\ \mathrm{Hz}$ corresponding to the subsequent growth in $n_3^{(2)}$.}
\label{fig:fig9}
\end{figure}
The threshold frequency $\omega_c$ can be understood by examining the 2$^{nd}$ largest eigenvalues (or occupations) $n_2^{(1)}$ and $n_2^{(2)}$ of the 1$^{st}$ and 2$^{nd}$ order reduced density matrices (RDMs), respectively (see Supplemental Materials \cite{SupplMat}), which are plotted in Fig.~\ref{fig:fig9}(b). These may be used as a measure of the departure of our MB model from mean-field states. Many-body systems, where multiple eigenvalues of the 1$^{st}$ order RDM are macroscopic (i.e.,~of order $N$), are termed \emph{fragmented}~\cite{Spekkens1999,Mueller2006}. At zero excitation only $n_1^{(1),(2)}$ are macroscopic while $n_2^{(1),(2)}$ are nearly zero. The latter increase substantially with $\omega$ beyond $\omega_c$, heralding the loss of 1$^{st}$ and 2$^{nd}$ order coherence and the emergence of correlations as shown in Fig.~\ref{fig:fig8}. At $\omega \approx (2\pi) 50 \ \mathrm{Hz}$ we observe a drop in $n_2^{(2)}$; however, it is accompanied by an increase in $n_3^{(2)}$ rather than an increase in $n_1^{(1),(2)}$. The MCTDHB computations thus show that the emergence of granulation is accompanied by the conversion of initial condensation (only a single macroscopic occupation~\cite{Sakmann2008}) into fragmentation.
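Numerically, the natural occupations are simply the eigenvalues of the (Hermitian) first-order RDM. The sketch below illustrates the condensed-vs-fragmented distinction on two toy diagonal RDMs; the matrices are invented for illustration and are not taken from the MCTDHB computations.

```python
import numpy as np

N = 10000  # illustrative particle number

def natural_occupations(rho1):
    """Natural occupations: eigenvalues of the first-order RDM rho1[i, j],
    sorted in descending order so n1 >= n2 >= ..."""
    vals = np.linalg.eigvalsh(rho1)   # RDM is Hermitian
    return np.sort(vals)[::-1]

condensed = np.diag([N - 5.0, 5.0])       # one macroscopic eigenvalue
fragmented = np.diag([0.6 * N, 0.4 * N])  # two macroscopic eigenvalues

n_cond = natural_occupations(condensed)
n_frag = natural_occupations(fragmented)
# condensed: n2/N is negligible; fragmented: n2/N is of order one
assert n_cond[1] / N < 0.01 and n_frag[1] / N > 0.1
```

The crossover seen in Fig.~\ref{fig:fig9}(b) corresponds to $n_2^{(1)}/N$ moving from the first regime toward the second as $\omega$ passes $\omega_c$.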
Both observations, the emergence of fragmentation and the loss of 2$^{nd}$ order coherence, underscore that the granulation of Bose-Einstein condensates is a many-body effect. The system thus cannot be described by a mean-field product state any longer and has left the realm of GP theory. Although the transition to fragmentation is not sharp -- the natural occupations $n_i^{(1),(2)}$ take on continuous values -- it is well established at sufficiently large $\omega$. Granulation features randomly-distributed variably-sized grains of atoms which can be observed in single shot images. Fragmentation, or depletion, on the other hand, is characterized by the reduced density matrix and its (macroscopic) eigenvalues and is not necessarily accompanied by granulation of the density~\cite{Spekkens1999,Mueller2006}. In our close-to-one-dimensional setup, we observe granulation to emerge side-by-side with fragmentation.
\begin{figure}
\includegraphics[width=0.85\textwidth,angle=-90]{figure10.pdf}
\caption{Time-evolution, coherence, and fragmentation from simulations. (a,d) The density $\rho(z,t)$, (b,e) the first-order spatial correlation function $|g^{(1)}(z,-z)|$, and (c,f) the natural occupations $n^{(1)}_k(t)$ are plotted vs.~time $t$. $n_1^{(1)}$ is denoted by the black line, while $n_2^{(1)}$ is indicated by the yellow line. Panels (a)--(c) are calculated with $\omega = (2\pi)20\ \mathrm{Hz}< \omega_c$ and panels (d)--(f) with $\omega = (2\pi)80\ \mathrm{Hz}> \omega_c$. All other parameters are given in the Fig.~\ref{fig:fig7} caption. The onset and formation of granulation is inferred from the simultaneous drop in the values of $|g^{(1)}|$ and $n^{(1)}_1$, indicating the emergence of spatial correlations and fragmentation, respectively. }
\label{fig:fig10}
\end{figure}
The dynamical evolution, as calculated from the MB theory, of the density is shown in Fig.~\ref{fig:fig10}(a) and \ref{fig:fig10}(d) for $\omega < \omega_c$ and $\omega > \omega_c$, respectively. In both cases, the modulation of the Thomas-Fermi radius follows the external perturbation. Once the modulation is turned off, the radius oscillates at its natural quadrupolar frequency. The $1^{st}$-order spatial coherence is shown in Fig.~\ref{fig:fig10}(b,e) for the same parameters. The patterns that emerge and persist in $g^{(1)}(z,z^{\prime})$ demonstrate that spatial correlations between particles at distinct and distant locations in the granular state are present [Fig.~\ref{fig:fig10}(e)]. The length-scale of the patterns in $g^{(1)}(z,z^{\prime})$ is similar to what is seen in Fig.~\ref{fig:fig8} for $g^{(2)}(z,z^{\prime})$. We infer that the process of granulation in a BEC is accompanied by the emergence of non-local correlations in the many-body state. Fig.~\ref{fig:fig10}(f) shows the emergence of two macroscopic eigenvalues of the reduced one-body density matrix for $\omega > \omega_c$. While these so-called natural occupations are unaffected by modulation for $\omega < \omega_c$, as seen in Fig.~\ref{fig:fig10}(c), $\omega > \omega_c$ results in the second natural orbital being macroscopically populated, and hence, in the fragmentation of the BEC [Fig.~\ref{fig:fig10}(f)]. An examination of the total energy per particle ($E_t$) imparted during modulation for a time $t_m$ shows that $E_t \approx 22\ \mathrm{nK}$ when $\omega = (2\pi) 20 \ \mathrm{Hz}$, and $E_t \approx 36\ \mathrm{nK}$ when $\omega = (2\pi) 80\ \mathrm{Hz}$, both of which are much less than the critical temperature $T_c\approx 330\ \mathrm{nK}$.
\begin{figure}
\includegraphics[width=1\textwidth]{figure11.pdf}
\caption{Experimental column densities showing the formation of grains. Representative column density images taken at different $t_m$. For each value of $t_m$, $\omega=(2\pi) 70\ \mathrm{Hz}$ and $t_h = 250\ \mathrm{ms}$. All other parameters are given in the Fig.~\ref{fig:fig7} caption. Each image is a different realization of the experiment.}
\label{fig:fig11}
\end{figure}
The onset of granulation observed experimentally is shown in Fig.~\ref{fig:fig11}. The condensate was modulated at $\omega= (2\pi) 70 \ \mathrm{Hz}$ for various $t_m$ followed by $t_h=250\ \mathrm{ms}$. For $t_m < 100\ \mathrm{ms}$ there is no discernible difference between the modulated and unmodulated ($t_m = 0 \ \mathrm{ms}$) cases, but for $t_m > 100\ \mathrm{ms}$ grains are observed to form. Consistent with Fig.~\ref{fig:fig10}, the transition to a granulated state is gradual with increasing $t_m$. The observed grains are also long-lived in comparison to Faraday waves, as shown in Fig.~\ref{fig:fig6}(a) and Fig.~\ref{fig:fig5}(a), respectively.
The transition to granular states occurs due to the presence of quantum correlations. The $2^{nd}$ order correlations, shown in Fig.~\ref{fig:fig8}, and the $1^{st}$ order, non-local correlations, shown in the middle panel of Fig.~\ref{fig:fig10} result from modulating the interaction and do not disappear after the modulation is stopped. Our modeling of the state on the many-body level suggests that granulation represents a dynamical many-body state characterized by the presence of quantum fluctuations, correlations, fragmentation, and their persistence in time.
Granular states feature random patterns and lack periodicity in their distributions, distinguishing them from Faraday and shock waves~\cite{Perez2004}. The multi-characteristic nature of quantum grains is supported by our observation of additional anomalous features in real and momentum space. Indeed, we find signatures of different co-existing phases of perturbed quantum systems such as quantum turbulence and localization in granulated states. We verified that the density in momentum space (as calculated from the MB theory) of the granulated state shows clear signs of a $k^{-2}$ power-law scaling (see Supplemental Materials \cite{SupplMat} and Fig.~S2 therein) which indicates a connection to turbulent BECs~\cite{Thompson2014,Navon2016,Tsatsos2016}.
\section{Conclusions}
We have explored the response of a BEC to modulated interactions. In the regime where the drive frequency $\omega \gtrsim \omega_r$, the drive couples to parametric and resonant modes that result in $1$D spatial pattern formation. For $\omega$ near resonance with $2\omega_r$ or $\omega_r$, very little modulation time and amplitude are required to produce a significant response. Near these resonances the condensate undergoes breathing oscillations that persist for a long time, resulting in the formation of Faraday and resonant mode patterns for $t_h >0$. A pattern is also observed off-resonance, but only with increased modulation amplitude and modulation time. Due to the long modulation time, the resulting pattern can be seen at $t_h=0$, and is a direct consequence of the applied modulation. The dispersion relation of both Faraday and resonant modes is well-represented by a mean-field theory that accounts for the 3D nature of the elongated condensate.
For lower drive frequencies, the modulated interactions only weakly couple to the condensate. Significant response is achieved only by increasing the modulation amplitude and time, and then, only above a critical modulation frequency $\omega_c$. Fluctuating and irregular spatial patterns, that we define as grains, may then emerge and persist for long periods of time. A theoretical description of granulation requires approaches that go beyond mean-field theory, indicating that quantum granulation is characterized by non-local many-body correlations and quantum fluctuations.
\begin{acknowledgments}
This work was supported in part by the Army Research Office Multidisciplinary University Research Initiative (Grant No. W911NF-14-1-0003), the Office of Naval Research, the NSF (Grant No. PHY-1707992), the Welch Foundation (Grant No. C-1133), the Austrian Science Foundation (FWF) under grant No. F41(SFB `ViCoM') and No. P32033, the Wiener Wissenschafts- und TechnologieFonds (WWTF) project No. MA16-066 (`SEQUEX') and by FAPESP, under CEPID program (Grant No. 2013/07276-1). Computational time in the High-Performance Computing Center Stuttgart (HLRS) is gratefully acknowledged. We also thank Mustafa Amin for valuable discussions.
\end{acknowledgments}
\section{Introduction}
The Next-to-Minimal Supersymmetric Standard Model (NMSSM) is a well-motivated construction that addresses the $\mu$ problem of the MSSM through the inclusion of an extra singlet field, $S$, which mixes with the Higgs $SU(2)$ doublets and whose vacuum expectation value after electroweak symmetry breaking (EWSB) generates an effective EW-scale $\mu$ parameter~\cite{Kim:1983dt} (see, e.g., Ref.~\cite{Ellwanger:2009dp} for a review).
Among its many virtues, the NMSSM possesses a very interesting phenomenology, mainly due to its enlarged Higgs sector. For example, the mixing of the Higgs doublet with the new singlet field opens the door to very light scalar and pseudoscalar Higgs bosons with interesting prospects for collider searches.
Moreover, in the NMSSM the mass of the Higgs boson also receives new tree-level contributions from the new terms in the superpotential~\cite{Cvetic:1997ky,Barger:2006dh}, which can make it easier to reproduce the observed value~\cite{Hall:2011aa,Ellwanger:2011aa,Arvanitaki:2011ck,King:2012is,Kang:2012sy,Cao:2012fz,Ellwanger:2012ke}.
In addition, the amount of fine-tuning of the model~\cite{BasteroGil:2000bw,Ellwanger:2014dfa,Kaminska:2014wia} is reduced when compared to the MSSM.
In order to explain the smallness of neutrino masses, the NMSSM can be extended to include a see-saw mechanism,
by adding singlet superfields that incorporate right-handed (RH) neutrinos (and sneutrinos)~\cite{Kitano:1999qb,Deppisch:2008bp}. In the resulting extended scenario the lightest RH sneutrino state is a viable dark matter (DM) candidate~\cite{Cerdeno:2008ep} with interesting phenomenological properties and a mass that can be as small as a few GeV~\cite{Cerdeno:2014cda}.
Supersymmetric (SUSY) models are characterized by the soft supersymmetry-breaking terms. The MSSM can be defined in terms of scalar masses, $m_a$, gaugino masses $M_i$, and trilinear parameters, $A_{ij}$.
The NMSSM also contains a new set of couplings: a singlet trilinear superpotential coupling, $\kappa$, and the strength of mixing between the singlet and Higgs doublets, $\lambda$. In addition, there are the corresponding supersymmetry breaking trilinear potential terms ${A_\lambda}$ and ${A_\kappa}$.
These input parameters can be defined at low-energy, in which case they would enter directly in the corresponding mass matrices to compute the physical masses of particles after radiative EWSB. This {\em effective} approach does not address the origin of the soft terms and, instead, tries to be as general as possible.
However, if SUSY models are understood as originating from supergravity theories (which in turn can correspond to the low-energy limit of superstring models), the soft parameters can be defined at some high scale as a function of the moduli of the supergravity theory. In this case, the renormalization group equations (RGEs) are used to obtain the low-energy quantities and ultimately the mass spectrum \cite{ds,eghrz,fot}.
Although in principle the number of parameters is very large ($\gtrsim 100$), certain simplifying conditions can be imposed, which rely on the nature of the underlying supergravity (or superstring) model. A popular choice is to consider that the soft parameters are {\em universal} at the Grand Unification (GUT) scale, i.e., $m_a=m_0$, $M_i=m_{1/2}$, and $A_{ij}=A_0$ \cite{Drees:1992am,Kane:1993td,Ellis:1996xu,Ellis:1997wva,Baer:1995nc,Baer:1997ai,Ellis:2002rp,Ellis:2003cw,Chattopadhyay:2003xi,Ellis:2012aa}.
When applied to the MSSM, the resulting Constrained MSSM (CMSSM) has only four free parameters (including the ratio of the Higgs expectation values, $\tan\beta$) plus the sign of the $\mu$ parameter. The phenomenology of the CMSSM has been thoroughly investigated in the past decades. Current Large Hadron Collider (LHC) constraints set stringent lower bounds on the common scalar and gaugino masses, while viable neutralino DM further restricts the available regions of the parameter space (for an update of all these constraints, see Ref.~\cite{Buchmueller:2013rsa,Bagnaschi:2015eha}).
The universality condition can also be imposed in the context of the NMSSM.
The resulting constrained NMSSM (CNMSSM) also contains four free parameters which we choose as\footnote{Note that in the CMSSM, the value of $\mu$
and the supersymmetry breaking bilinear term, $B_0$, are fixed by the two conditions derived in the minimization of the Higgs potential.
In the NMSSM, we lose $\mu$ and $B_0$ as free parameters (the latter is replaced with $A_\lambda$, which is set equal to $A_0$). Thus, the two additional parameters $\lambda$ and $\kappa$, can be fixed by the three minimization conditions (which must also fix the expectation
value of the scalar component of $S$). In practice, as will be discussed in more detail below, we allow $\lambda$ to remain
free, using the minimization conditions to fix $\kappa$ and $\tan \beta$. In this sense, the CNMSSM is constructed from the {\em same}
number of free parameters as used in the CMSSM.}: $m_0$, $m_{1/2}$, $\lambda$, and $A_0={A_\lambda}={A_\kappa}$, and its phenomenology has been discussed in detail in Ref.~\cite{Djouadi:2008uj}.
It was pointed out there that recovering universal conditions for the singlet mass at the GUT scale with the correct EW vacuum at low energy often requires a small universal scalar mass, satisfying $3 m_0 \sim - A_0 \ll m_{1/2}$.
In order for the singlet Higgs field to develop a vacuum expectation value (VEV) to fix the EW vacuum, we must require that
$|A_0|$ is large compared to $m_0$.
As a consequence, particularly due to small $m_0$, the predicted mass range of the SM-like Higgs boson is hard to reconcile with the observed value of $m_h \simeq 125$ GeV.
In addition, large $|A_0|$ (compared to $m_0$) is also problematic as in this case, the stau tends to be tachyonic.
In fact, this is one of the main obstacles for obtaining the observed value for the Higgs boson mass.
Furthermore, in the CNMSSM, the lightest SUSY particle (LSP) is generally either the lighter stau or the singlino-like neutralino~\cite{Ellwanger:2014hia,Ellwanger:2015axj}.
The stau, being a charged particle, cannot be dark matter, and
the appropriate thermal relic abundance of the singlino-like neutralino can be realized only in limited stau-neutralino co-annihilation regions.
In this paper, we show that these problems can be alleviated if the NMSSM is extended to include RH neutrino superfields,
which couple to the singlet Higgs through a new term in the superpotential.
First, the extra contributions to the RGEs help achieve unification of the soft masses for
smaller values of the scalar and gaugino masses. This also allows more flexibility in the choice of the trilinear parameters.
Due to the RGE running of the soft mass of singlet Higgs field through its couplings with RH neutrinos, the realization of the EW vacuum becomes somewhat easier than in the NMSSM without RH neutrinos.
We find that the lightest RH sneutrino can be the LSP in wide areas of the parameter space, where the smallest coupling between RH neutrinos and the singlet Higgs field needs to be as small as ${\lambda_N}\sim10^{-4}$. As a result, the stau LSP region is significantly reduced and scalar masses as large as $m_0 \sim 10^3$ GeV are possible,
making it easier to obtain a SM-like Higgs boson with the right mass.
Likewise, for the neutralino LSP case with moderate values of ${\lambda_N}\sim10^{-2}$, the modification of the RGE of the singlet Higgs is effective and expands (reduces)
the neutralino (stau) LSP region. As a result, in this case as well, the observed SM-like Higgs boson mass can be obtained.
In both cases the small couplings to SM particles of either the RH sneutrino LSP or the neutralino LSP result in a thermal relic abundance which is in excess of the observed DM density and some kind of late-time dilution is needed.
The structure of this article is the following. In Section~\ref{sec:rges}, we review the main features of the NMSSM with RH sneutrinos, we study the RGEs of the Higgs parameters, comparing them to those of the usual NMSSM, and we describe our numerical procedure. In Section \ref{sec:results}, we carry out an exploration of the parameter space of the theory, including current experimental constraints, and study the viable regions with either a neutralino or RH sneutrino LSP. We also compare our results with the ordinary NMSSM.
Finally, our conclusions are presented in Section \ref{sec:conclusions}. Relevant minimization equations and beta functions are
given in the Appendix.
\section{RGEs and universality condition}
\label{sec:rges}
The NMSSM is an extension of the MSSM and includes new superpotential terms
\begin{equation}
W_{\rm{NMSSM}} = (y_u)_{ij} Q_i \cdot H_2 U_j +(y_d)_{ij} Q_i \cdot H_1 D_j + (y_e)_{ij} L_i \cdot H_1 E_j
+ \lambda S H_1 \cdot H_2 + \kappa S^3\ ,
\label{eq:superpotential}
\end{equation}
where the dot denotes the antisymmetric product and flavour indices, $i,j=1,\,2,\,3$, are explicitly included.
The model discussed here consists of the full NMSSM,
and is extended by adding RH neutrino/sneutrino chiral superfields. This model was introduced in Refs.~\cite{Cerdeno:2008ep,Cerdeno:2009dv} (based on the construction in~\cite{Kitano:1999qb,Deppisch:2008bp}), where it was shown that the lightest RH sneutrino state is a viable candidate for DM. In previous works, only one RH neutrino superfield was considered, but here we extend the construction to include three families, $N_i$, in analogy with the rest of the SM fields and to account for three massive active neutrinos.
The NMSSM superpotential, $W_{\rm{NMSSM}}$, has to be extended in order to accommodate these new states,
\begin{equation}
W=W_{\rm{NMSSM}}+{(\lambda_N)_{ij}} S N_i N_j + (y_N)_{ij} L_i \cdot H_2 N_j\ .
\label{eq:superpotentialN}
\end{equation}
The new terms link the new chiral superfields with the singlet Higgs, $S$, with couplings ${\lambda_N}$. Similarly, the new Yukawa interactions, $y_N$, couple the RH neutrino superfields to the second doublet Higgs, $H_2$, and the lepton doublet, $L$.
In addition, the total Lagrangian of the model is,
\begin{equation}
-\mathcal{L}=-\mathcal{L}_{\rm{NMSSM}} + {(m^2_{\tilde{N}})_{ij}} \tilde{N}_i\tilde{N}_j^* + \left({(\lambda_N)_{ij}} {(A_{\lambda_N})_{ij}} S\tilde{N}_i\tilde{N}_j +(y_N)_{ij}(A_{y_N})_{ij}\bar{L}_iH_2 \tilde{N}_j + h.c. \right),
\label{eq:softlagrangian}
\end{equation}
where $\mathcal{L}_{\rm{NMSSM}}$ includes the scalar mass terms and trilinear terms of the NMSSM and $\mathcal{L}$
includes new $3\times 3$ matrices of trilinear parameters, ${A_{\lambda_N}}$ and $A_{y_N}$, and a $3\times 3$ matrix of squared soft masses for the RH sneutrino fields, ${m^2_{\tilde{N}}}$. In our analysis, we will consider that all these matrices are diagonal at the GUT scale.
As pointed out in Ref.~\cite{Cerdeno:2008ep}, the neutrino Yukawa parameters are small, $(y_N)_{ij}\lesssim 10^{-6}$, since the neutrino Majorana masses generated after EWSB are naturally of the order of the EW scale. Thus, they play no relevant role in the RGEs of the model and can be safely neglected.
The new parameters ($\lambda_N, A_{\lambda_N}$) are chosen to be real.
Finally, we will extend the universality conditions to the new soft parameters, thus demanding
\begin{eqnarray}
m_S^2&=&m_0^2\,,\nonumber\\
{(m^2_{\tilde{N}})_{ij}}&=&\mathrm{diag}\left( m^2_0,\, m^2_0,\,m^2_0\right)\, ,\nonumber\\
{(\lambda_N)_{ij}}&=&\mathrm{diag}\left( \lambda_{N_1},\, \lambda_{N_2},\,\lambda_{N_3} \right)\ ,\nonumber\\
{A_\lambda}={A_\kappa}&=&A_0 \, ,\nonumber \\
{(A_{\lambda_N})_{ij}}=(A_{y_N})_{ij}&=&\mathrm{diag}\left( A_0,\, A_0,\,A_0\right)\ ,
\label{eq:unificationcond}
\end{eqnarray}
at the GUT scale, which is defined as the scale where gauge couplings of $SU(2)_L$ and $U(1)_Y$ coincide.
\subsection{Radiative EW symmetry-breaking and the singlet soft mass}
Using the values of the soft terms, defined at the GUT scale, the RGEs can be numerically integrated down to the EW scale. After EWSB, the minimization conditions of the scalar potential leave three tadpole equations for the VEVs of the three Higgs fields. At tree level, these are
\begin{eqnarray}
\frac{\partial V}{\partial \phi_d}&=&\frac{v_sv_u \lambda}{2} (- \sqrt{2}A_\lambda - \kappa v_s)-\frac{(g_1^2+g_2^2)}{8}v_d(v_u^2-v_d^2)+ m_{H_d}^2v_d + \frac{\lambda^2}{2}(v_s^2+v_u^2)v_d, \label{Eq:Stat:vd}\\
\frac{\partial V}{\partial \phi_u}&=&\frac{v_sv_d \lambda}{2} (- \sqrt{2}A_\lambda - \kappa v_s)+\frac{(g_1^2+g_2^2)}{8}v_u(v_u^2-v_d^2)+ m_{H_u}^2v_u + \frac{\lambda^2}{2}(v_s^2+v_d^2)v_u, \label{Eq:Stat:vu}\\
\frac{\partial V}{\partial \phi_s}&=&\frac{v_s}{2}(\sqrt{2}A_\kappa \kappa v_s + 2m_S^2 + \lambda^2(v_d^2+v_u^2)-2\kappa\lambda v_u v_d+2\kappa^2v_s^2)-\frac{A_\lambda\lambda}{\sqrt{2}} v_uv_d.\label{Eq:Stat:vs}
\end{eqnarray}
As noted earlier, using the measured value of the mass of the $Z$ boson, $M_Z$, and its relation to the Higgs doublet VEVs, $v_u$ and $v_d$, the conditions for correct EWSB allow us to determine the combination $\tan\beta\equiv v_u/v_d$, and $v_s$, as well as one additional parameter which we take as $\kappa$.
Thus, the constrained version of the NMSSM can be defined in terms of four universal input parameters,
\begin{equation}
m_0,\ m_{1/2},\ \lambda ,\ A_0={A_\lambda}={A_\kappa}\,.
\label{Eq:inputs}
\end{equation}
In practice, however, solving the system of tadpole equations is in general easier if one fixes the value of $\tan \beta$ and uses the
tadpole conditions to determine the soft mass of the singlet Higgs, $m_S^2$. Although this generally results in a non-universal mass for $m_S$, it is then possible to iteratively find the value of $\tan \beta$ such that $m_S = m_0$.
More specifically, using the above tree-level expressions (for illustrative purposes), a combination of Eqs.~\eqref{Eq:Stat:vd} and \eqref{Eq:Stat:vu} leads to
\begin{eqnarray}
\mu_{\rm eff}^2 \equiv \frac{1}{2} (\lambda v_s)^2 =
-\frac{1}{2}M_Z^2
- \frac{ m_{H_u}^2 \tan\beta^2 -m_{H_d}^2 }{\tan\beta^2 - 1}.
\label{Setlow:mueff}
\end{eqnarray}
Since $\lambda$ is an input free parameter, we can use it to define $v_s$ as
\begin{equation}
v_s = \pm \sqrt{\frac{2 \mu_{\rm eff}^2}{\lambda^2}}.
\label{eq:tadvs}
\end{equation}
The sign of $v_s$ plays the role of the sign of $\mu$-term in the CMSSM.
From another combination of Eqs.~\eqref{Eq:Stat:vd} and \eqref{Eq:Stat:vu} we obtain
\begin{equation}
(B\mu)_{\rm eff} \equiv
\frac{\lambda v_s}{\sqrt{2}} ( A_{\lambda} + \frac{1}{\sqrt{2}} \kappa v_s)
= \frac{\sin2\beta}{2}( m_{H_u}^2 +m_{H_d}^2 + 2\mu_{\rm eff}^2 ),
\label{stateq:doublet}
\end{equation}
which allows us to solve for $\kappa$
\begin{equation}
\kappa
= \frac{\sqrt{2}}{v_s}\left( -A_{\lambda} + \frac{(B\mu)_{\rm eff} }{\mathrm{sgn}(\mu_{\rm eff})\,\mu_{\rm eff}} \right) .
\label{eq:tadkappa}
\end{equation}
For the last parameter, $m_S^2$, we can use Eq.~\eqref{Eq:Stat:vs} in the form of
\begin{equation}
m_S^2
= - \left( \frac{1}{\sqrt{2}}A_{\kappa} \kappa v_s +\frac{1}{2} \lambda^2 (v_d^2 + v_u^2)
- \kappa \lambda v_u v_d + \kappa^2 v_s^2 \right) + \frac{1}{\sqrt{2}v_s}A_{\lambda} \lambda v_u v_d.
\label{eq:tadms2}
\end{equation}
The one-loop expressions can be found in the Appendix \ref{app:loopedminimization}.
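The tree-level chain of substitutions in Eqs.~\eqref{Setlow:mueff}--\eqref{eq:tadms2} can be collected into a short numerical routine. The sketch below is tree-level and uses illustrative soft masses (in GeV$^2$); the analysis itself relies on the one-loop expressions of the Appendix.

```python
import numpy as np

def solve_tadpoles(m_hu2, m_hd2, tan_beta, lam, A_lambda, A_kappa,
                   sign_vs=+1, MZ=91.19, v=246.0):
    """Tree-level tadpole solution: given the soft Higgs masses at the SUSY
    scale and the inputs (tan beta, lambda, A0), return (v_s, kappa, m_S^2).
    Illustrative only; one-loop-corrected expressions are used in practice."""
    beta = np.arctan(tan_beta)
    v_u, v_d = v * np.sin(beta), v * np.cos(beta)
    t2 = tan_beta**2
    # mu_eff^2 from the combination of the two doublet tadpole equations
    mu_eff2 = -0.5 * MZ**2 - (m_hu2 * t2 - m_hd2) / (t2 - 1.0)
    if mu_eff2 <= 0:
        raise ValueError("no consistent EWSB for these inputs")
    v_s = sign_vs * np.sqrt(2.0 * mu_eff2) / lam   # sign of v_s ~ sign of mu
    mu_eff = lam * v_s / np.sqrt(2.0)
    # (B mu)_eff from the second doublet combination, then kappa
    Bmu_eff = 0.5 * np.sin(2 * beta) * (m_hu2 + m_hd2 + 2.0 * mu_eff2)
    kappa = (np.sqrt(2.0) / v_s) * (
        -A_lambda + Bmu_eff / (np.sign(mu_eff) * np.sqrt(mu_eff2)))
    # singlet tadpole equation solved for m_S^2
    m_s2 = -(A_kappa * kappa * v_s / np.sqrt(2.0)
             + 0.5 * lam**2 * (v_d**2 + v_u**2)
             - kappa * lam * v_u * v_d + kappa**2 * v_s**2) \
        + A_lambda * lam * v_u * v_d / (np.sqrt(2.0) * v_s)
    return v_s, kappa, m_s2

# illustrative inputs: m_Hu^2 = -2e5, m_Hd^2 = 5e5 GeV^2, tan(beta) = 10
v_s, kappa, m_s2 = solve_tadpoles(-2.0e5, 5.0e5, 10.0, 0.1, -3000.0, -3000.0)
```

The returned $m_S^2$ is what the iterative procedure described below compares against the universal value $m_0^2$ at the GUT scale.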
The above procedure assumes $\tan \beta$ is free, but in our analysis we add one extra step: for each point in the parameter space, we vary the value of $\tan\beta$ in order to impose $m_S^2({\rm GUT})=m_0^2$, within a tolerance of $\sim 1\%$. If this universality condition cannot be achieved, the point is discarded. This procedure was outlined in Ref.~\cite{Djouadi:2008yj}. Thus, at the end of this iterative process, the free parameters are those in Eq.~\eqref{Eq:inputs}.
This prescription has been applied in the literature to study the phenomenology of the CNMSSM. A first point to note is that the resulting value of $m_S^2$ at the EW scale from Eq.~\eqref{eq:tadms2} is often negative~\cite{Ellwanger:2006rn}, which makes it difficult to satisfy the universality condition.
In particular, it was found in~\cite{Djouadi:2008uj} that the resulting value of $\tan\beta$ in the CNMSSM is generally large, as is the value of the universal gaugino mass. As a result, the lightest stau is the LSP in the remaining viable areas of the parameter space (which poses a problem for incorporating DM in this scenario).
In order to alleviate this, a semi-constrained version of the NMSSM was explored in Ref.~\cite{Ellwanger:2006rn}, allowing for $m_S^2\neq m_0^2$ and $A_\kappa\neq A_0$ at the GUT scale.
In our extended model, the solution of the tadpole equations proceeds in the same way as in the CNMSSM. However, as we will argue in Section~\ref{sec:results}, the RH sneutrino contributes to the RGEs of the singlet and singlino and opens up the parameter space allowing
us to restore full universality.
In particular, the new terms in the superpotential and the soft breaking parameters
enter the 1-loop beta function for the scalar mass of the singlet Higgs, $m_S^2$, which is now given by
\begin{eqnarray}
\beta_{m_S^2}^{(1)} =
4 \Big(
3 m_S^2 |\kappa|^2 + |T_{\kappa}|^2 + |T_{\lambda}|^2 +
\left(m_{H_d}^2 + m_{H_u}^2 + m_S^2\right)|\lambda|^2\notag\\
+ m_S^2 \mbox{Tr}\left({\lambda_N \lambda_N }\right)
+2 \mbox{Tr}\left({{m^2_{\tilde{N}}} \lambda_N \lambda_N }\right)
+ \mbox{Tr}\left(T_{\lambda_N} T_{\lambda_N}
\right)
\Big).
\label{eq:1loop}
\end{eqnarray}
We have defined
$T_{g_i}= A g_{i}$, where $A$ is the soft trilinear term and $g_i$ is the corresponding coupling constant, $g_i= y_i,\ \lambda ,\ \kappa,\ \lambda_N$.
The first line corresponds to the usual NMSSM result, and the second line contains the new contribution from the coupling of the singlet to
the right-handed neutrino. For completeness, the two-loop expression is given in Eq.~(\ref{eq:twoloop}).
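A minimal numerical evaluation of this beta function, with diagonal real $\lambda_N$ as assumed in our setup, makes the effect of the new terms explicit: switching on the RH-neutrino couplings strictly increases $\beta_{m_S^2}$ and hence drives $m_S^2$ down faster when running toward the weak scale. The parameter values below are illustrative only.

```python
import numpy as np

def beta_mS2_1loop(mS2, mHd2, mHu2, kappa, lam, A_kappa, A_lambda,
                   lamN, mN2, A_lamN):
    """One-loop beta function for m_S^2 (Eq. (eq:1loop)) with diagonal, real
    lambda_N; the overall 16 pi^2 loop factor is absorbed into beta as in
    the paper's convention. T_g = A * g for each coupling g."""
    T_kappa, T_lambda = A_kappa * kappa, A_lambda * lam
    lamN, mN2, A_lamN = map(np.asarray, (lamN, mN2, A_lamN))
    T_lamN = A_lamN * lamN
    nmssm = (3 * mS2 * kappa**2 + T_kappa**2 + T_lambda**2
             + (mHd2 + mHu2 + mS2) * lam**2)          # first line of Eq. (eq:1loop)
    rh_nu = (mS2 * np.sum(lamN**2)                    # second line: RH-neutrino terms
             + 2 * np.sum(mN2 * lamN**2) + np.sum(T_lamN**2))
    return 4.0 * (nmssm + rh_nu)

# same illustrative point with lambda_N off and on (cf. Fig. 1 parameters)
common = dict(mS2=1e6, mHd2=1e6, mHu2=1e6, kappa=0.1, lam=0.01,
              A_kappa=-3500.0, A_lambda=-3500.0, mN2=[1e6] * 3,
              A_lamN=[-3500.0] * 3)
b_off = beta_mS2_1loop(lamN=[0.0, 0.0, 0.0], **common)
b_on = beta_mS2_1loop(lamN=[0.0002, 0.6, 0.6], **common)
assert b_on > b_off   # RH-neutrino contribution is manifestly positive
```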
\begin{figure}
\begin{center}
\scalebox{0.8}{
\includegraphics[scale=0.5]{figs/plots_RGEs_2.eps}
\includegraphics[scale=0.5]{figs/plots_RGEs_1.eps}
}
\caption{\footnotesize
2-loop RGE running of the soft Higgs mass parameters, $m_{H_d}^2$, $m_{H_u}^2$, and $m_S^2$, imposing the universality condition $m_0=1000$~GeV at the GUT scale,
with $A_0=-3.5\,m_0$, $m_{1/2}=4500$~GeV, and $\lambda = 0.01$ (the latter is input at the weak scale).
The plot on the left corresponds to the standard NMSSM (i.e., with ${\lambda_N}=0$). The plot on the right corresponds to the extended NMSSM with RH neutrinos for
$\lambda_N=(0.0002, 0.6,0.6)$, defined at the GUT scale.
The value of $\tan\beta$ has been fixed separately in each example in order to achieve universality. }
\label{fig:mv2tln}
\end{center}
\end{figure}
We show in Fig.\,\ref{fig:mv2tln} the running of the Higgs mass-squared parameters as a function of the renormalization scale. We have chosen an example where the soft terms unify at the GUT scale in the standard NMSSM (left) and in the extended NMSSM with RH neutrinos (right). As the RGE running in the two models differs, we require slightly different values of $\tan \beta$ to achieve $m_S = m_0$.
Enforcing the unification of the scalar singlet mass tends to be problematic for radiative EWSB
in models without the right-handed neutrino, as $m_S^2$ remains positive down to the weak scale.
As can be seen, the effect of the RH sneutrino fields on the running of the $m_S^2$ parameter is striking: in this example, they drive the positive singlet mass-squared term negative, which alleviates some of the tension in the choice of initial parameters.
\subsection{Details on the numerical code}
\label{subsec:numcode}
We have modified the supersymmetric spectrum calculator {\tt SSARD} \cite{ssard} by adding the necessary RGEs to include additional
terms needed in our extension of the NMSSM. The code numerically integrates the RGEs between the weak and GUT scales and solves the tadpole equations used to determine $\kappa$, $v_s$ and $m^2_S$ as outlined above. The output of this program is then passed through the public packages {\tt NMSSMTools 4.9.2}~\cite{Ellwanger:2004xm,Ellwanger:2005dv,Ellwanger:2006rn} and {\tt Micromegas 4.3}~\cite{Belanger:2014vza} in order to get the physical particle spectrum and the thermal component to the DM relic abundance.
{\tt SSARD} implements an iterative procedure to solve the RGEs as follows. Using weak scale inputs for the gauge and Yukawa couplings, the GUT scale is defined as the renormalization scale where the $SU(2)_L$ and $U(1)_Y$ gauge couplings coincide.
At this GUT scale, universal boundary conditions are imposed for all gaugino masses, $m_{1/2}$,
trilinear terms, $A_i=A_\lambda=A_\kappa=A_0$,
and scalar masses, $m^2_i=m_0^2$, but we leave $m_S^2({\rm GUT})$ as a free parameter.
The couplings $\lambda_N$ are also input at the GUT scale.
We then run the RGEs from the GUT to the SUSY scale, where we solve the tadpole equations (now including the tadpole condition for $S$) with the resulting values of the parameters. The coupling $\lambda$ is input at the weak scale.
Using these low-scale values, we then run the RGEs upwards, recalculating the GUT scale, and we iterate this procedure until a good stable solution is found.
As a final step, this procedure is repeated for different values of $\tan\beta$, searching for points in which the unification condition
$|1 - m_S^2({\rm GUT})/m_0^2| < 10^{-2}$ is satisfied.
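The outer layer of this search can be sketched schematically as follows. This is a minimal Python illustration of the $\tan\beta$ scan only: the function \texttt{m\_s2\_at\_gut} is a smooth toy surrogate standing in for the full iterative two-loop running performed by {\tt SSARD}, not the actual physics.

```python
import numpy as np

M0 = 1000.0  # GeV, universal scalar mass at the GUT scale

# Toy surrogate for the full SSARD pipeline: given tan(beta), it stands in for
# "run the RGEs iteratively and return m_S^2 at the GUT scale".  The real
# dependence comes from the two-loop running; here we use a smooth monotone
# function purely to demonstrate the search logic.
def m_s2_at_gut(tan_beta):
    return M0**2 * (0.2 + 0.02 * tan_beta)  # crosses m_0^2 near tan_beta = 40

def find_universal_tan_beta(lo=2.0, hi=60.0, tol=1e-2):
    """Bisect on tan(beta) until |1 - m_S^2(GUT)/m_0^2| < tol."""
    f = lambda tb: 1.0 - m_s2_at_gut(tb) / M0**2
    assert f(lo) * f(hi) < 0, "universality condition not bracketed"
    while True:
        mid = 0.5 * (lo + hi)
        if abs(f(mid)) < tol:
            return mid
        if f(lo) * f(mid) < 0:
            hi = mid
        else:
            lo = mid

tan_beta_star = find_universal_tan_beta()
print(tan_beta_star)  # close to 40 for this toy surrogate
```

In the actual code each evaluation of the surrogate corresponds to a full converged up-and-down RGE iteration, so the scan over $\tan\beta$ is the outermost and most expensive loop.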
Once the tadpole equations are solved for the points that fulfill the universality conditions, we collect all the parameters at EW scale and compute the
SUSY spectrum using the public package {\tt NMSSMTools 4.9.2} ~\cite{Ellwanger:2004xm,Ellwanger:2005dv,Ellwanger:2006rn}.
The code checks the scalar potential, looking for tachyonic states, the correct EW vacuum, divergences of the coupling at some scale
between the SUSY and GUT scales, as well as collider constraints from LEP and LHC, and low energy observables.
If a point is allowed, the program computes the SUSY spectrum for the given set of parameter values as well as the SM-like Higgs mass
with full 1-loop contributions and the 2-loop corrections from the top and bottom Yukawa couplings.
In order to test our procedure, we have also implemented our model in {\tt SARAH}~\cite{Staub:2009bi,Staub:2010jh,Staub:2012pb,Staub:2013tta,Staub:2015kfa},
which produces the model files for {\tt SPheno}~\cite{Porod:2003um,Porod:2011nf} to perform the running from the GUT to the EW scale.
We notice that even a ``small'' variation (within 10\%) of the parameters given as input to the numerical codes (such as $\lambda$, $A_0$, $m_0$, $m_{1/2}$)
can lead to very different values of the outputs, in particular of
$A_\lambda$, $\kappa$ and $m_S^2$. On the other hand, $v_s$ turns out not to be much affected by these variations,
since its tadpole equation depends mostly on $\tan\beta$ when $\tan\beta$ is large.
In particular, $A_\lambda$ is the most numerically unstable parameter.
This instability may induce differences in the soft mass of the singlet Higgs, $m_S^2$, although its RGE is rather stable and its low-scale value is only affected through the stationary conditions. Ultimately, $\tan\beta$ is the parameter to which the outputs are most sensitive. However, its value is eventually fixed by imposing the universality condition $m_S^2 = m_0^2$, so any residual differences in the parameters are reabsorbed. We have carried out several tests and found agreement within 10\% between the two codes. Moreover, we have also tested the codes in the pure NMSSM limit and found agreement within 10\% between {\tt SSARD} and {\tt NMSSMTools}.
\section{Results}
\label{sec:results}
In this section, we provide some numerical examples that illustrate the effect of adding RH sneutrinos in the four-dimensional NMSSM parameter space with universal conditions. Rather than performing a full numerical scan on all the parameters, we have selected some representative ($m_{1/2},m_0$) slices, and fixed $\lambda=0.01$, $A_0=-3.5\,m_0$.
The condition $3 m_0 \sim - A_0 \ll m_{1/2}$ is required to get the correct EW vacuum~\cite{Djouadi:2008uj}, as already stated in the Introduction.
In agreement with observed values, we have also fixed $m_{\rm top} =173.2$ GeV, $m_{\rm bottom} = 4.2$ GeV.
We have investigated three different scenarios. First, for comparison, we consider the Constrained NMSSM case, and then we study two scenarios of the extended model with RH sneutrinos. In particular, we consider one scenario with $\lambda_N=(0.0002, 0.6,0.6)$ (``\textit{small} $\lambda_N$'') and another one with $\lambda_N=(0.01, 0.6,0.6)$ (``\textit{large} $\lambda_N$''). The ``\textit{small} $\lambda_N$'' scenario is motivated by the fact that the RH sneutrino can be the LSP, whereas in the ``\textit{large} $\lambda_N$'' scenario the lightest neutralino can be the LSP.
\begin{figure}[t]
\begin{center}
\includegraphics[scale=0.45]{figs/wplot_001wo.eps}
\caption{\footnotesize
Higgs mass contour plot in the plane ($m_{1/2}$-$m_0$) for the CNMSSM scenario. We depict in magenta the region of the parameter space excluded for any of the following reasons: the existence of a vacuum deeper than the EW one; the presence of a tachyonic particle; experimental constraints from LEP, LHC and others (see text for a detailed description). In the brown shaded area the stau is the LSP, while in the white area the neutralino is the LSP. Red dashed contours account for the Higgs mass (in GeV), while the black lines represent the value of $\tan\beta$.}
\label{fig:nmssmcase}
\end{center}
\end{figure}
\paragraph{CNMSSM:}
Let us first focus on the pure CNMSSM case without RH neutrino fields. In Fig.~\ref{fig:nmssmcase}, we show the results of a numerical scan in the plane ($m_{1/2},m_0$).
We have imposed consistency with all experimental results, including ATLAS scalar searches~\cite{ATLAS:2014vga}, bounds on low energy observables,
such as $B_s\to \mu^+\mu^-$~\cite{Bobeth:2013uxa,Domingo:2015wyn} and $b \to s +\gamma$~\cite{Misiak:2015xwa,Domingo:2015wyn} by {\tt NMSSMTools},
and collider constraints on the masses of SUSY particles.
In Fig.~\ref{fig:nmssmcase}, the magenta area for large $m_0$ corresponds to parameter values which lead to a tachyonic stau,
whereas for small $m_0$ it is due to the ATLAS $h^0/H^0/A^0 \rightarrow\gamma\gamma$ searches~\cite{ATLAS:2014vga},
which can be used as a constraint on searches of a light Higgs boson that often appears in the general NMSSM
(this essentially rules out the region of the parameter space with $m_h < 122$ GeV).
Since the purpose of this paper is not to explain anomalies such as those observed in the measurement of the muon anomalous magnetic moment, $(g-2)_\mu$, or the $B^+ \to \tau^+ \nu_{\tau}$ branching ratio, we do not restrict our interest to such a parameter region.
The magenta area also represents an unavailable or excluded region where either the universal conditions are not realized, there are deeper vacua than the EW one, a sfermion or any Higgs boson is tachyonic, or any experimental bound is not fulfilled according to the constraints described in Section~\ref{subsec:numcode}.
The brown shaded area corresponds to the solutions where the universal conditions are fulfilled but the stau is the LSP, whereas in the remaining white area, the neutralino is the LSP.
The black contours represent the values of $\tan\beta$ necessary to achieve the universal conditions (seen here to lie in the range of $\tan\beta\sim40 - 50$), while the red dot-dashed contours show the SM-like Higgs mass. We notice that the experimentally observed Higgs mass is not achieved in the allowed region. Indeed, the highest value for the SM-like Higgs mass is around 124 GeV for large values of $\tan\beta$ ($\sim 50$), although this region
remains acceptable if we consider a $\pm 2$ GeV uncertainty in the calculation of the Higgs mass.
It has been pointed out in Ref.~\cite{Djouadi:2008uj} that the stau-neutralino coannihilation strip in the CNMSSM extends only up to values of $m_{1/2}$ of the order of a few TeV, which roughly corresponds to $m_{\tilde{\tau}_1} \lesssim 1$ TeV.
In this plot, this region is excluded due to constraints in the Higgs sector, as explained above.
\begin{figure}[t]
\begin{center}
\includegraphics[scale=0.45]{figs/wplot35_001w.eps}
\caption{\footnotesize
Higgs mass contour plot in the plane ($m_{1/2}$-$m_0$) for the ``\textit{small} $\lambda_N$'' scenario with $\lambda_N=(0.0002,0.6,0.6)$. The colour code is the same as in Fig.~\ref{fig:nmssmcase}, except that the white region now represents the case where the RH sneutrino is the LSP. Red dashed contours account for the Higgs mass (in GeV), while the black lines represent the value of $\tan\beta$.}
\label{fig:smalllambda}
\end{center}
\end{figure}
\paragraph{Small ${\lambda_N}$ scenario:}
Next, we concentrate on our extended model, when the RH sneutrino field is added to the particle content of the NMSSM. In Fig.~\ref{fig:smalllambda}, we show the results of a scan in the ($m_{1/2},m_0$) plane,
for the ``\textit{small} $\lambda_N$'' scenario, $\lambda_N=(0.0002,0.6,0.6)$. The colour code in this figure is the same as in Fig.~\ref{fig:nmssmcase}.
The excluded magenta areas are due to tachyonic staus (for large $m_0$), tachyonic RH sneutrino (for a portion of small $m_0$ and large $m_{1/2}$), and due to the ATLAS bound on $h^0/H^0/A^0 \rightarrow \gamma\gamma$ (for the small $m_0$ region).
The allowed parameter space differs from that obtained in the CNMSSM. In particular, greater values of $m_0$ are allowed. Interestingly, this leads to larger values of the Higgs mass and the correct value ($\sim 125$ GeV) can be achieved for $0.9 \lesssim m_0\lesssim 1$ TeV, $m_{1/2}\gtrsim 4.5$ TeV and $\tan \beta \gtrsim 40$.
In the allowed area of this scenario, the RH sneutrino is the LSP.
Since the RH neutrino Majorana mass term is proportional to ${\lambda_N}$, and this is also the leading contribution to the RH sneutrino mass,
small values, ${\lambda_N} \sim 10^{-4}$, are favoured to obtain a RH sneutrino LSP.
Notice however that for such a small value of the coupling, the annihilation rate of the RH sneutrino into SM particles is in general very small
and the resulting thermal relic density is too large. Thus, the viability of this model would entail some sort of dilution mechanism at late times.
\begin{figure}[t]
\begin{center}
\includegraphics[scale=0.45]{figs/wplot35_1w.eps}
\caption{\footnotesize
Higgs mass contour plot in the plane ($m_{1/2}$-$m_0$) for the ``\textit{large} $\lambda_N$'' scenario with $\lambda_N=(0.01,0.6,0.6)$. The colour code is the same as in Fig.~\ref{fig:nmssmcase}. In this scenario a light neutralino is the LSP in the white areas. Red dashed contours account for the Higgs mass (in GeV), while the black lines represent the value of $\tan\beta$.}
\label{fig:biglambda}
\end{center}
\end{figure}
\paragraph{Large ${\lambda_N}$ scenario:}
An interesting alternative is to work in the ``\textit{large} $\lambda_N$'' regime. In Fig.~\ref{fig:biglambda} we show the results of a scan in the ($m_{1/2},m_0$) plane, now taking $\lambda_N=(0.01,0.6,0.6)$. With a larger ${\lambda_N}$, the resulting mass of the lightest RH sneutrino, as well as that of the RH neutrino, increases, and hence the LSP is found to be either the neutralino or the stau. In the allowed area of Fig.~\ref{fig:biglambda} the lightest neutralino is the LSP, while the brown area shows where the stau is the LSP, as in previous figures. We notice also that in this scenario a larger value of $m_{1/2} \gtrsim 900$ GeV is required in order to reproduce the observed Higgs mass.
As we demonstrated in the previous examples, the inclusion of RH neutrinos expands the parameter region with a neutral LSP compared with the CNMSSM case; however, the difficulty of achieving the correct thermal relic abundance of DM is not alleviated. The reason is the same as in the pure CNMSSM mentioned above. The lower bound on the Higgs boson mass, $m_h>122$ GeV, sets bounds on the soft masses, $m_{1/2} \gtrsim$ a few TeV and $m_{0} (m_{\tilde{\tau}_1}) \lesssim 1$ TeV, for which the annihilation cross section of $\tilde{\tau}$ is smaller than about $1$ pb. Hence, even with strong coannihilation with staus, the resulting thermal relic abundance of the neutralino LSP is too large, leaving $\Omega h^2 > 0.12$. For the RH sneutrino LSP in the ``\textit{small} $\lambda_N$'' scenario, the main annihilation modes are $\tilde{N}\tilde{N} \rightarrow \rm W^+ W^-,~Z^0 Z^0, ...$ through Higgs boson exchange, with a cross section that is also suppressed by the small ${\lambda_N}$, ending up with a huge thermal relic abundance. One may then look for possible coannihilation effects with the stau NLSP in the parameter region where $\tilde{N}$ is quasi-degenerate with $\tilde{\tau}_1$. Unfortunately, however, this does not help. In addition to the fact that the annihilation cross section of the stau is smaller than $1$ pb for $m_{\tilde{\tau}_1} \lesssim 1$ TeV, as mentioned above, the coannihilating particles $\tilde{N}$ and $\tilde{\tau}$ are actually decoupled from each other, because the reaction rates of all processes between $\tilde{N}$ and $\tilde{\tau}$, such as $\tilde{\tau},\tilde{N} \rightarrow X, Y$ and $\tilde{\tau}, X \rightarrow \tilde{N}, Y$, with $X, Y$ being possible SM particles, are negligible due to the small $\lambda_N$ of order $10^{-4}$ and the heavy mediating neutralinos.
Hence, in both the ``\textit{large} $\lambda_N$'' and ``\textit{small} $\lambda_N$'' scenarios, if the LSP is DM, its final abundance has to be explained by nonthermal mechanisms. In fact, within the framework of supergravity or superstring theory, it is quite possible that our Universe underwent a nonstandard thermal history, since many supergravity models predict moduli fields and hidden-sector fields that affect the evolution of the early Universe.
Scenarios of nonthermal DM production include, for example, (i) regulation of the thermal abundance by late-time entropy production from moduli decay~\cite{Coughlan:1983ci,deCarlos:1993wie,ego}, thermal inflation~\cite{Lyth:1995hj,Lyth:1995ka,Asaka:1999xd} or defect decay~\cite{Kawasaki:2004rx,Hattori:2015xla}, (ii) production in the decay of late-decaying objects such as moduli~\cite{Moroi:1994rs,Kawasaki:1995cy,ego} or $Q$-balls~\cite{Enqvist:1998en}, and (iii) nonthermal scatterings and decays, as studied in Refs.~\cite{McDonald:2001vt,Asaka:2005cn,Asaka:2006fs}.
In the results of the analysis performed in this model and shown in Figs.~\ref{fig:nmssmcase},~\ref{fig:smalllambda} and~\ref{fig:biglambda} we have fixed the trilinear term $A_0=-3.5 ~m_0$. We have numerically checked the effect of changing this relation. We found that a smaller ratio $-A_0/m_0$ would require larger values of $m_0$, $m_{1/2}$ and $\tan \beta$ to reproduce the observed Higgs mass. For instance, in the scenario with ``\textit{small} $\lambda_N$'', if $A_0=-2.6\,m_0$ the Higgs mass ($\sim 125$ GeV) is obtained for $m_0 \sim 1.5$ TeV, $m_{1/2} \sim 6 - 8$ TeV and $\tan \beta \gtrsim 47$.
A larger value of the $-A_0/m_0$ ratio generally leads to Landau poles in the RGEs (as the value of $\tan\beta$ needed to obtain $m_S({\rm GUT})=m_0$ becomes too large). Finally, for the opposite sign of the trilinear parameter $A_0$, the correct EW vacuum cannot be realized and tachyons appear in the Higgs sector.
\section{Conclusions}
\label{sec:conclusions}
In this paper we have studied an extended version of the NMSSM in which RH neutrino superfields are included through a coupling with the singlet Higgs.
We have observed that the contributions of the new terms to the RGEs make it possible to impose universality conditions on the soft parameters,
thus considerably opening up the parameter space of the constrained NMSSM.
We have computed the two-loop RGEs of this model and solved them numerically, using the spectrum calculator {\tt SSARD}. The RH sneutrino coupling to the singlet Higgs leads to a contribution to the RGE of the singlet Higgs mass-squared parameter that helps driving it negative, thus making it easier to satisfy the conditions for EWSB, while imposing universality conditions at the GUT scale. This significantly alleviates the tension in the choice of initial parameters and opens up the parameter space considerably. Moreover, the RH sneutrino contribution also leads to slightly larger values of the resulting SM Higgs mass, which further eases finding viable regions of the parameter space.
We have studied two possible benchmark scenarios in which the LSP is neutral: either the lightest RH sneutrino or the lightest neutralino. In these examples, we have implemented all the recent experimental constraints on the masses of SUSY particles and on low-energy observables. Finally, we have also computed the resulting thermal dark matter relic density, but we have not imposed any constraint on this quantity.
The RH sneutrino can be the LSP, but only when its coupling to the singlet Higgs is very small (${\lambda_N}\sim10^{-4}$). This leads to very large values of the thermal relic abundance. Although there are regions in which the stau NLSP is very close in mass, coannihilation effects are negligible (since the RH sneutrino-stau annihilation diagrams are also suppressed by ${\lambda_N}$).
On the other hand, for large values of ${\lambda_N}\sim10^{-2}$, the lightest neutralino can be the LSP. The remaining areas feature in general smaller values of the soft scalar mass than in the NMSSM; however, the neutralino relic abundance is also too large, requiring some form of late-time dilution.
\vspace*{1cm}
\noindent{\bf \large Acknowledgments}
We are thankful to F. Staub for his help with SARAH. DGC is supported by the STFC and the partial support of the Centro de Excelencia Severo Ochoa Program through the IFT-UAM/CSIC Associate programme. VDR acknowledges support by the Spanish grant SEV-2014-0398 (MINECO) and partial support by the Spanish grants FPA2014-58183-P and PROMETEOII/2014/084 (Generalitat Valenciana).
VML acknowledges the support of the BMBF under project 05H15PDCAA.
The work of K.A.O. was supported in part by
DOE grant DE-SC0011842 at the University of Minnesota.
We also acknowledge support by
the Consolider-Ingenio 2010 programme under grant MULTIDARK CSD2009-00064 and
the European Union under the ERC Advanced Grant SPLE under contract ERC-2012-ADG-20120216-320421.
\section{INTRODUCTION}
\label{sec:intro}
\subsection{Tests for High-Dimensional Covariance Matrices}\label{test_hd}
Testing covariance matrices is of fundamental importance in multivariate analysis. There has been a long history of study on testing (i) the covariance matrix $\boldsymbol{\Sigma}$ is equal to a given matrix, or (ii) the covariance matrix~$\boldsymbol{\Sigma}$ is proportional to a given matrix. To be specific, for a given covariance matrix $\boldsymbol{\Sigma}_0$, one aims to {test} either
{\begin{align}\label{test:identity}
&H_{0}:~\boldsymbol{\Sigma} = \boldsymbol{\Sigma}_0\quad vs. \quad H_{a}:~\boldsymbol{\Sigma} \neq\boldsymbol{\Sigma}_0,\quad\text{or}
\end{align}
\begin{align}
&H_{0}:~\boldsymbol{\Sigma} \propto \boldsymbol{\Sigma}_0\quad vs. \quad H_{a}:~\boldsymbol{\Sigma} \not\propto\boldsymbol{\Sigma}_0.\label{test:sphericity}
\end{align}}
In the classical setting where the dimension $p$ is fixed and the sample size $n$ goes to infinity, the sample covariance matrix is a consistent estimator, and further inference can be made based on the associated {\it central limit theory} (CLT). Examples include the likelihood ratio tests (see, e.g., \cite{Muirhead82}, Sections~8.3 and 8.4), and the locally most powerful invariant tests (\cite{John1971}, \cite{Nagao73}).
In the high-dimensional setting, because the sample covariance matrix is inconsistent, the conventional tests may not apply. New methods for testing high-dimensional covariance matrices have been developed. The existing tests were first proposed under the multivariate normal distribution, then have been modified to fit more generally distributed data.
\begin{itemize}
\item Multivariate normally distributed data. When $p/n\rightarrow y\in(0, \infty)$, \cite{LedoitW02} show that John's test for \eqref{test:sphericity} is still consistent and propose a modified Nagao's test for~\eqref{test:identity}. \cite{Srivastava05} introduces a new test for \eqref{test:sphericity} under a more general condition that $n=O(p^{\delta})$ for some $\delta\in(0, 1]$. \cite{Birke05} show that the asymptotic null distributions of John's and the modified Nagao's test statistics in \cite{LedoitW02} are still valid when $p/n\rightarrow\infty$. Relaxing the normality assumption but still assuming the kurtosis equals $3$, \cite{BaiJ09} develop a corrected likelihood ratio test for~\eqref{test:identity} when $p/n\rightarrow y\in(0, 1)$. {For testing \eqref{test:sphericity}, \cite{JiangYAos13} derive the asymptotic distribution of the likelihood ratio test statistic under the multivariate normal distribution with $p/n\rightarrow y\in(0, 1]$}.
\item More generally distributed data. \cite{ChenZ10} generalize the results in \cite{LedoitW02} without assuming normality nor an explicit relationship between $p$ and $n$. By relaxing the kurtosis assumption, \cite{WCY13} extend the corrected likelihood ratio test in \cite{BaiJ09} and the modified Nagao's test in \cite{LedoitW02} for testing \eqref{test:identity}. Along this line, \cite{WangY13} propose two tests by correcting the likelihood ratio test and John's test for \eqref{test:sphericity}.
\end{itemize}
\subsection{The Elliptical Distribution and Its Applications}
The elliptically distributed data can be expressed as
$$
\fY=\omega\fZ,
$$
where $\omega$ is a positive random scalar, $\fZ$ is a $p$-dimensional random vector from $N(\mathbf{0}, \boldsymbol{\Sigma})$, and further~$\omega$ and $\fZ$ are independent of each other. It is a natural generalization of the multivariate normal distribution, and contains many widely used
distributions as special cases including the multivariate $t$-distribution, the symmetric multivariate Laplace distribution and the symmetric multivariate stable distribution.
See \cite{FangK90} for further details.
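As a concrete illustration, elliptical data are straightforward to simulate: taking $\omega=\sqrt{\nu/\chi^2_\nu}$ independent of $\fZ$ yields the multivariate $t$-distribution with $\nu$ degrees of freedom. The following is a minimal Python sketch; the matrix $\boldsymbol{\Sigma}$ below is an arbitrary example, and the specific constants are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(0)
p, n, nu = 5, 100_000, 6  # nu: degrees of freedom of the multivariate t

# An arbitrary SPD scatter matrix Sigma (equicorrelation, rho = 0.5)
Sigma = 0.5 * np.ones((p, p)) + 0.5 * np.eye(p)
L = np.linalg.cholesky(Sigma)

# Elliptical draw Y = omega * Z with Z ~ N(0, Sigma):
# omega = sqrt(nu / chi2_nu), independent of Z, gives the multivariate t.
Z = rng.standard_normal((n, p)) @ L.T
omega = np.sqrt(nu / rng.chisquare(nu, size=n))
Y = omega[:, None] * Z

# Cov(Y) = E[omega^2] * Sigma = nu/(nu-2) * Sigma for the t case
emp_cov = np.cov(Y, rowvar=False)
print(np.round(emp_cov / (nu / (nu - 2)), 2))  # approximately Sigma
```

Other choices of the law of $\omega$ give the Laplace or stable cases mentioned above.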
One motivation for this study arises from the wide applicability of the elliptical distribution. For example, in finance, the heavy-tailedness of stock returns has been extensively studied, dating back at least to \cite{fama1965behavior} and \cite{mandelbrot1967variation}.
{Accommodating both heavy-tailedness and flexible shapes makes} the elliptical distribution a more admissible candidate for stock-return models than the Gaussian distribution; see, e.g., \cite{owen83} and \cite{bingham2002semi}.
\cite{mcneil05} state that ``elliptical distributions ... provided far superior models to the
multivariate normal for daily and weekly US stock-return data'' and that ``multivariate return data for groups of returns of similar type often look roughly elliptical.''
{The elliptical distribution has also been used in modeling genomics data (\cite{LIMSYrobustmicroarray03}, \cite{posekany2011biological}), sonar data (\cite{sonarZhaoLiu14}), and bioimaging data (\cite{han2017eca}).}
\subsection{Performance of the Existing Tests under the Elliptical Model} \label{performance_of_existing_tests}
Given the wide applicability of the elliptical distribution, it is important to check whether the existing tests for covariance matrices are applicable to the elliptical distribution under the high-dimensional setting. Both numerical and theoretical analyses give a negative answer.
We start with a simple numerical study to investigate the empirical sizes of the aforementioned tests. Consider observations
$\fY_i=\omega_i \fZ_i, i=1, \cdots, n,$ {where}
\begin{enumerate}[(i)]
\item $\omega_i$'s are absolute values of \hbox{i.i.d.} standard normal random variables,
\item $\fZ_i$'s are \hbox{i.i.d.} $p$-dimensional standard multivariate normal random vectors, and
\item $\omega_i$'s and $\fZ_i$'s are independent of each other.
\end{enumerate}
Under such a setting, $\fY_i$'s are \hbox{i.i.d.} random vectors with mean $\mathbf{0}$ and covariance matrix $\fI$. We will test both \eqref{test:identity} and \eqref{test:sphericity}.
To test \eqref{test:identity}, we use the tests {in} \cite{LedoitW02} (LW$_1$ test), \cite{BaiJ09} (BJYZ test), \cite{ChenZ10} (CZZ$_1$ test) and \cite{WCY13} (WYMC-LR and WYMC-LW tests). For testing \eqref{test:sphericity}, we apply the tests proposed by \cite{LedoitW02} (LW$_2$ test), \cite{Srivastava05} (S test), \cite{ChenZ10} (CZZ$_2$ test) and \cite{WangY13} (WY-LR and WY-JHN tests).
Table \ref{counterexample_sphere} reports the empirical sizes for testing $H_0: \boldsymbol{\Sigma}=\fI$ or $H_0: \boldsymbol{\Sigma}\propto\fI$ at $5\%$ significance~level.
\begin{table}[H]
\centering
\ra{0.63}\setlength{\tabcolsep}{3pt}
\begin{tabular}{@{}c||ccccccccccc@{}}
\toprule[1pt]
\multicolumn{11}{c}{$H_0: \boldsymbol{\Sigma}=\fI$}\\
\midrule[1pt]
&\multicolumn{5}{c}{$p/n=0.5$} & \phantom{abc}& \multicolumn{4}{c}{$p/n=2$}\\
\cmidrule{2-6} \cmidrule{8-11}
$p$& LW$_1$ &BJYZ& CZZ$_1$ & WYMC-LR& WYMC-LW && LW$_1$ & CZZ$_1$&\multicolumn{2}{c}{WYMC-LW}\\ \midrule[1pt]
$100$&$100$&$100$&$54.0$&$100$&$100$&&$100$&$50.2$&\multicolumn{2}{c}{$100$}\\
$200$&$100$&$100$&$51.6$&$100$&$100$&&$100$&$53.0$&\multicolumn{2}{c}{$100$}\\
$500$&$100$&$100$&$52.3$&$100$&$100$&&$100$&$53.3$&\multicolumn{2}{c}{$100$}\\
\midrule[1pt]
\multicolumn{11}{c}{$H_0: \boldsymbol{\Sigma}\propto\fI$}\\
\midrule[1pt]
~~~~~~&\multicolumn{5}{c}{$p/n=0.5$} & \phantom{abc}& \multicolumn{4}{c}{$p/n=2$}\\
\cmidrule{2-6} \cmidrule{8-11}
$p$& LW$_2$ &S & CZZ$_2$& WY-LR& WY-JHN && LW$_2$ & S& CZZ$_2$& WY-JHN \\ \midrule[1pt]
$100$&$100$& $100$ & $51.8$&$100$& $100$ && $100$&$100$&$50.2$& $100$\\
$200$&$100$& $100$& $53.0$& $100$& $100$ && $100$&$100$ &$52.3$& $100$\\
$500$&$100$& $100$& $52.3$&$100$& $100$ && $100$&$100$&$53.5$& $100$\\
\bottomrule[1pt]
\end{tabular}
\caption{\it Empirical sizes $(\%)$ of the existing tests for testing $H_0: \boldsymbol{\Sigma}=\fI$ or $H_0: \boldsymbol{\Sigma}\propto\fI$ at $5\%$ significance level. Data are generated as $\fY_i=\omega_i\fZ_i$ where $\omega_i$'s are absolute values of \hbox{i.i.d.} $N(0, 1)$, $\fZ_i$'s are \hbox{i.i.d.} $N(\mathbf{0}, \fI)$, and further $\omega_i$'s and $\fZ_i$'s are independent of each other. The results are based on $10,000$ replications for each pair of $p$ and $n$. }\label{counterexample_sphere}
\end{table}
We observe from Table \ref{counterexample_sphere} that the empirical sizes of the existing tests are far higher than the nominal level of $5\%$,
suggesting that they are inconsistent for testing either \eqref{test:identity} or~\eqref{test:sphericity} under the elliptical distribution. Therefore, new tests are needed.
Theoretically, the distorted sizes in Table \ref{counterexample_sphere} are not unexpected. In fact, denote $\fS_n=n^{-1}\sum_{i=1}^n \fZ_i\fZ_i^T$ and $\fS^{\omega}_n=n^{-1}\sum_{i=1}^n\fY_i\fY_i^T=n^{-1}\sum_{i=1}^n\omega_i^2 \fZ_i\fZ_i^T$. The celebrated Mar\v{c}enko-Pastur theorem states that the {\it empirical spectral distribution} (ESD) of $\fS_n$ converges to the Mar\v{c}enko-Pastur law. However, Theorem 1 of \cite{ZhengL11} implies that the ESD of $\fS^{\omega}_n$ will \emph{not} converge to the
Mar\v{c}enko-Pastur law except in the trivial situation where $\omega_i$'s are constant. Because
all the aforementioned tests involve certain aspects of the {\it limiting ESD} (LSD) of $\fS_n^{\omega}$, the asymptotic null distributions of the involved test statistics are different from the ones in the usual setting, and consequently the tests are no longer consistent.
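The discrepancy is already visible in the second spectral moment: for standardized entries, the Mar\v{c}enko-Pastur law gives $(1/p)\,\mbox{tr}(\fS_n^2)\to 1+y$, whereas a direct moment computation (a heuristic of ours, not taken from the cited references) suggests that $(1/p)\,\mbox{tr}\big((\fS_n^{\omega})^2\big)$ picks up an extra factor $\mathbb{E}(\omega_1^4)/[\mathbb{E}(\omega_1^2)]^2$ in front of $y$, which equals $3$ for half-normal $\omega_i$'s. A minimal Python check:

```python
import numpy as np

rng = np.random.default_rng(1)
p, n = 400, 800  # so y = p/n = 0.5
y = p / n

Z = rng.standard_normal((n, p))
omega2 = rng.standard_normal(n) ** 2  # omega_i = |N(0,1)|, so omega_i^2 ~ chi^2_1

S_plain = (Z.T @ Z) / n                 # (1/n) sum Z_i Z_i^T
S_omega = (Z.T * omega2) @ Z / n        # (1/n) sum omega_i^2 Z_i Z_i^T

# Second spectral moments (1/p) tr(S^2) of the two empirical distributions
m2_plain = np.trace(S_plain @ S_plain) / p
m2_omega = np.trace(S_omega @ S_omega) / p

print(m2_plain)  # close to 1 + y   = 1.5 (Marchenko-Pastur)
print(m2_omega)  # close to 1 + 3*y = 2.5 (inflated by E[omega^4] = 3)
```

Since the test statistics above are functionals of exactly such spectral moments, this inflation directly translates into the size distortions observed in Table \ref{counterexample_sphere}.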
\subsection{Our Model and Aim of This Study}\label{Ourmodelandgoal}
In various real situations, the assumption that the observations are \hbox{i.i.d.} is too strong to hold. An important source of violation is (conditional) heteroskedasticity, which is encountered in a wide range of applications. For instance, in finance, it is well documented that stock returns are (conditionally) heteroskedastic, which motivated the development of ARCH and GARCH models (\cite{Engle82}, \cite{Bollerslev86}). In engineering, \cite{yucek2009} explain that the heteroskedasticity of noise is one of the factors that degrade the performance of target detection systems.
In this paper, we study testing high-dimensional covariance matrices when the data may exhibit heteroskedasticity. Specifically, we consider the following model. Denote by $\fY_i$, $i=1,\cdots, n$, the observations, which can be decomposed as
\begin{equation}\label{observe}
\fY_i=\omega_i\fZ_i,
\end{equation}
where
\begin{enumerate}[(i)]
\item $\omega_i$'s are positive random scalars reflecting heteroskedasticity,
\item { $\fZ_i=\boldsymbol{\Sigma}^{1/2}\widetilde{\fZ}_i$, where $\widetilde{\fZ}_i$ consists of \hbox{i.i.d.} standardized random variables,}
\item { $\omega_i$'s can depend on each other and on $\{\fZ_i: ~i=1,\cdots, n\}$ in an \emph{arbitrary} way, and}
\item $\omega_i$'s do \emph{not} need to be stationary.
\end{enumerate}
Model \eqref{observe} incorporates the elliptical distribution as a special case. This general model further possesses several important advantages:
\begin{itemize}
\item It can be considered as a multivariate extension of the ARCH/GARCH model, and accommodates the conditional heteroskedasticity in real data. In the ARCH/GARCH model, the volatility process is serially dependent and depends on past information. Such dependence is excluded from the elliptical distribution; however, it is perfectly compatible with Model~\eqref{observe}.
\item The dependence of $\omega_i$ and $\fZ_i$ can feature the leverage effect in financial econometrics, {which accounts for the negative correlation between asset return and change in volatility.} Various research has been conducted to study the leverage effect; see, e.g., \cite{schwert1989does}, \cite{campbell1992no} and \cite{ait2013leverage}.
\item Furthermore, it can capture the (conditional) asymmetry of data by allowing $\fZ_i$'s to be asymmetric.
The asymmetry is another stylized fact of financial data.
For instance, the empirical study in \cite{singleton1986skewness} shows high skewness in individual stock returns. Skewness is also reported in exchange rate returns in \cite{peiro1999skewness}. \cite{Christoffersen12} documents that asymmetry exists in standardized returns; see Chapter 6 therein.
\end{itemize}
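To make these features concrete, the following Python sketch generates data from Model~\eqref{observe} with a GARCH(1,1)-style scalar volatility that depends on the past shocks (so the $\omega_i$'s are dependent on each other and on the $\fZ_i$'s) and carries a deterministic drift (so they are nonstationary). All coefficient values are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(2)
p, n = 50, 1000

# Z_i with i.i.d. standardized entries (here standard normal; Sigma = I)
Z = rng.standard_normal((n, p))

# GARCH(1,1)-like scalar volatility: omega_i^2 depends on the previous
# omega^2 and on the previous shock (dependence, cf. (iii)), and is
# multiplied by a deterministic trend (nonstationarity, cf. (iv)).
omega2 = np.empty(n)
omega2[0] = 1.0
for i in range(1, n):
    past_shock = (Z[i - 1] ** 2).mean()   # feedback from {Z_j}
    trend = 1.0 + i / n                   # deterministic drift
    omega2[i] = trend * (0.2 + 0.3 * omega2[i - 1] + 0.3 * past_shock)

Y = np.sqrt(omega2)[:, None] * Z          # observations Y_i = omega_i Z_i
```

None of this structure is permitted under the i.i.d. elliptical model, yet it is fully covered by Model~\eqref{observe}.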
Because $\omega_i$'s are not required to be stationary, the unconditional covariance matrix may not exist, in which case there is no basis for testing \eqref{test:identity}.
Testing \eqref{test:sphericity}, however, still makes perfect sense, because the scalars $\omega_i$'s only scale up or down the covariance matrix by a constant. We henceforth focus on testing~\eqref{test:sphericity}. As usual, by working with $\boldsymbol{\Sigma}_0^{-1/2}\fY_i$, testing~\eqref{test:sphericity} can be reduced to testing
\begin{align}\label{test:identityI}
H_{0}:~\boldsymbol{\Sigma} \propto \fI\quad vs. \quad H_{a}:~\boldsymbol{\Sigma} \not\propto\fI.
\end{align}
In the following, we focus on testing \eqref{test:identityI}, in the high-dimensional setting where both $p$ and~$n$ grow to infinity with the ratio $p/n\rightarrow y\in(0,\infty)$.
\subsection{Summary of Main Results} To deal with heteroskedasticity, we propose to self-normalize the observations. To be specific, we focus on the self-normalized observations $\fY_i/\left|\fY_i\right|$, where $|\cdot|$ stands for the Euclidean norm. Observe that
$$
\frac{\fY_i}{|\fY_i|}=\frac{\fZ_i}{|\fZ_i|},\quad i=1, \cdots, n.
$$
Hence $\omega_i$'s no longer play a role, and this is exactly the reason why we make no assumption on~$\omega_i$'s. There is, however, no such thing as a free lunch. Self-normalization introduces a new challenge in that the entries of $\fZ_i/|\fZ_i|$ are dependent
in an unusual fashion.
To see this, consider the simplest case where $\fZ_i$'s are \hbox{i.i.d.} standard multivariate normal random vectors. In this case, the entries of~$\fZ_i$'s are \hbox{i.i.d.} random variables from $N(0,1)$. However, the self-normalized random vector $\fZ_i/|\fZ_i|$ is uniformly distributed over the $p$-dimensional unit sphere~(known as the Haar distribution on the sphere), and its $p$ entries are dependent on each other in an unusual way.
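Both points can be seen numerically. The following short Python sketch (our own illustration; the dimensions and scale values are hypothetical) checks that self-normalization cancels the scalar factor, and that the squared entries of a normalized Gaussian vector are negatively correlated because they are constrained to sum to one:

```python
import numpy as np

rng = np.random.default_rng(0)
p = 5

# Self-normalization cancels the scalar: y/|y| = z/|z| for any omega > 0.
z = rng.standard_normal(p)
omega = 3.7                                   # hypothetical positive scale
y = omega * z
assert np.allclose(y / np.linalg.norm(y), z / np.linalg.norm(z))

# For Gaussian z, z/|z| is uniform on the unit sphere: each entry has
# variance 1/p, and the squared entries are negatively correlated
# (about -1/(p-1)) because they must sum to 1.
U = rng.standard_normal((100_000, p))
U /= np.linalg.norm(U, axis=1, keepdims=True)
print(p * U[:, 0].var())                               # close to 1
print(np.corrcoef(U[:, 0] ** 2, U[:, 1] ** 2)[0, 1])   # close to -0.25
```

The negative correlation of the squared entries is exactly the "unusual" dependence structure referred to above.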
To conduct tests, we need some kind of CLTs. Our strategy is to establish a CLT for the {\it linear spectral statistic} (LSS)
of the sample covariance matrix based on the self-normalized observations, namely,
\begin{equation}\label{eq:fBn}
\widetilde{\fS}_n=\frac{p}{n}\sum_{i=1}^{n}\frac{\fY_i\fY_i^{T}}{|\fY_i|^2}=\frac{p}{n}\sum_{i=1}^{n}\frac{\fZ_i\fZ_i^T}{|\fZ_i|^2}.
\end{equation}
When $|\fY_i|$ or $|\fZ_i|=0$, we adopt the convention that $0/0=0$. {Note that $\widetilde{\fS}_n$ is \emph{not} the sample correlation matrix, which normalizes each variable by its standard deviation. Here we are normalizing each observation by its Euclidean norm.}
As we shall see below, our CLT is different from the ones for the usual sample covariance matrix. One important advantage of our result is that applying our CLT requires neither $\mathbb{E}(Z_{11}^4)=3$ as in \cite{BaiS04}, nor the estimation of $\mathbb{E}(Z_{11}^4)$, which is inevitable in \cite{Najim2013}. Based on the new CLT, we propose two tests by modifying the likelihood ratio test and John's test. {More tests based on general moments of the ESD of $\widetilde{\fS}_n$ are also constructed.}
Numerical studies show that our proposed tests work well even when $\mathbb{E}(Z_{11}^4)$ does not exist. Because heavy-tailedness and heteroskedasticity are commonly encountered in practice, such relaxations are appealing in many real applications.
Independently, \cite{li2017structure} study high-dimensional covariance matrix test under a mixture model. {Their test relies on comparing two John's test statistics: one is based on the original data and the other is based on the randomly permuted data.}
There are a couple of major differences between our paper and theirs. First and foremost, in \cite{li2017structure}, the mixture coefficients ($\omega_i$'s in \eqref{observe}) are assumed to be \hbox{i.i.d.} and drawn from a distribution with a bounded support. Second, \cite{li2017structure} require independence between the mixture coefficients and the innovation process $(\fZ_i)$.
In our paper, we do not put any assumptions on the mixture coefficients. As we discussed in Section \ref{Ourmodelandgoal}, such relaxations allow us to accommodate several important stylized features of real data and, consequently, make our tests more suitable for many real applications.
It can be shown that the test in \cite{li2017structure} can be inconsistent under our general setting. Furthermore, as we can see from the simulation studies, the test in \cite{li2017structure} is less powerful than the existing tests {in the \hbox{i.i.d.} Gaussian setting and}, in general, less powerful than our tests.
\emph{Organization of the paper.}~~
The rest of the paper is organized as follows. In Section~\ref{MRTS}, we state the CLT for the LSS of $\widetilde{\fS}_n$, based on which we derive the asymptotic null distributions of the modified likelihood ratio test statistic and John's test statistic, {as well as other test statistics based on general moments of the ESD of $\widetilde{\fS}_n$.} Section~\ref{testpro} examines the finite-sample performance of our proposed tests. {Section~\ref{sect:empirical} is dedicated to a real data analysis.}
Section~\ref{conclusion} concludes. More simulation results and all the proofs are collected in the supplementary article~\cite{YZCL17}.
\emph{{Notation}.}~~ For any symmetric matrix $\fA\in\mathbb{R}^{p\times p}$, $F^{\fA}$ denotes its ESD, that is,
$$
F^{\fA}(x)=\frac{1}{p}\sum_{i=1}^p \mathbbm{1}_{\{\lambda_i^{\fA}\leq x\}},~~ \mbox{for all } x\in\mathbb{R},
$$
where $\lambda^{\fA}_i$, $i=1, \cdots, p$, are the eigenvalues of $\fA$ and $\mathbbm{1}_{\{\cdot\}}$ denotes the indicator function.
For any function~$f$, the associated LSS of $\fA$ is given by
\begin{align*}
\int_{-\infty}^{+\infty}f(x){\rm d}F^{\fA}(x)=\frac{1}{p}\sum_{i=1}^pf(\lambda_i^{\fA}).
\end{align*}
Finally, the Stieltjes transform of a distribution $G$ is defined as
\begin{align*}
m_G(z)=\int_{-\infty}^{\infty}\frac{1}{\lambda-z}{\rm d}G(\lambda), \quad\mbox{for all } z\not\in\hbox{supp}(G),
\end{align*}
where supp($G$) denotes the support of $G$.
\section{MAIN RESULTS}\label{MRTS}
\subsection{CLT for the LSS of $\widetilde{\fS}_n$}\label{ssec:CLT_LSS}
As discussed above, we focus on the sample covariance matrix based on the self-normalized~$\fZ_i$'s, namely,
$\widetilde{\fS}_n$ defined in~\eqref{eq:fBn}. Write $\fZ=(\fZ_1, \ldots, \fZ_n)$.
We now state the assumptions:
\begin{myassump}{A}\label{ass:A}
$\fZ=\big(Z_{ij}\big)_{p\times n}$ consists of \mbox{i.i.d.} random variables with $\mathbb{E}\big(Z_{11}\big)=0$ and $0<\mathbb{E}\big(Z_{11}^2\big)<\infty$;
\end{myassump}
\begin{myassump}{B}\label{ass:B}
$\mathbb{E}\big(Z_{11}^4\big)<\infty$; and
\end{myassump}
\begin{myassump}{C}\label{ass:C} $y_n:=p/n\rightarrow y\in\left(0, \infty\right)$ as $n\rightarrow\infty$.
\end{myassump}
The following proposition gives the LSD of $\widetilde{\fS}_n$.
\begin{prop}\label{prop:LSD}
Under Assumptions \ref{ass:A} and \ref{ass:C}, almost surely, the ESD of $\widetilde{\fS}_n$ converges weakly to the standard Mar\v{c}enko-Pastur law $F_y$, which admits the density
\begin{align*}
p_y(x)=\left\{\begin{array}{lc}
\frac{1}{2\pi x y}\sqrt{(x-a_-(y))(a_+(y)-x)}, &~x\in [a_-(y),a_+(y)],\\
0,&\mbox{otherwise},
\end{array}\right.
\end{align*}
and has a point mass $1-1/y$ at the origin if $y>1$, where $a_{\pm}(y)=(1\pm\sqrt{y})^2$.
\end{prop}
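Proposition \ref{prop:LSD} is easy to illustrate numerically. The Python sketch below (ours, with hypothetical dimensions) builds $\widetilde{\fS}_n$ from self-normalized Gaussian columns and checks its spectrum against the Mar\v{c}enko--Pastur support and moments:

```python
import numpy as np

rng = np.random.default_rng(1)
p, n = 400, 800                                       # y_n = p/n = 0.5
Z = rng.standard_normal((p, n))
U = Z / np.linalg.norm(Z, axis=0, keepdims=True)      # self-normalized columns
S = (p / n) * U @ U.T                                 # \tilde{S}_n of (eq:fBn)
lam = np.linalg.eigvalsh(S)

y = p / n
a_minus, a_plus = (1 - np.sqrt(y)) ** 2, (1 + np.sqrt(y)) ** 2
print(lam.min(), lam.max())   # inside [a_-(y), a_+(y)] up to edge fluctuations
print(lam.mean())             # exactly 1, since tr(S) = p by construction
print(np.mean(lam ** 2))      # close to 1 + y, the second MP moment
```

Note that $\tr\big(\widetilde{\fS}_n\big)=p$ holds exactly here, which is used repeatedly below.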
\begin{rmk}\label{compare_samp_corr}
Proposition \ref{prop:LSD} is essentially a special case of Theorem 2 in \cite{ZhengL11}, but with weaker moment assumptions. {As we discussed before, $\widetilde{\fS}_n$ is \emph{not} a sample correlation matrix.} {However, when~$\fZ$ consists of \hbox{i.i.d.} random variables, there is a close connection between $\widetilde{\fS}_n$ and a sample correlation matrix. First, $\widetilde{\fS}_n$ shares the same nonzero eigenvalues with $\frac{p}{n}\left(\fZ_1/|\fZ_1|, \ldots, \fZ_n/|\fZ_n|\right)^T\left(\fZ_1/|\fZ_1|, \ldots, \fZ_n/|\fZ_n|\right)$. Second, when~$\fZ$ consists of \hbox{i.i.d.} random variables, $\left(\fZ_1/|\fZ_1|, \ldots, \fZ_n/|\fZ_n|\right)^T\left(\fZ_1/|\fZ_1|, \ldots, \fZ_n/|\fZ_n|\right)$ is the sample correlation matrix (without subtracting the sample mean) of the $n$-dimensional observations $(Z_{i1},\ldots, Z_{in})^T$ for $i=1,\ldots,p$. Using this connection, Proposition \ref{prop:LSD} can also be derived from Theorem 2 in \cite{JiangTF04}, where the LSD of the sample correlation matrix is derived.}
\end{rmk}
According to Proposition \ref{prop:LSD}, if one assumes (without loss of generality) that $\mathbb{E}\big(Z_{11}^2\big)=1$, then $\widetilde{\fS}_n$ shares the same LSD as the usual sample covariance matrix $\fS_n=n^{-1}\sum_{i=1}^n \fZ_i\fZ_i^T$. To conduct tests, we need the associated CLT. The CLTs for the LSS of~$\fS_n$ have been established in \cite{BaiS04} and \cite{Najim2013}, {under the Gaussian-kurtosis condition $\mathbb{E}\big(Z_{11}^4\big)=3$ and under a general finite fourth moment, respectively}. Given that $\widetilde{\fS}_n$ and~$\fS_n$ have the same LSD, one naturally asks whether their LSSs also have the same CLT. The following theorem gives a negative answer. Hence, an important message is:
\begin{center}
\emph{Self-normalization does not change the LSD, but it does affect the CLT.}
\end{center}
To be more specific, for any function $f$, define the following centered and scaled LSS:
\begin{align}\label{generalG}
G_{\widetilde{\fS}_n}(f):=p\int_{-\infty}^{+\infty}f(x){\rm d} \Big(F^{\widetilde{\fS}_n}(x)-F_{y_n}(x)\Big).
\end{align}
\begin{thm}\label{CLTLSS}
Suppose that Assumptions \ref{ass:A}~--~\ref{ass:C} hold. Let $\mathcal{H}$ denote the set of functions that are
analytic on a domain containing $[a_-(y)\mathbbm{1}_{\{0<y<1\}}, a_+(y)]$, and let $f_1, \ldots, f_k\in\mathcal{H}$. Then, the random vector $\big(G_{\widetilde{\fS}_n}(f_1), \ldots, G_{\widetilde{\fS}_n}(f_k)\big)$ converges weakly to a Gaussian vector $\big(G(f_1), \ldots, G(f_k)\big)$ with mean
\begin{equation}
\begin{aligned}\label{eq:EGf}
\hspace{-1em}\mathbb{E}\big(G(f_\ell)\big)
=&\frac{1}{\pi\mathrm{i}}\oint_{\mathcal{C}} f_\ell(z)\Bigg(\frac{y\ul{m}^3(z)}{\big(1+\ul{m}(z)\big)^3}\Bigg)\!\Bigg(1-\frac{y\ul{m}^2(z)}{\big(1+\ul{m}(z)\big)^2}\Bigg)^{-1}{\rm d}z\\
&\!-\!\frac{1}{2\pi\mathrm{i}}\oint_{\mathcal{C}} f_\ell(z)\Bigg(\frac{y\ul{m}^3(z)}{\big(1+\ul{m}(z)\big)^3}\Bigg)\!\Bigg(1-\frac{y\ul{m}^2(z)}{\big(1+\ul{m}(z)\big)^2}\Bigg)^{-2}{\rm d}z,
\end{aligned}
\end{equation}
where $\ell=1, \ldots, k,$ and covariance
\begin{equation}
\begin{aligned}\label{eq:thmcov}
\cov\big(G(f_i), G(f_j)\big)=&\frac{y}{2\pi^2}\oint_{\mathcal{C}_2}\oint_{\mathcal{C}_1}\frac{f_i(z_1)f_j(z_2)\ul{m}^{\prime}(z_1)\ul{m}^{\prime}(z_2)}{\big(1+\ul{m}(z_1)\big)^2\big(1+\ul{m}(z_2)\big)^2}{\rm d}z_1{\rm d}z_2\\
&-\frac{1}{2\pi^2}\oint_{\mathcal{C}_2}\oint_{\mathcal{C}_1}\frac{f_i(z_1)f_j(z_2)\ul{m}'(z_1)\ul{m}'(z_2)}{\big(\ul{m}(z_2)-\ul{m}(z_1)\big)^2}{\rm d}z_1{\rm d}z_2,
\end{aligned}
\end{equation}
where $i, j=1, \ldots, k.$
Here, $\mathcal{C}_1$ and $\mathcal{C}_2$ are two non-overlapping contours contained in the domain and enclosing the interval $[a_-(y)\mathbbm{1}_{\{0<y<1\}}, a_+(y)]$, and $\ul{m}(z)$ is the Stieltjes transform of $\underline{F}_y:=(1-y)\mathbbm{1}_{[0,\infty)}+yF_y$.
\end{thm}
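For the reader's convenience, we also record the classical closed-form expression for $\ul{m}(z)$ (a well-known fact about the Mar\v{c}enko--Pastur law, not a new result; it is useful when evaluating the contour integrals above):
\begin{align*}
\ul{m}(z)=\frac{-(z+1-y)+\sqrt{(z-a_-(y))(z-a_+(y))}}{2z},
\end{align*}
where the branch of the square root is chosen so that $\ul{m}(z)\sim -1/z$ as $z\to\infty$. Equivalently, $\ul{m}(z)$ is the solution to the Mar\v{c}enko--Pastur equation $z=-1/\ul{m}(z)+y/\big(1+\ul{m}(z)\big)$ that maps the upper half-plane into itself.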
\begin{rmk}
The second terms on the right-hand sides of \eqref{eq:EGf} and \eqref{eq:thmcov} appeared in equations~(1.6) and (1.7) in Theorem 1.1 of \cite{BaiS04} (in the special case when $\fT=\fI$). The first terms are new and are due to the self-normalization in $\widetilde{\fS}_n$. It is worth emphasizing that our CLT neither requires $\mathbb{E}\big(Z_{11}^4\big)=3$ as in \cite{BaiS04}, nor involves $\mathbb{E}\big(Z_{11}^4\big)$ as in \cite{Najim2013}.
\end{rmk}
{
\begin{rmk}
After this paper was finished, we learned that a CLT for
the LSS of the sample correlation matrix was established in \cite{GaoJsssb17}. As we discussed in Remark \ref{compare_samp_corr}, under the special situation when $Z_{ij}$'s are \hbox{i.i.d.}, the ESD of $\widetilde{\fS}_n$ is related to the ESD of a sample correlation matrix. It is because of such a special property that our theorem and the result in \cite{GaoJsssb17} are connected. There is, however, an important difference: in \cite{GaoJsssb17}, the sample correlation matrix is based on demeaned observations, while in our case, we do not subtract the sample mean when defining $\widetilde{\fS}_n$. Such a distinction leads to important differences in dealing with a key step in the proof (Lemma 2.2 in the supplementary material \cite{YZCL17} and Lemma 6 in \cite{GaoJsssb17}). Furthermore, as we emphasized above, our CLT does not involve the kurtosis, whereas the kurtosis does appear in the CLT in \cite{GaoJsssb17} (we in fact believe the kurtosis should not be there).
\end{rmk}
}
\subsection{Tests for the Covariance Matrix in the Presence of Heteroskedasticity} Based on Theorem~\ref{CLTLSS}, we propose two tests for testing \eqref{test:identityI} by modifying the likelihood ratio test and John's test. {More tests based on general moments of the ESD of $\widetilde{\fS}_n$ are also established.}
\subsubsection{Likelihood Ratio Test Based on Self-normalized Observations (LR-SN)}
Recall that $\fS_n=n^{-1}\sum_{i=1}^n\fZ_i\fZ_i^T$.
The likelihood ratio test statistic is
$$
L_n=\log|\fS_n|-p\log\big(\tr\big(\fS_n\big)\big)+p\log p;
$$
see, e.g., Section 8.3.1 in \cite{Muirhead82}.
For the heteroskedastic case, we modify the likelihood ratio test statistic by replacing $\fS_n$ with $\widetilde{\fS}_n$. Note that $\tr\big(\widetilde{\fS}_n\big)=p$ on the event $\{|\fZ_i|>0~\text{for}~ i=1, \ldots, n\}$, which, by Lemma 2 in \cite{BaiY93}, occurs almost surely for all large $n$. Therefore, we are led to the following modified likelihood ratio test statistic:
\begin{align*}
\widetilde{L}_n=\log\big|\widetilde{\fS}_n\big|=\sum_{i=1}^p\log\big(\lambda^{\widetilde{\fS}_n}_i\big).
\end{align*}
It is the LSS of $\widetilde{\fS}_n$ when $f(x)=\log(x)$. In this case, when $y_n\in (0,1),$ we have
\begin{align*}
G_{\widetilde{\fS}_n}(\log)=&p\int_{-\infty}^{+\infty}\log(x){\rm d}\left(F^{\widetilde{\fS}_n}(x)-F_{y_n}(x)\right)\\
=&\sum_{i=1}^p\log\big(\lambda^{\widetilde{\fS}_n}_i\big)-p\bigg(\frac{y_n-1}{y_n}\log(1-y_n)-1\bigg)\\
=&\widetilde{L}_n-p\bigg(\frac{y_n-1}{y_n}\log(1-y_n)-1\bigg).
\end{align*}
Applying Theorem \ref{CLTLSS}, we obtain the following proposition.
\begin{prop}\label{prop:CLT_specific_log}
Under the assumptions of Theorem \ref{CLTLSS},
if $y_n\rightarrow y\in(0, 1)$, then
\begin{align}
\frac{\widetilde{L}_n-p\bigg(\frac{y_n-1}{y_n}\log(1-y_n)-1\bigg)-y_n-\log(1-y_n)/2}{\sqrt{-2y_n-2\log(1-y_n)}}\stackrel{D}\longrightarrow N(0, 1). \label{eq:CLT_log}
\end{align}
\end{prop}
The convergence in \eqref{eq:CLT_log} gives the asymptotic null distribution of the modified likelihood ratio test statistic. Because it is derived for the sample covariance matrix based on self-normalized observations, the test based on \eqref{eq:CLT_log} will be referred to as the likelihood ratio test based on the self-normalized observations (LR-SN).
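To make the construction concrete, the following Python sketch (our own illustration, not part of the paper; the heteroskedastic scales $\omega_i$ and all names are hypothetical) computes the standardized LR-SN statistic of \eqref{eq:CLT_log} on simulated null data:

```python
import numpy as np

def lr_sn(Y):
    # Standardized LR-SN statistic of (eq:CLT_log); requires p < n.
    p, n = Y.shape
    yn = p / n
    U = Y / np.linalg.norm(Y, axis=0, keepdims=True)
    lam = np.linalg.eigvalsh((p / n) * U @ U.T)      # eigenvalues of S_n~
    Ln = np.sum(np.log(lam))                         # modified LRT statistic
    center = p * ((yn - 1) / yn * np.log(1 - yn) - 1) \
        + yn + np.log(1 - yn) / 2
    return (Ln - center) / np.sqrt(-2 * yn - 2 * np.log(1 - yn))

rng = np.random.default_rng(2)
p, n = 100, 400
omega = np.exp(rng.standard_normal(n))     # arbitrary heteroskedastic scales
Y = omega * rng.standard_normal((p, n))    # null data: Sigma = I
print(lr_sn(Y))                            # approximately N(0, 1) under H0
```

Note that the scales $\omega_i$ drop out entirely, so any (even dependent) positive scales give the same statistic.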
\subsubsection{John's Test Based on Self-normalized Observations (JHN-SN)}
John's test statistic is given by
$$
T_n=\frac{n}{p}\tr\bigg(\frac{\fS_n}{p^{-1}\tr\big(\fS_n\big)}-\fI\bigg)^2-p;
$$
see \cite{John1971}.
Replacing $\fS_n$ with $\widetilde{\fS}_n$ and noting again that $\tr\big(\widetilde{\fS}_n\big)=p$ almost surely for all large~$n$ lead to the following modified John's test statistic:
\begin{align*}
\widetilde{T}_n=\frac{n}{p}\tr\Big(\widetilde{\fS}_n-\fI\Big)^2
-p=\frac{1}{y_n}\sum_{i=1}^p\big(\lambda^{\widetilde{\fS}_n}_i\big)^2-n-p.
\end{align*}
It is related to the LSS of $\widetilde{\fS}_n$ when $f(x)=x^2$. In this case, we have
\begin{align*}
G_{\widetilde{\fS}_n}(x^2)=p\int_{-\infty}^{+\infty}x^2\ {\rm d}\left(F^{\widetilde{\fS}_n}(x)\!-\!F_{y_n}(x)\right)=\sum_{i=1}^p\big(\lambda^{\widetilde{\fS}_n}_i\big)^2-p(1+y_n)
=y_n\widetilde{T}_n.
\end{align*}
Based on Theorem \ref{CLTLSS}, we can prove the following proposition.
\begin{prop}\label{prop:CLT_specific_x^2}
Under the assumptions of Theorem \ref{CLTLSS}, we have
\begin{align}
\frac{\widetilde{T}_n+1}{2}\stackrel{D}\longrightarrow N(0, 1).\label{eq:CLT_sq}
\end{align}
\end{prop}
Below we will refer to the test based on \eqref{eq:CLT_sq} as John's test based on the self-normalized observations (JHN-SN).
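For illustration, a minimal Python sketch of the JHN-SN statistic in \eqref{eq:CLT_sq} (ours, with hypothetical scales and dimensions); unlike LR-SN, it is applicable when $p>n$:

```python
import numpy as np

def jhn_sn(Y):
    # JHN-SN statistic (T_n~ + 1)/2; uses tr(S^2) = ||S||_F^2 for symmetric S.
    p, n = Y.shape
    U = Y / np.linalg.norm(Y, axis=0, keepdims=True)
    S = (p / n) * U @ U.T
    Tn = (n / p) * np.sum(S * S) - n - p
    return (Tn + 1) / 2

rng = np.random.default_rng(3)
p, n = 200, 100                             # p > n is allowed, unlike LR-SN
omega = np.exp(rng.standard_normal(n))      # arbitrary heteroskedastic scales
Y = omega * rng.standard_normal((p, n))     # null data: Sigma = I
print(jhn_sn(Y))                            # approximately N(0, 1) under H0
```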
\subsubsection{More General Tests Based on Self-normalized Observations }
More tests can be constructed by choosing $f$ in Theorem \ref{CLTLSS} to be different functions. When $f(x)=x^k$ for $k\geq2$, the corresponding LSS is the
$k$th moment of the ESD of $\widetilde{\fS}_n$, for which we have
\begin{align*}
G_{\widetilde{\fS}_n}(x^k)=&p\int_{-\infty}^{+\infty}x^k\ {\rm d}\left(F^{\widetilde{\fS}_n}(x)\!-\!F_{y_n}(x)\right)\\
=&\sum_{i=1}^p\big(\lambda^{\widetilde{\fS}_n}_i\big)^k-p(1+y_n)^{k-1}H_F\Big(\frac{1-k}{2}, 1-\frac{k}{2}, 2, \frac{4y_n}{(1+y_n)^2}\Big),
\end{align*}
where $H_F(a, b, c, d)$ denotes the hypergeometric function $_2F_1(a, b, c, d)$. By Theorem \ref{CLTLSS} again, we have the following proposition.
\begin{prop}\label{prop:CLT_x^k}
Under the assumptions of Theorem \ref{CLTLSS}, for any $k\geq2$, we have
\begin{align*}
\frac{G_{\widetilde{\fS}_n}(x^k)-\mu_{n, x^k}}{\sigma_{n, x^k}}\stackrel{D}\longrightarrow N(0, 1),\quad \text{where}
\end{align*}
\begin{align*}
\mu_{n, x^k}=&-\frac{2k(k-1)(1+y_n)^{k-2}}{(k+1)(k+2)}\bigg(\!(y_n-1)^2H_F\Big(\!\frac{3-k}{2}, 1-\frac{k}{2}, 1, \frac{4y_n}{(1+y_n)^2}\!\Big)\\
&~~~~~~~~+(-1+4ky_n-y_n^2)H_F\Big(\frac{3-k}{2}, 1-\frac{k}{2}, 2, \frac{4y_n}{(1+y_n)^2}\Big)\!\bigg)\\
&+\frac{1}{4}\Big((1+\sqrt{y_n})^{2k}+(1-\sqrt{y_n})^{2k}\Big)-\frac{1}{2}\sum_{i=0}^k{k \choose i}^2y_n^i,
\end{align*}
and
\begin{align*}
\sigma^2_{n, x^k}=&-2y_n\bigg((1-y_n)^kk\sum_{i=0}^{k+1}{k+1\choose i}\Big(\frac{1-y_n}{y_n}\Big)^{1-i}\frac{(k+i-1)!}{(i-1)!(k+1)!}\bigg)^2\\
&+2y_n^{2k}\sum_{i=0}^{k-1}\sum_{j=0}^k{k\choose i}{k\choose j}\Big(\frac{1-y_n}{y_n}\Big)^{i+j}\sum_{\ell=1}^{k-i}\ell{2k-1-(i+\ell)\choose k-1}{2k-1-j+\ell\choose k-1}.
\end{align*}
\end{prop}
{\begin{rmk}
Proposition \ref{prop:CLT_specific_x^2} is a special case of Proposition \ref{prop:CLT_x^k} with $k=2$.
\end{rmk}}
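As a quick numerical sanity check on the centering term (our own illustration using SciPy's \texttt{hyp2f1}, not part of the paper's derivations): for $k=2$ the hypergeometric factor equals $1$, recovering the centering $p(1+y_n)$ of $G_{\widetilde{\fS}_n}(x^2)$, and for $k=3$ the centering per coordinate equals the third Mar\v{c}enko--Pastur moment $1+3y+y^2$:

```python
import numpy as np
from scipy.special import hyp2f1

yn = 0.5
x = 4 * yn / (1 + yn) ** 2

# k = 2: the parameter 1 - k/2 vanishes, so 2F1 = 1 and the centering
# p (1 + y_n)^{k-1} 2F1(...) reduces to p (1 + y_n), as in G(x^2) above.
assert abs(hyp2f1((1 - 2) / 2, 1 - 2 / 2, 2, x) - 1.0) < 1e-12

# k = 3: (1 + y)^2 * 2F1(-1, -1/2, 2, x) = 1 + 3y + y^2, the third MP moment.
m3 = (1 + yn) ** 2 * hyp2f1((1 - 3) / 2, 1 - 3 / 2, 2, x)
print(m3)   # 2.75 = 1 + 3*(0.5) + (0.5)**2, up to rounding
```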
\begin{rmk}
Proposition \ref{prop:CLT_x^k} enables us to consistently detect any alternative hypothesis under which the covariance matrix of $\fZ$ admits an LSD not equal to $\delta_{1}$. The reason is that, under such a situation, the LSD of $\widetilde{\fS}_n$, say $\widetilde{H}$, will not be the standard Mar\v{c}enko-Pastur law specified in Proposition \ref{prop:LSD}. Therefore, there exists a $k\geq 2$ such that $\int_{-\infty}^{\infty} x^k\, {\rm d}\widetilde{H}(x)\neq\int_{-\infty}^{\infty} x^k\, {\rm d}F_y(x)$. Consequently, $G_{\widetilde{\fS}_n}(x^k)$ in \eqref{generalG} will blow up, and the testing power will approach $1$.
\end{rmk}
\section{SIMULATION STUDIES}\label{testpro}
We now demonstrate the finite-sample performance of our proposed tests.
For different values of $p$ and $p/n$, we will check the sizes and powers of the LR-SN and JHN-SN tests.
{In the simplest situation where the observations are \hbox{i.i.d.} multivariate normal, we compare our proposed tests, LR-SN and JHN-SN, with the tests mentioned in Section \ref{test_hd}, namely, LW$_2$, S, CZZ$_2$ and WY-LR, and also the newly proposed test in \cite{li2017structure} (LY test). (In the multivariate normal case, WY-JHN test reduces to LW$_2$ test.) We find that while developed under a much more general setup, our tests perform just as well as the existing ones. On the other hand, LY test is less powerful. The detailed comparison results are given in the supplementary material \cite{YZCL17}. Real differences emerge when we consider more complicated situations, where existing tests fail while our tests continue to perform well.}
\subsection{The Elliptical Case} We investigate the performance of our proposed tests under the elliptical distribution. As in Section \ref{performance_of_existing_tests}, we take the observations to be $\fY_i=\omega_i\fZ_i$ with
\begin{enumerate}[(i)]
\item $\omega_i$'s being absolute values of \hbox{i.i.d.} standard normal random variables,
\item $\fZ_i$'s \hbox{i.i.d.} $p$-dimensional random vectors from $N(\mathbf{0}, \boldsymbol{\Sigma})$, and
\item $\omega_i$'s and $\fZ_i$'s independent of each other.
\end{enumerate}
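Conditions (i)--(iii) translate directly into a data-generating routine; a minimal Python sketch (our own, with hypothetical function and variable names):

```python
import numpy as np

def elliptical_sample(p, n, Sigma_sqrt, rng):
    omega = np.abs(rng.standard_normal(n))        # (i)  |N(0,1)| scales
    Z = Sigma_sqrt @ rng.standard_normal((p, n))  # (ii) N(0, Sigma) columns
    return omega * Z, Z                           # (iii) omega independent of Z

rng = np.random.default_rng(5)
Y, Z = elliptical_sample(100, 200, np.eye(100), rng)
# Self-normalization removes the omega_i's: normalized columns coincide.
assert np.allclose(Y / np.linalg.norm(Y, axis=0, keepdims=True),
                   Z / np.linalg.norm(Z, axis=0, keepdims=True))
```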
\emph{\textbf{Checking the size.}}
Table \ref{ellipticalsize} completes Table \ref{counterexample_sphere} by including the empirical sizes of our proposed LR-SN and JHN-SN tests, and also LY test in \cite{li2017structure}.
\begin{table}[H]
\begin{center}
\ra{0.63}
\setlength{\tabcolsep}{1.4pt}
\begin{tabular}{@{}c||cccccc|cccccccc|c@{}}\toprule[1pt]
~~~~~~&\multicolumn{8}{c}{$p/n=0.5$} & \phantom{~}& \multicolumn{6}{c}{$p/n=2$}\\
\cmidrule{2-9} \cmidrule{11-16}
$p$&{\small LW$\!_2$} &S &{\small CZZ$_2$}& {\small WY-LR}& {\small WY-JHN} &{\small LY}&{\small LR-SN}&{\small JHN-SN}&&{\small LW$\!_2$ }&{\small S}&{\small CZZ$_2$}& {\small WY-JHN}&{\small LY}&{\small JHN-SN}\\ \midrule[1pt]
{$100$}&{$100$}& $100$ & $51.8$&$100$& $100$&${4.4}$ &$\bf{4.6}$&$\bf{5.2}$ && $100$&$100$&$50.2$& $100$&${4.1}$&$\bf{4.9}$\\
$200$&$100$& $100$& $53.0$& $100$& $100$ &${4.5}$&$\bf{5.1}$&$\bf{4.9}$ && $100$&$100$ &$52.3$& $100$&${4.5}$&$\bf{4.5}$\\
$500$&$100$& $100$& $52.3$&$100$& $100$&${5.2}$&$\bf{4.9}$&$\bf{5.2}$ && $100$&$100$&$53.5$& $100$&${4.7}$&$\bf{5.2}$\\
\bottomrule[1pt]
\end{tabular}
\end{center}
\caption{\it{Empirical sizes $(\%)$ of LW$_2$, S, CZZ$_2$, WY-LR, WY-JHN, LY tests, and our proposed LR-SN, JHN-SN tests for testing $H_0: \boldsymbol{\Sigma}\propto\fI$ at $5\%$ significance level. Data are generated as $\fY_i=\omega_i\fZ_i$ where $\omega_i$'s are absolute values of \hbox{i.i.d.} $N(0, 1)$, $\fZ_i$'s are \hbox{i.i.d.}~$N(\mathbf{0}, \fI)$, and further $\omega_i$'s and $\fZ_i$'s are independent of each other. The results are based on $10,000$ replications for each pair of $p$ and~$n$. }}\label{ellipticalsize}
\end{table}
Table \ref{ellipticalsize} reveals a sharp difference between the existing tests and our proposed ones: the empirical sizes of the existing tests are severely distorted; in contrast, the empirical sizes of our LR-SN and JHN-SN tests are around the nominal level of $5\%$, as desired. LY test also yields the right level of size.
\bigskip
\emph{\textbf{Checking the power.}}
Table \ref{ellipticalsize} shows that LW$_2$, S, CZZ$_2$, WY-LR and WY-JHN tests are inconsistent under the elliptical distribution; we therefore exclude them when checking the power.
We generate observations under the elliptical distribution with $\boldsymbol{\Sigma}=\big(0.1^{|i-j|}\big)$. Table~\ref{ellipticalpower} reports the empirical powers of our proposed tests and LY test for testing $H_{0}: \boldsymbol{\Sigma}\propto\fI$ at $5\%$ significance level.
\begin{table}[H]
\begin{center}
\centering
\ra{0.63}
\begin{tabular}{@{}c||c|cccc|ccc|c@{}}\toprule[1pt]
~~~~~~&\multicolumn{3}{c}{$p/n=0.5$} &\phantom{abc}& \multicolumn{2}{c}{$p/n=2$}\\
\cmidrule{2-4} \cmidrule{6-7}
$p$& LY&LR-SN& JHN-SN&&LY& JHN-SN\\ \midrule[1pt]
$100$&$7.6$&$35.0$& $48.9$&&$3.5$& $8.2$\\
$200$&$14.5$&$88.7$& $97.0$&&$5.7$& $17.2$\\
$500$&$64.9$&$100$& $100$&&$9.0$& $70.5$\\
\bottomrule[1pt]
\end{tabular}
\end{center}
\caption{\it{ Empirical powers $(\%)$ of LY test and our proposed LR-SN and JHN-SN tests for testing $H_0: \boldsymbol{\Sigma}\propto\fI$ at $5\%$ significance level.
Data are generated as $\fY_i=\omega_i\fZ_i$ where $\omega_i$'s are absolute values of \hbox{i.i.d.} $N(0, 1)$, $\fZ_i$ are \hbox{i.i.d.} random vectors from $N(\mathbf{0}, \boldsymbol{\Sigma})$ with $\boldsymbol{\Sigma}=\big(0.1^{|i-j|}\big)$, and further $\omega_i$'s and $\fZ_i$'s are independent of each other. The results are based on $10,000$ replications for each pair of $p$ and~$n$. }}\label{ellipticalpower}
\end{table}
From Table \ref{ellipticalpower}, we find that
\begin{enumerate}[(i)]
\item Our tests, LR-SN and JHN-SN, as well as LY test, enjoy a blessing of dimensionality: for a fixed ratio $p/n$, the higher the dimension $p$, the higher the power;
\item LY test is less powerful than our tests.
\end{enumerate}
\subsection{Beyond Elliptical, a GARCH-type Case}\label{ssec:sim_hetero_case}
Recall that in our general model \eqref{observe}, the observations $\fY_i$ admit the decomposition $\omega_i\fZ_i$, and $\omega_i$'s can depend on each other and on $\{\fZ_i: i=1, \ldots, n\}$ in an arbitrary way. To examine the performance of our tests in such a general setup, we simulate data using the following two-step procedure:
\begin{enumerate}[1.]
\item For each $\fZ_i$, we first generate another $p$-dimensional random vector $\widetilde{\fZ}_i$, which consists of \mbox{i.i.d.} standardized random variables $\widetilde{Z}_{ij}$'s; and with $\boldsymbol{\Sigma}$ to be specified, $\fZ_i$ is taken to be $\boldsymbol{\Sigma}^{1/2}\widetilde{\fZ}_i$. In the simulation below, $\widetilde{Z}_{ij}$'s are sampled from the standardized $t$-distribution with $4$ degrees of freedom, which is heavy-tailed and does not even have a finite fourth moment.
\item For each $\omega_i$, inspired by the ARCH/GARCH model, we take $\omega^2_i=0.01+0.85\omega^2_{i-1}+0.1|\fY_{i-1}|^2/\tr\big(\boldsymbol{\Sigma}\big)$.
\end{enumerate}
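The two steps can be sketched in Python as follows (our own illustration; the initialization $\omega_0^2=0.01/(1-0.85-0.1)$ at the stationary level is our choice, since the recursion above does not specify it, and the $t(4)$ draws are divided by $\sqrt{2}$ to have unit variance):

```python
import numpy as np

def garch_type_sample(p, n, Sigma_sqrt, rng):
    # Step 1: Z_i = Sigma^{1/2} Ztilde_i, with i.i.d. standardized t(4)
    # entries (t(4) has variance 2, hence the division by sqrt(2)).
    Zt = rng.standard_t(df=4, size=(p, n)) / np.sqrt(2.0)
    Z = Sigma_sqrt @ Zt
    # Step 2: omega_i^2 = 0.01 + 0.85 omega_{i-1}^2 + 0.1 |Y_{i-1}|^2/tr(Sigma).
    tr_Sigma = np.trace(Sigma_sqrt @ Sigma_sqrt)
    Y = np.empty((p, n))
    w2 = 0.01 / (1.0 - 0.85 - 0.1)   # our choice: start at the stationary level
    for i in range(n):
        Y[:, i] = np.sqrt(w2) * Z[:, i]
        w2 = 0.01 + 0.85 * w2 + 0.1 * np.sum(Y[:, i] ** 2) / tr_Sigma
    return Y

rng = np.random.default_rng(4)
Y = garch_type_sample(100, 200, np.eye(100), rng)
```

Here the $\omega_i$'s are both serially dependent and dependent on the innovations, which is exactly the regime excluded by the elliptical model but allowed by \eqref{observe}.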
\emph{\textbf{Checking the size.}}
We test $H_{0}: \boldsymbol{\Sigma}\propto\fI$.
Table~\ref{size} reports the empirical sizes of our proposed tests and LY test at $5\%$ significance level.
\begin{table}[H]
\begin{center}
\centering
\ra{0.63}
\begin{tabular}{@{}c||c|cccc|ccc|c@{}}\toprule[1pt]
~~~~~~&\multicolumn{3}{c}{$p/n=0.5$} & \phantom{abc}& \multicolumn{2}{c}{$p/n=2$}\\
\cmidrule{2-4} \cmidrule{6-7}
$p$& LY&LR-SN& JHN-SN&& LY& JHN-SN\\ \midrule[1pt]
$100$&$8.2$&$\bf{5.5}$& $\bf{5.3}$&&$6.8$& $\bf{5.0}$\\
$200$&$8.5$&$\bf{5.7}$& $\bf{5.4}$&&$6.8$& $\bf{5.5}$\\
$500$&$7.6$&$\bf{5.3}$& $\bf{5.2}$&&$6.6$& $\bf{5.4}$\\
\bottomrule[1pt]
\end{tabular}
\end{center}
\caption{\it Empirical sizes $(\%)$ of LY test and our proposed LR-SN and JHN-SN tests for testing $H_0: \boldsymbol{\Sigma}\propto\fI$ at $5\%$ significance level. Data are generated as $\fY_i=\omega_i\fZ_i$ with $\omega^2_i=0.01+0.85\omega^2_{i-1}+0.1|\fY_{i-1}|^2/p$, and $\fZ_i$ consists of \mbox{i.i.d.} standardized $t(4)$ random variables. The results are based on $10,000$ replications for each pair of $p$ and $n$. }\label{size}
\end{table}
From Table \ref{size}, we find that, for all different values of $p$ and $p/n$, the empirical sizes of our proposed tests are around the nominal level of $5\%$. Again, this is in sharp contrast with the results in Table \ref{counterexample_sphere}, where the existing tests yield sizes far higher than $5\%$.
One more important observation is that although Theorem \ref{CLTLSS} requires the finiteness of~$\mathbb{E}\big(Z_{11}^{4}\big)$, the simulation above shows that
our proposed tests work well even when $\mathbb{E}\big(Z_{11}^4\big)$ does not exist.
Another observation is that, with $10{,}000$ replications, the $95\%$ margin of error for an estimated size is at most $1\%$ (using the conservative bound $2\sqrt{0.5\times0.5/10{,}000}=1\%$); hence the sizes of LY test in Table \ref{size} are statistically significantly higher than the nominal level of $5\%$.
\bigskip
\emph{\textbf{Checking the power.}}
To evaluate the power, we again take $\boldsymbol{\Sigma}=\big(0.1^{|i-j|}\big)$ and generate data according to the design at the beginning of this subsection. Table \ref{tab:power} reports the empirical powers of our proposed tests and LY test for testing $H_{0}: \boldsymbol{\Sigma}\propto\fI$ at $5\%$ significance level.
\begin{table}[H]
\begin{center}
\centering
\ra{0.63}
\begin{tabular}{@{}c||c|cccc|ccc|c@{}}\toprule[1pt]
~~~~~~&\multicolumn{3}{c}{$p/n=0.5$} & \phantom{abc}& \multicolumn{2}{c}{$p/n=2$}\\
\cmidrule{2-4} \cmidrule{6-7}
$p$& LY&LR-SN& JHN-SN&& LY& JHN-SN\\ \midrule[1pt]
$100$&$20.7$&$34.4$& $47.9$&&$7.8$& $8.7$\\
$200$&$54.4$&$87.8$& $96.6$&&$10.5$& $17.6$\\
$500$&$100$&$100$& $100$&&$26.4$& $69.9$\\
\bottomrule[1pt]
\end{tabular}
\end{center}
\caption{\it Empirical powers $(\%)$ of LY test and our proposed LR-SN and JHN-SN tests for testing $H_0: \boldsymbol{\Sigma}\propto\fI$ at $5\%$ significance level.
Data are generated as $\fY_i=\omega_i\fZ_i$ with $\omega^2_i=0.01+0.85\omega^2_{i-1}+0.1|\fY_{i-1}|^2/p$, and $\fZ_i=\big(0.1^{|i-j|}\big)^{1/2}\widetilde{\fZ}_i$ where $\widetilde{\fZ}_i$ consists of \mbox{i.i.d.} standardized $t(4)$ random variables. The results are based on $10,000$ replications for each pair of $p$ and~$n$. }\label{tab:power}
\end{table}
{Table \ref{tab:power} shows again that our tests enjoy a blessing of dimensionality. Moreover, comparing Table \ref{tab:power} with Table \ref{ellipticalpower}, we find that for each pair of $p$ and $n$, the powers of our tests are similar under the two designs. Such similarities show that our tests can not only accommodate (conditional) heteroskedasticity but also are robust to heavy-tailedness in~$\fZ_i$'s. Finally, LY test is again less powerful.}
\subsection{Summary of Simulation Studies}\label{ssec:summary_simulation}
Combining the observations in the three cases, we conclude that
\begin{enumerate}[(i)]
\item {The existing tests, LW$_2$, S, CZZ$_2$, WY-LR and WY-JHN, work well in the \hbox{i.i.d.} Gaussian setting, however, they fail badly under the elliptical distribution and our general setup;}
\item The newly proposed LY test in \cite{li2017structure} is applicable to the elliptical distribution, however, it is less powerful than the existing tests {in the \hbox{i.i.d.} Gaussian} setting and, in general, less powerful than ours;
\item Our LR-SN and JHN-SN tests perform well under all three settings, yielding the right sizes and enjoying high powers.
\end{enumerate}
{\section{EMPIRICAL STUDIES}\label{sect:empirical}}
Let us first explain the motivation of our empirical study, which concerns stock returns.
The total risk of a stock return can be decomposed into two components: systematic risk and idiosyncratic risk. Empirical studies in \cite{campbell2001have} and \cite{goyal2003idiosyncratic} show that idiosyncratic risk is the major component of the total risk.
It is not uncommon to assume that idiosyncratic returns are cross-sectionally uncorrelated, giving rise to the so-called strict factor model;
see, e.g., \cite{roll1980empirical}, \cite{brown1989number} and \cite{fan2008high}.
Our goal in this section is to test the cross-sectional uncorrelatedness of idiosyncratic returns.
We focus on the S\&P 500 Financials sector. There are in total $80$ stocks on the first trading day of 2012 (Jan 3, 2012), among which $76$ stocks have complete data over the years
of 2012-2016. We will focus on these 76 stocks. The stock prices that our analysis is based on are collected from the Center for Research in Security Prices (CRSP) daily database,
while the Fama-French three-factor data are obtained from Kenneth French's data library (\verb"http://mba.tuck.dartmouth.edu/pages/faculty/ken.french/data_library.html").
We consider two factor models: the CAPM and the Fama-French three-factor model.
We use a rolling window of six months to fit the two models.
Figure \ref{fig:hetero} reports the Euclidean norms of the fitted daily idiosyncratic returns.
\begin{figure}[H]
\begin{center}
\includegraphics[width=3in]{hetero_capm.pdf}
\includegraphics[width=3in]{hetero_ff.pdf}
\end{center}
\caption{\it Time series plots of the Euclidean norms of the daily idiosyncratic returns of $76$ stocks in the S$\&$P 500 Financials sector, by fitting the CAPM (left) and the Fama-French three-factor model (right) over the years of 2012--2016.}\label{fig:hetero}
\end{figure}
We see from Figure \ref{fig:hetero} that under both models, the Euclidean norms of the fitted daily idiosyncratic returns exhibit clear heteroskedasticity and clustering. Such features indicate that the idiosyncratic returns are unlikely to be \hbox{i.i.d.}, but more suitably modeled as a conditional heteroskedastic time series, which is compatible with our framework.
Now we test the
cross-sectional uncorrelatedness of idiosyncratic risk.
Specifically,
for a diagonal matrix $\boldsymbol{\Sigma}_{\mathcal{D}}$ to be chosen, we test
\begin{align}
&H_{0}:~\boldsymbol{\Sigma}_{\mathcal{I}} \propto \boldsymbol{\Sigma}_{\mathcal{D}}\quad \text{vs.} \quad H_{a}:~\boldsymbol{\Sigma}_{\mathcal{I}} \not\propto\boldsymbol{\Sigma}_{\mathcal{D}},\label{test:diag}
\end{align}
where $\boldsymbol{\Sigma}_{\mathcal{I}}$ denotes the covariance matrix of the idiosyncratic returns.
\subsection{Testing Results}\label{sec:empirical_realdata}
We test \eqref{test:diag} using the same rolling window scheme as for fitting the CAPM or the Fama-French three-factor model. For each month to be tested, the
diagonal matrix $\boldsymbol{\Sigma}_{\mathcal{D}}$ in \eqref{test:diag} is obtained by extracting the diagonal entries of the
sample covariance matrix of the self-normalized fitted idiosyncratic returns over the previous five months.
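As a rough illustration of how the diagonal reference matrix is formed, the sketch below builds $\boldsymbol{\Sigma}_{\mathcal{D}}$ from simulated residuals. The precise self-normalization is defined earlier in the paper; here we simply assume it scales each daily residual vector to Euclidean norm $\sqrt{p}$, which is a common convention in this literature:

```python
import numpy as np

rng = np.random.default_rng(5)
T, p = 105, 76                       # ~five months of daily residuals

# Hypothetical fitted idiosyncratic returns; in the paper these come from
# the rolling CAPM / Fama-French fits on the CRSP data.
eps = 0.02 * rng.standard_normal((T, p))

# Assumed form of self-normalization: scale each day's cross-sectional
# residual vector to Euclidean norm sqrt(p), so entries keep an O(1) scale.
Y = np.sqrt(p) * eps / np.linalg.norm(eps, axis=1, keepdims=True)

# Diagonal reference matrix: the diagonal of the sample covariance of Y
Sigma_D = np.diag(np.diag(np.cov(Y, rowvar=False)))
```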
Table \ref{tab:t_diag} summarizes the resulting JHN-SN test statistics.
\begin{table}[H]
\begin{center}
\ra{0.6}
\begin{tabular}{cccccc|c}
\hline\hline
\multicolumn{7}{c}{CAPM}\\
\midrule
~&Min& $Q_1$& Median& $Q_3$& Max& Mean (Sd)\\
JHN-SN& $6.3$& $18.1$& $29.8$& $44.3$& $83.1$&$33.1~(18.5)$\\
\midrule
\multicolumn{7}{c}{Fama-French three-factor model}\\
\midrule
~&Min& $Q_1$& Median& $Q_3$& Max& Mean (Sd)\\
JHN-SN& $5.0$& $12.4$& $24.4$& $30.4$& $77.0$&$23.8~(13.0)$\\
\hline\hline
\end{tabular}
\end{center}
\caption{\it Summary statistics of the JHN-SN statistics for testing \eqref{test:diag}. For both the CAPM and the Fama-French three-factor model, for each month,
we first estimate the idiosyncratic returns by fitting the model using the data in the current month and the previous five months. We then obtain
$\boldsymbol{\Sigma}_{\mathcal{D}}$ by extracting the diagonal entries of the sample covariance matrix of
the self-normalized idiosyncratic returns over the previous five months, and use the fitted idiosyncratic returns in the current month to conduct the test.}\label{tab:t_diag}
\end{table}
We observe from Table \ref{tab:t_diag} that:
\begin{enumerate}[(i)]
\item The values of the JHN-SN test statistics are in general rather large, corresponding to almost zero $p$-values. This finding casts doubt on the cross-sectional uncorrelatedness of the idiosyncratic returns obtained from fitting either the CAPM or the Fama-French three-factor model;
\item On the other hand, compared with the CAPM, the Fama-French three-factor model gives rise to idiosyncratic returns that are associated with less extreme test statistics. This confirms that the two additional factors, size and value, do have pervasive impacts on stock returns.
\end{enumerate}
\subsection{Checking the Robustness of the Testing Results in Section~\ref{sec:empirical_realdata}}
{The results in Table \ref{tab:t_diag} are based on testing against the estimated diagonal matrix $\boldsymbol{\Sigma}_{\mathcal{D}}$, which inevitably contains estimation errors. This raises the following question: are the extreme test statistics in Table \ref{tab:t_diag} due to the estimation error in $\boldsymbol{\Sigma}_{\mathcal{D}}$, or do they really reflect that the idiosyncratic returns are correlated?}
To answer this question, we redo the test based on simulated stock returns whose idiosyncratic returns are uncorrelated and exhibit heteroskedasticity.
Specifically, we consider the following three-factor model:
\begin{align}\label{three_factor_model}
\fr_t=\boldsymbol{\alpha}+\fB\ff_t+\boldsymbol{\varepsilon}_t,~~\text{with}~~ \ff_t\sim N(\boldsymbol{\mu}_f, \boldsymbol{\Sigma}_f),~~ \boldsymbol{\varepsilon}_t=\omega_t\cdot\boldsymbol{\Sigma}_{\boldsymbol{\mathcal{I}}}^{1/2}\fZ_t~~\text{and}~~\fZ_t\sim N(\boldsymbol{0}, \fI),
\end{align}
where $\fr_t$ denotes the return vector at time $t$,
$\fB$ is a factor loading matrix, $\ff_t$ represents the three factors, and $\boldsymbol{\varepsilon}_t$ consists of the idiosyncratic returns.
To mimic the real data, we calibrate the parameters as follows:
\begin{enumerate}[(i)]
\item The factor loading matrix $\boldsymbol{B}$ is taken to be the estimated factor loading matrix by fitting the Fama-French three-factor model to the daily returns of the $76$ stocks that we analyzed above, and $\boldsymbol{\alpha}$ is obtained by hard thresholding the estimated intercepts by two standard errors;
\item The mean and covariance matrix of the factor returns, $\boldsymbol{\mu}_f$ and $\boldsymbol{\Sigma}_f$, are taken to be the sample mean and sample covariance matrix of the Fama-French three-factor returns;
\item To generate data under the null hypothesis that the idiosyncratic returns are uncorrelated, {their covariance matrix $\boldsymbol{\Sigma}_{\boldsymbol{\mathcal{I}}}$
is taken to be the diagonal matrix obtained by extracting the diagonal entries of the sample covariance matrix of the self-normalized fitted idiosyncratic returns}; and
\item Finally, $\omega_t$ is taken to be the Euclidean norm of the fitted daily idiosyncratic returns.
\end{enumerate}
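The data-generating process \ref{three_factor_model} under the calibration steps above can be sketched as follows; all numerical values here are hypothetical placeholders for the calibrated quantities:

```python
import numpy as np

rng = np.random.default_rng(1)
p, K, T = 76, 3, 126                  # stocks, factors, days in one window

# Hypothetical calibrated inputs; in the paper alpha, B, mu_f, Sigma_f and
# the diagonal of Sigma_I are all estimated from the real data.
alpha = np.zeros(p)                                # thresholded intercepts
B = rng.standard_normal((p, K))                    # factor loadings
mu_f = np.array([5e-4, 1e-4, 1e-4])                # factor means
Sigma_f = np.diag([1e-4, 5e-5, 5e-5])              # factor covariance
sd_idio = 1e-2 * (0.5 + rng.random(p))             # sqrt of diag(Sigma_I)
omega = 0.05 * (1 + 0.5 * np.sin(np.arange(T) / 10))  # heteroskedastic scale

# r_t = alpha + B f_t + omega_t * Sigma_I^{1/2} Z_t with Z_t ~ N(0, I):
f = rng.multivariate_normal(mu_f, Sigma_f, size=T)    # (T, K) factor draws
Z = rng.standard_normal((T, p))
eps = omega[:, None] * (Z * sd_idio)     # Sigma_I^{1/2} Z for diagonal Sigma_I
R = alpha + f @ B.T + eps                # (T, p) simulated stock returns
```

By construction the idiosyncratic returns are cross-sectionally uncorrelated (diagonal $\boldsymbol{\Sigma}_{\mathcal{I}}$) yet conditionally heteroskedastic through $\omega_t$, which is exactly the null scenario being checked.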
With such generated data, we test \eqref{test:diag} in parallel with the real data analysis.
Table \ref{tab:simu_diagonal} summarizes the JHN-SN test statistics for testing \eqref{test:diag} based on the simulated data.
\begin{table}[H]
\ra{0.7}
\begin{center}
\begin{tabular}{cccccc|c|c}
\hline\hline
\multicolumn{8}{c}{Simulated data based on a three-factor model}\\
\midrule
~&Min& $Q_1$& Median& $Q_3$& Max& Mean (Sd)& Percent within $[-1.96, 1.96]$ \\
JHN-SN& $-1.1$& $-0.2$& $0.6$& $1.2$& $2.5$&$0.6~(0.9)$& $94.5\%$\\
\hline\hline
\end{tabular}
\end{center}
\caption{\it Summary statistics of the JHN-SN statistics for testing \eqref{test:diag} based on simulated returns from Model \eqref{three_factor_model}. {To conduct the test, with a rolling window of six months,
we first estimate the idiosyncratic returns by fitting the three-factor model.} We then obtain
$\boldsymbol{\Sigma}_{\mathcal{D}}$ by extracting the diagonal entries of the sample covariance matrix of
the self-normalized fitted idiosyncratic returns over the previous five months, and use the fitted idiosyncratic returns in the current month to conduct the test.}\label{tab:simu_diagonal}
\end{table}
Table \ref{tab:simu_diagonal} reveals a sharp contrast with Table \ref{tab:t_diag}. We see that if the idiosyncratic returns are indeed uncorrelated,
then even though they are heteroskedastic and even though we are testing against the estimated $\widehat{\boldsymbol{\Sigma}}_{\mathcal{D}}$, the percentage of test statistics that fall
within $[-1.96, 1.96]$ is close to 95\%, the expected level under the null hypothesis. In sharp contrast, the test statistics in Table \ref{tab:t_diag} are all very extreme.
This comparison suggests that the idiosyncratic returns in the real data are indeed unlikely to be uncorrelated.
\section{CONCLUSIONS}\label{conclusion}
We study testing high-dimensional covariance matrices under a generalized elliptical distribution, which can feature heteroskedasticity, the leverage effect, asymmetry, etc. We establish a CLT for the LSS of the sample covariance matrix based on self-normalized observations. The CLT differs from the existing ones for the usual sample covariance matrix: it neither requires $\mathbb{E}\big(Z_{11}^4\big)=3$ as in \cite{BaiS04}, nor involves $\mathbb{E}\big(Z_{11}^4\big)$ as in \cite{Najim2013}. Based on the new CLT, we propose two tests by modifying the likelihood ratio test and John's test. {More general tests are also provided.} Numerical studies show that our proposed tests work well whether the observations are \hbox{i.i.d.} Gaussian, follow an elliptical distribution, feature conditional heteroskedasticity, or even when the $\mathbf Z_i$'s do not admit a fourth moment. Empirically, we apply the proposed tests to test the cross-sectional uncorrelatedness of idiosyncratic returns. The test results suggest that the idiosyncratic returns from fitting either the CAPM or the Fama-French three-factor model are cross-sectionally correlated.
\section{ACKNOWLEDGEMENTS}
Research partially supported by RGC grants GRF606811, GRF16305315 and GRF16304317 of the HKSAR.
We thank Zhigang Bao for a suggestion that helps relax the assumptions of Lemma B.2 in the supplementary material \cite{YZCL17}.
\section{Introduction}\label{sec:Intro}
Iterative methods based on projections of the solution onto a Krylov subspace are often used to solve large-scale linear ill-posed problems \cite{RegParamItr,Chung2008,Novati2013,Bazan2010,Hochstenbach2010,projRegGenForm,Gazzola2015}, \cite[Chap. 6]{RankDeff}, \cite[Chap. 6]{HansenInsights}. Such large problems arise in a variety of applications including image deblurring \cite{Nagy2004,Yuan2007,SpecFiltBook,projRegGenForm,Chung2008} and machine learning \cite{Ong2004,Martens2010,Freitas2005,Ide2007}. Krylov methods iteratively project the solution onto a low-dimensional subspace and solve the resulting small-scale problem using standard procedures such as the QR decomposition \cite{LSQR}. These methods are therefore attractive for the solution of large-scale problems that cannot be solved directly as well as for problems perturbed by noise, since the projection onto a Krylov subspace possesses a regularizing effect \cite{Hnetynkova2009}. Accurate solution using iterative procedures requires stopping the process close to the optimal stopping iteration. In this paper we develop a general stopping criterion for Krylov subspace regularization, which we particularly apply to the Golub-Kahan bidiagonalization (GKB), also frequently referred to as Lanczos bidiagonalization \cite{Golub1965}.
Consider the problem
\begin{equation}\label{eq:Ax=b} Ax=b,\end{equation}
where the matrix \(A\in\mathbb{R}^{m\times n}\) is large and ill-conditioned and the data vector \(b=b_{true} + n\) constitutes the true data \(b_{true}\) perturbed by an additive white noise vector \(n\). We are interested in approximating the least-squares solution of the problem
\begin{equation}\label{eq:LSsolution}
\min_x ||b_{true}- Ax||^2
\end{equation}
without knowledge of \(b_{true}\). This can be done by minimizing the projected least-squares (PLS) problem
\begin{equation}\label{eq:GKBprob}
\min_x ||b-Ax||^2,\quad \text{such that}\quad x\in\mathcal{K}_k(A^TA,A^Tb),
\end{equation}
where
\begin{equation}\label{eq:KrylovSubspace}
\mathcal{K}_k(A^TA,A^Tb)=\text{span}\{A^Tb,\ldots,\left(A^TA\right)^{k-1}A^Tb\},
\end{equation}
is the Krylov subspace generated by the GKB process at iteration \(k\).
The regularizing effect of the projection onto \(\mathcal{K}_k(A^TA,A^Tb)\) then dampens the noise in \(b\), allowing us to approximate \(x_{true}\) \cite{RegParamItr,Hnetynkova2009}. The reason for this regularization effect is that in the initial iterations the basis vectors spanning the Krylov subspace are smooth, and so is the projected solution. However, for large iteration numbers \(k\) the accuracy of the projected solution decreases as the basis vectors become corrupted by noise. By applying GKB to \ref{eq:Ax=b}, we thus obtain a sequence of iterates \(\{x^{(k)}\}\) whose error \(||x^{(k)}-x_{true}||\) initially decreases with increasing iterations and then increases sharply. This behavior of GKB is termed semi-convergence \cite[Sect. 6.3]{RankDeff}. It is therefore crucial to develop a reliable stopping rule by which to terminate the iterative solution of \ref{eq:GKBprob} before noise contaminates the solution. For this purpose, a number of stopping criteria have been proposed, including the L-curve \cite{RegParamItr,LCurve}, the generalized cross validation (GCV) \cite{RegParamItr}, \cite[Sect. 7.4]{RankDeff} and the discrepancy principle \cite{RegParamItr}. However, both the GCV and the L-curve methods are inaccurate for determination of the stopping iteration in a significant percentage of cases, as has been demonstrated in \cite[Sect. 7.2]{Hansen2006} for the former and in Sect. \ref{sec:NumEx} below for the latter. The discrepancy principle, on the other hand, requires \emph{a priori} knowledge of the noise level and is highly sensitive to it \cite[Sect. 4.1.2]{Bauer2011}. Recently, a new stopping criterion called the normalized cumulative periodogram (NCP), which is based on a whiteness measure of the residual vector \(r^{(k)}=b-Ax^{(k)}\), was proposed in \cite{Hansen2006}. While this method outperforms the above-mentioned alternatives, we nevertheless show that its results are inconsistent for some of our numerical problems.
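Semi-convergence is easy to reproduce on a small synthetic problem. The sketch below uses SciPy's \texttt{lsqr}, which is mathematically equivalent to solving the projected least-squares problem over the Krylov subspace built by Golub-Kahan bidiagonalization; the Gaussian-kernel operator, noise level and iteration budget are illustrative choices, not those of the paper:

```python
import numpy as np
from scipy.sparse.linalg import lsqr

rng = np.random.default_rng(2)
n = 64
t = np.linspace(0, 1, n)

# A severely ill-conditioned Gaussian-kernel matrix (a standard test operator)
A = np.exp(-(t[:, None] - t[None, :]) ** 2 / (2 * 0.03 ** 2))
A /= A.sum(axis=1, keepdims=True)

x_true = np.sin(2 * np.pi * t) + 0.5 * np.sin(6 * np.pi * t)  # smooth solution
b = A @ x_true + 1e-3 * rng.standard_normal(n)                # noisy data

# Solution error of the k-th LSQR iterate: it first decreases, then the
# iterates pick up noise-dominated components and the error grows again.
errors = []
for k in range(1, 31):
    xk = lsqr(A, b, iter_lim=k, conlim=1e14)[0]
    errors.append(np.linalg.norm(xk - x_true))
k_opt = int(np.argmin(errors)) + 1     # the optimal stopping iteration
```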
Estimation of the stopping iteration using the GCV or the L-curve method requires projection of the solution onto the subspace \(\mathcal{K}_k(A^TA,A^Tb)\), which depends on the noisy data vector \(b\). In contrast to the original problem \ref{eq:Ax=b}, where the noise is entirely contained in the data vector \(b\) while the coefficient matrix \(A\) is noiseless, the coefficient matrix of the projected problem is contaminated by noise. It is thus nontrivial to generalize standard parameter-choice methods to the problem \ref{eq:GKBprob}, as their usage may result in suboptimal solutions because they do not account for noise in the coefficient matrix. To overcome this problem, we estimate the optimal stopping iteration for GKB using the Data Filtering (DF) method originally proposed and briefly discussed in \cite{Levin2016}. In the DF method the stopping iteration is selected as the one for which the distance \(||\widehat b-Ax^{(k)}||\) between the filtered data \(\widehat b\) and the data reconstructed from the iterated solution, \(Ax^{(k)}\), is either minimal or levels off. In \cite{Levin2016} the filtered data \(\widehat b\approx b-n\) is obtained by separating the noise from the data using the so-called Picard parameter \(k_0\), the index above which the coordinates of the data in the basis of the left singular vectors of \(A\) are dominated by noise. The approximation \(\widehat b\) of the unperturbed data is then obtained by setting the coordinates of \(b\) in that basis to zero for \(j\geq k_0\).
Computing the SVD of \(A\) is not feasible for large-scale problems, and the specific basis in which the authors of \cite{Levin2016} achieve a separation between signal and noise is therefore unavailable. To overcome this problem we propose to replace the SVD basis used in \cite{Levin2016} with the basis of the discrete Fourier transform (DFT) and show that we can achieve a similar separation of signal from noise in one and in two dimensions. It is well known, however, that the DFT assumes the signal to be periodic, and applying it to a non-periodic signal results in artifacts in the form of fictitious high-frequency components that cannot be distinguished from the noise in the DFT basis. We prevent these high-frequency components from appearing by using the Periodic Plus Smooth (PPS) decomposition \cite{Moisan2011}, which allows us to write the signal as a sum, \(b=p+s\), of a periodic component \(p\) and a smooth component \(s\). The periodic component is compatible with the periodicity assumption of the DFT and does not produce high-frequency artifacts, while the smooth component does not need to be filtered. We then have to filter only the Fourier coefficients of the periodic component.
For two-dimensional problems, the Fourier coefficients of the data require a vectorization prior to estimation of the Picard parameter. The coefficients are usually vectorized by order of increasing spatial frequency \cite{Hansen2006}. Here we propose an alternative vectorization, ordered by increasing value of the product of the spatial frequencies in each dimension, which enables a more accurate and consistent estimation of the Picard parameter. We demonstrate that such ordering is equivalent to the sorting of a Kronecker product of two vectors of spatial frequencies and stems from a corresponding Kronecker product structure of the two-dimensional DFT matrix. This approach is also analogous to reordering the SVD of a separable blur as discussed in e.g. \cite[Sect. 4.4.2]{SpecFiltBook}. We demonstrate that a filter based on the proposed ordering performs similarly to or outperforms its spatial frequencies-based counterpart in all our numerical examples, allowing termination of the iterative process closer to the optimal iteration. The new filtering procedure is simple and effective, and can be used independently of the iterative inversion algorithm.
Hybrid methods, which replace the semi-convergent PLS problem \ref{eq:GKBprob} with a convergent alternative, have received significant attention in recent years \cite{RegParamItr,Chung2008,Novati2013,Hochstenbach2010,Bazan2010,Gazzola2015,Chung2015}. These methods combine Tikhonov regularization with a projection onto \(\mathcal{K}_k(A^TA,A^Tb)\), replacing problem \ref{eq:GKBprob} with the projected Tikhonov problem
\begin{equation}\label{eq:TikhMinProb} \min_x ||b-Ax||^2 + \lambda^2||Lx||^2\quad \text{such that}\quad x\in\mathcal{K}_k(A^TA,A^Tb),\end{equation}
where \(L\) is a regularization matrix and \(\lambda\) is a regularization parameter that controls the smoothness of the solution. In this paper, we follow the authors of \cite{RegParamItr,Chung2008,Novati2013,Hochstenbach2010,Bazan2010,Gazzola2015} by considering only the \(L=I\) case. We note, however, that a generalization to the case \(L\neq I\) was developed and discussed in \cite{Hochstenbach2010}. The advantage of hybrid methods is that given an accurate choice of the value of \(\lambda\) at each iteration, the error in the solution of \ref{eq:TikhMinProb} stabilizes for large iteration numbers, in contrast to the least-squares problem \ref{eq:GKBprob}, for which the solution error has a minimum. However, appropriately choosing the regularization parameter \(\lambda\) at each iteration is a difficult task, since the coefficient matrix in the projected problem becomes contaminated by noise, as in the PLS problem. Hence, standard parameter-choice methods such as the GCV cannot be na\"{\i}vely applied to problem \ref{eq:TikhMinProb}. In practice, the GCV indeed fails to stabilize the iterations in a large number of cases, as reported in \cite{Chung2008} and \cite[Sect. 5.1.1]{Bazan2010}. To overcome this problem, the authors of \cite{Chung2008} proposed the weighted GCV (W-GCV) method, which incorporates an additional free weight parameter, chosen adaptively at each iteration and shown to significantly improve the performance of the method. We demonstrate, however, using several numerical examples, that the results of the W-GCV method are still suboptimal.
This paper is organized as follows. In Sect. \ref{sec:DirectReg} we summarize results from the Tikhonov regularization of \ref{eq:Ax=b} that we extend to the PLS problem \ref{eq:GKBprob}. In Sect. \ref{sec:DFTfilter}, we present our filtering technique, based on the Picard parameter in the DFT basis for one- and two-dimensional problems. In Sect. \ref{sec:GKBinvert}, we present the GKB algorithm and formulate our stopping criterion. In this section we also discuss hybrid methods that combine projection with Tikhonov regularization. Finally, in Sect. \ref{sec:NumEx} we give numerical examples which demonstrate the performance of the proposed stopping criterion, and compare it to the L-curve, NCP and W-GCV methods.
\section{Tikhonov regularization}\label{sec:DirectReg}
We begin with a description of our parameter-choice method, detailed in \cite{Levin2016}, for standard direct Tikhonov regularization of \ref{eq:Ax=b} using the Picard parameter, which represents the starting point of our derivation. The Tikhonov regularization method solves the ill-posed problem \ref{eq:Ax=b} by replacing it with the related, well-posed counterpart
\begin{equation}\label{eq:TikhProb} \min_x ||b-Ax||^2 + \lambda^2||x||^2 \implies \left(A^TA+\lambda^2I\right)x=A^Tb.\end{equation}
The solution of \ref{eq:TikhProb} can be written in a convenient form, using the singular value decomposition (SVD) of \(A\), given by
\begin{equation}\label{eq:SVDofA} A = U\Sigma V^T,\end{equation}
where \(U\in\mathbb{R}^{m\times m}\) and \(V\in\mathbb{R}^{n\times n}\) are orthogonal matrices. For simplicity, let the \(j\)th columns of \(U\) and \(V\) be denoted by \(u_j\) and \(v_j\), respectively, and the \(j\)th singular value of \(A\) by \(\sigma_j\). Furthermore, let \(\beta_j= u_j^Tb\) be the \(j\)th Fourier coefficient of \(b\) with respect to \(\{u_j\}_{j=1}^m\) and let \(\nu_j = u_j^Tn\) be the Fourier coefficients of the noise. Then, the solution of \ref{eq:TikhProb} can be written as
\begin{equation}\label{eq:TikhSoln}
x(\lambda) = \sum_{j=1}^{m}\frac{\sigma_j}{\sigma_j^2+\lambda^2}\beta_j v_j.
\end{equation}
It can be shown that in order for solution \ref{eq:TikhSoln} to represent a good approximation to the true solution \(x_{true}\) for some \(\lambda\), the problem must satisfy the discrete Picard condition (DPC) \cite{DPC}. The DPC requires that the sequence of Fourier coefficients of the true data \(\{u_j^Tb_{true}\}=\{\beta_j-\nu_j\}\) decays faster than the singular values \(\{\sigma_j\}\) which, by the ill-conditioning of \(A\), decay relatively quickly. Therefore, the DPC implies that \(\beta_j-\nu_j \approx 0\), or equivalently, \(\beta_j\approx \nu_j\), for all \(j\geq k_0\), where the index \(k_0\) is termed the Picard parameter. In other words, the coefficients of \(b\) with indices \(j\geq k_0\) are dominated by noise, while coefficients with smaller indices are dominated by the true data.
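As a quick numerical check, the Tikhonov solution \ref{eq:TikhSoln} computed from the SVD filter factors coincides with the solution of the regularized normal equations \((A^TA+\lambda^2I)x=A^Tb\). The test matrix below is an arbitrary ill-conditioned example, not one of the paper's test problems:

```python
import numpy as np

rng = np.random.default_rng(3)
m = n = 40
# A hypothetical ill-conditioned stand-in for A (Vandermonde matrix)
A = np.vander(np.linspace(0, 1, m), n, increasing=True)
b = A @ rng.standard_normal(n) + 1e-6 * rng.standard_normal(m)
lam = 1e-2

# Tikhonov solution via the SVD filter factors sigma_j / (sigma_j^2 + lam^2)
U, s, Vt = np.linalg.svd(A, full_matrices=False)
beta = U.T @ b                                  # Fourier coefficients of b
x_svd = Vt.T @ (s / (s ** 2 + lam ** 2) * beta)

# The same solution from the regularized normal equations
x_ne = np.linalg.solve(A.T @ A + lam ** 2 * np.eye(n), A.T @ b)
```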
To estimate \(\lambda\) we can rewrite \ref{eq:LSsolution} as
\begin{equation}\label{eq:LSsolution2}
\min_\lambda ||b_{true}- Ax(\lambda)||^2,
\end{equation}
but since \(b_{true}\) is not known we propose to replace it with the filtered data \(\widehat b\approx b_{true}\), as in the DF method \cite{Levin2016}. The DF method sets the regularization parameter \(\lambda\) for \ref{eq:TikhProb} to be the minimizer of the distance
\begin{equation}
\label{eq:DistNorm}
\min_\lambda ||\widehat{b}-Ax(\lambda)||^2,
\end{equation}
between the data \(Ax(\lambda)\) reconstructed from the solution \ref{eq:TikhSoln} and the filtered data \(\widehat b\). To obtain the filtered data \(\widehat b\) we remove the noise-dominated coefficients from the expansion of \(b\) in the basis \(\{u_j\}\), so that
\begin{equation}\label{eq:PicFiltSVD}
\widehat b = \sum_{j=1}^{k_0-1}\beta_ju_j.
\end{equation}
The Picard parameter \(k_0\) can be found by detecting the levelling-off of the sequence
\begin{equation}\label{eq:VxDefn}
V(k) = \frac{1}{m-k+1}\sum_{j=k}^m \beta_j^2,
\end{equation}
which decreases on average until it levels off at \(V(k_0)\simeq s^2\), where \(s^2\) is the variance of the white noise. The detection is done by setting \(k_0\) to be the smallest index satisfying
\begin{equation}\label{eq:PicIndCondSVD}
\frac{|V(k+h)-V(k)|}{V(k)} \leq \varepsilon,
\end{equation}
for some step size \(h\) and bound \(\varepsilon\) on the relative change. This estimation of \(k_0\) is robust to changes in \(h\) and \(\varepsilon\), with the values \(\varepsilon\in[10^{-3},10^{-1}]\) and \(h\in[\lfloor\frac{m}{100}\rfloor,\lceil\frac{m}{10}\rceil]\) working consistently well \cite{Levin2016}.
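A minimal sketch of the levelling-off detection, with \(V(k)\) computed by a reverse cumulative sum. The coefficient sequence here is synthetic, built to satisfy the DPC (fast decay followed by white noise); the decay rate and noise level are arbitrary:

```python
import numpy as np

def picard_parameter(beta, h=None, eps=1e-2):
    """First index k0 (1-based) at which V(k) levels off.

    V(k) is the mean of |beta_j|^2 over j >= k; levelling off is declared
    when |V(k+h) - V(k)| / V(k) <= eps.  abs() makes the rule applicable
    to complex (DFT) coefficients as well.
    """
    m = len(beta)
    if h is None:
        h = max(1, m // 20)                     # within the suggested range
    # Tail averages via a reverse cumulative sum: V[k] = mean(|beta|^2[k:])
    tail = np.cumsum(np.abs(beta[::-1]) ** 2)[::-1]
    V = tail / np.arange(m, 0, -1)
    for k in range(m - h):
        if abs(V[k + h] - V[k]) <= eps * V[k]:
            return k + 1
    return m

# Synthetic coefficients obeying the DPC: fast decay, then white noise
rng = np.random.default_rng(4)
m, s = 2000, 1e-3
beta = np.exp(-np.arange(m) / 30.0) + s * rng.standard_normal(m)
k0 = picard_parameter(beta)
```

For this sequence the detected \(k_0\) falls where the decaying signal drops below the noise floor \(s^2\), as the theory predicts.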
Unfortunately, for large-scale problems, computing the SVD of \(A\) given by \ref{eq:SVDofA} is in general infeasible, and the above separation of noise from signal in the SVD basis is therefore unobtainable. In the next section, we propose a similar filtering procedure for \(b\) that utilizes the basis of the DFT instead of the SVD. This basis satisfies an analog of the DPC and is effective for large-scale applications.
\section{The DFT filter}\label{sec:DFTfilter}
In this section we replace the SVD basis discussed in Sect. \ref{sec:DirectReg} with the DFT basis. Since computing the Fourier coefficients with respect to the DFT basis can be done efficiently, the proposed procedure remains computationally cheap even for large-scale problems.
We begin by noting that the true data \(b_{true}\) is generally smooth and therefore dominated by low frequencies. This is true for cases such as image deblurring problems and problems arising from the discretization of integral equations, where the coefficient matrix \(A\) has a smoothing effect and hence \(b_{true} = Ax_{true}\) is smooth even if \(x_{true}\) is not; see \cite{Hansen2006,Hansen2008}, \cite[Sect. 5.6]{SpecFiltBook}. Furthermore, the SVD basis \(\{u_j\}\) is usually similar to that of the DFT, as shown in \cite{Hansen2006}, where the authors demonstrate that vectors \(u_j\) corresponding to small indices \(j\) are well represented by just the first few Fourier modes. In contrast, vectors \(u_j\) with large indices \(j\) are shown to include significant contributions from high-frequency Fourier modes. These observations suggest that we can replace the SVD basis with the Fourier basis, so that the role of the decreasing singular values in the ordering of the basis vectors is played by the increasing Fourier frequencies. For our procedure to be valid we expect the DFT coefficients of \(b_{true}\) to satisfy an analog of the DPC and therefore to decay to zero as the frequency increases.
For an image \(B\) of size \(M\times N\) we use the two-dimensional DFT
\begin{equation}\label{eq:DFT2D}
\text{DFT2}[B] = \mathcal{F}_M^*B\overline{\mathcal{F}}_N,
\end{equation}
where
\begin{equation}\label{Eq:DFTmat} \left(\mathcal{F}_m\right)_{j,k} = \frac{1}{\sqrt{m}}e^{i2\pi(j-1)(k-1)/m},\end{equation}
is the unitary DFT matrix of size \(m\times m\), the overline denotes complex conjugation, and \(i=\sqrt{-1}\). Note that \ref{eq:DFT2D} reduces to the one-dimensional DFT if \(N=1\). The data vector \(b\) in \ref{eq:Ax=b} is then obtained by vectorizing the matrix \(B\), stacking its columns one upon the other so that \(b=\text{vec}(B)\), where \(\text{vec}(\cdot)\) denotes the above vectorization scheme and \(m=MN\) is the resulting length of \(b\). However, the Fourier coefficients in \ref{eq:DFT2D} cannot be used directly for our purposes because a na\"{\i}ve application of the DFT to a non-periodic signal causes artifacts in the frequency domain. Specifically, the DFT assumes that the data to be transformed is periodic, and therefore applying it to non-periodic data leads to discontinuities at the boundaries. In the frequency domain, these discontinuities take the form of large high-frequency coefficients \cite{Moisan2011}. Thus the Fourier coefficients of smooth but non-periodic true data do not satisfy the DPC as we require. To circumvent this difficulty, we propose to use the Periodic Plus Smooth (PPS) decomposition introduced in \cite{Moisan2011}, which decomposes an image into a sum
\begin{equation}\label{eq:PPS} B = P + S,\end{equation}
of a periodic component \(P\), very similar to the original image but with smooth transitions across the periodic boundaries, and a smooth component \(S\) that varies slowly and is nonzero mainly at the boundaries, compensating for the difference between \(B\) and \(P\).
To compute the PPS of \(B\), we define
\begin{equation}\label{eq:PerGapImg}
\begin{aligned}
&V_1(j,k) = \left\{\begin{array}{cc} B(M-j+1,k)-B(j,k), & \text{if } j=1 \text{ or } j=M,\\ 0, & \text{otherwise},\end{array}\right.,\\
&V_2(j,k) = \left\{\begin{array}{cc} B(j,N-k+1)-B(j,k), & \text{if } k=1 \text{ or } k=N,\\ 0, & \text{otherwise},\end{array}\right.,
\end{aligned}
\end{equation}
and \(V = V_1 + V_2\). Then, the two-dimensional DFT of the smooth component \(S\) is given by
\begin{equation}\label{eq:DFTS}
\text{DFT2}[S](j,k) = \left\{\begin{array}{cc} 0, & \text{if } j=k=1,\\ \frac{\text{DFT2}[V](j,k)}{2\cos\left(\frac{2\pi(j-1)}{M}\right)+2\cos\left(\frac{2\pi(k-1)}{N}\right)-4}, & \text{otherwise},\end{array}\right.
\end{equation}
which can be inverted to obtain \(S\) and \(P=B-S\). Since \(S\) is always smooth, the periodic component \(P\) contains all the noise, and the subsequent filtering procedure therefore uses only \(P\).
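The PPS decomposition defined by \ref{eq:PerGapImg} and \ref{eq:DFTS} can be implemented in a few lines. Note that the unitary \(1/\sqrt{m}\) normalization of the DFT cancels in the ratio \ref{eq:DFTS}, so unnormalized FFT routines can be used directly; the ramp image below is an arbitrary non-periodic test input:

```python
import numpy as np

def periodic_smooth(B):
    """Periodic-plus-smooth decomposition B = P + S (Moisan's algorithm)."""
    M, N = B.shape
    # Boundary "gap" image V = V1 + V2 from the definitions above
    V = np.zeros((M, N))
    V[0, :] += B[-1, :] - B[0, :]
    V[-1, :] += B[0, :] - B[-1, :]
    V[:, 0] += B[:, -1] - B[:, 0]
    V[:, -1] += B[:, 0] - B[:, -1]
    # DFT2[S] = DFT2[V] / (2cos(2*pi*j/M) + 2cos(2*pi*k/N) - 4), zero at (0,0).
    # The DFT normalization cancels in the ratio, so plain fft2/ifft2 suffice.
    j = np.arange(M)[:, None]
    k = np.arange(N)[None, :]
    denom = 2 * np.cos(2 * np.pi * j / M) + 2 * np.cos(2 * np.pi * k / N) - 4
    denom[0, 0] = 1.0                      # avoid 0/0; entry is zeroed below
    S_hat = np.fft.fft2(V) / denom
    S_hat[0, 0] = 0.0
    S = np.real(np.fft.ifft2(S_hat))
    return B - S, S

# A smooth but non-periodic ramp image
M, N = 32, 48
B = (np.arange(M)[:, None] / M) * np.ones((1, N))
P, S = periodic_smooth(B)
```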
In order to filter the vectorized Fourier coefficients
\begin{equation}\label{eq:VecFourierCoeffs2D} \beta = \text{vec}\left(\text{DFT2}[P]\right),
\end{equation}
by using the Picard parameter, as in Sect. \ref{sec:DirectReg}, we first have to rearrange \(\beta\) so that the leading coefficients correspond to the true data while the trailing ones are dominated by noise. In \cite{Hansen2006} the coefficients were arranged in order of increasing spatial frequency. Specifically, the basis functions of the two-dimensional Fourier transform are plane waves given by
\begin{equation}\label{eq:2DFTbasis}
f((j,k),(s,l)) = \exp\left\{-i2\pi \left[\frac{(j-1)(k-1)}{M}+\frac{(s-1)(l-1)}{N}\right]\right\} = \exp\left\{-i2\pi \mathbf{k}\cdot\mathbf{r}\right\},
\end{equation}
where \(\mathbf{k}=\left(\frac{j-1}{M},\frac{s-1}{N}\right)^T\) is the frequency vector, with \(j=1,\ldots,M\) and \(s=1,\ldots,N\), and \(\mathbf{r}=(k-1,l-1)^T\) is the spatial vector. The components of \ref{eq:VecFourierCoeffs2D} are arranged in order of increasing magnitude of the spatial frequency \(\mathbf{k}\), given by \(|\mathbf{k}|^2=(j-1)^2/M^2+(s-1)^2/N^2\). We refer to this ordering as the elliptic ordering, since the contours of the spatial frequency function \(|\mathbf{k}|^2\) are ellipses centered about the zero frequency. However, use of this arrangement in our numerical experiments renders some results highly suboptimal.
An alternative ordering of the Fourier coefficients allows us to overcome this problem. Specifically, we utilize the Kronecker product structure of the two-dimensional Fourier transform, which can be written as a matrix-vector multiplication with \(b\) as
\begin{equation}\label{eq:DFT2kron}
\text{vec}\left(\text{DFT2}[B]\right) = \left(\mathcal{F}^{(2)}_{M,N}\right)^*b.
\end{equation}
Here
\begin{equation}\label{eq:2DForuierMat}
\mathcal{F}^{(2)}_{M,N} = \mathcal{F}_N\otimes\mathcal{F}_M
\end{equation}
is the two-dimensional Fourier transform matrix (the order of the factors follows from the identity \(\text{vec}(AXB)=(B^T\otimes A)\text{vec}(X)\) and the symmetry of \(\mathcal{F}_m\)), and '\(\otimes\)' denotes the Kronecker product defined as
\begin{equation}\label{eq:KronDefn} A\otimes B = \left(
\begin{array}{ccc}
a_{1,1}B & \cdots & a_{1,n}B \\
\vdots & \ddots & \vdots \\
a_{m,1}B & \cdots & a_{m,n}B \\
\end{array}
\right).\end{equation}
In view of \ref{eq:2DForuierMat}, we suggest reordering the Fourier coefficients \(\beta\) in \ref{eq:VecFourierCoeffs2D} according to the permutation \(\pi\) that arranges the Kronecker product
\begin{equation}\label{eq:freqVec}
\mathbf{f}^{(2)}_{M,N} = \mathbf{f}_N\otimes \mathbf{f}_M\in\mathbb{R}^m,
\end{equation}
in increasing order, where \(\mathbf{f}_M\in\mathbb{R}^M\) and \(\mathbf{f}_N\in\mathbb{R}^N\) are the vectors representing the ordered absolute frequencies of \(\mathcal{F}_M\) and \(\mathcal{F}_N\) respectively. Note that since the frequencies in the two-dimensional Fourier transform are shifted so that the zero frequency is located at the corner of the image, the vectors \(\mathbf{f}_N\) and \(\mathbf{f}_M\) in \ref{eq:freqVec} also need to be correspondingly shifted.
The vector \(\mathbf{f}^{(2)}_{M,N}\) whose components are products of the absolute values of spatial frequencies is not ordered and in a component-wise form \ref{eq:freqVec} is given by
\begin{equation}\label{eq:freqVecCompWise}
\left(\mathbf{f}^{(2)}_{M,N}\right)_{M(j-1)+s}=\frac{1}{m}\left\{\begin{array}{ll}
(j-1)(s-1), & 1\leq j \leq \lfloor\frac{N}{2}\rfloor,\ 1\leq s\leq \lfloor\frac{M}{2}\rfloor,\\
(N-j+1)(s-1), & \lfloor\frac{N}{2}\rfloor < j \leq N,\ 1\leq s\leq \lfloor\frac{M}{2}\rfloor,\\
(j-1)(M-s+1), & 1 \leq j \leq \lfloor\frac{N}{2}\rfloor,\ \lfloor\frac{M}{2}\rfloor < s\leq M,\\
(N-j+1)(M-s+1), & \lfloor\frac{N}{2}\rfloor < j \leq N,\ \lfloor\frac{M}{2}\rfloor < s\leq M, \end{array}\right.
\end{equation}
where, as above, \(m=MN\). We then construct the permutation \(\pi\) such that \(\mathbf{f}^{(2)}_{M,N}(\pi(1:m))\) appears in increasing order and rearrange the coefficients \ref{eq:VecFourierCoeffs2D} to obtain \(\beta\mapsto \beta(\pi(1:m))\). We term this the hyperbolic ordering, since the contours of the function \ref{eq:freqVecCompWise} are hyperbolas centered about the zero frequency (see Fig. \ref{fig:Masks}).
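A sketch of the hyperbolic ordering. The shifted absolute frequency vectors are exactly what \texttt{numpy.fft.fftfreq} produces (zero frequency first), and the order of the Kronecker factors only relabels entries without changing the sorted frequency sequence; the dimensions and the stand-in coefficient vector are arbitrary:

```python
import numpy as np

def hyperbolic_permutation(M, N):
    """Permutation pi that sorts the Kronecker product of |frequencies|."""
    # Shifted absolute frequencies of F_M and F_N, zero frequency first
    fM = np.abs(np.fft.fftfreq(M))
    fN = np.abs(np.fft.fftfreq(N))
    # Kronecker product of the frequency vectors; the factor order follows
    # the column-stacking vec convention and does not affect the sorted
    # sequence of product frequencies.
    f2 = np.kron(fN, fM)
    return np.argsort(f2, kind="stable"), f2

pi, f2 = hyperbolic_permutation(8, 6)
beta = np.arange(48, dtype=float)       # stand-in for vectorized coefficients
beta_sorted = beta[pi]                  # beta rearranged by hyperbolic order
```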
The above rearrangement using the ordering permutation \(\pi\) is analogous to the rearrangement of the SVD decomposition of a separable blur
\begin{equation}\label{eq:SepBlur} A = A_1\otimes A_2,\end{equation}
where \(A_1\in\mathbb{R}^{N\times N}\), \(A_2\in\mathbb{R}^{M\times M}\) and \(A\in\mathbb{R}^{m\times m}\) \cite[Sect. 2]{Hansen2008}. Letting \(A_1 = U_1\Sigma_1V_1^T\) and \(A_2 = U_2\Sigma_2V_2^T\) be the SVD of \(A_1\) and \(A_2\), the SVD of \(A\) \ref{eq:SVDofA} can be written as
\begin{equation}
A = \left(\underbrace{U_1\otimes U_2}_{= U}\right)\left(\underbrace{\Sigma_1\otimes \Sigma_2}_{= \Sigma}\right)\left(\underbrace{V_1\otimes V_2}_{= V}\right)^T.
\end{equation}
As in the case of the two-dimensional Fourier transform \ref{eq:2DForuierMat}, even though the singular values of \(A_1\) and \(A_2\) are ordered, those of \(A\) are not \cite[Sect. 4.4.2]{SpecFiltBook}, \cite{Hansen2008}. To be able to use the filter described in Sect. \ref{sec:DirectReg} we must reorder the entries of \(U\), \(\Sigma\) and \(V\) according to decreasing singular values using the ordering permutation \(\pi\) as in \ref{eq:freqVec}.
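The fact that the singular values of a Kronecker product are the pairwise products of the factors' singular values, and that these products are not automatically ordered, is easy to check numerically. The following NumPy sketch (an illustration under the stated assumptions, not code from the text) verifies the reordering:

```python
import numpy as np

# Singular values of A = kron(A1, A2) are the pairwise products of those of
# A1 and A2, but kron(S1, S2) is generally unordered and must be permuted.
rng = np.random.default_rng(0)
A1 = rng.standard_normal((3, 3))
A2 = rng.standard_normal((4, 4))
s1 = np.linalg.svd(A1, compute_uv=False)
s2 = np.linalg.svd(A2, compute_uv=False)

s_kron = np.kron(s1, s2)          # products, unordered in general
pi = np.argsort(-s_kron)          # ordering permutation (decreasing)
s_sorted = s_kron[pi]

# The reordered products match the singular values of the full Kronecker product
s_full = np.linalg.svd(np.kron(A1, A2), compute_uv=False)
assert np.allclose(s_sorted, s_full)
```

The same permutation would then be applied to the columns of \(U\) and \(V\), in analogy with the frequency reordering above.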
Once the Fourier coefficients are rearranged we proceed according to the procedure in Sect. \ref{sec:DirectReg}. Specifically, we form the sequence \(\{V(k)\}_{k=1}^m\) defined in \ref{eq:VxDefn}, estimate the Picard parameter using \ref{eq:PicIndCondSVD} and set to zero the Fourier coefficients with indices \(k_0\) and larger to form the vector \(\widehat\beta(\pi(1:m)) = (\beta(\pi(1)),\ldots,\beta(\pi(k_0-1)),\underbrace{0,\ldots,0}_{m-k_0+1})^T\). We then invert \ref{eq:VecFourierCoeffs2D} using \(\widehat\beta\) instead of \(\beta\) to obtain the filtered periodic component \(\widehat P\) and the filtered image \(\widehat B = \widehat P + S\). The filtered data vector is then obtained as \(\widehat b = \text{vec}(\widehat B)\).
Note that dropping the last coefficients of the data using the two orderings discussed above can also be interpreted as applying one-parameter windows of different shapes in the Fourier domain. Specifically, the elliptic ordering of \cite{Hansen2006} corresponds to setting to zero the Fourier coefficients outside of an ellipse, whereas the hyperbolic ordering corresponds to doing the same outside a hyperbola. This is illustrated in \ref{fig:Masks} where we plot the Fourier transform coefficients of a \(256\times 256\) image with \(k_0=10^4\) for each of the orderings. Viewed from this perspective, the proposed filtering algorithm simply applies a window depending on the parameter \(k_0\) to the DFT of the perturbed image. The difference between our approach and the approach of \cite{Hansen2006} is in the chosen shape of the window.
\section{Iterative inversion using the GKB}\label{sec:GKBinvert}
In this section, we consider the solution of the ill-posed problem \ref{eq:Ax=b} using the GKB algorithm \cite{Golub1965}. This algorithm approximates the subspace spanned by the right singular vectors of \(A\) corresponding to its \(k\) largest singular values with the first \(k\) basis vectors of the Krylov subspace \(\mathcal{K}_k(A^TA,A^Tb)\) (see \cite[sect. 6.3.2]{RankDeff} and \cite[sect. 6.3.1]{HansenInsights}). After \(k\) iterations (\(1\leq k\leq n\)), the GKB algorithm yields two matrices with orthonormal columns, \(W_k\in\mathbb{R}^{n\times k}\) and \(Z_{k+1}\in\mathbb{R}^{m\times(k+1)}\), and a lower bidiagonal matrix \(B_k\in\mathbb{R}^{(k+1)\times k}\) with the structure
\begin{equation}\label{eq:BkDefn} B_k = \left(
\begin{array}{cccc}
\varrho_1 & & & \\
\theta_2 & \varrho_2 & & \\
& \theta_3 & \ddots & \\
& & \ddots & \varrho_k \\
& & & \theta_{k+1} \\
\end{array}
\right),\end{equation}
such that
\begin{equation}
\label{eq:LanczosRels}\begin{aligned} &AW_k = Z_{k+1}B_k,\\ &A^TZ_{k+1} = W_kB_k^T + \varrho_{k+1}w_{k+1}e_{k+1}^T,\\ &Z_{k+1}\theta_1e_1 = b.
\end{aligned}
\end{equation}
The GKB algorithm is summarized in \ref{alg:GKB}. We perform a reorthogonalization at each step of the algorithm to ensure that the columns of \(W_k\) and \(Z_{k+1}\) remain orthogonal.
\begin{algorithm}
\caption{Golub-Kahan Bidiagonalization (GKB)}\label{alg:GKB}
\begin{algorithmic}
\INPUT{\(A,b,k\)}\Comment{Coefficient matrix \(A\), data vector \(b\) and number of iterations \(k\)}
\OUTPUT{\(W_k,Z_{k+1},B_k\)}
\LineComment{Initialization:}
\State \(\theta_1 \gets ||b||, \quad z_1\gets b/\theta_1\)
\State \(\varrho_1 \gets ||A^Tz_1||, \quad w_1\gets A^Tz_1/\varrho_1\)
\For{\(j=1,2,\ldots,k\)}
\State \(p_j \gets Aw_{j}-\varrho_jz_j\)
\State \(p_j\gets \left(I-Z_jZ_j^T\right)p_j\) \Comment{Reorthogonalization step}
\State \(\theta_{j+1} = ||p_j||, \quad z_{j+1}\gets p_j/\theta_{j+1}\)
\State \(q_j \gets A^Tz_{j+1}-\theta_{j+1}w_j\)
\State \(q_j\gets \left(I-W_jW_j^T\right)q_j\) \Comment{Reorthogonalization step}
\State \(\varrho_{j+1} = ||q_j||, \quad w_{j+1}\gets q_j/\varrho_{j+1}\)
\LineComment{Update output matrices:}
\State \(W_j \gets \left(W_{j-1},\ w_j\right), \quad Z_{j+1} \gets \left(Z_{j},\ z_{j+1}\right)\)
\State \(B_j \gets \left(\begin{array}{cc} \left(\begin{array}{c} B_{j-1} \\ 0 \end{array}\right), & \left(\begin{array}{c} \mathbf{0} \\ \varrho_j \\ \theta_{j+1} \end{array}\right)\end{array}\right)\)
\EndFor
\end{algorithmic}
\end{algorithm}
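A minimal NumPy translation of \ref{alg:GKB} with full reorthogonalization might read as follows; this is our own illustrative sketch, not the authors' code, and the final assertion checks the first Lanczos relation in \ref{eq:LanczosRels}:

```python
import numpy as np

def gkb(A, b, k):
    """Golub-Kahan bidiagonalization with full reorthogonalization (a sketch)."""
    m, n = A.shape
    Z = np.zeros((m, k + 1))
    W = np.zeros((n, k + 1))
    B = np.zeros((k + 1, k))
    theta = np.linalg.norm(b)
    Z[:, 0] = b / theta
    p = A.T @ Z[:, 0]
    rho = np.linalg.norm(p)
    W[:, 0] = p / rho
    for j in range(k):
        B[j, j] = rho                               # rho_{j+1} on the diagonal
        p = A @ W[:, j] - rho * Z[:, j]
        p -= Z[:, :j + 1] @ (Z[:, :j + 1].T @ p)    # reorthogonalization step
        theta = np.linalg.norm(p)
        Z[:, j + 1] = p / theta
        B[j + 1, j] = theta                         # theta_{j+2} on the subdiagonal
        q = A.T @ Z[:, j + 1] - theta * W[:, j]
        q -= W[:, :j + 1] @ (W[:, :j + 1].T @ q)    # reorthogonalization step
        rho = np.linalg.norm(q)
        W[:, j + 1] = q / rho
    return W[:, :k], Z, B

rng = np.random.default_rng(1)
A = rng.standard_normal((20, 15))
b = rng.standard_normal(20)
W, Z, B = gkb(A, b, 5)
assert np.allclose(A @ W, Z @ B)   # A W_k = Z_{k+1} B_k
```

The reorthogonalization projections \((I-Z_jZ_j^T)\) and \((I-W_jW_j^T)\) are applied exactly as in the algorithm; without them the columns would gradually lose orthogonality in floating-point arithmetic.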
It can be shown that the columns of \(W_k\) span the Krylov subspace \ref{eq:KrylovSubspace} and that we can achieve a regularizing effect by projecting the solution onto this subspace \cite[Sect. 2.1]{Gazzola2015}.
Choosing a solution from the column space of \(W_k\), such that \(x=W_ky\), and using the relations \ref{eq:LanczosRels}, we can rewrite the residual norm in \ref{eq:GKBprob} as
\begin{equation}\rho(k) = ||b-Ax^{(k)}||^2 = ||Z_{k+1}\left(\theta_1e_1-B_ky\right)||^2,\end{equation}
and, since \(Z_{k+1}\) has orthonormal columns,
\begin{equation}\label{eq:rewriteLS}
\rho(k) = ||\theta_1e_1-B_ky||^2.
\end{equation}
Hence, solving the PLS problem \ref{eq:GKBprob} amounts to minimizing \ref{eq:rewriteLS}, which is small-scale and can be solved using standard procedures such as the QR decomposition of \(B_k\) as in the LSQR algorithm \cite{LSQR}.
In the absence of noise, the GKB algorithm is terminated once \(\varrho_{k}=0\) or \(\theta_{k+1}=0\).
However, the solution of the PLS problem \ref{eq:rewriteLS} exhibits semi-convergence, whereby the error in the iterates \(||x_{true}-x^{(k)}||\) first decreases as \(k\) increases and then increases sharply well before the above condition is met. This is due to the fact that the columns of \(W_k\) contain increasing levels of noise, as described in \cite{Hnetynkova2009}. Hence, at early iterations the columns are almost noiseless and the solution gets closer to the true one, while at later iterations the solution becomes contaminated by noise. It is thus crucial to appropriately terminate the iterations before the noise becomes dominant.
Usually, standard methods like the GCV or the L-curve are used to estimate the optimal stopping iteration. However, the GCV method assumes that the noise is additive and fully contained in the data vector \(b\), while the coefficient matrix \(A\) is noiseless. That is indeed the case in the original, large-scale problem \ref{eq:Ax=b}, but not in the PLS problem of minimizing \ref{eq:rewriteLS}. In the PLS problem the projected data vector \(Z_{k+1}^Tb=\theta_1e_1\) is a noiseless scaled standard basis vector. The projected coefficient matrix \(B_k = Z_{k+1}^TAW_k\), however, is generated by Algorithm \ref{alg:GKB} from the noisy data vector \(b\) and depends on the columns of \(W_k\) and \(Z_{k+1}\). Therefore, the derivation of the standard form of the GCV function and the proof of its optimality, such as the ones given in \cite[Thm. 1]{GCV2}, no longer apply. The justification for the L-curve method seems to hold for the projected problem but, as we show in the numerical examples section, it is far from optimal in many cases.
In this paper we propose to use the DF method developed in \cite{Levin2016} to stop the iterative process. The DF method uses the distance between the filtered data \(\widehat b\) and the data reconstructed from the \(k\)th iterate \(Ax^{(k)}\) to characterize the quality of the iterated solution \(x^{(k)}\). Writing the distance as
\begin{equation}\label{eq:DistNormGKB}
\widehat f(k) = ||\widehat b - Ax^{(k)}||^2,
\end{equation}
and using the methods of Sect. \ref{sec:DFTfilter} to obtain the filtered data \(\widehat b\) we expect \(\widehat f(k)\) to have a global minimum at or near the optimal iteration. However, \(\widehat f(k)\) may also have local minima, and so we must continue the iterations beyond a potential minimum of \ref{eq:DistNormGKB} to ensure that the function \(\widehat f(k)\) continues to increase. In addition, for problems with very small noise levels, the filter of Sect. \ref{sec:DFTfilter} may not change the data vector \(b\) sufficiently, in which case \(\widehat f(k)\) will not have a minimum. Instead, \(\widehat f(k)\) will flatten after the optimal iteration, since the solution will not become significantly contaminated. To account for all the above behaviors, we propose to terminate the iterations once the relative change in \ref{eq:DistNormGKB} is small enough
\begin{equation}\label{eq:GKBcond}
\frac{\widehat f(k)-\widehat f(k+1)}{\widehat f(k)} \leq \delta,\quad \text{for \(p\) consecutive iterations}
\end{equation}
for some small bound \(\delta>0\). We emphasize that the numerator in \ref{eq:GKBcond} is not an absolute value and becomes negative if \(\widehat f(k)\) starts to increase. Since we choose \(\delta>0\), the condition \ref{eq:GKBcond} is automatically satisfied if \(\widehat f(k)\) has a minimum. Otherwise the iterations are terminated once \(\widehat f(k)\) becomes very flat. From our experience, the algorithm works well for \(\delta\in [10^{-4}, 10^{-2}]\) and \(p \geq 5\). Finally, we observe that since a discrete iteration number (\(k\in\{1,\ldots,n\}\)) acts as the regularization parameter, the ability of the reconstructed data \(Ax^{(k)}\) in \ref{eq:DistNormGKB} to approach the filtered data is limited, making the minimum of \ref{eq:DistNormGKB} less sensitive to the quality of the filtered data \(\widehat b\). Furthermore, the discrete regularization parameter does not limit the accuracy of the solution in practice. As we demonstrate in the numerical examples below, the additional regularization of hybrid methods is unnecessary and improves the solution only slightly, if at all.
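The stopping rule \ref{eq:GKBcond} amounts to tracking the relative decrease of \(\widehat f(k)\). A small sketch of the bookkeeping follows; the helper name and the interface (a precomputed list of values \(\widehat f(k)\) supplied by the caller) are our own illustrative choices:

```python
def df_stop(f_vals, delta=2e-3, p=5):
    """Return the stopping index once the relative decrease of the distance
    function stays at or below delta for p consecutive iterations,
    mirroring condition (GKBcond). f_vals holds f(1), f(2), ..."""
    count = 0
    for k in range(len(f_vals) - 1):
        rel = (f_vals[k] - f_vals[k + 1]) / f_vals[k]  # negative once f increases
        count = count + 1 if rel <= delta else 0
        if count >= p:
            return k + 1              # terminate here
    return len(f_vals) - 1            # budget exhausted without triggering
```

Because the numerator is signed, an increasing \(\widehat f(k)\) (past a minimum) triggers the rule immediately, while a flattening \(\widehat f(k)\) triggers it after \(p\) nearly-stationary steps.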
\subsection{Regularizing the projected problem}\label{sec:TikhRegProj}
In this section, we briefly discuss hybrid methods for the solution of \ref{eq:Ax=b}, which combine projection onto a Krylov subspace with Tikhonov regularization and are the main competitors to our approach. Instead of attempting to terminate the GKB process at an optimal iteration, hybrid methods replace the PLS problem \ref{eq:GKBprob} with the Tikhonov minimization problem \ref{eq:TikhMinProb}. Using the relations \ref{eq:LanczosRels} and \(x=W_ky\), as in Sect. \ref{sec:GKBinvert}, problem \ref{eq:TikhMinProb} can be rewritten as
\begin{equation}\label{eq:MinTikhGKB} \min_y ||\theta_1e_1-B_ky||^2 + \lambda^2 ||y||^2,\end{equation}
or as the normal equation
\begin{equation}\label{Eq:TikhNormalEq0}
\left(B_k^TB_k + \lambda^2 I\right)y = B_k^T\theta_1e_1
\end{equation}
which has the solution
\begin{equation}\label{Eq:TikhNormalEq}
y_\lambda = \left(B_k^TB_k + \lambda^2 I\right)^{-1}B_k^T\theta_1e_1.
\end{equation}
The solution \ref{Eq:TikhNormalEq} can be rewritten, similarly to Sect. \ref{sec:DirectReg}, using the SVD of \(B_k\) as
\begin{equation}\label{eq:SVDbk}
B_k = U_k\Sigma_k V_k^T,
\end{equation}
where \(U_k\in \mathbb{R}^{(k+1)\times (k+1)}\) and \(V_k\in\mathbb{R}^{k\times k}\) are orthogonal and \(\Sigma_k\in\mathbb{R}^{(k+1)\times k}\) has the structure
\begin{equation}\label{eq:SigmaStruct}
\Sigma_k = \left(\begin{array}{c} \text{diag}\{\sigma_1,\ldots,\sigma_k\} \\ \mathbf{0}^T \end{array}\right),
\end{equation}
with the singular values \(\{\sigma_j\}\) arranged in decreasing order.
Due to the structure of \(B_k\) in \ref{eq:BkDefn} and the fact that \(\varrho_j,\; \theta_j\; >0\) for all relevant iterations, we have \(\text{rank}(B_k)=k\) and \(\sigma_j >0\) for all \(j\leq k\). Denoting the \(j\)th columns of \(U_k\) and \(V_k\) as \(u^{(k)}_j\) and \(v^{(k)}_j\) respectively, the Tikhonov solution \ref{Eq:TikhNormalEq} can be written similarly to \ref{eq:TikhSoln} as
\begin{equation}\label{eq:TikhSolnGKB} y_{\lambda} = \theta_1\sum_{j=1}^k \frac{\sigma^{(k)}_ju_j^{(k)}(1)}{\left(\sigma^{(k)}_j\right)^2+\lambda^2}v^{(k)}_j,\end{equation}
where \(u^{(k)}_j(1)=e_1^Tu^{(k)}_j\).
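The solution \ref{eq:TikhSolnGKB} can be evaluated directly from the SVD of \(B_k\). The sketch below is an illustration only, using a randomly generated bidiagonal stand-in for \(B_k\), and verifies the SVD form against the normal equations \ref{Eq:TikhNormalEq0}:

```python
import numpy as np

def tikhonov_projected(Bk, theta1, lam):
    """Projected Tikhonov solution via the SVD of B_k, as in eq. (TikhSolnGKB)."""
    k = Bk.shape[1]
    U, s, Vt = np.linalg.svd(Bk)      # s has length k, decreasing
    # theta1 * sigma_j * u_j(1) / (sigma_j^2 + lambda^2), summed over v_j
    coeffs = theta1 * s * U[0, :k] / (s**2 + lam**2)
    return Vt.T @ coeffs

# Bidiagonal stand-in for B_k with positive rho_j and theta_{j+1}
rng = np.random.default_rng(0)
k = 5
Bk = np.zeros((k + 1, k))
Bk[np.arange(k), np.arange(k)] = rng.uniform(0.5, 2.0, k)         # rho_j
Bk[np.arange(1, k + 1), np.arange(k)] = rng.uniform(0.5, 2.0, k)  # theta_{j+1}

theta1, lam = 2.0, 0.1
y_svd = tikhonov_projected(Bk, theta1, lam)

# Check against (B_k^T B_k + lam^2 I) y = B_k^T theta1 e1
e1 = np.zeros(k + 1); e1[0] = theta1
y_ne = np.linalg.solve(Bk.T @ Bk + lam**2 * np.eye(k), Bk.T @ e1)
assert np.allclose(y_svd, y_ne)
```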
By appropriately choosing \(\lambda\) at each iteration we can, in theory, filter out noise added to the least-squares solution of \ref{eq:GKBprob} at higher iterations and thus stabilize the error, making the final solution independent of the stopping iteration. Regularization in hybrid methods may also filter out noise that is not filtered by projection alone. Nevertheless, we argue that this additional filtering has a negligible effect and that choosing \(\lambda\) appropriately at each iteration presents a significant challenge. Specifically, the determination of \(\lambda\) for hybrid methods is usually done using standard procedures originally developed for direct regularization, the most popular of which is the GCV \cite{splines,GCV2}. These procedures assume a noiseless coefficient matrix in \ref{eq:MinTikhGKB}, whereas the hybrid methods project the solution onto a noise-dependent Krylov subspace, thus also contaminating the projected coefficient matrix, similarly to the PLS problem. Therefore, the GCV method is not expected to produce optimal solutions for hybrid methods, as was indeed demonstrated in \cite[Sect. 4]{Chung2008}. The W-GCV method proposed in \cite{Chung2008} attempts to overcome this shortcoming of the GCV by introducing an additional free parameter and choosing it adaptively at each iteration. However, as we show in the numerical examples in Sect. \ref{sec:NumEx}, the W-GCV method still produces suboptimal results in many cases.
In all of our numerical examples we observe that the minimum errors achievable using PLS and hybrid methods are almost identical and therefore any additional filtering achievable by hybrid methods is negligible. To explain this we note that at early iterations, the vectors spanning the Krylov subspace \ref{eq:KrylovSubspace}, and hence also the solutions projected onto them, are typically very smooth and do not contain noise \cite{Hnetynkova2009}. Therefore, little to no regularization is required at this stage and we expect \(\lambda\approx 0\), making the Tikhonov problem \ref{eq:TikhMinProb} equivalent to the least-squares problem \ref{eq:GKBprob}. Only after the basis vectors spanning \ref{eq:KrylovSubspace} become contaminated by noise does the solution require regularization. At this stage the optimal regularization parameter \(\lambda\) increases to some non-negligible, noise-dependent value that keeps the error of the regularized solution approximately constant, while the error of the unregularized PLS solution increases sharply. We demonstrate in the following section, using several numerical examples, that the optimal solution of the hybrid method occurs while the noise that contaminates it is very small and so in most typical cases the optimal solutions of \ref{eq:GKBprob} and \ref{eq:TikhMinProb} are very close to each other. This was also demonstrated in the numerical examples in \cite[Sect. 4]{Chung2015} and \cite{Chung2008}. Therefore, in most practical problems the only significant advantage of hybrid methods over simple GKB stopping criteria is in the ability to stabilize the iterations, making them less sensitive to the stopping iteration. This also implies that having a reliable stopping criterion for PLS obviates the need for a hybrid method in these cases.
\section{Numerical examples}\label{sec:NumEx}
In this section we demonstrate the performance of the proposed method using seven test problems from the \texttt{Matlab} toolbox \texttt{RestoreTools} \cite{Nagy2004}: \texttt{satellite}, \texttt{GaussianBlur440}, \texttt{AtmosphericBlur50}, \texttt{Grain}, \texttt{Text}, \texttt{Text2} and \texttt{VariantMotionBlur\_large}. Each of these problems includes a different blur, \(A\), and a different image, \(x_{true}\), to reconstruct. To generate the data vector \(b\), we form the true data, \(b_{true} = Ax_{true}\) and perturb it with white Gaussian noise of variance \(s^2=\alpha\max|b_{true}|^2\) where the noise level \(\alpha\) takes on three values for each problem \(\alpha\in\{10^{-2},10^{-4},10^{-6}\}\). We apply the inversion procedure to each test problem 100 times, each time using a different noise realization.
To implement our stopping criterion, we set \(h=\lceil\frac{m}{100}\rceil\) and \(\varepsilon=10^{-2}\) in \ref{eq:PicIndCondSVD}, and \(p=5\) and \(\delta=2\times10^{-3}\) in \ref{eq:GKBcond}. As mentioned above, however, our numerical results are robust and would not change significantly over a wide range of \(h\), \(\varepsilon\), \(p\) and \(\delta\). In the numerical tests we compare our stopping criterion with the L-curve criterion \cite{RegParamItr,LCurve,Calvetti2000}, \cite[Chap. 7]{RankDeff}, the NCP method for the PLS problem \cite{Hansen2006,Rust2008}, and the hybrid W-GCV method \cite{Chung2008}. The L-curve method consists of finding the point of maximum curvature on the so-called L-curve, defined as \((||r^{(k)}||,\; ||x^{(k)}||)\), where \(r^{(k)}=b-Ax^{(k)}\) is the residual vector. To do so we use the function \verb!l_corner! from Hansen's \texttt{Regularization Tools} toolbox \cite{RegTools}. Using the L-curve method, we terminate the iterations once the chosen iteration number either stays the same or decreases for \(p=5\) consecutive iterations, signifying that the corner of the L-curve has been found.
The NCP method is based on calculating a whiteness measure of the residual vector \(r^{(k)}\) at each iteration \cite{Hansen2006}. The stopping iteration is chosen as the one at which the residual vector \(r^{(k)}\) most resembles white noise, as follows. The vector \(r^{(k)}\) is reshaped into an \(M\times N\) matrix \(R^{(k)}\) satisfying \(r^{(k)} = \text{vec}\left(R^{(k)}\right)\), and the quantity \(\widehat R = |\text{DFT2}(R^{(k)})|\) is defined to be the absolute value of its two-dimensional Fourier transform. Since \(R^{(k)}\) is a real valued signal, \(\widehat R\) is symmetric about \(q_1 = \lfloor M/2 \rfloor + 1\) and \(q_2 = \lfloor N/2 \rfloor +1\), so that \(\widehat R_{j,k} = \widehat R_{M-j+2,k}\) for \(2\leq j\leq q_1-1\) and \(\widehat R_{j,k} = \widehat R_{j,N-k+2}\) for \(2\leq k\leq q_2-1\). Consequently, only the first quarter of \(\widehat R\), which can be written as \(\widehat T = \widehat R(1:q_1,1:q_2)\) using \texttt{Matlab} notation, is required for the analysis. The vector \(\widehat t\) is then obtained by vectorizing \(\widehat T\) using the elliptical parametrization defined in Sect. \ref{sec:DFTfilter}. The NCP of \(R^{(k)}\) is defined as the vector \(c(R^{(k)})\) of length \(q_1q_2-1\) with components\footnote{In \cite{Hansen2006}, the authors assume the problem is square so that \(M=N\) and \(q_1=q_2\).}
\begin{equation}\label{eq:NCPdefn}
c(R^{(k)})_j = \frac{||\widehat t(2:j+1)||_1}{||\widehat t(2:q_1q_2)||_1},
\end{equation}
where the dc component of \(R^{(k)}\) is not included in the NCP. We note that it is argued in \cite{Rust2008} that the NCP should include the dc component of \(R^{(k)}\) since it captures the deviation from zero mean white noise. However, we found no difference between the two definitions in practice and therefore we shall follow \cite{Hansen2006} and disregard the dc component as in \ref{eq:NCPdefn}. It is shown in \cite{Hansen2006,Rust2008} that for white noise, the NCP should be a straight line from 0 to 1 represented by the vector with components \(s_j = j/(q_1q_2-1)\). Therefore, the whiteness measure is defined as the distance
\begin{equation}
\label{eq:NCPfunc}
N(k) = ||s-c(R^{(k)})||_1.
\end{equation}
The iterations are terminated once \ref{eq:NCPfunc} reaches its global minimum, signifying that the residual vector at the chosen iteration is the closest to white noise. To implement this method, we compute the function \ref{eq:NCPfunc} at each iteration and terminate the iterations once the norm \ref{eq:NCPfunc} increases for \(p=5\) consecutive iterations, just as we do with our own method. We then choose the solution corresponding to the global minimum of the computed \(N(k)\).
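For concreteness, the whiteness measure \ref{eq:NCPfunc} might be computed as in the following sketch. This is our own illustration; in particular, the elliptical vectorization of Sect. \ref{sec:DFTfilter} is approximated here by sorting the quarter-plane entries on their normalized frequency radius:

```python
import numpy as np

def ncp_whiteness(R):
    """Whiteness measure N(k) of a residual image R, sketching
    eqs. (NCPdefn)-(NCPfunc) with the dc component excluded."""
    M, N = R.shape
    q1, q2 = M // 2 + 1, N // 2 + 1
    T = np.abs(np.fft.fft2(R))[:q1, :q2]             # first quarter of |DFT2(R)|
    # Approximate elliptical parametrization: sort by normalized frequency radius
    jj, kk = np.meshgrid(np.arange(q1) / M, np.arange(q2) / N, indexing="ij")
    order = np.argsort((jj**2 + kk**2).ravel(), kind="stable")
    t = T.ravel()[order]                             # t[0] is the dc component
    c = np.cumsum(t[1:]) / np.sum(t[1:])             # NCP, dc excluded
    s = np.arange(1, c.size + 1) / c.size            # straight line (white noise)
    return np.sum(np.abs(s - c))                     # 1-norm distance
```

A white-noise residual yields an NCP close to the straight line (small \(N(k)\)), while a smooth, low-frequency residual accumulates its energy early and yields a much larger value.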
In contrast to the above methods, the W-GCV is a hybrid method and solves the Tikhonov problem \ref{eq:TikhMinProb}. It is based on introducing a free parameter into the GCV criterion, as discussed in Sect. \ref{sec:TikhRegProj} and in \cite{Chung2008}. To implement the W-GCV, we use the \texttt{HyBR\_modified} routine provided in the \texttt{RestoreTools} package \cite{Nagy2004} as \texttt{x = HyBR\_modified(A,B,[],HyBRset('Reorth','on'),1)}. Note that we use the reorthogonalization option of the \verb!HyBR_modified! routine to make a fair comparison to our Algorithm \ref{alg:GKB}, which employs full reorthogonalization.
We measure the quality of a solution by computing its Mean-Square Deviation (MSD), defined as
\begin{equation}\label{eq:MSD} \text{MSD} = \frac{||x_{true} - x||^2}{||x_{true}||^2},\end{equation}
where \(x\) is a solution. We then define the optimal solutions to the PLS problem \ref{eq:GKBprob} and to the hybrid problem \ref{eq:TikhMinProb} as the ones minimizing the MSD to each problem.
We present the results of our simulations as boxplots of the resulting MSD values in \ref{fig:Boxplots_1-3} and \ref{fig:Boxplots_4-6}. The boxplots divide the data into quartiles, with the boxes spanning the middle 50\% of the data, called the interquartile range, and the vertical lines extending from the boxes span 1.5 times the interquartile range above and below it. Anything outside this interval is considered an outlier and is marked with a '+'. Each box also contains a horizontal line marking the median of the data.
Based on the results presented in \ref{fig:Boxplots_1-3} and \ref{fig:Boxplots_4-6} we can make the following observations:
\begin{enumerate}
\item The hyperbolic ordering with the DF method performs similarly to or better than the corresponding elliptic ordering in all examples.
\item The DF method with hyperbolic ordering performs similarly to or outperforms the L-curve, NCP and W-GCV methods in all examples without exception.
\item The DF method with elliptic ordering failed to produce acceptable solutions for the \texttt{Text2} problem with \(\alpha=10^{-4}\). Contrary to the other examples where the distance function \ref{eq:DistNormGKB} with this ordering has a minimum, in this example it has neither a minimum nor even an inflection point and therefore the right stopping iteration could not be found with this ordering.
\item The optimal MSD values for the PLS problem \ref{eq:GKBprob} and the projected Tikhonov problem \ref{eq:TikhMinProb} are almost identical in all examples, as expected from the discussion in \ref{sec:TikhRegProj}.
\end{enumerate}
Overall, we can conclude that the DF criterion with hyperbolic ordering for estimation of optimal stopping iteration is accurate, robust and outperforms state-of-the-art methods.
\begin{figure}[!p]
\centering
\hspace*{\fill}
\begin{subfigure}[b]{0.45\textwidth}
\centering
\includegraphics[width=\textwidth]{MaskCirc.pdf}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.45\textwidth}
\centering
\includegraphics[width=\textwidth]{MaskKron.pdf}
\end{subfigure}
\hspace*{\fill}
\caption{Effect of the filtering procedure introduced in Sect. \ref{sec:DFTfilter} on an image of size \(256\times256\) by the elliptic ordering of \cite{Hansen2006} and the hyperbolic ordering \ref{eq:freqVec} in Fourier domain. The Picard parameter for both methods is \(k_0=10^4\). The zero frequency component is placed at the center of the image.}\label{fig:Masks}
\end{figure}
\begin{figure}[!p]
\centering
\includegraphics[width=\textwidth]{box_1-4.pdf}
\caption{Boxplots of the MSD values obtained for the PLS problem \ref{eq:GKBprob} and the Tikhonov regularized problem \ref{eq:TikhMinProb} with the methods: (1) DF with hyperbolic ordering (DF-h), (2) DF with elliptic ordering (DF-e), (3) L-curve, (4) NCP, (5) W-GCV, (6) minimum MSD for PLS problem (PLS\(_{OPT}\)), (7) minimum MSD solution for Tikhonov problem (Tikh\(_{OPT}\)). The problems presented are \emph{First row}: \texttt{satellite}; \emph{Second row}: \texttt{GaussianBlur440}; \emph{Third row}: \texttt{AtmosphericBlur50}; \emph{Fourth row}: \texttt{Grain}. The noise levels are \emph{First column}: \(\alpha=10^{-2}\); \emph{Second column}: \(\alpha=10^{-4}\); \emph{Third column}: \(\alpha=10^{-6}\).}\label{fig:Boxplots_1-3}
\end{figure}
\begin{figure}[!p]
\centering
\includegraphics[width=\textwidth]{box_4-7.pdf}
\caption{Boxplots of the MSD values obtained for the PLS problem \ref{eq:GKBprob} and the Tikhonov regularized problem \ref{eq:TikhMinProb} with the methods: (1) DF with hyperbolic ordering (DF-h), (2) DF with elliptic ordering (DF-e), (3) L-curve, (4) NCP, (5) W-GCV, (6) minimum MSD for PLS problem (PLS\(_{OPT}\)), (7) minimum MSD solution for Tikhonov problem (Tikh\(_{OPT}\)). The problems presented are \emph{First row}: \texttt{Text}; \emph{Second row}: \texttt{Text2}; \emph{Third row}: \texttt{VariantMotionBlur\_large}. The noise levels are \emph{First column}: \(\alpha=10^{-2}\); \emph{Second column}: \(\alpha=10^{-4}\); \emph{Third column}: \(\alpha=10^{-6}\).}\label{fig:Boxplots_4-6}
\end{figure}
\bibliographystyle{plain}
\section{Introduction}
The classical Hardy inequality in one space dimension states that
\begin{equation}
\label{Hardy_1D}
\int_0^{\infty} |u'(t)|^p \, dt \ge \( \frac{p-1}{p} \)^p \int_0^{\infty} \frac{|u(t)|^p}{t^p} \, dt
\end{equation}
holds for all $u \in W^{1,p}_0(0, +\infty)$ where $1 < p < \infty$.
This scaling invariant inequality is now very classical and there are wonderful treatises \cite{Ghoussoub-Moradifam(book)}, \cite{Mazya}, \cite{Opic-Kufner} on further generalizations of the inequality \eqref{Hardy_1D}.
It is also known that the constant $\( \frac{p-1}{p} \)^p$ is best possible and it is not achieved by any function in $W^{1,p}_0(0,+\infty)$.
The inequality \eqref{Hardy_1D} has been generalized to higher dimensions in two directions:
one is to replace the function $t$ in the right-hand side by the distance to the origin,
and the other is to replace it by the distance to the boundary.
For the former direction, let $\Omega \subset \re^N$ ($N \ge 2$) be a domain with $0 \in \Omega$, and let $p \geq 1$.
Then the classical $L^p$-Hardy inequality states that
\begin{equation}
\label{H_p}
\intO |\nabla u|^p \, dx \ge \left| \frac{N-p}{p} \right|^p \intO \frac{|u|^p}{|x|^p} \, dx
\end{equation}
holds for all $u \in W^{1,p}_0(\Omega)$ when $1 \le p < N$, and
for all $u \in W^{1,p}_0(\Omega \setminus \{ 0 \})$ when $p > N$.
It is known that for $p > 1$, the best constant $|\frac{N-p}{p}|^p$ is never attained in $W^{1,p}_0(\Omega)$ when $p < N$, or in $W^{1,p}_0(\Omega \setminus \{ 0 \})$ when $p > N$, respectively.
After the pioneering work of Brezis and V\'{a}zquez \cite{Brezis-Vazquez}, which showed that the inequality can be improved on bounded domains when $p < N$,
there are many papers that treat the improvements of the inequality (\ref{H_p})
(see \cite{ACR}, \cite{BFT1}, \cite{BFT2}, \cite{Cazacu}, \cite{DPP}, \cite{Filippas-Tertikas}, \cite{GGM}, \cite{Sano-TF},
the recent book \cite{Ghoussoub-Moradifam(book)} and the references therein.)
For the latter direction,
let $\Omega \subset \re^N$ be an open set with Lipschitz boundary and define $d(x) = {\rm dist}(x, \pd\Omega)$.
Then, a version of Hardy inequalities, called ``geometric type", states that for any $p > 1$,
there exists $c_p(\Omega) > 0$ such that the inequality
\begin{equation}
\label{GH_p}
\intO |\nabla u|^p \, dx \ge c_p(\Omega) \intO \frac{|u|^p}{(d(x))^p} \, dx
\end{equation}
holds for all $u \in W^{1,p}_0(\Omega)$.
For this inequality, refer to \cite{Ancona}, \cite{BFT1}, \cite{Brezis-Marcus}, \cite{DPP}, \cite{LP}, \cite{Lehrback}, \cite{MS(NA)}, \cite{Tidblom(JFA)}, \cite{Tidblom(PAMS)},
the recent book \cite{BEL(book)} and the references therein.
In \cite{MS(NA)}, it is proved that $c_p(\Omega) = \( \frac{p-1}{p} \)^p$ is the best constant on any convex domain $\Omega$,
that is,
\begin{equation}
\label{hq}
c_p(\Omega) = \inf_{u \in W^{1,p}_0(\Omega), u \not\equiv 0}
\frac{\intO |\nabla u |^p dx}{\intO \frac{|u(x)|^p}{(d(x))^p} dx} = \( \frac{p-1}{p} \)^p.
\end{equation}
In \cite{BFT1}, \cite{Tidblom(PAMS)}, the authors obtained an additional extra term on the right-hand side of (\ref{GH_p}),
which means that the best constant $c_p(\Omega)$ is never attained on any convex domain $\Omega$.
When $\Omega$ is the half-space $\re^N_{+} = \{ x = (x_1, \cdots, x_N) \,|\, x_N > 0 \}$, the inequality (\ref{GH_p}) has the form
\begin{equation}
\label{Hardy_half}
\int_{\re^N_{+}} |\nabla u|^p \, dx \ge \( \frac{p-1}{p} \)^p \int_{\re^N_{+}} \frac{|u|^p}{x_N^p} \, dx
\end{equation}
and the best constant $\( \frac{p-1}{p} \)^p$ is never attained by functions in $W^{1,p}_0(\re^N_{+})$.
On the other hand, let $\Omega$ be a bounded domain with $C^{1, \gamma}$ boundary for some $\gamma \in (0,1)$.
Then it is proved by Marcus, Mizel, and Pinchover in \cite{MMP} that
there exists a minimizer of $c_2(\Omega)$ if and only if $c_2(\Omega) < 1/4$.
See also \cite{MMP}, \cite{Marcus-Shafrir}, \cite{LP} for the corresponding results for $1 < p < \infty$.
So the compactness of any minimizing sequence fails only at the
bottom level $\( \frac{p-1}{p} \)^p.$
In the critical case $p = N$, the weight $|x|^{-N}$ is too singular for the same type of inequality as (\ref{H_p}) to hold true for functions in $W^{1,N}_0(\Omega)$.
Instead of (\ref{H_p}), it is known that the following {\it Hardy inequality in a limiting case}
\begin{equation}
\label{Hardy_N}
\intO |\nabla u |^N dx \ge \( \frac{N-1}{N} \)^N \intO \frac{|u(x)|^N}{|x|^N \( \log \frac{R}{|x|} \)^N} dx
\end{equation}
holds true for all $u \in W^{1,N}_0(\Omega)$ where $R = \sup_{x \in \Omega} |x|$;
refer to \cite{Leray}, \cite{Ladyzhenskaya}, \cite{DP}, \cite{Ioku-Ishiwata}, \cite{TF} and references therein.
Note that the additional $\log$ term weakens the singularity of $|x|^{-N}$ at the origin,
however, the weight function
\[
W_R(x) = \frac{1}{|x|^N \( \log \frac{R}{|x|} \)^N}
\]
becomes singular also on the boundary $\pd\Omega$ since $R = \sup_{x \in \Omega} |x|$.
Indeed, since
\begin{equation}
\label{Taylor}
|x|^N \( \log \frac{R}{|x|} \)^N = (R-|x|)^N + o((R-|x|)^N)
\end{equation}
as $|x| \to R$, $W_R$ has an effect similar to that of $(1/d(x))^N$ near the boundary.
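The expansion (\ref{Taylor}) is a direct consequence of the Taylor expansion of the logarithm; for the reader's convenience:

```latex
\begin{align*}
\log \frac{R}{|x|}
  &= \log\left(1 + \frac{R-|x|}{|x|}\right)
   = \frac{R-|x|}{|x|}\left(1 + O(R-|x|)\right)
   \quad \text{as } |x| \to R, \\
|x| \log \frac{R}{|x|}
  &= (R-|x|)\left(1 + O(R-|x|)\right),
\end{align*}
```

and raising both sides to the $N$th power yields (\ref{Taylor}).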
In this sense, the critical Hardy inequality (\ref{Hardy_N}) has both features of the inequalities \eqref{H_p} and \eqref{GH_p}.
Note that (\ref{Hardy_N}) is invariant under the scaling
\begin{equation}
\label{scaling_N}
u_{\la}(x) = \la^{-\frac{N-1}{N}} u\( \( \frac{|x|}{R} \)^{\la-1} x \) \quad \text{for} \, \la > 0,
\end{equation}
which is different from the usual scaling $u_{\la}(x) = \la^{\frac{N-p}{p}} u(\la x)$ for (\ref{H_p}) when $\Omega = \re^N$ and $p < N$.
(Recently, however, a relation between the two scaling transformations has been obtained; see \cite{Sano-TF}.)
Let $C_N(\Omega)$ be the best constant of the inequality (\ref{Hardy_N}):
\begin{equation}
\label{CHN}
C_N(\Omega) = \inf_{u \in W^{1,N}_0(\Omega), u \not\equiv 0}
\frac{\intO |\nabla u |^N dx}{\intO \frac{|u(x)|^N}{|x|^N \( \log \frac{R}{|x|} \)^N} dx}.
\end{equation}
By this definition and \eqref{Hardy_N}, we see $C_N(\Omega) \ge \( \frac{N-1}{N} \)^N$ for any bounded domain $\Omega \subset B_R$ with $R = \sup_{x \in \Omega} |x|$.
Here and henceforth, $B_R$ will denote the $N$-dimensional ball with radius $R$ and center $0$.
In \cite{Ioku-Ishiwata}, the authors proved that $C_N(B_R) = \( \frac{N-1}{N} \)^N$ and $C_N(B_R)$ is never attained by any function in $W^{1,N}_0(B_R)$.
See also \cite{DFP}, \cite{DP}.
Let us recall the arguments in \cite{Ioku-Ishiwata}.
First, the authors of \cite{Ioku-Ishiwata} prove that, if the infimum $C_N(B_R)$ is attained by a radially symmetric function $u \in W^{1,N}_{0, rad}(B_R)$,
then $u \in C^1(B_R \setminus \{ 0 \})$, $u > 0$, and $u$ is unique up to multiplication by positive constants.
By using these facts and the scaling invariance (\ref{scaling_N}), the authors prove that $C_N(B_R)$ is not attained by radially symmetric functions.
Indeed, by the scaling invariance (\ref{scaling_N}) and the uniqueness up to multiplication of positive constants,
the possible radially symmetric minimizer has the form $C (\log \frac{R}{|x|})^{\frac{N-1}{N}}$ which is not in $W^{1,N}_0(B_R)$.
Finally, they prove that if there exists a minimizer of $C_N(B_R)$, then there exists also a radially symmetric minimizer.
The argument of this part is elementary and the proof of the non-attainability of $C_N(B_R)$ is established.
The main purpose of this paper is to study the (non-)attainability of the infimum $C_N(\Omega)$ for more general domains $\Omega \subset B_R$.
Some new phenomena will be shown in this paper.
We first note that if $C_N(\Omega) = \( \frac{N-1}{N} \)^N$, $C_N(\Omega)$ is not attained.
In fact, if $C_N(\Omega)$ is attained by an element $u \in W_0^{1,N}(\Omega),$ by a trivial extension of $u$ as an element in $W_0^{1,N}(B_R),$
$C_N(B_R) = \( \frac{N-1}{N} \)^N $ is attained by $u$; this contradicts the result in \cite{Ioku-Ishiwata} that $ C_N(B_R) $ is not attained.
In what follows, we do not necessarily assume that $0 \in \Omega$.
Since the weight function $W_R(x) = (|x| (\log \frac{R}{|x|} ))^{-N}$ itself depends on the geometric quantity $R$,
it is not clear whether $C_N(\Omega)$ has the same value as $C_N(B_R)$ for all domains $\Omega \subset B_R$ or not.
Since $W_R$ becomes unbounded around the origin and also around the set $|x| = R$,
it is plausible that minimizing sequences for $C_N(\Omega)$ tend to concentrate on the origin or on the boundary portion $\pd \Omega \cap \pd B_R$
in order to minimize the quotient
\[
Q_R(u) = \frac{\intO |\nabla u |^N dx}{\intO W_R(x) |u(x)|^N dx}.
\]
This suggests that $C_N(\Omega) = C_N(B_R)$ and that $C_N(\Omega)$ is not attained,
if the origin is an interior point of $\Omega$, or if $\Omega$ has a smooth boundary portion at distance $R$ from the origin
(just like the ball $B_R$).
We will prove later that these intuitions are correct, see Theorem \ref{theorem-origin} and Theorem \ref{theorem-smooth}.
However, when we treat a domain $\Omega \subset B_R$ with $R = \sup_{x \in \Omega} |x|$
which neither contains the origin in its interior nor has a smooth boundary portion $\pd\Omega \cap \pd B_R$,
the situation is rather different.
Actually, we provide a sufficient condition on $\Omega \subset B_R$ which assures that $C_N(\Omega) > C_N(B_R)$ (Theorem \ref{theorem-inequality}).
Moreover, we prove that a stronger condition on $\Omega$ than the sufficient condition assures that $C_N(\Omega)$ is attained (Theorem \ref{theorem-existence}).
Finally, we provide an example of domain in $\re^2$ on which $C_2(\Omega) > C_2(B_R) = 1/4$ and $C_2(\Omega)$ is not attained (Theorem \ref{theorem-nonexistence}).
This is quite a contrast to the result for \eqref{hq} in \cite{MMP}, which says that
if $c_2(\Omega)$ is strictly less than the critical number $\frac 14,$ the infimum $c_2(\Omega)$ is attained.
The organization of this paper is as follows:
In \S 2, we prove Theorem \ref{theorem-origin}, which says that if $0 \in \Omega$, then
$C_N(\Omega) =\( \frac{N-1}{N} \)^N$ and the infimum is not attained.
In \S 3, we prove Theorem \ref{theorem-smooth}, which says that if $\partial B_R \cap \partial \Omega$
enjoys some regularity, then $C_N(\Omega) =\( \frac{N-1}{N} \)^N$ and the infimum is not attained.
In \S 4, we prove Theorem \ref{theorem-inequality}, which says that a strict inequality
$C_N(\Omega) > \( \frac{N-1}{N} \)^N$ holds under some condition on $\Omega$
and Theorem \ref{theorem-existence}, which says that under a stronger condition than the one in Theorem \ref{theorem-inequality}, the infimum is attained.
Finally, in \S 5, we prove Theorem \ref{theorem-nonexistence}, which says that the condition for the existence of a minimizer in Theorem \ref{theorem-existence} is optimal.
Now, we fix some notation and conventions.
For a bounded domain $\Omega \subset \re^N$,
the letter $R$ will be used to denote $R = \sup_{x \in \Omega} |x|$ throughout the paper.
$B_R$ will denote the $N$-dimensional ball with radius $R$ and center $0$.
The surface area $\int_{S^{N-1}} dS_{\omega}$ of the $(N-1)$ dimensional unit sphere $S^{N-1}$ in $\re^N$ will be denoted by $\omega_{N-1}$.
$S^{N-1}(r)$ will denote the sphere of radius $r$ with center $0$.
Finally, the letter $C$ may vary from line to line.
\begin{section}{Hardy's inequality for the case $0 \in \Omega$}
In this section, we treat the case when $\Omega \subset B_R$ has the origin as an interior point of $\Omega$.
In this case, we prove the following theorem.
\begin{theorem}
\label{theorem-origin}
For any bounded domain $\Omega \subset \re^N$ with $0 \in \Omega$ and $R = \sup_{x \in \Omega} |x|$,
\begin{align*}
C_N(\Omega) = C_N(B_R) = \( \frac{N-1}{N} \)^N,
\end{align*}
and the infimum $C_N(\Omega)$ is not attained.
\end{theorem}
\begin{proof}
Note that by the definition of $R$, we have $\Omega \subset B_R$.
By a trivial extension of a function $u \in W^{1,N}_0(\Omega)$ on $B_R$ by $u(x) = 0$ for $x \in B_R \setminus \Omega$,
we see $W^{1,N}_0(\Omega) \subset W^{1,N}_0(B_R)$ and thus
\begin{equation}
\label{C_N_lower}
C_N(\Omega) \ge C_N(B_R) = \( \frac{N-1}{N} \)^N.
\end{equation}
For the fact $C_N(B_R) = \( \frac{N-1}{N} \)^N$, we refer to \cite{Ioku-Ishiwata}.
In \cite{Ioku-Ishiwata}, the authors prove this fact by using the test functions
\begin{align*}
\psi_{\beta} (x) = \begin{cases}
1, &\quad 0 \le |x| \le \frac{R}{e}, \\
\( \log \frac{R}{|x|} \)^\beta, &\quad \frac{R}{e} \le |x| \le R
\end{cases}
\end{align*}
for $\beta > \frac{N-1}{N}$.
Note that $\{ \psi_{\beta} \}$ will concentrate on the boundary $\pd B_R$ when $\beta \downarrow \frac{N-1}{N}$.
In our case, since $0 \in \Omega$ is an interior point, there exists a small $c \in (0,1)$ such that $B_{cR}(0) \subset \Omega$.
For $0 < \alpha < \frac{N-1}{N}$, we define a function
\begin{align*}
\phia (x) = \begin{cases}
\( \log \frac{R}{|x|} \)^{\alpha}, &\quad |x| \le \frac{cR}{2}, \\
\( \log \frac{2R}{c} \)^{\alpha}(2-\frac{2|x|}{cR}) , &\quad \frac{cR}{2} \le |x| \le cR, \\
0, &\quad cR \le |x|, \, \text{and} \; x \in \Omega.
\end{cases}
\end{align*}
Then we see that
\begin{align*}
A \equiv &\intO |\nabla \phia|^N dx
= \omega_{N-1} \int_0^{\frac{cR}{2}} \left| \alpha \( \log \frac{R}{r} \)^{\alpha-1} \( \frac{-1}{r} \) \right|^N r^{N-1} dr + O(1) \\
&= \omega_{N-1} \alpha^N \int_0^{\frac{cR}{2}} \( \log \frac{R}{r} \)^{N(\alpha-1)} \frac{1}{r} \; dr + O(1) \\
&= \omega_{N-1} \alpha^N \left[ \frac{-1}{N(\alpha-1) + 1} \( \log \frac{R}{r} \)^{N(\alpha-1) + 1} \right]_0^{\frac{cR}{2}} + O(1) \\
&= \omega_{N-1} \alpha^N \( \frac{-1}{N(\alpha-1) + 1} \) \( \log \frac{2}{c} \)^{N(\alpha-1)+1} + O(1).
\end{align*}
Since $\alpha < \frac{N-1}{N}$, we have $N(\alpha-1) + 1 < 0$.
Thus $|\nabla \phia|^N$ is integrable near the origin and $\phia \in W^{1,N}_0(\Omega)$ for any $\alpha \in (0, \frac{N-1}{N})$.
Also we see that
\begin{align*}
B \equiv &\intO \frac{|\phia(x)|^N}{|x|^N \( \log \frac{R}{|x|} \)^N} dx
= \omega_{N-1} \int_0^{\frac{cR}{2}} \frac{(\log \frac{R}{r})^{\alpha N}}{r^N (\log \frac{R}{r})^N} r^{N-1} dr + O(1) \\
&= \omega_{N-1} \int_0^{\frac{cR}{2}} \( \log \frac{R}{r} \)^{N\alpha - N} \frac{1}{r} \; dr + O(1) \\
&= \omega_{N-1} \( \frac{-1}{N(\alpha-1) + 1} \) \( \log \frac{2}{c} \)^{N(\alpha-1)+1} + O(1).
\end{align*}
Therefore, setting $K(\alpha) \equiv \( \frac{-1}{N(\alpha-1) + 1} \) \( \log \frac{2}{c} \)^{N(\alpha-1)+1}$, we conclude that
\begin{align*}
\frac{A}{B} = \frac{\omega_{N-1} \alpha^N K(\alpha) + O(1)}{\omega_{N-1} K(\alpha) + O(1)}
\to \( \frac{N-1}{N} \)^N \quad \text{as} \; \alpha \uparrow \frac{N-1}{N},
\end{align*}
since $K(\alpha) \to +\infty$ as $\alpha \uparrow \frac{N-1}{N}$.
This proves that
\[
C_N(\Omega) = \( \frac{N-1}{N} \)^N,
\]
thus the infimum $C_N(\Omega)$ is not attained; see the Introduction.
\end{proof}
\end{section}
\begin{section}{Hardy's inequality for smooth domains}
In this section, we prove that $C_N(\Omega)$ equals $\( \frac{N-1}{N} \)^N$ if the domain has a smooth boundary portion on $\pd B_R$.
As for the smoothness of the boundary, the interior sphere condition is enough to obtain the result.
Here we say that a point $x_0 \in \pd\Omega \cap \pd B_R$ satisfies an {\it interior sphere condition} if there is an open ball $B \subset \Omega$
such that $x_0 \in \pd B$.
The idea here is to construct a (non-convergent) minimizing sequence $\{ u_n \}$ for $C_N(\Omega)$ for which the value of $Q_R(u_n)$ goes to $\( \frac{N-1}{N} \)^N$,
by modifying a minimizing sequence for the best constant of Hardy's inequality on the half-space \eqref{Hardy_half} when $p = N$:
\begin{equation}
\label{Hardy_half_inf}
\inf_{u \in C_0^\infty(\re^N_{+}) \setminus \{0\}} \frac{\int_{\re^N_{+}} |\nabla u|^N dx}{\int_{\re^N_{+}} |\frac{u}{x_N}|^N dx} = \( \frac{N-1}{N} \)^N.
\end{equation}
This is possible since the weight function $W_R(x)$ behaves like $(1/d(x))^N$ near the smooth boundary portion $\pd\Omega \cap \pd B_R$.
\begin{theorem}
\label{theorem-smooth}
For a bounded domain $\Omega$, we assume that there exists a point $x_0 \in \pd\Omega \cap \pd B_R$ satisfying an interior sphere condition.
Then
\[
C_N(\Omega) = \( \frac{N-1}{N} \)^N
\]
and the infimum $C_N(\Omega)$ is not attained.
\end{theorem}
\begin{proof}
The following proof is inspired by \cite{MMP}.
We write $x = (x_1, \cdots, x_{N-1}, x_N) = (x^{\prime}, x_N)$ for $x \in \re^N_{+}$.
Fix an arbitrary $\e > 0$.
By \eqref{Hardy_half_inf}, we may take $v_\e \in C_0^\infty(\re^N_+)$ such that
\[
\int_{\re^N_+} \left|\frac{v_\e}{x_N} \right|^N dx = 1, \quad \text{and} \quad \int_{\re^N_+} |\nabla v_\e|^N dx \le \(\frac{N-1}{N} \)^N + \e.
\]
Since $\textrm{supp}(v_\e)$ is compact, we may assume that
\[
\textrm{supp}(v_\e) \subset \{x = (x^{\prime}, x_N) \in \re^N_+ \ | \ |x^{\prime}|^2 < A x_N, \ x_N < B \}
\]
if we take $A,B > 0$ sufficiently large depending on $\e$.
We regard $v_{\e}$ as defined on the whole of $\re^N_{+}$ by setting it to be $0$ outside its support.
For $l \in \N$, we define $v_\e^l(x) = v_\e(l x)$.
Note that for each $l > 0$, we have
\[
\int_{\re^N_+} |\nabla v^l_\e|^N dx = \int_{\re^N_+} |\nabla v_\e|^N dx, \quad \int_{\re^N_+} \left|\frac{v^l_\e}{x_N} \right|^N dx = \int_{\re^N_+} \left|\frac{v_\e}{x_N} \right|^N dx
\]
and
\[
\textrm{supp}(v^l_\e) \subset \left\{(x^{\prime}, x_N) \in \re^N_+ \ | \ |x^{\prime}|^2 < \frac{A}{l} x_N, \ x_N < \frac{B}{l} \right\}.
\]
By a rotation, we may assume that $x_0 = (-R) e_N \in \partial \Omega \cap \pd B_R$ satisfies an interior sphere condition,
where $e_N = (0, \cdots, 0, 1)$.
Then we see that for some $A^{\prime}$, $B^{\prime} > 0$,
\[
\{(x^{\prime}, x_N) \in \re^N_+ \ | \ |x^{\prime}|^2 < A^{\prime} x_N, \ x_N < B^{\prime} \} \subset \Omega + R e_N.
\]
Since \eqref{Taylor} holds for small $R - |x|$, we see that
\begin{equation}
\label{S1}
|x|^N \( \log\frac{R}{|x|} \)^N \le (x_N + R)^N + o((x_N +R)^N)
\end{equation}
for $x \in \Omega$ with small $x_N+R$.
Now we define
\[
u_\e^l(x) \equiv v_\e^l(x + R e_N)
\]
for $x \in \Omega$.
Then, for large $l > 0$, we see that
$u_\e^l \in C_0^\infty(\Omega)$ and
\[
{\rm supp}(u_\e^l) \subset \Omega \cap \{x \in B_R \ | \ x_N+R < B/l\}.
\]
Now \eqref{S1} implies that
\begin{align*}
\intO \frac{|u_\e^l(x)|^N}{|x|^N\big (\log\frac{R}{|x|}\big )^N} dx \geq \intO \frac{|u_\e^l(x)|^N}{(x_N + R)^N} dx + o_l(1) = \int_{\Omega + Re_N} \frac{|v_\e^l(y)|^N}{|y_N|^N} dy +o_l(1)
\end{align*}
where $o_l(1) \to 0$ as $l \to \infty$,
and
\begin{align*}
\intO \big |\nabla u_{\e}^l(x) \big|^N dx = \int_{\Omega + Re_N} \big |\nabla v_{\e}^l(y) \big|^N dy \leq \int_{\re^N_{+}} \big |\nabla v_{\e}^l(y) \big|^N dy.
\end{align*}
Thus we have
\begin{align*}
\frac{\intO \big |\nabla u_{\e}^l(x) \big|^N dx}{\intO \frac{|u_\e^l(x)|^N}{|x|^N\big (\log\frac{R}{|x|}\big )^N} dx}
\le
\frac{\int_{\re^N_+} \big |\nabla v_\e^l\big|^N dy}{\int_{\re^N_+} \frac{|v_\e^l(y)|^N}{|y_N|^N} dy} + o_l(1)
\le \(\frac{N-1}{N} \)^N + \e + o_l(1).
\end{align*}
This implies that
\[
\inf_{u \in W_0^{1,N}(\Omega) \setminus \{0\}} \frac{\intO \big |\nabla u \big|^N dx}{\intO \frac{|u(x)|^N}{|x|^N \(\log\frac{R}{|x|}\)^N} dx}
\le \( \frac{N-1}{N} \)^N.
\]
Since $C_N(\Omega) \ge C_N(B_R) = \(\frac{N-1}{N} \)^N$ by \eqref{C_N_lower}, we conclude the equality.
This again implies that the infimum $C_N(\Omega)$ is not attained.
\end{proof}
\end{section}
\begin{section}{Hardy's inequality for nonsmooth domains}
In this section, we first provide a sufficient condition that assures the strict inequality $C_N(\Omega) > C_N(B_R)$ for bounded domains $\Omega$ with $R = \sup_{x \in \Omega} |x|$.
We begin by recalling the notion of spherically symmetric rearrangement.
Let $B_r(p,s)$ denote the geodesic open ball in $S^{N-1}(r)$ with center $p \in S^{N-1}(r)$ and geodesic radius $s$.
Then for each $r \in (0,R)$, there exists a constant $a(r) \ge 0$ such that
the $(N-1)$-dimensional measure of the geodesic open ball $B_r(r e_N, a(r))$ with center $r e_N = (0,\cdots,0,r)$ and radius $a(r)$ equals $\mathcal{H}^{N-1}(\Omega \cap S^{N-1}(r))$,
where $\mathcal{H}^{N-1}$ denotes the $(N-1)$-dimensional Hausdorff measure.
Define the {\it spherically symmetric rearrangement} $\Omega^*$ of a domain $\Omega \subset B_R$ by
\[
\Omega^* \equiv \bigcup_{r \in (0,R)} B_r(r e_N, a(r))
\]
and the {\it spherically symmetric rearrangement} $u^*$ of a function $u$ on $\Omega$ by
\[
u^*(x) \equiv \sup \{ t \in \re \, | \, x \in \{ y \in \Omega \, | \, u(y) \ge t \}^{*} \}, \quad x \in \Omega^*,
\]
see Kawohl \cite{Kawohl} p.17.
Note that this is an equimeasurable rearrangement with $u^*$ rotationally symmetric around the positive $x_N$-axis,
and that the P\'olya-Szeg\"o type inequality
\[
\intO |\nabla u|^p \, dx \ge \int_{\Omega^*} |\nabla u^*|^p \, dx
\]
holds for $u \in W^{1,p}_0(\Omega)$ with $p > 1$, as does the Hardy-Littlewood inequality
\[
\intO u(x) v(x) \, dx \le \int_{\Omega^*} u^*(x) v^*(x) \, dx
\]
for nonnegative functions $u, v$ on $\Omega$, see \cite[pages 21, 23, and 26]{Kawohl}.
In the sequel, we use the {\it Poincar\'e inequality on a subdomain of spheres} of the following form:
\begin{proposition}
\label{prop-Poincare}
Let $S^n$ denote an $n$-dimensional unit sphere and $U \subset S^n$ be a relatively compact open set in $S^n$.
For any $1 \le p < \infty$,
there exists $C > 0$ depending on $p$ and $n$ such that the inequality
\[
\int_U | \nabla_{S^n} u |^p dS_{\omega} \ge C | U |^{-p/n} \int_U |u|^p dS_{\omega}
\]
holds for any $u \in W^{1,p}_0(U)$.
Here $|U|$ denotes the $n$-dimensional measure of $U \subset S^n$.
\end{proposition}
\begin{proof}
The inequality $\int_U | \nabla_{S^n} u |^p dS_{\omega} \ge C(U, p) \int_U |u|^p dS_{\omega}$ holds; see, for example, \cite{Saloff-Coste} p.~86.
The constant $C(U, p)$ is bounded from below by the first Dirichlet eigenvalue $\lambda_p(U)$ of the $p$-Laplacian $-\Delta_p$ on the sphere,
and the estimate
\[
\la_p(U) \ge C(n, p) |U|^{-p/n}
\]
can be seen, for example, in \cite{Lieb} or \cite{Kawohl-Fridman} when the ambient space is $\re^n$.
Indeed, the same lower bound for the first Dirichlet eigenvalue can be obtained on spheres, as follows.
By spherically symmetric rearrangement, we have the Faber-Krahn type inequality
\[
\la_p(U) \ge \la_p(U^*)
\]
where $U^* \subset S^n$ is a geodesic ball with $|U| = |U^*|$.
Also we have a scaling property $\la_p(r U) = r^{-p} \la_p(U)$ for the first eigenvalue of the $p$-Laplacian.
Since $U^* = r B_1$ for some $r > 0$ where $B_1$ denotes the geodesic ball of radius $1$, we have $|U| = |U^*| = r^n |B_1|$,
which implies $r = (|U|/|B_1|)^{1/n}$.
Thus we have
\[
\la_p(U) \ge \la_p(U^*) = \la_p(r B_1) = r^{-p} \la_p(B_1) = \(\frac{|U|}{|B_1|}\)^{-p/n} \la_p(B_1).
\]
\end{proof}
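As a simple illustration of Proposition \ref{prop-Poincare} (not needed in the sequel), consider the case $n = 1$, $p = 2$: for an open arc $U \subset S^1$ of length $|U|$, the first Dirichlet eigenvalue of $-\frac{d^2}{d\theta^2}$ on $U$ is $\( \pi/|U| \)^2$, so
\[
\int_U |\nabla_{S^1} u|^2 \, dS_{\omega} \ge \( \frac{\pi}{|U|} \)^2 \int_U |u|^2 \, dS_{\omega} \quad \text{for} \; u \in W^{1,2}_0(U),
\]
which exhibits the scaling $|U|^{-p/n}$ of the proposition.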
Define
\begin{equation}
\label{m(r)}
m(r) = \mathcal{H}^{N-1}( \{ x \in \Omega \, | \, |x| = r \}) = \mathcal{H}^{N-1}(\Omega \cap S^{N-1}(r))
\end{equation}
for $r \in (0, R)$.
Then we have the following.
\begin{theorem}
\label{theorem-inequality}
If
\begin{equation}
\label{m_0}
m_0 \equiv \limsup_{r \to 0} \, m(r)/r^{N-1} < \omega_{N-1}
\end{equation}
and
\begin{equation}
\label{m_R_finite}
m_R \equiv \limsup_{r \to R} \, m(r)/(R-r)^{N-1} < \infty,
\end{equation}
it holds that
\[
C_N(\Omega) > \( \frac{N-1}{N} \)^N.
\]
\end{theorem}
\begin{proof}
If $0 \in \Omega$, then $m(r) = \omega_{N-1} r^{N-1}$ for any small $r > 0$.
Thus under the assumption \eqref{m_0}, the origin cannot be an interior point of $\Omega$.
Assume to the contrary that $C_N(\Omega) = \(\frac{N-1}{N} \)^N$; then there exists a sequence $\{\phi_n\}_{n \in \N}$ in $C_0^\infty(\Omega)\setminus \{0\}$ such that
\[
\lim_{n \to \infty} \frac{\intO \big|\nabla \phi_n \big|^N dx}{\intO \frac{|\phi_n(x)|^N}{|x|^N \(\log\frac{R}{|x|} \)^N} dx} = C_N(\Omega) = \(\frac{N-1}{N} \)^N.
\]
Let $\phi_n^*$ be the spherical symmetric rearrangement of $\phi_n$.
Then by the above remarks, it follows that
\[
\lim_{n \to \infty} \frac{\int_{\Omega^*} \big |\nabla \phi^*_n \big|^N dx}{\int_{\Omega^*} \frac{|\phi^*_n(x)|^N}{|x|^N \( \log\frac{R}{|x|} \)^N} dx}
= C_N(\Omega^*) = \(\frac{N-1}{N} \)^N.
\]
Since $\textrm{supp}(\phi_n^*)$ is compact in $\Omega^*$,
we find positive constants $R_n$ and $\delta_n$ with $\lim_{n \to \infty}R_n$ $=$ $R$ and $\lim_{n \to \infty} \delta_n = 0$ such that
$\textrm{supp}(\phi^*_n) \subset B_{R_n} \setminus \ol{B_{\delta_n}}$.
We define
\[
\Omega^*_n \equiv \Omega^* \cap (B_{R_n} \setminus \ol{B_{\delta_n}}).
\]
Since the weight function $W_R$ is bounded from above and below by positive constants on $\Omega_n^*$,
there exists a minimizer $\psi_n \in W^{1,N}_0(\Omega^*_n)$ of
\[
c_n \equiv \inf \Big \{ \int_{\Omega^*_n} \big |\nabla \psi \big|^N dx \ \Big | \
\int_{\Omega^*_n} \frac{|\psi(x)|^N}{|x|^N \( \log\frac{R}{|x|} \)^N} dx = 1, \, \psi \in W_0^{1,N}(\Omega^*_n) \Big \}.
\]
We may assume $\psi_n \ge 0$, $\psi_n$ satisfies
\[
\textrm{div}(|\nabla \psi_n|^{N-2}\nabla \psi_n) + c_n \frac{\psi_n(x)^{N-1}}{|x|^N\big (\log\frac{R}{|x|}\big )^N} = 0 \quad \textrm {in} \ \Omega^*_n,
\]
and $\psi_n$ is rotationally symmetric with respect to $x_N$-axis.
We think that $\psi_n$ is defined on $\Omega^*$ by extending by zero.
Then we see
\begin{equation}
\label{c_n}
\int_{\Omega^*} |\nabla \psi_n|^{N} dx = c_n \to \Big(\frac{N-1}{N}\Big )^N
\end{equation}
as $n \to \infty$.
Since $\( \frac{N-1}{N} \)^N$ is not attained by any element in $W_0^{1,N}(\Omega^*)$,
elliptic estimates imply that for any small $R^{\prime} > 0$ and any $\tilde{R} < R$ sufficiently close to $R$,
$\psi_n$ converges uniformly to $0$ on $\Omega^* \cap (B_{\tilde{R}} \setminus \ol{B_{R^{\prime}}})$
and $\psi_n$ converges weakly to $0$ in $W_0^{1,N}(\Omega^*)$ as $n \to \infty$.
We denote
\[
\Omega^*(r) \equiv \{ \omega \in S^{N-1} \ | \ r\omega \in \Omega^* \} \subset S^{N-1},
\]
so $m(r) = r^{N-1} \mathcal{H}^{N-1}(\Omega^*(r))$.
Then we note that
\begin{align}
\label{concentration}
1 &= \int_{\Omega^*} \frac{|\psi_n(x)|^N}{\big (|x|\log\frac{R}{|x|}\big )^N} dx =
\int_0^R \int_{\Omega^*(r)} \frac{|\psi_n(r\omega)|^N }{r\big (\log\frac{R}{r}\big )^N} dS_{\omega} dr \notag \\
&= \int_0^{R^\prime} \int_{\Omega^*(r)}\frac{|\psi_n(r\omega)|^N}{r\big (\log\frac{R}{r}\big )^N} dS_{\omega} dr
+ \int_{\tilde{R}}^R \int_{\Omega^*(r)} \frac{|\psi_n(r\omega)|^N }{r\big (\log\frac{R}{r}\big )^N} dS_{\omega} dr + o_n(1)
\end{align}
as $n \to \infty$.
First, let us assume
\begin{equation}
\label{concentration on zero}
\lim_{n \to \infty} \int_0^{R^{\prime}} \int_{\Omega^*(r)} \frac{|\psi_n(r\omega)|^N}{r \(\log\frac{R}{r} \)^N} dS_{\omega} dr \ge C
\end{equation}
for some $C > 0$.
Since $m_0 <\omega_{N-1}$ by assumption \eqref{m_0},
$\Omega^*(r)$ is a proper subset of $S^{N-1} \setminus \{ -e_N \} \simeq \re^{N-1}$ for any small $r >0$.
Thus there exists a constant $C > 0$ independent of small $r > 0$ and $n \in \N$ such that the Poincar\'e inequality
in Proposition \ref{prop-Poincare} (with $U = \Omega^*(r)$, $p = N$, $n = N-1$)
\begin{equation}
\label{Poincare}
\int_{\Omega^*(r)}|\nabla_{S^{N-1}} \psi_n(r\omega)|^N dS_{\omega} \ge C \int_{\Omega^*(r)}|\psi_n(r\omega)|^N dS_{\omega}
\end{equation}
holds true.
Note that
\[
\nabla \psi_n = \frac{x}{|x|}\frac{\partial \psi_n}{\partial r} + \frac{1}{r} \nabla_{S^{N-1}}\psi_n, \qquad
|\nabla \psi_n|^N \ge \left| \frac{\partial \psi_n}{\partial r} \right|^N + \frac{1}{r^N} |\nabla_{S^{N-1}}\psi_n|^N,
\]
where the latter estimate follows from the superadditivity $(a^2 + b^2)^{N/2} \ge a^N + b^N$ for $a, b \ge 0$ and $N \ge 2$.
Then for each small $R^\prime > 0$, we have
\begin{align}
\label{poes1}
\int_{\Omega^*}|\nabla \psi_n|^N dx &= \int_0^R \int_{\Omega^*(r)} |\nabla \psi_n(r\omega)|^N r^{N-1} dS_{\omega} dr \notag \\
&\ge \int_0^{R^\prime} \int_{\Omega^*(r)} \frac{1}{r^{N}} |\nabla_{S^{N-1}} \psi_n|^N r^{N-1} dS_{\omega} dr \notag \\
&\ge C\int_{0}^{R^\prime} \int_{\Omega^*(r)} \frac{|\psi_n(r\omega)|^N}{r} dS_{\omega} dr
\end{align}
by the Poincar\'e inequality \eqref{Poincare}.
On the other hand, since
\[
\int_0^{R^\prime} \int_{\Omega^*(r)}\frac{|\psi_n(r\omega)|^N}{r} dS_{\omega} dr
\ge \( \log\frac{R}{R^\prime} \)^N
\int_0^{R^\prime} \int_{\Omega^*(r)} \frac{|\psi_n(r\omega)|^N }{r \(\log\frac{R}{r} \)^N} dS_{\omega} dr,
\]
we have by \eqref{concentration on zero},
\begin{equation}
\label{C2}
\int_0^{R^\prime} \int_{\Omega^*(r)}\frac{|\psi_n(r\omega)|^N}{r} dS_{\omega} dr \ge \(C + o_n(1) \) \( \log\frac{R}{R^\prime} \)^N
\end{equation}
where $o_n(1) \to 0$ as $n \to \infty$.
Then by \eqref{c_n}, \eqref{poes1}, and \eqref{C2}, we have
\begin{align*}
\( \frac{N-1}{N} \)^N + o_n(1) &= \int_{\Omega^*}|\nabla \psi_n|^N dx \ge \frac{C}{2} \( \log\frac{R}{R^\prime} \)^N
\end{align*}
as $n \to \infty$.
This inequality fails if $R^{\prime}$ is sufficiently small.
Thus \eqref{concentration on zero} cannot happen and
\[
\lim_{n \to \infty} \int_0^{R^{\prime}} \int_{\Omega^*(r)} \frac{|\psi_n(r\omega)|^N}{r \(\log\frac{R}{r} \)^N} dS_{\omega} dr = 0
\]
under the assumption \eqref{m_0}.
Therefore by \eqref{concentration}, we have
\begin{equation}
\label{concentration on boundary}
\lim_{n \to \infty} \int_{\tilde{R}}^R \int_{\Omega^*(r)} \frac{|\psi_n(r\omega)|^N}{r \(\log\frac{R}{r} \)^N} dS_{\omega} dr = 1.
\end{equation}
Next, we will prove that \eqref{concentration on boundary} cannot occur under the assumption \eqref{m_R_finite}.
In fact, we see by \eqref{concentration on boundary} and \eqref{Taylor} that
\begin{align*}
1 + o_n(1) &= \int_{\tilde{R}}^R \int_{\Omega^*(r)} \frac{|\psi_n(r\omega)|^N}{\( r \log \frac{R}{r} \)^N}r^{N-1} dS_{\omega} dr \\
&= (1+o(1)) R^{N-1}\int_{\tilde{R}}^R \int_{\Omega^*(r)} \frac{|\psi_n(r\omega)|^N}{\( R - r \)^N}dS_{\omega} dr,
\end{align*}
where $o_n(1) \to 0$ as $n \to \infty$ and $o(1) \to 0$ as $\tilde{R} \to R$.
Thus we have
\begin{equation}
\label{ap}
\lim_{n \to \infty} \int_{\tilde{R}}^R \int_{\Omega^*(r)} \frac{|\psi_n(r\omega)|^N}{\( R - r \)^N}dS_{\omega} dr = (1 + o(1)) R^{-(N-1)}
\end{equation}
as $\tilde{R} \to R$.
On the other hand, since $\psi_n(r\omega) \big|_{r = R} = 0$, we can apply the one-dimensional Hardy inequality
\begin{equation}
\label{1D_Hardy}
\(\frac{N-1}{N}\)^N \int_{\tilde{R}}^R \frac{|\psi_n(r\omega)|^N}{\(R - r \)^N} dr \le \int_{\tilde{R}}^R \bigg |\frac{\partial \psi_n(r\omega)}{\partial r}\bigg |^N dr
\end{equation}
to $\psi_n(r\omega)$.
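For completeness, we recall a standard proof of the one-dimensional Hardy inequality \eqref{1D_Hardy}. Set $t = R - r$ and $g(t) = \psi_n((R-t)\omega)$ for $t \in [0, R-\tilde{R}]$, extended by the constant value $g(R-\tilde{R})$ for $t \ge R - \tilde{R}$; then $g(0) = 0$ and $g^{\prime}$ is supported in $[0, R-\tilde{R}]$. Integration by parts (the boundary terms vanish since $g(0) = 0$, $g$ is bounded, and $N \ge 2$) and H\"older's inequality give
\begin{align*}
\int_0^{\infty} \frac{|g(t)|^N}{t^N} dt &= \frac{1}{N-1} \int_0^{\infty} |g(t)|^N \( -t^{1-N} \)^{\prime} dt
= \frac{N}{N-1} \int_0^{\infty} |g(t)|^{N-2} g(t) g^{\prime}(t) \, t^{1-N} dt \\
&\le \frac{N}{N-1} \( \int_0^{\infty} \frac{|g(t)|^N}{t^N} dt \)^{\frac{N-1}{N}} \( \int_0^{\infty} |g^{\prime}(t)|^N dt \)^{\frac{1}{N}}.
\end{align*}
Since $\int_0^{\infty} |g^{\prime}(t)|^N dt = \int_{\tilde{R}}^R |\frac{\partial \psi_n}{\partial r}(r\omega)|^N dr$ and $\int_{\tilde{R}}^R \frac{|\psi_n(r\omega)|^N}{(R-r)^N} dr \le \int_0^{\infty} \frac{|g(t)|^N}{t^N} dt$, dividing both sides by the first factor on the right yields \eqref{1D_Hardy}.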
Note that the best constant $\(\frac{N-1}{N}\)^N$ in the inequality \eqref{1D_Hardy} coincides with the value of $C_N(\Omega^*)$ under our contradiction hypothesis.
Then \eqref{1D_Hardy} implies
\begin{align*}
\(\frac{N-1}{N}\)^N \int_{\tilde{R}}^R \int_{\Omega^*(r)} \frac{|\psi_n(r\omega)|^N}{\(R - r \)^N}dS_{\omega} dr
&\le \int_{\tilde{R}}^R \int_{\Omega^*(r)} \bigg |\frac{\partial \psi_n}{\partial r}(r\omega) \bigg |^N dS_{\omega} dr \\
&\le (1+o(1)) R^{-(N-1)} \int_{\Omega^*} \bigg|\frac{\partial \psi_n}{\partial r}(x) \bigg|^N dx.
\end{align*}
The above inequality, \eqref{ap} and $C_N(\Omega^*) = (\frac{N-1}{N})^N = \lim_{n \to \infty} \int_{\Omega^*} |\nabla \psi_n(x)|^N dx$ by \eqref{c_n}
imply that
\[
\lim_{n \to \infty} \int_{\Omega^*}|\nabla \psi_n|^N dx \le \lim_{n \to \infty} \int_{\Omega^*}\bigg|\frac{\partial \psi_n}{\partial r}(x) \bigg|^N dx.
\]
The converse inequality holds trivially, thus we see that
\[
\lim_{n \to \infty} \int_{\Omega^*}|\nabla \psi_n|^N dx
= \lim_{n \to \infty} \int_{\Omega^*}\bigg|\frac{\partial \psi_n}{\partial r}\bigg|^N dx,
\]
which implies
\begin{equation}
\label{angle vanish}
\lim_{n \to \infty} \int_{R^\prime}^R \int_{r \Omega^*(r)} |\nabla_{S^{N-1}(r)}\psi_n(\sigma)|^N d\sigma_r dr = 0,
\end{equation}
here $\sigma = r\omega \in S^{N-1}(r)$, $d\sigma_r = r^{N-1} dS_{\omega}$ is the volume element on the geodesic ball $r\Omega^*(r)$ with center $r e_N$ in $S^{N-1}(r)$,
and $\nabla_{S^{N-1}(r)} = (1/r) \nabla_{S^{N-1}}$.
From the assumption $m_R < \infty$ in \eqref{m_R_finite}, there exists a constant $ C > 0$ independent of $r \in (\tilde{R}, R)$ and $n$ such that
\[
r^{N-1} \mathcal{H}^{N-1}(\Omega^*(r)) \le C(R-r)^{N-1}
\]
holds true.
This implies that \[\( \mathcal{H}^{N-1}(r \Omega^*(r)) \)^{-N/(N-1)} \ge D (R - r)^{-N},\]
where $D = C^{-N/(N-1)} > 0$ independent of $r \in (\tilde{R}, R)$ and $n$.
Then, by the Poincar\'e inequality in Proposition \ref{prop-Poincare} ($n = N-1$, $p = N$) on the spherical cap $U = r \Omega^*(r) \subset S^{N-1}(r)$,
\begin{equation}
\label{Poincare2}
\int_{r\Omega^*(r)} |\nabla_{S^{N-1}(r)}\psi_n(\sigma)|^N d\sigma_r \ge D \int_{r\Omega^*(r)} \frac{|\psi_n(\sigma)|^N}{|R-r|^N} d\sigma_r
\end{equation}
holds true.
Combining \eqref{angle vanish} and \eqref{Poincare2}, we have
\begin{align*}
o_n(1) &= \int_{\tilde{R}}^R \int_{r\Omega^*(r)} |\nabla_{S^{N-1}(r)}\psi_n(\sigma)|^N d\sigma_r dr \ge D \int_{\tilde{R}}^R \int_{r \Omega^*(r)} \frac{|\psi_n(\sigma)|^N}{|R-r|^N} d\sigma_r dr \\
&= (1+o(1)) D R^{N-1} \int_{\tilde{R}}^R \int_{\Omega^*(r)} \frac{|\psi_n(r\omega)|^N}{\(R - r \)^N} dS_{\omega} dr
\end{align*}
where $o_n(1) \to 0$ as $n \to \infty$ and $o(1) \to 0$ as $\tilde{R} \to R$.
Combining this with \eqref{ap} and letting $n \to \infty$, we see
\[
0 \ge D (1 + o(1))R^{N-1} \times (1 + o(1)) R^{-(N-1)} = D + o(1)
\]
as $\tilde{R} \to R$.
This is a contradiction and we complete the proof.
\end{proof}
Next, we prove that a condition on $\Omega$ stronger than that in Theorem \ref{theorem-inequality} assures the attainability of $C_N(\Omega)$.
The condition below implies that any boundary point $x \in \partial B_R \cap \pd\Omega$, if it exists, must be cuspidal,
but the origin, if $0 \in \partial \Omega$, may be a Lipschitz continuous boundary point.
\begin{theorem}
\label{theorem-existence}
For $r \in (0,R)$, let $m(r)$ be defined as \eqref{m(r)}.
If
\[
m_0 \equiv \limsup_{r \to 0} \, m(r)/r^{N-1} < \omega_{N-1}
\]
and
\begin{equation}
\label{m_R_0}
m_R \equiv \limsup_{r \to R} \, m(r)/(R-r)^{N-1} = 0,
\end{equation}
then
\[
C_N(\Omega) > \( \frac{N-1}{N} \)^N
\]
and $C_N(\Omega)$ is attained.
\end{theorem}
\begin{proof}
The strict inequality $C_N(\Omega) > \( \frac{N-1}{N} \)^N $ was proved in Theorem \ref{theorem-inequality}.
For each positive integer $n$, we define
\[
\Omega_n \equiv \Omega \cap (B_{R-1/n} \setminus \overline{B_{1/n}}).
\]
Then, since the weight function $W_R(x)$ is bounded on $\Omega_n$, there exists a minimizer $\psi_n$ of
\[
d_n \equiv \inf \Big \{ \int_{\Omega_n} \big |\nabla \psi\big|^N dx \ \Big | \
\int_{\Omega_n} \frac{|\psi(x)|^N}{|x|^N \(\log\frac{R}{|x|}\)^N} dx = 1, \, \psi \in W_0^{1,N}(\Omega_n) \Big \}.
\]
We may assume $\psi_n \ge 0$ and $\psi_n$ satisfies
\[
\textrm{div}(|\nabla \psi_n|^{N-2}\nabla \psi_n) +
d_n \frac{\psi_n(x)^{N-1}}{|x|^N \(\log\frac{R}{|x|} \)^N} = 0 \ \ \textrm { in } \ \Omega_n.
\]
We note that
\[
\int_{\Omega_n} |\nabla \psi_n|^{N} dx = d_n \to C_N(\Omega) \ \textrm { as } \ n \to \infty.
\]
Let $u$ be a weak limit of the sequence $\{\psi_n\}_{n \in \N}$ in $W_0^{1,N}(\Omega)$.
Then, we see that for each positive integer $n_0$,
$\psi_n$ converges to $u$ in $C^{1}(\Omega_{n_0})$,
and that
\[
\textrm{div}(|\nabla u|^{N-2}\nabla u) +
C_N(\Omega) \frac{|u(x)|^{N-1}}{|x|^N \(\log\frac{R}{|x|} \)^N} = 0, \ \ u \ge 0 \ \ \textrm {in} \ \Omega.
\]
Now it suffices to prove that $u \not\equiv 0$ in $\Omega$; then $u$ is a minimizer for $C_N(\Omega)$.
To the contrary, we assume that $u \equiv 0$.
Then, we see that for each positive integer $n_0$, $\psi_n$ converges uniformly to $0$ on $\Omega_{n_0}$.
We denote
\[
\Omega(r) \equiv \{ \omega \in S^{N-1} \ | \ r\omega \in \Omega \} \subset S^{N-1}.
\]
Since $m_0 <\omega_{N-1}$,
by the spherically symmetric rearrangement, the P\'olya-Szeg\"o inequality and the Poincar\'e inequality,
we see there exists a constant $C > 0$, independent of small $r > 0$ and $n \in \N$, such that
\[
\int_{\Omega(r)}|\nabla_{S^{N-1}} \psi_n|^N dS_{\omega} \ge C\int_{\Omega(r)}|\psi_n|^N dS_{\omega},
\]
see the proof of Theorem \ref{theorem-inequality}.
Then, we see that for each large positive integer $n_0$,
\begin{align}
\label{poes2}
\intO |\nabla \psi_n|^N dx &\ge \int_0^{1/n_0} \int_{\Omega(r)} |\nabla_{S^{N-1}} \psi_n(r\omega)|^N r^{-1} dS_{\omega} dr \notag \\
&\ge C\int_{0}^{1/n_0} \int_{\Omega(r)}|\psi_n(r\omega)|^N r^{-1} dS_{\omega} dr.
\end{align}
Put $f_n(r) \equiv \int_{\Omega(r)} \frac{|\psi_n(r\omega)|^N}{r \(\log\frac{R}{r} \)^N} dS_{\omega}$.
Then we have
\begin{align*}
1 = &\intO \frac{|\psi_n(x)|^N}{\(|x|\log\frac{R}{|x|}\)^N} dx = \int_0^R \int_{\Omega(r)} \frac{|\psi_n(r\omega)|^N }{r \(\log\frac{R}{r} \)^N} dS_{\omega} dr \\
& = \int_0^{1/n_0} f_n(r) dr + \int_{1/n_0}^{R-1/n_0} f_n(r) dr + \int_{R-1/n_0}^{R} f_n(r) dr,
\end{align*}
and that
\begin{align*}
&\int_0^{1/n_0} \int_{\Omega(r)} \frac{|\psi_n(r\omega)|^N }{r \(\log\frac{R}{r} \)^N} dS_{\omega} dr
\le \( \log\frac{R}{1/n_0} \)^{-N} \int_0^{1/n_0} \int_{\Omega(r)}\frac{|\psi_n(r\omega)|^N}{r} dS_{\omega} dr.
\end{align*}
Then, \eqref{poes2} implies that for each large positive integer $n_0$,
\[
\int_0^{1/n_0} \int_{\Omega(r)} \frac{|\psi_n(r\omega)|^N }{r \(\log\frac{R}{r} \)^N} dS_{\omega} dr
\le \big (\log\frac{R}{1/n_0}\big )^{-N} \frac{d_n}{C}.
\]
The right-hand side of the above inequality can be made arbitrarily small by taking $n_0$ large; combining this estimate (applied with $n_0$ replaced by an arbitrary $m > n_0$) with the uniform convergence of $\psi_n$ to $0$ on $\Omega_m$, we obtain
$\lim_{n \to \infty} \int_0^{1/n_0} f_n(r) dr = 0$.
Since $\lim_{n \to \infty} \int_{1/n_0}^{R-1/n_0} f_n(r) dr = 0$,
we deduce that for each large positive integer $n_0$,
\[
\lim_{n \to \infty} \int_{R-1/n_0}^R f_n(r) dr = 1.
\]
Now, as in the proof of Theorem \ref{theorem-inequality},
let $\Omega^*(r) \subset S^{N-1}$ be a geodesic ball with center $e_N$ such that the $(N-1)$-dimensional measure of $\Omega^*(r)$ equals that of $\Omega(r)$.
Let $\psi^*_n$ be the spherical symmetric rearrangement of $\psi_n$ and
put $f^*_n(r) = \int_{\Omega^*(r)} \frac{|\psi^*_n(r\omega)|^N }{r\big (\log\frac{R}{r}\big )^N} dS_{\omega}$.
Since $r\log(R/r) = (R-r) + o(R-r)$ as $r \to R$,
we see that
\begin{equation}
\label{E1}
f^*_n(r) = \int_{\Omega^*(r)} \frac{|\psi^*_n(r\omega)|^N }{r\big (\log\frac{R}{r}\big )^N} dS_{\omega} = (1+o(1)) \, R^{N-1} \int_{\Omega^*(r)} \frac{|\psi^*_n(r\omega)|^N }{(R-r)^N} dS_{\omega}
\end{equation}
where $o(1) \to 0$ as $r \to R$.
On the other hand, by the assumption $m_R = 0$,
there exists $h(r) > 0$ with $h(r) \to 0$ as $r \to R$ such that $\mathcal{H}^{N-1}(r \Omega^*(r)) \le h(r) (R - r)^{N-1}$.
Thus
\[
\( \mathcal{H}^{N-1}(\Omega^*(r)) \)^{-N/(N-1)} \ge r^N \( h(r) \)^{-N/(N-1)} (R - r)^{-N}.
\]
Put $g(r) = r^N ( h(r) )^{-N/(N-1)}$. Then $\lim_{r \to R} g(r) = \infty$ and the Poincar\'e inequality in Proposition \ref{prop-Poincare}
(with $U = \Omega^*(r)$, $p = N$, $n = N-1$)
\begin{equation}
\label{E2}
\int_{\Omega^*(r)} |\nabla_{S^{N-1}}\psi^*_n(r\omega)|^N dS_{\omega} \ge C g(r)\int_{\Omega^*(r)} \frac{|\psi^*_n(r\omega)|^N}{|R-r|^N} dS_{\omega}
\end{equation}
holds. Here $C = C(N) >0$ is an absolute constant.
Then by \eqref{E1} and \eqref{E2}, we see
\[
\int_{\Omega^*(r)} |\nabla_{S^{N-1}}\psi^*_n(r\omega)|^N dS_{\omega} \ge \frac{C}{2} g(r) \frac{f^*_n(r)}{R^{N-1}}
\]
and we may apply the P\'olya-Szeg\"o inequality
\[
\int_{\Omega(r)} |\nabla_{S^{N-1}} \psi_n(r\omega)|^N dS_{\omega} \ge \int_{\Omega^*(r)} |\nabla_{S^{N-1}} \psi^*_n(r\omega)|^N dS_{\omega}.
\]
Then for large $n_0$, we have
\begin{align*}
&\intO |\nabla \psi_n|^N dx \ge \int_{R - 1/n_0}^R \int_{\Omega(r)} |\nabla_{S^{N-1}} \psi_n(r\omega)|^N dS_{\omega} dr \\
&\ge \int_{R-1/n_0}^{R} \frac{C}{2} \frac{g(r) f^*_n(r)}{R^{N-1}} dr \ge \frac{C g(r^*)}{2 R^{N-1}} \int_{R-1/n_0}^{R} f^*_n(r) dr
= \frac{C g(r^*)}{2 R^{N-1}} (1 + o_n(1))
\end{align*}
where $r^*$ is a number with $r^* \in (R-1/n_0, R)$.
Since $g(r^*) \to \infty$ as $n_0 \to \infty$,
we conclude that $\lim_{n \to \infty}\int_{\Omega}|\nabla \psi_n|^N dx = \infty$.
This is a contradiction; thus $C_N(\Omega)$ is attained.
\end{proof}
\end{section}
\begin{section}{Nonexistence of a minimizer for a domain $\Omega$ with $C_2(\Omega) > \frac{1}{4} $}
In this section, we provide a Lipschitz domain $\Omega$ in $\re^2$ on which $C_2(\Omega) > 1/4$ and $C_2(\Omega)$ is not attained. Recall Hardy's inequality \eqref{Hardy_half_inf} when $N = 2$:
\[
\inf \left\{ \int_{\re^2_{+}} |\nabla u|^2 dx \ \Big | \
\int_{\re^2_{+}} \frac{u^2}{(x_2)^2} dx = 1, \, u \in W_0^{1,2}(\re^2_+) \right\} = \frac {1}{4},
\]
and the best constant $1/4$ is not attained, where $x = (x_1, x_2)$.
For $a \in [0,\pi/2),$ we define
\[
E(a) \equiv \inf \Big \{ \frac{\int_{a}^{\pi-a}(\phi_\theta)^2 d\theta} {\int_{a}^{\pi-a} (\phi^2 / \sin^2 \theta) d\theta}
\ \Big | \ \phi \in C_0^\infty((a,\pi-a)) \setminus \{ 0 \} \Big \}.
\]
From \cite[Corollary 4.4]{Davies}, we see that
\begin{equation}
\label{Davies}
E \equiv E(0) = \inf \Big \{ \frac{\int_{0}^{\pi}(\phi_\theta)^2 d\theta} {\int_{0}^{\pi} (\phi^2/ \sin^2 \theta) d\theta}
\ \Big | \ \phi \in C_0^\infty((0,\pi)) \setminus \{ 0 \} \Big \} = \frac {1}{4}
\end{equation}
and $E$ is not achieved.
We prove these facts in the Appendix for the reader's convenience.
It is obvious that for $a \in (0,\pi/2),$ $E(a)$ is achieved by a positive function $\varphi_a$ on $(a,\pi-a).$
Since $E(0)$ is not achieved in $W^{1,2}_0(0,\pi)$, it follows that $E(a) > E(0) = \frac 14$ for $a \in (0,\pi/2).$
\begin{theorem}
\label{theorem-nonexistence}
There exists a domain $\Omega \subset B_1 \subset \re^2$
such that
$C_2(\Omega) > \frac{1}{4}$ and $C_2(\Omega)$ is not attained.
\end{theorem}
\begin{proof}
For $a \in (0,\pi/2)$, we define a cone
\[
\mathbf{C}_a \equiv \{ (r\cos \theta, r\sin \theta) \in \re^2_+ \ | \ r \in (0,\infty), \theta \in (a,\pi-a)\} \subset \re^2_+.
\]
We define
\begin{align*}
R(y_1,y_2) &\equiv \((y_1)^2 + (1-y_2)^2 \) \( \log \frac{1}{((y_1)^2 + (1 -y_2)^2)^{1/2}} \)^2 \\
&= \frac{1}{4} h(r, \theta) \{ \log h(r,\theta) \}^2
\end{align*}
for $(y_1, y_2) = (r \cos \theta, r \sin \theta)$, where $h(r, \theta) = r^2 - 2 r \sin \theta + 1$.
Since \[ \log h(r, \theta) = h(r,\theta) - 1 - \frac{(h(r, \theta) - 1)^2}{2} + O(r^3) \textrm{ as } r \to 0, \]
we have
\begin{align}
\label{asymp}
\frac{R(y_1, y_2)}{(y_2)^2} &= \frac{(r^2 - 2r \sin \theta + 1) (4\sin^2 \theta - 4r \sin \theta (1 - 2\sin^2 \theta) + O(r^2))}{4 \sin^2 \theta} \notag \\
&= \frac{4 \sin^2 \theta - 4r \sin \theta + O(r^2)}{4 \sin^2 \theta}
\end{align}
as $r \to 0$.
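For the reader's convenience, the intermediate computation behind \eqref{asymp} is the following. Squaring the expansion of $\log h$ gives
\begin{align*}
(\log h)^2 &= (h-1)^2 \big( 1 - (h-1) + O(r^2) \big), \\
(h-1)^2 &= 4r^2 \sin^2\theta - 4r^3 \sin\theta + O(r^4), \qquad
(h-1)^3 = -8 r^3 \sin^3\theta + O(r^4),
\end{align*}
so that $(\log h)^2 = r^2 \big( 4\sin^2 \theta - 4r \sin \theta (1 - 2\sin^2\theta) \big) + O(r^4)$, and \eqref{asymp} follows upon multiplying by $h/(4 r^2 \sin^2 \theta)$, since $R(y_1,y_2)/(y_2)^2 = h (\log h)^2/(4 r^2 \sin^2\theta)$.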
Thus we see that
\[
\lim_{y_2 \to 0, (y_1,y_2) \in \mathbf{C}_a} R(y_1,y_2)/(y_2)^2 = 1
\]
for each $a > 0$.
From now on, we fix $a \in (\pi/4,\pi/2)$.
We define
\[
g(r) \equiv \inf \Big \{ \frac{R(y_1,y_2)}{(y_2)^2} \ \Big | \ (y_1,y_2) \in \mathbf{C}_a, \, y_1^2+y_2^2 = r^2 \Big \}.
\]
By \eqref{asymp}, we see that $\lim_{r \to 0}g(r) = 1$.
Further, we see that $g(r) < 1$ for small $r > 0$.
We take $r_0 \in (0,1/2)$ such that $g(r) < 1$ for any $r \in (0,r_0)$.
Note that $E(a)$ is monotone non-decreasing with respect to $a \in (0, \pi/2)$.
Now for each $r \in (0,r_0)$, we take $a(r) \in (a,\pi/2)$ such that
$E(a)/E(a(r)) = g(r) \in (0,1)$.
Since $\lim_{r \to 0}g(r) =1$, it follows that $\lim_{r \to 0} a(r) = a$.
Since $E$ is continuous on $(0,\pi/2)$ and $g$ on $(0,r_0)$,
$a(r)$ is continuous with respect to $r \in (0,r_0)$.
We define
\[
\tilde{\Omega} \equiv \{(r\cos\theta,r\sin\theta) \in \re^2_+ \ | \ r \in (0, r_0), \theta \in (a(r),\pi-a(r))\}
\]
and
\[
\Omega = \{(x_1,x_2) \in B_1 \ | \ (x_1,1-x_2) \in \tilde{\Omega} \} \subset B_1 \subset \re^2.
\]
We claim that $C_2(\Omega) = E(a) >\frac14 $ and $C_2(\Omega)$ is not attained.
For any $u \in C_0^\infty(\Omega),$ we define $\tilde{u}(y_1,y_2) = u(y_1,1-y_2)$ for $y = (y_1, y_2) \in \tilde{\Omega}$.
Then, we see that $\tilde{u} \in C_0^\infty(\tilde{\Omega})$ and
\[
\int_{\Omega} |\nabla u|^2 dx_1dx_2 = \int_{\tilde{\Omega}} |\nabla \tilde{u}|^2 dy_1dy_2
= \int_0^{r_0} \int_{a(r)}^{\pi-a(r)} r(\tilde{u}_r)^2 + r^{-1}(\tilde{u}_\theta)^2 d\theta dr
\]
and
\[
\int_{\Omega} \frac{(u(x_1,x_2))^2}{|x|^2(\log |x|)^2} dx_1dx_2 = \int_{\tilde{\Omega}} \frac{(\tilde{u}(y_1,y_2))^2}{ R(y_1,y_2) } dy_1dy_2.
\]
First of all, we claim that $C_2(\Omega) \le E(a)$.
To prove this, we note that for any $a^\prime \in (a,\pi/2)$, we can find $\delta^\prime \in (0,r_0)$ such that
\[
\{(r\cos\theta,r\sin \theta) \in \re^2_+ \ | \ r \in (0,\delta^\prime), \theta \in (a^\prime,\pi-a^\prime)\} \subset \tilde{\Omega}.
\]
For any small $\e,\delta > 0$ with $4\e < \delta < \delta^\prime$,
we find a Lipschitz continuous function $\psi_\e^\delta$ satisfying
$\psi_\e^\delta(r) = 0$ for $r \le \e$ or $r \ge \delta$, $\psi_\e^\delta(r) = 1$ for $2\e \le r \le \delta/2$,
$|(\psi_\e^\delta)^\prime(r)| = 1/\e$ for $r \in (\e,2\e)$, and $|(\psi_\e^\delta)^\prime(r)| = 2/\delta$ for $r \in (\delta/2, \delta)$.
We define that for $y=(y_1,y_2) = (r \cos \theta,r\sin\theta) \in \tilde{\Omega}$ and $x=(x_1,x_2) \in \Omega$,
\[
\tilde{u}^{\delta}_{\e}(y_1,y_2) = \tilde{u}^{\delta}_{\e}(r,\theta) = \psi_\e^\delta(r)\varphi_{a^\prime}(\theta) \textrm { and }
u^{\delta}_{\e}(x_1,x_2) = \tilde{u}^{\delta}_{\e}(x_1,1-x_2).
\]
Then we see that
\begin{align*}
&\int_{\Omega} |\nabla u^{\delta}_{\e}|^2 dx = \int_{\tilde{\Omega}} |\nabla\tilde{u}^{\delta}_{\e} |^2 dy
= \int_0^\infty \int_{a^\prime}^{\pi-a^\prime} r((\tilde{u}^{\delta}_{\e})_r)^2 + r^{-1}((\tilde{u}^{\delta}_{\e})_{\theta})^2 d\theta dr \\
& = \( \int_\e^{2\e} ((\psi_{\e}^\delta)^{\prime}(r))^2 r dr + \int_{\delta/2}^{\delta} ((\psi_{\e}^\delta)^{\prime}(r))^2 r dr \) \int_{a^\prime}^{\pi-a^\prime}(\varphi_{a^\prime}(\theta))^2d\theta \\
&\quad + \int_\e^{\delta}\int_{a^\prime}^{\pi-a^\prime} r^{-1}(\psi_\e^\delta(r))^2\Big(\frac{d\varphi_{a^\prime}}{d\theta}\Big)^2 d\theta dr \\
& = 3\int_{a^\prime}^{\pi-a^\prime}(\varphi_{a^\prime}(\theta))^2d\theta + \int_\e^{\delta} r^{-1}(\psi_\e^\delta(r))^2 dr\int_{a^\prime}^{\pi-a^\prime}\Big(\frac{d\varphi_{a^\prime}}{d\theta}\Big )^2 d\theta
\end{align*}
and
\[
\int_{\Omega} \frac{(u^{\delta}_{\e}(x))^2}{|x|^2(\log |x|)^2} dx = \int_{\tilde{\Omega}}\frac{(\tilde{u}^{\delta}_{\e}(y))^2}{R(y_1,y_2)} dy = \int_\e^{\delta}\int_{a^\prime}^{\pi-a^\prime}\frac{(y_2)^2}{R(y_1,y_2)} r^{-1}(\psi_\e^\delta(r))^2 \Big(\frac{\varphi_{a^\prime}}{\sin \theta}\Big )^2 d\theta dr.
\]
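The constant $3$ in the computation above comes from the explicit profile of $\psi_\e^\delta$: on the two transition layers,
\[
\int_\e^{2\e} \big( (\psi_\e^\delta)^\prime(r) \big)^2 r \, dr = \frac{1}{\e^2} \cdot \frac{(2\e)^2 - \e^2}{2} = \frac{3}{2},
\qquad
\int_{\delta/2}^{\delta} \big( (\psi_\e^\delta)^\prime(r) \big)^2 r \, dr = \frac{4}{\delta^2} \cdot \frac{\delta^2 - (\delta/2)^2}{2} = \frac{3}{2},
\]
so the two contributions sum to $3$.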
Since $\lim_{\e \to 0}\int_\e^{\delta} r^{-1}(\psi^\delta_\e(r))^2 dr = \infty$ for each $\delta > 0,$
we see that
\[
\lim_{\e \to 0} \frac{\int_{\Omega}|\nabla u^\delta_\e|^2 dx}{ \int_{\Omega} \frac{|u^\delta_\e|^2}{|x|^2(\log |x|)^2} dx } \le
E(a^\prime) (\min_{r \in [0,\delta]}g(r))^{-1}.
\]
Then, $C_2(\Omega) \le E(a^\prime)$ for any $a^\prime \in (a, \pi/2)$ since $\lim_{r \to 0} g(r) = 1$.
This implies that $C_2(\Omega) \le E(a)$.
Now for any $v \in W_0^{1,2}(\Omega)$ with $\tilde{v}(y_1,y_2) \equiv v(y_1,1-y_2) \in W_0^{1,2}(\tilde{\Omega}),$
we see that
\begin{align*}
\int_{\Omega} |\nabla v|^2 dx_1dx_2
& \ge \int_0^{r_0} \int_{a(r)}^{\pi-a(r)} r(\tilde{v}_r)^2 + E(a(r))r^{-1}\frac{(\tilde{v})^2}{\sin^2 \theta} d\theta dr \\
& = \int_0^{r_0} \int_{a(r)}^{\pi-a(r)} \Big [ (\tilde{v}_r)^2 + E(a(r))\frac{(\tilde{v})^2}{(y_2)^2}\Big] rd\theta dr \\
&= \int_0^{r_0} \int_{a(r)}^{\pi-a(r)} \Big [ (\tilde{v}_r)^2 + E(a(r)) \frac{R(y_1,y_2)}{(y_2)^2}\frac{(\tilde{v})^2}{R(y_1,y_2)} \Big ] rd\theta dr \\
& \ge \int_0^{r_0} \int_{a(r)}^{\pi-a(r)} \Big [ (\tilde{v}_r)^2 + E(a(r)) g(r)\frac{(\tilde{v})^2}{R(y_1,y_2)} \Big ] rd\theta dr \\
& = \int_0^{r_0} \int_{a(r)}^{\pi-a(r)} \Big [ (\tilde{v}_r)^2 + E(a) \frac{(\tilde{v})^2}{R(y_1,y_2)} \Big ] rd\theta dr \\
& = \int_0^{r_0} \int_{a(r)}^{\pi-a(r)} (\tilde{v}_r)^2 rd\theta dr + E(a) \int_{\tilde{\Omega}} \frac{(\tilde{v})^2}{R(y_1,y_2)} dy_1dy_2\\
& = \int_0^{r_0} \int_{a(r)}^{\pi-a(r)} (\tilde{v}_r)^2 rd\theta dr + E(a) \int_{\Omega} \frac{(v(x))^2}{|x|^2(\log |x|)^2} dx.
\end{align*}
This implies that $C_2(\Omega) \ge E(a).$
Combining the above upper and lower estimates, we see that $C_2(\Omega) = E(a) > \frac14.$
Moreover, the above estimate shows that
for any $u \in W_0^{1,2}(\Omega),$
\begin{equation} \label{ces} \int_{\Omega} |\nabla u|^2 dx_1dx_2 \ge \int_0^{r_0} \int_{a(r)}^{\pi-a(r)} (\tilde{u}_r)^2 rd\theta dr + E(a) \int_{\Omega} \frac{(u(x_1,x_2))^2}{|x|^2(\log |x|)^2} dx_1dx_2.\end{equation}
If $C_2(\Omega)$ is attained by $u \in W_0^{1,2}(\Omega) \setminus \{0\},$
we see from \eqref{ces} that $\tilde{u}_r \equiv 0$ in $\tilde{\Omega}$.
Since $\tilde{u}$ vanishes on $\partial \tilde{\Omega}$ along each ray $\{\theta = \mathrm{const}\}$, this forces $u \equiv 0$, which contradicts $u \in W_0^{1,2}(\Omega) \setminus \{0\}$.
Thus we conclude that $C_2(\Omega)$ is not attained in $W_0^{1,2}(\Omega)$.
\end{proof}
\begin{remark}
For the domain $\Omega$ in Theorem \ref{theorem-nonexistence},
let $P, Q$ be two points in $\pd \Omega \cap \pd B_r$ when $r$ is close to $1$.
Then $m(r)$ is the length of the arc $\stackrel{\frown}{PQ}$, which is larger than the length of the segment $PQ$.
Thus it is easy to see that in this case, $m_0 = 0$ and
\[
m_1 = \limsup_{r \to 1} m(r)/(1-r) \ge 2\cos a > 0;
\]
see Theorem \ref{theorem-existence}.
\end{remark}
\end{section}
\section{Introduction}\label{Introduction}
How disorder and electron correlations shape material properties is a major question of current condensed matter research \cite{Imada1998}. The interest in this problem is many decades old \cite{Wigner1934,Boer1937,Mott1937,Mott1949,Korringa1958,Anderson1958,Edwards1972}, and significant progress has been made in important directions, e.g. in describing the correlation-induced Mott-Hubbard \cite{Gebhard2000} and the disorder-induced Anderson \cite{Abrahams1979} metal-insulator transition. Yet, a complete general understanding of the joint effect of interactions and disorder remains elusive to this day.
Advances in ultracold-atoms experiments \cite{Lewenstein2007,Bloch2012} have boosted interest in scenarios where disorder and interactions are simultaneously important and new implications emerge from their interplay.
One example of such recently observed phenomena \cite{Choi2016} is many-body localization (MBL) \cite{Gornyi2005,Basko2006a}, an experimental and theoretical paradigm where several notions of many-body physics blend coherently \cite{Nandkishore2015}. In fact, MBL is part of a broad palette of situations. For example, disorder or interactions alone can produce insulating behavior but, between these limits, how they simultaneously affect conductance is not fully settled \cite{Kramer1993a,Kravchenko1994,Kravchenko1995,Kravchenko1996,Vojta1998,Scalettar2007,Lahoud2014}. In equilibrium, interactions can increase or decrease conductivity in a disordered system \cite{Denteneer1999,Vojta1998,Abrahams2001}. Out of equilibrium, results for quantum rings \cite{Farhoodfar2011} and quantum transport setups~\cite{Karlsson2014b} suggest that at fixed disorder strength the current depends non-monotonically on interactions.
To facilitate the description of disordered and interacting systems, it would be useful to have a simple picture. A recent example in this direction was to look at a reduced quantity, the one-body density matrix, to establish a link between MBL, Anderson localization and Fermi-liquid-type features in closed systems \cite{Bera2015,Bera2017}.
Another possible reduced description would be in terms of an {\it independent-particle Hamiltonian}. In a traditional, intuitive mean-field description of disorder vs interactions \cite{Scalettar2007}, the low-energy pockets of the rugged potential landscape attract high particle density, but this is opposed by inter-particle repulsion, resulting in a flatter effective potential landscape, i.e. disorder is screened by interactions. However, there is no unambiguous way to define such a potential, and different conclusions are reached in the literature \cite{Tanaskovic2003, Henseler2008, Henseler2008a, Song2008, Pezzoli2010, Farhoodfar2011}.
The question is even more delicate for open systems, where typical localization signatures are unavailable \cite{Luschen2017}. As such, a simple, rigorous picture valid also in the presence of reservoirs would be of utmost importance.
\begin{figure} [t]
\centering
\includegraphics[width=0.47\textwidth]{Systems.png}
\caption{Many-body and corresponding Kohn-Sham systems for rings and 2D quantum transport setups. The interaction $U$, the one-body potentials $\{v_i \}$ and KS potentials $\{v_{\text{eff},i}\}$ are shown at representative sites.
\label{system} }
\end{figure}
Motivated by these arguments, we introduce here a picture of disorder and interactions based on the Kohn-Sham (KS) independent-particle scheme \cite{Kohn1965} of density functional theory (DFT) \cite{Hohenberg1964,Runge1984}.
In DFT, the exact density of the interacting system can be obtained from a KS system subjected to an effective potential $v_{\text{eff}}$ (Fig.~\ref{system}). For the density, $v_{\text{eff}}$ is the best effective potential in an independent-particle picture. We propose that $v_{\text{eff}}$ {\it can be identified as the independent-particle effective energy landscape in a disordered and interacting system}, which unambiguously defines disorder screening. To assess disorder screening for conductance and currents, we consider out-of-equilibrium systems. In extending DFT to non-equilibrium, we also have to include the notion of an {\it effective bias} \cite{Schmitteckert2013,Stefanucci2015, Karlsson2016a}.
Our main findings are:
i) interactions smoothen the effective landscape seen by the electrons (we interpret this as disorder screening);
ii) a non-monotonic dependence of the current on the interaction strength cannot be explained by disorder screening alone; an ``effective bias'' (corresponding
to a screening of the applied bias due to electron correlations) has to be taken into account;
iii) the picture from i) and ii) applies to both isolated and open systems and to different dimensionalities;
iv) more generally, our work paves the way to a rigorous understanding of the notion of effective disorder \cite{Patel2017} in a variety of situations,
including open systems in and out of equilibrium, a topic of recent and fast-growing interest \cite{Luschen2017,VanNieuwenburg2018,Droenner2017,Xu2017}.
{\it Systems considered.-}
In this work, we focus on the transition from the weakly to the strongly correlated regime, and consider a single disorder strength.
This specific choice is enough to display how the competition of disorder and interaction is captured within a DFT picture.
We study quantum rings pierced by magnetic fields and electrically biased quantum-transport setups (Fig.~\ref{system}). Both situations show the aforementioned current crossover as a function of the interaction strength. The rings are solved numerically exactly, while for quantum transport we use the Non-Equilibrium Green's Function (NEGF) formalism within many-body perturbation theory \cite{Kadanoff1962,Keldysh1965,Haug2008,stefanucci2013,Balzer2013,Hopjan2014} to obtain steady-state currents and densities. The effective potentials and biases were found via a numerical reverse-engineering algorithm~\cite{Karlsson2016a} within non-equilibrium lattice DFT \cite{Verdozzi2008,Farzanehpour2012}.
\section{Quantum rings}
We study disordered Hubbard rings with $L=10$ sites, $N$ electrons, and spin-compensated, i.e. $N_\uparrow = N_\downarrow = N/2$. Currents are set by a magnetic field threading the rings, via the so-called Peierls substitution \cite{Peierls1933, Hofstadter1976}. The Hamiltonian is
%
\begin{equation}
\!\hat{H} = \!-T \! \! \sum _{\langle mn\rangle \sigma}\!\! e^{i\frac{\phi}{L} x_{mn}} \hat{c}^\dagger_{m\sigma} \hat{c}_{n\sigma}
+\sum _{m\sigma} (v_m
\! +\! \frac{U}{2} \hat{n}_{m,-\sigma}) \hat{n}_{m\sigma},
\label{PeierlsHamiltonian}
\end{equation}
%
where $\hat{c}^\dagger_{m\sigma}$ creates an electron with spin projection $\sigma=\pm 1$ at site $m$. $\hat{n}_{m\sigma}=\hat{c}^\dagger_{m\sigma} \hat{c}_{m\sigma}$ is the density operator, and $\langle ...\rangle$ denotes nearest-neighbor sites. $\phi$ is the Peierls phase and $x_{mn}=\pm 1$ depending on the direction of the hop from $m$ to $n$. $U$ is the onsite interaction. We consider onsite energies with box disorder of strength $W$ with $v_m \in \left [ -W/2, W/2 \right ]$.
In passing, we note that Peierls phases can be realized experimentally in cold atoms by artificial gauge fields~\cite{Jimenez-Garcia2012}.
We study currents in rings via exact diagonalization, obtaining the many-body ground-state wavefunction $| \psi(\phi) \rangle$, and the corresponding density matrix $\rho_{mn} = \langle \psi (\phi)| \hat{c}^\dagger_{n\sigma} \hat{c}_{m\sigma} | \psi (\phi)\rangle$. This gives the density at site $m$ as $n_m = 2\rho_{mm}$ and the bond current as
\mbox{$I_{m+1,m} = -4T \Im \left [ e^{i \phi /L} \rho_{m,m+1} \right ]$}.
As we are in a steady-state scenario, all nearest-neighbor bond currents are equal, and we write $I\equiv I_{m+1,m}$, independently of the lattice site $m$.
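As a concrete illustration of these formulas, the following minimal sketch evaluates $n_m$ and $I_{m+1,m}$ for the non-interacting ($U=0$) homogeneous ring, with $\rho$ built from the occupied single-particle orbitals; the sign convention for $x_{mn}$ and the parameter values are assumptions of the sketch:

```python
import numpy as np

L, T, phi, N_up = 10, 1.0, -0.5, 3   # ring size, hopping, Peierls phase, up-spin electrons

# Single-particle Peierls Hamiltonian; x_{m,m+1} = +1 is an assumed sign convention
H = np.zeros((L, L), dtype=complex)
for m in range(L):
    H[m, (m + 1) % L] = -T * np.exp(1j * phi / L)
H += H.conj().T

eps, orbs = np.linalg.eigh(H)                 # eigenvalues in ascending order
occ = orbs[:, :N_up]                          # lowest N_up orbitals per spin
rho = occ @ occ.conj().T                      # rho[m, n] = <c^dag_n c_m> per spin

n = 2.0 * np.real(np.diag(rho))               # site density, both spin channels
I = np.array([-4 * T * np.imag(np.exp(1j * phi / L) * rho[m, (m + 1) % L])
              for m in range(L)])
# In the steady state all bond currents coincide, so I[m] is m-independent.
```

For the homogeneous ring the density is uniform, $n_m = 2N_\uparrow/L$, and all bond currents coincide, as they must in a steady state.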
The corresponding effective KS Hamiltonian is~\cite{Akande2012,Akande2016}
\begin{align}
\hat{H}_{KS} =-T \sum _{\langle mn \rangle \sigma} e^{i\frac{\phi^ {\text{KS}}}{L} x_{mn}} \hat{c}^\dagger_{m\sigma} \hat{c}_{n \sigma} +
\sum _{m\sigma} v^{\text{KS}}_m \hat{n}_{m\sigma}.
\end{align}
%
The $L+1$ effective parameters ($v^{\text{KS}}_m, \phi^{\text{KS}}$) are found by solving
the KS equations $\left ( {\bm T } + {\bm v } ^\text{KS} \right )\varphi_\nu=\epsilon_\nu \varphi_\nu$, where $( {\bm T } )_{mn}=-Te^{i\frac{\phi^ {\text{KS}}}{L} x_{mn}}$ for nearest neighbors and 0 otherwise, and $( {\bm v } ^\text{KS})_{mn} = \delta_{mn} v^\text{KS}_m$.
Imposing that the KS density $n_m = 2 \sum_{\nu = 1}^{N/2} \left | \varphi_\nu(m) \right |^2$ and KS bond current
\mbox{
$
I = -4T \sum_{\nu =1}^{N/2} \Im \left [ e^{i \phi^{\text{KS}}/L} \varphi^*_\nu (m+1) \varphi_\nu (m) \right ]
$}
equal those from the original interacting system determines ($v^{\text{KS}}_m, \phi^{\text{KS}}$). No physical meaning should be \emph{a priori} given to the KS orbitals $\varphi_\nu$ or the KS eigenvalues $\epsilon_\nu$; they pertain to an auxiliary system giving the exact density and current but not necessarily other quantities.
The KS potential, referred to as $v_{\text{eff}}$ hereafter, is our proposed measure of disorder screening. It can be split into external (disorder) and Hartree-exchange-correlation parts: $v_{\text{eff}} = v + v_{\text{Hxc}}$ (similarly, $\phi_{\text{eff}} \equiv \phi_{\text{KS}} = \phi + \phi_{\text{xc}}$). Thus, in DFT, the screening of disorder by interactions (i.e. when $ |v_{\text{eff}}| < |v| $) comes from $v_{\text{Hxc}}$. This is an improvement over standard mean-field descriptions, in which the effective potential does not include correlations and the applied phase is unscreened.
%
\begin{figure}[t]
\centering
\includegraphics[width=0.46\textwidth]{homogeneousCaseBlue}
\caption{Current $I$ and KS phase $\phi_{\text{eff}}$ in a 10-site homogeneous ring with density $n=3/5$ (NQF) and $n=1$ (HF).
\label{homogeneousCase} }
\end{figure}
%
Both $v_{\text{eff}}$ and $\phi_{\text{eff}}$ are obtained by mapping the exact many-body ring system into a DFT-KS one.
In lattice models, existence and uniqueness issues for such a DFT-based map can occur \cite{Baer2008,Li2008a,Verdozzi2008, Stefanucci2010,Kurth2011,Farzanehpour2012,Akande2012}. Of relevance here, $\phi$ and $\phi + 2\pi k L$ ($k$ integer) give the same current (uniqueness issue); this periodicity also implies that the magnitude of the current has an upper bound (existence issue). Further, a non-interacting (or described within KS-DFT) homogeneous ring has energy degeneracy for even $N_\sigma$ (the degeneracy is lifted by many-body interactions).
To circumvent these occurrences, we choose $N_\sigma$ odd to avoid degeneracies. Furthermore, we fix $-\pi/L < \phi_\text{eff} \leq \pi/L$ in the reverse engineering scheme. However, even with this restriction, two different phases can yield the same current. Practically, we consistently choose the region for $\phi_{\text{eff}}$ that smoothly connects to $\phi$ for small $U$.
Finally, in practice the ``maximal current'' existence issue is largely mitigated, since the target current comes from a physical many-body system.
In the numerical reverse-engineering implementation of the DFT map, $\phi_{\text{eff}}$ and $v_{\text{eff}}$ are recursively updated until the interacting MB system and the KS system have the same current and density.
Using exact diagonalization, we obtain the exact many-body density $n_{\text{MB}}$ and current $I_{\text{MB}}$. These quantities are then used as input to obtain $\phi_{\text{eff}}$ and $v_{\text{eff}}$ according to the protocol~\cite{Karlsson2016a}
\begin{align}
&v_{\text{eff}}^{(k+1)} = v_\text{eff}^{(k)} + \alpha_1
\left (n^{(k)}_{\text{KS}} - n_{\text{MB}}\right ) \quad \text{ for all sites} \label{iterativeDensity} \\
&\phi_{\text{eff}}^{(k+1)} =
\phi_{\text{eff}}^{(k)} - \alpha_2 \left (I^{(k)}_{\text{KS}} -
I_{\text{MB}}\right ), \label{iterativeCurrent}
\end{align}
where $(k)$ denotes the $k$th iteration, and $\alpha_1,\alpha_2 < 1$ are convergence parameters.
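A minimal self-contained sketch of this reverse-engineering loop is given below. Here the target density and current come from a disordered {\it non-interacting} ring (standing in for the exact-diagonalization data), and the sign of the phase update, which depends on the assumed $x_{mn}$ convention, is chosen so that the iteration is stable:

```python
import numpy as np

L, T, N_up = 10, 1.0, 3
rng = np.random.default_rng(0)

def ring_density_current(v, phi):
    """Density and bond current of a non-interacting ring with onsite energies v."""
    H = np.diag(v).astype(complex)
    for m in range(L):
        H[m, (m + 1) % L] += -T * np.exp(1j * phi / L)
        H[(m + 1) % L, m] += -T * np.exp(-1j * phi / L)
    _, orbs = np.linalg.eigh(H)
    occ = orbs[:, :N_up]                      # lowest N_up orbitals per spin
    rho = occ @ occ.conj().T                  # rho[m, n] = <c^dag_n c_m> per spin
    n = 2.0 * np.real(np.diag(rho))
    I = -4.0 * T * np.imag(np.exp(1j * phi / L) * rho[0, 1])
    return n, I

# Target data: a disordered ring (stands in for the many-body n_MB, I_MB)
v_true = rng.uniform(-0.5, 0.5, L)
n_MB, I_MB = ring_density_current(v_true, -0.5)

# Reverse engineering: recursively update (v_eff, phi_eff) until the KS ring
# reproduces the target density and current
v_eff, phi_eff = np.zeros(L), 0.0
a1, a2 = 0.3, 0.3                             # convergence parameters < 1
for _ in range(5000):
    n_KS, I_KS = ring_density_current(v_eff, phi_eff)
    v_eff += a1 * (n_KS - n_MB)
    # sign of the phase update depends on the x_{mn} convention assumed above
    phi_eff += a2 * (I_KS - I_MB)
```

At convergence the KS ring carries the target density and current; $v_{\text{eff}}$ is determined only up to a constant, since a uniform shift leaves both unchanged.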
\subsection{Quantum rings: results}
We consider two electron concentrations: half-filling (HF, $N_\uparrow = 5$) and near quarter-filling (NQF, $N_\uparrow= 3$). Furthermore, we take $T=1$, which sets the energy unit.
For reference, we start our discussion with homogeneous rings (i.e. $v_i = 0$, which gives a constant density $n_i=N/L$ and a constant $v_{\text{eff}}$). In Fig.~\ref{homogeneousCase}, we show the HF and NQF currents and the corresponding values of $\phi_{\text{eff}}$ as functions of the interaction $U$, for fixed external phase $\phi = -0.5$. Both the HF and NQF currents $I$ decrease monotonically with $U$, but tend to zero and nonzero values, respectively. This is consistent with Mott-insulator behavior at HF and metallic behavior otherwise for the infinite $(L \to \infty)$ one-dimensional Hubbard model~\cite{Lieb1968}.
The homogeneity singles out the importance of the effective phase. The KS orbitals are plane waves for any value of $U$, and as such the current is determined solely by $\phi_{\text{eff}}$. This shows the importance of the effective phase in our Hamiltonian picture, and highlights that standard mean-field descriptions, which yield $\phi_\text{eff} = \phi$, cannot capture the correct physics.
We now address the effect of disorder in rings. We use $M=150$ box-disorder configurations. For a given configuration, the spread $\Delta X$ of a quantity $X$ over the $L=10$ sites is measured by
\begin{equation}
(\Delta X)^2 = \frac{1}{L}\! \sum _{m=1}^L \! \left ( \bar{X} - X_m \right )^2\!, \text{ with } \bar{X} = \frac{1}{L}\sum _{m=1}^L X_m. \label{averageDefinition}
\end{equation}
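In other words, \eqref{averageDefinition} is the population standard deviation of $X_m$ over the $L$ sites; a one-line helper:

```python
import numpy as np

def spread(X):
    """Spread Delta X over the sites: root-mean-square deviation from the mean."""
    X = np.asarray(X, dtype=float)
    return np.sqrt(np.mean((X - np.mean(X)) ** 2))
```

This coincides with `np.std(X)` and vanishes exactly for a homogeneous profile.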
Results are presented as i) histograms collecting data from each disorder configuration and ii) arithmetic averages over all $M$ configurations. We examine the dependence on the interaction $U$ of the current $I$, $\phi_{\text{eff}}$, $\Delta n$, and $\Delta v_{\text{eff}}$. The latter is a measure of disorder screening (in the homogeneous case, $\Delta v_{\text{eff}} = 0$ for all $U$).
With disorder ($W=2$), for both NQF and HF the current $I$ is hindered by disorder at low $U$ and by interactions at large $U$, with a maximum in between (Fig.~\ref{10SitesNQF}).
As for $W=0$ (Fig.~\ref{homogeneousCase}), for HF $I$ vanishes at very large $U$. The non-monotonic behavior of $I$ results from competing disorder and interactions \cite{Farhoodfar2011,Vojta1998,Karlsson2014b}. Conversely, the density spread $\Delta n$ decreases monotonically as a function of $U$ at both NQF and HF, i.e. interactions favor a more homogeneous density. For NQF, $\Delta n$ seems to tend to a finite value for large $U$, while for the HF case, $\Delta n \to 0$, i.e. a fully homogeneous density.
%
\begin{figure}[t]
\centering
\includegraphics[width=0.48\textwidth]{10Sites_again}
\caption{Disorder vs interactions in 10-site rings near quarter-filling (NQF, $N=6$) and at half-filling (HF, $N=10$) for $W=2$, $\phi=-0.5$. For $\Delta v_{\text{eff}}$, histograms and disorder averages are shown.
For $\phi_{\text{eff}}$, $I$, $\Delta n$ disorder averages are reported.
\label{10SitesNQF} }
\end{figure}
%
In the KS system, $\Delta v_{\text{eff}}$ also decreases monotonically as function of $U$, tending to a finite value for NQF and to zero for HF.
This means that the \emph{exact} $v_{\text{eff}}$ for a strongly correlated system is smoother than for a weakly correlated system, and similarly for the density. Thus, we cannot simply look at the spread of the density to predict the current through the system: Including the effective phase is crucial.
The competition of disorder and interactions thus translates into a competition of a decreasing effective potential spread (favoring the current) and a decreasing effective phase (reducing the current). Mean-field \cite{Farhoodfar2011} or DFT-LDA treatments \cite{Vettchinkina2013} fail to explain the current drop since they only take the effective potential into account: With an effective potential and no effective phase, the current can only increase with interactions. This ends our discussion on exact treatments of quantum rings.
\section{Open systems}
We study short clusters connected to semi-infinite leads, with Hamiltonian
\begin{equation}
\hat{H} = \hat{H} _{C} + \hat{H} _{l} + \hat{H} _{Cl},\label{Ham}
\end{equation}
where $C$, $l$, and $Cl$ label the cluster, leads, and cluster-leads coupling parts, respectively. With the same notation as for rings,
\begin{align}
\hat{H} _{C} \! = \! -T \! \! \! \! \! \! \! \sum_{\langle mn \rangle \in C, \sigma } \! \! \! \! \! \! \! \hat{c}^\dagger_{m\sigma} \hat{c}_{n\sigma} + \sum _{m \sigma} \! v_m \hat{n}_{m\sigma} + U\sum _m \! \hat{n}_{m\uparrow} \hat{n}_{m\downarrow}.
\label{central}
\end{align}
As in the case of the quantum rings, we consider box disorder, $v_m \in [-W/2, W/2]$. Depending on the cluster dimensionality, the leads are either 1D (chain) or 2D (strip) semi-infinite tight-binding structures. The latter case is shown in
Fig.~\ref{system}.
The lead Hamiltonian is $\hat{H}_{l}=\sum_\alpha \hat{H}_{\alpha}$, and $\alpha= r (l)$ refers to the right (left) contact:
\begin{equation}
\hat{H}_{\alpha } = -T \! \! \! \! \! \sum _{\langle mn \rangle \in \alpha, \sigma} \! \! \! \! \hat{c}^\dagger_{m\sigma} \hat{c}_{n\sigma} + b_\alpha (t) \hat{N}_\alpha.
\end{equation}
Here, $b_\alpha(t)$ is the (site-independent) bias in lead $\alpha$, and $\hat{N}_\alpha = \sum _{m \in \alpha,\sigma} \hat{n} _{m\sigma}$ the number operator in lead $\alpha$.
Finally, the lead-cluster coupling $\hat{H}_{Cl}$ connects the edges of the central region to the leads (Fig.~\ref{system}) with tunneling parameter $-T$. In the following, we put $T=1$, which defines the energy unit. We focus on the steady-state scenario with $b_r (t) = 0$ and $b_l \equiv b_l(t\to\infty)=1$, beyond the linear regime. Our 1D and 2D clusters have only $L=10$ sites, but this is large enough to illustrate the relevant physics and the scope of a DFT perspective. Also, we put $n_\uparrow = n_\downarrow = n$ (non-magnetic case) and set the temperature to zero.
\subsection{Steady-state Green's functions}
Both our many-body (MB) and KS treatments of open systems are based on NEGF in its steady-state formalism. Thus we keep our presentation general, and later specialize to MB or KS. To describe the steady-state regime, we use retarded $ {\bm G } ^R(\omega)$ and lesser $ {\bm G } ^<(\omega)$ Green's functions:
\begin{align}
& {\bm G } ^R(\omega) = \left [~\omega {\bm 1} - {\bm T } - {\bm v} - {\bm \Sigma } ^R (\omega)~ \right ]^{-1},\label{eq:retG}\\
& {\bm G } ^<(\omega) = {\bm G } ^R(\omega) {\bm \Sigma } ^< (\omega) {\bm G } ^A(\omega). \label{eq:lesserG}
\end{align}
Here, boldface quantities denote $L \times L$ matrices in site indices
of the cluster region. $ {\bm G } ^A = ( {\bm G } ^R)^\dagger$ is the advanced Green's function, $( {\bm T } )_{mn} = T_{mn}$ is the kinetic term of \Eq{central}, and ${\bm v}$ is not specified yet. The self-energy $ {\bm \Sigma } $ contains many-body (MB) and embedding (emb) parts: $ {\bm \Sigma } ^{R/<}= {\bm \Sigma } ^{R/<} _{\text{MB}} + {\bm \Sigma } ^{R/<} _{\text{emb}}$.
All correlation effects are contained in the many-body self-energy $ {\bm \Sigma } _{\text{MB}}^{R/<}$, whilst the embedding term
accounts in an exact way for the left ($l$) and right ($r$) leads~\cite{Myohanen2008}:
$ {\bm \Sigma } _{\text{emb}}^{R/<}= \sum_{\alpha=l,r} {\bm \Sigma } _{\alpha}^{R/<}$. More explicitly, $ {\bm \Sigma } _{\alpha}^{<}(\omega)=i f_\alpha(\omega) {\bm \Gamma } ^\alpha (\omega)$, where $ {\bm \Gamma } ^\alpha = -2\Im \left [ {\bm \Sigma } ^R_{\alpha}\right ]$ and
$ f_\alpha(\omega)=\theta(-\omega + \mu + b_\alpha)$. Thus, information of the
actual structure of the leads, the bias $b_\alpha$ and the chemical potential $\mu$ enters via $f_\alpha$ and $ {\bm \Sigma } _{\text{emb}}^{R}$. Explicit expressions for $ {\bm \Sigma } ^R_{\text{emb}}$ exist for 1D and 2D \cite{Myohanen2012Thesis} semi-infinite leads, since they are determined by the uncontacted-lead case.
For our system, the steady-state particle density and current are spin-independent. In each spin channel \cite{Meir1992}, with $I_l$ the left lead current, and the spin-labels omitted,
\begin{align}
&n_k = \int _{-\infty} ^\infty \frac{d\omega}{2\pi i} G_{kk}^<(\omega), \label{eq:density}\\
&I_l =\!\! \int _{-\infty} ^\infty \! \frac{d\omega}{2\pi i}
\text{Tr} \left [ {\bm \Gamma } ^l (\omega) \left ( {\bm G } ^< (\omega) - 2\pi i f_l(\omega) {\bm A} (\omega) \right ) \right ], \label{eq:meir}
\end{align}
where the spectral function is $ 2\pi {\bm A} = i( {\bm G } ^R- {\bm G } ^A)$. Both the interacting MB system as well as the KS system are described by Eqs.~(\ref{eq:retG}-\ref{eq:meir}). We now specialize the discussion to the separate cases.
\begin{figure}[t]
\centering
\includegraphics[width=0.48\textwidth]{collection2.pdf}
\caption{Density $n$, effective potential $v_{\text{eff}}$, effective bias $b_{\text{eff}}$ and current $I$ (multiplied by 4 for convenience) for a specific one-dimensional disordered wire with $W=2$ and bias $b=1$ for $U=0$ and $6$.
\label{oneConfiguration} }
\end{figure}
\subsubsection{The interacting MB system}
Here $(\bm v)_{ij}=\delta_{ij}v_i$ are the disordered onsite energies and $b_\alpha$ is the applied bias.
While the NEGF formalism provides a formally exact description for open systems, in practice the MB self-energies $ {\bm \Sigma } _{\text{MB}}^{R/<}$ need to be approximated. We consider the self-consistent second Born approximation $ {\bm \Sigma } _{\text{MB}}^{R/<} = {\bm \Sigma } _{\text{MB}}^{R/<} [ {\bm G } ^R, {\bm G } ^<]$ \cite{Myohanen2008,Friesen2009,Karlsson2014b,Thygesen2008}, keeping all diagrams up to second order. While the numerical details depend on the chosen approximation, our conclusions do not, as discussed in more detail below.
We solve the equations self-consistently, with the convergence rate improved with the Pulay scheme \cite{Pulay1980,Thygesen2008}. Fully self-consistent NEGF calculations guarantee the satisfaction of general conservation laws \cite{Baym1961, Baym1962}, and in particular the continuity equation \cite{Karlsson2016c}. In the context of steady-state transport, the continuity equation leads to the condition that $I_l = -I_r \equiv I$.
\subsubsection{The independent-particle KS system}
Being an independent-particle system, $ {\bm \Sigma } _{\text{MB}}^{R/<}=0$. Thus, the KS system is described exactly by steady-state NEGF. Further, $( {\bm v } )_{ij} \equiv \delta_{ij} v_{i,\text{eff}}$ and $b_{\alpha} \equiv b_{\text{eff},\alpha}$ are found iteratively to make the KS and MB density and current the same \cite{Karlsson2016a}. The same iteration protocol as for the quantum rings was used, \Eq{iterativeDensity} and \Eq{iterativeCurrent}, replacing $\phi_\text{eff}$ with $b_\text{eff}$.
The embedding self-energies in the KS and MB systems differ only by the bias (effective in KS, applied in MB). We restrict $b_{\text{eff},r} = 0$ and define $b_{\text{eff}} = b_{\text{eff},l}$.
The KS independent-particle scheme makes it possible to
write the Meir-Wingreen formula, \Eq{eq:meir}, in a Landauer-B\"uttiker form
$I = \int _{\mu}^{\mu + b_{\text{eff}}} \frac{d\omega}{2\pi} \mathcal{T}_{KS} (\omega)$, with $\mathcal{T}_{KS} = \text{Tr} \left [ {\bm \Gamma } ^l {\bm G } ^R {\bm \Gamma } ^r {\bm G } ^A \right ]$ the KS transmission function. Although recast in a Landauer-B\"uttiker form used for independent-particle systems, we stress that the KS current still equals the true current of the original interacting system.
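As a minimal numerical illustration of this Landauer-B\"uttiker form (a sketch with assumed parameters, not the disordered setup studied in this work), one can evaluate $\mathcal{T}_{KS}(\omega)$ for a clean tight-binding chain coupled to wide-band-limit leads and integrate it over the bias window:

```python
import numpy as np

# Illustrative sketch (assumed parameters, not the setup of this work):
# KS transmission T(w) = Tr[Gamma_l G^R Gamma_r G^A] for a clean 4-site
# tight-binding chain coupled to wide-band-limit leads.
N, t, gamma = 4, 1.0, 0.5
H = -t * (np.eye(N, k=1) + np.eye(N, k=-1))      # effective Hamiltonian, v_eff = 0 here

Gam_l = np.zeros((N, N)); Gam_l[0, 0] = gamma    # left lead couples to site 0
Gam_r = np.zeros((N, N)); Gam_r[-1, -1] = gamma  # right lead couples to site N-1

def transmission(w):
    # G^R = [w - H - Sigma^R]^{-1} with embedding Sigma^R = -i (Gam_l + Gam_r)/2
    GR = np.linalg.inv(w * np.eye(N) - H + 0.5j * (Gam_l + Gam_r))
    GA = GR.conj().T
    return np.trace(Gam_l @ GR @ Gam_r @ GA).real

# Current as the integral of T over the bias window [mu, mu + b_eff]
mu, b_eff = 0.0, 1.0
grid = np.linspace(mu, mu + b_eff, 400)
I = np.mean([transmission(w) for w in grid]) * b_eff / (2 * np.pi)
```

For this single-channel geometry $\mathcal{T}_{KS}\le 1$, so the sketch also makes the bias-window origin of the current transparent.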
\subsection{Open systems: results}
In the following, we put $\mu_\alpha = 0$ (half-filled leads). To address the behavior of $v_{\text{eff}}$ in quantum transport setups, we find it useful to start with one disorder configuration and two interaction values for a biased 10-site one-dimensional chain (Fig.~\ref{oneConfiguration}). At $U=0$, the density $n_k$ is non-uniform, since $v_{\text{eff}} = v$ and $b_{\text{eff}} = b$. For $U=6$, both $n_k$ and $v_{\text{eff}}$ (now incorporating correlations) become smoother: interactions thus provide a smoother energy landscape also for open systems. However, $I_{U=6}< I_{U=0}$, even though the effective energy landscape is smoother. This is due to $b_{\text{eff}}$, which at $U=6$ is much smaller than $b$ \cite{Stefanucci2004a,Stefanucci2015,Karlsson2016a}.
To corroborate this analysis, we consider the 2D open system of Fig.~\ref{system}. Results from 150 disorder configurations for $\Delta n$, $\Delta v_{\text{eff}}$ (defined as for rings, \Eq{averageDefinition}), $I$, and $b_{\text{eff}}$ are shown in Fig.~\ref{histogram2D}
\footnote{We do not include results from a small fraction of calculations $(\sim 1\%)$, which, especially for large $W$, did not converge.}.
%
\begin{figure}[t]
\centering
\includegraphics[width=0.47\textwidth]{All2DleadsW3}
\caption{Histograms of $\Delta n$, $\Delta v_{\text{eff}}$, $I$ and $b_{\text{eff}}$, and their arithmetic averages (dots), for the 2D quantum transport system of Fig.~\ref{system} with disorder strength $W=3$ and bias $b=1$.
The corresponding standard errors of the mean, $\sigma_{\bar{x}} = \sigma_x/\sqrt{M}$, are comparable to or smaller than the dot sizes, and thus not shown. Results for the 1D open system exhibit similar trends.}
\label{histogram2D}
\end{figure}
%
The current through the system is a non-monotonic function of $U$, while
$\Delta n$ and $\Delta v_{\text{eff}}$ decrease monotonically. At low $U$, $b_{\text{eff}}$ is almost equal to $b$, and $I$ increases since $\Delta v_{\text{eff}}$ decreases. At larger $U$, however, the drop in $b_{\text{eff}}$ grows, and $I$ becomes smaller. This explains the crossover in $I$.
Thus, in open systems, the competition between disorder and interactions translates into a competition between the smoothing of the energy landscape, which favors current flow, and the screening of the effective bias, which hinders it. We have performed the same analysis for one-dimensional linear chains, with the same qualitative results (not shown).
While the 2nd Born approximation is quite accurate at low interaction strengths \cite{Friesen2009,Uimonen2011, Hermanns2014a}, one can of course question its quantitative accuracy at higher interaction strengths. We find no reason to question the qualitative results of the approximation, however, since the behavior is similar for the quantum rings, which were treated exactly, and other calculations suggest similar conclusions~\cite{Denteneer1999,Vojta1998,Abrahams2001}. In order to further confirm the aforementioned qualitative behavior, we also performed calculations for selected disorder configurations (not shown) using the T-matrix approximation~\cite{Galitskii1958,Friesen2009,PuigvonFriesen2011,Schlunzen2016}, which takes higher-order processes into account in the self-energy. We found the same qualitative behavior as for the 2nd Born approximation, also for the higher interaction strengths considered.
\section{Conclusions}
We introduced an exact independent-particle characterization of coexisting disorder and interaction effects, based on density-functional theory (DFT). Its scope as a diagnostic for disorder screening was shown for open-sample geometries and small quantum rings.
The many-body treatment of the quantum rings was exact, which allowed us to unambiguously characterize disorder screening. For open systems, where no exact solutions are available, we used non-equilibrium Green's functions (NEGF), with biased reservoirs treated exactly and electronic correlations treated via the 2nd Born approximation. We stress that the use of an approximation was simply an expedient way to provide an input to our reverse engineering algorithm; more sophisticated methods can of course be used for the same purpose.
Our DFT-based analysis consistently shows that interactions smoothen the energy landscape in disordered systems out of equilibrium, for both closed quantum rings and open one- and two-dimensional quantum transport systems. In line with earlier qualitative pictures from the literature, it is tempting to think that the spread in the effective potential or the density can be taken as a measure of the conductance of a system. This is not the case, as this picture is not accurate enough to explain the non-monotonic behavior for the current when changing the interaction strength. To make the picture complete, the effective bias (phase) has to be taken into account.
The fact that the quantum rings and the one- and two-dimensional quantum transport systems yield the same behavior reinforces our conclusion that the independent-particle picture is general and can be applied to a wide range of systems.
Based on this interpretation, we can provide a simple explanation of why mean-field theories can predict too high a current in disordered systems. These methods neglect the correlation screening of the disordered potential and fully neglect the screening of the applied bias. To improve the picture, correlation effects need to be added.
To conclude, within our Hamiltonian independent-particle picture, strong correlation effects are behind the appearance of the effective potential and bias, and this is the essence of disorder screening.
As possible extensions of our approach, we mention applications to real materials and the generalization to finite temperatures~\cite{Mermin1965} to describe, for example, the many-body localized regime.
These are deferred to future work.
\begin{acknowledgments}
We acknowledge Fabian Heidrich-Meisner for useful discussions. D.K. would like to thank the Academy of Finland for support under project no. 267839.
\end{acknowledgments}
\section{Introduction}
\label{sec:Introduction}
\emph{Motion planning} is one of the basic tasks in robotics: it requires finding a suitable continuous motion
that transforms (or moves) a robotic device from a given initial position to a desired final position. A motion plan
is usually required to be \emph{robust}, i.e. the path of the robot must be a continuous function of the input data
given by the initial and final position of the robot.
Michael Farber \cite{Farber:TCMP} introduced the concept of \emph{topological complexity} as a
measure of the difficulty of finding continuous motion plans that yield robot trajectories for every admissible pair
of initial and final points (see also \cite{Farber:ITR}).
However, finding a suitable trajectory for a robot is only part of the problem, because
robots are complex devices and one has to decide how to move each robot component so that the entire device moves along
the required trajectory. To tackle this more general
\emph{robot manipulation problem} one must take into account the relation between the internal states of robot parts
(that form the so-called \emph{configuration space} of the robot) and the actual pose of the robot within its
\emph{working space}. The relation between the internal states and the poses of the robot is given by the \emph{forward
kinematic map}. In \cite{Pav:CFKM} we defined the \emph{complexity of the forward kinematic map} as a measure of
the difficulty of finding robust manipulation plans for a given robotic device.
We begin the paper with a survey of basic concepts of robotics which is intended as a motivation and background information
for various problems that appear in the study of topological complexity of configuration spaces and kinematic maps.
Section \ref{sec:Robot kinematics} contains a brief exposition of standard mathematical topics in robotics: description
of the position and
orientation of rigid bodies, classification of joints that are used to connect mechanism
parts, definition of configuration and working spaces and determination of the mechanism kinematics based on the motion
of the joints. Our exposition is by no means complete, as we limit our attention to concepts that appear in
geometrical and topological questions. More technical details can be found in standard books on robotics, like
\cite{RS}, \cite{MIRM} or \cite{Waldron-Schmiedeler}.
In Section \ref{sec:Topological properties} we consider the properties of kinematic maps that are relevant to the study
of topological
complexity (cf. \cite{Pav:CFKM}). In particular, we determine the dimension of the configuration space,
discuss when a kinematic map admits a continuous section (i.e. inverse kinematic map) and when a given kinematic map
is a fibration. We also mention some questions that arise in the kinematics of redundant manipulators.
The results presented in Section \ref{sec:Topological properties}
are not our original contribution, but we have made an effort to
give a unified exposition of relevant facts scattered in the literature, and to help
a topologically minded reader to familiarize herself or himself with aspects of robotics that have motivated
some recent work on topological complexity.
In the second part of the paper we recall some basic facts about topological complexity
in Section \ref{sec:TC overview} and then in Section \ref{sec:Complexity of a map} we
introduce the relative version, the complexity of a continuous map. We discuss some subtleties in the definition
of complexity, derive one basic estimate and present several possible applications. In Section
\ref{sec:Instability of robot manipulation} we relate the instabilities (i.e. discontinuities) that appear in
the manipulation of a mechanical system with the complexity of its kinematic map.
\section{Robot kinematics}
\label{sec:Robot kinematics}
In order to keep the discussion reasonably simple, we will restrict our attention to mechanical aspects of robotic
kinematics and disregard questions
of adaptivity and communication with humans or other robots. We will thus view a robot as a mechanical
device with rigid parts connected by joints. Furthermore, we will not take into account forces or
torques as these concepts properly belong to robot \emph{dynamics}.
A robot device consists of rigid components connected by joints that allow its parts to change their relative positions.
To give a mathematical model of robot motion
we need to describe the position and orientation of individual parts, determine the motion restrictions caused
by various types of joints, and compute the functional relation between the states of individual joints
and the position and orientation of the end-effector.
\subsection{Pose representations}
The spatial description of each part of a robot, viewed as a rigid body, is given by its \emph{position} and \emph{orientation},
which are collectively called \emph{pose}. The position is usually given by specifying a point in $\mathord{\mathbb{R}}^3$
occupied by some reference point in the robot part, and the orientation is given by an element of $SO(3)$.
Therefore, as $\mathord{\mathbb{R}}^3\times SO(3)$ is 6-dimensional, we need at least six coordinates to precisely locate each robot
component in Euclidean space. The representation
of the position is usually straightforward in terms of Cartesian or cylindrical coordinates, but the explicit
description of the orientation is more complicated.
Of course, we may specify an element of $SO(3)$ by a $3\times 3$ matrix, but that requires a list of 9 coefficients
subject to the orthogonality relations $R^TR=I$. Since the matrix $R^TR$ is symmetric, 3 of these 9 equations are
redundant, leaving 6 independent quadratic relations that involve
all coefficients. This considerably complicates computations involving relative positions of various parts,
so most robotics courses begin with a lengthy discussion of alternative representations of rotations.
This includes description of elements of $SO(3)$ as compositions of three rotations around coordinate axes
(\emph{fixed angles} representation), or by rotations around changing axes (\emph{Euler angles} representation),
or by specifying the axis and the angle of rotation (\emph{angle-axis} representation). While these representations
are more efficient in terms of data needed to specify a rotation, explicit formulas always have singularities,
where certain coefficients are undefined. This is hardly surprising, as we know that $SO(3)$ cannot be
parametrized by a single 3-dimensional chart. Other explicit descriptions of elements of $SO(3)$ include those
by quaternions and by
matrix exponential form. See \cite{RS}, \cite{Waldron-Schmiedeler} for more details and transition formulas between
different representations of spatial orientation.
The pose of a rigid body corresponds to an element of the special Euclidean group $SE(3)$, which can
be identified with the semi-direct product of $\mathord{\mathbb{R}}^3$ with $SO(3)$. Its elements admit a \emph{homogeneous representation}
by $4\times 4$-matrices of the form
$$\left(\begin{array}{cc}
R & \vec t\\
\vec 0^T & 1 \end{array}\right)$$
where $R$ is a $3\times 3$ special orthogonal matrix representing a rotation and $\vec t$ is a 3-dimensional
vector representing a translation.
The main advantage of this representation is that composition of motions is given by the multiplication of corresponding
matrices. Another frequently used representation of the rigid body motion is by \emph{screw transformations}. It is based
on Chasles' theorem, which states that a motion given by a rotation around an axis passing through the center
of mass, followed by
a translation, can be obtained by a screw-like motion given by simultaneous rotation and translation along a common axis
(parallel to the previous one). See \cite[section 1.2]{Waldron-Schmiedeler}
for more details about joint position and orientation representations.
Explicit representations of rigid body pose may be complicated but are clearly unavoidable when it
comes to numerical computations.
Luckily, topological considerations mostly rely on geometric arguments and rarely involve explicit formulae.
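As a small illustration of the homogeneous representation, the following sketch (with arbitrarily chosen rotation and translation, not tied to any particular mechanism) composes two rigid motions by multiplying their $4\times 4$ matrices:

```python
import numpy as np

def homogeneous(R, t):
    """4x4 homogeneous representation of an element (R, t) of SE(3)."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

# Composition of rigid motions is matrix multiplication of their representations.
Rz = np.array([[0.0, -1.0, 0.0],
               [1.0,  0.0, 0.0],
               [0.0,  0.0, 1.0]])                  # rotation by pi/2 about the z-axis
T1 = homogeneous(Rz, np.array([1.0, 0.0, 0.0]))    # rotate, then shift along x
T2 = homogeneous(np.eye(3), np.array([0.0, 2.0, 0.0]))
p = (T1 @ T2) @ np.array([0.0, 0.0, 0.0, 1.0])     # image of the origin under T1 o T2
```

The homogeneous coordinate $1$ in the last slot is what turns the affine action of $SE(3)$ into plain matrix-vector multiplication.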
\subsection{Joints and mechanisms}
Rigid components of a robot mechanism are connected by \emph{joints}, i.e. parts of the components' surfaces that are in
contact with each other. The geometry of the contact surface restricts the relative motion of the components connected
by a joint. Although most robot mechanisms employ only two basic kinds of joints, we will briefly describe
a general classification of various joint types. First of all, two objects can be in contact along a proper
surface (these are called \emph{lower pair} joints), along a line, or even along isolated points (in the case of
\emph{higher pair} joints).
There are six basic types of lower pair joints, the most important being the first three:
\emph{Revolute joints:} the contact between the bodies is along a surface of revolution, which allows rotational motion.
Revolute joints are usually abbreviated by (R) and the corresponding motion has one degree of freedom (1 DOF).
\emph{Prismatic joints:} the bodies are in contact along a prismatic surface. Prismatic joints are abbreviated as (P) and
admit a rectilinear motion with one degree of freedom.
\emph{Helical joints:} the bodies are in contact along a helical surface. Helical joints are abbreviated as (H) and allow screw-like
motion with one degree of freedom.
\begin{figure}[ht]
\centering
\includegraphics[scale=0.5]{fig1.png}
\caption{Revolute (R), prismatic (P) and helical (H) joints.}
\label{fig: RPH joints}
\end{figure}
\emph{Cylindrical joints:} denoted (C), where the bodies are in contact along a cylindrical surface. They allow simultaneous sliding and
rotation, so the corresponding motion has two degrees of freedom.
\emph{Spherical joints:} denoted (S), with bodies in contact along a spherical surface and allowing motion with three degrees of freedom.
\emph{Planar joints:} denoted (E), with contact along a plane and three degrees of freedom (plane sliding and rotation).
\begin{figure}[ht]
\centering
\includegraphics[scale=0.5]{fig2.png}
\caption{Cylindrical (C), spherical (S) and planar (E) joints.}
\label{fig: CSP joints}
\end{figure}
While the revolute, prismatic and helical joints can be easily actuated by motors or pneumatic cylinders, this is not
the case for the remaining three types, because they have two or three degrees of freedom and each degree of freedom
must be separately
actuated. As a consequence, they are used less frequently in robotic mechanisms and almost exclusively as passive
joints that restrict the motion of the mechanism.
Higher pair joints are also called rolling joints, being characterized by a one-dimensional contact between the bodies,
like a cylinder rolling on a plane, or by zero-dimensional contact, like a sphere rolling on a surface. They too appear
only as passive joints.
A complex of rigid bodies, connected by joints which as a whole allow at least one degree of freedom, forms
a \emph{mechanism} (if no movement is possible, it is called a \emph{structure}).
A mechanism is often schematically described by a graph whose vertices
are the joints and whose edges correspond to the components. The graph may occasionally be complemented with symbols
indicating the type of each joint
or its degree of freedom. A manipulator whose graph is a path is called a \emph{serial chain}.
This class is sometimes extended to
include manipulators with tree-like graphs as in robot hands with fingers or in some gripping mechanisms. A serial
manipulator necessarily contains only actuated joints and is often codified by listing the symbols for its joints.
For example, (RPR) denotes a chain in which the first joint is revolute, the second is prismatic, and the third is
again revolute. Typical serial chains are various kinds of robot arms.
Manipulators whose graphs contain one or more cycles are \emph{parallel}. Typical parallel mechanisms are various
lifting platforms (see Figure \ref{fig: ser-par}). We will see later that the kinematics of serial mechanisms is quite
different from the kinematics of parallel mechanisms and requires different methods of analysis.
\begin{figure}[ht]
\centering
\includegraphics[scale=0.5]{fig3.png}
\caption{Serial (RRR) robot `triple-roll wrist' and parallel Stewart platform mechanism where each leg is of type (SPS).}
\label{fig: ser-par}
\end{figure}
\subsection{Kinematic maps}
Let us consider a simple example -- a pointing mechanism with two revolute joints as in Figure \ref{fig: pointing}.
Since each of the joints can rotate a full circle, we can specify their position by giving two angles, or better, two
points on the unit circle $S^1$. Every choice of angles uniquely determines the longitude and the latitude of a point on
the sphere. Thus we obtain an elementary example of a kinematic mapping $f\colon S^1\times S^1\to S^2$, explicitly given
in terms of geographical coordinates as $f(\alpha,\beta)=(\cos\alpha\cos\beta,\cos\alpha\sin\beta,\sin\alpha)$,
so that $\alpha$ is the latitude and $\beta$ is the longitude of a point on the sphere.
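This kinematic map is easily checked numerically; the following short sketch verifies that the image of $f$ lies on the unit sphere and that, at the poles, an entire circle of joint configurations is mapped to a single pose:

```python
import numpy as np

# The kinematic map f : S^1 x S^1 -> S^2 of the (RR) pointing mechanism,
# in the geographical coordinates used in the text (alpha = latitude, beta = longitude).
def f(alpha, beta):
    return np.array([np.cos(alpha) * np.cos(beta),
                     np.cos(alpha) * np.sin(beta),
                     np.sin(alpha)])

p = f(np.pi / 6, np.pi / 4)
assert abs(np.linalg.norm(p) - 1.0) < 1e-12        # image lies on the unit sphere

# At the poles (alpha = +-pi/2) the longitude beta is irrelevant: a whole circle
# of joint configurations maps to a single pose.
assert np.allclose(f(np.pi / 2, 0.0), f(np.pi / 2, 1.0))
```

The collapse at the poles is exactly the kind of kinematic singularity discussed in the section on singularities of kinematic maps.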
\begin{figure}[ht]
\centering
\includegraphics[scale=0.5]{fig5.png}
\caption{Pointing (RR) mechanism.}
\label{fig: pointing}
\end{figure}
Given two bodies connected by a joint $J$, we define the \emph{joint space} of $J$ as the subspace (usually a submanifold)
of $SE(3)$ that corresponds to all possible relative displacements of the two bodies. So the joint space of a revolute joint
is (homeomorphic to) $S^1$, the joint spaces of prismatic and helical joints are closed segments $B^1$, the joint space of
a cylindrical
joint is $B^1\times S^1$, and the joint spaces of spherical and planar joints are $B^2\times S^1$ (note that theoretically
a spherical joint should have the entire $SO(3)$ as a joint space, but such a level of mobility cannot be technically achieved).
The \emph{joint space} of a robot manipulator $\mathcal{M}$ is the product of the joint spaces of its joints.
Its \emph{configuration space} $\cs(\mathcal{M})$ is the subset of the joint space of $\mathcal{M}$, consisting of
values for the joint variables that satisfy all constraints determined by a geometrical realization of the manipulator.
The component of a manipulator that performs a desired task is called an \emph{end-effector}.
The \emph{kinematic mapping} for $\mathcal{M}$ is the function $f\colon \cs(\mathcal{M})\to SE(3)$ that to every admissible
configuration of joints assigns the pose of the end-effector. The image of the kinematic mapping is called
the \emph{working space}
of $\mathcal{M}$ and is denoted $\ws(\mathcal{M})$. Often we only care about the position (or orientation) of
the end-effector
and thus consider just the projections of the working space to $\mathord{\mathbb{R}}^3$ (or $SO(3)$). The \emph{inverse kinematic mapping}
for $\mathcal{M}$ is a right inverse (section) for $f$, i.e. a function $s\colon\ws(\mathcal{M})\to\cs(\mathcal{M})$,
satisfying $f\circ s=\mathord{\mathrm{Id}}_{\ws(\mathcal{M})}$. We will see later that many kinematic maps (especially for serial chains)
do not admit continuous inverse kinematic maps.
In order to study or manipulate a robot mechanism one must explicitly compute its forward kinematic map.
To this end it is necessary to describe
the poses of the joints, and this cannot be done in absolute terms because the movement of each joint can
change the pose of other joints. For a serial chain it makes sense to specify the pose of each joint relative to
the previous joint in
the chain. In other words, we can fix a reference frame for each joint in the chain, and then form a list starting with
the pose of
the first joint, followed by the difference between the poses of the first joint and the second joint, followed by
the difference between
the poses of the second joint and the third joint, and so on. Of course, each difference is an element of $SE(3)$, hence
it is determined
by six parameters. However, by a judicious choice of reference frames one can reduce this to just four parameters for
each difference, with an additional bonus of a very simple computation of the joint poses. This method, introduced
by Denavit and Hartenberg \cite{D-H}, is the most widely used approach for the computation of the forward kinematic map
for serial chains, so it deserves to be described in some detail.
Assume we are given a serial chain with links and revolute joints as in Figures \ref{fig: SCARA} and
\ref{fig: serial}. For the first joint we
let the $z$-axis be the axis of rotation of the joint, and choose the $x$-axis and $y$-axis so as to form
a right-handed orthogonal frame. We may fix the frame
origin within the joint but that is not required, and indeed the other frame origins usually do not coincide
with the positions of the respective joints.
Given the frame $\mathbf{x},\mathbf{y},\mathbf{z}$ for the $i$-th joint, the frame $\mathbf{x'},\mathbf{y'},\mathbf{z'}$
for the $(i+1)$-st joint is chosen as follows:
\begin{figure}[ht]
\centering
\includegraphics[scale=0.5]{fig5a.png}
\caption{Denavit-Hartenberg parameters and frame convention.}
\label{fig: DH}
\end{figure}
\begin{itemize}
\item $\mathbf{z'}$ is the axis of rotation of the joint;
\item $\mathbf{x'}$ is the line containing the shortest segment connecting $\mathbf{z}$ and $\mathbf{z'}$ (its
direction is thus $\mathbf{z'}\times \mathbf{z}$);
if $\mathbf{z}$ and $\mathbf{z'}$ are parallel, we usually
choose the line through the origin of the $i$-th frame;
\item $\mathbf{y'}$ forms a right-handed orthogonal frame with $\mathbf{x'}$ and $\mathbf{z'}$.
\end{itemize}
The relative position of the frame $\mathbf{x'},\mathbf{y'},\mathbf{z'}$ with respect to the frame
$\mathbf{x},\mathbf{y},\mathbf{z}$ is given by four Denavit-Hartenberg
parameters $d,\theta,a,\alpha$ (see Figure \ref{fig: DH}), where
\begin{itemize}
\item $d$ and $\theta$ are the distance and the angle between the axes $\mathbf{x}$ and $\mathbf{x'}$;
\item $a$ and $\alpha$ are the distance and angle between the axes $\mathbf{z}$ and $\mathbf{z'}$.
\end{itemize}
Using the above procedure,
one can describe the structure of a serial chain with
$n$ joints by giving the initial frame and a list of $n$ quadruplets of Denavit-Hartenberg parameters.
Moreover, the Denavit-Hartenberg approach can easily be extended to
handle combinations of prismatic and revolute joints, and to take into account some exceptional configurations,
e.g. coinciding axes of rotation.
Once the structure of the serial chain is coded in terms of Denavit-Hartenberg parameters it is not difficult
to write explicitly the corresponding kinematic map as a product
of rotation and translation matrices. It is important to note that, for each revolute joint, $\theta$ is precisely the joint
parameter describing the rotation of that joint by the angle $\theta$.
We omit the tedious computation and just mention that the kinematic map of a robot arm with $n$
revolute joints is a product of
$n$ Denavit-Hartenberg matrices of the form
$$\left(\begin{array}{cccc}
\cos\theta & -\cos\alpha\sin\theta & \sin\alpha \sin\theta & a\cos\theta \\
\sin\theta & \cos\alpha\cos\theta & -\sin\alpha \cos\theta & a\sin\theta \\
0 & \sin\alpha & \cos\alpha & d \\
0 & 0 & 0 & 1
\end{array}\right)$$
where the parameters range over all joints in the chain.
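A direct transcription of this matrix into code (the function names and the two-link test parameters below are ours, chosen for illustration) composes one Denavit-Hartenberg matrix per joint to obtain the forward kinematic map:

```python
import numpy as np

def dh_matrix(theta, d, a, alpha):
    """Homogeneous 4x4 Denavit-Hartenberg matrix, exactly as displayed above."""
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([[ ct, -ca * st,  sa * st, a * ct],
                     [ st,  ca * ct, -sa * ct, a * st],
                     [0.0,       sa,       ca,      d],
                     [0.0,      0.0,      0.0,    1.0]])

def forward_kinematics(dh_params):
    """Compose one DH matrix per joint; dh_params = [(theta, d, a, alpha), ...]."""
    T = np.eye(4)
    for params in dh_params:
        T = T @ dh_matrix(*params)
    return T

# Illustrative planar (RR) arm with two unit links, both joints turned by pi/2:
# the end-effector ends up at (-1, 1, 0).
T = forward_kinematics([(np.pi / 2, 0.0, 1.0, 0.0),
                        (np.pi / 2, 0.0, 1.0, 0.0)])
```

The $3\times 3$ block of the result is the end-effector orientation and the last column its position, so the pose is read off directly from the product.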
\section{Topological properties of kinematic maps}
\label{sec:Topological properties}
In this section we present a few results of topological nature concerning configuration spaces, working spaces and kinematic
maps. Although the complexity can be defined for any continuous map,
these results hint at additional conditions that can be reasonably imposed in concrete applications,
and conversely, show that some common topological assumptions
(e.g. that the kinematic map is a fibration) can be too much to ask for in practical applications.
\subsection{Mobility}
In order to form a \emph{mechanism} a set of bars and joints must be mobile, otherwise it is more properly called
a \emph{structure}. The \emph{mobility} of a robot mechanism is usually defined to be the number of its degrees of
freedom. In more mathematical terms, we can identify mobility as the dimension of the configuration space (at least
when $\cs$ is
a manifold or an algebraic variety). The mobility of a serial mechanism is easily determined: it is the sum of the
degrees of freedom of its joints (which usually coincides with the number of joints,
because actuated joints have one degree of freedom).
In parallel mechanisms, links that form cycles reduce the mobility of the mechanism. Assume that a mechanism consists
of $n$ moving bodies that are connected
directly or indirectly to a fixed frame. If they are allowed to move independently in the space then
the configuration space is $6n$-dimensional. Each joint introduces some constraints and generically
reduces the dimension of the configuration space by $6-f$, where $f$ is the degree of freedom of the joint. Therefore,
if there are $g$ joints whose degrees of freedom are $f_1,f_2,\ldots,f_g$, and if they
are independent, then the mobility of the system is
$$M=6n-\sum_{i=1}^g (6-f_i)= 6(n-g)+\sum_{i=1}^g f_i.$$
This is the so-called \emph{Gr\"ubler formula} (sometimes called the Chebyshev-Gr\"ubler-Kutzbach formula).
If the mechanism is planar, then each body has three degrees of freedom (two planar coordinates and the plane rotation), and each joint introduces $3-f$ constraints. The corresponding planar Gr\"ubler
formula gives the mobility as
$$M= 3(n-g)+\sum_{i=1}^g f_i.$$
For example, in a simple planar linkage with four links (of which one is fixed), connected by four revolute joints, the mobility is $M=3\times(3-4)+4=1$.
\begin{figure}[ht]
\centering
\includegraphics[scale=0.5]{fig6.png}
\caption{Four-bar linkage with mobility 1.}
\label{fig: four bars}
\end{figure}
Observe however that the Gr\"ubler formula relies on the assumption that the constraints are independent.
In more complicated mechanisms relations between motions of adjacent joints may lead to redundant degrees of freedom
in the sense that some motions are always related. For example, in a (SPS) configuration, with a prismatic joint between
two spherical joints, a rotation of one spherical joint is transmitted to an equivalent
rotation of the other spherical joint. Thus the resulting degree of freedom is not $7=3+1+3$ but $6$.
For example, in the Stewart platform shown in Figure \ref{fig: ser-par} there is the fixed base, 13 mobile links, and
18 joints, of which 6 in the struts are prismatic with one degree of freedom,
and the remaining 12 at both platforms have three degrees of freedom each. Thus by the Gr\"ubler formula, the mobility of
the Stewart platform should be equal to
$M=(13-18)\times 6+12\times 3+6\times 1=12$, but in fact each leg has one redundant degree of freedom, so the mobility
of the Stewart platform is 6. Observe that to achieve all positions in the configuration
space, one must actuate at least $M$ joints (assuming that only joints with one degree of freedom are actuated). In fact,
in the Stewart platform the six prismatic joints are actuated, while the spherical joints are passive.
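The Gr\"ubler computations above can be reproduced with a short helper (a direct transcription of the formula; the function name is ours):

```python
def mobility(n_moving, joint_dofs, spatial=True):
    """Gruebler formula: M = k*(n - g) + sum(f_i), with k = 6 (spatial) or 3 (planar)."""
    k = 6 if spatial else 3
    g = len(joint_dofs)
    return k * (n_moving - g) + sum(joint_dofs)

# Planar four-bar linkage: 3 moving links, 4 revolute joints (1 DOF each).
assert mobility(3, [1, 1, 1, 1], spatial=False) == 1

# Stewart platform: 13 moving links, 6 prismatic (1 DOF) and 12 spherical (3 DOF) joints.
# The formula gives 12; subtracting the 6 redundant leg rotations yields the true mobility 6.
assert mobility(13, [1] * 6 + [3] * 12) == 12
```

The Stewart-platform case makes the caveat explicit: the formula assumes independent constraints, so redundant degrees of freedom must be subtracted by hand.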
\subsection{Inverse kinematics}
A crucial step in robot manipulation is the determination of a configuration of joints that realizes a given pose in
the working space. In other words, we need to find an inverse kinematic map
in order to reduce the manipulation problem to a navigation problem within $\cs$. However, very often
we must rely on partial inverses because there are topological obstructions to the existence of an inverse kinematic map
defined on the entire working space $\ws$. The following result is due to Gottlieb \cite{Gottlieb:RFB}:
\begin{theorem}
\label{sections}
A continuous map $f\colon (S^1)^n\to \ws$ where $\ws=S^2, SO(3)$ or $SE(3)$ does not admit a continuous section.
\end{theorem}
\begin{proof}
A continuous map $s\colon \ws\to (S^1)^n$, such that $f\circ s=\mathord{\mathrm{Id}}_\ws$ induces a homomorphism between fundamental
groups $s_\sharp\colon\pi_1(\ws)\to\pi_1(\cs)$ satisfying $f_\sharp\circ s_\sharp=\mathord{\mathrm{Id}}$. However, the identity on
the torsion group
$\pi_1(SO(3))=\pi_1(SE(3))=\mathord{\mathbb{Z}}_2$ cannot factor through the free abelian group $\pi_1((S^1)^n)=\mathord{\mathbb{Z}}^n$.
Similarly, by applying the second homotopy group functor, we conclude that the identity on $\pi_2(S^2)=\mathord{\mathbb{Z}}$ cannot
factor through $\pi_2((S^1)^n)=0$.
\end{proof}
As a consequence, if one wants to use a serial manipulator to move the end-effector in a spherical space around the
device, or to control a robot arm that is able to
assume any orientation, then the computation of joint configurations that yield a desired position or orientation
requires a partitioning of the working space into subspaces
that admit inverse kinematics. This explains the popularity of certain robot configurations that avoid this problem.
A typical example is the SCARA ('Selective Compliance Assembly Robot Arm') design.
\begin{figure}[ht]
\centering
\includegraphics[scale=0.5]{fig6a.png}
\caption{SCARA working space.}
\label{fig: SCARA}
\end{figure}
Its working space is a doughnut-shaped region homeomorphic to $S^1\times I\times I$, so
the previous theorem does not apply. Indeed, it is not difficult to obtain an inverse kinematic map for
the SCARA robot arm.
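As an illustration, here is a minimal sketch of an inverse kinematic map for the positioning part of a SCARA-type arm, i.e. the planar RR chain. The link lengths are hypothetical, and the elbow-up branch of the inverse is chosen consistently, so the map is defined on the whole annulus:

```python
import math

L1, L2 = 1.0, 0.7  # hypothetical link lengths

def forward(alpha, beta):
    """Forward kinematic map of the planar RR positioning part."""
    return (L1 * math.cos(alpha) + L2 * math.cos(alpha + beta),
            L1 * math.sin(alpha) + L2 * math.sin(alpha + beta))

def inverse(x, y):
    """One continuous inverse branch (elbow-up) defined on the whole
    annulus |L1 - L2| <= r <= L1 + L2."""
    c = (x * x + y * y - L1 * L1 - L2 * L2) / (2 * L1 * L2)
    c = max(-1.0, min(1.0, c))   # guard against rounding just outside [-1, 1]
    beta = math.acos(c)          # elbow-up choice: beta in [0, pi]
    alpha = math.atan2(y, x) - math.atan2(L2 * math.sin(beta),
                                          L1 + L2 * math.cos(beta))
    return alpha, beta

x, y = forward(0.4, 1.1)
a, b = inverse(x, y)
print(abs(a - 0.4) < 1e-9 and abs(b - 1.1) < 1e-9)  # True: pose recovered
```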
A similar question arises if one attempts to write an explicit formula to compute
the axes of rotations in $\mathord{\mathbb{R}}^3$.
It is well-known that for every non-identity rotation of $\mathord{\mathbb{R}}^3$ there is a uniquely defined axis of rotation, viewed
as an element of $\mathord{\mathbb{R}} P^2$. For programming
purposes it would be useful to have an explicit formula that to a matrix $A\in SO(3)-I$ assigns a vector
$(a,b,c)\in\mathord{\mathbb{R}}^3$ determining the axis of the rotation represented by $A$.
But that would amount to a factorization of the axis map $SO(3)-I\to\mathord{\mathbb{R}} P^2$
through some continuous map
$f\colon SO(3)-I\to\mathord{\mathbb{R}}^3-\mathbf{0}$, which cannot be done, because the axis map induces an isomorphism on
$\pi_1(SO(3)-I)=\pi_1(\mathord{\mathbb{R}} P^2)=\mathord{\mathbb{Z}}_2$, while $\pi_1(\mathord{\mathbb{R}}^3-\mathbf{0})=0$.
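A common local recipe reads the axis off the skew-symmetric part of $A$. The sketch below (an assumed helper, not from the text) shows how this formula degenerates at rotations by $\pi$, in line with the topological obstruction just described:

```python
import numpy as np

def rot_z(theta):
    """Rotation by theta around the z-axis."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def axis_from_skew(A):
    """Axis candidate read off the skew-symmetric part of A in SO(3):
    A - A^T = 2 sin(theta) [u]_x.  The formula is valid only for rotation
    angles in (0, pi); at theta = pi the skew part vanishes, so this local
    recipe cannot be extended to a formula on all of SO(3) - I."""
    v = np.array([A[2, 1] - A[1, 2], A[0, 2] - A[2, 0], A[1, 0] - A[0, 1]])
    n = np.linalg.norm(v)
    return v / n if n > 1e-12 else None

print(axis_from_skew(rot_z(1.0)))    # [0. 0. 1.]
print(axis_from_skew(rot_z(np.pi)))  # None: the formula degenerates
```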
\subsection{Singularities of kinematic maps}
In robotics, the term \emph{kinematic singularity} is used to denote the reduction in freedom of movement in
the working space that arises in certain joint configurations.
Consider for example the pointing mechanism in Figure \ref{fig: pointing} and imagine that it is steered so as to point
toward some flying object.
If the object heads directly toward the north pole and from that point moves sidewise, then the mechanism will
not be able to follow it in a continuous
manner, because that would require an instantaneous rotation around the vertical axis. Similarly, if the object
flies very close to the axis through the north pole, then a continuous tracking is theoretically possible, but it may
require infeasibly high rotational speeds.
Both problems are caused by the fact that the poles are singular values of the forward kinematic map. More precisely,
let us assume that
$\cs$ and $\ws$ are smooth manifolds, and $f\colon\cs\to\ws$ is a smooth map. Then $f$ induces the derivative map $f_*$
from the tangent bundle of $\cs$ to the tangent
bundle of $\ws$. If $f_*$ is not onto at some point $c\in\cs$ (or equivalently, if the Jacobian of $f$ does not have maximal
rank at $c$), then it is not possible
to move 'infinitesimally' in certain directions from $f(c)$ while staying in a neighbourhood of $c$.
This phenomenon is clearly visible
in Figure \ref{fig: singularity}, which
depicts the kinematic map of the pointing mechanism.
\begin{figure}[ht]
\centering
\includegraphics[scale=0.5]{fig7.png}
\caption{Singularities of the pointing mechanism.}
\label{fig: singularity}
\end{figure}
For generic points the Jacobian of $f$ is a non-singular $2\times 2$-matrix, which means that the mechanism can move
in any direction. However, for $\alpha=\pi/2$ the range of the Jacobian is 1-dimensional, therefore (infinitesimal)
motion is possible only along one direction. While the explicit computation is somewhat tedious, there is a nice
conceptual way to arrive at that conclusion. In fact, in this case the kinematic map happens to be the Gauss map
of the torus, and it is known that the determinant of its Jacobian is precisely the Gauss curvature. Therefore,
the singularities occur where the Gauss curvature is zero, i.e. along the top and the bottom parallels of the torus.
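The rank drop can also be checked numerically. The sketch below assumes the standard azimuth-elevation parametrization $n(\alpha,\beta)=(\cos\alpha\cos\beta,\cos\alpha\sin\beta,\sin\alpha)$ of the pointing map and confirms that the Jacobian has rank 2 at a generic point but rank 1 at $\alpha=\pi/2$:

```python
import numpy as np

def pointing(a, b):
    """Assumed azimuth-elevation form of the pointing map (S^1)^2 -> S^2."""
    return np.array([np.cos(a) * np.cos(b), np.cos(a) * np.sin(b), np.sin(a)])

def jacobian(a, b, h=1e-6):
    """Central-difference Jacobian of the pointing map."""
    da = (pointing(a + h, b) - pointing(a - h, b)) / (2 * h)
    db = (pointing(a, b + h) - pointing(a, b - h)) / (2 * h)
    return np.column_stack([da, db])

print(np.linalg.matrix_rank(jacobian(0.3, 0.8), tol=1e-4))        # 2: regular
print(np.linalg.matrix_rank(jacobian(np.pi / 2, 0.8), tol=1e-4))  # 1: singular
```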
We are going to show that the above situation is not an exception.
To this end let us examine singularities in spatial positioning for
a serial chain consisting of revolute joints as in Figure \ref{fig: serial}.
\begin{figure}[ht]
\centering
\includegraphics[scale=0.5]{fig8.png}
\caption{Serial chain with revolute joints.}
\label{fig: serial}
\end{figure}
Several observations can be made. First, note that the joints can always be rotated so that all axes become parallel
to some plane. Indeed, let us denote by $\vec x_i$ the vector product of the $i$-th and $(i+1)$-st joint axis
in the chain, as in the Denavit-Hartenberg convention, and then apply the following procedure. The first two axes are
parallel to a plane $P$, normal to $\vec x_1$, so we can rotate the
second joint until $\vec x_2$ becomes aligned with $\vec x_1$. Then the first three axes are all parallel to $P$.
Continue the rotations until all axes are parallel to $P$.
Now observe that the system cannot move infinitesimally in the direction normal to the plane.
In fact, the infinitesimal motion of the end-effector can be written as a vector sum
of infinitesimal motions of each joint. Clearly, if all axes are parallel to $P$, infinitesimal
motion in the direction that is normal to $P$ is impossible. Conversely, if the vectors
$\vec x_i$ are not all aligned, then the infinitesimal moves span $\mathord{\mathbb{R}}^3$, so the mechanism is not in singular
position. Furthermore, whenever the serial chain is aligned
in a singular configuration, we may rotate around the first and the last axis and clearly, still remain
in a singular configuration. Therefore, the set of singular points of
a serial chain is a union of two-dimensional tori.
\begin{theorem}
A serial manipulator with revolute links is in singular position if, and only if, all axes are parallel to a plane.
Moreover, the set of singular points is 2-dimensional.
\end{theorem}
The 'if' part of the above theorem was known to Hollerbach \cite{Hollerbach}, and the general formulation is due to Gottlieb \cite{Gottlieb}.
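Since the infinitesimal motion of the end-effector is the vector sum of contributions along the joint axes, the criterion of the theorem reduces to a rank test on the axis directions: all axes are parallel to a common plane exactly when the direction vectors span at most a two-dimensional subspace. A minimal sketch of this test:

```python
import numpy as np

def is_singular(axes, tol=1e-8):
    """A serial chain of revolute joints is in a singular configuration
    iff all joint axes are parallel to a common plane, i.e. the matrix
    of unit axis direction vectors has rank at most 2."""
    A = np.array([np.asarray(a, float) / np.linalg.norm(a) for a in axes])
    return np.linalg.matrix_rank(A, tol=tol) <= 2

print(is_singular([[1, 0, 0], [0, 1, 0], [1, 1, 0]]))  # True: axes in xy-plane
print(is_singular([[1, 0, 0], [0, 1, 0], [0, 0, 1]]))  # False: axes span R^3
```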
As a consequence, if the joint space is 3-dimensional, or more generally, if the robot arm is modular,
so that three joints are used for positioning and the remaining joints take care of the orientation of the arm,
then the singular space forms a separating subset (at least locally), which means
that in general we cannot avoid singularities while moving the robot arm between different positions
in the working space. This problem can be avoided
if the manipulator has more than three joints dedicated to the positioning of the arm. Even better, in redundant
systems one can arrange that singular values always
contain regular points in their preimages, which at least in principle opens a possibility for regular motion planning.
It is reasonable to ask whether one can completely eliminate singularities by constructing a robot arm with sufficiently
many revolute joints. Unfortunately, the answer is again negative, as Gottlieb \cite{Gottlieb} showed that every
map from a torus to standard working spaces must have singularities.
\begin{theorem}
\label{thm:singularities}
Every smooth map $f\colon (S^1)^n\to \ws$ where $\ws=S^2, SO(3)$ or $SE(3)$ has singular points.
\end{theorem}
\begin{proof}
Assume that the Jacobian of $f$ is everywhere surjective.
Then $f$ is a submersion, and therefore, by
a classical theorem of Ehresmann \cite{Ehresmann}, it is a fibre bundle. It follows that the composition of $f$
with the universal covering map $p\colon \mathord{\mathbb{R}}^n\to (S^1)^n$
yields a fibre bundle $f\circ p\colon \mathord{\mathbb{R}}^n\to\ws$, whose fibre is a submanifold of $\mathord{\mathbb{R}}^n$.
But $\mathord{\mathbb{R}}^n$ is contractible, so the fibre of $f\circ p$ is homotopy
equivalent to the loop space $\Omega\ws$. This contradicts the finite-dimensionality of the fibre,
because the loop spaces $\Omega S^2$, $\Omega SO(3)$ and $\Omega SE(3)$ are known to have infinitely
many non-trivial homology groups.
\end{proof}
Note that under the general assumptions of the theorem we do not have any information about the dimension of
the singular set.
\subsection{Redundant manipulators}
Although Theorem \ref{thm:singularities} implies that even redundant mechanisms have singularities,
the extra room in the configuration spaces of redundant manipulators
allows construction and extension of local inverse kinematic maps, so one can hope to achieve non-singular
manipulation planning over large portions of the working space. In fact, there is a great deal of ongoing research
that tries to exploit additional degrees of freedom in redundant manipulators. See for example the survey
\cite{Tutorial}, or a more recent book chapter \cite{Kinematically Redundant Manipulators}
for possible approaches to inverse kinematic maps and manipulation, especially in terms of differential equations
and variational problems. In this subsection we will focus on some qualitative questions. To fix the ideas, let us
assume that the mechanism has only revolute joints, so $\cs=(S^1)^n$, and
consider either the pointing or orientation mechanisms, i.e. $\ws=S^2$ or $\ws=SO(3)$. Moreover, assume that
$\dim \cs> \dim\ws$ so that the mechanism is redundant.
A standard approach to robot manipulation is the following: given a robot's initial joint configuration
$c\in\cs$ and end-effector's required final position $w\in\ws$, one first computes the initial position $f(c)\in \ws$,
and finds a motion from $f(c)$ to $w$ represented by a path $\alpha\colon [0,1]\to\ws$ from $\alpha(0)=f(c)$
to $\alpha(1)=w$. Then the path $\alpha$ is lifted to $\cs$ starting from the initial point $c$, thus obtaining
a path $\widetilde\alpha\colon[0,1]\to\cs$. The lifting $\widetilde\alpha$ represents the motion of joints
that steers the robot to the required position $w$, which is reminiscent
of the path-lifting problem in covering spaces or fibrations. However, we know that the kinematic map is not
a fibre bundle in general, and so the path lifting must avoid singular points. From the computational viewpoint the most
natural approach to path lifting is by solving differential equations but there are also other approaches.
We will follow \cite{Baker} and call any such lifting method a \emph{tracking algorithm}.
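The most common tracking algorithm integrates the differential relation $\dot\theta=J^{+}\dot p$, i.e. the Jacobian pseudoinverse method. The following sketch, for an assumed redundant planar 3R arm with unit link lengths (an illustrative example, not taken from the text), lifts a small closed loop in the working space step by step; the lifted path ends at the prescribed position, but the final joint configuration generically differs from the initial one, so the tracking is not cyclic:

```python
import numpy as np

def fk(theta):
    """End-effector position of the planar 3R chain with unit links."""
    angles = np.cumsum(theta)
    return np.array([np.cos(angles).sum(), np.sin(angles).sum()])

def jac(theta):
    """Jacobian of fk; column i is the derivative w.r.t. joint i."""
    angles = np.cumsum(theta)
    J = np.zeros((2, 3))
    for i in range(3):
        J[0, i] = -np.sin(angles[i:]).sum()
        J[1, i] = np.cos(angles[i:]).sum()
    return J

def track(theta0, path):
    """Lift a discretized working-space path to joint space via the
    Jacobian pseudoinverse (self-correcting update toward each waypoint)."""
    theta = np.array(theta0, dtype=float)
    for target in path:
        theta += np.linalg.pinv(jac(theta)) @ (target - fk(theta))
    return theta

theta0 = np.array([0.3, 0.5, 0.7])
p0 = fk(theta0)
# a small closed loop in the working space, starting and ending at p0
ts = np.linspace(0.0, 2.0 * np.pi, 201)
loop = [p0 + 0.1 * np.array([np.cos(t) - 1.0, np.sin(t)]) for t in ts]
theta1 = track(theta0, loop)
print(np.allclose(fk(theta1), p0, atol=1e-3))  # True: the loop is tracked
print(np.round(theta1 - theta0, 4))            # generically nonzero: not cyclic
```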
Clearly, every (smooth) inverse kinematic map determines a tracking algorithm but the converse is not true in general.
In fact, a tracking algorithm determined by inverse kinematics is always \emph{cyclic} in the sense that it lifts
closed paths in $\ws$ to closed paths in $\cs$. Baker and Wampler \cite{Baker-Wampler} proved that
a tracking algorithm is equivalent to one determined by inverse functions if, and only if, the tracking is cyclic.
Therefore Theorem \ref{sections} implies that, notwithstanding the available redundant degrees of freedom, one cannot
construct a cyclic tracking algorithm for pointing or orienting. This is not very surprising if we know that
most tracking algorithms rely on solutions of differential equations and are thus of a local nature.
In particular the most widely used
Jacobian method with additional constraints (cf. \emph{extended Jacobian method} in \cite{Tutorial} or
\emph{augmented Jacobian method} in \cite{Kinematically Redundant Manipulators}) yields tracking that is
only \emph{locally cyclic}, in the sense that there is an open cover of $\ws$, such that closed paths contained in
elements of the cover are tracked by closed paths in $\cs$.
However, this does not really help, because Baker and Wampler \cite{Baker-Wampler} (see also \cite[Theorem 2.3]{Baker})
proved that if there is a tracking algorithm defined on an entire $\ws=S^2$ or $\ws=SO(3)$, then there
are arbitrarily short
closed paths in the working space that are tracked by open paths in $\cs$. Therefore
\begin{theorem}
The extended Jacobian method (or any other locally cyclic method) cannot be used to construct a tracking
algorithm for pointing or orienting a mechanism with revolute joints.
\end{theorem}
Let us also mention that an analogous result can be proved for positioning mechanisms whose working space is a 2-
or 3-dimensional disk around the base of the mechanism with sufficiently big radius. See
\cite[Theorem 2.4]{Baker} and the subsequent Corollary for details.
Our final result is again due to Gottlieb \cite{Gottlieb:RFB} and is related to the question of whether it is possible to restrict the
angles of the joints to stay away from the singular set $\mathrm{Sing}(f)$ of the Denavit-Hartenberg
kinematic map $f\colon (S^1)^n\to SO(3)$. We obtain the following surprising restriction.
\begin{theorem}
\label{thm:closed}
Let $M$ be a closed smooth manifold. Then there does not exist a smooth map
$s\colon M\to (\cs- \mathrm{Sing}(f))$ such that the map $f\circ s\colon M\to SO(3)$ is a submersion (i.e. non-singular).
\end{theorem}
The proof is based on a simple lemma which is also of independent interest.
\begin{lemma}
\label{lem:DH}
The Denavit-Hartenberg map $f\colon (S^1)^n\to SO(3)$ can be factored up to a homotopy as
$$\xymatrix{
(S^1)^n \ar[rr]^f \ar[dr]_m& & SO(3)\\
& S^1 \ar[ur]_g}$$
where $m$ is the $n$-fold multiplication map in $S^1$ and $g$ is the generator of $\pi_1(SO(3))$.
\end{lemma}
\begin{proof}
First observe that a rotation of the $z$-axis in the Denavit-Hartenberg frame of any joint induces a homotopy between
the resulting forward kinematic maps. Therefore, we may deform the robot arm until all $z$-axes are parallel
(so that the arm is effectively planar). Then the rotation angle of the end-effector is simply the sum of
the rotations around each axis. As the sum of angles corresponds to the multiplication in $S^1$, it follows that
the Denavit-Hartenberg map factors as $f=g\circ m$ for some map $g\colon S^1\to SO(3)$. Clearly, $g$ must generate
$\pi_1(SO(3))=\mathord{\mathbb{Z}}_2$ because $f$ induces an epimorphism of fundamental groups.
\end{proof}
\begin{proof}[Proof of Theorem \ref{thm:closed}]
Assume that there exists $s\colon M\to (\cs- \mathrm{Sing}(f))$ such that $f\circ s\colon M\to SO(3)$
is a submersion. It is well-known that the projection of an orthogonal $3\times 3$-matrix to its last column determines a
map $p\colon SO(3)\to S^2$ that is also a submersion.
Then the composition $$\xymatrix{M \ar[r]^-s & \cs- \mathrm{Sing}(f)\ar[r]^-f & SO(3) \ar[r]^-p & S^2}$$
is a submersion, and hence a fibre bundle by Ehresmann's theorem \cite{Ehresmann}. It follows that
the fibre of $p\circ f\circ s$ is a closed submanifold of $M$. On the other hand, $p\circ f\circ s$
is homotopic to a constant map, because by Lemma \ref{lem:DH} $f$ factors through $S^1$ and $S^2$ is simply-connected.
This leads to a contradiction, as the fibre of the constant map $M\to S^2$ is homotopy equivalent to
$M\times \Omega S^2$, which cannot be homotopy equivalent to a closed manifold, because $\Omega S^2$ has
infinite-dimensional homology.
\end{proof}
In particular, Theorem \ref{thm:closed} implies that even if we add constraints that restrict
the configuration space of the robotic device to some closed submanifold $\cs'$ of the set of non-singular
configurations of the joints, the restriction $f\colon \cs'\to SO(3)$ of the Denavit-Hartenberg kinematic map
still has singular points. Clearly, the new singularities are not caused by the configuration of joints but
are a consequence of the constraints that define $\cs'$.
\section{Overview of topological complexity}
\label{sec:TC overview}
The concept of topological complexity was introduced by M. Farber in \cite{Farber:TCMP} as a qualitative measure of
the difficulty in constructing a robust motion plan for a robotic device. Roughly speaking, the motion planning
problem for a mechanical device requires finding a rule ('motion plan') that yields a continuous trajectory
from any given initial position to any desired final position of the device. A motion plan is robust if small variations
in the input data result in small variations of the connecting trajectory.
Toward a mathematical formulation of the motion planning problem one
considers the space $\cs$ of
all positions ('configurations') of the device, and the space $\pcs$ of all continuous paths $\alpha\colon I\to\cs$. Let
$\pi\colon \pcs\to\cs\times \cs$ be the evaluation map given by
$\pi(\alpha)=(\alpha(0),\alpha(1)).$
A \emph{motion plan} is a rule that to each pair of points $c,c'\in\cs$ assigns a path $\alpha(c,c')\in\pcs$
such that $\pi(\alpha(c,c'))=(c,c')$.
For practical reasons we usually require \emph{robust} motion plans, i.e. plans that depend continuously on $c$
and $c'$. Clearly,
robust motion plans are precisely the continuous sections of $\pi$. Farber observed that a continuous global section of
$\pi$ exists if, and only if, $\cs$ is contractible. For non-contractible spaces one may consider partial continuous
sections, so he defined
the \emph{topological complexity} $\TC(\cs)$ as the minimal $n$ for which $\cs\times\cs$ can be covered by $n$ open sets
each admitting a continuous section to $\pi$. Note that other authors prefer the 'normalized' topological
complexity (smaller by one than our $\TC$) as it sometimes leads to simpler formulas.
We list some basic properties of the topological complexity:
\begin{enumerate}
\item It is a homotopy invariant, i.e. $\cs\simeq\cs'$ implies $\TC(\cs)=\TC(\cs')$.
\item There is a fundamental estimate
$$\cat(\cs)\le\TC(\cs)\le\cat(\cs\times\cs),$$
where $\cat(\cs)$ denotes the (Lusternik-Schnirelmann) category of $\cs$ (see \cite{CLOT}).
\item Furthermore
$$\TC(\cs)\ge \mathord{\mathrm{nil}}\big(\mathord{\mathrm{Ker}}\,\Delta^*\colon H^*(\cs\times\cs)\to H^*(\cs)\big).$$
Here $\mathord{\mathrm{Ker}}\,\Delta^*$ is the kernel of the homomorphism between
cohomology rings induced by the diagonal map $\Delta\colon\cs\to\cs\times\cs$, and $\mathord{\mathrm{nil}}(\mathord{\mathrm{Ker}}\,\Delta^*)$ is the minimal $n$
such that every product of $n$ elements in
$\mathord{\mathrm{Ker}}\Delta^*$ is zero.
\end{enumerate}
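As a sample application, the cohomology bound determines the topological complexity of the two-sphere (a standard computation going back to \cite{Farber:TCMP}):

```latex
Let $x\in H^2(S^2)$ denote the fundamental class and put
$a=x\otimes 1-1\otimes x\in H^2(S^2\times S^2)$. Then $\Delta^*(a)=x-x=0$,
so $a\in\mathord{\mathrm{Ker}}\,\Delta^*$, while
$$a^2=(x\otimes 1-1\otimes x)^2=-2\,x\otimes x\neq 0.$$
Hence $\mathord{\mathrm{nil}}(\mathord{\mathrm{Ker}}\,\Delta^*)\ge 3$, so
$\TC(S^2)\ge 3$, and the fundamental estimate
$\TC(S^2)\le\cat(S^2\times S^2)=3$ then gives $\TC(S^2)=3$.
```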
There are many other results and explicit computations of topological complexity -- see \cite{Farber:ITR} for a fairly
complete survey of the general theory.
\section{Complexity of a map}
\label{sec:Complexity of a map}
In \cite{Pav:CFKM} we extended Farber's approach to study the more general problem of robot manipulation. Robots are usually
manipulated by operating their joints so as to achieve
a desired pose of the robot or a part of it (usually called \emph{end-effector}), so we must take into account
the kinematic map which relates the internal
joints states with the position and orientation of the end-effector.
To model this situation, we take a map $f\colon \cs\to\ws$ and consider the
projection map $\pi_f\colon \pcs\to \cs\times \ws$,
defined as
$$\pi_f(\alpha):=(1\times f)(\pi(\alpha))= \big(\alpha(0),f(\alpha(1))\big).$$
Similarly to motion plans, a manipulation plan corresponds to a continuous section of $\pi_f$, so it would be natural
to define the topological complexity $\TC(f)$ as the minimal $n$ such that $\cs\times\ws$ can be covered by $n$ sets,
each admitting a continuous section to $\pi_f$. This is analogous to the definition of $\TC(\cs)$, but there are two
important issues that we must discuss before giving a precise description of $\TC(f)$.
In the definition of $\TC(\cs)$ Farber considers continuous sections whose domains are open subsets of $\cs\times\cs$.
In most applications $\cs$ is a
nice space (e.g. manifold, semi-algebraic set,...), and in fact Farber \cite{Farber:ITR} shows that for such spaces
alternative definitions of topological complexity based on closed, locally compact or ENR domains yield the same result.
This was further generalized by Srinivasan \cite{Srinivasan}
who proved that if $\cs$ is a metric ANR space, then every section over an arbitrary subset $Q\subset \cs$ can be extended
to some open neighbourhood of $Q$.
Therefore, for a very general class of spaces (including metric ANRs) one can define topological complexity
by counting sections of the evaluation map $\pi\colon \pcs\to \cs\times \cs$ over arbitrary subsets of the base.
Another important fact is that the map $\pi$ is a fibration, which
implies that one can replace sections by homotopy sections in the definition of $\TC(\cs)$ and still get the same result.
This relates topological complexity with the so called Schwarz genus \cite{Schwarz}, a well-established and extensively
studied concept in homotopy theory. The \emph{genus} $\g(h)$ of a map $h\colon X\to Y$
is the minimal $n$ such that $Y$ can be covered by $n$ open subsets, each admitting a continuous \emph{homotopy section}
to $h$; the genus is infinite if there is no such $n$. Therefore, we have
$\TC(\cs)=\g(\pi)$, a result that puts topological complexity squarely within the realm of homotopy theory.
The situation is less favourable when it comes to the complexity of a map. Firstly,
$\pi_f\colon\pcs\to\cs\times\ws$ is a fibration if, and only if, $f\colon\cs\to\ws$ is a fibration, and that is
an assumption that we do not wish to make in view of our intended applications (cf. Theorem \ref{thm:singularities}).
Every section is a homotopy section but not vice-versa, and in fact, the minimal number of homotopy sections needed to cover the codomain of a given map
can be strictly smaller than the corresponding number of genuine sections. For example, the map $h\colon [0,3]\to [0,2]$ given by
$$h(t):=\left\{\begin{array}{ll}
t & t\in [0,1]\\
1 & t\in [1,2]\\
t-1 & t\in [2,3]
\end{array}\right.$$
(see Figure \ref{fig: genus 1 sec 2})
\begin{figure}[ht]
\centering
\includegraphics[scale= 0.6]{fig14.png}
\caption{Projection with genus 1 and sectional number 2.}
\label{fig: genus 1 sec 2}
\end{figure}
admits a global homotopy section because its codomain is contractible, but clearly there does not exist
a global section to $h$. Furthermore, the following example (which can be easily generalized) shows that the
difference between the minimal number of sections and the minimal number of homotopy sections can be arbitrarily large.
\begin{figure}[ht]
\centering
\includegraphics[scale=0.6]{fig15.png}
\caption{Projection with genus 1 and sectional number 3.}
\label{fig: genus 1 sec 3}
\end{figure}
Actually, many results on topological complexity depend heavily on the fact that the evaluation map
$\pi\colon \pcs\to \cs\times \cs$ is a fibration, and so some direct generalizations of the results about $\TC(\cs)$
are harder to prove while others are simply false.
The second difficulty is related to the type of subsets of $\cs\times\ws$ that are domains of sections to $\pi_f$.
While the spaces $\cs$ or $\ws$ are usually assumed to be nice (e.g. manifolds), the map $f$ can have singularities
which leads to the problem that we explain next.
Given a subset $Q$ of $\cs\times\ws$ and a point $c\in\cs$ let $Q|_c$ be the subset of $\ws$ defined as
$Q|_c:=\{w\in\ws\mid (c,w)\in Q\}$. Assume that $Q|_c$ is non-empty and
that there is a partial section $\alpha\colon Q\to\pcs$ to $\pi_f$. Then the map
$$\alpha_c\colon Q|_c\to \cs,\qquad \alpha_c(w):=\alpha(c,w)(1)$$
satisfies $f(\alpha_c(w))=w$, so $\alpha_c$ is a partial section to $f$. Furthermore
$H\colon Q|_c\times I\to\cs$, given by $H(w,t):=\alpha(c,w)(1-t)$ deforms the image of the section
$\alpha_c(Q|_c)$ in $\cs$ to the point $c$, while $f\circ H$ deforms $Q|_c$ in $\ws$ to the point $f(c)$.
These observations have several important consequences.
Assume that $(c,w)$ is an interior point of some domain $Q\subset\cs\times\ws$ of a partial section to $\pi_f$. Then $w$ is
an interior point of $Q|_c$, and $Q|_c$ admits a partial section to $f$.
Therefore, if $f$ is not locally sectionable around $w$ (like the previously considered map $h\colon [0,3]\to [0,2]$ around the point 1), then it is impossible to find an open cover of $\cs\times\ws$ that admits
partial sections to $\pi_f$. A similar argument shows that we cannot use closed domains for a reasonable definition
of the complexity of $f$. One way out would be to follow the approach by Srinivasan \cite{Srinivasan} and
consider sections with arbitrary subsets as domains, but that causes problems elsewhere. After some balancing
we believe that the following choice is best suited for applications.
Let $\cs$ and $\ws$ be path-connected spaces, and let $f\colon\cs\to\ws$ be a surjective map.
The \emph{topological complexity} $\TC(f)$ of $f$ is defined as the minimal $n$ for which there exists a filtration of
$\cs\times\ws$ by closed sets
$$\emptyset=Q_0\subseteq Q_1\subseteq\ldots\subseteq Q_n=\cs\times\ws,$$
such that $\pi_f$ admits partial sections over $Q_i-Q_{i-1}$ for $i=1,2,\ldots,n$. By taking complements we obtain
an equivalent definition based on filtrations of $\cs\times\ws$ by open sets. If $\ws$ is a metric ANR, then
$\g(\pi_f)\le\TC(f)$, and the two coincide if $f$ is a fibration.
Suppose $\TC(f)=1$, i.e. there exists a section $\alpha\colon\cs\times\ws\to\pcs$ to $\pi_f$. Then $(\cs\times\ws)|_c=\ws$ for
every $c\in\cs$, and by the above considerations, $f\colon\cs\to\ws$ admits
a global section that embeds $\ws$ as a categorical subset of $\cs$. Even more, $\ws$ can be deformed to a point within
$\ws$, so $\ws$ is contractible and $\TC(\ws)=\cat(\ws)=1$.
To get a more general statement, let us say that a partial section $s\colon Q\to \cs$ to $f\colon \cs\to \ws$ is
\emph{categorical} if $s(Q)$ can be deformed to a point within $\cs$.
Then we define $\mathord{\mathrm{csec}}(f)$ to be the minimal $n$ so that there is a filtration
$$\emptyset=A_0\subseteq A_1\subseteq\ldots\subseteq A_n=\ws,$$
by closed subsets, such that $f$ admits a categorical section over $A_i-A_{i-1}$ for $i=1,\ldots,n$
(and $\mathord{\mathrm{csec}}(f)=\infty$, if no such $n$ exists).
For $\ws$ a metric ANR space we have $\mathord{\mathrm{csec}}(f)\ge\cat(\ws)$ because $A=f(s(A))$ is contractible in $\ws$ for every categorical
section $s\colon A\to \cs$. If furthermore $f\colon \cs\to\ws$ is a fibration, then $\mathord{\mathrm{csec}}(f)=\cat(\ws)$.
\begin{theorem}
Let $\ws$ be a metric ANR space and let $f\colon\cs\to\ws$ be any map. Then
$$\cat(\ws)\le\mathord{\mathrm{csec}}(f)\le\TC(f)<\mathord{\mathrm{csec}}(f)+\cat(\cs)\:.$$
\end{theorem}
\begin{proof}
If $\TC(f)=n$, then, as we have shown above, there exists a cover of $\ws$ by $n$ sets, each admitting a categorical
section to $f$, and therefore $\TC(f)\ge\mathord{\mathrm{csec}}(f)$.
As a preparation for the proof of the upper estimate, assume that $C\subseteq \cs$ admits a deformation
$H\colon C\times I\to\cs$ to a point $c_0\in\cs$, and that $A\subseteq \ws$ admits a categorical section $s\colon A\to\cs$
with a deformation $K\colon s(A)\times I\to \cs$ to a point $c_1\in\cs$. In addition, let $\gamma\colon I\to\cs$ be a path
from $c_0$ to $c_1$.
Then we may define a partial section $\alpha\colon C\times A\to \pcs$ by the formula
$$\alpha(c,w)(t):=\left\{\begin{array}{ll}
H(c,3t) & 0\le t\le 1/3\\
\gamma(3t-1) & 1/3\le t\le 2/3\\
K(s(w),3-3t)& 2/3\le t\le 1
\end{array}\right.$$
By assumption, there is a filtration of $\cs$ by closed sets
$$\emptyset=C_0\subseteq C_1\subseteq\ldots\subseteq C_{\cat(\cs)}=\cs,$$
such that each difference $C_i-C_{i-1}$ deforms to a point in $\cs$,
and there is also a filtration of $\ws$ by closed sets
$$\emptyset=A_0\subseteq A_1\subseteq\ldots\subseteq A_{\mathord{\mathrm{csec}}(f)}=\ws,$$
such that each difference $A_i-A_{i-1}$ admits a categorical section to $f$.
Then we can define closed sets
$$Q_k=\bigcup_{i+j=k} C_i\times A_j$$
that form a filtration
$$\emptyset=Q_1\subseteq Q_2\subseteq\cdots\subseteq Q_{\cat(\cs)+\mathord{\mathrm{csec}}(f)}=\cs\times\ws\;.$$
Each difference
$$Q_k-Q_{k-1}=\bigcup_{i+j=k} (C_i-C_{i-1})\times (A_j-A_{j-1})$$
is a mutually separated union of sets that admit continuous partial sections, and so there exists a continuous
partial section on each $Q_k-Q_{k-1}$. We conclude that $\TC(f)$ is less than or equal to $\cat(\cs)+\mathord{\mathrm{csec}}(f)-1$.
\end{proof}
Topological complexity of a map can be used to model several important situations in topological robotics.
In the rest of this section we describe some typical examples:
\begin{example}
The identity map on $X$ is a fibration, so if $X$ is a metric ANR, then $\TC(X)=\TC(\mathord{\mathrm{Id}}_X)$.
\end{example}
\begin{example}
In the motion planning of a device with several moving components one is often interested only in the position of a part of
the system. This situation may be modelled by considering
the projection $p\colon\cs\to\cs'$ of the configuration space of the entire system to the configuration space of the relevant
part. Then $\TC(p)$ measures the complexity of robust motion planning
in $\cs$ but with the objective to arrive at a requested state in $\cs'$. A similar situation that often arises and can be
modelled in this way occurs when the device can move and rotate in three-dimensional space (so that
its configuration space $\cs$ is a subspace of $\mathord{\mathbb{R}}^3\times SO(3)$), but we are only interested in its final position (or
orientation), so we consider the complexity of the projection $p\colon \cs\to\mathord{\mathbb{R}}^3$
(or $p\colon\cs\to SO(3)$).
\end{example}
\begin{example}
Our main motivating example is the complexity of the forward kinematic map of a robot as introduced in \cite{Pav:CFKM}. A
mechanical device consists of rigid parts connected by joints. As we explained in the first part of this paper
(see also \cite[Section 1.3]{Waldron-Schmiedeler}), although
there are many types of joints, only two of them are easily actuated by motors --
\emph{revolute} joints (denoted as R) and \emph{prismatic} or \emph{sliding}
joints (denoted as P). Revolute joints allow rotational movement so their states can be described by points on the circle $S^1$. Sliding joints allow linear movement with motion limits, so their states are
described by points of the interval $I$. Other joints are usually passive and only restrict the motion of the device, so a typical configuration space $\cs$ of a system with $m$ revolute
joints and $n$ sliding joints is a subspace of the product $(S^1)^m\times I^n$. Motion of the joints results in the spatial displacement of the device, in particular of its end-effector.
The \emph{pose} of the end-effector is given by its spatial position and orientation, so the \emph{working space} $\ws$ of the device is a subspace of $\mathord{\mathbb{R}}^3\times SO(3)$ (or a subspace of $\mathord{\mathbb{R}}^2\times SO(2)$ if
the motion of the device is planar). In the following examples, for simplicity, we will disregard the orientation of the end-effector.
Given two revolute joints that are pinned together so that their axes of rotation are parallel, the configuration space
is $\cs=S^1\times S^1$ and
the working space is an annulus $\ws=S^1\times I$. The forward kinematic map $f\colon\cs\to\ws$ can be given explicitly
in terms of polar coordinates.
This configuration is depicted in Figure \ref{fig: two planar joints} with the mechanism and its working space overlapped,
and the complexity of the corresponding kinematic map is 3, see \cite[4.2]{Pav:CFKM}.
\begin{figure}[ht]
\centering
\includegraphics[scale=0.5]{fig10.png}
\caption{(RR) planar configuration. Position of the arm is completely described by the angles $\alpha$ and $\beta$, $\cs=S^1\times S^1$, $\ws=S^1\times[R_1-R_2,R_1+R_2]$.}
\label{fig: two planar joints}
\end{figure}
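In one common convention (both angles measured from a fixed direction, with hypothetical lengths $R_1=2$, $R_2=1$), the map and the annulus constraint can be sketched as follows:

```python
import math

R1, R2 = 2.0, 1.0  # hypothetical arm lengths

def f(alpha, beta):
    """Forward kinematic map of the planar (RR) arm, in polar coordinates
    (phi, r) of the annulus ws = S^1 x [R1 - R2, R1 + R2]; both angles are
    measured from a fixed direction."""
    x = R1 * math.cos(alpha) + R2 * math.cos(beta)
    y = R1 * math.sin(alpha) + R2 * math.sin(beta)
    return math.atan2(y, x), math.hypot(x, y)

# every value of f lies in the annulus R1 - R2 <= r <= R1 + R2
for a in (0.0, 1.0, 2.5):
    for b in (0.0, 2.0, 4.0):
        r = f(a, b)[1]
        assert R1 - R2 - 1e-9 <= r <= R1 + R2 + 1e-9
print("all sampled values lie in the annulus")
```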
If instead we pin the joints so that the axes of rotation are orthogonal then we obtain the so-called universal or Cardan
joint. The configuration space is a product of circles
$\cs=S^1\times S^1$, but the working space is the two-dimensional sphere and the forward kinematic map may be expressed in
geographical coordinates (see Figure \ref{fig: two perpendicular joints}). By the computation in \cite[4.3]{Pav:CFKM}
the complexity of the kinematic map for the universal joint is either 3 or 4 (we do not know the exact value).
\begin{figure}[ht]
\centering
\includegraphics[scale=0.4]{fig11.png}
\caption{(RR) universal joint. $\cs=S^1\times S^1$, $\ws=S^2$.}
\label{fig: two perpendicular joints}
\end{figure}
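In coordinates, one possible parametrisation (an illustrative choice, not taken from the paper) treats the two joint angles as longitude and latitude of the end-effector direction:

```python
from math import cos, sin, sqrt

def universal_forward(alpha, beta):
    """Kinematic map S^1 x S^1 -> S^2 of the universal (Cardan) joint
    in geographical coordinates: alpha plays the role of longitude,
    beta of latitude.  This explicit form is an assumed example."""
    return (cos(beta) * cos(alpha), cos(beta) * sin(alpha), sin(beta))

# Every output lies on the unit sphere; at the poles (beta = +-pi/2)
# the longitude alpha is redundant, so the map is far from injective.
x, y, z = universal_forward(1.2, -0.4)
assert abs(sqrt(x * x + y * y + z * z) - 1.0) < 1e-12
```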
One of the most commonly used joint configurations in robotics is SCARA (Selective Compliant Assembly Robot Arm), which is based on the (RRP) configuration as in Figure \ref{fig: SCARA}, and is
sometimes complemented with a screw joint or even with a robot hand with three degrees of freedom. The configuration space is $\cs=S^1\times S^1\times I$ and the working space is $\ws=S^1\times I\times I$.
The forward kinematic map may be easily given in terms of cylindrical coordinates. Since the
kinematic map is the product of the kinematic map for the planar two-arm mechanism and the identity map
on the interval, its complexity is equal to 3.
\begin{figure}[ht]
\centering
\includegraphics[scale=0.5]{fig12.png}
\caption{(RRP) SCARA design. $\cs=S^1\times S^1\times I$, $\ws=S^1\times I\times I$.}
\label{fig: SCARA}
\end{figure}
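The product structure of the SCARA kinematic map can be sketched as follows (link lengths again illustrative); the prismatic coordinate passes through as the identity factor:

```python
from math import cos, sin, atan2, hypot

def scara_forward(alpha, beta, d, R1=2.0, R2=1.0):
    """(RRP) SCARA kinematic map in cylindrical coordinates
    (theta, r, z): the (theta, r) part is the planar (RR) map and the
    z part is the identity on the sliding joint coordinate d."""
    x = R1 * cos(alpha) + R2 * cos(alpha + beta)
    y = R1 * sin(alpha) + R2 * sin(alpha + beta)
    return atan2(y, x), hypot(x, y), d

# The identity factor on d is what makes the complexity of the product
# equal to that of the planar two-arm mechanism.
assert scara_forward(0.3, 1.1, 0.5)[2] == 0.5
```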
\end{example}
The last computation can be easily generalized to products of arbitrary maps and we omit the proof since it follows the
standard lines as in \cite[3.8]{Pav:FATC}.
\begin{proposition}
The product of maps $f\colon\cs\to\ws$ and $f'\colon \cs'\to\ws'$ satisfies the relation
$$\max\{\TC(f),\TC(f')\}\le\TC(f\times f')<\TC(f)+\TC(f')\;.$$
\end{proposition}
\begin{example}
Robotic devices are normally employed to perform various functions and it often happens that different states of the device are functionally equivalent (say for grasping, welding, spraying or other purposes as in Figure \ref{fig: funct.eq.}).
\begin{figure}[ht]
\centering
\includegraphics[scale=0.5]{fig13.png}
\caption{(RRR) design with functional equivalence for grasping.}
\label{fig: funct.eq.}
\end{figure}
Functional equivalence is often described by the action of some group $G$ on $\cs$ and there are several versions
of equivariant topological complexity -- see \cite{Colman-Grant}, \cite{Lubawski-Marzantowicz}, \cite{Dranishnikov}
or \cite{Błaszczyk-Kaluba}. Some of them require motion plans to be equivariant maps defined on invariant subsets of
$\cs\times\cs$, while others consider arbitrary paths that are allowed to `jump' within the same orbit
(see \cite[Section 2.2]{Błaszczyk-Kaluba} for an overview and comparison of different approaches).
However, none of the mentioned papers give a convincing interpretation in terms of a motion planning problem for
a mechanical system.
We believe that the navigation planning for a device with a configuration space $\cs$ in which different
configurations can have the same functionality should be modelled in terms of the complexity of the quotient map
$q\colon\cs\to\cs/\sim$ associated to an equivalence relation on $\cs$
(or $q\colon\cs\to\cs/G$ if the equivalence is induced by a group action).
Then $\TC(q)$ can be interpreted as a measure of the difficulty in constructing a robust motion plan that steers
a device from a given initial position to any of the final positions that have the required functionality.
It would be interesting to relate this concept with the above mentioned versions of equivariant topological complexity.
\end{example}
\section{Instability of robot manipulation}
\label{sec:Instability of robot manipulation}
Let us again consider the robot manipulation problem determined by a forward kinematic map $f\colon\cs\to\ws$.
A \emph{manipulation algorithm} for the given device is a rule that to every
initial datum $(c,w)\in\cs\times\ws$ assigns a path in $\cs$ starting at $c$ and ending at $c'\in f^{-1}(w)$. In other words,
a manipulation algorithm is a (possibly discontinuous)
section of $\pi_f\colon\pcs\to \cs\times\ws$, patched from one or more robust manipulation plans.
Let $\alpha_i\colon Q_i\to\pcs$ be a collection of robust manipulation plans such that the $Q_i$ cover $\cs\times\ws$. In general
the domains $Q_i$ may overlap,
so in order to define a manipulation algorithm for the device we must decide which manipulation plan to apply for a given
input datum $(c,w)\in\cs\times\ws$.
We can avoid this additional step by partitioning $\cs\times\ws$ into disjoint domains, e.g. by defining $Q'_1:=Q_1$,
$Q'_2:=Q_2-Q_1$, $Q'_3:=Q_3-Q_2-Q_1,\ldots$,
and restricting the respective manipulation plans accordingly.
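The disjointification step $Q'_1:=Q_1$, $Q'_i:=Q_i-(Q_1\cup\ldots\cup Q_{i-1})$ can be sketched with finite sets standing in for the plan domains:

```python
def disjointify(cover):
    """Given a cover Q_1, ..., Q_n (finite sets here, standing in for
    the domains of robust manipulation plans), return the partition
    Q'_i = Q_i - (Q_1 u ... u Q_{i-1}) described in the text."""
    seen, parts = set(), []
    for Q in cover:
        parts.append(Q - seen)
        seen |= Q
    return parts

parts = disjointify([{1, 2, 3}, {2, 3, 4}, {4, 5}])
assert parts == [{1, 2, 3}, {4}, {5}]  # pairwise disjoint, same union
```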
Since $\cs$ and $\ws$ are by assumption path-connected, if we partition $\cs\times\ws$ into several domains, then there exist
arbitrarily close pairs of initial data $(c,w)$ and $(c',w')$
that belong to different domains. This can cause instability of the robot device guided by such a manipulation
algorithm in the sense that small perturbations of
the input may lead to completely different behaviour of the device. The problem is of particular significance when the input
data is determined up to some approximation or rounding
because the instability may cause the algorithm to choose an inadequate manipulation plan. Moreover, such unpredictability
significantly complicates coordination in a group of
robotic devices, because one device cannot infer the actions of a collaborator just by knowing its manipulation algorithm and
by determining its position.
Farber \cite{Farber:IRM} observed that for any motion planning algorithm the number of different choices that are available
around certain
points in $X$ increases with the topological complexity of $X$. He defined the \emph{order of instability} of a motion
planning algorithm for $X$ at a point
$(x,x')\in X\times X$ to be the number of motion plan domains that are intersected by every neighbourhood of $(x,x')$. Then he
proved (\cite[Theorem 6.1]{Farber:IRM})
that for every motion planning algorithm on $X$ there is at least one point in $X\times X$ whose order of instability is at
least $\TC(X)$.
We are going to state and prove a similar result for the topological complexity of a forward kinematic map. Our proof is based
on the approach used by Fox \cite{Fox} to tackle a similar question
on Lusternik-Schnirelmann category.
\begin{theorem}
Let $f\colon\cs\to\ws$ be any map and let $\cs\times\ws=Q_1\sqcup\ldots\sqcup Q_n$ be a partition of $\cs\times\ws$ into disjoint subsets, each of them
admitting a partial section $\alpha_i\colon Q_i\to\pcs$ of $\pi_f$. Then there exists a point $(c,w)\in\cs\times\ws$ such that every neighbourhood of $(c,w)$
intersects at least $\TC(f)$ different domains $Q_i$.
\end{theorem}
\begin{proof}
If every neighbourhood of $(c,w)$ intersects $Q_i$ then $(c,w)$ is in $\overline Q_i$, the closure of $Q_i$. Therefore,
we must prove that there exist $\TC(f)$ different
domains $Q_i$ such that their closures have non-empty intersection. To this end for each $k=1,2,\ldots, n$ we define
$R_k$ as the set of points in
$\cs\times\ws$ that are contained in at least $k$ sets $\overline Q_i$. Each $R_k$ is a union of intersections of
sets $\overline Q_i$, hence it is closed, and we obtain a filtration
$$\cs\times\ws=R_1\supseteq R_2\supseteq\ldots\supseteq R_m\supseteq\emptyset,$$
where $m$ is the biggest integer such that $R_m$ is non-empty.
For each $k=1,\ldots,m$ the difference $R_k-R_{k+1}$ consists of points that are contained in
exactly $k$ sets $\overline Q_i$.
To construct a manipulation plan over $R_k-R_{k+1}$, for every subset of indices
$I\subseteq \{1,\ldots,n\}$
let $S_I$ denote the set of points that are contained in $\overline Q_i$ for $i\in I$ and are not contained in $\overline Q_i$ for
$i\notin I$. It is easy to check that $R_k-R_{k+1}$ is the disjoint union of the sets $S_I$ where $I$ ranges over
all $k$-element subsets of $\{1,\ldots,n\}$. Even more, if $I$ and $J$ are different but of the same
cardinality, then the closure of $S_I$ does not intersect
$S_J$ (i.e., $S_I$ and $S_J$ are \emph{mutually separated}). In fact, there is an index $i$ contained in $I$ but not
in $J$, and clearly $\overline S_I\subseteq \overline Q_i$
while $S_J\cap \overline Q_i=\emptyset$. Since for $I$ of fixed cardinality $k$ the sets $S_I$ are mutually separated
we can patch together a continuous section $\beta_k\colon R_k-R_{k+1}\to \pcs$ of $\pi_f$
by choosing $i\in I$ for each $I$ and defining $\beta_k|_{S_I}:=\alpha_i|_{S_I}$.
Since the sections $\beta_1,\ldots,\beta_m$ together cover $\cs\times\ws$, by definition of $\TC(f)$ we must have $m\ge\TC(f)$ so
that $R_{\TC(f)}$ is non-empty and there exists a point in $\cs\times\ws$ that is
contained in at least $\TC(f)$ different sets $\overline Q_i$.
\end{proof}
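The filtration used in the proof can be illustrated on a finite toy example, with finite sets standing in for the closures $\overline Q_i$:

```python
def filtration(closures, points):
    """Compute the sets R_k of points lying in at least k of the given
    closures (finite stand-ins for the sets cl(Q_i) in the proof)."""
    def depth(p):
        return sum(p in C for C in closures)
    return [{p for p in points if depth(p) >= k}
            for k in range(1, len(closures) + 1)]

closures = [{1, 2, 3}, {2, 3, 4}, {3, 5}]
R = filtration(closures, {1, 2, 3, 4, 5})
# R_1 is everything, R_2 the doubly covered points, R_3 the one point
# lying in all three closures -- the 'most unstable' input.
assert R[0] == {1, 2, 3, 4, 5}
assert R[1] == {2, 3}
assert R[2] == {3}
```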
The order of instability of a manipulation algorithm with $n$ robust manipulation plans clearly cannot exceed $n$, so there is always a cover of $\cs\times\ws$ by sets admitting sections of $\pi_f$ whose order
of instability is exactly $\TC(f)$. As a corollary we obtain a characterization of $\TC(f)$: it is the minimal possible order
of instability of a manipulation algorithm on $\cs\times\ws$.
In applications the robot manipulation problem is often solved numerically, using gradient flows or successive approximations. Again, one may identify domains of continuity as well as regions
of instability. It would be an interesting project to compare different approaches with respect to their order of instability.
\section{Introduction}
Classical novae (CNe) are binary systems \citep{1954PASP...66..230W,1964ApJ...139..457K} with a white dwarf (WD) accreting matter from a non-degenerate companion star (either main-sequence, sub-giant or red giant; see e.g.\ \citealp{2012ApJ...746...61D}). As material builds up on the surface of the WD the pressure and temperature increase until nuclear fusion occurs, leading to a thermonuclear runaway \citep{1972ApJ...176..169S}. This causes a rapid increase in luminosity, with the most luminous CN eruptions exceeding $M_V=-10$ (\citealp{2009ApJ...690.1148S}, Williams et al.\ in prep). By definition, novae with one observed eruption are classified as CNe; those with two or more observed eruptions are classified as recurrent novae (RNe). The shortest recurrence period observed to date is one year, in M31N 2008-12a (see e.g.\ \citealp{2016ApJ...833..149D}). For detailed reviews of the nova phenomenon see \citet{2008clno.book.....B} and \citet{2014ASPC..490.....W}.
Novae have long been considered potential Type Ia supernova (SN\,Ia) progenitor candidates (e.g.\ \citealp{1973ApJ...186.1007W}), with the latest models indicating that WDs in nova systems can indeed gain mass over a long series of eruption cycles and eventually produce a SN\,Ia \citep{2016ApJ...819..168H}. While it is widely accepted that SNe Ia are caused by thermonuclear explosions of carbon-oxygen WDs \citep{1960ApJ...132..565H,1984ApJ...286..644N,2000ARA&A..38..191H,2011Natur.480..344N}, the mechanism via which the WD reaches the critical mass to explode is still unclear (see \citealp{2014ARA&A..52..107M} for a detailed review of SN\,Ia progenitor candidates). Although the production of lithium in nova eruptions has been predicted for some time (e.g.\ \citealp{1975A&A....42...55A,1978ApJ...222..600S,1996ApJ...465L..27H}), observational evidence has recently been found that novae may contribute the majority of the $^7$Li in the Galaxy \citep{2015Natur.518..381T,2016ApJ...818..191T,2015ApJ...808L..14I,2016MNRAS.463L.117M}.
While it is generally not possible to study individual extragalactic novae in as much detail as their Galactic counterparts, there are several advantages to observing extragalactic novae. The large uncertainties in distance that can be associated with Galactic novae are largely negated when studying extragalactic populations, and a better representation is given of an entire galaxy's nova population. Additionally, it enables the study of novae in different environments; for example, the stellar populations of the nearby dwarf galaxies, the Large Magellanic Cloud (LMC) and Small Magellanic Cloud (SMC), are very different from those of large spirals like M31 and our own Galaxy.
Many Local Group novae are discovered each year, yet to date detailed studies of individual nova eruptions in the low-metallicity environments typically found in dwarf irregular galaxies have been restricted to the nearby Magellanic Clouds (MCs). Dwarf galaxies of course have low nova rates, with the nova rates of the LMC and SMC calculated to be $2.4\pm0.8$\,yr$^{-1}$ and $0.9\pm0.4$\,yr$^{-1}$ respectively \citep{2016ApJS..222....9M}. This compares to rates of $65^{+16}_{-15}$\,yr$^{-1}$ in M31 \citep{2006MNRAS.369..257D}, $33^{+13}_{-8}$\,yr$^{-1}$ in M81 \citep{2004AJ....127..816N}, and even as high as $363_{-45}^{+33}$\,yr$^{-1}$ in M87 \citep{2016ApJS..227....1S}. To build a full picture of how the properties of novae depend on the properties of their host galaxy, it is important we study nova eruptions in these dwarf irregulars in as much detail as possible.
IC\,1613 is an irregular dwarf galaxy in the Local Group at a distance of approximately 730\,kpc \citep{2013ApJ...773..106S,2015MNRAS.452..910M}. Recent evidence suggests it has a metallicity of about one fifth of Solar, similar to that of the SMC \citep{2014ApJ...788...64G,2015MNRAS.449.1545B}, and its star formation rate has been constant over time \citep{2014ApJ...786...44S}. IC\,1613 differs from the MCs as it is essentially isolated, whereas the MCs are interacting with the Milky Way (see \citealp{2000glg..book.....V} for an overview).
A total of three nova candidates have previously been discovered in IC\,1613. The first was imaged at $B\simeq17.5$ on three plates taken on a single night by Walter Baade in 1954 November; it had not been visible the night before, and no further images were taken that season \citep{1971ApJ...166...13S}. The second candidate was detected on 1996 October 12, although the eruption time of this candidate is poorly constrained, with the last non-detection being two months prior \citep{2001A&A...367..759M}. For nine days following October 12, the candidate was seen to decline in brightness \citep{2001A&A...367..759M}. The third and most recent nova candidate was discovered in 1999 \citep{1999IAUC.7287....2K}, but this was actually a Mira variable \citep{2001A&A...378..449K}.
Nova IC\,1613 2015 (PNV J01044358+0203419) was discovered at $01^{\mathrm{h}}04^{\mathrm{m}}43^{\mathrm{s}}\!.58$~$+02^{\circ}03^{\prime}41^{\prime\prime}\!\!.9$ with an unfiltered magnitude of 17.5 on 2015 September 10.48\,UT, with nothing visible down to a limiting magnitude of about 18.0 on September 9 \citep{2015CBET.4186....1H}, by the Lick Observatory Supernova Search (see \citealp{2001ASPC..246..121F} for further details). After classification as an extragalactic nova \citep{2015ATel.8061....1W}, we conducted optical, near-IR, near-UV and X-ray observations of the eruption, which we present in this paper.
\section{Observations and Data Analysis}
\subsection{Ground-based photometry}
Nova IC\,1613 2015 was initially followed with IO:O\footnote{\url{http://telescope.livjm.ac.uk/TelInst/Inst/IOO}}, the optical imager on the 2\,m Liverpool Telescope on La Palma, Canary Islands, Spain (LT; \citealp{2004SPIE.5489..679S}), using {\it B}, {\it V} and {\it i}$^{\prime}$ filters, with the first set of observations taken 1.61\,days after discovery on 2015 Sep 12.09\,UT. Once the nova nature of the object became clear, the filter set was expanded to {\it u$^{\prime}$}, {\it B}, {\it V}, {\it r}$^{\prime}$, {\it i}$^{\prime}$, and {\it z}$^{\prime}$. We also began monitoring the eruption in the near-IR using the fixed {\it H}-band filter on the IO:I imager on the LT \citep{2016JATIS...2a5002B}. In addition to the LT data, we also obtained some photometric observations through {\it B}, {\it V}, {\it r}$^{\prime}$, and {\it i}$^{\prime}$ filters using the Las Cumbres Observatory (LCO) 2\,m telescope at Siding Spring Observatory, New South Wales, Australia (formerly the Faulkes Telescope South; FTS, \citealp{2013PASP..125.1031B}). An IO:O image of the nova in eruption is shown in Figure\,\ref{find}.
\begin{figure}
\includegraphics[width=\columnwidth]{find}
\caption{Negative image of Nova IC\,1613 2015 in eruption taken through an {\it r}$^{\prime}$-band filter with IO:O on the LT on 2015 Oct 9.00\,UT. The position of the nova is indicated by the red lines near the centre of the image.\label{find}}
\end{figure}
The {\it u$^{\prime}$}{\it BV}{\it r}$^{\prime}${\it i}$^{\prime}${\it z}$^{\prime}$ photometry was calculated using aperture photometry in GAIA\footnote{GAIA is a derivative of the Skycat catalogue and image display tool, developed as part of the VLT project at ESO. Skycat and GAIA are free software under the terms of the GNU copyright.} and calibrated against field stars from the Sloan Digital Sky Survey Data Release 9 \citep{2012ApJS..203...21A}. The {\it B} and {\it V} magnitudes of these calibration stars were calculated using the transformations in \citet{2006A&A...460..339J}. The {\it H}-band observations were calibrated against different stars from the 2MASS All Sky Catalog of point sources \citep{2003yCat.2246....0C}.
\subsection{Spectroscopy} \label{sec:spec}
The optical spectra were taken using the SPectrograph for the Rapid Acquisition of Transients (SPRAT), a low-resolution high-throughput spectrograph on the LT \citep{2014SPIE.9147E..8HP}. It has a 1$^{\prime\prime}\!\!.8$ slit width, giving a resolution of 18\,\AA. Our observations were all taken using the blue-optimised mode. The details of the spectra are summarised in Table\,\ref{tab:spec}.
\begin{table}
\caption{Summary of all spectroscopic observations of Nova IC\,1613 2015 with the SPRAT spectrograph on the LT.}
\label{tab:spec}
\begin{center}
\begin{tabular}{lcc}
\hline
Date [UT]$^a$ & Days post-discovery & Exposure time [s]\\
\hline
2015 Sep 12.07 & 1.59 & 1800\\
2015 Sep 17.07 & 6.59 & 1800\\
2015 Sep 21.02 & 10.54 & 1800\\
2015 Sep 25.09 & 14.61 & 1800\\
2015 Oct 07.03 & 26.55 & 3600\\
2015 Nov 06.99 & 57.51 & 5400\\
\hline
\end{tabular}
\end{center}
$^a$~The date listed here refers to the mid-point of each observation.
\end{table}
Spectrophotometric standards were not observed at similar times as the IC\,1613 spectroscopy, but we observed the standard G191-B2B using the same SPRAT instrument set-up on 2015 Dec 17, 2015 Dec 30 and 2016 Jan 10. The flux calibration of each spectrum was performed using standard routines in IRAF\footnote{IRAF is distributed by the National Optical Astronomy Observatory, which is operated by the Association of Universities for Research in Astronomy (AURA) under a cooperative agreement with the National Science Foundation.} \citep{1986SPIE..627..733T}. The standard observations were calibrated against data from \citet{1990AJ.....99.1621O} obtained via ESO. Due to different observing conditions, and particularly seeing losses and atmospheric conditions (i.e.\ cloud), the absolute flux calibrations of each spectrum can vary significantly. Examining the standard star observations discussed above, we estimate this causes the typical flux calibration error (at 5000\,\AA) to be of order $15-20$\,\%. However, the relative calibration across any individual spectrum (i.e.\ between red and blue, after removing the systematic flux calibration offset) should be relatively good with the uncertainties $<10$\,\%.
\subsection{\textit{Swift} observations} \label{sec:swift}
The super-soft X-ray source (SSS) phase in novae is caused by nuclear burning of hydrogen on the surface of the WD. The SSS emission can be detected once the ejecta become optically thin to X-rays and the SSS `turn-off' is thought to represent the end of nuclear burning (see e.g.\ \citealp{1996ApJ...456..788K}).
We were granted six {\it Swift}\ \citep{2004ApJ...611.1005G} target of opportunity (ToO) observations (target ID 34085) to follow the UV and X-ray evolution of the nova. Additionally, we analysed data aimed at IC\,1613 itself (target ID 84201), which includes our object in the field of view. All {\it Swift}\ data are summarised in Table\,\ref{tab:swift}.
The {\it Swift}\ UV/optical telescope \citep[UVOT,][]{2005SSRv..120...95R} data were reduced using the HEASoft (v6.16) tool \texttt{uvotsource}. The UVOT magnitudes are based on aperture photometry of carefully selected source and background regions. The photometric calibration assumes the UVOT photometric (Vega) system \citep{2008MNRAS.383..627P}; the magnitudes have not been corrected for extinction. The central wavelengths of the utilised UVOT filters are: \textit{UVW1}: 2600\,\AA; \textit{UVM2}: 2250\,\AA; \textit{UVW2}: 1930\,\AA.
All {\it Swift}\ X-ray telescope \citep[XRT;][]{2005SSRv..120..165B} data were obtained in the photon counting (PC) mode. For extraction of the count rate upper limits we made use of the on-line interface\footnote{\url{http://www.swift.ac.uk/user\_objects}} of \citet{2009MNRAS.397.1177E}. This tool uses the Bayesian formalism of \citet{1991ApJ...374..344K} for low numbers of counts. As is recommended for SSSs, only grade zero events were extracted. To convert the counts to X-ray fluxes we assume a conservative (maximum) black-body temperature of 50\,eV and a Galactic foreground absorption of \hbox{$N_{\rm H}$}~ = 3\hcm{20}. The absorption was derived from the HEASARC \hbox{$N_{\rm H}$}~ tool based on the hydrogen maps of \citet{1990ARA&A..28..215D}.
We estimated the X-ray temperature based on the reference frame of the M31 SSS nova sample and the correlations subsequently found by \citet{2014A&A...563A...2H}. In M31, a t$_2$ of 13\,days (see Section\,\ref{s:phot}) would correspond to a SSS phase from about days 60--200, which in turn suggests a black-body $kT \sim 50$\,eV \citep[cf. figure 8 of][]{2014A&A...563A...2H}. Using the \texttt{pimms} software (v4.8c) we estimated an energy conversion factor (count rate divided by unabsorbed flux in the 0.2--1.0\,keV band) of 1.2\tpower{10}\,ct\,cm$^2$\,erg$^{-1}$ for the XRT (PC mode). We derived the corresponding X-ray luminosities in Table\,\ref{tab:swift} by assuming a distance to IC\,1613 of 730\,kpc.
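The count-rate-to-luminosity conversion can be checked numerically. The sketch below uses only the quantities quoted above (the energy conversion factor of $1.2\times10^{10}$\,ct\,cm$^2$\,erg$^{-1}$ and $d=730$\,kpc) and reproduces the first upper limit in Table\,\ref{tab:swift}:

```python
from math import pi

def xrt_luminosity(rate_ct_s, ecf=1.2e10, d_kpc=730.0):
    """Unabsorbed 0.2-1.0 keV luminosity (erg/s) from an XRT count
    rate, using the energy conversion factor and distance quoted in
    the text."""
    kpc_cm = 3.0857e21                  # centimetres per kiloparsec
    flux = rate_ct_s / ecf              # erg cm^-2 s^-1
    return 4.0 * pi * (d_kpc * kpc_cm) ** 2 * flux

# First row of the Swift table: < 4.2e-3 ct/s corresponds to
# roughly < 2.3e37 erg/s.
assert abs(xrt_luminosity(4.2e-3) / 2.3e37 - 1.0) < 0.05
```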
\begin{table*}
\caption{{\it Swift}\ UVOT magnitude and X-ray upper limits.}
\label{tab:swift}
\begin{center}
\begin{tabular}{rrrrrrrrrr}
\hline
ObsID & Exp$^a$ & Date$^b$ & MJD$^b$ & $\Delta t^c$ & \multicolumn{3}{c}{UV$^d$ [mag]} & Rate & L$_{0.2-1.0}^e$\\
& [ks] & [UT] & [d] & [d] & \textit{UVW1} & \textit{UVM2} & \textit{UVW2} & [\power{-3}\,ct\,s$^{-1}$] & [\power{37}\,erg\,s$^{-1}$]\\
\hline
00034085001 & 4.6 & 2015-10-20.54 & 57315.55 & 40.07 & $18.6\pm0.1$ & \ldots & \ldots & $<4.2$ & $<2.3$ \\
00034085002 & 4.2 & 2015-11-08.29 & 57334.29 & 58.81 & $19.6\pm0.1$ & \ldots & \ldots & $<2.6$ & $<1.5$ \\
00034085003 & 4.6 & 2015-11-29.22 & 57355.23 & 79.75 & $20.2\pm0.2$ & \ldots & \ldots & $<5.5$ & $<3.1$ \\
00034085004 & 4.1 & 2016-01-08.04 & 57395.05 & 119.57 & $20.9\pm0.3$ & \ldots & \ldots & $<2.1$ & $<1.2$ \\
00034085005 & 4.0 & 2016-02-09.09 & 57427.09 & 151.61 & $>20.9$ & \ldots & \ldots & $<3.2$ & $<1.8$ \\
00084201006 & 1.3 & 2016-02-17.81 & 57435.81 & 160.33 & $>19.5$ & $>19.8$ & $>20.0$ & $<8.7$ & $<4.8$ \\
00084201007 & 6.5 & 2016-05-26.43 & 57534.44 & 258.96 & $>20.1$ & $>20.8$ & $>21.1$ & $<1.8$ & $<1.0$ \\
00084201008 & 0.9 & 2016-05-28.90 & 57536.90 & 261.42 & $>19.3$ & $>19.6$ & $>19.8$ & $<9.5$ & $<5.3$ \\
00034085006 & 2.4 & 2016-08-07.07 & 57607.07 & 331.59 & $>21.0$ & \ldots & \ldots & $<4.9$ & $<2.7$\\
\hline
\end{tabular}
\end{center}
\begin{flushleft}
$^a$~Dead-time corrected XRT exposure time.\\
$^b$~Start date of the observation.\\
$^c$~Time in days after the eruption on 2015-09-10.48 UT (MJD 57275.48).\\
$^d$~Vega magnitudes for the $UVW1$, $UVM2$, and $UVW2$ filters with central wavelength: 2600\,\AA, 2250\,\AA, and 1930\,\AA, respectively.\\
$^e$~X-ray luminosity upper limits (unabsorbed, blackbody fit, 0.2--1.0\,keV) were estimated according to Sect.\,\ref{sec:swift}.
\end{flushleft}
\end{table*}
\subsection{Reddening}
IC\,1613 is subject to only a small amount of foreground reddening ($E_{B-V}=0.021$; \citealp{2011ApJ...737..103S}). However, estimating the reddening internal to IC\,1613 at the position of the nova is difficult, as this is highly variable throughout the galaxy \citep{2009A&A...502.1015G}. In a survey of IC\,1613 Cepheid variables, \citet{2006ApJ...642..216P} found an average total reddening of $E_{B-V}=0.090\pm0.019$ to the Cepheids, which we take as the extinction estimate for our absolute magnitude calculations.
\section{Results}
\subsection{Photometric evolution} \label{s:phot}
A light curve showing all the photometry taken by the LT, LCO 2\,m, and {\it Swift} is shown in Figure\,\ref{lc}. This photometry is also tabulated in Appendix\,\ref{append} and presented in the form of spectral energy distributions (SEDs) in Section\,\ref{sec:sed}. The light curve shows the nova was clearly discovered prior to peak.
\begin{figure}
\includegraphics[width=\columnwidth]{lc}
\caption{Light curve of Nova IC\,1613 2015. The colours represent different filters: $UVW1$, cyan; {\it u$^{\prime}$}, purple; {\it B}, blue; {\it V}, green; {\it r}$^{\prime}$, orange; {\it i}$^{\prime}$, red; {\it z}$^{\prime}$, grey; {\it H}, black. The magenta star shows the unfiltered discovery magnitude. The points on the light curve that correspond to the dates the spectra were taken are also indicated.\label{lc}}
\end{figure}
The nova follows a relatively uniform decline, although the $r'$-band fades significantly slower than {\it B}, {\it V}, and $i'$ due to the increasingly strong influence of the H$\alpha$ emission line on the broadband photometry. Initially the nova also declines more slowly in the {\it z}$^{\prime}$-band than other filters, but by 40\,days post-discovery, the {\it z}$^{\prime}$-band declines more quickly than the other filters. The initial slow {\it z}$^{\prime}$ decline is unlikely to indicate a change in the overall nova SED, as the ($V - i'$) colour evolution remains relatively unaltered during this phase (the early {\it H}-band observations are also consistent) and therefore the slower {\it z}$^{\prime}$ decline is probably line driven. As we have no spectra extending beyond 8000\,\AA, the species that may be responsible for this is not certain, but we suggest it is most likely due to very strong O\,{\sc i} 8446\,\AA\ emission (caused by Ly$\beta$ fluorescence; see discussion in Section\,\ref{sec:spec_evo}).
Adopting a distance modulus $24.31\pm0.04$ (weighted average from \citealp{2013ApJ...773..106S,2015MNRAS.452..910M}) and correcting for reddening using $E_{B-V}=0.090\pm0.019$ \citep{2006ApJ...642..216P} and the extinction law from \citet[$R_V=3.1$]{1989ApJ...345..245C} gives an absolute magnitude for the eruption peak of $M_V=-7.93\pm0.08$, which is typical for a nova. The absolute magnitude at 15 days after peak is $M_V=-5.84^{+0.20}_{-0.10}$; we note that the (conservative) constraints on the time of peak (assuming 2015 Sep $12.09^{+1.95}_{-1.61}$\,UT) dominate the upper error bar. This is similar to that expected from the relationships of the absolute magnitudes of novae 15 (or 17) days after peak brightness in M49 ($M_{V,15}=-6.36\pm0.19$; \citealp{2003ApJ...599.1302F}) and M87 ($M_{F606W,17}=-6.06\pm0.23$; \citealp{2017arXiv170206988S}, although note the `wide {\it V}' F606W filter contains H$\alpha$). We measure the de-reddened day-15 colour, $(B-V)_{t=15}=-0.03^{+0.11}_{-0.14}$. In the SDSS filters we find $M_{r',15}=-6.50^{+0.17}_{-0.09}$, $(u'-r')_{t=15}=0.34^{+0.14}_{-0.09}$, $(r'-i')_{t=15}=-0.56\pm0.05$ and $(i'-z')_{t=15}=0.60^{+0.08}_{-0.05}$. The {\it H}-band coverage is much poorer than the other filters, but using extrapolation we estimate $M_{H,15}=-6.53^{+0.22}_{-0.20}$.
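The extinction-corrected absolute magnitude follows from $M_V = m_V - \mu - R_V\,E_{B-V}$. In the sketch below, the apparent peak magnitude ($m_V\simeq16.66$) is back-derived from the quoted $M_V$ purely for illustration and is not taken from the photometry:

```python
def absolute_mag(m_app, mu=24.31, ebv=0.090, r_v=3.1):
    """M = m - mu - R_V * E(B-V), with the distance modulus and
    reddening adopted in the text (A_V = 3.1 * 0.090 = 0.279 mag)."""
    return m_app - mu - r_v * ebv

# An apparent V-band peak near 16.66 (back-derived, illustrative)
# reproduces the quoted M_V ~ -7.93.
assert abs(absolute_mag(16.66) - (-7.93)) < 0.01
```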
Taking the brightest data point as the peak magnitude of the nova, and using linear extrapolation between the data points, we estimate the $t_2$ of this nova to be $15\pm3$, $13\pm2$ and $15\pm3$\,days in {\it B}, {\it V} and $i'$ filters, respectively. We estimate the $t_3$ values to be $t_{3{\mathrm{(}}B{\mathrm{)}}}=32\pm3$, $t_{3{\mathrm{(}}V{\mathrm{)}}}=26\pm2$ and $t_{3{\mathrm{(}}i'{\mathrm{)}}}=32\pm3$\,days (the uncertainties here are largely due to the cadence around peak).
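The $t_2$ estimate by linear interpolation between data points can be sketched as follows; the light-curve points below are a toy example, not the actual photometry:

```python
def t2(times, mags):
    """Days from peak until the light curve has faded by 2 mag,
    linearly interpolating between successive (time, magnitude)
    points (fainter = numerically larger magnitude)."""
    pts = list(zip(times, mags))
    t_peak, m_peak = min(pts, key=lambda p: p[1])
    target = m_peak + 2.0
    for (t0, m0), (t1, m1) in zip(pts, pts[1:]):
        if m0 <= target <= m1:
            t_cross = t0 + (target - m0) * (t1 - t0) / (m1 - m0)
            return t_cross - t_peak
    return None  # the light curve never faded 2 mag below peak

# Toy light curve: peak at day 2, then a roughly linear decline.
days = [0, 2, 10, 20]
mags = [17.0, 16.6, 17.8, 19.3]
assert abs(t2(days, mags) - 13.33) < 0.05
```

The same routine with `target = m_peak + 3.0` gives a $t_3$ estimate.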
\subsection{Spectroscopic Evolution} \label{sec:spec_evo}
Novae have been observed spectroscopically for 150 years, since the first eruption of RN T\,Coronae Borealis in 1866 \citep{1866MNRAS..26..275H}. Nova spectra tend to fit into one of two groups, named after the dominant non-Balmer species in the spectra, Fe\,{\sc ii} and He/N classes \citep{1992AJ....104..725W}. These two types of spectra are suggested to form in different components of gas, with the spectral type observed for a given nova reflecting the dominant spectral component at that time \citep{2012AJ....144...98W}. The Fe\,{\sc ii} spectra have been suggested to originate in the circumbinary gas originating from the companion star, whereas the He/N type is suggested to be produced by the ejecta themselves \citep{2012AJ....144...98W}. Although there are some exceptions, Fe\,{\sc ii} novae tend to produce narrower emission lines than He/N novae (see e.g.\ \citealp{1992AJ....104..725W,2011ApJ...734...12S}). Line identification can be difficult in novae due to the broad lines and often differing line profiles. This is further complicated when using low-resolution spectra, which are often required to study faint extragalactic novae. However, the multiple epochs of spectra we have obtained allow us to identify some lines that may not have been possible with a single observation, and more importantly, better interpret the overall evolution. Line identification was also significantly aided by the extensive nova line list from \citet{2012AJ....144...98W} and multiplet tables from \citet{1945CoPri..20....1M}. The spectra are shown in Figures\,\ref{spec1}, \ref{spec2}, and \ref{spec6}. All spectra are shown in the frame of the observer, but when discussing the identification of spectroscopic features, rest-frame wavelengths are used. The average radial velocity of IC\,1613 is $-231.6$\,km\,s$^{-1}$, with a velocity dispersion of 10.8\,km\,s$^{-1}$ \citep{2014MNRAS.439.1015K}.
\subsubsection{Optically thick `fireball' stage}
Our first spectrum was taken on 2015 Sep 12.07, 1.59\,days after discovery, and around peak brightness. The main features of this spectrum are the Balmer lines with clear P\,Cygni absorption profiles. H$\alpha$ is seen mainly in emission with a small, blue-shifted absorption component. H$\beta$ is seen with significant emission and absorption components, with H$\gamma$ and H$\delta$ mainly detected in absorption. This optically thick spectrum is shown in Figure\,\ref{spec1}. Fitting a Gaussian to the H$\gamma$ absorption profile and taking into account the radial velocity of IC\,1613 itself, the absorption minimum implies a velocity of $1200\pm200$\,km\,s$^{-1}$. Fe\,{\sc ii} 5169\,\AA\ is seen mainly in absorption, with features corresponding to the Fe\,{\sc ii} (42) triplet at 4924 and 5018\,\AA\ also tentatively detected. A few other weak absorption lines are also seen, e.g.\ one at 4465\,\AA\ from Mg\,{\sc ii}/Fe\,{\sc ii}.
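The velocity measurement combines the Doppler shift of the absorption minimum with the systemic velocity of IC\,1613. In the sketch below, the measured H$\gamma$ wavelength is hypothetical, chosen only to illustrate a $\sim$1200\,km\,s$^{-1}$ result:

```python
C_KM_S = 299792.458   # speed of light (km/s)
V_SYS = -231.6        # systemic radial velocity of IC 1613 (km/s)

def ejecta_velocity(lam_obs, lam_rest):
    """Expansion velocity (km/s) from a blue-shifted absorption
    minimum, corrected for the systemic velocity of IC 1613.  The
    classical Doppler formula is adequate at these velocities;
    wavelengths are in Angstrom."""
    v_total = C_KM_S * (lam_obs - lam_rest) / lam_rest
    return v_total - V_SYS

# A hypothetical H-gamma (rest 4340.47 A) minimum measured at
# 4319.7 A implies an ejecta velocity of roughly -1200 km/s.
assert abs(ejecta_velocity(4319.7, 4340.47) + 1203.0) < 5.0
```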
\begin{figure*}
\includegraphics[width=2\columnwidth]{spec1}
\caption{The first spectrum of Nova IC\,1613 2015 taken 2015 Sep 12.07\,UT, 1.59\,days after discovery, and around peak optical brightness.\label{spec1}}
\end{figure*}
\subsubsection{Early decline}
The second spectrum, taken 6.59\,days after discovery, shows a dramatic change from the $t=1.59$\,day spectrum and clearly shows the nova nature of the transient; indeed, this is the spectrum we used to announce that the transient was a nova eruption in \citet{2015ATel.8061....1W}. This spectrum, along with the third, fourth and fifth spectra (taken at $t=10.54$, 14.61 and 26.55\,days, respectively), is shown in Figure\,\ref{spec2}. In the second spectrum, the nova now shows strong Balmer emission, although weak P\,Cygni absorption components are still present. The Fe\,{\sc ii} (42) triplet, only just detected in the first spectrum, is now clearly seen in emission. In the early decline spectra of novae, triplet 42 is typically the strongest of the Fe\,{\sc ii} lines, although several other multiplets, located between H$\alpha$ and H$\beta$, are usually easily identifiable in regular Fe\,{\sc ii} novae (e.g.\ 48, 49, 55, and 74). The Fe\,{\sc ii} (48) multiplet is the only one of these that appears to be weakly detected in Nova IC\,1613 2015, with multiplets 49, 55, and 74 not detected.
\begin{figure*}
\includegraphics[width=2\columnwidth]{specs}
\caption{The early decline spectra of Nova IC\,1613 2015. These were obtained 2015 Sep 17.07\,UT ($t=6.59$\,days; black line), Sep 21.02 ($t=10.54$\,days; grey line), Sep 25.09 ($t=14.61$\,days; red line) and Oct 7.03 ($t=26.55$\,days; blue line).\label{spec2}}
\end{figure*}
In the $t=6.59$\,day spectrum, the N\,{\sc i} (3) triplet is seen strongly in emission. It also shows a relatively broad absorption component, as would be expected given the wavelengths of the lines that make up the triplet (7424, 7442, and 7468\,\AA), the velocities associated with the nova, and the resolution of the spectrograph. The N\,{\sc ii} (3) multiplet around 5682\,\AA\ is seen with a very strong absorption component, with the N\,{\sc ii} (28) multiplet at 5938\,\AA\ also identified. The profile at the position of the Fe\,{\sc ii} 5018\,\AA\ component of triplet 42 clearly has a different morphology from the 5169\,\AA\ line, the former having a strong absorption component. We interpret this as indicating the presence of N\,{\sc ii} 5001\,\AA. This is consistent with the morphology of the N\,{\sc ii} (3) multiplet, which also shows a very strong absorption component, and it also explains the evolution of the 5018\,\AA\ line profile between spectra two and five (see below). The Bowen blend (N\,{\sc iii}/C\,{\sc iii}/O\,{\sc ii}; this complex is discussed by Harvey et al.\ in prep), sometimes referred to as `4640 emission' (it is at $\sim$4640\,\AA), is detected as a broad emission line with an accompanying broad absorption profile. This complex is visible at the time of the emergence of the nebular lines in most novae; however, it is typically only visible in the early spectra if the nova is a member of the He/N spectroscopic class.
The strongest non-Balmer line visible in the $t=6.59$\,day spectrum is O\,{\sc i} 7774\,\AA, produced by the O\,{\sc i} (1) triplet. There is a relatively strong emission line peaking at 6162\,\AA\ (again with an absorption profile). This is clearly not the Fe\,{\sc ii} (74) multiplet emission line at 6148\,\AA, as the other lines of that multiplet are not present (notably the 6248\,\AA\ line, for example). We identify this as most likely being the O\,{\sc i} (10) triplet at 6157\,\AA. Alternatively, it could be N\,{\sc ii} (which has lines at a similar wavelength), although that is perhaps less likely given that it fades to be undetected by the fifth spectrum, very different behaviour from the other three N\,{\sc ii} lines (5001, 5682 and 5938\,\AA), which remain detected even in the final nebular spectrum, as discussed below. In the second spectrum, there is an emission line peaking at 6722\,\AA. An emission line at this wavelength has been noted since the early days of nova spectroscopy (e.g.\ DN\,Geminorum; \citealp{1940PLicO..14...27W}) and has been suggested to be O\,{\sc i} 6726\,\AA\ (e.g.\ \citealp{1964PASP...76...22B}, \citealp{2005A&A...435.1031M}, \citealp{2013ATel.5378....1S}, \citealp{2014MNRAS.440.3402M}). An alternative explanation would be the N\,{\sc i} 6723\,\AA\ line.
The Fe\,{\sc ii} 5018\,\AA/N\,{\sc ii} 5001\,\AA\ emission line appears to extend further redward than expected from other lines, which indicates the presence of another emission line. He\,{\sc i} 5048\,\AA\ is a possible explanation, but as no He\,{\sc i} lines are seen at 6678 or 7065\,\AA, N\,{\sc ii} is more likely. In the second and third spectra there is a weak feature at $\sim$7110\,\AA, which we identify as C\,{\sc i}. We also identify C\,{\sc ii} emission at 4267 and 7234\,\AA, which is visible in the day 6.59 through day 26.55 spectra.
The spectroscopic evolution between days 6.59 and 26.55 shows that the P\,Cygni profiles that initially accompany many of the emission lines weaken over time, as is usually seen in nova eruptions. The Fe\,{\sc ii} lines weaken along with the N\,{\sc i} and O\,{\sc i} lines (although O\,{\sc i} 7774\,\AA\ is still easily visible in the day\,26.55 spectrum). The N\,{\sc ii} and N\,{\sc iii} lines retain their strength through this evolution and by day\,26.55, apart from the Balmer and O\,{\sc i} 7774\,\AA\ lines, the spectrum is essentially dominated by ionised nitrogen lines (from the point of view of visible features, not the overall flux of the spectrum). The contrasting evolution of the Fe\,{\sc ii}/N\,{\sc ii} lines can be seen in the morphology of the blended line due to N\,{\sc ii} 5001\,\AA\ and Fe\,{\sc ii} 5018\,\AA, where the blue side of the blend becomes increasingly dominant as the nova evolves.
\subsubsection{Nebular phase}
Very few novae beyond the MCs have been observed spectroscopically in the nebular phase. As the evolution progresses, nebular lines also begin to emerge, with [O\,{\sc i}] (6300 and 6364\,\AA) clearly detected by day 14.61 and possibly present even earlier. The [O\,{\sc ii}] 7320/7330\,\AA\ doublet is also likely present in the 26.55\,day spectrum. Our sixth and final spectrum was taken 57.51\,days after discovery and about 4 magnitudes below peak. This shows further evolution into the nebular phase, with [O\,{\sc iii}] (4959 and 5007\,\AA) now clearly visible. The 5007/4959\,\AA\ emission line ratio is higher than expected (it should be $\sim3$; the lines are highly forbidden and the ratio is essentially fixed), indicating the line is still blended with N\,{\sc ii} 5001\,\AA. He\,{\sc ii} (4686\,\AA) can now be seen emerging from the Bowen complex, with the peak of the complex itself consistent with it being dominated by N\,{\sc iii}. We also find He\,{\sc i} emission (5876 and 7065\,\AA). This spectrum is shown in Figure\,\ref{spec6}. The emission line fluxes of prominent lines are shown in Table\,\ref{lflux}.
\begin{figure*}
\includegraphics[width=2\columnwidth]{spec6}
\caption{The final spectrum of Nova IC\,1613 2015 taken 2015 Nov 6.99\,UT, 57.51\,days after discovery. This shows the nova in the nebular phase with strong [O\,{\sc iii}] emission.\label{spec6}}
\end{figure*}
The assignment of [N\,{\sc ii}] (5755\,\AA) is secure in the later spectra (nebular [N\,{\sc ii}] would be expected once [O\,{\sc i}] is clearly detected, as in the $t=14.61$\,day spectrum); however, a line appears at this position even in the $t=6.59$\,day spectrum. This has been noted by other authors (e.g.\ \citealp{2003A&A...404..997I,2014AJ....147..107S}). There is also a non-forbidden N\,{\sc ii} doublet (9) at a very similar wavelength. This arises from the same excited state as the N\,{\sc ii} (3) multiplet, but here the electrons transition to $2s^{2}2p3s$\,$^{1}$P$^{\circ}$, rather than $2s^{2}2p3s$\,$^{3}$P$^{\circ}$. It is possible that this at least partially contributes to this emission line.
\begin{table*}
\caption{The evolution of emission line fluxes.}
\label{lflux}
\begin{center}
\begin{tabular}{lcccccc}
\hline
Line identification & \multicolumn{6}{c}{Emission line flux [$\times10^{-15}$\,erg\,cm$^{-2}$\,s$^{-1}$]}\\
(rest wavelength) & $t=1.59$\,days & $t = 6.59$\,days & $t = 10.54$\,days & $t = 14.61$\,days & $t = 26.55$\,days & $t = 57.51$\,days\\
\hline
H$\delta$ (4102\,\AA) &\ldots &$18.7\pm3.3$ &$9.5\pm1.2$ &$7.0\pm1.9$ &$4.3\pm1.1$ &$3.9\pm0.6$\\
H$\gamma$ (4341\,\AA) &\ldots &$28.5\pm5.2$ &$16.6\pm3.0$ &$9.0\pm1.6$ &$4.9\pm0.7$ &$4.4\pm0.3$\\
H$\beta$ (4861\,\AA) &$3.2\pm1.6$ &$68.0\pm4.6$ &$43.9\pm4.0$ &$24.6\pm2.9$ &$12.1\pm1.2$ &$5.4\pm0.3$\\
Fe\,{\sc ii} (5169\,\AA) &\ldots &$3.8\pm0.8$ &$2.7\pm0.4$ &$2.7\pm0.6$ &\ldots &\ldots\\
N\,{\sc ii} (5682\,\AA) &\ldots &$1.5\pm0.5$ &$2.8\pm0.7$ &$3.2\pm0.5$ &$3.1\pm0.3$ &$1.0\pm0.2$\\\relax
[N\,{\sc ii}] (5755\,\AA) &\ldots &$1.2\pm0.2$ &$2.6\pm0.3$ &$2.5\pm0.4$ &$3.5\pm0.3$ &$1.0\pm0.2$\\
N\,{\sc ii} (5939\,\AA) &\ldots &$1.1\pm0.2$ &$2.4\pm0.6$ &$2.2\pm0.3$ &\ldots &\ldots\\
O\,{\sc i} (6157\,\AA) &\ldots &$2.6\pm0.3$ &$1.8\pm0.3$ &$1.1\pm0.3$ &\ldots &\ldots\\\relax
[O\,{\sc i}] (6300\,\AA) &\ldots &\ldots &\ldots &$1.0\pm0.3$ &$0.6\pm0.2$ &\ldots\\\relax
[O\,{\sc i}] (6364\,\AA) &\ldots &\ldots &\ldots &$0.9\pm0.2$ &\ldots &\ldots\\
H$\alpha$ (6563\,\AA) &$7.3\pm1.1$ &$146.4\pm5.1$ &$144.0\pm5.2$ &$122.0\pm3.7$ &$90.5\pm7.4$ &$29.1\pm2.1$\\
C\,{\sc ii} (7235\,\AA) &\ldots &$1.4\pm0.7$ &$2.1\pm0.7$ &$2.7\pm0.4$ &$1.2\pm0.3$ &\ldots\\
N\,{\sc i} (7452\,\AA) &\ldots &$5.4\pm0.7$ &$3.2\pm0.4$ &$1.6\pm0.5$ &\ldots &\ldots\\
O\,{\sc i} (7774\,\AA) &\ldots &$18.1\pm2.2$ &$16.6\pm2.7$ &$8.5\pm1.7$ &$2.8\pm1.2$ &\ldots\\
\hline
\end{tabular}
\end{center}
\begin{flushleft}
The emission line fluxes are dependent on the assumed continuum level and if a P\,Cygni absorption component is present, only the emission component of the feature is measured. The fluxes are dereddened for foreground Galactic extinction, assuming $E_{B-V}=0.021$ and $R_V=3.1$. The errors do not take into account uncertainties in the flux calibration discussed in Section\,\ref{sec:spec}.
\end{flushleft}
\end{table*}
\subsubsection{Balmer evolution} \label{sec:balmer}
By fitting a Gaussian profile to the emission component of the H$\alpha$ line in the second ($t = 6.59$\,days) spectrum, we measure the FWHM to be $1580\pm70$\,km\,s$^{-1}$ after correcting for the spectral resolution. The line then apparently broadens to a FWHM of $1750\pm50$\,km\,s$^{-1}$ in the third ($t=10.54$\,day) spectrum and thereafter remains consistent. We measure it at $1760\pm90$, $1750\pm120$ and $1720\pm190$\,km\,s$^{-1}$ in the $t = 14.61$, $t = 26.55$ and $t = 57.51$\,day spectra, respectively. The most likely explanation for the early change in the FWHM is that in the $t = 6.59$\,day spectrum the H$\alpha$ emission line is significantly influenced by a P\,Cygni absorption component, which has the effect of narrowing the apparent emission line.
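The resolution correction applied to the measured line widths is the standard quadrature subtraction, valid under the assumption that both the intrinsic line profile and the instrumental profile are approximately Gaussian:
\begin{equation}
\mathrm{FWHM}_{\mathrm{true}} = \sqrt{\mathrm{FWHM}_{\mathrm{obs}}^{2} - \mathrm{FWHM}_{\mathrm{inst}}^{2}},
\end{equation}
where $\mathrm{FWHM}_{\mathrm{inst}}$ is the instrumental resolution, expressed in the same velocity units as the observed width.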
The peaks of the Balmer emission are not shown in Figures\,\ref{spec2} and \ref{spec6} to allow the reader to view the fainter lines in greater detail. The evolution of the H$\alpha$ line is shown in Figure\,\ref{halpha}. The left panel of Figure\,\ref{halpha} shows the overall profile is relatively symmetrical, with a Gaussian profile generally fitting the central profile well. The only stage at which a Gaussian does not appear to be a good fit is the $t=26.55$\,day spectrum, when the profile seems asymmetric, being stronger on the red side. Close inspection of the H$\alpha$ profile in the right panel of Figure\,\ref{halpha} shows there is emission peaking at around $-4000$\,km\,s$^{-1}$ ($\sim$6480\,\AA). Comparison with the red side of the H$\alpha$ line shows it is too blue to be explained as part of a simple Gaussian with a P\,Cygni profile superimposed on the emission component, and it could be due to emission from N\,{\sc i} or N\,{\sc ii}. An alternative explanation could be a separate, higher-velocity component to the H$\alpha$ line, as there may be excess flux on the red side of the profile as well, although this $\sim$6480\,\AA\ emission appears to persist for longer, being visible in all but the final nebular spectrum.
\begin{figure*}
\includegraphics[width=\columnwidth]{halpha}\hfill
\includegraphics[width=\columnwidth]{halphal}
\caption{Evolution of the H$\alpha$ line from $t=6.59$ to $t=57.51$\,days. The left panel shows the evolution of the shape and absolute flux of the line. The right panel shows the evolution of the fainter emission either side of the main H$\alpha$ profile (see discussion in Section\,\ref{sec:balmer}). The velocities have been corrected for the radial velocity of IC\,1613 itself.\label{halpha}}
\end{figure*}
In Figure\,\ref{dec} we show the evolution of the H$\alpha$/H$\beta$ ratio between $t=6.59$ and $t=57.51$\,days, corrected for Galactic reddening ($E_{B-V}=0.021$). The figure shows the ratio initially increases, peaking at $7.5\pm1.0$ at $t = 26.55$\,days, before declining in the final nebular phase spectrum. During this period the H$\gamma$/H$\beta$ ratio does not change dramatically. The evolution in the H$\alpha$/H$\beta$ ratio clearly cannot be caused by dust, as such a dramatic change would be seen as a dip in the optical light curve. This H$\alpha$/H$\beta$ ratio evolution is common in novae and has been discussed by a number of authors (e.g.\ \citealp{1961PASJ...13..335K,1963ApJ...137..834M,1978ApJ...219..589F,1979ApJ...232..382F,1992A&A...263...87A,2003A&A...404..997I}). The changing Balmer decrement is caused by self-absorption. If Ly$\alpha$ and H$\alpha$ have high optical depth, high H$\alpha$/H$\beta$ ratios such as those observed here can be produced \citep{1975MNRAS.171..395N}. The calculations made by \citet{1975MNRAS.171..395N} also indicate the H$\gamma$/H$\beta$ ratio does not necessarily change dramatically during this H$\alpha$/H$\beta$ evolution, although this is dependent on the Ly$\alpha$ optical depth. This Balmer line ratio behaviour replicates well that observed in Nova IC\,1613 2015. In the case of novae in eruption, Case B recombination is not valid (as discussed above), therefore the Balmer decrement cannot, and should not, be used to estimate reddening.
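For reference, under Case B conditions (at $T_{e}\sim10^{4}$\,K and moderate densities) the expected decrement is
\begin{equation}
\frac{F(\mathrm{H}\alpha)}{F(\mathrm{H}\beta)} \simeq 2.86,
\end{equation}
far below the peak value of $7.5\pm1.0$ measured here, which illustrates quantitatively why a Case B assumption fails for these optically thick ejecta.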
\begin{figure}
\includegraphics[width=\columnwidth]{dec}
\caption{The evolution of the H$\alpha$/H$\beta$ intensity ratio between the $t=6.59$ and $t=57.51$\,day spectra. The ratios are corrected for foreground Galactic reddening ($E_{B-V}=0.021$).\label{dec}}
\end{figure}
As suggested by \citet{1947PASP...59..196B}, the close proximity of O\,{\sc i} 1025.76\,\AA\ to Ly$\beta$ (1025.72\,\AA) can lead to the excitation of O\,{\sc i} from its ground state. This then produces strong emission at 1.1287\,$\mu$m and 8446\,\AA\ as the electrons fall back to the O\,{\sc i} ground state (see also \citealp{1995ApJ...439..346K}). This effect and its relationship to the Balmer decrement has also been discussed in the context of Seyfert galaxies \citep{1974ApJ...191..309S}. Such Ly$\beta$ fluorescence can only occur under conditions of optically thick hydrogen. The H$\alpha$/H$\beta$ ratio and the O\,{\sc i} 8446\,\AA\ intensity are closely linked \citep{1979ApJ...229..274F,1979ApJ...232..382F}, with the H$\alpha$/H$\beta$ and (O\,{\sc i}~8446\,\AA)/H$\beta$ line ratios often peaking at a similar point of the eruption (see e.g.\ \citealp{1986ApJS...60..375F}). We note that between the 26.55 and 57.51\,day spectra the H$\alpha$/H$\beta$ ratio drops, and over the same interval Figure\,\ref{lc} shows the {\it z}$^{\prime}$-band fading more rapidly than any other waveband. This indicates that Ly$\beta$ fluorescence (specifically the 8446\,\AA\ line, whose weakening is signalled by the drop in the H$\alpha$/H$\beta$ ratio) may be the cause of the initially slower {\it z}$^{\prime}$-band decline mentioned in Section\,\ref{s:phot}.
\subsection{Spectral energy distributions} \label{sec:sed}
SEDs can be derived from multiband photometry or spectra, both of which have drawbacks. Spectra are more time-expensive and are more prone to (variable and colour-dependent) calibration issues. However, they allow prominent spectral features not associated with the underlying SED to be removed before fitting, which broadband photometric observations do not. This is particularly important in novae, where during an eruption, the broadband photometry becomes increasingly influenced by line emission and can even be dominated by it at late times (e.g.\ [O\,{\sc iii}] and H$\alpha$).
Fitting a power-law to the first spectrum (excluding prominent emission and absorption features) indicates that at 1.59\,days post-discovery, $f_{\lambda}\propto\lambda^{-2.42\pm0.08}$ at optical wavelengths. This is near that expected from optically thick free-free emission ($f_{\lambda}\propto\lambda^{-8/3}$; \citealp{1975MNRAS.170...41W}). However, the relatively short wavelength coverage is also consistent with a black-body. Fitting a black-body function to the first spectrum yields a photospheric temperature of $11600\pm500$\,K. This is close to the expected effective temperature of a nova at peak ($\sim8000$\,K; see e.g.\ \citealp{1986ApJ...310..222P,1995A&A...294..195B,2005MNRAS.360.1483E}). Five days later the wavelength dependence of the optical continuum had changed dramatically to $f_{\lambda}\propto\lambda^{-1.48\pm0.08}$, even shallower than that expected from optically thin free-free emission ($f_{\lambda}\propto\lambda^{-1.9}$). The measured wavelength dependence of the SED steepens slightly for the third (10.54\,day) spectrum, with $f_{\lambda}\propto\lambda^{-1.58\pm0.09}$, although we note these are consistent within the errors. The other three spectra give $\propto\lambda^{-1.68\pm0.11}$, $\propto\lambda^{-2.11\pm0.12}$ and $\propto\lambda^{-1.21\pm0.18}$ on days 14.61, 26.55 and 57.51, respectively. Note that these fits only include the known foreground reddening ($E_{B-V}=0.021$), therefore the intrinsic slopes of the SEDs of the nova eruption could be bluer.
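The $f_{\lambda}\propto\lambda^{-8/3}$ form quoted above is the wavelength-space equivalent of the frequency-space spectrum $f_{\nu}\propto\nu^{2/3}$; the conversion follows from $f_{\lambda}=f_{\nu}\,|\mathrm{d}\nu/\mathrm{d}\lambda|=f_{\nu}\,c/\lambda^{2}$:
\begin{equation}
f_{\lambda} \propto \nu^{2/3}\,\lambda^{-2} \propto \lambda^{-2/3}\,\lambda^{-2} = \lambda^{-8/3},
\end{equation}
which may be compared directly with the measured day 1.59 slope of $-2.42\pm0.08$.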
The SEDs from the multi-band photometry are shown in Figure\,\ref{sed}. The photometry taken at similar epochs to the spectra discussed above is broadly consistent with the power-laws derived from the spectra themselves, keeping in mind that as the nova fades, the broadband photometry becomes increasingly influenced by strong emission lines (e.g.\ Balmer and O\,{\sc i}, and later [O\,{\sc iii}]). The photometry confirms that the dramatic change in the slope of the optical continuum emission between the spectra on day 1.59 and day 6.59 is real. Indeed, a large change occurs between day 1.6 (i.e.\ the time of the first spectrum) and day 3.6.
\begin{figure}
\includegraphics[width=\columnwidth]{sed2}
\caption{SEDs of Nova IC\,1613 2015 from peak to 93.4\,days post-discovery (around $m_V=4.4$ below peak). The extreme effect of the H$\alpha$ emission on the {\it r}$^{\prime}$-band photometry can easily be seen. The size of the systematic uncertainty from the distance of IC\,1613 is indicated near the bottom of the plot.\label{sed}}
\end{figure}
\subsection{X-rays}
We do not detect X-ray emission from the nova between days 40--330 after eruption. The resulting luminosity upper limits, listed in Table\,\ref{tab:swift} for each individual {\it Swift}\ observation, were typically below 5\ergs{37}. This allows us to rule out a bright SSS under the conservative assumption of a 50\,eV black-body spectrum \citep[the fastest novae are considerably hotter, see e.g.][]{2011ApJ...727..124O,2014A&A...563L...8H,2015MNRAS.454.3108P}. Before day 40, the nova was still bright in the UV and no X-rays would have been emitted \citep[cf.][]{2015A&A...580A..46H}. The observing gap between days 160--260 (Table\,\ref{tab:swift}) is likely too short to hide a SSS phase: based on our experience with the M31 population, a nova with a SSS turn-on time of more than 160 days would be expected to be visible in X-rays for longer than 260 days \citep{2014A&A...563A...2H}.
Combining all observations in Table\,\ref{tab:swift} (32.6\,ks), we derive an upper limit of 4.5\cts{-4}, corresponding to a luminosity of 2.4\ergs{35}. This is an order of magnitude lower than the observed luminosities of faint novae in M31 \citep{2010A&A...523A..89H,2011A&A...533A..52H,2014A&A...563A...2H}. Since fainter novae are typically visible for longer \citep{2014A&A...563A...2H}, we can rule out a low-luminosity SSS counterpart for the observed time range. Note that this upper limit would only be valid if such a low-luminosity SSS were emitting over the whole time frame of the {\it Swift} observations.
\subsection{The Progenitor Search}
The position of Nova IC\,1613 2015 is not covered by {\it Hubble Space Telescope} data (which would be ideal for such progenitor searches due to its high resolving power and large wavelength coverage); however, at the distance of IC\,1613, the most luminous quiescent systems will still be detectable in deep ground-based data. As we noted in \citet{2015ATel.8061....1W}, the nova appears very close to a source with {\it V} = 22.06 and {\it I} = 21.53\,mag recorded at $01^{\mathrm{h}}04^{\mathrm{m}}43^{\mathrm{s}}\!.56$~$+02^{\circ}03^{\prime}42^{\prime\prime}\!\!.0$ (J2000) in \citet{2001AcA....51..221U}.
The field was observed with the Very Large Telescope (VLT) using the FOcal Reducer/low dispersion Spectrograph 2 (FORS2; \citealp{1998Msngr..94....1A}) on 2012 Aug 20 under proposal 090.D-0009(A) and using the R\_SPECIAL filter (effective wavelength 6550\,\AA). Using the method described in detail by \citet{2009ApJ...705.1056B}, \citet{2014A&A...563L...9D} and \citet{2014ApJS..213...10W}, we used reference stars in {\it r}$^{\prime}$-band LT eruption images to precisely determine the position of the nova in the archival data. This is shown in Figure\,\ref{prog1}. We also independently (using different reference stars) calculated the position of the nova in archival SDSS {\it g}-band OmegaCAM \citep{2002Msngr.110...15K,2011Msngr.146....8K} data taken at the 2.6\,m VLT Survey Telescope \citep{1998Msngr..93...30A} on 2014 Dec 17, using {\it V}-band LT data.
\begin{figure*}
\includegraphics[width=\columnwidth]{zoom}\hfill
\includegraphics[width=\columnwidth]{prog}
\caption{The position of the nova in pre-eruption data compared to the nearby resolved source. Left: The field of Nova IC\,1613 2015 imaged with FORS2 on the VLT using an R\_SPECIAL filter on 2012 Aug 20. The black box indicates the zoomed-in region shown in the right panel. Right: The same data as the left panel, with the 1$\sigma$ and 3$\sigma$ errors on the position of the nova (calculated from $r'$-band LT eruption data) indicated by green circles and the position of the nearby resolved source indicated by a red `$\times$'.
\label{prog1}}
\end{figure*}
In the first archival image, where the position was derived from the {\it r}$^{\prime}$-band LT eruption observations, the position of the nova is calculated to be $0^{\prime\prime}\!\!.09\pm0^{\prime\prime}\!\!.05$\,south and $0^{\prime\prime}\!\!.21\pm0^{\prime\prime}\!\!.05$\,east of the progenitor candidate. The errors on the positional transformation and on the nova/stellar source centroids imply that an association between the two sources can be ruled out at the 4.1$\sigma$ level. In the second image we calculate the position of the nova to be $0^{\prime\prime}\!\!.05\pm0^{\prime\prime}\!\!.05$\,north and $0^{\prime\prime}\!\!.25\pm0^{\prime\prime}\!\!.05$\,east. From this it appears the progenitor candidate may have a small, but real, eastward offset from the nova.
As a check for a systematic offset across the transformed field, we apply the positional transformation to 10 stars in close proximity to the nova (within $\sim$1$^{\prime}$) that were not used in the calculation of the positional transformation itself. There is no evidence for a systematic offset in these sources, with the average x/y offsets being less than the standard deviation of the offsets. The standard deviation does, however, indicate that the errors on the transformation may be slightly underestimated. We therefore apply the average x/y offsets (in the R\_SPECIAL image) to the position of the nova. Using the standard deviation as the error indicates an association is still ruled out, but at a reduced 3$\sigma$ confidence. It is also worth noting that the transformed position of the source to the south-east of the nova (the brightest star seen in the left panel of Figure\,\ref{prog1}) is consistent within 1.1$\sigma$ (using the errors of the transformation itself) with the centroided position from the VLT image. If there were a systematic offset affecting the nova transformation, one would expect it to also be present in this very nearby ($\sim$5$^{\prime\prime}$ separation) source.
We therefore conclude that, despite the close proximity of the progenitor candidate, it is most likely simply a chance alignment. However, this should be confirmed by late-time spectroscopy. Novae retain strong Balmer emission for a significant time after eruption, but over time the optical spectrum becomes increasingly dominated by [O\,{\sc iii}] emission lines. If this progenitor candidate is indeed the luminous accretion disk of the pre-eruption nova, a quiescent spectrum may be expected to reveal narrow Balmer and He\,{\sc ii} emission. The lack of (broader) [O\,{\sc iii}] lines would confirm we are not observing an extended tail of the nova eruption.
\section{Discussion}
A comparison of Nova IC\,1613 2015 with other IC\,1613 novae is not possible due to the lack of data for the 1954 and 1996 candidates. We can however compare it to other extragalactic novae residing in Local Group galaxies.
At $t_{2(V)}=13\pm2$\,days, Nova IC\,1613 2015 can be considered a fast-fading nova \citep{1957gano.book.....G}. Comparing it to the cumulative $t_2$ distribution of M31 novae from \citet{2016ApJ...817..143W} would place it in the fastest 20\% of novae. However, a better comparison may be the LMC, and comparing the $t_2$ value to those in Table\,2 of \citet{2013AJ....145..117S} reveals that in this (albeit small) sample, novae significantly slower than Nova IC\,1613 2015 are relatively rare. The low nova rate of the SMC, perhaps the best comparison to IC\,1613, makes a comparison to the overall SMC population difficult. There are several SMC novae with good light curves (see e.g.\ \citealp{1954PNAS...40..365H,1998A&A...335L..93D,2016ApJS..222....9M}) which display a broad range of decline times, and Nova IC\,1613 2015 would certainly not seem out of place amongst these. Indeed, there have been some novae that evolved much more slowly than Nova IC\,1613 2015 (e.g.\ Nova SMC 1994, \citealp{1998A&A...335L..93D}; Nova SMC 2001, \citealp{2004IBVS.5582....1L,2016ApJS..222....9M}).
The early decline spectra of Nova IC\,1613 2015 are not typical of novae. In M31, around 80\% of all novae belong to the Fe\,{\sc ii} class \citep{2011ApJ...734...12S}. The smaller samples of LMC and M33 novae suggest a lower proportion (perhaps around 50\%) of Fe\,{\sc ii} novae in those galaxies \citep{2012ApJ...752..156S,2013AJ....145..117S}. The hybrid spectroscopic class of novae can either evolve from one type to another or show both types simultaneously. Nova IC\,1613 2015 shows both Fe\,{\sc ii} and N\,{\sc ii} lines in the early decline spectra, classifying it as a hybrid nova. It is worth noting that it is not unreasonable to expect hybrid novae that transition from one type to another to go through a phase which simultaneously shows both types, even if it is only short lived. The early decline spectral evolution shows many similarities to the hybrid Nova LMC 1988 No.\,2 (see \citealp{1989MNRAS.241..827S} and \citealp{1991ApJ...376..721W}), although Nova IC\,1613 2015 fades significantly more slowly, Nova LMC 1988 No.\,2 having a $t_2$ of around 5\,days \citep{1989MNRAS.241..827S}. The evolution of Nova IC\,1613 2015 also appears similar to the Galactic nova V5114\,Sagittarii, which had a similar $t_2$ ($\sim 11$\,days) and showed somewhat similar spectroscopic evolution and associated velocities (H$\alpha$ FWHM $\sim 2000$\,km\,s$^{-1}$; \citealp{2006A&A...459..875E}), although in this case the spectrum shortly after peak appears closer to that of a typical Fe\,{\sc ii} nova \citep{2006A&A...459..875E} than Nova IC\,1613 2015.
As the SMC is the Local Group galaxy with observed novae best suited for comparison with IC\,1613, it is worth reviewing the spectroscopic information on SMC novae. Nova SMC 1951 was observed spectroscopically several times and clearly shows the Bowen blend emission complex \citep{1954PNAS...40..365H}. However, most novae show this complex at later times, so an unambiguous spectroscopic type cannot be assigned. Nova SMC 1952 was observed two days after peak, and likely showed He\,{\sc i} and He\,{\sc ii} emission \citep{1954AJ.....59R.193S}, classifying it as an He/N nova. Nova SMC 2001 was an Fe\,{\sc ii} nova \citep{2005A&A...435.1031M}. Nova SMC 2005 shows broad Fe\,{\sc ii} lines \citep{2005CBET..195....1M}, so is classed as an Fe\,{\sc ii}b nova. Most recently, the first spectrum taken of Nova SMC 2016 was consistent with it being a member of the He/N spectroscopic class \citep{2016ATel.9628....1W}. We note that Nova SMC 2016 has extensive pan-chromatic coverage (see e.g.\ \citealp{2016ATel.9635....1K,2016ATel.9688....1W,2016ATel.9733....1P,2016ATel.9810....1O}), which will be presented in forthcoming publication(s). The lack of early decline spectra of SMC novae prevents any conclusions being drawn on the proportion of novae that are Fe\,{\sc ii} novae or on how Nova IC\,1613 2015 compares to the SMC population.
There are clearly significant changes in the wavelength dependence of the underlying continuum in the early stages of the eruption. This is apparent from the spectra, where the slope of the continuum changes from $f_{\lambda}\propto\lambda^{-2.42\pm0.08}$ 1.59 days after discovery to $f_{\lambda}\propto\lambda^{-1.48\pm0.08}$ 6.59 days after discovery. It is supported by the photometry, where the nova shows a much redder colour 3.6 days after discovery compared to 1.6 days after discovery. In a nova eruption the {\it B}, {\it V} and {\it i}$^{\prime}$ magnitudes are not greatly affected by line emission until the latter stages.
If we ignore the first two photometry points (where we have already established the continuum changed dramatically over a short time), the {\it B}, {\it V}, and {\it i}$^{\prime}$ light curves are relatively well described by power laws. We find $f_{B({\mathrm{\lambda}})}\propto{t^{-1.22}}$, $f_{V({\mathrm{\lambda}})}\propto{t^{-1.28}}$ and $f_{i^{\prime}({\mathrm{\lambda}})}\propto{t^{-1.46}}$ (where {\it t} is days since discovery), which indicates the nova becomes increasingly blue as the eruption evolves (as seen between spectra 2 to 5). However, examining the ({\it B}$-${\it V}) colour evolution shows the picture is slightly more complex. The ({\it B}$-${\it V}) colour decreases (becomes bluer) until $m_V\sim20$ (around day 30$-$40), then turns around and begins increasing (becoming redder). This behaviour is seen in other novae (see for example Figure 39 in \citealp{2014ApJ...785...97H} and Figure 7 in \citealp{2016ApJ...833..149D}). While this seems in general agreement with the power-law of the final spectrum becoming shallower, we must also note that while the {\it B} and {\it V} magnitudes are not strongly influenced by line emission early in the eruption, during the nebular phase lines such as [O\,{\sc iii}] become increasingly dominant (and thus affect the broadband colours, but are removed from the power-law fitting in Section\,\ref{sec:sed}).
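The fitted flux power-laws can equivalently be expressed as magnitude declines that are linear in $\log t$,
\begin{equation}
f_{\lambda}\propto t^{-n} \;\Longleftrightarrow\; m(t)=m_{0}+2.5\,n\log_{10}t,
\end{equation}
so the indices above correspond to decline rates of approximately 3.1, 3.2 and 3.7\,mag per decade in time for {\it B}, {\it V} and {\it i}$^{\prime}$, respectively (here $m_{0}$ is the extrapolated magnitude at $t=1$\,day).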
The X-ray upper limits in Table\,\ref{tab:swift} indicate that either the SSS phase had not started before day 330 or that the nova did not become visible in soft X-rays at all. If this were an M31 nova, then the fast $t_2$ and reasonably high expansion velocity would (consistently) predict a SSS turn-on time of about 50--90 days, based on a subset of M31 novae with similar properties \citep[cf.][]{2014A&A...563A...2H}. This is in agreement with the optical spectra, which indicate that by day 57 the nova had entered its nebular phase, where the ejecta would have become optically thin to X-rays. However, note that only a fraction of novae in the M31 reference sample showed SSS emission, which cannot be explained by observational coverage alone \citep{2011A&A...533A..52H,2014A&A...563A...2H}.
If we assume that by day 57 the nuclear burning in the residual hydrogen envelope (i.e.\ the part that was not ejected) had already ceased, then we can estimate an upper limit on the mass of this envelope. Following the approach described in \citet[][see also the relevant references therein]{2014A&A...563A...2H}, a SSS turn-off time of 57~days would correspond to a burned mass of 2.7\tpower{-7} M$_{\sun}$. Such relatively low masses are rare, but have been estimated for a few fast novae in the M31 sample \citep{2014A&A...563A...2H}. However, we stress that for an individual object a large variety of factors, such as eruption geometry or inclination angle, can play a role in obscuring an existing SSS emission component.
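A simple order-of-magnitude energy-budget sketch (not the calibration actually used, which follows \citealp{2014A&A...563A...2H}) illustrates the scale of this estimate: if hydrogen burns at close to the Eddington luminosity, $L\sim10^{38}$\,erg\,s$^{-1}$, for $t_{\mathrm{off}}\simeq57$\,days ($\approx4.9\times10^{6}$\,s), the envelope mass consumed is
\begin{equation}
M_{\mathrm{burn}} \sim \frac{L\,t_{\mathrm{off}}}{X_{\mathrm{H}}\,\epsilon_{\mathrm{H}}} \approx \frac{10^{38}\times4.9\times10^{6}}{0.5\times6\times10^{18}}\,\mathrm{g} \approx 1.6\times10^{26}\,\mathrm{g} \sim 10^{-7}\,\mathrm{M}_{\sun},
\end{equation}
where $\epsilon_{\mathrm{H}}\approx6\times10^{18}$\,erg\,g$^{-1}$ is the energy released per gram of hydrogen burned and $X_{\mathrm{H}}$ is the hydrogen mass fraction; the adopted $L$ and $X_{\mathrm{H}}$ are illustrative assumptions, not fitted values, but the result is consistent in order of magnitude with the burned mass quoted above.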
Furthermore, there will of course be systematic differences between the nova samples of IC~1613 and M31, the latter being the population from which much of our knowledge of nova properties has been derived. Different metallicities have been found to affect the average nova properties: see for instance the comparisons of M31 and LMC novae by \citet{1993A&A...271..175D,2013AJ....145..117S} and the theoretical models of \citet{2006ApJS..167...59H,2013ApJ...779...19K}. It remains unclear whether these systematics are large enough to affect the SSS phase of the nova significantly (e.g.\ to confine it to the narrow gap between days 160--260). Without further evidence we could only speculate on the specific causes for the SSS non-detection, and we refrain from doing so.
\section{Summary and Conclusions}
We have presented detailed photometric and spectroscopic observations of the Nova IC\,1613 2015 eruption, from the early optically thick stage, through the early decline and nebular phases. This is the first detailed study of a nova residing in an irregular dwarf galaxy beyond the much closer MCs. Here we summarise our observations and conclusions:
\begin{enumerate}
\item We have undertaken a detailed observing campaign of Nova IC\,1613 2015, with ground-based photometry and spectroscopy led by the LT, with further observations from LCO. We also obtained UV photometry and X-ray observations with {\it Swift}.
\item The light curve shows a relatively smooth decline and the nova is classified as a fast nova, with $t_{2(V)}=13\pm2$ and $t_{3(V)}=26\pm2$\,days. The absolute peak magnitude of the nova is $M_V=-7.93\pm0.08$, which is typical for a classical nova.
\item The X-ray observations taken between 40--330\,days after discovery detected no SSS emission associated with the nova.
\item The spectra show that the nova is a member of the `hybrid' spectroscopic class, initially showing relatively strong Fe\,{\sc ii} lines and comparable N\,{\sc ii} lines. By the time it had declined by two magnitudes, the N\,{\sc ii}/N\,{\sc iii} features were significantly stronger than Fe\,{\sc ii}.
\item One of the more unusual features is a strong emission line peaking at $\sim$6162\,\AA. We interpret this as likely due to O\,{\sc i} (6157\,\AA), or possibly N~{\sc ii}.
\item The FWHM of the H$\alpha$ emission line is measured at around 1750\,km\,s$^{-1}$ and shows relatively little change over the course of the eruption.
\item The H$\alpha$/H$\beta$ ratio initially increases through the early decline spectra (due to self-absorption; peaking at $7.5\pm1.0$), before declining in the nebular spectrum. This self-absorption implies the Lyman lines were optically thick, and hence the {\it z}$^{\prime}$-band light curve may be significantly influenced by a strong O\,{\sc i} 8446\,\AA\ emission line produced by Ly$\beta$ fluorescence.
\item We also obtained a nebular spectrum of Nova IC\,1613 2015, with [N\,{\sc ii}], [O\,{\sc i}], [O\,{\sc ii}] and [O\,{\sc iii}] all detected. This makes it one of the first novae beyond the MCs to be observed in the nebular phase.
\item The first spectrum taken around peak shows a steep blue continuum of $f_{\lambda}\propto\lambda^{-2.42\pm0.08}$, similar to that expected from optically thick free-free emission, but also consistent with photospheric (black-body) emission. The second spectrum shows a dramatic change in the continuum to $f_{\lambda}\propto\lambda^{-1.48\pm0.08}$. A sudden change in the underlying continuum between the two epochs is supported by the photometry.
\item Despite the very close proximity of the nova to a stellar source, we find that this is most likely a chance alignment.
\end{enumerate}
To further our understanding of how nova eruptions depend on the underlying stellar population, it is important we take the opportunity to study novae occurring in environments significantly different from those found in the usual targets of M31 and our own Galaxy.
\section*{Acknowledgements}
We would like to thank the referee, Mike Shara, whose suggestions helped improve this paper. SCW acknowledges a visiting research fellowship at Liverpool John Moores University (LJMU). MH acknowledges the support of the Spanish Ministry of Economy and Competitiveness (MINECO) under the grant FDPI-2013-16933 as well as the support of the Generalitat de Catalunya/CERCA programme. The Liverpool Telescope is operated on the island of La Palma by LJMU in the Spanish Observatorio del Roque de los Muchachos of the Instituto de Astrofisica de Canarias with financial support from the UK Science and Technology Facilities Council. This work makes use of observations from the LCO network. This work made use of data supplied by the UK {\it Swift} Science Data Centre at the University of Leicester. Based in part on data obtained from the ESO Science Archive Facility under request numbers scw233242 and scw233245.
\section{Introduction}
Dynamical properties of ultra-cold gases have enjoyed increasing interest since the experimental achievement of Bose-Einstein condensation in 1995 \cite{bec1,bec2}. Modern experimental methods, including advanced trapping techniques and control of mutual interactions, enable experimental investigation of many problems which would previously have been considered on a theoretical level only. This opens a whole new research field of strongly correlated systems, with potential applications in areas such as quantum computing or quantum simulation of condensed matter problems \cite{Feynman,lewenstein2007,lewenstein2012,sowinski2010}. One example of a widely studied problem in the field is the system of a few particles confined in a double-well potential \cite{smerzi1997,milburn1997,menotti2001,meier2001,salgueiro2007,zollner2008,simon2012,he2012,liu2015,dobrzyniecki2017,tylutki2017}. Such systems have been realized experimentally and used to study the physics of bosonic condensates to great effect \cite{andrews1997,shin2004,albiez2005,levy2007,leblanc2011,depaz2014}.
On a theoretical level, the dynamics of bosons in a double-well system is usually studied in the framework of a simplified two-mode model. The model relies on the assumption that all particles occupying a particular well can be described with a single orbital. Thus, the single-particle basis is limited to two modes, chosen as the lowest-energy wave functions localized in the left and the right well, respectively. They are constructed from the ground and the first excited eigenstate of the single-particle Hamiltonian. In consequence, the dynamics of the bosonic system can be calculated almost straightforwardly \cite{raghavan1999,ostrovskaya2000,ananikian2006,lahaye2009,adhikari2014}.
Although the model is commonly used, its applicability is essentially limited. The fundamental assumption hidden in this approximation is that the on-site interaction energy is much smaller than the excitation energy needed to reach higher energy levels. It means that the model becomes increasingly inadequate as the interaction strength increases. Additionally, the model completely neglects local inter-particle correlations. In the strong-interaction regime, local multi-particle correlations arise in each well, and so the particles in a single site can no longer be adequately described by a single orbital \cite{guo2011,dobrzyniecki2016,sakmann2009,garciamarch2012}.
For intermediate interactions, some improvement of the two-mode approach can be conceived. In the traditional approach, the shapes of the single-particle wave functions are entirely independent of the interaction strength. By taking into account an influence of inter-particle interactions on the shape of single-particle wave functions, the two-mode description can be improved. Techniques of obtaining improved orbitals through variational and mean-field methods have been studied assuming time-independent \cite{masiello2005,streltsov2006} as well as time-dependent \cite{grond2012,dalton2012,sinatra2000} orbital wave functions.
In this paper we investigate a different, much simpler method of obtaining an effective time-independent two-mode basis. In our approach the shapes of the basis wave functions emerge naturally after diagonalization of the single-particle density matrix of properly chosen eigenstates of the many-body Hamiltonian. We describe the construction of such an effective basis for a system of two, three, and four interacting bosons in a one-dimensional double-well potential. Then, we examine the accuracy of the resulting two-mode model by comparing its predictions with those obtained by both the exact model and the traditional two-mode model. It is shown that the effective model indeed allows one to extend the validity of two-mode approximations to higher interaction strengths.
\section{The system under study}
\label{sec:systemunderstudy}
We consider a system of $N$ spinless bosons of mass $m$, confined in a one-dimensional double-well potential $V(x)$ and interacting via short-range interactions. We concentrate on systems of $N=2$, $N=3$, and $N=4$ particles, but the generalization to larger $N$, besides the numerical complexity, is straightforward. The short-range inter-particle interaction is approximated with a point-like potential $g\delta(x-x')$, where the parameter $g$, related to the $s$-wave scattering length, controls the interaction strength \cite{pethick2008}. Note that in the one-dimensional case the Dirac $\delta$ function defines a self-adjoint operator and therefore does not require any regularization \cite{busch1998}. We focus on repulsive interactions, $g > 0$. Experimentally, a quasi-one-dimensional geometry can be realized by introducing a strong harmonic confinement in the two remaining spatial directions. In this way the dynamics in these directions is frozen and the particles occupy only the transverse ground state. Consequently, the system becomes effectively one-dimensional.
The many-body Hamiltonian of the system, expressed in the second quantization formalism, has the form:
\begin{equation}
\label{HamManyBody}
\hat{\mathcal{H}} = \int\!\! \mathrm{d}x\,\hat{\Psi}^\dagger(x) \mathcal{H}_0 \hat{\Psi}(x) + \frac{g}{2} \int\!\!\mathrm{d}x\, \hat{\Psi}^\dagger(x) \hat{\Psi}^\dagger(x) \hat{\Psi}(x) \hat{\Psi}(x).
\end{equation}
Here $\hat{\Psi}(x)$ is a bosonic field operator that annihilates a particle at position $x$. The operator fulfills the bosonic commutation relations, $\left[\hat{\Psi}(x),\hat{\Psi}^\dagger(x')\right] = \delta(x-x')$ and $\left[\hat{\Psi}(x),\hat{\Psi}(x')\right] = 0$. The single-particle part of the Hamiltonian has the form
\begin{equation}
\mathcal{H}_0 = -\frac{\hbar^2}{2m} \frac{\mathrm{d}^2}{\mathrm{d}x^2} + V(x).
\end{equation}
We model an external double-well potential $V(x)$ as a combination of a harmonic oscillator potential with frequency $\Omega$, and a Gaussian barrier which separates the central region into two wells:
\begin{equation}
V(x) = \hbar\Omega \left[ \frac{m\Omega}{2\hbar} x^2 + \lambda \exp \left(-\frac{m\Omega}{2\hbar} x^2 \right) \right].
\end{equation}
The height of the barrier is directly related to the dimensionless parameter $\lambda$. In further discussion, we use natural harmonic oscillator units, {\it i.e.}, energy is measured in \(\hbar\Omega\) and length in \(\sqrt{\hbar/m\Omega}\).
The spectrum of $\mathcal{H}_0$ can be found numerically via an exact diagonalization on a dense grid in position representation, giving a set of eigenfunctions $\Phi_i(x)$ and their corresponding eigenenergies $\mathcal{E}_i$ \cite{dobrzyniecki2016}. Following the harmonic oscillator convention, we number the individual states beginning from $i = 0$. For $\lambda = 0$, obviously the well-known harmonic oscillator spectrum is recovered.
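A minimal sketch of such a grid diagonalization, in oscillator units ($\hbar=m=\Omega=1$); the grid size, extent, and state cutoff below are illustrative choices of ours, not values taken from this work:

```python
import numpy as np

def single_particle_spectrum(lam, n_grid=401, x_max=8.0, n_states=4):
    """Diagonalize H0 = -(1/2) d^2/dx^2 + V(x) on a uniform position grid.

    V(x) = x^2/2 + lam * exp(-x^2/2) is the double-well potential of the
    text; `lam` is the dimensionless barrier height lambda.
    """
    x = np.linspace(-x_max, x_max, n_grid)
    dx = x[1] - x[0]
    V = 0.5 * x**2 + lam * np.exp(-0.5 * x**2)
    # Three-point finite-difference kinetic energy plus diagonal potential.
    off = -0.5 / dx**2 * np.ones(n_grid - 1)
    H = np.diag(1.0 / dx**2 + V) + np.diag(off, 1) + np.diag(off, -1)
    energies, vecs = np.linalg.eigh(H)
    # Rescale eigenvectors so that int |Phi_i(x)|^2 dx = 1.
    Phi = vecs[:, :n_states] / np.sqrt(dx)
    return x, energies[:n_states], Phi
```

For $\lambda=0$ this recovers the harmonic oscillator energies $1/2, 3/2, \ldots$ to finite-difference accuracy, and for $\lambda>0$ the lowest two levels form the expected near-degenerate doublet.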
In the analysis of double-well problems, it is usual to adopt a basis of single-particle wave functions $\{\varphi_{Li}(x), \varphi_{Ri}(x)\}$, where the individual states are localized respectively in the left or the right well. These states are constructed as combinations of the odd and even eigenstates of the Hamiltonian:
\begin{align}
\label{eq:lrbasis}
\varphi_{Ri}(x) &= \frac{1}{\sqrt{2}}(\Phi_{2i}(x) + \Phi_{2i+1}(x)), \nonumber \\
\varphi_{Li}(x) &= \frac{1}{\sqrt{2}}(\Phi_{2i}(x) - \Phi_{2i+1}(x)).
\end{align}
Although the states $\{\varphi_{\sigma i}(x)\}$ are not eigenstates of the single-particle Hamiltonian ${\cal H}_0$, they form an orthonormal basis. In this basis the Hamiltonian $\mathcal{H}_0$ has both diagonal (average energies) and off-diagonal (tunneling) elements:
\begin{equation}
\int \varphi^*_{\sigma i}(x) \mathcal{H}_0 \varphi_{\sigma' j}(x) \mathrm{d}x = \delta_{ij}\left[\delta_{\sigma\sigma'} E_i-(1-\delta_{\sigma\sigma'})J_i\right],
\end{equation}
where
\begin{equation}
E_i = \frac{\mathcal{E}_{2i+1} + \mathcal{E}_{2i}}{2}, \qquad J_i = \frac{\mathcal{E}_{2i+1} - \mathcal{E}_{2i}}{2}.
\end{equation}
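The construction of the localized basis and of the parameters $E_i$ and $J_i$ from the numerically obtained eigenpairs can be sketched as follows; the barrier height and grid parameters are our own illustrative choices:

```python
import numpy as np

lam = 5.0                               # example barrier height (our choice)
x = np.linspace(-8.0, 8.0, 401)
dx = x[1] - x[0]
V = 0.5 * x**2 + lam * np.exp(-0.5 * x**2)
off = -0.5 / dx**2 * np.ones(x.size - 1)
E, Phi = np.linalg.eigh(np.diag(1.0 / dx**2 + V)
                        + np.diag(off, 1) + np.diag(off, -1))
Phi /= np.sqrt(dx)                      # int |Phi_n|^2 dx = 1
for n in (0, 1):                        # fix arbitrary eigenvector signs so
    if Phi[x > 0, n].sum() < 0:         # that Phi_0 + Phi_1 adds up on the
        Phi[:, n] *= -1                 # right-hand side of the barrier

i = 0                                   # lowest doublet
phi_R = (Phi[:, 2*i] + Phi[:, 2*i + 1]) / np.sqrt(2)
phi_L = (Phi[:, 2*i] - Phi[:, 2*i + 1]) / np.sqrt(2)
E_i = (E[2*i + 1] + E[2*i]) / 2         # average doublet energy
J_i = (E[2*i + 1] - E[2*i]) / 2         # tunneling amplitude
```

With a high barrier the resulting $\varphi_{R0}$ is almost entirely localized in the right well, and $J_0$ is a small positive splitting.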
The field operator $\hat{\Psi}(x)$ can be decomposed as
\begin{equation}
\label{eq:decomposition}
\hat{\Psi}(x) = \sum\limits_i \left[ \varphi_{Li}(x) \hat{a}_{Li} + \varphi_{Ri}(x) \hat{a}_{Ri} \right],
\end{equation}
where $\hat{a}_{\sigma i}$ annihilates a boson in state $\varphi_{\sigma i}(x)$. For numerical purposes the summation index $i$ in the decomposition (\ref{eq:decomposition}) is limited to some cutoff number $i_{max}$. In the case of the system under study, we have verified that $i_{max} = 15$ is sufficient, as the final results do not change significantly for larger $i_{max}$. Therefore, in further discussion we will treat the Hamiltonian with $i_{max} = 15$ as equivalent to the full many-body Hamiltonian (\ref{HamManyBody}).
By substituting (\ref{eq:decomposition}) into (\ref{HamManyBody}), the Hamiltonian can be written as:
\begin{align}
\label{eq:manybodyhamiltonian2}
\hat{\cal H} = &\sum\limits_{i} \left[ E_i (\hat{a}^\dagger_{Li} \hat{a}_{Li} + \hat{a}^\dagger_{Ri} \hat{a}_{Ri}) - J_i (\hat{a}^\dagger_{Li} \hat{a}_{Ri} + \hat{a}^\dagger_{Ri} \hat{a}_{Li}) \right] \nonumber \\
&+ \frac{1}{2} \sum\limits_{IJKL} U_{IJKL} \hat{a}^\dagger_I \hat{a}^\dagger_J \hat{a}_K \hat{a}_L,
\end{align}
where the indices $I,J,K,L$ represent double-indices $(\sigma,i)$ identifying single-particle states $\varphi_{\sigma i}(x)$. The interaction terms $U_{IJKL}$ can be calculated as:
\begin{equation}
U_{IJKL} = g \int\limits^\infty_{-\infty} \varphi^*_I(x) \varphi^*_J(x) \varphi_K(x) \varphi_L(x) \mathrm{d}x.
\end{equation}
The spectrum of the Hamiltonian (\ref{eq:manybodyhamiltonian2}) can be calculated numerically. To do so, we express the Hamiltonian in a matrix form in the $N$-particle Fock basis and diagonalize it. Then all properties of the system at any moment can be determined.
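The construction of this matrix in the $N$-particle Fock basis can be sketched as follows; this is a minimal, unoptimized implementation for a handful of modes, with helper names of our own choosing:

```python
import numpy as np
from itertools import combinations_with_replacement
from math import sqrt

def fock_basis(n_modes, n_particles):
    """All occupation states |n_0, ..., n_{M-1}> with sum(n) = N."""
    basis = []
    for combo in combinations_with_replacement(range(n_modes), n_particles):
        basis.append(tuple(combo.count(m) for m in range(n_modes)))
    return basis

def many_body_matrix(h, U, n_particles):
    """H = sum_ij h[i,j] a_i^+ a_j + (1/2) sum U[i,j,k,l] a_i^+ a_j^+ a_k a_l."""
    M = h.shape[0]
    basis = fock_basis(M, n_particles)
    index = {s: n for n, s in enumerate(basis)}
    H = np.zeros((len(basis), len(basis)))

    def ann(occ, m):                    # a_m |occ>, bosonic sqrt(n) factor
        if occ[m] == 0:
            return 0.0, None
        new = list(occ); new[m] -= 1
        return sqrt(occ[m]), tuple(new)

    def cre(occ, m):                    # a_m^+ |occ>, sqrt(n + 1) factor
        new = list(occ); new[m] += 1
        return sqrt(occ[m] + 1), tuple(new)

    for col, occ in enumerate(basis):
        for i in range(M):              # one-body terms
            for j in range(M):
                c1, s1 = ann(occ, j)
                if s1 is None:
                    continue
                c2, s2 = cre(s1, i)
                H[index[s2], col] += h[i, j] * c1 * c2
        for i in range(M):              # two-body terms
            for j in range(M):
                for k in range(M):
                    for l in range(M):
                        c1, s1 = ann(occ, l)
                        if s1 is None:
                            continue
                        c2, s2 = ann(s1, k)
                        if s2 is None:
                            continue
                        c3, s3 = cre(s2, j)
                        c4, s4 = cre(s3, i)
                        H[index[s4], col] += (0.5 * U[i, j, k, l]
                                              * c1 * c2 * c3 * c4)
    return basis, H
```

As a sanity check, for two non-interacting modes coupled by a tunneling $J$ and $N=2$ particles, the exact spectrum is $\{-2J,\,0,\,2J\}$.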
Here, our aim is to predict the time evolution of the interacting system of bosons being initially located in the lowest single-particle state of the chosen well. Namely we assume that initially the many-body state of the system is
\begin{equation} \label{IniState}
|\mathtt{ini}\rangle = \frac{1}{\sqrt{N!}}\left(\hat{a}^\dagger_{R0}\right)^N|\mathtt{vac}\rangle.
\end{equation}
The state of the system at any later moment $t$ can then be calculated straightforwardly as
\begin{equation}
\label{eq:timeevolution1}
|\mathbf{\Psi}(t)\rangle = \sum\limits_k \exp\left( \frac{-i \epsilon_\mathtt{k} t}{\hbar} \right) \langle \mathtt{k} | \mathtt{ini} \rangle |\mathtt{k}\rangle,
\end{equation}
where $|\mathtt{k}\rangle$ and $\epsilon_\mathtt{k}$ are the eigenstates and their corresponding eigenenergies of (\ref{eq:manybodyhamiltonian2}), respectively. It is important to note that $|\mathtt{k}\rangle$ and $\epsilon_\mathtt{k}$ depend directly on interaction strength $g$. However, to simplify the notation we do not write out this dependence explicitly.
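The spectral sum above can be implemented compactly; in this sketch (helper names are ours) `H` is any Hermitian matrix representation of the many-body Hamiltonian and `psi0` the initial state vector in the same basis:

```python
import numpy as np

def evolve(H, psi0, times, hbar=1.0):
    """|Psi(t)> = sum_k exp(-i eps_k t / hbar) <k|ini> |k>.

    Returns an array of shape (len(times), dim) whose rows are the
    evolved state vectors.
    """
    eps, vecs = np.linalg.eigh(H)       # eigenenergies and eigenstates |k>
    coeff = vecs.conj().T @ psi0        # overlaps <k|ini>
    return np.stack([vecs @ (np.exp(-1j * eps * t / hbar) * coeff)
                     for t in times])
```

For a two-level toy Hamiltonian this reproduces the familiar Rabi-like oscillation, and the norm of the state is conserved at all times.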
\section{Two-mode approximation}
\label{sec:effective}
A two-mode model is a natural approximation of any double-well system. Routinely, it involves choosing $i_{max} = 0$ in the decomposition (\ref{eq:decomposition}), i.e., the single-particle state basis is limited to the two lowest energy states, $\varphi_{L0}(x)$ and $\varphi_{R0}(x)$. Then the field operator $\hat{\Psi}(x)$ can be approximated by
\begin{equation}
\label{eq:traditional2Mode}
\hat{\Psi}(x) \approx \varphi_{L0}(x) \hat{a}_{L0} + \varphi_{R0}(x) \hat{a}_{R0}.
\end{equation}
By substituting (\ref{eq:traditional2Mode}) into the Hamiltonian (\ref{HamManyBody}), the two-mode many-body Hamiltonian is obtained, and similarly to the full-mode Hamiltonian it can be expressed in matrix form and diagonalized. In consequence, the time evolution can be predicted analogously to (\ref{eq:timeevolution1}).
In this traditional approximation the dynamics of the non-interacting system ($g = 0$) is reproduced perfectly, since the system remains in the space spanned by the two lowest orbitals, i.e., higher single-particle orbitals can be safely neglected. However, as the interactions increase, couplings to higher orbitals start to play an increasingly important role in the many-body Hamiltonian (\ref{eq:manybodyhamiltonian2}) and a model that neglects these states loses its ability to accurately reproduce the dynamics.
To overcome this difficulty we propose a modified version of the two-mode approximation, taking into account interactions between particles and utilizing our information about the initial state. In this approach the basis wave functions are no longer solutions of the single-particle Schr{\"o}dinger equation. Rather, the two-mode basis consists of a pair of orthogonal wave functions $\phi_A(x), \phi_B(x)$ which are specifically tailored to the system so as to recover its dynamical properties correctly. These two orbitals depend on the interactions; however, in the limit of vanishing interactions ($g=0$) they reduce to superpositions of the non-interacting orbitals $\varphi_{L0}$ and $\varphi_{R0}$.
Given an effective two-mode basis $\phi_A(x), \phi_B(x)$, it is easy to obtain the approximate dynamics in full analogy to the traditional two-mode approximation. First, we define the annihilation operators $\hat{a}_A$ and $\hat{a}_B$ annihilating bosons in appropriate effective single-particle orbitals $\phi_A(x)$ and $\phi_B(x)$. Then we decompose the field operator as:
\begin{equation}
\label{eq:decompositionAB}
\hat{\Psi}(x) \approx \phi_A(x) \hat{a}_A + \phi_B(x) \hat{a}_B
\end{equation}
and substitute this decomposition into the Hamiltonian (\ref{HamManyBody}). An effective two-mode Hamiltonian obtained in this way can be easily diagonalized and an approximate time evolution of the system can be predicted.
\section{Towards effective orbitals}
\begin{figure}
\includegraphics[width=1\linewidth]{fig1.pdf}
\caption{Projection of the initial state $|\mathtt{ini}\rangle$ on consecutive eigenstates of the many-body Hamiltonian \eqref{HamManyBody} as a function of interactions for different numbers of particles (thin black lines). Additionally, the cumulative contribution of the two most dominant eigenstates $|\mathtt{N}\rangle$ and $|\mathtt{N\!-\!1}\rangle$ is plotted (solid black lines). Note the different scale of interactions in the last plot, obtained for $N=4$ particles. Interaction strength $g$ is given in units of $\sqrt{\hbar^3\Omega/m}$.}
\label{Fig1}
\end{figure}
\label{sec:eigenstatebasis}
The most challenging task in an effective two-mode description is to find a proper construction of the orbitals $\phi_A(x)$ and $\phi_B(x)$. To make it as good as possible, one should clearly take into account not only the interactions between particles but also the initial state of the system, since in principle different initial states are coupled to different orbitals and in consequence evolve in time completely differently. To merge both these requirements, we first decompose the initial state (\ref{IniState}) into the eigenstates of the many-body Hamiltonian (\ref{HamManyBody}). Depending on the interactions and the number of particles $N$, the initial state is decomposed into a different number of eigenstates. However, in the non-interacting case ($g=0$), only the lowest $N+1$ eigenstates of the many-body Hamiltonian $|\mathtt{k}\rangle$ (with $\mathtt{k}\in\{0,\ldots,N\}$) have a nonzero overlap with the initial state, $\langle \mathtt{k} | \mathtt{ini} \rangle$. In this limit all these eigenstates
can be constructed from the two single-particle orbitals $\varphi_{L0}$ and $\varphi_{R0}$ as follows
\begin{equation}
|\mathtt{k}\rangle = \frac{1}{\sqrt{k!(N-k)!}}\left(\hat{b}_{+}^\dagger\right)^{N-k}\left(\hat{b}_{-}^\dagger\right)^{k}|\mathtt{vac}\rangle,
\end{equation}
where $\hat{b}_{\pm}=(\hat{a}_{R0}\pm\hat{a}_{L0})/\sqrt{2}$ are symmetric and antisymmetric combinations of annihilation operators in the lowest states of the left and the right well.
It is quite interesting to note that for non-vanishing interactions the situation becomes, in some sense, simpler. Although the total number of eigenstates $|\mathtt{k}\rangle$ contributing to the initial state of the system $|\mathtt{ini}\rangle$ increases, only two of them start to dominate in this decomposition. In Fig. \ref{Fig1} we show the overlap of the initial state with consecutive many-body eigenstates $|\mathtt{k}\rangle$ as a function of interactions for different numbers of particles $N$ and chosen barrier heights $\lambda$. The states are numbered as their counterparts in the limit of vanishing interactions ($g=0$). As can be seen, the cumulative contribution of the two selected eigenstates (solid thick lines) remains dominant over a large range of interactions. It means that in this range the initial state $|\mathtt{ini}\rangle$ can be well approximated by a proper superposition of only two many-body eigenstates, $|\mathtt{N}\rangle$ and $|\mathtt{N\!-\!1}\rangle$. This observation is the first step of our construction. The second is a direct consequence of the structural properties of these two many-body eigenstates. For each of these states one can calculate the single-particle density matrix
\begin{equation} \label{OneDensityMatrix}
\rho^{(k)}(x,x') = \frac{1}{N} \langle \mathtt{k} | \hat{\Psi}^\dagger(x)\hat{\Psi}(x') | \mathtt{k} \rangle,
\end{equation}
diagonalize it and find its decomposition to the natural single-particle orbitals
\begin{equation}
\rho^{(k)}(x,x') = \sum_i \lambda_i\, \psi^*_i(x)\psi_i(x').
\end{equation}
For convenience, the orbitals are ordered by their occupations, $\lambda_0>\lambda_1>\ldots$. In general, the few most occupied orbitals $\psi_0$, $\psi_1$, $\psi_2$, \ldots of a single-particle density matrix $\rho^{(k)}$ are natural candidates for the effective orbitals $\phi_A$ and $\phi_B$, since they carry information about interactions in the system. From our numerical analysis it follows that the two most important eigenstates $|\mathtt{N}\rangle$ and $|\mathtt{N\!-\!1}\rangle$, with the dominant contribution to the initial state, share almost the same set of natural single-particle orbitals. Moreover, the state $|\mathtt{N}\rangle$ is dominated by only one orbital $\psi_0(x)$, which reproduces the state $\Phi_{0}(x)$ for vanishing interactions. In contrast, the second eigenstate $|\mathtt{N\!-\!1}\rangle$ is dominated by two orbitals $\tilde\psi_0(x)$ and $\psi_1(x)$, corresponding to the single-particle orbitals $\Phi_{0}(x)$ and $\Phi_{1}(x)$, respectively. Of course, the single-particle orbitals $\psi_0(x)$ and $\tilde\psi_0(x)$ extracted from the eigenstates $|\mathtt{N}\rangle$ and $|\mathtt{N\!-\!1}\rangle$ are not precisely the same. However, it is possible to deterministically establish two orthogonal orbitals $\phi_A(x)\approx\psi_0(x)\approx\tilde\psi_0(x)$ and $\phi_B(x)\approx\psi_1(x)$ which give the best description of the two many-body eigenstates $|\mathtt{N}\rangle$ and $|\mathtt{N\!-\!1}\rangle$. Obviously, a unique choice of $\phi_A(x)$ and $\phi_B(x)$ does not exist. However, the following straightforward choice gives very good predictions for the dynamics of the system. Since the eigenstate $|\mathtt{N}\rangle$ is dominated by only one orbital, the first orbital $\phi_A(x)$ is simply set equal to $\psi_0(x)$. This orbital has a natural decomposition in the single-particle basis (\ref{eq:lrbasis}):
\begin{equation}
\label{eq:Astate}
\phi_A(x) = \sum_i \left[\lambda_{Li} \varphi_{Li}(x) + \lambda_{Ri} \varphi_{Ri}(x)\right]
\end{equation}
with some coefficients $\lambda_{\sigma i}$. Subsequently, having these coefficients in hand, one constructs the second orbital $\phi_B(x)$ as follows:
\begin{equation}
\label{eq:makingBstate}
\phi_B(x) = \sum_i (-1)^i \left[\lambda_{Ri} \varphi_{Li}(x) - \lambda_{Li} \varphi_{Ri}(x)\right].
\end{equation}
These definitions assure automatically that the modes $\phi_A(x)$ and $\phi_B(x)$ are orthogonal and they reduce to the traditional two-mode basis in the limit of vanishing interactions.
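The construction of the effective pair $\phi_A(x)$, $\phi_B(x)$ can be sketched as follows, assuming real wave functions on a uniform grid and a density matrix supplied by an exact-diagonalization code; all helper names are ours:

```python
import numpy as np

def effective_orbitals(rho, lr_modes, x):
    """Build the effective two-mode basis from a single-particle density matrix.

    rho: matrix rho(x, x') sampled on the uniform grid x.
    lr_modes: list of pairs (phi_Li, phi_Ri) of the localized basis functions.
    """
    dx = x[1] - x[0]
    occ, orbs = np.linalg.eigh(rho * dx)        # natural occupations/orbitals
    phi_A = orbs[:, -1] / np.sqrt(dx)           # most occupied orbital psi_0
    # Decompose phi_A in the left/right basis: coefficients lambda_{sigma i}.
    lam_L = [np.trapz(pl * phi_A, x) for pl, _ in lr_modes]
    lam_R = [np.trapz(pr * phi_A, x) for _, pr in lr_modes]
    # phi_B: swap the coefficients with an alternating sign, as in the text.
    phi_B = sum((-1) ** i * (lam_R[i] * lr_modes[i][0]
                             - lam_L[i] * lr_modes[i][1])
                for i in range(len(lr_modes)))
    return phi_A, phi_B
```

By construction $\langle\phi_A|\phi_B\rangle=\sum_i(-1)^i(\lambda_{Li}\lambda_{Ri}-\lambda_{Ri}\lambda_{Li})=0$, so orthogonality holds for any coefficients.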
We adopted the approach described above to extract the effective single-particle orbitals $\phi_A(x)$ and $\phi_B(x)$ in all the cases up to four particles. In each case studied the procedure and conclusions are the same. Therefore, we believe that the method can also be adopted for a larger number of particles. However, as seen in Fig.~\ref{Fig1}, for an increasing number of particles the range of interactions where two many-body eigenstates dominate in the decomposition of the initial state rapidly shrinks (note the different ranges of interactions in the different plots).
\section{Accuracy of the model}
\label{sec:results}
\begin{figure}
\includegraphics[width=0.8\linewidth]{fig2.pdf}
\caption{Population of the right well $N_R(t)$ as a function of time, as predicted by the different models studied, for different numbers of particles and example parameters of the model. In contrast to the standard two-mode model (red dashed lines), the effective two-mode description (solid blue lines) correctly recovers the results obtained from the full many-body Hamiltonian (solid black). The difference in predictions is clearly visible at longer times. Interaction strength $g$ is given in units of $\sqrt{\hbar^3\Omega/m}$.}
\label{Fig2}
\end{figure}
First, let us demonstrate a few examples confirming that the effective two-mode model indeed improves the recovery of the exact dynamics of the system. We focus on the time dependence of the well population, which is calculated by integrating the instantaneous single-particle density over the appropriate region of space. For the right well it is defined as:
\begin{equation} \label{eq:populationr}
N_{R}(t) = \int_0^\infty \langle \mathbf{\Psi}(t) | \hat{\Psi}^\dagger(x) \hat{\Psi}(x) | \mathbf{\Psi}(t) \rangle \mathrm{d}x.
\end{equation}
The definition for the left well is analogous.
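In a discrete mode basis this reduces to $N_R=\sum_{IJ}\langle \hat{a}^\dagger_I \hat{a}_J\rangle \int_0^\infty \varphi^*_I(x)\varphi_J(x)\,\mathrm{d}x$, which can be sketched as follows (an illustrative helper of ours, assuming real mode functions):

```python
import numpy as np

def right_well_population(rho1, modes, x):
    """N_R = sum_{IJ} <a_I^+ a_J> * int_0^inf phi_I(x) phi_J(x) dx.

    rho1[I, J] = <Psi(t)| a_I^+ a_J |Psi(t)> is the one-body matrix in the
    chosen mode basis; `modes` holds the (real) mode functions on grid x.
    """
    right = x > 0
    xr = x[right]
    overlap = np.array([[np.trapz(mi[right] * mj[right], xr)
                         for mj in modes] for mi in modes])
    return float(np.real(np.sum(rho1 * overlap)))
```

For two well-separated modes, one per well, this simply counts the particles in the right-localized mode.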
In Fig. \ref{Fig2} we plot the population of the right well $N_R(t)$ as a function of time, as predicted by the full many-body Hamiltonian and by both two-mode models. As can be seen, the evolution of the exact population has a specific oscillatory behavior (solid black lines). In the non-interacting case, the oscillations are directly related to the tunneling $J_0$, i.e., the characteristic frequency is equal to $2J_0/\hbar$. For increasing repulsion, the frequency of the oscillations is modified due to the effects of inter-particle interactions. In particular, tunneling via excited states accelerates the particle flow between the wells. The standard two-mode model, limited to the lowest unperturbed orbitals, is unable to reproduce this effect and in consequence the approximate dynamics generally underestimates the frequency of the oscillations (red dashed lines). This causes a growing phase shift between the exact and approximate values of $N_R(t)$. On the other hand, as seen in Fig. \ref{Fig2}, the effective two-mode model almost exactly reproduces the oscillation frequency, and only some small deviations from the exact predictions are visible.
\begin{figure}
\includegraphics[width=\linewidth]{fig3.pdf}
\caption{The difference $\Delta E$ between the eigenenergies of the two eigenstates contributing most to the initial state, calculated in the framework of a chosen two-mode approximation, divided by the same quantity obtained from the exact model, $\Delta E_{\mathrm{exact}}$. Red dashed lines correspond to the standard two-mode approximation, solid blue lines to the effective model. Note that over a wide range of interaction strengths $g$ the effective model, in contrast to the standard one, recovers the difference $\Delta E$ almost perfectly. In consequence, the effective two-mode description is able to predict the oscillations of the population $N_R(t)$ correctly. Interaction strength $g$ is given in units of $\sqrt{\hbar^3\Omega/m}$.}
\label{Fig3}
\end{figure}
One way to understand the sources of the improved accuracy of the effective two-mode model is to analyze the spectrum of the many-body Hamiltonian from the two-mode approximation point of view. In this case the state $|\mathtt{ini}\rangle$ is the only many-body state which describes all particles occupying the right well. Therefore, the frequency of the oscillations is directly related to the overlap between the temporal state of the system $|\boldsymbol{\Psi}(t)\rangle$ and the initial state $|\mathtt{ini}\rangle$. Since, for the considered initial state, there are two eigenstates of the many-body Hamiltonian, $|\mathtt{N}\rangle$ and $|\mathtt{N\!-\!1}\rangle$, with a significant contribution, the expression (\ref{eq:timeevolution1}) for the time evolution of the state of the system can be reduced to a sum of only two terms
\begin{align}
|\mathbf{\Psi}(t)\rangle &\approx \langle \mathtt{N\!-\!1} | \mathtt{ini}\rangle\,\mathrm{e}^{-i\epsilon_\mathtt{N\!-\!1} t/\hbar} |\mathtt{N\!-\!1} \rangle \nonumber \\
&+ \langle \mathtt{N} | \mathtt{ini}\rangle\, \mathrm{e}^{-i\epsilon_\mathtt{N} t/\hbar} | \mathtt{N} \rangle.
\end{align}
In consequence the resulting overlap is simply written as
\begin{equation}
\left| \langle \mathtt{ini} | \mathbf{\Psi}(t) \rangle \right|^2 \approx C_1 + C_2 \cos{\frac{(\epsilon_\mathtt{N} - \epsilon_\mathtt{N-1})t}{\hbar}},
\end{equation}
where the constants $C_1$ and $C_2$ are determined by the overlaps of the initial state with the two dominant eigenstates.
This simple analysis shows that the frequency of the dominant Fourier component is related to the difference $\Delta E$ between the eigenenergies of the two most dominant eigenstates of the Hamiltonian. Depending on the model considered, the eigenenergies of the many-body Hamiltonian are different. However, in a wide range of interactions, the difference $\Delta E$ is much closer to the exact value when it is obtained from the effective two-mode description than when calculated in the framework of the standard approximation. In Fig.~\ref{Fig3} we plot the difference $\Delta E$ divided by its value obtained from the exact model, $\Delta E_{\mathrm{exact}}$. As can be seen, this ratio rapidly drops in the case of the standard two-mode approximation (dashed red lines), while it remains very close to $1$ over a wide range of interactions in the case of the effective model (solid blue lines). In consequence, the oscillation frequency of the well population is preserved. This close agreement between the exact and the effective model can be attributed to the effective basis functions, which directly take inter-particle interactions into account.
\begin{figure}
\includegraphics[width=\linewidth]{fig4.pdf}
\caption{The smallest fidelity $\mathcal{F}_{min}$ achieved by different two-mode models as a function of interactions $g$. The red dashed lines and blue solid lines represent ${\cal F}_\mathrm{min}$ obtained for the traditional two-mode model and the effective two-mode model, respectively. The effective model describes the state of the system much better than the traditional approximation, especially for intermediate interactions. See the main text for details. Interaction strength $g$ is given in units of $\sqrt{\hbar^3\Omega/m}$.}
\label{Fig4}
\end{figure}
In order to systematically and quantitatively compare the accuracies of both two-mode models, one should focus not only on single-particle observables but also on the full quantum many-body state. To make such a comparison possible we introduce a temporal fidelity ${\cal F}(t)=\left|\langle \mathbf{\Psi}(t) | \psi(t) \rangle\right|^2$ as a measure of actual accuracy. Here, $|\boldsymbol{\Psi}(t)\rangle$ and $|\psi(t)\rangle$ are the quantum many-body states of the system predicted by the exact model and a chosen two-mode model, respectively. Of course, the fidelity defined in this way varies in time. Therefore, we assume that the quality of a chosen model is determined by the smallest value of ${\cal F}(t)$ in a chosen time period $t \in (0,T)$. For our purposes we take $T=5\pi\hbar/J_0$, i.e., the period in which the non-interacting system undergoes $5$ oscillations between the wells. In Fig.~\ref{Fig4} we show the smallest fidelity ${\cal F}_\mathrm{min}$ as a function of interactions, calculated for the traditional two-mode model (red dashed lines) and for the effective description (solid blue lines). This comparison shows that there is a visible improvement in the description of the many-body dynamics over a range of intermediate interaction strengths. However, for strong interactions both models perform equally badly. The reason is that any two-mode description has to break down at the moment when strong correlations between particles emerge.
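The minimal fidelity is straightforward to compute once both trajectories are expressed in a common basis (embedding the two-mode state into the full Fock space is assumed to have been done beforehand; the helper name is ours):

```python
import numpy as np

def minimal_fidelity(psi_exact, psi_model):
    """min_t F(t) with F(t) = |<Psi_exact(t)|psi_model(t)>|^2.

    Both arguments: arrays of shape (n_times, dim) whose rows are
    normalized state vectors at the sampled times, in a common basis.
    """
    overlaps = np.einsum('ti,ti->t', psi_exact.conj(), psi_model)
    return float(np.min(np.abs(overlaps) ** 2))
```

As a toy check, against a stationary exact state a model state rotating by an angle up to $\pi/4$ gives $\mathcal{F}_\mathrm{min}=\cos^2(\pi/4)=1/2$.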
\section{Conclusion}
\label{sec:conclusion}
We present an alternative way to obtain an appropriate two-mode description of the dynamics of a few bosons in a double-well potential. Our approach originates in the decomposition of the initial many-body state in the basis of exact eigenstates of the many-body Hamiltonian. Therefore, it takes into account interactions between particles as well as properties of the initial many-body quantum state. Consequently, the traditional two-mode single-particle basis is replaced by an alternative pair of wave functions, specifically tailored to the problem studied. These wave functions are extracted from the single-particle density matrices of specifically selected eigenstates of the many-body Hamiltonian.
We have shown that for systems of interacting bosons in a double-well potential, the resulting effective two-mode model significantly increases the accuracy of the predicted evolution in the range of intermediate interactions where the traditional two-mode model completely fails. The range of interaction strengths for which the model is applicable depends on parameters such as the number of particles or the height of the barrier between the wells.
The method presented here relies specifically on the properties of the chosen initial state and, to some extent, it can be generalized to other physical situations. For example, different initial states and larger numbers of particles can be considered. Another possible generalization consists in extending the description to a few of the lowest effective orbitals obtained in a very similar way. Although increasing the number of modes substantially increases the complexity of numerical calculations, a few-mode model is still significantly simpler than a full model with many single-particle orbitals taken directly from the non-interacting problem.
\section{Acknowledgments}
The authors would like to thank M. Gajda and M. Lewenstein for their fruitful suggestions and questions. This work was partially supported by the (Polish) National Science Center Grant No. 2016/22/E/ST2/00555.
\section*{References}
\section{Introduction}
Following the success of deep learning approaches in the ImageNet competition, there has been a surge of interest in applying deep models to
many tasks, such as image classification~\cite{Krizhevsky-et-al:2012} and speech recognition~\cite{Hinton-et-al:2012}.
Each new model proposes a better architecture that facilitates training through longer and longer chains of layers,
such as AlexNet~\cite{Krizhevsky-et-al:2012}, VGGNet~\cite{Simonyan-and-Zisserman:2014}, GoogLeNet~\cite{Szegedy-et-al:2015}, ResNet~\cite{He-et-al:2015}
and, more recently, DenseNet~\cite{Huang-et-al:2016}.
Several works have explained this success in computer vision, in particular~\cite{Zeiler-and-Fergus:2014} and~\cite{Donahue-et-al:2014}: deep models are able to learn a hierarchical feature representation, from pixels to lines, contours, shapes and objects.
These studies have not only helped to demystify the black box of deep learning, but have also paved the way for other approaches like transfer learning~\cite{Yosinski-et-al:2014},
where the first layers are believed to carry more general information and the last layers to convey information specific to the target task.
This paradigm has also been applied in text classification and sentiment analysis, with deeper and deeper networks being proposed in the literature:
\cite{Kim:2014} (shallow-and-wide CNN layer), \cite{Zhang-et-al:2015} (6 CNN layers), \cite{Conneau-et-al:2016} (29 CNN layers).
Besides the development of deep networks, there is a debate about which atom-level (word or character) would be the most effective for Natural Language Processing (NLP) tasks.
Word embeddings, which are continuous representations of words, initially proposed by~\cite{Bengio-et-al:2003} and widely adopted after word2vec~\cite{Mikolov-et-al:2013}
have been chosen as the main standard representations for most NLP tasks.
Based on this representation, a common belief is that, similarly to vision, the model learns hierarchical features from the text: words combine into n-grams, phrases, sentences, and so on.
Several recent works have extended this model to characters instead of words. Hence, \cite{Zhang-et-al:2015} propose for the first time an alternative character-based model,
while \cite{Conneau-et-al:2016} take a further step by introducing a very deep char-level network.
Nevertheless, it is still not clear which atom level is best and whether very deep networks at the word level are really better for text classification.
This work is motivated by these questions, and we hope to bring elements of an answer
by providing a full comparison of a shallow-and-wide CNN~\cite{Kim:2014} at both the character and word levels on the 5 datasets described in~\cite{Zhang-et-al:2015}.
Moreover, we propose an adaptation of a DenseNet~\cite{Huang-et-al:2016} for text classification and sentiment analysis at the word-level, which we compare with the state-of-the-art.
This paper is structured as follows:
Section~\ref{sec:sota} summarizes the related work while
Section~\ref{sec:model} describes the shallow CNN and introduces our adaptation of DenseNet for text.
We then evaluate our approach on the 5 datasets of~\cite{Zhang-et-al:2015} and show experimental results in Section~\ref{sec:xp}.
Experiments show that a shallow-and-wide CNN at the word level can beat a very deep CNN at the character level.
The paper concludes with some open discussions for future research about deep structures on text.
\section{Related work}
\label{sec:sota}
Text classification is an important task in Natural Language Processing. Traditionally, linear classifiers are often used for text classification~\cite{Joachims:1998, McCallum:1998, Fan-et-al:2008}. In particular, \cite{Joulin-et-al:2016} show that linear models could scale to a very large dataset rapidly with a proper rank constraint and a fast loss approximation.
However, a recent trend in the domain is to exploit deep learning methods, such as convolutional neural networks:~\cite{Kim:2014, Zhang-et-al:2015, Conneau-et-al:2016} and recurrent networks:~\cite{Yogatama-et-al:2017, Xiao-and-Cho:2016}.
Sentiment analysis has also long been an active research topic in NLP, with real-world applications in
market research~\cite{Qureshi-et-al:2013}, finance~\cite{Bollen-et-al:2011}, social science~\cite{Dodds-et-al:2011}, politics~\cite{Kaya-et-al:2013}.
The SemEval challenge was set up in 2013 to boost this field and still brings together many competitors,
who have been using an increasing proportion of deep learning models over the years~\cite{Nakov-et-al:2013,Rosenthal-et-al:2014,Nakov-et-al:2016}.
In the fifth edition of the competition, in 2017, at least 20 of the 48 participating teams used deep learning and neural network methods.
The top 5 winning teams all used deep learning or deep learning ensembles.
Other teams used classifiers such as Naive Bayes, Random Forest, Logistic Regression, Maximum Entropy and Conditional Random Fields~\cite{Rosenthal-et-al:2017}.
Convolutional neural networks with end-to-end training were first used in NLP in~\cite{Collobert-and-Weston:2008,Collobert-et-al:2011}.
The authors introduced a new \textit{global} max-pooling operation, shown to be effective for text, as an alternative to the conventional \textit{local} max-pooling
of the original LeNet architecture~\cite{Lecun-et-al:1998}.
Moreover, they proposed to transfer task-specific information by co-training multiple deep models on many tasks.
Inspired by this seminal work, \cite{Kim:2014} proposed a simpler architecture with slight modifications of~\cite{Collobert-and-Weston:2008},
consisting of fixed or fine-tuned pretrained word2vec embeddings~\cite{Mikolov-et-al:2013} and their combination in a multi-channel setting.
The author showed that this simple model can already achieve state-of-the-art performance on many small datasets.
\cite{Kalchbrenner-et-al:2014} proposed a dynamic \textit{k}-max pooling to handle variable-length input sentences.
This dynamic \textit{k}-max pooling is a generalisation of the max pooling operator where \textit{k} can be dynamically set as a part of the network.
All of these works are based on word input tokens, following~\citep{Bengio-et-al:2003}, which introduced for the first time a solution to fight the curse of dimensionality
thanks to distributed representations, also known as \textit{word embeddings}.
A limit of this approach is that typical sentences and paragraphs contain a small number of words, which prevents the previous convolutional models from being very deep: most of them indeed only have two layers.
Other works~\cite{Santos-and-Gatti:2014}
further noted that word-based input representations may not be very well adapted to social media inputs like Twitter,
where token usage may be extremely creative: slang, elongated words, contiguous sequences of exclamation marks, abbreviations, hashtags, etc.
Therefore, they introduced a convolutional operator on characters to automatically learn the notions of words and sentences.
This enables neural networks to be trained end-to-end on texts without any pre-processing, not even tokenization.
Later, \cite{Zhang-et-al:2015} enhanced this approach and proposed a deep CNN for text:
the number of characters in a sentence or paragraph being much longer, they can train for the first time up to 6 convolutional layers.
However, the structure of this model is designed by hand by experts and it is thus difficult to extend or generalize the model with arbitrarily different kernels and pool sizes.
Hence, \cite{Conneau-et-al:2016}, inspired by \cite{He-et-al:2016}, presented a much simpler but very deep model with 29 convolutional layers.
Besides convolutional networks, \cite{Kim-et-al:2016} introduced a character-aware neural language model by combining a CNN on character embeddings with a highway LSTM on subsequent layers.
\cite{Radford-et-al:2017} also explored a multiplicative LSTM (mLSTM) on character embeddings and found that a simple logistic regression learned on this representation can achieve
state-of-the-art results on the Sentiment Treebank dataset~\cite{Socher-et-al:2013} with only a few hundred labeled examples.
Capitalizing on the effectiveness of character embeddings, \cite{Dhingra-et-al:2016} proposed a hybrid word-character model to leverage the advantages of both worlds.
However, their initial experiments showed that this simple hybridization does not bring very good results:
the learned representations of frequent and rare words and characters are different, and co-training them may be harmful.
To alleviate this issue, \cite{Miyamoto-and-Cho:2016} proposed a scalar gate to control the ratio of both representations,
but empirical studies showed that this fixed gate may lead to suboptimal results.
\cite{Yang-et-al:2016} then introduced a fine-grained gating mechanism to combine both representations.
They showed improved performance on reading comprehension datasets, including Children's Book Test and SQuAD.
\section{Model}
\label{sec:model}
We describe next two model architectures, respectively shallow and deep, that we compare in Section~\ref{sec:xp} on several text classification tasks.
Both models share common components that are described next.
\subsection{Common components}
\subsubsection*{Lookup-Table Layer}
Every token (either word or character in this work) $\textit{i} \in Vocab$ is encoded as a \textit{d}-dimensional vector using a lookup table $Lookup_W (.)$:
\begin{equation}
Lookup_W (i) = \textbf{W}_i,
\end{equation}
where $\textbf{W} \in \rm I\!R^{\textit{d} \times \vert \textit{Vocab} \vert}$ is the embedding matrix, $\textbf{W}_i \in \rm I\!R^{d}$ is the $i^{th}$ column of \textbf{W} and ${d}$ is the number of embedding space dimensions.
The first layer of our model thus transforms indices of an input sentence ${{s_1}, {s_2}, \cdots, {s_n}}$ of \textit{n} tokens in \textit{Vocab} into a series of vectors
${{\textbf{W}_{s1}}, {\textbf{W}_{s2}}, \cdots, {\textbf{W}_{sn}}}$.
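The lookup operation above amounts to column selection in the embedding matrix. A minimal sketch (the vocabulary size, dimensions and token indices are illustrative assumptions):

```python
import numpy as np

# Toy embedding matrix W of shape (d, |Vocab|); each column is one token's vector.
d, vocab_size = 4, 10
rng = np.random.default_rng(0)
W = rng.normal(size=(d, vocab_size))

def lookup(indices):
    """Map a sequence of n token indices to a (d, n) matrix of embeddings."""
    return W[:, indices]

sentence = [3, 1, 7]   # token indices s_1 ... s_n
E = lookup(sentence)
print(E.shape)  # (4, 3)
```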
\subsubsection*{Classification Layer}
The embedding vectors that encode a complete input sentence are processed by one of our main models, which outputs
a feature vector $\textbf{x}$ that represents the whole sentence.
This vector is then passed to a classification layer that applies a \textit{softmax} activation function~\cite{Bridle:1990} to compute the predictive probabilities for all $K$ target labels:
\begin{equation}
p(y=k \vert X) = \frac{\exp(\textbf{w}_k^T\textbf{x}+b_k)}{\sum_{k'=1}^K \exp(\textbf{w}_{k'}^T\textbf{x}+b_{k'})}
\end{equation}
where the weight and bias parameters $\textbf{w}_k$ and ${b}_k$ are trained simultaneously with the main model's parameters. The model is trained by minimizing the cross-entropy loss.
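The classification layer can be sketched as a linear map followed by a numerically stable softmax, with the cross-entropy loss of a gold label (all shapes and values below are illustrative assumptions):

```python
import numpy as np

def softmax(z):
    z = z - z.max()          # subtract the max to stabilize the exponentials
    e = np.exp(z)
    return e / e.sum()

def classify(x, Wk, b):
    """x: sentence feature vector; Wk: (K, dim) weights; b: (K,) biases."""
    return softmax(Wk @ x + b)

def cross_entropy(probs, gold):
    """Negative log-likelihood of the gold class index."""
    return -np.log(probs[gold])

rng = np.random.default_rng(1)
x = rng.normal(size=5)
p = classify(x, rng.normal(size=(3, 5)), np.zeros(3))
print(p.sum())  # probabilities sum to 1
```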
\subsection{Shallow-and-wide CNN}
Our first shallow-and-wide CNN model is adapted from~\citep{Kim:2014}.
\begin{figure}[h!]
\centering
\includegraphics[scale=0.4]{shallow-and-wide-CNN.png}
\caption{Shallow-and-wide CNN, from~\protect\cite{Zhang-and-Wallace:2015}: 3 convolutional layers with respective kernel window sizes 3,4,5 are used. A global max-pooling is then applied to the whole sequence on each filter. Finally, the outputs of each kernel are concatenated to a unique vector and fed to a fully connected layer.}
\end{figure}
Let $\textbf{x}_i \in \rm I\!R^d$ be an input token (\textit{word} or \textit{character}).
An input \textit{h}-gram $\textbf{x}_{i:i+h-1}$ is transformed through a convolution filter $\textbf{w}_c \in \rm I\!R^{hd}$:
\begin{equation}
c_i = f(\textbf{w}_c \cdot \textbf{x}_{i:i+h-1} + b_c)
\end{equation}
with $b_c \in \rm I\!R$ a bias term and \textit{f} the non-linear ReLU function.
This produces a \textit{feature map} $\textbf{c} \in \rm I\!R^{n-h+1}$, where $n$ is the number of tokens in the sentence. Then we apply a global max-over-time pooling over the feature map:
\begin{equation}
\hat{c} = \max\{\textbf{c}\} \in \rm I\!R
\end{equation}
This process for one feature is repeated to obtain $m$ filters with different window sizes $h$. The resulting filters are concatenated to form a \textit{shallow-and-wide} network:
\begin{equation}
\textbf{g} = [ \hat{c}_1, \hat{c}_2, \cdots, \hat{c}_m]
\end{equation}
Finally, a fully connected layer is applied:
\begin{equation}
\hat{y} = f(\textbf{w}_y \cdot \textbf{g} + b_y)
\end{equation}
\subsubsection*{Implementation Details}
The kernel window sizes $h$ for character tokens are $N_f=(15,20,25)$ with $m=700$ filters.
For word-level, $N_f=(3,4,5)$ with $m=100$ filters.
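To make the pipeline concrete, the forward pass above can be sketched for a single filter per window size (real models use $m$ filters per size; the random weights and dimensions here are purely illustrative assumptions):

```python
import numpy as np

def relu(v):
    return np.maximum(v, 0.0)

def conv_global_max(X, w, b):
    """One filter: convolve over h-grams, then global max-over-time pooling.

    X: (n, d) token embeddings; w: (h, d) filter; b: scalar bias.
    """
    h, n = w.shape[0], X.shape[0]
    # c_i = f(w . x_{i:i+h-1} + b) for every valid position i
    c = [relu(np.sum(w * X[i:i + h]) + b) for i in range(n - h + 1)]
    return max(c)  # \hat{c} = max{c}

rng = np.random.default_rng(2)
X = rng.normal(size=(12, 8))         # a sentence of 12 tokens, d = 8
# Concatenate the pooled outputs of window sizes 3, 4, 5 into g.
g = np.array([conv_global_max(X, rng.normal(size=(h, 8)), 0.0)
              for h in (3, 4, 5)])
print(g.shape)  # (3,)
```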
\subsection{DenseNet}
\begin{figure}[h!]
\centering
\includegraphics[scale=0.6]{DenseNet.png}
\caption{Character-level DenseNet model for text classification. \textbf{3, Temp Conv, 128} means a temporal convolution with kernel window size 3 and 128 filters; \textbf{pool/2} means local max-pooling with kernel size = stride = 2, which halves the length of the sequence.}
\label{DenseNet}
\end{figure}
\subsubsection*{Skip-connections}
In order to increase the depth of deep models, \citep{He-et-al:2015} introduced a skip-connection that modifies the non-linear transformation
$\textbf{x}_l = \mathcal{F}_l (\textbf{x}_{l-1})$
between the output activations $\textbf{x}_{l-1}$ at layer $l-1$ and at layer $l$
with an identity function:
\begin{equation}
\textbf{x}_l = \mathcal{F}_l (\textbf{x}_{l-1}) + \textbf{x}_{l-1}
\end{equation}
This allows the gradient to backpropagate deeper in the network and limits the impact of various issues such as vanishing gradients.
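The skip connection can be sketched in a few lines; the transformation $\mathcal{F}_l$ below is an arbitrary stand-in (a $\tanh$ of a linear map), not the actual layers:

```python
import numpy as np

def residual_block(x, weight):
    """x_l = F_l(x_{l-1}) + x_{l-1} with F_l a stand-in learned mapping."""
    f = np.tanh(weight @ x)   # placeholder for F_l
    return f + x              # identity shortcut added to the output

rng = np.random.default_rng(3)
x = rng.normal(size=6)
y = residual_block(x, np.zeros((6, 6)))
print(np.allclose(y, x))  # True: with zero weights F_l vanishes and y == x
```

The identity path is what lets the gradient flow past $\mathcal{F}_l$ unchanged during backpropagation.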
\subsubsection*{Dense Connectivity}
\cite{Huang-et-al:2016} suggested that the additive combination of this skip connection with $\mathcal{F}_l (\textbf{x}_{l-1})$ may negatively affect the information flow in the model.
They proposed an alternative concatenation operator, which creates direct connections from any layer to all subsequent layers; the resulting model is called \textit{DenseNet}.
Hence, the ${l^{th}}$ layer has access to the feature maps of all preceding layers, $\textbf{x}_0,\cdots,\textbf{x}_{l-1}$, as input:
\begin{equation}
\textbf{x}_l = \mathcal{F}_l ([\textbf{x}_0,\textbf{x}_1,\cdots,\textbf{x}_{l-1}])
\end{equation}
This can be viewed as an extreme case of a ResNet: the distance between both ends of the network is reduced and the gradient may backpropagate more easily from the output back to the input, as illustrated in Figure~\ref{DenseBlock}.
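Dense connectivity can be sketched as follows: each layer consumes the concatenation of all previous feature maps (the growth rate, depth and random linear maps are illustrative assumptions):

```python
import numpy as np

def dense_forward(x0, num_layers, growth, rng):
    """Each layer l computes x_l = F_l([x_0, ..., x_{l-1}])."""
    features = [x0]
    for _ in range(num_layers):
        concat = np.concatenate(features)           # [x_0, ..., x_{l-1}]
        W = rng.normal(size=(growth, concat.size))  # stand-in for F_l
        features.append(np.maximum(W @ concat, 0))  # new ReLU feature map x_l
    return features

rng = np.random.default_rng(4)
feats = dense_forward(np.ones(8), num_layers=3, growth=4, rng=rng)
print([f.size for f in feats])  # [8, 4, 4, 4]: each layer adds `growth` features
```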
\begin{figure}[h!]
\centering
\includegraphics[scale=0.5]{DenseBlock.png}
\caption{Dense Block. Multiple convolutional filters output 2D matrices, which are all concatenated together before going into another dense block.}
\label{DenseBlock}
\end{figure}
\subsubsection*{Convolutional Block and Transitional Layer}
Following~\cite{He-et-al:2016}, we define $\mathcal{F}_l (.)$ as a function of three consecutive operations: batch normalization (BN), rectified linear unit (ReLU) and a 1x3 convolution.
To adapt to the changing dimensions induced by the concatenation operation, we define a transition layer, composed of a 1x3 convolution and a 1x2 local max-pooling, between two dense blocks. Given a vector $c^{l-1}$ output by a convolutional layer $l-1$, the local max-pooling layer \textit{l} outputs a vector $c^{l}$:
\begin{equation}
\big[c^{l}\big]_j = \max_{k (j-1) < i \leq k j} \big[c^{l-1}\big]_{i}
\end{equation}
where $1\leq i\leq n$ and \textit{k} is the kernel pooling size. The word-level DenseNet model is the same as the character-level model shown in Figure~\ref{DenseNet}, except for the last two layers,
where the local max-pooling and two fully connected layers are replaced by a single global average pooling layer.
We empirically observed that better results are thus obtained with word tokens.
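The local max-pooling equation above can be sketched directly (with a toy input vector; a trailing partial window is simply dropped, an assumption on boundary handling):

```python
import numpy as np

def local_max_pool(c, k=2):
    """[c^l]_j = max over the j-th length-k window of [c^{l-1}]_i."""
    n = (len(c) // k) * k          # drop a trailing partial window
    return c[:n].reshape(-1, k).max(axis=1)

c = np.array([1.0, 3.0, 2.0, 0.5, 4.0, 4.5])
print(local_max_pool(c, k=2))  # [3.  2.  4.5]
```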
\subsubsection*{Implementation Details}
The kernel window size for both character and word tokens is $h=3$ tokens. For the word level, the kernel of the last local max-pooling is 8, while it is equal to 3 at the character level (because the size of the sequence is shorter). Following~\cite{Conneau-et-al:2016}, we experiment with the two most effective configurations for the word and character levels: $N_b=(4-4-4-4)$ and $N_b=(10-10-4-4)$, which denote the number of convolutional layers in each of the four blocks.
\section{Experimental evaluation}
\label{sec:xp}
\begin{table*}[h]
\small
\centering
\begin{tabular}{|c|c|c|c|c|c|}
\hline \bf Models & \bf AGNews & \bf Yelp Bin & \bf Yelp Full & \bf DBPedia & \bf Yahoo \\ \hline
Char shallow-and-wide CNN & 90.7 & 94.4 & 60.3 & 98.0 & 70.2 \\
Char-DenseNet $N_b=(4-4-4-4)$ Global Average-Pooling & 90.4 & 94.2 & 61.1 & 97.7 & 68.8 \\
Char-DenseNet $N_b=(10-10-4-4)$ Global Average-Pooling & 90.6 & 94.9 & 62.1 & 98.2 & 70.5 \\
Char-DenseNet $N_b=(4-4-4-4)$ Local Max-Pooling & 90.5 & 95.0 & 63.6 & 98.5 & 72.9 \\
Char-DenseNet $N_b=(10-10-4-4)$ Local Max-Pooling & 92.1 & 95.0 & 64.1 & 98.5 & 73.4 \\
Word shallow-and-wide CNN & 92.2 & \bf 95.9 & \bf 64.9 & \bf 98.7 & 73.0 \\
Word-DenseNet $N_b=(4-4-4-4)$ Global Average-Pooling & 91.7 & 95.8 & 64.5 & \bf 98.7 & 70.4* \\
Word-DenseNet $N_b=(10-10-4-4)$ Global Average-Pooling & 91.4 & 95.5 & 63.6 & 98.6 & 70.2* \\
Word-DenseNet $N_b=(4-4-4-4)$ Local Max-Pooling & 90.9 & 95.4 & 63.0 & 98.0 & 67.6* \\
Word-DenseNet $N_b=(10-10-4-4)$ Local Max-Pooling & 88.8 & 95.0 & 62.2 & 97.3 & 68.4* \\
\hline
bag of words~\cite{Zhang-et-al:2015} & 88.8 & 92.2 & 58.0 & 96.6 & 68.9 \\
ngrams~\cite{Zhang-et-al:2015} & 92.0 & 95.6 & 56.3 & 98.6 & 68.5 \\
ngrams TFIDF~\cite{Zhang-et-al:2015} & 92.4 & 95.4 & 54.8 & \bf 98.7 & 68.5 \\
fastText~\cite{Joulin-et-al:2016} & \bf 92.5 & 95.7 & 63.9 & 98.6 & 72.3 \\
char-CNN~\cite{Zhang-et-al:2015} & 87.2 & 94.7 & 62.0 & 98.3 & 71.2 \\
char-CRNN~\cite{Xiao-and-Cho:2016} & 91.4 & 94.5 & 61.8 & 98.6 & 71.7 \\
very deep char-CNN~\cite{Conneau-et-al:2016} & 91.3 & 95.7 & 64.7 & \bf 98.7 & 73.4 \\
Naive Bayes~\cite{Yogatama-et-al:2017} & 90.0 & 86.0 & 51.4 & 96.0 & 68.7 \\
Kneser-Ney Bayes~\cite{Yogatama-et-al:2017} & 89.3 & 81.8 & 41.7 & 95.4 & 69.3 \\
MLP Naive Bayes~\cite{Yogatama-et-al:2017} & 89.9 & 73.6 & 40.4 & 87.2 & 60.6 \\
Discriminative LSTM~\cite{Yogatama-et-al:2017} & 92.1 & 92.6 & 59.6 & \bf 98.7 & \bf 73.7 \\
Generative LSTM-independent comp.~\cite{Yogatama-et-al:2017} & 90.7 & 90.0 & 51.9 & 94.8 & 70.5 \\
Generative LSTM-shared comp.~\cite{Yogatama-et-al:2017} & 90.6 & 88.2 & 52.7 & 95.4 & 69.3 \\
\hline
\end{tabular}
\caption{Accuracy of our proposed models (10 top rows) and of state-of-the-art models from the literature (13 bottom rows).}
\label{tab:res}
\end{table*}
\subsection{Tasks and data}
We test our models on the 5 datasets used in~\cite{Zhang-et-al:2015} and summarized in Table~\ref{tab:corpus}.
These datasets are:
\begin{itemize}
\item AGNews: internet news articles~\cite{AGNews} composed of titles plus descriptions and classified into 4 categories: World, Entertainment, Sports and Business, with 30k training samples and 1.9k test samples per class.
\item Yelp Review Polarity: obtained from the Yelp Dataset Challenge in 2015; reviews with star labels 1 and 2 are considered negative, and those with 3 and 4 positive. Each polarity class has 280k training samples and 19k test samples.
\item Yelp Review Full: obtained from the same Yelp Dataset Challenge in 2015, with five star labels, from 1 to 5. Each star label has 130k training samples and 10k testing samples.
\item DBPedia: 14 non-overlapping ontology classes picked from DBpedia 2014 (Wikipedia). Each class has 40k training samples and 5k testing samples.
\item Yahoo! Answers: the ten largest main categories from the Yahoo! Answers Comprehensive Questions and Answers dataset, version 1.0. Each class contains 140k training samples and 5k testing samples; each sample includes the question title, question content and best answer. For the word-level DenseNet, we only used 560k training samples due to memory limitations.
\end{itemize}
\begin{table}[h!]
\small
\centering
\begin{tabular}{|l|r|r|r|l|}
\hline \bf Dataset & \bf \#y & \bf \#train & \bf \#test & \bf Task \\ \hline
AGNews & 4 & 120k & 7.6k & ENC \\
Yelp Binary & 2 & 560k & 38k & SA \\
Yelp Full & 5 & 650k & 38k & SA \\
DBPedia & 14 & 560k & 70k & OC \\
Yahoo & 10 & 1 400k & 60k & TC \\
\hline
\end{tabular}
\caption{Statistics of the datasets used in our experiments: number of target labels ({\bf \#y}), of training samples ({\bf \#train}) and of test samples ({\bf \#test}); \textbf{ENC}: English News Categorization, \textbf{SA}: Sentiment Analysis, \textbf{OC}: Ontology Classification, \textbf{TC}: Topic Classification}
\label{tab:corpus}
\end{table}
\subsection{Hyperparameters and Training}
For all experiments, we train our model's parameters with the Adam optimizer~\cite{Kingma-and-Ba:2014}, with an initial learning rate of 0.001 and a mini-batch size of 128. The model is implemented in TensorFlow and trained on a GPU cluster (with 12~GB of RAM per GPU). The hyperparameters, described below, are chosen following~\cite{Zhang-et-al:2015} and~\cite{Kim:2014}. On average, training takes about 10 epochs to converge.
\subsubsection{Character-level}
Following \cite{Zhang-et-al:2015}, each character is represented as a one-hot encoding vector where the dictionary contains the following 69 tokens: $"abcdefghijklmnopqrstuvwxyz0123456789-,$ $;.!?:’"/|\_\#\%\hat{}\&\star\tilde{}'+=<>()[]{}"$. The maximum sequence length is 1014 following~\cite{Zhang-et-al:2015}; smaller texts are padded with 0 while larger texts are truncated.
The convolutional layers are initialized following~\cite{Glorot-and-Bengio:2010}.
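The character quantization just described, one-hot encoding with padding and truncation to a fixed length, can be sketched as follows (the toy alphabet is a small illustrative subset of the 69 tokens, and the maximum length is shortened for the example):

```python
import numpy as np

def quantize(text, alphabet, max_len):
    """One-hot encode characters; pad short texts with zeros, truncate long ones."""
    index = {ch: i for i, ch in enumerate(alphabet)}
    out = np.zeros((max_len, len(alphabet)))
    for pos, ch in enumerate(text[:max_len]):  # truncate beyond max_len
        if ch in index:                        # unknown chars stay all-zero
            out[pos, index[ch]] = 1.0
    return out                                 # remaining rows are zero padding

X = quantize("cab!", "abc", max_len=6)
print(X.shape, int(X.sum()))  # (6, 3) 3  -- '!' is not in the toy alphabet
```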
\subsubsection{Word-level}
The embedding matrix \textit{W} is initialized with 300-dimensional word2vec vectors pretrained on 100 billion words from Google News~\cite{Mikolov-et-al:2013}; out-of-vocabulary words are initialized randomly from a uniform distribution over $[-0.1;0.1]$. The embeddings are updated during the model's training using backpropagation. A dropout of 0.5 is used on the shallow model to prevent overfitting.
The shallow-and-wide CNN requires 10 hours of training on the smallest dataset, and one day on the largest.
The DenseNet respectively requires 2 and 4 days for training.
\subsection{Experimental results}
Table~\ref{tab:res} details the accuracy obtained with our models (10 rows on top) and compares them with state-of-the-art results (13 rows at the bottom)
on 5 corpora and text classification tasks (columns).
The models from the literature we compare to are:
\begin{itemize}
\item \textbf{bag of words:} The BOW model is based on the most frequent words from the training data~\cite{Zhang-et-al:2015}
\item \textbf{ngrams:} The bag-of-ngrams model exploits the most frequent word n-grams from the training data~\cite{Zhang-et-al:2015}
\item \textbf{ngrams TFIDF:} Same as the ngrams model but uses the words TFIDF (term-frequency inverse-document-frequency) as features~\cite{Zhang-et-al:2015}
\item \textbf{fastText:} A linear word-level model with a rank constraint and fast loss approximation~\cite{Joulin-et-al:2016}
\item \textbf{char-CNN:} Character-level Convolutional Network with 6 \textit{hand-designed} CNN layers~\cite{Zhang-et-al:2015}
\item \textbf{char-CRNN:} Recurrent Layer added on top of a Character Convolutional Network~\cite{Xiao-and-Cho:2016}
\item \textbf{very deep CNN:} Character-level model with 29 Convolutional Layers inspired by ResNet~\cite{Conneau-et-al:2016}
\item \textbf{Naive Bayes:} A simple count-based word unigram language model based on the Naive Bayes assumption~\cite{Yogatama-et-al:2017}
\item \textbf{Kneser-Ney Bayes:} A more sophisticated word count-based language model that uses tri-grams and Kneser-Ney smoothing~\cite{Yogatama-et-al:2017}
\item \textbf{MLP Naive Bayes:} An extension of the Naive Bayes word-level baseline using a two layer feedforward neural network~\cite{Yogatama-et-al:2017}
\item \textbf{Discriminative LSTM:} Word-level model with logistic regression on top of a traditional LSTM~\cite{Yogatama-et-al:2017}
\item \textbf{Generative LSTM-independent comp.:} A class-based word language model with no shared parameters across classes~\cite{Yogatama-et-al:2017}
\item \textbf{Generative LSTM-shared comp.:} A class-based word language model with shared components across classes~\cite{Yogatama-et-al:2017}
\end{itemize}
Figure~\ref{fig:res} visually compares the performance of 3 character-level models with 2 word-level models. The character-level models include our shallow-and-wide CNN and two models from the literature: the 6-layer CNN of~\cite{Zhang-et-al:2015} and the 29-layer CNN of~\cite{Conneau-et-al:2016}. At the word level, we present our shallow-and-wide CNN and the best DenseNet, $N_b=(4-4-4-4)$ with Global Average-Pooling.
\begin{figure*}[h]
\centering
\includegraphics[scale=0.7]{Results.png}
\caption{Comparison of character (\textit{in blue, on the left}) and word-level (\textit{in red, on the right}) models on all datasets. On character-level, we compare our shallow-and-wide model with the 6 CNN layers of~\cite{Zhang-et-al:2015} and the 29-layers CNN of~\cite{Conneau-et-al:2016}. On word-level, we compare the shallow-and-wide CNN with our proposed DenseNet.}
\label{fig:res}
\end{figure*}
The main conclusions of these experiments are threefold:
\subsubsection*{Impact of depth for character-level models}
Deep character-level models do not significantly outperform the shallow-and-wide network. The shallow-and-wide network (row 1 in Table~\ref{tab:res}) achieves 90.7\%, 94.4\% and 98.0\% on AGNews, Yelp Bin and DBPedia respectively, compared to 91.3\%, 95.7\% and 98.7\% for the very deep CNN of~\cite{Conneau-et-al:2016}. Although the deep structure achieves a slight gain in performance on these three datasets, the difference is not significant.
Interestingly, a very simple shallow-and-wide CNN obtains results very close to the deep 6-layer CNN of~\cite{Zhang-et-al:2015}, whose structure must be designed meticulously.
For the smallest dataset, AGNews, we suspect that the deep model {\bf char-CNN} performs badly because it needs more data to benefit from depth.
The deep structure gives an improvement of about 4\% on Yelp Full and Yahoo (first row of Table~\ref{tab:res} vs. {\bf very deep char CNN}), which is interesting but does not match the gains observed in image classification.
We tried various configurations on the shallow models, $N_f=(15,20,25)$, $N_f=(10,15,20,25)$ and $N_f=(15,22,29,36)$, but none did better.
\subsubsection*{Impact of depth for word-level models}
The DenseNet is better with 20 layers, $N_b=(4-4-4-4)$, than with 32 layers, $N_b=(10-10-4-4)$, and Global Average-Pooling is better than the traditional Local Max-Pooling: the opposite of what we observe at the character level. This is likely a consequence of the fact that the observed sequence length is much shorter with words than with characters.
However, the most striking observation is that all deep models are matched or outperformed by the shallow-and-wide model on all datasets,
although it is still unclear whether this is because the input sequences are too short to benefit from depth or for another reason.
Further experiments are required to investigate the underlying reasons for this failure of depth at the word level.
\subsubsection*{State-of-the-art performances with shallow-and-wide word-level model}
With a shallow-and-wide network at the word level, we achieve results very close to the state of the art (SOTA) on AGNews and Yahoo, match the SOTA on DBPedia, and set a new SOTA on two datasets: Yelp Binary and Yelp Full.
We also empirically found that a word-level shallow model may outperform a very deep char-level network.
This confirms that word observations are still more effective than character inputs for text classification.
In practice, quick training at the word level with a simple convolutional model may already produce good results.
\subsection{Discussion}
\subsubsection*{Text representation - discrete, sparse}
Very deep models do not seem to bring a significant advantage over shallow networks for text classification, as opposed to
their performances in other domains such as image processing.
We believe one possible reason may be related to the fact that images are represented as real and dense values, as opposed to the discrete, artificial and sparse representation of text. The first convolutional operation yields a matrix (2D) for an image but a vector (1D) for text (see Figure~\ref{Image_vs_text}). The same deep network applied to these two different representations (dense and sparse) will obviously behave differently. Empirically, we found that a deep network on 1D (text) is less effective and learns less information than on 2D (images).
\begin{figure*}[h!]
\centering
\includegraphics[scale=0.7]{image_vs_text.png}
\caption{Inside operations of the convolution on \textbf{Image} vs \textbf{Text}. Images have real-valued, dense representations; text consists of discrete tokens with an artificial, sparse representation. The output of the first convolutional layer on an image is still a \textit{matrix}, but on text it is reduced to a \textit{vector}.}
\label{Image_vs_text}
\end{figure*}
\subsubsection*{Local vs Global max-pooling}
A global max-pooling~\cite{Collobert-and-Weston:2008}, which retrieves the most influential feature, may already be good enough for sparse and discrete input text,
and gives similar results to a local max-pooling within a deep network.
\subsubsection*{Word vs Character level}
The character level can be a choice, but the word level remains the most effective. Moreover, using a character-level representation \textit{requires} a very deep model, which is less practical because it
takes a long time to train.
\section{Conclusion}
In computer vision, several works have shown the importance of depth in neural networks and the major benefits that can be gained by
stacking many well-designed convolutional layers in terms of performances.
However, such advantages do not necessarily transfer to other domains, and in particular Natural Language Processing,
where the impact of depth in the model is still unclear.
This work carries out a number of additional experiments to further explore this question and potentially bring new insights or
confirm previous findings.
We further investigate another related question about which type of textual inputs, characters or words, should be chosen at a given depth.
By evaluating on several text classification and sentiment analysis tasks, we show that a shallow-and-wide convolutional neural
network at the word-level is still the most effective, and that increasing the depth of such convolutional models with word inputs
does not bring significant improvement.
Conversely, deep models outperform shallow networks when the input text is encoded as a sequence of characters; although such deep models
approach the performance of word-level networks, they remain worse on average.
Another contribution of this work is the proposal of a new deep model that is an adaptation of DenseNet for text inputs.
Based on the literature and the results presented in this work, our main conclusion is that deep models have not yet proven to
be more effective than shallow models for text classification tasks.
Nevertheless, further research is needed to confirm or refute this observation on other datasets,
natural language processing tasks and models.
Indeed, this work derives from reference deep models that have originally been developed for image processing,
but novel deep architectures for text processing might of course challenge this conclusion in the near future.
\section*{Acknowledgments}
Part of the experiments in this work were run on two GPU clusters: \textit{Grid5000 Inria/Loria Nancy, France} and \textit{Romeo Reims, France}. We would like to thank both consortiums for giving us access to their resources.
\bibliographystyle{emnlp_natbib}
\section{Introduction}
\label{sec:introduction}
Neural networks (NNs) are powerful approximators, which are built by interleaving linear layers with nonlinear mappings (generally called \emph{activation functions}). The latter step is usually implemented using an element-wise, (sub-)differentiable, and fixed nonlinear function at every neuron. In particular, the current consensus has shifted from the use of contractive mappings (e.g., sigmoids) to the use of piecewise-linear functions (e.g., rectified linear units, ReLUs \citep{glorot2010understanding}), allowing a more efficient flow of the backpropagated error \citep{goodfellow2016deep}. This relatively inflexible architecture might help explain the extreme redundancy found in the trained parameters of modern NNs \citep{denil2013predicting}.
Designing ways to adapt the activation functions themselves, however, faces several challenges. On one hand, we can parameterize a known activation function with a small number of trainable parameters, describing for example the slope of a particular linear segment \citep{he2015delving}. While immediate to implement, this only results in a small increase in flexibility and a marginal improvement in performance in the general case \citep{agostinelli2014learning}. On the other hand, a more interesting task is to devise a scheme allowing each activation function to model a large range of shapes, such as any smooth function defined over a subset of the real line. In this case, the inclusion of one (or more) hyper-parameters enables the user to trade off between greater flexibility and a larger number of parameters per-neuron. We refer to these schemes in general as \emph{non-parametric} activation functions, since the number of (adaptable) parameters can potentially grow without bound.
There are three main classes of non-parametric activation functions known in the literature: adaptive piecewise linear (APL) functions \citep{agostinelli2014learning}, maxout networks \citep{goodfellow2013maxout}, and spline activation functions \citep{guarnieri1999multilayer}. These are described more in depth in Section \ref{sec:nonparametric_afs}. In there, we also argue that none of these approaches is fully satisfactory, meaning that each of them loses one or more desirable properties, such as smoothness of the resulting functions in the APL and maxout cases (see Table \ref{tab:non_parametric_afs_comparison} in Section \ref{sec:proposed_kernel_afs} for a schematic comparison).
In this paper, we propose a fourth class of non-parametric activation functions, which are based on a kernel representation of the function. In particular, we define each activation function as a linear superposition of several kernel evaluations, where the dictionary of the expansion is fixed beforehand by sampling the real line. As we show later on, the resulting kernel activation functions (KAFs) have a number of desirable properties, including: (i) they can be computed cheaply using vector-matrix operations; (ii) they are linear with respect to the trainable parameters; (iii) they are smooth over their entire domain; (iv) using the Gaussian kernel, their parameters only have \textit{local} effects on the resulting shapes; (v) the parameters can be regularized using any classical approach, including the possibility of enforcing sparseness through the use of $\ell_1$ norms. To the best of our knowledge, none of the known methods possess all these properties simultaneously. We call a NN endowed with KAFs at every neuron a \textit{Kafnet}.
Importantly, framing our method as a kernel technique allows us to potentially leverage a huge literature on kernel methods, spanning statistics, machine learning \citep{hofmann2008kernel}, and signal processing \citep{liu2011kernel}. Here, we preliminarily demonstrate this by discussing several heuristics to choose the kernel hyper-parameter, along with techniques for initializing the trainable parameters. However, much more can be applied in this context, as we discuss in depth in the conclusive section. We also propose a bi-dimensional variant of our KAF, allowing the information from multiple linear projections to be nonlinearly combined in an adaptive fashion.
In addition, we contend that one reason for the rarity of flexible activation functions in practice can be found in the lack of a cohesive (introductory) treatment on the topic. To this end, a further aim of this paper is to provide a relatively comprehensive overview on the selection of a proper activation function. In particular, we divide the discussion on the state-of-the-art in three separate sections. Section \ref{sec:fixed_af} introduces the most common (fixed) activation functions used in NNs, from the classical sigmoid function up to the recently developed self-normalizing unit \citep{klambauer2017self} and Swish function \citep{ramachandran2017swish}. Then, we describe in Section \ref{sec:parametric_af} how most of these functions can be efficiently parameterized by one or more adaptive scalar values in order to enhance their flexibility. Finally, we introduce the three existing models for designing non-parametric activation functions in Section \ref{sec:nonparametric_afs}. For each of them, we briefly discuss relative strengths and drawbacks, which serve as a motivation for the model we introduce subsequently.
The rest of the paper is composed of four additional sections. Section \ref{sec:proposed_kernel_afs} describes the proposed KAFs, together with several practical implementation guidelines regarding the selection of the dictionary and the choice of a proper initialization for the weights. For completeness, Section \ref{sec:related_work} briefly describes additional strategies to improve the activation functions, going beyond the addition of trainable parameters in the model. A large set of experiments is described in Section \ref{sec:experiments}, and, finally, the main conclusions and a set of future lines of research are given in Section \ref{sec:conclusions}.
\subsubsection*{Notation}
We denote vectors using boldface lowercase letters, e.g., $\vect{a}$; matrices are denoted by boldface uppercase letters, e.g., $\vect{A}$. All vectors are assumed to be column vectors. The operator $\norm[p]{\cdot}$ is the standard $\ell_p$ norm on an Euclidean space. For $p=2$, it coincides with the Euclidean norm, while for $p=1$ we obtain the Manhattan (or taxicab) norm defined for a generic vector $\vect{v} \in \R^B$ as $\norm[1]{\vect{v}} = \sum_{k=1}^B |v_k|$. Additional notations are introduced along the paper when required.
\section{Preliminaries}
\label{sec:preliminaries}
We consider training a standard feedforward NN, whose $l$-th layer is described by the following equation:
\begin{equation}
\vect{h}_l = g_l\left( \vect{W}_l\vect{h}_{l-1} + \vect{b}_l \right) \,,
\label{eq:nn_layer}
\end{equation}
where $\vect{h}_{l-1} \in \R^{N_{l-1}}$ is the $N_{l-1}$-dimensional input to the layer, $\vect{W}_l \in \R^{N_l \times N_{l-1}}$ and $\vect{b}_l \in \R^{N_l}$ are the adaptable weight matrix and bias vector, and $g_l(\cdot)$ is a nonlinear function, called \emph{activation function}, which is applied element-wise. In a NN with $L$ layers, $\vect{x} = \vect{h}_0$ denotes the input to the network, while $\hat{\vect{y}} = \vect{h}_L$ denotes the final output.
For training the network, we are provided with a set of $I$ input/output pairs $\mathcal{S} = \left\{ \vect{x}_i, \vect{y}_i \right\}_{i=1}^I$, and we minimize a regularized cost function given by:
\begin{equation}
J(\vect{w}) = \sum_{i=1}^I l(\vect{y}_i, \hat{\vect{y}}_i) + C \cdot r(\vect{w}) \,,
\end{equation}
where $\vect{w} \in \R^Q$ collects all the trainable parameters of the network, $l(\cdot, \cdot)$ is a loss function (e.g., the squared loss), $r(\cdot)$ is used to regularize the weights using, e.g., $\ell_2$ or $\ell_1$ penalties, and the regularization factor $C > 0$ balances the two terms.
In the following, we review common choices for the selection of $g_l$, before describing methods to adapt them based on the training data. For readability, we will drop the subscript $l$, and we use the letter $s$ to denote a single input to the function, which we call an \emph{activation}. Note that, in most cases, the activation function $g_L$ for the last layer cannot be chosen freely, as it depends on the task and a proper scaling of the output. In particular, it is common to select $g(s) = s$ for regression problems, and a sigmoid function for binary problems with $y_i \in \left\{0, 1\right\}$:
\begin{equation}
g(s) = \delta(s) = \frac{1}{1+\exp\left\{-s\right\}}\,.
\label{eq:sigmoid}
\end{equation}
For multi-class problems with dummy encodings on the output, the softmax function generalizes the sigmoid and it ensures valid probability distributions in the output \citep{bishop2006pattern}.
\section{Fixed activation functions}
\label{sec:fixed_af}
We briefly review some common (fixed) activation functions for neural networks, which form the basis for the parametric ones in the next section. Before the current wave of deep learning, most activation functions used in NNs were of a `squashing' type, i.e., they were monotonically non-decreasing functions satisfying:
\begin{equation}
\lim_{s \rightarrow -\infty} g(s) = c, \,\, \lim_{s \rightarrow \infty} g(s) = 1 \,,
\end{equation}
where $c$ can be either $0$ or $-1$, depending on convention. Apart from the sigmoid in \eqref{eq:sigmoid}, another common choice is the hyperbolic tangent, defined as:
\begin{equation}
g(s) = \text{tanh}(s) = \frac{\exp\left\{s\right\} - \exp\left\{-s\right\}}{\exp\left\{s\right\} + \exp\left\{-s\right\}} \,.
\label{eq:tanh}
\end{equation}
\noindent \citet{cybenko1989approximation} proved the universal approximation property for this class of functions, and his results were later extended to a larger class of functions in \citet{hornik1989multilayer}. In practice, squashing functions were found to be of limited use in deep networks (where $L$ is large), being prone to the problem of vanishing and exploding gradients, due to their bounded derivatives \citep{hochreiter2001gradient}. A breakthrough in modern deep learning came from the introduction of the rectified linear unit (ReLU) function, defined as:
\begin{equation}
g(s) = \max\left\{0, s\right\} \,.
\end{equation}
Despite being unbounded and introducing a point of non-differentiability, the ReLU has proven to be extremely effective for deep networks \citep{glorot2010understanding,maas2013rectifier}. The ReLU has two main advantages. First, its gradient is either $0$ or $1$,\footnote{For $s=0$, the function is not differentiable, but any value in $\left[0,1\right]$ is a valid subgradient. Most implementations of ReLU use $0$ as the default choice in this case.} making back-propagation particularly efficient. Secondly, its activations are sparse, which is beneficial from several points of view. A smoothed version of the ReLU, called softplus, is also introduced in \citet{glorot2011deep}:
\begin{equation}
g(s) = \log\left\{ 1 + \exp\left\{ s \right\} \right\} \,.
\end{equation}
Despite its lack of smoothness, the ReLU is almost always preferred to the softplus in practice. One obvious problem of the ReLU is that, for a wrong initialization or an unfortunate weight update, its activation can get stuck in $0$, irrespective of the input. This is referred to as the `dying ReLU' condition. To circumvent this problem, \citet{maas2013rectifier} introduced the leaky ReLU function, defined as:
\begin{equation}
g(s) =
\begin{cases}
s & \text{ if } s \ge 0 \\
\alpha s & \text{ otherwise }
\end{cases} \,,
\label{eq:leaky_relu}
\end{equation}
where the user-defined parameter $\alpha > 0$ is generally set to a small value, such as $0.01$. While the resulting pattern of activations is not exactly sparse anymore, the parameters cannot get stuck in a poor region. \eqref{eq:leaky_relu} can also be written more compactly as $ g(s) = \max\left\{ 0, s \right\} + \alpha \min\left\{ 0, s \right\}$.
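The fixed activations introduced so far can be sketched in a few lines; this is a minimal illustration of the definitions above, not tied to any particular framework:

```python
import math

def sigmoid(s):
    return 1.0 / (1.0 + math.exp(-s))

def relu(s):
    return max(0.0, s)

def softplus(s):
    # Smoothed ReLU: log(1 + exp(s)).
    return math.log(1.0 + math.exp(s))

def leaky_relu(s, alpha=0.01):
    # Compact form from the text: max{0, s} + alpha * min{0, s}.
    return max(0.0, s) + alpha * min(0.0, s)

print(relu(-2.0), leaky_relu(-2.0))  # 0.0 -0.02
```

Note how the leaky ReLU keeps a small, nonzero gradient $\alpha$ on the negative side, so a neuron can always recover from a poor region.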
Another problem of activation functions having only non-negative output values is that their mean value is always positive by definition. Motivated by an analogy with the natural gradient, \citet{clevert2016fast} introduced the exponential linear unit (ELU) to renormalize the pattern of activations:
\begin{equation}
g(s) = \text{ELU}(s) =
\begin{cases}
s & \text{ if } s \ge 0 \\
\alpha \left( \exp\left\{ s \right\} - 1 \right) & \text{ otherwise }
\end{cases} \,,
\label{eq:elu}
\end{equation}
where in this case $\alpha$ is generally chosen as $1$. The ELU modifies the negative part of the ReLU with a function saturating at a user-defined value $\alpha$. It is computationally efficient, being smooth (differently from the ReLU and the leaky ReLU), and with a gradient which is either $1$ or $g(s)+\alpha$ for negative values of $s$.
The recently introduced scaled ELU (SELU) generalizes the ELU to have further control over the range of activations \citep{klambauer2017self}:
\begin{equation}
g(s) = \text{SELU}(s) = \lambda \cdot \text{ELU}(s) \,,
\end{equation}
where $\lambda > 1$ is a second user-defined parameter. Particularly, it is shown in \citet{klambauer2017self} that for $\lambda \approx 1.0507$ and $\alpha \approx 1.6733$, the successive application of \eqref{eq:nn_layer} converges towards a fixed distribution with zero mean and unit variance, leading to a self-normalizing network behavior.
Finally, Swish \citep{ramachandran2017swish} is a recently proposed activation somewhat inspired by the gating steps in a standard LSTM recurrent cell:
\begin{equation}
g(s) = s \cdot \delta(s) \,,
\end{equation}
where $\delta(s)$ is the sigmoid in \eqref{eq:sigmoid}.
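As a small sketch of the ELU family and Swish (the SELU constants below are the rounded self-normalizing values from \citet{klambauer2017self}):

```python
import math

def elu(s, alpha=1.0):
    return s if s >= 0 else alpha * (math.exp(s) - 1.0)

def selu(s, lam=1.0507, alpha=1.6733):
    # Rounded self-normalizing constants from Klambauer et al. (2017).
    return lam * elu(s, alpha)

def swish(s):
    return s / (1.0 + math.exp(-s))   # s * sigmoid(s)

# The negative branch of the ELU saturates at -alpha.
print(round(elu(-50.0), 6))  # -1.0
```

Unlike the ReLU, all three functions are smooth everywhere, and the ELU/SELU pair pushes mean activations towards zero.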
\section{Parametric adaptable activation functions}
\label{sec:parametric_af}
An immediate way to increase the flexibility of a NN is to parameterize one of the previously introduced activation functions with a \textit{fixed} (small) number of adaptable parameters, such that each neuron can adapt its activation function to a different shape. As long as the function remains differentiable with respect to these new parameters, it is possible to adapt them with any numerical optimization algorithm together with the linear weights and biases of the layer. Due to their fixed number of parameters and limited flexibility, we call these \textit{parametric} activation functions.
Historically, one of the first proposals in this sense was the generalized hyperbolic tangent \citep{chen1996feedforward}, a tanh function parameterized by two additional positive scalar values $a$ and $b$:
\begin{equation}
g(s) = \frac{a \left( 1 - \exp\left\{- bs\right\}\right)}{1 + \exp\left\{-bs\right\}} \,.
\label{eq:generalized_tanh}
\end{equation}
Note that the parameters $a,b$ are initialized randomly and are adapted independently for every neuron. Specifically, $a$ determines the range of the output (which is called the amplitude of the function), while $b$ controls the slope of the curve. \citet{trentin2001networks} provides empirical evidence that learning the amplitude for each neuron is beneficial (either in terms of generalization error, or speed of convergence) with respect to having unit amplitude for all activation functions. Similar results were also obtained for recurrent networks \citep{goh2003recurrent}.
More recently, \citet{he2015delving} consider a parametric version of the leaky ReLU in \eqref{eq:leaky_relu}, where the coefficient $\alpha$ is initialized at $\alpha = 0.25$ everywhere and then adapted for every neuron. The resulting activation function is called parametric ReLU (PReLU), and it has a very simple derivative with respect to the new parameter:
\begin{equation}
\frac{dg(s)}{d\alpha} =
\begin{cases}
0 & \text{ if } s \ge 0 \\
s & \text{ otherwise }
\end{cases} \,.
\end{equation}
For a layer with $N$ hidden neurons, this introduces only $N$ additional parameters, compared to $2N$ parameters for the generalized tanh. Importantly, in the case of $\ell_p$ regularization, the user has to be careful not to regularize the $\alpha$ parameters, which would bias the optimization process towards classical ReLU / leaky ReLU activation functions.
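The PReLU and its derivative with respect to $\alpha$ are simple enough to verify numerically; the following sketch checks the analytic gradient above against a finite difference (illustrative only):

```python
def prelu(s, alpha=0.25):
    return s if s >= 0 else alpha * s

def dprelu_dalpha(s):
    # Gradient of the output w.r.t. the adaptable slope alpha (eq. above).
    return 0.0 if s >= 0 else s

# Finite-difference check of the analytic derivative at s = -3.
h, s, a = 1e-6, -3.0, 0.25
fd = (prelu(s, a + h) - prelu(s, a - h)) / (2 * h)
print(abs(fd - dprelu_dalpha(s)) < 1e-6)  # True
```

Since the output is linear in $\alpha$ on the negative branch, the finite difference matches the analytic value essentially exactly.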
Similarly, \citet{trottier2016parametric} propose a modification of the ELU function in \eqref{eq:elu} with an additional scalar parameter $\beta$, called parametric ELU (PELU):
\begin{equation}
g(s) =
\begin{cases}
\displaystyle\frac{\alpha}{\beta}s & \text{ if } s \ge 0 \\
\alpha \left( \exp\left\{ \displaystyle\frac{s}{\beta} \right\} - 1 \right) & \text{ otherwise }
\end{cases} \,,
\label{eq:pelu}
\end{equation}
where both $\alpha$ and $\beta$ are initialized randomly and adapted during the training process. Based on the analysis in \citet{jin2016deep}, there always exists a setting for the linear weights and $\alpha, \beta$ which avoids the vanishing gradient problem. Differently from the PReLU, however, the two parameters should be regularized in order to avoid a degenerate behavior with respect to the linear weights, where extremely small linear weights are coupled with very large values for the parameters of the activation functions.
A more flexible proposal is the S-shaped ReLU (SReLU) \citep{jin2016deep}, which is parameterized by four scalar values $\left\{ t^r, a^r, t^l, a^l \right\} \in \R^4$:
\begin{equation}
g(s) =
\begin{cases}
t^r + a^r \left( s - t^r \right) & \text{ if } s \ge t^r \\
s & \text{ if } t^r > s > t^l \\
t^l + a^l \left(s - t^l \right) & \text{ otherwise }
\end{cases} \,.
\label{eq:srelu}
\end{equation}
The SReLU is composed of three linear segments, the middle of which is the identity. Differently from the PReLU, however, the cut-off points between the three segments can also be adapted. Additionally, the function can have both convex and nonconvex shapes, depending on the orientation of the left and right segments, making it more flexible than previous proposals. Similar to the PReLU, the four parameters should not be regularized \citep{jin2016deep}.
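A direct transcription of \eqref{eq:srelu} makes the three-segment structure evident; the parameter values below are purely illustrative, whereas in practice all four are learned per-neuron:

```python
def srelu(s, tr=0.4, ar=1.2, tl=-0.4, al=0.1):
    """SReLU (eq. above); parameters here are illustrative placeholders."""
    if s >= tr:
        return tr + ar * (s - tr)
    if s > tl:
        return s            # identity on the middle segment
    return tl + al * (s - tl)

print(srelu(0.2))  # 0.2 (identity region)
```

Choosing $a^r > 1$ and $a^l < 1$ as above yields a nonconvex shape, which neither the ReLU nor the PReLU can represent.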
Finally, a parametric version of the Swish function is the $\beta$-swish \citep{ramachandran2017swish}, which includes a tunable parameter $\beta$ inside the self-gate:
\begin{equation}
g(s) = s \cdot \delta(\beta s) \,.
\end{equation}
\section{Non-parametric activation functions}
\label{sec:nonparametric_afs}
Intuitively, parametric activation functions have limited flexibility, resulting in mixed performance gains on average. In contrast, non-parametric activation functions can model a much larger class of shapes (in the best case, any continuous segment), at the price of a larger number of adaptable parameters. As stated in the introduction, these methods generally introduce a further global hyper-parameter that balances the flexibility of the function by varying the effective number of free parameters, which can potentially grow without bound. Additionally, the methods can be grouped depending on whether each parameter has a local or global effect on the overall function, the former being a desirable characteristic.
In this section, we describe three state-of-the-art approaches for implementing non-parametric activation functions: APL functions in Section \ref{sec:apl_af}, spline functions in Section \ref{sec:spline_af}, and maxout networks in Section \ref{sec:maxout_networks}.
\subsection{Adaptive piecewise linear methods}
\label{sec:apl_af}
An APL function, introduced in \citet{agostinelli2014learning}, generalizes the SReLU function in \eqref{eq:srelu} by summing multiple linear segments, where all slopes and cut-off points are learned under the constraint that the overall function is continuous:
\begin{equation}
g(s) = \max\left\{0, s\right\} + \sum_{i=1}^S a_i \max\left\{0, -s + b_i \right\} \,.
\label{eq:apl}
\end{equation}
$S$ is a hyper-parameter chosen by the user, while each APL is parameterized by $2S$ adaptable parameters $\left\{ a_i, b_i \right\}_{i=1}^S$. These parameters are randomly initialized for each neuron, and can be regularized with $\ell_2$ regularization, similarly to the PELU, in order to avoid the coupling of very small linear weights and very large $a_i$ coefficients for the APL units.
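Equation \eqref{eq:apl} translates directly into code; this is a toy sketch with made-up hinge parameters, not a trained unit:

```python
def apl(s, a, b):
    """APL unit (eq. above): ReLU plus S adaptable hinge functions."""
    out = max(0.0, s)
    for ai, bi in zip(a, b):
        out += ai * max(0.0, -s + bi)
    return out

# S = 2 hinges with illustrative (normally randomly initialized) parameters.
print(apl(2.0, a=[0.5, -0.2], b=[1.0, -1.0]))  # 2.0
```

For $s$ larger than every $b_i$, all hinge terms vanish and the unit reduces to the identity, consistent with the theorem below.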
The APL unit cannot approximate any possible function. Its approximation properties are described in the following theorem.
\begin{theorem}[Theorem 1, \citep{agostinelli2014learning}]
The APL unit can approximate any continuous piecewise-linear function $h(s)$, for some choice of $S$ and $\left\{ a_i, b_i \right\}_{i=1}^S$, provided that $h(s)$ satisfies the following two conditions:
\begin{enumerate}
\item There exists $u \in \R$ such that $h(s) = s, \,\, \forall s \ge u$.
\item There exist two scalars, $v, t \in \R$ such that $\frac{dh(s)}{s} = t, \,\, \forall s < v$.
\end{enumerate}
\end{theorem}
\noindent The previous theorem implies that any piecewise-linear function can be approximated, provided that its behavior is linear for very large, or small, $s$. A possible drawback of the APL activation function is that it introduces $S+1$ points of non-differentiability for each neuron, which may hinder the optimization process. The next class of functions solves this problem, at the cost of a possibly larger number of parameters.
\subsection{Spline activation functions}
\label{sec:spline_af}
An immediate way to exploit polynomial interpolation in NNs is to build the activation function over powers of the activation $s$ \citep{piazza1992artificial}:
\begin{equation}
g(s) = \sum_{i=0}^P a_i s^i \,,
\end{equation}
where $P$ is a hyper-parameter and we adapt the $(P+1)$ coefficients $\left\{a_i\right\}_{i=0}^P$. Since a polynomial of degree $P$ can pass exactly through $P+1$ points, this polynomial activation function (PAF) can in theory approximate any smooth function. The drawback of this approach is that each parameter $a_i$ has a global influence on the overall shape, and the output of the function can easily grow too large or encounter numerical problems, particularly for large absolute values of $s$ and large $P$.
An improved way to use polynomial expansions is spline interpolation, giving rise to the spline activation function (SAF). The SAF was originally studied in \citet{vecci1998learning,guarnieri1999multilayer}, and later re-introduced in a more modern context in \citet{scardapane2016learning}, following previous advances in nonlinear filtering \citep{scarpiniti2013nonlinear}. In the sequel, we adopt the newer formulation.
A SAF is described by a vector of $T$ parameters, called \textit{knots}, corresponding to a sampling of its $y$-values over an equispaced grid of $T$ points over the $x$-axis, that are symmetrically chosen around the origin with sampling step $\Delta x$. For any other value of $s$, the output value of the SAF is computed with spline interpolation over the closest knot and its $P$ rightmost neighbors, where $P$ is generally chosen equal to $3$, giving rise to a cubic spline interpolation scheme. Specifically, denote by $k$ the index of the closest knot, and by $\vect{q}_k$ the vector comprising the corresponding knot and its $P$ neighbors. We call this vector the \textit{span}. We also define a new value
\begin{equation}
u = \frac{s}{\Delta x} - \left\lfloor \frac{s}{\Delta x} \right\rfloor \,,
\end{equation}
where $\Delta x$ is the user-defined sampling step. $u$ defines a normalized abscissa value between the $k$-th knot and the $(k+1)$-th one. The output of the SAF is then given by \citep{scarpiniti2013nonlinear}:
\begin{equation}
g(s) = \vect{u}^T \vect{B} \vect{q}_k \,,
\end{equation}
where the vector $\vect{u}$ collects powers of $u$ up to the order $P$:
\begin{equation}
\vect{u} = \left[ u^P, u^{P-1}, \ldots, u^1, 1 \right]^T \,,
\end{equation}
and $\vect{B}$ is the spline basis matrix, which defines the properties of the interpolation scheme (as shown later in Fig. \ref{fig:saf_example_bbasis}). For example, the popular Catmull-Rom basis for $P=3$ is given by:
\begin{equation}
\vect{B} = \frac{1}{2}
\begin{bmatrix}
-1 & 3 & -3 & 1 \\
2 & -5 & 4 & -1 \\
-1 & 0 & 1 & 0 \\
0 & 2 & 0 & 0
\end{bmatrix} \,.
\label{eq:catmulrom_basis_matrix}
\end{equation}
The derivatives of the SAF can be computed in a similar way, both with respect to $s$ and with respect to $\vect{q}_k$, e.g., see \citet{scardapane2016learning}. A visual example of the SAF output is given in Fig. \ref{fig:saf_example}.
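The interpolation step above can be sketched as follows; this is a toy implementation assuming knots on a uniform grid starting at a value x0, with boundary cases handled by crude clamping (a real implementation refines this):

```python
import numpy as np

# Catmull-Rom basis matrix from the equation above.
B = 0.5 * np.array([[-1, 3, -3, 1],
                    [2, -5, 4, -1],
                    [-1, 0, 1, 0],
                    [0, 2, 0, 0]], dtype=float)

def saf(s, knots_y, dx, x0):
    """Toy cubic SAF evaluation (P = 3): knots lie at x0, x0 + dx, ..."""
    t = (s - x0) / dx
    k = min(max(int(np.floor(t)) - 1, 0), len(knots_y) - 4)  # span start
    u = t - np.floor(t)                      # normalized abscissa in [0, 1)
    uvec = np.array([u**3, u**2, u, 1.0])    # powers of u, highest first
    return float(uvec @ B @ knots_y[k:k + 4])

# Knots sampled from the identity function: the spline reproduces it.
knots = np.array([-2.0, -1.0, 0.0, 1.0, 2.0, 3.0])
print(saf(0.5, knots, 1.0, -2.0))  # 0.5
```

Because the Catmull-Rom scheme reproduces linear data exactly and each span involves only four knots, each adapted knot affects the output only locally.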
\begin{figure}
\subfloat[CR matrix]{
\includegraphics[width=0.5\columnwidth,keepaspectratio]{./plots/saf_example_crbasis}
\label{fig:saf_example_crbasis}
} \hfill
\subfloat[B-basis matrix]{
\includegraphics[width=0.5\columnwidth,keepaspectratio]{./plots/saf_example_bbasis}
\label{fig:saf_example_bbasis}
} \hfill
\caption{Example of output interpolation using a SAF neuron. Knots are shown with red markers, while the overall function is shown in light blue. (a) For a given activation, in black, only the control points in the green background are active. (b) We use the same control points as before, but we interpolate using the B-basis matrix \citep{scarpiniti2013nonlinear} instead of the CR matrix in \eqref{eq:catmulrom_basis_matrix}. The resulting curve is smoother, but it is not guaranteed to pass through all the control points.}
\label{fig:saf_example}
\end{figure}
Each knot has only a limited local influence over the output, making their adaptation more stable. The resulting function is also smooth, and can in fact approximate any smooth function defined over a subset of the real line to a desired level of accuracy, provided that $\Delta x$ is chosen small enough. The drawback is that regularizing the resulting activation functions is harder to achieve, since $\ell_p$ regularization cannot be applied directly to the values of the knots. In \citet{guarnieri1999multilayer}, this was solved by choosing a large $\Delta x$, in turn severely limiting the flexibility of the interpolation scheme. A different proposal was made in \citet{scardapane2016learning}, where the vector $\vect{q} \in \R^T$ of SAF parameters is regularized by penalizing deviations from the values at initialization. Note that it is straightforward to initialize the SAF as any of the known fixed activation functions described before.
\subsection{Maxout networks}
\label{sec:maxout_networks}
Differently from the other functions described up to now, the maxout function introduced in \citet{goodfellow2013maxout} replaces an entire layer in \eqref{eq:nn_layer}. In particular, for each neuron, instead of computing a single dot product $\vect{w}^T\vect{h}$ to obtain the activation (where $\vect{h}$ is the input to the layer), we compute $K$ different products with $K$ separate weight vectors $\vect{w}_1, \ldots, \vect{w}_K$ and biases $b_1, \ldots, b_K$, and take their maximum:
\begin{equation}
g(\vect{h}) = \max_{i=1,\ldots,K} \left\{ \vect{w}_i^T\vect{h} + b_i \right\} \,,
\end{equation}
where the activation function is now a function of a subset of the output of the previous layer. A NN having maxout neurons in all hidden layers is called a maxout network, and remains a universal approximator according to the following theorem.
\begin{theorem}[Theorem 4.3, \citep{goodfellow2013maxout}]
Any continuous function $h(\cdot): \R^{N_0} \rightarrow \R^{N_L}$ can be approximated arbitrarily well on a compact domain by a maxout network with two maxout hidden units, provided $K$ is chosen sufficiently large.
\end{theorem}
The advantage of the maxout function is that it is extremely easy to implement using current linear algebra libraries. However, the resulting functions have several points of non-differentiability, similarly to the APL units. In addition, the number of resulting parameters is generally higher than with alternative formulations. In particular, by increasing $K$ we multiply the original number of parameters by a corresponding factor, while other approaches contribute only linearly to this number. Additionally, we lose the possibility of plotting the resulting activation functions, unless the input to the maxout layer has fewer than $4$ dimensions. An example with dimension $1$ is shown in Fig. \ref{fig:maxout_example}.
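The ease of implementation is apparent in the following sketch, which computes a maxout neuron and its smooth log-sum-exp variant on illustrative weights:

```python
import numpy as np

def maxout(h, W, b):
    """Maxout activation: maximum of K affine maps of the layer input h."""
    return np.max(W @ h + b)

def soft_maxout(h, W, b):
    # Smooth log-sum-exp variant discussed later in the text.
    z = W @ h + b
    return np.log(np.sum(np.exp(z)))

# K = 3 projections of a 2-dimensional input (weights are illustrative).
W = np.array([[1.0, 0.0], [0.0, 1.0], [-1.0, -1.0]])
b = np.zeros(3)
h = np.array([2.0, 1.0])
print(maxout(h, W, b))  # 2.0
```

Note that a single maxout neuron already consumes $K$ weight vectors, which is the parameter blow-up mentioned above.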
\begin{figure}
\centering
\includegraphics[width=0.5\columnwidth,keepaspectratio]{./plots/maxout_example}
\caption{An example of a maxout neuron with a one-dimensional input and $K=3$. The three linear segments are shown in light gray, while the resulting activation is shown in shaded red. Note how the maxout can only generate convex shapes by definition. Moreover, plots of this kind cannot be made for inputs having more than three dimensions.}
\label{fig:maxout_example}
\end{figure}
In order to solve the smoothness problem, \citet{zhang2014improving} introduced two smooth versions of the maxout neuron. The first one is the soft-maxout:
\begin{equation}
g(\vect{h}) = \log\left\{ \sum_{i=1}^K \exp\left\{ \vect{w}_i^T\vect{h} + b_i \right\} \right\} \,.
\end{equation}
The second one is the $\ell_p$-maxout, for a user-defined natural number $p$:
\begin{equation}
g(\vect{h}) = \sqrt[p]{ \sum_{i=1}^K \left\lvert \vect{w}_i^T\vect{h} + b_i \right\rvert^p } \,.
\end{equation}
Closely related to the $\ell_p$-maxout neuron is the $L_p$ unit proposed in \citet{gulcehre2013learned}. Denoting for simplicity $s_i = \vect{w}_i^T\vect{h} + b_i$, the $L_p$ unit is defined as:
\begin{equation}
g(\vect{h}) = \left( \frac{1}{K} \sum_{i=1}^K \lvert s_i - c_i \rvert^{p} \right)^{\frac{1}{p}} \,,
\label{eq:lp_unit}
\end{equation}
where the $K+1$ parameters $\left\{c_1, \ldots, c_K, p\right\}$ are all learned via back-propagation.\footnote{In practice, $p$ is re-parameterized as $1 + \log\left\{1+\exp\left\{p\right\}\right\}$ to guarantee that \eqref{eq:lp_unit} defines a proper norm.} If we fix $c_i=0$, for $p$ going to infinity the $L_p$ unit degenerates to a special case of the maxout neuron:
\begin{equation}
\lim_{p \rightarrow \infty} g(\vect{h}) = \max_{i=1,\ldots,K} \left\{ \lvert s_i \rvert \right\} \,.
\end{equation}
\section{Proposed kernel-based activation functions}
\label{sec:proposed_kernel_afs}
In this section we describe the proposed KAF. Specifically, we model each activation function in terms of a kernel expansion over $D$ terms as:
\begin{equation}
g(s) = \sum_{i=1}^D \alpha_i \kappa\left(s, d_i\right) \,,
\label{eq:proposed_kaf}
\end{equation}
where $\left\{\alpha_i\right\}_{i=1}^D$ are the mixing coefficients, $\left\{d_i\right\}_{i=1}^D$ are called the dictionary elements, and $\kappa(\cdot, \cdot): \R \times \R \rightarrow \R$ is a 1D kernel function \citep{hofmann2008kernel}. In kernel methods, the dictionary elements are generally selected from the training data. In a stochastic optimization setting, this means that $D$ would grow linearly with the number of training iterations, unless a proper strategy for the selection of the dictionary is implemented \citep{liu2011kernel,van2012kernel}. To simplify our treatment, we keep the dictionary elements fixed and adapt only the mixing coefficients. In particular, we sample $D$ values over the $x$-axis, uniformly around zero, similarly to the SAF method, and we leave $D$ as a user-defined hyper-parameter. This has the additional benefit that the resulting model is linear in its adaptable parameters, and can be efficiently implemented for a mini-batch of training data using highly vectorized linear algebra routines. Note that there is a vast literature on kernel methods with fixed dictionary elements, particularly in the field of Gaussian processes \citep{snelson2006sparse}.
The kernel function $\kappa(\cdot, \cdot)$ need only respect the positive semi-definiteness property, i.e., for any possible choice of $\left\{\alpha_i\right\}_{i=1}^D$ and $\left\{d_i\right\}_{i=1}^D$ we have that:
\begin{equation}
\sum_{i=1}^D \sum_{j=1}^D \alpha_i \alpha_j \kappa\left(d_i, d_j\right) \ge 0 \,.
\label{eq:psd_kernel}
\end{equation}
For our experiments, we use the 1D Gaussian kernel defined as:
\begin{equation}
\kappa(s, d_i) = \exp\left\{-\gamma\left(s - d_i\right)^2\right\} \,,
\label{eq:gaussian_kernel}
\end{equation}
where $\gamma \in \R$ is called the kernel bandwidth; its selection is discussed at length below. Other choices, such as the polynomial kernel of degree $p \in \mathbb{N}$, are also possible:
\begin{equation}
\kappa(s, d_i) = \left(1 + sd_i\right)^p \,.
\label{eq:polynomial_kernel}
\end{equation}
By the properties of kernel methods, KAFs are equivalent to learning linear functions over a large number of nonlinear transformations of the original activation $s$, without having to explicitly compute such transformations. The Gaussian kernel has an additional benefit: thanks to its definition, each mixing coefficient has only a local effect on the shape of the output function (where the radius depends on $\gamma$, see below), which is advantageous during optimization. In addition, the expression in \eqref{eq:proposed_kaf} with the Gaussian kernel can approximate any continuous function over a compact subset of the real line \citep{micchelli2006universal}. The expression resembles a one-dimensional radial basis function network, whose universal approximation properties are also well studied \citep{park1991universal}. Below, we discuss in more depth some additional considerations for implementing our KAF model. Note that the model has very simple derivatives for back-propagation:
\begin{align}
\frac{\partial g(s)}{\partial \alpha_i} & = \kappa\left(s, d_i\right) \,, \\
\frac{\partial g(s)}{\partial s} & = \sum_{i=1}^{D} \alpha_i \frac{\partial \kappa\left(s, d_i\right)}{\partial s} \,.
\end{align}
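To make the model concrete, a minimal NumPy sketch of a scalar KAF with the Gaussian kernel and its two derivatives is given below. The dictionary range, value of $D$, and variable names are our own illustrative choices, not the released implementation:

```python
import numpy as np

D = 20
d = np.linspace(-3.0, 3.0, D)             # fixed dictionary over the x-axis
rng = np.random.default_rng(1)
alpha = 0.3 * rng.standard_normal(D)      # mixing coefficients (to be adapted)
gamma = 1.0 / (6.0 * (d[1] - d[0]) ** 2)  # bandwidth rule of thumb from the text

def kaf(s):
    """KAF output g(s) for a 1D array of activation values (vectorized)."""
    K = np.exp(-gamma * (s[:, None] - d[None, :]) ** 2)  # Gaussian kernel values
    return K @ alpha

def kaf_grads(s):
    """Derivatives w.r.t. the mixing coefficients and w.r.t. s itself."""
    diff = s[:, None] - d[None, :]
    K = np.exp(-gamma * diff ** 2)
    d_alpha = K                              # dg/d alpha_i = kappa(s, d_i)
    d_s = (-2.0 * gamma * diff * K) @ alpha  # chain rule through the kernel
    return d_alpha, d_s
```

The derivative with respect to $s$ can be checked against a central finite difference, which is a useful sanity test when porting the model to a new framework.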
\subsection*{On the selection of the kernel bandwidth}
\begin{figure}
\subfloat[$\gamma$ = 2.0]{
\includegraphics[width=0.3\columnwidth,keepaspectratio]{./plots/kaf_example_small_gamma}
\label{fig:kaf_example_small_gamma}
} \hfill
\subfloat[$\gamma$ = 0.5]{
\includegraphics[width=0.3\columnwidth,keepaspectratio]{./plots/kaf_example_medium_gamma}
\label{fig:kaf_example_medium_gamma}
} \hfill
\subfloat[$\gamma$ = 0.1]{
\includegraphics[width=0.3\columnwidth,keepaspectratio]{./plots/kaf_example_large_gamma}
\label{fig:kaf_example_large_gamma}
} \hfill
\caption{Examples of KAFs. In all cases we sample uniformly $20$ points on the $x$-axis, while the mixing coefficients are sampled from a normal distribution. The three plots show three different choices for $\gamma$.}
\label{fig:kaf_samples}
\end{figure}
Selecting $\gamma$ is crucial for the good behavior of the method, since it indirectly controls the effective number of adaptable parameters. In Fig. \ref{fig:kaf_samples} we show some examples of functions obtained by fixing $D=20$, randomly sampling the mixing coefficients, and varying only the kernel bandwidth, showing how $\gamma$ controls the smoothness of the resulting functions.
In the literature, many methods have been proposed to select the bandwidth parameter for performing kernel density estimation \citep{jones1996brief}. These methods include popular rules of thumb such as \citet{scott2015multivariate} or \citet{silverman1986density}.
In kernel density estimation, the kernel centers correspond to a given dataset with an arbitrary distribution. In the proposed KAF scheme, the dictionary elements are instead chosen on a grid, and as such the optimal bandwidth parameter depends only on the grid resolution. Instead of leaving the bandwidth parameter $\gamma$ as an additional hyper-parameter, we have empirically verified that the following rule of thumb represents a good compromise between smoothness (to allow an accurate approximation of several initialization functions) and flexibility:
\begin{equation}
\gamma = \frac{1}{6\Delta^2} \,,
\label{eq:sigma_rule_of_thumb}
\end{equation}
where $\Delta$ is the distance between the grid points. We also performed some experiments in which $\gamma$ was adapted through back-propagation, though this did not provide any gain in accuracy.
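One way to read the rule of thumb in \eqref{eq:sigma_rule_of_thumb} is that it ties the bandwidth to the grid resolution: the kernel value at a neighboring grid point is $\exp\left\{-\gamma\Delta^2\right\} = \exp\left\{-1/6\right\}$ regardless of how finely the dictionary is sampled. A quick numerical check (grid ranges are our own choice):

```python
import numpy as np

# Overlap between neighboring Gaussian bumps for increasingly fine grids
overlaps = []
for D in (10, 20, 40):
    d = np.linspace(-3.0, 3.0, D)
    delta = d[1] - d[0]                    # grid spacing Delta
    gamma = 1.0 / (6.0 * delta ** 2)       # rule of thumb from the text
    overlaps.append(np.exp(-gamma * delta ** 2))  # kernel value at distance Delta
```

All three overlaps coincide with $\exp\left\{-1/6\right\} \approx 0.85$, so refining the grid does not change the relative smoothness of the resulting KAF.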
\subsection*{On the initialization of the mixing coefficients}
\label{sec:initialization}
A random initialization of the mixing coefficients from a normal distribution, as in Fig. \ref{fig:kaf_samples}, provides good diversity for the optimization process. Nonetheless, a further advantage of our scheme is that we can initialize some (or all) of the KAFs to follow any known activation function, so as to guarantee a certain desired behavior. Specifically, denote by $\vect{t} = \left[t_1, \ldots, t_D\right]^T$ the vector of desired initial KAF values corresponding to the dictionary elements $\vect{d} = \left[d_1, \ldots, d_D\right]^T$. We can initialize the mixing coefficients $\boldsymbol{\alpha} = \left[\alpha_1, \ldots, \alpha_D\right]^T$ using kernel ridge regression:
\begin{equation}
\boldsymbol{\alpha} = \left(\vect{K} + \varepsilon\vect{I}\right)^{-1}\vect{t} \,,
\label{eq:kaf_initialization_krr}
\end{equation}
where $\vect{K} \in \R^{D \times D}$ is the kernel matrix computed over the dictionary elements $\vect{d}$, and we add a diagonal term with $\varepsilon > 0$ to avoid degenerate solutions with very large mixing coefficients. Two examples are shown in Fig. \ref{fig:kaf_initialization_examples}.
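A minimal sketch of the kernel ridge regression initialization in \eqref{eq:kaf_initialization_krr}, assuming the Gaussian kernel, the rule of thumb for $\gamma$, and a hyperbolic tangent as the target function:

```python
import numpy as np

D, eps = 20, 1e-6
d = np.linspace(-3.0, 3.0, D)              # dictionary elements
gamma = 1.0 / (6.0 * (d[1] - d[0]) ** 2)   # bandwidth rule of thumb
t = np.tanh(d)                             # desired initial KAF values

# D x D kernel matrix over the dictionary, plus the eps*I regularizer
K = np.exp(-gamma * (d[:, None] - d[None, :]) ** 2)
alpha = np.linalg.solve(K + eps * np.eye(D), t)

# The initialized KAF evaluated back on the dictionary points
g = K @ alpha
```

With a smooth target such as $\tanh$, the fitted values $g$ reproduce $\vect{t}$ very closely on the grid, as in Fig. \ref{fig:kaf_initialization_examples}.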
\begin{figure}
\centering
\subfloat[$\tanh$]{
\includegraphics[width=0.45\columnwidth,keepaspectratio]{./plots/kaf_initialization_tanh}
\label{fig:kaf_initialization_tanh}
} \hfill
\subfloat[ELU]{
\includegraphics[width=0.45\columnwidth,keepaspectratio]{./plots/kaf_initialization_elu}
\label{fig:kaf_initialization_elu}
} \hfill
\caption{Two examples of initializing a KAF using \eqref{eq:kaf_initialization_krr}, with $\varepsilon=10^{-6}$. (a) A hyperbolic tangent. (b) The ELU in \eqref{eq:elu}. The red dots indicate the corresponding initialized values for the mixing coefficients.}
\label{fig:kaf_initialization_examples}
\end{figure}
\subsection*{Multi-dimensional kernel activation functions}
In our experiments, we also consider a two-dimensional variant of the proposed KAF, which we denote as 2D-KAF. Roughly speaking, the 2D-KAF acts on a pair of activation values, instead of a single one, and learns a two-dimensional function to combine them. It can be seen as a generalization of a two-dimensional maxout neuron, which is instead constrained to output the maximum of its two inputs.
Similarly to before, we construct a dictionary $\vect{d} \in \R^{D^2 \times 2}$ by sampling a uniform grid over the 2D plane, considering $D$ positions uniformly spaced around $0$ in both dimensions. We group the incoming activation values in pairs (assuming that the layer has even size), and for each possible pair of activations $\vect{s} = \left[s_{k}, s_{k+1}\right]^T$ we output:
\begin{equation}
g\left(\vect{s}\right) = \sum_{i=1}^{D^2} \alpha_i \kappa\left(\vect{s}, \vect{d}_i\right) \,,
\label{eq:2d_kaf}
\end{equation}
where $\vect{d}_i$ is the $i$-th element of the dictionary, and we now have $D^2$ adaptable coefficients $\left\{\alpha_i\right\}_{i=1}^{D^2}$. In this case, we consider the 2D Gaussian kernel:
\begin{equation}
\kappa\left(\vect{s}, \vect{d}_i\right) = \exp\left\{ -\gamma\norm{\vect{s} - \vect{d}_i}^2 \right\} \,,
\label{eq:2d_gaussian_kernel}
\end{equation}
where we use the same rule of thumb as in \eqref{eq:sigma_rule_of_thumb}, multiplied by $\sqrt{2}$, to select $\gamma$. The increase in parameters is counter-balanced by two factors. First, by grouping the activations we halve the size of the linear matrix in the subsequent layer. Second, we generally choose a smaller $D$ than in the 1D case; we have found that values in $\left[5, 10\right]$ are enough to provide a good degree of flexibility. Table \ref{tab:non_parametric_afs_comparison} compares the two proposed KAF models to the three alternative non-parametric activation functions described before. We briefly mention here that a multidimensional variant of the SAF was explored in \citet{solazzi2000artificial}.
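The 2D-KAF forward pass can be sketched as follows; the grid size, seed, and our literal reading of the $\sqrt{2}$ scaling of the bandwidth rule are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)
D = 8
axis = np.linspace(-3.0, 3.0, D)
gx, gy = np.meshgrid(axis, axis)
dict2d = np.stack([gx.ravel(), gy.ravel()], axis=1)  # (D^2, 2) dictionary
alpha = 0.3 * rng.standard_normal(D * D)             # D^2 mixing coefficients
delta = axis[1] - axis[0]
# "same rule of thumb, multiplied by sqrt(2)" -- our literal reading
gamma = np.sqrt(2.0) / (6.0 * delta ** 2)

def kaf2d(s):
    """Output for one pair of activations s = [s_k, s_{k+1}]."""
    sq = np.sum((s[None, :] - dict2d) ** 2, axis=1)  # squared distances
    return np.dot(alpha, np.exp(-gamma * sq))        # 2D Gaussian kernel expansion

# A layer of 2N activations is grouped into N pairs, halving the fan-out
acts = rng.standard_normal(6)
out = np.array([kaf2d(acts[i:i + 2]) for i in range(0, 6, 2)])
```

Note how six incoming activations produce only three outputs, which is the source of the parameter saving in the subsequent linear layer.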
\begin{sidewaystable}
\begin{threeparttable}
{\centering\hfill{}
\setlength{\tabcolsep}{4pt}
\renewcommand{\arraystretch}{1.4}
\begin{footnotesize}
\begin{tabular}{lcccccc}
\toprule
\textbf{Name} & \textbf{Smooth} & \textbf{Locality} & \textbf{Can use regularization} & \textbf{Plottable} & \textbf{Hyper-parameter} & \textbf{Trainable weights} \\
\midrule
APL & No & Partially & Only $\ell_2$ regularization & Yes & Number of segments $S$ & $N_{i-1}N_{i} + N_{i} + 2SN_{i}$ \\
SAF & Yes & Yes & No & Yes & Number of control points $Q$ & $N_{i-1}N_{i} + N_{i} + QN_{i}$ \\
Maxout & No & No & Yes & No\tnote{*} & Number of affine maps $K$ & $KN_{i-1}N_{i} + KN_{i}$ \\
\midrule
\textbf{Proposed KAF} & Yes & Yes\tnote{**} & Yes & Yes & Size of the dictionary $D$ & $N_{i-1}N_{i} + N_{i} + DN_{i}$ \\
\textbf{Proposed 2D-KAF} & Yes & Yes\tnote{**} & Yes & Yes & Size of the dictionary $D$ & $N_{i-1}N_{i} + \frac{\left(N_{i} + D^2N_{i}\right)}{2}$ \\
\bottomrule
\end{tabular}
\end{footnotesize}
}
\hfill{}
\begin{tablenotes}
\item[*] Maxout functions can only be plotted whenever $N_{i-1} \le 3$, which is almost never the case.
\item[**] Only when using the Gaussian (or similar) kernel function.
\end{tablenotes}
\end{threeparttable}
\caption{A comparison of the existing non-parametric activation functions and the proposed KAF and 2D-KAF. In our definition, an activation function is local if each adaptable weight only affects a small portion of the output values.}
\label{tab:non_parametric_afs_comparison}
\end{sidewaystable}
\section{Related work}
\label{sec:related_work}
Many authors have considered ways of improving the performance of the classical activation functions that do not necessarily require adapting their shape via numerical optimization, or that require special care when implemented. For completeness, we briefly review them here before moving to the experimental section.
As stated before, the problem with the ReLU is that its gradient is zero outside the `active' regime where $s \ge 0$. To solve this, the randomized leaky ReLU \citep{xu2015empirical} considers a leaky ReLU like in \eqref{eq:leaky_relu}, in which during training the parameter $\alpha$ is randomly sampled at every step from the uniform distribution $\mathcal{U}(l,u)$, where the lower/upper bounds $\left\{l, u \right\}$ are selected beforehand. To compensate for the stochasticity during training, at test time $\alpha$ is set equal to:
\begin{equation}
\alpha = \frac{l + u}{2} \,,
\end{equation}
which is equivalent to taking the average of all possible values seen during training. This is similar to the dropout technique \citep{srivastava2014dropout}, which randomly deactivates some neurons during each step of training, and later rescales the weights during the test phase.
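A sketch of this scheme; the specific bounds $l$ and $u$ below are ours, chosen only for illustration:

```python
import numpy as np

# Randomized leaky ReLU: alpha ~ U(l, u) during training,
# fixed to (l + u) / 2 at test time.
l, u = 1.0 / 8.0, 1.0 / 3.0
rng = np.random.default_rng(3)

def rrelu(s, training):
    a = rng.uniform(l, u) if training else (l + u) / 2.0
    return np.where(s >= 0, s, a * s)   # leak only on the negative part

s = np.array([-2.0, -0.5, 1.0])
test_out = rrelu(s, training=False)     # deterministic at test time
```

Positive activations always pass through unchanged; only the slope of the negative part is stochastic.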
More generally, several papers have developed stochastic versions of the classical artificial neurons, whose outputs depend on one or more random variables sampled during their execution \citep{bengio2013estimating}, under the idea that the resulting noise can help guide the optimization process towards better minima. Notably, this provides a link between classical NNs and other probabilistic methods, such as generative networks and networks trained using variational inference \citep{bengio2014deep,schulman2015gradient}. The main challenge is to design stochastic neurons that provide a simple mechanism for back-propagating the error through the random variables, without requiring expensive sampling procedures, and with a minimal amount of interference on the network. As a representative example, the noisy activation functions proposed in \citet{gulcehre2016noisy} achieve this by combining activation functions having `hard saturating' regimes (i.e., whose value is exactly zero outside a limited range) with random noise over the outputs, whose variance increases in the regime where the function saturates to avoid problems due to the sparse gradient terms. An example is given in Fig. \ref{fig:noisy_af}.
\begin{figure}
\subfloat[Original function]{
\includegraphics[width=0.3\columnwidth,keepaspectratio]{./plots/hard_thresholded_sigmoid}
\label{fig:hard_thresholded_sigmoid}
} \hfill
\subfloat[Noise]{
\includegraphics[width=0.3\columnwidth,keepaspectratio]{./plots/noise}
\label{fig:noise_variance}
} \hfill
\subfloat[Noisy activation function]{
\includegraphics[width=0.3\columnwidth,keepaspectratio]{./plots/noisy_sigmoid_af}
\label{fig:noisy_sigmoid_af}
} \hfill
\caption{An example of noisy activation function. (a) Original sigmoid function (blue), together with its hard-thresholded version (in red). (b) For any possible activation value outside the saturated regime, we add random half-normal noise with increasing variance and matching sign according to the algorithm in \citet{gulcehre2016noisy} (the shaded areas correspond to one standard deviation). (c) Final noisy activation function computed as in \citet{gulcehre2016noisy}. At test time, only the expected values (represented with solid green line) are returned.}
\label{fig:noisy_af}
\end{figure}
Another approach is to design \textit{vector-valued} activation functions to maximize parameter sharing. In the simplest case, the concatenated ReLU \citep{shang2016understanding} returns two output values by applying a ReLU function both on $s$ and on $-s$. Similarly, the order statistic network \citep{rennie2014deep} modifies a maxout neuron by returning the input activations in sorted order, instead of picking the highest value only. Multi-bias activation functions \citep{li2016multi} compute several activation values by using different bias terms, and then apply the same activation function independently over each of the resulting values. The network-in-network \citep{lin2013network} model is a non-parametric approach specific to convolutional neural networks, wherein the nonlinear units are replaced with a fully connected NN.
For specific audio modeling tasks, some authors have proposed the use of Hermite polynomials for adapting the activation functions \citep{siniscalchi2013hermitian,siniscalchi2017adaptation}. Similarly to our proposed KAF, the functions are expressed as a weighted sum of several fixed nonlinear transformations of the activation values, i.e., the Hermite polynomials. However, these nonlinear transformations are computed through a recurrence formula, substantially increasing the computational load.
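For reference, the Hermite basis must indeed be built term by term through its three-term recurrence; a sketch using the physicists' convention (our choice, since the cited works may use a different normalization):

```python
import numpy as np

def hermite_basis(s, n_terms):
    """First n_terms Hermite polynomials evaluated at s (physicists' convention):
    H_0 = 1, H_1 = 2s, H_{n+1} = 2s * H_n - 2n * H_{n-1}."""
    H = [np.ones_like(s), 2.0 * s]
    for n in range(1, n_terms - 1):
        H.append(2.0 * s * H[-1] - 2.0 * n * H[-2])  # three-term recurrence
    return np.stack(H[:n_terms])

s = np.linspace(-1.0, 1.0, 5)
basis = hermite_basis(s, 4)   # shape (4, 5): one row per polynomial
```

Each new basis element depends on the two previous ones, so the whole chain must be evaluated sequentially; with a kernel expansion, by contrast, every term $\kappa(s, d_i)$ is computed independently and in parallel.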
\section{Experimental results}
\label{sec:experiments}
In this section we provide a comprehensive evaluation of the proposed KAFs and 2D-KAFs when applied to several use cases. As a preliminary experiment, we begin by comparing multiple activation functions on a relatively small classification dataset (Sensorless) in Section \ref{sec:visualizing_functions}, where we discuss several examples of the shapes that are generally obtained by the networks and initialization strategies. We then consider a large-scale dataset taken from \citet{baldi2014searching} in Section \ref{sec:results_susy}, where we show that two layers of KAFs are able to significantly outperform a feedforward network with five hidden layers, even when considering parametric activation functions and state-of-the-art regularization techniques. In Section \ref{sec:results_cifar10} we show that KAFs and 2D-KAFs also provide an increase in performance when applied to convolutional layers on the CIFAR-10 dataset. Finally, we show in Section \ref{sec:results_rl} that they provide significantly faster training and a higher cumulative reward in a set of reinforcement learning scenarios using MuJoCo environments from the OpenAI Gym\footnote{\url{https://gym.openai.com/}}. We provide an open-source library to replicate the experiments, with implementations of KAFs and 2D-KAFs in three separate frameworks, i.e., AutoGrad\footnote{\url{https://github.com/HIPS/autograd}}, TensorFlow\footnote{\url{https://www.tensorflow.org/}}, and PyTorch\footnote{\url{http://pytorch.org/}}, which is publicly accessible on the web\footnote{\url{https://github.com/ispamm/kernel-activation-functions/}}.
\subsection{Experimental setup}
Unless noted otherwise, in all experiments we linearly rescale the input features to the range $[-1, +1]$, and we replace any missing values with the median computed from the corresponding feature column. From the full dataset, we randomly keep a portion for validation and another portion for test. All neural networks use a softmax activation function in their output layer, and they are trained by minimizing the average cross-entropy on the training dataset, to which we add a small $\ell_2$-regularization term whose weight is selected in accordance with the literature. For optimization, we use the Adam algorithm \citep{kingma2014adam} with mini-batches of 100 elements and default hyper-parameters. After each epoch we compute the accuracy over the validation set, and we stop training whenever the validation accuracy has not improved for 15 consecutive epochs. Experiments are performed using the PyTorch implementation on a machine with an Intel Xeon E5-2620 CPU, 16 GB of RAM, and a CUDA back-end employing an Nvidia Tesla K20c. All accuracy measures over the test set are computed by repeating the experiments for 5 different splits of the dataset (unless the splits are provided by the dataset itself) and initializations of the networks. Weights of the linear layers are always initialized using the so-called `Uniform He' strategy, while additional parameters introduced by parametric and non-parametric activation functions are initialized by following the guidelines of the original papers.
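The preprocessing steps above can be sketched as follows; this is a simplified version assuming NaN-encoded missing values and non-constant feature columns:

```python
import numpy as np

def preprocess(X):
    """Impute missing values with per-column medians, then rescale to [-1, 1]."""
    X = X.copy()
    med = np.nanmedian(X, axis=0)            # column medians ignoring NaNs
    nan_r, nan_c = np.where(np.isnan(X))
    X[nan_r, nan_c] = med[nan_c]             # median imputation
    lo, hi = X.min(axis=0), X.max(axis=0)
    return 2.0 * (X - lo) / (hi - lo) - 1.0  # linear map to [-1, 1] per column

X = np.array([[0.0, 10.0],
              [5.0, np.nan],
              [10.0, 30.0]])
Xp = preprocess(X)
```

In the example, the missing entry is replaced by the column median ($20$) before rescaling, so it maps to the center of the $[-1, 1]$ range.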
\subsection{Visualizing the activation functions}
\label{sec:visualizing_functions}
We begin with an experiment on the `Sensorless' dataset to investigate whether KAFs and 2D-KAFs can indeed provide improvements in accuracy with respect to other baselines, and to visualize some of the common shapes that are obtained after training. The Sensorless dataset is a standard benchmark for supervised techniques, composed of 58509 examples with 48 input features representing electric signals, which are used to predict one of 11 different classes representing operating conditions. We partition it using a random 15\% for validation and another 15\% for testing, and we use a small regularization factor of $10^{-4}$.
In this dataset, we found that the best performing fixed activation function is a simple hyperbolic tangent. In particular, a network with one hidden layer of 100 neurons achieves a test accuracy of $97.75 \%$, while the best result is obtained with a network of three hidden layers (each composed of 100 neurons), which achieves a test accuracy of $99.18\%$. Due to the simplicity of the dataset, we have not found improvements here by adding more layers or including dropout during training as in the following sections. The best performing parametric activation function is instead the PReLU, which improves on these results by obtaining a $98.48\%$ accuracy with a single hidden layer, and $99.30\%$ with three hidden layers.
\begin{figure}
\subfloat[]{
\includegraphics[width=0.28\columnwidth,keepaspectratio]{./plots/2l_kaf_examples_10}
\label{fig:kaf_examples_0}
} \hfill
\subfloat[]{
\includegraphics[width=0.28\columnwidth,keepaspectratio]{./plots/2l_kaf_examples_31}
\label{fig:kaf_examples_1}
} \hfill
\subfloat[]{
\includegraphics[width=0.28\columnwidth,keepaspectratio]{./plots/kaf_examples_5}
\label{fig:kaf_examples_5}
} \vfill
\subfloat[]{
\includegraphics[width=0.28\columnwidth,keepaspectratio]{./plots/kaf_examples_6}
\label{fig:kaf_examples_6}
} \hfill
\subfloat[]{
\includegraphics[width=0.28\columnwidth,keepaspectratio]{./plots/kaf_examples_18}
\label{fig:kaf_examples_18}
} \hfill
\subfloat[]{
\includegraphics[width=0.28\columnwidth,keepaspectratio]{./plots/kaf_examples_46}
\label{fig:kaf_examples_46}
} \vfill
\caption{Examples of $6$ trained KAFs (with random initialization) on the Sensorless dataset. On the $y$-axis we plot the output value of the KAF. The KAF after initialization is shown with a dashed red line, while the final KAF is shown with a solid green line. The distribution of activation values after training is shown in the background in light blue for reference.}
\label{fig:kaf_examples_random_initialization}
\end{figure}
For comparison, we train several feedforward networks with KAFs in the hidden layers, having a dictionary of $D=20$ elements equispaced between $-3.0$ and $3.0$, and initializing the mixing coefficients from a normal distribution with mean $0$ and variance $0.3$. Using this setup, we already outperform all other baselines, obtaining an accuracy of $99.04\%$ with a single hidden layer, which improves to $99.80\%$ when considering two hidden layers of KAFs.
Although the dataset is relatively simple, the shapes we obtain are representative of all the experiments we performed, and we provide a selection in Fig. \ref{fig:kaf_examples_random_initialization}. Specifically, the initialization of the KAF is shown with a dashed red line, while the final KAF is shown with a solid green line. To help understand the behavior of the functions, we also plot in the background the empirical distribution of the activations on the test set, in light blue. Some shapes are similar to common activation functions discussed in Section \ref{sec:fixed_af}, although they are shifted on the $x$-axis to match the distribution of activation values. For example, Fig. \ref{fig:kaf_examples_0} is similar to an ELU, while Fig. \ref{fig:kaf_examples_6} is similar to a standard saturating function. Also, while in the latter case the final shape is somewhat determined by the initialization, the final shapes in general tend to be independent of the initialization, as in the case of Fig. \ref{fig:kaf_examples_0}. Another common shape is that of a radial-basis function, as in Fig. \ref{fig:kaf_examples_46}, which is similar to a Gaussian function centered on the mean of the empirical distribution. Shapes, however, can be vastly more complex than these. For example, in Fig. \ref{fig:kaf_examples_18} we show a function which acts as a standard saturating function on the main part of the activations' distribution, while its right tail tends to remove values larger than a given threshold, effectively acting as a sort of implicit regularizer. In Fig. \ref{fig:kaf_examples_1} we show a KAF without an intuitive shape, which selectively amplifies (either positively or negatively) multiple regions of its activation space. Finally, in Fig. \ref{fig:kaf_examples_5} we show an interesting pruning effect, where useless neurons correspond to activation functions that are practically zero everywhere.
This, combined with the possibility of applying $\ell_1$-regularization \citep{scardapane2017group}, makes it possible to obtain networks with a significantly smaller number of effective parameters.
Interestingly, the shapes obtained in Fig. \ref{fig:kaf_examples_random_initialization} seem to be necessary for the high performance of the networks, and are not an artifact of initialization. Specifically, we obtain similar accuracies (and similar output functions) even when initializing all KAFs as close as possible to hyperbolic tangents, following the method described in Section \ref{sec:initialization}, while we obtain a vastly inferior performance (in some cases even worse than the baseline) if we initialize the KAFs randomly and prevent their adaptation. This (informally) suggests that their flexibility and adaptability is an intrinsic component of their good performance, both in this experiment and in the following sections, an aspect that we return to in the conclusive section.
\begin{figure}
\begin{minipage}{0.6cm}
\vspace{-1.5\columnwidth}\rotatebox{90}{{\footnotesize Activation 2}}
\end{minipage}
\begin{minipage}{\dimexpr\linewidth-2.50cm\relax}
\raisebox{\dimexpr-.5\height-1em}{\includegraphics[scale=0.36]{./plots/2dkaf_examples_0}\hspace{0.2em}\includegraphics[scale=0.36]{./plots/2dkaf_examples_1}\hspace{0.2em}\includegraphics[scale=0.36]{./plots/2dkaf_examples_2}\hspace{0.2em}\includegraphics[scale=0.36]{./plots/2dkaf_examples_3}}\ {} \\
\raisebox{\dimexpr-.5\height-1em}{\includegraphics[scale=0.36]{./plots/2dkaf_examples_4}\hspace{0.2em}\includegraphics[scale=0.36]{./plots/2dkaf_examples_5}\hspace{0.2em}\includegraphics[scale=0.36]{./plots/2dkaf_examples_6}\hspace{0.2em}\includegraphics[scale=0.36]{./plots/2dkaf_examples_7}}\ {}
\vspace*{0.1cm}\hspace*{0.485\columnwidth}{\footnotesize Activation 1}
\end{minipage}
\caption{Examples of $8$ trained 2D-KAFs on the Sensorless dataset. On the $x$- and $y$-axis we plot the two activation values (ranging from $-3$ to $3$), while we show the output of the 2D-KAF as a heat map, where darker colors represent larger output values and lighter colors smaller ones.}
\label{fig:2dkaf_examples}
\end{figure}
Results are also similar when using 2D-KAFs, which we initialize with $D=10$ elements on each axis using the same strategy as for KAFs. In this scenario, they obtain a test accuracy of $98.90\%$ with a single hidden layer, and an accuracy of $99.84\%$ (thus improving over the KAF) with two hidden layers. Some examples of the obtained shapes are provided in Fig. \ref{fig:2dkaf_examples}.
\subsection{Comparisons on the SUSY benchmark}
\label{sec:results_susy}
In this section, we evaluate the algorithms on a realistic large-scale use case, the SUSY benchmark introduced in \citet{baldi2014searching}. The task is to predict the presence of supersymmetric particles in a simulation of a collision experiment, starting from 18 features (both low-level and high-level) describing the simulation itself. The overall dataset is composed of five million examples, of which the last 500000 are used for test and another 500000 for validation. The task is interesting for several reasons. Due to the nature of the data, even a tiny change in accuracy (measured in terms of area under the curve, AUC) is generally statistically significant. In the original paper, \citet{baldi2014searching} showed that the best AUC was obtained by a deep feedforward network having five hidden layers, with significantly better results when compared to a shallow network. Surprisingly, \citet{agostinelli2014learning} later showed that a shallow network is in fact sufficient, so long as it uses non-parametric activation functions (in that case, APL units).
In order to replicate these results with our proposed methods, we consider a baseline network inspired by \citet{baldi2014searching}, having five hidden layers with 300 neurons each and ReLU activation functions, with dropout applied to the last two hidden layers with probability 0.5. For comparison, we also consider the same architecture, but substitute the ReLUs with ELU, SELU, and PReLU functions. For SELU, we also substitute the standard dropout with the customized version proposed in \citet{klambauer2017self}. We compare with simpler networks composed of one or two hidden layers of 300 neurons, employing Maxout neurons (with $K=5$), APL units (with $S=3$ as proposed in \citet{agostinelli2014learning}), and the proposed KAFs and 2D-KAFs, following the same random initializations as in the previous section. Results, in terms of AUC and number of trainable parameters, are given in Table \ref{tab:results_susy}.
\begin{table}
{\centering\hfill{}
\setlength{\tabcolsep}{4pt}
\renewcommand{\arraystretch}{1.5}
\begin{footnotesize}
\begin{tabular}{lcc}
\toprule
\textbf{Activation function} & \textbf{Testing AUC} & \textbf{Trainable parameters}\\
\midrule
ReLU & $0.8739 (0.001)$ & \multirow{3}{*}{$367201$} \\
ELU & $0.8739 (0.001)$ & \\
SELU & $0.8745 (0.002)$ & \\
\midrule
PReLU & $0.8748 (0.001)$ & $368701$ \\
\midrule
Maxout (one layer) & $0.8744 (0.001)$ & $17401$ \\
Maxout (two layers) & $0.8744 (0.002)$ & $288301$ \\
\midrule
APL (one layer) & $0.8744 (0.002)$ & $7801$ \\
APL (two layers) & $0.8757 (0.002)$ & $99901$ \\
\midrule
KAF (one layer) & $0.8756 (0.001)$ & $12001$ \\
KAF (two layers) & $\underline{0.8758 (0.001)}$ & $108301$ \\
\midrule
2D-KAF (one layer) & $0.8750 (0.001)$ & $20851$ \\
2D-KAF (two layers) & $\vect{0.8764 (0.002)}$ & $81151$ \\
\bottomrule
\end{tabular}
\end{footnotesize}
}
\hfill{}
\caption{Results of different activation functions on the SUSY benchmark. The last four rows are the proposed KAF and 2D-KAF. Standard deviation for the AUC is given between brackets, the best result is shown in bold, and the second best result is underlined. All networks with fixed or parametric activation functions have five hidden layers. See the text for a full description of the architectures.}
\label{tab:results_susy}
\end{table}
Several clear results emerge from the analysis of Table \ref{tab:results_susy}. First of all, the use of sophisticated activation functions (such as the SELU), or of parametric functions (such as the PReLU), can improve performance, in some cases even significantly. However, these improvements still require several layers of depth, and both families fail to provide accurate results with shallow networks. On the other hand, all non-parametric functions are able to achieve similar (or even superior) results while only requiring one or two hidden layers of neurons. Among them, APL and Maxout achieve a similar AUC with one layer, but only APL is able to benefit from the addition of a second layer. Both KAF and 2D-KAF significantly outperform all the competitors, and the overall best result is obtained by a 2D-KAF network with two hidden layers. This is achieved with a significant reduction in the number of trainable parameters, as described more in depth in the following section.
\subsection{Experiments with convolutive layers on CIFAR-10}
\label{sec:results_cifar10}
Although our focus has been on feedforward networks, an interesting question is whether the superior performance exhibited by KAFs and 2D-KAFs can also be obtained with different architectures, such as convolutional neural networks (CNNs). To investigate this, we train several CNNs on the CIFAR-10 dataset, composed of $60000$ images of size $32 \times 32$ belonging to $10$ classes. Since our aim is only to compare different architectures for the convolutional kernels, we train simple CNNs made by stacking convolutional `modules', each of which is composed of (a) a convolutive layer with $150$ filters, a filter size of $5 \times 5$, and a stride of $1$; (b) a max-pooling operation over $3 \times 3$ windows with a stride of $2$; (c) a dropout layer with probability $0.25$. We consider CNNs with a minimum of $2$ such modules and a maximum of $5$, where the output of the last dropout operation is flattened before applying a linear projection and a softmax operation. Our training setup is equivalent to that of the previous sections.
We consider different choices for the nonlinearity of the convolutional filters, using ELU as baseline, and our proposed KAFs and 2D-KAFs. In order to improve the gradient flow in the initial stages of training, KAFs in this case are initialized with the KRR strategy using ELU as the target. The results are shown in Fig. \ref{fig:cifar10}, where we show on the left the final test accuracy, and on the right the number of trainable parameters of the three architectures.
\begin{figure}
\subfloat[Test accuracy]{
\includegraphics[width=0.5\columnwidth,keepaspectratio]{./plots/cifar10_accuracy}
\label{fig:cifar10_accuracy}
} \hfill
\subfloat[Parameters]{
\includegraphics[width=0.5\columnwidth,keepaspectratio]{./plots/cifar10_params}
\label{fig:cifar10_params}
} \hfill
\caption{Results of KAF, 2D-KAF, and a baseline composed of ELU functions when using only convolutive layers on the CIFAR-10 dataset (see the text for a full description of the architectures). (a) Test accuracy. (b) Number of trainable parameters for the architectures.}
\label{fig:cifar10}
\end{figure}
Interestingly, both KAFs and 2D-KAFs achieve significantly better results than the baseline: even $2$ layers of convolutions are sufficient to surpass the accuracy obtained by an equivalent $5$-layered network with the baseline activation functions. From Fig. \ref{fig:cifar10_params}, we can see that this is obtained with a negligible increase in the number of trainable parameters for KAF, and with a significant decrease (roughly $50\%$) for 2D-KAF. The reason, as before, is that each nonlinearity for the 2D-KAF merges information coming from two different convolutive filters, effectively halving the number of parameters required for the subsequent layer.
\subsection{Experiments on a reinforcement learning scenario}
\label{sec:results_rl}
Before concluding, we evaluate the performance of the proposed activation functions when applied to a relatively more complex reinforcement learning scenario. In particular, we consider some representative MuJoCo environments from the OpenAI Gym platform,\footnote{\url{https://github.com/openai/gym/mujoco}} where the task is to learn a policy to control highly nonlinear physical systems, including pendulums and bipedal robots. As a baseline, we use the open-source OpenAI implementation of the proximal policy optimization algorithm \citep{schulman2017proximal}, that learns a policy function by alternating between gathering new episodes of interactions with the environment, and building the policy itself by optimizing a surrogate loss function. All hyper-parameters are taken directly from the original paper, without attempting a specific fine-tuning for our algorithm. The policy function for the baseline is implemented as a NN with two hidden layers, each of which has $64$ hidden neurons with $\tanh$ nonlinearities, providing in output the mean of a Gaussian distribution that is used to select an action. For the comparison, we keep the overall setup fixed, but we replace the nonlinearities with KAF neurons, using the same initialization as the preceding sections.
\begin{figure}
\subfloat[swimmer]{
\includegraphics[width=0.3\columnwidth,keepaspectratio]{./plots/reward_swimmer}
\label{fig:reward_swimmer}
} \hfill
\subfloat[humanoidstandup]{
\includegraphics[width=0.3\columnwidth,keepaspectratio]{./plots/reward_humanoid}
\label{fig:reward_humanoid}
} \hfill
\subfloat[pendulum\_inverted]{
\includegraphics[width=0.3\columnwidth,keepaspectratio]{./plots/reward_pendulum}
\label{fig:reward_pendulum}
} \hfill
\caption{Results for the reinforcement learning experiments, in terms of average cumulative rewards. We compare the baseline algorithm to an equivalent architecture with KAF nonlinearities. Details on the models and hyperparameters are provided in the main discussion.}
\label{fig:rl}
\end{figure}
We plot in Fig. \ref{fig:rl} the average cumulative reward obtained at every iteration of the algorithms on the different environments. The policy networks implemented with KAF nonlinearities consistently learn faster than the baseline and, in several cases, achieve a marked improvement in the final reward.
\section{Conclusive remarks}
\label{sec:conclusions}
In this paper, after extensively reviewing known methods to adapt the activation functions in a neural network, we proposed a novel family of non-parametric functions, framed in a kernel expansion of their input value. We showed that these functions combine several advantages of previous approaches, without introducing an excessive number of additional parameters. Furthermore, they are smooth over their entire domain, and their operations can be implemented easily with a high degree of vectorization. Our experiments showed that networks trained with these activations can obtain a higher accuracy than competing approaches on a number of different benchmarks and scenarios, including feedforward and convolutional neural networks.
From our initial model, we made a number of design choices in this paper, which include the use of a fixed dictionary, of the Gaussian kernel, and of a hand-picked bandwidth for the kernel. However, many alternative choices are possible, such as the use of dictionary selection strategies, alternative kernels (e.g., periodic kernels), and several others. In this respect, one intriguing aspect of the proposed activation functions is that they provide a further link between neural networks and kernel methods, opening the door to a large number of variations of the described framework.
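To make the kernel expansion concrete, a minimal NumPy sketch of one KAF nonlinearity is given below; the dictionary range, its size, and the rule tying the bandwidth to the grid spacing are illustrative assumptions, not the values used in the paper:

```python
import numpy as np

def kaf(s, alpha, dictionary, gamma):
    """Kernel activation function: a kernel expansion of the pre-activation s,
    here with a Gaussian kernel over a fixed dictionary."""
    # s: (batch,) pre-activations of one neuron; alpha: (D,) trainable mixing
    # coefficients; dictionary: (D,) fixed sample points; gamma: bandwidth.
    K = np.exp(-gamma * (s[:, None] - dictionary[None, :]) ** 2)  # (batch, D)
    return K @ alpha

# Illustrative setup: D dictionary elements on a uniform grid around zero,
# with a hand-picked bandwidth derived from the grid spacing (an assumption).
D = 20
dictionary = np.linspace(-2.0, 2.0, D)
gamma = 1.0 / (2.0 * (dictionary[1] - dictionary[0]) ** 2)
alpha = 0.1 * np.random.randn(D)  # the per-neuron trainable parameters
```

Because the expansion is linear in the mixing coefficients, they can also be initialized by kernel ridge regression against a target nonlinearity such as ELU, as was done for the CIFAR-10 experiments above.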
\section*{Acknowledgments}
The authors would like to thank the anonymous reviewers for their suggestions on how to improve the paper.
\bibliographystyle{elsarticle-harv}
\section{\label{sec:Introduction}Introduction}
Complex systems involving large collections of dynamical elements interacting with each other on complex networks are abundant across several disciplines of sciences and engineering~\cite{albert2002statistical, dorogovtsev2002evolution, boccaletti2006complex, newman2010networks}. This has generated a consolidated effort towards unveiling structural properties of manifold real-world networked systems and uncovering fundamental principles governing their organization~\cite{newman2003structure}. A significant milestone amid such explorations was the exposition of the small-world behaviour of diverse real networks, characterized by a small average path length between nodes and a high clustering coefficient~\cite{watts1998collective}. Further, the interplay between topological properties of complex networked systems and the collective dynamics exhibited by them has been simultaneously investigated, particularly with reference to the phenomenon of synchronization~\cite{pikovsky2003synchronization, arenas2008synchronization, newman2010networks}.
Synchronization is among the most relevant emergent behaviours in complex networks of dynamical systems and is often critical to their functionality~\cite{pikovsky2003synchronization, arenas2008synchronization, menck2014dead, mitra2014dynamical, mitra2015integrative, mitra2017multiple, mitra2017recovery}. As a result, there has been a persistent drive towards unravelling the influence of topological features of networks on their ability to synchronize, often with the objective of designing topologies for better synchronizability~\cite{motter2005enhancing, motter2005network, donetti2005entangled, nishikawa2006synchronization, nishikawa2006maximum, yin2006decoupling, duan2007complex, motter2007bounding, gu2009altering, nishikawa2010network}. In this regard, small-world networks have been particularly known to facilitate synchronization of dynamical systems coupled on them~\cite{lago2000fast, gade2000synchronous, hong2002synchronization, wang2002synchronizationsmall, barahona2002synchronization}. Besides the small-world property, real-world networks often exhibit two other remarkable generic features, namely, scale-free behaviour~\cite{barabasi1999emergence} and hierarchical structure~\cite{ravasz2002hierarchical, clauset2008hierarchical}.
Scale-free behaviour is characterized by the probability $P \left( k \right)$ that a randomly selected node has exactly $k$ links decaying as a power law $\left( P \left( k \right) \sim k^{- \gamma} \right)$ and appears in good approximation in diverse real networked systems such as the internet~\cite{faloutsos1999power}, the world wide web~\cite{barabasi1999emergence}, networks of metabolic reactions~\cite{jeong2000large}, protein interaction networks~\cite{jeong2001lethality}, the web of Hollywood actors linked by movies~\cite{albert2000topology}, social networks such as the web of human sexual contacts~\cite{liljeros2001web}, etc. In this context, the Barab\'{a}si-Albert (BA) model~\cite{barabasi1999emergence} has been suggested for realizing random scale-free networks with growth and preferential attachment, where an incoming node is more likely to get randomly linked to an existing node with higher connectivity.
Also, manifold real-world systems such as metabolic networks in the cell~\cite{ravasz2002hierarchical}, ecological niches in food webs~\cite{clauset2008hierarchical}, the scientific collaboration network~\cite{shen2009detect}, corporate and governmental organizations~\cite{yu2006genomic}, etc.\ exhibit hierarchical organization where small groups of nodes organize in a stratified manner into larger groups, over multiple scales. This definition of hierarchical structure, also used throughout this letter, relates to that proposed by Clauset \etal~\cite{clauset2008hierarchical}.
Naturally, collective dynamics on scale-free~\cite{jost2001spectral, wang2002synchronizationscale, wang2002complex, lind2004coherence} and hierarchical topologies~\cite{arenas2006synchronization, diaz2008dynamics, skardal2012hierarchical, mitra2017multiple, mitra2017recovery} have been investigated intensively, but mostly separately, leaving sufficient room for further explorations concerning synchronization in networks simultaneously exhibiting the two topological properties mentioned above. Notably, the coexistence of the generic feature of scale-free topology along with a hierarchical organization in many networks in nature and society is immensely intriguing~\cite{ravasz2003hierarchical}. Examples in this direction constitute the internet at the domain level, the world wide web of documents, the actor network, the semantic web viewed as a network of words, biochemical networks in the cell, etc.~\cite{ravasz2002hierarchical, ravasz2003hierarchical}.
\begin{figure}
\begin{center}
\subfigure[]
{
\includegraphics[height=2.5cm, width=5.0cm]{Figure_1_a.jpg}
}
\subfigure[]
{
\includegraphics[height=2.5cm, width=3.0cm]{Figure_1_b.jpg}
}
\caption{\label{fig:Figure_1}Topology of the (a) deterministic and (b) pseudofractal scale-free networks developed over 2 generations.}
\end{center}
\end{figure}
\subsection{\label{sec:Network_Construction}Network Construction}
Notable instances among models simultaneously incorporating the prominent topological features of scale-free behaviour and hierarchical organization under one roof are the deterministic scale-free (\textsc{DSF}{})~\cite{barabasi2001deterministic}, pseudofractal scale-free (\textsc{PSF}{})~\cite{dorogovtsev2002pseudofractal}, Apollonian~\cite{andrade2005apollonian} and hierarchical~\cite{ravasz2003hierarchical} network models. We specifically study \textsc{DSF}{} and \textsc{PSF}{} networks in this letter, whose topologies, developed over 2 generations, are illustrated in Fig.~\ref{fig:Figure_1}(a, b). Evidently, these models are completely deterministic, leading to a perfectly hierarchical assembly of the associated networks. However, it is most natural to assume that real-world topologies are neither completely deterministic nor perfectly hierarchical. Thus, a realistic model of practical networked systems should feature an aspect of randomness, besides manifesting an approximately scale-free and hierarchical design. Hence, as a preliminary solution to this problem, we suggest in the following that perfectly hierarchical networks (generated by the deterministic rules of the aforementioned models) with randomly rewired links are better representatives of the associated connected architectures in the real world. The mechanism used throughout this letter for rewiring edges, while preserving the (scale-free) degree distribution of the otherwise perfectly hierarchical networks, is illustrated in Fig.~\ref{fig:Figure_2}.
\begin{figure}
\begin{center}
\subfigure[]
{
\includegraphics[height=2.5cm, width=2.5cm]{Figure_2_a.jpg}
}
\hspace{0.1cm}
\subfigure[]
{
\includegraphics[height=2.5cm, width=2.5cm]{Figure_2_b.jpg}
}
\hspace{0.1cm}
\subfigure[]
{
\includegraphics[height=2.5cm, width=2.5cm]{Figure_2_c.jpg}
}
\caption{\label{fig:Figure_2}(Color online) (a) We randomly select two (distinct) edges of the network, with the first edge (red) connecting nodes numbered 1 and 2 and the second edge (blue) connecting nodes numbered 3 and 4. We rewire (b) the first edge to connect nodes 1 and 3 and the second edge to connect nodes 2 and 4 (provided there does not already exist an edge between nodes 1 and 3 or between nodes 2 and 4). Otherwise, we rewire (c) the first edge to connect nodes 1 and 4 and the second edge to connect nodes 2 and 3 (again provided the respective edges do not already exist). If the aforementioned steps fail, we choose a new pair of edges to rewire. Clearly, we preserve the scale-free degree distribution of the deterministic networks we start with, but successively lose the hierarchical structure while rewiring them. Also, note that we allow for multiple selections of the same edge in subsequent rewiring steps.}
\end{center}
\end{figure}
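The rewiring mechanism of Fig.~\ref{fig:Figure_2} can be sketched in a few lines of Python; this is an illustrative re-implementation, not the authors' code:

```python
import random

def rewire_step(edges, rng=random):
    """One attempted degree-preserving rewiring step: pick two distinct
    edges (1,2) and (3,4); try to replace them with (1,3) and (2,4);
    failing that, with (1,4) and (2,3); otherwise report failure so the
    caller can draw a new pair of edges."""
    edge_set = {frozenset(e) for e in edges}
    i, j = rng.sample(range(len(edges)), 2)
    (a, b), (c, d) = edges[i], edges[j]
    for (p, q), (r, s) in (((a, c), (b, d)), ((a, d), (b, c))):
        e1, e2 = frozenset((p, q)), frozenset((r, s))
        # Reject self-loops and candidate edges that already exist.
        if p != q and r != s and e1 != e2 and e1 not in edge_set and e2 not in edge_set:
            edges[i], edges[j] = (p, q), (r, s)
            return True
    return False
```

Each successful step removes two edges and adds two new edges touching the same four nodes, so every node keeps its degree (hence the scale-free degree distribution is preserved), while repeated steps progressively destroy the hierarchical structure.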
The desired operational state in complex networks is often associated with the synchronized motion of its dynamical components~\cite{pikovsky2003synchronization}. In this work, we investigate the synchronizability of the proposed network models using the master stability function (\textsc{MSF}{}) framework~\cite{pecora1998master}. We recall that real-world topologies exhibiting the small-world property are known to facilitate network synchronization~\cite{nishikawa2003heterogeneity, menck2013basin}, as well as, to be more robust to random perturbations~\cite{menck2013basin}. In this regard, the classical network model of Watts and Strogatz~\cite{watts1998collective} is particularly notable for capturing the small-world property. In strong analogy with the present work, the Watts-Strogatz model generates graphs by randomly rewiring completely regular architectures (ring lattices), thus interpolating between absolutely regular and random graphs with the small-world property appearing for intermediate rewiring. However, \textsc{MSF}{}-based~\cite{pecora1998master} measurements of synchronizability of the Watts-Strogatz model~\cite{watts1998collective} surprisingly do not reveal exclusive features in the small-world regime~\cite{hong2002synchronization}. In such networks, synchronizability is only enhanced for an initial increase of the number of rewired edges, which then saturates afterwards as further links are rewired. In fact, synchronizability of the rewired networks (for a given number of rewired edges) are not much different from one another. On the other hand, networks resulting from rewiring hierarchical scale-free networks considered here exhibit both significantly enhanced, as well as, deteriorated synchronizability (compared to that of their completely deterministic counterparts).
\section{\label{sec:Methods}Methods}
In the following, we briefly review the framework of the MSF~\cite{pecora1998master} and the traditional quantifier of synchronizability of a network, prior to its application to the aforementioned network models. Subsequently, we discuss a few key characteristics of network topology whose relationships with the synchronizability of the networks will be studied in this letter.
\subsection{\label{sec:Synchronizability}Synchronizability}
Consider a network of $N$ identical oscillators where the isolated dynamics of the $i\textsuperscript{th}$ oscillator is described by
\begin{equation} \label{eq:DE_Individual}
\dot{\mathbf{x}}^{i} = \mathbf{F} \left( \mathbf{x}^{i} \right);\, \mathbf{x}^{i} \in \mathbb{R}^{d},\, i = 1,\, 2,\, \ldots,\, N,
\end{equation}
and coupling is established via an output function $\mathbf{H}:\, \mathbb{R}^{d}\, \rightarrow\, \mathbb{R}^{d}$ (identical for all $i$). The topology of interactions is captured by the adjacency matrix $\mathbf{A}$, where $A_{ij} = 1$ if nodes $i$ and $j \left( \neq i \right)$ are connected while $A_{ij} = 0$ otherwise. The dynamical equations of the networked system read
\begin{equation} \label{eq:DE_Network}
\begin{split}
\dot{\mathbf{x}}^{i} & = \mathbf{F} \left( \mathbf{x}^{i} \right) + \epsilon \sum\limits_{j = 1}^{N} A_{ij} \left[ \mathbf{H} \left( \mathbf{x}^{j} \right) - \mathbf{H} \left( \mathbf{x}^{i} \right) \right]\\
& = \mathbf{F} \left( \mathbf{x}^{i} \right) - \epsilon \sum\limits_{j = 1}^{N} L_{ij} \mathbf{H} \left( \mathbf{x}^{j} \right)
\end{split}
\end{equation}
where $\epsilon$ represents the overall coupling strength and $\mathbf{L}$ is the graph Laplacian such that $L_{ij} = - A_{ij}$ if $i \neq j$ and $L_{ii} = \sum\limits_{j = 1}^{N} A_{ij} = k_{i}$ is the degree of node $i$. Since the Laplacian matrix $\mathbf{L}$ is symmetric, its eigenvalue spectrum $\left( \lambda_{1},\, \lambda_{2},\, \ldots,\, \lambda_{N} \right)$ is real and ordered as $0 = \lambda_{1} < \lambda_{2} \le \ldots \le \lambda_{N}$, assuming the network is connected. Further, $\mathbf{L}$ has zero row sum by definition, guaranteeing the existence of a completely synchronized state, $\mathbf{x}^{1} \left( t \right) = \mathbf{x}^{2} \left( t \right) = \ldots = \mathbf{x}^{N} \left( t \right) = \mathbf{s} \left( t \right)$ as a solution of Eq.~(\ref{eq:DE_Network}). Starting from heterogeneous initial conditions, the oscillators (asymptotically) approach (and thus evolve on) the synchronization manifold $\mathbf{s} \left( t \right)$ corresponding to the solution of the uncoupled dynamics of the individual oscillators in Eq.~(\ref{eq:DE_Individual}) $\left( \dot{\mathbf{s}} = \mathbf{F} \left( \mathbf{s} \right) \right)$.
The local stability of the completely synchronized state determined by the framework of the MSF~\cite{pecora1998master} relates the \emph{synchronizability} of a network to the \emph{eigenratio} $R \equiv \frac{\lambda_{N}}{\lambda_{2}}$. Irrespective of $\mathbf{F}$ and $\mathbf{H}$ (Eq.~(\ref{eq:DE_Network})), this ratio has been extensively used to characterize the synchronizability of a network: the lower the value of $R$, the more synchronizable the network, and vice versa~\cite{barahona2002synchronization, nishikawa2003heterogeneity, motter2005enhancing, andrade2005apollonian, motter2005network, donetti2005entangled, boccaletti2006complex, nishikawa2006synchronization, nishikawa2006maximum, yin2006decoupling, duan2007complex, motter2007bounding, arenas2008synchronization, gu2009altering, nishikawa2010network, rad2008efficient, dadashi2010rewiring, jalili2013enhancing}.
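As a concrete sketch of this quantifier (an illustrative NumPy implementation, not the authors' code):

```python
import numpy as np

def eigenratio(A):
    """Eigenratio R = lambda_N / lambda_2 of the graph Laplacian
    L = D - A of an undirected, connected network with adjacency A."""
    L = np.diag(A.sum(axis=1)) - A
    lam = np.linalg.eigvalsh(L)  # real spectrum, sorted ascending; lam[0] = 0
    return lam[-1] / lam[1]
```

For instance, the complete graph on $N$ nodes has $\lambda_{2} = \ldots = \lambda_{N} = N$ and hence the minimum possible value $R = 1$, while sparse, heterogeneous networks typically yield much larger eigenratios.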
\subsection{\label{sec:Network_Properties}Network Properties}
We utilize the above framework in exploring the synchronizability of the aforementioned network models (Fig.~\ref{fig:Figure_1}) after stochastically rewiring their edges. Further, we investigate the influence of rewiring on the topological properties of the resulting networks and in turn, their relation to the synchronizability of the associated topologies. For that purpose, we now briefly describe the topological properties of average path length, maximum betweenness centrality, average local clustering coefficient and global clustering coefficient (transitivity) of a network.
The \emph{average path length} $\mathcal{L}$ of a network with $N$ nodes is defined as the mean value of the shortest path length between all possible pairs of nodes~\cite{newman2010networks}. Thus, $\mathcal{L} = \frac{1}{N \left( N - 1 \right)} \sum\limits_{i \neq j} \ell \left( i, j \right)$ where $\ell \left( i, j \right)$ is the length of the shortest path between nodes $i$ and $j$~\cite{newman2010networks}. Intuitively, a smaller average path length of a network should facilitate efficient communication between oscillators, culminating in improved synchronizability of the overall system~\cite{hong2002synchronization}.
The \emph{betweenness centrality} $bc_{i}$ of a node $i$ is related to the fraction of shortest paths between all pairs of nodes that pass through that node~\cite{newman2010networks}. For an $N$-node network, the betweenness centrality of each node may further be normalized by dividing it by the number of node pairs $\left( \text{i.e.},\ {N \choose 2} \right)$, resulting in values between 0 and 1. Thus, $bc_{i} = \frac{2}{N \left( N - 1 \right)} \sum\limits_{j \neq k \neq i} \frac{\sigma_{j, k}^{i}}{\sigma_{j, k}}$, where $\sigma_{j, k}$ is the total number of shortest paths from node $j$ to node $k$ and $\sigma_{j, k}^{i}$ is the number of such shortest paths which pass through node $i$~\cite{newman2010networks}. We study here the maximum betweenness centrality values $bc_{max}$ of all nodes of a network realization, which have been argued to be inversely related to synchronizability~\cite{hong2004factors}.
The \emph{local clustering coefficient} $\mathcal{C}_{i}^{L}$ relates to the probability of the existence of an edge between two randomly selected neighbours of node $i$~\cite{newman2010networks}. $\mathcal{C}_{i}^{L}$ is defined as the ratio between the number of links between nodes within the neighbourhood of node $i$ and the number of links that could possibly exist between its neighbours~\cite{newman2010networks}. Thus, $\mathcal{C}_{i}^{L} = \frac{2}{k_{i} \left( k_{i} - 1 \right)} N_{i}^{\Delta}$ where $N_{i}^{\Delta}$ is the total number of closed triangles including node $i$ (with degree $k_{i}$), which is bounded by the maximum possible value of $\frac{k_{i} \left( k_{i} - 1 \right)}{2}$~\cite{newman2010networks}. The \emph{average local clustering coefficient} $\mathcal{C}^{L}$ of the network is then given by the mean of the local clustering coefficients of all nodes of the network, i.e., $\mathcal{C}^{L} = \frac{1}{N} \sum\limits_{i = 1}^{N} \mathcal{C}_{i}^{L}$. Likewise, the \emph{global clustering coefficient} $\mathcal{C}^{G}$ of a network (often also called network \emph{transitivity}~\cite{newman2010networks, barrat2000properties}) is related to the mean probability that two nodes with a common neighbour are themselves neighbours~\cite{newman2010networks}. $\mathcal{C}^{G}$ is defined as the fraction of the total number of triplets in the network that are closed, i.e., $\mathcal{C}^{G} = \frac{\text{(number of closed triplets)}}{\text{(total number of triplets)}}$~\cite{newman2010networks}. In this case, a triplet means three vertices $i$, $j$ and $k$ with edges $\left( i,\, j \right)$ and $\left( j,\, k \right)$, while the edge $\left( i,\, k \right)$ may be present or not. To avoid terminological confusion, we emphasize that the average local clustering coefficient $\mathcal{C}^{L}$ (as defined in this letter) is often referred to as the global clustering coefficient (e.g., as in Ref.~\cite{watts1998collective}).
Larger clustering coefficients are generally associated with a reduced synchronizability of small-world and scale-free networks~\cite{arenas2008synchronization}.
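The four quantifiers above can be computed directly with a graph library; the sketch below assumes networkx, whose normalized betweenness centrality divides by $\binom{N-1}{2}$ rather than the $\binom{N}{2}$ used above, a constant-factor difference:

```python
import networkx as nx

def topology_summary(G):
    """The four topological quantifiers studied in the letter
    (networkx is an illustrative choice; any graph library would do)."""
    return {
        "avg_path_length": nx.average_shortest_path_length(G),  # L
        "bc_max": max(nx.betweenness_centrality(G).values()),   # bc_max
        "avg_local_clustering": nx.average_clustering(G),       # C^L
        "transitivity": nx.transitivity(G),                     # C^G
    }
```

As a sanity check, a complete graph has unit average path length, zero betweenness everywhere, and both clustering coefficients equal to one.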
\section{\label{sec:Results}Results}
\begin{figure}
\begin{center}
\subfigure[]
{
\includegraphics[height=4.75cm, width=8.5cm]{Figure_3_a.jpg}
}
\\
\subfigure[]
{
\includegraphics[height=4.75cm, width=8.5cm]{Figure_3_b.jpg}
}
\caption{\label{fig:Figure_3}Relationship of expected synchronizability $\langle R \rangle$ (solid line) with the fraction $f$ of rewired edges of the 3-generation (a) \textsc{DSF}{} and (b) \textsc{PSF}{} networks. The shaded areas are representative of the standard deviations (1$\sigma$) of the $R$ values for the ensemble of rewired networks generated for computing $\langle R \rangle$ for any particular value of $f$. The dashed line represents the minimum $R$ value over the ensemble of rewired networks for a given value of $f$. The inset magnifies the $\langle R \rangle$ values, where the vertical line marks the value of $f^{*} = 0.046$ (0.16) for the \textsc{DSF}{} (\textsc{PSF}{}) network. Note that we do not rewire ($e$) edges (for a given value of $f$) of the same realization, but generate ensembles of networks with ($e$) rewired edges (for the respective value of $f$). Therefore, one may obtain different values of $f^{*}$ for different realizations, if they were rewired consecutively instead of the procedure as followed here.}
\end{center}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[height=5.0cm, width=8.5cm]{Figure_4.jpg}
\caption{\label{fig:Figure_4}Relationship between $f$ and the topological properties (a) $\langle \mathcal{L} \rangle$, (b) $\langle bc_{max} \rangle$, (c) $\langle \mathcal{C}^{L} \rangle$, and (d) $\langle \mathcal{C}^{G} \rangle$ of the associated ensemble of randomly rewired \textsc{DSF}{} networks. The shaded areas are representative of the standard deviations (1$\sigma$) of the respective topological features of the ensemble of rewired networks (generated for a given value of $f$). The vertical lines indicate the location of $f^{*}$.}
\end{center}
\end{figure}
We consider two paradigmatic network topologies simultaneously exhibiting scale-free degree distributions and hierarchical organization. On the one hand, we study a \textsc{DSF}{} network developed over 3 generations comprising $N = 81$ nodes and $E = 130$ edges. On the other hand, we investigate a 3-generation \textsc{PSF}{} network with $N = 123$ nodes and $E = 243$ edges. In both cases, we generate an ensemble of $10^4$ networks by rewiring $e$ (equivalently, a fraction $f = \frac{e}{E}$) pairs of edges of the completely deterministic networks, using the mechanism described in Fig.~\ref{fig:Figure_2}. Further, for a particular value of $f$, we compute the values of $\mathcal{L}$, $bc_{max}$, $\mathcal{C}^{L}$, $\mathcal{C}^{G}$ and $R$ of each network with $e$ randomly rewired links of the ensemble and then estimate the expectation values $\langle \mathcal{L} \rangle$, $\langle bc_{max} \rangle$, $\langle \mathcal{C}^{L} \rangle$, $\langle \mathcal{C}^{G} \rangle$ and $\langle R \rangle$ as the corresponding ensemble means.
We present the variation in the expected synchronizability $\langle R \rangle$ (solid line) with the fraction $f$ of rewired edges of the \textsc{DSF}{} network in Fig.~\ref{fig:Figure_3}(a). We clearly observe that rewired versions of the otherwise completely \textsc{DSF}{} network exhibit significantly enhanced, as well as, deteriorated values of synchronizability [Fig.~\ref{fig:Figure_3}(a)]. The dashed line represents the minimum $R$ value over the ensemble of rewired networks for a given value of $f$. The corresponding topologies thus represent approximately `optimally' synchronizable networks for the respective value of $f$. The fluctuations in the minimum $R$ values may be attributed to the relatively small considered ensemble sizes ($10^{4}$), as compared with the much greater variety of possible rewired networks for a given value of $f$. Also, in the inset of Fig.~\ref{fig:Figure_3}(a), we observe a minimal value of $\langle R \rangle$ (highest average synchronizability) for $f$ equal to $f^{*} = 0.046$ (equivalently, 6 rewired edges) of the 81-node network. As $f$ is further increased beyond $f^{*}$, the value of $\langle R \rangle$ increases again, finally saturating at $\langle R \rangle \sim 185$ for $f \gtrsim 0.6$.
Figure~\ref{fig:Figure_3}(b) demonstrates that a similar (and even more pronounced) behaviour of average synchronizability is found in the \textsc{PSF}{} networks, for which we observe a minimal value of $\langle R \rangle$ for $f^{*} = 0.16$ (equivalently, 39 rewired edges). Moreover, we found similar results (not presented here for brevity) with regard to synchronizability of 4-generation \textsc{DSF}{} and \textsc{PSF}{} networks as well.
\begin{figure}
\begin{center}
\includegraphics[height=5.0cm, width=8.6cm]{Figure_5.jpg}
\caption{\label{fig:Figure_5}Same as in Fig.~\ref{fig:Figure_4} for randomly rewired \textsc{PSF}{} networks.}
\end{center}
\end{figure}
We further investigate the relationships between $f$ and the topological properties $\langle \mathcal{L} \rangle$, $\langle bc_{max} \rangle$, $\langle \mathcal{C}^{L} \rangle$ and $\langle \mathcal{C}^{G} \rangle$ of the associated ensemble of stochastically rewired \textsc{DSF}{} networks in Fig.~\ref{fig:Figure_4}. For $f < f^{*}$, the decrease in $\langle \mathcal{L} \rangle$ and the increase in $\langle bc_{max} \rangle$ conform to the decreasing trend of $\langle R \rangle$ (as per the earlier discussion on network properties and their relationship with synchronizability). The value of $\langle \mathcal{C}^{L} \rangle$ (as well as of $\langle \mathcal{C}^{G} \rangle$) starts from zero and increases as more edges are rewired. This implies the formation of triangles in the network, which promotes communication between the oscillators, thereby enhancing synchronizability. For $f > f^{*}$, however, the further decrease in $\langle \mathcal{L} \rangle$ and increase in $\langle bc_{max} \rangle$ should still improve the average synchronizability, which instead only declines from there on.
Thus, rewiring a few edges ($f < f^{*}$) alters the topological features of the ensemble of networks towards better synchronizability. However, rewiring further edges ($f > f^{*}$) no longer changes, on average, the topological properties in a way that improves synchronizability and, in fact, only undermines it. Hong \etal~\cite{hong2004factors} have previously proposed maximum betweenness centrality as a suitable indicator for predicting the synchronizability of networks. They have shown that, among various topological factors such as a short characteristic path length or a large heterogeneity of the degree distribution, it is a small value of the maximum betweenness centrality of a network that promotes synchronization~\cite{hong2004factors}. However, this is not corroborated by our results in Fig.~\ref{fig:Figure_4}, where we do not observe a strong linear relationship between $\langle R \rangle$ and $\langle bc_{max} \rangle$, as also indicated by a correlation coefficient of 0.776. Similarly, a correlation coefficient of -0.681 rules out a systematic linear dependence between $\langle R \rangle$ and $\langle \mathcal{L} \rangle$. However, a correlation coefficient of 0.847 (0.889) between $\langle R \rangle$ and $\langle \mathcal{C}^{L} \rangle$ ($\langle \mathcal{C}^{G} \rangle$) indicates an appreciable underlying linear relationship. Further, for $f > f^{*}$, the correlation coefficient of 0.939 (0.970) between $\langle R \rangle$ and $\langle \mathcal{C}^{L} \rangle$ ($\langle \mathcal{C}^{G} \rangle$) underlines the above observation.
Analogously to Fig.~\ref{fig:Figure_4}, Fig.~\ref{fig:Figure_5} again shows the relationships between $f$ and the topological properties $\langle \mathcal{L} \rangle$, $\langle bc_{max} \rangle$, $\langle \mathcal{C}^{L} \rangle$ and $\langle \mathcal{C}^{G} \rangle$ of the associated ensemble of rewired \textsc{PSF}{} networks. In this case, we observe a clear relationship between $\langle R \rangle$ and $\langle \mathcal{L} \rangle$, further corroborated by a correlation coefficient of 0.987. On the other hand, a possible linear relationship between $\langle R \rangle$ and $\langle bc_{max} \rangle$, $\langle \mathcal{C}^{L} \rangle$ and $\langle \mathcal{C}^{G} \rangle$ is ruled out by correlation coefficients of -0.25, -0.175 and -0.373, respectively.
Taken together, we notice that the topological features of the ensembles of rewired \textsc{DSF}{} (Fig.~\ref{fig:Figure_4}) and \textsc{PSF}{} (Fig.~\ref{fig:Figure_5}) networks exhibit certain contrasting variations, as $f$ is tuned from 0 to 1. Prior to saturation, the $bc_{max}$ of the rewired \textsc{DSF}{} networks (Fig.~\ref{fig:Figure_4}(b)) initially increases with $f$, as opposed to a corresponding decrease in $bc_{max}$ observed for the rewired \textsc{PSF}{} networks (Fig.~\ref{fig:Figure_5}(b)). On the contrary, both clustering coefficients $\langle \mathcal{C}^{L} \rangle$ and $\langle \mathcal{C}^{G} \rangle$ increase with $f$ until saturation for rewired \textsc{DSF}{} networks (Fig.~\ref{fig:Figure_4}(c, d)), which however display a decreasing trend in the case of rewired \textsc{PSF}{} networks (Fig.~\ref{fig:Figure_5}(c, d)).
We now compare the synchronizability of rewired \textsc{DSF}{} and \textsc{PSF}{} networks with that of random scale-free networks generated using the classical BA model of growth and preferential attachment~\cite{barabasi1999emergence}. In this regard, we consider an ensemble of 100 such random scale-free networks of 81 nodes (123 nodes) each for comparison with the rewired \textsc{DSF}{} (\textsc{PSF}{}) networks, respectively. While generating the BA networks, we incorporate the growing character of the network by starting with a small number of vertices and, at every time step, introducing a new vertex and linking it to 2 vertices already present in the system, until the network comprises 81 (123) nodes. Preferential attachment is incorporated by assuming that the probability $\Pi_{i}$ that a new node will be connected to node $i$ depends on the degree $k_{i}$ of node $i$, such that $\Pi_{i} = \frac{k_{i}}{\sum\limits_{j} k_{j}}$. The 81-node (123-node) BA networks have a total of 158 (242) edges in each realization. The $\langle R \rangle$ value of the considered ensemble of 81-node (123-node) BA networks turns out to be 36.74 (49.75), which is much smaller than the minimum $R$ values among the ensembles of rewired \textsc{DSF}{} (\textsc{PSF}{}) networks for different $f$, presented in Fig.~\ref{fig:Figure_3}. Thus, random scale-free networks generated using the classical BA model appear to promote synchronizability better than randomly rewired \textsc{DSF}{}, as well as \textsc{PSF}{}, networks. We leave further investigation into the reasons for this behaviour as a subject of future research.
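The growth-plus-preferential-attachment procedure described above can be sketched in a few lines of Python. The following is our own minimal illustration, not the implementation used in the study; the function name, the fixed seed and the uniform choice for the initially degree-less seed nodes are our assumptions:

```python
import random

def barabasi_albert(n, m=2, seed=0):
    """Grow a BA network: each new node attaches to m existing nodes,
    chosen with probability Pi_i = k_i / sum_j k_j (preferential
    attachment); the first targets are picked uniformly since the
    seed nodes start with degree zero."""
    rng = random.Random(seed)
    edges, degree = [], [0] * n
    for new in range(m, n):
        existing = range(new)
        total = sum(degree[i] for i in existing)
        targets = set()
        while len(targets) < m:
            if total == 0:
                targets.add(rng.choice(list(existing)))
            else:
                r, acc = rng.uniform(0, total), 0.0
                for i in existing:
                    acc += degree[i]
                    if acc >= r:
                        targets.add(i)
                        break
        for t in targets:
            edges.append((new, t))
            degree[new] += 1
            degree[t] += 1
    return edges

print(len(barabasi_albert(81)))    # 2*(81-2) = 158 edges
print(len(barabasi_albert(123)))   # 2*(123-2) = 242 edges
```

Each of the $n-m$ added nodes contributes exactly $m=2$ edges, reproducing the edge counts of 158 and 242 quoted above.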
\section{\label{sec:Conclusion}Conclusion}
Many real-world complex networks simultaneously exhibit generic features of scale-free topology along with hierarchical organization. In this regard, two notable models which simultaneously capture the two different topological properties are the deterministic and pseudofractal scale-free networks. These models comprise completely deterministic processes underlying the formation of the respective networks. However, real-world networks are presumably neither completely deterministic, nor perfectly hierarchical. Thus, a practical model of such networks should feature an aspect of randomness, while exhibiting scale-free and hierarchical design. For this purpose, we suggested preserving the scale-free degree distribution of the deterministic networks we start with, while tweaking the hierarchical structure by rewiring them. Specifically, we hypothesized that perfectly hierarchical scale-free networks (generated by the deterministic rules of the aforementioned models) with randomly rewired links may provide more realistic representatives of associated real-world topologies than perfectly hierarchical ones.
The desired operational state in many complex systems often concurs with the synchronized motion of dynamical units coupled on a networked architecture. Consequently, we utilized the analytical framework of the master stability function (\textsc{MSF}{}) to investigate the synchronizability of dynamical systems coupled on the proposed network structures. Interestingly, this revealed that the process of rewiring is capable of significantly enhancing, as well as deteriorating, the synchronizability of the resulting networks. Importantly, when a certain critical fraction of edges of the otherwise completely deterministic networks was rewired, the average synchronizability of the resulting topologies was optimized. This observation differs from \emph{Braess's paradox}, where the \emph{addition} of edges undermines synchrony in complex oscillator networks~\cite{witthaut2012braess}. We also investigated the influence of rewiring links on some key topological properties (average path length, maximum betweenness centrality, average local clustering coefficient and global clustering coefficient) of the resulting networks and, in turn, their relation to the synchronizability of the associated topologies, which demonstrate distinct behaviours in these different models of hierarchical scale-free networks. We speculate that an interplay between the various topological properties of the networks, in particular a trade-off between their average path lengths and clustering coefficients, leads to an `optimal' value of synchronizability when rewiring the respective networks.
In a related context, we recall that networks exhibiting the small-world property have been considered conducive to synchronization~\cite{nishikawa2003heterogeneity, menck2013basin}. However, \textsc{MSF}{}-based measurements of the synchronizability of Watts-Strogatz networks did not reveal exclusive features in the small-world regime~\cite{hong2002synchronization}. Importantly, the critical fraction of rewired edges (for maximal synchronizability) in the hierarchical scale-free networks considered here roughly corresponds to the fraction at which typical Watts-Strogatz networks begin to exhibit small-world behaviour. Specifically, we also found that rewiring a few edges of the deterministic scale-free, as well as pseudofractal scale-free, networks generated a topology with significantly enhanced or `optimal' synchronizability, which did not exhibit major improvements thereafter as the fraction of rewired edges was further increased.
The aforementioned results may have potential implications for the design of complex networks (simultaneously exhibiting hierarchical structure and scale-free behaviour) with better synchronizability. A more challenging problem is that of comparing real-world topologies with rewired versions of the deterministic scale-free hierarchical networks explored here, to ascertain a possible deterministic backbone of certain practical networks and the proportion of randomness in the same. Any efforts in this direction could certainly provide deeper insights into the developmental processes and synchronizability of many practical networked dynamical systems simultaneously displaying hierarchical structure and scale-free behaviour.
\acknowledgments
CM and RVD have been supported by the German Federal Ministry of Education and Research (BMBF) via the Young Investigators Group CoSy-CC\textsuperscript{2} (grant no.\ 01LN1306A). JK \& RVD acknowledge support from the IRTG 1740/TRP 2011/50151-0, funded by the DFG/FAPESP. The authors gratefully acknowledge the European Regional Development Fund (ERDF), the German Federal Ministry of Education and Research (BMBF) and the Land Brandenburg for supporting this project by providing resources on the high performance computer system at the Potsdam Institute for Climate Impact Research.
\bibliographystyle{eplbib}
\section{Introduction}
We introduce here the background for this paper. We refer the reader to Helgason's books \cite{Helgason1, Helgason2} as the standard references on symmetric spaces. Let $G$ be a semisimple Lie group of noncompact type with maximal
compact subgroup $K$, let $\g=\kk\oplus\p$ be a Cartan decomposition of the Lie algebra of $G$ and let $\a$ be a maximal Abelian subspace of $\p$.
The corresponding symmetric space of noncompact type is $M=G/K$. We also recall the definition of the Cartan motion group and the symmetric space of Euclidean type associated with $G$: it is the semi-direct product $G_0=K\ltimes \p$ where the group multiplication is defined by $(k_1,X_1)\cdot(k_2,X_2)=(k_1\,k_2,\Ad(k_1)(X_2)+X_1)$ while the associated symmetric space of Euclidean type is then $M_0=\p\simeq G_0/K$ (the action of $G_0$ on $\p$ is given by $(k,X)\cdot Y=\Ad(k)(Y)+X$).
We recall some notions concerning spherical functions, the Abel transform and its dual. For $X\in \a$ and $\lambda\in \a_\C$, the spherical functions in the noncompact case are given by the equation
\begin{align*}
\phi_\lambda(e^X)=\int_K\,e^{(i\,\lambda-\rho)(H(e^X\,k))}\,dk
\end{align*}
where $\rho(k)=\frac{1}{2}\,\sum_{\alpha\in R_+}\,k_\alpha\,\alpha$, $k_\alpha$ is the multiplicity of $\alpha$ and $g=k\,e^{H(g)}\,n\in K\,A\,N$ refers to the Iwasawa decomposition and by
\begin{align*}
\psi_\lambda(X)=\int_K\,e^{i\,\lambda(\pi_\a(\Ad(k)\cdot X))}\,dk
\end{align*}
for the corresponding Cartan motion group (Euclidean type) where $\pi_\a$ is the orthogonal projection from $\p$ to $\a$ with respect to the Killing form.
When $X\not=0$, the spherical functions have a Laplace-type representation
\begin{align}
\phi_\lambda(e^X)&=\int_{\a}\,e^{i\,\langle\lambda,H\rangle}\,K(H,X)\,dH,\label{LT}\\
\psi_\lambda(X)&=\int_{\a}\,e^{i\,\langle\lambda,H\rangle}\,K_0(H,X)\,dH\nonumber
\end{align}
with $K(H,X)\geq0$ and $K_0(H,X)\geq0$ and where the support of $K(\cdot,X)$ and $K_0(\cdot,X)$ is $C(X)$, the convex hull of $W\cdot X$ in $\a$. Observe that $X\not=0$ ensures that $\dim C(X)=\dim\a$ by \cite[Theorem 10.1, Chap.{} IV]{Helgason2}.
The function $K(H,\cdot)$ is the kernel of the Abel transform
\begin{align*}
\mathcal{A}(f)(e^H)=e^{\rho(H)}\,\int_N\,f(a\,n)\,dn=\int_{\a}\,f(e^X)\,K(H,X)\,\delta(X)\,dX
\end{align*}
where $\delta(X)=\prod_{\alpha\in R_+}\,\sinh^{m_\alpha}\alpha(X)$ while the dual Abel transform is simply given by
\begin{align}
\mathcal{A}^*(f)(e^X)=\int_K\,f(e^{H(e^X\,k)})\,dk=\int_{\a}\,f(e^H)\,K(H,X)\,dH\label{Dual}
\end{align}
so that $\phi_\lambda=\mathcal{A}^*(e^{i\,\langle\lambda,\cdot\rangle})$.
We can also define the Abel transform $\mathcal{A}_0$ and its dual $\mathcal{A}_0^*$ for the symmetric space of Euclidean type in a similar fashion
(the area element $\delta(X)$ then becomes $\pi(X) =\prod_{\alpha\in R_+}\,\alpha(X)^{m_\alpha}$).
In Figure \ref{Dunkl}, we find some of the basic objects that will come into our discussion on the trigonometric Dunkl operators (also called Cherednik operators) and the rational Dunkl operators. These operators provide the basis for the generalization of the spherical functions and the Abel transform to root systems with arbitrary multiplicities. The reader should refer to \cite{Anker,Opdam1,Roesler3,Sawyer1} for more details.
\begin{figure}[ht]
\begin{center}
\begin{tabular}{|p{6.0cm}|p{6.0cm}|}\hline
{\bf trigonometric Dunkl setting} & {\bf rational Dunkl setting}\\ \hline
generalization of the symmetric space of noncompact type&generalization of the symmetric space of Euclidean type \\ \hline
\parbox[t]{6.0cm}{$\displaystyle D_\xi=\partial_\xi+\sum_{\alpha\in R_+}\,k_\alpha\,\alpha(\xi)\,\frac{1-r_\alpha}{1-e^{-\alpha}}$ ~\qquad${}-\rho(k)(\xi)$}
&$\displaystyle T_\xi=\partial_\xi+\sum_{\alpha\in R_+}\,k_\alpha\,\alpha(\xi)\,\frac{1-r_\alpha}{\langle \alpha,X\rangle}$\\\hline
\multicolumn{2}{|l|}{\parbox[t]{12.0cm}{$\langle\cdot,\cdot\rangle$ denotes the Killing form, $R_+$ is the set of positive roots, $\partial_\xi$ is the derivative in the direction of $\xi\in\a$, $r_\alpha(X)=X-2\,\frac{\langle \alpha,X\rangle}{\langle \alpha,\alpha\rangle}\,\alpha$, $\alpha\in R$, $X\in\a$ and $W$ is the group generated by the $r_\alpha$'s}}\\\cline{1-2}
$D_\xi\,E(\lambda,\cdot) = \langle\xi,\lambda\rangle\,E(\lambda,\cdot)$,\hfill\break $\lambda\in \a_\C^*$; unique analytic solution with $E(\lambda,0)=1$
&$T_\xi\,E(\lambda,\cdot) = \langle\xi,\lambda\rangle\,E(\lambda,\cdot)$,\hfill\break $\lambda\in \a_\C^*$; unique analytic solution with $E(\lambda,0)=1$\\\hline
$p(D_\xi)\,J(\lambda,\cdot) = p(\langle\xi,\lambda\rangle)\,J(\lambda,\cdot)$, $\lambda\in \a_\C^*$; unique analytic solution with $J(\lambda,0)=1$
&$p(T_\xi)\,J(\lambda,\cdot) = p(\langle\xi,\lambda\rangle)\,J(\lambda,\cdot)$, $\lambda\in \a_\C^*$; unique analytic solution with $J(\lambda,0)=1$\\ \cline{1-2}
$p(\partial_\xi)\circ\mathcal{A}=\mathcal{A}\circ p(D_\xi)$ , $p$ any symmetric polynomial
&$p(\partial_\xi)\circ\mathcal{A}=\mathcal{A}\circ p(T_\xi)$, $p$ any symmetric polynomial\\ \hline
$\mathcal{A}^*\circ p(\partial_\xi)=p(D_\xi)\circ \mathcal{A}^*$ , $p$ any symmetric polynomial
&$\mathcal{A}^*\circ p(\partial_\xi)=p(T_\xi)\circ \mathcal{A}^*$ , $p$ any symmetric polynomial\\ \hline
\multicolumn{2}{|l|}{$J(\lambda,X)=\frac{1}{|W|}\,\sum_{s\in W}\,E(\lambda,s\cdot X)$, $X\in\a$}\\\cline{1-2}
\multicolumn{2}{|l|}{$J(w\cdot\lambda, w'\cdot X)=J(\lambda,X)$, $X\in\a$ and $w$, $w'\in W$}\\\hline
\end{tabular}
\caption{The trigonometric and rational Dunkl settings\label{Dunkl}}
\end{center}
\end{figure}
The functions $J(\lambda,\cdot)$ generalize the spherical functions on the symmetric spaces of noncompact type to arbitrary multiplicities (in the trigonometric Dunkl setting) and those on the symmetric spaces of Euclidean type (in the rational Dunkl setting). In the coming sections, we will use the notation $\phi_\lambda$ in the trigonometric setting and $\psi_\lambda$ in the rational setting.
As for the function $E(\lambda,\cdot)$ in the rational Dunkl setting, we have the following representation
\begin{align*}
E(\lambda,\cdot)&=V\,e^{\langle\lambda,\cdot\rangle}
\end{align*}
where $V$ is called the Dunkl intertwining operator. We also introduce the positive measure $\mu_x$ such that
\begin{align*}
V_X(f)=\int_{\a}\,f(H)\,d\mu_X(H)
\end{align*}
(for the existence of the positive measure, see for example \cite{Roesler2}). The support of $V_X$ is known to contain $W\cdot X$ and to be included in $C(X)$
(refer to \cite{Anker} for example).
The intertwining operator $V$ has the property that
$T_\xi\,V=V\,\partial_\xi$. Compare with the intertwining properties of the generalized Abel transform and of its dual given in Figure \ref{Dunkl}.
Recently (\cite{Trimeche4}), Trim\`eche was able to introduce the intertwining operator in the trigonometric Dunkl setting.
The terms ``group case'' or ``geometric setting'' refer to the situations corresponding to the root systems associated with symmetric spaces.
From now on, unless stated otherwise, we are considering root systems of type $BC$.
When referring to the root systems of type $A$ which come into play in our discussions, we will use the superscript $A$. The integer $q\geq 1$ will be assumed to be fixed as well as $d=\dim_{\R}\,\F$ where $\F=\R$, $\C$ or $\H$.
In the geometric setting, the space $\a$ is made of the matrices
\begin{align*}
\left[
\begin{array}{ccc}
0_{q\times q}&D_X&0_{q\times (p-q)}\\
D_X&0_{q\times q}&0_{q\times (p-q)}\\
0_{(p-q)\times q}&0_{(p-q)\times q}&0_{(p-q)\times (p-q)}
\end{array}
\right]
\end{align*}
where $D_X=\diag[x_1,\dots,x_q]$ with $p\geq q$. If $p>q$ then $\a^+=\{X\in\a\colon x_1>x_2>\dots>x_q>0\}$; if $p=q$ then $\a^+=\{X\in\a\colon x_1>x_2>\dots>|x_q|\}$. We will assume that we are in the former case.
Note that the Killing form $\langle X,Y\rangle=C\,\tr X\,Y$ for some $C>0$ where $\tr$ denotes the trace. This means that $\langle X,Y\rangle=2\,C\,\sum_{i=1}^q\,x_i\,y_i$.
Figure \ref{geom} shows the multiplicities of the various roots depending on $p$, $q$ and $d$.
\begin{figure}[h]
\begin{center}
\begin{tabular}{|c|c|}\hline
$\alpha$&$m_\alpha$\\ \hline
$h_i$&$d\,(p-q)$\\ \hline
$2\,h_i$&$d-1$\\ \hline
$h_i-h_j$&$d$\\ \hline
$h_i+h_j$&$d$\\ \hline
\end{tabular}
\end{center}
\caption{The root system $BC$\label{geom} for the symmetric spaces $\SO_0(p,q)/\SO(p)\times\SO(q)$ ($d=1$), $\SU(p,q)/\SU(p)\times\SU(q)$ ($d=2$) and
$\Sp(p,q)/\Sp(p)\times\Sp(q)$ ($d=4$) with $p\geq q$}
\end{figure}
We set some terminology here which differs slightly from the usage in \cite{Voit}.
\begin{definition}\label{chamber}
We write $C_q=\{X\colon X=\diag[x_1,\dots,x_q],~x_i\in\R\}$ and
$C_q^+=\{X\in C_q\colon x_1>x_2>\dots>x_q>0\}$ (in \cite{Voit}, $C_q$ is used for $\overline{C_q^+}=\{X\in C_q\colon x_1\geq x_2\geq \dots\geq x_q\geq 0\}$).
The Weyl group $W$ acts on $C_q$ by permuting the $x_i$'s and by changing any number of their signs. We also define $C(X)$ as the convex hull of $W\cdot X$ in $C_q$.
\end{definition}
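The action of $W$ described in the definition above is that of the hyperoctahedral group. A short Python sketch (our own illustration, representing elements of $C_q$ as tuples of diagonal entries) enumerates the orbit $W\cdot X$ whose convex hull is $C(X)$:

```python
from itertools import permutations, product

def weyl_orbit(x):
    """Orbit of x under the type-BC Weyl group: all permutations of the
    entries combined with all choices of sign changes."""
    orbit = set()
    for perm in permutations(x):
        for signs in product((1, -1), repeat=len(x)):
            orbit.add(tuple(s * v for s, v in zip(signs, perm)))
    return orbit

orbit = weyl_orbit((3.0, 2.0, 1.0))
print(len(orbit))   # 2^3 * 3! = 48 points for distinct nonzero entries
```

For an element with repeated or zero entries the orbit is correspondingly smaller, which is why the convex hull $C(X)$ can degenerate (e.g. to a point when $X=0$).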
The following result from \cite{Voit} is a generalization of a result from \cite{Sawyer2} which was given for the symmetric spaces
$\SO_0(p,q)/\SO(p)\times\SO(q)$.
\begin{theorem}[\cite{Voit}]\label{phiBC}
The generalized spherical function associated to the root system of type $BC$ is given as
\begin{align}
\phi_\lambda(X)&=\int_{B_q}\,\phi^A_\lambda(Z(X,w)) \,|\det(Z(X,w))|^{-d\,(p+1)/2+1}\,dm_p(w)\,dw\label{phi}
\end{align}
where $\Re p> 2\,q-1$, $B_q=\{w\in M_q(\F)\colon I-w^*\,w>0\}$, $Z(X,w)=\cosh X+\sinh X\,w$,
$m_p(w)=\frac{1}{\kappa^{p\,d/2}}\,\det(I-w^*\,w)^{p\,d/2-d\,(q-1/2)-1}$,
$\kappa^{p\,d/2}=\int_{B_q}\,\det(I-w^*\,w)^{p\,d/2-d\,(q-1/2)-1}\,dw$ and $\phi^A_\lambda$ is a spherical function for $\GL_0(q,\F)$ (the connected component of the $q\times q$ non-singular matrices over $\F$).
\end{theorem}
This formulation is adapted from \cite{Voit} to use the formula for the spherical functions of type $A$ (refer to \cite{Sawyer3} for example) and the formulas
$\rho^{BC}=\sum_{i=1}^q\,(\frac{d}{2}\,(p+q+2-2\,i)-1)\,f_i$ and $\rho^{A}=\sum_{i=1}^q\,\frac{d}{2}\,(q+1-2\,i)\,f_i$ where $f_i(X)=x_i$.
This allows us to provide the following expression for the dual of the Abel transform:
\begin{align}\label{Abel}
\mathcal{A}^*(f)(X)&=\int_{B_q}\,(\mathcal{A}^A)^*(f)(Z(X,w))
\,|\det(Z(X,w))|^{-d\,(p+1)/2+1}\,dm_p(w)\,dw.
\end{align}
Some additional notation which will be used throughout the paper:
\begin{definition}\label{XZ}
In addition to the notation $Z(X,w)=\cosh X+\sinh X\,w$, we will also denote $Z_2(X,w)=Z(X,w)^*\,Z(X,w)$ and $X(w)=a^A(Z(X,w))$ (the logarithms of the singular values of $Z(X,w)$ which equal half the logarithms of the eigenvalues of $Z_2(X,w)$ in decreasing order).
\end{definition}
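For $\F=\R$, the quantities of Definition \ref{XZ} are directly computable. The following numerical sketch (our own illustration, not part of the development) evaluates $X(w)$ as the decreasingly ordered logarithms of the singular values of $Z(X,w)$:

```python
import numpy as np

def X_of_w(x, w):
    """Compute X(w) for F = R: the logarithms of the singular values of
    Z(X, w) = cosh(X) + sinh(X) w, in decreasing order (equivalently,
    half the log-eigenvalues of Z_2(X, w) = Z^T Z)."""
    x = np.asarray(x, dtype=float)
    Z = np.diag(np.cosh(x)) + np.diag(np.sinh(x)) @ np.asarray(w, dtype=float)
    return np.sort(np.log(np.linalg.svd(Z, compute_uv=False)))[::-1]

# For w = 0, Z(X, 0) = cosh(X) is diagonal, so X(0) = (log cosh x_i).
print(X_of_w([1.5, 0.5], np.zeros((2, 2))))
```

One can also check numerically that $X(w)$ always lies in the convex hull $C(X)$ when $w\in B_q$.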
This paper is organized as follows. We will start by assembling some preliminary results in Section \ref{prelim}.
In Theorem \ref{main} of Section \ref{Dually}, we will show that the support of the measure $f\mapsto \mathcal{A}^*(f)(X)$ is $C(X)$ as in the geometric setting.
We also show in Theorem \ref{KHX} that the dual of the Abel transform has a kernel $H\mapsto K(H,X)$ provided $X\not=0$. In Section \ref{DunklSetting}, we derive
an expression similar to the one given in \eqref{phi} for the generalized spherical functions associated to the root system $BC$ in the rational Dunkl setting. This allows us to adapt the results of Theorem \ref{main} and Theorem \ref{KHX} to that setting. We conclude by showing that the support of the intertwining operator $f\mapsto V_X(f)$ is also $C(X)$.
In \cite{Trimeche3}, Trim\`eche proved the more general result, valid for all root systems, that the measures $\mu_X$ (and therefore the corresponding dual of the Abel transform) are absolutely continuous with respect to the Lebesgue measure on $\a$ provided that $X$ is regular. His results apply both in the rational and in the trigonometric settings. However, our results for the root system $BC$ provide a more direct construction of the density $K$ (for the dual of the Abel transform) and of its exact support, and they are valid whenever $X\not=0$, a natural extension of the geometric setting.
The reader should also consider the papers \cite{Trimeche1, Trimeche2} by Trim\`eche on the root systems $BC_d$ and $BC_2$.
\section{Preliminaries}\label{prelim}
In this section, we will introduce a few technical lemmas which will allow us to better describe the support of the measure $f\mapsto\mathcal{A}^*(f)(X)$.
The following results will provide a description of the convex hull $C(X)$ of an element $X$ of the Cartan subspace $\a$ based on the geometric setting described in the Introduction. These results are easily transposed in terms of the sets $C_q$ and $C_q^+$ given in Definition \ref{chamber}.
\begin{lemma}\label{C+}
Let ${}^+C=\{H\in\a\colon \langle X,H\rangle>0~\hbox{for all}~ X\in\a^+\}$. Then
\begin{align*}
{}^+C&=\{H\in\a\colon h_1>0,h_1+h_2>0,\dots, h_1+\dots+h_q>0\}.
\end{align*}
\end{lemma}
\begin{proof}
Any $X\in\a^+$ can be written with $x_q=y_q$, $x_{q-1}=y_q+y_{q-1}$, \dots, $x_1=y_1+\dots+y_q$ where the $y_i$'s are strictly positive and arbitrary. The inequality $\langle X,H\rangle>0$ is equivalent to
\begin{align*}
0&<h_1\,x_1+\dots+h_q\,x_q\\
&=(h_1+\dots+h_q)\,y_q + (h_1+\dots+h_{q-1})\,y_{q-1}+\dots+h_1\,y_1
\end{align*}
which holds for all $X\in\a^+$ if and only if $h_1+\dots+h_{k}>0$ for $k=1$, \dots, $q$.
\end{proof}
We are now in a position to describe the set $C(X)$ in terms of inequalities.
\begin{proposition}\label{CBq}
Let $X\in\overline{\a^+}$. Then $H\in C(X)$ if and only if
\begin{align}
\sum_{k=1}^r\,|h_{i_k}|\leq \sum_{k=1}^r\,x_k~
\hbox{for any choice of distinct $i_1$, \dots, $i_r$, $1\leq r\leq q$}.
\label{CX}
\end{align}
Furthermore, $H\in C(X)^\circ$ if and only if all the inequalities in \eqref{CX} are strict.
\end{proposition}
\begin{proof}
According to \cite[Lemma 8.3 page 459]{Helgason2}, $C(X)\cap \a^+=(X- {}^+C)\cap \a^+$. Now, from Lemma \ref{C+},
if $X\in\overline{\a^+}$,
\begin{align*}
X-{}^+C&=\{H\in\a\colon h_1< x_1,h_1+h_2< x_1+x_2,\dots, h_1+\dots+h_q<x_1+\dots+x_q\}
\end{align*}
and therefore
\begin{align}
(X-{}^+C)\cap\a^+&=\{H\in\a\colon h_1>h_2>\dots>h_q>0,~ h_1< x_1,h_1+h_2<x_1+x_2,\dots,
\nonumber\\ &\qquad\qquad\qquad
h_1+\dots+h_q<x_1+\dots+x_q\}.\label{CX2}
\end{align}
Let $D(X)$ be the set described in \eqref{CX}. Since $D(X)$ is Weyl invariant and $D(X)\cap\a^+$ is equal to the
set in \eqref{CX2}, the result follows.
\end{proof}
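Proposition \ref{CBq} reduces membership in $C(X)$ to finitely many linear inequalities, which makes it easy to test numerically. The following Python sketch is our own illustration (the helper name and the tolerance are ours):

```python
from itertools import combinations

def in_CX(h, x, strict=False, tol=1e-12):
    """Test H in C(X) for X in the closed positive chamber, using the
    inequalities of the proposition: for every r and every choice of r
    distinct indices, the sum of the corresponding |h_i| is at most
    x_1 + ... + x_r. With strict=True, test membership in the interior."""
    q = len(x)
    xs = sorted(x, reverse=True)
    for r in range(1, q + 1):
        bound = sum(xs[:r])
        for idx in combinations(range(q), r):
            lhs = sum(abs(h[i]) for i in idx)
            if (lhs >= bound) if strict else (lhs > bound + tol):
                return False
    return True

x = (3.0, 2.0, 1.0)
print(in_CX((2.5, -2.0, 0.5), x))   # True: every partial sum is within its bound
print(in_CX((3.0, 3.0, 0.0), x))    # False: |h_1| + |h_2| = 6 > x_1 + x_2 = 5
```

Checking all index subsets is equivalent to checking only the $r$ largest values of $|h_i|$ for each $r$; the exhaustive version above simply mirrors the statement of the proposition.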
We recall a few results related to the root systems of type $A$ in order to exploit the relationship between $\phi_\lambda$ and $\phi_\lambda^A$ in \eqref{phi}.
Recall that $\a^A$ is the space of the $q\times q$ real diagonal matrices,
that $(\a^A)^+$ is the set of the $q\times q$ real diagonal matrices with strictly decreasing diagonal entries
and that the Cartan decomposition of $g\in \GL_0(q,\F)$
is given by $g=k_1\,e^{a^A(g)}\,k_2$ with $k_i\in \U(q,\F)$ and
$a^A(g)\in\overline{(\a^A)^+}=\{H=\diag[h_1,\dots,h_q]\in \a^A\colon h_i\geq h_{i+1}\}$.
\begin{remark}[\cite{Rado}]\label{CXA}
If $X=\diag[x_1,\dots,x_q]\in\overline{(\a^A)^+}$ then $H=\diag[h_1,\dots,h_q]\in\a^A$ belongs to $C^A(X)$ if and only if $\sum_{i=1}^q\,h_i=\sum_{i=1}^q\,x_i$ and
\begin{align}
h_{i_1}+\dots+h_{i_k}&\leq x_1+\dots+x_k\label{ineq}
\end{align}
for every choice of distinct indices $i_1$, \dots, $i_k\in\{1,\dots,q\}$, $1\leq k\leq q-1$.
Moreover, $H\in C^A(X)^\circ$ (in the relative topology of the set $\{H\colon \tr H=\tr X\}$) if and only if all the inequalities in \eqref{ineq} are strict.
\end{remark}
The next result is helpful in giving a ``smooth criterion'' to determine the number of distinct eigenvalues of a matrix.
\begin{theorem}[Hermite]\label{companion}
Let $P$ be a polynomial with real coefficients: the number of distinct
roots of $P$ is equal to the rank of the matrix
\begin{align*}
B= \left(\begin{array}{cccc}p_0&p_1&\ldots&p_{n-1}\\
p_1&p_2&\ldots&p_{n}\\
\vdots&\vdots&\ddots&\vdots\\
\vdots&\vdots&\ddots&\vdots\\
p_{n-1}&p_n&\ldots&p_{2n-2}\\
\end{array}\right)
\end{align*}
where $p_k=\sum_{j=1}^n\,b_j^k$ and $b_1$, \dots, $b_n$ are the roots of $P$.
The Newton power sums $p_k$ can be expressed as polynomials in the coefficients
of $P$.
\end{theorem}
\begin{proof}
Refer to \cite[Page 202]{Gantmacher}.
\end{proof}
The next result will be used repeatedly.
\begin{corollary}\label{eigen}
Let $f\colon B_q\to \hbox{Symm}(q,\F)$ (the space of $q\times q$ Hermitian matrices over $\F$) be analytic and, for $1\leq r\leq q$, let
\begin{align*}
U_r=\{w\in B_q\colon \hbox{$f(w)$ has at least $r$ distinct eigenvalues}\}.
\end{align*}
Then $U_r$ is an open set of $B_q$ and either $U_r=\emptyset$ or $U_r$ is dense in $B_q$.
\end{corollary}
\begin{proof}
Let $B_w$ be the matrix associated to the polynomial $P=\det(t\,I_q-f(w))$ by the theorem. Let $F\colon B_q\to\R$ be such that $F(w)$ is
the sum of the squares of the absolute values of all the $r\times r$ sub-determinants of $B_w$ and note that $F$ is analytic.
From the theorem, $U_r=F^{-1}((0,\infty))$ which is open; either $F$ is identically equal to 0 and $U_r=\emptyset$ or $U_r$ is dense in $B_q$.
\end{proof}
\section{The dual of the Abel transform}\label{Dually}
We will describe precisely the support of the dual of the Abel transform in Subsection \ref{Support} and then show in Subsection \ref{LaplaceSec} that, provided $X\not=0$, the measure $f\mapsto \mathcal{A}^*(f)(X)$ has a density.
\subsection{The support of the dual of the Abel transform}\label{Support}
We start by recalling the invariance properties of the Abel transform.
\begin{remark}\label{WINV}
We can observe directly from \eqref{Abel} that $\mathcal{A}^*(f)(s\cdot X)=\mathcal{A}^*(f)(X)$ for every $s\in W$. Indeed, we have $s=\Ad(k)\circ L(u)$ where $k\in \U(q,\F)$ and $\Ad(k)$ acts on the elements of $C_q$ by permuting the diagonal elements and $\Ad(k)(H)=k\,H\,k^*$ and $u=\diag[u_1,\dots, u_q]$, $u_i\in\{-1,1\}$ with $L(u)(H)=u\,H$. It then suffices to observe that $Z(s\cdot X,w)=\Ad(k)(Z(X,L(u)\,\Ad(k^*)\,w))$, that $dw$ is invariant under the action of the map $L(u)\,\Ad(k^*)$ and that $\Ad(k)\in W^A$, the Weyl group for the root system $A_{q-1}$.
Note also that $\mathcal{A}^*(f\circ s)=\mathcal{A}^*(f)$ for every $s\in W$ (a direct consequence of \eqref{LT} and \eqref{Dual} and of the last two lines in the table of Figure \ref{Dunkl}). This implies that the support of the measure $f\mapsto \mathcal{A}^*(f)(X)$ is Weyl-invariant.
\end{remark}
The next result provides a decomposition of $C(X)$.
\begin{proposition}\label{XC}
Let $X\in\a$ and $q\geq1$. Then $C(X)=\cup_{w\in\overline{B_q}}\,C^A(X(w))$.
\end{proposition}
\begin{proof}
Suppose for now that $p\geq 2\,q$ is an integer (we are then in the geometric setting).
For $k=\left[\begin{array}{cc}U&0\\0&V\end{array}\right]\in K=\SU(q,\F)\times\SU(p,\F)$,
let $w(k)$ be the $q\times q$ principal minor of $V$. It is not difficult to show that under the above conditions on $p$,
$\overline{B_q}=\{w=w(k)\colon k\in K\}$. Indeed, if $w=k_1\,\diag[\sigma_1,\dots,\sigma_q]\,k_2\in \overline{B_q}$, then it suffices to take $k$ with $U\in \SU(q,\F)$ arbitrary and
$V=\left[\begin{array}{cc}A&B\\C&D\end{array}\right]\,\diag[1,\dots,1,\alpha]$ with $A=w$, $B=k_1\,\Sigma_B$ where $\Sigma_B$ is a $q\times(p-q)$ matrix with $(\Sigma_B)_{ii}=\sqrt{1-\sigma_i^2}$ and zero elsewhere, $C=\Sigma_C\,k_2$ where $\Sigma_C$ is a $(p-q)\times $ matrix with $(\Sigma_C)_{ii}=-\sqrt{1-\sigma_i^2}$ and zero elsewhere, $D=\diag[\sigma_1,\dots,\sigma_q,\overbrace{1,\dots,1}^{p-q}]$ and $|\alpha|=1$ is chosen so that $\det V=1$. This proves that $\overline{B_q}\subseteq\{w=w(k)\colon k\in K\}$. The reverse inclusion is straightforward.
Given the definition of $X(w)$ and of $B_q$, it is therefore sufficient to prove the proposition in the group case with $p\geq 2\,q$.
With $k$ as above, $H(e^X\,k)=H^A((\cosh D_X+\sinh D_X\,w(k)\,U^*)\,U)\in C^A(X(w(k)\,U^*))$ (\cite{Sawyer2}). Since $C(X)=\{H(e^X\,k)\colon k\in K\}$ this implies that $C(X)\subseteq\cup_{w\in\overline{B_q}}\,C^A(X(w))$. On the other hand, let $w_0\in \overline{B_q}$; we have
$\cosh D_X+\sinh D_X\,w_0=k_1\,e^{X(w_0)}\,k_2$ with $k_i\in \U(q,\F)$, $i=1$, 2. Hence,
$X(w_0)=H^A(k_1^*\,(\cosh D_X+\sinh D_X\,w_0)\,k_2^*)=H^A((\cosh D_X+\sinh D_X\,w_0)\,k_2^*)=H^A((\cosh D_X+\sinh D_X\,(w_0\,k_2^*)\,k_2)\,k_2^*)
=H(e^X\,k)$ where $k=\left[\begin{array}{cc}k_2^*&0\\0&V\end{array}\right]$ and $V$ is built as above from $w=w_0\,k_2^*$. This shows that
$X(w_0)\in C(X)$ and therefore that $C^A(X(w_0))\subseteq C(X)$ since $C(X)$ is $W^A$ invariant and convex.
\end{proof}
In what follows, $C^A(X(w))^\circ$ means the interior of $C^A(X(w))$ in the relative topology of $\{H\in\a^A\colon \tr H=\tr X(w)\}$.
\begin{theorem}\label{main}
Let $q\geq 1$ be an integer and suppose $p > 2\,q-1$ (we assume that $p$ is real here). Then the support of $f\mapsto \mathcal{A}^\ast(f)(X)$ is $C(X)$.
\end{theorem}
\begin{proof}
If $X=0$ then $\mathcal{A}^*(f)(X)=f(X)$ and the support is $\{X=0\}=C(X)$. Based on Remark \ref{WINV}, we can assume that $X\in\overline{C_q^+}\setminus\{0\}$.
If $q=1$ in \eqref{Abel}, then
\begin{align*}
\lefteqn{\mathcal{A}^*(f)(X)=\int_{B_1}\,f(\log(|\cosh x_1+\sinh x_1\,w|))}
\\&\qquad\qquad\qquad\qquad
\,\cdot|\cosh x_1+\sinh x_1\,w|^{-d\,(p+1)/2+1}\,dm_p(w)\,dw;
\end{align*}
$\log(|\cosh x_1+\sinh x_1\,w|)$ takes the full range between $-x_1$ and $x_1$ for $w\in \overline{B_1}=\{w\colon |w|\leq1\}$. The result follows in this case.
Assume now that $q\geq2$ and use the notation of Lemma \ref{BqX}.
We have
\begin{align*}
\hbox{support}(\mathcal{A}_X^*)
=\overline{\cup_{w\in B_q}\,C^A(X(w))^\circ}
=\cup_{w\in \overline{B_q}}\,C^A(X(w))=C(X).
\end{align*}
The first equality follows from \eqref{Abel} and the fact that the density of the measure $f\mapsto(\mathcal{A}^A)^*(f)$ is positive on $C^A(X(w))$ (\cite{Sawyer1}). Now, let $H_0\in C(X)$ and pick $\epsilon>0$. There exists $H\in C(X)\setminus \R\,I_q$ such that $\|H-H_0\|<\epsilon/3$. By Proposition \ref{XC}, $H=\sum_{s\in W^A}\,a_s\,s\cdot X(w)$ with $0\leq a_s\leq 1$, $\sum_{s\in W^A}\,a_s=1$ and $w\in \overline{B_q}$. Note that since $H\not\in\R\,I_q$, the same is true of $X(w)$. Since the map $w\mapsto X(w)$ is continuous on $\overline{B_q}$, there exists $w'\in B_q$ with $\|X(w')-X(w)\|<\epsilon/3$ and $X(w')\not\in \R\,I_q$ (this is necessary to ensure that $\overline{C^A(X(w'))^\circ}=C^A(X(w'))$). Hence, $H'=\sum_{s\in W^A}\,a_s\,s\cdot X(w')$ satisfies $\|H'-H\|<\epsilon/3$. Finally, there exists $H''\in C^A(X(w'))^\circ$ such that $\|H''-H'\|<\epsilon/3$ and therefore $\|H''-H_0\|<\epsilon$.
The result follows.
\end{proof}
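The $q=1$ step of the proof above can also be verified numerically. The sketch below (our own, for $\F=\R$) confirms that $\log|\cosh x_1+w\,\sinh x_1|$ sweeps the full interval $[-x_1,x_1]$ as $w$ ranges over $\overline{B_1}=[-1,1]$:

```python
import numpy as np

# As w ranges over [-1, 1], cosh(x) + w sinh(x) increases from e^{-x}
# to e^{x}, so its logarithm sweeps exactly the interval [-x, x].
x = 1.3
w = np.linspace(-1.0, 1.0, 2001)
vals = np.log(np.abs(np.cosh(x) + w * np.sinh(x)))
print(vals.min(), vals.max())   # approximately -1.3 and 1.3
```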
We will end this subsection by describing more closely the interior of $C(X)$ which will be useful when showing that the density of the measure
$f\mapsto \mathcal{A}^*(f)(X)$ is strictly positive on $C(X)^\circ$.
\begin{definition}
For $X\in \overline{C_q^+}$, let $\mathcal{U}(X)=\{\diag[u_1,\dots,u_q]\colon u_1\leq u_2\leq\dots\leq u_q,~|u_i|<x_i~\hbox{if $x_i>0$ and $u_i=0$ otherwise}\}$. Note that $U=X(w)\in\mathcal{U}(X)$ where $w=\diag[y_1,\dots,y_q]\in B_q$ with $y_i=(e^{u_i}-\cosh x_i)/\sinh x_i$ if $x_i\not=0$ and $y_i$ arbitrary in the interval $(-1,1)$ if $x_i=0$.
\end{definition}
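The correspondence in the definition above is easy to verify numerically for $\F=\R$: with the stated diagonal $w$, the matrix $Z(X,w)$ is exactly $\diag[e^{u_1},\dots,e^{u_q}]$. The particular values of $x$ and $u$ below are our own illustrative choices:

```python
import numpy as np

# For diagonal w with y_i = (e^{u_i} - cosh x_i)/sinh x_i, one gets
# Z(X, w) = diag(e^{u_i}), hence X(w) = diag[u_i] up to ordering.
x = np.array([2.0, 1.0, 0.5])
u = np.array([1.5, -0.8, 0.2])          # |u_i| < x_i, as required
y = (np.exp(u) - np.cosh(x)) / np.sinh(x)
assert np.all(np.abs(y) < 1)            # so w = diag(y) lies in B_q
Z = np.diag(np.cosh(x) + np.sinh(x) * y)
Xw = np.sort(np.log(np.linalg.svd(Z, compute_uv=False)))[::-1]
print(Xw)                               # u sorted in decreasing order
```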
The definition of the set $B_q(X)$ in the next lemma will allow us to avoid the difficulty that arises when $C^A(X(w))$ consists of only one point.
\begin{lemma}\label{BqX}
For $X\in C_q$, define $f_X\colon B_q\to \R$ by $f_X(w)=\tr X(w)$ and $B_q(X)=\{w\in B_q\colon \hbox{$df_X$ is surjective at $w$ and $X(w)\not\in\R\,I_q$}\}$. If $q\geq 2$ and $X\in C_q\setminus\{0\}$ then $B_q(X)$ is open and dense in $B_q$. Furthermore, $\mathcal{U}(X)\setminus\R\,I_q\subseteq B_q(X)$.
\end{lemma}
\begin{proof}
We can assume without loss of generality that $X\in \overline{C_q^+}\setminus\{0\}$ (refer to the discussion in Remark \ref{WINV}).
Observe that we can write $f_X(w)=\log(\det(Z_2(X,w)))/2$ and that $f_X$ and therefore $df_X$ are analytic on $B_q$. Let $V_X=\{w\in B_q\colon \hbox{$\left.df_X\right|_w$ is surjective}\}$. Since the rank of $\left.df_X\right|_w$ is a matter of a determinant being nonzero, $V_X$ is open and is either empty or dense in $B_q$. Taking
$w=\diag[y_1,\dots,y_q]\in B_q$ corresponding to an element of $\mathcal{U}(X)$ as above, we have
$\left.df_X\right|_w(E_{1,1})=\left.\frac{d~}{dt}\right|_{t=0}\,f_X(w+t\,E_{1,1})
=\sinh x_1/(\cosh x_1+y_1\,\sinh x_1)\not=0$. Hence $V_X$ is a dense open set in $B_q$.
Note also that $w\in B_q(X)$ is equivalent to $w\in V_X$ and $\dim C^A(X(w))=q-1$ (this is the maximum it can be since $H\in C^A(X(w))$ implies $\tr H=\tr X(w)$).
Observe that $w\in B_q(X)$ if and only if $w\in V_X$ and $f(w)=Z_2(X,w)$ is not a multiple of the identity. Consider now Corollary \ref{eigen} with $r=2$ and observe that
$B_q(X)=V_X\cap U_2$. Since $X\in \overline{C_q^+}\setminus\{0\}$, either $X=\mu\,I_q$ with $\mu>0$, or $X$ has at least two distinct diagonal entries. In the first case, we observe that
$w=\diag[\mu/2,-\mu/2,0,\dots,0]\in B_q(X)=V_X\cap U_2$. In the second case, since $X$ has at least two distinct non-negative diagonal entries, $w=0\in B_q(X)=V_X\cap U_2$. In both cases, the result follows from Corollary \ref{eigen}.
\end{proof}
The following technical result will be useful in the proof of Proposition \ref{CC} below.
\begin{lemma}\label{special}
Suppose $X\in\overline{C_q^+}\setminus\{0\}$. Let $w_0=\diag[y_1,\dots,y_q]\in B_q(X)$ be such that
$X(w_0)=\diag[u_1,\dots,u_q]$ with $u_i\geq u_{i+1}$ for all $i$. Then for $b>0$ small enough, $w_0+b\,E_{1,q}\in B_q(X)$ and $X(w_0+b\,E_{1,q})=\diag[u_1+\delta,u_2,\dots,u_{q-1}, u_q-\delta]$ for some $\delta>0$.
\end{lemma}
\begin{proof}
This situation can be reduced to considering the case when $X=\diag[x_1,x_2]$, $w_0=\diag[y_1,y_2]$ and $X(w_0)=\diag[u_1,u_2]$. One only has to look at the eigenvalues of
$Z_2(X,w_0+b\,E_{1,2})=\left[\begin{array}{cc} e^{2\,u_1}&e^{u_1}\,\sinh x_1\,b\\ e^{u_1}\,\sinh x_1\,b&e^{2\,u_2}+\sinh^2x_1\,b^2\end{array}\right]$, which is elementary.
\end{proof}
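Explicitly, a two-line verification of this claim (assuming $\sinh x_1\not=0$): the determinant of the matrix above is independent of $b$ while its trace is strictly increasing in $b^2$,
\begin{align*}
\det Z_2(X,w_0+b\,E_{1,2})&=e^{2\,u_1}\,\bigl(e^{2\,u_2}+\sinh^2x_1\,b^2\bigr)-e^{2\,u_1}\,\sinh^2x_1\,b^2=e^{2\,(u_1+u_2)},\\
\tr Z_2(X,w_0+b\,E_{1,2})&=e^{2\,u_1}+e^{2\,u_2}+\sinh^2x_1\,b^2,
\end{align*}
so the eigenvalues are necessarily of the form $e^{2\,(u_1+\delta)}$ and $e^{2\,(u_2-\delta)}$ with $\delta>0$ for $b>0$ small enough.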
The following result will allow us to show that the density of the measure $f\mapsto \mathcal{A}^*(f)(X)$ is strictly positive on $C(X)^\circ$ when $X\not=0$.
\begin{proposition}\label{CC}
Let $q\geq 2$.
For $X\in C_q$, $\cup_{w\in B_q(X)}\,C^A(X(w))^\circ\subseteq C(X)^\circ$ and $C(X)^\circ\cap\overline{C_q^+}\subseteq\cup_{w\in B_q(X)}\,C^A(X(w))^\circ$.
\end{proposition}
\begin{proof}
Assume that $X\in \overline{C_q^+}$ and that $X\not=0$ (if $X=0$ then the result is straightforward).
We first show that if $H'\in C^A(X(w_0))^\circ$ and $w_0\in B_q(X)$ then $H'\in C(X)^\circ$. Suppose that this is not the case: we have
$|h'_{i_1}|+\dots+|h'_{i_r}|=x_1+\dots+x_r$ for some $r$ and distinct indices $i_k$. If $r<q$, let $I=\{i_k\colon h'_{i_k}\geq 0\}$,
$J=\{i_k\colon h'_{i_k}< 0\}$ and define $f_{I,J}(H)=\sum_{i\in I}\,h_i-\sum_{j\in J}\,h_j$. Since $f_{I,J}$ is linear and not constant on
$C^A(X(w_0))$ which is convex and compact, we have
\begin{align*}
|h'_{i_1}|+\dots+|h'_{i_r}|=f_{I,J}(H')<\max_{H\in C^A(X(w_0))}\,f_{I,J}(H)=f_{I,J}(H'')
\end{align*}
for some $H''\in C^A(X(w_0))$. This means that $|h''_{i_1}|+\dots+|h''_{i_r}|\geq f_{I,J}(H'')>f_{I,J}(H')=x_1+\dots+x_r$, which contradicts $H''\in C(X)$.
If $r=q$ and not all $h'_i$'s are of the same sign, the same reasoning applies. Otherwise, if all $h'_i$'s are of the same sign, the map $f_X$ of Lemma \ref{BqX} would reach a maximum or a minimum at $w_0$, which is impossible since $\left.df_X\right|_w\not=0$ for all $w\in B_q(X)$.
We now show that $C(X)^\circ\cap\overline{C_q^+}\subseteq\cup_{w\in B_q(X)}\,C^A(X(w))^\circ$. Pick $H\in C(X)^\circ\cap\overline{C_q^+}$.
If $H=0$, let $y_i=(1-\cosh x_i)/\sinh x_i$ (if $x_i\not=0$) and $y_i\in (-1,1)$ arbitrary when $x_i=0$, and set $w_0=\diag[y_1,\dots,y_q]$. We have $H=X(w_0)$ and, using Lemma \ref{special}, we have $H\in C^A(X(w_0+b\,E_{1,q}))^\circ$ for $b>0$ small enough. From the proof of Lemma \ref{BqX}, we conclude that $w_0+b\,E_{1,q}\in B_q(X)$ provided again that $b$ is small enough.
Suppose then that $H\not=0$ and let $j$ be the smallest index $k$ such that $\sum_{i=1}^k\,x_i> \sum_{i=1}^q\,h_i$. Let $U$ be defined by the relations
\begin{align*}
u_k&=\left\lbrace
\begin{array}{cl}
x_k-\epsilon&\hbox{if $k< j$}\\
(j-1)\,\epsilon+\sum_{i=1}^q\,h_i-\sum_{i=1}^{j-1}\,x_i&\hbox{if $k=j$}\\
0&\hbox{if $j<k \leq q$}\\
\end{array}
\right.
\end{align*}
where $0<\epsilon<\min\{x_k, 1\leq k\leq j-1,(\sum_{i=1}^{r}\,x_i-\sum_{i=1}^r\,h_i)/r,r=1,\dots,j,(\sum_{i=1}^{j}\,x_i-\sum_{i=1}^q\,h_i)/(j-1)\}$.
We easily verify the inequalities $u_1\geq u_2\geq\dots\geq u_q\geq 0$ and $u_i< x_i$ when $x_i>0$, using the definition of $j$ and the restrictions on $\epsilon$. We have $U=X(w_0)\in\mathcal{U}(X)$ and since $u_{j-1}
-u_j\geq \sum_{i=1}^j\,x_i-\sum_{i=1}^q\,h_i-j\,\epsilon>0$, $U\not\in \R\,I_q$ and therefore $w_0\in B_q(X)$ by Lemma \ref{BqX}.
We verify that $H\in C^A(U)$ using Remark \ref{CXA}:
since $U$ and $H\in\overline{(\a^A)^+}$, we only have to show that $h_1+\dots+h_q= u_1+\dots+u_q$ which is straightforward and that
$h_1+\dots+h_r\leq u_1+\dots+u_r$ for every $r<q$. For $r< j$, this follows immediately from the definition of $u_i$ and $\epsilon$ and when $j\leq r<q$,
we have $h_1+\dots+h_r\leq h_1+\dots+h_q=u_1+\dots+u_{r}$.
Using Lemma \ref{special} with $U=X(w_0)$ and noting that $H\in C^A(X(w_0+b\,E_{1,q}))^\circ$ with $w_0+b\,E_{1,q}\in B_q(X)$ provided $b$ is small enough, we can conclude.
\end{proof}
\subsection{A Laplace-type expression for the generalized spherical functions}\label{LaplaceSec}
As mentioned in the Introduction, in the geometric setting, the measure $f\mapsto \mathcal{A}^*(f)(X)$ is absolutely continuous with respect to the
Lebesgue measure on $\a$ provided $X\not=0$. We now show that this remains true in the present context.
\begin{theorem}\label{KHX}
Let $q\geq 1$ be an integer and suppose $p > 2\,q-1$ (we assume that $p$ is real here). There exists a measurable non-negative function $K$ defined on $C_q\times (C_q\setminus\{0\})$ such that for every measurable function $f$ on $C_q$ and every
$X\in C_q\setminus\{0\}$, we have
\begin{align*}
\mathcal{A}^*(f)(X)
&=\int_{C(X)^\circ}\,f(H)\,K(H,X)\,dH
\end{align*}
where $K(H,X)>0$ for all $H\in C(X)^\circ$. In particular, the support of the map $H\mapsto K(H,X)$ is $C(X)$.
\end{theorem}
\begin{proof}
If $q=1$, then the result follows from the beginning of the proof of Theorem \ref{main}. We may therefore assume that $q\geq2$.
Let $K^A$ be the kernel of the Abel transform for the root systems of type $A$ (this was generalized to arbitrary multiplicities in
\cite[Theorem 2.3]{Sawyer1} but here we are only using the group case). Recall that provided $X\not\in\R\,I_q$,
\begin{align*}
(\mathcal{A}^A)^*(f)(X)
&=\int_{C^A(X)^\circ}\,f(H)\,K^A(H,X)\,dH
\end{align*}
with $K^A(\cdot,X)>0$ on $C^A(X)^\circ$. Let $X(w)=a^A(Z(X,w))$ as before. We have
\begin{align*}
\mathcal{A}^*(f)(X)
&=\int_{B_q}\,(\mathcal{A}^A)^*(f)(Z(X,w))\,|\det(Z(X,w))|^{-d\,(p+1)/2+1}\,dm_p(w)\,dw\\
&=\int_{B_q(X)}\,(\mathcal{A}^A)^*(f)(Z(X,w))\,|\det(Z(X,w))|^{-d\,(p+1)/2+1}\,dm_p(w)\,dw\\
&=\int_{B_q(X)}\,\int_{C^A(X(w))^\circ}\,f(e^H)\,K^A(H,X(w))\,dH\,\det(e^{X(w)})^{-d\,(p+1)/2+1}\,dm_p(w)\,dw\\
&=\int_{\cup_{w\in B_q(X)}\,C^A(X(w))^\circ}\,f(e^H)\,\left[\int_{D_H(X)}\,K^A(H,X(w))\,\det(e^{X(w)})^{-d\,(p+1)/2+1}\,dm_p(w)\,dw\right]\,dH
\end{align*}
where $D_H(X)=\{w\in B_q(X)\colon H\in C^A(X(w))^\circ\}$. Now,
\begin{align}
K(H,X)&=\int_{D_H(X)}\,K^A(H,X(w))\,\det(e^{X(w)})^{-d\,(p+1)/2+1}\,dm_p(w)\,dw\nonumber\\
&=\frac{1}{\kappa^{p\,d/2}}\,\det(e^{H})^{-d\,(p+1)/2+1}\label{DHX}
\\&\qquad\qquad
\,\cdot\int_{D_H(X)}\,K^A(H,X(w))\,\det(I-w^*\,w)^{p\,d/2-d\,(q-1/2)-1}\,dw.\nonumber
\end{align}
Suppose now that $H\in C(X)^\circ\cap \overline{C_q^+}$ and consider the open set $U(X,H)=\{w\in B_q(X)\colon \sum_{k=1}^r\,x_k(w)>\sum_{k=1}^r\, h_k,~1\leq r\leq q-1\}$. The set $D_H(X)$ is the nonempty intersection of $U(X,H)$ and the submanifold $\{w\in B_q(X)\colon \tr X(w)=\tr H\}$ (refer to Proposition \ref{CC}). For every $w_0\in D_H(X)$, there exists a coordinate system $(U,\phi)$ where $U$ is an open subset of $U(X,H)$ containing $w_0$, $\phi\colon U\to\R^{d\,q^2}$ and $\phi(w)=(x_1,\dots,x_{d\,q^2-1},0)$ on $U\cap D_H(X)$. This allows us to integrate over $D_H(X)$. Therefore, the map $H\mapsto K(H,X)$ is strictly positive on $C(X)^\circ\cap \overline{C_q^+}$ and, by the Weyl invariance of the map, on all of $C(X)^\circ$.
\end{proof}
\begin{corollary}\label{LaplaceType}
For $X\not=0$, we have
\begin{align*}
\phi_\lambda(X)
&=\int_{C(X)^\circ}\,e^{i\,\lambda(H)}\,K(H,X)\,dH.
\end{align*}
\end{corollary}
\begin{remark}
Note that the formulas in \eqref{DHX} remain valid with some adjustment when $q=1$; when in addition $d=1$, the set $D_H(X)$ only contains one point and the integral over that set disappears.
Note also that the integral term in the last expression for $K(H,X)$ (second line of \eqref{DHX}) in the proof of the theorem is decreasing with $p$ and corresponds to a group case when $p\geq 2\,q$ is an integer. Since in the geometric setting, the support is known to be $C(X)$, we could also have deduced the same result from that observation in the more general case.
Note also that the function $K(H,X)$ in the theorem is still defined when $\Re p>2\,q-1$ (\emph{i.e.} when $p$ is not assumed to be real) but $K$ is no longer real and proving that its support is exactly $C(X)$ is another matter.
\end{remark}
\section{The rational Dunkl setting}\label{DunklSetting}
In this section, we will derive results which provide the counterparts of formulas \eqref{phi} and \eqref{Abel} as well as of the results of Theorem \ref{main}, Theorem \ref{KHX} and its corollary in the rational Dunkl setting.
In \cite{Sawyer1}, we use the following result (originally from \cite{Sawyer3}) about the spherical functions associated to the root systems of type $A$ in the trigonometric Dunkl setting:
\begin{theorem}\label{old}
For $X\in (\a^A)^+$, we define $\phi_\lambda^A(X)=e^{i\,\lambda(X)}$ when $q=1$ and for
$q\geq 2$,
\begin{align*}
\phi^A_\lambda(X)
=\frac{\Gamma(d\,q/2)}{(\Gamma(d/2))^q}
e^{i\,\lambda_q\,\sum_{k=1}^q\,x_k}
\int_{E(X)}\, \phi^A_{\lambda_0}(e^\xi)\,S^{(d)}(\xi,X)\,d(\xi)^d\,d\xi
\end{align*}
where $E(X)
=\{\xi=(\xi_1,\dots,\xi_{q-1})\colon x_{k+1}\leq \xi_k\leq x_k\}$, $\lambda(X)=\sum_{j=1}^q\,\lambda_j\,x_j$,
$\lambda_0(\xi)=\sum_{i=1}^{q-1}\,(\lambda_i-\lambda_q)\,\xi_i$, $d(X)=\prod_{r<s}\,\sinh(x_r-x_s)$, $d(\xi)=\prod_{r<s}\,\sinh(\xi_r-\xi_s)$ and
\begin{align*}
S^{(d)}(\xi,X)=d(X)^{1-d}\,d(\xi)^{1-d}\,\left[\prod_{r=1}^{q-1}
\,\left(\prod_{s=1}^r\,\sinh(x_s-\xi_r)
\,\prod_{s=r+1}^q\,\sinh(\xi_r-x_s)\right)\right]^{d/2-1}
\end{align*}
\end{theorem}
\noindent and prove the following in the rational Dunkl setting:
\begin{theorem}\label{psidunkl}
Let $X\in(\a^A)^+$. The generalized spherical function associated to the root system $A_{q-1}$ in the rational Dunkl setting is given by
$\psi_\lambda^A(X)=e^{i\,\lambda(X)}$ when $q=1$ and for
$q\geq 2$,
\begin{align*}
\psi_\lambda^A(X)
&=\frac{\Gamma(d\,q/2)}{\Gamma(d/2)^q}
\,e^{i\,\lambda_q\,\sum_{k=1}^q\,x_k}
\,\int_{E(X)}\,\psi^A_{\lambda_0}(e^\eta)\,T^{(d)}(\eta,X)\,d_0(\eta)^d\,d\eta
\end{align*}
where $X\in (\a^A)^+$, $E(X)$, and $\lambda_0$ are as before, $d_0(\xi)=\prod_{r<s}\,(\xi_r-\xi_s)$, $d_0(X)=\prod_{r<s}\,(x_r-x_s)$ and
\begin{align*}
T^{(d)}(\xi,X)&=
d_0(X)^{1-d} \,d_0(\xi)^{1-d}
\prod_{r=1}^q\,\left[
\prod_{s=1}^{r-1}\,(\xi_s-x_r) \,\prod_{s=r}^{q-1}\,(x_r-\xi_s)
\right]^{d/2-1}.
\end{align*}
\end{theorem}
\begin{remark}\label{intT}
The results given in \cite{Sawyer1} are actually extended to $X\in\overline{(\a^A)^+}$.
Note also that if $\sigma=\{(\beta_1,\dots,\beta_q)\colon \beta_i\geq0,\sum_{i=1}^q\,\beta_i=1\}$ then
\begin{align*}
\frac{\Gamma(d\,q/2)}{\Gamma(d/2)^q}
\,\int_{E(X)}\,T^{(d)}(\eta,X)\,d_0(\eta)^d\,d\eta=\frac{\Gamma(d\,q/2)}{\Gamma(d/2)^q}\,\int_\sigma (\beta_1\cdots\beta_q)^{d/2-1}\,d\beta=1.
\end{align*}
This follows directly by making the change of variable $\beta_k=\frac{\prod_{i=1}^{q-1}\,(\xi_i-x_k)}{\prod_{i\not=k}\,(x_i-x_k)}$,
$k=1$, \dots, $q$.
\end{remark}
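For instance, when $q=2$ (a sanity check of the normalization, with the factors of $T^{(d)}$ written in the form that is nonnegative on $E(X)$), the substitution $\beta=(\xi-x_2)/(x_1-x_2)$ gives
\begin{align*}
\frac{\Gamma(d)}{\Gamma(d/2)^2}\,\int_{x_2}^{x_1}\,(x_1-x_2)^{1-d}\,\bigl[(x_1-\xi)\,(\xi-x_2)\bigr]^{d/2-1}\,d\xi
=\frac{\Gamma(d)}{\Gamma(d/2)^2}\,\int_0^1\,\bigl[(1-\beta)\,\beta\bigr]^{d/2-1}\,d\beta
=\frac{\Gamma(d)}{\Gamma(d/2)^2}\,B(d/2,d/2)=1.
\end{align*}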
The next result describes the behaviour of $a^A(Z(\epsilon\,X,w))$ as $\epsilon$ tends to 0, a step necessary to apply de Jeu's technique of ``rational limits'' (\cite[Theorem 4.13]{DeJeu}) later on.
\begin{lemma}\label{He}
Let $X\in C_q$.
For $w\in \overline{B_q}$, write $Z(\epsilon\,X,w)=k_1(\epsilon)\,e^{X^\epsilon(w)}\,k_2(\epsilon)$ with $X^\epsilon(w)\in\overline{(\a^A)^+}$
and $k_i(\epsilon)\in\U(q,\F)$. Then
\begin{align}
\lim_{\epsilon\to0}\,\frac{X^\epsilon(w)}{\epsilon}=a^A\left(\exp\left(\frac{X\,w+w^*\,X}{2}\right)\right)\label{limit}
\end{align}
uniformly on $\overline{B_q}$.
\end{lemma}
\begin{proof}
Note that the logarithm function is analytic when defined on the space of positive definite matrices with values in the space of symmetric matrices.
We have $Z_2(\epsilon\,X,w)=k_1(\epsilon)\,e^{2\,X^\epsilon(w)}\,k_1(\epsilon)^*$; observe also that $X^\epsilon(w)$ is continuous in $w$ and $\epsilon$.
Assuming that $|\epsilon|<1$ and $w\in\overline{B_q}$,
\begin{align*}
\det(t\,I-\frac{X^\epsilon(w)}{\epsilon})
&=\det(t\,I-\frac{k_1(\epsilon)\,2\,X^\epsilon(w)\,k_1^*(\epsilon)}{2\,\epsilon})
=\det(t\,I-\frac{\log(k_1(\epsilon)\,e^{2\,X^\epsilon(w)}\,k_1^*(\epsilon))}{2\,\epsilon})\\
&=\det(t\,I-\frac{\log(Z_2(\epsilon\,X,w))}{2\,\epsilon}).
\end{align*}
As $\epsilon$ tends to 0, the last term converges uniformly to $\det(t\,I-(X\,w+w^*\,X)/2)$ since,
provided $|\epsilon|\ll1$,
\begin{align*}
\log(Z_2(\epsilon\,X,w))&=\log(I+[Z_2(\epsilon\,X,w)-I])
=\sum_{k=1}^\infty\,(-1)^{k+1}\,\frac{[Z_2(\epsilon\,X,w)-I]^k}{k}\\
&=\epsilon\,(X\,w+w^*\,X)+O(\epsilon^2)
\end{align*}
which follows from the Taylor expansion $Z_2(\epsilon\,X,w)=I+\epsilon\,(X\,w+w^*\,X)+O(\epsilon^2)$.
Since the characteristic polynomial of $X^\epsilon(w)/\epsilon$ tends uniformly to the characteristic polynomial of $(X\,w+w^*\,X)/2$ as $\epsilon\to0$, this proves
\eqref{limit}.
\end{proof}
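As an aside, the limit \eqref{limit} is easy to probe numerically. The following sketch is our own illustration, not part of the paper: it takes $\F=\R$, $q=2$, and arbitrary sample matrices $X$ and $w$, and compares the spectrum of $\log(Z_2(\epsilon\,X,w))/(2\,\epsilon)$ with that of $(X\,w+w^*\,X)/2$.

```python
import numpy as np

# Numerical check of the lemma in the real case (F = R, q = 2).
# X and w below are arbitrary illustrative choices (not from the paper).
X = np.diag([1.0, 0.4])                          # X in C_q, diagonal
w = np.array([[0.3, -0.2], [0.1, 0.4]])          # a point of B_q (spectral norm < 1)

def scaled_log_spectrum(eps):
    """Eigenvalues of log(Z_2(eps X, w)) / (2 eps), sorted decreasingly."""
    # Z(eps X, w) = cosh(eps X) + sinh(eps X) w; X is diagonal, so entrywise cosh/sinh.
    Z = np.diag(np.cosh(eps * np.diag(X))) + np.diag(np.sinh(eps * np.diag(X))) @ w
    Z2 = Z @ Z.T                                 # Z_2 = k_1 e^{2 X^eps} k_1^*
    vals = np.linalg.eigvalsh(Z2)                # Z_2 is symmetric positive definite
    return np.sort(np.log(vals) / (2 * eps))[::-1]

# Spectrum of (X w + w^* X)/2, the claimed limit:
target = np.sort(np.linalg.eigvalsh((X @ w + w.T @ X) / 2))[::-1]
print(np.max(np.abs(scaled_log_spectrum(1e-5) - target)))   # shrinks as eps -> 0
```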
\begin{corollary}
There exists $\delta>0$ and a compact set $\tilde{E}(X)$ independent of $w\in\overline{B_q}$ and $\epsilon$ such that $E(a^A\left(\exp\left(\frac{X\,w+w^*\,X}{2}\right)\right))
\subseteq\tilde{E}(X)$ and $E(X^\epsilon(w)/\epsilon)
\subseteq\tilde{E}(X)$ whenever $0<\epsilon<\delta$.
\end{corollary}
\begin{theorem}\label{psiBC}
The generalized spherical function associated to the root system of type $BC$ in the rational Dunkl setting is given as
\begin{align*}
\psi_\lambda(X)&=\int_{B_q}\,\psi^A_\lambda(\exp((X\,w+w^*\,X)/2))\,dm_p(w)\,dw
\end{align*}
where
$\psi^A_\lambda$ is the spherical function for the symmetric space of Euclidean type associated to $\GL_0(q,\F)$.
\end{theorem}
\begin{proof}
We will assume that $X\in C_q^+$ (the result follows for all $X\in\a$ by continuity and Weyl-invariance following a reasoning similar to the one in Remark \ref{WINV}). We first show that
\begin{align}
\lim_{\epsilon\to0}\,\phi_{\lambda/\epsilon}^A(\cosh (\epsilon\,X)+\sinh (\epsilon\,X)\,w)
=\psi_\lambda^A(e^{(X\,w+w^*\,X)/2})\label{ae}
\end{align}
almost everywhere and that
\begin{align}
|\phi_{\lambda/\epsilon}^A(\cosh (\epsilon\,X)+\sinh (\epsilon\,X)\,w)|
\leq M+|\psi_\lambda^A(e^{(X\,w+w^*\,X)/2})|\label{bounded}
\end{align}
for some constant $M>0$ independent of $\epsilon$ and $w\in B_q$.
Write $a^A(\cosh (\epsilon\,X)+\sinh (\epsilon\,X)\,w)=X^\epsilon(w)=\diag[x_1^\epsilon(w),\dots,x_q^\epsilon(w)]$ and
let $\dot{X}^0(w)=\lim_{\epsilon\to0}\,X^\epsilon(w)/\epsilon=a^A\left(\exp\left(\frac{X\,w+w^*\,X}{2}\right)\right)=\diag[\dot{x}^0_1,\dots,\dot{x}^0_q]$ from Lemma \ref{He}. The set $U=\{w\in B_q\colon \frac{X\,w+w^*\,X}{2}~\hbox{has distinct eigenvalues}\}$ is open and dense in $B_q$ by Corollary \ref{eigen} (taking $r=q$ and noting that $w=I_q/2\in U=U_q$).
For $w\in U$, provided $\epsilon$ is close enough to 0, the diagonal entries of $X^\epsilon(w)/\epsilon$ are distinct and therefore $X^\epsilon(w)\in(\a^A)^+$.
From Theorem \ref{old}, we have
\begin{align*}
\lefteqn{\phi^A_{\lambda/\epsilon}(\cosh (\epsilon\,X)+\sinh (\epsilon\,X)\,w)}\\
&=\frac{\Gamma(d\,q/2)}{(\Gamma(d/2))^q}\,
e^{i\,\lambda_q\,\sum_{k=1}^q\,x_k^\epsilon(w)/\epsilon}
\int_{E(X^\epsilon(w))}\, \phi^A_{\lambda_0/\epsilon}(e^\xi)\,S^{(d)}(\xi,X^\epsilon(w))\,d(\xi)^d\,d\xi\\
&=\frac{\Gamma(d\,q/2)}{(\Gamma(d/2))^q}
e^{i\,\lambda_q\,\sum_{k=1}^q\,x_k^\epsilon(w)/\epsilon}
\int_{E(X^\epsilon(w)/\epsilon)}\, \phi^A_{\lambda_0/\epsilon}(e^{\epsilon\,\eta})\,\left[\epsilon^{q-1}\,S^{(d)}(\epsilon\,\eta,X^\epsilon(w))
\,d(\epsilon\,\eta)^d\right]\,d\eta\\
&=\frac{\Gamma(d\,q/2)}{(\Gamma(d/2))^q}\,\int_{E(X^\epsilon(w)/\epsilon)}
\,F^{(d)}(\lambda_0,\epsilon,\eta,X^\epsilon(w)/\epsilon)
\,T^{(d)}(\eta,X^\epsilon(w)/\epsilon)\,d_0(\eta)^d\,d\eta
\end{align*}
where
\begin{align*}
F^{(d)}(\lambda_0,\epsilon,\eta,X^\epsilon(w)/\epsilon)
&=e^{i\,\lambda_q\,\sum_{k=1}^q\,x_k^\epsilon(w)/\epsilon}\,\prod_{i<j}\,\left(\frac{\sinh(x_i^\epsilon(w)-x_j^\epsilon(w))}
{\epsilon\,(x_i^\epsilon(w)/\epsilon-x_j^\epsilon(w)/\epsilon)}\right)^{1-d}
\,\prod_{i<j}\,\left(\frac{\sinh(\epsilon\,\eta_i-\epsilon\,\eta_j)}{\epsilon\,(\eta_i-\eta_j)}\right)^{1-d}
\\&\qquad
\cdot\left[\prod_{r=1}^{q-1}
\,\left(\prod_{s=1}^r\,\frac{\sinh(x_s^\epsilon(w)-\epsilon\,\eta_r)}{\epsilon\,(x_s^\epsilon(w)/\epsilon-\eta_r)}
\,\prod_{s=r+1}^q\,\frac{\sinh(\epsilon\,\eta_r-x_s^\epsilon(w))}{\epsilon\,(\eta_r-x_s^\epsilon(w)/\epsilon)}\right)\right]^{d/2-1}
\\&\qquad
\cdot\phi_{\lambda_0/\epsilon}^A(e^{\epsilon\,\eta})
\end{align*}
which converges uniformly to $e^{i\,\lambda_q\,\sum_{k=1}^q\,\dot{x}^0_k(w)}\,\psi_{\lambda_0}^A(e^\eta)$ on $\overline{B_q}\times \tilde{E}(X)$ as $\epsilon$ tends to 0 (\cite[Theorem 4.13]{DeJeu}).
Using Theorem \ref{psidunkl}, it follows that
\begin{align*}
\lefteqn{|\phi^A_{\lambda/\epsilon}(\cosh (\epsilon\,X)+\sinh (\epsilon\,X)\,w)-\psi_{\lambda_0}^A(e^{(X\,w+w^*\,X)/2})|}\\
&=\frac{\Gamma(d\,q/2)}{(\Gamma(d/2))^q}\,\left|\int_{E(X^\epsilon(w)/\epsilon)}
\,F^{(d)}(\lambda_0,\epsilon,\eta,X^\epsilon(w)/\epsilon) \,T^{(d)}(\eta,X^\epsilon(w)/\epsilon)\,d_0(\eta)^d\,d\eta
\right.\\&{}\qquad\left.
-e^{i\,\lambda_q\,\sum_{k=1}^q\,\dot{x}^0_k(w)}\,\int_{E(\dot{X}^0(w))}\,\psi_{\lambda_0}^A(e^\eta)\,T^{(d)}(\eta,\dot{X}^0(w))\,d_0(\eta)^d\,d\eta\right|\\
&\leq\frac{\Gamma(d\,q/2)}{(\Gamma(d/2))^q}\,\int_{E(X^\epsilon(w)/\epsilon)}
\,\left|F^{(d)}(\lambda_0,\epsilon,\eta,X^\epsilon(w)/\epsilon)-e^{i\,\lambda_q\,\sum_{k=1}^q\,\dot{x}^0_k(w)}\,\psi_{\lambda_0}^A(e^\eta)\right|
\\&\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \cdot T^{(d)}(\eta,X^\epsilon(w)/\epsilon)\,d_0(\eta)^d\,d\eta
\\&\qquad
+ \frac{\Gamma(d\,q/2)}{(\Gamma(d/2))^q}\,\left|e^{i\,\lambda_q\,\sum_{k=1}^q\,\dot{x}^0_k(w)}\right|\,\left|\int_{E(X^\epsilon(w)/\epsilon)}\,\psi_{\lambda_0}^A(e^\eta)\,T^{(d)}(\eta,X^\epsilon(w)/\epsilon)\,d_0(\eta)^d\,d\eta
\right.\\&\qquad\qquad\qquad\left.
-\int_{E(\dot{X}^0(w))}\,\psi_{\lambda_0}^A(e^\eta)\,T^{(d)}(\eta,\dot{X}^0(w) )\,d_0(\eta)^d\,d\eta\right|\\
&\leq M(\epsilon)\,\int_{E(X^\epsilon(w)/\epsilon)}\,T^{(d)}(\eta,X^\epsilon(w)/\epsilon)\,d_0(\eta)^d\,d\eta
\\ &\qquad
+C\,\left|\int_{\sigma}\,\psi_{\lambda_0}^A(e^{\eta(\beta(\epsilon))})\,(\beta_1\,\cdots\,\beta_q)^{d/2-1}\,d\beta
-\int_{\sigma}\,\psi_{\lambda_0}^A(e^{\eta(\beta(0))})\,(\beta_1\,\cdots\,\beta_q)^{d/2-1}\,d\beta
\right|\\
&\leq M(\epsilon)
+ C\,\int_{\sigma}\,\left|\psi_{\lambda_0}^A(e^{\eta(\beta(\epsilon))})-\psi_{\lambda_0}^A(e^{\eta(\beta(0))})\right|
\,(\beta_1\,\cdots\,\beta_q)^{d/2-1}\,d\beta
\end{align*}
(we used the change of variables of Remark \ref{intT} in the last inequalities) where $C>0$ is a constant independent of $\epsilon$ and $w\in\overline{B_q}$.
Now, $\lim_{\epsilon\to0}\,M(\epsilon)=0$ uniformly on $\overline{B_q}\times\tilde{E}(X)$ while $\exp(2\,\eta_i(\beta(\epsilon)))$, $i=1$, \dots, $q-1$, are the roots of the polynomial
$\sum_{r=1}^q\,\beta_r\,\prod_{i\not=r}\,(x-e^{2\,x_i^\epsilon(w)/\epsilon})$ and $\exp(2\,\eta_i(\beta(0)))$, $i=1$, \dots, $q-1$, are the roots of the polynomial
$\sum_{r=1}^q\,\beta_r\,\prod_{i\not=r}\,(x-e^{2\,\dot{x}^0_i(w)})$. From there, we conclude that the term
\begin{align*}
\int_{\sigma}\,\left|\psi_{\lambda_0}^A(e^{\eta(\beta(\epsilon))})-\psi_{\lambda_0}^A(e^{\eta(\beta(0))})\right|
\,(\beta_1\,\cdots\,\beta_q)^{d/2-1}\,d\beta
\end{align*}
will also tend to 0 uniformly on $\overline{B_q}\times\sigma$ as $\epsilon$ tends to 0. This proves \eqref{ae} and, noting that
\begin{align*}
\lefteqn{|\phi^A_{\lambda/\epsilon}(\cosh (\epsilon\,X)+\sinh (\epsilon\,X)\,w)-\psi_{\lambda_0}^A(e^{(X\,w+w^*\,X)/2})|}\\
&\leq M(\epsilon)
+ C\,\int_{\sigma}\,\left|\psi_{\lambda_0}^A(e^{\eta(\beta(\epsilon))})-\psi_{\lambda_0}^A(e^{\eta(\beta(0))})\right|
\,(\beta_1\,\cdots\,\beta_q)^{d/2-1}\,d\beta,
\end{align*}
\eqref{bounded} follows. Using \cite[Theorem 4.13]{DeJeu} once more,
\begin{align*}
\psi_\lambda(X)&=\lim_{\epsilon\to0}\,\phi_{\lambda/\epsilon}(e^{\epsilon\,X})
\end{align*}
uniformly in $\lambda$ and $X$ over compact sets.
Using \eqref{ae} and \eqref{bounded} and the fact that $\lim_{\epsilon\to0}\,X^\epsilon(w)=0$ uniformly on $\overline{B_q}$, the result follows easily by the dominated convergence theorem.
\end{proof}
The proof of the following theorem follows the same lines as in the trigonometric setting.
\begin{theorem}\label{main2}
Let $X\in\overline{C_q^+}$. The support of the measure defined on $C_q$ by
\begin{align*}
\mathcal{A}^*_0(f)(X)=\int_{B_q}\,(\mathcal{A}^A)^*_0(f)(\exp((X\,w+w^*\,X)/2)) \,dm_p(w)\,dw
\end{align*}
is $C(X)$. Furthermore, if $X\not=0$, the measure $f\mapsto \mathcal{A}^*_0(f)(X)$ is absolutely continuous and its density is strictly positive on $C(X)^\circ$,
namely
\begin{align*}
K_0(H,X)&=\frac{1}{\kappa^{p\,d/2}}\,\int_{\widetilde{D_H(X)}}\,K_0^A(H,\dot{X}^0(w))\,\det(I-w^*\,w)^{p\,d/2-d\,(q-1/2)-1}\,dw
\end{align*}
where
\begin{align*}
\widetilde{D_H(X)}&=\{w\in \widetilde{B_q(X)}\colon H\in C^A(\dot{X}^0(w))^\circ\},\\
\widetilde{B_q(X)}&=\{w\in B_q\colon \hbox{$\dot{X}^0(w)\not\in \R\,I_q$ and $\left. d\widetilde{f_X}\right|_w$ is surjective} \},
\end{align*}
$\dot{X}^0(w)$ is the diagonal part of $(X\,w+w^*\,X)/2$ with decreasing diagonal entries and $\widetilde{f_X}\colon B_q\to \R$ is defined by
$\widetilde{f_X}(w)=\tr\dot{X}^0(w)=\tr (X\,w+w^*\,X)/2$.
\end{theorem}
\begin{proof}
The proof is very similar to the one of Theorem \ref{KHX}. We go over some of the differences. Recall that in the trigonometric setting,
\begin{align*}
\psi_\lambda(X)&=\int_K\,e^{i\,\langle \lambda,\Ad(k)\,X\rangle}\,dk
=\int_K\,e^{i\,\lambda(\pi_\a(\Ad(k)\,X))}\,dk
\end{align*}
where $\pi_\a\colon\p\to\a$ is the orthogonal projection with respect to the Killing form. For the Lie groups $\SO_0(p,q)$, $\SU(p,q)$ and $\Sp(p,q)$, $p>q$, we have
\begin{align*}
\pi_\a(k\cdot X)&=\pi_\a\left(
\left[\begin{array}{cc}U&0\\0&V\end{array}\right]\,\left[\begin{array}{cc}0&D_X\,Q^T\\Q\,X&0\end{array}\right]\,\left[\begin{array}{cc}U&0\\0&V\end{array}\right]^*
\right)\\
&=\pi_\a\left(
\left[\begin{array}{cc}0&U\,X\,Q^T\,V^*\\V\,Q\,X\,U^*&0\end{array}\right]
\right)=\pi_\a\left(
\left[
\begin{array}{cc}
0&U\,X\,\lbrack A^*,C^*\rbrack\\
\lbrack A^*,C^*\rbrack\,X\,U^*&0
\end{array}
\right]
\right)\\
&=\pi_\a\left(
\left[\begin{array}{cc}0&H\,Q^T\\Q\,H&0\end{array}\right]
\right)
\end{align*}
where $Q^T=\left[\begin{array}{cc}I_q&0_{q\times(p-q)}\end{array}\right]$, $V=\left[\begin{array}{cc}A&B\\C&D\end{array}\right]$,
\begin{align*}
H&=\pi_{\a^A}(U\,X\,A^*)=\pi_{\a^A}((U\,X\,A^*+A\,X\,U^*)/2)=\pi_{\a^A}(U\,\frac{X\,(U^*\,A)^*+(U^*\,A)\,X}{2}\,U^*)\\
&=\pi_{\a^A}(U\,\frac{X\,w+w^*\,X}{2}\,U^*)=\pi_{\a^A}(U\,\dot{X}^0(w)\,U^*),
\end{align*}
and $\pi_{\a^A}(g)$ is the matrix made of the diagonal of $g$, with $w=U^*\,A\in B_q$. Now, the set
$\{\pi_{\a^A}(U\,\dot{X}^0(w)\,U^*)\colon U\in \U(q,\F)\}$ is the support of the measure $f\mapsto (\mathcal{A}_0^A)^*(f)(X)$, which is equal to $C^A(\dot{X}^0(w))$ by \cite{Lu}. These considerations lead us to $C(X)=\cup_{w\in \overline{B_q}}C^A(\dot{X}^0(w))$. Hence, Proposition \ref{XC} (which also relied on the group case), Theorem \ref{main} and Proposition \ref{CC} have their counterparts, from which the current result follows.
\end{proof}
\begin{corollary}
Let $X\in C_q^+$ and let $f\mapsto V_X(f)=\int_{C_q}\,f(H)\,d\mu_X(H)$ be the Dunkl intertwining operator. Then the support of $\mu_X$ is also $C(X)$.
\end{corollary}
\begin{proof}
In \cite{Sawyer1} we showed, using a result from the doctoral thesis of C.{} Rejeb \cite[Theorem 2.9]{Rejeb} (also found in \cite{Gallardo}), that the support of the measure $f\mapsto V_X(f)$ is the same as the support of the measure $f\mapsto \mathcal{A}^*(f)(X)$ (this proof did not rely on the specific root system). The rest follows from the theorem.
\end{proof}
\begin{remark}
Given that Trim\`eche has shown that the intertwining operator can be defined in the trigonometric setting, the corresponding result is also valid: for the root systems studied in this paper, the support of the measure $f\mapsto V_X(f)$ is also $C(X)$ in the trigonometric setting (refer to \cite{Trimeche3, Trimeche4}).
\end{remark}
\section{Conclusion}
In this paper and in \cite{Sawyer1}, we have proved for specific root systems that the dual of the Abel transform, $f\mapsto \mathcal{A}^*(f)(X)$, has support exactly equal to $C(X)$ and that the same holds for the intertwining operator $V$ in the rational Dunkl setting. We also now know that this transform admits a kernel provided $X\not=0$.
These results were possible because we had an iterative formula for the generalized spherical function in the case of root systems of type $A$ and a formula ``reducing'' the problem to the root systems of type $A$ in the case of the root systems of type $BC$.
The drawback of this approach is that unless similar formulas are derived for the other root systems, it is not easily generalizable.
Another question of interest would be to see if \eqref{phi} and the results of this paper can be generalized in a setting where $d$ is no longer restricted to 1, 2 or 4.
\subsection*{Acknowledgments}
This research was partly supported by funding from Laurentian University.
\section{Introduction}
Self-organization of light and atomic degrees of freedom in laser driven systems of cold atoms with optical feedback has in recent years received considerable attention \cite{ritsch13}. In addition to the longitudinal axis (e.g. of an optical cavity), spatial ordering can also occur in the plane transverse to the driving laser beam.
Transverse optical self-organization has been studied in a wide range of non-linear media during the last 30 years \cite{cross93, arecchi99}. A particularly simple and fruitful setup is the single feedback mirror (SFM) configuration, where a non-linear medium experiences double-pass excitation by a single pump beam with mirror feedback. Spatial coupling of transversely separate regions inside the medium is provided by diffraction \cite{firth90a, ackemann01}. Recently, we have used this setup to observe long-range hexagonal ordering in a thermal cold atomic gas, breaking the continuous spatial symmetries of the initial system \cite{labeyrie14,Camara2015}. This matches interest in a related scheme for patterns in cold atom systems interacting with two independent counterpropagating input fields \cite{Muradyan2005,Greenberg2011a, Schmittberger2016, Greenberg2011b,labeyrie16}.
Employing cold atoms as optical media offers a high degree of tunability, such that the mechanism of the optical non-linearity can be selected by e.g.\ the duration of the pump pulse. For long pulses ($>10\, \mu$s) with blue detuning, optomechanical density modulations \cite{Bjorkholm1978,ashkin82} were shown to be dominant under optimum conditions \cite{labeyrie14}, whereas for shorter pulses ($<2\,\mu$s), pattern formation was found to be consistent with the standard two-level electronic nonlinearity \cite{Camara2015}. The results of Ref.~\cite{Camara2015} constitute the first observation of pattern formation in a system with a saturable electronic two-level nonlinearity.
As was highlighted in our earlier work, the full analysis of both qualitative and quantitative features of the transverse patterns in cold atoms demands a departure from the ``thin-medium" approximation, in which diffraction within the medium is assumed negligible in comparison with the free-space diffraction between the medium and the mirror. One goal of the present paper is to derive a new, ``thick-medium", model of the two-level instability with the inclusion of diffraction within the nonlinear medium and to investigate how its predictions compare to experimental results.
A major advance from previous models of the SFM configuration is the inclusion of diffraction within the optical medium. The requisite theory is related to that used to analyze pattern formation in a mirrorless thick-medium (slab) with two counterpropagating (CP) input fields. Such CP systems have been analyzed for Kerr media by Firth et al \cite{firth90b} and Geddes et al \cite{Geddes1994}, and by Muradyan et al \cite{Muradyan2005}, as part of a study of optomechanical
effects in cold atoms.
Our model also includes the simultaneous presence of transmission gratings (purely transverse gratings resulting from the interference of the pump with copropagating sidebands) and reflection gratings (wavelength-scaled gratings resulting from the interference of counterpropagating beams) in the presence of a feedback mirror, whereas earlier treatments utilized only pure transmission gratings \cite{firth90a,dalessandro92,ackemann95b}. Two-beam coupling via pure reflection gratings was included in the analysis of photorefractive experiments \cite{Honda1996}.
A system somewhat analogous to the present one was studied in Ref.~\cite{kozyreff06}, where dispersion in the time domain plays the role of diffraction in the spatial domain. The analogy is limited, however, because the interacting beams are co- and not counter-propagating, which leads to analytical differences. More importantly, reflection gratings, crucial in the cold-atom SFM and CP systems, are necessarily absent from the system analyzed in Ref.~\cite{kozyreff06}.
A key advance in the present paper is that we also include a full treatment of absorption (and its saturation), not included in the above-mentioned works. This is necessary to treat the region of small pump detuning, where absorptive effects were seen to limit pattern formation in recent experiments~\cite{Camara2015}. There is no known analytic solution to the thick-medium threshold equations in the presence of absorption, but we have developed an efficient and instructive graphical approach to the numerical evaluation of threshold curves. A side-benefit of our approach is our demonstration that, as the feedback mirror distance is varied, all the corresponding threshold curves are bounded by one or more envelope curves. These are as easily calculated as any single threshold curve, and are thus a very effective means of establishing the existence and extent of instability domains. Furthermore, we show that the zero-diffraction intercepts of these envelopes correspond exactly to thin-medium-model thresholds. This correspondence, the existence of envelope curves in SFM models, and our graphical ``gain-circle" approach to numerical evaluation of thresholds are likely to be applicable to SFM and related problems in a wide variety of nonlinear optical media.
\begin{figure}
\resizebox{85mm}{!}{\input{schematic_SFM.pdf_tex}}
\caption{ \label{fig:setup}
(Color online) Schematic of the SFM configuration. A linearly-polarized beam is sent into an atomic cloud modeled as a thick slab of length $L$ (blue online) with a non-linear susceptibility $\chi_{NL}$. The transmitted beam is retro-reflected by a mirror (M) with an adjustable displacement $D L$ beyond the end of the medium. The forward ($F$) and backward ($B$) propagating beams interfere inside the cloud. Experimental parameters: cloud of $^{87}$Rb atoms at $T=200\,\mu$K driven at a detuning of $\delta>0$ to the $F=2\to F'=3$ transition of the D$_2$-line, optical density (base $e$) in line center OD=210, effective sample size (FWHM of cloud) $L=8.5$~mm \cite{Camara2015}.
}
\end{figure}
\section{System and Model}
\label{sec_model}
Figure \ref{fig:setup} shows a schematic of our setup. A medium of length $L$ is illuminated by a laser beam leading to a forward field $F$. The transmitted light is retro-reflected by a plane mirror leading to a backward field $B$. We scale the longitudinal coordinate by the medium length $L$; hence the normalized feedback distance $D$, measured from the exit face of the medium to the mirror, corresponds to a physical distance $D\,L$. (The mirror distance $d$ used in \cite{Camara2015} is measured from the medium centre, $d=(D+1/2)L$.)
Similar to Muradyan et al \cite{Muradyan2005}, which we will refer to as MM, we consider the counter-propagating fields $F$ and $B$ to be coupled by a nonlinear susceptibility
\begin{equation}
\chi_{NL}= - \frac{6\pi}{k_0^3} n_a \frac{2\delta/\Gamma -i}{1+4\delta^2/\Gamma^2}\frac{1}{1+I/I_{s\delta}}
\label{chiNL}
\end{equation}
Here $n_a$ is the atomic density (considered constant here). $I$ is the intensity, which will be a
standing wave: $I/I_{s\delta} = |Fe^{ikz} + B e^{-ikz}|^2$. We can conveniently rewrite (\ref{chiNL}) as
\begin{equation}
\chi_{NL}= \chi_l \frac{1}{1+I/I_{s\delta}}
\label{chisat}
\end{equation}
where $\chi_l$ is the linear susceptibility (which is complex, though absorption is neglected in MM, making their system Kerr-like).
As in MM, we use a time-independent susceptibility approach to the two-level nonlinearity. This precludes consideration of growth rates or oscillatory instabilities \cite{LeBerre1991}, but leads to reasonably tractable and transparent models which allow the parameter dependences of pattern thresholds to be investigated. We include absorption, so as to allow for arbitrary atom-field detunings. We include reflection-grating effects to all orders (MM include such effects, but only at lowest order). This analysis will be applied to the calculation of thresholds for transverse instability in the full thick-medium two-level model in Section \ref{sec:transpert} and the sections that follow. Various limits and approximations of the full model will be discussed, so as to connect with earlier work. These include the Kerr limit, used for the thick-medium calculations presented in Fig. 3B of \cite{labeyrie14}. In \cite{Camara2015} preliminary two-level results were presented for two cases: quasi-Kerr (i.e.\ large detuning, neglecting absorption, but not saturation of the refractive nonlinearity) for the pattern size vs mirror displacement; and absorptive thin-slice for the threshold vs atomic detuning.
The next step is to expand the nonlinear factor in a Fourier series:
\begin{equation}
\frac{1}{1+I/I_{s\delta}}= \sigma_0 +\sigma_+ e^{2ikz} +\sigma_-e^{-2ikz} + h.o.t.
\label{Igrat}
\end{equation}
The higher-order terms do not lead to any phase-matched couplings, and so can reasonably be neglected whatever the intensity. The coefficients $\sigma_{\pm}$ evidently describe a $2k$ longitudinal modulation of the susceptibility, i.e.\ a reflection (Bragg) grating, which will scatter the forward field into the backward one and vice versa.
The field equations (M3) of \cite{Muradyan2005} can then be written as
\begin{equation}
\label{modeleqs} \left \{
\begin{array}{l}
\frac{\partial F}{\partial z} - \frac{ i}{2k}\nabla^2_\perp F=i \frac{k}{2}\chi_l(\sigma_0 F +\sigma_+ B) ,\\
\\ \frac{\partial B}{\partial z} + \frac{ i}{2k}\nabla^2_\perp B = -i \frac{k}{2}\chi_l(\sigma_- F +\sigma_0 B) \\
\end{array} \right.
\end{equation}
To calculate $\sigma_{0,\pm}$, we write the exact expansion of the saturation term (\ref{Igrat}) as
\begin{equation}
\frac{1}{1+I/I_{s\delta}}= \frac{1}{1+p+q} (1+r(e_{+} + e_{+}^{*}))^{-1}
\label{gratexp}
\end{equation}
where $|F(z)|^2 = p(z)$, $|B(z)|^2 = q(z)$ and $e_{+} = e^{2ikz} e^{i(\theta_F-\theta_B)}$, with
$\theta_{F,B} = \text{arg}(F,B)$.
We have introduced a coupling parameter $r=h(pq)^\frac{1}{2}/(1+p+q)$, where the ``grating parameter" $h$ \cite{firth90b} allows consistent consideration of the cases of no reflection grating ($h=0$), and of a full grating ($h=1$). In the former case $\sigma_{\pm}=0$, which would correspond to the standing-wave modulation of the susceptibility being washed out by drift or diffusion. Partial wash-out could be accommodated by intermediate values of $h$, but would need some associated physical justification. The MM model includes the full grating, so corresponds to $h=1$.
The series expansion of $(1+r(e_{+} + e_{+}^{*}))^{-1}$ is always convergent, because $r< 1/2$. Even terms contribute to $\sigma_0$, odd terms to $\sigma_{\pm}$. Using the binomial theorem, we find
\begin{equation}
\label{sigmas} \left \{
\begin{array}{l}
(1+p+q)\sigma_0 = 1 +2r^2 +6r^4 + 20r^6 + ...\\
\\ (1+p+q)\sigma_+ = - e^{i(\theta_F-\theta_B)}(r + 3r^3 + 10r^5 + ...) \\
\end{array} \right.
\end{equation}
with $\sigma_- = \sigma_+^{*}$.
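These even and odd series are central-binomial series, summing to $(1-4r^2)^{-1/2}$ and $\left((1-4r^2)^{-1/2}-1\right)/(2r)$ respectively. As a numerical sanity check, the sketch below (Python, with arbitrary illustrative amplitudes and $h=1$) compares these closed forms with the Fourier coefficients of the saturation factor computed directly on a grid:

```python
import numpy as np

# Check of the closed-form sums of Eq. (sigmas):
#   (1+p+q)*sigma_0 = (1-4r^2)^(-1/2)
#   (1+p+q)*sigma_+ = -exp(i(th_F-th_B)) * ((1-4r^2)^(-1/2)-1)/(2r)
# F and B are hypothetical illustrative amplitudes, not experimental values.
F = 0.8 * np.exp(0.3j)
B = 0.5 * np.exp(-1.1j)
p, q = abs(F)**2, abs(B)**2
s = p + q
r = np.sqrt(p * q) / (1 + s)          # coupling parameter, full grating h = 1

# Fourier coefficients of 1/(1+I) over one grating period (period pi in kz):
kz = np.linspace(0.0, np.pi, 4096, endpoint=False)
I = abs(F * np.exp(1j * kz) + B * np.exp(-1j * kz))**2
sigma0 = np.mean(1.0 / (1 + I))                 # DC component
sigmap = np.mean(np.exp(-2j * kz) / (1 + I))    # e^{2ikz} component

w = 1.0 / np.sqrt(1 - 4 * r**2)
dphi = np.angle(F) - np.angle(B)
sigma0_exact = w / (1 + s)
sigmap_exact = -np.exp(1j * dphi) * (w - 1) / (2 * r * (1 + s))
print(abs(sigma0 - sigma0_exact), abs(sigmap - sigmap_exact))
```

Both differences are at machine-precision level; by orthogonality, the neglected higher-order harmonics do not contaminate the extraction of $\sigma_{0,\pm}$.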
The series in (\ref{sigmas}) can be summed, leading to a set of field evolution equations:
\begin{equation}
\label{exacteqs} \left \{
\begin{array}{l}
\frac{\partial F}{\partial z} - \frac{ i}{2k}\nabla^2_\perp F =i \frac{k}{2}\chi_lF\left(\frac{1-\left(1-4r^2\right)^{-1/2}}{2hp}+\frac{\left(1-4r^2\right)^{-1/2}}{1+p+q}\right) ,\\
\\ \frac{\partial B}{\partial z} + \frac{ i}{2k}\nabla^2_\perp B = -i \frac{k}{2}\chi_l B\left(\frac{1-\left(1-4r^2\right)^{-1/2}}{2hq}+\frac{\left(1-4r^2\right)^{-1/2}}{1+p+q}\right) \\
\end{array} \right.
\end{equation}
Several papers, going back to the 1970s, have obtained analytic solutions (in the plane-wave limit) to (\ref{exacteqs}). For our purposes, the papers of van Wonderen et al \cite{vanW1989,vanWonderen91} (who were addressing optical bistability in a Fabry-Perot cavity) are most directly relevant, and underpin the analytic zero-order (no diffraction) solution obtained in the next section.
For finite $h$, there is explicit nonreciprocity, since the susceptibilities for $F$ and $B$ are different, because of the susceptibility grating. Quantitatively, the nonreciprocity is entirely due to the denominator, respectively $2hp$ and $2hq$, of the first term in the brackets on the right of (\ref{exacteqs}), the other terms all being symmetric in $p$ and $q$. In the limit of no grating, $h,r \to 0$, both brackets reduce to the expected saturation denominator $(1+s)$, where the total intensity $s=p+q$. Even with a susceptibility grating present, the amplitudes $F$ and $B$ are slowly varying in $z$, allowing the propagation in the medium to be approximated by comparatively few longitudinal spatial steps.
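This no-grating limit is easy to verify numerically. The sketch below (Python; the intensities are arbitrary illustrative values) evaluates the bracket multiplying $F$ in (\ref{exacteqs}), with $r=h\sqrt{pq}/(1+p+q)$, for decreasing $h$:

```python
import numpy as np

# Check of the no-grating limit quoted in the text: as h -> 0 the bracket
#   (1-(1-4r^2)^(-1/2))/(2hp) + (1-4r^2)^(-1/2)/(1+p+q),
# with r = h*sqrt(pq)/(1+p+q), reduces to the plain saturation factor 1/(1+s).
def bracket(p, q, h):
    s = p + q
    r = h * np.sqrt(p * q) / (1 + s)
    w = (1 - 4 * r**2) ** -0.5
    return (1 - w) / (2 * h * p) + w / (1 + s)

p, q = 0.7, 0.4                       # illustrative intensities
for h in (1.0, 1e-3, 1e-6):
    print(h, bracket(p, q, h), 1 / (1 + p + q))
```

The first term vanishes linearly in $h$, since $1-(1-4r^2)^{-1/2}\approx -2r^2 \propto h^2$, so the bracket approaches $1/(1+s)$ as $h\to 0$.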
In all the cases discussed above, we can write the two propagation equations in the form
%
\begin{equation}
\label{transeqs} \left \{
\begin{array}{l}
\frac{\partial F}{\partial z} - \frac{ iL}{2k}\nabla^2_\perp F =-\frac{\alpha_lL}{2}(1+i\Delta) A(p,q)F ,\\
\\ \frac{\partial B}{\partial z} + \frac{ iL}{2k}\nabla^2_\perp B = \frac{\alpha_lL}{2}(1+i\Delta) A(q,p)B \\
\end{array} \right.
\end{equation}
where we have scaled $z$ to the thickness $L$ of the medium, $\alpha_l$ is the linear absorption coefficient, and $\Delta (= 2\delta/\Gamma)$ is the scaled detuning. For a two-level system, the linear absorption coefficient can be written as $\alpha_l = \alpha_0/(1+\Delta^2)$, where $\alpha_0$ is the on-resonance absorption coefficient, and $\alpha_0 L$ is the on-resonance optical density (OD), which is an important figure of merit for a cold-atom cloud (OD=210 for the cloud in \cite{Camara2015}, see caption to Fig. \ref{fig:setup}).
The function $A(p,q)$ describes the nonlinearity of the atomic susceptibility, as modeled by (\ref{exacteqs}), by some approximation thereto, or some other model, including other optical systems with phase-independent interaction of counterpropagating beams \cite{Honda1996}.
By definition, $A(0,0) =1$, but $A(p,q) \ne A(q,p) $ in general, because of non-reciprocity due to standing-wave effects. The cubic model ($A(p,q) = 1-p -(1+h)q$) is the simplest example, explicitly non-reciprocal if $h \ne 0$.
\begin{figure}
\centering%
\includegraphics[width=\columnwidth]{fig2.pdf}
\caption{(Color online)
Dependence of zero-order intensities on the longitudinal coordinate $z$ scaled to the medium length~$L$, in a two-level medium with on-resonance optical density $OD=210$ (see Fig. \ref{fig:setup}): forward $p(z)$ and backward $q(z)$ for several cases. Lowest curves are for $\delta/ \Gamma = 5$, with output $p(1) =0.3$ and an $R=1$ mirror so that $q(1)=0.3$: upper and lower curves are for $h=0$, i.e. no reflection grating, inner curves for
$h=1$. Uppermost curves are for larger detuning $\delta/ \Gamma = 10$ and $h=1$, to illustrate a case where absorption effects might be considered negligible, leading to a quasi-Kerr approximation to the two-level response.
}
\label{fig:zeroorder}
\end{figure}
\section{Zero-order equations and solutions}
To find the pattern-formation thresholds, we first drop diffraction, and solve the plane-wave, zero-order problem in which $F,B$ depend on $z$ alone. From (\ref{transeqs}) it follows that the plane-wave intensities $p(z), q(z)$ obey the real equations:
\begin{equation}
\label{abzero} \left \{
\begin{array}{l}
\frac{dp}{d z} =-\alpha_l L A(p,q)p ,\\
\\ \frac{dq}{d z} =\alpha_l L A(q,p)q \\
\\
\end{array} \right.
\end{equation}
leading to the expected exponential absorption of the intensities in the linear limit.
We define the input intensity $p(0)=p_0$ and transmitted intensity $p(1)=p_1$, and similarly $q(0)=q_0$, $q(1)=q_1$. The boundary conditions of the SFM system are $q_1 = R p_1$, where $R$ is the mirror reflection coefficient.
We now solve (\ref{abzero}) for various two-level models.
For $h=0$, $A=1/(1+s)=1/(1+p+q)$ is symmetric in its arguments, and it follows that the product of the counter-propagating intensities (and indeed of the fields, $FB$) is independent of $z$, simplifying the analysis. We set $p(z)q(z) = K$, where $K$ is constant, and thus $K = p_1q_1 = Rp_1^2 $ for a feedback mirror of reflectivity $R$. It follows that the backward intensity $q(z)$ is given by $K/p(z)$, enabling the first equation of (\ref{abzero}) to be written in terms of $p(z)$ alone. It can then be integrated analytically, giving
\begin{equation}
\label{p_nograt}
\ln(p/p_0)+p-K/p - p_0 + K/p_0 +\alpha_lLz = 0 ,
\end{equation}
and hence, for the transmitted power $p_1$ (using the explicit SFM value of $K$):
\begin{equation}
\label{p_1nograt}
\ln(p_1/p_0)+(1-R)p_1=p_0-Rp_1^2/p_0 -\alpha_lL .
\end{equation}
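For given input $p_0$, (\ref{p_1nograt}) is an implicit equation for $p_1$, easily solved by root-finding. The sketch below (Python with SciPy; illustrative parameter values, not fitted to experiment) does so, and cross-checks the result by integrating (\ref{abzero}) with $A=1/(1+p+q)$ backward from the known output:

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

# Sketch: solve the no-grating (h=0) transmission relation (p_1nograt)
#   ln(p1/p0) + (1-R)*p1 = p0 - R*p1**2/p0 - alpha_l*L
# for p1, then cross-check by integrating (abzero) backward from z=1.
aL, R, p0 = 2.0, 1.0, 5.0             # illustrative alpha_l*L, mirror, input

def resid(p1):
    return np.log(p1 / p0) + (1 - R) * p1 - p0 + R * p1**2 / p0 + aL

p1 = brentq(resid, 1e-9, p0)          # resid changes sign on (0, p0)

# Backward integration from the known output (p1, q1 = R*p1):
def rhs(z, y):
    p, q = y
    A = 1.0 / (1 + p + q)             # h = 0: symmetric saturation factor
    return [-aL * A * p, aL * A * q]

sol = solve_ivp(rhs, [1.0, 0.0], [p1, R * p1], rtol=1e-10, atol=1e-12)
p0_check = sol.y[0, -1]
print(p1, p0_check)                   # p0_check should reproduce p0
```

The backward integration also conserves the product $p(z)q(z)=K=Rp_1^2$, which serves as a further consistency check.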
The all-grating system given by (\ref{exacteqs}) also possesses a propagation constant for $h=1$, this time given by $K = W(z) - s(z)$, where $ W(z) = (1+2s+\xi^2)^{\frac{1}{2}}$, and $\xi(z) = p(z) - q(z)$. Essentially the same conservation law was noted by van Wonderen et al in the context of optical bistability in a Fabry-Perot resonator \cite{vanW1989}, for which the propagation equations are identical to the present case, though the boundary conditions are different.
In terms of $W, s,\xi$ the all-grating function $A_{all} (p,q)$ becomes
$A_{all} = (1+(\xi-1)/W)/(s+\xi)$, with its transpose $A_{all} (q,p)$ obtained by $\xi \rightarrow -\xi$. Recasting equations (\ref{abzero}), the propagation equations for $s$ and $\xi$ take a fairly simple form:
\begin{equation}
\label{sandxi} \left \{
\begin{array}{l}
\frac{ds}{d z} = - \alpha_l L\xi/W ,\\
\\ \frac{d\xi}{d z} = - \alpha_l L(1-1/W) \\
\\
\end{array} \right.
\end{equation}
from which one easily deduces $dW/dz =ds/dz$, and thus the constancy of $K = W(z) -s(z)$. One can then obtain an integrable differential equation in just one variable. For example, by using the definitions of $W$ and $K$ to express $W$ in terms of $K$ and $\xi$, the second of equations (\ref{sandxi}) is easily integrated to yield:
\begin{equation}
\label{xi_all}
\xi + \ln(\xi + (\xi^2+2-2K)^{\frac{1}{2}}) +\alpha_l L z = const.
\end{equation}
For the important case $R=1$, we have $s_1 = 2p_1$, $\xi_1 = 0$, hence $W_1 = (1+4p_1)^\frac{1}{2}$ and thus $K = (1+4p_1)^\frac{1}{2} -2p_1$. Using these values in (\ref{xi_all}) yields an implicit expression for $\xi_0$ in terms of $K$ (and thus $p_1$):
\begin{equation}
\label{xi_0}
\xi_0 + \ln(\xi_0 + (\xi_0^2+2-2K)^{\frac{1}{2}}) -{\frac{1}{2}} \ln(2-2K) = \alpha_l L .
\end{equation}
Given $\xi_0$, it is straightforward to calculate $W_0$ and $s_0$, and thus the input intensity $p_0$ and
the backward output intensity $q_0$, all in terms of the given transmitted intensity $p_1$, thus completing the solution of the plane-wave problem for the all-gratings model.
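This recipe is easily implemented numerically. The sketch below (Python with SciPy) takes $OD=210$ and $\delta/\Gamma=5$, so $\alpha_l L \approx 2.1$, roughly as in Fig. \ref{fig:zeroorder}, with $p_1=0.3$; it solves (\ref{xi_0}) for $\xi_0$, reconstructs $p_0$ and $q_0$, and then verifies the solution by integrating (\ref{sandxi}) forward from $z=0$:

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

# All-grating (h=1), R=1 plane-wave solution: solve (xi_0) for xi_0,
# then verify by forward integration of (sandxi). Parameters loosely
# follow Fig. 2: OD = 210, delta/Gamma = 5, p1 = 0.3.
OD, Delta, p1 = 210.0, 10.0, 0.3      # Delta = 2*delta/Gamma
aL = OD / (1 + Delta**2)              # alpha_l * L

K = np.sqrt(1 + 4 * p1) - 2 * p1      # propagation constant K = W - s
c = 2 - 2 * K

def resid(xi0):
    return xi0 + np.log(xi0 + np.sqrt(xi0**2 + c)) - 0.5 * np.log(c) - aL

xi0 = brentq(resid, 0.0, 50.0)        # resid(0) = -aL < 0, so root exists
# From W^2 = 1+2s+xi^2 and s = W-K: W = 1 + sqrt(xi^2 + 2 - 2K)
W0 = 1 + np.sqrt(xi0**2 + c)
s0 = W0 - K
p0, q0 = (s0 + xi0) / 2, (s0 - xi0) / 2   # input and backward output

def rhs(z, y):
    s, xi = y
    W = np.sqrt(1 + 2 * s + xi**2)
    return [-aL * xi / W, -aL * (1 - 1 / W)]

sol = solve_ivp(rhs, [0.0, 1.0], [s0, xi0], rtol=1e-10, atol=1e-12)
print(p0, sol.y[0, -1], sol.y[1, -1])     # expect s(1) = 2*p1, xi(1) = 0
```

The forward integration recovers $\xi(1)=0$ and $s(1)=2p_1$ to integration accuracy, confirming the implicit solution.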
For the MM model $A(p,q)=(1+p)/(1+s)^2$. We can again find a propagation constant, in this case given by $K=pq/(1+s)$, again leading to an integrable first-order equation in $p(z)$ alone. It turns out that the MM transmission shows ``bistability", i.e. the output $p_1$ is not a single-valued function of the input $p_0$, if $\alpha_l L$ is big enough.
This is surprising and counterintuitive, and turns out to be a flaw in the model: including more terms in the series expansion (\ref{sigmas}) eventually makes $p_1$ single-valued. In particular, the all-gratings formula (\ref{xi_all}) and its $R=1$ sub-case (\ref{xi_0}) give single-valued transmission characteristics. We therefore drop further detailed consideration of the MM model.
Figure \ref{fig:zeroorder} illustrates the $z$-dependence of the zero-order intensities in a two-level medium for several cases, with $OD=210$ as in the experiment illustrated in Fig. \ref{fig:setup}. The lowest group of curves are for moderately high absorption, $\alpha_l L \sim 2$, at $\delta/ \Gamma = 5$, and chosen to illustrate the two cases $h=0$ described by (\ref{p_nograt}) and $h=1$, where the $z$-dependence may be deduced from (\ref{xi_all}). To assist comparison, we assume the same output $p_1 =0.3$ and a perfect mirror so that $q_1=0.3$ also. The differences are fairly slight, the no-grating case having a slightly higher effective absorption for both forward and backward intensities. As we will see, there is a much more profound difference in the instability thresholds. We also display full-grating curves for larger detuning $\delta/ \Gamma = 10$, to illustrate a case where absorption effects might be considered negligible, leading to a quasi-Kerr approximation to the two-level response, which we will analyze below.
\section{Transverse perturbations}
\label{sec:transpert}
We now assume that a solution has been found for the plane wave case: $F=F_0(z)$, $B=B_0(z)$, obeying appropriate longitudinal boundary conditions. This solution may be numerical, or a solution to some special-case or approximate version of (\ref{transeqs}). We now turn our attention to the stability of such a plane wave solution against transverse perturbations.
We consider perturbations of the form $F=F_0(1+f\cos(Qx))$, $B=B_0(1+b\cos(Qx))$, where ($f,b$) are complex ($z$-dependent) amplitudes of the transverse mode function $\cos(Qx)$, chosen without loss of generality to respect the transverse symmetries of (\ref{transeqs}) and the mirror boundary conditions. The transverse perturbation has wave vector $Q$, corresponding to a diffraction angle $Q/k$ in the far field. We define a diffraction parameter $\theta=Q^2L/2k$, physically the phase slippage between $f$ and $F_0$ in traversing the cloud. Because $Q$ is experimentally a free parameter, so is $\theta$, and we have to calculate threshold intensities as a function of $\theta$, anticipating that the $Q$ corresponding to the lowest threshold will be dominant in any experiment, especially a pulsed experiment.
We assume that the fields $(f,b)$ are time-independent, adequate to calculate the threshold of a zero-frequency pattern-forming (Turing) instability at wavevector $Q$. To find Hopf instabilities, or to properly account for dynamical behavior of the field-atom system, we would have to start from the Maxwell-Bloch equations, rather than our susceptibility model. It is worth mentioning that van Wonderen and Suttorp, in a later paper on dispersive optical bistability \cite{vanWonderen91}, perform a perturbation analysis of the full Maxwell-Bloch equations with all grating orders included (though without transverse effects). The resulting model is very involved, and beyond our present scope. Meantime, we are content to address the Turing pattern threshold problem.
Within this constraint, we can say nothing about the nature and symmetry of the pattern which actually forms once threshold is exceeded. However, we know that hexagonal patterns are generic in systems of the type under consideration, and indeed are the dominant pattern observed in the experiments reported in \cite{Camara2015}. In a sense, therefore, threshold calculation is the most important step towards establishment of a theoretical underpinning for the observations of Camara et al \cite{Camara2015} and related experiments.
Assuming $|f|,|b| \ll 1$, we thus obtain the linearized propagation equations:
%
\begin{equation}
\label{thickpert} \left \{
\begin{array}{l}
\frac{df}{dz} = - i\theta f -\alpha_lL(1+i\Delta)(A_{11}f'+A_{12}b') ,\\
\frac{db}{dz} = i\theta b + \alpha_lL (1+i\Delta)(A_{21}f'+A_{22}b') \\
\end{array} \right.
\end{equation}
Here $f = f' +i f''$, $b=b' + ib''$, and the real quantities $A_{ij}$ are defined as $A_{11}=p\frac{\partial A(p,q)}{\partial p}$,
$A_{12}=q\frac{\partial A(p,q)}{\partial q}$,
$A_{21}=p\frac{\partial A(q,p)}{\partial p}$,
$A_{22}=q\frac{\partial A(q,p)}{\partial q}$, and form a $2 \times 2$ matrix, $\hat{A}$.
In the presence of absorption, the elements of $\hat{A}$ are $z$-dependent, for example obeying the zero-order solutions derived above for various models, and usually no analytic solution for $f(z),b(z)$ is available, so we must resort to numerics. Below, we will consider both numerical investigations of the full (absorptive) model and simpler models, including the quasi-Kerr case, in which the detuning is large enough to neglect the absorption, enabling analytic solution of the perturbation equations.
We have to solve (\ref{thickpert}) subject to appropriate boundary conditions. As there is no input field perturbation, we set $f_0 = f(0) = 0$. The counter-perturbation field at $z=0$, $b_0 =b(0)$, is physically determined by its value at $z=1$, but the system (\ref{thickpert}) is mathematically well-defined and solvable for any given $b_0$. Given initial conditions $(f,b)_{z=0}=(0, b_0)$, numerical integration of (\ref{thickpert}), using the known functions $p(z) = |F_0|^2$ and $q(z)=|B_0|^2$, generates a pair of complex output perturbation fields at $z=1$, namely ($f_1,b_1$). For an acceptable solution, these fields must obey appropriate physical boundary conditions at $z=1$. For the SFM system these are given by $f=b$ (note this is independent of mirror reflectivity $R$, because of the definition of ($f,b$) as relative perturbations).
\begin{figure}
\includegraphics[scale=0.9,width=\columnwidth]{fig3.pdf}
\caption{ (Color online) Two-level instability domain ($\delta > 0$) reported in \cite{Camara2015}.
Diffracted power $P_d$ is measured as a function of $\delta > 0$ (note that $\Delta=2\delta/\Gamma$) and input intensity $I$. Note the logarithmic horizontal scale.
The dotted loops indicate maximal instability domains calculated in the thin-medium approximation as described in \cite{Camara2015}: (full circles) domain calculated from (\ref{xi_0}), i.e. with all reflection gratings included ($h=1$); (open circles) domain calculated from (\ref{p_1nograt}), i.e. with no reflection gratings ($h=0$). Both dotted traces are rescaled to absolute values of intensity and detuning.}
\label{CamaraTuning}
\end{figure}
Turning now to the solution of (\ref{thickpert}), the fact that $f$ has to grow through the medium makes it useful to define an output ``gain" $g= f_1/b_1$. Since $f=b$ on the mirror of an SFM system, we immediately conclude that $|g| = 1$ is a necessary condition for SFM instability.
We can expect that $g \sim 0$ at low intensities, when the nonlinearity is negligible. As the intensity is increased, $f$ and $b$ begin to couple through the interaction matrix $\hat{A}$, and we can expect the gain to increase, leading to instability if the parameters permit. As mentioned, our present approach cannot describe behavior above threshold, but if the nonlinearity saturates, as is true for a two-level system, $|g|$ may begin to decrease for large enough input intensity. Then the system may re-stabilize, and the pattern will disappear. This scenario is illustrated in Figure~\ref{CamaraTuning}, which compares the threshold domains for two two-level absorptive models with experimental data \cite{Camara2015} on the detuning behavior of the diffracted power observed under pattern formation conditions in a cold Rb cloud with single feedback mirror. There is a minimum and maximum detuning for the observation of the SFM instability, while between these limits there is both a lower and an upper threshold power, with patterns observed only at intermediate powers. The computed threshold loops in Figure~\ref{CamaraTuning} correspond to approximate ``thin-medium" models with and without short-period (reflection) gratings. The loop for the ``with" case is in much better agreement with the experimental results than that for a similar model without such gratings, for which the loop is much smaller, and does not span the experimental domain. Note that the presence of reflection gratings has a much larger effect on the instability thresholds (about a factor of two) than on the zero-order intensities, where the effect is modest (Fig. \ref{fig:zeroorder}).
\section{Gain circle}
The transverse gain function $g=f_1/b_1$ is complex, and its phase as well as its magnitude must satisfy the boundary conditions at $z=1$, which depend on the mirror displacement. If the mirror displacement is $DL$ (Fig.~\ref{fig:setup}), then the boundary condition is $b_1 = e^{-2i\psi_D} f_1$, where $\psi_D= D \theta $. (Note that $D$ can be negative if the feedback optics involves a telescope.) Thus the complete boundary condition is that $g = e^{2i\psi_D}$, i.e. $g$ must lie at a point, the threshold point, on the unit circle in the complex plane.
Before looking at specific examples, there are some general considerations which give insight into methodology, but also into the physics. Because (\ref{thickpert}) is a linear system, its solutions obey the principle of superposition. Hence, if input condition $(f_0,b_0) = (0,1)$ generates outputs $(f_1,b_1) = (f_r,b_r)$ and input condition $(f_0,b_0) = (0,i)$ generates outputs $(f_1,b_1) = (f_i,b_i)$, then an arbitrary input condition $(f_0,b_0) = (0,u+iv)$, with $(u,v)$ real, generates outputs $(f_1,b_1) = (uf_r+vf_i,ub_r+vb_i)$. The gain is then given by $g=g(u,v)=(uf_r+vf_i)/(ub_r+vb_i)$. Thus, for any given physical parameters, one need only obtain the pairs $(f_r,b_r)$ and $(f_i,b_i)$, and then testing for the SFM instability is a matter of algebra.
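Note that, because the right-hand sides of (\ref{thickpert}) involve the real parts $f'$, $b'$, the system is linear over real (not complex) coefficients, which is why both basis solutions are needed. The toy sketch below (Python with SciPy) illustrates the construction: purely for simplicity, the zero-order intensities are frozen at constant values and the cubic model for $A$ is used, and all numbers are illustrative rather than experimental.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Toy illustration of the superposition construction for (thickpert):
# integrate twice, with b(0) = 1 and b(0) = i, then check that an
# arbitrary input b(0) = u + i*v gives the superposed output.
# Zero-order intensities are frozen (a simplifying assumption) and the
# cubic model A(p,q) = 1 - p - (1+h)q supplies the matrix elements.
theta, aL, Delta, h = 2.0, 0.5, 3.0, 1.0
p, q = 0.4, 0.3
A11, A12, A21, A22 = -p, -(1 + h) * q, -(1 + h) * p, -q

def rhs(z, y):
    fr, fi, br, bi = y                # real and imaginary parts of f, b
    df = -1j * theta * (fr + 1j * fi) - aL * (1 + 1j * Delta) * (A11 * fr + A12 * br)
    db = 1j * theta * (br + 1j * bi) + aL * (1 + 1j * Delta) * (A21 * fr + A22 * br)
    return [df.real, df.imag, db.real, db.imag]

def propagate(b0):
    sol = solve_ivp(rhs, [0.0, 1.0], [0.0, 0.0, b0.real, b0.imag],
                    rtol=1e-10, atol=1e-12)
    fr, fi, br, bi = sol.y[:, -1]
    return fr + 1j * fi, br + 1j * bi

f_r, b_r = propagate(1.0 + 0j)        # basis solution for b(0) = 1
f_i, b_i = propagate(1j)              # basis solution for b(0) = i
u, v = 0.7, -1.2                      # arbitrary real input combination
f1, b1 = propagate(u + 1j * v)
print(f1, u * f_r + v * f_i)          # should agree by real-linearity
```

The direct integration and the superposed basis solutions agree to integration accuracy, as claimed.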
In looking for a solution, a graphical approach is convenient and instructive. Some algebra shows that the points of the gain function $g(u,v)$ always belong to a circle. This ``gain circle" is given by a simple analytic formula in terms of $g_r =g(1,0)=f_r/b_r$ and $g_i =g(0,1)=f_i/b_i$:
\begin{equation}
\label{gaincircle}
g(\phi)=g_i +(g_r - g_i)(1-e^{2i\phi})/( 1-e^{2i\phi_0})
\end{equation}
where $\phi$ is a free parameter which traces out the gain circle, while $\phi_0$ is the phase of $b_i/b_r$.
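Equivalently, (\ref{gaincircle}) can be written $g(\phi) = g_c - c\, e^{2i\phi}$ with $c = (g_r-g_i)/(1-e^{2i\phi_0})$, i.e.\ a circle of centre $g_c = g_i + c$ and radius $|c|$. The sketch below (Python; the four complex outputs are arbitrary stand-ins for results of integrating (\ref{thickpert})) checks numerically that the gains $g(u,v)$ indeed lie on this circle:

```python
import numpy as np

# Check that the gains g(u,v) = (u*f_r + v*f_i)/(u*b_r + v*b_i), for real
# (u, v), all lie on the circle center - c*exp(2i*phi) implied by
# Eq. (gaincircle). The four outputs below are illustrative stand-ins.
f_r, b_r = 0.8 + 0.3j, 1.1 - 0.4j
f_i, b_i = -0.2 + 0.9j, 0.6 + 0.7j

g_r, g_i = f_r / b_r, f_i / b_i
phi0 = np.angle(b_i / b_r)
c = (g_r - g_i) / (1 - np.exp(2j * phi0))
center, radius = g_i + c, abs(c)      # gain circle in center-radius form

rng = np.random.default_rng(1)
u, v = rng.normal(size=200), rng.normal(size=200)
g = (u * f_r + v * f_i) / (u * b_r + v * b_i)
dev = np.max(np.abs(np.abs(g - center) - radius))
print(center, radius, dev)            # dev is at machine-precision level
```

By construction $g_r$ and $g_i$ lie on the circle ($g(\phi_0)=g_r$, $g(0)=g_i$); the numerical check confirms that every real combination $(u,v)$ does too.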
For finite $\theta$ the phase of the threshold point, the feedback phase, will vary as $D$ is varied, causing the threshold point to trace out all or part of the unit circle. Hence the intersections, if any, of the gain circle with the unit circle define instability thresholds for the mirror displacement(s) $D$ corresponding to the intersection(s).
\begin{figure}
\centering%
\includegraphics[width=\columnwidth]{fig4.pdf}
\caption{ (Color online) Illustration of transverse gain circles (see text) calculated from (\ref{thickpert}) for different input intensities. The parameters here are: $OD=210$, $\delta/\Gamma=2$, $\theta=2$. The unit circle centred on the origin is the locus of the feedback phase as mirror displacement $D$ is varied. Lying on it, the dot (red online) is the feedback phase for the particular case $D= -1.3$. The displaced circles are the loci of transverse gain for three cases: (a) the smallest gain circle (red online) lies wholly inside the unit circle, and so the system is always below threshold for this case (scaled input intensity $p_0 =7.90564$); (b) the middle gain circle (green online) touches the unit circle, and so the system reaches threshold for one value of $D$ (scaled input intensity $p_0 =8.1266$); (c) the largest gain circle (blue online) intersects the unit circle at two well-spaced points, and so the system is above threshold for a wide range of $D$ values, including $D= -1.3$ (scaled input intensity $p_0 =8.29754$). Points on the arc of the touching circle corresponding to $g_r$ (blue online) and $g_i$ (brown online) are also shown. Its center is also marked with a (green online) dot.
}
\label{fig:circles}
\end{figure}
Figure \ref{fig:circles} illustrates typical cases for system (\ref{thickpert}).
As expected, the gain circle lies wholly within the unit circle when the input intensity is low, so that there are no intersections, and thus no instability. At higher intensity, the gain circle intersects the unit circle at two points, and there is instability for all mirror displacements $D$ for which the feedback phase lies on the arc between the two intersections for which the gain circle lies outside the unit circle. Because the feedback phase $ e^{2i\psi_D} $ is periodic in $D$, such thresholds are periodic in mirror displacement, with a period which depends on $Q$ through $\theta$. This is an example of the Talbot effect,
whereby a transversely-periodic light field self-reconstructs under propagation through multiples of the Talbot period, $z_T=4\pi k/Q^2$ \cite{talbot36,ciaramella93}. Such $D$-periodicity of instability thresholds is observed experimentally, and will be discussed in more detail below.
An interesting and important intermediate case illustrated in Figure \ref{fig:circles} occurs when the gain circle touches the unit circle. This corresponds to the lowest possible threshold for any $D$ at these parameters (modulo Talbot recurrences). This minimum threshold will be achieved for some value of $D$ if it is varied over a Talbot period.
The implication is that the locus (or loci) in the ($\theta,p_0$) plane of tangencies between the gain circle and the unit circle forms an envelope curve (or curves) bounding the set of threshold curves in the ($\theta,p_0$) plane corresponding to any set of $D$ values. Given the analytic formula (\ref{gaincircle}) for the gain circle, it is straightforward to find $(\theta,p_0)$ pairs such that the gain circle touches the unit circle, thus tracing out envelope curves in the $(\theta,p_0)$ plane. It is similarly straightforward to find $p_0$ and $\theta$ such that the gain circle intersects the unit circle at the feedback phase corresponding to any given mirror displacement $D$, and thus to trace out threshold curves for that $D$. Examples, and implications, of envelope and threshold curves for various models will be presented below.
\section{ Two-level System Envelopes and Thresholds}
As a first detailed example, we consider the two level system to be fairly close to resonance, with blue detuning $\delta/\Gamma=1.5$. For optical density OD=210 (Fig.~\ref{CamaraTuning}) this corresponds to
$\alpha_l L= 21$, i.e. the linear absorption is very high. Such conditions have not been modeled before, except in thin-medium or no-grating approximations. Figure \ref{EnvD0del1p5} shows the envelope curve for this case, together with the threshold for the mirror displacement $D=0$, calculated using the gain circle technique. As might be expected, the minimum threshold is rather high, $p_0 \sim 17$, which means that substantial saturation is required - the output intensity $p_1$ is of order unity in the low-threshold region. There is also an upper threshold: essentially, the bleaching of the absorption destroys the nonlinearity. Here $p_1$ is of the same order as $p_0$. As predicted, the threshold curve lies inside the envelope curve, touching it at closest approach.
Whereas the $D=0$ threshold curve avoids $\theta=0$, which is typical behavior for SFM models, the envelope seems to have finite intercepts at $\theta=0$. To interpret this, we note that the feedback phase $\theta D$ tends to zero as $\theta \to 0$ for any finite $D$. Thus the corresponding threshold point gets trapped close to the positive real axis, away from the envelope-defining contact between the gain circle and the unit circle, which will generally occur at a finite phase angle. If we also allow $D$ to increase without limit, however, finite feedback phase, and hence finite thresholds, can be sustained as $\theta \to 0$. Now the ``thin-medium" approximation, in which the diffraction within the medium is considered negligible compared to that in the feedback loop, implies that $D \sim d/L$ diverges. Thus we identify the intercept of the envelope with the $\theta$ axis as exactly the thin-medium limit. Indeed, this is confirmed for our case. The intercepts of the envelope found using the gain circle technique coincide exactly with those we calculated previously by direct use of the thin-medium approach, the results of which were presented in \cite{Camara2015}. We will return to this issue below, when we consider other models.
Another question arising from the finite intercept of the envelope curve is how to interpret its continuation to negative $\theta$, which presents no numerical difficulties (for diffractively thin media negative feedback distances were first considered in \cite{ciaramella93}). If we look at the structure of (\ref{thickpert}), we observe that simultaneously changing the sign of $\theta$ and $\Delta$ has the effect of transforming the equations into their complex conjugates. The boundary condition is also conjugated. Thus we can interpret the continuation of the envelope curve(s) to negative $\theta$ as corresponding to the opposite sign of detuning. We will routinely take advantage of this symmetry to present results for both signs of detuning in a single diagram.
An important corollary is that SFM thresholds are equal for both signs of detuning in the thin-medium limit for all models described by (\ref{thickpert}). In contrast, the finite slope of the envelope curve in
Fig. \ref{EnvD0del1p5} at its intercepts with $\theta =0$ implies that there is no red-blue symmetry when diffraction in the medium is taken into account.
\begin{figure}
\includegraphics[scale=0.9,width=\columnwidth]{fig5}
\caption{ (Color online) Threshold and envelope curves calculated from (\ref{thickpert}) for a two level system with all gratings included ($h=1$) with $R=1$ feedback mirror. Scaled input intensity $p_0$ is plotted against diffraction parameter $\theta=Q^2L/2k$. Outer (blue online) curve is the envelope curve, the limiting threshold for any mirror displacement: inner (orange online) is the threshold curve for mirror displacement $D=0$, which, close to its maximum, touches the envelope curve. It also touches the envelope at low values of $p_0$, in fact almost coinciding with the envelope curve over a wide range of $\theta$. The envelope curve has finite intercepts with $\theta=0$ axis (see text for discussion).
Other parameters: $OD=210$, $\delta/\Gamma=1.5$. }
\label{EnvD0del1p5}
\end{figure}
In Fig. \ref{EnvDnegdel1p5} we use this tuning-diffraction correspondence to extend the envelope, and also to display threshold curves for mirror displacement $D=-1.3$, which corresponds to the experimental results of Fig. \ref{CamaraTuning}. The extended envelope displays a huge red-blue tuning asymmetry in the upper threshold, and a smaller one in the lower threshold, for which blue tuning gives the lowest thresholds, in accord with experimental experience. The threshold curves for fixed $D=-1.3$ are very different from that for $D=0$ in Fig. \ref{EnvD0del1p5}, being a discrete set of closed loops, which each touch the envelope twice, close to their upper and lower extrema.
\begin{figure}
\includegraphics[scale=0.9,width=\columnwidth]{fig6}
\caption{ (Color online) Threshold and envelope curves calculated from (\ref{thickpert}) for the same conditions as Fig. \ref{EnvD0del1p5}, except that the feedback mirror displacement is $D= -1.3$, which corresponds to the experimental results of Fig. \ref{CamaraTuning}. Scaled input intensity $p_0$ is plotted (here on a log scale, for clarity) against diffraction parameter $\theta=Q^2L/2k$, which is continued to negative $\theta$ (see text) so as to present results for red, as well as blue, atomic tuning. The envelope curve, the continuation to negative $\theta$ of that in Fig. \ref{EnvD0del1p5}, shows a large red-blue tuning asymmetry. Inside the envelope is a set of discrete closed threshold loops for $D= -1.3$, each of which touches the envelope above and below.
}
\label{EnvDnegdel1p5}
\end{figure}
As the magnitude of the detuning increases, both the absorptive and the dispersive nonlinearities decrease, but at different rates, with the absorption decreasing faster, which favors pattern formation. Fig. \ref{CamaraTuning} shows that the pattern threshold intensity is a minimum, and its intensity range a maximum, for detunings of magnitude $\sim 5$. Figure \ref{Thresholds} illustrates envelope curves, and threshold curves for $D=-1.3$, vs diffraction parameter for $\delta/\Gamma=5$, with other parameters as before. For this case, both the envelope and the fixed-$D$ threshold curves seem to be open to large $|\theta|$, indicating that low (but not lowest) thresholds persist to large diffraction angles (divergent $Q$). This is not unexpected, because the coupling of the $f$ and $b^{*}$ components of the transverse perturbations is phase-conjugate (PC) in nature, and so is phase-matched for all diffraction angles. As was discussed for counter-propagation in Kerr media by Firth et al.\ \cite{firth90b}, at small diffraction angles the non-phasematched couplings of $f$ and $f^*$, and of $f$ and $b$ (and analogously for $b$'s couplings), give additional oscillatory contributions to the transverse gain, and can lead to thresholds which are significantly below the PC oscillation threshold \cite{grynberg93}. Similar considerations apply in our case, though the SFM boundary conditions and the two-level nonlinearity lead to quantitative differences.
Fig. \ref{Thresholds} displays oscillations in both the envelope and the threshold curves, for both signs of detuning, though more prominently for red detuning. The $D=-1.3$ threshold curves are again wholly contained by the envelope curves, with touching contact at several points. There are also several near-contacts, linked to the complexity of the system in such strongly nonlinear regions. The minimum and maximum thresholds are associated with tangencies in all cases, however.
Further increasing the detuning leads to a fall-off in nonlinearity, and the envelopes begin to close again: PC oscillation becomes impossible, and eventually the SFM transverse instability also disappears, at a detuning which depends on the optical density $OD$. Fig. \ref{LargeDelta} shows the onset of this process, for detuning
$\delta/\Gamma=13.1$, other parameters as in the previous figures. At such large detunings, absorption becomes small, and it is of interest to compare Fig. \ref{LargeDelta} with the corresponding results in the quasi-Kerr case (discussed in the next section), in which absorption is neglected, enabling an analytic solution of the simplified version of system (\ref{thickpert}).
\begin{figure}
\includegraphics[scale=0.9,width=\columnwidth]{fig7}
\caption{ (Color online) Threshold and envelope curves calculated from (\ref{thickpert}) for the same conditions as Fig. \ref{EnvDnegdel1p5}, except $\delta/\Gamma=5$. Scaled input intensity $p_0$ is plotted (again on a log scale, for clarity) against diffraction parameter $\theta=Q^2L/2k$, which is continued to negative $\theta$ (see text) so as to present results for red, as well as blue, atomic tuning. Upper and lower portions of both envelope and threshold curves are well separated for large $\theta$, asymptotically corresponding to phase-conjugate oscillation thresholds. }
\label{Thresholds}
\end{figure}
Similar threshold calculations enable the minimum and maximum thresholds to be found over the full range of detuning for which instability exists for a given configuration. Choosing parameters $D=-1.3$ and $R=0.95$ to align with the recent experiment \cite{Camara2015}, we have calculated the instability domain using
the above methods based on the full thick-medium model (\ref{thickpert}). Results are shown in Fig. \ref{AbsLoops}. The instability domain is broadly similar to that found for the thin-slice model used in Fig. \ref{CamaraTuning}, though with a significantly smaller upper threshold. As mentioned above, the thin-medium threshold corresponds precisely to the $\theta=0$ intercepts of the envelope curves. In all the tuning cases shown, Figs. \ref{EnvDnegdel1p5}, \ref{Thresholds}, \ref{LargeDelta}, the upper intercept is substantially above the highest upper threshold for fixed $D=-1.3$, and Fig. \ref{AbsLoops} shows this to be the case for all tunings. The lower threshold, which is perhaps the most interesting experimentally, is very similar for both thin-medium and fixed-$D$ cases.
\begin{figure}
\includegraphics[scale=0.9,width=\columnwidth]{fig8}
\caption{ (Color online) Threshold and envelope curves calculated from (\ref{thickpert}) for the same conditions as Fig. \ref{EnvDnegdel1p5}, except $\delta/\Gamma=13.1$. Note that $p_0$ is here plotted on a linear scale.}
\label{LargeDelta}
\end{figure}
\begin{figure}
\includegraphics[scale=0.9,width=\columnwidth]{fig9}
\caption{ (Color online) Two-level instability domain, range of threshold input intensity $p_0$ in terms of $\delta/\Gamma$ , with a logarithmic horizontal scale, as in Fig. \ref {CamaraTuning}. The larger loop (dots, black online) is as presented in \cite{Camara2015}, calculated in the thin-medium approximation for $R=1$, but identifiable as the $\theta=0$ envelope (see text), the smaller (red online) is calculated from (\ref{thickpert}), i.e. with all reflection gratings included ($h=1$), and for $R=0.95$ and $D=-1.3$. $OD=210$. The contour plot loops show experimental data of Fig. \ref{CamaraTuning}, for comparison.}
\label{AbsLoops}
\end{figure}
The agreement with experiment of the all-grating models is rather satisfactory, bearing in mind that the theory only calculates threshold conditions, while the experiment detects diffracted power only if the perturbation gain is large enough to build a strong pattern from noise within the microsecond or so duration of the pump pulse. Moreover, we note that the no-grating threshold domain in Fig. \ref {CamaraTuning} is smaller than that in which transverse structure is observed. This provides firm evidence that reflection gratings are present in the cold-atom cloud, in agreement with expectations based on the inability of transport mechanisms to wash out susceptibility gratings at such low temperatures when such short input pulses are used.
The comparison between experimental and theoretical curves is further complicated by the fact that the theory uses a uniform plane wave whereas the experiment uses a Gaussian input beam. The Fourier transform to extract the power in the modulation was performed over an area with diameter equal to the beam waist radius (i.e.\ at the 60$\%$ power point). The pump power reported in Fig.~\ref{AbsLoops} is the peak power. As an area of at least two pattern periods needs to cross threshold for a sizeable effect, it is understandable that the experimentally detected threshold is higher than the predicted one. At the high-intensity threshold, the center of the beam becomes stable again, but modulation still exists in the wings. Hence it makes sense that the plane-wave instability closes before the experimentally obtained threshold.
\section{Quasi-Kerr case}
While the above technique based on the gain circle is general and flexible, it yields little in the way of analytic insight in cases where strong nonlinear absorption leads to large and complicated changes in the forward and backward intensities in propagation through the medium. If, however, we restrict attention to detunings large enough that the absorption can be considered negligible, it follows that $p$ and $q$ are constant in the medium, and an analytic solution of this ``quasi-Kerr" approximation to the thick-medium model (\ref{thickpert}) is possible. Formally, in such a model, we suppose that $|\Delta|$ is large enough that $\alpha_l L$ can be neglected, but with $\alpha_l \Delta L$ finite, so that the nonlinearity is purely refractive, as is the case for a true Kerr medium, in which the refractive index changes linearly with intensity.
In the quasi-Kerr approximation the matrix $\hat{A}$ has constant coefficients, and the equations (\ref{thickpert}) become
\begin{equation}
\label{quasiKerr} \left \{
\begin{array}{l}
\frac{df}{dz} = - i\theta f - i \alpha_lL \Delta(A_{11}f'+A_{12}b') ,\\
\frac{db}{dz} = i\theta b + i \alpha_lL \Delta(A_{21}f'+A_{22}b') \\
\end{array} \right.
\end{equation}
Evidently the combination $\alpha_lL \Delta$ is an important strength parameter for the nonlinearity. Bearing in mind that $\alpha_l = \alpha_0/(1+\Delta^2)$, with $\Delta$ large by assumption, there is an obvious trade-off between nonlinearity and absorption. We will proceed by solving (\ref{quasiKerr}), analytically where possible, and testing against the results derived above for the ``full" two-level model with absorption.
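The trade-off can be made concrete with a short numerical sketch (plain Python; the overall scale $\alpha_0 L$ is set to unity purely for illustration): the effective nonlinearity $\alpha_l L \Delta = \alpha_0 L\,\Delta/(1+\Delta^2)$ peaks at $\Delta=1$ and falls off as $1/\Delta$ at large detuning, while the residual absorption $\alpha_l L$ falls off as $1/\Delta^2$.

```python
# Trade-off between nonlinearity and absorption in the quasi-Kerr limit.
# kappa = alpha_l * L * Delta with alpha_l = alpha_0 / (1 + Delta^2);
# alpha_0 * L is set to 1 here purely for illustration.

def kappa(delta, alpha0_L=1.0):
    """Effective Kerr coefficient as a function of scaled detuning."""
    alpha_l_L = alpha0_L / (1.0 + delta**2)
    return alpha_l_L * delta

# kappa peaks at delta = 1 and decays like 1/delta for large detuning,
# while the residual absorption alpha_l*L decays like 1/delta^2.
deltas = [0.5, 1.0, 2.0, 5.0, 10.0]
values = [kappa(d) for d in deltas]
```

This makes explicit why large detunings suppress absorption much faster than they suppress the refractive nonlinearity.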
For feedback mirror boundary conditions, we have $q = R p$.
For the symmetric equal intensity case ($q=p$), $A_{11}=A_{22}=A_{sym}$ and $A_{12}=A_{21}= GA_{sym}$. The matrix $\hat{A}$ then has a simple symmetric form
\begin{eqnarray}
\label{Asym}
& & \hat{A}_{sym}= A_{sym} \left(
\begin{array}{cccc}
1 & G \\
G & 1
\end{array}
\right). \nonumber
\end{eqnarray}
Both $A_{sym}$ and $G$ are in general functions of $s=2p$, but are independent of $z$. \\
We now define $\psi_{1,2}^2 = \theta (\theta +\kappa \phi_{1,2})$, where $\kappa = \alpha_l L \Delta$ is the effective Kerr coefficient and $(\phi_1, \phi_2)$ are the eigenvalues of $\hat{A}$, chosen such that $(\phi_1, \phi_2) \to A_{sym}(1-G,1+G)$ (the eigenvalues of $\hat{A}_{sym}$) as $q \to p$. Thus defined, the $\psi_{1,2}$ coincide exactly with the quantities used in \cite{firth90b,Geddes1994} in analyzing the Kerr CP case. It follows that the analysis developed in those papers for the symmetrically-pumped CP Kerr problem extends to the present quasi-Kerr case, in which both the strength of the nonlinearity and the grating coupling $G$ can be intensity dependent.
Detailed consideration of the CP problem for a two-level system is a subject for future work.
We now present explicit forms of the matrix $\hat{A}$ for various models of interest here. For the Kerr case, we have
\begin{equation}
\label{AKerr}
\hat{A}_{Kerr}= -
\left(
\begin{array}{cccc}
p & (1+h)q \\
(1+h)p & q
\end{array}
\right).
\end{equation}
For $p=q$ this leads to $A_{sym} = -p$ and $G=1+h$ as expected.
For the MM model, we obtain
\\
\begin{eqnarray}
\label{Amur}
& & \hat{A}_{MM}= - \frac{1}{(1+s)^3}\\
& & \left(
\begin{array}{cccc}
p(1+s)-2hpq & (1+h)q(1+s) -2hq^2 \\
(1+h)p(1+s)-2hp^2 & q(1+s) -2hpq
\end{array}
\right). \nonumber
\end{eqnarray}
For $p=q=s/2$ and $h=1$ the above expression for $\hat{A}_{MM}$ leads to $A_{sym} = - \frac{p}{(1+s)^3}$, while we find an intensity-dependent grating factor $G=2+s$. This differs from the results of \cite{Muradyan2005}, wherein the given formulae imply $G=2$.
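The intensity-dependent grating factor can be verified numerically. A minimal sketch (plain Python; the chosen intensity is illustrative) builds $\hat{A}_{MM}$ from (\ref{Amur}) for $p=q=s/2$ and $h=1$ and checks that its eigenvalues are $A_{sym}(1\mp G)$ with $A_{sym}=-p/(1+s)^3$ and $G=2+s$:

```python
def A_MM_sym(p, h=1.0):
    """MM-model matrix (Eq. Amur) for equal intensities p = q, with s = 2p."""
    s = 2.0 * p
    pref = -1.0 / (1.0 + s) ** 3
    a11 = p * (1.0 + s) - 2.0 * h * p * p                # diagonal entry
    a12 = (1.0 + h) * p * (1.0 + s) - 2.0 * h * p * p    # off-diagonal entry
    return [[pref * a11, pref * a12],
            [pref * a12, pref * a11]]

def sym_eigs(m):
    """Eigenvalues of a symmetric 2x2 matrix [[a, b], [b, a]]: a - b, a + b."""
    a, b = m[0][0], m[0][1]
    return a - b, a + b

p = 0.3                     # illustrative intensity
s = 2 * p
A = A_MM_sym(p)             # h = 1: all gratings included
phi1, phi2 = sym_eigs(A)
A_sym = -p / (1 + s) ** 3
G = 2 + s                   # intensity-dependent grating factor
# Eigenvalues should equal A_sym*(1 - G) and A_sym*(1 + G)
```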
The general function $A$ given in (\ref{exacteqs}) also leads to explicit expressions for the matrix $\hat{A}_{all}$. In the absence of grating terms, i.e. for $h=0$, it simplifies to
\begin{eqnarray}
\label{Anogr}
& & \hat{A}_{h=0}= - \frac{1}{(1+s)^2} \left(
\begin{array}{cccc}
p & q \\
p & q
\end{array}
\right) \nonumber
\end{eqnarray}\\
which leads to $A_{sym} = - \frac{p}{(1+s)^2}$ and $G=1$, as expected, implying a zero eigenvalue for $\hat{A}_{h=0}$, and hence $\psi_1 = \theta$. (The MM model gives identical results for $h=0$.) \\
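The zero eigenvalue, and hence $\psi_1=\theta$, is easy to confirm numerically; a minimal sketch (plain Python, illustrative parameter values):

```python
import math

def A_h0(p, q):
    """Gratingless (h = 0) matrix, Eq. (Anogr), with s = p + q."""
    s = p + q
    pref = -1.0 / (1.0 + s) ** 2
    return [[pref * p, pref * q],
            [pref * p, pref * q]]

def eig2(m):
    """Eigenvalues of a 2x2 matrix via trace and determinant (larger first)."""
    tr = m[0][0] + m[1][1]
    det = m[0][0] * m[1][1] - m[0][1] * m[1][0]
    disc = math.sqrt(tr * tr - 4.0 * det)
    return (tr + disc) / 2.0, (tr - disc) / 2.0

p = q = 0.4
phi1, phi2 = eig2(A_h0(p, q))   # rank-1 matrix: phi1 = 0, phi2 = -(p+q)/(1+s)^2
theta, kappa = 2.0, 8.0          # illustrative values
psi1 = math.sqrt(theta * (theta + kappa * phi1))  # = theta, since phi1 = 0
```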
With all grating terms included, i.e. for $h=1$, we obtain
\begin{eqnarray}
\label{Aall}
& & \hat{A}_{all}= \left(
\begin{array}{cccc}
(1+s)/W^3 - A & -2q/W^3 \\
-2p/W^3 &(1+s)/W^3 -A^T
\end{array}
\right)
\end{eqnarray}\\
where $A^T(p,q)= A(q,p)$. For equal intensities $W=\sqrt{1+2s }$ and $\xi =0$. Some calculation then shows that $G$ is approximately $2+2s$ for small $s$. For larger $s$, however, there is a strong departure from Kerr-like behavior, in that $A_{11}$ changes sign at $s= 1+ \sqrt{2}$, and it follows that $G$ is negative for higher values of $s$.\\
Using analysis analogous to that in \cite{firth90b,Geddes1994}, but with SFM boundary conditions $f_0=0$, $b_1 = e^{-2i\psi_{D}} f_1$, we obtain, for perfect mirror reflection ($R=1$), the SFM threshold condition \\
\begin{equation}
\label{2LSfbm}
c_1 c_2 +\left(\frac{\psi_2}{\psi_1}c_D^2+\frac{\psi_1}{\psi_2} s_D^2 \right)s_1 s_2 = c_D s_D\left( \beta_1 s_1 c_2 -\beta_2 s_2 c_1\right).
\end{equation}
Here $c_i=\cos\psi_i$, $s_i=\sin\psi_i$, $c_D=\cos\psi_D$, $s_D=\sin\psi_D$, and
$\beta_n = \left(\frac{\psi_n}{\theta}-\frac{\theta}{\psi_n}\right)$.
In the quasi-Kerr case the envelope condition, whereby the gain circle in diagrams like Fig. \ref{fig:circles} touches the unit circle, corresponds to the transition between complex and real $\psi_D$ as roots of (\ref{2LSfbm}). This leads to the following envelope condition:
\begin{equation}
\label{env}
4(c_1 c_2 +\frac{\psi_1}{\psi_2} s_1 s_2)(c_1 c_2 +\frac{\psi_2}{\psi_1} s_1 s_2) = ( \beta_1 s_1 c_2 -\beta_2 s_2 c_1)^2.
\end{equation}
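Conditions (\ref{2LSfbm}) and (\ref{env}) are transcendental but straightforward to evaluate numerically. The sketch below (plain Python; parameter values are illustrative) specializes to the Kerr medium with all gratings, for which $A_{sym}=-p$ and $G=1+h=2$, so that $(\phi_1,\phi_2)=(p,-3p)$; complex arithmetic handles the regions where $\psi_{1,2}^2<0$, and the residuals come out real.

```python
import cmath, math

def psis(theta, kappa, p, G=2.0):
    """psi_{1,2} from psi^2 = theta*(theta + kappa*phi) with
    phi_{1,2} = A_sym*(1 -+ G); defaults to the all-grating Kerr case
    (h = 1), for which A_sym = -p and G = 1 + h = 2."""
    A_sym = -p
    return (cmath.sqrt(theta * (theta + kappa * A_sym * (1 - G))),
            cmath.sqrt(theta * (theta + kappa * A_sym * (1 + G))))

def threshold_residual(theta, kappa, p, D):
    """LHS minus RHS of the SFM threshold condition (2LSfbm), R = 1."""
    p1, p2 = psis(theta, kappa, p)
    c1, s1, c2, s2 = cmath.cos(p1), cmath.sin(p1), cmath.cos(p2), cmath.sin(p2)
    cD, sD = cmath.cos(D * theta), cmath.sin(D * theta)
    beta = lambda ps: ps / theta - theta / ps
    lhs = c1 * c2 + (p2 / p1 * cD**2 + p1 / p2 * sD**2) * s1 * s2
    rhs = cD * sD * (beta(p1) * s1 * c2 - beta(p2) * s2 * c1)
    return (lhs - rhs).real     # imaginary parts cancel; keep the real residual

def envelope_residual(theta, kappa, p):
    """LHS minus RHS of the envelope condition (env); roots give the envelope."""
    p1, p2 = psis(theta, kappa, p)
    c1, s1, c2, s2 = cmath.cos(p1), cmath.sin(p1), cmath.cos(p2), cmath.sin(p2)
    beta = lambda ps: ps / theta - theta / ps
    lhs = 4 * (c1 * c2 + p1 / p2 * s1 * s2) * (c1 * c2 + p2 / p1 * s1 * s2)
    return (lhs - (beta(p1) * s1 * c2 - beta(p2) * s2 * c1) ** 2).real
```

Since the mirror displacement enters (\ref{2LSfbm}) only through $c_D^2$, $s_D^2$ and the product $c_D s_D$, the threshold residual is periodic in $\psi_D = D\theta$ with period $\pi$, i.e.\ periodic in $D$ with period $\pi/\theta$ at fixed $\theta$.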
As an example, Fig. \ref{figQK7pt1} illustrates envelope and threshold curves for the all-grating quasi-Kerr model, for a fairly small quasi-Kerr coefficient, $|\alpha_l L \Delta| =8$. There is a very good correspondence to the full model for the same parameters (Fig. \ref{LargeDelta}). The main difference is that removing the small absorption losses makes the instability and envelope domains slightly larger for the quasi-Kerr model. In particular, the range of $\theta$ is larger, extending to $\sim 40$, but still finite, so that there is no phase-conjugate instability.
A key question is how useful the quasi-Kerr approximation is. To test this, we compare quasi-Kerr and ``exact" two-level thresholds over a range of tunings with other parameters equal, except that $R=1$ for the quasi-Kerr model. Fig. \ref{exvsqK} shows such a comparison. Unsurprisingly, the fit is best at large detunings, with the quasi-Kerr model predicting lower thresholds which are increasingly underestimated as the detuning is decreased. Given that $\alpha_l L$ is about 0.93 at $\delta/\Gamma = 7.5$ for $OD=210$, corresponding to a single-pass transmission of only about $0.4$, the quasi-Kerr model seems to provide a useful guide to the true instability range even in regions where the absorption is far from negligible. The fit to the upper threshold curve is very good over the whole tuning range shown, because the absorption is strongly saturated in this region. The nonlinearity is saturated too, but the quasi-Kerr model fully accounts for that.
\begin{figure}
\includegraphics[scale=0.9,width=\columnwidth]{fig10}
\caption{(Color online) Threshold and envelope curves. Blue curves (dashed): Envelope curves calculated from (\ref{env}) for a two-level medium described by $\hat{A}_{all}$, with $h=1$. Quasi-Kerr coefficient $|\alpha_l L \Delta| = 8$ and detuning $\delta/\Gamma=13.1$. Orange curves (solid): Threshold curves with a feedback mirror at negative effective distance ($D=-1.3$) from the end of the medium, which touch the envelope curves.
}
\label{figQK7pt1}
\end{figure}
\begin{figure}
\includegraphics[scale=0.9,width=\columnwidth]{fig11}
\caption{ (Color online) Two-level instability domain, range of threshold input intensity in terms of $\delta/\Gamma$, with a logarithmic horizontal scale. The closed loop (red online) is that calculated from (\ref{thickpert}) with absorption and all reflection gratings included ($h=1$). $R=0.95$, $D=-1.3$ and $OD=210$, as in Fig.\ref {AbsLoops}. The open curve (dotted, black online) is calculated for the same parameters from (\ref{2LSfbm}), as derived from the quasi-Kerr model equations (\ref{quasiKerr}). The latter curve is calculated only for $\delta/\Gamma > 7.5 $, because the neglect of absorption in (\ref{quasiKerr}) is untenable at small detunings.}
\label{exvsqK}
\end{figure}
The similarities between the two-level quasi-Kerr and pure Kerr analyses can be exploited ``in reverse", to calculate thresholds for SFM pattern formation in Kerr media beyond the thin-medium models, for which some results (without detailed analysis) were reported in \cite{labeyrie14}. Further, envelope curves can be calculated, so as to capture the range of thresholds afforded by varying the mirror displacement $D$, and to illustrate the thin-medium limit as discussed above.
Figure~\ref{fig:KG1env} illustrates this for a Kerr medium with no grating term ($h=0$). Here two distances ($D=2.5,10$) are shown, and we begin to see how the faster oscillations of the threshold curves for larger mirror displacements allow a better exploration of the envelope, and thus potentially lower thresholds. For the self-focusing case, where the envelope has a minimum at finite $\theta$, we see a transition of the lowest threshold from the second-lowest-$Q$ band for $D=2.5$ to the sixth-lowest-$Q$ band for $D=10$. Assuming that the dominant pattern is determined by the lowest threshold, we would expect that, as $D$ is increased, the pattern period will slowly increase, and then suddenly drop back, in a sawtooth pattern. This phenomenon is indeed observed, as shown in Fig.~\ref{pol_Talbotscan}, where the dominance of the first Talbot ring for small $|D|$ is replaced by the second Talbot ring for larger $|D|$.
Conversely, for self-defocusing the lowest threshold always decreases as $D$ is increased, so that the patterns with lowest threshold are found at large mirror displacements, and have large spatial scales, with pattern wavelength scaling like $\sqrt{d/k}$, as is well known from thin-medium theory \cite{firth90a}.
In contrast, CP thresholds for $h=0$ defocusing Kerr media decrease with increasing $Q$ (see e.g.\ \cite{Geddes1994}), so that phase-conjugate oscillation is the dominant instability.
This SFM advantage can be attributed to the ability of the feedback phase to compensate for both the diffractive and nonlinear phase shifts in the medium, which have the same sign for defocusing, and thus cannot cancel each other as they can for self-focusing. This no-grating Kerr case is also interesting in that the envelope curves cross, and hence the threshold curves must thread through the intersection (Fig.~\ref{fig:KG1env}). It follows that the threshold is actually independent of mirror displacement at these crossings. Note that the threshold will normally be lower at a different diffraction parameter (as occurs in Fig.~\ref{fig:KG1env}), so observing the phenomenon would probably require isolating the specific wavenumber by Fourier filtering in the feedback loop \cite{pesch03}.
The finite limit for small diffraction, $\theta \to 0$, of the envelope is $\pm 0.5$ in Fig. \ref{fig:KG1env}, and corresponds exactly to the thin-slice value \cite{firth90a}. It is clear from the above discussion that the small-$\theta$ region of the envelope can only be accessed for large $D$, and hence that the $\theta \to 0$ limit corresponds to $D \to \infty$, i.e. the thin-medium limit \cite{firth90a}. While previous thick-medium analyses \cite{Honda1996,kozyreff06} are valid in this limit, these authors did not explicitly consider it. The finite slope at $\theta=0$, illustrated in Fig. \ref{fig:KG1env}, means that the pattern-forming modes are not, in fact, threshold-degenerate when the medium thickness is taken into account. Consequently, the multi-fractal patterns predicted in the thin-slice limit \cite{Huang2005}, which depend on mode degeneracy, are not expected to occur in practice, unless other mechanisms or devices are able to restore degeneracy. This effect of diffraction within the nonlinear medium was recognized earlier in \cite{kozyreff06}.
\begin{figure}
\includegraphics[scale=0.9,width=\columnwidth,trim=0 0 0 0,clip]{fig12}
\caption{(Color online) Threshold intensity (in units of $\alpha_l L \Delta p/2$) vs diffraction parameter $\theta =Q^2L/2k$. Blue dashed curves: Envelope curves calculated from (\ref{env}) for a Kerr medium with $h=0$. Positive and negative intensity values, respectively, correspond to self-focusing and self-defocusing Kerr media. Also shown are threshold curves for a feedback mirror at effective distance $D$ from the end of the medium. Gray solid curves: $D=2.5$. Orange solid curves (with more wiggles): $D=10.0$. In both cases the threshold curves touch the envelope curves, and are confined by them.
}
\label{fig:KG1env}
\end{figure}
\section{Talbot Fans}
The above figures demonstrate how the threshold extrema move in $\theta$ as the mirror displacement $D$ is varied. An interesting and relevant way to examine this is to plot pattern scale ($\sim 1/\sqrt{\theta}$) vs $D$ for fixed intensity. This is demonstrated in Fig.~\ref{fig:2LSsize}, where the parameters are chosen to match those of \cite{Camara2015}, and the intensity $s=0.085$ is just above the minimum threshold, so that the unstable regions appear as long narrow islands. The ``fan" shape of the island group is due to the Talbot effect: the threshold values satisfying (\ref{2LSfbm}) are evidently periodic in $\psi_D = D \theta$, which means that at fixed $\theta$ (size) and intensity, threshold values are periodic in $D$. This is particularly clear at the bottom of the fan in Fig. \ref{fig:2LSsize}, where the tips of the islands are equally spaced in $D$. The Talbot periodicity is inversely proportional to $\theta$, which is why the islands fan out as the pattern scale increases (i.e. as $\theta$ decreases).
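The fan geometry can be sketched quantitatively (plain Python; the listed $\theta$ values are illustrative). Since the threshold condition repeats whenever $\psi_D = D\theta$ advances by $\pi$ (the mirror displacement enters (\ref{2LSfbm}) only through squares and the product of $\cos\psi_D$ and $\sin\psi_D$), the island spacing in $D$ at fixed $\theta$ is $\pi/\theta$, while the pattern scale grows as $1/\sqrt{\theta}$:

```python
import math

def talbot_spacing(theta):
    """Island spacing in mirror displacement D at fixed diffraction parameter
    theta = Q^2 L / 2k; the threshold condition is pi-periodic in psi_D."""
    return math.pi / theta

def pattern_scale(theta):
    """Relative pattern period, Lambda = 2*pi/Q ~ 1/sqrt(theta) (scaled units)."""
    return 1.0 / math.sqrt(theta)

# Larger pattern scales (smaller theta) have wider island spacing in D,
# which is why the islands fan out toward the top of Fig. fig:2LSsize.
fan = [(pattern_scale(t), talbot_spacing(t)) for t in (0.5, 1.0, 2.0, 4.0)]
```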
\begin{figure}
\includegraphics[width=\columnwidth]{fig13}
\caption{ Pattern period (arb.\ units) vs mirror displacement $D$ at fixed intensity $s=0.085$. Threshold curves calculated from (\ref{2LSfbm}) for a two-level medium described by $\hat{A}_{all}$, with $h=1$. The quasi-Kerr coefficient $\alpha_l L \Delta =13.94$ corresponds, for optical density 210 \cite{Camara2015}, to blue detuning $\Delta = 2\delta/\Gamma = 15$.}
\label{fig:2LSsize}
\end{figure}
Such ``Talbot fans" are readily observed experimentally. The fan reported in \cite{Camara2015} is shown in Fig.~\ref{Talbotfan}, where the experimental data fit well to threshold data from (\ref{2LSfbm}) using our two-level all-grating model based on $\hat{A}_{all}$. Fig.~\ref{Talbotfan} b) plots the pattern period against mirror displacement. Around $D\approx 0$ the lengthscale with the smallest wavenumber (largest period) is selected. At higher $|D|$, two lengthscales are found in the pattern. Both are in good agreement with the prediction from the theory. The inset shows excellent agreement between the measured and calculated $D$-periodicities. In the earlier optomechanical patterns paper \cite{labeyrie14}, there is a more limited fan, to which threshold data from (\ref{2LSfbm}) are fitted using a Kerr model ($h=0$, because the slow time scale allows atomic motion to wash out the longitudinal grating).
Fig.~\ref{Talbotfan} a) plots the power diffracted into the first and second unstable wavenumbers, obtained by integrating the measured far-field intensity distributions over an annulus of the respective radius. We did not measure thresholds, but to a first approximation one can argue that the diffracted power increases with increasing distance from threshold, and hence the measured data can be interpreted as indicators of inverted threshold curves. We compare them with the threshold curves obtained from the all-grating quasi-Kerr model, as the detuning is reasonably large and absorption not very important. As indicated in the discussion of Fig.~\ref{Talbotfan} b), around $D\approx 0$ only the lowest wavenumber (i.e. the one from the first Talbot balloon) is excited. For a mirror within the medium ($D=-1 \dots 0$), the diffracted power is low and the predicted thresholds are high. For increasing $|D|$, thresholds are predicted to fall dramatically, and indeed well-developed patterns, indicated by high diffracted power, are observed. For further increasing $|D|$, the theory predicts that the second Talbot balloon at higher wavenumber has the lowest threshold. Excitation of this length scale is indeed observed, but it does not take over completely in the experimental data.
\begin{figure}
\includegraphics[width=\columnwidth,trim=0 60 0 20,clip]{fig14}
\caption{(Color online) a) Diffracted power (experiment, left axis) and predicted threshold saturation intensity (theory, right axis) vs scaled mirror displacement $D$. The cloud thickness is $L=9$ mm. b) Pattern period $\Lambda$ vs mirror displacement. In physical units, the $x$-axis corresponds to -60~mm to +40~mm measured from the center of the cloud. Parameters: blue detuning, $\Delta = 15$, see \cite{Camara2015}. The diffracted power is normalized to its maximal value. Red solid dots: experimental data for first Talbot balloon (lowest wavenumber), gray circles: experimental data for second Talbot balloon (next highest wavenumber excited, in a) enhanced by factor of 5). The red and gray curves are the corresponding theoretical predictions and are calculated from (\ref{2LSfbm}) using the all-grating two-level model.
Inset: The measured $D$ period as a function of the pattern size (stars), together with the Talbot effect prediction (line). }
\label{Talbotfan}
\end{figure}
For a further investigation of the Talbot fan phenomenon, we analyze a somewhat different experimental SFM situation in which optical pumping between Zeeman substates, rather than two-level electronic excitation, is the main nonlinearity \cite{grynberg94,scroggie96,leberre95b,aumann97}. The experimental parameters are an effective medium length of $L=3.2$ mm, beam intensity $I=18$ mW/cm$^2$ and detuning $\Delta = -14$. The homogeneous solution is not saturated in this case \cite{ackemann01b}, so it is reasonable to compare the data to the length scales and threshold curves obtained from a self-focusing thick-medium Kerr theory.
Experimental measurements of diffracted power and pattern length scale vs mirror displacement are shown in Fig.~\ref{pol_Talbotscan}. The behavior is very similar to that observed for the electronic two-level case in Fig.~\ref{Talbotfan}, but there is one crucial difference. For large enough $|D|$ ($D> 0.7$, $D<-2.5$) the power in the first Talbot ring is suppressed down to $3\times 10^{-3}$ relative to the second one, and the length scale of the second balloon takes over completely. This is in good qualitative, though not quantitative, agreement with the thick-medium model as discussed earlier in connection with Figure~\ref{fig:KG1env}, though the transition is predicted to occur at somewhat larger $|D|$. Nevertheless, it is an important confirmation that diffraction within the medium influences length-scale selection. In view of the fact that the atomic clouds have an approximately Gaussian density distribution while the theory assumes a rectangular distribution, quantitative deviations between theory and experiment are not surprising.
We note that a similar phenomenon was predicted in photorefractives \cite{schwab99,denz98}, in spite of the different mechanism of nonlinearity. However, the essentially complete extinction of patterns with the smallest Talbot wavevector in favour of the second Talbot wavevector had not been experimentally observed before; only the excitation of the second wavenumber had been reported (see Fig.~4 of \cite{Camara2015}, quantified in Fig.~\ref{Talbotfan} b) of this manuscript, for the two-level case, and Fig.~7 of \cite{schwab99} for the photorefractive case). In hot atomic vapours, an early and not very systematic study \cite{ackemann94,ackemann96t} showed coexistence between the first Talbot wavevector and the second one for $D\approx 2.4$, and between the first Talbot wavevector and the third one for $D\approx 3.9$. For even higher distances ($D\approx 8.3$), excitation of a single, quite high-order (five or six) Talbot wavevector was found.
It should be noted that these results are influenced by atomic diffusion lifting the degeneracy present in the thin-slice model and a limited aspect ratio preventing patterns with the first Talbot wave vector for $D>4$.
\begin{figure}
\includegraphics[scale=0.9,width=\columnwidth,trim=0 10 0 0,clip]{fig15}
\caption{ (Color online) a) Predicted threshold, b) experimentally observed diffracted power (normalized to its maximal value) and c) pattern period vs mirror displacement $D$. In unscaled parameters, the $x$-axis corresponds to -12.8~mm to +10.2~mm measured from the cloud center. Parameters: effective medium length $L=3.2$ mm, beam intensity $I=18$ mW/cm$^2$ and detuning $\Delta = -14$. Red solid dots: experimental data for the first Talbot balloon (lowest wavenumber); blue circles: experimental data for the second Talbot balloon (next-highest excited wavenumber). The red and blue curves are the corresponding theoretical predictions, calculated for a self-focusing Kerr medium with $h=1$ described by $\hat{A}_{Kerr}$. The insets show far-field patterns obtained at the mirror positions indicated, illustrating the length-scale competition. }
\label{pol_Talbotscan}
\end{figure}
Figures~\ref{Talbotfan} and \ref{pol_Talbotscan} indicate that a change of mirror displacement can drag the pattern period along, qualitatively as in a diffractively thin medium, but only up to a point. The system then jumps back to a smaller length scale it seems to prefer, which can again be changed to some extent by varying the mirror displacement. The origin of this behavior lies in the interaction between the threshold curves and the envelope, as discussed before. For increasing $|D|$, the threshold curves move to lower $Q$ and have more wiggles in a given range of $\theta$ on the envelope curve, which means they can explore the lowest-threshold condition more effectively.
Another way to illustrate this point is visualized in Fig.~\ref{fig:D_thres}. The red solid curve in Fig.~\ref{fig:D_thres} a) denotes the length scale of the minimum-threshold mode vs mirror displacement. For $D=-3 \ldots 1$ it mirrors the first Talbot balloon, until it jumps to the second, which it follows for $D=-6 \ldots -4$ and $D=1.5 \ldots 4$. Afterwards it jumps again and wiggles around a horizontal line. The changes of length scale imply that the minimum of the envelope curve is at finite $\theta$, and that the system tries to stay as close to this value as is compatible with the specific boundary condition, i.e.\ the diffractive phase shift $\theta$ at the feedback distance $D$.
This approach to the envelope curve is also nicely illustrated in the behavior of the threshold intensity vs $D$ (Fig.~\ref{fig:D_thres} b)), becoming nearly independent of distance for large mirror distances as the minimum of the envelope curve can be attained.
\begin{figure}
\centering%
\includegraphics[width=\columnwidth,clip]{fig16a.pdf}
\includegraphics[width=\columnwidth,clip]{fig16b.pdf}
\caption{(Color online) a) Pattern length scale (characterized by diffraction parameter $\theta$) and b) threshold intensity vs mirror displacement $D$ for a self-focusing Kerr medium with $h=1$ described by $\hat{A}_{Kerr}$. Red solid curve: minimum threshold, blue dashed curve: lowest wavenumber (first Talbot) balloon.}
\label{fig:D_thres}
\end{figure}
\section{Conclusion}
\label{conclusions}
In this paper we have undertaken a largely analytic investigation of thresholds and length scales for pattern formation in a saturable two-level medium, optically excited close to resonance from one side, and with a feedback mirror to reflect and phase-shift the light fields after they have traversed the medium. In that scenario we have established a number of results, several of which are in encouraging agreement with recent experiments.
We have considered, and compared to experiment, the ``Talbot fan" structures which characterize the evolution of pattern scales as $D$ is varied, and have explained the observed sudden changes of scale in terms of mode competition in the neighborhood of the minimum possible (in $D$) threshold.
The additional degree of freedom offered by finite $D$ also implies an additional complexity in the analysis. We have shown, however, that thresholds are constrained by envelope curves to which the threshold curves are tangent, and along which they evolve as $D$ is varied. Hence important properties of the SFM system such as the minimum possible threshold, and the domains within which pattern formation is possible (or impossible) can be found, often analytically. Again, the envelope property is likely to be general, because it follows from the structure of the feedback boundary condition.
Importantly, the envelope functions enable a quantitative investigation of the limit $D \to\infty$, which corresponds to diffraction in the medium being negligible compared to that in the feedback loop, i.e.\ the thin-slice limit. We find that threshold values tend precisely to the thin-medium values, but with finite slope. As a consequence, we have demonstrated that the degeneracy of the unstable modes predicted in thin-medium theory does not survive the inclusion of finite medium length, even at lowest order.
Diffusive damping removing the degeneracy was introduced in the first treatments \cite{firth90a,dalessandro92} to model carrier diffusion in semiconductors or elasto-viscous coupling in liquid crystals, which make these media deviate from purely local Kerr media. In hot-atom experiments \cite{grynberg94,Ackemann1994,ackemann95b} the thermal motion of the atoms, which can be modelled as diffusive motion under appropriate conditions \cite{Ackemann1994,ackemann95b}, tends to provide a stronger wash-out of transverse gratings at larger wavenumber and thus removes the degeneracy. In cold atoms this effect is not very strong, and the finite medium thickness appears to be the main mechanism responsible for the emergence of a defined length scale in the investigations reported in Refs.~\cite{labeyrie14, Camara2015}. The possibility of a cut-off at high transverse wavenumbers due to diffraction within the nonlinear medium (at least for some parameter combinations) was realized before in \cite{kozyreff06}.
In the specific context of the two-level nonlinearity we have analyzed different models to take account of wavelength scale (reflection) gratings in the steady-state susceptibility applicable to counterpropagation problems. We have found that models in which only the lowest-order ($2k$) gratings are considered predict a zero-order bistability as resonance is approached. This bistability disappears when all orders ($m\times 2k$) of gratings are included, and is therefore probably spurious. We have been able to develop models which include all grating orders, numerically for the fully-absorptive system and analytically in the quasi-Kerr and thin-medium limits, and have demonstrated reasonable agreement with experiment using these all-grating models.
In summary, we have developed a firm and systematic foundation for the analysis of the effects of in-medium diffraction, and of reflection gratings, in SFM pattern formation. Though we have focused here on the saturable two-level electronic nonlinearity, our approach and techniques have applicability across a wide class of nonlinearities. While our present analysis deals only with thresholds and steady-state instabilities, these are an important, and even essential, preliminary to more extensive numerical simulations, necessarily involving many additional parameters and many spatial and temporal scales. We already showed \cite{labeyrie14} that a simple thick-medium Kerr model gives useful insight into optomechanical SFM patterns, and in this work we have shown that a similar analysis helps understand important features of polarization-mediated SFM patterns in cold atoms. Patterns in cold-atom clouds with laser irradiation and mirror feedback are proving to be a very rich field, with diverse implications, and a secure basis for the interpretation of experimental results and the development of appropriate theoretical models is therefore very important.
\acknowledgments{The Strathclyde group is grateful for support by the Leverhulme Trust and a university studentship for IK by the University of Strathclyde. The Sophia Antipolis group is supported by CNRS, UNS, and R\'{e}gion PACA. The collaboration between the two groups was supported by the Strathclyde Global Exchange Fund and CNRS. WJF also acknowledges sharing of unpublished work by M. Saffman. We are grateful to P. Gomes (first investigations of length scales like in Fig.~\ref{pol_Talbotscan} were done by him), A. Arnold and P. Griffin for experimental support, and to G.R.M. Robb, G.-L. Oppo and R. Kaiser for fruitful discussions. }
\section{Introduction}
The recent Large Hadron Collider (LHC) discovery of a scalar particle with a mass of 125 GeV, compatible with the Standard Model (SM) Higgs boson predicted by the Brout-Englert-Higgs symmetry breaking mechanism, opens a gateway to the search for physics beyond the SM \cite{Aad:2012tfa, Chatrchyan:2012xdj}. However, no evidence for new physics beyond the SM has yet been observed in the analysis of combined ATLAS and CMS data probing the couplings of the Higgs boson.
A deviation of the Higgs boson couplings from the SM predictions would imply the presence of new physics involving massive particles that are decoupled at energy scales much larger than those probed in the Higgs sector \cite{Appelquist:1974tg}. The SM Effective Field Theory (EFT) is a well-known model-independent method for investigating any deviation from the SM \cite{Buchmuller:1985jz,Grzadkowski:2010es}. In this approach, all new-physics contributions to the SM are described by a systematic expansion in a series of higher-dimensional operators built from the SM fields. All higher-dimensional operators respect the $SU(3)_C\times SU(2)_L\times U(1)_Y$ SM gauge symmetry. The dimension-6 operators play an important role in this framework since they can be matched to ultraviolet (UV) models, simplified by the universal one-loop effective action. There have been many analyses constraining SM EFT operators with available data from LHC Run 1 \cite{Corbett:2012ja,Ellis:2014jta,Ellis:2014dva, Falkowski:2015fla,Corbett:2015ksa,Ferreira:2016jea,Aad:2015tna,Green:2016trm} and with electroweak precision measurements from the previous accelerator, the Large Electron Positron collider (LEP) \cite{Jones:1979bq,Grinstein:1991cd,Hagiwara:1993qt, Han:2004az}. In particular, predictions for dimension-6 operators have been examined in many rewarding studies at the High Luminosity LHC (HL-LHC) \cite{Englert:2015hrx,Buckley:2015lku, Khanpour:2017inb} and at future $e^+e^-$ colliders \cite{Amar:2014fpa,Kumar:2015eea,Ellis:2015sca,Ge:2016zro,Cohen:2016bsd,Ellis:2017kfi,Alam:2017hkf,Khanpour:2017cfq,Englert:2017gdy}.
Precision measurements of the Higgs boson couplings to the other SM particles at the LHC and at planned future colliders will give us detailed information about its true nature. Future multi-TeV $e^+e^-$ colliders, with extremely high luminosity and a clean environment due to the absence of a hadronic initial state, would give access to precise measurements, especially of the Higgs couplings. The Compact Linear Collider (CLIC) is one of the mature proposed linear colliders, with centre-of-mass energies from a few hundred GeV up to 3 TeV \cite{CLIC:2016zwp}. The first energy stage of CLIC operation was chosen to be $\sqrt s$=380 GeV, with a predicted integrated luminosity of 500 fb$^{-1}$. The primary motivation of this stage is the precision measurement of SM Higgs properties, as well as model-independent determinations of the Higgs couplings to both fermions and bosons \cite{CLIC:2016zwp, Abramowicz:2016zbo}.
In this study, we analyse the $e^+e^-\to \nu \bar{\nu} H$ production process in order to assess the sensitivity of the first energy stage of CLIC to the CP-conserving dimension-6 operators involving the Higgs and gauge bosons ($W^{\pm}$, $\gamma$, $Z$), defined by the SM EFT Lagrangian in the next section.
\section{Effective Operators}
In the SM EFT approach, the well-known SM Lagrangian ($\mathcal{L}_{\rm SM}$), involving renormalizable interactions, is supplemented by higher-dimensional operators. These operators are suppressed by an energy scale of unobserved states, assumed to be larger than the vacuum expectation value of the Higgs field ($v$). Several different operator bases are presented in the literature; we consider the SM EFT operators of the strongly interacting light Higgs Lagrangian ($\mathcal{L}_{\rm SILH}$) in the bar convention \cite{Englert:2015hrx,Contino:2013kra,Alloul:2013naa}. Assuming baryon and lepton number conservation, the most general form of the dimension-6 effective Lagrangian including Higgs boson couplings that respects the SM gauge symmetry is given as follows:
\begin{eqnarray}
\mathcal{L} = \mathcal{L}_{\rm SM} + \sum_{i}\bar c_{i}O_{i}
\end{eqnarray}
where the $\bar c_{i}$ are normalized Wilson coefficients, which are free parameters. In this work, we consider the dimension-6 CP-conserving interactions of the Higgs boson and the electroweak gauge bosons in the SILH basis \cite{Alloul:2013naa}:
\begin{eqnarray}\label{massb}
\begin{split}
\mathcal{L}_{\rm SILH} = & \
\frac{\bar c_{H}}{2 v^2} \partial^\mu\big[\Phi^\dag \Phi\big] \partial_\mu \big[ \Phi^\dagger \Phi \big]
+ \frac{\bar c_{T}}{2 v^2} \big[ \Phi^\dag {\overleftrightarrow{D}}^\mu \Phi \big] \big[ \Phi^\dag {\overleftrightarrow{D}}_\mu \Phi \big] - \frac{\bar c_{6} \lambda}{v^2} \big[\Phi^\dag \Phi \big]^3
\\
& \
- \bigg[\frac{\bar c_{u}}{v^2} y_u \Phi^\dag \Phi\ \Phi^\dag\cdot{\bar Q}_L u_R
+ \frac{\bar c_{d}}{v^2} y_d \Phi^\dag \Phi\ \Phi {\bar Q}_L d_R
+\frac{\bar c_{l}}{v^2} y_l \Phi^\dag \Phi\ \Phi {\bar L}_L e_R
+ {\rm h.c.} \bigg]
\\
&\
+ \frac{i g\ \bar c_{W}}{m_{W}^2} \big[ \Phi^\dag T_{2k} \overleftrightarrow{D}^\mu \Phi \big] D^\nu W_{\mu \nu}^k + \frac{i g'\ \bar c_{B}}{2 m_{W}^2} \big[\Phi^\dag \overleftrightarrow{D}^\mu \Phi \big] \partial^\nu B_{\mu \nu} \\
&\
+ \frac{2 i g\ \bar c_{HW}}{m_{W}^2} \big[D^\mu \Phi^\dag T_{2k} D^\nu \Phi\big] W_{\mu \nu}^k
+ \frac{i g'\ \bar c_{HB}}{m_{W}^2} \big[D^\mu \Phi^\dag D^\nu \Phi\big] B_{\mu \nu} \\
&\
+\frac{g'^2\ \bar c_{\gamma}}{m_{W}^2} \Phi^\dag \Phi B_{\mu\nu} B^{\mu\nu}
+\frac{g_s^2\ \bar c_{g}}{m_{W}^2} \Phi^\dag \Phi G_{\mu\nu}^a G_a^{\mu\nu}
\end{split}
\end{eqnarray}
where $\lambda$ represents the Higgs quartic coupling; $y_u$, $y_d$ and $y_l$ are the $3\times3$ Yukawa coupling matrices in flavor space; $g'$, $g$ and $g_s$ denote the coupling constants of the $U(1)_Y$, $SU(2)_L$ and $SU(3)_C$ gauge fields, respectively; the generators of $SU(2)_L$ in the fundamental representation are given by $T_{2k}=\sigma_k/2$, $\sigma_k$ being the Pauli matrices; $\Phi$ is the Higgs field, a single $SU(2)_L$ doublet; $B^{\mu\nu}=\partial^\mu B^\nu-\partial^\nu B^\mu$ and $W_k^{\mu \nu}=\partial^\mu W_k^\nu-\partial^\nu W_k^\mu+g\epsilon_{ijk}W_i^\mu W_j^\nu$ are the electroweak field strength tensors and $G_a^{\mu\nu}$ is the strong field strength tensor; and the Hermitian derivative operator is defined as
$\Phi^\dag \overleftrightarrow{D}_\mu \Phi = \Phi^\dag D^{\mu}\Phi - D_{\mu}\Phi^{\dag}\Phi$.
The SM EFT Lagrangian (Eq.~(\ref{massb})), containing the Wilson coefficients of the dimension-6 CP-conserving operators in the SILH basis, can be written in terms of the mass eigenstates after electroweak symmetry breaking (Higgs boson, $W$, $Z$, photon, etc.) as follows:
\begin{eqnarray}\label{gbase}
\mathcal{L} &=& - \frac{m_{ H}^2}{2 v} g^{(1)}_{ hhh}h^3 + \frac{1}{2} g^{(2)}_{ hhh} h\partial_\mu h \partial^\mu h
- \frac{1}{4} g_{ hgg} G^a_{\mu\nu} G_a^{\mu\nu} h
- \frac{1}{4} g_{ h\gamma\gamma} F_{\mu\nu} F^{\mu\nu} h\nonumber\\
& -& \frac{1}{4} g_{ hzz}^{(1)} Z_{\mu\nu} Z^{\mu\nu} h
- g_{ hzz}^{(2)} Z_\nu \partial_\mu Z^{\mu\nu} h
+ \frac{1}{2} g_{ hzz}^{(3)} Z_\mu Z^\mu h
- \frac{1}{2} g_{ haz}^{(1)} Z_{\mu\nu} F^{\mu\nu} h
- g_{ haz}^{(2)} Z_\nu \partial_\mu F^{\mu\nu} h\nonumber\\
&-& \frac{1}{2} g_{ hww}^{(1)} {W^+}^{\mu\nu} {W^-}_{\mu\nu} h
- \Big[g_{ hww}^{(2)} {W^+}^\nu \partial^\mu {W^-}_{\mu\nu} h + {\rm h.c.} \Big]
+g (1-\frac12 \bar c_{ H}) m_{\sss W} {W^-}_\mu {W^+}^\mu h\nonumber\\
& -&\bigg[
\tilde y_u \frac{1}{\sqrt{2}} \big[{\bar u} P_R u\big] h +
\tilde y_d \frac{1}{\sqrt{2}} \big[{\bar d} P_R d\big] h +
\tilde y_\ell \frac{1}{\sqrt{2}} \big[{\bar \ell} P_R \ell\big] h
+ {\rm h.c.} \bigg] \ ,
\end{eqnarray}
where $W_{\mu\nu}$, $Z_{\mu\nu}$ and $F_{\mu\nu}$ are the field strength tensors of the $W$ boson, $Z$ boson and photon, respectively; $m_H$ represents the mass of the Higgs boson; the effective couplings, defined in terms of the dimension-6 operator coefficients, are given in Table~\ref{mtable}, in which $a_{H}$ ($g_H$) is the loop-level SM contribution to the Higgs boson coupling to two photons (gluons).
\begin{table}[h]
\caption{The relations between the parameters of the Lagrangian in the SILH basis (Eq.~(\ref{massb})) and of the Lagrangian in the mass basis (Eq.~(\ref{gbase})). ($c_W\equiv\cos \theta_W$, $s_W\equiv\sin \theta_W$)}
\begin{ruledtabular}\label{mtable}
\begin{tabular}{ll}
$g_{hhh}^{(1)}$= $1 + \frac78 \bar c_{ 6} - \frac12 \bar c_{H}$&$g_{hhh}^{(2)}$= $\frac{g}{m_{\sss W}} \bar c_{H}$\\
$g_{hgg}$= $g_{ H}- \frac{4 \bar c_{g} g_s^2 v}{m_{\sss W}^2}$ &$g_{h\gamma\gamma}$= $a_{ H} - \frac{8 g \bar c_{ \gamma} s_{\sss W}^2}{m_{\sss W}}$ \\
$g^{(1)}_{ hzz}$= $\frac{2 g}{c_{\sss W}^2 m_{\sss W}} \Big[ \bar c_{HB} s_{\sss W}^2 - 4 \bar c_{ \gamma} s_{\sss W}^4 + c_{\sss W}^2 \bar c_{ HW}\Big]$& $g^{(2)}_{ hzz}$= $\frac{g}{c_{\sss W}^2 m_{\sss W}} \Big[(\bar c_{ HW} +\bar c_{ W}) c_{\sss W}^2 + (\bar c_{ B} + \bar c_{ HB}) s_{\sss W}^2 \Big]$ \\
$g^{(3)}_{hzz}$= $\frac{g m_{\sss W}}{c_{\sss W}^2} \Big[ 1 -\frac12 \bar c_{H} - 2 \bar c_{T} +8 \bar c_{\gamma} \frac{s_{\sss W}^4}{c_{\sss W}^2} \Big]$& $g^{(1)}_{ haz}$= $\frac{g s_{\sss W}}{c_{\sss W} m_{\sss W}} \Big[ \bar c_{ HW} - \bar c_{HB} + 8 \bar c_{ \gamma} s_{\sss W}^2\Big]$ \\
$g^{(2)}_{haz}$= $\frac{g s_{\sss W}}{c_{\sss W} m_{\sss W}} \Big[ \bar c_{ HW} - \bar c_{ HB} - \bar c_{ B} + \bar c_{ W}\Big]$& $\tilde y_d$= $y_d \Big[1 -\frac12 \bar c_{ H} + \frac32 \bar c_{ d}\Big]$ \\
$g^{(1)}_{ hww}$= $\frac{2 g}{m_{\sss W}} \bar c_{ HW}$ & $\tilde y_u$= $y_u \Big[1 -\frac12 \bar c_{ H} + \frac32 \bar c_{ u}\Big]$ \\
$g^{(2)}_{ hww}$= $\frac{g}{m_{\sss W}} \Big[ \bar c_{ W} + \bar c_{ HW} \Big]$ & $\tilde y_\ell$= $y_\ell \Big[1 -\frac12 \bar c_{ H} + \frac32 \bar c_{ \ell}\Big]$ \\
\end{tabular}
\end{ruledtabular}
\end{table}
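The relations in Table~\ref{mtable} are straightforward to evaluate numerically for a given set of Wilson coefficients. A minimal sketch of three of them, assuming illustrative electroweak input values (not a specific renormalization scheme):

```python
import math

# Illustrative electroweak inputs (assumed values, not a fixed scheme).
mW = 80.385                     # W boson mass [GeV]
sw2 = 0.2312                    # sin^2(theta_W)
sw, cw = math.sqrt(sw2), math.sqrt(1.0 - sw2)
g = 0.6517                      # SU(2)_L gauge coupling

def g1_hzz(cHB, cgamma, cHW):
    # g^(1)_hzz = (2g / (cW^2 mW)) [cHB sW^2 - 4 cgamma sW^4 + cW^2 cHW]
    return (2.0*g/(cw**2*mW)) * (cHB*sw**2 - 4.0*cgamma*sw**4 + cw**2*cHW)

def g2_hzz(cHW, cW_, cB, cHB):
    # g^(2)_hzz = (g / (cW^2 mW)) [(cHW + cW) cW^2 + (cB + cHB) sW^2]
    return (g/(cw**2*mW)) * ((cHW + cW_)*cw**2 + (cB + cHB)*sw**2)

def g1_haz(cHW, cHB, cgamma):
    # g^(1)_haz = (g sW / (cW mW)) [cHW - cHB + 8 cgamma sW^2]
    return (g*sw/(cw*mW)) * (cHW - cHB + 8.0*cgamma*sw**2)

# SM limit: all Wilson coefficients vanish, so the anomalous couplings do too.
print(g1_hzz(0, 0, 0), g2_hzz(0, 0, 0, 0), g1_haz(0, 0, 0))

# One-at-a-time variation, e.g. cHW = 0.05 as used in the figures below:
print(g1_hzz(0, 0, 0.05))       # reduces to 2 g (0.05) / mW
```

Note how a single non-zero coefficient such as $\bar c_{HW}$ feeds into several mass-basis couplings at once, which is why the one-at-a-time scans below still probe several vertices.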
\begin{figure}
\includegraphics{fd2.pdf}
\caption{The Feynman diagrams for the process $e^+e^-\to \nu \bar{\nu} H$. \label{fd}}
\end{figure}
\begin{figure}
\includegraphics{cs_380.pdf}
\caption{ The total cross section as a function of CP-conserving $\bar{c}_W$=-$\bar{c}_B$, $\bar{c}_{HW}$, $\bar{c}_{HB}$ and $\bar{c}_{\gamma}$ couplings for $e^+e^-\to \nu \bar{\nu} H$ process at the CLIC with $\sqrt s$=380 GeV. \label{fig1}}
\end{figure}
In our analysis we use the parametrization of Ref.~\cite{Alloul:2013naa}, based on the formulation given in Ref.~\cite{Contino:2013kra}. The parametrization is not complete, as described in detail in Section 3 of Ref.~\cite{Alonso:2013hga} and in Ref.~\cite{Brivio:2017bnu}: it removes two fermionic invariants while retaining all the bosonic operators. This choice assumes a completely unbroken $U(3)$ flavor symmetry of the UV theory, in which the coefficients of these operators are unit matrices in flavor space. We therefore assume flavor-diagonal dimension-6 effects. This is sufficient for the purpose of this paper, in which we do not consider higher-order electroweak effects but only present a sensitivity study for the $\bar c_{ W}$, $\bar c_{ B}$, $\bar c_{ HW}$, $\bar c_{ HB}$ and $\bar c_{\gamma}$ couplings.
We have performed leading-order Monte Carlo simulations in \verb|MadGraph5_aMC@NLO| \cite{Alwall:2014hca}, including the effect of the dimension-6 operators on the $H \nu \bar{\nu}$ production mechanism in $e^+e^-$ collisions. The effective Lagrangian of the SM EFT in Eq.~(\ref{massb}) is implemented in \verb|MadGraph5_aMC@NLO| using the FeynRules \cite{Alloul:2013bka} and UFO \cite{Degrande:2011ua} framework.
In this study, we focus on searching for the dimension-6 Higgs-gauge boson couplings via $e^+e^-\to\nu \bar{\nu} H$ process as shown in Fig.\ref{fd}.
This process is sensitive to the Higgs-gauge boson couplings $g_{hzz}$, $g_{hww}$, $g_{hz\gamma}$ and to the couplings of a quark or lepton pair to a single Higgs field, $\tilde y_u$, $\tilde y_d$, $\tilde y_l$, in the mass basis. In the gauge basis, the $e^+e^-\to\nu \bar{\nu} H$ process is sensitive to seven Wilson coefficients related to Higgs-gauge boson couplings and to the effective fermionic couplings: $\bar c_{ W}$, $\bar c_{ B}$, $\bar c_{ HW}$, $\bar c_{ HB}$, $\bar c_{H}$, $\bar c_{ \gamma}$ and $\bar c_{T}$. Due to the small Yukawa couplings of the first- and second-generation fermions, we neglect the effective fermionic couplings. We set $\bar c_{ W} + \bar c_{ B}$ and $\bar c_{T}$ to zero in all our calculations, since the linear combination $\bar c_{ W} + \bar c_{ B}$ and the coefficient $\bar c_{T}$ are strongly constrained by the electroweak precision tests of the oblique parameters $S$ and $T$. The cross sections of the $e^+e^-\to\nu \bar{\nu} H$ process as functions of the $\bar c_{ W}$, $\bar c_{ B}$, $\bar c_{ HW}$, $\bar c_{ HB}$ and $\bar c_{\gamma}$ couplings are shown in Fig.~\ref{fig1}. There have been many studies in the literature considering individual, subsets of, or simultaneous changes of dimension-6 operators \cite{Ellis:2015sca, Ellis:2017kfi}. Here we vary the dimension-6 operators individually and calculate the corresponding new-physics contributions in the analysis; we presume that only one of the effective couplings is non-zero at any given time, while the other couplings are fixed to zero. One can easily see a deviation from the SM for these couplings, even in a small-value region, for the $e^+e^-\to\nu \bar{\nu} H$ process. Therefore, in the next section we consider only these five Higgs-gauge boson effective couplings in the detailed analysis, including detector simulation, of the process $e^+e^-\to\nu \bar{\nu} H$ at CLIC with 380 GeV centre-of-mass energy.
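With a single dimension-6 coefficient switched on at a time, the total cross section is a quadratic polynomial in that coefficient: an SM piece, a linear interference term, and a pure dimension-6 quadratic term, which is why the curves in Fig.~\ref{fig1} are parabolas. A minimal sketch of how these three terms can be extracted from cross sections evaluated at three coupling values; the cross-section numbers below are hypothetical placeholders, not the actual MadGraph output:

```python
# sigma(c) = s0 + s1*c + s2*c^2; recover the three terms from cross
# sections evaluated at c = -h, 0, +h (central finite differences).
h = 0.05
sig_m, sig_0, sig_p = 0.0210, 0.0200, 0.0215   # pb, hypothetical values

s0 = sig_0                                   # SM cross section
s1 = (sig_p - sig_m) / (2*h)                 # interference (linear) term
s2 = (sig_p + sig_m - 2*sig_0) / (2*h**2)    # pure dimension-6 (quadratic) term

def sigma(c):
    """Quadratic parametrization of the cross section in one coefficient."""
    return s0 + s1*c + s2*c**2

print(f"sigma_SM = {s0:.4f} pb, linear = {s1:.4f}, quadratic = {s2:.4f}")
```

Once $s_0$, $s_1$ and $s_2$ are known, the cross section at any coupling value follows without further event generation, which is convenient for the coupling scans.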
\section{Signal and Background Analysis}
We perform a detailed analysis of the $\bar c_{ W}$, $\bar c_{ B}$, $\bar c_{ HW}$, $\bar c_{ HB}$ and $\bar c_{\gamma}$ effective couplings via the $e^+e^-\to\nu \bar{\nu} H$ process, together with the relevant backgrounds, at the first energy stage of CLIC. The $e^+e^-\to\nu \bar{\nu} H$ signal process includes both the s-channel $e^+e^-\to Z^*/\gamma^* \to ZH \to \nu \bar{\nu} H$ process (Higgsstrahlung) and the t-channel $e^+e^-\to \nu\bar{\nu} W^*/W^*\to \nu \bar{\nu} H$ process ($WW$-fusion), as shown in Fig.~\ref{fd}. At the initial CLIC energy stage of $\sqrt s$=380 GeV, these two processes contribute approximately equally to the production cross section of the $e^+e^-\to\nu \bar{\nu} H$ process. In our analysis, the signal ($S+B_H$) includes the effective dimension-6 couplings, the SM contribution, and the interference between them, for the $e^+e^-\to\nu \bar{\nu} H$ process where the Higgs decays to a pair of $b$-quarks. We consider the following relevant backgrounds: $B_H$, the SM-only $e^+e^-\to\nu \bar{\nu} H$ process, with the same final state as the signal, where the Higgs decays to a pair of $b$-quarks; $B_{ZZ}$, the $e^+e^-\to Z Z$ process where one $Z$ decays to $b\bar b$ and the other to $\nu\bar \nu$; $B_{tt}$, the $e^+e^-\to t \bar t$ process where the two $b$-quarks come from $t (\bar t)$ decaying to $W^+b$ ($W^{-} \bar b$), with the $W^{\pm}$ decaying leptonically; and $B_{Z\nu\nu}$, the $e^+e^-\to\nu \bar{\nu} Z$ process in which the $Z$ decays to $b\bar b$. The signal and all backgrounds generated at parton level in \verb|MadGraph5_aMC@NLO| are passed through Pythia 6 \cite{Sjostrand:2006za} for parton showering and hadronization. The detector response is taken into account with the ILD detector card \cite{Behnke:2013lya} in the \verb|Delphes 3.3.3| \cite{deFavereau:2013fsa} package. All events are then analysed using the ExRootAnalysis utility \cite{exroot} with ROOT \cite{Brun:1997pa}.
The pre-selection requires missing transverse energy (${\not}E_T$), no charged leptons, and at least two jets with transverse momenta ($p_T^j$) greater than 20 GeV and pseudo-rapidity ($\eta^j$) between $-2.5$ and 2.5. The energy resolution of jets for $|\eta^j|\leqslant 3$ is assumed to be
\begin{eqnarray}
\frac{\Delta E_{jets}}{E_{jets}}=1.5\%+\frac{50\%}{\sqrt{E_{jets}(GeV)}}
\end{eqnarray}
The momentum resolution for jets as a function of $p_T^j$ and $\eta^j$ is
\begin{eqnarray}
\frac{\Delta p_T^{j}}{p_T^{j}}=(1.0+0.01\times p_T^{j})\times 10^{-3}~~ \textrm{for}~~|\eta^j|\leqslant 1\\
\frac{\Delta p_T^{j}}{p_T^{j}}=(1.0+0.01\times p_T^{j})\times 10^{-2} ~~\textrm{for} ~~~1<|\eta^j|\leqslant 2.4
\end{eqnarray}
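The two resolution parametrizations above can be wrapped in a small helper, e.g.\ for smearing studies; a sketch, directly transcribing the formulas from the text:

```python
import math

def jet_energy_resolution(E):
    """Relative jet energy resolution dE/E = 1.5% + 50%/sqrt(E[GeV]),
    as assumed in the text for |eta| <= 3."""
    return 0.015 + 0.50 / math.sqrt(E)

def jet_pt_resolution(pt, eta):
    """Relative jet pT resolution; the two eta regions from the text."""
    if abs(eta) <= 1.0:
        return (1.0 + 0.01 * pt) * 1e-3
    elif abs(eta) <= 2.4:
        return (1.0 + 0.01 * pt) * 1e-2
    raise ValueError("parametrization given only for |eta| <= 2.4")

# e.g. a central 100 GeV jet: 1.5% + 5% = 6.5% energy resolution
print(jet_energy_resolution(100.0))   # -> 0.065
print(jet_pt_resolution(50.0, 0.5))   # -> 0.0015
```

The stochastic $50\%/\sqrt{E}$ term dominates at low jet energies, while the constant 1.5\% term sets the floor for energetic jets.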
Jets are clustered with the anti-$k_t$ algorithm \cite{Cacciari:2008gp} using FastJet \cite{Cacciari:2011ma} with a cone radius of $R = 0.5$. In order to select the signal and background events, the following kinematic cuts and requirements are applied:
\textbf{i)} We require at least two jets tagged as $b$-jets, which significantly suppresses the light-quark jet backgrounds; these two $b$-jets are used to reconstruct the Higgs boson mass. \textbf{ii)} The $b$-tagged jet with the highest $p_T$ is labelled $b1$, and the one with the lower $p_T$ is $b2$. Fig.~\ref{fig2} shows the $p_T$ distributions of $b1$ and $b2$ for the signal (with $\bar{c}_{HW}$=0.05) and all relevant background processes versus the Higgs boson mass reconstructed from $b1$ and $b2$ ($M_{b,\bar b}$). As can be seen in Fig.~\ref{fig2}, requiring $p_T^{b1}>50$ GeV, $p_T^{b2}>$30 GeV and a $b$-tagged jet pseudo-rapidity of $|\eta^{b1,b2}|\leqslant 2.0$ reduces $B_{ZZ}$ and $B_{Z\nu\nu}$. In the ILD detector card, both the $b$-tagging efficiency and the misidentification rates are given as functions of the jet transverse momentum. For a leading-jet ($b1$) transverse momentum ranging from 50 GeV to 180 GeV, the $b$-tagging efficiency is between 64\% and 72\%, the $c$-jet misidentification rate is 17\%-20\%, and the light-jet misidentification rate is 1.2\%-1.76\%.
The missing transverse energy (${\not}E_T$) and the scalar transverse energy sum ($H_T$) for the signal (with $\bar{c}_{HW}$=0.05) and all relevant background processes versus $M_{b,\bar b}$ are shown in Fig.~\ref{fig3}. \textbf{iii)} The missing transverse energy is required to satisfy ${\not}E_T>30$ GeV to suppress the backgrounds in the low-missing-energy region. \textbf{iv)} To reduce in particular the $t\bar t$ background, the scalar transverse energy sum is required to satisfy 100 GeV $<H_T<$ 200 GeV.
Normalized distributions of the invariant mass of the Higgs boson reconstructed from $b\bar b$ for the signal with $\bar{c}_{HW} $=0.05, $\bar{c}_{HB} $=0.05, $\bar{c}_{\gamma} $=0.05, $\bar{c}_{W}=- \bar{c}_{B}$=0.05 and for the relevant background processes are given in Fig.~\ref{fig4}. \textbf{v)} Finally, the Higgs boson invariant mass reconstructed from the two $b$-jets is required to be in the range 92 GeV $< M_{inv}^{rec}(b1,b2)< 136$ GeV. The kinematic distributions for each process are normalized to the number of expected events, defined as the cross section of each process times the integrated luminosity $L_{int}$=500 fb$^{-1}$.
The effects of the cuts used in the analysis can be seen in Table~\ref{tab2}, which shows the number of events after each cut. Requiring two $b$-tagged jets reduces the $B_{ZZ}$, $B_{tt}$ and $B_{Z\nu\nu}$ backgrounds more than the signal $S+B_H$ and the same-final-state background $B_H$. Cut-2 affects both the signal and all relevant backgrounds, especially $B_{ZZ}$ and $B_{Z\nu\nu}$. The ${\not}E_T$ cut decreases both the $B_{tt}$ and $B_{ZZ}$ backgrounds, while the $H_T$ cut significantly suppresses the $B_{tt}$ background. The overall effect of all cuts is approximately 15\% for the signal $S+B_H$ and the $B_H$ background, and 0.3\%-0.8\% for the other relevant backgrounds.
\begin{table}
\caption{Number of signal and background events after applied kinematic cuts used for the analysis for $\bar{c}_{HW}$=0.05 with $L_{int}=500$ fb$^{-1}$ \label{tab2}}
\begin{ruledtabular}
\begin{tabular}{clcccccc}
Cuts & Definitions&$S+B_H$&$B_H$&$B_{ZZ}$&$B_{tt}$&$B_{Z\nu\nu}$ \\ \hline
Cut-0 & $N_j \geqslant 2$, lepton veto, ${\not}E_T>0$ with pre-selection cuts & 30432.5&19383.2&207847&211384&94405.6 \\
Cut-1 & two jets with $b$-tagging&8003.41&5035.96&10047.4&25636.2&6995.5 \\
Cut-2 & $p_{T}^{b1} > 50$ GeV, $p_{T}^{b2} > 30$ GeV and $| \eta^{b1,\,b2}|\leqslant 2.0$&5862.8&3421.5&6662.6&24766.9&3205.9\\
Cut-3 & ${\not}E_T>30$ GeV&5548.4&3109.1&2705.4&8686.6 &3122.5 \\
Cut-4 & 100 GeV $<H_T<$ 200 GeV &5407.9&3017.9&2158.1&1927.1&2823.3 \\
Cut-5 &100 GeV $< M_{inv}^{rec}(b1,b2)< 136$ GeV& 4211.6&2433.7&17.9&824.8&20.9 \\
\end{tabular}
\end{ruledtabular}
\end{table}
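The per-process selection efficiencies discussed above follow directly from the event counts in Table~\ref{tab2}; a short sketch that computes them (the numbers are transcribed from the table):

```python
# Event counts after Cut-0 ... Cut-5 for each process, from Table II.
counts = {
    "S+B_H": [30432.5, 8003.41, 5862.8, 5548.4, 5407.9, 4211.6],
    "B_H":   [19383.2, 5035.96, 3421.5, 3109.1, 3017.9, 2433.7],
    "B_ZZ":  [207847.0, 10047.4, 6662.6, 2705.4, 2158.1, 17.9],
    "B_tt":  [211384.0, 25636.2, 24766.9, 8686.6, 1927.1, 824.8],
    "B_Zvv": [94405.6, 6995.5, 3205.9, 3122.5, 2823.3, 20.9],
}

for proc, n in counts.items():
    overall = n[-1] / n[0]                      # Cut-5 relative to Cut-0
    per_cut = [n[i+1] / n[i] for i in range(len(n) - 1)]
    print(f"{proc:6s} overall = {100*overall:6.3f}%  "
          f"per-cut = {[round(100*e, 1) for e in per_cut]}")
```

The per-cut ratios make the discussion above quantitative: for instance, the $H_T$ cut (Cut-4) retains only about a fifth of the $B_{tt}$ events passing Cut-3, while barely affecting the signal.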
\begin{figure}
\includegraphics[scale=0.16]{2D_ptb1_SB_chw005.pdf}
\includegraphics[scale=0.16]{2D_ptb1_BH.pdf}
\includegraphics[scale=0.16]{2D_ptb1_BZZ.pdf}
\includegraphics[scale=0.16]{2D_ptb1_Btt.pdf}
\includegraphics[scale=0.16]{2D_ptb1_BZvv.pdf}\\
\includegraphics[scale=0.16]{2D_ptb2_SB_chw005.pdf}
\includegraphics[scale=0.16]{2D_ptb2_BH.pdf}
\includegraphics[scale=0.16]{2D_ptb2_BZZ.pdf}
\includegraphics[scale=0.16]{2D_ptb2_Btt.pdf}
\includegraphics[scale=0.16]{2D_ptb2_BZvv.pdf}
\caption{ Normalized distributions of transverse momentum of tagged b-jets; $b1$ (first row) and $b2$ (second row) versus reconstructed Higgs boson-mass from $b1$ and $b2$ ($M_{b,\bar b}$) for signal with $\bar{c}_{HW} $=0.05 and relevant background processes. \label{fig2}}
\end{figure}
\begin{figure}
\includegraphics[scale=0.16]{2D_met_SB_chw005.pdf}
\includegraphics[scale=0.16]{2D_met_BH.pdf}
\includegraphics[scale=0.16]{2D_met_BZZ.pdf}
\includegraphics[scale=0.16]{2D_met_Btt.pdf}
\includegraphics[scale=0.16]{2D_met_BZvv.pdf}\\
\includegraphics[scale=0.16]{2D_HT_SB_chw005.pdf}
\includegraphics[scale=0.16]{2D_HT_BH}
\includegraphics[scale=0.16]{2D_HT_BZZ.pdf}
\includegraphics[scale=0.16]{2D_HT_Btt.pdf}
\includegraphics[scale=0.16]{2D_HT_BZvv.pdf}
\caption{ Normalized distributions of missing transverse energy (first row) and scalar transverse energy sum (second row) for signal versus reconstructed Higgs boson-mass from $b1$ and $b2$ ($M_{b,\bar b}$) with $\bar{c}_{HW} $=0.05 and relevant backgrounds processes. \label{fig3}}
\end{figure}
\begin{figure}
\includegraphics[scale=0.4]{ratio_clic_005_htcut_chw.pdf}
\includegraphics[scale=0.4]{ratio_clic_chb005_htcut.pdf}\\
\includegraphics[scale=0.4]{ratio_clic_ca005_htcut.pdf}
\includegraphics[scale=0.4]{ratio_clic_005_htcut_cw.pdf}
\caption{ Normalized distributions of reconstructed invariant mass of Higgs-boson from $b\bar b$ for signal with $\bar{c}_{HW} $=0.05, $\bar{c}_{HB} $=0.05, $\bar{c}_{\gamma} $=0.05, $\bar{c}_{W}=- \bar{c}_{B}$=0.05 and relevant backgrounds processes. \label{fig4}}
\end{figure}
\section{Sensitivity of Higgs-Gauge Boson Couplings}
We calculate the sensitivity to the dimension-6 Higgs-gauge boson couplings in the $e^+e^-\to \nu \bar{\nu} H$ process by applying a $\chi^{2}$ criterion with and without systematic errors. The $\chi^{2}$ function is defined as follows:
\begin{eqnarray}
\chi^{2} (\bar{c})=\sum_{i}^{n_{bins}}\left(\frac{N_{i}^{NP}(\bar{c})-N_{i}^{B}}{N_{i}^{B}\Delta_i}\right)^{2}
\end{eqnarray}
where $N_i^{NP}$ is the total number of events in the presence of the effective couplings ($S$) plus the total background ($B_H$, $B_{ZZ}$, $B_{tt}$ and $B_{Z\nu\nu}$) in the $i$th bin of the invariant mass distribution of the reconstructed Higgs boson, $N_i^B$ is the number of events of the total background in the same bin, and $\Delta_i=\sqrt{\delta_{sys}^2+\frac{1}{N_i^B}}$ is the combined systematic ($\delta_{sys}$) and statistical error in each bin. The numerator thus counts the extra events due to the presence of the new operators.
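The limit-setting procedure above can be sketched in a few lines: compute $\chi^2$ as a function of the coupling and keep the interval below the 95\% C.L. threshold (3.84 for one parameter). The per-bin background yields and the quadratic dependence of the signal on the coupling used here are hypothetical placeholders; in the real analysis they come from the binned $M_{b,\bar b}$ distributions after all cuts.

```python
import math

n_bins = 10
N_B = [300.0] * n_bins     # background yield per bin (hypothetical)
a   = [50.0] * n_bins      # interference term per bin (hypothetical)
b   = [4000.0] * n_bins    # quadratic term per bin (hypothetical)

def chi2(c, delta_sys=0.0):
    """Chi^2 of the text: sum over bins of ((N_NP - N_B)/(N_B*Delta))^2."""
    total = 0.0
    for i in range(n_bins):
        N_NP = N_B[i] + a[i]*c + b[i]*c*c              # background + EFT signal
        delta = math.sqrt(delta_sys**2 + 1.0/N_B[i])   # combined error
        total += ((N_NP - N_B[i]) / (N_B[i]*delta))**2
    return total

# Scan the coupling; chi^2 <= 3.84 corresponds to 95% C.L. for one parameter.
cs = [i * 1e-4 for i in range(-2000, 2001)]
allowed = [c for c in cs if chi2(c) <= 3.84]
print(f"95% C.L. interval: [{min(allowed):.4f}, {max(allowed):.4f}]")
```

Increasing `delta_sys` inflates $\Delta_i$ and therefore widens the allowed interval, which is the mechanism behind the weakening of the bounds when systematic uncertainties are included.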
In this analysis we focus on the $\bar{c}_{HB}$, $\bar{c}_{W}=- \bar{c}_{B}$ and $\bar{c}_{HW}$ couplings, which are the main coefficients contributing to the $e^+e^-\to\nu \bar{\nu} H$ signal process. The 95\% Confidence Level (C.L.) limits including only the statistical error on the dimension-6 Higgs-gauge boson couplings at $\sqrt s$=380 GeV and $L_{int}$=500 fb$^{-1}$ (CLIC-380) are compared in Fig.~\ref{fig5} with those for the LHC at 14 TeV centre-of-mass energy with integrated luminosities of 300 fb$^{-1}$ (LHC-300) and 3000 fb$^{-1}$ (LHC-3000) \cite{Englert:2015hrx}. We see that CLIC-380 would be significantly more sensitive to $\bar{c}_{HW}$ and somewhat more sensitive to $\bar{c}_{W}=- \bar{c}_{B}$, whereas the sensitivity to $\bar{c}_{HB}$ is comparable with the expected LHC results. The predicted limits for the future lepton colliders ILC \cite{Khanpour:2017cfq,Ellis:2015sca} with an integrated luminosity of $L_{int}=$300 fb$^{-1}$ at $\sqrt s=$ 500 GeV, FCC-ee \cite{Ellis:2015sca} with $L_{int}$=10 ab$^{-1}$ at $\sqrt s=$ 240 GeV, and CEPC \cite{Ge:2016tmm} with $L_{int}$=5 ab$^{-1}$ at $\sqrt s=$ 240 GeV are also shown in Fig.~\ref{fig5}. In order to include systematic uncertainties, we recompute the bounds. For example, including a conservative 10\% systematic uncertainty, the constraint on $\bar{c}_{HW}$ is $[-0.07959; 0.02423]$; this bound is about four times weaker than the limit obtained without systematic uncertainties.
\begin{figure}
\includegraphics[scale=0.8]{limit_lv_rv.pdf}
\caption{ Obtained allowed range (CLIC-380), LHC at 14 TeV center of mass energies for the integrated luminosity of 300 fb$^{-1}$ (LHC-300) and 3000 fb$^{-1}$ (LHC-3000) \cite{Englert:2015hrx}, ILC-300 at $\sqrt s=$ 500 GeV with $L_{int}=$300 fb$^{-1}$ \cite{Khanpour:2017cfq,Ellis:2015sca}, FCC-ee for $L_{int}$=10 $ab^{-1}$ at $\sqrt s=$ 240 GeV \cite{Ellis:2015sca}, CEPC for $L_{int}$=5 $ab^{-1}$ at $\sqrt s=$ 240 GeV \cite{Ge:2016tmm} at 95\% C.L. for $\bar{c}_{HW} $, $\bar{c}_{HB} $, $\bar{c}_{W}=- \bar{c}_{B}$ coefficients. The limits are each derived with all other coefficients set to zero. \label{fig5}}
\end{figure}
\section{Conclusions}
We have investigated the CP-conserving dimension-6 operators coupling the Higgs boson to the other SM gauge bosons via the $e^+e^-\to\nu \bar{\nu} H$ process, using an effective Lagrangian approach at the first energy stage of CLIC ($\sqrt s=380$ GeV, $L_{int}$=500 fb$^{-1}$). We have used the leading-order strongly interacting light Higgs basis, assuming vanishing tree-level electroweak oblique parameters and flavor universality of the new-physics sector. We analyzed only the hadronic ($b\bar b$) decay channel of the Higgs boson, including the dominant background processes and realistic detector effects. We have shown the kinematic distributions of the final-state $b$-jets, the missing transverse energy, the scalar transverse energy sum and the invariant mass. Since the signal final state consists of two neutrinos and two $b$-jets, the distributions of missing transverse energy and scalar transverse energy sum are used to define the cut-based analysis. We have obtained 95\% C.L. limits on the dimension-6 operators by analysing the invariant mass distribution of the two $b$-jets from the Higgs decay for the $e^+e^-\to\nu \bar{\nu} H$ signal process and the dominant backgrounds. The $e^+e^-\to\nu \bar{\nu} H$ process is more sensitive to the $\bar{c}_{HW}$ coupling than to the other dimension-6 couplings at the first energy stage of CLIC. Our results show that CLIC with $\sqrt s=380$ GeV and $L_{int}$=500 fb$^{-1}$ will be able to probe the dimension-6 Higgs-gauge boson couplings in the $e^+e^-\to\nu \bar{\nu} H$ process, especially $\bar{c}_{HW}$, at scales beyond the HL-LHC bounds, while the sensitivity to the $\bar{c}_{HB}$ and $\bar{c}_{W}=- \bar{c}_{B}$ couplings is competitive with it.
\begin{acknowledgments}
This work was partially supported by the Abant Izzet Baysal University Scientific Research Projects under Project no.\ 2017.03.02.1137. H. Denizli's work was partially supported by the Turkish Atomic Energy Authority (TAEK) under grant No.\ 2013TAEKCERN-A5.H2.P1.01-24. The authors would like to thank the CLICdp group for discussions, especially Philipp G. Roloff for valuable suggestions in the CLICdp Working Group analysis meeting. The authors would also like to thank L. Linssen for encouraging us to join the CLICdp collaboration.
\end{acknowledgments}
\section{Introduction}\label{sec:intro}
A Markov decision process (MDP)~\cite{Baier2008} is a state-based
Markov model in which a state can perform one of the available action-labeled
transitions after which it ends up in a next state according to a probability
distribution on states. The choice of a transition to take is nondeterministic,
but once a transition is chosen the behaviour is probabilistic.
MDPs provide a valuable mathematical framework to solve control and dependability problems in a wide range of applications,
including the control of epidemic processes~\cite{lefevre81}, power management~\cite{Qui2001}, queueing systems~\cite{Sennott1998}, and cyber-physical systems~\cite{Ayala2012}.
MDPs are also known as reactive probabilistic systems~\cite{LS91:ic,GlabbeekSS95} and closely related to probabilistic automata~\cite{SL94:concur}.
In this paper, we study \emph{parametric} Markov decision processes (PMDPs)~\cite{ChenHHKQ013,HahnHZ11}. These are models in which (some of) the transition probabilities depend on a set of parameters. An example of an action in a PMDP is tossing a (possibly unfair) coin which lands heads with probability $p$ and tails with probability $1-p$ where $p \in [0,1]$ is a parameter. Hence, a PMDP represents a whole family of MDPs---one for each valuation of the parameters.
We study reachability properties in PMDPs. To explain what we do exactly, let us take a step back. If an MDP can only perform a single action in each state, then it is a Markov chain (MC). If a PMDP can perform a single action in each state, then it is a \emph{parametric} Markov chain (PMC). Given a start state and a target state in a PMC, the probability of reaching the target from the start state is a \emph{rational function} in the set of parameters. This rational function can be elegantly computed by the method of Daws~\cite{DC05}, which provides an arithmetic interpretation of regular expressions. The method has been further developed and efficiently implemented in the tool PARAM~\cite{HHWZ2010,HahnHZ11b,HahnHZ11}.
Clearly, there is no such thing as \emph{the} probability of reaching a target state from a starting state in an MDP: such a reachability probability depends on which actions were taken along the way, i.e. on how the nondeterministic choices were resolved. What is usually of interest, though, are the min/max reachability probabilities, i.e., among all possible ways to resolve the nondeterministic choices, those that provide the minimal/maximal probability of reaching the target. Nondeterministic choices are resolved using schedulers or policies, and luckily, when it comes to min/max reachability probabilities, \emph{simple} schedulers suffice~\cite{Baier2008}. Simple schedulers are deterministic and memoryless, i.e. history-independent. Given an MDP, a simple scheduler induces an MC, and the reachability probabilities under this scheduler are simply the reachability probabilities of the induced MC.
With PMDPs, the situation is even more delicate. The probability of reaching a target state from a starting state depends on the scheduler, i.e. on how the nondeterministic choices were resolved, as well as on the values of the parameters. The full reachability picture looks like a sea --- each scheduler imposes a rational function --- a \emph{wave} --- over the parameter range; the sea then consists of all the waves.
There are two possible scenarios of interest:
\begin{itemize}
\item[(1)] We have access to the parameters.
\item[(2)] We have no access to the parameters, they represent uncertainty or noise or choices of the environment.
\end{itemize}
In case (1), parameter synthesis is the problem to solve. The parameter synthesis problem comes in two flavours: (a)~Find the parameter values that maximise / minimise the reachability probability; (b)~For each value of the parameters, find the max/min reachability probability. These are the problems that have attracted most attention in the analysis of PMCs~\cite{BartocciGKRS11,JansenCVWAKB14,PathakAJTK15,
DehnertJJCVBKA15,QuatmannD0JK16} and PMDPs~\cite{ChenHHKQ013,HahnHZ11}, see Section~\ref{sec:rel-work} for more details.
In this paper, we consider case (2) and propose solutions for imposing bounds on the reachability probabilities throughout the whole parameter range.
In particular we:
\begin{itemize}
\item Start by enumerating all simple schedulers and computing their corresponding rational functions.
\item Identify classes of optimal schedulers, for different problems of interest, see Section~\ref{sec:scheds}.
\item Provide a tool that computes optimal schedulers in each of the classes for a given PMDP, see Section~\ref{sec:imp-exp}.
\end{itemize}
We admit that we take up this task knowing that it is computationally hard. Already the number of simple schedulers is exponential in the number of states of the involved PMDP. The optimisation itself is in general also hard (computing maxima, minima, and integrals of the involved rational functions); see Section~\ref{sec:imp-exp} for references and more details. Nevertheless, the analysis that we aim at is not an online analysis, but rather a preprocessing step, and even if it may only work on small examples, it provides insight into the behaviour of a system and its schedulers.
Our tool extensively uses the state-of-the-art tools PARAM1~\cite{HHWZ2010,HahnHZ11b} and PARAM2~\cite{HahnHZ11} for efficient computation of the rational functions, see Section~\ref{sec:rel-work} and Section~\ref{sec:imp-exp} for details. Once we have all schedulers and their respective waves, we feed the waves to a numerical tool that allows us to calculate the optimal schedulers.
\begin{wrapfigure}{R}{0.3\textwidth}
\includegraphics[width=0.25\textwidth]{Pics/2x2-pic.pdf}
\caption{$2\times 2$ Labyrinth with sink at (1,2)}
\label{fig:2x2pic-ex}
\end{wrapfigure}
We experiment with a class of examples describing the behaviour of a robot walking in a labyrinth grid. Each position on the grid is a state of the MDP, and the available actions are $N$, $S$, $E$, and $W$, describing the directions (north, south, east, and west) of a possible move. Not all actions are available in every state: some states represent holes (sinks) in which no action is available, while in others, corresponding to border positions, some actions are disabled.
See Section~\ref{sec:examples} for further description of our class of examples. One small concrete example in this class is the PMDP describing a $2\times 2$ grid with a sink at position $(1,2)$. In $(1,1)$ the actions $N$ and $E$ are available, in $(2,1)$ the actions $N$ and $W$, and in $(2,2)$, the actions $S$ and $W$.
The model is parametric, with two parameters $l$ and $r$ having the following meaning: in a state $s$ with an enabled action $M$, the robot moves forward to its intended state $s'$ with probability $1-l-r$, ends up in the state $s_l$ left of $s$ (in the direction of the move) with probability $l$, or in the state $s_r$ right of $s$ (in the direction of the move) with probability $r$, provided both states $s_l$ and $s_r$ exist. If one of $s_l$ or $s_r$ does not exist, then we consider two scenarios:
\begin{itemize}
\item Fixed failure: In this scenario, the probability to the existing state $s_l$ or $s_r$ remains $l$ or $r$, but the probability of reaching $s'$ increases to $1-l$ or $1-r$, respectively.
\item Fixed success: Here, the probability to $s'$ remains the same, and the probability to $s_l$ or $s_r$ (whichever exists) becomes $l+r$.
\end{itemize}
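The two boundary scenarios can be made concrete in a few lines of code. The sketch below is illustrative only: the function name and the dictionary-based representation of distributions are our own choices, not part of any tool. Given the intended successor and the possibly missing left/right neighbours, it returns the successor distribution of one move under either scenario.

```python
def move_distribution(s_fwd, s_left, s_right, l, r, scenario="fixed_failure"):
    """Successor distribution of one move with failure parameters l, r.

    s_fwd is the intended successor; s_left / s_right are the states to
    the left and right of the move direction (None if that neighbour
    does not exist, i.e. at the boundary of the grid).
    """
    if s_left is not None and s_right is not None:
        return {s_fwd: 1 - l - r, s_left: l, s_right: r}
    if s_left is None and s_right is None:
        return {s_fwd: 1.0}
    side, p_side = (s_left, l) if s_left is not None else (s_right, r)
    if scenario == "fixed_failure":
        # failure probability to the existing side is unchanged;
        # the success probability absorbs the missing side's mass
        return {s_fwd: 1 - p_side, side: p_side}
    # fixed success: the success probability stays 1-l-r and the
    # existing side receives the full failure mass l+r
    return {s_fwd: 1 - l - r, side: l + r}
```

A boundary move with only the left neighbour present thus gets probabilities $1-l$ and $l$ in the fixed-failure scenario, and $1-l-r$ and $l+r$ in the fixed-success one.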
Figure~\ref{fig:state11-ex} shows the behaviour of state $(1,1)$ in both scenarios and
Figure~\ref{fig:2x2-sea} pictures all waves -- rational functions corresponding to reachability of target state $(2,2)$ from the starting state $(1,1)$ -- in each of the two scenarios.
\begin{figure}\centering
\begin{subfigure}{0.5\textwidth}
\centering
\includegraphics[width=0.45\linewidth]{Pics/State11-ff-pic.pdf}
\caption{Fixed failure}
\label{fig:state11-ff-ex}
\end{subfigure}%
\begin{subfigure}{0.5\textwidth}
\centering
\includegraphics[width=0.45\linewidth]{Pics/State11-fs-pic.pdf}
\caption{Fixed success}
\label{fig:state11-fs-ex}
\end{subfigure}
\caption{The behaviour of state $(1,1)$}
\label{fig:state11-ex}
\end{figure}
\begin{figure}\centering
\begin{subfigure}{0.5\textwidth}
\centering
\includegraphics[width=0.45\linewidth]{Pics/2x2-lr-ff-intro.pdf}
\caption{Fixed failure}
\label{fig:2x2-sea-ff}
\end{subfigure}%
\begin{subfigure}{0.5\textwidth}
\centering
\includegraphics[width=0.45\linewidth]{Pics/2x2-lr-fs-intro.pdf}
\caption{Fixed success}
\label{fig:2x2-sea-fs}
\end{subfigure}
\caption{The sea of reachability probabilities from state $(1,1)$ to state $(2,2)$}
\label{fig:2x2-sea}
\end{figure}
As we can see even in this small example, the reachability probability varies through the parameter range and significantly depends on the chosen scheduler. For different purposes, different schedulers may be preferred. We identify ten classes of optimal schedulers that may be preferred in certain cases. For example, one may wish to use a scheduler that guarantees highest reachability probability for any value of the parameter, if such a scheduler exists. We call such a scheduler \emph{dominant}. That would be the red scheduler in Figure~\ref{fig:2x2-sea-ff}. In Figure~\ref{fig:2x2-sea-fs} there is no dominant scheduler. However, one may prefer a scheduler that reaches the maximum value (reachability probability $1$ in this case) for some value of the parameter. We call such schedulers \emph{optimistic}. In Figure~\ref{fig:2x2-sea-fs} the red and the green schedulers are optimistic. In addition, one may prefer the red over the green, as under the assumption of a uniform distribution of parameters, the red has a higher value over a larger parameter region --- we call such schedulers \emph{expectation} schedulers. For yet another purpose, one may prefer the yellow or the blue scheduler, as its difference in reachability probabilities is the smallest --- the \emph{bound} scheduler according to our definition.
See Section~\ref{sec:scheds} for the exact definitions of all classes of optimal schedulers.
Finally, we mention that we started analysing simple schedulers as a first step in the scheduler analysis of PMDPs. As we discuss in Section~\ref{sec:conc}, we are aware that optimal schedulers (in our classes) need not be simple. Nevertheless, we believe that conquering simple schedulers is an important first step.
\section{Related Work}\label{sec:rel-work}
In the last decade there has been a growing interest in studying parametric probabilistic
models~\cite{DC05,LMST2007} where some of the probabilities (or rates) in the models
are not known a priori. These models are very useful when certain quantities (e.g. fault rates,
packet loss ratios, etc.) are partially available (which is often the case)
or unavailable at the design time of a system.
In his seminal work~\cite{DC05}, Daws studies the problem of symbolic model checking of parametric
probabilistic Markov chains. He provides a method based on regular expressions extraction and state elimination
to symbolically express the probability to reach a target state from a starting state as a multivariate rational function whose domain is the parameter space.
This technique was further investigated and implemented in the PARAM1 and PARAM2 tools~\cite{HHWZ2010,HahnHZ11b}
and it is now also included in the popular PRISM model checker~\cite{KwiatkowskaNP11}.
In this context, the problem of \emph{parameter synthesis} for a parametric Markov chain consists
of solving a constrained nonlinear optimisation problem where the objective function
is a multivariate rational function representing the probability to satisfy a given reachability property
depending on the parameters.
As discussed in~\cite{Kreinovich1998bi,LMST2007} and later in this paper, when the order of these multivariate rational functions
is high, such a constrained optimisation problem can become computationally very expensive.
In~\cite{BartocciGKRS11} Bartocci et al. introduce a complementary technique
to \emph{parameter synthesis}, called \emph{model repair},
that exploits the PARAM1 tool in combination with a nonlinear optimisation tool to find automatically
the minimal change of the parameter values required for a model to satisfy
a given reachability property that the model originally violates.
In this case the problem boils down to solving a nonlinear optimisation program whose
\emph{objective function} is an L2-norm (quadratic, and indeed suitable for convex optimisation)
measuring the distance between the original parameter values and the new ones, and whose
\emph{constraints} are given by the multivariate rational function associated
with the reachability property.
Recently, more sophisticated symbolic parameter synthesis techniques~\cite{JansenCVWAKB14,PathakAJTK15,
DehnertJJCVBKA15,QuatmannD0JK16}
based also on SMT solvers and greedy approaches~\cite{PathakAJTK15} have further improved this
field of research.
At the same time statistical-based approaches leveraging powerful machine learning
techniques~\cite{BortolussiMS16,BartocciBNS15} have been shown to provide better
scaling of the model checking problem for large parametric continuous Markov chains when
the number of parameters is limited and the event of satisfying the property is not rare.
All the aforementioned methods
do not natively support nondeterministic choice and are indeed not suitable for solving
parametric Markov decision processes. The parametric model checking problem
for this class of models has been addressed so far in the literature using two complementary
methods~\cite{ChenHHKQ013,HahnHZ11}.
The first method, implemented in PARAM2~\cite{HahnHZ11}, is a
\emph{region-based approach} where the parameter space is divided into regions
representing sets of parameter valuations. For each region, lower
and upper bounds on optimal parameter values are computed by
evaluating the edge points of the regions. Given a desired level
of precision for the result as input, the algorithm decides whether to further split the region
into smaller ones to be explored or to terminate with the intervals found.
The correctness and the termination of this algorithm are guaranteed
only under certain assumptions, as discussed in~\cite{HahnHZ11}.
The second method~\cite{ChenHHKQ013} is a sampling-based approach:
sampling methods such as the Metropolis-Hastings algorithm,
particle swarm optimisation, and the cross-entropy method are used
to search the parameter space. These heuristics usually do not guarantee
that globally optimal parameters will be found. Furthermore, when the regions of the parameters satisfying
a requirement are very small, a large number of simulations is required.
We just became aware of a very recent work of Cubuktepe et al.~\cite{Cubuktepe2017}
(to appear in TACAS'17) where the authors consider the problem
of parameter synthesis in parametric Markov decision processes
using signomial programs, a class of nonconvex optimisation problems
for which it is possible to provide suboptimal solutions.
\section{Markov Chains and Markov Decision Processes}\label{sec:mdp}
\begin{definition}[Markov chain]\label{def:MC}
A (discrete-time) Markov chain (MC) is a pair
$M = (S, P)$
where:
\begin{itemize}
\item
$S$ is a countable set of states, and
\item
$P\colon S\times {S} \rightarrow [0,1]$ is a transition
probability function such that for all $s$ in $S$, $\sum_{s'\in{S}}P(s,s')=1$.\hfill\ensuremath{\diamond}
\end{itemize}
\end{definition}
Given an MC $M = (S,P)$ and two states $s,t\in S$, we denote the
probability to reach $t$ from $s$ by $\Pre^M(s,t)$. If $M$ is clear from the context, we will omit the superscript in the reachability probability.
We next present the definition of an MDP without atomic propositions and rewards, as they do not play a role for what follows.
\begin{definition}[Markov decision process]\label{def:MDP}
A (discrete-time) Markov Decision Process (MDP) is a triple
$M = (S, \Act, P) $
where:
\begin{itemize}
\item
$S$ is a countable set of states,
\item
$\Act$ is a set of actions,
\item
$P\colon S\times \Act \times{S} \rightarrow [0,1]$ is a transition probability
function such that for all $s$ in $S$ and $a$ in $\Act$ we have $\sum_{s'\in{S}}P(s,a, s') \in \{0,1\} $.\hfill\ensuremath{\diamond}
\end{itemize}
\end{definition}
In this paper we only consider finite MCs and MDPs, that is, MCs and MDPs in which the set of states $S$ (and the set of actions $\Act$) is finite.
If needed, we may also specify a distinguished initial state $s_0 \in S$ in an MC or an MDP.
An action $a$ is \emph{enabled} in an MDP state $s$ iff $\sum_{s'\in{S}}P(s,a, s') = 1$.
We denote by $\Act(s)$ the set of enabled actions in state $s$. It is often required that $\Act(s) \neq \emptyset $ in an MDP, but we omit this requirement. A state $s$ for which $\Act(s) = \emptyset$ is called a \emph{sink}.
A \emph{simple scheduler} resolves the nondeterministic choice,
selecting at each non-sink state $s$ one of the enabled actions $a \in \Act(s)$. A synonym for a simple scheduler is deterministic memoryless/history-independent scheduler.
\begin{definition}[Simple scheduler] \label{def:scheduler}
Given an MDP $M = (S, \Act, P)$, a {simple scheduler} $\xi$ of $M$
is a function $\xi \colon S \to \Act + 1$ where $1 = \{\bot\}$ and $+$ denotes disjoint union, satisfying
$\xi(s) \in \Act(s)$ for all $s \in S$ such that $\Act(s) \neq \emptyset$, and $\xi(s) = \bot$ otherwise.\hfill\ensuremath{\diamond}
\end{definition}
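Since a simple scheduler is just a choice of one enabled action per non-sink state, all simple schedulers can be enumerated by taking the product of the per-state choices. The following sketch is our own illustration (the function name and the `enabled`-map representation are assumptions, not PRISM's API):

```python
from itertools import product

def simple_schedulers(enabled):
    """Enumerate all simple schedulers of an MDP.

    `enabled` maps each state to the list of its enabled actions
    (an empty list for sinks).  Each scheduler is a dict state -> action,
    with None at sinks; their number is the product of |Act(s)| over
    all non-sink states, hence exponential in the number of states.
    """
    states = sorted(enabled)
    choices = [enabled[s] if enabled[s] else [None] for s in states]
    for pick in product(*choices):
        yield dict(zip(states, pick))
```

For the $2\times 2$ labyrinth of the introduction, with two enabled actions in each of the three non-sink states, this yields $2\cdot 2\cdot 2 = 8$ schedulers.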
\begin{definition}[Scheduler-induced Markov chain]\label{def:IMC}
Let $\xi$ be a simple scheduler of an MDP $M$. Then the $\xi$-{induced Markov chain} is the Markov chain
$M_\xi = (S,P_\xi)$ where
$P_\xi(s,t) = P(s,\xi(s),t)$ if $\xi(s) \neq \bot$ and $P_\xi(s,s) = 1$ otherwise.\hfill\ensuremath{\diamond}
\end{definition}
Note that in this work we only consider simple schedulers. This justifies our nonstandard (and much simpler) definition of an induced Markov chain.
From now on we will sometimes simply say \emph{scheduler} for a simple scheduler.
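The induced chain of Definition~\ref{def:IMC} is equally direct to construct. The sketch below uses our own representation (a dict `P` mapping state-action pairs to successor distributions; none of these names come from an actual tool) and adds the probability-$1$ self-loop at sinks exactly as in the definition:

```python
def induce_mc(P, scheduler, states):
    """Scheduler-induced Markov chain.

    P maps (state, action) to a successor distribution {t: probability};
    `scheduler` maps states to chosen actions (None at sinks).  Sinks
    receive a probability-1 self-loop, as in the induced-chain definition.
    """
    P_xi = {}
    for s in states:
        a = scheduler.get(s)
        P_xi[s] = {s: 1.0} if a is None else dict(P[(s, a)])
    return P_xi
```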
\begin{definition}[Maximum/Minimum reachability probabilities] \label{def:min-max-reach-prob}
Given an MDP $M = (S,\Act,P)$ and two states $s,t\in S$, the
maximum reachability probability from $s$ to $t$ is
\[
\Pre_{\max}^M(s,t) = \max_\xi \Pre^{M_\xi}(s,t),
\]
\noindent
and similarly, the minimum reachability probability from $s$ to $t$ is given by
\[
\Pre_{\min}^M(s,t) = \min_\xi \Pre^{M_\xi}(s,t),
\]
\noindent where $\xi$ ranges over all simple schedulers. We call a scheduler $\xi$ a \emph{maximal (minimal) scheduler from $s$
to $t$}
iff $\Pre^{M_\xi}(s,t)$ is the maximal (minimal) reachability probability from $s$ to $t$.\hfill\ensuremath{\diamond}
\end{definition}
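For a fixed MC, the reachability probabilities are the least fixed point of a simple equation system and can be approximated by value iteration; combined with scheduler enumeration, this gives $\Pre_{\max}^M(s,t)$ and $\Pre_{\min}^M(s,t)$ by brute force. The sketch below is an illustration under our own representation (states mapped to successor distributions), not the algorithm used by the tools we build on:

```python
def reach_prob(P_mc, target, states, iters=100000):
    """Least-fixed-point reachability probabilities in an MC.

    P_mc maps each state to its successor distribution {t: probability};
    returns x with x[s] approximating the probability of reaching
    `target` from s.  States that cannot reach the target stay at 0.
    """
    x = {s: 0.0 for s in states}
    x[target] = 1.0
    for _ in range(iters):
        x_new = {s: 1.0 if s == target
                 else sum(p * x[t] for t, p in P_mc[s].items())
                 for s in states}
        delta = max(abs(x_new[s] - x[s]) for s in states)
        x = x_new
        if delta < 1e-12:  # converged to the fixed point
            break
    return x
```

Taking the maximum of `reach_prob` over the MCs induced by all simple schedulers then yields the maximum reachability probability, and analogously for the minimum.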
\section{Parametric MCs and MDPs} \label{sec:pmdp}
We first recall the notion of a rational function (following~\cite{HHWZ2010,HahnHZ11b}, with a small restriction). Let $V = \{x_1, \dots, x_n\}$ be a fixed set of variables. An \emph{evaluation} is a function $v \colon V \to \mathbb{R}$. A \emph{polynomial} over $V$ is a function
$$g(x_1, \dots,x_n) = \sum_{i_1, \dots, i_n} a_{i_1, \dots, i_n}x_1^{i_1} \cdots x_n^{i_n},$$
where $i_j \in \mathbb{N}$ for $1 \le j \le n$ and each $a_{i_1, \dots, i_n} \in \mathbb{R}$. A \emph{rational function} over $V$ is a quotient $$f(x_1, \dots, x_n) = \frac{g_1(x_1,\dots,x_n)}{g_2(x_1,\dots,x_n)}$$ of two polynomials $g_1$ and $g_2$ over $V$. By $\mathcal{F}_V$ we denote the set of rational functions over $V$. Hence, a rational function is a symbolic representation of a function from $\mathbb{R}^n$ to $\mathbb{R}$. Given $f \in \mathcal{F}_V$ and an evaluation $v$, we write $f\langle v\rangle$ for $f(v(x_1), \dots, v(x_n))$.
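For illustration, the following sketch evaluates such a rational function at a valuation $v$, i.e. computes $f\langle v\rangle$, using exact arithmetic. The encoding (exponent tuples mapped to coefficients) is our own choice for this example, not the representation used by PARAM:

```python
from fractions import Fraction

def eval_poly(poly, v):
    """Evaluate a polynomial given as {(i1,...,in): coefficient} at the
    valuation v = (v(x1), ..., v(xn)), using exact rational arithmetic."""
    total = Fraction(0)
    for exps, coeff in poly.items():
        term = Fraction(coeff)
        for x, i in zip(v, exps):
            term *= Fraction(x) ** i
        total += term
    return total

def eval_rational(g1, g2, v):
    """f<v> = g1<v> / g2<v> for the rational function f = g1/g2."""
    return eval_poly(g1, v) / eval_poly(g2, v)
```

For example, $f(x_1,x_2) = (x_1 + x_2)/(1 - x_1 x_2)$ evaluated at $v = (1/2, 1/3)$ gives $f\langle v\rangle = 1$.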
It is now straightforward to extend MCs and MDPs with parameters~\cite{DC05,LMST2007,HHWZ2010,HahnHZ11b}. Again, we only consider finite models.
\begin{definition}[Parametric Markov chain]\label{def:PMC}
A parametric (discrete-time) Markov chain (PMC) is a triple $M = (S, V, P) $
where:
\begin{itemize}
\item
$S$ is a finite set of states,
\item $V$ is a finite set of parameters, and
\item $P:S\times {S} \rightarrow \mathcal{F}_V$ is the parametric probability transition function.\hfill\ensuremath{\diamond}
\end{itemize}
\end{definition}
Given a PMC $M = (S,V,P)$, a valuation $v$ of the parameters induces an MC $M_v = (S, P_v)$ where $P_v(s,s') = P(s,s')\langle v\rangle$ for all $s,s' \in S$, if for all $s$ in $S$ we have $\sum_{s'\in{S}}P(s,s')\langle v\rangle=1$. If a valuation $v$ induces a Markov chain on $M$, then we call $v$ \emph{admissible}. The set of all admissible valuations for $M$ is the \emph{parameter space} of $M$.
Similarly, we define parametric MDPs.
\begin{definition}[Parametric Markov Decision Process]\label{def:PMDP}
A parametric (discrete-time) Markov Decision Process (PMDP) is a tuple
$M = (S, \Act, V, P) $
where:
\begin{itemize}
\item
$S$ is a finite set of states,
\item
$\Act$ is a finite set of actions,
\item
$V$ is a finite set of parameters, and
\item
$P:S\times \Act \times{S} \rightarrow \mathcal{F}_V$ is the parametric transition probability function.\hfill\ensuremath{\diamond}
\end{itemize}
\end{definition}
Also here a valuation may induce an MDP from a PMDP, in which case we call it \emph{admissible}.
Given a PMDP $M = (S,\Act,V,P)$, a valuation $v$ of the parameters induces an MDP $M_v = (S,\Act, P_v)$ where $P_v(s,a,s') = P(s,a,s')\langle v\rangle$, if for all $s$ in $S$ and $a$ in $\Act$ we have $\sum_{s'\in{S}}P(s,a,s')\langle v\rangle\in \{0,1\}$.
Also here, the set of admissible valuations is the \emph{parameter space} of $M$.
Notice that a PMDP $M$ and its $v$-induced MDP $M_v$ have the same set of states and actions, as well as the same sets of enabled actions in each state, and therefore they have the same simple schedulers. Now, starting from a PMDP $M$, and given its scheduler $\xi$, one may: (1) first consider the $\xi$-induced PMC $M_\xi$ and then the $v$-induced MC $({M_\xi})_v$ for a valuation $v$, or (2) one first takes the valuation-induced MDP $M_v$ and then its scheduler-induced MC $({M_v})_\xi$. The result is the same and hence we write $M_{\xi,v}$ for the $\xi$-and-$v$-induced MC.
We now fix a source state $s$ and a target state $t$ in a PMDP, and discuss the reachability probabilities, which now depend both on the choice of a scheduler $\xi$ and on the choice of a parameter valuation $v$. Given a valuation $v$ and a scheduler $\xi$, the reachability probability is $\Pre^{M_{\xi,v}}(s,t)$. The (reachability probability) \emph{wave} corresponding to $\xi$ is a rational function $f_\xi$ in the set of parameters, such that $f_\xi\langle v\rangle = \Pre^{M_{\xi,v}}(s,t)$. The (reachability probability) \emph{sea} consists of all $f_\xi$ for all schedulers $\xi$.
We also write (for a PMDP $M$):
$$\begin{array}{lcl}
\Pre_{\max}^{M_v}(s,t) & = & \max_\xi \Pre^{M_{\xi,v}}(s,t),\\
\Pre_{\max}^{M_\xi}(s,t) & = & \max_v \Pre^{M_{\xi,v}}(s,t),\\
\Pre_{\max}^{M}(s,t) & = & \max_{\xi,v} \Pre^{M_{\xi,v}}(s,t),\\
\end{array}$$
and similarly for the minimum reachability probabilities.
\section{Classes of Optimal Schedulers} \label{sec:scheds}
In this section we define and discuss a selection of types of optimal schedulers. This is meant to serve as an invitation for the reader to further develop useful notions of optimality.
Our initial idea is the following: Once we have generated all rational functions (corresponding to all schedulers), a type of optimality assigns a score to each rational function (and hence to the scheduler inducing it). The optimal schedulers of this type then maximise or minimise the assigned score.
We introduce the notion of a \emph{dominant scheduler} and nine additional types of optimal schedulers. These types are:
the optimistic, the pessimistic, the bound, the expectation, the stable, the $\varepsilon$-bounded, the $\varepsilon$-stable, and the $\varepsilon$-bounded- and $\varepsilon$-stable-robust.
We next present the definition for each of them. For simplicity, we may use scheduler and function interchangeably --- thus identifying a scheduler and its induced rational function when no confusion may arise.
\begin{definition}[Dominant scheduler] \label{def:dominant}
A scheduler $\omega$ is dominant if at any parameter valuation $v$, its function has the maximal value of all functions of all schedulers, i.e.
$\forall v. \forall \xi. f_\omega\langle v\rangle \ge f_\xi\langle v\rangle.$ \hfill\ensuremath{\diamond}
\end{definition}
\begin{definition}[Optimistic scheduler] \label{def:optimistic}
A scheduler $\omega$ is optimistic, if its function has the maximal maximum value of all functions of all schedulers, i.e.\\ \hspace*{3cm} $\Pre_{\max}^{M_\omega}(s,t) = \max_\xi \Pre_{\max}^{M_\xi}(s,t) = \Pre_{\max}^{M}(s,t).$\hfill\ensuremath{\diamond}
\end{definition}
\begin{definition}[Pessimistic scheduler] \label{def:pessimistic}
A scheduler $\omega$ is pessimistic, if its function has the maximal minimum value of all functions of all schedulers, i.e. \\ \hspace*{3cm} $\Pre_{\min}^{M_\omega}(s,t) = \max_\xi \Pre_{\min}^{M_\xi}(s,t).$\hfill\ensuremath{\diamond}
\end{definition}
\begin{definition} [Bound scheduler]\label{def:bound}
A scheduler $\omega$ is bound, if its function has the minimal range, i.e. the minimal difference between maximal and minimal value among all functions of all schedulers: \\ \hspace*{3cm} $\Pre_{\max}^{M_\omega}(s,t) - \Pre_{\min}^{M_\omega}(s,t) = \min_\xi \left(\Pre_{\max}^{M_\xi}(s,t) - \Pre_{\min}^{M_\xi}(s,t)\right).$\hfill\ensuremath{\diamond}
\end{definition}
\begin{definition} [$\varepsilon$-Bounded scheduler]\label{def:eps-bounded}
A scheduler $\xi$ is $\varepsilon$-bounded if the length of the (closed-interval) range of its function is bounded by $\varepsilon$, i.e. \\ \hspace*{3cm} $\Pre_{\max}^{M_\xi}(s,t) - \Pre_{\min}^{M_\xi}(s,t) \le \varepsilon$\\
\noindent for a non-negative real number $\varepsilon$.\hfill\ensuremath{\diamond}
\end{definition}
\begin{definition} [$\varepsilon$-Bounded robust scheduler]\label{def:eps-bounded-robust}
A scheduler $\omega$ is $\varepsilon$-bounded robust if it is maximal among all $\varepsilon$-bounded schedulers, i.e. $\forall v. \forall \textrm{~$\varepsilon$-bounded~}\xi. f_\omega\langle v\rangle \ge f_\xi\langle v\rangle.$\hfill\ensuremath{\diamond}
\end{definition}
The intuition behind these types of optimal schedulers is the following. If a user does not know the value of the parameters, then taking the
\begin{itemize}
\item dominant scheduler guarantees that one does as well as possible, independent of the parameters;
\item optimistic scheduler guarantees that one does as well as possible in case the parameters turn out to be the best possible;
\item pessimistic scheduler guarantees that, no matter what the parameters are, even in the worst case we will perform better than the worst case of any other scheduler;
\item bound scheduler guarantees that one will see the minimal difference in reachability probability when varying the parameters;
\item $\varepsilon$-boundedness is an absolute notion guaranteeing that such a scheduler never has a larger difference in reachability probability than $\varepsilon$;
\item finally, $\varepsilon$-bounded robustness gives the maximal scheduler among all $\varepsilon$-bounded ones.
\end{itemize}
Dominant, $\varepsilon$-bounded, and $\varepsilon$-bounded robust schedulers need not exist.
Note that computing optimistic, pessimistic, bound, $\varepsilon$-bounded, and $\varepsilon$-bounded robust schedulers requires computing the maximum and the minimum of the involved functions, which is in general hard~\cite{Kreinovich1998bi}, see Section~\ref{sec:imp-exp} for more details.
The following classes do not require computing extremal values and may provide a better global picture of the reachability probabilities. Their optimality is based on maximising/minimising or bounding the probability mass over the whole parameter space, also allowing for specifying a probability distribution on the parameter space. If the distribution of parameters is unknown, we assume a uniform distribution. However, it is likely that a distribution of parameters is known or can be estimated, in which case these schedulers take it into account. From now on, let $p$ denote a probability density function over the parameter space.
Before we proceed, let us define the expectation and variance of a scheduler. The expectation of a scheduler $\xi$ is $E(\xi) = E(f_\xi) = \int f_\xi\, dp$ and the variance is $\Var(\xi) = E\big((\xi - E(\xi))^2\big)$. Note that here $\xi - E(\xi)$ denotes the rational function $f_\xi - E(\xi)$.
\begin{definition}[Expectation Scheduler] \label{def:expected}
A scheduler is an expectation scheduler, if its function has the maximal expected value of all functions of all schedulers, i.e. $\omega$ is an expectation scheduler if $E(\omega) = \max_\xi E(\xi)$.\hfill\ensuremath{\diamond}
\end{definition}
\begin{definition} [Stable scheduler]\label{def:stable}
A scheduler $\omega$ is stable, if its function has the minimal variance, i.e. \\ \hspace*{3cm} $\Var(\omega) = \min_\xi \Var(\xi).$\hfill\ensuremath{\diamond}
\end{definition}
\begin{definition} [$\varepsilon$-Stable scheduler]\label{def:eps-stable}
A scheduler $\xi$ is $\varepsilon$-stable if its variance is bounded by $\varepsilon$, i.e. \\ \hspace*{3cm} $\Var(\xi) \le \varepsilon$\\
\noindent for a non-negative real number $\varepsilon$.\hfill\ensuremath{\diamond}
\end{definition}
\begin{definition} [$\varepsilon$-Stable robust scheduler]\label{def:eps-stable-robust}
A scheduler $\omega$ is $\varepsilon$-stable robust if its expectation is maximal among all $\varepsilon$-stable schedulers, i.e. $ E(\omega) = \max_{\textrm{~$\varepsilon$-stable~}\xi} E(\xi).$\hfill\ensuremath{\diamond}
\end{definition}
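As a numerical illustration of the expectation and variance used in the definitions above, the sketch below approximates $E(\xi)$ and $\Var(\xi)$ for a single parameter under the uniform density on $[0,1]$ by a midpoint rule. The function name and the representation of a wave as a plain Python callable are assumptions of this sketch; our tool delegates such computations to Wolfram Mathematica.

```python
def expectation_and_variance(f, grid=10000):
    """Midpoint-rule approximation of E(f) and Var(f) for a wave f
    over a single parameter uniformly distributed on [0, 1]."""
    h = 1.0 / grid
    vals = [f((i + 0.5) * h) for i in range(grid)]  # midpoint samples
    mean = sum(vals) * h                            # E(f) = int f dp
    var = sum((v - mean) ** 2 for v in vals) * h    # E((f - E(f))^2)
    return mean, var
```

An expectation scheduler then maximises the first component over all waves, while a stable scheduler minimises the second.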
\noindent If a dominant scheduler exists, then it is also optimistic, pessimistic, and expectation optimal.
\begin{example}
Consider the $2\times 2$ labyrinth with sink at $(1,2)$ from Figure~\ref{fig:2x2pic-ex} in the introduction.
In the fixed failure case, Figure~\ref{fig:2x2-sea-ff}, the red scheduler is dominant (and hence optimistic, pessimistic, and expectation optimal). All schedulers are optimistic, pessimistic, and bound. The yellow scheduler is stable, and the blue is (median variance)-stable.
In the fixed success case, Figure~\ref{fig:2x2-sea-fs}, there is no dominant scheduler. The red and green schedulers are optimistic, all are pessimistic, the yellow and the blue are bound. The red is expectation optimal, the yellow is stable, and the blue is (median variance)-stable robust.
\end{example}
\section{Parametric Labyrinths}\label{sec:examples}
The class of examples of a robot in a labyrinth provides a wide playground for studying parametric models. We consider $n \times n$ labyrinths. States are the positions in the labyrinth, and the set of actions is $\{N, S, E, W\}$.
Taking an action probabilistically determines the next state, as the robot may reach the intended new position or fail and end up in an unintended one. There are many ways to specify what happens when the robot fails; we chose the one from the example in the introduction: the robot can fail to reach the intended position and instead end up left or right of its current position with certain probabilities.
The most general way to turn this into a parametric model is to make all probabilities depend on parameters: in every state, for every action, one parameter provides the probability to fail left and another the probability to fail right, the probability of success being determined by the values of these two. This results in a model with $8|S| = 8n^2$ parameters.
We simplify this general scenario and limit the number of parameters. In particular, we consider models with $k$ parameters where
\begin{itemize}
\item[(1)] $k = 8$: per action we take two parameters (e.g. for action $N$, the probability to fail left and the probability to fail right), which are then the same in every state whenever this action is taken.
\item[(2)] $k=2$: we take two parameters $l$ and $r$ that play the role from (1) for every state and every action, as in the example in the introduction.
\item[(3)] $k=1$: a single parameter $p$ plays the role from (2) for every state and every action.
\end{itemize}
In all of these cases for states on the boundary we consider one of the two scenarios -- fixed failure or fixed success -- as specified for the example in the introduction.
In addition, we experiment with making some states sink states, just like we did with state $(1,2)$ in the introduction example.
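As a concrete illustration of case (2), the following minimal sketch (ours, not the actual SEA-PARAM model) builds the step distribution of the $k=2$ labyrinth for one valuation of $l$ and $r$; the boundary handling, where a move off the grid leaves the robot in place, is a simplified stand-in for the fixed-failure scenario, and all names are illustrative.

```python
# Sketch of the k = 2 labyrinth: every action fails left with
# probability l, right with probability r, and succeeds with
# probability 1 - l - r.  A move off the n x n grid leaves the robot
# in place (a simplified stand-in for the fixed-failure scenario).

LEFT_OF = {"N": "W", "S": "E", "E": "N", "W": "S"}
RIGHT_OF = {"N": "E", "S": "W", "E": "S", "W": "N"}

def intended(state, action):
    """Target position of an action, with 1-based grid coordinates."""
    (x, y) = state
    return {"N": (x, y + 1), "S": (x, y - 1),
            "E": (x + 1, y), "W": (x - 1, y)}[action]

def step_distribution(n, state, action, l, r):
    """Return {next_state: probability} for one valuation of (l, r)."""
    def clamp(pos):
        (x, y) = pos
        return pos if 1 <= x <= n and 1 <= y <= n else state
    dist = {}
    for target, p in [(intended(state, action), 1 - l - r),
                      (intended(state, LEFT_OF[action]), l),
                      (intended(state, RIGHT_OF[action]), r)]:
        s = clamp(target)
        dist[s] = dist.get(s, 0.0) + p
    return dist
```

For case (1) one would instead key the failure parameters by action, and for case (3) share a single parameter $p$.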
\section{Implementation and Experiments}\label{sec:imp-exp}
We have implemented a first prototype of SEA-PARAM leveraging
the open-source parametric model checking framework of the PRISM
model checker~\cite{KwiatkowskaNP11} and Wolfram Mathematica\footnote{https://www.wolfram.com/mathematica/}.
SEA-PARAM receives as input a PMDP and a reachability property.
First, it explores all possible memoryless schedulers, generating
for each of them a multivariate rational function that maps
the parameter space to the probability of satisfying the
desired property. For the generation
and the manipulation of the multivariate rational functions,
PRISM leverages the Java Algebra System (JAS)\footnote{http://krum.rz.uni-mannheim.de/jas/}.
This task is embarrassingly parallel,
since each memoryless scheduler can be
treated independently of the others. We exploit this with a concurrent implementation, which yields a constant speed-up bounded by the number of cores.
However, in the worst-case the number of schedulers (which we straightforwardly enumerate in this first attempt)
can be exponential in the number of states, resulting in exponential running time.
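The enumeration itself is simple: a memoryless scheduler is a choice of one enabled action per state, so the schedulers are exactly the Cartesian product of the per-state action sets. A minimal sketch (illustrative only, not the SEA-PARAM code):

```python
from itertools import product

# A memoryless scheduler picks one enabled action per state, so the
# number of schedulers is the product of the per-state action counts --
# worst-case exponential in the number of states.

def memoryless_schedulers(enabled):
    """enabled: dict mapping each state to its list of enabled actions."""
    states = sorted(enabled)
    for choice in product(*(enabled[s] for s in states)):
        yield dict(zip(states, choice))

# A toy state space with restricted action sets (names are illustrative).
enabled = {(1, 1): ["N", "E"], (1, 2): ["E"], (2, 1): ["N"]}
scheds = list(memoryless_schedulers(enabled))  # 2 * 1 * 1 = 2 schedulers
```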
After the memoryless schedulers enumeration and function computation,
the corresponding multivariate rational functions are
evaluated according to a chosen optimality criterion
using a script developed within the Wolfram Mathematica
framework. We chose Mathematica because it allows us to quickly
implement the different formal
optimality criteria for schedulers introduced in this paper.
The Mathematica program takes as input the list of schedulers with their corresponding functions
generated in the previous step and computes a score
for each multivariate rational function.
This task can again be computed in parallel for each
multivariate rational function.
However, in general the computation of the score of a multivariate rational function is NP-hard~\cite{Kawamura:2011tx,DBLP:journals/siamcomp/Sahni74,DBLP:journals/mp/MurtyK87}.
For example, already the minimisation of a multivariate quadratic function
over the unit cube is NP-hard; see e.g.~\cite{DBLP:journals/mp/MurtyK87} for a
reduction from SUBSET-SUM.
For several classes of well-behaved functions (e.g.~convex functions
or unimodal ones) our scores can be efficiently computed.
We know for sure that not all our functions are convex or unimodal, but there
is still a chance that the functions form another well-behaved class.
We intend to explore this possibility in future work.
Note that, since we generate a list of schedulers together with their rational functions, it is straightforward to find the scheduler corresponding to a rational function.
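As a numeric stand-in for the symbolic scoring step (this is our sketch, not the Mathematica script), three of the scores for a single-parameter function over $[0,1]$ can be approximated on a uniform grid:

```python
# Approximate the optimistic score (maximum), pessimistic score
# (minimum), and expectation score (mean over a uniform grid) of a
# one-parameter function f on [0, 1].  The symbolic computation works
# on the rational functions directly; this is only a numeric sketch.

def scores(f, samples=10001):
    xs = [i / (samples - 1) for i in range(samples)]
    ys = [f(x) for x in xs]
    return max(ys), min(ys), sum(ys) / samples
```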
\subsection{Experiments}\label{sec:exp}
\begin{table}[t]
\begin{tabular}{|lllll|rr|rrr|}
\hline
\multicolumn{5}{|c|}{Grid scenario} & \multicolumn{2}{c|}{Number of} & \multicolumn{3}{c|}{Execution time in seconds} \\
k & size & type & target & sinks & schedulers & functions & PRISM & optimistic & expectation \\ \hline
8 & 2x2 & ff & (2,2) & (1,2) & 4 & 4 & 0.11 & 1.39 & 1.40 \\
8 & 2x2 & fs & (2,2) & (1,2) & 4 & 4 & 0.10 & 2.19 & 19.09 \\ \hline
2 & 2x2 & ff & (2,2) & (1,2) & 4 & 4 & 0.07 & 0.58 & 0.71 \\
2 & 2x2 & fs & (2,2) & (1,2) & 216 & 63 & 1.76 & 1.90 & 0.26 \\
2 & 3x3 & ff & (1,3) & (1,2),(2,2) & 432 & 120 & 5.68 & 3.94 & 0.69 \\
2 & 3x3 & ff & (2,2) & (1,2) & 864 & 398 & 13.53 & 15.36 & 2.11 \\
2 & 3x3 & ff & (3,3) & (1,2) & 648 & 246 & 6.95 & 7.83 & 0.98 \\
2 & 3x3 & ff & (3,3) & (2,2) & 4 & 4 & 0.06 & 0.31 & 0.05 \\
2 & 3x3 & fs & (1,3) & (1,2), (2,2) & 216 & 63 & 1.85 & 1.85 & 0.42 \\
2 & 3x3 & fs & (2,2) & (1,2) & 432 & 120 & 6.53 & 4.29 & 0.78 \\
2 & 3x3 & fs & (3,3) & (1,2) & 864 & 399 & 12.14 & 18.64 & 2.63 \\
2 & 3x3 & fs & (3,3) & (2,2) & 648 & 234 & 9.18 & 8.13 & 1.32 \\ \hline
1 & 2x2 & ff & (2,2) & (1,2) & 4 & 4 & 0.06 & 1.40 & 0.04 \\
1 & 2x2 & fs & (2,2) & (1,2) & 216 & 60 & 0.78 & 5.67 & 0.16 \\
1 & 3x3 & ff & (1,3) & (1,2), (2,2) & 432 & 114 & 2.29 & 9.72 & 0.18 \\
1 & 3x3 & ff & (2,2) & (1,2) & 864 & 390 & 6.46 & 34.48 & 0.52 \\
1 & 3x3 & ff & (3,3) & (1,2) & 648 & 122 & 3.48 & 12.57 & 0.19 \\
1 & 3x3 & ff & (3,3) & (2,2) & 4 & 4 & 0.05 & 1.41 & 0.04 \\
1 & 3x3 & fs & (1,3) & (1,2), (2,2) & 216 & 63 & 1.09 & 4.87 & 0.12 \\
1 & 3x3 & fs & (2,2) & (1,2) & 432 & 114 & 1.72 & 11.21 & 0.19 \\
1 & 3x3 & fs & (3,3) & (1,2) & 864 & 391 & 5.63 & 34.01 & 0.54 \\
1 & 3x3 & fs & (3,3) & (2,2) & 648 & 124 & 2.42 & 10.21 & 0.17 \\
1 & 4x4 & fs & (4,4) & (1,2) & 4478976 & 2010270 & 89677.00 & 42824.00 & 32986.00 \\\hline
\end{tabular}
\caption{Overview of the experimental results
}
\label{tab:exp}
\end{table}
The experiments reported here ran on a unified memory architecture (UMA)
machine with four 10-core 2GHz Intel Xeon E7-4850 processors supporting two
hardware threads (hyper-threads) per core, 128GB of main memory, and Linux
kernel version 4.4.0.
The first part (based on PRISM SVN revision 11807) was compiled and run with
OpenJDK 1.8.0. The second part was executed in Mathematica 11.0 using 16 parallel kernels.
During our experiments we identified a bug in a greatest-common-divisor (gcd) procedure of the JAS library that resulted in the computation of wrong functions.
We worked around this bug by substituting a simpler gcd procedure.
All of our code and detailed results of the experiments can be found at
\cite{seaparam}.
Table \ref{tab:exp} gives an overview of our experimental results.
We present 23 experiments in total, for the various scenarios described in Section~\ref{sec:examples}.
For each scenario the table shows the number of schedulers, the number of unique functions (as two schedulers might have the same rational function), as well as the running times of key parts of our system.
In particular, we show the running time for the computation of the rational functions (column PRISM) and the computation of the expectation and optimistic optimal schedulers.
We selected these two classes of optimal schedulers as they best illustrate the characteristics of our two score classes (integral/mass vs.\ extremal values).
Our largest experiment involves a 4x4 labyrinth with a single parameter; it
results in over 2 million distinct rational functions (and takes a significant
amount of time to compute).
In Figure~\ref{fig:plot-ex} we plot the rational functions of two 3x3 labyrinths with one parameter, to give a flavour of the different schedulers encountered.
In both cases no scheduler is dominant and several of them are optimistic.
In Figure~\ref{fig:plot-ex-ff} we see a single expectation scheduler (actually two schedulers with a single rational function) and a single pessimistic scheduler (again actually two schedulers), while in Figure~\ref{fig:plot-ex-fs} there are two symmetric expectation schedulers, and \emph{all} schedulers are pessimistic (as they all have minimal value $0$).
In both scenarios the stable and the bound schedulers coincide; in Figure~\ref{fig:plot-ex-ff} the corresponding function is constant $0$.
We also show an $\varepsilon$-stable robust scheduler with $\varepsilon$ chosen to be the median variance of the rational functions.
In Figure~\ref{fig:plot-ex-fs} this yields a function very close to the expectation scheduler function with slightly lower variance.
Note that we plot the rational functions for the optimal schedulers.
From these functions we can look up the corresponding schedulers.
For instance, the function labeled expectation in Figure~\ref{fig:plot-ex-ff}
corresponds to the two expectation optimal schedulers that in $(1,1)$ take $E$ or $N$ respectively; take
$N$ in $(1,2)$, $(3,1)$, and $(3,2)$; and take $E$ in $(1,3)$, $(2,3)$ (and of course in $(2,1)$ where there is no other choice).
\begin{figure}\centering
\begin{subfigure}[t]{0.5\textwidth}
\includegraphics[height=.43\linewidth]{Pics/example-p-ff-3x3-33-sink22-all.pdf}
\includegraphics[height=.43\linewidth]{Pics/example-p-ff-3x3-33-sink22.pdf}
\caption{fixed failure from (1,1) to (3,3) with a sink at (2,2)}
\label{fig:plot-ex-ff}
\end{subfigure}%
\begin{subfigure}[t]{0.5\textwidth}
\includegraphics[height=.43\linewidth]{Pics/example-p-fs-3x3-22-sink12-all.pdf}
\includegraphics[height=.43\linewidth]{Pics/example-p-fs-3x3-22-sink12.pdf}
\caption{fixed success from (1,1) to (2,2) with a sink at (1,2)}
\label{fig:plot-ex-fs}
\end{subfigure}
\caption{The rational functions of schedulers for two 3x3 labyrinths with $k=1$}
\label{fig:plot-ex}
\end{figure}
\section{Discussion}\label{sec:conc}
In our first prototype implementation of SEA-PARAM we focus on simple schedulers, of which there are already exponentially many. However, simple schedulers are not always optimal according to our optimality definitions. A history-dependent (hence not simple) scheduler may estimate the parameters and thus provide better behaviour than any simple scheduler, as the following example shows.
\begin{figure}[h]\centering
\begin{subfigure}{0.5\textwidth}
\centering
\includegraphics[width=0.8\linewidth]{Pics/dependent-sched-opt-system.pdf}
\caption{An example PMDP $M$}
\label{fig:history-dependent}
\end{subfigure}
\begin{subfigure}{0.5\textwidth}
\centering
\includegraphics[width=0.95\linewidth]{Pics/history-dep-induced-MC.pdf}
\caption{Induced MC by a history-dependent scheduler}
\label{fig:hist-dep-ind-MC}
\end{subfigure}
\caption{History dependency}
\label{fig:simple-are-not-enough}
\end{figure}
Consider the PMDP $M$ in Figure~\ref{fig:history-dependent} where $p$ is a parameter. There are two simple schedulers for $M$: $\alpha$ with $\alpha(\mathbf{c}) = a$ and $\beta$ with $\beta(\mathbf{c}) = b$ (all other states are mapped to the single available action and $\mathbf{t}$ and $\mathbf{x}$ are sink states). Their corresponding rational functions are $f_\alpha(p) = p$ and $f_\beta(p) = 1-p$.
Consider now the history-dependent scheduler $\chi$ of $M$ that schedules $a$ in state $\mathbf{c}$ if and only if the state $\mathbf{a}$ has been visited before. The $\chi$-induced MC is shown in Figure~\ref{fig:hist-dep-ind-MC}. The rational function corresponding to $\chi$ is $f_\chi(p) = p^2 + (1-p)^2$. All three rational functions are depicted in Figure~\ref{fig:simple-and-hist-dep}, and $\chi$ wins in all optimality classes against $\alpha$ and $\beta$.
\begin{figure}[h]
\centering
\includegraphics[scale=0.8]{Pics/history-dependent-wins.pdf}
\caption{The rational functions of $\alpha$, $\beta$, and $\chi$; $\chi$ wins in optimality}
\label{fig:simple-and-hist-dep}
\end{figure}
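The comparison in this example can be checked exactly; the following sketch (ours, for illustration) integrates the three functions with Simpson's rule, which is exact for polynomials of degree at most three:

```python
from fractions import Fraction

# Exact check of the example's scores: f_alpha(p) = p and
# f_beta(p) = 1 - p have expectation 1/2 and pessimistic value 0,
# while the history-dependent f_chi(p) = p^2 + (1-p)^2 has
# expectation 2/3 and pessimistic value 1/2 (attained at p = 1/2).

def simpson(f, a=Fraction(0), b=Fraction(1)):
    """Simpson's rule on [a, b]; exact for polynomials of degree <= 3."""
    m = (a + b) / 2
    return (b - a) * (f(a) + 4 * f(m) + f(b)) / 6

def f_alpha(p):
    return p

def f_beta(p):
    return 1 - p

def f_chi(p):
    return p ** 2 + (1 - p) ** 2
```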
We aim to broaden our scheduler exploration to history-dependent schedulers in the near future.
\vspace*{3mm}
\noindent{}{\bf Acknowledgments.}
This work was supported by the Austrian National Research Network
RiSE/SHiNE (S11405-N23 and S11411-N23) project funded by the Austrian Science Fund (FWF)
and partially by the Fclose (Federated Cloud Security) project funded by UnivPM.
\bibliographystyle{eptcs}
\section{Introduction}\label{intro}
An ongoing area of research is to
find complete Boolean algebras that witness first failures of distributive laws.
In the late 1960's, Solovay asked the following question:
For which cardinals $\kappa$
is there a complete Boolean algebra $\mathbb{B}$ such that for all $\mu<\kappa$, the $(\om,\mu)$-distributive law holds in $\mathbb{B}$, while the $(\om,\kappa)$-distributive law fails (see \cite{Namba70})?
In forcing language, Solovay's question asks for which
cardinals $\kappa$ is there a forcing extension in which
there is a new $\om$-sequence of ordinals in $\kappa$, while every $\om$-sequence of ordinals bounded below $\kappa$ is in the ground model?
Whenever such a Boolean algebra exists, it must be the case that
$\mu^{\om}<\kappa$, for all $\mu<\kappa$.
It also must be the case that
either $\kappa$ is regular or else $\kappa$ has cofinality $\om$, as shown in \cite{Namba70}.
For the case when $\kappa$ is regular, Solovay's question was solved independently using different forcings by Namba in \cite{Namba70} and \Bukovsky\ in \cite{Bukovsky75}.
Namba's forcing is similar to Laver forcing: above the stem, every node splits, with the number of immediate successors having maximum cardinality.
\Bukovsky's forcing consists of perfect trees, where splitting nodes have the maximum cardinality of immediate successors.
\Bukovsky's work was motivated by the following question which \Vopenka asked in 1966:
Can one change the cofinality of a regular cardinal without collapsing smaller cardinals (see \cite{Bukovsky75})?
Prikry solved \Vopenka's question for measurable cardinals in his dissertation \cite{Prikry70}.
The work of \Bukovsky\ and of Namba solved \Vopenka's question for $\aleph_2$, which is now known, due to Jensen's covering theorem, to be the only possibility without assuming large cardinals.
In the late 1960's, Prikry solved Solovay's question for the case when $\kappa$ has cofinality $\om$
and $\mu^{\om}<\kappa$ for all $\mu<\kappa$.
His proof was never published, but his result is quoted in \cite{Namba70}.
In this article, we provide
modified versions of Prikry's original proofs, generalizing them to cardinals of uncountable cofinality whenever this is straightforward.
The perfect tree forcings constructed by Prikry are interesting in their own right, and his original results provided the impetus for the recent results in this article, further investigating their forcing properties.
\Bukovsky\ and \Coplakova\ conducted a comprehensive study of
forcing properties of generalized Namba forcing and of a family of perfect tree forcings
in \cite{Bukovsky/Coplakova90}.
They found which distributive laws hold, which cardinals are collapsed, and proved under certain assumptions that the forcing extensions are minimal for adding new $\om$-sequences.
Their perfect tree forcings,
defined in Section 3 of \cite{Bukovsky/Coplakova90},
are similar, but not equivalent, to the forcings investigated in this paper;
some of their techniques are appropriated in later sections.
A variant of Namba style tree forcings, augmented from Namba forcing analogously to how the perfect tree forcings in \cite{Bukovsky/Coplakova90} are augmented from those in \cite{Bukovsky75}, was used
by Cummings, Foreman and Magidor in \cite{Cummings/Foreman/Magidor03} to prove that a supercompact cardinal can be forced to collapse to $\aleph_2$ so that in this forcing extension,
$\square_{\om_n}$ holds for all positive integers $n$, and each stationary subset of $\aleph_{\om+1}\cap \mathrm{cof}(\om)$ reflects to an $\al$ with cofinality $\om_1$.
We point out that the addition of a new $\om$-sequence of ordinals has consequences for the co-stationarity
of the ground model in the $\mathcal{P}_{\mu}(\lambda)$ of the extension model.
It follows from more general work in \cite{DobrinenOmSeq06} that if the ground model $V$ satisfies $\square_{\mu}$ for all regular cardinals $\mu$ in forcing extension $V[G]$ and if
$V[G]$ contains a new sequence $f:\om\ra\kappa$, then for all cardinals $\mu<\lambda$
in $V[G]$ with $\mu$ regular in $V[G]$ and $\lambda\ge \kappa$,
$(\mathcal{P}_{\mu}(\lambda))^{V[G]}\setminus V$ is stationary in $(\mathcal{P}_{\mu}(\lambda))^{V[G]}$.
It seems likely that further investigations of variants of Namba and perfect tree forcings should lead to interesting results.
A complete Boolean algebra $\mathbb{B}$ is said to {\em satisfy the $(\lambda,\mu)$-distributive law}
($(\lambda,\mu)$-d.l.)
if for each collection of $\lambda$ many partitions of unity into at most $\mu$ pieces, there is a common refinement.
This is equivalent to saying that forcing with $\mathbb{B}\setminus \{\mathbf{0}\}$ does not add any new functions from $\lambda$ into $\mu$.
The weaker three-parameter distributivity is defined as follows:
$\mathbb{B}$ {\em satisfies the $(\lambda,\mu,<\delta)$-distributive law}
($(\lambda,\mu,<\delta)$-d.l.)
if in any forcing extension $V[G]$ by $\mathbb{B}\setminus\{\mathbf{0}\}$,
for each function $f:\lambda\ra\mu$ in $V[G]$, there is a function $h:\lambda\ra[\mu]^{<\delta}$ in the ground model $V$ such that $f(\al)\in h(\al)$, for each $\al<\lambda$.
Such a function $h$ may be thought of as a covering of $f$ in the ground model.
Note that the $\delta$-chain condition implies
$(\lambda,\mu,<\delta)$-distributivity, for all $\lambda$ and $\mu$.
We
shall usually write $(\lambda, \mu,\delta)$-distributivity instead of $(\lambda,\mu,<\delta^+)$-distributivity.
See \cite{KoppelbergHB} for more background on distributive laws.
In this paper,
given any strictly increasing sequence of regular cardinals $\lgl \kappa_n :n<\om\rgl$, letting $\kappa=\sup_{n<\om}\kappa_n$ and assuming that $\mu^{\om}<\kappa$ for all $\mu<\kappa$,
$\mathbb{P}$ is a collection of certain perfect subtrees of $\prod_{n<\om}\kappa_n$, partially ordered by inclusion, described in
Definition \ref{defn.2.5}.
Let $\mathbb{B}$ denote its Boolean completion.
We prove the following.
$\mbb{P}$ has size $\kappa^\omega$
and $\mbb{B}$ has maximal antichains of
size $\kappa^\omega$, but no larger.
$\mbb{P}$ satisfies the
$(\omega, \kappa_n)$-d.l.\ for each $n < \omega$
but not the
$(\omega, \kappa)$-d.l.
In fact, it does not satisfy the
$(\omega, \kappa, \kappa_n)$-d.l.\ for any $n < \omega$.
It does, however, satisfy the
$(\omega, \kappa, {<\kappa})$-d.l., and in fact it satisfies
the $(\omega, \infty, {<\kappa})$-d.l.,
because it satisfies a Sacks-like property.
On the other hand,
the $(\mf{d}, \infty, {<\kappa})$-d.l.\ fails.
We do not know if $\infty$ can be replaced by a cardinal strictly smaller than
$\kappa^\omega$.
However, we do know that the
$(\mf{h}, 2)$-d.l.\ fails.
($\mf{h}$ and $\mf{d}$ are cardinal characteristics of the
continuum, and $\omega_1 \le \mf{h} \le \mf{d} \le 2^\omega$.)
In fact, we have that $P(\omega)/\Fin$ densely embeds
into the regular open completion of $\mbb{P}$.
By similar reasoning, we show that forcing with
$\mbb{P}$ collapses $\kappa^\omega$ to $\mf{h}$.
Under the assumption that $\kappa$ is the limit of measurables,
we have that
every $\omega$-sequence of ordinals in the extension is either
in the ground model or it constructs the generic filter.
If $G$ is $\mbb{P}$-generic over $V$ and
$H \in V[G]$ is $P(\omega)/\Fin$-generic
over $V$, then since $P(\omega)/\Fin$ does not
add $\omega$-sequences, $G \not\in V[H]$.
Thus, $\mbb{P}$
does not add a minimal degree of constructibility.
Some of the results also hold for cardinals $\kappa$ of uncountable cofinality, and these are presented in full generality.
The article closes with an example of what can go wrong when $\kappa$ has uncountable cofinality,
highlighting some open problems and ideas for how to approach them.
\section{Definitions and Basic Lemmas}\label{sec.defs}
\subsection{Basic Definitions}
Recall that given a separative poset $\mbb{P}$,
the \textit{regular open completion} $\mbb{B}$ of $\mbb{P}$
is a complete Boolean algebra into which $\mbb{P}$
densely embeds (after we remove the zero
element $\mathbf{0}$ from $\mbb{B}$).
Every other such complete
Boolean algebra is isomorphic to $\mbb{B}$.
A set $C \subseteq \mbb{P}$ is
\textit{regular open} iff
\begin{itemize}
\item[1)]
$(\forall p_1 \in C)(\forall p_2 \le p_1)\,
p_2 \in C$, and
\item[2)] $(\forall p_1 \not\in C)
(\exists p_2 \le p_1)
(\forall p_3 \le p_2)\,
p_3 \not\in C$.
\end{itemize}
Topologically, giving $\mathbb{P}$ the topology generated by basic open sets of the form
$\{q\in\bP : q\le p\}$ for $p\in \bP$, a set
$C\subseteq \mbb{P}$ is regular open if and only if it is equal to the interior of its closure in this topology.
We define $\mbb{B}$ as the collection
of regular open subsets of $\mbb{P}$ ordered by inclusion.
See \cite{KoppelbergHB} for more background on the regular open completion of a partial ordering.
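The two regular-open conditions can be checked by brute force on a finite poset, which may help fix intuitions (purely illustrative: the posets of interest here are infinite). In the sketch below, a poset is given as a dictionary mapping each element to the set of elements below or equal to it:

```python
# Brute-force check of the two regular-open conditions from the text:
# 1) C is downward closed, and
# 2) every p outside C has some q <= p with no r <= q inside C.

def is_regular_open(below, C):
    P = set(below)
    # 1) downward closed
    if any(q not in C for p in C for q in below[p]):
        return False
    # 2) each p outside C has q <= p whose cone avoids C entirely
    for p in P - set(C):
        if not any(below[q].isdisjoint(C) for q in below[p]):
            return False
    return True

# A three-element poset: a top element with incomparable a, b below it.
below = {"top": {"top", "a", "b"}, "a": {"a"}, "b": {"b"}}
```

For instance, $\{a\}$ is regular open in this poset, while $\{a, b\}$ is not: every element below the top meets $\{a, b\}$, so condition 2) fails at the top.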
Given cardinals $\lambda$ and $\mu$,
we say $\mbb{B}$ (or $\mbb{P}$) satisfies the
$(\lambda, \mu)$-distributive law ($(\lambda, \mu)$-d.l.)
if and only if whenever $\{ A_\alpha : \alpha < \lambda \}$
is a collection of maximal
antichains in $\mbb{B}$, each of size $\le \mu$,
there is a single non-zero $p \in \mbb{B}$
below one element of each antichain.
This is equivalent to the statement
$1_\mbb{B} \forces (^{\check{\lambda}} \check{\mu}
\subseteq \check{V})$.
That is, every function from $\lambda$ to $\mu$
in the forcing extension is already in the ground model.
Note that $\mbb{B}$ and $\mbb{P}$ force the same statements,
since
$\mbb{P}$ densely embeds into $\mbb{B}$
by the mapping $p \mapsto \{ q \in \mbb{P} : q \le p \}$.
The $(\lambda, \mu)$-d.l.\
is equivalent to the statement that
whenever $p \in \mbb{P}$ and $\dot{f}$ are such that
$p \forces \dot{f} : \check{\lambda} \to \check{\mu}$,
then there are $q \le p$ and $g : \lambda \to \mu$
satisfying $q \forces \dot{f} = \check{g}$.
We will also study
a distributive law weaker than the $(\lambda,\mu)$-d.l.;
namely, the
$(\lambda, \mu, {<\delta})$-d.l.
where $\delta \le \mu$.
This is the statement that
whenever $\{ A_\alpha : \alpha < \lambda \}$
is a collection of maximal antichains in $\mbb{B}$,
each of size $\le \mu$,
then for each $\alpha < \lambda$ there is a set
$X_\alpha \in [A_\alpha]^{<\delta}$ such that
there is a single non-zero element of $\mbb{B}$
below $\bigvee X_\alpha$ for each $\alpha < \lambda$.
That is, there is some $p \in \mbb{P}$ such that
$(\forall \alpha < \lambda)
(\exists a \in X_\alpha)\,
p \in a$.
The $(\lambda, \mu, {<\delta})$-d.l.\
is equivalent to the statement that whenever
$p \in \mbb{P}$ and $\dot{f}$ satisfy
$p \forces \dot{f} : \check{\lambda} \to \check{\mu}$,
then there exists $q \le p$ and a function
$g : \lambda \to [\mu]^{<\delta}$ satisfying
$q \forces (\forall \alpha < \check{\lambda})\,
\dot{f}(\alpha) \in \check{g}(\alpha)$.
Finally,
if $\mu$ is the smallest cardinal such that
every maximal antichain in $\mbb{B}$ has size
$\le \mu$, then the distributive law is unchanged
if we replace $\mu$ in the second argument
with any larger cardinal,
so in this situation we write $\infty$
instead of $\mu$.
\begin{convention}\label{conventionforpaper}
For this entire paper,
$\kappa$ is a singular cardinal
and $\langle \kappa_\alpha :
\alpha < \cf(\kappa) \rangle$
is an increasing
sequence of regular cardinals
with limit $\kappa$ such that
$\cf(\kappa) < \kappa_\alpha < \kappa$
for all $\alpha$.
\end{convention}
Note that
the cardinality of
$\prod_{\alpha < \cf(\kappa)}
\kappa_\alpha $ equals $ \kappa^{\cf(\kappa)}$, which is greater than $\kappa$.
We do not assume that $\kappa$
is a strong limit cardinal.
However, we do make the following
weaker assumption:
\begin{assumption}\label{basicassumption}
$$(\forall \mu < \kappa)\,
\mu^{\cf(\kappa)} < \kappa.$$
\end{assumption}
In a few places, we will make the
special assumption that $\kappa$
is the limit of measurable cardinals.
\begin{definition}\label{def.2.1}
The set $N \subseteq {^{<\cf(\kappa)} \kappa}$
consists of all functions $t$ such that
$\dom(t) < \cf(\kappa)$ and
$(\forall \alpha \in \dom(t))\,
t(\alpha) < \kappa_\alpha$.
We call each $t \in N$ a \defemph{node}.
Given a set $T \subseteq N$
(which is usually a tree,
meaning that it is closed under initial segments),
$[T]$ is the set of all $f \in
{^{\cf(\kappa)} \kappa}$ such that
$(\forall \alpha < \cf(\kappa))\,
f \restriction \alpha \in T$.
Define $X := [N]$.
Given $t_1, t_2 \in N \cup X$,
we write $t_2 \sqsupseteq t_1$ iff
$t_2$ is an extension of $t_1$.
\end{definition}
Note that $|N| = \kappa$
and $|X| = \kappa^{\cf(\kappa)}$.
We point out that our set $X$ is commonly written as $\prod_{\al<\cf(\kappa)}\kappa_{\al}$.
In order to avoid confusion with cardinal arithmetic and to simplify notation, we shall use $X$ as defined above.
\begin{definition}\label{def.2.2}
Fix a tree $T \subseteq N$.
A \defemph{branch} through $T$
is a maximal element of $T \cup [T]$.
Given $\alpha < \cf(\kappa)$,
$T(\alpha) := T \cap {^\alpha \kappa}$
is the set of all nodes of $T$ on
\defemph{level} $\alpha$.
Given $t \in T$ such that $t \in T(\alpha)$,
then $\mbox{Succ}_T(t)$ is the set
of all children of $t$ in $T$:
all nodes $c \sqsupseteq t$ in $T(\alpha+1)$.
The word \defemph{successor} is another word
for child (hence, successor
always means immediate successor).
A node $t \in T$ is \defemph{splitting}
iff $|\mbox{Succ}_T(t)| > 1$.
$\mbox{Stem}(T)$
is the unique (if it exists)
splitting node of $T$
that is comparable
(with respect to extension)
to all other elements of $T$.
Given $t \in T$,
the tree $T | t$ is the subset of $T$
consisting of all nodes of $T$
that are comparable to $t$.
\end{definition}
It is desirable for the trees
that we consider to have no dead ends.
\begin{definition}\label{def.2.3}
A tree $T \subseteq N$ is called
\defemph{non-stopping} iff
it is non-empty
and for every
$t \in T$, there is some $f \in [T]$
satisfying $f \sqsupseteq t$.
A tree $T \subseteq N$ is \defemph{suitable}
iff $T$ has no branches of length
$< \cf(\kappa)$.
\end{definition}
Suitable implies non-stopping,
and they are equivalent if
$\cf(\kappa) = \omega$.
\begin{definition}\label{defn.2.4}
A tree $T \subseteq N$ is
\defemph{pre-perfect} iff $T$
is non-stopping and
for each $\alpha < \cf(\kappa)$ and
each node $t_1 \in T$,
there is some $t_2 \sqsupseteq t_1$ in $T$
such that $|\mbox{Succ}_T(t_2)| \ge \kappa_\alpha$.
A tree $T \subseteq N$ is \defemph{perfect}
iff $T$ is pre-perfect and,
instead of just being non-stopping, is suitable.
\end{definition}
In Section~\ref{secuncheight},
we will construct a pre-perfect $T$
such that $[T]$ has size $\kappa$.
That example points out problems that arise in straightforward attempts to generalize some of our results to singular cardinals of uncountable cofinality.
On the other hand, it is not hard to see that
if $T$ is perfect, then $[T]$ has size $\kappa^{\cf(\kappa)}$.
We will now define the forcing that we will investigate.
\begin{definition}\label{defn.2.5}
$\mbb{P}$ is the set of all perfect trees $T \subseteq N$
ordered by inclusion.
$\mbb{B}$ is the regular open completion of $\mbb{P}$.
\end{definition}
Note that by a density argument, given $\kappa$, the choice of the sequence
$\langle \kappa_\alpha : \alpha < \cf(\kappa) \rangle$ having $\kappa$ as its limit
does not affect the definition of $\mbb{P}$.
\begin{definition}\label{defn.2.6}
Assume $\cf(\kappa) = \omega$.
Fix a perfect tree $T \subseteq N$.
A node $t \in T$ is $0$-splitting iff
it has exactly $\kappa_0$ children in $T$ and
it is the stem of $T$ (so it is unique).
Given $n < \omega$,
a node $t \in T$ is $(n+1)$-splitting iff
it has exactly $\kappa_{n+1}$ children in $T$ and
its maximal proper initial segment that is splitting
is $n$-splitting.
\end{definition}
\begin{definition}\label{defn.2.7}
Assume $\cf(\kappa) = \omega$.
Fix a perfect tree $T \subseteq N$.
We say $T$ is in \defemph{weak splitting normal form}
iff every splitting node of $T$ is $n$-splitting
for some $n$.
We say $T$ is in \defemph{medium splitting normal form}
iff it is in weak splitting normal form
and for each splitting node $t \in T$,
all minimal splitting descendants of $t$
are on the same level.
We say $T$ is in \defemph{strong splitting normal form}
iff it is in medium splitting normal form and
for each $n \in \omega$, there is some $l_n \in \omega$
such that $T(l_n)$ is precisely the set of
$n$-splitting nodes of $T$.
We say that the set
$\{ l_n : n \in \omega \}$
\defemph{witnesses}
that $T$ is in strong splitting normal form.
\end{definition}
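A finite toy analogue may help fix intuitions (illustrative only: the actual definition concerns infinite trees below a singular cardinal, and $0$-splitting additionally requires the node to be the stem). With small integers standing in for $\kappa_0, \kappa_1, \dots$, weak splitting normal form says each splitting node has exactly $\kappa_n$ children, where $n$ counts the splitting nodes strictly below it:

```python
# Toy check of (a simplification of) weak splitting normal form on a
# finite tree of tuples: every splitting node must have exactly
# KAPPAS[n] children, where n is the number of its proper initial
# segments that split.  KAPPAS are small stand-ins for kappa_0, ...

KAPPAS = [2, 3]

def children(tree, t):
    return [s for s in tree if len(s) == len(t) + 1 and s[:len(t)] == t]

def splitting_index(tree, t):
    """Number of proper initial segments of t that split in tree."""
    return sum(1 for i in range(len(t))
               if len(children(tree, t[:i])) > 1)

def is_weak_normal_form(tree):
    for t in tree:
        c = children(tree, t)
        if len(c) > 1 and len(c) != KAPPAS[splitting_index(tree, t)]:
            return False
    return True
```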
If $T$ is in weak splitting normal form,
then for each $f \in [T]$,
there is a sequence
$t_0 \sqsubseteq t_1 \sqsubseteq ...$
of initial segments of $f$ such that
$t_n$ is $n$-splitting for each $n < \omega$
(and these are the only splitting nodes on $f$).
It is not hard to prove that any
$T \in \mbb{P}$ can be extended to some
$T' \le T$ in medium splitting normal form.
Furthermore, the
set of conditions below a condition in
medium splitting normal form
is isomorphic to $\mbb{P}$ itself.
This implies that whenever $\varphi$
is a sentence in the forcing language
that only involves names of the form
$\check{a}$ for some $a \in V$, then either
$1 \forces \varphi$ or $1 \forces \neg \varphi$.
In Proposition~\ref{cangetstrongnormalform},
we will show
(in the $\cf(\kappa) = \omega$ case)
that each condition can be
extended to one in \textit{strong} splitting normal form.
\subsection{Topology}
To prove several facts about $\mbb{P}$
for the $\cf(\kappa) = \omega$ case,
a topological approach will be useful.
\begin{definition}
Given $t \in N$,
let $B_t \subseteq X$ be the set of all
$f \in X$ such that $f \sqsupseteq t$.
We give the set $X$ the topology
induced by the basis
$\{ B_t : t \in N \}$.
\end{definition}
\begin{observation}
Each $B_t \subseteq X$ for $t \in N$
is clopen.
\end{observation}
\begin{observation}
A set $C \subseteq X$ is closed iff
whenever $g \in X$ satisfies
$(\forall \alpha < \cf(\kappa))\,
C \cap B_{g \restriction \alpha}
\not= \emptyset$,
then $g \in C$.
\end{observation}
This next fact explains
why we considered the concept of ``non-stopping'':
\begin{fact}
A set $C \subseteq X$ is closed iff
$C = [T]$ for some (unique) non-stopping
tree $T \subseteq N$.
\end{fact}
\begin{definition}
A set $C \subseteq X$ is \defemph{strongly closed}
iff $C = [T]$ for some (unique)
suitable tree $T \subseteq N$.
Hence, if $\cf(\kappa) = \omega$,
then strongly closed is the same as closed.
\end{definition}
\begin{definition}
A set $P \subseteq X$ is
\defemph{perfect} iff
it is strongly closed and for each
$f \in P$,
every neighborhood of $f$
contains $\kappa^{\cf(\kappa)}$
elements of $P$.
\end{definition}
Thus, every non-empty perfect set
has size $\kappa^{\cf(\kappa)} = |X|$.
One can check that if $B \subseteq X$
is clopen and
$P \subseteq X$ is perfect, then
$B \cap P$ is perfect.
The next lemma does not hold
in the $\cf(\kappa) > \omega$ case
when we replace ``perfect tree''
with ``pre-perfect tree'',
because it is possible for a pre-perfect
tree to have $\kappa$ branches
(see Counterexample~\ref{uncountablecounterex}).
\begin{lemma}
\label{perfecttreeimpliesset}
If $T \subseteq N$ is a perfect tree, then
$[T]$ is a perfect set.
\end{lemma}
\begin{proof}
Since $T$ is perfect,
it is suitable, which by definition implies
that $[T]$ is strongly closed.
Next, given any $t \in T$,
we can argue that $B_t \cap [T]$ has size $\kappa^{\cf(\kappa)}$,
because we can easily construct an embedding from
$N$ into $T | t$, and we have that $X$ has size
$\kappa^{\cf(\kappa)}$.
\end{proof}
This next lemma implies the opposite direction:
if $P \subseteq X$ is a perfect set,
then $P = [T]$ for some perfect tree $T \subseteq N$.
\begin{lemma}
\label{closedandeverywherebigimpliesperfect}
Fix $P \subseteq X$.
Suppose $P$ is strongly closed and
for each $f \in P$,
every neighborhood of $f$
contains $\ge \kappa$
elements of $P$.
Then $P = [T]$ for some (unique) perfect tree
$T \subseteq N$.
Hence, $P$ is a perfect set.
\end{lemma}
\begin{proof}
Since $P$ is strongly closed,
fix some (unique) suitable tree
$T \subseteq N$ such that $P = [T]$.
If we can show that $T$ is a perfect tree,
we will be done
by the lemma above.
Suppose that $T$ is not a perfect tree.
Let $t \in T$ and $\alpha < \cf(\kappa)$
be such that for every extension $t' \in T$
of $t$, $| \mbox{Succ}_T(t') | \le \kappa_\alpha$.
We see that $[(T|t)]$ has size at most
$(\kappa_\alpha)^{\cf(\kappa)} < \kappa$,
which is a contradiction.
\end{proof}
\begin{cor}
\label{majorequiv}
Fix $P \subseteq X$.
The following are equivalent:
\begin{itemize}
\item[1)] $P$ is perfect;
\item[2)] $P$ is strongly closed and
$$(\forall f \in P)
(\forall \alpha < \cf(\kappa))\,
|P \cap B_{f \restriction \alpha}|
= \kappa^{\cf(\kappa)};$$
\item[3)] $P$ is strongly closed and
$$(\forall f \in P)
(\forall \alpha < \cf(\kappa))\,
|P \cap B_{f \restriction \alpha}|
\ge \kappa;$$
\item[4)] There is a perfect
tree $T \subseteq N$ such that
$P = [T]$.
\end{itemize}
\end{cor}
\begin{lemma}
\label{bigclosedhasperfect}
Assume $\cf(\kappa) = \omega$.
Let $C \subseteq X$ be strongly closed
and assume $|C| > \kappa$.
Then $C$ has a non-empty perfect subset.
\end{lemma}
\begin{proof}
Let $T \subseteq N$ be the (unique) suitable tree
such that $C = [T]$.
We will construct $T'$ by successively adding
elements to it, starting with the empty set.
By an argument similar
to the one used in the previous lemma,
there must be a node $t_\emptyset \in T$
such that there is a set
$S_{t_\emptyset} \subseteq \mbox{Succ}_T(t_\emptyset)$
of size $\kappa_0$ such that
$(\forall c \in S_{t_\emptyset})\,
|[(T|c)]| > \kappa$.
Fix $t_\emptyset$ and add it and all its initial segments
to $T'$.
Next, for each $c \in S_{t_\emptyset}$,
there must be a node $t_c \in T$
such that there is a set
$S_{t_c} \subseteq \mbox{Succ}_T(t_c)$
of size $\kappa_1$ such that
$(\forall d \in S_{t_c})$\,
$|[(T|d)]| > \kappa$.
For each $c$,
fix such a $t_c$
and add it and all its initial segments to $T'$.
Continue like this.
At a limit stage $\alpha$,
consider each node $t$ that is not yet in $T'$
but all of whose proper initial segments are.
Find some extension of $t$ in $T$ that has
$\kappa_\alpha$ appropriate children, etc.
It is clear from the construction that
$T' \subseteq T$ will be a perfect tree.
\end{proof}
\subsection{Laver-style Trees}
In this subsection, we assume $\cf(\kappa) = \omega$,
as this is the only case to which
the proofs apply.
The results in this subsection are modifications to our setting of work extracted from \cite{Namba70}, where Namba used the terminology `rich' and `poor' sets.
\begin{definition}
For each $n < \omega$, let
$\mbb{Q}_n \subseteq \mbb{P}$
denote the set of $T \in \mbb{P}$ such that
$\dom( \mbox{Stem}(T) ) \le n$, and
for each $m \ge \dom( \mbox{Stem}(T) ) $
and $t \in T(m)$,
$| \mbox{Succ}_T(t) | = \kappa_m$.
\end{definition}
Note that if $n < m$, then
$\mbb{Q}_n \subseteq \mbb{Q}_m$.
The set $\mbb{Q}=\bigcup_{n < \omega} \mbb{Q}_n$
is the collection of ``Laver'' trees.
\begin{definition}
Fix a tree $T \subseteq N$.
We say that $T$
\defemph{has small splitting at level} $n < \omega$
iff $(\forall t \in T(n))\,
| \mbox{Succ}_T(t) | < \kappa_n$.
A tree is called {\em leafless} if it has no maximal nodes.
We say that $T$ is $n$-\defemph{small} iff
there is a sequence of leafless trees
$\langle D_m \subseteq N : m \ge n \rangle$
such that $[T] \subseteq \bigcup_{m \ge n} [D_m]$
and each $D_m$ has small splitting at level $m$.
\end{definition}
Note that if $n > m$,
then $n$-small implies $m$-small.
If $\langle D_m : m \ge n \rangle$
witnesses that $T$ is $n$-small,
then without loss of generality
$D_m \subseteq T$ for all $m \ge n$.
\begin{observation}
Let $m < \omega$.
Let $\mc{D}$ be a collection of trees
that have small splitting at level $m$.
If $|\mc{D}| < \kappa_m$,
then $\bigcup \mc{D}$ has small splitting
at level $m$.
\end{observation}
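The observation follows from a short cardinality computation; here is a sketch, using the fact (relied on elsewhere in this paper) that each $\kappa_m$ is regular. For any node $t$ on level $m$ of $\bigcup \mc{D}$,
$$\mbox{Succ}_{\bigcup \mc{D}}(t) =
\bigcup \{ \mbox{Succ}_D(t) :
D \in \mc{D} \mbox{ and } t \in D \},$$
which is a union of $< \kappa_m$ sets, each of size $< \kappa_m$; since $\kappa_m$ is regular, this union has size $< \kappa_m$.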
\begin{lemma}\label{lem2.23}
Let $T \subseteq N$ be a tree, let
$t := \mbox{Stem}(T)$,
and let $n := \dom(t)$.
Assume that $T$ is not $n$-small.
Then
$$E := \{ c \in \mbox{Succ}_T(t) :
(T | c) \mbox{ is not $(n+1)$-small} \}$$
has size $\kappa_n$.
\end{lemma}
\begin{proof}
Towards a contradiction,
suppose that $|E| < \kappa_n$.
Let $F := \mbox{Succ}_T(t) - E$.
Let $D_n \subseteq N$ be the set
$D_n := \bigcup \{ (T|c) : c \in E \}$.
Note that
$$[T] = [D_n] \cup
\bigcup_{c \in F} [T|c].$$
We have that $D_n$ has small splitting
at level $n$, because $t$ is the only
node in $D_n \subseteq T$ at level $n$,
and $\mbox{Succ}_{D_n}(t) = E$
has size $< \kappa_n$.
For each $c \in F$,
let $\langle D^c_m \subseteq (T|c) :
m \ge n + 1 \rangle$
be a sequence of trees that witnesses that
$(T|c)$ is $(n+1)$-small.
For each $m \ge n+1$, let
$$D_m := \bigcup_{c \in F} D^c_m.$$
Then
$$\bigcup_{c \in F} [T|c] =
\bigcup_{c \in F} \bigcup_{m \ge n+1} [D^c_m] =
\bigcup_{m \ge n+1} \bigcup_{c \in F} [D^c_m] \subseteq
\bigcup_{m \ge n+1} [D_m].$$
Consider any $m \ge n+1$.
Since $|F| \le |\mbox{Succ}_T(t)|
\le \kappa_n < \kappa_m$
and each $D^c_m$ has small splitting at level $m$,
by the observation above
$D_m$ has small splitting at level $m$.
Thus, we have
$[T] \subseteq \bigcup_{m \ge n} [D_m]$ and
each $D_m$ has small splitting at level $m$.
Hence $T$ is $n$-small,
which is a contradiction.
\end{proof}
\begin{cor}
\label{notsmallimplieslaver}
Let $T \subseteq N$ be a tree,
let $t := \mbox{Stem}(T)$, and
let $n := \dom(t)$.
Assume that $T$ is not $n$-small.
Then there is a subtree $L \subseteq T$
such that $L \in \mbb{Q}_n$.
\end{cor}
\begin{proof}
We will construct $L$ by induction.
For each $m \le n$,
let $L(m) := \{ t \restriction m \}$.
Let $L(n+1)$ be the set of $c \in \mbox{Succ}_T(t)$
such that $(T|c)$ is not $(n+1)$-small.
By Lemma \ref{lem2.23},
$|L(n+1)| = \kappa_n$.
Let $L(n+2)$ be the set of nodes of the form
$c \in \mbox{Succ}_T(u)$ for $u \in L(n+1)$
such that $(T|c)$ is not $(n+2)$-small.
Again by Lemma \ref{lem2.23},
for each $u \in L(n+1)$,
since $(T|u)$ is not $(n+1)$-small,
$|\mbox{Succ}_L(u)| = \kappa_{n+1}$.
Continuing in this manner,
we obtain $L \subseteq T$, and it has the property that
for each $m \ge n$ and $t \in L(m)$,
$|\mbox{Succ}_L(t)| = \kappa_m$.
Thus, $L \in \mbb{Q}_n$.
\end{proof}
\begin{lemma}\label{lem.2.25}
Fix $n < \omega$ and
let $L \in \mbb{Q}_n$.
Then $L$ is not $n$-small.
\end{lemma}
\begin{proof}
Suppose, towards a contradiction,
that there is a sequence of leafless trees
$\langle D_m \subseteq L : m \ge n \rangle$
such that $[L] \subseteq \bigcup_{m \ge n} [D_m]$
and each $D_m$ has small splitting at level $m$.
Let $t_n \in L(n)$ be arbitrary.
We will define a sequence of nodes
$\langle t_m \in L(m) : m \ge n \rangle$
such that $t_n \sqsubseteq t_{n+1} \sqsubseteq ...$
and $(\forall m \ge n)\,
[D_m] \cap B_{t_{m+1}} = \emptyset$.
If we let $x \in [L]$ be
the union of this sequence of $t_m$'s,
then since $\{x\} = \bigcap_{m \ge n} B_{t_{m+1}}$,
we will have $x \not\in \bigcup_{m \ge n} [D_m]$,
so $[L] \not\subseteq \bigcup_{m \ge n} [D_m]$,
which is a contradiction.
Define $t_{n+1}$ to be any successor of $t_n$ in $L$
such that $t_{n+1} \not\in D_n$.
This is possible because $D_n$ has small splitting
at level $n$ and $t_n$ has $\kappa_n$ successors in $L$.
We have $[D_n] \cap B_{t_{n+1}} = \emptyset$.
Next, define $t_{n+2}$ to be any successor of $t_{n+1}$ in $L$
such that $t_{n+2} \not\in D_{n+1}$.
Continuing in this manner yields
the desired sequence
$\langle t_m : m\ge n \rangle$.
\end{proof}
\begin{proposition}\label{prop.clear}
Fix $n < \omega$.
If $\mc{T}$ is a collection of $n$-small trees
and $|\mc{T}| < \kappa_n$, then
$\bigcup \mc{T}$ is an $n$-small tree.
\end{proposition}
\begin{proof}
For each $T \in \mc{T}$,
let $\langle D^T_m : m \ge n \rangle$ witness that $T$
is $n$-small.
Then $\langle \bigcup_{T \in \mc{T}} D^T_m : m \ge n \rangle$
witnesses that $\bigcup \mc{T}$ is $n$-small.
\end{proof}
\begin{cor}\label{cor.2.27}
Fix $n < \omega$.
If $\{ [T] : T \in \mc{T} \}$ is a partition of $X$
into $< \kappa_n$ closed sets, then at least one
of the trees $T \in \mc{T}$ is not $n$-small.
\end{cor}
\begin{proof}
Suppose that each $T \in \mc{T}$ is $n$-small.
Then by
Proposition \ref{prop.clear},
$\bigcup_{T \in \mc{T}} T = N$ is $n$-small.
However, $N$ cannot be $n$-small by
Lemma \ref{lem.2.25},
as $N$ is a member of $ \mbb{Q}_n$.
\end{proof}
We do not know if this next lemma
has an analogue
for the $\cf(\kappa) > \omega$ case because
of a Bernstein set phenomenon.
\begin{lemma}\label{lem.psi}
\label{onlysomanypieces}
Assume $\cf(\kappa) = \omega$.
Fix $n < \omega$.
Suppose $\Psi: N \to \kappa_n$.
Given $h : \omega \to \kappa_n$,
let $C_h \subseteq X$ be the set of all
$f \in X$ such that
$$(\forall k < \omega)\,
\Psi( f \restriction k ) = h(k).$$
Then for some $h$,
there is an $L \in \mbb{Q}_m$
such that $[L] \subseteq C_h$, where $m$ satisfies $\kappa_m>(\kappa_n)^{\om}$.
\end{lemma}
\begin{proof}
It is straightforward to see that each
set $C_h$ is strongly closed (and hence closed).
Let $m < \omega$ be such that
$(\kappa_n)^{\om}<\kappa_m$.
Such an $m$ exists
by Assumption \ref{basicassumption}.
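Note that the sets $C_h$ form a partition of $X$ into strongly closed pieces: each $f \in X$ belongs to exactly one of them, namely $C_h$ for the function $h(k) := \Psi(f \restriction k)$, and
$$|\{ C_h : h \in {^\omega \kappa_n} \}|
\le (\kappa_n)^\omega < \kappa_m.$$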
By Corollary \ref{cor.2.27},
one of the sets $C_h = [T]$ must be such that
$T$ is not $m$-small.
By Corollary~\ref{notsmallimplieslaver},
there is some tree $L \subseteq T$
such that $L \in \mbb{Q}_m$.
\end{proof}
\subsection{Strong Splitting Normal Form}
\begin{observation}
\label{embeddingobs}
Let $T \in \mbb{P}$.
There is an embedding
$F : N \to T$, meaning that
$(\forall t_1, t_2 \in N)$,
\begin{itemize}
\item $t_1 = t_2 \Leftrightarrow
F(t_1) = F(t_2)$;
\item $t_1 \sqsubseteq t_2 \Leftrightarrow
F(t_1) \sqsubseteq F(t_2)$;
\item $t_1 \perp t_2 \Leftrightarrow
F(t_1) \perp F(t_2)$.
\end{itemize}
From this, it follows by induction that if
$t \in N$ is on level $\alpha < \cf(\kappa)$,
then $F(t)$ is on level $\beta$
for some $\beta \ge \alpha$.
It follows that given any $f \in [N]$,
there is exactly one $g \in [T]$
that has all the nodes
$F(f \restriction \alpha)$
for $\alpha < \cf(\kappa)$ as initial segments.
Given a set $S \subseteq N$,
let $I(S)$ be the set of all initial
segments of elements of $S$.
If $H \subseteq N$ is a perfect tree, then
$I(F``(H)) \subseteq T$ is a perfect tree.
If $H_1, H_2 \subseteq N$ are trees such that
$[H_1] \cap [H_2] = \emptyset$, then
$[I(F``(H_1))] \cap [I(F``(H_2))] = \emptyset$.
\end{observation}
\begin{proof}
To construct the embedding $F$,
first define $F(\emptyset) = \emptyset$.
Now fix $\alpha < \cf(\kappa)$ and suppose
$F(u)$ has been defined for all
$u \in \bigcup_{\gamma < \alpha} N(\gamma)$.
If $\alpha$ is a limit ordinal
and $t \in N(\alpha)$,
define $F(t)$ to be
$\bigcup_{\gamma < \alpha} F(t \restriction \gamma)$.
If $\alpha = \beta + 1$,
fix $u \in N(\beta)$.
Fix $s \sqsupseteq F(u)$
such that $s$ has $\ge \kappa_\beta$ successors in $T$.
For each $\sigma < \kappa_\beta$,
define $F(u ^\frown \sigma)$ to be the
$\sigma$-th successor of $s$ in $T$.
The rest of the claims in the observation
follow easily.
\end{proof}
\begin{proposition}
\label{cangetstrongnormalform}
For each $T \in \mbb{P}$,
there is some $T' \le T$
in strong splitting normal form.
\end{proposition}
\begin{proof}
Fix $T \in \mbb{P}$.
Fix an embedding $F : N \to T$.
Let $\Psi : N \to \omega$ be the coloring
$\Psi(u) := \dom( F(u) )$.
Let $L \in \mbb{Q}$
be given by Lemma \ref{lem.psi}.
Then $T' := I(F``(L))$ is in strong
splitting normal form and $T' \le T.$
\end{proof}
This section concludes by showing that $\mbb{P}$ is not
$\kappa^{\cf(\kappa)}$-c.c.
That is,
$\mbb{P}$ has a maximal antichain
of size $\kappa^{\cf(\kappa)}$.
This result is optimal because
$|\mbb{P}| = \kappa^{\cf(\kappa)}$.
\begin{proposition}\label{prop.2.31}
\label{splititup}
Let $T \in \mbb{P}$.
Then there are $\kappa^{\cf(\kappa)}$
pairwise incompatible extensions of $T$ in $\mbb{P}$.
Hence, $\mbb{P}$ is not $\kappa^{\cf(\kappa)}$-c.c.
\end{proposition}
\begin{proof}
Let $F : N \to T$ be an embedding
guaranteed to exist by the observation above.
For each $\alpha < \cf(\kappa)$, let
$\{ R_{\alpha, \beta} : \beta < \kappa_\alpha \}$
be a partition of $\kappa_\alpha$
into $\kappa_\alpha$
pieces of size $\kappa_\alpha$.
Given $f \in [N]$, let
$H_f \subseteq N$ be the tree
$$H_f := \{ t \in N :
(\forall \alpha \in \dom(t))\,
t(\alpha) \in R_{\alpha, f(\alpha)} \}.$$
Each $H_f$ is a non-empty perfect tree.
If $f_1 \not= f_2$, then
$[H_{f_1}] \cap [H_{f_2}] = \emptyset$:
for the least $\alpha$ with
$f_1(\alpha) \not= f_2(\alpha)$,
any $g_1 \in [H_{f_1}]$ and $g_2 \in [H_{f_2}]$
satisfy $g_1(\alpha) \in R_{\alpha, f_1(\alpha)}$
and $g_2(\alpha) \in R_{\alpha, f_2(\alpha)}$,
and these sets are disjoint.
Using the notation of Observation~\ref{embeddingobs},
for each $f \in [N]$ let
$$T_f := I(F``(H_f)).$$
Certainly each $[T_f]$ is a subset of $[T]$,
because $T_f \subseteq T$.
By Observation~\ref{embeddingobs},
each $T_f$ is a non-empty perfect tree,
and $f_1 \not= f_2$ implies
$[T_{f_1}] \cap [T_{f_2}] = \emptyset$,
which in turn implies
$T_{f_1}$ is incompatible with $T_{f_2}$.
Thus, the conditions $T_f \in \mbb{P}$ for $f \in [N]$
are pairwise incompatible.
Since $[N] = X$ has size $\kappa^{\cf(\kappa)}$,
there are $\kappa^{\cf(\kappa)}$
of these conditions.
\end{proof}
\section{$(\om,\kappa_n)$ and $(\om,\infty,<\kappa)$-distributivity hold in $\mathbb{P}$}\label{sec.dlsholding}
This section concentrates on those distributive laws which hold in the complete Boolean algebra $\bB$, when $\kappa$ has countable cofinality.
Theorem \ref{thm.3.5} was proved by Prikry in the late 1960's; the first proof in print appears in this paper.
Here, we reproduce the main ideas of his proof, modifying his original argument slightly,
in particular,
using Lemma \ref{onlysomanypieces}, to simplify the presentation.
In Theorem \ref{thm.3.9} we prove that $\bP$ satisfies a Sacks-type property. This, in turn, implies that the $(\om,\infty,<\kappa)$-d.l.\ holds in $\bB$ (Corollary \ref{cor.3.10}).
The reader is reminded that for the entire paper, Convention
\ref{conventionforpaper} and Assumption \ref{basicassumption} are assumed.
\subsection{$(\om,\kappa_n)$-Distributivity}
\begin{definition}\label{defn.3.1}
A \defemph{stable tree system}
is a pair $(F_N, F_\mbb{P})$ of functions
$F_N : N \to N$ and
$F_\mbb{P} : N \to \mbb{P}$,
where $F_N$ is an embedding,
such that
\begin{itemize}
\item[1)]
For each $t \in N$,
$\mbox{Stem}(F_\mbb{P}(t)) \sqsupseteq F_N(t)$;
\item[2)]
If $t_1 \in N$ is a proper initial segment of
$t_2 \in N$, then
$F_\mbb{P}(t_1) \supseteq F_\mbb{P}(t_2)$,
and $F_N(t_1)$ is a proper initial segment of
$F_N(t_2)$;
\item[3)]
$F_N$ maps each level of $N$
to a subset of a level of $N$
(levels are mapped to distinct levels).
\end{itemize}
If requirement 3) is dropped,
$(F_N, F_\mbb{P})$ is called a
\defemph{weak stable tree system}.
\end{definition}
Note that 1) can be rewritten as follows:
$[F_\mbb{P}(t)] \subseteq B_{F_N(t)}$
for all $t \in N$.
Note from 3) that
$I(F_N``(N))$ is in $\mbb{P}$.
\begin{lemma}
\label{weakisenough}
Assume $\cf(\kappa) = \omega$.
If $(F_N, F_\mbb{P})$ is a weak stable tree system,
then there is a tree $T \le N$ in strong splitting
normal form and an embedding
$F: N \to T$ such that
$(F_N \circ F, F_\mbb{P} \circ F)$
is a stable tree system.
\end{lemma}
\begin{proof}
Let $\Psi : N \to \omega$
be the coloring
$\Psi(u) := \dom(F_N(u))$.
Let $T \in \mbb{Q}$
be given by Lemma~\ref{onlysomanypieces}.
Let $F : N \to T$ be an embedding
that maps levels to levels.
The function $F$ is as desired.
\end{proof}
We point out that Definition \ref{defn.3.1} applies for $\kappa$ of any cofinality.
It can be shown that if
$(F_N, F_\mbb{P})$ is a stable tree system
and $\gamma < \cf(\kappa)$,
then
$$\bigcup \{ F_\mbb{P}(t) :
t \in N(\gamma) \} \in \mbb{P}.$$
For our purposes, when $\cf(\kappa)=\om$, the following lemma will be useful.
\begin{lemma}
\label{intofunionisbig}
Assume $\cf(\kappa) = \omega$.
Let $(F_N, F_{\mbb{P}})$
be a stable tree system.
Then $$T := \bigcap_{n < \omega}
\bigcup \{ F_\mbb{P}(t) :
t \in N(n) \}$$
is in $\mbb{P}$.
Further, given any $S \le T$
and $n \in \omega$,
there is some $t \in N(n)$
such that $S$ is compatible with
$F_\mbb{P}(t)$.
\end{lemma}
\begin{proof}
To prove the first claim,
note that
$$T :=
\bigcap_{n < \omega}
\bigcup \{ F_\mbb{P}(t) : t \in N(n) \} =
\bigcup_{f \in X}
\bigcap_{n < \omega}
F_\mbb{P}(f \restriction n).$$
This is because if
$t_1, t_2 \in N$ are incomparable,
then $F_\mbb{P}(t_1) \cap
F_\mbb{P}(t_2) = \emptyset$.
Now temporarily fix $f \in X$.
One can see that
$$\bigcap_{n < \omega}
F_\mbb{P}(f \restriction n) =
I(\{F_N( f \restriction n ) :
n < \omega\}).$$
Now
$$\bigcup_{f \in X}
\bigcap_{n < \omega}
F_\mbb{P}(f \restriction n) =
\bigcup_{f \in X}
I(\{F_N( f \restriction n ) :
n < \omega\}) =
I( F_N``(N)).$$
Thus, $T = I( F_N``(N))$,
so $T$ is in $\mbb{P}$.
To prove the second claim,
fix $S \le T$ and $n \in \omega$.
The stems of the trees
$F_\mbb{P}(t)$ for $t \in N(n)$
are pairwise incompatible.
Also, the stems of the trees
$F_\mbb{P}(t)$ for $t \in N(n)$
are all in $N(l)$ for some fixed
$l \in \omega$.
Let $s \in S(l)$ be arbitrary.
Then $s = \mbox{Stem}(F_\mbb{P}(t))$
for some fixed $t \in N(n)$,
and so $(S | s) \le F_\mbb{P}(t)$,
showing that $S$ is compatible
with $F_\mbb{P}(t)$.
\end{proof}
\begin{lemma}
\label{separate}
Assume $\cf(\kappa) = \omega$,
and let $n < \omega$.
Consider any
$\{ T_{\beta} \in \mbb{P} :
\beta < \kappa_n \}$.
Then there is some
$l < \omega$,
a set $S \subseteq \kappa_n$
of size $\kappa_n$,
and an injection
$J : S \to N(l)$
such that
$$(\forall \beta \in S)\,
J(\beta) \in T_\beta.$$
\end{lemma}
\begin{proof}
For each $\beta < \kappa_n$,
let $l_\beta < \omega$
be such that $T_\beta$
has $\ge \kappa_n$ nodes
on level $l_\beta$.
Let $l < \omega$
and $S \subseteq \kappa_n$
be a
set of size $\kappa_n$
such that $(\forall \beta \in S)\,
l_\beta = l$;
these exist because $\kappa_n$
is regular and $\omega < \kappa_n$.
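In more detail, writing
$$\kappa_n = \bigcup_{l < \omega}
\{ \beta < \kappa_n : l_\beta = l \},$$
the regularity and uncountability of $\kappa_n$ guarantee that at least one piece of this countable partition has size $\kappa_n$; take $l$ to be the index of such a piece and $S$ to be the piece itself.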
Define the injection
$J : S \to N(l)$
by mapping each element $\beta$ of
$S$ to a node on level $l$
of $T_\beta$ which is different from
the nodes chosen so far.
Then $J$ satisfies the lemma.
\end{proof}
\begin{theorem}\label{thm.3.5}
Assume $\cf(\kappa) = \omega$.
Then $\bP$ satisfies the $(\omega, \nu)$-d.l., for all $\nu<\kappa$.
\end{theorem}
\begin{proof}
Let $\mbb{B}$ be the complete
Boolean algebra associated with $\mbb{P}$.
We have a dense embedding of $\mbb{P}$
into $\mbb{B}$, which maps each condition
$P \in \mbb{P}$ to the set
of all conditions $Q \le P$.
Each element of $\mbb{B}$ is a
downwards closed subset of $\mbb{P}$.
We shall show that for each $n<\om$, the $(\om,\kappa_n)$-d.l.\ holds in $\bB$.
Let $n<\om$ be fixed.
For each $m < \omega$, let
$\langle a_{m,\gamma} \in \mbb{B} :
\gamma < \kappa_n \rangle$
be a maximal antichain in $\mbb{B}$.
For each $m < \omega$, the set
$\bigcup \{
a_{m,\gamma} : \gamma < \kappa_n \}$
is dense in $\mbb{P}$.
To show that the specified
distributive law holds, fix a
non-zero element
$b \in \mbb{B}$.
We must find a function
$h \in {^{\omega} \kappa_n}$
such that
$$b \wedge \bigwedge_{m < \omega}
a_{m,h(m)} > \mathbf{0}.$$
It suffices to show that for some
$Q \in b$, there is a function
$h \in {^\omega \kappa_n}$
such that
$$(\forall m < \omega)\,
Q \in a_{m, h(m)}.$$
Fix any $P \in b$.
First, we will construct a stable tree
system $(F_N, F_\mbb{P})$ with the property that
$$(\forall m < \omega)
(\forall t \in N(m))
(\exists \gamma < \kappa_n)\,
F_\mbb{P}(t) \in a_{m,\gamma}.$$
By Lemma~\ref{weakisenough},
it suffices to define a weak stable tree system
with this property.
To define $(F_N, F_\mbb{P})$,
first let $F_N(\emptyset)$ be $\emptyset$ and
$F_\mbb{P}(\emptyset) \le P$ be a member of
$a_{0,\gamma}$ for some $\gamma < \kappa_n$.
Suppose that $t \in N$ and
both $F_N(t)$ and $F_\mbb{P}(t)$ have been defined.
Suppose $t$ is on level $m$ of $N$.
Note that $\mbox{Succ}_N(t) =
\{ t ^\frown \beta : \beta < \kappa_m \}$.
For each $\beta < \kappa_m$,
let $P_{\langle t, \beta \rangle} \le F_\mbb{P}(t)$
be a member of
$a_{m+1,\gamma}$ for some $\gamma < \kappa_n$.
We may apply Lemma~\ref{separate} to get
injections $\eta_t : \mbox{Succ}_N(t) \to \kappa_m$
and $J_t : \mbox{Succ}_N(t) \to N(l_t)$ for some
$l_t < \omega$
such that
$(\forall s \in \mbox{Succ}_N(t))\,
J_t(s) \in P_{\langle t, \eta_t(s) \rangle}$.
For each $s \in \mbox{Succ}(t)$,
define $F_N(s) := J_t(s)$
and $F_\mbb{P}(s) :=
P_{\langle t, \eta_t(s) \rangle} | F_N(s)$.
Note that each $F_\mbb{P}(s)$ is in
$a_{m+1,\gamma}$ for some
$\gamma < \kappa_n$.
Also, since the nodes
$F_N(s) \sqsupseteq F_N(t)$
for $s \in \mbox{Succ}(t)$
are pairwise incompatible,
each $F_N(s)$ must be a
\textit{proper} extension of $F_N(t)$.
This completes the definition of
$(F_N, F_\mbb{P})$.
Let $\Psi : N \to \kappa_n$
be the function such that for each
$m < \omega$ and
$t \in N(m)$,
$\Psi(t) = \gamma < \kappa_n$
is the unique ordinal such that
$F_\mbb{P}(t) \in a_{m,\gamma}$.
Using the notation and result in
Lemma~\ref{onlysomanypieces},
there is some
$h \in {^\omega \kappa_n}$
such that $C_h$ includes
a non-empty perfect set.
Fix such an $h$, and let
$H \le N$ be a perfect tree
such that $[H] \subseteq C_h$.
We have
$$(\forall m < \omega)
(\forall t \in H(m))\,
F_\mbb{P}(t) \in a_{m,h(m)}.$$
Let $Q \in \mbb{P}$ be the set
$$Q := \bigcap_{m < \omega}
\bigcup \{ F_\mbb{P}(t) : t \in
H(m) \}.$$
It is immediate that $Q \subseteq P$,
because $Q \subseteq F_\mbb{P}(\emptyset)$ and $F_\mbb{P}(\emptyset) \le P$.
By Lemma~\ref{intofunionisbig},
$Q \in \mbb{P}$.
Thus, $Q \le P$.
Now fix an arbitrary
$m < \omega$.
We will show that
$Q \in a_{m, h(m)}$,
and this will complete the proof.
It suffices to show that for every
$\gamma \not= h(m)$ and every
$R \in a_{m, \gamma}$, we have
$|[Q] \cap [R]| < \kappa^{\omega}$, as
this will imply
there is no
non-empty perfect
subset of their intersection.
Fix such $\gamma$ and $R$.
We have $Q \le
\bigcup \{ F_\mbb{P}(t) :
t \in H(m) \}$.
In fact,
$$[Q] \subseteq
\bigcup \{ [F_\mbb{P}(t)] :
t \in H(m) \}.$$
Hence,
$$[Q] \cap [R] \subseteq
\bigcup \{ [F_\mbb{P}(t)] \cap [R] :
t \in H(m) \}.$$
Now,
fix some $F_\mbb{P}(t)$
for $t \in H(m)$.
The conditions $R \in a_{m,\gamma}$
and $F_\mbb{P}(t) \in a_{m,h(m)}$
are incompatible, so the closed set
$[F_\mbb{P}(t)] \cap [R]$ must have size
$\le \kappa$
by Lemma~\ref{bigclosedhasperfect}.
We now have that
$[Q] \cap [R]$ is a subset of
a size $< \kappa$ union of
size $\le \kappa$ sets.
Thus, $|[Q] \cap [R]| \le \kappa
< \kappa^\omega$,
implying that the $(\om,\kappa_n)$-d.l.\ holds in $\bB$.
\end{proof}
\begin{question}
For $\cf(\kappa) > \omega$
and $\nu<\kappa$,
does $\bP$ satisfy the $(\cf(\kappa), \nu)$-d.l.?
\end{question}
\subsection{$(\om,\infty,<\kappa)$-Distributivity}
The next theorem we will prove
will generalize the fact that
$\mbb{P}$ satisfies the
$(\omega, \kappa, {<\kappa})$-d.l.\
(assuming $\cf(\kappa) = \omega)$.
The proof does not work for the
$\cf(\kappa) > \omega$ case.
We could get the proof to work if
we modified the forcing so that fusion held for sequences of length
$\cf(\kappa)$.
However, all such modifications we have tried
cause important earlier theorems
in this paper to fail.
\begin{definition}\label{defn.4.2}
Assume $\cf(\kappa) = \omega$.
A \defemph{fusion sequence} is a sequence
of conditions $\langle T_n \in \mbb{P}
: n < \omega \rangle$
such that $T_0 \ge T_1 \ge ...$ and
there exists a sequence of sets
$\langle S_n \subseteq T_n : n < \omega \rangle$
such that for each $n < \omega$,
each $t \in S_n$ has
$\ge \kappa_n$ successors in $T_n$,
which are in $T_m$ for every $m \ge n$,
and each successor of $t$ in $T_n$
has an extension in $S_{n+1}$.
\end{definition}
\begin{lemma}\label{lem.4.3}
Let $\langle T_n \in \mbb{P} : n < \omega \rangle$
be a fusion sequence and define
$T_\omega := \bigcap_{n \in \omega} T_n$.
Then $T_\omega \in \mbb{P}$ and
$(\forall n < \omega)\, T_\omega \le T_n$.
\end{lemma}
\begin{proof}
This is a standard argument.
\end{proof}
The following
theorem shows that $\mbb{P}$
has a property very similar to the Sacks property.
\begin{theorem}\label{thm.3.9}
Assume $\cf(\kappa) = \omega$.
Let $\mu : \omega \to ({\kappa - \{0\}})$ be any
non-decreasing function
such that $\lim_{n \to \omega} \mu(n) = \kappa$.
Let $\lambda = \kappa^\omega$.
Let $T \in \mbb{P}$ and $\dot{g}$ be such that
$T \forces \dot{g} : \omega \to \check{\lambda}$.
Then there is some $Q \le T$
and a function $f$ with domain $\omega$ such that
for each $n \in \omega$,
$|f(n)| \le \mu(n)$ and
$Q \forces \dot{g}(\check{n}) \in \check{f}(\check{n})$.
\end{theorem}
\begin{proof}
We will define a decreasing (with respect to inclusion)
sequence of trees
$\langle T_n \in \mbb{P} : n \in \omega \rangle$
such that some subsequence of this is a fusion sequence.
The condition $Q$ will be the intersection
of the fusion sequence.
At the same time, we will define $f$.
For each $n \in \omega$ we will also define a set
$S_n \subseteq T_n$ such that every child
(in $T_n$) of every node in $S_n$
will be in each tree $T_m$ for $m \ge n$.
Each node in $T_n$ will be comparable to some node in $S_n$.
Also, we will have $|S_n| \le \mu(n)$
and each $t \in S_n$ will have
$\le \mu(n)$ children in $T_n$.
Each element of $S_{n+1}$ will properly
extend some element of $S_n$,
and each element of $S_n$ will
be properly extended by some element of $S_{n+1}$.
Let $S_0$ consist of a single node $t$ of $T$
that has $\ge \kappa_0$ children.
Let $T' \subseteq T$ be a subtree such that
$t$ is the stem of $T'$ and
$t$ has exactly $\min\{\kappa_0, \mu(0)\}$ children.
For each $\gamma$ such that $t ^\frown \gamma \in T'$,
let $U_{t ^\frown \gamma}$ be a subtree of
$T | t ^\frown \gamma$ such that
$U_{t ^\frown \gamma}$ decides the value of
$\dot{g}(\check{0})$.
Let $T_0$ be the union of these $U_{t ^\frown \gamma}$ trees.
The condition $T_0$ allows for only $\le \mu(0)$
possible values for $\dot{g}(\check{0})$.
Define $f(0)$ to be the set of these values.
We have $T_0 \forces \dot{g}(\check{0}) \in \check{f}(\check{0})$.
Also, $|S_0| = 1$ and the unique node in $S_0$
has $\le \mu(0)$ children in $T_0$,
so $|f(0)| \le \mu(0)$.
Now fix $n > 0$ and suppose we have defined
$T_0, ..., T_{n-1}$.
For each child $t \in T_{n-1}$
of a node in $S_{n-1}$,
pick an extension $s_t \in T_{n-1}$ of $t$
that has $\ge \kappa_n$ children in $T_{n-1}$.
Let $S_n$ be the set of these $s_t$ nodes.
By hypothesis,
$|S_{n-1}| \le \mu(n-1)$ and
each node in $S_{n-1}$ has $\le \mu(n-1)$
children in $T_{n-1}$.
Thus, $|S_n| \le \mu(n-1)$,
and so $|S_n| \le \mu(n)$,
because $\mu(n-1) \le \mu(n)$.
Let $T'_{n-1}$ be a subtree of $T_{n-1}$ such that
each $s_t$ is in $T'_{n-1}$ and each $s_t$
has exactly $\min \{ \kappa_n, \mu(n) \}$
children in $T'_{n-1}$.
Thus, each $s_t \in S_n$ has
$\le \mu(n)$ children in $T'_{n-1}$.
For each $s_t ^\frown \gamma$ in $T'_{n-1}$,
let $U_{s_t ^\frown \gamma}$ be a subtree of
$T'_{n-1} | s_t ^ \frown \gamma$
that decides the value of
$\dot{g}(\check{n})$.
Let $T_n$ be the union of the
$U_{s_t ^\frown \gamma}$ trees.
We have $T_n \subseteq T'_{n-1} \subseteq T_{n-1}$.
The condition $T_n$ allows for only
$\le \mu(n)$ possible values for $\dot{g}(\check{n})$.
Define $f(n)$ to be the set of these values.
We have that $|f(n)| \le \mu(n)$ and
$T_n \forces \dot{g}(\check{n}) \in \check{f}(\check{n})$.
This completes the construction of the
sequence of trees and the function $f$.
Defining $Q := \bigcap_{n \in \omega} T_n$,
we see that $Q$ is a condition because there
is a subsequence of $\langle T_n : n \in \omega \rangle$
that is a fusion sequence
satisfying the hypothesis of the lemma above.
This is true because
$\lim_{n \to \omega} \mu(n) = \kappa$.
The condition $Q$ forces the desired statements.
\end{proof}
Note that for the purpose of using the theorem above,
each function $\mu' : \omega \to \kappa$
such that $\lim_{n \to \omega} \mu'(n) = \kappa$
everywhere dominates a non-decreasing function
$\mu: \omega \to \kappa$ such that
$\lim_{n \to \omega} \mu(n) = \kappa$.
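For instance, given such a $\mu'$, one candidate (a sketch; any non-decreasing minorant with the same limit works) is
$$\mu(n) := \min \{ \mu'(m) : m \ge n \}.$$
Then $\mu$ is non-decreasing, $\mu(n) \le \mu'(n)$ for every $n$, and $\lim_{n \to \omega} \mu(n) = \kappa$, since for each $\nu < \kappa$ there is an $n$ with $\mu'(m) > \nu$ for all $m \ge n$; and if $\mu'$ takes only non-zero values, then $\mu$ maps into $\kappa - \{0\}$ as required by the theorem above.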
Note also that
nothing would have changed in the proof if instead we had
$T \forces \dot{g} : \omega \to \check{V}$,
because any name for an element of $V$
can be represented by a function in $V$ from
an antichain (which has size
$\le \kappa^\omega$, by Proposition \ref{prop.2.31}) in $\mbb{P}$
to $V$.
\begin{cor}\label{cor.3.10}
Assume $\cf(\kappa) = \omega$.
Then $\mbb{P}$ satisfies the
$$(\omega, \infty, <\kappa)\mbox{-d.l.}$$
\end{cor}
\section{Failures of Distributive Laws}\label{sec.dlfails}
This section contains two of the three failures of distributive laws proved in this paper.
Here, we assume Convention
\ref{conventionforpaper} and Assumption \ref{basicassumption}, and do not place any restrictions on the cofinality of $\kappa$.
Theorems \ref{thm.4.1} and \ref{thm.4.10} were proved by Prikry in the late 1960's (previously unpublished)
for the case when $\cf(\kappa)=\om$, and here they are seen to easily generalize to $\kappa$ of any cofinality.
\subsection{Failure of $(\cf(\kappa), \kappa, \kappa_n)$-Distributivity}
We point out that when
$\cf(\kappa) = \omega$,
the $(\omega, \kappa, {<\kappa})$-d.l.\ holding in $\bP$ follows from the fact that $\bP$
satisfies the
$(\omega, \omega)$-d.l.
However, if we replace the third parameter
${< \kappa}$ with a fixed cardinal $\nu<\kappa$,
the associated distributive law fails.
This is true in the $\cf(\kappa) > \omega$
case as well.
\begin{theorem}\label{thm.4.1}
For each $\nu<\kappa$,
the $(\cf(\kappa), \kappa, \nu)$-d.l.\
fails for $\mbb{P}$.
\end{theorem}
\begin{proof}
It suffices to show that for each $\al<\cf(\kappa)$, the $(\cf(\kappa),\kappa,\kappa_{\al})$-d.l.\ fails in $\mathbb{P}$.
Note that a maximal antichain of $\mbb{P}$
corresponds to a maximal antichain of the
regular open completion of $\mbb{P}$, via mapping $P\in\bP$ to the regular open set $\{Q\in\bP:Q\le P\}$.
Let $\al<\cf(\kappa)$, and
let $A_\beta := \{ (N | t) :
t \in N(\beta) \}$
for each $\beta < \cf(\kappa)$.
Each $A_\beta$ is a maximal antichain in $\mbb{P}$.
For each $\beta < \cf(\kappa)$,
let $S_\beta \subseteq A_\beta$
have size $\le \kappa_\alpha$.
Let $H \subseteq N$ be the set of $t$
such that $N | t \in S_\beta$
for some $\beta$.
Since each $S_\beta$ has size
$\le \kappa_\alpha$,
each level of $H$ has size
$\le \kappa_\alpha$.
This implies that $H$ has at most
$\kappa_\alpha^{\cf(\kappa)} < \kappa$ paths,
and so $[H]$ cannot include a
non-empty perfect subset.
By the definitions, we have
$$H = \bigcap_{\beta < \cf(\kappa)}
\bigcup S_\beta.$$
Since the left hand side of the equation
above cannot include a perfect tree,
neither can the right hand side.
Hence, the collection $A_{\beta}$, $\beta<\cf(\kappa)$, witnesses the failure of $(\cf(\kappa),\kappa,\kappa_{\al})$-distributivity in $\bP$.
\end{proof}
We point out that the previous theorem is stated in Theorem 4 (2) of \cite{Namba72}.
The proof there, though, is not obviously complete, and for the sake of the literature and of full generality, the proof has been included here.
\subsection{Failure of $(\mf{d}, \infty, <\kappa)$-Distributivity}
\begin{definition}
Given functions
$f,g : \cf(\kappa) \to \cf(\kappa)$,
we write $f \le^* g$
and say $g$ \defemph{eventually dominates} $f$
iff $$\{ \alpha < \cf(\kappa) :
f(\alpha) > g(\alpha) \}$$
is bounded below $\cf(\kappa)$.
Let $\mf{d}(\cf(\kappa))$ be the smallest size
of a family of functions
from $\cf(\kappa)$ to $\cf(\kappa)$
such that each function from
$\cf(\kappa)$ to $\cf(\kappa)$
is eventually dominated by a member of this family.
\end{definition}
\begin{definition}
Let $\mc{D}$ be the collection of all
functions $f$ from $\cf(\kappa)$ to $\cf(\kappa)$
such that $f$ is
non-decreasing and
$$\lim_{\alpha \to \cf(\kappa)} f(\alpha) =
\cf(\kappa).$$
We call a subset of $\mc{D}$ a
\defemph{dominated-by} family
iff given any function $g \in \mc{D}$,
some function in the family is
eventually dominated by $g$.
\end{definition}
The smallest size of a dominated-by family
is $\mf{d}(\cf(\kappa))$.
We will prove the direction that for every
dominating family, there is a dominated-by
family of the same size.
The other direction is similar.
Let $\mc{F}$ be a dominating family.
Without loss of generality, each
$f \in \mc{F}$ is strictly increasing.
Let $\mc{F}' := \{ f' : f \in \mc{F} \}$,
where each $f'$ is a non-decreasing
function that extends the partial function
$\{ (y,x) : (x,y) \in f \}$.
Since $\mc{F}$ is a dominating family,
it can be shown that $\mc{F}'$
is a dominated-by family.
\begin{definition}
Given $f \in \mc{D}$,
we say that a perfect tree
$T \in \mbb{P}$
\defemph{obeys} $f$ iff
for each $\alpha < \cf(\kappa)$,
the $\alpha$-th level of $T$
has $\le \kappa_{f(\alpha)}$ nodes.
\end{definition}
\begin{lemma}\label{lem.4.5}
Let $\lambda = \mf{d}(\cf(\kappa))$
and $G =
\{ g_\gamma \in \mc{D} : \gamma < \lambda \}$
be a dominated-by family.
Then there is some $\delta < \cf(\kappa)$
such that
$$(\forall \alpha < \cf(\kappa))
(\exists \gamma < \lambda)\,
g_\gamma(\alpha) \le \delta.$$
\end{lemma}
\begin{proof}
Assume there is no such
$\delta < \cf(\kappa)$.
For each $\delta < \cf(\kappa)$,
let $\alpha_\delta < \cf(\kappa)$
be the least ordinal such that
$$
(\forall \gamma<\lambda)\ g_{\gamma}(\alpha_{\delta})>\delta.
$$
It must be that
$\delta_1 < \delta_2$ implies
$\alpha_{\delta_1} \le
\alpha_{\delta_2}$.
Now, the limit
$$\mu := \lim_{\delta \to \cf(\kappa)}
\alpha_\delta$$
cannot be less than $\cf(\kappa)$.
To see why, suppose $\mu < \cf(\kappa)$.
Consider $g_0$.
The function $g_0 \restriction (\mu+1)$
must be bounded below $\cf(\kappa)$,
since $\cf(\kappa)$ is regular.
Let $\delta$ be such a bound.
Since $\alpha_\delta \le \mu$
and $g_0$ is non-decreasing,
we have
$g_0(\alpha_\delta) \le g_0(\mu) \le \delta$,
which contradicts the definition
of $\alpha_\delta$.
We have now shown that
$\mu = \cf(\kappa)$.
The partial function
$\alpha_\delta \mapsto \delta$
may not be well-defined.
To fix this problem, for each
$\alpha$ which equals
$\alpha_\delta$ for at least
one value of $\delta$,
pick the least such $\delta$.
Let $\Delta \subseteq \cf(\kappa)$
be the cofinal set of
such $\delta$ values picked.
This results in a well-defined
partial function
which is non-decreasing.
Let $f \in \mc{D}$ be an extension
of this partial function.
Since $G$ is a dominated-by family,
fix some $\gamma$ such that
$f$ eventually dominates $g_\gamma$.
Now, let $\delta \in \Delta$
be such that
$g_\gamma(\alpha_\delta)
\le f(\alpha_\delta)$.
Since $f(\alpha_\delta) = \delta$,
we get that
$g_\gamma(\alpha_\delta) \le \delta$,
which contradicts the definition of
$\alpha_\delta$.
\end{proof}
\begin{theorem}\label{thm.4.10}
The $(\mf{d}(\cf(\kappa)),
\infty,
<\kappa)$-d.l. fails for $\mbb{P}$.
\end{theorem}
\begin{proof}
Let $\lambda = \mf{d}(\cf(\kappa))$.
Let $\{ f_\gamma \in \mc{D} : \gamma < \lambda \}$
be a set which forms a
dominated-by family.
For each $\gamma < \lambda$, let
$\mc{A}_\gamma \subseteq \mbb{P}$
be a maximal antichain in $\mbb{P}$
with the property that for each
$T \in \mc{A}_\gamma$,
$T$ obeys $f_\gamma$.
Note that each $\mc{A}_\gamma$
has size $\le \kappa^{\cf(\kappa)}
= |\mbb{P}|$.
For each $\gamma < \lambda$,
let $\mc{B}_\gamma \subseteq \mc{A}_\gamma$
be some set of size strictly less than $\kappa$.
Let $u : \mbb{P} \to \mbb{B}$
be the standard embedding of $\mbb{P}$
into its completion.
We claim that
$$\bigwedge_{\gamma < \lambda}
\bigvee
\{ u(T) : T \in \mc{B}_\gamma \} = 0,$$
which will prove the theorem.
To prove this claim,
for each $\gamma < \lambda$ let
$$T_\gamma := \bigcup \mc{B}_\gamma.$$
The claim will be proved once we
show that
$\tilde{T} := \bigcap_{\gamma < \lambda}
T_\gamma$
does not contain a perfect subtree.
It suffices to find some
$\delta < \cf(\kappa)$
such that there is a cofinal set of
levels of $\tilde{T}$ that each
have $\le \kappa_\delta$ nodes.
Since $\cf(\kappa)$ and $\lambda$
are regular cardinals with $\cf(\kappa) < \lambda$,
fix a set
$K \subseteq \cf(\kappa)$
of size $\cf(\kappa)$
and some $\delta < \cf(\kappa)$
such that
$|\mc{B}_\gamma| \le \kappa_\delta$
for each $\gamma \in K$.
Given $\gamma \in K$, define
$g_\gamma \in \mc{D}$ to be the function
$$g_\gamma(\alpha) := \max \{
f_\gamma(\alpha), \delta \}.$$
Since
$|\mc{B}_\gamma| \le
\kappa_\delta$ and
each $T \in \mc{B}_\gamma$
obeys $f_\gamma$,
it follows that
$T_\gamma = \bigcup \mc{B}_\gamma$
obeys $g_\gamma$.
Thus, by the definition of
$\tilde{T}$, it suffices to find
a cofinal set $L \subseteq \cf(\kappa)$
and for each $l \in L$ an ordinal
$\gamma_l \in K$ such that
$g_{\gamma_l}(l) \le \delta$.
This, however, follows
from Lemma \ref{lem.4.5}.
\end{proof}
For $\cf(\kappa)=\om$, assuming the Continuum Hypothesis and that $2^{\kappa}=\kappa^+$,
Theorem 4 (4) of \cite{Namba72}
states that for all $\lambda\le\kappa^+$, the $(\om_1,\lambda,<\lambda)$-d.l.\ fails in $\bP$.
Under these assumptions, that theorem of Namba implies Theorem \ref{thm.4.10}.
We have included our proof as it is simpler and the result is more general than that in \cite{Namba72}.
\section{$\mathcal{P}(\omega)/\Fin$ and $\mf{h}$}\label{sec.5}
In this section, we show that the Boolean algebra $\mathcal{P}(\omega)/\Fin$ completely embeds into $\bB$.
Similar reasoning shows that the forcing
$\bP$ collapses the cardinal $\kappa^{\om}$ to the distributivity number $\mathfrak{h}$.
It will follow that the $(\mathfrak{h},2)$-distributive law fails in $\bB$;
hence assuming the Continuum Hypothesis, $\bB$ does not satisfy the $(\om_1,2)$-d.l.
Similar results were proved by \Bukovsky\ and \Coplakova\ in Section 5 of \cite{Bukovsky/Coplakova90}.
They considered perfect trees for a
fixed family of countably many regular cardinals: for each cardinal $\kappa_n$ in the family, their perfect trees must have cofinally many levels where the branching has size $\kappa_n$; similarly for their family of Namba forcings.
Recall that the regular open completion of
a poset is the collection of regular open subsets
of the poset ordered by inclusion.
For simplicity, we will work with the
poset $\mbb{P}'$ of conditions in $\mbb{P}$
that are in strong splitting normal form.
$\mbb{P}'$ forms a dense subset of $\mbb{P}$,
so $\mbb{P}'$ and $\mbb{P}$ have isomorphic
regular open completions.
For this section,
let $\mbb{B}'$ denote the regular open completion
of $\mbb{P}'$
(and $\mbb{B}$ is the regular open completion of $\mbb{P}$).
Recall the following definition:
\begin{definition}
\label{cedef}
Let $\mbb{S}$ and $\mbb{T}$ be complete Boolean algebras.
A function $i : \mbb{S} \to \mbb{T}$ is a
\defemph{complete embedding}
iff the following are satisfied:
\begin{itemize}
\item[1)] $(\forall s, s' \in \mbb{S}^+)\,
s' \le s \Rightarrow i(s') \le i(s)$;
\item[2)] $(\forall s_1, s_2 \in \mbb{S}^+)\,
s_1 \perp s_2 \Leftrightarrow i(s_1) \perp i(s_2)$;
\item[3)] $(\forall t \in \mbb{T}^+)
(\exists s \in \mbb{S}^+)
(\forall s' \in \mbb{S}^+)\,
s' \le s \Rightarrow
i(s') || t$.
\end{itemize}
\end{definition}
If $i : \mbb{S} \to \mbb{T}$
is a complete embedding
and $G$ is $\mbb{T}$-generic over $V$,
then there is some $H \in V[G]$ that is
$\mbb{S}$-generic over $V$.
\begin{definition}
Given $T \in \mbb{P}$,
$\mbox{Split}(T) \subseteq \omega$
is the set of $l \in \omega$
such that $T$ has
a splitting node
on level $l$.
\end{definition}
\begin{theorem}\label{thm.5.3}
There is a complete embedding of
$\mathcal{P}(\omega)/\Fin$ into $\mbb{B}$.
\end{theorem}
\begin{proof}
It suffices to show there is a complete embedding
of $\mathcal{P}(\omega)/\Fin$ into $\mbb{B}'$.
For each $X \in [\omega]^\omega$, define
$\mc{S}_X \in \mbb{B}'$ to be
$\mc{S}_X := \{ T \in \mbb{P}' :
\mbox{Split}(T) \subseteq^* X \}$.
Note that $X =^* X'$ implies
$\mc{S}_X = \mc{S}_{X'}$.
Define $i : [\omega]^\omega \to \mbb{B}'$
to be $i(X) := \mc{S}_X$.
This induces a map from
$\mathcal{P}(\omega)/\Fin$ to $\mbb{B}'$.
We will show this is a complete embedding.
First, we must establish that each
$\mc{S}_X$ is indeed in $\mbb{B}'$.
Temporarily fix $X \in [\omega]^\omega$.
We must show that
$\mc{S}_X \subseteq \mbb{P}'$
is a regular open subset of $\mbb{P}'$.
First, it is clear that $\mc{S}_X$
is closed downwards.
Second, consider any
$T_1 \not\in \mc{S}_X$.
By definition,
$|\mbox{Split}(T_1) - X| = \omega$.
By the nature of strong
splitting normal form,
there is some $T_2 \le T_1$ in $\mbb{P}'$
such that $\mbox{Split}(T_2) = \mbox{Split}(T_1) - X$.
We see that for each
$T_3 \le T_2$ in $\mbb{P}'$,
$T_3 \not\in \mc{S}_X$.
Thus, $\mc{S}_X$ is a regular open set.
We will now show that $i$
induces a complete embedding.
To show 1) of Definition~\ref{cedef},
suppose $Y \subseteq^* X$ are in $[\omega]^\omega$.
If $T \in \mc{S}_Y$,
then $\mbox{Split}(T) \subseteq^* Y$,
so $\mbox{Split}(T) \subseteq^* X$,
which means $T \in \mc{S}_X$.
Thus, $\mc{S}_Y \subseteq \mc{S}_X$,
so 1) is established.
To show 2) of the definition,
suppose $X, Y \in [\omega]^\omega$ but
$X \cap Y$ is finite.
Suppose, towards a contradiction,
that there is some
$T \in \mc{S}_X \cap \mc{S}_Y$.
Then $\mbox{Split}(T) \subseteq^* X$
and $\mbox{Split}(T) \subseteq^* Y$, so
$\mbox{Split}(T) \subseteq^* X \cap Y$,
which is impossible because
$\mbox{Split}(T)$ is infinite.
To show 3) of the definition,
fix $T_1 \in \mbb{P}'$
(which suffices, since $\mbb{P}'$ is dense in $\mbb{B}'$).
Let $X := \mbox{Split}(T_1)$.
We will show that for each
infinite $Y \subseteq^* X$,
there is an extension of $T_1$
in $\mc{S}_Y$.
Fix an infinite $Y \subseteq^* X$.
By the nature of strong splitting normal
form, there is some $T_2 \le T_1$ such that
$\mbox{Split}(T_2) = Y \cap X$.
Thus, $T_2 \in \mc{S}_Y$.
This completes the proof.
\end{proof}
\begin{cor}
Forcing with $\mbb{P}$ adds
a selective ultrafilter on $\omega$.
\end{cor}
\begin{proof}
Forcing with $\mathcal{P}(\omega)/\Fin$
adds a selective ultrafilter.
\end{proof}
\begin{definition}
The distributivity number,
denoted $\mf{h}$,
is the smallest cardinal $\lambda$
such that the
$(\lambda,\infty)$-d.l.\
fails for $\mathcal{P}(\omega)/\Fin$.
\end{definition}
We have that
$\omega_1 \le \mf{h} \le 2^\omega$.
The $(\mf{h},2)$-d.l.\ in fact
fails for $\mathcal{P}(\omega)/\Fin$.
Thus, forcing with $\mbb{P}$ adds a new
subset of $\mf{h}$.
It is also well-known
(see \cite{BlassHB})
that forcing with $\mathcal{P}(\omega)/\Fin$
adds a surjection from $\mf{h}$ to $2^\omega$.
Thus, forcing with $\mbb{P}$ collapses
$2^\omega$ to $\mf{h}$.
We will now see that many more cardinals
get collapsed to $\mf{h}$.
\begin{definition}
A \defemph{base matrix tree}
is a collection $\{ \mc{H}_\alpha : \alpha < \mf{h} \}$
of mad families $\mc{H}_\alpha \subseteq [\omega]^\omega$
such that $\bigcup_{\alpha < \mf{h}} \mc{H}_\alpha$
is dense in $[\omega]^\omega$
with respect to almost inclusion.
\end{definition}
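Recall that a family $\mc{H} \subseteq [\omega]^\omega$ is \defemph{mad} (maximal almost disjoint) iff distinct members of $\mc{H}$ have finite intersection and every $X \in [\omega]^\omega$ has infinite intersection with some member of $\mc{H}$; in particular, an infinite mad family is a maximal antichain in $\mathcal{P}(\omega)/\Fin$. Density with respect to almost inclusion means that every $X \in [\omega]^\omega$ almost contains some member of $\bigcup_{\alpha < \mf{h}} \mc{H}_\alpha$.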
Balcar, Pelant and Simon proved in \cite{Balcar/Pelant/Simon80} that a base matrix for $\mathcal{P}(\omega)/\Fin$ exists, assuming only ZFC.
The following lemma and theorem use ideas from the proof of Theorem 5.1 in \cite{Bukovsky/Coplakova90}, in which \Bukovsky\ and \Coplakova\ prove that their perfect tree forcings, described above, collapse $\kappa^+$ to $\mathfrak{h}$, assuming $2^{\kappa}=\kappa^+$.
\begin{lemma}
There exists a family
$\{ \mc{A}_\alpha \subseteq \mbb{P} : \alpha < \mf{h} \}$
of maximal antichains such that
$\bigcup_{\alpha < \mf{h}} \mc{A}_\alpha$
is dense in $\mbb{P}$.
\end{lemma}
\begin{proof}
Let $\{ \mc{H}_\alpha \subseteq [\omega]^\omega :
\alpha < \mf{h} \}$
be a base matrix tree.
For an infinite $A \subseteq \omega$,
let $\mbb{P}_A :=
\{ T \in \mbb{P} : \mbox{Split}(T) \subseteq A \}$.
For an infinite $A \subseteq \omega$,
we may easily construct an antichain
$\mc{B}_A \subseteq \mbb{P}_A$
whose downward closure is dense in
$\mbb{P}_A$.
Now temporarily fix $\alpha < \mf{h}$.
For distinct $A_1, A_2 \in \mc{H}_\alpha$,
the elements of $\mc{B}_{A_1}$ are incompatible
with the elements of $\mc{B}_{A_2}$,
because if $T_1 \in \mc{B}_{A_1}$ and
$T_2 \in \mc{B}_{A_2}$, then
$\mbox{Split}(T_1) \subseteq^* A_1$ and
$\mbox{Split}(T_2) \subseteq^* A_2$,
so $T_1$ and $T_2$ cannot have a common
extension because $A_1 \cap A_2$ is finite.
For each $\alpha < \mf{h}$,
define $\mc{A}_\alpha :=
\bigcup \{ \mc{B}_A : A \in \mc{H}_\alpha \}$.
Temporarily fix $\alpha < \mf{h}$.
We will show that $\mc{A}_\alpha$ is maximal.
Consider any $T \in \mbb{P}$.
We will show that some extension of
$T$ is compatible with an element of $\mc{A}_\alpha$.
Let $T' \le T$ be such that
$\mbox{Split}(T') \subseteq A$
for some fixed $A \in \mc{H}_\alpha$.
If there were no such $A$, then
$\mbox{Split}(T)$ would witness that $\mc{H}_\alpha$
is not a mad family.
Hence, $T' \in \mbb{P}_A$.
Since the downward closure of $\mc{B}_A$
is dense in $\mbb{P}_A$,
we have that $T'$ (and hence $T$) is compatible with some
element of $\mc{B}_A \subseteq \mc{A}_\alpha$.
We will now show that
$\bigcup_{\alpha < \mf{h}} \mc{A}_\alpha$
is dense in $\mbb{P}$.
Fix any $T \in \mbb{P}$.
Let $A \in \bigcup_{\alpha < \mf{h}} \mc{H}_\alpha$
be such that $A \subseteq^* \mbox{Split}(T)$.
Let $T' \le T$ be such that $\mbox{Split}(T') \subseteq
A \cap \mbox{Split}(T)$,
and let $S \in \mc{B}_A$ be such that $S \le T'$.
Then $S \le T$,
and we are finished.
\end{proof}
\begin{theorem}
The forcing $\mbb{P}$ collapses
$\kappa^\omega$ to $\mf{h}$.
\end{theorem}
\begin{proof}
We work in the generic extension.
Let $G$ be the generic filter.
By the previous lemma,
let $\{ \mc{A}_\alpha \subseteq \mbb{P}
: \alpha < \mf{h} \}$
be a collection of antichains such that
$\bigcup_{\alpha < \mf{h}} \mc{A}_\alpha$
is dense in $\mbb{P}$.
For each $T \in
\bigcup_{\alpha < \mf{h}} \mc{A}_\alpha$,
let $F_T : \kappa^\omega \to \mbb{P}$
be an injection such that
$\{ F_T(\beta) : \beta < \kappa^\omega \}$
is a maximal antichain below $T$
(which exists by Lemma~\ref{splititup}).
Consider the function $f :
\mf{h} \to \kappa^\omega$
defined by
$$f(\alpha) := \beta \Leftrightarrow
(\exists T \in \mbb{P})\,
T \in \mc{A}_\alpha \cap G \mbox{ and }
F_T(\beta) \in G.$$
This is indeed a function because
for each $\alpha$,
there is at most one $T$ in
$\mc{A}_\alpha \cap G$,
and there is at most one
$\beta < \kappa^\omega$ such that
$F_T(\beta) \in G$.
To show that $f$ surjects onto $\kappa^\omega$,
fix $\beta < \kappa^\omega$.
We will find an $\alpha < \mf{h}$ such that
$f(\alpha) = \beta$.
It suffices to show that
$$\{ F_T(\beta) : T \in \bigcup_
{\alpha < \mf{h}} \mc{A}_\alpha \}$$
is dense in $\mbb{P}$.
To show this, fix $S \in \mbb{P}$.
Since $\bigcup_{\alpha < \mf{h}} \mc{A}_\alpha$
is dense in $\mbb{P}$,
fix some $\alpha < \mf{h}$ and $T \in \mc{A}_\alpha$
such that $T \le S$.
We have $F_T(\beta) \le T$,
so $F_T(\beta) \le S$
and we are done.
\end{proof}
\section{Minimality of $\omega$-Sequences}\label{sec.6}
For the entire section,
we will assume $\cf(\kappa) = \omega$.
Sacks forcing was the first forcing shown to add a minimal degree of constructibility.
In \cite{Sacks}, Sacks proved that given a generic filter $G$ for the perfect tree forcing on
${^{<\omega} 2}$,
each real $r:\om\ra 2$ in $V[G]$ which is not in $V$ can be used to reconstruct the generic filter $G$.
A forcing {\em adds a minimal degree of constructibility} if
whenever $\dot{A}$ is a name
forced by a condition $p$ to be
a function from an ordinal to $2$,
then $p \forces (\dot{A} \in \check{V} \mbox{ or }
\dot{G} \in \check{V}(\dot{A}))$,
where $\dot{G}$ is the name for the generic filter
and $\check{V}(\dot{A})$ denotes
the smallest inner model $M$ such that
$\check{V} \subseteq M$ and $\dot{A} \in M$.
One may also ask whether the generic extension is minimal with respect to adding new sequences from $\omega$ to a given cardinal.
Abraham \cite{Abraham85} and Prikry proved that the perfect tree forcings and the version of Namba forcing involving subtrees of
${^{< \omega} \omega_1}$
(and thus adding an unbounded function from $\om$ into $\om_1$) are minimal, assuming $V=L$ (see Section 6 of \cite{Bukovsky/Coplakova90}).
Carlson, Kunen and Miller showed this to be the case assuming Martin's Axiom and the negation of the Continuum Hypothesis in \cite{Carlson/Kunen/Miller84}.
The question of minimality was investigated generally for two models of ZFC $M\subseteq N$ (not necessarily forcing extensions) when $N$ contains a new subset of a cardinal regular in $M$ in Section 1 of \cite{Bukovsky/Coplakova90}.
In Section 6 of that paper,
\Bukovsky\ and \Coplakova\ proved that their families of perfect tree and generalized Namba forcings are minimal with respect to adding new $\om$-sequences of ordinals, but do not produce minimal generic extensions, since $\mathcal{P}(\om)/\Fin$ completely embeds into their forcings.
Brown and Groszek investigated the question of minimality of forcing extensions for forcing posets consisting of superperfect subtrees of
${^{<\kappa}\kappa}$,
where $\kappa$ is an uncountable regular cardinal, splitting along any branch forms a club set of levels, and whenever a node splits, its immediate successors are in some $\kappa$-complete, nonprincipal normal filter.
In \cite{Brown/Groszek06}, they proved that this forcing adds a generic of minimal degree if and only if the filter is $\kappa$-saturated.
In this section,
we show that, assuming that $\kappa$ is a limit of measurable cardinals,
$\mbb{P}$
is minimal with respect to
$\omega$-sequences, meaning if
$p \forces \dot{A} : \omega \to \check{V}$, then
$(p \forces \dot{A} \in \check{V}$ or
$\dot{G} \in \check{V}(\dot{A}))$.
$\bP$
does not add a minimal degree of constructibility,
since $\mathcal{P}(\omega)/\Fin$ completely embeds into $\bB$, and that intermediate model has no new $\om$-sequences.
The proof that Sacks forcing $\mbb{S}$ is minimal
follows once we observe that given an ordinal
$\alpha$, a name $\dot{A}$ such that
$p \forces \dot{A} \in {^{\check{\alpha}} 2} - \check{V}$,
and two conditions $p_1, p_2$,
there are $p_1' \le p_1$ and $p_2' \le p_2$
that decide $\dot{A}$ to extend incompatible
sequences in $V$.
After this observation,
given any condition $p \in \mbb{S}$, we can
extend $p$ using fusion to get $q \le p$
so that which branch
the generic is through $q$ can be recovered by knowing
which initial segments (in $V$) the sequence $\dot{A}$
extends.
This is because every child of a splitting node in $q$
has been tagged with a sequence in $V$,
and no two children of a splitting node are tagged
with compatible sequences.
In Sacks forcing $\mbb{S}$,
every node has at most $2$ children.
In our forcing $\mbb{P}$ (assuming $\cf(\kappa) = \omega$),
for each $n < \omega$
there must be some nodes that have $\ge \kappa_n$ children.
To make the proof work for $\mbb{P}$, we would like
that whenever $n < \omega$ and
$\langle p_\gamma \in \mbb{P} : \gamma < \kappa_n \rangle$
is a sequence
of conditions each forcing $\dot{A}$ to be in
${^{\check{\alpha}}2} - \check{V}$,
then there exists a set of pairwise incompatible sequences
$\{ s_\gamma \in {^{<\alpha} 2} : \gamma < \kappa_n \}$ and
a set of conditions $\{ p_\gamma' \le p_\gamma : \gamma < \kappa_n \}$
such that
$(\forall \gamma < \kappa_n)\,
p_\gamma' \forces
\check{s}_\gamma
\sqsubseteq
\dot{A}$.
However,
suppose $1 \forces \dot{A} \in {^{\check{\omega}_1} 2}$,
$2^{< \omega_1} = 2^\omega < \kappa_0$,
and $\kappa_0$ is a measurable cardinal
as witnessed by some normal measure.
Then there is a measure one set of
$\gamma \in \kappa_0$ such that the $s_\gamma$ are all the same.
Thus,
when we shrink a tree to try to
assign tags to its nodes,
there seems to be the possibility that we can shrink it further to cause the resulting tags to give us no information.
There is a special case:
if $1 \forces \dot{A} : \omega \to \check{V}$
and $1 \forces \dot{A} \not\in \check{V}$,
then it is impossible to perform fusion to
decide more and more of $\dot{A}$ while at the same time
shrinking to get tags that are identical for each
stage of the fusion.
The intersection of the fusion sequence would be
a condition $Q$ such that
$Q \forces \dot{A} \in \check{V}$,
which would be a contradiction.
The actual proof by contradiction
uses a thinning procedure more complicated than
ordinary fusion.
Our proof will make the special assumption that
$\kappa$ is a limit of measurable cardinals
to perform the thinning.
When we say ``thin the tree $T$'',
it is understood that we mean get
a subtree $T'$ of $T$ that is still perfect,
and replace $T$ with $T'$.
When we say ``thin the tree $T$ below $t \in T$'',
we mean thin $T|t$ to get some $T'$,
and then replace $T$
by $T' \cup \{ s \in T : s$ is incompatible with $t \}$.
\begin{definition}
Fix a name $\dot{A}$
such that $1_\mbb{P} \forces
\dot{A} : \omega \to \check{V}$ and
$1_\mbb{P} \forces \dot{A} \not\in \check{V}$.
For each condition $T \in \mbb{P}$,
let $\psi_T : T \to {^{<\omega} V}$ be the function
which assigns to each node $t \in T$
the longest sequence $s = \psi_T(t)$ such that
$(T | t) \forces \dot{A} \sqsupseteq \check{s}$.
Call a splitting node
$t \in T$ a \defemph{red} node of $T$ iff the sequences
$\psi_T(c)$ for $c \in \mbox{Succ}_T(t)$ are all the same.
Call a splitting node
$t \in T$ a \defemph{blue} node of $T$ iff the sequences
$\psi_T(c)$ for $c \in \mbox{Succ}_T(t)$ are pairwise incomparable,
where we say two sequences are incomparable iff neither
is an end extension of the other.
\end{definition}
Although $\psi_T$ and the notions of a red and blue node
depend on the name $\dot{A}$,
in practice there will be no confusion.
Note that being blue is preserved when we pass to
a stronger condition but being red may not be.
For the sake of analyzing the minimality of $\mbb{P}$
with respect to $\omega$-sequences,
we want to be able to shrink any perfect tree $T$
to get some perfect $T' \le T$
whose splitting nodes are all blue:
\begin{lemma}[Blue Coding]
\label{bluecoding}
Let $T \in \mbb{P}$, $\dot{A}$, and $\alpha \in \textrm{Ord}$
be such that $T \forces (\dot{A} : \check{\alpha} \to \check{V})$
and $T \forces \dot{A} \not\in \check{V}$.
Suppose the following are satisfied:
\begin{itemize}
\item[1)]
$T$ is in weak splitting normal form.
\item[2)]
Each splitting node of $T$ is a blue node of $T$.
\end{itemize}
Then $T \forces \dot{G} \in \check{V}(\dot{A})$,
where $\dot{G}$ is the generic filter.
\end{lemma}
\begin{proof}
Unlike almost every other proof in this paper,
we will work in the extension.
Let $G$ be the generic filter,
$g := \bigcap G$,
$\check{V}_G$
be the ground model, and $\dot{A}_G$
be the interpretation of the name $\dot{A}$.
It suffices to show how $g$ can be constructed
from $\dot{A}_G$ and $\check{V}_G$.
We have that $g$ is a path through $T$.
Let $t_0$ be the stem of $T$.
Now $g$ must extend one of the children of $t_0$ in $T$.
Because $t_0$ is blue in $T$,
this child $c$ can be defined
as the unique $c \in \mbox{Succ}_T(t_0)$ satisfying
$\psi_T(c) \sqsubseteq \dot{A}_G$.
Call this child $c_0$.
Now let $t_1$ be the unique minimal extension of $c_0$
that is splitting.
In the same way, we can define the
$c \in \mbox{Succ}_T(t_1)$ that $g$ extends as
the unique child $c$ that satisfies
$\psi_T(c) \sqsubseteq \dot{A}_G$.
Call this child $c_1$.
We can continue like this, and the sequence
$c_0 \sqsubseteq c_1 \sqsubseteq c_2 \sqsubseteq ...$
is constructible
from $\check{V}_G$ and $\dot{A}_G$.
Since $g$ is the unique path that extends each $c_i$,
we have that $g$ is constructible
from $\check{V}_G$ and $\dot{A}_G$
(and so $G$ is as well).
\end{proof}
\begin{lemma}[Blue Selection]
\label{blueselection}
Let $\lambda_1 < \lambda_2$ be cardinals.
Suppose $\lambda_2$ has a measure $\mc{U}$ that is
uniform and $\lambda_1$-complete
(which happens if $\lambda_2$ is a measurable cardinal).
Let $\langle S_\alpha \in [
\bigcup_{\gamma \in \textrm{Ord}} {^\gamma V}]^{\lambda_2} :
\alpha < \lambda_1 \rangle$ be a $\lambda_1$-sequence
of sets of sequences, each of size $\lambda_2$,
where within each $S_\alpha$ the sequences are pairwise
incomparable.
Then there is a sequence
$\langle a_\alpha \in S_\alpha : \alpha < \lambda_1 \rangle$
such that the $a_\alpha$ are pairwise incomparable.
\end{lemma}
\begin{proof}
The measure $\mc{U}$ induces a measure on each $S_\alpha$,
so we may freely talk about a measure one subset of $S_\alpha$.
Given sequences $a, b$, we write $a || b$ to mean they
are comparable (one is an initial segment of the other).
\underline{Claim 1}:
Fix $\alpha_1, \alpha_2 < \lambda_1$.
Then there is at most one $a \in S_{\alpha_1}$ such that
$B_a := \{ b \in S_{\alpha_2} : a || b \}$ has measure one.
\underline{Subclaim}:
Suppose $a \in S_{\alpha_1}$ is such that $B_a$ has measure one.
Then all elements of $B_a$ extend $a$.
To see why, suppose there is some $b \in B_a$
which does not extend $a$.
Then $b$ is an initial segment of $a$.
Let $b'$ be another element of $B_a$.
Since $b \perp b'$,
it must be that $a \perp b'$, which is a contradiction.
Towards proving Claim 1, suppose $a, a'$ are distinct elements
of $S_{\alpha_1}$ such that the sets $B_a$ and $B_{a'}$ have
measure one.
There must be some $b \in B_a \cap B_{a'}$.
We have that $b$ extends both $a$ and $a'$,
which is impossible because $a \perp a'$.
This proves Claim 1.
We will now prove the lemma.
For each $\alpha_1, \alpha_2 < \lambda_1$,
remove the unique element of $S_{\alpha_1}$
that is comparable with measure one elements of
$S_{\alpha_2}$
(if it exists).
This replaces each set $S_\alpha$ with
a new set $S_\alpha'$.
Since $\lambda_1 < \lambda_2$
and the measure is uniform, each
$S_\alpha'$ has size $\lambda_2$
(and is concentrated on by the measure).
Let $a_0$ be any element of $S_0'$.
Now fix $0 < \alpha < \lambda_1$
and suppose we have chosen
$a_\beta \in S_\beta'$ for each $\beta < \alpha$.
For each $\beta < \alpha$, let
$B_\beta := \{ b \in S_\alpha : a_\beta || b \}$.
Each set $B_\beta$ has measure zero,
and there are $< \lambda_1$ of them.
By the $\lambda_1$-completeness of the measure,
there must be an element of $S_\alpha'$ not in any
$B_\beta$ for $\beta < \alpha$.
Let $a_\alpha$ be any such element.
The sequence
$\langle a_\alpha : \alpha < \lambda_1 \rangle$
works as desired.
\end{proof}
\begin{lemma}[Red-Blue Concentration]
\label{redblue}
Let $\lambda_1 < \lambda_2$ be such that
$\lambda_1$ is a measurable cardinal and
$\lambda_2$ has a uniform $\lambda_1$-complete measure.
Let $T \in \mbb{P}$ and $t \in T$ be the stem of $T$.
Assume
$|\textrm{Succ}_T(t)| = \lambda_1$ and fix
a $\lambda_1$-complete measure $\mc{U}$ that
concentrates on $\textrm{Succ}_T(t)$.
For each $c \in \textrm{Succ}_T(t)$,
let $s_c \sqsupseteq c$ be the shortest
proper splitting extension of $c$, and assume that in fact
$|\textrm{Succ}_T(s_c)| = \lambda_2$ and there is
a uniform $\lambda_1$-complete measure $\mc{U}_c$
which concentrates on $\textrm{Succ}_T(s_c)$.
Assume further that for each
$c \in \textrm{Succ}_T(t)$,
$s_c$ is either a red node of $T$
or a blue node of $T$.
Then there is a set
$C \subseteq \textrm{Succ}_T(t)$ in $\mc{U}$
and for each $c \in C$ a tree
$T_c \subseteq T | c$
such that when we define
$T' := \bigcup_{c \in C} T_c$,
then exactly one of the following holds:
\begin{itemize}
\item[1)] The values of
$\psi_{T'}(c)$ for $c \in C$
are pairwise incomparable, so
$t$ is a blue node of $T'$;
\item[2)] The values of
$\psi_{T'}(c)$ for $c \in C$
are all the same,
so $t$ is a red node of $T'$.
Also, for each $c \in C$,
we have that
$\mc{U}_c$ concentrates on
$\textrm{Succ}_{T'}(s_c)$
and $s_c$ is a red node of $T'$.
This implies that $\psi_{T'}(\tilde{c})$
is the same for each
$\tilde{c} \in \textrm{Succ}_{T'}(s_c)$ and
$c \in \textrm{Succ}_{T'}(t)$.
\end{itemize}
\end{lemma}
\begin{proof}
First use the fact that $\mc{U}$
is an ultrafilter on $\textrm{Succ}_T(t)$
to get a set $C_0 \subseteq \textrm{Succ}_T(t)$
in $\mc{U}$ such that the nodes
$s_c$ for $c \in C_0$ are either all blue in $T$
or all red in $T$.
Suppose the nodes $s_c$
(for $c \in C_0$) are all blue in $T$.
Set $C := C_0$.
Then use the lemma above
(the Blue Selection Lemma) to pick one child
$\tilde{c}_c$ of each $s_c$ (for $c \in C$)
such that the resulting sequences
$\psi_{T}(\tilde{c}_c)$ are all
pairwise incomparable.
It is here that we use the fact that the
measures $\mc{U}_c$ are $\lambda_1$-complete.
Now define each $T_c \subseteq T | c$ to be
$T_c := T | {\tilde{c}_c}$.
Define $T'$ to be
$\bigcup_{c \in C} T_c$.
We have $
\psi_{T}(\tilde{c}_c) =
\psi_{(T | \tilde{c}_c)}(\tilde{c}_c) =
\psi_{T_c}(c) =
\psi_{T'}(c)$.
Since the $\psi_T(\tilde{c}_c)$
for $c \in C$ are pairwise incomparable,
then the $\psi_{T'}(c)$ for $c \in C$ are pairwise
incomparable, so
1) holds.
Suppose now that the nodes $s_c$
(for $c \in C_0$) are all red in $T$.
Given $c \in C_0$,
$\psi_{T}(\tilde{c})$ does not depend on
which $\tilde{c} \in \textrm{Succ}_T(s_c)$ is used,
so each $\psi_T(\tilde{c})$
for $\tilde{c} \in \mbox{Succ}_T(s_c)$
in fact equals
$\psi_T(s_c)$.
We also have $\psi_T(s_c) = \psi_T(c)$
for each $c \in C_0$.
We will now use the assumption that
$\lambda_1$ is a measurable cardinal.
Since $\lambda_1$ is a measurable cardinal,
$\lambda_1 \rightarrow (\mc{U})^2_2$.
Thus, there is a set
$C_1 \subseteq C_0$ in $\mc{U}$ such that
the sequences $\psi_T(c)$ for $c \in C_1$
are either all pairwise comparable or all pairwise
incomparable.
\underline{Case 1}:
If they are all pairwise comparable,
then because they might have different lengths,
use the $\omega_1$-completeness of $\mc{U}$
to get a set $C_2 \subseteq C_1$ in $\mc{U}$
such that the $\psi_T(c)$ for $c \in C_2$
are identical.
Set $C := C_2$ and set each $T_c \subseteq T | c$
to be $T_c := T | c$ (no thinning of the
subtrees is necessary).
We have that 2) holds.
\underline{Case 2}:
If they are pairwise incomparable,
then set $C := C_1$ and
set each $T_c \subseteq T | c$
to be $T_c := T | c$
(no thinning of subtrees is necessary).
We have that 1) holds.
\end{proof}
We are now ready for the fundamental
lemma needed to analyze the minimality of $\mbb{P}$
(for functions with domain $\omega$).
\begin{lemma}[Blue Production for $\dot{A} : \omega \to \check{V}$]
\label{blueproduction}
Assume $\cf(\kappa) = \omega$.
Fix $n < \omega$.
Suppose $\kappa_{n} < \kappa_{n+1} < ...$
are all measurable cardinals.
Let $T \in \mbb{P}$ with stem $s \in T$.
Let $\dot{A}$ be such that
$T \forces \dot{A} : \omega \to \check{V}$ and
$T \forces \dot{A} \not\in \check{V}$.
Suppose $s$ has exactly
$\kappa_n$ children in $T$.
Then there is some perfect $W \subseteq T | s$ such that
$s$ has $\kappa_n$ children in $W$ and
$s$ is blue in $W$.
\end{lemma}
\begin{proof}
To prove this result,
we will frequently pick some node
in a tree and fix an ultrafilter which
concentrates on the set of its children in that tree.
When we shrink the tree further,
we will ensure that as long as
the node has $> 1$ child, then the ultrafilter
will still concentrate on the set of its children.
To index this,
we will have partial functions which map
nodes to ultrafilters.
We will start with the empty partial function.
We will define a recursive function $\Phi$.
As input it will take in a tuple
$\langle Q, t, \vec{\mc{U}}, m, k \rangle$,
and as output it will return
$\langle Q', \vec{\mc{U}}' \rangle$.
$Q \supseteq Q'$ are perfect trees.
$\vec{\mc{U}} \subseteq \vec{\mc{U}}'$ are partial functions,
mapping nodes to ultrafilters.
$m$ and $k$ are both numbers $< \omega$.
$Q$ has stem $t$
(passing the stem $t$ to the function $\Phi$ is redundant,
but we do it for emphasis).
The node $t \in Q$ has at least $\kappa_m$ children in $Q$,
it is in $Q'$, and
it has exactly $\kappa_m$ children in $Q'$.
Moreover,
$t \in \dom(\vec{\mc{U}}')$ and
$\vec{\mc{U}}'(t)$ concentrates on
$\mbox{Succ}_{Q'}(t)$.
The number $k$ is how many
recursive steps to take.
Finally, one of the following holds
(note the additional purpose of $m$ and $k$):
\begin{itemize}
\item[1)] $t$ is blue in $Q'$, or
\item[2)] $t$ is red in $Q'$ and
$\dom(\psi_{Q'}(t)) \ge m+k$.
\end{itemize}
That is, if $t$ is red in $Q'$,
then at least the first
$m + k$ values of $\dot{A}$ are decided
by $(Q' | t) = Q'$.
We will now define $\Phi$ recursively on $k$:
\underline{$\Phi(Q,t,\vec{\mc{U}},m,0)$}:
First, remove children of $t$
so that in the resulting tree
$Q_0 \subseteq Q$, $t$ has \textit{exactly}
$\kappa_m$ children.
If this is impossible,
then the function is being used incorrectly.
At this point, we should have
$t \not\in \dom(\vec{\mc{U}})$, otherwise the function
is being used incorrectly.
Let $\mc{U}$ be a $\kappa_m$-complete ultrafilter
on $\mbox{Succ}_{Q_0}(t)$.
Attach this ultrafilter to $t$ by defining
$\vec{\mc{U}}' := \vec{\mc{U}} \cup \{ (t, \mc{U}) \}$.
We now must define $Q' \subseteq Q$.
For each $c \in \mbox{Succ}_{Q_0}(t)$,
let $U_c \subseteq Q_0 | c$ be some condition
which decides at least the first $m+0$ values of $\dot{A}$.
Let $Q_1 := \bigcup_c U_c$.
We have $Q_1 \subseteq Q_0$.
Of course, $\mbox{Succ}_{Q_1}(t) = \mbox{Succ}_{Q_0}(t)$.
Now use the partition relation $\kappa_m \rightarrow (\mc{U})^2_2$
to get a set $C_0 \subseteq \mbox{Succ}_{Q_1}(t)$
in $\mc{U}$ such that the sequences
$\psi_{Q_1}(c)$ for $c \in C_0$ are either pairwise
incomparable or pairwise comparable.
Let $Q_2 \subseteq Q_1$ be the tree obtained by only
removing the children of $t$ that are not in $C_0$.
If the sequences $\psi_{Q_2}(c) = \psi_{Q_1}(c)$
for $c \in C_0$
are pairwise incomparable,
then we are done by defining $Q' := Q_2$
($t$ is blue in $Q_2$).
If not, then apply the pigeon hole principle for
$\omega_1$-complete ultrafilters to get a set
$C_1 \subseteq C_0$ in $\mc{U}$ such that all
$\psi_{Q_2}(c)$ sequences for $c \in C_1$
are the \textit{same}.
Let $Q_3 \subseteq Q_2$ be the tree obtained from $Q_2$ by only
removing the children of $t$ that are not in $C_1$.
We are done by defining $Q' := Q_3$
($t$ is red in $Q_3$ and $Q_3$ decides
at least the first $m+0$ values of $\dot{A}$).
\underline{$\Phi(Q,t,\vec{\mc{U}},m,k+1)$}:
It must be that $t$ has $\kappa_m$ children in $Q$,
otherwise the function is being used incorrectly.
Also, it must be that $t \in \dom(\vec{\mc{U}})$
and $\vec{\mc{U}}(t)$ concentrates on
$\mbox{Succ}_{Q}(t)$.
Temporarily fix a $c \in \mbox{Succ}_{Q}(t)$.
Let $s_c \sqsupseteq c$ be a minimal
extension in $Q$ with $\ge \kappa_{m+1}$ children
(if $k > 0$, by the way the function is used,
the node $s_c$ will be unique).
Let $U_c := Q | s_c$.
Let $\langle U_c', \vec{\mc{U}}_c \rangle :=
\Phi(U_c, s_c, \vec{\mc{U}}, m+1, k)$.
We have that $s_c \in \dom(\vec{\mc{U}}_c)$
and $\vec{\mc{U}}_c(s_c)$ is a
$\kappa_{m+1}$-complete ultrafilter that concentrates
on the size $\kappa_{m+1}$ set of children of
$s_c$ in $U_c'$.
Also, $s_c$ is either a blue node of $U_c'$,
or it is a red node of $U_c'$ and $U_c'$
decides at least the first $(m+1)+k$ elements of $\dot{A}$.
Now unfix $c$.
Define $\vec{\mc{U}}' := \bigcup_c \vec{\mc{U}}_c$.
Let $Q_0 := \bigcup_c U_c' \subseteq Q$.
Use measurability to get a set
$C_0 \subseteq \mbox{Succ}_{Q_0}(t)$
in $\vec{\mc{U}}(t)$ such that
the nodes $s_c$ for $c \in C_0$ are either
all red in $Q_0$ or all blue in $Q_0$.
We will break into cases.
First, consider the case that
the nodes $s_c$ for $c \in C_0$ are all blue
in $Q_0$.
Use Lemma~\ref{blueselection} (Blue Selection)
to get, for each $c \in C_0$, a node
$\tilde{c}_c \in \mbox{Succ}_{Q_0}(s_c)$ such that
the sequences $\psi_{Q_0}(\tilde{c}_c)$ are pairwise
incomparable.
Note that for each $c \in C_0$,
$\psi_{Q_0 | \tilde{c}_c}(c) =
\psi_{Q_0}(\tilde{c}_c)$.
Let $Q_1 := \bigcup_{c \in C_0} (Q_0 | \tilde{c}_c)
\subseteq Q_0$.
We have that $t$ is a blue node of $Q_1$.
Defining $Q' := Q_1$, we are done.
The other case is that the nodes $s_c$
for $c \in C_0$ are all red.
Again using measurability,
fix a set $C_1 \subseteq C_0$
in $\vec{\mc{U}}(t)$
such that the sequences
$\psi_{Q_0}(c)$ for $c \in C_1$
are either all comparable
or all incomparable.
If they are pairwise incomparable,
then define
$Q' := \bigcup_{c \in C_1} (Q_0 | c)
\subseteq Q_0$.
The node $t$ is blue in $Q'$, and we are done.
If they are pairwise comparable,
then apply the pigeon hole principle again
to get a set $C_2 \subseteq C_1$ in
$\vec{\mc{U}}(t)$ such that the sequences
$\psi_{Q_0}(c)$ for $c \in C_2$ are all the same
(by using the pigeon hole principle to get the
sequences $\psi_{Q_0}(c)$ to have the same length,
we get them to be identical).
Define $Q' :=
\bigcup_{c \in C_2} (Q_0 | c) \subseteq Q_0$.
We have that $t$ is red in $Q'$.
From our definition of a red node,
since each $s_c$ is a red node of $Q'$,
it follows that for each $c \in C_2$
and each $c' \in \mbox{Succ}_{Q'}(s_c)$,
we have $\psi_{Q'}(c) = \psi_{Q'}(c')$.
We said earlier that $U_c'$ decides at least
the first $m + (k+1)$ elements of $\dot{A}$.
Thus, $Q'$ itself decides at least
the first $m + (k+1)$ values of $\dot{A}$.
This completes the definition of $\Phi$.
With $\Phi$ defined,
we will prove the lemma.
Let $\langle T_0, \vec{\mc{U}}_0 \rangle :=
\Phi(T, s, \emptyset, n, 0)$.
If $s$ is blue in $T_0$, we are done
by setting $W := T_0$.
If not, then
$(T_0 | s) = T_0$ decides at least the first
$n$ values of $\dot{A}$.
Next, let $\langle T_1, \vec{\mc{U}}_1 \rangle :=
\Phi(T_0, s, \vec{\mc{U}}_0, n, 1)$.
If $s$ is blue in $T_1$, we are done
by setting $W := T_1$.
If not, then
$(T_1 | s) = T_1$ decides at least the first
$n+1$ values of $\dot{A}$.
Next, let $\langle T_2, \vec{\mc{U}}_2 \rangle :=
\Phi(T_1, s, \vec{\mc{U}}_1, n, 2)$.
Etc.
We claim that this procedure eventually terminates.
If not, then we have produced the sequences
$T_0 \supseteq T_1 \supseteq T_2 \supseteq ...$
(which is probably \textit{not} a fusion sequence) and
$\vec{\mc{U}}_0 \subseteq \vec{\mc{U}}_1
\subseteq \vec{\mc{U}}_2 \subseteq ...$.
Let $T_\omega := \bigcap_{i < \omega} T_i$.
If we can show that $T_\omega$ is a perfect tree,
then we will have that $T_\omega$ decides at
least the first $k$ values of $\dot{A}$
for every $k < \omega$,
which implies $T_\omega \forces \dot{A} \in \check{V}$,
which is a contradiction.
To show that $T_\omega$ is a perfect tree,
first note that $s \in \dom(\vec{\mc{U}}_0)$
and $s$ has $\vec{\mc{U}}_0(s)$ many children
in each tree $T_i$.
Using the $\omega_1$-completeness
of $\vec{\mc{U}}_0(s)$,
$s$ has $\vec{\mc{U}}_0(s)$ many children
in $T_\omega$, so in particular it has $\kappa_n$ children
in $T_\omega$.
Now temporarily fix $c \in \mbox{Succ}_{T_\omega}(s)$.
Let $s_c \sqsupseteq c$
be the minimal splitting extension of $c$ in $T_1$.
We have that $s_c \in \dom(\vec{\mc{U}}_1)$ and
$s_c$ has $\vec{\mc{U}}_1(s_c)$ many children in $T_1$.
In fact, $s_c$ has that many children
in $T_i$ for every $i \ge 1$.
All sets in $\vec{\mc{U}}_1(s_c)$ have size $\kappa_{n+1}$.
Using the $\omega_1$-completeness
of $\vec{\mc{U}}_1(s_c)$,
$s_c$ has $\vec{\mc{U}}_1(s_c)$ many children in $T_\omega$,
so in particular it has $\kappa_{n+1}$ children.
We may continue this argument.
Here is the pattern:
For each $i < \omega$, recursively define
$S_0 := \{ s \}$ and
$S_{i+1} := $
the set of all minimal nodes $t$ in $T_{i+1}$
such that $|\mbox{Succ}_{T_{i+1}}(t)| = \kappa_{n+i+1}$
and $t$ extends some child in $T_\omega$
of some node $t' \in S_i$.
By induction we can show that for all $i < \omega$,
$S_i \subseteq T_\omega$ and
every node in $S_i$ has exactly $\kappa_{n+i}$
children in $T_\omega$
(and the minimal splitting extension in $T_\omega$ of each
such child is in $S_{i+1}$).
Thus, $T_\omega$ is perfect and the proof is complete.
\end{proof}
\begin{theorem}\label{thm.6.6}
Assume $\cf(\kappa) = \omega$.
Suppose the cardinals $\kappa_0 < \kappa_1 < ...$
are all measurable.
Fix a condition $T \in \mbb{P}$.
Let $\dot{A}$ be a name such that
$T \forces (\dot{A} : \omega \to \check{V})$ and
$T \forces (\dot{A} \not\in \check{V})$.
Let $\dot{G}$ be a name for the generic object.
Then $T \forces \dot{G} \in \check{V}(\dot{A})$.
\end{theorem}
\begin{proof}
It suffices to find a condition $T' \le T$
satisfying the hypotheses of
Lemma~\ref{bluecoding} (Blue Coding).
We will construct $T'$ by performing fusion.
Let $T_\emptyset \le T$ be such that the stem
$t_\emptyset \in T_\emptyset$ is $0$-splitting.
Apply Lemma~\ref{blueproduction}
(Blue Production)
to the tree $T_\emptyset$ and the node
$t_\emptyset \in T_\emptyset$
to get $T_\emptyset' \le T_\emptyset$.
Now $t_\emptyset$ is blue
and $0$-splitting in $T_\emptyset'$.
Hence, the unique $0$-splitting node of $T_\emptyset'$ is blue.
Define $T_0 := T_\emptyset'$,
the first element of our fusion sequence.
Now, fix any $c \in \mbox{Succ}(T_0, t_\emptyset)$.
Let $T_c \le (T_0 | c)$ be
such that there is a (unique) $1$-splitting node
$t_c \sqsupseteq c$ in $T_c$.
Apply Lemma~\ref{blueproduction}
(Blue Production)
to the tree $T_c$ and the node $t_c$
to get $T_c' \le T_c$.
Now $t_c$ is blue and $1$-splitting in $T_c'$.
Unfixing $c$, let us define
$T_1 := \bigcup \{ T_c' : c \in \mbox{Succ}(T_0, t_\emptyset) \}$.
We have $T_1 \le T_0$,
every child of $t_\emptyset$ is in $T_1$
(so in particular it is $0$-splitting),
and every $1$-splitting node of $T_1$ is blue.
We may continue like this to get the fusion sequence
$T_0 \supseteq T_1 \supseteq T_2 \supseteq ...$.
Define $T'$ to be the intersection of this sequence.
We have that $T'$ is in weak splitting normal form
(every node with $>1$ child is $n$-splitting for some $n$).
Since being blue is preserved when we pass to a stronger condition,
every splitting node of $T'$ is blue.
We may now apply
Lemma~\ref{bluecoding} (Blue Coding),
and the theorem is finished.
\end{proof}
\begin{cor}\label{cor.6.7}
The forcing $\mbb{P}$ does not add a minimal degree of constructibility.
\end{cor}
\begin{proof}
Let $\mbb{B}$ be the regular open completion of $\mbb{P}$.
In the previous section,
we showed that there is a complete
embedding of $\mc{P}(\omega)/\mbox{Fin}$
into $\mbb{B}$.
Let $G$ be generic for $\mbb{P}$ over $V$.
Let $H \in V[G]$ be generic for
$\mc{P}(\omega)/\mbox{Fin}$ over $V$.
Since $\mc{P}(\omega)/\mbox{Fin}$ is
countably complete, it does not add any
new $\omega$-sequences, so $G \not\in V[H]$.
On the other hand, we have $H \not\in V$.
Thus,
$V \subsetneq V[H] \subsetneq V[G]$,
so the forcing is not minimal.
\end{proof}
\section{Uncountable height counterexample and open problems}
\label{secuncheight}
To conclude the paper, we present an example of what can go wrong when one tries to generalize some of the results of the previous sections to singular cardinals $\kappa$ with uncountable cofinality.
Assuming $\cf(\kappa) > \omega$,
we will first construct a pre-perfect tree
$T \subseteq N$
such that $[T]$ has size $\kappa$.
\begin{lemma}
Let $g : \mbox{Ord} \to 2$ be a function.
Given an ordinal $\gamma$, let
$$S_{g \restriction \gamma}
:= \{ \alpha < \gamma : g(\alpha) = 1 \}.$$
Let $\Phi_{< \gamma}$ be the statement that
for each limit ordinal $\alpha < \gamma$,
$g$ equals $0$ for a final segment of $\alpha$.
Let $\Phi_{\gamma}$ be the analogous statement
but for all $\alpha \le \gamma$.
The following hold:
\begin{itemize}
\item[1)] If $\Phi_\gamma$, then $S_{g \restriction \gamma}$ is finite.
\item[2)] If $\Phi_{<\gamma}$ and $\cf(\gamma) \not= \omega$,
then $S_{g \restriction \gamma}$ is finite.
\item[3)] If $\Phi_{<\gamma}$, then $S_{g \restriction \gamma}$
is countable.
\end{itemize}
\end{lemma}
\begin{proof}
We can prove these by induction on $\gamma$.
If $\gamma = 0$, there is nothing to do.
Now assume that $\gamma$ is a successor ordinal.
If we assume $\Phi_{<\gamma}$, then
$\Phi_{\gamma-1}$ is true so
by the inductive hypothesis and the fact that
$$|S_{g \restriction \gamma}| \le
|S_{g \restriction (\gamma-1)}| + 1,$$
$S_{g \restriction \gamma}$ is finite.
Now assume that $\cf(\gamma) = \omega$.
Let $\langle \gamma_n : n \in \omega \rangle$
be a sequence cofinal in $\gamma$.
Note that
$$S_{g \restriction \gamma} =
\bigcup_{n \in \omega}
S_{g \restriction \gamma_n} =
S_{g \restriction \gamma_0} \cup
\bigcup_{n \in \omega} (
S_{g \restriction \gamma_{n+1}} -
S_{g \restriction \gamma_n} ).$$
Thus, if we assume $\Phi_{<\gamma}$,
then $\Phi_{\gamma_n}$ holds for each $n$,
so by the induction hypothesis
each $S_{g \restriction \gamma_n}$ is finite,
so $S_{g \restriction \gamma}$ is countable.
If additionally we assume $\Phi_{\gamma}$,
then all but finitely many of the
$S_{g \restriction \gamma_{n+1}} -
S_{g \restriction \gamma_n}$
are empty, so $S_{g \restriction \gamma}$ is finite.
Finally, assume $\cf(\gamma) > \omega$
and $\Phi_{< \gamma}$.
For each limit ordinal $\alpha < \gamma$,
let $f(\alpha) < \alpha$ be such that $g$ is $0$
from $f(\alpha)$ to $\alpha$.
By Fodor's Lemma, fix some $\beta < \gamma$
such that $f^{-1}(\beta) \subseteq \gamma$
is a stationary
subset of $\gamma$.
Since $f^{-1}(\beta)$
is cofinal in $\gamma$,
we see that $g$ is $0$ from
$\mu := \min f^{-1}(\beta)$ to $\gamma$.
Thus,
$S_{g \restriction \gamma} =
S_{g \restriction \mu}$.
The set $S_{g \restriction \mu}$
is finite by $\Phi_{\mu}$
and the induction hypothesis,
so we are done.
\end{proof}
We can now get the desired
counterexample:
\begin{counterexample}\label{counterex}
\label{uncountablecounterex}
Assume $\cf(\kappa) > \omega$.
There is a pre-perfect tree $T \subseteq N$
such that $[T]$ has size $\kappa$,
and hence $[T]$ is not perfect.
\end{counterexample}
\begin{proof}
We will define $T \subseteq N$.
Define the $\alpha$-th level of $T$ as follows:
\begin{itemize}
\item[1)] if $\alpha = 0$,
then the level consists of only
the root $\emptyset$.
\item[2)] If $\alpha = \beta + 1$,
then a node is in the $\alpha$-th
level of $T$ iff it is the successor
in $N$ of a node in the $\beta$-th
level of $T$.
\item[3)] If $\alpha$ is a limit ordinal,
then a node $t$ is in the $\alpha$-th level
of $T$ iff every proper initial segment of $t$
is in $T$ and $t(\beta) = 0$
for a final segment of $\beta$'s
less than $\alpha$.
\end{itemize}
First, let us verify that $T$ is non-stopping.
Consider any node $t \in T$.
Let $f \in X$ be the unique function
that extends $t$ such that
$f(\alpha) = 0$ for all $\alpha$ in
$\dom(f) - \dom(t)$.
We see that $f$ is a path through $T$.
We will now show that $[T]$ has size
$\le \kappa$.
Consider any $f \in [T]$.
Let $g : \cf(\kappa) \to 2$ be the function
$$g(\alpha) :=
\begin{cases}
0 & \mbox{if } f(\alpha) = 0, \\
1 & \mbox{otherwise.}
\end{cases}$$
By the definition of $T$ and the lemma
above, it must be that
$\{ \alpha < \cf(\kappa) : g(\alpha) = 1 \}$
is finite.
Recall that for each $\alpha < \cf(\kappa)$,
there are at most $\kappa_\alpha$
possible values for $f(\alpha)$.
Now, a simple computation shows that
there are at most $\kappa$ such paths $f$
associated to a given $g$
(in fact, there are exactly $\kappa$,
but this does not matter).
\end{proof}
This counterexample points to the need for some further requirements on the trees when $\kappa$ has uncountable cofinality.
Such obstacles will likely be overcome by assuming that splitting levels on branches are club, as in \cite{Kanamori80} and \cite{Brown/Groszek06}, as this will provide fusion for $\cf(\kappa)$ sequences of trees.
We ask which distributive laws hold and which fail for the Boolean completions of families of perfect tree forcings similar to those in this paper, for singular $\kappa$ of uncountable cofinality but requiring club splitting, or some other splitting requirement which ensures $\cf(\kappa)$-fusion.
More generally,
\begin{question}
Given a regular cardinal $\lambda$,
for which cardinals $\mu$ is there a complete Boolean algebra in which
for all $\nu<\mu$, the $(\lambda,\nu)$-d.l.\ holds but the $(\lambda, \mu)$-d.l.\ fails?
\end{question}
Similar questions remain open for three-parameter distributivity.
\bibliographystyle{amsplain}
\section{Introduction}
Dense 3D surface reconstruction is an important problem in computer vision
which remains challenging in general scenarios. Most existing multiview
reconstruction methods suffer from some common problems such as: {\em (i)} Holes in the
3D model corresponding to homogeneous/reflective/transparent image regions, {\em (ii)}
Oversmoothing of semantically-important details such as ridges, {\em (iii)} Lack of semantically meaningful surface features, organization and geometric detail.
\begin{figure}
\begin{center}
\includegraphics[width=0.325\linewidth]{figs/amsterdam_house_3d_drawing_01}
\includegraphics[width=0.325\linewidth]{figs/amsterdam_house_3d_drawing_02}
\includegraphics[width=0.325\linewidth]{figs/amsterdam_house_3d_drawing_03}
\includegraphics[width=0.325\linewidth]{figs/loftsurface_amsterdam_house_01.png}
\includegraphics[width=0.325\linewidth]{figs/loftsurface_amsterdam_house_02.png}
\includegraphics[width=0.325\linewidth]{figs/loftsurface_amsterdam_house_03.png}
\end{center}
\vspace{0cm}
\caption{
The proposed approach transforms a 3D curve drawing (top) obtained from a
fully calibrated set of 27 views, into a collection of dense surface patches
(bottom) obtained via lofting and occlusion reasoning.}
\vspace{0cm}
\label{fig:recon:lofting:results}
\end{figure}
In computer vision and graphics literature, there has been
scattered but persistent interest in using 3D curves to infer aspects of an
underlying shape~\cite{Maekawa:Ko:GM2002, Zorin:SIGGRAPH2006}, shape-related
features linked to shading~\cite{Bui:etal:CNG2015}, or closed 3D
curves~\cite{Zhuang:etal:ACMTOG2013}. For example, the approach in
Sadri and Singh~\cite{Sadri:Singh:ACMTOG2014} exploits
the \emph{flow complex}, a structure that captures both the topology and the
geometry of a set of 3D curves, to construct an intersection-free triangulated
3D shape. The approach in Pan~\etal~\cite{Pan:etal:ACMTOG2015} explores a similar concept with \emph{flow lines}, which are designed to encapsulate principal curvature lines on a
surface. As another example, the approach in Abbasinejad~\etal~\cite{Abbasinejad:etal:SOCG2012} identifies potential surface
patches delineated by a 3D curve network, breaking them into smaller,
planar patches to represent a complex surface. These methods are completely
automated and yield impressive results on a wide range of objects. However, they
require a complete and accurate input curve network, which is very
difficult to obtain in a bottom-up fashion from image data: there will always be
holes, missed curves, incorrect groupings, noise, outliers, and other
real-world imperfections. Furthermore, these methods are not general, but rather tailored for scenes with objects of relatively clean geometry. Thus, they are not suitable for more
general, large-scale complex scenes that the multiview stereo community
tackles on a regular basis.
We propose a novel and complementary dense 3D reconstruction approach based on
occlusion reasoning and a CAD method called \emph{lofting}, which is the process
of obtaining 3D surfaces through the interpolation of 3D structure curves.
Lofting has primarily been a drafting technique for generating streamlined
objects from curved line drawings that was initially used to design and build
ships and aircraft. More recently, lofting has become a common technique in
computer graphics and computer-aided design (CAD) applications where a
collection of surface curves are used to define the surface through
interpolation. Even though lofting is a very powerful tool, it appears
to be seldom used in multiview geometry applications.
Employing an existing curve-based reconstruction method, we start with a
calibrated image sequence to build a 3D drawing of the scene in the form of a 3D
graph, where graph links contain curve geometries and graph nodes contain
junctions where curve endpoints meet. We propose to use the 3D drawing of a
scene as a scaffold on which dense surface patches can be placed; see
Figure~\ref{fig:recon:lofting:results}.
Our approach relies on the availability of a ``3D drawing'' of the surface, a
graph of 3D curve fragments reconstructed from calibrated multiview observations
of an object \cite{Usumezbas:Fabbri:Kimia:ECCV16}. Observe that such a 3D
drawing acts as a scaffold for the surface of the object in that the drawing
breaks the object surfaces into 3D surface patches, which are glued on and
supported by the 3D drawing scaffold. Our approach then is based on selecting
some 3D curve fragments from the 3D drawing, forming surface hypotheses from
these curve fragments, and using occlusion reasoning to discard inconsistent
hypotheses.
Aside from yielding a useful and semantically-meaningful intermediate
representation, reconstructing surfaces by going through curved structures
closely replicates the human act of drawing: As in a progressive drawing, the
basis is independent of illumination conditions and other details. For instance,
photometry/shading/reflectance can be incorporated later on either as hatchings
or progressively refined as fine shading; multiple renderings can be performed
from the same basis. Even challenging materials such as the ocean surface can be
rendered on top of a curve basis. This approach also has the advantage of
scalability, since it allows for a very large 3D scene to be selectively and
progressively reconstructed.
This paper is organized as follows: Section~\ref{sec:curve_drawing} reviews the
state-of-the-art in generating a 3D drawing of a scene observed under calibrated
views. Section~\ref{sec:lofting} reviews lofting and describes how a surface is
generated from a few curve fragments lying on the surface.
Section~\ref{sec:auto-lofting} describes how 3D surface patch hypotheses are
generated from a 3D drawing, and how occlusion consistency is used to rule out
non-veridical hypotheses. Section~\ref{sec:reorg} deals with several
technical challenges, which require a regularization of the 3D drawing so that
surface patches can be robustly inferred. Section~\ref{sec:lofting-results}
presents experimental results, a comparison with
PMVS~\cite{Furukawa:Ponce:CVPR2007,Furukawa:Ponce:PAMI2010}, and quantification
of reconstruction accuracy.
\section{From Image Curves to a 3D Curve Drawing}
\label{sec:curve_drawing}
Our multiview stereo method is based on the idea of using 3D curvilinear
structures as boundary conditions to hypothesize the simplest 3D surfaces that
would be explained by these boundaries. The 3D curvilinear structure that is
needed is obtained by correlating image curves in calibrated multiview imagery
to reconstruct 3D curve fragments, which are organized as a graph and referred
to as ``3D Curve Drawing'' \cite{Usumezbas:Fabbri:Kimia:ECCV16}. Since this
paper requires a 3D curve drawing to be available, we summarize the work of
\cite{Usumezbas:Fabbri:Kimia:ECCV16} on which we rely.
The 3D curve drawing is built on a series of steps. First, the image is
pre-processed to obtain edges using robust, third-order operators which give
highly-accurate edge information \cite{Tamrakar:Kimia:ICCV07}. Second, a
geometric linker groups edges into curves \cite{Yuliang:etal:CVPR14}, which
reduces grouping errors and the number of outliers. This results in
image curve fragments $\gamma_i^v$, $i=1,\dots,M^v$, for each view $v=1,\dots,N$.
Third, pairs of curves $(\gamma_{i_1}^{v_1}, \gamma_{i_2}^{v_2})$ from two
``hypothesis views'' $v_1$ and $v_2$, which have significant epipolar overlap,
are used to generate putative candidate reconstructions $\Gamma_k, k=1,\dots,K$.
These candidate reconstructed curves are gauged against image evidence on other
projected views called ``confirmation views'' and if there is sufficient support
for a 3D curve candidate, it is confirmed and otherwise rejected. This results
in a set of unorganized 3D curve fragments called the ``3D Curve Sketch''.
This representation indeed resembles a sketch. 3D curve fragments in this sketch
are often redundant since they came from multiple hypotheses, are often
overfragmented due to partial epipolar overlap, feature a nontrivial level of
clutter, and most importantly, are unorganized in that the topological
relationship of 3D curve fragments is not available. The recent work of
\cite{Usumezbas:Fabbri:Kimia:ECCV16} deals with these issues, and constructs a
graph of 3D curve fragments referred to as a 3D drawing of the scene.
Our approach requires 3D curve fragments and their topological relationships. To
the best of our knowledge, the approach in \cite{Usumezbas:Fabbri:Kimia:ECCV16}
is the state of the art in curve-based multiview stereo. However, any other
method that can give 3D curve fragments organized in a topological graph can be
used by our approach as well.
\section{Bringing Lofting Into Multiview Stereo}\label{sec:lofting}
\begin{figure}
\begin{center}
\includegraphics[width=0.49\linewidth]{figs/bsurfaces-lofting-curves.png}
\includegraphics[width=0.49\linewidth]{figs/bsurfaces-lofted.png}
\end{center}
\vspace{0cm}
\caption{From open and closed curves (left), lofting produces smooth surfaces
(right).}
\vspace{0cm}
\label{fig:lofting}
\end{figure}
Lofting is a graphics technique for shape inference from a set of 3D curves, a
term with roots in shipbuilding to describe the molding of a hull from
curves~\cite{Bole:STR2015}. Designers often use such intermediate, curve-based
representations (sketches, graphs, drawings) to outline 3D shape, as they
compactly capture rich 3D information and are easy to customize. Through
lofting, these 3D curves are used to interactively model smooth surfaces,
Figure~\ref{fig:lofting}.
Implementations of lofting are commonplace in interactive CAD~\cite{Blender,
Wu:etal:1977, Nealen:etal:SIGGRAPH2007, Morigi:Rucci:SBIM2011,
Grimm:Joshi:SBIM2012, Abbasinejad:etal:SOCG2012, Das:etal:SBIM2005,
Nam:Chai:CNG2012}, and
applications~\cite{Lin:etal:CII1997,Beccari:etal:CAD2010,Tustison:etal:CVPR2004}.
Lofting has not yet spread to 3D computer vision, where fully-automated
image-based modeling is the norm. This work leverages lofting to build a
fully-automated, dense multiview stereo reconstruction pipeline.
Given 3D curves $\Gamma_1, \Gamma_2, \dots \Gamma_n$ forming the partial boundary
of a surface, lofting produces a smooth surface passing through them which is
sought to be `simple': smooth, avoiding holes and degeneracies such as
self-intersections. Earlier approaches formulated this as surface deformation
with parameters estimated to fit the prior into a 3D curve
outline~\cite{Chiyokura:Fumihiko:SIGGRAPH1983, Kraevoy:etal:SBIM2009}.
Approaches using functional optimization~\cite{Morigi:Rucci:SBIM2011,
Nealen:etal:SIGGRAPH2007,Sorkine:Cohenor:SMI2004,Welch:Witkin:SIGGRAPH1994,Bobenko:Schroder:SGP2005,Moreton:Sequin:SIGGRAPH1992}
employ generic objectives, such as least squares and integral of squared
principal curvatures, and the result depends on this choice, leading to
overfitting or oversmoothing. These approaches cannot easily handle complex
shapes with many self occlusions~\cite{Lin:etal:CII1997}.
Other algorithms include those based on B-splines~\cite{Woodward:CAD1988,Park:etal:IJAMT2004}.
We have chosen lofting based on subdivision surfaces, a well-known graphics
technique that divides the faces of a coarse input mesh via a recursive sequence
of transforms or subdivision schemes, yielding smooth high-poly meshes,
Fig.~\ref{fig:subdivision}. Subdivision is widely used in a number of graphics
problems~\cite{Peters:Reif:ACMTOG1997,DeRose:etal:SIGGRAPH1998}, such as surface
fitting~\cite{Suzuki:etal:CGA1999,Takeuchi:etal:2000,Ma:etal:CAD2004},
reconstruction~\cite{Hoppe:etal:SIGGRAPH1994, Maekawa:Ko:GM2002,
Zorin:SIGGRAPH2006}, and lofting itself~\cite{Nasri:CAGD1997, Nasri:CAGD2000,
Nasri:CGF2003,Nasri:etal:SMI2001,Nasri:Abbas:GMP2002,Catalano:subdivision:book}.
Combined subdivision schemes~\cite{Levin:SIGGRAPH1999,Levin:CAGD1999} translate
conditions on the limit surface to conditions on the scheme itself, and allow
subdivision to be adjusted near the curve network and boundary conditions beyond
subdivision or spline curves. Subdivision surfaces provide a simple standard
framework, with more powerful schemes compared to other techniques; meshes with
complex constraints at corners can be handled with greater
ease~\cite{Schaefer:etal:SGP2004}.
\begin{figure}
\begin{center}
\includegraphics[width=0.8\linewidth]{figs/subdivision.png}
\end{center}
\vspace{0cm}
\caption{Application of subdivision resulting in
a high-poly surface (manually marked hard edges in
red)~\cite{Lavoue:etal:techreport}.
\vspace{0cm}
} \label{fig:subdivision}
\end{figure}
We leverage~\cite{Schaefer:etal:SGP2004}, which takes open 3D polygonal lines
terminating in a set of corners -- as in our 3D drawing, but interactively
generated. We have augmented it to automatically reorganize the curve network
prior to lofting, and with additional heuristics to avoid degeneracies. The
result is a lofting approach that can: i) take any number of boundary curves
partially or completely covering the boundary of the desired surface, and ii)
handle topological inconsistencies, self-intersections, discontinuities and
other geometric artifacts. A brief description of our lofting stage follows.
\noindent\textbf{Skinning:} quadrangulates the input curves to construct a quad
topology base mesh without the final
geometry~\cite{Schaefer:etal:SGP2004,Piegl:Tiller:CAD1996,
Kaklis:Ginnis:CAGD1996, Nasri:etal:CGA2003}. Skinning does not produce accurate
shape approximation, but mainly avoids vertices lacking curvature
continuity~\cite{Loop:SGP2004}. Given a closed 3D curve $\Gamma =
(s_1,\dots,s_n)$, a chain is a subsequence $\Gamma_i^{i+k} =
(s_i,\dots,s_{i+k})$, $i = 1, \dots, n$ (indices mod $n$, as $\Gamma$ is closed). The topology of the base mesh
$\lambda$ is constructed by a sequence of chain advances on $\Gamma$: given
$\Gamma_i^{i+k}$, this adds a layer of $k$ quads to $\lambda$ bounded below by
$\Gamma_i^{i+k}$ and above by a new chain $\bar \Gamma_j^{j+k} =
(s_j,\dots,s_{j+k})$ on the interior of the resulting patch $\lambda$. $\Gamma$
is replaced by $\tilde \Gamma = \Gamma_1^{i-1} \cup \bar \Gamma_j^{j+k} \cup
\Gamma_{i+k+1}^n$.
Depending on the configuration of special interior vertices, different types of
advances apply~\cite{Schaefer:etal:SGP2004}, Fig.~\ref{fig:chain:advance}.
\begin{figure}
\begin{center}
\includegraphics[height=1.3cm]{figs/chain-1.png}
\includegraphics[height=1.3cm]{figs/chain-2.png}
\end{center}
\vspace{0cm}
\caption{Quadrangulation in lofting: depending on the configuration of
special interior vertices on a chain, one of these
edits is applied to obtain a base
mesh topology~\cite{Schaefer:etal:SGP2004}.}
\vspace{0cm}
\label{fig:chain:advance}
\end{figure}
\textbf{Fairing} computes the positions of the vertices in $\lambda$ by
minimizing ``fairness'' energy, a thin-plate
functional~\cite{Schaefer:etal:SGP2004}.
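As a toy illustration of this step (our own sketch, not the thin-plate solver of~\cite{Schaefer:etal:SGP2004}; the vertex/neighbor data layout here is an assumption), iterative umbrella-operator smoothing with the boundary curves pinned minimizes a simple fairness energy:

```python
import numpy as np

def laplacian_fair(vertices, neighbors, boundary, iterations=100, step=0.5):
    """Toy fairing: every free vertex repeatedly moves toward the centroid
    of its neighbors (the 'umbrella' operator), while boundary vertices
    stay pinned so the result still interpolates the input curves."""
    V = np.array(vertices, dtype=float)
    fixed = set(boundary)
    for _ in range(iterations):
        newV = V.copy()
        for i, nbrs in enumerate(neighbors):
            if i in fixed or not nbrs:
                continue
            # move a fraction `step` of the way toward the neighbor centroid
            newV[i] = V[i] + step * (V[nbrs].mean(axis=0) - V[i])
        V = newV
    return V
```

With the endpoints of a polyline pinned, the interior converges to the straight (harmonic) interpolant; on a mesh, the analogous fixed points are discretely harmonic surfaces.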
\textbf{Subdivision} is then applied with a modified version of Catmull-Clark
schemes~\cite{Schaefer:etal:SGP2004}, yielding a fine mesh, see
Figure~\ref{fig:subdivision}.
\section{Automated Multiview Reconstruction Using Lofting}
\label{sec:auto-lofting}
In the previous two sections, we described: {\em (i)} The concept of a {\em 3D
curve drawing}, a graph of 3D contour fragments and a method for deriving it
from a set of calibrated multiview imagery, and {\em (ii)} the concept of
lofting which reconstructs 3D surface meshes bounded by a set of given contour
fragments. We now describe how {\em pairs of curve fragments} selected from the
3D curve drawing give rise to 3D surface hypotheses. These hypotheses are then
ruled out when they predict occlusions which are not consistent with the input
data. The remaining hypotheses yield a set of occlusion-consistent surface
patches. In the following, we first describe the process of hypothesis formation
and then testing of formed hypotheses for occlusion consistency.
\begin{figure}
\begin{center}
\includegraphics[width=0.45\linewidth]{figs/shape-schematic.png}
\end{center}
\vspace{0cm}
\caption{A schematic of a simple shape where a surface patch (green) is
represented by a pair of curves (red); in the case of closed curves, a pair is
not necessary.
}
\vspace{0cm}
\label{fig:shape:schematic}
\end{figure}
\noindent\textbf{Forming Surface Patch Hypotheses:} Ideally, any subset of
curve fragments should be able to form surface hypotheses, but this is clearly
intractable; even if curve fragments are long, noiseless and salient (a critical
factor as we shall see in Section~\ref{sec:reorg}), they number on the order of
100 curves or so. Note that surface patches that arise from closed curves are a
special case; these can be identified and processed a priori. The remaining
surface patches involve at least two curve fragments but typically more, say
around 3-5. Then, pairs of curve fragments can be used as entry-level
hypotheses, Figure~\ref{fig:shape:schematic}.
The pool of curve fragments from which pairs are selected is restricted to those
with a minimal length constraint, $L > \tau_{length}$. This threshold is learned
from data and is typically a few centimeters for our data. The distance
between two 3D curves is defined as the average point-to-curve distance for all
the samples on both curves. The typical 3D curve proximity threshold
$\tau_{\alpha}$, which is also learned from data, is around 15-20 cm.
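A minimal sample-based sketch of this proximity test follows (the array representation of curves as lists of 3D samples is our own assumption; the thresholds themselves are learned, as stated above):

```python
import numpy as np

def point_to_curve(p, curve):
    # distance from point p to the nearest sample on a sampled curve
    return np.min(np.linalg.norm(curve - p, axis=1))

def curve_pair_distance(c1, c2):
    """Average point-to-curve distance over the samples of both curves,
    as used to test pair proximity against tau_alpha."""
    d12 = [point_to_curve(p, c2) for p in c1]
    d21 = [point_to_curve(p, c1) for p in c2]
    return float(np.mean(d12 + d21))
```

A pair would then be admitted as a surface hypothesis only if both curves pass the length test and their pair distance falls below the proximity threshold.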
Third, in addition to length and pair proximity, curvature of the reconstructed
surface is a cue to whether it is veridical. This is because object surfaces are
typically not as convoluted as surfaces arising from unrelated cues. We use
average Gaussian curvature, \ie Gaussian curvature at every point on the surface
averaged over all surface points, and a threshold $\tau_G$ which is also
learned. It should be noted that every curve pair generates two surface
hypotheses: each endpoint on a given curve can pair with either of the two
endpoints on the other curve. Of the two, the surface hypothesis with the lower
average Gaussian curvature is selected, provided that curvature is below
$\tau_G$, Figure~\ref{fig:loft:options}. See Figure~\ref{fig:lofting:samples}
for a collection of sample surface hypotheses obtained this way.
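The pair-selection criteria above (minimal length and average point-to-curve proximity) can be sketched as follows. This is an illustrative implementation, not the paper's code: the threshold values are placeholders standing in for the learned ones, and curves are assumed to be given as arrays of 3D samples.

```python
import numpy as np

def avg_pair_distance(c1, c2):
    """Average point-to-curve distance over the samples on both curves.
    c1, c2: (N, 3) arrays of 3D curve samples."""
    def point_to_curve(pts, curve):
        d = np.linalg.norm(pts[:, None, :] - curve[None, :, :], axis=2)
        return d.min(axis=1)            # nearest-sample distance per point
    return np.concatenate([point_to_curve(c1, c2),
                           point_to_curve(c2, c1)]).mean()

def candidate_pairs(curves, lengths, tau_length=0.05, tau_alpha=0.18):
    """Entry-level hypotheses: pairs of sufficiently long, nearby curves.
    tau_length and tau_alpha are placeholders (in meters) for the learned
    thresholds of a few centimeters and 15-20 cm, respectively."""
    keep = [i for i, L in enumerate(lengths) if L > tau_length]
    pairs = []
    for a in range(len(keep)):
        for b in range(a + 1, len(keep)):
            i, j = keep[a], keep[b]
            if avg_pair_distance(curves[i], curves[j]) < tau_alpha:
                pairs.append((i, j))
    return pairs
```

The surviving pairs would then be ranked by the average Gaussian curvature of their lofted surfaces, as described above.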
\begin{figure}
\begin{center}
\includegraphics[width=0.485\linewidth]{figs/loft-option-1.png}
\includegraphics[width=0.485\linewidth]{figs/loft-option-2.png}
\includegraphics[width=0.485\linewidth]{figs/loft-option-3.png}
\includegraphics[width=0.485\linewidth]{figs/loft-option-4.png}
\end{center}
\vspace{0cm}
\caption{There is an inherent ambiguity in reconstructing a surface from two
curve fragments arising from which endpoints are paired (top row vs. bottom
row).
When two curve fragments do belong to a veridical surface, one of the two
reconstructions generally has much lower average Gaussian curvature than the
other, and this is a cue as to which one is veridical. When the pairing of
curve fragments is incorrect in that no surface exists between them, both
reconstructions have high average Gaussian curvature, a cue for removing outliers.
}
\label{fig:loft:options}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=0.325\linewidth]{figs/loft-option-3.png}
\includegraphics[width=0.325\linewidth]{figs/loft-example-2.png}
\includegraphics[width=0.325\linewidth]{figs/loft-example-3.png}
\includegraphics[width=0.325\linewidth]{figs/loft-example-4.png}
\includegraphics[width=0.325\linewidth]{figs/loft-example-5.png}
\includegraphics[width=0.325\linewidth]{figs/loft-example-6.png}
\includegraphics[width=0.325\linewidth]{figs/loft-example-7.png}
\includegraphics[width=0.325\linewidth]{figs/loft-example-8.png}
\includegraphics[width=0.325\linewidth]{figs/loft-example-9.png}
\end{center}
\vspace{0cm}
\caption{Some example loft surfaces of various geometries that our
reconstruction algorithm generates.
}
\vspace{0cm}
\label{fig:lofting:samples}
\end{figure}
Note that an alternate method for forming pairs of 3D curve fragments is to use
the topology of 3D curve fragments as projected onto 2D views. The topology of
2D image curves is derived from the medial axis or Delaunay Triangulation to
determine the neighboring curve fragments for any given curve. The topology of
projected 3D curve fragments then induces a neighborhood relationship among 3D
curve fragments: two 3D curve fragments are neighbors in 3D if their
corresponding 2D image curves are neighbors in at least one view. This improves
performance in two ways: {\em (i)} veridical pairings whose distance exceeds the
proximity threshold are restored to the pool of candidate pairs; {\em (ii)}
non-veridical curve pairs which are not neighbors are correctly discarded. The
latter is a significant factor in areas where 3D curves are dense relative to
the proximity threshold, which would otherwise generate numerous non-veridical
curve pairs.
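The view-induced neighborhood relation can be sketched as follows; this is an illustrative version using a Delaunay triangulation of curve sample points (one of the two constructions mentioned above), with hypothetical input conventions.

```python
import numpy as np
from scipy.spatial import Delaunay

def view_neighbors(curves_2d):
    """Neighbor relation among projected 2D curves in a single view, from a
    Delaunay triangulation of their sample points (a stand-in for the
    medial-axis construction; curves_2d is a list of (N_i, 2) arrays)."""
    pts = np.vstack(curves_2d)
    owner = np.concatenate([np.full(len(c), i)
                            for i, c in enumerate(curves_2d)])
    neighbors = set()
    for simplex in Delaunay(pts).simplices:
        for a in simplex:
            for b in simplex:
                ia, ib = int(owner[a]), int(owner[b])
                if ia != ib:                 # edge links two distinct curves
                    neighbors.add((min(ia, ib), max(ia, ib)))
    return neighbors

def neighbors_3d(per_view_neighbors):
    """Two 3D curves are neighbors if their projections are neighbors in at
    least one view."""
    out = set()
    for nb in per_view_neighbors:
        out |= nb
    return out
```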
\noindent{\textbf{Hypothesis Viability Using Occlusion Consistency:}} The most
important cue in probing the viability of a 3D surface patch hypothesis is
whether it is consistent with respect to the occlusions it predicts (it is
assumed that surfaces are opaque). If an opaque 3D surface patch is veridical,
then all 3D curve structures that are occluded by it in a given projected image
must be invisible. For example, a surface hypothesis may occlude a portion of a
3D curve. Image evidence supporting the occluded portion is grounds for
invalidating the surface hypothesis, Figure~\ref{fig:occlusion:schematic}.
\begin{figure}
\begin{center}
\includegraphics[width=\linewidth]{figs/occlusion-schematic.png}
\end{center}
\vspace{0cm}
\caption{A 3D surface patch $S$ occludes all 3D curve fragments that lie
behind it. Thus, the 3D curve fragments between $\Gamma_1$ and $\Gamma_4$ are
partially obstructed so that only portions between $(\Gamma_1, \Gamma_2)$ and
$(\Gamma_3, \Gamma_4)$ are visible as $(\gamma_1, \gamma_2)$ and $(\gamma_3,
\gamma_4)$ in the image. The projections of $(\Gamma_2, \Gamma_3)$ should have
no edge evidence in the image. On the other hand, the 3D curve segment
$(\Gamma_5, \Gamma_6)$ is fully unoccluded, so edge evidence for it is
expected. The presence of edge evidence in the portion $(\gamma_2, \gamma_3)$
is grounds for invalidating the 3D surface hypothesis $S$.
}
\vspace{0cm}
\label{fig:occlusion:schematic}
\end{figure}
The technical approach to testing occlusion is based on ray tracing
\cite{Glassner:raytracing:book}: A ray is cast from the camera center to
each point on a 3D curve fragment belonging to the 3D curve drawing and the
visibility of the point is tested against each surface hypothesis. Specifically,
let $\{\Pi_1, \dots, \Pi_N\}$ denote the set of hypothesized surface patches.
Let the 3D curve drawing have curve fragments $\{\Gamma_1, \dots, \Gamma_K\}$,
each having image curve projections onto view $l$, $\gamma_k^l (s)$, where $s$
is an arc-length parameter, $s \in [0,L_k^l]$, and $L_k^l$ is the total length
of the projected curve. Let the portion of the 3D curve that is occluded by the
surface patch $\Pi_n$ be denoted by the interval $(a_{k,n}^l, b_{k,n}^l)$. Then,
the evidence against surface hypothesis $\Pi_n$ provided by curve $\Gamma_k$ from
view $l$, $E_{k,n}^l$, is the edge support for the invisible portion. This
evidence is the integral of the edge support at sample point $s$, $\phi(\gamma_k^l
(s))$, which is simply the number of image edges that have matching locations
and orientations to the curve $\gamma_k^l (s)$ at sample point $s$:
\begin{equation}\label{eq:lofting:support}
E_{k,n}^l = \int_{a_{k,n}^l}^{b_{k,n}^l} \phi(\gamma_k^l (s))\,ds
\end{equation}
This evidence is then subjected to a significance threshold $\tau_E$; if
significant, the evidence invalidates the hypothesis. On the other hand, if the
evidence against the hypothesis for all the curves that should be occluded is
indeed insignificant, \ie, $E_{k,n}^l < \tau_E, \forall k$, the lack of evidence
in fact provides support for the surface hypothesis. This is to be distinguished
from surface hypotheses that are not occluding any curves. The situation where
$\Pi_n$ occludes $\Gamma_k$ and image evidence shows occlusion lends more
evidence to $\Pi_n$ than the situation where $\Pi_n$ does not occlude any
curves.
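In discrete form, the evidence of Eq.~(\ref{eq:lofting:support}) amounts to counting matching image edges along the samples predicted to be invisible. The sketch below is illustrative: the tolerances, sampling and input names (edge positions and orientations) are assumptions, not the learned values.

```python
import numpy as np

def edge_support(curve_2d, edges_xy, edges_theta, curve_theta,
                 loc_tol=2.0, ang_tol=0.35):
    """phi(gamma(s)): number of image edges matching a curve sample in both
    location (pixels) and orientation (radians, modulo pi)."""
    phi = np.zeros(len(curve_2d))
    for k, (p, th) in enumerate(zip(curve_2d, curve_theta)):
        d = np.linalg.norm(edges_xy - p, axis=1)
        # orientation difference folded into [0, pi/2]
        dth = np.abs((edges_theta - th + np.pi / 2) % np.pi - np.pi / 2)
        phi[k] = np.sum((d < loc_tol) & (dth < ang_tol))
    return phi

def occlusion_evidence(phi, ds, occluded_mask):
    """Discrete E_{k,n}^l: integral of phi over the predicted-occluded
    portion of the projected curve (uniform sample spacing ds)."""
    return float(np.sum(phi[occluded_mask] * ds))
```

Evidence above the significance threshold on the predicted-occluded portion would invalidate the hypothesis; a (near-)zero value supports it.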
We now assume that all surface patches occlude at least one curve in at least
one view; note that for polyhedral shapes, frontal patches occlude the contours
of patches on the back, so this is not a stringent assumption. In fact, probing
this assumption on both the Amsterdam House and Barcelona Pavilion datasets
(described in Section~\ref{sec:lofting-results}) shows that this is
the case for more than 90\% of the surface hypotheses generated. This assumption
implies that each surface hypothesis needs to be confirmed at least once against
an occlusion hypothesis, \ie, $\forall n, \exists l, \exists k, \text{such that
} E_{k,n}^l < \tau_E$.
The above process probes the implications of a surface patch in relation to the
3D curve drawing. When introducing a multitude of surface patches, however, the
issue of occlusion between two surface hypotheses arises. It is possible that
one surface hypothesis is fully occluded by all other surfaces. Such a surface
is then not visible in any view and is discarded.
\begin{figure*}
\begin{center}
\includegraphics[width=\linewidth]{figs/lofting-pipeline.png}
\end{center}
\caption{A visual illustration of our dense surface reconstruction pipeline.
}
\label{fig:lofting:pipeline}
\end{figure*}
\noindent{\textbf{Redundant Hypotheses:}} Since surface hypotheses are generated
by pairs of 3D curve fragments, if a ground truth surface consists of multiple
curve fragments, say a rectangular patch consisting of four curve fragments,
then the same surface will likely be represented by a number of curve fragment
pairs, six possible pairs in the case of a rectangular patch.
These redundant representations are detected in a post-processing stage and
consolidated. When a large portion of a surface hypothesis (80\% in our system)
is subsumed by another surface, \ie, 80\% of the points on it are closer than a
proximity threshold to another surface, then this surface is discarded as a
redundant hypothesis. A more principled approach is to merge two overlapping
surfaces by forming curve triplet hypotheses: When two curve pairs have a curve
fragment in common and their surface hypotheses overlap, as described above, the
lofting approach is applied to the curve triplet and the resulting surface
replaces the pair of surface hypotheses. And, of course, a curve triplet and a
curve pair with a common curve fragment and overlapping surfaces result in curve
quadruplet hypotheses, and so on as needed. This growth of surface hypotheses
yields more accurate and less redundant surface patches, but results from this
process are not ready for inclusion in this publication.
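The subsumption test used to consolidate redundant hypotheses can be sketched with point samples. The 80\% fraction is the one stated above; the proximity threshold and sampling convention are illustrative assumptions.

```python
import numpy as np

def is_subsumed(samples_a, samples_b, tau=0.05, frac=0.8):
    """Surface A is redundant if at least `frac` of its samples lie within
    distance `tau` of surface B (point-sample proxy for the subsumption
    test; samples_a, samples_b are (N, 3) arrays)."""
    d = np.linalg.norm(samples_a[:, None, :] - samples_b[None, :, :], axis=2)
    near = d.min(axis=1) < tau          # nearest-sample distance per point
    return bool(near.mean() >= frac)
```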
\begin{figure}
\begin{center}
\includegraphics[width=0.485\linewidth]{figs/occluded_curves_view00_surface01scale.jpg}\hspace{1.3mm}\includegraphics[width=0.485\linewidth]{figs/edge_support_view00_surface01scale.jpg}\\[1mm]
\includegraphics[width=0.485\linewidth]{figs/occluded_curves_view05_surface02scale.jpg}\hspace{1.3mm}\includegraphics[width=0.485\linewidth]{figs/edge_support_view05_surface02scale.jpg}
\end{center}
\vspace{0cm}
\caption{Examples of surface hypotheses being confirmed by the confirmation
views shown here.
Left column: Projected surface hypothesis is shown in green, projected curve
drawing is shown in blue and occluded segments are shown in purple. Right
column: Same surface and occluded segments are shown with image edges in
blue. Notice the lack of any edge presence whatsoever around most of the
purple segments, which is clear indication of occlusion consistency between
the images and the hypothesis surface.
}
\vspace{0cm}
\label{fig:confirm}
\end{figure}
Figure~\ref{fig:lofting:pipeline} is a visual illustration of our entire surface
reconstruction approach. Figure~\ref{fig:confirm} demonstrates that our
algorithm is very good at correlating image edges with 3D curve structures,
accurately reasoning about occlusion and confirming an overwhelming majority of
correct surfaces, as well as rejecting almost all of the incorrect hypotheses,
Figure~\ref{fig:false}. It should be noted that many surface hypotheses do not
contain any portion of the curve drawing behind them from {\em any} given view.
These hypotheses cannot be confirmed or denied, and, depending on the robustness
of the hypothesis generation algorithm, they can be included in or discarded
from the output as needed. In addition, many existing multiview stereo methods
can be plugged into our system at the level of curve pairing and used as
alternative ways to provide initial seeds for our surface hypotheses. As
mentioned earlier, our lofting algorithm scales well to a large number of input
3D curves, which are provided either simultaneously or sequentially.
\begin{figure}
\begin{center}
\includegraphics[width=0.485\linewidth]{figs/occluded_curves_view00_surface14.jpg}\hspace{1.3mm}\includegraphics[width=0.485\linewidth]{figs/edge_support_view00_surface14.jpg}\\[1mm]
\includegraphics[width=0.485\linewidth]{figs/occluded_curves_view25_surface14.jpg}\hspace{1.3mm}\includegraphics[width=0.485\linewidth]{figs/edge_support_view25_surface14.jpg}
\end{center}
\vspace{0cm}
\caption{An example outlier surface hypothesis ruled out by detected edge
structures. Left column: projected surface hypothesis is shown in green,
projected curve drawing is shown in blue and occluded segments are shown in
purple. Right column: Same surface and occluded segments are shown with
image edges in blue. Notice how most of the purple segments are nearly hidden
under edges that match them in both location and orientation, invalidating the hypothesis.
}
\vspace{0cm}
\label{fig:false}
\end{figure}
\section{Reorganization of Input Curve Graph Using Differential Geometric Cues}
\label{sec:reorg}
Four important technical issues arise in the application of lofting to
reconstruct surface patches from 3D drawings.
\noindent{\textbf{\underline{Problem 1:} Lofting sensitivity to overgrouping:}}
Lofting is highly sensitive to overgrouping of edges into curves.
If some parts of a curve belong to a veridical surface patch but another part
does not, then the lofting results experience significant and irreversible
geometric errors, \eg, as in Figure~\ref{fig:lofting:sensitivities}a where two
curve fragments $\mathcal{C}_1$ and $\mathcal{C}_2$ belong to a side of the
house and correctly hypothesize a surface patch through lofting. However, if
$\mathcal{C}_2$ is grouped with an adjacent curve fragment $\mathcal{C}_3$
belonging to a face of the house adjacent to the one $\mathcal{C}_2$ lies on
(let $\mathcal{C}_4$ denote $\mathcal{C}_2 \cup \mathcal{C}_3$), then the
lofting results based on $(\mathcal{C}_1$, $\mathcal{C}_4)$ do not produce a
meaningful surface patch. The core of the problem is that the curve
$\mathcal{C}_2$ is shared by two surface hypotheses, but if grouped with
$\mathcal{C}_3$, it can no longer represent the frontal surface hypothesis
created by $\mathcal{C}_1$ and $\mathcal{C}_2$. This transition in the ability
to represent multiple surface hypotheses happens at junctions. Thus,
breaking all curves at corners, \ie high-curvature points, should remedy this
problem, Figure~\ref{fig:reorg:results}. Unfortunately, it is difficult to
estimate curvature reliably on noisy curves, thus requiring a smoothing
algorithm before the curve can be broken at high-curvature points. This
smoothing algorithm is described below in the context of curve noise.
\noindent{\textbf{\underline{Problem 2:} Lofting sensitivity to curve noise: }}
Curve fragments of the 3D drawing can have excessive noise, exhibiting
loop-like structures and local perturbations,
Figure~\ref{fig:lofting:sensitivities}b. These degeneracies in the local form of
a curve fragment often result in failures in the lofting algorithm to produce a
surface hypothesis, or result in surfaces featuring geometric degeneracies.
There are a number of smoothing methods; we use a relatively recent robust
algorithm based on B-splines~\cite{Garcia:CSDA2010, Garcia:EIF2011}, which
balances a data fidelity term against a smoothness term. The ratio of these two
terms determines the degree of smoothing. The advantage of this method is that
the polyline representation of the curve is maintained after smoothing.
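As a simplified, self-contained stand-in for that smoother, one can use penalized least squares with a second-difference penalty, where a single parameter plays the role of the fidelity/smoothness ratio (the actual method of \cite{Garcia:CSDA2010} is a robust, DCT-based variant):

```python
import numpy as np

def smooth_polyline(points, lam=1.0):
    """Penalized least-squares smoothing of a polyline: per coordinate,
    minimize ||y - x||^2 + lam * ||D2 x||^2, with D2 the second-difference
    operator; lam sets the fidelity/smoothness trade-off.
    points: (N, d) array; the polyline representation is preserved."""
    n = len(points)
    D2 = np.diff(np.eye(n), n=2, axis=0)        # (n-2, n) second differences
    A = np.eye(n) + lam * D2.T @ D2
    return np.linalg.solve(A, points)
```

Straight polylines are left unchanged (their second differences vanish), while high-frequency perturbations are strongly damped.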
\noindent{\textbf{\underline{Problem 3:} Lofting sensitivity to
overfragmentation and gaps: }} Lack of edges or undergrouping in the edge
grouping stage can lead to gaps and overfragmentation. In both cases, a long
veridical curve is represented as multiple smaller curve fragments,
Figure~\ref{fig:lofting:sensitivities}c. As a result, what would have been a
single surface patch now needs to be covered by a suboptimal set of smaller,
overlapping surface hypotheses. In addition, the increased number of curve
fragments increases the number of curve pairs to be considered, leading to a
combinatorial increase in computational cost. Curve fragments that are
coincident at a point can be grouped if they show good continuity of tangents
at their endpoints. Similarly, gaps between two curve fragments $\Gamma_1(s)$ and
$\Gamma_2(s)$ can be bridged between endpoints $\Gamma_1(s_1)$ and $\Gamma_2(s_2)$
if: {\em (i)} These endpoints are sufficiently close, \ie, $|\Gamma_1(s_1) -
\Gamma_2(s_2)| < \tau_{dist}$, where $\tau_{dist}$ is a gap proximity threshold,
and {\em (ii)} $CC((\Gamma_1(s_1),T_1(s_1)),(\Gamma_2(s_2),T_2(s_2))) <
\tau_{cocirc}$ where $CC$ is the co-circularity measure, characterizing good
continuation from one point-tangent pair $(P_1,T_1)$ to another pair $(P_2,T_2)$
\cite{Parent:Zucker:PAMI1989}.
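A minimal 2D sketch of the gap-bridging test follows. The co-circularity measure implemented here (equal and opposite tangent-chord angles) is one common formulation in the spirit of \cite{Parent:Zucker:PAMI1989}, not necessarily the paper's exact measure, and the thresholds are illustrative.

```python
import numpy as np

def cocircularity(p1, t1, p2, t2):
    """Co-circularity deviation for two 2D point-tangent pairs: on a common
    circle, the tangents make equal and opposite signed angles with the
    chord p1 -> p2, so the measure is 0 for a co-circular pair."""
    chord = p2 - p1
    def signed_angle(u, v):
        return np.arctan2(u[0] * v[1] - u[1] * v[0], np.dot(u, v))
    return abs(signed_angle(chord, t1) + signed_angle(chord, t2))

def bridge(p1, t1, p2, t2, tau_dist=0.1, tau_cocirc=0.2):
    """Bridge a gap if endpoints are close and show good continuation."""
    return (np.linalg.norm(p2 - p1) < tau_dist and
            cocircularity(p1, t1, p2, t2) < tau_cocirc)
```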
\begin{figure}
\begin{center}
\includegraphics[width=\linewidth]{figs/lofting-sensitivities.png}
\end{center}
\vspace{0cm}
\caption{{\em (a)} Overgrouping of two curve fragments $\mathcal{C}_2$ and
$\mathcal{C}_3$ into $\mathcal{C}_4$ can lead to nonsensical lofting results
in the pair $(\mathcal{C}_1,\mathcal{C}_4)$ in contrast to the
close-to-veridical results of lofting $(\mathcal{C}_1,\mathcal{C}_2)$; {\em
(b)} lofting is sensitive to loop-like noise or excessive perturbations; {\em
(c)} lofting with overfragmented curves produces suboptimal lofting results
and redundant surface proposals, leading to a combinatorial increase in the
number of lofting applications and postprocessing.
}
\vspace{0cm}
\label{fig:lofting:sensitivities}
\end{figure}
\noindent{\textbf{\underline{Problem 4:} Duplications due to curve fragment
overlaps: }} There is some duplication in 3D curve fragments in that two curves
can overlap along portions, thus creating duplicate surface representations.
While this duplication may not be an issue for some applications, better results
can be obtained if the duplication is removed: When two curves overlap, the
longer curve is unaltered and the overlapping segment is removed from the
shorter curve. The curves are also downsampled since the initial curve drawing
is dense in sample points.
The resolution of the above four problems significantly improves the performance
of our algorithm. Note that these steps are applied in sequence: pruning short
curves, smoothing curve fragments, gap filling and grouping of overfragmented
segments, and eliminating duplications and downsampling. In addition, it is
judicious to apply these steps iteratively, starting with small parameter
values and increasing them in steps (typically 3-4 iterations). This is
crucial because all of these steps risk distorting the 3D data in significant
ways if pursued too aggressively in a single iteration: \eg, corners can be
oversmoothed, wrong gaps can be filled, and meaningful but relatively short
curve fragments can be pruned without getting a chance to be merged into a
larger curve fragment.
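The gentle-to-aggressive schedule can be written as a generic driver; this sketch treats each cleanup step as a callable whose parameter is scaled up linearly over rounds (the actual steps and schedule are those described above):

```python
def reorganize(curves, steps, n_rounds=3):
    """Apply cleanup steps in sequence over several rounds.
    steps: list of (step_fn, base_param); step_fn(curves, param) -> curves.
    Each parameter grows from base_param/n_rounds up to base_param, so no
    single pass distorts the data too aggressively."""
    for r in range(1, n_rounds + 1):
        scale = r / n_rounds
        for step_fn, base_param in steps:
            curves = step_fn(curves, base_param * scale)
    return curves
```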
It should be noted that the aforementioned problems do not arise in the
plethora of interactive surface lofting approaches, as a human agent is
available to break or group 3D structures to obtain geometrically accurate 3D
surfaces \cite{Nealen:etal:SIGGRAPH2007}. Some lofting approaches try to get
around this problem by constraining the input curves to be closed curves,
\cite{Zhuang:etal:ACMTOG2013, Schaefer:etal:SGP2004}, but a fully automated,
bottom-up lofting system like ours has to be able to handle such grouping
inconsistencies algorithmically.
\begin{figure}
\begin{center}
\includegraphics[width=0.49\linewidth]{figs/drawing-1.png}
\includegraphics[width=0.49\linewidth]{figs/drawing-2.png}
\includegraphics[width=0.49\linewidth]{figs/grouped-1.png}
\includegraphics[width=0.49\linewidth]{figs/grouped-2.png}
\includegraphics[width=0.49\linewidth]{figs/broken-1.png}
\includegraphics[width=0.49\linewidth]{figs/broken-2.png}
\end{center}
\vspace{0cm}
\caption{The original input 3D curve drawing (top row), which is the direct
output of the 3D curve drawing approach, the result of our reorganization
algorithm before breaking sharp corners (middle row), and after the sharp
corners are broken (bottom row). The level of granularity displayed in the
last row is the most appropriate for our lofting approach, as most surfaces
are bounded by entire curves rather than subsegments.
}
\vspace{0cm}
\label{fig:reorg:results}
\end{figure}
In summary, this regrouping algorithm exploits the underlying organization, as
well as the rich differential geometric properties embedded in any
sufficiently-smooth, 3D curve representation, to adjust the granularity and
connectivity of any input curve graph or network to suit the needs of a wide
variety of applications. In the case of surface lofting, the quality of the
resulting reconstructions is significantly improved if the input curves that
have 3D surfaces between them have their samples more or less linearly aligned
with each other, resulting in a more robust quadrangulation step that kickstarts
most lofting approaches. We therefore use first- and second-order differential
geometric cues, namely tangents and curvatures, to the fullest extent in order
to aggressively group smooth segments and break curves at high-curvature
points, maximizing the likelihood that the lofting algorithm will receive a set
of 3D curves best suited to its capabilities.
\section{Experiments and Results} \label{sec:lofting-results}
\begin{figure*}
\begin{center}
\includegraphics[width=0.245\linewidth]{figs/amsterdam_pmvs1.png}
\includegraphics[width=0.245\linewidth]{figs/amsterdam_pmvs2.png}
\includegraphics[width=0.245\linewidth]{figs/pavilion-midday-pmvs-1.png}
\includegraphics[width=0.245\linewidth]{figs/pavilion-midday-pmvs-2.png}\\
\includegraphics[width=0.245\linewidth]{figs/loftsurface_amsterdam_house_01.png}
\includegraphics[width=0.245\linewidth]{figs/loftsurface_amsterdam_house_02sm.png}
\includegraphics[width=0.245\linewidth]{figs/pavilion-loft1sm.png}
\includegraphics[width=0.245\linewidth]{figs/pavilion-loft2sm.png}
\end{center}
\vspace{0cm}
\caption{\small Two views of the PMVS reconstruction results on the Amsterdam
House Dataset and Barcelona Pavilion Dataset (first row). Observe the wide
gaps on homogeneous surfaces. The second row shows the results of our
algorithm from the same views, obtained from a set of mere 27 curve
fragments and without using appearance. Note that the PMVS gaps are filled
in our results. Our algorithm errs in reconstructing the back of the can as
a flat surface. This can easily be corrected via integration of appearance
cues in the reconstruction process.
}\vspace{-0.3cm}
\label{fig:lofting:results}
\end{figure*}
\noindent{\textbf{\underline{Implementation:}}} The 3D drawing is computed using
code made available by the authors of \cite{Usumezbas:Fabbri:Kimia:ECCV16}.
Smoothing code was made available by \cite{Garcia:CSDA2010}. We have selected
one of the most robust lofting implementations, BSurfaces, an add-on for
Blender~\cite{BSurfaces}, a well-known, professional-grade 3D modeling system in
widespread use. BSurfaces is able to work on multiple curves with arbitrary
topology and configurations, either simultaneously or incrementally, producing
simple and smooth surfaces that accurately interpolate input curves, even if
they only partially cover the boundary of the surface to be reconstructed. The
use of BSurfaces has been limited to interactive modeling, where a human agent
provides clean well-connected curves to the system.
To the best of our knowledge, a fully-automated 3D modeling pipeline that
obtains a 3D curve network, and uses lofting to surface this network in a
fully-automated fashion, is novel.
\noindent{\textbf{\underline{Datasets:}}} We use two datasets to quantify
experimental results. First, the Amsterdam House Dataset consists of 50 fully
calibrated multiview images and comprises a wide variety of object properties,
including but not limited to smooth surfaces, shiny surfaces, specific
closed-curve geometries, text, texture, clutter and cast shadows. This dataset is
used to evaluate the occlusion and visibility reasoning part of our pipeline,
Section~\ref{sec:auto-lofting}. Second, the Barcelona Pavilion Dataset is a
realistic synthetic dataset created for validating the present approach with
complete control over illumination, 3D geometry and cameras. This dataset was
used with its 3D mesh ground truth to evaluate the geometric accuracy of the
full pipeline.
\noindent{\textbf{\underline{Qualitative Evaluation:}}} Figure~\ref{fig:lofting:results} shows our algorithm's reconstruction and compares it to PMVS~\cite{Furukawa:Ponce:CVPR2007}. Observe that the reconstructed surface patches are glued onto the 3D drawing so that the topological relationship among surface patches is explicitly captured and represented. A key point to keep in mind is that the two approaches are not compared to see which is better. Rather, the intent is to show the complementary nature of the two approaches and the promise of even greater performance when appearance, the backbone of PMVS, is integrated into our approach.
\noindent{\textbf{\underline{Quantitative Evaluation:}}} The algorithm is quantitatively evaluated in two ways. First, we assume the input to the algorithm, the 3D curve drawing, is correct and compare ground truth to the algorithm's results based on a common 3D drawing. Specifically, we manually construct a surface model using the curve drawing in an interactive design and modeling context using Blender. The resulting surface model then serves as ground truth (GT) since it is the best possible expected outcome of our algorithm. Both GT and algorithm surface models are sampled, and a proximity threshold is used to determine whether a sample of one model lies on the other, and vice versa. Three stages of surface reconstruction are then evaluated as a precision-recall curve, Figure~\ref{fig:lofting:quan}a, namely: {\em (i)} all surface hypotheses satisfying formation constraints; {\em (ii)} surface hypotheses that survive the occlusion constraint; {\em (iii)} surface hypotheses that further satisfy the visibility constraint with duplications removed. The algorithm recovers 90\% of the surfaces with nearly 100\% precision. The missing surfaces are those that do not occlude any structures, and therefore cannot be validated with our approach. Clearly, the use of appearance would go a long way towards recovering these missing surfaces.
Second, we also quantitatively evaluate the algorithm in an end-to-end fashion,
including the 3D drawing stage. Since the ground truth surfaces are not
available from Amsterdam House Dataset, we resort to using Barcelona Pavilion
Dataset, which has GT surfaces. Since this dataset is large, we focus our
evaluation on a specific area with two chair objects. We use the same strategy
to compare the final outcome of our algorithm, Figure~\ref{fig:lofting:quan}b.
The results show that despite a complete disregard for appearance, geometry of
the surfaces together with occlusion constraint is able to recover a significant
number of surface patches accurately. The recall does not reach 100\% because
the ground truth floor surfaces do not occlude any curves and therefore cannot
be recovered.
\begin{figure}
\begin{center}
\includegraphics[width=\linewidth]{figs/lofting_pr-improved.png}\\
\includegraphics[width=\linewidth]{figs/PR-lofting-improved.png}
\end{center}
\vspace{0cm}
\caption{(a) The precision-recall curves for Amsterdam House Dataset,
corresponding to post hypothesis-formation surfaces (green), confirmed
surface (blue), and confirmed surfaces after occlusion-based cleanup (red).
These results provide quantitative proof for the necessity of all steps in
our reconstruction algorithm; (b) The precision-recall curve for Barcelona
Pavilion Dataset, evaluating the geometric accuracy of the entire pipeline.
}
\label{fig:lofting:quan}
\end{figure}
\section{Conclusions}
This paper presents a fully automated dense surface reconstruction approach that uses the geometry of curvilinear structures evident in wide-baseline calibrated views of a scene. The algorithm relies on the {\em 3D drawing}, a graph-based representation of reconstructed 3D curve fragments which annotate meaningful structure in the scene, and on lofting to create surface patch hypotheses which are glued onto the 3D drawing, viewed as a scaffold of the scene. The algorithm validates these hypotheses by reasoning about occlusion among curves and surfaces. Thus it requires views from a wide range of camera angles and performs best if there are multiple objects to afford the opportunity for inter-object and intra-object occlusion. Qualitative and quantitative evaluations show that a significant portion of the scene's surface structure can be recovered and its topological structure made explicit, a clear advantage.
This is significant considering that this is only the first step in our approach, namely, using geometry alone without appearance, the cue at the core of successful dense reconstruction systems like PMVS. Our goal is to integrate appearance into the process, which promises to significantly improve reconstruction performance.
{\small
\input{paper-arxiv.bbl}
}
\end{document}
\section{Supercharges for a supersymmetric closed chain of four interacting Majorana modes}
In this appendix, we present the explicit form of the supercharges in a supersymmetric closed chain of four Majorana modes with nearest-neighbour couplings. The corresponding Hamiltonian is given by
\begin{equation}
H_{eff}=i(t_1\gamma_1\gamma_2+t_2\gamma_2\gamma_3+t_3\gamma_3\gamma_4+t_4\gamma_4\gamma_1).
\end{equation}
By defining two Dirac operators $c_1=\gamma_1+i\gamma_2$ and $c_2=\gamma_3+i\gamma_4$, we have the many-particle basis $\{|00\rangle, |11\rangle, |10\rangle, |01\rangle\}$ with $|00\rangle=|\phi_0\rangle, |11\rangle=c_1^\dagger c_2^\dagger|\phi_0\rangle, |10\rangle=c_1^\dagger|\phi_0\rangle$ and $|01\rangle=c_2^\dagger |\phi_0\rangle$. Under this basis, the matrix form of $H_{eff}$ is given by
\begin{equation}
\hat{H}_{eff}=
\begin{pmatrix}
-t_1-t_3 &t_4-t_2 &0 &0 \\
t_4-t_2 &t_1+t_3 &0 &0 \\
0 &0 &t_1-t_3 &-t_2-t_4 \\
0 &0 &-t_2-t_4 &t_3-t_1
\end{pmatrix}.
\end{equation}
By imposing the condition for SUSY $t_1t_3=t_2t_4$, we get the energy levels $E=\pm \epsilon$ with $\epsilon=\sqrt{t_1^2+t_2^2+t_3^2+t_4^2}$ which are two-fold degenerate. In order to construct the supercharges, all energy levels need to be non-negative, so we shift the Hamiltonian by a positive constant $h\epsilon$ with $h\ge 1$. We write the shifted Hamiltonian as $H_{SUSY}=H_{eff}+h\epsilon$ and the energy levels are $E_1=(h-1)\epsilon$ and $E_2=(h+1)\epsilon$. The degenerate states at $E_1$ are obtained as
\begin{equation}\label{excitations}
\begin{split}
|\varphi\rangle_{even}=\frac{1}{A_1}[(t_2-t_4)|00\rangle+(\epsilon-t_1-t_3)|11\rangle],\\
|\varphi\rangle_{odd}=\frac{1}{B_1}[(t_4+t_2)|10\rangle+(\epsilon+t_1-t_3)|01\rangle],
\end{split}
\end{equation}
and the degenerate states at $E_2$ are obtained as
\begin{equation}\label{excitations2}
\begin{split}
|\varphi\rangle_{even}=\frac{1}{A_2}[(t_2-t_4)|00\rangle-(\epsilon+t_1+t_3)|11\rangle],\\
|\varphi\rangle_{odd}=\frac{1}{B_2}[(t_4+t_2)|10\rangle+(-\epsilon+t_1-t_3)|01\rangle],
\end{split}
\end{equation}
where $A_1,B_1,A_2$ and $B_2$ are coefficients for normalization. We thus find the two-fold degeneracy of states with opposite fermion parities.
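The two-fold degenerate spectrum $\pm\epsilon$ under the SUSY condition $t_1t_3=t_2t_4$ can be checked numerically; the coupling values below are illustrative.

```python
import numpy as np

# Illustrative couplings obeying the SUSY condition t1*t3 = t2*t4.
t1, t2, t4 = 1.0, 2.0, 3.0
t3 = t2 * t4 / t1
eps = np.sqrt(t1**2 + t2**2 + t3**2 + t4**2)

# Matrix of H_eff in the basis {|00>, |11>, |10>, |01>}.
H = np.array([[-t1 - t3, t4 - t2, 0.0, 0.0],
              [t4 - t2, t1 + t3, 0.0, 0.0],
              [0.0, 0.0, t1 - t3, -t2 - t4],
              [0.0, 0.0, -t2 - t4, t3 - t1]])
evals = np.sort(np.linalg.eigvalsh(H))   # expected: [-eps, -eps, eps, eps]
```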
According to the analysis in the main text, there are two fermionic operators
\begin{eqnarray}
Q_1=\gamma'\sqrt{H_{SUSY}}, \ \ \ Q_2=\gamma''\sqrt{H_{SUSY}},
\end{eqnarray}
which satisfy the algebra
\begin{eqnarray}
\{P,Q_i\}=0,\ \ \{Q_i,Q_j\}=2\delta_{ij}H_{SUSY}
\end{eqnarray}
for $i,j\in\{1,2\}$. This indicates an ${\cal N}=2$ supersymmetry, with $Q_{1,2}$ the two supercharges. For the case of four Majorana modes here, we obtain
\begin{equation}
\gamma'=\frac{t_2}{\sqrt{t_1^2+t_2^2}}\left(\gamma_1+\frac{t_1}{t_2}\gamma_3\right),\ \ \ \gamma''=\frac{t_3}{\sqrt{t_2^2+t_3^2}}\left(\gamma_2+\frac{t_2}{t_3}\gamma_4\right)
\end{equation}
and
\begin{eqnarray}
\sqrt{H_{SUSY}}=\frac{1}{A\sqrt{\epsilon}}H_{eff}+B\sqrt{\epsilon},
\end{eqnarray}
with
\begin{eqnarray}
A=\sqrt{h+1}+\sqrt{h-1},\ \ \ \ B=\frac{\sqrt{h+1}+\sqrt{h-1}}{2}.
\end{eqnarray}
We can easily check that $(H_{eff}/(A\sqrt{\epsilon})+B\sqrt{\epsilon})^2=H_{eff}+h\epsilon$.
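This identity, together with the positivity of $\sqrt{H_{SUSY}}$, can be verified numerically for illustrative couplings (the check uses $H_{eff}^2=\epsilon^2$, since the eigenvalues are $\pm\epsilon$):

```python
import numpy as np

# Illustrative couplings with t1*t3 = t2*t4, and shift parameter h >= 1.
t1, t2, t4, h = 1.0, 2.0, 3.0, 2.0
t3 = t2 * t4 / t1
eps = np.sqrt(t1**2 + t2**2 + t3**2 + t4**2)
H = np.array([[-t1 - t3, t4 - t2, 0.0, 0.0],
              [t4 - t2, t1 + t3, 0.0, 0.0],
              [0.0, 0.0, t1 - t3, -t2 - t4],
              [0.0, 0.0, -t2 - t4, t3 - t1]])
A = np.sqrt(h + 1) + np.sqrt(h - 1)
B = A / 2.0
sqrtH = H / (A * np.sqrt(eps)) + B * np.sqrt(eps) * np.eye(4)
# sqrtH @ sqrtH should reproduce H + h*eps*I, with sqrtH positive semidefinite.
```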
\section{Quasiparticle excitations for non-supersymmetric closed chains of four interacting Majorana modes}
In this appendix, we study the quasiparticle excitations in a closed chain of four interacting Majorana modes with $t_2t_4/t_1t_3<0$, corresponding to the regimes of $V_x$ in Fig.~3(b) where only finite-energy excitations are available. Here we rewrite $H_{eff}$ in terms of Dirac operators as
\begin{align}
H_{eff}=&t_1(c_1^\dagger c_1-c_1c_1^\dagger)-(t_2+t_4)c_2^\dagger c_1-(t_2+t_4)c_1^\dagger c_2 \\ \nonumber
+&(t_2-t_4)c_1c_2+(t_4-t_2)c_1^\dagger c_2^\dagger+t_3(c_2^\dagger c_2-c_2 c_2^\dagger),
\end{align}
which has the matrix form
\begin{equation}
\hat{H}_{BdG}=\frac{1}{2}
\begin{pmatrix}
2t_1 &-(t_2+t_4) &0 &t_4-t_2 \\
-(t_2+t_4) &2t_3 &t_2-t_4 &0 \\
0 &t_2-t_4 &-2t_1 &t_2+t_4 \\
t_4-t_2 &0 &t_2+t_4 &-2t_3
\end{pmatrix}
\end{equation}
in the BdG basis $\{c_1^\dagger, c_2^\dagger, c_1,c_2\}$. By diagonalizing the matrix, we obtain four quasiparticle excitations with energy
\begin{equation}\label{BdGexcitation}
\epsilon=\pm\frac{1}{\sqrt{2}}\sqrt{\sum_{j=1}^4t_j^2\pm\sqrt{\left(\sum_{j=1}^4t_j^2\right)^2-4(t_1t_3-t_2t_4)^2}},
\end{equation}
where the positive and negative excitations are symmetric due to particle-hole symmetry. Because $t_2t_4/t_1t_3<0$ implies $t_1t_3\neq t_2t_4$, there are no gapless excitations according to Eq.~(\ref{BdGexcitation}). Therefore, no matter how we change the magnetic flux $\Phi$ through the ring in Fig.~1(b) of the main text to tune $t_{2,4}$, we cannot obtain SUSY.
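As a numerical sanity check, one may diagonalize the BdG matrix above and compare its eigenvalues with Eq.~(\ref{BdGexcitation}); the Python sketch below does this for a few parameter sets:

```python
import numpy as np

def bdg_spectrum(t1, t2, t3, t4):
    # BdG matrix in the basis {c1^dag, c2^dag, c1, c2}, including the 1/2 prefactor
    H = 0.5 * np.array([
        [2 * t1,     -(t2 + t4),  0.0,        t4 - t2],
        [-(t2 + t4),  2 * t3,     t2 - t4,    0.0    ],
        [0.0,         t2 - t4,   -2 * t1,     t2 + t4],
        [t4 - t2,     0.0,        t2 + t4,   -2 * t3 ],
    ])
    return np.sort(np.linalg.eigvalsh(H))

def closed_form(t1, t2, t3, t4):
    # the four energies of Eq. (BdGexcitation)
    s = t1 * t1 + t2 * t2 + t3 * t3 + t4 * t4
    r = np.sqrt(s * s - 4 * (t1 * t3 - t2 * t4) ** 2)
    ep, em = np.sqrt((s + r) / 2), np.sqrt((s - r) / 2)
    return np.sort([-ep, -em, em, ep])

for ts in [(1.0, 0.7, -0.9, 0.3), (0.5, -1.2, 0.8, 0.4)]:
    assert np.allclose(bdg_spectrum(*ts), closed_form(*ts))
print("spectra agree")
```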
Now we focus on a special case with $|t_2|=|t_4|$, where we can prove that the lowest excitation energy attains its minimum at $t_2=t_4=0$, i.e. $\Phi=0$. Without loss of generality, we consider $t_2=t_4$ and $t_1t_3<0$; the first excitation is then
\begin{equation}\label{1st}
\epsilon_1=\frac{1}{\sqrt{2}}\sqrt{(t_1^2+t_3^2+2t_2^2)-\sqrt{(t_1^2-t_3^2)^2+4t_2^2(t_1+t_3)^2}}.
\end{equation}
It is straightforward to prove $\epsilon_{1}\ge \epsilon_{1}(t_2=t_4=0)$ for any $t_{1,3}$, thus giving the smallest value
\begin{equation}\label{Emin}
E_{min}=\epsilon_1(t_2=t_4=0)=\left|\frac{t_1-t_3}{2}\right|-\left|\frac{t_1+t_3}{2}\right|,
\end{equation}
which is the smaller of $|t_1|$ and $|t_3|$, as shown in Fig.~3(b) of the main text.
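A quick numerical scan supports this. The sketch below (assuming $t_2=t_4$ and $t_1t_3<0$ as above) checks that $\epsilon_1$ never drops below its value at $t_2=0$, which equals the smaller of $|t_1|$ and $|t_3|$:

```python
import numpy as np

def eps1(t1, t3, t2):
    # first excitation of Eq. (1st), with t2 = t4
    s = t1 * t1 + t3 * t3 + 2 * t2 * t2
    r = np.sqrt((t1 * t1 - t3 * t3) ** 2 + 4 * t2 * t2 * (t1 + t3) ** 2)
    return np.sqrt((s - r) / 2)

for (t1, t3) in [(1.0, -2.0), (-0.4, 1.7), (2.5, -0.3)]:
    e_min = min(abs(t1), abs(t3))
    # at t2 = t4 = 0 the first excitation equals min(|t1|, |t3|) ...
    assert abs(eps1(t1, t3, 0.0) - e_min) < 1e-12
    # ... and it is the minimum over t2
    for t2 in np.linspace(-3.0, 3.0, 61):
        assert eps1(t1, t3, t2) >= e_min - 1e-12
print("minimum at t2 = t4 = 0 confirmed")
```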
\section{Effects of additional fluxes threading the closed chain of Majorana modes}
In real experiments, we consider an external ring much larger than the closed chain of two nanowires and two junctions. Nevertheless, when fluxes penetrate the external ring, a small amount of flux threading the closed chain may be unavoidable. Here we show that the system can still be tuned to the supersymmetric state in the presence of such additional fluxes.
\begin{figure}[t]
\psfig{figure=supp,width=8.5cm} \caption{\label{fsupp}
Lowest-energy spectrum as a function of the magnetic flux through the loop formed by the external superconducting ring and the junction between $\gamma_2$ and $\gamma_3$ in Fig.~1(b) of the main text. The additional flux threading the closed chain of coupled Majorana modes is $\Phi'=0.1\Phi$. Other parameters are the same as for the green curve of Fig.~2(a) in the main text. }
\end{figure}
We carry out the analysis with Kitaev chains, as in the main text, where the tunneling term related to the Josephson effect is given by
\begin{equation}
H_{\Gamma}'=-\Gamma_1 a_{L1}^\dagger a_{R1}-\Gamma_n a_{Ln}^\dagger a_{Rn}+{\rm h.c.}
\end{equation} In the topological regime, we have $a_{L1} \rightarrow i \frac{1}{2}ge^{i\theta_{L1}/2}\gamma_1$, $a_{Ln} \rightarrow \frac{1}{2}ge^{i\theta_{Ln}/2}\gamma_2$, $a_{R1} \rightarrow i \frac{1}{2}ge^{i\theta_{R1}/2}\gamma_4$ and $a_{Rn} \rightarrow \frac{1}{2}ge^{i\theta_{Rn}/2}\gamma_3$. The phase shift across a junction is determined by the fluxes enclosed by the junction and the external ring, and thus we have
\begin{equation}\label{phaseshifts}
\theta_{R1}-\theta_{L1}=2\pi\frac{\Phi+\Phi'}{\Phi_0}, \ \ \ \theta_{Rn}-\theta_{Ln}=2\pi\frac{\Phi}{\Phi_0},
\end{equation}
where $\Phi'$ is the additional flux threading the closed chain of coupled Majorana modes. We thus obtain
\begin{equation}
H_\Gamma'=it_2\gamma_2\gamma_3+it_4\gamma_4\gamma_1
\end{equation}
with
\begin{equation}
t_2=\frac{\Gamma_1}{2}g\sin\frac{\pi\Phi}{\Phi_0}, \ \ \
t_4=-\frac{\Gamma_n}{2}g\sin\frac{\pi(\Phi+\Phi')}{\Phi_0}.
\end{equation}
By using that $t_1t_3=-E_1^2$ in the main text, we obtain
\begin{equation}\label{t2t4overt1t3}
\frac{t_2t_4}{t_1t_3}=\frac{g^2\Gamma_1\Gamma_n}{4E_1^2}\sin\frac{\pi\Phi}{\Phi_0}\sin\frac{\pi(\Phi+\Phi')}{\Phi_0}.
\end{equation}
Since the external ring is much larger than the closed chain, we consider $\Phi'\ll \Phi$, in which case SUSY can still be obtained when $g^2\Gamma_1\Gamma_n/4E_1^2\ge 1$, but $\Phi_{SUSY}$ is slightly shifted from the value corresponding to $\Phi'=0$. For example, with $\Phi'=0.1\Phi_0$, $\Phi_{SUSY}$ changes from $0.213\Phi_0$ in Fig.~2(a) of the main text to $0.205\Phi_0$ in Fig.~\ref{fsupp}.
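To illustrate this shift, the following sketch assumes that the SUSY point is where $t_2t_4=t_1t_3$, i.e.\ where the ratio in Eq.~(\ref{t2t4overt1t3}) equals 1, and solves for $\Phi_{SUSY}$ by bisection. The prefactor value $C=2.6$ is purely illustrative (not a fitted parameter of the figures):

```python
import math

def phi_susy(C, dphi):
    # Solve C*sin(pi*x)*sin(pi*(x + dphi)) = 1 for x = Phi/Phi_0 by bisection,
    # on the interval where the left-hand side first crosses 1 from below.
    f = lambda x: C * math.sin(math.pi * x) * math.sin(math.pi * (x + dphi)) - 1.0
    lo, hi = 1e-6, 0.5
    assert f(lo) < 0 < f(hi)
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if f(mid) < 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

C = 2.6                   # illustrative value of g^2*Gamma_1*Gamma_n/(4*E_1^2) > 1
x0 = phi_susy(C, 0.0)     # no additional flux
x1 = phi_susy(C, 0.1)     # additional flux Phi' = 0.1*Phi_0
assert x1 < x0            # the SUSY flux shifts slightly toward smaller values
print(round(x0, 3), round(x1, 3))
```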
\section{Introduction}\label{sec:intro}
In this article, all graphs are finite and simple.
We say that a graph $G$ \emph{contains} a graph $H$ if
$H$ is isomorphic to an induced subgraph of $G$, and that $G$ is
\emph{$H$-free} if it does not contain $H$. For a family of graphs ${\cal H}$,
$G$ is \emph{${\cal H}$-free} if for every $H\in {\cal H}$, $G$ is $H$-free.
A \emph{hole} in a graph is a chordless cycle of length at least 4.
A \emph{theta}
is a graph formed by three paths between the same pair of distinct
vertices so that the union of any two of the paths induces a hole. A
\emph{wheel} is a graph formed by a hole and a node that has at least 3
neighbors in the hole.
In this series of papers we study (theta, wheel)-free graphs.
This project is motivated and
explained in more detail in Part I of the
series~\cite{twf-p1}, where two subclasses of (theta, wheel)-free
graphs are studied. In Part II of the series~\cite{twf-p2}, we prove
a decomposition theorem for (theta, wheel)-free graphs that uses clique cutsets
and 2-joins, and use it to obtain a polynomial time recognition
algorithm for the class.
In this part we use the decomposition theorem from~\cite{twf-p2} to obtain further
properties of the graphs in the class
and to construct polynomial time algorithms for maximum weight clique,
maximum weight stable set, and coloring problems.
In Part IV of the series~\cite{twf-p4} we show that the induced version of
the $k$-linkage problem can be solved in polynomial time for (theta, wheel)-free
graphs.
\subsection*{The main results and the outline of the paper}
Throughout the paper we will denote by ${\cal C}$ the class of (theta, wheel)-free
graphs. Also, $n$ will denote the number of vertices and $m$ the number of edges of
a given graph.
For completeness, in Section~\ref{sec:decTh}, we state the decomposition
theorem for ${\cal C}$ and several other results proved in previous parts that
will be needed here.
Fundamental for our algorithms are the 2-join decomposition techniques developed in
\cite{nicolas.kristina:two} which we also describe here, as well as
prove some preliminary lemmas.
In Section~\ref{sec:maxCl}, we prove that every graph in ${\cal C}$
contains a bisimplicial vertex, and use this property to
give an $\mathcal O(n^2m)$-time
algorithm for the maximum weight clique problem on ${\cal C}$,
as well as to show that the class is 3-clique-colorable.
In Section~\ref{sec:stable}, we give an $\mathcal O(n^6m)$-time
algorithm for the maximum weight stable set problem on ${\cal C}$.
In Section~\ref{sec:vCol}, we give an $\mathcal O(n^5m)$-time
algorithm that optimally colors graphs from ${\cal C}$. We also
prove that every graph in ${\cal C}$, with maximum clique size
$\omega$, admits a coloring with at most $\max\{\omega, 3\}$ colors.
Since ${\cal C}$ contains all chordal graphs, clearly ${\cal C}$ has unbounded
clique-width.
In Section~\ref{s:cW}, we show how an example of Lozin and Rautenbach
\cite{lozin:unboundedCW}
implies that the class of graphs from ${\cal C}$ that have no clique cutset
also has
unbounded clique-width.
\subsection*{Terminology and notation}
A {\em clique} in a graph is a (possibly empty) set of pairwise adjacent vertices.
We say that a clique is
{\it big} if it is of size at least 3.
A {\em stable set} in a graph is a (possibly empty) set of pairwise nonadjacent vertices.
A {\it diamond} is a
graph obtained from a complete graph on 4 vertices by deleting an edge. A {\it claw} is a
graph induced by nodes $u,v_1,v_2,v_3$ and edges $uv_1,uv_2,uv_3$.
A {\em path} $P$ is a sequence of distinct vertices
$p_1p_2\ldots p_k$, $k\geq 1$, such that $p_ip_{i+1}$ is an edge for
all $1\leq i <k$. Edges $p_ip_{i+1}$, for $1\leq i <k$, are called
the {\em edges of $P$}. Vertices $p_1$ and $p_k$ are the {\em ends}
of $P$. A cycle $C$ is a sequence of vertices $p_1p_2\ldots p_kp_1$,
$k \geq 3$, such that $p_1\ldots p_k$ is a path and $p_1p_k$ is an
edge. Edges $p_ip_{i+1}$, for $1\leq i <k$, and edge $p_1p_k$ are
called the {\em edges of $C$}. Let $Q$ be a path or a cycle. The
vertex set of $Q$ is denoted by $V(Q)$. The {\em length} of $Q$ is
the number of its edges. An edge $e=uv$ is a {\em chord} of $Q$ if
$u,v\in V(Q)$, but $uv$ is not an edge of $Q$. A path or a cycle $Q$
in a graph $G$ is {\em chordless} if no edge of $G$ is a chord of
$Q$.
Let $G$ be a graph.
For $x\in V(G)$, $N(x)$ is the set of all neighbors of $x$ in $G$, and $N[x]=N(x) \cup \{ x\}$.
For $S\subseteq V(G)$, $G[S]$ denotes the subgraph of $G$ induced by $S$.
For disjoint subsets $A$ and $B$ of $V(G)$, we say that $A$ is {\em complete} (resp. {\em anticomplete})
to $B$ if every vertex of $A$ is adjacent (resp. nonadjacent) to every vertex of $B$.
When clear from the context, we will sometimes write $G$ instead of $V(G)$.
\section{Decomposition of (theta, wheel)-free graphs}
\label{sec:decTh}
To state the decomposition theorem for graphs in ${\cal C}$ we first define the
basic classes involved and then the cutsets used.
\subsection*{Basic classes}
We will refer to P-graphs and line graphs of triangle-free chordless graphs (which we now define)
as {\em basic} graphs.
A graph $G$ is {\em chordless} if no cycle of $G$ has a chord.
An
edge of a graph is {\em pendant} if at least one of its endnodes has
degree~1. A \emph{branch vertex} in a graph is a vertex of degree at
least~3. A {\em branch} in a graph $G$ is a path of length at least~1
whose internal vertices are of degree 2 in $G$ and whose endnodes are
both branch vertices. A {\em limb} in a graph $G$ is a path of length
at least~1 whose internal vertices are of degree 2 in $G$ and whose
one endnode has degree at least 3 and the other one has degree~1. Two
distinct branches are {\em parallel} if they have the same endnodes.
Two distinct limbs are {\em parallel} if they share the same vertex of
degree at least~3.
Cut vertices of a graph $R$ that are also branch vertices are called
the {\em attaching vertices} of $R$. Let $x$ be an attaching vertex
of a graph $R$, and let $C_1, \ldots ,C_t$ be the connected components
of $R\setminus x$ that together with $x$ are not limbs of $R$ (possibly, $t=0$, when all
connected components of $R\setminus x$ are limbs). If $x$ is the end
of at least two parallel limbs of $R$, let $C_{t+1}$ be the subgraph of $R$ formed by
all the limbs of $R$ with endnode $x$. The graphs
$R[V(C_i)\cup \{ x\} ]$ (for $i=1, \ldots, t$) and the graph $C_{t+1}$
(if it exists) are the \emph{$x$-petals} of $R$.
For any integer $k\geq 1$, a {\em $k$-skeleton} is a graph $R$ such that:
\begin{enumerate}
\item $R$ is connected, triangle-free, chordless and contains at least
three pendant edges (in particular, $R$ is not a path).
\item $R$ has no parallel branches (but it may contain
parallel limbs).
\item For every cut vertex $u$ of $R$, every component of
$R\setminus u$ has a vertex of degree 1 in $R$.
\item For every vertex cutset $S=\{a, b\}$ of $R$ and for
every component $C$ of $R\setminus S$, either $R[C\cup S]$ is a
chordless path from $a$ to $b$, or $C$ contains at least one vertex
of degree 1 in $R$.
\item For every edge $e$ of a cycle of $R$, at least one of the
endnodes of $e$ is of degree 2.
\item Each pendant edge of $R$ is given one label, that is an integer
from $\{1, \dots, k\}$.
\item Each label from $\{ 1, \ldots ,k\}$ is given at least once (as a
label), and some label is used at least twice.
\item If some pendant edge whose one endnode is of degree at least 3
receives label $i$, then no other pendant edge receives label $i$.
\item If $R$ has no branches then $k=1$, and otherwise
if two limbs of $R$ are parallel, then
their pendant edges receive different labels and at least one of these labels is used more than once.
\item If $k>1$ then for every attaching vertex $x$ and for
every $x$-petal $H$ of $R$, there are at least two distinct labels
that are used in $H$. Moreover, if $\overline{H}$ is a union of at least one but not all $x$-petals,
then there is a label $i$ such that both $\overline{H}$ and $(R\setminus\overline{H})\cup\{x\}$ have
pendant edges with label $i$.
\item If $k=2$, then both labels are used at least twice.
\end{enumerate}
Note that if $R$ is a skeleton, then it edgewise
partitions into its branches and its limbs. Also, there is a trivial
one-to-one correspondence between the pendant edges of $R$ and the
limbs of $R$: any pendant edge belongs to a unique limb, and
conversely any limb contains a unique pendant edge.
If $R$ is a graph, then the {\em line graph} of $R$, denoted by $L(R)$, is the graph whose vertices are the edges of $R$,
and such that two vertices of $L(R)$ are adjacent if and only if the corresponding edges are adjacent in $R$.
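As an illustration only (not used in any proof), the line-graph construction can be sketched in a few lines of Python; the adjacency-dictionary representation is our own choice:

```python
import itertools

def line_graph(edges):
    """Vertices of L(R) are the edges of R; two vertices of L(R) are
    adjacent iff the corresponding edges of R share an endnode."""
    edges = [frozenset(e) for e in edges]
    adj = {e: set() for e in edges}
    for e, f in itertools.combinations(edges, 2):
        if e & f:                      # the two edges share an endnode
            adj[e].add(f)
            adj[f].add(e)
    return adj

# the line graph of a 4-cycle is again a 4-cycle: every vertex has degree 2
c4 = [(1, 2), (2, 3), (3, 4), (4, 1)]
assert all(len(nbrs) == 2 for nbrs in line_graph(c4).values())
```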
A {\em P-graph} is any graph $B$ that can be
constructed as follows:
\begin{itemize}
\item
Pick an integer $k\geq 1$ and a $k$-skeleton $R$.
\item
Build $L(R)$, the line graph of $R$. The vertices of $L(R)$ that
correspond to pendant edges of $R$ are called {\em pendant vertices}
of $L(R)$, and they receive the same label as their corresponding
pendant edges in $R$.
\item Build a clique $K$ with vertex set $\{ v_1, \ldots ,v_k\}$,
disjoint from $L(R)$.
\item $B$ is now constructed from $L(R)$ and $K$ by adding edges
between $v_i$ and all pendant vertices of $L(R)$ that have label
$i$, for $i=1, \ldots ,k$.
\end{itemize}
We say that $K$ is the {\em special clique}
of $B$ and $R$ is the {\em skeleton} of $B$.
\begin{lemma}
\label{l:twoBranches}
Every P-graph $G$ contains two distinct branches of length at least~2
(in particular, these two branches both contain a vertex of degree~2).
\end{lemma}
\begin{proof}
Let $i$ be a label of $G$ that is used at least twice (it exists by (vii)) and consider two
pendant edges of the skeleton $R$ of $G$ that receive this label. Then,
by condition (viii) the limbs that contain these pendant edges are of
length at least 2, and hence they correspond to branches of length at
least 2 in $G$ (note that by (i) the degree of $v_i$ in $G$ is at least 3).
\end{proof}
\begin{lemma}[\cite{twf-p1}]\label{p1l2.4}
$G$ is the line graph of a triangle-free chordless graph if and only if $G$ is (wheel, diamond, claw)-free.
\end{lemma}
\begin{lemma}[\cite{twf-p2}]\label{p2l4.2}
Every P-graph is (theta, wheel, diamond)-free.
\end{lemma}
\subsection*{Cutsets}
In a graph $G$, a
subset $S$ of nodes and/or edges is a {\em cutset} if its removal yields
a disconnected graph. A node cutset $S$ is a {\em clique cutset} if
$S$ is a clique. Note that every disconnected graph has a clique
cutset: the empty set.
An {\em almost 2-join} in a graph $G$ is a pair $(X_1,X_2)$ that is a
partition of $V(G)$, and such that:
\begin{itemize}
\item For $i=1,2$, $X_i$ contains disjoint nonempty sets $A_i$ and
$B_i$, such that every node of $A_1$ is adjacent to every node
of $A_2$, every node of $B_1$ is adjacent to every node of
$B_2$, and there are no other adjacencies between $X_1$ and $X_2$.
\item For $i=1,2$, $|X_i|\geq 3$.
\end{itemize}
An almost 2-join $(X_1, X_2)$ is a \emph{2-join} when for $i\in\{1,2\}$, $X_i$
contains at least one path from $A_i$ to $B_i$, and if $|A_i|=|B_i|=1$
then $G[X_i]$ is not a chordless path.
We say that $(X_1,X_2,A_1,A_2,B_1,B_2)$ is a {\em split} of this
2-join, and the sets $A_1,A_2,B_1,B_2$ are the {\em special sets} of
this 2-join. We often use the following notation:
$C_i = X_i\setminus (A_i \cup B_i)$ (possibly, $C_i = \emptyset$).
We are ready to state the decomposition theorem from~\cite{twf-p2}.
\begin{theorem}[\cite{twf-p2}]\label{decomposeTW}
If $G$ is (theta, wheel)-free, then $G$ is a line graph of a
triangle-free chordless graph or a P-graph, or $G$ has a clique
cutset or a 2-join.
\end{theorem}
We now describe how we decompose a graph from ${\cal C}$ into basic
graphs using the cutsets in the above theorem.
\subsection*{Decomposing with clique cutsets}
If a graph $G$ has a clique cutset $K$, then
its node set can be partitioned into sets $(A,K,B)$, where $A$ and $B$
are nonempty and anticomplete. We say that $(A,K,B)$
is a \emph{split} for the clique cutset $K$. When $(A, K, B)$ is a
split for a clique cutset of a graph $G$, the {\em blocks of decomposition}
of $G$ with respect to $(A, K, B)$ are the graphs $G_A=
G[A\cup K]$ and $G_B= G[K \cup B]$.
A \emph{clique cutset decomposition tree} for a graph $G$ is a rooted
tree $T$ defined as follows.
\begin{itemize}
\item[(i)] The root of $T$ is $G$.
\item[(ii)] Every non-leaf node of $T$ is a graph $G'$ that contains a
clique cutset $K'$ with split $(A', K', B')$. The children of $G'$
in $T$ are the blocks of decomposition $G'_{A'}$ and $G'_{B'}$ of
$G'$ with respect to $(A', K,' B')$, and at least one of the graphs
$G_{A'}'$ and $G_{B'}'$ does not admit a clique cutset.
\item[(iii)] Every leaf of $T$ is a graph with no clique cutset.
\item[(iv)] $T$ has at most $n$ leaves.
\end{itemize}
\begin{theorem}[\cite{tarjan}]
\label{th:tarjan}
A clique cutset decomposition tree of an input graph $G$ can be
computed in time $O(nm)$.
\end{theorem}
Note that for a non-leaf node $G'$ of $T$, the corresponding
clique cutset $K'$ of $G'$ is also a clique cutset of $G$.
The following lemmas proved in \cite{twf-p1} will also be needed.
\begin{lemma}[\cite{twf-p1}]\label{diamondCliqueCut}
If $G$ is a wheel-free graph that contains a diamond, then $G$ has a
clique cutset.
\end{lemma}
A {\em star cutset} in a graph is a node cutset $S$ that contains a
node (called a {\em center}) adjacent to all other nodes of $S$. Note
that a nonempty clique cutset is a star cutset.
\begin{lemma}[\cite{twf-p1}]\label{Star=Clique}
If $G\in\mathcal C$ has a star cutset, then $G$ has a clique cutset.
\end{lemma}
\subsection*{Decomposing with 2-joins}
We first state some properties of 2-joins in graphs with no clique cutset.
Let $\mathcal D$ be the class of all graphs from $\mathcal C$ that do
not have a clique cutset. By Lemma~\ref{Star=Clique}, no graph from
$\mathcal D$ has a star cutset and by Lemma \ref{diamondCliqueCut} no
graph from $\mathcal D$ contains a diamond. Also, let
$\mathcal D_{\textsc{basic}}$ be the class of all basic graphs from
$\mathcal C$ that do not have a clique cutset.
An almost 2-join with a split $(X_1, X_2, A_1, A_2, B_1, B_2)$ in a
graph $G$ is \emph{consistent} if the following statements hold for
$i=1, 2$:
\begin{enumerate}
\item Every component of $G[X_i]$ meets both $A_i$, $B_i$.
\item Every node of $A_i$ has a non-neighbor in $B_i$.
\item Every node of $B_i$ has a non-neighbor in $A_i$.
\item Either both $A_1$, $A_2$ are cliques, or one of $A_1$ or $A_2$ is
a single node, and the other one is a disjoint union of cliques.
\item Either both $B_1$, $B_2$ are cliques, or one of $B_1$, $B_2$ is
a single node, and the other one is a disjoint union of cliques.
\item $G[X_i]$ is connected.
\item For every node $v$ in $X_i$, there exists a path in $G[X_i]$
from $v$ to some node of $B_i$ with no internal node in $A_i$.
\item For every node $v$ in $X_i$, there exists a path in $G[X_i]$
from $v$ to some node of $A_i$ with no internal node in $B_i$.
\end{enumerate}
Note that the definition contains redundant statements (for instance, (vi)
implies (i)), but it is convenient to list properties separately as above.
\begin{lemma}[\cite{twf-p1}]
\label{l:consistent}
If $G\in\mathcal D$, then
every almost 2-join of $G$ is consistent.
\end{lemma}
By this lemma every 2-join of a graph of $\mathcal D$ is consistent.
We now define the blocks of decomposition of a graph with respect to a
2-join. Let $G$ be a graph and $(X_1, X_2,A_1,A_2,B_1,B_2)$ a split of a 2-join of $G$.
Let $k_1$ and $k_2$ be positive integers. The
\emph{blocks of decomposition} of $G$ with respect to $(X_1, X_2)$ are
the two graphs $G_1^{k_1}$ and $G_2^{k_2}$ that we describe now. We obtain $G_1^{k_1}$
from $G$ by replacing $X_2$ by a \emph{marker path} $P_2= a_2 \ldots
b_2$ of length $k_1$, where $a_2$ is a node complete to $A_1$, $b_2$ is a node complete
to $B_1$, and $V(P_2)\setminus \{ a_2,b_2\}$ is anticomplete to $X_1$. The
block $G_2^{k_2}$ is obtained similarly by replacing $X_1$ by a marker path
$P_1 = a_1\ldots b_1$ of length $k_2$.
In \cite{twf-p2}, the blocks of decomposition w.r.t.\ a 2-join that we used in the construction of a
recognition algorithm had marker paths of length 2. In this paper we use blocks whose
marker paths are of length 3. So, unless otherwise stated, when we say that $G_1$ and
$G_2$ are blocks of decomposition w.r.t.\ a 2-join, we mean that their marker paths are of length~3.
\begin{lemma}[\cite{twf-p1}]
\label{l:keepKfree}
Let $G$ be a graph with a consistent 2-join $(X_1, X_2)$ and $G_1$,
$G_2$ be the blocks of decomposition with respect to this 2-join whose marker
paths are of length 2. Then the following hold:
\begin{itemize}
\item[(i)] $G$ has no clique cutset if and only if $G_1$ and $G_2$ have
no clique cutset.
\item[(ii)] $G\in {\cal C}$ if and only if $G_1$ and $G_2$ are in ${\cal C}$.
\end{itemize}
\end{lemma}
\begin{lemma}
\label{new2}
Let $G$ be a graph from ${\cal D}$. Let $(X_1, X_2)$ be a 2-join of $G$, and $G_1$,
$G_2$ the blocks of decomposition with respect to this 2-join whose marker
paths are of length at least~2. Then
$G_1$ and $G_2$ are in ${\cal D}$ and they do not have star cutsets.
\end{lemma}
\begin{proof}
By Lemma \ref{l:consistent}, $(X_1,X_2)$ is consistent.
Let $G_1'$ and $G_2'$ be blocks of decomposition w.r.t.\ $(X_1,X_2)$ whose marker paths are of length 2.
Then for $i\in \{ 1,2\}$, $G_i$ is obtained from $G_i'$ by subdividing (zero or more times) an edge of its marker path.
Subdividing an edge whose one endnode is of degree 2 cannot create a clique cutset, nor a theta, nor a wheel,
and hence the result follows from Lemma \ref{l:keepKfree} and Lemma \ref{Star=Clique}.
\end{proof}
A 2-join $(X_1,X_2)$ of $G$ is a {\em minimally-sided 2-join} if for some $i\in \{ 1,2 \}$ the following holds: for every 2-join $(X_1',X_2')$ of $G$, neither $X_1'\subsetneq X_i$ nor $X_2'\subsetneq X_i$. In this case $X_i$ is a {\em minimal side} of this minimally-sided 2-join.
A 2-join $(X_1,X_2)$ of $G$ is an {\em extreme 2-join} if for some $i\in \{ 1,2 \}$ and all $k\geq 3$ the block of decomposition $G_i^k$ has no 2-join.
In this case $X_i$ is an {\em extreme side} of such a 2-join.
Graphs in general do not necessarily have extreme 2-joins (an example is given in \cite{nicolas.kristina:two}), but it is shown in \cite{nicolas.kristina:two}
that graphs with no star cutset do. It is also shown in \cite{nicolas.kristina:two} that if $G$ has no star cutset then the blocks of decomposition w.r.t.\ a 2-join
whose marker paths are of length at least 3, also have no star cutset. This is then used to show that in a graph with no star cutset, a minimally-sided 2-join is extreme.
We summarize these results in the following lemma.
\begin{lemma}[\cite{nicolas.kristina:two}]\label{extreme}
Let $G$ be a graph with no star cutset. Let
$(X_1,X_2,A_1,A_2,B_1,B_2)$ be a split of a minimally-sided 2-join of
$G$ with $X_1$ being a minimal side, and let $G_1$ and $G_2$ be the
corresponding blocks of decomposition whose marker paths are of length at least 3. Then the following hold:
\begin{enumerate}
\item $|A_1|\geq 2$, $|B_1|\geq 2$, and in particular all the vertices of $A_2\cup B_2$ are of degree at least~3.
\item
$(X_1,X_2)$ is an extreme 2-join, with $X_1$ being an extreme side (in particular, $G_1$ has no 2-join).
\end{enumerate}
\end{lemma}
The following simple lemma is useful and not proved in the previous papers of
the series.
\begin{lemma}
\label{l:2joinDeg2}
Let $G$ be in $\mathcal D$. Let $(X_1,X_2,A_1,A_2,B_1,B_2)$ be a
split of a minimally-sided 2-join of $G$ with $X_1$ being a minimal
side, and let $G_1$ and $G_2$ be the corresponding blocks of
decomposition. If the block of decomposition $G_1$ is a P-graph,
then $X_1$ contains a vertex that has degree~2 in $G$.
\end{lemma}
\begin{proof}
By Lemma~\ref{l:twoBranches}, $G_1$ contains a vertex $v$ of
degree~2 that is not in the marker path of $G_1$. We claim that $v$
has also degree~2 in $G$. If $v\in X_1\setminus(A_1\cup B_1)$, then
it is clear, so suppose $v$ is in $A_1\cup B_1$, say in $A_1$ up to
symmetry. Note that $(X_1, X_2)$ is consistent by
Lemma~\ref{l:consistent}. Since $v$ has degree~2 in $G_1$,
condition (vii) in the definition of consistent 2-joins applied to
$v$ implies that $v$ has precisely one neighbor in
$X_1\setminus A_1$ and one neighbor in the marker path of $G_1$.
Since by Lemma~\ref{extreme} $|A_1|\geq 2$, it follows that $G[A_1]$
is disconnected. Hence, by condition (iv) in the definition of
consistent 2-joins, $|A_2|=1$. It follows that $v$ has the same
degree in $G_1$ and in $G$.
\end{proof}
In \cite{nicolas.kristina:two} it is shown that one can decompose a graph with no star cutset,
using a sequence of `non-crossing' 2-joins, into graphs with no star cutset and no 2-join
(which will in our case be basic). This will be particularly important when using the 2-join decomposition to
solve the stable set problem. We now describe the 2-join decomposition obtained in \cite{nicolas.kristina:two}.
A {\it flat path} of $G$ is any path of $G$ of length at least 3, whose
interior vertices are of degree 2, and whose ends do not have a common neighbor.
When $\mathcal M$ is a collection of vertex-disjoint flat paths of
$G$, a 2-join $(X_1,X_2)$ of $G$ is $\mathcal M$-independent if for
every path $P$ from $\mathcal M$ we have that either $V(P)\subseteq X_1$ or
$V(P)\subseteq X_2$.
\subsubsection*{2-Join decomposition tree $T_G$ of depth $p\geq 1$ of a graph $G$ that has no star cutset and has a 2-join}
\begin{itemize}
\item[(i)] The root of $T_G$ is $(G^0,\mathcal M^0)$, where $G^0:=G$ and $\mathcal M^0=\emptyset$.
\item[(ii)] Each node of $T_G$ is a pair $(H,\mathcal M)$, where $H$ is a graph of $\mathcal D$ and $\mathcal M$ is a set of disjoint flat paths of $H$.
The non-leaf nodes of $T_G$ are pairs $(G^0,\mathcal M^0),\ldots,(G^{p-1},\mathcal M^{p-1})$. Each non-leaf node $(G^i,\mathcal M^i)$ has two children. One is $(G^{i+1},\mathcal M^{i+1})$, the other one is $(G_B^{i+1},\mathcal M_B^{i+1})$.
The leaf-nodes of $T_G$ are the pairs $(G_B^1,\mathcal M_B^1),\ldots,(G_B^{p},\mathcal M_B^{p})$ and $(G^p,\mathcal M^p)$.
Graphs $G_B^1,G_B^2,\ldots,G_B^p,G^p$ have no star cutset nor 2-join.
\item[(iii)] For $i\in\{0,1,\ldots,p-1\}$, $G^i$ has a 2-join $(X_1^i,X_2^i)$ that is extreme with extreme side $X_1^i$ and that is $\mathcal M^i$-independent.
Graphs $G^{i+1}$ and $G_B^{i+1}$ are blocks of decomposition of $G^i$ w.r.t.\ $(X_1^i,X_2^i)$ whose marker paths are of length at least 3.
The block $G_B^{i+1}$ corresponds to the extreme side $X_1^i$, i.e.\ $X_1^i\subseteq V(G_B^{i+1})$.
Set $\mathcal M_B^{i+1}$ consists of paths from $\mathcal M^i$ whose vertices are in $X_1^i$. Note that the marker path used to construct the block $G_B^{i+1}$ does not belong to $\mathcal M_B^{i+1}$.
Set $\mathcal M^{i+1}$ consists of paths from $\mathcal M^i$ whose vertices are in $X_2^i$ together with the marker path $P^{i+1}$ used to build $G^{i+1}$.
\item[(iv)] $\mathcal M_B^1\cup\ldots\cup \mathcal M_B^p\cup \mathcal M^{p}$ is the set of all marker paths used in the construction of the nodes $G^1,\ldots,G^p$ of $T_G$, and the sets $\mathcal M_B^1,\ldots,\mathcal M_B^p,\mathcal M^p$ are pairwise disjoint.
\end{itemize}
Node $(G^p,\mathcal M^p)$ is a leaf of $T_G$ and is called the {\it deepest node of $T_G$}.
The 2-join decomposition tree is described slightly differently in \cite{nicolas.kristina:two}, but the following result follows easily from the proofs in \cite{nicolas.kristina:two}.
\begin{lemma}[\cite{nicolas.kristina:two}]\label{DT-construction}
There is an algorithm with the following specification.
\begin{description}
\item[ Input:] A graph $G$ that has no star cutset and has a 2-join.
\item[ Output:]
A 2-join decomposition tree $T_G$ of depth at most $n$.
\item[ Running time:] $\mathcal O(n^4m)$.
\end{description}
\end{lemma}
\begin{lemma}\label{new3}
If $G\in {\cal D}$ has a 2-join, then $T_G$ can be constructed, and all graphs
$G_B^1,G_B^2,\ldots,G_B^p,G^p$ that correspond to the leaves of $T_G$ are in $\mathcal D_{\textsc{basic}}$.
\end{lemma}
\begin{proof}
By Lemma \ref{Star=Clique} $G$ has no star cutset, and hence we can construct $T_G$.
By Lemma \ref{new2} all graphs that correspond to nodes of $T_G$ belong to ${\cal D}$.
By construction graphs $G_B^1,G_B^2,\ldots,G_B^p,G^p$ have no star cutset nor 2-join.
By Lemma \ref{Star=Clique} it follows that none of them has a clique cutset, and hence by Theorem \ref{decomposeTW}
all of them are basic.
\end{proof}
\section{Maximal cliques and clique coloring}
\label{sec:maxCl}
A vertex $v$ of a graph $G$ is {\it simplicial} if $N(v)$ is a
clique, and it is {\it bisimplicial} if $N(v)$ is a disjoint
union of two cliques that are anticomplete to each other.
Note that every simplicial vertex is also bisimplicial.
We now show that every graph $G\in\mathcal C$ has a bisimplicial vertex, which we then use
to obtain an algorithm for finding a maximum weight clique of $G$ and to prove that $G$ is 3-clique-colorable.
\begin{theorem}\label{bisimplicial}
If $G\in\mathcal C$ then for every clique $K$
of $G$, either $K= V(G)$ or there is a bisimplicial vertex
(of $G$) in $G\setminus K$.
\end{theorem}
\begin{proof}
The proof is by induction on $|V(G)|$.
If $R$ is a triangle-free chordless graph, then $L(R)$ contains
neither a claw nor a diamond, and hence every vertex of $L(R)$ is
bisimplicial. If $G$ is a P-graph, then by
Lemma~\ref{l:twoBranches} it contains at least two branches of length
at least~2. The clique $K$ contains internal vertices of at most one
of these branches. Hence, $G\setminus K$ contains a vertex of
degree~2, that is therefore bisimplicial. So, when $G$
is basic the result holds.
Let us now suppose that $(A,K',B)$ is a split of a clique cutset $K'$
of $G$, and let $G_A$ and $G_B$ be the blocks of decomposition w.r.t.\
this clique cutset. Then clique $K$ is contained in $G_A$ or in
$G_B$. W.l.o.g.\ suppose that $K$ is contained in $G_B$. By induction,
there is a bisimplicial vertex in $G_A\setminus K'$, and
hence in $G\setminus K$.
So, let us suppose that $G$ is not basic and that it does not admit a
clique cutset. By Theorem \ref{decomposeTW}, $G$ admits a 2-join
$(X_1',X_2',A_1',A_2',B_1',B_2')$. Then $K$ is contained in
$G[X_1'\cup A_2']$ or in $G[X_2'\cup B_1']$. W.l.o.g.\ suppose that
$K$ is contained in $G[X_2'\cup B_1']$. Let
$(X_1,X_2,A_1,A_2,B_1,B_2)$ be a split of a minimally-sided 2-join of
$G$ with $X_1\subseteq X_1'$ being a minimal side, and let $G_1$ and
$G_2$ be the corresponding blocks of decomposition.
By Lemma
\ref{Star=Clique}, $G$ does not have a star cutset.
So by Lemma \ref{extreme} (ii) $G_1$ does not have a 2-join.
By Lemma \ref{new2}, $G_1\in {\cal D}$, and so
by Theorem \ref{decomposeTW}, $G_1$ is basic.
Additionally, by Lemma \ref{extreme}~(i), $|A_1|,|B_1|\geq 2$, and
hence, by (iv) and (v) of the definition of a consistent 2-join, $A_2$ and $B_2$ are cliques. Also
we may assume that
$K\cap A_1=\emptyset$, since otherwise for $u\in K\cap A_1$ and $v\in K\cap B_1$,
$u,v\in B_1'$, and hence any $b\in B_2'$ is a vertex of $X_2'$ that has a neighbor in both
$A_1$ and $B_1$, contradicting the assumption that $(X_1,X_2)$ is a 2-join of $G$
such that $X_1\subseteq X_1'$.
It follows that $K\subseteq X_2\cup B_1$.
If $G_1$ is the line graph
of a triangle-free chordless graph, then every vertex of $A_1$ is
bisimplicial in $G_1$ and hence bisimplicial
in $G$ (since $A_2$ is a clique). So, let us assume that $G_1$ is a P-graph. By
Lemma~\ref{l:2joinDeg2}, a vertex $u$ of $X_1$ is of degree 2 in $G$.
If $B_1$ is a clique then, since $|B_1|\geq 2$ and by (viii) of the definition of a consistent 2-join, it
follows that $u\not\in B_1$, and therefore $u\not\in K$ and the result holds (since $A_2$ is a clique).
If $B_1$ is not a clique, then the special clique
of P-graph $G_1$ is contained in $B_1\cup\{b_2\}$ (where $b_2$ is the vertex of the marker path of $G_1$ that is complete to $B_1$),
and we can take any vertex from $X_1\setminus B_1$ (since $A_2$ is a clique).
\end{proof}
\subsection*{Maximum weight clique}
Let $G$ be a graph and $w:V(G)\rightarrow [0,+\infty)$ a {\it weight function on $G$}. A {\it maximum weight clique} of $G$
is a clique $K$ of $G$, such that $\sum_{v\in K} w(v)$ has the maximum value. If $K$ is a maximum weight clique of $G$,
we denote by $\omega_w(G)$ the value of the sum $\sum_{v\in K} w(v)$.
\begin{theorem}\label{SMaxCliqueAlg}
There is an algorithm with the following specifications:
\begin{description}
\item[ Input:] A weighted graph $G\in\mathcal C$.
\item[ Output:]
A maximum weight clique of $G$.
\item[ Running time:] $\mathcal O(n^2m)$.
\end{description}
\end{theorem}
\begin{proof}
By Theorem \ref{bisimplicial}, $G$ contains a vertex $v$ that is bisimplicial. This vertex can be found in time $\mathcal O(nm)$. Let $N(v)$ consist of
(possibly empty) cliques $K_1$ and $K_2$.
Then \[\omega_w(G)=\max\{\omega_w(G\setminus v),\omega_w(\{v\}\cup K_1),\omega_w(\{v\}\cup K_2)\},\] and if $K$ is the maximum weight clique of
$G\setminus v$, then a maximum weight clique of $G$ is $K$, $\{v\}\cup K_1$ or $\{v\}\cup K_2$.
So it is enough to find a maximum weight clique of $G\setminus v$, which can be done by applying (recursively) the same procedure on $G\setminus v$.
The total running time of this algorithm is $\mathcal O(n\cdot nm)=\mathcal O(n^2m)$.
\end{proof}
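The recursion in the proof above can be sketched in Python. This is an illustrative sketch, not the $\mathcal O(n^2m)$ implementation of the theorem: the adjacency-dictionary representation, the function names, and the test for bisimpliciality (here: $N(v)$ is the union of two cliques if and only if the complement of $G[N(v)]$ is bipartite) are our own assumptions.

```python
def bisimplicial_split(adj, v):
    """Return cliques (K1, K2) with K1 ∪ K2 = N(v) if v is bisimplicial,
    else None.  N(v) is the union of two cliques exactly when the
    complement of the graph induced on N(v) is bipartite."""
    nv = sorted(adj[v])
    comp = {u: [t for t in nv if t != u and t not in adj[u]] for u in nv}
    color = {}
    for s in nv:  # 2-color each component of the complement graph
        if s in color:
            continue
        color[s] = 0
        stack = [s]
        while stack:
            u = stack.pop()
            for t in comp[u]:
                if t not in color:
                    color[t] = 1 - color[u]
                    stack.append(t)
                elif color[t] == color[u]:
                    return None  # odd cycle: N(v) is not two cliques
    return ([u for u in nv if color[u] == 0],
            [u for u in nv if color[u] == 1])

def max_weight_clique(adj, w):
    """Peel off a bisimplicial vertex v and compare {v} ∪ K1 and {v} ∪ K2
    against the best clique found in the rest, following the recursion
    omega_w(G) = max{omega_w(G - v), omega_w({v} ∪ K1), omega_w({v} ∪ K2)}.
    Assumes every nonempty induced subgraph has a bisimplicial vertex."""
    adj = {u: set(ns) for u, ns in adj.items()}
    best, best_w = [], 0
    while adj:
        v, split = next((u, s) for u in adj
                        if (s := bisimplicial_split(adj, u)) is not None)
        for cand in ([v] + split[0], [v] + split[1]):
            cw = sum(w[u] for u in cand)
            if cw > best_w:
                best, best_w = cand, cw
        for u in adj.pop(v):  # delete v and continue on G - v
            adj[u].discard(v)
    return best, best_w
```

On a weighted 5-hole, for instance, every vertex is bisimplicial and the routine returns the heaviest edge.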
\subsection*{Clique coloring}
A {\it k-clique-coloring} of a graph $G$ is a function $c:V(G)\rightarrow \{1,2,\ldots,k\}$, such that for every inclusion-wise maximal clique $K$ of size at least 2,
$c(K)=\{c(v)\,:\,v\in K\}$ has at least 2 elements. We say that $G$ is {\it $k$-clique-colorable} if it admits a $k$-clique-coloring. The {\it clique-chromatic number} of $G$, denoted by $\chi_C(G)$, is the smallest number $k$ such that $G$ is $k$-clique-colorable.
There are graphs in $\mathcal C$ that are not 2-clique-colorable, as
shown in Figure \ref{fig:3-clique-color}.
We now prove that every $G\in\mathcal C$ is $3$-clique-colorable.
\begin{figure}[h!]
\begin{center}
\psset{xunit=21.0mm,yunit=21.0mm,radius=0.1,labelsep=0.5mm}
\def\rputnode(#1,#2)#3#4{\Cnode(#1,#2){#3}\nput{0}{#3}{\small$#4$}}
\begin{pspicture}(3,2.4)
\rputnode(1,1){a}{}
\rputnode(2,1){b}{}
\rputnode(1,0){a1}{}
\rputnode(2,0){b1}{}
\rputnode(1.5,1.7){c}{}
\rputnode(0.3,1.5){a2}{}
\rputnode(2.7,1.5){b2}{}
\rputnode(0.8,2.2){c1}{}
\rputnode(2.2,2.2){c2}{}
\ncline{a}{b}
\ncline{a}{c}
\ncline{b}{c}
\ncline{a}{a1}
\ncline{a}{a2}
\ncline{b}{b1}
\ncline{b}{b2}
\ncline{c}{c1}
\ncline{c}{c2}
\ncline{a1}{b1}
\ncline{a2}{c1}
\ncline{b2}{c2}
\end{pspicture}
\end{center}
\caption{\label{fig:3-clique-color} Graph from $\mathcal C$ that is not 2-clique-colorable}
\end{figure}
\begin{theorem}
If $G\in\mathcal C$, then $\chi_C(G)\leq 3$.
\end{theorem}
\begin{proof}
The proof is by induction on $|V(G)|$.
By Theorem \ref{bisimplicial}, $G$ contains a vertex $v$ that is bisimplicial.
By induction, we can 3-clique-color $G\setminus v$.
Let $K_1$ and $K_2$ be disjoint, anticomplete cliques such that $K_1\cup K_2=N(v)$.
To obtain a 3-clique-coloring of $G$ from the 3-clique-coloring of $G\setminus v$ it is enough to color $v$
with a color different from a vertex of $K_1$ and a vertex of $K_2$ (note that if $K_i$ is empty, for some $i\in \{ 1,2\}$, then any of the three
colors satisfies the property).
\end{proof}
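The inductive step in the proof amounts to a simple color choice: since the maximal cliques through $v$ are $\{v\}\cup K_1$ and $\{v\}\cup K_2$, the vertex $v$ only needs to avoid the color of $K_i$ when $K_i$ is nonempty and monochromatic, so at most two of the three colors are forbidden. A sketch in Python (the list representation of the elimination order and the function names are our own assumptions):

```python
def choose_color(coloring, K1, K2, palette=(1, 2, 3)):
    """Pick a color for a bisimplicial vertex v with N(v) = K1 ∪ K2
    (disjoint, anticomplete cliques).  Only a nonempty monochromatic Ki
    forbids its color, so at most two colors are ruled out."""
    forbidden = set()
    for K in (K1, K2):
        cols = {coloring[u] for u in K}
        if len(cols) == 1:  # Ki is monochromatic: v must break it
            forbidden |= cols
    return next(c for c in palette if c not in forbidden)

def three_clique_color(elimination):
    """elimination = [(v, K1, K2), ...], where v is bisimplicial in the
    graph remaining after deleting the earlier vertices; we color back
    to front, mirroring the induction in the proof."""
    coloring = {}
    for v, K1, K2 in reversed(elimination):
        coloring[v] = choose_color(coloring, K1, K2)
    return coloring
```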
\section{Stable set problem}
\label{sec:stable}
Let $G$ be a graph and $w:V(G)\rightarrow [0,+\infty)$ a {\it weight function on $G$}. A {\it maximum weight stable set} of $G$ is a stable set $S$ of
$G$, such that $\sum_{v\in S} w(v)$ has the maximum value. If $S$ is a maximum weight stable set of $G$, we denote by
$\alpha_w(G)$ the value of the sum $\sum_{v\in S} w(v)$.
In this section we give a polynomial-time algorithm for finding a maximum weight stable set of a weighted graph in $\mathcal C$.
To do this we first introduce a different way to decompose w.r.t.\ a 2-join, one that is suited for the stable set problem.
A {\it gem} $\Gamma$ is the graph defined with $V(\Gamma)=\{p_1,p_2,p_3,p_4,z\}$ and $E(\Gamma)=\{p_1p_2,p_2p_3,p_3p_4,p_1z,p_2z,p_3z,p_4z\}$. Vertex $z$ is the {\it center} of the gem $\Gamma$.
Let $(X_1,X_2,A_1,A_2,B_1,B_2)$ be a split of a 2-join of $G\in\mathcal C$. To build a {\it gem-block} $G_2^g$ replace $X_1$ by an induced path $pxyq$
plus a vertex $z$ complete to this path, such that $p$ (resp.\ $q$) is complete to $A_2$ (resp.\ $B_2$)
and these are the only edges between $\{ p,x,y,q,z\}$ and $X_2$.
Note that $G_2^g$ is not necessarily in $\mathcal C$.
Let $a:=\alpha_w(G[A_1\cup C_1])$, $b:=\alpha_w(G[B_1\cup C_1])$, $c:=\alpha_w(G[C_1])$ and $d:=\alpha_w(G[X_1])$. We give the following weights to the new vertices of $G_2^g$: $w(p)=a$, $w(x)=a+b-d$, $w(y)=d$, $w(q)=2d-a$ and $w(z)=c+d$.
\begin{lemma}[\cite{nicolas.kristina:two}]\label{gem}
If $G_2^g$ is the gem-block of $G$, then the weights of $G_2^g$ are non-negative and
$\alpha_w(G_2^g)=\alpha_w(G)+d$.
\end{lemma}
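For intuition, the identity of Lemma \ref{gem} can be checked by comparing, for each way a maximum weight stable set $S$ of $G$ meets $A_2$ and $B_2$, the best contribution of $S\cap X_1$ in $G$ with the best compatible choice inside the gadget $\{p,x,y,q,z\}$ of $G_2^g$ (we only sketch the four cases; the complete argument is in \cite{nicolas.kristina:two}):
\begin{align*}
\text{$S$ meets $A_2$ and $B_2$:} &\quad S\cap X_1\subseteq C_1\ (\text{weight}\leq c), && \{z\}\ \text{gives}\ c+d,\\
\text{$S$ meets $A_2$ only:} &\quad S\cap X_1\subseteq B_1\cup C_1\ (\text{weight}\leq b), && \{x,q\}\ \text{gives}\ (a+b-d)+(2d-a)=b+d,\\
\text{$S$ meets $B_2$ only:} &\quad S\cap X_1\subseteq A_1\cup C_1\ (\text{weight}\leq a), && \{p,y\}\ \text{gives}\ a+d,\\
\text{$S$ meets neither:} &\quad S\cap X_1\subseteq X_1\ (\text{weight}\leq d), && \{p,q\}\ \text{gives}\ a+(2d-a)=2d.
\end{align*}
In each case the gadget contributes exactly $d$ more than $X_1$ does, consistent with $\alpha_w(G_2^g)=\alpha_w(G)+d$.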
The gem blocks are useful for computing $\alpha$, but, as noted above, they need not belong to our class ${\cal C}$, so
we cannot decompose recursively using gem blocks.
Instead, we will first construct the 2-join decomposition tree $T_G$ (as in Section \ref{sec:decTh}) using marker paths of length 3,
and then we will reprocess it by replacing marker paths by gems.
As a consequence, the leaves of our decomposition tree may fail to be basic. So, we define {\it extensions} of basic graphs in the following way.
Let $P$ be a flat path of $G$ of length 3. {\it Extending} $P$ means adding a new vertex $z$
that is complete to $V(P)$ and anticomplete to the rest of the graph.
An {\it extension} of a pair $(G,\mathcal M)$, where $G$ is a graph and $\mathcal M$ a set of vertex-disjoint flat paths of $G$ of length 3,
is any weighted graph obtained by extending the flat paths of $\mathcal M$ and giving any non-negative weights to all the vertices.
An {\it extension} of $G$ is any graph that is an extension of $(G,\mathcal M)$ for some $\mathcal M$.
We define $\mathcal D_{\textsc{basic}}^{\textsc{ext}}$ to be the class of all graphs that are an extension of a graph from $\mathcal D_{\textsc{basic}}$.
Let us examine the graphs in $\mathcal D_{\textsc{basic}}^{\textsc{ext}}$.
If $G$ is a line graph of a triangle-free chordless graph, then an extension $G'$ of $G$
is again a line graph (but not of a triangle-free chordless graph). Indeed, if $R$ is the root graph of $G$, then a flat path of $G$ of length 3 corresponds to a path
$B=b_1b_2b_3b_4b_5$ of $R$ all of whose interior vertices are of degree 2. Hence, $G'=L(R')$, where $R'$ is the graph obtained from $R$ by adding edge $b_2b_4$
for every such $B$.
Similarly, if $G$ is a P-graph with special clique $K$ and skeleton $R$, then each flat path $P=a_1a_2a_3a_4$ of $G$ either corresponds to a path $B=b_1b_2b_3b_4b_5$ of $R$
that belongs to a branch or a limb of $R$, or an endnode of $P$, say $a_4$, belongs to $K$
(in the latter case let $C=c_1c_2c_3c_4$ be the subpath of a limb of $R$ such that $L(R[V(C)])$ is the path $a_1a_2a_3$ in $G$).
To obtain $R'$ from $R$, for each $B$ we add the edge $b_2b_4$, and for each $C$ we add the edge $c_2c_4$.
Then an extension $G'$ of $G$ is obtained from $L(R')$ by adding clique $K$ and edges between them so that
for every $v\in K$, $N_{G'}(v)=N_G(v)\cup Z_v$, where $Z_v$ is the set of all centers of gems that were used to extend flat paths with endnode $v$.
\begin{lemma}\label{SSbasicAlg}
There is an algorithm with the following specifications:
\begin{description}
\item[ Input:] A weighted graph $G'\in\mathcal D_{\textsc{basic}}^{\textsc{ext}}$.
\item[ Output:]
A maximum weight stable set of $G'$.
\item[ Running time:] $\mathcal O(n^4)$.
\end{description}
\end{lemma}
\begin{proof}
Let $G'$ be an extension of $G\in \mathcal D_{\textsc{basic}}$. In order to compute a maximum weight stable set of $G'$, we first need to compute $G$ and then decide whether $G$ is a line graph of a triangle-free chordless graph or a P-graph.
Since $G$ is diamond-free, every diamond of $G'$ is contained in some gem of $G'$. So, to obtain $G$ from $G'$ it is enough to find all gems contained in $G'$. This can be done in time $\mathcal O(n^4)$. To decide whether $G$ is a line graph of a triangle-free chordless graph or a P-graph it is enough to test whether or not $G$ contains a claw (the line graph of a triangle-free chordless graph does not contain a claw, and a P-graph does).
In case $G$ is a P-graph, we find its special clique $K$ by finding all centers of claws and extending this set to a maximal clique (in case there is only one center of a claw, say $u$,
then we check whether $u$ is contained in a clique of size 3, and if it is we extend that clique to a maximal clique, and otherwise $K=\{ u\}$).
All this can also be done in time $\mathcal O(n^4)$.
If $G$ is the line graph of a triangle-free chordless graph, then $G'$ is also a line graph, so the maximum weighted stable set of $G'$ can be computed in time $\mathcal O(n^3)$ using Edmonds' algorithm \cite{edmonds}.
If $G$ is a P-graph, then $G'\setminus K$ is a line graph $L(R')$, and hence a maximum weight stable set of $G'$ is either contained in $L(R')$, or has exactly one vertex of $K$.
So, it is enough to compute a maximum weight stable set of $L(R')$, and a maximum weight stable set of $G'\setminus N[v]$, for each $v\in K$. Since, for each $v\in K$ the graph $G'\setminus N[v]$ is a line graph, we conclude that using Edmonds' algorithm a maximum weight stable set of $G'$ can be computed in time $\mathcal O(n^4+n\cdot n^3)=\mathcal O(n^4)$.
\end{proof}
\begin{lemma}\label{SSalgD}
There is an algorithm with the following specifications:
\begin{description}
\item[ Input:] A weighted graph $G\in\mathcal D$.
\item[ Output:]
A maximum weight stable set of $G$.
\item[ Running time:] $\mathcal O(n^4m)$.
\end{description}
\end{lemma}
\begin{proof}
Check whether $G$ contains a 2-join (this can be done in time ${\cal O} (n^2m)$ by the algorithm in \cite{fast2j}).
If it does not, then by Theorem \ref{decomposeTW} $G\in \mathcal D_{\textsc{basic}}\subseteq \mathcal D_{\textsc{basic}}^{\textsc{ext}}$,
and hence we compute maximum weight stable set in ${\cal O} (n^4)$ time by Lemma \ref{SSbasicAlg}.
Otherwise, we construct the 2-join decomposition tree $T_G$ (of depth $1\leq p\leq n$) using marker paths of length 3
in ${\cal O} (n^4m)$ time by Lemma \ref{DT-construction}.
By Lemma \ref{new3} all graphs $G_B^1, \ldots ,G_B^p,G^p$ that correspond to the leaves of $T_G$ are in $\mathcal D_{\textsc{basic}}$.
We now reprocess $T_G$.
Let $P^1$ be the marker path used in construction of $G^1$. We replace $G^1$ by the corresponding gem block $G^{1g}$.
To do this we need to compute the weights $a^1,b^1,c^1,d^1$ that need to be assigned to the vertices of the gem, and this amounts to
computing four weighted stable set problems on $G^1_B$. Since $G^1_B\in \mathcal D_{\textsc{basic}}\subseteq \mathcal D_{\textsc{basic}}^{\textsc{ext}}$
this can be done in ${\cal O} (n^4)$ time by Lemma \ref{SSbasicAlg}.
By Lemma \ref{gem} $\alpha_w (G^{1g})=\alpha_w (G)+d^1$. In all the other graphs that correspond to the nodes of $T_G$ and contain $P^1$,
we extend $P^1$ using weights $a^1,b^1,c^1,d^1$.
We continue this process for $i=2, \ldots ,p$.
So if $P^i$ is the marker path used in construction of $G^i$, we compute the weights needed to transform it into a gem block, by computing four
weighted stable set problems on $G^i_B$ whose paths in ${\cal M}^i_B$ have all already been extended. Since this graph is in
$\mathcal D_{\textsc{basic}}^{\textsc{ext}}$, this can be done in ${\cal O} (n^4)$ time by Lemma \ref{SSbasicAlg}.
In all the graphs that correspond to nodes of $T_G$ that contain $P^i$ we extend $P^i$ using calculated weights $a^i,b^i,c^i,d^i$.
The last graph we reprocess is $G^p$, and let us denote by $G^{pg}$ the graph that we obtain at the end of the reprocessing procedure.
By repeated application of Lemma \ref{gem},
$\alpha_w (G^{pg})=\alpha_w (G)+d^1+\ldots +d^p$, and so we deduce $\alpha_w (G)$.
The proof of Lemma \ref{gem} actually shows how to keep track of a stable set of $G$ whose weight is $\alpha_w (G)$.
Since $p\leq n$, this algorithm can be implemented to run in time
$\mathcal O(n^4m+n\cdot n^4)=\mathcal O(n^4m)$.
\end{proof}
\begin{theorem}\label{SSalg}
There is an algorithm with the following specifications:
\begin{description}
\item[ Input:] A weighted graph $G\in\mathcal C$.
\item[ Output:]
A maximum weight stable set of $G$.
\item[ Running time:] $\mathcal O(n^6m)$.
\end{description}
\end{theorem}
\begin{proof}
By Theorem \ref{th:tarjan} we construct the clique cutset decomposition tree $T$ of $G$ in ${\cal O} (nm)$ time.
So all the leaves of $T$ are graphs from $\mathcal D$.
By using Tarjan's method from \cite{tarjan} to compute a maximum weight stable set of $G$ it is enough to compute $\mathcal O(n^2)$ maximum weight stable sets
on the leaves of $T$ (each one of which can be computed in ${\cal O} (n^4m)$ time by Lemma \ref{SSalgD}).
Therefore, this algorithm can be implemented to run in time
$\mathcal O(nm+n^2\cdot n^4m)=\mathcal O(n^6m)$.
\end{proof}
\section{Vertex coloring}
\label{sec:vCol}
A {\it k-coloring} of a graph $G$ is a function $c:V(G)\rightarrow \{1,2,\ldots,k\}$, such that for every $uv\in E(G)$, $c(u)\neq c(v)$. We say that $G$ is {\it $k$-colorable} if it admits a $k$-coloring. The {\it chromatic number} of $G$, denoted by $\chi(G)$, is the smallest number $k$ such that $G$ is $k$-colorable.
In this section we give a polynomial-time coloring algorithm for ${\cal C}$ and prove that every $G\in\mathcal C$ is $\max\{3,\omega(G)\}$-colorable.
A graph is {\it sparse} if every edge is incident with at least one vertex of degree at most 2. Note that every sparse graph is chordless. A proper {\it 2-cutset} of a connected graph $G$ is a pair of non-adjacent vertices $a,b$ such that there is a partition $(X,Y,\{a,b\})$ of $V(G)$ with $X$ and $Y$ anticomplete, both $G[X\cup\{a,b\}]$ and $G[Y\cup\{a,b\}]$ contain an $ab$-path and neither $G[X\cup\{a,b\}]$ nor $G[Y\cup\{a,b\}]$ is a chordless path. We say that $(X,Y,\{ a,b\} )$ is a {\it split} of this proper 2-cutset. The {\it blocks of decomposition} $G_X$ and $G_Y$ w.r.t.\ this cutset are defined as follows. Block $G_X$ (resp.\ $G_Y$) is the graph obtained by taking $G[X\cup\{a,b\}]$ (resp.\ $G[Y\cup\{a,b\}]$) and adding a new vertex $u$ (resp.\ $v$) complete to $\{a,b\}$ (and anticomplete to the rest).
A decomposition theorem for the class of chordless graphs is proved in \cite{lmt:isk4}. An improvement of this theorem, that is an {\it extreme decomposition} for this class, is proved in \cite{mft:chordless}. We give both results in the following theorem.
\begin{theorem}[\cite{lmt:isk4,mft:chordless}]\label{DecomposChordless}
If $G$ is a 2-connected chordless graph, then either $G$ is sparse or $G$ admits a proper 2-cutset. Additionally, if $(X,Y,\{ a,b\} )$ is a split of a proper 2-cutset of $G$ such that $|X|$ is minimum among all such splits, then $a$ and $b$ both have at least two neighbors in $X$, and $G_X$ is sparse.
\end{theorem}
A {\it k-edge-coloring} of a graph $G$ is a function $c:E(G)\rightarrow \{1,2,\ldots,k\}$, such that for every two distinct edges with a common node, say $uv$ and $uw$,
$c(uv)\neq c(uw)$. $G$ is {\it $k$-edge-colorable} if it admits a $k$-edge-coloring. The {\it edge-chromatic number} of $G$ is the smallest number $k$ such that $G$ is $k$-edge-colorable.
The edge-coloring of chordless graphs is studied in \cite{mft:chordless}, where the authors obtained the following result.
For a graph $G$, let $\Delta(G)=\max\{\deg(v)\,:\,v\in V(G)\}$ and $\delta(G)=\min\{\deg(v)\,:\,v\in V(G)\}$.
\begin{theorem}[\cite{mft:chordless}]\label{ColorChordless}
Every chordless graph $G$ is $\max\{3,\Delta(G)\}$-edge-colorable. Moreover, there is an $\mathcal O(n^3m)$-time algorithm that finds such an edge-coloring.
\end{theorem}
In this section we will prove a variant of the previous theorem (see Lemma \ref{TwoColoredBasic}). The following result is an important step towards obtaining a $\max\{3,\omega(G)\}$-coloring for our basic classes.
\begin{lemma}[\cite{mft:chordless}]\label{ListChord}
Let $G=(V,E)$ be a sparse graph and suppose that a list $L_{uv}$ of colors is associated with each edge $uv\in E$. Let $S$ be a stable set of $G$ that contains all vertices of $G$ of degree at least 3. Suppose that for every vertex $u\in S$, all edges incident to $u$ receive the same list. If for each edge $uv\in E$, $|L_{uv}|\geq \max\{\deg(u),\deg(v)\}$ and for each edge $uv\in E$ with no end in $S$, $|L_{uv}|\geq 3$, then there is an edge-coloring $c$ of $G$ such that, for each edge $uv\in E$, $c(uv)\in L_{uv}$.
Furthermore, there is an $\mathcal O(nm)$-time algorithm that finds such an edge-coloring $c$.
\end{lemma}
Let $v_1,\ldots,v_k$, where $1\leq k\leq 3$, be some vertices of a branch $B$ of $G$, such that they do not induce a path of length 2. Furthermore, let the list of colors $L_i$, $|L_i|\geq 2$, be associated with $v_i$, for $1\leq i\leq k$, such that if $v_i$ and $v_j$ are adjacent, for some $1\leq i<j\leq k$, then $L_i\cap L_j\neq \emptyset$. Note that branch $B$ can be edge-colored with $\left|\bigcup_{i=1}^k L_i\right|$ colors, so that every edge incident with $v_i$ is colored with a color from $L_i$, for $1\leq i\leq k$. Indeed, if no two of the vertices from the set $\{v_1,\ldots,v_k\}$ are adjacent, then we can color $B$ greedily (starting from one endnode of $B$). If w.l.o.g.\ $v_1$ and $v_2$ are adjacent, then we can obtain the desired edge-coloring by first coloring the edge $v_1v_2$ (with a color from $L_1\cap L_2$) and then greedily coloring the rest of the branch (starting from the other edge incident with $v_1$ and the other edge incident with $v_2$). We say that such an edge-coloring of the branch $B$ is {\em according to the lists} $L_1, \ldots,L_k$.
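The greedy pass described above, for the case where no two marked vertices are adjacent, can be sketched as follows (a minimal sketch under our own representation: the branch is given as a left-to-right sequence of per-edge color lists, where an edge incident with a marked vertex $v_i$ carries the list $L_i$ and every other edge carries the full palette; each list has size at least 2):

```python
def color_branch_edges(edge_lists):
    """Greedily edge-color a path (branch) left to right: each edge comes
    with a list of allowed colors of size >= 2, and consecutive edges must
    receive distinct colors, so avoiding the previous color always works."""
    coloring, prev = [], None
    for allowed in edge_lists:
        c = next(c for c in allowed if c != prev)
        coloring.append(c)
        prev = c
    return coloring
```

The case of two adjacent marked vertices is the one handled separately in the text: their common edge is colored first, with a color from $L_i\cap L_j$, and the greedy pass then runs outward from it in both directions.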
A vertex $v$ of $G$ is {\em free} if it is of degree 2 and both of its neighbors are also of degree 2.
Vertices $u$ and $v$ of $G$ are {\it parallel} if they are of degree 2, and contained in distinct parallel branches of $G$.
A {\it ring} of $G$ is a hole of $G$ that has at most one vertex that is of degree at least 3 in $G$.
A {\it small theta} of $G$ is an induced subgraph of $G$ isomorphic to $K_{2,3}$ (where $K_{2,3}$ is the complete bipartite graph whose sides have sizes 2 and 3). Note that if $H$ is a small theta of a sparse graph $G$, then only the degree 3 vertices of $H$ can have neighbors in $G\setminus H$.
A set $S=\{v_1, \ldots,v_k\}$, $1\leq k\leq 3$, of vertices is {\it good} if the following hold:
\begin{itemize}
\item[(i)] at most one of $v_1,\ldots,v_k$ is of degree 2 and not free;
\item[(ii)] if $k=3$, then $S$ is not contained in a ring of $G$ of length 5;
\item[(iii)] if $k=3$ and some $v_i\in S$ is of degree 2 and not free, then the two vertices from $S\setminus\{v_i\}$ are not adjacent.
\end{itemize}
\begin{lemma}\label{2list2sparse}
Let $G$ be a triangle-free sparse graph with $\delta(G)=2$, and let $v_1,\ldots,v_k\in V(G)$, $1\leq k\leq 3$, be such that $\{v_1,\ldots,v_k\}$ does not induce a path of length 2. To vertices $v_i$, for $1\leq i\leq k$, the lists of colors $L_i\subseteq\{1,\ldots,s\}$, where $s=\max\{3,\Delta(G)\}$, are associated so that $|L_i|\geq\deg(v_i)$. Furthermore, if $v_i$ and $v_j$ are adjacent, for some $1\leq i<j\leq k$, then $L_i\cap L_j\neq \emptyset$. If one of the following holds:
\begin{itemize}
\item[(1)] $k=3$, $\{v_1,\ldots,v_k\}$ is contained in a ring of length 5 and $\left|L_1\cup L_2\cup L_3\right|\geq 3$;
\item[(2)] $k\leq 3$ and the set $\{v_1,\ldots,v_k\}$ is good;
\item[(3)] $k=2$, and if $v_1$ and $v_2$ are both of degree 2, then $\{v_1,v_2\}$ is not contained in a small theta of $G$;
\end{itemize}
then there is an $s$-edge-coloring of $G$, such that every edge incident with $v_i$ is colored with a color from $L_i$, for $1\leq i\leq k$. Furthermore, there is an $\mathcal O(nm)$-time algorithm that finds such an edge-coloring.
\end{lemma}
\begin{proof}
We prove the result for each of the cases (1), (2) and (3) separately.
\medskip
\noindent
(1) Let $B=u_1u_2u_3u_4u_5u_1$ be the ring of $G$ that contains $\{v_1,\ldots,v_k\}$, and w.l.o.g.\ $v_1=u_1$, $v_2=u_2$ and $v_3=u_4$. Furthermore, let $v$ be a
vertex of $B$ with maximum degree. As a first step, we $s$-edge-color $B$.
First, assume that $|L_1\cup L_2|\geq 3$. Then, we color $u_1u_2$ with a color $c\in L_1\cap L_2$, then color edges $u_1u_5$ and $u_2u_3$ with distinct colors from $L_1\setminus\{c\}$ and $L_2\setminus\{c\}$, respectively, and finally color edges $u_3u_4$ and $u_4u_5$ with distinct colors from $L_3$. So, w.l.o.g.\ let $L_1=L_2=\{1,2\}$. Then $3\in L_3$, and let $c\in L_3\setminus\{3\}$. Now, we color the edges $u_1u_5$ and $u_2u_3$ with a color $c'\in\{1,2\}\setminus\{c\}$, $u_1u_2$ with the color from $\{1,2\}\setminus\{c'\}$, $u_3u_4$ with 3 and $u_4u_5$ with $c$.
So, we have obtained an $s$-edge-coloring of $B$. Let $G'$ be the graph obtained from $G$ by removing all vertices of $B$ except $v$. Now, to complete the $s$-edge-coloring of $G$ we $s$-edge-color $G'$ using Lemma \ref{ListChord} (such that all edges receive the list $\{1,2,\ldots,s\}$), and then permute the colors (in this edge-coloring of $G'$) such that all edges incident with $v$ (in $G$) have different colors.
\medskip
\noindent(2) We prove the claim by induction on $|V(G)|$. If $k=1$, then to obtain an $s$-edge-coloring of $G$ we first $s$-edge-color $G$ by Lemma \ref{ListChord} (such that all edges receive the list $\{1,2,\ldots,s\}$), and then permute the colors such that edges incident with $v_1$ have colors from the list $L_1$. So, we may assume that $k\geq 2$. Also, by induction, we may assume that $G$ is connected. We now consider the following cases.
\medskip
\noindent{\bf Case 1.} $\Delta(G)=2$.
\medskip
\noindent If there is an edge $e$ of $G$ that is not incident with $v_1$, $v_2$ nor $v_3$, then to obtain an edge coloring of $G$ we first edge-color the path $G\setminus\{e\}$ according to the lists $L_1$, $L_2$ and $L_3$ and then color the edge $e$. So, we may assume that every edge of $G$ is incident with at least one of $v_1$, $v_2$ or $v_3$.
Let $k=2$. Then $G$ is a hole of length 4 and vertices $v_1$ and $v_2$ are not adjacent. So, to obtain an edge coloring of $G$ we first color edges incident with $v_1$ (with colors from $L_1$) and then edges incident with $v_2$ (with colors from $L_2$).
Let $k=3$. Since $G$ is not a hole of length 5, we have that $G=u_1u_2u_3u_4u_5u_6u_1$, and no two vertices from $\{v_1,v_2,v_3\}$ are adjacent. So, we may assume w.l.o.g.\ that $v_1=u_1$, $v_2=u_3$ and $v_3=u_5$. If there is a color $c\in L_1\cap L_2\cap L_3$, then to obtain an edge-coloring of $G$ we first color edges $u_1u_2$, $u_3u_4$ and $u_5u_6$ with $c$, and then color the remaining edges according to the lists $L_1$, $L_2$ and $L_3$. So, let us assume that $L_1\cap L_2\cap L_3=\emptyset$. Then to obtain an edge-coloring of $G$ we first greedily edge-color the path $u_6u_1u_2u_3$ according to the lists $L_1$ and $L_2$. Note that either the colors of $u_6u_1$ and $u_2u_3$ are distinct, or $u_6u_1$ and $u_2u_3$ are colored with the same color which is not in $L_3$. In both cases we can color the remaining edges of $G$ according to the list $L_3$.
\medskip
\noindent{\bf Case 2.} $v_1$ is contained in a ring.
\medskip
\noindent
By Case 1 we may assume that $\Delta (G) \geq 3$.
Let $B$ be the ring of $G$ that contains $v_1$, let $v$ be the vertex of $B$ of degree at least 3 and let $G'$ be the graph obtained from $G$ by deleting vertices of $B\setminus v$ (and edges incident with these vertices). Note that $G'$ is triangle-free sparse and that $v$ is of degree at least 3, of degree 1 or is free in $G'$. Furthermore, if $v$ is of degree 1 in $G'$, then let $P=v\ldots v'$ be the limb of $G'$ that contains $v$; otherwise $P=\{v\}$ and $v=v'$. Also, let $V''=(V(G)\setminus(V(B)\cup V(P)))\cup\{v'\}$ and $G''=G[V'']$.
In this case we will assume that $k=3$, that is, if $k=2$, then we take $v_3$ to be an arbitrary vertex such that $\{v_1,v_2,v_3\}$ satisfies conditions of this lemma, and $L_3=\{1,2,\ldots,s\}$ (such $v_3$ exists: if $v_2\not\in B$, then we may define $v_3$ to be $v$ or a free vertex of $B$; if $v_2\in B$, then we may define $v_3$ to be a vertex from $V''\setminus\{v\}$ of degree at least 3 or free). It suffices to consider the following cases.
\smallskip
\noindent{\bf Case 2.1.} $\{v_1,v_2,v_3\}\subseteq V(B)$.
\smallskip
\noindent To obtain an $s$-edge-coloring of $G$ we first edge-color $B$ according to the lists $L_1,\ldots,L_k$ (as in Case 1). Then, we $s$-edge-color $G'$ using Lemma \ref{ListChord} (such that all edges receive the list $\{1,2,\ldots,s\}$), and then permute the colors (in this edge-coloring of $G'$) such that all edges incident with $v$ (in $G$) have different colors.
\smallskip
\noindent{\bf Case 2.2.} $v_2\in B$ and $v_3\not\in B$.
\smallskip
\noindent
W.l.o.g.\ $v_1\neq v$.
First assume that $P=\{v\}$. If $v_3$ is not adjacent to $v$, then to obtain the desired edge-coloring of $G$ we first edge-color $B$ according to the lists $L_1$ and $L_2$ (as in Case 1). Let $L$ be the set of colors used for coloring edges incident with $v$ in this coloring. Then, to complete the $s$-edge-coloring of $G$ we, by induction, edge-color $G'$ such that the lists $L'$ and $L_{3}$ are associated with vertices $v$ and $v_3$, where $L'=\{1,2,\ldots,s\}\setminus L$ if $v_2\neq v$, or $L'=L_2\setminus L$ if $v=v_2$. So, let us assume that $v_3$ is adjacent to $v$. Then $v_1$ and $v_2$ are not adjacent and not adjacent to $v$, and they are free or $v_2=v$. If $v=v_2$, then to obtain the desired edge-coloring of $G$ we first, by induction, edge-color $G'$ such that the lists $L_2$ and $L_{3}$ are associated with vertices $v_2$ and $v_3$, and then edge-color $B$ (as in Case 1) such that the lists $L_1$ and $\{1,2,\ldots,s\}\setminus L_2'$ are associated with $v_1$ and $v_2$, where $L_2'$ is the set of colors used for coloring edges incident with $v$ in the edge-coloring of $G'$. Hence, suppose that $v\neq v_2$. Let $c\in L_3$. Then to obtain the desired edge-coloring of $G$ we first edge-color $B$ such that the lists $L_1$, $L_2$ and $\{1,2,\ldots,s\}\setminus \{c\}$ are associated with $v_1$, $v_2$ and $v$. Then, to complete the $s$-edge-coloring of $G$ we, by induction, edge-color $G'$ such that the lists $\{1,2,\ldots,s\}\setminus L$ and $L_{3}$ are associated with vertices $v$ and $v_3$, where $L$ is the set of colors used for coloring edges incident with $v$ in this edge-coloring of $B$.
Next, assume that $P\neq \{v\}$. If $v_3\not\in V(P)$, then we first edge-color $B$ such that the lists $L_1$ and $L_2$ are associated with $v_1$ and $v_2$ (as in Case 1). Then we edge-color $P$ such that the edge incident with $v$ is colored with a color not used for coloring edges incident with $v$ in this edge-coloring of $B$. Finally, we edge color $G''$ by induction, such that the lists $L''=\{1,2,\ldots,s\}\setminus\{c\}$ and $L_3$ are associated with $v'$ and $v_3$, where $c$ is the color used for coloring edge of $P$ incident with $v'$ (note that $L''\cap L_3\neq\emptyset$, since $|L''|=s-1$). So, suppose that $v_3\in V(P)$. If $v_3$ is not adjacent to $v$, then we first edge-color $B$ such that the lists $L_1$ and $L_2$ are associated with $v_1$ and $v_2$ (as in Case 1). Then we greedily edge-color $P$ from $v$ to $v'$, such that the edge incident with $v$ is colored with a color not used for coloring edges incident with $v$ in this edge-coloring of $B$, and that edges incident with $v_3$ are colored with colors from $L_3$. To complete edge-coloring of $G$, we edge color $G''$ using Lemma \ref{ListChord} (such that all edges receive the list $\{1,2,\ldots,s\}$), and then permute the colors (in this edge-coloring of $G''$), such that all edges incident with $v'$ (in $G$) have different colors. Finally, suppose that $v$ and $v_3$ are adjacent.
Then $v_3$ is of degree 2 and not free, and so $v_1$ and $v_2$ are not adjacent, $v_1$ is free and $v_2$ is either free or $v_2=v$. In particular, no vertex of
$\{ v_1,v_2\}$ is adjacent to $v$. If $L_2\cap L_3\neq \emptyset$ then
let $c\in L_2\cap L_3$, and otherwise let $c$ be any color from $L_3$.
Then we first edge-color $B$ (as in Case 1), such that: if $v\neq v_2$, then lists $L_1$, $L_2$ and $\{1,2,\ldots,s\}\setminus\{c\}$ are associated with $v_1$, $v_2$ and $v$; if $v=v_2$, then lists $L_1$ and $L_2\setminus\{c\}$ are associated with $v_1$ and $v_2$. Then we greedily edge-color $P$ such that $vv_3$ is colored with $c$ and the other edge from $P$ incident with $v_3$ with a color from $L_3\setminus\{c\}$. Finally, we edge-color $G''$ using Lemma \ref{ListChord} (such that all edges receive the list $\{1,2,\ldots,s\}$), and then permute the colors (in this edge-coloring of $G''$) such that all edges incident with $v'$ (in $G$) have different colors.
\smallskip
\noindent{\bf Case 2.3.} $v_2,v_3\not\in V(B)$.
\smallskip
\noindent First, assume that $P=\{v\}$ (i.e.\ $v=v'$). If $v_1=v$, then to obtain an $s$-edge-coloring of $G$, we first $s$-edge-color $G'$ by induction, such that the lists $L_1$, $L_2$ and $L_3$ are associated with $v_1$, $v_2$ and $v_3$ (note that $|L_1|\geq 4$, and hence $|L_1\cup L_2\cup L_3|\geq 4$). Finally, we edge-color $B$ and then permute the colors in this edge-coloring of $B$ such that all edges incident with $v$ receive different colors. So, suppose $v\neq v_1$. If $v_1$ is not adjacent to $v$, then we first $s$-edge-color $G'$ by induction, such that the lists $L_2$ and $L_3$ are associated with $v_2$ and $v_3$, and then edge-color $B$ (as in Case 1) such that the lists $L_1$ and $\{1,2,\ldots,s\}\setminus L$ are associated with $v_1$ and $v$, where $L$ is the set of colors used for coloring edges incident with $v$ in this edge-coloring of $G'$. Finally, suppose that $v_1$ is adjacent to $v$. Then $v_1$ is of degree 2 and not free, so $v_2$ and $v_3$ are either of degree at least 3 or free, and hence $\{v_2,v_3\}$ is anticomplete to $v$.
Let $c$ be a color from $L_1$, and $L'=\{1,2,\ldots,s\}\setminus\{c\}$. Hence, to obtain an $s$-edge-coloring of $G$, we first edge-color $G'$ by induction, such that the lists $L'$, $L_2$ and $L_3$ are associated with $v$, $v_2$ and $v_3$ (note that $|L'|=s-1\geq 3$, and hence $|L'\cup L_2\cup L_3|\geq 3$), and then edge-color $B$ (as in Case 1) such that the lists $L_1$ and $\{1,2,\ldots,s\}\setminus L''$ are associated with $v_1$ and $v$, where $L''$ is the set of colors used for coloring edges incident with $v$ in this edge-coloring of $G'$.
Now, suppose that $v\neq v'$. If $v_2,v_3\in V''$, then we first, by induction, $s$-edge-color $G''$ such that the lists $L_2$ and $L_3$ are associated with $v_2$ and $v_3$. Then we edge-color $P$ greedily from $v'$ to $v$ (note that $v$ and $v'$ are not adjacent), such that the edge incident with $v'$ is colored with a color not used for coloring edges incident with $v'$ in this edge-coloring of $G''$, and that the edge incident with $v$ is colored with a color from $L_1$. Let this color be $c$. To complete edge-coloring of $G$, we edge-color $B$ such that: if $v_1=v$, then the list $L_1\setminus \{c\}$ is associated with $v_1$; if $v_1\neq v$, then the lists $L_1$ and $L=\{1,2,\ldots,s\}\setminus \{c\}$ are associated with $v_1$ and $v$ (note that $L\cap L_1\neq\emptyset$, since $|L|=s-1$). Next, suppose that $v_2,v_3\in V(P)$. In this case, we first edge-color $P$ such that the lists $L_2$, $L_3$ and possibly $L_1$ (if $v_1=v$) are associated with $v_2$, $v_3$ and possibly $v_1$ (if $v_1=v$). Let $c$ be the color of the edge incident with $v$ in this edge-coloring of $P$. Then we edge-color $B$ such that: if $v_1=v$, then the list $L_1\setminus \{c\}$ is associated with $v_1$; if $v_1\neq v$, then the lists $L_1$ and $L=\{1,2,\ldots,s\}\setminus \{c\}$ are associated with $v_1$ and $v$ (note that $L\cap L_1\neq\emptyset$, since $|L|=s-1$). To complete edge-coloring of $G$, we $s$-edge-color $G''$ using Lemma \ref{ListChord} (such that all edges receive the list $\{1,2,\ldots,s\}$), and then permute the colors (in this edge-coloring of $G''$) such that all edges incident with $v'$ (in $G$) have different colors.
Finally, we may assume that $v_2\in V(P)\setminus\{v'\}$ and $v_3\in V''\setminus\{v'\}$. To obtain an $s$-edge-coloring of $G$, we first $s$-edge-color $P$ such that the lists $L_2$ and possibly $L_1$ (if $v_1=v$) are associated with $v_2$ and possibly $v_1$ (if $v_1=v$). Let $c$ (resp.\ $c'$) be the color of the edge incident with $v$ (resp.\ $v'$) in this edge-coloring of $P$. Next, by induction, we $s$-edge-color $G''$ such that the lists $L_3$ and $L'=\{1,2,\ldots,s\}\setminus\{c'\}$ are associated with $v_3$ and $v'$ (note that $L_3\cap L'\neq \emptyset$, since $|L'|=s-1$). To complete the $s$-edge-coloring of $G$, we $s$-edge-color $B$ such that: if $v_1=v$, then the list $L_1\setminus \{c\}$ is associated with $v_1$; if $v_1\neq v$, then the lists $L_1$ and $L=\{1,2,\ldots,s\}\setminus \{c\}$ are associated with $v_1$ and $v$ (note that $L\cap L_1\neq\emptyset$, since $|L|=s-1$).
\medskip
By Case 2, from now on we may assume that no vertex from $\{v_1,\ldots,v_k\}$ is contained in a ring, and by Case 1 we may assume that $\Delta (G)\geq 3$.
In particular, every vertex of $\{ v_1,\ldots ,v_k\}$ is contained in a branch of $G$.
\medskip
\noindent{\bf Case 3.} $v_1$ is free.
\medskip
\noindent Let $B=u_1\ldots u_2$ be the branch of $G$ that contains $v_1$, and let $G'$ be the graph obtained from $G$ by deleting internal vertices of $B$ (and edges incident with these vertices). Note that since $G$ is sparse, vertices $u_1$ and $u_2$ are free or of degree at least 3 in $G'$, and every neighbor of $u_1$ and $u_2$ is of degree 2 in $G$ and $G'$. In particular, for every $v\in V(G')\setminus\{u_1,u_2\}$, if $\{u_1,u_2,v\}$ is not contained in a ring of length 5 of $G'$, then the set $\{u_1,u_2,v\}$ is good in $G'$.
\smallskip
\noindent{\bf Case 3.1.} Neither $v_2$ nor $v_3$ is adjacent to both $u_1$ and $u_2$.
\smallskip
\noindent First, let us assume that $V(B)\cap \{v_2,\ldots,v_k\}\neq \emptyset$. If $\{v_1,\ldots,v_k\}\subseteq V(B)$, then to obtain an $s$-edge-coloring of $G$ we first $s$-edge-color $B$ according to the lists $L_1,\ldots,L_k$, and then, by induction, $s$-edge-color $G'$ with the list $L_{i}'$ associated with $u_i$, for $i\in\{1,2\}$. The list $L_i'$, for $i\in\{1,2\}$, is defined as follows: $L_{i}'=\{1,2,\ldots,s\}\setminus \{c_{u_i}\}$ if $u_i\not\in\{v_2,\ldots,v_k\}$ and $L_{i}'=L_{j}\setminus \{c_{u_i}\}$ if $u_i=v_j$, for some $2\leq j\leq k$, where $c_{u_1}$ (resp.\ $c_{u_2}$) is the color of the edge incident with $u_1$ (resp.\ $u_2$) in this edge-coloring of $B$.
Let us now assume that w.l.o.g.\ $v_2\in B$, but $v_3\not\in B$ (in this case $k=3$), and w.l.o.g.\ let $v_2$ be in the $u_1v_1$-subpath of $B$. If $v_2$ and $v_3$ are not adjacent (i.e.\ $v_2\neq u_1$, or $v_2=u_1$ and $v_3$ is not adjacent to $u_1$), then to obtain the desired $s$-edge-coloring of $G$ we first $s$-edge-color $B$ according to the lists $L_1$ and $L_2$, and such that the edges incident with $u_1$ and $u_2$ receive different colors (this can be done since the edge incident with $u_2$ is the last that we color, and we have at least 2 options for coloring it). Then, by induction, we $s$-edge-color $G'$ such that the lists $L_{1}'$, $L_{2}'$ and $L_3$ are associated with vertices $u_1$, $u_2$ and $v_3$ ($L_{1}'$ and $L_{2}'$ are defined as in the previous part of the proof, and they satisfy $|L_1'\cup L_2'|\geq 3$, since $L_1'\neq L_2'$). So, let us assume that $v_3$ is adjacent to $u_1$ and $v_2=u_1$. Let $c\in L_2\cap L_3$. To obtain the desired $s$-edge-coloring of $G$, we first greedily $s$-edge-color $B$, starting with the edge incident with $u_1$ and giving it a color $c'\in L_2\setminus\{c\}$, and such that the color of the edge incident with $u_2$ is not $c'$. Then, by induction, we $s$-edge-color $G'$ such that the lists $L_{1}''=L_2\setminus\{c'\}$, $L_{2}''=\{1,2,\ldots,s\}\setminus\{c_{u_2}\}$ and $L_3$ are associated with vertices $u_1$, $u_2$ and $v_3$, where $c_{u_2}$ is the color of the edge incident with $u_2$ in the edge-coloring of $B$ (note that $|L_1''\cup L_2''|\geq 3$, since $L_1''\neq L_2''$).
Finally, let us assume that $v_2$ and $v_3$ are not in $B$.
Observe that $\{ u_1,v_2,v_3,u_2\}$ cannot induce a path, since otherwise both $v_2$ and $v_3$ would be of degree 2 and not free. Therefore, by the case we are in,
at most one of the sets $\{v_2,v_3,u_1\}$ and $\{v_2,v_3,u_2\}$ induces a path of length 2. W.l.o.g.\ assume that $\{v_2,v_3,u_1\}$ does not induce a path of length 2. Furthermore, if $u_1$ is adjacent to $v_2$ or $v_3$, then that vertex is neither free nor of degree at least 3 in $G$.
Also, by the case we are in, $\{ v_2,v_3,u_1\}$ cannot be contained in a ring of $G'$ of length 5.
Hence, the set $\{v_2,v_3,u_1\}$ is good in $G'$. Now, to obtain the desired $s$-edge-coloring of $G$ we first, by induction, $s$-edge-color $G'$ such that the lists $\widetilde{L}_{1}$, $L_{2}$ and possibly $L_3$ (if $k=3$) are associated with vertices $u_1$, $v_2$ and possibly $v_3$ (if $k=3$), where $\widetilde{L}_1=\{1,2,\ldots,s\}\setminus\{c_1'\}$ ($c_1'\not\in L_1$ if $|L_1|=2$, or arbitrary otherwise). Then branch $B$ is greedily 3-edge-colored in the following way: we color the edge incident with $u_1$ with color $c_1'$, the edge incident with $u_2$ with a color not used for coloring edges incident with $u_2$ in $G'$, then color the $v_1u_2$-subpath of $B$ (greedily from $u_2$ to $v_1$) and finally color the $u_1v_1$-subpath of $B$ (greedily from $v_1$ to $u_1$).
\smallskip
\noindent{\bf Case 3.2.} $v_2$ is adjacent to both $u_1$ and $u_2$.
\smallskip
\noindent In this case, $v_2$ is of degree 2 and not free in $G$. Also, if $k=3$, then $v_3$ must be free or of degree at least 3 in $G$, and it follows that $v_3$ is anticomplete
to $\{ u_1,u_2\}$ and so is free or of degree at least 3 in $G'$.
First, assume that $k=2$ or $v_3\not\in B$. Note that in this case, if $k=3$, then $\{u_1,v_2,v_3\}$ is not contained in a ring of $G'$ of length 5. So, to obtain the desired $s$-edge-coloring of $G$ we do the following. First, by induction, we $s$-edge-color $G'$ so that the lists $\widetilde{L}_{1}$, $L_{2}$ and possibly $L_3$ (if $k=3$) are associated with vertices $u_1$, $v_2$ and possibly $v_3$ (if $k=3$), where $\widetilde{L}_1=\{1,2,\ldots,s\}\setminus\{c_1'\}$ ($c_1'\not\in L_1$ if $|L_1|=2$, or arbitrary otherwise). Then branch $B$ is greedily 3-edge-colored in the following way: we color the edge incident with $u_1$ with color $c_1'$, the edge incident with $u_2$ with a color not used for coloring edges incident with $u_2$ in $G'$, then color the $v_1u_2$-subpath of $B$ (greedily from $u_2$ to $v_1$) and finally color the $u_1v_1$-subpath of $B$ (greedily from $v_1$ to $u_1$).
Next, let us assume that $v_3$ is in $B$ and free. Then $v_1$ and $v_3$ are not adjacent, and let us w.l.o.g.\ assume that $v_3$ is in the $v_1u_2$-subpath of $B$. Then to obtain the desired $s$-edge-coloring of $G$ we first, by induction, $s$-edge-color $G'$ such that the lists $\widetilde{L}_1$ and $L_2$ are associated with vertices $u_1$ and $v_2$, where $\widetilde{L}_1=\{1,2,\ldots,s\}\setminus\{c_1'\}$ ($c_1'\not\in L_1$ if $|L_1|=2$, or arbitrary otherwise). Then branch $B$ is greedily 3-edge-colored in the following way: we color the edge incident with $u_1$ with color $c_1'$, the edge incident with $u_2$ with a color not used for coloring edges incident with $u_2$ in $G'$, then color the $v_1u_2$-subpath of $B$ (greedily from $u_2$ to $v_1$) and finally color the $u_1v_1$-subpath of $B$ (greedily from $v_1$ to $u_1$).
So, w.l.o.g.\ let $v_3=u_1$. Then to obtain the desired $s$-edge-coloring of $G$ we first, by induction, $s$-edge-color $G''=G\setminus\{v_2\}$, such that the lists $L_1$, $L_3''$ and $L''$ are associated with vertices $v_1$, $v_3$ and $u_2$, where $L_3''=L_3\setminus\{c'\}$ for some $c'\in L_2\cap L_3$, and $L''=\{1,2,\ldots,s\}\setminus\{c''\}$ for some $c''\in L_2\setminus\{c'\}$ (note that $\{v_1,v_3,u_2\}$ is not contained in a ring of $G''$ of length 5). Finally, we color the edge $u_1v_2$ with $c'$ and the edge $u_2v_2$ with $c''$.
\medskip
By Case 3, from now on we may assume that no vertex from $\{v_1,\ldots,v_k\}$ is free. Therefore it suffices to consider the following cases.
\medskip
\noindent{\bf Case 4.} $v_1$ and possibly $v_3$ (if $k=3$) are of degree at least 3.
\medskip
\noindent If $v_2$ is also of degree at least 3, then the proof follows from Lemma \ref{ListChord} (with $S$ the set of all vertices of degree at least 3, lists $L_i$, for $i\in\{1,\ldots,k\}$, given to edges incident with $v_i$, and list $\{1,\ldots,s\}$ given to all other edges). So, suppose that $\deg(v_2)=2$. Let $B=u_1\ldots u_2$ be the branch of $G$ that contains $v_2$, and $G'$ be the graph obtained from $G$ by deleting internal vertices of $B$ (and edges incident with these vertices). Note that since $G$ is sparse, $\{v_1,v_3\}$ is anticomplete to $\{u_1,u_2\}$ and each of the vertices $u_1$ and $u_2$ is free or of degree at least 3 in $G'$.
First, let us assume that $\Delta(G)=4$. If $v_1,v_3\not\in B$, then to obtain an $s$-edge-coloring of $G$, we first $s$-edge-color $B$ according to $L_2$. Let $L'=\{1,2,\ldots,\Delta(G)\}\setminus\{c_{u_1}\}$ and $L''=\{1,2,\ldots,\Delta(G)\}\setminus\{c_{u_2}\}$, where $c_{u_1}$ (resp.\ $c_{u_2}$) is the color of the edge incident with $u_1$ (resp.\ $u_2$) in this edge-coloring of $B$. Finally, we $s$-edge-color $G'$ using Lemma \ref{ListChord}, such that the lists $L_1$, $L'$, $L''$ and possibly $L_3$ (if $k=3$) are associated with edges incident with $v_1$, $u_1$, $u_2$ and possibly $v_3$ (if $k=3$), respectively, and the list $\{1,2,\ldots,\Delta(G)\}$ associated with all other edges. If w.l.o.g.\ $v_1=u_1$, then
to obtain an $s$-edge-coloring of $G$, we first $s$-edge-color $B$ according to the lists $L_1$, $L_2$ and possibly $L_3$ (if $v_3=u_2$). Next, we associate with $v_1$ the list $L_1'=L_1\setminus\{c_{u_1}\}$, and with $u_2$ the list $L''=\{1,2,\ldots,\Delta(G)\}\setminus\{c_{u_2}\}$ if $v_3\neq u_2$, or $L''=L_3\setminus\{c_{u_2}\}$ if $v_3=u_2$, where $c_{u_1}$ (resp.\ $c_{u_2}$) is the color of the edge incident with $u_1$ (resp.\ $u_2$) in this edge-coloring of $B$. Then we $s$-edge-color $G'$, by induction, such that the lists $L_1'$, $L''$ and possibly $L_3$ (if $k=3$ and $v_3\neq u_2$) are associated with $u_1$, $u_2$ and possibly $v_3$ (if $k=3$ and $v_3\neq u_2$).
Finally, let $\Delta(G)=3$. In this case $L_1=L_3=\{1,2,3\}$, and hence any 3-edge-coloring of $G$ respects the lists $L_1$ and $L_3$. So, to obtain the desired $s$-edge-coloring of $G$ we first $s$-edge-color $G$ using Lemma \ref{ListChord} (we give the list $\{1,2,3\}$ to all edges) and then permute the colors such that the edges incident with $v_2$ receive colors from the list $L_2$.
\medskip
\noindent(3) We prove the claim by induction on $|V(G)|$. By induction, we may assume that $G$ is connected.
It suffices to consider the following cases.
\medskip
\noindent{\bf Case 1.} $v_1$ and $v_2$ are of degree at least 3.
\medskip
\noindent The proof in this case follows from Lemma \ref{ListChord} (with $S$ the set of all vertices of degree at least 3, lists $L_i$, for $i\in\{1,2\}$, given to edges incident with $v_i$, and list $\{1,2,\ldots,s\}$ given to all other edges).
\medskip
\noindent{\bf Case 2.} $v_1$ is of degree 2.
\medskip
\noindent
If $\Delta (G)=2$ then $G$ is a hole and it is easy to see how to obtain the desired coloring. So we may assume that $\Delta (G)\geq 3$.
We now consider the following cases.
\medskip
\noindent{\bf Case 2.1.} $v_1$ is contained in a ring of $G$.
\medskip
\noindent Let $B$ be that ring, let $v$ be the vertex of degree at least 3 of $B$, and let $G'$ be the graph obtained from $G$ by deleting degree 2 vertices of $B$ (and edges incident with these vertices). Note that $G'$ is triangle-free sparse and that $v$ is of degree at least 3, of degree 1 or is free in $G'$. Also, since $G$ is sparse, $v$ is not adjacent to a vertex of degree at least 3. In particular, if $v$ is contained in a small theta of $G$ (or any of its induced subgraphs), then $v$ is not a degree 2 vertex of this small theta. Finally, if $v$ is of degree 1 in $G'$, then let $P=v\ldots v'$ be the limb of $G'$ that contains $v$; otherwise $P=\{v\}$ and $v=v'$. Also, let $V''=(V(G)\setminus(V(B)\cup V(P)))\cup\{v'\}$ and $G''=G[V'']$.
If $\{v_1,v_2\}\subseteq V(B)$, then we proceed as in Case 2.1 of part (2). So, let us assume that $v_2\not\in V(B)$. Our proof in this case is similar to the proof of Case 2.3 of part (2).
First, let $P=\{v\}$ (i.e.\ $v=v'$). If $v_1=v$, then to obtain an $s$-edge-coloring of $G$, we first $s$-edge-color $G'$ by induction such that the lists $L_1$ and $L_2$ are associated with $v_1$ and $v_2$. To complete the edge-coloring of $G$, we edge-color $B$ and then permute the colors in this edge-coloring of $B$ such that all edges incident with $v$ receive different colors. So, suppose $v\neq v_1$. If $v_1$ is not adjacent to $v$, then we first $s$-edge-color $G'$ (using part (2)) such that the list $L_2$ is associated with $v_2$, and then edge-color $B$ (as in Case 1 of (2)) such that the lists $L_1$ and $\{1,2,\ldots,s\}\setminus L$ are associated with $v_1$ and $v$, where $L$ is the set of colors used for coloring edges incident with $v$ in this edge-coloring of $G'$. So, suppose that $v_1$ is adjacent to $v$. Then we first $s$-edge-color $G'$ by induction such that the lists $L_2$ and $L'=\{1,2,\ldots,s\}\setminus\{c\}$ are associated with $v_2$ and $v$, where $c\in L_1$ is arbitrary (note that $L_2\cap L'\neq\emptyset$, since $|L'|=s-1$). To complete the edge-coloring of $G$ we edge-color $B$ (as in Case 1 of (2)) such that the lists $L_1$ and $\widetilde{L}=\{1,2,\ldots,s\}\setminus L''$ are associated with $v_1$ and $v$, where $L''$ is the list of colors used for coloring edges incident with $v$ in this edge-coloring of $G'$ (note that $c\in L_1\cap \widetilde{L}$).
Suppose now that $v\neq v'$. If $v_2\in V''$, then we first $s$-edge-color $G''$ (using part (2)) such that the list $L_2$ is associated with $v_2$. Then we edge-color $P$ greedily from $v'$ to $v$ (note that $v$ and $v'$ are not adjacent), such that the edge incident with $v'$ is colored with a color not used for coloring edges incident with $v'$ in this edge-coloring of $G''$, and that the edge incident with $v$ is colored with a color from $L_1$. Let this color be $c$. To complete the edge-coloring of $G$, we edge-color $B$ such that: if $v_1=v$, then the list $L_1\setminus \{c\}$ is associated with $v_1$; if $v_1\neq v$, then the lists $L_1$ and $L=\{1,2,\ldots,s\}\setminus \{c\}$ are associated with $v_1$ and $v$ (note that $L\cap L_1\neq\emptyset$, since $|L|=s-1$). Next, suppose that $v_2\in V(P)$. In this case, we first edge-color $P$ such that the lists $L_2$ and possibly $L_1$ (if $v_1=v$) are associated with $v_2$ and possibly $v_1$ (if $v_1=v$). Let $c$ be the color of the edge incident with $v$ in this edge-coloring of $P$. Then we edge-color $B$ such that: if $v_1=v$, then the list $L_1\setminus \{c\}$ is associated with $v_1$; if $v_1\neq v$, then the lists $L_1$ and $L=\{1,2,\ldots,s\}\setminus \{c\}$ are associated with $v_1$ and $v$ (note that $L\cap L_1\neq\emptyset$, since $|L|=s-1$). To complete the edge-coloring of $G$, we $s$-edge-color $G''$ using Lemma \ref{ListChord} (such that all edges receive the list $\{1,2,\ldots,s\}$), and then permute the colors (in this edge-coloring of $G''$) such that all edges incident with $v'$ (in $G$) have different colors.
\medskip
By Case 2.1, from now on we may assume that neither $v_1$ nor $v_2$ is contained in a ring of $G$. Let $B=u_1\ldots u_2$ be the branch of $G$ that contains $v_1$, and let $G'$ be the graph obtained from $G$ by deleting internal vertices of $B$ (and edges incident with these vertices). Since $G$ is sparse, vertices $u_1$ and $u_2$ are free or of degree at least 3 in $G'$, and every neighbor of $u_1$ and $u_2$ is of degree 2 in $G$ and $G'$. In particular, if $u_1$ (resp.\ $u_2$) is contained in a small theta of $G$ (or any of its induced subgraphs), then $u_1$ (resp.\ $u_2$) is not a degree 2 vertex of this small theta.
\smallskip
\noindent{\bf Case 2.2.} $v_1$ and $v_2$ are not parallel.
\smallskip
\noindent In this case $\{u_1,u_2,v_2\}$ is not contained in a ring of $G'$ of length 5.
If $v_2\in B$, then to obtain an $s$-edge-coloring of $G$ we first $s$-edge-color $B$ according to the lists $L_1$ and $L_2$, and then by induction $s$-edge-color $G'$ such that the lists $L'$ and $L''$ are associated with $u_1$ and $u_2$. The lists are defined as follows: $L'=\{1,2,\ldots,s\}\setminus \{c_{u_1}\}$ (resp.\ $L''=\{1,2,\ldots,s\}\setminus \{c_{u_2}\}$) if $u_1\neq v_2$ (resp.\ $u_2\neq v_2$) or $L'=L_{2}\setminus \{c_{u_1}\}$ (resp.\ $L''=L_{2}\setminus \{c_{u_2}\}$) if $u_1=v_2$ (resp.\ $u_2=v_2$), where $c_{u_1}$ (resp.\ $c_{u_2}$) is the color of the edge incident with $u_1$ (resp.\ $u_2$) in the edge-coloring of $B$.
So, let $v_2\not\in B$. Since $\{u_1,u_2,v_2\}$ does not induce a path of length 2 (by the case we are in), and since the set is good in $G'$, to obtain an $s$-edge-coloring of
$G$ we first $s$-edge-color $B$ according to the list $L_1$, and then by part (2) $s$-edge-color $G'$ with the lists $L'=\{1,2,\ldots,s\}\setminus \{c_{u_1}\}$, $L''=\{1,2,\ldots,s\}\setminus \{c_{u_2}\}$ and $L_2$ associated with $u_1$, $u_2$ and $v_2$, where $c_{u_1}$ (resp.\ $c_{u_2}$) is the color of the edge incident with $u_1$ (resp.\ $u_2$) in the edge-coloring of $B$ (note that $L_2\cap L'\neq\emptyset$ and $L_2\cap L''\neq\emptyset$).
\smallskip
\noindent{\bf Case 2.3.} $v_1$ and $v_2$ are parallel.
\smallskip
\noindent If $v_1$ or $v_2$ is free, then we can apply part (2). So, suppose that neither $v_1$ nor $v_2$ is free. Let $B'$ be the branch of $G$ that contains $v_2$.
\smallskip
\noindent{\bf Case 2.3.1.} At least one of the branches $B$ and $B'$ is of length at least 3.
\smallskip
\noindent W.l.o.g.\ let $B'$ be of length at least 3 and $v_2$ adjacent to $u_1$. Now we define colors $c_1$ and $c_2$ that are going to be used when edge-coloring $G$:
\begin{itemize}
\item if $v_1$ is adjacent to both $u_1$ and $u_2$, then $c_1$ and $c_2$ are distinct colors from $L_1$;
\item if $v_1$ is adjacent to $u_1$, but not adjacent to $u_2$, then $c_1$ is a color from $L_1$ and $c_2$ a color not from $\{c,c_1\}$, where $c$ is a color from $L_1$ distinct from $c_1$;
\item if $v_1$ is adjacent to $u_2$, but not adjacent to $u_1$, then $c_2$ is a color from $L_1$ and $c_1$ a color not from $\{c,c_2\}$, where $c$ is a color from $L_1$ distinct from $c_2$.
\end{itemize}
Now, we first, by part (2), $s$-edge-color $G'$ such that the lists $L'=\{1,2,\ldots,s\}\setminus\{c_1\}$, $L''=\{1,2,\ldots,s\}\setminus\{c_2\}$ and $L_2$ are associated with $u_1$, $u_2$ and $v_2$ (note that $|L'\cup L''\cup L_2|\geq 3$, since $L'\cup L''=\{1,2,\ldots,s\}$). To complete the edge-coloring of $G$ we color the branch $B$ in the following way: we first color the edges incident with $u_1$ and $u_2$ with colors $c_1$ and $c_2$, respectively, and then greedily edge-color the rest of $B$ starting from $v_1$ and according to the list $L_1$.
\smallskip
\noindent{\bf Case 2.3.2.} Branches $B$ and $B'$ are of length 2.
\smallskip
\noindent First, let us assume that both $u_1$ and $u_2$ are of degree at least 4 in $G$, and let $G''=G\setminus\{v_1,v_2\}$. Then $G''$ is triangle-free sparse and vertices $u_1$ and $u_2$ are free or of degree at least 3 in $G''$. Now we define colors $c_1,c_2,c_3,c_4$ that are going to be used when edge-coloring $G$:
\begin{itemize}
\item if $|L_1\cap L_2|\geq 2$ and $c',c''\in L_1\cap L_2$, then $c_1=c_4=c'$, $c_2=c_3=c''$;
\item if $|L_1\cap L_2|=1$ and $L_1\cap L_2=\{c\}$, then $c_1=c_4=c$, $c_2=c'$ and $c_3=c''$, where $c'\in L_2\setminus\{c\}$ and $c''\in L_1\setminus\{c\}$;
\item if $L_1\cap L_2=\emptyset$, then $c_1,c_3\in L_1$, $c_1\neq c_3$, and $c_2,c_4\in L_2$, $c_2\neq c_4$.
\end{itemize}
Now, by induction, we edge-color $G''$ such that the lists $\{1,2,\ldots,s\}\setminus\{c_1,c_2\}$ and $\{1,2,\ldots,s\}\setminus\{c_3,c_4\}$ are associated with $u_1$ and $u_2$, and then color edges $u_1v_1$, $u_1v_2$, $u_2v_1$ and $u_2v_2$ in colors $c_1$, $c_2$, $c_3$ and $c_4$, respectively.
So, we may assume that w.l.o.g.\ $\deg_G(u_1)=3$. Let $\widetilde{G}$ be the graph obtained from $G\setminus\{v_1,v_2\}$ by adding the edge $u_1u_2$. Since $u_1$ is of degree 2 in $\widetilde{G}$, the graph $\widetilde{G}$ is sparse, and since $\{v_1,v_2\}$ is not contained in a small theta of $G$, the graph $\widetilde{G}$ is triangle-free. Furthermore, since at least one neighbor of $u_1$ in $\widetilde{G}$ is of degree 2, $u_1$ is not a degree 2 vertex of any small theta of $\widetilde{G}$.
We define lists of colors $\widetilde{L}_1$ and $\widetilde{L}_2$ that are going to be used for obtaining an edge-coloring of $\widetilde{G}$ (and $G$):
\begin{itemize}
\item[(i)] if $|L_1\cap L_2|\geq 2$ and $c_1,c_3\in L_1\cap L_2$, then $\widetilde{L}_1=\{c_1,c_2\}$ and $\widetilde{L}_2=\{1,2,\ldots,s\}\setminus\{c_3\}$, where $c_2\not\in\{c_1,c_3\}$;
\item[(ii)] if $|L_1\cap L_2|=1$ and $L_1\cap L_2=\{c_1\}$, then $\widetilde{L}_1=\{c_1,c_2\}$ and $\widetilde{L}_2=\{1,2,\ldots,s\}\setminus\{c_2\}$, where $c_2\in L_1\setminus\{c_1\}$;
\item[(iii)] if $L_1\cap L_2=\emptyset$, then $\widetilde{L}_1=\{c_1,c_2\}$ and $\widetilde{L}_2=\{1,2,\ldots,s\}\setminus\{c_2\}$, where $c_1\in L_1$ and $c_2\in L_2$.
\end{itemize}
Now, by induction, we $s$-edge-color $\widetilde{G}$ such that the lists $\widetilde{L}_1$ and $\widetilde{L}_2$ are associated with $u_1$ and $u_2$. Furthermore, we can permute the colors $c_1$ and $c_2$ in case (i), such that the edge $u_1u_2$ is colored with $c_1$. Finally, to obtain an edge-coloring of $G$ we extend the obtained edge-coloring of $\widetilde{G}\setminus\{u_1u_2\}$ in the following way. In case (i) we color the edges $u_1v_1$, $u_1v_2$, $u_2v_1$ and $u_2v_2$ with colors $c_1$, $c_3$, $c_3$ and $c_1$, respectively; in case (ii) we color the edges $u_1v_1$, $u_1v_2$, $u_2v_1$ and $u_2v_2$ with colors $c_1$, $c_3$, $c_2$ and $c_1$, respectively, where $c_3\in L_2\setminus\{c_1\}$; in case (iii) we color the edges $u_1v_1$, $u_1v_2$, $u_2v_1$ and $u_2v_2$ with colors $c_3$, $c_4$, $c_1$ and $c_2$, respectively, where $c_3\in L_1\setminus\{c_1\}$ and $c_4\in L_2\setminus\{c_2\}$.
\medskip
Note that this proof yields an $\mathcal O(nm)$-time algorithm that finds the described edge-coloring. Indeed, all steps in the proof can be done in linear time, except when Lemma \ref{ListChord} is applied (which takes $\mathcal O(nm)$ time), but then the edge-coloring of $G$ can be completed in linear time.
\end{proof}
\begin{lemma}\label{TwoColoredBasic}
Let $G$ be a triangle-free chordless graph, and let $v_1$ and $v_2$ be distinct vertices of $V(G)$ both of degree at least 1.
Let $\widetilde{G}$ be a graph obtained from $G$ by adding a path
$Q=q_1\ldots q_k$, $k\geq 2$, (whose vertices are disjoint from vertices of $G$) and edges $q_1v_1$ and $q_kv_2$ (these are the only edges between $G$ and $Q$).
Assume that $\widetilde{G}$
is also triangle-free chordless. Suppose that we are given two lists of colors $L_1,L_2\subseteq\{1,2,\ldots,s\}$, where $s=\max\{3,\Delta(G)\}$, such that $|L_1|\geq \deg_G(v_1)$,
$|L_2|\geq\deg_G(v_2)$, and if $v_1$ and $v_2$ are adjacent, then $L_1\cap L_2\neq\emptyset$. Also, suppose that if both $v_1$ and $v_2$ are of degree 1 in $G$
and $|L_1|=|L_2|=1$, then their neighbors in $G$ are distinct. Then there exists an $s$-edge-coloring of $G$ such that every edge of $G$ incident with $v_i$ is colored with a color from $L_i$, for $i\in\{1,2\}$.
Furthermore, such an edge coloring can be obtained in ${\cal O} (n^3m)$-time.
\end{lemma}
\begin{proof}
We prove this lemma by induction on $|V(G)|$. By induction, Theorem \ref{ColorChordless} and Lemma \ref{2list2sparse}, we may assume that $G$ is connected.
\medskip
\noindent{\bf Case 1.} $G$ contains a vertex $v$ of degree 1.
\medskip
\noindent First, suppose that $G$ is a path, i.e.\ $G=v\ldots v'$. If $\{v_1,v_2\}\cap\{v,v'\}=\emptyset$, then we $s$-edge-color this path according to the lists $L_1$ and $L_2$. If $|\{v_1,v_2\}\cap\{v,v'\}|=1$ and w.l.o.g.\ $v=v_1$, then we first color the edge incident with $v$ (with a color from $L_1$ if $v_1v_2\not\in E(G)$, or a color from $L_1\cap L_2$ if $v_1v_2\in E(G)$) and then greedily $s$-edge-color the rest of $G$ (starting from $v$) according to the list $L_2$. If w.l.o.g.\ $v_1=v$ and $v_2=v'$, we first color edges incident with $v_1$ and $v_2$ (with colors from $L_1$ and $L_2$ if $v_1v_2\not\in E(G)$, or a color from $L_1\cap L_2$ if $v_1v_2\in E(G)$), and then greedily edge-color the rest of $G$.
So, suppose that $G$ is not a path. Let $B=v\ldots v'$ be the limb of $G$ that contains $v$ and let $G'$ be the graph induced by $(V(G)\setminus V(B))\cup\{v'\}$. If $\{v_1,v_2\}\subseteq V(B)$, then we first $s$-edge-color $B$ in the following way: if $B$ is not of length 2 or $\{v_1,v_2\}\neq\{v,v'\}$, then we $s$-edge-color $B$ as in the previous paragraph; if $B$ is of length 2 and w.l.o.g.\ $v_1=v$ and $v_2=v'$, then we color the edge incident with $v$ with a color $c\in L_1$ and then color the edge incident with $v'$ with a color from $L_2\setminus\{c\}$ (note that $|L_2|\geq 3$). To complete the edge-coloring of $G$ we $s$-edge-color $G'$ using Theorem \ref{ColorChordless} and permute the colors in this edge-coloring of $G'$ so that the edges incident with $v'$ (in $G$) all receive different colors.
If $\{v_1,v_2\}\subseteq V(G')$, then we first, by induction, $s$-edge-color $G'$ so that the lists $L_1$ and $L_2$ are associated with $v_1$ and $v_2$ (note that $v'$ is of degree at least 2 in $G'$, so a vertex is of degree 1 in $G'$ iff it is of degree 1 in $G$). Then, we greedily $s$-edge-color $B$ (starting from $v'$) such that edges incident with $v'$ all receive different colors.
Finally, suppose w.l.o.g.\ that $v_1\in V(B)\setminus\{v'\}$ and $v_2\in V(G')\setminus\{v'\}$. If both $v_1$ and $v_2$ are of degree 1 and adjacent to $v'$, then we first color edges incident with $v_1$ and $v_2$ (with colors from $L_1$ and $L_2$), then $s$-edge-color $G\setminus\{v_1,v_2\}$ using Theorem \ref{ColorChordless} and finally permute the colors in this edge-coloring of $G\setminus\{v_1,v_2\}$ such that edges incident with $v'$ all receive different colors. So, suppose that w.l.o.g.\ $v_2$ is of degree at least 2 or not adjacent to $v'$. Then, to obtain an $s$-edge-coloring of $G$, we first greedily $s$-edge-color $B$ such that edges incident with $v_1$ receive colors from the list $L_1$. Let $L'=\{1,2,\ldots,s\}\setminus\{c\}$, where $c$ is the color of the edge incident with $v'$ in this edge-coloring of $B$. Also, let $Q''$ be the path induced by $V(Q)$ and vertices of the $v_1v'$-subpath of $B$, and let $Q'$ be the path induced by $V(Q'')\setminus\{v'\}$. Then $Q'$ is disjoint from $G'$, its endnodes are adjacent to $v'$ and $v_2$, and the graph induced by $V(G')\cup V(Q')$ is triangle-free chordless. Hence, to complete the $s$-edge-coloring of $G$ we, by induction, $s$-edge-color $G'$ so that the lists $L_2$ and $L'$ are associated with $v_2$ and $v'$ (note that $\deg_{G'}(v')\geq 2$, and if $v'v_2\in E(G)$, then $\deg_{G'}(v_2)\geq 2$ and hence $L'\cap L_2\neq\emptyset$ since $|L'|=s-1$).
\medskip
By Case 1, we may assume that $\delta(G)\geq 2$. By Theorem \ref{DecomposChordless}, it is enough to consider the following cases.
\medskip
\noindent{\bf Case 2.} $G$ is sparse.
\medskip
\noindent Follows from part (3) of Lemma \ref{2list2sparse}. Indeed, in this case $v_1$ and $v_2$ are not degree 2 vertices that belong to a small theta $H$ of $G$, since
otherwise $\widetilde{G}[V(H)\cup V(Q)]$ is a cycle with chords.
\medskip
\noindent{\bf Case 3.} $G$ has a cutvertex.
\medskip
\noindent Let $v$ be a cutvertex of $G$ and let $(X_1,X_2,\{v\})$ be a partition of $V(G)$ such that $X_1$ is anticomplete to $X_2$.
First, let us assume that $v_i\in X_i$, for $i\in\{1,2\}$. Let $Q_i'$, for $i\in\{1,2\}$, be a chordless path between $v_{3-i}$ and $v$ in $G$, and let $Q_i$ be the path induced
in $\widetilde{G}$ by $(V(Q)\cup V(Q_i'))\setminus\{v\}$. Then $Q_i$ is disjoint from $G[X_{i}\cup\{v\}]$, its endnodes are adjacent to $v$ and $v_{i}$, and the graph induced by $X_{i}\cup\{v\}\cup V(Q_i)$ is triangle-free chordless. So, if $v_i$ is not adjacent to $v$, for some $i\in\{1,2\}$, then we obtain an $s$-edge-coloring of $G$ in the following way. We first
$s$-edge-color $G[X_{3-i}\cup\{v\}]$ using Theorem \ref{ColorChordless} and then permute the colors in this edge-coloring so that edges incident with $v_{3-i}$ receive colors from the list $L_{3-i}$. Let $L'$ be the set of colors used for coloring edges incident with $v$ in this coloring, and let $L=\{1,2,\ldots,s\}\setminus L'$. Then, by induction, we $s$-edge-color $G[X_i\cup\{v\}]$ so that edges incident with $v_i$ receive colors from the list $L_i$ and edges (from $G[X_i\cup\{v\}]$) incident with $v$ receive colors from the list $L$. Merging these colorings we obtain an $s$-edge-coloring of $G$. So, let us assume that $v_i$ is adjacent to $v$, for $i\in\{1,2\}$. Let $c_i\in L_i$, for $i\in\{1,2\}$, be distinct colors. To obtain an
$s$-edge-coloring of $G$ we first, by induction, $s$-edge-color $G[X_{1}\cup\{v\}]$, such that edges incident with $v_{1}$ receive colors from the list $L_{1}$ and edges incident with $v$ (in $G[X_{1}\cup\{v\}]$) colors from the list $\{1,2,\ldots,s\}\setminus\{c_2\}$. Let $L$ be the set of colors used for coloring edges incident with $v$ in this coloring, and let $L'=\{1,2,\ldots,s\}\setminus L$. Then, to obtain the desired edge-coloring of $G$ we, by induction, $s$-edge-color $G[X_{2}\cup\{v\}]$, so that edges incident with $v_{2}$ receive colors from the list $L_{2}$ and edges incident with $v$ (in $G[X_{2}\cup\{v\}]$) colors from the list $L'$ (note that $c_2\in L'\cap L_2$).
So, we may assume w.l.o.g.\ that $v_1,v_2\in X_1\cup\{v\}$. To obtain an $s$-edge-coloring of $G$, we first, by induction, $s$-edge-color $G[X_1\cup \{v\}]$ so that the lists $L_1$ and $L_2$ are associated with $v_1$ and $v_2$. Then we $s$-edge-color $G[X_2\cup\{v\}]$ using Theorem \ref{ColorChordless} and permute the colors in this edge-coloring so that edges incident with $v$ (in $G$) all receive different colors.
\medskip
\noindent{\bf Case 4.} $G$ has a proper 2-cutset $\{a,b\}$.
\medskip
\noindent By Case 2 we may assume that $G$ is 2-connected. Let a proper 2-cutset $\{a,b\}$ of $G$, with the split $(X_1,X_2,\{ a,b\} )$, be chosen so that the side $X_1$ is
minimum among all such splits, that is, by Theorem \ref{DecomposChordless}, such that the block of decomposition $G_{X_1}$ is sparse and $a$ and $b$ each have at least
two neighbors in $X_1$.
Then, $G[X_1\cup\{a,b\}]$ is also triangle-free sparse, and since $G_{X_1}$ is sparse, each of the vertices $a$ and $b$ is of degree at least 3 or free in $G[X_1\cup\{a,b\}]$.
\smallskip
\noindent{\bf Case 4.1.} $v_i\in X_i$, for $i\in\{1,2\}$.
\smallskip
\noindent First, let us assume that $v_1$ is adjacent to both $a$ and $b$.
Since $G_{X_1}$ is
sparse, $v_1$ is of degree 2. By the minimality of $X_1$, this
implies that $G[(X_1\setminus\{v_1\})\cup\{a,b\}]$ is a chordless
path. Let $Q'$ be a chordless path in $G$ whose one endnode is $v_2$, the other is a node of $\{ a,b\}$, and no interior node is in $\{ a,b\}$.
Then $V(Q)\cup V(Q')\cup X_1\cup\{a,b\}$ induces in $\widetilde{G}$ a cycle with a chord ($av_1$ or $bv_1$), a contradiction.
Therefore $v_1$ cannot be adjacent to both $a$ and $b$.
Similarly, $G[X_1\cup\{a,b\}]$ is not a hole of length 5. Indeed, if we suppose the opposite, then $v_1$ is adjacent to $a$ or $b$, w.l.o.g.\ to $a$. Now, if $Q'$ is a path from $v_2$ to $a$ in $G[X_2\cup \{ a,b\} ]$ whose interior does not go through $b$, then $V(Q)\cup V(Q')\cup X_1\cup\{a,b\}$ induces a cycle with chord $av_1$, a contradiction. Finally, note that $\{a,b\}$ is not contained in a ring of length 5 of $G[X_1\cup\{a,b\}]$. Indeed, if we suppose the opposite, then, since $G[X_1\cup\{a,b\}]$ is not a hole of length 5, the degree at least 3 vertex of this ring is a cutvertex of $G$.
By the previous paragraph w.l.o.g.\ $v_1$ is not adjacent to $b$. Next, suppose that $v_1a$ is also not an edge. Then, to
obtain an $s$-edge-coloring of $G$ we first $s$-edge-color
$G[X_{2}\cup\{a,b\}]$ using Theorem \ref{ColorChordless}, and then permute the colors so that edges incident with $v_{2}$
receive colors from the list $L_{2}$. Let $L_a'$ (resp.\ $L_b'$)
be the set of colors used for coloring edges incident with $a$
(resp.\ $b$) in this coloring, and let
$L_a=\{1,2,\ldots,s\}\setminus L_a'$ (resp.\
$L_b=\{1,2,\ldots,s\}\setminus L_b'$). We
complete the $s$-edge-coloring of $G$ using part (2) of Lemma \ref{2list2sparse}, that is,
we $s$-edge-color $G[X_1\cup\{a,b\}]$ so that edges incident with
$v_1$, $a$ and $b$ receive colors from the lists $L_1$, $L_a$ and
$L_b$.
Hence, we may assume that $v_1$
is adjacent to $a$ (but not to $b$). Let
$c_1\in L_1$, and $Q'$ be the path induced by $V(Q)\cup \{v_1\}$. Then $Q'$ is disjoint from $G[X_2\cup\{a,b\}]$, its endnodes are adjacent to $a$ and $v_2$, and
$\widetilde{G}[X_2\cup\{a,b\}\cup V(Q')]$ is triangle-free chordless. So, to obtain an
$s$-edge-coloring of $G$ we first, by induction, $s$-edge-color
$G[X_{2}\cup\{a,b\}]$, so that edges incident with $v_{2}$
receive colors from the list $L_{2}$ and edges incident with $a$ (in
$G[X_{2}\cup\{a,b\}]$) colors from the list
$\{1,2,\ldots,s\}\setminus\{c_1\}$. Let $L_a'$ (resp.\ $L_b'$) be
the set of colors used for coloring edges incident with $a$ (resp.\
$b$) in this coloring, and let
$L_a=\{1,2,\ldots,s\}\setminus L_a'$ (resp.\
$L_b=\{1,2,\ldots,s\}\setminus L_b'$). To complete the $s$-edge-coloring of $G$ we $s$-edge-color
$G[X_{1}\cup\{a,b\}]$ using part (2) of Lemma \ref{2list2sparse}, so that
edges incident with $v_{1}$, $a$ and $b$ receive colors from the
lists $L_{1}$, $L_a$ and $L_b$, respectively (note that
$c_1\in L_1\cap L_a$).
\smallskip
\noindent{\bf Case 4.2.} $v_1,v_2\in X_i\cup\{a,b\}$, for some $i\in\{1,2\}$.
\smallskip
\noindent Note that if $a$ and $b$ are of degree 1 in $G[X_j\cup\{a,b\}]$, for some $j\in\{1,2\}$, then their neighbors in $X_j$ are distinct. Indeed, if we suppose the opposite, then their common neighbor is a cutvertex of $G$.
Now, to obtain an $s$-edge-coloring of $G$ we first, by induction, $s$-edge-color $G[X_{i}\cup\{a,b\}]$, so that edges incident with $v_{j}$, for $j\in\{1,2\}$, receive colors from the list $L_{j}$. Let $L_a$ (resp.\ $L_b$) be the set of colors used for coloring edges incident with $a$ (resp.\ $b$) in this coloring, and let $L_a'=\{1,2,\ldots,s\}\setminus L_a$ (resp.\ $L_b'=\{1,2,\ldots,s\}\setminus L_b$). Let $Q_{3-i}'$ be a chordless path from $a$ to $b$ in $G[X_i\cup\{a,b\}]$, and $Q_{3-i}$ be the path induced by $V(Q_{3-i}')\setminus\{a,b\}$. Then $Q_{3-i}$ is disjoint from $G[X_{3-i}\cup\{a,b\}]$, its endnodes are adjacent to $a$ and $b$, and $G[X_{3-i}\cup\{a,b\}\cup V(Q_{3-i})]$ is triangle-free chordless.
So, to complete the $s$-edge-coloring of $G$ we, by induction, $s$-edge-color $G[X_{3-i}\cup\{a,b\}]$, so that edges incident with $a$ and $b$ receive colors from the lists $L_a'$ and $L_b'$.
\medskip
Finally, let us explain how this proof yields an $\mathcal O(n^3m)$-time algorithm. By Case 1, in linear time we can reduce the problem to the one where $\delta(G)\geq 2$. By Lemma \ref{2list2sparse}, Case 2 can be done in $\mathcal O(nm)$ time. In Case 3, either we use induction and apply Lemma \ref{2list2sparse}, or we use Theorem \ref{ColorChordless} to color the entire side. In each step of Case 4 we first find a desired 2-cutset, which can be done in $\mathcal O(n^2m)$ time (see \cite{mft:chordless}), and then edge-color ``the basic'' side, which can be done in $\mathcal O(nm)$ time (by Lemma \ref{2list2sparse}). Since there are at most $\mathcal O(n)$ steps and each time we use Theorem \ref{ColorChordless} the entire side is colored, the running time of the algorithm is $\mathcal O(n\cdot(n^2m+nm)+n^3m)=\mathcal O(n^3m)$, as claimed.
\end{proof}
\begin{lemma}\label{vcn1}
Let $G \in \mathcal D$ and let $(X_1,X_2,A_1,A_2,B_1,B_2)$ be a split of a minimally-sided 2-join of $G$, with $X_1$ being
its minimal side, and let $G_1$ and $G_2$ be the corresponding blocks of decomposition. Let $s=\max\{3,\omega(G)\}$, and assume that we
are given an $s$-coloring $c$ of $G[X_2]$. We can extend $c$ to an $s$-coloring of $G$ in ${\cal O} (n^3m)$-time.
Furthermore, if $G$ is a basic graph then it can be $\max\{3,\omega(G)\}$-colored in ${\cal O} (n^3m)$-time.
\end{lemma}
\begin{proof}
By Lemma \ref{extreme}, $|A_1|,|B_1|\geq 2$, and by Lemma \ref{l:consistent}, $(X_1,X_2)$ is a consistent 2-join.
Hence $A_2$ and $B_2$ are cliques. Also, by Lemma \ref{new2}, $G_1\in \mathcal D$, and by Lemma \ref{extreme}, $G_1$ does not have a 2-join.
So, by Theorem \ref{decomposeTW}, $G_1$ is basic.
Let $L_a$ (resp. $L_b$) be the set of colors that $c$ assigns to vertices of $A_2$ (resp. $B_2$).
Let $L_a'=\{ 1, \ldots ,s\} \setminus L_a$ and $L_b'=\{ 1, \ldots ,s\} \setminus L_b$.
We want an $s$-coloring of $G[X_1]$ in which the vertices of $A_1$ are colored with colors from $L_a'$ and
vertices of $B_1$ are colored with colors from $L_b'$.
First suppose that $G_1$ is a line graph of a triangle-free chordless graph. Then $G_1$ is claw-free and hence (since $|A_1|,|B_1|\geq 2$) $A_1$ and $B_1$
are both cliques. Let $R$ be the triangle-free chordless graph such that $L(R)=G[X_1]$.
Since $R$ is triangle-free, $A_1$ (resp. $B_1$) corresponds to the set of edges incident to vertex $v_1$ (resp. $v_2$) of $R$ that is of degree at least 1 in $R$.
Note that $v_1$ and $v_2$ are not adjacent since $A_1\cap B_1=\emptyset$.
Since $A_1,A_2,B_1,B_2$ are all cliques, $\deg_R(v_1)\leq |L_a'|$ and $\deg_R(v_2)\leq |L_b'|$.
We associate lists $L_a'$ and $L_b'$ to vertices $v_1$ and $v_2$ respectively, and the result follows from Lemma \ref{TwoColoredBasic}.
Now suppose that $G_1$ is a P-graph with special clique $K$ and skeleton $R$. Let $K'$ be the vertices of $K$ that are centers of claws.
Note that all centers of claws of $G_1$ are in $K'$. For $u\in K'$, by (viii) of definition of P-graph, all pendant vertices of $L(R)$ that are adjacent to $u$
are of degree 2 in $G_1$. Let $H$ be the graph obtained from $G[X_1]$ by removing degree 2 vertices of $G_1$ that are adjacent to a vertex of $K'$.
Then $H$ is claw-free, and hence by Lemma \ref{p1l2.4} and Lemma \ref{p2l4.2}, $H$ is the line graph of a triangle-free chordless graph, say $R_H$.
If $A_1$ and $B_1$ are both cliques, then (since $|A_1|,|B_1|\geq 2$) $A_1\cup B_1\subseteq V(H)$, and we $s$-color $H$, similarly to the case
when $G_1$ was the line graph of a triangle-free chordless graph, so that vertices of $A_1$ (resp. $B_1$) are colored with colors from $L_a'$ (resp. $L_b'$).
This coloring easily extends to an $s$-coloring of $G[X_1]$ since $s\geq 3$.
So we may assume that $A_1$ is not a clique. Since $G\in \mathcal D$, by Lemma \ref{diamondCliqueCut}, $G$ is diamond-free, and hence (since $|A_1|\geq2$) it
follows that $|A_2|=1$.
Therefore $|L_a'|\geq 2$. Let $a_2$ be the vertex of the marker path of $G_1$ that is complete to $A_1$. Since $A_1$ is not a clique, $a_2$ is the center of a claw and hence
$a_2\in K'$. It follows that $K\cap X_1\subseteq A_1$, and so $B_1$ is a clique.
Let $A_1'=A_1\cap V(H)$ and $A_1''=A_1\setminus A_1'$. So vertices of $A_1''$ are all of degree 2 in $G_1$, and hence of degree 1 in $H$.
Also, $A_1'\subseteq K$ (since the vertices that are adjacent to centers of claws of $G_1$, and in particular to $a_2$, must be either in $K$ or of degree 2 in $G_1$),
and hence $A_1'$ is a (possibly empty) clique.
We first $s$-color $H$ so that the vertices of $A_1'$ (resp. $B_1$) receive the colors from $L_a'$ (resp. $L_b'$), and then we extend this coloring to the desired coloring
of $G[X_1]$.
Clique $B_1$ of $G_1$ corresponds to edges incident to a vertex $v_2$ of $R_H$. We assign list $L_b'$ to $v_2$. Since $B_1$ and $B_2$ are cliques,
$\deg_{R_H} (v_2)\leq |L_b'|$.
If $A_1'=\emptyset$ then we $s$-color $H$ by Theorem \ref{ColorChordless} (in ${\cal O} (n^3m)$-time) and then permute colors so that the vertices of $B_1$ are
colored with colors from $L_b'$. So let us assume that $A_1'\neq \emptyset$, and let $v_1$ be the vertex of $R_H$ whose incident edges correspond to vertices of
$A_1'$. We assign list $L_a'$ to $v_1$. Since $A_2$ and $A_1'$ are cliques, $\deg_{R_H} (v_1)\leq |L_a'|$. Note that $v_1$ and $v_2$ are not adjacent in $R_H$ since
$A_1\cap B_1=\emptyset$. It now follows from Lemma \ref{TwoColoredBasic} that we can obtain the desired $s$-coloring of $H$ in ${\cal O} (n^3m)$-time.
So, we may assume that we have an $s$-coloring of $H$ in which vertices of $A_1'$ (resp. $B_1$) are colored with colors from $L_a'$ (resp. $L_b'$).
We now extend that to the desired $s$-coloring of $G[X_1]$. Since $s\geq 3$ and vertices of $X_1\setminus H$ all have degree 2, we can greedily extend the coloring of $H$ to
vertices of $X_1\setminus (H\cup A_1'')$. Since $|A_2|=1$ and $s\geq 3$, it follows that $|L_a'|\geq 2$. Since vertices of $A_1''$ are of degree 1 in $H$, we can clearly
extend the coloring to them as well, so that they receive a color from $L_a'$.
Therefore, an $s$-coloring of $G[X_2]$ can be extended to an $s$-coloring of $G$ in ${\cal O} (n^3m)$-time. Observe that this proof also shows that any basic graph can be $\max\{3,\omega(G)\}$-colored
in ${\cal O} (n^3m)$-time.
\end{proof}
\begin{theorem}
There is an algorithm with the following specifications:
\begin{description}
\item[ Input:] A graph $G\in\mathcal C$.
\item[ Output:]
A $\chi(G)$-coloring of $G$.
\item[ Running time:] $\mathcal O(n^5m)$.
\end{description}
Furthermore, if $G\in {\cal C}$ then $\chi(G)\leq \max\{3,\omega(G)\}$.
\end{theorem}
\begin{proof}
We can decide in linear time if $G$ is 2-colorable, and if it is 2-colorable we can 2-color it (also in linear time). So it is enough to give an algorithm that outputs a
$\max\{3,\omega(G)\}$-coloring of $G$.
\medskip
\noindent
{\bf Claim:} {\em Every $G\in \mathcal D$ can be $\max\{3,\omega(G)\}$-colored in $\mathcal O(n^4m)$-time.}
\medskip
\noindent
{\em Proof of Claim:} Let $G\in \mathcal D$ and let $s=\max\{3,\omega(G)\}$. We $s$-color $G$ as follows.
First check whether $G$ contains a 2-join (this can be done in $\mathcal O(n^2m)$-time by the algorithm in \cite{fast2j}). If it does not, then by Theorem \ref{decomposeTW}
$G$ is basic, and hence it can be $s$-colored in $\mathcal O(n^3m)$-time by Lemma \ref{vcn1}.
Otherwise, by Lemma \ref{DT-construction}, we construct a 2-join decomposition tree $T_G$ (of depth $p$, where $1\leq p\leq n$) using marker paths of length 3, in
$\mathcal O(n^4m)$-time.
By Lemma \ref{new3} all graphs $G_B^1, \ldots ,G_B^p,G^p$ that correspond to the leaves of $T_G$ are in $\mathcal D_{\textsc{basic}}$.
All the 2-joins used in the construction of $T_G$ are extreme 2-joins. For our purpose here we want them to be minimally-sided 2-joins.
Note that by Lemma \ref{extreme}, every minimally-sided 2-join is an extreme 2-join, but not every extreme 2-join is a minimally-sided one.
In the construction of $T_G$ in \cite{nicolas.kristina:two}, a minimally-sided 2-join is found first, and then, in order to achieve ${\cal M}$-independence,
it is possibly pulled in the direction of the minimal side to obtain another extreme 2-join that is then used in the construction of $T_G$.
If we do not care about ${\cal M}$-independence (as we do not here), we can have the algorithm that constructs $T_G$ just use the minimally-sided 2-join that is first
found. This way we obtain $T_G$ with all the other properties, except ${\cal M}$-independence, in which every 2-join used is minimally-sided (which is what we need here).
To obtain the desired coloring of $G$, we process nodes of $T_G$ from bottom up. We start with $G^p$. As $G^p$ is basic, we color it in $\mathcal O(n^3m)$-time
by Lemma \ref{vcn1}.
Since $G^p$ and $G^p_B$ are blocks of decomposition w.r.t.\ a minimally-sided 2-join of $G^{p-1}$, with $G^p_B$ being a block that corresponds to a minimal side,
by Lemma \ref{vcn1} we extend the coloring of $G^p$ to $G^{p-1}$ in $\mathcal O(n^3m)$-time.
We proceed like this up the tree, all the way to the root of $T_G$, namely $G^0=G$. As the depth of $T_G$ is at most $n$, it follows that $G$ can be $s$-colored
in $\mathcal O(n^4m)$-time.
This completes the proof of the Claim.
\medskip
We now consider $G\in {\cal C}$. By Theorem \ref{th:tarjan} we construct the clique-cutset decomposition tree $T$ of $G$ in $\mathcal O(nm)$-time.
Then all the leaves of $T$ are graphs from $\mathcal D$, and there are at most $n$ of them. Hence, by the Claim, $s$-coloring all the leaves takes time
$\mathcal O(n\cdot n^4m)=\mathcal O(n^5m)$.
Finally, process the tree from bottom up, permuting colors of the blocks of decomposition so they agree on the clique cutset and paste the colorings of the
blocks together. Going this way all the way up to the root of $T$, we obtain the desired coloring of $G$ in $\mathcal O(n^5m)$-time.
\end{proof}
\section{A note on clique-width}
\label{s:cW}
In this section we show that the class $\mathcal D_{\textsc{basic}}$ has unbounded clique-width (and hence unbounded rank-width \cite{oum}).
So the class of (theta,wheel)-free graphs with no clique cutset has unbounded clique-width.
For $k\geq 3$, let $C_k$ be a chordless cycle of length $k$. For
$k\geq 1$, let $H_k$ be the graph on vertex set
$\{w_1,\ldots,w_{k+1},u',u'',v',v''\}$, such that
$\{w_1,\ldots,w_{k+1}\}$ induce a chordless path of length $k$, and
the only other edges of $H_k$ are $u'w_1$, $u''w_1$, $v'w_{k+1}$
and $v''w_{k+1}$.
Let $\Phi_k$ be the class of planar bipartite
$(C_3,\ldots,C_k,H_1,\ldots,H_k)$-free graphs of vertex degree at
most~3.
\begin{lemma}[\cite{lozin:unboundedCW}]
For any positive integer $k$, the tree- and clique-width of graphs
in $\Phi_k$ is unbounded.
\end{lemma}
Note that every $(H_1,C_3,C_4)$-free graph is chordless and triangle-free, so $\Phi_4$ is a subclass of the class of triangle-free chordless graphs, which therefore, by the previous lemma, has unbounded clique-width. Furthermore, if $\Phi_4'$ is the class of 2-connected graphs from $\Phi_4$, then $\Phi_4'$ also has unbounded clique-width (see, for example, \cite{klm}). Moreover, the following holds.
\begin{lemma}[\cite{klm}]
If $\mathcal G$ is a class of graphs that has unbounded clique-width, then the class $L(\mathcal G):=\{L(G)\,:\,G\in\mathcal G\}$ also has unbounded clique-width.
\end{lemma}
This lemma, together with our previous observations, implies that the class $L(\Phi_4')$ has unbounded clique-width. Since $L(\Phi_4')\subseteq \mathcal D_{\textsc{basic}}$, we conclude that the class $\mathcal D_{\textsc{basic}}$ has unbounded clique-width.
Interestingly, N.K. Le \cite{le} proved that the class of (theta, wheel, prism)-free graphs that do not have a clique cutset has bounded clique-width
(using the decomposition theorem for this class from \cite{twf-p1}).
This means that one can use the machinery of
\cite{courcelle:CW} and \cite{tarjan} to obtain faster polynomial-time algorithms for the coloring and stable set problems for (theta, wheel, prism)-free graphs.
\section{Introduction and Main Results}
In this paper, we are concerned with upper rate functions,
which are a quantitative expression
of conservativeness,
for a class of symmetric jump processes on ${\mathbb R}^d$.
In particular, we investigate conditions on jumping kernels
such that the corresponding upper rate functions are of the iterated logarithm type.
It is well known that by Kolmogorov's test (see, e.g., \cite[4.12]{IM}),
the function $R(t)=\sqrt{ct\log\log t}$ with constant $c>0$ is an upper rate function
for the standard Brownian motion on $\R^d$
if and only if $c>2$.
This fact immediately implies Khintchine's law of the iterated logarithm.
Similar results of this type are true even for a large class of L\'evy processes.
For example, earlier Gnedenko \cite{Gn43} (see also \cite[Proposition 48.9]{Sa13}) showed that
if a L\'evy process $X=(\{X_t\}_{t\ge0}, \Pp)$ on $\R$ satisfies
$\Ee X_1=0$ and $\Ee X_1^2<\infty$, then
$$\limsup_{t\to\infty} \frac{|X_t|}{\sqrt{t\log\log t}}=(\Ee X_1^2)^{1/2}, \quad \text{a.s.}$$
Sirao \cite{Si53} also obtained analogous results in terms of
integral tests on the distribution function of $X$.
We note that such results as \cite{Gn43,Si53} do not hold in general
for L\'evy processes with infinite second moment, for instance,
symmetric $\alpha$-stable processes with $\alpha\in (0,2)$ (see \cite{Kh38} or \cite[Theorem 2.1]{Sa01}).
The purpose of this paper is to establish upper rate functions of the form $\sqrt{t\log\log t}$
for a class of non-L\'evy symmetric jump processes generated
by regular Dirichlet forms on $L^2({\mathbb R}^d;\d x)$, which we introduce later.
Let $J(x,y)$ be a non-negative measurable function on $\R^d\times \R^d$, and set
\begin{align*}
{\cal D}&=\left\{f\in L^2({\mathbb R}^d;\d x) \Big|\iint_{x\neq y} (f(y)-f(x))^2J(x,y)\,\d x\,\d y<\infty\right\},\\
\E(f,f)&=\iint_{x\neq y} (f(y)-f(x))^2J(x,y)\,\d x\,\d y, \quad f\in {\cal D}.
\end{align*}
Throughout this paper, we always impose the following
\begin{assum}\label{assum:jump}\rm The function $J(x,y)$ satisfies
\begin{itemize}
\item[(i)] $J(x,y)=J(y,x)$ for all $x\neq y$;
\item[(ii)] there exist constants
$0<\kappa_1\le \kappa_2<\infty$
and $0<\alpha_1\le\alpha_2<2$ such that for all $x,y\in \R^d$ with $0<|x-y|<1$,
\begin{equation}\label{A:jump-kernel}
\frac{\kappa_1}{|x-y|^{d+\alpha_1}}\leq J(x,y)\leq \frac{\kappa_2}{|x-y|^{d+\alpha_2}};\end{equation}
\item[(iii)]\begin{equation}\label{e:01}\sup_{x\in\R^d} \int_{\{|x-y|\ge1\}}J(x,y)\,\d y<\infty.
\end{equation}
\end{itemize}
\end{assum}
\noindent Denote by $C_c^{\rm lip}(\R^d)$ the set of Lipschitz continuous functions on $\R^d$ with compact support.
Let $\F$ be the closure of $C_c^{\rm lip}(\R^d)$ with respect to the norm
$\|f\|_{{\cal E}_1}:=\sqrt{\E(f,f)+\|f\|_2^2}$ on ${\cal D}$.
Then it is easy to check that
the bilinear form $(\E,\F)$ is a symmetric regular Dirichlet form on $L^2(\R^d;\d x)$, see e.g.\ \cite[Example 1.2.4]{FOT11}.
The function $J(x,y)$ is called the jumping kernel corresponding to $(\E,\F)$.
Associated with the regular Dirichlet form $(\E,\F)$ is a symmetric Hunt process
$X=(\{X_t\}_{t\ge0}, \{\Pp^x\}_{x\in\R^d\setminus \N})$ with state space $\R^d\setminus \N$,
where $\N\subset \R^d$ is a properly exceptional set for $(\E,\F)$.
\ \
The main result is as follows.
\begin{thm}\label{main} Let $X=(\{X_t\}_{t\ge0}, \{\Pp^x\}_{x\in\R^d\setminus \N})$
be the symmetric Hunt process generated by the regular Dirichlet form $(\E,\F)$ as above.
Let $J(x,y)$ be the jumping kernel corresponding to $(\E,\F)$. Suppose that
\begin{equation}\label{e:second}
\sup_{x\in \R^d}\int_{{\mathbb R}^d} |x-y|^2J(x,y)\,\d y <\infty.
\end{equation}
Then, we have the following two statements.
\begin{itemize}
\item[$(1)$]
If there exist positive constants $c$ and $\varepsilon$ such that for any $x,y\in\R^d$ with $|x-y|\ge 1$,
$$J(x,y)\le \frac{c}{|x-y|^{d+2}\log^{1+\varepsilon}(e+|x-y|)},$$
then there exists a constant $C_0>0$ such that for all $x\in \R^d \backslash \N$,
\begin{equation}\label{e:rate}
\Pp^x(|X_t-x|\le C_0\sqrt{ t\log\log t} \text{ for all sufficiently large }t)=1.
\end{equation}
\item[$(2)$] If there exists a positive constant $c$ such that for any $x,y\in\R^d$ with $|x-y|\ge 1$,
$$J(x,y)\le \frac{c}{|x-y|^{d+2}},$$ then there exists a constant $c_0>0$ such that for all $x\in \R^d \backslash \N$,
$$\Pp^x(|X_t-x|\le c_0\sqrt{ t\log\log t} \text{ for all sufficiently large } t)=0.$$
\end{itemize}
\end{thm}
The condition \eqref{e:second} implies that the jumping kernel of $X$ has a finite second moment.
Estimate \eqref{e:rate} indicates that the function $C_0\sqrt{t\log\log t}$ is
a so-called upper rate function of the process $X$,
which describes the forefront of the process $X$.
As we mentioned before, $\sqrt{(2+\varepsilon)t\log\log t}$ with $\varepsilon>0$ is an upper rate function
for the standard Brownian motion on $\R^d$.
Therefore, Theorem \ref{main} shows that
if the jumping kernel of $X$ satisfies the condition as in Theorem \ref{main} (1),
then $X$ enjoys upper rate functions of the Brownian motion type.
In view of the results of \cite{Gn43,Si53},
we believe that $C\sqrt{t\log\log t}$ with some large constant $C>0$ should be
an upper rate function for all symmetric jump processes
with finite second moments;
however, we do not know how to prove this at this stage.
Here it should be noted that the arguments of \cite{Gn43,Si53} heavily depend
on the characterization of L\'evy processes (see \cite[Sections 2 and 3]{Sa01} for more details),
while in the present setting such characterization is not available.
To overcome this difficulty,
we prove Theorem \ref{main} by using heat kernel estimates.
The idea of obtaining rate functions via heat kernel
estimates has appeared in the literature before; see \cite{SW17} and the references therein.
There are a few differences and difficulties in the present paper, which require some new ideas and non-trivial arguments.
\begin{itemize}
\item[(1)]
For symmetric jump processes of variable order (see \eqref{A:jump-kernel}),
it seems impossible to present two-sided estimates
for the associated heat kernel, see \cite{BBCK09} for details.
Instead, here we consider heat kernel estimates
for large time, which are enough to yield the rate function of the process.
\item[(2)] There are a lot of works on heat kernel estimates
for symmetric jump processes on $\R^d$ generated by non-local symmetric Dirichlet forms,
see \cite{BBCK09, BGK09, CKK08, CK10, CKK11, Fo09} and the references therein.
However, there seems to be no study of heat kernel estimates
when the jumping kernel has a finite second moment (even with precise algebraic decay).
Despite this, we can establish two-sided heat kernel estimates of large time
for symmetric jump processes whose jumping kernels are comparable to
$|x-y|^{-(d+2+\varepsilon)}$ for all $x,y\in\R^d$ with $|x-y|\ge1$ and some constant $\varepsilon>0$
(Corollary \ref{c:two1}).
We can also obtain good
upper bounds of the heat kernel for processes
whose jumping kernel involves a logarithmic factor (Theorem \ref{thm-upper-bound}).
\end{itemize}
By analogy with Brownian motions,
one may guess that in order to prove Theorem \ref{main},
it suffices to get Gaussian type upper bound estimates for the heat kernel.
However, as we discuss in this paper,
such upper bounds hold only on some interval of large times, not for all large times.
This is quite different from the Brownian motion case, and so we need further considerations
on the heat kernel bounds
(Theorem \ref{thm-upper-bound} and the proof of Theorem \ref{main} in the last section).
Bass and Kumagai \cite{BaKu08} proved
the convergence to symmetric diffusion processes
of continuous time random walks on ${\mathbb Z}^d$ with unbounded range.
In particular, they assumed a uniform finite second moment condition on conductances,
similar to \eqref{e:second} for jumping kernels; see \cite[(A3) in p.\ 2043]{BaKu08}.
For the proof of the convergence result,
they obtained sharp on-diagonal heat kernel estimates, H\"{o}lder regularity of parabolic functions and Harnack inequalities.
Our result can be regarded as another approach
to obtaining the diffusivity of symmetric jump processes with jumping kernels having a finite second moment.
\ \
The remainder of this paper is arranged as follows.
In the next section, we recall some known results on the heat kernel of the process $X$, and then present
related assumptions used in our paper. Section \ref{section3} is devoted to establishing upper and lower bounds of the heat kernel for large time. In particular,
Theorems \ref{thm-upper-bound} and \ref{T:lower} are of independent interest.
The proof of Theorem \ref{main} is then presented in the last section.
\ \
For any two positive measurable functions $f$ and $g$,
$f\asymp g$ means that there is a constant $c>1$ such that $c^{-1} f\le g\le cf$.
\section{Known results and assumptions}
Recall that $X=(\{X_t\}_{t\geq 0}, \{\Pp^x\}_{x\in \R^d\setminus \N})$ is
the Hunt process associated with $(\E,\F)$,
which can start from any point in $\R^d\setminus \N$.
Let $P(t,x,\d y)$ be the transition probability of
$X$. The transition semigroup $\{P_t,t\ge0\}$ of $X$ is defined
for $x\in \R^d\setminus \N$ by
$$P_tf(x)=\Ee^x (f(X_t))=\int_{\R^d} f(y)\,P(t,x, \d y),\quad f\ge 0, t\ge 0.$$
The following result has been proved in \cite[Theorem 1.2]{BBCK09} and \cite[Proposition 3.1]{CKK11}.
\begin{thm}{\rm (\cite[Theorem 1.2]{BBCK09} and \cite[Proposition 3.1]{CKK11})} \label{l:upp}
Under Assumption {\rm \ref{assum:jump}}, there exist a properly exceptional set $\N\subset \R^d$ and
a non-negative symmetric kernel $p(t,x,y)$ defined on
$(0,\infty)\times (\R^d\setminus \N) \times (\R^d\setminus \N)$ such that $P(t,x,\d y)=p(t,x,y)\,\d y$, and
$$p(t,x,y)\le c_0(t^{-d/\alpha_1}\vee t^{-d/2}),\quad t>0, \ x,y\in \R^d\setminus \N$$
holds with some constant $c_0>0$. Moreover, there is
an $\E$-nest $\{F_k:k\ge1\}$ of compact subsets of $\R^d$ so that $${\cal N}={\mathbb R}^d\setminus \bigcup_{k=1}^{\infty}F_k$$
and that for each fixed $t>0$ and $y\in {\mathbb R}^d\setminus {\cal N}$, the map
$x\mapsto p(t,x,y)$ is continuous on each $F_k.$ \end{thm}
To obtain off-diagonal upper bounds for $p(t,x,y)$, we will use Davies' method, see \cite{CKS87}. Note that the so-called {\it carr\'{e} du champ} associated with $(\E,\F)$ is given by
$$\Gamma(f,g)(x)=\int_{\R^d} (f(y)-f(x))(g(y)-g(x))J(x,y)\,\d y,\quad f,g\in \F.$$
We can extend $\Gamma (f,f)$ to any non-negative measurable function $f$, whenever it is pointwise well defined.
The following proposition immediately follows from Theorem \ref{l:upp} and \cite[Corollary 3.28]{CKS87}.
\begin{prop}\label{P:da}
Suppose that Assumption {\rm \ref{assum:jump}} holds.
Then, there exists a constant $c_0>0$ such that
for any $x,y\in \R^d\setminus \N$ and $t>0$,
$$p(t,x,y)\le c_0 (t^{-d/\alpha_1}\vee t^{-d/2}) \exp\left(E(2t, x,y)\right),$$
where
$$ E(t,x,y):=-\sup\{|\psi(x)-\psi(y)|-t\Lambda(\psi): \psi\in C_c^{\rm lip}(\R^d) \hbox{ with }\Lambda(\psi)<\infty\}
$$ and
$$\Lambda(\psi):=\|e^{-2\psi}\Gamma(e^\psi, e^\psi)\|_\infty.$$
\end{prop}
\ \
In the next section, we will consider the following two assumptions on the jumping kernel $J(x,y)$ for $x,y\in \R^d$ with $|x-y|\ge1$.
\begin{enumerate}
\item[{\bf(A)}] There are a constant $c>0$ and
an increasing function $\phi: [1,\infty)\rightarrow (1,\infty]$ such that
for all $x,y\in\R^d$ with $|x-y|\ge 1$,
\begin{equation}
\label{upp-2} J(x,y)\leq \frac{c}{|x-y|^{d+2}\phi(|x-y|)}.
\end{equation}
Moreover, the function
$$\Phi(s):= \left(\int_s^{\infty}\frac{{\rm d}r}{r\phi(r)}\right)^{-1},\quad s\ge 1$$
satisfies
\begin{itemize}
\item $\Phi(\infty)=\infty$;
\item the function $s\mapsto \log \Phi(s)/s$ is decreasing on $[1,\infty)$;
\item there is a constant $\gamma>0$ such that
\begin{equation}\label{sec00}\sup_{s\ge1}\frac{\Phi(s)}{\phi^\gamma(s)}<\infty.\end{equation}
\end{itemize}
\item[{\bf(B)}] There is a constant $c>0$ such that
for all $x,y\in\R^d$ with $|x-y|\ge 1$,
\begin{equation}
\label{upp-1}J(x,y)\leq \frac{c}{|x-y|^{d+2}}.
\end{equation}
It also holds that
\begin{equation}\label{sec}\sup_{x\in \R^d} \int_{\{|x-y|\ge1\}} |x-y|^2J(x,y)\,\d y<\infty.\end{equation}
\end{enumerate}
Because $\phi>1$ on $[1,\infty)$, \eqref{upp-2} is stronger than \eqref{upp-1}.
Since the condition $\Phi(\infty)=\infty$ implies \eqref{sec},
{\bf (A)} is stronger than {\bf (B)}.
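Indeed, by \eqref{upp-2} and integration in polar coordinates (where $\omega_d$ denotes the surface measure of the unit sphere in $\R^d$),
$$\sup_{x\in \R^d}\int_{\{|x-y|\ge1\}} |x-y|^2J(x,y)\,\d y
\le c\,\omega_d\int_1^{\infty}\frac{\d r}{r\phi(r)}=\frac{c\,\omega_d}{\Phi(1)},$$
and $\Phi(\infty)=\infty$ forces $\int_1^{\infty}\frac{\d r}{r\phi(r)}$ to be finite, so the right-hand side above is finite.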
For instance,
$\phi(r)=(1+r)^{\theta}$, $\phi(r)=\log^{1+\theta} (e+r)$
and $\phi(r)=\log(e+r)\log^{1+\theta}\log(e^e+r)$ for any $\theta>0$ satisfy the conditions in {\bf(A)}.
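To illustrate, for $\phi(r)=(1+r)^{\theta}$ with $\theta>0$, the verification can be sketched as follows (up to constants; the monotonicity of $s\mapsto \log\Phi(s)/s$ requires a separate elementary check). Since $r\le 1+r\le 2r$ for $r\ge1$,
$$\Phi(s)^{-1}=\int_s^{\infty}\frac{\d r}{r(1+r)^{\theta}}
\asymp \int_s^{\infty}\frac{\d r}{r^{1+\theta}}=\frac{1}{\theta s^{\theta}},
\quad s\ge 1,$$
so that $\Phi(s)\asymp \theta s^{\theta}$. In particular, $\Phi(\infty)=\infty$, and \eqref{sec00} holds with $\gamma=1$ because $\Phi(s)/\phi(s)\asymp \theta s^{\theta}/(1+s)^{\theta}$ is bounded on $[1,\infty)$.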
On the other hand, under \eqref{A:jump-kernel} and \eqref{sec},
$$\sup_{x\in\R^d}\int_{\R^d} |x-y|^2 J(x,y)\,\d y<\infty.$$ In particular, there is a constant $c_1>0$ such that for any $K>0$,
\begin{equation}\label{big-jump}
\sup_{x\in\R^d}\int_{\{|x-y|>K\}} J(x,y)\,\d y\le \frac{c_1}{K^2}.
\end{equation}
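Indeed, since $|x-y|^2/K^2\ge 1$ on $\{|x-y|>K\}$,
$$\sup_{x\in\R^d}\int_{\{|x-y|>K\}} J(x,y)\,\d y
\le \frac{1}{K^2}\sup_{x\in\R^d}\int_{\R^d} |x-y|^2J(x,y)\,\d y<\infty.$$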
\section{Heat kernel estimates}\label{section3}
Throughout this section, we always suppose that Assumption {\rm \ref{assum:jump}} holds.
We will derive upper and lower bounds of the heat kernel for large time.
\subsection{Heat kernel upper bound}
\begin{prop}\label{thm-upper-bound0}
Under Assumption {\bf (B)},
there exist positive constants $t_0$ and $c$ such that for all $t\ge t_0$ and $x,y\in \R^d\backslash \N,$
$$p(t,x,y)\leq
\begin{cases}
\displaystyle \frac{c}{t^{d/2}},& t\geq |x-y|^2, \\
\displaystyle \frac{ct}{|x-y|^{d+2}}, & t\leq |x-y|^2.
\end{cases}$$
\end{prop}
\begin{proof}
We mainly follow the proof of \cite[Theorem 1.4]{BGK09},
but here we suppose that the time parameter $t$ is large.
By Theorem \ref{l:upp}, there are constants $t_0, c_0>0$ such that for all $x,y\in \R^d\backslash \N$ and $t\ge t_0$,
$$p(t,x,y)\leq {c_0}{t^{-d/2}}.$$
Thus, we only need to verify the off-diagonal estimate for $p(t,x,y).$
We first introduce truncated Dirichlet forms associated with $(\E,\F)$.
For $0<K<\infty$, define
$${\cal E}^{(K)}(u,v)=\iint_{\{0<|x-y|<K\}} (u(x)-u(y))(v(x)-v(y))\,J(x,y)\,\d x\,\d y, \quad u,v\in {\cal F}.$$
Then by (\ref{big-jump}),
\begin{equation*}
\begin{split}
\iint_{\{|x-y|\geq K\}} (u(x)-u(y))^2J(x,y)\,\d x\,\d y
&\leq 4\int_{\R^d}u(x)^2 \left(\int_{\{|x-y|\geq K\}}J(x,y)\,\d y\right)\,\d x\\
&\leq \frac{c_1}{K^2}\|u\|_2^2,
\end{split}
\end{equation*}
which yields that
\begin{equation}\label{e:ffee1} \begin{split}
{\cal E}(u,u)
=&{\cal E}^{(K)}(u,u)+\iint_{\{|x-y|\geq K\}} (u(x)-u(y))^2J(x,y)\,\d x\,\d y\\
\leq & {\cal E}^{(K)}(u,u)+\frac{c_1}{K^2}\|u\|_2^2.\end{split}
\end{equation} In particular, $({\cal E}^{(K)},{\cal F})$ is a regular Dirichlet form on $L^2(\R^d;\d x)$.
Let $P^{(K)}(t,x,\d y)$ be the transition probability associated with $({\cal E}^{(K)},{\cal F})$.
Then, by \eqref{e:ffee1} and the proof of \cite[Theorem 1.2]{BBCK09} (or \cite[Proposition 3.1]{CKK11}), there exist positive constants $c_2, c_3$ and $t_1$ such that for all $t\ge t_1$ and $x,y\in\R^d\backslash \N$,
$$P^{(K)}(t,x,\d y)=p^{(K)}(t,x,y)\,\d y$$ and
\begin{equation}\label{e:ont}
p^{(K)}(t,x,y)\le c_2 t^{-d/2}\exp\left(\frac{c_3t}{K^2}\right).
\end{equation}
Next, we will obtain the off-diagonal estimate for $p^{(K)}(t,x,y)$, by applying Proposition \ref{P:da} to $({\cal E}^{(K)},{\cal F})$. For fixed points $x_0, y_0\in \R^d$,
let $R=|x_0-y_0|$ and $K=R/\theta$ for some $\theta>0$,
which will be determined later.
For $\lambda>0$, we define the function $\psi\in C_c^{\rm lip}(\R^d)$ by
$$\psi(x)=[\lambda(R-|x-y_0|)]\vee 0.$$
Then, by the inequality $(e^r-1)^2\leq r^2e^{2|r|}$ for $r\in {\mathbb R}$ and the fact that $|\psi(x)-\psi(y)|\leq \lambda |x-y|$ for all $x,y\in\R^d$, we get
\begin{equation}\label{gamma-upper}
\begin{split}
\Gamma_K(\psi)(x):&= e^{-2\psi(x)} \Gamma^{(K)}(e^{\psi}, e^{\psi})(x)\\
&=\int_{\{0<|x-y|<K\}}\left(e^{\psi(y)-\psi(x)}-1\right)^2J(x,y)\,{\rm d}y\\
&\leq \int_{\{0<|x-y|<K\}}\left(\psi(x)-\psi(y)\right)^2 e^{2|\psi(x)-\psi(y)|}J(x,y)\,{\rm d}y\\
&\leq e^{2\lambda K}\lambda^2 \int_{\{0<|x-y|<K\}}|x-y|^2J(x,y)\,{\rm d}y \\
&\leq c_4\lambda ^2 e^{2\lambda K}\leq c_5\frac{e^{3\lambda K}}{K^2},
\end{split}
\end{equation} where in the third inequality we used \eqref{sec} and the last inequality follows from the fact that $r^2\le 2e^r$ for all $r\geq 0$. Hence,
$$\Lambda_K(\psi):=\| \Gamma_K(\psi)\|_\infty \le c_5\frac{e^{3\lambda K}}{K^2},$$
which implies that
\begin{equation}\label{upper-exp}
E^{(K)}(t,x_0,y_0)\le -|\psi(x_0)-\psi(y_0)|+\Lambda_K(\psi)t \leq c_5\frac{e^{3\lambda K}}{K^2}t-\lambda R.
\end{equation}
In what follows, we assume that $t<K^2$.
In (\ref{upper-exp}), if we take
$$\lambda=\frac{1}{3K}\log\left(\frac{K^2}{t}\right),$$
then
$$E^{(K)}(t,x_0,y_0)\leq -\frac{R}{3K}\log\left(\frac{K^2}{t}\right)+\frac{c_5}{K^2}\frac{K^2}{t}t
=c_5-\frac{\theta}{3}\log\left(\frac{K^2}{t}\right)$$
so that by \eqref{e:ont} and Proposition \ref{P:da},
\begin{align*}p^{(K)}(t,x_0,y_0)\leq & c_6t^{-d/2}\exp\left(\frac{c_3t}{K^2}+E^{(K)}(2t,x_0,y_0)\right)\\
\le & c_6 t^{-d/2}\exp\left(c_3+c_5-\frac{\theta}{3}\log\left(\frac{K^2}{2t}\right)\right)\\
=& c_7t^{-d/2}\left(\frac{2t}{K^2}\right)^{\theta/3}.\end{align*}
Hence by letting $\theta=3(d+2)/2$, we have
\begin{equation}\label{e:upper-trunc0}
p^{(K)}(t,x_0,y_0)\leq c_7t^{-d/2}\left(\frac{2t}{K^2}\right)^{(d+2)/2}
=\frac{c_8t}{K^{d+2}}=\frac{c_8\theta^{d+2}t}{|x_0- y_0|^{d+2}}.
\end{equation}
We finally obtain the off-diagonal upper bound of $p(t,x,y)$.
In fact,
by Meyer's construction (see e.g.\ \cite[Lemma 3.1(c)]{BGK09} or \cite[Lemma 3.7(b)]{BBCK09}),
\eqref{e:upper-trunc0} and \eqref{upp-1},
\begin{equation}\label{e:meyer}
p(t,x_0,y_0)\leq p^{(K)}(t,x_0,y_0)+t\sup_{|x-y|\geq K}J(x,y)\leq \frac{c_9t}{|x_0-y_0|^{d+2}}.
\end{equation}
Therefore, the proof is complete.
\end{proof}
\begin{thm}\label{thm-upper-bound}
Suppose that Assumption {\bf (A)} holds.
Then, for any $\kappa\ge1$,
there exist positive constants $\theta_0\in (0,1)$, $t_0\ge 1$ and $c_i \ (i=1,2)$ such that
for all $t\ge t_0$,
$$
p(t,x,y)
\leq
\begin{cases}
\displaystyle \frac{c_1}{t^{d/2}},& t\geq |x-y|^2,\\
\displaystyle \frac{c_1}{t^{d/2}}\exp\left(-\frac{c_2|x-y|^2}{t}\right),
& \displaystyle \frac{\theta_0|x-y|^2}{\log \Phi(|x-y|)}\leq t\leq |x-y|^2,\\
\displaystyle U(t,|x-y|,\phi,\Phi, \kappa),
& \displaystyle t\leq \frac{\theta_0|x-y|^2}{\log \Phi(|x-y|)},
\end{cases}
$$ where $$U(t,|x-y|,\phi, \Phi,\kappa):=\frac{c_1}{t^{d/2}\Phi(|x-y|/\kappa)^{\kappa/8}} \wedge \frac{c_1t}{|x-y|^{d+2}}+\frac{c_1t}{|x-y|^{d+2}\phi(|x-y|/\kappa)}.$$
\end{thm}
\begin{proof}
We use the same notation as in the proof of Proposition \ref{thm-upper-bound0}.
By Theorem \ref{l:upp}, we only need to consider off-diagonal estimates, i.e.,
the case that $t\le |x-y|^2$.
We split the proof into two parts. Even though
the proof below is based on the Davies method,
the argument is much more delicate than that of Proposition \ref{thm-upper-bound0}.
Let $K\geq 1$. For fixed points $x_0, y_0\in \R^d$ with $|x_0-y_0|\geq 1$,
let $R=|x_0-y_0|$. For $\lambda>0$, define the function $\psi\in C_c^{\rm lip}(\R^d)$ by
$$\psi(x)=[\lambda (R-|x-y_0|)]\vee 0.$$
Then
by the same argument as in (\ref{gamma-upper}),
and by Assumption \ref{assum:jump} (ii) and Assumption {\bf(A)},
\begin{equation}\label{e:poff}
\begin{split}
\Gamma_K(\psi)(x)&=\int_{\{0<|x-y|<K\}}\left(e^{\psi(y)-\psi(x)}-1\right)^2J(x,y)\,{\rm d}y\\
&\leq \lambda ^2\int_{\{0<|x-y|<K\}}|x-y|^2e^{2\lambda |x-y|}J(x,y)\,\,{\rm d}y \\
&=\lambda ^2\int_{\{0<|x-y|<1\}}|x-y|^2e^{2\lambda |x-y|}J(x,y)\,\,{\rm d}y\\
&\quad +\lambda ^2\int_{\{1\leq |x-y|<K\}}|x-y|^2e^{2\lambda |x-y|}J(x,y)\,\,{\rm d}y \\
&\leq \lambda ^2e^{2\lambda}
\sup_{x\in \R^d}\int_{\{0<|x-y|<1\}}|x-y|^2J(x,y)\,\,{\rm d}y\\
&\quad +c_1\lambda^2\int_{\{1\leq |x-y|<K\}}
\frac{e^{2\lambda |x-y|}}{|x-y|^d\phi(|x-y|)}\,\,{\rm d}y\\
&=:{\rm (I)}+{\rm (II)}.
\end{split}
\end{equation}
(1) We first derive the desired Gaussian upper bound.
For any $\theta>0$, let $\eta$ be a positive constant such that $\eta/\theta<1/4$.
Assume that $K=R$ and $t\geq \theta K^2/\log \Phi(K)$. We set $\lambda=\eta K/t$.
Since $K\ge 1$ and the function $s\mapsto\log \Phi(s)/s $ is decreasing on $[1,\infty)$ by Assumption {\bf (A)},
$$e^{2\lambda}=e^{2\eta K/t}
\leq \exp\left(2\eta\frac{\log \Phi(K)}{\theta K}\right)
\leq e^{2\eta\log\Phi(1)/\theta}=\Phi(1)^{2\eta/\theta},$$
and so
$$
{\rm (I)}\leq c_2\Phi(1)^{2\eta/\theta}\lambda^2\le c_2(1+\Phi(1))^{2\eta/\theta}\lambda^2
\le c_2(1+\Phi(1))^{1/2}\lambda^2 =: c_3 \lambda^2.
$$
If $1\leq r\leq K$, then, also due to the decreasing property of the function $s\mapsto\log \Phi(s)/s$,
$$e^{2\lambda r}=e^{2\eta Kr/t}
\leq \exp\left(2\eta r\frac{\log \Phi(K)}{\theta K}\right)
\leq \exp\left(2\eta r\frac{\log \Phi(r)}{\theta r}\right)=\Phi(r)^{2\eta/\theta},$$
which implies that
\begin{align*}
{\rm (II)}
&\leq c_1\lambda^2\int_{\{|x-y|\geq 1\}}
\frac{\Phi(|x-y|)^{2\eta/\theta}}{|x-y|^d \phi(|x-y|)}\,\,{\rm d}y
=c_4 \lambda^2 \int_1^{\infty}\frac{\Phi(r)^{2\eta/\theta}}{r\phi(r)}\,\d r\\
&= c_4\lambda^2 \int_1^\infty \frac{1}{r\phi(r)} \left(\int_r^\infty \frac{1}{s\phi(s)}\,\d s\right)^{-2\eta/\theta}\,\d r
= \frac{c_4\lambda^2}{1-(2\eta/\theta)}\left(\int_1^\infty \frac{1}{s\phi(s)}\,\d s\right)^{1-(2\eta/\theta)} \\
&\le {2c_4\lambda^2} \left(1+\int_1^\infty \frac{1}{s\phi(s)}\,\d s\right) =:c_5\lambda^2.
\end{align*}
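The evaluation of the last integral in the display above can be seen as follows: writing $F(r):=\int_r^\infty\frac{\d s}{s\phi(s)}$, so that $\Phi=1/F$ and $F'(r)=-\frac{1}{r\phi(r)}$, we have
$$\int_1^{\infty}\frac{\Phi(r)^{2\eta/\theta}}{r\phi(r)}\,\d r=-\int_1^\infty F'(r)F(r)^{-2\eta/\theta}\,\d r=\frac{F(1)^{1-(2\eta/\theta)}}{1-(2\eta/\theta)},$$
where the boundary term at infinity vanishes because $2\eta/\theta<1/2$ and $F(r)\to 0$ as $r\to\infty$.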
Hence by \eqref{e:poff},
$$\Lambda_{K}(\psi)=\|\Gamma_K(\psi)\|_{\infty}
\leq (c_3+c_5)\lambda^2=:C_*\lambda^2.$$
In particular, we have
$$E^{(K)}(t,x_0,y_0)\le \Lambda_{K}(\psi)t-| \psi(x_0)-\psi(y_0)|\le
C_{*}\lambda^2 t -\lambda R=-\eta(1-\eta C_{*})\frac{K^2}{t}.$$
This along with Proposition \ref{P:da} yields that
there is a constant $c_6>0$ such that for all $t\geq \theta K^2/\log \Phi(K)$,
\begin{equation}\label{e:upper-trunc}
p^{(K)}(t,x_0,y_0)\leq c_6t^{-d/2}\exp\left\{\frac{c_0t}{K^2}-\frac{\eta(1-\eta C_{*})}{2}\frac{K^2}{t}\right\}.
\end{equation}
We note that the constants $c_6$ and $C_*$ above are independent of $\eta$ and $\theta$.
In what follows, we assume that
$$
\frac{\theta K^2}{\log \Phi(K)}\leq t\leq K^2.$$
Since by \eqref{e:upper-trunc},
$$
p^{(K)}(t,x_0,y_0)
\leq c_7t^{-d/2}\exp\left\{-\frac{\eta(1-\eta C_{*})}{2}\frac{K^2}{t}\right\},
$$
we have by the first inequality in \eqref{e:meyer} and
\eqref{upp-2},
\begin{equation}\label{comparison}
\begin{split}
p(t,x_0,y_0)
&\leq p^{(K)}(t,x_0,y_0)+t\sup_{|x-y|\geq K} J(x,y)\\
&\leq c_7t^{-d/2}\exp\left\{-\frac{\eta(1-\eta C_{*})}{2}\frac{K^2}{t}\right\}
+\frac{c_8t}{K^{d+2}\phi(K)}.
\end{split}
\end{equation}
Let $\eta_*$ be a positive constant such that
$$\frac{\eta_*((1-\eta_*C_*)\vee 4)}{2\theta}\in \Big(0,1\wedge \frac{1}{\gamma}\Big ),$$
where $\gamma$ is the constant in Assumption {\bf (A)}.
Then by \eqref{sec00}, there is a constant $c_9>0$ such that
$$
\exp\left\{-\frac{\eta_*(1-\eta_* C_{*})}{2}\frac{K^2}{t}\right\}
\geq\exp\left\{-\frac{\eta_*(1-\eta_* C_{*})}{2}\frac{\log\Phi(K)}{\theta}\right\}
=\frac{1}{\Phi(K)^{\eta_*(1-\eta_* C_*)/(2\theta)}}\geq \frac{c_9}{\phi(K)}.
$$
By noting that
$$
\frac{1}{t^{d/2}}=\frac{t}{t^{(d+2)/2}}
\geq t\left(\frac{1}{K^2}\right)^{(d+2)/2}
=\frac{t}{K^{d+2}},
$$
we get
\begin{equation*}
\begin{split}
\frac{t}{K^{d+2}\phi(K)}
\leq c_9^{-1}t^{-d/2}\exp\left\{-\frac{\eta_*(1-\eta_* C_{*})}{2}\frac{K^2}{t}\right\}.
\end{split}
\end{equation*}
Hence if we take $\eta=\eta_*$ in \eqref{comparison}, then
\begin{equation*}
\begin{split}
p(t,x_0,y_0)
\leq&
c_7t^{-d/2}\exp\left\{-\frac{\eta_*(1-\eta_* C_{*})}{2}\frac{K^2}{t}\right\}\\
&+c_{10} t^{-d/2}
\exp\left\{-\frac{\eta_*(1-\eta_* C_{*})}{2}\frac{K^2}{t}\right\}\\
=&:c_*t^{-d/2}\exp\left\{-\frac{\eta_*(1-\eta_* C_{*})}{2}\frac{|x_0-y_0|^2}{t}\right\}.
\end{split}
\end{equation*}
Namely, for each fixed $\theta>0$,
we get the desired Gaussian bound for any $t>0$ and $x,y\in \R^d$ such that
$$\frac{\theta|x-y|^2}{\log \Phi(|x-y|)}\leq t\leq |x-y|^2.$$
(2) Let $\kappa\ge1$. Here we let $K=R/\kappa$.
Since we can choose $t_0$ in the statement large enough,
we may and do assume
that $|x_0-y_0|\ge \kappa$, and so $K\ge 1$.
Below we assume that
$$t\le \frac{\theta_0 R^2}{\log \Phi(R)}$$
for some $\theta_0>0$ small enough, which will be determined later.
Let
$$\lambda=\frac{\log \Phi(K)}{4K}.$$
Since the function $s\mapsto\log \Phi(s)/s $ on $[1,\infty)$ is decreasing by Assumption {\bf (A)},
$$e^{2\lambda r}= \exp\left(r\frac{ \log \Phi(K)}{2K}\right)\le \exp\left(r\frac{\log \Phi(r)}{2r}\right)
=\Phi(r)^{1/2},\quad 1\le r\le K.$$
Hence by \eqref{e:poff},
$$\Lambda_{K}(\psi)\le c_0\lambda^2,$$
where $c_0>0$ is independent of $\theta_0, \kappa$ and $\lambda.$
In particular, by choosing $\theta_0\in(0,1)$ so small that $c_0 \kappa\theta_0\le 2,$ we have
\begin{align*}E^{(K)}(t,x_0,y_0)&\le \Lambda_{K}(\psi)t-|\psi(x_0)-\psi(y_0)|\\
&\le
c_0\lambda^2 t -\lambda R\\
&\le \frac{c_0}{16} \left(\frac{ \log \Phi(K)}{K}\right)^2\frac{\theta_0R^2}{\log \Phi(R)}- \frac{\log \Phi(K)}{4K} R\\
&=\frac{\kappa}{4} \log \Phi(K) \left(-1+\frac{c_0 \kappa\theta_0 }{4}\frac{\log \Phi(K)}{\log \Phi(\kappa K)} \right)\\
&\le -\frac{\kappa}{8} \log \Phi(K),\end{align*}
where we used $\kappa\ge1$ and the increasing property of the function $\Phi(r)$ in the last inequality.
We then have by Proposition \ref{P:da},
$$
p^{(K)}(t,x_0,y_0)
\leq c_1t^{-d/2} \frac{1}{\Phi(K)^{\kappa/8}},
$$ which, in the same way as in \eqref{comparison}, yields
$$ p(t,x_0,y_0)\le c_1t^{-d/2} \frac{1}{\Phi(|x_0-y_0|/\kappa)^{\kappa/8}}+\frac{c_2t}{|x_0-y_0|^{d+2}\phi(|x_0-y_0|/\kappa)}.$$
Noting that Assumption {\bf (B)} is weaker than Assumption {\bf(A)},
we know from Proposition \ref{thm-upper-bound0} that for
any $x_0,y_0\in \R^d\setminus\N$ and $t\ge t_0$ with $t\le |x_0-y_0|^2$,
$$ p(t,x_0,y_0)\le \frac{c_3t}{|x_0-y_0|^{d+2}}.$$
Since $\phi$ is an increasing function on $[1,\infty)$ and $|x_0-y_0|\geq \kappa$,
we have $\phi(|x_0-y_0|/\kappa)\geq \phi(1)$ so that
$$\frac{t}{|x_0-y_0|^{d+2}\phi(|x_0-y_0|/\kappa)}\le \frac{t}{\phi(1)|x_0-y_0|^{d+2}}.
$$
Therefore, we finally obtain
$$p(t,x_0,y_0)\le \frac{c_4}{t^{d/2}\Phi(|x_0-y_0|/\kappa)^{\kappa/8}} \wedge \frac{c_4t}{|x_0-y_0|^{d+2}}
+\frac{c_4t}{|x_0-y_0|^{d+2}\phi(|x_0-y_0|/\kappa)}.$$
Combining the conclusions in (1) and (2) above, we get the desired assertion.
\end{proof}
\begin{rem}\label{rem-upper-bound}\rm (i) According to Theorem \ref{thm-upper-bound}, we recover \cite[Theorem 3.3]{CKK11}
when $\phi(r)=\exp({cr^\beta})$ for some constants $c>0$ and $\beta\in(0,1]$. By \cite[(1.14) in
Theorem 1.2]{CKK11}, we know that upper bound estimates in Theorem \ref{thm-upper-bound} are sharp
up to constants in this case.

(ii) By part (1) of the proof of Theorem \ref{thm-upper-bound}, we have in fact proved that for any $\theta>0$, there are constants $c_i=c_i(\theta)>0$ $(i=1,2)$ such that for all $t\ge t_0$ and $x,y\in \R^d$ with
$$\frac{\theta|x-y|^2}{\log \Phi(|x-y|)}\leq t\leq |x-y|^2,$$ it holds that
$$p(t,x,y)\le \frac{c_1}{t^{d/2}}\exp\left(-\frac{c_2|x-y|^2}{t}\right).$$
\end{rem}
As a consequence of Theorem \ref{thm-upper-bound},
we have the following statement about upper bound estimates of the heat kernel for a new class of symmetric jump processes.
\begin{cor}\label{c:two}
Assume that there are positive constants $\varepsilon, c_0$ such that for all $x,y\in \R^d$ with $|x-y|\ge 1$,
$$J(x,y)\le \frac{c_0}{|x-y|^{d+2+\varepsilon}}.$$ Then, there exist positive constants $t_0\ge 1$, $\theta_0>0$ and $c_i \ (i=1,2)$ such that for all $t\ge t_0$,
$$p(t,x,y)\leq
\begin{cases}
\displaystyle \frac{c_1}{t^{d/2}},& t\geq |x-y|^2,\\
\displaystyle \frac{c_1}{t^{d/2}}\exp\left(-\frac{c_2|x-y|^2}{t}\right),
& \displaystyle \frac{\theta_0|x-y|^2}{\log(1+|x-y|)}\leq t\leq |x-y|^2,\\
\displaystyle \frac{c_1t}{|x-y|^{d+2+\varepsilon}}, & \displaystyle t\leq \frac{\theta_0|x-y|^2}{\log(1+|x-y|)}.
\end{cases}$$\end{cor}
\begin{proof}
In this case, $\phi(r)=r^\varepsilon$ and $\Phi(r)=\varepsilon r^{\varepsilon}$.
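Indeed, with the convention $\Phi(r)^{-1}=\int_r^\infty\frac{\d s}{s\phi(s)}$ used in the proof of Theorem \ref{thm-upper-bound},
$$\Phi(r)^{-1}=\int_r^\infty s^{-1-\varepsilon}\,\d s=\frac{r^{-\varepsilon}}{\varepsilon},\qquad r\ge 1,$$
so that $\log\Phi(|x-y|)=\log\varepsilon+\varepsilon\log|x-y|\asymp \log(1+|x-y|)$ for $|x-y|$ large, which accounts for the time ranges appearing in the statement.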
By taking $\kappa\ge 1$ so large that
$\varepsilon \kappa/8\ge d+2+\varepsilon$ in Theorem \ref{thm-upper-bound},
we obtain the desired assertion. \end{proof}
To study rate functions of the process $X$ corresponding to the test function $\phi(r)=\log^{1+\varepsilon}r$, we also need the following.
\begin{prop}\label{thm-upper-bound3}
Suppose that Assumption {\bf (A)} is satisfied.
Then for any $\delta\in(0,1)$, there exist positive constants $t_0$, $\theta_0$ and $c_1, c_2$ such that
$$p(t,x,y)\leq \frac{c_1t}{|x-y|^{d+2}\log^{(d+2)\delta/2}\log(e+\Phi(c_2|x-y|))}$$
for all $t\ge t_0$ and $x,y\in \R^d\backslash \N$ with
$$t_0\le t\le \frac{ \theta_0|x-y|^2}{\log\Phi(|x-y|)}.$$
\end{prop}
\begin{proof}
For fixed points $x_0, y_0\in {\mathbb R}^d$ and $\theta>0$,
we let $R=|x_0-y_0|$ and $K=R/\theta$.
Since $t_0$ can be chosen large enough,
we may and do assume that $R$ is large enough.
We use the approach of Proposition \ref{thm-upper-bound0}
and start from the estimate \eqref{upper-exp}.
Taking
$$
\lambda= \frac{1}{3K} \log\left (\frac{K^2 \log^\delta \log \Phi(K)}{t}\right),$$ we have
$$E^{(K)}(t,x_0,y_0)\le - \frac{\theta}{3} \log \left(\frac{K^2 \log^\delta \log \Phi(K)}{t}\right)
+ c_* \log^\delta\log \Phi(K),$$
where $c_*$ is the constant $c_5$ in \eqref{upper-exp}.
If
$$t\le \frac{ c_0 K^2}{\log\Phi(K)}$$
for some $c_0>0$, then for $K\ge 1$ large enough,
$$ \frac{\theta}{6}\log \left(\frac{K^2 \log^\delta \log \Phi(K)}{t}\right)
\ge \frac{\theta}{6}\log \left(\frac{\log \Phi(K) \log^\delta \log \Phi(K)}{c_0}\right)
\ge c_* \log^\delta\log \Phi(K),$$
due to the fact that $\delta\in(0,1)$.
Hence, for $K\ge 1$ large enough, we have
$$E^{(K)}(t,x_0,y_0)\le - \frac{\theta}{6} \log \left(\frac{K^2 \log^\delta \log \Phi(K)}{t}\right),$$
which along with Proposition \ref{P:da} yields that
\begin{equation*}
\begin{split}
p^{(K)}(t,x_0,y_0)
&\le c_1 t^{-d/2} \exp\left(-\frac{\theta}{6} \log \left(\frac{K^2 \log^\delta \log \Phi(K)}{2t}\right)\right)\\
&= c_1 t^{-d/2}\left(\frac{2t}{K^2 \log^\delta \log \Phi(K)}\right)^{\theta/6}.
\end{split}
\end{equation*}
Setting $\theta= 3(d+2)$, we get
$$p^{(K)}(t,x_0,y_0)\le c_2 \frac{t}{K^{d+2} \log^{(d+2)\delta/2}\log \Phi(K)}.$$
This along with the first inequality in \eqref{e:meyer}, Assumption {\bf (A)} and the fact that $|x_0-y_0|=\theta K$ gives us that
\begin{align*}
p(t,x_0,y_0)
&\leq p^{(K)}(t,x_0,y_0)+t\sup_{|x-y|\geq K}J(x,y)\\
&\leq c_3 \frac{t}{|x_0-y_0|^{d+2} \log^{(d+2)\delta/2}\log \Phi(c_4|x_0-y_0|)}.
\end{align*}
The proof is complete. \end{proof}
\subsection{Heat kernel lower bound}
In this subsection, we establish the following lower bound estimates for the heat kernel.
\begin{thm}\label{T:lower}
Under Assumption {\bf(B)}, there exist positive constants $t_0$ and $c_i$ $(i=1,2,3)$ such that
for all $t\ge t_0$ and $x,y\in \R^d\backslash \N$,
$$p(t,x,y)\geq
\begin{cases}
\displaystyle c_1t^{-d/2}, & |x-y|^2\leq t, \\
\displaystyle c_1t^{-d/2}\exp\left(-\frac{c_2|x-y|^2}{t}\right), &
\displaystyle c_3|x-y|\leq t\leq |x-y|^2.
\end{cases}$$
\end{thm}
We first explain the main idea of the proof of Theorem \ref{T:lower}.
Following the approach of \cite{BBCK09}, we introduce a class of modifications for the jumping kernel $J(x,y)$.
Let $\kappa_2$ be the constant in \eqref{A:jump-kernel}.
For $\delta\in (0,1)$, define
\begin{equation}\label{eq-j-delta}
J^{(\delta)}(x,y):=J(x,y){\bf 1}_{\{|x-y|\geq \delta\}}+\frac{\kappa_2}{|x-y|^{d+\alpha_2}}{\bf 1}_{\{0<|x-y|<\delta\}}
\end{equation}
and
$${\cal D}^{\delta}:=\left\{u\in L^2({\mathbb R}^d;\d x) \bigg|
\iint_{x\ne y}(u(x)-u(y))^2J^{(\delta)}(x,y)\,{\rm d}x\,{\rm d}y<\infty\right\}.$$
Then, by Assumption \ref{assum:jump}, we have, for any $\delta\in(0,1)$,
\begin{equation*}
\begin{split}
\iint_{\{|x-y|\geq \delta \}}(u(x)-u(y))^2J(x,y)\,{\rm d}x\,{\rm d}y
&\leq 4\int u(x)^2\left(\int_{\{|x-y|\geq \delta\}}J(x,y)\,{\rm d}y\right)\,{\rm d}x\\
&\leq c_1(\delta)\int u(x)^2\,{\rm d}x
\end{split}
\end{equation*}
and so
\begin{equation}\label{eq-compare}\begin{split}
\iint_{x\ne y}(u(x)-u(y))^2J^{(\delta)}(x,y)\,{\rm d}x\,{\rm d}y&
+\|u\|_{L^2({\mathbb R}^d;\d x)}^2\\
&\asymp
\iint_{x\ne y}\frac{(u(x)-u(y))^2}{|x-y|^{d+\alpha_2}}\,{\rm d}x\,{\rm d}y
+\|u\|^2_{L^2({\mathbb R}^d;\d x)}.\end{split}
\end{equation}
Therefore, for all $\delta\in (0,1)$,
$${\cal D}^{\delta}
=\left\{u\in L^2({\mathbb R}^d;\d x) \bigg|
\iint_{x\ne y}\frac{(u(x)-u(y))^2}{|x-y|^{d+\alpha_2}}\,{\rm d}x\,{\rm d}y<\infty \right\};$$
that is, ${\cal D}^{\delta}$ is independent of $\delta\in (0,1)$.
Let $({\cal E}^{\delta},{\cal D}^{\delta})$ be a bilinear form on $L^2({\mathbb R}^d;\d x)$ given by
$${\cal E}^{\delta}(u,v)=\iint_{{\mathbb R}^d\times {\mathbb R}^d}(u(x)-u(y))(v(x)-v(y)) J^{(\delta)}(x,y)\,{\rm d}x\,{\rm d}y,
\quad \text{$u,v\in {\cal D}^{\delta}$},$$
and let ${\cal F}^{\delta}$ be the closure of $C_c^{\rm lip}(\R^d)$
with respect to the norm $\|f\|_{\E_1^{\delta}}:=\sqrt{\E^{\delta}(f,f)+\|f\|_2^2}$ in ${\cal D}^{\delta}$.
Then, $({\cal E}^{\delta},{\cal F}^{\delta})$ is a regular Dirichlet form on $L^2({\mathbb R}^d;\d x)$.
Moreover, according to (\ref{eq-compare}) and the argument of \cite[Lemma 2.5]{BBCK09},
we have ${\cal F}^{\delta}={\cal D}^{\delta}$.
Associated with the regular Dirichlet form $({\cal E}^{\delta}, {\cal F}^{\delta})$ is
a symmetric Hunt process $Y^{\delta}=(\{Y_t^{\delta}\}_{t\geq 0}, \{\Pp^x\}_{x\in \R^d\setminus\N})$
with state space $\R^d\backslash \N_\delta$,
where $\N_\delta\subset \R^d$ is a properly exceptional set for $({\cal E}^{\delta}, {\cal F}^{\delta})$.
By \cite[Main result]{MU11} the process $Y^{\delta}$ is conservative.
We also see from
Theorem \ref{l:upp} that there exists
a non-negative kernel $q^{\delta}(t,x,y)$
on $(0,\infty)\times ({\mathbb R}^d\setminus {\cal N}_{\delta})\times ({\mathbb R}^d\setminus {\cal N}_{\delta})$
such that for any non-negative function $f$ on ${\mathbb R}^d$,
$$\Ee^x f(Y_t^{\delta})=\int_{{\mathbb R}^d}q^{\delta}(t,x,y)f(y)\,{\rm d}y,
\quad \text{$t>0$ and $x\in {\mathbb R}^d\setminus {\cal N}_{\delta}$}$$
and there is a constant
$c_2>0$ such that
\begin{equation}\label{e:upper1}
q^{\delta}(t,x,y)\leq
c_2(t^{-d/2}\vee t^{-d/\alpha_1}), \quad
\text{$t>0$ and $x,y\in {\mathbb R}^d\setminus {\cal N}_{\delta}$}.
\end{equation}
Moreover, there exists an ${\cal E}^{\delta}$-nest $\{F_k^{\delta}\}_{k\geq 1}$ of compact sets such that
$${\cal N}_{\delta}={\mathbb R}^d\setminus \bigcup_{k=1}^{\infty}F_k^{\delta}$$
and for each fixed $t>0$ and $y\in {\mathbb R}^d\setminus {\cal N}_{\delta}$, the map
$x\mapsto q^{\delta}(t,x,y)$ is continuous on each $F_k^{\delta}$.
Here we should note that the constant $c_2$ in \eqref{e:upper1} can be chosen to be independent of $\delta\in (0,1)$.
Indeed, by the definition of $J^{(\delta)}(x,y)$,
$$J^{(\delta)}(x,y)\geq
\frac{\kappa_1}{|x-y|^{d+\alpha_1}}{\bf 1}_{\{|x-y|<1\}}+J(x,y){\bf 1}_{\{|x-y|\geq 1\}}=:J_l(x,y)$$
for any $\delta\in(0,1)$ and $x,y\in \R^d$.
Then by following the argument of \cite[Theorem 1.2]{BBCK09} and \cite[Proposition 3.1]{CKK11},
we see that $c_2$ can be determined by $J_l(x,y)$, which is independent of $\delta$.
Actually, under Assumption {\bf(B)},
we can also get the following near-diagonal lower bound of $q^{\delta}(t,x,y)$,
which is the key to Theorem \ref{T:lower}.
\begin{prop}\label{thm-lower-1}
Under Assumption {\bf(B)}, there exist constants $t_0>0$ and $c_0=c_0(t_0)>0$,
which are independent of $\delta\in (0,1)$, such that
for any $t\geq t_0$ and $x,y\in {\mathbb R}^d\setminus {\cal N}_{\delta}$ with $|x-y|^2\leq t$,
$$q^{\delta}(t,x,y)\geq c_0t^{-d/2}.$$
\end{prop}
We will prove Proposition \ref{thm-lower-1} later,
and present the proof of Theorem \ref{T:lower} first.
\begin{proof}[Proof of Theorem $\ref{T:lower}$]
(1) We first claim that there exist an ${\cal E}$-properly exceptional set ${\cal N}$ and
constants $t_0,c_0>0$ such that
for any $t\geq t_0$ and $x,y\in {\mathbb R}^d\setminus {\cal N}$ with $|x-y|^2\leq t$,
$$p(t,x,y)\geq c_0t^{-d/2}.$$
Indeed, let $\{\delta_n\}_{n=1}^{\infty}$ be a decreasing sequence in $(0,1)$
such that $\delta_n \rightarrow 0$ as $n\rightarrow\infty$.
Then, by \cite[p.1969, Theorem 2.3]{BBCK09},
$({\cal E}^{\delta_n},{\cal F}^{\delta_n})$ converges to $({\cal E}, {\cal F})$
in the sense of Mosco as $n\rightarrow \infty$.
Since $J^{(\delta)}(x,y)\geq J(x,y)$
by definition, we have
${\cal F}^{\delta}\subset {\cal F}$ and
$${\cal E}^{\delta}(u,u)\geq {\cal E}{(u,u)} \quad \text{for any $u\in {\cal F}^{\delta}$}.$$
Therefore, any ${\cal E}^\delta$-exceptional set can be regarded as an ${\cal E}$-exceptional set.
Namely, we can choose an ${\cal E}$-exceptional set ${\cal N}$
so that $\bigcup_{n=1}^{\infty}{\cal N}_{\delta_n}\subset {\cal N}$.
On account of this, the desired assertion follows from Proposition \ref{thm-lower-1}
and \cite[p.1990--1991, Proof of Theorem 1.3]{BBCK09}.
(2) Next, we prove Theorem $\ref{T:lower}$ by following the argument of \cite[Theorem 3.6]{CKK08}.
Note that if $t\geq t_0$ and $|x-y|^2\leq t$, then our assertion follows from (1).
In what follows, we assume that $\sqrt{t_0}|x-y|\leq t\leq |x-y|^2$.
Let $l$ be the largest positive integer such that
$$\frac{t}{l}\leq \left(\frac{|x-y|}{l}\right)^2.$$
Since
\begin{equation}\label{eq-l}
\frac{|x-y|^2}{t}-1\leq l\leq \frac{|x-y|^2}{t},
\end{equation}
we have
\begin{equation}\label{eq-space-time}
\frac{1}{2}\left(\frac{|x-y|}{l}\right)^2\leq \frac{t}{l}\leq \left(\frac{|x-y|}{l}\right)^2
\end{equation}
and
\begin{equation}\label{eq-lower-time}
\frac{t}{l}\geq \frac{t^2}{|x-y|^2}\geq t_0.
\end{equation}
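For instance, \eqref{eq-lower-time} is obtained by combining the upper bound on $l$ in \eqref{eq-l} with the standing assumption $\sqrt{t_0}\,|x-y|\leq t$:
$$\frac{t}{l}\geq \frac{t}{|x-y|^2/t}=\frac{t^2}{|x-y|^2}\geq \frac{\big(\sqrt{t_0}\,|x-y|\big)^2}{|x-y|^2}=t_0.$$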
Let $\{x_i\}_{0\le i\le 6l}$ be a sequence on the line segment
joining $x_0=x$ and $x_{6l}=y$ such that
\begin{equation}\label{eq-sequence}
|x_k-x_{k-1}|=\frac{|x-y|}{6l} \quad \text{for any $k=1,\dots, 6l$}.
\end{equation}
Take a sequence $\{y_i\}_{0\le i\le 6l}$ such that
$y_0=x$, $y_{6l}=y$ and $y_k\in B(x_k, (6l)^{-1}|x-y|)$ for all $1\leq k\leq 6l-1.$ Then, \eqref{eq-sequence} and \eqref{eq-space-time} imply that for any $1\leq k\leq 6l$,
$$
|y_k-y_{k-1}|
\leq |y_k-x_k|+|x_k-x_{k-1}|+|x_{k-1}-y_{k-1}|
\leq 3\cdot \frac{|x-y|}{6l}
=\frac{|x-y|}{2l}
\leq \sqrt{\frac{t}{l}}.
$$
Hence by \eqref{eq-lower-time} and (1),
there exists a constant
$C=C(t_0)\in (0,1)$ such that
$$p\left(\frac{t}{l},y_{k-1},y_k\right)\geq C\left(\frac{t}{l}\right)^{-d/2}, \quad 1\leq k\leq 6l.$$
This, together with the Markov property of $p(t,x,y)$, implies that
\begin{equation*}
\begin{split}
p(t,x,y)
&=\int_{{\mathbb R}^d}\cdots
\int_{{\mathbb R}^d}p(t/l,x,y_1)\cdots p(t/l,y_{6l-1},y)\,{\rm d}y_1\cdots\,{\rm d}y_{6l-1}\\
&\geq \int_{B(x_1,(6l)^{-1}|x-y|)}\cdots
\int_{B(x_{6l-1},(6l)^{-1}|x-y|)}p(t/l,x,y_1)\cdots p(t/l,y_{6l-1},y)\,{\rm d}y_1\cdots\,{\rm d}y_{6l-1}\\
&\geq
C\left(\frac{t}{l}\right)^{-d/2}
\prod_{k=1}^{6l-1}\left\{C\left(\frac{t}{l}\right)^{-d/2}
|B(x_k,(6l)^{-1}|x-y|)|\right\}\\
&\geq c_1\left(\frac{t}{l}\right)^{-d/2}C^{6l},
\end{split}
\end{equation*}
where in the second inequality $|\cdot|$ denotes the $d$-dimensional Lebesgue measure, and the last inequality follows from \eqref{eq-space-time}.
Note that, by \eqref{eq-l}, we have
$$
C^{6l}\geq e^{-c_2l}
\geq \exp\left(-c_2\frac{|x-y|^2}{t}\right),$$ which, along with the estimate above, yields
the desired assertion.
\end{proof}
The remainder of this subsection is devoted to the proof of Proposition \ref{thm-lower-1}.
For this, we need Lemmas \ref{lem-g'} and \ref{lem-dist} below.
These two lemmas are concerned with a class of scaled processes for the subprocess of $Y^{\delta}$ on a ball.
We begin with some results which are due to \cite{BBCK09,CKK08,CK10,Fo09}.
Let $B(x,r)$ be an open ball with radius $r>0$ centered at $x\in {\mathbb R^d}$, and $B_r=B(0,r)$.
Denote by $Y^{\delta,B_r}$ the subprocess of $Y^{\delta}$ on $B_r$.
Let $q^{\delta,B_r}(t,x,y)$ and $({\cal E}^{\delta,B_r}, {\cal F}^{\delta,B_r})$ be
the heat kernel (also called Dirichlet heat kernel in the literature)
and the regular Dirichlet form associated with $Y^{\delta,B_r}$, respectively.
For a fixed $r>0$, define $$Y_t^{\delta,(r)}:=r^{-1}Y_{r^2 t}^{\delta}.$$
Then $Y^{\delta,(r)}=\left(\left\{Y_t^{\delta, (r)}\right\}_{t\geq 0}, \{\Pp^x\}_{x\in {\mathbb R}^d\backslash \N_\delta}\right)$
is a symmetric Hunt process on ${\mathbb R}^d\backslash \N_\delta$ such that
the associated Dirichlet form $({\cal E}^{\delta,(r)}, {\cal F}^{\delta,(r)})$ on $L^2({\mathbb R}^d;\d x)$
is given by
$${\cal E}^{\delta,(r)}(u,v)
=\iint_{{\mathbb R}^d\times {\mathbb R}^d}(u(x)-u(y))(v(x)-v(y))r^{d+2}
J^{(\delta)}(rx,ry)\,{\rm d}x\,{\rm d}y$$ and
$${\cal F}^{\delta,(r)}
=\left\{u\in L^2({\mathbb R}^d;\d x) \Big|
\iint_{{\mathbb R}^d\times {\mathbb R}^d}\frac{(u(x)-u(y))^2}{|x-y|^{d+\alpha_2}}
\,{\rm d}x\,{\rm d}y<\infty\right\}.$$
Moreover, the associated heat kernel $q_r^{\delta}(t,x,y)$ satisfies
\begin{equation}\label{eq-heat-scale}
q_r^{\delta}(t,x,y)=r^d q^{\delta}(r^2t,rx,ry).
\end{equation}
Let $Y^{\delta,(r), B_1}$ be the subprocess of $Y^{\delta,(r)}$ on $B_1$.
Then the associated Dirichlet heat kernel $q_r^{\delta,B_1}(t,x,y)$ is given by
$$q_r^{\delta, B_1}(t,x,y)=r^dq^{\delta, B_r}(r^2t,rx,ry),\quad
\text{$t>0$ and $x,y\in B_1\setminus{\cal N}_{\delta}$}.$$
We denote by $({\cal E}^{\delta,(r),B_1}, {\cal F}^{\delta,(r), B_1})$
the associated regular Dirichlet form on $L^2(B_1;{\rm d}x)$.
In the following, let
$$\Phi(x)=C_{\Phi}(1-|x|^2)^{\frac{12}{2-\alpha_2}}{\bf1}_{B_1}(x),\quad x\in \R^d,$$
where $C_{\Phi}>0$ is the normalizing constant such that $\int_{B_1}\Phi(x)\,\d x=1$.
For each fixed $x_1\in B_1\setminus {\cal N}_{\delta}$, $r\geq 1$ and $\varepsilon\in (0,1)$, define
$$u_r(t,x):=q_r^{\delta,B_1}(t,x,x_1), \quad u_r^{\varepsilon}(t,x):=u_r(t,x)+\varepsilon$$
and
$$H_{\varepsilon}(t):=\int_{B_1}\Phi(y)\log u_r^{\varepsilon}(t,y)\,{\rm d}y.$$
\begin{prop}\label{p:domain}
Under Assumption {\bf(B)}, the next two assertions hold.
\begin{enumerate}
\item For each $t>0$, the function $\Phi(\cdot)/u_r^{\varepsilon}(t,\cdot)$
belongs to $\F^{\delta,(r),B_1}$.
\item The function $H_{\varepsilon}(t)$ is differentiable on $(0,\infty)$ and for each $t>0$,
\begin{equation}\label{e:deriv}
H_{\varepsilon}'(t)=
-{\cal E}^{\delta,(r),B_1}\left(u_r(t,\cdot),\frac{\Phi(\cdot)}{u_r^{\varepsilon}(t,\cdot)}\right).
\end{equation}
\end{enumerate}
\end{prop}
\begin{proof}
(i) \ For any $x,y\in B_1$,
$$\frac{\Phi(x)}{u_r^{\varepsilon}(t,x)}\leq \frac{1}{\varepsilon}\Phi(x)$$
and
\begin{equation*}
\begin{split}
\left|\frac{\Phi(x)}{u_r^{\varepsilon}(t,x)}-\frac{\Phi(y)}{u_r^{\varepsilon}(t,y)}\right|
&\leq \frac{1}{u_r^{\varepsilon}(t,x)}\left|\Phi(x)-\Phi(y)\right|
+\Phi(y)\left|\frac{1}{u_r^{\varepsilon}(t,x)}-\frac{1}{u_r^{\varepsilon}(t,y)}\right|\\
&=\frac{1}{u_r^{\varepsilon}(t,x)}|\Phi(x)-\Phi(y)|
+\frac{\Phi(y)}{u_r^{\varepsilon}(t,x)u_r^{\varepsilon}(t,y)}|u_r^{\varepsilon}(t,x)-u_r^{\varepsilon}(t,y)|\\
&\leq \frac{1}{\varepsilon}|\Phi(x)-\Phi(y)|+\frac{C_{\Phi}}{\varepsilon^2}|u_r(t,x)-u_r(t,y)|.
\end{split}
\end{equation*}
Then our assertion follows by the strong version of the normal contraction property
(e.g., see the proof of \cite[Theorem 1.4.2 (ii)]{FOT11}).
\noindent
(ii) \ By (i), the right hand side of \eqref{e:deriv} is finite for any $t>0$.
Then our assertion follows in the same way as in \cite[Lemmas 4.1 and 4.7]{BBCK09} and \cite[Proposition 3.7]{Fo09}.
\end{proof}
\begin{lem}\label{lem-g'}
Under Assumption {\bf(B)},
there exist positive constants $c_1$ and $c_2$
such that for any $\varepsilon\in (0,1)$, $\delta\in(0,1)$, $x_1\in B_1\setminus{\cal N}_{\delta}$,
$t>0$ and $r\ge1$,
\begin{equation}\label{eq-g'}
\begin{split}
H_{\varepsilon}'(t)\geq -c_1+c_2\int_{B_1}\left(\log u_r^{\varepsilon}(t,y)-H_{\varepsilon}(t)\right)^2\,\Phi(y)\,{\rm d}y.
\end{split}
\end{equation}
\end{lem}
\begin{proof}
We mainly follow the argument of \cite[Lemma 4.7]{BBCK09}.
By Proposition \ref{p:domain} (ii),
\begin{equation}\label{e:deriv-lower}
\begin{split}
&H_{\varepsilon}'(t)\\
&=-{\cal E}^{\delta,(r),B_1}\left(u_r(t,\cdot), \frac{\Phi(\cdot)}{u_r^{\varepsilon}(t,\cdot)}\right)\\
&=-\iint_{B_1\times B_1}
(u_r^{\varepsilon}(t,y)-u_r^{\varepsilon}(t,x))
\frac{u_r^{\varepsilon}(t,x)\Phi(y)-u_r^{\varepsilon}(t,y)\Phi(x)}{u_r^{\varepsilon}(t,x)u_r^{\varepsilon}(t,y)}
r^{d+2} J^{(\delta)}(rx,ry)\,{\rm d}x\,{\rm d}y\\
&\quad -2\int_{B_1}\Phi(x)\left(r^{d+2}\int_{B_1^c}J^{(\delta)}(rx,ry)\,{\rm d}y\right)
\frac{u_r(t,x)}{u_r^{\varepsilon}(t,x)}\,{\rm d}x.
\end{split}
\end{equation}
Let $a=u_r^{\varepsilon}(t,y)/u_r^{\varepsilon}(t,x)$ and $b=\Phi(y)/\Phi(x)$.
Since $s+1/s-2\geq (\log s)^2$ for any $s>0$,
we have
\begin{align*}
&(u_r^{\varepsilon}(t,y)-u_r^{\varepsilon}(t,x))
\frac{u_r^{\varepsilon}(t,x)\Phi(y)-u_r^{\varepsilon}(t,y)\Phi(x)}{u_r^{\varepsilon}(t,x)u_r^{\varepsilon}(t,y)}\\
&=\Phi(x)\left(1-a+b-\frac{b}{a}\right)\\
&=\Phi(x)\left[(1-\sqrt{b})^2-\sqrt{b}\left(\frac{a}{\sqrt{b}}+\frac{\sqrt{b}}{a}-2\right)\right]\\
&\leq \Phi(x)\left[(1-\sqrt{b})^2-\sqrt{b}\left(\log\frac{a}{\sqrt{b}}\right)^2\right]\\
&=\left(\sqrt{\Phi(x)}-\sqrt{\Phi(y)}\right)^2-\sqrt{\Phi(x)\Phi(y)}
\left[\log\left(\frac{u_r^{\varepsilon}(t,y)}{\sqrt{\Phi(y)}}\right)
-\log\left(\frac{u_r^{\varepsilon}(t,x)}{\sqrt{\Phi(x)}}\right)\right]^2.
\end{align*}
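The elementary inequality $s+1/s-2\geq (\log s)^2$ used in the display above can be checked by substituting $u=\log s$:
$$s+\frac{1}{s}-2=\left(e^{u/2}-e^{-u/2}\right)^2=4\sinh^2\frac{u}{2}\geq 4\left(\frac{u}{2}\right)^2=(\log s)^2,$$
since $|\sinh v|\geq |v|$ for every $v\in\R$.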
Using this inequality with $0\leq u_r(t,x)/u_r^{\varepsilon}(t,x)\leq 1$, we obtain by \eqref{e:deriv-lower},
\begin{equation*}
\begin{split}
H_{\varepsilon}'(t)
&\geq -\iint_{B_1\times B_1}\left(\sqrt{\Phi(x)}-\sqrt{\Phi(y)}\right)^2r^{d+2}J^{(\delta)}(rx,ry)\,{\rm d}x\,{\rm d}y\\
&\quad+\iint_{B_1\times B_1}
\sqrt{\Phi(x)\Phi(y)}
\left[\log\left(\frac{u_r^{\varepsilon}(t,y)}{\sqrt{\Phi(y)}}\right)
-\log\left(\frac{u_r^{\varepsilon}(t,x)}{\sqrt{\Phi(x)}}\right)\right]^2
r^{d+2}J^{(\delta)}(rx,ry)\,{\rm d}x\,{\rm d}y\\
&\quad-2\int_{B_1}\Phi(x)\left(r^{d+2}\int_{B_1^c}J^{(\delta)}(rx,ry)\,{\rm d}y\right)\,{\rm d}x\\
&=-\iint_{{\mathbb R}^d\times {\mathbb R}^d}\left(\sqrt{\Phi(x)}-\sqrt{\Phi(y)}\right)^2r^{d+2}J^{(\delta)}(rx,ry)\,{\rm d}x\,{\rm d}y\\
&\quad+\iint_{B_1\times B_1}
\sqrt{\Phi(x)\Phi(y)}
\left[\log\left(\frac{u_r^{\varepsilon}(t,y)}{\sqrt{\Phi(y)}}\right)-\log\left(\frac{u_r^{\varepsilon}(t,x)}{\sqrt{\Phi(x)}}\right)\right]^2
r^{d+2}J^{(\delta)}(rx,ry)\,{\rm d}x\,{\rm d}y\\
&=:-({\rm I})+({\rm II}).
\end{split}
\end{equation*}
To give a lower bound of the last expression above,
we first show that
there exists a constant $C_1>0$,
which is independent of $\delta\in (0,1)$ and $\varepsilon\in (0,1)$,
such that
\begin{equation}\label{eq-1}
({\rm I})
\leq C_1\left(\int_{{\mathbb R}^d}|\nabla\sqrt{\Phi(x)}|^2\,{\rm d}x+\int_{B_1}\Phi(x)\,{\rm d}x\right).
\end{equation}
To do so, we write
\begin{align*}
({\rm I})
&=\iint_{\{0<|x-y|<1/r\}}\left(\sqrt{\Phi(x)}-\sqrt{\Phi(y)}\right)^2r^{d+2}J^{(\delta)}(rx,ry)\,{\rm d}x\,{\rm d}y\\
&\quad+\iint_{\{1/r\leq |x-y|<1\}}\left(\sqrt{\Phi(x)}-\sqrt{\Phi(y)}\right)^2r^{d+2}J^{(\delta)}(rx,ry)\,{\rm d}x\,{\rm d}y\\
&\quad +\iint_{\{|x-y|\geq 1\}}\left(\sqrt{\Phi(x)}-\sqrt{\Phi(y)}\right)^2r^{d+2}J^{(\delta)}(rx,ry)\,{\rm d}x\,{\rm d}y\\
&=:({\rm I})_1+({\rm I})_2+({\rm I})_3.
\end{align*}
By Assumption \ref{assum:jump} (ii) and \cite[(3.9)]{CKK08},
there exists a positive constant $c_1$,
which is independent of $\delta\in (0,1)$ and $r\geq 1$, such that
\begin{equation*}
\begin{split}
({\rm I})_1
&\leq \kappa_2 r^{d+2}\iint_{\{0<|x-y|<1/r\}}
\frac{\left(\sqrt{\Phi(x)}-\sqrt{\Phi(y)}\right)^2}{|rx-ry|^{d+\alpha_2}}\,{\rm d}x\,{\rm d}y
\leq c_1\int_{{\mathbb R}^d}|\nabla \sqrt{\Phi}(x)|^2\,{\rm d}x.
\end{split}
\end{equation*}
Since $6/(2-\alpha_2)>1$,
the function $\sqrt{\Phi(x)}=\sqrt{C_\Phi}(1-|x|^2)^{\frac{6}{2-\alpha_2}}{\bf1}_{B_1}(x)$ is Lipschitz continuous;
that is, there exists a positive constant $c_{\Phi}$ such that
$$
|\sqrt{\Phi(x)}-\sqrt{\Phi(y)}|\leq c_{\Phi}|x-y|
\quad \text{for any $x,y\in {\mathbb R}^d$}.
$$
We note that for any $\delta\in (0,1)$, $J^{(\delta)}(rx,ry)=J(rx,ry)$ for $x,y\in \R^d$ and $r>1$ with $|rx-ry|\geq 1$.
Therefore, there exist positive constants $c_{2i}$ ($i=1,2,3$),
which are independent of $r\geq 1$ and $\delta\in (0,1)$, such that
\begin{equation*}
\begin{split}
({\rm I})_2
&\leq c_{21} r^{d+2}\iint_{\{1/r\leq |x-y|<1\}}
{\left(\sqrt{\Phi(x)}-\sqrt{\Phi(y)}\right)^2}J(rx,ry)\,{\rm d}x\,{\rm d}y \\
&\leq c_{22} r^{d+2}\int_{B_2} \left(\int_{\{1/r\leq |x-y|<1\}}{|x-y|^2}J(rx,ry)\,{\rm d}y\right)\,{\rm d}x\\
&\le \frac{c_{22}}{r^d} \int_{B_{2r}} \left(\int_{\{|x-y|\ge 1\}}{|x-y|^2}J(x,y)\,{\rm d}y\right)\,{\rm d}x\\
&\leq {c_{23}}=c_{23}\int_{B_1} \Phi(x)\,{\rm d}x,
\end{split}
\end{equation*}
where we used Assumption {\bf(B)} in the last inequality.
We also have
\begin{equation*}
\begin{split}
({\rm I})_3
&\leq c_{31} r^{d+2}\int_{B_1}\left(\int_{\{|x-y|\geq 1\}}
J(rx,ry)\,{\rm d}y \right)\,\d x\\
&=\frac{c_{31}r^2}{r^{d}} \int_{B_{r}} \left(\int_{\{|x-y|\geq r\}}J(x,y)\,\d y\right)\,{\rm d}x\\
&\le c_{32}= c_{32} \int_{B_1}\Phi(x)\,{\rm d}x
\end{split}
\end{equation*}
for some positive constants $c_{3i}$ $(i=1,2)$,
which are independent of $r\geq 1$ and $\delta\in (0,1)$.
We thus arrive at (\ref{eq-1}).
We next show that there exist positive constants $c$ and $c'$,
which are independent of $\varepsilon\in(0,1)$, $\delta\in (0,1)$,
$x_1\in B_1\setminus{\cal N}_{\delta}$,
$t>0$ and $r\geq 1$,
such that
\begin{equation}\label{eq-2}
({\rm II})\geq -c+c'\int_{B_1}(\log u_r^{\varepsilon}(t,x)-H_{\varepsilon}(t))^2\Phi(x)\,{\rm d}x.
\end{equation}
To do so, we first prove that
\begin{equation}\label{e:l^2}
\int_{B_1}\left[\log\left(\frac{u_r^{\varepsilon}(t,x)}{\sqrt{\Phi(x)}}\right)\right]^2\,\d x<\infty.
\end{equation}
Since \eqref{e:upper1} implies that
$$u_r(t,x)=q_r^{\delta,B_1}(t,x,x_1)=r^dq^{\delta,B_r}(r^2t,rx,rx_1)
\leq c''r^d[(r^2t)^{-d/2}\vee (r^2t)^{-d/\alpha_1}],$$
we have
$$\varepsilon\leq u_r^{\varepsilon}(t,x)=u_r(t,x)+\varepsilon\leq c''r^d[(r^2t)^{-d/2}\vee (r^2t)^{-d/\alpha_1}]+\varepsilon$$
so that
$$0\leq (\log u_r^{\varepsilon}(t,x))^2
\leq[|\log \varepsilon|\vee |\log (c'' r^d((r^2t)^{-d/2}\vee (r^2t)^{-d/\alpha_1})+\varepsilon)|]^2.$$
Hence
$$\int_{B_1}(\log u_r^{\varepsilon}(t,x))^2\,{\rm d}x<\infty.$$
Noting that
$$\left[\log\left(\frac{u_r^{\varepsilon}(t,x)}{\sqrt{\Phi(x)}}\right)\right]^2
=\left(\log u_r^{\varepsilon}(t,x)-\log\sqrt{\Phi(x)}\right)^2
\leq 2(\log u_r^{\varepsilon}(t,x))^2+2(\log \sqrt{\Phi(x)})^2$$
and
$$\int_{B_1}(\log \sqrt{\Phi(x)})^2\,{\rm d}x<\infty,$$
we get \eqref{e:l^2}.
We next give a lower bound of $({\rm II})$. By \eqref{A:jump-kernel} and \eqref{eq-j-delta},
we have for all $r\ge 1$ and $x,y\in \R^d$,
$$
r^{d+2}J^{(\delta)}(rx,ry)
\geq r^{d+2}\frac{\kappa_1}{|rx-ry|^{d+\alpha_1}}{\bf 1}_{\{|x-y|<1/r\}}
=r^{2-\alpha_1}\frac{\kappa_1}{|x-y|^{d+\alpha_1}}{\bf 1}_{\{|x-y|<1/r\}}.
$$
Then by \eqref{e:l^2} and the weighted Poincar\'e inequality (\cite[Corollary 6]{DK13},
see also the argument in \cite[Theorem 4.1]{CKK11} and \cite[Proposition 3.2]{CKK08}),
we obtain
\begin{equation}\label{eq-2-1}
\begin{split}
({\rm II})
&\geq
r^{2-\alpha_1}\iint_{B_1\times B_1}
\sqrt{\Phi(x)\Phi(y)}
\left(\log\left(\frac{u_r^{\varepsilon}(t,y)}{\sqrt{\Phi(y)}}\right)-\log\left(\frac{u_r^{\varepsilon}(t,x)}{\sqrt{\Phi(x)}}\right)\right)^2\\
&\qquad\qquad \qquad\qquad \times
\frac{\kappa_1}{|x-y|^{d+\alpha_1}}{\bf 1}_{\{|x-y|<1/r\}}\,{\rm d}x\,{\rm d}y\\
&\geq c_4\int_{B_1}\left[\log\left(\frac{u_r^{\varepsilon}(t,x)}{\sqrt{\Phi(x)}}\right)
-\left(\int_{B_1}\log\left(\frac{u_r^{\varepsilon}(t,y)}{\sqrt{\Phi(y)}}\right)\Phi(y)\,{\rm d}y\right)\right]^2\Phi(x)\,{\rm d}x\\
&=c_{4}\int_{B_1}\left[\log\left(\frac{u_r^{\varepsilon}(t,x)}{\sqrt{\Phi(x)}}\right)
-\left(H_{\varepsilon}(t)-\frac{1}{2}\int_{B_1}\Phi(y)\log \Phi(y)\,\d y\right)\right]^2\Phi(x)\,{\rm d}x
\end{split}
\end{equation}
for some positive constant $c_{4}=c_{4}(\kappa_1,d,\alpha_1, \Phi)$,
which is independent of $\delta\in (0,1)$,
$x_1\in B_1\setminus{\cal N}_{\delta}$,
$t>0$, $r\geq 1$ and $\varepsilon\in (0,1)$.
Moreover, since
\begin{equation*}
\begin{split}
(\log u_r^{\varepsilon}(t,x)-H_{\varepsilon}(t))^2
&\leq 2\left[\log\left(\frac{u_r^{\varepsilon}(t,x)}{\sqrt{\Phi(x)}}\right)
-\left(H_{\varepsilon}(t)-\frac{1}{2}\int_{B_1}\Phi(y)\log\Phi(y)\,\d y\right)\right]^2\\
&\quad
+2\left(\frac{1}{2}\log\Phi(x)-\frac{1}{2}\int_{B_1}\Phi(y)\log\Phi(y)\,{\rm d}y\right)^2,
\end{split}
\end{equation*}
the last expression in \eqref{eq-2-1} is greater than
$$
\frac{c_{4}}{2}\int_{B_1}\left(\log u_r^{\varepsilon}(t,x)-H_{\varepsilon}(t)\right)^2\Phi(x)\,{\rm d}x-c_{5}
$$
for
$$c_{5}=\frac{c_{4}}{4}
\int_{B_1} \left(\log\Phi(x)-\int_{B_1}\Phi(y)\log\Phi(y)\,{\rm d}y\right)^2
\Phi(x)\,{\rm d}x,$$
whence \eqref{eq-2} follows.
Combining \eqref{eq-1} with \eqref{eq-2}, we have \eqref{eq-g'}.
The proof is complete. \end{proof}
\begin{lem}\label{lem-dist}
Under Assumption {\bf (B)},
there exist constants $t_0\in(0,1)$ small enough and $c_*=c_*(t_0)\geq 1$ such that
the following assertions hold.
\begin{enumerate}
\item For all $\delta\in (0,1)$, $r\geq c_*$, $t\in[t_0/8,2t_0]$ and $x\in \R^d\backslash \N_\delta$,
$$\Pp^{x}\left(|Y_t^{\delta,(r)}-Y_0^{\delta,(r)}|>\frac{1}{4}\right)\leq \frac{1}{12}.$$
\item For all $\delta\in (0,1)$, $r\geq c_*$, $t\in[t_0/8, t_0]$ and $x_1\in B_{1/2}\backslash \N_\delta$,
$$\int_{B(x_1,1/4)}u_r(t,x)\,\d x\geq \frac{3}{4}.$$
\end{enumerate}
\end{lem}
\begin{proof}
(i) \ By \eqref{eq-heat-scale} and the change of variables,
we have for all $t>0$ and $x\in \R^d\backslash \N_\delta$,
\begin{equation*}
\begin{split}
\Pp^{x}\left(|Y_t^{\delta,(r)}-Y_0^{\delta,(r)}|>\frac{1}{4}\right)
&=\int_{\{|y-x|\geq 1/4\}}q_r^{\delta}(t,x,y)\,{\rm d}y\\
&=r^d\int_{\{|y-x|\geq 1/4\}}q^{\delta}(r^2t,rx,ry)\,{\rm d}y
=\int_{\{|y-rx|\geq r/4\}}q^{\delta}(r^2t,rx,y)\,{\rm d}y\\
&=\int_{\{|y-rx|\geq r/4,
|y-rx|^2\ge r^2 t \}}
q^{\delta}(r^2t,rx,y)\,{\rm d}y\\
&\quad +\int_{\{|y-rx|\geq r/4,
r^2 t> |y-rx|^2\}}
q^{\delta}(r^2t,rx,y)\,{\rm d}y\\
&=:({\rm I})+({\rm II}).
\end{split}
\end{equation*}
Since the jumping kernel $J^{(\delta)}(x,y)$ fulfills Assumption {\bf(B)},
we see by Proposition \ref{thm-upper-bound0} that
there are constants $c_i \,(i=1,2)>0$ and $t_1>0$ (all independent of $\delta\in(0,1)$)
such that for all $r^2t\ge t_1$ and $x\in \R^d\backslash \N_\delta$,
\begin{equation*}
\begin{split}
({\rm I})
&\leq \int_{\{|y-rx|\geq r/4, |y-rx|^2\ge r^2 t\}}
\frac{c_1 r^2 t}{|y-rx|^{d+2}}\,{\rm d}y\leq c_1 r^2 t \int_{\{|y-rx|\geq r/4\}}\frac{{\rm d}y}{|y-rx|^{d+2}}
=c_2t.
\end{split}
\end{equation*}
On the other hand, if $t\leq 1/16$, then $r^2 t\leq r^2/16$,
and so
$({\rm II})=0.$
Therefore, if we choose $t_2>0$ small enough such that
\begin{equation*}\label{eq-t_0}
t_2\leq \frac{1}{32} \quad \text{and} \quad
c_2t_2\leq \frac{1}{24},
\end{equation*}
then for any $r\geq \sqrt{{8t_1}/{t_2}}$ and $t\in [t_2/8, 2t_2]$,
$$\Pp^{x}\left(|Y_t^{\delta,(r)}-Y_0^{\delta,(r)}|>\frac{1}{4}\right)\leq \frac{1}{12}.$$
The desired assertion follows by taking $t_0=t_2$ and $c_*=1\vee \sqrt{8t_1/t_2}$.
\noindent
(ii) \ For an open subset $D$ of ${\R^d}$,
let $\tau_D^{Y^{\delta,(r)}}$ be the exit time of $Y^{\delta,(r)}$ from $D$.
Since $q_r^{\delta,B_1}(t,x,x_1)=q_r^{\delta,B_1}(t,x_1,x)$,
\begin{equation}\label{e:lower-u}
\begin{split}
\int_{B(x_1,1/4)}u_r(t,x)\,\d x
&=\int_{B(x_1,1/4)}q_r^{\delta,B_1}(t,x,x_1)\,\d x\\
&=\int_{B(x_1,1/4)}q_r^{\delta,B_1}(t,x_1,x)\,\d x\\
&=\Pp^{x_1}(|Y^{\delta,(r), B_1}_t-x_1|<1/4)\\
&=\Pp^{x_1}\left(|Y^{\delta,(r)}_t-x_1|<1/4, t<\tau_{B_1}^{Y^{\delta,(r)}}\right).
\end{split}
\end{equation}
Noting that
\begin{equation*}
\begin{split}
1&=\Pp^{x_1}\left(|Y^{\delta,(r)}_t-x_1|<1/4, t<\tau_{B_1}^{Y^{\delta,(r)}}\right)
+\Pp^{x_1}\left(|Y^{\delta,(r)}_t-x_1|<1/4, \tau_{B_1}^{Y^{\delta,(r)}}\leq t\right)\\
&\quad +\Pp^{x_1}\left(|Y^{\delta,(r)}_t-x_1|\geq 1/4\right)\\
&\leq \Pp^{x_1}\left(|Y^{\delta,(r)}_t-x_1|<1/4, t<\tau_{B_1}^{Y^{\delta,(r)}}\right)
+\Pp^{x_1}\left(\tau_{B_1}^{Y^{\delta,(r)}}\leq t\right)
+\Pp^{x_1}\left(|Y^{\delta,(r)}_t-x_1|\geq 1/4\right),
\end{split}
\end{equation*}
we get by \eqref{e:lower-u},
\begin{equation}\label{e:lower-u2}
\int_{B(x_1,1/4)}u_r(t,x)\,\d x
\geq 1-\Pp^{x_1}\left(\tau_{B_1}^{Y^{\delta,(r)}}\leq t\right)
-\Pp^{x_1}\left(|Y^{\delta,(r)}_t-x_1|\geq 1/4\right).
\end{equation}
Let $X=(\{X_t\}_{t\geq 0}, \{{\Pp}^x\}_{x\in {\R^d}})$
be a strong Markov process on $\R^d$, and let $\tau_D$ denote the exit time of $X$ from $D$.
Then, in the same way as in \cite[(2.18)]{BGK09},
the strong Markov property implies that for any $x\in \R^d$, $t>0$ and $r>0$,
\begin{equation}\label{e:ff7}
\begin{split}
\Pp^x (\tau_{B(x,r)}\le t)\le & \Pp^x(\tau_{B(x,r)}\le t, |X_{2t}-x|\le r/2)+\Pp^x(|X_{2t}-x|\ge r/2)\\
\le&\Pp^x(\tau_{B(x,r)}\le t, |X_{2t}-X_{\tau_{B(x,r)}}|\ge r/2)+ \Pp^x(|X_{2t}-x|\ge r/2)\\
\le&\sup_{s\le t, |z-x|\ge r}\Pp^z(|X_{2t-s}-z|\ge r/2)+ \Pp^x(|X_{2t}-x|\ge r/2)\\
\le& 2 \sup_{s\in [t,2t], z\in \R^d}\Pp^z(|X_{s}-z|\ge r/2).
\end{split}
\end{equation}
Applying it to $\{Y^{\delta,(r)}_t\}_{t\geq 0}$, we see that for any $x_1\in B_{1/2}\setminus{\cal N}_{\delta}$,
$$\Pp^{x_1}(\tau_{B_1}^{Y^{\delta,(r)}}\leq t)
\leq \Pp^{x_1}(\tau_{B(x_1,1/2)}^{Y^{\delta,(r)}}\leq t)
\leq 2 \sup_{s\in [t,2t], z\in \R^d}\Pp^z(|Y^{\delta,(r)}_{s}-z|\ge 1/4).$$
Then by (i), we obtain for any $x_1\in B_{1/2}\setminus{\cal N}_{\delta}$ and $t\in [t_0/8,t_0]$,
\begin{equation*}
\begin{split}
&\Pp^{x_1}\left(\tau_{B_1}^{Y^{\delta,(r)}}\leq t\right)
+\Pp^{x_1}\left(|Y^{\delta,(r)}_t-x_1|\geq 1/4\right)\\
&\leq 2 \sup_{s\in [t,2t], z\in \R^d}\Pp^z(|Y^{\delta,(r)}_{s}-z|\ge 1/4)
+\Pp^{x_1}\left(|Y^{\delta,(r)}_t-x_1|\geq 1/4\right)\\
&\leq 2\cdot\frac{1}{12}+\frac{1}{12}=\frac{1}{4}.
\end{split}
\end{equation*}
Hence the proof is complete by \eqref{e:lower-u2}.
\end{proof}
Now we are in a position to give the proof of Proposition \ref{thm-lower-1}.
\begin{proof}[Proof of Proposition $\ref{thm-lower-1}$]
Let $t_0\in(0,1)$ and $c_*\geq 1$ be the same constants as in Lemma \ref{lem-dist}.
We first prove that there exists a positive constant $c=c(t_0)$ such that
for all $\delta\in(0,1)$, $r\geq c_*$, $x_1\in B_{1/2}\backslash \N_\delta$
and $t_1\in [t_0/4, t_0]$,
$$\int_{B_1}\Phi(y)\log q_r^{\delta,B_1}(t_1,y,x_1)\,\d y\geq -c.$$
Our approach here is similar to that of \cite[Lemmas 3.3.1--3.3.3]{Da89} and \cite[Proof of Theorem 2.5]{Fo09}.
Fix $\varepsilon\in (0,1)$, $\delta\in (0,1)$, $x_1\in B_{1/2} \backslash \N_\delta$, $r\geq c_*$ and $t\in [t_0/8, t_0]$.
Let $K$ be a constant such that $|B(x_1,1/4)|e^{-K}=1/4$, and define
$$D_t^{\varepsilon}:=\left\{x\in B\left(x_1,{1}/{4}\right) \mid u_r^{\varepsilon}(t,x)\geq e^{-K}\right\}.$$
Then
$$
\int_{B(x_1,1/4)\setminus D_t^{\varepsilon}}u_r(t,x)\,{\rm d}x
\leq
\int_{B(x_1,1/4)\setminus D_t^{\varepsilon}}u_r^{\varepsilon}(t,x)\,{\rm d}x
\leq e^{-K}|B(x_1,1/4)|
=\frac{1}{4}.
$$
Since $r\geq 1$ and $t\leq 1$ by assumption,
we get from \eqref{e:upper1} that
\begin{equation}\label{e:upper-scale}
\begin{split}
u_r(t,x)=r^dq^{\delta, B_r}(r^2t,rx,rx_1)&\leq r^dq^{\delta}(r^2t,rx,rx_1)\\
&\leq c_{1}r^d\left((r^2 t)^{-d/2}\vee (r^2 t)^{-d/\alpha_1}\right)\leq c_{1}t^{-d/\alpha_1},
\end{split}
\end{equation}
where $c_{1}$ is a positive constant
independent of $\delta\in (0,1)$, $r\geq 1$ and $x, x_1\in B_{1/2}\backslash \N_\delta$.
Then
$$
\int_{D_t^{\varepsilon}}u_r(t,x)\,{\rm d}x
\leq \frac{c_{1}}{t^{d/\alpha_1}}|D_t^{\varepsilon}|.
$$
Combining all the estimates above with Lemma \ref{lem-dist} (ii), we have
$$
\frac{3}{4}
\leq \int_{B(x_1,1/4)}u_r(t,x)\,{\rm d}x
=\int_{D_t^{\varepsilon}}u_r(t,x)\,{\rm d}x+\int_{B(x_1,1/4)\setminus D_t^{\varepsilon}}u_r(t,x)\,{\rm d}x
\leq \frac{c_{1}}{t^{d/\alpha_1}}|D_t^{\varepsilon}|+\frac{1}{4};
$$
that is,
$$|D_t^{\varepsilon}|\geq \frac{t^{d/\alpha_1}}{2c_{1}}
\geq \frac{1}{2c_{1}}\left(\frac{t_0}{8}\right)^{d/\alpha_1}
\quad \text{for all $t\in [t_0/8,t_0]$}.$$
Furthermore, by following the argument in \cite[p.851--852]{CKK08} and using Lemma \ref{lem-g'},
there exists a positive constant $c_{2}=c_{2}(t_0)$,
which is independent of $\varepsilon\in (0,1)$, $\delta\in(0,1)$, $r\geq c_*$ and $x_1\in B_{1/2}\backslash \N_\delta$,
such that for any $t_1\in [t_0/4, t_0]$,
\begin{equation}\label{e:lower-h}
H_{\varepsilon}(t_1)=\int_{B_1}\Phi(y)\log u_r^{\varepsilon}(t_1,y)\,\d y\geq -c_{2}.
\end{equation}
Note that if $0<\varepsilon<1\wedge (c_1/t_0^{d/\alpha_1})$,
then by \eqref{e:upper-scale},
$$\frac{\varepsilon t_1^{d/\alpha_1}}{2c_1}
\leq \frac{t_1^{d/\alpha_1}}{2c_1}u_r^{\varepsilon}(t_1,y)
= \frac{t_1^{d/\alpha_1}}{2c_1}(u_r(t_1,y)+\varepsilon)
\leq \frac{1}{2}+\frac{t_0^{d/\alpha_1}\varepsilon}{2c_1}\leq 1.$$
Therefore, by the monotone convergence theorem,
\begin{equation*}
\begin{split}
\int_{B_1}\Phi(y)\log \left(\frac{t_1^{d/\alpha_1}}{2c_1}u_r^{\varepsilon}(t_1,y)\right)\,\d y
\rightarrow \int_{B_1}\Phi(y)\log \left(\frac{t_1^{d/\alpha_1}}{2c_1}u_r(t_1,y)\right)\,\d y
\quad (\varepsilon\downarrow0).
\end{split}
\end{equation*}
Then by letting $\varepsilon\downarrow0$ in \eqref{e:lower-h}, we get
$$\int_{B_1}\Phi(y)\log q_r^{\delta,B_1}(t_1,y,x_1)\,\d y=\int_{B_1}\Phi(y)\log u_r(t_1,y)\,\d y\geq -c_{2},$$
which is the desired inequality.
We next discuss the lower bound of $q^{\delta}(t,x,y)$.
By Jensen's inequality,
there exists a positive constant
$c_{3}=c_{3}(t_0,\Phi)$
such that for all $\delta\in (0,1)$, $r\geq c_*$, $t_1\in [t_0/4, t_0]$ and $x_0,x_1\in B_{1/2}\backslash \N_\delta$,
\begin{equation*}
\begin{split}
\log q_r^{\delta,B_1}(2t_1,x_0,x_1)
&=\log\left(\int_{B_1}q_r^{\delta,B_1}(t_1,x_0,y)q_r^{\delta, B_1}(t_1,y,x_1)\,{\rm d}y\right)\\
&\geq
\log\left(\int_{B_1}q_r^{\delta,B_1}(t_1,x_0,y)q_r^{\delta, B_1}(t_1,y,x_1)\Phi(y)\,{\rm d}y\right)
-\log \|\Phi\|_{\infty}\\
&\geq \int_{B_1}\log \left(q_r^{\delta,B_1}(t_1,x_0,y)q_r^{\delta, B_1}(t_1,y,x_1)\right)\Phi(y)\,{\rm d}y
-\log \|\Phi\|_{\infty}\\
&=\int_{B_1} \Phi(y)\log q_r^{\delta,B_1}(t_1,x_0,y)\,{\rm d}y
+\int_{B_1}\Phi(y)\log q_r^{\delta, B_1}(t_1,y,x_1)\,{\rm d}y\\
& \quad -\log \|\Phi\|_{\infty}\\
&\geq -c_{3};
\end{split}
\end{equation*}
that is,
\begin{equation}\label{e:heat-lower}
q_r^{\delta,B_1}(t,x_0,x_1)\geq e^{-c_{3}} \quad
\text{for all $t\in [t_0/2,2t_0]$}.
\end{equation}
As we see from the proof of Lemma \ref{lem-dist},
the positive constant $t_0$ can be arbitrarily small.
In what follows, without loss of generality we may assume that $0<t_0<1/4$.
Then for any $t\in [1/2,2]$, there exists a positive integer $k_t\geq 1$ such that
$t-k_t t_0/2\in [t_0/2,2t_0]$. In fact,
\begin{equation}\label{e:k_t}
0<\frac{1}{t_0}-4\leq \frac{t-2t_0}{t_0/2}\leq k_t\leq\frac{t-t_0/2}{t_0/2}\leq \frac{4}{t_0}-1
\end{equation}
and
$$\frac{t-t_0/2}{t_0/2}-\frac{t-2t_0}{t_0/2}=3.$$
By the semigroup property and \eqref{e:heat-lower},
we have for any $t\in [1/2,2]$ and $x_0,x_1\in B_{1/2}\backslash \N_\delta$,
\begin{equation*}
\begin{split}
r^d q^{\delta,B_r}(r^2t,rx_0,rx_1)
&=q_r^{\delta,B_1}(t,x_0,x_1)\\
&=\int_{B_1}q_r^{\delta,B_1}(t-t_0/2,x_0,z_1)q_r^{\delta,B_1}(t_0/2,z_1,x_1)\,\d z_1\\
&\geq \int_{B_{1/2}}q_r^{\delta,B_1}(t-t_0/2,x_0,z_1)q_r^{\delta,B_1}(t_0/2,z_1,x_1)\,\d z_1\\
&\geq e^{-c_3} \int_{B_{1/2}}q_r^{\delta,B_1}(t-t_0/2,x_0,z_1)\, \d z_1.
\end{split}
\end{equation*}
In the same way, the last term above is equal to
\begin{equation*}
\begin{split}
&e^{-c_3} \int_{B_{1/2}}
\left(\int_{B_1}q_r^{\delta,B_1}(t-2\cdot t_0/2,x_0,z_2)q_r^{\delta,B_1}(t_0/2,z_2,z_1)\,\d z_2\right)\d z_1\\
&\geq e^{-2c_3} \int_{B_{1/2}}\left(\int_{B_{1/2}}q_r^{\delta,B_1}(t-2\cdot t_0/2,x_0,z_2)\,\d z_2\right)\d z_1.
\end{split}
\end{equation*}
By repeating this procedure and using \eqref{e:k_t},
there exists a positive constant $c_{4}=c_{4}(t_0,\Phi)$ such that
for all $\delta\in(0,1)$, $r\geq c_*$, $t\in [1/2,2]$ and $x_0,x_1\in B_{1/2}\backslash \N_\delta$,
\begin{equation}\label{e:iteration}
\begin{split}
r^d q^{\delta,B_r}(r^2t,rx_0,rx_1)
&\geq e^{-k_t c_3} \int_{B_{1/2}}\cdots \int_{B_{1/2}}q_r^{\delta,B_1}(t-k_t t_0/2,x_0,z_{k_t})\,\d z_{k_t}\cdots\,\d z_1\\
&\geq e^{-(k_t+1) c_3}|B_{1/2}|^{k_t}
\geq c_4,
\end{split}
\end{equation} where $c_4$ is independent of $t$.
By taking $t=1$ in \eqref{e:iteration},
we find that for all $\delta\in(0,1)$, $r\geq c_*$ and $x_0,x_1\in B_{1/2}\backslash \N_\delta$,
$$q^{\delta,B_r}(r^2,rx_0,rx_1)\geq \frac{c_{4}}{r^d}.$$
Letting $r=\sqrt{t}$ in the estimate above,
we have for any $t\geq c_*^2$ and $x_0,x_1\in B_{1/2} \backslash \N_\delta$,
$$q^{\delta,B_{\sqrt{t}}}(t,\sqrt{t}x_0,\sqrt{t}x_1)\geq \frac{c_{4}}{t^{d/2}};$$
that is,
$$q^{\delta,B_{\sqrt{t}}}(t, x_0, x_1)\geq \frac{c_{4}}{t^{d/2}},\quad x_0,x_1\in B_{\sqrt{t}/2}\backslash \N_\delta.$$
Since the estimates above are uniform in the space variable,
we can replace the center of the ball by any $z_0\in{\mathbb R}^d$ in the argument above.
Hence for any $t\geq c_*^2$, $z_0\in \R^d$ and $x,y\in B(z_0,\sqrt{t}/2)\backslash \N_\delta$,
$$q^{\delta}(t,x,y)\geq q^{\delta,B(z_0,\sqrt{t})}(t, x, y)\geq \frac{c_{4}}{t^{d/2}}.$$
Note that for any $x,y\in \R^d$ with $|x-y|^2\leq t$,
there exists a point $z_0\in {\mathbb R}^d$ such that
$x,y\in B(z_0,\sqrt{t}/2)$.
Therefore, our assertion is valid for $t\geq c_*^2$.
\end{proof}
At the end of this section, we present two-sided heat kernel estimates for jump processes,
upper bounds of which have been established in Corollary \ref{c:two}.
\begin{cor}\label{c:two1}
Assume that there is a constant $\varepsilon>0$ such that for all $x,y\in \R^d$ with $|x-y|\ge 1$,
$$J(x,y)\asymp \frac{1}{|x-y|^{d+2+\varepsilon}}.$$
Then, there exist positive constants $t_0\ge 1$, $\theta_0>0$ and $c_0$ such that for all $t\ge t_0$,
$$p(t,x,y)\asymp
\begin{cases}
\displaystyle \frac{1}{t^{d/2}},& t\geq |x-y|^2,\\
\displaystyle \frac{1}{t^{d/2}}\exp\left(-\frac{c_0|x-y|^2}{t}\right),
& \displaystyle \frac{\theta_0|x-y|^2}{\log(1+|x-y|)}\leq t\leq |x-y|^2,\\
\displaystyle \frac{1}{|x-y|^{d+2+\varepsilon}}, & \displaystyle t\leq \frac{\theta_0|x-y|^2}{\log(1+|x-y|)}.
\end{cases}$$
Here we note that the constants $c_0$ and $\theta_0$ in the formula above
may differ between the upper and lower bounds. \end{cor}
\begin{proof}
The upper bound estimates have been proved in Corollary \ref{c:two},
so we only need to verify the lower bounds. By Theorem \ref{T:lower}, we have already obtained the first two cases, i.e.\
$t\geq |x-y|^2$ and $\frac{\theta_0|x-y|^2}{\log(1+|x-y|)}\leq t\leq |x-y|^2$.
Then, the proof is complete, if we prove that there exist
constants $t_0\ge 1$ and $c_1, c_2>0$ such that for all $t_0\le t\le c_1|x-y|^2$,
\begin{equation}\label{e:ff5}p(t,x,y)\ge \frac{c_2}{|x-y|^{d+2+\varepsilon}}.\end{equation}
(1) First, we claim that there are positive constants $c_0$ and $t_0$ such that
for all $t\ge t_0$ and $x\in \R^d \setminus\N$,
\begin{equation}\label{e:ff6}
\Pp^x(\tau_{B(x,c_0\sqrt{t})}\le t)\le 1/2.
\end{equation}
Indeed, we recall \eqref{e:ff7}: for any $x\in \R^d\setminus \N$ and $t,r>0$,
\begin{equation}\label{e:ff7-1}
\Pp^x (\tau_{B(x,r)}\le t)
\le 2 \sup_{s\le t, z\in \R^d}\Pp^z(|X_{2t-s}-z|\ge r/2). \end{equation}
Now, according to upper bound estimates for $p(t,x,y)$ in Corollary \ref{c:two},
there is a constant $t_0>0$ such that for all $t\ge t_0$, $r^2\ge t$ and $x\in \R^d\setminus \N$,
\begin{align*}\Pp^x(|X_t-x|\ge r)&\le c_1\left(\int_{\{|y-x|\ge r\}} t^{-d/2} \exp\left(-c_2|x-y|^2/t\right)\,\d y+\int_{\{|y-x|\ge r\}}
\frac{t}{|x-y|^{d+2+\varepsilon}}\,\d y\right)\\
&\le c_3\left(\int_{r^2/t}^\infty e^{-c_2s}s^{d/2-1}\,\d s+\int_r^\infty \frac{t}{s^{3+\varepsilon}}\,\d s\right)\\
&\le c_4\left(e^{-c_5 r^2/t}+ \frac{t}{r^{2+\varepsilon}}\right). \end{align*} In particular, taking $r\ge c_6t^{1/2}$ for some $c_6$ large enough, we find that
$$\Pp^x(|X_t-x|\ge r)\le 1/4.$$ This along with \eqref{e:ff7-1} yields \eqref{e:ff6}.
(2) Next, we will use the approach of \cite[Section 4.4]{CZ16}. Fix $t\ge t_0$ and $x,y\in \R^d\setminus \N$ with $|x-y|\ge 4c_0 t^{1/2},$
where $c_0$ is the constant in \eqref{e:ff6}. It follows from the
Chapman-Kolmogorov equation and Theorem \ref{T:lower} that
\begin{align*}
p(2t,x,y)=&\int_{\R^d} p(t,x,z)p(t,z,y)\,\d z\\
\ge &\left(\inf_{|z-y|\le 2c_0t^{1/2}} p(t,z,y)\right) \int_{\{|y-z|\le 2c_0t^{1/2}\}} p(t,x,z)\,\d z\\
\ge &c_1 t^{-d/2}\Pp^x(X_t\in B(y,2c_0t^{1/2})).
\end{align*}
For any $x\in \R^d$ and $r>0$, define
$$\sigma_{B(x,r)}=\inf\{t>0:X_t\in B(x,r)\}.$$
By the strong Markov property,
\begin{align*}&\Pp^x(X_t\in B(y,2c_0t^{1/2}))\\
&\ge \Pp^x\left(\sigma_{B(y,c_0t^{1/2})}\le t/2; \sup_{s\in\big[\sigma_{B(y,c_0t^{1/2})},t\big]}
|X_s-X_{\sigma_{B(y,c_0t^{1/2})}}|\le c_0t^{1/2}\right)\\
&\ge \Pp^x\left(\sigma_{B(y,c_0t^{1/2})}\le t/2\right)\inf_{z\in B(y,c_0t^{1/2})}\Pp^z(\tau_{B(z,c_0t^{1/2})}>t)\\
&\ge \frac{1}{2}\Pp^x\left(\sigma_{B(y,c_0t^{1/2})}\le t/2\right),
\end{align*} where we used \eqref{e:ff6} in the last inequality.
Furthermore, by the L\'evy system formula (see \cite[p.151]{BGK09} and \cite[Appendix A]{CK08}) and the fact that $|x-y|\ge 4c_0 t^{1/2},$
\begin{align*}\Pp^x\left(\sigma_{B(y,c_0t^{1/2})}\le t/2\right)
&\ge \Pp^x(X_{(t/2)\wedge \tau_{B(x,c_0t^{1/2})}}\in B(y, c_0t^{1/2}))\\
&\ge c_2\Ee^x\left(\int_0^{(t/2)\wedge \tau_{B(x,c_0t^{1/2})}} \int_{B(y, c_0t^{1/2})} \frac{\d z}{|X_s-z|^{d+2+\varepsilon}}\,\d s\right)\\
&\ge c_3 t^{d/2+1}\Pp^x(\tau_{B(x,c_0t^{1/2})}\ge t/2) \frac{1}{|x-y|^{d+2+\varepsilon}}\\
&\ge c_4t^{d/2+1}\frac{1}{|x-y|^{d+2+\varepsilon}},\end{align*}
where in the third inequality we used the facts that $|x-y|\ge 4c_0 t^{1/2}$, and
for all $s\in (0, (t/2)\wedge \tau_{B(x,c_0t^{1/2})})$ and $z\in B(y, c_0t^{1/2})$,
$$|X_s-z|\le |X_s-x|+|x-y|+|y-z|\le 2 c_0t^{1/2}+|x-y|\le 2 |x-y|;$$ and the last inequality follows from \eqref{e:ff6}.
Combining all the inequalities above, we find that for all
$t\ge t_0$ and $x,y\in \R^d\setminus \N$ with $|x-y|\ge 4c_0 t^{1/2},$
$$p(2t,x,y)\ge \frac{c_4 t}{|x-y|^{d+2+\varepsilon}},$$ which proves \eqref{e:ff5}. \end{proof}
\section{Proof of Theorem \ref{main}}
\begin{proof}[Proof of Theorem $\ref{main}$]
Throughout this proof, we set $\psi(r)=\sqrt{r\log\log r}$.
Recall that $\tau_{B(x,r)}=\inf\{t>0:X_t\notin B(x,r)\}$
for any $x\in \R^d$ and $r>0$.
(1) In this case, $\phi(s)=\log^{1+\varepsilon}(e+s)$ and so
$c^{-1}\log^{\varepsilon}(e+s)\leq \Phi(s)\leq c\log^{\varepsilon}(e+s)$ for some constant
$c\ge1$.
We follow the proof of
\cite[Theorem 3.1(1)]{SW17} first.
Setting $t_k=2^k$, we have for all $k\ge 2$ and $x\in \R^d\setminus \N$,
\begin{equation}\begin{split}\label{e:ff1}\Pp^x(|X_s-x|&\ge C_0\psi(s)\hbox{ for some }s\in [t_{k-1},t_k])\\
&\le \Pp^x(\sup_{s\in [t_{k-1},t_k]}|X_s-x|\ge C_0\psi(t_{k-1}))\le \Pp^x(\tau_{B(x,C_0\psi(t_{k-1}))}\le t_k)\\
&\le 2 \sup_{s\le t_k, z\in \R^d}\Pp^z(|X_{t_{k+1}-s}-z|\ge C_0\psi(t_{k-1})/2),
\end{split}\end{equation}
where in the last inequality we used \eqref{e:ff7}.
For any $\kappa\ge 1$, let $\theta_0$ be the constant in Theorem \ref{thm-upper-bound}.
We choose $\theta_0^*>C$ large enough such that, if $r\ge \theta_0^*\psi(t)$, then
$t\le \frac{\theta_0 r^2}{\log \Phi(r)};$ if $r\le\theta_0^*\psi(t)$, then $ t\ge \frac{\theta'_0 r^2}{\log \Phi(r)}$ for some constant $\theta_0'\in (0,1)$.
Below, we fix this $\kappa$ and $\theta_0^*$, and fix $\delta>0$ for the moment. For any $x\in \R^d \setminus \N$ and $t, C>0$ large enough, according to Theorem \ref{thm-upper-bound}, Remark \ref{rem-upper-bound}(ii) and
Proposition \ref{thm-upper-bound3} (with $\delta=1/2$),
\begin{align*} &\Pp^x(|X_t-x|\ge C \psi(t))\\
&=\int_{\{|y-x|\ge C\psi(t)\}} p(t,x,y)\,\d y\\
&\le \frac{c_1}{t^{d/2}}\int_{\{C\psi(t)\le |y-x|\le \theta^*_0\psi(t)\}} \exp\left(-\frac{c_2|x-y|^2}{t}\right) \,\d y\\
&\quad+c_3\int_{\{\theta^*_0\psi(t)\le |y-x|\le c_4\sqrt{t\log^{1+\delta}t}\}} \left( t^{-d/2} \frac{1}{\log^{\kappa \varepsilon/8} |x-y|}+
\frac{t}{|x-y|^{d+2}\log^{1+\varepsilon}|x-y|}\right)\,\d y\\
&\quad+c_5 \int_{\{|y-x|\ge c_4\sqrt{t\log^{1+\delta}t}\}} \frac{ t}{|x-y|^{d+2}\log^{(d+2)/4}\log\log (1+|x-y|)} \,\d y\\
&=:I_1+I_2+I_3,
\end{align*} where the constants $c_i (i=1,\cdots, 5)$ may depend on $\kappa$ and $\delta$.
First, it holds that
\begin{align*}I_2\le& c_{21}\left[(t\log^{1+\delta}t)^{d/2} \left (t^{-d/2}\log^{-\kappa \varepsilon/8} t \right)+
\int_{\theta^*_0\psi(t)}^\infty \frac{t}{r^3\log^{1+\varepsilon}r}\,\d r \right]\\
\le & c_{22}\left[\log^{-((\kappa \varepsilon/8)-((1+\delta)d/2))} t +
\frac{1}{\log^{1+\varepsilon} t} \right]. \end{align*} Taking $\kappa\ge1$ large enough such that
$ \kappa \varepsilon/8\ge (1+\delta)d/2+1+\varepsilon,$ we find that
$$I_2\le \frac{c_{23}}{\log^{1+\varepsilon} t}.$$ Second, we fix $\kappa$ as above. We find that
\begin{align*}I_1\le&\frac{c_{11}}{t^{d/2}}\int_{\{|y-x|\ge C\psi(t)\}} \exp\left(-\frac{c_2|x-y|^2}{t}\right) \,\d y\\
\le& c_{12}\int_{C^2\log\log t}^\infty\exp(-c_2s) s^{d/2-1}\,\d s\le c_{13}(\log t)^{-C^2 c_2/2}, \end{align*} where $c_2$ depends on $\kappa$ above. Choosing $C>1$ large enough such that
$C^2 c_2/2\ge 1+\varepsilon$, we get that $$I_1\le \frac{c_{14}}{\log^{1+\varepsilon} t}.$$
Third, it is easy to see that
$$I_3\le \frac{c_{31}}{\log^{1+\delta} t}.$$ In particular, letting $\delta=\varepsilon$,
$$I_3\le \frac{c_{32}}{\log^{1+\varepsilon} t}.$$
By all the estimates above, we obtain that there is a constant $C_1>0$ such that
for any $x\in \R^d \setminus \N$ and $t, C>0$ large enough,
\begin{equation}\label{e:ff2} \Pp^x(|X_t-x|\ge C \psi(t))\le \frac{C_1}{\log^{1+\varepsilon} t}.\end{equation}
According to \eqref{e:ff1} and \eqref{e:ff2}, we know that there is a constant $C_2>0$ such that for all $k\ge 2$, $C_0>0$ large enough and $x\in \R^d\setminus \N$,
$$\Pp^x(|X_s-x|\ge C_0\psi(s)\hbox{ for some }s\in [t_{k-1},t_k])\le \frac{C_2}{k^{1+\varepsilon}}.$$ This together with the Borel-Cantelli lemma proves the first desired assertion.
(2) For any $c>0$ and $k\ge 1$, set $t_k=2^k$ and
$$B_k=\{|X_{t_{k+1}}-X_{t_k}|\ge c\psi(t_{k-1})\}.$$
Denote by $({F}_t)_{t\ge0}$ the natural filtration of the process $X$.
Then, for every $x\in \R^d\setminus \N$ and $k\ge 1$, by the Markov property and Theorem \ref{T:lower},
\begin{align*}\Pp^x(B_k|{F}_{t_k})\ge&\min_{z\in \R^d\setminus \N}\Pp^z(|X_{t_k}-z|\ge c\psi(t_{k-1}))\\
\ge &\min_{z\in \R^d\setminus \N}\int_{\{c\psi(t_{k-1})\le |y-z|\le t_{k}\}} p(t_k,z,y)\,\d y\\
\ge& c_1t_{k}^{-d/2}\min_{z\in \R^d\setminus \N}\int_{\{c\psi(t_{k-1})\le |y-z|\le t_{k}\}} \exp\left(-\frac{c_2|z-y|^2}{t_{k}}\right)\,\d y\\
\ge&c_3 \int_{c^2\log\log (t_{k-1})/2}^{t_k} e^{-c_2s} s^{d/2-1}\,\d s\\
\ge& c_4 k^{-c^2c_2}.
\end{align*}
Choosing $c>0$ small enough such that $c^2c_2\in(0,1]$, we have
$$\sum_{k=1}^\infty \Pp^x(B_k|{F}_{t_k})=\infty.$$
Then by the second Borel-Cantelli lemma,
$$\Pp^x(\limsup B_k)=1.$$
This yields the desired assertion, see e.g. the proof of \cite[Theorem 3.1(2)]{SW17}.
\end{proof}
\noindent \textbf{Acknowledgements.}
The research of Yuichi Shiozawa is supported
by JSPS KAKENHI Grant Number JP26400135, JP17K05299,
and Research Institute for Mathematical Sciences, Kyoto University.
The research of Jian Wang is supported by National
Natural Science Foundation of China (No.\ 11522106), the Fok Ying Tung
Education Foundation (No.\ 151002), National Science Foundation of
Fujian Province (No.\ 2015J01003), the Program for Probability and Statistics: Theory and Application (No. IRTL1704), and Fujian Provincial
Key Laboratory of Mathematical Analysis and its Applications
(FJKLMAA).
\section{Introduction}
Our work is motivated by making the following theorems explicit for type A, where the finite Weyl group is the symmetric group $\mathfrak{S}_n$ and the affine Weyl group is $\tilde{\mathfrak{S}}_n$.
\begin{theorem}[J.~L. Waldspurger, 2005] \cite{waldspurger2007remarque}
Let $W$ be a Weyl group presented as a reflection group on a Euclidean vector space $V$. Let $C_{\omega}\subset V$ be the open cone over the fundamental weights and $C_R \subset V$ the closed cone spanned by the positive roots. Let the cone associated with a group element $w$ be $C_w:=(I-w)C_{\omega}$ (where $I$ is the identity transformation on $V$). One has the decomposition
$$C_R=\displaystyle \bigsqcup_{w\in W} C_w$$
\end{theorem}
\begin{figure}
\centering
\includegraphics[scale=.7]{a2wald-tikz}
\caption{The Waldspurger Decomposition for $A_2=\mathfrak{S}_3$}
\label{fig:a2wald}
\end{figure}
\begin{theorem}[E. Meinrenken, 2006] \cite{meinrenken2009tilings}\cite{bibikov2009tilings}
Let the affine Weyl group for a crystallographic Coxeter system be denoted $W^a$ and recall that
$W^a= \Lambda\rtimes W$ where the coroot lattice $\Lambda \subset V$ acts by translations. Let $A \subset C$ denote the Weyl
alcove, with $0 \in A$. Then the images $V_w = (id-w)A$, $w \in W^a$ are all disjoint, and their union is all of $V$.
That is, $$V=\bigsqcup_{w\in W^a} V_w$$
\end{theorem}
\begin{figure}
\centering
\includegraphics[scale=.7]{a2meintile}
\caption{The Meinrenken tiling for $A_2=\mathfrak{S}_3$.}
\label{fig:a2mein}
\end{figure}
We will define the \textbf{Meinrenken tile} to be $\displaystyle\bigsqcup_{w\in W} V_w$, restricting to a copy of the finite Weyl group inside of the affine Weyl group. The semi-direct product of this finite Weyl group with the coroot lattice simply translates the Meinrenken tile and so this restriction is convenient from a combinatorial perspective. Although it is built out of simplices, the Meinrenken tile is not a simplicial complex, nor even a CW complex, and it need not even be convex.
In type A, where our Weyl group elements are permutations, both $C_{\pi}=C_w$ and $V_{\pi}=V_w$ can be found in a purely combinatorial way by considering what we call the \textit{Waldspurger transform of the permutation} $\pi$, denoted $\mathbf{WT}(\pi)$. $\mathbf{WT}(\pi)$ is an $(n-1) \times (n-1)$ matrix constructed from the $n \times n$ permutation matrix via a \emph{transformation diagram} like the one at the top of Section \ref{fund trans section}. Section \ref{Section 2} is dedicated to defining the Waldspurger transform of a permutation, giving a combinatorial description of the transform, and verifying that the combinatorial description agrees with the definition. The proof of Theorem \ref{wald transform thm} is fundamental, but not very illuminating, and the impatient reader is invited to skip to Section \ref{fund trans section}. The remainder of the paper is organized as follows:
In Section \ref{geom obs section} we informally discuss the geometry of the Meinrenken tile, particularly its symmetries and irregularities. In Section \ref{um vectors section} we return to the combinatorics of Waldspurger matrices. We classify their row and column vectors by showing that they satisfy certain unimodality conditions, and call them ``UM vectors''. We give explicit bijections between UM vectors and unimodal Motzkin paths, abelian ideals in the Lie algebra $\mathfrak{sl}_n(\mathbb{C})$, tableaux with bounded hook lengths, and coroots in a certain polytope studied by Panyushev, Peterson, and Kostant \cite{panyushev2011abelian}. In Section \ref{entropy,asm, gen trans section} we show that componentwise comparison of Waldspurger matrices is Bruhat order and that summing all of the entries of the matrix gives the rank of the corresponding permutation in the lattice of alternating sign matrices (or monotone triangles). Inspired by this, we extend the Waldspurger transform to alternating sign matrices and exhibit a lattice isomorphism between these generalized Waldspurger matrices and monotone triangles. This lattice is known to be distributive, with join-irreducibles the bigrassmannian elements: permutations with exactly one left descent and one right descent \cite{sch}. We show that these correspond to the Waldspurger matrices which are determined by a single entry. In Section \ref{types b&c section} we explore types B and C. We define the Waldspurger transform with respect to any crystallographic root system $\Phi$ and exhibit a combinatorial means of computing type B and C Waldspurger matrices by folding centrally symmetric type A Waldspurger matrices. Here row and column vectors are not in bijection with abelian ideals, and componentwise comparison is no longer Bruhat order.
It is known that the Dedekind-MacNeille completion of Bruhat order is still distributive for types B and C, and that the join-irreducibles are a strict subset of the bigrassmannian elements. Various descriptions of the join-irreducibles have been given in \cite{sch}, \cite{geck1997bases}, \cite{Reading2002}, and \cite{2016arXiv161208670A}. Nevertheless, we present a conjectural description for these join-irreducibles: they correspond to the type C Waldspurger matrices specified by a single entry.
\section{The Waldspurger Transform for Permutations}\label{Section 2}
\begin{definition} Let $\phi$ denote the reflection representation of the symmetric group $$\phi:\mathfrak{S}_n \longrightarrow GL_{n-1}(\mathbb{R}).$$ The Waldspurger matrix $\mathbf{WT}(g)$ of a permutation $g$ is the matrix of $\phi(1)-\phi(g)$ applied to the matrix with columns given by the fundamental weights, expressed in root coordinates.
\end{definition}
Our first theorem gives a concrete combinatorial method for finding the Waldspurger matrix associated with a given permutation. It is helpful to first recall the Cartan matrix of the type A root system.
The \textit{Cartan matrix} of a root system is the matrix whose elements are the scalar products
$$a_{{ij}}=2 \frac {(r_{i},r_{j}) }{(r_{i},r_{i})}$$
(sometimes called the Cartan integers) where the $r_i$'s are the simple roots. Recall that the root system $A_{n-1}$ has as its simple roots the vectors $a_i=e_i-e_{i+1}$ for $i=1,\dots , n-1$. One can verify that the Cartan matrix for this root system has $2$'s on the main diagonal, $(-1)$'s on the superdiagonal and subdiagonal, and $0$'s elsewhere. Its columns express the simple roots in the basis of fundamental weights.
\begin{theorem} \label{wald transform thm}
Let $P$ be the $(n-1) \times (n-1)$ matrix for the permutation $\pi \in \mathfrak{S}_n$ expressed in root coordinates. Let $C$ be the $(n-1) \times (n-1)$ Cartan matrix and let $D$ be the $(n-1) \times (n-1)$ matrix
$$D_{i,j}= \begin{cases}
\displaystyle\sum_{\substack{a\leq i\\b>j}}\pi_{a,b} & i\leq j \\
\displaystyle\sum_{\substack{a> i\\b\leq j}}\pi_{a,b} & i\geq j\\
\end{cases}.
$$
Then we have that $\mathbf{(I-P)=DC}$.
\end{theorem}
\begin{proof}
We use the fact that $C=A^TA$ where $A$ is the $n \times (n-1)$ matrix $$A=\begin{pmatrix*} 1 & 0 & 0 & \dots & 0\\
-1 & 1 & 0 & \dots & 0\\
0 & -1 & 1 & \dots & 0\\
\vdots & \vdots & \vdots & \ddots & \vdots\\
0 & 0 & 0 & \dots & 1\\
0 & 0 & 0 & \dots & -1\end{pmatrix*}$$
to rewrite the conclusion:
$$P = I-DA^TA$$
We multiply both sides on the left by $A$:
$$AP = A-ADA^TA$$
We then make the observation that $AP=\pi A$. Making this substitution and canceling the $A$'s on the right we obtain:
$$ \pi =I-ADA^T$$
We now verify this identity directly.
\newline
Multiplying $A$ and $D$, we see that $(AD)_{i,j}=D_{i,j}-D_{i-1,j}$ with the understanding $D_{0,k}:=0$ for all $k$.
One more multiplication gives us that $$(ADA^T)_{i,j}=D_{i,j}-D_{i-1,j}-D_{i,j-1}+D_{i-1,j-1}$$
once again, with the understanding that if either $i=0$ or $j=0$ then $D_{i,j}:=0$.
\begin{case}
If $i=j$ then
\begin{align*} (ADA^T)_{i,j} &= D_{i,j}-D_{i-1,j}-D_{i,j-1}+D_{i-1,j-1} \\
&= \sum_{\substack{a\leq i\\b>j}}\pi_{a,b} - \sum_{\substack{a\leq i-1\\b>j}}\pi_{a,b}- \sum_{\substack{a> i\\b\leq j-1}}\pi_{a,b}+ \sum_{\substack{a> i-1\\b\leq j-1}}\pi_{a,b}\\
&= \sum_{k \neq j} \pi_{i,k}\\
&= \begin{cases} 0 & \pi_{i,j}=1\\ 1 & \pi_{i,j}=0 \end{cases}
\end{align*}
\textrm{To understand the second-to-last equality, observe that we are summing over the following terms of permutation matrices:}
\bigbreak
\resizebox{.25\textwidth}{!}{$\begin{pmatrix}
\ddots & \vdots & \multicolumn{1}{c|}{\vdots} & \vdots & \udots \\
\dots & \pi_{i-1,j-1} &\multicolumn{1}{c|}{ \pi_{i-1,j}} & \pi_{i-1,j+1} & \dots \\
\dots & \pi_{i,j-1} & \multicolumn{1}{c|}{\pi_{i,j}} & \pi_{i,j+1} & \dots \\ \cmidrule{4-5}
\dots & \pi_{i+1,j-1} & \pi_{i+1,j} & \pi_{i+1,j+1} & \dots \\
\udots & \vdots & \vdots & \vdots & \ddots \end{pmatrix}$}-
\resizebox{.25\textwidth}{!}{$\begin{pmatrix}
\ddots & \vdots & \multicolumn{1}{c|}{\vdots} & \vdots & \udots \\
\dots & \pi_{i-1,j-1} &\multicolumn{1}{c|}{ \pi_{i-1,j}} & \pi_{i-1,j+1} & \dots \\ \cmidrule{4-5}
\dots & \pi_{i,j-1} & \pi_{i,j} & \pi_{i,j+1} & \dots \\
\dots & \pi_{i+1,j-1} & \pi_{i+1,j} & \pi_{i+1,j+1} & \dots \\
\udots & \vdots & \vdots & \vdots & \ddots \end{pmatrix}$}-
\resizebox{.25\textwidth}{!}{$\begin{pmatrix}
\ddots & \vdots & \vdots & \vdots & \udots \\
\dots & \pi_{i-1,j-1} & \pi_{i-1,j} & \pi_{i-1,j+1} & \dots \\
\dots & \pi_{i,j-1} & \pi_{i,j} & \pi_{i,j+1} & \dots \\ \cmidrule{1-2}
\dots & \pi_{i+1,j-1} & \multicolumn{1}{|c}{\pi_{i+1,j}} & \pi_{i+1,j+1} & \dots \\
\udots & \vdots & \multicolumn{1}{|c}{\vdots} & \vdots & \ddots \end{pmatrix}$}+
\resizebox{.25\textwidth}{!}{$\begin{pmatrix}
\ddots & \vdots & \vdots & \vdots & \udots \\
\dots & \pi_{i-1,j-1} & \pi_{i-1,j} & \pi_{i-1,j+1} & \dots \\ \cmidrule{1-2}
\dots & \pi_{i,j-1} & \multicolumn{1}{|c}{\pi_{i,j}} & \pi_{i,j+1} & \dots \\
\dots & \pi_{i+1,j-1} & \multicolumn{1}{|c}{\pi_{i+1,j}} & \pi_{i+1,j+1} & \dots \\
\udots & \vdots & \multicolumn{1}{|c}{\vdots} & \vdots & \ddots \end{pmatrix}$}
\bigskip
Thus, $(I-ADA^T)_{i,j}=\pi_{i,j}$ for this case.
\end{case}
\bigskip
\begin{case}
If $i<j$ then
\begin{align*} (ADA^T)_{i,j} &= D_{i,j}-D_{i-1,j}-D_{i,j-1}+D_{i-1,j-1} \\
&= \sum_{\substack{a\leq i\\b>j}}\pi_{a,b} - \sum_{\substack{a\leq i-1\\b>j}}\pi_{a,b}- \sum_{\substack{a\leq i\\b> j-1}}\pi_{a,b}+ \sum_{\substack{a\leq i-1\\b> j-1}}\pi_{a,b}\\
&= -\pi_{i,j}\\
\end{align*}
\textrm{This last equality is, again, easier to understand visually:}
\bigbreak
\resizebox{.25\textwidth}{!}{$\begin{pmatrix}
\ddots & \vdots & \multicolumn{1}{c|}{\vdots} & \vdots & \udots \\
\dots & \pi_{i-1,j-1} &\multicolumn{1}{c|}{ \pi_{i-1,j}} & \pi_{i-1,j+1} & \dots \\
\dots & \pi_{i,j-1} & \multicolumn{1}{c|}{\pi_{i,j}} & \pi_{i,j+1} & \dots \\ \cmidrule{4-5}
\dots & \pi_{i+1,j-1} & \pi_{i+1,j} & \pi_{i+1,j+1} & \dots \\
\udots & \vdots & \vdots & \vdots & \ddots \end{pmatrix}$}-
\resizebox{.25\textwidth}{!}{$\begin{pmatrix}
\ddots & \vdots & \multicolumn{1}{c|}{\vdots} & \vdots & \udots \\
\dots & \pi_{i-1,j-1} &\multicolumn{1}{c|}{ \pi_{i-1,j}} & \pi_{i-1,j+1} & \dots \\ \cmidrule{4-5}
\dots & \pi_{i,j-1} & \pi_{i,j} & \pi_{i,j+1} & \dots \\
\dots & \pi_{i+1,j-1} & \pi_{i+1,j} & \pi_{i+1,j+1} & \dots \\
\udots & \vdots & \vdots & \vdots & \ddots \end{pmatrix}$}-
\resizebox{.25\textwidth}{!}{$\begin{pmatrix}
\ddots & \vdots & \multicolumn{1}{|c}{\vdots} & \vdots & \udots \\
\dots & \pi_{i-1,j-1} &\multicolumn{1}{|c}{ \pi_{i-1,j}} & \pi_{i-1,j+1} & \dots \\
\dots & \pi_{i,j-1} & \multicolumn{1}{|c}{\pi_{i,j}} & \pi_{i,j+1} & \dots \\ \cmidrule{3-5}
\dots & \pi_{i+1,j-1} & \pi_{i+1,j} & \pi_{i+1,j+1} & \dots \\
\udots & \vdots & \vdots & \vdots & \ddots \end{pmatrix}$}+
\resizebox{.25\textwidth}{!}{$\begin{pmatrix}
\ddots & \vdots & \multicolumn{1}{|c}{\vdots} & \vdots & \udots \\
\dots & \pi_{i-1,j-1} &\multicolumn{1}{|c}{ \pi_{i-1,j}} & \pi_{i-1,j+1} & \dots \\ \cmidrule{3-5}
\dots & \pi_{i,j-1} & \pi_{i,j} & \pi_{i,j+1} & \dots \\
\dots & \pi_{i+1,j-1} & \pi_{i+1,j} & \pi_{i+1,j+1} & \dots \\
\udots & \vdots & \vdots & \vdots & \ddots \end{pmatrix}$}
\bigskip
Thus, $(I-ADA^T)_{i,j}=\pi_{i,j}$ for this case as well.
\end{case}
\bigskip
\begin{case}
If $i>j$ then
\begin{align*} (ADA^T)_{i,j} &= D_{i,j}-D_{i-1,j}-D_{i,j-1}+D_{i-1,j-1} \\
&= \sum_{\substack{a> i\\b\leq j}}\pi_{a,b} - \sum_{\substack{a> i-1\\b\leq j}}\pi_{a,b}- \sum_{\substack{a> i\\b\leq j-1}}\pi_{a,b}+ \sum_{\substack{a> i-1\\b\leq j-1}}\pi_{a,b}\\
&= -\pi_{i,j}\\
\end{align*}
\textrm{As before, the final equality is apparent with a visual:}
\bigbreak
\resizebox{.25\textwidth}{!}{$\begin{pmatrix}
\ddots & \vdots & \vdots & \vdots & \udots \\
\dots & \pi_{i-1,j-1} & \pi_{i-1,j} & \pi_{i-1,j+1} & \dots \\
\dots & \pi_{i,j-1} & \pi_{i,j} & \pi_{i,j+1} & \dots \\ \cmidrule{1-3}
\dots & \pi_{i+1,j-1} & \multicolumn{1}{c|}{\pi_{i+1,j}} & \pi_{i+1,j+1} & \dots \\
\udots & \vdots & \multicolumn{1}{c|}{\vdots} & \vdots & \ddots \end{pmatrix}$}-
\resizebox{.25\textwidth}{!}{$ \begin{pmatrix}
\ddots & \vdots & \vdots & \vdots & \udots \\
\dots & \pi_{i-1,j-1} & \pi_{i-1,j} & \pi_{i-1,j+1} & \dots \\ \cmidrule{1-3}
\dots & \pi_{i,j-1} & \multicolumn{1}{c|}{\pi_{i,j}} & \pi_{i,j+1} & \dots \\
\dots & \pi_{i+1,j-1} & \multicolumn{1}{c|}{\pi_{i+1,j}} & \pi_{i+1,j+1} & \dots \\
\udots & \vdots & \multicolumn{1}{c|}{\vdots} & \vdots & \ddots \end{pmatrix}$
}-
\resizebox{.25\textwidth}{!}{$\begin{pmatrix}
\ddots & \vdots & \vdots & \vdots & \udots \\
\dots & \pi_{i-1,j-1} & \pi_{i-1,j} & \pi_{i-1,j+1} & \dots \\
\dots & \pi_{i,j-1} & \pi_{i,j} & \pi_{i,j+1} & \dots \\ \cmidrule{1-2}
\dots & \pi_{i+1,j-1} & \multicolumn{1}{|c}{\pi_{i+1,j}} & \pi_{i+1,j+1} & \dots \\
\udots & \vdots & \multicolumn{1}{|c}{\vdots} & \vdots & \ddots \end{pmatrix}$}+
\resizebox{.25\textwidth}{!}{$\begin{pmatrix}
\ddots & \vdots & \vdots & \vdots & \udots \\
\dots & \pi_{i-1,j-1} & \pi_{i-1,j} & \pi_{i-1,j+1} & \dots \\ \cmidrule{1-2}
\dots & \pi_{i,j-1} & \multicolumn{1}{|c}{\pi_{i,j}} & \pi_{i,j+1} & \dots \\
\dots & \pi_{i+1,j-1} & \multicolumn{1}{|c}{\pi_{i+1,j}} & \pi_{i+1,j+1} & \dots \\
\udots & \vdots & \multicolumn{1}{|c}{\vdots} & \vdots & \ddots \end{pmatrix}$}
\bigskip
Thus, $(I-ADA^T)_{i,j}=\pi_{i,j}$ in this final case.
\end{case}
\end{proof}
Because the inverse of the Cartan matrix expresses the fundamental weights in simple root coordinates, we may multiply both sides of the equation above on the right by $\mathbf{C}^{-1}$ and observe
$$\mathbf{D}=\mathbf{WT}(\pi).$$
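Theorem \ref{wald transform thm} is straightforward to check numerically. The following Python sketch (our own illustration; the function names are ours and not standard) builds $D$ from the block sums in the theorem and verifies $\pi = I - ADA^{T}$ for the running example $456213$:

```python
def perm_matrix(perm):
    """0-indexed one-line permutation -> n x n permutation matrix."""
    n = len(perm)
    return [[1 if perm[a] == b else 0 for b in range(n)] for a in range(n)]

def waldspurger(perm):
    """The matrix D of the theorem, computed from the block sums."""
    n, P = len(perm), perm_matrix(perm)
    D = [[0] * (n - 1) for _ in range(n - 1)]
    for i in range(1, n):              # i, j are 1-indexed as in the theorem
        for j in range(1, n):
            if i <= j:                 # sum over a <= i, b > j
                D[i-1][j-1] = sum(P[a][b] for a in range(i) for b in range(j, n))
            else:                      # sum over a > i, b <= j
                D[i-1][j-1] = sum(P[a][b] for a in range(i, n) for b in range(j))
    return D

def mat_mul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y))) for j in range(len(Y[0]))]
            for i in range(len(X))]

# A expresses the simple roots a_i = e_i - e_{i+1}; check pi = I - A D A^T
perm = [3, 4, 5, 1, 0, 2]              # one-line notation 456213, 0-indexed
n = len(perm)
A = [[1 if a == i else (-1 if a == i + 1 else 0) for i in range(n - 1)] for a in range(n)]
At = [list(row) for row in zip(*A)]
D = waldspurger(perm)
ADAt = mat_mul(mat_mul(A, D), At)
I = [[1 if a == b else 0 for b in range(n)] for a in range(n)]
assert [[I[a][b] - ADAt[a][b] for b in range(n)] for a in range(n)] == perm_matrix(perm)
assert D == [[1, 1, 1, 0, 0],
             [1, 2, 2, 1, 0],
             [1, 2, 3, 2, 1],
             [1, 1, 2, 2, 1],
             [0, 0, 1, 1, 1]]
```

The final assertion recovers the Waldspurger matrix of $456213$ pictured in Section \ref{fund trans section}.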
\subsection{The Fundamental Transformation}\label{fund trans section}
Let $\pi \in \mathfrak{S}_n$ be expressed as an $n \times n$ permutation matrix. For aesthetics, our examples put the entries of $\pi$ on a grid, leave off the zeros, and use stars instead of ones. Construct the $(n-1) \times (n-1)$ Waldspurger matrix $\mathbf{WT}(\pi)$ in the spaces between the entries of the permutation matrix as follows:
If an entry is on or above the main diagonal, count the number of stars above and to the right, and put that count in the space. If the entry is on or below the main diagonal, count the number of stars below and to the left and put that count in the space. Note that entries on the diagonal are well-defined: the two counting rules agree there.
As an example, here is the Waldspurger matrix for the permutation $456213\in \mathfrak{S}_6$.
\begin{center}
\includegraphics[]{waldmatexample}
\end{center}
Now suppose $M$ is a Waldspurger matrix for the permutation $\pi$, with columns $c_1,c_2,\dots,c_{n-1}$. To return to the language of the Waldspurger and Meinrenken theorems we have:
\begin{equation}
C_M:=C_{\pi}=\left\{ \displaystyle\sum_{i=1}^{n-1} a_ic_i \vert \hspace{10pt} a_i\in \mathbb{R}_{\geq 0} \right\}
\end{equation}
\begin{equation}V_M:=V_{\pi}=\left\{\displaystyle\sum_{i=1}^{n-1} a_ic_i \vert \hspace{10pt} a_i \in \mathbb{R}_{\geq 0}\textrm{ and }\sum a_i\leq1 \right\}
\end{equation}
It is at times convenient to study the boundary of the Meinrenken tile, so we will also define
\begin{equation}\Delta_M:=\Delta_{\pi}:=\left\{\displaystyle\sum_{i=1}^{n-1} a_ic_i \vert \hspace{10pt}a_i \in \mathbb{R}_{\geq 0}\textrm{ and }\sum a_i=1\right\}
\end{equation}
\section{Geometric Observations}\label{geom obs section}
Our first example, in Figures \ref{fig:a2wald} and \ref{fig:a2mein}, was in many ways too nice. One may be tempted to study the Meinrenken tile or a slice of the root cone as a simplicial complex, or at the very least a regular CW complex. Going up even one dimension presents several unforeseen complications. For starters, our Meinrenken tile is no longer convex! See, for example, the right side of Figure \ref{meinzome}, constructed out of zometools. (The two yellow edges and one blue edge coming out from the origin are the fundamental weights.)
\begin{figure}
\begin{center}
\begin{minipage}[t][][b]{.3\textwidth}
\includegraphics[scale=1.5]{3-cone-slice}
\end{minipage}
\hspace{100pt}
\begin{minipage}[t][][b]{.3\textwidth}
\includegraphics[scale=.070]{meinzomelabeled}
\end{minipage}
\end{center}
\caption{A slice of the root cone of type $A_3$ with points labeled in root coordinates, along with the corresponding $A_3$ Meinrenken tile.}
\label{meinzome}
\end{figure}
Observe from the left side of Figure \ref{meinzome} that the slice of the root cone fails to be a simplicial or regular CW complex. The top triangle intersects the two below it along ``half edges''. One might consider it instead as a degenerate square to fix this impediment, but from the Meinrenken tile, it seems this new vertex should rightly be the fundamental weight with root coordinates $(\frac{1}{2},1,\frac{1}{2})$ and not the vertex $(1,2,1)$. If we wish to proceed in this manner, we must then include $(\frac{1}{2},1,\frac{1}{2})$ as a vertex for the two triangles $110,121,111$ and $111,121,011$ and consider them as degenerate tetrahedra. This sort of topological completion via intersecting facets has proven to be a rabbit hole with less fruit than one might hope for. Instead let us turn our attention back to the symmetric group, and consider Figure \ref{oneline}.
\begin{figure}
\begin{center}
\includegraphics[scale=.90]{a3oneline}
\includegraphics[scale=.90]{a3cone-cycle-notation}
\caption{A slice of the root cone of type $A_3$, labeled by permutations of $\mathfrak{S}_4$ in one-line and cycle notation}
\label{oneline}
\end{center}
\end{figure}
Observe that the dimension of a simplex in the cone slice relates to the number of cycles (counting fixed points as one-cycles) of the corresponding permutation. The four-cycles are the triangles, the three-cycles and disjoint two-cycles are the edges, and the transpositions are vertices. This has been known for some time \cite{bibikov2009tilings} and can be seen as a corollary to the Chevalley-Shephard-Todd theorem \cite{Bourbaki:2008:LGL:1502204}.
The astute observer will notice that there are two permutations missing in the picture. The identity corresponds to the cone point which we cut off, and the vertical edge in the center we left unlabeled, as we feel that it (along with the starred edge $3412$) deserves some discussion. It corresponds to the permutation $4321$, and its $3 \times 3$ Waldspurger matrix has all entries equal to one except for a two in the middle. If we consider the columns of each Waldspurger matrix as being ordered from left to right, the cones in the Waldspurger decomposition are endowed with an orientation. The orientation appears to be consistent, but what does it say in the case of this permutation? It appears to go first up from $(1,1,1)$ to $(1,2,1)$ and then back down. The starred edge, $3412$, is also worth mentioning. Its Waldspurger matrix has first column $(1,1,0)$, second column $(1,2,1)$, and third column $(0,1,1)$, so it is perhaps better seen as a degenerate triangle than as an edge. Looking at the Meinrenken tile, we see that $\Delta_{3,4,1,2}$ is actually a triangle. The strangeness in the Meinrenken picture comes from the fact that $V_{3,4,1,2}$ is a square and not a tetrahedron.
Despite all of these collapses in dimension, there is still a fair amount of symmetry in the Meinrenken tile.
\begin{theorem}
Let $R$ denote reflection through the affine hyperplane orthogonal to the longest positive root, $\theta$, at height one. Then $R$ is an involution on the set of $\Delta_{\pi}$'s. At the level of permutations, this involution is just applying the transposition $(1,n)$ on the left.
$$R( \Delta_{\pi})=\Delta_{(1,n)\pi}.$$
In contrast, applying the transposition $(1,n)$ on the right is the gluing map for using multiple Meinrenken tiles to tile space. The left-right symmetry is conjugation by the longest element in the Coxeter group.
\end{theorem}
\begin{proof}
Conjugation by the longest element is known to induce a left-right symmetry in the root lattice. In the next section we will see that UM vectors (the columns of Waldspurger matrices) are really special order filters in this lattice, and inherit this same action.\newline
To prove the other two statements, we will consider how the transformation diagram changes when one applies the transposition $(1,n)$ on the left (respectively right). The two moving stars will cause $\theta$ to be subtracted from all columns (respectively rows) starting and ending with $1$'s, and to be added to all columns (respectively rows) starting and ending with $0$'s. Adding or subtracting $\theta$'s from rows is acting by translation on the $\Delta_{\pi}$'s, and since this transformation preserves being a Waldspurger matrix, it must be the gluing map for attaching multiple Meinrenken tiles to tile space.\newline
In contrast, adding $\theta$'s to columns is the reflection $R$.
Indeed, consider where $R$ sends column vectors. If we let $P$ denote projection onto $\theta$, then $v \mapsto (id-2P)v+\theta$. In root coordinates, this projection is described by the matrix $\frac{2\theta \theta^{T}C}{\theta^{T}C\theta}=JC$ where $J$ is the matrix of all ones and $C$ is the Cartan matrix. One may verify that $$JC=\left[\begin{matrix}1&0&\dots&0&1\\1&0&\dots&0&1\\ \vdots&\vdots&\vdots&\vdots &\vdots\\1&0&\dots&0&1\\1&0&\dots&0&1 \end{matrix}\right]$$
and thus $$v \mapsto (I-JC)v+\theta =v-(v_1+v_{n-1})\theta +\theta= \begin{cases}
v & \textrm{ if } v_1+v_{n-1}=1\\
v+\theta & \textrm{ if } v_1+v_{n-1}=0 \\
v-\theta & \textrm{ if } v_1+v_{n-1}=2.
\end{cases}
$$
\end{proof}
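Both the description of $JC$ and the involutivity of $R$ can be checked directly. The following Python sketch is our own illustration (with $\theta=(1,\dots,1)$ in root coordinates, as in the proof):

```python
def cartan(m):
    """Type A Cartan matrix of rank m: 2's on the diagonal, -1's adjacent."""
    return [[2 if i == j else (-1 if abs(i - j) == 1 else 0) for j in range(m)]
            for i in range(m)]

m = 5
C = cartan(m)
# J is the all-ones matrix, so every row of JC is the vector of column sums of C
JC_row = [sum(C[k][j] for k in range(m)) for j in range(m)]
assert JC_row == [1, 0, 0, 0, 1]          # first and last columns 1, zeros between

def R(v):
    """v -> v - (v_1 + v_{n-1}) theta + theta, with theta = (1,...,1)."""
    t = v[0] + v[-1]
    return [x - t + 1 for x in v]

v = [1, 2, 3, 2, 1]
assert R(R(v)) == v                        # R is an involution
assert R([0, 1, 2, 1, 0]) == [1, 2, 3, 2, 1]   # v_1 + v_{n-1} = 0: adds theta
```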
\section{UM vectors}\label{um vectors section}
Suppose $v$ is the $k$-th column of the Waldspurger matrix associated to the permutation $\pi$. It is evident from the transformation diagram that $v_1=0$ or $v_1=1$, since the one in the first row of $\pi$ can be either to the left or to the right of $v_1$. By similar reasoning, for $i\leq k$ we have $v_i=v_{i-1}$ or $v_i=v_{i-1}+1$, and for $i>k$ we have $v_i=v_{i-1}$ or $v_i=v_{i-1}-1$, with $v_{n-1}=0$ or $v_{n-1}=1$. In other words, $v$ will start with a zero or a one, weakly increase (by steps of $0$ or $1$) until the $k$th entry, and then weakly decrease (by steps of $0$ or $1$) to the last entry.
\begin{definition}
A \emph{Motzkin path} is a lattice path in the integer plane $\mathbb{Z}\times \mathbb{Z}$ consisting of steps $(1,1), (1,-1),(1,0)$ which starts and ends on the $x$-axis, but never passes below it. A Motzkin path is \emph{unimodal} if all occurrences of the step $(1,1)$ are before the occurrences of $(1,-1)$. For brevity, we will henceforth refer to unimodal Motzkin paths as \emph{UMP}'s.
\end{definition}
\begin{lemma}
\emph{(counting UMPs)} \newline
There are $2^{n-1}$ UMPs from $(0,0)$ to $(n,0)$.
\end{lemma}
\begin{proof}\emph{(induction)}
\newline
{\bf Base case: } There is only one UMP of length one, and only two UMPs of length two.
\newline
{\bf Induction hypotheses:}
Suppose there are $2^{k-1}$ UMPs of length $k$ for all $k\leq n-1$. Consider an arbitrary UMP of length $n$.\\
{\bf Case 1:}\\
The first step of the UMP is $(1,0)$. Cutting off this step, we have an arbitrary UMP of length $n-1$, so by induction there are $2^{n-2}$ such UMPs.\\
\begin{center}
\includegraphics[]{ump1}
\end{center}
{\bf Case 2:}\\
The last step of the UMP is $(1,0)$. Cutting off this last step, we have an arbitrary UMP of length $n-1$, so by induction there are $2^{n-2}$ such UMPs.\\
\begin{center}
\includegraphics[scale=.5]{ump2}
\end{center}
{\bf Case 3:}\\
Cases 1 and 2 double count: if the first and last steps of a UMP are both $(1,0)$, then the UMP was counted by both of the previous cases. Cutting off both the first and last steps, we have an arbitrary UMP of length $n-2$. There are, by induction, $2^{n-3}$ such UMPs.\\
\begin{center}
\includegraphics[scale=.5]{ump3}
\end{center}
{\bf Case 4:}\\
The first and last steps of the UMP are $(1,1)$ and $(1,-1)$, respectively. Cutting these steps once again, we see by induction, that there are $2^{n-3}$ such UMPs.\\
\begin{center}
\includegraphics[scale=.5]{ump4}
\end{center}
So we see that there are $2^{n-2}+2^{n-2}-2^{n-3}+2^{n-3}=2^{n-1}$ UMPs of length $n$.
\end{proof}
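The count in the lemma is small enough to confirm by brute force. The Python sketch below (an illustration of our own, not part of the formal development) enumerates all step sequences and filters for unimodal Motzkin paths:

```python
from itertools import product

def is_ump(steps):
    """A unimodal Motzkin path: stays nonnegative, ends at height zero,
    and every up-step (1,1) precedes every down-step (1,-1)."""
    height, seen_down = 0, False
    for s in steps:
        if s == 1 and seen_down:
            return False
        if s == -1:
            seen_down = True
        height += s
        if height < 0:
            return False
    return height == 0

# confirm the lemma: 2^(n-1) UMPs of length n
for n in range(1, 10):
    count = sum(is_ump(p) for p in product((1, 0, -1), repeat=n))
    assert count == 2 ** (n - 1)
```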
\begin{definition}
A \emph{UM vector} is any vector that appears as a column in $\mathbf{WT}(\pi)$ for some permutation $\pi$.
\end{definition}
\begin{theorem}
There is a bijective correspondence between UM vectors of length $n-1$ and UMPs with $n$ steps. Consequently, there are $2^n$ UM vectors of length $n$.
\end{theorem}
\begin{proof}
A UM vector must start with a zero or a one, weakly increase by one until its entry on the diagonal, and then weakly decrease by one until its final entry, a zero or one. Any row vector of a Waldspurger matrix must also be a UM vector with its maximum also on the diagonal. Padding a UM vector with zeros on each end gives the successive heights ($y$ coordinates) of a UMP with $n$ steps.
For example,
\begin{center}
$(1,2,3,3,2,2,1)\leftrightarrow(0,1,2,3,3,2,2,1,0)\leftrightarrow$
\includegraphics[]{ump5}
\end{center}
\end{proof}
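The padding bijection is easy to make explicit. In the Python sketch below (our own naming), a UM vector is sent to the step sequence of its zero-padded height profile, matching the example above:

```python
def um_to_ump(v):
    """Pad a UM vector with a zero on each end and return the Motzkin steps."""
    heights = [0] + list(v) + [0]
    return [heights[k + 1] - heights[k] for k in range(len(heights) - 1)]

steps = um_to_ump((1, 2, 3, 3, 2, 2, 1))
assert steps == [1, 1, 1, 0, -1, 0, -1, -1]      # 8 steps for a length-7 vector
assert all(s in (-1, 0, 1) for s in steps) and sum(steps) == 0
```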
\begin{theorem}
UM vectors are in bijection with tableaux with hook length bounded above by $n$ and with abelian ideals in the nilradical of the Lie algebra $\mathfrak{sl}_n$.
\end{theorem}
\begin{proof}
One can take any UM vector and write it as a sum of positive roots by recursively subtracting the highest root whose nonzero entries correspond to positive nondecreasing entries in the UM vector. For example, $(0,1,2,1)=(0,1,1,0)+(0,0,1,1)$. This set of positive roots will always generate an abelian ideal in the nilradical of the Lie algebra $\mathfrak{sl}_n$, and will correspond to a tableau with bounded hook length, as seen in the diagram below.
\begin{center}
\includegraphics[scale=.8]{abelian_ideals_tableau}
\end{center}
\end{proof}
\begin{theorem}
UM vectors are exactly the coroots $c$ (in root coordinates) such that $-1\leq (c,r)\leq 2$ for every positive root $r$.
They are coroots inside the polytope defined by affine hyperplanes at heights $-1$ and $2$ orthogonal to every positive root.
\label{inequality}
\end{theorem}
\begin{proof}This follows from a result which Panyushev attributes to Peterson and
Kostant \cite{panyushev2011abelian}, expressed there in the language of abelian ideals: the number of coroots inside this ``Peterson polytope'' is $2^{n-1}$. If we can show that our $2^{n-1}$ UM vectors are inside the polytope, we will be done.
Explicitly, suppose that $\bar{x}$ is a UM vector and $\bar{y}$ is a positive root (both expressed in root coordinates). Then $(x,y)=x^t\cdot y=\bar{x}^tA^tA\bar{y}=\bar{x}^tC\bar{y}$ where $A$ is the matrix defined in Theorem \ref{wald transform thm} and $C$ is the Cartan matrix. Suppose that $\bar{y}=(0,\dots,0,1,\dots,1,0,\dots 0)^t$ where the first one is in position $i$ and the last one is in position $j$.
Then, with the conventions $x_0:=0$ and $x_n:=0$,
\begin{align*}
\bar{x}^tC\bar{y}&=2\sum_{k=i}^{j} x_k -\sum_{k=i-1}^{j-1}x_k-\sum_{k=i+1}^{j+1}x_k\\
&=-x_{i-1}+x_{i}+x_{j}-x_{j+1}.
\end{align*}
Because $(x_1,\dots,x_{n-1})$ is a UM vector, $x_i$ and $x_{i-1}$ can differ by at most one, and likewise $x_j$ and $x_{j+1}$ can differ by at most one.
This yields that $$-2\leq x_{i}-x_{i-1}+x_{j}-x_{j+1} \leq 2.$$ However, the value $-2$ is unattainable, by the unimodality of UM vectors. Suppose that $x_{i-1}>x_{i}$, that is, $x_{i-1}=x_{i}+1$. Then the vector is already decreasing at position $i$, so $x_j \geq x_{j+1}$; that is, $x_j=x_{j+1}$ or $x_j=x_{j+1}+1$.
Either way, $x_{i}-x_{i-1}+x_{j}-x_{j+1}=-1+(x_{j}-x_{j+1})>-2$.
Thus $$-1\leq x_{i}-x_{i-1}+x_{j}-x_{j+1} \leq 2,$$ showing that our UM vectors are all inside the Peterson polytope.
\end{proof}
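For small ranks, both the count $2^{n-1}$ and the inequality of Theorem \ref{inequality} can be verified by brute force. The following Python sketch (ours; UM vectors are generated directly from the unimodality conditions, and positive roots are intervals $[i,j]$) checks rank $4$:

```python
from itertools import product

def um_vectors(n):
    """UM vectors of length n-1, generated as height sequences of unimodal
    Motzkin paths padded with zeros on both ends."""
    found = []
    for v in product(range(n), repeat=n - 1):
        w = (0,) + v + (0,)
        steps = [w[k + 1] - w[k] for k in range(n)]
        if any(s not in (-1, 0, 1) for s in steps):
            continue
        down, ok = False, True
        for s in steps:
            if s == -1:
                down = True
            elif s == 1 and down:
                ok = False
        if ok:
            found.append(v)
    return found

n = 5
vecs = um_vectors(n)
assert len(vecs) == 2 ** (n - 1)
for v in vecs:
    x = (0,) + v + (0,)                  # conventions x_0 = x_n = 0
    for i in range(1, n):
        for j in range(i, n):            # positive root: the interval [i, j]
            pairing = x[i] - x[i - 1] + x[j] - x[j + 1]
            assert -1 <= pairing <= 2    # the bound from the theorem
```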
\section{Entropy, Alternating Sign Matrices, and the Waldspurger Transform in General}\label{entropy,asm, gen trans section}
\begin{definition}
The \textit{Entropy} (alternatively called variance in the literature) of a permutation $\pi$ is $$E(\pi):=\sum_{i=1}^n (\pi(i)-i)^2.$$
\end{definition}
\begin{definition}
The \textit{Waldspurger Height} of a permutation $\pi$ is $$h(\pi):=\sum_{i=1}^{n-1}\sum_{j=1}^{n-1} \mathbf{WT}(\pi)_{i,j}.$$
\end{definition}
\begin{theorem}\label{entropy} For $\pi \in \mathfrak{S}_n$,
$$h(\pi)=\frac{1}{2}E(\pi).$$
\end{theorem}
\begin{proof}
Consider what each ``star" in the transformation diagram contributes to the Waldspurger matrix. We can see it as contributing ones to every entry enclosed in the right triangle between itself and the main diagonal, and contributing one half for every entry on the main diagonal whose box is cut by the triangle.
\begin{center}
\includegraphics[]{entropy}
\end{center}
\end{proof}
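Theorem \ref{entropy} can be confirmed exhaustively for small $n$. The Python sketch below (an illustration of ours; the Waldspurger matrix is computed from the defining block sums) checks all of $\mathfrak{S}_5$:

```python
from itertools import permutations

def wt_perm(perm):
    """Waldspurger matrix of a 0-indexed permutation, via the block sums."""
    n = len(perm)
    W = [[0] * (n - 1) for _ in range(n - 1)]
    for i in range(1, n):
        for j in range(1, n):
            if i <= j:   # ones in rows a <= i with column > j (1-indexed)
                W[i-1][j-1] = sum(1 for a in range(i) if perm[a] >= j)
            else:        # ones in rows a > i with column <= j
                W[i-1][j-1] = sum(1 for a in range(i, n) if perm[a] < j)
    return W

# h(pi) = E(pi)/2 for every permutation in S_5
for perm in permutations(range(5)):
    E = sum((perm[i] - i) ** 2 for i in range(5))     # entropy
    h = sum(sum(row) for row in wt_perm(perm))        # Waldspurger height
    assert 2 * h == E
```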
\begin{definition}
\emph{Alternating Sign Matrices}, or ASMs, are square matrices with entries $0$, $1$, or $-1$ whose rows and columns each sum to $1$ and whose nonzero entries alternate in sign along each row and column.
\end{definition}
\begin{theorem}[A. Lascoux and M. Sch{\"u}tzenberger, 1996 \cite{sch}]\label{lascoux}
One half the entropy of a permutation is its rank in the Dedekind-MacNeille completion of the Bruhat order. The elements in this lattice can be viewed as alternating sign matrices with partial order given by component-wise comparison of entries in their associated monotone triangles.
\end{theorem}
The Dedekind-MacNeille completion of a poset $P$ is defined to be the smallest lattice containing $P$ as a subposet \cite{MacNeilleByBirkhoff}. Its construction is similar to the Dedekind cuts used to construct the real numbers from the rationals. For more on alternating sign matrices, monotone triangles, and their history we refer to ~\cite{asm}.
This connection to alternating sign matrices motivates us to extend our definition of the Waldspurger transform to a larger class of matrices.
\begin{definition}
An $n \times n$ matrix $M$ is \textit{sum-symmetric} if its $i$th row sum equals its $i$th column sum for all $1\leq i \leq n$. We write $M \in SS_n$.
\end{definition}
\begin{definition}\label{WT}
From an $n \times n$ sum-symmetric matrix $M$, define the $(n-1) \times (n-1)$ matrix $\mathbf{WT}(M)$, where $$\mathbf{WT}(M)_{i,j}=\begin{cases}
\displaystyle\sum_{\substack{a\leq i\\b>j}}M_{a,b} & i\leq j \\
\displaystyle\sum_{\substack{a> i\\b\leq j}}M_{a,b} & i\geq j\\
\end{cases}.
$$
\end{definition}
\begin{proposition}
$\mathbf{WT}(M)$ is well-defined if and only if $M \in SS_n$. If $M$ were not sum-symmetric, the diagonal would be ``over-determined.''
\end{proposition}
\begin{proposition}The $\mathbf{WT}$ map is linear and surjective with kernel the diagonal matrices.
$$\mathbf{WT}:SS_n \twoheadrightarrow \textnormal{Mat}_{n-1}$$
\end{proposition}
\begin{theorem}
The restriction of the Waldspurger transform to alternating sign matrices has as its image all $M \in \textnormal{Mat}_{n-1}$ such that columns and rows of $M$ are UM vectors with maximums on the diagonal. Component-wise comparison of these matrices is exactly the same order as is defined on the ASM lattice via monotone triangles.
\end{theorem}
\begin{proof}
See the next subsection.
\end{proof}
\subsection{The Lattice of Monotone Triangles}\label{MT lattice section}
We have a bijection between ASMs and generalized Waldspurger matrices and would like to show that componentwise comparison of generalized Waldspurger matrices is the same partial order as the componentwise comparison of monotone triangles (and is thus the Dedekind-MacNeille completion of Bruhat order by Theorem \ref{lascoux}). To this end, we consider the well-known bijection between monotone triangles and alternating sign matrices \cite{2014arXiv1408.5391S} obtained by letting the $k$th row of the triangle equal the positions of $1$'s in the sum of the first $k$ rows of an alternating sign matrix. In particular, the identity matrix will always correspond to the monotone triangle
$$\begin{matrix}
1\\
1&2\\
\cdots\\
1&2&\cdots& n\\
\end{matrix}.$$
Because this is the $\hat{0}$ in the lattice of monotone triangles and the partial order is componentwise comparison, we may consider \emph{reduced monotone triangles} by subtracting this triangle from all of the others (see Figure \ref{four in the same}).
We will explicitly describe the composition of these two bijections and show that it is a poset isomorphism, preserving component-wise comparison. The map is easy to describe, but it will take a little work to verify that it is well-defined and surjective.
The map from monotone triangles to Waldspurger matrices is as follows: subtract off the monotone triangle corresponding to the identity permutation, and then consider the entries of this reduced monotone triangle as ``painting instructions.'' The $(i,j)$th entry of the reduced triangle tells us how much paint to load our brush with for a left-to-right stroke beginning at the $(i,j)$th entry of the corresponding Waldspurger matrix.
As a working example, consider Figure \ref{four in the same}. The two at the top of the reduced triangle is ``painted''
onto the $(1,1)$ and $(1,2)$ entries of the associated Waldspurger matrix. The one in the next row is painted onto the $(2,1)$ entry, and the two after it is painted onto the $(2,2)$ and $(2,3)$ entries.
We must check that our painting gives a matrix with unimodal rows and columns with maximums on the diagonal. The left-to-right painting process ensures that the entries in each row of the Waldspurger matrix will increase weakly by one up to the diagonal. The fact that rows of the reduced triangle are weakly increasing guarantees that the row of the Waldspurger matrix will be weakly decreasing by ones after the diagonal. The conditions on the columns are a bit more disguised, but the fact that reduced monotone triangles increase weakly up columns guarantees that the columns of the Waldspurger matrix will increase weakly up to the diagonal. Finally, the fact that reduced monotone triangles decrease by at most one in the $\searrow$ direction, guarantees that the columns of the Waldspurger matrix will decrease weakly above the diagonal.
This follows from induction on the size of the monotone triangle. Suppose that the lower-left corner of the monotone triangle maps onto a generalized Waldspurger matrix of dimension one less. Then painting a new diagonal will preserve the unimodality in rows and columns, and keep the maximums on the diagonal.
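The whole composition, from ASM to monotone triangle to reduced triangle to painted Waldspurger matrix, can be traced on the example of Figure \ref{four in the same}. The Python sketch below is our own illustration (the painting convention assumed here: an entry $k$ deposits one unit of paint on each of $k$ consecutive cells):

```python
def wt(M):
    """Generalized Waldspurger transform of an n x n sum-symmetric matrix."""
    n = len(M)
    W = [[0] * (n - 1) for _ in range(n - 1)]
    for i in range(1, n):
        for j in range(1, n):
            if i <= j:   # sum over a <= i, b > j (1-indexed)
                W[i-1][j-1] = sum(M[a][b] for a in range(i) for b in range(j, n))
            else:        # sum over a > i, b <= j
                W[i-1][j-1] = sum(M[a][b] for a in range(i, n) for b in range(j))
    return W

def monotone_triangle(M):
    """Row k lists the (1-indexed) positions of 1's in the sum of the first k rows."""
    partial, tri = [0] * len(M), []
    for row in M:
        partial = [p + x for p, x in zip(partial, row)]
        tri.append([j + 1 for j, p in enumerate(partial) if p == 1])
    return tri

def paint(reduced):
    """Entry k at position (i, j) of a reduced monotone triangle is a
    left-to-right stroke of k unit cells starting at (i, j)."""
    m = len(reduced)
    W = [[0] * m for _ in range(m)]
    for i, row in enumerate(reduced):
        for j, k in enumerate(row):
            for c in range(j, j + k):
                W[i][c] += 1
    return W

asm = [[0, 0, 1, 0, 0, 0],
       [0, 1, -1, 1, 0, 0],
       [0, 0, 0, 0, 1, 0],
       [0, 0, 1, 0, -1, 1],
       [1, 0, 0, 0, 0, 0],
       [0, 0, 0, 0, 1, 0]]
tri = monotone_triangle(asm)
assert tri[0] == [3] and tri[1] == [2, 4] and tri[2] == [2, 4, 5]
reduced = [[x - (j + 1) for j, x in enumerate(row)] for row in tri]
assert reduced[-1] == [0] * 6        # the last row is trivial; drop it before painting
assert paint(reduced[:-1]) == wt(asm)
```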
This painting map has an inverse ``peeling'' operation. UM vectors by themselves are not in bijection with rows of reduced monotone triangles, but, if one knows that the UM vector is to appear in row $k$, our painting map will have an inverse ``peeling'' operation into $k$ entries as seen in Figure \ref{peel}.
To peel a UM vector into $k$ parts, create a diagram as in Figure \ref{peel} and specify $k$ starting points, one at the top of each of the $k$ columns. First draw a path from the $k$th starting point to the end, staying as far up and to the right as possible. Then do the same with the $(k-1)$st point. Note that the unimodality condition on the UM vector guarantees that this path will be weakly shorter than the first one. Continue in this way until all of the vertices are exhausted. Record the length of the paths to get the corresponding row in the associated reduced monotone triangle.
\begin{figure}
\begin{center}
\includegraphics[scale=.45]{peel.pdf}
\caption{There are four ways to peel the UM vector $1233332221$. It may peel into three, four, five, or six parts, depending on which $3$ is on the diagonal of the Waldspurger matrix it is appearing in.}
\label{peel}
\end{center}
\end{figure}
\begin{figure}
\centering
\begin{equation*}
\left[\begin{matrix}
0&0&1&0&0&0\\
0&1&-1&1&0&0\\
0&0&0&0&1&0\\
0&0&1&0&-1&1\\
1&0&0&0&0&0\\
0&0&0&0&1&0\\
\end{matrix}\right]
\leftrightarrow
\begin{matrix}
3&&&&\\
2&4&&&\\
2&4&5&&\\
2&3&4&6\\
1&2&3&4&6\\
1&2&3&4&5&6\\
\end{matrix} \leftrightarrow
\begin{matrix}
2&&&&\\
1&2&&&\\
1&2&2\\
1&1&1&2\\
0&0&0&0&1
\end{matrix}
\end{equation*}
\includegraphics[]{asm-wald}
\caption{An ASM and corresponding monotone triangle, reduced monotone triangle, and generalized Waldspurger matrix. (The blue 9-sided stars represent 1's, and the green six-sided stars represent -1's in the transformation diagram.)}
\label{four in the same}
\end{figure}
\begin{figure}
\centering
\includegraphics[scale=1]{asm-lattice}
\includegraphics[]{reduced-monotone-triangles}
\caption{The Dedekind-MacNeille completion of Bruhat order for $A_2$, viewed as ASMs, generalized Waldspurger matrices, monotone triangles, and reduced monotone triangles}
\label{macnbruhat}
\end{figure}
\subsection{The Tetrahedral poset: Bigrassmannians and Join-Irreducibles}\label{tetrahedral section}
Lascoux and Sch{\"u}tzenberger showed that the Dedekind-MacNeille completion of Bruhat order for type A is a distributive lattice, and that its join-irreducible elements are exactly the bigrassmannian permutations \cite{sch} (those with a unique right descent and unique left descent). The number of bigrasmannian permutations of length $n$ is a \emph{tetrahedral number}, and is counted by the coefficients of $$\frac{1}{(1-z)^4}=1+4z+10z^2+20z^3+35z^4+\dots.$$
The relationship between the tetrahedral poset and ASMs has been studied elsewhere \cite{2014arXiv1408.5391S}, but our Waldspurger matrices provide a new perspective. Bigrassmannian permutations correspond to Waldspurger matrices determined by fixing a single entry and then ``falling down'' as quickly as possible. More poetically, they are arrangements of oranges in a tetrahedral orange basket (held up so that one edge is parallel with the ground) such that only one orange may be removed without causing a tumble.
\begin{figure}
\centering
$\begin{matrix}
1&1&1&1&1\\1&2&2&2&1\\1&2&3&2&1\\1&2&2&2&1\\1&1&1&1&1
\end{matrix}$
\caption{The number of ways of fixing each entry in a $5 \times 5$ Waldspurger matrix (also the top element in the Waldspurger version of the ASM lattice)}
\label{tetrahedral}
\end{figure}
In our $A_n$ Waldspurger matrices, the number of ways of fixing a single entry to be a one is $n^2$, to be a two is $(n-2)^2$, and so on. This sum of every other square is another well-known formula for the tetrahedral numbers (see http://oeis.org/A000292 for more).
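As a quick sanity check (an illustrative sketch of our own, not code from the text; the helper names are hypothetical), the sum $n^2+(n-2)^2+\cdots$ can be compared against the closed form $\binom{n+2}{3}$ for the $n$th tetrahedral number:

```python
from math import comb

def sum_of_every_other_square(n):
    # n^2 + (n-2)^2 + (n-4)^2 + ... down to 1^2 or 2^2
    return sum(k * k for k in range(n, 0, -2))

def tetrahedral(n):
    # n-th tetrahedral number n(n+1)(n+2)/6 = C(n+2, 3); OEIS A000292
    return comb(n + 2, 3)
```

For the $5\times5$ matrix of Figure \ref{tetrahedral} this gives $25+9+1=35$, the fifth tetrahedral number.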
\subsection{Centers of Mass and Geometric Realizations of Hasse Diagrams}\label{centers of mass subsection}
Our definition of Waldspurger matrices was geometrically motivated, but we have seen that they are also closely related, combinatorially, to the ASM lattice. It is then natural to ask how this partial order and the geometry are connected. One classical invariant of posets with a distinctly geometric flavor is the notion of order dimension. The order dimension of a poset $P$ is the smallest $n$ for which $P \cong Q\subset \mathbb{R}^n$, where the elements of $Q$ are ordered componentwise. In \cite{Reading2002}, Reading computed the order dimension of Bruhat orders for types A and B, the former being $\mathrm{dim}(A_n)=\lfloor \frac{(n+1)^2}{4}\rfloor$. This tells us, in particular, that there is no way of embedding the lattice of $3 \times 3$ Waldspurger matrices in dimension less than $4$ in a way that preserves componentwise comparison. On the other hand, for each of these $3 \times 3$ matrices, we have an associated simplex $\Delta_M \subset \mathbb{R}^3$ and may consider the natural map which takes $\Delta_M$ to its center of mass.
If one replaces each simplex $\Delta_{\pi}$ (where $\pi \in \mathfrak{S}_n$) with its center of mass, one gets back a translate of the vertex set of the classical permutohedron. If one instead considers the centers of mass for each $\Delta_M$ where $M$ is an alternating sign matrix, one obtains every interior point of the permutohedron as well, some appearing with multiplicities (see Figure \ref{embedded hasse a3}). For example, the two generalized Waldspurger matrices below have the same center of mass.
$$\left[\begin{matrix}
1&0&0\\
0&1&0\\
0&1&1
\end{matrix}\right],\left[ \begin{matrix}
1&1&0\\
0&1&0\\
0&0&1
\end{matrix} \right].$$
\begin{proposition}
The height statistic is not only the rank of an ASM $M$ in the lattice; it is also the height of the center of mass of $\Delta_M$ inside of the Meinrenken tile in the direction of $\rho$, the sum of the positive roots.
\end{proposition}
\begin{proof}
We want to show that the projection of the center of mass of $\Delta_M$ onto $\rho$ is (up to scalar multiple) equal to the sum of the entries in $\mathbf{WT}(M)$.
By the definition of $\Delta_M$, its center of mass is a scalar times the vector of column sums of $\mathbf{WT}(M)$. We will be done if we can show that the projection of a vector $v$ onto $\rho$ is (up to scalar multiple) $\rho$ times the sum of the entries of $v$.
In root coordinates, the projection of a vector $v$ onto $\rho$ is $\frac{v^TC\rho}{\rho^TC \rho}\rho$. The denominator is just a scalar, and the numerator is
$$v^TC\rho=v^T\left[
\begin{matrix}
2&-1&0&\dots&\dots\\
-1&2&-1&0&\dots\\
0&-1&2&-1&\dots\\
\vdots&\vdots&\vdots&\vdots&\ddots\\
0&\dots&0&-1&2
\end{matrix}\right]
\left[\begin{matrix}
n\\
2(n-1)\\
3(n-2)\\
\vdots\\
(n-2)3\\
(n-1)2\\
n
\end{matrix}\right]=2\,v^T\theta,$$ where $\theta$ is the vector of all ones. We conclude that, up to scalars, this projection is the sum of the entries of $v$.
\end{proof}
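The key computation in the proof, that $C$ applied to $\rho$ (written in root coordinates) is a scalar multiple of the all-ones vector $\theta$, is easy to check numerically. The sketch below is our own (hypothetical helper names), using the fact that the coefficient of $\alpha_k$ in the sum of the positive roots of $A_n$ is $k(n+1-k)$:

```python
def cartan_A(n):
    # type A_n Cartan matrix: 2 on the diagonal, -1 on the two off-diagonals
    return [[2 if i == j else (-1 if abs(i - j) == 1 else 0)
             for j in range(n)] for i in range(n)]

def rho_root_coords(n):
    # rho = sum of the positive roots of A_n; the number of positive roots
    # whose support contains alpha_k is k(n+1-k), which is the k-th coordinate
    return [k * (n + 1 - k) for k in range(1, n + 1)]
```

For every $n$, the product $C\rho$ comes out as $(2,2,\dots,2)$, i.e. twice the all-ones vector $\theta$.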
\begin{figure}
\begin{center}
\includegraphics[scale=.9]{a2embedded_hasse}
\caption{Place $\mathbf{WT}(M)$ at the barycenter of $\Delta_M$ for each $M\in ASM$ to get a geometric realization of the Hasse diagram inside the Meinrenken tile}
\label{embedded hasse a2}
\end{center}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[scale=.75]{embeddedhasse.pdf}
\caption{There are 38 lattice points in the permutohedron and 42 alternating sign matrices in this dimension. Four of the interior points have multiplicity two.}
\label{embedded hasse a3}
\end{center}
\end{figure}
\newpage
\section{Types B and C}\label{types b&c section}
For general crystallographic root systems $\Phi$, define the Waldspurger transform of a Weyl group element $g$ to be the matrix
$$\mathbf{WT}_{\Phi}(g):=(Id-R_g)C_{\Phi}^{-1}$$
where $R_g$ is the matrix of $g$ in the coordinates of the simple roots of $\Phi$, and $C_{\Phi}$ is the Cartan Matrix.
If no root system is specified, we will assume type A, so that $\mathbf{WT}=\mathbf{WT}_{A}$ is the Waldspurger transform already discussed. Recall that we proved that
$$\left[\mathbf{WT}(\pi)\right]_{i,j}= \begin{cases}
\displaystyle\sum_{\substack{a\leq i\\b>j}}\pi_{a,b} & i\leq j \\
\displaystyle\sum_{\substack{a> i\\b\leq j}}\pi_{a,b} & i\geq j\\
\end{cases}.
$$
\bigskip
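To make the definition concrete, here is a small computational sketch (our own, not code from the text) of the type A Waldspurger transform, using the closed form $(C_{A_n}^{-1})_{i,j}=\min(i,j)\,(n+1-\max(i,j))/(n+1)$ for the inverse Cartan matrix:

```python
from fractions import Fraction

def cartan_inverse_A(n):
    # closed form (C^{-1})_{ij} = min(i,j)(n+1-max(i,j))/(n+1) for type A_n
    return [[Fraction(min(i, j) * (n + 1 - max(i, j)), n + 1)
             for j in range(1, n + 1)] for i in range(1, n + 1)]

def root_coords(p, q, n):
    """e_p - e_q in the simple root basis (alpha_i = e_i - e_{i+1}, 1-indexed)."""
    v = [0] * n
    lo, hi, sign = (p, q, 1) if p < q else (q, p, -1)
    for k in range(lo, hi):
        v[k - 1] = sign
    return v

def waldspurger_transform(perm):
    """WT(pi) = (Id - R_pi) C^{-1} for pi in S_{n+1} acting on A_n.
    perm is 1-indexed as a list: perm[j-1] = pi(j)."""
    m = len(perm)
    n = m - 1
    # column j of R_pi is pi(alpha_j) = e_{pi(j)} - e_{pi(j+1)} in root coords
    R = [[0] * n for _ in range(n)]
    for j in range(1, n + 1):
        col = root_coords(perm[j - 1], perm[j], n)
        for i in range(n):
            R[i][j - 1] = col[i]
    Cinv = cartan_inverse_A(n)
    return [[sum(((i == k) - R[i][k]) * Cinv[k][j] for k in range(n))
             for j in range(n)] for i in range(n)]
```

The identity permutation maps to the zero matrix, and the output always agrees with the entrywise sums recalled above.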
It is natural to ask which phenomena we observed in type A will hold more generally.
It seems that the connection to Abelian ideals does not generalize. One can verify that there are $2\cdot 3^{n-1}$ ``UM vectors of type B,'' but the author is unable to find any Lie-theoretic interpretation of this fact.
The poset-theoretic results give more hope for generalization. Lascoux and Sch{\"u}tzenberger showed that the Dedekind-MacNeille completion of Bruhat order for type B is a distributive lattice, and gave a description of the join-irreducible elements as a subset of the bigrassmannian elements \cite{sch}. They showed that, while the number of bigrassmannian elements is counted by the coefficients of
\begin{equation}\label{type b bigrass formula}\frac{(1+z)^2+z^3}{(1-z)^4}=1+6z+19z^2+45z^3+89z^4+\dots \end{equation}
the number of join-irreducibles or elements of the ``base'' are the \emph{octahedral numbers}:
\begin{equation}\label{type b base formula}\frac{(1+z)^2}{(1-z)^4}=1+6z+19z^2+44z^3+85z^4+\dots. \end{equation}
Geck and Kim \cite{geck1997bases} gave a more in-depth treatment of exactly when bigrassmannian elements fail to be part of the base, and Reading gave a combinatorial description of the base in terms of signed monotone triangles \cite{Reading2002}. Recently, Anderson gave another combinatorial description of the base in terms of type B Rothe diagrams and essential sets \cite{2016arXiv161208670A}. Despite all this, the story is still a bit unsatisfying; there is no known combinatorial description for all of the elements of the Dedekind-MacNeille completion of Bruhat order for type B. Reading's signed monotone triangles give us a means of determining whether or not a bigrassmannian element is in the base, but they are somehow not the right analog of ``type B alternating sign matrices''. We encounter similar complications here, but we will nevertheless define type B and C Waldspurger matrices and push the theory as far as we can.
Analogous to our theorem for type A, we conjecture the following:
\begin{conjecture} Each element in the base for types B and C corresponds to a type B and C Waldspurger matrix which is componentwise least given a single fixed entry. \label{b bases conj}\end{conjecture}
We will work our way into the type B and C combinatorics in the following subsections. We will then provide evidence supporting this conjecture, while explaining where problems arise.
\subsection{Centrally Symmetric Permutation matrices}\label{Csym subsection}
For type B, we may consider $V=\mathbb{R}^n$, with $\Phi$ consisting of all integer vectors in $V$ of length $1$ or $\sqrt{2}$, for a total of $2n^2$ roots. Choose the simple roots $\alpha_i=e_i-e_{i+1}$, for $1\leq i \leq n-1$, together with the shorter root $\alpha_n = e_n$. For type C, we take the same simple roots, except that $\alpha_n=2 e_n$.
These root systems share a Weyl group of size $2^n n!$ consisting of the $n \times n$ {\it signed permutation matrices}. That is, the set of all $n \times n$ permutation matrices where any of the $1$'s may be replaced with $-1$'s. We will abuse notation and use $B_n$ for both the root system of type B, and for this particular representation of the Weyl group for both types B and C.
Call a square $n \times n$ matrix {\it centrally symmetric} if it is preserved under $180^{\circ}$ rotation; that is, if $M_{i,j}=M_{n+1-i,n+1-j}$ for all $1 \leq i,j \leq n$. Let $\mathfrak{CS}_n$ denote the set of centrally symmetric $n \times n$ permutation matrices.
\begin{proposition}
The group $\mathfrak{CS}_{2n}\subset\mathfrak{S}_{2n}$ is isomorphic to the group $B_n$ of signed permutations via a ``folding move''.
\end{proposition}
\begin{proof}
If $\pi$ is a $2n \times 2n$ centrally symmetric permutation matrix, we may ``fold'' it to obtain $\pi^{\star}$, a signed permutation matrix of size $n$, by letting $$\pi^{\star}_{i,j}=
\pi_{i,j}-\pi_{2n-i+1,j}.$$
The map is invertible because $\pi$ was a permutation matrix, meaning that
$$\pi^{\star}_{i,j}=\begin{cases}
1 & \textrm{ if }\pi_{i,j}=1\\
-1 & \textrm{ if }\pi_{2n-i+1,j}=1\\
0 & \textrm{ otherwise}
\end{cases}$$
i.e. there will never be any collisions in the folding.
\end{proof}
\bigskip
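The folding move is easy to implement. The sketch below (our own, with hypothetical names) builds the centrally symmetric $2n\times 2n$ permutation matrix attached to a signed permutation and checks that folding recovers the signed permutation matrix:

```python
def unfold(signed):
    """Centrally symmetric 2n x 2n permutation matrix of a signed permutation,
    given as a list with signed[j-1] = pi(j) in {+-1, ..., +-n}."""
    n = len(signed)
    m = 2 * n
    P = [[0] * m for _ in range(m)]
    for j, v in enumerate(signed, start=1):
        row = v if v > 0 else m + 1 - (-v)
        P[row - 1][j - 1] = 1
        P[m - row][m - j] = 1      # central symmetry: entry (m+1-row, m+1-j)
    return P

def fold(P):
    """pi* with pi*_{i,j} = pi_{i,j} - pi_{2n-i+1,j} (1-indexed)."""
    m = len(P)
    n = m // 2
    return [[P[i][j] - P[m - 1 - i][j] for j in range(n)] for i in range(n)]
```

As the proof observes, the two cases never collide: each entry of the fold is $0$, $1$, or $-1$, with exactly one nonzero in every row and column.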
We may also consider a similar ``folding map'' on the centrally symmetric type A Waldspurger matrices,
$$\mathcal{F}:\mathbf{WT}_{A_{2n-1}}(\mathfrak{CS_{2n}})\longrightarrow \textrm{Mat}_n,$$
where $$\mathcal{F}(M)_{i,j}=\begin{cases}M_{i,j}+M_{2n-i,j}&\textrm{ for all }1\leq i< n,\ 1\leq j\leq n\\
M_{n,j} &\textrm{ for } i=n,\ 1\leq j\leq n
\end{cases}$$
(in a $(2n-1)\times(2n-1)$ matrix, rows $i$ and $2n-i$ fold together and the middle row $n$ is fixed).
\begin{subsection}{Working in root coordinates}\label{b in rootcoords subsection}
Let $P$ be the change of basis matrix that gives the simple roots of $B_n$ in terms of the standard basis vectors, and define $Q$ analogously for $C_n$. From the discussion above, we have:
$$P_{i,j}=\begin{cases}
1 & \textrm{ if }i=j\\
-1 & \textrm{ if }i=j+1\\
0 & \textrm{ otherwise}
\end{cases} \hspace{20pt}
Q_{i,j}=\begin{cases}
1 & \textrm{ if }i=j, j<n\\
-1 & \textrm{ if }i=j+1\\
2&\textrm{ if }i=j=n\\
0 & \textrm{ otherwise}
\end{cases}.$$
One can then verify that
$$
P_{i,j}^{-1}=\begin{cases}
1 & \textrm{ if }i\geq j\\
0 & \textrm{ otherwise}
\end{cases}\hspace{20pt}
Q_{i,j}^{-1}=\begin{cases}
1 & \textrm{ if }i\geq j,\ i<n\\
1/2& \textrm{ if }i=n\\
0 & \textrm{ otherwise}
\end{cases}.
$$
With respect to this ordering on the simple roots, one can further verify that the inverses of the Cartan matrices for the root systems $B_n$ and $C_n$ are, respectively:
$$(C_{B_{n}}^{-1})_{i,j}=\begin{cases}
\textrm{min}(i,j) & \textrm{ if } j<n\\
i/2 & \textrm{ if } j=n
\end{cases}\hspace{20pt}
(C_{C_{n}}^{-1})_{i,j}=\begin{cases}
\textrm{min}(i,j) & \textrm{ if } i<n\\
j/2 & \textrm{ if } i=n
\end{cases}.
$$
Next, if we let $S=Q(C_{C_{n}}^{-1})$, and let $R=P(C_{B_{n}}^{-1}),$ one may verify that
$$S_{i,j}=\begin{cases}
1 & \textrm{ if }j\geq i\\
0 & \textrm{ otherwise}
\end{cases}\hspace{20pt}
R_{i,j}=\begin{cases}
1&\textrm{ if }j\geq i, j\neq n\\
1/2&\textrm{ if }j=n\\
0&\textrm{ otherwise}
\end{cases}.
$$
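These identities are mechanical to verify. The sketch below (our own; names are hypothetical) checks $S=QC_{C_n}^{-1}$ and $R=PC_{B_n}^{-1}$ against the stated closed forms with exact rational arithmetic, and also confirms that $Q^{-1}$ is lower triangular with its last row halved:

```python
from fractions import Fraction

def mat(rule, n):
    # n x n matrix from a 1-indexed entrywise rule
    return [[Fraction(rule(i, j)) for j in range(1, n + 1)]
            for i in range(1, n + 1)]

def mul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def P_B(n):      # simple roots of B_n as columns, in the standard basis
    return mat(lambda i, j: (i == j) - (i == j + 1), n)

def Q_C(n):      # simple roots of C_n as columns (last root is 2 e_n)
    return mat(lambda i, j: 2 if i == j == n else (i == j) - (i == j + 1), n)

def Cinv_B(n):   # stated inverse Cartan matrix of type B_n
    return mat(lambda i, j: min(i, j) if j < n else Fraction(i, 2), n)

def Cinv_C(n):   # stated inverse Cartan matrix of type C_n
    return mat(lambda i, j: min(i, j) if i < n else Fraction(j, 2), n)
```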
\end{subsection}
\begin{theorem}
$\mathcal{F}$ is a bijection between centrally symmetric Waldspurger matrices of type $A_{2n-1}$ and Waldspurger matrices of type $C_n$, and the following diagram commutes:
\[ \begin{tikzcd}
B_n \arrow{r}{\mathbf{WT}_{C_n}} & \mathbf{WT}_{C_n}(B_n) \\%
\mathfrak{CS}_{2n} \arrow{r}{\mathbf{WT}}\arrow[swap]{u}{\star}& \mathbf{WT}(\mathfrak{CS_{2n}})\subset UM_{n-1}\arrow{u}{\mathcal{F}}
\end{tikzcd}
\]
\end{theorem}
\begin{proof} We will show that $\mathcal{F}(\mathbf{WT}(\pi))_{i,j}$ and $\mathbf{WT}_{C_n}(\pi^{\star})_{i,j}$ are summing over the same parts of the permutation matrix $\pi$. On the one hand,
\begin{align*}
\mathcal{F}(\mathbf{WT}(\pi))_{i,j} &=\begin{cases}\mathbf{WT}(\pi)_{i,j}+\mathbf{WT}(\pi)_{2n-i,j}&\textrm{ for all }1\leq i< n\\
\mathbf{WT}(\pi)_{n,j} &\textrm{ for } i=n
\end{cases}\\
&=\begin{cases}
\displaystyle\sum_{\substack{a\leq i\\b>j}}\pi_{a,b} +\displaystyle\sum_{\substack{a> 2n-i\\b\leq j}}\pi_{a,b} & i\leq j<n \\
\displaystyle\sum_{\substack{a> i\\b\leq j}}\pi_{a,b} +\displaystyle\sum_{\substack{a> 2n-i\\b\leq j}}\pi_{a,b} & j\leq i<n\\
\displaystyle\sum_{\substack{a> i\\b\leq j}}\pi_{a,b} & i=n
\end{cases}
\end{align*}
On the other hand,
\begin{align*}
\mathbf{WT}_{C_n}(\pi^{\star})_{i,j} &= \left(Id-(Q^{-1} \pi^{\star} Q)C_{C_{n}}^{-1}\right)_{i,j}\\
&=\left(C_{C_{n}}^{-1}-(Q^{-1} \pi^{\star} S)\right)_{i,j}\\
&=\left(C_{C_{n}}^{-1}\right)_{i,j}-\left((Q^{-1} \pi^{\star} S)\right)_{i,j}\\
&=\begin{cases}
\textrm{min}(i,j) & \textrm{ if } i<n\\
j/2 & \textrm{ if } i=n
\end{cases}
- \begin{cases}\displaystyle\sum_{a\leq i,b\leq j}\pi^{\star}_{a,b}& \textrm{if }i<n\\
\frac{1}{2}\displaystyle\sum_{a\leq i,b\leq j}\pi^{\star}_{a,b}& \textrm{if }i=n\end{cases}\\
&=\begin{cases}
\textrm{min}(i,j)-\displaystyle\sum_{a\leq i,b\leq j}\left(\pi_{a,b}-\pi_{2n-a+1,b}\right) & \textrm{ if } i<n\\
\frac{j}{2} -\frac{1}{2}\displaystyle\sum_{a\leq i,b\leq j}\left(\pi_{a,b}-\pi_{2n-a+1,b}\right) & \textrm{ if } i=n
\end{cases}\\
&=\begin{cases}
i-\displaystyle\sum_{a\leq i,b\leq j}\left(\pi_{a,b}-\pi_{2n-a+1,b}\right) & \textrm{ if } i\leq j<n\\
j-\displaystyle\sum_{a\leq i,b\leq j}\left(\pi_{a,b}-\pi_{2n-a+1,b}\right) & \textrm{ if } j\leq i<n\\
\frac{j}{2} -\frac{1}{2}\displaystyle\sum_{a\leq i,b\leq j}\left(\pi_{a,b}-\pi_{2n-a+1,b}\right) & \textrm{ if } i=n
\end{cases}\\
&=\begin{cases}
\displaystyle\sum_{a\leq i,\,b\leq 2n}\pi_{a,b}-\displaystyle\sum_{a\leq i,b\leq j}\pi_{a,b}+\displaystyle\sum_{\substack{a> 2n-i\\b\leq j}}\pi_{a,b} & \textrm{ if } i\leq j<n\\
\displaystyle\sum_{a\leq 2n,\,b\leq j}\pi_{a,b}-\displaystyle\sum_{a\leq i,b\leq j}\pi_{a,b}+\displaystyle\sum_{\substack{a> 2n-i\\b\leq j}}\pi_{a,b} & \textrm{ if } j\leq i<n\\
\frac{j}{2} -\frac{1}{2}\displaystyle\sum_{a\leq i,b\leq j}\left(\pi_{a,b}-\pi_{2n-a+1,b}\right) & \textrm{ if } i=n
\end{cases}\\
&=\begin{cases}
\displaystyle\sum_{\substack{a\leq i\\b>j}}\pi_{a,b} +\displaystyle\sum_{\substack{a> 2n-i\\b\leq j}}\pi_{a,b} & i\leq j<n \\
\displaystyle\sum_{\substack{a> i\\b\leq j}}\pi_{a,b} +\displaystyle\sum_{\substack{a> 2n-i\\b\leq j}}\pi_{a,b} & j\leq i<n\\
\displaystyle\sum_{\substack{a> i\\b\leq j}}\pi_{a,b} & i=n
\end{cases}.
\end{align*}
The last equality is perhaps easier to see pictorially. The case $i\leq j<n$ says
\resizebox{.21\textwidth}{!}{$\begin{pmatrix}
\ddots & \vdots & \multicolumn{1}{c|}{\vdots} & \vdots & \udots \\
\dots & \pi_{i-1,j-1} &\multicolumn{1}{c|}{ \pi_{i-1,j}} & \textrm{sum these} & \dots \\
\dots & \pi_{i,j-1} & \multicolumn{1}{c|}{\pi_{i,j}} & \textrm{entries} & \dots \\ \cmidrule{4-5}
\dots & \pi_{i+1,j-1} & \pi_{i+1,j} & \pi_{i+1,j+1} & \dots \\
\udots & \vdots & \vdots & \vdots & \ddots \end{pmatrix}+$}
\resizebox{.21\textwidth}{!}{$\begin{pmatrix}
\ddots & \vdots & \vdots & \vdots & \udots \\
\dots & \dots &\pi_{2n-i+1,j-1} & \pi_{2n-i+1,j} & \dots \\
\cmidrule{1-3}\dots & \textrm{sum these} & \multicolumn{1}{c|}{\pi_{2n-i,j-1}} & \pi_{2n-i,j} & \dots \\
\dots & \textrm{entries} & \multicolumn{1}{c|}{\pi_{2n-i-1,j-1}} & \pi_{2n-i-1,j} & \dots \\
\udots & \vdots & \multicolumn{1}{c|}{\vdots} & \vdots & \ddots \end{pmatrix}=$}
\resizebox{.21\textwidth}{!}{$\begin{pmatrix}
\ddots & \vdots & \vdots & \vdots & \udots \\
\dots & \textrm{sum these} & \pi_{i-1,j} & \pi_{i-1,j+1} & \dots \\
\dots & \textrm{entries} & \pi_{i,j} & \pi_{i,j+1} & \dots \\
\cmidrule{1-5}\dots & \pi_{i+1,j-1} & \pi_{i+1,j} & \pi_{i+1,j+1} & \dots \\
\udots & \vdots & \vdots & \vdots & \ddots \end{pmatrix}-$}
\resizebox{.21\textwidth}{!}{$\begin{pmatrix}
\ddots & \vdots & \multicolumn{1}{c|}{\vdots} & \vdots & \udots \\
\dots & \textrm{sum these} &\multicolumn{1}{c|}{ \pi_{i-1,j}} & \pi_{i-1,j+1} & \dots \\
\dots & \textrm{entries} & \multicolumn{1}{c|}{\pi_{i,j}} & \pi_{i,j+1} & \dots \\
\cmidrule{1-3}\dots & \pi_{i+1,j-1} & \pi_{i+1,j} & \pi_{i+1,j+1} & \dots \\
\udots & \vdots & \vdots & \vdots & \ddots \end{pmatrix}+$}
\resizebox{.21\textwidth}{!}{$\begin{pmatrix}
\ddots & \vdots & \vdots & \vdots & \udots \\
\dots & \dots &\pi_{2n-i+1,j-1} & \pi_{2n-i+1,j} & \dots \\
\cmidrule{1-3}\dots & \textrm{sum these} & \multicolumn{1}{c|}{\pi_{2n-i,j-1}} & \pi_{2n-i,j} & \dots \\
\dots & \textrm{entries} & \multicolumn{1}{c|}{\pi_{2n-i-1,j-1}} & \pi_{2n-i-1,j} & \dots \\
\udots & \vdots & \multicolumn{1}{c|}{\vdots} & \vdots & \ddots \end{pmatrix}.$}
The $j\leq i<n$ case says
\resizebox{.21\textwidth}{!}{$\begin{pmatrix}
\ddots & \vdots & \multicolumn{1}{c|}{\vdots} & \vdots & \udots \\
\dots & \textrm{sum} &\multicolumn{1}{c|}{ \pi_{i-1,j}} & \pi_{i-1,j+1} & \dots \\
\dots & \textrm{these} & \multicolumn{1}{c|}{\pi_{i,j}} & \pi_{i,j+1} & \dots \\
\dots & \textrm{entries} & \multicolumn{1}{c|}{\pi_{i+1,j}} & \pi_{i+1,j+1} & \dots \\
\udots & \vdots & \multicolumn{1}{c|}{\vdots} & \vdots & \ddots \end{pmatrix}$}
$-$\resizebox{.21\textwidth}{!}{$\begin{pmatrix}
\ddots & \vdots & \multicolumn{1}{c|}{\vdots} & \vdots & \udots \\
\dots & \textrm{sum these} &\multicolumn{1}{c|}{ \pi_{i-1,j}} & \pi_{i-1,j+1} & \dots \\
\dots & \textrm{entries} & \multicolumn{1}{c|}{\pi_{i,j}} & \pi_{i,j+1} & \dots \\
\cmidrule{1-3}\dots & \pi_{i+1,j-1} & \pi_{i+1,j} & \pi_{i+1,j+1} & \dots \\
\udots & \vdots & \vdots & \vdots & \ddots \end{pmatrix}$}$+$
\resizebox{.21\textwidth}{!}{$\begin{pmatrix}
\ddots & \vdots & \multicolumn{1}{c|}{\vdots} & \vdots & \udots \\
\dots & \pi_{2n-i-1,j-1} &\multicolumn{1}{c|}{ \pi_{2n-i-1,j}} & \textrm{sum these} & \dots \\
\dots & \pi_{2n-i,j-1} & \multicolumn{1}{c|}{\pi_{2n-i,j}} & \textrm{entries} & \dots \\ \cmidrule{4-5}
\dots & \pi_{2n-i+1,j-1} & \pi_{2n-i+1,j} & \pi_{2n-i+1,j+1} & \dots \\
\udots & \vdots & \vdots & \vdots & \ddots \end{pmatrix}$}$=$
\resizebox{.21\textwidth}{!}{$\begin{pmatrix}
\ddots & \vdots & \vdots & \vdots & \udots \\
\dots & \pi_{i-1,j-1} &\pi_{i-1,j} & \pi_{i-1,j+1} & \dots \\
\dots & \pi_{i,j-1} & \pi_{i,j} & \pi_{i,j+1} & \dots \\
\cmidrule{1-3}\dots & \textrm{sum these entries} & \multicolumn{1}{c|}{\pi_{i+1,j}} & \pi_{i+1,j+1} & \dots \\
\udots & \vdots & \multicolumn{1}{c|}{\vdots} & \vdots & \ddots \end{pmatrix}$}$+$
\resizebox{.21\textwidth}{!}{$\begin{pmatrix}
\ddots & \vdots & \multicolumn{1}{c|}{\vdots} & \vdots & \udots \\
\dots & \pi_{2n-i-1,j-1} &\multicolumn{1}{c|}{ \pi_{2n-i-1,j}} & \textrm{sum these} & \dots \\
\dots & \pi_{2n-i,j-1} & \multicolumn{1}{c|}{\pi_{2n-i,j}} & \textrm{entries} & \dots \\ \cmidrule{4-5}
\dots & \pi_{2n-i+1,j-1} & \pi_{2n-i+1,j} & \pi_{2n-i+1,j+1} & \dots \\
\udots & \vdots & \vdots & \vdots & \ddots \end{pmatrix}.$}
Finally, the case $i=n$ says
\resizebox{.26\textwidth}{!}{$\begin{pmatrix}
\ddots & \vdots & \vdots & \vdots & \udots \\
\dots & \pi_{i-1,j-1} &\pi_{i-1,j} & \pi_{i-1,j+1} & \dots \\
\cmidrule{1-3}\dots & \textrm{sum these} & \multicolumn{1}{c|}{\pi_{i,j}} & \pi_{i,j+1} & \dots \\
\dots & \textrm{entries} & \multicolumn{1}{c|}{\pi_{i+1,j}} & \pi_{i+1,j+1} & \dots \\
\udots & \vdots & \multicolumn{1}{c|}{\vdots} & \vdots & \ddots \end{pmatrix}=$}
$\frac{1}{2}$
\resizebox{.26\textwidth}{!}{$\begin{pmatrix}
\ddots & \vdots & \multicolumn{1}{c|}{\vdots} & \vdots & \udots \\
\dots & \textrm{sum} &\multicolumn{1}{c|}{ \pi_{i-1,j}} & \pi_{i-1,j+1} & \dots \\
\dots & \textrm{these} & \multicolumn{1}{c|}{\pi_{i,j}} & \pi_{i,j+1} & \dots \\
\dots & \textrm{entries} & \multicolumn{1}{c|}{\pi_{i+1,j}} & \pi_{i+1,j+1} & \dots \\
\udots & \vdots & \multicolumn{1}{c|}{\vdots} & \vdots & \ddots \end{pmatrix}$}
$-\frac{1}{2}$\resizebox{.26\textwidth}{!}{$\begin{pmatrix}
\ddots & \vdots & \multicolumn{1}{c|}{\vdots} & \vdots & \udots \\
\dots & \textrm{sum these} &\multicolumn{1}{c|}{ \pi_{i-1,j}} & \pi_{i-1,j+1} & \dots \\
\dots & \textrm{entries} & \multicolumn{1}{c|}{\pi_{i,j}} & \pi_{i,j+1} & \dots \\
\cmidrule{1-3}\dots & \pi_{i+1,j-1} & \pi_{i+1,j} & \pi_{i+1,j+1} & \dots \\
\udots & \vdots & \vdots & \vdots & \ddots \end{pmatrix}$}
$+\frac{1}{2}$\resizebox{.26\textwidth}{!}{$\begin{pmatrix}
\ddots & \vdots & \vdots & \vdots & \udots \\
\dots & \pi_{i-1,j-1} &\pi_{i-1,j} & \pi_{i-1,j+1} & \dots \\
\cmidrule{1-3}\dots & \textrm{sum these} & \multicolumn{1}{c|}{\pi_{i,j}} & \pi_{i,j+1} & \dots \\
\dots & \textrm{entries} & \multicolumn{1}{c|}{\pi_{i+1,j}} & \pi_{i+1,j+1} & \dots \\
\udots & \vdots & \multicolumn{1}{c|}{\vdots} & \vdots & \ddots \end{pmatrix}.$}
\end{proof}
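The commuting diagram can also be checked numerically. The sketch below (our own construction, with hypothetical names) computes the type A side via the entrywise sums, folds it (rows $i$ and $2n-i$ of the $(2n-1)\times(2n-1)$ matrix fold together), and compares with $C_{C_n}^{-1}-Q^{-1}\pi^{\star}S$, the identity used in the proof:

```python
from fractions import Fraction

def wt_A(perm):
    """Type A Waldspurger matrix of pi in S_m via the entrywise sums;
    perm is 1-indexed: perm[j-1] = pi(j)."""
    m = len(perm)
    n = m - 1
    def entry(i, j):
        if i <= j:
            return sum(1 for b in range(j + 1, m + 1) if perm[b - 1] <= i)
        return sum(1 for b in range(1, j + 1) if perm[b - 1] > i)
    return [[entry(i, j) for j in range(1, n + 1)] for i in range(1, n + 1)]

def fold(M):
    # rows i and 2n-i of the (2n-1)x(2n-1) matrix fold together
    m = len(M)
    n = (m + 1) // 2
    return [[M[i][j] + (M[m - 1 - i][j] if i != n - 1 else 0)
             for j in range(n)] for i in range(n)]

def wt_C(sigma):
    """WT_{C_n} of a signed permutation matrix sigma, computed as
    C^{-1} - Q^{-1} sigma S."""
    n = len(sigma)
    def mul(A, B):
        return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
                for i in range(n)]
    Cinv = [[Fraction(min(i, j)) if i < n else Fraction(j, 2)
             for j in range(1, n + 1)] for i in range(1, n + 1)]
    Qinv = [[Fraction(1) if j <= i < n else
             (Fraction(1, 2) if i == n else Fraction(0))
             for j in range(1, n + 1)] for i in range(1, n + 1)]
    S = [[1 if j >= i else 0 for j in range(1, n + 1)] for i in range(1, n + 1)]
    QS = mul(Qinv, mul(sigma, S))
    return [[Cinv[i][j] - QS[i][j] for j in range(n)] for i in range(n)]
```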
Because transposition and the map $\star :\mathfrak{CS}_{2n}\rightarrow B_n$ commute, we will from now on abuse notation and identify centrally symmetric permutations with their images in $B_n=C_n$.
\begin{proposition}
$\mathbf{WT}_{C_n}(\pi^{\intercal})=(\mathbf{WT}_{B_n}(\pi))^{\intercal}$ for any $\pi \in B_n$.
\end{proposition}
\begin{proof}
\begin{align*}
\mathbf{WT}_{C_n}(\pi^{\intercal}) &= Id-(Q^{-1} \pi^{\intercal} Q)C_{C_{n}}^{-1}\\
&=C_{C_{n}}^{-1}-(Q^{-1} \pi^{\intercal} S)\\
&=(C_{B_{n}}^{-1})^{\intercal}-(R^{\intercal} \pi^{\intercal} (P^{-1})^{\intercal})\\
&=Id-(P^{-1} \pi R)^{\intercal}(C_{B_{n}}^{-1})^{\intercal}\\
&=(Id-(P^{-1} \pi R)C_{B_{n}}^{-1})^{\intercal}\\
&=(\mathbf{WT}_{B_n}(\pi))^{\intercal}.
\end{align*}
\end{proof}
Informally, this proposition tells us that as far as the Waldspurger and Meinrenken theorems are concerned, types B and C are essentially the same.
\subsection{Smallest examples in full detail}\label{b2 deets subsection}
\begin{figure}
\centering
\includegraphics[scale=.5]{c2pic.pdf}
\includegraphics[scale=.5]{b2pic.pdf}
\caption{The Meinrenken tiles for $C_2$ and $B_2$ respectively}
\end{figure}
There are exactly eight centrally symmetric $3 \times 3$ Waldspurger matrices of type A:
$$\left[\begin{matrix}
0&0&0\\
0&0&0\\
0&0&0
\end{matrix}\right], \hspace{10pt}
\left[\begin{matrix}
1&0&0\\
0&0&0\\
0&0&1
\end{matrix}\right], \hspace{10pt}
\left[\begin{matrix}
0&0&0\\
0&1&0\\
0&0&0
\end{matrix}\right], \hspace{10pt}
\left[\begin{matrix}
1&1&0\\
0&1&0\\
0&1&1
\end{matrix}\right] \textrm{ }$$ $$
\left[\begin{matrix}
1&0&0\\
1&1&1\\
0&0&1
\end{matrix}\right], \hspace{10pt}
\left[\begin{matrix}
1&1&1\\
1&1&1\\
1&1&1
\end{matrix}\right], \hspace{10pt}
\left[\begin{matrix}
1&1&0\\
1&2&1\\
0&1&1
\end{matrix}\right], \hspace{10pt}
\left[\begin{matrix}
1&1&1\\
1&2&1\\
1&1&1
\end{matrix}\right].$$
We may fold them vertically to get type C Waldspurger matrices, or horizontally to get type B:
$$\left[\begin{matrix}
0&0\\
0&0
\end{matrix}\right], \hspace{10pt}
\left[\begin{matrix}
1&0\\
0&0
\end{matrix}\right], \hspace{10pt}
\left[\begin{matrix}
0&0\\
0&1
\end{matrix}\right], \hspace{10pt}
\left[\begin{matrix}
1&2\\
0&1
\end{matrix}\right], \hspace{10pt}
\left[\begin{matrix}
1&0\\
1&1
\end{matrix}\right], \hspace{10pt}
\left[\begin{matrix}
2&2\\
1&1
\end{matrix}\right], \hspace{10pt}
\left[\begin{matrix}
1&2\\
1&2
\end{matrix}\right], \hspace{10pt}
\left[\begin{matrix}
2&2\\
1&2
\end{matrix}\right]
$$
$$\left[\begin{matrix}
0&0\\
0&0
\end{matrix}\right], \hspace{10pt}
\left[\begin{matrix}
1&0\\
0&0
\end{matrix}\right], \hspace{10pt}
\left[\begin{matrix}
0&0\\
0&1
\end{matrix}\right], \hspace{10pt}
\left[\begin{matrix}
1&1\\
0&1
\end{matrix}\right], \hspace{10pt}
\left[\begin{matrix}
1&0\\
2&1
\end{matrix}\right], \hspace{10pt}
\left[\begin{matrix}
2&1\\
2&1
\end{matrix}\right], \hspace{10pt}
\left[\begin{matrix}
1&1\\
2&2
\end{matrix}\right], \hspace{10pt}
\left[\begin{matrix}
2&1\\
2&2
\end{matrix}\right].
$$
Recall that, in type A, the dimension of each simplex was determined by the number of cycles of the corresponding permutation, so the number of simplices of a given dimension was a Stirling number of the first kind. In type B, we see Suter's type B Stirling numbers of the first kind \cite{stirlingB}, with our $1$ point, $4$ edges, and $3$ triangles for $B_2$ and $C_2$.
In this dimension there are two centrally symmetric ASMs which are not permutations, with type A Waldspurger matrices
$\left [\begin{matrix}
1&0&0\\0&1&0\\0&0&1
\end{matrix}\right]$
and
$\left [\begin{matrix}
1&1&0\\1&1&1\\0&1&1
\end{matrix}\right ]$
. They fold vertically to give the two extra matrices pictured on the right-hand side of Figure \ref{b2comp}.
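The two folds are mechanical to carry out. The sketch below (our own; function names are hypothetical) implements the vertical (type C) and horizontal (type B) folds and reproduces entries of the lists above:

```python
def fold_vertical(M):
    # type C fold: add row i to row 2n-i, keep the middle row; columns 1..n
    m = len(M)                 # m = 2n - 1
    n = (m + 1) // 2
    return [[M[i][j] + (M[m - 1 - i][j] if i != n - 1 else 0)
             for j in range(n)] for i in range(n)]

def fold_horizontal(M):
    # type B fold: add column j to column 2n-j, keep the middle column
    m = len(M)
    n = (m + 1) // 2
    return [[M[i][j] + (M[i][m - 1 - j] if j != n - 1 else 0)
             for j in range(n)] for i in range(n)]
```

For example, the fourth matrix in the type A list folds vertically to $\left[\begin{smallmatrix}1&2\\0&1\end{smallmatrix}\right]$ and horizontally to $\left[\begin{smallmatrix}1&1\\0&1\end{smallmatrix}\right]$, matching the fourth entries of the two folded lists.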
\begin{figure}
\begin{center}
\includegraphics[]{b2bruhat}
\hspace{30pt}
\includegraphics[]{b2comp}
\caption{In the case of $C_2$, (and $B_2$, though it is not shown here) componentwise comparison of Waldspurger matrices is exactly Bruhat order, and componentwise comparison of folded centrally symmetric ASMs is exactly its Dedekind-MacNeille completion.}
\label{b2comp}
\end{center}
\end{figure}
\subsection{UM vectors for types B and C}
While our folding map $\mathcal{F}$ did not double the middle row of a centrally symmetric type A Waldspurger matrix (that row folds onto itself), it is combinatorially convenient for us to do so. We will call the resulting map $\tilde{\mathcal{F}}$. The following proposition combines the inequality description of UM vectors from Theorem \ref{inequality} with $\tilde{\mathcal{F}}$ to give inequality descriptions for ``UM vectors of types B and C''.
\begin{proposition}
Column and row vectors of type B and C Waldspurger matrices (with respect to $\tilde{\mathcal{F}}$) must start with an entry $0$, $1$, or $2$, increase by $0$, $1$, or $2$ up to the diagonal, change by $-1$, $0$, or $1$ after the diagonal, and end with an even number. \end{proposition}
\begin{corollary}
There are $2\cdot 3^{n-1}$ UM vectors of type B.
\end{corollary}
\subsection{Conjectural description of elements in the Base}\label{base conjecture subsection}
Recall that, in type A, one could obtain any element of the base by specifying a single entry in a Waldspurger matrix (as long as it was below the corresponding entry in the Waldspurger matrix corresponding to the longest word) and ``falling down'' (see Figure \ref{tetrahedral}). In contrast, the type C Waldspurger matrix for the longest word (with respect to $\tilde{\mathcal{F}}$) is
$$\left[\begin{matrix}
2&2&2&2&\dots &2&2\\
2&4&4&4&\dots &4&4\\
2&4&6&6&\dots &6&6\\
\dots&\dots&\dots&\dots& \dots & \dots\\
2&4&6&8&\dots&2(n-1)&2(n-1)\\
2&4&6&8&\dots&2(n-1)&2n
\end{matrix}\right]
$$
but when constructing a type C Waldspurger matrix, we are no longer free to specify entries on the right or bottom boundaries to be odd. Consequently, the type C analog of Figure \ref{tetrahedral}, recording the number of ways of determining the $(i,j)$th entry of a type C Waldspurger matrix, is
$$\begin{matrix}
2&2&2&2&\dots &2&1\\
2&4&4&4&\dots &4&2\\
2&4&6&6&\dots &6&3\\
\dots&\dots&\dots&\dots& \dots & \dots\\
2&4&6&8&\dots&2(n-1)&n-1\\
1&2&3&4&\dots&n-1&n
\end{matrix}.
$$
It is straightforward to verify that the entries above sum to an octahedral number, which supports Conjecture \ref{b bases conj}.
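This verification is a one-liner to automate (an illustrative sketch of our own; the function names are hypothetical), using the closed form $n(2n^2+1)/3$ for the $n$th octahedral number:

```python
def type_C_entry_counts(n):
    # the matrix displayed above: 2*min(i,j) in the interior,
    # min(i,j) on the last row and last column
    return [[(1 if i == n or j == n else 2) * min(i, j)
             for j in range(1, n + 1)] for i in range(1, n + 1)]

def octahedral(n):
    # n-th octahedral number n(2n^2 + 1)/3
    return n * (2 * n * n + 1) // 3
```

For $n=3$, for instance, the displayed matrix has entry sum $19$, the third octahedral number.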
In $C_4$ we first see the distinction between bigrassmannian elements and elements of the base. Recall from Equations \ref{type b bigrass formula} and \ref{type b base formula} that there are $45$ bigrassmannian elements, but only $44$ elements in the base. The Waldspurger matrix of every bigrassmannian element is minimal with respect to a fixed single entry, but there is one collision: there are two incomparable bigrassmannian elements that are minimal after fixing the $(2,2)$ entry to be a two:
$$\left[
\begin{matrix}
1&1&0&0\\
1&\colorbox{green}{$2$}&1&0\\
0&1&1&0\\
0&0&0&0
\end{matrix}
\right] \textrm{ vs }
\left[
\begin{matrix}
0&0&0&0\\
0&\colorbox{green}{$2$}&2&1\\
0&2&2&1\\
0&2&2&1
\end{matrix}
\right].$$
The matrix on the left is in the base, and the one on the right is not. The question ``bigrassmannian vs base'' seems intimately connected to the question: What is the type C analog of the type A ``falling down'' algorithm?
These two matrices may be ``unfolded'' to the centrally symmetric type A Waldspurger matrices:
$$\left[\begin{matrix}
1&1&0&0&0&0&0\\
1&\colorbox{green}{$2$}&1&0&0&\colorbox{green}{$0$}&0\\
0&1&1&0&0&0&0\\
0&0&0&0&0&0&0\\
0&0&0&0&1&1&0\\
0&\colorbox{green}{$0$}&0&0&1&\colorbox{green}{$2$}&1\\
0&0&0&0&0&1&1
\end{matrix}
\right] \textrm{ and }
\left[\begin{matrix}
0&0&0&0&0&0&0\\
0&\colorbox{green}{$1$}&1&1&1&\colorbox{green}{$1$}&0\\
0&1&1&1&1&1&0\\
0&1&1&1&1&1&0\\
0&1&1&1&1&1&0\\
0&\colorbox{green}{$1$}&1&1&1&\colorbox{green}{$1$}&0\\
0&0&0&0&0&0&0
\end{matrix}
\right].$$
If we write $2=2+0$ and we ``fall down'' in the type A way, we get the matrix on the left. If we write $2=1+1$ and ``fall down'' in the type A way, we get the matrix on the right. Conjecturally, the Waldspurger matrices of bigrassmannian elements for $C_n$ are determined by specifying a single entry, unfolding it to specify four entries of a $(2n-1)\times(2n-1)$ centrally symmetric type A Waldspurger matrix, and then performing the type A falling down algorithm. Elements of the base come from unfolding the specified entry as inequitably as possible.
\subsection{Waldspurger Order}
Define the \emph{Waldspurger order} on a finite reflection group to be the componentwise order on Waldspurger matrices. The low-dimensional cases give one hope that Bruhat order might, as in type A, be merely componentwise comparison of Waldspurger matrices. This is true for $C_2$ and $C_3$, and in both cases the Dedekind-MacNeille completion comes from simply folding centrally symmetric ASMs. It fails for $C_n$ when $n\geq 4$ (though Waldspurger order appears to be an extension of Bruhat order).
Among the bigrassmannian elements of $C_4$, there are exactly two cover relations in Waldspurger order which are not cover relations in Bruhat order:
$$\left[
\begin{matrix}
1&1&0&0\\
1&2&1&0\\
0&1&1&0\\
0&0&0&0
\end{matrix}
\right]<
\left[
\begin{matrix}
2&2&2&1\\
2&2&2&1\\
2&2&2&1\\
2&2&2&1
\end{matrix}
\right] \textrm{ and }
\left[
\begin{matrix}
0&0&0&0\\
0&2&2&1\\
0&2&2&1\\
0&2&2&1
\end{matrix}
\right]<
\left[
\begin{matrix}
1&1&1&0\\
1&2&3&1\\
1&3&5&2\\
0&2&4&2
\end{matrix}
\right].
$$
Folded $8 \times 8$ centrally symmetric ASMs also fail to form a lattice with respect to componentwise order. The same sort of failure was recognized for signed monotone triangles by Reading in Section 10, Question 4 of \cite{Reading2002}.
\section{Further Questions}\label{further questions section}
\begin{enumerate}
\item It is curious that the same elements which caused bigrassmannian$\neq$base for $B_4$ and $C_4$ are involved in causing Waldspurger order$\neq$Bruhat order. Is this a coincidence, or can one use it to give a concrete combinatorial description of the elements of the Dedekind-MacNeille completion of Bruhat order for types B and C using Waldspurger matrices?
\item Is there a simple way to determine a signed permutation's essential set in the sense of \cite{2016arXiv161208670A} from its Waldspurger matrix?
\item Is there a description of Waldspurger order in terms of words in the Coxeter group?
\item How many elements are there in the Dedekind-MacNeille completion of Bruhat order for type B?
\item In \cite{meinrenken2009tilings}, Meinrenken has another intriguing theorem: Let $W$ be an affine Weyl group with fundamental alcove $A$. Then for any endomorphism $S$ in the same connected component as $0$ in the set $\left\{ S \in \textrm{End}(V) \mid \textrm{det}(S-w)\neq 0 \textrm{ for all } w \in W\right\}$, the simplices $(S-w)A$ for $w \in W$ are all disjoint and their closures cover the entire vector space $V$. \newline This theorem seems to provide an interesting interpolation between the affine hyperplane arrangement, or Stiefel diagram, and the Meinrenken tile. Does any nice combinatorics arise from selecting nice endomorphisms? Is there an intrinsic characterization of the types of tilings that arise in this way?
\item It is a classical result in Ehrhart theory \cite{postnikov2009permutohedra} that the number of lattice points inside the permutohedron is the number of forests on the vertex set $\{1,2,\dots, n\}$. We have exhibited a surjective map from ASMs to these same points. Is there an interpretation of these multiplicities in terms of forest structures?
\end{enumerate}
\section{Introduction}
The aim of semi-supervised learning is to improve supervised learners by exploiting potentially large amounts of, typically easier to obtain, unlabeled data \cite{chapelle06b}. Up to now, however, studies of semi-supervised learners have reported mixed results when it comes to such improvements: it is not always the case that semi-supervision results in lower expected error rates. On the contrary, severely deteriorated performance has been observed in empirical studies, and theory shows that improvement guarantees can often only be provided under rather stringent conditions \cite{castelli95a,ben-david08a,lafferty07a,singh08a}.
Now, the principal suggestion put forward in this chapter is that, when dealing with semi-supervised learning, one may not only want to study the (expected) error rates these classifiers produce, but also measure the classifiers' performance by means of the intrinsic loss they may be optimizing in the first place. That is, for classification routines that optimize a so-called surrogate loss at training time---which is what many machine learning and Bayesian decision theoretic approaches do \cite{scholkopf2002learning,robert2001bayesian}---we propose to also investigate how this loss behaves on the test set, as this can provide an alternative view on the classifier's behavior that a mere error rate cannot capture.
In fact, though our main example concerns semi-supervision, we would like to argue that similar considerations might be beneficial in other learning scenarios. For instance in active learning \cite{settles2010active}, where rather than sampling randomly from one's input data to provide instances with labels, one aims to do the sampling in a systematic way, trying to keep the labeling cost as low as possible or, similarly, to learn from as few labeled examples as possible. Here too it may (or, we believe, should) be of interest to compare not only the error rates that different approaches (e.g.\ random sampling and uncertainty sampling \cite{lewis1994sequential}) achieve, but also how the surrogate losses compare for these techniques when the same underlying classifiers are used. Similar remarks can be made for other learning scenarios like domain adaptation, transfer learning, and learning under data shift or data drift \cite{margolis2011literature,torrey2009transfer,quionero2009dataset,vzliobaite2010learning}. In these last settings, one may typically want to compare, say, a classifier trained in the source domain with one that exploits additional knowledge on the target domain.
\subsection{Surrogate Loss vs. Error Rates}
The simple idea underlying our suggestion is that, unless we make particular assumptions, we generally cannot expect to minimize the error rate if we are, in fact, optimizing a surrogate loss. This surrogate loss is, to a large extent, chosen for computational reasons, but of course the hope is that, with increasing training set size, minimizing it will lead to improvements not only with respect to this surrogate loss but also with respect to the expected error rate. This cannot be guaranteed in any strict way, however. To start with, the classifier's error rate itself can already act rather unpredictably. A general result by Devroye demonstrates, for instance, that for any classifier there exists a classification problem such that the error rate converges at an arbitrarily slow rate to the Bayes error \cite{devroye1982any}. If the classifier is not a universal approximator \cite{devroye1996probabilistic,steinwart05}, there is not even a guarantee that the Bayes error will ever be reached. Worse still, when we are dealing with such model misspecification, error rates might even go up with increasing numbers of training samples \cite{loog12dip}. This leads to the rather counterintuitive result that, in some cases, expected error rates can actually be improved by throwing arbitrary samples out of the training set. These considerations lead us, all in all, to speculate that any kind of generally valid (i.e., not depending on strong assumptions) expected performance guarantee, if at all possible in semi-supervised learning or any of the other aforementioned learning scenarios, can only be obtained in terms of the surrogate loss of the classifier at hand. Overall, these ideas are in line with those presented in \cite{loog15}.
We can certainly imagine that one takes the position that the only loss that matters is the 0/1 loss and that it is this quantity that has to be minimized. As far as we can see, however, taking this stance to the extreme, one cannot do anything else than try to directly minimize this 0/1 loss and face all the computational complications that come with it. On a less philosophical level, one may claim that the 0/1 loss is, in the end, also not the loss one is interested in. One might actually have an application-relevant loss, and in real applications (clinical, domestic, industrial, pedagogic, etc.) this is seldom the 0/1 loss. In fact, the true loss of interest related to a particular classification problem may ultimately be unknown.
For us there is, however, a more basic reason for studying the surrogate loss intrinsic to the classifier at hand. As a matter of fact, a lower loss really means the model is better, in the sense that the estimated parameters get closer to those of the optimal classifier one would obtain if all data were labeled. In the particular setting of semi-supervised learning, a decrease in expected loss when adding unlabeled data really indicates that the same effect---i.e., an improved model fit---is achieved as with adding more labeled data. In our opinion, this seems the least we could ask for in a semi-supervised setting. With this we still do not mean to claim that the surrogate loss is \emph{the} quantity to study, but it does give us a different perspective on the various learning scenarios. Finally, let us point out that the connection between the 0/1 loss and surrogate losses has attracted quite some attention in recent years. Some papers investigating theoretical aspects for particular classes of loss functions, but also covering the design of such surrogate losses, are \cite{ben2012minimizing,masnadi2008design,nguyen2009surrogate,reid2009surrogate,reid2010composite,scott2011surrogate}. These contributions follow earlier works such as \cite{bartlett2006convexity}, \cite{buja}, and \cite{zhang2004statistical}.
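A tiny constructed example makes the distinction concrete (the numbers below are ours, purely for illustration): on the same test labels, one classifier can have the lower error rate while the other has the lower log loss.

```python
import math

def error_rate(y, p):
    # 0/1 loss: predict class 1 whenever p(y=1|x) > 0.5
    return sum((pi > 0.5) != bool(yi) for yi, pi in zip(y, p)) / len(y)

def log_loss(y, p):
    # average negative log-likelihood of the test labels (the "log loss")
    return -sum(math.log(pi if yi else 1 - pi) for yi, pi in zip(y, p)) / len(y)

y   = [1, 1, 0, 0]             # true test labels
p_a = [0.6, 0.6, 0.4, 0.4]     # classifier A: always right, never confident
p_b = [0.9, 0.45, 0.1, 0.1]    # classifier B: one mistake, otherwise confident

print(error_rate(y, p_a), round(log_loss(y, p_a), 3))  # 0.0  0.511
print(error_rate(y, p_b), round(log_loss(y, p_b), 3))  # 0.25 0.279
```

Classifier B thus fits the test data better in terms of the likelihood while classifying worse; the two criteria need not agree.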
\subsection{Outline}
This chapter illustrates our point by means of two classifiers that optimize the log-likelihood of the model fit to the data. Clearly, this objective should be maximized, but taking minus the likelihood turns it into a loss (sometimes referred to as the log loss). The particular classifiers under consideration are the nearest means classifier (NMC) \cite{duda72a} and classical linear discriminant analysis (LDA) \cite{rao1948utilization}. The next section starts off with a general reflection on these two classifiers, after which two semi-supervised variations are introduced. Section \ref{sect:exp} reports on the results of the experiments, comparing the semi-supervised learners and their supervised counterparts empirically. The final section discusses our findings in the light of the point we would like to make and concludes this chapter.
\section{A Biased Introduction to Semi-Supervision}
Before we get to semi-supervised NMC and LDA, we feel the need to remark that their regular supervised versions are still capable of providing state-of-the-art performance. Especially for relatively high-dimensional, small-sample problems the NMC may be a particularly good choice. Some rather recent examples demonstrating this can be found in bioinformatics and its applications \cite{wilkerson2012differential,villamil2012colon,budczies2012remodeling}, but also in neurology \cite{jolij2011act} and pathology \cite{gazinska2013comparison}. Further use of the NMC can be found in high-impact journals from the fields of oncology, neuroscience, general medicine, pharmacology, and the like. A handful of the latest examples can be found in \cite{hyde2012mnemonic,haibe2012three,desmet2013identification,sjodahl2012molecular}. Similar remarks can be made about LDA, though in comparison with the NMC, relatively more data should be available to make it work at a competitive level. Like the NMC, this classical decision rule is still employed in many recent contributions from a large number of disciplines, e.g.\ \cite{ackermann2013detection,allen2013network,chung2013single,brunton2013rats,price2012cyanophora}. All in all, like any other classifier, NMC and LDA have their validity and cannot be put aside as being outdated or not state-of-the-art. The fact that these classifiers have been around for 40 years or more does not mean they are superseded. In this respect, the reader might also want to consult relevant works such as \cite{handXXX} and \cite{efronXXX}.
\subsection{Supervised NMC and LDA}
The two semi-supervised versions of both the NMC and LDA are those based on classical expectation maximization or self-learning and those based on a so-called intrinsically constrained formulation. These approaches are introduced in the subsections that follow. The models underlying supervised NMC and LDA are based on normality assumptions for the class-conditional probability density functions. More specifically:
\begin{itemize}
\item
LDA is the classical technique in which the class-conditional covariance matrices are assumed to be the same across all classes, but where both the class means and the class priors can vary from class to class. Estimating these parameters by maximum likelihood results in the well-known solutions for the priors and the means, while the overall class covariance matrix becomes the prior-weighted sum of the ML estimates of the individual class covariance matrices.
\item
For the NMC the parameter space is further restricted. In addition to the covariance matrix being the same for all classes, it is also constrained to be a multiple of the identity matrix. Moreover, the priors are fixed to be equal for all classes. In \cite{loog15} one can find the solution to this parameter estimation problem. Here we note that this model is not necessarily unique: there are of course various ways in which one can formulate the NMC (as well as other classifiers) in terms of an optimization problem. Ours is but one choice.
\end{itemize}
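To make the two parameterizations concrete, the maximum-likelihood estimates can be sketched as follows (a minimal NumPy sketch under the stated normality assumptions, not the chapter's actual implementation):

```python
import numpy as np

def fit_lda(X, y):
    """ML estimates for the LDA model: per-class means and priors, and a
    shared covariance equal to the prior-weighted sum of the (biased)
    class covariance estimates."""
    classes = np.unique(y)
    priors = {k: np.mean(y == k) for k in classes}
    means = {k: X[y == k].mean(axis=0) for k in classes}
    cov = sum(priors[k] * np.cov(X[y == k].T, bias=True) for k in classes)
    return priors, means, cov

def fit_nmc(X, y):
    """NMC as a restricted LDA: equal priors and a shared spherical
    covariance s2 * I, so only the class means and one variance remain."""
    classes = np.unique(y)
    means = {k: X[y == k].mean(axis=0) for k in classes}
    # ML variance: mean squared deviation from the own-class mean, per dimension
    s2 = np.mean([np.sum((x - means[k]) ** 2) for x, k in zip(X, y)]) / X.shape[1]
    return means, s2
```

With equal spherical covariances and equal priors, the resulting rule assigns each point to its nearest class mean, which is exactly the NMC decision rule.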
\subsection{EM and Self-Learning}
Self-learning or self-training is a rather generally applicable approach to semi-supervised learning \cite{basu02a,mclachlan75a,vittaut02}. In an initial step, the classifier of choice is trained on the available labeled data. Using this trained classifier all unlabeled data is assigned a label. Then, in a next step all of this now labeled data is added to the training set and the classifier is retrained with this enlarged set. Given this newly trained classifier one can relabel the initially unlabeled data and retrain the classifier again with these updated labels. This process is then iterated until convergence, i.e., when the labeling of the initially unlabeled data remains unchanged. The foregoing only gives the basic recipe for self-learning. Many variations and alternatives are possible, e.g., one can only take a fraction of the unlabeled data into account when retraining, once labeled one can decide to not relabel the data, etc.
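The basic recipe just described can be sketched generically; `fit` and `predict` below stand for any base classifier, and the nearest-mean pair in the test is our own illustrative choice:

```python
import numpy as np

def self_learn(fit, predict, X_lab, y_lab, X_unl, max_iter=100):
    """Basic self-learning: train on the labeled data, impute labels for
    the unlabeled pool, retrain on everything, and iterate until the
    imputed labels no longer change."""
    model = fit(X_lab, y_lab)
    y_imp = predict(model, X_unl)
    for _ in range(max_iter):
        model = fit(np.vstack([X_lab, X_unl]),
                    np.concatenate([y_lab, y_imp]))
        y_new = predict(model, X_unl)
        if np.array_equal(y_new, y_imp):  # converged: labels unchanged
            return model
        y_imp = y_new
    return model
```

The variations mentioned above (retraining on only a confident fraction, freezing labels once assigned) all slot into this same skeleton.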
Another well-known, and arguably more principled semi-supervised approach treats the absence of certain labels as a missing data problem. Most of the time this is formulated in terms of a maximum likelihood objective \cite{dempster77a} and relies on the classical technique of expectation maximization (EM) to come to a solution \cite{nigam98a,oneill78a}. Although self-learning and EM may at a first glance seem different ways of tackling the semi-supervised classification problem, \cite{basu02a} effectively shows that self-learners optimize the same objective as EM does (though they may typically end up in different local optima). Similar observations have been made in \cite{abney04a,haffari07a}.
A major problem with EM and self-learning strategies is that they often suffer severely deteriorated performance with increasing numbers of unlabeled samples. This behavior, which has been extensively studied in various previous works \cite{cohen04a,cozman06a,loog2013semi,yang2011effect}, is typically caused by model misspecification, i.e., the setting in which the statistical model does not fit the actual data distribution. We note that this is in contrast with the supervised setting, where most classifiers handle mismatched data assumptions rather well and adding more labeled data typically improves performance. The NMC will most definitely suffer from model misspecification because of the rather rigid, low-complexity nature of this classifier. LDA is more flexible, but still only able to model linear decision boundaries; hence LDA will often be misspecified as well.
\subsection{Intrinsically Constrained NMC}
In \cite{loog10x} and \cite{loog12a}, a novel way to learn in a semi-supervised manner was introduced. On a conceptual level, the idea is to exploit constraints that are known to hold for the NMC and LDA and that define relationships between the class-specific parameters of those classifiers and certain statistics that are independent of the particular labeling. These relationships are automatically fulfilled in the supervised setting but typically impose constraints in the semi-supervised setting. Specifically, for NMC and LDA the following constraint holds (see \cite{fukunaga90}):
\begin{equation}\label{eq:altlaw}
N m = \sum_{k=1}^K N_k m_k \, ,
\end{equation}
where $K$ is the number of classes, $m$ is the overall sample mean of the data, and $m_k$ are the different sample means of the $K$ classes. $N$ is the total number of training instances and $N_k$ is the number of observations for class $k$. For LDA there is an additional constraint that holds (again see \cite{fukunaga90}):
\begin{equation}\label{eq:cov}
B + W = T \, .
\end{equation}
It relates the standard estimates for the average class-conditional covariance matrix $W$, the between-class covariance matrix $B$, and the estimate of the total covariance matrix $T$. $W$ is the covariance matrix that models the spread of every class in LDA.
In the supervised setting these constraints need not be imposed, as they are automatically fulfilled. Their benefit only becomes apparent with the arrival of unlabeled data. In the semi-supervised setting, the label-independent estimates $m$ and $T$ can be improved. Using these more accurate estimates, however, results in a violation of the constraints. Restoring the constraints by properly adjusting $m_k$, $W$, and $B$ makes these label-dependent estimates more accurate and, in expectation, leads to improved classifiers. For a more detailed account of how to enforce these constraints, we refer to \cite{loog2013semi} (see \cite{krijthe} and \cite{loog} for related approaches).
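Both constraints are easy to verify numerically, provided all covariances are the biased (divide-by-$N$) estimates and the classes are weighted by their empirical priors $N_k/N$; a quick sketch on synthetic data of our own:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 3))
y = np.array([0] * 30 + [1] * 30)
N, classes = len(y), (0, 1)

# label-independent statistics
m = X.mean(axis=0)                      # overall sample mean
T = np.cov(X.T, bias=True)              # total covariance estimate

# label-dependent statistics
Nk = {k: np.sum(y == k) for k in classes}
mk = {k: X[y == k].mean(axis=0) for k in classes}
W = sum(Nk[k] / N * np.cov(X[y == k].T, bias=True) for k in classes)
B = sum(Nk[k] / N * np.outer(mk[k] - m, mk[k] - m) for k in classes)

print(np.allclose(N * m, sum(Nk[k] * mk[k] for k in classes)))  # Eq. (1): True
print(np.allclose(B + W, T))                                    # Eq. (2): True
```

Replacing $m$ and $T$ by better, unlabeled-data-based estimates breaks these identities, and the constrained approach re-adjusts the label-dependent quantities until they hold again.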
The constrained estimation approach is less generally applicable, but it can avoid the severe deteriorations that self-learning displays: when the model does not match the data, the model fit will obviously not be good, but the constrained semi-supervised fit will generally still be better, in terms of the error rate, than its supervised equivalent. Still, even in this constrained setting, the results turn out not to be unequivocal: error rates can increase with an increasing number of unlabeled samples, and we consider further insight into this issue paramount for a deeper understanding of the semi-supervised learning problem in general.
\section{Experimental Setup and Results}\label{sect:exp}
For the experiments, we used eight data sets from the UCI Machine Learning Repository \cite{Bache+Lichman:2013}, all having two classes. The data sets used, together with some basic specifications, can be found in Table \ref{tab:real}. We set up the experiments in a way similar to those performed in \cite{loog2013semi}.
\begin{table}[ht]
\begin{center}
\caption{Basic properties of the eight two-class data sets from the UCI Machine Learning Repository \cite{Bache+Lichman:2013}.}\label{tab:real}
\begin{tabular}{l|c|c|c|}
data & \# objects & dimensions & smallest prior
\\ \hline
{ \tt haberman } & 306 & 3 & 0.26 \\
{ \tt ionosphere } & 351 & 33 & 0.36 \\
{ \tt pima } & 768 & 8 & 0.35 \\
{ \tt sonar } & 208 & 60 & 0.47 \\
{ \tt spect } & 267 & 22 & 0.21 \\
{ \tt spectf } & 267 & 44 & 0.21 \\
{ \tt transfusion } & 748 & 3 & 0.24 \\
{ \tt wdbc } & 569 & 30 & 0.37
\end{tabular}
\end{center}
\end{table}
Experiments with the three NMCs were done for two different total labeled training set sizes, four and ten, while the unlabeled training set sizes considered are $2^1=2$, $2^3$, \dots, $2^{9}$, and $2^{11} = 2048$. For the supervised and semi-supervised LDAs, experiments were carried out with $100$ labeled samples, while the unlabeled training set sizes are the same as for the NMCs. In the experiments, we study learning curves for increasing amounts of unlabeled data. For every combination of the number of unlabeled and labeled objects, 1000 repetitions of randomly drawn data were used to obtain accurate performance estimates. In order to be able to do so based on the limited number of samples provided by the data sets, instances were drawn with replacement. This basically means that we assume that the empirical distribution of every data set is its true distribution, which therefore allows us to measure the true error rates and the true log-likelihoods. It enabled us to properly study our learning curves on real-world data without having to deal with the extra variation due to cross-validation and the like.
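The resampling protocol can be sketched as follows (a schematic NumPy sketch, not the chapter's code; `fit` stands for any semi-supervised trainer and `risk` for either the error rate or minus the log-likelihood, evaluated on the full data set, which plays the role of the true distribution):

```python
import numpy as np

def learning_curve(X, y, fit, risk, n_lab, unlab_sizes, reps=1000, seed=0):
    """Bootstrap learning curve: treat the data set's empirical
    distribution as the true one, draw training sets with replacement,
    and average the 'true' risk over many repetitions."""
    rng = np.random.default_rng(seed)
    curve = []
    for n_unl in unlab_sizes:
        vals = []
        for _ in range(reps):
            lab = rng.integers(0, len(y), n_lab)   # labeled bootstrap sample
            unl = rng.integers(0, len(y), n_unl)   # unlabeled bootstrap sample
            model = fit(X[lab], y[lab], X[unl])
            vals.append(risk(model, X, y))         # risk under the empirical law
        curve.append(float(np.mean(vals)))
    return curve
```

Evaluating the risk on the full data set rather than on a held-out split is what removes the extra cross-validation variance mentioned above.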
\begin{figure*}[ht]
\centering
\hrulefill~{\small NMC / error rates / 4 training samples}~\hrulefill \smallskip \\
\includegraphics[width=0.32\hsize]{figs/res002_1.pdf}
\includegraphics[width=0.32\hsize]{figs/res002_2.pdf}
\includegraphics[width=0.32\hsize]{figs/res002_3.pdf} \bigskip \\
\includegraphics[width=0.32\hsize]{figs/res002_4.pdf}
\includegraphics[width=0.32\hsize]{figs/res002_5.pdf} \bigskip \\
\includegraphics[width=0.32\hsize]{figs/res002_6.pdf}
\includegraphics[width=0.32\hsize]{figs/res002_7.pdf}
\includegraphics[width=0.32\hsize]{figs/res002_8.pdf}
\caption{Mean error rates for the supervised (black), self-learned (yellow), and the constrained NMC (blue) on the eight real-world data sets for various unlabeled sample sizes and a total of four labeled training samples.}\label{fig:one}
\end{figure*}
\begin{figure*}[ht]
\centering
\hrulefill~{\small NMC / error rates / 10 training samples}~\hrulefill \smallskip \\
\includegraphics[width=0.32\hsize]{figs/res010_1.pdf}
\includegraphics[width=0.32\hsize]{figs/res010_2.pdf}
\includegraphics[width=0.32\hsize]{figs/res010_3.pdf} \bigskip \\
\includegraphics[width=0.32\hsize]{figs/res010_4.pdf}
\includegraphics[width=0.32\hsize]{figs/res010_5.pdf} \bigskip \\
\includegraphics[width=0.32\hsize]{figs/res010_6.pdf}
\includegraphics[width=0.32\hsize]{figs/res010_7.pdf}
\includegraphics[width=0.32\hsize]{figs/res010_8.pdf}
\caption{Mean error rates for the supervised (black), self-learned (yellow), and the constrained NMC (blue) on the eight real-world data sets for various unlabeled sample sizes and a total of ten labeled training samples.}\label{fig:oneprime}
\end{figure*}
\begin{figure*}
\centering
\hrulefill~{\small NMC / log-likelihoods / 4 training samples}~\hrulefill \smallskip \\
\includegraphics[width=0.32\hsize]{figs/llk002_1.pdf}
\includegraphics[width=0.32\hsize]{figs/llk002_2.pdf}
\includegraphics[width=0.32\hsize]{figs/llk002_3.pdf} \bigskip \\
\includegraphics[width=0.32\hsize]{figs/llk002_4.pdf}
\includegraphics[width=0.32\hsize]{figs/llk002_5.pdf} \bigskip \\
\includegraphics[width=0.32\hsize]{figs/llk002_6.pdf}
\includegraphics[width=0.32\hsize]{figs/llk002_7.pdf}
\includegraphics[width=0.32\hsize]{figs/llk002_8.pdf}
\caption{Mean log-likelihood for the supervised (black), self-learned (yellow), and the constrained NMC (blue) on the eight real-world data sets for various unlabeled sample sizes and a total of four labeled training samples. Compare these to the respective error rates in Figure \ref{fig:one}.}\label{fig:two}
\end{figure*}
\begin{figure*}
\centering
\hrulefill~{\small NMC / log-likelihoods / 10 training samples~}\hrulefill \smallskip \\
\includegraphics[width=0.32\hsize]{figs/llk010_1.pdf}
\includegraphics[width=0.32\hsize]{figs/llk010_2.pdf}
\includegraphics[width=0.32\hsize]{figs/llk010_3.pdf} \bigskip \\
\includegraphics[width=0.32\hsize]{figs/llk010_4.pdf}
\includegraphics[width=0.32\hsize]{figs/llk010_5.pdf} \bigskip \\
\includegraphics[width=0.32\hsize]{figs/llk010_6.pdf}
\includegraphics[width=0.32\hsize]{figs/llk010_7.pdf}
\includegraphics[width=0.32\hsize]{figs/llk010_8.pdf}
\caption{Mean log-likelihood for the supervised (black), self-learned (yellow), and the constrained NMC (blue) on the eight real-world data sets for various unlabeled sample sizes and a total of ten labeled training samples. Compare these to the respective error rates in Figure \ref{fig:oneprime}.}\label{fig:twoprime}
\end{figure*}
\begin{figure*}
\centering
\hrulefill~{\small LDA / error rates / 100 training samples}~\hrulefill \smallskip \\
\includegraphics[width=0.32\hsize]{figs/reslda_1.pdf}
\includegraphics[width=0.32\hsize]{figs/reslda_2.pdf}
\includegraphics[width=0.32\hsize]{figs/reslda_3.pdf} \bigskip \\
\includegraphics[width=0.32\hsize]{figs/reslda_4.pdf}
\includegraphics[width=0.32\hsize]{figs/reslda_5.pdf} \bigskip \\
\includegraphics[width=0.32\hsize]{figs/reslda_6.pdf}
\includegraphics[width=0.32\hsize]{figs/reslda_7.pdf}
\includegraphics[width=0.32\hsize]{figs/reslda_8.pdf}
\caption{Mean error rates for supervised (black), self-learned (yellow), and constrained LDA (blue) on the eight real-world data sets for various unlabeled sample sizes and a total of 100 labeled training samples.}\label{fig:five}
\end{figure*}
\begin{figure*}
\centering
\hrulefill~{\small LDA / log-likelihoods / 100 training samples}~\hrulefill \smallskip \\
\includegraphics[width=0.32\hsize]{figs/llklda_1.pdf}
\includegraphics[width=0.32\hsize]{figs/llklda_2.pdf}
\includegraphics[width=0.32\hsize]{figs/llklda_3.pdf} \bigskip \\
\includegraphics[width=0.32\hsize]{figs/llklda_4.pdf}
\includegraphics[width=0.32\hsize]{figs/llklda_5.pdf} \bigskip \\
\includegraphics[width=0.32\hsize]{figs/llklda_6.pdf}
\includegraphics[width=0.32\hsize]{figs/llklda_7.pdf}
\includegraphics[width=0.32\hsize]{figs/llklda_8.pdf}
\caption{The curves for the log-likelihood of the three LDAs corresponding to the error curves in Figure \ref{fig:five}. Note that the blue curve is not always clearly visible as it is more or less fully occluded by the yellow curve.}\label{fig:fiveprime}
\end{figure*}
Following the introductory section, we constructed learning curves both for the expected error rate and the expected log-likelihood (based on the 1000 repetitions). Figure \ref{fig:one} shows the error rates for the NMCs on the various data sets when only four training samples are available. Figure \ref{fig:oneprime} shows the error when ten samples are at hand. The corresponding average log-likelihood curves can be found in Figures \ref{fig:two} and \ref{fig:twoprime}, respectively. Figure \ref{fig:five} reports the error rates obtained with 100 training samples and using the supervised and semi-supervised LDAs. Figure \ref{fig:fiveprime} reports on the corresponding log-likelihoods. The supervised classification performance is displayed in black, self-learners are in yellow (NCS 0580-Y10R), and the constrained versions are in blue (NCS 4055-R95B). The lighter bands around the learning curves give an indication of the standard deviations of the averaged curves, providing an idea of the statistical significance of the differences between the curves.
\section{Discussion and Conclusion}\label{sect:fin}
To start with, it is important to note that when we look at the error rates, behaviors can indeed be quite diverse. For both classifiers and for both the constrained and the self-learned semi-supervised approaches, there are examples of error rates higher as well as lower than the average error rate the regular supervised learners achieve. Sometimes rather erratic behavior can be noted, as for the self-learned NMC on {\tt wdbc} in Figure \ref{fig:one} (yellow curve) and the constrained LDA on {\tt haberman} and {\tt transfusion} in Figure \ref{fig:five} (blue curves). On these last two, the behavior of self-learned LDA does not seem very regular either. Overall, the performance of the self-learners is very disappointing, as only on {\tt wdbc} with 4 labeled training samples can some overall, but not very convincing, improvements be observed. Regarding expected error rates, the constrained approach fares significantly better, showing clear performance improvement in at least 6 of the 16 NMC experiments and in 5 out of 8 of the LDA experiments. Still, in at least 3 of the 16, classification errors become significantly worse for the NMC and, in 5 out of 8 experiments, constrained LDA is not convincing.
Things change drastically when we look at the log-likelihood curves. For the constrained approaches, looking at Figure \ref{fig:two} and the lower half of Figure \ref{fig:five}, the story is very simple: whereas for the error rate deteriorations, improvements, and erratic behavior could all be observed, the log-likelihood improves---i.e., increases---in every single case in a smooth, monotonic, and significant way. Only for LDA on {\tt haberman} and perhaps {\tt transfusion} does the constrained approach not improve as convincingly as in the 22 other cases.
For self-learned NMC and LDA, the results remain mixed. In many cases we now do see improvements, but there are still some data sets on which the likelihood decreases. Notably, for self-learned NMC with 4 labeled samples, the log-likelihood on the test data improves in all cases, but we do not see the monotonic behavior that the constrained approach displays. Still, the curves are less erratic than those for the error rates. Nonetheless, it seems that even when we quantify performance in terms of log-likelihoods, we should be very critical of self-learning and EM-based approaches. Their behavior is definitely much more regular in terms of the surrogate loss, but performance worse than that of the supervised approach still occurs.
Nevertheless, the results illustrate that it can be interesting to study not only the performance in terms of error rates but also in terms of the surrogate loss. This is irrespective of the possibility that, ultimately, one might only be interested in the former. It is encouraging to observe empirically that there seem to be semi-supervised learning schemes that can guarantee improvements in terms of the intrinsic surrogate loss. This really is a nontrivial observation, as similar guarantees for error rates seem out of the question (unless strict conditions on the data are imposed; cf.\ \cite{castelli95a,ben-david08a,lafferty07a,singh08a}). Although our illustration is in terms of semi-supervised learning, it seems rather plausible that similar observations can be made for other learning settings in which two or more different estimation techniques for the same type of classifier, relying on the same surrogate loss, are compared. All in all, it is worthwhile considering the behavior of the surrogate in general, as it provides us with a view on a classifier's relative performance that a mere error rate cannot capture.
\bibliographystyle{unsrt}
\section*{Supplemental Material}
Figures S1 and S2 show the photon-energy dependence of the ARPES maps
of the band structure of SnSe. Because the information about the
electron's perpendicular momentum $k_{\perp}$ is lost upon leaving the
surface, its value is estimated using the free-electron final-state
formula $\frac{\hbar^{2}}{2m_{e}}k_{\perp}^{2}=E_{kin}\cos^{2}\vartheta-V_{o}$
($\vartheta$ is the electron emission angle, 0 for $\bar{\Gamma}$),
with only one free parameter, the inner potential $V_{o}$. This parameter
measures the position of the bottom of the free-electron final band
with respect to the vacuum level and is usually found to lie in the
range from $-10$ to $-20$~eV. Both values of the inner potential used
in Figures S1 and S2 confirm that the scans at six photon energies
(30--50~eV) sampled along three half-widths ($\frac{\pi}{a}=0.27\text{Å}^{-1}$)
of the bulk Brillouin zone. The dispersion along $k_{a}$ of the
highest-lying band has been shown to be below our detection limit of
30--50~meV.
Figure S3 shows, in a different aspect ratio and color scale, band-dispersion
cuts of the pockets $w_{1}$ to $w_{4}$ along $\bar{\Gamma}-\bar{Z}$,
and cuts of pockets $w_{4}$ and $v_{2}$ in the perpendicular direction,
to justify the parabolic fitting of the bands at the top of the valence
band. In addition to the fits of the maxima of photoemission intensity,
the latter two have also been fitted by following the
half-width-at-half-maximum points from above.
The last section describes our toy-model calculation of several
transport properties for a single band resembling the bands that form
the top of the valence band of SnSe. We argue that the slight mass
anisotropy cannot account for the factor-of-4 difference in conductivities
that has been observed along the $b$ and $c$ axes of SnSe.
\newpage{}
\begin{center}
\noindent\shadowbox{\begin{minipage}[t]{1\columnwidth - 2\fboxsep - 2\fboxrule - \shadowsize}%
\begin{center}
\includegraphics[width=160mm]{FigS1}
\par\end{center}%
\end{minipage}}
\par\end{center}
\textbf{Figure S1}. Photon energy dependence of the ARPES maps of
the band structure of SnSe. Perpendicular momentum $k_{a}$ and its
relative change $\Delta k_{a}$ have been estimated using the inner
potential $V_{o}$ of -12~eV. Dashed horizontal lines are shown at
several characteristic energies as a guide to the eye.
\newpage{}
\begin{center}
\noindent\shadowbox{\begin{minipage}[t]{1\columnwidth - 2\fboxsep - 2\fboxrule - \shadowsize}%
\begin{center}
\includegraphics[width=160mm]{FigS2}
\par\end{center}%
\end{minipage}}
\par\end{center}
\textbf{Figure S2}. Photon energy dependence of the ARPES maps of
the band structure of SnSe. Perpendicular momentum $k_{a}$ and its
relative change $\Delta k_{a}$ have been estimated using the inner
potential $V_{o}$ of -16~eV. Dashed horizontal lines are shown at
a few characteristic energies as a guide to the eye.
\newpage{}
\begin{center}
\noindent\shadowbox{\begin{minipage}[t]{1\columnwidth - 2\fboxsep - 2\fboxrule - \shadowsize}%
\begin{center}
\includegraphics[width=160mm]{FigS3}
\par\end{center}%
\end{minipage}}
\par\end{center}
\textbf{Figure S3}. Several band dispersion cuts of the pockets $w$
and $v$ shown in a different aspect ratio exemplifying the parabolic
shape of the bands. In addition to maximum-intensity parabolae (dashed
lines) overlaid to the cuts BB of the pocket $w_{4}$ and CC of the
pocket $v_{2}$ are parabolae following half-width-at-half-maximum
points from above. These are free from any intensity coming from neighboring
bands. Photons of 34, 30, and 50 eV were used, respectively.
\newpage{}
\textbf{A toy-model calculation of conductivity and Seebeck coefficient
tensors} $\sigma$, $S$ for a single band with the in-plane parabolic
dispersion and a tight-binding-like dispersion across the layers,
mimicking the valence band of SnSe:
\[
\varepsilon(k_{a},k_{b},k_{c})=-\frac{\hbar^{2}k_{c}^{2}}{2m_{c}}-\frac{\hbar^{2}k_{b}^{2}}{2m_{b}}-\Delta(1-\cos ak_{a})
\]
in the Boltzmann transport equation formalism (González-Romero, arXiv:\href{http://arxiv.org/abs/1612.05967v1}{1612.05967})
\begin{minipage}[b][6cm][c]{10cm}%
\[
\sigma_{\alpha\beta}=-\frac{e^{2}}{(2\pi)^{3}}\iiint_{BZ}v_{\alpha}v_{\beta}\tau_{\vec{k}}\,\frac{\partial}{\partial\varepsilon}f_{o}(\varepsilon,\mu)\,d\vec{k}
\]
\[
(\sigma S)_{\alpha\beta}=-\frac{ek_{B}}{(2\pi)^{3}}\iiint_{BZ}v_{\alpha}v_{\beta}\tau_{\vec{k}}\,\frac{\varepsilon-\mu}{k_{B}T}\frac{\partial}{\partial\varepsilon}f_{o}(\varepsilon,\mu)\,d\vec{k}
\]
\begin{center}
$\alpha$ and $\beta$ denote the Cartesian axes
\par\end{center}%
\end{minipage}%
\begin{minipage}[b][6cm][c]{6cm}%
\includegraphics[width=4.5cm]{fermi-derivative-timesx}
$f_{o}(\varepsilon,\mu)=1/(\exp\frac{\varepsilon-\mu}{k_{B}T}+1)$
\medskip{}
$\frac{\partial}{\partial\varepsilon}f_{o}(\varepsilon,\mu)=\frac{1}{k_{B}T}f_{o}(f_{o}-1)$%
\end{minipage}
assuming an elliptical angular dependence of the in-plane scattering
time
\begin{minipage}[b][5cm][c]{7cm}%
\[
\tau(k_{b},k_{c})=\tau_{0}\sqrt{\frac{\tau_{b}^{2}k_{b}^{2}+\tau_{c}^{2}k_{c}^{2}}{k_{b}^{2}+k_{c}^{2}}}
\]
\end{minipage}%
\begin{minipage}[b][5cm][c]{8cm}%
\includegraphics[width=4.5cm]{\string"SnSe-transport_tensors-tau_anisotropy\string".pdf}\includegraphics[width=3.5cm]{\string"SnSe-transport_tensors-tau_anisotropy_2Dinkscape\string".pdf}%
\end{minipage}
Here, $v_{a}=-\frac{\Delta}{\hbar}a\sin ak_{a}$, $v_{b}=-\frac{\hbar}{m_{b}}k_{b}$,
$v_{c}=-\frac{\hbar}{m_{c}}k_{c}$. Scattering time $\tau_{0}$ is
usually taken to be of the order of several femtoseconds. The effective
masses are nearly isotropic, $m_{c}=0.21$, $m_{b}=0.18$. $k_{B}T=25$
meV.
\begin{minipage}[t][7\totalheight]{7cm}%
Setting the ratio $\tau_{b}/\tau_{c}$ to 15, this simple model gives
$\sigma_{bb}/\sigma_{cc}=2.5$. The anisotropy found in angle-resolved
transport measurements is as high as 4 (Xu \emph{et al}, DOI:\href{https://doi.org/10.1021/acsami.7b00782}{10.1021/acsami.7b00782}).
Three values of $2\Delta$, the total band dispersion across the layers,
were used: $50$ meV, $100$ meV (lighter curves), and $25$ meV (darker
curves).
Interestingly, the in-plane Seebeck coefficient tensor components
are insensitive to the scattering time anisotropy, and the power factor
is only affected in the first order, through $\sigma$.
A smaller bandwidth in the perpendicular direction, i.e.\ weak bonding
between the layers, leads to higher conductivities and power factors
for a given chemical potential, but has negligible influence on the
Seebeck coefficient.%
\end{minipage}\hspace{0.5cm}%
\begin{minipage}[t][7\totalheight]{6cm}%
\includegraphics[width=6cm]{\string"SnSe-transport_tensors-sigmabb_sigmacc-deltadep\string".pdf}
\includegraphics[width=6cm]{\string"SnSe-transport_tensors-Sbb_Scc-deltadep\string".pdf}
\includegraphics[width=6cm]{\string"SnSe-transport_tensors-PFbb_PFcc-deltadep\string".pdf}%
\end{minipage}
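As a cross-check of the quoted anisotropy, the ratio $\sigma_{bb}/\sigma_{cc}$ can be estimated with a short numerical sketch of the in-plane part of the model (the $\Delta\rightarrow0$ limit, with $\hbar=k_{B}T=1$ in arbitrary units; the chemical potential below is a hypothetical choice):

```python
import numpy as np

# Minimal 2D sketch of the in-plane conductivity anisotropy of the toy
# band (Delta -> 0 limit; hbar = k_B*T = 1, arbitrary units).  Mass and
# tau-ratio values follow the text; mu is a hypothetical choice.
m_b, m_c = 0.18, 0.21          # nearly isotropic effective masses
tau_b, tau_c = 15.0, 1.0       # elliptical scattering-time anisotropy
mu = -2.0                      # chemical potential below the band top

k = np.linspace(-3.0, 3.0, 301)
kb, kc = np.meshgrid(k, k, indexing="ij")

eps = -kb**2 / (2 * m_b) - kc**2 / (2 * m_c)     # hole-like dispersion
f0 = 1.0 / (np.exp(eps - mu) + 1.0)
w = -f0 * (f0 - 1.0)                             # -df0/deps  (k_B T = 1)

vb, vc = kb / m_b, kc / m_c                      # band velocities (sign drops out)
knorm = np.hypot(kb, kc) + 1e-12
tau = np.sqrt(tau_b**2 * kb**2 + tau_c**2 * kc**2) / knorm

sigma_bb = np.sum(vb * vb * tau * w)
sigma_cc = np.sum(vc * vc * tau * w)
ratio = sigma_bb / sigma_cc
print(ratio)   # a factor of a few, far below tau_b/tau_c = 15
```

Even with $\tau_b/\tau_c=15$, the elliptical angular average dilutes the bare scattering-time anisotropy to a factor of a few, consistent with the value of 2.5 quoted above.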
\section{Introduction}
Polling models are queueing models in which a single server alternately
visits a number of queues in some prescribed order. These models
have been studied extensively in the literature; for example, various
service disciplines (rules that describe the server's behaviour
while visiting a queue) have been considered, for models both with and
without switch-over times. We refer to Takagi \cite{Takagi1,Takagi2} and Vishnevskii and Semenova \cite{Vishnevskii}
for some literature reviews and to Boon, van der Mei and Winands \cite{Boon},
Levy and Sidi \cite{Levy} and Takagi \cite{Takagi3} for overviews of the
applicability of polling models.
Motivated by questions regarding the performance modelling and analysis
of optical networks, the study of polling models with {\it retrials} and
{\it glue periods} was initiated in Boxma and Resing \cite{BoxmaResing}.
In these models, just before the server arrives at a station there is
some glue period. Customers (both new arrivals and retrials) arriving at
the station during this glue period ``stick'' and will be served during the
visit of the server. Customers arriving in any other period leave
immediately and will retry after an exponentially distributed time.
In \cite{BoxmaResing}, the joint queue length process is analysed both at
embedded time points (beginnings of glue periods, visit periods and
switch-over periods) and at arbitrary time points, for the model with two
queues and {\it deterministic} glue periods. This analysis was later
extended in Abidini, Boxma and Resing \cite{Abidini16} to the model with a
general number of queues. After that, in Abidini et al. \cite{KimKim}, an algorithm is presented to obtain the moments of
the number of customers in each station for the model with {\it
exponentially distributed} glue periods. Furthermore, in
\cite{KimKim} also
a workload decomposition for the model with {\it generally distributed}
glue periods is derived leading to a pseudo-conservation law.
The pseudo-conservation law is in turn used to obtain approximations
of the mean waiting times at the different stations. None of these papers, however, derives analytical expressions for the complete joint distributions; deriving these is something we aim to do in this paper.
In this manuscript, we will study the above-described polling system with
glue periods and retrials in a heavy traffic regime. More concretely, we consider the regime in which each of the arrival rates is scaled by the same constant, and this constant approaches, from below, the value at which the system becomes critically loaded. The workload
offered to the server is then scaled so that the queues are on
the verge of instability. Many techniques have been used to obtain the
heavy traffic behaviour of a variety of different polling models. Initial
studies of the heavy traffic behaviour of polling systems can be found in
Coffman, Puhalskii and Reiman \cite{CoffmanEtAl1,CoffmanEtAl2}, where the
occurrence of a so-called heavy traffic averaging principle is established.
This principle implies that, although the total scaled workload in the system
tends to a Bessel-type diffusion in the heavy traffic regime, it may be
considered as a constant during the course of a polling cycle, while the loads
of the individual queues fluctuate like a fluid model. This principle
will turn out to hold for our model as well. Furthermore, in van der Mei
\cite{MeiHTLST}, several heavy traffic limits have been established by taking
limits in known expressions for the Laplace-Stieltjes transform (LST) of the
waiting-time distribution. Alternatively, Olsen and van der Mei \cite{OlsenMei}
provide similar results, by studying the behaviour of the descendant set approach
(a numerical computation method, cf.\ Konheim, Levy and Srinivasan
\cite{KonheimLevySrinivasan}) in the heavy traffic limit. For the derivation of
heavy traffic asymptotics for our model, however, we will use results from
branching theory, mainly those presented in Quine \cite{Quine}. Earlier, these
results have resulted in heavy traffic asymptotics for conventional polling models,
see van der Mei \cite{MeiHTBranching}. We will use the same method as presented
in that paper, but for a different class of polling system that models the dynamics
of optical networks. In addition, for some steps of the analysis we will present new, straightforward proofs, while other steps require a different approach. Furthermore, we will derive asymptotics for the \emph{joint} queue length process at arbitrary time points, as opposed to just the marginal processes derived in \cite{MeiHTBranching}. Due to the additional intricacies of the model at hand, we will need to overcome a number of complications, as will become apparent later.
The rest of the paper is organized as follows. In Section 2, we introduce
some notation and present a theorem from \cite{Quine} on multitype
branching processes with immigration. In Section 3, we describe in detail
the polling model with retrials and glue periods and recall from
\cite{Abidini16} how the joint queue length process at
some embedded time points in this model is related to multitype branching
processes with immigration. Next, we will derive heavy traffic results for
our model. In Section 4, we consider the joint queue length process at
the start of glue periods. In Section 5, we look at the joint queue
length process at the start of visit and switch-over periods, while in
Section 6, we consider the joint queue length
process at arbitrary time points. Finally, in Section 7, we show how the heavy
traffic results, in combination with a light traffic result, can be used
to approximate performance measures for stable systems with arbitrary
system loads.
\section{Multitype branching processes with immigration}
To derive heavy traffic results for the model under study, we regard its queue length process as a multitype branching process with immigration. To this end, before introducing the actual model in detail, in this section we state an important general result from \cite{Quine} on multitype branching processes with immigration, which we will use extensively in the remainder of this paper. To state this result, we first need some notation.
A multitype branching process with immigration has two kinds of individuals: immigrants and offspring.
We denote with $\underbar{$g$} = (g_1, \ldots, g_N)$ the mean immigration
vector. Here, $g_i$ is the mean number of type~$i$ immigrants entering
the system in each generation, for all $i= 1,\ldots, N$.
The offspring in the model is represented by the vector of generating functions
$h(\underbar{$z$}) = ( h_{1}(\underbar{$z$}), h_{2}(\underbar{$z$}), \ldots, h_{N}(\underbar{$z$}))$.
Here, $\underbar{$z$}=(z_1,z_2,\ldots,z_N)$ and $|z_i| \leq 1$, for all
$i= 1, \ldots, N$, and
\begin{equation*}
h_i(\underbar{$z$}) = \sum_{j_1,\ldots,j_N \geq 0} p_i(j_1,\ldots,j_N) z_1^{j_1} \ldots z_N^{j_N},
\end{equation*}
where $p_i(j_1,\ldots,j_N)$ is the probability that a type~$i$ individual produces $j_k$ type~$k$ individuals, for all $i =1,\ldots,N$ and $k =1,\ldots,N$.
We use this to define the mean matrix $\mathbf{M} = (m_{i,j})$, where $m_{i,j}= \left.\frac{\partial h_i(\underbar{$z$})}{\partial z_j}\right\vert_{\underbar{$z$} = \underbar{$1$}}$, for all $i,j= 1, \ldots, N$; here \underbar{$1$} denotes a vector with all entries equal to one. The element $m_{i,j}$ represents
the mean number of type~$j$ children produced by a type~$i$ individual per generation.
We also define the second-order derivative matrix $K^{(i)}= \left(k^{(i)}_{j,k}\right)$ where $k^{(i)}_{j,k}= \left.\frac{\partial^2 h_i(\underbar{$z$})}{\partial z_j \partial z_k}\right\vert_{\underbar{$z$}=\underbar{$1$}}$, for all $i,j,k= 1, \ldots, N.$
Define $\underbar{$ w$} = (w_1, \ldots, w_N)^T$ as the normalized right eigenvector corresponding to the maximal
eigenvalue $\xi$ of $\mathbf{M}$. Then,
\[
\mathbf{M} \underbar{$ w$} = \xi \underbar{$w $} ~~~~~~~~~~~~ \text{and}~~~~~~~~~~~~~\underbar{$ w$} ^T \underbar{$1 $} = 1.
\]
Furthermore, we define $\underbar{$v$} = (v_1, \ldots, v_N)^T$ as the left eigenvector of $\mathbf{M}$, corresponding to the maximal
eigenvalue $\xi$, normalized such that
\[
\underbar{$v $} ^T \underbar{$ w$} =1.
\]
Additionally, we introduce the following general notation in order to state the result of \cite{Quine}. Any variable $x$ which depends on $\xi$ will be denoted by $\hat{x}$ to indicate that it is
evaluated at $\xi =1$. Further, for $0<\xi <1$, let
\begin{equation}
\pi_0(\xi) :=0~~~~~ \text{and}~~~~~ \pi_n(\xi) := \sum_{r=1}^{n} \xi^{r-2}, ~~~~~~n = 1,2,\ldots.
\label{lifeofpi}
\end{equation}
We denote with $\Gamma (\alpha, \mu)$ a gamma-distributed random variable with shape parameter $\alpha$ and rate parameter $\mu$. For $\alpha , \mu , x > 0$, its probability density function is given by
\[
f(x) = \frac{\mu^{\alpha}}{\Gamma(\alpha)} x^{\alpha -1} e^{- \mu x},~~~~~
\text{where} ~~~~~~ \Gamma(\alpha) := \int_{t=0}^{\infty} t^{\alpha -1} e^{-t} dt.
\]
Now that the required notation is defined, we state the following important result, which is Theorem 4 of \cite{Quine}.
\begin{theorem}
If all first and second order derivatives of $h(\underbar{$z$})$ exist at $\underbar{$ z$} = \underbar{$ 1$} $, and $ 0 <g_i < \infty$ for all $i = 1, \ldots, N$, then
\begin{equation*}
\frac{1}{\pi(\xi)}
\left(\begin{array}{cc}
Z_{1} \\
\vdots \\
Z_{N}
\end{array} \right)
\xrightarrow[d]{}
A \left(\begin{array}{cc}
\hat{v}_1 \\
\vdots \\
\hat{v}_N
\end{array} \right)
\Gamma(\alpha , 1),~~~\text{when}~~ \xi \uparrow 1 .
\end{equation*}
Here $\xrightarrow[d]{}$ means convergence in distribution, $ \pi(\xi) :=\lim_{n \to \infty}\pi_n(\xi)$, $\alpha := \frac{1}{A} \hat{\underbar{$ g$} }^T \hat{\underbar{$w $} }$ and $A:= \frac{1}{2} \sum_{i=1}^{N} \hat{v}_i \left( \hat{\underbar{$w $}} ^T \hat{K}^{(i)} \hat{\underbar{$ w$} }\right) > 0 $.
The vector $(Z_{1},Z_{2},\ldots ,Z_{N})$ is defined such that $Z_{i}$ is the steady-state number of individuals of type~$i$ in the multitype branching process with immigration, for all $i= 1, \ldots, N.$
\label{thm1}
\end{theorem}
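Since $\mathbb{E}[\Gamma(\alpha,1)]=\alpha$, the theorem suggests for the first moments that $\mathbb{E}[\underbar{$Z$}]/\pi(\xi) \rightarrow A\alpha\hat{\underbar{$v$}} = (\hat{\underbar{$g$}}^T\hat{\underbar{$w$}})\hat{\underbar{$v$}}$ as $\xi\uparrow 1$, with $\pi(\xi)=1/(\xi(1-\xi))$, while the exact steady-state mean satisfies $\mathbb{E}[\underbar{$Z$}] = (I-\mathbf{M}^T)^{-1}\underbar{$g$}$. A minimal numerical sketch of this first-moment statement, for a hypothetical two-type mean matrix and immigration vector:

```python
import numpy as np

# Hypothetical 2-type offspring mean matrix, rescaled so that its
# Perron eigenvalue equals xi exactly.
M0 = np.array([[0.5, 0.3],
               [0.2, 0.6]])
spec0 = max(abs(np.linalg.eigvals(M0)))
M = lambda xi: (xi / spec0) * M0

g = np.array([1.0, 2.0])          # hypothetical mean immigration vector

# Perron right/left eigenvectors at criticality (xi = 1),
# normalized as in the text: w^T 1 = 1 and v^T w = 1.
evals, R = np.linalg.eig(M(1.0))
w = np.real(R[:, np.argmax(evals.real)]); w /= w.sum()
evalsL, Lv = np.linalg.eig(M(1.0).T)
v = np.real(Lv[:, np.argmax(evalsL.real)]); v /= v @ w

# pi(xi) = sum_{r>=1} xi^{r-2} = 1/(xi (1 - xi)); the first-moment
# version of the theorem predicts E[Z]/pi(xi) -> (g^T w) v as xi -> 1.
for xi in (0.9, 0.99, 0.999):
    EZ = np.linalg.solve(np.eye(2) - M(xi).T, g)   # exact steady-state mean
    print(xi, EZ * xi * (1 - xi), (g @ w) * v)
```

As $\xi$ approaches $1$, the scaled exact mean converges to the predicted constant vector in the direction of $\hat{\underbar{$v$}}$.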
\section{Polling model with retrials and glue periods}
In this section we first define the polling model with retrials and glue
periods. Then, we recall from \cite{Abidini16} its property that the
joint queue length process at the start of glue periods of a certain queue
is a multitype branching process with immigration.
\subsection{Model description}
\label{subsec:model}
We consider a single server polling model
with multiple queues,
$Q_i,$ $i= 1,\ldots,N$.
Customers arrive at $Q_i$ according to a Poisson process with rate $\lambda_i$;
they are called type-$i$ customers.
The service times at $Q_i$ are i.i.d., with $B_i$ denoting a generic service time,
with distribution $B_i(\cdot)$ and Laplace-Stieltjes transform (LST) $\tilde{B}_i(\cdot)$.
The server cyclically visits all the queues, thus after a visit of $Q_i$,
it switches to $Q_{i+1}$, $i= 1,\ldots,N$.
Successive switch-over times from $Q_i$ to $Q_{i+1}$
are i.i.d., where $S_i$ denotes a generic switch-over time,
with distribution $S_i(\cdot)$ and LST $\tilde{S}_i(\cdot)$.
We make all the usual independence assumptions about interarrival times, service times
and switch-over times at the queues.
After a switch of the server to $Q_i$, there first is a deterministic (i.e., constant)
glue period $G_i$, before the visit of the server at $Q_i$ begins.
The significance of the glue period stems from the following assumption.
Customers who arrive at $Q_i$ do not receive service immediately.
When customers arrive at $Q_i$ during a glue period $G_i$, they stick, joining the queue of $Q_i$.
When they arrive in any other period, they immediately leave and
retry after a retrial interval which is independent
of everything else, and which is exponentially distributed with parameter $\nu_i$, $i=1,\ldots,N$.
Since customers will only `stick' during the glue period, the service discipline at all queues can be interpreted as being gated. That is, during the visit period at $Q_i$, the server serves all
`glued' customers in that queue, i.e., all type-$i$ customers waiting at the end of the glue period,
but none of those in orbit,
and neither any new arrivals. We are interested in the steady-state behaviour of this polling model with retrials.
We hence assume that the stability condition
$\rho = \sum_{i=1}^N \rho_i < 1$ holds, where
$\rho_i := \lambda_i \mathbb{E} [B_i]$.
Note that the server now has three different periods at each station: a deterministic glue period during which customers are glued for service, followed by a visit period during which all the glued
customers are served, and a switch-over period during which the server moves to the next station.
We denote, for $i=1,\ldots ,N$, by $(X_{1}^{(i)},X_{2}^{(i)},\ldots ,X_{N}^{(i)})$, $(Y_1^{(i)},Y_2^{(i)},\ldots ,Y_N^{(i)})$ and
$(Z_1^{(i)},Z_2^{(i)},\ldots ,Z_N^{(i)})$ vectors whose distributions are the limiting distributions of
the number of customers of each type in the system at the start of a glue period, a visit period and a switch-over period of station $i$, respectively. Furthermore, we denote, for $i=1,\ldots ,N$, by $(V_1^{(i)},V_2^{(i)},\ldots ,V_N^{(i)})$
the vector whose distribution is the limiting distribution of
the number of customers of each type in the system at an arbitrary point in time during a visit period of station $i$.
During glue and visit periods, we furthermore distinguish between customers who are queueing in $Q_i$
and customers who are in orbit for $Q_i$. Therefore we write
$Y_i^{(i)} = Y_i^{(iq)} + Y_i^{(io)}$ and $V_i^{(i)} = V_i^{(iq)} + V_i^{(io)}$, for all $i=1,\ldots ,N$, where $q$ indicates queueing and $o$ indicates in orbit.
Finally, we denote by $(L^{(1q)},\ldots ,L^{(Nq)},L^{(1o)},\ldots,L^{(No)})$ the vector whose distribution is the limiting distribution of
the numbers of customers of each type in the queues and in the orbits at an arbitrary point in time.
The generating function of the vector of numbers of arrivals
at $Q_1$ to $Q_N$ during a type-$i$ service time $B_i$ is $\beta_i(\underbar{$z$}) := \tilde{B}_i(\sum_{j=1}^{N}\lambda_j(1-z_j))$.
Similarly, the generating function of the vector of numbers of arrivals
at $Q_1$ to $Q_N$ during a type-$i$ switch-over time $S_i$ is $\sigma_i(\underbar{$z$}) := \tilde{S}_i(\sum_{j=1}^{N}\lambda_j(1-z_j))$.
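Once the service-time LST is specified, these multivariate PGFs are straightforward to evaluate. A minimal sketch assuming, purely for illustration, exponential service times and hypothetical rates, which also checks the mean number of arrivals $\lambda_j\mathbb{E}[B_i]$ numerically:

```python
# PGF of the numbers of arrivals at Q_1..Q_N during a type-i service,
# beta_i(z) = B~_i(sum_j lambda_j (1 - z_j)).  Exponential service times
# (B~_i(s) = mu_i / (mu_i + s)) and the rates below are purely
# illustrative assumptions.
lam = [0.3, 0.2]          # hypothetical arrival rates
mu_i = 2.0                # hypothetical service rate of queue i

def beta_i(z):
    s = sum(l * (1 - zj) for l, zj in zip(lam, z))
    return mu_i / (mu_i + s)

# Mean number of type-1 arrivals during B_i: the derivative at z = 1
# equals lambda_1 * E[B_i] = lambda_1 / mu_i.
eps = 1e-6
num_deriv = (beta_i([1.0, 1.0]) - beta_i([1.0 - eps, 1.0])) / eps
print(num_deriv, lam[0] / mu_i)
```

The switch-over PGF $\sigma_i(\underbar{$z$})$ works identically with $\tilde{S}_i$ in place of $\tilde{B}_i$.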
\subsection{Relation with multitype branching processes}
\label{sub:branchingprocess}
We now identify the relation of the polling model as defined in Section \ref{subsec:model} with a multitype branching process.
In \cite{Abidini16} it is shown that the number of customers of different
types in the system at the start of a glue period of station $1$ in the
polling model with retrials and glue periods is a multitype branching
process with immigration. In particular, it is derived in \cite{Abidini16}
that the joint PGF of $X_1^{(1)},\ldots,X_N^{(1)}$ satisfies
\begin{equation}
\mathbb{E} \left[z_1^{X_{1}^{(1)}} z_2^{X_{2}^{(1)}}\ldots z_N^{X_{N}^{(1)}}\right]=\prod_{i=1}^{N}\sigma^{(i)}(\underbar{$z$}) \prod_{i=1}^{N} {\rm e}^{-G_i D_i(\underbar{$z$})}
\mathbb{E} \left[ [h_1(\underbar{$z$})]^{X_{1}^{(1)}}[h_2(\underbar{$z$})]^{X_{2}^{(1)}}\ldots[h_N(\underbar{$z$})]^{X_{N}^{(1)}}\right],
\label{M1}
\end{equation}
\begin{eqnarray}
\text{where}~~~~~~\sigma^{(i)}(\underbar{$z$}) &:=& \sigma_i(z_1,\ldots ,z_i,h_{i+1}(\underbar{$z$}),\ldots ,h_N(\underbar{$z$})), \nonumber \\
D_i(\underbar{$z$})&:=& \sum_{j=1}^{i-1} \lambda_j (1-z_j) +\lambda_i \left(1-\beta^{(i)}(\underbar{$z$})\right) + \sum_{j=i+1}^{N} \lambda_j(1-h_j(\underbar{$z$})), \nonumber \\
\beta^{(i)}(\underbar{$z$}) &:=& \beta_i(z_1,\ldots ,z_i,h_{i+1}(\underbar{$z$}),\ldots ,h_N(\underbar{$z$})), \nonumber \\
h_{i}(\underbar{$z$})&:=&f_i(z_1,\ldots ,z_i,h_{i+1}(\underbar{$z$}),\ldots ,h_N(\underbar{$z$})),\nonumber \\
\text{and}~~~~~~f_i(\underbar{$z$}) &:=& (1-{\rm e}^{-\nu_i G_i}) \beta_i(\underbar{$z$}) + {\rm e}^{-\nu_i G_i} z_i. \nonumber
\end{eqnarray}
As explained in detail in \cite{Abidini16}, \eqref{M1} consists of the
product of three factors:
\begin{itemize}
\item $\prod_{i=1}^{N}\sigma^{(i)}(\underbar{$z$})$ represents new arrivals during
switch-over times and descendants of these arrivals in the current cycle.
\item $\prod_{i=1}^{N} {\rm e}^{-G_i D_i(\underbar{$z$})}$ represents new arrivals during glue periods
and descendants of these arrivals in the current cycle.
The function $D_i(\underbar{$z$})$ is itself a sum of three terms:
\begin{itemize}
\item $\sum_{j=1}^{i-1} \lambda_j (1-z_j)$ represents the arrivals of type $j <i$; these arrivals are not served in the current cycle.
\item $\lambda_i \left(1-\beta^{(i)}(\underbar{$z$})\right)$ represents descendants of the arrivals of type $i$; these arrivals are all served
during the visit of station $i$ in the current cycle.
\item $\sum_{j=i+1}^{N} \lambda_j(1-h_j(\underbar{$z$}))$ represents the
arrivals or descendants of arrivals of type $j>i$; these arrivals are either served (with probability $1-{\rm e}^{-\nu_i G_i}$) or not served (with probability
${\rm e}^{-\nu_i G_i}$) in the current cycle.
\end{itemize}
\item $\mathbb{E} \left[ [h_1(\underbar{$z$})]^{X_{1}^{(1)}}[h_2(\underbar{$z$})]^{X_{2}^{(1)}}\ldots[h_N(\underbar{$z$})]^{X_{N}^{(1)}}\right]$ represents descendants of $(X_1^{(1)},\ldots,X_N^{(1)})$ generated in the current cycle.
\end{itemize}
We now proceed to further identify the branching process by finding its mean matrix $\mathbf{M}$ and the mean immigration vector \underbar{$g$}.
\subsubsection*{Mean matrix of branching process:}
The elements $m_{i,j}$ of the mean matrix $\mathbf{M}$ of the branching
process are given by
\begin{equation*}
m_{i,j}= f_{i,j} \cdot 1[j \leq i] + \sum_{k=i+1}^{N} f_{i,k} m_{k,j},
\label{meanchildre}
\end{equation*}
where $f_{i,j}= \left.\frac{\partial f_i(\underbar{$z$})}{\partial z_j}\right\vert_{\underbar{$z$}=\underbar{$1$}},$ and hence
\begin{eqnarray*}
f_{i,j} & = & \begin{cases}
(1- {\rm e}^{-\nu_iG_i}) \lambda_j \mathbb{E} [B_i] , & i \ne j, \\
(1- {\rm e}^{-\nu_iG_i}) \rho_i + {\rm e}^{-\nu_iG_i} , & i=j.
\end{cases}
\end{eqnarray*}
In the heavy traffic analysis of this model the following lemma will be useful.
\begin{lemma}
\begin{equation}
\mathbf{M} = \mathbf{M_1} \cdots \mathbf{M_N},
\label{piM}
\end{equation}
where, for $i=1,2,\ldots,N$, we have
\begin{equation}
\mathbf{M_i} = \left(\begin{array}{ccccccccc}
1~~~&~~~ 0~~~&~~~ \cdots~~~&~~~ 0 ~~~&~~~0 ~~~&~~~0~~~&~~~ \cdots ~~~&~~~\cdots~~~&~~~ 0 \\
0 ~~~&~~~1 ~~~&~~~\ddots~~~&~~~ \vdots~~~&~~~ 0~~~&~~~ 0 ~~~&~~~\cdots~~~&~~~ \cdots ~~~&~~~0 \\
\vdots~~~&~~~\ddots~~~&~~~\ddots~~~&~~~ 0~~~&~~~ 0~~~&~~~ 0~~~&~~~ \cdots~~~&~~~ \cdots~~~&~~~ 0\\
0~~~&~~~\cdots~~~&~~~0~~~&~~~ 1~~~&~~~ 0~~~&~~~ 0~~~&~~~ \cdots~~~&~~~ \cdots~~~&~~~ 0\\
f_{i,1}& f_{i,2}&\cdots& f_{i, i-1}& f_{i,i}& f_{i, i+1}& \cdots& \cdots& f_{i,N}\\
0~~~&~~~ \cdots ~~~&~~~\cdots~~~&~~~ 0 ~~~&~~~ 0 ~~~&~~~ 1 ~~~&~~~ 0~~~&~~~ \cdots ~~~&~~~ 0 \\
0~~~&~~~ \cdots ~~~&~~~\cdots~~~&~~~ 0 ~~~&~~~ 0 ~~~&~~~ 0 ~~~&~~~ 1 ~~~&~~~ \ddots ~~~&~~~ 0 \\
0~~~&~~~ \cdots ~~~&~~~\cdots~~~&~~~ 0 ~~~&~~~ 0 ~~~&~~~ 0 ~~~&~~~ \ddots ~~~&~~~ \ddots ~~~&~~~ 0 \\
0~~~&~~~ \cdots ~~~&~~~\cdots~~~&~~~ 0 ~~~&~~~ 0 ~~~&~~~ 0 ~~~&~~~ \cdots ~~~&~~~ 0 ~~~&~~~ 1 \\
\end{array} \right).
\label{meanmatrixi}
\end{equation}
\end{lemma}
\proof{First of all, note that $ m_{N,j}= f_{N,j}$ for all $j= 1, \ldots, N.$ Therefore we have
\begin{equation*}
\mathbf{M_N} = \left(\begin{array}{ccccccccc}
1~~~&~~~ 0~~~&~~~ \cdots~~~&~~~\cdots~~~&~~~ 0 \\
0 ~~~&~~~1 ~~~&~~~\ddots~~~&~~~ \cdots ~~~&~~~0 \\
\vdots~~~&~~~\ddots~~~&~~~\ddots~~~&~~~ \cdots~~~&~~~ 0\\
0~~~&~~~ \cdots ~~~&~~~\cdots~~~&~~~ \ddots ~~~&~~~ 0 \\
0~~~&~~~ \cdots ~~~&~~~\cdots~~~&~~~ 1 ~~~&~~~0 \\
m_{N,1}& m_{N,2}&\cdots& m_{N,N-1}& m_{N,N}\\
\end{array} \right).
\end{equation*}
Now using the fact that
$m_{N-1,j}= f_{N-1,j} + f_{N-1,N} m_{N,j}$ for all $j \leq N-1$ and
furthermore
$m_{N-1,N}= f_{N-1,N} m_{N,N}$, we obtain that
\begin{eqnarray*}
\mathbf{M_{N-1}}\mathbf{M_N}
&=& \left(\begin{array}{ccccccccc}
1~~~&~~~ 0~~~&~~~ \cdots~~~&~~~ \cdots ~~~&~~~\cdots~~~&~~~ 0 \\
0 ~~~&~~~1 ~~~&~~~\ddots~~~&~~~\cdots~~~&~~~ \cdots ~~~&~~~0 \\
\vdots~~~&~~~\ddots~~~&~~~\ddots~~~&~~~ \cdots~~~&~~~ \cdots~~~&~~~ 0\\
0~~~&~~~ \cdots ~~~&~~~\cdots~~~&~~~ \ddots ~~~&~~~ \ddots ~~~&~~~ 0 \\
0~~~&~~~ \cdots ~~~&~~~\cdots~~~&~~~ 1 ~~~&~~~ 0 ~~~&~~~0 \\
m_{N-1,1}& m_{N-1,2}&\cdots& \cdots& m_{N-1,N-1}& m_{N-1,N}\\
m_{N,1}& m_{N,2}&\cdots& \cdots& m_{N,N-1}& m_{N,N}\\
\end{array} \right).
\end{eqnarray*}
Continuing in this way we obtain
\begin{eqnarray*}
\mathbf{M_{1}} \cdots \mathbf{M_N} = \left(\begin{array}{cc}
m_{1,1} \cdots m_{1,N} \\
\vdots ~~~\ddots ~~~\vdots\\
m_{N,1} \cdots m_{N,N}
\end{array} \right) = \mathbf{M}.
\end{eqnarray*}
\rightline{$\Box$}
\begin{remark}
(Intuition behind Lemma 1) The matrix $\mathbf{M_i}$ represents what happens with customers during a visit period at station~$i$. Customers at station~$i$ itself are either served
or not served, leading to the $i^{th}$ row with elements $f_{i,j}$. Customers at all other stations are not served, leading to $1$'s on the diagonal and $0$'s
outside the diagonal. We obtain the product $\mathbf{M_{1}} \cdots \mathbf{M_N}$ because a cycle consists successively of visit periods of station~$1$, station~$2$, $\ldots$, up to station~$N$.
\end{remark}
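Lemma 1 can be cross-checked numerically by building $\mathbf{M}$ both from the backward recursion for $m_{i,j}$ and as the product $\mathbf{M_1}\cdots\mathbf{M_N}$; all parameter values below are hypothetical:

```python
import numpy as np

N = 3
lam = np.array([0.2, 0.3, 0.1])    # hypothetical arrival rates
EB  = np.array([0.8, 0.5, 1.2])    # hypothetical mean service times
p   = np.array([0.6, 0.7, 0.5])    # p_i = 1 - exp(-nu_i G_i), hypothetical

# f_{i,j} as given in the text: off-diagonal p_i lam_j E[B_i],
# diagonal p_i rho_i + (1 - p_i)
f = p[:, None] * lam[None, :] * EB[:, None] + np.diag(1 - p)

# Backward recursion: m_{i,j} = f_{i,j} 1[j <= i] + sum_{k > i} f_{i,k} m_{k,j}
m = np.zeros((N, N))
for i in range(N - 1, -1, -1):
    for j in range(N):
        m[i, j] = (f[i, j] if j <= i else 0.0) \
                  + sum(f[i, k] * m[k, j] for k in range(i + 1, N))

# Lemma 1: M = M_1 ... M_N, with M_i the identity whose i-th row is f_i
M = np.eye(N)
for i in range(N):
    Mi = np.eye(N)
    Mi[i, :] = f[i, :]
    M = M @ Mi
print(np.allclose(m, M))   # True
```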
\subsubsection*{Mean number of immigrants:}
Next, we look at the immigration part of the process. Let $g_i$ be the
mean number of type~$i$ individuals which immigrate into the system in
each generation. Equation (3.12) of \cite{Abidini16} gives us
\begin{equation}
g_i =\sum_{k=1}^{N} \lambda_k \Bigg(\Bigg(\sum_{j=1}^{k-1}(G_j+\mathbb{E} [S_j])\Bigg)\big(1- {\rm e}^{-\nu_k G_k}\big) + G_k\Bigg) m_{k,i}+\lambda_i \left(\sum_{j=1}^{i-1}(G_j+\mathbb{E} [S_j]){\rm e}^{-\nu_i G_i}+ \sum_{j=i}^N \mathbb{E} [S_j] + \sum_{j=i+1}^N G_j\right) .
\label{immigrants}
\end{equation}
The right-hand side of (\ref{immigrants}) is the sum of two terms. The term $\sum_{k=1}^{N} \lambda_k \left(\left(\sum_{j=1}^{k-1}(G_j+\mathbb{E} [S_j])\right)\left(1- {\rm e}^{-\nu_k G_k}\right) + G_k\right) m_{k,i}$ represents the mean number of type~$i$
customers which are descendants of customers of type $k$, arriving during glue periods and switch-over periods before the visit period of station $k$ and served during the visit at station~$k$, in the current cycle.
The first part of the second term $\lambda_i \sum_{j=1}^{i-1}(G_j+\mathbb{E} [S_j]){\rm e}^{-\nu_i G_i}$ represents the mean number of customers of type~$i$ which
arrive during glue periods and switch-over periods before the visit of the server at station~$i$ and which are not served during the visit of station $i$ in the current cycle.
The second part of the second term $\lambda_i \left(\sum_{j=i}^N \mathbb{E} [S_j] + \sum_{j=i+1}^N G_j\right)$ represents the mean number of customers of type~$i$ which
arrive during glue periods and switch-over periods after the visit period of station~$i$ in the current cycle.
Note that each of the terms mentioned above is non-negative and finite.
Furthermore, for non-zero glue periods and arrival rates, at least one of
the terms is non-zero. Therefore we have $0< g_i < \infty$.
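The expression \eqref{immigrants} and the bounds $0 < g_i < \infty$ can be checked by direct transcription, again for hypothetical parameter values:

```python
import numpy as np

N = 3
lam = np.array([0.2, 0.3, 0.1])   # hypothetical arrival rates
EB  = np.array([0.8, 0.5, 1.2])   # mean service times
ES  = np.array([0.4, 0.6, 0.5])   # mean switch-over times
G   = np.array([0.3, 0.2, 0.4])   # deterministic glue periods
nu  = np.array([1.0, 2.0, 1.5])   # retrial rates
p   = 1 - np.exp(-nu * G)         # glueing probabilities 1 - e^{-nu_i G_i}

# mean matrix entries m_{k,i} via the backward recursion of the text
f = p[:, None] * lam[None, :] * EB[:, None] + np.diag(1 - p)
m = np.zeros((N, N))
for i in range(N - 1, -1, -1):
    for j in range(N):
        m[i, j] = (f[i, j] if j <= i else 0.0) \
                  + sum(f[i, k] * m[k, j] for k in range(i + 1, N))

# mean number of type-i immigrants per cycle
g = np.zeros(N)
for i in range(N):
    for k in range(N):
        g[i] += lam[k] * (np.sum(G[:k] + ES[:k]) * p[k] + G[k]) * m[k, i]
    g[i] += lam[i] * (np.sum(G[:i] + ES[:i]) * np.exp(-nu[i] * G[i])
                      + np.sum(ES[i:]) + np.sum(G[i + 1:]))
print(g)   # each entry strictly positive and finite
```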
\begin{remark}
Note that the branching part of the process only represents descendants of customers which are present in the system at the start of a glue period of station $1$.
Customers which arrive at stations during glue periods and switch-over periods are not represented by the branching part of the process. Instead they and
their descendants are represented by the immigration
part of the process. Both glue periods and switch-over periods can be
considered as parts of the cycle during which the server is not working. This rather unexpected feature explains why the polling model at hand is not part of the class of polling models considered by \cite{MeiHTBranching}, but requires an analysis on its own.
\end{remark}
\section{Heavy traffic analysis: number of customers at start of glue periods of station $1$}
Now that we have successfully modelled the polling system as a multitype branching process with immigration, we derive the limiting scaled joint queue length distribution in each station at the start of glue periods of station $1$ by following the same line of proof as that of \cite{MeiHTBranching}.
In \cite{MeiHTBranching}, the author first proves a couple of lemmas for a conventional branching-type polling system without retrials and glue periods and, afterwards, uses
these lemmas to give the heavy traffic asymptotics of the joint queue length process at certain embedded time points. In the following subsection, we will derive similar lemmas in order to derive a heavy traffic theorem for our polling system with retrials and glue periods.
Note that when we scale our system such that $\rho \uparrow 1$, we
are effectively changing the arrival rate at each station while keeping
the service times and the ratios of the arrival rates fixed. Let any
variable $x$ which is dependent on $\rho$ be denoted by $\hat{x}$
whenever it is evaluated at $\rho =1$. Therefore, for any system, we have
$\lambda_i= \rho \hat{\lambda}_i $.
From Theorem 1 of \cite{Zedek}, we know that if all elements of a matrix
are continuous in some variable, then the real eigenvalues of this matrix
are also continuous in that variable. As each element of $\mathbf{M}$ is a
continuous function of $\rho$, the maximal eigenvalue $\xi$
is a continuous function of $\rho$ as well. Furthermore, from Lemmas 3, 4 and 5 of
\cite{Resing93} we know that $\xi<1$ when $\rho<1$, $\xi=1$ when $\rho =1$
and $\xi > 1$ when $\rho>1$. Therefore, we have that $\xi$ is a
continuous function of $\rho$ and
\begin{equation*}
\lim_{\rho \uparrow 1}\xi (\rho) = \xi (1)=1.
\label{xi1}
\end{equation*}
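This behaviour of $\xi(\rho)$ is easy to observe numerically: scaling the arrival rates by $\rho$ (normalized so that $\rho=1$ is the critical load) and computing the spectral radius of $\mathbf{M}$, one sees $\xi<1$ for $\rho<1$ and $\xi=1$ at $\rho=1$. A sketch with hypothetical parameters:

```python
import numpy as np

N = 3
lam_hat = np.array([0.25, 0.5, 0.25])     # hypothetical rate profile
EB = np.array([1.0, 1.2, 0.8])            # mean service times
lam_hat = lam_hat / (lam_hat @ EB)        # normalize: rho = 1 <=> sum rho_i = 1
p = np.array([0.6, 0.7, 0.5])             # 1 - exp(-nu_i G_i), hypothetical

def xi(rho):
    """Perron eigenvalue of M = M_1 ... M_N at load rho."""
    lam = rho * lam_hat
    f = p[:, None] * lam[None, :] * EB[:, None] + np.diag(1 - p)
    M = np.eye(N)
    for i in range(N):
        Mi = np.eye(N)
        Mi[i] = f[i]
        M = M @ Mi
    return max(abs(np.linalg.eigvals(M)))

for rho in (0.5, 0.9, 0.99, 1.0):
    print(rho, xi(rho))
```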
\subsection{Preliminary results and lemmas}\label{sec:prelim}
\begin{lemma}
The normalized right and left eigenvectors of $\mathbf{\hat{M}}$, the mean matrix of the system with $\rho=1$, corresponding to the maximal
eigenvalue $\xi =1$, are respectively given by
\begin{equation*}
\hat{\underbar{$w $} } = \left(\begin{array}{cc}
\hat{w}_1 \\
\vdots \\
\hat{w}_N
\end{array} \right) =\frac{ \underbar{$b $} }{\lvert \underbar{$b $} \rvert }~~~~~~~~~~~~~~~~~~~~~~\text{and}~~~~~~~~~~~~~~~~~~ \hat{\underbar{$ v$}}
= \left(\begin{array}{cc}
\hat{v}_1 \\
\vdots \\
\hat{v}_N
\end{array} \right) = \frac{\lvert \underbar{$ b$} \rvert} {\delta } \hat{\underbar{$ u$}},
\end{equation*}
where
\begin{equation*}
\underbar{$b $} = \left(\begin{array}{cc}
\mathbb{E} [B_1] \\
\vdots \\
\mathbb{E} [B_N]
\end{array} \right),~~~~~~~~~~~~~~~~~~~~~~~~~\underbar{$ u$}
= \left(\begin{array}{cc}
u_1 \\
\vdots \\
u_N
\end{array} \right),
\end{equation*}
\[ \lvert \underbar{$ b$} \rvert : = \sum_{j=1}^N \mathbb{E} [B_j],~~~~~~~~~~u_j := \lambda_j \left[\frac{e^{-\nu_j G_j} }{1-e^{-\nu_j G_j}} + \sum_{k=j}^N \rho_k \right]~~~\text{and}~~~~\delta := \hat{\underbar{$ u$} }^{T}\underbar{$b $} .
\]
\label{lemma:eigenvector}
\end{lemma}
\proof{
First we look at the normalized right eigenvector $\hat{ \underbar{$w $} }$. Using \eqref{meanmatrixi} we evaluate the vector $ \mathbf{\hat{M}_i} \hat{\underbar{$ w$} }$.
Let $ \left( \mathbf{\hat{M}_i} \hat{\underbar{$ w$}} \right)_j $ represent the $j^{th}$ element of $ \mathbf{\hat{M}_i} \hat{\underbar{$ w$} }$. By a series of simple algebraic manipulations, it follows then that
\begin{equation*}
\left( \mathbf{\hat{M}_i} \hat{\underbar{$ w$}} \right)_j =
\begin{cases} \hat{w}_j , & j \ne i, \\
\frac{1}{\lvert \underbar{$ b$} \rvert} \sum_{k=1}^{N} \hat{f}_{i,k} \mathbb{E}[B_k] , & j=i.
\end{cases}
\end{equation*}
However, it also holds that
\begin{eqnarray*}
\sum_{k=1}^{N} \hat{f}_{i,k} \mathbb{E}[B_k] &=& e^{-\nu_i G_i} \mathbb{E}[B_i]+ \sum_{k=1}^{N} (1-e^{-\nu_i G_i}) \mathbb{E} [B_i] \hat{\lambda}_k \mathbb{E} [B_k] \\
&=& e^{-\nu_i G_i} \mathbb{E}[B_i]+ (1-e^{-\nu_i G_i}) \mathbb{E} [B_i] \sum_{k=1}^{N} \hat{\rho}_k = \mathbb{E} [B_i] .
\end{eqnarray*}
Therefore, we conclude that $\left( \mathbf{\hat{M}_i} \hat{\underbar{$ w$}} \right)_i = \hat{w}_i$.
This implies that $\hat{ \underbar{$ w$} }$ is the normalized right eigenvector of $ \mathbf{\hat{M}_i}$ for an eigenvalue $\xi=1$, for all $i= 1, \ldots,N$.
Hence from \eqref{piM} we get the first part of the lemma.
Next, we look at the left eigenvector $\hat{\underbar{$ v$} }$. Since $\hat{\underbar{$u $}}$ is a multiple of $\hat{\underbar{$v $}}$, it is enough to show that $\hat{\underbar{$ u$} } $ is an eigenvector of $\hat{\mathbf{ M}} $.
Define
\begin{eqnarray}
\underbar{$ u$}^{(i)}
= \left(\begin{array}{cc}
u^{(i)}_1 \\
\vdots \\
u^{(i)}_N
\end{array} \right),~~~~~~~ \text{where}~~~~~~~~~~u_j^{(i)} =\begin{cases}
\lambda_j \left[\frac{e^{-\nu_j G_j} }{1-e^{-\nu_j G_j}} + \sum_{k=j}^N \rho_k + \sum_{k=1}^{i-1} \rho_k \right] , & i \leq j ,\\
\lambda_j \left[\frac{e^{-\nu_j G_j} }{1-e^{-\nu_j G_j}} + \sum_{k=j}^{i-1} \rho_k \right] , & i>j.
\end{cases}
\label{eq:hat}
\end{eqnarray}
Note that $u^{(1)}_j = u^{(N+1)}_j = u_j$, for all $j= 1, \ldots, N$, and
hence, $\underbar{$u$}^{(1)}=\underbar{$u$}^{(N+1)}=\underbar{$u$}$.
Furthermore, we have
\begin{eqnarray*}
\hat{\underbar{$ u$} }^{{(1)}^T} \mathbf{\hat{M}_1} = \left(\begin{array}{cc}
\hat{u}_1 \hat{f}_{1,1}\\
\hat{u}_1 \hat{f}_{1,2} + \hat{u}_2\\
\vdots\\
\hat{u}_1 \hat{f}_{1,N} + \hat{u}_N\\
\end{array}\right)^T = \left(\begin{array}{cc}
\hat{\lambda}_1 \frac{e^{-\nu_1 G_1}}{1-e^{-\nu_1 G_1}} + \hat{\lambda}_1 \hat{\rho}_1\\
\hat{u}_2 +\hat{\lambda}_2 \hat{\rho}_1\\
\vdots\\
\hat{u}_N + \hat{\lambda}_N \hat{\rho}_1\\
\end{array}\right)^T = \left(\begin{array}{cc}
\hat{u}_1^{(2)} \\
\hat{u}_2^{(2)}\\
\vdots\\
\hat{u}_N^{(2)}\\
\end{array}\right)^T = \hat{\underbar{$ u$} }^{{(2)}^T} ,
\end{eqnarray*}
and, in a similar way, for all $i= 1, \ldots, N$,
\begin{equation}
\hat{\underbar{$ u$} }^{{(i)}^T} \mathbf{\hat{M}_i} = \hat{\underbar{$ u$} }^{{(i+1)}^T}.
\label{lefteigeni}
\end{equation}
Therefore we have
\[
\hat{\underbar{$u $}}^{T} \mathbf{M} =\hat{\underbar{$ u$} }^{{(1)}^T} \mathbf{\hat{M}_1}\cdots \mathbf{\hat{M}_N} =
\hat{\underbar{$ u$} }^{{(N+1)}^T} = \hat{\underbar{$ u$} }^{T}.
\]
Hence $\hat{\underbar{$ u$} }$ and $\hat{\underbar{$ v$}}$ are the left eigenvectors of $\mathbf{\hat{M}}$, for eigenvalue $\xi = 1$.
\rightline{$\Box$}
\begin{remark}
Alternatively, we could have used Lemma 4 from \cite{MeiHTBranching} to find the left
and normalized right eigenvectors. The normalized right eigenvector $\hat{\underbar{$w$}}$ is the same as the one given in \cite{MeiHTBranching}.
To find the left eigenvector $\hat{\underbar {$v$}}$ from \cite{MeiHTBranching}, we
first need to calculate the exhaustiveness factor $f_j$.
Each customer of type~$j$, present at the start of a glue period at station~$j$, is served during the following visit period with probability $(1- e^{-\nu_j G_j})$, and during that
service time on average $\rho_j$ new type~$j$ customers will arrive.
Furthermore, with probability $e^{-\nu_j G_j}$ such a customer is not served at all.
Therefore we have $1- f_j = (1- e^{-\nu_j G_j}) \rho_j+ e^{-\nu_j G_j}$, and hence the exhaustiveness factor is given by $f_j = (1- e^{-\nu_j G_j}) (1 -\rho_j)$.
Substituting this exhaustiveness factor in Lemma 4 of \cite{MeiHTBranching} we get
\begin{eqnarray*}
u_j &=& \lambda_j \left[\frac{(1-\rho_j)(1-(1- e^{-\nu_j G_j}) (1 -\rho_j)) }{(1- e^{-\nu_j G_j}) (1 -\rho_j)} + \sum_{k=j+1}^N \rho_k \right] \nonumber \\
&=& \lambda_j \left[\frac{ e^{-\nu_j G_j} + (1 -e^{-\nu_j G_j})\rho_j }{1- e^{-\nu_j G_j} } + \sum_{k=j+1}^N \rho_k \right] \nonumber \\
&=& \lambda_j \left[\frac{ e^{-\nu_j G_j} }{1- e^{-\nu_j G_j} } + \sum_{k=j}^N \rho_k \right],
\end{eqnarray*}
which is in agreement with Lemma \ref{lemma:eigenvector}.
\end{remark}
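As a numerical sanity check on Lemma \ref{lemma:eigenvector}, the eigenvector identities can be verified for a small instance. The sketch below (all parameter values are illustrative, not taken from this paper) builds $\hat{\mathbf{M}} = \hat{\mathbf{M}}_1 \cdots \hat{\mathbf{M}}_N$ at criticality, i.e.\ with $\sum_i \hat{\lambda}_i \mathbb{E}[B_i] = 1$, and confirms that $\hat{\mathbf{M}} \hat{\underbar{$w$}} = \hat{\underbar{$w$}}$ and $\hat{\underbar{$u$}}^T \hat{\mathbf{M}} = \hat{\underbar{$u$}}^T$:

```python
import numpy as np

# Illustrative parameters for N = 3 stations at criticality (not from the paper).
N = 3
EB = np.array([0.5, 1.0, 1.5])            # E[B_i], mean service times
lam = np.array([0.4, 0.3, 0.2])
lam = lam / (lam @ EB)                    # hat{lambda}: rescaled so sum lam_i E[B_i] = 1
rho = lam * EB                            # hat{rho}_i, summing to 1
e = np.exp(-np.array([0.8, 1.0, 0.6]))    # e^{-nu_i G_i}

def M_hat(i):
    # hat{M}_i: identity with row i replaced by
    # hat{f}_{i,k} = e^{-nu_i G_i} 1[k=i] + (1 - e^{-nu_i G_i}) E[B_i] hat{lambda}_k
    M = np.eye(N)
    M[i, :] = (1 - e[i]) * EB[i] * lam
    M[i, i] += e[i]
    return M

M = np.linalg.multi_dot([M_hat(i) for i in range(N)])   # hat{M} = hat{M}_1 ... hat{M}_N

w = EB / EB.sum()                          # normalized right eigenvector, b / |b|
u = lam * (e / (1 - e) + np.array([rho[j:].sum() for j in range(N)]))  # left eigenvector

assert np.allclose(M @ w, w)   # hat{M} w = w
assert np.allclose(u @ M, u)   # u^T hat{M} = u^T
```

Both assertions hold to machine precision, since each row $i$ of $\hat{\mathbf{M}}_i$ applied to $\hat{\underbar{$w$}}$ reproduces $\mathbb{E}[B_i]/\lvert \underbar{$b$} \rvert$ exactly when $\sum_k \hat{\rho}_k = 1$.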
\begin{remark}
\label{rem:vector}
In Lemma \ref{lemma:eigenvector} we have given the left and normalized right eigenvectors for the mean matrix $\mathbf{\hat{M}}$ at eigenvalue $\xi =1$.
Note that this mean matrix is defined for the branching process when we
consider the beginning of a glue period of station~$1$ as
the initial point of the cycle.
Instead if we consider
the beginning of a glue period of station~$i$ as the initial point of the
cycle, we get, for eigenvalue $\xi =1$, the same normalized right
eigenvector $\hat{\underbar{$w$}}$. However, the left eigenvector is now given by the vector $\hat{\underbar{$ v$}}^{(i)}$ defined by
\[
\hat{\underbar{$ v$}}^{(i)}
= \left(\begin{array}{cc}
\hat{v}^{(i)}_1 \\
\vdots \\
\hat{v}^{(i)}_N
\end{array} \right) = \frac{\lvert \underbar{$ b$} \rvert} {\delta } \hat{\underbar{$ u$}}^{(i)}.
\]
Note that $\delta = \hat{\underbar{$ u$} }^{(1)^T}\underbar{$b $} = \hat{\underbar{$ u$} }^{(i)^T}\underbar{$b $}.$
In this paper we prove all the lemmas and theorems using $ \hat{\underbar{$ v $}}= \hat{\underbar{$ v$}}^{(1)}$. However, we can instead use $ \hat{\underbar{$ v$}}^{(i)}$ and prove the same by just changing the initial
point of the cycle from the beginning of a glue period of station~$1$ to the beginning of a glue period of station~$i$.
\end{remark}
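The claim in Remark \ref{rem:vector} can also be checked numerically: for each choice of initial station $i$, the vector $\hat{\underbar{$u$}}^{(i)}$ should be a left eigenvector of the cyclically shifted mean matrix $\hat{\mathbf{M}}_i \cdots \hat{\mathbf{M}}_N \hat{\mathbf{M}}_1 \cdots \hat{\mathbf{M}}_{i-1}$, and $\delta = \hat{\underbar{$u$}}^{(i)^T} \underbar{$b$}$ should not depend on $i$. A sketch with illustrative parameters (not from the paper), using 0-based station indices:

```python
import numpy as np

# Illustrative critical instance (not from the paper).
N = 3
EB = np.array([0.5, 1.0, 1.5])            # E[B_i]
lam = np.array([0.4, 0.3, 0.2])
lam = lam / (lam @ EB)                    # criticality: sum lam_i E[B_i] = 1
rho = lam * EB
e = np.exp(-np.array([0.8, 1.0, 0.6]))    # e^{-nu_i G_i}

def M_hat(i):
    # hat{M}_i: identity with row i replaced by the offspring means of station i
    M = np.eye(N)
    M[i, :] = (1 - e[i]) * EB[i] * lam
    M[i, i] += e[i]
    return M

def u_i(i):
    # u^{(i)} from the definition in the lemma (station indices 0-based)
    u = np.empty(N)
    for j in range(N):
        if i <= j:
            u[j] = lam[j] * (e[j] / (1 - e[j]) + rho[j:].sum() + rho[:i].sum())
        else:
            u[j] = lam[j] * (e[j] / (1 - e[j]) + rho[j:i].sum())
    return u

mats = [M_hat(i) for i in range(N)]
for i in range(N):
    # mean matrix when the cycle starts at station i: M_i ... M_N M_1 ... M_{i-1}
    Mi = np.linalg.multi_dot(mats[i:] + mats[:i])
    assert np.allclose(u_i(i) @ Mi, u_i(i))        # left eigenvector at eigenvalue 1
    assert np.isclose(u_i(i) @ EB, u_i(0) @ EB)    # delta is the same for every i
```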
In Lemma \ref{lemma:eigenvector} we have evaluated the normalized right and left eigenvectors of $\mathbf{M}$ at the maximal eigenvalue, when $\rho \uparrow 1$.
We will now use this to compute the value of the derivative of this eigenvalue as $\rho \uparrow 1$.
\begin{lemma}
For the maximal eigenvalue $\xi = \xi (\rho)$ of the matrix $\mathbf{M}$, the derivative of $\xi (\rho)$ w.r.t. $\rho$ satisfies
\begin{equation*}
\xi ^ \prime (1) = \frac{1}{\delta}.
\label{xiderivative}
\end{equation*}
\label{lemma:eigenvaluederi}
\end{lemma}
\proof{
Since the maximal eigenvalue $\xi$ of $\mathbf{M}$ is a simple eigenvalue
and furthermore $\mathbf{M}$ is continuous in $\rho$, Theorem 5 of Lancaster \cite{Lancaster64} states that
\begin{equation}
\left.\frac{d \xi}{d \rho} \right\vert_ {\rho=1} = \frac{ \hat{\underbar{$ v$} }^{T} \mathbf{\hat{M}^{\prime}}\hat{\underbar{$w $}} }{ \hat{\underbar{$v $} }^{T}\hat{\underbar{$ w$}}},
\label{derivative}
\end{equation}
where $\mathbf{\hat{M}^{\prime}}$ is the element-wise derivative of $\mathbf{M}$ w.r.t. $\rho$, evaluated at $\rho =1$.
Let $\mathbf{U_i} =\left( \prod_{k=1}^{i-1} \mathbf{M_k} \right) \mathbf{M^{\prime}_i} \left(\prod_{k=i+1}^{N} \mathbf{M_k}\right)$. Then due to \eqref{piM} we can write
$\mathbf{M^{\prime}} = \sum_{i=1}^{N} \mathbf{U_i}$.
From \eqref{meanmatrixi} we can see that
\begin{eqnarray}
\mathbf{M^{\prime}_i} &=& \left(\begin{array}{ccccc}
0 & \cdots & 0 & \cdots & 0 \\
\vdots & \ddots & \vdots & \ddots & \vdots\\
\frac{df_{i,1}}{d\rho} & \cdots & \frac{df_{i,i}}{d\rho} & \cdots & \frac{df_{i,N}}{d\rho}\\
\vdots & \ddots & \vdots & \ddots & \vdots\\
0 & \cdots & 0 & \cdots & 0 \\
\end{array} \right)\nonumber \\
&=& \left(\begin{array}{ccc}
0 & \cdots & 0 \\
\vdots & \ddots & \vdots\\
(1- {\rm e}^{-\nu_iG_i}) \mathbb{E}[B_i]\frac{d\lambda_1}{d\rho} & \cdots & (1- {\rm e}^{-\nu_iG_i}) \mathbb{E}[B_i] \frac{d\lambda_N}{d\rho} \\
\vdots & \ddots & \vdots\\
0 & \cdots & 0 \\
\end{array} \right)\nonumber\\
&=& \left(\begin{array}{cc}
~~~~0 \\
~~~~\vdots\\
(1- {\rm e}^{-\nu_iG_i}) \mathbb{E}[B_i]\\
~~~~\vdots\\
~~~~0 \\
\end{array} \right) \left( \frac{d\lambda_1}{d\rho}~~~\cdots~~~ \frac{d\lambda_N}{d\rho} \right).
\label{midder}
\end{eqnarray}
From the definition of $\rho$, we know that
$\sum_{i=1}^N \mathbb{E} [B_i] \frac{d\lambda_i}{d\rho} = 1$, and hence
\begin{equation}
\left(\frac{d \lambda_1}{d\rho}~~~\cdots~~~ \frac{d\lambda_N}{d\rho} \right) \hat{\underbar{$w $}}= \lvert \underbar{$b $} \rvert ^{-1}.
\label{finalder}
\end{equation}
Since $\hat{\underbar{$w $}}$ is the normalized right eigenvector of any $ \mathbf{\hat{M}_i}$ for eigenvalue $\xi=1$, we have
\begin{equation}
\prod_{k=i+1}^{N} \mathbf{\hat{M}_k} \hat{\underbar{$w $}} = \hat{\underbar{$w $}}.
\label{initialder}
\end{equation}
Using \eqref{midder}, \eqref{finalder} and \eqref{initialder} we get
\begin{equation}
\mathbf{\hat{M}^{\prime}_i} \prod_{k=i+1}^{N} \mathbf{\hat{M}_k} \hat{\underbar{$ w$} } = \frac{1}{\lvert \underbar{$ b$} \rvert} \left(\begin{array}{cc}
~~~~0 \\
~~~~\vdots\\
(1- {\rm e}^{-\nu_iG_i}) \mathbb{E}[B_i]\\
~~~~\vdots\\
~~~~0 \\
\end{array} \right).
\label{lemma2p1}
\end{equation}
From \eqref{lefteigeni} and \eqref{lemma2p1} we get
\begin{eqnarray}
\hat{\underbar{$u $}}^{T} \mathbf{\hat{U}_i} \hat{\underbar{$w $} } &=&
\left(\hat{\underbar{$ u$} }^{T} \prod_{k=1}^{i-1} \mathbf{\hat{M}_k} \right) \mathbf{\hat{M}^{\prime}_i} \left(\prod_{k=i+1}^{N} \mathbf{\hat{M}_k} \hat{\underbar{$ w$} } \right) \nonumber \\
&=& \frac{1}{\lvert \underbar{$ b$} \rvert} \left(\begin{array}{cc}
\hat{u}_{1}^{(i)} \\
\vdots \\
\hat{u}_{i-1}^{(i)} \\
\hat{u}_i^{(i)} \\
\hat{u}_{i+1}^{(i)} \\
\vdots \\
\hat{u}_N^{(i)}
\end{array} \right) ^T \left(\begin{array}{cc}
~~~~0 \\
~~~~\vdots\\
~~~~0 \\
(1- {\rm e}^{-\nu_iG_i}) \mathbb{E}[B_i]\\
~~~~0 \\
~~~~\vdots\\
~~~~0 \\
\end{array} \right) \nonumber \\
&=& \frac{\hat{u}_{i}^{(i)} (1- {\rm e}^{-\nu_iG_i}) \mathbb{E}[B_i]}{\lvert \underbar{$ b$} \rvert} = \frac{\hat{\rho}_i}{\lvert \underbar{$ b$} \rvert},
\label{lemma2p2}
\end{eqnarray}
where the last equality follows from the fact that
$\hat{u}_{i}^{(i)} = \hat{\lambda}_i/(1- {\rm e}^{-\nu_iG_i})$, see \eqref{eq:hat}.
Multiplying both sides of \eqref{lemma2p2} with $\lvert \underbar{$ b$} \rvert/ \delta$ and summing it over all $i= 1,\ldots,N,$ we get that
\begin{equation}
\hat{\underbar{$ v$}}^{T} \mathbf{\hat{M}^\prime} \hat{\underbar{$w $} } =
\sum_{i=1}^{N} \frac{\lvert \underbar{$ b$} \rvert }{\delta}\hat{\underbar{$u $} }^{T} \mathbf{\hat{U}_i} \hat{\underbar{$w $} } = \sum_{i=1}^{N} \frac{\hat{\rho}_i}{\delta} = \frac{1}{\delta}.
\label{eq:lemma2}
\end{equation}
Since $\hat{\underbar{$ v$}}^{T} \hat{\underbar{$w $} } = 1$, we obtain from \eqref{derivative} and \eqref{eq:lemma2} that
$ \xi ^ \prime (1) = \frac{1}{\delta}.$
\rightline{$\Box$}}
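Lemma \ref{lemma:eigenvaluederi} lends itself to a direct numerical check: scaling the arrival rates as $\lambda_i = \rho \hat{\lambda}_i$, a central finite difference of the maximal eigenvalue of $\mathbf{M}(\rho)$ at $\rho = 1$ should match $1/\delta$, with $\delta = \hat{\underbar{$u$}}^{(1)^T} \underbar{$b$}$. A sketch with illustrative parameters (not from the paper):

```python
import numpy as np

# Illustrative parameters (not from the paper); criticality: sum lam_i E[B_i] = 1.
N = 3
EB = np.array([0.5, 1.0, 1.5])
lam = np.array([0.4, 0.3, 0.2])
lam = lam / (lam @ EB)                     # hat{lambda}
rho_hat = lam * EB
e = np.exp(-np.array([0.8, 1.0, 0.6]))     # e^{-nu_i G_i}, independent of rho

def M_of(rho):
    # M(rho) = M_1 ... M_N with arrival rates rho * hat{lambda}
    mats = []
    for i in range(N):
        Mi = np.eye(N)
        Mi[i, :] = (1 - e[i]) * EB[i] * rho * lam
        Mi[i, i] += e[i]
        mats.append(Mi)
    return np.linalg.multi_dot(mats)

def xi(rho):
    # maximal (Perron-Frobenius) eigenvalue of the nonnegative matrix M(rho)
    return max(np.linalg.eigvals(M_of(rho)).real)

u = lam * (e / (1 - e) + np.array([rho_hat[j:].sum() for j in range(N)]))
delta = u @ EB                              # delta = u^T b

h = 1e-6
xi_prime = (xi(1 + h) - xi(1 - h)) / (2 * h)   # central difference at rho = 1
assert abs(xi_prime - 1 / delta) < 1e-4        # xi'(1) = 1/delta
```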
\vspace{0.2cm}
In Theorem~\ref{thm1}, we need all the second-order derivatives
$\frac{\partial^2 h_{i}(\underbar{$z$})}{\partial z_j \partial z_k}$
of the function $h_{i}(\underbar{$z$})$.
In Lemma \ref{lemma:Avalue}, we first find $\frac{\partial^2 h_{i}(\underbar{$z$})}{\partial z_j \partial z_k}$, for all $i,j$ and $k$, and
then use them to find the parameter $A$ as defined in Theorem~\ref{thm1}.
\begin{lemma}
For the second-order derivative matrix $K^{(i)}= \left(k^{(i)}_{j,k}\right)$ where $k^{(i)}_{j,k}= \left. \frac{\partial^2 h_i(\underbar{$z$})}{\partial z_j \partial z_k}\right\vert_{\underbar{$z$} =\underbar{$1$}}$, for all $i,j,k= 1, \ldots, N,$
we have that \[
A:= \frac{1}{2}\sum_{i=1}^{N} \hat{v}_i^{(1)} \left( \hat{\underbar{$w $}} ^T \hat{K}^{(i)} \hat{\underbar{$ w$} }\right) = \frac{1}{2 \delta \lvert \underbar{$b $} \rvert}\frac{b^{(2)}}{b^{(1)}},
\]
where $b^{(j)} = \frac{\sum_{i=1}^{N} \lambda_i \mathbb{E} [B_i^j]}{\sum_{i=1}^{N} \lambda_i},$ for $j=1,2$.
\label{lemma:Avalue}
\end{lemma}
\proof{
We know that
\begin{eqnarray*}
h_{i}(\underbar{$z$})&=&f_i(z_1,\ldots ,z_i,h_{i+1}(\underbar{$z$}),\ldots ,h_N(\underbar{$z$})) \\
&=& (1-{\rm e}^{-\nu_i G_i}) \beta_i(z_1,\ldots ,z_i,h_{i+1}(\underbar{$z$}),\ldots ,h_N(\underbar{$z$})) + {\rm e}^{-\nu_i G_i} z_i\\
&=& (1-{\rm e}^{-\nu_i G_i}) \mathbb{E} \left[ {\rm e}^{-B_i\left(\sum_{c=1 }^{i}(1-z_c) \lambda_c + \sum_{c=i+1}^{N}(1-h_c(\underbar{$z$})) \lambda_c \right)}\right]+ {\rm e}^{-\nu_i G_i} z_i.\end{eqnarray*}
From this it follows that
\begin{eqnarray*}
\frac{\partial h_{i}(\underbar{$z$})}{\partial z_k} &=& (1-{\rm e}^{-\nu_i G_i}) \mathbb{E} \left[B_i\left( \lambda_k 1[k \leq i] + \sum_{c=i+1}^{N} \lambda_c \frac{\partial h_{c}(\underbar{$z$})}{\partial z_k} \right) {\rm e}^{-B_i\left(\sum_{c=1 }^{i}(1-z_c) \lambda_c + \sum_{c=i+1}^{N}(1-h_c(\underbar{$z$})) \lambda_c \right)}\right]+ {\rm e}^{-\nu_i G_i} 1[k = i] ,\\
\text{and}~~~~ \frac{\partial^2 h_{i}(\underbar{$z$})}{\partial z_j \partial z_k} &=& (1-{\rm e}^{-\nu_i G_i}) \mathbb{E} \Bigg[\Bigg(B_i^2 \left( \lambda_k 1[k \leq i] + \sum_{c=i+1}^{N} \lambda_c \frac{\partial h_{c}(\underbar{$z$})}{\partial z_k} \right)\left( \lambda_j 1[j \leq i] + \sum_{c=i+1}^{N} \lambda_c \frac{\partial h_{c}(\underbar{$z$})}{\partial z_j} \right)\\
&&~~~~~~~~~~~~~~~~~~~~+B_i \sum_{c=i+1}^{N} \lambda_c \frac{\partial^2 h_{c}(\underbar{$z$})}{\partial z_j \partial z_k} \Bigg) {\rm e}^{-B_i\left(\sum_{c=1 }^{i}(1-z_c) \lambda_c + \sum_{c=i+1}^{N}(1-h_c(\underbar{$z$})) \lambda_c \right)}\Bigg],
\end{eqnarray*}
where $1[E]= 1$, when the event $E$ is true and otherwise $1[E]= 0$.
Because $\left. \frac{\partial^2 h_i(\underbar{$z$})}{\partial z_j \partial z_k}\right\vert_{\underbar{$z$} =\underbar{$1$}}=k^{(i)}_{j,k} $ and $(1-{\rm e}^{-\nu_i G_i})\mathbb{E}[B_i]\left( \lambda_k 1[k \leq i] + \sum_{c=i+1}^{N} \lambda_c \left.\frac{\partial h_{c}(\underbar{$z$})}{\partial z_k}\right\vert_{\underbar{$z$} =\underbar{$1$}}\right) = m_{i,k} - 1[k = i]{\rm e}^{-\nu_i G_i}$, we have
\begin{eqnarray}
k^{(i)}_{j,k} &=& \frac{\mathbb{E} [B_i^2]}{\mathbb{E}[B_i]^2(1-{\rm e}^{-\nu_i G_i})}(m_{i,j}- 1[j = i]{\rm e}^{-\nu_i G_i}) (m_{i,k}- 1[k = i]{\rm e}^{-\nu_i G_i}) + (1-{\rm e}^{-\nu_i G_i}) \mathbb{E}[B_i] \sum_{c=i+1}^{N} \lambda_c k^{(c)}_{j,k} \nonumber \\
&=&\frac{\mathbb{E} [B_i^2]}{\mathbb{E}[B_i]^2(1-{\rm e}^{-\nu_i G_i})} (m_{i,j} m_{i,k} -(1[j = i] m_{i,k} +1[k = i]m_{i,j}){\rm e}^{-\nu_i G_i} +1[i= j =k ]{\rm e}^{-2\nu_i G_i} )\nonumber\\
&&+ (1-{\rm e}^{-\nu_i G_i}) \mathbb{E}[B_i] \sum_{c=i+1}^{N} \lambda_c k^{(c)}_{j,k}.
\label{eq:kijk}
\end{eqnarray}
Let $\mathbf{1}_i$ be an $N\times N$ matrix, where the element in the $i$-th row and the $i$-th column equals one, and all $N^2-1$ other entries read zero. Then, based on \eqref{eq:kijk}, we can write
\begin{eqnarray}
K^{(i)} &=& \frac{\mathbb{E} [B_i^2]}{\mathbb{E}[B_i]^2(1-{\rm e}^{-\nu_i G_i})} \left[\left(\begin{array}{cc}
m_{i,1}m_{i,1} \cdots m_{i,1}m_{i,N} \\
\vdots ~~~\ddots ~~~\vdots\\
m_{i,N}m_{i,1} \cdots m_{i,N} m_{i,N}
\end{array} \right) -{\rm e}^{-\nu_i G_i} \left(\begin{array}{cc}
0 ~~\cdots~~ m_{i,1} ~~\cdots~~0\\
\vdots ~~~\ddots ~~~\vdots~~~\ddots ~~~\vdots\\
0 ~\cdots ~m_{i,i-1} ~\cdots~ 0\\
m_{i,1}~ \cdots ~2m_{i,i}~\cdots~ m_{i,N}\\
0 ~\cdots ~m_{i,i+1} ~\cdots ~0\\
\vdots ~~~\ddots ~~~\vdots~~~\ddots ~~~\vdots\\
0 ~~\cdots~~ m_{i,N}~~\cdots~~ 0
\end{array} \right) +{\rm e}^{-2\nu_i G_i} \mathbf{1}_i
\right] \nonumber\\
&&+(1-{\rm e}^{-\nu_i G_i}) \mathbb{E}[B_i] \sum_{c=i+1}^{N} \lambda_cK^{(c)} \nonumber\\
&=& \frac{\mathbb{E} [B_i^2]}{\mathbb{E}[B_i]^2(1-{\rm e}^{-\nu_i G_i})} \left[\left(\begin{array} {cc}
m_{i,1} \\ \vdots \\ m_{i,N} \end{array} \right)\left(\begin{array}{cc}
m_{i,1} \cdots m_{i,N} \end{array} \right) - {\rm e}^{-\nu_i G_i}\left( \begin{array}{cc}
0 ~~\cdots~~ m_{i,1} ~~\cdots~~0\\
\vdots ~~~\ddots ~~~\vdots~~~\ddots ~~~\vdots\\
0 ~\cdots ~m_{i,i-1} ~\cdots~ 0\\
m_{i,1}~ \cdots ~2m_{i,i}~\cdots~ m_{i,N}\\
0 ~\cdots ~m_{i,i+1} ~\cdots ~0\\
\vdots ~~~\ddots ~~~\vdots~~~\ddots ~~~\vdots\\
0 ~~\cdots~~ m_{i,N}~~\cdots~~ 0
\end{array} \right) + {\rm e}^{-2\nu_i G_i} \mathbf{1}_i
\right] \nonumber\\
&&+(1-{\rm e}^{-\nu_i G_i}) \mathbb{E}[B_i] \sum_{c=i+1}^{N} \lambda_c K^{(c)}. \nonumber \end{eqnarray}
This leads to\begin{eqnarray}
\underbar{$w$}^T K^{(i)} \underbar{$w$} &=&\frac{\mathbb{E} [B_i^2]}{\mathbb{E}[B_i]^2(1-{\rm e}^{-\nu_i G_i})} \left[\underbar{$w$}^T \left(\begin{array} {cc}
m_{i,1} \\ \vdots \\ m_{i,N} \end{array} \right)\left(\begin{array}{cc}
m_{i,1} \cdots m_{i,N} \end{array} \right) \underbar{$w$} - {\rm e}^{-\nu_i G_i}\underbar{$w$}^T \left( \begin{array}{cc}
0 ~~\cdots~~ m_{i,1} ~~\cdots~~0\\
\vdots ~~~\ddots ~~~\vdots~~~\ddots ~~~\vdots\\
0 ~\cdots ~m_{i,i-1} ~\cdots~ 0\\
m_{i,1}~ \cdots ~2 m_{i,i}~\cdots~ m_{i,N}\\
0 ~\cdots ~m_{i,i+1} ~\cdots ~0\\
\vdots ~~~\ddots ~~~\vdots~~~\ddots ~~~\vdots\\
0 ~~\cdots~~ m_{i,N}~~\cdots~~ 0
\end{array} \right) \underbar{$w$} + {\rm e}^{-2\nu_i G_i} \underbar{$w$}^T \mathbf{1}_i \underbar{$w$} \right] \nonumber\\
&&+(1-{\rm e}^{-\nu_i G_i}) \mathbb{E}[B_i] \sum_{c=i+1}^{N} \lambda_c \underbar{$w$}^T K^{(c)}\underbar{$w$}.
\label{eq:lemma4_0}
\end{eqnarray}
Note that from the definition of $\hat{\underbar{$w$}}$, we have that
\begin{equation}
\hat{\underbar{$w$}}^T \left(\begin{array}{cc}
\hat{m}_{i,1} \\ \vdots \\ \hat{m}_{i,N} \end{array} \right)
= \left(\begin{array}{cc}
\hat{m}_{i,1} \cdots \hat{m}_{i,N} \end{array} \right) \hat{\underbar{$w$}}
= \frac{\mathbb{E}[B_i]}{\lvert \underbar{$b $} \rvert } .
\label{eq:lemma4_1}
\end{equation}
Now we evaluate \begin{eqnarray}
\hat{ \underbar{$w$}}^T \left( \begin{array}{cc}
0 ~~\cdots~~ \hat{m}_{i,1} ~~\cdots~~0\\
\vdots ~~~\ddots ~~~\vdots~~~\ddots ~~~\vdots\\
0 ~\cdots ~ \hat{m}_{i,i-1} ~\cdots~ 0\\
\hat{m}_{i,1}~ \cdots ~2 \hat{m}_{i,i}~\cdots~ \hat{m}_{i,N}\\
0 ~\cdots ~ \hat{m}_{i,i+1} ~\cdots ~0\\
\vdots ~~~\ddots ~~~\vdots~~~\ddots ~~~\vdots\\
0 ~~\cdots~~ \hat{m}_{i,N}~~\cdots~~ 0
\end{array} \right) \hat{\underbar{$w$}} &=& \frac{1}{\lvert \underbar{$b $} \rvert } \left( \begin{array}{cc} \hat{m}_{i,1}\mathbb{E}[B_i] \\ \vdots \\\hat{m}_{i,i-1}\mathbb{E}[B_i]\\ \hat{m}_{i,i}\mathbb{E}[B_i] + \sum_{j=1}^{N} \hat{m}_{i,j} \mathbb{E}[B_j]\\ \hat{m}_{i,i+1}\mathbb{E}[B_i]\\ \vdots \\ \hat{m}_{i,N}\mathbb{E}[B_i] \end{array}\right)^T\hat{\underbar{$w$}} \nonumber \\
&=& \frac{\mathbb{E}[B_i]\sum_{j=1}^{N} \hat{m}_{i,j}\mathbb{E}[B_j] + \mathbb{E}[B_i]\sum_{j=1}^{N} \hat{m}_{i,j}\mathbb{E}[B_j]}{\lvert \underbar{$b $} \rvert ^2} \nonumber \\
&=& \frac{2 \mathbb{E}[B_i]^2} {\lvert \underbar{$b $} \rvert ^2} .
\label{eq:lemma4_2}
\end{eqnarray}
Evaluating \eqref{eq:lemma4_0} for $\rho \uparrow 1$, and substituting \eqref{eq:lemma4_1} and \eqref{eq:lemma4_2} in it, we have
\begin{eqnarray}
\hat{\underbar{$w$}}^T \hat{K}^{(i)} \hat{\underbar{$w$}} &=&\frac{\mathbb{E} [B_i^2]}{\mathbb{E}[B_i]^2(1-{\rm e}^{-\nu_i G_i})} \left[\frac{\mathbb{E}[B_i]^2}{\lvert \underbar{$b $} \rvert ^2}- 2 {\rm e}^{-\nu_i G_i}\frac{\mathbb{E}[B_i]^2}{\lvert \underbar{$b $} \rvert ^2} + {\rm e}^{-2\nu_i G_i}\frac{\mathbb{E}[B_i]^2}{\lvert \underbar{$b $} \rvert ^2} \right] +(1-{\rm e}^{-\nu_i G_i}) \mathbb{E}[B_i] \sum_{c=i+1}^{N} \hat{\lambda}_c \hat{\underbar{$w$}}^T \hat{K}^{(c)} \hat{\underbar{$w$}} \nonumber \\
&=& (1-{\rm e}^{-\nu_i G_i})\frac{\mathbb{E} [B_i^2]}{\lvert \underbar{$b $} \rvert ^2} +(1-{\rm e}^{-\nu_i G_i}) \mathbb{E}[B_i] \sum_{c=i+1}^{N} \hat{\lambda}_c \hat{\underbar{$w$}}^T \hat{K}^{(c)} \hat{\underbar{$w$}} \nonumber \\
&=&(1-{\rm e}^{-\nu_i G_i}) \left(\frac{\mathbb{E} [B_i^2]}{\lvert \underbar{$b $} \rvert ^2} +\mathbb{E}[B_i]\sum_{c=i+1}^{N} \hat{\lambda}_c \hat{\underbar{$w$}}^T \hat{K}^{(c)} \hat{\underbar{$w$}} \right).
\label{lemma4_3}
\end{eqnarray}
Multiplying both sides of \eqref{lemma4_3} with $\hat{v}_i$ and evaluating it for $i=1$ we get
\begin{eqnarray}
\hat{v}_1 \hat{\underbar{$w$}}^T \hat{K}^{(1)} \hat{\underbar{$w$}} &=&\frac{ \lvert \underbar{$ b$} \rvert \hat{\lambda}_1}{\delta} \left(\frac{\mathbb{E} [B_1^2]}{\lvert \underbar{$b $} \rvert ^2} +\mathbb{E}[B_1]\sum_{c=2}^{N} \hat{\lambda}_c \hat{\underbar{$w$}}^T \hat{K}^{(c)} \hat{\underbar{$w$}} \right)\nonumber \\
&=&\frac{ \hat{\lambda}_1 \mathbb{E} [B_1^2]}{\delta \lvert \underbar{$b $} \rvert } + \frac{\lvert \underbar{$ b$} \rvert\hat{\rho}_1 \hat{\lambda}_2 (1-{\rm e}^{-\nu_2 G_2})}{\delta} \left(\frac{\mathbb{E} [B_2^2]}{\lvert \underbar{$b $} \rvert ^2} +\mathbb{E}[B_2]\sum_{c=3}^{N} \hat{\lambda}_c \hat{\underbar{$w$}}^T \hat{K}^{(c)} \hat{\underbar{$w$}} \right) + \lvert \underbar{$ b$} \rvert\frac{\hat{\rho}_1}{\delta}\sum_{c=3}^{N} \hat{\lambda}_c \hat{\underbar{$w$}}^T \hat{K}^{(c)} \hat{\underbar{$w$}},
\label{eq:lemma4_4}
\end{eqnarray}
where for the second equality we again used \eqref{lemma4_3}, but now for $i=2$,
to substitute $\hat{\underbar{$w$}}^T \hat{K}^{(2)} \hat{\underbar{$w$}}$.
Multiplying both sides of \eqref{lemma4_3} with $\hat{v}_i$ and evaluating it for $i=2$ we get
\begin{eqnarray}
\hat{v}_2 \hat{\underbar{$w$}}^T \hat{K}^{(2)} \hat{\underbar{$w$}} &=& \frac{\lvert \underbar{$ b$} \rvert\hat{\lambda}_2}{\delta} \left( {\rm e}^{-\nu_2 G_2}+(1-{\rm e}^{-\nu_2 G_2}) \sum_{j=2}^{N} \hat{\rho}_j \right) \left(\frac{\mathbb{E} [B_2^2]}{\lvert \underbar{$b $} \rvert ^2} +\mathbb{E}[B_2]\sum_{c=3}^{N} \hat{\lambda}_c \hat{\underbar{$w$}}^T \hat{K}^{(c)} \hat{\underbar{$w$}} \right)\nonumber\\
&=& \frac{\lvert \underbar{$ b$} \rvert\hat{\lambda}_2}{\delta}\left( {\rm e}^{-\nu_2 G_2}+(1-{\rm e}^{-\nu_2 G_2}) (1-\hat{\rho}_1) \right) \left(\frac{\mathbb{E} [B_2^2]}{\lvert \underbar{$b $} \rvert ^2} +\mathbb{E}[B_2]\sum_{c=3}^{N} \hat{\lambda}_c \hat{\underbar{$w$}}^T \hat{K}^{(c)} \hat{\underbar{$w$}} \right)\nonumber\\
&=& \frac{\lvert \underbar{$ b$} \rvert\hat{\lambda}_2 - \lvert \underbar{$ b$} \rvert\hat{\rho}_1\hat{\lambda}_2 (1-{\rm e}^{-\nu_2 G_2})}{\delta} \left(\frac{\mathbb{E} [B_2^2]}{\lvert \underbar{$b $} \rvert ^2} +\mathbb{E}[B_2]\sum_{c=3}^{N} \hat{\lambda}_c \hat{\underbar{$w$}}^T \hat{K}^{(c)} \hat{\underbar{$w$}} \right).
\label{eq:lemma4_5}
\end{eqnarray}
Summing \eqref{eq:lemma4_4} and \eqref{eq:lemma4_5} we get
\begin{equation*}
\sum_{j=1}^{2} \hat{v}_j \hat{\underbar{$w$}}^T \hat{K}^{(j)} \hat{\underbar{$w$}} = \sum_{j=1}^{2} \frac{ \hat{\lambda}_j}{\delta}\frac{\mathbb{E} [B_j^2]}{\lvert \underbar{$b $} \rvert } + \frac{\lvert \underbar{$ b$} \rvert}{\delta}\left(\sum_{j=1}^{2} \hat{\rho}_j \right)\sum_{c=3}^{N}\hat{\lambda}_c \hat{\underbar{$w$}}^T \hat{K}^{(c)} \hat{\underbar{$w$}}.
\end{equation*}
By repeating the above procedure, we end up with
\begin{equation*}
\sum_{j=1}^{N} \hat{v}_j \hat{\underbar{$w$}}^T \hat{K}^{(j)} \hat{\underbar{$w$}} = \sum_{j=1}^{N} \frac{ \hat{\lambda}_j}{\delta}\frac{\mathbb{E} [B_j^2]}{\lvert \underbar{$b $} \rvert }= \frac{1}{\delta \lvert \underbar{$b $} \rvert} \frac{b^{(2)}}{b^{(1)}} .
\end{equation*}
Therefore we have \[
A:= \frac{1}{2}\sum_{j=1}^{N} \hat{v}_j \hat{\underbar{$w$}}^T \hat{K}^{(j)} \hat{\underbar{$w$}} = \frac{1}{2 \delta \lvert \underbar{$b $} \rvert} \frac{b^{(2)}}{b^{(1)}} .
\]
\rightline{$\Box$}
}
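The value of $A$ in Lemma \ref{lemma:Avalue} can be confirmed numerically by running the scalar recursion \eqref{lemma4_3} backwards from $i=N$ and forming $A = \frac{1}{2}\sum_i \hat{v}_i \, \hat{\underbar{$w$}}^T \hat{K}^{(i)} \hat{\underbar{$w$}}$. The sketch below uses exponential service times, so that $\mathbb{E}[B_i^2] = 2\,\mathbb{E}[B_i]^2$, and illustrative parameters not taken from the paper:

```python
import numpy as np

# Illustrative critical instance (not from the paper), exponential services.
N = 3
EB = np.array([0.5, 1.0, 1.5])             # E[B_i]
EB2 = 2 * EB**2                            # E[B_i^2] for exponential B_i
lam = np.array([0.4, 0.3, 0.2])
lam = lam / (lam @ EB)                     # criticality: sum lam_i E[B_i] = 1
rho = lam * EB
e = np.exp(-np.array([0.8, 1.0, 0.6]))     # e^{-nu_i G_i}
b_abs = EB.sum()                           # |b|

# Backward recursion for q_i = w^T K^{(i)} w (eq. (lemma4_3)):
# q_i = (1 - e_i) * ( E[B_i^2]/|b|^2 + E[B_i] * sum_{c>i} lam_c q_c )
q = np.zeros(N)
for i in range(N - 1, -1, -1):
    q[i] = (1 - e[i]) * (EB2[i] / b_abs**2 + EB[i] * (lam[i + 1:] @ q[i + 1:]))

u = lam * (e / (1 - e) + np.array([rho[j:].sum() for j in range(N)]))
delta = u @ EB
v = (b_abs / delta) * u                    # left eigenvector v^{(1)} = (|b|/delta) u

A = 0.5 * (v @ q)
b1 = (lam @ EB) / lam.sum()                # b^{(1)}
b2 = (lam @ EB2) / lam.sum()               # b^{(2)}
assert np.isclose(A, b2 / (2 * delta * b_abs * b1))   # A = b^{(2)} / (2 delta |b| b^{(1)})
```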
At this point, we have determined all parameters required to deploy Theorem~\ref{thm1}, except for the constant $\alpha$. This parameter depends on the immigration part of our process and is given by the following lemma.
\begin{lemma}
For $\underbar{$g$} = (g_1 ~\cdots~ g_N)^T$, we have that
\begin{equation}
\alpha:= \frac{1}{A} \hat{\underbar{$ g$} }^T \hat{\underbar{$w $} } = 2 r \delta \frac{b^{(1)} }{b^{(2)}},
\label{normalizedimmigrants}
\end{equation}
where $r= \sum_{i=1}^{N} \left( \mathbb{E}[S_i] +G_i \right)$.
\label{lemma:alpha}
\end{lemma}
\proof {
Multiplying both sides of \eqref{immigrants} with $\mathbb{E} [B_i]$ and summing it over all $i$ gives
\begin{eqnarray*}
\sum _{i=1}^{N} g_i \mathbb{E} [B_i] &=& \sum_{k=1}^{N} \lambda_k \left(\sum_{i=1}^{N} m_{k,i} \mathbb{E} [B_i]\right) \left(\sum_{j=1}^{k-1}\left(G_j+\mathbb{E} [S_j]\right)(1- {\rm e}^{-\nu_k G_k}) + G_k\right) \\
&+& \sum_{i=1}^{N} \rho_i \left(\sum_{j=1}^{i-1}\left(G_j+\mathbb{E} [S_j]\right){\rm e}^{-\nu_i G_i}+ \sum_{j=i}^N \mathbb{E} [S_j] + \sum_{j=i+1}^N G_j\right).
\end{eqnarray*}
Since $\hat{\underbar{$w$}}$ is an eigenvector of $\mathbf{\hat{M}_k}$, we have $\sum_{i=1}^{N} \hat{m}_{k,i} \mathbb{E} [B_i] = \mathbb{E} [B_k]$.
Hence, taking $\rho \uparrow 1$, we get
\begin{eqnarray}
\sum_{i=1}^{N} \hat{g}_i \mathbb{E} [B_i] &=& \sum_{i=1}^{N} \hat{\rho}_i \left(\sum_{j=1}^{i-1}\left(G_j+\mathbb{E} [S_j]\right)(1- {\rm e}^{-\nu_i G_i}) + G_i
+ \sum_{j=1}^{i-1}(G_j+\mathbb{E} [S_j]){\rm e}^{-\nu_i G_i}+ \sum_{j=i}^N \mathbb{E} [S_j] + \sum_{j=i+1}^N G_j\right) \nonumber \\
&=& \sum_{i=1}^{N} \hat{\rho}_i \sum_{j=1}^{N} \left(\mathbb{E} [S_j] + G_j \right) = \sum_{i=1}^{N} (\mathbb{E} [S_i] + G_i).
\label{eq:immi}
\end{eqnarray}
Substituting $\hat{\underbar{$ w$} } = \lvert \underbar{$ b$} \rvert ^{-1} (\mathbb{E} [B_1] ~\cdots ~ \mathbb{E} [B_N])^T$ and $A= \frac{1}{2 \delta \lvert \underbar{$b $} \rvert} \frac{b^{(2)}}{b^{(1)}}$ in \eqref{normalizedimmigrants}
and using \eqref{eq:immi} will give that
$\alpha= 2 r \delta \frac{b^{(1)}}{b^{(2)}}$. \rightline{$\Box$}}
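The simplification leading to \eqref{eq:immi} rests on the observation that, for every $i$, the bracketed combination of glue-period and switch-over contributions collapses to $r = \sum_j (\mathbb{E}[S_j] + G_j)$, so that the weighted sum over $\hat{\rho}_i$ equals $r$. A quick numerical check of this telescoping step, with illustrative values (not from the paper):

```python
import numpy as np

# Illustrative glue-period lengths G_j and mean switch-over times E[S_j].
N = 3
G = np.array([0.8, 0.5, 1.2])
ES = np.array([0.3, 0.7, 0.4])
e = np.exp(-np.array([0.8, 1.0, 0.6]))   # e^{-nu_i G_i}
r = (ES + G).sum()

for i in range(N):
    # bracket for station i (0-based): served / unserved glue contributions,
    # plus remaining switch-over and glue periods of the cycle
    bracket = ((G[:i] + ES[:i]).sum() * (1 - e[i]) + G[i]
               + (G[:i] + ES[:i]).sum() * e[i]
               + ES[i:].sum() + G[i + 1:].sum())
    assert np.isclose(bracket, r)   # each bracket equals r, so sum_i rho_i * bracket = r
```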
\subsection{The heavy traffic theorem}
Similar to the procedure used in \cite{MeiHTBranching}, we will now combine the preliminary work in Section \ref{sec:prelim} with Theorem \ref{thm1} in order to obtain the following heavy traffic theorem for the complete queue length process at cycle starts.
\begin{theorem}\label{thm:startglue}
For the cyclic polling system with retrials and glue periods, the scaled steady-state joint queue length vector at the start of glue periods at station~$1$ satisfies
\begin{equation}
(1-\rho) \left(\begin{array}{cc}
X_1^{(1)} \\
\vdots \\
X_N^{(1)}
\end{array} \right)
\xrightarrow[d]{\rho \uparrow 1}\frac{b^{(2)}}{2b^{(1)}}\frac{1}{\delta} \left(\begin{array}{cc}
\hat{u}^{(1)}_{1} \\
\vdots \\
\hat{u}^{(1)}_{N}
\end{array} \right)
\Gamma(\alpha , 1),
\end{equation}
where $\alpha = 2 r \delta\frac{b^{(1)}}{b^{(2)}}.$
\label{thm:2}
\end{theorem}
\proof{
The joint queue length process at the start of glue periods of station~$1$ is an $N$-dimensional multitype branching process with mean matrix $\mathbf{M}$
and the mean number of type~$i$ immigration customers per generation, $g_i$, given by \eqref{piM} and \eqref{immigrants} respectively. At the end of Section \ref{sub:branchingprocess} we concluded that $0<g_i<\infty$.
Furthermore, $h(\underbar{$z$})= ( h_{1}(\underbar{$z$}), h_{2}(\underbar{$z$}), \ldots, h_{N}(\underbar{$z$}))$ is the offspring function for which the second-order derivatives $k^{(i)}_{j,k}= \left. \frac{\partial^2 h_i(\underbar{$z$})}{\partial z_j \partial z_k}\right\vert_{\underbar{$z$} =\underbar{$1$}}$, for all $i,j,k = 1,\ldots,N$, exist.
Therefore, all the conditions of Theorem \ref{thm1} are satisfied.
\\
Denote by $(X_{1}^{(1)},X_{2}^{(1)},\ldots ,X_{N}^{(1)})$
the vector with as distribution the limiting distribution of
the number of customers of the different types in the system at
the start of a glue period of station $1$.
Now, from Theorem \ref{thm1} it follows that
\begin{equation}
\frac{1}{\pi(\xi(\rho))}
\left(\begin{array}{cc}
X^{(1)}_{1} \\
\vdots \\
X^{(1)}_{N}
\end{array} \right)
\xrightarrow[d]{}A \left(\begin{array}{cc}
\hat{v}_1^{(1)} \\
\vdots \\
\hat{v}_N^{(1)}
\end{array} \right)
\Gamma(\alpha , 1), ~~~\text{when}~~ \rho \uparrow 1 ,
\label{thm2:1}
\end{equation}
where $\pi(\xi(\rho)):=\lim_{n \to \infty}\pi_n(\xi(\rho))$,
and $A$ and $\hat{\underbar{$v$}}^{(1)}$ and
$\alpha = \frac{1}{A} \hat{\underbar{$ g$} }^T \hat{\underbar{$w $} },$
are as defined in
Lemmas \ref{lemma:eigenvector}, \ref{lemma:Avalue} and \ref{lemma:alpha}.
From \eqref{lifeofpi} we have that, for $\rho < 1$,
\begin{equation*}
\pi(\xi(\rho)) = \frac{1}{\xi(\rho) (1-\xi(\rho))}.
\end{equation*}
Using this, together with Lemma \ref{lemma:eigenvaluederi}, gives
\begin{equation}
\lim_{\rho \uparrow 1} (1-\rho) \pi(\xi(\rho)) = \lim_{\rho \uparrow 1} \frac{1 - \rho}{\xi(\rho) (1-\xi(\rho))} = \lim_{\rho \uparrow 1} \frac{-1 }{\xi^{\prime}(\rho) (1-2\xi(\rho))} = \lim_{\rho \uparrow 1} \frac{1}{\xi^{\prime}(\rho)}=\delta.
\label{eq:limeps}
\end{equation}
Therefore, multiplying and dividing the LHS of \eqref{thm2:1} with $1-\rho$, we get
\begin{eqnarray*}
\frac{1- \rho}{(1-\rho)\pi(\xi(\rho))}
\left(\begin{array}{cc}
X^{(1)}_{1} \\
\vdots \\
X^{(1)}_{N}
\end{array} \right)
\xrightarrow[d]{}& A \left(\begin{array}{cc}
\hat{v}_1^{(1)} \\
\vdots \\
\hat{v}_N^{(1)}
\end{array} \right)
\Gamma(\alpha , 1),~~~\text{when}~~ \rho \uparrow 1.
\end{eqnarray*}
Using \eqref{eq:limeps}, this gives
\begin{eqnarray*}
(1- \rho)
\left(\begin{array}{cc}
X^{(1)}_{1} \\
\vdots \\
X^{(1)}_{N}
\end{array} \right)
\xrightarrow[d]{}& \frac{1}{2 \lvert \underbar{$b $} \rvert} \frac{b^{(2)}}{ b^{(1)}} \left(\begin{array}{cc}
\hat{v}_1^{(1)} \\
\vdots \\
\hat{v}_N^{(1)}
\end{array} \right)
\Gamma(\alpha , 1), ~~~\text{when}~~ \rho \uparrow 1,
\end{eqnarray*}
and hence
\begin{eqnarray*}(1- \rho)
\left(\begin{array}{cc}
X^{(1)}_{1} \\
\vdots \\
X^{(1)}_{N}
\end{array} \right)
\xrightarrow[d]{}& \frac{b^{(2)}}{2b^{(1)}}\frac{1}{\delta}
\left(\begin{array}{cc}
\hat{u}^{(1)}_1 \\
\vdots \\
\hat{u}^{(1)}_N
\end{array} \right) \Gamma(\alpha , 1),~~~\text{when}~~ \rho \uparrow 1 .
\end{eqnarray*}
\rightline{$\Box$} }
\subsection{Discussion of results: connection with a binomially gated polling model}
It turns out that the heavy traffic results that we obtained in this section for the model at hand are similar to those of a binomially gated polling model (see e.g.\ \cite{Levy2}). The dynamics of the binomially gated polling model are much like those of a conventional gated polling model, except that after a gate drops at $Q_i$, each customer in front of it is served in the corresponding visit period with probability $p_i$ in an i.i.d.\ way, rather than with probability one as in the gated model.
In particular, the heavy traffic analysis of our model coincides with that of a binomially gated polling model with the same
interarrival time distributions, service time distributions and switch-over time distributions, and probability parameters $p_i=1-e^{-\nu_iG_i}$. To check this, we note that the binomially gated polling model with these probability parameters falls within the framework of the seminal work of \cite{MeiHTBranching} when taking the exhaustiveness parameters $f_i = (1-\rho_i)(1-e^{-\nu_iG_i})$, after which it is easily verified that Theorem 5 of \cite{MeiHTBranching} coincides with Theorem \ref{thm:startglue}. Note, however, that although we also exploit a branching framework in this paper, the model considered in this paper does not fall directly in the class of polling models considered in \cite{MeiHTBranching}, due to the intricate immigration dynamics it exhibits.
The intuition behind this remarkable connection is as follows. First, we have that a binomially gated polling model does not have the feature of glue periods. However, in a heavy-traffic regime, the server in our model will reside in a visit period for 100\% of the time, so that glue periods hardly occur in this regime either. Furthermore, in a binomially gated polling model, each customer present at the start of a visit period at $Q_i$ will be served within that visit period with probability $p_i = 1-e^{-\nu_iG_i}$ in an i.i.d.\ fashion. Note that something similar happens with the model at hand. There, the start of a visit period coincides with the conclusion of a glue period. During this glue period, all customers present in the orbit of the queue will, independently from one another, queue up for the next visit period with probability $1-e^{-\nu_iG_i}$. These two facts explain the analogy.
Do note that this analogy, remarkable though it is, does not help us in the further analysis towards the asymptotics of the customer population at an arbitrary point in time. While Theorem \ref{thm:startglue} is now aligned with Theorem 5 of \cite{MeiHTBranching}, we cannot use the subsequent analysis steps in that paper to obtain results concerning the customer population in heavy traffic at an arbitrary point in time. This is largely due to the fact that the strategy of \cite{MeiHTBranching} exploits a relation between the queue length of $Q_1$ at a cycle start and the virtual waiting time of that queue at an arbitrary point in time. Since the type-$i$ customers in our model are not served in the order of arrival, as is usually assumed, such a relation is hard to derive and is essentially unknown. As an alternative, we will extend the current heavy traffic asymptotics at cycle starts to certain other embedded epochs in Section \ref{sec:embeddedTimePoints}, and eventually to arbitrary points in time in Section \ref{sec:arbitraryTimePoints}.
\section{Heavy traffic analysis: number of customers at other embedded time points}\label{sec:embeddedTimePoints}
A cycle in the polling system with retrials and glue periods passes
through three different phases: glue periods, visit periods and
switch-over periods. In the previous section, in Theorem \ref{thm:2},
we studied the behaviour of the scaled steady-state joint queue length
vector at the start of glue periods at station~$1$.
We will now extend this result to the scaled steady-state joint queue
length vector at the start of a visit period and the start of a
switch-over period in Theorems \ref{startofvisit} and \ref{startofswitch}.
\begin{theorem}
For the cyclic polling system with retrials and glue periods,
the scaled steady-state joint queue length vector
at the start of visit periods at station $1$ satisfies
\begin{equation}
(1-\rho) \left(\begin{array}{cc}
Y_1^{(1q)}\\
Y_1^{(1o)} \\
Y_2^{(1)}\\
\vdots \\
Y_N^{(1)}
\end{array} \right)
\xrightarrow[d]{}\frac{b^{(2)}}{2b^{(1)}}\frac{1}{\delta} \left(\begin{array}{cc}
(1-e^{-\nu_1 G_1}) \hat{u}^{(1)}_{1} \\
e^{-\nu_1 G_1} \hat{u}^{(1)}_{1} \\
\hat{u}^{(1)}_{2} \\
\vdots \\
\hat{u}^{(1)}_{N}
\end{array} \right)
\Gamma(\alpha , 1),~~~\text{when}~~ \rho \uparrow 1.
\label{eq:startvisit}
\end{equation}
\label{startofvisit}
\end{theorem}
\proof{
The distribution of the number of new customers of type~$j$ entering the system during a glue period of station~$i$ is stochastically smaller than that of the number of events $G_j^{(i)}$ in a Poisson process with rate $\hat{\lambda}_j$ during an interval of length $G_i$. This is due to the fact that the arrival rate $\lambda_j = \rho\hat{\lambda}_j$ does not exceed $\hat{\lambda}_j$.
Since $G_j^{(i)}$ is finite with probability 1, we have that
$(1-\rho)G_j^{(i)} \rightarrow 0$ with probability 1, as $\rho \uparrow 1$.
Therefore the limiting scaled joint queue length distribution, for all customers other than those of type~$i$, is the same at the start of a glue period
and at the start of a visit period of station~$i$.
Furthermore, the $X_i^{(i)}$ customers of type~$i$, present in the system at the start of a glue period of station~$i$, join the queue, independently of each other, with probability $1-e^{-\nu_i G_i}$ during the glue period.
Let $\{U_k, k \ge 1\}$ be a sequence of i.i.d. random variables, where $U_k$ indicates whether the $k$-th customer joins the queue or stays in orbit, for $k= 1, \ldots,X_i^{(i)}$. More specifically, $U_k =1$ with probability $1-e^{-\nu_i G_i}$ if the customer joins the queue, and $U_k=0$ with probability $e^{-\nu_i G_i}$ if the customer stays in orbit. Then the number of customers of type~$i$ in the queue ($Y_i^{(iq)}$) and in the orbit
($Y_i^{(io)}$) at the start of a visit period at station~$i$ are given by
\begin{eqnarray*}
Y_i^{(iq)} = \sum_{k=1}^{ X_i^{(i)}} U_k ~~~~~~&\text{and}&~~~~~~~ Y_i^{(io)} = X_i^{(i)} -\sum_{k=1}^{ X_i^{(i)}} U_k.
\end{eqnarray*}
Since $X_i^{(i)} \to \infty$ with probability $1$, as $\rho \uparrow 1$, we have by virtue of the weak law of large numbers that
\begin{equation}
\frac{Y_i^{(iq)}}{X_i^{(i)}} = \frac{\sum_{k=1}^{ X_i^{(i)}} U_k}{ X_i^{(i)}}\xrightarrow[\mathbb{P}]{} 1- e^{-\nu_i G_i},~~~\text{when}~~ \rho \uparrow 1,
\label{eq:convergance1a}
\end{equation}
where $\xrightarrow[\mathbb{P}]{}$ means convergence in probability.
Similarly we have
\begin{equation}
\frac{Y_i^{(io)}}{X_i^{(i)}} = \frac{X_i^{(i)} -\sum_{k=1}^{ X_i^{(i)}} U_k}{ X_i^{(i)}}\xrightarrow[\mathbb{P}]{} e^{-\nu_i G_i},~~~\text{when}~~ \rho \uparrow 1.
\label{eq:convergance1b}
\end{equation}
Therefore, using Slutsky's convergence theorem \cite{Grimmett}, along with \eqref{thm2:1}, \eqref{eq:convergance1a}, \eqref{eq:convergance1b} and the arguments above, we get
\begin{equation*}
(1-\rho) \left(\begin{array}{cc}
Y_1^{(1q)}\\
Y_1^{(1o)} \\
Y_2^{(1)}\\
\vdots \\
Y_N^{(1)}
\end{array} \right)
\xrightarrow[d]{\rho \uparrow 1}\frac{b^{(2)}}{2b^{(1)}}\frac{1}{\delta} \left(\begin{array}{cc}
(1-e^{-\nu_1 G_1}) \hat{u}^{(1)}_{1} \\
e^{-\nu_1 G_1} \hat{u}^{(1)}_{1} \\
\hat{u}^{(1)}_{2} \\
\vdots \\
\hat{u}^{(1)}_{N}
\end{array} \right)
\Gamma(\alpha , 1), ~~~\text{when}~~ \rho \uparrow 1.
\end{equation*}
\rightline{$\Box$}}
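The thinning argument in the proof above admits a quick numerical check. The sketch below (illustrative only, not part of the proof; the parameter values are arbitrary) simulates the Bernoulli splitting of $X_i^{(i)}$ orbiting customers during a glue period and confirms the laws of large numbers \eqref{eq:convergance1a} and \eqref{eq:convergance1b}.

```python
import math
import random

def split_glue_period(x, nu, g, rng):
    """Each of x orbiting customers independently joins the queue
    during a glue period of length g with probability 1 - exp(-nu*g)."""
    p_join = 1.0 - math.exp(-nu * g)
    in_queue = sum(1 for _ in range(x) if rng.random() < p_join)
    return in_queue, x - in_queue  # (Y_i^{(iq)}, Y_i^{(io)})

rng = random.Random(1)
nu, g = 1.0, 0.7        # illustrative retrial rate and glue-period length
x = 200_000             # X_i^{(i)} is large in heavy traffic
q, o = split_glue_period(x, nu, g, rng)
# The ratios q/x and o/x concentrate around 1 - e^{-nu*g} and e^{-nu*g}.
print(q / x, o / x, 1.0 - math.exp(-nu * g))
```

By the weak law of large numbers, the printed ratio $q/x$ should be within roughly $0.01$ of $1-e^{-\nu G}\approx 0.503$ for this sample size.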
We end this section by considering the scaled steady-state joint queue length vector at the start of a switch-over period from station~$1$ to station~$2$.
\begin{theorem}
For the cyclic polling system with retrials and glue periods,
the scaled steady-state joint queue length vector at the start of a
switch-over period from station~$1$ to station~$2$ satisfies
\begin{equation}
(1-\rho) \left(\begin{array}{cc}
Z_1^{(1)} \\
Z_2^{(1)} \\
\vdots \\
Z_N^{(1)}
\end{array} \right)
\xrightarrow[d]{}\frac{b^{(2)}}{2b^{(1)}}\frac{1}{\delta} \left(\begin{array}{cc}
e^{-\nu_1 G_1} \hat{u}^{(1)}_{1} &+ (1-e^{-\nu_1 G_1}) \hat{u}^{(1)}_{1} \hat{\lambda}_1 \mathbb{E}[B_1] \\
\hat{u}^{(1)}_{2} &+ (1-e^{-\nu_1 G_1}) \hat{u}^{(1)}_{1} \hat{\lambda}_2 \mathbb{E}[B_1]\\
\vdots \\
\hat{u}^{(1)}_{N} &+ (1-e^{-\nu_1 G_1}) \hat{u}^{(1)}_{1} \hat{\lambda}_N \mathbb{E}[B_1]
\end{array} \right)
\Gamma(\alpha , 1), ~~~\text{when}~~ \rho \uparrow 1.
\label{eq:switch}
\end{equation}
\label{startofswitch}
\end{theorem}
\proof{
The number of customers in the orbit of station~$j$ at the start of a switch-over period from station~$i$ to station~$i+1$ equals the number of customers in the orbit
at the start of the visit period of station~$i$, plus the number of Poisson arrivals with rate $\lambda_j$ during the services of the customers in the queue of station~$i$; we denote this latter number by $J_j^{(i)}$. In other words, we have that
\begin{equation}
Z_j^{(i)} = \begin{cases}
Y_j^{(i)} + J_j^{(i)} , & j \ne i, \\
Y_i^{(io)} + J_i^{(i)} , & j=i.
\end{cases}
\label{startswitch1}
\end{equation}
Note that $J_j^{(i)}$ is the sum of Poisson arrivals with rate $\lambda_j$ during $Y_i^{(iq)}$ independent service times with distribution $B_i$. Let $D_{i,j,k}$ be the number of
Poisson arrivals with rate $\lambda_j$ during the $k$-th service in the visit period of station~$i$. Thus
\begin{equation*}
J_j^{(i)} = \sum_{k=1}^{Y_i^{(iq)}} D_{i,j,k}.
\end{equation*}
Since $Y_i^{(iq)}\rightarrow \infty$ as $\rho \uparrow 1$, and $\mathbb{E}[B_i]$ is finite, we have by virtue of the weak law of large numbers that
\begin{equation}
\frac{J_j^{(i)}}{ Y_i^{(iq)}} \xrightarrow[\mathbb{P}]{} \hat{\lambda}_j \mathbb{E}[B_i], ~~~\text{when}~~ \rho \uparrow 1.
\label{startswitch2}
\end{equation}
Therefore, using Slutsky's convergence theorem along with \eqref{eq:startvisit}, \eqref{startswitch1} and \eqref{startswitch2} we get
\begin{equation*}
(1-\rho) \left(\begin{array}{cc}
Z_1^{(1)} \\
Z_2^{(1)} \\
\vdots \\
Z_N^{(1)}
\end{array} \right)
\xrightarrow[d]{}\frac{b^{(2)}}{2b^{(1)}}\frac{1}{\delta} \left(\begin{array}{cc}
e^{-\nu_1 G_1} \hat{u}^{(1)}_{1} &+ (1-e^{-\nu_1 G_1}) \hat{u}^{(1)}_{1} \hat{\lambda}_1 \mathbb{E}[B_1] \\
\hat{u}^{(1)}_{2} &+ (1-e^{-\nu_1 G_1}) \hat{u}^{(1)}_{1} \hat{\lambda}_2 \mathbb{E}[B_1]\\
\vdots \\
\hat{u}^{(1)}_{N} &+ (1-e^{-\nu_1 G_1}) \hat{u}^{(1)}_{1} \hat{\lambda}_N \mathbb{E}[B_1]
\end{array} \right)
\Gamma(\alpha , 1), ~~~\text{when}~~ \rho \uparrow 1.
\end{equation*}
\rightline{$\Box$}}
%
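The law-of-large-numbers step \eqref{startswitch2} can likewise be illustrated numerically. The sketch below (illustrative parameters, exponential service times as a concrete choice) simulates Poisson arrivals of rate $\hat{\lambda}_j$ during $Y_i^{(iq)}$ i.i.d. services and checks that $J_j^{(i)}/Y_i^{(iq)}$ concentrates around $\hat{\lambda}_j\,\mathbb{E}[B_i]$.

```python
import random

def arrivals_during_visit(n_services, lam, mean_b, rng):
    """Total number of Poisson(lam) arrivals during n_services
    i.i.d. exponential service times with mean mean_b."""
    total = 0
    for _ in range(n_services):
        b = rng.expovariate(1.0 / mean_b)   # one service time B_i
        # Count Poisson(lam) arrivals in an interval of length b.
        t, k = 0.0, 0
        while True:
            t += rng.expovariate(lam)
            if t > b:
                break
            k += 1
        total += k
    return total

rng = random.Random(2)
n, lam, mean_b = 100_000, 0.3, 1.0   # Y_i^{(iq)}, \hat{\lambda}_j, E[B_i]
j = arrivals_during_visit(n, lam, mean_b, rng)
print(j / n)   # concentrates around lam * mean_b
```

The printed ratio should be close to $\hat{\lambda}_j\,\mathbb{E}[B_i] = 0.3$ for this sample size.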
\begin{remark}
Alternatively, Theorems \ref{startofvisit} and \ref{startofswitch} can be obtained by exploiting known relations between the joint PGFs of the vectors $(X_1^{(1)},\ldots, X_N^{(1)})$, $(Y_1^{(1q)},Y_1^{(1o)}, Y_2^{(1)},\ldots, Y_N^{(1)})$ and $(Z_1^{(1)},\ldots, Z_N^{(1)})$ given in Equations (3.2) and (3.3) of \cite{Abidini16}. After replacing each parameter $z_j$ in these functions by $z_j^{(1-\rho)}$ and taking the limit of $\rho$ going to one from below, these expressions give the relations between the joint PGFs of the heavy traffic distributions. Combining these results with Theorem \ref{thm:2} and subsequently invoking L\'evy's continuity theorem (see e.g.\ Section 18.1 of \cite{Williams}) then readily implies the theorems.
\end{remark}
\begin{remark}
Throughout this section, we have focused on the joint queue length process at the start of a glue,
visit or switch-over period at $Q_1$. However, similar results for the starts of these periods at any $Q_i$
can be obtained by either simply reordering indices, or by exploiting the relations obtained in \cite{Abidini16} between $(X_1^{(i)},\ldots, X_N^{(i)})$, $(Y_1^{(i)},\ldots, Y_i^{(iq)}, Y_i^{(io)}, \ldots, Y_N^{(i)})$, $(Z_1^{(i)},\ldots, Z_N^{(i)})$ and $(X_1^{(i+1)},\ldots, X_N^{(i+1)})$.
\end{remark}
%
\section{Heavy traffic analysis: number of customers at arbitrary time points}\label{sec:arbitraryTimePoints}
In this section we look at the limiting scaled joint queue length distribution of the number of customers at the different stations at an arbitrary time point. At such a point in time, the system
can be in the glue period, the visit period or the switch-over period of some station~$i$,
with probability $\frac{(1-\rho)G_i}{\sum_{i=1}^{N} (G_i+ \mathbb{E}[S_i])}$, $\rho_i$ and
$\frac{(1-\rho)\mathbb{E}[S_i]}{\sum_{i=1}^{N} (G_i+ \mathbb{E}[S_i])}$ respectively. As $\rho \uparrow 1$, the probabilities $\frac{(1-\rho)G_i}{\sum_{i=1}^{N} (G_i+ \mathbb{E}[S_i])}$ and
$\frac{(1-\rho)\mathbb{E}[S_i]}{\sum_{i=1}^{N} (G_i+ \mathbb{E}[S_i])}$, both converge to $0$. Therefore we only need to study the scaled steady-state joint queue length vector at an arbitrary time
in each of the $N$ visit periods.
\begin{theorem}
For the cyclic polling system with retrials and glue periods, the scaled steady-state joint queue length vector at an arbitrary time point in a visit period of station~$1$ satisfies
\begin{equation}
(1-\rho) \left(\begin{array}{cc}
V_1^{(1q)}\\
V_1^{(1o)} \\
V_2^{(1)}\\
\vdots \\
V_N^{(1)}
\end{array} \right)
\xrightarrow[d]{} \frac{b^{(2)}}{2b^{(1)}}\frac{1}{\delta} \left[ \left(\begin{array}{cc}
(1-e^{-\nu_1 G_1}) \hat{u}^{(1)}_{1} \\
e^{-\nu_1 G_1} \hat{u}^{(1)}_{1} \\
\hat{u}^{(1)}_{2} \\
\vdots \\
\hat{u}^{(1)}_{N}
\end{array} \right) + (1-e^{-\nu_1 G_1}) \hat{u}^{(1)}_{1} U \left(\begin{array}{cc}
-1 \\
\hat{\lambda}_1 E[B_1] \\
\hat{\lambda}_2 E[B_1] \\
\vdots \\
\hat{\lambda}_N E[B_1]
\end{array} \right)
\right]
\Gamma(\alpha +1 , 1),~~~\text{when}~~ \rho \uparrow 1 .
\label{eq:randomvisit}
\end{equation}
\label{thm:randomvisit}
\end{theorem}
\proof{
We will use Equation (3.19) of \cite{Abidini16} to prove this. This equation states that the joint generating function, $R^{(i)}_{vi}(\underbar{$z$}_q,\underbar{$z$}_o)$, of the numbers of
customers in the queue and in the orbits at an arbitrary time point in a
visit period of $Q_i$ is given by
\begin{eqnarray*}
R^{(i)}_{vi}(\underbar{$z$}_q,\underbar{$z$}_o) &=& \frac{z_{iq} \left(\mathbb{E} [z_{iq}^{Y_i^{(iq)}}\left(\prod_{j=1,j\ne i}^{N}z_{jo}^{Y_j^{(i)}} \right)z_{io}^{Y_i^{(io)}}] - \mathbb{E} [\tilde{B_i}(\sum_{j=1}^{N} \lambda_j(1-z_{jo}))^{Y_i^{(iq)}}\left(\prod_{j=1,j\ne i}^{N}z_{jo}^{Y_j^{(i)}} \right)z_{io}^{Y_i^{(io)}}]\right)}{\mathbb{E} [Y_i^{(iq)}] \left(z_{iq} - \tilde{B_i}(\sum_{j=1}^{N} \lambda_j(1-z_{jo}))\right)} \nonumber \\
&& \times \frac{1-\tilde{B_i}(\sum_{j=1}^{N}\lambda_j(1-z_{jo}))} {\left(\sum_{j=1}^{N}\lambda_j(1-z_{jo})\right) \mathbb{E} [B_i]}.
\end{eqnarray*}
Evaluating the above generating function in the points $\underbar{$\tilde{z}$}_q = (z_{1q}^{(1-\rho)}, \ldots, z_{Nq}^{(1-\rho)})$ and $\underbar{$\tilde{z}$}_o = (z_{1o}^{(1-\rho)}, \ldots, z_{No}^{(1-\rho)})$, we get
\begin{eqnarray}
&&R^{(i)}_{vi}(\underbar{$\tilde{z}$}_q,\underbar{$\tilde{z}$}_o) = \nonumber \\
&& \frac{z_{iq}^{(1-\rho)} \left(\mathbb{E} [z_{iq}^{(1-\rho)Y_i^{(iq)}}\left(\prod_{j=1,j\ne i}^{N}z_{jo}^{(1-\rho)Y_j^{(i)}} \right)z_{io}^{(1-\rho)Y_i^{(io)}}] - \mathbb{E} [\tilde{B_i}(\sum_{j=1}^{N} \lambda_j(1-z_{jo}^{(1-\rho)}))^{Y_i^{(iq)}}\left(\prod_{j=1,j\ne i}^{N}z_{jo}^{(1-\rho)Y_j^{(i)}} \right)z_{io}^{(1-\rho)Y_i^{(io)}}]\right)}{\mathbb{E} [Y_i^{(iq)}] \left(z_{iq}^{(1-\rho)} - \tilde{B_i}(\sum_{j=1}^{N} \lambda_j(1-z_{jo}^{(1-\rho)}))\right)} \nonumber \\
&& \times \frac{1-\tilde{B_i}(\sum_{j=1}^{N}\lambda_j(1-z_{jo}^{(1-\rho)}))} {\left(\sum_{j=1}^{N}\lambda_j(1-z_{jo}^{(1-\rho)})\right) \mathbb{E} [B_i]}.
\label{copied_1}
\end{eqnarray}
The right-hand side of this equation consists of two factors. The first factor expresses the generating function of the number of customers in the system at the start of the service of the customer who is currently in service. The second factor is the generating function of the number of customers that arrived during
the past service time of the customer who is currently in service.
As $\rho \uparrow 1$, this second factor satisfies
\begin{eqnarray}
\lim_{\rho \uparrow 1} \frac{1-\tilde{B_i}(\sum_{j=1}^{N}\lambda_j(1-z_{jo}^{(1-\rho)}))} {\left(\sum_{j=1}^{N}\lambda_j(1-z_{jo}^{(1-\rho)})\right) \mathbb{E} [B_i]} &=& \lim_{\rho \uparrow 1} \frac{1-\mathbb{E}[e^{-(\sum_{j=1}^{N}\lambda_j(1-z_{jo}^{(1-\rho)}))B_i}]} {\left(\sum_{j=1}^{N}\lambda_j(1-z_{jo}^{(1-\rho)})\right) \mathbb{E} [B_i]} \nonumber\\
&=& \lim_{\rho \uparrow 1} \frac{\mathbb{E}[B_i e^{-(\sum_{j=1}^{N}\lambda_j(1-z_{jo}^{(1-\rho)}))B_i}]} { \mathbb{E} [B_i]} \nonumber\\
&=& \frac{\mathbb{E} [B_i]}{\mathbb{E} [B_i]} = 1,
\label{LSTRes}
\end{eqnarray}
where the second equality follows from l'H\^opital's rule.
Equation \eqref{LSTRes} expresses the fact that the scaled vector of number of customers arriving at the different stations during a past service time tends to $0$, and hence its generating function tends to $1$, as $\rho \uparrow 1$.
Before taking the limit $\rho \uparrow 1$ in \eqref{copied_1} we first look at
$ \lim_{\rho \uparrow 1} \left(\tilde{B_i}\left(\sum_{j=1}^{N} \lambda_j\left( 1-z_{jo}^{(1-\rho)}\right)\right)\right)^{\frac{1}{1-\rho}}$.
As mentioned earlier, when we let $\rho \uparrow 1$ we scale the system such that only the arrival rates increase while the service-time distributions remain the same. So we can write $\lambda_j = \rho \hat{\lambda}_j$, where $\hat{\lambda}_j$ is fixed and independent
of $\rho$, for all $j= 1,\ldots, N.$ Therefore we have
\begin{eqnarray}
\lim_{\rho \uparrow 1} \left(\tilde{B_i}\left(\rho\sum_{j=1}^{N} \hat{\lambda}_j\left( 1-z_{jo}^{(1-\rho)}\right)\right)\right)^{\frac{1}{1-\rho}}
&=& e^{\lim_{\rho \uparrow 1} \frac{\ln \left(\tilde{B_i}\left(\rho\sum_{j=1}^{N} \hat{\lambda}_j \left( 1-z_{jo}^{(1-\rho)}\right)\right)\right)}{1-\rho}} \nonumber\\
&=& e^{\lim_{\rho \uparrow 1} \frac{\left(\rho \sum_{j=1}^{N} \hat{\lambda}_j z_{jo}^{(1-\rho)} \ln z_{jo} + \sum_{j=1}^{N} \hat{\lambda}_j \left(1-z_{jo}^{(1-\rho)}\right)\right)\tilde{B_i}^{\prime}\left(\rho\sum_{j=1}^{N} \hat{\lambda}_j \left( 1-z_{jo}^{(1-\rho)}\right)\right) }{-\tilde{B_i}\left(\rho\sum_{j=1}^{N} \hat{\lambda}_j \left( 1-z_{jo}^{(1-\rho)}\right)\right) }} \nonumber \\
&=& e^{ \mathbb{E}[B_i] \sum_{j=1}^{N} \hat{\lambda}_j \ln z_{jo} } = e^{ \sum_{j=1}^{N} \ln z_{jo}^{\mathbb{E}[B_i] \hat{\lambda}_j }} = \prod_{j=1}^{N}z_{jo}^{ \mathbb{E}[B_i] \hat{\lambda}_j} ,
\label{LSTHT}
\end{eqnarray}
where the second equality follows from l'H\^opital's rule.
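Limit \eqref{LSTHT} can be verified numerically for a concrete service-time distribution. The sketch below (an illustration only, with arbitrary test values) takes exponential service times, for which $\tilde{B}_i(s) = 1/(1 + s\,\mathbb{E}[B_i])$, and evaluates $\big(\tilde{B}_i(\rho\sum_j \hat{\lambda}_j(1-z_{jo}^{(1-\rho)}))\big)^{1/(1-\rho)}$ for $\rho$ close to one, comparing it with $\prod_j z_{jo}^{\mathbb{E}[B_i]\hat{\lambda}_j}$.

```python
import math

def lst_exp(s, mean_b):
    """LST of an exponential service time with mean mean_b."""
    return 1.0 / (1.0 + s * mean_b)

def scaled_lst(rho, lam_hat, z, mean_b):
    """(B~(rho * sum_j lam_hat_j * (1 - z_j^{(1-rho)})))^{1/(1-rho)}."""
    s = rho * sum(l * (1.0 - zj ** (1.0 - rho)) for l, zj in zip(lam_hat, z))
    return lst_exp(s, mean_b) ** (1.0 / (1.0 - rho))

lam_hat = [0.3, 0.7]     # arbitrary test rates
z = [0.6, 0.8]           # arbitrary points in (0, 1)
mean_b = 1.0
# Predicted limit: prod_j z_j^(E[B] * lam_hat_j).
limit = math.prod(zj ** (mean_b * l) for l, zj in zip(lam_hat, z))
print(scaled_lst(0.999, lam_hat, z, mean_b), limit)
```

For $\rho = 0.999$ the two printed values agree to about three decimal places, in line with the limit above.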
Next we evaluate the following limit, which is related to the denominator of the first term in \eqref{copied_1}
\begin{eqnarray}
\lim_{\rho \uparrow 1} \frac{z_{iq}^{(1-\rho)} - \tilde{B_i}(\sum_{j=1}^{N} \lambda_j(1-z_{jo}^{(1-\rho)}))}{1 - \rho} &=& \lim_{\rho \uparrow 1} \frac{z_{iq}^{(1-\rho)}-1}{1 - \rho} + \lim_{\rho \uparrow 1} \frac{1 - \mathbb{E}\left[e^{-\rho \sum_{j=1}^{N} \hat{\lambda}_j \left(1-z_{jo}^{(1-\rho)}\right)B_i}\right]}{1 - \rho} \nonumber \\
&=& \lim_{\rho \uparrow 1}z_{iq}^{(1-\rho)} \ln z_{iq}\nonumber\\
&-& \lim_{\rho \uparrow 1} \mathbb{E}\left[\left( \rho B_i \sum_{j=1}^{N} \hat{\lambda}_j z_{jo}^{(1-\rho)} \ln z_{jo} + B_i\sum_{j=1}^{N} \hat{\lambda}_j \left(1-z_{jo}^{(1-\rho)} \right)\right)e^{-\rho \sum_{j=1}^{N} \hat{\lambda}_j \left(1-z_{jo}^{(1-\rho)}\right)B_i} \right] \nonumber \\
&=& \ln z_{iq}- \sum_{j=1}^{N} \hat{\lambda}_j \mathbb{E} [B_i] \ln z_{jo} = \ln \left( z_{iq} \prod_{j=1}^{N}z_{jo}^{ - \mathbb{E}[B_i] \hat{\lambda}_j} \right),
\label{LSTdeno}
\end{eqnarray}
where the first equality uses the fact that $\lambda_j = \rho \hat{\lambda}_j$ and the second equality follows from l'H\^opital's rule.
We know that $\lim_{\rho \uparrow 1} z_{iq}^{(1-\rho)} =1$. Substituting this, together with \eqref{LSTRes}, \eqref{LSTHT} and \eqref{LSTdeno}, in \eqref{copied_1} and letting $\rho \uparrow 1$, we get
\begin{eqnarray}
\lim_{\rho \uparrow 1}&& R^{(i)}_{vi}(\underbar{$\tilde{z}$}_q,\underbar{$\tilde{z}$}_o) = \nonumber \\
&&\lim_{\rho \uparrow 1} \frac{\mathbb{E} [z_{iq}^{(1-\rho)Y_i^{(iq)}}\left(\prod_{j=1,j\ne i}^{N}z_{jo}^{(1-\rho)Y_j^{(i)}} \right)z_{io}^{(1-\rho)Y_i^{(io)}}] - \mathbb{E} [\prod_{j=1}^{N}z_{jo}^{ \mathbb{E}[B_i] \hat{\lambda}_j (1-\rho)Y_i^{(iq)}}\left(\prod_{j=1,j\ne i}^{N}z_{jo}^{(1-\rho)Y_j^{(i)}} \right)z_{io}^{(1-\rho)Y_i^{(io)}}]}{\mathbb{E} [(1-\rho) Y_i^{(iq)}] \ln \left( z_{iq} \prod_{j=1}^{N}z_{jo}^{ - \mathbb{E}[B_i] \hat{\lambda}_j} \right)} .
\label{eq:metastate}
\end{eqnarray}
Consider the following notation
\begin{eqnarray*}
\kappa &:=& \frac{b^{(2)}}{2b^{(1)}}\frac{1}{\delta} (1-e^{-\nu_1 G_1}) \hat{u}^{(1)}_{1} = \frac{b^{(2)}}{2b^{(1)}}\frac{\hat{\lambda}_1}{\delta} \\
\kappa_1 &:=& \frac{b^{(2)}}{2b^{(1)}}\frac{1}{\delta} e^{-\nu_1 G_1} \hat{u}^{(1)}_{1} \\
\kappa_i &:=& \frac{b^{(2)}}{2b^{(1)}}\frac{1}{\delta} \hat{u}^{(1)}_{i}, ~~~~~~~~~~\forall i = 2, \ldots, N.
\end{eqnarray*}
Using the above notation in \eqref{eq:startvisit} and substituting the result in \eqref{eq:metastate}, we have
\begin{eqnarray*}
\lim_{\rho \uparrow 1} R^{(1)}_{vi}(\underbar{$\tilde{z}$}_q,\underbar{$\tilde{z}$}_o) &=& \frac{\mathbb{E} [z_{1q}^{\kappa \Gamma(\alpha , 1)}\prod_{j=1}^{N}z_{jo}^{\kappa_j \Gamma(\alpha , 1)} ] - \mathbb{E} [\prod_{j=1}^{N}z_{jo}^{ \mathbb{E}[B_1] \hat{\lambda}_j \kappa \Gamma(\alpha , 1)}\prod_{j=1}^{N}z_{jo}^{\kappa_j \Gamma(\alpha , 1)} ]}{\mathbb{E} [\kappa \Gamma(\alpha , 1)] \ln \left( z_{1q} \prod_{j=1}^{N}z_{jo}^{ - \mathbb{E}[B_1] \hat{\lambda}_j} \right)} \nonumber \\
&=& \frac{ \mathbb{E} [\left( z_{1q}^{\kappa} \prod_{j=1}^{N} z_{jo}^{\kappa_j} \right)^{\Gamma(\alpha , 1)} ] - \mathbb{E} [\left( \prod_{j=1}^{N}z_{jo}^{ \mathbb{E}[B_1] \hat{\lambda}_j \kappa + \kappa_j}\right)^{ \Gamma(\alpha , 1)} ] } {\kappa \alpha \ln \left( z_{1q} \prod_{j=1}^{N}z_{jo}^{ - \mathbb{E}[B_1] \hat{\lambda}_j} \right)}.
\end{eqnarray*}
Now we introduce the following notation to change our generating function into an LST,
\begin{eqnarray*}
s &:=& - \ln z_{1q} \\
s_i &:=& - \ln z_{io}, ~~~~\forall i=1,\ldots,N.
\end{eqnarray*}
Then we have that the joint LST of the scaled steady-state joint queue length vector, of the queue of station~$1$ and the orbits at all the stations, during an arbitrary time in the visit period of station~$1$ is
\begin{eqnarray}
\lim_{\rho \uparrow 1} R^{(1)}_{vi}(\underbar{$\tilde{z}$}_q,\underbar{$\tilde{z}$}_o) &=& \frac{ \mathbb{E} [\left( e^{-s \kappa} \prod_{j=1}^{N} e^{-s_j \kappa_j} \right)^{\Gamma(\alpha , 1)} ] - \mathbb{E} [\left( \prod_{j=1}^{N}e^{-s_j \left(\mathbb{E}[B_1] \hat{\lambda}_j \kappa + \kappa_j\right)}\right)^{ \Gamma(\alpha , 1)} ] } { \alpha \kappa \ln \left( e^{-s} \prod_{j=1}^{N}e^{ s_j \mathbb{E}[B_1] \hat{\lambda}_j} \right)} \nonumber \\
&=& \frac{ \mathbb{E} [ e^{- \left(s \kappa + \sum_{j} s_j \kappa_j\right) \Gamma(\alpha , 1)} ] - \mathbb{E} [e^{ - \sum_{j=1}^{N} s_j \left(\mathbb{E}[B_1] \hat{\lambda}_j \kappa + \kappa_j \right) \Gamma(\alpha , 1)} ] } {\alpha \kappa \left(-s+ \mathbb{E}[B_1] \sum_{j} \hat{\lambda}_j s_j \right)} \nonumber \\
&=& \frac{\left( \frac{1}{1 +s \kappa + \sum_{j} s_j \kappa_j } \right)^{\alpha} - \left(\frac{1}{1 + \sum_{j=1}^{N} s_j \left(\mathbb{E}[B_1] \hat{\lambda}_j \kappa + \kappa_j \right)} \right)^{\alpha}}{\alpha \kappa \left(-s+ \mathbb{E}[B_1] \sum_{j} \hat{\lambda}_j s_j \right)} \nonumber \\
&=& \mathbb{E} [ e^{- \left(s \kappa + \sum_{j} s_j \kappa_j + \left(-s \kappa+ \sum_{j} \hat{\lambda}_j \mathbb{E}[B_1] \kappa s_j \right) U \right) \Gamma(\alpha+1 , 1)} ],
\label{eq:jointlst}
\end{eqnarray}
where $U$ is a standard uniform random variable and the last equality follows from the expression
\[
\mathbb{E}[e^{-(a+bU) \Gamma(\alpha+1,1)}] = \frac{\left(\frac{1}{1+a}\right)^{\alpha}-\left(\frac{1}{1+(a+b)}\right)^{\alpha}}{\alpha b}.
\]
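This closed-form expression can be confirmed by simulation. The sketch below (arbitrary test values for $a$, $b$ and $\alpha$) draws $U$ uniform on $(0,1)$ and an independent $\Gamma(\alpha+1,1)$ variable and compares a Monte Carlo estimate of $\mathbb{E}[e^{-(a+bU)\,\Gamma(\alpha+1,1)}]$ with the right-hand side.

```python
import math
import random

def lst_closed_form(a, b, alpha):
    """((1/(1+a))^alpha - (1/(1+a+b))^alpha) / (alpha * b)."""
    return ((1.0 / (1.0 + a)) ** alpha
            - (1.0 / (1.0 + a + b)) ** alpha) / (alpha * b)

def lst_monte_carlo(a, b, alpha, n, rng):
    """Estimate E[exp(-(a + b*U) * Gamma(alpha+1, 1))] by simulation."""
    total = 0.0
    for _ in range(n):
        u = rng.random()                        # U ~ Uniform(0, 1)
        g = rng.gammavariate(alpha + 1.0, 1.0)  # Gamma(alpha+1, 1)
        total += math.exp(-(a + b * u) * g)
    return total / n

rng = random.Random(3)
a, b, alpha = 0.5, 0.8, 1.5     # arbitrary test values
est = lst_monte_carlo(a, b, alpha, 200_000, rng)
print(est, lst_closed_form(a, b, alpha))
```

With $2\cdot 10^5$ samples the two printed values agree to about two decimal places.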
Now we substitute $s = - \ln z_{1q}$ and $s_i = - \ln z_{io}$ back in \eqref{eq:jointlst} to get for the joint generating function
\begin{eqnarray}
\lim_{\rho \uparrow 1} R^{(1)}_{vi}(\underbar{$\tilde{z}$}_q,\underbar{$\tilde{z}$}_o) &=& \mathbb{E} \left[\left(z_{1q}^{\kappa} \prod_{j=1}^{N} z_{jo}^{\kappa_j} \left(z_{1q}^{-1} \prod_{j=1}^{N} z_{jo}^{\hat{\lambda}_j \mathbb{E}[B_1]} \right)^{ \kappa U} \right)^{\Gamma(\alpha+1 , 1)} \right].
\label{eq:alpha1}
\end{eqnarray}
Let $V_{iq}^{(i)}$ and $V_{io}^{(i)}$ be the number of customers in the queue and orbit of station~$i$, and $V_{j}^{(i)}$ be the number of customers of type $j \ne i$, at an arbitrary point in time during a visit period of station~$i$, for all $i=1,\ldots,N$. Then from
\eqref{eq:alpha1} we have,
\begin{equation}
(1-\rho) \left(\begin{array}{cc}
V_1^{(1q)}\\
V_1^{(1o)} \\
V_2^{(1)}\\
\vdots \\
V_N^{(1)}
\end{array} \right)
\xrightarrow[d]{} \left[ \left(\begin{array}{cc}
\kappa\\
\kappa_1 \\
\kappa_2 \\
\vdots \\
\kappa_{N}
\end{array} \right) + \kappa U \left(\begin{array}{cc}
-1 \\
\hat{\lambda}_1 E[B_1] \\
\hat{\lambda}_2 E[B_1] \\
\vdots \\
\hat{\lambda}_N E[B_1]
\end{array} \right)
\right]
\Gamma(\alpha +1 , 1),~~~\text{when}~~ \rho \uparrow 1 .
\end{equation}
Therefore, the scaled steady-state joint queue length vector, in the queue of station~$1$ and the orbits of all stations, at an arbitrary time during a visit period of station~$1$, as $\rho \uparrow 1$, satisfies
\begin{equation*}
(1-\rho) \left(\begin{array}{cc}
V_1^{(1q)}\\
V_1^{(1o)} \\
V_2^{(1)}\\
\vdots \\
V_N^{(1)}
\end{array} \right)
\xrightarrow[d]{} \frac{b^{(2)}}{2b^{(1)}}\frac{1}{\delta} \left[ \left(\begin{array}{cc}
(1-e^{-\nu_1 G_1}) \hat{u}^{(1)}_{1} \\
e^{-\nu_1 G_1} \hat{u}^{(1)}_{1} \\
\hat{u}^{(1)}_{2} \\
\vdots \\
\hat{u}^{(1)}_{N}
\end{array} \right) + U \left(\begin{array}{cc}
-\hat{\lambda}_1 \\
\hat{\lambda}_1 \hat{\rho}_1 \\
\hat{\lambda}_2 \hat{\rho}_1 \\
\vdots \\
\hat{\lambda}_N \hat{\rho}_1
\end{array} \right)
\right]
\Gamma(\alpha +1 , 1),~~~\text{when}~~ \rho \uparrow 1 .
\end{equation*} \rightline{$\Box$}}
An intuitive argument for Theorem \ref{thm:randomvisit} can be given as follows.
Since, in heavy traffic, the scaled number of customers in the queue of station~$1$ at the start of an arbitrary visit period is
gamma distributed, $ \kappa \Gamma(\alpha , 1)$, the scaled length
of an arbitrary visit period is gamma distributed as well, $ \kappa \mathbb{E}[B_1] \Gamma(\alpha , 1)$. Therefore, if we choose an arbitrary time point in
a visit period of station~$1$, the scaled length of the visit period containing that time point is, by length-biased sampling, distributed as $ \kappa \mathbb{E}[B_1] \Gamma(\alpha +1 , 1)$,
where $\kappa= \frac{b^{(2)}}{2 b^{(1)}}\frac{1}{\delta }(1-e^{-\nu_1 G_1}) \hat{u}^{(1)}_{1}$. Since we are looking at this special interval,
the scaled steady-state joint queue length vector at the start of this visit period satisfies
\begin{equation}
(1-\rho) \left(\begin{array}{cc}
\breve{Y}_1^{(1q)}\\
\breve{Y}_1^{(1o)} \\
\breve{Y}_2^{(1)}\\
\vdots \\
\breve{Y}_N^{(1)}
\end{array} \right)
\xrightarrow[d]{}\frac{b^{(2)}}{2b^{(1)}}\frac{1}{\delta} \left(\begin{array}{cc}
(1-e^{-\nu_1 G_1}) \hat{u}^{(1)}_{1} \\
e^{-\nu_1 G_1} \hat{u}^{(1)}_{1} \\
\hat{u}^{(1)}_{2} \\
\vdots \\
\hat{u}^{(1)}_{N}
\end{array} \right)
\Gamma(\alpha +1 , 1),~~~\text{when}~~ \rho \uparrow 1 .
\label{eq:specialstart}
\end{equation}
At the arbitrary time point, $ \kappa U \Gamma(\alpha +1 , 1)$ customers have already been served, which means that there are $ J_j^{(1)} = \sum_{k=1}^{ \kappa U \Gamma(\alpha +1 , 1)} (L_j^{(1)})_k$ new arrivals
of type~$j$ during that part of the visit period, where $(L_j^{(1)})_k$ denotes the number of type-$j$ arrivals during the $k$-th service. Note that, as $\rho \uparrow 1$, $ J_j^{(1)} \rightarrow \infty$; therefore the new arrivals during the past service time of the customer currently in service can be neglected.
Using the same arguments as in the proof of Theorem \ref{startofswitch}, the limiting scaled number of new customers
of type~$j$ at an arbitrary point in time during the visit of station~$1$ satisfies
\[
\frac{ J_j^{(1)}}{U \breve{Y}_1^{(1q)}} \xrightarrow[\mathbb{P}]{} \hat{\lambda}_j \mathbb{E}[B_1], ~~~\text{when}~~ \rho \uparrow 1 .
\]
Therefore the scaled steady-state joint queue length vector at an arbitrary point in time during the visit of station~$1$ as $\rho \uparrow 1$ satisfies
\[(1-\rho) \left(\begin{array}{cc}
V_1^{(1q)}\\
V_1^{(1o)} \\
V_2^{(1)}\\
\vdots \\
V_N^{(1)}
\end{array} \right)
\xrightarrow[d]{}
\frac{b^{(2)}}{2b^{(1)}}\frac{1}{\delta} \left[ \left(\begin{array}{cc}
0 \\
e^{-\nu_1 G_1} \hat{u}^{(1)}_{1} \\
\hat{u}^{(1)}_{2} \\
\vdots \\
\hat{u}^{(1)}_{N}
\end{array} \right) + (1-e^{-\nu_1 G_1}) \hat{u}^{(1)}_{1} \left(\begin{array}{cc}
1- U \\
U \hat{\lambda}_1 E[B_1] \\
U \hat{\lambda}_2 E[B_1] \\
\vdots \\
U \hat{\lambda}_N E[B_1]
\end{array} \right)
\right]
\Gamma(\alpha +1 , 1),~~~\text{when}~~ \rho \uparrow 1,
\]
which is equivalent to \eqref{eq:randomvisit}.
In \eqref{eq:randomvisit} we have given the scaled steady-state joint queue length vector of customers of each type at an arbitrary point in time during the visit period of station~$1$ when $\rho \uparrow 1$. Using Remark \ref{rem:vector}
we can extend this to an arbitrary point in time during the visit period of a given station~$i$. This can be written as
\begin{equation}
(1-\rho) \left(\begin{array}{cc}
V_1^{(i)}\\
\vdots \\
V_{i-1}^{(i)}\\
V_i^{(iq)}\\
V_i^{(io)} \\
V_{i+1}^{(i)}\\
\vdots \\
V_N^{(i)}
\end{array} \right)
\xrightarrow[d]{} \frac{b^{(2)}}{2b^{(1)}}\frac{1}{\delta} \left[ \left(\begin{array}{cc}
\hat{u}^{(i)}_{1} \\
\vdots \\
\hat{u}^{(i)}_{i-1}\\
(1-e^{-\nu_i G_i}) \hat{u}^{(i)}_{i} \\
e^{-\nu_i G_i} \hat{u}^{(i)}_{i} \\
\hat{u}^{(i)}_{i+1} \\
\vdots \\
\hat{u}^{(i)}_{N}
\end{array} \right) + U \left(\begin{array}{cc}
\hat{\lambda}_1 \hat{\rho}_i \\
\vdots \\
\hat{\lambda}_{i-1} \hat{\rho}_i\\
-\hat{\lambda}_i \\
\hat{\lambda}_i \hat{\rho}_i \\
\hat{\lambda}_{i+1} \hat{\rho}_i \\
\vdots \\
\hat{\lambda}_N \hat{\rho}_i
\end{array} \right)
\right]
\Gamma(\alpha +1 , 1),~~~\text{when}~~ \rho \uparrow 1 .
\label{eq:randomvisit_i}
\end{equation}
Since, in heavy traffic, the server resides in a visit period during a fraction $1$ of the time, \eqref{eq:randomvisit_i} leads to the following theorem.
\begin{theorem}\label{thm:final}
In a cyclic polling system with retrials and glue periods, the scaled steady-state joint queue length vector at an arbitrary time point, where $L^{(iq)}$ and $L^{(io)}$ denote the numbers of customers in the queue and in the orbit of station~$i$ respectively, for all $i=1,\ldots,N$,
satisfies
\begin{equation}
(1-\rho) \left(\begin{array}{cc}
L^{(1q)}\\
\vdots \\
L^{(Nq)}\\
L^{(1o)}\\
\vdots\\
L^{(No)}
\end{array} \right)
\xrightarrow[d]{}\frac{b^{(2)}}{2b^{(1)}}\frac{1}{\delta} \underbar{P}~ \Gamma(\alpha +1 , 1) ~~~\text{when}~~ \rho \uparrow 1,
\label{eq:thmrandompoint}
\end{equation}
where $\underbar{P} = \underbar{P}_i$ with probability $\hat{\rho}_i$ and
\[
\underbar{P}_i = \left(\begin{array}{cc}
0\\
\vdots \\
0\\
(1-e^{-\nu_i G_i}) \hat{u}^{(i)}_{i} \\
0\\
\vdots \\
0\\
\hat{u}^{(i)}_{1} \\
\vdots\\
\hat{u}^{(i)}_{i-1}\\
e^{-\nu_i G_i} \hat{u}^{(i)}_{i} \\
\hat{u}^{(i)}_{i+1} \\
\vdots \\
\hat{u}^{(i)}_{N}
\end{array} \right) + U \left(\begin{array}{cc}
0\\
\vdots \\
0\\
-\hat{\lambda}_i\\
0\\
\vdots\\
0\\
\hat{\lambda}_1 \hat{\rho}_i \\
\vdots \\
\hat{\lambda}_{i-1} \hat{\rho}_i\\
\hat{\lambda}_i \hat{\rho}_i \\
\hat{\lambda}_{i+1} \hat{\rho}_i \\
\vdots \\
\hat{\lambda}_N \hat{\rho}_i
\end{array} \right) .
\]
\end{theorem}
\proof{
As mentioned at the beginning of this section, when $\rho \uparrow 1$ the system is in a visit period with probability $1$. Therefore
the limiting scaled joint queue length distribution at an arbitrary point in time is given by
the limiting scaled joint queue length distribution in the visit period of station~$i$ with probability $\hat{\rho}_i.$
Now let $V_j^{(iq)}$ and $V_j^{(io)}$ denote the numbers of customers of type~$j$ in the queue and in the orbit, respectively, at an arbitrary point in time during the visit period of station~$i$. Then using
\eqref{eq:randomvisit_i} we can write
\begin{equation}
(1-\rho) \left(\begin{array}{cc}
V_1^{(iq)}\\
\vdots \\
V_{i-1}^{(iq)}\\
V_i^{(iq)}\\
V_{i+1}^{(iq)}\\
\vdots \\
V_N^{(iq)}\\
V_1^{(io)}\\
\vdots\\
V_{i-1}^{(io)}\\
V_{i}^{(io)}\\
V_{i+1}^{(io)}\\
\vdots \\
V_N^{(io)}
\end{array} \right)
\xrightarrow[d]{} \frac{b^{(2)}}{2b^{(1)}}\frac{1}{\delta} \left(\left(\begin{array}{cc}
0\\
\vdots \\
0\\
(1-e^{-\nu_i G_i}) \hat{u}^{(i)}_{i} \\
0\\
\vdots \\
0\\
\hat{u}^{(i)}_{1} \\
\vdots\\
\hat{u}^{(i)}_{i-1}\\
e^{-\nu_i G_i} \hat{u}^{(i)}_{i} \\
\hat{u}^{(i)}_{i+1} \\
\vdots \\
\hat{u}^{(i)}_{N}
\end{array} \right) + U \left(\begin{array}{cc}
0\\
\vdots \\
0\\
-\hat{\lambda}_i\\
0\\
\vdots\\
0\\
\hat{\lambda}_1 \hat{\rho}_i \\
\vdots \\
\hat{\lambda}_{i-1} \hat{\rho}_i\\
\hat{\lambda}_i \hat{\rho}_i \\
\hat{\lambda}_{i+1} \hat{\rho}_i \\
\vdots \\
\hat{\lambda}_N \hat{\rho}_i
\end{array} \right)\right)
\Gamma(\alpha +1 , 1),~~~\text{when}~~ \rho \uparrow 1 .
\end{equation}
This holds because $V_j^{(io)} = V_j^{(i)}$ and $V_j^{(iq)} = 0$ when $i \ne j.$
Therefore the limiting scaled joint queue length distribution at an arbitrary point in time is, with probability $\hat{\rho}_i$, given by
$ \frac{b^{(2)}}{2b^{(1)}}\frac{1}{\delta} \underbar{P}_i\Gamma(\alpha +1 , 1)$. Hence we have
\begin{equation}
(1-\rho) \left(\begin{array}{cc}
L^{(1q)}\\
\vdots \\
L^{(Nq)}\\
L^{(1o)}\\
\vdots\\
L^{(No)}
\end{array} \right)
\xrightarrow[d]{}\frac{b^{(2)}}{2b^{(1)}}\frac{1}{\delta} \underbar{P}~ \Gamma(\alpha +1 , 1), ~~~\text{when}~~ \rho \uparrow 1,
\end{equation}
where $\underbar{P} = \underbar{P}_i$ with probability $\hat{\rho}_i$.
\rightline{$\Box$}}
\begin{remark}
Note that in heavy traffic the total scaled workload in the system satisfies the so-called
\emph{heavy-traffic averaging principle}. This principle, first found in \cite{CoffmanEtAl1,CoffmanEtAl2}
for a specific class of polling models, implies that the workload in each queue is emptied and refilled
at a rate that is much faster than the rate at which the total workload is changing. As a consequence the total workload
can be considered constant during the course of a cycle (represented by the gamma distribution), while the workloads
in the individual queues fluctuate like a fluid model. It is because of this that the queue length vector
in Theorem \ref{thm:final} also features a state-space collapse (cf.\ \cite{Reiman}): the limiting distribution of
the $2N$-dimensional scaled queue length vector is governed by just three distributions: the discrete distribution governing
\underbar{P}, the uniform distribution and the gamma distribution.
Therefore using Theorems \ref{thm:randomvisit} and \ref{thm:final}, and the fact that $\delta =\sum_i \mathbb{E}[B_i] \hat{u}^{(j)}_{i}$
for all $j = 1, \ldots, N,$ the scaled workload in the system at an arbitrary point in time is given by
\begin{equation*}
(1-\rho)\sum_{i=1}^N \mathbb{E}[B_i] (L^{(iq)}+L^{(io)})\xrightarrow[d]{} \frac{b^{(2)}}{2b^{(1)}}\Gamma(\alpha+1, 1),~~~\text{when}~~ \rho \uparrow 1 .
\end{equation*}
Since the limit in the above equation depends neither on $U$ nor on $\underbar{P}$, the scaled workload is the same at any arbitrary point in time during the cycle.
Hence the system agrees with the \emph{heavy-traffic averaging principle}. Note that in Theorems \ref{thm:startglue}, \ref{startofvisit} and \ref{startofswitch}
we have found the limiting distribution of the scaled number of customers at embedded time points. Combining the heavy-traffic averaging principle with these theorems, the
scaled workload in an arbitrarily chosen cycle satisfies
\begin{equation*}
(1-\rho)\sum_{i =1}^N \mathbb{E}[B_i] X_i^{(1)}\xrightarrow[d]{} \frac{b^{(2)}}{2b^{(1)}}\Gamma(\alpha, 1),~~~\text{when}~~ \rho \uparrow 1 .
\end{equation*}
We observe that the scaled workload in the two cases, at an arbitrary point in time and in an arbitrary cycle, has a $\Gamma(\alpha+1, 1)$ and a $\Gamma(\alpha, 1)$ distribution respectively.
This is because of the bias introduced by selecting an arbitrary point in time: an arbitrarily chosen point in time is more likely to fall in a longer cycle than in a shorter one. This
bias does not exist when we choose a cycle at random.
\end{remark}
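The bias discussed in the remark above is the classical length-biasing effect: if cycle lengths are $\Gamma(\alpha,1)$ distributed, the cycle covering a uniformly chosen time point has density proportional to $x\cdot x^{\alpha-1}e^{-x}$, i.e., it is $\Gamma(\alpha+1,1)$ distributed. The sketch below (illustrative, with an arbitrary choice $\alpha=0.7$) resamples simulated gamma cycle lengths with probability proportional to their length and compares the resulting means with $\alpha$ and $\alpha+1$.

```python
import random

def length_biased_sample(cycles, n, rng):
    """Resample cycle lengths with probability proportional to length:
    the distribution of the cycle covering a random time point."""
    return rng.choices(cycles, weights=cycles, k=n)

rng = random.Random(4)
alpha = 0.7
cycles = [rng.gammavariate(alpha, 1.0) for _ in range(200_000)]
biased = length_biased_sample(cycles, 200_000, rng)
mean_plain = sum(cycles) / len(cycles)     # concentrates around alpha
mean_biased = sum(biased) / len(biased)    # concentrates around alpha + 1
print(mean_plain, mean_biased)
```

The size-biased mean exceeds the plain mean by one, matching the shift from $\Gamma(\alpha,1)$ to $\Gamma(\alpha+1,1)$.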
\section{Approximations}
In the previous section we derived the heavy traffic limit of the scaled steady-state joint queue length vector
at an arbitrary point in time for the cyclic polling system with retrials and glue periods. We will use this result to derive
an approximation for the mean number of customers in the system for arbitrary system loads. This is done by using an interpolation between the heavy traffic result and a light traffic result, similar to what is described in Boon et al. \cite{Boon2011}.
\subsection{Approximate mean number of customers}
Consider the following approximation for the mean number of customers of type~$i$,
\begin{equation}\label{eq:approx}
E[L_i] \approx \frac{c_0 + \rho c_1}{1-\rho}.
\end{equation}
The coefficients $c_0$ and $c_1$ are chosen in agreement with the light traffic and heavy traffic behaviour of $E[L_i]$. Clearly, when $\rho \downarrow 0$, also $E[L_i] \downarrow 0$.
Hence we choose $c_0=0$.
On the other hand, we have $ \lim_{\rho \uparrow 1} E[(1-\rho) L_i] = c_1$. Using \eqref{eq:thmrandompoint}, we get
\begin{eqnarray*}
\lim_{\rho \uparrow 1} E[(1-\rho) L_i] &=& \lim_{\rho \uparrow 1}\mathbb{E}[(1-\rho)( L^{(iq)}+ L^{(io)}) ] \\
&=& \mathbb{E}\left[ \frac{b^{(2)}}{2b^{(1)}}\frac{1}{\delta} \left(\sum_{j=1}^{N} \hat{\rho}_j \left(\hat{u}_i^{(j)} +U \hat{\lambda}_i\hat{\rho}_j\right) - U \hat{\lambda}_i\hat{\rho}_i \right) \Gamma(\alpha +1 , 1) \right]\\
&=& \frac{b^{(2)}}{2b^{(1)}}\frac{1}{\delta} \left( \sum_{j=1}^{N} \hat{\rho}_j \left(\hat{u}_i^{(j)} + \frac{ \hat{\lambda}_i\hat{\rho}_j}{2}\right) - \frac{ \hat{\lambda}_i\hat{\rho}_i}{2} \right) (\alpha +1).
\end{eqnarray*}
Hence we choose $ c_1 = \frac{b^{(2)}}{2b^{(1)}}\frac{(\alpha +1)}{\delta} \left( \sum_{j=1}^{N} \hat{\rho}_j \left(\hat{u}_i^{(j)} + \frac{ \hat{\lambda}_i\hat{\rho}_j}{2}\right) - \frac{ \hat{\lambda}_i\hat{\rho}_i}{2} \right) $, and so the final approximation for $E[L_i]$ becomes, for arbitrary
$\rho \in (0,1)$,
\begin{eqnarray}
E[L_i] \approx \frac{\rho}{(1-\rho)} \frac{b^{(2)}}{2b^{(1)}}\frac{(\alpha +1)}{\delta} \left( \sum_{j=1}^{N} \hat{\rho}_j \left(\hat{u}_i^{(j)} + \frac{ \hat{\lambda}_i\hat{\rho}_j}{2}\right) - \frac{ \hat{\lambda}_i\hat{\rho}_i}{2} \right) .
\label{eq:approxnumberi}
\end{eqnarray}
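The approximation \eqref{eq:approxnumberi} is straightforward to evaluate numerically. The following Python sketch computes its right-hand side for a fixed station $i$; the parameter names (\texttt{rho\_hat}, \texttt{u\_hat}, \texttt{lam\_hat}) and the values used in the usage example are illustrative placeholders, not quantities taken from the model of the next subsection.

```python
# Sketch: evaluates the approximation (eq:approxnumberi) for E[L_i].
# b1, b2 stand for b^(1), b^(2); rho_hat[j] for rho-hat_j; u_hat[j] for
# u-hat_i^(j) at the fixed station i; lam_hat for lambda-hat_i.
# All parameter names and values are illustrative placeholders.

def approx_mean_customers(i, rho, b1, b2, alpha, delta, rho_hat, u_hat, lam_hat):
    """Closed-form approximation of the mean number of type-i customers."""
    inner = sum(rho_hat[j] * (u_hat[j] + lam_hat * rho_hat[j] / 2.0)
                for j in range(len(rho_hat)))
    inner -= lam_hat * rho_hat[i] / 2.0
    return rho / (1.0 - rho) * b2 / (2.0 * b1) * (alpha + 1.0) / delta * inner

# Example with placeholder inputs:
value = approx_mean_customers(0, 0.5, 1.0, 2.0, 1.0, 1.0,
                              [0.5, 0.5], [0.1, 0.2], 0.3)
```

As expected from \eqref{eq:approxnumberi}, the approximation vanishes as $\rho \downarrow 0$ and blows up like $(1-\rho)^{-1}$ as $\rho \uparrow 1$.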
\subsection{Numerical results}
In this section we will compare the above approximation with exact results. The exact results are obtained using the approach in \cite{Abidini16}.
Consider a five-station polling system in which the service times are exponentially distributed with mean $\mathbb{E}[B_i] =1$ for all $i = 1, \ldots, 5$. The arrival processes are Poisson processes with rates $\lambda_1 = \rho\frac{1}{10},
\lambda_2 = \rho\frac{2}{10},~\lambda_3 = \rho\frac{3}{10},~\lambda_4 = \rho\frac{1}{10},~\lambda_5 = \rho\frac{3}{10}.$ The switch-over times from station~$i$ are exponentially distributed with mean
$\mathbb{E}[S_i] = 2,~3,~1,~5,~2$ for stations $i=1, \ldots, 5$. The durations of the deterministic glue periods are $G_i = 3,~1,~2,~1,~2$, and the exponential retrial rates are $\nu_i = 5,~1,~3,~2,~1$, for stations $i=1$ to $5$ respectively.
We plot the following for $\rho \in (0,1)$ and compare the approximation given in \eqref{eq:approxnumberi} with the values obtained using exact analysis.
\begin{multicols}{2}
{\centering
\includegraphics[width=0.5\textwidth]{diffone.pdf}
\captionof{figure}{$\%$ error for the number of customers in each station}
\label{fig:gull1}}
{\centering
\includegraphics[width=0.5\textwidth]{diff_all.pdf}
\captionof{figure}{$\%$ error for total number of customers}
\label{fig:tiger1}}
\end{multicols}
In Figures \ref{fig:gull1} and \ref{fig:tiger1} we respectively plot the percentage error calculated as $ \%~\text{error}= \frac{\text{Approximate Value}-\text{Exact Value} }{\text{Exact Value}} \times 100$, for the mean
number of customers of each type and the total mean number of customers in the system. The error percentage is similar to that predicted in \cite{Boon2011}. The error is non-negligible for lower values of $\rho$, but it decreases quickly as $\rho$ increases. Consequently, for larger values of $\rho$, the approximation is accurate.
Based on this, we conclude that the heavy-traffic results derived in this paper are very useful for deriving closed-form approximations for the queue length, especially as the systems under study (e.g.\ optical systems) typically operate under a heavy workload (i.e., a large value of $\rho$). Nevertheless, to obtain better performance for small values of $\rho$, the approximation presented here can be refined, e.g.\ by computing the theoretical value of $\frac{d}{d\rho}\mathbb{E}[L_i]\big|_{\rho = 0}$ and incorporating that information in \eqref{eq:approx}, as explained in \cite{Boon2011}. Furthermore, the approximations for the mean queue length presented here can be extended to approximations for the complete queue length distributions of polling systems with glue periods and retrials, in the spirit of \cite{DorsmanEtAl}. These extensions, however, are beyond the scope of this paper.
\subsection*{Acknowledgement}
\noindent
The authors wish to thank Marko Boon and Onno Boxma for fruitful discussions.
The research is supported by the IAP program BESTCOM funded by the Belgian government, and by the Gravity program NETWORKS funded
by the Dutch government. Part of the research of the second author was performed while he was affiliated with Leiden University.
\section{Introduction} \label{intro}
A set of empirical data with positive values follows a {\em Pareto distribution} if the log-log plot of the values versus rank is approximately a straight line. Pareto distributions are ubiquitous in the social and natural sciences, appearing in a wide range of fields from geology to economics \citep{Simon:1955,Bak:1996,Newman:2005}. A Pareto distribution satisfies {\em Zipf's law} if the log-log plot has a slope of~$-1$, following \citet{Zipf:1935}, who noticed that the frequency of written words in English follows such a distribution. We shall refer to these distributions as {\em Zipfian}. Zipf's law is considered a form of universality, since Zipfian distributions occur almost as frequently as Pareto distributions. Nevertheless, according to \citet{Tao:2012}, ``mathematicians do not have a fully satisfactory and convincing explanation for how the law comes about and why it is universal.''
We propose a mathematical explanation of Zipf's law based on {\em Atlas models} and {\em first-order models}, systems of continuous semimartingales with parameters that depend only on rank. Atlas and first-order models can be constructed to approximate empirical systems of time-dependent rank-based data that exhibit some form of stability \citep{F:2002, Banner/Fernholz/Karatzas:2005}. Atlas models have stable distributions that are Pareto, while first-order models are more general than Atlas models and can be constructed to have any stable distribution. We show that under two natural conditions, conservation and completeness, the stable distribution of an Atlas model will satisfy Zipf's law. However, many empirical systems of time-dependent rank-based data generate distributions with log-log plots that are not actually straight lines but rather are concave curves with a tangent of slope $-1$ at some point along the curve. We shall refer to these more general distributions as {\em quasi-Zipfian,} and we shall use first-order models to approximate the systems that generate them.
The dichotomy between Zipfian and non-Zipfian Pareto distributions is of interest to us here. We find that Zipfian and quasi-Zipfian distributions are usually generated by systems of time-dependent rank-based data, and it is this class of systems that we can approximate by Atlas models or first-order models. In contrast, data that follow non-Zipfian Pareto distributions are usually generated by other means, often of a cumulative nature. Examples of time-dependent rank-based systems that generate Zipfian or quasi-Zipfian distributions include the market capitalization of companies \citep{Simon/Bonini:1958,F:2002}, the population of cities \citep{Gabaix:1999}, the employees of firms \citep{Axtell:2001}, the income and wealth of households \citep{Atkinson/Piketty/Saez:2011,Piketty:2017}, and the assets of banks \citep{Fernholz/Koch:2017}. From the comprehensive survey of \citet{Newman:2005} we find an assortment of non-Zipfian Pareto distributions: the magnitude of earthquakes, citations of scientific papers, copies of books sold, the diameter of moon craters, the intensity of solar flares, and the intensity of wars, all of which are cumulative systems. Consider, for example, the magnitude of earthquakes: each new earthquake adds a new observation to the data, but once recorded, these observations do not change over time. Such cumulative systems may generate Pareto distributions, but we have no reason to believe that these distributions will be Zipfian.
In the next sections we first review the properties of Atlas models and first-order models, and then characterize Zipfian and quasi-Zipfian systems using these models. We apply our results to the capitalization of U.S.\ companies, with an analysis of the corresponding quasi-Zipfian distribution curve. Finally, we consider a number of examples of other time-dependent systems as well as other approaches that have been used to characterize these systems. Proofs of all propositions are in the appendix, along with an example.
\section{Asymptotically stable systems of continuous semimartingales} \label{semimartingales}
We shall use systems of positive continuous semimartingales $\{X_1,\ldots, X_n\}$, with $n>1$, to approximate systems of time-dependent data. For such a system we define the {\em rank function} to be the random permutation $r_t\in\Sigma_n$ such that $r_t(i)<r_t(j)$ if $X_i(t)>X_j(t)$ or if $X_i(t)=X_j(t)$ and $i<j$. Here $\Sigma_n$ is the symmetric group on $n$ elements. The {\em rank processes} $X_{(1)}\ge\cdots\ge X_{(n)}$ are defined by $X_{(r_t(i))}(t)=X_i(t)$.
We have assumed that $X_i(t)>0$ for $t\in[0,\infty)$ and $i=1,\ldots,n$, a.s., so we can consider the logarithms of these processes. The processes $(\log X_{(k)} -\log X_{(k+1)})$, for $k=1,\ldots,n-1$, are called {\em gap processes}, and we define $\lt{k,k+1}^X$ to be the local time at the origin for $(\log X_{(k)} -\log X_{(k+1)})$, with $\lt{0,1}^X=\lt{n,n+1}^X\equiv 0$ \citep{F:2002}. If the $\log X_i$ spend no local time at triple points, then the rank processes satisfy
\begin{equation}\label{2.1}
d\log X_{(k)}(t)=\sum_{i=1}^n{\mathbbm 1}_{\{r_t(i)=k\}}\,d\log X_i(t)+\frac{1}{2} d\lt{k,k+1}^X(t)-\frac{1}{2} d\lt{k-1,k}^X(t),\quad\text{{\rm a.s.}},
\end{equation}
for $k = 1, \ldots, n$ \citep{F:2002,Banner/Ghomrasni:2008}.
We are interested in systems that show some kind of stability by rank, at least asymptotically. Since we must apply our definition of stability to systems of empirical data as well as to continuous semimartingales, we use asymptotic time averages rather than expectations for our definitions. For the systems of continuous semimartingales we consider, the law of large numbers implies that the asymptotic time averages are equal to the expectations \citep{Banner/Fernholz/Karatzas:2005, IPBKF:2011}.
\begin{defn} \label{D2.2}
\citep{F:2002} A system of positive continuous semimartingales $\{X_1, \ldots, X_n\}$ is {\em asymptotically stable} if
\begin{enumerate}
\item $\displaystyle
\limt{1}\big( \log X_{(1)}(t)-\log X_{(n)}(t)\big)=0,\quad\text{{\rm a.s.}}$ ({\em coherence});
\item $\displaystyle
\limt{1}\lt{k,k+1}^X(t) = \lambda_{k,k+1}>0,\quad\text{{\rm a.s.}}$;
\item $\displaystyle
\limt{1}\bbrac{\log X_{(k)}-\log X_{(k+1)}}_t =
\sigma^2_{k,k+1}>0,\quad\text{{\rm a.s.}}$;
\end{enumerate}
for $k=1,\ldots,n-1$.
\end{defn}\vspace{5pt}
For $k=1,\ldots,n$, let us define the processes
\begin{equation}\label{2.1.1}
X_{[k]} \triangleq X_{(1)}+\cdots+X_{(k)},
\end{equation}
in which case we can express $X_{[k]}$ in terms of the $X_i$ and $\lt{k,k+1}^X$.
\begin{lem} \label{L2.1}
Let $X_1, \ldots, X_n$ be positive continuous semimartingales that satisfy \eqref{2.1}. Then
\begin{equation}\label{2.2}
dX_{[k]}(t)= \sum_{i=1}^n {\mathbbm 1}_{\{r_t(i)\le k\}}dX_i(t)+\frac{1}{2} X_{(k)}(t)d\lt{k,k+1}^X(t),\quad\text{{\rm a.s.}}
\end{equation}
for $k=1,\ldots,n$.
\end{lem}
Lemma \ref{L2.1} describes the dynamic relationship between the combined value $X_{[k]}$ of the $k$ top ranks and the local time process $\lt{k,k+1}^X$. This local time process compensates for turnover into and out of the top $k$ ranks. Over time, some of the higher-ranked processes will decrease and exit from the top ranks, while some of the lower-ranked processes will increase and enter those top ranks. The process of entry and exit into and out of the top $k$ ranks is quantified by the last term in \eqref{2.2}, which measures the replacement of the top ranks of the system by lower ranks.
Lemma~\ref{L2.1} allows us to express the local time $\lt{k,k+1}^X$ in terms of $X_i$, $X_{(k)}$, and $X_{[k]}$, all of which are observable. Hence, the parameters $\lambda_{k,k+1}$ can be expressed as
\begin{equation}\label{3.02}
\lambda_{k,k+1}
=\limT{2} \int_0^T\bigg(\frac{dX_{[k]}(t)}{X_{(k)}(t)}-\sum_{i=1}^n {\mathbbm 1}_{\{r_t(i)\le k\}}\frac{dX_i(t)}{X_{(k)}(t)}\bigg),\quad\text{{\rm a.s.}},
\end{equation}
for $k=1,\ldots,n-1$. In a similar fashion we can write
\begin{equation}\label{3.03}
\sigma^2_{k,k+1}=\limT{1}\int_0^T d\bbrac{\log X_{(k)}-\log X_{(k+1)}}_t,\quad\text{{\rm a.s.}},
\end{equation}
for $k=1,\ldots,n-1$. Equations~\eqref{3.02} and \eqref{3.03} will allow us to define parameters equivalent to $\lambda_{k,k+1}$ and $\sigma^2_{k,k+1}$ for time-dependent systems of empirical data.
\section{Atlas models and first-order models} \label{atlas}
The simplest system we shall consider is an {\em Atlas model} \citep{F:2002}, a system of positive continuous semimartingales $\{X_1,\ldots,X_n\}$ defined by
\begin{equation}\label{2.3}
d\log X_i(t) = -g\,dt+ng{\mathbbm 1}_{\{r_t(i)=n \}}dt+\sigma\,dW_i(t),
\end{equation}
where $g$ and $\sigma$ are positive constants and $(W_1, \ldots, W_n)$ is a Brownian motion. Atlas models are asymptotically stable with parameters
\begin{equation} \label{2.4}
\lambda_{k,k+1} = 2kg,\quad\text{ and }\quad \sigma^2_{k,k+1} =2 \sigma^2,
\end{equation}
for $k = 1, \ldots, n-1$ \citep{Banner/Fernholz/Karatzas:2005}.
The processes $X_i$ in an Atlas model are exchangeable, so each $X_i$ asymptotically spends equal time in each rank and hence has zero asymptotic log-drift. The gap processes $(\log X_{(k)}-\log X_{(k+1)})$ for Atlas models have stable distributions that are independent and exponentially distributed with
\begin{equation} \label{2.41}
\limT{1}\int_0^T\big(\log X_{(k)}(t) - \log X_{(k+1)}(t)\big)\,dt = \frac{\sigma^2_{k,k+1}}{2\lambda_{k,k+1}},\quad\text{{\rm a.s.}},
\end{equation}
for $k=1, \ldots, n-1$ \citep{Harrison/Williams:1987,Banner/Fernholz/Karatzas:2005,FK:2009}.
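The Atlas dynamics \eqref{2.3} and the gap averages \eqref{2.41} can be illustrated with a simple Euler--Maruyama discretization. This is only a numerical sketch: the step size, horizon, and parameter values below are arbitrary choices, and the run starts out of stationarity, so the time averages are approximate. With $\sigma^2 = 2g$ the time-averaged gap at rank $k$ should approach $\sigma^2_{k,k+1}/(2\lambda_{k,k+1}) = 1/k$.

```python
import numpy as np

# Sketch: Euler--Maruyama discretization of the Atlas model (2.3).
# dt, steps, n, g, sigma2 are arbitrary illustrative choices.

def simulate_atlas_gaps(n=10, g=0.1, sigma2=0.2, dt=0.01, steps=50_000, seed=0):
    rng = np.random.default_rng(seed)
    sigma = np.sqrt(sigma2)
    logx = np.zeros(n)
    gap_sum = np.zeros(n - 1)
    for _ in range(steps):
        bottom = np.argmin(logx)               # the process in rank n
        drift = np.full(n, -g * dt)
        drift[bottom] += n * g * dt            # Atlas term for the laggard
        logx += drift + sigma * np.sqrt(dt) * rng.standard_normal(n)
        ranked = np.sort(logx)[::-1]           # log X_(1) >= ... >= log X_(n)
        gap_sum += ranked[:-1] - ranked[1:]
    return gap_sum / steps                     # time-averaged gaps

gaps = simulate_atlas_gaps()
```

With $\sigma^2 = 2g$ as here, the averaged gaps decay roughly like $1/k$, so the slope parameters $s_k = k \cdot \text{gap}_k$ are close to $1$, consistent with Zipf's law.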
The asymptotic slope of the tangent to the log-log plot of the $X_{(k)}$ versus rank will be
\begin{equation}\label{2.42a}
\limT{1}\int_0^T \frac{\log X_{(k)}(t) - \log X_{(k+1)}(t)}{\log(k) - \log(k+1)} \, dt
\end{equation}
at rank $k$, so if we define the {\em slope parameters} $s_k$ by
\begin{equation} \label{2.42}
s_k \triangleq k\limT{1}\int_0^T \big(\log X_{(k)}(t) - \log X_{(k+1)}(t)\big) \, dt,
\end{equation}
for $k = 1, \ldots, n-1$, then
\begin{equation} \label{2.43}
-s_k\left(1 + \frac{1}{2k}\right) < \limT{1}\int_0^T \frac{\log X_{(k)}(t) - \log X_{(k+1)}(t)}{\log(k) - \log(k+1)} \, dt < -s_k,
\end{equation}
for $k = 1, \ldots, n-1$. Accordingly, for large enough $k$ the slope parameter $s_k$ will be approximately equal to minus the slope given in~\eqref{2.42a}. For expositional simplicity, we shall treat the $s_k$ as if they measured the true log-log slopes between adjacent ranks, but it is important to remember that this equivalence is only as accurate as the bounds in inequality \eqref{2.43} allow.
For an Atlas model, it follows from \eqref{2.4} and \eqref{2.41} that
\begin{equation}\label{2.44}
s_k = \frac{\sigma^2}{2g},\quad\text{{\rm a.s.}},
\end{equation}
for $k=1,\ldots,n-1$, so the stable distribution of an Atlas model follows a Pareto distribution, at least within the approximation \eqref{2.43}, and when
\begin{equation}\label{2.45}
\sigma^2= 2g,
\end{equation}
it follows Zipf's law.
A modest generalization of the Atlas model is a {\em first-order model} \citep{F:2002, Banner/Fernholz/Karatzas:2005}, a system of positive continuous semimartingales $\{X_1,\ldots,X_n\}$ with
\begin{equation}\label{2.5}
d\log X_i(t) = g_{r_t(i)}\,dt+G_n{\mathbbm 1}_{\{r_t(i)=n\}}dt+\sigma_{r_t(i)}\,dW_i(t),
\end{equation}
where $\sigma^2_1,\ldots,\sigma^2_n$ are positive constants, $g_1,\ldots,g_n$ are constants satisfying
\begin{equation}\label{2.51}
g_1+\cdots+g_n \le 0 \quad\text{ and }\quad g_1+\cdots+g_k < 0 \text{ for } k < n,
\end{equation}
$G_n=-(g_1 + \cdots + g_n)$, and $(W_1, \ldots, W_n)$ is a Brownian motion. First-order models are asymptotically stable with parameters
\begin{equation} \label{2.6}
\lambda_{k,k+1} = -2\big(g_1+\cdots+g_k\big), \quad\text{{\rm a.s.}},
\end{equation}
and
\begin{equation}\label{2.7}
\sigma^2_{k,k+1} = \sigma_k^2+\sigma^2_{k+1}, \quad\text{{\rm a.s.}},
\end{equation}
for $k = 1, \ldots, n-1$ \citep{Banner/Fernholz/Karatzas:2005}. A first-order model is {\em simple} if there is a positive constant $g$ such that $g_k=-g$, for $k=1,\ldots,n$, and the $\sigma^2_k$ are nondecreasing, with $0<\sigma^2_1\le\cdots\le\sigma^2_n$.
The processes $X_i$ in a first-order model are exchangeable, as they are for Atlas models, so again each $X_i$ asymptotically spends equal time in each rank and hence has zero asymptotic log-drift. Moreover, first-order models have asymptotically exponential gaps, and \eqref{2.41} continues to hold in this more general case \citep{Banner/Fernholz/Karatzas:2005}. The slope parameters for a first-order model are
\begin{equation}\label{2.71}
s_k = \frac{ k\big(\sigma_k^2+\sigma^2_{k+1}\big)}{2\lambda_{k,k+1}}= -\frac{ k\big(\sigma_k^2+\sigma^2_{k+1}\big)}{4\big(g_1+\cdots+g_k\big)},\quad\text{{\rm a.s.}},
\end{equation}
for $k=1,\ldots,n-1$, so the stable distribution of a first-order model is not confined to the class of Pareto distributions.
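The slope parameters \eqref{2.71} depend only on the family parameters $g_k$ and $\sigma^2_k$, so they can be computed directly. A minimal sketch (inputs below are placeholders) follows; in the Atlas case $g_k = -g$ and $\sigma^2_k = \sigma^2 = 2g$, it returns $s_k = 1$ for all $k$, i.e.\ Zipf's law.

```python
# Sketch: slope parameters s_k of a first-order model, eq. (2.71).
# g and sigma2 are the rank-based growth rates g_1..g_n and variances
# sigma^2_1..sigma^2_n (0-indexed lists; values are placeholders).

def slope_params(g, sigma2):
    s = []
    cum = 0.0
    for k in range(1, len(g)):
        cum += g[k - 1]                  # g_1 + ... + g_k  (must be < 0)
        s.append(-k * (sigma2[k - 1] + sigma2[k]) / (4.0 * cum))
    return s
```

For example, `slope_params([-0.1] * 5, [0.2] * 5)` gives $s_k = 1$ for $k = 1, \ldots, 4$, while nonconstant inputs produce non-Pareto stable distributions.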
A further generalization to {\em hybrid Atlas models,} systems of processes with growth rates and variance rates that depend both on rank and on name (denoted by the index $i$), was introduced by \citet{IPBKF:2011}, who showed that these more general systems are also asymptotically stable. In a hybrid Atlas model the processes are not necessarily exchangeable, so processes occupying a given rank need not have the same growth rates and variance rates, and the asymptotic distribution of the gap processes may be mixtures of exponential distributions rather than pure exponentials \citep{IPBKF:2011}. Nevertheless, although we can expect \eqref{2.41} to hold precisely only for systems in which the growth rates and variance rates are determined by rank alone, in many cases this relation can still provide a reasonably accurate characterization of the invariant distribution of the system.
It is convenient to consider families of Atlas models and first-order models that share the same parameters, and for this purpose we define a {\em first-order family} to be a sequence of constants $\{g_k,\sigma^2_k\}_{ k\in{\mathbb N}}$, with
\begin{equation}\label{2.9}\begin{split}
g_1+\cdots+g_k&<0,\\
\sigma^2_k&>0,
\end{split}\end{equation}
for $k\in{\mathbb N}$. A first-order family generates a class of first-order models $\{X_1,\ldots,X_n\}$, for $n\in{\mathbb N}$, each defined as in \eqref{2.5} with the common parameters $g_k$ and $\sigma_k$, the positive square root of $\sigma^2_k$, and $G_n=-(g_1+\cdots+g_n)$, for $n\in{\mathbb N}$. A first-order family is {\em simple} if all the first-order models generated by it are simple. An {\em Atlas family} is a first-order family with $g_k=-g<0$ and $\sigma^2_k=\sigma^2>0$, for $k\in{\mathbb N}$.
For first-order families, the parameters $\sigma^2_{k,k+1}$, $\lambda_{k,k+1}$ and $s_k$ are defined uniquely for $k\in{\mathbb N}$ by \eqref{2.4}, \eqref{2.44}, \eqref{2.6}, \eqref{2.7}, and \eqref{2.71}, as the case may be. Let us note that the slope parameters $s_k$ given by \eqref{2.71} do not depend on the number of processes in the model as long as $n>k$, so a first-order family defines a unique asymptotic distribution curve. These families will allow us to derive results about asymptotic distribution curves without repeatedly reciting the characteristics of individual Atlas or first-order models. Moreover, we shall only consider values derived from the models in a first-order family when these models are in their stable distribution. Essentially, we need only consider the values that result from the parameters $\{g_k,\sigma^2_k\}_{ k\in{\mathbb N}}$, and we can ignore the models themselves.
A model $\{X_1,\ldots,X_n\}$ in a simple first-order family will satisfy
\begin{equation*}
d\log X_i(t) = -g\,dt+ng{\mathbbm 1}_{\{r_t(i)=n\}}dt+\sigma_{r_t(i)}\,dW_i(t),
\end{equation*}
where $g>0$, the $\sigma^2_k$ are nondecreasing, and $(W_1,\ldots,W_n)$ is a Brownian motion. Hence, for a simple first-order family,
\begin{equation}\label{2.09}
\lambda_{k,k+1} = 2kg,\quad\text{{\rm a.s.}},
\end{equation}
and
\begin{equation}\label{2.10}
s_k = \frac{ \sigma_k^2+\sigma^2_{k+1}}{4g},\quad\text{{\rm a.s.}},
\end{equation}
for $k\in{\mathbb N}$, with the $s_k$ nondecreasing. Hence, in this case the log-log plot of the stable distribution will be concave.
It appears that actual empirical time-dependent systems often behave like simple first-order families, and we analyze one such example below, the capitalizations of U.S.\ companies (see Figures~\ref{f1} and \ref{f2}). The condition that the variance rates increase at the lower ranks seems natural --- even in the original observation of \citet{Brown:1827} it would seem likely that the water molecules would have buffeted the smaller particles more vigorously than the larger ones.
\section{First-order approximation of time-dependent systems}\label{first1}
Suppose that $\{Y_1,\ldots,Y_n\}$, for $n>1$, is an asymptotically stable system of positive continuous semimartingales with rank function $\rho_t\in\Sigma_n$ such that $\rho_t(i)<\rho_t(j)$ if $Y_i(t)>Y_j(t)$ or if $Y_i(t)=Y_j(t)$ and $i<j$. Let $\{Y_{(1)}\ge\cdots\ge Y_{(n)}\}$ be the corresponding rank processes with $Y_{(\rho_t(i))}(t)=Y_i(t)$. As in Definition~\ref{D2.2}, for the processes $Y_1,\ldots,Y_n$ we can define the parameters
\begin{equation}\label{3.01a}\begin{split}
\boldsymbol \lambda_{k,k+1} &\triangleq \limt{1}\lt{k,k+1}^Y(t) >0,\quad\text{{\rm a.s.}},\\
\boldsymbol \sigma^2_{k,k+1}&\triangleq \limt{1}\bbrac{\log Y_{(k)}-\log Y_{(k+1)}}_t >0,\quad\text{{\rm a.s.}},
\end{split}\end{equation}
for $k=1,\ldots,n-1$, and by convention $\boldsymbol \lambda_{0,1}=0$, $\boldsymbol \sigma^2_{0,1}=\boldsymbol \sigma^2_{1,2}$, and $\boldsymbol \sigma^2_{n,n+1}=\boldsymbol \sigma^2_{n-1,n}$.
\begin{defn}\citep{F:2002}
Let $\{Y_1,\ldots,Y_n\}$ be an asymptotically stable system of positive continuous semimartingales with parameters $\boldsymbol \lambda_{k,k+1}$ and $\boldsymbol \sigma^2_{k,k+1}$, for $k=1,\ldots,n$, defined by \eqref{3.01a}. Then the {\em first-order approximation} for $\{Y_1,\ldots,Y_n\}$ is the first-order model $\{X_1,\ldots,X_n\}$ with
\begin{equation}\label{3.1}
d\log X_i(t) = g_{r_t(i)}\,dt+G_n{\mathbbm 1}_{\{r_t(i)=n\}}dt+\sigma_{r_t(i)}\,dW_i(t),
\end{equation}
for $i=1,\ldots,n$, where $r_t\in\Sigma_n$ is the rank function for the $X_i$, the parameters $g_k$ and $\sigma_k$ are defined by
\begin{equation}\label{3.2}
\begin{split}
g_k &= \frac{1}{2} \boldsymbol \lambda_{k-1,k} - \frac{1}{2} \boldsymbol \lambda_{k,k+1},\text{ for } k=1,\ldots,n-1, \text{ and } g_n = \frac{g_1+\cdots+g_{n-1}}{n-1},\\
\sigma_k^2 &= \frac{1}{4}\big(\boldsymbol \sigma^2_{k-1,k}+\boldsymbol \sigma^2_{k,k+1}\big),\text{ for } k=1,\ldots,n,
\end{split}
\end{equation}
where $\sigma_k$ is the positive square root of $\sigma^2_k$, $G_n=-(g_1+\cdots+g_n)$, and $(W_1,\ldots,W_n)$ is a Brownian motion.
\end{defn}\vspace{5pt}
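The map \eqref{3.2} from the parameters $\boldsymbol \lambda_{k,k+1}$ and $\boldsymbol \sigma^2_{k,k+1}$ to the first-order parameters is elementary, and the identity \eqref{3.20} below can be verified directly from its output. The following sketch implements it with 0-indexed lists and the boundary conventions stated above ($\boldsymbol \lambda_{0,1}=0$, $\boldsymbol \sigma^2_{0,1}=\boldsymbol \sigma^2_{1,2}$, $\boldsymbol \sigma^2_{n,n+1}=\boldsymbol \sigma^2_{n-1,n}$); the input values in the test are placeholders.

```python
# Sketch: parameters (3.2) of the first-order approximation, computed
# from lam[k-1] = lambda_{k,k+1} and sig2[k-1] = sigma^2_{k,k+1},
# k = 1..n-1 (so both inputs are lists of length n-1).

def first_order_params(lam, sig2):
    n = len(lam) + 1
    lam_ext = [0.0] + list(lam)                    # lambda_{0,1} = 0
    sig_ext = [sig2[0]] + list(sig2) + [sig2[-1]]  # boundary conventions
    g = [(lam_ext[k] - lam_ext[k + 1]) / 2.0 for k in range(n - 1)]
    g.append(sum(g) / (n - 1))                     # g_n = (g_1+...+g_{n-1})/(n-1)
    s2 = [(sig_ext[k] + sig_ext[k + 1]) / 4.0 for k in range(n)]
    return g, s2
```

By construction, $-2(g_1+\cdots+g_k)$ recovers the input $\boldsymbol \lambda_{k,k+1}$, which is exactly \eqref{3.20}.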
For the first-order model \eqref{3.1} with parameters \eqref{3.2}, equations \eqref{2.6} and \eqref{2.7} imply that
\begin{equation}\label{3.20}
\lambda_{k,k+1}=-2\big(g_1+\cdots+g_k\big)= \boldsymbol \lambda_{k,k+1},\quad\text{{\rm a.s.}},
\end{equation}
for $k=1,\ldots,n-1$, and
\[
\sigma^2_{k,k+1}=\sigma^2_k+\sigma^2_{k+1}
= \frac{1}{4}\big(\boldsymbol \sigma^2_{k-1,k}+2\boldsymbol \sigma^2_{k,k+1}+\boldsymbol \sigma^2_{k+1,k+2}\big),\quad\text{{\rm a.s.}},
\]
for $k=1,\ldots,n-1$. Hence, \eqref{2.41} becomes
\begin{equation}\label{3.3}
\limT{1}\int_0^T\big(\log X_{(k)}(t)-\log X_{(k+1)}(t)\big)\,dt=\frac{\sigma^2_{k,k+1}}{2\lambda_{k,k+1}}=\frac{\boldsymbol \sigma^2_{k-1,k}+2\boldsymbol \sigma^2_{k,k+1}+\boldsymbol \sigma^2_{k+1,k+2}}{8\boldsymbol \lambda_{k,k+1}},\quad\text{{\rm a.s.}},
\end{equation}
for $k=1,\ldots,n-1$. If the processes $Y_1,\ldots,Y_n$ satisfy
\begin{equation}\label{3.4}
\limT{1}\int_0^T\big(\log Y_{(k)}(t)-\log Y_{(k+1)}(t)\big)\,d t\cong\frac{\boldsymbol \sigma^2_{k,k+1}}{2\boldsymbol \lambda_{k,k+1}},
\end{equation}
for $k=1,\ldots,n-1$, then the stable distribution \eqref{3.3} for the first-order approximation will be a smoothed version of the stable distribution \eqref{3.4} for the $Y_i$. The approximation \eqref{3.4} will be accurate if the gap series $(\log Y_{(k)}(t)-\log Y_{(k+1)}(t))$ behave like reflected Brownian motion, which has an exponential stable distribution. We can expect this approximation to hold when the behavior of the processes $Y_1,\ldots,Y_n$ is determined mostly by rank. The accuracy of this approximation is likely to deteriorate when more idiosyncratic characteristics are present, characteristics that depend on the indices $i$.
Now suppose that we have a time-dependent system $\{Z_1(\tau),Z_2(\tau),\ldots\}$ of positive-valued data observed at times $\tau\in\{1,2,\ldots,T\}$. Let
\begin{equation}\label{AR0}
N_\tau = \#\{Z_1(\tau),Z_2(\tau),\ldots\} \quad\text{ and }\quad N=N_{1}\land\cdots\land N_{T},
\end{equation}
where $\#$ represents cardinality. Let $\rho_\tau:{\mathbb N}\to{\mathbb N}$ be the rank function for the system $\{Z_1(\tau),Z_2(\tau),\ldots\}$ such that $\rho_\tau$ restricted to the subset $\{1,\ldots,N_\tau\}$ is the permutation with $\rho_\tau(i)<\rho_\tau(j)$ if $Z_i(\tau)>Z_j(\tau)$ or if $Z_i(\tau)=Z_j(\tau)$ and $i<j$, and for $i>N_\tau$, $\rho_\tau(i)=i$. We define the ranked values $\{Z_{(1)}(\tau)\ge Z_{(2)}(\tau)\ge\cdots\}$ such that $Z_{(\rho_\tau(i))}(\tau)=Z_i(\tau)$ for $i\le N_\tau$, and for definiteness we can let $Z_{(k)}(\tau)=0$ for $k>N_\tau$. With these definitions, we have
\[
Z_{[k]}(\tau)=Z_{(1)}(\tau)+\cdots+Z_{(k)}(\tau),
\]
for $k=1,\ldots,N$ and $\tau\in\{1,2,\ldots,T\}$.
We can mimic the time averages \eqref{3.02} and \eqref{3.03} to define the parameters
\begin{equation}\label{AR1}
\boldsymbol \lambda_{k,k+1} \triangleq\frac{2}{T-1}\sum_{\tau=1}^{T-1}\bigg(\frac{Z_{[k]}(\tau+1)-Z_{[k]}(\tau)}{Z_{(k)}(\tau)}-\sum_{i=1}^N {\mathbbm 1}_{\{\rho_{\tau}(i)\le k\}}\frac{Z_i(\tau+1)-Z_i(\tau)}{Z_{(k)}(\tau)} \bigg),
\end{equation}
and
\begin{equation}\label{AR2}
\boldsymbol \sigma^2_{k,k+1} \triangleq \frac{1}{T-1}\sum_{\tau=1}^{T-1}\Big(\big(\log Z_{(k)}(\tau+1)-\log Z_{(k+1)}(\tau+1)\big)
-\big(\log Z_{(k)}(\tau)-\log Z_{(k+1)}(\tau)\big)\Big)^2
\end{equation}
for $k=1,\ldots,N-1$, and by convention $\boldsymbol \lambda_{0,1}=0$ and $\boldsymbol \sigma^2_{0,1}=\boldsymbol \sigma^2_{1,2}$.
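The estimators \eqref{AR1} and \eqref{AR2} can be computed directly from a panel of observed data. The sketch below assumes a fixed item count $N$ over the observation window (as with sampled data) and breaks ties by lower index, matching the rank function $\rho_\tau$ defined above; the arrays in the usage example are toy placeholders.

```python
import numpy as np

# Sketch: the empirical estimators (AR1) and (AR2) for a panel Z of
# positive data with shape (N, T): N items observed at T times.

def rank_based_params(Z):
    N, T = Z.shape
    lam = np.zeros(N - 1)
    sig2 = np.zeros(N - 1)
    for tau in range(T - 1):
        order = np.argsort(-Z[:, tau], kind="stable")   # rho_tau, ties -> lower index
        zr_now = Z[order, tau]                          # Z_(1) >= ... >= Z_(N) at tau
        zr_next = np.sort(Z[:, tau + 1])[::-1]          # ranked values at tau+1
        cum_now, cum_next = np.cumsum(zr_now), np.cumsum(zr_next)
        for k in range(1, N):
            top_k = order[:k]                           # items with rho_tau(i) <= k
            turnover = (cum_next[k - 1] - cum_now[k - 1]
                        - (Z[top_k, tau + 1] - Z[top_k, tau]).sum())
            lam[k - 1] += 2.0 * turnover / ((T - 1) * zr_now[k - 1])
        gaps_now = np.log(zr_now[:-1]) - np.log(zr_now[1:])
        gaps_next = np.log(zr_next[:-1]) - np.log(zr_next[1:])
        sig2 += (gaps_next - gaps_now) ** 2 / (T - 1)
    return lam, sig2
```

A quick sanity check: on a panel with no movement at all, both estimators vanish, while a single swap of the top two items contributes local time at the gap between ranks 1 and 2 but no gap variance if the values are merely exchanged.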
\begin{defn}
Suppose that $\{Z_1(\tau),Z_2(\tau),\ldots\}$ is a time-dependent system of positive-valued data with $N$, $\boldsymbol \lambda_{k,k+1}$, and $\boldsymbol \sigma^2_{k,k+1}$ defined as in \eqref{AR0}, \eqref{AR1}, and \eqref{AR2}. The {\em first-order approximation} of $\{Z_1(\tau),Z_2(\tau),\ldots\}$ is the first-order family $\{g_k,\sigma^2_k\}_{ k\in{\mathbb N}}$ with
\begin{equation}\label{AR3}
\begin{split}
g_k = \frac{1}{2} \boldsymbol \lambda_{k-1,k} - \frac{1}{2} \boldsymbol \lambda_{k,k+1},\text{ for } k=1,\ldots,N-1, &\text{ and } g_k = \frac{g_1+\cdots+g_{N-1}}{N-1},\text{ for } k\ge N,\\
\sigma_k^2 = \frac{1}{4}\big(\boldsymbol \sigma^2_{k-1,k}+\boldsymbol \sigma^2_{k,k+1}\big),\text{ for } k=1,\ldots,N-1, &\text{ and }\sigma_k^2 = \sigma^2_{N-1},\text{ for } k\ge N.
\end{split}
\end{equation}
\end{defn}
\vspace{10pt}
With this definition the slope parameters $s_k$ given in \eqref{2.71} are constant for $k\ge N$.
If the data satisfy
\begin{equation}\label{3.4z}
\frac{1}{T}\sum_{\tau=1}^{T}\big(\log Z_{(k)}(\tau)-\log Z_{(k+1)}(\tau)\big)\cong\frac{\boldsymbol \sigma^2_{k,k+1}}{2\boldsymbol \lambda_{k,k+1}},
\end{equation}
for $k \in{\mathbb N}$, then the stable distribution \eqref{3.3} for the first-order approximation will be a smoothed version of the distribution \eqref{3.4z} for the data $\{Z_{(1)}(\tau),Z_{(2)}(\tau),\ldots\}$. As was the case with \eqref{3.4}, the approximation \eqref{3.4z} will be accurate if the gap series $(\log Z_{(k)}(\tau)-\log Z_{(k+1)}(\tau))$ are distributed like reflected Brownian motion. We shall say that a system of time-dependent data that satisfies \eqref{3.4z} is {\em rank-based}, and we can expect this approximation to hold when the behavior of the data is determined mostly by rank. We should also note that \eqref{AR1}, \eqref{AR2}, and \eqref{3.4z} are not true asymptotic values, but rather estimates based on limited data.
\section{Zipfian systems of time-dependent data} \label{zipf}
Zipf's law originally referred to the frequency of words in a written language \citep{Zipf:1935}, with the system $\{Z_1(\tau),Z_2(\tau),\ldots\}$, where $Z_i(\tau)$ represents the number of occurrences of the $i$th word in a language at time~$\tau$. To measure the relative frequency of written words in a language it is not possible to observe all the written words in that language. Instead, the words must be {\em sampled,} where a random sample is selected (without replacement), and the frequency versus rank of this random sample is studied. For example, in \citet{wiki-zipf} 10 million words in each of 30 languages were sampled, and the resulting distribution curves created. If the sample is large enough, the distribution of the sampled data should not differ materially from the distribution of the entire data set, at least for the higher ranks.
An additional advantage that arises from using sampled data is that the total number of data in the sample remains constant over time. The total number of written words that appear in a language is likely to increase over time, and this increase could bias estimates of some parameters. Sampling the data will remove such a trend from the data, since a constant number of words can be sampled at each time. Accordingly, in all cases we shall assume that global trends have been removed from the data, either by sampling or by some other means of detrending.
Since we have assumed that we have a constant sample size or that the data have been detrended somehow, the total count of our sampled data will remain constant, so
\begin{equation}\label{3.0}
Z_1(\tau)+Z_2(\tau)+\cdots=\text{ constant},
\end{equation}
for $\tau\in\{1,2,\ldots,T\}$, where in the case of the Wikipedia words the constant would be 10 million.
Suppose we have a time-dependent system of positive-valued data $\{Z_1(\tau),Z_2(\tau),\ldots\}$ and we observe the top $n$ ranks, for $1<n<N$, with $N$ from \eqref{AR0}, along with
\[
Z_{[n]}(\tau)=Z_{(1)}(\tau)+\cdots+Z_{(n)}(\tau).
\]
Since the total value of the sampled data in \eqref{3.0} is constant, for large enough $n$ it is reasonable to expect the relative change of the top $n$ ranks to satisfy
\begin{equation}\label{3.01}
\frac{Z_{[n]}(\tau+1)-Z_{[n]}(\tau)}{Z_{[n]}(\tau)}\cong 0,
\end{equation}
as $n$ becomes large. This condition is essentially a ``conservation of mass'' criterion for $\{Z_1(\tau),Z_2(\tau),\ldots\}$, and we would like to interpret this in terms of first-order families.
In all that follows, for a first-order family $\{g_k,\sigma^2_k\}_{k\in{\mathbb N}}$ we shall use the notation ${\mathbb E}_n$ to denote the expectation with respect to the stable distribution for the model $\{X_1,\ldots,X_n\}$ defined by that family. The following definition is motivated by the condition \eqref{3.01}.
\vspace{5pt}
\begin{defn} \label{D5.1}
The first-order family $\{g_k,\sigma^2_k\}_{k\in{\mathbb N}}$ is {\em conservative} if
\begin{equation} \label{5.1}
\lim_{n\to\infty} {\mathbb E}_n\bigg[\frac{dX_{[n]}(t)}{X_{[n]}(t)}\bigg] = 0.
\end{equation}
\end{defn}\vspace{5pt}
For the system $\{Z_1(\tau),Z_2(\tau),\ldots\}$ and for $n<N$, the effect of processes that leave the top $n$ ranks over the time interval $[\tau,\tau+1]$ and are replaced by processes from the lower ranks is measured by
\[
Z_{[n]}(\tau+1)-\sum_{i=1}^N{\mathbbm 1}_{\{\rho_{\tau}(i)\le n\}}Z_i(\tau+1),
\]
or
\[
\big(Z_{[n]}(\tau+1)-Z_{[n]}(\tau)\big)-\bigg(\sum_{i=1}^N{\mathbbm 1}_{\{\rho_{\tau}(i)\le n\}}\big(Z_i(\tau+1)-Z_i(\tau)\big)\bigg).
\]
While some replacement from lower ranks is necessary, it seems reasonable to require that on average the relative proportion of the mass that is replaced becomes arbitrarily small for large enough $n$, i.e., that
\begin{equation}\label{5.10}
\frac{1}{T-1}\sum_{\tau=1}^{T-1}\bigg[\frac{Z_{[n]}(\tau+1)-Z_{[n]}(\tau)}{Z_{[n]}(\tau)}-\sum_{i=1}^N{\mathbbm 1}_{\{\rho_{\tau}(i)\le n\}}\frac{Z_i(\tau+1)-Z_i(\tau)}{Z_{[n]}(\tau)}\bigg]\cong 0,
\end{equation}
for large enough $n$. In terms of the first-order approximation $\{g_k,\sigma^2_k\}_{k\in{\mathbb N}}$ to $\{Z_1(\tau),Z_2(\tau),\ldots\}$, the corresponding condition will be
\[
\limT{1}\int_0^T\bigg(\frac{dX_{[n]}(t)}{X_{[n]}(t)}-\sum_{i=1}^N{\mathbbm 1}_{\{r_{t}(i)\le n\}}\frac{dX_i(t)}{X_{[n]}(t)}\bigg)\cong 0,\quad\text{{\rm a.s.}},
\]
for large enough $n$, where $N>n$ and $\{X_1,\ldots,X_N\}$ is a first-order model defined by $\{g_k,\sigma^2_k\}_{k\in{\mathbb N}}$. By \eqref{2.2}, this is equivalent to
\[
\limT{1}\int_0^T\frac{X_{(n)}(t)}{2X_{[n]}(t)}d\lt{n,n+1}^X(t)\cong 0,\quad\text{{\rm a.s.}},
\]
for large enough $n$. Since
\begin{equation}\label{5.20}
\limT{1}\int_0^T d\lt{n,n+1}^X(t)=\lambda_{n,n+1}=-2\big(g_1+\cdots+g_n\big),\quad\text{{\rm a.s.}},
\end{equation}
condition \eqref{5.10} can be interpreted as
\[
\limT{1}\int_0^T-\big(g_1+\cdots+g_n\big)\frac{X_{(n)}(t)}{X_{[n]}(t)}dt\cong 0,\quad\text{{\rm a.s.}},
\]
for large enough $n$. For the model $\{X_1,\ldots,X_n\}$ defined by the family $\{g_k,\sigma^2_k\}_{k\in{\mathbb N}}$, $G_n=-\big(g_1+\cdots+g_n\big)$, so the following definition is derived from condition \eqref{5.10}.
\begin{defn} \label{D5.2}
The first-order family $\{g_k,\sigma^2_k\}_{ k\in{\mathbb N}}$ is {\em complete} if
\begin{equation} \label{5.2}
\lim_{n\to\infty}{\mathbb E}_n\bigg[\frac{G_nX_{(n)}(t)}{X_{[n]}(t)}\bigg]= 0.
\end{equation}
\end{defn}\vspace{5pt}
For an Atlas family or simple first-order family, \eqref{5.2} is equivalent to
\begin{equation}\label{5.11}
\lim_{n\to\infty}{\mathbb E}_n \bigg[\frac{ngX_{(n)}(t)}{X_{[n]}(t)}\bigg]= 0,
\end{equation}
since $G_n=ng$.
\begin{defn} \label{D5.3}
An Atlas family or first-order family is {\em Zipfian} if its slope parameters satisfy $s_k=1$, for $k\in{\mathbb N}$. A time-dependent rank-based system is {\em Zipfian} if its first-order approximation is Zipfian.
\end{defn}
\begin{prop} \label{T1}
An Atlas family is Zipfian if and only if it is conservative and complete.
\end{prop}
Since many empirical distributions are not Zipfian but rather quasi-Zipfian, we need to formalize this concept for first-order models.
\begin{defn} \label{D5.4}
A first-order family is {\em quasi-Zipfian} if its slope parameters $s_k$ are nondecreasing with $s_1 \leq 1$ and
\begin{equation}\label{5.4}
\lim_{k\to\infty}s_k \geq 1,
\end{equation}
where this limit includes divergence to infinity. A time-dependent rank-based system is {\em quasi-Zipfian} if its first-order approximation is quasi-Zipfian.
\end{defn}\vspace{5pt}
Because the slope parameters $s_k$ are approximately equal to minus the slope of a log-log plot of size versus rank, Definition~\ref{D5.4} implies that a time-dependent rank-based system will be quasi-Zipfian if this log-log plot is concave with slope not steeper than $-1$ at the highest ranks and not flatter than $-1$ at the lowest ranks. By these definitions, a Zipfian system is also quasi-Zipfian.
\begin{prop} \label{T2}
If a simple first-order family is conservative, complete, and satisfies
\begin{equation} \label{5.5}
\lim_{n\to\infty} {\mathbb E}_n\bigg[\frac{X_{(1)}(t)}{X_{[n]}(t)}\bigg] \le \frac{1}{2},
\end{equation}
then it is quasi-Zipfian.
\end{prop}
We show in Example~\ref{E1} below that a conservative and complete first-order family $\{g_k,\sigma^2_k\}_{ k\in{\mathbb N}}$ with $g_k=-g<0$, for $k\in{\mathbb N}$, can have a Pareto distribution with log-log slope steeper than $-1$ if the $\sigma_k$ are not nondecreasing.
\section{Examples and discussion} \label{disc}
Here we apply the methods we developed above to an actual time-dependent rank-based system. We also discuss a number of other such systems, as well as other approaches to time-dependent rank-based systems.
\begin{exa}\label{ex1} {\em Market capitalization of companies.}
\vspace{5pt}
The market capitalization of U.S.\ companies was studied as early as \citet{Simon/Bonini:1958}, and here we follow the methodology of \citet{F:2002}. The capitalization of a company is defined as the price of the company's stock multiplied by the number of shares outstanding. Ample data are available for stock prices, and this allows us to estimate the first-order parameters we introduced in the previous sections.
Figure~\ref{f1} shows the smoothed first-order parameters $\sigma^2_k$ and $-g_k$ for the U.S.\ capital distribution for the 10-year period from January 1990 to December 1999. The capitalization data we used were from the monthly stock database of the Center for Research in Security Prices at the University of Chicago. The market we consider consists of the stocks traded on the New York Stock Exchange, the American Stock Exchange, and the NASDAQ Stock Market, after the removal of all Real Estate Investment Trusts, all closed-end funds, and those American Depositary Receipts not included in the S\&P 500 Index. The parameters in Figure~\ref{f1} correspond to the 5000 stocks with the highest capitalizations each month. The first-order parameters $g_k$ and $\sigma^2_k$ were calculated as in \eqref{3.2} from the parameters $\boldsymbol \lambda_{k,k+1}$ and $\boldsymbol \sigma^2_{k,k+1}$, and then smoothed by convolution with a Gaussian kernel with $\pm 3.16$ standard deviations spanning 100 months on the horizontal axis, with reflection at the ends of the data.
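The smoothing step can be sketched as a discrete Gaussian convolution with reflected ends. The kernel span and truncation below follow the description in the text, though the authors' exact implementation may differ:

```python
import numpy as np

def smooth_reflect(y, span=100, n_sigma=3.16):
    """Smooth a parameter sequence by convolution with a Gaussian kernel
    truncated at +/- n_sigma standard deviations and spanning `span`
    samples on the horizontal axis, with reflection at both ends.
    (A sketch of the smoothing described in the text; the authors'
    exact implementation may differ.)"""
    half = span // 2
    sigma = half / n_sigma
    t = np.arange(-half, half + 1)
    kernel = np.exp(-0.5 * (t / sigma) ** 2)
    kernel /= kernel.sum()
    # Reflect the data at both ends before convolving.
    padded = np.concatenate([y[half:0:-1], y, y[-2:-half - 2:-1]])
    return np.convolve(padded, kernel, mode="valid")

y = np.sin(np.linspace(0, 3, 500)) + 0.1 * np.random.default_rng(2).standard_normal(500)
ys = smooth_reflect(y)
assert ys.shape == y.shape  # reflection padding preserves the length
```

Reflection at the ends avoids the edge attenuation that zero-padding would introduce in the smoothed parameter curves.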
We see in Figure~\ref{f1} that the values of the parameters $-g_k$ are relatively constant compared to the parameters $\sigma^2_k$, which increase almost linearly with rank. The near-constant $-g_k$ suggest that the first-order approximation will generate a simple first-order family. In Figure~\ref{f2}, the distribution curve for the capitalizations is shown as the black curve, which represents the average of the year-end capital distributions for the ten years spanned by the data. The broken red curve is the first-order approximation of the distribution following \eqref{3.3}. The two curves are quite close, and this indicates that the time-dependent system of company capitalizations is mostly rank-based. The black dot on the curve between ranks 100 and 500 is the point at which the log-log slope of the tangent to the curve is $-1$, so this is a quasi-Zipfian distribution, consistent with Proposition~\ref{T2}. Note that if we had considered only the top 100 companies, the completeness condition, Definition~\ref{D5.2}, would have failed, as we would expect for an incomplete distribution.
\end{exa}
\begin{exa} {\em Frequency of written words.}
\vspace{5pt}
Word frequency is the origin of Zipf's law \citep{Zipf:1935}, but testing our methodology with word-frequency data could be difficult. Ideally, we would like to construct a first-order approximation for the data and compare the first-order distribution to that of the original data. However, the parameters $\boldsymbol \lambda_{k,k+1}$ and $\boldsymbol \sigma^2_{k,k+1}$ for the top-ranked words in a language are likely to be difficult to estimate over any reasonable time frame, since the top-ranked words probably seldom change ranks. Nevertheless, while the top ranks may require centuries of data for accurate estimates, the lower ranks could be amenable to analysis similar to that which we carried out for company capitalizations. Moreover, it might be possible to combine, for example, all the Indo-European languages and generate accurate estimates of the $\boldsymbol \lambda_{k,k+1}$ and $\boldsymbol \sigma^2_{k,k+1}$ even for the top ranks of the combined data.
We can see from the remarkable chart in \citet{wiki-zipf} that the log-log plots for 30 different languages are (almost) straight. Actually, these plots are slightly concave, or quasi-Zipfian in nature. It is possible that this slight curvature is due to sampling error at the lower ranks, which would raise the variances and steepen the slope, but this would have to be determined by studying the actual data.
\end{exa}
\begin{exa} {\em Random growth processes.}
\vspace{5pt}
Economists have traditionally used random growth processes to model time-dependent systems with quasi-Zipfian distributions. For example, these processes were used by \citet{Gabaix:1999} to model the distribution of city populations and by \citet{Piketty:2017} to construct a piecewise approximation to the distribution curves for the income and wealth of U.S.\ households. A {\em random growth process} is an It\^o\ process of the form
\begin{equation}\label{rg0}
\frac{dX(t)}{X(t)}=\mu(X(t))\,dt+\sigma(X(t))\,dW(t),
\end{equation}
where $W$ is Brownian motion and $\mu$ and $\sigma$ are well-behaved real-valued functions. We can convert this into logarithmic form by It\^o's rule, in which case
\begin{equation}\label{rg1}
d\log X(t) = \bigg(\mu(X(t))-\frac{\sigma^2(X(t))}{2}\bigg)dt+\sigma(X(t))\,dW(t), \quad\text{{\rm a.s.}}
\end{equation}
We shall assume that this equation has at least a weak solution with $X(t)>0$, a.s., and that the solution has a stable distribution.
Let us construct $n$ i.i.d.\ copies $X_1,\ldots,X_n$ of $X$, all defined by \eqref{rg0} or, equivalently, by \eqref{rg1}, and assume that the $X_i$ are all in their common stable distribution. Let us assume that the $X_i$ spend no local time at triple points, so we can define the rank processes and \eqref{2.1} and \eqref{2.2} will be valid. If the system is asymptotically stable we can calculate the corresponding rank-based growth rates $g_k$, but if we know the stable distribution of the original process \eqref{rg0}, then there is a simpler way to proceed.
If we know the common stable distribution of the $X_i$, then we can calculate expectations under this stable distribution and let
\begin{equation}\label{rg2}
g_k= {\mathbb E}\bigg[\mu(X_{(k)}(t))-\frac{\sigma^2(X_{(k)}(t))}{2}\bigg]\quad\text{ and }
\quad \sigma^2_k= {\mathbb E}\big[\sigma^2\big(X_{(k)}(t)\big)\big],
\end{equation}
for $k=1,\ldots,n$. Under appropriate regularity conditions on the $\mu$ and $\sigma$, the expectations here will be equal to the asymptotic time averages of the functions. Since the $X_i$ are stable, the geometric mean $\big(X_1X_2 \ldots X_n\big)^{1/n}=\big(X_{(1)}X_{(2)}\ldots X_{(n)}\big)^{1/n}$ will also be stable, so
\begin{equation*}
\big(g_1+\cdots+g_n\big)t={\mathbb E}\big[\log \big(X_{(1)}(t)\cdots X_{(n)}(t)\big)-\log \big(X_{(1)}(0)\cdots X_{(n)}(0)\big)\big]=0.
\end{equation*}
Hence,
\begin{equation}\label{rg2.1}
g_1+\cdots+g_n=0,\quad\text{ with }\quad g_1+\cdots+g_k<0,\text{ for } k<n,
\end{equation}
so the $g_k$ and $\sigma^2_k$ define the first-order model
\begin{equation}\label{rg3}
d\log Y_i(t)= g_{r_t(i)}dt+\sigma_{r_t(i)}dW_i(t),
\end{equation}
where $W_1,\ldots,W_n$ is $n$-dimensional Brownian motion. In this case, $G_n=0$.
If the functions $\mu$ and $\sigma$ in \eqref{rg0} are smooth enough, then the system is likely to be rank-based and the stable distributions of the gap processes $(\log X_{(k)}-\log X_{(k+1)})$ will be (close to) exponential. In this case the stable distribution of the first-order model \eqref{rg3} will be close to that of the original system \eqref{rg0}. More conditions are required to ensure that this stable distribution be quasi-Zipfian, and to achieve a true Zipfian distribution, a lower reflecting barrier or other equivalent device must be included in the model \citep{Gabaix:2009}.
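The dynamics \eqref{rg1}, together with the lower reflecting barrier just mentioned, can be simulated directly. Below is a minimal Euler--Maruyama sketch, under the simplifying assumptions that $\mu$ and $\sigma$ are constant and that reflection is approximated by clamping the log-process at the barrier:

```python
import numpy as np

def simulate_log_growth(mu, sigma, x0, T, dt, x_min, rng):
    """Euler-Maruyama integration of the logarithmic form (rg1),
    d log X = (mu - sigma^2/2) dt + sigma dW,
    with constant mu and sigma (a simplifying assumption) and a lower
    reflecting barrier at x_min, the device mentioned in the text."""
    logx = np.log(x0)
    log_min = np.log(x_min)
    n_steps = int(round(T / dt))
    for _ in range(n_steps):
        logx += (mu - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * rng.standard_normal()
        logx = max(logx, log_min)  # crude reflection: clamp at the barrier
    return np.exp(logx)

rng = np.random.default_rng(1)
x = simulate_log_growth(mu=0.0, sigma=0.3, x0=1.0, T=10.0, dt=0.01,
                        x_min=0.1, rng=rng)
assert x > 0.099  # the barrier keeps the process bounded away from zero
```

Running $n$ independent copies of this scheme and ranking them at each step gives a numerical handle on the gap processes $(\log X_{(k)}-\log X_{(k+1)})$ discussed above.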
\end{exa}
\begin{exa} {\em Population of cities.}
\vspace{5pt}
The distribution of city populations is a prominent example of Zipf's law in social science. However, as the comprehensive cross-country investigation of \citet{Soo:2005} shows, city size distributions in most countries are not Zipfian but rather quasi-Zipfian. \citet{Gabaix:1999} hypothesized that the quasi-Zipfian distribution of U.S.\ city size was caused by higher population variances at the lower ranks, consistent with Proposition~\ref{T2}. Which of the deviations from Zipf's law uncovered by \citet{Soo:2005} are due to population variances that decrease with increasing city size remains an open question.
There is another phenomenon that occurs with city size distributions. Suppose that rather than studying a large country like the U.S., we consider instead the populations of the cities in New York State. According to the 2010 U.S.\ census, the largest city, New York City, had a population of 8,175,133, while the second largest, Buffalo, had only 261,310, so this distribution is non-Zipfian. The corresponding population of New York State was 19,378,102, so hypothesis \eqref{5.5} of Proposition~\ref{T2} is satisfied, but nevertheless the conclusion of the proposition fails. This calls for an explanation, and we conjecture that while the populations of the cities of New York State comprise a time-dependent system, this system is not rank-based. The population of New York City is not determined merely by its rank among New York State cities, but is highly city-specific in nature. Hence, we cannot expect the stable distribution for the gap process between New York City and second-ranked Buffalo to be exponential, and we cannot expect the distribution of the system to be quasi-Zipfian.
\end{exa}
\begin{exa} {\em Assets of banks.}
\vspace{5pt}
\citet{Fernholz/Koch:2016a} show that the distribution of assets held by U.S.\ bank holding companies, commercial banks, and savings and loan associations are all quasi-Zipfian. This is true despite the fact that these distributions have undergone significant changes over the past few decades. However, as \citet{Fernholz/Koch:2017} show, the first-order approximations of these time-dependent rank-based systems generally do not satisfy the hypotheses of Proposition~\ref{T2}, since the parameters $\boldsymbol \sigma^2_{k,k+1}$ are, in most cases, lower for higher values of $k$. Nonetheless, the parameters $\boldsymbol \lambda_{k,k+1}$ vary with $k$ in such a way as to generate quasi-Zipfian distributions.
\end{exa}
\begin{exa} {\em Employees of firms.}
\vspace{5pt}
\citet{Axtell:2001} shows that the distribution of employees of U.S.\ firms is close to Zipfian, with only slight concavity. A number of empirical analyses have shown that for all but the tiniest firms, employment growth rates of U.S.\ firms do not vary with firm size \citep{Neumark/Wall/Zhang:2011}. This observation together with the slight concavity demonstrated by \citet{Axtell:2001} suggests that the first-order approximation of U.S.\ firm employees is simple, which would explain its quasi-Zipfian nature.
\end{exa}
\section{Conclusion} \label{concl}
We have shown that the stable distribution of an Atlas family will follow Zipf's law if and only if two natural conditions, conservation and completeness, are satisfied. We have also shown that a simple first-order family will have a stable distribution that is quasi-Zipfian if the family is conservative and complete, provided that the largest weight is not greater than one half. Since many systems of time-dependent rank-based empirical data can be approximated by Atlas families or simple first-order families, our results offer an explanation for the universality of Zipf's law for these systems.
\section{Introduction}
Qualitatively, pitch angle is fairly easy to determine by eye. This endeavour famously began with the creation of the Hubble--Jeans sequence of galaxies \citep{Jeans:1919,Jeans:1928,Hubble:1926,Hubble:1936}. In modern times, this legacy has been continued on a grand scale by the Galaxy Zoo project \citep{Lintott:2008}, which has been utilizing citizen scientist volunteers to visually classify spiral structure in galaxies on their website.\footnote{\url{https://www.galaxyzoo.org/}} If one is familiar with the concept of pitch angle, and presented with high-resolution imaging of grand design spiral galaxies, pitch angles accurate to $\pm 5\degr$ could reasonably be determined by visual inspection and comparison with reference spirals. Fortunately, instead of relying on only the human eye, several astronomical software routines now exist to measure pitch angle more precisely, which is particularly helpful with lower quality imaging and with galaxies exhibiting more ambiguous spiral structure (i.e. flocculent spiral galaxies).
Not surprisingly, pitch angle is intimately related to Hubble type, although it can at times be a poor indicator due to misclassification and asymmetric spiral arms \citep{Ringermacher:2010}. The Hubble type is also known to correlate with the bulge mass \citep[e.g.][and references therein]{Yoshizawa:1975,Graham:Worley:2008}, and more luminous bulges are associated with more tightly wound spiral arms \citep{Savchenko:2013}. Additionally, \citet{Davis:2015} present observational evidence for the spiral density wave theory's (bulge mass)--(disc density)--(pitch angle) Fundamental Plane relation for spiral galaxies.
It has been established that bulge mass correlates well with supermassive black hole (SMBH) mass \citep{Dressler:1989,Kormendy:Richstone:1995,Magorrian:1998,Marconi:Hunt:2003}. A connection between pitch angle and SMBH mass is therefore expected given the relations mentioned above. Moreover, pitch angle has been demonstrated to be connected to the shear rate in galactic discs, which is itself an indicator of the central mass distribution contained within a given galactocentric radius \citep{Seigar:2005,Seigar:2006}. In fact, it is possible to derive an indirect relationship between spiral arm pitch angle and SMBH mass through a chain of relations from (spiral arm pitch angle) $\rightarrow$ (shear) $\rightarrow$ (bulge mass) $\rightarrow$ (SMBH mass), $|\phi| \rightarrow \Gamma \rightarrow M_{\rm Bulge} \rightarrow M_{\rm BH}.$ From an analysis of the simulations by \citet{Grand:2013}\footnote{$\Gamma \approx 1.70-0.03|\phi|$ and $\log({M_{\rm Bulge}/{\rm M_{\sun}}}) \approx 1.17\Gamma +9.42$.} and application of the $M_{\rm BH}$--$M_{\rm Bulge}$ relation of \citet{Marconi:Hunt:2003}, the following (black hole mass)--(spiral arm pitch angle), $M_{\rm BH}$--$\phi$, relation estimation is obtained:
\begin{equation}
\log({M_{\rm BH}/{\rm M_{\sun}}}) \approx 8.18 - 0.041\left[|\phi|-15\degr\right].
\label{indirect}
\end{equation}
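The chain of relations can be evaluated numerically. The sketch below encodes the two fitted relations quoted in the footnote and the resulting equation~(\ref{indirect}); the intermediate $M_{\rm BH}$--$M_{\rm Bulge}$ step of \citet{Marconi:Hunt:2003} is folded into the final calibration:

```python
def shear_from_pitch(phi_deg):
    """Shear rate from pitch angle: Gamma ~ 1.70 - 0.03 |phi|
    (Grand et al. 2013 fit, as quoted in the footnote)."""
    return 1.70 - 0.03 * abs(phi_deg)

def log_mbulge_from_shear(gamma):
    """log10(M_Bulge / M_sun) ~ 1.17 Gamma + 9.42 (as quoted in the footnote)."""
    return 1.17 * gamma + 9.42

def log_mbh_from_pitch(phi_deg):
    """The resulting indirect estimate, equation (1):
    log10(M_BH / M_sun) ~ 8.18 - 0.041 (|phi| - 15 deg)."""
    return 8.18 - 0.041 * (abs(phi_deg) - 15.0)

# At |phi| = 15 deg the chain gives Gamma = 1.25, log(M_Bulge) ~ 10.88,
# and the calibrated relation returns log(M_BH) = 8.18.
assert abs(shear_from_pitch(15.0) - 1.25) < 1e-12
assert abs(log_mbulge_from_shear(1.25) - 10.8825) < 1e-9
assert abs(log_mbh_from_pitch(15.0) - 8.18) < 1e-12
```

Only the absolute value of the pitch angle enters, consistent with the convention adopted later in the paper.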
In this paper we explore and expand upon the established $M_{\rm BH}$--$\phi$ relation \citep{Seigar:2008,Berrier:2013}, which revealed that $M_{\rm BH}$ decreases as the spiral arm pitch angle increases. One major reason to pursue such a relation is the potential of using pitch angle to predict which galaxies might harbour intermediate-mass black holes (IMBHs, $M_{\rm BH}<10^5$~${\rm M_{\sun}}$). The $M_{\rm BH}$--$\phi$ relation will additionally enable one to probe bulge-less galaxies where the (black hole mass)--(bulge mass), $M_{\rm BH}$--$M_{\rm Bulge}$, and (black hole mass)--(S\'ersic index), $M_{\rm BH}$--$n$, relations can no longer be applied. Furthermore, pitch angles can be determined from images without calibrated photometry and do not require carefully determined sky backgrounds. The pitch angle is also independent of distance. Pitch angles have been measured for galaxies as distant as $z > 2$ \citep{Davis:2012}.
We present the mathematical formulae governing logarithmic spirals in Section \ref{theory}. We describe our sample selection of all currently known spiral galaxies with directly measured black hole masses and discuss our pitch angle measurement methodology in Section \ref{DM}. In Section \ref{AR}, we present our determination of the $M_{\rm BH}$--$\phi$ relation, including additional tests upon its efficacy, and division into subsamples segregated by barred/unbarred and literature-assigned pseudo/classical bulge morphologies; revealing a strong $M_{\rm BH}$--$\phi$ relation amongst the pseudobulge subsample. Finally, we comment on the meaning and implications of our findings in Section \ref{DI}, including a discussion of pitch angle stability, longevity and interestingly, connections with tropical cyclones and eddies in general.
We adopt a spatially flat lambda cold dark matter ($\Lambda$CDM) cosmology with the best-fitting Planck TT+lowP+lensing cosmographic parameters estimated by the Planck mission \citep{Planck:2015}: $\Omega_{\rm M} = 0.308$, $\Omega_\Lambda = 0.692$ and $h_{67.81} = h/0.6781 = H_0/(67.81$~km~s$^{-1}$~Mpc$^{-1}) \equiv 1$. Throughout this paper, all printed errors and plotted error bars represent $1\sigma$ ($\approx 68.3$~per~cent) confidence levels.
\section{Theory}\label{theory}
Logarithmic spirals are ubiquitous throughout nature, manifesting themselves as optimum rates of radial growth for azimuthal winding in numerous structures such as mollusc shells, tropical cyclones and the arms of spiral galaxies. Additional astrophysical examples of the manifestation of logarithmic spirals include protoplanetary discs \citep{Perez:2016,Rafikov:2016}, circumbinary discs surrounding merging black holes \citep{Zanotti:2010,Giacomazzo:2012} and the geometrically thick disc surrounding active galactic nuclei (AGN) central black holes \citep{Wada:2016}. This sort of expansion allows for radial growth without changing shape. One such special case of a logarithmic spiral is the golden spiral ($|\phi| \approx 17\fdg0$), which widens by a factor of the golden ratio ($\approx1.618$) every quarter turn, itself closely approximated by the Fibonacci spiral \citep[see appendix A of][]{Davis:2014}.
One can define the radius from the origin to a point along a logarithmic spiral at $(r, \theta)$ as
\begin{equation}
r = r_0e^{\tau\theta},
\label{spiral_equation}
\end{equation}
where $r_0$ is an arbitrary real positive constant representing the radius when the azimuthal angle $\theta = 0$, and $\tau$ is an arbitrary real constant (see Fig.~\ref{example}).
\begin{figure}
\includegraphics[clip=true,trim= 0mm 0mm 0mm 0mm,width=\columnwidth]{f1.pdf}
\caption{A logarithmic spiral (red), circle (blue), line tangent to the logarithmic spiral (magenta) at point ($r,\theta$), line tangent to circle (cyan) at point ($r,\theta$) and radial line (green) passing through the origin and point ($r,\theta$). Included are the length of $r_0$, the angle $\phi$ and the location of the reference point ($r,\theta$). For this example, $r_0 = 1$, $|\phi| = 20\degr$ and point $(r,\theta) = (\approx1.33,{\rm \pi}/4)$; making the circle radius $\approx1.33$. Note that the spiral (red) radius can continue outward towards infinity as $\theta\rightarrow+\infty$ and continue inward towards zero as $\theta\rightarrow-\infty$.}
\label{example}
\end{figure}
If $\tau = 0$, then one obtains a circle at constant radius $r = r_0$, while if $|\tau| = \infty$, one obtains a radial ray from the origin to infinity. The parameter $\tau$ therefore quantifies the tightness of the spiral pattern.
A logarithmic spiral is self-similar and thus always appears the same regardless of scale. Every successive $2{\rm \pi}$ revolution of a logarithmic spiral grows the radius at a rate of
\begin{equation}
\frac{r_{n+1}}{r_n}=e^{2{\rm \pi}\tau},
\label{growth}
\end{equation}
where $r_n$ is any arbitrary radius between the origin and the point $(r_n,\theta)$ and $r_{n+1}$ is the radius of the spiral after one complete revolution, such that $r_{n+1}$ is the radius between the origin and the point $(r_{n+1},\theta+2{\rm \pi})$.
The rate of growth (of the spiral radius as a function of azimuthal angle) of such a logarithmic spiral can be defined using the derivative of equation~(\ref{spiral_equation}), such that
\begin{equation}
\frac{dr}{d\theta} = r_0\tau e^{\tau\theta} = \tau r.
\label{spiral_derivative}
\end{equation}
Notice that $\tau = \frac{dr}{d\theta}/r = 0$ generates a circle and $\tau = \frac{dr}{d\theta}/r = \infty$ generates a radial ray. Given these two extremes, one can more conveniently quantify the tightness of logarithmic spirals via an inverse tangent function. Specifically,
\begin{equation}
\tan^{-1}\tau = \tan^{-1}\left(\frac{\frac{dr}{d\theta}}{r}\right) = \phi,
\label{pitch}
\end{equation}
with $\phi$ being referred to as the `pitch angle' of the logarithmic spiral. In general terms, it is the angle between a line tangent to a logarithmic spiral and a line tangent to a circle of radius $r$ that are constructed from and intersect both at $(r,\theta)$, the reference point (see Fig.~\ref{example}). Rearrangement of equation~(\ref{pitch}) implies that $\tau = \tan\phi$. Therefore, in terms of pitch angle, equation~(\ref{spiral_equation}) becomes
\begin{equation}
r = r_0e^{\theta\tan{\phi}},
\label{pitch1}
\end{equation}
and equation~(\ref{spiral_derivative}) becomes\footnote{Equation~(\ref{pitch2}), without the preceding derivation, is provided in equation~6-2 of \citet{Binney:Tremaine:1987}.}
\begin{equation}
\cot{\phi}=r\frac{d\theta}{dr},
\label{pitch2}
\end{equation}
with $|\phi| \leq \frac{{\rm \pi}}{2}$. Therefore, as $\phi\rightarrow0$, the spiral approaches a circle and as $|\phi|\rightarrow\frac{{\rm \pi}}{2}$, the spiral approaches a radial ray.
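The relations between $\tau$, $\phi$ and the per-revolution growth factor are easy to verify numerically; the short sketch below also reproduces the golden-spiral value $|\phi| \approx 17\fdg0$ quoted above:

```python
import math

def radius(theta, r0, phi):
    """Logarithmic spiral r = r0 * exp(theta * tan(phi)), equation (5)."""
    return r0 * math.exp(theta * math.tan(phi))

# Growth per full revolution, equation (3): r_{n+1}/r_n = exp(2*pi*tau)
# with tau = tan(phi).
phi = math.radians(20.0)
ratio = radius(2 * math.pi, 1.0, phi) / radius(0.0, 1.0, phi)
assert abs(ratio - math.exp(2 * math.pi * math.tan(phi))) < 1e-9

# The golden spiral widens by the golden ratio every quarter turn:
# exp((pi/2) * tan(phi)) = (1 + sqrt(5)) / 2 gives |phi| ~ 17.0 deg.
golden = (1 + 5**0.5) / 2
tau_golden = math.log(golden) / (math.pi / 2)
assert abs(math.degrees(math.atan(tau_golden)) - 17.03) < 0.05
```

As $\phi\rightarrow0$ the growth factor tends to $1$ (a circle), and as $|\phi|\rightarrow{\rm \pi}/2$ it diverges (a radial ray), matching the limits just described.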
The sign of $\phi$ indicates the chirality of winding, with positive values representing a clockwise direction of winding with increasing radius (`S-wise') and negative values representing a counterclockwise direction of winding with increasing radius (`Z-wise') for our convention. For galaxies, this merely indicates the chance orientation of a galaxy based on our line of sight to that galaxy. \citet{Hayes:2016} demonstrates through analysis of $458~012$ Sloan Digital Sky Survey \citep[SDSS,][]{York:2000} galaxies contained within the Galaxy Zoo 1 catalogue \citep{Lintott:2008,Lintott:2011} with the \textsc{sparcfire} software \citep{Davis:Hayes:2014}, that the winding direction of arms in spiral galaxies, as viewed from Earth, is consistent with the flip of a fair coin.\footnote{\citet{Shamir:2017} does however notice, from analysis of a smaller set of $162516$ SDSS spiral galaxies with the \textsc{ganalyzer} software \citep{Ganalyzer,Shamir:2011}, a slight bias of $82244$ spiral galaxies with clockwise handedness versus $80272$ with counterclockwise handedness.} Ergo, for our purposes in this paper, we only consider the absolute value of pitch angle with regard to any derived relationship.
\section{Data and Methodology}\label{DM}
Our sample of galaxies was chosen from the ever-growing list of galaxies with directly measured black hole masses, by which we mean their sphere of gravitational influence has supposedly been spatially resolved. This includes measurements via proper stellar motion, stellar dynamics, gas dynamics and stimulated astrophysical masers (we do not include SMBH masses estimated via reverberation mapping methods). Additionally, we only consider measurements offering a specific mass rather than an upper or lower limit. This criterion yielded a sample of 44 galaxies (see Table~\ref{Sample}).
We carefully considered the implications of variable pitch angles for the same galaxy when viewed in different wavelengths of light. \citet{Pour-Imani:2016} found that pitch angle is statistically more tightly wound (i.e. smaller) when viewed in the light from the older stellar populations. We sought to preferentially measure images that more strongly exhibit the young stellar populations. In doing so, we are able to glimpse the current location of the spiral density wave that is enhancing star formation in the spiral arm. In contrast, older stellar populations were born long ago within the density wave pattern but have since drifted away from the wave after multiple orbits around their galaxy, making the measured pitch angle smaller \citep[see fig.~1 from][]{Pour-Imani:2016}.
Our preferred images are those of ultraviolet light (e.g. \textit{GALEX} \textit{FUV} and \textit{NUV}), which reveals young bright stars still in their stellar nurseries, or 8.0 $\micron$ infrared light (e.g. \textit{Spitzer} \textit{IRAC4}), which is sensitive to light from the warmed dust of star-forming regions. Above all, we sought high-resolution imaging that adequately revealed the spiral structure, regardless of the wavelength of light. For instance, near-IR images often (though not always) reveal smoother spiral structure that is more likely to appear grand design in nature (and from which it is easier to measure pitch angle). This can be seen in the work of \citet{Thornley:1996}, who demonstrates that spirals appearing flocculent in visible wavelengths of light may appear as grand design spirals if viewed in near-IR wavelengths.
Previous papers exploring the $M_{\rm BH}$--$\phi$ relation have used a single method to measure the value of $\phi$. \citet{Seigar:2008} and \citet{Berrier:2013} both exclusively used two-dimensional fast Fourier transform (\textsc{2dfft}) analysis. Here, we have employed multiple methods to ensure the most reliable pitch angle measurements. Pitch angles were measured using a new template fitting software called \textsc{spirality} \citep{Shields:2015,Spirality} and \textsc{2dfft} software \citep{Davis:2012,2DFFT}. Additionally, computer vision software \citep{Davis:Hayes:2014} was utilized to corroborate the pitch angle measurements.
All of these methods first compensate for the random inclination angle of a galaxy's disc by de-projecting it to a face-on orientation.\footnote{It is interesting to note that the act of measuring pitch angle itself also yields a good indication of the true inclination angle of a galaxy \citep{Poltorak:2007}.} Inclination angles were estimated from each galaxy's outer isophote ellipticity, and these inclination angles were subsequently used to de-project the galaxies via the method of \citet{Davis:2012}. Even with the use of multiple software routines that invoke varied methods of measuring pitch angle, it sometimes remains difficult to clearly analyse flocculent spiral structure. To overcome this, one can apply multiple image processing techniques such as `symmetric component isolation' (see \citealt{Davis:2012}, their section 5.1 and \citealt{Shields:2015}, their sections 3.2 and 3.3) to enhance the spiral structure for an adequate measurement. Even so, when presented with images of poor quality, one needs to be mindful that all methods are unfavourably contaminated with spurious high-pitch-angle signals in the presence of low signal-to-noise.
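The de-projection step can be illustrated geometrically. The sketch below shows only the thin-disc correction and assumes the position angle (`pa_deg`, a name introduced here for illustration) is already known; the full procedure of \citet{Davis:2012} involves additional steps:

```python
import math

def deproject(x, y, incl_deg, pa_deg=0.0):
    """Thin-disc de-projection sketch: rotate the sky coordinates so the
    line of nodes lies along x, then stretch the apparent minor axis by
    1/cos(i). Only the geometric core of the idea is shown here; the
    actual procedure of Davis et al. (2012) is more involved."""
    pa = math.radians(pa_deg)
    ct, st = math.cos(pa), math.sin(pa)
    xr = x * ct + y * st      # rotate into the disc frame
    yr = -x * st + y * ct
    return xr, yr / math.cos(math.radians(incl_deg))

# A circular orbit inclined at 60 deg projects to a 2:1 ellipse;
# de-projection restores the unit axis ratio.
xd, yd = deproject(1.0, 0.5, 60.0)
assert abs(xd - 1.0) < 1e-9 and abs(yd - 1.0) < 1e-9
```

Because the stretch acts only on the apparent minor axis, an error in the estimated inclination propagates directly into the measured pitch angle, which is why high inclinations are problematic.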
\begin{landscape}
\begin{table}
\caption{Sample of 44 spiral galaxies with directly measured black hole masses.
\textbf{Columns:}
(1) Galaxy name.
(2) Morphological type, mostly from HyperLeda and NED.
(3) Bulge morphology (`C' = classical bulge, `P' = pseudobulge and `N' = bulge-less).
(4) Bulge morphology reference.
(5) Spiral arm classification \citep{Elmegreen:1987}.
(6) Luminosity distance, mostly from HyperLeda and NED.
(7) Distance reference.
(8) Major diameter, mostly from NED.
(9) Black hole mass, adjusted to the distances in Column 7.
(10) Measurement method for black hole mass (`g' = gas dynamics, `m' = maser, `p' = stellar proper motion and `s' = stellar dynamics).
(11) Black hole mass reference.
(12) Harmonic mode (i.e. number of dominant spiral arms) measured by \textsc{spirality}.
(13) Logarithmic spiral arm pitch angle.
(14) Inclination angle used for de-projection.
(15) Telescope used for pitch angle measurement (images acquired primarily from NED or MAST).
(16) Photometric filter used for pitch angle measurement.
(17) Resolution of pitch angle measurement (i.e. Gaussian PSF FWHM).
(18) Pitch angle reference.
(19) Measurement method for pitch angle measurement (`T' = template fitting, `F' = \textsc{2dfft} and `V' = computer vision).
\textbf{References:}
(1) \citet{Kormendy:Ho:2013}.
(2) \citet{Hu:2009}.
(3) \citet{Greene:2016}.
(4) \citet{Zoccali:2014}.
(5) \citet{Kormendy:2013}.
(6) \citet{Saglia:2016}.
(7) \citet{Fisher:Drory:2010}.
(8) \citet{Hu:2008}.
(9) \citet{Nowak:2010}.
(10) \citet{Sandage:1981}.
(11) \citet{Sani:2011}.
(12) \citet{Gadotti:2012}.
(13) \citet{Berrier:2013}.
(14) \citet{Tully:2008}.
(15) Luminosity distance computed using redshift (usually from NED) and the cosmographic parameters of \citet{Planck:2015}.
(16) \citet{Yamauchi:2012}.
(17) \citet{Boehle:2016}.
(18) \citet{Riess:2012}.
(19) \citet{Radburn-Smith:2011}.
(20) \citet{Pudge}.
(21) \citet{Tully:1988}.
(22) \citet{Terry:2002}.
(23) \citet{Sorce:2014}.
(24) \citet{Lagattuta:2013}.
(25) \citet{Kudritzki:2012}.
(26) \citet{Lee:2013}.
(27) \citet{Honig:2014}.
(28) \citet{Humphreys:2013}.
(29) \citet{Bose:2014}.
(30) \citet{Jacobs:2009}.
(31) \citet{Silverman:2012}.
(32) \citet{McQuinn:2016}.
(33) \citet{Tully:2015}.
(34) \citet{Gao:2016}.
(35) \citet{McQuinn:2017}.
(36) \citet{Kuo:2013}.
(37) \citet{Greenhill:2003}.
(38) \citet{Reid:2013}.
(39) \citet{Greenhill:2003a}.
(40) \citet{Tadhunter:2003}.
(41) \citet{Gao:2017}.
(42) \citet{Bender:2005}.
(43) \citet{Rodriguez-Rico:2006}.
(44) \citet{Lodato:2003}.
(45) \citet{Onishi:2015}.
(46) \citet{Atkinson:2005}.
(47) \citet{Cappellari:2008}.
(48) \citet{Devereux:2003}.
(49) \citet{Yamauchi:2004}.
(50) \citet{Hicks:2008}.
(51) \citet{Davies:2006}.
(52) \citet{Onken:2014}.
(53) \citet{Pastorini:2007}.
(54) \citet{Brok:2015}.
(55) \citet{Jardel:2011}.
(56) Private value from K. Gebhardt \citep{Kormendy:2011}.
(57) \citet{Greenhill:1997}.
(58) \citet{Blais-Ouellette:2004}.
(59) \citet{Wold:2006}.
(60) This work.
(61) \citet{Vallee:2015}.
(62) \citet{Pour-Imani:2016}.
(63) \citet{Davis:2014}.
}
\label{Sample}
\begin{tabular}{llccccccccccrcllccc}
\hline
Galaxy name & \multicolumn{1}{c}{Type} & Bulge & Rf. & AC & Dist. & Rf. & Size & $\log({M_{\rm BH}/{\rm M_{\sun}}})$ & Met. & Rf. & $m$ & \multicolumn{1}{c}{$|\phi|$} & $i$ & \multicolumn{1}{c}{Telescope} & \multicolumn{1}{c}{Filter} & Res. & Rf. & Met. \\
& & & & & (Mpc) & & ($\arcmin$) & & & & & \multicolumn{1}{c}{($\degr$)} & ($\degr$) & & & ($\arcsec$) & \\
(1) & \multicolumn{1}{c}{(2)} & (3) & (4) & (5) & (6) & (7) & (8) & (9) & (10) & (11) & (12) & \multicolumn{1}{c}{(13)} & (14) & \multicolumn{1}{c}{(15)} & \multicolumn{1}{c}{(16)} & (17) & (18) & (19) \\
\hline
Circinus & SABb & P & 1 & & $4.21$ & 14 & $6.9$ & $6.25^{+0.07}_{-0.08}$ & m & 39 & 2 & $17.0\pm3.9$ & $48.8$ & \textit{HST} & \textit{F215N} & $0.19$ & 60 & T \\
Cygnus A & SB$^a$ & C & 2 & & $258.4$ & 15 & $0.45$ & $9.44^{+0.11}_{-0.14}$ & g & 40 & 2 & $2.7\pm0.2$ & $0$ & \textit{HST} & \textit{F450W} & $0.08$ & 60 & T \\
ESO558-G009 & Sbc & P & 3 & & $115.4$ & 15 & $1.6$ & $7.26^{+0.03}_{-0.04}$ & m & 41 & 2 & $16.5\pm1.3$ & $75.2$ & \textit{HST} & \textit{F814W} & $0.08$ & 60 & F \\
IC 2560 & SBb & P,C & 1,3 & & $31.0$ & 16 & $3.2$ & $6.49^{+0.08}_{-0.10}$ & m & 41 & 2 & $22.4\pm1.7$ & $66.4$ & \textit{HST} & \textit{F814W} & $0.08$ & 60 & T \\
J0437+2456$^b$ & SB & P & 3 & & $72.2$ & 15 & $0.8$ & $6.51^{+0.04}_{-0.05}$ & m & 41 & 2 & $16.9\pm4.1$ & $65.2$ & \textit{HST} & \textit{F814W} & $0.06$ & 60 & T \\
Milky Way & SBbc & P,C & 1,4 & & $0.008$ & 17 & & $6.60\pm0.02$ & p & 17 & $4^c$ & $13.1\pm0.6$ & & & & & 61 & \\
Mrk 1029 & S & P & 3 & & $136.9$ & 15 & $0.8$ & $6.33^{+0.10}_{-0.13}$ & m & 41 & 2 & $17.9\pm2.1$ & $0$ & \textit{HST} & \textit{F160W} & $0.09$ & 60 & T \\
NGC 0224 & SBb & C & 1 & & $0.75$ & 18 & $190$ & $8.15^{+0.22}_{-0.10}$ & s & 42 & 1 & $8.5\pm1.3$ & $78.9$ & \textit{GALEX} & \textit{NUV} & $4.85$ & 13 & F \\
NGC 0253 & SABc & P & 5 & & $3.47$ & 19 & $27.5$ & $7.00\pm0.30^d$ & g & 43 & 2 & $13.8\pm2.3$ & $73.0$ & \textit{Spitzer} & \textit{IRAC4} & $1.91$ & 60 & F \\
NGC 1068 & SBb & P,C & 1,6 & 3 & $10.1$ & 14 & $7.1$ & $6.75\pm0.02$ & m & 44 & 3 & $17.3\pm1.9$ & $42.2$ & SDSS & \textit{u} & $1.06$ & 60 & F \\
NGC 1097 & SBb & P & 7 & 12 & $24.9$ & 20 & $9.3$ & $8.38\pm0.03$ & g & 45 & 2 & $9.5\pm1.3$ & $48.4$ & \textit{Spitzer} & \textit{IRAC4} & $1.97$ & 62 & F \\
NGC 1300 & SBbc & P & 1 & 12 & $14.5$ & 14 & $6.2$ & $7.71^{+0.17}_{-0.12}$ & g & 46 & 2 & $12.7\pm2.0$ & $30.2$ & du Pont & \textit{B} & $0.69$ & 63 & F \\
NGC 1320 & Sa & P & 3 & & $37.7$ & 21 & $1.9$ & $6.78^{+0.16}_{-0.26}$ & m & 41 & 1 & $19.3\pm2.0$ & $35.7$ & \textit{HST} & \textit{F330W} & $0.03$ & 60 & F \\
NGC 1398 & SBab & C & 6 & 6 & $24.8$ & 14 & $7.1$ & $8.03\pm0.08$ & s & 6 & 1 & $9.7\pm0.7$ & $42.3$ & \textit{GALEX} & \textit{FUV} & $4.20$ & 60 & T \\
NGC 2273 & SBa & P & 1 & & $31.6$ & 22 & $3.2$ & $6.97\pm0.03$ & m & 41 & 2 & $15.2\pm3.9$ & $42.1$ & \textit{HST} & \textit{F336W} & $0.11$ & 60 & T \\
NGC 2748 & Sbc & P & 1 & & $18.2$ & 23 & $3.0$ & $7.54^{+0.15}_{-0.23}$ & g & 46 & 1 & $6.8\pm2.2$ & $74.3$ & \textit{Spitzer} & \textit{IRAC1} & $1.89$ & 60 & T \\
NGC 2960 & Sa & P & 6 & & $71.1$ & 24 & $1.8$ & $7.06\pm0.03$ & m & 41 & 1 & $14.9\pm1.9$ & $58.3$ & \textit{HST} & \textit{F336W} & $0.05$ & 60 & F \\
NGC 2974 & SB & C & 8 & & $21.5$ & 14 & $3.5$ & $8.23\pm0.05$ & s & 8,47 & 3 & $10.5\pm2.9$ & $69.0$ & \textit{GALEX} & \textit{FUV} & $4.17$ & 60 & T \\
NGC 3031 & SBab & C & 1 & 12 & $3.48$ & 25 & $26.9$ & $7.83^{+0.11}_{-0.07}$ & g & 48 & 2 & $13.4\pm2.3$ & $53.4$ & \textit{Spitzer} & \textit{IRAC4} & $1.74$ & 60 & T \\
NGC 3079 & SBcd & P & 6 & & $16.5$ & 14 & $7.9$ & $6.38^{+0.08}_{-0.10}$ & m & 49 & 2 & $20.6\pm3.8$ & $78.9$ & \textit{Spitzer} & \textit{IRAC2} & $1.76$ & 60 & F \\
NGC 3227 & SABa & P & 1 & 7 & $21.1$ & 22 & $5.4$ & $7.86^{+0.17}_{-0.25}$ & g,s & 50,51 & 2 & $7.7\pm1.4$ & $70.3$ & JKT & H$\alpha$ & $1.87$ & 60 & F \\
NGC 3368 & SABa & P\&C & 9 & 8 & $10.7$ & 26 & $7.6$ & $6.89^{+0.08}_{-0.10}$ & g,s & 9 & 2 & $14.0\pm1.4$ & $0$ & VATT 1.8m & \textit{R} & $0.62$ & 13 & F \\
NGC 3393 & SBa & P & 1 & & $55.8$ & 15 & $2.2$ & $7.49^{+0.05}_{-0.06}$ & m & 41 & 3 & $13.1\pm2.5$ & $34.5$ & CTIO 0.9m & \textit{B} & $0.99$ & 13 & F \\
NGC 3627 & SBb & P & 6 & 7 & $10.6$ & 26 & $9.1$ & $6.95\pm0.05$ & s & 6 & 2 & $18.6\pm2.9$ & $53.9$ & \textit{Spitzer} & \textit{IRAC4} & $1.90$ & 62 & F \\
NGC 4151 & SABa & C & 6 & 5 & $19.0$ & 27 & $6.3$ & $7.68^{+0.15}_{-0.60}$ & g,s & 50,52 & 3 & $11.8\pm1.8$ & $0$ & VLA & 21 cm & $4.71$ & 13 & F \\
NGC 4258 & SABb & P,C & 7\&10,1 & & $7.60$ & 28 & $18.6$ & $7.60\pm0.01$ & m & 28 & 1 & $13.2\pm2.5$ & $67.2$ & \textit{GALEX} & \textit{NUV} & $4.70$ & 60 & T \\
\hline
\end{tabular}
\end{table}
\end{landscape}
\begin{landscape}
\begin{table}
\contcaption{}
\begin{tabular}{llccccccccccrcllccc}
\hline
Galaxy name & \multicolumn{1}{c}{Type} & Bulge & Rf. & AC & Dist. & Rf. & Size & $\log({M_{\rm BH}/{\rm M_{\sun}}})$ & Met. & Rf. & $m$ & \multicolumn{1}{c}{$|\phi|$} & $i$ & \multicolumn{1}{c}{Telescope} & \multicolumn{1}{c}{Filter} & Res. & Rf. & Met. \\
& & & & & (Mpc) & & ($\arcmin$) & & & & & \multicolumn{1}{c}{($\degr$)} & ($\degr$) & & & ($\arcsec$) & \\
(1) & \multicolumn{1}{c}{(2)} & (3) & (4) & (5) & (6) & (7) & (8) & (9) & (10) & (11) & (12) & \multicolumn{1}{c}{(13)} & (14) & \multicolumn{1}{c}{(15)} & \multicolumn{1}{c}{(16)} & (17) & (18) & (19) \\
\hline
NGC 4303 & SBbc & P & 7 & 9 & $12.3$ & 29 & $6.5$ & $6.58^{+0.07}_{-0.26}$ & g & 53 & 2 & $14.7\pm0.9$ & $0$ & \textit{GALEX} & \textit{NUV} & $4.38$ & 60 & T \\
NGC 4388 & SBcd & P & 1 & & $17.8$ & 23 & $4.84$ & $6.90^{+0.04}_{-0.05}$ & m & 40 & 2 & $18.6\pm2.6$ & $74.0$ & KPNO 2.3m & $K_S$ & $1.29$ & 60 & F \\
NGC 4395 & SBm & N & 10 & 1 & $4.76$ & 30 & $13.2$ & $5.64^{+0.22}_{-0.12}$ & g & 54 & 2 & $22.7\pm3.6$ & $36.2$ & \textit{GALEX} & \textit{FUV} & $4.20$ & 60 & T \\
NGC 4501 & Sb & P & 6 & 9 & $11.2$ & 31 & $6.9$ & $7.13\pm0.08$ & s & 6 & 2 & $12.2\pm3.4$ & $35.7$ & \textit{GALEX} & \textit{NUV} & $4.10$ & 60 & T \\
NGC 4594 & Sa & P,C & 11\&12,1 & & $9.55$ & 32 & $8.7$ & $8.81\pm0.03$ & s & 55 & 1 & $5.2\pm0.4$ & $80.7$ & \textit{Spitzer} & \textit{IRAC4} & $2.23$ & 60 & T \\
NGC 4699 & SABb & P\&C & 6 & 3 & $23.7$ & 14 & $3.8$ & $8.34\pm0.05$ & s & 6 & 1 & $5.1\pm0.4$ & $50.7$ & \textit{GALEX} & \textit{NUV} & $3.86$ & 60 & T \\
NGC 4736 & SBab & P & 1 & 3 & $4.41$ & 30 & $11.2$ & $6.78^{+0.09}_{-0.11}$ & s & 56 & 1 & $15.0\pm2.3$ & $32.9$ & \textit{GALEX} & \textit{FUV} & $3.90$ & 62 & F \\
NGC 4826 & Sab & P & 1 & 6 & $5.55$ & 23 & $10.0$ & $6.07^{+0.10}_{-0.12}$ & s & 56 & 3 & $24.3\pm1.5$ & $62.5$ & VLA & $21$ cm & $7.26$ & 60 & F \\
NGC 4945 & SBc & P & 1 & & $3.72$ & 33 & $20.0$ & $6.15\pm0.30^d$ & m & 57 & 2 & $22.2\pm3.0$ & $80.5$ & 2MASS & $K_S$ & $2.77$ & 60 & F \\
NGC 5055 & Sbc & P & 7 & 3 & $8.87$ & 35 & $12.6$ & $8.94^{+0.09}_{-0.11}$ & g & 58 & 1 & $4.1\pm0.4$ & $67.9$ & \textit{GALEX} & \textit{FUV} & $3.80$ & 60 & T \\
NGC 5495 & SBc & P & 3 & & $101.1$ & 15 & $1.4$ & $7.04^{+0.08}_{-0.09}$ & m & 41 & 2 & $13.3\pm1.4$ & $38.2$ & \textit{HST} & \textit{F814W} & $0.08$ & 60 & F \\
NGC 5765b & SABb & P & 3 & & $133.9$ & 34 & $0.7$ & $7.72\pm0.03$ & m & 41 & 2 & $13.5\pm3.9$ & $0$ & \textit{HST} & \textit{F814W} & $0.08$ & 60 & T \\
NGC 6264 & SBb & P & 1 & & $153.9$ & 36 & $0.81$ & $7.51\pm0.02$ & m & 41 & 2 & $7.5\pm2.7$ & $49.8$ & \textit{HST} & \textit{F110W} & $0.57$ & 60 & V \\
NGC 6323 & SBab & P & 1 & & $116.9$ & 15 & $1.1$ & $7.02\pm0.02$ & m & 41 & 1 & $11.2\pm1.3$ & $68.2$ & \textit{HST} & \textit{F336W} & $0.08$ & 60 & T \\
NGC 6926 & SBc & P & 13 & 5 & $87.6$ & 37 & $1.9$ & $7.74^{+0.26}_{-0.74}$ & m & 37 & 2 & $9.1\pm0.7$ & $73.3$ & 2MASS & \textit{J} & $2.97$ & 60 & T \\
NGC 7582 & SBab & P & 1 & & $19.9$ & 20 & $5.0$ & $7.67^{+0.09}_{-0.08}$ & g & 59 & 1 & $10.9\pm1.6$ & $59.4$ & \textit{GALEX} & \textit{FUV} & $3.92$ & 60 & T \\
UGC 3789 & SABa & P & 1 & & $49.6$ & 38 & $1.6$ & $7.06^{+0.02}_{-0.03}$ & m & 41 & 2 & $10.4\pm1.9$ & $55.5$ & \textit{HST} & \textit{F438W} & $0.08$ & 60 & T \\
UGC 6093 & SBbc & P & 3 & & $164.1$ & 15 & $0.94$ & $7.45\pm0.04$ & m & 41 & 2 & $10.2\pm0.9$ & $24.7$ & \textit{HST} & \textit{F814W} & $0.08$ & 60 & T \\
\hline
\multicolumn{19}{l}{$^a$ Cygnus A displays a nuclear bar and spiral arms at a galactocentric radius $<2\farcs25$ ($2.82$ kpc).} \\
\multicolumn{19}{l}{$^b$ SDSS J043703.67+245606.8} \\
\multicolumn{19}{l}{$^c$ Meta-analysis by \citet{Vallee:2015}.} \\
\multicolumn{19}{l}{$^d$ A factor of 2 uncertainty has been assigned here.} \\
\end{tabular}
\end{table}
\end{landscape}
Fully aware of the inherent bias of algorithms to be confused by high-pitch angle noise,\footnote{This is akin to the persistence of low-frequency noise in FFT analysis. Noise abounds in frequencies that correspond to wavelengths of the order of the sampling range. For pitch angle analysis, low frequencies are high-pitch angle patterns with wavelengths of the order of the radial width of the annulus of a galactic disc. They experience less azimuthal winding than low-pitch angle, high-frequency patterns with shorter wavelengths, which wrap around a greater azimuthal range of the galaxy and potentially repeat their spiral pattern across the annulus of a galactic disc.} we took care to identify the fundamental pitch angle of each individual galaxy. Rather than blindly quoting the strongest Fourier pitch angle frequency, our analysis sought out secondary and perhaps tertiary Fourier pitch angle frequencies that might represent the true, fundamental pitch angle. By applying multiple, independent software routines (see Appendix~\ref{demo}), we were confident in our ability to identify and rule out false pitch angle measurements. Collectively, this approach improves upon past efforts because, for each galaxy, we selected the most appropriate method and avoided the known failure modes of the individual routines.
Additionally --- unless care is taken --- we note that barred galaxies tend to be biased towards higher pitch angle values due to the presence of large central bars. This was, however, readily checked by varying the innermost radius of the region fit for spiral structure. The measured pitch angle starts to spike once the bar begins to influence the result. Even if such careful steps are taken to remove the influence of bars, the fact remains that much of the inner radial range of the galaxy is unusable for pitch angle measurement.
In contrast, unbarred galaxies can have spiral patterns that encompass the entire radial range of a galaxy except for the bulge (if present). Therefore, the main requirement for accurate pitch angle measurement is the presence of spiral arms that span large azimuthal ranges around the galaxy. The easiest galaxies to measure have spiral arm patterns that wrap around a significant fraction of the galaxy; patterns that wrap a full $2\pi$~radians become quite simple to measure.
Admittedly, it is challenging to model all the varying morphologies of spiral galaxies as possessing perfectly logarithmic spiral arms of constant pitch angle. To mitigate this difficulty, one focuses on identifying the spiral arm \textit{segments} that are brightest and closest to the galactic centre, but beyond any central bars. Specifically, we favour stable stretches of constant pitch angle that are not at the outermost radial edge of a galaxy. In doing so, our pitch angles avoid, as far as possible, potential external tidal influence on the spiral arm geometry.
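Since the measurements assume logarithmic spirals, $r(\theta) = r_0\,e^{\theta\tan\phi}$, a constant pitch angle $\phi$ appears as a straight line in the $(\theta, \ln r)$ plane. The following sketch (our own illustration, not one of the measurement routines used above) recovers $\phi$ from traced arm coordinates by a simple linear fit:

```python
import numpy as np

def log_spiral(r0, pitch_deg, theta):
    """Logarithmic spiral: r(theta) = r0 * exp(theta * tan(phi))."""
    return r0 * np.exp(theta * np.tan(np.radians(pitch_deg)))

def fit_pitch_angle(theta, r):
    """Recover the pitch angle (degrees) from a linear fit of ln(r)
    against theta; the slope of that line equals tan(phi)."""
    slope, _intercept = np.polyfit(theta, np.log(r), 1)
    return np.degrees(np.arctan(slope))
```

In practice, the fit would be restricted to the stable arm segments described above; a pitch angle that spikes as the inner fitting radius shrinks flags contamination by a central bar.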
\section{Analysis and Results}\label{AR}
We performed linear fits using the \textsc{bces} (bivariate correlated errors and intrinsic scatter) regression method \citep{BCES},\footnote{We used a version of the \textsc{bces} software translated into the \textsc{python} programming language by R. S. Nemmen for use in astronomical applications \citep{Nemmen:2012}.} which takes into account measurement error in both coordinates and intrinsic scatter in data. For our ($\phi$, $M_{\rm BH}$) data, we use the \textsc{bces} (Y|X) fitting method, which minimizes the residuals in the ${\rm Y}=\log{M_{\rm BH}}$ direction, and the \textsc{bces} Bisector fitting method, which bisects the angle between the \textsc{bces} (Y|X) and the \textsc{bces} (X|Y)\footnote{The \textsc{bces} (X|Y) regression minimizes the residuals in the ${\rm X} = |\phi|$ direction.} slopes. We find from analysis of the full sample of 44 galaxies (see Fig.~\ref{plot}) that the \textsc{bces} (Y|X) regression yields a slope and intercept\footnote{To reduce the uncertainty on the intercept, we performed a regression of $\log({M_{\rm BH}/{\rm M_{\sun}}})$ on ($|\phi| - |\phi|_{\rm median}$), with $|\phi|_{\rm median}\equiv15\degr$ being the approximate median integer value of $|\phi|$.} such that
\begin{equation}
\log({M_{\rm BH}/{\rm M_{\sun}}}) = (7.01\pm0.07) - (0.171\pm0.017)[|\phi|-15\degr],
\label{M-phi}
\end{equation}
with intrinsic scatter $\epsilon=0.33\pm0.08$~dex and a total root mean square (rms) scatter $\Delta = 0.43$~dex in the $\log{M_{\rm BH}}$ direction.\footnote{The intrinsic scatter is the quadratic difference between the total rms scatter and the measurement uncertainties.} The quality of the fit can be described by a Pearson correlation coefficient of $r = -0.88$ and a $p$-value of $5.77\times10^{-15}$ for the null hypothesis of no correlation.
As pointed out in \citet{Novak:2006}, since there is no natural division of the variables into `dependent' and `independent' variables in black hole scaling relations, we prefer to represent the $M_{\rm BH}$--$\phi$ relation with a \textit{symmetric} treatment of the variables, as is the case in the \textsc{bces} Bisector regression. However, given that the error bars on the logarithm of the black hole masses are much smaller than the error bars on the pitch angles (see Table~\ref{Sample}), the \textsc{bces} (X|Y) regression, and thus also the symmetric treatment of our data, results in the same relation (equation~\ref{M-phi}) as the asymmetric regression performed above. We additionally used the modified \textsc{fitexy} routine from \citet{Tremaine:2002} and obtained consistent results.
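For convenience, equation~(\ref{M-phi}) is straightforward to evaluate directly; a minimal sketch with its best-fitting coefficients (the function name is ours):

```python
def log_mbh_from_pitch(phi_deg, slope=-0.171, intercept=7.01, pivot=15.0):
    """Predicted log10(M_BH / M_sun) from the spiral arm pitch angle in
    degrees; only the absolute value of the pitch angle enters."""
    return intercept + slope * (abs(phi_deg) - pivot)
```

Applied to the Milky Way's $|\phi| = 13\fdg1$, this predicts $\log({M_{\rm BH}/{\rm M_{\sun}}}) \approx 7.3$, roughly 0.7~dex above the measured value of $6.60$, consistent with its outlier status discussed in Section~\ref{Outliers}.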
\begin{figure}
\includegraphics[clip=true,trim=0mm 0mm 0mm 0mm,width=\columnwidth]{f2.pdf}
\caption{Black hole mass (Table~\ref{Sample}, Column 9) versus the absolute value of the pitch angle in degrees (Table~\ref{Sample}, Column 13), represented as red dots bounded by black error bars. Equation~(\ref{M-phi}) is the solid green line (which represents the result of the error-weighted \textsc{bces} (Y|X) regression of $\log{M_{\rm BH}}$ on $|\phi|$). The $1\sigma$ confidence band (smaller dark shaded region) and the $1\sigma$ total rms scatter band (larger light shaded region) depict the error associated with the fit parameters (slope and intercept) and the rms scatter about the best fit of equation~(\ref{M-phi}), respectively. The three galaxies with questionable measurements (see Section \ref{Outliers}) are labelled. For comparison, we have also plotted the ordinary least squares (Y|X) linear regression from \citet{Seigar:2008} and from \citet{Berrier:2013}, represented by a dotted magenta and a dashed cyan line, respectively.}
\label{plot}
\end{figure}
\begin{table*}
\caption{\textsc{bces} (Y|X) linear regressions for the expression $\log({M_{\rm BH}/{\rm M_{\sun}}}) = A[|\phi|-15\degr]+B$.
\textbf{Columns:}
(1) Fit number.
(2) Sample description.
(3) Sample size.
(4) Slope.
(5) $\log({M_{\rm BH}/{\rm M_{\sun}}})$--intercept at $|\phi|=15\degr$.
(6) Intrinsic scatter in the $\log{M_{\rm BH}}$ direction.
(7) Total rms scatter in the $\log{M_{\rm BH}}$ direction.
(8) Pearson correlation coefficient.
(9) $p$-value for the null hypothesis of no correlation.
}
\label{fits}
\begin{tabular}{clccccccl}
\hline
Fit & \multicolumn{1}{c}{Sample} & $N$ & $A$ & $B$ & $\epsilon$ & $\Delta$ & $r$ & \multicolumn{1}{c}{$p$-value} \\
& & & (dex/deg) & (dex) & (dex) & (dex) & & \\
(1) & \multicolumn{1}{c}{(2)} & (3) & (4) & (5) & (6) & (7) & (8) & \multicolumn{1}{c}{(9)} \\
\hline
1 & All & 44 & $-0.171\pm0.017$ & $7.01\pm0.07$ & $0.33\pm0.08$ & 0.43 & $-0.88$ & $5.77\times10^{-15}$ \\
2 & Pseudobulges + hybrids & $37^a$ & $-0.153\pm0.018$ & $6.99\pm0.07$ & $0.31\pm0.08$ & 0.41 & $-0.85$ & $1.68\times10^{-11}$ \\
3 & Classical bulges + hybrids & $13^a$ & $-0.169\pm0.025$ & $7.13\pm0.16$ & $0.31\pm0.08$ & 0.41 & $-0.90$ & $2.31\times10^{-5}$ \\
4 & Barred & 35 & $-0.188\pm0.024$ & $6.96\pm0.09$ & $0.35\pm0.09$ & 0.46 & $-0.86$ & $2.66\times10^{-11}$ \\
5 & Unbarred & 9 & $-0.143\pm0.020$ & $7.11\pm0.12$ & $0.33\pm0.08$ & 0.43 & $-0.92$ & $4.92\times10^{-4}$ \\
6 & $m=2$ & 26 & $-0.188\pm0.028$ & $7.00\pm0.09$ & $0.41\pm0.11$ & 0.49 & $-0.86$ & $1.79\times10^{-8}$ \\
7 & $m\neq2$ & 18 & $-0.153\pm0.019$ & $7.05\pm0.10$ & $0.28\pm0.09$ & 0.40 & $-0.88$ & $1.20\times10^{-6}$ \\
\hline
\multicolumn{9}{l}{$^a$ Seven galaxies (IC 2560, the Milky Way, NGC 1068, NGC 3368, NGC 4258, NGC 4594 and NGC 4699)} \\
\multicolumn{9}{l}{potentially have both types of bulge morphology. The bulge-less galaxy NGC 4395 is excluded.}
\end{tabular}
\end{table*}
\subsection{Sub-samples}
We have explored the $M_{\rm BH}$--$\phi$ relation for various subsets that segregate different types of bulges and overall morphologies: pseudobulges, classical bulges, barred and unbarred galaxies. The results of this analysis are presented in Table~\ref{fits} and Fig.~\ref{plot2}. \citet{Graham:2008} and \citet{Hu:2008} presented evidence that barred/pseudobulge galaxies do not follow the same $M_{\rm BH}$--$\sigma$ scaling relation as unbarred/classical-bulge galaxies, and several authors have speculated that SMBHs do not correlate with galaxy discs \citep[e.g.][]{Kormendy:2011}. However, recent work by \citet{Simmons:2017} indicates that disc-dominated galaxies do indeed co-evolve with their SMBHs. We present evidence that SMBHs clearly correlate with galactic discs in as much as the existence of an $M_{\rm BH}$--$\phi$ relation demands such a correlation \citep{Treuthardt:2012}. Furthermore, Table~\ref{Sample} reveals that most of the galaxies in our sample are alleged in the literature to contain pseudobulges. That is, SMBHs in alleged pseudobulges correlate with their galaxy's discs (Table~\ref{fits}).
\begin{figure}
\includegraphics[clip=true,trim= 0mm 0mm 0mm 0mm,width=\columnwidth]{f3.pdf}
\caption{Similar to Fig.~\ref{plot}. Galaxies with bulges only classified as classical are indicated by pentagons. Galaxies with bulges only classified as pseudobulges are indicated by circles. Galaxies with bulges ambiguously classified as having either classical, pseudo or both types of bulges are labelled as having hybrid bulges and are marked with squares. NGC 4395, being the only bulge-less galaxy in our sample, is marked with a diamond. Markers filled with the colour red represent galaxies with barred morphologies and markers filled with the colour blue represent galaxies with unbarred morphologies. The full and sub-sample \textsc{bces} (Y|X) linear regressions are plotted as lines with various styles and colours. Fits 1--5 from Table~\ref{fits} are depicted as a dotted black line, a solid green line, a dotted magenta line, an alternating dash--dotted red line and a dashed blue line, respectively. The three galaxies with questionable measurements (see Section \ref{Outliers}) are labelled. Error bars and confidence regions have not been included for clarity.}
\label{plot2}
\end{figure}
We further acknowledge that the label `pseudobulge' is often an ambiguous moniker. The qualifications that differentiate pseudobulges from classical bulges are extensive \citep{Fisher:Drory:2016} and are often difficult to determine definitively \citep{Savorgnan:2016:II,Graham:2016b}. This can be observed from the seven hybrid galaxies in our sample (see Table~\ref{Sample}, Column 3) that either have conflicting pseudobulge versus classical bulge classifications in the literature or are stated as possessing both a pseudobulge and classical bulge, simultaneously.
Observing Fig.~\ref{plot2}, we note that the six galaxies (Cygnus A, NGC 0224, NGC 1398, NGC 2974, NGC 3031 and NGC 4151) classified unambiguously as possessing classical bulges all lie above the best-fitting linear regression for the entire sample. Since the regression naturally divides the sample roughly in half about the line of best fit, each galaxy individually has a 50~per~cent chance of lying above it, making the probability that these specific six galaxies all lie above the line of best fit 1 chance in 64. This suggests that classical bulges tend to have higher black hole masses at a given pitch angle. However, it is important to note the diminished statistical significance of the classical bulge sample, given its small size of only 13 galaxies (seven of which have hybrid bulge morphologies). Furthermore, this sample includes all three galaxies with questionable measurements (see Section \ref{Outliers}). What is of interest, though, is that the galaxies alleged to have pseudobulges define a tight relation; they are not randomly distributed in the $M_{\rm BH}$--$\phi$ diagram.
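The 1-in-64 figure follows from treating each galaxy's position above or below the line of best fit as an independent fair coin flip; a quick arithmetic check (a sketch of the reasoning, not part of the published analysis):

```python
from math import comb

# Under the null hypothesis, each galaxy independently lies above the
# best-fitting line with probability 1/2, so the chance that all six
# classical-bulge galaxies do so is binomial:
p_all_above = comb(6, 6) * 0.5 ** 6  # = 1/64, about 1.6 per cent
```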
We also find that the majority of our sample consists of barred morphologies. All of the relations in Table~\ref{fits} are similar (within the quoted margins of error) in both their slopes and intercepts, except for the slopes of the barred and unbarred samples. The barred sample has a statistically dissimilar (i.e. error bars that do not overlap), steeper slope than the unbarred sample. Again, this observation rests on small number statistics for the unbarred sample, though the quoted uncertainties should reflect this. The observed dissimilarity is such that barred galaxies tend to have more massive black holes than unbarred galaxies with equivalent pitch angles for $|\phi| \loa 11\fdg8$, and vice versa.
Finally, we investigate whether the number of spiral arms affects the determination of the $M_{\rm BH}$--$\phi$ relation. We compare galaxies with two dominant spiral arms ($m=2$) to those with any other count of dominant spiral arms ($m\neq2$). For all galaxies (except the Milky Way), we use the \textsc{spirality} software to count the number of spiral arms (see the middle and right-hand panels of Fig.~\ref{spirality}). In the end, we find that the two samples are statistically equivalent (see Table~\ref{fits}, Fits 6 and 7).
\subsection{Questionable measurements}\label{Outliers}
Cygnus A has the most massive SMBH in our spiral galaxy sample; it is the only spiral galaxy with an SMBH mass greater than one billion solar masses. While numerous early-type galaxies are known to exceed the billion solar mass mark, it is uncommon for spiral galaxies to achieve this mass. Cygnus A is also the most distant galaxy in our sample, with the most ambiguous morphological classification. Furthermore, the bar and spiral arms in Cygnus A are nuclear features unlike the large-scale features in the discs of all other galaxies in our sample. It may be that Cygnus A is an early-type galaxy with an intermediate-scale disc hosting a spiral, cf. CG 611 \citep{Graham:2017}. Recently, \citet{Perley:2017} discovered what is potentially a secondary SMBH near the central SMBH in Cygnus A, further complicating our understanding of this galaxy.
NGC 4594, the `Sombrero' galaxy, is notorious for being simultaneously elliptical and spiral, behaving like two galaxies, one inside the other \citep{Gadotti:2012}.
As for the Milky Way, our Galaxy has been determined to have an uncommonly tight spiral structure (albeit measured with difficulty by astronomers living inside of it) for its relatively low-mass SMBH. It is worth noting that published values of the Milky Way's pitch angle have varied widely, spanning $3\degr\leq|\phi|\leq28\degr$. A meta-analysis of these published values yields a best-fitting absolute value of $13\fdg1\pm0\fdg6$ \citep{Vallee:2015}, which is what we used here. In fact, the most recent measurement of the Milky Way's pitch angle by \citet{Rastorguev:2017} is even smaller ($|\phi| = 10\fdg4\pm0\fdg3$), and thus even more of an outlier if applied to the $M_{\rm BH}$--$\phi$ relation.
Removing these three galaxies does not change any of the results in Table~\ref{fits} by more than the 1$\sigma$ level.
\subsection{$M_{\rm BH}$--$\sigma$ relation for spiral galaxies}
Here, we analyse the $M_{\rm BH}$--$\sigma$ relationship for our sample of 44 spiral galaxies. We obtained the majority of our central velocity dispersion ($\sigma$) measurements from the HyperLeda database (see Table~\ref{sigma_table}). Literature values were available for all galaxies except for NGC 6926. Unlike the $M_{\rm BH}$--$\phi$ relation, a marked difference arises between the various \textsc{bces} regressions of our $M_{\rm BH}$--$\sigma$ data. We therefore present the results of both the \textsc{bces} (Y|X) and the \textsc{bces} Bisector regressions.
Plots depicting the overall fit, delineating barred/unbarred morphologies as well as the different bulge morphologies, can be seen in Figs~\ref{sigma_morph}~\&~\ref{sigma_bulge_morph}, respectively. It is evident from these figures that three galaxies stand out as noticeable outliers: NGC 4395, NGC 5055 and Cygnus A. NGC 4395 has an extremely low velocity dispersion, Cygnus A has the largest uncertainty on $\sigma$ of any galaxy in our sample, and NGC 5055 appears to have an uncharacteristically low velocity dispersion for a galaxy hosting such a large black hole. We have excluded these three galaxies from all linear regressions (see Table~\ref{sigma_fits}), leaving us with a sample of 40 spiral galaxies.
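The fitted relations in Table~\ref{sigma_fits} take the form $\log({M_{\rm BH}/{\rm M_{\sun}}}) = A\log[\sigma/(200~{\rm km~s^{-1}})]+B$; a minimal sketch (the function name is ours) evaluating it with the Bisector coefficients for the full sample (Fit 1, $A=5.65$, $B=8.06$):

```python
import numpy as np

def log_mbh_from_sigma(sigma_kms, A=5.65, B=8.06):
    """Predicted log10(M_BH / M_sun) from the central velocity
    dispersion (km/s), normalized at sigma = 200 km/s."""
    return A * np.log10(sigma_kms / 200.0) + B
```

At the normalization point $\sigma = 200$~km~s$^{-1}$, the prediction reduces to the intercept $B$.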
\begin{table}
\caption{Central velocity dispersions.
\textbf{Columns:}
(1) Galaxy name.
(2) Checkmark indicates the galaxy has been identified to possess a bar.
(3) Central velocity dispersion (km~s$^{-1}$).
(4) Central velocity dispersion reference.
}
\label{sigma_table}
\begin{tabular}{lccl}
\hline
Galaxy name & Bar? & $\sigma$ & Reference \\
& & (km~s$^{-1}$) & \\
(1) & (2) & (3) & (4) \\
\hline
Circinus & \checkmark & $149\pm18$ & HyperLeda \\
Cygnus A & \checkmark & $270\pm90$ & \citet{Kormendy:Ho:2013} \\
ESO558-G009 & & $170^{+21}_{-19}$ & \citet{Greene:2016} \\
IC 2560 & \checkmark & $141\pm10$ & \citet{Kormendy:Ho:2013} \\
J0437+2456 & \checkmark & $110^{+13}_{-12}$ & \citet{Greene:2016} \\
Milky Way & \checkmark & $105\pm20$ & \citet{Kormendy:Ho:2013} \\
Mrk 1029 & & $132^{+16}_{-14}$ & \citet{Greene:2016} \\
NGC 0224 & \checkmark & $157\pm4$ & HyperLeda \\
NGC 0253 & \checkmark & $97\pm18$ & HyperLeda \\
NGC 1068 & \checkmark & $176\pm9$ & HyperLeda \\
NGC 1097 & \checkmark & $195^{+5}_{-4}$ & \citet{Bosch:2016} \\
NGC 1300 & \checkmark & $222\pm30$ & HyperLeda \\
NGC 1320 & & $110\pm10$ & HyperLeda \\
NGC 1398 & \checkmark & $197\pm18$ & HyperLeda \\
NGC 2273 & \checkmark & $141\pm8$ & HyperLeda \\
NGC 2748 & & $96\pm10$ & HyperLeda \\
NGC 2960 & & $166^{+17}_{-15}$ & \citet{Saglia:2016} \\
NGC 2974 & \checkmark & $233\pm4$ & HyperLeda \\
NGC 3031 & \checkmark & $152\pm2$ & HyperLeda \\
NGC 3079 & \checkmark & $175\pm12$ & HyperLeda \\
NGC 3227 & \checkmark & $126\pm6$ & HyperLeda \\
NGC 3368 & \checkmark & $120\pm4$ & HyperLeda \\
NGC 3393 & \checkmark & $197\pm28$ & HyperLeda \\
NGC 3627 & \checkmark & $127\pm6$ & HyperLeda \\
NGC 4151 & \checkmark & $96\pm10$ & HyperLeda \\
NGC 4258 & \checkmark & $133\pm7$ & HyperLeda \\
NGC 4303 & \checkmark & $96\pm8$ & HyperLeda \\
NGC 4388 & \checkmark & $99\pm9$ & HyperLeda \\
NGC 4395 & \checkmark & $27\pm5$ & HyperLeda \\
NGC 4501 & & $166\pm7$ & HyperLeda \\
NGC 4594 & & $231\pm3$ & HyperLeda \\
NGC 4699 & \checkmark & $191\pm9$ & HyperLeda \\
NGC 4736 & \checkmark & $108\pm4$ & HyperLeda \\
NGC 4826 & & $99\pm5$ & HyperLeda \\
NGC 4945 & \checkmark & $121\pm18$ & HyperLeda \\
NGC 5055 & & $100\pm3$ & HyperLeda \\
NGC 5495 & \checkmark & $166^{+20}_{-18}$ & \citet{Greene:2016} \\
NGC 5765b & \checkmark & $162^{+20}_{-18}$ & \citet{Greene:2016} \\
NGC 6264 & \checkmark & $158\pm15$ & \citet{Kormendy:Ho:2013} \\
NGC 6323 & \checkmark & $158\pm26$ & \citet{Kormendy:Ho:2013} \\
NGC 6926 & \checkmark & & \\
NGC 7582 & \checkmark & $148\pm19$ & HyperLeda \\
UGC 3789 & \checkmark & $107\pm12$ & \citet{Kormendy:Ho:2013} \\
UGC 6093 & \checkmark & $155^{+19}_{-17}$ & \citet{Greene:2016} \\
\hline
\end{tabular}
\end{table}
\begin{figure}
\includegraphics[clip=true,trim= 0mm 0mm 0mm 0mm,width=\columnwidth]{f4.pdf}
\caption{Black hole mass (Table~\ref{Sample}, Column 9) versus central velocity dispersion (Table~\ref{sigma_table}, Column 3), represented as red dots bounded by black error bars. The \textsc{bces} (Y|X) and the \textsc{bces} Bisector regressions of Fit \# 1 from Table~\ref{sigma_fits} are depicted by a dashed magenta and a solid green line, respectively. The $1\sigma$ confidence band (smaller grey shaded region) depicts the error associated with the fit parameters (slope and intercept). The $1\sigma$ total rms scatter about the best-fitting \textsc{bces} Bisector regression is shown by the larger green shaded region. We consider the three labelled galaxies as outliers and they are not included in any of our linear regressions involving central velocity dispersion.}
\label{sigma_morph}
\end{figure}
\begin{figure}
\includegraphics[clip=true,trim= 0mm 0mm 0mm 0mm,width=\columnwidth]{f5.pdf}
\caption{Similar to Fig.~\ref{sigma_morph}. Symbols are the same as in Fig.~\ref{plot2}. \textsc{bces} Bisector linear regressions to the full and sub-samples are plotted as lines with various styles and colours. Bisector Fits 1--5 from Table~\ref{sigma_fits} are depicted as a dotted black line, a solid green line, a dotted magenta line, an alternating dash--dotted red line and a dashed blue line, respectively.}
\label{sigma_bulge_morph}
\end{figure}
\begin{table*}
\caption{\textsc{bces} linear regressions for the expression $\log({M_{\rm BH}/{\rm M_{\sun}}}) = A\log[\sigma/(200~{\rm km~s^{-1}})]+B$. Similar to Table~\ref{fits}, except that a different expression has been fit, and two types of regression are used.
}
\label{sigma_fits}
\begin{tabular}{cclccccccc}
\hline
Fit & Regression & \multicolumn{1}{c}{Sample} & $N$ & $A$ & $B$ & $\epsilon$ & $\Delta$ & $r$ & $p$-value \\
& & & & & (dex) & (dex) & (dex) & & \\
(1) & (2) & \multicolumn{1}{c}{(3)} & (4) & (5) & (6) & (7) & (8) & (9) & (10) \\
\hline
\multirow{2}{*}{1} & (Y|X) & \multirow{2}{*}{All} & \multirow{2}{*}{$40^a$} & $3.88\pm0.89$ & $7.80\pm0.16$ & $0.54\pm0.02$ & 0.57 & \multirow{2}{*}{0.56} & \multirow{2}{*}{$1.72\times10^{-4}$} \\
& Bisector & & & $5.65\pm0.79$ & $8.06\pm0.13$ & $0.58\pm0.03$ & 0.63 & & \\
\hline
\multirow{2}{*}{2} & (Y|X) & \multirow{2}{*}{Pseudobulges + hybrids} & \multirow{2}{*}{$35^{a,b}$} & $3.97\pm1.03$ & $7.73\pm0.19$ & $0.51\pm0.02$ & 0.55 & \multirow{2}{*}{0.56} & \multirow{2}{*}{$5.03\times10^{-4}$} \\
& Bisector & & & $5.76\pm0.91$ & $8.01\pm0.17$ & $0.55\pm0.03$ & 0.61 & & \\
\hline
\multirow{2}{*}{3} & (Y|X) & \multirow{2}{*}{Classical bulges + hybrids} & \multirow{2}{*}{$12^{a,b}$} & $4.15\pm1.47$ & $8.08\pm0.20$ & $0.62\pm0.01$ & 0.62 & \multirow{2}{*}{0.63} & \multirow{2}{*}{$2.85\times10^{-2}$} \\
& Bisector & & & $5.78\pm1.34$ & $8.26\pm0.15$ & $0.66\pm0.01$ & 0.67 & & \\
\hline
\multirow{2}{*}{4} & (Y|X) & \multirow{2}{*}{Barred} & \multirow{2}{*}{$32^a$} & $3.63\pm0.92$ & $7.78\pm0.17$ & $0.53\pm0.02$ & 0.56 & \multirow{2}{*}{0.52} & \multirow{2}{*}{$2.13\times10^{-3}$} \\
& Bisector & & & $5.45\pm0.86$ & $8.04\pm0.15$ & $0.57\pm0.03$ & 0.62 & & \\
\hline
\multirow{2}{*}{5} & (Y|X) & \multirow{2}{*}{Unbarred} & \multirow{2}{*}{$8^a$} & $4.52\pm2.15$ & $7.82\pm0.33$ & $0.64\pm0.02$ & 0.68 & \multirow{2}{*}{0.66} & \multirow{2}{*}{$7.31\times10^{-2}$} \\
& Bisector & & & $6.06\pm1.54$ & $8.06\pm0.24$ & $0.68\pm0.02$ & 0.73 & & \\
\hline
\multirow{2}{*}{6} & (Y|X) & \multirow{2}{*}{$m=2$} & \multirow{2}{*}{$23^a$} & $3.49\pm1.18$ & $7.60\pm0.23$ & $0.51\pm0.01$ & 0.54 & \multirow{2}{*}{0.46} & \multirow{2}{*}{$2.55\times10^{-2}$} \\
& Bisector & & & $5.50\pm1.27$ & $7.92\pm0.23$ & $0.54\pm0.02$ & 0.60 & & \\
\hline
\multirow{2}{*}{7} & (Y|X) & \multirow{2}{*}{$m\neq2$} & \multirow{2}{*}{$17^a$} & $3.88\pm1.18$ & $7.98\pm0.18$ & $0.56\pm0.02$ & 0.58 & \multirow{2}{*}{0.64} & \multirow{2}{*}{$5.83\times10^{-3}$} \\
& Bisector & & & $5.30\pm0.96$ & $8.17\pm0.15$ & $0.59\pm0.03$ & 0.64 & & \\
\hline
\multicolumn{10}{l}{$^a$ Excluding NGC 6926 (for lack of a velocity dispersion measurement) and the outliers: Cygnus A, NGC 4395 and NGC 5055.} \\
\multicolumn{10}{l}{$^b$ Seven galaxies (IC 2560, the Milky Way, NGC 1068, NGC 3368, NGC 4258, NGC 4594 and NGC 4699) potentially have} \\
\multicolumn{10}{l}{both types of bulge morphology. The bulge-less galaxy NGC 4395 is excluded.}
\end{tabular}
\end{table*}
Many works \citep[e.g.][]{Graham:2008,Hu:2008,Gultekin:2009,Graham:Scott:2013} have identified an offset ($\approx 0.3$~dex) in SMBH mass between galaxies with barred and unbarred morphologies in the $M_{\rm BH}$--$\sigma$ diagram. However, we do not observe any such offset in our $M_{\rm BH}$--$\sigma$ diagram. There are several possible reasons for this. Since \citet{Graham:Scott:2013}, five of the unbarred spiral galaxies in their sample have been identified as possessing bars: NGC 224, NGC 3031, NGC 4388 and NGC 4736 \citep[][and references therein]{Savorgnan:2016:II}, plus NGC 6264 \citep[][and references therein]{Saglia:2016}. As such, our sample of 40 contains only 8 unbarred galaxies. In addition, many of the black hole mass estimates and distances have been revised. Moreover, \citet{Graham:Scott:2013} observed the offset using a sample that includes elliptical, lenticular and spiral morphologies; since our sample contains neither elliptical nor lenticular galaxies, a direct comparison of our results is difficult.
Two of the `unbarred' galaxies are NGC 4826 (the `Black Eye Galaxy') and Mrk 1029, both alleged to have pseudobulges. Unexpectedly, all of the eight galaxies without bars have been claimed to host pseudobulges, structures thought to be associated with bars.
Concerning the linear regressions for the various subsamples (see Table~\ref{sigma_fits}), we find no statistically significant difference in slope or intercept among all but one of the fits when using the same type of regression. With the symmetric bisector regression, the `Classical Bulges + Hybrids' sample (Fit \#3 from Table~\ref{sigma_fits}) is noticeably offset above the other fits (see Fig. \ref{sigma_bulge_morph}): its intercept lies 0.20~dex above that of the `All' sample and 0.25~dex above that of the `Pseudobulges + Hybrids' sample. However, the intercept values do have overlapping error bars. We additionally used the modified \textsc{fitexy} routine from \citet{Tremaine:2002} and obtained consistent results.
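As a quick cross-check of the quoted offsets, the following minimal Python sketch evaluates the Bisector fits at $\sigma = 200$~km~s$^{-1}$, where the predicted $\log M_{\rm BH}$ reduces to the intercept $B$. The $(A, B)$ pairs are transcribed from Table~\ref{sigma_fits}; the dictionary labels are shorthand for illustration only.

```python
import math

def log_mbh(sigma, A, B):
    """log10(M_BH / M_sun) = A * log10(sigma / 200 km s^-1) + B."""
    return A * math.log10(sigma / 200.0) + B

# BCES Bisector (A, B) pairs transcribed from Table sigma_fits
fits = {
    "All": (5.65, 8.06),
    "Pseudobulges + hybrids": (5.76, 8.01),
    "Classical bulges + hybrids": (5.78, 8.26),
}

# At sigma = 200 km/s the log term vanishes, so log M_BH = B
offset_all = log_mbh(200.0, *fits["Classical bulges + hybrids"]) \
           - log_mbh(200.0, *fits["All"])
offset_pseudo = log_mbh(200.0, *fits["Classical bulges + hybrids"]) \
              - log_mbh(200.0, *fits["Pseudobulges + hybrids"])
print(round(offset_all, 2), round(offset_pseudo, 2))  # 0.2 0.25
```

The offsets reduce to simple intercept differences at the normalization velocity dispersion, matching the 0.20 and 0.25 dex values quoted above.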
\section{Discussion and Implications}\label{DI}
The spiral density wave theory has been invoked for approximately six decades \citep{Shu:2016} as the agent of `grand design' spiral genesis in disc galaxies. \citet{Lin:Shu:1966} specify that the density wave theory predicts a relationship between spiral arm pitch angle and the central enclosed mass of a galaxy.\footnote{In their chapter~6, \citet{Binney:Tremaine:1987} provide an extensive discussion of the implications and potential limitations of the spiral wave dispersion relation.} They calculated the pitch angle as a ratio between the density of material in the galaxy's disc and a quantity built from the frequencies of orbital motions in the disc, which is itself dependent on the central gravitational mass. That is, the pitch angle at a given radius is determined by the density of the medium in that region of the disc and the gravitational mass enclosed within that orbital galactocentric radius.\footnote{See equation~4.1 from \citet{Lin:Shu:1966}. This formula is also represented in equation~1 from \citet{Davis:2015} in terms of pitch angle and later simplified in their equation~2. However, please note that due to a corrigendum, their equation~1 is not dimensionally correct and needs to be divided by the galactocentric radius on the righthand side of the formula.}
\subsection{Variable pitch angle}
So far this decade, several studies have addressed variations in pitch angle measurements caused by factors such as the wavelength of the light used and the galactocentric radius. \citet{Martinez-Garcia:2014} find, from a study of five galaxies across the optical spectrum, that the absolute value of the pitch angle gradually increases at longer wavelengths for three of the galaxies. This result can be contrasted with the larger study of \citet{Pour-Imani:2016}, who use a sample of 41 galaxies imaged from \textit{FUV} to 8.0~$\micron$ wavelengths of light. They find that the absolute pitch angle of a galaxy is statistically smaller\footnote{The analysis of \citet{Pour-Imani:2016} indicates that the most prominent observed difference is between the 3.6~$\micron$ pitch angle ($|\phi_{3.6\,\mu{\rm m}}|$) and the 8.0~$\micron$ pitch angle ($|\phi_{8.0\,\mu{\rm m}}|$). The typical difference is: $|\phi_{8.0\,\mu{\rm m}}| - |\phi_{3.6\,\mu{\rm m}}| = 3\fdg75\pm1\fdg25$.} (tighter winding) when measured using light that highlights old stellar populations, and larger (looser winding) when using light that highlights young star-forming regions. As noted in Section \ref{DM}, to account for this variation we strove to identify the most fundamental pitch angle for each galaxy, preferentially in light that traces star-forming regions. In doing so, we should be glimpsing the true location of the spiral density wave, which itself should be related to the central mass and ultimately to the SMBH mass of a galaxy.
It is also worth considering the disc size--luminosity relations for different morphological types. \citet{Graham:Worley:2008} indicate that disc scalelength is roughly constant with Hubble type, but the disc central surface brightness shows a definite trend of decreasing with Hubble type. For this to occur, the luminosity of the disc must also become fainter with increasing Hubble type. Therefore, the discs become thinner (decreased surface density) as the spiral arms become more open in the late-type spirals. This would indicate that in more open spiral patterns, the overall disc density is small. However, this only implies that the \textit{stellar} density has decreased in the disc. It is likely that the gas density, and thus the gas fraction of the total density, is higher in gas-rich, late-type galaxies. Indeed, \citet{Davis:2015} present evidence (see their equation~2) that the gas density (as compared to the stellar density) in the disc is the primary indicator of spiral tightness since it is primarily within the gas that the density wave propagates.
For the case of variable pitch angle with galactocentric radius, \citet{Savchenko:2013} find that most galaxies cannot be described by a single pitch angle. In those cases, the absolute value of the pitch angle decreases with increasing galactocentric radius (i.e. the arms become more tightly wound). This is in agreement with \citet{Davis:2015}, who predict a natural tendency for pitch angle to decrease with increasing galactocentric radius due to conditions inherent in the density wave theory. In particular, as galactocentric radius increases, the enclosed mass must increase while the gas density in the disc typically decreases; both factors tend to tighten the spiral arm pattern (decrease the pitch angle). However, this can be contrasted with the findings of \citet{Davis:Hayes:2014}, whose observations indicate the opposite (i.e. increasing pitch angle with increasing galactocentric radius).
\subsection{Evolution of pitch angle}\label{Evolution}
It is important for the validity of any relationship derived from pitch angle, and for how proposed relations connect to broader galaxy evolution, that spiral patterns not be transient features. Observations of the ubiquity of spiral galaxies, accounting for 56~per~cent of the galaxies in our local Universe \citep{Loveday:1996}, appear to favour the longevity of spiral structure. Over the years, there have been numerous findings from theory and computer simulations of spiral galaxies. \citet{Julian:Toomre:1966} show that spirals can be a transient phenomenon brought on by lumpy perturbers in the disc of a galaxy. Contrarily, \citet{D'Onghia:2013} find that spiral structure can survive long after the original perturbing influence has vanished. However, \citet{Sellwood:Carlberg:2014} argue that if spirals can develop as self-excited instabilities, then the role of heavy clumps in the disc is probably not fundamental to the origin of spiral patterns. Furthermore, they claim that long-lasting spiral structure results from the superposition of several transient spiral modes.\footnote{\citet{Morozov:1991} \& \citet{Morozov:1992} also describe complicated patterns in spiral galaxies as superpositions of unstable hydrodynamic modes.}
\citet{Grand:2013} find from $N$-body simulations that absolute pitch angles tend to decrease with time through a winding-up effect. In contrast, \citet{Shields:2014} find, from analysing a sample of more than 100 galaxies spanning redshifts up to $z=1.2$, that pitch angles appear to have statistically loosened since $z=0.5$. They admit, however, the possibility of selection effects and biases. One possibility is that later-type spiral galaxies with higher pitch angles might not be observed at great distance because of their lower intrinsic luminosity and surface brightness.
Continuing the discussion of simulations, we draw attention to predictions of pitch angles. Our measured pitch angles (for a sample containing only relatively early-type spiral galaxies, predominantly Hubble types Sa \& Sb) do not exceed $\approx25\degr$. The pitch angles presented in this work are consistent with the predictions of \citet{Perez-Villegas:2013} that large-scale, long-lasting spiral structure in galaxies should restrict pitch angle values to a maximum of $\approx15\degr$, $\approx18\degr$ and $\approx20\degr$ for Sa, Sb and Sc galaxies, respectively. Furthermore, \citet{Perez-Villegas:2013} show that chaotic behaviour leads to more transient spiral structure for pitch angle values larger than $\approx30\degr$, $\approx40\degr$ and $\approx50\degr$ for Sa, Sb and Sc galaxies, respectively. If these predictions are applicable, our measured pitch angles should be stable for the vast majority of our sample, and spiral patterns in galaxies of later morphological types than exist in our sample (i.e. Sd) should remain relatively stable even at large pitch angles. Since our sample does not include Sd galaxies, and consists of SMBHs with $M_{\rm BH} \goa 10^6$~${\rm M_{\sun}}$, future work to identify IMBHs via the $M_{\rm BH}$--$\phi$ relation should target galaxies that have pitch angles $\goa30\degr$.
\subsection{External influences}
Another consideration that could influence pitch angle is external agents such as tidal interactions, accretion, harassment, cluster environments, etc. Through the process of investigating the pitch angles of 125 galaxies in the Great Observatories Origins Deep Survey South Field \citep{Dickinson:2003}, \citet{Davis:2010} found little to no difference, on average, between the pitch angles of galaxies in and out of overdense regions, nor between the pitch angles of red and blue spiral galaxies. More recently, \citet{Semczuk:2017} studied $N$-body simulations of a Milky Way-like galaxy orbiting a Virgo-like cluster. These simulations produce tidally induced logarithmic spirals upon pericentre passage around the cluster. Their findings indicate that, similarly to \citet{Grand:2013}, spiral arms wind up with time (decreasing the absolute value of the pitch angle). However, upon each successive pericentre passage, the spirals are again tidally stretched out and the pitch angle loosens, with this cycle repeating indefinitely. Concerning our sample, our galaxies are generally local field galaxies and should have experienced few tidal interactions.
\subsection{Intrinsic scatter}
Our intrinsic scatter in equation~(\ref{M-phi}) is $\approx 77$~per~cent of the total rms scatter, implying that $\approx 77$~per~cent of the scatter comes from intrinsic scatter about the $M_{\rm BH}$--$\phi$ relation and the other $\approx 23$~per~cent arises from measurement error. The median black hole mass measurement error is 19~per~cent or 0.08~dex and the median pitch angle measurement error is 14~per~cent or $1\fdg9$ across our sample of 44 galaxies. Given this much smaller measurement error on $\log{M_{\rm BH}}$ than on $|\phi|$, i.e. 0.08 versus 1.9 in Figs.~\ref{plot}~\&~\ref{plot2}, we note again (see Section \ref{AR}) that the \textsc{bces} (X|Y) regression, and thus the \textsc{bces} Bisector symmetric regression, is almost identical (to two significant figures) to the \textsc{bces} (Y|X) regression.
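The percentage-to-dex conversions quoted above follow from expressing a symmetric fractional error $f$ as $\log_{10}(1+f)$; a minimal sketch of this arithmetic (the function name is ours, for illustration only):

```python
import math

def frac_to_dex(f):
    """Express a symmetric fractional error f as a logarithmic error in dex."""
    return math.log10(1.0 + f)

# Median 19 per cent black hole mass error -> ~0.08 dex, as quoted in the text
print(round(frac_to_dex(0.19), 2))  # 0.08
```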
One major contributing factor to the observed intrinsic scatter in the $M_{\rm BH}$--$\phi$ relation could be attributed to not accounting for the gas fraction in galaxies. Another factor that may be affecting the intrinsic scatter is Toomre's Stability Criterion \citep{Safronov:1960,Toomre:1964}, whose parameter $Q$ is related to gas fraction (since it depends on the gas surface density). Observationally, \citet{Seigar:2005,Seigar:2006,Seigar:2014} find that pitch angle is well-correlated with the galactic shear rate, which depends on the mass contained within a specified galactocentric radius. \citet{Grand:2013} corroborate these results in their $N$-body simulations. Recently, \citet{Kim:2017} have also shown that the pitch angle of nuclear spirals is similarly correlated with the background shear. Therefore, when shear is low, spirals are loosely wound (and vice versa) both on the scale of nuclear spirals in a galactic centre and for spiral arms residing in a galactic disc.
One would expect that late-type galaxies, which tend to have lower shear rates and therefore larger pitch angles, would accordingly have a higher gas fraction. This seems to explain the observational results of \citet{Davis:2015}, who indicate the existence of a Fundamental Plane relationship between bulge mass, disc density and pitch angle. Concerning the $M_{\rm BH}$--$\phi$ relation, if gas fractions were accurately known, it could act as a correcting factor to reduce the intrinsic scatter in the $M_{\rm BH}$--$\phi$ relation by the addition of a third parameter.
\subsection{Comparison to previous studies}
This work updates the previous work of \citet{Seigar:2008} and \citet{Berrier:2013}. In \citet{Seigar:2008}, 5\footnote{In table~2 of \citet{Seigar:2008}, 12 galaxies are listed under the category of `BH Estimates from Direct Measurements'. However, two of those are merely upper limits and five are reverberation mapping estimates.} spiral galaxies with directly measured black hole mass estimates were studied. Five years later, \citet{Berrier:2013} increased this number to 22.\footnote{Of the 22 galaxies in the direct measurement sample of \citet{Berrier:2013}, three measurements were adopted and left unchanged for use in this paper (NGC 224, NGC 3368 and NGC 3393) and not plotted in Fig.~\ref{compare}.} Our current work has doubled the sample size. The median pitch angle of these respective samples has gradually decreased over the years from $17\fdg3$ \citep{Seigar:2008} to $14\fdg4$ \citep{Berrier:2013}, and finally to $13\fdg3$ (this work).
As seen in Fig.~\ref{plot}, the slope of our linear fit is noticeably steeper than that described by the ordinary least-squares (Y|X) linear regression of \citet{Berrier:2013}; slope = $-0.062\pm0.009$~dex~deg$^{-1}$ (or compared to $-0.076\pm0.005$~dex~deg$^{-1}$ found by \citealt{Seigar:2008}). There are several possible explanations. During the intervening 4 yr, many of the distances to the SMBH host galaxies have been revised, and all redshift-dependent distances have been revised with newer cosmographical parameters. Any change in distance produces a proportional change in the estimated black hole mass. Additionally, many of the mass measurements themselves have been updated.
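Because a directly measured black hole mass scales linearly with the adopted distance, a distance revision shifts $\log M_{\rm BH}$ by $\log_{10}(D_{\rm new}/D_{\rm old})$ dex. A hedged sketch of this scaling, with purely illustrative numbers (the function name is ours):

```python
import math

def mass_shift_dex(d_old, d_new):
    """Shift in log10(M_BH) when the adopted distance changes from d_old to
    d_new, assuming M_BH scales linearly with distance (direct measurements)."""
    return math.log10(d_new / d_old)

# Illustrative only: a 10 per cent larger distance raises log M_BH by ~0.04 dex
print(round(mass_shift_dex(10.0, 11.0), 2))  # 0.04
```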
Pitch angle estimates have also evolved since these previous studies (see Fig.~\ref{compare}). Pitch angle measurements conducted by \citet{Seigar:2008} and \citet{Berrier:2013} were exclusively measured via \textsc{2dfft} algorithms. Our work has continued to use this trusted method of pitch angle measurement, but we have also incorporated newer template fitting and computer vision methods. Access to better resolution imaging has also affected our pitch angle measurements even for identical galaxies from previous studies, which employed primarily ground-based imaging. Our work has benefited from an availability of space-based imaging for the majority of our sample. These combined effects enable us to improve the accuracy and precision of pitch angle measurements for these well-studied galaxies.
\begin{figure}
\includegraphics[clip=true,trim= 9mm 0mm 20mm 15mm,width=\columnwidth]{f6.pdf}
\caption{Comparison of our pitch angle measurements, in green, for the 20 galaxies that are common in the directly measured SMBH mass samples of \citet{Seigar:2008}, in blue, and/or \citet{Berrier:2013}, in red. Black error bars are provided for individual measurements and reflect that our measurements are in agreement with these previous studies 70~per~cent of the time.}
\label{compare}
\end{figure}
Access to better resolution imaging also has likely statistically tightened (decreased $|\phi|$) our measured spirals. As previously mentioned, poor resolution has a tendency to bias pitch angle measurements towards looser (increased $|\phi|$) values. Therefore, better spatial resolution should reduce high pitch angle noise and increase the chance of measuring the fundamental pitch angle. This effect could potentially explain why our linear regression has yielded a steeper slope than previous studies, by preferentially `tightening' the low-surface brightness, low-mass SMBH host galaxies that benefit the most from better imaging (galaxies towards the bottom righthand quadrant of Fig.~\ref{plot}). Additionally, the samples of \citet{Seigar:2008} and \citet{Berrier:2013} did not include the four lowest pitch angle galaxies (seen in the extreme upper-left corners of Figs.~\ref{plot}~\&~\ref{plot2}). These four extreme points also contribute to a comparative steepening of the slope of the $M_{\rm BH}$--$\phi$ relation presented in this work.
Six of our galaxies have $>1\sigma$ discrepancies between our pitch angle measurement and previously published values. With Fourier methods of pitch angle measurement, it sometimes occurs that signals are present both at the true pitch angle and at twice (or half) that value; the fundamental pitch angle can thus be overlooked by such codes, which then report a pitch angle in error by a factor of 2, a problem exacerbated in noisy and/or flocculent images. As can be seen in Fig.~\ref{compare}, in all cases where the pitch angle measurements disagree beyond one standard deviation, the discrepancy is approximately a factor of 2. We feel confident overruling previous measurements that differ by a factor of two because we have been able to use multiple software methods and analyse different imaging, such as \textit{GALEX} images, that can better bring out the spiral structure in many cases.
\subsection{Utility of the $M_{\rm BH}$--$\phi$ relation}
The potential to identify galaxies that may host IMBHs will be greatly enhanced via application of the $M_{\rm BH}$--$\phi$ relation (equation~\ref{M-phi}). A sample of candidate late-type galaxies hosting IMBHs could be initially identified from a catalogue of images \citep[e.g.][]{Baillard:2011}. Then, quantitative pitch angle measurements could provide IMBH mass estimates. Finally, a follow-up cross-check with the \textit{Chandra} X-ray archive could validate the existence of these IMBHs by looking for active galactic nuclear emission.
The $M_{\rm BH}$--$\phi$ relation, as defined in equation~(\ref{M-phi}), is capable of interpolating black hole masses in the range $5.42 \leq \log({M_{\rm BH}/{\rm M_{\sun}}}) \leq 9.11$ from their associated pitch angles $24\fdg3 \geq |\phi| \geq 2\fdg7$, respectively. Beyond that, it can be extrapolated for black hole masses down to zero ($|\phi| = 56\fdg0$) or as high as $\log({M_{\rm BH}/{\rm M_{\sun}}}) = 9.57$ ($\phi = 0\degr$). In terms of predicting IMBH masses in the range $10^2 \leq M_{\rm BH}/{\rm M_{\sun}} \leq 10^5$, this would dictate that their host galaxies would be late-type spirals with pitch angles of $44\fdg3 \geq |\phi| \geq 26\fdg7$, respectively. Pitch angles $\goa50\degr$ are very rarely measured in the literature.
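The coefficients of equation~(\ref{M-phi}) are not reproduced in this excerpt, but the quoted endpoints ($\log M_{\rm BH} = 9.57$ at $\phi = 0\degr$ and zero mass at $|\phi| = 56\fdg0$) imply a straight line with slope $\approx -9.57/56.0 \approx -0.171$~dex~deg$^{-1}$. A sketch under that assumption reproduces the quoted interpolation range:

```python
# Assumed linear form inferred from the quoted endpoints; the published
# coefficients of equation (M-phi) are not shown in this excerpt.
INTERCEPT = 9.57           # log(M_BH/M_sun) at phi = 0 deg
SLOPE = -INTERCEPT / 56.0  # dex/deg, from the zero-mass pitch angle of 56.0 deg

def log_mbh_from_pitch(phi_abs_deg):
    """Predict log10(M_BH / M_sun) from the absolute pitch angle in degrees."""
    return INTERCEPT + SLOPE * phi_abs_deg

# Reproduce the quoted interpolation range
print(round(log_mbh_from_pitch(24.3), 2))  # 5.42
print(round(log_mbh_from_pitch(2.7), 2))   # 9.11
```

That the reconstructed line recovers both quoted interpolation endpoints suggests the inferred slope and intercept are self-consistent with the text.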
Gravitational wave detections can aid future extensions of the $M_{\rm BH}$--$\phi$ relation by providing direct estimates of black hole masses, with better accuracy than current astronomical techniques based on electromagnetic radiation, for which the black hole's sphere of gravitational influence needs to be spatially resolved. The Advanced Laser Interferometer Gravitational-Wave Observatory \citep{aLIGO} is capable of detecting gravitational waves generated by black hole merger events with total masses up to 100 ${\rm M_{\sun}}$ \citep{LIGO}. This is right at the transition between stellar mass black holes ($M_{\rm BH} < 10^2$~${\rm M_{\sun}}$) and IMBHs ($10^2 < M_{\rm BH}/{\rm M_{\sun}} < 10^5$).
As the sensitivity and localization abilities of gravitational radiation detectors increase (e.g. with the proposed Evolved Laser Interferometer Space Antenna; \citealt{eLISA} \& \citealt{Danzmann:2015}), we should be able to conduct follow-up electromagnetic observations and potentially glimpse (galaxy) evolution in action. Upcoming ground-based detectors will be able to probe longer wavelength gravitational radiation than is currently possible. In particular, the Kamioka Gravitational Wave Detector (KAGRA) will be sensitive to IMBH mergers up to 2000~${\rm M_{\sun}}$ \citep{Shinkai:2016}, implying that (late-type)--(late-type) galaxy mergers with IMBHs could potentially generate a detectable signal.
It would be easy to assume that the existence of an $M_{\rm BH}$--$\phi$ relation is simply a consequence of the well-known (black hole mass)--(host spheroid mass) relation. However, the efficacy of the $M_{\rm BH}$--$\phi$ relation in predicting black hole masses in bulge-less galaxies may imply otherwise. Moreover, the $M_{\rm BH}$--$\phi$ relation is significantly tighter than expected if it were merely a consequence of other scaling relations (see equation~\ref{indirect}). This will surely become more evident as the population of bulge-less galaxies with directly measured black hole masses grows beyond the current tally of one (NGC 4395). Furthermore, the low scatter in the $M_{\rm BH}$--$\phi$ relation makes it at least as accurate at predicting black hole masses in spiral galaxies as the other known mass scaling relations.
\subsection{Galaxies with (possible) low-mass black holes}
\subsubsection{NGC 4395}
NGC 4395 is the only galaxy in our sample that has been classified as bulge-less \citep{Sandage:1981}. It additionally stands out as having the lowest mass black hole in our sample, at just under half a million solar masses. Furthermore, it is the only Magellanic-type galaxy in our sample. With its classification of SBm, it also exhibits a noticeable barred structure despite lacking any indication of a central bulge.
\subsubsection{M33}
To test the validity of our relation at the low-mass end, we now analyse M33 (NGC 598), which is classified as a bulge-less galaxy \citep{M33,Minniti:1993} and is thought to harbour one of the smallest central black holes, or perhaps none at all. Since studies \citep{Gebhardt:2001,Merritt:2001} only provide upper limits to its black hole mass, we do not use it to determine our relation, but rather to test the $M_{\rm BH}$--$\phi$ relation's extrapolation to the low-mass end. We adopt a luminosity distance of 839~kpc \citep{Gieren:2013} and distance-adjusted black hole mass limits of $\log({M_{\rm BH}/{\rm M_{\sun}}})\leq3.20$ \citep{Gebhardt:2001} and $\log({M_{\rm BH}/{\rm M_{\sun}}})\leq3.47$ \citep{Merritt:2001}. Using a \textit{GALEX} \textit{FUV} image, we measure a pitch angle of $|\phi| = 40\fdg0\pm3\fdg0$. This is in agreement with the measurement of \citet{Seigar:2011}, who reports $|\phi| = 42\fdg2\pm3\fdg0$ from a \textit{Spitzer}/\textit{IRAC1} 3.6~$\micron$ image. Applying equation~(\ref{M-phi}), we obtain $\log({M_{\rm BH}/{\rm M_{\sun}}})=2.73\pm0.70$. Thus, our mass estimate of the potential black hole in M33 is consistent at the $-0.67\sigma$ and $-1.06\sigma$ levels with the published upper limit black hole mass estimates of \citet{Gebhardt:2001} and \citet{Merritt:2001}, respectively.
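The consistency levels quoted above are the offsets of the predicted mass from each published upper limit, expressed in units of the $0.70$~dex prediction uncertainty; a minimal sketch of that arithmetic (the function name is ours, for illustration only):

```python
def n_sigma(predicted, limit, uncertainty):
    """Offset of a predicted value from a published limit, in units of the
    stated prediction uncertainty."""
    return (predicted - limit) / uncertainty

log_m_pred = 2.73  # this work, from |phi| = 40.0 deg via equation (M-phi)
print(round(n_sigma(log_m_pred, 3.20, 0.70), 2))  # -0.67
print(round(n_sigma(log_m_pred, 3.47, 0.70), 2))  # -1.06
```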
\subsubsection{Circinus galaxy}
Lastly, we investigated the recent three-dimensional radiation-hydrodynamic simulation of gas around the low-luminosity AGN of the Circinus Galaxy \citep{Wada:2016}. Their results produce prominent spiral arm structure in the geometrically thick disc surrounding the AGN, assuming a central SMBH mass of $2\times10^6$~${\rm M_{\sun}}$ (consistent with our adopted directly measured mass in Table~\ref{Sample}). We analyse the snapshot of their number density distribution of $\rm{H_2O}$ (see their fig.~2a, left-hand panel). We find an interesting eight-arm structure with a pitch angle of $27\fdg7\pm1\fdg7$ and a radius of $\approx9$ pc. This result differs significantly from our measurement of the pitch angle in the galactic disc ($17\fdg0\pm3\fdg9$). Possible explanations could simply be different physical mechanisms in the vicinity of a black hole's sphere of gravitational influence \citep[$r_h\approx1.2$~pc for Circinus's SMBH, for definition of $r_h$, see][]{Peebles:1972} or the different relative densities of this inner $\approx 9$~pc disc radius versus the much larger $\approx48$~kpc \citep{Jones:1999} disc radius of the entire galaxy. According to the predictions of spiral density wave theory \citep[see equation~2 from][]{Davis:2015}, the higher relative local densities in the inner disc should produce a higher pitch angle spiral density wave than in the larger, more tenuous galactic disc.
\subsection{Vortex nature of spiral galaxy structure}
There exists a sizeable body of literature concerning the study of vortices, cyclones and anticyclones in galaxies and their reproducibility in laboratory fluid dynamic experiments \citep[for a 20 yr review, see][]{Fridman:2007}. Fridman, in particular, has investigated the origin of spiral structure extensively since \citet{Fridman:1978}. Much of his work investigating the motion of gas in discs of galaxies has revealed strong evidence for the existence and nature of vortices \citep[e.g.][]{Fridman:Khoruzhii:1999,Fridman:1999,Fridman:2000,Fridman:2001c,Fridman:2001b,Fridman:2001}. Furthermore, \citet{Chavanis:2002} present a thorough discussion of the statistical mechanics of vortices in galaxies, while \citet{Vatistas:2010} provide an account of the striking similarity between the rotation curves of galaxies and those of terrestrial hurricanes and tornadoes. In addition, \citet{Vorobyov:2006} describe numerical simulations indicating that anticyclones in the gas flow around the location of galactic corotation increase in intensity with increasing absolute value of the galactic spiral arm pitch angle.
In meteorology, it is well-known that the source mechanism for the observed rotation in cyclones and anticyclones is the Coriolis effect caused by the rotation of the Earth. Some speculation has been made postulating that the observed rotation in galaxies is analogously derivative of a Coriolis effect originating from an alleged rotation of the Universe \citep{Li:1998,Chaliasos:2006,Chaliasos:2006b}. Other studies theorise that primeval turbulence in the cosmos (rotation of cells/voids/walls of the cosmic web) instilled angular momenta in forming galaxies \citep{Dallaporta:1972,Jones:1973,Casuso:2015}, rather than rotation of the entire Universe.
There also exists significant literature concerning the similarity of spiral galaxies to turbulent eddies. Early research indicates that large-scale irregularities that occur in spiral galaxies can be described as eddies and are in many ways similar to those that occur in our own atmosphere \citep{Dickinson:1963,Dickinson:1964,Dickinson:1964b}. Subsequent hydrodynamic studies of eddies revealed a possible scenario regarding the formation of rotating spiral galaxies based on the concept of the formation of tangential discontinuity and its decay into eddies of the galactic scale, spawned from protogalactic vorticity in the metagalactic medium \citep{Ushakov:1983,Ushakov:1984,Chernin:1993,Chernin:1996}. \citet{Silk:1972} speculate that galactic-scale turbulent eddies originating prior to the era of recombination were frozen out of the turbulent flow at epochs following recombination.
Direct simulations with rotating shallow water experiments have been used to adequately model vortical structures in planetary atmospheres and oceans \citep{Nezlin:1990b,Nezlin:1990,Nezlin:1990c}. Such experiments are also suitable for modelling the spiral structures of gaseous galactic discs. \citet{Nezlin:1991} hypothesize that experimental Rayleigh friction between shallow water and the bottom of a vessel is physically analogous to friction between structure and stars in the disc of a galaxy. Favourable comparisons exist between the rotation of a compressible inviscid fluid disc and the dynamics observed in hurricanes and spiral galaxies, yielding vortex wave streamlines that are logarithmic spirals \citep{White:1972}.
Curiously, further evidence in nature for a connection between spiral arms and the central mass concentration manifests itself in tropical cyclones \citep{Dvorak:1975}. Indeed, the Hubble-Jeans Sequence lends itself quite well to tropical cyclones, such that those with high wind speeds have large central dense `overcasts' (CDOs) and tightly wound spiral arms while those with low wind speeds have small CDOs and loosely wound spiral arms. One notable difference between galaxies and tropical cyclones is that larger galactic bulges garner higher (S\'ersic indices and) central mass concentrations while tropical cyclones with larger CDOs possess lower central atmospheric pressures. Both mechanisms, whether gravity or pressure, can be described by the nature and effect of their central potential wells on the surrounding spiral structure.
\section*{Acknowledgements}
AWG was supported under the Australian Research Council's funding scheme DP17012923. This research has made use of NASA's Astrophysics Data System. We acknowledge the usage of the HyperLeda data base \citep{HyperLeda}, \url{http://leda.univ-lyon1.fr}. This research has made use of the NASA/IPAC Extragalactic Database (NED), \url{https://ned.ipac.caltech.edu/}. Some of the data presented in this paper were obtained from the Mikulski Archive for Space Telescopes (MAST), \url{http://archive.stsci.edu/}. We used the \textsc{red idl} cosmology routines written by Leonidas and John Moustakas. Finally, we used the statistical and plotting routines written by R. S. Nemmen that accompany his \textsc{python} translation of the \textsc{bces} software.
\bibliographystyle{mnras}
\section{Introduction}
If $A$ is an abelian variety defined over a number field $K$, then we know by the Mordell-Weil Theorem \cite{weil28} that the set of $K$-rational points $A(K)$ is a finitely generated abelian group. Thus, we have
\begin{displaymath}
A(K)\cong A_{tors}(K)\oplus \Z^r,
\end{displaymath}
for a finite group $A_{tors}(K)$, called the \emph{$K$-rational torsion subgroup} of $A$, and a non-negative integer $r$. Given a positive integer $g$, a number field $K$ and a finite group $G$, one can ask whether there exists an abelian variety $A$ of dimension $g$ defined over $K$ such that $A_{tors}(K)\cong G$. This question is hard to answer in general, and the answer is known only in a few cases. If $g=1$ and $K=\Q$, the theorem of Mazur \cite{mazur} classifies all occurring $\Q$-rational torsion subgroups of elliptic curves. One can refine the question about the whole rational torsion subgroup to the question of a rational point of prescribed order $N$. In general, when $g>1$ and $N$ is a positive integer, not much is known.
\begin{table}
\begin{tabular}{|c|c|c|}
\hline
Genus & Torsion Order & Reference\\ \hline
$g=2$ & $N\in\{2,3,\ldots,30,$ & \cite{elkies_web}, \cite{leprevost95}, \cite{leprevost04},\\
&$32,\ldots,36,39,40\}$ & \cite{leprevost93}, \cite{leprevost91}, \cite{ogawa1994}, \\
&&\cite{leprevost93_2},\cite{leprevost91_1},\cite{flynn90},\\
&&\cite{platonov14}\\
\hline
$g=2$ & $N\in\{45,60,63,70\}$ & \cite{howeleprevostpoonen00}, \cite{howe14} \\ \hline
$g=3$ & Subgroup up to order $864$ & \cite{howeleprevostpoonen00} \\ \hline
any $g$ & $N=2g^2+2g+1$ & \cite{pattersonetal08},\cite{flynn91},\cite{leprevost1992}\\
&$N=2g^2+3g+1$&\\\hline
\end{tabular}
\caption{Known examples of points of finite order $N$ on jacobians of genus $g$ curves.}
\end{table}
In this paper we consider the following problem. Given a positive integer $N$, we construct a smooth projective curve $C$ defined over $\Q$ such that the \emph{jacobian} $\Jac(C)$ of $C$ has a $\Q$-rational $N$-division point, that is, a point $D\in\Jac(C)(\Q)$ with $ND=\mathcal O$, where $\mathcal O$ is the identity element of $\Jac(C)$. This paper extends some considerations of the PhD thesis of the author \cite{kronberg16}. First, we want to fix some notation and definitions.
Throughout this paper we consider smooth projective curves $C$, but we will only write down the equations for an affine patch of the curve. Given a curve $C$ defined over a perfect field $K$, we can consider its \emph{jacobian} via the isomorphism $\Jac(C)\cong\faktor{\Div^0(C)}{\mathcal P}$ as the set of degree zero divisors modulo principal divisors on $C$. An element $D\in\Jac(C)$ is called $K$-rational if it is invariant under the action of the absolute Galois group $\Gal(\overline K/K)$, where $\overline K$ is a fixed algebraic closure of $K$, i.e. for every $D_0\in D$ and $\sigma\in \Gal(\overline K/K)$ we have that $D_0^\sigma$ is equivalent to $D_0$. Let $f\in K(C)$ be a function such that there exists a divisor $D\in \Div^0(C)$ and an integer $N$ with $\div(f)=ND$. Then the class of $D$ in $\Jac(C)$ is a $K$-rational $N$-division point. Our goal is, for a given $N>1$, to construct curves $C$ defined over $\Q$ with a function $f\in\Q(C)$ having this property. By being careful in the construction we ensure that the divisor $D$ is not a principal divisor, i.e. the function $f$ is not an $N$-th power. By this method we can construct curves $C$ defined over a perfect field $K$ of genus $g$ with a point of order $N$, where $N$ is linear in terms of $g$.
\begin{definition}
Let $K$ be a perfect field and let $F\in K[X]$ be a separable polynomial with $n:=\deg(F)\geq 5$. A curve $C$ defined by
\begin{displaymath}
C: y^k=F(x),
\end{displaymath}
for some $2\leq k\in\N$ with $\gcd(k,\operatorname{char}(K))=1$ is called a \emph{superelliptic curve}.
\end{definition}
\begin{proposition}
Let $C$ be a superelliptic curve defined over $K$.
\begin{enumerate}
\item $C$ is a smooth affine curve.
\item If $\gcd(k,n)=1$, then there is one point at infinity, denoted by $P_\infty$. \label{prop:item:infinity}
\item If $k\mid n$, then there are $k$ points at infinity denoted by $P_{\infty,i}$ for $1\leq i\leq k$.\label{prop:item:infinity2}
\item If $\gcd(k,n)=1$, then the genus of $C$ is $g(C)=\frac 1 2 (k-1)(n-1)$.\label{prop:item:genus}
\item The equation order $\mathcal O_C:=\faktor{K[X,Y]}{(Y^k-F)}$ is integrally closed in $K(C)$.\label{prop:item:closure}
\item $\Norm{K(C)}{K(x)}(a(x)+b(x)y)=a(x)^k+(-1)^{k+1} b(x)^kF\in K(x)$.
\end{enumerate}
\end{proposition}
\begin{proof}
The first statement of the proposition is immediate from the separability of $F$. Assertions (\ref{prop:item:infinity}), (\ref{prop:item:infinity2}) and (\ref{prop:item:genus}) follow from \cite[Prop 3.7.3]{stichtenoth}, and Assertion (\ref{prop:item:closure}) follows from \cite[Prop 3.5.12]{stichtenoth}. The last statement can be shown by direct computation of the norm.
\end{proof}
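Since $Y^k-F$ is monic in $Y$, the norm in the last assertion equals the resultant $\mathrm{Res}_T\!\left(T^k-F,\,a+bT\right)$, which makes the formula easy to verify symbolically. A minimal SymPy sketch (the concrete $k$, $a$, $b$, $F$ below are arbitrary illustrative choices, not taken from the paper):

```python
from sympy import symbols, resultant, expand

x, T = symbols('x T')
k = 3
a = x**2 + 1          # arbitrary illustrative choices
b = x - 2
F = x**5 + x + 1      # separable, deg >= 5

# Norm_{K(C)/K(x)}(a + b*y) is the product of a + b*y_i over the
# roots y_i of T^k - F, i.e. the resultant Res_T(T^k - F, a + b*T),
# since T^k - F is monic in T.
norm = resultant(T**k - F, a + b*T, T)
assert expand(norm - (a**k + (-1)**(k+1) * b**k * F)) == 0
```

The same check goes through for any $k\geq 2$ after adjusting the exponent and the sign $(-1)^{k+1}$.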
\begin{lemma}\label{lem:support}
Let $C: y^k=F(x)$ be a superelliptic curve defined over a perfect field $K$ with $\gcd(k,\operatorname{char}(K))=1=\gcd(k,\deg(F))$ and let $\psi:=a(x)+b(x)y\in\mathcal O_C$ with $\gcd(a,b)=1$. Then $\div(\psi)=D-\deg(D)P_{\infty}$ for some effective divisor $D$ and a prime $p\in K[X]$ is in the support of $D$ if and only if $p\mid(a^k+(-1)^{k+1}b^kF)$.
\end{lemma}
\begin{proof}
The proof can be found in \cite[Prop. 13]{galbraith02}.
\end{proof}
This lemma relates the norm of a function to the curve it lives on, since a superelliptic curve $C$ is completely determined by fixing $k$ and $F$. We use this fact in the following section.
The same method is used in \cite{leprevost91_1} for the construction of hyperelliptic curves with a point of finite order on their jacobians.
\section{Solving Norm Equations and Torsion}
We now want to relate the norm of a function on a superelliptic curve to the existence of a rational torsion point on the jacobian of the curve.
\begin{lemma}
Let $C: y^k=F(x)$ be a superelliptic curve of genus $g(C)$ defined over a perfect field $K$ and let $\psi:=a(x)+b(x)y\in \mathcal O_C$ with $\gcd(a,b)=1$ such that
\begin{displaymath}
\Norm{K(C)}{K(x)}(\psi) = \varepsilon u(x)^N\in K[x]
\end{displaymath}
with $\varepsilon\in K^*$, $N\in\N$ and $1\leq\deg(u)\leq g(C)$. Then there exists $D_0\in\Jac(C)(K)$ such that $1<\ord(D_0)\mid N$.
\end{lemma}
\begin{proof}
Suppose $\psi:=a(x)+b(x)y\in \mathcal O_C$ such that we have $a^k+(-1)^{k+1}b^kF= \varepsilon u^N$ for some unit $\varepsilon$ and $1\leq\deg(u)\leq g(C)$. Then we apply Lemma \ref{lem:support} to obtain an effective divisor $D$ with $\deg(D)=N\deg(u)$ supported exactly at the primes dividing $u$ with multiplicity $N$. Since $u$ is a polynomial over $K$, the divisor $D$ is defined over $K$. Since $\deg(u)>0$ it follows that $D$ is not only supported at infinity. This gives $\div(\psi)=D-N\cdot\deg(u)P_\infty=N(D_0-\deg(u)P_\infty)$, where $D_0$ and $D$ have the same support and $1<\ord(D_0-\deg(u)P_\infty)\mid N$ as asserted.
\end{proof}
With this lemma we directly obtain the following proposition.
\begin{proposition}
Let $2\leq k\in\N$ and $K$ be a perfect field with $\gcd(k,\operatorname{char}(K))=1$. Let $a,b\in K[X]$ be coprime. Assume $F:=(-1)^{k+1}\frac{u^N-a^k}{b^k}\in K[X]$ is separable for some $u\in K[X]$ with $1\leq\deg(u)\leq \frac{1}{2}(k-1)(\deg(F)-1)$. If we set
\begin{displaymath}
C: y^k=F(x),
\end{displaymath}
then $C$ is a superelliptic curve of genus $g(C)=\frac{1}{2}(k-1)(\deg(F)-1)$ and $\Jac(C)[N](K)\neq\{\mathcal O\}$.
\end{proposition}
By this proposition, we need to find polynomials $a,b,u\in K[X]$ such that $F$ as above is a polynomial. This will be done by fixing the polynomial $b\in K[X]$ and, for a fixed $k$, finding a polynomial $R_1\in K[X]$ whose $k$-th power is congruent to $u$ modulo $b$. The following lemma then allows us to construct a suitable polynomial $a\in K[X]$.
\begin{lemma}[Hensel's Lifting]\label{lem:hensel}
Let $2\leq k\in\N$ and $K$ be a perfect field with $\gcd(k,\operatorname{char}(K))=1$. Let $b,R_1\in K[X]$ such that $\gcd(R_1,b)=1$. If we take $u\in K[X]$ such that $R_1^k\equiv u \pmod b$, then there exist polynomials $R_l\in K[X]$ such that
\begin{align*}
R_l&\equiv R_{l-1} \pmod{b^{l-1}}\\
R_l^k&\equiv u \pmod{b^l}
\end{align*}
for each $2\leq l\in\Z$.
\end{lemma}
We will give a proof of this well-known lemma since the proof is constructive and the construction will be used in the examples we treat in this paper.
\begin{proof}
We will prove this lemma by showing the $l$-th iteration of the construction. Assume $2\leq k\in\N$ and $K$ is a perfect field with $\gcd(k,\operatorname{char}(K))=1$. Let $2\leq l\in\Z$ and suppose $R_{l-1}\in K[X]$ has already been constructed with $R_{l-1}^k\equiv u \pmod {b^{l-1}}$. Set $\lambda_1 := \frac{R_{l-1}^k-u}{b^{l-1}}\in K[X]$. Since $\gcd(R_1,b)=1$ and $R_{l-1}\equiv R_1\pmod b$, we have $\gcd(R_{l-1},b)=1$; together with $\gcd(k,\operatorname{char}(K))=1$ this implies that $kR_{l-1}^{k-1}$ is invertible modulo $b$. Let $\lambda_2\in K[X]$ with $\deg(\lambda_2)<\deg(b)$ be such that $\lambda_2(kR_{l-1}^{k-1})\equiv \lambda_1 \pmod{b}$. For $R_l:= R_{l-1}-\lambda_2b^{l-1}$, the binomial expansion gives $R_l^k\equiv R_{l-1}^k - kR_{l-1}^{k-1}\lambda_2 b^{l-1}\equiv u+(\lambda_1 - kR_{l-1}^{k-1}\lambda_2)b^{l-1}\pmod{b^l}$, and hence
\begin{align*}
R_l&\equiv R_{l-1} \pmod{b^{l-1}}\\
R_l^k&\equiv u \pmod{b^l}.
\end{align*}
\end{proof}
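The construction in the proof is easily carried out in a computer algebra system. The following SymPy sketch performs two lifting steps for $k=3$, using as input the polynomials $b=X^6+3X^4+3X^2-X+1$, $u=X$ and $R_1=X^2+1$ that reappear in the examples below:

```python
from sympy import symbols, rem, quo, invert, expand

X = symbols('X')
k = 3
b = X**6 + 3*X**4 + 3*X**2 - X + 1
u = X
R1 = X**2 + 1                       # starting point: R1^k ≡ u (mod b)
assert rem(expand(R1**k - u), b, X) == 0

def hensel_step(R_prev, l):
    """Lift R_{l-1} with R_{l-1}^k ≡ u (mod b^{l-1}) to R_l (mod b^l)."""
    lam1 = quo(expand(R_prev**k - u), expand(b**(l-1)), X)   # exact by induction
    inv = invert(rem(k*R_prev**(k-1), b, X), b, X)           # (k R^{k-1})^{-1} mod b
    lam2 = rem(expand(inv*lam1), b, X)
    return expand(R_prev - lam2*b**(l-1))

R2 = hensel_step(R1, 2)
R3 = hensel_step(R2, 3)
assert rem(expand(R3**k - u), expand(b**3), X) == 0
```

Each step only requires one exact division, one modular inverse and one reduction, exactly as in the proof.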
This lemma now allows us to find the desired polynomials $F$ and $a$.
\begin{corollary}\label{cor:hensel}
Let $1<k,N$ be integers and $K$ a perfect field such that we have $\gcd(\operatorname{char}(K),k)=1$. Let $b,u\in K[X]$ with $\gcd(b,u)=1$ such that $u$ is a $k$-th power modulo $b$. Then for all $\varepsilon\in K^*$ there exist polynomials $a,F\in K[X]$ such that
\begin{displaymath}
a^k+(-1)^{k+1}b^kF = \varepsilon^k u^N.
\end{displaymath}
\end{corollary}
\begin{proof}
Let $R\in K[X]$ such that
\begin{displaymath}
R^k\equiv u \pmod b.
\end{displaymath}
Since $\gcd(b,u)=1$ by assumption, we have $\gcd(b,R)=1$ and thus we can apply Lemma \ref{lem:hensel} to find a polynomial $R_k\in K[X]$ such that $R_k^k\equiv u \pmod{b^k}$.
Writing $\varepsilon R_k^N = qb^k +a$, with $\deg(a)<k\deg(b)$, we have
\begin{displaymath}
a^k\equiv \varepsilon^k R_k^{kN}\equiv \varepsilon^k u^N\pmod{b^k}
\end{displaymath}
and thus there exists a polynomial $F\in K[X]$ such that
\begin{displaymath}
a^k+(-1)^{k+1}b^kF = \varepsilon^k u^N.
\end{displaymath}
\end{proof}
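A computational sketch of the whole construction of the corollary in SymPy, with the illustrative choices $k=2$, $N=11$, $\varepsilon=1$, $b=X^2+X+1$ and $u=X$ (these inputs are assumptions for illustration; separability of the resulting $F$, and the exact order of the resulting division point, still have to be checked separately):

```python
from sympy import symbols, rem, quo, invert, expand

X = symbols('X')
k, N = 2, 11                  # illustrative choices (eps = 1)
b = X**2 + X + 1
u = X
R = X + 1                     # u is a k-th power mod b: (X+1)^2 ≡ X (mod b)

# Hensel-lift R until R^k ≡ u (mod b^k), as in Lemma \ref{lem:hensel}
for l in range(2, k + 1):
    lam1 = quo(expand(R**k - u), expand(b**(l-1)), X)
    inv = invert(rem(k*R**(k-1), b, X), b, X)
    R = expand(R - rem(expand(inv*lam1), b, X)*b**(l-1))

# a := R^N mod b^k, then F solves the norm equation
# a^k + (-1)^(k+1) b^k F = u^N
a = rem(expand(R**N), expand(b**k), X)
F = quo(expand((-1)**(k+1)*(u**N - a**k)), expand(b**k), X)
assert expand(a**k + (-1)**(k+1)*b**k*F - u**N) == 0
```

With these choices the output is a hyperelliptic curve $y^2=F(x)$ of the kind constructed in \cite{leprevost91_1}, carrying an $11$-division point on its jacobian, provided $F$ turns out to be separable.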
This gives us a tool to construct polynomials $F\in K[X]$ for given integers $k,N$ such that the curve defined by $C:y^k=F(x)$ has a non-trivial rational point in $\Jac(C)[N](K)$. In the next section we give examples of this construction.
\section{Examples}
First we consider the trivial case, where $b=1$. In this case no lifting is needed.
\begin{example}
Let $k\geq 2$ and $a\in \Z[X]$ be a polynomial with $\gcd(a_0,k)=1$, where $a_0$ is the constant coefficient of $a$. For every odd prime $p>k\cdot\deg(a)$ we set $F_p:= a^k-x^p$. Then the curve defined by
\begin{displaymath}
C_{p,k}: y^k=F_p(x)
\end{displaymath}
is a superelliptic curve of genus $g(C_{p,k})=\frac{1}{2}(k-1)(p-1)$ with a $\Q$-rational $p$-torsion point on its jacobian.
\end{example}
In order to verify the example, observe that we have
\begin{displaymath}
\Norm{K(C_{p,k})}{K(x)}(a(x)-y)=a^k-F_p=x^p.
\end{displaymath}
Thus, we only have to check that $F:=F_p$ is a separable polynomial to prove the assertion. For this we compute $\gcd(F,F')$. The formal derivative of $F$ is given by
\begin{displaymath}
F'= -px^{p-1}+ka^{k-1}a'.
\end{displaymath}
Choosing a prime $q$ with $q\mid k$ yields $F'\equiv -px^{p-1}\pmod q$. Since $\gcd(a_0,k)=1$ by assumption we have $F(0)\not\equiv 0 \pmod q$ and thus $\gcd(F,F')\equiv 1\pmod q$. This implies $\gcd(F,F')=1$.
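For a concrete instance, both ingredients of the verification, the norm identity and the separability of $F_p$, can be checked directly in SymPy (the choices $k=2$, $p=5$, $a=x+1$ are an arbitrary illustration):

```python
from sympy import symbols, gcd, diff, expand

x = symbols('x')
k, p = 2, 5                     # p an odd prime with p > k*deg(a)
a = x + 1                       # constant coefficient coprime to k
F = expand(a**k - x**p)         # F_p = a^k - x^p

assert expand(a**k - F) == x**p        # Norm(a - y) = a^k - F_p = x^p
assert gcd(F, diff(F, x)) == 1         # F_p is separable
```

By the example above, the jacobian of $y^2=F(x)$ then has a $\Q$-rational $5$-torsion point.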
\begin{example}
For this example, let $k=3$ and set $b:=X^6 + 3X^4 + 3X^2 - X + 1$ and $u:= X$. Then
\begin{displaymath}
(X^2+1)^3\equiv u \pmod b.
\end{displaymath}
Since $b$ is irreducible over $\Q$ we have $\gcd(b,u)=1$ and we can apply Hensel's Lemma to obtain
\begin{displaymath}
R_3^3\equiv u \pmod{b^3}
\end{displaymath}
with
\begin{align*}
9R_3 =&X^{17} - 2X^{16} + 9X^{15} - 18X^{14} + 36X^{13} - 73X^{12} + 90X^{11}
- 172X^{10}+ 162X^9 \\
& - 255X^8 + 212X^7 - 248X^6 + 185X^5 - 161X^4 + 93X^3 - 53X^2
+ 22X + 1.
\end{align*}
Suppose
\begin{displaymath}
F_p:=-\frac{a_p^3-X^p}{b^3}\in\Q[X]
\end{displaymath}
is separable, where $p\geq 51$ is a prime and $a_p\in\Q[X]$ with $\deg(a_p)<18$ and
\begin{displaymath}
a_p \equiv R_3^p \pmod{b^3}.
\end{displaymath}
Then the curve given by
\begin{displaymath}
C_p: Y^3=F_p(X)
\end{displaymath}
is a superelliptic curve of genus $p-19$ and has a $\Q$-rational point of order $p$ on its jacobian.
\end{example}
With the computer algebra system \textsc{Magma}, we have tested for all primes $51\leq p\leq 2539$ that the resulting polynomial $F_p$ is separable.
The following example comes from \cite{leprevost91_1}, where this method is carried out for hyperelliptic curves.
\begin{example}[See {\cite{leprevost91_1}}]
The hyperelliptic curve of genus two defined by
\begin{displaymath}
y^2=-4x^5+(t^2+10t+1)x^4-4t(2t+1)x^3+2t^2(t+3)x^2-4t^3x+t^4=:F(x)
\end{displaymath}
over $\Q(t)$ has a $\Q(t)$-rational point of order $13$ on its jacobian.
\end{example}
In order to see that this is true consider the polynomial $b:=X^4-3X^3+(1+2t)X^2-2tX+t^2$. Then we have
\begin{displaymath}
\left(-\frac{1}{t}X^3+\frac{3}{t}X^2-\frac{1+t}{t}X+1\right)^2\equiv X \pmod{b}
\end{displaymath}
and by Corollary \ref{cor:hensel} with $N=13$ we can find the mentioned polynomial $F$.
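The starting congruence behind this construction can be verified symbolically; clearing the $1/t$ denominators, it is equivalent to $b\mid q^2-t^2X$ in $\Q[t][X]$ with $q:=X^3-3X^2+(1+t)X-t$. A minimal SymPy check:

```python
from sympy import symbols, rem, expand

X, t = symbols('X t')
b = X**4 - 3*X**3 + (1 + 2*t)*X**2 - 2*t*X + t**2
# q = -t * ( -(1/t)X^3 + (3/t)X^2 - ((1+t)/t)X + 1 )
q = X**3 - 3*X**2 + (1 + t)*X - t

# (q/t)^2 ≡ X (mod b)  <=>  b divides q^2 - t^2*X over Q[t]
# (b is monic in X, so division leaves no denominators in t)
assert rem(expand(q**2 - t**2*X), b, X) == 0
```

In fact the division is clean because $(X^2-X+t)^2-b=X^3$ holds identically, which is the algebraic heart of this example.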
\bibliographystyle{is-alpha}
\section{Introduction}
Production of $b$-quarks in high-energy $pp$ collisions is the
object of intensive experimental study at the CERN LHC. In the
present paper we focus on measurements of $b\bar b$ angular and momentum correlations, since
they provide a test of the dynamics of hard interactions, which is
highly sensitive to higher-order corrections in QCD. There are
two ways to study these $b\bar b$ correlations. The first one is
based on the reconstruction of pairs of $b$-jets \cite{bCDF,bATLAS};
in the second case, one obtains information on the dynamics of hard production of
the $b\bar b$-pair from data on pair production of $B$-mesons. In turn,
the long-lived $B$-mesons are reconstructed via their semileptonic
decays. One advantage of the latter method is the unique capability
to detect $B\bar B$-pairs even at small opening angles, in which
case the decay products of the $B$-hadrons tend to be merged into a
single jet and the standard $b$-jet tagging techniques are not
applicable \cite{bCMS}.
On the theory side, one has to take into account multiple radiation
of both soft/collinear and hard additional partons to describe such
angular correlations over the whole observable range of opening
angles between momenta of $B$-mesons. In the Leading Order (LO) of
Collinear Parton Model (CPM), $b$-quarks are produced back-to-back
in azimuthal angle. Effects of the soft and collinear Initial State
Radiation (ISR) or Final State Radiation (FSR) somewhat smear the
distribution in azimuthal angle difference between transverse
momenta of mesons ($\Delta \phi$) around $\Delta\phi\simeq \pi$.
These effects are systematically taken into account, with Leading
Logarithmic Accuracy, by the Parton Showers (PS) of standard
Monte-Carlo (MC) generators, such as PYTHIA or HERWIG.
Radiation of additional hard gluons or quarks causes the $B$ and
$\bar{B}$ mesons to fly with $\Delta\phi < \pi$, but such radiation
is beyond the formal accuracy of the standard PS. The description of such
events depends essentially on how the transverse momentum and
the ``small'' light-cone component of the momentum of the emitted parton
are dealt with inside a PS algorithm, the so-called {\it recoil
scheme}~\cite{PYTHIA_RS}. Usually the accuracy of the description of
such kinematic configurations is improved via different methods of
{\it matching} of the full NLO corrections in the CPM with the
parton shower, such as MC@NLO~\cite{MCNLO} or POWHEG~\cite{POWHEG},
or via {\it merging} of the kinematically and dynamically accurate
description of a few additional hard emissions, provided by the
exact tree-level matrix elements, with the soft/collinear emissions
from the PS~\cite{Merging}.
The presence of additional free parameters in the matching/merging
methods, as well as the multitude of possible recoiling schemes,
clearly calls for the improved understanding of the high-$p_T$
regime of the PS from the point of view of Quantum Field Theory.
Apart from the soft and collinear limits, the only known limit of
scattering amplitudes in QCD whose structure is sufficiently simple
for theoretical analysis is the limit of Multi-Regge Kinematics
(MRK), when the emitted partons are highly separated in rapidity from
each other. This makes the MRK limit a natural starting point
for the construction of improved approximations. In the present
paper, we construct the factorization formula and the framework of
LO calculations in the Parton Reggeization Approach (PRA), which
unifies the PS-like description of the soft and collinear emissions
with the MRK limit for hard emissions. Then we switch to the
description of the angular correlations in the production of
$B\bar{B}$-pairs accompanied by the hard jet, which sets the scale
of the process. The present study is motivated by the experimental data
of Ref.~\cite{bCMS}, since neither the MC calculations in the
experimental paper, nor the calculations in the LO of the
$k_T$-factorization approach in Ref.~\cite{JZL_kT-fact}, could
accurately describe the shape of the angular distributions. We construct
a consistent prescription which {\it merges} the LO PRA
calculation for this process with the tree-level NLO matrix element. The
latter improves the description of those events in which not the
$b$-jet but the hard gluon jet is the leading one, while avoiding
possible double-counting and divergence problems. In this way we
achieve a good description of the shape of all $B\bar{B}$
correlation spectra without additional free parameters.
The paper has the following structure. We describe the basics of PRA and
its relationship to other approaches in Sec.~\ref{sec:LO}.
In Sec.~\ref{sec:merging} we present our merging prescription
and the analytic and numerical tools which we use. Then we
concentrate on the numerical results, the comparison with the experimental
data of Ref.~\cite{bCMS} and predictions for possible future
measurements in Sec.~\ref{sec:results}. Finally, we summarize
our conclusions in Sec.~\ref{sec:conclusions}.
\section{LO PRA framework}
\label{sec:LO}
To derive the factorization formula of the PRA in the LO
approximation, let us consider production of the partonic
final state of interest ${\cal Y}$ in the following auxiliary hard
subprocess:
\begin{equation}
g(p_1)+g(p_2)\to g(k_1) + {\cal Y}(P_{\cal A}) + g(k_2), \label{eqI:ax_proc}
\end{equation}
where the four-momenta of the particles are denoted in parentheses, and
$p_1^2=p_2^2=k_1^2=k_2^2=0$. The final state ${\cal Y}$ sets the
hard scale $\mu^2$ of the whole process via its invariant mass
$M_{\cal A}^2=P_{\cal A}^2$ or transverse momentum $P_{T{\cal A}}$;
otherwise it can be an arbitrary combination of QCD partons. In a
frame where ${\bf p}_{1}=-{\bf p}_{2}$ is directed along the
$z$-axis, it is natural to work with the Sudakov (light-cone) components
of any four-momentum $k$:
\[
k^\mu=\frac{1}{2}\left( k^+ n_-^\mu + k^- n_+^\mu \right) + k_T^\mu,
\]
where $n_{\pm}^\mu = \left(n^\pm\right)^\mu = \left(1,0,0,\mp 1
\right)^\mu$, $n_{\pm}^2=0$, $(n_+n_-)=2$,
$k^\pm=k_{\pm}=(n_{\pm}k)=k^0\pm k^3$, $n_{\pm}k_T=0$, so that
$p_1^-=p_2^+=0$ and $s=(p_1+p_2)^2=p_1^+p_2^- >0$. The dot-product
of two four-vectors $k$ and $q$ in this notation is equal to:
\[
(kq)=\frac{1}{2}\left( k^+ q_- + k^- q_+ \right) - {\bf k}_T {\bf q}_T.
\]
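The equivalence of this light-cone form of the dot product with the standard Minkowski product is easy to verify numerically; a minimal pure-Python sketch (the sample momenta are arbitrary):

```python
def minkowski(k, q):
    """Standard dot product with metric (+,-,-,-), k = (E, px, py, pz)."""
    return k[0]*q[0] - k[1]*q[1] - k[2]*q[2] - k[3]*q[3]

def lightcone(k, q):
    """(kq) = (k+ q- + k- q+)/2 - kT.qT, with k± = k0 ± k3."""
    kp, km = k[0] + k[3], k[0] - k[3]
    qp, qm = q[0] + q[3], q[0] - q[3]
    return 0.5*(kp*qm + km*qp) - (k[1]*q[1] + k[2]*q[2])

k = (5.0, 1.0, -2.0, 3.0)   # arbitrary sample four-vectors
q = (4.0, 0.5, 1.5, -2.0)
assert abs(minkowski(k, q) - lightcone(k, q)) < 1e-12
```

Indeed, $\frac{1}{2}(k^+q^-+k^-q^+)=k^0q^0-k^3q^3$ identically, so the two expressions agree term by term.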
For the discussion of different kinematic limits of the process
(\ref{eqI:ax_proc}) it is convenient to introduce the
``$t$-channel'' momentum transfers $q_{1,2}=p_{1,2}-k_{1,2}$, which
implies that ${\bf q}_{T1,2}=-{\bf k}_{T1,2}$, $q_{1}^-=-k_1^-$ and
$q_2^+=-k_2^+$. Let us define $t_{1,2}={\bf q}_{T1,2}^2$, and the
corresponding fractions of the ``large'' light-cone components of
momenta:
\[
z_1=\frac{q_1^+}{p_1^+},\ \ z_2=\frac{q_2^-}{p_2^-},
\]
for further use. The variables $z_{1,2}$ satisfy the conditions
$0\leq z_{1,2}\leq 1$ because $k_{1,2}^\pm\geq 0$ and
$q_1^+=P_{\cal A}^++k_2^+\geq 0$, $q_2^-=P_{\cal A}^-+k_1^-\geq 0$
since all final-state particles are on-shell.
In the {\it collinear limit} (CL), when ${\bf
k}_{T1,2}^2\ll \mu^2$ while $0\leq z_{1,2}\leq 1$, the asymptotics
of the squared tree-level matrix element for the subprocess
(\ref{eqI:ax_proc}) is well known:
\begin{equation}
\overline{|{\cal M}|^2}_{\rm CL} \simeq \frac{4g_s^4}{{\bf
k}_{T1}^2 {\bf k}_{T2}^2} P_{gg}(z_1) P_{gg}(z_2)
\frac{\overline{|{\cal A}_{CPM}|^2}}{z_1 z_2}, \label{eqI:CL_M2}
\end{equation}
where the bar denotes averaging (summation) over the spin and color
quantum numbers of the initial(final)-state partons, $g_s=\sqrt{4\pi
\alpha_s}$ is the coupling constant of QCD, $P_{gg}(z)=2C_A\left(
(1-z)/z + z/(1-z)+ z(1-z) \right)$ is the LO gluon-gluon DGLAP
splitting function and ${\cal A}_{CPM}$ is the amplitude of the
subprocess $g(z_1p_1)+g(z_2p_2)\to {\cal Y}(P_{\cal A})$ with
on-shell initial-state gluons. The error of the approximation
(\ref{eqI:CL_M2}) is suppressed as $O({\bf k}_{T1,2}^2/\mu^2)$ with
respect to the leading term.
The limit of {\it Multi-Regge Kinematics} (MRK) for the subprocess
(\ref{eqI:ax_proc}) is defined as:
\begin{eqnarray}
& \Delta y_1 =y(k_1)-y(P_{\cal A}) \gg 1,\ \Delta y_2=y(P_{\cal A})-y(k_2) \gg 1 , \label{eqI:MRK1} \\
& {\bf k}_{T1}^2\sim {\bf k}_{T2}^2 \sim M_{T{\cal A}}^2\sim \mu^2 \ll s, \label{eqI:MRK2}
\end{eqnarray}
where rapidity for the four-momentum $k$ is equal to
$y(k)=\frac{1}{2}\log\left( \frac{k^+}{k^-} \right)$. The rapidity
gaps $\Delta y_{1,2}$ can be calculated as:
\[
\Delta y_{1,2}=\log\left[ \frac{M_{T{\cal A}}}{|{\bf k}_{T1,2}|} \frac{1-z_{1,2}}{z_{1,2} - \frac{{\bf k}_{T2,1}^2}{s(1-z_{2,1})}} \right].
\]
From this expression, taken together with the conditions
(\ref{eqI:MRK1}) and (\ref{eqI:MRK2}), one can see that the
following hierarchy holds in the MRK limit:
\begin{equation}
\frac{{\bf k}_{T1,2}^2}{s} \ll z_1\sim z_2 \ll 1, \label{eqI:MRK-hier}
\end{equation}
so that the small parameters which control the MRK limit are
actually $z_{1,2}$, while the transverse momenta are of the same
order of magnitude as the hard scale, so the collinear asymptotics
(\ref{eqI:CL_M2}) of the amplitude is inapplicable. Also, the
following scaling relations for the momentum components hold in the MRK
limit:
\begin{equation}
M_{T{\cal A}}\sim |{\bf k}_{T1}|\sim q_1^+ \sim O(z_1) \gg q_1^- \sim O(z_1^2),\
M_{T{\cal A}}\sim |{\bf k}_{T2}|\sim q_2^- \sim O(z_2) \gg q_2^+ \sim O(z_2^2), \label{eqI:scaling}
\end{equation}
which allows one to neglect the ``small'' light-cone components of
momenta $q_1^-$ and $q_2^+$.
The systematic formalism for the calculation of the asymptotic
expressions for arbitrary QCD amplitudes in the MRK limit has been
formulated by L. N. Lipatov and M. I. Vyazovsky in a form of
gauge-invariant Effective Field Theory (EFT) for Multi-Regge
processes in QCD~\cite{Lipatov95, LipatovVyazovsky}, see
also~\cite{LipatovRev} for a review. The MRK asymptotics of the
amplitude in this EFT is constructed from gauge-invariant blocks --
{\it effective vertices}, which describe the production of clusters
of QCD partons, separated by the large rapidity gaps. These
effective vertices are connected together via $t$-channel exchanges
of gauge-invariant off-shell degrees of freedom, Reggeized gluons
$R_{\pm}$ and Reggeized quarks $Q_{\pm}$. The latter obey special
kinematical constraints, such that the field $Q_\pm$ ($R_\pm$)
carries only the $q^\pm$ light-cone component of momentum and a
transverse momentum of the same order of magnitude, while $q^\mp=0$.
As was shown above, these kinematical constraints are equivalent
to MRK.
Due to the requirements of gauge-invariance of effective vertices
and the above-mentioned kinematic constraints, the interactions of
QCD partons and Reggeons in the EFT~\cite{Lipatov95,
LipatovVyazovsky} are nonlocal and contain Wilson exponentials of the
gluonic fields. After the perturbative expansion, the latter
generate an infinite series of induced vertices of interaction of
particles and Reggeons. The Feynman rules of the EFT are worked out
in detail in Ref.~\cite{EFT_FRs}; however, we also collect the
induced and effective vertices relevant for our present study in
Figs.~\ref{figI:FeynmanRules} and~\ref{figI:FeynmanRules2} for
the reader's convenience.
\begin{figure}
\begin{center}
\includegraphics[width=0.7\textwidth]{FRs_gluons.pdf}
\end{center}
\caption{Feynman rules of the EFT~\cite{Lipatov95}. Propagator of
the Reggeized gluon (top-left panel) and Reggeon-gluon induced
vertices up to the $O(g_s^2)$ are shown. The usual Feynman Rules of
QCD hold for interactions of ordinary quarks and
gluons.\label{figI:FeynmanRules}}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=\textwidth]{vertices_figure.pdf}
\end{center}
\caption{Structure of the effective vertices $R_\pm gg$ (top-left),
$R_\pm q\bar{q}$ (top-right), $R_+R_- g$ (bottom-left) and the
$R_+R_-gg$ {\it combined vertex} (bottom-right). These vertices
appear in the diagrams of the Figs.~\ref{figI:MRK_ampl},
\ref{figII:RRqq} and~\ref{figII:RRqqg}.\label{figI:FeynmanRules2}}
\end{figure}
The diagrammatic representation of the squared amplitude of the
process (\ref{eqI:ax_proc}) is shown in the
Fig.~\ref{figI:MRK_ampl}. Explicitly, the $R_\pm gg$ effective
vertex, which is depicted diagrammatically in the
Fig.~\ref{figI:FeynmanRules2}, reads:
\[
\Gamma^{abc}_{\mu\nu\pm}(k_1,k_2)=-ig_s f^{abc} \left[ 2g_{\mu\nu}k_1^\mp + (2k_2+k_1)_\mu n^\mp_\nu - (2k_1+k_2)_\nu n^\mp_\mu - \frac{(k_1+k_2)^2}{k_1^\mp} n^\mp_\mu n^\mp_\nu \right].
\]
Evaluating the square of the $R_\pm gg$ effective vertex, contracted with the polarization vectors of the on-shell external gluons, one obtains:
\begin{equation}
\sum\limits_{\lambda_1,\lambda_2} \left\vert \Gamma_{\mu\nu\pm}(k_1,-k_2) \epsilon_\mu(k_1,\lambda_1) \epsilon^\star_\nu(k_2,\lambda_2) \right\vert^2 = 8(k_1^\mp)^2. \label{eqI:Gmn_sq}
\end{equation}
\begin{figure}
\begin{center}
\includegraphics[width=0.4\textwidth]{gg_gYg_MRK.pdf}
\end{center}
\caption{Diagrammatic representation of the MRK asymptotics for squared amplitude of the subprocess (\ref{eqI:ax_proc}).\label{figI:MRK_ampl}}
\end{figure}
Using the result (\ref{eqI:Gmn_sq}) and the Feynman rules of the
Fig.~\ref{figI:FeynmanRules} one can write the MRK asymptotics of
the squared amplitude of the process (\ref{eqI:ax_proc}) in the
following form:
\begin{equation}
\overline{|{\cal M}|^2}_{\rm MRK} \simeq \frac{4g_s^4}{{\bf k}_{T1}^2 {\bf k}_{T2}^2} \tilde{P}_{gg}(z_1) \tilde{P}_{gg}(z_2) \frac{\overline{|{\cal A}_{PRA}|^2}}{z_1 z_2}, \label{eqI:MRK_M2}
\end{equation}
where the MRK gluon-gluon splitting functions
$\tilde{P}_{gg}(z)=2C_A/z$ reproduce the small-$z$ asymptotics of
the full DGLAP splitting functions and the squared PRA amplitude is
defined as:
\begin{equation}
\overline{ |{\cal A}_{PRA}|^2} = \left( \frac{q_1^+q_2^-}{4(N_c^2-1)\sqrt{t_1t_2}} \right)^2 \left[ {\cal A}_{c_1 c_2}^\star {\cal A}^{c_1 c_2} \right],
\end{equation}
where ${\cal A}$ is the Green's function of the subprocess $R_+(q_1)
+ R_-(q_2)\to {\cal Y}(P_{\cal A})$ with amputated propagators of
the Reggeized gluons, and $c_{1,2}$ are their color indices. The error of the approximation
(\ref{eqI:MRK_M2}) is suppressed as $O(z_{1,2})$ with respect to the leading
term.
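The statement that $\tilde{P}_{gg}(z)=2C_A/z$ reproduces the small-$z$ behaviour of the full DGLAP kernel, with corrections of relative order $O(z)$, can be checked numerically; a minimal Python sketch with $C_A=3$:

```python
CA = 3.0

def P_gg(z):
    """Full LO gluon-gluon DGLAP splitting function (for z < 1)."""
    return 2*CA*((1 - z)/z + z/(1 - z) + z*(1 - z))

def P_gg_mrk(z):
    """MRK (small-z) asymptotics of P_gg."""
    return 2*CA/z

# the ratio tends to 1 as z -> 0, with O(z) corrections
for z in (1e-2, 1e-4, 1e-6):
    assert abs(P_gg(z)/P_gg_mrk(z) - 1) < 2*z
```

Expanding, $zP_{gg}(z)/(2C_A)=1-z+O(z^2)$, which is exactly the $O(z_{1,2})$ error quoted above.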
In contrast with the collinear limit, the PRA amplitude depends explicitly and
nontrivially on ${\bf q}_{T1}$ and ${\bf q}_{T2}$.
However, when ${\bf k}_{T1,2}^2\ll \mu^2$, the MRK limit reduces to the
small-$z_{1,2}$ asymptotics of the collinear limit, and
Eq.~(\ref{eqI:MRK_M2}) should reproduce Eq.~(\ref{eqI:CL_M2}).
To this end, the following {\it collinear limit constraint} for the
PRA amplitude should hold:
\begin{equation}
\int\frac{d\phi_1 d\phi_2}{(2\pi)^2} \lim\limits_{t_{1,2}\to 0} \overline{ |{\cal A}_{PRA}|^2} = \overline{|{\cal A}_{CPM}|^2}, \label{eqI:CL_A}
\end{equation}
where $\phi_{1,2}$ are the azimuthal angles of the vectors ${\bf
q}_{T1,2}$. One can prove the constraint
(\ref{eqI:CL_A}) for general PRA amplitudes of the type
$R_++R_-\to {\cal Y}$ with the help of the Ward identities for the
Green's functions with Reggeized gluons, which have been derived
in Ref.~\cite{BLV}.
Now we introduce the {\it modified MRK (mMRK) approximation} for the
squared amplitude of the subprocess (\ref{eqI:ax_proc}) as follows:
\begin{enumerate}
\item In Eq.~(\ref{eqI:MRK_M2}) we substitute the MRK asymptotics of the splitting functions $\tilde{P}_{gg}(z)$ by the full LO DGLAP expression $P_{gg}(z)$.
\item We substitute the factors ${\bf k}_{T1,2}^2$ in the denominator of (\ref{eqI:MRK_M2}) by the exact value of $-q^2_{1,2}$, as if all the components of the momentum $q_{1,2}$, i.e. $q_{1,2}^+$, $q_{1,2}^-$ and ${\bf q}_T$, were flowing through the $t$-channel propagator: ${\bf k}_{T1,2}^2\to -q_{1,2}^2={\bf q}_{T1,2}^2/(1-z_{1,2})$.
\item However, the ``small'' light-cone components of momenta, $q_1^-$ and $q_2^+$, do not propagate into the hard scattering process, so its gauge-invariant definition is unaffected and is given by Lipatov's EFT~\cite{Lipatov95}.
\end{enumerate}
After these substitutions, the mMRK approximation for the squared amplitude of the subprocess (\ref{eqI:ax_proc}) takes the following form:
\begin{equation}
\overline{|{\cal M}|^2}_{\rm mMRK} \simeq \frac{4g_s^4}{q_{1}^2 q_{2}^2} P_{gg}(z_1) P_{gg}(z_2) \frac{\overline{|{\cal A}_{PRA}|^2}}{z_1 z_2}. \label{eqI:mMRK_M2}
\end{equation}
The mMRK approximation (\ref{eqI:mMRK_M2}) reproduces the exact QCD
results in both the collinear and MRK limits. The latter suggests
that it should be more accurate than the default collinear
approximation (\ref{eqI:CL_M2}) when ${\bf k}_{T1,2}^2\sim \mu^2$, even
outside of the strict MRK limit $z_{1,2}\ll 1$; however, at present
we cannot give a precise parametric estimate of the accuracy of
Eq.~(\ref{eqI:mMRK_M2}) in this kinematic region. The available
numerical evidence (see Ref.~\cite{HEJ1} for the case of
amplitudes with Reggeized gluons in the $t$-channel and
Refs.~\cite{HHJ, Diphotons} for the case of Reggeized quarks)
supports the form of the mMRK approximation proposed above.
To derive the LO factorization formula of the PRA, we substitute the mMRK
approximation (\ref{eqI:mMRK_M2}) into the factorization formula of the
CPM, integrated over the phase space of the additional partons $k_{1,2}$:
\begin{eqnarray}
d\sigma &=& \int \frac{dk_1^+ d^2{\bf k}_{T1}}{(2\pi)^3 k_1^+} \int \frac{dk_2^- d^2{\bf k}_{T2}}{(2\pi)^3 k_2^-}
\int d\tilde{x}_1 d\tilde{x}_2 f_g(\tilde{x}_1,\mu^2) f_g(\tilde{x}_2,\mu^2)\ \frac{\overline{|{\cal M}|^2}_{\rm mMRK}}{2S\tilde{x}_1\tilde{x}_2}
\times \nonumber \\
&\times & (2\pi)^4 \delta\left( \frac{1}{2}\left( q_1^+ n_- + q_2^-
n_+ \right) + q_{T1}+ q_{T2} - P_{\cal A} \right) d\Phi_{\cal A},
\label{eqI:dsig0}
\end{eqnarray}
where $f_g(x,\mu^2)$ are the (integrated) Parton Distribution
Functions (PDFs) of the CPM,
$p_{1,2}^\mu=\tilde{x}_{1,2}P_{1,2}^\mu$, with $P_{1,2}$ the
four-momenta of the colliding protons, and $d\Phi_{\cal A}$ is the
element of the Lorentz-invariant phase-space for the final state of
the hard subprocess ${\cal Y}$.
Changing the variables in the integral, $(k_1^+, \tilde{x}_1)\to
(z_1, x_1)$ and $(k_2^-,\tilde{x}_2)\to (z_2,x_2)$, where
$x_{1,2}=\tilde{x}_{1,2}z_{1,2}$, one can rewrite
Eq.~(\ref{eqI:dsig0}) in a $k_T$-factorized form:
\begin{eqnarray}
d\sigma &=& \int\limits_0^1 \frac{dx_1}{x_1} \int \frac{d^2{\bf q}_{T1}}{\pi} \tilde{\Phi}_g(x_1,t_1,\mu^2) \int\limits_0^1 \frac{dx_2}{x_2} \int \frac{d^2{\bf q}_{T2}}{\pi} \tilde{\Phi}_g(x_2,t_2,\mu^2)\cdot d\hat{\sigma}_{\rm PRA}, \label{eqI:kT_fact}
\end{eqnarray}
where the partonic cross-section in PRA is given by:
\begin{equation}
d\hat{\sigma}_{\rm PRA}= \frac{\overline{|{\cal A}_{PRA}|^2}}{2Sx_1x_2}\cdot (2\pi)^4 \delta\left( \frac{1}{2}\left( q_1^+ n_- + q_2^- n_+ \right) + q_{T1}+ q_{T2} - P_{\cal A} \right) d\Phi_{\cal A}, \label{eqI:CS_PRA}
\end{equation}
and the tree-level ``unintegrated PDFs'' (unPDFs) are:
\begin{equation}
\tilde{\Phi}_g(x,t,\mu^2) = \frac{1}{t} \frac{\alpha_s}{2\pi} \int\limits_x^1 dz\ P_{gg}(z)\cdot \frac{x}{z}f_g\left(\frac{x}{z},\mu^2\right). \label{eqI:tree_unPDFs}
\end{equation}
The cross-section (\ref{eqI:kT_fact}) with the ``unPDFs''
(\ref{eqI:tree_unPDFs}) contains a collinear divergence at
$t_{1,2}\to 0$ and an infrared (IR) divergence at $z_{1,2}\to 1$. To
regularize the latter, we observe that the mMRK expression
(\ref{eqI:mMRK_M2}) can be expected to give a reasonable
approximation to the exact matrix element only in the
rapidity-ordered part of the phase-space, where $\Delta y_1>0$ and
$\Delta y_2>0$. The cutoff on $z_{1,2}$ follows from these
conditions:
\begin{equation}
z_{1,2}< 1-\Delta_{KMR}(t_{1,2},\mu^2), \label{eq:KMR_cut}
\end{equation}
where $\Delta_{KMR}(t,\mu^2)=\sqrt{t}/(\sqrt{\mu^2}+\sqrt{t})$, and we have taken into account that $\mu^2\sim M_{T{\cal A}}^2$. The collinear singularity is regularized by the Sudakov form factor:
\begin{equation}
T_i(t,\mu^2)=\exp\left[ - \int\limits_t^{\mu^2} \frac{dt'}{t'} \frac{\alpha_s(t')}{2\pi} \sum\limits_{j=q,\bar{q},g} \int\limits_0^{1} dz\ z\cdot P_{ji}(z) \theta\left(1-\Delta_{KMR}(t',\mu^2) - z\right) \right], \label{eq:Sudakov}
\end{equation}
which resums doubly-logarithmic corrections $\sim\log^2 (t/\mu^2)$ in the LLA in a way similar to what is done in the standard PS~\cite{PS_rev}.
The final form of our unPDF is:
\begin{equation}
\Phi_i(x,t,\mu^2) = \frac{T_i(t,\mu^2)}{t} \frac{\alpha_s(t)}{2\pi} \sum_{j=q,\bar{q},g} \int\limits_x^{1} dz\ P_{ij}(z)\cdot \frac{x}{z}f_{j}\left(\frac{x}{z},t \right)\cdot \theta\left(1-\Delta_{KMR}(t,\mu^2)-z \right), \label{eqI:KMR}
\end{equation}
which coincides with the Kimber--Martin--Ryskin (KMR)
unPDF~\cite{KMR}. The KMR unPDF is widely used in
phenomenological studies employing $k_T$-factorization, but to our
knowledge, the reasoning above is the first systematic attempt to
uncover its relationship with the MRK limit of QCD amplitudes.
The KMR unPDF approximately satisfies (see Sec.~2 of
Ref.~\cite{NLO_KMR} for further details) the following normalization
condition:
\begin{equation}
\int\limits_0^{\mu^2} dt\ \Phi_i(x,t,\mu^2) = xf_i(x,\mu^2), \label{eqI:KMR_norm}
\end{equation}
which ensures that predictions for single-scale observables,
such as proton structure functions or the $d\sigma/dQ^2dy$ cross-section
in the Drell-Yan process, reproduce the corresponding LO CPM results up to
power-suppressed corrections and terms of NLO in $\alpha_s$.
Results for multiscale observables in PRA differ significantly
from those in the CPM, due to the nonzero transverse momenta of the
initial-state partons.
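As a rough numerical illustration of Eqs.~(\ref{eq:Sudakov}) and (\ref{eqI:KMR}) (not part of the calculation reported in this paper), the gluon unPDF can be evaluated by direct quadrature. The sketch below keeps only the $g\to gg$ channel, assumes a one-loop running coupling with illustrative parameter values, and uses a toy gluon PDF as a stand-in for a fitted set such as MSTW-2008:

```python
import math

MU2 = 1.0e3    # hard scale mu^2 in GeV^2 (assumed value)
LQCD2 = 0.04   # Lambda_QCD^2 in GeV^2 (assumed value)

def alpha_s(t):
    """One-loop running coupling with n_f = 5."""
    return 4.0 * math.pi / ((11.0 - 10.0 / 3.0) * math.log(t / LQCD2))

def P_gg(z):
    """LO g -> gg DGLAP splitting function; the 1/(1-z) pole is
    regulated here by the KMR cutoff, not by a plus-prescription."""
    return 6.0 * (z / (1.0 - z) + (1.0 - z) / z + z * (1.0 - z))

def xf_g(x, t):
    """Toy gluon PDF x f_g(x,t); a real study would use a fitted set."""
    return 3.0 * (1.0 - x) ** 5 * x ** -0.2

def delta_kmr(t, mu2):
    return math.sqrt(t) / (math.sqrt(mu2) + math.sqrt(t))

def sudakov_g(t, mu2, n=120):
    """Sudakov factor T_g(t, mu^2) of Eq. (eq:Sudakov), g -> gg channel only."""
    if t >= mu2:
        return 1.0
    dlt = math.log(mu2 / t) / n
    s = 0.0
    for i in range(n):
        tp = t * math.exp((i + 0.5) * dlt)   # midpoint in log t'
        zmax = 1.0 - delta_kmr(tp, mu2)      # KMR cutoff on z
        dz = zmax / n
        zint = sum(z * P_gg(z) for z in ((j + 0.5) * dz for j in range(n))) * dz
        s += dlt * alpha_s(tp) / (2.0 * math.pi) * zint
    return math.exp(-s)

def phi_g(x, t, mu2, n=120):
    """KMR gluon unPDF Phi_g(x,t,mu^2) of Eq. (eqI:KMR), g -> gg channel only."""
    zmax = 1.0 - delta_kmr(t, mu2)
    if zmax <= x:
        return 0.0
    dz = (zmax - x) / n
    conv = sum(P_gg(z) * xf_g(x / z, t) for z in (x + (j + 0.5) * dz for j in range(n))) * dz
    return sudakov_g(t, mu2) * alpha_s(t) / (2.0 * math.pi) * conv / t
```

By construction, $T_g(\mu^2,\mu^2)=1$ and $T_g(t,\mu^2)$ decreases monotonically as $t$ is lowered below $\mu^2$, reflecting the growing no-emission suppression.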
The main difference of PRA from the multitude of studies in
$k_T$-factorization, such as Ref.~\cite{JZL_kT-fact}, is the
use of matrix elements with off-shell initial-state partons
(Reggeized quarks and gluons) from Lipatov's EFT~\cite{Lipatov95,
LipatovVyazovsky}, which allows one to study arbitrary processes
involving the non-Abelian structure of QCD without violation of
gauge invariance due to the nonzero virtuality of the initial-state
partons. This approach, together with the KMR unPDF, gives stable and
consistent results in a wide range of phenomenological applications,
including the description of the angular correlations of
dijets~\cite{NSSjj}, $b$-jets~\cite{SSbb}, charmed~\cite{PLB2016,
PRD2016DD} and bottom-flavoured~\cite{KNSS2015} mesons, various
multiscale observables in the hadroproduction of
diphotons~\cite{Diphotons} and the photoproduction of photon+jet
pairs~\cite{photon_jet}, among other examples.
Recently, a new approach to the derivation of gauge-invariant scattering
amplitudes with off-shell initial-state partons, using
spinor-helicity techniques and BCFW-like recursion relations for
such amplitudes, has been introduced in Refs.~\cite{hameren3,
katie}. This formalism is equivalent to Lipatov's EFT at the
tree level, but for some observables, e.g.\ those related to heavy
quarkonia, or for the generalization of the formalism to NLO, the
explicit Feynman rules and structure of the EFT are more convenient.
\section{LO PRA merged with tree-level NLO corrections}
\label{sec:merging}
Let us consider the kinematic conditions of the measurement of
Ref.~\cite{bCMS}. In this experiment, events with at least one
jet having $p_T^{\rm jet}>p_{TL}^{\min}$ were recorded in
$pp$-collisions at $\sqrt{S}=7$ TeV, and the
semileptonic decays of $B$-hadrons were reconstructed in these
events through decay vertices displaced with respect to the primary
$pp$-collision vertex. The $B$-hadron is required to have
$p_{TB}>p_{TB}^{\min}=15$ GeV, and three data samples are
presented in Ref.~\cite{bCMS}, for
$p_{TL}^{\min}=56$, $84$ and $120$ GeV. The rapidities of the
$B$-hadrons are constrained to $|y_B|<y_B^{\max}=2$, while the
leading jet is searched for in the somewhat wider domain $|y_{\rm
jet}|<y_{\rm jet}^{\max}=3$.
The leading jet reconstructed in this experiment sets the hard
scale of the event. Two possibilities should be considered: in the
first, the jet originating from the $b$-quark or
$\bar{b}$-antiquark is the leading one; in the second, some gluon
or light-quark jet is leading in $p_T$, and the jets
originating from $b$ or $\bar{b}$ are subleading. Observables with
such kinematic constraints on the QCD radiation are difficult to
study in $k_T$-factorization, because the radiation of additional
hard partons is already taken into account in the unPDFs, and the
jet originating from the unPDF can happen to be the leading one.
One can easily estimate the rapidity distribution of additional jets
using the KMR model for the unPDFs (\ref{eqI:KMR}).
The variable $z$ is related to the rapidity $y$ of the parton
emitted at the last step of the parton cascade as follows:
\[
z(y)= \left(1+\frac{\sqrt{t}}{x\sqrt{S}}e^{y}\right)^{-1},
\]
so, starting from Eq.~(\ref{eqI:KMR}), one can derive the
distribution integrated over $t$ from some scale $t_0$ up to
$\mu^2$, but unintegrated over $y$: $G_i(x,y,t_0,\mu^2)$.
Representative plots of this distribution, for the case of the
$P_{gg}$-splitting only, are shown in Fig.~\ref{figII:rapidity_plots}
for values of the scales typical of the process under
consideration. The LO PDFs from the
MSTW-2008 set~\cite{MSTW-2008} have been used to produce these plots.
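The rapidity content of the cutoff (\ref{eq:KMR_cut}) can be made explicit from the relation $z(y)$ above: substituting $\Delta_{KMR}$, the condition $z<1-\Delta_{KMR}(t,\mu^2)$ is equivalent to $y>\log(x\sqrt{S}/\mu)$, independently of $t$. A short numerical check (the scale values here are illustrative only):

```python
import math

SQRT_S = 7000.0   # GeV, as in the CMS measurement
MU = 100.0        # hard-scale mu in GeV (illustrative value)
X = MU / SQRT_S   # hard process at zero rapidity, as in the figure

def z_of_y(y, t, x=X, sqrt_s=SQRT_S):
    """Momentum fraction z of the last emission as a function of its rapidity y."""
    return 1.0 / (1.0 + math.sqrt(t) / (x * sqrt_s) * math.exp(y))

def kmr_zmax(t, mu=MU):
    """Upper z limit 1 - Delta_KMR(t, mu^2) of Eq. (eq:KMR_cut)."""
    return 1.0 - math.sqrt(t) / (mu + math.sqrt(t))

# z < 1 - Delta_KMR  <=>  y > log(x*sqrt(S)/mu), for any t:
y_thr = math.log(X * SQRT_S / MU)   # = 0 for the choice x = mu/sqrt(S)
for t in (10.0, 100.0, 1000.0):
    for y in (-2.0, -0.5, 0.5, 2.0):
        assert (z_of_y(y, t) < kmr_zmax(t)) == (y > y_thr)
```

In other words, for a hard process at central rapidity, the KMR cutoff restricts the last resolvable emission to rapidities above that of the hard system, in line with the rapidity ordering invoked in the derivation.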
\begin{figure}
\begin{center}
\includegraphics[width=0.55\textwidth]{rap_distr_unPDF_jet_hard_Math.pdf}
\end{center}
\caption{ Distribution in rapidity of gluon jets with $|{\bf
k}_T|>\mu/2$ from the last stage of the parton cascade, as given by
the KMR model~(\ref{eqI:KMR}). Solid line -- $\mu^2=10^3$ GeV$^2$,
dashed line -- $\mu^2=10^5$ GeV$^2$. Both curves are normalized to a
common integral; the scale of the $G$-axis is arbitrary. For both
distributions $x=\mu/\sqrt{S}$, i.e.\ the rapidity of the hard
process is zero. \label{figII:rapidity_plots} }
\end{figure}
From Fig.~\ref{figII:rapidity_plots} it is clear that,
in the KMR model, the majority of the hardest ISR jets, with
${\bf k}_T^2\sim \mu^2$, lie within the rapidity interval $|y|<3$ if
the particles produced in the primary hard process have rapidities
close to zero. Therefore these jets can be identified as the leading
ones. But the kinematic approximations made in the
derivation of the factorization formula are least reliable in this
region of phase-space, and hence poor agreement with the data is to
be expected. To avoid this problem, we merge the
LO PRA description of events with leading $b$($\bar{b}$)-jets
with events triggered by a leading gluon jet, originating from the
exact $2\to 3$ NLO PRA matrix element.
The LO ($O(\alpha_s^2)$) subprocess that we take into account
is:
\begin{equation}
R_+(q_1) + R_-(q_2)\to b(q_3)(\rightarrow B (p_{TB})) + \bar{b}(q_4)(\rightarrow \bar{B} (p_{T\bar{B}}) ),\label{eqII:LO_subpr}
\end{equation}
where the hadronization of $b$($\bar{b}$)-quarks into
$B$($\bar{B}$) mesons is described by a set of universal,
scale-dependent parton-to-hadron fragmentation functions, fitted in
Ref.~\cite{FFB} to the world data on $B$-hadron production in
$e^+e^-$-annihilation.
The following kinematic cuts are applied to the LO subprocess
(\ref{eqII:LO_subpr}):
\begin{enumerate}
\item Both $B$ and $\bar{B}$ mesons are required to have $|y_B|<y_B^{\max}$ and $\min(p_{TB},p_{T\bar{B}})> p_{TB}^{\min}$.
\item If the distance between the three-momenta ${\bf q}_3$ and ${\bf q}_4$ in the $(\Delta y, \Delta \phi)$-plane,
$\Delta R_{34}=\sqrt{\Delta y_{34}^2+\Delta\phi_{34}^2}$, exceeds $\Delta R_{\rm exp.}=0.5$, then the $b$ and $\bar{b}$ jets
are resolved separately, and we define $p_{TL}=\max(|{\bf q}_{T3}|,|{\bf q}_{T4}|)$.
\item If $\Delta R_{34}<\Delta R_{\rm exp.}$, then $p_{TL}=|{\bf q}_{T3}+{\bf q}_{T4}|$,
according to the anti-$k_T$ jet clustering algorithm~\cite{antikt}.
\item The MC event is {\it accepted} if $\max(|{\bf q}_{T1}|,|{\bf q}_{T2}| ) < p_{TL}$ and $p_{TL}>p_{TL}^{\min}$.
\end{enumerate}
The set of Feynman diagrams for the subprocess (\ref{eqII:LO_subpr})
is presented in Fig.~\ref{figII:RRqq}. A convenient expression
for the squared amplitude of this subprocess with massless quarks
can be found in Ref.~\cite{NSSjj}. Due to the Ward identities of
Ref.~\cite{BLV}, this amplitude coincides with the amplitude
obtained in the ``old $k_T$-factorization''
prescription, i.e.\ by substituting the polarization vectors of the
initial-state gluons in the usual $gg\to q\bar{q}$ amplitude by
$q_{T1,2}^\mu/|{\bf q}_{T1,2}|$.
\begin{figure}
\begin{center}
\includegraphics[width=.8\textwidth, clip=]{RR_qq.pdf}
\caption{The Feynman diagrams of Lipatov's EFT for the Reggeized
amplitude of subprocess $R_++R_-\to b+\bar b$. \label{figII:RRqq}}
\end{center}
\end{figure}
The NLO ($O(\alpha_s^3)$) subprocess is
\begin{equation}
R_+(q_1)+R_-(q_2)\to b(q_3)(\rightarrow B (p_{TB})) + \bar{b}(q_4)(\rightarrow \bar{B} (p_{T\bar{B}})) + g(q_5), \label{eqII:NLO_subpr}
\end{equation}
and the following kinematic constraints are applied in the
calculation of this contribution:
\begin{enumerate}
\item Both $B$ and $\bar{B}$ mesons are required to have $|y_B|<y_B^{\max}$ and $\min(p_{TB},p_{T\bar{B}})> p_{TB}^{\min}$.
\item The gluon jet is the leading one: $p_{TL}=|{\bf q}_{T5}|$, $\max\left( |{\bf q}_{T1}|, |{\bf q}_{T2}|, |{\bf q}_{T3}|, |{\bf q}_{T4}| \right) < p_{TL}$ and $p_{TL}>p_{TL}^{\min}$.
\item The rapidity of the gluon is required to satisfy $|y_5|<y_{\rm jet}^{\max}$. The gluon jet is isolated: $\Delta R_{35}>\Delta R_{\rm exp.}$ and $\Delta R_{45}>\Delta R_{\rm exp.}$.
\end{enumerate}
Furthermore, since the matrix elements for both subprocesses
(\ref{eqII:LO_subpr}) and (\ref{eqII:NLO_subpr}) are taken in the
approximation of massless $b$-quarks, the corresponding final-state
collinear singularity is regularized by the condition
$(q_3+q_4)^2>4m_b^2$, where $m_b=4.5$ GeV.
A few comments are in order. For both subprocesses
(\ref{eqII:LO_subpr}) and (\ref{eqII:NLO_subpr}), the transverse momenta
of jets from the unPDFs are constrained to be subleading. In this
way we avoid double-counting of the leading emissions between the LO
and NLO contributions, and additional subtractions are not needed.
This is in contrast to observables fully inclusive in the QCD
radiation~\cite{Diphotons}, where double-counting subtractions
between the LO and NLO terms have to be performed. Another comment concerns
the isolation condition for the leading gluon jet in the NLO
contribution (\ref{eqII:NLO_subpr}). This condition regularizes the
collinear singularity between the final-state gluon and the
$b$($\bar{b}$)-quark. In a full NLO calculation, this singularity
would be cancelled by the loop correction, producing some finite
contribution; but since the gluon is required to be harder than the $b$
or $\bar{b}$-quarks, this finite contribution would be proportional
to the $P_{gq}(z)=C_F(1+(1-z)^2)/z$ splitting function at $z\to 1$,
so we do not expect logarithmically-enhanced contributions from
this region of phase-space.
\begin{figure}
\begin{center}
\includegraphics[width=.8\textwidth, clip=]{RR_qqg.pdf}
\caption{The Feynman diagrams of Lipatov's EFT for the Reggeized
amplitude of subprocess $R_++R_-\to b+\bar b+ g$.
\label{figII:RRqqg}}
\end{center}
\end{figure}
The set of Feynman diagrams for the amplitude of the subprocess
(\ref{eqII:NLO_subpr}) is presented in Fig.~\ref{figII:RRqqg}.
We generate this amplitude using our model-file \texttt{ReggeQCD},
which implements the Feynman rules of Lipatov's EFT in
\texttt{FeynArts}~\cite{FeynArts}. The squared amplitude is computed
using the \texttt{FormCalc}~\cite{FormCalc} package and has been
compared numerically with the squared amplitude obtained by the
methods of Refs.~\cite{hameren3, katie}; the two results agree
up to machine precision. Apart
from the Feynman rules depicted in
Figs.~\ref{figI:FeynmanRules} and \ref{figI:FeynmanRules2}, the
\texttt{ReggeQCD} package contains all Feynman rules
needed to generate an arbitrary PRA amplitude with Reggeized gluons or
quarks in the initial state and up to three quarks, gluons or
photons in the final state. We plan to publish the \texttt{ReggeQCD} model-file in a separate paper~\cite{ReggeQCD}. The \texttt{FORTRAN} code for the squared amplitudes of the processes (\ref{eqII:LO_subpr}) and (\ref{eqII:NLO_subpr}) is available from the authors upon request.
As stated above, we use the fragmentation model to
describe the hadronization of $b$-quarks into $B$-hadrons, so the
observable cross-section is:
\begin{eqnarray}
\frac{d\sigma_{\rm obs.}}{dy_B dy_{\bar{B}} d\Delta\phi} &=& \int\limits_{p_{TB}^{\min}}^\infty dp_{TB} \int\limits_{p_{TB}^{\min}}^\infty dp_{T\bar{B}} \int\limits_0^1 \frac{dz_1}{z_1} D_{B/b}(z_1,\mu^2) \int\limits_0^1 \frac{dz_2}{z_2} D_{B/b}(z_2,\mu^2) \nonumber \\
&\times & \frac{d\sigma_{b\bar{b}}}{dq_{T3} dq_{T4} dy_3 dy_{4} d\Delta\phi}, \label{eqII:frag}
\end{eqnarray}
where $\Delta\phi=\Delta\phi_{34}$, $D_{B/b}(z,\mu^2)$ are the fragmentation functions~\cite{FFB}, and $q_{T3}=|{\bf q}_{T3}|=p_{TB}/z_1$, $q_{T4}=|{\bf q}_{T4}|=p_{T\bar{B}}/z_2$, $y_3=y_B$, $y_4=y_{\bar B}$. To simplify the numerical calculations, it is very convenient to integrate over $q_{T3,4}$ instead of $p_{TB}$ and $p_{T\bar{B}}$ in Eq.~(\ref{eqII:frag}), then all the dependence of the cross-section on fragmentation functions can be absorbed into the following measurement function:
\begin{equation}
\Theta(\tilde{z},\mu^2)=\left\{ \begin{array}{c}
\int\limits_{1/\tilde{z}}^1 dz\ D_{B/q}(z,\mu^2) \ {\rm if} \ \tilde{z}>1, \\
0 \ {\rm otherwise,}
\end{array}\right. \label{eqII:MF-def}
\end{equation}
which can be efficiently computed and tabulated in advance,
thereby reducing the dimension of the phase-space integrals by two. The
master formula for the cross-section of the $2\to 2$ subprocess
(\ref{eqII:LO_subpr}) in PRA then takes the form:
\begin{eqnarray}
\frac{d\sigma_{\rm obs.}^{(2\to 2)}}{dy_B dy_{\bar{B}} d\Delta\phi} = \int\limits_0^\infty dq_{T3}\ dq_{T4}\cdot \Theta \left(\frac{q_{T3}}{p_{TB}^{\min}}, \mu^2 \right) \Theta \left(\frac{q_{T4}}{p_{TB}^{\min}}, \mu^2 \right) \nonumber \\
\times \int\limits_0^\infty dt_1 \int\limits_0^{2\pi}d\phi_1\ \Phi_g(x_1,t_1,\mu^2) \Phi_g(x_2,t_2,\mu^2)\cdot \frac{q_{T3}q_{T4}\ \overline{\left\vert {\cal A}^{(2\to 2)}_{PRA} \right\vert^2}}{2(2\pi)^3 (Sx_1x_2)^2}\cdot \theta_{\rm cuts}^{(2\to 2)},
\end{eqnarray}
where $\phi_1$ is the azimuthal angle between the vectors ${\bf q}_{T1}$
and ${\bf q}_{T3}$, $t_2=({\bf q}_{T3}+{\bf q}_{T4}-{\bf q}_{T1})^2$,
$x_1=(q_3^++q_4^+)/\sqrt{S}$, $x_2=(q_3^-+q_4^-)/\sqrt{S}$, and the
theta-function $\theta_{\rm cuts}^{(2\to 2)}$ implements the
kinematic constraints for the $2\to 2$ process described above.
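The measurement function (\ref{eqII:MF-def}) is straightforward to pre-tabulate. The sketch below uses a Peterson parametrization of $D_{B/b}(z)$ as a stand-in for the fitted, scale-dependent fragmentation functions of Ref.~\cite{FFB}; the value $\epsilon_b=0.006$ is illustrative and the scale dependence is ignored:

```python
import math

EPS_B = 0.006   # Peterson parameter for b quarks (illustrative value)

def peterson(z, eps=EPS_B):
    """Peterson fragmentation function (unnormalized), a toy stand-in
    for the scale-dependent FFs used in the text."""
    return 1.0 / (z * (1.0 - 1.0 / z - eps / (1.0 - z)) ** 2)

# normalize D(z) to unit integral on (0,1) by the midpoint rule
N_GRID = 2000
_dz = 1.0 / N_GRID
_norm = sum(peterson((j + 0.5) * _dz) for j in range(N_GRID)) * _dz

def theta_mf(z_tilde, n=500):
    """Measurement function Theta(z~) = int_{1/z~}^1 dz D(z) for z~ > 1, else 0."""
    if z_tilde <= 1.0:
        return 0.0
    zlo = 1.0 / z_tilde
    dz = (1.0 - zlo) / n
    return sum(peterson(zlo + (j + 0.5) * dz) for j in range(n)) * dz / _norm

# Theta grows monotonically from 0 towards 1 as z~ = qT / pT_min grows
assert theta_mf(0.9) == 0.0
assert 0.0 < theta_mf(1.5) < theta_mf(4.0) < 1.0
```

Once tabulated on a grid in $\tilde{z}$, this function replaces the two fragmentation integrals in the phase-space integration, exactly as described above.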
Analogously, the formula for the differential cross-section of the $2\to
3$ process (\ref{eqII:NLO_subpr}) reads:
\begin{eqnarray}
\frac{d\sigma_{\rm obs.}^{(2\to 3)}}{dy_B dy_{\bar{B}} d\Delta\phi\ dy_5 } = \int\limits_0^\infty dq_{T3}\ dq_{T4}\cdot \Theta \left(\frac{q_{T3}}{p_{TB}^{\min}}, \mu^2 \right) \Theta \left(\frac{q_{T4}}{p_{TB}^{\min}}, \mu^2 \right) \nonumber \\
\times \int\limits_0^\infty dt_1 dt_2 \int\limits_0^{2\pi}d\phi_1 d\phi_2\ \Phi_g(x_1,t_1,\mu^2) \Phi_g(x_2,t_2,\mu^2)\cdot \frac{q_{T3}q_{T4}\ \overline{\left\vert {\cal A}^{(2\to 3)}_{PRA} \right\vert^2}}{8(2\pi)^6 (Sx_1x_2)^2}\cdot \theta_{\rm cuts}^{(2\to 3)},
\end{eqnarray}
where $\phi_2$ is the azimuthal angle between the vectors ${\bf
q}_{T2}$ and ${\bf q}_{T3}$, ${\bf q}_{T5}={\bf q}_{T1}+{\bf
q}_{T2}-{\bf q}_{T3}-{\bf q}_{T4}$,
$x_1=(q_3^++q_4^++q_5^+)/\sqrt{S}$,
$x_2=(q_3^-+q_4^-+q_5^-)/\sqrt{S}$, and the theta-function
$\theta_{\rm cuts}^{(2\to 3)}$ implements the kinematic cuts for the $2\to
3$ process, described after Eq.~(\ref{eqII:NLO_subpr}).
\section{Numerical results for $B\bar{B}$-correlation observables}
\label{sec:results}
Now we are in a position to compare our numerical results, obtained
in the approximation formulated in Sec.~\ref{sec:merging} of
the present paper, to the experimental data of Ref.~\cite{bCMS}.
The experimental uncertainties related to the shape of the $\Delta\phi$
and $\Delta R=\Delta R_{34}$ distributions are relatively small
($\sim 20-30\%$). They are indicated by the error bars in
Figs.~\ref{figIII:dPhi-spectra} and~\ref{figIII:dR-spectra}.
However, an additional uncertainty in the absolute normalization of
the cross-sections, $\simeq \pm 47\%$, is reported in
Ref.~\cite{bCMS}; it is not included in the error bars of the
experimental points in Figs.~\ref{figIII:dPhi-spectra}
and~\ref{figIII:dR-spectra}, nor in the plots presented in
the experimental paper. Taking this large uncertainty into account,
it is reasonable to treat the overall normalization of the
cross-section as a free parameter, which is also the case in the MC
simulations presented in Ref.~\cite{bCMS}. Following this route,
we find that, to obtain a very good agreement of the central curve
of our predictions with both the shape and normalization of all
experimental spectra, we have to multiply all our predictions by a
universal factor $\simeq 0.4$. Since the major part of the reported
normalization uncertainty is due to the uncertainty in the
efficiency of $B$-meson identification, our finding seems to
support the assumption that the $B$-meson reconstruction efficiency
is largely independent of the kinematics of the leading jet, and
in particular of the value of $p_{TL}^{\min}$. In the plots
below, we show the theoretical predictions multiplied by the
above-mentioned factor; however, our default result is also
compatible with experiment if one takes into account the full
experimental uncertainties and the scale uncertainty of our
predictions.
In Figs.~\ref{figIII:dPhi-spectra} and~\ref{figIII:dR-spectra}
we compare our predictions with the $\Delta
\phi$ and $\Delta R$ spectra from Ref.~\cite{bCMS}. Apart from
the above-mentioned overall normalization uncertainty, our model
does not contain any free parameters. To generate the gluon unPDF
according to Eq.~(\ref{eqI:KMR}), we use the LO PDFs from the
MSTW-2008 set~\cite{MSTW-2008}. We also use the value
$\alpha_s(M_Z)=0.1394$ from the PDF fit. In both the LO
(\ref{eqII:LO_subpr}) and NLO (\ref{eqII:NLO_subpr}) contributions
we set the renormalization and factorization scales equal to
the $p_T$ of the leading jet, $\mu_R=\mu_F=\xi p_{TL}$, where
$\xi=1$ for the central lines of our predictions, and we vary
$1/2<\xi<2$ to estimate the scale uncertainty of our predictions,
which is shown in the following figures by the gray band. All
numerical calculations have been performed using the adaptive MC
integration routines of the \texttt{CUBA} library~\cite{CUBA},
mostly the \texttt{SUAVE} algorithm, with cross-checks
against the results obtained by the \texttt{VEGAS} and \texttt{DIVONNE}
routines.
The shapes of the measured distributions, both in $\Delta\phi$ and
$\Delta R$, agree with our theoretical predictions within the
experimental uncertainty. Our model also correctly describes the
dependence of the cross-section on the $p_{TL}^{\min}$ cut.
Our predictions for the $d\sigma/d\Delta\phi$ and $d\sigma/d\Delta
R$ spectra at $\sqrt{S}=13$ TeV are presented in Figs.
\ref{figIII:dPhi-spectra-13} and \ref{figIII:dR-spectra-13} for the
same kinematic cuts as in Ref.~\cite{bCMS}. In
Figs.~\ref{figIII:dPhi-ratio} and \ref{figIII:dR-ratio} we also provide
predictions for the ratios of the $\Delta\phi$ and $\Delta R$ spectra at
the two energies, as proposed in Ref.~\cite{ratios}.
The primary advantage of such observables is that the theoretical
scale uncertainty largely cancels in the ratio, leading to a more
precise prediction. The residual $\Delta\phi$ and $\Delta R$
dependence of the ratio arises from the interplay between the
$x$-dependence of the PDFs and the dynamics of the emission of additional
hard radiation, thereby probing the physics of interest for PRA.
Measurements of such observables at the LHC will present an
important challenge for state-of-the-art calculations in
perturbative QCD and for the tuning of MC event generators.
\section{Conclusions}
\label{sec:conclusions}
In the present paper, the example of $B\bar{B}$ azimuthal
decorrelations is used to show how the contributions of $2\to 2$
and $2\to 3$ processes in PRA can be consistently combined to
describe multiscale correlation observables in the presence of
experimental constraints on additional QCD radiation. Our numerical
results agree well with the experimental data of Ref.~\cite{bCMS},
up to a common normalization factor, and predictions for
$\sqrt{S}=13$ TeV are provided. The foundations of the Parton
Reggeization Approach have also been reviewed in Sec.~\ref{sec:LO}, and
the relation of PRA to the collinear and Multi-Regge limits of
scattering amplitudes in QCD has been highlighted.
\section{Acknowledgements}
We are grateful to A. van Hameren for his help in the comparison of
squared amplitudes obtained in the PRA with those obtained using the
recursion techniques of Refs.~\cite{hameren3, katie}. The work was
supported by the Ministry of Education and Science of Russia under
the Competitiveness Enhancement Program of Samara University for
2013--2020, project 3.5093.2017/8.9. The Feynman diagrams in
Figs.~\ref{figI:FeynmanRules}, \ref{figI:FeynmanRules2} and
\ref{figI:MRK_ampl} were drawn using
\texttt{JAXODRAW}~\cite{JaxoDraw}.
\begin{figure}[p!]
\begin{center}
\includegraphics[width=0.45\textwidth, angle=-90]{BB_56_dPHI.pdf}
\includegraphics[width=0.45\textwidth, angle=-90]{BB_84_dPHI.pdf} \\
\includegraphics[width=0.45\textwidth, angle=-90]{BB_120_dPHI.pdf}
\end{center}
\caption{Comparison of the predictions for $\Delta\phi$-spectra of $B\bar{B}$-pairs with the CMS data~\cite{bCMS}.
Dashed line -- contribution of the LO subprocess (\ref{eqII:LO_subpr}), dash-dotted line -- contribution of the NLO subprocess
(\ref{eqII:NLO_subpr}), solid line -- sum of LO and NLO contributions. \label{figIII:dPhi-spectra}}
\end{figure}
\clearpage
\begin{figure}[p!]
\begin{center}
\includegraphics[width=0.45\textwidth, angle=-90]{BB_56_dR.pdf}
\includegraphics[width=0.45\textwidth, angle=-90]{BB_84_dR.pdf} \\
\includegraphics[width=0.45\textwidth, angle=-90]{BB_120_dR.pdf}
\end{center}
\caption{Comparison of the predictions for $\Delta R$-spectra of $B\bar{B}$-pairs with the CMS data~\cite{bCMS}.
Notation for the histograms is the same as in the Fig.~\ref{figIII:dPhi-spectra}. \label{figIII:dR-spectra}}
\end{figure}
\clearpage
\begin{figure}[p!]
\begin{center}
\includegraphics[width=0.45\textwidth]{BB_56_dPHI_13.pdf}
\includegraphics[width=0.45\textwidth]{BB_84_dPHI_13.pdf} \\
\includegraphics[width=0.45\textwidth]{BB_120_dPHI_13.pdf}
\end{center}
\caption{Predictions for the $d\sigma/d\Delta\phi$-spectra at $\sqrt{S}=13$ TeV for the same kinematic cuts as in Ref.~\cite{bCMS}.
Notation for the histograms is the same as in the Fig.~\ref{figIII:dPhi-spectra}. \label{figIII:dPhi-spectra-13}}
\end{figure}
\clearpage
\begin{figure}[p!]
\begin{center}
\includegraphics[width=0.45\textwidth]{BB_56_dR_13.pdf}
\includegraphics[width=0.45\textwidth]{BB_84_dR_13.pdf} \\
\includegraphics[width=0.45\textwidth]{BB_120_dR_13.pdf}
\end{center}
\caption{Predictions for the $d\sigma/d\Delta R$-spectra at $\sqrt{S}=13$ TeV for the same kinematic cuts as in Ref.~\cite{bCMS}.
Notation for the histograms is the same as in the Fig.~\ref{figIII:dPhi-spectra}. \label{figIII:dR-spectra-13}}
\end{figure}
\clearpage
\begin{figure}[p!]
\begin{center}
\includegraphics[width=0.45\textwidth]{BB_56_dPHI_R13-7.pdf}
\includegraphics[width=0.45\textwidth]{BB_84_dPHI_R13-7.pdf} \\
\includegraphics[width=0.45\textwidth]{BB_120_dPHI_R13-7.pdf}
\end{center}
\caption{Predictions for the ratio of $d\sigma/d\Delta\phi$-spectra at $\sqrt{S}=13$ TeV and $\sqrt{S}=7$ TeV
for the same kinematic cuts as in the Ref.~\cite{bCMS}. \label{figIII:dPhi-ratio}}
\end{figure}
\clearpage
\begin{figure}[p!]
\begin{center}
\includegraphics[width=0.45\textwidth]{BB_56_dR_R13-7.pdf}
\includegraphics[width=0.45\textwidth]{BB_84_dR_R13-7.pdf} \\
\includegraphics[width=0.45\textwidth]{BB_120_dR_R13-7.pdf}
\end{center}
\caption{Predictions for the ratio of $d\sigma/d\Delta R$-spectra at $\sqrt{S}=13$ TeV and $\sqrt{S}=7$ TeV for the same
kinematic cuts as in the Ref.~\cite{bCMS}. \label{figIII:dR-ratio}}
\end{figure}
\clearpage
\section{Introduction}
The problem of inferring the parameters of a stochastic model from data is ubiquitous in the natural and social sciences, and engineering. Many systems, like gene regulatory networks, electric power grids, virus populations, or financial markets have a complex dynamics which is often modelled by stochastic processes. Such stochastic processes are characterised by potentially many free parameters, which need to be estimated from data. For a review in the context of the inverse Ising problem, see~\cite{Nguyen2017}.
Here, we ask how to infer the parameters characterising a non-equilibrium stochastic process. We consider a system with configurations $x$ in some configuration space and time-homogeneous transition probabilities between configurations.
Our focus is on time-homogeneous Markov processes, which are fully defined by instantaneous transition rates. These rates are parametrized by a model with parameters denoted $\Theta$.
Configurations can be discrete or continuous, and also time can be discrete or continuous. For the concrete example of a colloidal particle undergoing Brownian motion, the configurations $x$ are positions in space and the model parameter specifies the diffusion constant of the particle.
We restrict ourselves to ergodic processes, so for any initial state the system eventually settles into a unique steady state characterised by the steady-state probability distribution $p_\Theta(x)$.
Our aim is to infer the underlying parameters $\Theta^\text{true}$ from $M$ samples $x^\mu$, $\mu=1,\hdots,M$, drawn independently from the steady-state distribution.
Parameter inference hinges on the description of the empirical data by a model. For a model whose steady state $p_\Theta(x)$ is known explicitly, the maximum-likelihood estimate
\begin{equation}
\label{maxlike}
\Theta^\text{inf} = \operatorname*{argmax}_{\Theta} \prod_{\mu=1}^M p_\Theta(x^{\mu})
\end{equation}
provides an estimate of the model parameters which becomes exact in the limit of a large number of samples. However, for non-equilibrium models, the steady state $p_\Theta(x)$ is hard to compute and generally unknown. This is a major difference to equilibrium models and prevents the use of established inference methods.
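When the steady state is known in closed form, Eq.~(\ref{maxlike}) is standard maximum-likelihood estimation. As a minimal illustration, consider a hypothetical one-parameter model with an exponential steady state $p_\lambda(x)=\lambda e^{-\lambda x}$ (not one of the processes studied below), for which the argmax is available analytically:

```python
import math

def log_likelihood(lam, samples):
    """Log-likelihood of i.i.d. samples under p_lambda(x) = lambda * exp(-lambda x)."""
    return sum(math.log(lam) - lam * x for x in samples)

def mle_rate(samples):
    """Closed-form maximizer of the log-likelihood: lambda_hat = 1 / mean(x)."""
    return len(samples) / sum(samples)

samples = [0.1, 0.5, 0.9, 1.3, 0.2]
lam_hat = mle_rate(samples)
# lambda_hat indeed beats nearby parameter values on these samples
assert log_likelihood(lam_hat, samples) > log_likelihood(0.5 * lam_hat, samples)
assert log_likelihood(lam_hat, samples) > log_likelihood(2.0 * lam_hat, samples)
```

For non-equilibrium models the obstacle, discussed next, is precisely that no such closed form for $p_\Theta(x)$ is available.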
In some cases, time series data is available and one can use the empirically observed transitions between configurations to compute the likelihood of the observed time series. This likelihood can be computed directly from the transition probabilities specified by the model; the underlying model parameters are then estimated as the parameters that maximise the likelihood of the time series~\cite{Roudi2011a,zeng2013}. Inference from time series can be performed even more efficiently using mean-field approximations~\cite{Roudi2011a,Mezard2011a,Mahmoudi2014}.
However, for many systems, classical as well as quantum, time series data is not available. An extreme case is whole-genome single-cell gene expression profiling, where cells are destroyed by the measurement process. In such cases, we have only independent samples from which to infer the model parameters. To this end, we use the transition rates between configurations and their dependence on the model parameters to construct a quantity we call the propagator likelihood. We show how this likelihood can be used to infer the model parameters from independent samples taken from the steady state.
This article is organised as follows: First, we introduce the propagator likelihood through an intuitive argument and then offer a systematic derivation based on relative entropies. Second, we apply the propagator likelihood to pedagogical examples with both discrete and continuous configurations, specifically the asymmetric simple exclusion process (ASEP) and the Ornstein-Uhlenbeck process. Finally, we address the more challenging problem of inferring the parameters of two prominent models from statistical physics and theoretical biology: the kinetic Ising model and replicator dynamics.
\section{The propagator likelihood}
Suppose we knew the functional dependence of the steady-state distribution $p_\Theta(x)$ on the model parameters $\Theta$. Then a standard approach would be to maximise the log-likelihood of the samples
\begin{align}
\mathcal{L}(\Theta) = \frac{1}{M} \sum_{\mu=1}^M \log p_\Theta(x^\mu) = \sum_x \hat{p}(x) \log p_\Theta(x) \ ,
\label{eq:log-likelihood}
\end{align}
where the set of sampled configurations characterises the empirical distribution $\hat{p}(x)$ with probability mass function
\begin{equation}
\hat{p}(x)=\frac{1}{M}\sum_{\mu=1}^M \delta_{x^\mu,x} \ ,
\label{eq:sampled-dist-def}
\end{equation}
and $\delta_{x^\mu,x}$ denotes a Kronecker-$\delta$.
However, in non-equilibrium systems we frequently do not know the steady-state distribution. Non-equilibrium systems lack detailed balance, so the steady state is not described by the Boltzmann distribution and lacks a simple characterisation. Our solution to this inference problem is based on exploiting one elementary fact: since the distribution $p_\Theta$ is stationary, it remains unchanged if we propagate it forward in time by an arbitrary time interval. Thus, we can replace the steady-state distribution $p_\Theta(x)$ in the log-likelihood function \eqref{eq:log-likelihood} with the same distribution propagated forward in time $\sum_y \prop p_\Theta(y)$.
The propagator $\prop$ is the conditional probability of observing the system in configuration $x$ at time $t=\tau$, given it was in configuration $y$ at time $t=0$.
By replacing the unknown steady-state distribution $p_\Theta(y)$ with the empirical distribution $\hat{p}(y)$, we arrive at the propagator likelihood
\begin{align}
\mathcal{PL}(\Theta;\tau)&=\sum_x \hat{p}(x) \log \sum_y \prop \hat{p}(y) \notag \\
&=\frac{1}{M} \sum_{\mu=1}^M \log \left(\frac{1}{M}\sum_{\nu=1}^M p_\Theta(x^\mu,\tau |x^\nu,0)\right) \ .
\label{eq:propagator-likelihood-def}
\end{align}
In this way, we have shifted the parameter-dependence from the (unknown) steady-state distribution $p_\Theta(x)$ to the (known) propagator $p_\Theta(x,\tau |y,0)$. For models with continuous configurations, $p_\Theta(x^\mu,\tau |x^\nu,0)$ is the transition probability density.
The propagator likelihood~\eqref{eq:propagator-likelihood-def} has a straightforward probabilistic interpretation:
$\frac{1}{M}\sum_{\nu=1}^M p_\Theta(x,\tau |x^\nu,0)$ is a probability distribution over $x$, conditional on the sampled configurations $\{x^\nu\}$. The propagator likelihood is the corresponding log-likelihood of this probability distribution, evaluated for $M$ independent draws of the empirically observed configurations and rescaled by $M$.
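For a system with discrete configurations, evaluating the propagator likelihood~\eqref{eq:propagator-likelihood-def} takes only a few lines. The following minimal sketch (Python, assuming NumPy; the three-state column-stochastic propagator is a hypothetical toy example, not one of the models treated below) builds the empirical distribution $\hat{p}$ and evaluates $\mathcal{PL}$:

```python
import numpy as np

def propagator_likelihood(samples, propagator):
    """Evaluate the propagator likelihood for discrete configurations.

    samples    : 1D integer array of independently sampled configurations x^mu
    propagator : matrix P[x, y] = p_Theta(x, tau | y, 0), columns sum to one
    """
    n_states = propagator.shape[0]
    # empirical distribution \hat{p}(x)
    p_hat = np.bincount(samples, minlength=n_states) / len(samples)
    # time-propagated empirical distribution q(x) = sum_y P[x, y] \hat{p}(y)
    q = propagator @ p_hat
    # PL = sum_x \hat{p}(x) log q(x); terms with \hat{p}(x) = 0 vanish
    observed = p_hat > 0
    return np.sum(p_hat[observed] * np.log(q[observed]))

# toy usage with a hypothetical 3-state column-stochastic propagator
rng = np.random.default_rng(0)
P = rng.random((3, 3))
P /= P.sum(axis=0, keepdims=True)
samples = rng.integers(0, 3, size=1000)
pl = propagator_likelihood(samples, P)
```

Working with the empirical distribution rather than the double sum over sample pairs reduces the cost from $O(M^2)$ to a cost set by the number of distinct observed configurations.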
In the limit $\tau \to \infty$, the propagator likelihood~\eqref{eq:propagator-likelihood-def} approaches the log-likelihood~\eqref{eq:log-likelihood}, since $\lim_{\tau\rightarrow \infty}p_\Theta(x,\tau|y,0)\equiv p_\Theta(x)$. However, the complexity of calculating the propagator increases with $\tau$.
In principle, the propagation time interval $\tau$ is arbitrary: it parameterizes different measures of how close a given empirical probability distribution is to being stationary under a particular set of model parameters. Different choices of $\tau$ will be discussed in sections~\ref{sec:OUP} and~\ref{sec:kineticIsing}.
Maximizing the propagator likelihood does not involve sampling the probability distribution at different times, but seeks model parameters that would leave the empirical distribution invariant, if one did propagate it forward in time.
Correspondingly, although rates of transitions between configurations $x^\nu$ and $x^\mu$ appear in~\eqref{eq:propagator-likelihood-def}, these transitions are entirely fictitious: the empirical configurations $\{x^\nu\}$ are sampled independently from the non-equilibrium steady state.
In the following, we will assume that the parameters maximizing the propagator likelihood are unique. For a particular model, this assumption can be checked by calculating the propagator likelihood analytically and verifying its concavity.
\begin{figure}[h]
\center
\includegraphics[width=0.5\textwidth]{propagator_concept.eps}
\caption{The propagator likelihood. The set of independent samples $\{x^\mu\}_{\mu=1}^M$ characterises the empirical distribution $\hat{p}$ defined by equation~\eqref{eq:sampled-dist-def}, shown on the left. We use the transition probabilities $\prop$ to propagate $\hat{p}$ forward in time by an arbitrary interval $\tau$ to generate a new distribution $\q$ (see Eq.~\eqref{eq:d-def}), shown on the right. The functional form of the propagator is assumed to be known, but it is parametrised by a set of unknown parameters $\Theta$. Demanding stationarity of the empirical distribution, we can estimate the underlying parameters $\Theta^\text{true}$ by finding the parameters $\Theta^\text{inf}$ that minimise the distance between $\hat{p}$ and $\q$ as measured with relative entropy. This is equivalent to maximising the propagator likelihood (see main text).}
\label{fig:concept}
\end{figure}
\subsection{Minimising relative entropy}
A second interpretation of the propagator likelihood can be found by rephrasing parameter inference from a steady state as finding a set of parameters $\Theta$ such that the propagator $p_\Theta(x,\tau |y,0)$ is compatible with the empirical distribution $\hat{p}$ being stationary~(see Fig.~\ref{fig:concept}). Demanding stationarity corresponds to requiring that $\hat{p}$ is in some sense close to a distribution $q_{\Theta,\tau}$ generated by propagating the empirical distribution for an arbitrary time interval $\tau$,
\begin{equation}
q_{\Theta,\tau}(x)=\sum_y p_\Theta(x,\tau |y,0) \hat{p}(y) \ .
\label{eq:d-def}
\end{equation}
To quantify this notion of closeness for discrete configurations, we use the relative entropy or Kullback-Leibler divergence~\cite{Kullback1951}
\begin{equation}
D(\hat{p} \| q_{\Theta,\tau})=\sum_x \hat{p}(x) \log \frac{\hat{p}(x)}{q_{\Theta,\tau}(x)} \ .
\label{eq:KL}
\end{equation}
Inserting the probability mass function $q_{\Theta,\tau}(x)$ defined by~\eqref{eq:d-def} into the relative entropy, we find that the relative entropy can be written as the negative sum of the Shannon entropy of the empirical distribution, $S(\hat{p})=-\sum_x \hat{p}(x) \log \hat{p}(x)$ and the propagator likelihood~\eqref{eq:propagator-likelihood-def}:
\begin{equation}
D(\hat{p} \| q_{\Theta,\tau})=-S(\hat{p})-\mathcal{PL}(\Theta;\tau) \ .
\label{eq:KL-decomposition}
\end{equation}
The first term depends only on the sampled configurations and is independent of the model parameters; thus minimising the relative entropy with respect to $\Theta$ is equivalent to maximising the propagator likelihood. Furthermore, due to the positivity of relative entropy, the propagator likelihood is bounded from above by the negative Shannon entropy, and this bound will be saturated only for a model that makes the empirical distribution exactly stationary. The propagator likelihood~\eqref{eq:propagator-likelihood-def} thus emerges from a variational approach aiming to find the model parameters that are most consistent with the sampled distribution being the steady state.
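The decomposition~\eqref{eq:KL-decomposition} can be verified numerically. A minimal sketch, assuming NumPy, with an arbitrary column-stochastic matrix standing in for the propagator and arbitrary sample counts standing in for the data:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4
# column-stochastic single-step propagator: P[x, y] = p(x, tau | y, 0)
P = rng.random((n, n))
P /= P.sum(axis=0, keepdims=True)
# empirical distribution from hypothetical sample counts (all states observed)
counts = rng.integers(1, 50, size=n)
p_hat = counts / counts.sum()
q = P @ p_hat                            # propagated empirical distribution
# the three quantities in the decomposition D = -S - PL
D = np.sum(p_hat * np.log(p_hat / q))    # relative entropy
S = -np.sum(p_hat * np.log(p_hat))       # Shannon entropy of \hat{p}
PL = np.sum(p_hat * np.log(q))           # propagator likelihood
```

Since $D \geq 0$, the bound $\mathcal{PL} \leq -S(\hat{p})$ holds for any choice of propagator, as stated in the main text.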
A similar argument can be made also for models with continuous configurations $x\in\mathbb{R}^d$. We consider the differential relative entropy $D=\int \d x \ \hat{p}_s(x) \log (\hat{p}_s(x)/q_{\Theta,\tau,s}(x))$, which can be computed by estimating the probability density of the steady state from the samples via a Gaussian mixture model $\hat{p}_s(x)=\frac{1}{M}\sum_{\mu=1}^M \exp(-(x-x^{\mu})^2/2s^2)/(2\pi s^2)^{d/2}$. Here, $s>0$ is the width of the Gaussians in the mixture model, and $q_{\Theta,\tau,s}(x)=\int \d y \ p_{\Theta}(x,\tau|y,0) \hat{p}_s(y)$ denotes the time-propagated density estimate. Minimising this estimate of the differential relative entropy is then equivalent to maximising a quantity that converges to the propagator likelihood for $s\rightarrow 0$.
It is easy to show that the maximum propagator likelihood estimate $\Theta^\text{inf}$ converges to the underlying parameters $\Theta^\text{true}$ in the limit of large sample sizes: for $M\rightarrow \infty$, the empirical distribution $\hat{p}(y)$ converges to the steady-state distribution $p_{\Theta^\text{true}}(y)$. Hence, the propagator likelihood converges to $\mathcal{PL}(\Theta;\tau,{M=\infty})=\sum_xp_{\Theta^\text{true}}(x) \ln \sum_y p_\Theta(x,\tau |y,0) p_{\Theta^\text{true}}(y)$. According to~\eqref{eq:KL-decomposition}, this function has its maximum over $\Theta$ where the relative entropy between the underlying distribution $p_{\Theta^\text{true}}(x)$ and its propagated version $\sum_y p_\Theta(x,\tau |y,0) p_{\Theta^\text{true}}(y)$ is minimal. This minimum is realised for $\Theta=\Theta^\text{true}$, since the relative entropy is non-negative and the steady state by definition remains unchanged when propagated with the parameter value $\Theta=\Theta^\text{true}$.
\FloatBarrier
\section{Models with discrete configurations}
\subsection{Discrete time: a simple two-configuration model}
\begin{figure}[hbtp!]
\includegraphics[width=0.45\textwidth]{two_state.eps}
\caption{The propagator likelihood for a simple two-configuration system. The inset shows the single-step dynamics of the system with configurations 0 and 1, controlled by the hopping probability $r\in(0,1)$. In the main figure, the solid lines show the propagator likelihood for varying propagation time intervals $\tau$. The dashed line shows the log-likelihood \eqref{eq:log-likelihood}, which corresponds to an infinite propagation time interval. The maximum likelihood estimate of the hopping probability, ${r}^\text{inf}=\frac{1-\hat{p}(0)}{\hat{p}(0)}$, is marked on the top axis and coincides with the maximum for all propagator likelihoods with an odd number of time steps $\tau$ (see the main text for the case of even time steps).}
\label{fig:two-state}
\end{figure}
To illustrate the propagator likelihood with a toy example, we consider a system with only two configurations, denoted by $0$ and $1$ (see inset of Fig.~\ref{fig:two-state}). At each time step, if the system is in configuration $1$, it moves to configuration $0$. If it is in configuration $0$, it moves to configuration $1$ with probability $r\in(0,1)$ or remains in configuration $0$ with probability $1-r$. The steady-state distribution is easily computed, giving $p_r(0)=1/(1+r)$ and $p_r(1)=1-p_r(0)=r/(1+r)$.
We are now given samples $\{x^\mu\}_{\mu=1}^M\in \{0,1\}^M$ taken independently from the steady state and want to infer the model parameter $r$. The empirical distribution is given by the frequencies of the two configurations, $\hat{p}(0)=\frac{1}{M}\sum_{\mu=1}^M \delta_{0,x^\mu}$ and $\hat{p}(1)=1-\hat{p}(0)$. Since we know the steady state for this particular model, we can infer $r$ from the relationship $\langle\hat{p}(0)\rangle=1/(1+r)$, yielding $r^\text{inf}=(1-\hat{p}(0))/\hat{p}(0)$. For comparison, we also use the propagator likelihood~\eqref{eq:propagator-likelihood-def} with the single-step propagator $p_r(x,\tau=1|y,0)=\delta_{y,1}\delta_{x,0}+\delta_{y,0}(r\delta_{x,1}+(1-r)\delta_{x,0})$, giving
\begin{align}
\mathcal{PL}(r;1)&=\hat{p}_0 \log (\underbrace{(1-r)\hat{p}_0}_{0\rightarrow 0} +\underbrace{\hat{p}_1}_{1\rightarrow 0})+\hat{p}_1\log(\underbrace{r\hat{p}_0}_{0\rightarrow 1}) \notag \\
&=\hat{p}_0\log(1-r\hat{p}_0)+(1-\hat{p}_0)\log(r\hat{p}_0) \ .
\end{align}
Maximising the propagator likelihood analytically with respect to $r$ by setting $\frac{\d \mathcal{PL}}{\d r}(r^\text{inf})=0$, we recover the same result as obtained above by analysing the known steady-state distribution. Indeed, for odd propagation time intervals, the propagator likelihood shows a unique maximum at the same point where the likelihood has its maximum, $r^\text{inf}=\frac{1-\hat{p}_0}{\hat{p}_0}$. Also, the propagator likelihood approaches the log-likelihood for increasing $\tau$, as expected. For even propagation time intervals, however, a second (global) maximum occurs at the boundary $r=1$: since the choice $r=1$ makes the two configurations simply exchange their probabilities in each step, the Markov chain loses its ergodicity and becomes periodic. In this case, \emph{any} distribution is stationary over an even number of time steps.
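The single-step maximisation can also be reproduced numerically. A minimal sketch (Python, assuming NumPy; sample size and grid resolution are arbitrary choices), drawing samples from the known steady state, maximising $\mathcal{PL}(r;1)$ on a grid, and comparing against the closed-form estimate $r^\text{inf}=(1-\hat{p}_0)/\hat{p}_0$:

```python
import numpy as np

rng = np.random.default_rng(2)
r_true = 0.4
# steady state of the two-configuration model: p(0) = 1/(1+r), p(1) = r/(1+r)
p0 = 1.0 / (1.0 + r_true)
M = 100_000
samples = rng.random(M) > p0              # True means configuration 1
p0_hat = 1.0 - samples.mean()             # empirical frequency of configuration 0

def pl_single_step(r):
    # PL(r; 1) = p0_hat log(1 - r p0_hat) + (1 - p0_hat) log(r p0_hat)
    return p0_hat * np.log(1 - r * p0_hat) + (1 - p0_hat) * np.log(r * p0_hat)

# grid search over the hopping probability r in (0, 1)
r_grid = np.linspace(0.01, 0.99, 981)
r_inf = r_grid[np.argmax(pl_single_step(r_grid))]
r_expected = (1 - p0_hat) / p0_hat        # closed-form maximiser
```

The grid maximiser agrees with the analytic result up to the grid resolution, and both converge to $r^\text{true}$ as $M$ grows.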
While stationarity with respect to a single time step is both necessary and sufficient to define the steady state, stationarity with respect to longer propagation time intervals is necessary but not sufficient. Hence, spurious maxima of the likelihood can appear when using longer propagation time intervals.
\subsection{Continuous time: the asymmetric simple exclusion process (ASEP)}
Markov processes with discrete configurations in continuous time are characterised by instantaneous transition rates between distinct configurations $W_\Theta(x|y)=\lim_{\tau \rightarrow 0} p_\Theta(x,\tau|y,0)/\tau \ , \ (x\neq y)$. The system hops away from configuration $y$ at a random time that is exponentially distributed with parameter $-W_\Theta(y|y)\equiv\sum_{x\neq y}W_\Theta(x|y)$. For the purpose of inferring the model parameters, it is convenient to map the continuous-time process onto a discrete-time process with the same steady state. This can be achieved by choosing the single-step transition matrix
\begin{equation}
\tilde{p}_\Theta(x,\tau=1|y,0)=\delta_{x,y}+\lambda W_\Theta(x|y) \ .
\label{eq:discrete_time_version}
\end{equation}
The parameter $\lambda$ affects the overall rate at which transitions occur. Choosing $0<\lambda<[\max_{y} \{-W(y|y)\}]^{-1}$ ensures a well-defined stochastic matrix. Since the steady-state distribution $p_{\Theta}(y)$ itself is not associated with a time scale, the choice of $\lambda$ is in principle arbitrary.
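The embedding~\eqref{eq:discrete_time_version} can be sketched in a few lines (assuming NumPy; a random dense rate matrix stands in for $W_\Theta$). The check below confirms that the resulting matrix is stochastic and shares the stationary distribution of the continuous-time process:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 5
# random rate matrix: W[x, y] >= 0 for x != y, columns sum to zero
W = rng.random((n, n))
np.fill_diagonal(W, 0.0)
np.fill_diagonal(W, -W.sum(axis=0))
# discrete-time embedding P = I + lambda W with 0 < lambda < 1/max escape rate
lam = 0.9 / np.max(-np.diag(W))
P = np.eye(n) + lam * W
assert np.allclose(P.sum(axis=0), 1.0) and (P >= 0).all()
# stationary distribution: the (normalised) null vector of W
w, V = np.linalg.eig(W)
pi = np.real(V[:, np.argmin(np.abs(w))])
pi /= pi.sum()
```

Because $P\pi = \pi + \lambda W\pi = \pi$ whenever $W\pi = 0$, any admissible $\lambda$ yields the same steady state, which is why its choice is arbitrary for the purpose of inference.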
As an example of a model with continuous time, we consider the asymmetric simple exclusion process (ASEP) on a ring with asynchronous updates (see inset of Fig.~\ref{fig:exclusion}). The ASEP is a simple model of a driven lattice gas and has been applied to traffic flow, surface growth, and directed paths in random media~\cite{Krug1996,Evans1997,Derrida1998}.
The steady-state distribution in 1D can be calculated analytically in terms of matrix products~\cite{Derrida1998,Evans1997}. In higher dimensions, however, there is no such systematic approach and, to the best of our knowledge, the steady-state distribution is unknown.
\begin{figure}[h!]
\includegraphics[width=0.45\textwidth]{exclusion.eps}
\caption{Reconstruction of hopping rates in the asymmetric simple exclusion process (ASEP). The inset schematically shows the dynamics: $K$ particles move on a periodic one-dimensional lattice with $N>K$ lattice sites, see text. In the main figure, we plot the relative mobilities $\hat{\mu}^\text{inf}_i$ inferred using the propagator likelihood versus the underlying relative mobilities $\hat{\mu}^\text{true}_i={\mu}^\text{true}_i/\sum_j {\mu}^\text{true}_j$ that were used to generate the data. We simulated $K=10$ particles hopping on a lattice with $N=15$ sites and took $M=10^{10}$ samples independently from the steady state. The underlying mobilities $\mu_i$ were drawn independently from a uniform distribution on the unit interval $(0,1)$.}
\label{fig:exclusion}
\end{figure}
The model consists of $K$ particles moving on a periodic one-dimensional lattice with $N>K$ lattice sites. Each lattice site can be occupied by at most one particle. Particles labelled $i=1,\ldots,K$ independently attempt to jump one step in the clockwise direction at a rate $\mu_i$, which is called the intrinsic mobility or hopping rate of a particle.
The configuration of the system can be characterised by the number of free lattice sites in front of each particle, $\mathbf{n}=(n_1,\hdots,n_K)\in (\mathbb{N}_0)^K$, with the restriction that the particle gaps add up to the number of free lattice sites: $n_1+n_2+\hdots+n_K=N-K$.
For the transition $\mathbf{n}=(n_1,\hdots,n_K)\rightarrow \mathbf{n}'=(n'_1,\hdots,n'_K)$ between two distinct configurations there is a non-zero transition rate only if the configurations are connected by the jump of a single particle $i$, \textit{i.e.} all gaps are identical except for the gap in front of particle $i$, which must be decreased by one, $n'_i=n_{i}-1$, and the gap behind particle $i$, which must be increased by one, $n'_{i-1}=n_{i-1}+1$. The transition rate is then simply the hopping rate of the particle $W_{\boldsymbol{\mu}}(\mathbf{n}'|\mathbf{n})={\mu}_i$.
To infer the parameters, we define a discrete-time version of the process with transition probabilities defined by \eqref{eq:discrete_time_version}. We choose $\lambda$ such that the hopping rates add to one, $\lambda=(\mu_1+\mu_2+\hdots+\mu_K)^{-1}$ in \eqref{eq:discrete_time_version}. The steady-state distribution is characterised by the relative hopping rates $\hat{\mu}_i \equiv \mu_i/\sum_j \mu_j$. The single-step propagator likelihood of the discrete-time process then reads
\begin{align}
\mathcal{PL}(\hat{\boldsymbol{\mu}},1)=\sum_{\mathbf{n}'} \hat{p}(\mathbf{n}') \log \left\{ \hat{p}(\mathbf{n}')+\sum_{\mathbf{n}} W_{\hat{\boldsymbol{\mu}}}(\mathbf{n}'|\mathbf{n})\hat{p}(\mathbf{n})\right\} \ .
\end{align}
We use this result to evaluate the propagator likelihood~\eqref{eq:propagator-likelihood-def} and infer the relative mobilities $\hat{\mu}_i$. As an example, we consider a system of $K=10$ particles hopping on $N=15$ lattice sites. The particle mobilities $\mu_i$ are independently and uniformly drawn from the interval $(0,1)$. We generate $M=10^{10}$ Monte Carlo samples, recorded every $10$ jumps after an initial settling time interval of $10^5$ jumps to reach the steady state. We then maximise the propagator likelihood numerically using the sequential least squares programming algorithm as implemented in the SciPy library~\cite{SciPy}. In Fig.~\ref{fig:exclusion} we plot the inferred relative mobilities versus the relative mobilities used to generate the samples.
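The scheme can be reproduced in miniature. The sketch below (assuming NumPy; the small system size $K=3$, $N=5$ and a grid search over the simplex replace the sequential least squares programming used for Fig.~\ref{fig:exclusion}) enumerates the gap configurations, uses the exact steady state in place of $\hat{p}$ (the $M\rightarrow\infty$ limit), and recovers the relative mobilities by maximising the single-step propagator likelihood:

```python
import itertools
import numpy as np

K, N = 3, 5                               # particles, lattice sites
mu_true = np.array([0.2, 0.3, 0.5])       # relative mobilities, sum to one

# configurations: gaps (n_1, ..., n_K) with n_1 + ... + n_K = N - K
configs = [c for c in itertools.product(range(N - K + 1), repeat=K)
           if sum(c) == N - K]
index = {c: a for a, c in enumerate(configs)}

def rate_matrix(mu):
    """W[b, a]: rate of the jump config a -> config b. A jump of particle i
    shrinks the gap in front of it and grows the gap behind it (periodic)."""
    W = np.zeros((len(configs), len(configs)))
    for a, n in enumerate(configs):
        for i in range(K):
            if n[i] > 0:                  # site in front of particle i is free
                m = list(n)
                m[i] -= 1
                m[i - 1] += 1             # i - 1 wraps around on the ring
                W[index[tuple(m)], a] += mu[i]
    np.fill_diagonal(W, -W.sum(axis=0))
    return W

# exact steady state, playing the role of \hat{p} in the M -> infinity limit
w, V = np.linalg.eig(rate_matrix(mu_true))
p_hat = np.real(V[:, np.argmin(np.abs(w))])
p_hat /= p_hat.sum()

def pl(mu):
    """Single-step propagator likelihood; lambda = 1/sum(mu) = 1 here."""
    q = p_hat + rate_matrix(mu) @ p_hat
    return np.sum(p_hat * np.log(q))

# grid search over relative mobilities on the simplex
best, mu_inf = -np.inf, None
for m1 in np.arange(0.05, 0.95, 0.01):
    for m2 in np.arange(0.05, 0.95 - m1, 0.01):
        val = pl(np.array([m1, m2, 1 - m1 - m2]))
        if val > best:
            best, mu_inf = val, (m1, m2, 1 - m1 - m2)
```

Since the exact steady state is used as the empirical distribution, the propagator likelihood attains its upper bound $-S(\hat{p})$ exactly at the true relative mobilities.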
\FloatBarrier
\section{Models with continuous configurations}
Markov processes with continuous configurations pose an additional challenge: finite-time propagators are generally not known explicitly. Instead, they are characterised indirectly as the solution of a Fokker-Planck equation. Rather than solving a Fokker-Planck equation, which for systems with a large number of degrees of freedom is generally infeasible, we proceed by approximating the propagator for short time intervals $\tau$ via a linearisation of the corresponding Langevin equation (LE) that describes the stochastic dynamics of the model.
Again, we first demonstrate this procedure using a toy model. We consider one of the simplest processes with continuous configurations, the Ornstein-Uhlenbeck process (OUP), which describes the Brownian dynamics of an overdamped particle in a quadratic potential. Note that, again, for this particular case the steady-state distribution is known exactly, so one could infer the model parameters using the standard maximum likelihood approach. We use this case to illustrate the propagator likelihood before turning to more complex models where the likelihood-based approach is not feasible.
\subsection{The Ornstein-Uhlenbeck process}
\label{sec:OUP}
Consider a single particle diffusing in a one-dimensional harmonic potential $U(x)=\frac{b}{2}x^2$ with diffusion constant $\sigma^2$.
A physical realisation of this model is a colloidal particle in solution being held in place by optical tweezers and confined to a one-dimensional channel. The dynamics of the particle is modelled by the Langevin equation
\begin{equation}
\frac{\d x}{\d t}= -b x +\sigma \xi(t) \ ,
\label{eq:ou-langevin}
\end{equation}
where the random force $\xi(t)$ describes $\delta$-correlated white noise interpreted in the It{\^o} convention.
\begin{figure}[hbtp]
\includegraphics[width=0.45\textwidth]{ou.eps}
\caption{Parameter inference in the Ornstein-Uhlenbeck process. (a) The inset shows a schematic plot of the model describing a single particle moving in the harmonic potential $U(x)=bx^2/2$. In the main figure, we show the relative reconstruction error $\epsilon=|\Theta^\text{inf}-\Theta^\text{true}|/\Theta^\text{true}$ of the parameter $\Theta=b/\sigma^2$ (characterising the steady state) versus the dimensionless propagation time interval $\tau$ used in the propagator for sample sizes $M=10^3$ ($\blacksquare$), $M=10^4$ ($\blacktriangle$), and $M=10^5$ ($\bullet$). The solid lines with markers show the reconstruction errors for the approximate short-time propagator, the dashed lines indicate the reconstruction errors for the exact finite-time propagator. \newline
(b) shows the estimated rate of change of the inferred parameter with respect to the propagation time interval $\tau$. The rates of change are computed using forward difference quotients $|\partial \Theta^\text{inf}/\partial \tau(\tau_i)|\approx |\Theta^\text{inf}(\tau_{i}+\Delta \tau)- \Theta^\text{inf}(\tau_{i})|/\Delta \tau$ and are shown on the vertical axis for the differentiation step size $\Delta \tau=10^{-3}$. The minimal rate of change corresponds to the optimal choice of the propagation time interval (see main text).\newline
The data was generated by independent sampling from the stationary distribution, \textit{i.e.} a centred Gaussian with variance $\sigma^2/(2b)=1/4$. In order to remove fluctuations between different sample sets $\{x^\mu\}_{\mu=1}^M$ and demonstrate the dependence of the average error on the sample size $M$ and propagation time interval $\tau$, the results were averaged over 50 independent sample sets. The minima of the reconstruction error and the rate of change coincide also for individual sample sets, while the position of the minima may vary across sample sets. }
\label{fig:ornstein-uhlenbeck}
\end{figure}
As for the exclusion process, one model parameter must be eliminated by rescaling time, since the steady-state distribution is time-independent. We rescale time to be dimensionless with $t'=t \sigma^2$, so that the particle has unit diffusivity. To calculate the propagator likelihood for short time intervals $\tau \ll 1$, we linearise the LE~\eqref{eq:ou-langevin} in time
\begin{equation}
x(\tau)\approx x(0)-\frac{b}{\sigma^2} x(0) \tau +\int_0^{\tau} \d t' \xi(t') \ .
\label{eq:ou-langevin-linearized}
\end{equation}
Since the integrated white noise $\int_0^{\tau} \d t' \xi(t')$ is normally distributed with mean 0 and variance $\tau$, we obtain an approximate short-time Gaussian propagator
\begin{equation}
p_{b/\sigma^2}(x,\tau|y,0) \approx \frac{\exp\left(-[x-\overline{x}]^2/2\tau\right)}{\sqrt{2\pi \tau}} \ ,
\label{eq:ou-approximate-propagator}
\end{equation}
where $\overline{x}=y-(b/\sigma^2)y\tau$ is the most likely future position of the particle.
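The accuracy of the approximation~\eqref{eq:ou-approximate-propagator} can be checked against the exact OU propagator, a standard textbook result with mean $y e^{-\Theta\tau}$ and variance $(1-e^{-2\Theta\tau})/(2\Theta)$ in units of unit diffusivity. A minimal sketch, assuming NumPy; the parameter values are arbitrary:

```python
import numpy as np

theta = 2.0        # Theta = b / sigma^2, in time units where diffusivity is 1
tau = 0.01         # short propagation time interval
y = 0.3            # starting position
x = np.linspace(-2.0, 2.0, 2001)

# short-time Gaussian propagator from the linearised Langevin equation
mean_approx, var_approx = y * (1 - theta * tau), tau
p_approx = np.exp(-(x - mean_approx) ** 2 / (2 * var_approx)) \
           / np.sqrt(2 * np.pi * var_approx)

# exact OU propagator (textbook result, quoted here only for comparison)
mean_exact = y * np.exp(-theta * tau)
var_exact = (1 - np.exp(-2 * theta * tau)) / (2 * theta)
p_exact = np.exp(-(x - mean_exact) ** 2 / (2 * var_exact)) \
          / np.sqrt(2 * np.pi * var_exact)
```

For $\Theta\tau \ll 1$ the two densities agree closely in absolute terms; the discrepancy grows with $\tau$, which is the source of the bias discussed below.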
Such a Gaussian form of the propagator emerges for any linearised LE with white noise and is not specific to the OUP. For coloured and multiplicative noise, $\xi(t)\rightarrow f(x(t),t)\eta(t)$, where $f$ is some function and the random force $\eta(t)$ has a finite correlation time, we can proceed similarly. In this case, the normal distribution of the integrated white noise is replaced with the appropriate distribution of the integrated coloured noise ${\int_0^{\tau} \d t' f(x(t'),t')\eta(t') \approx f(x(0),0)\int_0^{\tau} \d t' \eta(t')}$.
Inserting the short-time propagator~\eqref{eq:ou-approximate-propagator} into the propagator likelihood~\eqref{eq:propagator-likelihood-def}, we perform a one-dimensional maximisation of the propagator likelihood to infer the parameter $\Theta=b/\sigma^2$. Fig.~\ref{fig:ornstein-uhlenbeck}(a) shows the relative reconstruction error versus the dimensionless propagation time interval $\tau$ for various sample sizes, both for the short-time propagator and for the exact finite-time propagator. The non-monotonic behaviour of the error for the short-time propagator shows that the optimal choice for $\tau$ involves a trade-off: At short time intervals $\tau$, the distances typically crossed during the interval $\tau$ are small. In this case, the sum over pairs of sampled configurations in the propagator likelihood~\eqref{eq:propagator-likelihood-def} is dominated by few transitions with small steps, and, in the limit $\tau \to 0$ it is dominated by transitions of the type $x^{\mu} \to x^{\mu}$.
For this reason, the parameter inference at small values of $\tau$ is more strongly affected by sampling fluctuations than at large values of $\tau$. At large values of $\tau$, on the other hand, the approximation used to derive the short-time propagator \eqref{eq:ou-approximate-propagator} becomes invalid. As a result, both the optimal value of $\tau$ and the total reconstruction error decrease as the sample size is increased.
The exact finite-time propagator exhibits only sampling fluctuations, so the reconstruction error decreases monotonically with $\tau$, converging to the maximum likelihood estimate at large $\tau$. Note that the results for the approximate and exact propagators do not converge for $\tau \rightarrow 0$, since the \emph{relative difference} of the propagators converges to $0$ only for the peak $x=y$, even though the \emph{absolute difference} converges to $0$ for all values of $x$.
\paragraph{Choosing the optimal propagation time interval.}
The non-monotonic behaviour of the reconstruction error $\epsilon=|\Theta^\text{inf}-\Theta^\text{true}|/\Theta^\text{true}$ raises the question how to choose the optimal propagation time interval without prior knowledge of the underlying parameter $\Theta^\text{true}$. We find an answer by assuming that the error is a smooth function of the propagation time interval: we seek the minimal error by demanding $0=\partial \epsilon/\partial \tau =\frac{\text{sgn}( \Theta^\text{inf}- \Theta^\text{true})}{|\Theta^\text{true}|} \frac{\partial \Theta^\text{inf}}{\partial \tau} \sim \partial \Theta^\text{inf}/\partial \tau$. The error derivative will become small only for $\partial \Theta^\text{inf}/\partial \tau \rightarrow 0$.
The latter quantity can be estimated directly from the data by repeating the inference for a set of propagation time intervals $\{(\tau_i,\tau_i+\Delta \tau)\}$ and computing the forward difference quotients $\partial\Theta^\text{inf}/\partial \tau(\tau_i)\approx[\Theta^\text{inf}(\tau_{i}+\Delta \tau)- \Theta^\text{inf}(\tau_{i})]/\Delta \tau$. Since estimating the derivative from the data will incur numerical errors, we relax the condition $0=\partial\Theta^\text{inf}/\partial \tau$ and demand only that $|\partial\Theta^\text{inf}/\partial \tau|$ is minimal. In Fig.~\ref{fig:ornstein-uhlenbeck}(b) we show that these minima indeed coincide with the optimal choice of $\tau$ as judged from the reconstruction error shown in Fig.~\ref{fig:ornstein-uhlenbeck}(a).
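The whole procedure, maximising the propagator likelihood with the short-time propagator and then selecting $\tau$ by the minimal rate of change, can be sketched as follows (assuming NumPy; the sample size, $\tau$ values and $\Theta$ grid are arbitrary small choices, far below those used for Fig.~\ref{fig:ornstein-uhlenbeck}):

```python
import numpy as np

rng = np.random.default_rng(4)
theta_true = 2.0          # Theta = b / sigma^2, in units of unit diffusivity
M = 800
# independent samples from the stationary distribution N(0, 1/(2 Theta))
x = rng.normal(0.0, np.sqrt(1.0 / (2 * theta_true)), size=M)
x_mu = x[:, None]         # target configurations x^mu
x_nu = x[None, :]         # starting configurations x^nu

def theta_inf(tau, grid=np.arange(0.5, 4.0, 0.05)):
    """Maximise the propagator likelihood built from the short-time propagator."""
    best, arg = -np.inf, None
    for th in grid:
        # log short-time Gaussian propagator, mean x^nu (1 - Theta tau), var tau
        logp = -(x_mu - x_nu * (1 - th * tau)) ** 2 / (2 * tau) \
               - 0.5 * np.log(2 * np.pi * tau)
        m = logp.max(axis=1)          # stabilised log-mean-exp over x^nu
        pl = np.mean(m + np.log(np.mean(np.exp(logp - m[:, None]), axis=1)))
        if pl > best:
            best, arg = pl, th
    return arg

# repeat the inference for several propagation time intervals and select
# the interval with the minimal rate of change |dTheta_inf / dtau|
taus = np.array([0.02, 0.04, 0.08, 0.16])
thetas = np.array([theta_inf(t) for t in taus])
rates = np.abs(np.diff(thetas) / np.diff(taus))   # forward difference quotients
theta_opt = thetas[np.argmin(rates)]
```

With this modest sample size the estimate is noisy, but the selection rule reliably avoids the large-$\tau$ regime where the short-time approximation breaks down.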
\FloatBarrier
\section{Non-equilibrium models in statistical physics and theoretical biology}
We now turn to non-equilibrium applications where the standard maximum likelihood approach is not feasible, as the steady-state distribution is unknown.
\subsection{The kinetic Ising model}
\label{sec:kineticIsing}
The kinetic Ising model consists of a set of $N$ binary spins $s_i=\pm1$, which interact with each other via couplings $J_{ij}$ and are subject to external fields $h_i$~(see inset of Fig.~\ref{fig:ising}). Crucially, the couplings are not symmetric ($J_{ij} \neq J_{ji}$ in general). A stochastic dynamics of this model is specified by the so-called Glauber dynamics~\cite{glauber1963a}: In each time step, a spin $i$ is chosen at random and its value $s_i(t+1)$ one time step later is updated according to the probability distribution
\begin{equation}
\label{eq:glaubersq}
p(\s_i(t+1)|\sss(t))=\frac{\exp\{\s_i(t+1) \theta_i(t)\}}{2
\cosh(\theta_i(t))} \ ,
\end{equation}
where the effective local field at time $t$ is
\begin{equation}
\label{eq:localfield}
\theta_i(t)=h_i+\sum_{j=1}^N J_{ij} \s_j(t) \ .
\end{equation}
The kinetic Ising model has been used to model gene regulatory and neural networks~\cite{Derrida1987a,hertz1991a,Bailly2010b}.
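With the updated spin chosen uniformly at random, as in the Glauber dynamics above, the single-step propagator takes a simple explicit form: at most one spin can change per step. A minimal sketch (Python, assuming NumPy; system size and parameter scales follow the caption of Fig.~\ref{fig:ising}), including a normalisation check over all $2^N$ configurations:

```python
import itertools
import numpy as np

rng = np.random.default_rng(5)
N = 4
# asymmetric couplings with standard deviation 1/sqrt(N), no self-interactions
J = rng.normal(0.0, 1.0 / np.sqrt(N), size=(N, N))
np.fill_diagonal(J, 0.0)
h = rng.normal(0.0, 1.0, size=N)        # external fields

def update_prob(s_new_i, s, i):
    """Glauber update probability for spin i, given the current configuration."""
    theta_i = h[i] + J[i] @ s           # effective local field
    return np.exp(s_new_i * theta_i) / (2 * np.cosh(theta_i))

def single_step_propagator(s_new, s):
    """p(s' | s) for random sequential updates: one spin is picked uniformly."""
    flips = np.flatnonzero(s_new != s)
    if len(flips) > 1:
        return 0.0                      # at most one spin changes per step
    if len(flips) == 1:
        i = flips[0]
        return update_prob(s_new[i], s, i) / N
    # s' == s: any spin may have been chosen and kept its current value
    return sum(update_prob(s[i], s, i) for i in range(N)) / N

# sanity check: the propagator is normalised over all 2^N configurations
states = [np.array(c) for c in itertools.product([-1, 1], repeat=N)]
s0 = states[0]
total = sum(single_step_propagator(s, s0) for s in states)
```

This propagator, inserted into~\eqref{eq:propagator-likelihood-def}, is what the maximisation over fields and couplings described below operates on.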
For a symmetric coupling matrix without self-couplings, the Glauber dynamics~\eqref{eq:glaubersq} converges to the equilibrium state characterised by the Boltzmann distribution ${p_B(\mathbf{s})=e^{-\mathcal{H}(\mathbf{s})}/Z}$ with the well-known Ising Hamiltonian ${\mathcal{H}(\mathbf{s})=-\sum_i s_i (h_i+\sum_{j>i} J_{ij}s_j)}$. For asymmetric couplings, however, Glauber dynamics~\eqref{eq:glaubersq} converges to a non-equilibrium steady state, which lacks detailed balance and is hard to characterise.
In recent work we have shown how the spin couplings $J_{ij}$ and external fields $h_i$ can be inferred from independent samples taken from the steady state by fitting couplings and fields to match the magnetisations, two-, and three-point correlations sampled in the data~\cite{Dettmer2016}. Here we demonstrate that the couplings can be inferred even more accurately with the propagator likelihood~\eqref{eq:propagator-likelihood-def}, which uses information from the full empirical distribution. We insert the single-step propagator~\eqref{eq:glaubersq} into the propagator likelihood~\eqref{eq:propagator-likelihood-def} and maximise the propagator likelihood with respect to the external fields $h_i$ and off-diagonal couplings $J_{ij}$ (we consider a model without self-interactions: $J_{ii}= 0$). For the last step, we use the Broyden-Fletcher-Goldfarb-Shanno algorithm as implemented in the SciPy library~\cite{SciPy}, and initialise the algorithm with the naive mean-field parameter estimate as described in~\cite{Dettmer2016}. Fig.~\ref{fig:ising} compares the relative error of coupling reconstruction $\epsilon=\| \mathbf{J}^\text{inf}-\mathbf{J}^{\text{true}}\|_2/\|\mathbf{J}^{\text{true}}\|_2$ based on the single-step propagator likelihood with the corresponding reconstruction error of fitting finite spin moments up to three-point correlations.
It turns out that parameter inference in the kinetic Ising model requires more samples than in the equilibrium inverse Ising problem. To achieve a relative reconstruction error of $10^{-2}$ for an equilibrium system of $N=10$ spins, the pseudolikelihood method requires of the order of $10^6$ samples~\cite{Aurell2012b}. In the non-equilibrium model considered here, we require at least $10^8$ independent samples for a similar reconstruction accuracy (see Fig.~\ref{fig:ising}). Naturally, inference in the kinetic Ising model becomes significantly easier if time-correlated data is available. For example, the Gaussian mean-field theory~\cite{Mezard2011a} requires only on the order of $10^6$ pairs of samples $\{\sss(t),\sss(t+1)\}$ to achieve a similar reconstruction accuracy for a system as large as 100 spins.
The reason is that, in the kinetic Ising model, couplings are not uniquely determined by pairwise correlations: many different sets of couplings can reproduce the same pairwise correlations. We therefore need information from higher-order spin correlations, which require more samples to determine accurately.
\begin{figure}[h]
\includegraphics[width=0.45\textwidth]{ising.eps}
\caption{The inference of couplings in the kinetic Ising model. The inset schematically shows a system of binary spins interacting via couplings $J_{ij}$ subject to external fields $h_i$ (not shown). In the main figure, we plot the relative error of couplings $\epsilon=\| \mathbf{J}^\text{inf}-\mathbf{J}^{\text{true}}\|_2/\|\mathbf{J}^{\text{true}}\|_2$ versus the number of independent samples used for inference, using (i) finite spin moments up to three-point correlations ($\bullet$) and (ii) the single-step propagator likelihood ($\blacksquare$). Both methods are exact, so the relative error decreases with the sample size as $\epsilon\sim M^{-1/2}$. The propagator likelihood (which uses the full set of configurations sampled) performs only a little better than the fit to the first three moments, showing that most information required for reconstruction is already contained in the first three moments. The underlying off-diagonal couplings were drawn independently from a Gaussian distribution with mean $0$ and standard deviation $1/\sqrt{N}$ (we excluded self-interactions, $J_{ii}= 0$), the external fields were drawn independently from a Gaussian distribution with mean $0$ and standard deviation $1$. The system size was $N=10$ spins. }
\label{fig:ising}
\end{figure}
\paragraph{Sparse networks.}
We now consider a particular situation, where the parameter inference requires fewer samples: sparse coupling matrices with known topology of the couplings, so only the values of the couplings are to be reconstructed. Specifically, we look at the kinetic Ising model with sparse couplings (so most interactions are zero) and assume as prior knowledge the pairs $(i,j)$ that have a non-zero coupling between them, \textit{i.e.} $J^\text{true}_{ij}\neq 0$ or $J^\text{true}_{ji}\neq 0$, regardless of the direction of the coupling.
This problem has been addressed for undirected equilibrium systems like Ising models with ferromagnetic or binary couplings~\cite{Bento2009,Aurell2012b}. We apply the propagator likelihood to a network of $N=10$ spins, where each possible directed link $J_{ij}$ from spin $i$ to spin $j$ is non-zero with probability $p=0.2$. The non-zero couplings are again drawn independently from a Gaussian distribution with mean $0$ and variance $1/N$. Self-interactions are excluded and the external fields $h_i$ drawn independently from a Gaussian distribution with mean $0$ and variance $1$. Figure~\ref{fig:ising_sparse} shows that the directed couplings can be inferred with slightly fewer samples when the topology of the couplings is known.
\begin{figure}[hbtp!]
\includegraphics[width=0.45\textwidth]{ising_sparse.eps}
\caption{Coupling inference in the sparse kinetic Ising model. The inset schematically shows a system of binary spins interacting via sparse couplings $J_{ij}$ subject to external fields $h_i$ (not shown). In the main figure, we plot the relative error of couplings $\epsilon=\| \mathbf{J}^\text{inf}-\mathbf{J}^{\text{true}}\|_2/\|\mathbf{J}^{\text{true}}\|_2$ versus the number of independent samples. The underlying off-diagonal couplings were chosen sparsely: they were set to zero with probability $1-p=0.8$, and with probability $p=0.2$ were drawn independently from a Gaussian distribution with mean 0 and variance $1/N$ (we excluded self-interactions, $J_{ii}=0$ ). The external fields were drawn independently from a Gaussian distribution with mean 0 and variance 1. The system size was $N=10$ spins. The couplings were inferred by maximising the single-step propagator likelihood over the set of couplings between directly interacting spin pairs $(i,j)$, \textit{i.e.} there is at least one true non-zero coupling between the spin pair, $J^\text{true}_{ij}\neq 0$ or $J^\text{true}_{ji}\neq 0$, regardless of the direction.}
\label{fig:ising_sparse}
\end{figure}
\paragraph{Increasing the propagation time interval.}
So far we have restricted ourselves to the single-step propagator ($\tau=1$). Can the inference be improved by increasing the propagation time interval? Intuitively, we expect that the single-step propagator cannot be improved on when all configurations have been sampled, since this implies that all transitions over longer propagation time intervals consist of single-step transitions that have already been probed by the single-step propagator likelihood: $x^\nu\overset{\tau}{\rightarrow}x^\mu=\sum_{x_1,x_2,...,x_{\tau-1}}x^\nu\overset{\tau=1}{\rightarrow}{x_1}\overset{\tau=1}{\rightarrow}{x_2}\hdots\overset{\tau=1}{\rightarrow}{x_{\tau-1}} \overset{\tau=1}{\rightarrow} x^\mu$. Indeed, the examples with discrete time considered so far in this article fall into this category and our numerical evidence confirms that increasing the propagation time interval does not improve the inference.
If, however, the configuration space is undersampled, some of the transitions appearing in the longer-time propagator likelihood will involve intermediate configurations that are not present in the sample and therefore do not appear in the single-step propagator likelihood. In this case, we expect to find that increasing the propagation time interval improves the inference for a fixed sample size.
In principle, one could even compute the log-likelihood~\eqref{eq:log-likelihood} numerically by using sufficiently long propagation time intervals $\tau$. However, the computational cost of taking the $2^N$-dimensional transition matrix to a large power $\tau$ is often prohibitive. Furthermore, the matrix products need to be computed many times in order to evaluate the likelihood and its ($N^2$-dimensional) gradient over many iterations of a maximisation algorithm.
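For small systems the multi-step propagator can be evaluated directly. The sketch below is ours, not the authors' code, and it assumes a parallel-update Glauber form of the kinetic Ising transition probability, $p(\mathbf{s}'|\mathbf{s})=\prod_i e^{s_i'\theta_i}/(2\cosh\theta_i)$ with $\theta_i=h_i+\sum_j J_{ij}s_j$; it builds the full $2^N\times 2^N$ single-step matrix and obtains the $\tau$-step propagator as a matrix power, making the exponential cost explicit.

```python
import numpy as np
from itertools import product

def transition_matrix(J, h):
    """Single-step transition matrix T[nu, mu] = p(s^mu | s^nu) of a kinetic
    Ising model, assuming parallel Glauber updates."""
    states = np.array(list(product([-1, 1], repeat=len(h))))  # all 2^N configs
    theta = states @ J.T + h        # theta[nu, i] = h_i + sum_j J_ij s^nu_j
    # log p(s^mu | s^nu) = sum_i [ s^mu_i theta_i(nu) - log(2 cosh theta_i(nu)) ]
    logp = states @ theta.T - np.log(2.0 * np.cosh(theta)).sum(axis=1)[None, :]
    return np.exp(logp).T

rng = np.random.default_rng(0)
N = 6
J = rng.normal(0.0, 1.0 / np.sqrt(N), (N, N))
np.fill_diagonal(J, 0.0)            # no self-interactions
h = rng.normal(0.0, 1.0, N)

T1 = transition_matrix(J, h)                # tau = 1
T3 = np.linalg.matrix_power(T1, 3)          # tau = 3: sums over all paths
```

Each matrix product costs $O(2^{2N})$ operations (more for dense matrices), which is what makes large $\tau$ or large $N$ prohibitive in practice.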
In Fig.~\ref{fig:ising-propagation-time} we consider a kinetic Ising model where only a small fraction of system configurations appear in the sampled configurations. Increasing the propagation time interval from $\tau=1$ to $\tau=3$ improves the inference markedly. Also, we find that the reconstruction error is much smaller for the symmetric part of the coupling matrix (shown in Fig.~\ref{fig:ising-propagation-time}(a)) than for the antisymmetric part (shown in Fig.~\ref{fig:ising-propagation-time}(b)). This is because the symmetric part of the couplings is governed by the pairwise spin-correlations, while the antisymmetric part is dominated by higher-order spin-correlations, which require more samples for an accurate computation, see~\cite{Dettmer2016}. The benefit of increasing the propagation time interval is also larger for the symmetric part, suggesting that the reconstruction of the antisymmetric part of the couplings is mainly limited by the sample size and that increasing the propagation time interval even further will not lead to a more accurate reconstruction.
\begin{figure*}
\includegraphics[width=0.9\textwidth]{ising-propagation-time.eps}
\caption{Increasing the propagation time interval in the undersampled kinetic Ising model.
(a) shows the reconstructed symmetric part of the coupling matrix $J^\text{sym}_{ij}=(J_{ij}+J_{ji})/2$ based on the single-step propagator likelihood ($\blacktriangle$) and on the longer propagation time interval $\tau=3$ ($\bullet$). (b) shows the reconstructed antisymmetric part of the coupling matrix $J^\text{asym}_{ij}=(J_{ij}-J_{ji})/2$ also based on the single-step propagator likelihood ($\blacktriangle$) and on the longer propagation time interval $\tau=3$ ($\bullet$). \newline
The underlying off-diagonal couplings were drawn independently from a Gaussian distribution with mean $0$ and standard deviation $0.5/\sqrt{N}$ (we excluded self-interactions, $J_{ii}= 0$), the external fields were drawn independently from a Gaussian distribution with mean $0$ and standard deviation $0.5$. The system size was $N=16$ spins and $M=10^{4}N$ samples were used. As a result, less than a third of the $2^{16}$ system configurations were present in the sample.}
\label{fig:ising-propagation-time}
\end{figure*}
\subsection{The replicator model}
The replicator model describes the dynamics of self-replicating entities, for instance genotypes, animal species, RNA molecules, or abstract strategies in game-theoretic problems. It has been used in population genetics, ecology, prebiotic chemistry, and sociobiology~\cite{Schuster1983}.
We consider a population consisting of $N$ different species and denote by $x_i$ the fraction of species $i$ in the total population (scaled for convenience by a factor of $N$ so that $\sum_i x_i=N$). The growth rate of species $i$, called its fitness, is denoted by $f_i$. The population fractions change in time according to the growth rate $f_i$ and the average growth rate of the population $\overline{f}$,
\begin{equation}
\frac{\d x_i}{\d t}=x_i(t) (f_i(\mathbf{x},t)-\overline{f}(\mathbf{x},t)) \ ,
\label{eq:replicators-basic}
\end{equation}
with $\overline{f}(\mathbf{x},t)=\frac{1}{N}\sum_{j=1}^N x_j(t)f_j(\mathbf{x},t)$. The set of equations~\eqref{eq:replicators-basic} defines the replicator model.
The average fitness $\overline{f}$ enters to ensure that the fractions remain normalised such that $\sum_{i} x_i(t)= N$ for all times.
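That $\overline{f}$ is exactly the term required can be checked by summing equation~\eqref{eq:replicators-basic} over $i$: on the constraint surface $\sum_i x_i=N$, the time derivative of the sum vanishes, so the normalisation is preserved,

```latex
\frac{\d}{\d t}\sum_i x_i
= \sum_i x_i f_i - \overline{f}\sum_i x_i
= N\overline{f} - N\overline{f} = 0 \ ,
```

using $\sum_i x_i f_i = N\overline{f}$ (by the definition of $\overline{f}$) and $\sum_i x_i = N$.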
Here we consider a fitness which for each species $i$ depends linearly on the population fractions of the other species
\begin{equation}
f_i(\mathbf{x}(t))=\sum_{j\neq i}^N J_{ij} x_j(t) \ .
\label{eq:replicators-fitness}
\end{equation}
The inter-species interactions $J_{ij}$ are quenched random variables with mean $u$ (called the cooperation pressure) and standard deviation $1/\sqrt{N}$. There are no self-interactions, $J_{ii}=0$.
For symmetric interactions, $J_{ij}=J_{ji}$, the fitness vector can be written as the gradient of a Lyapunov function. This implies that the system converges to an equilibrium steady state, which can be characterised by methods from statistical physics~\cite{Diederich1989}. In the socio-biological context, however, there is no reason for the interactions to be symmetric, or in fact to assume deterministic dynamics.
Assuming an asymmetric matrix $J_{ij}$ and allowing random fluctuations $\sigma \xi_i(t)$ in the reproduction of species $i$ leads to a set of Langevin equations
\begin{equation}
\frac{\d x_i}{\d t}=x_i(t)\left(f_i(\mathbf{x}(t))+\sigma \xi_i(t) -\lambda(\mathbf{x},t)\right) \ ,
\label{eq:replicators-Langevin}
\end{equation}
where the $\xi_i(t)$ are $N$ independent sources of white noise interpreted in the Stratonovich convention, the parameter $\sigma>0$ controls the overall noise strength, and the factor $\lambda(\mathbf{x}(t),t)=\frac{1}{N}\sum_j x_j(t)(f_j(\mathbf{x}(t))+\sigma \xi_j(t))$ ensures normalisation, \textit{i.e.} $\sum_i x_i(t)= N$ for all times.
This dynamics converges to a non-equilibrium steady state. Its characteristics for typical realisations of the matrix of couplings have been studied in the limit of a large number of species using dynamical mean field theory~\cite{Opper1992}.
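A minimal simulation sketch of this dynamics (ours, not the authors' code): a plain Euler discretisation of the Itô reading of the Langevin equations with an explicit re-projection onto the simplex after each step, so it is illustrative rather than a faithful Stratonovich integrator.

```python
import numpy as np

rng = np.random.default_rng(1)
N, sigma, u = 3, 0.1, 2.0        # species, noise strength, cooperation pressure
dt, steps = 1e-4, 50_000

J = rng.normal(u, 1.0 / np.sqrt(N), (N, N))
np.fill_diagonal(J, 0.0)                       # no self-interactions

x = np.ones(N)                                 # uniform start, sum_i x_i = N
for _ in range(steps):
    f = J @ x                                  # linear fitness f_i = sum_j J_ij x_j
    xi = rng.standard_normal(N) / np.sqrt(dt)  # discretised white noise
    lam = np.mean(x * (f + sigma * xi))        # normalisation term lambda
    x = x + dt * x * (f + sigma * xi - lam)
    x = np.clip(x, 1e-12, None)                # keep fractions positive
    x *= N / x.sum()                           # re-project onto the simplex
```

In the cooperative regime (large $u$) all fractions remain bounded away from zero, so the clipping is rarely active and mainly guards against discretisation artefacts.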
\begin{figure*}[hbtp!]
\includegraphics[width=0.9\textwidth]{replicators.eps}
\caption{Reconstruction of the inter-species interactions in replicator dynamics. \newline (a) The inset schematically shows the replicator model describing the population dynamics of different species competing for fractions of the total population size. The population moves on a $N-1$-dimensional simplex defined by the normalisation $\sum_i x_i=N,\ x_i \geq 0$. In the main figure, we plot the inferred rescaled inter-species interactions $\hat{J}^\text{inf}_{ij}\equiv J^\text{inf}_{ij}/\sigma^2$ versus the rescaled underlying interactions $\hat{J}^\text{true}_{ij}= {J}^\text{true}_{ij}/\sigma^2$ for the propagation time interval $\tau=5.0\times 10^{-6}$. The error bars indicate the error due to the ambiguity associated with the choice of the propagation time interval $\tau$ as described next. (b) shows how the propagation time interval was chosen and how the reconstruction error can be estimated without recourse to the underlying couplings. For this plot, an arbitrary parameter (here $\hat{J}_{12}$) was chosen and its inferred value plotted for different propagation time intervals $\tau_i$ ($\blacksquare$). The horizontal line shows the value of the true underlying parameter $\hat{J}_{12}^\text{true}$.
For small values of $\tau$ the effects of sampling fluctuations dominate and the inferred parameter saturates as discussed in section~\ref{sec:OUP}.
For large $\tau$, the error due to the linearisation of the Langevin equations is large and the inference becomes unstable, as signalled by the erratic changes in the value of the inferred parameter.
The interval of reasonable propagation time intervals must lie between those two regimes and we choose a propagation time (marked by the circle) that lies in the (logarithmic) centre of this transition region (marked by the two vertical dashed lines). The other parameters show a similar behaviour and the same transition time interval, so the choice of the propagation time interval does not depend on the parameter considered.
For each parameter, we take the vertical extent of the transition region as the estimation error. To illustrate the effects of sampling fluctuations, we repeated the procedure above a second time with the same model parameters but different samples (continuous line without markers). As expected, the sampling fluctuations influence mainly the inferred parameters for $\tau \rightarrow 0$, while the inference for larger values of $\tau$ is far less sensitive to the fluctuations.\newline
The system consisted of $N=3$ species, the noise strength was set to $\sigma=0.1$, and the underlying interactions $J^\text{true}_{ij}$ were quenched random variables chosen independently from a Gaussian with mean $u=2.0$ and standard deviation $1/\sqrt{N}$ (no self-interactions: $J_{ii}= 0$).
We used an Euler discretisation of the Langevin equation~\eqref{eq:replicators-Langevin} with time steps of length $\Delta t=10^{-6}/\sigma^2$ and a total of $M=10^4$ samples were taken every $10^4$ steps after an initial settling time of $10^9$ steps.}
\label{fig:replicators}
\end{figure*}
We now turn to the problem of inferring the couplings $J_{ij}$ of the replicator model from a set of configurations $\{\mathbf{x}^\mu\}_{\mu=1}^M$ taken independently from the non-equilibrium steady state. For simplicity, we focus on the so-called cooperative regime, in which all species survive in the long-time limit, \textit{i.e.} $\lim_{t\rightarrow \infty}x_i(t)>0 \ \forall i$.
This regime is characterised by a sufficiently large value of the cooperation pressure $u$~\cite{Opper1992}.
Our results can be generalised to the case where species go extinct by restricting the transitions $\mathbf{x}^\nu \rightarrow \mathbf{x}^\mu$ considered in the propagator likelihood to those between configurations with the same set of surviving species.
Again, to make time dimensionless, we rescale time $t'=t\sigma^2$, resulting in a noise-term with unit magnitude. The steady state and the propagator depend only on the rescaled couplings $\hat{J}_{ij} \equiv J_{ij}/\sigma^2$.
By linearising the LE~\eqref{eq:replicators-Langevin} for short times and eliminating $x_N$ via the normalisation constraint, $x_N=N-\sum_{i=1}^{N-1}x_i$, we arrive at the Gaussian short-term propagator
\begin{align}
&p(\mathbf{x},\tau|\mathbf{y},0) \approx\frac{1}{\sqrt{2\pi \tau}^{N-1}\sqrt{\text{Det} \Sigma}} \times \notag \\
&\exp \left\{ -\frac{1}{2 \tau}\sum_{i,j=1}^{N-1}\left(x_i-y_i-\mu_i\tau\right) \Sigma^{-1}_{ij} \left(x_j-y_j-\mu_j\tau\right) \right\}
\label{eq:replicator-propagator}
\end{align}
with drift~\footnote{The second term in the drift arises from the difference between the It{\^o} and Stratonovich convention in the Langevin equation.}
\begin{align}
\mu_i &= y_i(\hat{f}_i(\mathbf{y})-\overline{\hat{f}}(\mathbf{y}))-\frac{y_i}{N}\left(y_i-\frac{1}{N}\sum_{j=1}^N y_j^2\right)
\label{eq:propagator-drift}
\end{align}
and covariance matrix $\Sigma=A A^T\in \mathbb{R}^{(N-1)\times (N-1)}$ with
\begin{equation}
A_{ij}=y_i(y_j/N-\delta_{i,j}) \ .
\label{eq:propagator-covariance}
\end{equation}
We denote by $\hat{f}_i(\mathbf{y})$ the fitness~\eqref{eq:replicators-fitness} calculated with the rescaled variables $\hat{J}_{ij}=J_{ij}/\sigma^2$, instead of the original interactions $J_{ij}$, and by $\overline{\hat{f}}(\mathbf{y})=\frac{1}{N}\sum_j y_j\hat{f}_j(\mathbf{y})$ its species-weighted average.
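The Gaussian short-term propagator can be evaluated directly from these definitions. The following sketch (ours, under the stated equations for the drift and covariance) computes the log-density of $\mathbf{x}$ after time $\tau$ starting from $\mathbf{y}$, using the first $N-1$ coordinates since the last one is fixed by the normalisation.

```python
import numpy as np

def log_propagator(x, y, J_hat, tau):
    """Gaussian short-time log-propagator of the replicator Langevin dynamics,
    built from the drift mu_i and covariance Sigma = A A^T with
    A_ij = y_i (y_j / N - delta_ij), restricted to the first N-1 species."""
    N = len(y)
    f = J_hat @ y                              # rescaled fitness f_hat_i(y)
    fbar = np.mean(y * f)                      # species-weighted average
    mu = y * (f - fbar) - (y / N) * (y - np.mean(y**2))
    A = y[:N-1, None] * (y[None, :N-1] / N - np.eye(N - 1))
    Sigma = A @ A.T
    d = x[:N-1] - y[:N-1] - mu[:N-1] * tau
    _, logdet = np.linalg.slogdet(Sigma)
    quad = d @ np.linalg.solve(Sigma, d)
    return -0.5 * ((N - 1) * np.log(2 * np.pi * tau) + logdet + quad / tau)
```

For short $\tau$ the density is sharply peaked around $\mathbf{y}+\boldsymbol{\mu}\tau$, which is what exponentially damps the propagators of most transitions between distant sampled configurations.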
To reconstruct the rescaled interactions $\hat{J}_{ij}$, we insert the Gaussian short-term propagator~\eqref{eq:replicator-propagator} into the propagator likelihood~\eqref{eq:propagator-likelihood-def} and maximise it using the Broyden--Fletcher--Goldfarb--Shanno algorithm (see Fig.~\ref{fig:replicators}). As for the OUP, the reconstruction error depends non-monotonically on the choice of the dimensionless propagation time interval $\tau$, due to the trade-off between the error from linearising the LE and the error from effectively reducing the sample size by exponentially damping the propagators of most transitions. Unfortunately, the simple procedure we used for the OUP, minimising the parameter derivative $|\partial \Theta^\text{inf}/\partial \tau|$, cannot easily be generalised to higher dimensions. The reason is that the derivative of the reconstruction error $\partial \epsilon /\partial \tau$ is a linear combination of the individual parameter derivatives $(\partial \Theta^\text{inf}_i/\partial \tau)_{i=1}^K$, which can cancel each other without vanishing individually (here $K=N(N-1)$ denotes the number of model parameters). To see that not all individual derivatives can vanish simultaneously, recall that the inferred parameters must satisfy $0\equiv \frac{\partial \mathcal{PL}}{\partial \Theta_i} (\Theta^\text{inf}(\tau),\tau)$ for $i=1,\hdots,K$. Additionally demanding $\partial \Theta^\text{inf}_i/\partial \tau=0$ for $i=1,\hdots,K$ corresponds to solving the system of equations $\{\frac{\partial \mathcal{PL}}{\partial \Theta_i}=0,\frac{\partial^2 \mathcal{PL}}{\partial \Theta_i \partial \tau}=0 \}_{i=1}^{K}$ for the $K+1$ variables $(\Theta_i,\tau)$. This system of $2K$ nonlinear equations for $K+1$ variables will in general have no solution for $K>1$. Instead, we can find a good propagation time interval by plotting a single inferred parameter versus the propagation time interval $\tau$ used for inference [see Fig.~\ref{fig:replicators}(b)]. At large values of $\tau$, the regime where the inference is dominated by the linearisation error is characterised by erratic changes in the value of the inferred parameter. At small values of $\tau$, the reconstruction is dominated by sampling fluctuations (see section~\ref{sec:OUP}). These regimes are connected by a transition region, from which the propagation time interval should be chosen. We checked that this transition region stretched across the same time interval (approximately $[2\times 10^{-6},2\times 10^{-5}]$) for all parameters and chose the logarithmic centre of this transition interval as the propagation time interval $\tau$. We found that this produced a good reconstruction quality; however, a method to pinpoint the optimal value of $\tau$ is currently lacking.
\FloatBarrier
\section{Conclusions}
We study parameter inference for a non-equilibrium model from independent samples taken from the steady state. Our approach is based on a variant of the likelihood we call the propagator likelihood. In the limit of a large propagation time interval, the propagator likelihood converges to the likelihood of the model. However, for non-equilibrium systems, both the likelihood and this large-interval limit are generally intractable. The propagator likelihood can be derived from a variational principle aiming to find model parameters for which the distribution of configurations sampled from the steady state is invariant under propagation in time.
For systems with discrete configurations, we base our reconstruction on the single-step propagator, although increasing the propagation time interval can improve the inference when not all configurations have been sampled. This can be understood as follows: at short times, most pairs of sampled configurations have a small or even vanishing propagator, and the propagator likelihood~\eqref{eq:propagator-likelihood-def} is dominated by a few pairs of close configurations. At higher values of the propagation time interval $\tau$, more configuration pairs contribute to the propagator likelihood, which reduces sampling fluctuations. However, as the computational complexity of evaluating the propagator grows exponentially with the number of time steps, there is a competition between inference quality and computational complexity. For systems with continuous configurations, we use a short-time approximation to the propagator. Also in this case, inference improves with the propagation time interval $\tau$ until the short-time approximation becomes invalid. \newline
Inferring model parameters from the steady state requires a large number of samples: Inferring couplings of the kinetic Ising model with $N=10$ spins to within a reconstruction error $\epsilon \approx 0.01$ requires $M \approx 10^8$ samples, compared to the equilibrium case requiring approximately $10^6$ samples (for couplings drawn independently from a Gaussian with mean 0 and variance $1/N$). The bottleneck in practical applications may thus well be the number of available samples.
Non-equilibrium inference is also computationally expensive: evaluating the propagator likelihood takes $\mathcal{O}(M^2)$ operations for systems with continuous configurations and $\mathcal{O}(M)$ operations for systems with discrete configurations (provided that only a small number of neighbouring configurations can be reached in a single step with non-zero transition probability). A challenge for the future is to find more efficient inference methods, both in terms of the number of samples required and in terms of the computational complexity.
\acknowledgments{This work was supported by the BMBF [grant number emed:SMOOSE].}
\section{Introduction}
In many areas of applied statistics, including quality control, chemical experiments, biostatistics, financial analysis and medical research, the coefficient of variation (CV) is commonly used as a measure of dispersion and repeatability of data. It is defined as the ratio of the standard deviation to the mean, and is applied to compare the relative variability of two or more populations. Here, a critical question is whether their CV's are the same or not.
For the first time,
\cite{bennett-76}
considered the problem of testing the equality of CV's by assuming independent normal populations. Then,
a modified version of Bennett’s test by
\cite{sh-su-86},
a likelihood ratio test by
\cite{do-di-83},
an asymptotically chi-square test and a distribution free squared ranks approach by
\cite{miller-91asymptotic,miller-91use},
some Wald tests by
\cite{gu-ma-96},
an invariant test by
\cite{fe-mi-96},
a family of test statistics based
on Renyi's divergence by
\cite{pa-pa-00},
likelihood ratio, Wald and
score tests based on inverse CV's by
\cite{na-ra-03} and
a likelihood ratio test based on one-step Newton estimators
by \cite{ve-jo-07}
are derived for testing the hypothesis that the CV's of normal populations are equal.
Recently,
\cite{fu-ts-98,jafari-10,liu-xu-zh-11,ja-ka-13,kr-le-14,kh-sa-14}
proposed some tests and performed simulation studies to compare the sizes and powers of these tests.
Also, \cite{jafari-15inferences}
proposed a test for comparing CV's when the populations are not independent.
If the null hypothesis of equality of CV's is not rejected, then it may be of interest to estimate the unknown common CV. In practice, especially in meta-analysis, we may collect independent samples from different populations with a common CV. A well-developed approach for inference about the common CV is not yet available: some estimators are presented by
\cite{fe-mi-96},
\cite{ahmed-02},
\cite{forkman-09},
and
\cite{be-ja-08}.
An approximate confidence interval for the common CV is obtained by \cite{ve-jo-07} based on the likelihood ratio approach. Using Monte Carlo simulation,
\cite{be-ja-08}
showed that the coverage probability of this confidence interval is close to the confidence coefficient when the sample sizes are large.
Using the concepts of generalized p-value
\cite{ts-we-89}
and generalized confidence interval
\cite{weerahandi-93},
a generalized approach for inference about this parameter is proposed by
\cite{tian-05},
and two further generalized approaches are presented by
\cite{be-ja-08}.
Our simulation studies (Tables \ref{tab.sim1}, \ref{tab.sim2} and \ref{tab.sim3}) indicate that
there are cases in which the coverage probabilities of these three generalized confidence intervals are far from the confidence coefficient. In fact, these approaches are very sensitive to the common CV parameter. For example, their coverage probabilities are close to one when the common CV is large (e.g.\ equal to 0.3 or 0.35).
In this paper, we are interested in the problem of inference about the common CV of several independent normal populations and give a confidence interval for it. The method is also applicable for testing hypotheses about this parameter. For this purpose, we will use the modified signed log-likelihood ratio (MSLR) method introduced by
\cite{barndorff-86,barndorff-91}.
It is a higher-order likelihood method and has higher-order accuracy even when the sample size is small
\cite{lin-13higher}
and has been successfully applied in various settings, for example: Ratio of means of two independent log-normal distributions
\cite{wu-ji-wo-su-02};
Comparison of means of log-normal distributions
\cite{gill-04};
Inference on ratio of scale parameters of two independent Weibull distributions
\cite{wu-wo-ng-05};
Approximating the F distribution
\cite{wong-08approx};
Testing the difference of the non-centralities of two non-central t distributions
\cite{ch-le-wo-12};
Common mean of several log-normal distributions
\cite{lin-13higher};
Testing the equality of normal CVs
\cite{kr-le-14};
Comparing two correlation coefficients
\cite{ka-ja-15modified}.
The remainder of this paper is organized as follows: In Section \ref{sec.infer}, we first review three generalized approaches for constructing confidence interval for the common CV parameter, and then describe the MSLR method for this problem. In Section \ref{sec.sim}, we evaluate the methods with respect to coverage probabilities and expected lengths using Monte Carlo simulation. The methods are illustrated using two real examples in Section \ref{sec.ex}. Some concluding remarks are given in Section \ref{sec.con}.
\section{Inference about the common CV}
\label{sec.infer}
Let $X_{i1},\dots ,X_{in_i}$ ($i=1,2,\dots,k$) be a random sample of size $n_i$ from a normal distribution with mean ${\mu }_i>0$ and variance ${\tau }^2{\mu }^2_i$, where the parameter $\tau >0$ is the common CV. The problem of interest is to test hypotheses about $\tau $ and to construct a confidence interval for it. In this section, we first review the proposed approaches based on generalized inference for this parameter, and then give an approach for inference about the parameter using the MSLR method.
\subsection{ Generalized inferences }
\cite{tian-05}
proposed a generalized confidence interval for the common CV and a generalized p-value for testing a hypothesis about this parameter.
A generalized pivotal variable for the common CV is considered as
\begin{equation}\label{eq.gv1}
G_1=\frac{\sum^k_{i=1}{\left(n_i-1\right)/R_i}}{\sum^k_{i=1}{\left(n_i-1\right)}},
\end{equation}
where $R_i=\frac{{\bar{x}}_i}{s_i}\sqrt{\frac{U_i}{n_i-1}}-\frac{Z_i}{\sqrt{n_i}}$, and ${\bar{x}}_i$ and $s^2_i$ are observed values of ${\bar{X}}_i=\frac{1}{n_i}\sum^{n_i}_{j=1}{X_{ij}}$ and $S^2_i=\frac{1}{n_i-1}\sum^{n_i}_{j=1}{{\left(X_{ij}-{\bar{X}}_i\right)}^2}$, respectively, and $U_i$ and $Z_i$ are independent random variables with $U_i\sim {\chi }^2_{(n_i-1)}$ and $Z_i\sim N(0,1)$, $i=1,\dots,k$.
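As a concrete illustration, generalized confidence limits based on $G_1$ can be obtained by Monte Carlo: repeatedly simulate $(U_i,Z_i)$, form $G_1$, and take empirical percentiles. The sketch below is ours, not code from the cited papers, and it uses the $Z_i/\sqrt{n_i}$ scaling of the standard generalized-pivot construction.

```python
import numpy as np

def gci_common_cv(xbar, s, n, B=20_000, seed=0):
    """95% generalized confidence interval for the common CV based on the
    pivot G1 = [sum_i (n_i-1)/R_i] / [sum_i (n_i-1)], simulated over
    U_i ~ chi^2_{n_i-1} and Z_i ~ N(0,1)."""
    rng = np.random.default_rng(seed)
    xbar, s, n = map(np.asarray, (xbar, s, n))
    w = n - 1.0                                 # weights n_i - 1
    G = np.empty(B)
    for b in range(B):
        U = rng.chisquare(w)                    # one U_i per population
        Z = rng.standard_normal(len(n))
        R = (xbar / s) * np.sqrt(U / w) - Z / np.sqrt(n)
        G[b] = np.sum(w / R) / np.sum(w)        # realised value of G1
    return np.percentile(G, [2.5, 97.5])
```

Each $1/R_i$ is a generalized pivot for $\tau$ in population $i$; $G_1$ is their $(n_i-1)$-weighted average.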
Also, \cite{be-ja-08}
proposed a generalized pivotal variable for the common CV as
\begin{equation}\label{eq.gv2}
G_2=\frac{n}{\sum^k_{i=1}{\frac{n_i\sqrt{U_i}}{\sqrt{n_i-1}}\frac{{\bar{x}}_i}{s_i}}-\sqrt{n}Z},
\end{equation}
where $Z\sim N\left(0,1\right)$. They obtained a third generalized pivotal variable by combining this with the generalized pivotal variable proposed by
\cite{tian-05}
as
\begin{equation}\label{eq.gv3}
G_3=\frac{1}{2}G_1+\frac{1}{2}G_2.
\end{equation}
\subsection{MSLR method}
The log-likelihood function based on the full observations can be written as
\[\ell \left({\boldsymbol \theta }\right)=-n{\rm log}\left(\tau \right)-\sum^k_{i=1}{n_i{\rm log}\left({\mu }_i\right)}-\frac{1}{2{\tau }^2}\sum^k_{i=1}{\sum^{n_i}_{j=1}{{\left(\frac{x_{ij}}{{\mu }_i}-1\right)}^2,}}\]
where ${\boldsymbol \theta }={\left(\tau ,{\mu }_1,\dots ,{\mu }_k\right)}'$ and $n=\sum^k_{i=1}{n_i}$.
Let $\hat{{\boldsymbol \theta }}={\left(\hat{\tau },{\hat{\mu }}_1,{\hat{\mu }}_2,\dots ,{\hat{\mu }}_k\right)}'$ be the maximum likelihood estimator (MLE) of the parameter vector ${\boldsymbol \theta }$. There is no closed form for the MLE's of the unknown parameters of the model, but they can be obtained using a numerical method such as Newton's method.
For a fixed value of the parameter $\tau $, the constrained maximum likelihood estimators (CMLE) of the parameters ${\mu }_i$, $i=1,\dots ,k$, are given in the following explicit form:
\[{\hat{\mu }}_{i\tau }=\frac{\sqrt{{\bar{X}}^2_i+4{\tau }^2\overline{X^2_i}}-{\bar{X}}_i}{2{\tau }^2},\ \ \ \ \ \ i=1,\dots ,k,\]
where $\overline{X^2_i}=\frac{1}{n_i}\sum^{n_i}_{j=1}{X^2_{ij}}$.
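This explicit form follows from the stationarity condition $\partial \ell /\partial {\mu }_i=0$, which reduces to a quadratic equation in ${\mu }_i$:

```latex
0=-\frac{n_i}{\mu_i}+\frac{1}{\tau^2}\sum^{n_i}_{j=1}\left(\frac{x_{ij}}{\mu_i}-1\right)\frac{x_{ij}}{\mu_i^2}
\quad\Longleftrightarrow\quad
\tau^2\mu_i^2+\bar{X}_i\,\mu_i-\overline{X^2_i}=0 \ ,
```

and ${\hat{\mu }}_{i\tau }$ is the unique positive root of this quadratic.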
Now, we use the MSLR method, which is a modification of the traditional signed log-likelihood ratio (SLR), for inference about $\tau $. The SLR is defined as
\begin{equation}\label{eq.r}
r\left(\tau \right)={\rm sgn}\left(\hat{\tau }-\tau \right){\left(2(\ell (\hat{{\boldsymbol \theta }})-\ell ({\hat{{\boldsymbol \theta }}}_{\tau }))\right)}^{{1}/{2}},
\end{equation}
where $\hat{\tau }$ is the MLE of $\tau $, $\hat{{\boldsymbol \theta }}$ is the vector of MLE's of the unknown parameters, ${\hat{{\boldsymbol \theta }}}_{\tau }=(\tau ,{\hat{\mu }}_{1\tau },\dots ,{\hat{\mu }}_{k\tau })$ is the vector of CMLE's of the unknown parameters for a fixed $\tau $ and ${\rm sgn}(.)$ is the sign function.
Based on Wilks' theorem, it is well known that $r\left(\tau \right)$ is asymptotically standard normally distributed with error of order $O(n^{-1/2})$ (see \cite{co-hi-74}),
and therefore, an approximate $100\left(1-\alpha \right)\%$ confidence interval for $\tau $ can be obtained from
\[\left\{\tau :\ \left|r\left(\tau \right)\right|\le Z_{{\alpha }/{2}}\right\},\]
where $Z_{{\alpha }/{2}}$ is the $100\left(1-{\alpha }/{{\rm 2}}\right)\%$th percentile of the standard normal distribution.
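The SLR-based interval can be computed directly from the profile log-likelihood, since the CMLEs ${\hat{\mu }}_{i\tau }$ are available in closed form. The following sketch is ours (an illustrative grid-based implementation for a 95\% interval, not the paper's code):

```python
import numpy as np

def profile_loglik(tau, data):
    """Profile log-likelihood l(theta_hat_tau): plug the closed-form CMLEs
    mu_hat_{i,tau} into the log-likelihood for a fixed tau."""
    n = sum(len(x) for x in data)
    ll = -n * np.log(tau)
    for x in data:
        xbar, x2bar = x.mean(), (x**2).mean()
        mu = (np.sqrt(xbar**2 + 4 * tau**2 * x2bar) - xbar) / (2 * tau**2)
        ll += -len(x) * np.log(mu) - ((x / mu - 1)**2).sum() / (2 * tau**2)
    return ll

def slr_interval(data):
    """MLE of tau and 95% confidence interval {tau : |r(tau)| <= z_{0.025}}."""
    taus = np.linspace(0.01, 1.0, 2000)
    ll = np.array([profile_loglik(t, data) for t in taus])
    tau_hat, ll_max = taus[ll.argmax()], ll.max()
    r = np.sign(tau_hat - taus) * np.sqrt(2 * np.clip(ll_max - ll, 0, None))
    keep = np.abs(r) <= 1.959964          # z_{alpha/2} for alpha = 0.05
    return tau_hat, taus[keep][0], taus[keep][-1]
```

The same profile-likelihood scan also supplies the ingredients $\ell (\hat{{\boldsymbol \theta }})$ and $\ell ({\hat{{\boldsymbol \theta }}}_{\tau })$ needed for the MSLR correction described next.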
\cite{ve-jo-07}
utilized the likelihood ratio approach and proposed an asymptotic confidence interval for the common CV using the one-step Newton estimator. However, \cite{be-ja-08} showed that the coverage probability of the confidence interval proposed by
\cite{ve-jo-07}
is smaller than the confidence coefficient when the sample sizes are small, so this approach is not included in our comparison study.
Generally, \cite{pi-pe-92}
showed that the SLR method is not very accurate and that some modifications are needed to increase its accuracy. There exist various ways to improve the accuracy of this approximation by adjusting the SLR statistic; see the works of
\cite{barndorff-86,barndorff-91,fr-re-wu-99,Skovgaard-01,di-ma-st-01}.
In this paper, we use the method proposed by
\cite{fr-re-wu-99}
which has the form
\begin{equation}\label{eq.rs}
r^*\left(\tau \right)=r\left(\tau \right)-\frac{1}{r\left(\tau \right)}{\log \frac{r\left(\tau \right)}{Q\left(\tau \right)}\ },
\end{equation}
where
\[Q(\tau )=\frac{\left| \begin{array}{cc}
{\ell }_{;{\boldsymbol V}}(\hat{{\boldsymbol \theta }})-{\ell }_{;{\boldsymbol V}}({\hat{{\boldsymbol \theta }}}_{\tau }) &
{\ell }_{{\boldsymbol \lambda };{\boldsymbol V}}({\hat{{\boldsymbol \theta }}}_{\tau }) \end{array}
\right|}{\left|{\ell }_{{\boldsymbol \theta };{\boldsymbol V}}(\hat{{\boldsymbol \theta }})\right|}{\left\{\frac{\left|j_{\theta {\theta }'}
(\hat{{\boldsymbol \theta }})\right|}{\left|j_{{\boldsymbol \lambda }{{\boldsymbol \lambda }}^{{\boldsymbol '}}}({\hat{{\boldsymbol \theta }}}_{\tau })\right|}\right\}}^{1/2},\]
and
$j_{{\boldsymbol \theta }{\boldsymbol \theta }'}(\hat{{\boldsymbol \theta }}) ={\left.\frac{{\partial }^2\ell ({\boldsymbol \theta })}{\partial {\boldsymbol \theta }{\partial {\boldsymbol \theta }}'}\right|}_{{\boldsymbol \theta }=\hat{{\boldsymbol \theta }}}$ and $j_{{\boldsymbol \lambda }{{\boldsymbol \lambda }}'}({\hat{{\boldsymbol \theta }}}_{\tau }) ={\left.\frac{{\partial }^2\ell ({\boldsymbol \theta })}{\partial {\boldsymbol \lambda }{\partial {\boldsymbol \lambda }}'}\right|}_{{\boldsymbol \theta }={\hat{{\boldsymbol \theta }}}_{\tau }}$
are the observed information matrix evaluated at $\hat{{\boldsymbol \theta }}$
and the observed nuisance information matrix evaluated at ${\hat{{\boldsymbol \theta }}}_{\tau }$, respectively, and ${\ell }_{;{\boldsymbol V}}({\boldsymbol \theta })$ is the likelihood gradient. Also, the quantities ${\ell }_{{\boldsymbol \theta };{\boldsymbol V}}(\hat{{\boldsymbol \theta }})$ and ${\ell }_{{\boldsymbol \lambda };{\boldsymbol V}}({\hat{{\boldsymbol \theta }}}_{\tau })$ are defined as
\[{\ell }_{{\boldsymbol \theta };{\boldsymbol V}}(\hat{{\boldsymbol \theta }})={\left.\frac{\partial {\ell }_{;{\boldsymbol V}}({\boldsymbol \theta })}{\partial {\boldsymbol \theta }}\right|}_{{\boldsymbol \theta }=\hat{{\boldsymbol \theta }}}\ \ \ \ {\rm and}\ \ \ \ {\ell }_{{\boldsymbol \lambda };{\boldsymbol V}}({\hat{{\boldsymbol \theta }}}_{\tau })={\left.\frac{\partial {\ell }_{;{\boldsymbol V}}({\boldsymbol \theta })}{\partial {\boldsymbol \lambda }}\right|}_{{\boldsymbol \theta }={\hat{{\boldsymbol \theta }}}_{\tau }},\]
where, ${\boldsymbol \lambda }$ is the vector of nuisance parameters. The vector array ${\boldsymbol V}$ is defined as
\[{\boldsymbol V}{\left.=-{\left(\frac{\partial {\boldsymbol R}\left({\boldsymbol X};{\boldsymbol \theta }\right)}{\partial {\boldsymbol X}}\right)}^{-1}\left(\frac{\partial {\boldsymbol R}\left({\boldsymbol X};{\boldsymbol \theta }\right)}{\partial {\boldsymbol \theta }}\right)\right|}_{\hat{{\boldsymbol \theta }}},\]
where ${\boldsymbol R}\left({\boldsymbol X};{\boldsymbol \theta }\right)=\left(R_{11}\left({\boldsymbol X};{\boldsymbol \theta }\right),\dots ,R_{k,n_k}\left({\boldsymbol X};{\boldsymbol \theta }\right)\right)$ is a vector of pivotal quantities.
\begin{theorem}\label{thm.1} (\cite{barndorff-91,fr-re-wu-99})
Generally, $r^*(\tau)$ in \eqref{eq.rs} is asymptotically standard normally distributed with error of order $O(n^{-3/2})$.
\end{theorem}
Based on Theorem \ref{thm.1}, a $100\left(1-\alpha \right)\%$ confidence interval for $\tau $ is given as
\[\left\{\tau :\left|r^*\left(\tau \right)\right|<Z_{\alpha /2}\right\}.\]
Also,
the test statistic $r^*\left(\tau_0 \right)$ can be used for testing the hypotheses $H_0:\ \tau ={\tau }_0$ vs $H_1:\ \tau \ne {\tau }_0$, and the p-value is given as
\[{\rm p}=2{\rm min}\left\{P\left(Z>r^*\left({\tau }_0\right)\right),P\left(Z<r^*\left({\tau }_0\right)\right)\right\},\]
where $Z$ has a standard normal distribution.
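As a concrete illustration, once the value of $r^*(\tau_0)$ is available, the confidence set and the two-sided p-value above reduce to a few lines of code. The sketch below is a minimal plain-Python version (the standard normal CDF is computed via the error function); the function names are illustrative, and the value of $r^*(\tau_0)$ is assumed to be supplied by the caller.

```python
import math

def std_normal_cdf(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def two_sided_p_value(r_star_0):
    """p = 2 min{P(Z > r*(tau0)), P(Z < r*(tau0))} for H0: tau = tau0."""
    return 2.0 * min(1.0 - std_normal_cdf(r_star_0), std_normal_cdf(r_star_0))

def in_confidence_interval(r_star, alpha=0.05):
    """tau lies in the 100(1 - alpha)% set {tau : |r*(tau)| < z_{alpha/2}},
    which is equivalent to the two-sided p-value exceeding alpha."""
    return two_sided_p_value(r_star) > alpha
```

For example, a value $r^*(\tau_0)=2.5$ gives a p-value below $0.05$, so $\tau_0$ falls outside the $95\%$ confidence set.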
For our problem in this paper, ${\boldsymbol \lambda }={\left({\mu }_1,\dots ,{\mu }_k\right)}'$, and the details of the implementation of $r^*$ are as follows.
For $i=1,\dots ,k$ and $j=1,\dots ,n_i$, define the vector of pivotal quantities ${\boldsymbol R}=(R_{11},\dots,R_{kn_k})'$ with elements $R_{ij}=\frac{x_{ij}-{\mu }_i}{\tau {\mu }_i}$.
The derivatives of the elements of ${\boldsymbol R}$ with respect to $x_{i'j}$ and the parameter vector ${\boldsymbol \theta }$ are obtained as
\[\frac{\partial R_{ij}}{{\partial x}_{i'j}}=
\left\{ \begin{array}{lc}
\frac{1}{\tau {\mu }_i} & \ \ i=i' \\
0 & \ \ i\ne i', \end{array}
\right.\ \ \ \ \ \ \
\frac{\partial R_{ij}}{\partial {\mu }_{i'}}=\left\{ \begin{array}{lc}
-\frac{x_{ij}}{\tau {\mu }^2_i} & \ \ i=i' \\
0 & \ \ i\ne i', \end{array}
\right.\ \ \ \ \ \
\frac{\partial R_{ij}}{\partial\tau }=-\frac{x_{ij}-{\mu }_i}{{\tau }^2{\mu }_i}.\]
Therefore, we have
\[{\left(\frac{\partial {\boldsymbol R}\left({\boldsymbol x};{\boldsymbol \theta }\right)}{\partial {\boldsymbol x}}\right)}^{-1}=\tau \left[ \begin{array}{cccc}
{\mu }_1I_{n_1} & {\boldsymbol 0} & \cdots & {\boldsymbol 0} \\
{\boldsymbol 0} & {\mu }_2I_{n_2} & \cdots & {\boldsymbol 0} \\
\vdots & \vdots & \ddots & \vdots \\
{\boldsymbol 0} & {\boldsymbol 0} & \cdots & {\mu }_kI_{n_k} \end{array}
\right],\]
where $I_{n_i}$ is the $n_i\times n_i$ identity matrix and ${\boldsymbol 0}$ denotes a zero block of conforming dimensions.
Therefore, the components of the vector array ${\boldsymbol V}=\left({\boldsymbol V}'_1,\dots ,{{\boldsymbol V}}'_{k+1}\right)$ are
\begin{eqnarray*}
&&{{\boldsymbol V}}_1=\left(\frac{x_{11}-{\hat{\mu }}_1}{\hat{\tau }},\dots ,\frac{x_{1n_1}-{\hat{\mu }}_1}{\hat{\tau }},\frac{x_{21}-{\hat{\mu }}_2}{\hat{\tau }},\dots,\right.\\
&&
\hspace{1cm}\left.
\frac{x_{2n_2}-{\hat{\mu }}_2}{\hat{\tau }},\dots ,\frac{x_{k1}-{\hat{\mu }}_k}{\hat{\tau }},\dots ,\frac{x_{kn_k}-{\hat{\mu }}_k}{\hat{\tau }}\right),\\
&&{{\boldsymbol V}}_2=\left(\frac{x_{11}}{{\hat{\mu }}_1},\dots ,\frac{x_{1n_1}}{{\hat{\mu }}_1},0,\dots ,0\right),\\
&&{{\boldsymbol V}}_3=\left(0,\dots ,0,\frac{x_{21}}{{\hat{\mu }}_2},\dots ,\frac{x_{2n_2}}{{\hat{\mu }}_2},0,\dots ,0\right),\\
&&\ \ \ \ \ \ \vdots\\
&&{{\boldsymbol V}}_{k+1}=\left(0,\dots ,0,0,\dots ,0,\frac{x_{k1}}{{\hat{\mu }}_k},\dots ,\frac{x_{kn_k}}{{\hat{\mu }}_k}\right).
\end{eqnarray*}
The derivatives of the log-likelihood function with respect to $x_{ij}$ are
$\frac{\partial \ell \left({\boldsymbol \theta }\right)}{{\partial x}_{ij}}= -\frac{x_{ij}-{\mu }_i}{{\mu }^2_i{\tau }^2},
$
$i=1,\dots ,k$, $ j=1,\dots ,n_i.$
The likelihood gradient ${\ell }_{;{\boldsymbol V}}\left({\boldsymbol \theta }\right)$ is obtained as
\[{\ell }_{;{\boldsymbol V}}\left({\boldsymbol \theta }\right)=\left(\sum_j{\frac{\partial \ell \left({\boldsymbol \theta }\right)}{\partial x_j}}v_{1j},\sum_j{\frac{\partial \ell \left({\boldsymbol \theta }\right)}{{\partial x}_j}}v_{2j},\dots ,\sum_j{\frac{\partial \ell \left({\boldsymbol \theta }\right)}{{\partial x}_j}}v_{\left(k+1\right)j}\right).\]
For our problem, this likelihood gradient is obtained as
\begin{align*}
{\ell }_{;{\boldsymbol V}}\left({\boldsymbol \theta }\right)=&\left(\frac{1}{\hat{\tau }{\tau }^2}\sum^k_{i=1}{\sum^{n_i}_{j=1}{\frac{1}{{\mu }^2_i}\left(x_{ij}-{\hat{\mu }}_i\right)\left({\mu }_i-x_{ij}\right)}},\right.
\frac{1}{{\hat{\mu }}_1{\mu }^2_1{\tau }^2}\sum^{n_1}_{j=1}{x_{1j}\left({\mu }_1-x_{1j}\right)},\\
&\hspace{1cm}
\left.\dots ,\frac{1}{{\hat{\mu }}_k{\mu }^2_k{\tau }^2}\sum^{n_k}_{j=1}{x_{kj}\left({\mu }_k-x_{kj}\right)}\right)'.
\end{align*}
Also, the quantity ${\ell }_{{\boldsymbol \theta };{\boldsymbol V}}\left({\boldsymbol \theta }\right)=\frac{\partial {\ell }_{;{\boldsymbol V}}\left({\boldsymbol \theta }\right)}{\partial {\boldsymbol \theta }}$ is obtained as
\begin{align*}
{\ell }_{\tau ;{\boldsymbol V}}\left({\boldsymbol \theta }\right)=&\left[\frac{2}{\hat{\tau }{\tau }^3}\sum^k_{i=1}\sum^{n_i}_{j=1}\frac{1}{{\mu }^2_i}\left(x_{ij}-{\hat{\mu }}_i\right)\left(x_{ij}-{\mu }_i\right)\right.,\frac{2}{{\hat{\mu }}_1{\mu }^2_1{\tau }^3}\sum^{n_1}_{j=1}{x_{1j}\left(x_{1j}-{\mu }_1\right)},\\
&\left.\dots ,\frac{2}{{\hat{\mu }}_k{\mu }^2_k{\tau }^3}\sum^{n_k}_{j=1}{x_{kj}\left(x_{kj}-{\mu }_k\right)}\right],\\
{\ell }_{{\mu }_1;{\boldsymbol V}}\left({\boldsymbol \theta }\right)=&\left[\frac{1}{\hat{\tau }{\tau }^2}\sum^{n_1}_{j=1}\left(x_{1j}-{\hat{\mu }}_1\right)\right.\left(\frac{1}{{\mu }^2_1}\right.-\left.\frac{2\left({\mu }_1-x_{1j}\right)}{{\mu }^3_1}\right),
\\
&\frac{1}{{\hat{\mu }}_1{\tau }^2}\sum^{n_1}_{j=1}x_{1j}\left(\frac{1}{{\mu }^2_1}-\frac{2\left({\mu }_1-x_{1j}\right)}{{\mu }^3_1}\right),
\left. 0,\dots ,0\right],\\
&\vdots \\
{\ell }_{{\mu }_k;{\boldsymbol V}}\left({\boldsymbol \theta }\right)=&\left[\frac{1}{\hat{\tau }{\tau }^2}\sum^{n_k}_{j=1}\left(x_{kj}-{\hat{\mu }}_k\right)\left(\frac{1}{{\mu }^2_k}-\frac{2\left({\mu }_k-x_{kj}\right)}{{\mu }^3_k}\right)\right.
,0,\dots ,0,\\
&
\left.\frac{1}{{\hat{\mu }}_k{\tau }^2}\sum^{n_k}_{j=1}x_{kj}\left(\frac{1}{{\mu }^2_k}-\frac{2\left({\mu }_k-x_{kj}\right)}{{\mu }^3_k}\right)\right].
\end{align*}
We also need to compute the observed information matrix and the observed nuisance information matrix. The elements of the observed information matrix are obtained as
\begin{eqnarray*}
&&\ j_{{\mu }_i{\mu }_{i'}}\left({\boldsymbol\theta} \right)=
\left\{ \begin{array}{lc}
-\frac{n_i}{{\mu }^2_i}+\frac{1}{{\tau }^2{\mu }^4_i}\sum^{n_i}_{j=1}{x^2_{ij}+}\frac{2}{{\tau }^2{\mu }^3_i}\sum^{n_i}_{j=1}{x_{ij}\left(\frac{x_{ij}}{{\mu }_i}-1\right)} & \ \ \ i=i' \\
0 & \ \ \ i\ne i', \end{array}
\right.\\
&&j_{\tau \tau }\left({\boldsymbol\theta}\right)=-\frac{\sum^k_{i=1}{n_i}}{{\tau }^2}+\frac{3}{{\tau }^4}\left(\sum^k_{i=1}{\sum^{n_i}_{j=1}{{\left(\frac{x_{ij}}{{\mu }_i}-1\right)}^2}}\right),\\
&&
j_{{\tau ,\mu }_i}\left({\boldsymbol\theta} \right)=\frac{2}{{\tau }^3{\mu }^2_i}\sum^{n_i}_{j=1}{x_{ij}\left(\frac{x_{ij}}{{\mu }_i}-1\right)}.
\end{eqnarray*}
By using the elements $j_{{\mu }_i{\mu }_{i'}}\left({\boldsymbol \theta }\right)$ for $i,i'=1,\dots ,k$, one can construct the observed nuisance information matrix.
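A quick numerical sanity check of the $j_{\tau \tau }$ expression is possible: since the observed information is minus the second derivative of the log-likelihood, the analytic formula can be compared against a central finite difference. The sketch below is plain Python with a log-likelihood (constants dropped) for $k$ independent normal samples whose $i$th population has standard deviation $\tau \mu_i$; the toy data and parameter values are made up solely for the check.

```python
import math

def loglik(x, mu, tau):
    """Log-likelihood (up to an additive constant) of k independent
    normal samples x[i] with means mu[i] and common CV tau, so that
    the i-th population has standard deviation tau * mu[i]."""
    ll = 0.0
    for xi, mi in zip(x, mu):
        for v in xi:
            ll += -math.log(tau * mi) - (v / mi - 1.0) ** 2 / (2.0 * tau ** 2)
    return ll

def j_tau_tau(x, mu, tau):
    """Analytic (tau, tau) element of the observed information matrix,
    matching the formula for j_{tau tau} in the text."""
    n_total = sum(len(xi) for xi in x)
    s = sum((v / mi - 1.0) ** 2 for xi, mi in zip(x, mu) for v in xi)
    return -n_total / tau ** 2 + 3.0 * s / tau ** 4

# Central finite difference of -d^2 ell / d tau^2 as an independent check.
x = [[1.1, 0.9], [2.2, 1.8]]   # made-up toy data, k = 2 groups
mu, tau, h = [1.0, 2.0], 0.2, 1e-4
fd = -(loglik(x, mu, tau + h) - 2.0 * loglik(x, mu, tau)
       + loglik(x, mu, tau - h)) / h ** 2
```

For this toy configuration the analytic value and the finite difference agree to several decimal places; the same device applies to the $j_{{\mu }_i{\mu }_{i'}}$ and $j_{\tau \mu_i}$ entries.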
\section{Simulation study}
\label{sec.sim}
A simulation study is performed to evaluate the performance of the proposed approach. We used 10{,}000 replications to compare the coverage probabilities (CP) and expected lengths (EL) of four approaches: the modified signed likelihood ratio (MSLR) method,
the generalized pivotal approach in \eqref{eq.gv1}, denoted GV1,
the generalized pivotal approach in \eqref{eq.gv2}, denoted GV2, and
the generalized pivotal approach in \eqref{eq.gv3}, denoted GV3.
We generate random samples of sizes $n_i$ from $k=3,5,10$ independent normal populations. The true values of the means are taken as $\left({\mu }_1,{\mu }_2,{\mu }_3\right)=\left(20,10,10\right)$ for $k=3$, $({\mu}_1,\dots,{\mu }_5)=(50,40,30,20,10)$ for $k=5$, and
$({\mu}_1,\dots,{\mu }_{10})=(50,40,30,20,10,50,40,30,20,10)$ for $k=10$. The variances of the normal populations are chosen so that the common CV equals $\tau$, which varies over the set $\left\{0.1,0.2,0.3,0.35\right\}$. For each value of the common CV $\tau$, the coverage probabilities and expected lengths of the MSLR and GV approaches are estimated for confidence intervals with confidence coefficient $0.95$. The results are given in Tables \ref{tab.sim1}, \ref{tab.sim2} and \ref{tab.sim3}. We can conclude that
\begin{enumerate}
\item[i.] The coverage probability of the MSLR method is close to the confidence coefficient for all cases. In fact, it is very satisfactory regardless of the number of samples and for all different values of common CV, even for small sample sizes.
\item[ii.] The coverage probability of GV2 falls well below the confidence coefficient in most cases.
\item[iii.] The coverage probabilities of GV1 and GV3 exceed the confidence coefficient considerably, especially when $\tau$ is large (i.e., 0.3 and 0.35). These cases are marked in boldface in the tables.
\item[iv.] In all cases, the expected length of the MSLR method is shorter than the expected lengths of the GV methods, even in the cases where the GV methods perform well (i.e., when their coverage probabilities are close to the confidence coefficient).
\item[v.] The expected length of the GV1 method is considerably larger than expected lengths of other methods.
\item[vi.]
The expected lengths of all approaches increase when the value of $\tau$ increases. Also, the expected lengths become smaller when the sample sizes increase.
\end{enumerate}
Since the MSLR method is the only approach that maintains the nominal confidence coefficient and yields shorter intervals than the competing approaches in all cases, we recommend the MSLR method for practical applications when the random samples are normal.
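The empirical CP and EL figures reported in the tables come from exactly this kind of Monte Carlo loop. The sketch below shows the generic estimation procedure in plain Python; for brevity, the interval plugged in is a textbook $z$-interval for a normal mean with known standard deviation rather than the MSLR interval, and the sample size and parameter values are illustrative only.

```python
import math
import random

def coverage_and_length(ci_fn, sample_fn, true_value, reps=2000, seed=1):
    """Estimate the empirical coverage probability (CP) and expected
    length (EL) of a confidence-interval procedure by Monte Carlo."""
    random.seed(seed)
    hits, total_len = 0, 0.0
    for _ in range(reps):
        data = sample_fn()
        lo, hi = ci_fn(data)
        hits += (lo <= true_value <= hi)
        total_len += hi - lo
    return hits / reps, total_len / reps

# Illustration: 95% z-interval for a normal mean with known sigma.
mu, sigma, n = 20.0, 2.0, 10
z = 1.959963985

def sample():
    return [random.gauss(mu, sigma) for _ in range(n)]

def z_interval(data):
    xbar = sum(data) / len(data)
    half = z * sigma / math.sqrt(len(data))
    return xbar - half, xbar + half

cp, el = coverage_and_length(z_interval, sample, mu)
```

With the nominal procedure plugged in, `cp` stays near $0.95$; replacing `z_interval` by an MSLR or GV interval (and `sample` by the normal or Weibull generators of the study) reproduces the structure of the tables.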
To compare the robustness of the MSLR and GV approaches, a similar simulation study is performed using the Weibull distribution with shape parameter $\alpha$ and scale parameter $\beta$, whose probability density function is
$$
f\left(x\right)=\frac{\alpha }{\beta }{\left(\frac{x}{\beta }\right)}^{\alpha -1}\exp \left(-{\left(\frac{x}{\beta }\right)}^{\alpha }\right),\ \ x>0.
$$
The random samples are generated from $k$ Weibull distributions, and the parameters are chosen such that a common CV, $\tau$, holds.
The true values of the scale parameters are taken as $(\beta_1,\beta_2,\beta_3)=(20,10,10)$ for $k=3$,
$(\beta_1,\dots,\beta_5)=(50,40,30,20,10)$ for $k=5$, and
$({\beta}_1,\dots,{\beta }_{10})=(50,40,30,20,10,50,40,30,20,10)$ for $k=10$, where $\beta_i$ is the scale parameter of the $i$th Weibull distribution.
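Since the CV of a Weibull law, $\sqrt{\Gamma(1+2/\alpha)-\Gamma^2(1+1/\alpha)}/\Gamma(1+1/\alpha)$, depends only on the shape parameter, enforcing a common CV $\tau$ amounts to solving $\mathrm{CV}(\alpha)=\tau$ for $\alpha$ and then choosing the scales $\beta_i$ freely. A minimal plain-Python sketch (the bisection bracket $[0.5,50]$ is an assumption that comfortably covers $\tau \in \{0.1,\dots,0.35\}$, over which $\mathrm{CV}(\alpha)$ is decreasing):

```python
import math

def weibull_cv(alpha):
    """Coefficient of variation of a Weibull(alpha, beta) law;
    it depends on the shape parameter alpha only."""
    g1 = math.gamma(1.0 + 1.0 / alpha)
    g2 = math.gamma(1.0 + 2.0 / alpha)
    return math.sqrt(g2 - g1 * g1) / g1

def shape_for_cv(tau, lo=0.5, hi=50.0, tol=1e-10):
    """Find the shape alpha with CV(alpha) = tau by bisection,
    using that CV(alpha) is decreasing on [lo, hi]."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if weibull_cv(mid) > tau:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

For instance, `shape_for_cv(1.0)` recovers $\alpha=1$ (the exponential case, where the CV equals one), and the same shape is shared by all $k$ populations regardless of the chosen $\beta_i$.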
The results are given in Tables \ref{tab.sim4}, \ref{tab.sim5} and \ref{tab.sim6}. We can conclude that
the coverage probability of the MSLR method is close to the confidence coefficient when $\tau$ is large (i.e., 0.3 and 0.35) and falls below the confidence coefficient in the other cases. The other results are similar to those reported in the normal case.
\begin{table}
\begin{center}
\caption{ Empirical coverage probabilities and expected lengths of two-sided confidence intervals for the parameter of common CV under normal distribution for $k=3$.}\label{tab.sim1}
\begin{tabular}{|c|c|cccc|cccc|} \hline
& & \multicolumn{4}{|c|}{$\tau =0.1$} & \multicolumn{4}{|c|}{$\tau =0.2$} \\ \hline
$n_1,n_2,n_3$ & & MSLR & GV1 & GV2 & GV3 & MSLR & GV1 & GV2 & GV3 \\ \hline
4,4,4 & CP & 0.950 & 0.954 & 0.906 & 0.964 & 0.950 & \textbf{0.972} & 0.918 & \textbf{0.971} \\
& EL & 0.108 & 0.208 & 0.107 & 0.147 & 0.222 & 0.522 & 0.224 & 0.343 \\ \hline
4,5,6 & CP & 0.943 & 0.956 & 0.918 & 0.956 & 0.953 & 0.961 & 0.913 & 0.958 \\
& EL & 0.091 & 0.146 & 0.090 & 0.111 & 0.186 & 0.343 & 0.186 & 0.245 \\ \hline
6,5,4 & CP & 0.936 & 0.954 & 0.918 & 0.956 & 0.933 & 0.963 & 0.921 & 0.949 \\
& EL & 0.091 & 0.146 & 0.090 & 0.110 & 0.189 & 0.344 & 0.187 & 0.247 \\ \hline
5,5,10 & CP & 0.942 & 0.956 & 0.927 & 0.954 & 0.940 & 0.957 & 0.923 & 0.958 \\
& EL & 0.090 & 0.103 & 0.073 & 0.084 & 0.189 & 0.233 & 0.152 & 0.181 \\ \hline
10,5,5 & CP & 0.938 & 0.958 & 0.927 & 0.954 & 0.954 & 0.953 & 0.922 & 0.960 \\
& EL & 0.074 & 0.103 & 0.073 & 0.084 & 0.153 & 0.233 & 0.152 & 0.180 \\ \hline
4,5,20 & CP & 0.944 & 0.953 & 0.923 & 0.956 & 0.957 & 0.960 & 0.927 & 0.955 \\
& EL & 0.058 & 0.077 & 0.059 & 0.064 & 0.121 & 0.173 & 0.121 & 0.139 \\ \hline
20,5,4 & CP & 0.942 & 0.955 & 0.925 & 0.950 & 0.954 & 0.963 & 0.929 & 0.953 \\
& EL & 0.058 & 0.077 & 0.059 & 0.064 & 0.121 & 0.174 & 0.121 & 0.140 \\ \hline
7,7,7 & CP & 0.954 & 0.956 & 0.929 & 0.949 & 0.938 & 0.956 & 0.926 & 0.954 \\
& EL & 0.072 & 0.094 & 0.071 & 0.079 & 0.149 & 0.206 & 0.146 & 0.166 \\ \hline
7,8,9 & CP & 0.954 & 0.958 & 0.931 & 0.951 & 0.944 & 0.954 & 0.933 & 0.954 \\
& EL & 0.066 & 0.082 & 0.065 & 0.071 & 0.136 & 0.179 & 0.134 & 0.149 \\ \hline
& & \multicolumn{4}{|c|}{$\tau =0.3$} & \multicolumn{4}{|c|}{$\tau =0.35$} \\ \hline
4,4,4 & CP & 0.949 & \textbf{0.999} & 0.915 & \textbf{0.990} & 0.951 & \textbf{0.999} & 0.919 & \textbf{0.983} \\
& EL & 0.354 & 1.160 & 0.358 & 0.680 & 0.432 & 1.774 & 0.436 & 0.593 \\ \hline
4,5,6 & CP & 0.953 & \textbf{0.983} & 0.920 & \textbf{0.978} & 0.955 & \textbf{0.997} & 0.926 & \textbf{0.983} \\
& EL & 0.295 & 0.687 & 0.298 & 0.437 & 0.358 & 0.985 & 0.361 & 0.593 \\ \hline
6,5,4 & CP & 0.938 & \textbf{0.982} & 0.922 & \textbf{0.979} & 0.950 & \textbf{0.997} & 0.919 & \textbf{0.987} \\
& EL & 0.302 & 0.685 & 0.297 & 0.453 & 0.364 & 0.976 & 0.361 & 0.598 \\ \hline
5,5,10 & CP & 0.945 & 0.963 & 0.923 & 0.960 & 0.948 & \textbf{0.979} & 0.936 & \textbf{0.967} \\
& EL & 0.302 & 0.432 & 0.241 & 0.308 & 0.369 & 0.588 & 0.290 & 0.396 \\ \hline
10,5,5 & CP & 0.955 & \textbf{0.969} & 0.925 & \textbf{0.965} & 0.946 & \textbf{0.975} & 0.923 & \textbf{0.973} \\
& EL & 0.245 & 0.428 & 0.240 & 0.312 & 0.294 & 0.585 & 0.291 & 0.404 \\ \hline
4,5,20 & CP & 0.936 & \textbf{0.974} & 0.927 & 0.961 & 0.952 & \textbf{0.983} & 0.930 & \textbf{0.973} \\
& EL & 0.191 & 0.323 & 0.190 & 0.236 & 0.227 & 0.441 & 0.229 & 0.308 \\ \hline
20,5,4 & CP & 0.944 & \textbf{0.973} & 0.927 & \textbf{0.970} & 0.951 & \textbf{0.984} & 0.930 & \textbf{0.978} \\
& EL & 0.190 & 0.323 & 0.190 & 0.237 & 0.227 & 0.445 & 0.229 & 0.308 \\ \hline
7,7,7 & CP & 0.954 & 0.952 & 0.926 & 0.963 & 0.949 & 0.956 & 0.928 & 0.945 \\
& EL & 0.237 & 0.368 & 0.232 & 0.278 & 0.285 & 0.482 & 0.280 & 0.349 \\ \hline
7,8,9 & CP & 0.945 & 0.959 & 0.934 & 0.954 & 0.944 & 0.955 & 0.932 & 0.939 \\
& EL & 0.215 & 0.308 & 0.211 & 0.240 & 0.260 & 0.402 & 0.256 & 0.304 \\ \hline
\end{tabular}
\end{center}
\end{table}
\begin{table}
\begin{center}
\caption{ Empirical coverage probabilities and expected lengths of two-sided confidence intervals for the parameter of common CV under normal distribution for $k=5$.}\label{tab.sim2}
\begin{tabular}{|c|c|cccc|cccc|} \hline
& & \multicolumn{4}{|c|}{$\tau =0.1$} & \multicolumn{4}{|c|}{$\tau =0.2$} \\ \hline
$n_1,\dots ,n_5$ & & MSLR & GV1 & GV2 & GV3 & MSLR & GV1 & GV2 & GV3 \\ \hline
4,4,4,4,4 & CP & 0.947 & 0.934 & 0.869 & \textbf{0.970} & 0.948 & \textbf{0.968} & 0.870 & \textbf{0.984} \\
& EL & 0.079 & 0.172 & 0.076 & 0.114 & 0.161 & 0.453 & 0.157 & 0.279 \\ \hline
4,4,5,5,6 & CP & 0.956 & 0.938 & 0.883 & 0.962 & 0.958 & 0.947 & 0.883 & \textbf{0.969} \\
& EL & 0.069 & 0.125 & 0.067 & 0.089 & 0.141 & 0.312 & 0.139 & 0.204 \\ \hline
6,5,5,4,4 & CP & 0.951 & 0.935 & 0.885 & 0.960 & 0.950 & 0.950 & 0.881 & \textbf{0.967} \\
& EL & 0.069 & 0.126 & 0.068 & 0.089 & 0.141 & 0.309 & 0.138 & 0.205 \\ \hline
5,5,5,5,10 & CP & 0.938 & 0.937 & 0.891 & 0.957 & 0.944 & 0.943 & 0.898 & 0.958 \\
& EL & 0.059 & 0.092 & 0.058 & 0.070 & 0.121 & 0.217 & 0.120 & 0.155 \\ \hline
10,5,5,5,5 & CP & 0.941 & 0.938 & 0.894 & 0.953 & 0.946 & 0.942 & 0.892 & 0.957 \\
& EL & 0.060 & 0.092 & 0.059 & 0.070 & 0.122 & 0.217 & 0.120 & 0.155 \\ \hline
4,4,5,5,20 & CP & 0.950 & 0.942 & 0.888 & 0.953 & 0.951 & 0.952 & 0.897 & 0.960 \\
& EL & 0.051 & 0.078 & 0.051 & 0.060 & 0.105 & 0.187 & 0.105 & 0.135 \\ \hline
20,5,5,4,4 & CP & 0.949 & 0.946 & 0.897 & 0.960 & 0.949 & 0.953 & 0.896 & \textbf{0.966} \\
& EL & 0.051 & 0.078 & 0.051 & 0.060 & 0.105 & 0.187 & 0.105 & 0.134 \\ \hline
7,7,7,7,7 & CP & 0.950 & 0.939 & 0.902 & 0.955 & 0.957 & 0.939 & 0.907 & 0.953 \\
& EL & 0.054 & 0.074 & 0.053 & 0.060 & 0.110 & 0.165 & 0.108 & 0.127 \\ \hline
7,7,8,8,9 & CP & 0.954 & 0.939 & 0.912 & 0.952 & 0.959 & 0.941 & 0.914 & 0.951 \\
& EL & 0.050 & 0.066 & 0.049 & 0.055 & 0.103 & 0.145 & 0.101 & 0.115 \\ \hline
& & \multicolumn{4}{|c|}{$\tau =0.3$} & \multicolumn{4}{|c|}{$\tau =0.35$} \\ \hline
4,4,4,4,4 & CP & 0.944 & \textbf{0.999} & 0.873 & \textbf{0.997} & 0.945 & \textbf{1.000} & 0.877 & \textbf{0.998} \\
& EL & 0.254 & 1.160 & 0.246 & 0.635 & 0.305 & 1.868 & 0.296 & 0.976 \\ \hline
4,4,5,5,6 & CP & 0.957 & \textbf{0.992} & 0.887 & \textbf{0.988} & 0.951 & \textbf{1.000} & 0.890 & \textbf{0.995} \\
& EL & 0.221 & 0.678 & 0.218 & 0.401 & 0.265 & 1.051 & 0.262 & 0.579 \\ \hline
6,5,5,4,4 & CP & 0.952 & \textbf{0.993} & 0.884 & \textbf{0.991} & 0.950 & \textbf{1.000} & 0.890 & \textbf{0.995} \\
& EL & 0.221 & 0.681 & 0.218 & 0.403 & 0.265 & 1.032 & 0.261 & 0.582 \\ \hline
5,5,5,5,10 & CP & 0.940 & 0.966 & 0.900 & \textbf{0.972} & 0.941 & \textbf{0.986} & 0.895 & \textbf{0.985} \\
& EL & 0.190 & 0.425 & 0.187 & 0.280 & 0.228 & 0.618 & 0.225 & 0.379 \\ \hline
10,5,5,5,5 & CP & 0.953 & 0.964 & 0.898 & \textbf{0.971} & 0.941 & \textbf{0.988} & 0.896 & \textbf{0.982} \\
& EL & 0.192 & 0.430 & 0.188 & 0.280 & 0.231 & 0.614 & 0.225 & 0.379 \\ \hline
4,4,5,5,20 & CP & 0.949 & \textbf{0.982} & 0.902 & \textbf{0.979} & 0.950 & \textbf{0.994} & 0.895 & \textbf{0.988} \\
& EL & 0.164 & 0.386 & 0.164 & 0.250 & 0.197 & 0.576 & 0.196 & 0.346 \\ \hline
20,5,5,4,4 & CP & 0.952 & \textbf{0.982} & 0.898 & \textbf{0.982} & 0.949 & \textbf{0.994} & 0.899 & \textbf{0.988} \\
& EL & 0.164 & 0.385 & 0.164 & 0.247 & 0.197 & 0.576 & 0.197 & 0.340 \\ \hline
7,7,7,7,7 & CP & 0.955 & 0.938 & 0.910 & 0.960 & 0.954 & 0.946 & 0.907 & 0.963 \\
& EL & 0.173 & 0.299 & 0.170 & 0.214 & 0.207 & 0.398 & 0.203 & 0.272 \\ \hline
7,7,8,8,9 & CP & 0.963 & 0.942 & 0.914 & 0.951 & 0.961 & 0.942 & 0.917 & 0.950 \\
& EL & 0.161 & 0.257 & 0.159 & 0.190 & 0.194 & 0.337 & 0.191 & 0.238 \\ \hline
\end{tabular}
\end{center}
\end{table}
\begin{table}
\begin{center}
\caption{ Empirical coverage probabilities and expected lengths of two-sided confidence intervals for the parameter of common CV under normal distribution for $k=10$.}\label{tab.sim3}
\begin{tabular}{|c|c|cccc|cccc|} \hline
& & \multicolumn{4}{|c|}{$\tau =0.1$} & \multicolumn{4}{|c|}{$\tau =0.2$} \\ \hline
$n_1,\dots ,n_{10}$ & & MSLR & GV1 & GV2 & GV3 & MSLR & GV1 & GV2 & GV3 \\ \hline
4,4,4,4,4,4,4,4,4,4 & CP & 0.941 & 0.866 & 0.753 & 0.970 & 0.944 & \textbf{0.976} & 0.752 & \textbf{0.995} \\
& EL & 0.053 & 0.132 & 0.051 & 0.083 & 0.107 & 0.390 & 0.104 & 0.224 \\ \hline
4,4,5,5,6,4,4,5,5,6 & CP & 0.946 & 0.883 & 0.781 & 0.965 & 0.946 & 0.910 & 0.785 & \textbf{0.976} \\
& EL & 0.047 & 0.093 & 0.046 & 0.063 & 0.095 & 0.248 & 0.093 & 0.154 \\ \hline
6,5,5,4,4,6,5,5,4,4 & CP & 0.942 & 0.880 & 0.788 & 0.963 & 0.944 & 0.910 & 0.793 & \textbf{0.977} \\
& EL & 0.047 & 0.094 & 0.046 & 0.063 & 0.095 & 0.247 & 0.093 & 0.154 \\ \hline
5,5,5,5,10,5,5,5,5,10 & CP & 0.942 & 0.889 & 0.818 & 0.954 & 0.944 & 0.890 & 0.826 & 0.966 \\
& EL & 0.040 & 0.067 & 0.040 & 0.049 & 0.083 & 0.165 & 0.082 & 0.112 \\ \hline
10,5,5,5,5,10,5,5,5,5 & CP & 0.943 & 0.891 & 0.822 & 0.962 & 0.948 & 0.893 & 0.826 & 0.965 \\
& EL & 0.040 & 0.067 & 0.040 & 0.049 & 0.083 & 0.165 & 0.082 & 0.112 \\ \hline
4,4,5,5,20,4,4,5,5,20 & CP & 0.945 & 0.898 & 0.818 & 0.960 & 0.944 & 0.925 & 0.830 & \textbf{0.972} \\
& EL & 0.035 & 0.057 & 0.036 & 0.043 & 0.072 & 0.146 & 0.073 & 0.099 \\ \hline
20,5,5,4,4,20,5,5,4,4 & CP & 0.940 & 0.903 & 0.819 & 0.962 & 0.948 & 0.921 & 0.829 & 0.967 \\
& EL & 0.035 & 0.057 & 0.036 & 0.043 & 0.072 & 0.146 & 0.073 & 0.099 \\ \hline
7,7,7,7,7,7,7,7,7,7 & CP & 0.942 & 0.899 & 0.846 & 0.954 & 0.947 & 0.892 & 0.852 & 0.955 \\
& EL & 0.037 & 0.053 & 0.036 & 0.042 & 0.075 & 0.120 & 0.074 & 0.089 \\ \hline
7,7,8,8,9,7,7,8,8,9 & CP & 0.944 & 0.902 & 0.854 & 0.953 & 0.946 & 0.910 & 0.863 & 0.956 \\
& EL & 0.034 & 0.047 & 0.034 & 0.038 & 0.071 & 0.105 & 0.070 & 0.081 \\ \hline
& & \multicolumn{4}{|c|}{$\tau =0.3$} & \multicolumn{4}{|c|}{$\tau =0.35$} \\ \hline
4,4,4,4,4,4,4,4,4,4 & CP & 0.945 & \textbf{1.000} & 0.759 & \textbf{1.000} & 0.946 & \textbf{1.000} & 0.771 & \textbf{1.000} \\
& EL & 0.167 & 1.235 & 0.161 & 0.630 & 0.199 & 1.964 & 0.192 & 0.991 \\ \hline
4,4,5,5,6,4,4,5,5,6 & CP & 0.950 & \textbf{0.999} & 0.799 & \textbf{0.999} & 0.948 & \textbf{1.000} & 0.800 & \textbf{0.999} \\
& EL & 0.148 & 0.637 & 0.145 & 0.350 & 0.177 & 1.046 & 0.172 & 0.544 \\ \hline
6,5,5,4,4,6,5,5,4,4 & CP & 0.945 & \textbf{0.999} & 0.796 & \textbf{0.998} & 0.951 & \textbf{1.000} & 0.797 & \textbf{0.998} \\
& EL & 0.148 & 0.633 & 0.145 & 0.348 & 0.177 & 1.056 & 0.172 & 0.549 \\ \hline
5,5,5,5,10,5,5,5,5,10 & CP & 0.943 & 0.953 & 0.826 & \textbf{0.983} & 0.950 & \textbf{0.991} & 0.838 & \textbf{0.995} \\
& EL & 0.129 & 0.356 & 0.127 & 0.218 & 0.154 & 0.558 & 0.152 & 0.317 \\ \hline
10,5,5,5,5,10,5,5,5,5 & CP & 0.946 & 0.955 & 0.829 & \textbf{0.983} & 0.946 & \textbf{0.993} & 0.834 & \textbf{0.996} \\
& EL & 0.129 & 0.357 & 0.127 & 0.218 & 0.154 & 0.561 & 0.152 & 0.318 \\ \hline
4,4,5,5,20,4,4,5,5,20 & CP & 0.947 & \textbf{0.991} & 0.827 & \textbf{0.992} & 0.950 & \textbf{0.999} & 0.836 & \textbf{0.996} \\
& EL & 0.111 & 0.351 & 0.113 & 0.205 & 0.133 & 0.560 & 0.134 & 0.304 \\ \hline
20,5,5,4,4,20,5,5,4,4 & CP & 0.949 & \textbf{0.992} & 0.836 & \textbf{0.995} & 0.945 & \textbf{0.999} & 0.840 & \textbf{0.997} \\
& EL & 0.111 & 0.352 & 0.113 & 0.206 & 0.134 & 0.566 & 0.134 & 0.306 \\ \hline
7,7,7,7,7,7,7,7,7,7 & CP & 0.945 & 0.891 & 0.852 & 0.954 & 0.948 & 0.893 & 0.863 & 0.955 \\
& EL & 0.118 & 0.225 & 0.115 & 0.154 & 0.141 & 0.312 & 0.138 & 0.202 \\ \hline
7,7,8,8,9,7,7,8,8,9 & CP & 0.939 & 0.894 & 0.873 & 0.955 & 0.946 & 0.891 & 0.875 & 0.953 \\
& EL & 0.108 & 0.191 & 0.109 & 0.136 & 0.131 & 0.257 & 0.130 & 0.174 \\ \hline
\end{tabular}
\end{center}
\end{table}
\begin{table}
\begin{center}
\caption{ Empirical coverage probabilities and expected lengths of two-sided confidence intervals for the parameter of common CV under Weibull distribution for $k=3$.}\label{tab.sim4}
\begin{tabular}{|c|c|cccc|cccc|} \hline
& & \multicolumn{4}{|c|}{$\tau =0.1$} & \multicolumn{4}{|c|}{$\tau =0.2$} \\ \hline
$n_1,n_2,n_3$ & & MSLR & GV1 & GV2 & GV3 & MSLR & GV1 & GV2 & GV3 \\ \hline
4,4,4 & CP & 0.900 & 0.939 & 0.876 & 0.948 & 0.932 & 0.964 & 0.895 & 0.962 \\
& EL & 0.110 & 0.211 & 0.105 & 0.146 & 0.228 & 0.536 & 0.224 & 0.351 \\ \hline
4,5,6 & CP & 0.894 & 0.933 & 0.884 & 0.937 & 0.927 & 0.950 & 0.911 & 0.951 \\
& EL & 0.092 & 0.147 & 0.088 & 0.110 & 0.190 & 0.353 & 0.187 & 0.250 \\ \hline
6,5,4 & CP & 0.899 & 0.934 & 0.881 & 0.934 & 0.934 & 0.950 & 0.898 & 0.950 \\
& EL & 0.092 & 0.147 & 0.088 & 0.110 & 0.190 & 0.351 & 0.186 & 0.249 \\ \hline
5,5,10 & CP & 0.895 & 0.923 & 0.879 & 0.920 & 0.935 & 0.942 & 0.908 & 0.941 \\
& EL & 0.074 & 0.104 & 0.073 & 0.083 & 0.154 & 0.237 & 0.152 & 0.182 \\ \hline
10,5,5 & CP & 0.895 & 0.927 & 0.887 & 0.925 & 0.935 & 0.941 & 0.908 & 0.940 \\
& EL & 0.074 & 0.104 & 0.072 & 0.083 & 0.154 & 0.236 & 0.152 & 0.182 \\ \hline
4,5,20 & CP & 0.890 & 0.922 & 0.882 & 0.920 & 0.926 & 0.946 & 0.906 & 0.938 \\
& EL & 0.058 & 0.078 & 0.058 & 0.064 & 0.120 & 0.176 & 0.121 & 0.139 \\ \hline
20,5,4 & CP & 0.888 & 0.922 & 0.887 & 0.921 & 0.929 & 0.948 & 0.912 & 0.943 \\
& EL & 0.058 & 0.078 & 0.058 & 0.064 & 0.121 & 0.176 & 0.121 & 0.139 \\ \hline
7,7,7 & CP & 0.894 & 0.921 & 0.885 & 0.916 & 0.932 & 0.942 & 0.911 & 0.937 \\
& EL & 0.072 & 0.095 & 0.070 & 0.078 & 0.149 & 0.210 & 0.147 & 0.168 \\ \hline
7,8,9 & CP & 0.887 & 0.920 & 0.889 & 0.919 & 0.933 & 0.942 & 0.915 & 0.936 \\
& EL & 0.066 & 0.083 & 0.064 & 0.070 & 0.136 & 0.181 & 0.134 & 0.149 \\ \hline
& & \multicolumn{4}{|c|}{$\tau =0.3$} & \multicolumn{4}{|c|}{$\tau =0.35$} \\ \hline
4,4,4 & CP & 0.962 & \textbf{0.999} & 0.913 & \textbf{0.993} & 0.965 & \textbf{0.999} & 0.927 & \textbf{0.994} \\
& EL & 0.362 & 1.139 & 0.362 & 0.681 & 0.435 & 1.672 & 0.442 & 0.946 \\ \hline
4,5,6 & CP & 0.956 & \textbf{0.986} & 0.925 & \textbf{0.978} & 0.968 & \textbf{0.998} & 0.934 & \textbf{0.988} \\
& EL & 0.300 & 0.674 & 0.298 & 0.445 & 0.360 & 0.907 & 0.363 & 0.577 \\ \hline
6,5,4 & CP & 0.961 & \textbf{0.984} & 0.926 & \textbf{0.976} & 0.969 & \textbf{0.999} & 0.935 & \textbf{0.990} \\
& EL & 0.301 & 0.677 & 0.301 & 0.448 & 0.362 & 0.915 & 0.364 & 0.581 \\ \hline
5,5,10 & CP & 0.956 & \textbf{0.970} & 0.935 & 0.963 & 0.964 & \textbf{0.984} & 0.946 & \textbf{0.977} \\
& EL & 0.243 & 0.428 & 0.243 & 0.309 & 0.291 & 0.559 & 0.293 & 0.389 \\ \hline
10,5,5 & CP & 0.960 & \textbf{0.972} & 0.935 & 0.966 & 0.964 & \textbf{0.984} & 0.944 & \textbf{0.972} \\
& EL & 0.243 & 0.428 & 0.243 & 0.309 & 0.292 & 0.559 & 0.294 & 0.390 \\ \hline
4,5,20 & CP & 0.958 & \textbf{0.977} & 0.941 & 0.969 & 0.969 & \textbf{0.989} & 0.949 & \textbf{0.979} \\
& EL & 0.190 & 0.320 & 0.192 & 0.236 & 0.227 & 0.419 & 0.230 & 0.296 \\ \hline
20,5,4 & CP & 0.959 & \textbf{0.977} & 0.932 & 0.966 & 0.971 & \textbf{0.987} & 0.949 & \textbf{0.978} \\
& EL & 0.189 & 0.321 & 0.191 & 0.236 & 0.228 & 0.419 & 0.230 & 0.296 \\ \hline
7,7,7 & CP & 0.963 & 0.959 & 0.939 & 0.953 & 0.966 & \textbf{0.972} & 0.949 & 0.965 \\
& EL & 0.234 & 0.366 & 0.233 & 0.277 & 0.281 & 0.471 & 0.282 & 0.344 \\ \hline
7,8,9 & CP & 0.963 & 0.960 & 0.942 & 0.956 & 0.970 & \textbf{0.970} & 0.952 & 0.962 \\
& EL & 0.214 & 0.310 & 0.214 & 0.243 & 0.257 & 0.390 & 0.256 & 0.296 \\ \hline
\end{tabular}
\end{center}
\end{table}
\begin{table}
\begin{center}
\caption{ Empirical coverage probabilities and expected lengths of two-sided confidence intervals for the parameter of common CV under Weibull distribution for $k=5$.}\label{tab.sim5}
\begin{tabular}{|c|c|cccc|cccc|} \hline
& & \multicolumn{4}{|c|}{$\tau =0.1$} & \multicolumn{4}{|c|}{$\tau =0.2$} \\ \hline
$n_1,\dots ,n_5$ & & MSLR & GV1 & GV2 & GV3 & MSLR & GV1 & GV2 & GV3 \\ \hline
4,4,4,4,4 & CP & 0.894 & 0.912 & 0.821 & 0.951 & 0.928 & 0.959 & 0.852 & \textbf{0.976} \\
& EL & 0.079 & 0.175 & 0.074 & 0.114 & 0.164 & 0.469 & 0.157 & 0.285 \\ \hline
4,4,5,5,6 & CP & 0.891 & 0.915 & 0.827 & 0.943 & 0.928 & 0.932 & 0.866 & 0.961 \\
& EL & 0.069 & 0.128 & 0.066 & 0.089 & 0.143 & 0.319 & 0.139 & 0.209 \\ \hline
6,5,5,4,4 & CP & 0.897 & 0.918 & 0.833 & 0.944 & 0.934 & 0.934 & 0.861 & 0.959 \\
& EL & 0.069 & 0.127 & 0.066 & 0.089 & 0.142 & 0.318 & 0.138 & 0.208 \\ \hline
5,5,5,5,10 & CP & 0.894 & 0.913 & 0.841 & 0.933 & 0.931 & 0.926 & 0.880 & 0.949 \\
& EL & 0.059 & 0.093 & 0.057 & 0.070 & 0.123 & 0.222 & 0.120 & 0.157 \\ \hline
10,5,5,5,5 & CP & 0.885 & 0.916 & 0.834 & 0.932 & 0.929 & 0.928 & 0.878 & 0.948 \\
& EL & 0.059 & 0.093 & 0.057 & 0.070 & 0.122 & 0.221 & 0.120 & 0.156 \\ \hline
4,4,5,5,20 & CP & 0.884 & 0.917 & 0.846 & 0.930 & 0.934 & 0.942 & 0.883 & 0.954 \\
& EL & 0.051 & 0.079 & 0.051 & 0.060 & 0.105 & 0.190 & 0.105 & 0.136 \\ \hline
20,5,5,4,4 & CP & 0.892 & 0.910 & 0.834 & 0.924 & 0.929 & 0.938 & 0.875 & 0.955 \\
& EL & 0.051 & 0.079 & 0.051 & 0.060 & 0.105 & 0.191 & 0.105 & 0.136 \\ \hline
7,7,7,7,7 & CP & 0.882 & 0.913 & 0.848 & 0.921 & 0.931 & 0.929 & 0.894 & 0.944 \\
& EL & 0.054 & 0.075 & 0.052 & 0.059 & 0.111 & 0.166 & 0.108 & 0.127 \\ \hline
7,7,8,8,9 & CP & 0.885 & 0.915 & 0.855 & 0.922 & 0.931 & 0.922 & 0.895 & 0.941 \\
& EL & 0.050 & 0.067 & 0.049 & 0.054 & 0.104 & 0.148 & 0.102 & 0.116 \\ \hline
& & \multicolumn{4}{|c|}{$\tau =0.3$} & \multicolumn{4}{|c|}{$\tau =0.35$} \\ \hline
4,4,4,4,4 & CP & 0.960 & \textbf{1.000} & 0.885 & \textbf{0.997} & 0.969 & \textbf{1.000} & 0.886 & \textbf{0.999} \\
& EL & 0.254 & 1.131 & 0.249 & 0.619 & 0.300 & 1.733 & 0.298 & 0.907 \\ \hline
4,4,5,5,6 & CP & 0.959 & \textbf{0.993} & 0.897 & \textbf{0.991} & 0.970 & \textbf{1.000} & 0.908 & \textbf{0.997} \\
& EL & 0.222 & 0.663 & 0.219 & 0.399 & 0.264 & 0.968 & 0.265 & 0.553 \\ \hline
6,5,5,4,4 & CP & 0.957 & \textbf{0.993} & 0.898 & \textbf{0.990} & 0.968 & \textbf{1.000} & 0.910 & \textbf{0.997} \\
& EL & 0.222 & 0.666 & 0.220 & 0.401 & 0.265 & 0.964 & 0.264 & 0.550 \\ \hline
5,5,5,5,10 & CP & 0.958 & 0.966 & 0.911 & \textbf{0.975} & 0.970 & \textbf{0.990} & 0.923 & \textbf{0.989} \\
& EL & 0.191 & 0.426 & 0.189 & 0.280 & 0.228 & 0.577 & 0.227 & 0.364 \\ \hline
10,5,5,5,5 & CP & 0.960 & 0.962 & 0.902 & \textbf{0.972} & 0.970 & \textbf{0.987} & 0.923 & \textbf{0.984} \\
& EL & 0.191 & 0.424 & 0.189 & 0.279 & 0.228 & 0.579 & 0.228 & 0.365 \\ \hline
4,4,5,5,20 & CP & 0.957 & 0.986 & 0.908 & \textbf{0.985} & 0.969 & \textbf{0.997} & 0.922 & \textbf{0.992} \\
& EL & 0.164 & 0.382 & 0.165 & 0.247 & 0.196 & 0.533 & 0.197 & 0.326 \\ \hline
20,5,5,4,4 & CP & 0.962 & 0.985 & 0.910 & \textbf{0.983} & 0.968 & \textbf{0.998} & 0.921 & \textbf{0.992} \\
& EL & 0.164 & 0.379 & 0.165 & 0.246 & 0.195 & 0.535 & 0.198 & 0.327 \\ \hline
7,7,7,7,7 & CP & 0.958 & 0.942 & 0.922 & 0.953 & 0.969 & 0.954 & 0.929 & 0.964 \\
& EL & 0.173 & 0.298 & 0.171 & 0.214 & 0.206 & 0.389 & 0.206 & 0.269 \\ \hline
7,7,8,8,9 & CP & 0.958 & 0.942 & 0.925 & 0.954 & 0.968 & 0.956 & 0.936 & 0.964 \\
& EL & 0.162 & 0.257 & 0.160 & 0.191 & 0.193 & 0.329 & 0.192 & 0.237 \\ \hline
\end{tabular}
\end{center}
\end{table}
\begin{table}
\begin{center}
\caption{ Empirical coverage probabilities and expected lengths of two-sided confidence intervals for the parameter of common CV under Weibull distribution for $k=10$.}\label{tab.sim6}
\begin{tabular}{|c|c|cccc|cccc|} \hline
& & \multicolumn{4}{|c|}{$\tau =0.1$} & \multicolumn{4}{|c|}{$\tau =0.2$} \\ \hline
$n_1,\dots ,n_{10}$ & & MSLR & GV1 & GV2 & GV3 & MSLR & GV1 & GV2 & GV3 \\ \hline
4,4,4,4,4,4,4,4,4,4 & CP & 0.890 & 0.859 & 0.668 & 0.964 & 0.923 & \textbf{0.978} & 0.737 & \textbf{0.997} \\
& EL & 0.053 & 0.136 & 0.050 & 0.083 & 0.109 & 0.408 & 0.104 & 0.232 \\ \hline
4,4,5,5,6,4,4,5,5,6 & CP & 0.886 & 0.867 & 0.689 & 0.952 & 0.927 & 0.893 & 0.768 & \textbf{0.972} \\
& EL & 0.047 & 0.095 & 0.044 & 0.063 & 0.096 & 0.256 & 0.093 & 0.157 \\ \hline
6,5,5,4,4,6,5,5,4,4 & CP & 0.882 & 0.862 & 0.691 & 0.952 & 0.929 & 0.895 & 0.767 & 0.972 \\
& EL & 0.047 & 0.095 & 0.045 & 0.063 & 0.096 & 0.255 & 0.093 & 0.157 \\ \hline
5,5,5,5,10,5,5,5,5,10 & CP & 0.882 & 0.869 & 0.731 & 0.937 & 0.926 & 0.872 & 0.802 & 0.955 \\
& EL & 0.041 & 0.069 & 0.039 & 0.049 & 0.084 & 0.169 & 0.082 & 0.114 \\ \hline
10,5,5,5,5,10,5,5,5,5 & CP & 0.882 & 0.875 & 0.725 & 0.939 & 0.932 & 0.876 & 0.800 & 0.954 \\
& EL & 0.041 & 0.068 & 0.039 & 0.049 & 0.083 & 0.169 & 0.082 & 0.114 \\ \hline
4,4,5,5,20,4,4,5,5,20 & CP & 0.877 & 0.875 & 0.733 & 0.937 & 0.930 & 0.914 & 0.808 & 0.968 \\
& EL & 0.035 & 0.059 & 0.035 & 0.043 & 0.072 & 0.150 & 0.073 & 0.101 \\ \hline
20,5,5,4,4,20,5,5,4,4 & CP & 0.878 & 0.877 & 0.740 & 0.935 & 0.925 & 0.914 & 0.804 & 0.968 \\
& EL & 0.035 & 0.058 & 0.035 & 0.043 & 0.072 & 0.150 & 0.073 & 0.101 \\ \hline
7,7,7,7,7,7,7,7,7,7 & CP & 0.881 & 0.881 & 0.748 & 0.928 & 0.925 & 0.879 & 0.827 & 0.947 \\
& EL & 0.037 & 0.054 & 0.036 & 0.041 & 0.076 & 0.122 & 0.074 & 0.090 \\ \hline
7,7,8,8,9,7,7,8,8,9 & CP & 0.876 & 0.878 & 0.766 & 0.920 & 0.928 & 0.879 & 0.842 & 0.941 \\
& EL & 0.035 & 0.048 & 0.033 & 0.038 & 0.071 & 0.107 & 0.070 & 0.082 \\ \hline
& & \multicolumn{4}{|c|}{$\tau =0.3$} & \multicolumn{4}{|c|}{$\tau =0.35$} \\ \hline
4,4,4,4,4,4,4,4,4,4 & CP & 0.957 & 1.000 & 0.781 & 1.000 & 0.968 & \textbf{1.000} & 0.808 & \textbf{1.000} \\
& EL & 0.167 & 1.228 & 0.163 & 0.627 & 0.197 & 1.864 & 0.195 & 0.941 \\ \hline
4,4,5,5,6,4,4,5,5,6 & CP & 0.961 & 0.999 & 0.821 & 1.000 & 0.970 & \textbf{1.000} & 0.838 & \textbf{1.000} \\
& EL & 0.149 & 0.614 & 0.146 & 0.340 & 0.176 & 0.978 & 0.175 & 0.512 \\ \hline
6,5,5,4,4,6,5,5,4,4 & CP & 0.959 & 0.998 & 0.819 & 0.998 & 0.967 & \textbf{1.000} & 0.841 & \textbf{1.000} \\
& EL & 0.149 & 0.620 & 0.147 & 0.343 & 0.176 & 0.971 & 0.175 & 0.509 \\ \hline
5,5,5,5,10,5,5,5,5,10 & CP & 0.960 & 0.946 & 0.849 & 0.984 & 0.968 & \textbf{0.995} & 0.872 & \textbf{0.996} \\
& EL & 0.129 & 0.350 & 0.128 & 0.216 & 0.154 & 0.509 & 0.153 & 0.297 \\ \hline
10,5,5,5,5,10,5,5,5,5 & CP & 0.959 & 0.949 & 0.855 & 0.985 & 0.970 & \textbf{0.995} & 0.875 & \textbf{0.997} \\
& EL & 0.129 & 0.349 & 0.128 & 0.216 & 0.154 & 0.509 & 0.153 & 0.297 \\ \hline
4,4,5,5,20,4,4,5,5,20 & CP & 0.958 & 0.992 & 0.853 & 0.993 & 0.967 & \textbf{0.999} & 0.869 & \textbf{0.998} \\
& EL & 0.112 & 0.344 & 0.113 & 0.202 & 0.133 & 0.523 & 0.136 & 0.286 \\ \hline
20,5,5,4,4,20,5,5,4,4 & CP & 0.957 & 0.993 & 0.853 & 0.994 & 0.967 & \textbf{1.000} & 0.869 & \textbf{0.997} \\
& EL & 0.112 & 0.345 & 0.114 & 0.203 & 0.133 & 0.524 & 0.135 & 0.287 \\ \hline
7,7,7,7,7,7,7,7,7,7 & CP & 0.960 & 0.891 & 0.876 & 0.954 & 0.968 & 0.907 & 0.892 & \textbf{0.966} \\
& EL & 0.118 & 0.223 & 0.116 & 0.154 & 0.140 & 0.296 & 0.139 & 0.196 \\ \hline
7,7,8,8,9,7,7,8,8,9 & CP & 0.946 & 0.894 & 0.890 & 0.955 & 0.965 & 0.906 & 0.910 & 0.960 \\
& EL & 0.109 & 0.189 & 0.109 & 0.136 & 0.131 & 0.248 & 0.131 & 0.171 \\ \hline
\end{tabular}
\end{center}
\end{table}
\section{Real examples}
\label{sec.ex}
\begin{example}
In this example, we use the data set given by
\cite{fu-ts-98}.
This data set is also analyzed by
\cite{ja-ka-13} and
\cite{kr-le-14}
for the problem of testing the equality of several independent normal CV's, and is considered by
\cite{tian-05} and
\cite{be-ja-08}
for the problem of inference about the common CV. The Hong Kong Medical Technology Association has conducted a Quality Assurance Programme for medical laboratories since 1989 with the purpose of promoting the quality and standards of medical laboratory technology. The data are collected from the third surveys of 1995 and 1996 for the measurement of Hb, RBC, MCV, Hct, WBC, and Platelet in two blood samples (normal and abnormal). The summary statistics for this subset of data are given in Table \ref{tab.ex1}. The full data set of this study has not been published, and therefore we cannot check the normality assumption.
At level $\alpha =0.05$,
\cite{ja-ka-13}
showed that the CV for RBC, MCV, Hct, WBC, and Plt in 1995 is not significantly different from that of 1996 in the abnormal blood samples. The confidence intervals for the common CV of each measurement between 1995 and 1996, based on our proposed MSLR method and the three generalized approaches, are given in Table \ref{tab.ex2}. Since the sample sizes are large, the results of all methods are close to each other.
\end{example}
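As a rough sanity check (and not the MSLR or GV procedures themselves), a simple pooled point estimate of the common CV can be formed from the summary statistics in Table \ref{tab.ex1} by averaging the sample CVs $s_i/\bar{x}_i$ weighted by their degrees of freedom; for each measurement shown below, this naive estimate falls inside all of the intervals reported in Table \ref{tab.ex2}:

```python
# Sketch: a naive pooled estimate of the common CV from summary statistics.
# This is NOT the MSLR or GV estimator, just a weighted average of sample CVs.
summary = {  # measurement: [(n, xbar, s) for 1995 and 1996]
    "RBC": [(65, 4.606, 0.0954), (73, 4.574, 0.0838)],
    "MCV": [(63, 87.25, 3.496), (72, 92.33, 3.078)],
    "WBC": [(65, 17.68, 1.067), (73, 18.93, 1.211)],
}

def pooled_cv(groups):
    """Average of the sample CVs s_i/xbar_i, weighted by degrees of freedom."""
    num = sum((n - 1) * s / xbar for n, xbar, s in groups)
    den = sum(n - 1 for n, _, _ in groups)
    return num / den

for name, groups in summary.items():
    print(name, round(pooled_cv(groups), 4))
```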
\begin{table}[ht]
\begin{center}
\caption{ Summary statistics of measurements in the abnormal blood samples.}\label{tab.ex1}
\begin{tabular}{|c|c|ccccc|} \hline
Year & & RBC & MCV & Hct & WBC & Plt \\ \hline
1995 & $n_1$ & 65 & 63 & 64 & 65 & 64 \\
& ${\bar{x}}_1$ & 4.606 & 87.25 & 0.4024 & 17.68 & 524.7 \\
& $s_1$ & 0.0954 & 3.496 & 0.0194 & 1.067 & 37.05 \\ \hline
1996 & $n_2$ & 73 & 72 & 72 & 73 & 71 \\
& ${\bar{x}}_2$ & 4.574 & 92.33 & 0.4216 & 18.93 & 466.5 \\
& $s_2$ & 0.0838 & 3.078 & 0.0168 & 1.211 & 41.58 \\ \hline
\end{tabular}
\end{center}
\end{table}
\begin{table}[ht]
\begin{center}
\caption{The two-sided confidence intervals for the common CV of measurements in the abnormal blood samples.}\label{tab.ex2}
\begin{tabular}{|c|ccccc|} \hline
Method & RBC & MCV & Hct & WBC & Plt \\ \hline
MSLR & (0.017,0.022) & (0.033,0.042) & (0.039,0.050) & (0.056,0.071) & (0.072,0.092) \\
GV1 & (0.017,0.022) & (0.034,0.042) & (0.039,0.050) & (0.056,0.071) & (0.072,0.092) \\
GV2 & (0.017,0.022) & (0.033,0.041) & (0.039,0.049) & (0.056,0.071) & (0.071,0.090) \\
GV3 & (0.017,0.022) & (0.034,0.041) & (0.039,0.050) & (0.056,0.071) & (0.072,0.091) \\ \hline
\end{tabular}
\end{center}
\end{table}
\begin{example}
The data set in Appendix D of
\cite{fl-ha-91}
refers to survival times of patients from four hospitals. It was analyzed by \cite{na-ra-03} and \cite{be-ja-08}. These data and their descriptive statistics are given in Table \ref{tab.ex3}.
The normality assumption for survival times of patients in each of the hospitals was checked using Kolmogorov-Smirnov (KS) and Shapiro-Wilk (SW) tests. The p-values are given in Table \ref{tab.ex3}.
Since all p-values are large, the normal model appears to be appropriate for each group.
\cite{na-ra-03}
tested the homogeneity of the CV's for the hospitals and showed that all tests lead to the same conclusion of accepting the null hypothesis. Therefore, a common CV can be assumed for these data. The two-sided confidence intervals for the common CV based on MSLR, GV1, GV2 and GV3 are (0.4748, 0.5988), (-1.7855, 3.6561), (0.4568, 1.1759) and (-0.5457, 2.2563), with lengths 0.1240, 5.4416, 0.7191, and 2.8020, respectively. Therefore, the confidence interval proposed by
\cite{tian-05}
is wider than those of the other methods, while our proposed confidence interval is the shortest. This is consistent with the simulation results in Section \ref{sec.infer}, where the expected length of our proposed method is smaller than those of the other approaches.
\end{example}
\begin{table}
\begin{center}
\caption{ Data and descriptive statistics for survival times of patients from four hospitals.}\label{tab.ex3}
\begin{tabular}{|c|l|rr|cc|} \hline
& Data & ${\bar{x}}_i$ & $s^2_i$ & KS & SW\\ \hline
Hospital 1 & 176 105 266 227 66 & 168.0 & 6880.5 &0.990 &0.794 \\
Hospital 2 & 24 5 155 54 & 59.5 & 4460.3 &0.822 &0.309 \\
Hospital 3 & 58 64 15 & 45.7 & 714.3 &0.748 &0.215 \\
Hospital 4 & 174 42 305 92 30 82 265 237 208 147 & 154.6 & 8894.7 &0.939& 0.695 \\ \hline
\end{tabular}
\end{center}
\end{table}
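The descriptive statistics in Table \ref{tab.ex3} can be recomputed directly from the listed survival times; a minimal sketch using only the Python standard library (note that `statistics.variance` is the unbiased sample variance, matching $s^2_i$):

```python
import math
import statistics

# Survival times from Table tab.ex3 (Appendix D of Fleming and Harrington, 1991)
hospitals = {
    "Hospital 1": [176, 105, 266, 227, 66],
    "Hospital 2": [24, 5, 155, 54],
    "Hospital 3": [58, 64, 15],
    "Hospital 4": [174, 42, 305, 92, 30, 82, 265, 237, 208, 147],
}

for name, times in hospitals.items():
    xbar = statistics.mean(times)      # sample mean
    s2 = statistics.variance(times)    # unbiased sample variance
    cv = math.sqrt(s2) / xbar          # sample coefficient of variation
    print(f"{name}: xbar={xbar:.1f}, s2={s2:.1f}, cv={cv:.3f}")
```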
\section{Conclusion}
\label{sec.con}
In this paper, we utilized the modified signed log-likelihood ratio (MSLR) method for inference about the common coefficient of variation of several independent normal populations. We compared it with the competing approaches, known as generalized variable (GV) approaches, in terms of empirical coverage probabilities and expected lengths. Simulation studies showed that
the coverage probability of the MSLR method is close to the nominal confidence coefficient and
its expected length is shorter than the expected lengths of the GV methods. Therefore, our proposed approach
performs very satisfactorily regardless of the number of samples and for all values of the common CV, even for small sample sizes, whereas the generalized variable approaches perform well when the value of the common CV is large. An executable program written in R that computes the confidence intervals for the common CV can be made available to any interested reader.
\bibliographystyle{apa}
\section{Introduction}
\label{intro}
What does it mean for a model to be interpretable? From our human perspective, interpretability typically means that the model can be explained, a quality which is imperative in almost all real applications where a human is responsible for the consequences of the model. However well a model might have performed on historical data, in critical applications interpretability is necessary to justify, improve, and sometimes simplify decision making.
A good example of this is a malware detection neural network \cite{malwarecom} which was trained to distinguish regular code from malware. The neural network had excellent performance, presumably due to the deep architecture capturing some complex phenomenon opaque to humans, but it was later found that the primary distinguishing characteristic was the grammatical coherence of the comments in the code, which were either missing or poorly written in the malware as opposed to regular code. In hindsight, this seems obvious, as you wouldn't expect someone writing malware to expend effort making it readable. This example shows how the interpretation of a seemingly complex model can aid in creating a simple rule.
The above example defines interpretability as humans typically do: we require the model to be understandable. This thinking would lead us to believe that, in general, complex models such as random forests or even deep neural networks are not interpretable. However, just because we cannot always understand what the complex model is doing does not necessarily mean that the model is not interpretable in some other useful sense. It is in this spirit that we define the novel notion of $\delta$-interpretability that is more general than being just relative to a human.
We offer an example from the healthcare domain \cite{chang2010}, where interpretability is a critical modeling aspect, as a running example in our paper. The task is predicting future costs based on demographics and past insurance claims (including doctor visit costs, justifications, and diagnoses) for members of the population.
The data used in \cite{chang2010} represents diagnoses using ICD-9-CM (International Classification
of Diseases) coding, which had on the order of 15,000 distinct codes at the time of the study. The high-dimensional nature of diagnoses led to the development of various abstractions such as the ACG (Adjusted Clinical Groups) case-mix system \cite{starfield1991}, which outputs various mappings of the ICD codes to lower-dimensional categorical spaces, some even independent of disease. A particular mapping of ICD codes to 264 Expanded Diagnosis Clusters (EDCs) was used in \cite{chang2010} to create a complex model that performed quite well in the prediction task.
\begin{figure*}[t]
\centering
\includegraphics[width=0.8\linewidth]{InterpretabilityBlockDiagram.png}
\caption{Above we depict what it means to be $\delta$-interpretable. Essentially, our procedure/model is $\delta$-interpretable if it reduces the error of the TM to at most a $\delta$ fraction of its original error w.r.t. a target data distribution.}
\label{Intblck}
\end{figure*}
\section{Defining $\delta$-Interpretability}
Let us return to the opening question. Is interpretability simply sparsity, entropy, or something more general? An average person is said to remember no more than seven pieces of information at a time \cite{lisman1995storage}. Should that inform our notion of interpretability? Taking inspiration from the theory of computation \cite{toc} where a language is classified as regular, context free, or something else based on the strength of the machine (i.e. program) required to recognize it, we look to define interpretability along analogous lines.
Based on this discussion, we would like to define interpretability relative to a target model (TM), i.e. $\delta$-interpretability. \emph{The target model in the most obvious setting would be a human, but it doesn't have to be}. It could be a linear model, a decision tree or even an entity with superhuman capabilities. The TM in our running healthcare example \cite{chang2010} is a linear model where the features come from an ACG system mapping of ICD codes to only 32 Aggregated Diagnosis Groups (ADGs). \eat{In this simple TM, the mapping is based on only five clinical features, namely; duration of condition, severity of condition, diagnostic certainty, etiology of the condition and specialty care involvement, which surprisingly do not identify organ systems or disease.}
Our model/procedure qualifies as being $\delta$-interpretable if we can somehow convey information to the TM that will lead to improving its performance (e.g., accuracy, AUC or reward) for the task at hand. Hence, the $\delta$-interpretable model has to transmit information in a way that is consumable by the TM. For example, if the TM is a linear model, our $\delta$-interpretable model can only tell it how to modify its feature weights or which features to consider. In our healthcare example, the authors in \cite{chang2010} need a procedure to convey information from the complex 264-dimensional model to the simple linear 32-dimensional model. Any pairwise or higher-order interactions would not be of use to this model. Thus, if our ``interpretable'' model came up with some modifications to pairwise interaction terms, it would not be considered a $\delta$-interpretable procedure for the linear TM.
Ideally, the performance of the TM should improve w.r.t. some target distribution. The target distribution could just be the underlying distribution, or it could be some reweighted form of it in case we are interested in some localities of the feature space more than others. For instance, in a standard supervised setting this could just be generalization error (GE), but in situations where we want to focus on local behavior the error would be w.r.t. the new reweighted distribution that focuses on the specific region. \emph{In other words, we allow for instance-level interpretability as well as global interpretability, and the capturing of local behaviors that lie in between}. The healthcare example focuses on mean absolute prediction error (MAPE) expressed as a percentage of the mean of the actual expenditure (Table 3 in \cite{chang2010}). Formally, we define $\delta$-interpretability as follows:
\begin{bigboxit}
\label{didef}
\begin{definition} $\delta$-interpretability:
Given a target model $M_T$ belonging to a hypothesis class $\mathcal{H}$ and a target distribution $D_T$, a procedure $P_I$ is $\delta$-interpretable if the information $I$ it communicates to $M_T$ resulting in model $M_T(I)\in\mathcal{H}$ satisfies the following inequality: $e_{M_T(I)}\le \delta\cdot e_{M_T}$, where $e_{\mathcal{M}}$ is the expected error of $\mathcal{M}$ relative to some loss function on $D_T$.
\end{definition}
\end{bigboxit}
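As a minimal numerical illustration of the definition (all error values below are hypothetical), checking $\delta$-interpretability amounts to comparing a single ratio of expected errors:

```python
def is_delta_interpretable(err_after, err_before, delta):
    """Check the inequality e_{M_T(I)} <= delta * e_{M_T}."""
    return err_after <= delta * err_before

# Hypothetical expected errors: the TM alone errs at 0.20; after receiving
# the information I, its error drops to 0.15.
print(round(0.15 / 0.20, 3))                          # 0.75 (smallest achievable delta)
print(is_delta_interpretable(0.15, 0.20, delta=0.8))  # True
print(is_delta_interpretable(0.15, 0.20, delta=0.7))  # False
```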
The above definition is a general notion of interpretability that does not require the interpretable procedure to have access to a complex model. It may use the complex model (CM), but it may very well act as an oracle conjuring up useful information that will improve the performance of the TM. The more intuitive but special case of Definition \ref{didef}.1 is given below, which defines $\delta$-interpretability for a CM relative to a TM as being able to transfer information from the CM to the TM using a procedure $P_I$ so as to improve the TM's performance. These concepts are depicted in figure \ref{Intblck}.
\begin{bigboxit}
\label{didefa}
\begin{definition} CM-based $\delta$-interpretability:
Given a target model $M_T$ belonging to a hypothesis class $\mathcal{H}$, a complex model $M_C$, and a target distribution $D_T$,
the model $M_C$ is $\delta$-interpretable relative to $M_T$, if there exists a procedure $P_I$ that derives information $I$ from $M_C$ and communicates it to $M_T$ resulting in model $M_T(I)\in\mathcal{H}$ satisfying the following inequality: $e_{M_T(I)}\le \delta\cdot e_{M_T}$, where $e_{\mathcal{M}}$ is the expected error of $\mathcal{M}$ relative to some loss function on $D_T$.
\end{definition}
\end{bigboxit}
One may consider the more intuitive definition of $\delta$-interpretability when there is a CM.
We now clarify the use of the term information $I$ in the definition. In a standard binary classification task, the training label $y \in \{+1,-1\}$ can be considered one bit of information about the sample $x$, i.e. ``Which label is more likely given $x$?'', whereas the confidence score $p(y|x)$ holds richer information, i.e. ``How likely is the label $y$ for the sample $x$?''. From an information-theoretic point of view, given $x$ and only its training label $y$, there is still uncertainty about $p(y|x)$ in the interval $[1/2,1]$ prior to training. According to our definition, an interpretable method can provide useful information $I$ in the form of a sequence of bits or parameters about the training data that can potentially reduce this uncertainty about the confidence score of the TM prior to training. Moreover, the new $M_T(I)$ performs better if it can effectively use this information.
\eat{can actually reduce the uncertainty of the $p(y(x)|x)$ in the interval $[1/2,1]$ prior to training the TM. However, the new $M_T(I)$ obtained is more useful if the training method can effectively use this potentially useful information. This is one precise and concrete sense in which, $I$ communicated by the interpretable method are indeed information bits that reduce uncertainty about the confidence score.}
\emph{Our definitions above are motivated by the fact that when people ask for an interpretation there is an implicit quality requirement in that the interpretation should be related to the task at hand. We capture this relatedness of the interpretation to the task by requiring that the interpretable procedure improve the performance of the TM. Note that the TM does not have to be interpretable; rather, it is just a benchmark used to measure the relevance of the provided interpretation. Without such a requirement, anything that one can elucidate is then an explanation for everything else, making the concept of interpretation pointless. Consequently, the crux for any application in our setting is to come up with an interpretable procedure that can ideally improve the performance of the given TM.}
The closer $\delta$ is to 0 the more interpretable the procedure. Note the error reduction is relative to the TM model itself, not relative to the complex model. An illustration of the above definition is seen in figure \ref{Intblck}. Here we want to interpret a complex process relative to a given TM and target distribution. The interaction with the complex process could just be by observing inputs and outputs or could be through delving into the inner workings of the complex process. \emph{In addition, it is imperative that $M_T(I)\in \mathcal{H}$ i.e. the information conveyed should be within the representational power of the TM.}
\emph{The advantage of this definition is that the TM isn't tied to any specific entity such as a human, and thus neither is our definition.} We can thus test the utility of our definition w.r.t. simpler models (viz. linear models, decision lists, etc.) given that a human's complexity may be hard to characterize. We see examples of this in the coming sections.
Moreover, a direct consequence of our definition is that it naturally \emph{creates a partial ordering of interpretable procedures} relative to a TM and target distribution, which is in spirit similar to complexity classes for the time or space requirements of algorithms. For instance, if $\mathcal{R^+}$ denotes the non-negative real line, then $\delta_1$-interpretability $\Rightarrow$ $\delta_2$-interpretability for all $\delta_1,~\delta_2\in \mathcal{R^+}$ with $\delta_1 \le \delta_2$, but not the other way around.
\section{Practical Definition of Interpretability}
We first extend our $\delta$-interpretability definition to the practical setting where we don't have the target distribution, but rather just samples. We then show how this new definition reduces to our original definition in the ideal setting where we have access to the target distribution.
\subsection{($\delta,\gamma$)-Interpretability: Performance and Robustness}
Our definition of $\delta$-interpretability focuses solely on the performance of the TM. However, in most practical applications robustness is a key requirement. Imagine a doctor advising a treatment to a patient; the doctor had better have high confidence in the treatment's effect before prescribing it.
So what really is a robust model? Intuitively, it is a notion where one expects the same (or similar) performance from the model when applied to ``nearby'' inputs. In practice, this is often done by perturbing the test set and then evaluating the performance of the model \cite{carlini}. If the accuracies are comparable to those on the original test set, then the model is deemed robust. Hence, this procedure can be viewed as creating alternate test sets on which we test the model. Thus, the procedures that create adversarial examples or perturbations can be said to induce a distribution $D_R$ from which we get these alternate test sets. \emph{The important underlying assumption here is that the newly created test sets are at least moderately probable w.r.t. the target distribution.} Of course, in the case of non-uniform loss functions, test sets on which the expected loss is low are uninteresting. This brings us to the question of when it is truly interesting to study robustness.
\emph{It seems that robustness is really only an issue when your test data on which you evaluate is incomplete i.e. it doesn't include all examples in the domain.} If you can test on all points in your domain, which could be finite, and are accurate on it then there is no need for robustness. That is why in a certain sense, low generalization error already captures robustness since the error is over the entire domain and it is impossible for your classifier to not be robust and have low GE if you could actually test on the entire domain. The problem is really only because of estimation on incomplete test sets \cite{kushtst}. \eat{Using conditional entropy definition for GE ie H(Y|X) as Karthik was suggesting doesnt solve the problem for incomplete test sets since we have seen time and again with these networks that they may give high confidences for the correct class on these test sets but are still not robust.} Given this we extend our definition of $\delta$-interpretability for practical scenarios.
\begin{bigboxit}
\begin{definition}($\delta,\gamma$)-interpretability:\label{didefp} Given a target model $M_T$ belonging to a hypothesis class $\mathcal{H}$, a sample $S_T$ from the target distribution $D_T$, and a sample $S_R$ from the adversarial distribution $D_R$, a procedure $P_I$ is $(\delta, \gamma)$-interpretable relative to $(D_T\sim) S_T$ and $(D_R\sim) S_R$ if the information $I$ it communicates to $M_T$ resulting in model $M_T(I)\in\mathcal{H}$ satisfies the following inequalities:
\begin{itemize}
\item $\hat{e}^{S_T}_{M_T(I)}\le \delta\cdot \hat{e}^{S_T}_{M_T}$ (performance)
\item $\hat{e}^{S_R}_{M_T(I)}-\hat{e}^{S_T}_{M_T(I)}\le \gamma\cdot (\hat{e}^{S_R}_{M_T}-\hat{e}^{S_T}_{M_T})$ (robustness)
\end{itemize}
where $\hat{e}^{\mathcal{S}}_{\mathcal{M}}$ is the empirical error of $\mathcal{M}$ relative to some loss function.
\end{definition}
\end{bigboxit}
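The two inequalities can be turned around to read off the smallest $(\delta,\gamma)$ a procedure achieves on given test and robustness samples; a sketch with hypothetical error values:

```python
def achieved_delta_gamma(e_t_info, e_r_info, e_t, e_r):
    """Smallest (delta, gamma) satisfying the performance and robustness
    inequalities, given the empirical errors of the TM on the test sample S_T
    and the adversarial sample S_R, before (e_t, e_r) and after
    (e_t_info, e_r_info) the information I is communicated."""
    delta = e_t_info / e_t
    gamma = (e_r_info - e_t_info) / (e_r - e_t)
    return delta, gamma

# Hypothetical errors: TM alone errs at 0.10 (test) and 0.30 (adversarial);
# after receiving I the errors drop to 0.08 and 0.24.
delta, gamma = achieved_delta_gamma(0.08, 0.24, 0.10, 0.30)
print(round(delta, 2), round(gamma, 2))  # 0.8 0.8
```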
The first inequality above is analogous to the one in Definition \ref{didef}.1. The second captures robustness and asks how representative the test error of $M_T(I)$ is w.r.t. its error on other high-probability samples, when compared with the performance of $M_T$ on the same test and robust sets. This can be viewed as an orthogonal metric to evaluate interpretable procedures in the practical setting. This definition could also be adapted to a more intuitive but restrictive definition analogous to Definition \ref{didefa}.2.
\subsection{Reduction to Definition \ref{didef}.1}
\emph{An (ideal) adversarial example is not just one which a model predicts incorrectly; it must also satisfy the additional requirement of being a highly probable sample from the target distribution.} Without the second condition, even highly unlikely outliers would be adversarial examples. But in practice this is not what people usually mean when talking about adversarial examples.
Given this, ideally, we should choose $D_R=D_T$ so that we test the model mainly on important examples. If we could do this and test on the entire domain, our Definition \ref{didefp} would reduce to Definition \ref{didef}.1, as seen in the following proposition.
\begin{proof}[Proposition 1]
In the ideal setting, where we know $D_T$, we can set $D_R=D_T$ and compute the true errors; then ($\delta,\gamma$)-interpretability reduces to $\delta$-interpretability.
\end{proof}
\begin{proof}[Proof Sketch]
Since $D_R=D_T$, by taking expectations we get for the first condition: $E[\hat{e}^{S_T}_{M_T(I)} - \delta\hat{e}^{S_T}_{M_T}]\le 0$ and hence $e_{M_T(I)}-\delta\cdot e_{M_T}\le 0$.
For the second inequality we get: $E[\hat{e}^{S_R}_{M_T(I)} - \hat{e}^{S_T}_{M_T(I)}-\gamma \hat{e}^{S_R}_{M_T}+\gamma\hat{e}^{S_T}_{M_T}]\le 0$ and hence $e_{M_T(I)}-e_{M_T(I)}-\gamma e_{M_T}+\gamma e_{M_T}\le 0$, which reduces to $0\le 0$.
\end{proof}
The second condition vanishes and the first condition is just the definition of $\delta$-interpretability. Our extended definition is thus consistent with Definition \ref{didef}.1 where we have access to the target distribution.
\noindent\textbf{Remark:} Model evaluation sometimes requires us to use multiple training and test sets, such as when doing cross-validation. In such cases, we have multiple target models $M^i_T$ trained on independent data sets, and multiple independent test samples $S^i_T$ (indexed by $i=\{1,\ldots,K\}$). The
empirical error above can be defined as $(\sum_{i=1}^K{\hat{e}_{M^i_T}^{S^i_T}})/K$. Since $S^i_T$, as well as the training sets, are sampled from the same target distribution $D_T$, the reduction to Definition \ref{didef}.1 would still apply to this average error, since $E[\hat{e}_{M^h_T}^{S^i_T}] = E[\hat{e}_{M^j_T}^{S^k_T}]$ $\forall$ $h,i,j,k$.
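The averaged empirical error in the remark is a direct computation; a minimal sketch with hypothetical per-fold errors:

```python
# Hypothetical per-fold test errors \hat{e}_{M^i_T}^{S^i_T} for K = 5
# independent train/test splits, as in cross-validation.
fold_errors = [0.12, 0.10, 0.14, 0.11, 0.13]
avg_error = sum(fold_errors) / len(fold_errors)
print(round(avg_error, 3))  # 0.12
```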
\begin{table*}[htbp]
\begin{center}
\begin{tabular}{|c|c|c|c|c|c|c|}
\hline
Interpretable & TM & $\delta$ & $\gamma$ & $D_R$ & Dataset ($S_T$) & Performance\\
Procedure &&&&&& Metric\\
\hline
\hline
EDC Selection & OLS & 0.925 & 0 & Identity & Medical Claims & MAPE\\
\hline
Defensive Distillation & DNN & 1.27 & 0.8 & $L_2$ attack & MNIST & Classification error\\
\hline
MMD-critic & NPC & 0.24 & 0.98 & Skewed & MNIST & Classification error\\
\hline
LIME & SLR & 0.1 & 0 & Identity & Books & Feature Recall\\
\hline
Rule Lists (size $\le 4$ ) & Human & 0.95 & 0 & Identity & Manufacturing & Yield\\
\hline
\end{tabular}
\end{center}
\caption{Above we see how our framework can be used to characterize interpretability of methods across applications.}
\label{inttbl}
\end{table*}
\section{Application to Existing Interpretable Methods}
We now look at some current methods and how they fit into our framework.
\noindent\textbf{EDC Selection:} The running healthcare example of \cite{chang2010} considers a complex model based on 264 EDC features and a simpler linear model based on 32 ACG features, where both models also include the same demographic variables. The complex model has a MAPE of 96.52\% while the linear model has a MAPE of 103.64\%. The authors in \cite{chang2010} attempt to improve the TM's performance by including several EDC features. They develop a stepwise process for generating selected EDCs based on significance testing and broad applicability to various types of expenditures (\cite{chang2010}, Additional File 1). This stepwise process can be viewed as a $(\delta, \gamma)$-interpretable procedure that provides information in the form of 19 EDC variables which, when added to the TM, improve the MAPE from 103.64\% to 95.83\%; it is thus $(0.925, 0)$-interpretable. This is significant since, given the large population sizes and high mean annual healthcare costs per individual, even small improvements in accuracy can have a high monetary impact.
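The reported $\delta$ follows directly from the two MAPE values quoted above:

```python
mape_tm = 103.64       # linear 32-feature ACG model (the TM)
mape_tm_info = 95.83   # TM augmented with the 19 selected EDC variables
delta = mape_tm_info / mape_tm
print(round(delta, 3))  # 0.925
```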
\noindent\textbf{Distillation:} DNNs are not considered interpretable. However, if you consider a DNN to be a TM, then you can view defensive distillation as a $(\delta, \gamma)$-interpretable procedure. We compute $\delta$ and $\gamma$ from results presented in \cite{carlini} on the MNIST dataset, where the authors adversarially perturb the test instances by a slight amount. We see here that defensive distillation makes the DNN slightly more robust at the expense of it being a little less accurate.
\noindent\textbf{Prototype Selection:} We implemented and ran MMD-critic \cite{l2c} on randomly sampled MNIST training sets of size 1500, where the number of prototypes was set to 200. The test sets were 5420 in size, which is the count of the least frequent digit. We had one representative test set and 10 highly skewed test sets, each containing only a single digit. The representative test set was used to estimate $\delta$ and the 10 skewed test sets were used to compute $\gamma$. The TM was a nearest prototype classifier \cite{l2c} initialized with 200 random prototypes, which it used to create the initial classifications. We see from the table that MMD-critic has a low $\delta$ and a $\gamma$ value of almost 1. This implies that it is significantly more accurate than random prototype selection while maintaining robustness. In other words, it should be strongly preferred over random selection.
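The nearest prototype classifier used as the TM here is simple to sketch; the following is a minimal illustrative version with made-up 2-D prototypes, not the actual MNIST setup:

```python
import math

def npc_predict(x, prototypes):
    """Label x with the class of its nearest prototype (Euclidean distance)."""
    best_label, _ = min(
        ((label, math.dist(x, p)) for p, label in prototypes),
        key=lambda pair: pair[1],
    )
    return best_label

# Hypothetical prototypes: one per class in a 2-D feature space.
prototypes = [((0.0, 0.0), "zero"), ((1.0, 1.0), "one")]
print(npc_predict((0.2, 0.1), prototypes))  # zero
print(npc_predict((0.9, 0.7), prototypes))  # one
```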
\noindent\textbf{LIME:} We consider the experiment in \cite{lime} where sparse logistic regression (SLR) is used to classify a review as positive or negative on the Books dataset. The main objective there is to see if their approach is superior to other approaches in terms of selecting the truly important features. There is no robustness experiment, so $D_R$ is the identity, meaning the same as $D_T$, and hence $\gamma$ is 0. Their accuracy in selecting the important features, however, is significantly better than random feature selection, which is signified by $\delta=0.1$. The other experiments can also be characterized in a similar fashion. In cases where only explanations are provided with no explicit metric, one can view the expert's confidence in the method as a metric, which good explanations will enhance.
\eat{\noindent\textbf{Interpretable MDP:} We used a constrained MDP formulation \cite{imdp} to derive a product-to-product recommendation policy for the European tour operator TUI. The goal was to generate buyer conversions and to improve a simple product-to-product policy based on static pictures of the website and what products are currently looked at. The constrained MDP results in a policy that is just as simple but greatly improves the conversion rate (averaged over 10 simulations where customer behavior followed a mixed logit customer choice model with parameters fit to TUI data). The mean normalized conversion rate increased from 0.3377 to 0.6167.}
\noindent\textbf{Rule Lists:}
We built a rule list on a semi-conductor manufacturing dataset \cite{jmlr14Amit} of size 8926. In this data, a single datapoint is
a wafer, which is a group of chips, and measurements, which correspond to thousands of input features (temperatures, pressures, gas flows, etc.), are made on this wafer throughout its production. The goal was to provide the engineer some insight into what, if anything, was plaguing his process so that he could improve performance. We built a rule list \cite{twl} of size at most 4, which we showed to the engineer. The engineer figured out that there was an issue with some gas flows, which he then fixed. This resulted in 1\% more of his wafers ending up within specification; that is, his yield increased from 80\% to 81\%. This is significant since even a small increase in yield corresponds to billions of dollars in savings.
\section{Framework Generalizability}
\label{assump}
It seems that our definition of $\delta$-interpretability requires a predefined goal/task. While (semi-)supervised settings have a well-defined target, we discuss other applicable settings.
In unsupervised settings, although we do not have an explicit target, there are quantitative measures \cite{charubook}, such as the Silhouette score or mutual information, that are used to evaluate clustering quality. Such measures, which people use to evaluate quality, would serve as the target loss that the $\delta$-interpretable procedure would aim to improve upon. The target distribution in this case would just be the instances in the dataset. If a generative process is assumed for the data, then that would dictate the target distribution.
In other settings such as reinforcement learning (RL) \cite{reinf}, the $\delta$-interpretable procedure would try to increase the expected discounted reward of the agent by directing it into more lucrative portions of the state space.\eat{ In inverse RL on the other hand, it would assist in learning a more accurate reward function based on the observed behaviors of an intelligent entity. The methodology could also be used to test interpretable models on how well they convey the causal structure \cite{pearl} to the TM by evaluating the TMs performance on counterfactuals before and after the information has been conveyed.}
There may be situations where a human TM does not want to take any explicit action based on the insight conveyed by a $\delta$-interpretable procedure. However, an implicit evaluation metric, such as human satisfaction, probably exists, and a good interpretable procedure will help to enhance it. For example, in the setting where one wants explanations for classifications of individual instances \cite{lime}, the metric being enhanced is the human's confidence in the complex model.
\section{Related Work}
There has been a great deal of interest in interpretable modeling recently, and for good reason. In almost any practical application with a human decision maker, interpretability is imperative for the human to have confidence in the model. It has also become increasingly important in deep neural networks, given their susceptibility to small perturbations that are humanly unrecognizable \cite{carlini,gan}.
There have been multiple frameworks and algorithms proposed to perform interpretable modeling. These range from building rule/decision lists \cite{decl,twl} to finding prototypes \cite{l2c} to taking inspiration from psychometrics \cite{irt} and learning models that can be consumed by humans. There are also works \cite{lime} which focus on answering instance specific user queries by locally approximating a superior performing complex model with a simpler easy to understand one which could be used to gain confidence in the complex model.
The most relevant work to our current endeavor is possibly \cite{rsi}. They provide an in-depth discussion of why interpretability is needed, and an overall taxonomy of what should be considered when talking about interpretability. Their final TM is always a human, even for functionally grounded explanations, as their TMs are proxies for human complexity. As we have seen, our definition of a TM is more general: besides a human, it could be any ML model, or even something else with superhuman cognitive skills. This generalization allows us to test our definition without the need to pin down human complexity. Moreover, we provide a formal, strict definition of $\delta$-interpretability that accounts for key concepts such as performance and robustness, and articulates how robustness is only an issue when dealing with incomplete test sets.\eat{ In addition we also propose a principled meta-interpretable strategy that works well in practice. Our meta strategy has relations to distillation and learning with privileged information \cite{priv16} with the key difference being in the mechanics of how we use information which is by weighting the instances rather than modeling it as a target. This has the advantage of not having to change from a classification to regression setting. Moreover, weighting instances has an intuitive justification where if you view the complex model as a teacher and the TM as a student, the teacher is telling the student which aspects (e.g. instances) he should focus on and which he could ignore.}
\section{Discussion}
We defined $\delta$ for a single distribution but it could be defined over multiple distributions where $\delta = \max(\delta_1,...,\delta_k)$ for $k$ distributions and analogously $\gamma$ also could be defined over multiple adversarial distributions. We did not include these complexities in the definitions so as not to lose the main point, but extensions such as these are straightforward.
Another extension could be to have a sensitivity parameter $\alpha$ to define equivalence classes, where if two models are $\delta_1$- and $\delta_2$-interpretable, then they are in the same equivalence class if $|\delta_1-\delta_2|\le \alpha$. This can help group together models that can be considered to be equivalent for the application at hand. The $\alpha$ in essence quantifies operational significance. One can have even multiple $\alpha$ as a function of the $\delta$ values.
One can also extend the notion of interpretability to the case where $\delta$ and/or $\gamma$ are 1, but the same model can be learned with fewer samples given information from the interpretable procedure.
Human subjects can store approximately 7 pieces of information in short-term memory \cite{lisman1995storage}. As such, we can explore highly interpretable models, which can be readily learned by humans, by considering TM models that make simple use of no more than 7 pieces of information. Feldman~\cite{feldman2000minimization} finds that the subjective difficulty of a concept is directly proportional to its Boolean complexity, the length of the shortest logically equivalent propositional formula. We could explore interpretable models of this type. Another model bounds the Rademacher complexity of humans \cite{zhu2009human} as a function of the complexity of the domain and the sample size. Although the bounds are loose, they follow the empirical trend seen in their experiments on words and images.
Finally, all humans may not be equal relative to a task. Having expertise in a domain may increase the level of detail consumable by that human. So the above models which try to approximate human capability may be extended to account for the additional complexity consumable by the human depending on their experience.
\section*{Acknowledgements}
We would like to thank Margareta Ackerman, Murray Campbell, Alexandra Olteanu, Marek Petrik, Irina Rish, Kush Varshney and Bowen Zhou for insightful suggestions and comments.
\section{Introduction}
Near Earth Objects (NEOs) are minor Solar System
bodies whose orbits bring them close to the Earth's
orbit. NEOs are important for both scientific investigations
and planetary defense. Scientifically, NEOs, which
have short dynamical lifetimes in near-Earth space,
act as dynamical and compositional tracers from elsewhere
in the Solar System. Studying NEOs also has the practical
application of identifying bodies that could impact
the Earth and potentially cause widespread destruction.
Critically, the number of Chelyabinsk-sized bodies
(10--20~meters) is not well-constrained due to
various assumptions made in calculating
that population.
This leads to significant uncertainty in the impact risk of
these relatively common and relatively hazardous events.
Most NEOs are discovered by a small number of
dedicated surveys, including the Catalina Sky
Survey \citep{css},
the Pan-STARRS survey \citep{panstarrs},
and the restarted NEOWISE mission \citep{neowise}, and
more than 1000~NEOs are discovered every year.
The optical surveys (CSS, PS) use 1--2~meter class
telescopes, and their limiting magnitudes are roughly
V$\sim$21. The goal of these surveys is to discover
as many NEOs as possible, so any aspect of the survey
that diminishes discovery efficiency is eliminated.
We have carried out an NEO survey using
the 3~deg$^2$ Dark Energy Camera (DECam; \citealt{decam}) with the
4-meter Blanco telescope at the Cerro Tololo
Inter-American Observatory (CTIO);
for the purposes of moving object measurements,
this combination of camera and telescope
has been assigned the observatory code W84 by
the Minor Planet Center.
Our program was allocated
10~dark nights per
``A'' semester for each of 2014, 2015, and 2016,
and
is described in detail in Allen et al.\ (in prep.).
The etendue (product of aperture size and field of
view) of this survey is a factor of 2--10~larger
than that of other ground-based optical
surveys, but our duty cycle of 10~nights per year
is quite small in comparison to the typical 200~nights
per year for dedicated surveys. The observational
niche of our new observing program is therefore not in discovering
a large number of NEOs, but rather (1) to discover
{\em faint} NEOs, through our much larger aperture, and
(2) to characterize our survey by implanting synthetic
objects in our data stream, allowing us to debias
and measure the size distribution of NEOs down to small
sizes. The large-scale dedicated surveys (CSS, PS) cannot
afford the increased processing time required to detect and
measure synthetic objects in their data streams, but we can,
because of our comparatively short
observing season.
Of course, no synthetic objects are
reported to the Minor Planet Center (MPC).
All NEO surveys are subject to observational
incompleteness that results in detecting a biased
sample. NEOs have a range of rates of motion, and
because the flyby geometries vary, optical brightness
does not necessarily correspond to NEO size.
In order to measure the underlying size distribution of NEOs,
knowledge of which NEOs are not detected is as important
as knowledge of which NEOs are detected. The best way to
measure this detection efficiency is through implanting
synthetic NEOs --- objects with the motions, PSFs,
noise properties, etc., of real NEOs --- into the data
stream, and then detecting the fake NEOs in the same
way as real NEOs are detected. Thus, the detection
efficiency of the survey
is readily measured as the number of synthetic
NEOs detected over the number implanted as a
function of magnitude (or rate of motion or
orbital properties or any other aspect).
Here we combine our detected (real) NEOs
with our measured detection efficiencies
to derive, for the first time, the debiased
size distribution of NEOs down to 10~meters
diameter as derived from a single
telescopic survey. We find a factor of ten fewer
10~meter-sized
NEOs than extrapolations from larger sizes
or normalization from terrestrial impact studies
predict. Some implications of this result
are discussed at the end of this paper.
\section{Observations and data processing \label{observations}}
A detailed description of the observing
cadence, sky coverage, filters, and exposure
time is presented in Allen et al.\ (in prep.).
Here we use only results from our ten-night observing run
in April/May, 2014.
Briefly, each survey field (3~deg$^2$) was typically
observed 5~times per night and on 3~nights.
We observed using the broad VR filter.
The data processing steps are also presented
in detail in Allen et al.
In summary, each exposure is flat fielded and astrometrically calibrated using the standard NOAO Community Pipeline (CP) for DECam \citep{valdes}. A photometric zeropoint, which leads to the reported magnitudes, is computed by matching stars to Pan-STARRS magnitudes \citep{magnier}.
However, in fields for which Pan-STARRS magnitudes are not available, we instead use the USNO-B1 photographic catalog \citep{usno} as the reference, with a transformation designed to match, on average, the more accurate CCD magnitudes.
For fields for which Pan-STARRS photometry is available
we transform the catalog g and r to V using a transformation from Lupton (2005) and then match the observed VR instrumental magnitudes to those values.
For non-Pan-STARRS fields we
use USNO-B1 photographic magnitudes transformed to r \citep{usno},
which gives us the pseudo-r magnitude.
For the purposes of this paper, we treat all magnitudes
that we reported to the MPC as~V. We estimate
that the photometric errors are typically less than~0.1~magnitudes,
but formally use 0.2~mag here to include errors introduced
by these various transformations.
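The Pan-STARRS $g,r \rightarrow V$ step can be sketched as follows; the coefficients are the commonly quoted form of the Lupton (2005) transformation (an assumption, since the exact form used is given only in Allen et al.), and the input magnitudes are hypothetical:

```python
def g_r_to_V(g, r):
    """Approximate Johnson V from g, r via the widely quoted Lupton (2005)
    relation; its residual scatter (~0.006 mag) is small compared to the
    0.2 mag error budget adopted in the text."""
    return g - 0.5784 * (g - r) - 0.0038

# Hypothetical catalog star with g = 21.00, r = 20.50:
print(round(g_r_to_V(21.00, 20.50), 3))  # 20.707
```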
Our detection limit is around
SNR$\approx$5 for objects that are not trailed
or only slightly trailed;
for trailed objects,
our detection limit corresponds to SNR$\approx$5~per pixel
at
the brightest part of the trail.
A special version of the CP incorporating the NOAO Moving Object Detection System \citep{mods}
adds the following steps. Exposures from each survey field in a night are median-combined to provide a reference image with transients removed. The median is subtracted from each contributing exposure to remove the static field. Catalogs of transient sources are created from the difference image.
Objects with
3~or more detections with similar magnitudes that make a track of a consistent rate and with shapes (elongation/P.A.) consistent with that rate are identified as
candidate moving objects.
All objects with
{\tt digest2} scores\footnote{{\tt https://bitbucket.org/mpcdev/digest2}}
greater than~40\% --- that is, where the probability of being
an NEO or other interesting object (Trojan, etc.),
based on the short orbital arc, is greater than 40\% ---
are verified through visual inspection to
eliminate false detections (chance cosmic
ray alignments, etc.).
All validated objects are submitted to the
MPC; this list includes NEOs as well
as
many valid
moving objects
that are not NEOs.
Postage stamps of several representative NEOs
observed by us are shown in Figure~\ref{stamps}.
A histogram of V magnitudes of our detected
(real) NEOs is shown in Figure~\ref{vmag}.
\section{Detection efficiency}
One hundred synthetic NEOs, i.e., fake asteroids,
are created in a square that circumscribes each
pointing; on average, around 72~objects fall within
the DECam field of view and not in gaps.
These synthetic objects are
implanted directly in each exposure.
This is many more synthetic NEOs than real NEOs in each exposure,
and allows us to probe the details of our sensitivity with
a large number of objects.
Over the entire observing run, around
365,000~synthetic asteroid detections were
possible.
The distributions of magnitude and rate of motion
for the synthetic population are not matched
to any specific underlying NEO population but
do generally approximate an observed NEO
population, while extending to much fainter
magnitudes than could be detected in this survey
(Figures~\ref{magdistrib} and~\ref{ratedistrib}).
The synthetic objects are created independently for each field with simple linear motions; there
is no linkage across fields or nights.
The detected synthetic objects are identified based on their known
implant positions. We recovered the synthetic asteroids in the same way as real asteroids, that is, satisfying the same minimum number (3) of observations per field, up to the point of
running {\tt digest2} (all synthetic objects would have
high NEO probabilities), visual inspection (since these
are known to be valid objects), and, naturally, reporting
to the MPC. Postage stamp images of representative
synthetic objects are shown in Figure~\ref{stamps}.
Two important features of the synthetic implants for the efficiency characterization are the seeing and streaking. The seeing of each exposure was used to provide a point-spread-function (PSF) and the static magnitude of the source was trailed across the image based on its rate (as shown in Figure~\ref{stamps}). These aspects affect the surface brightness, which means that the detection efficiency varies with the conditions on each night and field and the apparent rate of motion.
We note that our debiasing procedure requires
the reasonable assumption that NEO sizes
and albedos (in other words, their absolute
magnitudes) are independent of flyby geometry
(distance from the Earth, phase angle, etc.).
Debiasing must only take into account
the survey properties that bias our observed
sample: magnitude and rate of motion, but not
geometry.
The measured detection efficiency as a function
of magnitude is easily calculated as the
number of synthetic NEOs detected divided by
the number of synthetic NEOs implanted.
The detection efficiency is calculated
for each night and for each of four bins in
rate of motion (60--135, 135--210,
210--285, 285--360~arcsec/hr).
Our measured detection efficiencies as a function
of observed magnitude are shown
in Figure~\ref{deteff}.
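A minimal sketch of this bookkeeping, assuming hypothetical synthetic-object records of (magnitude, rate, detected) and 0.5 mag bins:

```python
from collections import defaultdict

RATE_BINS = [(60, 135), (135, 210), (210, 285), (285, 360)]  # arcsec/hr, as above

def efficiency(synthetics, mag_width=0.5):
    """Detection efficiency = N(detected) / N(implanted), per (rate bin, mag bin)."""
    implanted, detected = defaultdict(int), defaultdict(int)
    for mag, rate, found in synthetics:
        rbin = next((b for b in RATE_BINS if b[0] <= rate < b[1]), None)
        if rbin is None:
            continue  # outside the calibrated rate range
        key = (rbin, round(mag / mag_width) * mag_width)
        implanted[key] += 1
        detected[key] += found  # True counts as 1
    return {k: detected[k] / n for k, n in implanted.items()}

# Hypothetical records: bright implants recovered, fainter ones partly missed.
synthetics = [(22.0, 100, True), (22.0, 100, True),
              (23.5, 100, False), (23.5, 100, True)]
print(efficiency(synthetics))
```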
The overall detection efficiency for all synthetic
objects as a function of H~magnitude is
shown in Figure~\ref{synthetic}.
For real objects, there are 303~unique ``object-nights'': a given
object observed on a given night. As an example,
2014~HA$_{196}$ was discovered by us on 20140422
and re-observed by us on 20140427. This asteroid
therefore
has two ``object-nights'' and appears twice
in our list of 303~``object-nights,'' giving
us two different opportunities to debias the
NEO population with this asteroid. Because we
normalize our resulting size distribution (see below),
counting individual objects more than once does
not introduce a significant error for our result.
\section{The size distribution of NEOs}
Observations that meet the following criteria
are used in this debiasing work:
(1) the object observed must be classified as
an NEO by the MPC;
(2) the object observed has received either
a
preliminary designation (such as
2014~HD$_{196}$, one of the W84
discoveries from the 2014 observing season) or permanent
designation (for example, asteroid 88254,
for which our nine W84
observations over two nights are only a small fraction of
the more than 400~observations of this asteroid to date),
which means that the orbit is relatively well
known and therefore that both its NEO status
and H~magnitude are reasonably secure;
and
(3) the observations were made by us, i.e.,
observatory code W84.
There are a total of
1377~measurements of 235~unique objects
that meet these criteria.
97~of these objects were discovered
by our survey.
When detected NEOs and the detection
efficiency are both known, deriving the
size distribution of NEOs is straightforward.
Each $i$th NEO is detected at magnitude
$V_i$ and rate of motion $r_i$.
The detection efficiency at
that magnitude $\eta_i(V_i,r_i)$ is known (Figure~\ref{deteff}).
Each $i$th detected asteroid therefore, when
debiased, represents
$N_i = 1/ \eta_i$ asteroids,
applying the correction
for the number of NEOs of that magnitude and
rate of motion
that exist but were not detected in our
survey.
Note that each NEO is debiased
individually using the appropriate $\eta$ for
that object, using the nightly efficiencies
shown in Figure~\ref{deteff}.
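The per-object debiasing step is then just a reweighting (the detections and efficiencies below are hypothetical):

```python
# Hypothetical detections: (H_i, eta_i), where eta_i is the nightly efficiency
# evaluated at the object's V magnitude and rate of motion.
detections = [(24.0, 0.9), (26.5, 0.5), (27.3, 0.1)]

# Each detected NEO stands in for N_i = 1/eta_i objects of absolute magnitude H_i.
weights = [(H, 1.0 / eta) for H, eta in detections]
print(weights)  # the object detected at 10% efficiency represents ~10 NEOs
```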
The observed V magnitudes do not
specify the size of the asteroid.
The MPC processes
the submitted astrometry to derive
an orbit. Given that orbit and the reported
magnitude, the Solar System absolute
magnitude $H$ (the hypothetical magnitude
an object would have at 1~au from the
Sun, 1~au from the observer, and at zero
phase)
can be derived.
For each $i$th asteroid we use the
MPC-derived absolute magnitude $H_i$.
Figure~\ref{hmag} shows the histogram
of all $H$ magnitudes in our survey.
Each $i$th asteroid therefore represents
$N_i$ asteroids with absolute magnitude $H_i$.
Finally, we derive the cumulative size distribution
$N(<$$H)$
by summing all $N_i$ for a given $H\leq H_i$.
An asteroid's diameter $D$ and $H$ are related through
the albedo $p_V$ as
\begin{equation}
D = \frac{1329}{\sqrt{p_V}} 10^{-H/5}
\end{equation}
\noindent \citep{chillemi}. We know nothing about the
albedos of the NEOs we observed. Therefore,
to calculate asteroid diameters
we
adopt an albedo
of~0.2, which is the average albedo
from \citet{mainzer} for bodies
smaller than 200~meters.
We adopt this NEOWISE albedo value since their
survey is relatively insensitive to asteroid albedo,
unlike other NEO surveys;
we discuss the uncertainties introduced
by this assumption of a single average
albedo below.
There is therefore a direct translation
from the absolute magnitude distribution
to NEO size distribution.
We calculate the cumulative size distribution:
the number of objects larger than
a given diameter.
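A sketch of these two steps, Equation (1) with the adopted albedo of 0.2 and the cumulative sum over the debiased weights (the $(H_i, N_i)$ list passed to it would be hypothetical):

```python
import math

def diameter_m(H, p_V=0.2):
    """D = 1329 / sqrt(p_V) * 10^(-H/5) gives km; converted here to meters."""
    return 1000.0 * 1329.0 / math.sqrt(p_V) * 10 ** (-H / 5.0)

def cumulative(weights):
    """N(<H): running sum of debiased weights N_i, ordered by increasing H_i."""
    total, out = 0.0, []
    for H, N in sorted(weights):
        total += N
        out.append((H, total))
    return out

# With p_V = 0.2, H = 27.3 corresponds to ~10 m and H = 21 to ~190 m,
# consistent with the values quoted in the text.
print(round(diameter_m(27.3), 1), round(diameter_m(21.0)))
```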
The final step in deriving the size distribution
of NEOs is a correction for the volume searched.
This is analogous to the Malmquist
bias present in flux-limited astrophysics
surveys\footnote{This is similar in concept
to the kind of analysis described in
\citet{schmidt}, although the details
of the corrective approach differ.}:
we can detect 100~meter
NEOs at a greater distance (and therefore in
a greater volume) than
we can detect 10~meter NEOs.
The NEO diameter $D$ is given by
\begin{equation}
D~{\rm (m)} = 2\times \sqrt{ \left(\frac{f_{NEO}}{f_\odot}\right)
\left(\frac{\Delta^2}{p}\right)\left(\frac{R}{1.5\times10^{11}~{\rm m}}\right)^2}
\end{equation}
\noindent where
$f_{NEO}$ and $f_\odot$ are the fluxes from the NEO
and the Sun, respectively;
$\Delta$
and $R$
are the geocentric and heliocentric
distances of the NEO
at the time of the flux measurement, in
meters; and $p$ is the
albedo of the NEO.
For opposition surveys such as ours,
$R$ can be approximated as
$\Delta + 1.5\times 10^{11}$~meters, so to
first order
$R^2 \Delta^2$ is approximately $\Delta^4$.
For a given flux limit $f_{NEO}$ and constant
albedo $p$, diameter is therefore
proportional to $\Delta^2$.
The ratio of the (geocentric) search distances for any two
sizes $D_a$ and $D_b$ is therefore
$\sqrt{D_a/D_b}$, and the ratio of
searched volumes is $(D_a/D_b)^{1.5}$.
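Written out, the scaling argument above is (a sketch using the approximations stated in the text):

```latex
f_{NEO} \propto \frac{D^2\, p}{\Delta^2 R^2}
\;\Rightarrow\;
D \propto \Delta R \approx \Delta^2
\;\Rightarrow\;
\Delta_{\lim} \propto \sqrt{D},
\qquad
\frac{V_a}{V_b} = \left(\frac{\Delta_a}{\Delta_b}\right)^{3}
               = \left(\frac{D_a}{D_b}\right)^{3/2} .
```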
Most of our objects were detected with
$\Delta$$<$1~au, and all of them with
$\Delta$$<$1.3~au.
Our detection limit of V$\sim$23
for bodies at 1.3~au corresponds to
NEOs with sizes $\sim$200~meters.
Thus, our survey is complete for
objects larger than 200~meters ---
that is, we would have detected every NEO
larger than 200~meters that appeared within
our search cone ---
and
we need only apply the volume correction
described above for objects smaller
than 200~meters (around H=21).
Therefore,
we set
$D_a$ above to be 200~m.
We
multiply our measured and debiased size distribution
at every $D_b$ smaller than 200~m by
the factor $(200~{\rm m}/D_b)^{1.5}$ to correct
for this volume effect. This has the effect
of ``adding back'' small ($<$200~m)
NEOs that would have been in our search
cone but too faint to have been detected
(through being too distant).
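The correction applied to each debiased count can be sketched as (the 200~m completeness limit is the value derived above):

```python
def volume_correction(D_m, D_complete=200.0):
    """Factor (D_complete / D)^1.5 for diameters below the completeness limit,
    accounting for the smaller volume in which small NEOs are detectable."""
    return (D_complete / D_m) ** 1.5 if D_m < D_complete else 1.0

print(round(volume_correction(10.0), 1))  # a 10 m NEO count is scaled up ~89x
print(volume_correction(300.0))           # no correction above 200 m
```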
The choice of 1.3 au as the outermost boundary
of NEO space is well justified.
Around 50\% of our discovered NEOs have geocentric distances between 1.0~and 1.3~au,
and none beyond 1.3~au.
Our discovered
distribution of some 70 objects indicates that the detection volume is well-sampled
and extends to 1.3~au.
Finally,
our survey was relatively small, covering
$\sim$975~square degrees in 348~distinct
pointings. We therefore normalize
our derived size distribution to results
from NEOWISE and ExploreNEOs, both of which
independently derive the result that
there are around 1000~NEOs larger than
1~km \citep{mainzer,trilling}, in agreement with an earlier estimate
by \citet{werner}.
Our final result
is shown in Figure~\ref{sizedist}.
We find around $10^{6.6}$~(or $3.5\times 10^6$) NEOs larger than
H=27.3 (around
10~meters), and
$10^{6.9}$~(or $7.9\times 10^6$)~NEOs larger than H=28 (around
7~meters).
For the first time, the size
distribution of NEOs
from 1~km to 10~meters has
been derived from a single dataset.
This is significant as it bridges previous
observational results that had data only at either
the
large or small end of this range,
as discussed below.
\section{Discussion}
\subsection{Objects not included in this analysis}
We include in this analysis only objects that
have been designated and identified as
NEOs by the MPC.
This could introduce biases in two ways,
as follows.
First, we report to the MPC only objects
that have {\tt digest2} scores of $>$40\%.
We have not yet reported the many thousands
of objects that we detected that have low {\tt digest2}
scores. These are objects that are
likely main belt asteroids and have
low probabilities of being an NEO.
However, even if an object
with a low {\tt digest2} score
is an NEO,
the low probability indicates that it is
probably moving slower than typical
NEO rates, which means that it is relatively far
from the Sun and Earth. In this case, the object
would have to be relatively large to be bright
enough to be detected by us, and would therefore
have little effect on the derived size distribution of
small NEOs.
In order to further understand any potential
bias introduced by using {\tt digest2} we carried
out the following experiment.
We chose ten random recently discovered objects from the
MPC's list of NEOs, and for each
ran only the first five measurements from the
first night of observations through {\tt digest2}.
For these ten objects, the lowest {\tt digest2}
score was~78, and seven of the objects had
{\tt digest2} NEO scores of~100 (i.e., 100\% probability
of being NEOs).
We also chose five measurements from the discovery
nights for ten random Hungaria asteroids and ten
random Mars crossing asteroids.
The Hungarias have {\tt digest2} scores in the
range~17--66, and the Mars crossers have scores~14--91.
From this experiment we conclude that
non-NEOs can sometimes be misclassified as NEOs
on the basis of their {\tt digest2} scores (given
our $\geq$40\% threshold), but
no NEOs are ever misclassified as non-NEOs.
We therefore do not miss any NEOs through this
{\tt digest2} filtering.
Since the only objects
used in this analysis are those with
preliminary designations --- those with orbits
that are conclusively NEOs --- our approach is
unlikely to have either false negatives (missing
objects that should be included) or
false positives (including objects that should
not be).
Second,
objects that we observed but that never were
designated are probably the faintest objects,
because neither we nor any other facilities were
able to recover them.
Figure~\ref{HV} shows that
there is no correlation between H and V for V$>$20
(a conservative limit for recovery facilities,
which generally use telescopes in the
1--2~meter class, though some smaller telescopes
also contribute)
and H$>$20
(in other words, smaller than a few hundred
meters diameter).
This lack of relationship between H and V
is because there is no dependence of orbital elements
on asteroid size.
Therefore, there is no bias introduced in our
small NEO
size distribution
calculation even though
targets with the faintest (apparent) magnitudes
may
have been
preferentially omitted through a recovery
bias. Observed magnitude (and therefore
recovery probability) is essentially independent
of size for NEOs smaller than around 300~meters.
Finally, there is a chance that some of the fastest
moving objects were missed by our automated detection
algorithms. Using this set of synthetic objects,
we cannot calibrate our detection efficiency
for (real) objects moving faster than 360~arcsec/hr because
no synthetic objects moving faster than this rate were
implanted in the data stream (Figure~\ref{ratedistrib}).
For objects moving faster than this rate,
we extrapolate our derived efficiencies
(as a function of magnitude, and for the
appropriate night)
for the fast-moving object in question, and
use these extrapolated efficiencies in our
cumulative size distribution calculation.
The effect of this extrapolation is small.
Figure~\ref{rateofmotion} shows, for our real
objects, the measured rate of motion and the
V and H magnitudes of those detected objects.
We find that some 6.5\% of real objects have
rates faster than 360~arcsec/hour; for
H$>$25 (bodies smaller than 30~meters), the fraction is around 10\%.
So, while the extrapolated efficiencies are only
an approximation,
the relatively small uncertainty introduced affects only a small
number of objects.
In conclusion, omitting some objects from this analysis
for the reasons explained above likely does not introduce
a significant error to our derived NEO size distribution,
though we may underestimate the number of
small NEOs by some 10\%.
We therefore conclude that the sample we consider
here can be used to derive the small NEO size distribution
without any large bias from reporting or
recoveries.
\subsection{Uncertainties}
Several uncertainties remain in the size distribution
estimate shown in Figure~\ref{sizedist}. One is that the
photometry of our survey has uncertainties on the order
of 0.1~mag, though, as described in Section~\ref{observations},
we formally assume photometric uncertainties of 0.2~mag.
These V~magnitude uncertainties transfer directly
to H~magnitude uncertainties\footnote{
It is important to note
that in this study we use only designated NEOs, for
which orbital information is known; if we had used
objects without good orbits then the calculation of
H~magnitude from V~magnitude would have additional uncertainties.}.
Some objects may have larger uncertainties on
H~magnitudes if the MPC magnitudes are driven
by measurements from uncalibrated surveys, so
we assign a global uncertainty of 0.3~mag
on all H~magnitudes.
(This is likely an overestimate of the
uncertainties, though, since in most cases
a significant fraction of the reported photometry
comes from our W84 measurements, which have a
relatively small uncertainty of around~0.1~mag.)
Another source of uncertainty concerns photometry of trailed sources.
In our survey --- as in most other NEO surveys ---
isophotal magnitudes are reported. For trailed
objects, this typically underreports the brightness.
In our survey this affects both the real and
implanted objects, so although we are internally
consistent in terms of our debiasing, all fast-moving
objects may actually be somewhat (perhaps a few tenths
of a magnitude) brighter than reported. This error
bar is on the order of the largest error described
above.
Consequently, the derived size
distribution shown in Figure~\ref{sizedist} has an
uncertainty in the horizontal direction on the order
of 0.3~mag. In other words, our result should be written as
$10^{6.6}$~objects
larger than $H=27.3\pm0.3$, corresponding to
diameters of $10^{+2}_{-1}$~meters.
We have assumed that the average albedo in the NEO
population is~0.2.
This is the
average albedo
for objects smaller than 200~m
as derived from NEOWISE observations
\citep{mainzer}.
Although the NEOWISE survey is relatively
insensitive to albedo, there is still a small
bias against high albedo objects.
Furthermore, the mean albedo for
very small NEOs could be different from~0.2; if
these smallest bodies have fresh surfaces (either through
surface resetting due to planetary encounters, or through
being collisionally young objects) then the mean
albedo could be higher. Conservatively, we recalculate
the above steps using mean albedos of~0.4 and, for completeness,~0.1.
The result is that $H=27.3$ corresponds to diameters
of 7--14~meters.
This uncertainty dominates that from the previous
paragraph.
We
conservatively express our final
result as $10^{6.6}$~NEOs larger than
$10\pm4$~meters.
The uncertainties in our measured detection
efficiencies are small because we have many
thousands of objects for each night.
However,
there is uncertainty in our cumulative
number of debiased NEOs.
There are 257~NEOs with $H\leq28$ in our
debiased sample. Formally, Poissonian statistics
implies a fidelity of around 6\%
on this measurement. Conservatively, we assign
10\% uncertainties, which allows for other uncertainties
that may be present in our result. This
10\% uncertainty now also includes the possible
underestimate discussed at the end
of \S5.1.
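The Poissonian figure quoted above follows directly from the debiased sample size:

```python
import math

N = 257  # debiased NEOs with H <= 28
frac = math.sqrt(N) / N  # fractional Poisson uncertainty, i.e., 1/sqrt(N)
print(round(100 * frac, 1))  # ~6.2%, conservatively inflated to 10% in the text
```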
Our final result from this analysis is therefore
that there are
$10^{6.6}$$\pm$10\%~NEOs larger than
$10\pm4$~meters. We note that in the
context of our survey this result is still preliminary,
as our 20~telescope nights from 2015 and 2016
will serve as independent measurements of
the size distribution and allow us to refine our
uncertainties, particularly in size regimes
where we presently have a relatively
small number of detections.
\subsection{Comparison to previous estimates}
There have been a number of previous estimates
of the NEO size distribution, using a variety of
techniques. We briefly describe previous
work here, and plot their independently
derived size distributions in Figure~\ref{sizedist}.
All previous work agrees that there are
approximately 1000~NEOs larger than 1~km,
so we have normalized all the data described
in this section to have 1000~NEOs at H=17.5
(after \citealt{trilling}).
\citet{rabinowitz1993} made an
early estimate that was updated in
\citet{rabinowitz}, in both cases
using data from NEO surveys in a study
that is roughly analogous to ours, although
without implanting synthetic
objects into the data stream.
\citet{rabinowitz} present results down
to H=30, but there are very few objects
with H$>$24
included in their solution.
For H$<$24 our results agree very well with those
of \citet{rabinowitz}.
More recently, \citet{schunova}
have estimated the size distribution
based on Pan-STARRS1 data as debiased
through the (theoretical) population
model of \citet{greenstreet}. Their
population estimate is somewhat greater
than ours, though they do show a
change to a shallower slope (and therefore a
relative deficit of NEOs compared
to \citealt{harrisdabramo} for
H$>$24).
Both the Spitzer/ExploreNEOs and NEOWISE
teams have made independent measurements of
the size distribution of NEOs as a result of
their (independent) thermal infrared surveys.
NEOWISE results suggest
20,500$\pm$3000~NEOs larger than 100~m \citep{mainzer};
we find here around 18,000~NEOs larger than
100~m, in good agreement with the earlier result.
The nominal ExploreNEOs
result also suggests around 20,000~NEOs
larger than 100~m, with an acceptable
range of 5000--100,000 \citep{trilling};
we are again in close agreement here with
the previous work.
\citet{harris2015},
\cite{boslough}, and \citet{harrisdabramo}
use
re-detection simulations
of ongoing ground-based surveys to estimate
completeness and therefore the
underlying NEO size distribution.
However, they have
no simulated observations (re-detections)
for objects with $H>25$, and the
completion rate is small for
$H>23$ \citep{boslough}.
Their estimate of the number
of 10~meter NEOs
is therefore
essentially an extrapolation
from larger
sizes. Our result agrees moderately
well with these re-detection results
for $H<23$, but the two results diverge
for $H>23$, where their number of
re-detections is small. This implies
that their extrapolation from larger sizes
may not be appropriate.
\citet{tricarico} used a similar
redetection approach, combining
two decades' worth of data from nine different
survey programs to estimate the underlying
NEO population. This result is seen
in Figure~\ref{sizedist} to be in extremely
close agreement with our result, especially
at the smallest sizes (H$>$26).
\citet{werner} analyzed the lunar crater
population to determine the size distribution
of the lunar impactor population, i.e.,
NEOs, as averaged over the past few billion
years.
They estimate the NEO population down to
10~meters, though with two significant
caveats: (1) the average impact probability
per asteroid is derived from the dynamical
calculations for the $\sim$1000~largest
NEOs and applied to NEOs of all sizes, and
(2)
there is some uncertainty whether
the current NEO population is the same as the
historically averaged NEO population.
Our result agrees closely with \citet{werner}
for $H<26$. We do not reproduce their sharp
rise for
$H>26$.
Finally,
\citet{brownetal} analyzed impact data,
including both the Chelyabinsk impact and
various other data from the past several decades, to
deduce the size distribution
of Earth impactors (that is, very small NEOs).
This impactor analysis is presented
in more detail in \citet{boslough}.
This analysis produces a size distribution of
impactors covering the approximate range 1--20~meters.
To convert from impactors
to the entire NEO population, a
scaling factor that has its origins in the calculated
annualized impact rate of the 1000~largest
NEOs is used (as \citealt{werner} did).
The factor used to date places the impactor
size distribution in agreement with the \citet{harris2015}
and \citet{harrisdabramo}
NEO size distribution extrapolation in that size range.
Our measured slope from 1--20~meters is
identical to that from \citet{brownetal}
and \citet{boslough},
but there is an offset in the absolute number:
we find $10^{6.9}$~NEOs with
$H\leq28$ compared to their
$10^{8}$.
Their result, using their scaling
from the largest NEOs,
is shown as the thin cyan
line in Figure~\ref{sizedist}.
The thick cyan line in Figure~\ref{sizedist}
shows their impactor data normalized
to our measured size distribution at H=26 (around
20~meters).
The slopes of their impactor size distribution
and our measured NEO size distribution agree
extraordinarily well.
\subsection{Implications of our result}
This is the first time that a
single observational data set has been
used to measure the size distribution
from 1~km down to 10~meters. Previous
direct measurements
had data either in the large ($>$300~m) regime
or, indirectly, in the
small ($<$20~m) regime.
Very broadly, our result is in agreement
with most of the previous work, but there
are several aspects that warrant further
exploration.
The number of $\sim$10~meter-sized
NEOs is of keen interest because these
objects impact the Earth relatively frequently
and can cause severe damage (as happened
in Chelyabinsk, Russia, in 2013).
At this size range
our results appear to disagree with those
of \citet{rabinowitz} and \citet{harrisdabramo},
but both of these have very few data points
for H$>$23.
For H$<$25, our result agrees very
well with the \citet{werner} result, but
they report a strong upturn at H=26 that is
not seen in \citet{tricarico}, and \citet{schunova}
show a relative downturn at that same size.
The \citet{brownetal} and \citet{boslough}
results for impactors in the size range 1--20~m
are significantly higher than our derived
result.
\citet{werner}, \citet{brownetal}, and \citet{boslough}
all have in common
the assumption that the impact rate of the
smallest NEOs is the same as that of
the largest NEOs. If instead the impact
rate of the smallest NEOs is an order of
magnitude greater than that of the biggest
NEOs, then the number of small NEOs implied
would be reduced by that same
factor. The impact rate of small NEOs
could be larger than that of their large counterparts
if the orbit distributions of those two
populations differ, for
example if
there exist bands of collisional debris
or meteoroids in orbits similar to that
of the Earth that the Earth spends
significant time transiting,
similar to meteor streams but with a different origin
(A.\ Harris [DLR], pers.\ comm.).
This could result, for example, from the fragmentation
of medium-small NEOs into swarms of smaller
boulders that pass near the Earth, such as
is implied by results reported in
\citet{mommertBD,mommertMD}.
A very recent result \citep{spurny} suggests
that one such band in which there is
a relative enhancement of $\sim$10~meter-sized
NEOs
indeed may exist, and
\citet{jeongahn} find that the lunar
cratering rate is higher for 1--10~meter NEOs
than for kilometer size objects.
The sharp upturn in the
\citet{werner} distribution at H=26
(Figure~\ref{sizedist}) might
reflect an increasing impact probability at
that size.
We emphasize that the slope of the
\citet{brownetal} and \citet{boslough}
results for bodies 1--20~m matches
our derived slope very well, implying that
the discrepancy arises not from measurements
of the size distribution but the normalization
assumption used in their work.
We note that our data point at H=21 appears
depressed compared to the adjacent points and
the overall implied continuum slope. This
dip has been seen in other work as well
\citep{werner,harrisdabramo,schunova}.
\citet{harrisdabramo} offer two plausible
explanations. The first is that this size
(around 100--200~meters) corresponds to
the transition from weak rubble pile
asteroids at larger sizes to stronger
monolithic bodies at small sizes, and that
the relative deficit of bodies at this size
indicates the maximally disrupted asteroid size.
Their second proposed explanation is that
if there is a shift in average albedo at this
size (perhaps due to collisions among smaller
but not larger bodies) then the conversion
between H~magnitude and diameter would naturally
produce an apparent dip.
\section{Conclusions and future work}
We are carrying out a 30~night survey to
detect NEOs with the Dark Energy Camera
and the 4-meter Blanco telescope at CTIO.
In year~1 we made 1377~measurements of
235~unique NEOs. Through implanting synthetic
objects in our data stream and measuring the
detection efficiency of our survey as a function
of magnitude and rate of motion, we have
debiased our survey.
We find that there are
around $10^{6.6}$~(or $3.5\times 10^6$) NEOs larger than
H=27.3 (around
10~meters), and
$10^{6.9}$~NEOs larger than H=28 (around
7~meters).
This population estimate is around a
factor of ten less than has been previously
estimated, though in close agreement with one
recent measurement and
somewhat in agreement with another.
Our derived NEO size distribution
--- the first to cover the entire range from
1~km to 10~meters based on a single
observational data set ---
matches basically all observed
data for sizes larger than 100~meters.
In the size range 1--20~m, our measured
slope matches the bolide impactor slope
quite closely, and implies that the impact
probability for any given small NEO is greater
than that for a large NEO by a factor
of ten or more.
We have data from 10~survey nights in each of 2015 and 2016
that are not
analyzed here, though all observations
have already been reported
to the MPC. These more recent data will
be used to independently measure the size distribution
and refine the error bars on the estimate presented here.
We will also extend our analysis to W84
objects that were detected on only one or two
nights and are therefore not designated by the MPC.
This overall experiment is in some ways a pathfinder for
the upcoming Large Synoptic Survey Telescope (LSST),
which has NEO observations as one of its
primary science drivers. LSST will have a much bigger
aperture than the Blanco (8.4~meters compared
to 4~meters), and cover far more sky than we have
here (20,000~deg$^2$ compared to our $\sim$975~deg$^2$), making it the most
comprehensive NEO survey ever carried out. When software
tools capable of implanting and detecting synthetic
objects are in place, a very high fidelity
measurement of the NEO size distribution to sizes
as small as 1~meter will be possible.
\acknowledgments
We thank Peter Brown and Alan Harris (DLR) for many useful conversations
and Steve Chesley and an anonymous AAS statistics
reviewer for useful comments that improved this paper.
We thank the NOAO TAC and Director
Dave Silva for granting Survey status for this program.
We also thank Dave Silva and NOAO for acquiring the VR filter that we
use in our survey.
We gratefully acknowledge the hard work and
help that Tim Spahr and Gareth Williams of the
Minor Planet Center have provided and continue
to provide in support of
our DECam NEO survey.
DET carried out some of the work on this paper
while being hosted at Lowell Observatory.
This work was supported
in part by NASA award NNX12AG13G.
This work is based
on observations at Cerro Tololo Inter-American Observatory, National Optical Astronomy Observatory (NOAO Prop.\ 2013B-0536; PI: L.\ Allen), which is operated by the Association of Universities for Research in Astronomy (AURA) under a cooperative agreement with the National Science Foundation.
This project used data obtained with the Dark Energy Camera (DECam), which was constructed by the Dark Energy Survey (DES) collaboration.
Funding for the DES Projects has been provided by
the U.S. Department of Energy,
the U.S. National Science Foundation,
the Ministry of Science and Education of Spain,
the Science and Technology Facilities Council of the United Kingdom,
the Higher Education Funding Council for England,
the National Center for Supercomputing Applications at the University of Illinois at Urbana-Champaign,
the Kavli Institute of Cosmological Physics at the University of Chicago,
the Center for Cosmology and Astro-Particle Physics at the Ohio State University,
the Mitchell Institute for Fundamental Physics and Astronomy at Texas A\&M University,
Financiadora de Estudos e Projetos, Funda{\c c}{\~a}o Carlos Chagas Filho de Amparo {\`a} Pesquisa do Estado do Rio de Janeiro,
Conselho Nacional de Desenvolvimento Cient{\'i}fico e Tecnol{\'o}gico and the Minist{\'e}rio da Ci{\^e}ncia, Tecnologia e Inovac{\~a}o,
the Deutsche Forschungsgemeinschaft,
and the Collaborating Institutions in the Dark Energy Survey.
The Collaborating Institutions are
Argonne National Laboratory,
the University of California at Santa Cruz,
the University of Cambridge,
Centro de Investigaciones En{\'e}rgeticas, Medioambientales y Tecnol{\'o}gicas-Madrid,
the University of Chicago,
University College London,
the DES-Brazil Consortium,
the University of Edinburgh,
the Eidgen{\"o}ssische Technische Hoch\-schule (ETH) Z{\"u}rich,
Fermi National Accelerator Laboratory,
the University of Illinois at Urbana-Champaign,
the Institut de Ci{\`e}ncies de l'Espai (IEEC/CSIC),
the Institut de F{\'i}sica d'Altes Energies,
Lawrence Berkeley National Laboratory,
the Ludwig-Maximilians Universit{\"a}t M{\"u}nchen and the associated Excellence Cluster Universe,
the University of Michigan,
{the} National Optical Astronomy Observatory,
the University of Nottingham,
the Ohio State University,
the University of Pennsylvania,
the University of Portsmouth,
SLAC National Accelerator Laboratory,
Stanford University,
the University of Sussex,
and Texas A\&M University.
\vspace{5mm}
\facility{Blanco(DECam)}
\software{NOAO DECam Community Pipeline (CP) \citep{valdes} with Moving Object Detection System (MODS) \citep{mods}}
\section{Introduction}\label{sec:Intro}
Magnetohydrodynamics (MHD) is a hydrodynamic theory of long-range excitations in plasmas (ionised gases) (see e.g. \cite{bellan2008fundamentals,freidberg2014,goedbloed2004principles,goedbloed2010advanced}), which has been applied to systems ranging from the physics of fusion reactors to astrophysical objects. In the modern language of hydrodynamics formulated as an effective field theory \cite{Dubovsky:2011sj, Endlich:2012vt, Grozdanov:2013dba,Nicolis:2013lma,Kovtun:2014hpa,Harder:2015nxa,Grozdanov:2015nea,Crossley:2015evo,Glorioso:2017fpd,Haehl:2015foa,Haehl:2015uoc,Torrieri:2016dko,Glorioso:2016gsa,Gao:2017bqf,Jensen:2017kzi}, MHD should describe the dynamics of infrared (IR) charge-neutral states in terms of massless effective degrees of freedom. These plasma ground states are characterised by an equation of state with a finite magnetic field. The electric field, on the other hand, is suppressed due to the screening of electromagnetic interactions and is only induced on length scales shorter than the (thermodynamic) size of the system. In their standard form, the equations of motion that describe the evolution of plasmas are formulated as a combination of macroscopic fluid equations (the continuity equation and the non-dissipative Euler or dissipative Navier-Stokes equations), coupled to the microscopic Maxwell equations of electromagnetism. In ideal, non-dissipative form, the set of dynamical equations is
\begin{align}
\partial_t \rho + \vec \nabla \cdot \left(\rho\, \vec v\right) &= 0 \, , \label{MHD1} \\
\rho \left( \partial_t + \vec v \cdot \nabla \right) \vec v &= - \vec \nabla \,p + \vec J \times \vec B \, , \label{MHD2} \\
\partial_t \vec B &= \vec\nabla \times \left(\vec v \times \vec B \right) , \label{MHD3} \\
\left( \partial_t + \vec v \cdot \nabla \right) \left( \frac{p}{\rho^\gamma} \right) &= 0 \, . \label{MHD4}
\end{align}
The magnetic field is constrained by
\begin{align}
\vec \nabla \cdot \vec B &= 0\, . \label{MHD5}
\end{align}
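Note that this constraint is consistent with the dynamics: taking the divergence of Eq. \eqref{MHD3} and using $\vec \nabla \cdot \left( \vec \nabla \times \vec X \right) = 0$ for any vector field $\vec X$ gives
\begin{align}
\partial_t \left( \vec \nabla \cdot \vec B \right) = \vec \nabla \cdot \left[ \vec \nabla \times \left( \vec v \times \vec B \right) \right] = 0 \, ,
\end{align}
so an initially divergence-free magnetic field remains divergence-free under the ideal evolution.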
Eq. \eqref{MHD1} is the continuity equation and Eq. \eqref{MHD2} the Euler equation in the presence of the Lorentz force $\vec J \times \vec B$, with $\vec J$ given by the low-frequency limit of Amp\`ere's law ($\partial_t \vec E \to 0$)
\begin{align}
\vec J = \frac{1}{\mu_0} \vec \nabla \times \vec B \, .
\end{align}
Eq. \eqref{MHD3} is Faraday's law of induction, with the electric field fixed by the assumption of the ideal Ohm's law
\begin{align}\label{IdealOhm}
\vec E + \vec v \times \vec B = 0 \, ,
\end{align}
which is derived by taking the conductivity in the (Lorentz transformed) Ohm's law $ \vec J / \sigma = \vec E + \vec v \times \vec B $ to infinity, i.e. $\sigma \to \infty$. The constraint equation \eqref{MHD5} is the magnetic Gauss's law. Since the ideal Ohm's law completely fixes $\vec E$, the electric Gauss's law plays no role in the equations of MHD. Eq. \eqref{MHD4} is the adiabatic equation of state relating density and pressure. Usually, one takes $\gamma = 5/3$. Altogether, Eqs. \eqref{MHD1}--\eqref{MHD4} give eight dynamical equations for eight unknown functions $\rho$, $p$, $\vec v$ and $\vec B$, subject to the magnetic field constraint \eqref{MHD5}.
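Explicitly, Eq. \eqref{MHD3} follows in one step from Faraday's law $\partial_t \vec B = - \vec \nabla \times \vec E$ once the ideal Ohm's law \eqref{IdealOhm} is used to eliminate the electric field:
\begin{align}
\partial_t \vec B = - \vec \nabla \times \vec E = \vec \nabla \times \left( \vec v \times \vec B \right) .
\end{align}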
While the above equations are closed, solvable and have been successfully applied to a variety of phenomena in plasma physics, they are only applicable within the specific assumptions used to construct them. This means that they are only valid for electromagnetism controlled by Maxwell's equations in the limit of ideal Ohm's law (no possibility of strong-field pair production, etc.) and for the specific equation of state in Eq. \eqref{MHD4}. This equation of state encodes a separation between the fluid and the charge-carrying sectors; its justification, beyond the assumption of weakly coupled Maxwell electromagnetism, also requires very weak interactions between the fluid degrees of freedom and electromagnetism inside the plasma. Concretely, the latter assumption is reflected in the fact that the equation of state permits no dependence on the magnetic properties controlled by the charged sector. Furthermore, because of the lack of a symmetry principle behind the construction of ideal MHD, these equations are difficult to extend unambiguously to the most general, higher-order, dissipative theory in the gradient expansion (the Knudsen number expansion) \cite{Baier:2007ix,Bhattacharyya:2008jc,Romatschke:2009kr,Grozdanov:2015kqa}.\footnote{We note that in standard MHD, as formulated in Eqs. \eqref{MHD1}--\eqref{MHD4}, only the fluid sector has a well-defined and finite Knudsen number.} As such, the traditional formulation of MHD lacks generality and cannot be compatible with a variety of IR effective theories of plasmas that could (in principle) be derived from quantum field theory, in particular, in the presence of a strong magnetic field.
These issues were addressed in a recent work \cite{Grozdanov:2016tdf}, where MHD was formulated by following the effective field theory philosophy behind the construction of relativistic hydrodynamics (see e.g. \cite{Kovtun:2012rj,Grozdanov:2015kqa}). Namely, MHD was formulated by only considering global conserved operators and writing them in terms of the most general hydrodynamic gradient expansion of the IR hydrodynamic fields \cite{Grozdanov:2016tdf}.\footnote{See also \cite{Schubring:2014iwa} and Ref. \cite{Hernandez:2017mch}, which includes a valuable comparison of various related past works, such as \cite{Huang:2011dc,Critelli:2014kra,Finazzo:2016mhm}. For a new treatment of charged fluids in an external electromagnetic field, see \cite{Kovtun:2016lfw,Hernandez:2017mch}. Of further interest is also a recently proposed field theory description of polarised fluids \cite{Montenegro:2017rbu}.} With such an expansion in hand, conservation equations then completely determine the temporal dynamics of a plasma with any equation of state. As in hydrodynamics, all of the details of the equation of state and transport coefficients are left to be determined by the microscopics of the underlying theory.
The two relevant global symmetries describing the long-range dynamics of a plasma were argued to give the stress-energy tensor $T^{\mu\nu}$ and a conserved anti-symmetric two-form current $J^{\mu\nu}$ \cite{Grozdanov:2016tdf}:
\begin{align}
\nabla_\mu T^{\mu\nu} &= H^{\nu}_{\;\;\mu\sigma} J^{\mu\sigma} \,, \label{EOM1}\\
\nabla_\mu J^{\mu\nu} & =0 \,. \label{EOM2}
\end{align}
While $T^{\mu\nu}$ corresponds to conserved energy-momentum, $J^{\mu\nu}$ is the manifestation of a generalised global one-form $U(1)$ symmetry, which can be sourced (and gauged) by a two-form gauge field $b_{\mu\nu}$ \cite{Gaiotto:2014kfa}. $H^\nu_{~\mu\sigma}$ is a three-form field strength that can be turned on by an external two-form gauge field, $H = d b_{ext}$. This generalised global symmetry is a consequence of the absence of magnetic monopoles and directly corresponds to the conserved number of magnetic flux lines crossing a co-dimension two surface (in a four dimensional plasma). Normally, it is expressed in terms of the (topological) Bianchi identity
\begin{align}\label{BianchiId}
d F = 0 \, ,
\end{align}
where $F = d A$ and $A$ is the abelian electromagnetic field.\footnote{Note that in a theory with only electromagnetic fields, the number of electric flux lines crossing a two-surface is also conserved in four dimensions. For this reason, in the absence of matter, the theory of electrodynamics has two one-form $U(1)$ symmetries. In terms of the photon field, the statement of the conservation of the electric one-form symmetry is analogous to its equation of motion: $\star\,d\star F = 0$.} In the language of a two-form current used in Eq. \eqref{EOM2},
\begin{align}\label{TwoFormJ}
J^{\mu\nu} = \frac{1}{2} \epsilon^{\mu\nu\rho\sigma} F_{\rho\sigma} \, .
\end{align}
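In components (with the convention $\epsilon^{0123} = 1$; overall signs depend on conventions), $J^{0i} = B^i$, and the flat-space conservation equation \eqref{EOM2} splits into the magnetic Gauss constraint and Faraday's law:
\begin{align}
\partial_\mu J^{\mu 0} = 0 \;\; \Leftrightarrow \;\; \vec \nabla \cdot \vec B = 0 \, , \qquad
\partial_\mu J^{\mu i} = 0 \;\; \Leftrightarrow \;\; \partial_t \vec B + \vec \nabla \times \vec E = 0 \, .
\end{align}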
The power of identifying Eq. \eqref{BianchiId} as the conservation equation of a global symmetry becomes apparent when one attempts to describe a phase of matter dominated by electromagnetic interactions, but without massless photons, i.e. the particles associated with $A$. In fact, this is precisely the situation in a plasma in which long-range electric forces are (Debye) screened and the photons are massive. Treating $J^{\mu\nu}$ as a globally conserved operator, without invoking a massless gauge field $A$, then allows one to directly organise the infrared dynamics of such states \cite{Grozdanov:2016tdf}. We note that in the language of generalised global symmetries, photons only become massless particles when the (one-form) symmetry is spontaneously broken---i.e. photons are the Goldstone bosons present in the broken symmetry phase.\footnote{We note that the order parameter that distinguishes between a broken and an unbroken magnetic one-form symmetry is the expectation value of the 't Hooft loop operator. When the symmetry is preserved, the expectation value of the loop operator obeys the area law, $\langle W_C \rangle \sim \exp \left\{-T \, \text{Area}[C] \right\}$. On the other hand, in the symmetry broken phase with massless photons, the expectation value obeys the perimeter law, $\langle W_C \rangle \sim \exp \left\{-T \, \text{Perimeter}[C] \right\}$ \cite{Gaiotto:2014kfa}.} From this point of view, the Maxwell action is the effective Goldstone boson action that realises this symmetry non-linearly.
The equations of motion \eqref{EOM1} and \eqref{EOM2} give seven dynamical equations (and one constraint). To solve them, we introduce the following hydrodynamical fields: a velocity field $u^\mu$, a temperature field $T$, a chemical potential $\mu$ that corresponds to the density of magnetic flux lines and a vector $h^\mu$, which can be thought of as a hydrodynamical realisation of a fluctuating magnetic field. The vectors are normalised as $u_\mu u^\mu = -1$, $h_\mu h^\mu = 1$, $u_\mu h^\mu = 0$, together resulting in $10 - 3 = 7$ degrees of freedom. A (directed) velocity flow of the plasma breaks the Lorentz symmetry from $SO(3,1)$ to $SO(3)$, which is further broken by the additional vector (magnetic field) to $SO(2)$.\footnote{Note that at zero temperature, in a plasma with a non-fluctuating temperature field, the symmetry is enhanced to $SO(1,1) \times SO(2)$ \cite{Grozdanov:2016tdf}.} The projector transverse to both $u^\mu$ and $h^\mu$ is defined as $\Delta^{\mu\nu} = g^{\mu\nu} + u^\mu u^\nu - h^\mu h^\nu$ and has a trace $\Delta^\mu_{~\mu} = 2$.
The constitutive relations for the conserved tensors with a positive local entropy production \cite{Glorioso:2016gsa} and charge conjugation symmetry can now be expanded to first order in derivatives as \cite{Grozdanov:2016tdf}
\begin{align}
T^{\mu\nu} & = (\varepsilon + p)\, u^{\mu}u^{\nu} + p \, g^{\mu\nu} - \mu\rho\, h^{\mu}h^{\nu} + \delta f \, \Delta^{\mu\nu} + \delta \tau \, h^{\mu}h^{\nu} + 2 \, \ell^{(\mu}h^{\nu)} + t^{\mu\nu} \, ,\label{MHDstress-energy} \\
J^{\mu\nu} & = 2\rho \, u^{[\mu}h^{\nu]} + 2m^{[\mu}h^{\nu]} + s^{\mu\nu} \, \label{MHDcurrent},
\end{align}
where
\begin{align}
\delta f & = -\zeta_{\perp} \Delta^{\mu\nu}\nabla_{\mu}u_{\nu} -\zeta_{\times}^{(1)} h^{\mu}h^\nu \nabla_\mu u_\nu\, ,\label{visc1} \\
\delta \tau & = -\zeta_\times^{(2)} \Delta^{\mu\nu}\nabla_\mu u_\nu - \zeta_{\parallel} h^{\mu}h^{\nu} \nabla_{\mu} u_{\nu} \, ,\\
\ell^{\mu} & = -2\eta_{\parallel}\Delta^{\mu{\sigma}}h^{\nu} \nabla_{({\sigma}}u_{\nu)} \,,\label{visc4} \\
t^{\mu\nu} & = -2\eta_{\perp}\left(\Delta^{\mu\rho}\Delta^{\nu{\sigma}}- \frac{1}{2} \Delta^{\mu\nu}\Delta^{\rho{\sigma}}\right)\nabla_{(\rho}u_{{\sigma})} \, ,\\
m^{\mu} & = -2 r_{\perp} \Delta^{\mu\beta}h^{\nu} \left( T \nabla_{[\beta}\left(\frac{h_{\nu]} \mu}{T}\right) + u_\sigma H^{\sigma}_{\;\;\beta\nu}\right)\, ,\label{mdef}\\
s^{\mu\nu} & = -2 r_{\parallel} \Delta^{\mu\rho} \Delta^{\nu\sigma} \left( \mu\nabla_{[\rho} h_{\sigma]} + u_\lambda H^\lambda_{\;\;\rho\sigma} \right) \, . \label{sdef}
\end{align}
The frame choice which leads to this particular form of constitutive relations was specified in Ref. \cite{Grozdanov:2016tdf}. The thermodynamic relations between $\varepsilon$, $p$ and $\rho$, which need to be obeyed by the equation of state $p(T,\mu)$ are
\begin{align}
\varepsilon + p &= s T + \mu \rho \, , \label{ThermoRel1} \\
d p &= s \, dT+ \rho \, d\mu \, .
\end{align}
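These are the analogues of the Euler and Gibbs-Duhem relations; combined, they fix the first law for the energy density,
\begin{align}
d\varepsilon = d\left( sT + \mu\rho - p \right) = T \, ds + \mu \, d\rho \, ,
\end{align}
so that $s$ and $\rho$ are the thermodynamic conjugates of $T$ and $\mu$, respectively.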
Furthermore, for the theory to be invariant under time-reversal, the Onsager relation implies that $\zeta_\times^{(1)} = \zeta_\times^{(2)}\equiv\zeta_\times $. Thus, first-order dissipative corrections to ideal MHD are controlled by seven transport coefficients: $\eta_\perp$, $\eta_\parallel$, $\zeta_\perp$, $\zeta_\parallel$, $\zeta_\times$, $r_\perp$ and $r_\parallel$. Each one can be computed from a set of Kubo formulae presented in \cite{Grozdanov:2016tdf,Hernandez:2017mch} and reviewed in Appendix \ref{appendix:kubo}. The transport coefficients should obey the following positive entropy production constraints: $\eta_\perp \geq 0 $, $\eta_\parallel \geq 0$, $r_\perp \geq 0 $, $r_\parallel \geq 0$, $\zeta_\perp \geq 0 $ and $\zeta_\perp \zeta_\parallel \geq \zeta_\times^2 $. In the absence of charge conjugation symmetry, the theory has four additional transport coefficients, for a total of eleven \cite{Hernandez:2017mch}. The precise connection between the above formalism of MHD, based on the concept of generalised global symmetries, and MHD expressed in terms of electromagnetic fields, which match in the limit of a magnetic field small compared to the temperature of the plasma, was established in Ref. \cite{Hernandez:2017mch}.
Since the effective theory \cite{Grozdanov:2016tdf} makes no assumptions regarding the microscopic details of the plasma, it can, provided such details can be computed from quantum field theory or otherwise, be applied to solar plasmas, fusion reactor physics, astrophysical plasmas and even the QCD quark-gluon plasma produced in nuclear collisions. Of course, computing the microscopic properties of such systems is extremely difficult. In this work, we will resort to holographic duality. Using standard holographic methods applicable to hydrodynamics \cite{Policastro:2001yc,Policastro:2002se,Policastro:2002tn}, our analysis will provide the microscopic data of a strongly interacting toy-model plasma that is needed to describe the phenomenology of MHD waves.
In the process, we will construct and develop the holographic duality (the bulk/boundary dictionary) for field theories with generalised global (higher-form) symmetries. For this reason, this work should be thought of not only as a study of strongly interacting MHD, but also as providing and executing, for the first time, a systematic procedure for studying higher-form symmetries in holography.
The paper is structured as follows: first, in Section \ref{sec:MatterEM}, we review important aspects of gauge theories with a matter sector coupled to dynamical $U(1)$, which can describe a plasma in the IR limit. In particular, we focus on the discussion of how to couple a strongly interacting field theory with a holographic dual to dynamical electromagnetism, all within a holographic setup. Then, in Section \ref{sec:Holography}, we explore this holographic setup in detail, develop the holographic dictionary for theories with higher-form symmetries and use it to compute the microscopic properties of the dual plasma, i.e. the equation of state and first-order transport coefficients. In Section \ref{sec:MHDWaves}, we then use this data to analyse the phenomenology of propagating MHD modes---Alfv\'{e}n and magnetosonic waves. Finally, we conclude with a discussion and a summary of the most important findings in Section \ref{sec:Discussion}. Three appendices are devoted to a derivation of the relevant Kubo formulae (Appendix \ref{appendix:kubo}), details regarding the derivation of horizon formulae for the transport coefficients (Appendix \ref{appendix:transport}) and a derivation of the magnetosonic dispersion relations (Appendix \ref{appendix:magnetosonicSpectrum}).
Note added: In addition to this paper on the holographic dual of MHD from the perspective of generalised global symmetries, a closely related work, Ref. \cite{Hofman:2017Something}, also studies various aspects of generalised global symmetries in gauge-gravity duality and holographic dual(s) of \cite{Grozdanov:2016tdf}. Although the two concurrent and complementary papers focus on different aspects of holography, there is overlap between our Sections \ref{sec:N4-II} and \ref{sec:Holography} and parts of Ref. \cite{Hofman:2017Something}.
\section{Matter coupled to electromagnetic interactions}\label{sec:MatterEM}
A microscopic theory from which an effective description of a plasma can arise comprises a matter sector that interacts through an electromagnetic $U(1)$ gauge field. In all theories that will be studied here, matter will only couple to electric flux lines. For this reason, the electric one-form symmetry will be explicitly broken. However, the magnetic one-form symmetry will remain a symmetry and $\partial_\mu J^{\mu\nu} = 0$, where $J = \star\, d A$ in a phase with spontaneously broken magnetic global one-form $U(1)$ symmetry. The simplest example of such a theory is quantum electrodynamics. In other theories, the matter sector may itself exhibit complicated physics with additional gauge interactions, such as in QCD. In this work, the theory that we will study contains an infinitely strongly coupled holographic matter sector (closely related to $\mathcal{N}=4$ supersymmetric $SU(N_c)$ Yang-Mills) with infinite $N_c$. Because of the coupling between matter and dynamical electromagnetism, the holographic setup and the interpretation of results are somewhat subtle. For this reason, we begin our discussion by reviewing some relevant aspects of quantum field theory, following a line of argument similar to \cite{Fuini:2015hba}.
\subsection{Quantum electrodynamics}\label{sec:QED}
The simplest example of a theory coupling matter to electromagnetism is quantum electrodynamics (QED). QED is a $U(1)$ gauge theory that contains a (massive) Dirac fermion $\psi$ (describing electrons and positrons) and a massless photon field $A_\mu$:\footnote{We use the mostly positive convention for the metric tensor, so that $\eta_{\mu\nu} = \{-1, +1,+1,+1\}$.}
\begin{align}
S_{\scriptscriptstyle QED} = - \int d^4 x \left[ i \bar\psi \gamma^\mu D_\mu \psi + m \bar \psi \psi + \frac{1}{4 e^2} F_{\mu\nu} F^{\mu\nu} \right].
\end{align}
$D_\mu$ is the gauge covariant derivative that couples $A_\mu$ to the fermion current (with the coupling $e$ scaled out from the interaction). For a detailed discussion of various properties of QED, see e.g. \cite{Weinberg:1995mt,Weinberg:1996kr,Peskin:1995ev}.
The stress-energy tensor of the theory is
\begin{align}
T^{\mu\nu} = \frac{1}{2} \bar \psi i \left( \gamma^\mu D^\nu + \gamma^\nu D^\mu \right) \psi - \eta^{\mu\nu} \bar\psi \left(i\gamma^\lambda D_\lambda + m \right)\psi + \frac{1}{e^2} \left[ F^{\mu\lambda} F^\nu_{~\lambda} - \frac{1}{4} \eta^{\mu\nu} F^{\rho\sigma} F_{\rho\sigma} \right].
\end{align}
In the massless limit ($m=0$), the theory is classically scale invariant, which is reflected in the vanishing trace of the stress-energy tensor, $T^\mu_{~\mu} = 0$. Quantum mechanically, the theory does not remain scale invariant. The trace receives a correction proportional to the beta function of the electromagnetic coupling,
\begin{align}\label{QED_TraceAnomaly}
T^\mu_{~\mu} = - \frac{\beta(e)}{2 e^3} F_{\mu\nu} F^{\mu\nu}\,.
\end{align}
This is the anomalous breaking of scale invariance---the so-called trace anomaly. The running electromagnetic coupling $e(\mu)$ depends on the renormalisation group scale $\mu$.
To first order in perturbation theory, the beta function is
\begin{align}
\beta (e) = \mu \frac{d e}{d \mu }= \frac{e^3}{12 \pi^2}\,,
\end{align}
which, when integrated over the interval $\mu \in \left[M , \Lambda \right]$, gives the running coupling
\begin{align}\label{CutOffDependentCouplingQED}
\frac{1}{e(\Lambda)^2} = \frac{1}{e(M)^2} - \frac{ \ln \left(\Lambda / M \right)}{6\pi^2}\, .
\end{align}
Here, $M$ is some IR renormalisation group scale at which the electric charge takes the renormalised physical value, $e_r = e(M)$, and $\Lambda$ is the UV cut-off. Note that at the Landau pole, $\Lambda = \Lambda_{EM}$, the left-hand side of \eqref{CutOffDependentCouplingQED} vanishes. On the other hand, the expectation value of the stress-energy tensor is a physical quantity and therefore cannot depend on $\mu$. This statement is encoded in the following identity, which leads to the Callan-Symanzik equation:
\begin{align}\label{TMuIndep}
\mu \frac{d}{d \mu} \left\langle T^{\mu\nu} \right\rangle = 0 \, .
\end{align}
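The running coupling \eqref{CutOffDependentCouplingQED} can be cross-checked by integrating the one-loop beta function numerically; the value of $e_r$ in the minimal sketch below is an arbitrary illustrative choice, not taken from the text.

```python
import numpy as np
from scipy.integrate import solve_ivp

# One-loop QED beta function, mu de/dmu = e^3/(12 pi^2),
# written in terms of t = ln(mu/M): de/dt = e^3/(12 pi^2).
def beta(t, y):
    e = y[0]
    return [e**3 / (12 * np.pi**2)]

e_r = 0.5      # illustrative renormalised coupling e(M); arbitrary choice
t_uv = 5.0     # integrate up to Lambda = M e^5

sol = solve_ivp(beta, [0.0, t_uv], [e_r], rtol=1e-11, atol=1e-13)
e_numeric = sol.y[0, -1]

# Closed form: 1/e(Lambda)^2 = 1/e_r^2 - ln(Lambda/M)/(6 pi^2)
e_closed = 1.0 / np.sqrt(1.0 / e_r**2 - t_uv / (6 * np.pi**2))

# Landau pole: the inverse coupling vanishes at ln(Lambda_EM/M) = 6 pi^2/e_r^2
t_landau = 6 * np.pi**2 / e_r**2
```

The same algebra gives the Landau pole scale $\Lambda_{EM} = M \exp\left(6\pi^2/e_r^2\right)$, which for weak $e_r$ lies exponentially far above $M$.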
Since we are interested in neutral IR plasma states in QED that can be described by an effective theory of MHD, we can consider the (ground state) expectation value of the photon field to produce a non-zero magnetic field and a vanishing (screened) electric field,
\begin{align}
\left\langle A_\mu \right\rangle = \frac{1}{2}\mathcal{B} \left( x^1 \delta^2_{~\mu} - x^2 \delta^1_{~\mu}\right).
\end{align}
$\mathcal{B}$ is the magnitude of the ``background" magnetic field pointing in the $x^3 = z$ direction. The IR spectrum of the theory has a gapped-out photon, i.e. long-range charge neutrality, which allows us to neglect quantum fluctuations of $A_\mu$. For such a plasma state, Eq. \eqref{QED_TraceAnomaly} yields
\begin{align}\label{TTraceQED}
\left\langle T^{\mu}_{~\mu} \right\rangle = - \frac{\beta(e)}{e^3} \mathcal{B}^2 = - \frac{1}{12\pi^2} \mathcal{B}^2 + \mathcal{O}\left(e^2\right).
\end{align}
Furthermore, the expectation value of the stress-energy tensor can be conveniently split into the matter (containing matter-light interactions) and the purely electromagnetic parts,
\begin{align}
\left\langle T^{\mu\nu} \right\rangle &= \left\langle T^{\mu\nu}_{{\scriptscriptstyle matter} } (\mu) \right\rangle + \frac{1}{e (\mu)^2 } \left[ F^{\mu\lambda} F^\nu_{~\lambda} - \frac{1}{4} \eta^{\mu\nu} F^{\rho\sigma} F_{\rho\sigma} \right] \nonumber\\
&= \left\langle T^{\mu\nu}_{{\scriptscriptstyle matter}} (\Lambda / M ) \right\rangle + \left( \frac{1}{e_r^2} - \frac{ \ln \left(\Lambda / M \right)}{6\pi^2} \right) \frac{\mathcal{B}^2}{2} \times
{\scriptsize
\begin{bmatrix}
1 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 \\
0 & 0 & 1 & 0 \\
0 & 0 & 0 & -1
\end{bmatrix} },
\end{align}
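The anisotropic structure of the electromagnetic part of the split above can be verified directly: for a constant magnetic field along $z$, i.e. $F_{12} = -F_{21} = \mathcal{B}$, the combination $F^{\mu\lambda} F^\nu_{~\lambda} - \frac{1}{4}\eta^{\mu\nu} F^{\rho\sigma}F_{\rho\sigma}$ reduces to $\frac{\mathcal{B}^2}{2}\,\mathrm{diag}(1,1,1,-1)$. A minimal numerical sketch (the value of $\mathcal{B}$ is arbitrary):

```python
import numpy as np

B = 0.7                                  # illustrative field magnitude (arbitrary)
eta = np.diag([-1.0, 1.0, 1.0, 1.0])     # mostly-plus Minkowski metric
eta_inv = np.linalg.inv(eta)

# Field strength for B along z (coordinates t, x, y, z = 0..3): F_{xy} = B
F_dd = np.zeros((4, 4))
F_dd[1, 2], F_dd[2, 1] = B, -B

F_uu = eta_inv @ F_dd @ eta_inv          # F^{mu nu}
F2 = np.einsum('ab,ab->', F_uu, F_dd)    # F^{rho sigma} F_{rho sigma} = 2 B^2

# T_EM^{mu nu} = F^{mu lam} F^{nu}_{lam} - (1/4) eta^{mu nu} F^2
T_em = F_uu @ eta @ F_uu.T - 0.25 * eta_inv * F2

expected = 0.5 * B**2 * np.diag([1.0, 1.0, 1.0, -1.0])
```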
where in the second line, we chose to evaluate the expectation value at the UV cut-off $\mu = \Lambda$. Note that because $\left\langle T^{\mu\nu}\right\rangle$ is $\mu$-independent (cf. Eq. \eqref{TMuIndep}), this choice does not influence the final value of $\left\langle T^{\mu\nu} \right\rangle$.
\subsection{Strongly interacting holographic matter coupled to dynamical electromagnetism}\label{sec:N4}
We now turn our attention to the holographic strongly interacting theory that will be investigated in the remainder of this paper. Throughout our discussion, it will prove useful to think of the matter sector as that of the best understood holographic example---the conformal $\mathcal{N}=4$ supersymmetric Yang-Mills theory (SYM) with an infinite number of colours $N_c$ and an infinite 't Hooft coupling $\lambda$. However, as will become clear below, the theory dual to our holographic setup will not be precisely the $\mathcal{N}=4$ SYM theory coupled to a $U(1)$ gauge field, but rather a deformation of it, whose microscopic definition we will not investigate in detail. Instead, the model studied here should be considered as a bottom-up construction---the simplest dual of a strongly coupled plasma that can be described by magnetohydrodynamics in the infrared limit.
The field content of $\mathcal{N} = 4$ SYM is four Weyl fermions, three complex scalars and a vector field, all transforming under the adjoint representation of $SU(N_c)$. The theory also has an $SU(4)_R$ R-symmetry owing to its extended supersymmetry. The adjoint fields together represent the matter content of a hypothetical plasma, which further requires the fields to be (minimally) coupled to an electromagnetic $U(1)$ gauge group (with $e$ the electromagnetic coupling). In $\mathcal{N} = 4$ SYM, this can be achieved by gauging the $U(1)_R$ subgroup of $SU(4)_R$. Under $U(1)_R$, the Weyl fermions transform with the charges $\{+3,-1,-1,-1\}/\sqrt{3}$ and the complex scalars all have charge $+2/\sqrt{3}$ (for details regarding the choice of the normalisation, see \cite{Fuini:2015hba}). Such a system can be considered as a strongly coupled toy model for a QCD plasma in which the quarks interact with photons as well as with the $SU(3)$ vector gluons.
A crucial fact about $\mathcal{N} = 4$ SYM is that its $R$-current becomes anomalous in the presence of electromagnetism. For this reason, the gauged $U(1)_R$ is also anomalous, and the theory has to be deformed in some way to re-establish its self-consistency. As pointed out in \cite{Fuini:2015hba}, one way to do this is by adding a set of spectator fermions that only interact electromagnetically and ``absorb" the anomaly. We will assume that the gauge anomaly can be cancelled by some deformation of the theory so that the quantum expectation value of the $U(1)_R$ R-current $J_R^\mu $ remains conserved, $\nabla_\mu \langle J_R^\mu \rangle = 0$. We can then write the total bare action of the $SU(N_c) \times U(1)$ gauge theory as
\begin{align}
S_{{\scriptscriptstyle plasma}} = S_{\scriptscriptstyle matter} + \int d^4 x \, A_\mu J^\mu_R \, - \frac{1}{4 e^2} \int d^4 x \, F_{\mu\nu} F^{\mu\nu} \,,
\end{align}
where $A_\mu$ is the dynamical electromagnetic gauge field and $F = d A$. The expectation value of the conserved operator $J^\mu_R$ contains a trace over the colour index of the adjoint matter field and therefore scales as $N_c^2$. Since it is coupled to a single photon, the Maxwell part of the total plasma action $S_{{\scriptscriptstyle plasma}}$ contains no powers of $N_c$.
As in the QED plasma, we will consider the photons to be gapped out from the IR spectrum so that $A_\mu$ will only produce a (classical) magnetic field
\begin{align}
\langle A_\mu \rangle = \frac{1}{2}\mathcal{B} \left( x^1 \delta^2_{~\mu} - x^2 \delta^1_{~\mu}\right).
\end{align}
In order to maintain the neutrality of the plasma, we will set the electric $U(1)_R$ chemical potential to zero, $\mu_R = \left\langle A_0 \right\rangle= 0$.\footnote{For a discussion of supersymmetric gauge theories with non-zero R-charge densities, see e.g. \cite{Yamada:2006rx,Cherman:2013rla}.} For this reason, the conserved electric one-form (vector) $U(1)_R$ R-current will play no role in the hydrodynamic IR limit of the theory, so $\langle J^{\mu}_R \rangle = 0$.
The plasma has a conserved stress-energy tensor to which both the matter (along with its interaction with the electromagnetic field) and the purely electromagnetic sectors contribute,
\begin{align}\label{TmunuSecQFT}
\left\langle T^{\mu\nu} \right\rangle &= \left\langle T^{\mu\nu}_{{\scriptscriptstyle matter} } (\Lambda/M) \right\rangle + \frac{1}{e (\Lambda/M)^2 } \left[ \langle F^{\mu\lambda} F^\nu_{~\lambda} \rangle - \frac{1}{4} \eta^{\mu\nu} \langle F^{\rho\sigma} F_{\rho\sigma}\rangle \right] .
\end{align}
The trace of the stress-energy tensor of the superconformal theory again exhibits an anomaly proportional to the beta function of the electromagnetic coupling (cf. Eq. \eqref{TTraceQED}). In the $\mathcal{N} = 4$ theory, this anomaly turns out to be one-loop exact in the presence of a background electromagnetic field and follows from a special case of the NSVZ beta function, owing to the fact that the $U(1)_R$ sector retains a residual $\mathcal{N} = 1$ supersymmetry (see Refs. \cite{Freedman:1998tz,Anselmi:1997ys,Fuini:2015hba}),\footnote{Note that as $N_c \to \infty$, $N_c^2 - 1 \approx N_c^2$.}
\begin{align}\label{N4TraceAnomaly}
\langle T^\mu_{~\mu} \rangle = - \frac{\beta(e)}{e^3} \mathcal{B}^2 = - \frac{N_c^2}{4 \pi^2} \mathcal{B}^2 \,.
\end{align}
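Since the two normalisations of the beta function are related by $\mu \, d e^{-2} / d\mu = - 2 e^{-3} \beta(e)$, the trace anomaly above directly fixes the running of the inverse coupling. Spelling out this one-line conversion (using $\langle F_{\mu\nu} F^{\mu\nu} \rangle = 2 \mathcal{B}^2$ for a purely magnetic background),

```latex
\begin{align}
\beta\left(1/e^2\right) = \mu \frac{d e^{-2}}{d\mu}
= - \frac{2}{e^3}\, \beta(e)
= \frac{2}{\mathcal{B}^2} \langle T^\mu_{~\mu} \rangle
= - \frac{N_c^2}{2\pi^2}\,.
\end{align}
```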
The beta function for the inverse electromagnetic coupling is then
\begin{align}\label{NSVZN4}
\beta\left(1/e^2\right) = \mu \frac{d e^{-2}}{d\mu} = - \frac{N_c^2}{2 \pi^2} \left[ \frac{1}{6} \sum_{\alpha=1}^4 \left(q_{\text f}^\alpha\right)^2 + \frac{1}{12} \sum_{a=1}^3 \left(q_{\text s}^a\right)^2 \right]= - \frac{N_c^2}{2 \pi^2} \, ,
\end{align}
with the fermionic and the scalar R-charges being $q_{\text f}^\alpha = \{+3,-1,-1,-1\} / \sqrt{3}$ and $q_{\text s}^\alpha = \{2,2,2\}/\sqrt{3}$, respectively. In analogy with Eq. \eqref{CutOffDependentCouplingQED} in QED, by integrating the beta function equation, we find
\begin{align}\label{CutOffDependentCouplingN4}
\frac{1}{e^2(\Lambda) } = \frac{1}{e^2 (M) } - \frac{N_c^2}{2\pi^2} \ln \left(\Lambda / M \right).
\end{align}
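The overall coefficient in the beta function above is fixed by the R-charge assignments quoted earlier; as a quick arithmetic check (an illustrative script, not part of any derivation):

```python
import numpy as np

# U(1)_R charges of the gauged N=4 matter content (as quoted in the text)
q_f = np.array([3.0, -1.0, -1.0, -1.0]) / np.sqrt(3.0)  # four Weyl fermions
q_s = np.array([2.0, 2.0, 2.0]) / np.sqrt(3.0)          # three complex scalars

# Bracket appearing in the NSVZ-type beta function:
#   (1/6) sum_f q_f^2 + (1/12) sum_s q_s^2
bracket = np.sum(q_f**2) / 6.0 + np.sum(q_s**2) / 12.0
# The bracket equals 1, so beta(1/e^2) = -N_c^2/(2 pi^2), and the trace
# anomaly coefficient is -N_c^2/(4 pi^2), consistent with the text.
```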
It is essential to stress that even though our holographic theory will not be exactly dual to the $\mathcal{N} = 4$ SYM theory, it will give us the same trace anomaly and thus the same electromagnetic beta function. Since the NSVZ beta function \eqref{NSVZN4} is only sensitive to the matter content, this match can be interpreted as our working with a theory with the $U(1)$-gauged matter content and R-charges of $\mathcal{N} = 4$ but with a deformed Lagrangian and possibly additional matter that is ungauged under the $U(1)$.
Beyond the stress-energy tensor of the theory discussed thus far, the only other (generalised) global symmetry of interest for describing a plasma state is the higher-form $U(1)$ symmetry that corresponds to the conserved number of magnetic flux lines crossing a two-surface. The symmetry results in a conserved two-form current $\langle J^{\mu\nu} \rangle \neq 0$ and was discussed in Section \ref{sec:Intro}. The generating functional of the field theory that can be used to study MHD of a magnetised plasma, in which the two globally conserved operators are $T^{\mu\nu}$ and $J^{\mu\nu}$, is therefore
\begin{align}\label{GenFunTJ}
W\left[g_{\mu\nu}, b_{\mu\nu} \right] = \left\langle \exp\left[i \int d^4x \sqrt{-g} \left( \frac{1}{2} T^{\mu\nu}g_{\mu\nu} + J^{\mu\nu} b_{\mu\nu}\right) \right] \right\rangle .
\end{align}
The remainder of this paper is devoted to constructing and analysing its holographic bulk dual.
\subsection{Holographic dual}\label{sec:N4-II}
The simplest holographic dual of a strongly interacting state with the generating functional \eqref{GenFunTJ} is one that contains a five-dimensional bulk with a dynamical graviton (metric tensor $G_{ab}$) described by the Einstein-Hilbert action, a negative cosmological constant and a two-form bulk gauge field $B_{ab}$:\footnote{Throughout this paper, we use Greek and Latin letters to denote the boundary and bulk theory indices, respectively.}
\begin{align}\label{BulkAction}
S = \frac{1}{2\kappa_5^2} \int d^5x \sqrt{-G} \left( R + \frac{12}{L^2} -\frac{1}{3 e_H^2} H_{abc}H^{abc} \right) .
\end{align}
In standard (Dirichlet) quantisation, the two fields asymptote to $g_{\mu\nu}$ and $b_{\mu\nu}$ at the boundary and source $T^{\mu\nu}$ and $J^{\mu\nu}$. Furthermore, $H$ is the three-form defined as $H = d B$. In component notation, $B = \frac{1}{2} B_{ab} \, dx^a \wedge dx^b$ and $H = \frac{1}{6} H_{abc} \, dx^a \wedge dx^b \wedge dx^c$. The two-form gauge field action is the bulk Maxwell Lagrangian $F \wedge \star\, F$ written in terms of the five-dimensional Hodge dual three-form $H = \star\, F$, giving the Lagrangian term $H \wedge \star \, H$. In most of our work, we will set $e_H = L = 1$. Because the two bulk theories are related by dualisation, the background solution to the equations of motion derived from \eqref{BulkAction} gives rise to the same magnetised black brane solution known from the Einstein-Maxwell theory \cite{D'Hoker:2009mmn}.
In the absence of the two-form term, the action \eqref{BulkAction} arises from a consistent truncation of type IIB string theory on $S^5$ and, upon the identification of the gravitational coupling $\kappa_5 = 2 \pi / N_c$, is dual to pure $\mathcal{N} = 4$ SYM at infinite $N_c$ and infinite 't Hooft coupling $\lambda$. For reasons discussed above, the full dual of the action \eqref{BulkAction} is unknown and we are not aware of a mechanism for deriving this action from a consistent truncation of ten-dimensional type IIB supergravity. Nevertheless, for the purpose of comparing the sizes of matter and electromagnetic contributions to the total operator expectation values, it will prove useful to keep the definition of $\kappa_5$ in terms of the number of colours $N_c$ of the hypothetical dual deformed $\mathcal{N} = 4$ SYM coupled to dynamical electromagnetism.
To show further evidence that the action \eqref{BulkAction} is a sensible dual of a strongly coupled MHD plasma, it is useful to elucidate the connection between Eq. \eqref{BulkAction} and the Einstein-Maxwell theory. To put an uncharged holographic theory in an external magnetic field, one normally adds the Maxwell action $F \wedge \star\, F$ with $F = d A$ to the Einstein-Hilbert bulk action. If one imposes Dirichlet boundary conditions on the bulk one-form $A_a$, then $A_a$ sources the R-current $J^\mu_R$ at the boundary, $\int d^4 x\, J^\mu_R \delta A_\mu$, and thus the electromagnetic field $A_\mu$ is external and non-dynamical. The investigation of the physics of such a setup with an external magnetic field was initiated in \cite{D'Hoker:2009mmn} and studied in numerous subsequent works, including the recent \cite{Fuini:2015hba,Janiszewski:2015ura,Critelli:2014kra,Finazzo:2016mhm,Ammon:2017ded}. The semi-classical generating functional of the field theory dual to the Einstein-Maxwell bulk action with Dirichlet boundary conditions corresponds to
\begin{align}\label{ZExtField}
Z_0 [ A_\mu] = \int \mathcal{D} \Phi \, \exp \left\{ i S_0(\Phi) + i\int d^4 x A_\mu J^\mu_R (\Phi) \right\},
\end{align}
where $S_0$ is the strongly coupled field theory action for a set of fields, which we collectively denote as $\Phi$. For uncharged solutions of the Einstein-Maxwell theory, $S_0$ is the $\mathcal{N} = 4$ SYM action and $A_\mu$ is an external gauge field that sources the $U(1)_R$ current. The boundary gauge field $A_\mu$ can be made dynamical by performing a Legendre transform of \eqref{ZExtField} and adding a kinetic term for $A_\mu$ (see also Refs. \cite{Grozdanov:2016tdf,Hernandez:2017mch}):
\begin{equation}\label{ZExtField2}
\begin{aligned}
Z[j_{ext}^\mu] &= \int \mathcal{D} A\int \mathcal{D}\Phi \,\exp \left\{{i S_0(\Phi) + i \int d^4x \left(A_\mu J^\mu_R(\Phi)-\frac{1}{4e^2}F_{\mu\nu}F^{\mu\nu} + A_\mu j^\mu_{ext}\right) }\right\},\\
&= \int \mathcal{D} A\, Z_0[ A_\mu] \,\exp \left\{i\int d^4x \left(-\frac{1}{4e^2}F_{\mu\nu}F^{\mu\nu} + A_\mu j^\mu_{ext} \right) \right\},
\end{aligned}
\end{equation}
where $j^\mu_{ext}$ is the external current which sources the dynamical $U(1)$ gauge field $A_\mu$. To describe a stable plasma state, which is charge neutral in equilibrium, we must impose $J^\mu_R + j^\mu_{ext} = 0$. The variation $\int d^4x\, A_\mu \delta j^\mu_{ext}$ can then be used to obtain correlation functions of the dynamical vector field. Now, since $j^\mu_{ext}$ is conserved, one can express it through an anti-symmetric two-form $b_{\mu\nu}$ as $\epsilon^{\mu\nu\rho\sigma} \partial_\nu b_{\rho\sigma}$, which, upon integration by parts, yields a dualised $\int d^4 x \, J^{\mu\nu} b_{\mu\nu}$, where $J^{\mu\nu}$ is the anti-symmetric current from Eq. \eqref{TwoFormJ}. Furthermore, as we will explicitly see in Section \ref{sec:Holography}, the gravitational dual formulation of a theory with a two-form source $b_{\mu\nu}$ and a corresponding conserved two-form current $J^{\mu\nu}$ allows us to interpret the kinetic Maxwell term in \eqref{ZExtField2} as a double-trace deformation $\int d^4 x \, J_{\mu\nu} J^{\mu\nu}$ of a CFT (with a broken scale invariance). The necessity of imposing double-trace deformations to ensure that the $U(1)$ boundary gauge field be dynamical will thus require us to impose mixed boundary conditions \cite{Witten:2001ua} on the two-form gauge field.\footnote{See also Refs. \cite{Pomoni:2008de,Heemskerk:2010hk,Faulkner:2010jy,Grozdanov:2011aa} and references therein.}
Instead of imposing Dirichlet boundary conditions on the bulk $A_a$, one can work in alternative quantisation and impose Neumann boundary conditions. Such a choice exchanges the interpretation of the normalisable and the non-normalisable mode in $A_a$. From the dual field theory point of view, this can be interpreted as the Legendre transform of the boundary theory, as in Eq. \eqref{ZExtField2}, leading to the variation $\int d^4 x \, A_\mu \delta J^\mu_R$. Physically, this means that in alternative quantisation, an external current sources a dynamical (boundary) vector field.\footnote{See Refs. \cite{Witten:2003ya,Marolf:2006nd,Jokela:2013hta} for discussions regarding the exchange of boundary conditions and (emergent) dynamical gauge fields in lower-dimensional theories.} The two boundary theories, one with Dirichlet and one with Neumann boundary conditions, are normally related by a double-trace deformed RG flow. In our case, we require the boundary double-trace deformation $\int d A \wedge \star\, dA$, or its Hodge dualised $\int d^4 x \, J_{\mu\nu} J^{\mu\nu}$, to be explicitly present regardless of the choice of the quantisation.
From the point of view of the quantum bulk theory, as in a lower-dimensional theory \cite{Faulkner:2012gt}, the Einstein-Maxwell bulk (quantum) path integral runs over the metric and the Maxwell field $A_a$. Alternatively, one can write the path integral over the field strength $F_{ab}$, but at the expense of ensuring the Bianchi identity $dF = 0$ by introducing a Lagrange multiplier $B_{ab}$:
\begin{equation}
Z \supset \int \mathcal{D} F_{ab} \, \mathcal{D} B_{ab} \,\, \text{exp} \left\{ i \, \frac{N_c^2}{8\pi^2} \int d^5 x \sqrt{-G} \, \left( F_{ab}F^{ab} +e_H^{-1} B_{ab} \epsilon^{abcde}\nabla_c F_{de} \right)\right\} .
\end{equation}
Since the second (Bianchi identity) term vanishes for any classical field solution, it has no influence on the saddle point of the path integral. However, it does generate a non-zero contribution to the boundary action, i.e. $\left(N_c^2 / 8 \pi^2 e_H \right) \int d^4x \, \epsilon^{\mu\nu\rho\sigma} F_{\rho\sigma} B_{\mu\nu}$, which is precisely the source term $\int d^4x \, J^{\mu\nu} b_{\mu\nu}$ once we identify $B_{\mu\nu} \sim b_{\mu\nu}$ and $J^{\mu\nu} \sim \epsilon^{\mu\nu\rho\sigma}F_{\rho\sigma}$. The precise dictionary between the bulk and boundary quantities will be discussed in Section \ref{sec:holoRenorm}. By varying the action with respect to $F_{ab}$, one obtains the equation of motion
\begin{equation}\label{relnFandBbulk}
F^{ab} = e_H^{-1} \epsilon^{abcde}\nabla_c B_{de} \, .
\end{equation}
Then, the field strength $F_{ab}$ can be integrated out in the saddle point approximation which gives the two-form gauge field Lagrangian term from Eq. \eqref{BulkAction}. Furthermore, in the language of the Einstein-Maxwell theory, by using Eq. \eqref{relnFandBbulk}, one finds the relation between the one-form R-current $J^\mu_R$ and the $B_{ab}$ field:
\begin{equation}
\langle J^\mu_R \rangle =- \frac{N_c^2}{2 \pi^2} \lim_{u\to 0} F^{u\mu} = -\frac{N_c^2}{2 \pi^2 e_H}\lim_{u\to 0}\epsilon^{\mu\nu\rho\sigma}\partial_\nu B_{\rho\sigma} \, ,
\end{equation}
where $u$ is the radial coordinate and $u = 0$ the boundary of the bulk spacetime. Thus, imposing Dirichlet boundary conditions on $B_{ab}$ (in this case, necessarily with an additional double-trace deformation) corresponds to treating $J^\mu_R$ as a source, which is the same as performing alternative quantisation discussed above. This is again consistent with the interpretation that the dual field theory of \eqref{BulkAction} contains dynamical photons. Furthermore, as we will see from a detailed holographic renormalisation in Section \ref{sec:holoRenorm}, the (double-trace) boundary counter-terms, which are required to keep the on-shell action finite, will give us precisely the Maxwell theory for $A_\mu$ (dual of $b_{\mu\nu}$) on the boundary, including a renormalised electromagnetic coupling $e_r$, as in QED.\footnote{We note that the way the Maxwell Lagrangian arises on the boundary is equivalent to the way holographic matter can be coupled to dynamical gravity on a cut-off brane \cite{Gubser:1999vj}. There too, a holographic counter-term gives rise to the Einstein-Hilbert action at the cut-off brane (the boundary) of a more intricately foliated bulk. As shown by Gubser in \cite{Gubser:1999vj}, such a theory can result in a radiation (CFT)-dominated FRW universe at the boundary with the stress-energy tensor of the $\mathcal{N} =4 $ SYM driving the expansion.} All further details of this holographic setup will be presented in Section \ref{sec:Holography}.
\section{Holographic analysis of theories with generalised global symmetries: equation of state and transport coefficients}\label{sec:Holography}
In this section, we study the relevant details of the simplest holographic theory with Einstein gravity coupled to a higher-form (two-form) bulk field, cf. Eq. \eqref{BulkAction}, which can source a two-form current associated with the $U(1)$ one-form generalised global symmetry in the boundary theory. In other words, we construct the holographic dictionary for theories with higher-form symmetries. As our main goal is to study the phenomenology of MHD waves in a strongly coupled plasma using the dispersion relations of \cite{Grozdanov:2016tdf}, we will use holography only to provide us with the necessary microscopic data: the equation of state and the transport coefficients.
In Section \ref{sec:ActionAndBrane}, we will begin by discussing details of the magnetic brane solution \cite{D'Hoker:2009mmn,D'Hoker:2009bc} supported by the bulk action introduced in Section \ref{sec:N4-II}. In Section \ref{sec:holoRenorm}, we will consider holographic renormalisation of the theory in question and show how the bulk gives rise to a dual theory coupled to dynamical electromagnetism (as in Section \ref{sec:MatterEM}). In particular, we will derive the expectation values of the stress-energy tensor $\langle T^{\mu\nu} \rangle$ and the two-form $\langle J^{\mu\nu}\rangle$ and show that they satisfy the Ward identities \eqref{EOM1} and \eqref{EOM2}. We will also recover and match all expected renormalisation group properties, such as the beta function of the electromagnetic coupling, from the point of view of the bulk calculation. In Section \ref{sec:EOS}, we will then compute and analyse thermal and magnetic properties of the equation of state of the dual plasma. Finally, in Section \ref{transport-maintext}, we will derive the membrane paradigm formulae for the seven transport coefficients required to describe first-order dissipative MHD \cite{Grozdanov:2016tdf} and compute them.\footnote{These horizon formulae are analogous to the expression for shear viscosity in $\mathcal{N}=4$ theory \cite{Iqbal:2008by}. For more recent discussions of other transport coefficients that can be computed directly from horizon data, see e.g. \cite{Gubser:2008sz,Saremi:2011ab,Donos:2014cya,Banks:2015wha,Davison:2015taa,Gursoy:2014boa,Grozdanov:2016ala}.} Further details regarding the horizon formulae for the transport coefficients can be found in Appendix \ref{appendix:transport}.
\subsection{Holographic action and the magnetic brane}\label{sec:ActionAndBrane}
A holographic action dual to a plasma state with a low-energy limit that can be described by MHD was stated in Eq. \eqref{BulkAction}. Including the boundary Gibbons-Hawking term and the (relevant) holographic counter-term, the full action is
\begin{equation}\label{HoloAction}
\begin{aligned}
S &= \frac{N_c^2}{8 \pi^2} \Bigg[\int d^5x \sqrt{-G}\, \left( R+\frac{12}{L^2} -\frac{1}{3e_H^2} H_{abc}H^{abc} \right) \\
&\qquad + \int_{\partial M} d^4x \sqrt{-\gamma} \left( 2\, {\mathrm{tr}} K - 6 + \frac{1}{e_H^2} \mathcal{H}_{\mu\nu}\mathcal{H}^{\mu\nu}\ln \mathcal{C}\right)\Bigg],
\end{aligned}
\end{equation}
where ${\mathrm{tr}} K$ is the trace of the extrinsic curvature of the boundary ($\partial M$) defined by an outward normal vector $n^a$. For convenience, we set both the AdS radius $L=1$ and $e_H=1$. The two-form $\mathcal{H}_{\mu\nu}$ is defined as the projection of the three-form field strength in the direction normal to the boundary, $\mathcal{H}_{\mu\nu} = n^a H_{a\mu\nu}$. $\mathcal{C}$ is a dimensionless number that needs to be adjusted to fix the renormalisation condition, the details of which will be discussed below. The equations of motion that follow are
\begin{align}
R_{ab} + 4G_{ab} - \left( H_{acd} H_b^{\;\;cd}-\frac{2}{9} G_{ab}H_{cde} H^{cde}\right) &= 0 \, , \label{EOMTheory1}\\
\frac{1}{\sqrt{-G}} \partial_a\left( \sqrt{-G} H^{abc}\right) &= 0 \, \label{EOMTheory2} .
\end{align}
Since the theory \eqref{HoloAction} is S-dual to the Einstein-Maxwell theory, we can express the magnetised black brane solution of \cite{D'Hoker:2009mmn} by dualising the Maxwell terms and writing
\begin{equation}
\begin{aligned}
ds^2 &= G_{ab}dx^a dx^b = r_h^2\left(-F(u) dt^2 + \frac{e^{2\mathcal{V}(u)}}{v}(dx^2+dy^2) +
\frac{e^{2\mathcal{W}(u)}}{w}dz^2\right) + \frac{du^2}{4u^3 F(u)}\, , \\
H &= \frac{B r_h^2 e^{-2\mathcal{V}+\mathcal{W}}}{2u^{3/2}\sqrt{w}} \,dt\wedge dz\wedge du \, .
\end{aligned}
\label{StefanAnsatz}
\end{equation}
The equations of motion \eqref{EOMTheory1} for this ansatz reduce to three second-order ordinary differential equations (ODEs) for $\{F,\,\mathcal{V},\,\mathcal{W}\}$ and one additional first-order constraint. The equation of motion derived from the variation of the two-form \eqref{EOMTheory2} is automatically satisfied. The equations are equivalent to those derived from the Einstein-Maxwell theory \cite{D'Hoker:2009mmn} upon identification of the Maxwell bulk two-form $F$ with $F = \mathcal{B} \, dx \wedge dy$, where $\mathcal{B} = Br_h^2/v $.\footnote{The metric ansatz is chosen to have the form used in \cite{Janiszewski:2015ura}. It can be obtained from the ansatz $ds^2 = -U(r)dt^2 + e^{2V(r)}(dx^2+dy^2) + e^{2W(r)}dz^2+ dr^2/U(r)$ used in \cite{D'Hoker:2009mmn} by performing a coordinate transformation $r=r_h/\sqrt{u}$ and shifting $V$ and $W$ by the constants $-\ln v$ and $-\ln{w}$, respectively, which are chosen so that the near-boundary expansion has the form $ds^2 = (1/u) \, \eta_{\mu\nu}dx^\mu dx^\nu + du^2 / (4u^2)$.
}
The undetermined functions $F$, $\mathcal{V}$ and $\mathcal{W}$ are obtained numerically by using the shooting method. We first expand the background fields near the horizon as
\begin{equation}
\begin{aligned}
F &= f_{1}^h (1-u) + f_{2}^h(1-u)^2+\mathcal{O}(1-u)^3 \,,\\
\mathcal{V} &= v_{0}^h + v^h_{1}(1-u) + \mathcal{O}(1-u)^2 \,,\\
\mathcal{W} &= w^h_{0} + w^h_{1}(1-u) + \mathcal{O}(1-u)^2 \, ,
\end{aligned}
\end{equation}
where the constants $\{f^h_i,\,v^h_i,\,w^h_i\}$ can be written in terms of $\{f_1^h,\,v^h_0,\,w^h_0\}$ after solving the equations of motion order-by-order near the horizon. The scaling symmetry of our background ansatz then allows us to rescale $dx$ and $dy$ so that $v_0^h = w_0^h = 0$. Next, we match the numerical solutions generated by shooting from the horizon towards the boundary, where the analytical near-boundary expansions of the metric functions are
\begin{equation}
\begin{aligned}
F &=\frac{1}{u}\left( 1 +f_1^b \sqrt{u}+ \frac{(f_1^b)^2}{4} u + f_4^b u^2 +\mathcal{O}(u^{3/2})+
\left(\frac{\mathcal{B}^2}{3}+\mathcal{O}(\sqrt{u}) \right) u^2\ln u\right) ,\\
e^{2\mathcal{V}} &= \frac{1}{u}\left(v + v f_1^b \sqrt{u} + \frac{v (f^b_1)^2}{4}u + v^b_4 u^2 + \mathcal{O}(u^{3/2})
- \left(\frac{\mathcal{B}^2}{6}+\mathcal{O}(\sqrt{u}) \right)u^2\ln u\right),\\
e^{2\mathcal{W}} &=\frac{1}{u}\left(w + w f_1^b \sqrt{u} + \frac{w (f^b_1)^2}{4}u - \frac{2wv^b_4}{v} u^2 + \mathcal{O}(u^{3/2})
- \left( \frac{w\mathcal{B}^2}{3}+\mathcal{O}(\sqrt{u}) \right)u^2\ln u\right).\\
\end{aligned}
\label{eqn:nearBndExpansion}
\end{equation}
As before, one can solve for the coefficients $\{f^b_i,\,v^b_i,\,w^b_i\}$ in terms of $\{f^b_1,\,f^b_4,\,v^b_4\}$. Furthermore, $f_1^b$ can be removed by residual diffeomorphism freedom of the metric ansatz \cite{Janiszewski:2015ura}. For a given value of $B = v \mathcal{B}/r_h^2$, we can therefore generate a numerical background by shooting from the initial conditions of the functions set by the near-horizon expansion with $\{f^h_1,v^h_0,w^h_0 \}= \{\hat f ,0,0\}$. The numerical value of $\hat f$ is chosen so that the near-boundary expansion has $f^b_1 = 0$. The near-boundary behaviour of this function then determines the properties of the dual field theory. Note that the theory is governed by a one-parameter family of such numerical solutions characterised by the dimensionless ratio $T/\sqrt{\mathcal{B}}$, where $T = f^h_1r_h / 2\pi $ is the Hawking temperature (see Eq. \eqref{TempEnt}). In practice, this ratio can be tuned by changing the parameter $B$ of the background ansatz. The numerical solver encounters stiffness problems when $B \approx \sqrt{3}$, i.e. where the temperature is close to zero. All of our numerical results will therefore stop near $T / \sqrt{\mathcal{B}} = 0$. In this work, we do not attempt an independent analysis of the theory at $T = 0$.
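The shooting procedure used above can be illustrated on a toy boundary-value problem. The sketch below is not the Einstein system \eqref{EOMTheory1}; the ODE, the horizon-side data and the boundary target are purely illustrative stand-ins for $\{f^h_1, v^h_0, w^h_0\}$ and the condition $f^b_1 = 0$.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

# Toy BVP for the shooting method: y'' = (3/2) y^2 on [0, 1],
# with y(0) = 4 and target y(1) = 1; the exact solution y = 4/(1+x)^2
# corresponds to the shooting parameter y'(0) = -8.
def rhs(x, Y):
    y, yp = Y
    return [yp, 1.5 * y**2]

def mismatch(slope):
    """Integrate from x = 0 with a trial slope; return y(1) - target."""
    sol = solve_ivp(rhs, [0.0, 1.0], [4.0, slope], rtol=1e-10, atol=1e-12)
    return sol.y[0, -1] - 1.0

# Bracket the shooting parameter and refine it with a root-finder.
slope_star = brentq(mismatch, -20.0, -5.0)
```

In the holographic problem, the analogue of `slope_star` is the near-horizon datum $\hat f$, tuned until the near-boundary expansion \eqref{eqn:nearBndExpansion} satisfies $f^b_1 = 0$.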
\subsection{Holographic renormalisation and the bulk/boundary dictionary}\label{sec:holoRenorm}
The next step in analysing the dual of \eqref{HoloAction} is a systematic holographic renormalisation. In this section, we derive the one-point functions of the stress-energy tensor $\langle T_{\mu\nu} \rangle $ and the two-form current $\langle J_{\mu\nu}\rangle$, and show that they satisfy the Ward identities of magnetohydrodynamics \eqref{EOM1} and \eqref{EOM2} \cite{Grozdanov:2016tdf}, which in terms of operator expectation values take the form
\begin{align}\label{WardIden1}
\nabla_\nu \langle T^{\mu\nu}\rangle = \tilde H^{\mu}_{~\lambda\sigma} \langle J^{\lambda\sigma}\rangle \, , && \nabla_\mu \langle J^{\mu \nu}\rangle =0 \,,
\end{align}
where $\tilde H = db$ is the field strength of the background gauge field $b$ of the boundary field theory. The precise definition of these quantities will become clear below. Since we are only interested in the expansion of MHD to first order in gradients around a flat (boundary) background, it will be sufficient to work only with terms that contain no more than two derivatives along the boundary directions. The holographic renormalisation procedure will closely follow Refs. \cite{deHaro:2000vlm,Taylor:2000xw}.\footnote{This part of the calculation was performed by using the Mathematica package xAct \cite{xAct}.}
We begin by writing the bulk metric in the Fefferman-Graham coordinates \cite{deHaro:2000vlm}
\begin{equation}\label{FGcoord}
ds^2_\text{FG} =G_{ab}\, dx^a dx^b= \frac{d\rho^2}{4\rho^2} + \gamma_{\mu\nu}(\rho,x)dx^\mu dx^\nu = \frac{d\rho^2}{4\rho^2} + \frac{1}{\rho} g_{\mu\nu}(\rho,x) dx^\mu dx^\nu \, ,
\end{equation}
so that near the boundary, $\rho \approx 0$, the metric $g_{\mu\nu}$ can be expanded as
\begin{equation}\label{nearBoundaryFGmetric}
g_{\mu\nu}(\rho,x) = g^{(0)}_{\mu\nu}(x) + \rho g^{(1)}_{\mu\nu}(x) + \rho^2 \left(g^{(2)}_{\mu\nu}(x) + \tilde h_{\mu\nu}(x) \ln\rho \right)+\mathcal{O}(\rho^3) \, .
\end{equation}
Note that Greek (boundary) indices in a tensor $A^{\mu\nu}$ are raised with the metric $g^{\mu\nu}_{(0)}$, which satisfies $g^{(0)}_{\mu\nu}g^{\mu\nu}_{(0)} = 4$. There are two types of covariant derivatives that we will use: $\nabla_\mu^{(g)}$ and $\nabla_\mu$. The former, $\nabla_\mu^{(g)}$ and $\nabla^\mu_{(g)} \equiv g^{\mu\nu} \nabla_\nu^{(g)}$, are defined with respect to the metric $g_{\mu\nu}(\rho,x)$, while $\nabla_\mu$ and $\nabla^\mu\equiv g^{\mu\nu}_{(0)} \nabla_\nu$ are defined through the metric $g^{(0)}_{\mu\nu}(x)$. The Ricci tensors of $g_{\mu\nu}$ and $g^{(0)}_{\mu\nu}$ are denoted by $R^{(g)}_{\mu\nu}$ and $R^{(0)}_{\mu\nu}$, respectively.
The components of the bulk two-form gauge field $B_{ab}$ along the boundary field theory directions can similarly be expanded near the boundary as
\begin{equation}\label{nearBoundaryFGgaugefield}
B_{\mu\nu}(\rho,x) = B^{(0)}_{\mu\nu}(x) + B^{(1)}_{\mu\nu}(x) \ln \rho + \mathcal{O}(\rho) \, .
\end{equation}
In the boundary directions, the three-form field strength is defined as $ H_{\mu\nu\sigma} = \partial_\mu B_{\nu\sigma} + \partial_\nu B_{\sigma\mu} + \partial_\sigma B_{\mu\nu}$, with the near-boundary expansion $ H_{\mu\nu\sigma}(\rho,x) = H^{(0)}_{\mu\nu\sigma}(x) + H^{(1)}_{\mu\nu\sigma}(x) \ln \rho + \mathcal{O}(\rho)$. Each $ H^{(n)}$ is defined in terms of $B^{(n)}$, i.e. in the same way at each order. Note that both $B^{(0)}_{\mu\nu}$ and $B^{(1)}_{\mu\nu}$ are related to the source $b_{\mu\nu}$ of the boundary two-form current, which couples through $ \int d^4x \sqrt{-g}\, J^{\mu\nu}\delta b_{\mu\nu}$. The variation of the regularised bulk on-shell contribution from the $H^2$ term, evaluated at the boundary cut-off $\rho = \rho_\Lambda$, is
\begin{equation}
\delta S_{\scriptscriptstyle on-shell} = -\frac{N_c^2 }{4 \pi^2} \int d^4x \sqrt{-g} \, \mathcal{H}^{\mu\nu} \left(\delta B_{\mu\nu}^{(0)} + \delta B^{(1)}_{\mu\nu} \ln \, \mathcal{C}^2\rho_\Lambda \right) .
\end{equation}
This expression makes it clear that the boundary source should be identified with the linear combination of $B^{(0)}_{\mu\nu}$ and $B^{(1)}_{\mu\nu} \ln \,\mathcal{C}^2\rho_\Lambda$ in parentheses. Thus, $\mathcal{H}^{\mu\nu}$ sets the expectation value of $J^{\mu\nu}$ in the boundary theory. However, because, by definition, $J^{\mu\nu} = \frac{1}{2} \epsilon^{\mu\nu\lambda\sigma}F_{\lambda\sigma}$, whose expectation value contains no colour trace, we need to identify the combination of $B^{(0)}_{\mu\nu}$ and $B^{(1)}_{\mu\nu}$ with the field theory source $b_{\mu\nu}$ by including a factor proportional to $1/N_c^2$, i.e.,
\begin{equation}\label{eq:B0toSourceb}
B^{(0)}_{\mu\nu} + B^{(1)}_{\mu\nu}\ln\, \mathcal{C}^2\rho_\Lambda = \frac{4 \pi^2 }{N_c^2 } b_{\mu\nu} \, .
\end{equation}
In holography, such boundary conditions are known as {\em mixed} boundary conditions. They arise in the presence of double-trace deformations \cite{Witten:2001ua}, which is precisely how the logarithmically running $H^2$ term in the renormalised on-shell action should be interpreted. From the point of view of the boundary field theory, as we will see below, this term is a consequence of dynamical boundary electromagnetism---it is the boundary Maxwell action. Now, since the source $b_{\mu\nu}$ is a physical quantity, it cannot depend on the cut-off scale $\rho_\Lambda$. Hence, the renormalisation group equation
\begin{align}\label{RGEqB}
\frac{db_{\mu\nu}}{d \rho_\Lambda} = 0\,
\end{align}
prompts us to scale $\mathcal{C} \sim 1 / \sqrt{\rho_\Lambda}$, which makes the on-shell action formally finite in the limit of $\rho_\Lambda \to 0$: $\mathcal{C}\to\infty$ as the cut-off is removed, while the product $\mathcal{C}^2\rho_\Lambda$ remains finite. As we will see below, the proportionality constant in the relation between $\mathcal{C}^2$ and $\rho_\Lambda$ sets the value of the renormalised electromagnetic coupling, and corresponds to the choice of the renormalisation group condition. This procedure trades the need to keep the cut-off scale of the theory explicit in our final results---i.e. to explicitly choose the Landau pole scale---for a choice of the renormalisation group scale, or equivalently, of the electromagnetic coupling. With these boundary conditions in hand, the expectation value of $J^{\mu\nu}$ can then be obtained by taking a variational derivative of the on-shell action with respect to the source $b_{\mu\nu}$.
The Ward identities \eqref{WardIden1} can be obtained by solving the equations of motion \eqref{EOMTheory1} and \eqref{EOMTheory2} \cite{Taylor:2000xw}. In the Fefferman-Graham coordinates \eqref{FGcoord}, these equations (together with the trace of \eqref{EOMTheory1}) become
\begin{align}
\frac{1}{2}{\mathrm{tr}}\left[g^{-1}g'' \right] - \frac{1}{4}{\mathrm{tr}}\left[g^{-1}g'g^{-1}g' \right] +\frac{1}{3}\rho^2{\mathrm{tr}}\left[ g^{-1}B'g^{-1}B' \right]-\frac{1}{18}\rho \,{\mathrm{tr}}[ g^{-1} H^2] &=0 \, ,\label{FGeqn1} \\
\frac{1}{2}\left(\nabla_\mu^{(g)}{\mathrm{tr}} g' - \nabla^\nu_{(g)}g'_{\mu\nu}\right) - \rho^2 H_{\mu\alpha\beta}\left(g^{-1}B'g^{-1}\right)^{\alpha\beta} &=0 \, , \label{FGeqn2} \\
\rho\left( 2g''_{\mu\nu} - 2(g'g^{-1}g')_{\mu\nu} + g'_{\mu\nu} {\mathrm{tr}}[g^{-1}g']\right) + R^{(g)}_{\mu\nu}-2g'_{\mu\nu} - g_{\mu\nu} {\mathrm{tr}}[g^{-1}g'] & \nonumber \\
\quad + 8\rho^3\left[ (B'g^{-1}B')_{\mu\nu}-\frac{1}{3}g_{\mu\nu}{\mathrm{tr}}\left[g^{-1}B'g^{-1}B' \right]\right]+\rho^2 \left[ H^2_{\mu\nu}-\frac{2}{9}g_{\mu\nu}{\mathrm{tr}}[g^{-1} H^2] \right] & = 0 \,,\label{FGeqn3}\\
\frac{d}{d\rho}\left( 2\rho \left( g^{-1}B'g^{-1} \right)^{\mu\nu} \right)+\frac{1}{2}\nabla^\lambda_{(g)}\left( g^{\mu\alpha}g^{\nu\beta}H_{\lambda\alpha\beta} \right)& = 0 \,, \label{FGeqn4} \\
\nabla_\nu \left( g^{-1}B'g^{-1}\right)^{\mu\nu} &= 0\, , \label{FGeqn5}
\end{align}
where $g^{-1}$ denotes the matrix inverse of $g$ (in components, this is $g^{\mu\nu}$) and where
\begin{align}
{\mathrm{tr}}[ g^{-1}B' g^{-1}B'] = -B'_{\mu_1\mu_2}B'_{\nu_1\nu_2}g^{\mu_1\nu_1}g^{\mu_2\nu_2}\,,&& H^2_{\mu\nu} = H_{\mu\lambda_1\lambda_2}H_{\nu\sigma_1\sigma_2} g^{\lambda_1\sigma_1}g^{\lambda_2\sigma_2} \, .
\end{align}
Expanding equations \eqref{FGeqn1}--\eqref{FGeqn5} around small $\rho$, we find that
\begin{align}
g^{(1)}_{\mu\nu} = \frac{1}{2}\left( R^{(0)}_{\mu\nu}-\frac{1}{6}g^{(0)}_{\mu\nu} R^{(0)}\right),&& (g^{(1)})^\mu_{~\mu} = \frac{1}{6} R^{(0)} \, .
\end{align}
Since $g^{(1)}_{\mu\nu}$ is proportional to second derivatives of the boundary metric, and we are only keeping track of terms up to second order in boundary derivatives, we can ignore terms with $g^{(1)}_{\mu\nu}$. The remaining equations of motion can thus be written as
\begin{align}
(g^{(2)})^\mu_{~\mu} -\frac{1}{3} B^{(1)}_{\mu\nu}B^{(1)\mu\nu}=0\,, \qquad \tilde h^\mu_{~\mu} = 0 \, , \qquad \nabla_\nu B^{(1)\mu\nu} &= 0 \, ,\label{FGrel1}\\
- H_{\mu\nu\lambda}^{(0)}B^{(1)\nu\lambda} + \nabla^\nu_{(0)}\left( g^{(0)}_{\mu\nu} (g^{(2)})^\lambda_{~\lambda} - g_{\mu\nu}^{(2)}-\frac{1}{2}\tilde h_{\mu\nu} \right) &= 0 \, ,\label{FGrel2}\\
\tilde h_{\mu\nu} + \frac{1}{2}\left(4 B^{(1)}_{\mu\lambda} (B^{(1)})_\nu^{~\lambda}- g_{\mu\nu}^{(0)} B^{(1)}_{\lambda\sigma}B^{(1)\lambda\sigma} \right) &=0 \, . \label{FGrel3}
\end{align}
The expectation values of the stress-energy tensor and the two-form current follow from the generating functional \eqref{GenFunTJ}:
\begin{align}\label{Def1ptFunction}
\langle T^{\mu\nu} \rangle = -\frac{2i}{\sqrt{-g^{(0)}}} \frac{\delta \ln W}{\delta g^{(0)}_{\mu\nu}}\, ,&& \langle J^{\mu\nu} \rangle = - \frac{i}{\sqrt{-g^{(0)} }} \frac{\delta \ln W}{\delta b_{\mu\nu}} \, .
\end{align}
In holography, $W$ is computed from the (on-shell) action \eqref{HoloAction}, giving us\footnote{Note that in order to raise indices of the boundary theory expectation values, one needs to use the induced metric $\gamma_{\mu\nu}$.}
\begin{align}
\langle T_{\mu\nu}\rangle =& -\frac{N_c^2}{4 \pi^2} \lim_{\epsilon \to 0}\frac{r_h^2}{\epsilon}\left(K_{\mu\nu} - \gamma_{\mu\nu} K - 3 \gamma_{\mu\nu}+ \frac{1}{2}R[\gamma]_{\mu\nu} \right. \nonumber\\
&\left. -\frac{1}{4}\gamma_{\mu\nu}R[\gamma]- \left(\mathcal{H}_{\mu\lambda}\mathcal{H}_\nu^{\;\;\lambda}-\frac{1}{4}\gamma_{\mu\nu}\mathcal{H}_{\alpha\beta}\mathcal{H}^{\alpha\beta}\right)\ln (\mathcal{C}^2\rho)\right) \biggr|_{\rho=\rho_\Lambda} \label{defT} \, , \\
\langle J_{\mu\nu}\rangle =& -\lim_{\epsilon \to 0}\,\mathcal{H}_{\mu\nu} \big|_{\rho=\rho_\Lambda} \, .\label{defJ}
\end{align}
Note that while the expectation value of $T^{\mu\nu}$ scales as $N_c^2$, the expectation value of $J^{\mu\nu}$ is of order $\mathcal{O}(1)$.
By using Eq. \eqref{FGrel1} and the fact that $\mathcal{H}_{\mu\nu} = n^\rho H_{\rho\mu\nu} = -2 B^{(1)}_{\mu\nu}+\mathcal{O}(\rho)$, we find that the boundary two-form current is conserved:
\begin{align}\label{JConserHolRG}
\nabla_{(0)}^\mu \langle J_{\mu\nu} \rangle= 2 \nabla^\mu B_{\mu\nu}^{(1)}= 0 \,.
\end{align}
Using the definition \eqref{TwoFormJ}, which gives $\langle J_{\mu\nu} \rangle = \frac{1}{2} \epsilon_{\mu\nu\rho\sigma} \langle F^{\rho\sigma} \rangle$ and connects Eq. \eqref{JConserHolRG} with the Bianchi identity, we find that $\star \, B^{(1)}$ sets the expectation value of the Maxwell field strength $ \langle F \rangle$. Furthermore, the (regularised) stress-energy tensor \eqref{defT} becomes
\begin{equation}\label{holographicStress}
\langle T_{\mu\nu}\rangle =\lim_{\rho_\Lambda \to 1/(L\Lambda)^2}\frac{N_c^2}{2 \pi^2 }\left(g^{(2)}_{\mu\nu} - g^{(0)}_{\mu\nu}(g^{(2)})^\lambda_{~\lambda} +\frac{1}{2}\tilde h_{\mu\nu} +\tilde h_{\mu\nu}\ln\left(\mathcal{C}^2 \rho\right)+ \mathcal{O}(\rho,\partial^2)\right) \biggr|_{\rho=\rho_\Lambda} .
\end{equation}
It is useful to write $\rho_\Lambda = 1/(L\Lambda)^2$, where $\Lambda$ is the UV cut-off energy of the theory and $L$ is the AdS radius which we set to be $L=1$. As discussed in Section \ref{sec:MatterEM}, the choice of the constant $\mathcal{C}$ must now be made in order to fix the renormalisation condition, which will render the renormalised expectation value $\langle T_{\mu\nu}\rangle$ physical and finite in the formal limit of $\Lambda \to \infty$. This again implies that $\mathcal{C}^2 \rho_\Lambda$ has to be finite and invariant under the change of the cut-off scale, which is consistent with the renormalisation group-invariant condition for the dual field theory source $b_{\mu \nu}$ in Eq. \eqref{RGEqB}. It will prove useful to introduce a renormalisation group-invariant energy scale $M_\star = \Lambda/\mathcal{C}$, which is the energy scale associated with the Landau pole. Furthermore, we also introduce the combination $1/e_r^2 = \ln (\Lambda L/\mathcal{C})$, which, as we shall see shortly, plays the role of the renormalised electromagnetic coupling.
To see how the constant $\mathcal{C}$ in Eq. \eqref{holographicStress} is related to our discussion in Section \ref{sec:MatterEM}, we write the last term by introducing a mass scale $M$:
\begin{equation}\label{splitting}
\frac{N_c^2}{\pi^2}\tilde h_{\mu\nu}\ln(\Lambda L/\mathcal{C}) = \frac{N_c^2}{\pi^2} \tilde h_{\mu\nu} \ln(\Lambda/M) + \tilde h_{\mu\nu}\left( \frac{2}{e_r^2} - \frac{N_c^2 }{\pi^2} \ln(\Lambda/M) \right) .
\end{equation}
What can be seen from Eq. \eqref{splitting} is that this splitting precisely reproduces the way the logarithmic divergence enters into the stress-energy tensor from two different pieces of the Lagrangian: the matter content (with its coupling to the photons) and the electromagnetic (Maxwell) part:
\begin{align}
\langle T_{\mu\nu} \rangle = \langle T^{\scriptscriptstyle matter}_{\mu\nu}\rangle + \langle T^{\scriptscriptstyle EM}_{\mu\nu}\rangle \,,
\end{align}
with the two terms being
\begin{align}
\langle T^{\scriptscriptstyle matter}_{\mu\nu}\rangle &= \frac{N_c^2}{2\pi^2}\left(g^{(2)}_{\mu\nu} - g^{(0)}_{\mu\nu}(g^{(2)})^\lambda_{~\lambda} +\frac{1}{2}\tilde h_{\mu\nu} \right) - \frac{N_c^2}{\pi^2} \tilde h_{\mu\nu} \ln(\Lambda/M) \, , \\
\langle T^{\scriptscriptstyle EM}_{\mu\nu}\rangle &= -\left(\frac{2}{e_r^2} - \frac{N_c^2}{\pi^2} \ln \left(\Lambda / M \right)\right) \tilde h_{\mu\nu} \, .
\end{align}
Finally, we note that the electromagnetic $\langle T^{\scriptscriptstyle EM}_{\mu\nu}\rangle$ follows precisely from the Maxwell boundary action, which induces a double-trace deformation in the boundary field theory (see the discussion below Eq. \eqref{eq:B0toSourceb})
\begin{align}
S_{\scriptscriptstyle EM} = -\frac{1}{4 e(\Lambda/M)^2} \int d^4x \sqrt{-g} F_{\mu\nu}F^{\mu\nu} \, , \qquad \frac{1}{e(\Lambda/M)^2} = \left( \frac{1}{e_r^2} - \frac{N_c^2}{2\pi^2} \ln (\Lambda/M) \right)\, ,
\end{align}
upon using Eq. \eqref{FGrel3} and the fact that the bulk $\star \, B^{(1)}$ determines $\langle F_{\mu\nu}\rangle $:
\begin{align}
\langle T^{\scriptscriptstyle EM}_{\mu\nu}\rangle &=\frac{1}{e(\Lambda/M)^2}\left(\langle F_{\mu\alpha}F_\nu^{\;\;\alpha}\rangle - \frac{1}{4} \eta_{\mu\nu} \langle F_{\alpha\beta}F^{\alpha\beta} \rangle \right) \nonumber\\
&= \frac{1}{e(\Lambda/M)^2}\left(\langle F_{\mu\alpha} \rangle \langle F_\nu^{\;\;\alpha}\rangle - \frac{1}{4} \eta_{\mu\nu} \langle F_{\alpha\beta} \rangle\langle F^{\alpha\beta} \rangle \right) \, ,
\end{align}
where the last equality follows from the fact that quantum fluctuations of the photon field are suppressed in the boundary QFT. Our holographic calculation thus fully reproduces Eq. \eqref{TmunuSecQFT}, which followed from the field theory discussion in Section \ref{sec:N4}. Furthermore, the running electromagnetic coupling constant matches the one found from field theory (cf. Eq. \eqref{CutOffDependentCouplingN4}) \cite{Fuini:2015hba}. Hence, our holographic setup appears to contain the $U(1)$-gauged matter content of the $\mathcal{N} = 4$ SYM theory. In terms of bulk quantities, the renormalised stress-energy tensor and the two-form current are
\begin{align}
\langle T_{\mu\nu}\rangle &= \frac{N_c^2}{2\pi^2} \left(g^{(2)}_{\mu\nu} - g^{(0)}_{\mu\nu}(g^{(2)})^\lambda_{~\lambda} +\frac{1}{2}\tilde h_{\mu\nu} \right) - \frac{2}{e_r^2} \tilde h_{\mu\nu} \, , \label{THolFin} \\
\langle J_{\mu\nu} \rangle & =
2B^{(1)}_{\mu\nu} \, \label{JHolFin} ,
\end{align}
where, as in Section \ref{sec:MatterEM}, $e_r$ is the renormalised coupling, which needs to be set by experimental input---the renormalisation condition. In practice, this constant is fixed by choosing the value of $\mathcal{C}$ in \eqref{defT}. As in any QFT with a Landau pole, there is therefore an inherent ambiguity in the holographic results, which has to be fixed by external, physically motivated input. Here, instead of simply choosing the Landau pole scale, which would have rendered all our results explicitly dependent on the UV cut-off scale of the theory, we performed a renormalisation group analysis and traded the cut-off scale for the renormalisation group scale $M$, which sets the more physically relevant electric charge $e_r$. As a result, the stress-energy tensor in Eq. \eqref{THolFin}, and all other physical quantities, are formally independent of the cut-off scale $\Lambda$.
We conclude this section by noting that the relation \eqref{FGrel2}, together with a relation between $\tilde H$, $H^{(0)}$ and $H^{(1)}$, implies that the stress-energy tensor satisfies the Ward identity \eqref{EOM1}; in our holographic notation, $\nabla_\nu \langle T^{\mu\nu}\rangle = \tilde H^{\mu}_{~\lambda\sigma} \langle J^{\lambda\sigma}\rangle$, as in Eq. \eqref{WardIden1}.
\subsection{The equation of state}\label{sec:EOS}
To find the equation of state of our theory, we use the renormalised stress-energy tensor \eqref{THolFin} and the two-form current \eqref{JHolFin} computed in the previous section. The results are then expressed in terms of the near-boundary expansions \eqref{eqn:nearBndExpansion}, which can be read off from the numerical background. Upon changing the radial coordinate from the Fefferman-Graham coordinate $\rho$ to the coordinate $u$ used in Section \ref{sec:ActionAndBrane}, the logarithmic term in the near-boundary expansion is shifted by
\begin{equation}
\tilde h_{\mu \nu} \ln \rho = \tilde h_{\mu \nu} \ln u + \tilde h_{\mu \nu} \ln (r_h/L) \, .
\end{equation}
Hence, in order to extract $g^{(2)}_{\mu \nu}$ in the Fefferman-Graham coordinates from the near-boundary expansion in the $u$ coordinate, one has to take into account the fact that the term proportional to $u^2$ is a combination of $g^{(2)}_{\mu \nu}$ and $\tilde h_{\mu \nu} \ln (r_h/L)$. This effectively changes the value of the renormalised electromagnetic coupling. The resulting equilibrium stress-energy tensor, written in terms of the variables in \eqref{eqn:nearBndExpansion}, is
\begin{align}
\left\langle T^{tt} \right\rangle &= \frac{N_c^2}{2\pi^2} \left[ - \frac{3}{4} f^b_4 r_h^4 + \frac{\mathcal{B}^2}{8 \pi\bar\alpha} \right], \label{THol1} \\
\left\langle T^{xx} \right\rangle = \left\langle T^{yy} \right\rangle &= \frac{N_c^2}{2\pi^2}\left[ \left( - \frac{1}{4} f^b_4 + \frac{v^b_4}{v} \right)r_h^4- \frac{\mathcal{B}^2}{4} + \frac{\mathcal{B}^2}{8 \pi \bar\alpha} \right] ,\label{THol2} \\
\left\langle T^{zz} \right\rangle &= \frac{N_c^2}{2\pi^2} \left[ \left(- \frac{1}{4} f^b_4 - 2 \frac{v^b_4}{v} \right)r_h^4 - \frac{\mathcal{B}^2}{8 \pi \bar \alpha} \right],\label{THol3}
\end{align}
where we have used the (renormalised) fine-structure constant of the electromagnetic coupling in the plasma
\begin{equation}
\frac{1}{4\pi \alpha} = \frac{1}{e_r^2} + \ln \left(\frac{r_h}{L}\right) = \ln \left( M_\star r_h\right) .
\end{equation}
The argument of the logarithm is nothing but the energy scale of the Landau pole $M_\star$ (introduced below Eq. \eqref{holographicStress}) measured in the units of energy set by $1/r_h$. For convenience, we will rescale $\alpha$ by $N_c^2 / 2 \pi^2$ (or $| \beta(1/e^2)|$):
\begin{align}
\bar\alpha = \frac{N_c^2}{2\pi^2}\alpha \, .
\end{align}
The coupling $\bar\alpha$ (or alternatively, the dimensionless ratio between the Landau pole scale $M_\star$ and the energy scale set by $1/r_h$) has to be fixed by experimental observations, as in any other quantum field theory---an input that is, of course, unavailable for a toy model.
In studying strongly coupled MHD, it is phenomenologically relevant to not only consider the matter and light-matter interactions, but to also include large electromagnetic self-interactions encoded in the Maxwell action. However, since we are working with a holographic large-$N_c$ matter sector and a single photon, it is unnatural to expect a Maxwell term of the same order. The choice that we make here is to set the rescaled constant $\bar\alpha$ to the physically motivated value $\bar\alpha = 1/137$. There are several ways to think about this choice: one is to imagine that our plasma has magnetic properties with non-trivial scalings in $N_c$, while another is to assume that the bulk theory studied here could remain a valid dual of a theory with a reasonably small $N_c$. Of course, by considering only a classical bulk theory, we are restricting the strict validity of any computed observable to the limit of $N_c \to \infty$. As soon as one moves towards finite $N_c$, it becomes crucial to estimate the size of subleading $1/N_c^2$ corrections (the topological expansion in the string coupling $g_s$)---an endeavour in holography (and string theory) which to date has been largely neglected and will continue being neglected in this work.\footnote{For some discussions of $1/N_c^2$ corrections to the thermodynamic free energy (the equilibrium partition function) and hydrodynamic long-time tails, see \cite{Denef:2009yy,Denef:2009kn,CaronHuot:2009iq,Arnold:2016dbb,Castro:2017mfj}.} A less problematic limit is that of infinite 't Hooft coupling, which is also implied by the choice of our action.\footnote{For recent discussions of coupling-dependent holography, see \cite{Stricker:2013lma,Waeber:2015oka,Grozdanov:2016vgg,Grozdanov:2016zjj,Grozdanov:2016fkt} and references therein.} Perhaps the best interpretation is one of an ``agnostic choice'' led by our having to fix a free parameter to some value.
We will return to a more careful investigation of the dependence of our results on this choice in Section \ref{sec:alphaDependence}.
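To get a feel for what this choice implies, the relation $1/(4\pi\alpha) = \ln(M_\star r_h)$ can be inverted to estimate the implied Landau-pole scale in units of $1/r_h$. A minimal sketch follows, with the caveat that a concrete value of $N_c$ (here, an illustrative $N_c = 3$) must be assumed in order to undo the rescaling $\bar\alpha = (N_c^2/2\pi^2)\,\alpha$, even though the classical bulk computation is strictly valid only at large $N_c$:

```python
import math

Nc = 3                       # illustrative only; the classical bulk requires Nc -> infinity
alpha_bar = 1 / 137          # the rescaled fine-structure constant chosen in the text

# Undo the rescaling alpha_bar = (Nc^2 / 2 pi^2) alpha ...
alpha = (2 * math.pi**2 / Nc**2) * alpha_bar
# ... and invert 1/(4 pi alpha) = ln(M_star r_h) for the Landau-pole scale.
Mstar_rh = math.exp(1 / (4 * math.pi * alpha))
print(Mstar_rh)              # the Landau pole in units of 1/r_h; O(10^2) for these inputs
```

The strong $N_c$-dependence of this estimate is another way of seeing why the choice of $\bar\alpha$ has to be treated as an external input.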
The expectation values of the stress-energy tensor expressed in \eqref{THol1}--\eqref{THol3} are related to the MHD stress-energy tensor in Eq. \eqref{MHDstress-energy} by
\begin{align}
\langle T^{tt}\rangle = \varepsilon \, ,&& \langle T^{xx}\rangle = p\, ,&& \langle T^{zz}\rangle = p-\mu\rho \,.
\end{align}
We note that, as required in a conformal field theory with a trace anomaly induced by electromagnetic interactions, the trace of the stress-energy tensor is non-zero. The holographic two-form current,
\begin{equation}
\langle J^{tz}\rangle = \mathcal{B} = \frac{Br_h^2}{v}\,,
\end{equation}
is related to the equilibrium magnetic flux line density appearing in the MHD equation \eqref{MHDcurrent} as $\langle J^{tz} \rangle = \rho$.
Temperature and entropy can be expressed in terms of the background geometry as
\begin{align}\label{TempEnt}
T = \frac{1}{2 \pi } f^h_1r_h \,,&& s = \frac{N_c^2}{2\pi^2} \left( \frac{\pi r_h^3}{v\sqrt{w}} \right) ,
\end{align}
and are therefore independent of the renormalised electromagnetic charge. The chemical potential, which is conjugate to the density of magnetic flux lines, can be computed by using the thermodynamic identity $\varepsilon + p = s T + \mu \rho$ (cf. \eqref{ThermoRel1}):
\begin{align}
\mu = \frac{ \langle T^{xx} \rangle - \langle T^{zz} \rangle }{\langle J^{tz} \rangle} = \frac{N_c^2}{2\pi^2} \left( \frac{3v^b_4}{B}-\frac{B}{4 v} + \frac{B}{4\pi v \bar\alpha} \right)r_h^2\,.
\end{align}
Note that with our choice of the bulk theory scalings, $\rho \sim \mathcal{O}(1)$ and $\mu \sim \mathcal{O}(N_c^2)$. Furthermore, while $T \sim\mathcal{O}(1)$, $p$, $\varepsilon$ and $s$ all scale as $\mathcal{O}(N_c^2)$.
\begin{figure}[tbh]
\center
\includegraphics[width=0.49\textwidth]{energyden}
\includegraphics[width=0.49 \textwidth]{pressure}
\includegraphics[width=0.49\textwidth]{entropy}
\includegraphics[width=0.49 \textwidth]{chempot}
\caption{Dimensionless energy density $\varepsilon/\mathcal{B}^2$ (top-left), pressure
$p/\mathcal{B}^2$ (top-right), entropy density $s/\mathcal{B}^{3/2}$ (bottom-left) and chemical potential $\mu/\mathcal{B}$ (bottom-right), in units of $N_c^2 / (2\pi^2) $, plotted as a function of the dimensionless parameter $T/\sqrt{\mathcal{B}}$. The first three plots use logarithmic scales on both axes.}
\label{fig:Thermo}
\end{figure}
Using the above relations, we can perform two consistency checks on our holographic setup and on the numerical calculations of the background. First, the value of the pressure computed from the stress-energy tensor component $\langle T^{xx}\rangle = p$ can be compared with the value of the Euclidean on-shell action, $p = -i (\beta V_3)^{-1}S_{\scriptscriptstyle on-shell}$, where $\beta = 1 / T$ and $V_3$ is the spatial volume of the theory. Secondly, we can compute $\varepsilon + p - \mu \rho$ from the stress-energy tensor evaluated near the boundary and, using the thermodynamic relation \eqref{ThermoRel1}, check whether its value agrees with $s T$ computed purely from horizon quantities. Both calculations confirm the consistency of our setup: we find $\langle T^{xx}\rangle = -i (\beta V_3)^{-1}S_{\scriptscriptstyle on-shell}$ and $\langle T^{tt}\rangle + \langle T^{zz}\rangle = sT$, within numerical precision.
We can now plot various thermodynamic quantities in a dimensionless manner by dividing them by appropriate powers of $\mathcal{B}$. The natural dimensionless parameter with respect to which we present our numerical results is $T / \sqrt{\mathcal{B}}$. The results for the energy density, pressure, entropy density and chemical potential are shown in Figure \ref{fig:Thermo}. The theory has two distinct regimes: the low- and the high-temperature regimes, or alternatively, the strong and weak magnetic field regimes, respectively. The high-temperature regime $T / \sqrt{\mathcal{B}} \gg 1$ is the one to which MHD has historically been applied, and to which the standard formulation of MHD, with its assumed weak-field separation between fluid and charge degrees of freedom, is suited. The claim of Ref. \cite{Grozdanov:2016tdf}, however, is that in the dual formulation, MHD applies for all values of $T / \sqrt{\mathcal{B}}$, provided that the state remains in the hydrodynamic regime. The profiles of the thermodynamic functions in Figure \ref{fig:Thermo} show a smooth crossover between the two regimes, which occurs around
\begin{align}\label{Crossover}
T/\sqrt{\mathcal{B}} \approx 0.5-0.7 \,.
\end{align}
By using numerical fits, the equation of state in the two limits behaves as expected on dimensional grounds \cite{Grozdanov:2016tdf}. We present our numerical results in Table \ref{table:EOS}.
\begin{table}[tbh]
\begin{center}
\begin{tabular}{| l | l | l |}
\hline
& weak field ($T / \sqrt{\mathcal{B}} \gg 1$) & strong field ($T / \sqrt{\mathcal{B}} \ll 1$) \\ \hline
$\varepsilon~~$ & $\frac{N_c^2}{2\pi^2} \left( 74.1 \times T^4 \right)$ & $\frac{N_c^2}{2\pi^2}\left( 5.62\times \mathcal{B}^2 \right)$ \\ \hline
$p~~$ & $\frac{N_c^2}{2\pi^2} \left( 25.3 \times T^4 \right)$ & $\frac{N_c^2}{2\pi^2} \left( 5.32 \times \mathcal{B}^2\right) $ \\ \hline
$s~~$ & $\frac{N_c^2}{2\pi^2} \left( 99.4 \times T^3 \right)$ & $\frac{N_c^2}{2\pi^2}\left( 7.41 \times\mathcal{B}\, T \right)$ \\ \hline
$\mu~~$ & $\frac{N_c^2}{2\pi^2} \left( 10.9\times \mathcal{B} \right)$& $\frac{N_c^2}{2\pi^2}\left( 2.88 \times \mathcal{B} \right)$ \\ \hline
\end{tabular}
\end{center}
\caption{Approximate asymptotic behaviour of the equation of state in weak- and strong-field limits for $\bar\alpha = 1/137$.}
\label{table:EOS}
\end{table}
In the limit of $\mathcal{B} \to 0$, the weak-field result approximately reduces to the equation of state of a strongly coupled, thermal $\mathcal{N} = 4$ plasma, dual to a five-dimensional AdS-Schwarzschild black brane with $p_{\scriptscriptstyle \mathcal{N} = 4} = \frac{1}{8} N_c^2 \pi^2 T^4 $; i.e. $ \lim_{\mathcal{B}\to 0 } p_{\scriptscriptstyle weak} \approx 1.28 \times N_c^2 T^4$ and $ p_{\scriptscriptstyle \mathcal{N} = 4} \approx 1.23 \times N_c^2 T^4$. We also note that the value of the pressure at low temperature depends strongly on the renormalised (re-scaled) fine-structure constant $\bar \alpha$, which we set to $\bar\alpha = 1/137$.
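Two arithmetic checks on the fitted coefficients in Table \ref{table:EOS} can be scripted directly (a sketch; the numbers are the quoted fits, all in units of $N_c^2/2\pi^2$): the weak-field fits satisfy the Euler relation $\varepsilon + p = sT$, since the $\mu\rho$ term is subleading as $\mathcal{B}\to 0$, and the weak-field pressure coefficient per $N_c^2 T^4$ lies within a few percent of the $\mathcal{N}=4$ value $\pi^2/8$:

```python
import math

# Weak-field fit coefficients from the table, in units of N_c^2/(2 pi^2):
# eps = eps_c * T^4, p = p_c * T^4, s = s_c * T^3.
eps_c, p_c, s_c = 74.1, 25.3, 99.4

# Euler relation eps + p = s T + mu*rho; the mu*rho term drops out as B -> 0.
assert abs((eps_c + p_c) - s_c) < 0.1

# Weak-field pressure coefficient (per N_c^2 T^4) vs. thermal N=4 SYM, pi^2/8:
p_weak = p_c / (2 * math.pi**2)
p_n4 = math.pi**2 / 8
print(p_weak, p_n4)          # ~1.28 vs ~1.23, within about 4 percent
```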
\subsection{Transport coefficients}\label{transport-maintext}
Next, we compute the seven transport coefficients, $\eta_\perp$, $\eta_\parallel$, $r_\perp$, $r_\parallel$, $\zeta_\perp$, $\zeta_\parallel$ and $\zeta_\times$, by using the Kubo formulae derived in \cite{Grozdanov:2016tdf,Hernandez:2017mch} and reviewed in Appendix \ref{appendix:kubo}. The procedure only requires us to turn on time-dependent fluctuations of the background fields without any spatial dependence, $G_{ab} \to G_{ab} + \delta G_{ab}(t)$ and $B_{ab} \to B_{ab} + \delta B_{ab}(t)$. The perturbations asymptote to the boundary sources $\delta g^{(0)}_{\mu\nu}$ and $\delta b^{(0)}_{\mu\nu}$ of the dual stress-energy tensor and the two-form current. In the absence of spatial dependence, the fluctuations decouple into five separate channels, from which the seven transport coefficients are computed, with each channel containing one independent dynamical second-order equation. The sets of decoupled fluctuations responsible for their respective transport coefficients are
\begin{equation}
\begin{aligned}
\eta_{\perp} &: \quad \delta G_{xy} \,,\\
\eta_{\parallel} &: \quad \delta G_{xz}, \,\delta B_{tx},\, \delta B_{xu} \,,\\
\zeta_\perp, \, \zeta_\parallel,\zeta_\times &: \quad \delta G_{tt}, \,\delta G_{xx},\,\delta G_{yy},\,\delta
G_{zz}, \,\delta B_{tz},\, \delta G_{tu},\, \delta B_{zu},\, \delta G_{uu} \,,\\
r_{\perp} &: \quad \delta B_{xz}, \,\delta G_{tx},\, \delta G_{xu} \,,\\
r_{\parallel} &: \quad \delta B_{xy} \,,
\end{aligned}
\label{eqn:fluc-channel}
\end{equation}
with only one of the three bulk viscosities being independent. Each of the transport coefficients can then be expressed through a simple, membrane paradigm-type horizon formula. We summarise these relations here and discuss their derivation below:
\begin{equation}
\label{horizonFormulae}
\begin{aligned}
\eta_\perp &= \frac{N_c^2}{2\pi^2} \left( \frac{r_h^3}{4v\sqrt{w}}\right) = \frac{1}{4\pi} s \,,\\
\eta_\parallel &= \frac{N_c^2}{2\pi^2} \left( \frac{r_h^3}{4w^{3/2}} \right) = \frac{1}{4\pi} \frac{v}{w} s \,,\\
r_\perp &= \frac{2\pi^2}{N_c^2} \left( \frac{\sqrt{w}}{r_h} \right) \left(\frac{ \mathfrak{b}^{(-)}_{xz}(1)}{ \mathfrak{b}^{(-)}_{xz}(0)}\right)^2, \\
r_\parallel &= \frac{2\pi^2}{N_c^2} \left( \frac{v}{r_h\sqrt{w}} \right) ,\\
\zeta_\perp =\frac{1}{4}\zeta_\parallel = -\frac{1}{2}\zeta_\times &= \frac{N_c^2}{2\pi^2} \left( \frac{r_h^3}{12 v\sqrt{w}} \left( \frac{6+B^2}{6-B^2}\right)^2
\left[ \frac{ \mathfrak{Z}^{(-)}(1)}{ \mathfrak{Z}^{(-)}(0)} \right]^2 \right) ,
\end{aligned}
\end{equation}
where $\mathfrak{b}^{(-)}$ and $\mathfrak{Z}^{(-)}$ are the time-independent solutions of the fluctuations $\delta B_{xz}$ and $Z_s = \delta G^x_{~x} + \delta G^y_{~y} - (2\mathcal{V}'/\mathcal{W}') \delta G^z_{~z}$, respectively. The arguments denote that the functions are evaluated either at the horizon, $u=1$, or the boundary, $u=0$. Note that the value at the boundary is set by the Dirichlet boundary conditions.
We see that the ratio of the transverse (with respect to the background magnetic field) shear viscosity to the entropy density is universal, $\eta_\perp/s = 1/4\pi$. Furthermore, the expressions for $\eta_\parallel$ and $r_\parallel$ only depend on the background quantities $v$ and $w$, while $\zeta_\perp$, $\zeta_\parallel$ and $r_\perp$ also depend on the fluctuations of the fields.\footnote{For a holographic derivation of bulk viscosity in neutral relativistic hydrodynamics, see \cite{Gubser:2008sz}.}
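The two viscosity-to-entropy ratios follow from dividing the horizon formulae \eqref{horizonFormulae} by the entropy density in Eq. \eqref{TempEnt}; a short symbolic check (a sympy sketch, with symbol names chosen to mirror the text):

```python
import sympy as sp

rh, v, w = sp.symbols('r_h v w', positive=True)
pref = sp.Symbol('N_c', positive=True)**2 / (2 * sp.pi**2)

s = pref * sp.pi * rh**3 / (v * sp.sqrt(w))          # entropy density
eta_perp = pref * rh**3 / (4 * v * sp.sqrt(w))       # transverse shear viscosity
eta_par = pref * rh**3 / (4 * w**sp.Rational(3, 2))  # longitudinal shear viscosity

assert sp.simplify(eta_perp / s - 1 / (4 * sp.pi)) == 0     # eta_perp/s = 1/(4 pi)
assert sp.simplify(eta_par / s - v / (4 * sp.pi * w)) == 0  # eta_par/s = (v/w)/(4 pi)
```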
In order to derive the horizon formulae, we use the Wronskian method (see e.g. \cite{Davison:2015taa}). Here, we will only explicitly show the derivation of the transverse resistivity $r_\perp$. The other formulae from Eq. \eqref{horizonFormulae} are derived in Appendix \ref{appendix:transport}. First, we combine the equations of motion for the relevant fluctuations, $\delta B_{xz}$, $\delta G_{tx}$, and $\delta G_{xu}$, into a single second-order differential equation by eliminating the metric fluctuations,
\begin{equation}\label{eomForRperp}
\delta B_{xz}''+\left(\frac{3}{2u}+ \frac{F'}{F} -\mathcal{W}' \right)\delta B_{xz}' +
\left(\frac{\omega^2}{4r_h^2 u^3F^2} -\frac{B^2 e^{-4\mathcal{V}}}{u^3F} \right)\delta B_{xz} = 0 \,.
\end{equation}
Since we are only computing first-order transport coefficients, it is sufficient to solve Eq. \eqref{eomForRperp} to linear order in $\omega$. To find the solution, we assume that there exists a time-independent solution $\mathfrak{b}_{xz}^{(-)}(u)$, which asymptotes to a constant both at the boundary and the horizon. At the boundary, this asymptotic value is related to the source of the two-form background gauge field, i.e. $\mathfrak{b}_{xz}^{(-)}(u\to 0) = \delta B_{xz}^{(0)}$. The time-dependent information is contained in the second solution, linearly independent from $\mathfrak{b}^{(-)}_{xz}$. We refer to this solution as $\mathfrak{b}_{xz}^{(+)}$. It can be expressed as an integral over the Wronskian $W_R$ of \eqref{eomForRperp}:
\begin{align}
\mathfrak{b}_{xz}^{(+)}(u) = \mathfrak{b}_{xz}^{(-)}(u) \int^1_u du'
\frac{ W_R (u') }{ \left( \mathfrak{b}_{xz}^{(-)}(u')\right)^2} \,,
\end{align}
where
\begin{align}
W_R (u) = \exp\left[ -\int_u^1 du' \left( \frac{3}{2u'}+\frac{F' (u')}{F(u')} - \mathcal{W}' (u')\right) \right] = \frac{1}{u^{3/2}F e^{-\mathcal{W}}} \, .
\end{align}
The near-boundary and the near-horizon expansions of $\mathfrak{b}_{xz}^{(+)}$ are
\begin{equation}
\mathfrak{b}_{xz}^{(+)} = \begin{dcases}
\sqrt{w}\left[\mathfrak{b}_{xz}^{(-)}(0)\right]^{-1}\ln u +\mathcal{O}(\sqrt{u}) \,,& ~~~~\text{for}\;\; u\approx 0 \, ,\\
-r_h\left[2\pi T\mathfrak{b}_{xz}^{(-)}(1)\right]^{-1}\ln (1-u) +\mathcal{O}(1-u) \,,& ~~~~\text{for}\;\; u\approx 1 \,.
\end{dcases}
\label{eqn:asymptbxz}
\end{equation}
The full solution $\delta B_{xz}(\omega,u)$ is then the following linear combination of the two solutions:
\begin{equation}
\delta B_{xz}(\omega,u) = \mathfrak{b}_{xz}^{(-)}(u) + \alpha(\omega)\mathfrak{b}_{xz}^{(+)}(u)+\mathcal{O}(\omega^2) \,.
\end{equation}
The coefficient $\alpha(\omega)$ can be determined by imposing a regular ingoing boundary condition at the horizon, which corresponds to computing a retarded dual correlator \cite{Son:2002sd,Herzog:2002pc}:
\begin{align}
\delta B_{xz} (u) = (1- u)^{- \frac{i\omega}{4\pi T}} \tilde B_{xz}\,.
\end{align}
The function $\tilde B_{xz}(u)$ is regular at the horizon. This choice of the boundary condition implies that near the horizon, $\delta B_{xz}$ behaves as
\begin{equation}\label{ingoingBxz}
\delta B_{xz}(u) = \mathfrak{b}_{xz}^{(-)}(u) + \alpha(\omega)
\mathfrak{b}_{xz}^{(+)}(u) + \ldots = \mathfrak{b}_{xz}^{(-)}(1) \left(1-\frac{i\omega}{4\pi
T}\ln(1-u) \right) +\ldots \,.
\end{equation}
Comparing Eq. \eqref{ingoingBxz} with the asymptotic behaviour of $\mathfrak{b}_{xz}^{(+)}$ in \eqref{eqn:asymptbxz}, we find $\alpha = (i\omega/2 r_h)\left[\mathfrak{b}_{xz}^{(-)}(1)\right]^2$. Thus, the near-boundary expression for $\delta B_{xz}$ becomes
\begin{equation}
\delta B_{xz}(u) = \mathfrak{b}_{xz}^{(-)}(0) \left(1+\frac{i\omega}{2r_h}\sqrt{w}
\left[ \frac{ \mathfrak{b}_{xz}^{(-)}(1)}{ \mathfrak{b}_{xz}^{(-)}(0)} \right]^2\ln u\right) +\mathcal{O}(u) \,.
\end{equation}
By substituting this expression into the expectation value \eqref{defJ} of the two-form current $\langle J^{\mu\nu}\rangle$, we obtain
\begin{equation}
\langle \delta J^{xz}\rangle = \lim_{u\to 0}\left( 2u^{3/2}\sqrt{F}\delta B_{xz}'(u) \right)= \frac{2\pi^2}{N_c^2}\left( 2i\omega r_h^{-1}\sqrt{w} \left[ \frac{ \mathfrak{b}_{xz}^{(-)}(1)}{ \mathfrak{b}_{xz}^{(-)}(0) }\right]^2 \right)\delta b_{xz} + \mathcal{O}(\omega^2) \, .
\end{equation}
The expression on the right-hand side of the second equality is obtained by using the relation between $\delta B_{xz}^{(0)}$, $\delta B_{xz}^{(1)}$ and $\delta b_{xz}$ in \eqref{eq:B0toSourceb}. Note that the dependence on the electromagnetic coupling enters the one-point function at order $\omega^2$ and, thus, $\bar\alpha$ plays no role in the holographic formula for the resistivity; $r_\perp$ and the other first-order transport coefficients are independent of the renormalised electromagnetic coupling. Finally, using the Kubo formula for $r_\perp$, which is derived and presented in Eq. \eqref{practicalKubo} of Appendix \ref{appendix:kubo}, we recover the expression presented in Eq. \eqref{horizonFormulae}. All six remaining transport coefficients can be obtained by following the same procedure. We refer the reader to Appendix \ref{appendix:transport} for their detailed derivations.
The plots of the (dimensionless) transport coefficients $\eta_{\parallel}$, $\zeta_{\parallel}$, $r_{\perp}$ and $r_\parallel$ as a function of $T/\sqrt{\mathcal{B}}$ are presented in Figure \ref{fig:Transport}. The remaining three viscosities can easily be inferred from Eq. \eqref{horizonFormulae}. In particular, $\eta_{\perp} / s = 1 / (4\pi)$, $\zeta_\perp = \zeta_\parallel / 4$ and $\zeta_\times = - \zeta_\parallel / 2$. We note that all transport coefficients satisfy the positive entropy production bounds discussed in Section \ref{sec:Intro}. Interestingly, the bulk viscosity inequality $\zeta_\perp \zeta_\parallel \geq \zeta_\times^2$ is saturated in the plasma studied here, i.e. $\zeta_\perp \zeta_\parallel = \zeta_\times^2$, for all parameters of the theory.
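The saturation follows identically from the relations above, since $\zeta_\perp \zeta_\parallel = (\zeta_\parallel/4)\,\zeta_\parallel = (\zeta_\parallel/2)^2 = \zeta_\times^2$. A minimal numerical sanity check (the value of $\zeta_\parallel$ below is an arbitrary placeholder, not model data):

```python
# Arbitrary placeholder value; in the model zeta_par depends on T/sqrt(B).
zeta_par = 0.37
zeta_perp = zeta_par / 4       # holographic relation zeta_perp = zeta_par / 4
zeta_cross = -zeta_par / 2     # holographic relation zeta_cross = -zeta_par / 2

# Entropy-production bound zeta_perp * zeta_par >= zeta_cross^2 is saturated.
gap = zeta_perp * zeta_par - zeta_cross**2
```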
\begin{figure}[tbh]
\center
\includegraphics[width=0.482\textwidth]{etaL}
\includegraphics[width=0.482\textwidth]{zetaL}
\includegraphics[width=0.482\textwidth]{rP}
\includegraphics[width=0.482\textwidth]{rL}
\caption{The plots of (dimensionless) first-order transport coefficients as a function of $T / \sqrt{\mathcal{B}}$.}
\label{fig:Transport}
\end{figure}
We can now investigate the behaviour of the transport coefficients in the two extreme limits of $T / \sqrt{\mathcal{B}} \to 0$ and $T/ \sqrt{\mathcal{B}} \to \infty$, i.e. the strong- and the weak-field regimes, respectively. We assume that the leading-order behaviour in each limit is a power law; the exponents and coefficients then follow from numerical fits. The results are presented in Table \ref{table:TC}.
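Such exponents can be extracted by a linear fit in log--log variables. A minimal sketch on synthetic data (the amplitude and exponent below are assumed toy inputs, not the actual numerical output of the model):

```python
import numpy as np

# Synthetic stand-in for a transport coefficient obeying a pure power law
# c(x) = A * x^p, with x = T / sqrt(B) deep in the strong-field regime.
# A_true and p_true are toy inputs (cf. the eta_par entry of the table).
A_true, p_true = 21.32, 2.0
x = np.logspace(-3, -1, 50)
c = A_true * x**p_true

# Fit log(c) = log(A) + p * log(x); the slope of the linear fit is the exponent.
p_fit, logA_fit = np.polyfit(np.log(x), np.log(c), 1)
A_fit = np.exp(logA_fit)
```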
\begin{table}[tbh]
\begin{center}
\begin{tabular}{| l | l | l |}
\hline
& weak field ($T / \sqrt{\mathcal{B}} \gg 1$) & strong field ($T / \sqrt{\mathcal{B}} \ll 1$) \\ \hline
$\eta_\perp~~$ & $\frac{s}{4\pi}$ & $\frac{s}{4\pi}$ \\ \hline
$\eta_\parallel~~$ & $1.00\times \frac{s}{4\pi}$ & $\frac{s}{4\pi} \left( 21.32\times \frac{T^2}{\mathcal{B}} \right)$ \\ \hline
$\zeta_\perp~~$ & $0.33\times \frac{s}{4\pi}$ &$\frac{s}{4\pi} \left( 16.34\times \frac{T^3}{\mathcal{B}^{3/2}} \right) $ \\ \hline
$\zeta_\parallel~~$ & $1.33\times \frac{s}{4\pi}$ & $\frac{s}{4\pi} \left( 65.37\times \frac{T^3}{\mathcal{B}^{3/2}} \right) $\\ \hline
$\zeta_\times~~$ &$-0.66 \times \frac{s}{4\pi}$ & $ - \frac{s}{4\pi} \left( 32.69 \times \frac{T^3}{\mathcal{B}^{3/2}} \right) $ \\ \hline
$r_\perp~~$ & $\frac{\mathcal{B}}{\mu} \left( 3.37\times \frac{1}{T} \right)$ & $\frac{\sqrt{\mathcal{B}}}{\mu}\left(4.7\times \frac{T^3}{\mathcal{B}^{3/2}} \right) $ \\ \hline
$r_\parallel~~$ & $\frac{\mathcal{B}}{\mu} \left( 3.37\times \frac{1}{T}\right) $& $ \frac{\sqrt{\mathcal{B}}}{\mu} \left( 62.3\times \frac{T}{\sqrt{\mathcal{B}}}\right) $ \\ \hline
\end{tabular}
\end{center}
\caption{Approximate asymptotic behaviour of all first-order transport coefficients in weak- and strong-field limits. The temperature-dependent scaling of the shear viscosities at low temperature agrees with what was reported in Ref. \cite{Critelli:2014kra}.}
\label{table:TC}
\end{table}
Since the entropy density $s$ vanishes in the limit of zero temperature, all first-order transport coefficients vanish in the strong-field limit of $T\to0$. Furthermore, as we will see, all (first-order) dissipative effects also vanish in the $T\to 0$ limit. These observations are consistent with predictions of \cite{Grozdanov:2016tdf} based on symmetry arguments.
In the regime of a weak magnetic field, $T\gg \sqrt{\mathcal{B}}$, we find that both shear viscosities converge to $\eta_\perp = \eta_\parallel = s/(4\pi)$ as $\mathcal{B}/T^2 \to 0$. On the other hand, the longitudinal bulk viscosity tends to $\zeta_\parallel \to 4 \eta / 3$ (with $\eta \equiv \eta_\perp = \eta_\parallel$), which is consistent with the fact that as $\mathcal{B}/T^2 \to 0$, the evolution of the plasma should be governed by uncharged relativistic conformal hydrodynamics (see e.g. \cite{Kovtun:2012rj} or Appendix \ref{appendix:transport}). Both resistivities, $r_\perp$ and $r_\parallel$, also tend to zero in this limit.
We also note that the weak-field behaviour of $r_\perp$ and $r_\parallel$ is consistent with the assumption used to construct standard (ideal) MHD, whereby conductivity is taken to infinity, $\sigma \approx 1 / r \to \infty$, and corrections proportional to $1/\sigma$ are then added.\footnote{See Ref. \cite{Grozdanov:2016tdf} for a discussion regarding the subtleties in relating resistivities to conductivities.} In other words, small weak-field resistivities are compatible with the assumption of ideal Ohm's law, which gives rise to Eq. \eqref{IdealOhm} (see also our discussion around this equation in Section \ref{sec:Intro}). Furthermore, note that in standard MHD, only one resistivity (conductivity) is typically added to include dissipative corrections. In our theory, the two resistivities take similar values in the weak-field limit in which standard MHD applies. However, in the strong-field limit, they assume drastically different values, including a different scaling with $T/\sqrt{\mathcal{B}}$. This observation therefore further points to the important role of anisotropic effects in MHD \cite{Grozdanov:2016tdf} and the necessity for using the formulation of \cite{Grozdanov:2016tdf,Hernandez:2017mch} as one moves from the weak- to the strong-field regime.
The fact that $r_\perp$ and $r_\parallel$ tend to zero both in the limit of $T / \sqrt{\mathcal{B}} \to 0$ and in the limit of $T / \sqrt{\mathcal{B}} \to \infty$, along with the positivity of the entropy production bounds $r_\perp \geq 0$ and $r_\parallel \geq 0$ \cite{Grozdanov:2016tdf}, implies that the resistivities always attain a maximum at some intermediate $T / \sqrt{\mathcal{B}}$. It would be interesting to find the sizes of these maxima in experimentally realisable systems and probe the regimes of the ``least conductive'' plasmas. Finally, it would be interesting to further investigate the connection between maximal $r$ and various discussions of lower bounds on conductivities, e.g. \cite{PhysRevLett.111.125004,Grozdanov:2015qia,Lucas:2017ggp}.
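The existence of this maximum can be made concrete with a toy crossover profile that matches the two asymptotic scalings of $r_\perp$ from Table \ref{table:TC}. The gluing of the two limits below is purely illustrative (an assumption, not the actual holographic curve of Figure \ref{fig:Transport}):

```python
import numpy as np

# Toy crossover for the dimensionless r_perp * mu / sqrt(B) as a function of
# x = T / sqrt(B): r ~ c1 * x^3 for x -> 0 and r ~ c2 / x for x -> infinity,
# glued by a harmonic-mean interpolation (an assumption, not the model curve).
c1, c2 = 4.7, 3.37                       # asymptotic coefficients from the table
x = np.linspace(1e-3, 10.0, 100000)
r = c1 * c2 * x**3 / (c2 + c1 * x**4)    # = 1 / (1/(c1 x^3) + x/c2)

i_max = int(np.argmax(r))
x_max = x[i_max]                         # analytic maximum at x^4 = 3 c2 / c1
```

Any positive profile with these two vanishing asymptotics necessarily has an interior maximum; only its location and height depend on the interpolation.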
\section{Magnetohydrodynamic waves in a strongly coupled plasma}\label{sec:MHDWaves}
We are now ready to use the information obtained from the holographic analysis of Section \ref{sec:Holography} to study dissipative dispersion relations of magnetohydrodynamic waves in a toy model of a strongly coupled plasma. We will use the theory of MHD \cite{Grozdanov:2016tdf}, which is a phenomenological effective theory, and supplement it with microscopic details---the equation of state and transport coefficients---of the holographic setup investigated above. We will be particularly interested in the dependence of the MHD modes on the angle between momentum and magnetic field, as well as the ratio between temperature and the strength of the magnetic field. The 't Hooft coupling of interactions in the matter sector is not tuneable in our model; the electromagnetic coupling, however, is. In all sections except Section \ref{sec:alphaDependence}, it will be set to $\alpha = 2 \pi^2 / 137 N_c^2$.
Before presenting the numerical results, we review the relevant facts about MHD modes. For a detailed derivation of these results, see Ref. \cite{Grozdanov:2016tdf}, and for a discussion of the general procedure, see Refs. \cite{Kadanoff,Kovtun:2012rj}. First, we write the hydrodynamic variables $u^\mu$, $h^\mu$, $T$ and $\mu$ in terms of oscillating modes perturbed around their equilibrium values, e.g. $u^\mu \to (1,0,0,0) + \delta u^\mu \, e^{-i \omega t + i k x \sin\theta + i k z \cos\theta}$, so that $\theta \in [0, \pi/2]$ measures the angle between the equilibrium magnetic field pointing in the $z$-direction and the wave momentum $k$ in the $x$--$z$ plane. The dispersion relations $\omega(k)$ are then derived from the equations of MHD, i.e. Eqs. \eqref{EOM1} and \eqref{EOM2}, with the external $H_{\mu\nu\rho} = 0$. The solutions depend on the angle $\theta$, temperature $T$ and the strength of the magnetic field (or the chemical potential of the magnetic flux number density), parametrised in our solutions by $\mathcal{B}$. Any dimensionless quantity will only depend on the single dimensionless ratio $T / \sqrt{\mathcal{B}}$. The resulting modes can be decomposed into two channels---odd and even under the reflection of $y \to -y$. The first channel is the transverse Alfv\'{e}n channel. The second is the magnetosonic channel with two branches of solutions: slow and fast magnetosonic waves.
The linearised MHD equations of motion \eqref{EOM1} and \eqref{EOM2} need to be expanded in the hydrodynamic regime in powers of small $\omega / \Lambda_h \ll 1 $ and $k/ \Lambda_h \ll 1$, where $\Lambda_h$ is the UV cut-off of the effective theory. In standard MHD, where $T \gg \sqrt{\mathcal{B}}$, the cut-off is $\Lambda_h \approx T$, whereas in the strong-field regime of $T \ll \sqrt{\mathcal{B}}$, the cut-off can be set by the magnetic field, $\Lambda_h \approx \sqrt{\mathcal{B}}$. As argued in \cite{Grozdanov:2016tdf}, hydrodynamics may exist all the way to $T \to 0$, even when $\delta T = 0$. Such an expansion, performed to some order, gives rise to a polynomial equation in $\omega$ and $k$. For example, in the Alfv\'{e}n channel, within first-order dissipative MHD,
\begin{equation}\label{AlfvenDispersion}
\begin{aligned}
-\omega^2 + \left(\frac{\mu\rho \cos^2\theta }{\varepsilon+p}\right)k^2 - i \left[ \left( \frac{\mu r_\perp}{\rho} + \frac{\eta_\parallel}{\varepsilon+p} \right)\cos^2\theta + \left( \frac{\mu r_\parallel}{\rho} + \frac{\eta_\perp}{\varepsilon+p} \right)\sin^2\theta \right] \omega k^2 & \\
+ \frac{\mu }{2 \rho (\varepsilon+p)}\left( r_\perp \cos^2 \theta + 2 r_\parallel \sin^2\theta \right)\left( \eta_\perp \sin^2\theta + \eta_\parallel \cos^2\theta \right)k^4 &= 0 \, .
\end{aligned}
\end{equation}
The two solutions of the quadratic equation for $\omega$ are given by
\begin{align}\label{AlfvenSolFull}
\omega = -\frac{i}{2}(\mathcal{D}_{A,+})k^2 \pm \frac{k}{2}\sqrt{\mathcal{V}_A^2 \cos^2\theta - (\mathcal{D}_{A,-})^2 k^2} \,,
\end{align}
where $\mathcal{D}_{A,+}$ and $\mathcal{D}_{A,-}$ are
\begin{equation}
\begin{aligned}
\mathcal{D}_{A,\pm} &= \left( \frac{\mu r_\perp}{\rho} \pm \frac{\eta_\parallel}{\varepsilon+p} \right)\cos^2\theta + \left( \frac{\mu r_\parallel}{\rho} \pm \frac{\eta_\perp}{\varepsilon+p} \right)\sin^2\theta \,.
\end{aligned}
\end{equation}
One can now series expand $\omega(k) = \mathcal{D}_0 k + \mathcal{D}_1 k^2$, or alternatively, insert this ansatz into Eq. \eqref{AlfvenDispersion} and solve it order-by-order in $k$. What we find is the Alfv\'{e}n wave dispersion relation \cite{Grozdanov:2016tdf}:
\begin{equation}\label{AlfvenDispersion2}
\omega = \pm \mathcal{V}_A k\cos\theta - \frac{i}{2} \left( \frac{1}{\varepsilon+p}\left( \eta_\perp\sin^2\theta + \eta_\parallel \cos^2\theta \right)+ \frac{\mu}{\rho} \left(r_\perp \cos^2\theta + r_\parallel \sin^2\theta \right)\right)k^2 \, ,
\end{equation}
where the speed is given by $\mathcal{V}_A^2 = \mu\rho/(\varepsilon+p)$. The dispersion relation appears to be well-defined for any angle $\theta \in [0, \pi/2]$ between momentum and equilibrium magnetic field. In particular, if we were to take the $\theta \to \pi/2$ limit, \eqref{AlfvenDispersion2} would yield two diffusive modes, both with dispersion relation
\begin{align}\label{AlfDiffK0First}
\omega = - \frac{i}{2} \left( \frac{\eta_\perp}{\varepsilon + p } + \frac{\mu r_\parallel}{\rho} \right) k^2\, ,
\end{align}
which are, however, unphysical and only result from an incorrect order of limits of $k$ and $\theta$.
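Since Eq. \eqref{AlfvenDispersion} is quadratic in $\omega$, its reduction to the small-$k$ dispersion relation \eqref{AlfvenDispersion2} can be checked numerically. A sketch with assumed placeholder values for the coefficient combinations (in the model these are fixed by the thermodynamic and transport data at a given $T/\sqrt{\mathcal{B}}$):

```python
import numpy as np

# Assumed placeholder values of the coefficient combinations.
VA2 = 0.64              # V_A^2 = mu*rho/(eps + p)
Dp = 0.5                # D_{A,+}, the O(omega k^2) damping combination
Q = 0.05                # coefficient of the k^4 term
theta, k = 0.3, 1e-3
c2 = np.cos(theta)**2

# Roots of  -omega^2 - i*Dp*k^2*omega + (VA2*c2*k^2 + Q*k^4) = 0.
roots = np.roots([-1.0, -1j * Dp * k**2, VA2 * c2 * k**2 + Q * k**4])

# First-order expansion: omega = +/- V_A cos(theta) k - (i/2) D_{A,+} k^2.
expansion = np.array([1.0, -1.0]) * np.sqrt(VA2) * np.cos(theta) * k \
            - 0.5j * Dp * k**2
```

At this value of $k$ the exact roots and the expansion differ only at $\mathcal{O}(k^3)$.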
As can be seen from the structure of the square-root in Eq. \eqref{AlfvenSolFull}, the expansion in small $k$ is only sensible so long as $k^2 \ll \mathcal{V}_A^2 \cos^2\theta / (\mathcal{D}_{A,-})^2$. Hence, even for a small finite $k$, this expansion is inapplicable for angles $\theta$ near $\theta = \pi / 2$ where $\cos \theta$ becomes very small. In fact, for
\begin{equation}\label{eq:criticalThetaAlfven}
\mathcal{V}_A^2 \cos^2\theta \leq (\mathcal{D}_{A,-})^2k^2 \,,
\end{equation}
the propagating modes cease to exist altogether and the two modes become purely imaginary (diffusive to $\mathcal{O}(k^2)$). The transmutation of two propagating Alfv\'{e}n modes into two non-propagating modes occurs when the inequality in \eqref{eq:criticalThetaAlfven} is saturated, i.e. at the critical angle $\theta_c$ when $\text{Re}[\omega] = 0$:
\begin{align}\label{ThetaC}
\frac{ \cos (\theta_c)}{\mathcal{D}_{A,-}(\theta_c)} = \frac{k}{\mathcal{V}_A} .
\end{align}
In other words, the plasma exhibits propagating (sound) modes for $0 \leq \theta < \theta_c$ and non-propagating (diffusive) modes for $\theta_c < \theta \leq \pi / 2$. We plot the dependence of the critical angle $\theta_c$ on $k/\sqrt{\mathcal{B}}$ and $T/\sqrt{\mathcal{B}}$ for the Alfv\'{e}n waves in our model in Figure \ref{fig:CriticalAngleAlfven}. What we see is that for small $k / \sqrt{\mathcal{B}}$ and small $T / \sqrt{\mathcal{B}}$, the transition to diffusive modes occurs closer to $\theta_c \approx \pi / 2$. For any fixed and finite $T / \sqrt{\mathcal{B}}$, Eq. \eqref{ThetaC} indeed implies that $\theta_c \to \pi / 2$ as $k\to 0$.
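In practice, Eq. \eqref{ThetaC} can be solved by a one-dimensional root search, since $\mathcal{V}_A \cos\theta - k\,\mathcal{D}_{A,-}(\theta)$ changes sign exactly once on $(0, \pi/2)$ for small $k$. A sketch with assumed placeholder inputs (and assuming $\mathcal{D}_{A,-} > 0$):

```python
import math

# Assumed placeholder inputs; in the model these follow from T/sqrt(B).
VA = 0.8                                # Alfven speed V_A
A, B = 0.4, 0.7                         # D_{A,-}(theta) = A cos^2 + B sin^2

def D_minus(theta):
    return A * math.cos(theta)**2 + B * math.sin(theta)**2

def theta_c(k, tol=1e-12):
    """Bisection for V_A cos(theta) = k D_{A,-}(theta) on (0, pi/2)."""
    f = lambda th: VA * math.cos(th) - k * D_minus(th)
    lo, hi = 0.0, math.pi / 2           # f(lo) > 0 and f(hi) < 0 for small k
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(mid) > 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

Consistent with the discussion above, the computed $\theta_c$ approaches $\pi/2$ as $k \to 0$.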
We note that as already pointed out in \cite{Hernandez:2017mch}, the limits of $k\to0$ and $\theta \to \pi /2$ do not commute and we obtain different results depending on which expansion ($k \approx 0$ or $\theta \approx \pi/2$) is performed first. If one first takes the limit $\theta \to \pi /2$, then Eq. \eqref{AlfvenDispersion} becomes
\begin{equation}\label{AlfvenDispersionOpt2}
-\omega^2 - i \left( \frac{\mu r_\parallel}{\rho} + \frac{\eta_\perp}{\varepsilon+p} \right) \omega k^2 + \frac{ \mu r_\parallel \eta_\perp }{\rho(\varepsilon+p)} k^4 = 0 \, ,
\end{equation}
which instead of Eq. \eqref{AlfDiffK0First} results in two non-degenerate diffusive modes
\begin{align}
\omega = - i \frac{\eta_\perp}{\varepsilon+p} k^2\,, && \omega = - i \frac{\mu r_\parallel}{\rho} k^2 \, .
\end{align}
The dispersion relation \eqref{AlfvenDispersion2} is therefore only sensible at a finite $T/\sqrt{\mathcal{B}}$ and infinitesimally small $k / \Lambda_h$.
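The factorisation behind these two modes is immediate to verify: Eq. \eqref{AlfvenDispersionOpt2} is a quadratic in $\omega$ whose roots are exactly $-i a k^2$ and $-i b k^2$, with $a = \mu r_\parallel/\rho$ and $b = \eta_\perp/(\varepsilon+p)$. A numerical sketch with assumed placeholder values:

```python
import numpy as np

# Assumed placeholder values for a = mu*r_par/rho and b = eta_perp/(eps + p).
a, b = 0.3, 0.8
k = 0.05

# Roots of  -omega^2 - i*(a + b)*omega*k^2 + a*b*k^4 = 0.
roots = np.roots([-1.0, -1j * (a + b) * k**2, a * b * k**4])
expected = [-1j * a * k**2, -1j * b * k**2]
```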
In the magnetosonic channel, the story is entirely analogous to the one described for the Alfv\'{e}n waves. By expanding around $k\approx 0$ first, we obtain the dispersion relation of \cite{Grozdanov:2016tdf}:
\begin{align}
\omega = \pm v_M k - i \tau k^2 \, ,
\end{align}
where the speed of the magnetosonic waves is given by
\begin{equation}\label{speedvM}
v_M^2 = \frac{1}{2}\left\{ (\mathcal{V}_A^2+ \mathcal{V}_0^2)\cos^2\theta + \mathcal{V}_S^2\sin^2\theta \pm \sqrt{[(\mathcal{V}_A^2-\mathcal{V}_0^2)\cos^2\theta + \mathcal{V}_S^2\sin^2\theta]^2+ 4\mathcal{V}^4 \cos^2\theta \sin^2\theta }\right\}.
\end{equation}
The functions $\mathcal{V}_A$, $\mathcal{V}_0$, $\mathcal{V}_S$ and $\mathcal{V}$ appearing in \eqref{speedvM} are
\begin{align}
\mathcal{V}_A^2 &= \frac{\mu\rho}{\varepsilon+p}, & \mathcal{V}_0^2 &= \frac{s}{T\chi_{11}}, \nonumber\\
\mathcal{V}_S^2 &= \frac{(s-\rho\chi_{12})(s+\rho\chi_{21})+\rho^2\chi_{11}\chi_{22}}{(\varepsilon+p)\chi_{11}}, & \mathcal{V}^4 &= \frac{s(s-\rho\chi_{12})(s+\rho\chi_{21})}{T(\varepsilon+p)\chi_{11}^2}.
\end{align}
The susceptibilities are\footnote{Note that these susceptibilities are different from the ones used in \cite{Grozdanov:2016tdf}, where independent thermodynamic quantities were $T$ and $\mu$, not $T$ and $\rho$. For this reason we also use different notation.}
\begin{equation}\label{susceptibilities}
\chi_{11} = \left( \frac{\partial s}{\partial T}\right)_\rho,\quad \chi_{12} = \left( \frac{\partial s}{\partial \rho}\right)_T,\quad \chi_{21} = \left( \frac{\partial \mu}{\partial T}\right)_\rho,\quad \chi_{22} = \left( \frac{\partial \mu}{\partial \rho}\right)_T \,.
\end{equation}
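Numerically, these susceptibilities can be evaluated by central finite differences of the thermodynamic data at fixed $\rho$ or fixed $T$. A sketch on a toy (assumed) equation of state, not the holographic $s(T,\rho)$ and $\mu(T,\rho)$:

```python
# Toy equation of state, used only to illustrate the finite-difference
# evaluation of the susceptibilities; NOT the holographic s(T,rho), mu(T,rho).
def s(T, rho):
    return T**3 + rho * T

def mu(T, rho):
    return rho / T

def d_dT(f, T, rho, h=1e-6):
    """Central difference at fixed rho."""
    return (f(T + h, rho) - f(T - h, rho)) / (2 * h)

def d_drho(f, T, rho, h=1e-6):
    """Central difference at fixed T."""
    return (f(T, rho + h) - f(T, rho - h)) / (2 * h)

T0, rho0 = 1.3, 0.7
chi11 = d_dT(s, T0, rho0)        # (ds/dT)_rho   -> 3*T0^2 + rho0
chi12 = d_drho(s, T0, rho0)      # (ds/drho)_T   -> T0
chi21 = d_dT(mu, T0, rho0)       # (dmu/dT)_rho  -> -rho0/T0^2
chi22 = d_drho(mu, T0, rho0)     # (dmu/drho)_T  -> 1/T0
```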
The two types of magnetosonic waves, corresponding to the $\pm$ solutions in \eqref{speedvM}, are known as the fast (with $+$) and the slow (with $-$) magnetosonic waves. We refer the reader to Appendix \ref{appendix:magnetosonicSpectrum} for further details regarding the derivation of the magnetosonic modes. Each pair of the propagating slow magnetosonic modes also splits, in analogy with the Alfv\'{e}n waves, into two non-propagating diffusive modes for $\theta \geq \theta_c$. The critical angle $\theta_c$ for magnetosonic modes is defined as in the Alfv\'{e}n channel: it is the angle at which $\text{Re}[\omega] = 0$. We plot the numerically computed dependence of the magnetosonic $\theta_c$ on $k/\sqrt{\mathcal{B}}$ and $T/\sqrt{\mathcal{B}}$ in Fig. \ref{fig:CriticalAngleAlfven}. As can be seen from the plot, the critical angles for the two types of waves are independent of each other; however, they show a similar qualitative dependence on the parameters that characterise the waves.
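The limiting values of Eq. \eqref{speedvM} are easy to check numerically: at $\theta = 0$ the two branches reduce to $\max(\mathcal{V}_A^2, \mathcal{V}_0^2)$ and $\min(\mathcal{V}_A^2, \mathcal{V}_0^2)$, while at $\theta = \pi/2$ the fast speed reduces to $\mathcal{V}_S^2$ and the slow one to zero. A sketch with assumed placeholder values for the four speed functions:

```python
import numpy as np

# Assumed placeholder values; in the model all four are functions of T/sqrt(B).
VA2, V02, VS2, V4 = 0.5, 0.3, 0.4, 0.1   # V4 denotes the quantity V^4

def vM2(theta, sign):
    """Squared magnetosonic speed from Eq. (speedvM); sign = +1 (fast), -1 (slow)."""
    c2, s2 = np.cos(theta)**2, np.sin(theta)**2
    mean = (VA2 + V02) * c2 + VS2 * s2
    disc = ((VA2 - V02) * c2 + VS2 * s2)**2 + 4.0 * V4 * c2 * s2
    return 0.5 * (mean + sign * np.sqrt(disc))
```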
\begin{figure}[tbh]
\center
\includegraphics[width=.49\textwidth]{CriticalAngleAlfven}
\includegraphics[width=.49\textwidth]{CriticalAngleMS}
\caption{The critical angle $\theta_c$ for Alfv\'en waves (left) and slow magnetosonic waves (right), plotted as a function of $T/\sqrt{\mathcal{B}}$ for $k/\sqrt{\mathcal{B}} = \{0.1, \, 0.2, \,0.4, \, 0.6 \}$. The dashed line at the top of both sub-figures indicates the value of $\theta_c = \pi/2$.}
\label{fig:CriticalAngleAlfven}
\end{figure}
We summarise the $\theta$-dependent characteristics of MHD modes in Fig. \ref{fig:SummaryHighT}. We observe that the pattern of transmutation of sound modes into diffusive modes differs between the weak- and strong-field regimes. Namely, the two magnetosonic waves interchange their dispersion relations at small $\theta$. Since the complicated expressions for the dispersion relations greatly simplify at $\theta = 0$ and $\theta = \pi/2$, we state them below. The sound mode dispersion relations, denoted by S, are
\begin{equation}\label{specialSound}
\begin{aligned}
\text{S1} &:\quad \omega = \pm \mathcal{V}_S k - \frac{i}{2} \Bigg\{ \frac{\zeta_\perp + \eta_\perp}{\varepsilon+p} \\
&\quad\quad + \frac{r_\perp \left[ (s-\rho\chi_{12})(\mu-T\chi_{21}) - \rho T \chi_{11}\chi_{22}\right]\left[ (s+\rho\chi_{21})(\mu+T\chi_{12}) - \rho T \chi_{11}\chi_{22}\right]}{T^2\chi_{11}\left[(s-\rho\chi_{12})(s+\rho\chi_{21}) + \rho^2\chi_{11}\chi_{22} \right]}\Bigg\} k^2 \,,\\
\text{S2} &:\quad \omega =\pm \mathcal{V}_A k - \frac{i}{2}\left( \frac{\eta_\parallel}{\varepsilon+p} +\frac{\mu r_\perp}{\rho}\right) k^2 \, ,\\
\text{S3} &:\quad \omega =\pm \mathcal{V}_0 k - \frac{i}{2} \frac{\zeta_\parallel}{sT} k^2\,,\\
\end{aligned}
\end{equation}
and the diffusive modes, denoted by D, are
\begin{equation}\label{specialDiffuse}
\begin{aligned}
\text{D1} &:\qquad \omega = -i\frac{\eta_\parallel}{sT}k^2\,,\\
\text{D2} &:\qquad \omega = -\frac{ir_\perp (\varepsilon+p)^2\chi_{22}}{T^2\left[ (s-\rho\chi_{12})(s+\rho\chi_{21})+\rho^2\chi_{11}\chi_{22} \right]}k^2\,,\\
\text{D3} &:\qquad \omega = -i\frac{\eta_\perp}{\varepsilon+p}k^2\,, \\
\text{D4} &:\qquad \omega = -i\frac{r_\parallel \mu}{\rho}k^2 \,.\\
\end{aligned}
\end{equation}
\begin{figure}[tbh]
\centering
\begin{tikzpicture}
\begin{scope}[every node/.style={circle,thick,draw}]
\node (A) at (0,3.5) {Fast};
\node (B) at (0,1.75) {Slow};
\node (C) at (0,0) {Alfv\'en};
\node (D) at (3,3.5) {S1} ;
\node (E) at (3,2.5) {D$1$} ;
\node (F) at (3,1.5) {D$2$} ;
\node (Ea) at (3,0.5) {D$3$} ;
\node (Fa) at (3,-0.5) {D$4$} ;
\node (A-1) at (-3,3.5) {S3};
\node (B-1) at (-3,1.75) {S2};
\end{scope}
\begin{scope}[>={Stealth[black]},
every node/.style={fill=white,circle},
every edge/.style={draw=black,very thick}]
\path [->] (A) edge node [above] {$\theta \rightarrow \pi/2 $} (D);
\path [->] (B) edge (E);
\path [->] (B) edge (F);
\path [->] (C) edge (Ea);
\path [->] (C) edge (Fa);
\path [->] (A) edge node [above] {$0 \leftarrow \theta $} (A-1);
\path [->] (B) edge (B-1);
\path [->] (C) edge (B-1);
\end{scope}
\end{tikzpicture}
\qquad
\begin{tikzpicture}
\begin{scope}[every node/.style={circle,thick,draw}]
\node (A) at (0,3.5) {Fast};
\node (B) at (0,1.75) {Slow};
\node (C) at (0,0) {Alfv\'en};
\node (D) at (3,3.5) {S1} ;
\node (E) at (3,2.5) {D$1$} ;
\node (F) at (3,1.5) {D$2$} ;
\node (Ea) at (3,0.5) {D$3$} ;
\node (Fa) at (3,-0.5) {D$4$} ;
\node (A-1) at (-3,3.5) {S2};
\node (B-1) at (-3,1.75) {S3};
\end{scope}
\begin{scope}[>={Stealth[black]},
every node/.style={fill=white,circle},
every edge/.style={draw=black,very thick}]
\path [->] (A) edge node [above] {$\theta \rightarrow \pi/2 $} (D);
\path [->] (B) edge (E);
\path [->] (B) edge (F);
\path [->] (C) edge (Ea);
\path [->] (C) edge (Fa);
\path [->] (A) edge node [above] {$ 0 \leftarrow \theta$} (A-1);
\path [->] (B) edge (B-1);
\path [->] (C) edge (A-1);
\end{scope}
\end{tikzpicture}
\vspace{0.5cm}
\caption{Diagrams depicting the $\theta$-dependent pattern of transmutation from sound to diffusive modes for Alfv\'{e}n waves and slow and fast magnetosonic waves. The left and right diagrams correspond to the weak- and strong-field regimes, respectively. The relevant dispersion relations are stated in Eqs. \eqref{specialSound} and \eqref{specialDiffuse}.}
\label{fig:SummaryHighT}
\end{figure}
In the regime of a large $T/\sqrt{\mathcal{B}}$, the results agree with those of \cite{Hernandez:2017mch}. Furthermore, using the asymptotic form of the thermodynamic quantities and transport coefficients in the $T/\sqrt{\mathcal{B}}\to \infty$ limit, one can show that these modes reduce to the sound and diffusive modes of uncharged relativistic hydrodynamics.
In the strong-field regime, which cannot be described within standard MHD, the speeds of S1 and S3 become large and approach the speed of light in the limit of $T\to0$. It is clear that in the strong-field regime, MHD sound waves can easily violate any causal upper bound on the speed of sound \cite{Cherman:2009tw,Cherman:2009kf,Hohler:2009tv,Hoyos:2016cob}. Furthermore, as discussed above, all diffusion constants vanish and the system becomes controlled by second-order MHD \cite{Grozdanov:2016tdf}, which we do not investigate in this work. All details regarding angle-dependent wave propagation are presented in Section \ref{Thetadep}.
\subsection{Speeds and attenuations of MHD waves}\label{SpeedsAndAtt}
Here, we plot the speeds (phase velocities) and first-order attenuation coefficients of the three types of MHD sound waves---the Alfv\'{e}n and the fast and slow magnetosonic waves---for the holographic strongly coupled plasma discussed above. These results assume an infinitesimally small value of the momentum $k$, and follow from first expanding a polynomial equation of the type of Eq. \eqref{AlfvenDispersion} around $k \approx 0$ and writing each dispersion relation as $\omega = \pm v k - i \mathcal{D} k^2$. The speeds $v$ (presented in Fig. \ref{fig:3Sound}) and attenuation coefficients $\mathcal{D}$ (presented in Fig. \ref{fig:3soundAtten}) are then plotted for all $0 \leq \theta \leq \pi / 2$, which, as discussed above, is only physically sensible when $\theta_c \to \pi / 2$, i.e. as $k\to 0$.
\begin{figure}[tbh]
\center
\includegraphics[width=0.37\textwidth]{3soundLowT}
\hspace{2cm}
\includegraphics[width=0.37\textwidth]{3soundMidT}
\includegraphics[width=0.37\textwidth]{3soundHighT}
\caption{Angular dependence of the speeds of Alfv\'{e}n (black, solid), fast (blue, dotted) and slow (red, dashed) magnetosonic waves in the strong-field, the crossover and the weak-field regimes.
}
\label{fig:3Sound}
\end{figure}
\begin{figure}[tbh]
\center
\includegraphics[width=0.32\textwidth]{3soundAttenLowT1}
\includegraphics[width=0.32\textwidth]{3soundAttenLowT2}
\includegraphics[width=0.32\textwidth]{3soundAttenMidT1}
\includegraphics[width=0.32\textwidth]{3soundAttenMidT2}
\includegraphics[width=0.32\textwidth]{3soundAttenHighT1}
\includegraphics[width=0.32\textwidth]{3soundAttenHighT2}
\caption{Angular dependence of the (dimensionless) attenuation coefficients of Alfv\'{e}n (black, solid), fast (blue, dotted) and slow (red, dashed) magnetosonic waves, $\mathcal{D}\sqrt{\mathcal{B}}$, in the strong-field, the crossover and the weak-field regimes.
}
\label{fig:3soundAtten}
\end{figure}
The angular profiles of the speeds and the dissipative attenuation coefficients show distinct behaviour in the strong-field, the crossover (cf. Eq. \eqref{Crossover}) and the weak-field regimes. In particular, the speeds of sound enter the weak-field regime, where they reduce to well-known standard MHD results, rapidly after the temperature exceeds $T / \sqrt{\mathcal{B}} \approx 0.7$. There, Alfv\'{e}n and slow magnetosonic waves travel with very similar speeds for all $\theta$ and their speeds coincide at $\theta = 0 $ and $\theta = \pi/2$. The situation is different in the strong-field regime, where the profiles of the speeds qualitatively match the strong-field predictions of \cite{Grozdanov:2016tdf}. There, slow magnetosonic and Alfv\'{e}n waves can travel faster at small $\theta$, with speeds comparable to those of fast magnetosonic waves. At $\theta = 0$, the Alfv\'{e}n speed equals that of the fast, instead of the slow, magnetosonic waves (cf. Fig. \ref{fig:SummaryHighT}). It should also be noted that there exists a value of $T / \sqrt{\mathcal{B}}$ in the crossover regime where all three speeds are equal at $\theta = 0$.
The attenuation coefficients, which involve all seven transport coefficients \cite{Grozdanov:2016tdf,Hernandez:2017mch}, are here computed for the first time for a concrete microscopically (holographically) realisable plasma and are therefore difficult to compare with past results. What we observe is that the Alfv\'{e}n waves experience the strongest damping for all values of $T / \sqrt{\mathcal{B}}$. Beyond that, the qualitative behaviour again displays distinct angle-dependent features in the three regimes, which are apparent from Fig. \ref{fig:3soundAtten}. A noteworthy, though not surprising, fact is that the strength of attenuation depends much more strongly on the angle between momentum and magnetic field in the regime of small $T / \sqrt{\mathcal{B}}$. Furthermore, in the crossover regime, we find that the strengths of fast and slow magnetosonic mode attenuations interchange roles as $T / \sqrt{\mathcal{B}}$ increases. In the plots at $T / \sqrt{\mathcal{B}} = 0.5$ and $T / \sqrt{\mathcal{B}} = 0.66$, there exists an angle $\theta$ at which the two attenuation strengths coincide.
\subsection{MHD modes on a complex frequency plane}\label{Thetadep}
At a finite value of the momentum $k$, a full analysis of the spectrum requires us to take into account the transmutation of sound modes into non-propagating diffusive modes. The pattern of this behaviour, as a function of the angle $\theta$ between momentum and the direction of the equilibrium magnetic field, was summarised in Fig. \ref{fig:SummaryHighT}. Motivated by holographic quasinormal mode (poles of two-point correlators) analyses, we plot the motion of the MHD modes on the complex frequency plane---here, as a function of $\theta$ and $T/\sqrt{\mathcal{B}}$. One should consider these plots as a prediction of how the first-order approximation to the hydrodynamic sector of the full quasinormal spectrum computed from the theory \eqref{HoloAction} is expected to behave.
In Fig. \ref{fig:complexPlaneVaryThetaExtremeT}, we plot the typical $\theta$-dependent trajectories of $\omega(\theta)$ for Alfv\'{e}n and magnetosonic modes in distinctly strong- and weak-field regimes. At all temperatures (except at $T = 0$ where $\mathcal{D} = 0$), the behaviour is consistent with our previous discussions, including the fact that the transmutation of Alfv\'{e}n and slow magnetosonic waves into diffusive modes occurs at lower $\theta_c$ as $k/\sqrt{\mathcal{B}}$ increases.
\begin{figure}[tbh]
\center
\includegraphics[width=.49\textwidth]{AlfvenComplexPlaneLowT-alt}
\includegraphics[width=.49\textwidth]{MSComplexPlaneLowT-alt}
\includegraphics[width=.491\textwidth]{AlfvenComplexPlaneHighT-alt}
\includegraphics[width=.49\textwidth]{MSComplexPlaneHighT-alt}
\caption{Dependence of the complex (dimensionless) frequency $\mathfrak{w}=\omega/\sqrt{\mathcal{B}}$ on $\theta$, plotted for Alfv\'{e}n (black) and fast (blue) and slow (red) magnetosonic waves in the strong- and weak-field regimes with $T/\sqrt{\mathcal{B}}=0.4$ and $T / \sqrt{\mathcal{B}} = 1.15$, respectively. The arrows represent the motion of poles as $\theta$ is tuned from $0$ to $\pi/2$. Momentum is set to $k/\sqrt{\mathcal{B}}=0.05$.}
\label{fig:complexPlaneVaryThetaExtremeT}
\end{figure}
In the crossover temperature regime (around $T/\sqrt{\mathcal{B}} \approx 0.6$), we can observe in more detail the interplay between fast and slow magnetosonic modes, which was noted in Section \ref{SpeedsAndAtt}. While the speed of fast magnetosonic waves always exceeds that of slow waves, their attenuation strengths exchange roles around $T/\sqrt{\mathcal{B}} \approx 0.675$, which manifests in a characteristically distinct behaviour for $\theta < \theta_c$, presented in Fig. \ref{fig:complexPlaneVaryThetaMidT1234} (see also Fig. \ref{fig:3soundAtten}). The $\theta$-dependence of Alfv\'en waves remains qualitatively similar to those depicted in Fig. \ref{fig:complexPlaneVaryThetaExtremeT}.
\begin{figure}[tbh]
\center
\includegraphics[width=.49\textwidth]{MSComplexPlaneMidT1-alt}
\includegraphics[width=.49\textwidth]{MSComplexPlaneMidT2-alt}
\includegraphics[width=.49\textwidth]{MSComplexPlaneMidT3-alt}
\includegraphics[width=.49\textwidth]{MSComplexPlaneMidT4-alt}
\caption{Dependence of the complex (dimensionless) frequency $\mathfrak{w}=\omega/\sqrt{\mathcal{B}}$ of fast (blue) and slow (red) magnetosonic modes on $\theta$ in the crossover regime. The arrows represent the motion of poles as $\theta$ is tuned from $0$ to $\pi/2$. Momentum is set to $k/\sqrt{\mathcal{B}}=0.05$.}
\label{fig:complexPlaneVaryThetaMidT1234}
\end{figure}
For a fixed $\theta < \theta_c$, where $\theta_c$ depends on $k$ and $T/\sqrt{\mathcal{B}}$, we plot the typical trajectories of the modes as a function of $T/\sqrt{\mathcal{B}}$ in Fig. \ref{fig:complexPlaneVaryTem1}. At $T=0$, all poles start from the non-dissipative regime (the real $\mathfrak{w}$ axis), with the speed of fast magnetosonic waves given by $v = 1$. As $T/\sqrt{\mathcal{B}}$ increases, the Alfv\'{e}n and the slow magnetosonic modes again asymptote to each other, eventually transforming into diffusive modes, while the speed of the fast magnetosonic modes gradually converges towards that of neutral conformal sound with $v = 1 / \sqrt{3}$.
\begin{figure}[tbh]
\center
\includegraphics[width=.49\textwidth]{3soundComplexPlaneSmallAngle}
\includegraphics[width=.49\textwidth]{3soundComplexPlaneMidAngle1}
\includegraphics[width=.49\textwidth]{3soundComplexPlaneMidAngle2}
\includegraphics[width=.49\textwidth]{3soundComplexPlaneBigAngle}
\caption{Dependence of the complex (dimensionless) frequency $\mathfrak{w}=\omega/\sqrt{\mathcal{B}}$ on $T/\sqrt{\mathcal{B}}$, plotted for Alfv\'{e}n (black) and fast (blue) and slow (red) magnetosonic waves for $\theta < \theta_c$. The arrows represent the motion of poles as $T/\sqrt{\mathcal{B}}$ is tuned from $0$ towards the weak-field regime. Momentum is set to $k/\sqrt{\mathcal{B}}=0.01$.
}
\label{fig:complexPlaneVaryTem1}
\end{figure}
In the high temperature limit, the ``collision'' of the Alfv\'{e}n and, independently, the slow magnetosonic poles on the imaginary axis occurs close to the real axis, which follows from the fact that for both types of waves,
\begin{align}
\text{Im}\left[\mathfrak{w}\right] \approx -\frac{1}{2}\left( \frac{\eta}{\varepsilon+p} + \frac{\mu r}{\mathcal{B}}\right)\sqrt{\mathcal{B}} \sim -\frac{\sqrt{\mathcal{B}}}{T} \to 0\,,
\end{align}
as $T / \sqrt{\mathcal{B}} \to \infty$. The Alfv\'{e}n waves then become the diffusive modes of uncharged conformal hydrodynamics with $\omega = -i\eta k^2/( 2sT )$. In our final plot, Fig. \ref{fig:90degreeModes}, we present the dependence of the four diffusion constants and one sound attenuation coefficient on the temperature at $\theta = \pi/2$ (cf. Fig. \ref{fig:SummaryHighT} and Eqs. \eqref{specialSound}--\eqref{specialDiffuse}). The modes D1, D3 and S1 reduce to the dispersion relations of uncharged relativistic hydrodynamics, while D2 and D4 are new.
\begin{figure}[tbh]
\center
\includegraphics[width=.49\textwidth]{pureDiffusiveEtaZeta}
\includegraphics[width=.476\textwidth]{pureDiffusiveR}
\caption{Plots of the four diffusion constants (D1, D2, D3, D4) and the sound attenuation (S1) as a function of $T/\sqrt{\mathcal{B}}$ at $\theta =\pi/2$. Black, red and blue curves depict dissipative coefficients that originate from the Alfv\'{e}n, slow magnetosonic and fast magnetosonic waves, respectively.
}
\label{fig:90degreeModes}
\end{figure}
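As a consistency check, combining the diffusive Alfv\'{e}n dispersion relation quoted above, $\omega = -i\eta k^2/(2sT)$, with the high-temperature value $\eta = s/4\pi$ makes the vanishing of the dimensionless damping explicit at fixed $k/\sqrt{\mathcal{B}}$:

```latex
\begin{align}
\mathfrak{w} = \frac{\omega}{\sqrt{\mathcal{B}}}
= -\frac{i k^2}{8\pi T \sqrt{\mathcal{B}}}
= -\frac{i}{8\pi} \left(\frac{k}{\sqrt{\mathcal{B}}}\right)^{2} \frac{\sqrt{\mathcal{B}}}{T}
\;\to\; 0\,,
\qquad \text{as} \quad \frac{T}{\sqrt{\mathcal{B}}} \to \infty \,.
\end{align}
```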
\subsection{Electric charge dependence}\label{sec:alphaDependence}
We end our discussion of MHD dispersion relations by investigating their dependence on the choice of the $U(1)$ coupling constant, or equivalently, the position of the Landau pole, which has so far been set by the ($N_c$-rescaled) $\bar\alpha = 1 / 137$. All dependence on $\bar\alpha$ enters into the expectation value of the stress-energy tensor through the term proportional to $\mathcal{H}_{\mu\nu}\mathcal{H}^{\mu\nu} \ln\mathcal{C}$ (cf. Eq. \eqref{defT}), which contributes no terms linear in $\omega$. For this reason, while the equation of state strongly depends on $\bar\alpha$, the first-order transport coefficients do not. Hence, all speeds of sound and all attenuation (and diffusion) coefficients depend on the choice of $\bar\alpha$ only through the equation of state and the susceptibilities.
What we observe is that the speeds of waves and attenuation coefficients strongly depend on the renormalised electromagnetic coupling, so, unsurprisingly, the strength of electromagnetic interactions plays an important role in the phenomenology of MHD. For concreteness, we only present the detailed behaviour of the Alfv\'{e}n waves (with speed $\mathcal{V}_A \cos\theta$), which reduce to the neutral hydrodynamic diffusive mode D3 (and D4) at $\theta = \pi/2$. Both $\mathcal{V}_A$ and the diffusion constant of D3, $\mathcal{D}_{D3}$, strongly depend on $\bar\alpha$. For a small variation in the values of $\bar\alpha$, we plot the results in Fig. \ref{fig:VA-differentAlpha}.\footnote{We remind the reader that in the boundary Lagrangian, the electromagnetic coupling is scaled out from the covariant derivatives. Thus, only the Maxwell term depends on $e_r$. As we vary $e_r$, we keep the strength of the electromagnetic field fixed.} To show the importance of a sensible choice of the renormalisation condition, we also vary the coupling over a larger range (to $\bar\alpha = 80 / 137$), where we see that the system develops unphysical behaviour with instabilities. As is apparent from Fig. \ref{speedAflvenVaryAlpha}, Alfv\'{e}n waves become unstable at low $T/\sqrt{\mathcal{B}}$.
\begin{figure}[tbh]
\center
\includegraphics[width=.49\textwidth]{AlfvenSpeed-differentAlpha}
\includegraphics[width=.49\textwidth]{DiffusionD3-differentAlpha}
\caption{The plot of $\mathcal{V}_A^2$ and the diffusion constant $\mathcal{D}_{D3}$ at $\bar\alpha = \{ \bar\alpha_0 / 2,\, \bar\alpha_0,\,2\bar\alpha_0 \}$, where $\bar\alpha_0 = 1/137$. The dashed line in the left plot is the $\bar\alpha$-independent speed (squared) of the S3 mode (cf. Fig. \ref{fig:SummaryHighT}), i.e. $\mathcal{V}_0^2$, which is plotted for comparison.
}
\label{fig:VA-differentAlpha}
\end{figure}
\begin{figure}[tbh]
\center
\includegraphics[width=.55\textwidth]{speedAlfvenVaryAlpha}
\caption{The Alfv\'{e}n wave speed parameter $\mathcal{V}_A^2$ for $\bar\alpha$ ranging from $\bar\alpha = \bar\alpha_0$ to $\bar\alpha = 80 \bar\alpha_0$, where $\bar\alpha_0 = 1/137$. We see that as $\bar\alpha$ increases, the waves develop an instability in the strong-field regime.
}
\label{speedAflvenVaryAlpha}
\end{figure}
In all of the literature known to us, the (unavoidable) choice of the constant $\mathcal{C}$, which sets $\bar\alpha$, is made in a different way. $\mathcal{C}$ is either chosen so that the logarithmic terms vanish altogether, or so that it sets the UV scale to that of the magnetic field, which is convenient when studying strong magnetic fields as e.g. in \cite{Fuini:2015hba,Janiszewski:2015ura}. Here, we wish to point out some of the consequences of setting $\mathcal{C}$ to either of the two standard options. The first option, which eliminates the logarithmic terms, results in the following thermodynamic quantities:
\begin{align}\label{stressChoice1}
\varepsilon = \frac{N_c^2}{2\pi^2}\left( -\frac{3}{4}f^b_4 r_h^4 \right) \,,&& p =\frac{N_c^2}{2\pi^2}\left[ \left(-\frac{1}{4}f^b_4 + \frac{v^b_4}{v} \right)r_h^4-\frac{\mathcal{B}^2}{4}\right],&& \mu\rho =\frac{N_c^2}{2\pi^2} \left( \frac{3v^b_4}{v} r_h^4-\frac{\mathcal{B}^2}{4} \right).
\end{align}
The second choice results in
\begin{equation}\label{stressChoice2}
\begin{aligned}
\varepsilon &= \frac{N_c^2}{2\pi^2}\left( -\frac{3}{4}f^b_4r_h^4 + \frac{\mathcal{B}^2}{4}\ln \mathcal{B} \right), && p =\frac{N_c^2}{2\pi^2}\left[\left( -\frac{1}{4}f^b_4 + \frac{v^b_4}{v} \right)r_h^4-\frac{\mathcal{B}^2}{4} + \frac{\mathcal{B}^2}{4}\ln \mathcal{B}\right], \\
\mu\rho &= \frac{N_c^2}{2\pi^2}\left( \frac{3v^b_4}{v} r_h^4-\frac{\mathcal{B}^2}{4} -\frac{\mathcal{B}^2}{4}\ln\mathcal{B}\right).
\end{aligned}
\end{equation}
While these two renormalisation conditions are suitable for studying certain physical setups involving static electromagnetic fields, we claim that they lead to unphysical results when the boundary $U(1)$ gauge field is dynamical. By comparing the renormalised stress-energy tensor \eqref{THol1}--\eqref{THol3} to expressions in \eqref{stressChoice1} and \eqref{stressChoice2}, we find that the two choices correspond to the renormalised coupling being $e_r^2 \to \infty$ and $e_r^2 \sim \ln \mathcal{B}$, respectively. An infinite $U(1)$ coupling is unphysical in a plasma state. The problem with the second choice is that if extrapolated to the weak-field regime, $\ln \mathcal{B} / M$, where $M$ is some scale, can become negative and $e_r$ imaginary, which is again unphysical. Thus, these choices may lead to instabilities and superluminal propagation, which were absent from our results with $\bar\alpha$ near $1/137$. We plot the Alfv\'{e}n speed parameter $\mathcal{V}_A$ for the two couplings from \eqref{stressChoice1} and \eqref{stressChoice2} in Fig. \ref{fig:unphysical}.
\begin{figure}[tbh]
\center
\includegraphics[width=.49\textwidth]{unphysical1}
\includegraphics[width=.49\textwidth]{unphysical2}
\caption{The $\theta$-independent factor $\mathcal{V}_A$ of the Alfv\'en wave speed plotted for the renormalised coupling $e^2_r \to \infty$ (left) from Eq. \eqref{stressChoice1} and for $e^2_r \sim \ln \mathcal{B}$ (right) from Eq. \eqref{stressChoice2}. }
\label{fig:unphysical}
\end{figure}
\section{Discussion}\label{sec:Discussion}
This work is the first holographic study of states with generalised global (higher-form) symmetries. Moreover, it is the first step on a long road towards a better understanding of magnetohydrodynamics in plasmas outside of the regime of validity of standard MHD, be it in the presence of strong magnetic fields or in a strongly interacting (or dense) plasma with a complicated equation of state and transport coefficients---all claimed to be describable within the recent (generalised global) symmetry-based formulation of MHD of Ref. \cite{Grozdanov:2016tdf}. In order to supply a hydrodynamical theory of MHD with the necessary microscopic information about a strongly coupled plasma, we resorted to the simplest, albeit experimentally inaccessible option: holography. Nevertheless, our hope is that in analogy with the myriad of works on holographic conformal hydrodynamics, which have led to important new insights into strongly interacting realistic fluids, holography can also help us understand observable MHD states in the presence of strong fields, high density and of strongly interacting gauge theories, such as QCD.
With this view, we constructed the simplest theory dual to the operator structure and Ward identities used in MHD of \cite{Grozdanov:2016tdf}, investigated the relevant aspects of the holographic dictionary and used it to compute the equation of state and transport coefficients of the dual plasma state. This information was then used to analyse the dependence of MHD waves---Alfv\'{e}n and magnetosonic waves---on tuneable parameters specifying the state: the strength of the magnetic field, temperature, the angle between momentum of propagation and the equilibrium magnetic field direction, as well as the strength of the $U(1)$ electromagnetic gauge coupling. We believe that the latter feature of our model---dynamical electromagnetism on the boundary---which in the (dual) language of two-form gauge fields in the bulk allows for standard (Dirichlet) quantisation, could in its own right be used for holographic studies of $U(1)$-gauged systems, unrelated to MHD.
Our results have revealed several new qualitative features of MHD waves, particularly in the regime of a strong magnetic field, which is inaccessible to standard MHD methods. Various properties of the equation of state, transport coefficients and dispersion relations found here may now be compared to those in experimentally realisable plasmas, or at the least, used as a toy model for future studies of MHD. Approximate scalings in the limiting regimes of large and small $T / \sqrt{\mathcal{B}}$ are collected in Tables \ref{table:EOS} and \ref{table:TC}. Here, we summarise some of the most interesting observations:
\\
$\bullet$ The equation of state and transport coefficients strongly depend on the strength of the magnetic field, i.e. on whether the plasma is in the weak-field, the crossover, or the strong-field regime.
$\bullet$ In the weak-field regime with $T / \sqrt{\mathcal{B}} \gg 1$, the system is well-described by standard MHD (see \cite{Hernandez:2017mch} for a full description) with small resistivities (the large-conductivity regime assumed by ideal Ohm's law) and small effects of anisotropy. As $T/\sqrt{\mathcal{B}} \to \infty$, the plasma becomes an uncharged, conformal fluid with a single independent transport coefficient, $\eta = s/ 4\pi$. In the strong-field limit of $T / \sqrt{\mathcal{B}} \to 0$, the plasma approaches a non-dissipative regime with all first-order transport coefficients (along with sound attenuations and diffusion constants) tending to zero. Effects of anisotropy are large.
$\bullet$ Resistivities have a global maximum in the intermediate $T/\sqrt{\mathcal{B}}$ regime, which indicates the regime in which the plasma is least conductive. If the assumptions of standard MHD are correct at $T / \sqrt{\mathcal{B}} \gg 1$ and the symmetry-based predictions of \cite{Grozdanov:2016tdf} are correct at $T / \sqrt{\mathcal{B}} \ll 1$, such a regime should be generically exhibited by any plasma.
$\bullet$ Out of the three bulk viscosities, $\zeta_\perp$, $\zeta_\parallel$ and $\zeta_\times$, only one is independent and they saturate the positivity of the entropy production inequality, i.e. they are related by $\zeta_\perp \zeta_\parallel = \zeta_\times^2$. One may speculate on how general this result is and whether it is related to the suppression of entropy production at strong coupling \cite{Grozdanov:2014kva,Haehl:2015pja} or perhaps some form of (holographic) universality at infinite (or strong) coupling.
$\bullet$ Various qualitative features of slow and fast magnetosonic modes are exchanged in the weak- and strong-field regimes (usually at small angles, $\theta$, between momentum and the equilibrium magnetic field direction), such as their asymptotic tendency to the speed of Alfv\'{e}n waves and the strength of sound attenuation.
$\bullet$ For a finite momentum, propagating Alfv\'{e}n and slow magnetosonic modes (sound modes to $\mathcal{O}(k^2)$) transmute into pairs of non-propagating, diffusive (to $\mathcal{O}(k^2)$) modes. This occurs at large angles between the direction of momentum propagation and the equilibrium magnetic field, $\theta_c < \theta \leq \pi/2$, where $\theta_c$ is some momentum- and $T/\sqrt{\mathcal{B}}$-dependent critical angle (cf. Eq. \eqref{ThetaC} for Alfv\'{e}n waves).
$\bullet$ The phenomenology of MHD modes strongly depends on the strength of the electromagnetic coupling (or the position of the Landau pole) and can, for large ranges of the coupling, lead to unstable or superluminal propagation.
\\
Beyond the types of waves studied in this work, it would be particularly interesting to better understand the role of finite charge density, as studied in \cite{Hernandez:2017mch}, within the formalism of \cite{Grozdanov:2016tdf}. The important question then is how the phenomenology of such MHD waves, which typically experience gapped propagation and instabilities (e.g. the infamous Weibel instability), becomes altered by strong interactions, strong fields and for more `exotic' field content.
Finally, the holographic setup studied here will need to undergo extensive further tests and analyses in order to unambiguously establish its connection to plasma physics and MHD. In particular, it is essential to study the quasinormal spectrum of the theory to verify that the hydrodynamic modes indeed describe the small-$\omega$ and small-$k$ expansion of the leading infrared poles. Furthermore, it will be interesting to understand the role of the higher-frequency part of the spectrum and its interplay with MHD modes. We leave all these and many other interesting questions to the future.
\acknowledgments{The authors would like to thank Debarghya Banerjee, Pavel Kovtun, Alexander Krikun, Chris Rosen, Koenraad Schalm, Andrei Starinets, Giorgio Torrieri, Vincenzo Scopelliti, Phil Szepietowski and Jan Zaanen for stimulating discussions, and Simon Gentle for his comments on the draft of this paper. We are also grateful to Diego Hofman and Nabil Iqbal for numerous discussions on the topic of this work, comments on the manuscript and for sharing the draft of \cite{Hofman:2017Something} prior to publication. S. G. is supported in part by a VICI grant of the Netherlands Organisation for Scientific Research (NWO), and by the Netherlands Organisation for Scientific Research/Ministry of Science and Education (NWO/OCW). The work of N. P. is supported by the DPST scholarship from the Thai government and by Leiden University. }
\section*{Acknowledgments}
The authors thank anonymous reviewers for their careful reading and a number of valuable comments.
This work was supported by JSPS KAKENHI Grant Numbers 16H06931 and 16K16011.
\section{Applications}
\label{sec:applications}
In this section, we demonstrate our proposed framework by applying it to several stochastic combinatorial problems.
We only describe the results for the adaptive strategy (Algorithm~\ref{alg:adaptive}) as the results for the non-adaptive strategy (Algorithm~\ref{alg:nonadaptive}) can easily be obtained analogously.
\subsection{Matching Problems}\label{sec:matching_problems}
In this section, unless otherwise noted, $n$ denotes the number of vertices in the (hyper)graph in question.
\subsubsection{Bipartite Matching}\label{sec:bipartite}
We first demonstrate how to use our technique for the bipartite matching problem.
Let $(V, E)$ be a bipartite graph and $\tilde c \in \mathbb{Z}_+^E$ be a stochastic edge weight.
The bipartite matching problem can then be represented as
\begin{align}\label{eq:bipartite}
\begin{array}{lll}
\text{maximize} &\ \displaystyle\sum_{e \in E} \tilde c_e x_e \\
\text{subject to} &\ \displaystyle\sum_{e \in \delta(u)} x_e \le 1 &\quad (u \in V), \\
&\ x \in \{0,1\}^E,
\end{array}
\end{align}
where $\delta(u) = \{\, e \in E : u \in e \,\}$.
The K\H{o}nig--Egerv\'ary theorem~\cite{kHonig1931graphs,egervary1931combinatorial} shows that the LP relaxation of this system is TDI, so Algorithm~\ref{alg:adaptive} has an approximation ratio of $(1 - \epsilon)$ with high probability for sufficiently large $T$.
Moreover, if the algorithm finds an integral solution to the optimistic problem in Line~\ref{line:2}, it reveals a matching in each iteration, and hence at most $T$ edges per vertex in total.
Finally, by Lemma~\ref{lem:tdi}, there exists an $(\epsilon, \epsilon)$-witness cover of size $e^{O(\mu \log n)}$ for each $\mu \geq 1$.
We can therefore obtain the following result from Theorem~\ref{thm:adaptive}.
\begin{corollary}\label{cor:bipartite}
By taking $T = \Omega(\Delta_c \log (n/\epsilon)/\epsilon p)$,
for the bipartite matching problem,
Algorithm~\ref{alg:adaptive} outputs a $(1 - \epsilon)$-approximate solution with probability at least $1 - \epsilon$.
\end{corollary}
This result is improved by using the vertex sparsification lemma in Section~\ref{sec:sparsification}.
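As a concrete illustration of the min--max identity underlying total dual integrality here, the following pure-Python sketch verifies the K\H{o}nig--Egerv\'ary equality (maximum matching size equals minimum vertex cover size) by brute force on a small hypothetical bipartite graph; it is a didactic check, not part of Algorithm~\ref{alg:adaptive}:

```python
from itertools import combinations

# Hypothetical bipartite graph: left vertices {0,1,2}, right vertices {3,4,5}.
edges = [(0, 3), (0, 4), (1, 4), (2, 4), (2, 5)]
vertices = sorted({v for e in edges for v in e})

def is_matching(F):
    # A matching uses every vertex at most once.
    endpoints = [v for e in F for v in e]
    return len(endpoints) == len(set(endpoints))

def max_matching_size():
    return max(len(F)
               for r in range(len(edges) + 1)
               for F in combinations(edges, r)
               if is_matching(F))

def min_vertex_cover_size():
    for r in range(len(vertices) + 1):
        for C in combinations(vertices, r):
            if all(u in C or v in C for (u, v) in edges):
                return r  # r increases, so the first feasible cover is minimum

# Konig--Egervary: the two optima coincide on bipartite graphs,
# reflecting total dual integrality of the matching LP.
assert max_matching_size() == min_vertex_cover_size() == 3
```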
\subsubsection{Non-bipartite Matching}\label{sec:nonbipartite}
Next, we consider the non-bipartite matching problem.
Let $(V, E)$ be a graph and $\tilde c \in \mathbb{Z}_+^E$ be a stochastic edge weight.
A naive formulation of this problem is as follows:
\begin{align}
\begin{array}{lll}
\text{maximize} &\ \displaystyle\sum_{e \in E} \tilde c_e x_e \\
\text{subject to} &\ \displaystyle\sum_{e \in \delta(u)} x_e \le 1 &\quad (u \in V), \\
&\ x \in \{0,1\}^E.
\end{array}
\end{align}
It is known that the LP relaxation of this system is TDI/2 (totally dual half-integral) and has an integrality gap of $3/2$~\cite{schrijver2003combinatorial}.
Therefore, by the same argument as in the bipartite matching problem, we can show that Algorithm~\ref{alg:adaptive} has an approximation ratio of $(2 - \epsilon)/3$ with high probability if $T = \Omega(\Delta_c \log (n/\epsilon)/\epsilon p)$.
To improve this approximation ratio, we consider a strengthened formulation by adding the blossom inequalities:
\begin{align}\label{eq:blossom}
\begin{array}{lll}
\text{maximize} &\ \displaystyle\sum_{e \in E} \tilde c_e x_e \\
\text{subject to} &\ \displaystyle\sum_{e \in \delta(u)} x_e \le 1 &\quad (u \in V), \\
&\ \displaystyle\sum_{e \in E(S)} x_e \le \floor{\frac{|S|}{2}} &\quad (S \in \mathcal{V}_{\rm odd}), \\
&\ x \in \{0,1\}^E,
\end{array}
\end{align}
where $E(S) = \{\, e \in E : e \subseteq S \,\}$ and $\mathcal{V}_{\rm odd} = \{\, S \subseteq V : |S| \textrm{ is odd and at least } 3 \,\}$.
Cunningham and Marsh~\cite{cunningham1978primal} showed that this system is TDI, so our algorithm has an approximation ratio of $(1 - \epsilon)$ with high probability for sufficiently large $T$.
Moreover, the number of revealed edges is at most $T$ per vertex.
The only remaining issue is the number of iterations.
Since the system \eqref{eq:blossom} has exponentially many constraints, we have to exploit its combinatorial structure to reduce the number of possibilities.
The dual problem is given by
\begin{align}
\label{eq:matchingdual}
\begin{array}{lll}
\text{minimize} &\ \displaystyle\sum_{u \in V} y_u + \sum_{S \in \mathcal{V}_{\rm odd}} \floor{\frac{|S|}{2}} z_S =: \tau(y, z)\\
\text{subject to} &\ \displaystyle y_u + y_v + \sum_{S \in \mathcal{V}_{\rm odd} \colon \{u,v\} \subseteq S} z_S \ge \tilde c_{e} &\quad (e = \{u,v\} \in E), \\
&\ y \in \mathbb{R}_+^V,\ z \in \mathbb{R}_+^{\mathcal{V}_{\rm odd}}.
\end{array}
\end{align}
For $\mu \geq 1$,
let us define a set $W \subseteq \mathbb{R}_+^V \times \mathbb{R}_+^{\mathcal{V}_{\rm odd}}$ by
\begin{align}
W = \left\{\, (y,z) \in \mathbb{Z}_+^V \times \mathbb{Z}_+^{\mathcal{V}_{\rm odd}} :\, \tau(y, z) \le (1 - \epsilon) \mu \,\right\}.
\end{align}
It is clear that $W$ is an $(\epsilon, \epsilon)$-witness cover, and we can evaluate the size of $W$ as follows.
\begin{Claim} \label{claim:nonbipartite}
$|W| = e^{O(\mu \log n)}$.
\end{Claim}
\begin{proof}
We count the candidates for $y$ and $z$ separately.
Since $\sum_{u \in V} y_u \le \tau(y, z) < \mu$, the number of candidates for $y$ is at most
\begin{align}
\sum_{k=0}^{\floor{\mu}} \binom{n}{k} \le \left(\floor{\mu} + 1\right) \left( \frac{e n}{\floor{\mu}} \right)^{\floor{\mu}} = e^{O(\mu \log n)}.
\end{align}
To count the number of candidates for $z$, we regard $z$ as a multiset of sets, e.g., if $z_S = 2$ then the multiset contains two copies of $S$.
Let $s_i$ $(i = 1, 2, \ldots, k)$ be the sizes of the sets contained in $z$.
Then we have
\begin{align}
s_1 + \cdots + s_k \le 3 \left( \floor{\frac{s_1}{2}} + \cdots + \floor{\frac{s_k}{2}} \right) < 3 \mu.
\end{align}
Therefore, the number of candidates for $z$ is given by
\begin{align}
\sum_{k=0}^{\floor{\mu}} \sum_{\substack{s_1 + \cdots + s_k \le 3\mu \\ s_1, \ldots, s_k \ge 3}} \binom{n}{s_1} \cdots \binom{n}{s_k}
\le \sum_k \binom{3 \floor{\mu} + k -1}{k} n^{3\mu} = e^{O(\mu \log n)}.
\end{align}
By multiplying the number of candidates for $y$ and $z$, we obtain the required result.
\end{proof}
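The standard binomial-sum estimate used for counting $y$ in the proof above can also be sanity-checked numerically; the values of $n$ and $m = \floor{\mu}$ below are arbitrary test points:

```python
import math

# Sanity check of the bound  sum_{k<=m} C(n,k) <= (m+1) * (e*n/m)**m  for 1 <= m <= n,
# applied in the proof with m = floor(mu).  (n, m values are arbitrary test points.)
for n in (10, 50, 200):
    for m in (1, 3, 7):
        lhs = sum(math.comb(n, k) for k in range(m + 1))
        rhs = (m + 1) * (math.e * n / m) ** m
        assert lhs <= rhs, (n, m, lhs, rhs)
```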
Therefore, we obtain the following.
\begin{corollary}
By taking $T = \Omega(\Delta_c \log (n/\epsilon)/\epsilon p)$,
for the non-bipartite matching problem,
Algorithm~\ref{alg:adaptive} outputs a $(1 - \epsilon)$-approximate solution with probability at least $1 - \epsilon$.
\end{corollary}
The same analysis can be applied to the (simple) $b$-matching problem.
\paragraph{Relationship to the analysis of Assadi et al.}
\label{sec:matching}
The adaptive and non-adaptive algorithms of Assadi et al.~\cite{assadi2016stochastic}
for the unweighted non-bipartite matching problem are within our framework (Algorithms~\ref{alg:adaptive} and \ref{alg:nonadaptive}, respectively),
and their analysis utilizes the Tutte--Berge formula.
They showed that the required number of iterations is $O(\log (n/\epsilon \mu)/\epsilon p)$, and it is reduced to $O(\mathrm{poly}(p, 1/\epsilon))$ by using the vertex sparsification lemma.
For the unweighted problem, our analysis gives a weaker result than theirs.
However, since no simple alternative to the Tutte--Berge formula for the weighted problem is known, our analysis is more general than theirs.
\begin{comment}
\subsubsection{Simple b-Matching}
Let $(V, E)$ be a graph, $b \in \mathbb{Z}_+^V$ be a capacity, and $\tilde c \in \mathbb{Z}_+^E$ be a stochastic edge weight.
The simple $b$-matching problem is formulated by
\begin{align}
\begin{array}{lll}
\text{maximize} & \displaystyle\sum_{e \in E} \tilde c_e x_e \\
\text{subject to} & x_e \le 1 & (e \in E), \\
& \displaystyle\sum_{e \in \delta(u)} x_e \le b(u) & (u \in V), \\
& \displaystyle\sum_{e \in E(S)} x_e + \sum_{e \in F} x_e \le \floor{\frac{b(S) + |F|}{2}} & (S \subseteq V, F \subseteq \delta(S)), \\
& x \in \{0,1\}^E.
\end{array}
\end{align}
Edmonds and Johnson~\cite{edmonds1970matching} showed the LP relaxation of the above system is totally dual integral.
The number of revealed element is $O(T)$ per vertex.
The remaining issue is the number of iterations.
Since the system has exponentially many constraints, we exploit the combinatorial structure to reduce the number of possibilities.
The dual problem is given by
\begin{align}
\begin{array}{ll}
\text{minimize} & \sum_e y_e + \sum_u b(u) z_u + \sum_{S,F} \floor{\frac{b(S) + |F|}{2}} w_{S,F} \\
\text{subject to} & y_e + \sum_{u \ni e} z_u + \sum_{S, F: e \in E[S] \cup F } \ge \tilde c_{e}, \ \ (e \in E), \\
& y \in \mathbb{R}_+^V, z \in \mathbb{R}_+^{2^V}, w \in \mathbb{R}_+^{2^V \times 2^E}.
\end{array}
\end{align}
We define a witness cover by
\begin{align}
W = \{ (y,z,w) : \text{obj} \le (1 - \epsilon) \mu \}.
\end{align}
\begin{lemma}
$W$ is an $(\epsilon, \epsilon)$-witness cover of size $O(e^{\mu \log n})$.
\end{lemma}
\begin{proof}
It is clear that $W$ is an $(\epsilon, \epsilon)$-witness cover.
We evaluate the size of $W$.
We separately count $y$, $z$, and $w$.
The number of candidates of $y$ is at most $O(e^{\mu \log n})$ and the number of candidates of $z$ is at most $O(e^{\mu \log n})$.
To count $w$, we regard $w$ as a multiset $\{ (S_1,F_1), \ldots, (S_k,F_k) \}$.
Let $(s_1,f_1), \ldots (s_k,f_k)$ be the sizes of them.
Then, we have $s_1 + f_1 + \cdots + s_k + f_k < 3 \mu$.
The number of vertex sets of size $s$ is $\binom{n}{s}$ and the number of edge sets of size $f$ is at most $\binom{m}{f}$.
Therefore, the number of candidates of $w$ is evaluated by
\begin{align}
\sum_k \sum_{(s_1,f_1), \ldots, (s_k,f_k)} \binom{n}{s_1} \binom{m}{f_1} \cdots \binom{n}{s_k} \binom{m}{f_k} \notag \\
&= O(e^{\mu \log n}).
\end{align}
By multiplying them, we obtain the result.
\end{proof}
\end{comment}
\subsubsection{$k$-Hypergraph Matching}
Let $(V, E)$ be a $k$-uniform hypergraph, i.e., $E$ is a family of subsets of $V$, each of which has size exactly $k$.
Let $\tilde c \in \mathbb{Z}_+^E$ be a stochastic edge weight.
The \emph{$k$-hypergraph matching problem} can be represented as
\begin{align}\label{eq:k-hypergraph}
\begin{array}{lll}
\text{maximize} &\ \displaystyle\sum_{e \in E} \tilde c_e x_e \\
\text{subject to} &\ \displaystyle\sum_{e \in \delta(u)} x_e \le 1 &\quad (u \in V), \\
&\ x \in \{0,1\}^E,
\end{array}
\end{align}
where $\delta(u) = \{\, e \in E : u \in e \,\}$.
Chan and Lau~\cite{chan2010linear} proved that the LP relaxation of the above system has an integrality gap of $\alpha := 1/(k - 1 + 1/k)$, and they also proposed an LP-relative $\alpha$-approximation algorithm.
Since at most one edge per vertex is revealed in expectation due to the constraint $\sum_{e \in \delta(u)} x_e \le 1$,
the expected number of revealed hyperedges per vertex is $O(T)$ in total.
The only remaining issue is the number of iterations.
Since the system \eqref{eq:k-hypergraph} has polynomially many constraints and is not TDI, we have to discretize the dual variables.
The corresponding dual problem is given by
\begin{align}\label{eq:dual_k-hypergraph}
\begin{array}{lll}
\text{minimize} &\ \displaystyle\sum_{u \in V} y_u \\
\text{subject to} &\ \displaystyle\sum_{u \in e} y_u \ge \tilde c_e &\quad (e \in E), \\
&\ y \in \mathbb{R}_+^V.
\end{array}
\end{align}
Here, we show that the dual optimal solution is sparse.
\begin{Claim}
If the optimal value is less than $\mu$,
then there exists a dual optimal solution $y \in \mathbb{R}_+^V$ such that $|\mathrm{supp}(y)| < k \mu$.
\end{Claim}
\begin{proof}
Let $x \in \mathbb{R}_+^E$ be a primal optimal solution.
We can assume that $x_e > 0$ only if $\tilde c_e \ge 1$.
Therefore, we have $\sum_e x_e \le \sum_e \tilde c_e x_e < \mu$.
Since each hyperedge consists of exactly $k$ elements, we have
\begin{align}
\sum_{u \in V} \sum_{e \in \delta(u)} x_e = k \sum_{e \in E} x_e < k \mu.
\end{align}
This shows that fewer than $k \mu$ of the vertex constraints can hold with equality.
Therefore, by complementary slackness, the corresponding dual optimal solution $y$ satisfies $|\mathrm{supp}(y)| < k \mu$.
\end{proof}
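The double-counting identity at the heart of the proof, $\sum_{u \in V} \sum_{e \in \delta(u)} x_e = k \sum_{e \in E} x_e$, can be checked on a random $k$-uniform hypergraph (a hypothetical test instance):

```python
import random

# Hypothetical test instance: a random 3-uniform hypergraph on 12 vertices.
random.seed(0)
k, n = 3, 12
E = sorted({tuple(sorted(random.sample(range(n), k))) for _ in range(20)})
x = {e: random.random() for e in E}  # arbitrary nonnegative weight per hyperedge

# Each hyperedge e is counted once for each of its k vertices.
lhs = sum(x[e] for u in range(n) for e in E if u in e)
rhs = k * sum(x.values())
assert abs(lhs - rhs) < 1e-9
```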
Therefore, by Lemma~\ref{lem:poly_nontdi_sparse},
there exists an $(\epsilon, \epsilon/2)$-witness cover for each $\mu \geq 1$
of size at most $M^\mu$, where $M = \exp\left(O\left(k \log \frac{n}{k\mu} + \frac{1}{\epsilon}\right)\right)$.
Thus we obtain the following.
\begin{corollary}
\label{cor:khypergraphmatching1}
By taking $T = \Omega(\Delta_c(k \log (n/\epsilon) + 1/\epsilon)/\epsilon p)$, for the $k$-hypergraph matching problem, Algorithm~\ref{alg:adaptive} outputs a $(1 - \epsilon)/(k - 1 + 1/k)$-approximate solution with probability at least $1 - \epsilon$.
\end{corollary}
This result is improved by using the vertex sparsification lemma in Section~\ref{sec:sparsification}.
\paragraph{Comparison with Blum et al.}
Blum et al.~\cite{blum2015ignorance} provided adaptive and non-adaptive algorithms for the unweighted $k$-hypergraph matching problem based on the local search technique of Hurkens and Schrijver~\cite{hurkens1989size}.
Their adaptive algorithm achieves an approximation ratio of $(2 - \epsilon)/k$ in expectation by conducting a constant number of queries per vertex.
For the unweighted problem, our algorithm has a worse approximation ratio than theirs.
However, our algorithm has four advantages:
it requires an exponentially smaller number of queries;
it runs in time polynomial in both $n$ and $1/\epsilon$;
it applies to the weighted problem with the same approximation ratio;
and it has a stronger stochastic guarantee, i.e., the guarantee holds not merely in expectation but with high probability.
\begin{remark}
For the unweighted $k$-hypergraph matching problem, Chan and Lau~\cite{chan2010linear} showed that there is a packing LP with an integrality gap of $2/(k+1)$.
Note that no rounding algorithm for this LP is known.
Using this formulation, we obtain a $(2 - \epsilon)/(k+1)$ approximation algorithm which conducts $O_{\epsilon,p}(\log^2 n)$ queries and runs in non-polynomial time (i.e., it performs exhaustive search).
\end{remark}
\subsubsection{$k$-Column Sparse Packing Integer Programming}\label{sec:k-CSPIP}
The $k$-column sparse packing integer programming problem is a common generalization of the $k$-hypergraph matching problem and the knapsack problem, and can be represented as follows
(the formulation itself just rewrites \eqref{eq:packing} by using the entries of the matrix and of the vectors):
\begin{align}\label{eq:k-column_sparse}
\begin{array}{lll}
\text{maximize} &\ \displaystyle\sum_{j = 1}^m \tilde c_j x_j \\
\text{subject to} &\ \displaystyle\sum_{j = 1}^m a_{ij} x_j \le b_i &\quad (i = 1, \ldots, n), \\
&\ x \in \{0,1\}^m,
\end{array}
\end{align}
where ``$k$-column sparse'' means that $| \{\, i : a_{ij} \neq 0 \,\} | \le k$ for each $j \in \{1, \ldots, m\}$.
Without loss of generality, we assume that $a_{ij} \le b_i$ for all $i \in \{1,\ldots,n\}$ and $j \in \{1,\ldots,m\}$.
The main difference from the other problems is that the system $\sum_j a_{ij} x_j \le b_i$, $x \geq 0$ does not imply $x_j \le 1$.
Instead, we have $x_j \le w_j$, where $w_j = \min_{i \colon a_{ij} \neq 0} b_i/a_{ij}$.
Let $w = \max_j w_j$.
By modifying Algorithm~\ref{alg:adaptive} to reveal each $\tilde c_j$ with probability $x_j / w$, we obtain the same approximation guarantee with a $w$ times larger number of iterations.
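As an aside, the quantities $w_j = \min_{i\colon a_{ij} \neq 0} b_i/a_{ij}$ and $w = \max_j w_j$ can be read off directly from the constraint data; a minimal sketch on a hypothetical $2$-column-sparse instance (the helper name \texttt{column\_bounds} is ours, not part of the framework):

```python
# For each column j, compute w_j = min over rows i with a[i][j] != 0
# of b[i] / a[i][j], and w = max_j w_j.
def column_bounds(a, b):
    n, m = len(a), len(a[0])
    w = []
    for j in range(m):
        ratios = [b[i] / a[i][j] for i in range(n) if a[i][j] != 0]
        w.append(min(ratios))
    return w, max(w)

# Hypothetical instance: each column has at most 2 nonzero entries.
a = [[2, 0, 1],
     [0, 3, 1]]
b = [4, 6]
w_per_column, w = column_bounds(a, b)
# w_1 = 4/2 = 2, w_2 = 6/3 = 2, w_3 = min(4/1, 6/1) = 4
```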
Parekh~\cite{parekh2011iterative} proposed an LP-relative $(1/2k)$-approximation algorithm for general $k$ and a $(1/3)$-approximation algorithm for $k = 2$, which encompasses the \emph{demand matching problem}~\cite{shepherd2007demand}.
The expected number of revealed elements for each constraint $i$ is $O(b_iT)$, because we have
\begin{align}
&\sum_{j\colon a_{ij} \neq 0} x_j \le \sum_j a_{ij} x_j \le b_i.
\end{align}
The only remaining issue is the number of iterations.
Using the same approach as for the $k$-hypergraph matching problem, we obtain the following result.
\begin{corollary}
By taking $T = \Omega(\Delta_c w (k \log(n/\epsilon) + 1/\epsilon)/\epsilon p)$,
for the $k$-column sparse packing integer programming problem, Algorithm~\ref{alg:adaptive} outputs a $(1 - \epsilon)/2k$-approximate solution with probability at least $1 - \epsilon$.
\end{corollary}
\subsection{Matroid Problems}
Now we apply our technique to matroid-related optimization problems
(see, e.g., \cite{schrijver2003combinatorial} for basics of matroids and related optimization problems).
In this section, unless otherwise noted,
$m$ denotes the ground set size of the matroids in question.
\subsubsection{Maximum Independent Set}
Let ${\mathbf M} = (E, \mathcal{I})$ be a matroid on a finite set $E$, and let $r\colon 2^E \to \mathbb{Z}_+$ be its rank function.
A set $S \subseteq E$ is a \emph{flat} if $r(S \cup e) \ne r(S)$ for all $e \in E \setminus S$,
and let ${\mathcal F}_{\mathbf M}$ denote the family of flats in ${\mathbf M}$.
For a subset $S \subseteq E$,
the smallest flat containing $S$ is called the \emph{closure} of $S$.
We assume that the rank of the matroid is relatively small to ensure that Algorithm~\ref{alg:adaptive} does not reveal all the elements.
Let $\tilde c \in \mathbb{Z}_+^E$ be a stochastic weight.
The maximum independent set problem can be represented as follows:
\begin{align}
\begin{array}{lll}
\text{maximize} &\ \displaystyle\sum_{e \in E} \tilde c_e x_e \\
\text{subject to} &\ \displaystyle\sum_{e \in S} x_e \le r(S) &\quad (S \subseteq E), \\
&\ x \in \{0,1\}^E.
\end{array}
\end{align}
Edmonds~\cite{edmonds1970submodular} showed that the LP relaxation of the above system is TDI, so our algorithm has an approximation ratio of $(1 - \epsilon)$ with high probability.
Moreover, the number of revealed elements is $O(r T)$, where $r = r(E)$ is the rank of the matroid in question.
The only remaining issue is the number of iterations.
Since the system has exponentially many constraints, we have to exploit the combinatorial structure of the problem to reduce the number of possibilities.
The dual problem is given by
\begin{align}
\begin{array}{lll}
\text{minimize} &\ \displaystyle\sum_{S \subseteq E} r(S) y_S \\
\text{subject to} &\ \displaystyle\sum_{S \subseteq E\colon e \in S} y_S \ge \tilde c_e &\quad (e \in E), \\
&\ y \in \mathbb{R}_+^{2^E}.
\end{array}
\end{align}
Since the closure of each $S \subseteq E$ has the same rank as $S$ and contains all elements of $S$, replacing every set in the support of $y$ with its closure preserves dual feasibility without increasing the objective value; hence we can restrict the support of $y$ to ${\mathcal F}_{\mathbf M}$.
Then, for $\mu \geq 1$,
let us define a set $W \subseteq \mathbb{R}_+^{2^E}$ by
\begin{align}
W = \left\{\, y \in \mathbb{Z}_+^{2^E} :\, \sum_{S \in {\mathcal F}_{\mathbf M}} r(S) y_S \le (1 - \epsilon) \mu,\ \mathrm{supp}(y) \subseteq {\mathcal F}_{\mathbf M} \, \right\}.
\end{align}
It is clear that $W$ is an $(\epsilon, \epsilon)$-witness cover, and we can evaluate the size of $W$ as follows.
\begin{Claim}\label{cl:maximum_independent_set}
$|W| = e^{O(\mu \log m)}$.
\end{Claim}
\begin{proof}
To evaluate the size of $W$, as for the non-bipartite matching problem, we regard $y$ as a multiset of flats.
Let $r_1, \ldots, r_k$ be the ranks of flats in $y$.
Then we have $r_1 + \cdots + r_k < \mu$.
Since each flat is the closure of some independent set, the number of flats of rank $r$ is at most the number of independent sets of size $r$, which is at most $\binom{m}{r}$.
Therefore, the number of dual candidates for $y$ is given by
\begin{align}
\quad \sum_{k=0}^{\floor{\mu}} \sum_{\substack{r_1 + \cdots + r_k \leq \mu \\ r_1, \ldots, r_k \ge 1}} \binom{m}{r_1} \cdots \binom{m}{r_k} \le e^{O(\mu \log m)}. \quad \qedhere
\end{align}
\end{proof}
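The final inequality can be verified by elementary estimates; a sketch, using $\binom{m}{r} \le m^r$ and the fact that the number of tuples $(r_1, \ldots, r_k)$ of positive integers summing to at most $\mu$ (over all $k$) is $2^{O(\mu)}$:

```latex
\sum_{k=0}^{\floor{\mu}} \sum_{\substack{r_1 + \cdots + r_k \le \mu \\ r_1, \ldots, r_k \ge 1}}
  \binom{m}{r_1} \cdots \binom{m}{r_k}
\;\le\; \sum_{k=0}^{\floor{\mu}} \sum_{\substack{r_1 + \cdots + r_k \le \mu \\ r_1, \ldots, r_k \ge 1}}
  m^{r_1 + \cdots + r_k}
\;\le\; 2^{O(\mu)} \cdot m^{\mu}
\;=\; e^{O(\mu \log m)}.
```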
Therefore, we obtain the following.
\begin{corollary}
By taking $T = \Omega(\Delta_c \log (m / \epsilon)/\epsilon p)$,
for the maximum independent set problem,
Algorithm~\ref{alg:adaptive} outputs a $(1 - \epsilon)$-approximate solution with probability at least $1 - \epsilon$.
\end{corollary}
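The non-stochastic problem solved in the final step here is a maximum-weight matroid independent set computation, which the classical matroid greedy algorithm solves exactly; a minimal sketch, assuming an independence oracle is available (the partition matroid below is a hypothetical example):

```python
def greedy_max_weight(elements, weight, is_independent):
    """Matroid greedy: scan elements by nonincreasing weight,
    keeping an element whenever the current set stays independent."""
    chosen = []
    for e in sorted(elements, key=weight, reverse=True):
        if weight(e) > 0 and is_independent(chosen + [e]):
            chosen.append(e)
    return chosen

# Hypothetical partition matroid: at most one element per color class.
colors = {'a': 0, 'b': 0, 'c': 1, 'd': 1}
weights = {'a': 5, 'b': 3, 'c': 4, 'd': 6}

def indep(s):
    used = [colors[e] for e in s]
    return len(used) == len(set(used))

best = greedy_max_weight(list(colors), weights.get, indep)
# picks 'd' (weight 6) and 'a' (weight 5), one from each class
```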
\begin{comment}
\begin{remark}
By conducting a different analysis, we can take $T = \Omega(1/\epsilon p)$. See Appendix.
\COMM{TM}{APPENDIX}
\end{remark}
\end{comment}
\subsubsection{Matroid Intersection}
The same technique can also be applied to the matroid intersection problem.
Let ${\mathbf M}_j = (E, \mathcal{I}_j)$ $(j = 1, 2)$ be two matroids whose rank functions are $r_j \colon 2^E \to \mathbb{Z}_+$, and $\tilde c \in \mathbb{Z}_+^E$ be a stochastic weight.
The weighted matroid intersection problem can be represented as
\begin{align}
\begin{array}{lll}
\text{maximize} &\ \displaystyle\sum_{e \in E} \tilde c_e x_e \\
\text{subject to} &\ \displaystyle\sum_{e \in S} x_e \le r_j(S) &\quad (S \subseteq E, \ j \in \{1, 2\}), \\
&\ x \in \{0,1\}^{E}.
\end{array}
\end{align}
Edmonds~\cite{edmonds1970submodular} showed that the LP relaxation of the above system is TDI, so our algorithm has an approximation ratio of $(1 - \epsilon)$ with high probability for sufficiently large $T$.
Moreover, the number of revealed elements is $O(r^\ast T)$,
where $r^\ast$ is the maximum rank of a common independent set in the two matroids.
The only remaining issue is the number of iterations.
As in the analysis of the maximum independent set problem,
we can restrict the supports of the dual vectors $y = (y^1, y^2) \in {\mathbb R}_+^{2^E} \times {\mathbb R}_+^{2^E}$ so that $\mathrm{supp}(y^j) \subseteq {\mathcal F}_{{\mathbf M}_j}$ for $j \in \{1, 2\}$,
and we obtain the following by the same argument.
\begin{corollary}
By taking $T = \Omega(\Delta_c \log (m/\epsilon)/\epsilon p)$,
for the matroid intersection problem, Algorithm~\ref{alg:adaptive} outputs a $(1 - \epsilon)$-approximate solution with probability at least $1 - \epsilon$.%
\footnote{Note that the bipartite matching problem is a special case of the matroid intersection problem, and Corollary~\ref{cor:bipartite} is obtained from a naive application of this result. Using the vertex sparsification lemma shown in Section~\ref{sec:sparsification}, a stronger result is obtained for bipartite matching (Corollary~\ref{cor:bipartite_stronger}).}
\end{corollary}
\subsubsection{$k$-Matroid Intersection}
Let ${\mathbf M}_j = (E, \mathcal{I}_j)$ $(j = 1, 2, \ldots, k)$ be $k$ matroids whose rank functions are $r_j \colon 2^E \to \mathbb{Z}_+$, and $\tilde c \in \mathbb{Z}_+^E$ be a stochastic weight.
The $k$-matroid intersection problem can be represented as
\begin{align}\label{eq:k-matroid_intersection}
\begin{array}{lll}
\text{maximize} &\ \displaystyle\sum_{e \in E} \tilde c_e x_e \\
\text{subject to} &\ \displaystyle\sum_{e \in S} x_e \le r_j(S) &\quad (S \subseteq E,\ j \in \{1, 2, \ldots, k\}), \\
&\ x \in \{0,1\}^E.
\end{array}
\end{align}
The important difference between the $2$-intersection and $k$-intersection ($k \ge 3$) problems is that the latter is NP-hard in the non-stochastic case.
Moreover, the LP relaxation of the system is not TDI.
Adamczyk et al.~\cite{adamczyk2016submodular} proposed an LP-relative $(1/k)$-approximation algorithm.
The expected number of revealed elements is $O(\hat{r} T)$, where $\hat{r}$ is the minimum of the ranks of the $k$ matroids.
The only remaining issue is the number of iterations.
Since the LP relaxation of \eqref{eq:k-matroid_intersection} is not TDI, we have to discretize the dual variables.
Moreover, since we could not prove the dual optimal solution is sparse, we use Theorem~\ref{thm:kolliopoulos2005}.
For $\theta = \Theta\left(\frac{\epsilon^2}{\log m}\right)$ and $\mu \geq 1$,
let us define a set $W \subseteq \left(\mathbb{R}_+^{2^E}\right)^k$ by
\begin{align}
W = \biggl\{\, &(y^1, \ldots, y^k) \in \left(\theta\mathbb{Z}_+^{2^E}\right)^k \, : \notag\\& \sum_{j,S} r_j(S) y^j_S \le \left(1 - \frac{\epsilon}{2}\right) \mu,\ \mathrm{supp}(y^j) \subseteq {\mathcal F}_{{\mathbf M}_j} \ (\forall j = 1, \ldots, k) \,\biggr\}.
\end{align}
By Theorem~\ref{thm:kolliopoulos2005}, $W$ is an $(\epsilon, \epsilon/2)$-witness cover,
and we can evaluate its size as follows.
\begin{Claim}
$|W| = e^{O(\mu k \log^2 m/\epsilon^2)}$.
\end{Claim}
\begin{proof}
To evaluate the size of $W$, as for the maximum independent set problem, we count each $y^j$ separately,
where we regard $y^j$ as a multiset in which each flat $S$ contributes $\theta$.
Let $r_1, \ldots, r_\ell$ be the ranks of the flats in $y^j$, counted with multiplicity (we write $\ell$ to avoid a clash with the number $k$ of matroids).
Then we have $r_1 + \cdots + r_\ell < \mu / \theta$.
By the same argument as for the maximum independent set problem, the number of dual candidates for $y^j$ is at most $e^{O((\mu/\theta)\log m)}$.
By multiplying the numbers of candidates for the $k$ coordinates, we obtain the required result
(recall $\theta = \Theta(\epsilon^2/\log m)$).
\end{proof}
Therefore, we obtain the following.
\begin{corollary}
By taking $T = \Omega(\Delta_c k \log m \log (m/\epsilon)/\epsilon^3 p)$,
for the $k$-matroid intersection problem,
Algorithm~\ref{alg:adaptive} outputs a $(1 - \epsilon)/k$-approximate solution with probability at least $1 - \epsilon$.
\end{corollary}
\subsubsection{Matchoid}
The matchoid problem is a common generalization of the matching problem and the matroid intersection problem.
Let $(V, E)$ be a graph with $|V| = n$ and $|E| = m$,
${\mathbf M}_v = (\delta(v), \mathcal{I}_v)$ be a matroid whose rank function is $r_v \colon 2^{\delta(v)} \to \mathbb{Z}_+$ for each vertex $v \in V$,
and $\tilde c \in \mathbb{Z}_+^E$ be a stochastic edge weight.
The task is to find a maximum-weight subset $F \subseteq E$ of edges such that $F \cap \delta(v) \in \mathcal{I}_v$ for every $v \in V$.
A naive LP formulation is as follows:
\begin{align}
\begin{array}{lll}
\text{maximize} &\ \displaystyle\sum_{e \in E} \tilde c_e x_e \\
\text{subject to} &\ \displaystyle\sum_{e \in S} x_e \le r_v(S) &\quad (v \in V,~S \subseteq \delta(v)), \\
&\ x \in \{0, 1\}^E.
\end{array}
\end{align}
Lee, Sviridenko, and Vondr\'ak \cite{lee2013matroid}\footnote{Precisely,
the discussion is given via a reduction to the matroid matching problem, which preserves the variables and the feasible region.}
proposed an LP-relative $(2/3)$-approximation algorithm.
For each vertex $v \in V$,
the expected number of revealed edges incident to $v$ is $O(r_vT)$, where $r_v = r_v(\delta(v)) \ge \sum_{e \in \delta(v)} x_e$.
The only remaining issue is the number of iterations.
Since it has exponentially many constraints (in the maximum degree), we have to exploit its combinatorial structure to reduce the number of possibilities.
The dual problem is given by
\begin{align}
\begin{array}{lll}
\text{minimize} &\ \displaystyle\sum_{v \in V}\sum_{S \subseteq \delta(v)} r_v(S) y_{v, S} \\
\text{subject to} &\ \displaystyle\sum_{v \in V}\sum_{S \subseteq \delta(v)\colon e \in S} y_{v, S} \ge \tilde c_e &\quad (e \in E), \\
&\ y \in \mathbb{R}_+^{\mathcal{S}},
\end{array}
\end{align}
where $\mathcal{S} = \{\, (v, S) \mid v \in V,~S \subseteq \delta(v) \,\} \subseteq V \times 2^E$.
Similarly to the other matroid problems, we can restrict the support of $y$ so that, if $y_{v, S} > 0$, then $S \subseteq \delta(v)$ is a flat in $\mathbf{M}_v$.
Let $\mathcal{F}_v$ be the set of flats in $\mathbf{M}_v$ and $\mathcal{F} = \{\, (v, S) \mid v \in V,~S \in \mathcal{F}_v \,\}$.
Based on the (TDI/2)-ness of the matroid matching polyhedron due to Gijswijt and Pap~\cite{gijswijt2013algorithm},
the following set is an $(\epsilon, \epsilon)$-witness cover:
\begin{align}
W = \left\{\, y \in \frac{1}{2} \mathbb{Z}_+^{\mathcal{S}} : \, \sum_{v \in V}\sum_{S \subseteq \delta(v)} r_v(S) y_{v, S} \le (1 - \epsilon) \mu,\ \mathrm{supp}(y) \subseteq {\mathcal F} \,\right\}.
\end{align}
Similarly to the maximum independent set case, $|W|$ can be bounded by $e^{O(\mu \log m)}$.
Thus we obtain the following.
\begin{corollary}
By taking $T = \Omega(\Delta_c \log (m/\epsilon)/\epsilon p)$, for the matchoid problem, \mbox{Algorithm~\ref{alg:adaptive}} outputs a $(2 - \epsilon)/3$-approximate solution with probability at least $1 - \epsilon$.
\end{corollary}
\begin{comment}
\subsubsection{Matroid Matching}
The matroid matching problem is a common generalization of the matching problem and the matroid intersection problem.
Let $(V, E)$ be a graph with $|V| = n$, ${\mathbf M} = (V, \mathcal{I})$ be a matroid whose rank function is $r\colon 2^V \to \mathbb{Z}_+$, and $\tilde c \in \mathbb{Z}_+^E$ be a stochastic edge weight.
The task is to find a maximum-weight subset of edges such that the endpoints form an independent set of the matroid ${\mathbf M}$.
Vande Vate~\cite{vate1992fractional} proposed the following LP formulation:
\begin{align}
\begin{array}{lll}
\text{maximize} &\ \displaystyle\sum_{e \in E} \tilde c_e x_e \\
\text{subject to} &\ \displaystyle\sum_{e \in E} a(S, e) x_e \le r(S) &\quad (S \subseteq V), \\
&\ x \in \{0, 1\}^E,
\end{array}
\end{align}
where $a(S, e) = |S \cap e|$.
Gijswijt and Pap~\cite{gijswijt2013algorithm} showed that the system is TDI/2, and proposed a polynomial-time algorithm for obtaining a fractional optimal solution.
A $(2/3)$-approximate integral solution can be obtained by rounding this solution by the procedure of Lee, Sviridenko, and Vondr\'ak \cite{lee2013matroid}\footnote{Lee, Sviridenko and Vondr\'ak~\cite{lee2013matroid} proposed a rounding procedure for a slightly different formulation. However, since the solution to the present formulation can be easily converted to the Lee et al.'s formulation, we can use the rounding procedure.}.
The expected number of revealed edges is $O(T)$ per vertex since $\sum_{e \in \delta(u)} x_e = \sum_{e \in E} a(\{u\},e) x_e \le r(\{u\}) \le 1$.
The only remaining issue is the number of iterations.
Since it has exponentially many constraints, we have to exploit its combinatorial structure to reduce the number of possibilities.
The dual problem is given by
\begin{align}
\begin{array}{lll}
\text{minimize} &\ \displaystyle\sum_{S \subseteq V} r(S) y_S \\
\text{subject to} &\ \displaystyle\sum_{S \subseteq V\colon e \in S} a(S,e) y_S \ge \tilde c_e &\quad (e \in E), \\
&\ y \in \mathbb{R}_+^{2^V}.
\end{array}
\end{align}
Since $a(S,e)$ is monotonically nondecreasing in $S$, we can restrict the support of $y$ to the set of flats.
We define a witness cover by
\begin{align}
W = \left\{\, y \in \frac{1}{2} \mathbb{Z}_+^{2^V} : \, \sum_{S \subseteq V} r(S) y_S \le (1 - \epsilon) \mu,\ \mathrm{supp}(y) \subseteq {\mathcal F}_{\mathbf M} \,\right\}.
\end{align}
By the same argument as for the matroid maximization problem, we obtain the following.
\begin{corollary}
By taking $T = \Omega(\Delta_c \log (n/\epsilon)/\epsilon p)$, for the matroid matching problem, Algorithm~\ref{alg:adaptive} outputs a $(2 - \epsilon)/3$-approximate solution with probability at least $1 - \epsilon$.
\end{corollary}
\end{comment}
\subsubsection{Degree Bounded Matroid}
Let $(V, E)$ be a hypergraph with $|V| = n$ whose maximum degree is $d = \max_{u \in V} |\delta(u)|$, and let $b \colon E \to \mathbb{Z}_+$ give the capacity of each hyperedge.
Let ${\mathbf M} = (V, \mathcal{I})$ be a matroid whose rank function is $r \colon 2^V \to \mathbb{Z}_+$,
and $\tilde c \in \mathbb{Z}_+^V$ be a stochastic weight.
The degree bounded matroid problem can be represented as
\begin{align}
\begin{array}{lll}
\text{maximize} &\ \displaystyle\sum_{u \in V} \tilde c_u x_u \\
\text{subject to} &\ \displaystyle\sum_{u \in e} x_u \le b(e) &\quad (e \in E), \\
&\ \displaystyle\sum_{u \in S} x_u \le r(S) &\quad (S \subseteq V), \\
&\ x \in \{0,1\}^V.
\end{array}
\end{align}
Kir{\'a}ly et al.~\cite{kiraly2012degree} proposed an
algorithm that finds a (possibly infeasible) solution whose objective value is at least the LP-optimal value
and which violates each capacity constraint by at most $d - 1$.
Since $\sum_{u \in V} x_u \le r(V) =: r$,
the expected number of revealed elements is $O(r T)$.
The only remaining issue is the number of iterations.
Since this system has exponentially many constraints, we have to exploit its combinatorial structure to reduce the number of possibilities.
The dual problem is given by
\begin{align}\label{eq:dual_DBM}
\begin{array}{lll}
\text{minimize} &\ \displaystyle\sum_{e \in E} b(e) y_e + \sum_{S \subseteq V} r(S) z_S =: \tau(y, z)\\
\text{subject to} &\ \displaystyle\sum_{e \in \delta(u)} y_e + \sum_{S \subseteq V \colon u \in S} z_S \ge \tilde c_u &\quad (u \in V), \\
&\ y \in \mathbb{R}_+^E,\ z \in \mathbb{R}_+^{2^V}.
\end{array}
\end{align}
Since the system is not TDI, we have to discretize the dual variables.
We could not prove the sparsity of $z$ but, by observing the sparsity of $y$ and using the matroid property, we can see that there exists a good discretization.
\begin{Claim}
Let $(y,z) \in \mathbb{R}_+^E \times \mathbb{R}_+^{2^V}$ be an optimal solution to \eqref{eq:dual_DBM} with $\tau(y, z) < (1 - \epsilon)\mu$.
Then, there exists a feasible solution $(y',z')$ with $\tau(y', z') < (1 - \epsilon/2)\mu$
whose entries are multiples of $\epsilon/2d$.
\end{Claim}
\begin{proof}
Let $x \in \mathbb{R}_+^V$ be a primal LP-optimal solution.
By complementary slackness, $y_e > 0$ only if the constraint $\sum_{u \in e} x_u \le b(e)$ holds in equality.
Thus, by summing up, we have
\begin{align}
\sum_{e\colon y_e > 0} b(e) = \sum_{e\colon y_e > 0} \sum_{u \in e} x_u \le \sum_{e \in E} \sum_{u \in e} x_u \le d \sum_{u \in V} x_u < d \mu.
\end{align}
Now we round up each entry of $y$ to the smallest multiple of $\epsilon/2d$ that is at least the entry, obtaining $y'$.
This increases the objective value by at most $(\epsilon/2d) \sum_{e\colon y_e > 0} b(e) < \epsilon \mu/2$.
Therefore the objective value of $(y',z)$ is less than $(1 - \epsilon/2) \mu$.
To discretize $z$, we consider the minimization problem with respect to $z$:
\begin{align}
\begin{array}{lll}
\text{minimize} &\ \displaystyle\sum_{S \subseteq V} r(S) z_S \\
\text{subject to} &\ \displaystyle\sum_{S \subseteq V \colon u \in S} z_S \ge \tilde c_u - \sum_{e \in \delta(u)} y'_e &\quad (u \in V).
\end{array}
\end{align}
This problem is the dual of the maximum independent set problem whose cost vector has entries that are multiples of $\epsilon/2d$.
Therefore, by the TDIness of the maximum independent set problem,
there exists an optimal solution $z'$ whose entries are multiples of $\epsilon/2 d$.
Thus, $(y',z')$ is feasible by construction and has an objective value of at most $(1 - \epsilon/2) \mu$.
\end{proof}
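The rounding used in the proof, taking each $y_e$ up to the next multiple of $\epsilon/2d$, can be sketched numerically; all data below are hypothetical:

```python
import math

def round_up_to_grid(y, step):
    """Round each entry up to the smallest multiple of `step`
    that is at least the entry (the rounding used in the proof)."""
    return [math.ceil(v / step) * step for v in y]

eps, d = 0.1, 3          # hypothetical parameters
step = eps / (2 * d)     # grid width eps / 2d
y = [0.07, 0.0, 0.249]   # hypothetical dual values
y_rounded = round_up_to_grid(y, step)
# Each entry increases by less than one grid step,
# so the total objective increase is bounded as in the proof.
increase = [yr - yo for yr, yo in zip(y_rounded, y)]
```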
As for the other matroid problems, we can assume that $\mathrm{supp}(z)$ is a set of flats.
For $\mu \geq 1$,
let us define a set $W \subseteq \mathbb{R}_+^E \times \mathbb{R}_+^{2^V}$ by
\begin{align}
W = \left\{\, (y,z) \in \frac{\epsilon}{2d}\left(\mathbb{Z}_+^E \times \mathbb{Z}_+^{2^V}\right) \, : \ \tau(y, z) \leq \left(1 - \frac{\epsilon}{2}\right) \mu,\ \mathrm{supp}(z) \subseteq {\mathcal F}_{\mathbf M} \,\right\}.
\end{align}
By construction, $W$ is an $(\epsilon,\epsilon/2)$-witness cover,
and we can evaluate its size as follows.
\begin{Claim}
$|W| = e^{O\left(\mu d \log n / \epsilon\right)}$.
\end{Claim}
\begin{proof}
To evaluate the size of $W$, we separately count $y$ and $z$.
The number of candidates for $y$ is evaluated similarly to Lemma~\ref{lem:tdi}, by distributing $2 d \mu/\epsilon$ tokens to $|E| = O(dn) = O(n^2)$ components, and is bounded by $e^{O(\mu d \log n / \epsilon)}$.
The number of candidates for $z$ is evaluated similarly to Claim~\ref{cl:maximum_independent_set}, by counting multisets in which each flat contributes $\epsilon/2d$, and is bounded by $e^{O(\mu d \log n / \epsilon)}$.
By multiplying these two numbers of candidates, we obtain the required result.
\end{proof}
Therefore, we obtain the following.
\begin{corollary}
For the degree bounded matroid problem with maximum degree $d$, by taking $T = \Omega(\Delta_c d \log (n/\epsilon) / \epsilon^2 p)$, Algorithm~\ref{alg:adaptive} outputs a $(1 - \epsilon)$-approximate solution that violates each capacity constraint by at most $d - 1$ with probability at least $1 - \epsilon$.
\end{corollary}
\subsection{Stable Set Problems}
We finally show applications to stable set problems.
In this section,
$n$ and $m$ denote the numbers of vertices and edges, respectively, in the graph in question.
\subsubsection{Stable Set in Some Perfect Graphs}
We assume that the stability number $\alpha$ (the maximum size of a stable set) is relatively small to ensure that Algorithm~\ref{alg:adaptive} does not reveal all the vertices.
By Tur\'an's theorem~\cite{turan1954theory}, a small stability number forces the average degree to be relatively large.
Let $(V, E)$ be a graph and $\tilde c \colon V \to \mathbb{Z}_+$ be a stochastic vertex weight.
The maximum stable set problem can be represented as
\begin{align}
\begin{array}{lll}
\text{maximize} &\ \displaystyle\sum_{u \in V} \tilde c_u x_u \\
\text{subject to} &\ x_u + x_v \le 1 &\quad ((u, v) \in E), \\
&\ x \in \{0,1\}^V.
\end{array}
\end{align}
The LP relaxation of this system is half-integral.
However, this is not helpful because the number of revealed vertices can be large: the all-halves vector ($x_u = 1/2$ for all $u \in V$) is feasible and may be optimal, which corresponds to revealing half of the vertices in expectation.
We instead consider the following formulation, which introduces the \emph{clique inequalities}:
\begin{align}
\begin{array}{lll}
\text{maximize} &\ \displaystyle\sum_{u \in V} \tilde c_u x_u \\
\text{subject to} &\ \displaystyle\sum_{u \in C} x_u \le 1 &\quad (C \in \mathcal{C}), \\
&\ x \in \{0,1\}^V,
\end{array}
\end{align}
where $\mathcal{C}$ is the set of maximal cliques.
A graph is \emph{perfect} if the LP relaxation of the above system is TDI.
If we assume that the graph is perfect, Algorithm~\ref{alg:adaptive} has an approximation ratio of $(1 - \epsilon)$ with high probability for sufficiently large $T$, and the number of revealed vertices is $O(\alpha T)$ in expectation.
The dual problem is given by
\begin{align}
\begin{array}{lll}
\text{minimize} &\ \displaystyle\sum_{C \in \mathcal{C}} y_C \\
\text{subject to} &\ \displaystyle\sum_{C \in \mathcal{C}\colon u \in C} y_C \ge \tilde c_u &\quad (u \in V), \\
&\ y \in \mathbb{R}_+^{\mathcal{C}}.
\end{array}
\end{align}
If the number of maximal cliques is $O(n^k)$ for some fixed constant $k$, we immediately see that the required number of iterations is $O(k \log (n/\epsilon)/\epsilon p)$.
A perfect graph may have exponentially many maximal cliques in general, but the following graph classes have at most polynomially many maximal cliques.
\begin{itemize}
\item
If a graph is chordal, it has only a linear number of maximal cliques.
\item
If a graph has a bounded clique number (i.e., the sizes of its cliques are bounded by a constant $k$), the number of maximal cliques is $O(n^k)$.
This includes graph classes that are characterized by forbidden minors or forbidden subgraphs.
\end{itemize}
\subsubsection{Stable Set in t-Perfect Graphs}
Another tractable graph class for the stable set problem is t-perfect graphs.
A graph $(V, E)$ is \emph{$t$-perfect} if the relaxation of the following formulation is integral, i.e., it has an integral optimal solution:
\begin{align}
\label{eq:tperfect}
\begin{array}{lll}
\text{maximize} &\ \displaystyle\sum_{u \in V} \tilde c_u x_u \\
\text{subject to} &\ x_u + x_v \le 1 &\quad ((u,v) \in E), \\
&\ \displaystyle\sum_{u \in C} x_u \le \floor{\frac{|C|}{2}} &\quad (C \in \mathcal{C}), \\
&\ x \in \{0,1\}^V,
\end{array}
\end{align}
where $\mathcal{C}$ is the set of odd cycles.
We assume that the graph is t-perfect.
Then, Algorithm~\ref{alg:adaptive} has an approximation ratio of $(1 - \epsilon)$ with high probability for sufficiently large $T$,
and the number of revealed vertices is $O(\alpha T)$.
The only remaining issue is the number of iterations.
Since the system \eqref{eq:tperfect} is not required to be TDI\footnote{A graph is \emph{strongly t-perfect} if the system in \eqref{eq:tperfect} is TDI.
Any strongly t-perfect graph is t-perfect, but the converse is open.}, we have to discretize the dual variables.
We use Theorem~\ref{thm:kolliopoulos2005}.
Let $\theta = \Theta\left(\frac{\epsilon^2}{\log n}\right)$.
The corresponding dual problem is given by
\begin{align}
\label{eq:tperfectdual}
\begin{array}{lll}
\text{minimize} &\ \displaystyle\sum_{e \in E} y_e + \sum_{C \in \mathcal{C}} \floor{\frac{|C|}{2}} z_C \\
\text{subject to} &\ \displaystyle\sum_{e \in \delta(u)} y_e + \sum_{C \in \mathcal{C}\colon u \in C} z_C \ge \tilde c_u &\quad (u \in V), \\
&\ y \in \mathbb{R}_+^E,\ z \in \mathbb{R}_+^\mathcal{C}.
\end{array}
\end{align}
We regard $z$ as a multiset in which each odd cycle $C$ contributes $\theta$.
Let $c_1, \ldots, c_k$ be the sizes of the odd cycles in this multiset.
We then have $c_1 + \cdots + c_k = O(\mu / \theta)$.
We define the witness cover by
\begin{align}
W = \left\{\, (y, z) \in \theta \left(\mathbb{Z}_+^E \times \mathbb{Z}_+^{\mathcal{C}}\right) \, : \
\sum_{e \in E} y_e + \sum_{C \in \mathcal{C}} \floor{\frac{|C|}{2}} z_C\le \left(1 - \frac{\epsilon}{2}\right) \mu \,\right\}.
\end{align}
\begin{Claim}
$|W| \le e^{O((\mu/\theta) \log n)}$.
\end{Claim}
\begin{proof}
To evaluate the size of $W$, we count $y$ and $z$ separately.
The number of candidates for $y$ is clearly $e^{O((\mu/\theta) \log m)}$ (and $m = O(n^2)$),
while the number of candidates for $z$ is bounded by $e^{O((\mu/\theta) \log n)}$ by an argument similar to that of Claim~\ref{claim:nonbipartite}.
\end{proof}
Therefore, we obtain the following.
\begin{corollary}
By taking $T = \Omega(\Delta_c \log n \log (n/\epsilon)/\epsilon^3 p)$,
for the stable set problem in t-perfect graphs, Algorithm~\ref{alg:adaptive} outputs a $(1 - \epsilon)$-approximate solution with probability at least $1 - \epsilon$.
\end{corollary}
\section{General Framework}
\label{sec:general}
Throughout the paper (with one exception as remarked later),
we assume that the constraints in \eqref{eq:packing} satisfy several reasonable conditions.
\begin{assumption}\label{asmp:constraint}
We assume that $A \in \mathbb{Z}_+^{n \times m}$ and $b \in \mathbb{Z}_+^n$ in \eqref{eq:packing} satisfy the following three conditions\footnote{The first two are assumed without loss of generality (by removing the corresponding constraints and variables if violated).
The third one is for simplicity, which holds for most of applications.
The generalizability to remove it is discussed in Section~\ref{sec:k-CSPIP} with a specific application.}:
\begin{enumerate}
\renewcommand{\labelenumi}{\alph{enumi}.}
\item
$b \geq 1$;
\item
$A\chi_j \leq b$ for each $j = 1, 2, \ldots, m$, where $\chi_j \in \{0, 1\}^m$ denotes the $j$-th unit vector;
\item
$Ax \leq b$ and $x \geq 0$ imply $x \leq 1$.
\end{enumerate}
\end{assumption}
We give a general framework of adaptive and non-adaptive algorithms for our problem in Section~\ref{sec:two_strategies},
and then describe a unified methodology for its performance analysis in Section~\ref{sec:performance_analysis}.
The main results are stated as Theorems \ref{thm:adaptive} and \ref{thm:nonadaptive}, whose proofs are separately shown in Section~\ref{sec:proofs}.
\subsection{Two Strategies}\label{sec:two_strategies}
To describe two strategies,
we formally define two auxiliary problems, the \emph{optimistic LP} and the \emph{pessimistic LP}.
We define the \emph{optimistic vector} $\overline{c} \in \mathbb{Z}_+^m$ and the \emph{pessimistic vector} $\underline{c} \in \mathbb{Z}_+^m$ as follows:
\begin{align}
\overline{c}_j = \begin{cases} c_j & j \text{ has been queried}, \\ c_j^+ & \text{otherwise}, \end{cases}\quad
\underline{c}_j = \begin{cases} c_j & j \text{ has been queried}, \\ c_j^- & \text{otherwise}, \end{cases}
\end{align}
where recall that $c_j \in \mathbb{Z}_+$ denotes the realized value of $\tilde c_j$.
The optimistic and pessimistic LPs are obtained from the original stochastic problem \eqref{eq:packing}
by replacing the objective vector $\tilde c$ with $\overline{c}$ and with $\underline{c}$, respectively,
and by relaxing the constraint $x \in \{0, 1\}^m$ to $x \in \mathbb{R}_+^m$,
where $\mathbb{R}_+$ denotes the set of nonnegative reals.
By Assumption~\ref{asmp:constraint}.c ($Ax \leq b$ and $x \geq 0$ imply $x \leq 1$),
the relaxed constraint is equivalent to $x \in [0, 1]^m$.
Note that these problems are no longer stochastic, i.e., contain no random variables.
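The two vectors can be assembled directly from the query history; a minimal sketch (the function name and the instance are hypothetical):

```python
def objective_vectors(realized, queried, c_plus, c_minus):
    """Build the optimistic vector c_bar and the pessimistic vector
    c_under: a queried coordinate takes its realized value; an
    unqueried one takes its upper / lower bound, respectively."""
    m = len(c_plus)
    c_bar = [realized[j] if j in queried else c_plus[j] for j in range(m)]
    c_under = [realized[j] if j in queried else c_minus[j] for j in range(m)]
    return c_bar, c_under

realized = {0: 4, 2: 1}          # values revealed so far
queried = set(realized)
c_plus, c_minus = [5, 7, 3], [1, 2, 0]
c_bar, c_under = objective_vectors(realized, queried, c_plus, c_minus)
# c_bar = [4, 7, 1], c_under = [4, 2, 1]
```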
\begin{algorithm}[t]
\caption{Adaptive strategy.}
\label{alg:adaptive}
\begin{algorithmic}[1]
\For{$t = 1, 2, \ldots, T$}\label{line:1}
\State{Find an optimal solution $x$ to the optimistic LP.} \label{line:2}
\State{For each $j = 1, \ldots, m$, conduct a query to reveal $\tilde c_j$ with probability $x_j$.}\label{line:3}
\EndFor
\State{Find an integral feasible solution to the pessimistic LP and return it.}\label{line:5}
\end{algorithmic}
\end{algorithm}
\begin{algorithm}[t]
\caption{Non-adaptive strategy.}
\label{alg:nonadaptive}
\begin{algorithmic}[1]
\For{$t = 1, 2, \ldots, T$}
\State{Find an optimal solution $x$ to the optimistic LP.}\label{line:2'}
\State{For each $j = 1, \ldots, m$, suppose $\tilde c_j = c_j^-$ with probability $x_j$.}\label{line:3'}
\EndFor
\State{For every $j$ with $\tilde c_j = c_j^-$ supposed at Line~\ref{line:3'}, conduct a query to reveal $\tilde c_j$.}
\State{Find an integral feasible solution to the pessimistic LP and return it.}\label{line:6}
\end{algorithmic}
\end{algorithm}
First, we describe the {\em adaptive strategy} shown in Algorithm~\ref{alg:adaptive}.
In this strategy, we iteratively compute an optimal solution $x \in [0, 1]^m$ to the optimistic LP\footnote{\label{ft:5}Note that,
if the optimal solution $x$ is written as a convex combination $\sum_{i} \lambda_i x^{(i)}$ of basic feasible solutions $x^{(i)}$,
then every $x^{(i)}$ is also optimal and one can replace $x$ with any $x^{(i)}$.
In particular, when the considered polyhedron is integral (i.e., every extreme point is an integral vector),
Algorithms~\ref{alg:adaptive} and \ref{alg:nonadaptive} can be derandomized based on this observation.},
and reveal each $\tilde c_j$ with probability $x_j$.
After $T$ iterations, we find an integral feasible solution to the pessimistic LP,
where we are free to choose the algorithm for the corresponding non-stochastic problem.
As remarked in Section~\ref{sec:contributions}, how to execute the last step depends heavily on each specific problem.
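As a toy illustration of the loop in Algorithm~\ref{alg:adaptive}, consider a single knapsack constraint, for which the optimistic LP can be solved greedily by value density (no LP solver needed); the instance and helper names are hypothetical, and this is a sketch rather than the general implementation:

```python
import random

def fractional_knapsack(c, a, b):
    """Optimistic LP for one constraint sum_j a_j x_j <= b with
    0 <= x <= 1: fill greedily by value density (assumes a_j > 0)."""
    x = [0.0] * len(c)
    cap = b
    for j in sorted(range(len(c)), key=lambda j: -c[j] / a[j]):
        take = min(1.0, cap / a[j])
        x[j] = take
        cap -= take * a[j]
        if cap <= 0:
            break
    return x

def adaptive_queries(c_real, c_plus, a, b, T, rng):
    """T rounds: solve the optimistic LP, then reveal each
    coordinate j independently with probability x_j."""
    queried = set()
    for _ in range(T):
        c_bar = [c_real[j] if j in queried else c_plus[j] for j in range(len(a))]
        x = fractional_knapsack(c_bar, a, b)
        for j, xj in enumerate(x):
            if rng.random() < xj:
                queried.add(j)
    return queried

rng = random.Random(0)
queried = adaptive_queries(c_real=[3, 1, 4], c_plus=[6, 5, 4],
                           a=[2, 2, 3], b=4, T=3, rng=rng)
```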
Next, we describe the {\em non-adaptive strategy} shown in Algorithm~\ref{alg:nonadaptive}.
As with the adaptive strategy, we solve the optimistic LP at each step.
To be non-adaptive, the algorithm tentatively assigns values to $\tilde c_j$ pessimistically instead of revealing their realized values.
After the iterations, it reveals all these values and then computes an integral feasible solution to the pessimistic LP by an algorithm for the non-stochastic problem.
\subsection{Performance Analysis}\label{sec:performance_analysis}
We now analyze the performance of algorithms within our framework.
First, we consider the adaptive strategy (Algorithm~\ref{alg:adaptive}).
As described in Section~\ref{sec:formulation},
we evaluate the trade-off between the following two factors,
each of which naturally decomposes into two sub-factors.
\begin{itemize}
\item[(1)]
{\bf The number of conducted queries.}
\begin{itemize}
\item[(1-a)]
\emph{Expected number of queries at Line~\ref{line:3}}.
If this number is large, the algorithm may reveal all relevant $\tilde c_j$ in a few iterations, making the algorithm trivial.
\item[(1-b)]
\emph{Required number of iterations $T$ at Line~\ref{line:1}}.
If $T$ is very large, then, as in (1-a), the algorithm may reveal all relevant $\tilde c_j$, making the algorithm trivial.
\end{itemize}
\item[(2)]
{\bf The quality of the output solution.}
Basically, we want to find a feasible solution to \eqref{eq:packing} with a large objective value,
which is at most the {\em omniscient} optimal value of \eqref{eq:packing} after all $\tilde c_j$ are revealed.
\begin{itemize}
\item[(2-a)]
\emph{Closeness between the pessimistic and omniscient LPs}.
The omniscient optimal value of \eqref{eq:packing} is at most the optimal value $\tilde\mu$ of the {\em omniscient LP},
which is obtained by revealing all $\tilde c_j$ and by relaxing $x \in \{0, 1\}^m$ to $x \in \mathbb{R}_+^m$.
If the pessimistic LP-optimal value at Line~\ref{line:5} is close to $\tilde\mu$,
then, at least as an LP, the pessimistic problem is close to the problem that we want to solve.
\item[(2-b)]
\emph{LP-relative approximation ratio at Line~\ref{line:5}}.
If one can find an integral feasible solution such that
the ratio between its objective value and the LP-optimal value is bounded, then, combined with (2-a),
a reasonable bound on the objective value of the output solution can be obtained.
\end{itemize}
\end{itemize}
Essentially, (1-a) and (2-b) are properties of each specific problem and its LP formulation.
Thus, we postpone these two factors to the discussion on applications in Section~\ref{sec:applications},
and focus on (1-b) and (2-a) in the general study in this section.
That is, our goal here is to estimate $T$ such that the optimal value of the pessimistic LP after $T$ iterations
is at least $(1 - \epsilon) \tilde\mu$ with high probability,
where $\tilde\mu$ is the optimal value of the omniscient LP and $\epsilon > 0$ is a parameter one can choose.
Note again that $\tilde\mu$ is a random variable depending on the realization of $\tilde c_j$.
To evaluate the number of iterations $T$, we consider the dual of the pessimistic LP:
\begin{align}
\label{eq:packingdual}
\begin{array}{ll}
\text{minimize} &\ y^\top b \\[1mm]
\text{subject to} &\ y^\top A \ge \underline{c}^\top\!, \\[1mm]
&\ y \in \mathbb{R}_+^n.
\end{array}
\end{align}
By the LP strong duality, it is sufficient to evaluate the probability that this dual LP has no feasible solution whose objective value is less than $(1 - \epsilon) \tilde\mu$.
Now we introduce the notion of a \emph{witness cover}, which is the most important concept in this study.
Intuitively, a witness cover for $\mu \in \mathbb{R}_+$
is a set of ``representatives'' of all the dual feasible solutions
with objective values of at most $(1 - \epsilon) \mu$.
More specifically, for any primal objective vector, if some dual feasible solution has objective value at most $(1 - \epsilon)\mu$,
then a witness cover contains at least one such dual solution.
\begin{definition}\label{def:witness}
Let $A \in \mathbb{Z}_+^{n\times m}$, $b \in \mathbb{Z}_+^n$, and $\epsilon, \epsilon' \in \mathbb{R}_+$ with $0 < \epsilon' \leq \epsilon$.
A finite set $W \subseteq \mathbb{R}_+^n$ of dual vectors is an \emph{$(\epsilon,\epsilon')$-witness cover for $\mu \in \mathbb{R}_+$} if it satisfies the following two properties.
\begin{enumerate}
\item For every $c \in \mathbb{Z}_+^m$,
if $y^\top A \geq c^\top$ is violated (i.e., $(y^\top A)_j < c_j$ for some $j$) for all $y \in W$,
then $y^\top A \geq c^\top$ is violated for all $y \in \mathbb{R}_+^n$ with $y^\top b \leq (1 - \epsilon)\mu$.
\item $y^\top b \leq (1 - \epsilon') \mu$ holds for all $y \in W$.
\end{enumerate}
\end{definition}
\begin{example}
Consider the bipartite matching case (see Section~\ref{sec:bipartite} for details).
In the LP relaxation of the naive formulation \eqref{eq:bipartite},
the constraint system is totally dual integral (see Section~\ref{sec:tdipoly} for details),
and each dual vector is an assignment of nonnegative reals to vertices, whose sum is the objective value.
Hence, the set of assignments of nonnegative integers to vertices whose sum is at most $(1 - \epsilon)\mu$
is an $(\epsilon, \epsilon)$-witness cover for $\mu$.
\end{example}
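As a sanity check on the size of this witness cover (not taken from the text), the number of integer vectors $y \in \mathbb{Z}_+^n$ with $\sum_i y_i \le B$ can be counted directly and equals $\binom{n+B}{n}$ by stars and bars; with $B = \lfloor (1 - \epsilon)\mu \rfloor$ this becomes a bound of the form $M^\mu$ once $n = O(\mu)$, which vertex sparsification later ensures.

```python
from itertools import product
from math import comb

def witness_cover_size(n, budget):
    """Count integer dual vectors y in Z_+^n with sum(y) <= budget,
    i.e., the witness cover of the bipartite-matching example with
    budget = floor((1 - eps) * mu)."""
    return sum(1 for y in product(range(budget + 1), repeat=n)
               if sum(y) <= budget)

# Stars and bars: the count is C(n + budget, n).
assert witness_cover_size(3, 4) == comb(7, 3)   # 35
assert witness_cover_size(2, 5) == comb(7, 2)   # 21
```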
During the iterations, the constraints in the dual pessimistic LP \eqref{eq:packingdual} become successively stronger.
Hence, for any witness cover $W$ for the omniscient LP-optimal value $\tilde\mu$, every $y \in W$ eventually becomes infeasible to \eqref{eq:packingdual} (by the second condition in Definition~\ref{def:witness}).
By evaluating the probability that all $y \in W$ become infeasible after $T$ iterations, we obtain a bound on the required number of iterations.
Note again that $\tilde\mu$ is a random variable, and hence
we assume that there exists a relatively small witness cover
for every possible objective value $\mu$,
which can be restricted to $\mu \geq 1$ due to Assumption~\ref{asmp:constraint}.b
(see the proof for details).
\begin{theorem}
\label{thm:adaptive}
Let $M \in \mathbb{R}_+$,
and suppose that there exists an $(\epsilon, \epsilon')$-witness cover of size at most $M^\mu$ for every $\mu \geq 1$.
Then, by taking
\begin{align}\label{eq:iterations}
T \geq \frac{\Delta_c}{\epsilon' p}\log\left(\frac{M}{\delta}\right),
\end{align}
the pessimistic LP at Line~\ref{line:5} of Algorithm~\ref{alg:adaptive} has a $(1 - \epsilon)$-approximate solution with probability at least $1 - \delta$, where $\Delta_c = \max_j (c_j^+ - c_j^-)$ and $0 < \delta < 1$.
\end{theorem}
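The bound \eqref{eq:iterations} can be evaluated numerically; the snippet below (with hypothetical parameter values) returns the smallest admissible integer $T$.

```python
from math import ceil, log

def iteration_bound(delta_c, eps_prime, p, M, delta):
    """Smallest integer T with T >= (Delta_c / (eps' * p)) * log(M / delta),
    as required by the theorem."""
    return ceil(delta_c / (eps_prime * p) * log(M / delta))

# e.g. 0/1 weights (Delta_c = 1), eps' = 0.1, p = 0.5, M = 10, delta = 0.01:
assert iteration_bound(1, 0.1, 0.5, 10, 0.01) == 139   # ceil(20 * ln(1000))
```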
For the non-adaptive algorithm (Algorithm~\ref{alg:nonadaptive}), a similar analysis combined with a case analysis yields the required number of iterations with a provable approximation ratio.
\begin{theorem}
\label{thm:nonadaptive}
Under the same assumption as Theorem~\ref{thm:adaptive},
by taking $T$\! as \eqref{eq:iterations},
the pessimistic LP at Line~\ref{line:6} of Algorithm~\ref{alg:nonadaptive} has a $(1 - \epsilon)/2$-approximate solution with probability at least $1 - \delta$.
\end{theorem}
These theorems show that if there exists a small witness cover (for each possible $\mu$),
Algorithms~\ref{alg:adaptive} and \ref{alg:nonadaptive} will find good solutions in a reasonable number of iterations.
It is worth emphasizing that we only have to prove the existence of such a witness cover, i.e., we do not have to construct it algorithmically.
We discuss how to prove the existence of such witness covers
(i.e., how to construct them theoretically)
in general and in each specific application,
in Sections~\ref{sec:witness} and \ref{sec:applications}, respectively.
\subsection{Proofs of Main Theorems}\label{sec:proofs}
\subsubsection*{Proof of Theorem~\ref{thm:adaptive}}
Let $\tilde\mu$ be the optimal value of the omniscient LP.
If $\tilde\mu = 0$ then the statement obviously holds (with probability 1).
Thus we restrict ourselves to the case when $\tilde\mu > 0$.
Note that $\tilde \mu > 0$ implies $\tilde\mu \geq 1$ as follows.
If $\tilde \mu > 0$ then $\tilde c_j = c_j \geq 1$ for some $j$, and by Assumption~\ref{asmp:constraint}.b, the $j$-th unit vector $\chi_j \in \{0, 1\}^m$ is feasible
(i.e., $A \chi_j \leq b$); therefore $\tilde\mu \ge c^\top \chi_j = c_j \geq 1$.
For each $\mu \geq 1$, fix an $(\epsilon, \epsilon')$-witness cover $W_\mu$ of size $|W_\mu| \le M^\mu$.
We first evaluate the probability that each $y \in W_{\tilde\mu}$ is feasible after $T$ iterations.
Since some $\tilde c_j$ is newly revealed,
some constraints may be violated (i.e., $(y^\top A)_j < c_j$ may happen).
Once $y$ has become infeasible, it never returns to feasible due to the monotonicity of $\underline{c}$ throughout Algorithm~\ref{alg:adaptive}.
Therefore, $y$ is feasible after $T$ iterations only if
$y$ is feasible at every iteration step.
Fix $t \in \{1, 2, \ldots, T\}$, and consider the probability that
a vector $y$ in each witness cover that is feasible at the beginning of the $t$-th step
remains feasible at the end of that step.
Let $\overline{c}, \underline{c} \in \mathbb{Z}_+^m$ be
the optimistic and pessimistic vectors, respectively, at Line~\ref{line:2} in the $t$-th step,
and $\overline{\mu}, \underline{\mu} \in \mathbb{R}_+$
the optimal values of the corresponding LPs.
Note that $\overline{\mu}$ and $\underline{\mu}$ are respectively upper and lower bounds on $\tilde\mu$ at that time.
\begin{Claim}
\label{lem:feasibility}
For every $\mu \in [\underline{\mu}, \overline{\mu}]$ and each $y \in W_{\mu}$ with $y^\top A \geq \underline{c}^\top$\!,
the probability that $y$ is feasible after Line~\ref{line:3} is at most $\exp \left( - \epsilon' p\mu/\Delta_c \right)$.
\end{Claim}
\begin{proof}
Since $y$ is feasible at the beginning of the step,
$y$ is feasible after Line~\ref{line:3} only if no possibly violated constraint is revealed to be $c_j^+$.
We can evaluate the number of possibly violated constraints at this step using the following inequality:
\begin{align}\label{eq:c-x}
\overline{c}^\top x \le \overline{c}^\top x + y^\top (b - A x) = y^\top b + (\overline{c}^\top - y^\top A) x,
\end{align}
where $x \in [0, 1]^m$ is the optimal solution to the optimistic LP obtained in Line~\ref{line:2}.
Since the optimistic vector $\overline{c}$ dominates the actual vector $\tilde c$
(irrespective of which values are realized),
we have $\overline{c}^\top x \ge \tilde\mu$.
Since $y \in W_{\mu}$, we have $y^\top b \le (1 - \epsilon') \mu$.
Therefore, we derive from \eqref{eq:c-x}
\begin{align}\label{eq:delta-mu}
\epsilon'\mu \le (\overline{c}^\top - y^\top A)x
\le \sum_{j\colon \text{violated}} (c_j^+ - \underline{c}_j) x_j
\le \Delta_c \sum_{j\colon \text{violated}} x_j,
\end{align}
where we say that $j$ is {\em violated} if $(y^\top A)_j < c_j$ for the realized value $c_j$ of $\tilde c_j$,
and note that $\overline{c}_j \leq c_j^+$, $(y^\top A)_j \geq \underline{c}_j$, and $\underline{c}_j \geq c_j^+ - \Delta_c$ for every $j$.
Since the left-hand side of \eqref{eq:delta-mu} is positive, there must exist possibly violated constraints in the support of $x$. If one of them, say $\tilde c_j$, is queried (with probability $x_j$) and revealed to be $c_j^+$ (with probability at least $p$), then $y$ becomes infeasible.
Then the probability that $y$ is still feasible after this step is at most
\begin{align}
\quad \prod_{j\colon \text{violated}} (1 - p x_j) &\le \exp \left( -p \sum_{j\colon \text{violated}} x_j \right)
\le \exp \left( \frac{- p \epsilon'\mu}{\Delta_c} \right). \quad \qedhere
\end{align}
\end{proof}
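The final step of the proof rests only on the elementary bound $1 - z \le e^{-z}$; the following quick numerical check (illustrative, not part of the proof) confirms the product bound for random fractional vectors.

```python
import math
import random

# For any p and x_j in [0, 1], prod_j (1 - p x_j) <= exp(-p * sum_j x_j),
# since 1 - z <= exp(-z) holds for every real z.
random.seed(0)
p = 0.3
for _ in range(1000):
    xs = [random.random() for _ in range(8)]
    lhs = math.prod(1 - p * x for x in xs)
    rhs = math.exp(-p * sum(xs))
    assert lhs <= rhs + 1e-12
```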
By applying Claim~\ref{lem:feasibility} to $\tilde\mu$ (the omniscient LP-optimal value) $T$ times,
we obtain that the probability that each $y \in W_{\tilde\mu}$ is feasible after $T$ iterations
is at most $\exp \left( - \epsilon' p\tilde\mu T/\Delta_c \right)$.
By the union bound, the probability that $W_{\tilde\mu}$ has at least one feasible solution to the dual pessimistic LP \eqref{eq:packingdual} after $T$ iterations is at most $|W_{\tilde\mu}| \exp \left( - \epsilon' p\tilde\mu T / \Delta_c \right)$,
which is at most $\exp \left( \tilde\mu \log M - \epsilon' p\tilde\mu T / \Delta_c \right)$.
By taking $T \ge \Delta_c \log (M/\delta) / \epsilon' p$,
the latter value is bounded by $\exp(\tilde\mu \log\delta) \leq \delta$
(recall that $\tilde\mu \geq 1$ and $0 < \delta < 1$).
By the definition of witness cover and strong duality,
we conclude that the optimal value of the pessimistic LP at Line~\ref{line:5} of Algorithm~\ref{alg:adaptive} is at least $(1 - \epsilon) \tilde\mu$ with probability at least $1 - \delta$.
\subsubsection*{Proof of Theorem~\ref{thm:nonadaptive}}
The following proof is a simple extension of Theorem~5.1 in Assadi et al.~\cite{assadi2016stochastic} for the stochastic matching problem.
Let $\tilde\mu$ be the optimal value of the omniscient LP,
and we assume $\tilde\mu \geq 1$ as in the proof of Theorem~\ref{thm:adaptive}.
In the above analysis of Algorithm~\ref{alg:adaptive}, it is ensured that there exists a solution $x$ with $\overline{c}^\top x \ge \tilde\mu$, which ensured that the last pessimistic LP in Algorithm~\ref{alg:adaptive} has an optimal value of at least $(1 - \epsilon) \tilde\mu$.
However, in the non-adaptive case, we may not be able to find such a solution because each $\tilde c_j$ is not revealed but rounded down.
To overcome this issue, we define $\tilde\mu' \in \mathbb{R}_+$ as the minimum objective value obtained at Line~\ref{line:2'} of Algorithm~\ref{alg:nonadaptive}.
Note that, since the optimal value of the optimistic LP solved at Line~\ref{line:2'} is monotonically non-increasing, $\tilde\mu'$ is the objective value obtained at the $T$-th step.
By applying Claim~\ref{lem:feasibility} to $\tilde\mu'$ (instead of $\tilde\mu$) $T$ times, we obtain the following claim.
\begin{Claim}
\label{lem:largecase}
By taking $T \ge \Delta_c \log (M/\delta) / \epsilon' p$ in Algorithm~\ref{alg:nonadaptive}, the optimal value of the pessimistic LP at Line~\ref{line:6} is at least $(1 - \epsilon) \tilde\mu'$ with probability at least $1 - \delta$.
\end{Claim}
If $\tilde\mu' \ge \tilde\mu/2$, we can immediately prove the theorem.
Thus, we consider the case $\tilde\mu' < \tilde\mu/2$, obtaining the following claim.
\begin{Claim}
\label{lem:smallcase}
If $\tilde\mu' < \tilde\mu/2$, the optimal value of the pessimistic LP at Line~\ref{line:6} of Algorithm~\ref{alg:nonadaptive} is at least $\tilde\mu/2$.
\end{Claim}
\begin{proof}
We use the subscripts $R$ and $N$ to denote the revealed and unrevealed entries in the primal vector, respectively,
i.e., $\tilde c_R = c_R$ has been realized and the rest $\tilde c_N$ has not been revealed.
Let $x^* \in \mathbb{R}_+^m$ be an optimal solution to the (primal) omniscient LP
(which is a random variable depending on the realization of $\tilde c_N$). We then have
\begin{align}
c_R^\top x_R^* + \tilde c_N^{\top} x_N^* = \tilde\mu.
\end{align}
Since $(0, x_N^*) \in \mathbb{R}_+^m$ is a feasible solution to the optimistic LP at the last iteration, we have
\begin{align}
c_N^{+\top} x_N^* \le \tilde\mu' < \tilde\mu / 2.
\end{align}
Therefore, the objective value of $x^*$ in the pessimistic LP at Line~\ref{line:6} is bounded from below as follows:
\begin{align}
\underline{c}^\top x^* \ge c_R^\top x_R^* \ge
c_R^\top x_R^* + (\tilde c_N^{\top} - c_N^{+\top}) x_N^* = \tilde\mu - c_N^{+\top} x_N^* > \tilde\mu / 2.
\end{align}
This means that the pessimistic LP-optimal value is at least $\tilde\mu / 2$.
\end{proof}
This completes the proof of the theorem.
\section{Introduction}
\subsection{Problem Formulation}\label{sec:formulation}
We study a stochastic variant of linear programming (LP) with the 0/1-integer constraint,
which enables us to discuss such variants of various packing-type combinatorial optimization problems
such as matching, matroid, and stable set problems in a unified manner.
Specifically, we introduce the \emph{stochastic packing integer programming problem} defined as follows:
\begin{align}
\label{eq:packing}
\begin{array}{ll}
\text{maximize} &\ \tilde c^\top x \\[1mm]
\text{subject to} &\ A x \le b, \\[1mm]
&\ x \in \{0, 1\}^m,
\end{array}
\end{align}
where $A \in \mathbb{Z}_+^{n \times m}$ and $b \in \mathbb{Z}_+^n$,
and $\mathbb{Z}_+$ denotes the set of nonnegative integers.
The objective vector $\tilde c \in \mathbb{Z}_+^m$ is \emph{stochastic} in the following sense.
\begin{itemize}
\item
The entries $\tilde c_j$ $(j = 1, 2, \ldots, m)$ are independent random variables
with some hidden distributions for which we are given the following information: for each $j$,
\begin{itemize}
\item
the domain of $\tilde c_j$ is an integer interval $\{c_j^-, c_j^- + 1, \ldots, c_j^+\}$ given by $c_j^-, c_j^+ \in \mathbb{Z}_+$, and
\item
the probability that $\tilde c_j = c_j^+$ is at least a given constant $p \in (0, 1]$ (which is independent of $j$), i.e., $c_j^- \leq \tilde c_j \le c_j^+ - 1$ occurs with probability at most $1 - p$.
\end{itemize}
\item
When an instance ($A$, $b$, and the above information on $\tilde c$) is given,
the {\em realized values} of all $\tilde c_j$, denoted by $c_j$,
are hiddenly fixed by nature according to the above distributions.
\item
For each $j$, we are allowed to conduct a query to reveal the realized value $c_j$ of $\tilde c_j$.
\end{itemize}
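To make the model concrete, here is a small Python sketch. The particular distribution below is hypothetical: the model only fixes the support and a lower bound $p$ on the probability of the top value, and any distribution consistent with that is allowed.

```python
import random

def sample_realization(c_minus, c_plus, p, rng):
    """Draw realized values c_j: the top value c_j^+ with probability
    exactly p, otherwise uniform on {c_j^-, ..., c_j^+ - 1}.  This is one
    distribution consistent with the model; any other is also allowed."""
    c = []
    for lo, hi in zip(c_minus, c_plus):
        if lo == hi or rng.random() < p:
            c.append(hi)
        else:
            c.append(rng.randint(lo, hi - 1))
    return c

rng = random.Random(42)
c_minus, c_plus = [0, 1, 2], [3, 1, 5]
c = sample_realization(c_minus, c_plus, 0.5, rng)
# a query for j simply reads off the hidden value c[j]
assert all(lo <= cj <= hi for lo, cj, hi in zip(c_minus, c, c_plus))
assert c[1] == 1   # degenerate interval: c_j^- = c_j^+ = 1
```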
Note that, since all $\tilde c_j$ are independent,
we can consider at any time that each realized value $c_j$ is determined just when a query for $j$ is conducted.
\begin{example}\label{ex:SM}
Our problem captures the \emph{stochastic matching problem} introduced by Blum et al.~\cite{blum2015ignorance} as follows.
In the stochastic matching problem, we are given an undirected graph $G = (V, E)$ such that
each edge $e \in E$ is \emph{realized} with probability at least $p \in (0, 1]$,
and the goal is to find a large matching that consists of realized edges.
We can know whether each edge is realized or not by conducting a query.
A naive formulation of this situation as our problem is obtained by restricting the domain of $\tilde{c} \in \mathbb{Z}_+^E$ to $\{0, 1\}^{E}$,
by letting $A \in \mathbb{Z}_+^{V \times E}$ be the vertex-edge incidence matrix of $G$, and by setting $b = 1$.
Section~\ref{sec:nonbipartite} gives a more detailed discussion with general edge weights.
\end{example}
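A minimal sketch of this naive formulation (pure Python, for illustration): the rows of $A$ are vertices, the columns are edges, and $Ax \le \mathbf{1}$ for a 0/1 vector $x$ says exactly that the chosen edges form a matching.

```python
def incidence_matrix(n_vertices, edges):
    """Vertex-edge incidence matrix A of an undirected graph,
    as in the naive formulation of stochastic matching (with b = 1)."""
    A = [[0] * len(edges) for _ in range(n_vertices)]
    for j, (u, v) in enumerate(edges):
        A[u][j] = 1
        A[v][j] = 1
    return A

def is_feasible(A, x, b):
    """Check A x <= b componentwise for a 0/1 vector x."""
    return all(sum(a * xj for a, xj in zip(row, x)) <= bi
               for row, bi in zip(A, b))

# Path graph 0-1-2-3 with edges e0=(0,1), e1=(1,2), e2=(2,3).
A = incidence_matrix(4, [(0, 1), (1, 2), (2, 3)])
assert is_feasible(A, [1, 0, 1], [1] * 4)      # {e0, e2} is a matching
assert not is_feasible(A, [1, 1, 0], [1] * 4)  # e0 and e1 share vertex 1
```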
Our aim is to find a feasible solution to \eqref{eq:packing} with a large objective value by conducting a small number of queries.
Note that we can definitely obtain an optimal solution by solving the corresponding non-stochastic problem after conducting queries for all $j$.
Our interest is therefore in the trade-off between the number of queries and the quality of the obtained solution.
\subsection{Our Contributions and Technique}\label{sec:contributions}
\begin{table}[t]
\centering
\caption{Results obtained for the adaptive strategy, where $n$ and $m$ denote the number of vertices in the graph and the ground set size of the matroids (or the number of edges), respectively, in question. We omit $O( \cdot )$ in the iteration column. Also, all the coefficients are assumed to be $O(1)$. For the non-adaptive strategy, the approximation ratio is halved.}
\label{tbl:results}
\begin{tabular}{l|c|c} \hline
Problem & Approximation Ratio & Number of Iterations $T$ \\ \hline
Bipartite Matching & $1 - \epsilon$ & $\log (1/\epsilon p) / \epsilon p$ \\
Non-bipartite Matching & $1 - \epsilon$ & $\log (n/\epsilon)/\epsilon p$ \\
$k$-Hypergraph Matching & $(1 - \epsilon) /(k - 1 + 1/k)$ & $(k \log (k/\epsilon p) + 1/\epsilon) / \epsilon p$ \\
$k$-Column Sparse PIP & $(1 - \epsilon)/2k$ & $(k \log (k/\epsilon p) + 1/\epsilon) / \epsilon p$ \\[1mm]
Matroid (Max.~Independent Set)& $1 - \epsilon$ & $\log (m/\epsilon) / \epsilon p$ \\
Matroid Intersection & $1 - \epsilon$ & $\log (m/\epsilon) / \epsilon p$ \\
$k$-Matroid Intersection & $(1 - \epsilon) / k$ & $k \log m \log (m /\epsilon) / \epsilon^3 p$ \\
Matchoid & $2(1 - \epsilon)/3$ & $\log (m/\epsilon) /\epsilon p$ \\
Degree Bounded Matroid & $1 - \epsilon$ {\small $\displaystyle\left(\begin{array}{c}\text{each constraint}\\\text{is violated}\\\text{at most $d-1$}\end{array}\right)$}
& $d \log (n/\epsilon)/\epsilon^2 p$ \\[4mm]
Stable Set in Chordal Graphs & $1 - \epsilon$ & $\log n / \epsilon p$ \\
Stable Set in $t$-Perfect Graphs & $1 - \epsilon$ & $\log n \log (n/\epsilon) /\epsilon^3 p$ \\
\hline
\end{tabular}\vspace{-1mm}
\end{table}
\subsubsection*{Contributions}
We propose a general framework of adaptive and non-adaptive algorithms for the stochastic packing integer programming problem.
Here, an algorithm is \emph{non-adaptive} if it reveals all queried items simultaneously, and \emph{adaptive} otherwise.
In the adaptive strategy\footnote{Algorithms \ref{alg:adaptive} and \ref{alg:nonadaptive} have freedom in the choice of algorithms for solving LPs and for finding an integral solution in the last step; in particular, the latter depends heavily on each specific problem before it is formulated as an integer LP. For this reason, we use the term ``strategy'' rather than ``algorithm'' to refer to them.} (which is formally shown in Algorithm~\ref{alg:adaptive} in Section~\ref{sec:two_strategies}), we iteratively compute an optimal fractional solution $x \in [0,1]^m$ to the \emph{optimistic LP} (the LP relaxation of \eqref{eq:packing} in which all the unrevealed $\tilde c_j$ are supposed to be $c_j^+$), and conduct a query for each element $j$ with probability $x_j$.
After the iterations, we find an integral feasible solution to the \emph{pessimistic LP} (in which all the unrevealed $\tilde c_j$ are supposed to be $c_j^-$) by using some algorithms for the corresponding non-stochastic problem.
Similarly, in the non-adaptive strategy (Algorithm~\ref{alg:nonadaptive}), we iteratively compute an optimal fractional solution $x$ to the optimistic LP, and round down each element $j$ (i.e., suppose $\tilde c_j$ to be $c_j^-$ instead of revealing $c_j$) with probability $x_j$.
After the iterations, we reveal all the rounded-down elements and find an integral feasible solution to the pessimistic LP.
In application, we need to decide how to execute the last step,
and the performance of the resulting algorithm depends on combinatorial structure of each specific problem.
Our main contribution is a \emph{proof technique} for analyzing the performance of the algorithms.
Using this technique, we obtain results for the problem classes summarized in Table~\ref{tbl:results}.
\subsubsection*{Technique}
Our technique is based on \emph{LP duality} and \emph{enumeration}.
A brief overview of the technique follows, where we focus on the adaptive strategy.
Let $\tilde\mu$ be the optimal value of the \emph{omniscient LP} (the LP relaxation of \eqref{eq:packing} in which all $\tilde c_j$ are revealed).
Note that $\tilde\mu$ is a random variable depending on the realization of $\tilde c_j$.
Our goal is to evaluate the number of iterations $T$ such that
the optimal value of the pessimistic LP after $T$ iterations
is at least $(1 - \epsilon) \tilde\mu$ with high probability\footnote{Here we consider two types of randomness together.
One is on the realization of $\tilde c_j$, which is contained in the ``stochastic'' input and determines the omniscient optimal value $\tilde\mu$.
The other is on the choice of queried elements, which is involved in our ``randomized'' algorithms and affects the pessimistic LP obtained after the iterations.}.
Then, if we have an LP-relative $\alpha$-approximation algorithm~\cite{parekh2014generalized} (which outputs an integral feasible solution whose objective value is at least $\alpha$ times the LP-optimal value) for the corresponding non-stochastic problem,
we obtain a $(1 - \epsilon) \alpha$-approximate solution to our problem with high probability.
To discuss the optimal value of the pessimistic LP, we consider the dual LP.
By the LP strong duality, it is sufficient to prove that the dual pessimistic LP
after $T$ iterations has no feasible solution whose objective value is less than $(1 - \epsilon) \tilde\mu$ with high probability.
Here, we introduce a finite set $W \subseteq \mathbb{R}_+^n$ of dual vectors,
called a \emph{witness cover},
for every possible objective value $\mu$ (a candidate of $\tilde\mu$)
that satisfies the following property: if all $y \in W$ are infeasible, there is no feasible solution whose objective value is less than $(1 - \epsilon) \mu$.
Intuitively, $W$ represents all the candidates for dual feasible solutions whose objective values are less than $(1 - \epsilon) \mu$.
We evaluate the probability that each $y \in W$ becomes infeasible after $T$ iterations, and then estimate the sufficient number of iterations by using the union bound for $W$.
In application, we only need to show the existence of a small witness cover for each specific problem.
We also give general techniques to construct small witness covers
when the considered problem enjoys some nice properties,
e.g., when the constraint system $Ax \leq b$, $x \geq 0$ is totally dual integral.
\subsection{Related Work}
As described in Example~\ref{ex:SM},
our stochastic packing integer programming problem generalizes the \emph{stochastic (unweighted) matching problem}~\cite{blum2015ignorance,assadi2016stochastic,assadi2017stochastic} and the \emph{stochastic (unweighted) $k$-hypergraph matching problem}~\cite{blum2015ignorance}, which have recently been studied in EC (Economics and Computation) community.
These problems are motivated to find an optimal strategy for kidney exchange~\cite{roth2004kidney,dickerson2016organ}.
For the stochastic unweighted matching problem, Blum et al.~\cite{blum2015ignorance} proposed adaptive and non-adaptive algorithms that achieve approximation ratios of $(1 - \epsilon)$ and of $(1/2 - \epsilon)$, respectively, \emph{in expectation}, by conducting $O(\log (1/ \epsilon)/p^{2/\epsilon})$ queries per vertex.
Their technique is based on the existence of disjoint short augmenting paths.
Assadi et al.~\cite{assadi2016stochastic} proposed adaptive and non-adaptive algorithms that respectively achieve the same approximation ratios \emph{with high probability}, by conducting $O(\log (1/ \epsilon p)/\epsilon p)$ queries per vertex.
Their technique is based on the Tutte--Berge formula and vertex sparsification.
Our proposed strategies coincide with those of Assadi et al.\ when they are applied to the stochastic unweighted matching problem and we always find integral optimal solutions to the LP relaxations, i.e.,
solve the (non-stochastic) unweighted matching problem every time.
Our analysis looks similar to theirs since they both use the duality,
but ours is simpler and can also be used for the weighted and capacitated situation.
On the other hand, our analysis requires $O(\log (n/\epsilon)/\epsilon p)$ queries per vertex\footnote{Very recently, Behnezhad and Reyhani~\cite{behnezhad2017almost} claimed that the same algorithm as ours achieves an approximation ratio of $1 - \epsilon$ by conducting a constant number of queries that depends only on $\epsilon$ and $p$.
Their analysis uses augmenting paths, like Blum et al.~\cite{blum2015ignorance}.}, which is worse than their bound.
Recently, Assadi et al.~\cite{assadi2017stochastic} proposed a non-adaptive algorithm that achieves an approximation ratio of strictly better than $1/2$ \emph{in expectation}.
However, this technique is tailored to the unweighted matching problem, so we could not generalize it to our problem.
For the stochastic unweighted $k$-hypergraph matching problem, Blum et al.~\cite{blum2015ignorance} proposed adaptive and non-adaptive algorithms that find $(2 - \epsilon)/k$- and $(4 - \epsilon)/(k^2 + 2k)$-approximate matchings, respectively, \emph{in expectation}, by conducting $O(s_{k,\epsilon} \log (1/ \epsilon)/p^{s_{k,\epsilon}})$ queries per vertex, where $s_{k,\epsilon}$ is a constant depending on $k$ and $\epsilon$.
Their technique is based on the local search method of Hurkens and Schrijver~\cite{hurkens1989size}.
For the adaptive case, our strategy achieves a worse approximation ratio than theirs
because the underlying LP-based algorithm likewise achieves a worse ratio than the local search method.
On the other hand, our algorithm requires an exponentially smaller number of queries and runs in polynomial time both in $n$ and $1/\epsilon$.
In addition, our algorithm can be used for the weighted case.
For the non-adaptive case, our algorithm outperforms theirs
in all of the approximation ratio, the number of queries, and the running time.
Other variants of the stochastic packing integer programming problem with queries have been studied.
However, many of them employ the \emph{query-commit model}~\cite{dean2004approximating,dean2005adaptivity,molinaro2011query,costello2012stochastic}, in which the queried elements must be a part of the output.
Some studies~\cite{adamczyk2011improved,chen2009approximating,bansal2012lp} also impose additional budget constraints on the number of queries.
In the \emph{stochastic probing problem}~\cite{gupta2013stochastic,adamczyk2016submodular,gupta2017adaptivity}, both the queried and realized elements must satisfy given constraints.
Blum et al.~\cite{blum2013harnessing} studied a stochastic matching problem without query-commit condition, but with a budget constraint on the number of queries.
\subsection{Organization}
The rest of the paper is organized as follows.
In Section~\ref{sec:general}, we describe our framework of adaptive and non-adaptive algorithms for the stochastic packing integer programming problem,
and explain a general technique for providing a bound on the number of iterations.
In Section~\ref{sec:witness}, we outline how to construct a small witness cover in general.
In Section~\ref{sec:applications}, we apply the technique to a variety of specific combinatorial problems.
In Section~\ref{sec:sparsification}, we provide a vertex sparsification lemma that can be used to improve the performance of the algorithms for several problems.
\section{Vertex Sparsification Lemma}
\label{sec:sparsification}
\subsection{Vertex Sparsification Lemma}
For the (unweighted) stochastic matching problem, Assadi et al.~\cite{assadi2016stochastic} proposed a procedure called \emph{vertex sparsification}, which reduces the number of vertices to be proportional to the maximum matching size $\mu$ while approximately preserving any matching of size $\nu = \omega(1)$ with high probability.
This procedure is very useful as a preprocessing step for this problem since it makes $n/\mu = O(1)$, and so the required number of iterations becomes constant.
Here, we extend this procedure to an independence system on a $k$-uniform hypergraph and improve the result to preserve \emph{any} independent set with high probability, without assuming $\nu = \omega(1)$.
In the next section, we improve the performance of the algorithms for the bipartite matching problem, the $k$-hypergraph matching problem, and the $k$-column sparse packing integer programming problem by using this lemma.
In general, sparsification procedures are a kind of \emph{kernelization}, which is studied in the area of parameterized complexity~\cite{downey2012parameterized}.
In particular, our procedure and that of Assadi et al.~\cite{assadi2016stochastic} are similar to the one in \cite{chitnis2016kernelization}, which aims to reduce the space complexity of packing problems in the streaming setting, but both the analyses and the provided guarantees differ.
\medskip
Let $(V, E)$ be a $k$-uniform hypergraph and $(E, \mathcal{I})$ be an independence system
(which is a nonempty, downward-closed set system, i.e., ${\mathcal I} \neq \emptyset$, and $X \subseteq Y \in {\mathcal I} \implies X \in {\mathcal I}$),
whose rank function $r \colon 2^E \to {\mathbb Z}_+$ is defined by $r(S) = \max\{\, |I| : I \subseteq S,~I \in {\mathcal I} \,\}$.
We focus on the following special case of the stochastic packing integer programming problem \eqref{eq:packing} in this section:
\begin{align}\label{eq:sparsifiable}
\begin{array}{lll}
\text{maximize} &\ \displaystyle\sum_{e \in E} \tilde c_e x_e \\
\text{subject to} &\ \displaystyle\sum_{e \in S} x_e \le r(S) &\quad (S \subseteq E), \\
&\ x \in \{0,1\}^E.
\end{array}
\end{align}
Note that the constraint is equivalent to $\mathrm{supp}(x) \in {\mathcal I}$,
and this formulation still includes the $k$-column sparse PIP \eqref{eq:k-column_sparse}
(and hence all the matching problems shown in Section~\ref{sec:matching_problems}) as follows:
let $V = \{1, \ldots, n\}$ and $E = \{1, \ldots, m\}$
such that each hyperedge $j \in E$ is associated with a subset $\{\, i \in V : a_{ij} \neq 0 \,\}$
(if the size is less than $k$, add arbitrary vertices $i$ with $a_{ij} = 0$),
and define ${\mathcal I} = \{\, S \subseteq E : \sum_{j \in S} a_{ij} \leq b_i \ (\forall i \in V)\,\}$.
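The induced family $\mathcal I$ is indeed an independence system: it contains $\emptyset$ (since $b \ge 0$) and is downward closed (since $A \ge 0$). A brute-force check on a toy instance (illustration only; toy data chosen here, not from the text):

```python
from itertools import combinations

def independent_sets(A, b, m):
    """Enumerate the independence system I = {S : sum_{j in S} A[i][j] <= b[i]}
    induced by a small packing system (brute force, illustration only)."""
    fam = set()
    for r in range(m + 1):
        for S in combinations(range(m), r):
            if all(sum(A[i][j] for j in S) <= b[i] for i in range(len(b))):
                fam.add(frozenset(S))
    return fam

A = [[1, 1, 0], [0, 1, 1]]   # toy packing constraints over three columns
b = [1, 1]
fam = independent_sets(A, b, 3)
# downward closed: every subset of an independent set is independent
assert all(frozenset(T) in fam
           for S in fam for r in range(len(S) + 1)
           for T in combinations(S, r))
assert frozenset({0, 2}) in fam and frozenset({0, 1}) not in fam
```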
\begin{algorithm}[t]
\caption{Vertex sparsification.}
\label{alg:sparsification}
\begin{algorithmic}[1]
\State{Assign a random color in $\{1, \ldots, \frac{\beta(k,\epsilon,\delta) k^2 s}{\delta} \}$ to each vertex, where $\beta(k,\epsilon,\delta) = \frac{2 e^{\epsilon/k} \log (1 / \delta)}{\epsilon}$.}
\State{Return all colorful hyperedges.}
\end{algorithmic}
\end{algorithm}
Our procedure is shown in Algorithm~\ref{alg:sparsification}, which is a kind of color coding.
Let $s \in \mathbb{Z}_+$ be an upper bound on $r = r(E)$,
and $\epsilon, \delta \in (0, 1)$ be parameters for the accuracy and the probability, respectively.
It first assigns a random color in $\{1, \ldots, n^\circ\}$ to each vertex, where $n^\circ = \beta(k, \epsilon, \delta) k^2 s / \delta$ with $\beta(k, \epsilon, \delta) = 2 e^{\epsilon/k} \log (1/\delta)/\epsilon$.
It then returns all ``colorful'' hyperedges, i.e., those that consist entirely of differently colored vertices.
This yields an independence system on the color classes, which has $n^\circ = \Theta(s)$ vertices for fixed $k$, $\epsilon$, and $\delta$.
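The two steps of the procedure can be sketched as follows (a minimal sketch, assuming hyperedges are given as $k$-tuples of vertices and the number of colors $n^\circ$ is supplied by the caller):

```python
import random

def sparsify(hyperedges, n_colors):
    """Sketch of the vertex sparsification algorithm: assign each vertex a
    uniformly random color in {0, ..., n_colors - 1} and keep only the
    colorful hyperedges (all vertices distinctly colored), renamed by color."""
    vertices = {v for e in hyperedges for v in e}
    color = {v: random.randrange(n_colors) for v in vertices}
    colorful = []
    for e in hyperedges:
        cols = tuple(color[v] for v in e)
        if len(set(cols)) == len(cols):  # all endpoints differently colored
            colorful.append(cols)
    return colorful
```

In the paper's setting the caller would choose `n_colors` $= \beta(k,\epsilon,\delta) k^2 s / \delta$, so the sparsified vertex set has size independent of $n$.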
\begin{lemma}[Vertex Sparsification Lemma]
\label{lem:sparsification}
Suppose that $n \ge 2 k$.
Then, after Algorithm~\ref{alg:sparsification},
for any independent set $I \in \mathcal{I}$ in the original instance,
there exists an independent set $I^\circ \subseteq I$ of size at least $(1 - \epsilon) |I|$
in the sparsified instance with probability at least $1 - \delta$.
\end{lemma}
\begin{proof}
Let $\nu = |I|$.
For notational simplicity, we write $\beta = \beta(k, \epsilon, \delta)$.
We now make the following case analysis.
\paragraph{Case 1: $\nu \le \beta$ (the rank of $I$ is small).}
If all vertices incident to $I$ receive different colors, the size of $I$ is preserved by the sparsification.
Since the number of incident vertices is at most $k \nu$, the probability that this occurs is at least
\begin{align}
\frac{n^\circ (n^\circ - 1) \cdots (n^\circ - k \nu + 1)}{n^{\circ k \nu}}
\ge \exp \left( - \frac{k^2 \nu^2}{n^\circ} \right)
\ge \exp \left( - \frac{\delta \nu^2}{\beta s} \right) \ge e^{-\delta} \ge 1 - \delta.
\end{align}
Here, the first inequality follows from the falling factorial approximation (Lemma~\ref{thm:fallingfactorial} below), and the second from $\nu \le r \le s$ and $\nu \le \beta$.
\begin{lemma}[Falling Factorial Approximation]
\label{thm:fallingfactorial}
For integers $n \ge 2k \ge 2$,
\begin{align}
\frac{n (n-1) \cdots (n-k+1)}{n^k} \ge \exp \left( -\frac{k^2}{n} \right).
\end{align}
\end{lemma}
\begin{proof}
Recall that $\log (1 - x) \ge - x/(1-x)$ for all $x \in (0,1)$, and that $n - k \ge n/2$ by assumption.
The logarithm of the left-hand side is
\begin{align}
\quad &\sum_{i=1}^{k-1} \log \left(1 - \frac{i}{n}\right) \ge -\sum_{i=1}^{k-1} \frac{i}{n - i} \ge -\sum_{i=1}^{k-1} \frac{i}{n - k}
= -\frac{k (k-1)}{2(n - k)} \ge -\frac{k^2}{n}. \quad \qedhere
\end{align}
\end{proof}
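A quick numerical sanity check of the lemma, on illustrative pairs $(n, k)$ with $n \ge 2k$:

```python
import math

def falling_factorial_ratio(n, k):
    """Compute n (n-1) ... (n-k+1) / n^k."""
    prod = 1.0
    for i in range(k):
        prod *= (n - i) / n
    return prod

# The lemma's lower bound exp(-k^2 / n) holds on each of these instances.
for n, k in [(10, 3), (100, 10), (1000, 30)]:
    assert falling_factorial_ratio(n, k) >= math.exp(-k ** 2 / n)
```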
\paragraph{Case 2: $\nu \ge \beta$ (the rank of $I$ is large).}
We further reduce the number of colors by mapping each color class to $\{1, \ldots, k^2 \nu/\epsilon \}$. (Note that $k^2 \nu/\epsilon \le n^\circ$ since $\beta \ge 1/\epsilon$.)
We say that a color class $c$ is \emph{good} if some vertex of color $c$ is covered by some hyperedge $e \in I$; otherwise, we say that $c$ is \emph{bad}.
For each color class $c$,
let $X_c$ be the indicator of the event that $c$ is bad,
i.e., $X_c = 1$ if $c$ is bad and $X_c = 0$ otherwise.
Then $\Pr(X_c = 1) = (1 - \epsilon/(k^2 \nu))^{k \nu} \le e^{-\epsilon/k}$.
Therefore $\mathbb{E}\left[\sum_c X_c\right] \le e^{-\epsilon/k} k^2 \nu / \epsilon$.
Since $X_c$ are negatively correlated random variables, we can apply the Chernoff bound~\cite{panconesi1997randomized}:
\begin{align}
\Pr\left(\sum_c X_c \ge \frac{k^2 \nu}{\epsilon} - \left(1 - \frac{\epsilon}{k}\right) k \nu\right)
&= \Pr\left(\sum_c X_c \ge \left(1 - \frac{\epsilon}{k} + \frac{\epsilon^2}{k^2}\right) \frac{k^2 \nu}{\epsilon} \right) \notag \\
&\le \Pr\left(\sum_c X_c \ge \left(1 + \frac{\epsilon^2}{2 k^2}\right) e^{-\epsilon/k} \frac{k^2 \nu}{\epsilon} \right) \notag \\
&\le \exp\left(-\epsilon e^{-\epsilon/k} \frac{\nu}{2}\right) \le \exp\left(-\epsilon e^{-\epsilon/k} \frac{\beta}{2}\right) = \delta,
\end{align}
where the first inequality follows from $(1+x^2/2) e^{-x} \le 1 - x + x^2$ and the last equality follows from the definition of $\beta$.
Therefore, there are at least $(1 - \epsilon/k) k\nu$ good color classes with probability at least $1 - \delta$.
For each good color class, we select one covered vertex and remove all other vertices.
The number of removed vertices is at most $\epsilon \nu$, so at most $\epsilon \nu$ hyperedges in the independent set are removed.
The remaining hyperedges form an independent set of size at least $(1 - \epsilon) \nu$.
\end{proof}
\begin{remark}
The second part is a simple extension of Assadi et al.~\cite{assadi2016stochastic}.
Since they analyzed only this case, the assumption $\nu = \omega(1)$ was required.
\end{remark}
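The color-class argument of Case 2 can be illustrated by a small Monte Carlo experiment (illustrative parameters of our choosing; the per-class probability of being bad is at most $e^{-\epsilon/k}$, as in the proof):

```python
import math
import random

def bad_class_fraction(k, nu, eps, trials=200):
    """Throw k * nu covered colors uniformly into k^2 * nu / eps classes and
    return the empirical fraction of 'bad' (uncovered) classes."""
    n_classes = int(k * k * nu / eps)
    total_bad = 0
    for _ in range(trials):
        covered = {random.randrange(n_classes) for _ in range(k * nu)}
        total_bad += n_classes - len(covered)
    return total_bad / (trials * n_classes)

random.seed(1)
# The empirical fraction stays below the bound e^{-eps/k}, up to sampling noise.
assert bad_class_fraction(3, 50, 0.5) <= math.exp(-0.5 / 3) + 0.05
```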
\subsection{Usage of Vertex Sparsification Lemma}
Here, we describe how to use the vertex sparsification lemma to improve the performance of Algorithms~\ref{alg:adaptive} and \ref{alg:nonadaptive}.
For simplicity, we only describe the result for Algorithm~\ref{alg:adaptive}, as Algorithm~\ref{alg:nonadaptive} can be handled using the same argument.
Let $(V, E)$ be a $k$-uniform hypergraph with $|V| = n$ and $|E| = m$ and $(E, \mathcal{I})$ be an independence system.
We consider the problem \eqref{eq:sparsifiable}, where we assume the following.
\begin{enumerate}
\item There exists an LP-relative $\alpha$-approximation algorithm.
\item The number of iterations required to guarantee $(1 - \epsilon) \alpha$-approximation with probability at least $1 - \delta$ is bounded by $T(\log (n/\mu), \epsilon, \delta)$.
\end{enumerate}
The method is shown in Algorithm~\ref{alg:speedup}, where $\epsilon, \delta \in (0, 1)$ are parameters for the accuracy and the probability, respectively, and $c_{\max} = \max_j c_j^+$.
We first estimate the maximum size $s$ of an independent set such that $\alpha s \le r \le s$; this estimate is obtained from the LP relaxation.
We then apply Algorithm~\ref{alg:sparsification} to obtain a sparsified instance, and finally apply Algorithm~\ref{alg:adaptive} or \ref{alg:nonadaptive} with an LP-relative $\alpha$-approximation algorithm to obtain a solution.
We now analyze the performance of this procedure.
\begin{algorithm}[tb]
\caption{Speedup by vertex sparsification.}
\label{alg:speedup}
\begin{algorithmic}[1]
\State{Estimate the size $s$ of a maximum independent set such that $\alpha s \le r \le s$.}
\State{Sparsify the instance by Algorithm~\ref{alg:sparsification} with accuracy parameter $\epsilon' = \epsilon/(1 + c_\text{max})$ and probability parameter $\delta' = \delta/4$.}
\State{Run Algorithm~\ref{alg:adaptive} or \ref{alg:nonadaptive} with an LP-relative $\alpha$-approximation algorithm
by setting $T = T(O(\log (k / p \alpha \epsilon' \delta')), \epsilon', \delta')$.}\label{line:3''}
\end{algorithmic}
\end{algorithm}
\begin{theorem}
Algorithm~\ref{alg:speedup} finds a $(1 - \epsilon) \alpha$-approximate solution with probability at least $1 - \delta$.
\end{theorem}
\begin{proof}
Let $r = r(E)$ be the rank of the original independence system and $\tilde\mu$ the optimal value of the original instance \eqref{eq:sparsifiable}, which is a random variable determined by nature.
Also, let $r^\circ$ be the rank of the sparsified instance, and $\tilde \mu^\circ$ be the optimal value of the sparsified instance, which is also a random variable.
\begin{Claim}\label{cl:sparsified_bound}
\begin{align}
\Pr\left(r^\circ \ge (1 - \epsilon')r\right) &\geq 1 - \delta',\label{eq:sparsified_bound_1}\\
\Pr\left(\tilde \mu^\circ \ge (1 - c_{\max} \epsilon') \tilde \mu \right) &\geq 1 - \delta'.\label{eq:sparsified_bound_2}
\end{align}
\end{Claim}
\begin{proof}
The first inequality \eqref{eq:sparsified_bound_1} immediately follows from Lemma~\ref{lem:sparsification},
and we focus on the second \eqref{eq:sparsified_bound_2}.
Fix a realization of $\tilde{c}$,
and let $x \in \{0, 1\}^E$ be an optimal solution to \eqref{eq:sparsifiable}
such that $I = \mathrm{supp}(x) \in {\mathcal I}$ is minimal.
By Lemma~\ref{lem:sparsification},
there exists an independent set $I^\circ \subseteq I$ of size $|I^\circ| \geq (1 - \epsilon')|I|$
in the sparsified instance with probability $1 - \delta'$.
Let $x^\circ \in \{0, 1\}^E$ be the vector with $\mathrm{supp}(x^\circ) = I^\circ$,
whose restriction to the sparsified hyperedge set is a feasible solution to the sparsified instance.
We then have
\begin{align}
\tilde\mu^\circ \geq \tilde c^\top x^\circ \geq \tilde c^\top x - c_{\max} \epsilon'|I| \geq \tilde\mu - c_{\max} \epsilon' \tilde\mu,
\end{align}
where the last inequality follows from the minimality of $I = \mathrm{supp}(x)$
(for each $j \in I$, we must have $\tilde{c}_j \geq 1$, and hence $\tilde\mu = \tilde c^\top x \geq |\mathrm{supp}(x)| = |I|$).
\end{proof}
By Claim \ref{cl:sparsified_bound},
we have $r^\circ \ge (1 - \epsilon')r$ and $\tilde \mu^\circ \ge (1 - c_{\max} \epsilon') \tilde \mu$
with probability at least $1 - 2\delta'$.
Under this event, by using Algorithm~\ref{alg:adaptive} in Line~\ref{line:3''} of Algorithm~\ref{alg:speedup},
we obtain a solution whose objective value is at least $(1 - \epsilon') \tilde \mu^\circ \ge (1 - (c_{\max} + 1) \epsilon') \tilde \mu = (1 - \epsilon) \tilde \mu$ with probability at least $1 - \delta'$
(and hence with probability at least $1 - 3\delta'$ in total).
The remaining issue is the number of iterations.
That is, for $\beta' = \beta(k, \epsilon', \delta') = 2e^{\epsilon'/k}\log(1/\delta')/\epsilon'$ and $n^\circ = \beta'k^2s/\delta'$, we prove
\begin{align}\label{eq:sparsified_ratio}
\log\frac{n^\circ}{\tilde\mu^\circ} = O\left(\log \frac{k}{p \alpha \epsilon' \delta'}\right),
\end{align}
with probability at least $1 - \delta'$,
which implies that we succeed with probability at least $1 - 4\delta' = 1 - \delta$ through Algorithm \ref{alg:speedup}.
We make a case analysis.
\paragraph{Case 1: $r^\circ \ge 8\log(1 / \delta')/p$ (the rank of the sparsified instance is large).}
We evaluate the objective value of the independent set in the sparsified instance that corresponds to the maximum independent set in the original instance.
Since each element of the sparsified independent set contributes at least $1$ with probability at least $p$, we can apply the Chernoff bound:
\begin{align}
\Pr\left( \tilde \mu^\circ \ge \frac{p r^\circ}{2} \right) \ge \Pr\left( \sum_{i=1}^{r^\circ} X_i \ge \frac{p r^\circ}{2} \right)
\ge 1 - e^{-p r^\circ / 8} \ge 1 - \delta',
\end{align}
where $X_i$ ($i = 1, \ldots, r^\circ$) are i.i.d.\ random variables following the Bernoulli distribution with probability $p$.
Under this event ($\tilde \mu^\circ \ge p r^\circ / 2$), we have
\begin{align}
\frac{n^\circ}{\tilde \mu^\circ} \le \frac{2 n^\circ}{p r^\circ} \le \frac{4 n^\circ}{p r} \le \frac{4 n^\circ}{p \alpha s} = \frac{4 \beta' k^2}{p \alpha \delta'} = \frac{8 e^{\epsilon'/k} k^2 \log(1/\delta')}{p\alpha\epsilon'\delta'},
\end{align}
where the second inequality follows from $r^\circ \geq (1 - \epsilon')r \geq r/2$ (because $\epsilon' = \epsilon/(1 + c_{\max}) \leq 1/2$), and
the third from $r \geq \alpha s$.
Since $e^{\epsilon'/k} = O(1)$ and $\log(1/\delta') \leq 1/\delta'$, we have \eqref{eq:sparsified_ratio}.
\paragraph{Case 2: $r^\circ \le 8\log(1 / \delta')/p$ (the rank of the sparsified instance is small).}
We have
\begin{align}
\frac{n^\circ}{\tilde \mu^\circ} \le n^\circ \leq \frac{\beta' k^2 r}{\alpha \delta'} \le \frac{2 \beta' k^2 r^\circ}{\alpha \delta'} \le \frac{16 \beta' k^2 \log(1/\delta')}{p \alpha \delta'},
\end{align}
where the first inequality follows from $\tilde\mu^\circ \geq 1$ (because it is an integer with $\tilde\mu^\circ \geq (1 - \epsilon' c_{\max})\tilde\mu > 0$),
the second from $r \geq \alpha s$, the third from $r^\circ \geq r/2$, and the fourth from the case assumption $r^\circ \le 8\log(1 / \delta')/p$.
This leads to \eqref{eq:sparsified_ratio} as in Case 1.
\end{proof}
The sizes of the witness covers for bipartite matching, $k$-hypergraph matching, and $k$-column-sparse packing integer programming depend on $n/\mu$.
Thus the corresponding guarantees are improved by this technique.
\begin{corollary}\label{cor:bipartite_stronger}
For the bipartite matching problem with $c_j = O(1)$ for all $j$, there is an algorithm that conducts $O(\log(1/\epsilon p) / \epsilon p)$ queries per vertex and finds a $(1 - \epsilon)$-approximate solution with probability at least $1 - \epsilon$.
\end{corollary}
\begin{corollary}
For the $k$-hypergraph matching problem with $c_j = O(1)$ for all $j$, there is an algorithm that conducts $O(k (\log(k /\epsilon p) + 1/\epsilon)/\epsilon p)$ queries per vertex and finds a $(1 - \epsilon)/(k - 1 + 1/k)$-approximate solution with probability at least $1 - \epsilon$.
\end{corollary}
\begin{corollary}
For the $k$-column sparse packing integer programming problem with $c_j = O(1)$, $b_i = O(1)$, and $A_{ij} = O(1)$ for all $i, j$, there is an algorithm that conducts $O(k (\log(k /\epsilon p) + 1/\epsilon)/\epsilon p)$ queries per vertex and finds a $(1 - \epsilon)/(2 k)$-approximate solution with probability at least $1 - \epsilon$.
\end{corollary}
\section{Constructing Witness Covers}
\label{sec:witness}
Our technique requires us to prove the existence of a small witness cover.
Here, we describe general strategies for constructing small witness covers.
\subsection{Totally Dual Integral Case}
\label{sec:tdipoly}
A system $A x \le b$, $x \ge 0$ is \emph{totally dual integral (TDI)} if,
for every integral objective vector $c \in \mathbb{Z}^m$,
the dual problem $\min \{\, y^\top b : y^\top A \ge c^\top,~ y \ge 0 \,\}$ has an integral optimal solution $y \in \mathbb{Z}_+^n$ (unless it is infeasible).
Note that every TDI system yields an integral polyhedron
(see, e.g., \cite{schrijver2003combinatorial} for details).
Hence, if we obtain a basic optimal solution to the optimistic LP (in Line~\ref{line:2} of Algorithms~\ref{alg:adaptive} and \ref{alg:nonadaptive}),
then we do not need randomization in conducting queries (cf.~footnote~\ref{ft:5} in Section~\ref{sec:two_strategies}).
If the system is TDI, we can construct a witness cover by enumerating all possible integral dual vectors as follows.
\begin{lemma}\label{lem:tdi}
If the system $Ax \le b$, $x \ge 0$ is TDI,
the following set $W \subseteq \mathbb{R}_+^n$ is an $(\epsilon, \epsilon)$-witness cover for $\mu \geq 1$ such that $|W| = \exp\left({O\left(\mu \log (1 + \frac{n}{\mu})\right)}\right)$:
\begin{align}
\label{eq:tdipoly}
W = \{\, y \in \mathbb{Z}_+^n : y^\top b \le (1 - \epsilon) \mu \,\}.
\end{align}
\end{lemma}
\begin{proof}
It is clear that $W$ is an $(\epsilon, \epsilon)$-witness cover for $\mu$,
so it only remains to evaluate the cardinality of $W$.
We see that $|W|$ is at most the number of nonnegative integer vectors whose entries sum to
at most $\floor{\mu}$,
which can be counted by distributing $k$ ($\leq \floor{\mu}$) tokens among $n$ entries, giving
\begin{align}
\quad |W| &\le \sum_{k=0}^{\floor{\mu}} \binom{n + k - 1}{k} = \binom{n + \floor{\mu}}{\floor{\mu}}
\le \left( \frac{e (n + \floor{\mu})}{\floor{\mu}} \right)^{\floor{\mu}}
= e^{O\left( \mu \log \left( 1 + \frac{n}{\mu} \right) \right)}. \quad \qedhere
\end{align}
\end{proof}
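The counting step can be checked numerically on small illustrative instances; the identity $\sum_{k=0}^{\floor{\mu}} \binom{n+k-1}{k} = \binom{n+\floor{\mu}}{\floor{\mu}}$ used in the proof is the hockey-stick identity.

```python
import math

def count_vectors(n, mu):
    """Number of vectors in Z_+^n whose entries sum to at most mu
    (mu a nonnegative integer)."""
    return sum(math.comb(n + k - 1, k) for k in range(mu + 1))

for n, mu in [(5, 3), (10, 4), (7, 7)]:
    # The sum telescopes to a single binomial coefficient ...
    assert count_vectors(n, mu) == math.comb(n + mu, mu)
    # ... and the proof's upper bound (e (n + mu) / mu)^mu indeed dominates.
    assert count_vectors(n, mu) <= (math.e * (n + mu) / mu) ** mu
```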
Note that the same counting technique can be used when the system is totally dual $1/k$-integral (TDI/$k$), i.e., the existence of a dual optimal solution where each entry is a multiple of $1/k$ is guaranteed.
\subsection{Non-TDI Case}
\label{sec:nontdi}
If the system is not TDI, we have to deal with fractional dual vectors.
To enumerate these fractional vectors, we discretize the dual vectors, requiring the discretization to have the following property:
if there exists a feasible $y$ such that $y^\top b \le (1 - \epsilon) \mu$, there exists a feasible discretized $y'$ such that $y'^{\top} b \le (1 - \epsilon/2) \mu$.
Here, we consider two situations: the dual sparse case and the general case.
\subsubsection*{Dual Sparse Case}
If there exists a sparse dual optimal solution, we can simply discretize the dual vectors to obtain a good discretized solution as follows.
\begin{lemma}
\label{lem:nontdi_sparse}
For positive $\mu$, $\epsilon$, and $\gamma$,
if there exists $y \in \mathbb{R}_+^n$ such that $y^\top b \le (1 - \epsilon) \mu$ and $|\mathrm{supp}(y)| \le \gamma \mu$,
then there exists $y' \in \prod_{i = 1}^n \left(\frac{\epsilon}{2 b_i \gamma}\right) \mathbb{Z}_+$ such that $y'^\top A \ge y^\top A$ and $y'^\top b \le (1 - \epsilon/2) \mu$.
\end{lemma}
\begin{proof}
A suitable $y'$ can be obtained by rounding up the $i$-th entry of $y$ to the next multiple of $\epsilon/(2 b_i \gamma)$ for each $i$.
\end{proof}
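The rounding of Lemma~\ref{lem:nontdi_sparse} can be sketched directly (a minimal illustration with hypothetical numeric inputs; each nonzero entry's round-up adds at most $\epsilon/(2\gamma)$ to $y^\top b$):

```python
import math

def discretize(y, b, eps, gamma):
    """Round each y_i up to the next multiple of eps / (2 * b_i * gamma)."""
    steps = [eps / (2.0 * bi * gamma) for bi in b]
    return [math.ceil(yi / si) * si for yi, si in zip(y, steps)]

y, b = [0.37, 0.0, 1.2], [2.0, 3.0, 1.0]
eps, gamma = 0.5, 2.0
yp = discretize(y, b, eps, gamma)
# y' dominates y entrywise, and the increase of y^T b is at most
# eps / (2 * gamma) per nonzero entry of y.
assert all(ypi >= yi for ypi, yi in zip(yp, y))
increase = sum((ypi - yi) * bi for ypi, yi, bi in zip(yp, y, b))
assert increase <= eps / (2 * gamma) * sum(1 for yi in y if yi > 0) + 1e-9
```

With $|\mathrm{supp}(y)| \le \gamma \mu$, the total increase is thus at most $\epsilon \mu / 2$, as in the lemma.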
Now that we have established the existence of a discretized solution whose objective value is almost the same as that of any sparse solution,
we can construct a witness cover by enumerating all the discretized vectors.
\begin{lemma}
\label{lem:poly_nontdi_sparse}
Under the assumption given in Lemma~\ref{lem:nontdi_sparse}, the following set $W \subseteq \mathbb{R}_+^n$ is an $(\epsilon, \epsilon/2)$-witness cover for $\mu \geq 1$, whose cardinality is $|W| = \exp\left({ O\left(\mu \left( \gamma \log \frac{n}{\gamma \mu} + \frac{1}{\epsilon} \right) \right)}\right)$:
\begin{align}
W = \left\{\, y \in \prod_{i=1}^n \left(\frac{\epsilon}{2 b_i \gamma}\right) \mathbb{Z}_+ : \, y^\top b \le \left(1 - \frac{\epsilon}{2}\right) \mu,~ |\mathrm{supp}(y)| \le \gamma \mu \,\right\}.
\end{align}
\end{lemma}
\begin{proof}
By Lemma~\ref{lem:nontdi_sparse}, $W$ is an $(\epsilon, \epsilon/2)$-witness cover for $\mu$.
We evaluate the cardinality of $W$ as follows.
We first select $s$ ($\le \gamma \mu$) entries for the support of $y$, and then distribute $k$ ($< 2 \mu / \epsilon$) tokens among these entries, where each token contributes $\epsilon/2$ to the objective value.
In the nontrivial case when $\mu_1 := \gamma\mu \geq 1$ and $\mu_2 := 2\mu/\epsilon \geq 1$,
the number of these patterns is bounded by
\begin{align}
\quad |W| &\le \sum_{s=1}^{\floor{\mu_1}} \binom{n}{s} \sum_{k=0}^{\floor{\mu_2}} \binom{s + k - 1}{k} \notag \\
&\le \floor{\mu_1} \left(\frac{e n}{\floor{\mu_1}}\right)^{\floor{\mu_1}} \floor{\mu_2} \left( \frac{e (\floor{\mu_1} + \floor{\mu_2})}{\floor{\mu_2}} \right)^{\floor{\mu_2}} \notag \\\nonumber
&\le \floor{\mu_1} \left(\frac{e n}{\floor{\mu_1}}\right)^{\floor{\mu_1}} \floor{\mu_2} e^{\floor{\mu_1} + \floor{\mu_2}}
= \exp \left(O \left( \mu_1 \log \frac{n}{\mu_1} + \mu_2 \right) \right). \quad \qedhere
\end{align}
\end{proof}
\subsubsection*{General Case}
When the optimal dual solutions are not sparse, simple discretization no longer yields a small witness cover.
However, even in this case, a good discretized solution exists.
Let $y$ be a feasible dual vector.
Then by applying \emph{randomized rounding}~\cite{raghavan1987randomized} to $y$, we obtain a suitable discretized vector $y'$ with a positive probability.
Formally, the following theoretical guarantee is obtained.
\begin{theorem}[Kolliopoulos and Young~\cite{kolliopoulos2005approximation}]
\label{thm:kolliopoulos2005}
Any feasible LP $\min_y\{\, y^\top b : y^\top A \ge c^\top,\ y \ge 0 \,\}$
has a $(1 + \epsilon/2)$-approximate solution whose entries are multiples of $\theta = \Theta\left(\frac{\epsilon^2}{\log m}\right)$, where $m$ is the dimension of $c$.
\end{theorem}
We use this theorem as an existence theorem.
If there is an optimal dual solution with objective value of at most $(1 - \epsilon) \mu$, this theorem shows that there exists a dual feasible solution whose entries are multiples of $\theta$ with an objective value of at most $(1 + \epsilon/2)(1 - \epsilon) \mu \le (1 - \epsilon/2) \mu$.
By enumerating the dual vectors whose entries are multiples of $\theta$, we can obtain a witness cover.
\begin{lemma}
\label{lem:poly_nontdi_nonsparse}
Let $\theta = \Theta\left(\frac{\epsilon^2}{\log m}\right)$.
The following set $W \subseteq \mathbb{R}_+^n$ is an $(\epsilon, \epsilon/2)$-witness cover for $\mu \geq 1$ such that $|W| = \exp\left({O\left(\frac{\mu\log m}{\epsilon^2} \log \left(1 + \frac{n}{\mu} \right)\right)}\right) $:
\begin{align}
W = \left\{\, y \in \theta \mathbb{Z}_+^n :\, y^\top b \le \left(1 - \frac{\epsilon}{2}\right) \mu \,\right\}.
\end{align}
\end{lemma}
\begin{proof}
By Theorem~\ref{thm:kolliopoulos2005}, $W$ is an $(\epsilon, \epsilon/2)$-witness cover for $\mu$.
We evaluate the cardinality of $W$ as follows. Let $\mu' := \floor{\mu/\theta}$.
The number of ways of distributing $k$ ($\leq \mu'$) tokens among $n$ entries is bounded by
\begin{align}
\quad |W| &\le \sum_{k=0}^{\mu'} \binom{n+k-1}{k}
\le \mu' \left( \frac{e(n + \mu')}{\mu'} \right)^{\mu'} \nonumber\\
&\le \mu' e^{\mu' (1 + \log (1 + n/\mu'))} = \exp\left(O \left(\frac{\mu}{\theta} \log \left(1 + \frac{\theta n}{\mu} \right) \right) \right). \quad \qedhere
\end{align}
\end{proof}
\subsection{Exponentially Many Constraints}
\label{sec:tdiexp}
Some problems, such as the non-bipartite matching problem and matroid problems, have exponentially many constraints.
In such cases, it is impossible to naively enumerate all the candidates as in the previous subsections.
This difficulty can sometimes be overcome by identifying the granularity of the dual solutions (i.e., TDI, dual sparse, or general) and then bounding the number of possible dual patterns by exploiting the combinatorial structure; see the next section for concrete examples.
\section{Introduction}
The process of rupture nucleation in which slowly driven frictional interfaces (faults) spontaneously develop elastodynamically propagating fronts accompanied by rapid slip is of fundamental importance for various fields, with far-reaching implications for earthquake physics. Quantitatively understanding the nucleation process is essential for predicting the dynamics of frictional interfaces in general and for earthquake dynamics in particular. There exists some observational evidence, based on seismological records~\citep{Scholz1998,Ohnaka2000,Harris2017}, and some experimental evidence, based on laboratory measurements~\citep{Dieterich1979,Ohnaka1990,Kato1992,McLaskey2013,Latour2013}, which suggest that rapid rupture propagation accompanied by a marked seismological signature is preceded by precursory aseismic slip. This precursory aseismic slip is commonly associated with a slowly expanding creep patch defined as a slipping segment of finite linear size $L(t)$, embedded within a non-slipping fault. Accelerating slip is expected to emerge once $L(t)$ surpasses a critical nucleation length $L_c$. We note that other nucleation scenarios have been considered in the literature, see for example~\citet{Ben-Zion2008}, but are not discussed here.
Various theoretical and computational works have indicated that the nucleation of accelerating slip is related to a frictional instability~\citep{Ruina1983,Yamashita1991,Ben-Zion1997,Scholz1998,Ben-Zion2001,Lapusta2003,Uenishi2003,Ben-Zion2008,Kaneko2008,Kaneko2016}. From this perspective, the critical nucleation length $L_c$ corresponds to the critical conditions for the onset of instability that leads to accelerating slip and to the spontaneous propagation of elastodynamic rupture fronts. A major challenge is to understand the relations between the critical instability conditions and $L_c$. In this Letter, we propose a theoretical approach for predicting $L_c$ which differs from the conventional approach.
The conventional approach, based on a single degree-of-freedom spring-block analysis extended to deformable bodies using various model-dependent fracture mechanics estimates, is discussed in the framework of rate-and-state constitutive laws in Sect.~\ref{sec:conventional}. Our approach, based on the stability of homogeneous sliding of elastically-deformable bodies, is introduced in Sect.~\ref{sec:LSA} and is shown to yield a significantly larger $L_c$ for elastically identical half-spaces and rate-and-state friction. In Sect.~\ref{sec:bimaterial} we show that the proposed approach is naturally applicable to bimaterial interfaces, which are of great interest in various contexts~\citep{Weertman1980,Andrews1997,Ben-Zion1998,Cochard2000,Adams2000,Ben-Zion2001,Gerde2001,Ranjith2001,Rice2001,Shi2006,Rubin2007,Ampuero2008a,allam2014,Brener2016,Aldam2017}, and derive analytic results for $L_c$ in this case, indicating that the bimaterial effect decreases $L_c$ compared to available predictions in the literature. Finally, in Sect.~\ref{sec:FiniteH} we show that the proposed approach is applicable to finite-size systems and test our predictions against inertial Finite-Element-Method calculations for a finite-size two-dimensional elastically-deformable body in rate-and-state frictional contact with a rigid body under sideway loading. The theoretically predicted $L_c$ and its finite-size dependence are shown to be in reasonably good quantitative agreement with the full numerical solutions, lending support to the proposed approach. Section~\ref{sec:conclusion} offers some concluding remarks and discusses some prospects.
\section{A conventional approach to calculating the nucleation length $L_c$}
\label{sec:conventional}
As stated, the most prevalent approach to the nucleation of rapid slip at frictional interfaces associates nucleation with an instability of a slowly expanding creep patch. The creep patch features a non-uniform spatial distribution of slip velocity, in the quasi-static regime (where inertia and acoustic radiation are negligible), due to some external loading. It is assumed to be stable as long as its length $L(t)$ is smaller than a critical nucleation length $L_c$. When $L(t)\=L_c$, the patch becomes unstable and transforms into a rupture front, accompanied by accelerated slip and dynamic propagation (where inertia and significant acoustic radiation are involved). As creep patches are non-stationary objects that involve spatially varying fields, determining their stability --- and hence $L_c$ --- is a non-trivial challenge that typically requires invoking some approximations.
The most common approximation proceeds in two steps~\citep{Dieterich1986,Dieterich1992,Lapusta2000,Kaneko2008}. First, the creep patch and the two elastically deformable bodies that form the frictional interface are replaced by a rigid block of mass $M$ in contact with a rigid substrate and attached to a Hookean spring of stiffness $K$. That is, all of the spatial aspects of the problem are first neglected. The external loading and the typical slip velocity within the patch are mimicked by constantly pulling the Hookean spring at a velocity $V$. The rigid block is pressed against the rigid substrate by a normal force $F_N$, which gives rise to a frictional resistance force $f F_N$, where $f$ is described by the friction law, which may depend on the block's slip $u(t)$, its time-derivatives and the state of the frictional interface.
This single degree-of-freedom spring-block system is described by the force balance equation $M \ddot{u}(t)\=K(V t-u(t))-f(...)F_N$, where each superimposed dot denotes a time-derivative. We assume that $f(...)$ can be described by the rate-and-state constitutive framework, where $f(\dot{u}(t),\phi(t))$ is a function of the slip velocity $\dot{u}$ and of an internal state variable $\phi$. The latter, which quantifies the typical age/maturity of contact asperities, evolves according to $\dot{\phi}\=g(\phi\,\dot{u}/D)$, where $D$ is a memory lengthscale and the function $g(\Omega)$ satisfies $g(1)\=0$ and $g'(1)\!<\!0$. For example, two popular choices, namely $g(\Omega)\=1-\Omega$~\citep{Ruina1983,Marone1998,Nakatani2001,Baumberger2006,Bhattacharya2014} and $g(\Omega)\=-\Omega\log\Omega$~\citep{Ruina1983,Gu1984,Bhattacharya2014}, both feature $g'(1)\=-1$.
Consider then a steady sliding state at a constant driving velocity $\dot{u}\=V$ such that $\phi\=D/V$. A standard linear stability analysis implies that this steady state becomes unstable if~\citep{Rice1983,Ruina1983,Gu1984,Lapusta2000,Baumberger2006,Bhattacharya2014}
\begin{equation}
\label{eq:spring-block}
K < K_c\equiv\frac{df(V,D/V)}{d\!\log{V}}\frac{g'(1)\,F_N}{D}\ ,
\end{equation}
where an inertial term proportional to $MV^2$ has been neglected. That is, an instability is predicted when the spring stiffness $K$ is smaller than a critical stiffness $K_c$. Note that since generically $g'(1)\!<\!0$, a necessary condition for instability is $df(V,D/V)/dV\!<\!0$, i.e.~that the sliding velocity $V$ belongs to the velocity-weakening branch of the steady state friction curve~\citep{Ruina1983}.
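As a concrete illustration, one can check the threshold in Eq.~\eqref{eq:spring-block} for the Dieterich--Ruina form $f(\dot{u},\phi)\=f_0+a\log(\dot{u}/V_*)+b\log(\phi V_*/D)$ with the aging law $g(\Omega)\=1-\Omega$, a particular constitutive choice that is consistent with, but not mandated by, the general framework above. In the quasi-static limit, linearizing $\dot{s}\=V-\dot{u}$ (with $s\!\equiv\!Vt-u$), $Ks\=F_N f$, and $\dot{\phi}\=1-\phi\dot{u}/D$ about steady sliding gives a $2\times 2$ Jacobian whose trace changes sign exactly at $K_c\=(b-a)F_N/D$, since here $df(V,D/V)/d\!\log V\=a-b$ and $g'(1)\=-1$. The sketch below (our derivation, with illustrative dimensionless parameters) verifies this:

```python
import math

def max_growth_rate(K, V=1.0, D=1.0, FN=1.0, a=0.01, b=0.02):
    """Largest real part of the eigenvalues of the quasi-static spring-block
    system linearized about steady sliding:
        d(dv)/dt   = (V/a) (b/D - K/FN) dv + (b V^3 / (a D^2)) dphi
        d(dphi)/dt = -(1/V) dv - (V/D) dphi
    Parameters are illustrative, not fitted to any material."""
    J11 = (V / a) * (b / D - K / FN)
    J12 = b * V ** 3 / (a * D ** 2)
    J21 = -1.0 / V
    J22 = -V / D
    tr, det = J11 + J22, J11 * J22 - J12 * J21
    disc = tr * tr - 4.0 * det
    if disc < 0.0:
        return tr / 2.0                      # complex pair: Re(lambda) = tr / 2
    return (tr + math.sqrt(disc)) / 2.0

K_c = (0.02 - 0.01) * 1.0 / 1.0              # (b - a) FN / D for the defaults
assert max_growth_rate(1.5 * K_c) < 0.0      # stiff spring: steady sliding stable
assert max_growth_rate(0.5 * K_c) > 0.0      # compliant spring: instability
```

The sign change of the growth rate at $K\=K_c$ reproduces the criterion of Eq.~\eqref{eq:spring-block} for this constitutive choice.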
In the second step, the analysis is extended to spatially varying fields and elastically deformable bodies --- relevant to realistic creep patches --- by identifying the spring stiffness $K$ in the spring-block system with an $L$-dependent effective stiffness $K^{eff}\!(L)$ in the spatially varying and elastically deformable system. This is typically done through some fracture mechanics estimates which yield~\citep{Dieterich1986,Rice1993}
\begin{equation}
K^{eff}\!(L)=\eta\frac{\mu A_n}{L}\ ,
\end{equation}
where $\mu$ is the shear modulus, $A_n$ is the nominal contact area and the dimensionless number $\eta$ is a model-dependent pre-factor. As expected physically, the effective stiffness of the overall system, $K^{eff}$, is a decreasing function of the length of the creep patch, $L$. Using then $K^{eff}\!<\!K_c$ of Eq.~\eqref{eq:spring-block} as an instability criterion, one obtains
\begin{equation}
\label{eq:1DOF_Lc}
L > L_c \equiv \eta\frac{\mu D}{\tfrac{df(V,D/V)}{d\!\log{V}}g'(1)\,\sigma_0}\ ,
\end{equation}
where $\sigma_0\!=\!F_N/A_n$. The numerical pre-factor $\eta$ is model-dependent (e.g.~it depends on the crack configuration, dimensionality and loading configuration) and its value varies between $2/\pi$ and $4/3$ in the available literature~\citep[see Table 1]{Dieterich1992}. The nucleation criterion in Eq.~\eqref{eq:1DOF_Lc}, with $\eta$ close to unity, is widely used in the literature, though we are not aware of computational or experimental studies that have quantitatively and systematically tested it. Next, we present a different approach for calculating $L_c$.
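For orientation, Eq.~\eqref{eq:1DOF_Lc} can be evaluated with the standard rate-and-state choice $df(V,D/V)/d\!\log V\=a-b$ and $g'(1)\=-1$ (an illustrative assumption, as in conventional Dieterich--Ruina friction) and with parameter values that are typical in order of magnitude only, not taken from any particular study:

```python
def nucleation_length(mu, D, sigma0, a, b, eta=1.0):
    """Eq. (eq:1DOF_Lc): L_c = eta * mu * D / ((b - a) * sigma0),
    using df/dlog V = a - b and g'(1) = -1 (illustrative assumption)."""
    return eta * mu * D / ((b - a) * sigma0)

# Illustrative values: mu = 30 GPa, D = 1 micrometer, sigma0 = 10 MPa, b - a = 0.01.
L_c = nucleation_length(mu=30e9, D=1e-6, sigma0=10e6, a=0.01, b=0.02)
# L_c is then 0.3 (meters), i.e., on the scale of laboratory samples.
```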
\section{An approach based on the stability of homogeneous sliding of elastically-deformable bodies}
\label{sec:LSA}
Our goal here is to propose an alternative approach to calculating the critical nucleation length $L_c$. In the proposed approach, nucleation is viewed as a spatiotemporal instability occurring along the creep patch which is assumed to be stable from the fracture mechanics perspective, i.e.~to propagate under stable Griffith energy balance conditions~\citep{Freund1990}. Since, in general, an elastic body can be thought of as a scale-dependent spring, one expects short wavelength $\lambda$ (large wavenumber $k\=2\pi/\lambda$) perturbations to be stable and instability --- if it exists --- to emerge beyond a critical (minimal) wavelength $\lambda_c$ (i.e.~below a critical wavenumber $k_c$). Consequently, when the size $L(t)$ of the expanding creep patch is small, $L(t)\!<\!2\pi/k_c$, we expect it to be stable. A loss of stability is expected when an unstable perturbation can {\em first} fit into the creep patch, i.e.~when the patch size satisfies $L(t)\=L_c\!\equiv\!2\pi/k_c$.
In this physical picture, the major goal is to calculate the critical wavenumber $k_c$. There is, however, no unique and general procedure to study the stability of non-stationary (time-dependent) and spatially varying solutions such as those associated with an expanding creep patch. Consequently, we invoke an approximation in which the spatially varying slip velocity within the creep patch is replaced by a homogeneous (space-independent) characteristic slip velocity $V$. With this approximation in mind, we need to study the stability of steady-state homogeneous sliding of an infinitely long system (in the sliding direction) in order to calculate $k_c$. Applying the result to the actual creep patch, accelerating slip nucleation is predicted to occur when $L(t)\=L_c\!\equiv\!2\pi/k_c$. This idea has been introduced, pursued and substantiated in the context of thin layers sliding on top of rigid substrates in~\citet{Bar-Sinai2013}. Our aim here is to significantly generalize the idea to any frictional system.
\begin{figure*}[ht]
\centering
\includegraphics[width=0.8\textwidth]{Fig1}\\
\caption{(left) A long elastic body of height $H^{\mbox{\tiny (1)}}$, shear modulus $\mu^{\mbox{\tiny (1)}}$ and Poisson's ratio $\nu^{\mbox{\tiny (1)}}$ sliding on top of another long elastic body of height $H^{\mbox{\tiny (2)}}$, shear modulus $\mu^{\mbox{\tiny (2)}}$ and Poisson's ratio $\nu^{\mbox{\tiny (2)}}$. The color gradients represent the fact that the bodies are essentially infinitely long. The bodies are pressed one against the other by a normal stress of magnitude $\sigma_0$ and a homogeneous sliding state at a relative velocity $V$ (in the figure the lower body is assumed to be stationary) is reached by the application of a shear stress of magnitude $\tau_0$ to the top and bottom edges (not shown). (right) The same as in the left panel, except that the lower body is infinitely rigid, $\mu^{\mbox{\tiny (2)}}\!\to\!\infty$, the upper body is of finite length and the velocity $V$ is applied to the lateral edge at $x\!=\!0$. Note that the superscript $\hbox{\scriptsize (1)}$ is unnecessary here and hence is omitted.}
\label{fig:sys_fig}
\end{figure*}
We consider a long elastic body in the $x$-direction of height $H^{\mbox{\tiny (1)}}$ in the $y$-direction steadily sliding with a relative slip velocity $V$ on top of a long elastic body of height $H^{\mbox{\tiny (2)}}$. The bodies may be made of different elastic materials and are pressed one against the other by a normal stress $\sigma_0$, see Fig.~\ref{fig:sys_fig} (left). As we are interested in the response of the system to spatiotemporal perturbations on top of the homogeneous sliding state at a velocity $V$, we define the slip displacement $\epsilon(x,t)\!\equiv\! u_x(x,y\!=\!0^+,t)-u_x(x,y\!=\!0^-,t)$ and the slip velocity $v(x,t)\!\equiv\!\dot{\epsilon}(x,t)$, where ${\B u}(x,y,t)$ is the displacement field and $y\=0$ is the fault plane (the superscript $\hbox{\scriptsize +/--}$ means approaching the fault plane from the upper/lower body side, respectively). ${\B u}(x,y,t)$ for each body satisfies the Navier-Lam\'e equation $\nabla\!\cdot\!{\B \sigma}\=\frac{\mu}{1-2\nu}\nabla\!\left(\nabla\!\cdot\!{\B u}\right)+\mu\nabla^2{\B u}\=\rho\,\ddot{\B u}$, with its own shear modulus $\mu$, Poisson's ratio $\nu$ and mass density $\rho$~\cite{Landau1986}. The Cauchy stress tensor field $\B \sigma$ is related to the displacement field $\B u$ through Hooke's law, and each superimposed dot represents a partial time derivative.
The fault at $y\=0$ is assumed to be described by the rate-and-state constitutive relation $\tau\=\sigma_{xy}\=-f(v,\phi)\sigma_{yy}$. Fault opening or interpenetration are excluded, i.e.~we assume $u_y(x,y\!=\!0^+,t)\=u_y(x,y\!=\!0^-,t)$, and $\sigma_{xy}$ and $\sigma_{yy}$ are continuous across the fault. The internal state field $\phi(x,t)$ evolves according to $\dot{\phi}\=g(\phi\,\dot{u}/D)$, with $g(1)\=0$ and $g'(1)\!<\!0$, as in Sect.~\ref{sec:conventional}. We then introduce interfacial slip perturbations of the form $\delta\epsilon\!\propto\!\exp(\Lambda t-i k x)$, where $\Lambda$ is the complex growth rate and $k$ is the wavenumber. The shear and normal stress perturbations are related to $\delta\epsilon$ using the solution of the quasi-static Navier-Lam\'e equation, and take the form $\delta\sigma_{xy}\=-\mu\,k\,G_1\,\delta\epsilon$ and $\delta\sigma_{yy}\=i\mu\,k\,G_2\,\delta\epsilon$, where $\mu$ is the shear modulus of the upper body. We focus on the quasi-static regime, i.e.~excluding inertia, because nucleation generically takes place in this regime. The quasi-static elastic transfer functions $G_1$ and $G_2$, see Supporting Information~\citep{Geubelle1995}, contain all of the information about the system's geometry, the elastic properties of the sliding bodies and loading conditions (e.g.~velocity vs.~stress boundary condition). The perturbation in the frictional resistance takes the form $\delta{f}\=\tfrac{\Lambda(a\Lambda\ell-\zeta V)}{V(V+\Lambda\ell)}\,\delta\epsilon$, where we used $\delta{v}\=\Lambda\delta\epsilon$, and the definitions $\ell\!\equiv\!-\tfrac{D}{g'(1)}\!>\!0$, $a\!\equiv\!v\tfrac{\partial\!f(v,\phi)}{\partial v}\!>\!0$ and $\zeta\!\equiv\!-v\tfrac{df(v, D/v)}{dv}\=-\tfrac{df(v,D/v)}{d\!\log{v}}$ (the latter two are evaluated at $v\=V$), see Supporting Information.
Note that $\zeta$ can be both positive (velocity-weakening friction) and negative (velocity-strengthening friction) depending on the materials, the sliding velocity $V$ and physical conditions (e.g.~temperature)~\citep{Bar-Sinai2014}. In the regime of small slip velocities of interest here we assume that friction is velocity-weakening, hence we consider $\zeta\!>\!0$.
The linear perturbation spectrum $\Lambda(k)$ is determined by the perturbation of the constitutive relation, which reads
\begin{equation}
\delta\tau=\delta\sigma_{xy}=\sigma_0\delta{f}-f\delta\sigma_{yy}\ .
\end{equation}
Substituting the results for $\delta\sigma_{xy}$, $\delta\sigma_{yy}$ and $\delta{f}$, we obtain an equation for $\Lambda(k)$
\begin{equation}
\label{eq:spectrum}
\mu\,k\left(G_1-i f G_2\right)+\sigma _0\frac{\Lambda(a \Lambda \ell -\zeta V)}{V (V+\Lambda \ell )}=0\ .
\end{equation}
Once solutions $\Lambda(k)$ are obtained, instability is implied whenever $\Re[\Lambda(k)]\!>\!0$, corresponding to an exponential growth of perturbations. Consequently, $k_c$ is determined as the largest wavenumber $k$ (smallest wavelength) for which $\Re[\Lambda(k)]\=0$ and the critical nucleation length is estimated as $L_c\!\equiv\!2\pi/k_c$.
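When the coupling between slip and normal stress perturbations is absent, $G_2\=0$, the critical wavenumber can be obtained in closed form. Setting $\Lambda\=i\omega$ (with real $\omega$) in Eq.~\eqref{eq:spectrum} and separating imaginary and real parts, one finds
\begin{equation}
\omega^2=\frac{\zeta V^2}{a\,\ell^2}
\qquad\quad\text{and}\qquad\quad
\mu\,k_c\,G_1=\frac{\sigma_0\,\omega^2\ell\,(a+\zeta)}{V^2+\omega^2\ell^2}=\frac{\zeta \sigma_0}{\ell}\ ,
\end{equation}
i.e.~$k_c\=\zeta\sigma_0\,(\mu\,G_1\,\ell)^{-1}$, with an oscillatory onset of instability at the frequency $\omega$.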
Solutions to Eq.~\eqref{eq:spectrum} for some cases are available in the literature. Most notably, for two identical half-spaces we have $G_1\=\text{sign}(k)[2(1-\nu)]^{-1}$ and $G_2\=0$ (see Supporting Information), where the latter represents the absence of a bimaterial effect for elastically identical materials of the same shape/geometry. Plugging these transfer functions into Eq.~\eqref{eq:spectrum}, one can readily obtain a known result for the critical wavenumber~\citep{Rice1983}, which reads $k_c\=2(1-\nu)\zeta \sigma _0 \mu^{-1} \ell^{-1}$. Using our proposed criterion $L_c\!\equiv\!2\pi/k_c$, we obtain
\begin{equation}
\label{eq:half_spaces}
L_c=\frac{\pi\,\mu\,\ell }{\zeta (1-\nu ) \sigma _0}\qquad\quad\Longrightarrow\qquad\quad \eta=\frac{\pi }{1-\nu} \ ,
\end{equation}
where $\eta$ was defined in Eq.~\eqref{eq:1DOF_Lc}. This prediction for the critical nucleation length is identical to the one in Eq.~\eqref{eq:1DOF_Lc}, which basically follows from dimensional considerations, once the pre-factor $\eta\=\pi(1-\nu)^{-1}$ is identified as done above (and the definitions of $\ell$ and $\zeta$ are recalled). This value of the pre-factor $\eta$ is $\pi$ times larger than the largest value we have been able to trace in the available literature based on the conventional approach, hence we conclude that the proposed approach predicts a significantly larger nucleation length $L_c$ for identical half-spaces as compared to the conventional approach. Indeed, some numerical simulations of earthquake nucleation indicated that the conventional prediction with $\eta\!\simeq\!1$ quite significantly underestimates the observed $L_c$~\citep{Lapusta2003}.
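As a consistency check, the closed-form result in Eq.~\eqref{eq:half_spaces} can be recovered numerically. The following minimal Python sketch (using, for concreteness, the parameter values quoted later in Sect.~\ref{sec:FiniteH}) multiplies Eq.~\eqref{eq:spectrum} by $V(V+\Lambda\ell)$ to obtain a quadratic equation for $\Lambda$, and bisects on $k$ to locate the point where the maximal growth rate $\Re[\Lambda]$ crosses zero:

```python
import numpy as np
from scipy.optimize import brentq

# Identical half-spaces: G1 = 1/(2(1-nu)), G2 = 0.
# Parameter values quoted in the finite-height section of the text.
mu, nu = 3.1e9, 1 / 3        # shear modulus [Pa], Poisson's ratio
a, zeta = 0.0068, 0.016      # rate-and-state parameters
sigma0 = 1e6                 # normal stress [Pa]
ell, V = 0.5e-6, 1e-5        # state-evolution length [m], slip velocity [m/s]

def max_growth_rate(k):
    """Largest Re(Lambda) among the roots of Eq. (spectrum) multiplied by
    V(V + Lambda*ell): sigma0*a*ell*Lambda^2 + (E*V*ell - sigma0*zeta*V)*Lambda
    + E*V^2 = 0, with E = mu*k*(G1 - i*f*G2) and here G2 = 0."""
    E = mu * k / (2 * (1 - nu))
    roots = np.roots([sigma0 * a * ell, E * V * ell - sigma0 * zeta * V, E * V**2])
    return max(roots.real)

# k_c is the wavenumber at which the maximal growth rate vanishes
k_c = brentq(max_growth_rate, 1e-3, 1e3)
L_c = 2 * np.pi / k_c
k_c_analytic = 2 * (1 - nu) * zeta * sigma0 / (mu * ell)   # Rice (1983)
print(k_c, k_c_analytic, L_c)
```

The bisection reproduces $k_c\=2(1-\nu)\zeta \sigma _0 \mu^{-1} \ell^{-1}$ to machine precision, corresponding to $L_c\!\approx\!0.46$ m for these illustrative parameters.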
The physical picture of nucleation developed in this section suggests that the {\em origin} of nucleation is a linear frictional instability, while the {\em outcome} of nucleation is typically strongly nonlinear. In particular, the critical
nucleation conditions coincide with the onset of linear instability when the patch size reaches $L_c$, then the slip velocity increases exponentially in the linear regime until nonlinearities set in when the slip velocity is large enough. Finally, the patch breaks up into propagating rupture fronts. The linear stage of the instability is expected to be rather generic, and in particular nearly independent of the exact functional form of $g(\cdot)$ (with $g(1)\=0$ and $g'(1)\!<\!0$) within the rate-and-state constitutive framework and of the background strength of the fault quantified by the initial age $\phi(t\=0)$, while the nonlinear stages that follow may depend on the details of the constitutive relation and the background fault strength.
These generic properties of the onset of nucleation will be explicitly demonstrated in Sect.~\ref{sec:FiniteH} below. Furthermore, we note that the works of~\citet{Rubin2005, Ampuero2008} apparently focus on the nonlinear stages of nucleation, which is consistent with the fact that they find differences between different friction laws and that their patches can shrink/expand during the nonlinear evolution of the instability. The nonlinear stages -- on the route to rupture propagation -- cannot take place, though, if the patch does not first reach the size $L_c$ determined by the linear instability. Hence, we believe that the above defined $L_c$ is the relevant nucleation length, and not any other length that might characterize the nonlinear evolution of the instability.
\section{Application to bimaterial interfaces}
\label{sec:bimaterial}
The general framework laid down in the previous section, unlike the conventional approach, can be naturally applied to bimaterial interfaces. We then consider two half-spaces made of different elastic materials: the upper half-space is characterized by a shear modulus $\mu^{\mbox{\tiny (1)}}$ and Poisson's ratio $\nu^{\mbox{\tiny (1)}}$, and the lower half-space by a shear modulus $\mu^{\mbox{\tiny (2)}}$ and Poisson's ratio $\nu^{\mbox{\tiny (2)}}$. This corresponds to Fig.~\ref{fig:sys_fig} (left), once the limits $H^{\mbox{\tiny (1)}}\!\to\!\infty$ and $H^{\mbox{\tiny (2)}}\!\to\!\infty$ are taken. Defining $\psi\!\equiv\!\mu^{\mbox{\tiny (2)}}\!/\mu^{\mbox{\tiny (1)}}$ and $\mu\!\equiv\!\mu^{\mbox{\tiny (1)}}$ (i.e.~the shear modulus of the upper body is denoted by $\mu$, as before), the elastic transfer functions for this bimaterial system take the form~\citep{Rice2001} (see also Supporting Information)
\begin{equation}
\label{eq:Gs_bimaterial}
G_1=\frac{{\C M}}{2 \mu}\text{sign}(k),\qquad\qquad G_2=\frac{\beta {\C M}}{2\mu} \ ,
\end{equation}
where
\begin{equation}
\hspace{-0.14cm}{\C M}\!\equiv\!\frac{2\psi\mu(1\!-\!\beta^2)\!^{-1}}{\psi(1\!-\!\nu^{\mbox{\tiny (1)}}\!)\!+\!(1\!-\!\nu^{\mbox{\tiny (2)}}\!)},\qquad\qquad
\beta\!\equiv\!\frac{\psi(1\!-\!2\nu^{\mbox{\tiny (1)}}\!)\!-\!(1\!-\!2\nu^{\mbox{\tiny (2)}}\!)}{2[\psi(1\!-\!\nu^{\mbox{\tiny (1)}}\!)\!+\!(1\!-\!\nu^{\mbox{\tiny (2)}}\!)]}.
\end{equation}
${\C M}$ plays the role of an effective bimaterial modulus, which approaches $\mu/(1-\nu)$ in the identical materials limit, $\mu^{\mbox{\tiny (1)}}\=\mu^{\mbox{\tiny (2)}}\=\mu$ and $\nu^{\mbox{\tiny (1)}}\=\nu^{\mbox{\tiny (2)}}\=\nu$. $\beta$, which appears in $G_2$ but not in $G_1$, vanishes in the identical materials limit (and consequently $G_2$ vanishes in this limit as well) and hence it quantifies the bimaterial effect.
\begin{figure*}[ht]
\centering
\includegraphics[width=0.8\textwidth]{Fig2}\\
\caption{The critical nucleation length $L_c$ (normalized by $L_c^{\cal M}$) for bimaterial interfaces separating two half-spaces, cf.~Eq.~\eqref{eq:bimaterial_Lc}, plotted as a function of $f\beta$ for various values of $\zeta/a$.}
\label{fig:bi}
\end{figure*}
The presence of a bimaterial contrast, $\beta\!\ne\!0$, introduces a new destabilization effect associated with a coupling between slip and normal stress perturbations, in addition to the destabilizing effect associated with velocity-weakening friction, $\zeta\!>\!0$. Hence, on physical grounds one expects $L_c$ to decrease with increasing bimaterial contrast. To test this, we insert $G_{1,2}$ of Eq.~\eqref{eq:Gs_bimaterial} into Eq.~\eqref{eq:spectrum} and calculate $k_c$, obtaining the following expression for $L_c\=2\pi/k_c$
\begin{equation}
\label{eq:bimaterial_Lc}
L_c\!=\!\frac{\pi {\C M} \ell}{\zeta \sigma_0}\frac{(f\beta)^2\!\left(1+\zeta/a-\sqrt{\left(1+\zeta/a\right)^2 + \frac{\displaystyle 4\,\zeta/a}{\displaystyle (f\beta)^2}} \right)+2\,\zeta/a}{2\,\zeta/a } \ .
\end{equation}
The first multiplicative contribution on the right-hand-side, $L_c^{\cal M}\!\equiv\!\frac{\pi {\C M} \ell }{\zeta \sigma _0}$, is obtained by replacing $\mu/(1\!-\!\nu)$ in our result in Eq.~\eqref{eq:half_spaces} by the effective modulus ${\C M}$. A similar replacement has been proposed by~\citet{Rubin2007} in the context of a different heuristic estimate of the critical nucleation length for bimaterial interfaces. Consequently, we plot in Fig.~\ref{fig:bi} $L_c$ of Eq.~\eqref{eq:bimaterial_Lc}, normalized by $L_c^{\cal M}$, as a function of $f\beta$ for various values of $\zeta/a$. It is observed that $L_c$ for bimaterial interfaces is generically {\em smaller} than the conventional estimate $L_c^{\cal M}$, indicating that bimaterial interfaces may be more unstable than previously considered. We note in passing that Eq.~\eqref{eq:bimaterial_Lc} remains valid in the presence of velocity-strengthening friction as well, $\zeta\!<\!0$, in which case it predicts instability for sufficiently strong bimaterial contrasts, $f\beta\!\ge\!\frac{2\sqrt{-a \zeta }}{a+\zeta}$~\citep{Rice2001}.
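The behavior shown in Fig.~\ref{fig:bi} is straightforward to reproduce. The short Python sketch below evaluates the normalized nucleation length of Eq.~\eqref{eq:bimaterial_Lc}; the values of $f\beta$ and $\zeta/a$ used are illustrative, not tied to specific materials. It shows that $L_c/L_c^{\cal M}$ tends to unity for weak contrast, $f\beta\!\to\!0$, and decreases with increasing contrast:

```python
import math

def Lc_over_LcM(f_beta, r):
    """Normalized nucleation length L_c / L_c^M of Eq. (bimaterial_Lc),
    with r = zeta/a > 0 (velocity-weakening) and f_beta != 0."""
    fb2 = f_beta ** 2
    return (fb2 * (1 + r - math.sqrt((1 + r) ** 2 + 4 * r / fb2)) + 2 * r) / (2 * r)

# L_c/L_c^M -> 1 for weak bimaterial contrast and decreases with |f*beta|
ratios = [Lc_over_LcM(fb, 1.0) for fb in (1e-4, 0.25, 0.5, 1.0)]
print(ratios)
```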
\section{Application to Finite-size systems and comparison to inertial Finite-Element-Method calculations}
\label{sec:FiniteH}
The general framework laid down in section~\ref{sec:LSA}, unlike the conventional approach, can be naturally applied to finite-size systems. To demonstrate this, we consider here a system that features both finite dimensions and a bimaterial contrast. In particular, we consider a long deformable body of height $H$, and of elastic constants $\mu$ and $\nu$, in rate-and-state frictional contact with a rigid substrate under the application of a compressive stress $\sigma_0$ and a shear stress $\tau_0$. This configuration corresponds to Fig.~\ref{fig:sys_fig} (left), once the limit $\mu^{\mbox{\tiny (2)}}\!\to\!\infty$ is taken. In this case, the elastic transfer functions appearing in Eq.~\eqref{eq:spectrum} take the form (see Supporting Information)
\begin{equation}
\begin{split}
&G_1=\frac{4 (1-\nu ) (2 H k+\sinh (2 H k))}{2 H^2 k^2+(3-4 \nu ) \cosh (2 H k)-4 \nu (3-2 \nu )+5}\ ,\\ &G_2=\frac{4 \left(H^2 k^2+(1-2 \nu ) \sinh ^2(H k)\right)}{2 H^2 k^2+(3-4 \nu ) \cosh (2 H k)-4 \nu (3-2 \nu )+5} \ .
\end{split}
\label{eq:Gs_finiteH}
\end{equation}
When substituted in Eq.~\eqref{eq:spectrum}, we obtain a complex equation which is not analytically tractable, but rather is amenable to numerical analysis. Let us denote the solution by $k_c(H)$ and the corresponding prediction for the critical nucleation length by $L_c(H)\=2\pi/k_c(H)$.
Equation~\eqref{eq:spectrum}, with $G_{1,2}$ of Eq.~\eqref{eq:Gs_finiteH}, does admit an analytic solution in the limit $Hk\!\to\!0$, i.e.~when the system height $H$ is small compared to field variations parallel to the interface characterized by a lengthscale $\sim k^{-1}$. In this limit, we find $G_1\!\simeq\!2 H k(1\!-\!\nu)^{-1}$ and $G_2\!\simeq\!0$. Using these in Eq.~\eqref{eq:spectrum}, we obtain
\begin{equation}
L_c^{(Hk \to 0)} \simeq 2\pi \sqrt{\frac{2H \mu\,\ell }{\zeta (1-\nu ) \sigma _0}}\ .
\label{eq:Lc1D}
\end{equation}
$L_c^{(Hk \to 0)}$ predicts the small $H$ behavior of $L_c(H)$ and constrains any numerical calculation of $L_c(H)$ to be quantitatively consistent with it in this limit. In addition, it is fully consistent with the results of~\citet{Bar-Sinai2013}. We numerically calculated $L_c(H)$ for the following set of parameters: $\mu\=3.1$ GPa, $\nu\=1/3$, $f\=0.41$, $a\=0.0068$, $\zeta\=0.016$, $\sigma_0\=1$ MPa, $\ell\=0.5\,\mu$m, and $V\=10\,\mu$m/s (the latter corresponds to an applied shear stress $\tau_0\=f(V,\phi\=D/V)\sigma_0$). The result is plotted in the main panel of Fig.~\ref{fig:Lc_FH} (solid line). When $L_c^{(Hk \to 0)}$ of Eq.~\eqref{eq:Lc1D} is superimposed on it (dashed line), perfect agreement at small $H$ and significant deviations at larger $H$ are observed, as expected.
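A minimal Python sketch of such a numerical calculation is given below (with the parameters listed above); it is a sketch of the linear stability procedure, not of the FEM calculations discussed next. It solves Eq.~\eqref{eq:spectrum} with the transfer functions of Eq.~\eqref{eq:Gs_finiteH} by bisecting on $k$, bracketing around the small-$H$ estimate of Eq.~\eqref{eq:Lc1D}:

```python
import numpy as np
from scipy.optimize import brentq

mu, nu, f = 3.1e9, 1 / 3, 0.41
a, zeta, sigma0 = 0.0068, 0.016, 1e6
ell, V = 0.5e-6, 1e-5

def transfer(k, H):
    """Quasi-static transfer functions of Eq. (Gs_finiteH):
    an elastic body of height H on a rigid substrate."""
    den = 2 * H**2 * k**2 + (3 - 4 * nu) * np.cosh(2 * H * k) - 4 * nu * (3 - 2 * nu) + 5
    G1 = 4 * (1 - nu) * (2 * H * k + np.sinh(2 * H * k)) / den
    G2 = 4 * (H**2 * k**2 + (1 - 2 * nu) * np.sinh(H * k) ** 2) / den
    return G1, G2

def max_growth_rate(k, H):
    """Largest Re(Lambda) of the quadratic obtained by multiplying
    Eq. (spectrum) by V(V + Lambda*ell)."""
    G1, G2 = transfer(k, H)
    E = mu * k * (G1 - 1j * f * G2)
    roots = np.roots([sigma0 * a * ell, E * V * ell - sigma0 * zeta * V, E * V**2])
    return max(roots.real)

def Lc(H):
    """L_c(H) = 2*pi/k_c(H); the bisection bracket is centered on the
    small-H estimate following from Eq. (Lc1D)."""
    k0 = np.sqrt(zeta * (1 - nu) * sigma0 / (2 * mu * H * ell))
    k_c = brentq(lambda k: max_growth_rate(k, H), 0.1 * k0, 10 * k0)
    return 2 * np.pi / k_c

H = 1e-3
Lc_small_H = 2 * np.pi * np.sqrt(2 * H * mu * ell / (zeta * (1 - nu) * sigma0))
print(Lc(H), Lc_small_H)
```

For small heights, e.g.~$H\=1$ mm, the numerical $L_c(H)$ agrees with $L_c^{(Hk\to0)}$ to within a few percent, as expected.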
Our goal now is to quantitatively test the ability of the calculated $L_c(H)$ to predict the critical nucleation length in a realistic situation in which a slowly expanding creep patch spontaneously nucleates accelerating slip. We would also like to test the theoretical prediction that $L_c$ is nearly independent of the specific friction law (in particular the aging vs.~the slip $\phi$ evolution laws) and the background fault strength (the initial value of $\phi$). To these aims, we performed inertial Finite-Element-Method (FEM) calculations that are directly relevant for the geometrical configuration and material parameters discussed in the last two paragraphs. In particular, we consider a deformable body of height $H$ which is also of finite extent in the direction parallel to the interface and which is loaded (by an imposed velocity $V\=10\,\mu$m/s that is initiated at $t\=0$) at its lateral edge (defined as $x\=0$), rather than at its top edge at $y\=H$, see Fig.~\ref{fig:sys_fig} (right). The advantage of this sideway loading configuration is that it naturally generates a creep patch that slowly expands from $x\=0$ along the interface, cf.~the inset of Fig.~\ref{fig:Lc_FH}. The interface is first described by the aging rate-and-state constitutive relation with $\dot{\phi}\!\simeq\!1\!-\!\phi v\!/\!D$ and $f(v,\phi)\!\simeq\!f_0\!+\!a\log(v\!/V)\!+\!(\zeta+a)\log(\phi V\!/\!D)$, where $f_0\=0.41$ and the other parameters are as above. The initial conditions are $v(t\=0)\=0$ and $\phi(t\=0)\=1s$. The full constitutive relation used in the FEM calculations, which also allows a transition from stick ($v\=0$) to slip ($v\!>\!0$), can be found in the supporting information~\cite{Hecht2012}.
Our theoretical approach predicts that the creep patch loses its stability and develops accelerating slip upon reaching a certain critical length. This is indeed observed in the inset of Fig.~\ref{fig:Lc_FH}, where the slip velocity blows up when the creep patch reaches a certain length. We then measured the critical length in inertial FEM calculations for different system heights $H$ (in addition to the inset of Fig.~\ref{fig:Lc_FH}, see also the supporting information for the details of the determination of $L_c$ in the numerical calculations) and superimposed the results for the aging law (red circles) on the main panel of Fig.~\ref{fig:Lc_FH}. It is observed that the theoretical prediction for the critical nucleation length $L_c(H)$ is in reasonably good quantitative agreement with the FEM results for the full range of system heights $H$. This major result lends serious support to the approach developed in this Letter.
In order to test whether the theoretically predicted critical nucleation length $L_c(H)$ is indeed nearly independent of the details of the friction law, we repeated the above described FEM calculations for $H\=0.01$ m and $H\=0.05$ m with the slip law instead of the aging law; that is, we used $\dot{\phi}\!\simeq\!-(\phi v\!/\!D)\log(\phi v\!/\!D)$ (the full constitutive relation can again be found in the supporting information). The resulting critical nucleation length (black triangles in main panel of Fig.~\ref{fig:Lc_FH}) for both $H$ values exhibits only a small variation (less than $10\%$) compared to the results for the aging law. Furthermore, we repeated the above described FEM calculations for the aging law with $H\=0.1$ m, except that we increased the initial age of the fault by three orders of magnitude, from $\phi(t\=0)\=1$ s to $\phi(t\=0)\=10^3$ s. The resulting critical nucleation length (brown square in main panel of Fig.~\ref{fig:Lc_FH}) exhibits only a small variation (less than $10\%$) compared to the result for $\phi(t\=0)\=1$ s. These results lend strong support to the idea that the critical nucleation length $L_c$ is determined by a linear instability that is reasonably predicted by the procedure developed in this Letter.
\begin{figure*}[ht]
\centering
\includegraphics[width=0.8\textwidth]{Fig3}\\
\caption{The theoretical prediction for the critical nucleation length $L_c$ for a generic rate-and-state constitutive relation as a function of the height $H$ of an elastic body sliding on top of a rigid substrate (solid line). The material, interfacial and loading parameters are given in the text. The analytic approximation for $L_c(H)$ in the $Hk\!\to\!0$ limit, cf.~Eq.~\eqref{eq:Lc1D}, is added (dashed line). The nucleation length measured in inertial FEM simulations of a finite elastic body of height $H$ under sideway loading for the aging law (see text and Fig.~\ref{fig:sys_fig} (right) for details) is shown as a function of $H$ (red circles). For $H\!=\!0.01$ m and $H\!=\!0.05$ m, $L_c$ for the slip law is also shown (black triangles), demonstrating small variation compared to the result for the aging law. For $H\!=\!0.1$ m, $L_c$ for $\phi(t\!=\!0)\!=\!10^3$ s is also shown (brown square), demonstrating small variation compared to the result for $\phi(t\!=\!0)\!=\!1$ s (i.e.~three orders of magnitude difference in the initial age of the fault). (inset) A sequence of snapshots in time (see legend) of the slip velocity field in inertial FEM simulations for the aging law with $H\!=\!0.1$ m, demonstrating the propagation of a creep patch from the loading edge at $x\!=\!0$ into the interface. At a certain creep patch length (denoted by a vertical dashed line and a horizontal double-head arrow) an instability accompanied by accelerated slip takes place. This is the numerically extracted nucleation length for this height $H$, as can be seen in the main panel.}
\label{fig:Lc_FH}
\end{figure*}
\section{Concluding remarks}
\label{sec:conclusion}
In this Letter we developed a theoretical approach for the calculation of the critical nucleation length for accelerating slip $L_c$. The proposed approach builds on existing literature by adopting the view that nucleation is associated with a linear frictional instability of an expanding creep patch. It deviates from the conventional approach in the literature by replacing the problem of the stability of a spatiotemporally varying creep patch by an effective homogeneous sliding linear stability analysis for deformable bodies, rather than invoking a spring-block stability analysis supplemented with some fracture mechanics estimates for deformable bodies. The quality of the predictions emerging from the proposed approach therefore depends on the degree to which the creep patch can be approximated by spatially homogeneous fields. This approximation is expected to be reasonable in many cases in light of the weak/logarithmic velocity dependence of friction in many materials. The temporal aspects of the creep patch propagation are taken into account by the requirement that the patch becomes unstable upon attaining a length into which an unstable mode of the homogeneous linear stability analysis can first fit.
The proposed approach is rather general and applies to a broad range of physical situations. For sliding along rate-and-state frictional interfaces separating identical elastic half-spaces, it has been shown to predict a significantly larger nucleation length compared to the conventional approach. For sliding along rate-and-state frictional interfaces separating different elastic half-spaces, the proposed approach has been shown to predict a bimaterial weakening effect which appears to be stronger than previously hypothesized, resulting in a smaller nucleation length. Finally, the proposed approach has been applied to finite-height systems. For this case, the scenario of a loss of stability of an expanding creep patch has been directly demonstrated using inertial FEM calculations and the predicted nucleation length has been shown to be in reasonably good quantitative agreement with direct FEM results for a range of system heights. The quality of the theoretical predictions has been shown to be nearly independent of the specific friction law used (aging vs.~slip laws) and the background strength of the fault. These results offer a theoretical framework for predicting rapid slip nucleation along faults and hence may give rise to better short-term earthquake prediction capabilities. The proposed approach can and should be quantitatively tested in a wide variety of interfacial rupture nucleation problems, using both theoretical tools and extensive numerical simulations.
\begin{acknowledgments}
E.~B.~acknowledges support from the Israel Science Foundation (Grant No.~295/16), the William Z.~and Eda Bess Novick Young Scientist Fund, COST Action MP1303, and the Harold Perlman Family. R.~S.~acknowledges support from the DFG priority program 1713. We are grateful to Eric Dunham, one of the reviewers of the manuscript, for his valuable and constructive comments and suggestions. We also thank Robert Viesca for useful discussions in the context of nucleation on bimaterial interfaces. M.~A.~acknowledges Yohai Bar-Sinai for helpful guidance and assistance. The analytical formulae and numerical methods described in the main text and supporting information are sufficient to reproduce all the results and plots presented in the paper.
\end{acknowledgments}
\section{Introduction}
Weil heights play an important role in Diophantine geometry, and particular Weil heights with nice properties, called canonical heights, are often very useful. The theory of canonical heights has had deep applications throughout arithmetic geometry.
Over abelian varieties $A$ defined over a number field $K$, N\'{e}ron and Tate constructed canonical height functions $\hat{h}_L: A(\bar{K}) \rightarrow \mathbb{R}$ with respect to symmetric ample line bundles $L$ which enjoy nice properties and can be used to prove the Mordell-Weil theorem for the rational points of the variety. More generally, in [4], Call and Silverman constructed canonical height functions on projective varieties $X$ defined over a number field which admit a morphism $f:X \rightarrow X$ with $f^*(L) \cong L^{\otimes d}$ for some line bundle $L$ and some $d >1$. In another direction, Silverman [19] constructed canonical height functions on certain $K3$ surfaces $S$ with two involutions $\sigma_1, \sigma_2$ (called Wehler's $K3$ surfaces) and developed an arithmetic theory analogous to the arithmetic theory on abelian varieties.
It was an idea of Kawaguchi [10] to consider polarized dynamical systems of several maps: given a projective variety $X/K$, morphisms $f_1,...,f_k:X \rightarrow X$ defined over $K$, an invertible sheaf $\mathcal{L}$ on $X$ and a real number $d>k$ such that $f_1^*\mathcal{L}\otimes ... \otimes f_k^*\mathcal{L} \cong \mathcal{L}^{\otimes d}$, he constructed a canonical height function associated to the polarized dynamical system $(X, f_1,..., f_k, \mathcal{L})$ that generalizes the earlier constructions mentioned above. In the case of Wehler's $K3$ surfaces above, for example, the canonical height defined by Silverman arises by Kawaguchi's method from the system formed by $(\sigma_1, \sigma_2)$.
Given a smooth projective variety $X/\mathbb{C}$ and a dominant rational map $f:X \dashrightarrow X$ inducing $f^*:$NS$(X)_{\mathbb{R}} \rightarrow$NS$(X)_{\mathbb{R}}$ on the N\'{e}ron-Severi group, the dynamical degree is defined as $\delta_f := \lim_{n \rightarrow \infty} \rho((f^n)^*)^{\frac{1}{n}}$, where $\rho$ denotes the spectral radius of a given linear map, that is, the largest among the absolute values of its eigenvalues. This limit converges and is a birational invariant that has been much studied over the last decades. A list of references can be found in [12].
In [12], Kawaguchi and Silverman studied an analogous arithmetic degree for $X$ and $f$ defined over $\bar{\mathbb{Q}}$ on points with well-defined forward orbit over $\bar{\mathbb{Q}}$. Namely, $\alpha_f(P):= \lim_{n \rightarrow \infty} h^+_X(f^n(P))^{\frac{1}{n}}$, where $h_X$ is a Weil height relative to an ample divisor and $h^+_X= \max \{1, h_X\}$. This degree measures the arithmetic complexity of the orbit of $P$ by $f$, and $\log \alpha_f(P)$ has been interpreted as a measure of the arithmetic entropy of the orbit $\mathcal{O}_f(P)$. It is shown in [12] that the arithmetic degree determines the height counting function for points in orbits, and that the arithmetic complexity of the $f$-orbit of an algebraic point never exceeds the geometrical-dynamical complexity of the map $f$, along with further arithmetic consequences. We ask whether this kind of research could be done in the setting of general dynamical systems as treated by Kawaguchi, with several maps, as in the case of Wehler's $K3$ surfaces. This is the first subject of this work.
Given a projective variety $X/K$, rational maps $f_1,...,f_k:X \dashrightarrow X$, and $\mathcal{F}_n=\{f_{i_1} \circ ... \circ f_{i_n} ; i_j =1,...,k \}$, we define a more general dynamical degree of a system of maps as $\delta_{\mathcal{F}}=\lim \sup_{n\rightarrow \infty}\max_{f \in \mathcal{F}_n} \rho (f^*)^{\frac{1}{n}}$, and extend the definition of the arithmetic degree to $\alpha_{\mathcal{F}}(P)= \frac{1}{k} \lim_{n \rightarrow \infty}\{ \sum_{f \in \mathcal{F}_n} h_X^+(f(P))\}^{\frac{1}{n}}$, obtaining also the convergence of $\delta_{\mathcal{F}}$, and that $\alpha_{\mathcal{F}}(P) \leq \delta_{\mathcal{F}}$ when $\alpha_{\mathcal{F}}(P)$ exists. Motivated by [12], we give an elementary proof that our new arithmetic degree is related to height counting functions in orbits, when $\alpha_{\mathcal{F}}(P)$ exists, by:
\begin{center}
$\lim_{B \rightarrow \infty} \dfrac{ \# \{ n \geq 0 ; \sum_{f \in \mathcal{F}_n} h_X(f(P)) \leq B\}}{ \log B}= \dfrac{1}{ \log (k. \alpha_{\mathcal{F}}(P))} $,
\end{center}
\begin{center}
$\lim \inf _{B \rightarrow \infty} (\# \{Q \in \mathcal{O}_{\mathcal{F}}(P); h_X(Q) \leq B \})^{\frac{1}{\log B}} \geq k^{\frac{1}{\log (k. \alpha_{\mathcal{F}}(P))}}$.
\end{center}
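As an illustration of the definition of $\delta_{\mathcal{F}}$, the following Python sketch estimates the dynamical degree of a system by brute force, for a hypothetical situation in which each $f_i^*$ acts on NS$(X)_{\mathbb{R}} \cong \mathbb{R}^2$ through a given matrix (the matrices below are illustrative and not attached to a specific variety):

```python
import itertools
from functools import reduce
import numpy as np

def dyn_degree_estimate(mats, n):
    """max_{f in F_n} rho(f^*)^(1/n): brute force over all length-n words
    in the semigroup generated by the matrices representing f_i^* on NS(X)_R."""
    best = 0.0
    for word in itertools.product(mats, repeat=n):
        M = reduce(np.matmul, word)
        best = max(best, max(abs(np.linalg.eigvals(M))))
    return best ** (1.0 / n)

# Two hypothetical (commuting, diagonal) actions with rho = 2 and rho = 3:
# every length-n word has spectral radius at most 3^n, so the estimate is 3.
f1_star = np.diag([2.0, 1.0])
f2_star = np.diag([1.0, 3.0])
print(dyn_degree_estimate([f1_star, f2_star], 6))
```

For non-commuting actions no such closed form is available, and the brute-force maximum over $\mathcal{F}_n$ is precisely what the $\lim\sup$ in the definition of $\delta_{\mathcal{F}}$ controls.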
We are able to extend Theorem 1 of [12], showing explicitly how the dynamical degree of a system with several maps offers a uniform upper bound for heights of iterates of points in orbits, when $K$ is a number field or a one-variable function field. Precisely, for every $\epsilon >0$,
there exists a positive constant $C=C(X, h_X, \mathcal{F}, \epsilon)$ such that for all $P \in X_{\mathcal{F}}(\bar{K})$ and all $n \geq 0$,
\begin{center}
$\sum_{f \in \mathcal{F}_n} h^+_X(f(P)) \leq C. k^n.(\delta_{\mathcal{F}} + \epsilon)^n . h^+_X(P).$
\end{center} In particular, $h^+_X(f(P)) \leq C. k^n.(\delta_{\mathcal{F}} + \epsilon)^n . h^+_X(P)$ for all $f \in \mathcal{F}_n.$
This theorem becomes a tool to show the second main theorem of this work. As we have seen, for a pair $(X/K, f_1,...,f_k, L)$ with $k$ self-morphisms on $X$ over $K$, and $L$ a divisor satisfying a linear equivalence $\otimes^k_{i=1}f^*_i(L) \sim L^{\otimes d}$ for $d>k$, there is a well-known theory of canonical heights developed by Kawaguchi in [10]. We are now partially able to generalize this to cover the case where the relation $\otimes^k_{i=1}f^*_i(L) \equiv L^{\otimes d}$ holds only up to algebraic equivalence. We show that the limit
\begin{center}
$\hat{h}_{L,\mathcal{F}}(P)= \lim_{n \rightarrow \infty}\dfrac{1}{d^n}\sum_{ f \in \mathcal{F}_n} h_L(f(P))$
\end{center}
converges for certain eigendivisor classes relative to the algebraic relation. For $L$ ample and $K$ a number field, we obtain that:
\begin{center}
$\hat{h}_{L,\mathcal{F}}(P)=0 \iff P$ has finite $\mathcal{F}$-orbit.
\end{center}
This kind of generalization was first done for a single morphism by Y. Matsuzawa in [15], extending Call and Silverman's theory of canonical heights in [4]; in the present work we carry it out for several maps.
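To illustrate Kawaguchi's construction in the simplest possible setting, consider the toy system $f_1(x)=x^2$, $f_2(x)=x^3$ on $\mathbb{P}^1/\mathbb{Q}$ with $\mathcal{L}=\mathcal{O}(1)$, for which $f_1^*\mathcal{L}\otimes f_2^*\mathcal{L} \cong \mathcal{L}^{\otimes 5}$, i.e.~$d=5>k=2$. The Python sketch below computes the partial sums defining $\hat{h}_{L,\mathcal{F}}$; for this particular system each partial sum already equals the Weil height itself, and the canonical height vanishes at preperiodic points such as $P=1$:

```python
import math
from fractions import Fraction

def h(x):
    """Logarithmic Weil height on P^1(Q): log max(|p|, |q|) for x = p/q
    in lowest terms."""
    x = Fraction(x)
    return math.log(max(abs(x.numerator), abs(x.denominator)))

f1 = lambda x: x ** 2   # f1^* O(1) ~ O(2)
f2 = lambda x: x ** 3   # f2^* O(1) ~ O(3), hence d = 2 + 3 = 5 > k = 2

def hhat_partial(P, n):
    """Partial sum (1/d^n) * sum_{f in F_n} h_L(f(P)) for the system (f1, f2)."""
    pts = [Fraction(P)]
    for _ in range(n):
        pts = [f(Q) for Q in pts for f in (f1, f2)]
    return sum(h(Q) for Q in pts) / 5 ** n

print(hhat_partial(Fraction(3, 2), 4))   # equals h(3/2) = log 3 at every n
print(hhat_partial(1, 4))                # preperiodic point: 0.0
```

Here $h(f_{i_1}\circ\cdots\circ f_{i_n}(P))$ equals the product of the degrees times $h(P)$, and the products of degrees over all length-$n$ words sum to $5^n$, which is why the partial sums are constant; for general systems the limit only exists thanks to the telescoping argument in the text.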
\section{Notation, and first definitions}
Throughout this work, $K$ will be either a number field or a one-dimensional function field of characteristic 0. We let $\bar{K}$ be an algebraic closure of $K$. The tuple $(X, f_1,...,f_k)$ is called a dynamical system, where either $X$ is a smooth projective variety and $f_i:X \dashrightarrow X$ are dominant rational maps all defined over $K$, or $X$ is a normal projective variety and $f_i:X \rightarrow X$ are dominant morphisms.
We denote by $ h_X:X(\bar{K})\rightarrow [0, \infty)$ the absolute logarithmic Weil height function relative to an ample divisor $A$ of $X$, and for convenience we set $h_X^+(P)$ to be $ \max \{1, h_X(P)\}.$
The sets of iterates of the maps in the system are denoted by $\mathcal{F}_0=\{ $Id$\}, \mathcal{F}_1= \mathcal{F} =\{f_1,...,f_k\}$, and $\mathcal{F}_n=\{f_{i_1} \circ ... \circ f_{i_n} ; i_j =1,...,k \}$. These induce the forward $\mathcal{F}$-orbit of $P$, namely $\mathcal{O}_{\mathcal{F}}(P)=\{ f(P); f \in \bigcup_{n \in \mathbb{N}} \mathcal{F}_n \}.$ A point $P$ is called preperiodic when its $\mathcal{F}$-orbit is a finite set.
We write $ I_{f_i}$ for the indeterminacy locus of $f_i$, i.e., the set of points at which $f_i$ is not well-defined, and $ I_{\mathcal{F}}$ for $ \bigcup_{i=1}^k I_{f_i}$. We also define $ X_{\mathcal{F}}(\bar{K})$ as the set of points $P \in X(\bar{K})$ whose forward orbit is well-defined, in other words, $\mathcal{O}_{\mathcal{F}}(P) \cap I_{\mathcal{F}} = \emptyset$.
The set of Cartier divisors on $X$ is denoted by Div$(X)$, while Pic$(X)$ denotes the Picard group of $X$, and NS$(X)=\mbox{Pic}(X)/\mbox{Pic}^{0}(X)$ is called the N\'{e}ron-Severi group of $X$. Equality in this group is denoted by the symbol $\equiv$ and is called algebraic equivalence.
Given a rational map $f:X \dashrightarrow X$, the linear map induced on the tensorized N\'{e}ron-Severi group NS$(X)_{\mathbb{R}}=$ NS$(X) \otimes \mathbb{R}$ is denoted by $f^*$. So, when working with a dynamical system $(X,\mathcal{F})$, it is convenient for us to use the notation $\rho(\mathcal{F}_n):= \max_{f \in \mathcal{F}_n} \rho (f^*,$NS$(X)_{\mathbb{R}})$.
For definitions and properties about Weil height functions, we refer to [8].
Next, we define the dynamical degree of a set of rational maps on a complex variety, which, when it exists, is a measure of the geometric complexity of the iterates of the maps in the set. This is a generalization to several morphisms of the dynamical degree appearing in [12]. \newline
{\bf Definition 2.1: }
\textit{Let $X/ \mathbb{C}$ be a (smooth) projective variety and let $\mathcal{F}$ be as above. The dynamical degree of $\mathcal{F}$, when it exists, is defined by}
\begin{center}
$\delta_{\mathcal{F}}=\lim \sup_{n\rightarrow \infty}\rho(\mathcal{F}_n)^{\frac{1}{n}}$
\end{center}
In the same spirit, we also generalize the second definition in the introduction of [12], introducing now the arithmetic degree of a system of maps $\mathcal{F}$ at a point $P$. This degree measures the growth rate of the heights of the $n$-th iterates of the point under maps of the system as $n$ grows, and is thus a measure of the arithmetic complexity of $\mathcal{O}_{\mathcal{F}}(P)$.\newline
{\bf Definition 2.2: }
\textit{Let $P \in X_{\mathcal{F}}(\bar{K}).$ The arithmetic degree of $\mathcal{F}$ at $P$ is the quantity}
\begin{center}
$\alpha_{\mathcal{F}}(P)= \frac{1}{k} \lim_{n \rightarrow \infty}\{ \sum_{f \in \mathcal{F}_n} h_X^+(f(P))\}^{\frac{1}{n}} $
\end{center} \textit{assuming that the limit exists.}\newline
{\bf Definition 2.3: } \textit{When the limit fails to exist, we define the upper and the lower arithmetic degrees as}
\begin{center}
$\bar{\alpha}_{\mathcal{F}}(P)= \frac{1}{k} \lim \sup_{n \rightarrow \infty} \{ \sum_{f \in \mathcal{F}_n} h_X^+(f(P))\}^{\frac{1}{n}} $
$\underline{\alpha}_{\mathcal{F}}(P)= \frac{1}{k} \lim \inf_{n \rightarrow \infty} \{ \sum_{f \in \mathcal{F}_n} h_X^+(f(P))\}^{\frac{1}{n}} $
\end{center}
{\bf Remark 2.4: }
Let $X$ be a projective variety and $D$ a Cartier divisor. If \newline $f:X \rightarrow X $ is a surjective morphism, then $f^*D$ is a Cartier divisor. In the case where $X$ is smooth and $f:X \dashrightarrow X$ is merely a rational map, we take a smooth projective variety $\tilde{X}$ and a birational morphism $\pi: \tilde{X} \rightarrow X$ such that $\tilde{f} := f \circ \pi: \tilde{X} \rightarrow X$ is a morphism, and we define $f^*D:=\pi_*(\tilde{f}^*D).$ It is not hard to verify that this definition is independent of the choice of $\tilde{X}$ and $\pi$; this is done in section 1 of [12], for example.
\section{First properties for the arithmetic degree}
In this section we check that the upper and lower degrees defined at the end of the section above are independent of the Weil height function chosen for $X$, and so they are well defined. Some examples of these degrees are computed in this section as well. We also present and prove our first counting result for points in orbits for several maps, and state a useful elementary lemma from linear algebra.\newline
{\bf Proposition 3.1: }
\textit{The upper and lower arithmetic degrees $\bar{\alpha}_{\mathcal{F}}(P)$ and $ \underline{\alpha}_{\mathcal{F}}(P)$ are independent of the choice of the height function $h_X$.}
\begin{proof}
If the $\mathcal{F}$-orbit of $P$ is finite, then the limit $\alpha_{\mathcal{F}}(P)$ exists and is equal to 1, by the definition of such a limit, whatever the choice of $h_X$ is. So we consider the case when $P$ is not preperiodic, which allows us to replace $h_X^+$ with $h_X$ when taking limits.
Let $h$ and $h^{\prime}$ be the heights induced on $X$ by ample divisors $D$ and $D^{\prime}$ respectively, and let the respective arithmetic degrees be denoted by $\bar{\alpha}_{\mathcal{F}}(P)$, $\underline{\alpha}_{\mathcal{F}}(P)$, ${\bar{\alpha}}^{\prime}_{\mathcal{F}}(P)$, ${\underline{\alpha}}^{\prime}_{\mathcal{F}}(P)$. By the definition of ampleness, there is an integer $m$ such that $mD- D^{\prime}$ is ample, and thus the functorial properties of height functions imply the existence of a non-negative constant $C$ such that:
\begin{center}
$mh(Q) \geq h^{\prime}(Q) - C $ for all $ Q \in X(\bar{K}).$
\end{center}
We can choose a sequence of indices $\mathcal{N} \subset \mathbb{N}$ such that:
\begin{center}
$\bar{\alpha}^{\prime}_{\mathcal{F}}(P)= \frac{1}{k} \lim \sup_{n \rightarrow \infty} \{ \sum_{f \in \mathcal{F}_n} h^{\prime}(f(P))\}^{\frac{1}{n}} = \frac{1}{k} \lim_{n \in \mathcal{N}} \{ \sum_{f \in \mathcal{F}_n} h^{\prime}(f(P))\}^{\frac{1}{n}}$
\end{center}
Then
\begin{align*}
\bar{\alpha}^{\prime}_{\mathcal{F}}(P) &= \frac{1}{k} \lim_{n \in \mathcal{N}} \Big\{ \sum_{f \in \mathcal{F}_n} h^{\prime}(f(P))\Big\}^{\frac{1}{n}} \\
&\leq \frac{1}{k} \lim_{n \in \mathcal{N}} \Big\{ \sum_{f \in \mathcal{F}_n}\big( m h(f(P))+C\big) \Big\}^{\frac{1}{n}} \\
&\leq \frac{1}{k} \lim \sup_{n \rightarrow \infty} \Big\{ \sum_{f \in \mathcal{F}_n}\big( m h(f(P))+C \big)\Big\}^{\frac{1}{n}} \\
&= \frac{1}{k} \lim \sup_{n \rightarrow \infty} \Big\{ m\Big(\sum_{f \in \mathcal{F}_n} h(f(P))\Big)+Ck^n \Big\}^{\frac{1}{n}} \\
&= \frac{1}{k} \lim \sup_{n \rightarrow \infty} \Big\{ \sum_{f \in \mathcal{F}_n} h(f(P))\Big\}^{\frac{1}{n}} = \bar{\alpha}_{\mathcal{F}}(P).
\end{align*}
This proves the inequality for the upper arithmetic degrees. Reversing the roles of $h$ and $h^{\prime}$ in the calculation above we also prove the opposite inequality, which demonstrates that $\bar{\alpha}_{\mathcal{F}}(P)={\bar{\alpha}}^{\prime}_{\mathcal{F}}(P).$ In the same way we prove that $\underline{\alpha}_{\mathcal{F}}(P)={\underline{\alpha}}^{\prime}_{\mathcal{F}}(P).$
\end{proof}
Our next lemma says that points belonging to a fixed orbit have their upper and lower arithmetic degrees bounded from above by the respective arithmetic degrees of the given orbit generator point. \newline \newline
{\bf Lemma 3.2: }
\textit{Let $\mathcal{F}=\{f_1,..., f_k\}$ be a set of self-rational maps on $X$ defined over $\bar{K}$. Then, for all $P \in X_{\mathcal{F}}(\bar{K})$, all $l \geq 0$, and all $g \in \mathcal{F}_l$, }
\begin{center}
$\bar{\alpha}_{\mathcal{F}}(g(P)) \leq \bar{\alpha}_{\mathcal{F}}(P)$ and ~$\underline{\alpha}_{\mathcal{F}}(g(P)) \leq \underline{\alpha}_{\mathcal{F}}(P)$
\end{center}
\begin{proof} We calculate
\begin{align*}
\bar{\alpha}_{\mathcal{F}}(g(P)) &= \frac{1}{k} \lim \sup_{n \rightarrow \infty} \Big\{ \sum_{f \in \mathcal{F}_n} h_X^+(f(g(P)))\Big\}^{\frac{1}{n}} \\
&= \frac{1}{k} \lim \sup_{n \rightarrow \infty} \Big\{ \sum_{f \in \mathcal{F}_n , g^{\prime} \in \mathcal{F}_l } h_X^+(f(g^{\prime}(P))) - \sum_{f \in \mathcal{F}_n , g^{\prime} \in \mathcal{F}_l -\{g\}} h_X^+(f(g^{\prime}(P)))\Big\}^{\frac{1}{n}} \\
&\leq \frac{1}{k} \lim \sup_{n \rightarrow \infty} \Big\{ \Big[\sum_{f \in \mathcal{F}_{n+l}} h_X^+(f(P))\Big]+ O(1)\cdot k^{n+l}\Big\}^{\frac{1}{n}} \\
&= \frac{1}{k} \lim \sup_{n \rightarrow \infty} \Big\{ \sum_{f \in \mathcal{F}_{n+l}} h_X^+(f(P))\Big\}^{\frac{1}{n}} \\
&= \frac{1}{k} \lim \sup_{n \rightarrow \infty} \Big\{ \sum_{f \in \mathcal{F}_{n+l}} h_X^+(f(P))\Big\}^{\frac{1}{n+l} \cdot (1+ \frac{l}{n})} \\
&= \frac{1}{k} \lim \sup_{n \rightarrow \infty} \Big\{ \sum_{f \in \mathcal{F}_{n+l}} h_X^+(f(P))\Big\}^{\frac{1}{n+l}} = \bar{\alpha}_{\mathcal{F}}(P).
\end{align*}
The proof for $\underline{\alpha}_{\mathcal{F}}(P)$ is similar.
\end{proof}
\newpage Here are some examples: \newline
$\bf{Example \: 3.3}$: Let $S$ be a $K3$ surface in $\mathbb{P}^2 \times \mathbb{P}^2$ given by the intersection of two hypersurfaces of bidegrees (1,1) and (2,2) over $\overline{\mathbb{Q}}$, and assume that NS$(S) \cong \mathbb{Z}^2,$ generated by $L_i:= p_i^*O_{\mathbb{P}^2}(1), i=1,2$, where $p_i : S \rightarrow \mathbb{P}^2$ is the projection to the $i$-th factor for $i=1,2.$ These induce noncommuting involutions $\sigma_1, \sigma_2 \in $ Aut$(S)$. By [19, Lemma 2.1], we have \begin{center} $\sigma_i^*L_i \cong L_i, \sigma_i^*L_j \cong 4L_i - L_j,$ for $ i \neq j.$\end{center} The line bundle $L:= L_1 + L_2$ is ample on $S$ and satisfies $\sigma_1^*L + \sigma^*_2L \cong 4L$, and thus $h:= \hat{h}_{L, \{ \sigma_1, \sigma_2\}}$ exists on $S(\overline{\mathbb{Q}})$ by [10, theorem 1.2.1]. Noting that
$$ \sigma_1^* \sim \begin{bmatrix}
1 & 4 \\
0 & -1
\end{bmatrix},
\sigma_2^* \sim \begin{bmatrix}
-1 & 0 \\
4 & 1
\end{bmatrix},
(\sigma_1 \circ \sigma_2)^* \sim \begin{bmatrix}
-1 & -4 \\
4 & 15
\end{bmatrix},
(\sigma_2 \circ \sigma_1)^* \sim \begin{bmatrix}
15 & 4 \\
-4 & -1
\end{bmatrix} , $$
$$
(\sigma_1 \circ \sigma_2 \circ \sigma_1)^* \sim \begin{bmatrix}
15 & 56 \\
-4 & -15
\end{bmatrix} ,
(\sigma_2 \circ \sigma_1 \circ \sigma_2)^* \sim \begin{bmatrix}
-15 & -4 \\
56 & 15
\end{bmatrix},
$$ we calculate that \begin{center}$\rho(\sigma_1^*)= 1, \rho( \sigma_2^*)= 1, \rho((\sigma_1 \circ \sigma_2 )^*)= 7 + 4 \sqrt{3}, \newline \rho(( \sigma_2 \circ \sigma_1)^*) = 7 + 4 \sqrt{3}, \rho((\sigma_1 \circ \sigma_2 \circ \sigma_1)^*)= 1, \rho((\sigma_2 \circ \sigma_1 \circ \sigma_2)^*)=1.$ \end{center} Since the alternating words of even length $n$ realize the spectral radius $(7+4\sqrt{3})^{n/2}$ and $(7+4\sqrt{3})^{1/2}=2+\sqrt{3}$, this gives that $\delta_{\{\sigma_1, \sigma_2\}}= 2 + \sqrt{3}. $ Furthermore, since $h$ is a Weil height with respect to an ample divisor,
\begin{center}$\alpha_{\{\sigma_1, \sigma_2\}}(P) = \frac{1}{2} \cdot \lim_{n \rightarrow \infty } \Big[\sum_{f \in \{ \sigma_1, \sigma_2 \}_n} h(f(P))\Big]^{\frac{1}{n}}=\frac{1}{2}\cdot \lim_{n \rightarrow \infty }\big[4^n\cdot h(P)\big]^{ \frac{1}{n}}=2 $ \end{center} for all $P \in S(\bar{\mathbb{Q}})$ non-preperiodic, i.e., such that $h(P) \neq 0.$
Observe that in this case $\bar{\alpha}_{\{\sigma_1, \sigma_2\}}(P)=2 \leq 2 + \sqrt{3}= \delta_{\{\sigma_1, \sigma_2\}},$ an inequality that we will prove in Corollary 1.16 to hold under our general hypotheses. \newline
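These values can be double-checked numerically; the following pure-Python sketch (the helper names are ours, not from the text) recovers the dynamical degree $2+\sqrt{3}$ from the pullback matrices displayed above:

```python
import math

# Pullback matrices of sigma_1, sigma_2 on NS(S) = Z^2 (basis L1, L2).
SIG1 = [[1, 4], [0, -1]]
SIG2 = [[-1, 0], [4, 1]]

def mat_mul(A, B):
    """Product of two 2x2 matrices given as nested lists."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def rho(M):
    """Spectral radius of a 2x2 matrix via its characteristic polynomial."""
    tr = M[0][0] + M[1][1]
    det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    disc = tr * tr - 4 * det
    if disc >= 0:  # real eigenvalues (tr +- sqrt(disc)) / 2
        s = math.sqrt(disc)
        return max(abs(tr + s), abs(tr - s)) / 2
    return math.sqrt(det)  # complex conjugate pair: |lambda|^2 = det

# (sigma_1 . sigma_2)^* = sigma_2^* sigma_1^* = [[-1, -4], [4, 15]].
S12 = mat_mul(SIG2, SIG1)
delta = math.sqrt(rho(S12))  # rho = 7 + 4*sqrt(3), so delta = 2 + sqrt(3)
```

The check is purely illustrative and plays no role in the arguments of the paper.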
$\bf{Example \: 3.4}$: Let $S$ be a $K3$ surface in $\mathbb{P}^2 \times \mathbb{P}^2$, as in example 1.4.5 of [10], given by the intersection of two hypersurfaces of bidegrees (1,2) and (2,1) over $\overline{\mathbb{Q}}$, and assume that NS$(S) \cong \mathbb{Z}^2,$ generated by $L_i:= p_i^*O_{\mathbb{P}^2}(1), i=1,2$, where $p_i : S \rightarrow \mathbb{P}^2$ is the projection to the $i$-th factor for $i=1,2.$ These induce noncommuting involutions $\sigma_1, \sigma_2 \in $ Aut$(S)$. By similar computations we have $\sigma_i^*L_i \cong L_i, \sigma_i^*L_j \cong 5L_i - L_j,$ for $ i \neq j.$ The line bundle $L:= L_1 + L_2$ is ample on $S$ and satisfies $\sigma_1^*L + \sigma^*_2L \cong 5L$, and thus $h:= \hat{h}_{L, \{ \sigma_1, \sigma_2\}}$ exists on $S(\overline{\mathbb{Q}})$ by [10, theorem 1.2.1]. Proceeding in the same way as in the previous example, for non-preperiodic $P$ we have that \begin{center} $\bar{\alpha}_{\{\sigma_1, \sigma_2\}}(P) =5/2 \leq \sqrt{\dfrac{23 + 5 \sqrt{21}}{2}}={\delta}_{\{\sigma_1, \sigma_2\}}.$ \end{center}
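A matching quick numerical check (helper names ours; matrices built from the stated relations, as a sketch rather than part of the argument):

```python
import math

# sigma_1^*: L1 -> L1, L2 -> 5 L1 - L2 ; sigma_2^*: L2 -> L2, L1 -> 5 L2 - L1
# (columns are the images of L1, L2, following the relations above).
SIG1 = [[1, 5], [0, -1]]
SIG2 = [[-1, 0], [5, 1]]

def rho2(M):
    """Spectral radius of a 2x2 matrix via its characteristic polynomial."""
    tr = M[0][0] + M[1][1]
    det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    s = math.sqrt(tr * tr - 4 * det)  # eigenvalues are real here
    return max(abs(tr + s), abs(tr - s)) / 2

# (sigma_1 . sigma_2)^* = sigma_2^* sigma_1^*
S12 = [[sum(SIG2[i][k] * SIG1[k][j] for k in range(2)) for j in range(2)]
       for i in range(2)]
delta = math.sqrt(rho2(S12))  # sqrt((23 + 5*sqrt(21)) / 2), about 4.79
```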
$\bf{Example \: 3.5}$: Let $S$ be a hypersurface of tridegree (2,2,2) in $\mathbb{P}^1 \times \mathbb{P}^1 \times \mathbb{P}^1$ over $\overline{\mathbb{Q}}$, as in example 1.4.6 of [10]. For $i=1,2,3,$ let $p_i:S \rightarrow \mathbb{P}^1 \times \mathbb{P}^1$ be the projection to the $(j,k)-$th factor with $\{i,j,k\}= \{1,2,3\}.$ Since $p_i$ is a double cover, it gives an involution $\sigma_i \in $ Aut$(S).$ Let also $q_i:S \rightarrow \mathbb{P}^1$ be the projection to the $i-$th factor, set $L_i := q_i^* O_{\mathbb{P}^1}(1)$ and $L:= L_1 + L_2 + L_3$, which is ample, and assume that NS$(S) = \langle L_1,L_2,L_3 \rangle \cong \mathbb{Z}^3.$ By similar computations as above we have
\begin{center}
$ \sigma_i^*(L_i) \cong -L_i +2L_j + 2 L_k $ for $ \{i,j,k\}= \{ 1,2,3\}\newline
\sigma_j^*(L_i) \cong L_i$ for $i \neq j.$
\end{center} Then $\sigma_1^*L + \sigma_2^*L + \sigma_3^*L \cong 5L,$ which gives us the existence of $h := \hat{h}_{L, \{ \sigma_1, \sigma_2, \sigma_3\}}$ by [10, theorem 1.2.1]. We note that if $h(P) \neq 0$, then a similar computation as in the previous examples yields $\alpha_{\{ \sigma_1,\sigma_2,\sigma_3\}}(P)= 5/3 $. Meanwhile, we can also calculate that:
$$
(\sigma_3 \circ \sigma_2 \circ \sigma_1)^* = \sigma_1^* \sigma_2^* \sigma_3^* \sim \begin{bmatrix}
-1 & -2 & -6 \\
2 & 3 & 10 \\
2 & 6 & 15
\end{bmatrix}
$$ with largest eigenvalue $\rho( (\sigma_3 \circ \sigma_2 \circ \sigma_1)^*) = 9 + 4\sqrt{5} \approx 17.944$. As $(9+4\sqrt{5})^{1/3} \approx 2.618$, we have that $\delta_{\{\sigma_1,\sigma_2,\sigma_3\}} \geq 2.61 > 5/3=\alpha_{\{ \sigma_1,\sigma_2,\sigma_3\}}(P).$ \newline
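As a numerical sanity check (not part of the argument; all helper names are ours), one can rebuild the pullback matrices directly from the displayed intersection relations and estimate the spectral radius of the composite by power iteration, in pure Python:

```python
import math

def mat_mul(A, B):
    """Product of two square matrices given as nested lists."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def power_iteration(M, iters=300):
    """Estimate the spectral radius via repeated application of M."""
    v = [1.0] * len(M)
    lam = 1.0
    for _ in range(iters):
        w = [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]
        lam = max(abs(x) for x in w)
        v = [x / lam for x in w]
    return lam

# Columns are the images of L1, L2, L3 under sigma_i^*:
# sigma_i^*(L_i) = -L_i + 2 L_j + 2 L_k and sigma_i^*(L_j) = L_j for j != i.
SIG1 = [[-1, 0, 0], [2, 1, 0], [2, 0, 1]]
SIG2 = [[1, 2, 0], [0, -1, 0], [0, 2, 1]]
SIG3 = [[1, 0, 2], [0, 1, 2], [0, 0, -1]]

# (sigma_3 . sigma_2 . sigma_1)^* = sigma_1^* sigma_2^* sigma_3^*.
M = mat_mul(SIG1, mat_mul(SIG2, SIG3))
rho = power_iteration(M)  # expected: 9 + 4*sqrt(5), about 17.944
```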
$\bf{Example \: 3.6}$: Let $A$ be an abelian variety over $\bar{\mathbb{Q}}$, and let $L$ be a symmetric ample line bundle on $A$. Let $f= (F_0:...:F_N): \mathbb{P}^N \rightarrow \mathbb{P}^N$ be a morphism defined by homogeneous polynomials $F_0,..., F_N$ of the same degree $d >1$ such that $0$ is the only common zero of $F_0,..., F_N.$ Set $X= A \times \mathbb{P}^N, g_1=[2] \times \mbox{id}_{\mathbb{P}^N},$ and $g_2= \mbox{id}_A \times f.$ Put $M:= p_1^*L \otimes p_2^* O_{\mathbb{P}^N}(1),$ where $p_1$ and $p_2$ are the obvious projections. Then \begin{center}
$\stackrel{(d-1) ~ \text{times}}{\overbrace{g_1^*(M) \otimes... \otimes g_1^*(M)}} \otimes g_2^*(M) \otimes g_2^*(M) \otimes g_2^*(M) \cong M^{\otimes (4d-1)}. $
\end{center} This gives us that a canonical height $h:= \hat{h}_{\{g_1,...,g_1,g_2,g_2,g_2\}}$ exists by [10, theorem 1.2.1]. Again, if $h(P) \neq 0$, then $ \alpha_{\{g_1,...,g_1,g_2,g_2,g_2\}}(P)=\dfrac{4d-1}{d+2}$, and we can also see that $\delta_{\{g_1,...,g_1,g_2,g_2,g_2\}}= \max \{\delta_f, \delta_{[2]} \}= \max \{d, 4 \}$, which again leads to the same conclusion as in the previous examples, since $\dfrac{4d-1}{d+2} <\max \{d, 4 \}.$ \newline
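The closing inequality can be seen directly: $\frac{4d-1}{d+2} < 4$ for every $d$, while $d - \frac{4d-1}{d+2} = \frac{(d-1)^2}{d+2} > 0$ for $d > 1$. A throwaway finite-range check (names ours, purely illustrative):

```python
# Finite-range check of (4d - 1)/(d + 2) < max{d, 4} for the degrees
# d >= 2 appearing in Example 3.6.
def alpha(d):
    """Arithmetic degree (4d - 1)/(d + 2) of the system in Example 3.6."""
    return (4 * d - 1) / (d + 2)

gaps = {d: max(d, 4) - alpha(d) for d in range(2, 101)}  # all positive
```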
The next proposition is a result counting orbit points in the case of a system possibly with several maps. It describes the growth of the height counting function of the orbit of $P$, as given below.\newline
{\bf Proposition 3.7: }
\textit{Let $ P \in X_{\mathcal{F}}(\bar{K})$ be a point whose $\mathcal{F}$-orbit is infinite, and such that the arithmetic degree $\alpha_{\mathcal{F}}(P)$ exists. Then}
\begin{center}
$\lim_{B \rightarrow \infty} \dfrac{ \# \{ n \geq 0 ; \sum_{f \in \mathcal{F}_n} h_X(f(P)) \leq B\}}{ \log B}= \dfrac{1}{ \log(k. \alpha_{\mathcal{F}}(P))} $
\end{center} \textit{and in particular,}
\begin{center}
$\lim \inf _{B \rightarrow \infty} (\# \{Q \in \mathcal{O}_{\mathcal{F}}(P); h_X(Q) \leq B \})^{\frac{1}{\log B}} \geq k^{ \frac{1}{ \log(k. \alpha_{\mathcal{F}}(P))}}$
\end{center}
\begin{proof}
Since $\# \mathcal{O}_{\mathcal{F}}(P)= \infty$, it suffices to prove the same claim with $h_X^+$ in place of $h_X.$ For each $\epsilon >0$, there exists an $n_0(\epsilon)$ such that
\begin{center}
$(1- \epsilon) \alpha_{\mathcal{F}}(P) \leq \dfrac{1}{k} (\sum_{f \in \mathcal{F}_n} h^+_X(f(P)))^{\frac{1}{n}} \leq (1 + \epsilon) \alpha_{\mathcal{F}}(P) $ for all $n \geq n_0(\epsilon).$
\end{center} It follows that
\begin{center}
$\{ n \geq n_0(\epsilon): (1 + \epsilon) \alpha_{\mathcal{F}}(P) \leq \dfrac{B^{\frac{1}{n}}}{k} \} \subset \{ n \geq n_0(\epsilon):\sum_{f \in \mathcal{F}_n} h^+_X(f(P)) \leq B \}$
\end{center} and
\begin{center}
$ \{ n \geq n_0(\epsilon):\sum_{f \in \mathcal{F}_n} h^+_X(f(P)) \leq B \} \subset \{ n \geq n_0(\epsilon): (1 - \epsilon) \alpha_{\mathcal{F}}(P) \leq \dfrac{B^{\frac{1}{n}}}{k} \}$
\end{center}
Counting the number of elements in these sets yields
\begin{center}
$\dfrac{ \log B}{ \log (k (1 + \epsilon) \alpha_{\mathcal{F}}(P))} - n_0(\epsilon) -1 \leq \# \{ n \geq 0 : \sum_{f \in \mathcal{F}_n} h^+_X(f(P)) \leq B \}$
\end{center}
and
\begin{center}
$ \# \{ n \geq 0 : \sum_{f \in \mathcal{F}_n} h^+_X(f(P)) \leq B \} \leq \dfrac{ \log B}{ \log (k (1 - \epsilon) \alpha_{\mathcal{F}}(P))} + n_0(\epsilon) +1 $
\end{center}
Dividing by $\log B$ and letting $B \rightarrow \infty$ gives
\begin{center}
$ \dfrac{ 1}{ \log (k (1 + \epsilon) \alpha_{\mathcal{F}}(P))} \leq \lim \inf_{ B \rightarrow \infty} \dfrac{\# \{ n \geq 0 : \sum_{f \in \mathcal{F}_n} h^+_X(f(P)) \leq B \} }{ \log B} $
\end{center} and
\begin{center}
$ \lim \sup_{ B \rightarrow \infty} \dfrac{\# \{ n \geq 0 : \sum_{f \in \mathcal{F}_n} h^+_X(f(P)) \leq B \} }{ \log B} \leq \dfrac{ 1}{ \log (k (1 - \epsilon) \alpha_{\mathcal{F}}(P))}$
\end{center}
Since the choice of $\epsilon$ is arbitrary, and the $ \lim \inf$ is less than or equal to the $\lim \sup$, this finishes the proof that
\begin{center}
$ \lim_{ B \rightarrow \infty} \dfrac{\# \{ n \geq 0 : \sum_{f \in \mathcal{F}_n} h^+_X(f(P)) \leq B \} }{ \log B}= \dfrac{ 1}{ \log (k .\alpha_{\mathcal{F}}(P))}$
\end{center} Moreover, we also have that
\begin{center}
$\{ n \geq 0 : \sum_{f \in \mathcal{F}_n} h^+_X(f(P)) \leq B \} \subset \{ n \geq 0 : h^+_X(f(P)) \leq B $ for all $ f \in \mathcal{F}_n \}$
\end{center} and thus
\begin{center}
$\dfrac{ \log B}{ \log (k (1 + \epsilon) \alpha_{\mathcal{F}}(P))} - n_0(\epsilon) -1 \leq \# \{ n \geq 0 : h^+_X(f(P)) \leq B $ for all $ f \in \mathcal{F}_n \}$
\end{center} This implies that
\begin{center}
$\dfrac{ k^{\frac{ \log B}{ \log (k (1 + \epsilon) \alpha_{\mathcal{F}}(P))} - n_0(\epsilon)} -1}{k-1} \leq \# \{Q \in \mathcal{O}_{\mathcal{F}}(P); h_X^+(Q) \leq B \}$
\end{center}
Taking $\frac{1}{\log B}$-roots and letting $B \rightarrow \infty$ gives
\begin{center}
$ k^{ \frac{1}{ \log (k. \alpha_{\mathcal{F}}(P))}} \leq \lim \inf _{B \rightarrow \infty} (\# \{Q \in \mathcal{O}_{\mathcal{F}}(P); h_X^+(Q) \leq B \})^{\frac{1}{\log B}}.$
\end{center}
\end{proof}
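The counting statement can be watched on a toy model (made-up sample data, not part of the proof): in Example 3.3 one has exactly $\sum_{f \in \mathcal{F}_n} h(f(P)) = 4^n h(P)$, with $k=2$ and $\alpha_{\mathcal{F}}(P)=2$, so the counting function should grow like $\log B/\log 4$:

```python
import math

# Toy model based on Example 3.3: sum_{f in F_n} h(f(P)) = 4^n * h(P)
# exactly, with k = 2 and alpha = 2.  The sample value h0 = 1.0 is made up.
def count(B, h0=1.0, k_alpha=4.0):
    """Number of n >= 0 with sum_{f in F_n} h(f(P)) = k_alpha^n * h0 <= B."""
    n, total = 0, h0
    while total <= B:
        n += 1
        total = (k_alpha ** n) * h0
    return n

B = 10.0 ** 30
ratio = count(B) / math.log(B)  # Proposition 3.7 predicts 1/log 4, about 0.7213
```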
We finish this section by stating the following elementary lemma from linear algebra. This lemma will be useful in the following sections. \newline
{\bf Lemma 3.8:} \textit{Let $A=(a_{ij}) \in M_r( \mathbb{C})$ be an $r$-by-$r$ matrix. Let $||A||=\max |a_{ij}|$, and let $\rho (A)$ denote the spectral radius of $A$. Then there are constants $c_1$ and $c_2$, depending on $A$, such that}
\begin{center}
$c_1\rho (A)^n \leq ||A^n|| \leq c_2 n^r \rho (A)^n$ for all $n \geq 0.$
\end{center}
\textit{In particular, we have $\rho (A) = \lim_{n \rightarrow \infty} ||A^n||^{ \frac{1}{n}}$. }
\begin{proof}
See [12, lemma 14]
\end{proof}
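A small numerical illustration of both bounds in the lemma (our sketch, with an assumed test matrix): for the Jordan block $A$ below, $\rho(A)=2$ while $||A^n|| = n\,2^{n-1}$, so the polynomial factor $n^r$ in the upper bound is genuinely needed, yet $||A^n||^{1/n}$ still tends to $\rho(A)$:

```python
# Illustration of Lemma 3.8 on the 2x2 Jordan block A = [[2, 1], [0, 2]]:
# rho(A) = 2, while ||A^n|| = n * 2^(n-1) grows with an extra linear factor.
def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def sup_norm_power(A, n):
    """Sup norm max|a_ij| of A^n, computed with exact integer arithmetic."""
    M = [[1, 0], [0, 1]]
    for _ in range(n):
        M = mat_mul(M, A)
    return max(abs(e) for row in M for e in row)

A = [[2, 1], [0, 2]]
norms = {n: sup_norm_power(A, n) for n in (2, 10, 200)}
gelfand = norms[200] ** (1 / 200)  # slowly approaching rho(A) = 2
```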
\section{Some divisor and height inequalities for rational maps}
We let $h,g :X \dashrightarrow X$ be rational maps, and $ f \in \mathcal{F}_n$ for $\mathcal{F}=\{f_1,...,f_k\}$ a dynamical system of self-rational maps on $X$. The main aim of this section is to prove the next result below. It states that the action of $f \in \mathcal{F}_n$ on the vector space NS$(X)_{\mathbb{R}}$ is related to the actions of the maps $f_1,...,f_k$ by certain inequalities. This result guarantees, for instance, that the dynamical degree exists, and it will also be important later in order to state and prove that $h^+_X(f(P)) \leq O(1)\cdot k^n\cdot(\delta_{\mathcal{F}} + \epsilon)^n h^+_X(P)$ for all $f \in \mathcal{F}_n$.
{\bf Proposition 4.1:}
\textit{Let $X$ be a smooth projective variety, and fix a basis $D_1,..., D_r$ for the vector space NS$(X)_{\mathbb{R}}$. A dominant rational map $h: X \dashrightarrow X$ induces a linear map on NS$(X)_{\mathbb{R}}$, and we write}
\begin{center}
$h^*D_j \equiv \sum_{i=1}^r a_{ij}(h)D_i$ \textit{and} $A(h)=(a_{ij}(h)) \in M_r(\mathbb{R}).$
\end{center}
\textit{We let $||.||$ denote the sup norm on $M_r(\mathbb{R}).$ Then there is a constant $C \geq 1$ depending on $D_1,...,D_r$ such that for any dominant rational maps $h,g :X \dashrightarrow X,$ any $n \geq 1$, and any $ f \in \mathcal{F}_n$ we have}
\begin{center}
$\quad \quad \quad|| A( g \circ h)|| \leq C ||A(g)|| . ||A(h)||$ \newline
$||A(f)|| \leq C.(r . \max_{i=1,...,k.}||A(f_i)||)^n.$
\end{center}
The proof of this result will be given in the sequel. An immediate corollary is the convergence of the limit defining the dynamical degree. \newline
{\bf Corollary 4.2:}
\textit{The limit superior $\delta_{\mathcal{F}}=\lim \sup_{n\rightarrow \infty}\rho(\mathcal{F}_n)^{\frac{1}{n}}$ exists, i.e., it is finite.}
\begin{proof}
With notation as in the statement of proposition 4.1, we have
\begin{center}
$\rho(\mathcal{F}_n)= \max_{f \in \mathcal{F}_n} \rho (f^*,$NS$(X)_{\mathbb{R}})= \max_{f \in \mathcal{F}_n} \rho (A(f))$
\end{center}
Denoting $||A(\mathcal{F}_n)|| := \max_{f \in \mathcal{F}_n}||A(f)||$ for each $n$, proposition 4.1 gives us that
\begin{center}
$\log ||A(\mathcal{F}_{n+m})|| \leq \log ||A(\mathcal{F}_{m})|| + \log ||A(\mathcal{F}_{n})||+ O(1)$
\end{center} Using this subadditivity estimate (after absorbing the $O(1)$ constant into the sequence), we can see that $\dfrac{1}{n}\log ||A(\mathcal{F}_n)||$ converges. Indeed, if a sequence $(d_n)_{n \in \mathbb{N}}$ of nonnegative real numbers satisfies $d_{i+j} \leq d_i + d_j$, then after fixing an integer $m$ and writing $n=mq+r$ with $0 \leq r \leq m-1,$ we have
\begin{center}
$\dfrac{d_n}{n} = \dfrac{d_{mq+r}}{n} \leq \dfrac{(qd_m+d_r)}{n}= \dfrac{d_m}{m} \dfrac{1}{(1+ r/mq)} + \dfrac{d_r}{n} \leq \dfrac{d_m}{m} + \dfrac{d_r}{n}.$
\end{center} Now take the limsup as $n \rightarrow \infty$, keeping in mind that $m$ is fixed and \newline $ r \leq m-1$, so $d_r$ is bounded. This gives
\begin{center}
$\lim \sup_{n \rightarrow \infty} \dfrac{d_n}{n} \leq \dfrac{d_m}{m}.$
\end{center} Taking the infimum over $m$ shows that
\begin{center}
$\lim \sup_{ n \rightarrow \infty} \dfrac{d_n}{n} \leq \inf_{ m \geq 1} \dfrac{d_m}{m} \leq \lim \inf_{ m \rightarrow \infty} \dfrac{d_m}{m} ,$
\end{center} and hence all three quantities must be equal.
As the sequence $(||A(\mathcal{F}_{n})||^{1/n})_{n \in \mathbb{N}}$ is convergent and therefore bounded, lemma 3.8 guarantees that the sequence $(\rho(\mathcal{F}_n)^{1/n})_{n \in \mathbb{N}}$ is bounded as well.
\end{proof}
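The convergence proved above can be observed numerically on the system of Example 3.3 (a brute-force sketch of ours, enumerating all $k^n$ words of length $n$; the limit there is $2+\sqrt{3} \approx 3.732$):

```python
import math
from itertools import product

# Pullback matrices of sigma_1, sigma_2 from Example 3.3 on NS(S) = Z^2.
SIG1 = [[1, 4], [0, -1]]
SIG2 = [[-1, 0], [4, 1]]

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def norm_root(n):
    """||A(F_n)||^(1/n): max sup norm over all 2^n words of length n."""
    best = 0
    for word in product((SIG1, SIG2), repeat=n):
        M = [[1, 0], [0, 1]]
        for g in word:
            M = mat_mul(M, g)
        best = max(best, max(abs(e) for row in M for e in row))
    return best ** (1 / n)

seq = {n: norm_root(n) for n in (4, 8, 12)}  # decreasing toward 2 + sqrt(3)
```

The maximal norm at each length is attained by the alternating words, consistent with the computation of $\delta_{\{\sigma_1,\sigma_2\}}$ in Example 3.3.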
We also conjecture that the limit $\lim_{n\rightarrow \infty}\rho(\mathcal{F}_n)^{\frac{1}{n}}$ exists and is a birational invariant. The proof for dynamical degrees of systems with only one map given in [6, prop. 1.2] should extend naturally to our present definition of degree with several maps. In the mentioned article, the dynamical degree is first defined using currents, and afterwards this definition is proved to coincide with the one using the limit of roots of spectral radii. Such a result can be worked out in a future paper. Thus, from now on, we assume that \begin{center} $\delta_{\mathcal{F}}:=\lim_{n\rightarrow \infty}\rho(\mathcal{F}_n)^{\frac{1}{n}}$, \end{center} and that this limit exists.
We start the proof of proposition 4.1 by stating the following auxiliary proposition and lemmas, whose proofs can be found in [12]:\newline
{\bf Proposition 4.3:}
\textit{Let $X^{(0)}, X^{(1)}, ..., X^{(m)}$ be smooth projective varieties of the same dimension $N$, and let $f^{(i)}: X^{(i)} \dashrightarrow X^{(i-1)}$ be dominant rational maps for $1 \leq i \leq m.$ Let $D$ be a nef divisor on $X^{(0)}$. Then for any nef divisor $H$ on $X^{(m)}$, we have}
\begin{center}
$(f^{(1)} \circ f^{(2)} \circ ... \circ f^{(m)})^*D . H ^{N-1} \leq (f^{(m)})^*... (f^{(2)})^* (f^{(1)})^*D.H^{N-1}.$
\end{center}
\begin{proof} See [12, Prop. 17] \end{proof}
For the lemmas, we need to set the following notation: \newline \begin{itemize}
\item $N:$ The dimension of $X$, which we assume is at least 2. \newline
\item $\mbox{Amp}(X)$: The ample cone in NS$(X)_{\mathbb{R}}$ of all ample $\mathbb{R}-$divisors. \newline
\item $\mbox{Nef}(X)$: The nef cone in NS$(X)_{\mathbb{R}}$ of all nef $\mathbb{R}-$divisors. \newline
\item $\mbox{Eff}(X)$: The effective cone in NS$(X)_{\mathbb{R}}$ of all effective $\mathbb{R}-$divisors. \newline
\item $\overline{\mbox{Eff}}(X): $ The closure of Eff$(X)$ in NS$(X)_{\mathbb{R}}$. \newline
\end{itemize}
As described in [5, section 1.4], we have the facts
\begin{center}
Nef$(X)=\overline{\mbox{Amp}}(X)$ and Amp$(X)=$ int$($Nef$(X)).$
\end{center} In particular, since Amp$(X) \subset $ Eff$(X)$, it follows that Nef$(X) \subset \overline{\mbox{Eff}}(X).$\newline
{\bf Lemma 4.4:} \textit{With notation as above, let $D \in \overline{\mbox{Eff}}(X) - \{0\}$ and $H \in $ Amp$(X).$ Then }
\begin{center}
$D.H^{N-1} > 0.$
\end{center}
\begin{proof}
See [12, lemma 18]
\end{proof} \newpage
{\bf Lemma 4.5:}
\textit{Let $H \in $ Amp$(X)$, and fix some norm $|.|$ on the $\mathbb{R}-$vector space NS$(X)_{\mathbb{R}}$. Then there are constants $C_1, C_2 > 0$ such that}
\begin{center}
$C_1|v| \leq v.H^{N-1} \leq C_2|v|$ for all $v \in \overline{\mbox{Eff}}(X).$
\end{center}
\begin{proof}
See [12, lemma 19]
\end{proof}
Now we start the proof of proposition 4.1. We fix a norm $|.|$ on the $\mathbb{R}-$vector space NS$(X)_{\mathbb{R}}$ as before. Additionally, for any $A:$ NS$(X)_{\mathbb{R}} \rightarrow $ NS$(X)_{\mathbb{R}}$ linear transformation, we set
\begin{center}
$||A||^{\prime} = \sup_{ v \in \mbox{Nef}(X) - \{0\}} \dfrac{|Av|}{|v|},$
\end{center} which is finite because the set $\mbox{Nef}(X) \cap \{ w \in \mbox{NS}(X)_{\mathbb{R}} : |w| = 1 \}$ is compact. \newline
We note that for linear maps $A,B \in $ End(NS$(X)_{\mathbb{R}})$ and $c \in \mathbb{R}$ we have
\begin{center}
$|| A + B||^{\prime} \leq ||A||^{\prime} + ||B||^{\prime}$ and $||cA||^{\prime}=|c|\,||A||^{\prime}.$
\end{center}
Further, since Nef$(X)$ generates NS$(X)_{\mathbb{R}}$ as an $\mathbb{R}-$vector space, we have $||A||^{\prime}=0$ if and only if $A=0.$ Thus $||.||^{\prime}$ is an $\mathbb{R}-$norm on End(NS$(X)_{\mathbb{R}}).$ \newline
Similarly, for any linear map $A:$ NS$(X)_{\mathbb{R}} \rightarrow $NS$(X)_{\mathbb{R}},$ we set
\begin{center}
$||A||^{ \prime \prime}= \sup_{ w \in \overline{\mbox{Eff}}(X) - \{0\}} \dfrac{|Aw|}{|w|},$
\end{center} then $||.||^{ \prime \prime}$ is an $\mathbb{R}-$norm on End(NS$(X)_{\mathbb{R}}).$
We note that $ \overline{\mbox{Eff}}(X)$ is preserved by $f^*$ for any dominant self-rational map $f$ on $X$, and that Nef$(X) \subset \overline{\mbox{Eff}}(X).$ Thus if $v \in $Nef$(X),$ then $ g^*v$ and $h^*v$ belong to $ \overline{\mbox{Eff}}(X).$ This allows us to compute
\begin{align*}
||(g \circ h)^*||^{\prime} &= \sup_{ v \in \mbox{Nef}(X) - \{0\}} \dfrac{|(g \circ h)^*v|}{|v|} \\
&\leq C_1^{-1}\sup_{ v \in \mbox{Nef}(X) - \{0\}} \dfrac{((g \circ h)^*v).H^{N-1}}{|v|} \quad \text{(from lemma 4.5)} \\
&\leq C_1^{-1}\sup_{ v \in \mbox{Nef}(X) - \{0\}} \dfrac{(h^* g^*v).H^{N-1}}{|v|} \quad \text{(from proposition 4.3)} \\
&= C_1^{-1}\sup_{ v \in \mbox{Nef}(X) - \{0\}, \, g^*v \neq 0} \dfrac{(h^* g^*v).H^{N-1}}{|v|} \\
&= C_1^{-1}\sup_{ v \in \mbox{Nef}(X) - \{0\}, \, g^*v \neq 0} \dfrac{(h^* g^*v).H^{N-1}}{|g^*v|} \cdot \dfrac{|g^*v|}{|v|} \\
&\leq C_1^{-1}\Big(\sup_{ v \in \mbox{Nef}(X) - \{0\}, \, g^*v \neq 0} \dfrac{(h^* g^*v).H^{N-1}}{|g^*v|}\Big) \cdot \Big(\sup_{ v \in \mbox{Nef}(X) - \{0\}} \dfrac{|g^*v|}{|v|}\Big) \\
&= C_1^{-1}\Big(\sup_{ v \in \mbox{Nef}(X) - \{0\}, \, g^*v \neq 0} \dfrac{(h^* g^*v).H^{N-1}}{|g^*v|}\Big) \cdot || g^*||^{\prime} \\
&\leq C_1^{-1}\Big(\sup_{ w \in \overline{\mbox{Eff}}(X) - \{0\}} \dfrac{ (h^*w).H^{N-1}}{|w|}\Big) \cdot || g^*||^{\prime} \quad \text{(since } g^*v \in \overline{\mbox{Eff}}(X)\text{)} \\
&\leq C_1^{-1} C_2 \Big(\sup_{ w \in \overline{\mbox{Eff}}(X) - \{0\}} \dfrac{ |h^*w|}{|w|}\Big) \cdot || g^*||^{\prime} \quad \text{(from lemma 4.5)} \\
&= C_1^{-1} C_2 \, ||h^*||^{\prime \prime} \cdot ||g^*||^{\prime}.
\end{align*}
Recall that we defined $||.||$ to be the sup norm on $M_r(\mathbb{R})=$ End$($NS$(X)_{\mathbb{R}})$, where the identification is via the given basis $D_1,..., D_r$ of NS$(X)_{\mathbb{R}}$. We thus have three norms $||.||, ||.||^{\prime}$ and $||.||^{\prime \prime}$ on End$($NS$(X)_{\mathbb{R}})$, so there are positive constants $C_3^{\prime}, C_4^{\prime}, C_3^{\prime \prime}$ and $ C_4^{\prime \prime}$ such that
\begin{center}
$C_3^{\prime}|| \gamma|| \leq || \gamma||^{\prime} \leq C_4^{\prime}|| \gamma||$ and $C_3^{\prime \prime}|| \gamma|| \leq || \gamma||^{\prime \prime} \leq C_4^{\prime \prime}|| \gamma||$ for all $\gamma \in $ End$($NS$(X)_{\mathbb{R}}).$
\end{center}
Hence
\begin{align*}
||A(g \circ h)||=||(g \circ h)^*|| &\leq C_3^{\prime -1}||(g \circ h)^*||^{\prime} \\
&\leq C_3^{\prime -1} C_1^{-1} C_2 ||h^*||^{\prime \prime} \cdot ||g^*||^{\prime} \\
&\leq C_3^{\prime -1} C_1^{-1} C_2 C_4^{\prime} C_4^{\prime \prime} ||h^*|| \cdot ||g^*|| \\
&= C_3^{\prime -1} C_1^{-1} C_2 C_4^{\prime} C_4^{\prime \prime} ||A(h)|| \cdot ||A(g)||.
\end{align*}
Similarly, if $v \in $ Nef$(X), f := f_{i_1} \circ ... \circ f_{i_n}\in \mathcal{F}_n$, then $f^*v \in \overline{\mbox{Eff}}(X).$ A similar calculation gives \newline \newline
$||f^*||^{\prime}=\sup_{ v \in \mbox{Nef}(X) - \{0\}} \dfrac{|f^*v|}{|v|} ~~~~~~~~~~~~~~~~~~~~~~~~~ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\newline \newline
\leq C_1^{-1}\sup_{ v \in \mbox{Nef}(X) - \{0\}} \dfrac{(f^*v).H^{N-1}}{|v|} $ from lemma 4.5 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\newline \newline
$= C_1^{-1}\sup_{ v \in \mbox{Nef}(X) - \{0\}} \dfrac{( f_{i_1} \circ ... \circ f_{i_n})^*v.H^{N-1}}{|v|} ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ ~~~~~~~~~~~~~~~~~\newline \newline
\leq C_1^{-1}\sup_{ v \in \mbox{Nef}(X) - \{0\}} \dfrac{((f_{i_n})^*... (f_{i_1})^*v).H^{N-1}}{|v|} $ from proposition 4.3~~~~~~~~~~~~~~~~~\newline \newline
$ \leq C_1^{-1} C_2 (\sup_{ v \in \mbox{Nef}(X) - \{0\}} \dfrac{ |(f_{i_n})^*... (f_{i_1})^*v|}{|v|}) $ from lemma 4.5 ~~~~~~~~~~~ ~~~~~~~~~~~~~~\newline \newline
$= C_1^{-1} C_2. ||(f_{i_n})^*... (f_{i_1})^*||^{\prime}.~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ ~~~ ~~~~~~~~~~~~~~~~~~~~~~~~$ \newline\newline Hence \newline \newline
\begin{eqnarray*}
||A(f)||=||f^*|| &\leq& C_3^{\prime -1}||f^*||^{\prime} \\
&\leq& C_3^{\prime -1} C_1^{-1} C_2\, ||(f_{i_n})^*... (f_{i_1})^*||^{\prime} \\
&\leq& C_3^{\prime -1} C_1^{-1} C_2 C_4^{\prime} C_4^{\prime \prime}\, ||(f_{i_n})^*... (f_{i_1})^*|| \\
&\leq& C_3^{\prime -1} C_1^{-1} C_2 C_4^{\prime} C_4^{\prime \prime}\, r^n\, ||(f_{i_n})^*||... ||(f_{i_1})^*|| \\
&\leq& C_3^{\prime -1} C_1^{-1} C_2 C_4^{\prime} C_4^{\prime \prime}\,[r \cdot \max_{i=1,...,k}||A(f_i)||]^n,
\end{eqnarray*}
as we wanted to show. \newline
As mentioned at the beginning of this section, the next proposition is a height inequality for rational maps, recorded with a view towards later applications. \newline
{\bf Proposition 4.6:}
\textit{Let $X/\bar{K}$ and $ Y/\bar{K}$ be smooth projective varieties, \newline let $f:Y \dashrightarrow X$ be a dominant rational map defined over $\bar{K}$, let $D \in$ Div$(X)$ be an ample divisor, and fix Weil height functions $h_{X,D}$ and $h_{Y,f^*D}$ associated to $D$ and $f^*D.$ Then }
\begin{center}
$h_{X,D} \circ f(P) \leq h_{Y,f^*D}(P) + O(1)$ \textit{for all} $P \in (Y - I_f)(\bar{K}),$
\end{center} \textit{where the $O(1)$ bound depends on $X,Y, f,$ and the choice of height functions, but is independent of $P$.}
\begin{proof}
See [12, Prop. 21].
\end{proof}
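For intuition, Proposition 4.6 can be tested numerically in the simplest case $X=Y=\mathbb{P}^1$ over $\mathbb{Q}$. The sketch below is only an illustration: the morphism $f(x)=x^2+1$ and the naive height $h(a/b)=\log\max(|a|,|b|)$ are our own choices, not taken from the text. Since $f^*D$ is linearly equivalent to $2D$ for a point $D$, the proposition specializes to $h(f(P)) \leq 2h(P) + O(1)$.

```python
from fractions import Fraction
from math import log

# Toy instance of the height inequality on P^1 over Q (illustration only).
def h(x: Fraction) -> float:
    """Naive Weil height h(a/b) = log max(|a|, |b|) for a reduced fraction."""
    return log(max(abs(x.numerator), abs(x.denominator)))

def f(x: Fraction) -> Fraction:
    """A degree-2 morphism of P^1, restricted to affine points."""
    return x * x + 1

# h(f(a/b)) = log(a^2 + b^2) <= 2*log max(|a|,|b|) + log 2, so O(1) = log 2.
C = log(2)
pts = [Fraction(a, b) for a in range(-9, 10) for b in range(1, 10)]
assert all(h(f(x)) <= 2 * h(x) + C + 1e-9 for x in pts)
```

Here the explicit $O(1)$ constant $\log 2$ comes from the elementary bound $a^2+b^2 \leq 2\max(a^2,b^2)$.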
\section{A bound for the sum of heights on iterates}
This section is devoted to the proof of a quantitative upper bound for $\sum_{f \in \mathcal{F}_n} h^+_X(f(P))$ in terms of the dynamical degree $\delta_{\mathcal{F}}$ of the system. This is one of the main results of this work, and is stated below. As a corollary, we see that the arithmetic degree of any point is bounded above by the dynamical degree of the system. \newline
{\bf Theorem 5.1:}
\textit{Let $K$ be a number field or a one-variable function field of characteristic $0$, let $\mathcal{F}=\{f_1,...,f_k\}$ be a set of dominant rational self-maps on $X$ defined over $K$ as stated before, let $h_X$ be a Weil height on $X(\bar{K})$ relative to an ample divisor, let $h^+_X= \max \{h_X, 1 \}$, and let $\epsilon >0$. Then there exists a positive constant $C=C(X, h_X, \mathcal{F}, \epsilon)$ such that for all $P \in X_{\mathcal{F}}(\bar{K})$ and all $n \geq 0$,}
\begin{center}
$\sum_{f \in \mathcal{F}_n} h^+_X(f(P)) \leq C. k^n.(\delta_{\mathcal{F}} + \epsilon)^n . h^+_X(P).$
\end{center} \textit{In particular, $h^+_X(f(P)) \leq C. k^n.(\delta_{\mathcal{F}} + \epsilon)^n . h^+_X(P)$ for all $f \in \mathcal{F}_n.$} \newline \newline
Before proving the theorem, we note that it implies the fundamental inequality $\bar{\alpha}_{\mathcal{F}}(P) \leq \delta_{\mathcal{F}}.$\newline
{\bf Corollary 5.2:}
\textit{Let $P \in X_{\mathcal{F}}(\bar{K}).$ Then}
\begin{center}
$\bar{\alpha}_{\mathcal{F}}(P) \leq \delta_{\mathcal{F}}.$
\end{center}
\begin{proof} Let $ \epsilon >0.$ Then
\begin{eqnarray*}
\bar{\alpha}_{\mathcal{F}}(P) &=& \frac{1}{k} \limsup_{n \rightarrow \infty} \Big\{\sum_{f \in \mathcal{F}_n} h^+_X(f(P))\Big\}^{\frac{1}{n}} \quad \mbox{by definition of } \bar{\alpha}_{\mathcal{F}} \\
&\leq& \frac{1}{k}\limsup_{n \rightarrow \infty} \big( C k^n (\delta_{\mathcal{F}} + \epsilon)^n  h^+_X(P)\big)^{\frac{1}{n}} \quad \mbox{by Theorem 5.1} \\
&=& \delta_{\mathcal{F}} + \epsilon.
\end{eqnarray*}
This holds for all $\epsilon>0$, which proves that $\bar{\alpha}_{\mathcal{F}}(P) \leq \delta_{\mathcal{F}}.$
\end{proof}
{\bf Lemma 5.3:}
\textit{Let $E \in$ Div$(X)_{\mathbb{R}}$ be a divisor that is algebraically equivalent to $0$, and fix a height function $h_E$ associated to $E.$ Then there is a constant $C=C(h_X, h_E)$ such that}
\begin{center}
$|h_E(P)| \leq C \sqrt{h_X^+(P)} $ \textit{for all} $P \in X(\bar{K}).$
\end{center}
\begin{proof}
See, for example, the book on Diophantine geometry by Hindry and Silverman [8, Theorem B.5.9].
\end{proof}
Theorem 5.1 will be a consequence of the following slightly weaker result:\newline
{\bf Theorem 5.4:}
\textit{Let $K$ be a number field or a one-variable function field of characteristic $0$, let $\mathcal{F}=\{f_1,...,f_k\}$ be a set of dominant rational self-maps on $X$ defined over $K$, let $h_X$ be a Weil height on $X(\bar{K})$ relative to an ample divisor, let $h^+_X= \max \{h_X, 1 \}$, and let $\epsilon >0$. Then there exist a positive constant $C=C(X, h_X, \mathcal{F}, \epsilon)$ and a positive integer $t$ such that for all $P \in X_{\mathcal{F}}(\bar{K})$ and all $n \geq 0$,}
\begin{center}
$\sum_{f \in \mathcal{F}_{nt}} h^+_X(f(P)) \leq C. k^{nt}.(\delta_{\mathcal{F}} + \epsilon)^{nt} . h^+_X(P).$
\end{center}
Before proving it and then deducing Theorem 5.1, we state and prove two short auxiliary lemmas.\newline
{\bf Lemma 5.5:} \textit{In the situation above, there is a constant $C \geq 1$ such that
\begin{center}
$\sum_{f \in \mathcal{F}_n} h^+_X(f(P)) \leq k^n.C^n . h^+_X(P).$
\end{center}
for all $P \in X_{\mathcal{F}}(\bar{K})$.}
\begin{proof}
We take an ample divisor $H$ on $X$ and height functions $h_H \geq 1$ and $ h_{f_i^*H}$ associated to $H$ and $f_i^*H$ respectively, so that \begin{center} $h_H(f_i(P)) \leq h_{f_i^*H}(P) + O(1) $ \end{center}
for all $P \in X_{\mathcal{F}}(\bar{K})$, with $O(1)$ depending on $H, f_i, f_i^*H, h_H, h_{f_i^*H},$ but not on $P$. Then, for $C$ large enough, we find that $h_{f_i^*H}(P) + O(1) \leq C h_H(P)$, and so $h_H(f_i(P)) \leq C h_H(P)$ for all $P \in X_{\mathcal{F}}(\bar{K})$, which yields \begin{center}$\sum_{f \in \mathcal{F}_n} h_H(f(P)) \leq k^n.C^n . h_H(P).$ \end{center}
The proof is finished since $h_H$ and $h_X$ are associated with ample divisors, and therefore are commensurate.
\end{proof}
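The mechanism of Lemma 5.5 can be mimicked by a toy numerical model (illustrative values only, not from the text): each of the $k$ maps inflates the height by at most a factor $C$, so summing over all $k^n$ compositions of length $n$ gives at most $k^n C^n h_0$.

```python
from itertools import product
from math import prod
import random

# Toy model of Lemma 5.5 (all numbers are illustrative sample data).
random.seed(0)
k, n, C, h0 = 3, 5, 2.0, 1.0
factors = [random.uniform(1.0, C) for _ in range(k)]  # growth factor per map

# Sum the "heights" over all k^n length-n compositions of the k maps.
total = sum(h0 * prod(factors[i] for i in word)
            for word in product(range(k), repeat=n))
assert total <= k ** n * C ** n * h0
```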
{\bf Lemma 5.6:} \textit{Let $\mathcal{A}_0:=\{a_0\}, a_0 \geq 1$, $k$ fixed, and for each $l \in \mathbb{N}$, $\mathcal{A}_l$ a set with $k^l$ positive real numbers such that
\begin{center} $\sum_{a \in \mathcal{A}_n} a \leq \sum_{a \in \mathcal{A}_{n-1}} a + C_1(\sum_{a \in \mathcal{A}_{n-1}} \sqrt{a} + \sum_{a \in \mathcal{A}_{n-1}} \sqrt{a + C_2})$ for all $n \geq 1,$ \end{center} where $C_1, C_2$ are non-negative constants. Then there exists a positive constant $C$ depending only on $C_1, C_2$ such that
\begin{center}
$\sum_{a \in \mathcal{A}_n} a \leq k^{n-1}.C.n^2. a_0$
\end{center}}
\begin{proof}
$\sum_{a \in \mathcal{A}_n} a \leq \sum_{a \in \mathcal{A}_{n-1}} a + C_1(\sum_{a \in \mathcal{A}_{n-1}} \sqrt{a} + \sum_{a \in \mathcal{A}_{n-1}} \sqrt{a + C_2}) \newline
=\sum_{a \in \mathcal{A}_{n-1}} [a + C_1( \sqrt{a} + \sqrt{a + C_2})] \leq \sum_{a \in \mathcal{A}_{n-1}} [a + C_1\sqrt{a}( 1 + \sqrt{1 + \dfrac{C_2}{a}})] \newline
\leq \sum_{a \in \mathcal{A}_{n-1}} [a + C_1\sqrt{a}( 1 + \sqrt{1 + C_2})]= \sum_{a \in \mathcal{A}_{n-1}} [a + C_3\sqrt{a} ]$ with \newline $C_3:= C_1(1+ \sqrt{1+ C_2})$.\newline
Thus we have $\sum_{a \in \mathcal{A}_1}a \leq \sum_{a \in \mathcal{A}_{0}} [a + C_3\sqrt{a} ]=a_0 + C_3\sqrt{a_0} \leq a_0(1 + C_3) \newline \leq a_0.C = a_0.C.k^0,$ where $C:=\max \{ \dfrac{C_3^2.k}{4}, 1 + C_3\}$, and we prove by induction that $\sum_{a \in \mathcal{A}_n}a \leq Ck^{n-1}n^2a_0.$ Assuming this for $n$, and using the Cauchy-Schwarz inequality together with $C_3 \leq 2\sqrt{C/k}$, we compute \newline \newline
$\sum_{a \in \mathcal{A}_{n+1}}a \leq \sum_{a \in \mathcal{A}_{n}} [a + C_3\sqrt{a}] \leq \sum_{a \in \mathcal{A}_{n}} a+ C_3\sum_{a \in \mathcal{A}_{n}} \sqrt{a} \newline \leq \sum_{a \in \mathcal{A}_{n}}a + C_3 k^{n/2}\sqrt{\sum_{a \in \mathcal{A}_{n}}a} \leq k^{n-1}Cn^2a_0 + C_3 k^{n/2}\sqrt{k^{n-1}Cn^2a_0} \newline \leq k^{n-1}Cn^2a_0+ 2k^{(n-1)/2}\sqrt{C}\sqrt{k^{n-1}Cn^2a_0} = k^{n-1}Ca_0(n^2+\dfrac{2n}{\sqrt{a_0}})\newline \leq k^{n-1}Ca_0(n^2+2n) \leq k^{n}Ca_0(n+1)^2,$ which closes the induction.
\end{proof}
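The growth bound in Lemma 5.6 can be sanity-checked numerically by iterating the worst case of the reduced recursion $S_n \leq S_{n-1} + C_3 k^{(n-1)/2}\sqrt{S_{n-1}}$, in which the Cauchy-Schwarz step is saturated. All constants below are illustrative choices, and the constant $C$ is ours, taken large enough for the induction to close.

```python
from math import sqrt

# Numerical check (illustration, not a proof) of the reduced recursion from
# the proof of Lemma 5.6; C is chosen so S_n <= C * k^{n-1} * n^2 * a0 holds.
k, C1, C2, a0 = 2, 1.5, 3.0, 1.0
C3 = C1 * (1 + sqrt(1 + C2))
C = max(C3 ** 2 * k / 4, 1 + C3)

S = a0
for n in range(1, 30):
    S = S + C3 * k ** ((n - 1) / 2) * sqrt(S)  # worst case of the recursion
    assert S <= C * k ** (n - 1) * n ** 2 * a0
```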
Now we start the proof of Theorem 5.4. \newline
\textit{ Proof of theorem 5.4:} We take $D_1,..,D_r$ very ample divisors forming a basis of NS$(X)_{\mathbb{R}}$, and $H \equiv \sum c_i D_i$ ample with $c_i \geq 0$ such that $H+ D_i, H-D_i$ are all ample.
We consider a resolution of indeterminacy $p:Y \rightarrow X$, given as a sequence of blow-ups that works for each $f_i$, such that $g_i:=f_i \circ p$ is a morphism for each $i \leq k$, and Exc$(p)$ is the exceptional locus of $p$. For each $j\leq k, i \leq r$, we take effective divisors $\tilde{D}_i^{(j)}$ on $X$ with $\tilde{D}_i^{(j)}$ linearly equivalent to $D_i$, and such that none of the components of $g_j^*\tilde{D}_i^{(j)}$ are contained in Exc$(p)$. The divisor $Z_i^{(j)}:=p^*p_*g_j^*\tilde{D}_i^{(j)}- g_j^*\tilde{D}_i^{(j)}$ on $Y$ is effective and has support contained in Exc$(p)$. We denote $F_i^{(j)}:=g_j^*D_i$ for $i=1,...,r$, and take divisors $F_{r+1}^{(j)},...,F_s^{(j)}$ so that $F_1^{(j)},..., F_s^{(j)}$ form a basis for NS$(Y)_{\mathbb{R}}$. For $i \leq r$, we can see that $p^*p_*F_i^{(j)}-F_i^{(j)}$ and $Z_i^{(j)}$ are linearly equivalent. By [7, prop. 7.10], we can find $H^{\prime} \in$ Div$(Y)_{K}$ ample so that $p^*H - H^{\prime}$ is effective with support contained in Exc$(p)$.
We consider $g_j^*D_i \equiv \sum_{m \leq s } a^{(j)}_{mi}F_m^{(j)}$ for $i=1,...,r$, and let $A^{(j)}:=(a^{(j)}_{mi})_{m,i}$ be the corresponding $s \times r$ matrix. We also write $p_*F_i^{(j)} \equiv \sum_{l \leq r } b^{(j)}_{li}D_l; i=1,...,s,$ and let $B^{(j)}:=(b^{(j)}_{li})_{l,i}$ be the corresponding $r \times s$ matrix.
We see that $B^{(j)}A^{(j)}$ is a matrix representing $f_j^*$ with respect to the basis $D_1,...,D_r$.
Let us fix some notation:
$\vec{D}:=(D_1,...,D_r), \vec{F}^{(j)}:=(F_1^{(j)},..., F_s^{(j)}), \vec{Z}^{(j)}:=(Z_1^{(j)},..., Z_s^{(j)}), \vec{c}:=(c_1,...,c_r), \newline E^{(j)}:=g_j^*H-<A^{(j)} \vec{c}, \vec{F}^{(j)}>, \vec{E^{\prime}}^{(j)}= ({E^{\prime}_1}^{(j)},...,{E^{\prime}_s}^{(j)}):=p_*\vec{F}^{(j)} - {B^{(j)}}^T\vec{D}. $ We note that $E^{(j)}$ and $\vec{E^{\prime}}^{(j)}$ are numerically zero divisors for each $j$.
We choose height functions $h_{D_1},..., h_{D_r}$ for $D_1,...,D_r$ respectively, and $h_H \geq 1$ with respect to $H$ such that $h_H \geq |h_{D_i}|$ for each $i \leq r$. All of these functions are independent of $\mathcal{F}$. Define $h_{F^{(j)}_i}:=h_{D_i}\circ g_j$, $i=1,...,r$, height functions associated with $F_i^{(j)}$. For $i=r+1,...,s$ we fix height functions $h_{p_*F^{(j)}_i}$ with respect to the divisors $p_*F^{(j)}_i$, and we denote: $h_{\vec{D}}:=(h_{D_1},...,h_{D_r}), h_{\vec{F}^{(j)}}:=(h_{F_1^{(j)}},...,h_{F_s^{(j)}}), \newline h_{p_*\vec{F}^{(j)}}:=(h_{p_*F_1^{(j)}},...,h_{p_*F_s^{(j)}}), h_{\vec{E^{\prime}}^{(j)}}:=(h_{{E^{\prime}_1}^{(j)}},...,h_{{E^{\prime}_s}^{(j)}}) = h_{p_*\vec{F}^{(j)}}-{B^{(j)}}^Th_{\vec{D}}, \newline h_{\vec{Z}^{(j)}}:= (h_{Z_1^{(j)}},...,h_{Z_s^{(j)}})=h_{p_*\vec{F}^{(j)}}\circ p - h_{\vec{F}^{(j)}},$ where $h_{Z_i^{(j)}}$ and $h_{{E^{\prime}_i}^{(j)}}$ are height functions associated with the divisors $Z_i^{(j)}$ and ${E^{\prime}_i}^{(j)}$. Also, define \begin{center} $h_{E^{(j)}}:=h_H \circ g_j - <A^{(j)} \vec{c}, h_{\vec{F}^{(j)}}>$. \end{center} We can suppose that $h_{Z_i^{(j)}} \geq 0$ on $Y-Z_i^{(j)}$. We can fix a height function $h_{H^{\prime}} \geq 1$ related to $H^{\prime}$, and a height function $h_{p^*H - H^{\prime}}$ related to $p^*H - H^{\prime}$ satisfying $h_{p^*H - H^{\prime}} \geq 0$ on $Y-$Exc$(p)$. Since $E^{(j)}$ and ${E_i^{\prime}}^{(j)}$ are numerically equivalent to zero, there exists a positive constant $C$ such that $|h_{{E}^{(j)}}| \leq C \sqrt{h_{H^{\prime}}}$ and $|h_{{E_i^{\prime}}^{(j)}}|\leq C \sqrt{h_H}$. Also, there exists a constant $\gamma \geq 0$ such that $h_H \circ p \geq h_{p^*H - H^{\prime}} + h_{H^{\prime}}- \gamma$ on $Y(\bar{K})$. 
Finally, if we denote by $M(f_j)$ the matrix representing $f_j^*$, linear map on NS$(X)_{\mathbb{R}},$ with respect to the basis $D_1,...,D_r$, $||M(f_j)||$ the maximum absolute value of its coefficients (norm of a matrix), then we make the notation $||M(\mathcal{F}_n)||:= \max_{f \in \mathcal{F}_n}||M(f)||$.
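The quantity $||M(\mathcal{F}_n)||$ can be computed by brute force in small examples. The sketch below is an illustration only: the $2\times 2$ integer matrices are arbitrary sample data, not taken from the text.

```python
from itertools import product

def matmul(A, B):
    """Product of two square integer matrices of equal size."""
    n = len(A)
    return [[sum(A[i][l] * B[l][j] for l in range(n)) for j in range(n)]
            for i in range(n)]

def sup_norm(A):
    """Maximum absolute value of the entries (the norm used in the text)."""
    return max(abs(x) for row in A for x in row)

M = [[[2, 1], [0, 1]], [[1, 0], [1, 1]]]  # sample matrices M(f_1), M(f_2)

def norm_Fn(n):
    """max over all length-n words of the sup-norm of the matrix product."""
    best = 0
    for word in product(M, repeat=n):
        prod_mat = word[0]
        for A in word[1:]:
            prod_mat = matmul(prod_mat, A)
        best = max(best, sup_norm(prod_mat))
    return best

# norm_Fn(n) ** (1/n) approximates the growth rate bounded by the
# dynamical degree in the text.
print([norm_Fn(n) for n in range(1, 6)])
```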
For $P \in X_{\mathcal{F}}(\bar{K}), n \geq 1$, we compute: \newline \newline
$\sum_{f \in \mathcal{F}_n} h_H(f(P))= \sum_{i \leq k}\sum_{f \in \mathcal{F}_{n-1}}h_H(f_i(f(P))) \newline \newline = \sum_{i \leq k,f \in \mathcal{F}_{n-1}}[(h_H \circ g_i)(p^{-1}f(P))-<A^{(i)}\vec{c}, h_{p_*F^{(i)}}\circ p>(p^{-1}f(P))\newline \newline +<A^{(i)}\vec{c}, h_{p_*F^{(i)}}>(f(P))]=\sum_{i\leq k,f \in \mathcal{F}_{n-1}}[<A^{(i)}\vec{c}, h_{F^{(i)}}-h_{p_*F^{(i)}}\circ p>(p^{-1}f(P))\newline \newline + h_{E^{(i)}}(p^{-1}f(P))+ <B^{(i)}A^{(i)}\vec{c},h_{\vec{D}}>(f(P)) +<A^{(i)}\vec{c},h_{{E^{\prime}}^{(i)}}>(f(P))]\newline \newline =\sum_{i\leq k,f \in \mathcal{F}_{n-1}}[<\vec{c},-h_{Z^{(i)}}>(p^{-1}f(P))+h_{E^{(i)}}(p^{-1}f(P))\newline \newline + <B^{(i)}A^{(i)}\vec{c},h_{\vec{D}}>(f(P))+ <\vec{c},{A^{(i)}}^Th_{{E^{\prime}}^{(i)}}>(f(P))] \newline \newline \leq \sum_{i\leq k,f \in \mathcal{F}_{n-1}}[h_{E^{(i)}}(p^{-1}f(P)) +<B^{(i)}A^{(i)}\vec{c},h_{\vec{D}}>(f(P))\newline \newline +<\vec{c},{A^{(i)}}^Th_{{E^{\prime}}^{(i)}}>(f(P))]\leq \sum_{i\leq k,f \in \mathcal{F}_{n-1}}[r^2||\vec{c} ||||B^{(i)}A^{(i)}|| h_H(f(P))\newline \newline+r||\vec{c}||C\sqrt{h_H(f(P))} +C \sqrt{h_{H^{\prime}}(p^{-1}f(P))}]\newline \newline \leq \sum_{i\leq k,f \in \mathcal{F}_{n-1}}[r^2||\vec{c} ||||B^{(i)}A^{(i)}|| h_H(f(P))+r||\vec{c}||C\sqrt{h_H(f(P))} +C \sqrt{h_{H^{\prime}}(f(P)) + \gamma}]$,\newline \newline where the last follows because $h_H \circ p \geq h_{p^*H - H^{\prime}} + h_{H^{\prime}}- \gamma$ on $Y(\bar{K})$ and $h_{p^*H - H^{\prime}} \geq 0$ on $Y-$Exc$(p)$.\newline
Denoting by $R:=\max_i \{1, r^2||\vec{c} ||||B^{(i)}||||A^{(i)}||\}$, and dividing the whole inequality above by $R^n$, we obtain \newline \newline
$\dfrac{1}{R^{n}}\sum_{f \in \mathcal{F}_n} h_H(f(P))\newline \leq k.[\sum_{f \in \mathcal{F}_{n-1}} \dfrac{h_H(f(P))}{R^{n-1}}+r||\vec{c}||C \sum_{f \in \mathcal{F}_{n-1}} \sqrt{\dfrac{h_H(f(P))}{R^{n-1}}}+C\sum_{f \in \mathcal{F}_{n-1}} \sqrt{\dfrac{h_H(f(P))}{R^{n-1}}+ \gamma}],$ \newline \newline
which, by lemma 5.6, implies that \begin{center}
$\sum_{f \in \mathcal{F}_n} h_H(f(P))\leq C_1 k^n n^2 R^n h_H(P),$
\end{center} for a positive constant $C_1.$
Fix a real number $\epsilon >0$, and recall that $\delta_{\mathcal{F}}= \lim \sup_n \rho(\mathcal{F}_n)^{1/n}$. Then by 3.8, we can check that $\delta_{\mathcal{F}}\geq \lim_n ||M(\mathcal{F}_n)||^{1/n},$ and hence there is a positive integer $l$ such that $\dfrac{||M(\mathcal{F}_l)||}{(\delta_{\mathcal{F}}+ \epsilon)^l} r^2 ||\vec{c}|| < 1.$ We fix such an $l$ and apply the arguments of the last computations to $\mathcal{F}_{ln}=(\mathcal{F}_l)_n$, concluding that there exists a constant $C_1$ such that \begin{center}
$\sum_{f \in \mathcal{F}_{nl}} h_H(f(P))\leq C_1 k^{ln} n^2\dfrac{R^{n}}{(\delta_{\mathcal{F}}+ \epsilon)^{nl}}(\delta_{\mathcal{F}}+ \epsilon)^{nl}h_H(P),$
\end{center} with $R \leq \max \{1, r^2||\vec{c}||\,||M(\mathcal{F}_l)||\}$. Thus, there is a constant $C_2$ such that $C_1 n^2\dfrac{R^n}{(\delta_{\mathcal{F}}+ \epsilon)^{nl}} \leq C_2$ for all $n$. So we find that
\begin{center}
$\sum_{f \in \mathcal{F}_{nl}} h^+_X(f(P)) \leq C_2. k^{nl}.(\delta_{\mathcal{F}} + \epsilon)^{nl} . h^+_X(P)$
\end{center} for all $n$, showing theorem 5.4.
\textit{Proof that Theorem 5.4 implies Theorem 5.1:} We proved that for any $\epsilon >0$ there are a positive integer $l$ and a positive constant $C$ so that
\begin{center}
$\sum_{f \in \mathcal{F}_{nl}} h^+_X(f(P)) \leq C. k^{nl}.(\delta_{\mathcal{F}} + \epsilon)^{nl} . h^+_X(P),$
\end{center}for all $n$ and all $P \in X_{\mathcal{F}}(\bar{K})$. Given an integer $n$, there are $q\geq 0$ and $0\leq t<l$ such that $n=lq +t$. Let also $C_1$ be the constant of Lemma 5.5. For $P \in X_{\mathcal{F}}(\bar{K})$, we calculate that
\begin{eqnarray*}
\sum_{f \in \mathcal{F}_n} h^+_X(f(P)) &\leq& C. k^{lq}.(\delta_{\mathcal{F}} + \epsilon)^{lq}. \sum_{f \in \mathcal{F}_t} h^+_X(f(P)) \\
&\leq& C. k^{lq}.(\delta_{\mathcal{F}} + \epsilon)^{lq}.C_1^t.k^t. h^+_X(P) \\
&\leq& CC_1^{l-1}k^n(\delta_{\mathcal{F}} + \epsilon)^{n}h^+_X(P),
\end{eqnarray*}
as we wanted to show. \newline
\section{The arising of new canonical heights}
In this final section, we show that the canonical height limit, proposed and constructed by S. Kawaguchi in [10, theorem 1.2.1], converges for certain eigendivisor classes relative to algebraic equivalence, instead of the linear equivalence case treated by Kawaguchi. The theorem also extends theorem 5 of [12], where the eigensystem of the hypothesis consists of just one morphism.\newline
{\bf Theorem 6.1:} \textit{Assume that $\mathcal{F}=\{f_1,...,f_k\}$ consists of morphisms $f_i:X \rightarrow X$, and let $D \in $Div$(X)_{\mathbb{R}}$ satisfy the algebraic relation}
\begin{center}
$\sum^k_{i=1} f^*_iD \equiv \beta D$\textit{ for some real number} $\beta >\sqrt{\delta_{\mathcal{F}}}k,$
\end{center}
\textit{where $\equiv$ denotes algebraic equivalence in NS$(X)_{\mathbb{R}}.$ Then }\newline
\textit{(a) For all $P \in X(\bar{K})$, the following limit converges:}
\begin{center}
$\hat{h}_{D,\mathcal{F}}(P)= \lim_{n \rightarrow \infty}\dfrac{1}{\beta^n}\sum_{ f \in \mathcal{F}_n} h_D(f(P)).$
\end{center}
\textit{(b) The canonical height in (a) satisfies }
\begin{center}
$\sum^k_{i=1} \hat{h}_{D,\mathcal{F}}(f_i(P))=\beta \hat{h}_{D,\mathcal{F}}(P)$ and $\hat{h}_{D,\mathcal{F}}(P)= h_D(P) + O(\sqrt{h^+_X(P)}). $
\end{center}
\textit{(c) If $\hat{h}_{D,\mathcal{F}}(P) \neq 0$, then $\underline{\alpha}_{\mathcal{F}}(P) \geq \beta/k.$}\newline
\textit{(d) If $\hat{h}_{D,\mathcal{F}}(P) \neq 0$ and $\beta=\delta_{\mathcal{F}}k,$ then $\alpha_{\mathcal{F}}(P)= \delta_{\mathcal{F}}.$}\newline
\textit{(e) Assume that $D$ is ample and that $K$ is a number field. Then}
\begin{center}
$\hat{h}_{D,\mathcal{F}}(P)=0 \iff P$ \textit{is preperiodic, i.e., has finite $\mathcal{F}$-orbit.}
\end{center}
\begin{proof}
(a) Theorem 5.1 says that for every $\epsilon >0$ there is a constant \newline $C_1=C_1(X,h_X,\mathcal{F}, \epsilon)$ such that
\begin{center}
$\sum_{f \in \mathcal{F}_n} h^+_X(f(P)) \leq C_1.k^n. (\delta_{\mathcal{F}} + \epsilon)^n . h^+_X(P)$ for all $n \geq 0.$
\end{center}
We are given that $\sum^k_{i=1} f^*_iD \equiv \beta D.$ Applying lemma 5.3 with \newline $E=\sum^k_{i=1} f^*_iD - \beta D,$ we find a positive constant $C_2=C_2(D, \mathcal{F}, h_X)$ such that
\begin{center}
$|h_{\sum^k_{i=1} f^*_iD}(Q) - \beta h_D(Q)| \leq C_2 \sqrt{h^+_X(Q)} $ for all $Q \in X(\bar{K}).$
\end{center}
Since we assumed that the $f_i$ are morphisms, standard functoriality of Weil height states that
\begin{center}
$ h_{\sum^k_{i=1} f^*_iD} = \sum^k_{i=1} h_D \circ f_i + O(1),$
\end{center}
so the above inequality is reformulated as follows
\begin{center}
(**) $| \sum^k_{i=1} h_D(f_i(Q)) - \beta h_D(Q)| \leq C_3 \sqrt{h^+_X(Q)} $ for all $Q \in X(\bar{K}).$
\end{center}
For $N \geq M \geq 0$ we estimate a telescoping sum,
\begin{eqnarray*}
\lefteqn{|\beta^{-N} \sum_{f \in \mathcal{F}_N} h_D(f(P)) - \beta^{-M} \sum_{f \in \mathcal{F}_M} h_D(f(P))|} \\
&=&\Big|\sum^{N}_{n=M+1}\beta^{-n}\Big[ \sum_{f \in \mathcal{F}_n} h_D(f(P))- \beta \sum_{f \in \mathcal{F}_{n-1}} h_D(f(P))\Big]\Big| \\
&\leq& \sum^{N}_{n=M+1}\beta^{-n}\Big|\sum_{f \in \mathcal{F}_n} h_D(f(P))- \beta \sum_{f \in \mathcal{F}_{n-1}} h_D(f(P))\Big| \\
&\leq& \sum^{N}_{n=M+1}\beta^{-n} \sum_{f \in \mathcal{F}_{n-1}}\Big|\sum^k_{i=1} h_D(f_i(f(P))) - \beta h_D(f(P))\Big| \\
&\leq& \sum^{N}_{n=M+1}\beta^{-n} \sum_{f \in \mathcal{F}_{n-1}} C_3 \sqrt{h^+_X(f(P))} \quad \mbox{by (**)} \\
&\leq& \sum^{N}_{n=M+1}\beta^{-n}.k^{(n-1)/2}. C_3 . \sqrt{\sum_{f \in \mathcal{F}_{n-1}}h^+_X(f(P))} \quad \mbox{by Cauchy-Schwarz} \\
&\leq& \sum^{N}_{n=M+1}\beta^{-n}.k^{n-1}. C_3 .C. (\delta_{\mathcal{F}} + \epsilon)^{(n-1)/2}.\sqrt{h_X^+(P)} \quad \mbox{by Theorem 5.1} \\
&\leq& CC_3 \sqrt{h_X^+(P)}\sum_{n=M+1}^{\infty} \Big[ \dfrac{k^2(\delta_{\mathcal{F}}+ \epsilon)}{\beta^{2}}\Big]^{n/2}.
\end{eqnarray*}
Furthermore,
\begin{center}
$\sum_{n=M+1}^{\infty} [ \dfrac{k^2(\delta_{\mathcal{F}}+ \epsilon)}{\beta^{2}}]^{n/2} < \infty \iff \dfrac{k^2(\delta_{\mathcal{F}}+ \epsilon)}{\beta^{2}} < 1.$
\end{center}
Since $\beta > \sqrt{\delta_{\mathcal{F}}k^2},$ we can choose $0< \epsilon < \frac{\beta^{2}}{k^2} - \delta_{\mathcal{F}}$, which implies $\frac{k^2(\delta_{\mathcal{F}}+ \epsilon)}{\beta^{2}} < 1$ and the desired convergence. Also we obtain the following estimate (***):
\begin{center}
$ |\beta^{-N} \sum_{f \in \mathcal{F}_N} h_D(f(P)) - \beta^{-M} \sum_{f \in \mathcal{F}_M} h_D(f(P))| \newline \leq CC_3\Big[ \dfrac{k^2(\delta_{\mathcal{F}}+ \epsilon)}{\beta^{2}}\Big]^{M/2} \sqrt{h_X^+(P)}.$
\end{center}
(b) The formula
\begin{center}
$\sum^k_{i=1} \hat{h}_{D,\mathcal{F}}(f_i(P))=\beta \hat{h}_{D,\mathcal{F}}(P)$
\end{center} follows immediately from the limit defining $\hat{h}_{D,\mathcal{F}}$ in part (a). Next, letting $N \rightarrow \infty$ and setting $M=0$ in (***) gives
\begin{center}
$ |\hat{h}_{D,\mathcal{F}}(P)- h_D(P)|= O(\sqrt{h^+_X(P)}),$
\end{center} which completes the proof of (b).
(c) We are assuming that $\hat{h}_{D,\mathcal{F}}(P) \neq 0.$ If $\hat{h}_{D,\mathcal{F}}(P) <0,$ we change $D$ to $-D,$ so we may assume $\hat{h}_{D,\mathcal{F}}(P)>0.$ Let $H \in $ Div$(X)$ be an ample divisor such that $H+D$ is also ample (this can always be arranged by replacing $H$ with $mH$ for a sufficiently large $m$). Since $H$ is ample, we may assume that the height function $h_H$ is non-negative. We compute
\begin{eqnarray*}
\lefteqn{\sum_{f \in \mathcal{F}_n}h_{D+H}(f(P))} \\
&=& \sum_{f \in \mathcal{F}_n}h_{D}(f(P)) + \sum_{f \in \mathcal{F}_n}h_{H}(f(P)) + O(k^n) \\
&\geq& \sum_{f \in \mathcal{F}_n}h_{D}(f(P)) + O(k^n) \quad \mbox{since } h_H \geq 0 \\
&=& \sum_{f \in \mathcal{F}_n}\hat{h}_{\mathcal{F},D}(f(P)) + O\Big( \sum_{f \in \mathcal{F}_n}\sqrt{h^+_{X}(f(P))} \Big) \quad \mbox{from (b)} \\
&=&\beta^{n}\hat{h}_{\mathcal{F},D}(P) + O\Big( \sum_{f \in \mathcal{F}_n}\sqrt{h^+_{X}(f(P))} \Big) \quad \mbox{from (b)} \\
&\geq& \beta^{n}\hat{h}_{\mathcal{F},D}(P) + O\Big( k^{n/2}\sqrt{\sum_{f \in \mathcal{F}_n}h^+_{X}(f(P))} \Big) \quad \mbox{by Cauchy-Schwarz} \\
&=& \beta^{n}\hat{h}_{\mathcal{F},D}(P) + O\Big( k^{n/2}\sqrt{Ck^n(\delta_{\mathcal{F}} + \epsilon)^nh_X^+(P)}\Big) \quad \mbox{from Theorem 5.1.}
\end{eqnarray*}
This estimate is true for every $\epsilon >0$, where $C$ depends on $\epsilon.$ Using the assumption that $\beta > k\sqrt{\delta_{\mathcal{F}}} $ we can choose $\epsilon >0$ such that \newline $k^2.(\delta_{\mathcal{F}} + \epsilon) < \beta^{2}.$ This gives
\begin{center}
$\sum_{f \in \mathcal{F}_n}h_{D+H}(f(P)) \geq \beta^{n}\hat{h}_{\mathcal{F},D}(P) + o(\beta^n),$
\end{center} so taking $n^{th}$-roots, using the assumption that $\hat{h}_{\mathcal{F} ,D}(P) >0,$ and letting $n \rightarrow \infty$ yields
\begin{center}
$\underline{\alpha}_{\mathcal{F}}(P)=\lim \inf_{n \rightarrow \infty} \dfrac{1}{k}\{\sum_{f \in \mathcal{F}_n}h_{D+H}(f(P)) \}^{1/n} \geq \dfrac{\beta}{k}.$
\end{center}
(d) From (c) we get that $\underline{\alpha}_{\mathcal{F}}(P) \geq \dfrac{\beta}{k} = \dfrac{\delta_{\mathcal{F}}.k}{k}=\delta_{\mathcal{F}},$ while corollary 5.2 gives $\bar{\alpha}_{\mathcal{F}}(P) \leq \delta_{\mathcal{F}}.$ Hence the limit defining $\alpha_{\mathcal{F}}(P)$ exists and is equal to $\delta_{\mathcal{F}}.$
(e) First suppose that $\# \mathcal{O}_{\mathcal{F}}(P) < +\infty.$ Since $D$ is ample and the orbit of $P$ is finite, we have that $h_D \geq 0, \hat{h}_{\mathcal{F},D}(P) \geq 0$, and there is a constant $C>0$ such that $h_D(f(P)) \leq C $ for all $f \in \cup_{l \geq0} \mathcal{F}_l$. This gives
\begin{center}
$ |\hat{h}_{\mathcal{F},D}(P)| \leq \lim_{n \rightarrow \infty}\dfrac{1}{\beta^n}\sum_{ f \in \mathcal{F}_n} |h_D(f(P))| \leq \lim_{n \rightarrow \infty} C.\dfrac{k^n}{\beta^n}=0$
\end{center} since $\beta > k.$
For the other direction, suppose that $\hat{h}_{\mathcal{F},D}(P)=0.$ Then for any $n \geq 0$ and $g \in \mathcal{F}_n,$ we apply part (b) to obtain
\begin{center}
$0=\beta^n\hat{h}_{\mathcal{F},D}(P)=\sum_{f \in \mathcal{F}_n}\hat{h}_{\mathcal{F},D}(f(P)) \geq \hat{h}_{\mathcal{F},D}(g(P)) \newline \geq h_D(g(P)) - c\sqrt{h_D(g(P))}.$
\end{center} This gives $h_D(g(P)) \leq c^2,$ where $c$ does not depend on $P$ or $n.$ This shows that $\mathcal{O}_{\mathcal{F}}(P)$ is a set of bounded height with respect to an ample height. Since $\mathcal{O}_{\mathcal{F}}(P)$ is contained in $X(K(P))$ and since we have assumed that $K$ is a number field, we conclude that $\mathcal{O}_{\mathcal{F}}(P)$ is finite.
\end{proof}
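The stabilization in part (a) can be seen in a toy numerical model. The dynamics below are entirely made up for illustration (each "map" scales a height by a factor $d_i$ up to bounded noise); with $\beta = d_1+\cdots+d_k$, the normalized level sums $\beta^{-n}\sum_{f\in\mathcal{F}_n}h$ stabilize because the noise contributes $O((k/\beta)^n)$ per level.

```python
import random

# Made-up sample dynamics, not the text's setting.
random.seed(1)
d = [2.0, 3.0]   # per-map scaling factors (k = 2 maps)
beta = sum(d)
h0 = 5.0         # "height" of the starting point

def level_sums(n_max):
    """Normalized sums beta^{-n} * sum over all length-n compositions."""
    sums, heights = [], [h0]
    for n in range(1, n_max + 1):
        heights = [di * h + random.uniform(-0.5, 0.5)
                   for h in heights for di in d]
        sums.append(sum(heights) / beta ** n)
    return sums

s = level_sums(12)
# Successive normalized sums differ by at most 0.5 * (k/beta)^n.
assert abs(s[-1] - s[-2]) < 1e-4
```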
{\bf Remark 6.2:} As pointed out in remark 29 of [12], when $f_1,...,f_k$ are morphisms, there is always a divisor class $D \in$ NS$(X)_{\mathbb{R}}$ such that $\sum^k_{i=1} f^*_iD \equiv \beta D$, where $\beta$ is the spectral radius of the linear map $\sum_{i \leq k} A(f_i)$ on NS$(X)_{\mathbb{R}}$. It remains to check whether it satisfies $\beta >k.\sqrt{\delta_{\mathcal{F}}}$. This works for the non-trivial examples 3.3, 3.4, 3.5 and 3.6, where the above height coincides with the height constructed by Kawaguchi and Silverman.
\section{Introduction}
Let $G$ be a finite group and $\mathbb{F}$ be a field of positive characteristic $p$. One of the main tools for studying the $p$-permutation $\mathbb{F} G$-modules via the Brauer construction has been developed by Brou\'{e} in \cite{MBroue}. For a $p$-subgroup $P$ of $G$, there is a bijection, defined by the Brauer construction, between the set of isomorphism classes of indecomposable $p$-permutation $\mathbb{F} G$-modules with vertex $P$ and the set of isomorphism classes of indecomposable projective $\mathbb{F}[\mathrm{N}_G(P)/P]$-modules. Furthermore, the Green correspondents of such indecomposable $p$-permutation $\mathbb{F} G$-modules with respect to the subgroup $\mathrm{N}_G(P)$ are precisely the inflations of their corresponding indecomposable projective $\mathbb{F}[\mathrm{N}_G(P)/P]$-modules. Suppose further that $\mathbb{F}$ is algebraically closed. The generic Jordan type of a module for an elementary Abelian $p$-group, as defined by Wheeler \cite{WW}, is another useful technique to study $\mathbb{F} G$-modules. For instance, if an indecomposable $\mathbb{F} G$-module $M$ has non-generically free Jordan type upon restriction to an elementary Abelian $p$-subgroup $E$ of $G$, then $E$ is contained in a vertex of $M$.
In this paper, we study the classical objects known as the Young and the Young permutation $\mathbb{F}\sym{n}$-modules. Since they are $p$-permutation modules, their stable generic Jordan types (modulo projectives) restricted to any elementary Abelian $p$-subgroup $E$ of $\sym{n}$ have the form $[1]^r$ for some non-negative integer $r$ depending on $E$. We are interested in these numbers $r$. In Section \ref{S: orbit numbers}, one of our main results shows that the dimension of the Brauer construction $Y^\lambda(E)$ is precisely $r$, where $[1]^r$ is the stable generic Jordan type of $Y^\lambda{\downarrow_E}$, and that $\dim_\mathbb{F} Y^\lambda(P)$, for any $p$-subgroup $P$ of $\sym{n}$, depends only on the orbit type of $P$ on the set $\{1,\ldots,n\}$. As such, we call the numbers $\dim_\mathbb{F} Y^\lambda(P)$ the orbit numbers. For example, when $n=4$, $P=\langle(1,2,3,4)\rangle$ and $Q=\langle (1,2)(3,4),(1,3)(2,4)\rangle$, for any partition $\lambda$ of $4$, we have $\dim_\mathbb{F} Y^\lambda(P)=\dim_\mathbb{F} Y^\lambda(Q)=r$, where $Y^\lambda{\downarrow_E}$ has stable generic Jordan type $[1]^r$. The orbit numbers are interesting in the sense that, when $P$ is a vertex of $Y^\lambda$, following \cite[Theorem 2]{KErdmann}, $\dim_\mathbb{F} Y^\lambda(P)$ is the product of the dimensions of the projective modules $Y^{\lambda(0)},\ldots,Y^{\lambda(s)}$, where $\lambda=\lambda(0)+p\lambda(1)+\cdots+p^s\lambda(s)$ is the $p$-adic expansion of $\lambda$. It is an open problem to find a closed formula for the dimensions of the indecomposable projective modules for the symmetric groups. In Section \ref{S: Some computation}, we obtain some reduction formulae for the orbit numbers. We explicitly calculate the orbit numbers in the case where $\lambda$ is a two-part partition in Section \ref{S: two part}.
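For a cyclic group generated by a single permutation $g$ of order $p$, the Jordan type of $g$ on the permutation module $\mathbb{F}_p[\Omega]$ can be computed directly from the ranks of powers of $N=g-1$. The sketch below illustrates the $[1]^r$ phenomenon in this special case only (it computes the Jordan type of one element, not the generic Jordan type of a higher-rank $E$): the blocks of size $<p$ all have size $1$, and $r$ is the number of fixed points of $g$.

```python
def rank_mod_p(M, p):
    """Rank of an integer matrix M over F_p, by Gaussian elimination."""
    M = [row[:] for row in M]
    rank, rows = 0, len(M)
    cols = len(M[0]) if M else 0
    for c in range(cols):
        piv = next((r for r in range(rank, rows) if M[r][c] % p), None)
        if piv is None:
            continue
        M[rank], M[piv] = M[piv], M[rank]
        inv = pow(M[rank][c] % p, -1, p)
        M[rank] = [(x * inv) % p for x in M[rank]]
        for r in range(rows):
            if r != rank and M[r][c] % p:
                f = M[r][c]
                M[r] = [(x - f * y) % p for x, y in zip(M[r], M[rank])]
        rank += 1
    return rank

def jordan_type(perm, p):
    """Jordan block sizes of the permutation matrix of perm (order p) over F_p."""
    n = len(perm)
    # N = (permutation matrix of perm) - identity
    N = [[(1 if perm[j] == i else 0) - (1 if i == j else 0) for j in range(n)]
         for i in range(n)]

    def matmul(A, B):
        return [[sum(A[i][l] * B[l][j] for l in range(n)) % p
                 for j in range(n)] for i in range(n)]

    ranks, power = [n], [row[:] for row in N]
    for _ in range(p):
        ranks.append(rank_mod_p(power, p))
        power = matmul(power, N)
    # number of blocks of size >= j is rank(N^{j-1}) - rank(N^j)
    blocks = []
    for j in range(1, p + 1):
        at_least_j = ranks[j - 1] - ranks[j]
        at_least_j1 = ranks[j] - ranks[j + 1] if j < p else 0
        blocks += [j] * (at_least_j - at_least_j1)
    return sorted(blocks, reverse=True)

# g = (1 2) acting on 4 points: one block of size 2 and two fixed points,
# so the stable type (discarding blocks of size p = 2) is [1]^2.
print(jordan_type([1, 0, 2, 3], 2))
```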
\begin{comment}
\section{introduction}
Let $G$ be a finite group and $\mathbb{F}$ be an algebraic closed field with prime characteristic $p$. Representation theory of elementary Abelian $p$-subgroup of $G$ acts an important role in understanding $\mathbb{F} G$-modules and $G$ itself as it has a close relation with homological properties of $\mathbb{F} G$-modules and cohomology theory of $G$. One difficulty of studying it lies in how to understand restrictions of $\mathbb{F} G$-modules to elementary Abelian $p$-subgroups of $G$. Let $M$ be a finitely generated $\mathbb{F} G$-module. In \cite{EFJPAS}, a new kind of invariants is endowed to $M$ to encode information of restrictions of $M$ to elementary Abelian $p$-subgroups of $G$. The definition of these invariants is given as follows. Let $k$ be a positive integer and $E$ be an elementary Abelian $p$-subgroup of $G$ with rank $k$ and generator set $\{g_{1}, g_{2},\ldots, g_{k}\}$. Let $\mathbb{K}$ be an extension of $\mathbb{F}$ where indeterminates $\alpha_{1}, \alpha_{2},\ldots, \alpha_{k}$ are included. Let $u_{\alpha}$ be an element of the group algebra $\mathbb{K} E$ with the following form:
\begin{align*}
u_{\alpha}:=1+\displaystyle{\sum_{i=1}^{k}}\alpha_{i}(g_{i}-1).
\end{align*}
Notice that $u_{\alpha}$ has order $p$ which implies that matrix representing $u_{\alpha}$ with respect to a basis of $M$ has no Jordan block with size larger than $p$. Let $n_{i}$ denote the number of Jordan blocks with size $i$ in the Jordan form. The notation $[1]^{n_{1}}[2]^{n_{2}}\cdots [p]^{n_{p}}$ totally describes the Jordan type of the matrix. By \cite{EFJPAS}, the Jordan type is independent of the choice of generator sets of $E$, which means that the notation is a well defined invariant for $M$ with respect to $E$. It is called the generic Jordan type of $M$ with respect to $E$. Another notation $[1]^{n_{1}}[2]^{n_{2}}\cdots [p-1]^{n_{p-1}}$ is termed by the stable generic Jordan type of $M$ with respect to $E$. Its meaning is obvious. Notice that the generic Jordan type of $M$ with respect to $E$ is uniquely determined by corresponding stable generic Jordan type of $M$ with respect to $E$ and $\mathbb{F}$-dimension of $M$. The module $M$ is said to be generically free with respect to $E$ if its stable generic Jordan type with respect to $E$ has no nonzero entries.
Meaningful work has been done in \cite{EFJPAS} and \cite{WW}, where basic properties of generic Jordan types of finite-dimensional $\mathbb{F} G$-modules were established. However, few results are known about generic Jordan types of $p$-permutation modules. The aim of this paper is to study the stable generic Jordan types of $p$-permutation modules. In particular, we mainly focus on Young modules of symmetric groups. It is well known that Young modules are important objects in the representation theory of symmetric groups. Moreover, these modules build a bridge between the representation theory of symmetric groups and that of Schur algebras via the Schur functor. For their definition, let $n$ be a positive integer and let $\mathfrak{S}_{n}$ be the symmetric group on $n$ letters, which acts naturally on the set $[n]:=\{1, 2,\ldots, n\}$. The permutation $\mathbb{F}\mathfrak{S}_{n}$-modules obtained by permuting the cosets of Young subgroups are known as Young permutation modules. The indecomposable $\mathbb{F}\mathfrak{S}_{n}$-modules occurring in direct sum decompositions of Young permutation modules were parametrized by James in \cite{GJ2} using partitions of $n$; they are now called Young modules. Let $\lambda$ be a partition of $n$. The Young module indexed by $\lambda$ is denoted by $Y^{\lambda}$.
Our approach builds on the representation theory of symmetric groups. By applying properties of the Brauer construction of modules, we determine generic Jordan types of $p$-permutation modules. In the case of Young modules, we generalize stable generic Jordan types to orbit numbers by considering the $\mathbb{F}$-dimensions of Brauer constructions of Young modules with respect to elementary Abelian $p$-subgroups, and we study these numbers in order to understand stable generic Jordan types. Along the way, the technique of $p$-Kostka numbers is used repeatedly.
The paper is organised as follows. In Section 2, we present preliminaries on the background and known results, and we introduce the notation needed in the discussion. In Section 3, we determine stable generic Jordan types of $p$-permutation modules and generalize stable generic Jordan types to orbit numbers in the case of Young modules. As a consequence, we obtain a formal formula and two reductive formulae for orbit numbers. Some stable generic Jordan types of Young modules are also calculated. In Section 4, some properties of orbit numbers are deduced and some orbit numbers of Young modules labelled by two-part partitions are calculated explicitly.
\end{comment}
\section{preliminaries}
Throughout the paper $\mathbb{F}$ is an algebraically closed field of positive characteristic $p$. For any finite group $G$, an $\mathbb{F} G$-module is assumed to be a finitely generated left $\mathbb{F} G$-module.
\subsection{Representation theory of finite groups}
For a general background about the modular representation theory of finite groups, we refer readers to \cite{JAlperin} or \cite{HNYT}.
Let $G$ be a finite group and let $M,N$ be two $\mathbb{F} G$-modules. We write $N\mid M$ if $N$ is isomorphic to a direct summand of $M$, i.e., $M\cong N\oplus L$ for some $\mathbb{F} G$-module $L$. Suppose further that $N$ is indecomposable. The number of summands in an indecomposable direct sum decomposition of $M$ that are isomorphic to $N$ is well-defined by Krull-Schmidt Theorem (see \cite[Section 4, Theorem 3]{JAlperin}) and is denoted by $[M: N]$.
Let $M$ be an indecomposable $\mathbb{F} G$-module and let $H$ be a subgroup of $G$. Then $M$ is said to be relatively $H$-projective if there exists some $\mathbb{F} H$-module $N$ such that $M\mid N{\uparrow^{G}}$, where $N{\uparrow^G}$ denotes the induction of $N$ to $G$. By \cite{JGreen}, the minimal (with respect to inclusion) subgroups $H$ of $G$ subject to the condition that $M\mid N{\uparrow^G}$ for some $\mathbb{F} H$-module $N$ are $p$-subgroups and are unique up to $G$-conjugation. These $p$-subgroups of $G$ are called the vertices of $M$. Let $P$ be a vertex of $M$. We denote the normalizer of $P$ in $G$ by $\mathrm{N}_G(P)$. Then there exists, uniquely up to isomorphism and $\mathrm{N}_G(P)$-conjugation, an indecomposable $\mathbb{F} P$-module $S$ such that $M\mid S{\uparrow^{G}}$. Such an $\mathbb{F} P$-module is called a source of $M$.
Let $M$ be an indecomposable $\mathbb{F} G$-module, let $P$ be a vertex of $M$ and let $H$ be a subgroup of $G$ containing $\mathrm{N}_G(P)$. The Green correspondent of $M$ with respect to the subgroup $H$ is the unique indecomposable summand $N$ of $M{\downarrow_H}$ such that $N$ has a vertex $P$.
Let $E=\langle g_1,\ldots,g_k\rangle$ be an elementary Abelian $p$-group of order $p^k$ and $M$ be an $\mathbb{F} E$-module. Let $\mathbb{K}$ be a field extension of $\mathbb{F}$ containing the indeterminates $\alpha_{1}, \alpha_{2},\ldots, \alpha_{k}$. Consider the element
\[u_{\alpha}:=1+\sum_{i=1}^k\alpha_{i}(g_{i}-1)\in \mathbb{K} E.\] Since $\langle u_\alpha\rangle$ is a cyclic group of order $p$, the restriction of $\mathbb{K}\otimes_{\mathbb{F}}M$ to the shifted subgroup $\mathbb{K}\langle u_\alpha\rangle$ is isomorphic to a direct sum of $n_i$ unipotent Jordan blocks of size $i$ for $i=1,2,\ldots, p$. The generic Jordan type of the $\mathbb{F} E$-module $M$ is defined as $[1]^{n_{1}}[2]^{n_{2}}\cdots [p]^{n_{p}}$. By \cite{WW}, the generic Jordan type is independent of the choice of the generators of $E$. The stable generic Jordan type of $M$ is $[1]^{n_{1}}[2]^{n_{2}}\cdots [p-1]^{n_{p-1}}$. The module $M$ is called generically free if $n_i=0$ for all $i=1,2,\ldots, p-1$. The following are the properties that we shall need; we refer readers to \cite{EFJPAS,GLW,WW} for more details.
\begin{lem}\label{L: generic Jordan type}\
\begin{enumerate}[(i)]
\item The generic Jordan type of a direct sum of modules is the direct sum of the generic Jordan types of the modules.
\item Let $E'$ be a proper subgroup of an elementary Abelian $p$-group $E$ and let $M$ be an $\mathbb{F} E'$-module. Then the module $M{\uparrow^{E}}$ is generically free.
\end{enumerate}
\end{lem}
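For instance, the regular module $\mathbb{F} E$ of a nontrivial elementary Abelian $p$-group $E$ is generically free: $\mathbb{F} E$ is induced from the trivial module of the trivial subgroup, which is proper in $E$, so Lemma \ref{L: generic Jordan type}(ii) applies. Since the stable generic Jordan type of $\mathbb{F} E$ is trivial and $\dim_\mathbb{F} \mathbb{F} E=|E|$, the generic Jordan type of $\mathbb{F} E$ is $[p]^{|E|/p}$.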
\subsection{Brauer construction}
One of the main techniques that we shall need is the Brauer construction of $p$-permutation modules introduced by Brou\'{e} in \cite{MBroue}. An $\mathbb{F} G$-module is called a $p$-permutation module if for any $p$-subgroup $P$ of $G$ there exists a basis $\mathcal{B}$ that is permuted by $P$, i.e., for each $g\in P$ and $b\in \mathcal{B}$, we have $gb\in\mathcal{B}$. By \cite[(0.4)]{MBroue}, the indecomposable $p$-permutation modules are precisely the modules with trivial source. The class of all $p$-permutation $\mathbb{F} G$-modules is closed under taking finite direct sums, direct summands and tensor products.
We recall the Brauer construction of a module. Let $M$ be an $\mathbb{F} G$-module and $P$ be a $p$-subgroup of $G$. The set of $P$-fixed points in $M$ is \[M^{P}:=\{m\in M:\text{$gm=m$ for all $g\in P$}\}.\] Notice that $M^{P}$ is an $\mathbb{F}\mathrm{N}_{G}(P)$-module on which $P$ acts trivially. Let $Q$ be a proper subgroup of $P$. The relative trace map $\mathrm{Tr}_{Q}^{P}$: $M^{Q}\rightarrow M^{P}$ is the linear map defined by \[\mathrm{Tr}_{Q}^{P}(m):=\sum_{g\in P/Q}gm,\] where $P/Q$ denotes a set of left coset representatives of $Q$ in $P$ and $m\in M^{Q}$. Observe that $\mathrm{Tr}_{Q}^{P}(m)$ is independent of the choice of the set of left coset representatives. Furthermore \[\mathrm{Tr}^{P}(M):=\sum \mathrm{Tr}_{Q}^{P}(M^Q),\] where $Q$ runs over the set of all proper subgroups of $P$, is an $\mathbb{F} \mathrm{N}_{G}(P)$-submodule of $M^{P}$. One defines the Brauer construction of $M$ with respect to $P$ to be the $\mathbb{F} [\mathrm{N}_{G}(P)/P]$-module \[M(P):=M^{P}/\mathrm{Tr}^{P}(M).\] In general, if $M$ is indecomposable and $M(P)\neq 0$ then $P$ is contained in a vertex of $M$. The converse is true in the case of $p$-permutation modules.
\begin{thm}[{\cite[Theorem 3.2 (1)]{MBroue}}]\label{Contain}
Let $M$ be an indecomposable $p$-permutation $\mathbb{F} G$-module, let $Q$ be a vertex of $M$ and let $P$ be a $p$-subgroup of $G$. Then $M(P)\neq 0$ if and only if $P$ is contained in a $G$-conjugate of $Q$.
\end{thm}
Suppose further that $M$ is a $p$-permutation $\mathbb{F} G$-module. Let $\mathcal{B}$ be a basis of $M$ permuted by $P$ and let
\begin{align*}
\mathcal{B}(P):=\{b\in \mathcal{B}:\text{$gb=b$ for all $g\in P$}\}.
\end{align*} Notice that $P$ acts trivially on $\mathcal{B}(P)$. As a corollary of Theorem \ref{Contain}, we have the following.
\begin{cor}\label{C: EG}
Let $M$ be a $p$-permutation $\mathbb{F} G$-module and $\mathcal{B}$ be a $p$-permutation basis of $M$ with respect to a $p$-subgroup $P$ of $G$. Then $M(P)$ is isomorphic to the $\mathbb{F}$-span of $\mathcal{B}(P)$ as $\mathbb{F} [\mathrm{N}_{G}(P)/P]$-modules. Furthermore, if $M\cong U\oplus V$ then \[M(P)\cong U(P)\oplus V(P).\]
\end{cor}
We end this subsection with the following well-known result of Brou\'{e}.
\begin{thm}[{\cite[Theorems 3.2 and 3.4]{MBroue}}]\label{BC1} Let $G$ be a finite group and let $P$ be a $p$-subgroup of $G$.
\begin{enumerate}
\item [(i)] The Brauer construction sending $M$ to $M(P)$ is a bijection between the isomorphism classes of indecomposable $p$-permutation $\mathbb{F} G$-modules with vertex $P$ and the isomorphism classes of indecomposable projective $\mathbb{F}[\mathrm{N}_G(P)/P]$-modules. Furthermore, the inflation $\mathrm{Inf}_{\mathrm{N}_G(P)/P}^{\mathrm{N}_G(P)}M(P)$ of the $\mathbb{F}[\mathrm{N}_G(P)/P]$-module $M(P)$ to $\mathrm{N}_G(P)$ is the Green correspondent of~$M$ with respect to $\mathrm{N}_G(P)$.
\item [(ii)] Let $N$ be an indecomposable $\mathbb{F} G$-module with a vertex $P$ and $M$ be a $p$-permutation $\mathbb{F} G$-module. Then $N$ is a direct summand of $M$ if and only if $N(P)$ is a direct summand of $M(P)$. Moreover, \[[M:N]=[M(P):N(P)].\]
\end{enumerate}
\end{thm}
\subsection{Composition, partition and orbit}\label{SS: composition} Let $\mathbb{N}$ be the set of nonnegative integers and let $n\in\mathbb{N}$. By a composition $\alpha$ of $n$, we mean a sequence of nonnegative integers $(\alpha_1,\ldots,\alpha_r)$ such that $\sum^r_{i=1}\alpha_i=n$. In this case, we write $|\alpha|=n$. By convention, the unique composition of $0$ is denoted as $\varnothing$. The composition $\alpha$ is called a partition if $\alpha_1\geq\cdots\geq\alpha_r$. We write $\mathscr{C}(n)$ and $\P(n)$ for the set of compositions and partitions of $n$ respectively. The set $\P(n)$ is partially ordered by the dominance order $\unrhd$ and totally ordered by the lexicographic order. Notice that the lexicographic order refines the dominance order.
Let $\alpha=(\alpha_1,\ldots,\alpha_r)$ and $\beta=(\beta_1,\ldots,\beta_s)$ be two compositions and let $m$ be a positive integer. We write \begin{align*}
\alpha+\beta=\beta+\alpha&=(\alpha_1+\beta_1,\ldots,\alpha_r+\beta_r,\beta_{r+1},\ldots,\beta_s),\\
\alpha\bullet \beta&=(\alpha_1,\ldots,\alpha_r,\beta_1,\ldots,\beta_s),\\
m\alpha&=(m\alpha_1,\ldots,m\alpha_r),
\end{align*} if $r\leq s$. A composition $\delta$ is a refinement of $\beta$ if there exist compositions $\delta^{(1)},\ldots,\delta^{(s)}$ such that $\delta=\delta^{(1)}\bullet\cdots\bullet\delta^{(s)}$ and, for $i=1,2,\ldots, s$, we have $|\delta^{(i)}|=\beta_i$.
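For instance, $\delta=(2,1,3,1)$ is a refinement of $\beta=(3,4)$: take $\delta^{(1)}=(2,1)$ and $\delta^{(2)}=(3,1)$, so that $\delta=\delta^{(1)}\bullet\delta^{(2)}$ with $|\delta^{(1)}|=3=\beta_1$ and $|\delta^{(2)}|=4=\beta_2$.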
A partition $\lambda=(\lambda_1,\ldots,\lambda_r)$ is called $p$-restricted if $\lambda_r<p$ and $\lambda_i-\lambda_{i+1}<p$ for all $i=1,2,\ldots,r-1$. We write $\mathscr{RP}_p(n)$ for the set of all $p$-restricted partitions of $n$. The $p$-adic expansion of a partition $\lambda$ is the sum \[\lambda=\sum^t_{i=0}p^{i}\lambda(i)\] for some nonnegative integer $t$ such that, for each $i=0,1,\ldots,t$, $\lambda(i)$ is a $p$-restricted partition. By the proof of \cite[Lemma 7.5]{GJR}, there is a way to write down the $p$-adic expansion of $\lambda$ as follows. Let $\lambda_{r+1}=0$. Suppose that, for each $j=1,2,\ldots, r$, we have the $p$-adic expansion of the number \[\lambda_j-\lambda_{j+1}=\sum_{i= 0}^{t}a_{i,j}p^i,\] i.e., $0\leq a_{i,j}\leq p-1$. Then, for each $i=0,1,\ldots,t$, $\lambda(i)$ is the desired $p$-restricted partition where $\lambda(i)_k=\sum^r_{j=k}a_{i,j}$.
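The procedure just described is entirely algorithmic. As an illustration only, and outside the formal development of the paper, the following Python sketch computes the partitions $\lambda(i)$ from $\lambda$ and $p$ (the function name \texttt{p\_adic\_expansion} is ours):

```python
def p_adic_expansion(lam, p):
    """Return [lam(0), lam(1), ...] with lam = sum_i p^i * lam(i),
    each lam(i) a p-restricted partition (given as a tuple)."""
    lam = list(lam) + [0]                    # append lambda_{r+1} = 0
    r = len(lam) - 1
    # base-p digits a_{i,j} of the differences lambda_j - lambda_{j+1}
    digits = []
    for j in range(r):
        d, row = lam[j] - lam[j + 1], []
        while d:
            row.append(d % p)
            d //= p
        digits.append(row)
    t = max((len(row) for row in digits), default=0)
    expansion = []
    for i in range(t):
        # lam(i)_k = sum_{j >= k} a_{i,j}
        a = [row[i] if i < len(row) else 0 for row in digits]
        part = [sum(a[k:]) for k in range(r)]
        expansion.append(tuple(x for x in part if x > 0))
    return expansion
```

For instance, for $p=2$ and $\lambda=(4,3,1)$ it returns $[(2,1,1),(1,1)]$, i.e., the $2$-adic expansion $(4,3,1)=(2,1,1)+2\,(1,1)$.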
For any partition $\lambda=(\lambda_1,\ldots,\lambda_r)$, we denote by $[\lambda]$ the set $\{(i,j)\in \mathbb{N}^{2}:\ 1\leq i\leq r,\ 1\leq j\leq \lambda_{i}\}$. It is called the Young diagram of $\lambda$. The $p$-core of $\lambda$ is the partition whose Young diagram is obtained by removing all possible rim $p$-hooks from $[\lambda]$ and is denoted by $\kappa_{p}(\lambda)$. The number of rim $p$-hooks removed from $[\lambda]$ to get $\kappa_{p}(\lambda)$ is called the $p$-weight of $\lambda$.
Let $A$ be a finite set. The symmetric group on the set $A$ is denoted by $\sym{A}$. Let $n\in\mathbb{N}$. We denote the set $\{1,\ldots,n\}$ by $\set{n}$ and let $\sym{n}=\sym{\set{n}}$. By convention, $\set{0}=\emptyset$ and $\sym{0}$ is the trivial group. Let $\lambda=(\lambda_1,\ldots,\lambda_r)$ be a composition. The Young subgroup $\sym{\lambda}$ is identified with the direct product \[\sym{\lambda_1}\times\cdots\times\sym{\lambda_r},\] where the first factor $\sym{\lambda_1}$ acts on the set $\{1,\ldots,\lambda_1\}$, the second factor $\sym{\lambda_2}$ acts on the set $\{1+\lambda_1,\ldots,\lambda_1+\lambda_2\}$ and so on.
We now discuss the orbits of $p$-subgroups of $\sym{n}$ on the set $\set{n}$. Let \[\P^{(p)}(n)=\{\O\in \mathscr{C}(n): \text{$\O=(1^{a_{0}}, p^{a_{1}},\ldots, (p^{r})^{a_{r}})$ for some $r$}\},\] where, in $\O$, the first $a_0$ entries are $1$, the next $a_1$ entries are $p$ and so on. For each $\O=(1^{a_{0}}, p^{a_{1}},\ldots, (p^{r})^{a_{r}})\in\P^{(p)}(n)$, let \[p^s\O=((p^s)^{a_0},(p^{s+1})^{a_1},\ldots,(p^{s+r})^{a_r})\in\P^{(p)}(p^sn).\] Let $P$ be a $p$-subgroup of $\sym{n}$. We denote the set of orbits of the action of $P$ on $\set{n}$ by $\orbit{\set{n}}{P}$. We say that $\orbit{\set{n}}{P}$ has type $\O=(1^{a_0},p^{a_1},\ldots,(p^r)^{a_r})\in\P^{(p)}(n)$ if, for each $i=0,1,\ldots,r$, the number of orbits of size $p^i$ in $\orbit{\set{n}}{P}$ is exactly $a_i$. We write $\orbit{\set{n}}{P}\simeq \orbit{\set{n}}{Q}$ if $Q$ is another $p$-subgroup of $\sym{n}$ such that $\orbit{\set{n}}{P}$ and $\orbit{\set{n}}{Q}$ have the same type, i.e., there is a permutation $\sigma\in\sym{n}$ such that $\sigma A\in \orbit{\set{n}}{Q}$ for all $A\in \orbit{\set{n}}{P}$. It is clear that $\orbit{\set{n}}{P}\simeq \orbit{\set{n}}{Q}$ if $P$ is conjugate to $Q$ in $\sym{n}$.
Let $\lambda$ be a partition of $n$, let $\sum_{i=0}^{r}p^{i}\lambda(i)$ be the $p$-adic expansion of $\lambda$ and let \[\O_\lambda:=(1^{|\lambda(0)|}, p^{|\lambda(1)|},\ldots, (p^{r})^{|\lambda(r)|})\in\P^{(p)}(n).\] We fix a Sylow $p$-subgroup of $\sym{\O_\lambda}$ and denote it by $P_\lambda$. Notice that, since any Sylow $p$-subgroup of $\sym{p^i}$ acts transitively on the set $\set{p^i}$, we have that $\orbit{\set{n}}{P_\lambda}$ has type $\O_\lambda$. We end this subsection with the following lemma.
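For example, let $p=2$ and $\lambda=(4,3,1)$. Then $\lambda=\lambda(0)+2\lambda(1)$ with $\lambda(0)=(2,1,1)$ and $\lambda(1)=(1,1)$, so $\O_\lambda=(1^{4},2^{2})\in\P^{(2)}(8)$, and $P_\lambda$ is a Sylow $2$-subgroup of $\sym{\O_\lambda}$, for instance the subgroup $\langle (5\ 6),(7\ 8)\rangle$ of $\sym{8}$.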
\begin{lem}\label{L: minimal subgroup}\
Let $\lambda\in\P(n)$, $\O\in\P^{(p)}(n)$ and $P$ be a $p$-subgroup of $\sym{n}$ such that $\orbit{\set{n}}{P}$ has type $\O$. Then $\O$ is a rearrangement of a refinement of $\O_\lambda$ if and only if $P$ is conjugate to a $p$-subgroup of $P_\lambda$.
\end{lem}
\begin{proof} Suppose that $\O$ is a rearrangement of a refinement of $\O_\lambda$. Without loss of generality, since $\orbit{\set{n}}{P_\lambda}$ has type $\O_\lambda$, we may assume that each orbit in $\orbit{\set{n}}{P_\lambda}$ is a union of some orbits in $\orbit{\set{n}}{P}$. Let $\sigma\in P$. Then $\sigma$ leaves each orbit $A$ in $\orbit{\set{n}}{P}$ invariant, i.e., $\sigma(A)=A$. Therefore, $\sigma$ leaves each orbit in $\orbit{\set{n}}{P_\lambda}$ invariant. This shows that $\sigma\in\sym{\O_\lambda}$ and hence $P\subseteq \sym{\O_\lambda}$. Since $P$ is a $p$-subgroup of $\sym{\O_\lambda}$, it is contained in some Sylow $p$-subgroup of $\sym{\O_\lambda}$, which is conjugate to $P_\lambda$ by Sylow's theorem. We conclude that $P$ is conjugate to a subgroup of $P_\lambda$. Conversely, suppose, without loss of generality, that $P$ is a subgroup of $P_\lambda$. Then each orbit in $\orbit{\set{n}}{P_\lambda}$ is a union of some orbits in $\orbit{\set{n}}{P}$. By definition, the type $\O$ of $\orbit{\set{n}}{P}$ is a rearrangement of a refinement of the type $\O_\lambda$ of $\orbit{\set{n}}{P_\lambda}$.
\end{proof}
\subsection{Representation theory of symmetric groups}\label{SS: sym} We now turn to the representation theory of symmetric groups. For a general background on this topic, we refer readers to \cite{GJ1} or \cite{GJ3}.
For modules of symmetric groups, we assume that readers are familiar with the notions of tableaux, tabloids and polytabloids. Let $\mathbb{F}(n)$ be the trivial $\mathbb{F}\sym{n}$-module. For a composition $\lambda$ of $n$, we use $\mathbb{F}(\lambda)$ to denote the restriction of $\mathbb{F}(n)$ to the Young subgroup $\sym{\lambda}$. The Young permutation module $M^{\lambda}$ with respect to $\lambda$ is the induced module $\mathbb{F}(\lambda){\uparrow^{\sym{n}}}$. It has a basis consisting of all $\lambda$-tabloids. Notice that $M^\lambda\cong M^\mu$ if $\mu$ is a rearrangement of $\lambda$. Suppose now that $\lambda$ is a partition. The Specht module $S^\lambda$ is the submodule of $M^\lambda$ spanned by the $\lambda$-polytabloids. It has a basis given by the standard $\lambda$-polytabloids and its dimension is given by the hook formula. In characteristic zero, the Specht modules are exactly the irreducible $\mathbb{F}\sym{n}$-modules. However, they are usually not irreducible when $p$ is positive. By the Nakayama conjecture, two Specht modules $S^\lambda,S^\mu$ for $\mathbb{F}\sym{n}$ lie in the same block if and only if $\kappa_p(\lambda)=\kappa_p(\mu)$.
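For example, for $\lambda=(2,1)$ the hook lengths of the cells of $[\lambda]$ are $3,1,1$, so the hook formula gives $\dim_\mathbb{F} S^{(2,1)}=3!/(3\cdot 1\cdot 1)=2$, with the two standard $(2,1)$-polytabloids forming a basis.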
The isomorphism classes of indecomposable direct summands of Young permutation modules are called the Young modules and they are parametrized by $\P(n)$ such that the Young module $Y^\lambda$ is a direct summand of $M^\lambda$ with multiplicity one and, if $Y^\mu\mid M^\lambda$, then $\mu\trianglerighteq \lambda$ (see \cite[Theorem 3.1]{GJ2}), i.e., \[M^{\lambda}\cong Y^{\lambda}\oplus\bigoplus_{\mu\vartriangleright\lambda}k_{\lambda,\mu}Y^{\mu},\] where $k_{\lambda,\mu}=[M^{\lambda}: Y^{\mu}]$ are known as the $p$-Kostka numbers. Using the lexicographic order of $\P(n)$, we denote the $p$-Kostka matrix for $\mathbb{F}\sym{n}$ by $\mathrm{K}$ whose $(\lambda,\mu)$-entry is $k_{\lambda,\mu}$. Notice that $\mathrm{K}$ is upper uni-triangular.
We recall the following reductive formulae for $p$-Kostka numbers proved by Gill.
\begin{thm}[{see \cite[Theorems 13 and 14]{CGill}}]\label{T: Reductive}
Let $\lambda, \mu\in\P(m)$ and $\nu, \delta\in\P(n)$. We have the following statements.
\begin{enumerate}[(i)]
\item Let $\lambda_{1}$ be the first part of $\lambda$ and $\sum_{i=0}^{t}p^{i}\mu(i)$ be the $p$-adic expansion of $\mu$. If $s>t$ and $p^{s}>\lambda_{1}$ then $k_{\lambda+p^{s}\nu,\mu+p^{s}\delta}=k_{\lambda,\mu}k_{\nu,\delta}$.
\item Let $\lambda_{2}$ be the second part of $\lambda$. If $\lambda_{2}<p^{s}$ then $k_{\lambda+(p^{s}r),\mu+(p^{s}r)}=k_{\lambda,\mu}$ for every $r\in \mathbb{N}$.
\end{enumerate}
\end{thm}
The regular module $\mathbb{F}\sym{n}$ is the Young permutation module $M^{(1^n)}$. Therefore the projective indecomposable $\mathbb{F}\sym{n}$-modules are Young modules. In fact, the Young module $Y^\lambda$ is projective if and only if $\lambda\in\mathscr{RP}_p(n)$. Let $\lambda\in\mathscr{RP}_p(n)$ and let $\mathrm{sgn}(n)$ be the signature representation of $\mathbb{F}\sym{n}$. Since $Y^\lambda\otimes\mathrm{sgn}(n)$ is also projective and indecomposable, we have \[Y^\lambda\otimes\mathrm{sgn}(n)\cong Y^{\mathbf{m}(\lambda)}\] for some unique partition $\mathbf{m}(\lambda)\in\mathscr{RP}_p(n)$. The map $\mathbf{m}:\mathscr{RP}_p(n)\to\mathscr{RP}_p(n)$ is called the Mullineux map (on $p$-restricted partitions) and is an involution. The $p$-regular version of the Mullineux map was conjectured by Mullineux in \cite{GMullineux} and proved by Ford and Kleshchev in \cite{FordKleshchev}. In \cite{BrKu}, Brundan and Kujawa proved the $p$-restricted version.
We now discuss the Brauer constructions of Young permutation modules and Young modules as in \cite{KErdmann}.
For each $\lambda\in\P(n)$ and $\O\in\P^{(p)}(n)$, let $P$ be a $p$-subgroup of $\sym{n}$ such that $\orbit{\set{n}}{P}$ has type $\O$. Let $M_{\lambda,P}$ be the set of all $\lambda$-tabloids $\t$ such that each row of $\t$ is a union of some orbits in $\orbit{\set{n}}{P}$. Notice that if $Q$ is another $p$-subgroup of $\sym{n}$ such that $\orbit{\set{n}}{Q}\simeq \orbit{\set{n}}{P}$ then $|M_{\lambda,P}|=|M_{\lambda,Q}|$. We write \[m_{\lambda,\O}=|M_{\lambda,P}|.\]
For each $\lambda\in\P(n)$, the Young permutation module $M^\lambda$ has a basis consisting of the $\lambda$-tabloids, which is permuted by $\sym{n}$ and hence by any $p$-subgroup of $\sym{n}$; therefore Young permutation modules, and thus Young modules, are $p$-permutation $\mathbb{F} \sym{n}$-modules. Let $P$ be a $p$-subgroup of $\sym{n}$. Notice that a $\lambda$-tabloid $\t$ is fixed by $P$ if and only if every orbit in $\set{n}/P$ lies in a row of $\t$. By Corollary \ref{C: EG}, we have the following lemma.
\begin{lem}\label{L: dim of M(P)} Let $\lambda\in\P(n)$, let $P$ be a $p$-subgroup of $\sym{n}$ and suppose that $\orbit{\set{n}}{P}$ has type $\O$. Then \[\dim_\mathbb{F} M^\lambda(P)=m_{\lambda,\O},\] i.e., $\dim_\mathbb{F} M^\lambda(P)$ is the number of (unordered) ways to insert the orbits in $\orbit{\set{n}}{P}$ into the rows of $\lambda$. In particular, we have $\dim_\mathbb{F} M^\lambda(P)=\dim_\mathbb{F} M^\lambda(Q)$ if $\orbit{\set{n}}{P}\simeq \orbit{\set{n}}{Q}$.
\end{lem}
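To illustrate Lemma \ref{L: dim of M(P)}, let $p=2$, $\lambda=(2,2)$ and $P=\langle (3\ 4)\rangle\leq \sym{4}$, so that $\orbit{\set{4}}{P}$ consists of the orbits $\{1\},\{2\},\{3,4\}$ and has type $\O=(1^{2},2)$. A $(2,2)$-tabloid whose rows are unions of orbits either has $\{3,4\}$ as its first row and $\{1,2\}$ as its second, or vice versa; hence $\dim_\mathbb{F} M^{(2,2)}(P)=m_{(2,2),\O}=2$.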
The precise structure of the Brauer construction $M^\lambda(P)$ when $\lambda\in\P(n)$, $P$ is a Sylow $p$-subgroup of $\sym{\O}$ and $\O\in\P^{(p)}(n)$ is given in \cite[Proposition 1]{KErdmann} but we shall not need it here.
Suppose that a normal subgroup $N$ of $G$ acts trivially on an $\mathbb{F} G$-module $M$. We write $\mathrm{Def}^G_{G/N}M$ for the deflation of $M$ to the quotient group $G/N$.
We now describe the vertices of Young modules and their Brauer constructions with respect to the vertices.
\begin{thm}[{\cite{GJR,KErdmann}}]\label{T: GJR} Let $\lambda\in\P(n)$. Then $Y^{\lambda}$ is relatively $\sym{\O_\lambda}$-projective. If $Y^{\lambda}$ is also relatively $H$-projective for some Young subgroup $H$ then $\sym{\O_\lambda}$ is $\sym{n}$-conjugate to a subgroup of $H$. Furthermore, $Y^\lambda$ has a vertex $P_\lambda$.
\end{thm}
\begin{thm}[{\cite{KErdmann}}]\label{Erdmann2} Let $\sum_{i=0}^rp^i\lambda(i)$ be the $p$-adic expansion of $\lambda\in\P(n)$ and let $\beta=(a_0,a_1,\ldots,a_r)$ where $a_i=|\lambda(i)|$ for each $i=0,1,\ldots,r$. Then $\mathrm{N}_{\sym{\O_\lambda}}(P_\lambda)/P_\lambda$ acts trivially on $Y^\lambda(P_\lambda)$ and \[\mathrm{Def}^{\mathrm{N}_{\sym{n}}(P_\lambda)/P_\lambda}_{\mathrm{N}_{\sym{n}}(P_\lambda)/\mathrm{N}_{\sym{\O_\lambda}}(P_\lambda)}Y^{\lambda}(P_\lambda)\cong Y^{\lambda(0)}\boxtimes Y^{\lambda(1)}\boxtimes\cdots\boxtimes Y^{\lambda(r)}\] as $\mathbb{F} \sym{\beta}$-modules via the canonical isomorphism \[\sym{\beta}\cong \mathrm{N}_{\sym{n}}(P_\lambda)/\mathrm{N}_{\sym{\O_\lambda}}(P_\lambda)\cong (\mathrm{N}_{\sym{n}}(P_\lambda)/P_\lambda)/(\mathrm{N}_{\sym{\O_\lambda}}(P_\lambda)/P_\lambda).\]
\end{thm}
\section{orbit numbers}\label{S: orbit numbers} In this section, we define the orbit numbers (see Definition \ref{D: orbit number}) labelled by $\P(n)\times \P^{(p)}(n)$. These numbers can be defined equivalently either as the dimensions of the Brauer constructions of Young modules with respect to $p$-subgroups, or as the nonnegative integers $m$ such that $[1]^m$ is the stable generic Jordan type of a Young module restricted to a suitable elementary Abelian $p$-subgroup.
We begin with the following lemma.
\begin{lem}\label{L: sgjt dim} Let $M$ be a $p$-permutation $\mathbb{F} G$-module and $E$ be an elementary Abelian $p$-subgroup of $G$. Then the stable generic Jordan type of $M{\downarrow_{E}}$ is $[1]^r$ where $r=\dim_{\mathbb{F}}M(E)$.
\end{lem}
\begin{proof}
Let $\mathcal{B}$ be a $p$-permutation basis of $M$ with respect to $E$ and suppose that $A_1,\ldots,A_r,B_1,\ldots,B_s$ are the orbits of the action of $E$ on $\mathcal{B}$, where $|A_i|=1$ for $i=1,2,\ldots, r$ and $|B_j|>1$ for $j=1,\ldots,s$. Then \[M{\downarrow_{E}}\cong \left (\bigoplus_{i=1}^r\mathbb{F}_E\right )\oplus\left (\bigoplus_{j=1}^{s} \mathbb{F}_{H_j}{\uparrow^{E}}\right )\] as $\mathbb{F} E$-modules, where $H_j$ is the stabiliser of some $b_j\in B_j$ for $j=1,2,\ldots, s$ and $\mathbb{F}_{H_j},\mathbb{F}_E$ are the trivial modules for $\mathbb{F} H_j$ and $\mathbb{F} E$ respectively. Since each $H_j$ is a proper subgroup of $E$, by Lemma \ref{L: generic Jordan type}, $M{\downarrow_E}$ has stable generic Jordan type $[1]^r$. By Corollary \ref{C: EG}, $\dim_\mathbb{F} M(E)=r$. The result now follows.
\end{proof}
In the case of the Young permutation module $M^\lambda$, Lemmas \ref{L: dim of M(P)} and \ref{L: sgjt dim} assert that the stable generic Jordan type of $M^\lambda{\downarrow_E}$ is $[1]^r$ where \[r=\dim_\mathbb{F} M^\lambda(E)=m_{\lambda,\O}\] and $\orbit{\set{n}}{E}$ has type $\O$.
\begin{comment}
\begin{lem}\label{YPM}
Let $\lambda\in\P(n)$. For $s\in\mathbb{N}$, let $\mu\in\P(p^sn)$ and $P,Q$ be $p$-subgroups of $\sym{n},\sym{p^sn}$ respectively. Suppose that $\orbit{\set{n}}{P}$ and $\orbit{\set{p^sn}}{Q}$ have types $\O$ and $p^s\O$ respectively. Then $\dim_{\mathbb{F}}M^{p^s\lambda}(Q)=\dim_{\mathbb{F}}M^{\lambda}(P)$.
\end{lem}
\begin{proof} It is clear that there exists a bijection between $\orbit{\set{n}}{P}$ and $\orbit{\set{p^sn}}{Q}$ mapping an orbit of size $p^i$ to an orbit of size $p^{s+i}$. The bijection induces a bijection between the sets $M_{\lambda,\O},M_{p^s\lambda,p^s\O}$. The proof is now complete using Lemma \ref{L: dim of M(P)}.
\end{proof}
\end{comment}
We now prove the main result of this section.
\begin{thm} \label{T: Yinvarant}
Let $\lambda\in\P(n)$ and let $P,E$ be $p$-subgroups of $\sym{n}$ such that $\orbit{\set{n}}{P}\simeq \orbit{\set{n}}{E}$. Then \[\dim_\mathbb{F} Y^\lambda(P)=\dim_\mathbb{F} Y^\lambda(E).\] Suppose further that $E$ is elementary Abelian. Then we have $\dim_\mathbb{F} Y^\lambda(E)=m$ where the stable generic Jordan type of $Y^{\lambda}{\downarrow_{E}}$ is $[1]^m$.
\end{thm}
\begin{proof}
We prove that $\dim_{\mathbb{F}}Y^{\lambda}(P)=\dim_{\mathbb{F}}Y^{\lambda}(E)$ by induction on the dominance order of $\P(n)$. In the base case, since $Y^{(n)}\cong M^{(n)}$ is the trivial $\mathbb{F}\sym{n}$-module, we have $\dim_{\mathbb{F}}Y^{(n)}(P)=\dim_{\mathbb{F}}Y^{(n)}(E)=1$ by Lemma \ref{L: dim of M(P)}. Suppose that $\dim_\mathbb{F} Y^\mu(P)=\dim_\mathbb{F} Y^\mu(E)$ for all $\mu\rhd\lambda$. By Corollary \ref{C: EG}, we have
\begin{align*}
M^{\lambda}(P)&\cong Y^{\lambda}(P)\oplus\displaystyle{\bigoplus_{\mu\rhd\lambda}}k_{\lambda,\mu}Y^{\mu}(P),\\
M^{\lambda}(E)&\cong Y^{\lambda}(E)\oplus\displaystyle{\bigoplus_{\mu\rhd\lambda}}k_{\lambda,\mu}Y^{\mu}(E).
\end{align*} Comparing dimensions in the above decompositions and using Lemma \ref{L: dim of M(P)} together with the inductive hypothesis, we obtain $\dim_\mathbb{F} Y^{\lambda}(P)=\dim_\mathbb{F} Y^\lambda(E)$.
Suppose now that $E$ is elementary Abelian. Since Young modules, being direct summands of Young permutation modules, are $p$-permutation modules, the second assertion follows from Lemma \ref{L: sgjt dim}.
\end{proof}
In view of Theorem \ref{T: Yinvarant}, we can now define the orbit numbers.
\begin{defn}\label{D: orbit number}
Let $\lambda\in\P(n)$, $\O\in\P^{(p)}(n)$ and let $P,E$ be $p$-subgroups of $\sym{n}$ such that both $\orbit{\set{n}}{P}$ and $\orbit{\set{n}}{E}$ have type $\O$ and $E$ is elementary Abelian. The orbit number $y_{\lambda,\O}$ is defined to be the common value \[y_{\lambda,\O}:=\dim_\mathbb{F} Y^\lambda(P)=b,\] where $[1]^b$ is the stable generic Jordan type of $Y^\lambda{\downarrow_E}$.
\end{defn}
Fix $n\in \mathbb{N}$. Let both $\P(n)$ and $\P^{(p)}(n)$ be ordered by the lexicographic order. Let $\mathrm{Y},\mathrm{M}$ be the $\P(n)\times\P^{(p)}(n)$-matrices whose $(\lambda,\O)$-entries are the orbit number $y_{\lambda,\O}$ and the number $m_{\lambda,\O}$ respectively. By Theorem \ref{BC1}(ii), we have \begin{equation}\label{Eq: 1} m_{\lambda,\O}=y_{\lambda,\O}+\sum_{\mu\rhd \lambda}k_{\lambda,\mu}y_{\mu,\O},
\end{equation} or equivalently, $\mathrm{M}=\mathrm{K}\mathrm{Y}$ where, recall that, $k_{\lambda,\mu}=[M^\lambda:Y^\mu]$ and $\mathrm{K}$ is the $p$-Kostka matrix of $\mathbb{F}\sym{n}$. The $(\lambda,\O)$-entry $m_{\lambda,\O}$ of $\mathrm{M}$ has a combinatorial description given by Lemma \ref{L: dim of M(P)}. Suppose further that $\O=(1^{a_{0}}, p^{a_{1}},\ldots, (p^{r})^{a_{r}})$ and let $\Lambda(\lambda,\O)$ be the set consisting of tuples of compositions $\alpha=(\alpha^{(0)}, \alpha^{(1)},\ldots, \alpha^{(r)})$ such that $\lambda=\sum_{i=0}^{r}p^i\alpha^{(i)}$ (not necessarily the $p$-adic expansion of $\lambda$) and $|\alpha^{(i)}|=a_i$ for all $i=0,1,\ldots, r$. It is easy to see that the number $m_{\lambda,\O}$ can be described as \[m_{\lambda,\O}=\sum_{\alpha\in\Lambda(\lambda,\O)}\prod_{i=0}^r \dim_\mathbb{F} M^{\alpha^{(i)}}.\]
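The combinatorial description of $m_{\lambda,\O}$ can be evaluated mechanically. The following Python sketch (again an illustration only; the helper names are ours) enumerates the tuples in $\Lambda(\lambda,\O)$ and uses that $\dim_\mathbb{F} M^{\alpha}$ is the multinomial coefficient $|\alpha|!/(\alpha_1!\cdots\alpha_r!)$:

```python
from itertools import product
from math import factorial, prod

def multinomial(comp):
    # dim M^alpha = |alpha|! / (alpha_1! ... alpha_r!)
    return factorial(sum(comp)) // prod(factorial(c) for c in comp)

def compositions(n, r):
    # all compositions of n into exactly r nonnegative parts
    if r == 0:
        return [()] if n == 0 else []
    return [(k,) + rest for k in range(n + 1)
            for rest in compositions(n - k, r - 1)]

def m_number(lam, orbit_type):
    # m_{lam,O}: the type O is given as the list of orbit sizes, e.g. [1,1,2]
    r = len(lam)
    sizes = sorted(set(orbit_type))
    counts = [orbit_type.count(s) for s in sizes]
    total = 0
    # alpha^{(i)} records how many orbits of size p^i go to each row
    for alphas in product(*(compositions(a, r) for a in counts)):
        if all(sum(s * al[k] for s, al in zip(sizes, alphas)) == lam[k]
               for k in range(r)):
            total += prod(multinomial(al) for al in alphas)
    return total
```

For example, it returns $m_{(2,2),(1^{4})}=6=\dim_\mathbb{F} M^{(2,2)}$, as it must, since every orbit is then a singleton.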
To end this section, we give characterisations of when an orbit number is nonzero. The following lemma is straightforward from Theorem \ref{Erdmann2}.
\begin{lem}\label{L: prod of dims} Let $\sum^s_{i=0}p^i\lambda(i)$ be the $p$-adic expansion of $\lambda\in\P(n)$. Then \[y_{\lambda,\O_\lambda}=\prod_{i=0}^s\dim_\mathbb{F} Y^{\lambda(i)}\neq 0.\]
\end{lem}
For example, if exactly one partition $\lambda(\ell)$ in the $p$-adic expansion of $\lambda$ has more than one part, then every other $\lambda(i)$ is either empty or has a single part, so $Y^{\lambda(i)}$ is trivial of dimension one and \[y_{\lambda,\O_\lambda}=\dim_\mathbb{F} Y^{\lambda(\ell)}.\] This happens, in particular, when $Y^\lambda$ is a non-projective periodic Young module (see \cite[Corollary 3.3.3]{DHDN}).
Recall that $P_\lambda$ is a fixed Sylow $p$-subgroup of $\sym{\O_\lambda}$, as chosen in Subsection \ref{SS: composition}.
\begin{thm}\label{T:eolambda}
Let $\lambda\in\P(n)$, $\O\in \P^{(p)}(n)$ and let $P,E$ be $p$-subgroups of $\sym{n}$ such that both $\orbit{\set{n}}{P},\orbit{\set{n}}{E}$ have type $\O$ and $E$ is elementary Abelian. Then the following statements are equivalent.
\begin{enumerate}[(i)]
\item $y_{\lambda,\O}\neq0$.
\item $Y^\lambda{\downarrow_E}$ is not generically free.
\item $P$ is conjugate to a subgroup of $P_\lambda$.
\item $\O$ is a rearrangement of a refinement of $\O_{\lambda}$.
\end{enumerate} In any of the cases above, we have $y_{\lambda,\O}\geq y_{\lambda,\O_\lambda}\neq 0$.
\end{thm}
\begin{proof} The equivalence of parts (i), (ii) and (iii) follows from Definition \ref{D: orbit number} and Theorem \ref{Contain}. The equivalence of parts (iii) and (iv) is given by Lemma \ref{L: minimal subgroup}. We now prove the last assertion. Let $Q$ be a conjugate of $P_{\lambda}$ in $\sym{n}$ such that $P$ is a $p$-subgroup of $Q$. Let $\mathcal{B}$ be a $p$-permutation basis of $Y^{\lambda}$ with respect to $Q$. Then $\mathcal{B}(Q)\subseteq\mathcal{B}(P)$. By Corollary \ref{C: EG} and Theorem \ref{T: Yinvarant}, we have \[y_{\lambda,\O}=\dim_{\mathbb{F}}Y^{\lambda}(P)\geq\dim_{\mathbb{F}}Y^{\lambda}(Q)=\dim_{\mathbb{F}}Y^{\lambda}(P_{\lambda})=y_{\lambda,\O_{\lambda}}.\]
\end{proof}
\section{Some computation}\label{S: Some computation}
In this section, we present some equalities among the orbit numbers we have defined in Definition \ref{D: orbit number}. The main results are Theorems \ref{T: Mullineux}, \ref{T: reduction 3} and \ref{T: Reductive5}.
We present our first result, which follows easily from Lemma \ref{L: prod of dims}. Recall that $\mathbf{m}$ is the Mullineux map on $p$-restricted partitions, so that $Y^\lambda\otimes\mathrm{sgn}(n)\cong Y^{\mathbf{m}(\lambda)}$ for all $\lambda\in\mathscr{RP}_p(n)$.
\begin{thm}\label{T: Mullineux} Let $\sum_{i=0}^sp^{i}\lambda(i)$ be the $p$-adic expansion of $\lambda$ and let \[\mu=\sum_{i=0}^sp^{k_i}\mathbf{m}^{\ell_i}(\lambda(i)),\] where $k_0,\ldots,k_s$ are mutually distinct nonnegative integers and, for each $i=0,1,\ldots, s$, $\ell_i$ is either 0 or 1. Then $y_{\lambda,\O_\lambda}=y_{\mu,\O_\mu}$.
\end{thm}
\begin{proof} Notice that $\sum_{i=0}^sp^{k_i}\mathbf{m}^{\ell_i}(\lambda(i))$ is the $p$-adic expansion of $\mu$ since $k_i$'s are all distinct. By Lemma \ref{L: prod of dims}, we have \[y_{\lambda,\O_\lambda}=\prod_{i=0}^s\dim_\mathbb{F}{Y^{\lambda(i)}}=\prod_{i=0}^s\dim_\mathbb{F}{Y^{\mathbf{m}^{\ell_i}(\lambda(i))}}=y_{\mu,\O_\mu}.\]
\end{proof}
Recall that $\beta\bullet \gamma$ denotes the concatenation of two compositions $\beta, \gamma$. We need the following lemmas to prove our next result, Theorem \ref{T: reduction 3}.
\begin{lem}\label{L: ps partition} Let $m,n,s\in\mathbb{N}$ such that $p^s>m$. If $\lambda+p^s\mu=\alpha+p^s\beta$ for some $\lambda,\alpha\in\mathscr{C}(m)$ and $\mu,\beta\in\mathscr{C}(n)$ then $\lambda=\alpha$ and $\mu=\beta$.
\end{lem}
\begin{proof} If $\lambda_i>\alpha_i$ for some $i$ then \[p^s>\lambda_i-\alpha_i=p^s(\beta_i-\mu_i)\geq p^s,\] which is a contradiction, so $\lambda_i\leq\alpha_i$ for all $i$. By symmetry, we also have $\lambda_i\geq\alpha_i$ for all $i$. Therefore $\lambda=\alpha$ and hence $\mu=\beta$.
\end{proof}
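Concretely, the lemma says that when every part of $\lambda$ is smaller than $p^s$, the pair $(\lambda,\mu)$ can be read off from $\lambda+p^s\mu$ componentwise by taking remainders and quotients modulo $p^s$. A minimal numeric sketch of this observation (illustrative only; the function name is ours):

```python
def split_parts(tau, p, s):
    # Recover (lam, mu) from tau = lam + p^s * mu componentwise.
    # Valid whenever every part of lam is < p^s, as in the lemma:
    # each part of lam is at most m < p^s, so it is the remainder mod p^s.
    q = p ** s
    lam = tuple(t % q for t in tau)
    mu = tuple(t // q for t in tau)
    return lam, mu
```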
\begin{lem}\label{L: Reductive2} Let $\lambda\in\P(m)$, $\mu\in\P(n)$, $\O\in\P^{(p)}(m)$, $\O'\in\P^{(p)}(n)$ and $s\in\mathbb{N}$ such that $p^s>m$. Then \[m_{\lambda+p^s\mu,\O\bullet p^s\O'}=m_{\lambda,\O}m_{\mu,\O'}.\]
\end{lem}
\begin{proof} Let $\O''=\O\bullet p^{s}\O'$ and let $P,Q,R$ be $p$-subgroups of $\sym{m},\sym{n},\sym{m+p^sn}$ such that $A:=\orbit{\set{m}}{P}$, $B:=\orbit{\set{n}}{Q}$ and $C:=\orbit{\set{m+p^{s}n}}{R}$ have types $\O,\O',\O''$ respectively. Since $p^s>m$, we may identify the set $C$ with the set $A\cup B$, where an orbit of size $p^i$ in $C$ is identified with an orbit of size $p^i$ in $A$ if $i<s$, and with an orbit of size $p^{i-s}$ in $B$ if $i\geq s$. Recall the notation $M_{\lambda,P}$ defined in Subsection \ref{SS: sym}. To prove the result, we construct a bijection between the sets $X:=M_{\lambda+p^s\mu,R}$ and $Y:=M_{\lambda,P}\times M_{\mu,Q}$. We define $g:Y\to X$ as follows. For each $(\t,\mathfrak{s})\in Y$, let $g(\t,\mathfrak{s})\in X$ be the $(\lambda+p^s\mu)$-tabloid whose $i$th row contains an orbit of $C$ if and only if its corresponding orbit is in $A$ and belongs to the $i$th row of $\t$, or it is in $B$ and belongs to the $i$th row of $\mathfrak{s}$. Conversely, we define $f:X\to Y$ as follows. For each $\mathfrak{u}\in X$, let $\t$ be the $\alpha$-tabloid, for some composition $\alpha$ of $m$, whose $i$th row contains an orbit of $A$ if and only if its corresponding orbit in $C$ belongs to the $i$th row of $\mathfrak{u}$. Similarly, we obtain a $\beta$-tabloid $\mathfrak{s}$ for some composition $\beta$ of $n$. Since $\alpha+p^s\beta=\lambda+p^s\mu$, by Lemma \ref{L: ps partition}, we have $\alpha=\lambda$ and $\beta=\mu$. Therefore $f(\mathfrak{u})=(\t,\mathfrak{s})\in Y$. Clearly $f$ and $g$ are mutually inverse bijections, so $|X|=|Y|$, and the result follows from Lemma \ref{L: dim of M(P)}.
\end{proof}
\begin{lem}\label{L: red 1} Let $\O\in\P^{(p)}(m)$, $\O'\in\P^{(p)}(n)$ and $s\in\mathbb{N}$ such that $p^s>m$. If $y_{\tau,\O\bullet p^s\O'}\neq 0$ then $\tau=\nu+p^s\delta$ for some $\nu\in\P(m)$ and $\delta\in\P(n)$.
\end{lem}
\begin{proof} Let $\sum^r_{i=0}p^{i}\tau(i)$ be the $p$-adic expansion of $\tau$. By Theorem \ref{T:eolambda}, $\O'':=\O\bullet p^s\O'$ is a rearrangement of a refinement of $\O_\tau=(1^{|\tau(0)|},\ldots,(p^r)^{|\tau(r)|})$. Let $\nu:=\sum_{i=0}^{s-1}p^{i}\tau(i)$ and $\delta:=\sum_{i=s}^rp^{i-s}\tau(i)$. Notice that both $\nu,\delta$ are partitions. We now show that $|\nu|=m$ and $|\delta|=n$. Since $\O''$ is a rearrangement of a refinement of $\O_\tau$, we have $|\delta|\geq n$. On the other hand, we have $m+p^sn=|\nu|+p^s|\delta|$, and hence $0\leq p^s(|\delta|-n)=m-|\nu|<p^s$. This forces $|\delta|=n$ and $|\nu|=m$.
\end{proof}
We are now ready to prove our first reductive formula about orbit numbers.
\begin{thm}\label{T: reduction 3}
Let $\lambda\in \P(m)$, $\mu\in \P(n)$, $\O\in \P^{(p)}(m)$, $\O'\in \P^{(p)}(n)$ and $s,t\in\mathbb{N}$ such that $p^t\geq p^{s}>m$. Then \[y_{\lambda+p^s\mu,\O\bullet p^{s}\O'}=y_{\lambda+p^t\mu,\O\bullet p^{t}\O'}.\]
\end{thm}
\begin{proof}
Let $\O'':=\O\bullet p^s\O'$ and $\O''':=\O\bullet p^t\O'$. We show our statement by using induction on the set \[X=\{\nu+p^s\delta:\nu\in\P(m),\ \delta\in\P(n)\}\] with respect to the dominance order. When $\nu=(m)$ and $\delta=(n)$, both $Y^{(m+p^sn)}$ and $Y^{(m+p^tn)}$ are trivial modules. So, by Lemma \ref{L: dim of M(P)}, \[y_{(m+p^sn),\O''}=m_{(m+p^sn),\O''}=1=m_{(m+p^tn),\O'''}=y_{(m+p^tn),\O'''}.\]
Suppose that $y_{\nu+p^s\delta,\O''}=y_{\nu+p^t\delta,\O'''}$ for all partitions $\nu+p^s\delta\rhd\lambda+p^s\mu$ where $\nu\in\P(m)$ and $\delta\in\P(n)$. By Equation \ref{Eq: 1}, we have \[m_{\lambda+p^s\mu,\O''}=y_{\lambda+p^s\mu,\O''}+\sum_{\tau\rhd \lambda+p^s\mu}k_{\lambda+p^s\mu,\tau}y_{\tau,\O''}.\] By Lemma \ref{L: red 1}, $y_{\tau,\O''}=0$ unless $\tau=\nu+p^s\delta$ for some uniquely determined $\nu\in\P(m)$ and $\delta\in\P(n)$ (see Lemma \ref{L: ps partition}). By Theorem \ref{T: Reductive}(i), $k_{\lambda+p^s\mu,\nu+p^s\delta}=k_{\lambda,\nu}k_{\mu,\delta}$. Using the observation that $k_{\lambda,\nu}k_{\mu,\delta}=0$ unless $\nu\trianglerighteq\lambda$ and $\delta\trianglerighteq\mu$, we deduce that \[m_{\lambda+p^s\mu,\O''}=y_{\lambda+p^s\mu,\O''}+\sum_{\nu\rhd \lambda,\delta\rhd \mu}k_{\lambda,\nu}k_{\mu,\delta}y_{\nu+p^s\delta,\O''}.\] Similarly, we obtain \[m_{\lambda+p^t\mu,\O'''}=y_{\lambda+p^t\mu,\O'''}+\sum_{\nu\rhd \lambda,\delta\rhd \mu}k_{\lambda,\nu}k_{\mu,\delta}y_{\nu+p^t\delta,\O'''}.\] Our result now follows from Lemma \ref{L: Reductive2} and the inductive hypothesis.
\end{proof}
We obtain the following immediate consequence.
\begin{cor} Let $t\in\mathbb{N}$, $\mu\in\P(n)$ and $\O\in\P^{(p)}(n)$. Then $y_{p^t\mu,p^t\O}=y_{\mu,\O}$.
\end{cor}
Next we prove another reductive formula which depends on the shape of the partition (see Theorem \ref{T: Reductive5}) rather than on its size as in Theorem \ref{T: reduction 3}. The main ideas of the two proofs are quite similar, but the latter requires a slightly different treatment. We need the following two lemmas.
\begin{lem}\label{L: red 2} Let $\lambda\in\P(m)$, $\O\in\P^{(p)}(m)$, $\O'\in\P^{(p)}(n)$ and $s\in\mathbb{N}$ such that $p^s>\lambda_2$. Then \[m_{\lambda+(p^sn),\O''}=m_{\lambda,\O},\] where $\O''\in\P^{(p)}(m+p^sn)$ is the rearrangement of $\O\bullet p^s\O'$.
\end{lem}
\begin{proof} Recall that $\O''$ is the rearrangement of $\O\bullet p^s\O'\in \P^{(p)}(m+p^{s}n)$. Let $P,Q$ be $p$-subgroups of $\sym{m},\sym{m+p^sn}$ such that $\orbit{\set{m}}{P},\orbit{\set{m+p^sn}}{Q}$ have types $\O,\O''$ respectively. Let $\t\in M_{\lambda+(p^sn),Q}$. Since $p^s>\lambda_2$, the orbits in $\set{m+p^sn}/Q$ of size at least $p^s$ must be assigned to the first row of $\t$. Therefore there is an obvious bijection between $M_{\lambda+(p^sn),Q}$ and $M_{\lambda,P}$. Our claim now follows by Lemma \ref{L: dim of M(P)}.
\end{proof}
\begin{lem}\label{L: red 3} Let $\lambda\in\P(m)$, $\O\in\P^{(p)}(m)$, $\O'\in\P^{(p)}(n)$ and $s\in\mathbb{N}$ such that $p^s>m-\lambda_1$. If $\tau\in \P(m+p^sn)$ such that $\tau\rhd \lambda+(p^sn)$ and $y_{\tau,\O\bullet p^s\O'}\neq 0$ then $\tau=\nu+(p^sn)$ for some uniquely determined partition $\nu\in\P(m)$.
\end{lem}
\begin{proof} Let $\sum^r_{i=0}p^i\tau(i)$ be the $p$-adic expansion of $\tau$, and let $\alpha=\sum^{s-1}_{i=0}p^i\tau(i)$ and $\beta=\sum^r_{i=s}p^{i-s}\tau(i)$, so that $\tau=\alpha+p^s\beta$. We claim that $\beta=(b)$ for some $b\geq n$. Since $y_{\tau,\O\bullet p^s\O'}\neq 0$, the composition $\O\bullet p^s\O'$ is a rearrangement of a refinement of $\O_\tau$, and hence $|\beta|\geq |\O'|=n$. Suppose that $\beta_i>0$ for some $i\geq 2$. Since $\tau\rhd \lambda+(p^sn)$, we have $\tau_1\geq \lambda_1+p^sn$. Also, $\tau_i=\alpha_i+p^s\beta_i\geq \alpha_i+p^s$. Therefore \[p^s\leq\alpha_i+p^s\leq \tau_i\leq (m+p^sn)-\tau_1\leq m-\lambda_1<p^s,\] which is a contradiction. This shows that $\beta$ has at most one part, and hence $\beta=(b)$ for some $b\geq n$. Let $\nu=\alpha+(p^s(b-n))$. Then $\tau=\nu+(p^sn)$.
\end{proof}
We are now ready to prove the second reductive formula about orbit numbers.
\begin{thm}\label{T: Reductive5}
Let $\lambda\in \P(m)$ such that $m-\lambda_{1}<p^s$ for some $s\in\mathbb{N}$, let $\O\in\P^{(p)}(m)$ and let $\O'\in \P^{(p)}(n)$. Then \[y_{\lambda+(p^sn),\O''}=y_{\lambda,\O},\] where $\O''\in\P^{(p)}(m+p^sn)$ is the rearrangement of $\O\bullet p^s\O'$.
\end{thm}
\begin{proof} We argue by induction with respect to the dominance order on the set \[X=\{\nu\in\P(m):m-\nu_1<p^s\}.\] When $\nu=(m)\in X$, we have $y_{(m)+(p^sn),\O''}=1=y_{(m),\O}$. Suppose that $y_{\nu+(p^sn),\O''}=y_{\nu,\O}$ for all $\lambda\lhd\nu\in X$. By Equation \ref{Eq: 1} and Lemma \ref{L: red 3}, we have
\begin{align*}
m_{\lambda+(p^sn),\O''}&=y_{\lambda+(p^sn),\O''}+\sum_{\tau\rhd \lambda+(p^sn)}k_{\lambda+(p^sn),\tau}y_{\tau,\O''}\\
&=y_{\lambda+(p^sn),\O''}+\sum_{\nu\rhd \lambda}k_{\lambda+(p^sn),\nu+(p^sn)}y_{\nu+(p^sn),\O''}.
\end{align*} By Theorem \ref{T: Reductive}(ii), since $\lambda_2\leq m-\lambda_1<p^s$, we have $k_{\lambda+(p^sn),\nu+(p^sn)}=k_{\lambda,\nu}$. By the inductive hypothesis, we have \[m_{\lambda+(p^sn),\O''}=y_{\lambda+(p^sn),\O''}+\sum_{\nu\rhd \lambda}k_{\lambda,\nu}y_{\nu,\O}.\] The proof is now complete using Equation \ref{Eq: 1}, namely $m_{\lambda,\O}=y_{\lambda,\O}+\sum_{\nu\rhd \lambda}k_{\lambda,\nu}y_{\nu,\O}$, and Lemma \ref{L: red 2}.
\end{proof}
\section{Two-part partitions}\label{S: two part}
In this final section, we provide some explicit calculations of the orbit numbers $y_{\lambda,\O}$ when $\lambda$ is a two-part partition. We begin with the following proposition.
\begin{prop} Let $\O=(1^{a_{0}}, p^{a_{1}},\ldots, (p^{r})^{a_{r}})\in\P^{(p)}(n)$. If $a_0=0$, then $y_{(n-1,1),\O}=0$. If $a_0\neq 0$ then
\[y_{(n-1,1),\O}=\begin{cases}
a_0, &\text{ if $p\mid n$,}\\
a_0-1, &\text{ if $p\nmid n$}.
\end{cases}\]
\end{prop}
\begin{proof} Let $P$ be a $p$-subgroup such that $\orbit{\set{n}}{P}$ has type $\O$. It is well-known that $M^{(n-1,1)}$ is isomorphic to $Y^{(n-1,1)}$ if $p$ divides $n$ and $Y^{(n)}\oplus Y^{(n-1,1)}$ otherwise. By Lemma \ref{L: dim of M(P)}, $\dim_\mathbb{F} M^{(n-1,1)}(P)$ is the number of ways to insert the orbits with size one that are in $\set{n}/P$ into the second row of $(n-1,1)$, i.e., $\dim_\mathbb{F} M^{(n-1,1)}(P)=a_0$. If $a_0=0$, since $0\leq y_{(n-1,1),\O}\leq \dim_\mathbb{F} M^{(n-1,1)}(P)=0$, then $y_{(n-1,1),\O}=0$. If $a_0\neq 0$, then
\[y_{(n-1,1),\O}= \dim_\mathbb{F} M^{(n-1,1)}(P)-k_{(n),(n-1,1)}\dim_\mathbb{F} Y^{(n)}(P)=a_0-k_{(n),(n-1,1)},\] where $k_{(n),(n-1,1)}$ is 1 if $p\nmid n$ and 0 otherwise.
\end{proof}
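The case analysis in the proposition can be packaged as a small function. This is only an illustrative sketch of the closed form just proved (the function name and the convention of passing the type $\O$ as a list of orbit sizes are ours):

```python
def y_n_minus_one_one(orbit_sizes, p):
    # orbit_sizes: multiset of P-orbit sizes (each a power of p) of type O.
    n = sum(orbit_sizes)
    a0 = orbit_sizes.count(1)   # number of P-fixed points on {1,...,n}
    if a0 == 0:
        return 0
    return a0 if n % p == 0 else a0 - 1
```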
Next, we compute $y_{\lambda,\O_\lambda}$ when $\lambda$ is a two-part partition. We need the following two lemmas.
\begin{lem}\label{L:pweight}
Any $p$-restricted two-part partition has $p$-weight either 0 or 1. Furthermore, a $p$-restricted two-part partition $(a,b)$ has $p$-weight 1 if and only if $a-b< p-1$ and $a+1\geq p$. In this case, the $p$-core is $(b-1,a+1-p)$.
\end{lem}
\begin{proof}
Let $\lambda=(a,b)$ be a $p$-restricted two-part partition, so that $a-b<p$ and $b<p$. From the Young diagram of $\lambda$, it is clear that $\lambda$ has nonzero $p$-weight if and only if $a-b<p-1$ and $a+1\geq p$. In this case, after removing one $p$-hook from $\lambda$, the remaining partition is $(b-1,a+1-p)$. Since $(b-1)+1<p$, the partition $(b-1,a+1-p)$ has $p$-weight 0, and the lemma follows.
\end{proof}
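A hypothetical helper encoding the case analysis of the lemma (the function name and output convention, a pair of $p$-weight and $p$-core, are ours, not the paper's):

```python
def p_weight_and_core(a, b, p):
    # (a, b) must be a p-restricted two-part partition: a - b < p and b < p.
    assert a >= b >= 1 and a - b < p and b < p
    if a - b < p - 1 and a + 1 >= p:          # p-weight 1
        core = tuple(x for x in (b - 1, a + 1 - p) if x > 0)
        return 1, core
    return 0, (a, b)                          # lambda is already a p-core
```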
The weight one blocks of symmetric groups are Morita equivalent by \cite[Theorem 1]{Scopes}. For $p\geq 3$, the decomposition matrix of the principal block of $\mathbb{F}\sym{p}$ has been described by Peel in \cite{Peel} (see also \cite[Theorem 24.1]{GJ1}). Let $b$ be the weight one block of $\mathbb{F}\sym{n}$ labelled by a $p$-core $\nu$ and let \[\mu^{(0)}\rhd\mu^{(1)}\rhd\cdots\rhd\mu^{(p-1)}\] be all the partitions occurring in $b$. Notice that $\mu^{(i)}$ is $p$-restricted for each $i=1,2,\ldots, p-1$ and $\mu^{(0)}=\nu+(p)$. Taking the conjugates, we have $(\mu^{(0)})'\lhd(\mu^{(1)})'\lhd\cdots\lhd(\mu^{(p-1)})'$. By Brauer reciprocity, if $\mu$ is $p$-restricted, we have that the multiplicity of the ordinary irreducible character $\chi^\lambda$ in $\mathrm{ch}(Y^\mu)$ is the decomposition number $d_{\lambda',\mu'}$, i.e., the multiplicity of the simple module $D^{\mu'}$ in the composition series of $S^{\lambda'}$. In particular, we have \[\dim_\mathbb{F} Y^{\mu^{(i)}}=\dim_\mathbb{F} S^{\mu^{(i)}}+\dim_\mathbb{F} S^{\mu^{(i-1)}}.\]
Suppose that $\lambda=(a,b)$ is a $p$-restricted partition. Suppose first that $p$ is odd. If the $p$-weight of $\lambda$ is zero then $Y^\lambda\cong S^\lambda$. If the $p$-weight of $\lambda$ is one then, by Lemma \ref{L:pweight} and the discussion above, $\mu^{(1)}=\lambda$ and $\mu^{(0)}=\kappa_p(\lambda)+(p)=(p+b-1,a+1-p)$. Suppose now that $p=2$. Then $(a,b)$ is either $(2,1)$ or $(1,1)$. In this case, $Y^{(2,1)}\cong S^{(2,1)}$ and $M^{(1,1)}\cong Y^{(1,1)}$. We have obtained the following lemma.
\begin{lem}\label{L: dim of two part}
For a $p$-restricted two-part partition $\lambda$, we have
\[\dim_\mathbb{F}{Y^\lambda}=\begin{cases}
\dim_\mathbb{F}{S^\lambda},& \text{ if $\lambda=\kappa_p(\lambda)$,}\\
\dim_\mathbb{F}{S^\lambda}+\dim_\mathbb{F}{S^{\kappa_p(\lambda)+(p)}},& \text{ if $\lambda\neq \kappa_p(\lambda)$.}
\end{cases}
\]
\end{lem}
Now we can give a description of the orbit numbers $y_{\lambda,\O_\lambda}$ when $\lambda$ is a two-part partition.
\begin{prop}
Let $\lambda=(a,b)$ be a two-part partition and let the $p$-adic expansions of the numbers $a-b$ and $b$ be $\sum_{i\geq0}x_ip^{i}$ and $\sum_{i\geq0}y_ip^{i}$, respectively.
Then
\[y_{\lambda,\O_\lambda}=\prod_{i\geq 0} \left ({x_i+2y_i-1\choose y_i}+\delta(x_i,y_i){x_i+2y_i-1\choose x_i+y_i+1-p}\right ),\]
where $\delta$ is the function defined as
\[\delta(x,y)=\begin{cases}
1, &\text{ if $x<p-1$ and $x+y+1\geq p$,}\\
0, &\text{ otherwise.}
\end{cases}
\]
\end{prop}
\begin{proof} Notice that the $p$-adic expansion of $(a,b)$ is $\sum_{i\geq 0}p^i(x_i+y_i,y_i)$. By Lemma \ref{L:pweight}, the partition $(x_i+y_i,y_i)$ has $p$-weight 1 if and only if $x_i<p-1$ and $x_i+y_i+1\geq p$. In this case, \[\kappa_p(\lambda(i))+(p)=(p+y_i-1,x_i+y_i+1-p).\] Otherwise, the $p$-weight of $(x_i+y_i,y_i)$ is zero. By Lemma \ref{L: dim of two part}, we have
\[\dim_\mathbb{F} Y^{\lambda(i)}=\dim_\mathbb{F}{S^{(x_i+y_i,y_i)}}+\delta(x_i,y_i)\dim_\mathbb{F}{S^{(y_i+p-1,x_i+y_i+1-p)}}.\] The proof is now complete using Lemma \ref{L: prod of dims} and Hook Formula for the dimension of a Specht module.
\end{proof}
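The hook length formula invoked at the end of the proof is standard. A sketch for an arbitrary partition, so that factors such as $\dim_\mathbb{F} S^{(x_i+y_i,y_i)}$ above can be evaluated numerically (the function name is ours):

```python
from math import factorial

def dim_specht(la):
    # Hook length formula: dim S^la = n! / (product of all hook lengths),
    # where the hook length of cell (i, j) is la_i - j + la'_j - i + 1
    # (here indices are 0-based).
    la = tuple(la)
    conj = [sum(1 for part in la if part > j) for j in range(la[0])]
    hooks = 1
    for i, part in enumerate(la):
        for j in range(part):
            hooks *= (part - j) + (conj[j] - i) - 1
    return factorial(sum(la)) // hooks
```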
\section{Introduction}
Factoid Question Answering (QA) aims to extract answers, from an underlying knowledge source, to information seeking questions posed in natural language. Depending on the knowledge source available, there are two main approaches for factoid QA. Structured sources, including Knowledge Bases (KBs) such as Freebase \citep{bollacker2008freebase}, are easier to process automatically since the information is organized according to a fixed schema. In this case, the question is parsed into a logical form in order to query against the KB. However, even the largest KBs are often incomplete \citep{miller2016key,west2014knowledge}, and hence can only answer a limited subset of all possible factoid questions.
For this reason the focus is now shifting towards unstructured sources, such as Wikipedia articles, which hold a vast quantity of information in textual form and, in principle, can be used to answer a much larger collection of questions. Extracting the correct answer from unstructured text is, however, challenging, and typical QA pipelines consist of the following two components: (1) \textit{searching} for the passages relevant to the given question, and (2) \textit{reading} the retrieved text in order to select a span of text which best answers the question \citep{chen2017reading,watanabe2017question}.
Like most other language technologies, the current research focus for both these steps is firmly on machine learning based approaches for which performance improves with the amount of data available. Machine reading performance, in particular, has been significantly boosted in the last few years with the introduction of large-scale reading comprehension datasets such as CNN / DailyMail \citep{hermann2015teaching} and Squad \citep{rajpurkar2016squad}. State-of-the-art systems for these datasets \citep{dhingra2016gated,seo2016bidirectional} focus solely on step (2) above, in effect assuming the relevant passage of text is already known.
\begin{figure*}[!htbp]
\small
\begin{tabular}{rp{0.8\textwidth}}
\textbf{Question} & javascript -- javascript not to be confused with java is a dynamic weakly-typed language used for XXXXX as well as server-side scripting . \\
\textbf{Answer} & \textbf{client-side} \\
\textbf{Context excerpt} & JavaScript is not weakly typed, it is strong typed. \newline
JavaScript is a \textbf{Client Side} Scripting Language. \newline
JavaScript was the **original** \textbf{client-side} web scripting language.\\\\
\textbf{Question} & 7-Eleven stores were temporarily converted into Kwik E-marts to promote the release of what movie? \\
\textbf{Answer} & \textbf{the simpsons movie} \\
\textbf{Context excerpt} & In July 2007 , 7-Eleven redesigned some stores to look like Kwik-E-Marts in select cities to promote \textbf{The Simpsons Movie} . \newline
Tie-in promotions were made with several companies , including 7-Eleven , which transformed selected stores into Kwik-E-Marts . \newline
`` 7-Eleven Becomes Kwik-E-Mart for ` \textbf{Simpsons Movie} ' Promotion '' .
\end{tabular}
\caption{\small Example short-document instances from \textsc{Quasar-S} (top) and \textsc{Quasar-T} (bottom)}\label{f_demo}
\end{figure*}
In this paper, we introduce two new datasets for QUestion Answering by Search And Reading -- \textsc{Quasar}. The datasets each consist of factoid question-answer pairs and a corresponding large background corpus to facilitate research into the combined problem of retrieval and comprehension. \textsc{Quasar-S} consists of 37,362 cloze-style questions constructed from definitions of software entities available on the popular website Stack Overflow\footnote{Stack Overflow is a website featuring questions and answers (posts) from a wide range of topics in computer programming. The entity definitions were scraped from \url{https://stackoverflow.com/tags}.}. The answer to each question is restricted to be another software entity, from an output vocabulary of 4874 entities.
\textsc{Quasar-T} consists of 43,013 trivia questions collected from various internet sources by a trivia enthusiast. The answers to these questions are free-form spans of text, though most are noun phrases.
While production quality QA systems may have access to the entire world wide web as a knowledge source, for \textsc{Quasar} we restrict our search to specific background corpora. This is necessary to avoid uninteresting solutions which directly extract answers from the sources from which the questions were constructed. For \textsc{Quasar-S} we construct the knowledge source by collecting the top 50 threads\footnote{A question along with the answers provided by other users is collectively called a thread. The threads are ranked in terms of votes from the community. Note that these questions are different from the cloze-style queries in the \textsc{Quasar-S} dataset.} tagged with each entity in the dataset on the Stack Overflow website. For \textsc{Quasar-T} we use ClueWeb09 \citep{callan2009clueweb09}, which contains about 1 billion web pages collected between January and February 2009. Figure~\ref{f_demo} shows some examples.
Unlike existing reading comprehension tasks, the \textsc{Quasar} tasks go beyond the ability to only understand a given passage, and require the ability to answer questions given large corpora. Prior datasets (such as those used in \citep{chen2017reading}) are constructed by first selecting a passage and then constructing questions about that passage. This design (intentionally) ignores some of the subproblems required to answer open-domain questions from corpora, namely searching for passages that may contain candidate answers, and aggregating information/resolving conflicts between candidates from many passages. The purpose of Quasar is to allow research into these subproblems, and in particular whether the search step can benefit from integration and joint training with downstream reading systems.
Additionally, \textsc{Quasar-S} has the interesting feature of being a closed-domain dataset about computer programming, and successful approaches to it must develop domain-expertise and a deep understanding of the background corpus. To our knowledge it is one of the largest closed-domain QA datasets available.
\textsc{Quasar-T}, on the other hand, consists of open-domain questions based on trivia, which refers to ``bits of information, often of little importance''.
Unlike previous open-domain systems which rely heavily on the redundancy of information on the web to correctly answer questions,
we hypothesize that \textsc{Quasar-T} requires a deeper reading of documents to answer correctly.
We evaluate \textsc{Quasar} against human testers, as well as several baselines ranging from na{\"i}ve heuristics to state-of-the-art machine readers. The best performing baselines achieve $33.6\%$ and $28.5\%$ on \textsc{Quasar-S} and \textsc{Quasar-T}, while human performance is $50\%$ and $60.6\%$ respectively. For the automatic systems, we see an interesting tension between searching and reading accuracies -- retrieving more documents in the search phase leads to a higher coverage of answers, but makes the comprehension task more difficult. We also collect annotations on a subset of the development set questions to allow researchers to analyze the categories in which their system performs well or falls short. We plan to release these annotations along with the datasets, and our retrieved documents for each question.
\section{Existing Datasets}
\paragraph{Open-Domain QA:} Early research into open-domain QA was driven by the TREC-QA challenges organized by the National Institute of Standards and Technology (NIST) \citep{voorhees2000building}. Both dataset construction and evaluation were done manually, restricting the size of the dataset to only a few hundred questions. \textsc{WikiQA} \citep{yang2015wikiqa} was introduced as a larger-scale dataset for the subtask of answer sentence selection, however it does not identify spans of the actual answer within the selected sentence. More recently, \citet{miller2016key} introduced the \textsc{MoviesQA} dataset where the task is to answer questions about movies from a background corpus of Wikipedia articles. \textsc{MoviesQA} contains $\sim100k$ questions, however many of these are similarly phrased and fall into one of only $13$ different categories; hence, existing systems already have $\sim85\%$ accuracy on it \citep{watanabe2017question}. MS MARCO \citep{nguyen2016ms} consists of diverse real-world queries collected from Bing search logs, however many of them are not factual, which makes their evaluation tricky. \citet{chen2017reading} study the task of \textit{Machine Reading at Scale} which combines the aspects of search and reading for open-domain QA. They show that jointly training a neural reader on several distantly supervised QA datasets leads to a performance improvement on all of them. This justifies our motivation of introducing two new datasets to add to the collection of existing ones; more data is good data.
\paragraph{Reading Comprehension:} Reading Comprehension (RC) aims to measure the capability of systems to ``understand'' a given piece of text, by posing questions over it. It is assumed that the passage containing the answer is known beforehand. Several datasets have been proposed to measure this capability. \citet{richardson2013mctest} used crowd-sourcing to collect MCTest -- $500$ stories with $2000$ questions over them. Significant progress, however, was enabled when \citet{hermann2015teaching} introduced the much larger CNN / Daily Mail datasets consisting of $300k$ and $800k$ cloze-style questions respectively. Children's Book Test (CBT) \citep{hill2015goldilocks} and Who-Did-What (WDW) \citep{onishi2016did} are similar cloze-style datasets. However, the automatic procedure used to construct these questions often introduces ambiguity and makes the task more difficult \citep{chen2016thorough}. Squad \citep{rajpurkar2016squad} and NewsQA \citep{trischler2016newsqa} attempt to move toward more general extractive QA by collecting, through crowd-sourcing, more than $100k$ questions whose answers are spans of text in a given passage. Squad in particular has attracted considerable interest, but recent work \citep{weissenborn2017fastqa} suggests that answering the questions does not require a great deal of reasoning.
Recently, \citet{joshi2017triviaqa} prepared the TriviaQA dataset, which also consists of trivia questions collected from online sources, and is similar to \textsc{Quasar-T}.
However, the documents retrieved for TriviaQA were obtained using a commercial search engine, making it difficult for researchers to vary the retrieval step of the QA system in a controlled fashion; in contrast, we use ClueWeb09, a standard corpus. We also supply a larger collection of retrieved passages, including many that do not contain the correct answer, to facilitate research into retrieval; perform a more extensive analysis of baselines for answering the questions; and provide additional human evaluation and annotation of the questions. In addition, we present \textsc{Quasar-S}, a second dataset.
SearchQA \citep{dunn2017searchqa} is another recent dataset aimed at facilitating research towards an end-to-end QA pipeline, however this too uses a commercial search engine, and does not provide negative contexts not containing the answer, making research into the retrieval component difficult.
\section{Dataset Construction}
Each dataset consists of a collection of records with one QA problem per record. For each record, we include some question text, a context document relevant to the question, a set of candidate solutions, and the correct solution. In this section, we describe how each of these fields was generated for each \textsc{Quasar} variant.
\subsection{Question sets}
\paragraph{\textsc{Quasar-S}:} The software question set was built from the definitional ``excerpt'' entry for each tag (entity) on
StackOverflow. For example the excerpt for the ``java'' tag is, ``Java is a general-purpose object-oriented programming language designed to be used in conjunction with the Java Virtual Machine (JVM).'' Not every excerpt includes the tag being defined (which we will call the ``head tag''), so we prepend the head tag to the front of the string to guarantee relevant results later on in the pipeline. We then completed preprocessing of the software questions by downcasing and tokenizing the string using a custom tokenizer compatible with special characters in software terms (e.g. ``.net'', ``c++''). Each preprocessed excerpt was then converted to a series of cloze questions using a simple heuristic: first searching the string for mentions of other entities, then replacing each mention in turn with a placeholder string (Figure \ref{f_clozeExamples}).
\begin{figure*}[!htbp]
\small
\begin{tabular}{cp{0.8\textwidth}}
\textbf{Excerpt} & Java is a general-purpose object-oriented programming language designed to be used in conjunction with the Java Virtual Machine (JVM). \\
\makecell{\textbf{Preprocessed} \\ \textbf{Excerpt}} & java --- java is a general-purpose object-oriented programming language designed to be used in conjunction with the java virtual-machine jvm .\\\\
\multicolumn{2}{c}{\textbf{Cloze Questions}} \\
\textit{Cloze} & \textit{Question} \\\hline
java & java --- java is a general-purpose object-oriented programming language designed to be used in conjunction with the @placeholder virtual-machine jvm . \\\hline
virtual-machine & java --- java is a general-purpose object-oriented programming language designed to be used in conjunction with the java @placeholder jvm . \\\hline
jvm & java --- java is a general-purpose object-oriented programming language designed to be used in conjunction with the java virtual-machine @placeholder . \\
\end{tabular}
\caption{\small Cloze generation}\label{f_clozeExamples}
\end{figure*}
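The mention-replacement heuristic can be sketched as follows (the function name and entity set are our own illustration; the actual pipeline additionally prepends the head tag and uses the custom tokenizer described above):

```python
def make_clozes(tokens, entities, placeholder="@placeholder"):
    # One cloze question per entity mention: replace each mention in turn.
    clozes = []
    for i, tok in enumerate(tokens):
        if tok in entities:
            question = tokens[:i] + [placeholder] + tokens[i + 1:]
            clozes.append((tok, " ".join(question)))
    return clozes
```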
This heuristic is noisy, since the software domain often overloads existing English words (e.g. ``can'' may refer to a Controller Area Network bus; ``swap'' may refer to the temporary storage of inactive pages of memory on disk; ``using'' may refer to a namespacing keyword). To improve precision we scored each cloze based on the relative incidence of the term in an English corpus versus in our StackOverflow one, and discarded all clozes scoring below a threshold. This means our dataset does not include any cloze questions for terms which are common in English (such as ``can'', ``swap'', and ``using'', but also ``image'', ``service'', and ``packet''). A more sophisticated entity recognition system could make recall improvements here.
\paragraph{\textsc{Quasar-T}:} The trivia question set was built from a collection of just under 54,000 trivia questions collected by Reddit user 007craft and released in December 2015\footnote{\url{https://www.reddit.com/r/trivia/comments/3wzpvt/free_database_of_50000_trivia_questions/}}. The raw dataset was noisy, having been scraped from multiple sources with variable attention to detail in formatting, spelling, and accuracy. We filtered the raw questions to remove unparseable entries as well as any True/False or multiple choice questions, for a total of approximately 52,000 free-response style questions remaining. The questions range in difficulty, from straightforward (``Who recorded the song `Rocket Man''' ``Elton John'') to difficult (``What was Robin Williams paid for Disney's Aladdin in 1982'' ``Scale \$485 day + Picasso Painting'') to debatable (``According to Earth Medicine what's the birth totem for march'' ``The Falcon'')\footnote{In Earth Medicine, March has two birth totems, the falcon and the wolf.}.
\subsection{Context Retrieval}
\label{sec:retreival}
The context document for each record consists of a list of ranked and scored pseudodocuments relevant to the question.
Context documents for each query were generated in a two-phase fashion, first collecting a large pool of semirelevant text, then filling a temporary index with short or long pseudodocuments from the pool, and finally selecting a set of $N$ top-ranking pseudodocuments (100 short or 20 long) from the temporary index.
For \textsc{Quasar-S}, the pool of text for each question was composed of 50+ question-and-answer threads scraped from \url{http://stackoverflow.com}. StackOverflow keeps a running tally of the top-voted questions for each tag in their knowledge base; we used Scrapy\footnote{\url{https://scrapy.org}} to pull the top 50 question posts for each tag, along with any answer-post responses and metadata (tags, authorship, comments). From each thread we pulled all text not marked as code, and split it into sentences using the Stanford NLP sentence segmenter, truncating sentences to 2048 characters. Each sentence was marked with a thread identifier, a post identifier, and the tags for the thread. Long pseudodocuments were either the full post (in the case of question posts), or the full post and its head question (in the case of answer posts), comments included. Short pseudodocuments were individual sentences.
To build the context documents for \textsc{Quasar-S}, the pseudodocuments for the entire corpus were loaded into a disk-based lucene index, each annotated with its thread ID and the tags for the thread. This index was queried for each cloze using the following lucene syntax:
\begin{querytermlist}
\small
\item[] SHOULD(PHRASE(question text))
\item[] SHOULD(BOOLEAN(question text))
\item[] MUST(tags:\$headtag)
\end{querytermlist}
where ``question text'' refers to the sequence of tokens in the cloze question, with the placeholder removed. The first SHOULD term indicates that an exact phrase match to the question text should score highly. The second SHOULD term indicates that any partial match to tokens in the question text should also score highly, roughly in proportion to the number of terms matched. The MUST term indicates that only pseudodocuments annotated with the head tag of the cloze should be considered.
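As a concrete illustration, a query of this shape can be assembled programmatically. The sketch below builds an equivalent query string in classic Lucene query-parser syntax, where a \texttt{+} prefix marks a MUST clause and unmarked clauses are SHOULD; the function name and exact string layout are illustrative, not taken from the original pipeline.

```python
def build_lucene_query(question_tokens, head_tag=None):
    """Assemble a Lucene query-parser string mirroring the template above:
    a SHOULD phrase match, a SHOULD boolean match over the individual
    tokens and, when a head tag is given, a MUST tag filter.
    (Illustrative sketch only.)"""
    phrase = '"%s"' % " ".join(question_tokens)      # PHRASE(question text)
    boolean = " OR ".join(question_tokens)           # BOOLEAN(question text)
    query = "(%s) OR (%s)" % (phrase, boolean)
    if head_tag is not None:
        query = "+tags:%s (%s)" % (head_tag, query)  # MUST(tags:$headtag)
    return query
```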
The top $100N$ pseudodocuments were retrieved, and the top $N$ unique pseudodocuments were added to the context document along with their Lucene retrieval score. Any questions showing zero results for this query were discarded.
For \textsc{Quasar-T}, the pool of text for each question was composed of 100 HTML documents retrieved from ClueWeb09. Each question-answer pair was converted to a {\tt\#combine} query in the Indri query language to comply with the ClueWeb09 batch query service, using simple regular expression substitution rules to remove ({\verb~s/[.(){}<>:*`_]+//g~}) or replace ({\verb~s/[,?']+/ /g~}) illegal characters. Any questions generating syntax errors after this step were discarded. We then extracted the plaintext from each HTML document using Jericho\footnote{\url{http://jericho.htmlparser.net/docs/index.html}}. For long pseudodocuments we used the full page text, truncated to 2048 characters. For short pseudodocuments we used individual sentences as extracted by the Stanford NLP sentence segmenter, truncated to 200 characters.
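The two substitution rules above translate directly into Python regular expressions. The sketch below applies them and wraps the result in an Indri {\tt\#combine} query; the wrapper form and the whitespace normalization are assumptions for readability, not part of the stated rules.

```python
import re

def sanitize_for_indri(question_text):
    """Apply the two substitution rules described above: remove one set
    of illegal characters outright, replace the other set with spaces,
    then wrap the result in an Indri #combine query (assumed form)."""
    cleaned = re.sub(r"[.(){}<>:*`_]+", "", question_text)  # s/[.(){}<>:*`_]+//g
    cleaned = re.sub(r"[,?']+", " ", cleaned)               # s/[,?']+/ /g
    # Collapse whitespace for readability (not part of the stated rules).
    return "#combine( %s )" % " ".join(cleaned.split())
```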
To build the context documents for the trivia set, the pseudodocuments from the pool were collected into an in-memory Lucene index and queried using the question text only (the answer text was not included for this step). The structure of the query was identical to the query for \textsc{Quasar-S}, without the head tag filter:
\begin{querytermlist}
\small
\item[] SHOULD(PHRASE(question text))
\item[] SHOULD(BOOLEAN(question text))
\end{querytermlist}
The top $100N$ pseudodocuments were retrieved, and the top $N$ unique pseudodocuments were added to the context document along with their Lucene retrieval score. Any questions showing zero results for this query were discarded.
\subsection{Candidate solutions}
The list of candidate solutions provided with each record is guaranteed to contain the correct answer to the question.
\textsc{Quasar-S} used a closed vocabulary of 4874 tags as its candidate list.
Since the questions in \textsc{Quasar-T} are in free-response format, we constructed a separate list of candidate solutions for each question. Because most of the correct answers are noun phrases, we took each sequence of {\tt NN*}-tagged tokens in the context document, as identified by the Stanford NLP Maxent POS tagger, as the candidate list for each record. If this list did not include the correct answer, it was added to the list.
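A minimal sketch of this candidate extraction, assuming the tagger output is available as (token, Penn Treebank tag) pairs:

```python
def noun_phrase_candidates(tagged_tokens):
    """Collect maximal runs of NN*-tagged tokens as candidate answers.
    `tagged_tokens` is a list of (token, POS-tag) pairs, e.g. from a
    Penn-Treebank-style tagger. Sketch of the procedure described above."""
    candidates, current = [], []
    for token, tag in tagged_tokens:
        if tag.startswith("NN"):
            current.append(token)       # extend the current noun run
        elif current:
            candidates.append(" ".join(current))
            current = []
    if current:                          # flush a run ending at the last token
        candidates.append(" ".join(current))
    return candidates
```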
\subsection{Postprocessing}
\begin{table*}[!htbp]
\centering
\small
\begin{tabular}{@{}ccccc@{}}
\toprule
\textbf{Dataset} & \textbf{\begin{tabular}[c]{@{}c@{}}Total\\ (train / val / test)\end{tabular}} & \textbf{\begin{tabular}[c]{@{}c@{}}Single-Token\\ (train / val / test)\end{tabular}} & \textbf{\begin{tabular}[c]{@{}c@{}}Answer in Short\\ (train / val / test)\end{tabular}} & \textbf{\begin{tabular}[c]{@{}c@{}}Answer in Long\\ (train / val / test)\end{tabular}} \\ \midrule
\textsc{Quasar-S} & 31,049 / 3,174 / 3,139 & -- & 30,198 / 3,084 / 3,044 & 30,417 / 3,099 / 3,064 \\
\textsc{Quasar-T} & 37,012 / 3,000 / 3,000 & 18,726 / 1,507 / 1,508 & 25,465 / 2,068 / 2,043 & 26,318 / 2,129 / 2,102 \\ \bottomrule
\end{tabular}
\caption{\small Dataset Statistics. \textbf{Single-Token} refers to the questions whose answer is a single token (for \textsc{Quasar-S} all answers come from a fixed vocabulary). \textbf{Answer in Short (Long)} indicates whether the answer is present in the retrieved short (long) pseudo-documents.}
\label{t_datasetStats}
\end{table*}
Once context documents had been built, we extracted the subset of questions where the answer string, excluded from the query for the two-phase search, was nonetheless present in the context document. This subset allows us to evaluate the performance of the reading system independently from the search system, while the full set allows us to evaluate the performance of \textsc{Quasar} as a whole. We also split the full set into training, validation and test sets. The final size of each data subset after all discards is listed in Table \ref{t_datasetStats}.
\section{Evaluation}
\subsection{Metrics}
Evaluation is straightforward on \textsc{Quasar-S} since each answer comes from a fixed output vocabulary of entities, and we report the average \textit{accuracy} of predictions as the evaluation metric. For \textsc{Quasar-T}, the answers may be free-form spans of text, and the same answer may be expressed in different terms, which makes evaluation difficult. Here we adopt the two metrics used by \citet{rajpurkar2016squad,joshi2017triviaqa}.
In preprocessing we remove punctuation, white-space, and definite and indefinite articles from the answer strings. \textit{Exact} match then measures whether the two preprocessed strings are equal. For \textit{F1} match we construct a bag of tokens for each string, preprocess each token, and measure the F1 score of the overlap between the two bags of tokens. These metrics are far from perfect for \textsc{Quasar-T}; for example, our human testers were penalized for entering ``0'' instead of ``zero'' as an answer. However, a comparison between systems may still be meaningful.
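A sketch of the two metrics is below. The normalization follows the standard SQuAD evaluation script, which additionally lower-cases; the text above does not state lower-casing explicitly, so treat these details as an approximation.

```python
import re
import string
from collections import Counter

def normalize(s):
    """Strip punctuation, articles and extra whitespace; lower-casing is
    assumed, as in the standard SQuAD evaluation script."""
    s = s.lower()
    s = "".join(ch for ch in s if ch not in string.punctuation)
    s = re.sub(r"\b(a|an|the)\b", " ", s)
    return " ".join(s.split())

def exact_match(prediction, gold):
    """Exact string equality after preprocessing."""
    return normalize(prediction) == normalize(gold)

def f1_match(prediction, gold):
    """Token-bag F1 overlap between the two preprocessed strings."""
    pred = Counter(normalize(prediction).split())
    ref = Counter(normalize(gold).split())
    overlap = sum((pred & ref).values())
    if overlap == 0:
        return 0.0
    precision = overlap / sum(pred.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)
```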
\subsection{Human Evaluation}
To put the difficulty of the introduced datasets into perspective, we evaluated human performance on answering the questions. For each dataset, we recruited one domain expert (a developer with several years of programming experience for \textsc{Quasar-S}, and an avid trivia enthusiast for \textsc{Quasar-T}) and 1--3 non-experts. Each volunteer was presented with randomly selected questions from the development set and asked to answer them via an online app. The experts were evaluated in a ``closed-book'' setting, i.e.\ they did not have access to any external resources. The non-experts were evaluated in an ``open-book'' setting, where they had access to a search engine over the short pseudo-documents extracted for each dataset (as described in Section~\ref{sec:retreival}). We decided to use short pseudo-documents for this exercise to reduce the burden of reading on the volunteers, though we note that the long pseudo-documents have greater coverage of answers.
\begin{figure*}
\centering
\begin{subfigure}[b]{0.32\textwidth}
\includegraphics[width=\textwidth,trim={5mm 0 5mm 0},clip]{figures/so_rel_pie.pdf}
\caption{\textsc{Quasar-S} relations}
\label{fig:so_rel}
\end{subfigure}
\begin{subfigure}[b]{0.32\textwidth}
\includegraphics[width=\textwidth,trim={5mm 0 5mm 0},clip]{figures/tr_rel_bar.pdf}
\caption{\textsc{Quasar-T} genres}
\label{fig:tr_rel}
\end{subfigure}
\begin{subfigure}[b]{0.32\textwidth}
\includegraphics[width=\textwidth,trim={5mm 0 5mm 0},clip]{figures/tr_typ_pie.pdf}
\caption{\textsc{Quasar-T} answer categories}
\label{fig:tr_typ}
\end{subfigure}
\caption{\small Distribution of manual annotations for \textsc{Quasar}. Description of the \textsc{Quasar-S} annotations is in Appendix~\ref{app:relations}.}\label{fig:annotations}
\end{figure*}
We also asked the volunteers to provide annotations to categorize the type of each question they were asked, and a label for whether the question was ambiguous. For \textsc{Quasar-S} the annotators were asked to mark the relation between the \textit{head} entity (from whose definition the cloze was constructed) and the \textit{answer} entity. For \textsc{Quasar-T} the annotators were asked to mark the genre of the question (e.g., Arts \& Literature)\footnote{Multiple genres per question were allowed.} and the entity type of the answer (e.g., Person). When multiple annotators marked the same question differently, we took the majority vote when possible and discarded ties. In total we collected $226$ relation annotations for $136$ questions in \textsc{Quasar-S}, out of which $27$ questions were discarded due to ties, leaving a total of $109$ annotated questions. For \textsc{Quasar-T} we collected annotations for a total of $144$ questions, out of which $12$ were marked as ambiguous. In the remaining $132$, a total of $214$ genres were annotated (a question could be annotated with multiple genres), while $10$ questions had conflicting entity-type annotations which we discarded, leaving $122$ entity-type annotations. Figure~\ref{fig:annotations} shows the distribution of these annotations.
\subsection{Baseline Systems}
We evaluate several baselines on \textsc{Quasar}, ranging from simple heuristics to deep neural networks. Some predict a single token / entity as the answer, while others predict a span of tokens.
\subsubsection{Heuristic Models}
\paragraph{Single-Token:} \textit{MF-i} (Maximum Frequency) counts the number of occurrences of each candidate answer in the retrieved context and returns the one with maximum frequency. \textit{MF-e} is the same as \textit{MF-i} except it excludes the candidates present in the query. \textit{WD} (Word Distance) measures the sum of distances from a candidate to other non-stopword tokens in the passage which are also present in the query. For the cloze-style \textsc{Quasar-S} the distances are measured by first aligning the query placeholder to the candidate in the passage, and then measuring the offsets between other tokens in the query and their mentions in the passage. The maximum distance for any token is capped at a specified threshold, which is tuned on the validation set.
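A sketch of the \textit{WD} scoring under our reading of the description above (token positions as the distance measure, a per-token cap, lower totals preferred); details such as tie-breaking and the treatment of repeated tokens are assumptions.

```python
def word_distance(passage, query_tokens, candidate_pos, cap=10,
                  stopwords=frozenset()):
    """Sum, over the non-stopword query tokens that appear in the passage,
    of the (capped) distance from the candidate's position to the nearest
    mention of each token. Lower scores indicate better candidates.
    (Illustrative sketch of the WD heuristic.)"""
    total = 0
    for tok in set(query_tokens) - set(stopwords):
        positions = [i for i, w in enumerate(passage) if w == tok]
        if positions:
            nearest = min(abs(i - candidate_pos) for i in positions)
            total += min(cap, nearest)  # cap tuned on the validation set
    return total
```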
\paragraph{Multi-Token:} For \textsc{Quasar-T} we also test the Sliding Window (\textit{SW}) and Sliding Window + Distance (\textit{SW+D}) baselines proposed by \citet{richardson2013mctest}. The scores were computed for the list of candidate solutions described above.
\subsubsection{Language Models}
For \textsc{Quasar-S}, since the answers come from a fixed vocabulary of entities, we test language model baselines which predict the most likely entity to appear in a given context. We train three \textit{n-gram} baselines using the SRILM toolkit \citep{stolcke2002srilm} for $n=3,4,5$ on the entire corpus of all Stack Overflow posts. The output predictions are restricted to the output vocabulary of entities.
We also train a bidirectional Recurrent Neural Network (RNN) language model (based on GRU units). This model encodes both the left and right context of an entity using forward and backward GRUs, and then concatenates the final states from both to predict the entity through a softmax layer. Training is performed on the entire corpus of Stack Overflow posts, with the loss computed only over mentions of entities in the output vocabulary. This approach benefits from looking at both sides of the cloze in a query to predict the entity, as compared to the single-sided n-gram baselines.
\subsubsection{Reading Comprehension Models}
Reading comprehension models are trained to extract the answer from the given passage. We test two recent architectures on \textsc{Quasar} using publicly available code from the authors\footnote{\url{https://github.com/bdhingra/ga-reader} and \url{https://github.com/allenai/bi-att-flow}}.
\paragraph{GA (Single-Token):} The GA Reader \citep{dhingra2016gated} is a multi-layer neural network which extracts a single token from the passage to answer a given query. At the time of writing it had state-of-the-art performance on several cloze-style datasets for QA. For \textsc{Quasar-S} we train and test GA on all instances for which the correct answer is found within the retrieved context. For \textsc{Quasar-T} we train and test GA on all instances where the answer is in the context and is a single token.
\paragraph{BiDAF (Multi-Token):} The BiDAF model \citep{seo2016bidirectional} is also a multi-layer neural network which predicts a span of text from the passage as the answer to a given query. At the time of writing it had state-of-the-art performance among published models on the SQuAD dataset. For \textsc{Quasar-T} we train and test BiDAF on all instances where the answer is in the retrieved context.
\subsection{Results}
\begin{figure*}
\centering
\begin{subfigure}[b]{0.24\textwidth}
\includegraphics[width=\textwidth,trim={5mm 0 5mm 0},clip]{figures/so_sent.pdf}
\end{subfigure}
\begin{subfigure}[b]{0.24\textwidth}
\includegraphics[width=\textwidth,trim={5mm 0 5mm 0},clip]{figures/so_post.pdf}
\end{subfigure}
\begin{subfigure}[b]{0.24\textwidth}
\includegraphics[width=\textwidth,trim={5mm 0 5mm 0},clip]{figures/tr_sent.pdf}
\end{subfigure}
\begin{subfigure}[b]{0.24\textwidth}
\includegraphics[width=\textwidth,trim={5mm 0 5mm 0},clip]{figures/tr_post.pdf}
\end{subfigure}
\caption{\small Variation of Search, Read and Overall accuracies as the number of context documents is varied.}\label{fig:searchvread}
\end{figure*}
Several baselines rely on the retrieved context to extract the answer to a question. For these, we refer to the fraction of instances for which the correct answer is present in the context as \textit{Search Accuracy}. The performance of the baseline among these instances is referred to as the \textit{Reading Accuracy}, and the overall performance (which is a product of the two) is referred to as the \textit{Overall Accuracy}. In Figure~\ref{fig:searchvread} we compare how these three vary as the number of context documents is varied. Naturally, the search accuracy increases with the context size; at the same time, however, the reading accuracy decreases, since extracting the answer from longer contexts is harder. Hence, simply retrieving more documents is not sufficient -- finding the few most relevant ones will allow the reader to work best.
\begin{table*}[!htbp]
\small
\centering
\begin{tabular}{|l|c|c|c|c|c|c|c|}
\hline
\multirow{2}{*}{\textbf{Method}} & \multirow{2}{*}{\textbf{\begin{tabular}[c]{@{}c@{}}Optimal\\ Context\end{tabular}}} & \multicolumn{2}{c|}{\textbf{Search Acc}} & \multicolumn{2}{c|}{\textbf{Reading Acc}} & \multicolumn{2}{c|}{\textbf{Overall Acc}} \\ \cline{3-8}
& & \textbf{val} & \textbf{test} & \textbf{val} & \textbf{test} & \textbf{val} & \textbf{test} \\ \hline
\multicolumn{8}{|l|}{Human Performance} \\ \hline
Expert (CB) & -- & -- & -- & -- & -- & \textit{0.468} & -- \\
Non-Expert (OB) & -- & -- & -- & -- & -- & \textit{0.500} & -- \\ \hline
\multicolumn{8}{|l|}{Language models} \\ \hline
3-gram & -- & -- & -- & -- & -- & 0.148 & 0.153 \\
4-gram & -- & -- & -- & -- & -- & 0.161 & 0.171 \\
5-gram & -- & -- & -- & -- & -- & 0.165 & 0.174 \\
BiRNN$\dagger$ & -- & -- & -- & -- & -- & \textbf{0.345} & \textbf{0.336} \\ \hline
\multicolumn{8}{|l|}{Short-documents} \\ \hline
WD & 10 & 0.40 & 0.43 & 0.250 & 0.249 & 0.100 & 0.107 \\
MF-e & 60 & 0.64 & 0.64 & 0.209 & 0.212 & 0.134 & 0.136 \\
MF-i & 90 & 0.67 & 0.68 & 0.237 & 0.234 & 0.159 & 0.159 \\
GA$\dagger$ & 70 & 0.65 & 0.65 & \textbf{0.486} & \textbf{0.483} & 0.315 & 0.316 \\ \hline
\multicolumn{8}{|l|}{Long-documents} \\ \hline
WD & 10 & 0.66 & 0.66 & 0.124 & 0.142 & 0.082 & 0.093 \\
MF-e & 15 & 0.69 & 0.69 & 0.185 & 0.197 & 0.128 & 0.136 \\
MF-i & 15 & 0.69 & 0.69 & 0.230 & 0.231 & 0.159 & 0.159 \\
GA$\dagger$ & 15 & 0.67 & 0.67 & 0.474 & 0.479 & 0.318 & 0.321 \\ \hline
\end{tabular}
\caption{\small Performance comparison on \textsc{Quasar-S}. CB: Closed-Book, OB: Open Book. Neural baselines are denoted with $\dagger$. Optimal context is the number of documents used for answer extraction, which was tuned to maximize the overall accuracy on validation set.}
\label{tab:quasars}
\end{table*}
\begin{table*}[!htbp]
\small
\centering
\begin{tabular}{|l|c|c|c|c|c|c|c|c|c|c|c|}
\hline
\multirow{3}{*}{\textbf{Method}} & \multirow{3}{*}{\textbf{\begin{tabular}[c]{@{}c@{}}Optimal \\ Context\end{tabular}}} & \multicolumn{2}{c|}{\multirow{2}{*}{\textbf{\begin{tabular}[c]{@{}c@{}}Search\\ Acc\end{tabular}}}} & \multicolumn{4}{c|}{\textbf{Reading Acc}} & \multicolumn{4}{c|}{\textbf{Overall Acc}} \\ \cline{5-12}
& & \multicolumn{2}{c|}{} & \multicolumn{2}{c|}{\textbf{exact}} & \multicolumn{2}{c|}{\textbf{f1}} & \multicolumn{2}{c|}{\textbf{exact}} & \multicolumn{2}{c|}{\textbf{f1}} \\ \cline{3-12}
& & \textbf{val} & \textbf{test} & \textbf{val} & \textbf{test} & \textbf{val} & \textbf{test} & \textbf{val} & \textbf{test} & \textbf{val} & \textbf{test} \\ \hline
\multicolumn{12}{|l|}{Human Performance} \\ \hline
Expert (CB) & -- & -- & -- & -- & -- & -- & -- & \textit{0.547} & -- & \textit{0.604} & -- \\
Non-Expert (OB) & -- & -- & -- & -- & -- & -- & -- & \textit{0.515} & -- & \textit{0.606} & -- \\ \hline
\multicolumn{12}{|l|}{Short-documents} \\ \hline
MF-i & 10 & 0.35 & 0.34 & 0.053 & 0.044 & 0.053 & 0.044 & 0.019 & 0.015 & 0.019 & 0.015 \\
WD & 20 & 0.40 & 0.39 & 0.104 & 0.082 & 0.104 & 0.082 & 0.042 & 0.032 & 0.042 & 0.032 \\
SW+D & 20 & 0.64 & 0.63 & 0.112 & 0.113 & 0.157 & 0.155 & 0.072 & 0.071 & 0.101 & 0.097 \\
SW & 10 & 0.56 & 0.53 & 0.216 & 0.205 & 0.299 & 0.271 & 0.120 & 0.109 & 0.159 & 0.144 \\
MF-e & 70 & 0.45 & 0.45 & 0.372 & 0.342 & 0.372 & 0.342 & 0.167 & 0.153 & 0.167 & 0.153 \\
GA$\dagger$ & 70 & 0.44 & 0.44 & \textbf{0.580} & \textbf{0.600} & \textbf{0.580} & \textbf{0.600} & 0.256 & \textbf{0.264} & 0.256 & 0.264 \\
BiDAF$\dagger$** & 10 & 0.57 & 0.54 & 0.454 & 0.476 & 0.509 & 0.524 & \textbf{0.257} & 0.259 & \textbf{0.289} & \textbf{0.285} \\ \hline
\multicolumn{12}{|l|}{Long-documents} \\ \hline
WD & 20 & 0.43 & 0.44 & 0.084 & 0.067 & 0.084 & 0.067 & 0.037 & 0.029 & 0.037 & 0.029 \\
SW & 20 & 0.74 & 0.73 & 0.041 & 0.034 & 0.056 & 0.050 & 0.030 & 0.025 & 0.041 & 0.037 \\
SW+D & 5 & 0.58 & 0.58 & 0.064 & 0.055 & 0.094 & 0.088 & 0.037 & 0.032 & 0.054 & 0.051 \\
MF-i & 20 & 0.44 & 0.45 & 0.185 & 0.187 & 0.185 & 0.187 & 0.082 & 0.084 & 0.082 & 0.084 \\
MF-e & 20 & 0.43 & 0.44 & 0.273 & 0.286 & 0.273 & 0.286 & 0.119 & 0.126 & 0.119 & 0.126 \\
BiDAF$\dagger$** & 1 & 0.47 & 0.468 & 0.370 & 0.395 & 0.425 & 0.445 & 0.17 & 0.185 & 0.199 & 0.208 \\
GA$\dagger$** & 10 & 0.44 & 0.44 & 0.551 & 0.556 & 0.551 & 0.556 & 0.245 & 0.244 & 0.245 & 0.244 \\ \hline
\end{tabular}
\caption{\small Performance comparison on \textsc{Quasar-T}. CB: Closed-Book, OB: Open Book. Neural baselines are denoted with $\dagger$. Optimal context is the number of documents used for answer extraction, which was tuned to maximize the overall accuracy on the validation set. **We were unable to run BiDAF with more than 10 short-documents / 1 long-document, and GA with more than 10 long-documents due to memory errors.}
\label{tab:quasart}
\end{table*}
In Tables~\ref{tab:quasars} and \ref{tab:quasart} we compare all baselines when the context size is tuned to maximize the overall accuracy on the validation set\footnote{The Search Accuracy for different baselines may be different despite the same number of retrieved context documents, due to different preprocessing requirements. For example, the SW baselines allow multiple tokens as answer, whereas WD and MF baselines do not.}. For \textsc{Quasar-S} the best performing baseline is the BiRNN language model, which achieves $33.6\%$ accuracy. The GA model achieves $48.3\%$ accuracy on the set of instances for which the answer is in context; however, a search accuracy of only $65\%$ means its overall performance is lower, so better retrieval would directly improve it. For \textsc{Quasar-T}, both neural models significantly outperform the heuristic models, with BiDAF getting the highest F1 score of $28.5\%$.
The best performing baselines, however, lag behind human performance by $16.4\%$ and $32.1\%$ for \textsc{Quasar-S} and \textsc{Quasar-T} respectively, indicating the strong potential for improvement. Interestingly, for human performance we observe that non-experts are able to match or beat the performance of experts when given access to the background corpus for searching the answers. We also emphasize that the human performance is limited by either the knowledge of the experts, or the usefulness of the search engine for non-experts; it should not be viewed as an upper bound for automatic systems which can potentially use the entire background corpus.
Further analysis of the human and baseline performance in each category of annotated questions is provided in Appendix~\ref{app:performance}.
\section{Conclusion}
We have presented the \textsc{Quasar} datasets for promoting research into two related tasks for QA -- searching a large corpus of text for relevant passages, and reading the passages to extract answers. We have also described baseline systems for the two tasks which perform reasonably but lag behind human performance. While the searching performance improves as we retrieve more context, the reading performance typically goes down. Hence, future work, in addition to improving these components individually, should also focus on joint approaches to optimizing the two on end-task performance. The datasets, including the documents retrieved by our system and the human annotations, are available at \url{https://github.com/bdhingra/quasar}.
\section*{Acknowledgments}
This work was funded by NSF under grants CCF-1414030 and IIS-1250956 and by grants from Google.
\bibliographystyle{emnlp_natbib}
\section{Introduction}
Methanol masers are excellent tracers of high-mass star formation (e.g. \citealt{walsh03}). Since their discovery \citep{menten91}, 6.7\,GHz methanol masers have been heralded as one of the most important spectral lines in astronomy; each of the transitions of the class~II~CH$_3$OH{} maser family, to which the 6.7\,GHz transition belongs, occurs only towards regions of high-mass star formation (HMSF; \citealt{walsh01,minier03,green12,breen13}). These class~II transitions are powered by mid-infrared emission from a nearby young stellar object (YSO), and dissipate as the H\,\textsc{ii}{} region resulting from HMSF evolves \citep{walsh98}. Consequently, class~II~CH$_3$OH{} masers signpost a specific evolutionary phase of the HMSF timeline.
On the other hand, class~I~CH$_3$OH{} maser transitions have a relatively uncertain connection to HMSF. Observations have shown that class~I~CH$_3$OH{} masers can be found towards many star-forming regions and evolutionary stages, but not consistently, and not necessarily with or without the presence of class~II masers \citep{ellingsen06,voronkov10a,breen10,ellingsen13}. Additionally, class~I maser emission has been found towards low-mass star formation \citep{kalenskii10}, supernova remnants \citep{pihlstrom14,mcewen14} and the centres of other galaxies \citep{ellingsen14,chen15}. In contrast to class~II masers, class~I masers are collisionally excited in dense molecular gas \citep{cragg92,voronkov10a,voronkov10b,voronkov14}.
Another collisionally-excited transition commonly found toward star-forming regions is thermal silicon monoxide (SiO{}~$v=0$), which traces a wide range of shocks \citep{nguyen-luong13,widmann16}, and has a rest frequency close to that of the class~I~CH$_3$OH{} maser. Thus, it is prudent to compare these spectral lines to learn more about class~I~CH$_3$OH{} masers. Additionally, high-density gas tracers such as carbon monosulfide (CS{}) highlight regions with potentially sufficient molecular gas abundance for masing. Investigations of the interstellar medium utilising these spectral lines, which all occur in the 7\,mm waveband, form the primary motivation for the Millimetre Astronomer's Legacy Team -- 45\,GHz survey (\mbox{MALT-45}{}; \citealt{jordan13,jordan15}). \mbox{MALT-45}{} is an unbiased, sensitivity-limited auto-correlation survey of spectral lines in the 7\,mm waveband, primarily surveying CS{}~$J$=(1--0), class~I~CH$_3$OH{} masers (the 44\,GHz 7(0,7)--6(1,6) A$^+$ transition) and SiO{}~$J$=(1--0); a table of all surveyed spectral lines is contained in Table~\ref{tab:spectral_lines}. Prior to the \mbox{MALT-45}{} survey, class~I~CH$_3$OH{} masers were primarily found towards regions containing other masers or regions of shocked gas, such as extended green objects (EGOs; \citealt{cyganowski08}); very little was known about regions containing only class~I maser emission.
The extent of the \mbox{MALT-45}{} survey is currently detailed in a single paper (\citealt{jordan15}; hereafter Paper~I). \mbox{MALT-45}{} is unique in that it utilises the Australia Telescope Compact Array (ATCA) by processing auto-correlated data (although cross-correlated data are simultaneously collected, they are not used due to extremely poor \emph{uv}-coverage). Within the mapped region $330^{\circ} \leq l \leq 335^{\circ}$, $b=\pm0.5^{\circ}$, Paper~I details the detection of 77 class~I methanol masers, 58 of which were new detections. For the first time, with a flux-density-limited sample of class~I sources, we are able to assess the bulk properties of these masers. Paper~I, the first in this series of \mbox{MALT-45}{} publications, briefly investigated the association of methanol masers with other detected spectral lines, but was ultimately limited by the spatial resolution of the auto-correlation survey ($\sim$1~arcmin). In this paper, we detail the results of interferometric follow-up observations towards each of the class~I~CH$_3$OH{} masers observed in Paper~I, in order to derive accurate positions.
\section{Observations}
\begin{table*}
\begin{center}
\caption{Bright spectral lines from 42.2 to 49.2\,GHz, targeted by \mbox{MALT-45}{} and these observations. Column 1 lists the spectral line. Column 2 lists the rest frequency of the line. Column 3 classifies the line as either a maser or thermal line. Column 4 gives the ATCA half power beam width at the corresponding rest frequency. Column 5 indicates whether this line is catalogued in this paper (`Y') or not (`N'); this refers to the inclusion of Gaussian fits to the spectral emission from these lines. Column 6 lists the median RMS noise level for auto-correlation data per 32\,kHz spectral channel, with errors representing the standard deviation. Radio recombination line (RRL) frequencies are taken from \citet{lilley68}. All other rest frequencies are taken from the Cologne Database for Molecular Spectroscopy (CDMS; \citealt{muller05,mueller13}). Note that cross-correlation noise levels are given in Table~\ref{tab:meth_targets}.}
\label{tab:spectral_lines}
\begin{tabular}{ lcccc c }
\hline
Spectral line & Rest frequency & Maser or & Beam size & Catalogued in & Median RMS \\
& (GHz) & thermal? & (arcsec) & this paper? & noise level \\
\hline
SiO{}~(1--0) $v=3$ & 42.51938 & Maser & 66 & N & \\
SiO{}~(1--0) $v=2$ & 42.82059 & Maser & 66 & N & \\
H53$\alpha${} (RRL) & 42.95197 & Thermal & 65 & Y & 6.5$\pm$1.0\,mK \\
SiO{}~(1--0) $v=1$ & 43.12207 & Maser & 65 & N & \\
SiO{}~(1--0) $v=0$ & 43.42385 & Thermal & 65 & Y & 5.9$\pm$0.67\,mK \\
CH$_3$OH{}~7(0,7)--6(1,6) A$^+$ & 44.06941 & Maser (Class~I) & 64 & Y & 170$\pm$62\,mJy \\
H51$\alpha${} (RRL) & 48.15360 & Thermal & 58 & N & \\
C$^{34}$S{}~(1--0) & 48.20694 & Thermal & 58 & N & 8.1$\pm$0.85\,mK \\
CH$_3$OH{}~1$_0$--0$_0$ A$^+$ & 48.37246 & Thermal & 58 & Y & 7.7$\pm$1.8\,mK \\
CH$_3$OH{}~1$_0$--0$_0$ E & 48.37689 & Thermal & 58 & N & \\
OCS~(4--3) & 48.65160 & Thermal & 58 & N & \\
CS{}~(1--0) & 48.99095 & Thermal & 57 & Y & 8.9$\pm$1.3\,mK \\
\hline
\end{tabular}
\end{center}
\end{table*}
Observations were conducted on 2013 September 7 and 8 using the ATCA. The array configuration was 1.5A, which has 5 antennas distributed in the East-West direction with baselines ranging from 153\,m to 1.5\,km; the maximum baseline is 4.5\,km including antenna 6 (CA06), however, due to poor weather, baselines including CA06 had bad phase stability. Consequently, data from CA06 is not included in this paper.
The correlator was programmed in the 64M-32k mode, which provides `zoom windows' for enhanced spectral resolution. In this mode, each zoom window has a channel resolution of 32\,kHz over a bandwidth of 64\,MHz. This same correlator mode was used in the observations detailed by Paper~I; however, unlike Paper~I, these observations made use of `stitched zooms': rather than using individual zoom windows, multiple zooms may be joined to increase the bandwidth over a spectral line, while maintaining the same spectral resolution. Each spectral line in these observations was observed with stitched zoom windows covering 96\,MHz (except for the window containing H51$\alpha${} and C$^{34}$S{}~(1--0), which covers 224\,MHz). For observations at 7\,mm, the 32\,kHz channel resolution corresponds to a spectral resolution of approximately 0.21\,\kms{} per channel. The spectral lines observed are identical to those of Paper~I; the list of observed lines as well as velocity coverage information and noise statistics are contained in Table~\ref{tab:spectral_lines}. Since Paper~I, the rest frequencies of the SiO{}~(1--0) molecular lines have been updated \citep{mueller13}.
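The quoted channel width follows from the radio-convention relation $\Delta v = c\,\Delta\nu/\nu$; a quick numerical check at the 44\,GHz methanol maser frequency:

```python
def channel_velocity_width(channel_width_hz, rest_freq_hz):
    """Velocity width of one spectral channel via the radio convention,
    dv = c * dnu / nu, with c in km/s. (Simple sanity-check sketch.)"""
    c_kms = 299792.458  # speed of light in km/s
    return c_kms * channel_width_hz / rest_freq_hz

# 32 kHz channels at the 44.06941 GHz class I CH3OH maser rest frequency
dv = channel_velocity_width(32.0e3, 44.06941e9)  # ~0.22 km/s per channel
```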
The class~I~CH$_3$OH{} masers targeted in these observations are listed in Table~\ref{tab:meth_targets}. This target list was derived from a preliminary reduction of the \mbox{MALT-45}{} survey and hence does not exactly match the maser sites presented in Paper~I; however, only six sites are not common to both papers. We include four sites in the current observations (G331.44$-$0.14, G331.44$-$0.16, G333.01$-$0.46 and G333.12$-$0.43) that were not listed in Paper~I, and we have not targeted two that are listed in Paper~I (G331.36$-$0.02 and G334.64$+$0.44), because they are tenuous detections that became apparent only in an improved processing of the \mbox{MALT-45}{} survey, after the observations detailed by this paper were conducted. Hence, a total of 79 targets were observed over the two days of observations. For more details, refer to Section~\ref{sec:results}.
To obtain good \emph{uv}-coverage, each target was observed multiple times over a single observing session. Each individual observation of a target lasted one minute (one `cut'), and each target has at least seven cuts, but some have up to ten; the noise levels for each target at the 44\,GHz class~I~CH$_3$OH{} maser transition frequency are also listed in Table~\ref{tab:meth_targets}. Bandpass calibration was derived from PKS B1253$-$055, phase calibration from PMN J1646$-$5044 and flux-density calibration from PKS B1934$-$638. \citet{partridge16} give a flux density scale for B1934$-$638, which yields a flux density of 0.31\,Jy at the rest frequency of the 44\,GHz class~I~CH$_3$OH{} maser spectral line. However, we used the recommended flux density of 0.39\,Jy derived from internal work with the ATCA. In addition to the cross-correlation data, auto-correlation data was simultaneously recorded and used; see Section~\ref{subsec:auto_correlation}.
\subsection{Data reduction}
\subsubsection{Cross-correlation}
\label{subsec:cross_correlation}
Class~I~CH$_3$OH{} maser data were reduced using standard interferometric techniques with \textsc{MIRIAD} \citep{sault95}. For all data cubes, the synthesised beam is approximately 0.5$\times$1~arcsec in right ascension and declination, respectively. Similar to \citet{voronkov14}, the results presented in this paper make use of self-calibrated data cubes. Self-calibration affects the absolute positional accuracy of the data; to remove this effect, a reference maser spot was chosen from data with no self-calibration applied. The position of this reference spot was determined before and after self-calibration, and the measured difference was used to determine the absolute positions for all spectral features in the self-calibrated data cube. Ideally, the reference position does not shift significantly ($<$0.5~arcsec) under self-calibration, but some significant offsets were observed in these data due to poor weather conditions during the observations, which lead to large corrections in the self-calibration model. Models were selected from the brightest channel of emission from CLEANed data, which is generally effective, but occasionally the quality of calibration was so poor that the resulting self-calibrated data experience a dramatic spatial shift. Despite this, we are encouraged by the accuracy of positions once a reference position has been subtracted; for example, G333.23$-$0.06 experienced poor calibration, but the difference in position of its brightest maser spot between this work and that of \citet{voronkov14} is 1.2~arcsec. This difference may result from a combination of genuine maser variability and calibration uncertainties. Prior experience with the ATCA in this waveband suggests that, in the best-case scenario, the standard phase-transfer calibration procedure yields an absolute positional accuracy of approximately 0.5~arcsec (1$\sigma$) \citep{voronkov14}.
However, given that we experience non-ideal self-calibration offsets, the absolute positional error for any maser spot should be considered to be 1~arcsec.
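The offset correction described above amounts to a simple shift. A minimal sketch, assuming small offsets so that a flat (RA, Dec) shift in arcsec is adequate; the function name and array layout are ours, not part of \textsc{MIRIAD}:

```python
import numpy as np

def restore_absolute_positions(spot_positions, ref_before, ref_after):
    """Shift self-calibrated spot positions back to the absolute frame.

    spot_positions : (N, 2) array of (RA, Dec) positions, in arcsec,
                     measured from the self-calibrated cube.
    ref_before     : (RA, Dec) of the reference spot with no self-calibration.
    ref_after      : (RA, Dec) of the same spot after self-calibration.
    """
    # The self-calibration shift is removed by adding back the
    # (before - after) difference measured on the reference spot.
    shift = np.asarray(ref_before, float) - np.asarray(ref_after, float)
    return np.asarray(spot_positions, float) + shift
```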
Data cubes were produced for each observed target and used to identify maser spots by visual inspection. In this paper, a maser spot refers to a spatial and spectral peak of emission. Spots were identified if the spectrum contained at least three consecutive channels of 3$\sigma$ emission, with a peak of at least 5$\sigma$, within the same pixel (0.2~arcsec). We adopt this conservative identification approach so the reader can be confident that all identified maser spots are real. A small number of weaker, narrow features may not satisfy these criteria and are therefore excluded; however, from past experience and comparison of our cubes with previous observations (where available), we suggest that this is uncommon. The requirement of three significant consecutive channels limits our maser detections to velocity widths of at least 0.65\,\kms{}. Similar to \citet{voronkov14}, dynamic-range limitations exist for channels containing extremely bright maser emission, reducing our ability to identify real maser emission at velocities similar to the bright feature; maser spots believed to be caused by such dynamic-range artefacts were excluded, as determined by visual inspection.
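The per-spectrum part of the selection criteria can be sketched as follows (an illustrative Python fragment; \texttt{is\_maser\_detection} is our name, not part of any reduction package, and the spatial coincidence test within one pixel is not shown):

```python
import numpy as np

def is_maser_detection(spectrum, rms, n_consec=3, low_snr=3.0, peak_snr=5.0):
    """True if the spectrum has >= n_consec consecutive channels above
    low_snr*rms, with a peak of at least peak_snr*rms within that run."""
    spectrum = np.asarray(spectrum, dtype=float)
    above = spectrum > low_snr * rms
    run_start = None
    for i, flag in enumerate(above):
        if flag and run_start is None:
            run_start = i  # a run of significant channels begins
        if (not flag or i == len(above) - 1) and run_start is not None:
            end = i + 1 if flag else i  # run ends here (inclusive if last)
            if end - run_start >= n_consec and \
                    spectrum[run_start:end].max() >= peak_snr * rms:
                return True
            run_start = None
    return False
```

Both conditions must hold within the same run of channels: three channels at 4$\sigma$ alone would fail the peak criterion, and a single 6$\sigma$ channel would fail the consecutive-channel criterion.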
The position of each maser spot was fitted with the \textsc{MIRIAD} task \textsc{imfit} using each channel of significant emission ($>$3$\sigma$). \textsc{imfit} used a 3$\times$3~arcsec box around each maser spot to generate a position, except for those with nearby bright maser spots. To accurately position weaker maser spots close to bright ones, either the velocity range of emission used for position fitting was altered, or the size of the fitting box was decreased. Decreasing the box size for \textsc{imfit} was found not to significantly affect the resultant positions, so we do not separately flag maser spots fitted with these modified procedures. The channel-fitted positions were then averaged with flux-density weighting, and the resulting positions are listed in Appendix~\ref{app:meth_detail}. The quoted uncertainty reflects the 1$\sigma$ positional uncertainty of the channel-fitted positions. In some cases, the maser spot positions have uncertainties of a few tens of milliarcsec; we caution that the ATCA is unlikely to achieve this relative positional accuracy in these observations, as dynamic range and calibration uncertainties become the limiting factors. In general, the class~I maser spots were spatially distinguished by our observations, with separations of 1~arcsec or greater, but occasionally a spot lay very near a bright neighbour. In these cases, positions were assigned manually from the brightest pixel, with a cautious positional uncertainty of 100~milliarcsec in right ascension and 200~milliarcsec in declination. Such maser spots are designated with an asterisk ($*$) in Appendix~\ref{app:meth_detail}.
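The flux-density-weighted averaging of the channel-fitted positions can be sketched as below; this is a minimal illustration in flat (RA, Dec) arcsec offsets, and \texttt{weighted\_spot\_position} is an illustrative name rather than a \textsc{MIRIAD} routine:

```python
import numpy as np

def weighted_spot_position(channel_positions, channel_fluxes):
    """Flux-density-weighted mean of per-channel fitted positions.

    channel_positions : (N, 2) array of (RA, Dec) fits, one per channel.
    channel_fluxes    : (N,) array of channel peak flux densities (Jy).
    Returns the weighted mean position and the weighted 1-sigma scatter.
    """
    pos = np.asarray(channel_positions, float)
    w = np.asarray(channel_fluxes, float)
    mean = (pos * w[:, None]).sum(axis=0) / w.sum()
    # Weighted variance about the weighted mean, per coordinate.
    var = (w[:, None] * (pos - mean) ** 2).sum(axis=0) / w.sum()
    return mean, np.sqrt(var)
```

Weighting by flux density ensures that the brightest channels, which have the best-determined fits, dominate the quoted spot position.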
Spectral information for each maser spot was derived from the image cubes, which were then characterised with Gaussian fits. All residuals are within 10 per cent of the fitted peak intensity. Coupled with uncertainties in calibration, we expect a 20 per cent uncertainty for all quoted flux-density values.
\subsubsection{Auto-correlation}
\label{subsec:auto_correlation}
Each observation target was also searched for CS{}~(1--0), SiO{}~(1--0) $v=0$ and CH$_3$OH{}~1$_0$--0$_0$~A$^+$. However, due to their thermal nature and the \emph{uv}-coverage of the observations (minimum baseline length of 153\,m), these thermal emission lines were not readily detectable in the cross-correlation data. To characterise these lines, a \textsc{Ruby} script was written to assist with the analysis of the auto-correlated data. Unlike the observations detailed in Paper~I, no dedicated reference (i.e. off-source) position was observed in this work. Instead, data collected at a location where the emission is known to be weak, with a simple spectral profile, were used as the reference spectrum for the auto-correlations. The reduction pipeline was as follows:
(i) For each source, a raw auto-correlation spectrum was produced for each antenna and each cut;
(ii) For each cut on each antenna, a quotient spectrum (i.e.~$\rfrac{\text{on}-\text{off}}{\text{off}}$) was formed from the source and reference cut data;
(iii) A first-order polynomial was fitted to, and removed from, each spectrum. The velocity range of the fit was restricted to between $-$160 and 20\,\kms{}, as this contains all of the emission in this section of the Galaxy, and constraining the fit improves the quality of the resulting spectra;
(iv) All quotient spectra from each antenna were averaged together, before all antennas were averaged together;
(v) The flux-density calibration was achieved with antenna efficiencies and point-source sensitivities taken from \citet{urquhart10}.
This procedure is graphically demonstrated with the CS{} emission of G330.95$-$0.18 in Fig.~\ref{fig:autocorrelation_procedure}.
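Steps (ii)--(iv) of the pipeline can be sketched as follows. This is an illustrative Python fragment only (the actual pipeline was a \textsc{Ruby} script); \texttt{reduce\_cut} and \texttt{average\_antenna\_spectra} are our names, and the flux-density calibration step (v) is omitted:

```python
import numpy as np

def reduce_cut(on, off, velocities, fit_range=(-160.0, 20.0)):
    """Form a quotient spectrum and remove a first-order baseline.

    on, off    : raw auto-correlation spectra for source and reference cuts.
    velocities : LSR velocity of each channel (km/s).
    """
    on, off, velocities = (np.asarray(a, float) for a in (on, off, velocities))
    quotient = (on - off) / off
    # Fit a first-order (linear) polynomial over the restricted velocity
    # range only, then subtract it from the full spectrum.
    sel = (velocities >= fit_range[0]) & (velocities <= fit_range[1])
    coeffs = np.polyfit(velocities[sel], quotient[sel], 1)
    return quotient - np.polyval(coeffs, velocities)

def average_antenna_spectra(spectra_per_antenna):
    """Average all cut spectra within each antenna, then across antennas."""
    per_antenna = [np.mean(cuts, axis=0) for cuts in spectra_per_antenna]
    return np.mean(per_antenna, axis=0)
```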
For the spectral-line data processed in this way, spectra were smoothed with a Hanning window of 9 spectral channels, and up to two Gaussians were fitted to each spectrum. This smoothing results in a velocity resolution of $\sim$0.39\,\kms{} for CS{}, $\sim$0.44\,\kms{} for SiO{} $v=0$ and $\sim$0.40\,\kms{} for CH$_3$OH{} 1$_0$--0$_0$ A$^+$. As with the cross-correlation data, Gaussians were fitted to minimise residuals; however, occasionally residuals of 10 per cent were unavoidable. With the additional flux-density calibration uncertainty applied to auto-correlation data, we place a 20 per cent uncertainty on all derived intensity values.
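The 9-channel Hanning smoothing can be sketched as a normalised convolution; this minimal fragment assumes a NumPy-style implementation and is not the script actually used:

```python
import numpy as np

def hanning_smooth(spectrum, width=9):
    """Smooth a spectrum with a normalised Hanning window of `width` channels.

    The window is normalised to unit sum so that the flux-density scale
    of the spectrum is preserved.
    """
    window = np.hanning(width)
    window /= window.sum()
    # mode="same" keeps the original number of channels (edges are tapered).
    return np.convolve(np.asarray(spectrum, float), window, mode="same")
```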
This auto-correlation procedure was also used for class~I~CH$_3$OH{} masers, which we compare with the cross-correlation data in Section~\ref{sec:cross_vs_auto}.
\section{Results}
\label{sec:results}
The basic properties of each observed target are listed in Table~\ref{tab:meth_targets}. Note that five sources were not detected in cross-correlation by these observations, but three of these five were detected in auto-correlation (G331.44$-$0.14, G331.72$-$0.20 and G333.24$+$0.02). The detection of these three sites in auto-correlation but not cross-correlation could be due either to poor weather (causing significant decorrelation) or to the emission regions being large, and therefore resolved out in the cross-correlation data. The fact that the spectral profile of these three sources is consistent with maser emission (although relatively weak maser emission; each of the three sources is less than $\sim$2\,Jy) suggests that poor weather is the most likely explanation for their non-detection. Further discussion of cross-correlation spectra compared to the auto-correlation data for the same source is contained in Section~\ref{sec:cross_vs_auto}.
The remaining two sources, G330.83$+$0.18 and G331.21$+$0.10, were detected in October~2013 with peak flux densities of 3.1 and 3.5\,Jy. The median RMS noise of the survey observations was 0.90\,Jy, meaning that they were detected at the 3.4 and 3.8$\sigma$ levels, respectively. It is therefore possible that these were spurious detections, or, alternatively, that their peak flux densities have varied to below the 0.85\,Jy 5$\sigma$ detection limit of the current observations. A variation factor of four seems reasonable for 44\,GHz masers (e.g. variations larger than four were found in two sources detailed by \citealt{kurtz04}), although there have been no dedicated variability studies. Given the current data and the low significance of the original detections, these two sources cannot be considered reliable detections, and as such we exclude them from further analysis. For the remainder of this paper, we discuss the 77 regions containing 44\,GHz class~I~CH$_3$OH{} emission.
\begin{figure}
\centering
\includegraphics[width=0.9\columnwidth]{figures/autocorrelation_pipeline.pdf}
\caption{A graphical representation of the reduction procedure for auto-correlated data in this paper. The data from each source cut are combined with the nearest reference cut in time to produce a baselined spectrum. All baselined spectra are then averaged to produce an auto-correlation product. This procedure is performed for each antenna of the ATCA, and all products are then averaged together for a final product. The final panel contains this final product spectrum (red) and the same spectrum after being Hanning-smoothed (black). This example uses two Gaussians (blue) to characterise the data.}
\label{fig:autocorrelation_procedure}
\end{figure}
We have classified our class~I~CH$_3$OH{} masers as associated with other maser transitions or selected HMSF tracers if they fall within 60~arcsec of each other, to be consistent with \citet{voronkov14}. The associations are detailed in Table~\ref{tab:meth_assoc}. Each site is compared with published positions of class~II~CH$_3$OH{} \citep{caswell11}, water (H$_2$O{}) \citep{breen10,walsh11,walsh14} and hydroxyl (OH{}) masers \citep{sevenster97,caswell98}, as well as ATLASGAL clumps \citep{contreras13} and EGOs \citep{cyganowski08}. The presence of each thermal line is determined by the method described in Section~\ref{subsec:auto_correlation}, using data from these observations. CS{} is detected towards every source and is therefore not listed in Table~\ref{tab:meth_assoc}. The Gaussian parameters determined for each of the thermal lines are listed in Appendix~\ref{app:thermal_detail}. Uncertainties for each of the fitted parameters are quoted; however, we remind the reader to consider the 20 per cent uncertainty adopted for the absolute flux-density scale.
Note that \citet{walsh11} details single-dish positions of H$_2$O{} masers, while \citet{walsh14} provides high-resolution follow-up positions of these same masers. Hence, we use positions from \citet{walsh14} wherever possible; however, two H$_2$O{} masers (G331.86$+$0.06 and G333.46$-$0.16) were not detected in the follow-up observations, and thus we use their single-dish positions for comparisons in this paper. The non-detection of these masers is most likely due to intrinsic variability.
Kinematic distances are also contained in Table~\ref{tab:meth_assoc}. Distances were calculated using the program supplied by \citet{reid09}. All kinematic distance ambiguities were assumed to be at the `near' distance, except for sources which have been prescribed as `far' by \citet{green11}. All distances presented in this paper use the same parameters as Paper~I, specifically: $\Theta_0=246$\,\kms{}, $R_\odot=8.4$\,kpc, $U_\odot=11.1$\,\kms{}, $V_\odot=12.2$\,\kms{}, $W_\odot=7.25$\,\kms{}, $U_s=0$\,\kms{}, $V_s=-15.0$\,\kms{}, $W_s=0$\,\kms{}, $\sigma(V_{LSR})=3.32$\,\kms{}.
We consider all maser spots detected toward a single target to be part of the same maser site. There is potential for this to artificially combine multiple maser sites into a single site; however, Appendix~\ref{app:glimpse} shows that this is uncommon. To illustrate the derived properties of masers, a sample of Gaussian fits to spectra is included in Table~\ref{tab:meth_sample}. The complete collection of Gaussian spectral fits is included in Appendix~\ref{app:meth_detail}. Maser positions overlaid on infrared images from the \emph{Spitzer} Galactic Legacy Infrared Mid-Plane Survey Extraordinaire (GLIMPSE; \citealt{benjamin03}) can be seen in Appendix~\ref{app:glimpse}.
\begin{table*}
\begin{center}
\caption{Observational targets for class~I~CH$_3$OH{} masers. Column 1 lists the site name taken from Paper~I. Column 2 lists the refined site name, which is determined from the mean position of maser spots within a single maser site. The refined site names are used throughout this paper. A refined name is not supplied if no maser emission was detected in this work. Columns 3 and 4 list the interferometric phase centre for these observations (central maser positions are listed in Table~\ref{tab:meth_assoc}). Column 5 gives the date on which the site was observed (7 or 8 September 2013). Column 6 lists the off-source image-plane median RMS noise value for the self-calibrated cube. Column 7 lists the smallest radius of a circle to encompass all emission of the site. Column 8 lists the velocity range of emission detected in auto-correlation data. All maser sites were observed with an approximate local standard of rest velocity coverage between $-$353 and 195\,\kms{}. Note that sites with a radius of $<1$~arcsec contain either only one maser spot, or spots very close together. Sites without a specified radius were not detected in this work. Sites without a velocity range were not detected in cross- or auto-correlation (see Section~\ref{sec:results}). Sites labelled with an asterisk ($*$) were not listed by Paper~I, but have class~I~CH$_3$OH{} maser emission detected in these observations. Sites labelled with a dagger ($\dagger$) are designated as `young'; see Section~\ref{sec:evolutionary}. Sites labelled with a double dagger ($\ddagger$) have large shifts induced by self-calibration; see Section~\ref{subsec:cross_correlation}.}
\label{tab:meth_targets}
\begin{tabular}{ lcccc ccc }
\hline
\multicolumn{1}{c}{Paper~I} & Refined & \multicolumn{2}{c}{Interferometric phase centre} & Obs. & Median RMS & Radius & Velocity \\
\multicolumn{1}{c}{site name} & site name & $\alpha_{2000}$ & $\delta_{2000}$ & date & noise level & (arcsec) & range \\
& & (h:m:s) & ($^\circ$:$^\prime$:$^{\prime\prime}$) & (Sep. 2013) & (mJy) & & (\kms{}) \\
\hline
G330.30$-$0.39 & G330.294$-$0.393 & 16:07:37.0 & $-$52:30:59 & 8 & 46 & 4 & $-$82 to $-$76 \\
G330.67$-$0.40 & G330.678$-$0.402 & 16:09:30.6 & $-$52:16:08 & 8 & 42 &$<1$& $-$69 to $-$60 \\
G330.78$+$0.24$^\dagger$ & G330.779$+$0.249 & 16:07:12.2 & $-$51:43:07 & 8 & 42 &$<1$& $-$46 to $-$41 \\
G330.83$+$0.18 & & 16:07:40.6 & $-$51:43:54 & 8 & 44 & & \\
G330.87$-$0.36 & G330.876$-$0.362 & 16:10:16.6 & $-$52:05:50 & 8 & 45 & 22 & $-$66 to $-$57 \\
G330.88$-$0.38 & G330.871$-$0.383 & 16:10:21.1 & $-$52:06:42 & 8 & 44 &$<1$& $-$67 to $-$57 \\
G330.92$-$0.41$^\dagger$ & G330.927$-$0.408 & 16:10:44.2 & $-$52:05:56 & 8 & 45 & 3 & $-$44 to $-$40 \\
G330.93$-$0.26$^\dagger$ & G330.931$-$0.260 & 16:10:06.6 & $-$51:59:23 & 8 & 43 & 1 & $-$91 to $-$87 \\
G330.95$-$0.18 & G330.955$-$0.182 & 16:09:52.0 & $-$51:54:59 & 8 & 46 &$<1$& $-$98 to $-$84 \\
G331.13$-$0.48$^\dagger$ & G331.131$-$0.470 & 16:11:59.8 & $-$52:00:32 & 8 & 44 &$<1$& $-$70 to $-$65 \\
G331.13$-$0.25 & G331.132$-$0.244 & 16:10:59.7 & $-$51:50:25 & 8 & 46 & 10 & $-$93 to $-$78 \\
G331.13$-$0.50 & G331.134$-$0.488 & 16:12:05.9 & $-$52:01:33 & 8 & 37 &$<1$& $-$68 to $-$68 \\
G331.13$+$0.15$^\dagger$ & G331.134$+$0.156 & 16:09:14.8 & $-$51:32:47 & 8 & 46 & 4 & $-$79 to $-$73 \\
G331.21$+$0.10 & & 16:09:50.8 & $-$51:32:39 & 8 & 44 & & \\
G331.29$-$0.20 & G331.279$-$0.189 & 16:11:27.0 & $-$51:41:54 & 8 & 45 & 21 & $-$95 to $-$84 \\
G331.34$-$0.35 & G331.341$-$0.347 & 16:12:25.6 & $-$51:46:16 & 8 & 46 &$<1$& $-$67 to $-$64 \\
G331.37$-$0.40$^\dagger$ & G331.370$-$0.399 & 16:12:48.2 & $-$51:47:26 & 8 & 44 &$<1$& $-$66 to $-$64 \\
G331.37$-$0.13$^\dagger$ & G331.371$-$0.145 & 16:11:40.3 & $-$51:35:52 & 8 & 47 &$<1$& $-$89 to $-$86 \\
G331.39$+$0.15$^\dagger$ & G331.380$+$0.149 & 16:10:26.2 & $-$51:22:52 & 8 & 46 &$<1$& $-$47 to $-$43 \\
G331.41$-$0.17$^\dagger$ & G331.409$-$0.164 & 16:11:59.2 & $-$51:35:33 & 8 & 45 &$<1$& $-$85 to $-$85 \\
G331.44$-$0.14$^{*\dagger}$& & 16:11:57.9 & $-$51:33:08 & 8 & 45 & & $-$87 to $-$84 \\
G331.44$-$0.19 & G331.440$-$0.187 & 16:12:11.5 & $-$51:35:02 & 8 & 44 & 5 & $-$92 to $-$85 \\
G331.44$-$0.16$^{*\dagger}$&G331.442$-$0.158& 16:12:05.0 & $-$51:33:45 & 8 & 40 &$<1$& $-$87 to $-$85 \\
G331.50$-$0.08 & G331.492$-$0.082 & 16:11:59.6 & $-$51:28:14 & 8 & 46 & 22 & $-$93 to $-$84 \\
G331.50$-$0.10 & G331.503$-$0.109 & 16:12:10.5 & $-$51:29:23 & 8 & 44 & 17 & $-$101 to $-$86 \\
G331.52$-$0.08 & G331.519$-$0.082 & 16:12:07.5 & $-$51:27:25 & 8 & 44 & 8 & $-$93 to $-$85 \\
G331.54$-$0.10 & G331.530$-$0.099 & 16:12:14.5 & $-$51:27:34 & 8 & 45 & 19 & $-$95 to $-$85 \\
G331.55$-$0.07 & G331.544$-$0.067 & 16:12:11.3 & $-$51:25:29 & 8 & 45 & 7 & $-$92 to $-$85 \\
G331.56$-$0.12 & G331.555$-$0.122 & 16:12:26.7 & $-$51:27:41 & 8 & 44 & 3 & $-$104 to $-$97 \\
G331.72$-$0.20$^\dagger$ & & 16:13:34.4 & $-$51:24:25 & 8 & 46 & & $-$49 to $-$45 \\
G331.86$-$0.13$^\dagger$ & G331.853$-$0.129 & 16:13:52.3 & $-$51:15:42 & 8 & 46 &$<1$& $-$52 to $-$48 \\
G331.88$+$0.06$^\dagger$ & G331.887$+$0.063 & 16:13:10.5 & $-$51:06:09 & 8 & 44 & 12 & $-$91 to $-$84 \\
G331.92$-$0.08$^\dagger$ & G331.921$-$0.083 & 16:13:58.0 & $-$51:10:56 & 8 & 46 &$<1$& $-$53 to $-$51 \\
G332.09$-$0.42$^\dagger$ & G332.092$-$0.420 & 16:16:15.6 & $-$51:18:31 & 8 & 46 & 7 & $-$59 to $-$54 \\
G332.24$-$0.05$^\dagger$ & G332.240$-$0.044 & 16:15:17.6 & $-$50:55:59 & 8 & 47 & 8 & $-$51 to $-$46 \\
G332.30$-$0.09 & G332.295$-$0.094 & 16:15:46.0 & $-$50:55:54 & 8 & 46 & 8 & $-$55 to $-$45 \\
G332.32$+$0.18$^\dagger$ & G332.318$+$0.179 & 16:14:40.0 & $-$50:43:11 & 8 & 45 & 10 & $-$50 to $-$44 \\
G332.36$-$0.11 & G332.355$-$0.114 & 16:16:07.3 & $-$50:54:16 & 8 & 45 &$<1$& $-$51 to $-$49 \\
G332.59$+$0.15$^\dagger$ & G332.583$+$0.147 & 16:15:58.5 & $-$50:33:22 & 8 & 45 &$<1$& $-$45 to $-$42 \\
G332.60$-$0.17$^\dagger$ & G332.604$-$0.167 & 16:17:28.1 & $-$50:46:20 & 8 & 46 & 2 & $-$48 to $-$44 \\
G332.72$-$0.05$^\dagger$ & G332.716$-$0.048 & 16:17:28.6 & $-$50:36:34 & 8 & 45 & 2 & $-$41 to $-$38 \\
G333.00$-$0.43 & G333.002$-$0.437 & 16:20:27.7 & $-$50:41:06 & 8 & 45 &$<1$& $-$57 to $-$55 \\
G333.01$-$0.46$^{* \ddagger}$ & G333.014$-$0.466 & 16:20:37.1 & $-$50:41:31 & 7 & 41 &$<1$& $-$55 to $-$52 \\
G333.02$-$0.06$^\dagger$ & G333.029$-$0.063 & 16:18:55.9 & $-$50:24:03 & 8 & 43 & 1 & $-$43 to $-$39 \\
G333.03$-$0.02$^\dagger$ & G333.029$-$0.024 & 16:18:45.9 & $-$50:22:21 & 7 & 41 &$<1$& $-$43 to $-$41 \\
G333.07$-$0.44$^\ddagger$ & G333.068$-$0.446 & 16:20:49.6 & $-$50:38:51 & 7 & 46 &$<1$& $-$55 to $-$51 \\
G333.07$-$0.40 & G333.071$-$0.399 & 16:20:37.0 & $-$50:36:33 & 7 & 42 & 6 & $-$54 to $-$51 \\
G333.10$-$0.51 & G333.103$-$0.502 & 16:21:13.4 & $-$50:39:56 & 7 & 41 & 11 & $-$60 to $-$53 \\
G333.12$-$0.43$^*$ & G333.121$-$0.433 & 16:20:58.5 & $-$50:35:41 & 7 & 54 & 3 & $-$57 to $-$44 \\
G333.13$-$0.44 & G333.126$-$0.439 & 16:21:02.3 & $-$50:35:53 & 7 & 55 & 12 & $-$57 to $-$42 \\
G333.14$-$0.42 & G333.137$-$0.427 & 16:21:03.0 & $-$50:34:59 & 7 & 49 & 17 & $-$57 to $-$43 \\
G333.16$-$0.10 & G333.162$-$0.101 & 16:19:41.4 & $-$50:19:55 & 7 & 35 & 4 & $-$93 to $-$90 \\
G333.18$-$0.09 & G333.184$-$0.090 & 16:19:45.2 & $-$50:18:45 & 7 & 37 & 10 & $-$89 to $-$84 \\
G333.22$-$0.40 & G333.220$-$0.402 & 16:21:16.9 & $-$50:30:17 & 7 & 42 & 4 & $-$57 to $-$49 \\
\hline
\end{tabular}
\end{center}
\end{table*}
\setcounter{table}{1}
\begin{table*}
\begin{center}
\caption{{\em - continued.}}
\begin{tabular}{ lcccc ccc }
\hline
\multicolumn{1}{c}{Paper~I} & Refined & \multicolumn{2}{c}{Interferometric phase centre} & Obs. & Median RMS & Radius & Velocity \\
\multicolumn{1}{c}{site name} & site name & $\alpha_{2000}$ & $\delta_{2000}$ & date & noise level & (arcsec) & range \\
& & (h:m:s) & ($^\circ$:$^\prime$:$^{\prime\prime}$) & (Sep. 2013) & (mJy) & & (\kms{}) \\
\hline
G333.23$-$0.06 & G333.233$-$0.061 & 16:19:49.4 & $-$50:15:17 & 7 & 42 & 14 & $-$96 to $-$81 \\
G333.24$+$0.02$^\dagger$ & & 16:19:26.0 & $-$50:11:29 & 7 & 42 & & $-$71 to $-$66 \\
G333.29$-$0.38 & G333.284$-$0.373 & 16:21:28.1 & $-$50:26:35 & 7 & 43 &$<1$& $-$55 to $-$49 \\
G333.30$-$0.35 & G333.301$-$0.352 & 16:21:24.5 & $-$50:24:47 & 7 & 40 &$<1$& $-$54 to $-$48 \\
G333.31$+$0.10 & G333.313$+$0.106 & 16:19:27.6 & $-$50:04:45 & 7 & 55 & 8 & $-$51 to $-$43 \\
G333.33$-$0.36 & G333.335$-$0.363 & 16:21:36.4 & $-$50:23:53 & 7 & 44 & 18 & $-$55 to $-$46 \\
G333.37$-$0.20$^{\dagger \ddagger}$ & G333.376$-$0.202 & 16:21:05.2 & $-$50:15:18 & 7 & 44 & 5 & $-$63 to $-$56 \\
G333.39$+$0.02 & G333.387$+$0.031 & 16:20:08.2 & $-$50:04:49 & 7 & 39 & 3 & $-$73 to $-$68 \\
G333.47$-$0.16 & G333.467$-$0.163 & 16:21:19.5 & $-$50:09:42 & 7 & 40 & 12 & $-$49 to $-$38 \\
G333.50$+$0.15$^\dagger$ & G333.497$+$0.143 & 16:20:06.5 & $-$49:55:25 & 7 & 42 & 2 & $-$115 to $-$111 \\
G333.52$-$0.27$^\ddagger$ & G333.523$-$0.275 & 16:22:03.4 & $-$50:12:08 & 7 & 42 &$<1$& $-$52 to $-$49 \\
G333.56$-$0.30 & G333.558$-$0.293 & 16:22:19.2 & $-$50:11:28 & 7 & 42 & 27 & $-$47 to $-$44 \\
G333.56$-$0.02$^\dagger$ & G333.562$-$0.025 & 16:21:07.9 & $-$49:59:49 & 7 & 44 &$<1$& $-$43 to $-$37 \\
G333.57$+$0.03$^\dagger$ & G333.569$+$0.028 & 16:20:55.4 & $-$49:57:25 & 7 & 42 &$<1$& $-$86 to $-$81 \\
G333.59$-$0.21$^\ddagger$ & G333.595$-$0.211 & 16:22:06.2 & $-$50:06:30 & 7 & 45 & 4 & $-$52 to $-$44 \\
G333.70$-$0.20 & G333.694$-$0.197 & 16:22:27.7 & $-$50:01:22 & 7 & 41 & 9 & $-$52 to $-$49 \\
G333.71$-$0.12$^{\dagger \ddagger}$ & G333.711$-$0.115 & 16:22:10.8 & $-$49:57:21 & 7 & 41 & 5 & $-$32 to $-$30 \\
G333.77$-$0.01$^{\dagger \ddagger}$ & G333.772$-$0.010 & 16:21:59.6 & $-$49:50:20 & 7 & 42 &$<1$& $-$91 to $-$88 \\
G333.77$-$0.25$^{\dagger \ddagger}$ & G333.773$-$0.258 & 16:23:02.8 & $-$50:00:31 & 7 & 40 &$<1$& $-$50 to $-$47 \\
G333.82$-$0.30$^\dagger$ & G333.818$-$0.303 & 16:23:29.5 & $-$50:00:40 & 7 & 41 & 1 & $-$50 to $-$46 \\
G333.90$-$0.10$^\ddagger$ & G333.900$-$0.098 & 16:22:56.3 & $-$49:48:43 & 7 & 41 & 4 & $-$66 to $-$61 \\
G333.94$-$0.14$^{\dagger \ddagger}$ & G333.930$-$0.133 & 16:23:14.1 & $-$49:48:40 & 7 & 44 & 5 & $-$43 to $-$39 \\
G333.98$+$0.07$^\dagger$ & G333.974$+$0.074 & 16:22:31.0 & $-$49:38:08 & 7 & 40 &$<1$& $-$62 to $-$57 \\
G334.03$-$0.04$^\dagger$ & G334.027$-$0.047 & 16:23:16.7 & $-$49:40:57 & 7 & 41 &$<1$& $-$85 to $-$83 \\
G334.74$+$0.51$^\dagger$ & G334.746$+$0.506 & 16:23:57.9 & $-$48:46:40 & 7 & 38 & 4 & $-$66 to $-$59 \\
\hline
\end{tabular}
\end{center}
\end{table*}
\begin{table*}
\begin{center}
\caption{Associations with each of the class~I~CH$_3$OH{} maser sites. Note that the presence of CS{}~(1--0) is not listed, because it is detected towards every source. Column 1 lists the refined region name. Columns 2 and 3 list the equatorial coordinates of the mean position of all maser spots within a single maser site. Column 4 lists the kinematic distance \citep{reid09}. Uncertainties are listed in units of the least significant figure. Column 5 lists the associations with other masers, EGOs and ATLASGAL sources within 1~arcmin of the centre position$^1$. If there is more than one of the same type of emission, the count is listed as a subscript. Columns 6 through 9 list whether thermal SiO{}~(1--0), CH$_3$OH{} 1$_0$--0$_0$ A$^+$, H53$\alpha${} or C$^{34}$S{} (1--0) emission is detected (`Y') or not (`N'). Regions labelled with a dagger ($\dagger$) are designated as `young'; see Section~\ref{sec:evolutionary}. Regions labelled with a double dagger ($\ddagger$) have large shifts induced by self-calibration; see Section~\ref{subsec:cross_correlation}.}
\label{tab:meth_assoc}
\begin{tabular}{ lcccc cccc }
\hline
Region & \multicolumn{2}{c}{Centre of maser emission} & Kinematic & Associations & Presence of & Presence of & Presence & Presence \\
name & $\alpha_{2000}$ & $\delta_{2000}$ & distance & within & SiO{} (1--0) & CH$_3$OH{} & of H53$\alpha${}? & of C$^{34}$S{} \\
& (h:m:s) & ($^\circ$:$^\prime$:$^{\prime\prime}$) & (kpc) & 1 arcmin$^1$ & $v=0$? & 1$_0$--0$_0$ A$^+$? & & (1--0)? \\
\hline
G330.294$-$0.393 & 16:07:37.9 & $-$52:30:58.29 & 4.9 (2) & WA & N & Y & Y & Y \\
G330.678$-$0.402 & 16:09:31.7 & $-$52:15:50.68 & 4.1 (2) & A & Y & Y & Y & Y \\
G330.779$+$0.249$^\dagger$ & 16:07:09.8 & $-$51:42:53.76 & 3.1 (2) & A & Y & Y & N & Y \\
G330.871$-$0.383 & 16:10:22.1 & $-$52:07:08.28 & 4.1 (2) & MA & Y & Y & Y & Y \\
G330.876$-$0.362 & 16:10:17.9 & $-$52:05:59.37 & 4.0 (2) & MWSCGA$_2$ & Y & Y & Y & Y \\
G330.927$-$0.408$^\dagger$ & 16:10:44.7 & $-$52:05:57.33 & 3.1 (2) & A & Y & Y & N & Y \\
G330.931$-$0.260$^\dagger$ & 16:10:06.7 & $-$51:59:18.13 & 5.3 (2) & A & N & Y & N & Y \\
G330.955$-$0.182 & 16:09:53.0 & $-$51:54:52.84 & 5.5 (2) & MWCA & Y & Y & Y & Y \\
G331.131$-$0.470$^\dagger$ & 16:11:59.4 & $-$52:00:19.88 & 4.3 (2) & GA & Y & Y & N & Y \\
G331.132$-$0.244 & 16:10:59.8 & $-$51:50:21.78 & 5.4 (2) & MWCGA & Y & Y & N & Y \\
G331.134$-$0.488 & 16:12:05.1 & $-$52:01:01.37 & 4.3 (2) & A & Y & Y & Y & Y \\
G331.134$+$0.156$^\dagger$ & 16:09:15.1 & $-$51:32:39.79 & 4.6 (2) & MWA & Y & Y & N & Y \\
G331.279$-$0.189 & 16:11:27.2 & $-$51:41:57.04 & 5.3 (2) & MWCA & Y & Y & Y & Y \\
G331.341$-$0.347 & 16:12:26.4 & $-$51:46:20.50 & 4.2 (2) & MWCGA & Y & Y & Y & Y \\
G331.370$-$0.399$^\dagger$ & 16:12:48.5 & $-$51:47:24.39 & 4.2 (2) & GA & N & Y & N & Y \\
G331.371$-$0.145$^\dagger$ & 16:11:41.2 & $-$51:36:15.36 & 5.2 (2) & & N & Y & N & Y \\
G331.380$+$0.149$^\dagger$ & 16:10:26.8 & $-$51:22:58.46 & 3.2 (2) & A & Y & Y & N & Y \\
G331.409$-$0.164$^\dagger$ & 16:11:57.4 & $-$51:35:32.03 & 5.1 (2) & A$_3$ & Y & Y & N & Y \\
G331.44$-$0.14$^\dagger$ & 16:11:57.9 & $-$51:33:07.81 & 5.1 (2) & A & Y & Y & N & N \\
G331.440$-$0.187 & 16:12:11.9 & $-$51:35:15.76 & 9.6 (2) & MWCA & Y & Y & N & Y \\
G331.442$-$0.158$^\dagger$ & 16:12:04.8 & $-$51:33:55.74 & 5.1 (2) & & Y & Y & N & Y \\
\hline
\end{tabular}
\flushleft
$^1$M - 6.7\,GHz CH$_3$OH{} maser from \citet{caswell11}; W - 22\,GHz H$_2$O{} maser from any of \citet{breen10,walsh11,walsh14}; S - 1612\,MHz OH{} maser from \citet{sevenster97}; C - 1665 or 1667\,MHz OH{} maser from \citet{caswell98}; G - EGO from \citet{cyganowski08}; A - ATLASGAL point source from \citet{contreras13}.
\end{center}
\end{table*}
\setcounter{table}{2}
\begin{table*}
\begin{center}
\caption{{\em - continued.}}
\begin{tabular}{ lcccc cccc }
\hline
Site & \multicolumn{2}{c}{Centre of maser emission} & Kinematic & Associations & Presence of & Presence of & Presence & Presence \\
name & $\alpha_{2000}$ & $\delta_{2000}$ & distance & within & SiO{} (1--0) & CH$_3$OH{} & of H53$\alpha${}? & of C$^{34}$S{} \\
& (h:m:s) & ($^\circ$:$^\prime$:$^{\prime\prime}$) & (kpc) & 1 arcmin$^1$ & $v=0$? & 1$_0$--0$_0$ A$^+$? & & (1--0)? \\
\hline
G331.492$-$0.082 & 16:11:59.0 & $-$51:28:34.01 & 5.3 (2) & A$_2$ & Y & Y & Y & Y \\
G331.503$-$0.109 & 16:12:09.3 & $-$51:29:14.95 & 5.2 (2) & WSCA$_3$ & Y & Y & Y & Y \\
G331.519$-$0.082 & 16:12:06.6 & $-$51:27:25.69 & 5.3 (2) & A & Y & Y & Y & Y \\
G331.530$-$0.099 & 16:12:14.3 & $-$51:27:44.11 & 5.4 (2) & A & Y & Y & Y & Y \\
G331.544$-$0.067 & 16:12:09.7 & $-$51:25:44.35 & 5.2 (2) & MCA & Y & Y & Y & Y \\
G331.555$-$0.122 & 16:12:27.1 & $-$51:27:42.69 & 5.8 (2) & MW$_2$CA$_2$ & Y & Y & Y & Y \\
G331.72$-$0.20$^\dagger$ & 16:13:34.4 & $-$51:24:24.91 & 3.3 (2) & A & Y & Y & N & Y \\
G331.853$-$0.129$^\dagger$ & 16:13:52.5 & $-$51:15:46.63 & 3.4 (2) & A & Y & Y & N & Y \\
G331.887$+$0.063$^\dagger$ & 16:13:11.2 & $-$51:05:58.39 & 5.2 (2) & A & Y & Y & N & Y \\
G331.921$-$0.083$^\dagger$ & 16:13:59.2 & $-$51:10:56.00 & 3.5 (2) & A & Y & Y & N & Y \\
G332.092$-$0.420$^\dagger$ & 16:16:15.7 & $-$51:18:27.64 & 3.7 (2) & MWA & Y & Y & N & Y \\
G332.240$-$0.044$^\dagger$ & 16:15:17.2 & $-$50:56:01.32 & 3.4 (2) & A & Y & Y & N & Y \\
G332.295$-$0.094 & 16:15:45.3 & $-$50:55:54.87 & 3.5 (2) & MWGA & Y & Y & Y & Y \\
G332.318$+$0.179$^\dagger$ & 16:14:40.1 & $-$50:43:07.07 & 3.4 (2) & WA & Y & Y & N & Y \\
G332.355$-$0.114 & 16:16:07.2 & $-$50:54:19.92 & 3.5 (2) & MWCGA & N & Y & N & Y \\
G332.583$+$0.147$^\dagger$ & 16:16:01.0 & $-$50:33:30.96 & 11.8 (2) & MGA & N & N & N & Y \\
G332.604$-$0.167$^\dagger$ & 16:17:29.3 & $-$50:46:12.92 & 3.3 (2) & MWGA & Y & Y & N & Y \\
G332.716$-$0.048$^\dagger$ & 16:17:28.5 & $-$50:36:23.32 & 3.0 (2) & A & N & Y & N & Y \\
G333.002$-$0.437 & 16:20:28.7 & $-$50:41:00.91 & 3.8 (2) & WA$_3$ & Y & Y & Y & Y \\
G333.014$-$0.466$^\ddagger$ & 16:20:39.5 & $-$50:41:47.45 & 3.6 (2) & A & Y & Y & Y & Y \\
G333.029$-$0.063$^\dagger$ & 16:18:56.7 & $-$50:23:54.64 & 3.0 (2) & MWA & N & Y & N & Y \\
G333.029$-$0.024$^\dagger$ & 16:18:46.6 & $-$50:22:14.57 & 3.1 (2) & MA$_2$ & Y & Y & N & Y \\
G333.068$-$0.446$^\ddagger$& 16:20:48.7 & $-$50:38:38.68 & 3.7 (2) & MWA & Y & Y & Y & Y \\
G333.071$-$0.399 & 16:20:37.2 & $-$50:36:30.13 & 3.6 (2) & A & Y & Y & Y & Y \\
G333.103$-$0.502 & 16:21:13.1 & $-$50:39:31.37 & 3.8 (2) & MA & Y & Y & Y & Y \\
G333.121$-$0.433 & 16:20:59.5 & $-$50:35:51.03 & 3.4 (2) & M$_4$W$_6$C$_2$A$_2$&Y & Y & Y & Y \\
G333.126$-$0.439 & 16:21:02.6 & $-$50:35:52.44 & 3.5 (2) & M$_4$W$_6$C$_2$A$_2$&Y & Y & Y & Y \\
G333.137$-$0.427 & 16:21:02.2 & $-$50:34:56.03 & 3.5 (2) & MWC$_2$A & Y & Y & Y & Y \\
G333.162$-$0.101 & 16:19:42.5 & $-$50:19:56.29 & 5.3 (2) & MA$_2$ & N & Y & Y & Y \\
G333.184$-$0.090 & 16:19:45.5 & $-$50:18:34.59 & 5.1 (1) & MGA & Y & Y & Y & Y \\
G333.220$-$0.402 & 16:21:17.9 & $-$50:30:19.97 & 3.6 (2) & A & Y & Y & Y & Y \\
G333.24$+$0.02$^\dagger$ & 16:19:26.0 & $-$50:11:29.26 & 4.4 (1) & A & Y & Y & N & Y \\
G333.233$-$0.061 & 16:19:50.8 & $-$50:15:15.94 & 5.1 (1) & M$_2$W$_3$SCA & Y & Y & N & Y \\
G333.284$-$0.373 & 16:21:27.3 & $-$50:26:22.48 & 3.6 (2) & A & Y & Y & Y & Y \\
G333.301$-$0.352 & 16:21:25.9 & $-$50:24:46.26 & 3.5 (2) & A$_2$ & Y & Y & Y & Y \\
G333.313$+$0.106 & 16:19:28.5 & $-$50:04:45.61 & 11.8 (2) & MWCGA & Y & Y & N & Y \\
G333.335$-$0.363 & 16:21:38.0 & $-$50:23:46.78 & 3.6 (2) & A & Y & Y & Y & Y \\
G333.376$-$0.202$^{\dagger \ddagger}$ & 16:21:06.2 & $-$50:15:13.21 & 4.0 (2) & WA & Y & Y & N & Y \\
G333.387$+$0.031 & 16:20:07.5 & $-$50:04:49.51 & 10.6 (1) & MWCA & Y & Y & N & Y \\
G333.467$-$0.163 & 16:21:20.2 & $-$50:09:41.70 & 3.1 (2) & MWCGA & Y & Y & Y & Y \\
G333.497$+$0.143$^\dagger$ & 16:20:07.6 & $-$49:55:26.21 & 6.3 (2) & A$_2$ & Y & Y & N & Y \\
G333.523$-$0.275$^\ddagger$ & 16:22:04.5 & $-$50:12:05.32 & 3.5 (2) & A & Y & Y & Y & Y \\
G333.558$-$0.293 & 16:22:18.7 & $-$50:11:23.38 & 3.1 (2) & A$_2$ & Y & Y & Y & Y \\
G333.562$-$0.025$^\dagger$ & 16:21:08.7 & $-$49:59:48.85 & 12.0 (2) & MA & Y & Y & N & Y \\
G333.569$+$0.028$^\dagger$ & 16:20:56.6 & $-$49:57:15.91 & 5.0 (1) & A & Y & Y & N & Y \\
G333.595$-$0.211$^\ddagger$ & 16:22:06.7 & $-$50:06:21.28 & 3.5 (2) & WSCA & Y & Y & Y & Y \\
G333.694$-$0.197 & 16:22:29.1 & $-$50:01:31.84 & 3.5 (2) & A$_2$ & Y & Y & Y & Y \\
G333.711$-$0.115$^{\dagger \ddagger}$ & 16:22:12.1 & $-$49:57:20.77 & 2.5 (2) & A & N & N & N & Y \\
G333.772$-$0.010$^{\dagger \ddagger}$ & 16:22:00.3 & $-$49:50:15.79 & 5.2 (1) & & N & Y & N & Y \\
G333.773$-$0.258$^{\dagger \ddagger}$ & 16:23:06.2 & $-$50:00:43.31 & 3.5 (2) & A & Y & Y & N & Y \\
G333.818$-$0.303$^\dagger$ & 16:23:30.0 & $-$50:00:41.97 & 3.4 (2) & & N & Y & N & Y \\
G333.900$-$0.098$^\ddagger$ & 16:22:57.3 & $-$49:48:35.60 & 10.9 (2) & MSA & Y & Y & N & Y \\
G333.930$-$0.133$^{\dagger \ddagger}$ & 16:23:14.5 & $-$49:48:45.94 & 3.1 (2) & MWA & Y & Y & N & Y \\
G333.974$+$0.074$^\dagger$ & 16:22:31.4 & $-$49:38:08.31 & 3.9 (2) & A & N & Y & N & Y \\
G334.027$-$0.047$^\dagger$ & 16:23:16.7 & $-$49:41:00.19 & 5.0 (1) & A$_2$ & N & Y & N & Y \\
G334.746$+$0.506$^\dagger$ & 16:23:58.0 & $-$48:46:59.12 & 4.2 (1) & A & Y & Y & N & Y \\
\hline
\end{tabular}
\end{center}
\end{table*}
\begin{table*}
\begin{center}
\caption{A sample of Gaussian spectral fits for class~I~CH$_3$OH{} maser spots: only those in region G331.132$-$0.244. The remainder of fits are provided in Appendix~\ref{app:meth_detail}. Column 1 lists the Galactic coordinates of each distinct spectral feature. In the case where there is more than one velocity component at exactly the same location, the letter associated with the Galactic position is incremented. Spots labelled with an asterisk ($*$) have been manually fitted; see Section~\ref{subsec:cross_correlation}. Columns 2 and 3 list the fitted position for the Gaussian. Columns 4 through 6 list the fitted Gaussian parameters. Note that the uncertainty for each parameter is quoted in parentheses, in units of the least significant figure. Column 7 lists the integrated flux density of the Gaussian.}
\label{tab:meth_sample}
\begin{tabular}{ l ll ll d{1}l d{1}l d{1}l d{5} }
\hline
Spot name & \multicolumn{2}{c}{$\alpha_{2000}$} & \multicolumn{2}{c}{$\delta_{2000}$} & \multicolumn{2}{c}{Peak flux} & \multicolumn{2}{c}{Peak} & \multicolumn{2}{c}{FWHM} & \multicolumn{1}{c}{Integrated} \\
& \multicolumn{2}{c}{(h:m:s)} & \multicolumn{2}{c}{($^\circ$:$^\prime$:$^{\prime\prime}$)} & \multicolumn{2}{c}{density} & \multicolumn{2}{c}{velocity} & \multicolumn{2}{c}{(\kms{})} & \multicolumn{1}{c}{flux density} \\
& & & & & \multicolumn{2}{c}{(Jy)} & \multicolumn{2}{c}{(\kms{})} & & & \multicolumn{1}{c}{(Jy\,\kms{})} \\
\hline
G331.1308$-$0.2441A & 16:10:59.53 & (2) & $-$51:50:25.79 & (4) & 13.0 & (6) & -87.50 & (3) & 1.45 & (7) & 14.148 \\
G331.1308$-$0.2441B & 16:10:59.54 & (6) & $-$51:50:25.72 & (5) & 21 & (4) & -90.1 & (1) & 1.6 & (3) & 24.901 \\
G331.1308$-$0.2441C & 16:10:59.54 & (6) & $-$51:50:25.68 & (9) & 93 & (3) & -91.11 & (1) & 0.93 & (3) & 65.276 \\
G331.1333$-$0.2458A & 16:11:00.7 & (5) & $-$51:50:23.97 & (6) & 45 & (1) & -88.531 & (8) & 0.70 & (2) & 23.769 \\
G331.1333$-$0.2410A & 16:10:59.41 & (2) & $-$51:50:11.8 & (1) & 23.1 & (2) & -84.630 & (2) & 0.610 & (5) & 10.604 \\
G331.1333$-$0.2410B & 16:10:59.45 & (2) & $-$51:50:11.64 & (2) & 2.9 & (1) & -86.19 & (1) & 0.53 & (3) & 1.151 \\
G331.1322$-$0.2454A & 16:11:00.25 & (6) & $-$51:50:25.71 & (1) & 6.5 & (2) & -84.34 & (1) & 0.81 & (3) & 3.986 \\
G331.1322$-$0.2454B & 16:11:00.29 & (4) & $-$51:50:26.0 & (1) & 6.3 & (2) & -87.667 & (8) & 0.47 & (2) & 2.244 \\
G331.1332$-$0.2407A & 16:10:59.30 & (4) & $-$51:50:10.90 & (3) & 2.11 & (6) & -85.718 & (8) & 0.59 & (2) & 0.931 \\
G331.1313$-$0.2434A & 16:10:59.499 & (9) & $-$51:50:22.76 & (5) & 4.19 & (7) & -86.097 & (3) & 0.412 & (7) & 1.300 \\
G331.1315$-$0.2441A & 16:10:59.73 & (8) & $-$51:50:24.2 & (1) & 0.90 & (5) & -82.85 & (2) & 0.57 & (4) & 0.386 \\
G331.1313$-$0.2451A & 16:10:59.9 & (2) & $-$51:50:27.2 & (2) & 0.89 & (7) & -82.53 & (4) & 0.91 & (8) & 0.610 \\
\hline
\end{tabular}
\end{center}
\end{table*}
The auto-correlation sensitivity of these observations is approximately a factor of 5 better than that of Paper~I, and has improved the detection rate of thermal lines towards the class~I~CH$_3$OH{} masers. Paper~I described detectable CS{} emission towards 95 per cent of class~I~CH$_3$OH{} masers, whereas these data have detectable CS{} towards every maser. Similarly, the detection rate of thermal SiO{} has greatly improved; these data have detectable SiO{} towards 83 per cent of regions (64/77), which is a significant increase over Paper~I (30 per cent). The thermal 1$_0$--0$_0$ A$^+$ line of CH$_3$OH{} was not characterised in Paper~I, but auto-correlations have been produced for this paper. The thermal line of CH$_3$OH{} in this frequency band is detected towards almost all regions (75/77; 97 per cent). In addition, we list detections of H53$\alpha${} and C$^{34}$S{} in Table~\ref{tab:meth_assoc}, along with median RMS noise levels in Table~\ref{tab:spectral_lines}. The thermal lines are used for various comparisons in Section~\ref{discussion}. The Gaussian parameters determined for each of the thermal lines are listed in Appendix~\ref{app:thermal_detail}.
\section{Discussion}
\label{discussion}
With the first large and unbiased sample of class~I~CH$_3$OH{} masers, we perform a number of analyses to determine any relationships between this maser transition and other star formation tracers. Given that we simultaneously collect useful thermal lines associated with HMSF, such as CS{}~(1--0), we often compare these with the class~I masers to better understand their environments. In addition, we use other surveys pertaining to star formation, such as the Methanol MultiBeam (MMB) and ATLASGAL, in an attempt to identify what class~I~CH$_3$OH{} maser emission can tell us about their host star-forming regions.
\subsection{Class~I~CH$_3$OH{} masers on a HMSF evolutionary timeline}
\label{sec:evolutionary}
\citet{voronkov14} found that class~I~CH$_3$OH{} masers likely trace multiple evolutionary phases of the HMSF timeline, perhaps ranging from very young sources tracing outflows to very evolved sources featuring expanding H\,\textsc{ii}{} regions. In this subsection, we attempt to divide class~I~CH$_3$OH{} masers into `young' or `evolved' categories. Paper~I briefly discusses class~I~CH$_3$OH{} masers with associated class~II~CH$_3$OH{} or OH{} masers; class~I~CH$_3$OH{} masers associated with either or both of these types of maser were assumed to be in a relatively late stage of HMSF. Paper~I notes that the majority of class~I~CH$_3$OH{} masers lack these associations and therefore seem more likely to be associated with earlier stages of HMSF. With the additional sensitivity provided by the follow-up observations discussed in this paper, we may also use radio recombination line (RRL) data to help discriminate between evolved and young stages of HMSF. RRLs are associated with H\,\textsc{ii}{} regions, which typically signpost more evolved stages of HMSF than class~II~CH$_3$OH{} masers \citep{walsh98}. As OH{} masers also occur late in an evolutionary timeline (e.g. \citealt{caswell97}), in this paper, we categorise maser sites as `evolved' if H53$\alpha${} emission was detected or an OH{} maser is associated, or both. Otherwise, the class~I~CH$_3$OH{} site is deemed `young'. The presence of H53$\alpha${} emission and all maser associations with class~I~CH$_3$OH{} maser sites are listed in Table~\ref{tab:meth_assoc}.
From the 77 observed class~I~CH$_3$OH{} maser regions, we classify 41 (53 per cent) as evolved, being associated with an OH{} maser, H53$\alpha${} emission, or both. Curiously, when using catalogues of class~II~CH$_3$OH{} and OH{} masers, we find that a large number of maser sites (16/77, 21 per cent) featuring H53$\alpha${} emission lack class~II~CH$_3$OH{} or OH{} masers. Perhaps these regions are too evolved to harbour class~II~CH$_3$OH{} or OH{} masers, or alternatively, the class~II~CH$_3$OH{} or OH{} maser emission may be too weak to be detected. The population of OH{} masers within the \mbox{MALT-45}{} survey region is drawn primarily from the catalogue of \citet{caswell98}; that survey details detections down to a limit of 0.16\,Jy, but \citet{caswell98} note that it is not complete at this level, requiring emission across several channels or corroboration with other data. Additional masers from the SPLASH survey may reveal new associations \citep{dawson14,qiao16}. Across the \mbox{MALT-45}{} survey region, class~II~CH$_3$OH{} masers from the MMB survey are complete to a 5$\sigma$ detection limit of 1.0\,Jy \citep{caswell11}. Given the good sensitivity of these surveys, it seems unlikely that undiscovered class~II~CH$_3$OH{} and OH{} masers are associated with these class~I~CH$_3$OH{} masers, but follow-up observations in these transitions may prove effective for finding new detections.
Another explanation for 21 per cent of our sites featuring H53$\alpha${} emission but neither class~II~CH$_3$OH{} nor OH{} masers is that bright H53$\alpha${} emission originates from nearby sources of HMSF, contaminating what we would otherwise deem as young regions of star formation. The G333 giant molecular cloud \citep{bains06,fujiyoshi06} powers bright H53$\alpha${} emission, which may cause false-positive associations of H53$\alpha${} at sites near this complex. Thus, it can be difficult to accurately discriminate between young and evolved HMSF regions by using RRL data alone; however, we suggest that such contamination is generally uncommon, as the H$_2$O{} Galactic Plane survey (HOPS; \citealt{walsh11}) finds approximately 10 other examples of comparable RRL regions in 100 square degrees of the Galactic plane. Therefore, we proceed with the presence of H53$\alpha${} emission or an OH{} maser to discriminate between young and evolved regions of star formation.
Non-evolved class~I~CH$_3$OH{} maser regions comprise 47 per cent of the total population, indicating that class~I~CH$_3$OH{} masers can occur over a broad span of time in star-forming regions. In the following text, we qualitatively discuss infrared associations with young class~I~CH$_3$OH{} masers. Images in Appendix~\ref{app:glimpse} show the infrared environment of the detected masers. It can be seen that many masers are cospatial with dark infrared regions, presumably IRDCs, which are dense regions of cold gas projected in front of a bright background. IRDCs are known to host HMSF, thus it is not surprising to find a large population of class~I~CH$_3$OH{} masers towards these locations.
The three class~I~CH$_3$OH{} maser sites detected in auto- but not cross-correlated data are also within the young population. G331.72$-$0.20 and G333.24$+$0.02 appear to be associated with dark infrared regions, but G331.44$-$0.14 lacks any obvious infrared feature; it may originate in an IRDC behind foreground emission, or it may be an example of a region lacking star formation. At present, we are unable to explain, based on their infrared associations, why these sites were not detected in cross-correlation data; with a larger population, an explanation may become apparent.
\begin{figure}
\includegraphics[width=\columnwidth]{figures/oh_maser_spot_counts_new.pdf}
\includegraphics[width=\columnwidth]{figures/cII_maser_spot_counts_new.pdf}
\includegraphics[width=\columnwidth]{figures/rrl_maser_spot_counts_new.pdf}
\caption{Histograms of class~I~CH$_3$OH{} maser spot counts within each maser region, highlighting associations with and without an OH{} maser, class~II~CH$_3$OH{} maser or H53$\alpha${} detection. The majority of class~I~CH$_3$OH{} maser regions with only one or two spots do not have an associated OH{} or class~II~CH$_3$OH{} maser, but there is a tendency for more associations in regions with more class~I~CH$_3$OH{} maser spots. A KS test on the samples with and without an OH{} maser association shows a 1.2 per cent probability that the two samples are drawn from the same population. The same KS test performed on the class~II~CH$_3$OH{} maser and H53$\alpha${} detection samples yields probabilities of 1.1 and 22 per cent, respectively.}
\label{fig:spot_counts}
\end{figure}
Fig.~\ref{fig:spot_counts} shows histograms of class~I~CH$_3$OH{} maser site spot counts with and without other star-formation tracers. Perhaps most interestingly, the comparison with OH{} masers suggests that class~I~CH$_3$OH{} maser sites with few spots (fewer than five) are unlikely to be associated with an OH{} maser. To check the significance of this claim, as well as the similarity between the other populations, we have performed Kolmogorov-Smirnov (KS) tests. KS tests on the populations with and without OH{} masers, class~II~CH$_3$OH{} masers and H53$\alpha${} emission find probabilities of 1.2, 1.1 and 22 per cent, respectively, that they are drawn from the same distribution. It seems that the number of detected class~I~CH$_3$OH{} maser spots is a good indicator of the presence of other masers, but not necessarily of the presence of radio recombination line emission.
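For readers unfamiliar with the statistic, the two-sample KS test compares empirical cumulative distribution functions; a minimal pure-Python sketch is given below (the spot counts shown are hypothetical, illustrative values, not our measured data, and the quoted probabilities in the text were derived from the full samples with a standard statistics package):

```python
# Two-sample Kolmogorov-Smirnov statistic: the maximum absolute
# difference between the empirical cumulative distribution
# functions (CDFs) of the two samples.
def ks_statistic(sample_a, sample_b):
    na, nb = len(sample_a), len(sample_b)
    d = 0.0
    for v in sorted(set(sample_a) | set(sample_b)):
        # Empirical CDF at v: fraction of the sample <= v.
        fa = sum(1 for x in sample_a if x <= v) / na
        fb = sum(1 for x in sample_b if x <= v) / nb
        d = max(d, abs(fa - fb))
    return d

# Hypothetical spot counts per maser region, with and without an
# associated OH maser (illustrative values only).
with_oh = [2, 5, 7, 9, 12]
without_oh = [1, 1, 2, 2, 3, 4]
d_stat = ks_statistic(with_oh, without_oh)
```

A large $D$ statistic (hence a small probability) indicates that the two samples are unlikely to share a parent distribution.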
With only a few possible exceptions, each of the class~I~CH$_3$OH{} masers identified by \mbox{MALT-45}{} appears to be associated with HMSF. As we find many maser sites without associated class~II~CH$_3$OH{} or OH{} masers or H53$\alpha${} emission, we appear to be identifying early stages of HMSF. Thus, class~I~CH$_3$OH{} masers can provide a useful means of identifying star formation, and further \mbox{MALT-45}{} observations will find more young star-forming regions purely through maser emission. In addition to isolating a young population, this work has shown that the number of class~I~CH$_3$OH{} maser spots detected may be indicative of the evolutionary stage of the region, which further demonstrates the usefulness of this spectral line.
\subsection{Properties of detected class~I~CH$_3$OH{} masers}
\label{sec:basic_properties}
\begin{figure}
\subfigure{\includegraphics[width=\columnwidth]{figures/luminosity_hist_new.pdf}}
\caption{Luminosities of class~I~CH$_3$OH{} maser regions, associated with and without H53$\alpha${} emission or an OH{} maser. There are relatively few masers with high luminosities that are not associated with H53$\alpha${} emission or an OH{} maser. A KS test finds a 0.2 per cent chance that both samples are drawn from the same population.}
\label{fig:lum_properties}
\end{figure}
\begin{figure*}
\subfigure{\includegraphics[width=0.9\textwidth]{figures/size_vs_vel_new.pdf}}
\caption{Projected linear sizes and velocity ranges of class~I~CH$_3$OH{} maser regions. The majority of maser regions have relatively small projected linear sizes and velocity ranges. Regions that are not evolved tend to have lower luminosities and smaller projected linear sizes and velocity ranges; KS tests suggest that the populations with and without H53$\alpha${} emission and or an associated OH{} maser are distinct (see Section~\ref{sec:sizes_vels}).}
\label{fig:size_vel_properties}
\end{figure*}
\subsubsection{Luminosities}
Using the information gathered from these observations, we conduct analyses of the basic properties of class~I~CH$_3$OH{} masers. Fig.~\ref{fig:lum_properties} shows the luminosities of these masers. Luminosities are determined from the integrated intensity of auto-correlated emission, calculated from Gaussian fits to the maser spectra.
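The exact convention is not restated here, but such a luminosity is commonly taken as the isotropic value (an assumption for the purposes of this expression):

```latex
L \;=\; 4\pi d^{2} \int S_{\nu}\,\mathrm{d}v ,
```

where $d$ is the adopted kinematic distance and $\int S_{\nu}\,\mathrm{d}v$ is the integrated flux density of the Gaussian fit (Jy\,\kms{}), so that $L$ carries units of Jy\,\kms{}\,kpc$^{2}$.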
The histogram in Fig.~\ref{fig:lum_properties} presents luminosities of class~I~CH$_3$OH{} masers, highlighting the populations associated with and without H53$\alpha${} emission or an OH{} maser. A KS test finds a 0.2 per cent chance that both samples are drawn from the same population. Using the presence of H53$\alpha${} emission or an OH{} maser as an indication for a relatively evolved region, this result suggests that the luminosity of class~I~CH$_3$OH{} masers can indicate the evolutionary stage of its host star-forming region.
\subsubsection{Projected linear sizes and velocity ranges}
\label{sec:sizes_vels}
Fig.~\ref{fig:size_vel_properties} compares the projected linear sizes and velocity ranges of young and evolved class~I~CH$_3$OH{} maser regions. The projected linear size is calculated using the angular size of maser emission across a region and its derived kinematic distance (see Table~\ref{tab:meth_assoc}). Velocity range simply refers to the difference between the most redshifted and blueshifted emission within a single region. The majority of sources are confined to small projected linear sizes ($<$0.5\,pc) and small velocity ranges ($<$5\,\kms{}). Of the maser sites that exceed these sizes and velocity ranges, relatively few are young. KS tests were performed on the young and evolved samples of projected linear sizes and velocity ranges; the probabilities that the two samples are drawn from the same distribution are $<$10$^{-1}$ per cent and 4.3 per cent, respectively, suggesting that these populations are distinct.
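For reference, the projected linear size follows from the small-angle approximation; in the practical units used here:

```latex
s \;\approx\; D\,\Delta\theta
\;\;\Longrightarrow\;\;
\left(\frac{s}{\mathrm{pc}}\right) \approx
\frac{1}{206.265}\,
\left(\frac{\Delta\theta}{\mathrm{arcsec}}\right)
\left(\frac{D}{\mathrm{kpc}}\right),
```

where $\Delta\theta$ is the angular extent of the maser emission across the region and $D$ is the kinematic distance; for example, 30~arcsec at 3.5\,kpc corresponds to $\approx$0.5\,pc.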
\citet{voronkov14} also analysed the projected linear sizes and velocity ranges of class~I~CH$_3$OH{} masers, but between populations with and without an associated OH{} maser. Here, we observe similarities between our two populations and theirs. Class~I~CH$_3$OH{} masers observed by \citet{voronkov14} without an associated OH{} maser are confined to relatively small projected linear sizes ($<$0.4\,pc) and velocity ranges ($<$10\,\kms{}) (fig.~5 of their paper). In addition, when associated with an OH{} maser, \citet{voronkov14} find that the projected linear sizes and velocity ranges of class~I~CH$_3$OH{} maser sites are typically larger.
While the projected linear sizes in our study and that of \citet{voronkov14} reach similar values (up to $\sim$1\,pc), the velocity ranges do not. The mean velocity ranges of our class~I~CH$_3$OH{} masers are 4.1 and 2.4\,\kms{} for the evolved and young populations, respectively, while those of the \citet{voronkov14} samples with and without an OH{} maser association are approximately 10 and 5\,\kms{}, respectively. For comparison, the mean velocity range of our class~I~CH$_3$OH{} masers associated with both a class~II~CH$_3$OH{} and an OH{} maser is 5.3\,\kms{}, whereas for those with a class~II~CH$_3$OH{} maser but no OH{} maser it is 3.2\,\kms{}. The difference between our values and those of \citet{voronkov14} is likely because their sample was typically biased toward class~II~CH$_3$OH{} masers, whereas ours is more unbiased.
\subsubsection{Spatial distributions}
\label{sec:spatial_distributions}
The spatial distributions of class~I~CH$_3$OH{} masers have been discussed in the literature. \citet{kurtz04} measured the distance between class~I~CH$_3$OH{} masers and H\,\textsc{ii}{} regions, and \citet{voronkov14} compared the distance between class~II and class~I~CH$_3$OH{} masers. For our class~I~CH$_3$OH{} maser regions featuring another type of star-formation maser (OH{}, class~II~CH$_3$OH{} or H$_2$O{}), a projected linear distance comparison is presented in Section~\ref{sec:maser_separation}. As we do not have a common object to compare against within each maser site (such as an H\,\textsc{ii}{} region or class~II~CH$_3$OH{} maser), here we briefly discuss the maximum spatial offset (projected linear size) between any two maser spots within a site; see Fig.~\ref{fig:size_vel_properties}.
Seven maser sites have a projected linear size greater than 0.8\,pc; these sites are G330.876$-$0.362, G331.279$-$0.189, G331.492$-$0.082, G331.503$-$0.109, G331.530$-$0.099, G333.313$+$0.106 and G333.558$-$0.293. Upon inspection of the distribution of these masers with infrared maps, it is likely that not all the class~I~CH$_3$OH{} masers are related to a single high-mass object; see Appendix~\ref{app:glimpse}. G330.876$-$0.362 has only two spots, very far from each other; one is closely associated with class~II~CH$_3$OH{}, H$_2$O{} and OH{} masers, the other with an infrared dark cloud (IRDC). G331.279$-$0.189 has a single maser spot very far from the rest of the relatively clustered spots, and may not be powered by a common source. G333.558$-$0.293 has only two maser spots; the infrared map shows what appears to be a different IRDC associated with each. G331.492$-$0.082, G331.503$-$0.109, G331.530$-$0.099 and G333.313$+$0.106 may be exceptions. There is no obvious infrared distinction between the maser spots of G331.492$-$0.082 to rule out a common source. The spots associated with G331.503$-$0.109 and G331.530$-$0.099 both may be powered by the same source located at G331.512$-$0.100, which also features H$_2$O{} and OH{} masers, as there are no other apparent infrared sources that could be powering either maser spot. Despite the large projected linear size, the spots associated with G333.313$+$0.106 appear to be associated with the same object, which also features an EGO and a class~II~CH$_3$OH{} maser.
While it is difficult to discern the origin of each maser spot from observations of maser emission alone, it seems that the large offsets between spots are not necessarily common, and may be erroneously generated by calculating a radius for each observed maser `site'. While these masers are collisionally-excited and therefore more likely to appear at large distances from their powering source, we suggest that genuine class~I~CH$_3$OH{} maser associations over large distances are uncommon. Perhaps the class~I masers are tracing weak, continuous C-type shocks rather than powerful J-type shocks \citep{widmann16}; in addition, we might expect masers triggered by stronger shocks to have broader velocity ranges than those we observe.
\subsection{Comparing class~I~CH$_3$OH{} masers with other masers and thermal lines}
\subsubsection{Separation from other maser species in star-forming regions}
\label{sec:maser_separation}
\begin{figure}
\includegraphics[width=\columnwidth]{figures/maser_projected_hist_new.pdf}
\caption{Histogram of the projected linear distance between every class~I~CH$_3$OH{} maser spot position and class~II~CH$_3$OH{}, OH{} and H$_2$O{} masers. This histogram uses the projected distance between masers only when their angular offset is less than 60~arcsec. The majority of projected linear distances to masers are within 0.5\,pc.}
\label{fig:maser_projected_hist}
\end{figure}
In Section~\ref{sec:spatial_distributions}, we inferred the age and size of host star-forming regions simply from the properties of class~I~CH$_3$OH{} maser spots. Here, we compare the distances between class~I~CH$_3$OH{} maser spots and other maser species to determine the spatial extents of star-forming regions; this complements Section~\ref{sec:spatial_distributions}, as class~II~CH$_3$OH{} and OH{} masers lie relatively close to their powering sources.
The projected linear distances between every spot of a class~I~CH$_3$OH{} maser site and class~II~CH$_3$OH{}, OH{} and H$_2$O{} masers are presented in Fig.~\ref{fig:maser_projected_hist}. The other star-formation maser positions were gathered from \citet{caswell11,sevenster97,caswell98,breen10,walsh11,walsh14}. A maser site was considered to be associated with a secondary maser site if the two were less than 60~arcsec from each other. If more than one of the same maser species is within 60~arcsec of a class~I~CH$_3$OH{} maser spot, only the closest was used for comparison.
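The 60-arcsec association criterion reduces to a simple on-sky separation test; a minimal sketch under the flat-sky approximation is given below (the coordinate values are hypothetical, for illustration only):

```python
import math

# On-sky angular offset between two equatorial positions, using the
# small-angle (flat-sky) approximation: the RA offset is scaled by
# cos(declination) before combining in quadrature with the Dec offset.
def angular_offset_arcsec(ra1_deg, dec1_deg, ra2_deg, dec2_deg):
    dec_mid = math.radians(0.5 * (dec1_deg + dec2_deg))
    dra = (ra1_deg - ra2_deg) * math.cos(dec_mid)
    ddec = dec1_deg - dec2_deg
    return math.hypot(dra, ddec) * 3600.0

# A class I spot and a (hypothetical) secondary maser position are
# deemed associated if their offset is under 60 arcsec.
sep = angular_offset_arcsec(245.2481, -51.8405, 245.2520, -51.8390)
associated = sep < 60.0
```

The projected linear distance then follows by multiplying the angular offset (in radians) by the kinematic distance.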
Most projected linear distances are less than 0.5\,pc. \citet{kurtz04} find that the distances between class~I~CH$_3$OH{} masers and H\,\textsc{ii}{} regions tend to be within 0.5\,pc, in agreement with our results. \citet{voronkov14} were able to model an exponential decay to the number of class~I~CH$_3$OH{} maser spots with distance from class~II~CH$_3$OH{} masers; with their larger sample size, they see many masers at projected linear distances beyond 0.5\,pc. However, the vast majority are within 0.5\,pc, also in agreement with our results. Given that our maser spots are derived from an unbiased sample, it seems that typical regions of HMSF power class~I~CH$_3$OH{} maser activity within a distance of 0.5\,pc.
The majority of projected linear distances between class~I~CH$_3$OH{} maser spots and class~II~CH$_3$OH{} or H$_2$O{} masers are small ($<$0.2\,pc), whereas OH{} masers have a flat distribution out to approximately 0.5\,pc. The similarity of the projected linear distances in each comparison can be determined with KS tests; the probabilities that each pair of distributions is drawn from the same parent population are $<$10$^{-1}$, $<$10$^{-1}$ and 49.0 per cent for OH{} versus H$_2$O{}, OH{} versus class~II, and H$_2$O{} versus class~II, respectively. Thus, OH{} masers have significantly larger projected linear distances from class~I~CH$_3$OH{} masers than class~II~CH$_3$OH{} or H$_2$O{} masers do.
In general, maser associations are well characterised within distances of 0.5\,pc. We believe that most associations beyond 0.5\,pc are also genuine, but we would require additional information to conclude otherwise.
\subsubsection{Velocities}
\begin{figure*}
\includegraphics[width=\textwidth]{figures/velocity_difference_box.pdf}
\caption{Notched box-and-whisker plot of peak velocity differences between class~I~CH$_3$OH{} masers and various species associated with star formation. The red solid vertical line within the boxes indicates the median, the orange dashed lines the mean. The notched region about the median indicates the 95 per cent confidence interval on the median, and the box covers the interquartile region (IQR; middle 50 per cent of data). Whiskers extend past the IQR by up to 1.5$\times$IQR. Black crosses indicate outlying data. The dashed black line indicates zero peak velocity difference. The class~II~CH$_3$OH{} maser box plot has a relatively wide range of velocity differences compared to the other lines. CS{}, SiO{} and thermal CH$_3$OH{} velocities are all closely associated with the maser velocity.}
\label{fig:velocity_difference_box}
\end{figure*}
Paper~I finds that class~I~CH$_3$OH{} masers are good tracers of systemic velocity; with these follow-up data, we repeat this type of analysis with more thermal lines, as well as class~II~CH$_3$OH{} maser emission. The peak velocities of class~I~CH$_3$OH{} masers in auto-correlated data are compared with class~II~CH$_3$OH{} masers, thermal CS{}, SiO{} and CH$_3$OH{} in Fig.~\ref{fig:velocity_difference_box}. We discuss the resulting distributions in the following text.
\citet{voronkov14} used a large sample of class~I and class~II~CH$_3$OH{} masers to investigate their relative velocities. Analysing the distribution of velocity differences between class~I and class~II~CH$_3$OH{} masers, they find a peak velocity difference at $-0.57 \pm 0.07$\,\kms{} with a standard deviation of $3.32 \pm 0.07$\,\kms{} and a slight blueshifted asymmetry. The cause of the blueshift could not be attributed to either the class~I or class~II~CH$_3$OH{} masers. As class~II~CH$_3$OH{} masers occur near to a YSO, their velocities are thought to be tracers of systemic velocities, albeit with a large dispersion \citep{szymczak07,green11}. In the following discussion, we analyse the peak velocities of thermal lines observed in these data, and find that class~I~CH$_3$OH{} masers are significantly better tracers of systemic velocities than class~II~CH$_3$OH{} masers.
The distribution of velocity differences in Fig.~\ref{fig:velocity_difference_box} comparing class~I and class~II~CH$_3$OH{} masers has the statistics $\mu = -1.17 \pm 0.87$\,\kms{}, $\tilde{x} = -1.90$\,\kms{} and $\sigma = 4.75 \pm 0.61$\,\kms{} ($\mu$ is the mean, $\tilde{x}$ the median and $\sigma$ the standard deviation). The parameters of this comparison are consistent with those of \citet{voronkov14}. The median and mean are also blueshifted but, unlike in \citet{voronkov14}, the blueshift is not statistically significant.
Paper~I compared the peak velocities of CS{}~(1--0) and class~I~CH$_3$OH{} masers where each maser was detected. From that work, the resulting statistics were $\mu = 0.0 \pm 0.2$\,\kms{}, $\tilde{x} = -0.1$\,\kms{} and $\sigma = 1.5 \pm 0.1$\,\kms{}. The results of the same comparison with these data corroborate those of Paper~I: $\mu = 0.09 \pm 0.18$\,\kms{}, $\tilde{x} = 0.04$\,\kms{} and $\sigma = 1.56 \pm 0.17$\,\kms{}. Given that CS{} traces very dense gas ($n_c > 10^5$\,cm$^{-3}$), the peak CS{} velocity is likely closely related to the systemic velocity of a molecular cloud. Hence, the statistics suggest that class~I~CH$_3$OH{} masers are also good tracers of systemic velocities. As CS{} emission can be quite bright in these data, it is possible that optically thick emission, if present, is causing uncertainty in the systemic velocity, and hence skews the distribution in Fig.~\ref{fig:velocity_difference_box}. To help resolve this matter, we analysed the peak velocities of C$^{34}$S{} emission. With only one exception, C$^{34}$S{} was detected toward each observed class~I~CH$_3$OH{} region, and the peak velocity of C$^{34}$S{} agrees with the peak CS{} velocity to within 0.5\,\kms{} in each region. Therefore, we consider peak CS{} velocities to be accurate systemic velocity tracers. \citet{green11} compared the peak velocity of CS{}~(2--1) and the mid-velocity of class~II~CH$_3$OH{} maser emission, finding a mean and median velocity difference of 3.6 and 3.2\,\kms{}, respectively; their relatively large velocity differences corroborate our results for class~II~CH$_3$OH{} masers compared against class~I~CH$_3$OH{} masers.
The measured difference between SiO{} emission and class~I~CH$_3$OH{} maser velocities is similar to the difference between CS{} emission and class~I~CH$_3$OH{} maser velocities: $\mu = -0.19 \pm 0.21$\,\kms{}, $\tilde{x} = -0.01$\,\kms{} and $\sigma = 1.65 \pm 0.15$\,\kms{}. The distribution featuring SiO{} is slightly wider than that featuring CS{}, but we place less emphasis on this difference in line width, since the difference is small. Overall, the peak velocities of class~I~CH$_3$OH{} masers and SiO{} emission closely agree.
The differences in velocity between the peak velocity of the thermal 1$_0$--0$_0$ A$^+$ line of CH$_3$OH{} and class~I~CH$_3$OH{} masers also shows a tight association: $\mu = -0.08 \pm 0.15$\,\kms{}, $\tilde{x} = 0.01$\,\kms{} and $\sigma = 1.33 \pm 0.11$\,\kms{}. As a sufficient abundance of CH$_3$OH{} gas is needed for maser emission, it is perhaps not surprising that the thermal velocity is closely matched to the maser velocity. Given the distribution relative to CS{}, it seems that thermal CH$_3$OH{} is also a good tracer of systemic velocities.
The differences in the peak velocity for the thermal CS{}, SiO{} and CH$_3$OH{} spectral lines all appear consistent with zero, with uncertainties of a couple of \kms{}. The distribution of velocity differences using class~II~CH$_3$OH{} masers, however, is broad and has a relatively distinct mean and median blueshift. This hints that the peak velocities of class~II~CH$_3$OH{} masers tend to be blueshifted from the systemic velocity, although the statistics presented above provide only tentative evidence of this. One explanation of the blueshift is that the strongest class~II~CH$_3$OH{} masers in a region are preferentially detected on the foreground side of star-forming regions rather than the background, because at 6.7\,GHz radio continuum free-free emission may be optically thick; this would be especially true for younger sources. The preference for blueshifted class~II~CH$_3$OH{} masers could be explained by the masers occurring in an outflow or an expanding shell, but blueshifted masers are not easily explained if the masers occur in a circumstellar accretion disk.
\subsubsection{Brightness}
\label{sec:thermal_brightness}
\begin{figure*}
\includegraphics[width=\textwidth]{figures/scatter_new.pdf}
\caption{Integrated flux density scatter plots of CS{}, SiO{} and thermal CH$_3$OH{} against class~I~CH$_3$OH{} masers. All three comparisons show a similar trend, albeit with a large degree of scatter; the $r$-values for the lines of best fit with CS{}, SiO{} and thermal CH$_3$OH{} are 0.41, 0.57 and 0.40, respectively.}
\label{fig:thermal_scatter}
\end{figure*}
With a statistically-complete sample of class~I~CH$_3$OH{} masers and enhanced sensitivity to auto-correlated emission, these observations provide an opportunity to identify any relations between the luminosities of class~I~CH$_3$OH{} masers and other species associated with star formation. In this subsection, we compare the auto-correlated integrated flux densities of class~I~CH$_3$OH{} maser emission with the thermal CS{}, SiO{} and CH$_3$OH{} integrated intensities; see Fig.~\ref{fig:thermal_scatter}.
A similar positive trend exists in each comparison, although the degree of scatter is large. The $r$-value for the CS{} comparison is 0.41.
SiO{}~(1--0)~$v=0$ is typically thought to trace strongly-shocked gas, particularly found in outflows. As class~I~CH$_3$OH{} masers are collisionally excited by weak shocks, they may be excited in regions also containing SiO{} emission. As discussed earlier, analyses with SiO{} were not possible in Paper~I, due to the low detection rate and relatively weak intensities of SiO{}~(1--0)~$v=0$. Consequently, Paper~I could only speculate on the nature of the class~I~CH$_3$OH{} maser sites, given that the SiO{} detection rate was quite low (30 per cent). As the majority of regions now have confirmed SiO{} detections, it seems that a large portion of class~I~CH$_3$OH{} maser sites indeed have shocked gas associated with them. As bright SiO{} emission within these maser sites is rare, perhaps their faint intensities indicate weak shocks, similar to what was found by \citet{widmann16}. The correlation coefficient ($r$-value) between the integrated intensity of SiO{} and class~I~CH$_3$OH{} maser integrated flux density (IFD) is 0.57.
Similar to SiO{}, comparisons with thermal CH$_3$OH{} were not possible in Paper~I. The correlation here is the weakest, with an $r$-value of 0.40.
The correlation coefficient for each comparison indicates a moderate correlation. Combined with a significant positive slope, this indicates that brighter class~I~CH$_3$OH{} masers are more likely to be associated with brighter thermal lines. We might expect higher-mass star-forming regions to contain more molecular gas, and thus to have brighter thermal line emission. Given the moderate correlations between the brightness of class~I~CH$_3$OH{} masers and each of the thermal lines discussed in this section, class~I masers may in turn hint at the mass of their host star-forming regions.
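The correlation statistics used throughout this section ($r$-values, $p$-values and lines of best fit) can be reproduced with standard tools. The following sketch is illustrative only: the data are synthetic stand-ins with a weak underlying trend and large scatter, not the \mbox{MALT-45}{} measurements.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Synthetic log-quantities: a weak underlying trend plus large scatter,
# qualitatively similar to the maser-vs-thermal-line comparisons.
maser_ifd = rng.uniform(0.0, 2.0, 60)                 # e.g. log10(IFD)
thermal = 0.5 * maser_ifd + rng.normal(0.0, 0.5, 60)  # e.g. log10(int. intensity)

# Pearson correlation coefficient and two-sided p-value.
r, p = stats.pearsonr(maser_ifd, thermal)

# Least-squares line of best fit.
slope, intercept = np.polyfit(maser_ifd, thermal, 1)
print(f"r = {r:.2f}, p = {p:.2g}, best-fit slope = {slope:.2f}")
```

With moderate true correlation and $n \sim 60$ points, a moderate $r$ can still be highly significant (small $p$), which is why both statistics are quoted in the text.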
Class II~CH$_3$OH{} masers have been found to be more luminous in more evolved regions of HMSF \citep{breen10b}; if there were a relationship between class~I and class~II~CH$_3$OH{} masers, then the same might be true for class~I~CH$_3$OH{} masers. The luminosities of class~I and class~II~CH$_3$OH{} masers were compared, but no correlation was observed, corroborating the results of Paper~I. Because class~I~CH$_3$OH{} masers are associated with more than one evolutionary phase on the HMSF timeline, it is likely that a simple relationship between them and class~II~CH$_3$OH{} masers does not exist.
\subsection{Comparing class I CH$_3$OH{} masers with 870\,$\mu$m dust continuum from ATLASGAL}
\label{subsec:atlasgal}
\begin{figure}
\includegraphics[width=\columnwidth]{figures/atlasgal_offset_hist_new.pdf}
\caption{Histograms of offsets between ATLASGAL clumps and class~I~CH$_3$OH{} maser spots, for young and evolved regions. The majority of associations have projected linear distances less than 0.4\,pc, and almost all are within 0.6\,pc. This indicates a strong relation between the region containing the ATLASGAL source and class~I~CH$_3$OH{} maser emission. A small population also exists with projected linear distances larger than 1\,pc. Note that for each class~I~CH$_3$OH{} maser spot, if multiple ATLASGAL sources are present within 60~arcsec, only the closest was used.}
\label{fig:atlasgal_offset_hist}
\end{figure}
ATLASGAL surveyed 870\,$\mu$m dust continuum emission across a large part of the Galactic plane ($330^\circ \leq l \leq 60^\circ$; \citealt{schuller09}), including the \mbox{MALT-45}{} region. This sub-millimetre emission traces cold dust, which is optically thin, and can in turn be used to infer column densities and clump masses.
The catalogue of \citet{contreras13} provides the location and integrated flux densities of 870\,$\mu$m dust continuum emission point sources. Using these, we can investigate relationships between class~I~CH$_3$OH{} masers and 870\,$\mu$m emission, such as the brightness of each and the typical clump mass containing a maser site. We associated any class~I~CH$_3$OH{} masers and ATLASGAL clumps within an angular offset of 60~arcsec, to be consistent with the other maser associations, and list the associations in Table~\ref{tab:meth_assoc}. Only 4 of our 77 maser regions lack an association with an 870\,$\mu$m clump (5 per cent).
\subsubsection{Separation between ATLASGAL 870\,$\mu$m dust clumps and class~I~CH$_3$OH{} maser spots}
Section~\ref{sec:maser_separation} analyses the distances between class~I~CH$_3$OH{} masers and other masers associated with star formation; here, we analyse how class~I masers compare against point sources of dust continuum.
Histograms of projected linear distances between class~I~CH$_3$OH{} maser spots and 870\,$\mu$m clumps are provided in Fig.~\ref{fig:atlasgal_offset_hist}. The majority of maser spots are within 0.4\,pc of an ATLASGAL source, but the more evolved maser distribution has a long tail extending to approximately 1.2\,pc. Quantified with a KS test, we find a 2.4 per cent probability that both samples are drawn from the same population. This is similar to the projected linear distances between class~I~CH$_3$OH{} maser spots and OH{} masers shown in Fig.~\ref{fig:maser_projected_hist}, and may be due to the same effect: evolved regions of star formation are more likely to affect a greater volume than younger ones.
Note that Fig.~\ref{fig:atlasgal_offset_hist} includes any ATLASGAL clump within 60~arcsec of a maser spot. Maser sites without ATLASGAL associations are discussed in Section~\ref{sec:atlasgal_exceptions}.
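The two-sample KS comparison used here can be sketched as follows. The offset samples below are synthetic stand-ins for the young and evolved populations (drawn with deliberately different tails), not the survey data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Hypothetical projected offsets (pc) between maser spots and clumps:
# the "evolved" sample is drawn with a longer tail than the "young" one.
young = rng.rayleigh(0.15, 200)
evolved = rng.rayleigh(0.30, 200)

# Two-sample Kolmogorov-Smirnov test: the p-value is the probability of
# observing a KS statistic this large if both samples share one parent
# distribution (cf. the 2.4 per cent quoted in the text).
stat, p = stats.ks_2samp(young, evolved)
print(f"KS statistic = {stat:.2f}, p = {p:.3g}")
```

A small $p$-value argues against the two populations being drawn from the same parent distribution, without specifying how they differ.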
\subsubsection{Masses}
It is useful to compare the properties of class~I~CH$_3$OH{} masers against the mass of their host star-forming regions. \citet{chen12} and \citet{urquhart13} estimate clump masses from the integrated intensity of millimetre-wavelength continuum emission, assuming the emission is optically thin. In the same manner, we use ATLASGAL data to estimate clump masses via:
\begin{equation}\label{eq:mass}
M_{\text{gas}} = \frac{S_{\nu} D^2 R_d}{\kappa_d B_\nu \left(T_{\text{dust}}\right)}
\end{equation}
where $M_{\text{gas}}$ is the mass of the gas, $S_{\nu}$ is the integrated flux density of ATLASGAL 870\,$\mu$m emission, $D$ is the distance to the maser, $R_d$ is the ratio of gas and dust masses, $\kappa_d$ is the mass-absorption coefficient per unit mass of dust, and $B_\nu\left(T_{\text{dust}}\right)$ is the Planck function at temperature $T_{\text{dust}}$. \citet{urquhart13} justify the choice of $T_{\text{dust}} = 20$\,K and $\kappa_d = 1.85$\,cm$^2$\,g$^{-1}$, which we also use here. We also assume $R_d = 100$. The assumption of a single dust temperature is not realistic, but it is necessary in the absence of temperature measurements toward every individual region. \citet{pandian12} find that the kinetic temperatures of gas in regions with class~II~CH$_3$OH{} masers, determined by NH$_3${} observations, have mean and median values of 26 and 23.4\,K, respectively. \citet{urquhart11} also find mean and median temperatures toward high-mass YSOs to be 22.1 and 21.4\,K, respectively. If we instead assume a dust temperature of 25\,K, masses will decrease to 73 per cent of those determined with 20\,K. Using a temperature of 15\,K increases masses by 56 per cent relative to the 20\,K masses. These uncertainties are comparable to those given by the kinematic distances. The histogram of masses can be seen in Fig.~\ref{fig:atlasgal_mass_hist}.
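The mass estimate of Equation~\ref{eq:mass} and its temperature sensitivity can be evaluated directly. This sketch uses the adopted values ($T_{\text{dust}} = 20$\,K, $\kappa_d = 1.85$\,cm$^2$\,g$^{-1}$, $R_d = 100$); the function names and the example flux density and distance are ours, purely for illustration.

```python
import numpy as np

# Physical constants (SI).
h, k, c = 6.626e-34, 1.381e-23, 2.998e8

nu = c / 870e-6  # 870 um -> ~345 GHz

def planck(T_dust, nu=nu):
    """Planck function B_nu(T) in W m^-2 Hz^-1 sr^-1."""
    return 2.0 * h * nu**3 / c**2 / np.expm1(h * nu / (k * T_dust))

def clump_mass(S_jy, D_kpc, T_dust=20.0, kappa_d=0.185, R_d=100.0):
    """Gas mass (solar masses) from M = S D^2 R_d / (kappa_d B_nu(T)).
    kappa_d = 1.85 cm^2 g^-1 = 0.185 m^2 kg^-1."""
    S = S_jy * 1e-26          # Jy -> W m^-2 Hz^-1
    D = D_kpc * 3.086e19      # kpc -> m
    M_kg = S * D**2 * R_d / (kappa_d * planck(T_dust))
    return M_kg / 1.989e30    # kg -> M_sun

# Masses scale as 1/B_nu(T_dust), giving the ratios quoted in the text:
ratio_25K = planck(20.0) / planck(25.0)  # mass at 25 K relative to 20 K (~0.73)
ratio_15K = planck(20.0) / planck(15.0)  # mass at 15 K relative to 20 K (~1.56)
print(ratio_25K, ratio_15K, clump_mass(1.0, 5.0))
```

The Rayleigh-Jeans approximation is poor at 345\,GHz for these temperatures ($h\nu/k \approx 16.5$\,K), which is why the full Planck function, rather than a linear $T$ scaling, is needed to reproduce the 73 and 156 per cent figures.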
\begin{figure}
\includegraphics[width=\columnwidth]{figures/atlasgal_mass_hist_new.pdf}
\caption{Histograms of clump masses associated with class~I~CH$_3$OH{} masers, determined by using 870\,$\mu$m ATLASGAL dust continuum, for young and evolved populations. The mean mass of the young distribution is $10^{3.0}$\,$M_{\odot}$, whereas the mean mass of the evolved distribution is $10^{3.74}$\,$M_{\odot}$. The highest mass values are restricted to evolved maser regions, while the lowest masses are all young maser regions.}
\label{fig:atlasgal_mass_hist}
\end{figure}
Class~I~CH$_3$OH{} masers appear to be associated with a wide range of clump masses ($10^{1.25} < \frac{M}{M_\odot} < 10^{4.5}$). \citet{urquhart13} associate 870\,$\mu$m dust continuum emission with class~II~CH$_3$OH{} masers, and calculate clump masses. They find that clump masses associated with class~II~CH$_3$OH{} masers range over $10^{-2} < \frac{M}{M_\odot} < 10^{6}$. As our clump mass range is consistent with that of \citet{urquhart13}, and all class~II~CH$_3$OH{} masers are associated with HMSF, the lower end of these clump masses cannot necessarily serve as an indication of low-mass star formation regions.
Fig.~\ref{fig:atlasgal_mass_hist} shows a difference in the calculated mass range between clumps associated with H53$\alpha${} emission or an OH{} maser and those without; however, this difference is consistent with the sources we expect to be less evolved (clumps not associated with H53$\alpha${} emission or an OH{} maser) being at a lower temperature than the assumed 20\,K. Other small contributions to this mass discrepancy may be attributable to a slight bias in the detectability of H53$\alpha${}, whereby higher-mass objects are more likely to have detectable H53$\alpha${} emission at a slightly younger age, or to the possibility of genuine associations with low-mass stars. A KS test shows that the probability that both histograms are drawn from the same distribution is $<$10$^{-1}$ per cent.
\subsubsection{Brightness}
Fig.~\ref{fig:atlasgal_scatter} compares the IFD of 870\,$\mu$m ATLASGAL dust continuum clumps and the luminosity of class~I~CH$_3$OH{} masers, but only a weak correlation exists. \citet{chen12} performed a similar comparison using 95\,GHz class~I~CH$_3$OH{} masers towards Bolocam Galactic Plane Survey (BGPS) sources of 1.1\,mm thermal dust emission (fig.~6 of their paper). They find a strong correlation between the maser luminosity and clump mass, with an $r$-value of 0.84 and $p$-value of 8.1$\times$10$^{-13}$. Our line of best fit has the coefficients $r =$ 0.33, $p = $ 5.7$\times$10$^{-3}$, and is shown on Fig.~\ref{fig:atlasgal_scatter}. The large difference between correlation coefficients may be attributed to the \citet{chen12} sample of BGPS sources being selected based on their GLIMPSE colours, which is likely biasing their statistics. On the other hand, given that the relationship between 44\,GHz and 95\,GHz class~I~CH$_3$OH{} masers is well established \citep{valtts00}, the weak correlation we observe may highlight differences between 870\,$\mu$m and 1.1\,mm emission, with 1.1\,mm emission originating from more evolved regions. More data are necessary to establish a connection between class~I~CH$_3$OH{} masers (44 and 95\,GHz, to eliminate biases) and dust continuum emission. In particular, finding more 44\,GHz class~I~CH$_3$OH{} masers toward sources with the same GLIMPSE colours selected by \citet{chen12} would help to eliminate biases.
Here, our correlation coefficient of 0.33 is not much weaker than those found in our comparisons of thermal lines with class~I~CH$_3$OH{} masers (0.41, 0.57 and 0.40 for CS{}, SiO{} and thermal CH$_3$OH{}, respectively) in Section~\ref{sec:thermal_brightness}. In that section, we speculate that the brightness of the thermal lines is proportional to mass; indeed, a comparison between the integrated intensity of CS{} and the IFD of 870\,$\mu$m dust continuum emission has a correlation coefficient of 0.84. As the brightness of dust continuum emission is directly proportional to mass, the result in this section strengthens the notion that the brightness of class~I~CH$_3$OH{} masers can hint at the mass of their host star-forming regions.
\begin{figure}
\includegraphics[width=\columnwidth]{figures/atlasgal_ifds_new.pdf}
\caption{Scatter plot of ATLASGAL 870\,$\mu$m dust continuum emission and class~I~CH$_3$OH{} maser integrated flux densities. The line of best fit for all data (young and evolved) is shown, and has an $r$-value of 0.33.}
\label{fig:atlasgal_scatter}
\end{figure}
\subsubsection{Class~I~CH$_3$OH{} maser sites without an ATLASGAL source}
\label{sec:atlasgal_exceptions}
Only four class~I~CH$_3$OH{} maser regions are without an ATLASGAL point source classified by \citet{contreras13}: G331.371$-$0.145, G331.442$-$0.158, G333.772$-$0.010 and G333.818$-$0.303. Using the data provided by the ATLASGAL team, we are able to investigate the 870\,$\mu$m emission in each of these regions. Within $\sim$12~arcsec, G331.371$-$0.145, G331.442$-$0.158 and G333.818$-$0.303 have peak 870\,$\mu$m flux densities of 0.41, 0.63 and 0.23\,Jy\,beam$^{-1}$, respectively. The ATLASGAL 1$\sigma$ sensitivity over the \mbox{MALT-45}{} area is approximately 60\,mJy\,beam$^{-1}$. Point sources were classified by the source extraction algorithm \textsc{SExtractor}. Although the pixel values of 870\,$\mu$m dust emission toward these three maser sites are at least 3$\sigma$, these sources most likely failed a spatial morphology criterion of \textsc{SExtractor}. However, we do consider the 870\,$\mu$m emission at these locations to be genuine. For these three regions, we determine the IFD of 870\,$\mu$m emission to be 0.57, 0.28 and 0.24\,Jy, respectively. Using Equation~\ref{eq:mass}, these values correspond to clump masses of 85, 40 and 15\,$M_\odot$, respectively. These masses are relatively low compared to those discussed in Section~\ref{subsec:atlasgal}, but even lower masses were determined from integrated flux densities in the ATLASGAL catalogue.
G333.772$-$0.010 has a single pixel of 870\,$\mu$m dust continuum emission with a peak flux density of 0.25\,Jy\,beam$^{-1}$ at an angular offset of $\sim$27~arcsec. However, unlike the other regions of 870\,$\mu$m emission, nearby pixels are relatively dim ($<$0.14\,Jy\,beam$^{-1}$, $\lesssim$2$\sigma$), indicating this pixel is likely a random noise spike. For sources at a similar distance ($\sim$5.2\,kpc), we might expect to see a similar distribution of 870\,$\mu$m emission, but this is not the case. The class~I~CH$_3$OH{} maser emission is closely associated with compact infrared emission. Interestingly, the maser emission is strong (peak of 17.7\,Jy, IFD of 11.7\,Jy\,\kms{}), but the thermal lines are weak (the integrated intensities of CS{} and thermal CH$_3$OH{} are 0.319 and 0.057\,K\,\kms{}, respectively, and no SiO{} is detected). Since \citet{urquhart15} do not detect an ATLASGAL counterpart towards approximately 7 per cent of class~II~CH$_3$OH{} masers from the MMB, we may have a similar example of star formation without a significant dust detection.
\subsection{Comparing class I CH$_3$OH{} masers in cross- and auto-correlation}
\label{sec:cross_vs_auto}
\citet{minier02} and \citet{bartkiewicz14} discuss class~II~CH$_3$OH{} maser emission detected by very-long-baseline interferometry and single-dish/auto-correlation observations. These comparisons between cross- and auto-correlation data reveal a significant amount of resolved-out flux density; \citet{bartkiewicz14} report between 24 and 86 per cent. Naturally, an interferometer will resolve out any emission that is extended relative to the synthesised beam. The authors elaborate that the `missing flux' is not dependent on distance, and that lower-resolution cross-correlation data have similar compact structures. This suggests that the missing emission is quite diffuse, and is seemingly independent of the properties of the compact maser emission. \citet{minier02} also attribute the missing flux to diffuse emission.
While these investigations were exclusively focused on class~II~CH$_3$OH{} masers, the observations detailed in this paper enable a similar investigation for class~I~CH$_3$OH{} masers. It is worth noting that the observations of \citet{minier02} and \citet{bartkiewicz14} have a resolution approximately two orders of magnitude better than ours. They find that class~II~CH$_3$OH{} maser emission at 6.7\,GHz is partially resolved on baselines of 100\,km or longer; the same angular scale corresponds to a baseline length of approximately 15\,km at 44\,GHz. Since we note significantly reduced flux densities on \mbox{MALT-45}{} baselines of 6\,km or less, this implies that the 44\,GHz CH$_3$OH{} emission regions typically occur on larger scales than the 6.7\,GHz CH$_3$OH{} regions.
Auto-correlation spectra for the CH$_3$OH{} masers were extracted using the method discussed in Section~\ref{subsec:auto_correlation}. Cross-correlation spectra were then plotted with their corresponding auto-correlation spectrum; three example regions are shown in Fig.~\ref{fig:cross_auto_difference}. With the 1.5A array configuration for the ATCA, the maximum detectable scale of the cross-correlation data for these follow-up observations is approximately 4.6~arcsec.
The relative strength of compact maser emission to diffuse emission in a class~I~CH$_3$OH{} maser region was assessed by computing the ratio of cross-correlation IFD to auto-correlation IFD; if the cross-correlation IFD closely matches the auto-correlation IFD, the emission is compact. Fig.~\ref{fig:cross_auto_difference} shows that the amount of missing flux density varies greatly across regions. We analysed the ratios of cross-correlated IFDs to auto-correlated IFDs, but no trends are apparent for the evolved, young or total populations. The ratio of cross-correlated IFD to auto-correlated IFD was also compared against the flux density of the masers, but no trend was observed. Class~I~CH$_3$OH{} emission can be strongly confined to compact structures (cross-correlated emission $\approx$ auto-correlated emission), extended (auto-correlated emission $\gg$ cross-correlated emission), or a combination of the two. This may explain why three targeted sites were not detected in the cross-correlation dataset (G331.44$-$0.14, G331.72$-$0.20 and G333.24$+$0.02); their 44\,GHz emission is likely too extended and lacks any bright, compact maser components.
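The compactness measure described above is a simple ratio. A minimal sketch follows; the example IFD values and the thresholds used to label the regimes are illustrative assumptions, not values from this work.

```python
def compactness(cc_ifd, ac_ifd):
    """Ratio of cross-correlated to auto-correlated IFD.

    A ratio near 1 means the emission is dominated by compact maser
    components; a ratio much less than 1 means extended emission
    dominates and is resolved out by the interferometer.
    """
    return cc_ifd / ac_ifd

# Hypothetical (cross, auto) IFD pairs in Jy km/s; thresholds illustrative.
for cc, ac in [(9.5, 10.0), (4.0, 10.0), (0.5, 10.0)]:
    ratio = compactness(cc, ac)
    kind = "compact" if ratio > 0.8 else "mixed" if ratio > 0.2 else "extended"
    print(f"ratio = {ratio:.2f} -> {kind}")
```

Note that spectral blending of nearby maser spots can push the cross-correlated IFD above the auto-correlated IFD, as seen for the single outlying datum in Fig.~\ref{fig:maserness_scatter}.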
\begin{figure}
\centering
\includegraphics[height=0.30\textheight]{figures/cross_auto_diff_1.pdf}
\includegraphics[height=0.30\textheight]{figures/cross_auto_diff_2.pdf}
\includegraphics[height=0.30\textheight]{figures/cross_auto_diff_3.pdf}
\caption{Cross- and auto-correlation spectra of three class~I~CH$_3$OH{} maser regions, with strong, moderate and weak maser strengths. Auto-correlation spectra are plotted red, cross-correlation spectra blue and the difference green.}
\label{fig:cross_auto_difference}
\end{figure}
To test if compact or diffuse regions can be separated, populations were investigated with and without various maser associations. These associations included class~II~CH$_3$OH{}, H$_2$O{} and OH{} masers, but no dependence on these associations was found for the proportion of compact to diffuse maser emission. This proportion of compact to diffuse emission was also compared with thermal line integrated intensities in CS{}~(1--0), SiO{}~(1--0)~$v=0$ and CH$_3$OH{}~1$_0$--0$_0$; see Fig.~\ref{fig:maserness_scatter}. A similar trend can be seen in each comparison; the $r$-values for CS{}, SiO{} and thermal CH$_3$OH{} are $-$0.55, $-$0.38 and $-$0.45, respectively. The corresponding $p$-values are 3.3$\times$10$^{-7}$, 2.7$\times$10$^{-3}$ and 7.7$\times$10$^{-5}$, respectively. These statistics suggest that our lines of best fit are significant.
The degree to which class~I~CH$_3$OH{} emission is dominated by compact components appears to be an intrinsic property of class~I masers, albeit with significant scatter amongst the sources. Another star formation tracer may reveal a correlation with the maser-strength property discussed here. Further observations of class~I~CH$_3$OH{} maser regions with better sensitivity and zero-spacing data will also help to identify weaker, more diffuse cross-correlation emission, in order to better understand the relationship between the diffuse and compact components of the CH$_3$OH{} maser emission.
\begin{figure}
\includegraphics[width=\columnwidth]{figures/cs_maserness_scatter.pdf}
\includegraphics[width=\columnwidth]{figures/sio_maserness_scatter.pdf}
\includegraphics[width=\columnwidth]{figures/meth_maserness_scatter.pdf}
\caption{Scatter plots of the ratio of cross-correlation integrated flux density (C.C. IFD) to auto-correlation (A.C.) IFD against the thermal integrated intensity of CS{}, SiO{} and CH$_3$OH{}. The solid line represents the least squares fit. A similar, weak trend appears for all lines, but the large scatter hampers correlation ($r$-values of $-$0.55, $-$0.38 and $-$0.45, respectively). The single datum with a larger C.C. IFD than A.C. IFD has an over-contribution of cross-correlation emission, due to spectral blending in nearby maser spots.}
\label{fig:maserness_scatter}
\end{figure}
\section{Summary and Conclusions}
We have extracted high-resolution positions, flux densities and velocities for class~I methanol masers detected in the \mbox{MALT-45}{} survey, and presented various properties for each observed region. The unbiased population from \mbox{MALT-45}{} provides the first opportunity to assess class~I methanol masers in a sensitivity-limited sample that is free from target selection biases, such as targeting class~II methanol masers or extended green objects. In addition, the thermal lines mapped by \mbox{MALT-45}{} were observed with better sensitivity toward each maser site, providing more detail about the regions containing the class~I methanol masers. We have determined:
(i) Class I methanol maser sites with fewer spots of emission are less likely to be associated with a class~II methanol or hydroxyl maser;
(ii) Class I methanol masers without an associated hydroxyl maser or radio recombination line emission have lower luminosities;
(iii) The spatial extent and velocity range of class~I methanol masers is typically small ($<$0.5\,pc and $<$5\,\kms{}, respectively), particularly for those without an associated class~II methanol or hydroxyl maser;
(iv) Class I methanol masers are generally located within 0.5\,pc of a class~II methanol or hydroxyl maser, but can be up to 0.8\,pc away;
(v) Class I methanol masers are reliable tracers of systemic velocities, and are better than class~II methanol masers;
(vi) The brightness of class~I methanol masers is weakly correlated with the brightness of the thermal CS{}~(1--0), SiO{}~(1--0)~$v=0$ and CH$_3$OH{}~1$_0$--0$_0$ lines, as well as 870\,$\mu$m dust continuum emission from ATLASGAL. These results suggest that the brightness of a class~I methanol maser is proportional to the mass of its host star-forming region;
(vii) Class I methanol masers have high association rates with 870\,$\mu$m dust continuum point sources catalogued by ATLASGAL, with typical offsets not exceeding 0.4\,pc;
(viii) Class I methanol masers are found towards a large range of clump masses (10$^{1.25}$ to 10$^{4.5}$\,$M_\odot$), but peak between 10$^{3.0}$ and 10$^{3.5}$\,$M_\odot$. Additionally, masers associated with clump masses between 10$^{3.25}$ and 10$^{4.5}$\,$M_\odot$ are almost all evolved regions of star formation;
(ix) The amount of diffuse emission in the 44\,GHz class~I methanol transition varies from source to source, but appears to increase as the brightness of the thermal lines increases.
\section*{Acknowledgements}
Parts of this research were conducted by the Australian Research Council Centre of Excellence for All-sky Astrophysics (CAASTRO), through project number CE110001020. The Australia Telescope Compact Array is part of the Australia Telescope National Facility which is funded by the Commonwealth of Australia for operation as a National Facility managed by CSIRO. Shari Breen is the recipient of an Australian Research Council DECRA Fellowship (project number DE130101270). This research has made use of NASA's Astrophysics Data System Bibliographic Services, and the SIMBAD database,
operated at CDS, Strasbourg, France. Computation was aided by the \textsc{NumPy} \citep{numpy}, \textsc{SciPy} \citep{scipy} and \textsc{Astropy} \citep{astropy} libraries. Figures were generated with \textsc{matplotlib} \citep{matplotlib}. Some figures in this paper use colours from \url{www.ColorBrewer.org} \citep{colorbrewer}. The \textsc{MIRIAD}\footnote{http://www.atnf.csiro.au/computing/software/miriad/} software suite is managed and maintained by CSIRO Astronomy and Space Science.
\bibliographystyle{mnras}
|
\section{Introduction}
The Skyrme model proposes that baryons are topological solitons in a
field theory of pions \cite{Sk,RZ}. Protons and neutrons are spin-half quantum
states of the basic Skyrmion with unit baryon number, and they combine into
an isospin-half doublet of nucleons \cite{ANW}. The model incorporates
approximate chiral symmetry and has
had considerable success in modelling not only nucleons, but also the
ground states and excited states of larger nuclei. Many Skyrmion
solutions with higher baryon numbers are known. The simplest
quantization technique is rigid body quantization of the classical
Skyrmions of minimal energy, but recent work has considered
some of the low-energy deformation modes of Skyrmions; this gives further
states, and improved fits to the spectra of nuclei, in particular
Carbon-12 \cite{LM} and Oxygen-16 \cite{HKM}.
However, two problems with the standard Skyrme model are that the classical
Skyrmion binding energies are rather high, and that there is little evidence
for a conventional sequence of magic numbers that should be a feature
of a model of nuclei.
There have been various attempts to resolve the first problem by
devising variants of the standard Skyrme model, and the variant we
consider here is the one devised by the group in Leeds \cite{Har,GHS,GHK}.
This adds a quartic potential term to the usual pion mass term,
and also substantially changes the ratio between the quadratic and
quartic terms in field derivatives (the sigma model term and
Skyrme term). The model has much reduced classical binding energies,
although this desirable feature is rather spoiled by quantum effects.
The Leeds group found the classical energy minima for baryon
numbers\footnote{We use the notation $B$ for baryon number, as is
standard in Skyrmion research. It is identical to atomic
mass number, usually denoted by $A$.}
up to $B = 8$ in \cite{GHS}. In addition to the low binding energies, their
most striking result was that the Skyrmions were clusters of almost
undeformed $B=1$ Skyrmions centred very close to vertices of a face centred
cubic (FCC) lattice. In \cite{GHK} they showed that an accurate approximation
to their model is obtained by treating the $B=1$ Skyrmions
as particles located precisely on the FCC vertices. They
could study Skyrmions up to baryon number $B=23$ using this approximation.
It was not really a novel discovery that the optimal way to arrange
large numbers of $B=1$ Skyrmions in three dimensions
is to place them at the vertices of an FCC lattice. This was first
noted by Kugler and Shtrikman \cite{KSh}, and Castillejo et al. \cite{CJJ}.
A similar lattice of instantons probably occurs in the holographic
approach to multi-baryon systems \cite{MS,KMS}.
The FCC lattice has four sublattices, and the Skyrmions have a uniform
orientation on each sublattice. Nearest neighbour Skyrmions, which are
on distinct sublattices, have a relative orientation that is maximally
attractive (i.e. one Skyrmion is rotated relative to the other by $\pi$ around
a line perpendicular to the line joining them), and this is why the
overall energy is minimised.
In the standard Skyrme model with zero pion mass one can find an infinite,
periodic Skyrme crystal. If the lattice spacing is forced to be relatively
large, the crystal structure is an FCC lattice of $B=1$ Skyrmions,
but as the lattice spacing decreases, it reaches a critical value
where the Skyrmions partially merge and the symmetry is enhanced.
The crystal then becomes a
primitive cubic lattice of half-Skyrmions. The true energy minimum of
the Skyrme model occurs at a lattice spacing smaller than the
critical value, so it is a crystal of half-Skyrmions. At least, this
is the case if one just considers the static solutions for zero pion mass. The
half-Skyrmion crystal structure is remarkable, but for massive pions
the enhanced symmetry (a kind of discrete chiral symmetry) is lost and
the minimal energy crystal reverts to having FCC structure. The FCC
lattice is also almost certainly the minimal energy crystal
structure in the lightly bound Skyrme model.
Ma and Rho \cite{MR} have recently reconsidered the transition between
the FCC crystal of Skyrmions and the half-Skyrmion crystal with enhanced
symmetry. By taking into account pion mass effects and quantum
effects, they argue that at normal nuclear densities, the Skyrmions
are in the FCC phase, but at higher densities (more than twice normal
nuclear densities) the half-Skyrmion phase will occur. This has
consequences for neutron stars and other dense nuclear systems that
are not normally accessible in laboratory experiments.
It suggests that it is reasonable to study variants of the
Skyrme model with FCC arrangements of $B=1$ Skyrmions, where the
Skyrmions are close to merging. The lightly bound model is just one
such variant.
Related to the transition from the FCC crystal to the half-Skyrmion
crystal is what happens for baryon number
$B=4$. The optimal way to arrange four $B=1$ Skyrmions is to place them
at the vertices of a regular tetrahedron. The orientations are
distinct and are those that occur on the four FCC sublattices. All six pairs
of Skyrmions maximally attract, and the field configuration has tetrahedral
symmetry. Remarkably, in the standard Skyrme model, both for zero mass
pions and for pions of realistic mass, the true $B=4$ Skyrmion of minimal
energy has an enhanced cubic symmetry \cite{BTC}. The four Skyrmions at the
vertices of the tetrahedron merge into a cubic structure with
half-Skyrmions at the eight vertices. However, for the lightly bound
model, and probably many other variant models, the minimal energy
solution remains tetrahedral. Note that a cubic structure easily deforms
into a tetrahedral structure, and in two ways, because the vertices of a cube
naturally split into two subsets forming tetrahedra. The relevant
tetrahedral symmetry group is ${\rm T}_d$, a subgroup of
the cubic group ${\rm O}_h$. It is therefore not surprising that the
$B=4$ Skyrmion has a vibrational mode that oscillates between two
dual tetrahedra, and that this is one of the lowest frequency modes \cite{BBT}.
The existence of the cubic $B=4$ Skyrmion in the standard Skyrme model
has influenced much of the recent research into Skyrmions of higher
baryon numbers. Skyrmion solutions, for baryon numbers a multiple of
four, have been found by bringing several $B=4$ cubes together \cite{BMS}. The
results are similar to those found in the alpha-cluster models of
nuclei. The Skyrmion with $B=8$ has two cubes touching along a face.
Solutions with $B=12$ have been found with three cubes arranged in an
equilateral triangle, and also in a linear chain; these have similar
energies. Four $B=4$ cubes can be arranged tetrahedrally to give
a $B=16$ solution. Eight cubes can be arranged into a large cube with $B=32$,
and twenty seven $B=4$ cubes produce the largest known standard Skyrmion, with
$B=108$ \cite{FLM}.
However, the study of higher baryon number Skyrmions as clusters of
$B=4$ cubes, in the standard Skyrme model, has reached an impasse.
Quantizing the cubic $B=32$ and $B=108$
Skyrmions as rigid bodies does not work well. It has also not been possible to
construct a $B=40$ solution from ten $B=4$ cubes, to obtain a good
model for the magic nucleus Calcium-40. More generally, the standard
Skyrme model has not yet yielded obvious structures compatible with the
known magic numbers for nuclei, beyond the $B=16$ solution modelling
Oxygen-16.
More promising, then, is the lightly bound Skyrme model, with its symmetries
inherited from the FCC lattice. The maximal symmetry of clusters cut
out from the FCC lattice is cubic, but these cubically symmetric
clusters are not exceptionally tightly bound. The most tightly bound
clusters appear to have tetrahedral symmetry, and have a single tetrahedral
cluster of four $B=1$ Skyrmions at their centre. We shall describe
these next. Their baryon numbers match the magic numbers established
in other nuclear models.
\section{Clusters with Tetrahedral Symmetry}
In the FCC lattice, each vertex has 12 nearest neighbours. Equivalently, the
coordination number is 12. In the lightly bound Skyrme model there is
one baryon (i.e. one $B=1$ Skyrmion) at each vertex, and it is an
accurate approximation to say that the binding is predominantly due
to the pair interactions between nearest neighbours, which are all of
the same strength. We shall use this approximation, and
ignore longer-range contributions to the interactions. We refer to
the binding between each nearest neighbour
pair as a bond, so the total binding energy is a constant times
the number of bonds. Let us normalise the energy so that this constant is
unity, and identify the number of bonds as the binding energy $E$.
Using this approach, we see that as each baryon in the FCC lattice
is bonded to twelve others, and each bond has two baryons at its ends,
the binding energy per baryon of the complete lattice is $E/B = 6$. We are
interested in finite clusters of baryons arranged as subsets of the
FCC lattice. $E/B$ is then obviously less than 6,
because some baryons, especially those on the surface of a cluster,
have fewer than twelve nearest neighbours.
The smallest clusters beyond a single pair have baryon numbers $B=3$ and
$B=4$. The $B=3$ cluster is an equilateral triangle of baryons, with
3 bonds and $E/B = 1$; the $B=4$ cluster is a tetrahedron of baryons, with
6 bonds and $E/B = 1.5$. The next highly symmetric cluster is the
$B=6$ octahedron with 12 bonds, for which $E/B = 2$. For baryon numbers
less than 16 it is not possible for $E/B$ to be as large as 3. There are
highly symmetric clusters with $B=13$ (a single baryon surrounded by
all twelve of its nearest neighbours) and $B=14$ (a $B=6$ octahedron with
each face completed into a tetrahedron by adding one more
baryon outside). These are both cubically symmetric, and each has 36 bonds
(for the second cluster: 12 octahedron edges plus 24 bonds from the capping
baryons, each cap bonding to the three vertices of its face), giving $E/B$
values of 2.77 and 2.57. Notice that the
cubic symmetry is differently realised in these two cases, as the
first cluster has a baryon at its centre, but the second does not.
For $B=16$ there is a tetrahedrally symmetric cluster of baryons with 48 bonds,
so $E/B = 3$. This has the basic $B=4$ tetrahedron at the
centre, attached to a triangular $B=3$ cluster above each face. It
therefore consists of four $B=6$ octahedra each sharing one
face with the central tetrahedron, and these octahedra have some shared
edges and vertices. Four outer faces are hexagons with seven
baryons, and four are triangles with three baryons (see Figure 1).
\begin{figure}[!ht]
\centering
\includegraphics[width=10.5cm]{CIMG2960.jpg}
\caption{$B=16$ cluster in the lightly bound Skyrme model.}
\label{fig1}
\end{figure}
Notice that this cluster inherits a basic property of the FCC
lattice. The complete lattice can be decomposed into alternating
$B=4$ tetrahedra and $B=6$ octahedra. Each tetrahedron shares
faces with four neighbouring octahedra, and each octahedron shares
faces with eight neighbouring tetrahedra. The octahedra all have
the same spatial orientation, whereas the tetrahedra occur in two
orientations related by inversion. The $B=16$ cluster has a single
central tetrahedron, together with its four neighbouring octahedra and
six more tetrahedra between the octahedra. (The $B=14$
cluster mentioned above has a single central octahedron together with its
eight neighbouring tetrahedra.)
The larger clusters we shall consider are mainly those built on the $B=16$
core, and the most symmetric ones have tetrahedral symmetry. One
can also consider larger clusters with cubic symmetry, built on the
$B=13$ or $B=14$ cores. For a discussion of these, and some further
clusters, see the Appendix. The tetrahedral clusters have large $E/B$
values, but they have competitors. For example, at $B=19$, there is a large
octahedron (with a baryon at the centre) with 60 bonds; there is also
a cluster where a $B=3$ triangle is attached to one of the hexagonal
faces of the $B=16$ cluster, also with 60 bonds, but less symmetry.
It is the second cluster that easily allows further baryons to be attached,
so as to achieve higher $E/B$ values.
At this point it is helpful to introduce Cartesian coordinates for the vertices
of the FCC lattice. We choose the origin to be the centre of one of
the $B=4$ tetrahedra, and orient and scale this tetrahedron so that its
vertices are at $(1,1,1)$, $(1,-1,-1)$, $(-1,1,-1)$ and $(-1,-1,1)$.
The vertices of the entire FCC lattice are then at the positions
$(a,b,c)$ where $a$, $b$ and $c$ are odd integers, and also $a+b+c = 3
\bmod 4$. The vectors from any vertex to its nearest neighbours
are $(0,\pm 2,\pm 2)$, $(\pm 2,0,\pm 2)$ and $(\pm 2,\pm 2,0)$, and
have squared length 8. Note that the sum of the coordinates of these
vectors is always $0 \bmod 4$. The points $(a,b,c) = (1,1,1) \bmod 4$
form one of the four sublattices of the FCC lattice, and $B=1$ Skyrmions
located at these points all have the same orientation. Similarly for
the points equal to $(1,-1,-1)$, $(-1,1,-1)$ or $(-1,-1,1) \bmod 4$.
It is straightforward to classify larger, tetrahedrally symmetric
clusters using these coordinates. For example, the $B=16$ cluster has
baryons at all the allowed vertices with coordinates $(\pm 1,\pm 1,\pm 1)$,
and the vertices whose coordinates are $(\pm 3,\pm 1,\pm 1)$ or
its permutations. The constraint $a+b+c = 3 \bmod 4$ allows
half of the sign combinations here, so there are four vertices of the
first type and twelve of the second type. These are at squared distances 3
and 11 from the origin, respectively. The next largest tetrahedrally symmetric
cluster adds baryons at the twelve allowed vertices with coordinates
$(\pm 3,\pm 3,\pm 1)$ and its permutations, all at
squared distance 19, to produce $B=28$. This efficiently adds four
triangular $B=3$ clusters above each hexagonal face of the $B=16$
cluster, and adds 48 bonds, producing 96 bonds in total. (Note
that the squared distances are always of the form $8k+3$.)
At squared distance 27 there are two sets of vertices. There are twelve
vertices $(\pm 5,\pm 1,\pm 1)$ and its permutations, and
four vertices $(\pm 3,\pm 3,\pm 3)$. We discover here that tetrahedral
symmetry is not a sufficient criterion for achieving a large value of
$E/B$. It is optimal to add twelve baryons on the first set of vertices,
to produce $B=40$, but not optimal to add four baryons on
the second set either before or after adding the twelve. The twelve
baryons of the first set occur in six pairs; each pair attaches to two
touching square faces of the $B=28$ cluster, adding 9 bonds per
pair, and 54 bonds altogether. The four baryons of the second set are isolated
and add only 3 bonds each.
Therefore, there are tetrahedrally symmetric clusters with $B=32$ and
with $B=44$, but the most strongly bound cluster in this region of
baryon numbers is the $B=40$ cluster, which has $96 + 54 = 150$ bonds,
and $E/B = 3.75$ (see Figure 2). It has the form of a truncated tetrahedron,
as does the $B=16$ cluster. Baryon numbers 4, 16 and 40 are magic numbers,
corresponding to the nuclei Helium-4, Oxygen-16 and Calcium-40, which
is encouraging, so we shall study these truncated tetrahedra further.
For their baryon numbers, they have maximal or almost maximal
values of $E/B$.
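These bond counts are easy to check by direct enumeration. The following short Python fragment (ours, purely illustrative; the model itself involves no such code) builds the $B=16$, $B=28$ and $B=40$ clusters from the coordinates just introduced and counts nearest-neighbour pairs:

```python
from itertools import combinations, product

# FCC sites: all coordinates odd, with a+b+c = 3 (mod 4)
sites = [p for p in product(range(-5, 6, 2), repeat=3) if sum(p) % 4 == 3]

def bonds(cluster):
    """Count nearest-neighbour pairs (squared separation 8)."""
    return sum(1 for p, q in combinations(cluster, 2)
               if sum((a - b) ** 2 for a, b in zip(p, q)) == 8)

d2 = lambda p: sum(x * x for x in p)

B16 = [p for p in sites if d2(p) in (3, 11)]
B28 = [p for p in sites if d2(p) in (3, 11, 19)]
# at squared distance 27, keep only the (5,1,1)-type sites, not the (3,3,3)-type
B40 = B28 + [p for p in sites if d2(p) == 27 and max(map(abs, p)) == 5]

for cl, (B, E) in [(B16, (16, 48)), (B28, (28, 96)), (B40, (40, 150))]:
    assert (len(cl), bonds(cl)) == (B, E)
```

The enumeration reproduces the bond totals 48, 96 and 150 quoted above.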
\begin{figure}[!ht]
\centering
\includegraphics[width=10.5cm]{CIMG2956.jpg}
\caption{Truncated tetrahedral $B=40$ cluster.}
\label{fig2}
\end{figure}
\section{Magic, Truncated Tetrahedra}
We now discuss in more generality the infinite family of truncated
tetrahedral clusters that give exceptionally large values of the
binding energy per baryon, $E/B$, and shall refer to their baryon
numbers as magic.
The FCC lattice has complete, pure tetrahedra as subclusters. These are
obtained by intersecting four planes parallel to the faces of the basic $B=4$
cluster at the centre. The first of these clusters beyond $B=4$ has
$B=10$, but this does not have a $B=4$ tetrahedron at its centre,
so more interesting is the next one, with $B=20$. When four baryons
are truncated from its vertices we recover the magic $B=16$ cluster. More
generally, complete tetrahedra have too much of a pointed shape to have an
exceptional value for $E/B$. What we need to do is to truncate the
tetrahedra by slicing off four smaller tetrahedra to bring the
cluster closer to a spherical shape.
To calculate the total baryon numbers of these truncated tetrahedra we
require the pure tetrahedral numbers. A tetrahedron is built up from layers
of equilateral triangles, so a tetrahedral number $T_N$ is the sum of a
sequence of triangular numbers,
\begin{equation}
T_N = \sum_{n=1}^N \frac{1}{2} (n+1)n = \frac{1}{6}(N+2)(N+1)N \,.
\end{equation}
The first few of these are $1 \,, 4 \,, 10 \,, 20 \,, 35 \,,
56 \,, 84 \,, 120 \,, 165 \,, 220 \,, 286 \,,364$.
(An amusing appearance of these numbers is in the song Twelve Days of
Christmas. If one takes literally that on the first day the gift
is a Partridge in a Pear Tree, and on the second it is two Turtle
Doves {\it and} a Partridge in a Pear Tree, and so on, then by the 12th
day the total number of gifts is $T_{12} = 364$.)
If we require our truncated tetrahedra to have a central $B=4$
cluster, which appears desirable, then
we must start with a complete tetrahedron whose edge has an even number
of baryons. We then truncate this by removing four equal tetrahedra of
baryons, leaving the shortest edge with just two baryons. This
produces the truncated structure that is closest to spherical.
We therefore start with a tetrahedron with $2N$ baryons on an edge,
and remove four tetrahedra with $N-1$ baryons on
an edge. The total baryon number remaining is
\begin{eqnarray}
M_N = T_{2N} - 4T_{N-1} &=& \frac{1}{6}(2N+2)(2N+1)2N -
\frac{2}{3}(N+1)N(N-1) \nonumber \\
&=& \frac{2}{3}(N+1)N(2N+1-N+1) \nonumber \\
&=& \frac{2}{3}(N+2)(N+1)N \,.
\end{eqnarray}
There are a pleasing number of common factors in the two
contributions to $M_N$, and the result,
surprisingly, is four times the tetrahedral number $T_N$. The cluster has
$T_N$ baryons in each of the four orientations, matching the equal
distribution of orientations that occurs for the central $B=4$
tetrahedron, but the baryons with a given orientation are not arranged
as a pure tetrahedron.
The first few of the numbers $M_N$ are $4 \,, 16 \,, 40 \,, 80 \,,
140 \, ,224$, and we refer to these as tetrahedral magic (baryon) numbers.
Assuming a nucleus with one of these baryon numbers has equal
numbers of protons and neutrons, then these form
the sequence $2 \,, 8 \,, 20 \,, 40 \,, 70 \,, 112$. The first three
of these are the standard magic numbers of nuclear physics.
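The identities above can be confirmed numerically. This illustrative check (the function names are ours) verifies the tetrahedral numbers, the relation $M_N = 4T_N$, and the resulting magic sequences:

```python
def T(N):                      # tetrahedral numbers (1/6)(N+2)(N+1)N
    return (N + 2) * (N + 1) * N // 6

def M(N):                      # edge-2N tetrahedron minus four edge-(N-1) corners
    return T(2 * N) - 4 * T(N - 1)

assert T(12) == 364                                   # the Twelve Days total
assert all(M(N) == 4 * T(N) for N in range(1, 20))    # M_N = 4 T_N
assert [M(N) for N in range(1, 7)] == [4, 16, 40, 80, 140, 224]
assert [M(N) // 2 for N in range(1, 7)] == [2, 8, 20, 40, 70, 112]
```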
We calculate next the total bond numbers of the truncated
tetrahedra. We again start with a complete, pure tetrahedron, and consider
slicing it up into parallel equilateral triangles. A triangle with $n$
baryons along an edge has (the triangular number) $\frac{1}{2} n(n-1)$ bonds
in each of three directions, so there are $\frac{3}{2} n(n-1)$ bonds
within this triangle. The triangle is also bonded to the next smaller
triangle by 3 bonds for each baryon in the smaller triangle, so
there are $\frac{3}{2} n(n-1)$ such bonds. This triangle therefore
contributes $3n(n-1)$ bonds overall, which is six times a triangular number.
Summing these up, we find that a complete tetrahedron with $N$ baryons
along each edge has $(N+1)N(N-1)$ bonds in total, six times a
tetrahedral number.
Our truncated tetrahedron starts as a
complete tetrahedron with $2N$ baryons on an edge, and then four
tetrahedra with $N-1$ baryons along an edge are removed. The bonds of
the removed tetrahedra are lost, as are the bonds connecting these
tetrahedra to what remains. The total number of bonds of the
truncated tetrahedron is therefore
\begin{eqnarray}
E_N &=& (2N+1)2N(2N-1) - 4\left(N(N-1)(N-2) + \frac{3}{2}N(N-1)\right)
\nonumber \\
&=& 2(2N-1)(N+2)N \,.
\end{eqnarray}
The first few of these bond numbers are $6 \,, 48 \,, 150 \,,
336 \,, 630 \,, 1056$, and these are also the approximate binding
energies. The binding energies per baryon are $E/B = 1.5 \,, 3 \,, 3.75 \,,
4.2 \,, 4.5 \,, 4.71$, and the general algebraic formula is
\begin{eqnarray}
E/B = E_N/M_N &=& \frac{2(2N-1)(N+2)N}{\frac{2}{3}(N+2)(N+1)N}
\nonumber \\
&=& 3\frac{2N-1}{N+1} \nonumber \\
&=& 6 - \frac{9}{N+1} \,.
\end{eqnarray}
This slowly approaches 6 as expected, but only reaches 5 for $N=8$,
when $B=480$, much larger than the baryon number of any observed nucleus.
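As a quick check (ours, for illustration), the bond numbers, the algebraic form of $E/B$, and the statement about $N=8$ can all be verified directly:

```python
def M(N): return (N + 2) * (N + 1) * N * 2 // 3       # baryon number
def E(N): return 2 * (2 * N - 1) * (N + 2) * N        # bond number

assert [E(N) for N in range(1, 7)] == [6, 48, 150, 336, 630, 1056]
# E/B = 6 - 9/(N+1) for all N, first reaching 5 at N = 8, where B = 480
assert all(abs(E(N) / M(N) - (6 - 9 / (N + 1))) < 1e-12 for N in range(1, 30))
assert E(8) / M(8) == 5 and M(8) == 480
```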
An alternative way to find the bond number of a truncated tetrahedron
is to note that baryons occur in four
types of position. There are interior baryons, face baryons (not
on an edge), edge baryons (not at a vertex), and vertex baryons.
These have coordination numbers 12, 9, 7 and 5, respectively. As each
bond has two ends, the total bond number is half of the total
coordination number. For example, the $B=140$ truncated tetrahedron
(which has edge lengths of 2 baryons and 5 baryons) has 40 interior
baryons, 52 face baryons, 36 edge baryons and 12 vertex baryons, and the
total bond number is 630.
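This census can be reproduced by explicit construction. In the Cartesian coordinates introduced earlier, one convenient parametrisation (ours, chosen so the cluster closes up with short edges of two baryons) of the $B=140$ truncated tetrahedron is the set of lattice sites lying between the planes $f_i=-9$ and $f_i=11$ along the four face normals $f_1=a+b+c$, $f_2=a-b-c$, $f_3=-a+b-c$, $f_4=-a-b+c$:

```python
from collections import Counter
from itertools import product

def faces(p):
    """Coordinates along the four face normals of the central tetrahedron."""
    a, b, c = p
    return (a + b + c, a - b - c, -a + b - c, -a - b + c)

# FCC sites: all coordinates odd, with a+b+c = 3 (mod 4)
sites = [p for p in product(range(-11, 12, 2), repeat=3) if sum(p) % 4 == 3]

# B=140 truncated tetrahedron (N=5): slab -9 <= f_i <= 11 in all four directions
B140 = [p for p in sites if all(-9 <= f <= 11 for f in faces(p))]
assert len(B140) == 140

def coordination(p):
    return sum(1 for q in B140 if sum((x - y) ** 2 for x, y in zip(p, q)) == 8)

census = Counter(coordination(p) for p in B140)
assert census == {12: 40, 9: 52, 7: 36, 5: 12}   # interior, face, edge, vertex
assert sum(k * v for k, v in census.items()) == 2 * 630   # each bond has two ends
```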
Some of the truncated tetrahedra have another interesting property.
When $N$ is odd, the truncated tetrahedron has an even number of
parallel layers with triangular symmetry, and
in this case it can be decomposed into a set of disjoint $B=4$ tetrahedra
that account for all the baryons. These $B=4$ tetrahedra are connected by
octahedra sharing the faces. The interpretation is that the
truncated tetrahedra are clusters of alpha particles. In particular,
the truncated tetrahedra with baryon numbers 40 and 140 decompose
into 10 and 35 disjoint $B=4$ tetrahedra, respectively, arranged
with tetrahedral symmetry. The case $B=40$ is particularly
interesting, because until now it was not known how to arrange ten
alpha particles into a Calcium-40 nucleus in any Skyrmion model.
The orientations of the $B=4$ tetrahedra alternate. The arrangement
for $B=40$ is that six tetrahedra have one orientation and four the other. The
six occur at vertices of a large octahedron, and the four at vertices of a
large tetrahedron. The ten together do not form a pure tetrahedron. For
$B=140$ there are nineteen in one orientation, at the vertices of a larger
octahedron, and sixteen in the other orientation, at the vertices of a
larger truncated tetrahedron. It is curious that the symmetry group of one
set is cubic, and larger than that of the other set.
\section{Physics of Truncated Tetrahedra}
We propose that truncated tetrahedra composed of lightly bound Skyrmions
are models for some types of magic nuclei, and present some of
the evidence here. Recall that the magic baryon numbers we have obtained are
$4 \,, 16 \,, 40 \,, 80 \,, 140 \,, 224$. The first three of these
clearly correspond to Helium-4, Oxygen-16 and Calcium-40. The
fourth corresponds to Zirconium-80. This is conjectured to be
magic, despite being close to the proton drip line, because 40 appears
to be a magic number for both protons and neutrons in tetrahedrally deformed
nuclei \cite{TYM,DGSM}. Baryon number 140 is only conjecturally magic, based
on the evidence that 70 is a magic number for protons/neutrons in
tetrahedrally deformed nuclei. However, no nucleus exists with both
70 protons and 70 neutrons (the largest, short-lived nucleus with equal
proton and neutron numbers is Tin-100, or possibly Tellurium-104).
Baryon number 224 allows for Radium-224, which is octupole-deformed
and possibly tetrahedral \cite{LD}, with 88 protons and 136 neutrons.
The binding energy per baryon, eq. (4), matches the two leading terms
in the Bethe--Weizs\"acker, liquid drop mass formula
\begin{equation}
E/B = a_{\rm V} - a_{\rm S} B^{-\frac{1}{3}} \,,
\end{equation}
where empirically, $a_{\rm S}/a_{\rm V} \simeq 1.1$. Using eq. (2), we
see that it is a very good approximation to write $B \simeq
\frac{2}{3} (N+1)^3$, and then eq. (4) becomes
\begin{equation}
E/B = 6 - 9\left(\frac{3}{2}B\right)^{-\frac{1}{3}} \,.
\end{equation}
The prediction from the lightly bound Skyrme model is therefore that
$a_{\rm S}/a_{\rm V} = \left(\frac{3}{2}\right)^{\frac{2}{3}} \simeq 1.3$.
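Numerically (an illustrative check, ours), the exact binding energy per baryon and its liquid-drop approximation agree well even at modest $N$:

```python
def M(N):
    return (N + 2) * (N + 1) * N * 2 // 3

# predicted surface-to-volume ratio a_S/a_V = (3/2)^(2/3)
assert abs((3 / 2) ** (2 / 3) - 1.31) < 0.01

# exact E/B = 6 - 9/(N+1) against the liquid-drop form 6 - 9*(3B/2)^(-1/3)
for N in range(2, 9):
    exact = 6 - 9 / (N + 1)
    drop = 6 - 9 * (1.5 * M(N)) ** (-1 / 3)
    assert abs(exact - drop) < 0.13   # and the gap shrinks as N grows
```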
One feature of magic nuclei, according to the shell model, is
that there is a large energy gap
between the highest filled level and the lowest unfilled level. In
other words, it takes more than the usual energy to excite a single nucleon.
The truncated tetrahedral Skyrmions have an analogous feature.
For a truncated tetrahedron, the minimal coordination number of a
baryon is 5. This occurs for the baryons at the twelve vertices. For
other cluster shapes there is usually a baryon with a coordination
number smaller than this. For example, the baryons at the vertices
of pure tetrahedra have coordination number 3, and baryons at the
vertices of untruncated octahedra have coordination number 4. (However,
there are truncated octahedra where all coordination numbers are 6
or more -- see Appendix). The energy required to remove
one baryon from a truncated tetrahedron is therefore large, since
5 bonds need to be broken, and this is evidence for it being
magic. Moreover, all the elementary faces of a truncated tetrahedron
are triangles, so the optimal way to relocate the baryon is to attach
it by 3 bonds to one of these triangles. Moving
a single baryon from a vertex to one of these new locations is
therefore at the cost of 2 bonds, so the one-baryon excitation energy
(the energy of a nuclear particle-hole excitation) is 2 bond units. By
identifying 6 bond units with $a_{\rm V} = 15.6$ MeV, we see that the
bond unit is $2.6$ MeV, so the one-baryon separation energy is
predicted to be 13 MeV, and the one-baryon excitation energy to be
$5.2$ MeV.
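The arithmetic behind these estimates is simply (a sketch, ours, with $a_{\rm V} = 15.6$ MeV taken from the liquid-drop fit quoted above):

```python
a_V = 15.6                  # MeV; 6 bond units for a fully bound baryon
bond_unit = a_V / 6         # about 2.6 MeV per bond
separation = 5 * bond_unit  # removing a vertex baryon breaks 5 bonds
excitation = 2 * bond_unit  # vertex -> face relocation: lose 5 bonds, regain 3
assert abs(bond_unit - 2.6) < 1e-9
assert abs(separation - 13.0) < 1e-9
assert abs(excitation - 5.2) < 1e-9
```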
It is also interesting to consider what can happen if a few baryons
are added to a truncated tetrahedral core. A single baryon can be
attached with 3 bonds (as just
mentioned), but two neighbouring baryons can be added with 7 bonds,
and it is rather efficient to attach a triangular cluster of three
baryons to an underlying face, which adds 12 bonds. This
significantly increases $E/B$ in the case that the core has $B=16$ or $B=40$.
Attaching a triangular $B=3$ cluster in this way could
provide a model for Fluorine-19 (the only stable isotope of
Fluorine), or Scandium-43 \cite{SW}. The larger faces of the $B=40$
core accommodate attaching a
hexagonal cluster of 7 baryons, which adds 33 bonds. This could model
Scandium-47 or Titanium-47, which are moderately stable compared
to neighbouring isotopes. Finally, the $B=80$ core accommodates attaching
a triangular cluster of 10 baryons to a large face, which adds 48
bonds, and could be related to 50 being a magic number.
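The bond counts for these attachments follow one pattern: each added baryon in a flat patch laid on a face makes 3 bonds downward, plus the patch's internal bonds. A sketch (ours, for illustration):

```python
def added_bonds(baryons, internal):
    """Bonds gained by laying a flat triangular-lattice patch on a face."""
    return 3 * baryons + internal

assert added_bonds(1, 0) == 3      # single baryon
assert added_bonds(2, 1) == 7      # neighbouring pair
assert added_bonds(3, 3) == 12     # B=3 triangle
assert added_bonds(7, 12) == 33    # hexagon of 7: 6 rim + 6 spoke bonds
assert added_bonds(10, 18) == 48   # triangle of 10: 3 x 6 internal bonds

# e.g. a B=3 triangle on the B=40 core gives B=43 with 150 + 12 = 162 bonds
assert round((150 + added_bonds(3, 3)) / 43, 2) == 3.77   # up from 3.75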
The simplest, original version of the shell model of nuclei \cite{BW}
supposed that the individual protons and neutrons move in a mean field
potential that is a three-dimensional isotropic harmonic oscillator.
The harmonic oscillator energy levels have high degeneracies, and a magic
nucleus is one where all the states up to a given energy are filled.
Allowing for two spin states, the magic proton and neutron numbers
are precisely $2 \,, 8 \,, 20 \,, 40 \,, 70 \,, 112$, double the
tetrahedral numbers. The appearance of tetrahedral numbers here is
well known, but still rather surprising, because the mean field potential is
spherically symmetric. Moreover, there seems no
obvious connection with the spatial structure of a truncated
tetrahedron that we have discussed.
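The shell-filling count behind these magic numbers is short enough to display (an illustrative check, ours). Level $n$ of the 3D isotropic oscillator holds $\frac{1}{2}(n+1)(n+2)$ orbitals; doubling for spin and filling successive levels gives exactly twice the tetrahedral numbers:

```python
def T(N):
    return (N + 2) * (N + 1) * N // 6   # tetrahedral numbers

magic, filled = [], 0
for n in range(6):
    filled += (n + 1) * (n + 2)         # 2 x (n+1)(n+2)/2 states with spin
    magic.append(filled)

assert magic == [2, 8, 20, 40, 70, 112]
assert magic == [2 * T(N) for N in range(1, 7)]
```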
The refined version of the shell model introduces a more
sophisticated energy-dependence on the orbital angular momentum of
each nucleon, and includes a strong spin-orbit force. The net effect
is to raise the magic numbers $40$, $70$, $112$ to $50$, $82$,
$126$. Magic nuclei do not have to have a magic baryon number. It is
sufficient if either the proton {\it or} the neutron number is magic.
We do not yet understand this in the context of Skyrmions, but are
still encouraged to find that Skyrmion magic numbers overlap
those of nuclear physics in the cases of equal proton and neutron
numbers. Particularly encouraging is to find that in the lightly
bound Skyrme model, the Skyrmion with baryon number 40 is magic, and
can be used to model Calcium-40. In the standard Skyrme model, a
$B=40$ solution of the field equations with similar symmetry probably
exists, and the search for it is underway.
The shell model usually assumes a spherical mean field potential, but there
have been substantial investigations over many decades of deformed
shapes. These deformations are usually assumed to be
quadrupolar, producing an ellipsoidal shape with ${\rm D}_{2h}$
symmetry \cite{EG}, but octupolar deformations, including the special
octahedral deformations that preserve tetrahedral symmetry, have also
been analysed \cite{TYM,DGSM}. The magic numbers that would occur
for such tetrahedrally deformed shapes appear
to be much closer to the sequence one finds in the original shell
model. The tetrahedral deformation
seems to suppress the effect of the spin-orbit force. The magic proton
and neutron numbers for tetrahedral nuclei include 40, 56 and 70. If
one could ignore Coulomb effects, this would lead to magic baryon
numbers 80 and 140, just as for the Skyrmions that are truncated tetrahedra.
The relation between the (quantum) shell model calculations
exploring tetrahedral deformations and the (classical) spatial
structures we have found is not clear, despite the commonality of
an underlying tetrahedral symmetry.
In the discussions of tetrahedrally deformed nuclei using the shell
model, there is little consideration of the small proton and neutron magic
numbers 2, 8 and 20, because the corresponding magic nuclei are usually
supposed to be intrinsically spherical. However, a small
tetrahedral deformation would not spoil these magic numbers, and
cluster models of nuclei suggest that Oxygen-16 at least has a
tetrahedral form.
Rigid body quantization of a tetrahedral structure leads to a
non-standard rotational band, with spin/parities
$J^P = 0^+,3^-,4^+,6^+,6^-,7^-,...$ (see \cite{Lez1}, for example).
The existence of appropriately spaced $3^-$ and $4^+$
states, and the absence of a lower-lying $2^+$ state, is therefore a key
indicator of a tetrahedral structure. Magic nuclei like Oxygen-16,
Calcium-40 and Lead-208 are well known for having low-lying $3^-$
states and no such $2^+$ states, and encourage the interpretation
of these magic nuclei as tetrahedrally deformed, although
alternative interpretations in terms of octupole vibrations have
also to be considered.
Within the standard Skyrme model, there have been a number of
investigations of the quantum states of tetrahedrally and cubically
symmetric Skyrmions \cite{LM1}. There are constraints on the spin/parities
determined by the symmetry of the Skyrmion and the topological
Finkelstein--Rubinstein sign factors related to the symmetry
group elements. Provided the baryon number is a multiple of four, and
one seeks states with isospin zero, then the
Finkelstein--Rubinstein signs are all +, and the rotational states
of a tetrahedral Skyrmion have the same spin/parities $J^P$ as above,
and energies proportional to
$J(J+1)$. Oxygen-16 can be modelled this way, starting with a Skyrmion
that consists of a tetrahedral cluster of $B=4$ subunits. Recent work
has gone beyond rigid body quantization and this gives further
states \cite{HKM}, and a better fit to the experimental spectrum of
Oxygen-16. Even if Calcium-40 is intrinsically tetrahedral, its
spectrum will combine vibrational and rotational states.
Within the lightly bound Skyrme model, the Finkelstein--Rubinstein
signs can be calculated for rigidly rotating clusters of arbitrary
shape and any baryon number \cite{GHK}, and the spins of some
low-lying quantum states have been
determined. There is no doubt that, using rigid body quantization,
the same spin/parities would be obtained for Calcium-40 as for Oxygen-16 if
one modelled the $B=40$ Skyrmion as a truncated
tetrahedron. It is less clear what would result if one quantized
Skyrmions with higher baryon numbers. Coulomb effects need to be
considered, and more importantly, the related asymmetry between
neutron and proton numbers.
\section{Wigner's Model and Tetrahedral Symmetry}
We will not review Wigner's model of nuclei \cite{Wig} in detail. Wigner
made the simplifying assumption of an SU(4) symmetry in nuclear
physics, and treated the four states of the proton or neutron with spin
up or spin down as a fundamental quartet of SU(4). Larger nuclei are
then classified by irreducible representations (irreps) of SU(4). The weight
diagrams of suitable irreps resemble the truncated
tetrahedral clusters we have been discussing, and the weight labels
include spin and isospin labels.
SU(4) is a Lie group of rank 3, so its root lattice and weight lattice are
three-dimensional \cite{Hum}. The root lattice is an FCC lattice. The weight
lattice is reciprocal to this, so it is a BCC lattice, and it has four times
as many points. There are four cosets of the root lattice in the
weight lattice, and the weights of each irrep lie in just one of
these. The cosets are shifted FCC lattices, but in thinking about them
we do not shift the origin. The truncated tetrahedra of interest to us
are all in the coset that contains the weights of the
fundamental 4-dimensional irrep. Further clusters with their centres at the
origin are in other cosets. For example, the $B=13$ cluster mentioned
earlier is in the root lattice, and the $B=6$ octahedron and the cubic
$B=14$ cluster are in the coset of the 6-dimensional irrep (the vector
of SO(6)). Their additional symmetry arises from the $\mathbb{Z}_2$ reflection
symmetry of the SU(4) Dynkin diagram.
An important feature of weight diagrams of SU(4) is that typically,
the interior weights in a diagram have multiplicities greater than one.
The 4-dimensional irrep, with its tetrahedral weight diagram,
accommodates four nucleons -- one of each type --
and filling the four states gives an alpha particle. This
is analogous to the $B=4$ tetrahedron in the lightly bound
Skyrme model modelling an alpha particle. The next irrep
whose weight diagram has a truncated tetrahedral shape is
20-dimensional. The shape is the same as the truncated tetrahedron
modelling Oxygen-16 in the lightly bound Skyrme model, but in
Wigner's SU(4) model it accommodates 20 nucleons, because
the inner four weights have multiplicity two.
Despite its simplicity, Wigner's model has lasting interest,
and the more sophisticated variants that Wigner discussed in his
original paper, with partial breaking of the SU(4) symmetry, capture
phenomena of physical significance. However, Wigner does not seem to
have argued that his weight diagrams have a spatial interpretation,
despite being three-dimensional. The weight labels are internal
quantum numbers.
Cook, Dallacasa and collaborators \cite{Coo, CD}, as well as others
\cite{Eve,Lez}, have rediscovered Wigner's model, and have
reinterpreted the FCC lattice as a model of the spatial structure of nuclei.
Nuclei are clusters with at most one nucleon at each lattice site,
but the nucleons acquire labels similar to those of Wigner. The
labels combine the principal quantum number of the isotropic harmonic
oscillator with the total angular momentum, together with spin and
isospin labels $\pm \frac{1}{2}$. The most stable nuclei, in which
complete shells of the isotropic harmonic oscillator are filled, have the
shapes of truncated tetrahedra. Despite these models of nuclei appearing to
be static, individual nucleons have angular momenta that increase as
one moves away from a chosen axis -- which is physically reasonable --
and interestingly, spin up and spin down nucleons occur in complete,
alternating planar layers. Similarly, protons and neutrons (isospin up and
isospin down) occur in complete, alternating planar layers in an orthogonal
direction. The layer structure is inherited from Wigner's
classification, where it occurs in the three-dimensional weight
space, but Cook et al. argue that it occurs in physical space.
One might criticise the spatial interpretation as having little physical
justification, but it is interesting to compare it with the lightly
bound Skyrme model. The Skyrme model has a physical basis as an
effective field theory of pions, with solitons representing the nucleons. The
truncated tetrahedra arise as particularly strongly
bound arrangements of $B=1$ Skyrmions. A key difference from both
Wigner's model and Cook's reinterpretation is that the $B=1$ Skyrmions
occur in four distinct orientations, rather than as four
distinct nucleon states. Static Skyrmions are not yet nuclei. Only after
quantization of the complete Skyrmion structure, using rigid body
quantization or something more sophisticated, does one get a nucleus
with an overall spin and isospin.
From the perspective of Skyrmions, it might therefore appear that there is no
spin and isospin layering. However, that is not the case. To see this
one should consider, not a nucleus with spin and isospin zero, like
Calcium-40, but a nucleus with a small net spin, or a small net
isospin. It is a useful approximation to model such nuclei as classically
spinning or isospinning Skyrmions. Such an approximation can even be
used for $B=1$ Skyrmions. By finding the semi-classical
approximations to the Adkins, Nappi and Witten quantum states of
$B=1$ Skyrmions \cite{ANW}, Gisiger and Paranjape \cite{GP} noted
that protons always spin
clockwise relative to a particular body axis of the $B=1$ Skyrmion (the
body axis defined by the neutral pion field), and neutrons always
spin anticlockwise. The axis is free to point in any direction in
space, which therefore allows protons or neutrons to be spin up or
down relative to any spatial axis. This classical approximation has
been found useful for studying the collisions of two nucleons in the
Skyrme model \cite{GP,FM}.
Now suppose a truncated tetrahedral Skyrmion has a small net isospin.
In one set of planar layers, the $B=1$ Skyrmion constituents occur in two
orientations, but for both of them the body axes point up. In the
alternating set of layers, the constituents again have two
orientations, and for both of them the body axes point down. Because
of the (classical) isospin, the $B=1$ Skyrmions are all spinning
clockwise around these body axes, which produces an excess of protons
over neutrons (or anticlockwise, producing an excess of neutrons).
That means that in the first set of layers, the spins are
all down, and in the second set of layers, the spins are all up (or vice
versa). Similarly, if there is a net spin aligned with the preferred body
axes, then the isospins alternate between the layers. This spin and
isospin layering has much similarity to what Cook et al.
describe, but for the Skyrmions it requires some dynamics, and is
present only in the sense of a quantum superposition if there is no
net spin or isospin, as for example in Calcium-40.
\section{Remarks on Skyrmion Quantization}
For the Skyrmions that are truncated tetrahedra, with magic baryon
numbers, low-energy quantum states are probably best
found using collective quantization of the rotational and
isorotational degrees of freedom. Additional states arise from
collective vibrational modes. The evidence for this comes from
previous work on the $B=4$ Skyrmion \cite{BBT}, and also the $B=32$
Skyrmion \cite{Fei}, and on the $B=12$ and $B=16$ Skyrmions where
detailed spectra match those found experimentally in Carbon-12 and
Oxygen-16 \cite{LM,HKM}.
Theoretically, in the shell model, one may interpret a magic nucleus
as having a rather rigid quantum state, because all the available
one-particle states up to some level are occupied, and this rigidity is
enhanced by the short-range nucleon-nucleon repulsion. The Pauli principle
allows one-particle excitations only if a particle is excited to the
next shell up, and this takes considerable energy. Similarly, the
quantum ground state of the Skyrmion, with spin and isospin zero,
involves little relative motion of the $B=1$ Skyrmion constituents. The
$B=1$ Skyrmion locations and orientations are highly organised, as
in a crystal. Also, just as in the shell model, the energy
required to move a single Skyrmion from a complete truncated
tetrahedral cluster up to the next layer is rather large, as
previously mentioned.
But now consider the quantum state of a Skyrmion with baryon number
just one or two greater than a magic number. The Skyrmion will have a
truncated tetrahedral core, to which will be attached one or two
additional $B=1$ Skyrmions. There is considerable freedom as to where
these additional Skyrmions are. Typically there are
numerous locations in the FCC lattice where one additional Skyrmion can
be attached with 3 bonds, and the energy is almost the same for
all of these. This suggests that the additional Skyrmion should be
treated like a valence nucleon, free to move
in the outer shell it occupies. Rigid body quantization of a particular
configuration, as considered in \cite{GHK}, is not justified here.
Similarly, if there are two additional $B=1$ Skyrmions, they are
free to move fairly independently, although
there is some preference for them to be close together, as this can
create one additional bond.
The physics of such Skyrmions is therefore quite similar to the
usual shell model physics of one or two additional nucleons
interacting with a magic nucleus as core. The additional nucleons are
fairly free, but there is an important, attractive residual
interaction between them \cite{Cas}. The residual interaction becomes
more important if there are three valence nucleons, as it produces a
significant spatial correlation between them, and the simplest
shell model picture starts to break down. This matches what we
have seen for lightly bound Skyrmions, where we saw that it was favourable
to attach three $B=1$ Skyrmions in the form of a triangle, because this adds
12 extra bonds. The three Skyrmions are strongly correlated spatially.
There remain some challenges for the Skyrme model here. It is
important to see if the quantization of a single $B=1$ Skyrmion outside
a core leads to a strong spin-orbit coupling. The orientation of the
Skyrmion varies with its location, so it is plausible that as it moves
across the surface of the core it has to spin too. An analysis of this
coupling has been carried out in the simpler Baby Skyrme model in two
dimensions \cite{HM}, but not yet in the context of the three-dimensional
model. One should probably allow the $B=1$ Skyrmion to move freely around the
core, but a possible simplification is to constrain the $B=1$ Skyrmion
to occupy one of the FCC lattice sites (of which there are just a
finite number in the layer outside a truncated tetrahedral core). The
Hamiltonian for the $B=1$ Skyrmion would then be a hopping
Hamiltonian, as used frequently in condensed matter contexts. A
further challenge is to allow for rotations of the core. It is
presumably necessary to parametrise the orientation of the core using
continuous coordinates (Euler angles) even if the $B=1$ Skyrmion
outside is treated as hopping.
\section*{Appendix: Rectangular Bipyramids and Octahedra}
In this paper, we mainly considered the Skyrmions obtained from a
tetrahedron with $2N$ baryons along an edge, truncated by removing
four tetrahedra with $N-1$ baryons along an edge.
The eight faces of such a Skyrmion alternate between
equilateral triangles of baryons and slightly larger hexagons with one
bond along each short edge. To pass from one truncated tetrahedron to the
next, it is sufficient to attach the next larger equilateral triangle
of baryons to each of the four hexagonal faces. Each pair of these
triangles is joined by a single bond.
If just two of these equilateral triangles are attached, then the
baryon number is half-way between that of the truncated
tetrahedron one starts with, and the next one. The number of bonds is
just two less than half-way between, because the two triangles are
joined by a single bond, whereas if all four triangles are attached, they
are joined by 6 bonds. The baryon number sequence obtained this way
is therefore $B = 10 \,, 28 \,, 60 \,, 110 \,, 182 \,, 280$ and the
bond numbers (binding energies) are $E = 25 \,, 97 \,, 241 \,, 481
\,, 841 \,, 1345$. The binding energies per baryon are
$E/B = 2.5 \,, 3.46 \,, 4.02 \,, 4.37 \,, 4.62 \,, 4.80$. We do not
give the algebraic formulae, but these are easily deduced from those
for the truncated tetrahedra. They are not especially simple.
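The sequences above can be checked numerically. A minimal Python sketch (an illustrative aid, not part of the original analysis) uses the tetrahedral numbers $T_k = k(k+1)(k+2)/6$ and the construction described above, with the bond numbers taken from the text:

```python
# Tetrahedral numbers T_k = k(k+1)(k+2)/6
def T(k):
    return k * (k + 1) * (k + 2) // 6

# Truncated tetrahedron: a tetrahedron with 2N baryons along an edge,
# minus four corner tetrahedra with N-1 baryons along an edge
def B_tt(N):
    return T(2 * N) - 4 * T(N - 1)

# Bipyramid baryon numbers: half-way between consecutive truncated tetrahedra
B_bp = [(B_tt(N) + B_tt(N + 1)) // 2 for N in range(1, 7)]
# -> [10, 28, 60, 110, 182, 280]

# Bond numbers quoted in the text, and the binding energies per baryon
E_bp = [25, 97, 241, 481, 841, 1345]
E_over_B = [round(E / B, 2) for E, B in zip(E_bp, B_bp)]
# -> [2.5, 3.46, 4.02, 4.37, 4.62, 4.8]
```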
The shapes of these clusters are rather elegant. They are rectangular
bipyramids, with ${\rm D}_{2h}$ symmetry. This is best seen through
their slicings into rectangles of baryons. For example, the $B=28$
bipyramid is sliced into $2 + 6 + 12 + 6 + 2$ baryons.
The rectangles have sides that differ by one baryon, and they
all have the same orientation (see Figure 3). For comparison,
the $B=16$ and $B=40$ truncated tetrahedra slice into $2 + 6 + 6 + 2$
baryons and $2 + 6 + 12 + 12 + 6 + 2$ baryons, respectively (see
Figures 1 and 2). Here, the lower rectangles (the second half of
the sequence) are rotated by $\pi/2$ relative to the upper rectangles.
\begin{figure}[!ht]
\centering
\includegraphics[width=10.5cm]{CIMG2972.jpg}
\caption{$B=28$ rectangular bipyramid.}
\label{fig3}
\end{figure}
These bipyramids are interesting because of their high bond numbers.
The 25 bonds of the $B=10$ bipyramid are the maximum possible
for this baryon number \cite{GHK}. The $B=28$ example has one more
bond than the tetrahedrally symmetric cluster obtained by attaching
four $B=3$ triangles to the $B=16$ truncated tetrahedron, as discussed
in Section 2.
We now turn to more symmetric Skyrmions, with cubic symmetry. There
is an infinite sequence of complete, pure octahedra. They alternate
between having a baryon at the centre and not. The sequences of baryon
numbers, bond numbers (binding energies), and the energies per baryon are
$B = 1 \,, 6 \,, 19 \,, 44 \,, 85 \,, 146 \,, 231$,
$E = 0 \,, 12 \,, 60 \,, 168 \,, 360 \,, 660 \,, 1092$, and
$E/B = 0 \,, 2 \,, 3.16 \,, 3.82 \,, 4.24 \,, 4.52 \,, 4.73$.
The baryon numbers are found by slicing the octahedra into squares.
For example, the $B=85$ octahedron has square slices $1 + 4 +
9 + 16 + 25 + 16 + 9 + 4 + 1$. The bond numbers are high. For example,
the $B=85$ octahedron has 24 bonds more than the $B=80$ truncated
tetrahedron. Generally, the numbers above slightly exceed those in the
sequences for the truncated tetrahedra (starting with $B=0$,
$E=0$). At each step, the difference in baryon number increases by 1,
and the difference in bond number increases by 6.
The algebraic formulae for the baryon number and bond number of an
octahedron with $N$ baryons along an edge are
\begin{equation}
B = \frac{1}{3}(2N^2 + 1)N \,, \qquad
E = 2(2N - 1)N(N - 1) \,.
\end{equation}
Note that $B = T_{2N-1} - 4T_{N-1}$, because an
octahedron can be obtained by suitably truncating a complete
tetrahedron with $2N-1$ baryons along an edge. For example, the $B=19$
octahedron is a truncation of the $B=35$ tetrahedron. (A minimal
truncation of the $B=35$ tetrahedron gives a $B=31$ truncated
tetrahedron with $E/B = 3.48$, a high value.)
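These formulae, and the identity $B = T_{2N-1} - 4T_{N-1}$, are easily verified. A short Python check (illustrative only):

```python
# Tetrahedral numbers and the octahedron formulae of the text
def T(k):
    return k * (k + 1) * (k + 2) // 6

def B_oct(N):                        # baryon number, N baryons along an edge
    return (2 * N * N + 1) * N // 3

def E_oct(N):                        # bond number (binding energy)
    return 2 * (2 * N - 1) * N * (N - 1)

B_seq = [B_oct(N) for N in range(1, 8)]   # [1, 6, 19, 44, 85, 146, 231]
E_seq = [E_oct(N) for N in range(1, 8)]   # [0, 12, 60, 168, 360, 660, 1092]

# An octahedron is a truncation of a tetrahedron with 2N-1 baryons per edge
assert all(B_oct(N) == T(2 * N - 1) - 4 * T(N - 1) for N in range(1, 8))
```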
For the larger octahedra, an even higher binding energy per baryon is
achieved by truncating the six corners, removing one baryon from each.
The truncation removes six baryons and 24 bonds, leaving six square
faces and eight hexagonal faces. The
sequences of baryon numbers and bond numbers (binding energies) for
the truncated octahedra are
$B = 13 \,, 38 \,, 79 \,, 140 \,, 225$ and
$E = 36 \,, 144 \,, 336 \,, 636 \,, 1068$,
and the binding energies per baryon are
$E/B = 2.77 \,, 3.79 \,, 4.25 \,, 4.54 \,, 4.75$.
The first of these corresponds to a truncated $B=19$ octahedron, with a
central baryon and its twelve nearest neighbours. Note that the
truncated octahedron with $B=140$ has 6 bonds more than the
truncated tetrahedron with the same baryon number.
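Since the corner truncation removes six baryons and 24 bonds, the truncated-octahedron sequences follow immediately from those of the complete octahedra. A quick numerical check (Python, illustrative only):

```python
# Complete octahedron formulae (N baryons along an edge)
def B_oct(N):
    return (2 * N * N + 1) * N // 3

def E_oct(N):
    return 2 * (2 * N - 1) * N * (N - 1)

# Corner truncation removes six baryons and 24 bonds (starting from N = 3)
B_to = [B_oct(N) - 6 for N in range(3, 8)]    # [13, 38, 79, 140, 225]
E_to = [E_oct(N) - 24 for N in range(3, 8)]   # [36, 144, 336, 636, 1068]
E_over_B = [round(E / B, 2) for E, B in zip(E_to, B_to)]
# -> [2.77, 3.79, 4.25, 4.54, 4.75]
```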
The truncated octahedra, starting with $B=38$ (see Figure 4),
have the following interesting property. The vertices have coordination
number 6, so removing a vertex baryon to infinity requires breaking
6 bonds. This is the maximum possible for a convex
vertex, which is mathematically related to the fact that the
vectors from a vertex baryon to its six nearest neighbours possess the
same geometry as the six positive roots of SU(4). Note also that the
surface of a truncated octahedron is topologically a sphere, but
its curvature is concentrated at the vertices. The curvature is
particularly small for these special vertices, and to compensate,
there are 24 of them.
\begin{figure}[!ht]
\centering
\includegraphics[width=10.5cm]{CIMG2967.jpg}
\caption{$B=38$ truncated octahedron.}
\label{fig4}
\end{figure}
It is tempting to think of the complete octahedra and truncated
octahedra as magic, but the calculations in \cite{GHK} show that, at
least for the examples with baryon numbers 6, 13 and
19, the binding energies are not exceptionally large when the
interactions between all of the $B=1$ Skyrmions are allowed for. This
appears to be because the $B=1$ Skyrmions are not distributed between
the four orientations as equally as possible.
A more radical truncation of an octahedron, removing five baryons from
each corner, is not optimal until the baryon numbers become larger than
those relevant to nuclei.
\section*{Acknowledgements}
I am grateful to Martin Speight for discussions, and for clarifying
that one of the truncated tetrahedra has 150 bonds. I also thank Derek
Harland for helpful comments. The figures are photos of lightly bound
Skyrmions, modelled using the magnetic building sets SUPERMAG-Maxi and
GEOMAG. This research is partly supported by STFC grant ST/L000385/1.
\section*{Abstract}
Hematopoietic stem cells in mammals are known to reside mostly in the bone marrow, but also transitively passage in small numbers in the blood.
Experimental findings have suggested that they exist in a dynamic equilibrium, continuously migrating between these two compartments.
Here we construct an individual-based mathematical model of this process, which is parametrised using existing empirical findings from mice.
This approach allows us to quantify the amount of migration between the bone marrow niches and the peripheral blood.
We use this model to investigate clonal hematopoiesis, which is a significant risk factor for hematologic cancers.
We also analyse the engraftment of donor stem cells into non-conditioned and conditioned hosts, quantifying the impact of different treatment scenarios.
The simplicity of the model permits a thorough mathematical analysis, providing deeper insights into the dynamics of both the model and of the real-world system.
We predict the time taken for mutant clones to expand within a host, as well as chimerism levels that can be expected following transplantation therapy, and the probability that a preconditioned host is reconstituted by donor cells.
\section*{Author Summary}
Clonal hematopoiesis -- where mature myeloid cells in the blood deriving from a single stem cell are over-represented -- is a major risk factor for overt hematologic malignancies.
To quantify how likely this phenomenon is, we combine existing observations with a novel stochastic model and extensive mathematical analysis.
This approach allows us to observe the hidden dynamics of the hematopoietic system.
We conclude that for a clone to be detectable within the lifetime of a mouse, it requires a selective advantage; that is, the clonal expansion cannot be explained by neutral drift alone.
Furthermore, we use our model to describe the dynamics of hematopoiesis after stem cell transplantation.
In agreement with earlier findings, we observe that niche-space saturation decreases engraftment efficiency.
We further discuss the implications of our findings for human hematopoiesis where the quantity and role of stem cells is frequently debated.
\section*{Introduction}
The hematopoietic system has evolved to satisfy the immune, respiratory, and coagulation demands of the host.
A complex division tree provides both amplification of cell numbers and a variety of differentiated cells with distinct roles in the body \cite{kondo:ARI:2003,paul:Cell:2015,kaushansky:book:2016}.
In a typical adult human $\sim 10^{11}$ terminally differentiated blood cells are produced each day \cite{vaziri:PNAS:1994,kaushansky:book:2016,nombela:BloodAdv:2017}.
It has been argued that the division tree prevents the accumulation of mutations, which are inevitable given the huge number of cell divisions \cite{werner:Interface:2013,brenes:Interface:2013,derenyi:NatComms:2017}.
At the base of the tree are hematopoietic stem cells (HSCs).
These have the ability to differentiate into all hematopoietic cell lineages, as well as the capacity to self-renew \cite{passegue:JEM:2005,kondo:ARI:2003}, although the exact role of HSCs in blood production is still debated \cite{sawai:Immunity:2016,schoedel:Blood:2016}.
With an aging population, hematopoietic malignancies are increasingly prevalent \cite{sant:Blood:2010}.
Clonal hematopoiesis -- where a lineage derived from a single HSC is overrepresented -- has been identified as a significant risk factor for hematologic cancers \cite{genovese:NEJM:2014,jaiswal:NEJM:2014,xie:NatMed:2014}.
To assess the risks posed to the host we need an understanding of how fast clones are growing, when they initiate, and if they would subvert physiologic homeostatic control.
The number of HSCs within a mouse is estimated at ${\sim \! 0.01\%}$ of bone marrow cellularity \cite{bhattacharya:JEM:2006,bryder:AJPath:2006}, which amounts to ${\sim \! 10,000}$ HSCs per host \cite{abkowitz:Blood:2002,bhattacharya:JEM:2006,bhattacharya:JEM:2009,kaushansky:book:2016}.
In humans this number is subject to debate; limited data have led to the hypothesis that HSC numbers are conserved across all mammals \cite{abkowitz:Blood:2002}, but the fraction of `active' HSCs depends on the mass of the organism \cite{dingli:PLoSONE:2006} (see also Refs~\cite{dingli:bookchapter:2009,nombela:BloodAdv:2017} for a discussion).
Within an organism, the HSCs predominantly reside in so-called bone marrow niches: specialised micro-environments that provide optimal conditions for maintenance and regulation of the HSCs \cite{morrison:Nature:2014,crane:NatRevImmun:2017}.
There are likely a finite number of niches within the bone marrow, and it is believed that they are not all occupied at the same time \cite{bhattacharya:JEM:2006}.
The number of niches is likely roughly equal to the number of HSCs, and through transplantation experiments in mice it has been shown that ${\sim \! 1\%}$ of the niches are unoccupied at any time \cite{bhattacharya:JEM:2006,czechowicz:Science:2007}.
A similar number of HSCs are found in the peripheral blood of the host \cite{bhattacharya:JEM:2006}.
These free HSCs are phenotypically and functionally comparable to (although distinguishable from) bone marrow HSCs \cite{wright:Science:2001,bhattacharya:JEM:2009}.
The HSCs have a residence time of minutes in the peripheral blood, and parabiosis experiments (anatomical joining of two individuals) have shown that circulating HSCs can engraft to the bone marrow \cite{wright:Science:2001}.
It has also been shown that HSCs can detach from the niches without cell division taking place \cite{bhattacharya:JEM:2009}.
These findings paint a picture of HSCs migrating between the peripheral blood and the bone marrow niches, maintaining a dynamic equilibrium between the two compartments.
In this manuscript we construct a model from the above described processes, and we use this to answer questions about clonally dominant hematopoiesis.
We first consider this in mice, where we use previously reported values to parametrise our model.
The model is general enough that it also captures scenarios of transplantation into both preconditioned (host HSCs removed) and non-preconditioned hosts: the free niches and the migration between compartments also allows for intravenously injected donor HSCs to attach to the bone marrow niches and to contribute to hematopoiesis in the host.
In the discussion we comment on the implications of these results for human hematopoiesis.
\section*{Materials and Methods}
Our model, shown schematically in \figref{fig:modelSchematic}, contains two compartments for the HSCs.
In our model, the bone marrow (BM) compartment consists of a fixed number, $N$, of niches.
This means that a maximum of $N$ HSCs can be found there at any time, but generally the number of occupied niches is less than $N$.
The peripheral blood (PB) compartment, however, has no size restriction.
The number of cells in the PB and BM at a given time are given by $s$ and $n$, respectively.
\begin{figure}[h]
\centering
\iftoggle{showFigs}{\includegraphics[width=0.6\linewidth]{Fig1.pdf}}
\caption{
Compartmental model for a single population of HSCs.
The bone marrow (BM) compartment has a fixed total of $N$ niches.
At a given time, $n$ of the niches are occupied, and $N-n$ remain unoccupied.
The peripheral blood (PB) compartment has no size restriction, and at a given time contains $s$ HSCs.
A HSC in the BM can detach at rate $d$ and enter the PB, while a cell in the PB can attach to an unoccupied niche with rate $a(N-n)/N$.
Here $(N-n)/N$ is the fraction of unoccupied niches.
HSCs may die in the PB or BM with rates $\delta$ and $\delta'$.
Reproduction (symmetric division) of HSCs occurs at rate $\beta$.
The new daughter cell attaches to an empty niche with probability $\rho$, otherwise it is ejected into the PB.
Dynamics are concretely described by the reactions in \eqref{eq:reactions}.
}
\label{fig:modelSchematic}
\end{figure}
The dynamics are indicated by arrows in \figref{fig:modelSchematic}.
Our model is stochastic and individual based, such that events are chosen randomly and waiting times between events are exponentially distributed.
Simulations are performed using the Gillespie stochastic simulation algorithm (SSA) \cite{gillespie:JPC:1977}.
HSCs in our model are only capable of dividing when attached to a niche; outside the niche, even pre-malignant cells are incapable of proliferating due to the unfavourable conditions.
Upon division, the new daughter HSC enters another niche with probability $\rho$, or is ejected into the PB.
Here $\rho$ depends on the number of free niches, i.e. $\rho = \rho(n)$, and should satisfy $\rho(N)=0$, such that a daughter cell cannot attach if all niches are occupied.
Likewise, the migration of a HSC from the PB to the BM should depend on the number of empty niches.
We choose the attachment rate as $a(N-n)/N$ per cell.
In general, cells can die in both compartments.
However, we expect the death rate in the PB, $\delta$, to be higher than the death rate in the BM, $\delta'$, as the PB is a less favourable environment.
For our initial analysis, we assume there is no death in the BM compartment ($\delta'=0$), and new cells are always ejected into the PB ($\rho=0$).
These assumptions are relaxed in our detailed analysis, which can be found in the Supporting Information (SI).
A two-compartment model has been considered previously by Roeder and colleagues \cite{roeder:ExpHemat:2002,roeder:NM:2006}.
The rate of migration between the compartments is controlled by the number of cells in each compartment, as well as a cell-intrinsic continuous parameter which increases or decreases depending on which compartment the cell is in.
This parameter also controls the differentiation of the HSCs.
Further models of HSC dynamics, for example \cite{abkowitz:NatMed:1996,catlin:Blood:2005,dingli:CCY:2007,dingli:PLoSCB:2007,traulsen:Interface:2013}, have not considered the migration of cells between compartments.
For example, Dingli \emph{et al.} consider a constant-size population of HSCs in a homogeneous microenvironment \cite{dingli:CCY:2007,dingli:PLoSCB:2007}.
Competition between wildtype and malignant cells then follows a Moran process.
In our model the BM compartment size is fixed, but cell numbers can fluctuate.
To initially parametrise our model we consider only one species of HSCs: those which belong to the host.
In a steady-state organism, the number of HSCs in the PB and BM are close to their equilibrium values, which are labelled as $s^*$ and $n^*$, respectively.
These values have been reported previously in the literature for mice, and are provided in Table~\ref{tab:params}.
Other previously reported values include the total number of HSC niches $N$, HSC division rate $\beta$, and the time that cells spend in the PB, which we denote as $\ell$.
Using these values we can quantify the remaining model parameters $\delta$, $d$, and $a$.
These results are discussed in the next section.
\begin{table}
\centering
\caption{{\bf Parameter values from empirical murine observations. These are equilibrium values in healthy mice.}}
\label{tab:params}%
\begin{tabular}{|l|c|c|l|}
\hline
{\bf Description} & {\bf Parameter} & {\bf Value} & {\bf Reference} \\ \hline
Total niches & $N$ & 10,000 niches & \cite{bhattacharya:JEM:2006,bhattacharya:JEM:2009} \\ \hline
Occupied niches & $n^*$ & 9,900 niches & \cite{bhattacharya:JEM:2006,czechowicz:Science:2007} \\ \hline
PB HSCs & $s^*$ & 1--100 cells & \cite{bhattacharya:JEM:2009,wright:Science:2001} \\ \hline
Average HSC division rate & $\beta$ & 1/39 per day & \cite{takizawa:JEM:2011} \\ \hline
Time in PB & $\ell$ & 1--5 minutes & \cite{wright:Science:2001} \\ \hline
\end{tabular}
\end{table}
When considering a second population of cells, such as a mutant clone or donor cells following transplantation, we may want to impose a selective effect relative to the host HSCs.
We therefore allow the mutant/donor cells to proliferate with rate $\beta_2 = (1 + \varepsilon) \beta$, where $\varepsilon$ represents the strength of selection.
For $\varepsilon = 0$, the mutant/donor cells proliferate at the same rate as the host HSCs.
In the SI we consider the general scenario of selection acting on all parameters.
Our analysis delivers an interesting result: the impact of selection on clonal expansion is independent of which parameter it acts on (provided $\delta'=\rho=0$).
For clonality and chimerism we use the same definition: the fraction of cells within the BM compartment that are derived from the initial mutant or the donor population of cells.
Typically, experimental measurements of clonality and chimerism use mature cells rather than HSCs.
However, tracking mature cells is beyond the scope of our model, so we use the HSC fraction as a proxy for this measurement.
We are therefore implicitly assuming that HSC chimerism correlates with mature cell chimerism.
The literature on the role of HSCs in native hematopoiesis is split \cite{sawai:Immunity:2016,sun:Nature:2014} (also reviewed in \cite{busch:CurrOpHem:2016}).
For the division rate of HSCs in mice we use the value $\beta = 1/39$ per day.
This is the average division rate of all HSCs within a host deduced from CFSE-staining experiments \cite{takizawa:JEM:2011}, but again there is some disagreement in reported values for this quantity \cite{wilson:Cell:2008,takizawa:JEM:2011,bernitz:Cell:2016}.
These differences arise from the interpretation of HSC cell-cycle dynamics.
More concretely, our model consists of four sub-populations: $n_1$ is the number of host or wildtype cells located in the BM, and $s_1$ is the number of cells of this type in the PB.
Likewise, $n_2$ and $s_2$ are the number of mutant/donor cells in the BM and PB, respectively.
The cell numbers are affected by the processes indicated in \figref{fig:modelSchematic} (with $\delta'=\rho=0$).
The effect of these events and the rate at which they happen are given by the following reactions:
\begin{linenomath}
\begin{subequations}
\label{eq:reactions}%
\begin{align}
\mbox{Reproduction:} \quad (n_i, s_i) &\xrightarrow{\makebox[7em]{$\beta_i n_i$}} (n_i, s_i+1), \\
\mbox{Death:} \quad (n_i, s_i) &\xrightarrow{\makebox[7em]{$\delta_i s_i$}} (n_i, s_i-1), \\
\mbox{Detachment:} \quad (n_i, s_i) &\xrightarrow{\makebox[7em]{$d_i n_i$}} (n_i-1, s_i+1), \\
\mbox{Attachment:} \quad (n_i, s_i) &\xrightarrow{\makebox[7em]{$a_i s_i (N-n)/N$}} (n_i+1, s_i-1),
\end{align}
\end{subequations}
\end{linenomath}
where $n = \sum_i n_i$, and $(N-n)/N$ is the fraction of unoccupied niches.
The corresponding deterministic dynamics are described by the ODEs:
\begin{linenomath}
\begin{subequations}
\label{eq:ODEsTwo}%
\begin{align}
\frac{{\rm d} n_1}{{\rm d} t} &= -d_1 n_1 + a_1 s_1 \frac{N-n}{N}, \\
\frac{{\rm d} n_2}{{\rm d} t} &= -d_2 n_2 + a_2 s_2 \frac{N-n}{N}, \\
\frac{{\rm d} s_1}{{\rm d} t} &= (d_1+\beta_1)n_1 - \left(\delta_1 + a_1 \frac{N-n}{N}\right)s_1, \\
\frac{{\rm d} s_2}{{\rm d} t} &= (d_2+\beta_2)n_2 - \left(\delta_2 + a_2 \frac{N-n}{N}\right)s_2.
\end{align}
\end{subequations}
\end{linenomath}
Recall we have $\beta_1 = \beta$ and $\beta_2 = (1+\varepsilon)\beta$ in the main manuscript, along with $\delta_1 = \delta_2 = \delta$, $a_1 = a_2 = a$, and $d_1 = d_2 = d$.
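The reactions in \eqref{eq:reactions} can be simulated directly with the Gillespie SSA. A minimal single-population sketch in Python is given below (illustrative only; the rates are those deduced for $s^* = 10$ cells and $\ell = 1$ minute, with $\delta' = \rho = 0$ as in the main text):

```python
import random

random.seed(1)

# Table 1 values; rates deduced for s* = 10 cells, l = 1 minute (Table 2).
# All rates are per day; l is converted from minutes to days.
N, beta = 10_000, 1 / 39
n_star, s_star, l = 9_900, 10, 1 / 1440
delta = beta * n_star / s_star            # death rate in the PB
d = s_star / (l * n_star) - beta          # detachment rate, BM -> PB
a = (1 / l - delta) * N / (N - n_star)    # attachment rate, PB -> BM

def gillespie(n, s, t_max):
    """Simulate the four reactions for a single population of HSCs."""
    t = 0.0
    while True:
        rates = [beta * n,                 # reproduction (daughter to PB)
                 delta * s,                # death in the PB
                 d * n,                    # detachment
                 a * s * (N - n) / N]      # attachment to an empty niche
        t += random.expovariate(sum(rates))
        if t > t_max:
            return n, s
        r = random.uniform(0, sum(rates))
        if r < rates[0]:
            s += 1
        elif r < rates[0] + rates[1]:
            s -= 1
        elif r < rates[0] + rates[1] + rates[2]:
            n, s = n - 1, s + 1
        else:
            n, s = n + 1, s - 1

n, s = gillespie(n_star, s_star, t_max=2.0)   # two days of dynamics
# n stays close to n* = 9,900 and s fluctuates around s* = 10
```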
\subsection*{Accessibility}
A Wolfram Mathematica notebook containing the analytical details can be found at \url{https://github.com/ashcroftp/clonal-hematopoiesis-2017}.
This location also contains the Gillespie stochastic simulation code used to generate all data in this manuscript, along with the data files.
\section*{Results}
\subsection*{Steady-state HSC dynamics in mice}
By considering just the cells of the host organism, we can compute the steady state of our system from \eqref{eq:ODEsTwo}, and hence express the model parameters $\delta$, $d$, and $a$ in terms of the known quantities displayed in Table~\ref{tab:params}.
These expressions are shown in Table~\ref{tab:params2}, where we also enumerate the possible values of these deduced model parameters.
Even for the narrow range of values reported in the literature (Table~\ref{tab:params}), we find disparate dynamics in our model.
At one extreme, the average time a cell spends in the BM compartment ($1/d$) can be less than two hours (for $s^*=100$ cells and $\ell=1$ minute).
Thus under these parameters the HSCs migrate back-and-forth very frequently between the niches and blood, and the flux of cells between these compartments over a day ($s^*/\ell$) is significantly larger than the population size.
In fact, under these conditions $144,000$ HSCs per day leave the marrow and enter the blood.
With slower turnover in the PB compartment ($\ell = 5$ minutes, but still $s^* = 100$), the average BM residency time of a single HSC is eight hours, and $28,800$ HSCs leave the bone marrow per day.
At the other extreme, if the PB compartment is as small as reported in Ref.~\cite{bhattacharya:JEM:2009} ($s^*=1$ cell), then the residency time of each HSC in the bone marrow niche is between 8 and 290 days (for $\ell=1$ and $5$ minutes, respectively).
Under these conditions the number of cells entering the PB compartment per day is $1,440$ and $288$, respectively.
For an intermediate PB size of $s^* = 10$, the BM residency time is between $17$ and $90$ hours (for $\ell=1$ and $5$ minutes, respectively), and the flux of cells leaving the BM is a factor ten greater than for $s^* = 1$.
\begin{table}
\begin{adjustwidth}{-0.5in}{0in}
\centering
\caption{{\bf Deduced model parameter values. The parameters $\delta$, $d$, and $a$ are given here as values per day. The remaining parameters ($N$, $\beta$, $n^*$) are given in Table~\ref{tab:params}}.}
\label{tab:params2}%
\begin{tabular}{|l|c|l|r c|c|c|c|c|c|}
\hline
& & & & \multicolumn{6}{c}{{\bf Value (per day)}} \\
{\bf Description} & {\bf Parameter} & {\bf Expression} & $s^*$: &\multicolumn{2}{c}{1 cell} & \multicolumn{2}{c}{10 cells} & \multicolumn{2}{c}{100 cells} \\
& & & $\ell$: & 1 min & 5 mins & 1 min & 5 mins & 1 min & 5 mins\\
\hline
Death rate & $\delta$ & $\frac{\beta n^*}{s^*}$ & & 250 & 250 & 25 & 25 & 2.5 & 2.5 \\ \hline
\noalign{\smallskip}
Detachment rate & $d$ & $\frac{s^*}{\ell n^*} - \beta$ & & 0.12 & 0.0034 & 1.4 & 0.27 & 15 & 2.9 \\ \hline
\noalign{\smallskip}
Attachment rate & $a$ & $\left(\frac{1}{\ell} - \frac{\beta n^*}{s^*}\right)\frac{N}{N-n^*}$ & & 120,000 & 3,400 & 140,000 & 26,000 & 140,000 & 29,000 \\ \hline
\end{tabular}
\end{adjustwidth}
\end{table}
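The entries of Table~\ref{tab:params2} follow directly from the closed-form expressions. A minimal Python sketch (illustrative only; all rates in units of per day):

```python
# Deduced rates from the steady state of the model (Table 2 expressions)
N, n_star, beta = 10_000, 9_900, 1 / 39   # Table 1 values

def deduced_rates(s_star, l_minutes):
    l = l_minutes / 1440                    # minutes -> days
    delta = beta * n_star / s_star          # death rate in the PB
    d = s_star / (l * n_star) - beta        # detachment rate
    a = (1 / l - delta) * N / (N - n_star)  # attachment rate
    return delta, d, a

delta, d, a = deduced_rates(s_star=1, l_minutes=1)
# delta ~ 250, d ~ 0.12, a ~ 120,000 per day, as in Table 2;
# the corresponding flux s*/l is ~1,440 HSCs leaving the BM per day
```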
\subsection*{Clonal dominance in mice}
Clonal dominance occurs when a single HSC generates a mature lineage which outweighs the lineages of other HSCs, or where one clone of HSCs outnumbers the others.
The definition of when a clone is dominant is not clear-cut.
Previous studies of human malignancies have used a variant allele frequency of $2\%$, corresponding to a clone that represents $4\%$ of the population \cite{steensma:Blood:2015,sperling:NatRevCancer:2017}.
For completeness we investigate clonality ranges from $0.1\%$ to $100\%$.
In the context of disease, this clone usually carries specific mutations which may confer a selective advantage over the wildtype cells in a defined cellular compartment.
The \emph{de novo} emergence of such a mutant occurs following a reproduction event.
Therefore, in our model with $\rho=0$, after the mutant cell is generated it is located in the PB compartment, and for the clone to expand it must first migrate back to the BM.
This initial phase of the dynamics is considered in general in the next section of transplant dynamics, where a positive number $\mathcal{S}$ of mutant/donor cells are placed in the PB.
We find (as shown in the SI) that the expected number of these cells that attach to the BM after this initial dynamical phase is
\begin{equation}
n_2
= \frac{a(N-n^*)/N}{\delta + a(N-n^*)/N}\mathcal{S}
= \left(1-\frac{\beta \ell n^*}{s^*}\right)\mathcal{S}.
\label{eq:chimerismLow}
\end{equation}
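The two forms of \eqref{eq:chimerismLow} are equivalent because at the steady state the per-cell attachment rate satisfies $a(N-n^*)/N = 1/\ell - \delta$. A quick numerical confirmation (Python, illustrative only, using $s^* = 100$ cells and $\ell = 3$ minutes):

```python
N, n_star, beta = 10_000, 9_900, 1 / 39
s_star, l = 100, 3 / 1440                 # PB size and residence time (days)
delta = beta * n_star / s_star
a = (1 / l - delta) * N / (N - n_star)
S = 1.0                                   # one de novo mutant placed in the PB

attach = a * (N - n_star) / N             # per-cell attachment rate
n2_rates = attach / (delta + attach) * S
n2_closed = (1 - beta * l * n_star / s_star) * S
# both evaluate to ~0.995: the mutant almost always engrafts
```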
We then apply a fast-variable elimination technique to calculate how long it takes for this clone to expand within the host \cite{constable:PRE:2014,constable:PRL:2015}.
This procedure reduces the dimensionality of our system, and makes it analytically tractable.
A full description of the analysis can be found in the SI, but we outline the main steps and results of this procedure below.
We first move from the master equation -- the exact probabilistic description of the stochastic dynamics -- to a set of four stochastic differential equations (SDEs) for each of the variables via an expansion in powers of the large parameter $N$ \cite{gardiner:book:2009}.
We then use the projection method of Constable \emph{et al.} \cite{constable:PRE:2014,constable:PRL:2015} to reduce this system to a single SDE describing the relative size of the clone.
This projection relies on the weak-selection assumption, i.e. $0 \le \varepsilon \ll 1$.
The standard results of Brownian motion are then applied to obtain the statistics of the clone's expansion.
In particular, the probability that the mutant/donor HSCs reach a fraction $0 < \sigma \le 1$ of the occupied BM niches is given by
\begin{equation}
\phi(z_0, \sigma) = \frac{1 - e^{-\Lambda z_0}}{1 - e^{-\Lambda \sigma \xi}},
\label{eq:selectiveDominanceProb}
\end{equation}
where $z_0$ is the initial clone size, which can be found explicitly from \eqref{eq:chimerismLow} as $z_0 = n_2/N$.
We also have $\xi = n^*/N$, and $\Lambda$ is a constant describing the strength of deterministic drift relative to stochastic diffusion.
Concretely, we have
\begin{equation}
\Lambda
= \varepsilon N \frac{d\beta + d\delta + \beta\delta}{(d+\beta)\delta}
= \varepsilon N \left(1 + \frac{s^*}{n^*} - \beta \ell \right).
\end{equation}
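The equality of the two expressions for $\Lambda$ can be confirmed numerically once $d$ and $\delta$ are replaced by their deduced values. A short Python check (illustrative only):

```python
N, n_star, beta = 10_000, 9_900, 1 / 39
s_star, l, eps = 10, 1 / 1440, 0.01       # example parameter values
delta = beta * n_star / s_star
d = s_star / (l * n_star) - beta

lam_full = eps * N * (d * beta + d * delta + beta * delta) / ((d + beta) * delta)
lam_closed = eps * N * (1 + s_star / n_star - beta * l)
# the two expressions agree (here ~100.1)
```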
The mean time for the clone to expand to size $\sigma$ (i.e. the mean conditional time) is written as $T_\xi(z_0, \sigma) = \theta(z_0, \sigma) / \phi(z_0, \sigma)$, where $\theta(z_0, \sigma)$ is given by the solution of
\begin{equation}
\frac{\partial^2 \theta(z_0, \sigma)}{\partial z_0^2} + \Lambda \frac{\partial \theta(z_0, \sigma)}{\partial z_0} = -\frac{N}{\mathcal{B}} \frac{\phi(z_0, \sigma)}{z_0(\xi-z_0)}, \quad \theta(0) = \theta(\sigma \xi) = 0.
\label{eq:selectiveDominanceTime}
\end{equation}
Here $\mathcal{B}$ is another constant describing the magnitude of the diffusion, and is given by
\begin{equation}
\mathcal{B}
= \frac{d(d+\beta)\beta\delta^2}{\xi(d\beta + d\delta + \beta\delta)^2}
= \frac{\beta N}{s^*} \frac{\frac{s^*}{n^*} - \beta \ell}{\left(1+\frac{s^*}{n^*}-\beta\ell\right)^2}.
\end{equation}
Although a general closed-form solution to \eqref{eq:selectiveDominanceTime} is possible, it is too long to display here.
Instead we use an algebraic software package to solve the second-order differential equation.
A similar expression to \eqref{eq:selectiveDominanceTime} can be obtained for the second moment of the fixation time, as shown in \cite{goel:book:1974} and repeated in the SI.
The first scenario we consider is the expansion of a neutral clone ($\varepsilon = 0$); i.e. how likely is it that a single cell expands into a detectable clone in the absence of selection?
It is known that the time to fixation of a neutral clone in a fixed-size population grows linearly in the system size \cite{kimura:Genetics:1969}.
Notably, this fixation is observed frequently in intestinal crypts because $N = \mathcal{O}(10)$ \cite{snippert:cell:2010}.
In the hematopoietic system, however, it likely takes considerably longer than this due to the relatively large number of stem cells.
Solving \eqref{eq:selectiveDominanceTime} with $\varepsilon = 0$ gives the mean conditional expansion time as
\begin{equation}
T_\xi(z_0, \sigma) = \frac{N}{\mathcal{B}} \left[ \frac{\xi-z_0}{z_0}\log\left(\frac{\xi}{\xi-z_0}\right) + \frac{1-\sigma}{\sigma} \log(1-\sigma)\right].
\end{equation}
From this solution we find that it takes, on average, $5$--$45$ years for a neutral clone to reach $1\%$ clonality ($\sim 100$ HSCs).
Expanding to larger sizes takes considerably longer, as highlighted in \figref{fig:dominanceMouse}.
Therefore, clonal hematopoiesis in mice is unlikely to result from neutral clonal expansion; for a clone to expand within the lifetime of a mouse it must have a selective advantage.
Neutral results for human systems are considered in the discussion.
\begin{figure}[h]
\centering
\iftoggle{showFigs}{\includegraphics[width=\textwidth]{Fig2.pdf}}
\caption{
Time taken for a clone initiated from a single HSC to expand under different levels of selection.
(a) Time taken for a mutant clone to expand as a function of the level of clonality reached, with colour indicating the selective effect of the mutant.
(b) Time taken for a mutant clone to expand as a function of the selective effect, with colour indicating different levels of clonality.
Symbols are results from $10^3$ simulations of the full model (with associated standard deviations), and solid lines are predictions from \eqref{eq:selectiveDominanceTime}.
Shaded regions are the predicted standard deviations, using the formula presented in the SI.
Here $\ell = 3$ minutes, $s^* = 100$, and the remaining parameters are as in Table~\ref{tab:params}.
}
\label{fig:dominanceMouse}
\end{figure}
When the mutant clone has an advantage, there is always some selective force promoting this cell type.
Therefore the probability of such a clone expanding is higher than the neutral case, as seen from \eqref{eq:selectiveDominanceProb}.
In \figref{fig:dominanceMouse} we illustrate the time taken for a single mutant HSC to reach specified levels of clonal dominance for different selective advantages.
Advantageous clones ($\beta_2/\beta > 1$) initially grow exponentially in time [\figref{fig:dominanceMouse}(a)], and are much faster than neutral expansion ($\beta_2/\beta=1$).
These clones can reach levels of up to $90\%$ relatively quickly, however replacing the final few host cells takes much longer.
The advantage that a mutant clone must have if it is to represent a certain fraction of the population in a given period of time can be found from \figref{fig:dominanceMouse}(b).
For a single mutant to completely take over in two years, it requires a fold reproductive advantage of $\beta_2/\beta \approx 2$ [dashed lines in \figref{fig:dominanceMouse}(b)].
This means that the cells in this clone are dividing at least twice as fast as the wildtype host cells.
To achieve $1\%$ clonality in this timeframe, the advantage only has to be $\beta_2/\beta \approx 1.2$.
For the clone to expand in shorter time intervals, a substantially larger selective advantage is required.
For example, $100\%$ clonality in six months from emergence of the mutant requires $\beta_2/\beta \approx 5.5$, i.e. the dominant clone needs to divide more than five times faster than the wildtype counterparts.
As shown in the SI, \eqref{eq:selectiveDominanceProb} and \eqref{eq:selectiveDominanceTime} are equivalent to the results obtained from a two-species Moran process.
This suggests the two-compartment structure is not necessary to capture the behaviour of clonal dominance.
However, the consideration of multiple compartments is required to understand transplantation dynamics, as covered in the next section.
\subsection*{Transplant success in mice}
We now turn our attention to the scenario of HSC transplantation.
As previously mentioned this situation is analogous to the disease spread case, with the exception that the initial `dose' of HSCs can be larger than one.
We first consider the case of a non-preconditioned host.
We then move onto transplantation in preconditioned hosts, where all host cells have been removed.
\subsubsection*{Engraftment in a non-preconditioned host}
Multiple experiments have tested the hypothesis that donor HSCs can engraft into a host which has not been pretreated to remove some or all of the host organism's HSCs \cite{stewart:Blood:1993,quesenberry:Blood:1994,rao:ExpHemat:1997,blomberg:ExpHemat:1998,slavin:Blood:1998,quesenberry:NYAS:2001,bhattacharya:JEM:2006,bhattacharya:JEM:2009,takizawa:JEM:2011,kovtonyuk:Blood:2016}.
These studies have found that engraftment can be successful; following repeated transplantations mice display a chimerism with up to $40\%$ of the HSCs deriving from the donor \cite{quesenberry:Blood:1994,rao:ExpHemat:1997,blomberg:ExpHemat:1998}.
In this scenario we start with a healthy host organism and inject a dose of $\mathcal{S}$ donor HSCs into the PB compartment, in line with the experimental protocols mentioned above.
These donor cells can be neutral, or may have a selective (dis)advantage.
Injecting neutral cells reflects the \emph{in vivo} experiments described above, while advantageous cells can be used to improve the chances of eliminating the host cells.
Transplanting disadvantageous cells would reflect the introduction of `normal' HSCs into an already diseased host carrying advantageous cells.
We do not consider this scenario further here, as the diseased cells are highly unlikely to be replaced without host preconditioning.
We can separate the engraftment dynamics of these donor cells into two different regimes: i) the initial relaxation to a steady state where the total number of HSCs is stable, and ii) long-time dynamics eventually leading to the extinction of either the host or donor HSCs.
We focus on these regimes separately.
Upon the initial injection of the donor HSCs, the PB compartment contains more cells than the equilibrium value $s^*$.
This leads to a net flux of cells attaching to the unoccupied niches in the BM until the population relaxes to its equilibrium size.
Once the equilibrium is reached, the initial dynamics end, and the long-term noise-driven dynamics take over (discussed below).
The challenge for this first part is to determine how many of the donor HSCs have attached to the BM at the end of the initial dynamics.
We identify two distinct behaviours which occur under low and high doses of donor HSCs.
If the dose is small ($\mathcal{S} \ll N-n^*$), then the number of donor HSCs that attach to the BM is given by \eqref{eq:chimerismLow}, and is proportional to the dose size $\mathcal{S}$.
To obtain this result we have assumed that the number of occupied niches remains constant, such that each donor cell has the same chance of finding an empty niche.
However, if the dose of donor HSCs is large enough then all niches become occupied and the BM compartment is saturated; attachment to the BM can only occur following a detachment.
Using this assumption, the initial dynamics can then be described by the linear ODEs
\begin{linenomath}
\begin{subequations}
\label{eq:chimerismHigh}%
\begin{align}
\frac{{\rm d} n_2}{{\rm d} t} &= -d n_2 +\frac{dN}{s(t)}s_2, \\
\frac{{\rm d} s_2}{{\rm d} t} &= (\beta+d)n_2 - \left(\delta + \frac{dN}{s(t)}\right)s_2,
\end{align}
\end{subequations}
\end{linenomath}
where $s(t)$, the total number of cells in the PB compartment, is found from $\dot{s} = \beta N - \delta s$.
A derivation of \eqref{eq:chimerismHigh} can be found in the SI.
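As a minimal sketch, \eqref{eq:chimerismHigh} can be integrated numerically once $s(t)$ is written in its closed form $s(t) = \beta N/\delta + [s(0) - \beta N/\delta]\,e^{-\delta t}$. The parameter values below are illustrative placeholders, not the values of Table~\ref{tab:params}:

```python
import math
from scipy.integrate import solve_ivp

# Illustrative parameters (hypothetical, not those of Table 1):
N, s_star, beta = 10_000, 100.0, 1.0   # niches, PB equilibrium, division rate
delta = beta * N / s_star              # from the steady state s* = beta N / delta
d = 10.0                               # detachment rate (hypothetical)
S = 2_000.0                            # large single-bolus dose, S > N - n*

s0 = s_star + S                        # PB population just after injection

def s_total(t):
    # closed-form solution of ds/dt = beta N - delta s
    return beta * N / delta + (s0 - beta * N / delta) * math.exp(-delta * t)

def rhs(t, y):
    n2, s2 = y                         # donor cells in the BM and PB compartments
    attach = d * N / s_total(t)        # effective per-cell attachment rate
    return [-d * n2 + attach * s2,
            (beta + d) * n2 - (delta + attach) * s2]

sol = solve_ivp(rhs, (0.0, 0.2), [0.0, S], rtol=1e-8, atol=1e-10)
n2_end, s2_end = sol.y[:, -1]
print(f"donor chimerism after initial phase: n2/N = {n2_end / N:.3%}")
```

The printed chimerism is the plateau reached once the PB compartment has relaxed to $s^* = \beta N/\delta$; varying $\mathcal{S}$ above and below $N - n^*$ should reproduce the qualitative small-dose/large-dose behaviour discussed in the text.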
The predicted chimerism, and the accuracy of these predictions, at the end of the initial phase are shown in \figref{fig:initialChimerism}.
The efficiency of donor cell engraftment decreases in the large-dose regime ($\mathcal{S} > N-n^*$).
This is simply because the niche-space is saturated, so HSCs spend longer in the blood and are more likely to perish.
If the lifetime in the PB ($\ell$) is short, then we have more frequent migration between compartments, as highlighted in Table~\ref{tab:params2}.
Hence smaller $\ell$ leads to higher chimerism.
The approximation from \eqref{eq:chimerismHigh} becomes increasingly accurate for larger doses.
The two approximations break down at the cross-over region between small and large doses.
In this regime the number of occupied niches does not reach a stable value.
\begin{figure}[h]
\centering
\iftoggle{showFigs}{\includegraphics[width=\textwidth]{Fig3.pdf} }
\caption{
Initial chimerism of neutral donor cells in a healthy, non-preconditioned host.
Upper panels depict the level of donor chimerism shortly after a dose of neutral donor cells, $\mathcal{S}$, is injected into the host.
Symbols are from numerical integration of \eqref{eq:ODEsTwo}.
The small-dose regime is described by \eqref{eq:chimerismLow} (solid lines for $\mathcal{S} < 10^3$), and the large-dose regime is described by \eqref{eq:chimerismHigh} (solid lines for $\mathcal{S} > 10^2$).
Lower panels show the accuracy of these approximations when compared to the numerical integration of \eqref{eq:ODEsTwo}.
This error takes the form (\emph{approx.}$-$\emph{exact})/\emph{exact}.
(a) $s^* = 10$, and (b) $s^* = 100$.
The lifetime in the PB, $\ell$, is measured in minutes.
Remaining parameters are as in Table~\ref{tab:params}.
}
\label{fig:initialChimerism}
\end{figure}
If the donor cells have a selective (dis)advantage, then the deterministic dynamics predict the eventual extinction of either the host or donor cells.
However, the selective effect is usually small and only acts on a longer timescale.
Therefore the initial dynamics are largely unaffected by selection, and we assume neutral donor cell properties when we model the initial dynamics.
The inefficiency of large doses can be overcome by administering multiple small doses over a long period.
In this way we prevent the niches from becoming saturated and fewer donor cells die in the PB.
Hence we should be able to obtain a higher level of engraftment when compared to a single-bolus injection of the same total number of donor HSCs.
These effects have been tested experimentally \cite{quesenberry:Blood:1994,rao:ExpHemat:1997,blomberg:ExpHemat:1998,bhattacharya:JEM:2009}.
Parabiosis experiments are also an extreme example of this; they represent a continuous supply of donor cells \cite{wright:Science:2001}.
As shown in \figref{fig:multipleDoses}, our model captures the same qualitative behaviour as reported in the experiments: Multiple doses lead to higher levels of chimerism at the end of the initial phase of dynamics.
This effect is more pronounced when the total dose is large.
Using our analysis we know how efficient each dose is, and what levels of chimerism can be achieved.
Hence our model can be used to optimise dosing schedules such that they are maximally efficient.
\begin{figure}[h]
\centering
\iftoggle{showFigs}{\includegraphics[width=0.5\textwidth]{Fig4.pdf}}
\caption{
Number of donor HSCs attaching to the BM of a non-preconditioned host after a single dose (dashed lines) or seven daily doses (solid lines).
Both treatments use the same total number, $\mathcal{S}$, of donor HSCs.
Trajectories are from numerical integration of the ODEs~\eqref{eq:ODEsTwo}.
Here we have $\ell=3$ minutes, $s^*=100$, and the remaining parameters are as in Table~\ref{tab:params}.
}
\label{fig:multipleDoses}
\end{figure}
We note here that engraftment efficiency is only important when donor cells are rare and there is no danger to life.
This is the case, for example, in experimental protocols when tracking small numbers of cells.
In such cases it makes sense to use the multiple-dosing strategy.
Transplantation following preconditioning, however, provides a more viable approach to disease treatment where patient survival needs to be maximised.
In this case the dose size should be increased, but it should be considered that there are diminishing returns in engraftment when the dose size is large enough to saturate all open niches.
This dose size can be read from \figref{fig:initialChimerism}.
The long-term dynamics are handled in the same way as the clonal dominance results above; we use \eqref{eq:selectiveDominanceProb} and \eqref{eq:selectiveDominanceTime} to show how the number of donor cells injected into the PB affects the probability that they expand (as opposed to die out), and the time it takes for the host cells to be completely displaced.
One key result is that a dose of just eight HSCs with an advantage of $\beta_2/\beta = 1.1$ has over $50\%$ chance to fixate in the host.
However, the time for this to happen is $\sim 16$ years.
With a reproductive advantage of $\beta_2/\beta = 1.5$ the success rate is $\sim 95\%$ for the same dose, and the time taken now falls to $\sim 4$ years.
Further results are found in the SI.
\subsubsection*{Engraftment in a preconditioned host}
HSC transplantation procedures are often preceded by treatment or irradiation of the host -- referred to as host preconditioning.
This greatly reduces the number of host HSCs in the BM compartment.
For this section we assume complete conditioning such that no host HSCs remain, i.e. myeloablative conditioning.
Following the pre-treatment, a dose of donor HSCs of size $\mathcal{S}$ is injected into the PB compartment.
We then want to know the probability that these cells reconstitute the organism's hematopoietic system.
For this section we assume that donor HSCs have identical properties to the wildtype cells (i.e. no selection).
We further assume that all donor HSCs have the potential to reconstitute the hematopoietic system in the long-term --
in experiments this is not always the case as not all cells which are sorted as phenotypic HSCs (as defined by surface markers) are functional, reconstituting HSCs (see e.g. \cite{matsuzaki:Immunity:2004}).
Because of this assumption, we only show results for the injection of a single donor HSC into the conditioned host ($\mathcal{S} = 1$).
Higher doses lead to a greater probability of reconstitution.
A further assumption is that the host maintains (or is provided with) enough mature blood cells during the reconstitution period to sustain life.
We consider two approaches for estimating the probability of hematopoietic reconstitution.
As a first-order approximation, the probability that a single HSC in the PB compartment dies is $\psi = \delta/(\delta+a)$.
Here we have assumed that all niches are unoccupied, such that the attachment rate per cell is $a(N-0)/N=a$.
For a dose of size $\mathcal{S}$, the reconstitution probability is $\varphi = 1-\psi^{\mathcal{S}}$.
Hence we have
\begin{equation}
\varphi = 1 - \left(\frac{\delta}{\delta + a}\right)^\mathcal{S}.
\label{eq:emptyReconst0}
\end{equation}
This prediction, \eqref{eq:emptyReconst0}, is shown as dotted lines in \figref{fig:conditionedReconstitution}; however, it does not agree with the results of the model.
The second approach considers all possible combinations of detachments and reattachments, as well as reproduction events.
This leads to a reconstitution probability, given a dose of $\mathcal{S}$ donor cells, of
\begin{equation}
\varphi = 1 - \left(\frac{\delta}{\delta + a} \frac{d + \beta}{\beta}\right)^{\mathcal{S}},
\label{eq:emptyReconst}
\end{equation}
which is derived in the SI.
This result, \eqref{eq:emptyReconst}, is shown as solid lines in \figref{fig:conditionedReconstitution}, and is in excellent agreement with the reconstitution probability observed in simulations.
From these results we can conclude that, in our model, HSCs migrate multiple times between the PB and BM before they establish a sustainable population.
It is also the case that in this model a single donor HSC is sufficient to repopulate a conditioned host in ${\sim\!90}$--$99\%$ of cases across all the parameter ranges reported in Table~\ref{tab:params}.
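To illustrate why \eqref{eq:emptyReconst}, rather than the first-order estimate \eqref{eq:emptyReconst0}, captures the model, a small jump-chain Monte Carlo of a single donor HSC can be compared against both formulae. This is a sketch with hypothetical rate values ($\delta$, $a$, $d$, $\beta$ below are illustrative, not those of Table~\ref{tab:params}), and it assumes all niches remain essentially free so that the per-cell attachment rate is simply $a$:

```python
import random

def reconst_prob_first_order(delta, a, S=1):
    """Eq. (emptyReconst0): each injected cell dies w.p. delta/(delta + a)."""
    return 1.0 - (delta / (delta + a)) ** S

def reconst_prob_full(delta, a, d, beta, S=1):
    """Eq. (emptyReconst): accounts for detachments, reattachments and division."""
    return 1.0 - (delta / (delta + a) * (d + beta) / beta) ** S

def simulate_once(delta, a, d, beta, cap=100, rng=random):
    """Jump-chain Monte Carlo of one donor HSC in a fully conditioned host.
    All niches are assumed free, so the per-cell attachment rate is a."""
    p, b = 1, 0                                  # cells in PB / BM
    while 0 < p + b < cap:
        w_pb = p * (delta + a)                   # total event weight of PB cells
        w_bm = b * (d + beta)                    # total event weight of BM cells
        if rng.random() * (w_pb + w_bm) < w_pb:  # event acts on a PB cell
            if rng.random() < delta / (delta + a):
                p -= 1                           # death in the PB
            else:
                p, b = p - 1, b + 1              # attachment to an empty niche
        else:                                    # event acts on a BM cell
            if rng.random() < d / (d + beta):
                p, b = p + 1, b - 1              # detachment from the niche
            else:
                p += 1                           # division; daughter enters the PB
    return p + b >= cap

random.seed(2)
delta, a, d, beta = 1.0, 3.0, 1.0, 1.0           # hypothetical rates
runs = 1500
hits = sum(simulate_once(delta, a, d, beta) for _ in range(runs))
print(hits / runs, reconst_prob_full(delta, a, d, beta))
```

With these rates \eqref{eq:emptyReconst0} gives $0.75$ while \eqref{eq:emptyReconst} gives $0.5$; the simulated reconstitution frequency agrees with the latter, reflecting the repeated migrations between the PB and BM before a sustainable population is established.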
\begin{figure}[h]
\centering
\iftoggle{showFigs}{\includegraphics[width=0.5\textwidth]{Fig5.pdf}}
\caption{
Probability of reconstitution from a single donor HSC which is injected into a preconditioned host.
Symbols are results from $10^5$ simulations of the stochastic model.
For efficiency we ran the simulations until the population reached either 0 (extinction) or 100 (reconstitution), and we assume no further extinction events occur once this upper limit has been reached.
Dotted lines are the `first-order' prediction of \eqref{eq:emptyReconst0}.
Solid lines are the predictions of \eqref{eq:emptyReconst} which account for detachments, reattachments, and reproduction events.
Remaining parameters are as in Table~\ref{tab:params}.
}
\label{fig:conditionedReconstitution}
\end{figure}
\section*{Discussion}
We have introduced a mathematical model that describes the back-and-forth migration of hematopoietic stem cells between the blood and bone marrow within a host.
This is motivated by the literature of HSC dynamics in mice.
The complexity of the model has been kept to a minimum to allow us to parametrise it based on empirical results.
The model is also analytically tractable, permitting a more thorough understanding of the dynamics and outcomes.
For example, on long timescales we find that the two-compartment model is equivalent to the well-studied Moran model.
Meanwhile, analysis of the reconstitution of a preconditioned mouse shows that in our model HSCs migrate multiple times between
the BM and PB compartments before establishing a sustainable population.
Given these dynamics we first investigate clonal dominance, where a clone originating from a single mutant cell expands in the HSC population.
In mice we find that a selective advantage is required if the clone is to be detected within a lifetime:
A clone starting from a single cell with a reproduction rate $50\%$ higher than the wildtype can expand to $1\%$ clonality in one year.
A cell dividing twice as fast as the wildtype reaches ${>10\%}$ clonality in the same timeframe.
Such division rates can be reached by MPN-initiating HSCs \cite{lundberg:JEM:2014}.
The requirement of a selective advantage agrees with the clinical literature where, for example, mutants are known to enjoy a growth advantage under inflammatory conditions \cite{fleischman:Blood:2011,kleppe:CanDisc:2015}.
The model also captures the scenario of stem cell transplantation.
Engraftment into a non-preconditioned host is analogous to clonal dominance, except that the clone is initiated by multiple donor cells.
For small doses of donor HSCs, the number of cells that attach to the BM is directly proportional to the size of the dose.
For larger doses the BM niches are saturated, leading to lower engraftment efficiency.
Donor chimerism can be improved by injecting the host with multiple small doses, as opposed to a large single-bolus dose of the same size.
This agrees with results that have been reported in the empirical literature \cite{quesenberry:Blood:1994,rao:ExpHemat:1997,blomberg:ExpHemat:1998,bhattacharya:JEM:2009}.
Following preconditioning of a mouse to remove all host cells, we find that a single donor HSC is sufficient to repopulate a host in ${\sim \! 90}$--$99\%$ of cases.
This result rests on the assumption that the donor stem cell was, in fact, a long-term reconstituting HSC, which may not be the case in experimental setups.
In the SI we consider the effects of death occurring in the BM niche ($\delta' \ne 0$), and the direct attachment of a new daughter cell to the bone marrow niche ($\varrho \ne 0$).
We find that death in the niche increases the migration rate of cells between the PB and BM compartments, which can greatly reduce the attachment success of the low-frequency mutant/donor cells.
However, the direct attachment of daughter cells to the niche has no effect on the initial attachment of donor/mutant cells, nor on the level of chimerism achieved in the initial phase of the dynamics.
Broadening the scope of our investigation, clonality of the hematopoietic system is a major concern for human health \cite{genovese:NEJM:2014,jaiswal:NEJM:2014,xie:NatMed:2014,steensma:Blood:2015}.
Clinical studies have shown that $10\%$ of people over $65$ years of age display clonality, yet $42\%$ of those developing hematologic cancer displayed clonality prior to diagnosis \cite{genovese:NEJM:2014}.
Our model, and the subsequent analysis, can be applied to this scenario.
However, the number of HSCs in man is debated, with estimates of ${\sim \! 400}$ \cite{buescher:JCI:1985,dingli:PLoSONE:2006},
$\mathcal{O}(10^4)$ \cite{abkowitz:Blood:2002}, or $\mathcal{O}(10^7)$ \cite{nombela:BloodAdv:2017}.
Estimates as high as $\mathcal{O}(10^9)$ can also be obtained by combining the total number of nucleated bone marrow cells \cite{harrison:JCP:1962} with stem cell fraction measurements \cite{wang:Blood:1997,kondo:ARI:2003,pang:PNAS:2011}.
In \figref{fig:dominanceMan} we summarise how neutral and advantageous clones starting from a single HSC expand in human hematopoietic systems.
We find that $4\%$ clonality \cite{steensma:Blood:2015,sperling:NatRevCancer:2017} can be achieved in a short period of time for even neutral clones [\figref{fig:dominanceMan}(a)].
If the human HSC pool is $\mathcal{O}(10^3)$ or smaller, we would expect clonal hematopoiesis and the associated malignancies to be highly abundant in the population, perhaps more so than they currently are \cite{genovese:NEJM:2014,jaiswal:NEJM:2014,xie:NatMed:2014,steensma:Blood:2015}.
On the other hand, for a system size of $N = 10^6$ it takes thousands of years for a single neutral HSC to expand to detectable levels, making neutral expansion extremely unlikely to result in clonal hematopoiesis.
Therefore, for clonal hematopoiesis to occur in a pool of this size or larger \cite{nombela:BloodAdv:2017} the mutants would require a significant fitness advantage over the wildtype HSCs.
We also consider a range of parameters, and even relax the $\alpha=\varrho=0$ condition, in Fig~S1 and Fig~S2.
We find no significant differences in the predictions of our model.
\begin{figure}[h]
\centering
\iftoggle{showFigs}{\includegraphics[width=0.5\textwidth]{Fig6.pdf}}
\caption{
Time taken until a clone initiated from a single cell represents $4\%$ \cite{steensma:Blood:2015,sperling:NatRevCancer:2017} of the human HSC pool, as a function of the total number of niches in the system.
Colours represent the selective advantage of the invading clone.
Lines are given by the solution of \eqref{eq:selectiveDominanceTime}, and shaded regions represent the calculated standard deviation (details in the SI).
Remaining parameters are $\beta=1/40$ week$^{-1}$ \cite{catlin:Blood:2011}, $\ell=60$ minutes, $s^*=0.01N$, and $n^*=0.99N$.
Here $\ell$, $s^*$, and $n^*$ are extrapolated from the murine data, where $\ell_{\rm human} \approx 10 \ell_{\rm mouse}$, which follows the same scaling as the HSC division rate, $\beta$.
Further parameter combinations are shown in Fig~S1 and Fig~S2.
References refer only to the source of parameters; no part of this figure has been reproduced from previous works.
}
\label{fig:dominanceMan}
\end{figure}
\subsection*{Limitations}
Our model has been kept to a minimal level of biological detail to allow for parametrisation from experimental results.
This has the added benefit of analytic tractability.
The model is constructed under steady-state conditions, which is the case for neutral clonal expansion.
However, in the case of donor-cell transplantation following myeloablative preconditioning, we are no longer in a steady state.
Here we expect some regulatory mechanisms to affect the HSC dynamics, including a faster reproductive rate and a reduced probability of cells detaching from the niche.
There are also possibilities for mutants to exploit or evade the homeostatic mechanisms \cite{brenes:PNAS:2011}.
Different mechanisms of stem cell control have recently been considered for hematopoietic cells \cite{stiehl:BMT:2014}, as well as in colonic crypts \cite{yang:JTB:2017}.
The steady state assumption is also unable to capture the different dynamics associated with ageing.
For example, in young individuals the hematopoietic system is undergoing expansion.
In our model there is no distinction between young and old systems.
In Fig~S3 we demonstrate the impact of a (logistically) growing number of niches.
Such growth means clonal hematopoiesis is likely to be detected earlier, and therefore would increase our lower bound estimate on the number of HSCs in man.
Telomere-length distributions have been used to infer the HSC dynamics from adolescence to adulthood, and have suggested a slowing down of HSC divisions as life progresses \cite{werner:eLife:2015}.
Faster dynamics in early life would lead to a higher incidence among young people, which again increases our lower bound estimate.
It is also not entirely clear how to extrapolate the parameters from the reported mouse data to a human system.
Here we have taken the simplest approach and appropriately scaled the unknown parameters.
However, hematopoietic behaviour may differ between species.
For example, results of HSC transplantation following myeloablative therapy in non-human primates have shown that clones of hematopoietic cells persist for many years \cite{kim:Blood:2009,kim:CSC:2014}.
This could be due to single HSCs remaining attached to the niche and over-contributing to the hematopoietic system, or due to clonal expansion of the HSCs to large enough numbers such that a contributing fraction will always be found in the BM.
Both of these mechanisms are features of our model: the time a cell spends in the BM is much longer than the time in the PB and can be increased further by tuning the model parameters, namely by decreasing $s^*$ or increasing $\ell$.
Changes to these parameters seem to have little effect on our predictions of clonal expansion, as shown in Fig~S1 and Fig~S2.
Clonal extinctions are also a feature of our work, and have been identified in non-human primates \cite{kim:CSC:2014}.
A more general point to discuss is the role of hematopoietic stem cells in blood production.
In our model we are only considering HSC dynamics, however it has been proposed that downstream progenitor cells are responsible for maintaining hematopoiesis \cite{schoedel:Blood:2016} in mice.
Hence, myeloid clonality would also be determined by the behaviour of these progenitor cells.
On the other hand, an independent study found that HSCs are driving multi-lineage hematopoiesis \cite{sawai:Immunity:2016}, suggesting we are correct in our approach.
Again we also expect there to be variation between species in this balance of HSC/progenitor activity.
With little quantitative information available, we have assumed that HSCs are the driving force of steady-state hematopoiesis across mice and humans.
\subsection*{Conclusion}
In conclusion, this simple mathematical model encompasses multiple HSC-engraftment scenarios and qualitatively captures empirically observed effects.
The mathematical calculations provide insight into how the dynamics of the model unfold.
The analytical results, which we have verified against stochastic simulations, allow us to easily investigate how parameter variation affects the outcome.
We now hope to extend this analysis, incorporating further effects of disease and combining this model with the differentiation tree of hematopoietic cells.
\section*{Supporting Information}
\paragraph*{SI}
\label{SI}
{\bf Supporting mathematical details.}
Contains detailed derivations of all equations presented in the manuscript, including the details of the projection method.
The analysis is carried out for unrestricted parameters, including selective effects on all parameters and permitting death in the BM space, as well as direct attachment of new daughter cells to the niche.
\iftoggle{showSuppFigs}{\begin{figure*}[h]\centering\includegraphics[width=\textwidth]{FigS1.pdf}\end{figure*}}
\paragraph*{Fig~S1}
\label{fig:S1}
{\bf Clonality in man: more parameter combinations and death within niches.}
Time taken until a clone initiated from a single cell represents $4\%$ \cite{steensma:Blood:2015,sperling:NatRevCancer:2017} of the human HSC pool, as a function of the total number of niches in the system.
Colours represent the selective advantage of the invading clone.
Solid lines correspond to death only within the niches ($\alpha=0$), while dashed lines represent equal death rates in both compartments ($\alpha=1$; see SI for details).
Lines are generated using mathematical formulae in the SI.
Remaining parameters are $\beta=1/40$ week$^{-1}$ \cite{catlin:Blood:2011}, $n^*=0.99N$, and $\varrho=0$.
Some predictions are missing when $d \le 0$ and/or $a \le 0$; these parameter regimes are incompatible with our model.
\iftoggle{showSuppFigs}{\begin{figure*}[h]\centering\includegraphics[width=\textwidth]{FigS2.pdf}\end{figure*}}
\paragraph*{Fig~S2}
\label{fig:S2}
{\bf Clonality in man: more parameter combinations and reproduction into BM.}
Time taken until a clone initiated from a single cell represents $4\%$ \cite{steensma:Blood:2015,sperling:NatRevCancer:2017} of the human HSC pool, as a function of the total number of niches in the system.
Colours represent the selective advantage of the invading clone.
Solid lines correspond to the daughter cell entering the PB compartment after reproduction ($\varrho=0$), while dashed lines represent daughter cells remaining in the BM ($\varrho=1$; see SI for details).
Lines are generated using mathematical formulae in the SI.
Remaining parameters are $\beta=1/40$ week$^{-1}$ \cite{catlin:Blood:2011}, $n^*=0.99N$, and $\alpha=0$.
Some predictions are missing when $d \le 0$ and/or $a \le 0$; these parameter regimes are incompatible with our model.
\iftoggle{showSuppFigs}{\begin{figure*}[h]\centering\includegraphics[width=0.7\textwidth]{FigS3.pdf}\end{figure*}}
\paragraph*{Fig~S3}
\label{fig:S3}
{\bf Predicted and simulated incidence curves of clonal hematopoiesis in man: constant size and under growth.}
The cumulative probability density function (CDF) of times to reach $4\%$ clonality \cite{steensma:Blood:2015,sperling:NatRevCancer:2017} when starting from a single neutral mutant in a normal host.
Shaded regions are incidence curves from simulations using either a constant niche count of $N=10^4$, or a logistically growing number of niches with $\dot{N} \approx r N(1-N/K)$, where $K=10^4$, $r=0.3$ per year, and $N(0)=K/20$.
These parameters represent a maturation period of $\sim 20$ years to reach $N \approx K$.
Lines are predicted incidence curves which assume normally-distributed times to clonality, using the mean and variance formulae as described in the SI, and constant population size as indicated in the legend (minimum and maximum number of niches).
Remaining parameters are $\beta=1/40$ week$^{-1}$ \cite{catlin:Blood:2011}, $n^*=0.99N$, $s^*=0.01N$, and $\ell=60$ minutes.
Finally, we only consider here the conditional incidence times, which have been normalised by the fixation probability.
This probability is 20 times larger for the neutral mutant in the growing model when compared to the fixed number of niches.
\section*{Acknowledgements}
The authors would like to thank Radek Skoda, Timm Schroeder, Ivan Martin, Larisa Kovtonyuk and Matthias Wilk for useful discussions.
\section{\label{introduction}Introduction}
Topological insulators (TIs) are a new state of matter that has attracted
considerable interest within the condensed matter physics community~\cite{kane,hasan,qi,moor,fu,bernevig}.
It is now well established that TIs are promising candidates for future advanced electronic
devices. They possess a bulk insulating gap and conducting edge states. The edge states are protected
by time-reversal symmetry (TRS) against backscattering and this property makes them robust against
disorder and nonmagnetic defects. Consequently, the edge channels normally possess very high carrier mobility.
Among TI materials two-dimensional (2D) van der Waals systems have attracted a lot of attention during
the past decade~\cite{mannix2017}. The interest in these systems originates from the discovery of graphene,
which has a very high carrier mobility (200 000 cm$^2$/(V s)), thermal conductivity, and mechanical
strength~\cite{neto2009,katsnelson2012}; however, its zero electronic band gap has severely limited its
applicability in electronic devices. Also, the proposal for the existence of a topological insulating phase in graphene by Kane
and Mele was shown to be unrealistic because of its extremely small spin-orbit coupling (SOC) strength~\cite{yao2007,huertas2006}.
Hence, extensive efforts have been devoted to open a band gap and increase the effective SOC
in graphene or find other 2D systems with favorable SOC, carrier mobility, and appropriate band gap.
Other 2D materials such as single- or few-layer transition metal dichalcogenides (TMDs), boron nitride,
silicene, germanene, phosphorene, stanene and MXene, have been extensively explored~\cite{ren2016,mannix2017}.
Another important issue for applications in electronic industry is the compatibility of the material
with current silicon-based electronic technology. Therefore, the group IV elements with honeycomb
structure are more favorable for this purpose.
One method for tuning the electronic band structure of 2D systems is the use of surface functionalization.
Functionalization of graphene with hydrogen, the so-called hydrogen-terminated graphene or graphane,
opens a sizeable band gap, but its carrier mobility decreases dramatically to 10 cm$^2$/(Vs)~\cite{elias2009control}.
Silicene and germanene, the other analogues of graphene, have also attracted much attention.
However, the small band gaps of these systems and mobility issues have limited their application in electronics.
Functionalized germanene provides enhanced stability and tunable properties~\cite{jiang2014improving}.
Compared with bulk Ge, surface-functionalized germanene possesses a direct and large band gap depending
on the surface ligand. These materials can be synthesized via the topotactic deintercalation of
layered Zintl phase precursors~\cite{jiang2014improving,jiang2014covalently}. In contrast to TMDs,
the weaker interlayer interaction allows for direct band gap single layer properties such as
strong photoluminescence that are readily present without the need to exfoliate down to a single layer.
Bianco et al.~\cite{bianco2013stability} experimentally produced hydrogen-terminated
germanene, GeH (also called germanane). Recently the new material GeCH$_3$
was synthesized~\cite{jiang2014improving}, which exhibits enhanced thermal stability: GeCH$_3$
is thermally stable up to $250\,^{\circ}\mathrm{C}$, compared to $75\,^{\circ}\mathrm{C}$ for GeH.
The electronic structure of GeCH$_3$ has been shown to be very sensitive to strain, which makes it very attractive
for strain sensor applications~\cite{ma2014strain,jing2015high,ma2016band}. It has also a high carrier mobility
and pronounced light absorption which makes it attractive for light harvesting applications~\cite{jing2015high,ma2016band}.
At present there exist already a few first-principle studies of GeCH$_3$ that also include the effect of
SOC~\cite{jiang2014improving,ma2014strain,jing2015high,ma2016band}.
To fully understand the physics behind the electronic band structure close to the Fermi level,
we propose a tight-binding (TB) model.
Our TB model is fitted to the density functional theory (DFT) results both for the case with and
without SOC. In the next part of this work we apply biaxial
tensile strain to examine the effect of strain on the electronic properties of this system and
compare our results with DFT calculations. The possibility of a topological
phase transition in GeCH$_3$ under biaxial tensile strain is also examined. Our finding that there is a transition
to the quantum spin Hall (QSH) phase is further corroborated by the fact that we find TRS-protected edge
states in nanoribbons made out of GeCH$_3$.
This paper is organized as follows. In Sec.~II, we introduce the crystal structure and
lattice constants of monolayer GeCH$_3$. Our TB model with and without SOC is introduced in
Sec.~III,
and the effect of strain on the electronic properties of monolayer GeCH$_3$
is examined. In Sec.~IV, using the $\mathbb{Z}_2$ formalism we demonstrate
the existence of a topological phase transition in the electronic properties
of monolayer GeCH$_3$ when biaxial tensile strain is applied.
The paper is summarized in Sec.~V.
\section{\label{structure}lattice structure of monolayer G\lowercase{e}CH$_3$ }
\begin{figure}
\centering
\includegraphics[width=.46\textwidth]{lattice.jpg}
\caption{ Schematic top (a) and side (b) views of the monolayer GeCH$_3$ structure. Blue,
black, and gray balls indicate Ge, C, and H atoms, respectively. Ge atoms are sandwiched between two sheets of
methyl groups. $h$ is the buckling of the structure. (c) Top view of the system with
the methyl groups omitted. ${\bm a_1}$ and ${\bm a_2}$ are the lattice vectors. (d) First Brillouin zone of the
system with the reciprocal lattice vectors ${\bm b_1}$ and ${\bm b_2}$.}
\label{Lattice}
\end{figure}
The hexagonal atomic structure of monolayer GeCH$_3$ and its geometrical parameters are
shown in Figs.~\ref{Lattice}(a-c).
As shown in Figs.~\ref{Lattice}(a,b) it consists of three atomic layers where
a buckled honeycomb sheet of Ge atoms is
sandwiched between two outer methyl group layers. Each unit cell of monolayer GeCH$_3$
consists of two Ge atoms and two CH$_3$ groups. Previous DFT calculations gave a lattice constant $a=3.954$~\AA,
and Ge-Ge and Ge-C bond lengths of 2.415~\AA\ and 1.972~\AA, respectively~\cite{ma2014strain}. The buckling height $h$,
indicating the distance between the two Ge sublattices, is 0.788~\AA.
We have chosen the $x$ and $y$ axes along the armchair and zigzag directions, respectively. The $z$ axis is in the normal
direction to the plane of the monolayer GeCH$_3$. With this definition of coordinates, the lattice vectors are written as
$\bm a_1=a/2(1,\sqrt{3}),~\bm a_2=a/2(-1,\sqrt{3})$,
where the corresponding hexagonal Brillouin zone of the structure (see Fig.~\ref{Lattice}(d))
is determined by the reciprocal vectors
$\bm b_1=2\pi/a(1,\sqrt{3}/3),~\bm b_2=2\pi/a(-1,\sqrt{3}/{3})$.
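As a quick numerical sanity check (a minimal Python/NumPy sketch, not part of the original derivation), one can verify the duality relation $\bm b_i\cdot\bm a_j=2\pi\delta_{ij}$ directly from the vectors given above:

```python
import numpy as np

# Lattice and reciprocal vectors of monolayer GeCH3; a = 3.954 angstrom is
# the DFT lattice constant quoted in the text.
a = 3.954
a1 = (a / 2) * np.array([1.0, np.sqrt(3.0)])
a2 = (a / 2) * np.array([-1.0, np.sqrt(3.0)])
b1 = (2 * np.pi / a) * np.array([1.0, np.sqrt(3.0) / 3.0])
b2 = (2 * np.pi / a) * np.array([-1.0, np.sqrt(3.0) / 3.0])

# Verify the defining duality relation b_i . a_j = 2*pi*delta_ij.
assert np.isclose(a1 @ b1, 2 * np.pi) and np.isclose(a2 @ b1, 0.0, atol=1e-12)
assert np.isclose(a2 @ b2, 2 * np.pi) and np.isclose(a1 @ b2, 0.0, atol=1e-12)
```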
\section{\label{ModelHamiltonian}Tight-Binding Model Hamiltonian}
The electronic structure of monolayer GeCH$_3$ has been obtained using DFT calculations in Ref.~\cite{ma2014strain}. It was shown that
the low-energy electronic properties of this system are dominated by the $s$, $p_x$, and $p_y$ atomic orbitals of the Ge atoms.
DFT calculations including the SOC interaction have shown that applying an in-plane biaxial tensile strain induces a topological phase
transition in the electronic properties of monolayer GeCH$_3$~\cite{ma2014strain}. Although such a DFT approach provides valuable
information regarding the electronic properties of this system, it is limited to small computational unit cells.
For example, large nanoribbons consisting of hundreds of atoms and including disorder require very large super-cells which go beyond present
day computational DFT capability. This motivated us to derive a TB model for monolayer GeCH$_3$ that is sufficiently accurate to describe
the low-energy spectrum and the electronic properties of this system.
In the following we will propose a low-energy TB model Hamiltonian that includes SOC for monolayer GeCH$_3$.
We show that our model is able to predict accurately the effect of strain on the electronic properties of the system.
\subsection{\label{NoSOC}Model Hamiltonian without SOC }
We propose a TB model including $s$, $p_x$ and $p_y$ atomic orbitals with principal quantum number $n=4$ of Ge atoms to
describe the low-energy spectrum of this system. The nearest-neighbor effective TB Hamiltonian without
SOC in the basis of $|s,p_x,p_y\rangle$ and in the second quantized representation is given by
\begin{equation}
H_0 =\sum_{i,\alpha}E_{i\alpha} c_{i\alpha}^\dagger c_{i\alpha}+ \sum_{\langle i,j\rangle,\alpha,\beta} t_{i\alpha,j\beta}
(c_{i\alpha}^{\dagger}c_{j\beta}+\mathrm{h.c.}),
\label{eqn:TBhamiltonian}
\end{equation}
where $c_{i\alpha}^{\dagger}$ and $c_{i\alpha}$ represent the creation and annihilation
operators for an electron in the $\alpha$-th orbital of the $i$-th atom, $E_{i\alpha}$ is the onsite energy of
$\alpha$-th orbital of the $i$-th atom and $t_{i\alpha,j\beta}$ is the nearest-neighbor hopping amplitude
between $\alpha$-th orbital of $i$-th atom and $\beta$-th orbital of $j$-th atom.
We will show that this effective model is sufficiently accurate to describe the low-energy spectrum of this system.
Note that the above Hamiltonian is quite different from the effective Hamiltonian that describes the electronic properties of
pristine germanene~\cite{kaloni2013stability}. In the pristine honeycomb structures of the group IV elements,
the effective low-energy spectrum is
described by the outer $p_z$ atomic orbitals. However, in monolayer GeCH$_3$, the $p_z$ orbitals mainly contribute
to the $\sigma$-bonding between Ge and C atoms to form the energy bands that are far from the Fermi level. Therefore,
we will neglect the contribution of the $p_z$ orbitals of the Ge atoms and the other orbitals of the CH$_3$ molecule in our TB model.
\begin{figure}
\centering
\includegraphics[width=.45\textwidth]{No_SOC_strain_ab-eps-converted-to.pdf}
\includegraphics[width=.45\textwidth]{No_SOC_cd-eps-converted-to.pdf}
\caption{The TB band structure of GeCH$_3$ isolated monolayer without SOC in the presence of (a) 0\%, (b) 4\%, (c) 8\%,
and (d) 12\% biaxial tensile strain. Symbols represent the HSE data taken from~\cite{ma2014sup}.}
\label{nosoc}
\end{figure}
\begin{table*}
\caption{The nearest-neighbor hopping parameters between $s$ and $p$ orbitals are listed in the first
column. The second column expresses the hopping integrals in terms of the standard Slater-Koster
parameters and direction-dependent quantities. The third column lists the nearest-neighbor hopping parameters in the
presence of applied strain.}
\label{table1}
\begin{ruledtabular}
\begin{tabular}{ccc}
\textrm{Hopping parameters}&
\textrm{Without strain}&
\textrm{With biaxial strain}\\
\colrule
$t_{ss}$ & $V_{ss\sigma}$ & $t_{ss}^0[1-2\epsilon\cos^2\phi_0]$\\
$t_{sp_x}$ &$lV_{sp\sigma}$&$t_{sp_x}^0[1-2\epsilon\cos^2\phi_0+\eta\epsilon\tan\phi_0]$\\
$t_{sp_y}$ &$mV_{sp\sigma}$&$t_{sp_y}^0[1-2\epsilon\cos^2\phi_0+\eta\epsilon\tan\phi_0]$\\
$t_{p_xp_x}$ & $l^2V_{pp\sigma}+(1-l^2)V_{pp\pi}$&$t_{p_xp_x}^0[1-2\epsilon\cos^2\phi_0+2\eta\epsilon\tan\phi_0]-2\eta\epsilon\tan\phi_0 V_{pp\pi}$\\
$t_{p_yp_y}$ &$m^2V_{pp\sigma}+(1-m^2)V_{pp\pi}$& $t_{p_yp_y}^0[1-2\epsilon\cos^2\phi_0+2\eta\epsilon\tan\phi_0]-2\eta\epsilon\tan\phi_0 V_{pp\pi}$ \\
$t_{p_xp_y}$ & $lm(V_{pp\sigma}-V_{pp\pi})$&$t_{p_xp_y}^0[1-2\epsilon\cos^2\phi_0+2\eta\epsilon\tan\phi_0]$ \\
\end{tabular}
\end{ruledtabular}
\end{table*}
With the above description, the hopping parameters of Eq.~(\ref{eqn:TBhamiltonian}) can be expressed in terms of the standard
Slater-Koster parameters as listed in the middle column of Table~\ref{table1},
where $l=\cos\theta\cos\phi_0$ and $m=\sin\theta\cos\phi_0$ are the direction cosines of the bond connecting two neighboring
atoms with respect to the $x$ and $y$ axes, respectively.\\
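As a hedged illustration, the middle column of Table~\ref{table1} can be encoded in a few lines of Python/NumPy; the function name \texttt{sk\_hoppings} and the dictionary packaging are our own illustrative choices, while the expressions are exactly those of the table (the numerical values used in the sanity check are the fitted Slater-Koster parameters of Table~\ref{table2}):

```python
import numpy as np

# Nearest-neighbour sp-hoppings of Table 1 (middle column), with the
# direction cosines l = cos(theta)cos(phi0) and m = sin(theta)cos(phi0)
# defined in the text. Vss, Vsp, Vpp_s, Vpp_p are the Slater-Koster
# parameters V_ss_sigma, V_sp_sigma, V_pp_sigma, V_pp_pi.
def sk_hoppings(theta, phi0, Vss, Vsp, Vpp_s, Vpp_p):
    l = np.cos(theta) * np.cos(phi0)
    m = np.sin(theta) * np.cos(phi0)
    return {
        "ss":   Vss,
        "spx":  l * Vsp,
        "spy":  m * Vsp,
        "pxpx": l**2 * Vpp_s + (1 - l**2) * Vpp_p,
        "pypy": m**2 * Vpp_s + (1 - m**2) * Vpp_p,
        "pxpy": l * m * (Vpp_s - Vpp_p),
    }

# Sanity check: a flat (phi0 = 0) bond along x gives pure sigma/pi hoppings.
t = sk_hoppings(0.0, 0.0, -2.20, 2.62, 2.85, -0.85)
assert np.isclose(t["pxpx"], 2.85) and np.isclose(t["pypy"], -0.85)
```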
Using the Fourier transform of Eq.~(\ref{eqn:TBhamiltonian}), and numerically diagonalizing the resulting Hamiltonian in $k$ space,
one can fit to the ab-initio results
in order to obtain the numerical values of the mentioned Slater-Koster parameters. The density functional calculation
results~\cite{ma2014sup} including the Heyd-Scuseria-Ernzerhof (HSE)
functional approximation~\cite{heyd2003hybrid} are used to parametrize the TB model given by Eq.~(\ref{eqn:TBhamiltonian}).
We have listed the obtained numerical values
of these parameters in Table~\ref{table2}. The numerically calculated TB energy bands of monolayer GeCH$_3$
in the absence of strain, as shown in Fig.~\ref{nosoc}(a), are
in excellent agreement with the ab-initio results. The direct band gap of monolayer GeCH$_3$ at the $\Gamma$ point is
1.82~eV.
\begin{table}
\caption{The values of the Slater-Koster parameters in units of eV as obtained from a fitting to the ab-initio results.}
\label{table2}
\begin{ruledtabular}
\begin{tabular}{cccccc}
\textrm{$V_{ss\sigma}$}&
\textrm{$V_{sp\sigma}$}&
\textrm{$V_{pp\sigma}$}&
\textrm{$V_{pp\pi}$}&
\textrm{$\epsilon_s$}&
\textrm{$\epsilon_p$}\\
\colrule
-2.20&2.62 &2.85&-0.85&-5.09&2.1
\end{tabular}
\end{ruledtabular}
\end{table}
\subsection{\label{strain}Strain effects}
Applying strain to a system modifies its electronic properties \cite{bir1974symmetry}.
This is due to the fact that it changes both the bond lengths and bond angles leading to a modulation
of the hopping parameters that determine the electronic properties of the system.
An accurate prediction of the electronic properties of the system in the presence
of different types of strain, is a stringent test of the accuracy of our TB model. To this end, we now first calculate
the modification of the hopping parameters when biaxial tensile strain is applied to the plane of monolayer GeCH$_3$.
Then we will study the modification of the energy spectrum in the presence of such a strain to show that our results
agree very well with the DFT calculations. This particular type of strain
noticeably simplifies our calculations.
\begin{figure}[ht]
\centering
\vspace{20pt}
\includegraphics[width=0.47\textwidth]{buckling-eps-converted-to.pdf}
\caption{Variation of the buckling angle as a function of biaxial tensile strain. Symbols represent the DFT data for germanene \cite{kaloni2013stability} and the solid line is the fit to this data.}
\label{buckling}
\end{figure}
Biaxial tensile strain applied in the plane of monolayer GeCH$_3$ leaves the honeycomb nature of its lattice
intact and the initial lattice vectors ${\bm a^0_1}$ and ${\bm a^0_2}$ evolve to the deformed ones ${\bm a_1}$
and ${\bm a_2}$. Therefore, the vector $\bm r_0=(x_0,y_0,z_0)$, in
the presence of in-plane strain is deformed into
$\bm r={(x,y,z)=[(1+\epsilon_x)x_0,(1+\epsilon_y)y_0,z_0]}$, where $\epsilon_x$ and $\epsilon_y$ are
the strain in the direction
of the ${ x}$ and ${y}$ axes, respectively.
In the following, for simplicity we assume that the strengths of the applied biaxial strains in
the two directions are equal, i.e.,
$\epsilon_x=\epsilon_y=\epsilon$. In the linear deformation regime, one can perform an expansion of the norm of
${r}$ to first order in $\epsilon_x$ and $\epsilon_y$, which results in
\begin{equation}
r\simeq(1+\alpha_x\epsilon_x+\alpha_y\epsilon_y) r_0=[1+(\alpha_x+\alpha_y)\epsilon]r_0,
\label{rstrain}
\end{equation}
where $\alpha_x={({x_0}/{r_0})}^2$ and $\alpha_y={({y_0}/{r_0})}^2$ are coefficients related to the geometrical structure
of GeCH$_3$. For the three nearest neighbor Ge atoms, one can write $\alpha_x+\alpha_y=\cos^2\phi_0$, where $\phi_0$ is
the initial buckling angle. We note that in the presence of biaxial strain, the bond lengths and buckling angles are both altered.
Thus, we consider their effects on the modification of the hopping parameters, simultaneously. Based on elasticity theory,
we know that the main features of the mechanical properties in a covalent material are determined by the structure of the
system and the strength of the covalent bonds. Therefore, one can expect the change of the buckling angle in
germanene \cite{kaloni2013stability} and GeCH$_3$ to be similar. The variation of the buckling angle \cite{kaloni2013stability} as a function of
biaxial strain can be fitted to the linear form $\phi=\phi_0-\eta \epsilon$ (see Fig.~\ref{buckling}), where $\eta=-30$.
According to the Harrison rule \cite{harrison}, the standard Slater-Koster parameters related to $s$ and $p$ orbitals are proportional to
the bond length $r$ as $V_{\alpha\beta\gamma}\propto{1}/{r^2}$. Using Eq.~(\ref{rstrain}), the modified parameters
are given by
\begin{equation}
V_{\alpha\beta\gamma}=(1-2\epsilon\cos^2 \phi_0)V^0_{\alpha\beta\gamma}.
\end{equation}
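The linearisation above is easy to verify numerically. The following minimal Python sketch compares $1/r^2$ for $r=(1+\epsilon\cos^2\phi_0)r_0$ against the first-order expression; the buckling angle $\phi_0\approx 19^{\circ}$ used in the check is our own estimate, obtained from the quoted buckling height (0.788~\AA) and Ge-Ge bond length (2.415~\AA):

```python
import numpy as np

# Linearised Harrison scaling V ~ 1/r^2 with r = (1 + eps*cos^2(phi0))*r0:
# 1/r^2 ~ (1 - 2*eps*cos^2(phi0))/r0^2 to first order in the strain eps.
phi0 = np.arcsin(0.788 / 2.415)   # estimated buckling angle (~19 degrees)
eps = 1e-4                        # a small test strain
exact = 1.0 / (1.0 + eps * np.cos(phi0) ** 2) ** 2
linear = 1.0 - 2.0 * eps * np.cos(phi0) ** 2
assert np.isclose(exact, linear, rtol=1e-7)   # agreement to first order
```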
One can then use the change of the buckling angle and the Slater-Koster parameters to obtain the modified hopping parameters as listed in the last
column of Table~\ref{table1}, where $t_{\alpha\beta}^0$ represents the unstrained hopping parameters. For
instance, the new hopping parameter $t_{sp_{x}}$ can be approximated by
\begin{align}
t_{sp_x}&=t^0_{sp_x}+\left(\frac{\partial t_{sp_x}}{\partial r}\right)_{r_0}\Delta r
+\left(\frac{\partial t_{sp_x}}{\partial \phi}\right)_{\phi_0}\Delta\phi\nonumber\\
&=t^0_{sp_x}-2\cos\theta\cos\phi_0V^0_{sp\sigma}
\frac{\Delta r}{r_0}-\cos\theta\sin\phi_0V^0_{sp\sigma}\Delta\phi.
\end{align}
Substituting ${\Delta r}/{r_0}=\epsilon\cos^2\phi_0$ and $\Delta\phi=-\eta\epsilon$ into
the above equation gives
\begin{equation}
t_{sp_x}=t^0_{sp_x}[1-\epsilon(2\cos^2\phi_0-\eta\tan\phi_0)].
\end{equation}
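As a hedged illustration, the strained $t_{sp_x}$ of the last equation can be written as a one-line function; \texttt{strained\_t\_spx} is an illustrative name rather than code from the original work, $\eta=-30$ is the fitted value quoted above, and $\phi_0\approx 19^{\circ}$ is again our estimate from the quoted geometry:

```python
import numpy as np

# Strained hopping t_spx = t0*[1 - eps*(2*cos^2(phi0) - eta*tan(phi0))],
# i.e. the expression derived in the text (Table 1, last column).
def strained_t_spx(t0, eps, phi0, eta=-30.0):
    return t0 * (1.0 - eps * (2.0 * np.cos(phi0) ** 2 - eta * np.tan(phi0)))

phi0 = np.arcsin(0.788 / 2.415)   # estimated buckling angle of GeCH3
# eps = 0 must recover the unstrained hopping parameter.
assert np.isclose(strained_t_spx(2.0, 0.0, phi0), 2.0)
```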
In a similar way, one can obtain the other modified hopping parameters in order to study the evolution of the energy spectrum of
monolayer GeCH$_3$ as a function of applied biaxial tensile strain.
Straightforward substitution of the new hopping parameters in
Hamiltonian, Eq.~(\ref{eqn:TBhamiltonian}), gives the Hamiltonian for the strained system. The calculated TB energy spectrum in the
presence of biaxial tensile strain with strengths of $4\%$, $8\%$, and $12\%$ are shown in
Figs.~\ref{nosoc}(b), (c) and (d), which are in excellent agreement with the DFT results \cite{ma2014strain,ma2014sup}.
\begin{figure}
\centering
\includegraphics[width=.47\textwidth]{Gap-eps-converted-to.pdf}
\caption{Comparison of the variation of energy band gap vs. biaxial strain between TB model and HSE
calculations~\cite{ma2014strain}.}
\label{Gap}
\end{figure}
We show in Fig.~\ref{Gap} the dependence of the band gap of GeCH$_3$ as a function of biaxial tensile strain. Notice the
good agreement between the DFT and TB approaches, demonstrating the validity of our proposed TB model.
\subsection{\label{SOC_Hamiltonian}Spin-Orbit coupling}
Spin-orbit interaction is a relativistic correction to the Schr\"{o}dinger equation. It can significantly affect the electronic
properties of systems that consist of heavier elements. In such systems, the major part of the SOC originates from the
orbital motion of electrons close to the atomic nuclei. In the Slater-Koster approximation, one can assume an effective spherical
atomic potential $V_i({\bm r})$, at least in the region near the nucleus. Therefore, one can
substitute $\nabla V_i({\bm r})=({dV_i}/{dr}){\bm r}/{r}$ and ${\bm s}={\hbar}/{2}{\bm \sigma}$
into the general form for the SOC term~\cite{li2015giant,zhao2015driving}
\begin{equation}
H_{SOC}=-\frac{\hbar}{4m_0^2c^2}(\nabla V\times {\bm p})\cdot{\pmb \sigma},
\end{equation}
to obtain the SOC in the form of
\begin{equation}
H_{SOC}=\lambda(r){\bm L}\cdot\pmb{\sigma},
\end{equation}
where $\lambda(r)={1}/{2m_0^2c^2r}(dV/dr)$ is a radial function whose value depends on the
type of
atomic species. In the above equations, $\hbar$, $m_0$, $c$, and ${\bm p}$ are the Planck constant, the free electron mass, the speed of light, and the
momentum, respectively; ${\pmb \sigma}$, ${\bm L}$, and ${\bm s}$ represent the Pauli matrices, the angular momentum operator, and the
electron spin operator, respectively.
Using the well known ladder operators $ L_\pm$ and $S_\pm$, one can obtain the matrix representation of the SOC
Hamiltonian in the basis set of $|s_{1},p_{x1},p_{y1},s_{2},p_{x2},p_{y2}\rangle \otimes |\uparrow,\downarrow\rangle$
for monolayer GeCH$_3$ with matrix elements
\begin{equation}
\langle\alpha_i|H_{SOC}|\beta_i\rangle=\lambda_i\langle{\bm L}\cdot{\pmb \sigma}\rangle_{\alpha\beta},
\label{hsoc2}
\end{equation}
where $\alpha_i$ and $\beta_i$ represent the atomic orbitals of the $i$-th atom. Note that since the two basis atoms in the unit
cell of monolayer GeCH$_3$ are identical, we have $\lambda_1=\lambda_2=\lambda$.
Thus, the representation of the SOC Hamiltonian in the above mentioned basis is
\begin{equation}
H_{SOC}=
\begin{bmatrix}
H_{SOC}^{\uparrow\uparrow}&H_{SOC}^{\uparrow\downarrow}\\H_{SOC}^{\downarrow\uparrow}&H_{SOC}^{\downarrow\downarrow}
\end{bmatrix},
\end{equation}
whose elements are $6\times6$ matrices with $H_{SOC}^{\uparrow\downarrow}=H_{SOC}^{\downarrow\uparrow}={\bm 0}$, and
\begin{equation}
H_{SOC}^{\uparrow\uparrow}=\lambda
\begin{bmatrix}
0&0&0\\0&0&-i\sigma_z\\0&i\sigma_z&0
\end{bmatrix}
,~~H_{SOC}^{\downarrow\downarrow}=\lambda
\begin{bmatrix}
0&0&0\\0&0&i\sigma_z\\0&-i\sigma_z&0
\end{bmatrix}.
\end{equation}
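A minimal Python sketch can assemble this on-site SOC block and check its basic properties; here we interpret the $\sigma_z$ entries as $\pm 1$ within each fixed-spin block (an assumption about the intended notation), and verify Hermiticity together with the resulting eigenvalues $\{0,0,\pm\lambda,\pm\lambda\}$:

```python
import numpy as np

lam = 0.096  # SOC strength fitted in the text (eV)

# On-site SOC block for one Ge atom in the basis (s, px, py) x (up, down);
# the off-diagonal spin blocks vanish and the down block has opposite sign.
up = lam * np.array([[0, 0, 0], [0, 0, -1j], [0, 1j, 0]])
dn = -up
zero = np.zeros((3, 3))
H_soc = np.block([[up, zero], [zero, dn]])

assert np.allclose(H_soc, H_soc.conj().T)          # Hermitian
ev = np.sort(np.linalg.eigvalsh(H_soc))
# Spectrum {0, 0, +lam, -lam, +lam, -lam}, i.e. splitting of the p doublet.
assert np.allclose(ev, [-lam, -lam, 0, 0, lam, lam])
```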
\begin{figure}[ht]
\centering
\vspace{20pt}
\includegraphics[width=0.3\textwidth]{LDA-eps-converted-to.pdf}
\caption{The multi-orbital TB spectrum of GeCH$_3$ monolayer with SOC. Symbols represent the LDA data taken from~\cite{ma2014strain}.}
\label{LDA}
\end{figure}
The value of the strength $\lambda$ of the SOC should be chosen either in agreement with experiment or by fitting
the TB bands to the ab-initio results near some $k$ points such that it gives the correct band gap.
In order to evaluate the strength of the SOC for Ge atoms in monolayer GeCH$_3$, we fitted the
spectrum obtained from our multi-orbital TB model
to the one from density functional calculations within the local density approximation (LDA) for the
exchange correlation in Ref.~\cite{ma2014strain}.
As shown in Fig.~\ref{LDA}, there is excellent agreement between the TB spectrum and the DFT results for the
SOC strength $\lambda=0.096$~eV. We adopt this SOC strength in the following calculations of the TB spectrum when
we use the hopping parameters from Table~\ref{table2}.
The TB energy spectra
of monolayer GeCH$_3$ are shown
in Figs.~\ref{with_SOC}(a) and (b) for 0\% and 12.5\% strain, respectively.
\begin{figure}[ht]
\centering
\vspace{20pt}
\includegraphics[width=0.45\textwidth]{SOCGeCH3-eps-converted-to.pdf}
\caption{The TB band structure of GeCH$_3$ isolated monolayer with SOC in the presence of (a) 0\%, and (b) 12.5\% biaxial tensile strain.
(c) Zoomed-in view of (b).}
\label{with_SOC}
\end{figure}
\begin{figure}[ht]
\centering
\vspace{20pt}
\includegraphics[width=0.45\textwidth]{Gap_gech3-eps-converted-to.pdf}
\caption{The calculated band gaps of monolayer GeCH$_3$ as a function of biaxial strain at the $\Gamma$ point, $E_\Gamma$,
and the global gap $E_g$. The two distinct colored regions show the different trivial and band inverted phases.}
\label{gap_soc}
\end{figure}
Note that due to the presence of time reversal and inversion symmetry, each band in the energy spectrum of
monolayer GeCH$_3$ is doubly degenerate.
As shown in Fig.~\ref{gap_soc},
by applying biaxial tensile strain, the global band gap located at $\Gamma$ gradually
decreases and eventually a band inversion occurs at 11.6\% strain.
By further increasing strain,
the induced band gap due to SOC, (see Figs.~\ref{with_SOC}(b), and (c)) becomes indirect, and at a
reasonable strength of 12.8\% reaches the value of 115 meV.
One can use the TB spectrum of Figs.~\ref{with_SOC}, to calculate the effective masses of electrons
and holes near the conduction band minimum (CBM) and the valence band maximum (VBM). The results, in units of the
free electron mass $m_0$, are listed in Table~\ref{table3}
for 0\%, 6\%, 9\%, and 12.5\% biaxial tensile strain.
Note that the electron and hole effective masses near the CBM and VBM
along the $\Gamma$-K and $\Gamma$-M directions are the same.
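The effective masses quoted below follow from a parabolic fit $E(k)\simeq E_0+\hbar^2k^2/2m^*$ near the band edge. A generic Python sketch of such a fit is given here; it is validated on a free-electron band rather than on the actual GeCH$_3$ spectrum, and the function name is our own:

```python
import numpy as np

HBAR = 1.054571817e-34   # J s
M0 = 9.1093837015e-31    # kg
EV = 1.602176634e-19     # J
ANG = 1e-10              # m

def effective_mass(E, k):
    """m*/m0 from a parabolic fit E(k) ~ E0 + hbar^2 k^2 / (2 m*),
    with E in eV and k in 1/Angstrom sampled around the band edge."""
    curv = np.polyfit(k, E, 2)[0]            # quadratic coefficient, eV*Ang^2
    return HBAR**2 / (2 * curv * EV * ANG**2) / M0

# Check on a free-electron band, for which m*/m0 = 1 by construction.
k = np.linspace(-0.1, 0.1, 21)
E = HBAR**2 * (k / ANG)**2 / (2 * M0) / EV
assert np.isclose(effective_mass(E, k), 1.0)
```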
\begin{table}
\caption{The effective mass of electron
and hole near the CBM and VBM in unit of
free electron mass $m_0$. The electron and hole effective masses along the two directions of $\Gamma$-K and $\Gamma$-M are
the same.}
\label{table3}
\begin{ruledtabular}
\begin{tabular}{ccc}
\textrm{ ~~~~~~~~~~~~~~Strain ($\epsilon$)\textbackslash Effective mass ($m/m_0$)}&
\textrm{Electron}&
\textrm{Hole}\\
\colrule
~0\% &0.135 &0.157\\
~~~~6\% &0.074 &0.105\\
~~~~9\% &0.045 &0.058\\
~~~~12.5\% &0.033 &0.316\\
\end{tabular}
\end{ruledtabular}
\end{table}
Another way to test the validity of our TB model, is its ability to predict a possible topological phase transition in the
electronic properties of monolayer GeCH$_3$. In the next section we will study the strain-induced topological phase in
monolayer GeCH$_3$ using our TB model.
\section{\label{Numericalresults} topological phase transition of
monolayer G\lowercase{e}CH$_3$ under strain}
In the previous section, using the TB model including
SOC, we showed that monolayer GeCH$_3$ is a normal insulator (NI). We also showed
that one can manipulate its electronic properties by applying in-plane biaxial strain.
It is clear from Eq.~(\ref{hsoc2}) that SOC preserves the TRS.
Thus, the monolayer GeCH$_3$ can exhibit a QSH phase when its energy
spectrum is manipulated by an external parameter that does not break TRS.
The $\mathbb{Z}_2$ classification is a well known approach to distinguish between
the two different NI and TI phases~\cite{kane,hasan}.
In the following, we briefly introduce the lattice version of the Fu-Kane formula~\cite{fu1},
to calculate the $\mathbb{Z}_2$ invariant. Then, we show numerically that by applying biaxial tensile
strain, a change in the bulk topology of monolayer GeCH$_3$ occurs.
\subsection{\label{sec:z2}Calculation of the $\mathbb{Z}_2$ invariant }
The Fu-Kane formula~\cite{fu1}, for the calculation of the $\mathbb{Z}_2$ invariant is given by
\begin{equation}
\text{\footnotesize $\mathbb{Z}_2=\frac{1}{2\pi i}\left [\oint_{{\partial\textrm{HBZ}}}d\bm{k}\bm{\cdot\mathcal{A}}(\bm{k})-
\int_{{\textrm{HBZ}}}d^2k\, \mathcal{F}(\bm{k})\right]\textrm {(mod 2)}$},
\label{z2-1}
\end{equation}
where the integral is taken over half the Brillouin zone as denoted by $\small{\textrm{HBZ}}$.
Here, the Berry gauge potential $\bm{\mathcal{A}}(\bm{k})$, and the Berry field
strength $\mathcal{F}(\bm{k})$ are
given by $\sum_n\langle u_n(\bm{k})|\nabla_{\bm k} u_n(\bm{k})\rangle$ and
$\nabla_{\bm k} \times \bm{\mathcal{A}}(\bm{k})\big|_z$, respectively, where $u_n(\bm{k})$ represents
the periodic part of the Bloch wave function with band index $n$, and the summation in
$\bm{\mathcal{A}}(\bm{k})$ runs over all occupied states.\\
Note that in this approach one has to apply a gauge-fixing procedure~\cite{fukui1}
to fulfill the TRS constraints and the periodicity of the
$k$ points which are related by a reciprocal lattice vector $\bm G$.
Moreover, due to the TRS and the inversion symmetry in monolayer GeCH$_3$, each band
is at least doubly degenerate. Therefore, one needs to generalize the definition of
$\bm{\mathcal{A}}$ and $\mathcal{F}$ to non-Abelian gauge field analogies~\cite{hatsugai2004}
constructed from the 2M dimensional ground state multiplet $|\psi(k)\rangle=(|u_1(k)\rangle,...,|u_{2M}(k)\rangle)$,
associated with the Hamiltonian through $\mathcal{H}(k)|u_n(k)\rangle= E_n(k)|u_n(k)\rangle$~\cite{fukui1,hatsugai2004}.\\
\begin{figure}[ht]
\centering
\vspace{20pt}
\includegraphics[width=0.48\textwidth]{Bz.jpg}
\caption{Conversion of the equivalent (a) rhombus
shape of the honeycomb Brillouin zone in $k$ space into a (b) unit square in $q$ space.
}
\label{Bz}
\end{figure}
\begin{figure}[ht]
\centering
\vspace{20pt}
\includegraphics[width=0.48\textwidth]{z2-eps-converted-to.pdf}
\caption{Calculation of $\mathbb{Z}_2$ invariant for monolayer GeCH$_3$ in the
presence of biaxial strain. The two NI and TI phases are represented by regions of different colors and delimited
by a black line at the critical value of $11.6\%$.
}
\label{z2}
\end{figure}
In order to compute the $\mathbb{Z}_2$ invariant, a lattice version of Eq.~(\ref{z2-1}) is more favorable for
numerical calculations. To this end, one can simply convert the equivalent rhombus
shape of the honeycomb Brillouin zone in $k$ space as shown in Figs.~\ref{Bz}(a) and (b),
into a unit square in $q$ space by the following change of variables
\begin{equation}
k_x=\frac{2\pi}{a}(q_x-q_y), ~~~~ k_y=\frac{2\pi}{\sqrt{3}a}(q_x+q_y).
\end{equation}
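One can check directly that this map sends the corners $q=(1,0)$ and $q=(0,1)$ of the unit square onto the reciprocal vectors $\bm b_1$ and $\bm b_2$; a minimal Python check (with the lattice constant value quoted earlier) reads:

```python
import numpy as np

a = 3.954  # lattice constant (angstrom)

# Change of variables k(q) given in the text.
def k_of_q(qx, qy):
    return np.array([2 * np.pi / a * (qx - qy),
                     2 * np.pi / (np.sqrt(3) * a) * (qx + qy)])

b1 = 2 * np.pi / a * np.array([1.0, np.sqrt(3) / 3])
b2 = 2 * np.pi / a * np.array([-1.0, np.sqrt(3) / 3])

# The unit square in q space maps exactly onto the b1, b2 reciprocal cell.
assert np.allclose(k_of_q(1, 0), b1) and np.allclose(k_of_q(0, 1), b2)
```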
This allows us to use the simpler lattice version of Eq.~(\ref{z2-1})~\cite{fukui1}
\begin{equation}
\small\mathbb{Z}_2=\frac{1}{2 \pi i}\left[ \sum_{{\textit q_l}\in \small{\partial\textrm{HBZ}}}A_x({\textit q_l}) -
\sum_{{\textit q_l}\in \small\textrm{HBZ}} F_{xy} ({\textit q_l})\right]\textrm {(mod 2)},
\label{z2-2}
\end{equation}
where the lattice sites of the Brillouin zone are labeled by $q_l$. Thus the above mentioned gauge
fixing procedure and TRS constraints are applied on the equivalent $q$ points. Using the
so-called unimodular link variable~\cite{fukui1}
\begin{equation}
U_{\hat\mu}({\textit q_l})=\frac{\textrm {det} \psi^\dagger(\textit q_l)\psi(\textit q_l+ \hat{\mu})}{|\textrm {det} \psi^\dagger(\textit q_l)\psi(\textit q_l+ \hat{\mu})|},
\end{equation}
where $\hat{\mu}$ denotes a unit vector in the $q_x$-$q_y$ plane, one can define the Berry potential and Berry field
in Eq.~(\ref{z2-2}) as
\begin{eqnarray}
A_{x}({\textit q_l})&=&\ln U_x({\textit q_l}), \\
\small F_{xy}(\textit q_l)&=&\ln\frac{U_x(\textit q_l)U_y(\textit q_l+\hat{x})}{ U_y(\textit q_l)U_x(\textit q_l+\hat{y})}.
\label{FFF}
\end{eqnarray}
Note that both the Berry potential and the Berry field strength are defined within the
branch of $A_{x}({\textit q_l})/i\in(-\pi,\pi)$ and $F_{xy}(\textit q_l)/i\in(-\pi,\pi)$.\\
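To illustrate the link-variable construction in a self-contained way, the following Python sketch applies the same $U_\mu$ and $F_{xy}$ definitions to the simpler task of computing a lattice Chern number for a toy two-band model. The Qi-Wu-Zhang-type Hamiltonian below is our own illustrative choice, not the GeCH$_3$ Hamiltonian, and the full $\mathbb{Z}_2$ computation additionally requires the gauge fixing over half the Brillouin zone described above:

```python
import numpy as np

def chern_fhs(m, N=24):
    """Lattice Chern number of the lower band of
    h(k) = sin(kx) sx + sin(ky) sy + (m + cos kx + cos ky) sz,
    via the same unimodular link variables U_mu as in the text."""
    sx = np.array([[0, 1], [1, 0]], complex)
    sy = np.array([[0, -1j], [1j, 0]], complex)
    sz = np.array([[1, 0], [0, -1]], complex)
    ks = np.linspace(0, 2 * np.pi, N, endpoint=False)
    u = np.empty((N, N, 2), complex)          # lower-band eigenvectors
    for i, kx in enumerate(ks):
        for j, ky in enumerate(ks):
            h = (np.sin(kx) * sx + np.sin(ky) * sy
                 + (m + np.cos(kx) + np.cos(ky)) * sz)
            u[i, j] = np.linalg.eigh(h)[1][:, 0]

    def link(i, j, di, dj):                   # U_mu(q_l), unimodular
        ov = np.vdot(u[i, j], u[(i + di) % N, (j + dj) % N])
        return ov / abs(ov)

    # F_xy(q_l) on the principal branch, summed over the Brillouin zone.
    F = 0.0
    for i in range(N):
        for j in range(N):
            F += np.angle(link(i, j, 1, 0) * link((i + 1) % N, j, 0, 1)
                          / (link(i, j, 0, 1) * link(i, (j + 1) % N, 1, 0)))
    return round(F / (2 * np.pi))

# Topological (|C| = 1) and trivial (C = 0) regimes of the toy model.
assert abs(chern_fhs(1.0)) == 1 and chern_fhs(3.0) == 0
```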
The numerical results of the $\mathbb{Z}_2$ invariant are shown in Fig.~\ref{z2}.
As seen, for $\epsilon<11.6$\%, monolayer GeCH$_3$ is a NI and at the critical value of
$\epsilon=11.6$\%, the $\mathbb{Z}_2$ invariant jumps from 0 to 1, indicating a strain-induced TI phase transition in
the electronic properties of the system. The topologically
protected global bulk gap for a strain of 12.8\% is 115~meV, which is much larger than the thermal
energy at room temperature and therefore the monolayer GeCH$_3$ is an excellent candidate for strain related applications.\\
In the next subsection we examine the formation of topologically protected edge states
in a typical nanoribbon with zigzag edges when the system is driven into the TI phase by applying
biaxial tensile strain.
\subsection{\label{ribbons}Electronic properties of GeCH$_3$ nanoribbons under strain}
The appearance of helical gapless states at the edges of a 2D topological insulator is a crucial consequence of its nontrivial
bulk topology. In the previous section, we showed that a jump from 0 to 1 in the $\mathbb{Z}_2$ invariant takes place for biaxial strain
$\epsilon>11.6\%$, demonstrating a topological phase transition in the electronic properties of monolayer GeCH$_3$.
As an example, in this subsection, we study the 1D energy bands of GeCH$_3$ nanoribbons with zigzag edges in the presence of biaxial
tensile strain. Our TB model predicts the appearance of topologically protected edge states with increasing strain when the $\mathbb{Z}_2$
invariant becomes 1. We denote the width of the zigzag GeCH$_3$ nanoribbon (z-GeCH$_3$-NR) by $N$, which is the number of zigzag chains
across the ribbon width. To calculate the energy spectrum of a z-GeCH$_3$-NR with width $N$, we construct its
supercell Hamiltonian ($H^{SC}$) in the basis of
$|\psi\rangle\equiv|s_{H_0},s_{1},p_{x1},p_{y1},...,s_{2N},p_{x2N},p_{y2N},s_{H_1}\rangle \otimes |\uparrow,\downarrow\rangle$
where $s_i$, $p_{xi}$, and $p_{yi}$ represent the $s$, $p_{x}$, and $p_{y}$ orbitals of Ge atoms along the nanoribbon width.
\begin{figure}[ht]
\centering
\vspace{20pt}
\includegraphics[width=0.48\textwidth]{band_zig-eps-converted-to.pdf}
\caption{The 1D energy bands of z-GeCH$_3$-NR for $N=40$ in the presence of
(a) 9\%, (b) 11\%, and (c) 13\% biaxial tensile strain.
}
\label{band_zig}
\end{figure}
$|s_{H_0}\rangle$ and $|s_{H_1}\rangle$ represent the atomic orbitals of the H atoms that are introduced to passivate the Ge
atoms at the two edges. We assume that the width of the nanoribbon is large enough that the interaction between the
two edges is negligible, and one can safely neglect the tiny change of the hopping parameters due to the passivation procedure.
Therefore, one can write the matrix elements of the nanoribbon Hamiltonian $H^{SC}=H_0^{SC}+H_{SOC}^{SC}$ as
\begin{align}
\label{14}
M_{i\alpha,j\beta}^{\sigma\sigma^\prime}&=\langle\psi|H^{SC}|\psi\rangle_{i\alpha,j\beta}^{\sigma\sigma^\prime}\nonumber\\
&=E_{i\alpha}\delta_{ij}\delta_{\alpha\beta}\delta_{\sigma\sigma^\prime}\nonumber\\&
+\delta_{\sigma\sigma^\prime}\sum_nt_{i\alpha,j\beta}
e^{i\bm k\cdot \bm R_{0n}}+\lambda_i\delta_{ij}\langle\bm L\cdot\pmb\sigma\rangle_{\alpha\beta}^{\sigma\sigma^\prime},
\end{align}
where $i,j$ are the basis site indices in a supercell; $\alpha,\beta$ denote the atomic orbitals; $\sigma, \sigma^\prime$ denote the
spin degrees of freedom; and ${\bm R_{0n}}$ is the translational vector of the $n$-th supercell.
The corresponding onsite energy of Ge atoms and the hopping parameters pertinent to the Ge-Ge bonds are substituted from
Tables~\ref{table1} and~\ref{table2}. Moreover, one has to define the onsite energy $E_H^s$, and the hopping parameters $t_{H,Ge}^{ss}$ and
$t_{H,Ge}^{sp_{y}}$ in the above
equation corresponding to the matrix elements related to the H-Ge bond. We adopt from the
fitting procedure the numerical values $E_H^s=-2.54$ eV,
$t_{H,Ge}^{ss}=V_{H,Ge}^{ss}=-4.54$ eV, and $t_{H,Ge}^{sp_y}=\pm V_{H,Ge}^{sp}$ with $V_{H,Ge}^{sp}=0.5$ eV where +(-) denotes the lower
(upper) H-Ge edge bonds.
One can diagonalize the corresponding TB Hamiltonian, Eq.~(\ref{14}), in order to obtain the energy spectrum.
By applying biaxial tensile strain, we find that the band gap of the nanoribbon
gradually decreases and that metallic edge states protected by TRS eventually
appear at the strain value where a band inversion takes place in the TB energy spectrum of bulk monolayer GeCH$_3$.
The numerically calculated
energy bands of z-GeCH$_3$-NR with $N=40$ in the presence of 9\%, 11\%, and 13\% biaxial tensile strain
are shown in Figs.~\ref{band_zig}(a), (b), and (c), respectively.
This demonstrates a topological phase transition from the NI to the QSH phase in the electronic properties
of monolayer GeCH$_3$.
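The supercell/Bloch construction of Eq.~(\ref{14}) can be illustrated with a toy model: one orbital per site, no spin, and a single hypothetical hopping $t$ (not the fitted GeCH$_3$ parameters). The bond that leaves the supercell acquires the Bloch factor $e^{i\bm k\cdot\bm R_{0n}}$, and the Hamiltonian is diagonalized at each $k$:

```python
import numpy as np

def supercell_bands(n_sites=40, t=-1.0, e0=0.0, n_k=101):
    """Bands of a toy 1D supercell: one orbital per site, spinless,
    nearest-neighbour hopping t; the bond that wraps to the next
    supercell acquires the Bloch phase exp(+/- i k)."""
    ks = np.linspace(-np.pi, np.pi, n_k)      # k in units of 1/a
    bands = np.empty((n_k, n_sites))
    for i, k in enumerate(ks):
        h = np.diag(np.full(n_sites, e0)).astype(complex)
        for j in range(n_sites - 1):          # intra-cell bonds
            h[j, j + 1] = h[j + 1, j] = t
        h[-1, 0] += t * np.exp(1j * k)        # inter-cell (Bloch) bond
        h[0, -1] += t * np.exp(-1j * k)
        bands[i] = np.linalg.eigvalsh(h)      # Hermitian eigenvalues
    return ks, bands

ks, bands = supercell_bands()                 # bands.shape == (101, 40)
```

The full calculation replaces the single orbital by the $(s,p_x,p_y)\otimes(\uparrow,\downarrow)$ basis, the fitted hoppings of Tables~\ref{table1} and \ref{table2}, and the on-site SOC term.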
\section{\label{Conclusion}Conclusions}
To conclude, we have proposed an effective TB model with and without SOC for monolayer GeCH$_3$
including $s$, $p_x$, and $p_y$ orbitals per atomic site.
Our model reproduces the low-energy spectrum of monolayer GeCH$_3$ in
excellent agreement with ab-initio results. It also predicts accurately the evolution of the band gap
in the presence of biaxial tensile strain. By including the SOC, this band gap manipulation
leads to a band inversion in the electronic properties
of monolayer GeCH$_3$, giving rise to a topological phase transition from NI to QSH.
Our model predicts that this phase transition takes place for 11.6\% biaxial tensile strain
as verified by the $\mathbb{Z}_2$ formalism.
The topologically protected global bulk gap at a strain of 12.8\% is 115~meV,
which is much larger than the thermal
energy at room temperature and makes monolayer GeCH$_3$ a promising candidate for future
applications. We also showed the emergence of topologically protected edge states in
a typical z-GeCH$_3$-NR in the presence of biaxial strain larger than 11.6\%. This is an additional confirmation of the existence of the TI phase
in the electronic properties of monolayer GeCH$_3$.
\nocite{*}
\section{Introduction}
\pagestyle{fancy}
The process that generates the solar magnetic field is called
the \textit{\hbindex{dynamo}}. It is widely believed to depend on
the \textit{tachocline}, the shearing layer between the Sun's radiative inner
core and its convective outer envelope \citeeg{c14}. As stellar masses drop
below \apx0.35~\msun\ (spectral types \apx M3.5 and later), the tachocline
disappears \citep{l58, cb00}, which made it challenging to explain how
mid-M~dwarf stars can in fact generate strong magnetic fields \citep{sl85}.
The surprising magnetic properties of fully-convective M~dwarfs raised the
question of what dynamo action would be like in the coolest, lowest-mass
objects: the \textit{\hbindex{ultra-cool dwarfs}} (UCDs), stars and brown
dwarfs with spectral types M7 and later \citep{krl+99, mdb+99}. (The very
youngest and most massive brown dwarfs have spectral types \apx M7; the very
lowest-mass stars have spectral types \apx L4. Objects with spectral types
between these limits can be of either category.) But it was not until the CCD
revolution that it became possible to study UCDs systematically. The first
results suggested that magnetic activity faded out in the UCDs \citeeg{dss+96,
bm95}. The consensus model was that magnetic field generation became
ineffective in the lowest-mass objects due to the loss of the Sun-like
``shell'' dynamo and the transition to cool outer atmospheres, expected to be
largely neutral and therefore unable to couple the energy of their convective
motions into any fields generated below the surface \citep{mbs+02}. This
picture was muddied, however, by reports of flares from very late M~dwarfs in
the ultraviolet \citep[UV;][]{lwb+95}, \ha\ \citep{rkgl99, lkrf99}, and
X-ray \citep{fgs00}. These results suggested that UCDs could generate and
dissipate magnetic fields at least intermittently.
A breakthrough occurred in 2001 with the detection of an X-ray flare
from \obj{lp944}, a \textit{bona fide} brown dwarf \citep[M9.5;][]{rbmb00},
which was shortly followed by the detection of both bursting and quiescent
\hbindex{radio emission} from the same object by a team of summer students using the
NRAO \hbindex{Very Large Array} \citep{bbb+01}. Radio detections of UCDs were
thought to be impossible: scaling arguments had led to radio flux density
predictions of $\lesssim$0.1~\ujy, not achievable even with present-day
observatories. But \citet{bbb+01} detected \obj{lp944} at a flux
density \apx$10^4$ times brighter than these predictions, demonstrating that
UCD magnetism is --- at least sometimes --- vigorous and of a fundamentally
different nature than observed in higher-mass objects. The detection of
quiescent emission further demonstrated that not only can UCDs generate stable
magnetic fields, but that they can also sustainably source the
highly-energetic, nonthermal electrons needed to produce observable radio
emission.
Radio observations have since proved to be the best available probe of
magnetism in the UCD regime, with a major leap in capabilities coming with the
VLA upgrade project \citep{pcbw11}. In the rest of this chapter, we describe the
phenomenology of UCD radio emission, place it in a broader astrophysical
context, and deduce the implications of the data for the magnetic properties
of UCDs. We close by presenting the unique contribution that studies of UCD
magnetism can make to exoplanetary science and probable future directions of
research in the field.
\section{Phenomenology of the Radio Emission}
Radio observations of UCDs have revealed a complex phenomenology that can
broadly be divided into ``bursting'' and ``non-bursting'' components. The
non-bursting components can also be variable and evolve significantly over
long timescales (large compared to the rotation period \prot), so we prefer
this terminology rather than referring to such emission as ``quiescent.''
\autoref{t.radioucds} presents the list of all known \hbindex{radio-active UCDs} at the
time of writing.
\subsection{Bright, Polarized Bursts}
UCDs emit bright, \hbindex{circularly polarized} radio bursts at GHz
frequencies that have durations $\tau \apx 1$--$100$~minutes. In the initial
discovery by \citet{bbb+01}, the radio bursts of \obj{lp944} had a brightness
temperature $\tb \apx 10^{10}$~K and a fractional circular polarization
$\fc \apx 30$\%, consistent with synchrotron emission mechanisms \citep{d85}.
(Brightness temperature is a proxy for specific intensity often used by radio
astronomers: $I_\nu \equiv 2 \nu^2 k \tb / c^2$.) Subsequent observations
have, however, revealed cases with brightness temperatures and fractional
polarizations too large to be explained by synchrotron emission. In two early
examples, \citet{bp05} detected two bursts from \obj{denis1048} (M8), one with
flux density $S_\nu \apx 20$~mJy, $\tau \apx 5$~minutes, $\tb \apx 10^{13}$~K,
and $\fc \apx 100$\%. \citet{hbl+07} detected repeated bursts
from \obji{tvlm513} (M9) with $S_\nu \apx 3$~mJy, $\tau \apx 5$~minutes,
$\tb \gtrsim 10^{11}$~K, and $\fc \apx 100$\% with both left- and right-handed
helicities observed.
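To make the definition concrete, the sketch below inverts it to obtain $T_B$ from a measured flux density. The numbers (a 20~mJy burst at 5~GHz from 10~pc, with an assumed Jupiter-sized emitting region) are purely illustrative; smaller emitting regions yield proportionally larger $T_B$:

```python
import math

C_CGS = 2.998e10       # speed of light, cm/s
K_B = 1.381e-16        # Boltzmann constant, erg/K

def brightness_temp(S_mjy, nu_ghz, d_pc, R_cm):
    """T_b = I_nu c^2 / (2 nu^2 k_B), with I_nu = S_nu / Omega and
    Omega ~ pi (R/d)^2 the solid angle of the emitting disk."""
    S_nu = S_mjy * 1e-26           # mJy -> erg s^-1 cm^-2 Hz^-1
    d = d_pc * 3.086e18            # pc -> cm
    omega = math.pi * (R_cm / d) ** 2
    nu = nu_ghz * 1e9
    return S_nu * C_CGS**2 / (2 * nu**2 * K_B * omega)

# Hypothetical burst: 20 mJy at 5 GHz from 10 pc, Jupiter-radius source;
# gives T_b of order 1e11 K under these assumptions.
tb = brightness_temp(20.0, 5.0, 10.0, 7.1e9)
```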
\begin{table}[tbp]
\centering
\def$^*${$^*$}
\mytablefontsize
\begin{tabular}{lllll}
\multicolumn{1}{c}{Source name} &
\multicolumn{1}{c}{Other name} &
\multicolumn{1}{c}{SpT} &
\multicolumn{1}{c}{Var?} &
\multicolumn{1}{c}{First radio detection} \\
\hline
\obj{2m0952} & & M7$^*$ & & \citet{mbr12} \\
\obj{2m1314b} & \obj{n33370b} & M7 & Y & \citet{mbi+11} \\
\obj{2m1456} & & M7 & & \citet{bp05} \\
\obj{2m0027} & \obj{lp349} & M8$^*$ & N & \citet{pbol+07} \\
\obj{2m1501} & \obj{tvlm513} & M8.5 & Y & \citet{b02} \\
\obj{2m1835} & \obj{lsr1835} & M8.5 & Y & \citet{b06b} \\
\obj{2m1048} & DENIS~J$\ldots$ & M9 & Y & \citet{bp05} \\
\obj{2m0024} & \obj{bri0021} & M9.5 & Y & \citet{b02} \\
\obj{2m0339} & \obj{lp944} & M9.5 & Y & \citet{bbb+01} \\
\obj{2m0720} & & M9.5+T5 & Y & \citet{bmt+15} \\
\obj{2m0746b} & & L1.5 & Y & \citet{brpb+09} \\
\obj{2m1906} & WISE~J$\ldots$ & L1 & & \citet{gbb+13} \\
\obj{2m0523} & & L2.5 & & \citet{b06b} \\
\obj{2m0036} & & L3.5 & Y & \citet{b02} \\
\obj{2m1315} & & L3.5+T7 & & \citet{bmzb13} \\
\obj{2m0004} & & L5+L5 & & \citet{lmr+16} \\
\obj{2m0423} & SDSS~J$\ldots$ & L7.5 & Y & \citet{khp+16} \\
\obj{2m1043} & & L8 & Y & \citet{khp+16} \\
\obj{2m0607} & WISE~J$\ldots$ & L9 & & \citet{gwb+16} \\
\obj{2m0136} & SIMP~J$\ldots$ & T2.5 & Y & \citet{khp+16} \\
\obj{wise1122} & & T6 & Y & \citet{rw16} \\
\obj{2m1047} & & T6.5 & Y & \citet{rw12} \\
\obj{2m1237} & & T6.5 & Y & \citet{khp+16}
\end{tabular}
\caption{The twenty-three radio-detected UCDs as of mid-2017. ``SpT''
shows a spectral type from SIMBAD; UCD spectral typing is challenging and
subtle \citeeg{kgc+12}, but to conserve space we omit details and references.
Spectral types with asterisks ($^*$) are known to come from the blended
spectra of more than one object. ``Var?'' indicates whether the source has
been confirmed to have radio emission that varies on short ($\lesssim$1~hr)
time scales. This is the case for all well-studied UCDs
except \obj{lp349} \citep{opbh+09}.}
\label{t.radioucds}
\end{table}
\begin{figure}[tbp]
\centering
\includegraphics[width=0.8\linewidth]{images/brpb09-fig1}
\caption{\textit{[From \citet{brpb+09}. Reproduced by permission of the AAS.]}
Radio light curve of \obj{2m0746b} showing periodic, highly polarized, rapid,
bright bursts. The black and red points show the data averaged into 5- and
60-second bins, respectively. The negative Stokes~V values, $|V| \apx I$,
indicate \apx100\% left circular polarization in the bursts. The burst
spectra do not extend to the VLA's 8.46~GHz band (lower panels).}
\label{f.lightcurve0746}
\end{figure}
In many cases, these radio bursts have been observed to occur periodically,
and in all such cases where the rotation period \prot\ is measured through
independent means, the periodicity of the bursts
matches \prot. \autoref{f.lightcurve0746} shows a classic example of this
phenomenology from \citet{brpb+09}. In the objects with such measurements,
$2 \lesssim \prot \lesssim 4$~h, but there are likely significant selection
effects at play that make it difficult to infer the true distribution
of \prot\ of the radio-active UCDs. In objects with repeated observations, the
periodic bursts are sometimes present and sometimes
not \citep[e.g., \obj{lsr1835};][]{bbg+08, had+08}. \obj{tvlm513} is the
best-studied member of this class, with burst observations spanning years that
enable claims of extremely precise (millisecond) determinations of the
rotation period \citep{dam+10, hhb+13, wr14}.
These bursts have generally been detected at frequencies between 1 and
10~GHz. Once again, selection effects make it difficult to draw conclusions
about the fundamental character of the burst spectra given the observational
results: the vast majority of searches for UCD radio emission have been
conducted in the 1--10~GHz frequency window. This window is where the VLA's
sensitivity peaks, but it is challenging to quantify how important intrinsic
effects are as well (i.e., whether we observe bursts in this window because
there truly are more bursts to be seen in it). The spectral shapes of the bursts are not fully
understood. Both high- and low-frequency cutoffs have been observed in
different bursts \citep{lmg15, wbi+15}, but in no burst has there been
definitive evidence that the flux density peak has been identified. Later in
this chapter we will argue that the bursts are probably of moderate bandwidth,
$\Delta\nu / \nu \apx 1$.
The total energy contained in the bursts is not large, which is commonly the
case for radio processes. Using the properties of the bursts
from \obj{tvlm513} quoted above \citep{hbl+07}, the energy content of an
individual burst is \apx$10^{27}$~erg, assuming isotropic emission. For
\hbindex{coherent emission processes} the emission is unlikely to be isotropic, reducing the
energy budget further. The burst luminosities are typically
$\approx${}$10^{-6}$ of the bolometric (sub)stellar radiative output.
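The \apx$10^{27}$~erg estimate can be reproduced with a short sketch; the \apx10~pc distance and the $\Delta\nu \apx 5$~GHz effective bandwidth (i.e., $\Delta\nu/\nu \apx 1$) are assumptions adopted for illustration, not measured values:

```python
import math

def burst_energy_erg(S_mjy, d_pc, dnu_ghz, tau_s):
    """Isotropic burst energy E = 4 pi d^2 S_nu (Delta nu) tau, in erg."""
    S_nu = S_mjy * 1e-26          # mJy -> erg s^-1 cm^-2 Hz^-1
    d = d_pc * 3.086e18           # pc -> cm
    return 4.0 * math.pi * d**2 * S_nu * dnu_ghz * 1e9 * tau_s

# 3 mJy for ~5 min (TVLM 513-like burst), assumed 10 pc and 5 GHz bandwidth;
# gives ~5e26 erg, i.e. the ~1e27 erg scale quoted above.
E = burst_energy_erg(3.0, 10.0, 5.0, 300.0)
```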
\subsection{Non-bursting Emission}
UCDs also produce non-bursting radio emission that is generally steady over
the time scales of individual observations. Repeated observations of numerous
UCDs have revealed, however, that this emission often varies at the
order-of-magnitude level on longer (\apx week and above)
timescales \citeeg{adh+07, mbr12}. Several UCDs have been detected once in the
radio and not detected in deeper follow-up observations \citeeg{mbr12}. On the
other hand, archival detections show that the hyperactive M7
star \obji{n33370b} has sustained a broadly consistent level of radio emission
for at least a decade \citep{mbi+11}. \autoref{f.lightcurve33370b} shows that
this object, the most radio-bright UCD, nonetheless displays both periodic
(at \prot) and long-term variability in its radio emission.
\begin{figure}[tbp]
\centering
\includegraphics[width=\linewidth]{images/wbi15-fig9}
\caption{\textit{[From \citet{wbi+15}. Reproduced by permission of the AAS.]}
Radio light curve of \obj{n33370b} showing periodic variation and moderate
polarization in the non-bursting radio emission. In the upper panels, filled
and empty points show Stokes~I and V components, respectively. The lower
panels show the fractional circular polarization derived from these values.
The leftmost and center panel show two observations separated by 24~hr; the
rightmost panel shows observations made \apx1~yr later. Vertical black lines
indicate times that the dwarf's periodically-modulated optical emission
reaches maximum. Rapid, 100\% circular polarized radio bursts have been
excised from these data.}
\label{f.lightcurve33370b}
\end{figure}
Radio-detected UCDs typically have non-bursting spectral luminosities of
$\lnur \apx 10^{12}$--$10^{14}$~\speclum, usually about an order of magnitude
fainter than the peak observed burst luminosity when both phenomena have been
observed. Selection effects are important here, too: the lower bound of this
range corresponds to the sensitivity that is achieved in typical VLA
reconnaissance observations (\apx1~hr duration) of nearby (\apx10~pc) UCDs. The
deepest upper limit on a UCD is \apx$10^{11}$~\speclum, obtained in
observations of the nearby binary \obji{luh16ab} \citep{oms+15}. The brightest
UCD radio emitter, \obj{n33370b},
reaches \apx$10^{14.7}$~\speclum\ \citep{mbi+11, wcb14}.
The non-bursting emissions generally have low or moderate circular
polarization. Linear polarization has not been detected. As shown
in \autoref{f.lightcurve33370b}, $0 < \fc < 20$\% in the case
of \obj{n33370b}, with periodic variability at \prot\ indicating that the
apparent circular polarization depends on orientation. The recently discovered
radio-active T6.5 dwarf \obj{wise1122} presents a new, unusual case: unlike
the other UCDs, \obj{wise1122} produces highly polarized emission that is not
clearly confined to rapid bursts \citep{wgb17}. The only published
observations of this object are too brief, however, to allow a firm
interpretation.
The non-bursting spectra are broadband. They peak around 1--10~GHz and
generally have shallow spectral indices on both the low- and high-frequency
sides of the peak. Only a few UCDs have been observed at a wide range of radio
frequencies, however. \obj{tvlm513} has been detected at frequencies ranging
from 1.4~GHz all the way to 98~GHz; the latter detection was achieved with
\hbindex{ALMA} and represents the first demonstration that UCDs can be detected at
millimeter wavelengths \citep{wcs+15}. \obj{n33370b} has been detected from
1--40~GHz \citep{mbi+11, wbi+15} and has an extremely flat spectrum, with
significant circular polarization at all observed frequencies. \obj{denis1048}
has been detected from 5--18~GHz with a negative spectral index $\alpha =
-1.71 \pm 0.09$ \citep[$S_\nu \propto \nu^\alpha$;][]{rhhc11}. Searches for
emission from UCDs at frequencies below 1~GHz have thus far been
unsuccessful \citep{jol+11, bhn+16} although the famous low-mass flare
star \obji{uvcet} (M6) was recently detected at 154~MHz using
the \hbindex{Murchison Widefield Array} \citep{llk+17}.
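For reference, a two-point spectral index follows from $\alpha = \ln(S_2/S_1)/\ln(\nu_2/\nu_1)$. The sketch below uses hypothetical flux densities drawn from a $\nu^{-1.71}$ power law between 5 and 18~GHz (not the published measurements):

```python
import math

def spectral_index(S1, nu1, S2, nu2):
    """alpha in S_nu ~ nu^alpha from two flux-density measurements."""
    return math.log(S2 / S1) / math.log(nu2 / nu1)

# Hypothetical fluxes obeying S_nu ~ nu^-1.71 (arbitrary normalization)
S5 = 1.0
S18 = S5 * (18.0 / 5.0) ** (-1.71)
alpha = spectral_index(S5, 5.0, S18, 18.0)   # recovers -1.71
```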
\begin{figure}[tbp]
\centering
\includegraphics[width=0.9\linewidth]{images/had06-fig6}
\caption{\textit{[From \citet{had+06}. Reproduced by permission of the AAS.]}
Radio light curve of \obj{tvlm513} showing periodic behavior that is not
cleanly separable into burst and non-burst components. The data are phased
to a period of 2~hr and shown binned at 6, 7, and 8~minutes, with each
binned light curve being plotted twice. The observing frequency was
4.88~GHz.}
\label{f.lightcurve513}
\end{figure}
\subsection{Intermediate Cases}
It is not always possible to cleanly separate UCD radio emission into bursting
and non-bursting components. \autoref{f.lightcurve513} shows an example
from \obj{tvlm513} in which variability is observed with both circular
polarization helicities as well as null
polarization \citep{had+06}. \obj{2m0036} and \obj{n33370b} have shown
similarly ambiguous phenomenologies \citep{brr+05, had+08, wbi+15}.
\section{UCD Radio Emission in Context}
The previous section focused narrowly on the properties of the radio emission
detected from UCDs. In this section, we place this emission in a broader
astrophysical context.
\subsection{The Prevalence of Radio Activity in UCDs}
Volume-limited \hbindex{radio surveys} of UCDs achieve a detection rate of
approximately 10\% \citep{b06b, mbr12, ahd+13, lmr+16}. However, recent work
by \citet{khp+16} demonstrates that biased surveys can achieve a substantially
higher detection rate: in a sample of five late-L and T~dwarfs selected to
have prior detections of \ha\ emission or \hbindex{optical variability}, four
of the targets were detected. These findings are consistent because the \ha\
detection rate of L and T~dwarfs is also about 10\% on average, with a
noticeably higher detection rate for objects warmer than \apx
L5 \citep{phk+16}.
This ``headline number'' comes with three important caveats. First, it derives
from an observer-dependent binary classification (``did the object's apparent
radio flux density have sufficiently high S/N?'') rather than a fundamental
physical measurement (``what is the object's radio spectral luminosity?'').
Second, the radio detectability of individual objects varies over time in ways
that are not well understood. Third, the reported number averages across a
wide variety of objects, while studies of FGKM dwarfs lead us to expect that
activity strength should depend strongly on fundamental (sub)stellar
parameters.
In particular, mass, rotation, and age are generally believed to be the
parameters most important for setting stellar activity levels \citeeg{b03, wdmh11}.
Correlations between fundamental parameters are pervasive, however, so it is
challenging to determine causation \citeeg{rsp14}. Below we consider how UCD
radio emission scales with some of these physical
parameters, \textit{considering only the radio-detected objects}. A proper
analysis of the entire radio-observed UCD sample that takes into account
nondetections has yet to be performed. Numerous UCDs have upper limits on
their radio emission that are inconsistent with the trends described.
\subsubsection{Mass, Spectral Type, and Effective Temperature}
Because brown dwarfs do not evolve to a stable main sequence and direct mass
measurements of astronomical objects are difficult to obtain, spectral type
(SpT) is widely used as a proxy for mass in UCD activity studies.
The magnetic activity levels of FGKM stars are often quantified with the ratio
of the stellar X-ray luminosity to bolometric luminosity \citep[\lxlb;
e.g.,][]{wdmh11}. This ratio decreases as SpT increases (that is, moves toward
cooler \teff) even though \lb\ on its own scales strongly with \teff, implying
a significant drop in the un-normalized \lx\ \citep{smf+06, bbf+10, wcb14}. It
is therefore striking that in UCDs, \lnur\ shows only a mild decrease with
SpT, with typical values of \apx$10^{13.5}$~\speclum\ at M7
and \apx$10^{12.5}$~\speclum\ in the T~dwarfs \citep[their Figure~8]{gwb+16}.
Over this range of SpTs \lnurlb\ increases from typical values
of \apx$10^{-17}$~Hz$^{-1}$ to \apx$10^{-16}$~Hz$^{-1}$.
\subsubsection{Rotation}
Magnetically active FGKM stars follow a ``\hbindex{rotation/activity
relation}'' in which the level of magnetic activity increases with increasing
rotation rate up until a ``\hbindex{saturation} point,'' past which further
increases in rotation rate do not affect the level of magnetic
activity \citeeg{wdmh11}. Here the level of magnetic activity is most commonly
quantified with \lxlb, but analogous trends are observed in most other
measurements that trace activity.
The nature of the radio rotation/activity relationship in UCDs is more
ambiguous. Plots of \lnurlb\ against rotation show a scaling relationship that
has no sign of a saturation point \citep{mbr12}. However, the fastest rotators
tend to be the objects with the latest spectral types, introducing a
covariance with the mass trend described above. \citet{cwb14} studied a subset
of UCDs at a relatively narrow range of SpT, M6.5--M9.5, and found weak
evidence that \lnurlb\ is in fact \textit{anti}-correlated with rotation rate.
\subsubsection{Age}
Sun-like stars become less active as they age since they shed angular momentum
through their winds \citep{s72}. This process becomes much less efficient as
stellar mass decreases, with the average activity lifetime of M~dwarfs going
from \apx1~Gyr for M0--M2 stars to \apx8~Gyr for M5--M7 stars \citep{whb+08}.
The data suggest that brown dwarfs rotate rapidly for their entire
lives \citep{bmm+14}.
The relation between age and radio activity has not been studied
systematically in UCDs. However, several noteworthy radio-active UCDs have
age constraints,
including \obj{lp944} \citep[\apx500~Myr;][]{tr98}, \obj{n33370b} \citep[\apx80~Myr;][]{dfr+16},
and \obji{simp0136} \citep[\apx200~Myr;][]{gfb+17}. Very young UCDs can also
have radio emission associated with young-star phenomena such as accretion,
jets, and disks \citeeg{rzp17}.
\begin{figure}[tbp]
\centering
\includegraphics[width=0.9\linewidth]{images/wcb14-fig1}
\caption{\textit{[From \citet{wcb14}. Reproduced by permission of the AAS.]}
Radio and X-ray emission for active stars and brown dwarfs. Gray points and
the red line show the ``\gbr'' defined for active stars and solar flares.
Green, red, and blue points show data for M3--M6, M6.5--M9.5, and $\ge$L0
dwarfs, respectively. While some UCDs may obey the \gbr, there is a
substantial population of outliers with radio emission that far exceeds what
would be predicted from their X-ray emission.}
\label{f.lrlx}
\end{figure}
\subsection{Multi-wavelength Correlations}
Stellar magnetism is associated with emission across the electromagnetic
spectrum, and different bands probe different physical regions or processes.
In Sun-like stars, \ha\ emission probes the chromosphere; UV, the transition
region; X-rays, hot dense coronal plasma; and radio/millimeter emission,
particle acceleration. Multi-wavelength observations, especially simultaneous
ones, therefore yield insights that cannot be obtained through single-band
studies.
The radio and X-ray luminosities of active stars are nearly linearly
correlated, a phenomenon known as the ``\hbindex{G\"udel-Benz
relation}'' \citep{gb93, bg94}. A single power law can fit observations
spanning ten orders of magnitude in \lnur, in systems ranging in size from
individual solar flares to active binaries (\autoref{f.lrlx}, gray points). As
shown above, however, the \gbr\ breaks down dramatically in the UCD
regime \citep[\autoref{f.lrlx}, colored points;][]{bbb+01, wcb14}.
Correlations between the luminosities of UCDs in radio and other bands
(e.g., \ha) have not yet been investigated in the literature.
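The scale of this breakdown can be sketched by comparing an observed UCD radio luminosity with the \gbr\ prediction. The normalization $L_X/L_{\nu,\mathrm{rad}} \apx 10^{15.5}$~Hz is the commonly quoted value for active stars, and the input $L_X$ below is a hypothetical UCD-like value:

```python
def gbr_radio_prediction(L_x_erg_s):
    """Radio spectral luminosity predicted by the Guedel-Benz relation,
    using the commonly quoted ratio L_X / L_nu,rad ~ 10^15.5 Hz."""
    return L_x_erg_s / 10**15.5

# Hypothetical UCD with L_X ~ 1e25 erg/s: predicted ~3e9 erg/s/Hz,
# several orders of magnitude below the observed 1e12-1e14 erg/s/Hz.
L_pred = gbr_radio_prediction(1e25)
```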
\hbindex{Simultaneous multi-wavelength observations} can illuminate the physics of
stellar and substellar flares, although extensive observations of flare stars
demonstrate that very few general statements can be made: individual events
may or may not be associated with emission in each of the bands that trace
magnetic activity, and the relative ordering and magnitude of the emission in
these bands is variable \citeeg{oba+04}. The UCD with the best simultaneous
multi-wavelength observational coverage is \obj{n33370b}, and the data show a
similar variety of phenomenologies \citep{wbi+15}. A detailed understanding of
the underlying physics remains elusive.
The evidence that optical/IR variability is a useful indicator of UCD radio
activity \citep{khp+16} suggests that the two are correlated. Only a handful
of UCDs have data sets that allow the optical and radio variability (either
bursts or non-bursting periodic variations) to be phased. While the radio and
optical maxima of \obj{tvlm513} are significantly out of phase \citep{wr14,
mzpop15}, there is a hint that the millimeter and optical maxima may occur at
the same phase \citep{wcs+15}. The radio and optical maxima of \obj{n33370b}
are also significantly out of phase. Intriguingly, long-term monitoring of
this object suggests that its non-bursting polarized radio emission remains in
phase with its optical variability but the total radio intensity does
not \citep{wbi+15}. \textit{Keck} spectroscopic monitoring of \obji{lsr1835}
revealed periodic variations in the optical emission that were argued to
originate in a high-altitude opaque blackbody with $T \apx
2200$~K \citep{hlc+15}.
Radio and \ha\ variability can also be correlated. In \obj{2m0746b}, the radio
bursts are 90\deg\ out of phase with the maxima of periodic changes in
the \ha\ equivalent width \citep{brpb+09}. Recent observations
of \obj{lsr1835} showed radio and \ha\ variations that were approximately in
phase \citep{hlc+15}, but other observations of the same object have shown
aperiodic \ha\ variability with no clear connection to the radio
emission \citep{bbg+08}. Simultaneous multi-wavelength monitoring
of \obj{tvlm513} revealed periodic \ha\ variability with no clear connection
to emission in other bands, although there is some evidence for radio bursts
at the times of the \ha\ minima \citep{bgg+08}.
\section{Interpretation of the Data}
We now turn to the astrophysical interpretation of the observations presented
in the previous sections.
\subsection{Auroral Radio Emission}
The periodic, bright, highly-polarized radio bursts observed in radio-active
UCDs are consistent with the \hbindex{auroral radio bursts} observed in Solar
System planets \citep{ztrr01}, which are generally agreed to originate from
the
\hbindex{electron cyclotron maser instability} \citep[ECMI;][]{wl79, t06}. The ECMI
converts the free energy of a magnetized plasma into electromagnetic waves
through resonant interactions between the waves and the particles' cyclotron
motion. The ECMI is relatively easy to trigger in physical systems involving
beams of mildly relativistic electrons that are accelerated along magnetic
field lines by the presence of a co-aligned electric field, if the ambient
medium is of sufficiently low density. This happens at the Earth when
energetic solar wind particles funnel down its magnetic field lines toward the
poles.
Observable ECMI emission is expected to be dominated by a narrow-band signal
at the electron cyclotron frequency of the local magnetic field,
\begin{equation}
\nu_\text{ce} = \frac{e B}{2 \pi m_e c}
\apx 2.8 \left(\frac{B}{1\text{ G}}\right)\text{ MHz}.
\end{equation}
Observations of ECMI bursts from UCDs therefore measure the strengths of their
magnetic fields. In practice, the ECMI occurs in regions that span a variety
of field strengths, so the observed emission has a moderate bandwidth,
$\Delta \nu / \nu \apx 1$ \citep{ztrr01}, with a cutoff at high frequencies
because the body's magnetic field reaches some peak value at its surface. ECMI
emission is beamed and likely refracts through the plasmasphere that evidently
envelops the radio-active UCDs, necessitating detailed simulations to predict
its observed properties \citeeg{kdy+12, ydk+12}.
\begin{sloppypar} At a typical VLA observing frequency of \apx5~GHz, the
inferred field strength is \apx2~kG, comparable to the strongest \hbindex{surface
field strengths} observed on active M~dwarfs \citep{kps+17}. A polarized pulse
at 10~GHz from the T6.5 dwarf \obj{2m1047} implies a field strength of at
least 3.6~kG \citep{wb15}, demonstrating that the fully convective dynamo can
generate strong fields even in extremely low-mass, cool (\apx$900$~K) objects.
\end{sloppypar}
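The field strengths quoted above follow directly from inverting the cyclotron relation, $B[\mathrm{kG}] \apx \nu[\mathrm{GHz}]/2.8$; a minimal sketch:

```python
def min_field_kG(nu_ghz):
    """Minimum field implied by fundamental ECMI emission at nu:
    nu_ce = 2.8 (B / 1 G) MHz  =>  B[kG] = nu[GHz] / 2.8."""
    return nu_ghz / 2.8

b_vla = min_field_kG(5.0)      # ~1.8 kG at a typical 5 GHz VLA band
b_2m1047 = min_field_kG(10.0)  # ~3.6 kG for the 10 GHz pulse of 2M 1047
```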
Observations of multiple consecutive ECMI bursts at the rotation period imply
the presence of a relatively stable ``\hbindex{electrodynamic engine}'' that
accelerates the beams of electrons responsible for the emission. Understanding
the nature of this engine is one of the great tasks in the field of UCD
magnetism. In the Solar System planets, the engine is often powered by the
solar wind \citeeg{d61, a69}, but this driver is not available for solivagant
UCDs. The only persuasive explanation is that the engine is ultimately powered
by the body's rotation \citep{s09}. This is largely the case for
\hbindex{Jupiter} \citep{mb07}, raising the exciting possibility that sophisticated
models developed in the context of the Solar System gas giants can be brought
to bear on the UCD case. For instance, studies of Jupiter inform a model in
which rotational energy is converted into nonthermal particle acceleration
through shear-induced currents at the corotation breakdown
radius \citep{nbc+12}.
Rapid rotation and the stable operation of the electrodynamic engine imply
that the magnetospheres of radio-active UCDs likely have dipole-dominated
topologies. This inference is supported by observations that probe the
topologies of the magnetic fields of cool stars. Studies using \hbindex{Zeeman
Doppler Imaging}
\citep[ZDI;][]{the.zdi} show that strong, axisymmetric, dipolar
fields emerge in the coolest M~dwarfs currently accessible to the
technique \citep{mdp+10} and that such a topology may be associated with
enhanced radio activity and variability \citep{kl17}.
Auroral electron beams do not only produce radio emission. First, auroral
processes are associated with emission across the electromagnetic spectrum,
with the highest luminosities concentrated at FUV and IR
wavelengths \citep{bg00}. However, these bands are far less favorable for
detection than the radio, such that the auroral fluxes in other
bands inferred for known active UCDs are beyond the capabilities of
present-day instruments. Second, the energetic auroral electrons eventually
precipitate into the upper atmosphere, where they can drive chemical processes
like \hbindex{haze production} \citeeg{wyf03}. \citet{hlc+15} interpreted
their simultaneous radio and optical observations in this framework, arguing
that an electron beam delivering $10^{24}\text{--}10^{26}$~erg~s$^{-1}$ of
kinetic power drove both the radio emission of \obj{lsr1835} and its optical
variability by creating a compact, high-altitude layer of H$^{-}$ upon
precipitation. This model also motivated the targeted survey
of \citet{khp+16}, under the assumption that auroral electron beams cause
detectable \ha\ and/or optical variability. A recent study, however, does not
find a correlation between \ha\ and high-amplitude optical variability in a
large sample of L/T dwarfs \citep{mmha17}.
\subsection{Gyrosynchrotron Radio Emission}
Non-bursting UCD radio emission bears the hallmarks
of \hbindex{gyrosynchrotron emission}, the same process that is believed to be
responsible for the bulk of the radio emission observed from active
stars \citep{d85, g02}. Gyrosynchrotron emission is produced by mildly
relativistic electrons spiraling in an ambient magnetic field, resulting in a
broadband spectrum with low to moderate circular polarization. Analysis of the
spectral properties can constrain the ambient magnetic field strength, the
total number and volume density of energetic particles, and their energy
distribution. It has been argued that the non-bursting UCD radio emission may
instead represent an unusual form of ECMI emission \citep{had+06, had+08}, but
several lines of evidence, most notably the millimeter-wavelength detection
of \obj{tvlm513}, discourage this interpretation \citep{wbi+15, wcs+15}.
The standard equations for gyrosynchrotron emission are derived for spatially
homogeneous field and particle properties \citep{d85}. A robust result of this
analysis is that the optically-thick (low-frequency) side of the spectrum
should have a spectral index $\alpha = 5/2$, much steeper than that observed
for sources like \obj{n33370b} and \obj{tvlm513} \citep{ohbr06, mbi+11}. While
the flat observed spectra can be reproduced qualitatively with more realistic
inhomogeneous models \citeeg{wkj89, tlu+04}, homogeneous models should still
give a sense of the average properties of the emitting region. Spectral fits
with both kinds of model suggest that the ambient field strength in the
synchrotron-emitting region is \apx$10$--$100$~G, typical of flare
stars \citep{b06b, ohbr06, mka+17}. Assuming standard energetic electron
densities and brightness temperatures, the typical source size is a
few \rstar\ \citep{b06b, wcb14}. The fact that \rstar\ evolves only slowly
with mass in the UCD regime may help explain why \lnur\ appears to settle at a
typical value of \apx$10^{13}$~\speclum\ in the radio-active UCDs, if the
other factors that set the synchrotron radio luminosity ($B$ and $n_e$) are
also mass-insensitive. Radio emission is energetically insignificant, so if
the particle acceleration process saturates in some way, this value of \lnur\
could be achieved in UCDs with widely varying bolometric and spindown
luminosities.
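To make the \apx$10^{13}$~\speclum\ scale concrete, the following minimal sketch (with hypothetical numbers, not measurements from this chapter) converts a flux density and distance into an isotropic spectral luminosity via $L_\nu = 4\pi d^2 S_\nu$:

```python
import math

PC_CM = 3.0857e18   # centimeters per parsec
UJY_CGS = 1e-29     # 1 microjansky in erg s^-1 cm^-2 Hz^-1

def spec_lum(flux_density_ujy, distance_pc):
    """Isotropic radio spectral luminosity L_nu = 4 pi d^2 S_nu (cgs units)."""
    d_cm = distance_pc * PC_CM
    return 4 * math.pi * d_cm**2 * flux_density_ujy * UJY_CGS

# A hypothetical ~100 uJy detection at 10 pc lands near the typical UCD value:
print(f"{spec_lum(100, 10):.1e} erg s^-1 Hz^-1")  # roughly 1.2e13
```

At a distance of \apx30~pc, the same luminosity corresponds to a flux density of only \apx10~$\mu$Jy, illustrating why sensitivity limits matter for these surveys.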
Analyses of the non-bursting radio emission of UCDs have not yet begun to
leverage the detailed models that have been developed for analogous systems.
\hbindex{Magnetic chemically peculiar (MCP) stars} have high masses but also possess
strong, dipole-dominated magnetospheres with persistent and periodically
variable radio emission. Numerical modeling of MCP particle populations can
constrain the magnetospheric structure in detail \citep{tlu+04, lto+17}. Even
more excitingly, Jupiter's \hbindex{radiation (van~Allen) belts} have been
studied in exquisite detail and produce centimeter-wavelength emission with
variability, spectra, and polarization that are highly reminiscent of the UCD
observations \citep{dp81, dpbg+03}. The application of Jovian models to UCD
data has the potential to yield a treasure trove of insight. For instance, the
presence of Jupiter's moons can be inferred from the spectrum of its radiation
belts alone \citep{scb08}, and observations made at different orientations of
the planet can be combined to reconstruct the full three-dimensional structure
of the belts \citep{sodl97}.
\subsection{The Emergence of ``Planet-like'' Magnetism in UCDs}
The data show that UCDs can generate strong magnetic fields and dissipate
their energy vigorously, but that they do so in processes that are
fundamentally different from the typical flare star phenomenology. This is
demonstrated most clearly by the substantial drop in UCD X-ray emission
(both \lx\ and \lxlb), violation of the \gbr, and the emergence of periodic,
bright, highly-polarized radio bursts.
This can be understood as the emergence of ``planet-like'' magnetism in UCDs,
characterized by processes that occur in large-scale, stable,
rotation-dominated magnetospheres \citep{s09}. These include the operation of
an electrodynamic engine that accelerates auroral electron beams and sustains
a population of mildly relativistic electrons. The lack of X-ray emission
indicates that coronal heating, if it can be said to occur at all, does not
happen in a Sun-like fashion. Historically, this has been explained as being
due to the outer atmosphere becoming electrically neutral and therefore unable
to couple the energy of convective motions into magnetic flux
tubes \citep{mbs+02}. More recent work has argued that UCD atmospheres should
in fact still couple to the magnetic field efficiently \citep{rhsr15},
suggesting that more detailed analysis is needed.
One of the fundamental questions about this picture is why only \apx10\% of
UCDs are detected in the radio. While early thinking focused on the possible
roles of inclination and rotation rate \citeeg{hhk+13}, current data suggest
that planet-like magnetism is only \textit{sometimes} present in UCDs and that
the presence or absence of planet-like behavior is not linked to any
particular fundamental parameter. The most compelling evidence for this is
the \obj{n33370ab} system: while \obj{n33370b} is the most radio-luminous UCD
known, its binary companion is at least 30 times fainter than it, despite
being nearly identical in mass, age, rotation rate, and
composition \citep{wbi+15, dfr+16, fdr+16}. Population studies show evidence
for bimodality when considering the \gbr\ \citep{sab+12, wcb14}, the
rotation/activity relation \citep{cwb14}, and ZDI-derived magnetic field
topologies \citep{mdp+10}.
The large-scale topology of the magnetic field may be the key factor that
determines whether planet-like magnetic behavior arises in a given
UCD \citep{cwb14}. This hypothesis is tenable because \hbindex{geodynamo
simulations} indicate that the fully convective dynamo may be bistable in the
conditions encountered in the UCD regime, with identical objects sustaining
different topologies depending on initial conditions \citep{gmd+13}. Recent
observations provide the first direct evidence for this model: ZDI reveals
that \obj{uvcet} (\apx M6) has an axisymmetric, dipole-dominated magnetic
field, while the field of its nearly-identical binary companion \obj{blcet} is
weaker and non-axisymmetric \citep{kl17}. Consistent with the proposed
model, \obj{uvcet} is more luminous and variable in the radio
than \obj{blcet}.
Detectable radio emission requires the presence of both a magnetic
field \textit{and} nonthermal electrons. The difference between the
radio-active and -inactive UCDs may therefore hinge not on the field topology
but on the presence of a source of plasma that can eventually produce the
gyrosynchrotron and ECMI emission. In analogy with Jupiter, the 10\% of UCDs
that are radio-active might be the ones possessing volcanic planets resembling
\hbindex{Io}. This scenario can potentially be tested by searching for ECMI bursts that
repeat periodically not at \prot\ but at the synodic period of the planetary
orbit. No evidence of such a non-rotational periodicity has yet been reported.
\section{The Exoplanetary Connection}
Radio studies of UCDs make a unique contribution to exoplanetary science
because they are the only effective way to observe the magnetic properties of
cool, extrasolar bodies.
\begin{sloppypar}
One reason that this is important is that UCDs may host large numbers of
observationally-accessible small planets, as demonstrated by
the \obji{trappist1} system \citep{gjl+16, gtd+17}. Understanding UCD activity
is therefore important for the same reasons that it is important for any
exoplanet host star: magnetic phenomena make planet discovery more
challenging \citeeg{rmer14} and they can have a significant impact on
\hbindex{atmospheric retention} and the broader question of
\hbindex{habitability} \citeeg{jglb15, sbj16}. Because UCD magnetism can be
so different from that of Sun-like stars and M~dwarfs, its impact on
habitability may differ substantially from the cases that have been
investigated thus far in the literature. For instance, the detection of
millimeter-wavelength radiation from \obj{tvlm513} points to a surprisingly
high-energy radiation environment of MeV electrons, which can
produce \hbindex{$\gamma$-ray emission} when they precipitate into the stellar
atmosphere \citep{wcs+15}. The moons of the Solar System gas giants should
serve as useful reference points in this domain \citeeg{ppw08}.
\end{sloppypar}
UCD magnetic fields can strongly resemble those of the Solar System gas giant
planets. Radio observations therefore provide insight into the magnetospheres
of exoplanets themselves, which observers have been struggling to probe since
well before the first confirmed exoplanet discovery \citep{yse77}. Currently,
\hbindex{exoplanetary magnetospheres} can only be investigated using indirect and
model-dependent means \citeeg{ehw+10}. Direct observations of exoplanetary
magnetospheres would not only shed light on the question of habitability, but
also internal structure; for instance, magnetic field generation in rocky
planets may require the presence of plate tectonics \citep{bls10}. The first
direct measurement of the magnetic field of a planetary-mass object may
already have occurred, because \obj{simp0136}, detected in the radio
by \citet{khp+16}, was recently argued to be a member of the \apx200-Myr-old
Carina-Near moving group, which would give it a mass of $12.7 \pm 1.0$~\mjup\
according to standard evolutionary models \citep{gfb+17}.
\section{Future Directions of Research}
One of the top priorities in the field of UCD radio studies is the extension
of its techniques to genuine exoplanets. By analogy with Solar System
examples, exoplanets are expected to have magnetic fields that are much weaker
than those of UCDs, which leads to the expectation that their radio emission
will occur at lower radio frequencies, $\lesssim$300~MHz. Fortunately the past
decade has witnessed a dramatic investment in low-frequency radio arrays such
as the \hbindex{Low Frequency Array} (LOFAR), the \hbindex{Murchison Widefield Array} (MWA), the
\hbindex{Long-Wavelength Array} (LWA), the \hbindex{Giant Metrewave Radio Telescope} (GMRT), and
the \hbindex{Hydrogen Epoch of Reionization Array} (HERA). While the first
generation of these instruments has not yielded any detections of genuine
UCDs, the first positive results are starting to emerge \citep{llk+17}, and
virtually all of these observatories are undergoing upgrades that are expected
to yield significant sensitivity improvements.
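The expectation of $\lesssim$300~MHz emission follows from the ECMI emission frequency lying near the fundamental electron cyclotron frequency, $\nu_c \approx 2.8~\mathrm{MHz} \times (B/\mathrm{G})$. A minimal sketch with illustrative (not measured) field strengths:

```python
def cyclotron_freq_mhz(b_gauss):
    """Fundamental electron cyclotron frequency, nu_c = e B / (2 pi m_e c),
    which evaluates to about 2.8 MHz per gauss of field strength."""
    return 2.8 * b_gauss

# Illustrative field strengths (rough scales, not measurements):
for label, b in [("UCD, kG-scale field", 3000),
                 ("Jupiter-like, ~10 G", 10),
                 ("stronger exoplanet, ~100 G", 100)]:
    print(f"{label}: {cyclotron_freq_mhz(b):.0f} MHz")
```

Kilogauss UCD fields put the ECMI at the GHz frequencies where it has been observed, while fields of \apx10--100~G fall at or below the $\lesssim$300~MHz regime served by these low-frequency arrays.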
While many of the nearest UCDs have been surveyed by the Very Large Array, the
results of \citet{khp+16} suggest that targeted searches may be able to yield
detections beyond the typical detection horizon (\apx30~pc) for blind searches
thus far. Furthermore, radio studies of southern UCDs have historically been
hampered by the lack of an instrument as powerful as the VLA (latitude
$+34$\deg). The commissioning of the \hbindex{MeerKAT} radio telescope in South
Africa \citep{the.meerkat}, with science operations slated to begin in late
2017, will introduce a powerful new observatory in the south. MeerKAT should
be especially valuable in surveys for radio emission from young,
\hbindex{directly-imaged exoplanets}, which are promising targets because they are as
warm as, or even warmer than, the coolest UCDs with confirmed radio detections,
and they convect vigorously. Most of the currently-known young planets are in the
southern hemisphere, however, and have not been the subject of sensitive radio
observations. Surveys for radio-active UCDs in both hemispheres will be
transformed by the deeper insight into the natures of the stars and brown
dwarfs in the solar neighborhood afforded by upcoming surveys from
observatories such as Gaia, the Transiting Exoplanet Survey Satellite (TESS),
and Spektr-RG, the spacecraft bearing the \hbindex{e-ROSITA} instrument.
Finally, a great deal of theoretical work remains to be done. More detailed
models of the bursting and non-bursting radio emission will strengthen the
astrophysical inferences that can be drawn from the radio data. The
population statistics of radio-active UCDs would be better understood through a
more rigorous treatment of the many nondetections and a more careful
characterization of the long-term variability of their radio emission. This
sort of work will lay the foundations upon which models can be constructed
that explain fundamental puzzles such as the source of the radio-emitting
plasma, the possible existence of a bistable dynamo, and the relationship
between rotation and magnetic activity in the ultra-cool regime.
\begin{acknowledgement}
P.K.G.W. thanks Edo Berger for supporting his work in this field and Adam
Burgasser, Kelle Cruz, Trent Dupuy, Jackie Faherty, Gregg Hallinan, Mark
Marley, and Rachel Osten for many enlightening conversations over the years.
P.K.G.W. acknowledges support for this work from the National Science
Foundation through Grant AST-1614770. This research has made use of the SIMBAD
database, operated at CDS, Strasbourg, France and NASA's Astrophysics Data
System.
\end{acknowledgement}
\section{Introduction}
The quon language is a 3D picture language that can be applied to simulate mathematical concepts \cite{JLW-Quon}.
It was designed to answer a question in quantum information, where the underlying symmetry is the group $\Z_2$ for qubits and $\Z_d$ for qudits.
One can consider the quon language as a topological quantum field theory (TQFT) in 3D space with lower-dimensional defects, and a quon as a 2D defect on the boundary of the 3D TQFT.
The underlying symmetry of the 3D picture language can be generalized to more general quantum symmetries captured by subfactor theory \cite{JonSun97, EvaKaw98}.
Jones introduced subfactor planar algebras as a topological axiomatization of the standard invariants of subfactors \cite{JonPA}.
One can consider a planar algebra as a 2D TQFT on the plane with line defects. A subfactor planar algebra is always spherical, so the theory also extends to the sphere.
A vector in the planar algebra of a subfactor is a morphism in the bi-module category associated with the subfactor.
From this point of view, a morphism is usually represented as a disc with $m$ boundary points on the top and $n$ boundary points at the bottom, and is considered as a transformation with $m$ inputs and $n$ outputs.
In the 3D quon language, we consider these morphisms in planar algebras as quons, and planar tangles as transformations. This interpretation, which is similar to the original definition of Jones, turns out to be more compatible with the notions of quantum information.
A planar tangle has multiple input discs and one output disc. So it represents a transformation from multiple quons to one quon.
In quantum information, we usually consider multiple qubits and their transformations.
To simulate multiple quon transformations, we generalize planar tangles to spherical tangles with multiple input discs and output discs. When we compose such tangles, we obtain higher-genus surfaces. So we further extend the notion of planar algebras to higher-genus surfaces; we call the resulting objects surface algebras. The theory of planar algebras becomes the local theory of surface algebras.
There is a freedom to define the partition function of a sphere in this extension, denoted by $\zeta$. We show that the partition function of the genus-$g$ surface is $\zeta^{1-g}$, which detects the topological non-triviality. We prove that any non-degenerate spherical planar algebra has a unique extension to a surface algebra for any non-zero $\zeta$. Therefore a subfactor not only defines a spherical planar algebra, but also a surface algebra parameterized by $\zeta$. The fruitful theory of subfactors provides many interesting examples.
In this paper, we take the subfactor to be the quantum double of a unitary modular tensor category $\C$, also known as the Drinfeld double \cite{Dri86}.
Then the 2-box space of the planar algebra of the subfactor is isomorphic to $L^2(Irr)$, where $Irr$ denotes the set of irreducible objects of $\C$.
Xu and the author proved that the associated subfactor planar algebra is unshaded \cite{LiuXu}.
Thus the 2-box space becomes the 4-box space of the unshaded planar algebra, denoted by $\SA_4$.
The unshaded condition is crucial to define the string Fourier transform (SFT) on one space.
Moreover, we proved that the SFT on $\SA_4$ is identical to the modular $S$ transformation of the MTC $\C$.
Both transformations have been considered as generalizations of the Fourier transform from different points of view. This identification relates the two different Fourier dualities for MTCs and subfactors.
We take the 1-quon space to be $\SA_4$ in order to study this pair of Fourier dualities. We list the correspondence for 1-quons in Table~\ref{Table:Fourier duality on 1-quons}; see \S \ref{Sec:Fourier duality on 1-quons} for details.
\begin{table}[h]
\caption{Fourier duality on MTC and 1-Quons}\label{Table:Fourier duality on 1-quons}
\begin{tabular}{|c|c|}
\hline
MTC & Quon \\ \hline
simple objects ($Irr$) & orthonormal basis \\ \hline
multiplication & multiplication \\ \hline
fusion & convolution \\ \hline
$S$ matrix & SFT $\FS$ \\ \hline
full subcategories $\C_K$ & biprojections $P_K$ \\ \hline
M\"{u}ger's center $\C_{\hat{K}}$ & $P_{\hat{K}}$ \\ \hline
${\hat{\hat{K}}}=K$ & $\FS^2(P_K)=P_K$ \\ \hline
$\dim \C_{\hat{K}}$ & $\Supp(P_K)$ \\ \hline
$\dim \C_K \dim \C_{\hat{K}} = \dim \C$ & $\Supp(P_K)\Supp(P_{\hat{K}})=\delta^2$ \\ \hline
\end{tabular}
\end{table}
Verlinde proposed, in the framework of CFT, that the $S$ matrix diagonalizes the fusion rules; this is known as the Verlinde formula \cite{Ver88}.
The Fourier duality on 1-quons, given by Lines 3--5 of Table~\ref{Table:Fourier duality on 1-quons}, gives a conceptual proof of Verlinde's original observation.
Jiang, Wu, and the author studied the Fourier analysis of the SFT on subfactors in \cite{JLW16}. Through this identification, we obtain many inequalities for the $S$ matrix, which will be discussed in a forthcoming paper.
It is particularly interesting that the $\infty$-$1$ Hausdorff-Young inequality for the SFT gives an important inequality for the $S$ matrix of a unitary MTC, proved by Terry Gannon \cite{Gan05}:
\begin{align}
\|\FS(\beta_Y)\|_1 &\leq \delta^{-1} \|\beta_Y\|_{\infty} \\
\Rightarrow \left|\frac{S_X^Y}{S_{X}^0}\right| &\leq \frac{S_{0}^Y}{S_{0}^0}.
\end{align}
The equality condition has been used by M\"{u}ger to define the center of full subcategories in $\C$, known as M\"{u}ger's center \cite{Mug03}.
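As a concrete sanity check, the inequality can be verified numerically for the Fibonacci category, a standard unitary MTC used here purely as an illustration (it is not one of this paper's worked examples):

```python
import math

# S-matrix of the Fibonacci MTC; label 0 is the unit object, label 1 is tau.
phi = (1 + math.sqrt(5)) / 2
D = math.sqrt(2 + phi)                  # global dimension sqrt(1 + phi^2)
S = [[1 / D, phi / D],
     [phi / D, -1 / D]]

# Gannon's inequality: |S_X^Y / S_X^0| <= S_0^Y / S_0^0 for all X, Y.
for X in range(2):
    for Y in range(2):
        assert abs(S[X][Y] / S[X][0]) <= S[0][Y] / S[0][0] + 1e-12
print("inequality verified for the Fibonacci S-matrix")
```

Equality holds here for $X = 0$ (the unit object), consistent with the role of the equality condition in defining M\"{u}ger's center.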
On the other hand, Bisch and Jones introduced biprojections for subfactors and planar algebras by studying intermediate subfactors \cite{Bis94,BisJon97}.
Lines 6--10 of Table~\ref{Table:Fourier duality on 1-quons} identify full subcategories of $\C$ with biprojections in $\SA_{4}$, together with several corresponding results relating M\"{u}ger's center and biprojections.
We extend the correspondence between $\FS$ and $S$ from 1-quons to $n$-quons using surface algebras, see Theorem \ref{Thm:Fourier duality}:
\begin{center}
\begin{tikzpicture}
\begin{scope}[node distance=4cm, auto, xscale=1,yscale=1]
\foreach \x in {0,1,2,3} {
\foreach \y in {0,1,2,3} {
\coordinate (A\x\y) at ({2*\x},{.7*\y});
}}
\foreach \y in {0,3}{
\node at (A0\y) {surface tangles};
\node at (A3\y) {graphic quons};
\draw[->] (A1\y) to node {$Z$} (A2\y);
}
\draw[->] (A02) to node [swap] {$\vec{\FS}$} (A01);
\draw[->] (A32) to node [swap] {$\vec{S}$} (A31);
\end{scope}
\end{tikzpicture}.
\end{center}
The left side is pictorial, and $\FS$ can be considered as a global $90^{\circ}$ rotation. The right side is algebraic, and the $S$-matrix is a generalization of the discrete Fourier transform. The partition function $Z$ is a functor relating the pictorial Fourier duality and the algebraic Fourier duality.
In particular, the algebraic Fourier duality between the two qudit resource states $\GHZ$ and $\Max$ in quantum information turns out to be a pictorial Fourier duality in the quon language \cite{JLW-Quon}; see \S \ref{Sec: GHZ Max} for details.
\begin{equation}
\Max_{n,g}= \vec{\FS}(\GHZ_{n,g})
\quad \Rightarrow \quad
\Max_{n,g}= \vec{S}\GHZ_{n,g}
\end{equation}
This result now also applies to unitary MTCs.
Comparing the coefficients, we obtain the generalized Verlinde formula:
\begin{equation} \label{Equ:1}
\Max_{n,g}= \vec{S}\GHZ_{n,g}
\quad \Rightarrow \quad
\dim(\vec{X},g)= \sum_{X\in Irr} (\prod_{i=1}^n S_{X_i}^{X}) (S_{X}^1)^{2-n-2g}
\end{equation}
The generalized Verlinde formula was first proved by Moore and Seiberg in CFT \cite{MooSei89}. Here we prove it for any unitary MTC and any genus $g$.
We refer the readers to an interesting discussion about various versions of Verlinde formula on MathOverflow: \url{https://mathoverflow.net/questions/151221/verlindes-formula}.
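The formula can be checked numerically in small examples. The sketch below evaluates the right-hand side of Equation~\eqref{Equ:1} for the Fibonacci category (an illustrative unitary MTC, not one of this paper's worked examples), writing $S_{X}^{1}$ as the column of the unit object; at $g=0$, $n=3$ it recovers the Fibonacci fusion multiplicities, and at $n=0$ it gives the genus-$g$ Hilbert space dimensions.

```python
import math

# S-matrix of the Fibonacci MTC; label 0 is the unit object, label 1 is tau.
phi = (1 + math.sqrt(5)) / 2
D = math.sqrt(2 + phi)
S = [[1 / D, phi / D],
     [phi / D, -1 / D]]

def verlinde_dim(labels, g):
    """Right-hand side of the generalized Verlinde formula:
    sum_X (prod_i S_{X_i}^X) * (S_X^1)^(2 - n - 2g),
    with S_X^1 realized as the unit-object column S[x][0]."""
    n = len(labels)
    return sum(
        math.prod(S[xi][x] for xi in labels) * S[x][0] ** (2 - n - 2 * g)
        for x in range(2)
    )

print(round(verlinde_dim([1, 1, 1], 0)))  # N_{tau,tau}^tau = 1
print(round(verlinde_dim([1, 1, 0], 0)))  # N_{tau,tau}^1   = 1
print(round(verlinde_dim([], 1)))         # torus: number of simple objects = 2
print(round(verlinde_dim([], 2)))         # genus-2 Hilbert space dimension = 5
```

The $g=0$, $n=3$ values reproduce the fusion rule $\tau \otimes \tau = 1 \oplus \tau$, as the classical Verlinde formula predicts.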
Moreover, for each oriented graph $\Gamma$ on the sphere, we define a surface tangle as a fat graph of $\Gamma$. Then its SFT becomes a fat graph of $\hat{\Gamma}$, where $\hat{\Gamma}$ is the dual graph of $\Gamma$, forgetting the orientation.
So the pictorial Fourier duality also coincides with the graphical duality.
Therefore, from any graph $\Gamma$, we obtain an algebraic identity as the algebraic Fourier duality of quons.
We give some examples, including well-known ones, such as the Verlinde formula mentioned above; partially known ones; and completely new ones.
If the graph $\Gamma$ is the tetrahedron, then the graphic self-duality of the tetrahedron gives an algebraic $6j$-symbol self-duality for unitary MTCs, see \S \ref{Sec:Fourier duality} for details:
\begin{equation}
\left|{{{X_{6}~X_{5}~X_{4}}\choose{\overline{X_{3}}~\overline{X_{2}}~\overline{X_{1}}}}}\right|^{2}
= \sum_{\vec {Y}\in Irr^6} \left(\prod_{k=1}^{6}S_{X_{k}}^{Y_{k}} \right)
\left|{{{Y_{1}Y_{2}Y_{3}}\choose{Y_{4}Y_{5}Y_{6}}}}\right|^{2}.
\end{equation}
In the special case of quantum $SU(2)$, the identity for the 6j-symbol self-duality was discovered by Barrett in the study of quantum gravity \cite{Bar03}, based on an interesting identity of J. Roberts \cite{Rob95}. The identity was then generalized to some other cases related to $SU(2)$ in \cite{FNR07}. The general case for MTCs was conjectured by Shamil Shakirov, which we answer positively here.
We obtain a sequence of new algebraic self-dual identities from a sequence of self-dual graphs, see \S \ref{Sec:Fourier duality} for details:
\begin{equation}
\left|{{{X_{2n}~X_{2n-1}\cdots X_{n}}\choose{\overline{X_{n}} \phantom{aa} \overline{X_{n-1}} ~\cdots \overline{X_{1}}}}}\right|^{2}
= \sum_{\vec {Y}\in Irr^{2n}} \left(\prod_{k=1}^{2n}S_{X_{k}}^{Y_{k}} \right)
\left|{{{Y_{1} \phantom{aa} Y_{2} \phantom{aa} \cdots ~Y_{n}}\choose{Y_{n+1}Y_{n+2}\cdots Y_{2n}}}}\right|^{2}.
\end{equation}
\iffalse
\textcolor{blue}{
This paper is inspired by the notions in several different subjects: CFT, Fourier analysis, MTCs, quantum information, subfactors, TQFT. The 1-quon space here is designed to be commonly interesting for these subjects.
For mathematical interests, there is no reason to restrict the 1-quon space to be the 2-box space of the quantum double of MTCs.
If the MTC is the representation category of a conformal net, then the quantum double of the MTC is the 2-interval Jones-Wassermann subfactor, also known as Longo-Rehren inclusion \cite{LonReh95,Was98}. The multi-interval Jones-Wassermann subfactor has been considered in \cite{Xu00,KawLonMug01}, which is closed related to orbifold theory in CFT \cite{DVVV89,KLX05}.
Recently, Xu and the author constructed multi-interval Jones-Wassermann subfactors for any unitary MTC inspired by the reconstruction project from MTC to CFT \cite{LiuXu}, and proved the modular self-duality. The $n$-box SFT for the $m$-interval Jones-Wassermann subfactor for MTCS also corresponds to a generalized Fourier transform in TQFT, an element in the mapping class group of a surface with genus (m-1)(n-1).
If we take the 1-quon space from the case $m=n=2$ to arbitrary $m, n \geq 1$, then all the methods in this paper still work. We also obtain new identities from Fourier dualities, which are related to the higher genus structure in 2+1 TQFT.
}
\textcolor{blue}{
Each MTC defines a 2+1 Witten-Reshetikhin-Turaev TQFT \cite{Wit88,ResTur91,Tur16}. On the other hand, for any $m\geq 1$, we obtain a 3D quon language. This paper is addressing the fundamental connections of a long term project in exploring the correspondence between these 3D quon language and the 2+1 TQFT for MTCs inspired by the orbifold theory in CFT.
}
\fi
\begin{ac}
The author would like to thank Terry Gannon, Arthur Jaffe, Vaughan F. R. Jones, Shamil Shakirov, Cumrun Vafa, Erik Verlinde, Jinsong Wu and Feng Xu for helpful discussions. The author was supported by a grant from Templeton Religion Trust and an AMS-Simons Travel Grant.
The author would like to thank the Isaac Newton Institute for Mathematical Sciences, Cambridge, for support and hospitality during the programme ``Operator algebras: subfactors and their applications''.
\end{ac}
\section{Surface algebras}
In this section, we extend spherical planar algebras from the sphere to higher-genus surfaces, which are the boundaries of 3-manifolds in 3D space. The theory of spherical planar algebras becomes the genus-0 case.
To simplify the notation, we only define the single-color case, and the ground field is $\mathbb{C}$. One can generalize these definitions to multi-color cases over a general field.
\subsection{Surface tangles}
If we consider a planar tangle as a spherical tangle by one point compactification of the plane, then the complement of the planar tangle becomes a disc on the sphere. The induced orientation of the boundary of the output disc will be changed.
Thus we use anti-clockwise and clockwise orientations of the boundaries of discs to indicate input and output, respectively.
The composition of planar tangles is still a planar tangle. In this case, the number of output discs is always one.
If we allow spherical tangles to have multiple input discs and output discs, then we will obtain tangles on higher genus surfaces when we compose these spherical tangles.
We give a generalization of planar tangles to surface tangles, see Fig.~\ref{Fig:genus-2 tangle} for example.
\begin{definition}
A genus-$g$ tangle $T$, for $g\in \mathbb{N}$, is a 3-manifold in 3D space whose boundary is a genus-$g$ surface. The surface contains a finite (possibly empty) set of smooth closed discs $\mathcal{D}(T)$. For each disc $D\in \mathcal{D}(T)$, its boundary $\partial D$ is an oriented circle with a number of marked points. There is also a finite set of disjoint smoothly embedded curves called strings, which are either closed curves or curves whose end points are distinct marked points of discs. Each marked point is an end point of some string, which meets the boundary of the corresponding disc transversally.
The connected components of the complement of the strings and discs are called regions. The connected component of the boundary of a disc, minus its marked points, will be called the intervals of that disc. To each disc there is a distinguished interval on its boundary. The distinguished interval is marked by an arrow $\to$, which also indicates the orientation.
A surface tangle is a disjoint union of finitely many such tangles, possibly of different genera.
\end{definition}
\begin{figure}\label{Fig:genus-2 tangle}
\begin{center}
\begin{tikzpicture}
\draw[blue] (-2,-1)--++(4,0) arc (-90:90:1) --++(-4,0) arc (90:270:1);
\draw[blue] (-1-.2,0) to [bend left=30] (-1+.2,0);
\draw[blue] (-1-.3,0) to [bend left=-30] (-1+.3,0);
\draw[blue] (1-.2,0) to [bend left=30] (1+.2,0);
\draw[blue] (1-.3,0) to [bend left=-30] (1+.3,0);
\draw (-2,0) to [bend left=30] (0,0);
\draw (-2,0) to [bend left=-30] (0,0);
\draw (2,0) to [bend left=30] (0,0);
\draw (2,0) to [bend left=-30] (0,0);
\draw (-2.5,0) to [bend left=30] (2.5,0);
\draw (-2.5,0) to [bend left=-30] (2.5,0);
\foreach \x in {-2,0,2} {
\fill[white] (\x,0) circle (.5);
\fill[blue!20] (\x,0) circle (.5);
}
\draw[blue,->] (-1.5,0) arc (0:360:.5);
\draw[blue,<-] (-.5,0) arc (-180:180:.5);
\draw[blue,->] (2.5,0) arc (0:360:.5);
\end{tikzpicture}
\end{center}\caption{Example: a genus-2 tangle with two input discs and one output disc.}
\end{figure}
We consider 3D topological isotopy given by orientation-preserving diffeomorphisms of 3D space.
The {\it surface operad} is the set of isotopy classes of surface tangles.
\begin{remark}
One can impose additional data to color the regions and the strings. In subfactor theory, an alternating shading of the regions is preferred. Therefore the number of boundary points of each disc is even. In tensor categories, the strings are colored by simple objects. In 2-categories, one has multiple colors for regions and strings. In these cases, the boundary condition $\partial D$ will be colored too.
\end{remark}
\begin{notation}
Let $\partial \mathcal{D}$ be the set of boundary conditions of discs, i.e., the equivalence classes of $\partial D$ modulo isotopy.
We say a disc $D$ is an input (respectively, output) disc, if the orientation of $\partial D$ is anti-clockwise (respectively, clockwise). Let $\mathcal{D}_{I}$ and $\mathcal{D}_{O}$ be the sets of input discs and output discs respectively.
\end{notation}
\begin{notation}
Let $r$ be a reflection by a plane in the 3D space.
\end{notation}
The reflection $r$ is unique up to topological isotopy in the 3D space. Moreover $r$ maps a surface tangle to a surface tangle and reverses the orientation of the boundary of discs. Thus $r$ switches $\mathcal{D}_{I}$ and $\mathcal{D}_{O}$.
\begin{definition}
We define two elementary operadic operations for surface tangles.
\begin{itemize}
\item[(1)] Tensor: taking a disjoint union of two surface tangles.
\item[(2)] Contraction: gluing two discs of a surface tangle whose boundaries are mirror images.
\end{itemize}
\end{definition}
Modulo topological isotopy in the 3D space, the tensor is unique, but
there are inequivalent contractions.
The composition of planar tangles can be decomposed as a contraction and a tensor.
\subsection{Surface algebras}
We define surface algebras as finite dimensional representations of surface tangles whose target spaces are indexed by the boundary conditions $\partial \mathcal{D}$:
\begin{definition}\label{Def:SA}
A surface algebra $\SA_{\bullet}$ is a representation $Z$ of surface tangles on the tensor products of a family of finite dimensional vector spaces $\{\mathscr{S}_{i}\}_{i \in \partial \mathcal{D}}$, satisfying the following axioms:
\begin{itemize}
\item[(1)] Boundary condition: For a surface tangle $T$, $Z(T)$ is a vector in $\displaystyle \bigotimes_{D\in \mathcal{D}(T)} \SA_{\partial D}$.
\item[(1')] Second boundary condition: If $T$ has no discs, then $Z(T)$ is a scalar in the ground field.
\item[(2)] Duality: For any $i \in \partial \mathcal{D}$, $\SA_{r(i)}$ is the dual space of $\SA_{i}$.
\item[(3)] Isotopy invariance: The representation $Z$ is well-defined up to isotopy in the 3D space.
\item[(4)] Naturality: The following commutative diagram holds:
\begin{center}
\begin{tikzpicture}
\begin{scope}[node distance=4cm, auto, xscale=1, yscale=1]
\foreach \x in {0,1,2,3} {
\foreach \y in {0,1,2,3} {
\coordinate (A\x\y) at ({2*\x},{.7*\y});
}}
\foreach \y in {0,3}{
\node at (A0\y) {surface tangles};
\node at (A3\y) {vectors};
\draw[->] (A1\y) to node {$Z$} (A2\y);
}
\foreach \x in {0,3}{
\draw[->] (A\x2) to node [swap] {tensor/contraction} (A\x1);
}
\end{scope}
\end{tikzpicture}
\end{center}
\end{itemize}
\end{definition}
We also call $Z(T)$ the {\it partition function} of $T$, by analogy with statistical physics.
\begin{definition}
The partition function of a sphere is called the 2D sphere value, denoted by $\zeta$.
The partition function of a closed string in a sphere is $\delta\zeta$. We call $\delta$ the 1D circle value.
\end{definition}
If we restrict the representation $Z$ to genus-0 tangles with one output disc, then we recover unital, finite dimensional, spherical planar algebras\footnote{
The spherical condition for planar algebras is usually defined under the evaluable condition, namely that the 0-box space is one-dimensional \cite{Jon12}. The spherical condition of surface algebras on the sphere does not require this one-dimensional condition. Typical examples of such planar algebras are graph planar algebras \cite{Jon00}.}. Moreover, $\delta$ is the statistical dimension of the planar algebra.
\begin{definition}
We say a surface algebra is an extension of a planar algebra, if the restriction of its partition function $Z$ on the planar tangles is the partition function of the planar algebra.
\end{definition}
\begin{remark}
If the regions and strings of surface tangles are colored, then the index set $\mathbb{N}$ will be replaced by all permissible colors of the boundary of a disc.
\end{remark}
\begin{remark}
If one considers surface algebras as 2D TQFT with line defects, then it is better to consider the discs of surface tangles as holes. However, we emphasize that these surfaces are boundaries of 3-manifolds, thus the notion of discs is more reasonable.
\end{remark}
\begin{notation}
For an input disc $D$, the boundary condition $\partial D$ only depends on the number of marked points $n$. Thus we denote $\SA_{\partial D}$ by $\SA_n$ and its dual space by $\SA_n^*$.
\end{notation}
We can consider $Z(T)$ as a multi-linear transformation on the vector spaces $\{\mathscr{S}_{n}\}_{n \in \mathbb{N}}$ from input discs to output discs.
Let us extend some notions from planar algebras to surface algebras.
\begin{notation}
We use a thick string labelled by a number $n$ to indicate $n$ parallel strings.
\end{notation}
\begin{figure}[h]
\begin{tikzpicture}
\draw[blue] (.25,0) ellipse (1.5 and .75);
\draw[fill,blue!20] (0,0) arc (0:360:.5);
\draw[blue] (0,0) arc (0:360:.5);
\draw[->,blue] (0,0) arc (0:270:.5);
\draw[fill,blue!20] (1.5,0) arc (0:360:.5);
\draw[blue] (1.5,0) arc (0:360:.5);
\draw[->,blue] (1.5,0) arc (0:270:.5);
\draw[thick] (0,0)--(.5,0);
\node at (.25,-.2) {$n$};
\end{tikzpicture}
\caption{The genus-0 tangle for the bilinear form $B_n$.} \label{Fig:Bn}
\end{figure}
\begin{notation}
The genus-0 tangle in Fig.~\ref{Fig:Bn} defines a bilinear form $B_n$ on $\SA_{n}$.
\end{notation}
\begin{definition}
A surface algebra is called
{\it non-degenerate}, if the bilinear form $B_n$ is non-degenerate for all $n$.
\end{definition}
If the surface algebra is non-degenerate, then the bilinear form $B_n$ induces an isomorphism $D_n$ from the vector space $\SA_{n}$ to its dual space $\SA_{n}^*$. From this point of view, the tangles for the map $D_n$ and its inverse $D_n^{-1}$ are given in Fig.~\ref{Fig:Dn}. So we can identify the vector space $\SA_n$ with its dual using these duality maps, denoted by $D$ for short.
\begin{figure}
\begin{tikzpicture}
\begin{scope}[xscale=.25,yscale=.1]
\foreach \x in {0.5}{
\draw[fill,blue!20] (\x,0) arc (-180:180:2);
\draw[fill,blue!20] (\x,6) arc (-180:180:2);
\draw[blue] (\x,6) arc (-180:180:2);
\draw[->,blue] (\x,6) arc (-180:-90:2);
\draw[blue] (\x,0) arc (-180:0:2);
\draw[->,blue] (\x+4,0) arc (0:-90:2);
\draw[blue,dashed] (\x,0) arc (180:0:2);
}
\draw[blue] (.5,0)--(.5,6);
\draw[blue] (4.5,0)--(4.5,6);
\node at (6.5,3) {and};
\draw[thick] (2,-2)--(2,4);
\node at (2-.5,1) {$n$};
\end{scope}
\begin{scope}[shift={(2,0)},xscale=.25,yscale=.1]
\foreach \x in {0.5}{
\draw[fill,blue!20] (\x,0) arc (-180:180:2);
\draw[fill,blue!20] (\x,6) arc (-180:180:2);
\draw[blue] (\x,6) arc (-180:180:2);
\draw[->,blue] (\x+4,6) arc (0:-90:2);
\draw[blue] (\x,0) arc (-180:0:2);
\draw[->,blue] (\x,0) arc (-180:-90:2);
\draw[blue,dashed] (\x,0) arc (180:0:2);
}
\draw[blue] (.5,0)--(.5,6);
\draw[blue] (4.5,0)--(4.5,6);
\draw[thick] (2,-2)--(2,4);
\node at (2-.5,1) {$n$};
\end{scope}
\end{tikzpicture}
\caption{The tangles for $D_n$ and $D_n^{-1}$.} \label{Fig:Dn}
\end{figure}
\begin{definition}
Suppose $^*$ is an anti-linear involution on $\SA_{n}$, $n\in \mathbb{N}$. Then $R(x):= D(x^*)$ is an anti-linear isomorphism from $\SA_{n}$ to $\SA_{n}^*$. We still denote its inverse and the linear extension to the tensor powers by $R$. Then
\begin{equation}
\langle x,y\rangle:=B_n(x,y^*) =R(y)(x)
\end{equation}
is an inner product on $\SA_{n}$.
\end{definition}
\begin{remark}
The bilinear form in planar algebras is $\frac{1}{\zeta}B_n$.
\end{remark}
\begin{definition}
A surface algebra is called a surface $^*$-algebra, if it has an anti-linear involution, such that for any surface tangle $T$,
\begin{equation}
Z(r(T))=R(Z(T)).
\end{equation}
\end{definition}
\begin{definition}
A surface $^*$-algebra $\SA_{\bullet}$ is called (semi-)positive, if the inner product $\langle \cdot,\cdot \rangle$ is (semi-)positive.
\end{definition}
Note that positivity implies non-degeneracy: if $B_n(x,y)=0$ for all $y\in\SA_n$, then $\langle x,x\rangle=B_n(x,x^*)=0$, so $x=0$.
For a positive surface algebra $\SA_{\bullet}$, the vector space $\SA_n$ is a Hilbert space. Moreover, the map $R$ is the Riesz representation. Thus we can consider a positive surface algebra as a Hilbert space representation of surface tangles satisfying an additional commutative diagram:
\begin{equation} \label{Equ:reflection}
\raisebox{-1cm}{
\begin{tikzpicture}
\begin{scope}[node distance=4cm, auto, xscale=1, yscale=1]
\foreach \x in {0,1,2,3} {
\foreach \y in {0,1,2,3} {
\coordinate (A\x\y) at ({2*\x},{.7*\y});
}}
\foreach \y in {0,3}{
\node at (A0\y) {surface tangles};
\node at (A3\y) {vectors};
\draw[->] (A1\y) to node {$Z$} (A2\y);
}
\draw[->] (A02) to node [swap] {reflection} (A01);
\draw[->] (A32) to node [swap] {Riesz representation} (A31);
\end{scope}
\end{tikzpicture}} ~~.
\end{equation}
\subsection{Labelled tangles}
For a surface tangle, we can partially fill its discs by a vector with compatible boundary condition. We consider the result as a labelled tangle. Let us extend the representation $Z$ of surface tangles to labelled tangles.
\begin{definition}
Suppose $\SA_{\bullet}$ is a surface algebra and $T$ is a surface tangle. Let $S$ be a subset of $\mathcal{D}(T)$ and $v$ be a vector in $\displaystyle \bigotimes_{D\in S} \SA_{r(\partial D)}$.
We call the pair $(T, v)$ a labelled tangle, denoted by $T \circ_S v$, or $T(v)$ for short, meaning that the discs in $S$ are labelled by the vector $v$.
We call it fully labelled, if all discs are labelled.
We define the partition function of the labelled tangle $T(v)$ by
\begin{equation} \label{Equ:ZTv}
Z(T(v)):=v(Z(T)),
\end{equation}
where $\displaystyle v \in \bigotimes_{D\in S} \SA_{r(\partial D)}$ is considered as a partial linear functional on $\displaystyle \bigotimes_{D\in \mathcal{D}(T)} \SA_{\partial D}$.
\end{definition}
\begin{definition}
We define the reflection on a labelled tangle $T(v)$ by
\begin{equation} \label{Equ:rTv}
r(T(v))=r(T)(R(v)).
\end{equation}
\end{definition}
Suppose $\SA_{\bullet}$ is a surface algebra and $T$ is a surface tangle. Then $Z(T) \in \displaystyle \bigotimes_{D\in \mathcal{D}(T)} \SA_{\partial D}$.
Let $S$ be a subset of $\mathcal{D}(T)$. Then each vector $v$ in $\displaystyle \bigotimes_{D\in S} \SA_{r(\partial D)}$ is a partial linear functional on $\displaystyle \bigotimes_{D\in \mathcal{D}(T)} \SA_{\partial D}$. Moreover, $v(Z(T))$ is a vector in $\displaystyle \bigotimes_{D\in \mathcal{D}(T)\setminus S} \SA_{\partial D}$, corresponding to the unlabelled discs of $T$.
\begin{theorem}\label{Thm:Labelled tangles}
For a surface algebra $\SA_{\bullet}$, the extended representation $Z$ of labelled tangles satisfies all axioms in Definition \ref{Def:SA}.
\end{theorem}
\begin{proof}
Axioms (1') and (3) follow from the corresponding axioms for surface tangles.
Axioms (1) and (4) follow from the corresponding axioms for surface tangles and Equation \eqref{Equ:ZTv}.
Axiom (2) follows from the corresponding axiom for surface tangles and Equation \eqref{Equ:rTv}.
We give a detailed proof for the tensor in axiom (4). The others are similar.
Suppose $T_1(v_1)$ and $T_2(v_2)$ are labelled tangles, then their disjoint union $T_1(v_1) \otimes T_2(v_2)=(T_1\otimes T_2)(v_1\otimes v_2)$
is a labelled tangle. So
\begin{align*}
&Z(T_1(v_1) \otimes T_2(v_2))\\
=&Z((T_1\otimes T_2)(v_1\otimes v_2))\\
=&(v_1\otimes v_2)(Z(T_1\otimes T_2))\\
=&(v_1\otimes v_2)(Z(T_1)\otimes Z(T_2))\\
=&v_1(Z(T_1)) \otimes v_2(Z(T_2))\\
=&Z(T_1(v_1)) \otimes Z(T_2(v_2)).
\end{align*}
\end{proof}
Let $T(v)$ be a labelled tangle containing $T_1(v_1)$ as a sub labelled tangle. In other words, there is a labelled tangle $T_2(v_2)$,
such that $T(v)$ is a multiple contraction of $T_1(v_1)$ and $T_2(v_2)$. We denote it by $T(v)= T_1(v_1) \circ_S T_2(v_2)$, where $S$ indicates the unlabelled discs that are glued.
Suppose $T_3(v_3)$ is a labelled tangle which has the same partition function as $T_1(v_1)$. Then we can identify their unlabelled discs. If we replace $T_1(v_1)$
by $T_3(v_3)$ in $T(v)$, then we obtain a new labelled tangle $T_3(v_3) \circ_S T_2(v_2)$. By Theorem \ref{Thm:Labelled tangles}, we have the following:
\begin{corollary}
If $Z(T_1(v_1))=Z(T_3(v_3))$, then $Z(T_1(v_1) \circ_S T_2(v_2))=Z(T_3(v_3) \circ_S T_2(v_2))$.
\end{corollary}
Since the replacement of $T_1(v_1)$ by $T_3(v_3)$ will not affect the partition function, we can consider it as a {\it relation} of labelled tangles, denoted by $T_1(v_1)=T_3(v_3)$.
The following genus-0 tangle $I_n$ has one input disc and one output disc:
\begin{center}
\begin{tikzpicture}
\begin{scope}[xscale=.25,yscale=.1]
\foreach \x in {0.5}{
\draw[fill,blue!20] (\x,0) arc (-180:180:2);
\draw[fill,blue!20] (\x,6) arc (-180:180:2);
\draw[blue] (\x,6) arc (-180:180:2);
\draw[->,blue] (\x,6) arc (-180:-90:2);
\draw[blue] (\x,0) arc (-180:0:2);
\draw[->,blue] (\x,0) arc (-180:-90:2);
\draw[blue,dashed] (\x,0) arc (180:0:2);
}
\draw[blue] (.5,0)--(.5,6);
\draw[blue] (4.5,0)--(4.5,6);
\draw[thick] (2,-2)--(2,4);
\node at (2-.5,1) {$n$};
\end{scope}
\end{tikzpicture}
\end{center}
If $\SA_{\bullet}$ is non-degenerate, then $I_n$ defines the identity map on $\SA_n$.
For any vector $v$ in $\SA_n$, we obtain a labelled tangle $I_n(v)$ by filling $v$ in the input disc. Then $Z(I_n(v))=v$.
So the vector $v$ can be considered as a labelled tangle, denoted by $v=I_n(v)$. Its pictorial representation is
\begin{center}
\begin{tikzpicture}
\begin{scope}[xscale=.25,yscale=.1]
\foreach \x in {0.5}{
\draw[fill,blue!20] (\x,0) arc (-180:180:2);
\draw[blue] (\x,6) arc (-180:180:2);
\draw[->,blue] (\x,6) arc (-180:-90:2);
\draw[blue] (\x,0) arc (-180:0:2);
\draw[->,blue] (\x,0) arc (-180:-90:2);
\draw[blue,dashed] (\x,0) arc (180:0:2);
}
\draw[blue] (.5,0)--(.5,6);
\draw[blue] (4.5,0)--(4.5,6);
\draw[thick] (2,-2)--(2,4);
\node at (2-.5,1) {$n$};
\node at (2.5,6) {$v$};
\end{scope}
\end{tikzpicture}
\end{center}
This construction can be generalized to any vector in the tensor product of $\{\SA_{n}\}_{n\in\mathbb{N}}$ and their dual spaces.
Therefore, we can identify vectors and labelled tangles by each other in a surface algebra.
The vector spaces $\SA_{n}$ and $\SA_{n}^*$ are dual to each other.
Let $\{\alpha_k\}$ be a basis of $\SA_{n}$ and $\{\beta_k\}$ be its dual basis. Then we have that
\begin{equation}
Z(I_n)=\sum_k \alpha_k\otimes \beta_k.
\end{equation}
The right hand side is independent of the choice of basis.
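Explicitly, if $\{\alpha'_j\}$ is another basis with $\alpha'_j=\sum_k g_{jk}\alpha_k$, then its dual basis is $\beta'_j=\sum_k (g^{-1})_{kj}\beta_k$, and
\begin{equation*}
\sum_j \alpha'_j\otimes \beta'_j=\sum_{k,l}\Big(\sum_j (g^{-1})_{lj}g_{jk}\Big)\,\alpha_k\otimes \beta_l=\sum_{k,l}\delta_{k,l}\,\alpha_k\otimes \beta_l=\sum_k \alpha_k\otimes \beta_k.
\end{equation*}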
This defines a relation for labelled tangles that we call the {\it joint relation}.
\begin{proposition}[Joint relation]\label{Prop:Joint relation}
Suppose $\SA_{\bullet}$ is a non-degenerate surface algebra; then we have the joint relation for labelled tangles:
\begin{equation} \label{Equ:joint relation}
\raisebox{-.5cm}{
\begin{tikzpicture}
\begin{scope}[xscale=.25,yscale=.1]
\foreach \x in {0.5}{
\draw[fill,blue!20] (\x,0) arc (-180:180:2);
\draw[fill,blue!20] (\x,6) arc (-180:180:2);
\draw[blue] (\x,6) arc (-180:180:2);
\draw[->,blue] (\x,6) arc (-180:-90:2);
\draw[blue] (\x,0) arc (-180:0:2);
\draw[->,blue] (\x,0) arc (-180:-90:2);
\draw[blue,dashed] (\x,0) arc (180:0:2);
}
\draw[blue] (.5,0)--(.5,6);
\draw[blue] (4.5,0)--(4.5,6);
\draw[thick] (2,-2)--(2,4);
\node at (2-.5,1) {$n$};
\end{scope}
\end{tikzpicture}
}
=\sum_k
\raisebox{-1cm}{
\begin{tikzpicture}
\begin{scope}[xscale=.25,yscale=.1]
\foreach \x in {0.5}{
\draw[fill,blue!20] (\x,0) arc (-180:180:2);
\draw[blue] (\x,6) arc (-180:180:2);
\draw[->,blue] (\x,6) arc (-180:-90:2);
\draw[blue] (\x,0) arc (-180:0:2);
\draw[->,blue] (\x,0) arc (-180:-90:2);
\draw[blue,dashed] (\x,0) arc (180:0:2);
}
\draw[blue] (.5,0)--(.5,6);
\draw[blue] (4.5,0)--(4.5,6);
\draw[thick] (2,-2)--(2,4);
\node at (2-.5,1) {$n$};
\node at (2.5,6) {$\alpha_k$};
\end{scope}
\begin{scope}[xscale=.25,yscale=.1,shift={(0,12)}]
\foreach \x in {0.5}{
\draw[fill,blue!20] (\x,6) arc (-180:180:2);
\draw[blue] (\x,6) arc (-180:180:2);
\draw[->,blue] (\x,6) arc (-180:-90:2);
\draw[blue] (\x,0) arc (-180:0:2);
\draw[->,blue] (\x,0) arc (-180:-90:2);
\draw[blue,dashed] (\x,0) arc (180:0:2);
}
\draw[blue] (.5,0)--(.5,6);
\draw[blue] (4.5,0)--(4.5,6);
\draw[thick] (2,-2)--(2,4);
\node at (2-.5,1) {$n$};
\node at (2.5,0) {$\beta_k$};
\end{scope}
\end{tikzpicture}}
\end{equation}
Consequently, for the genus-$g$ surface $S_g$, we have
\begin{equation}
Z(S_g)=\zeta^{1-g}.
\end{equation}
\end{proposition}
\subsection{Unique extension}
\begin{theorem}\label{Thm:unique extension}
For any $\zeta\neq0$, any non-degenerate, unital, finite dimensional, spherical planar algebra $\PA_{\bullet}$ has a unique extension to a non-degenerate surface algebra $\SA_{\bullet}$ with 2D sphere value $\zeta$.
\end{theorem}
In other words, the joint relation and the local relations defined by the planar algebra are consistent, and the 2D sphere value $\zeta$ is a free parameter.
\begin{proof}
Since $\PA_{\bullet}$ is non-degenerate and $\zeta \neq 0$, its extension $\SA_{\bullet}$ is non-degenerate. Moreover, the inner product on $\SA_{\bullet}$ is $\zeta$ times the inner product on $\PA_{\bullet}$. The anti-linear isomorphism $D_n: \SA_n \to \SA_n^*$ is defined by the Riesz representation.
The interior 3-manifold of a fully labelled surface tangle $T$ is contractible to a graph $G_T$, homotopic to a planar graph.
Moreover, the graph $G_T$ is unique up to the contraction move which contracts an adjacent pair of an $m$-valent vertex and an $n$-valent vertex to an $(m+n-2)$-valent vertex:
\begin{center}
\begin{tikzpicture}
\begin{scope}[scale=.5]
\foreach \x in {0,1,2,3}
{
\coordinate (A\x) at ({cos(360/4*\x)}, {sin(360/4*\x)});
\draw (0,0)--(A\x);
}
\begin{scope}[shift={(-1,0)}]
\foreach \x in {0,1,2}
{
\coordinate (A\x) at ({cos(360/3*\x)}, {sin(360/3*\x)});
\draw (0,0)--(A\x);
}
\end{scope}
\node at (2,0) {$\to$};
\node at (6,0) {$.$};
\begin{scope}[shift={(4,0)}]
\foreach \x in {0,1,2,3,4}
{
\coordinate (A\x) at ({cos(360/5*\x)}, {sin(360/5*\x)});
\draw (0,0)--(A\x);
}
\end{scope}
\end{scope}
\end{tikzpicture}
\end{center}
We consider $T$ as a small neighborhood of $G_T$. We can decompose $T$ into fully labelled genus-0 tangles by applying the joint relation \eqref{Equ:joint relation} to all edges of $G_T$. Thus the partition function $Z(T)$ is determined by the value of $Z$ on fully labelled genus-0 tangles. Therefore the extension is unique for a fixed $\zeta$.
Now we prove the existence of such an extension.
We need to prove that the partition function $Z(T)$ is well-defined.
Let $\{\alpha_k\}$ be a basis of $\SA_{n}$ and $\{\beta_k\}$ be its dual basis.
Let $\{\alpha_{k'}\}$ be a basis of $\SA_{m}$ and $\{\beta_{k'}\}$ be its dual basis.
By basic linear algebra, for any $f\in \SA_{n+m}$, we have that
\begin{equation}
\sum_k
\raisebox{-1.5cm}{
\begin{tikzpicture}
\begin{scope}[xscale=.25,yscale=.1]
\foreach \x in {0.5}{
\draw[fill,blue!20] (\x,0) arc (-180:180:2);
\draw[blue] (\x,6) arc (-180:180:2);
\draw[->,blue] (\x,6) arc (-180:-90:2);
\draw[blue] (\x,0) arc (-180:0:2);
\draw[->,blue] (\x,0) arc (-180:-90:2);
\draw[blue,dashed] (\x,0) arc (180:0:2);
}
\draw[blue] (.5,0)--(.5,6);
\draw[blue] (4.5,0)--(4.5,6);
\draw[thick] (2,-2)--(2,4);
\node at (2-.5,1) {$n$};
\node at (2.5,6) {$\alpha_k$};
\end{scope}
\begin{scope}[xscale=.25,yscale=.1,shift={(0,12)}]
\foreach \x in {0.5}{
\draw[fill,blue!20] (\x,12) arc (-180:180:2);
\draw[blue] (\x,12) arc (-180:180:2);
\draw[->,blue] (\x,12) arc (-180:-90:2);
\draw[blue] (\x,0) arc (-180:0:2);
\draw[->,blue] (\x,0) arc (-180:-90:2);
\draw[blue,dashed] (\x,0) arc (180:0:2);
}
\draw[blue] (.5,0)--(.5,12);
\draw[blue] (4.5,0)--(4.5,12);
\draw[thick] (2,-2)--(2,10);
\draw[fill,white] (4,5) arc (0:360:1.5 and 2);
\draw[->,blue] (4,5) arc (0:360:1.5 and 2);
\node at (2.5,5) {$f$};
\node at (2-.5,8) {$m$};
\node at (2-.5,1) {$n$};
\node at (2.5,0) {$\beta_k$};
\end{scope}
\end{tikzpicture}}
=\sum_{k'}
\raisebox{-1.5cm}{
\begin{tikzpicture}
\begin{scope}[xscale=.25,yscale=.1]
\foreach \x in {0.5}{
\draw[fill,blue!20] (\x,-6) arc (-180:180:2);
\draw[blue] (\x,6) arc (-180:180:2);
\draw[->,blue] (\x,6) arc (-180:-90:2);
\draw[blue] (\x,-6) arc (-180:0:2);
\draw[->,blue] (\x,-6) arc (-180:-90:2);
\draw[blue,dashed] (\x,-6) arc (180:0:2);
}
\draw[blue] (.5,-6)--(.5,6);
\draw[blue] (4.5,-6)--(4.5,6);
\draw[thick] (2,-8)--(2,4);
\draw[fill,white] (4,-2) arc (0:360:1.5 and 2);
\draw[->,blue] (4,-2) arc (0:360:1.5 and 2);
\node at (2.5,-2) {$f$};
\node at (2-.5,1) {$m$};
\node at (2-.5,-5) {$n$};
\node at (2.5,6) {$\alpha_{k'}$};
\end{scope}
\begin{scope}[xscale=.25,yscale=.1,shift={(0,12)}]
\foreach \x in {0.5}{
\draw[fill,blue!20] (\x,6) arc (-180:180:2);
\draw[blue] (\x,6) arc (-180:180:2);
\draw[->,blue] (\x,6) arc (-180:-90:2);
\draw[blue] (\x,0) arc (-180:0:2);
\draw[->,blue] (\x,0) arc (-180:-90:2);
\draw[blue,dashed] (\x,0) arc (180:0:2);
}
\draw[blue] (.5,0)--(.5,6);
\draw[blue] (4.5,0)--(4.5,6);
\draw[thick] (2,-2)--(2,4);
\node at (2-.5,1) {$m$};
\node at (2.5,0) {$\beta_{k'}$};
\end{scope}
\end{tikzpicture}}
\;.
\end{equation}
Therefore for a fixed $G_T$, $Z(T)$ is well-defined up to isotopy.
By basic linear algebra, for any $\alpha \in \PA_{n}$ and $\beta \in \PA_{n}^*$, we have that
\begin{equation} \label{Equ:contraction move}
\raisebox{-.5cm}{
\begin{tikzpicture}
\begin{scope}[xscale=.25,yscale=.1]
\foreach \x in {0.5}{
\node at (2.5,0) {$\beta$};
\node at (2.5,6) {$\alpha$};
\draw[blue] (\x,6) arc (-180:180:2);
\draw[->,blue] (\x,6) arc (-180:-90:2);
\draw[blue] (\x,0) arc (-180:0:2);
\draw[->,blue] (\x,0) arc (-180:-90:2);
\draw[blue,dashed] (\x,0) arc (180:0:2);
}
\draw[blue] (.5,0)--(.5,6);
\draw[blue] (4.5,0)--(4.5,6);
\draw[thick] (2,-2)--(2,4);
\node at (2-.5,1) {$n$};
\end{scope}
\end{tikzpicture}
}
=\sum_k
\raisebox{-1cm}{
\begin{tikzpicture}
\begin{scope}[xscale=.25,yscale=.1]
\foreach \x in {0.5}{
\node at (2.5,0) {$\beta$};
\draw[blue] (\x,6) arc (-180:180:2);
\draw[->,blue] (\x,6) arc (-180:-90:2);
\draw[blue] (\x,0) arc (-180:0:2);
\draw[->,blue] (\x,0) arc (-180:-90:2);
\draw[blue,dashed] (\x,0) arc (180:0:2);
}
\draw[blue] (.5,0)--(.5,6);
\draw[blue] (4.5,0)--(4.5,6);
\draw[thick] (2,-2)--(2,4);
\node at (2-.5,1) {$n$};
\node at (2.5,6) {$\alpha_k$};
\end{scope}
\begin{scope}[xscale=.25,yscale=.1,shift={(0,12)}]
\foreach \x in {0.5}{
\node at (2.5,6) {$\alpha$};
\draw[blue] (\x,6) arc (-180:180:2);
\draw[->,blue] (\x,6) arc (-180:-90:2);
\draw[blue] (\x,0) arc (-180:0:2);
\draw[->,blue] (\x,0) arc (-180:-90:2);
\draw[blue,dashed] (\x,0) arc (180:0:2);
}
\draw[blue] (.5,0)--(.5,6);
\draw[blue] (4.5,0)--(4.5,6);
\draw[thick] (2,-2)--(2,4);
\node at (2-.5,1) {$n$};
\node at (2.5,0) {$\beta_k$};
\end{scope}
\end{tikzpicture}}
\;.
\end{equation}
Thus $Z(T)$ is invariant under the contraction move, and it is independent of the choice of $G_T$. Therefore $Z(T)$ is well-defined for fully labelled surface tangles.
Applying the joint relation to a fully labelled tangle is equivalent to applying the inverse of the contraction move to the graph. Thus the joint relation is a relation for $Z$. Therefore we obtain an extension from $\PA_{\bullet}$ to $\SA_{\bullet}$.
\end{proof}
Consequently the general constructions of spherical planar algebras can be extended to surface algebras. For example,
\begin{corollary}\label{Cor: extension tensor}
Suppose a surface algebra $(\SA_{\bullet})_{k}$ is an extension of a planar algebra $(\PA_{\bullet})_{k}$ with sphere value $\zeta_k$, for $k=1,2$. Then
$(\SA_{\bullet})_1\otimes (\SA_{\bullet})_2$ is an extension of $(\PA_{\bullet})_1\otimes (\PA_{\bullet})_2$ with sphere value $\zeta_1\zeta_2$.
\end{corollary}
\begin{theorem}
Suppose a surface algebra $\SA_{\bullet}$ is an extension of a subfactor planar algebra $\PA_{\bullet}$ with sphere value $\zeta$. Then $\SA_{\bullet}$ is positive, if and only if $\zeta>0$.
\end{theorem}
\begin{proof}
We consider the genus-0 labelled tangle with one disc as a hemisphere.
The sphere is a composition of an unlabelled hemisphere and its mirror image, so $\zeta>0$ is necessary.
Conversely, if $\zeta>0$, then the partition function of $\SA_{\bullet}$ is positive on the sphere. By the joint relation \eqref{Equ:joint relation}, any labelled tangle is a sum of disjoint unions of hemispheres. The positivity follows.
\end{proof}
\section{Jones-Wassermann subfactors}
Each subfactor defines a subfactor planar algebra \cite{JonPA}. A subfactor planar algebra has an alternating shading. A subfactor is called symmetrically self-dual, if its subfactor planar algebra is unshaded, see \cite{LMP17} for further discussions and examples.
The Jones-Wassermann subfactor was first studied in the framework of conformal nets \cite{LonReh95,Was98,Xu00,KawLonMug01}.
Motivated by the reconstruction program from modular tensor categories (MTC), (cf. \cite{Tur16}), to conformal field theory (CFT),
Xu and the author have constructed $m$-interval Jones-Wassermann subfactors for modular tensor categories, and proved that these subfactors are symmetrically self-dual, called the modular self-duality for MTC \cite{LiuXu}. This provides a large family of unshaded planar algebras, where the input data is a modular tensor category.
We follow the notations in \cite{LiuXu}. Let $\C$ be a unitary modular tensor category and $Irr$ be the set of irreducible objects of $\C$.
For an object $X$, its dual object is denoted by $\overline{X}$. Its quantum dimension is $d(X)$.
Let $\displaystyle \mu=\sum_{X\in Irr} d(X)^2$ be the global dimension of $\C$.
Let $\SA_{\bullet}$ be the unshaded planar algebra of the $2$-interval Jones-Wassermann subfactor for $\C$, also known as the quantum double. By parity, $\SA_n$ is zero for odd $n$ \footnote{Since we begin with unshaded planar algebras in this paper, the vector space $\SA_{2n}$ here is $\SA_{n}$ of the subfactor planar algebra in \cite{LiuXu}.}.
The 2-interval Jones-Wassermann subfactor defines a Frobenius algebra in $\C \otimes \C^{op}$ \footnote{In \cite{LiuXu}, we considered $\C \otimes \C$ instead of $\C \otimes \C^{op}$, which is necessary in studying the $m$-interval Jones-Wassermann subfactor for all $m\geq 1$.
In this paper, we only deal with the case $m=2$. It is more convenient to work on $\C\otimes \C^{op}$.
The opposite map here corresponds to the map $\theta_2$ on $\C$ defined in \cite{LiuXu}. }.
The object $\gamma$ for the Frobenius algebra in $\C\otimes \C^{op}$ is
\begin{equation}
\gamma=\bigoplus_{X\in Irr} X \otimes X^{op}.
\end{equation}
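For instance, for the Fibonacci category, a standard example of a unitary modular tensor category with $Irr=\{1,\tau\}$ and $d(\tau)=\phi$ the golden ratio, we have
\begin{equation*}
\gamma=1\otimes 1^{op}\oplus \tau\otimes \tau^{op}, \qquad \mu=1+\phi^2=2+\phi.
\end{equation*}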
Since the planar algebra is unshaded, the object $\gamma$ can be further decomposed as $\gamma=\tau^2$ in $\SA_{\bullet}$, where $\tau$ is the object associated with a single string. Recall that $\delta$ is the value of a closed circle in $\SA_{\bullet}$; then the Jones index is $\delta^2=\mu$.
Moreover, the Hilbert space $\SA_{2n}$ is isomorphic to $\hom_{\C \otimes \C^{op}} (1,\tau^{2n})=\hom_{\C \otimes \C^{op}} (1,\gamma^{n})$.
Let $Irr^n$ be the $n^{\rm th}$ tensor power of $Irr$. Its elements are of the form $\vec{X}:=X_1\otimes \cdots \otimes X_{n}$.
Then $\displaystyle d(\vec{X})=\prod_{j=1}^{n} d(X_j)$.
Let $ONB(\vec{X})$ be an orthonormal basis of $\hom_{\C}(1,\vec{X})$.
Following the construction in \cite{LiuXu}, the partition function of the following planar diagram in $\SA_{2n}$ is given by
\begin{equation} \label{Equ:spider}
\begin{tikzpicture}
\begin{scope}[scale=.8]
\draw (.5,0) arc (180:0:.5);
\draw (2,0) arc (180:0:.5);
\draw (3.5,0) arc (180:0:.5);
\draw (5,0) arc (0:180: 2.5 and 1);
\node at (2.5,0.2) {$\cdots$};
\draw[blue] (-.5,0) rectangle (5.5,1.5);
\draw[->,blue] (-.5,1.5)--(-.5,.75);
\draw[->] (6,.7) to (7,.7);
\node at (6.5,1) {$Z$};
\node at (12,.5) {$\displaystyle \delta^{\frac{n}{2}} \mu_n:= \delta^{1-\frac{n}{2}} \sum_{\vec{X} \in Irr^n} d(\vec{X})^{\frac{1}{2}} \sum_{\alpha \in ONB(\vec{X})} \alpha \otimes \alpha^{op}.$};
\end{scope}
\end{tikzpicture}
\end{equation}
The vector space $\SA_4$ is isomorphic to $\hom_{\C \otimes \C^{op}} (1,\tau^4)$. By Frobenius reciprocity,
we can identify the Hilbert space $\SA_4$ as
\begin{equation}
\hom_{\C \otimes \C^{op}} (\tau^2,\tau^2)=\bigoplus_{X\in Irr} \hom_{\C \otimes \C^{op}} (X_D, X_D) \cong L^2(Irr),
\end{equation}
where $X_D=X \otimes X^{op}$.
It is considered as the 1-quon space for quantum information \cite{JLW-Quon}.
Take
\begin{equation} \label{Equ:basis}
\beta_X=d(X)^{-1}1_{X_D},
\end{equation}
where $1_{X_D}$ is the identity map in $\hom_{\C \otimes \C^{op}} (X_D, X_D)$. Then
$\{ \beta_X \}_{ X \in Irr}$ form an ONB of the 1-quon space, called the {\it quantum coordinate} \cite{LiuXu}.
\begin{notation}
We denote the bra-ket notation for the 1-quon $\displaystyle \sum_{X\in Irr} c_X \beta_X$ by $\displaystyle \sum_{X \in Irr} c_X \ket{X}$.
\end{notation}
The modular transformation $S$ of an MTC is originally defined by a Hopf link\footnote{The entries of the $S$ matrix are defined by the value of a Hopf link in an MTC, usually denoted by $S_{X,Y}$. Here we write it as $S_{X}^{Y}$ while considering it as a matrix on 1-quons.}.
The Fourier transform on subfactors was introduced by Ocneanu in terms of paragroups \cite{Ocn88}.
In planar algebras, it turns out to be a one-string rotation of the diagram, called the string Fourier transform (SFT), denoted by $\FS$.
In general, the SFT will change the shading of diagrams in a subfactor planar algebra. It is crucial that the planar algebra $\SA_{\bullet}$ of the Jones-Wassermann subfactor is unshaded, so that the SFT is defined on each $\SA_{n}$, $n\geq 0$.
Furthermore, Xu and the author proved in \cite{LiuXu} that the action of $\FS$ on the quantum coordinate of the 1-quon space is the $S$ matrix:
\begin{proposition}\label{Prop:SFT}
On the ONB $\{ \beta_X\}_{X\in Irr}$ of $\SA_4$, the SFT $\FS$ is the modular $S$ matrix, i.e.,
\begin{equation} \label{Equ:Fourier S}
\FS(\ket{X})=\sum_{Y\in Irr} S_{X}^{Y}\ket{Y} .
\end{equation}
\end{proposition}
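For example, for the Ising category, a standard unitary modular tensor category with $Irr=\{1,\sigma,\psi\}$, the modular $S$ matrix on the quantum coordinate is
\begin{equation*}
S=\frac{1}{2}\begin{pmatrix}1 & \sqrt{2} & 1\\ \sqrt{2} & 0 & -\sqrt{2}\\ 1 & -\sqrt{2} & 1\end{pmatrix},
\end{equation*}
so, for instance, $\FS(\ket{\sigma})=\frac{\sqrt{2}}{2}\left(\ket{1}-\ket{\psi}\right)$.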
\section{Fourier duality on 1-quons}\label{Sec:Fourier duality on 1-quons}
A quon $x$ in $\SA_4$ is represented by a labelled tangle which has one output disc with 4 marked points on its boundary. We modify the shape of the disc to a square and represent $x$ as follows:
\begin{center}
\begin{tikzpicture}
\begin{scope}[xscale=.25,yscale=.25]
\draw (-2,-2)--(2,2);
\draw (-2,2)--(2,-2);
\draw[blue] (-2,-2) rectangle (2,2);
\fill[white] (-1,-1) rectangle (1,1);
\draw[blue] (-1,-1) rectangle (1,1);
\draw[->,blue] (-1,1)--(-1,0);
\draw[->,blue] (-2,2)--(-2,0);
\node at (0,0) {$x$};
\end{scope}
\end{tikzpicture}
\end{center}
The outside region belongs to the output disc, when we consider it as a genus-0 labelled tangle.
For quons $x,y \in \SA_4$, we can compose the square-like labelled tangles vertically or horizontally:
\begin{center}
\begin{tikzpicture}
\begin{scope}[xscale=.25,yscale=.25]
\draw (-2,-2)--(-1,-1);
\draw (2,-2)--(1,-1);
\draw (-1,1)--++(0,1);
\draw (1,1)--++(0,1);
\draw (-2,5)--(-1,4);
\draw (2,5)--(1,4);
\draw[blue] (-2,-2) rectangle (2,5);
\fill[white] (-1,-1) rectangle (1,1);
\draw[blue] (-1,-1) rectangle (1,1);
\fill[white] (-1,2) rectangle (1,4);
\draw[blue] (-1,2) rectangle (1,4);
\draw[->,blue] (-1,1)--(-1,0);
\draw[->,blue] (-1,4)--(-1,3);
\draw[->,blue] (-2,5)--(-2,1.5);
\node at (0,0) {$x$};
\node at (0,3) {$y$};
\end{scope}
\node at (1,0) {$,$};
\node at (4,0) {$.$};
\begin{scope}[shift={(2,.5)},rotate=-90,xscale=.25,yscale=.25]
\draw (-2,-2)--(-1,-1);
\draw (2,-2)--(1,-1);
\draw (-1,1)--++(0,1);
\draw (1,1)--++(0,1);
\draw (-2,5)--(-1,4);
\draw (2,5)--(1,4);
\draw[blue] (-2,-2) rectangle (2,5);
\fill[white] (-1,-1) rectangle (1,1);
\draw[blue] (-1,-1) rectangle (1,1);
\fill[white] (-1,2) rectangle (1,4);
\draw[blue] (-1,2) rectangle (1,4);
\draw[->,blue] (-1,-1)--(0,-1);
\draw[->,blue] (-1,2)--(0,2);
\draw[->,blue] (-1,-2)--(0,-2);
\node at (0,0) {$x$};
\node at (0,3) {$y$};
\end{scope}
\end{tikzpicture}
\end{center}
Both operations define associative multiplications on $\SA_4$.
We call the vertical composition the multiplication of $x$ and $y$, denoted by $xy$.
We call the horizontal composition the convolution of $x$ and $y$, denoted by $x*y$\footnote{The horizontal multiplication is usually called the coproduct on subfactor planar algebras.}.
Furthermore, the SFT is given by the following $90^\circ$ rotation
\begin{center}
\begin{tikzpicture}
\begin{scope}[xscale=.25,yscale=.25]
\draw (-2,-2)--(2,2);
\draw (-2,2)--(2,-2);
\draw[blue] (-2,-2) rectangle (2,2);
\fill[white] (-1,-1) rectangle (1,1);
\draw[blue] (-1,-1) rectangle (1,1);
\draw[->,blue] (1,1)--(0,1);
\draw[->,blue] (-2,2)--(-2,0);
\end{scope}
\end{tikzpicture}
\end{center}
It intertwines the two multiplications,
\begin{equation} \label{Equ:Fourier duality}
\FS(xy)=\FS(x)*\FS(y).
\end{equation}
This is a cornerstone of the pictorial Fourier duality.
Let us consider the 1-quon space $\SA_4 \cong L^2(Irr)$ as functions on the quantum coordinates.
Then we have the following formulas for the multiplication and the convolution.
\begin{proposition}[Multiplication]
For $X,Y \in Irr$,
\begin{equation} \label{Equ:multiplication}
\ket{X}\ket{Y}=\delta_{X,Y} d(X)^{-1} \ket{X}.
\end{equation}
\end{proposition}
\begin{proof}
It follows from Equation \eqref{Equ:basis}.
\end{proof}
\begin{proposition}[Convolution]
For $X,Y \in Irr$,
\begin{equation} \label{Equ:convolution}
\ket{X}*\ket{Y}=\delta^{-1}\sum_{W \in Irr }N_{X,Y}^W \ket{W},
\end{equation}
where $N_{X,Y}^{W}=\dim\hom(W, X\otimes Y)$.
\end{proposition}
\begin{proof}
It follows from Equation \eqref{Equ:spider}.
\end{proof}
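For instance, for the Fibonacci category with $Irr=\{1,\tau\}$, $d(\tau)=\phi$ the golden ratio and fusion rule $\tau\otimes\tau=1\oplus\tau$, Equations \eqref{Equ:multiplication} and \eqref{Equ:convolution} read
\begin{equation*}
\ket{\tau}\ket{\tau}=\phi^{-1}\ket{\tau}, \qquad \ket{\tau}*\ket{\tau}=\delta^{-1}\left(\ket{1}+\ket{\tau}\right).
\end{equation*}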
The matrix $N_X=N_{X,-}^{-}$ is called the adjacency matrix or the fusion matrix. Verlinde first proposed that the modular transformation $S$ diagonalizes the fusion rules \cite{Ver88}.
The Fourier duality of 1-quons gives a conceptual explanation of this result.
\begin{theorem}[Verlinde formula]
For any $X\in Irr$,
\begin{equation} \label{Equ:Verlinde formula}
\delta^{-1}SN_XS^{-1}=\sum_{Y \in Irr} S_{X}^{Y} d(Y)^{-1} \delta_Y,
\end{equation}
where $\delta_Y$ is the projection onto $\mathbb{C}\beta_Y$.
\end{theorem}
\begin{proof}
By Equations \eqref{Equ:multiplication}, \eqref{Equ:convolution}, \eqref{Equ:Fourier duality} and \eqref{Equ:Fourier S},
\begin{align*}
\delta^{-1}SN_XS^{-1} (S\ket{W})
=&S(\ket{X}*\ket{W})\\
=&(S\ket{X})(S\ket{W})\\
=& \sum_{Y \in Irr} S_{X}^{Y} d(Y)^{-1} \delta_Y (S\ket{W}).
\end{align*}
Since $\{S\ket{W}\}_{W\in Irr}$ form an ONB of $\SA_4$, we obtain Equation \eqref{Equ:Verlinde formula}.
\end{proof}
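The diagonalization in Equation \eqref{Equ:Verlinde formula} can also be checked directly. The sketch below is a hypothetical illustration using the Fibonacci category ($Irr=\{1,\tau\}$, $d(\tau)=\varphi$, $\delta^2=2+\varphi$), whose $S$ matrix is real, symmetric and orthogonal, so $S^{-1}=S$.

```python
# Numerical check of delta^{-1} S N_X S^{-1} = sum_Y S_X^Y d(Y)^{-1} delta_Y
# for X = tau in the Fibonacci category (a hypothetical example).
import math

phi = (1 + math.sqrt(5)) / 2
delta = math.sqrt(2 + phi)
d = [1.0, phi]
S = [[1/delta, phi/delta], [phi/delta, -1/delta]]
N_tau = [[0, 1], [1, 1]]                 # fusion matrix of tau

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

# S is real, symmetric and orthogonal here, so S^{-1} = S.
lhs = [[x / delta for x in row] for row in matmul(matmul(S, N_tau), S)]
# Right-hand side: diagonal matrix with entries S_tau^Y / d(Y)
rhs = [[S[1][0] / d[0], 0.0], [0.0, S[1][1] / d[1]]]

assert all(abs(lhs[i][j] - rhs[i][j]) < 1e-12
           for i in range(2) for j in range(2))
```

The two eigenvalues recovered on the diagonal are $\varphi/\delta$ and $-1/(\varphi\delta)$, matching $S_\tau^Y d(Y)^{-1}$ for $Y=1,\tau$.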
Now we give another application of the Fourier duality on 1-quons.
The set $Irr$ of irreducible objects of $\C$ forms a fusion ring under the direct sum $\oplus$ and the tensor $\otimes$. For any subset $K \subset Irr$, we define its indicator function as
\begin{equation}
P_K=\sum_{X\in K} 1_{X_D}.
\end{equation}
Then $P_K$ is a projection in $\SA_4 \cong L^2(Irr)$. This is a bijection between subsets of $Irr$ and projections in $\SA_4$.
Let us define $SUB_{\otimes}=\{K\subset Irr | K \text{ is closed under } \otimes \}$.
\begin{theorem}\label{Thm:PK}
Take $K\subset Irr$, then $K\in SUB_{\otimes}$ iff $P_K$ is a biprojection. Consequently, if $K$ is closed under $\otimes$, then it is also closed under taking duals.\end{theorem}
\begin{proof}
By Equation \eqref{Equ:convolution}, if $P_K$ is a biprojection, then $K$ is closed under the tensor and the dual.
The converse statement follows from Theorem 4.12 in \cite{Liuex}.
\end{proof}
When $K\in SUB_{\otimes}$, we define $\C_K$ to be the full fusion subcategory of $\C$ whose simple objects are given by $K$.
This is a bijection between $SUB_{\otimes}$ and full fusion subcategories of $\C$.
By Theorem \ref{Thm:PK}, we obtain a bijection between full fusion subcategories $\{\C_K \}$ of $\C$ and biprojections $\{P_K\}$ of $\SA_4$.
Let $\Supp(x)$ be the trace of the support of $x$. Let $\dim \C_K$ be the global dimension of $\C_K$. Then
\begin{equation}
\dim \C_K:=\sum_{X\in K} d(X)^2=\Supp(P_K).
\end{equation}
By Theorem \ref{Thm:PK}, $P_K$ is a biprojection for $K\in SUB_{\otimes}$, so $\FS(P_K)$ is a multiple of a biprojection $P_{\hat{K}}$, for some $\hat{K}\in SUB_{\otimes}$. We call $\hat{K}$ and $P_{\hat{K}}$ the Fourier duals of $K$ and $P_{K}$ respectively. Since $\FS^2(P_K)=P_K$, the double dual is the identity.
Moreover, we obtain a full subcategory $\C_{\hat{K}}$ of $\C$ that we call the Fourier dual of $\C_K$.
Then
\begin{equation}
\dim \C_K \dim \C_{\hat{K}}=\Supp(P_K)\Supp(\FS(P_K))=\delta^2. \footnote{In general, we have the Donoho-Stark uncertainty principle $\Supp(x)\Supp(\FS(x))\leq \delta^2$, see \cite{JLW16}.}
\end{equation}
The Hausdorff-Young inequality for subfactor planar algebras was proved in \cite{JLW16}. Applying the $\infty$-$1$ Hausdorff-Young inequality to the $S$ matrix, we recover an important inequality for unitary MTCs proved by Gannon \cite{Gan05}:
\begin{align*}
\|\FS(\beta_Y)\|_1 &\leq \delta^{-1} \|\beta_Y\|_{\infty} \\
\Rightarrow \left|\frac{S_X^Y}{S_{X}^0}\right| &\leq \frac{S_{0}^Y}{S_{0}^0}.
\end{align*}
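The resulting inequality is easy to verify on explicit modular data. The following Python sketch is a hypothetical illustration on the Fibonacci $S$ matrix (these values are assumptions of the example, not taken from the text).

```python
# Check |S_X^Y / S_X^0| <= S_0^Y / S_0^0 on the Fibonacci S matrix
# (a hypothetical example).
import math

phi = (1 + math.sqrt(5)) / 2
delta = math.sqrt(2 + phi)
S = [[1/delta, phi/delta], [phi/delta, -1/delta]]

for X in range(2):
    for Y in range(2):
        # the bound is saturated for X = 0 and for Y = 0
        assert abs(S[X][Y] / S[X][0]) <= S[0][Y] / S[0][0] + 1e-12
```

For $X=Y=\tau$ the ratio is $1/\varphi$, strictly below the bound $\varphi$.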
M\"{u}ger's center $C\C_{K}$ of $\C_{K}$ in $\C$ was introduced in \cite{Mug03}.
It is defined as a full subcategory whose simple objects are given by
\begin{equation}
\left\{X \in Irr \left| \quad \frac{S_X^Y}{S_{X}^0} = \frac{S_{0}^Y}{S_{0}^0},~ \forall~ Y\in K \right. \right\}.
\end{equation}
\begin{theorem}
For $K \in SUB_{\otimes}$,
\begin{equation}
\hat{K}=\left\{X \in Irr \left| \quad \frac{S_X^Y}{S_{X}^0} = \frac{S_{0}^Y}{S_{0}^0},~ \forall~ Y\in K \right. \right\}.
\end{equation}
Therefore $\C_{\hat{K}}$ is M\"{u}ger's center $C\C_{K}$.
\end{theorem}
\begin{proof}
It is a special case of Theorem 4.21 in \cite{Liuex}.
\end{proof}
We summarize the results of this section in Table \ref{Table:Fourier duality on 1-quons}.
We recover several results about M\"{u}ger's center from a different point of view.
\section{Graphic quons}
\subsection{Definitions}
In this section, we extend the unshaded subfactor planar algebra $\SA_{\bullet}$ to a surface algebra by Theorem \ref{Thm:unique extension}, still denoted by $\SA_{\bullet}$. We consider $\zeta:=Z(S_0)$ as a free variable. We study $n$-quons through the surface algebra, particularly the ones represented by surface tangles.
Recall that $\SA_4$ is the space of 1-quons. Take its $n^{\rm th}$ tensor power $(\SA_4)^n$ to be the space of $n$-quons.
Let us denote by $Q_n^m:=\hom((\SA_4)^m,(\SA_4)^n)$ the space of transformations from $m$-quons to $n$-quons. We omit the index when it is zero.
For $\vec{X}=X_1\otimes \cdots \otimes X_n \in Irr^n$, we define $\beta_{\vec{X}}=\beta_{X_1}\otimes \cdots \otimes \beta_{X_n}$.
Then $\{\beta_{\vec{X}}\}_{\vec{X}\in Irr^n}$ form an ONB of $Q_n$.
\begin{notation}
We denote the bra-ket notation for the $n$-quon $\beta_{\vec{X}}$ by $\ket{\vec{X}}$ and $\ket{\vec{X}}=\ket{X_1\cdots X_n}$. The bra-ket notation for a transformation in $Q_n^m$ is given by $\displaystyle \sum_{\vec{Y} \in Irr^m} \sum_{\vec{X} \in Irr^n} c_{\vec{X}}^{\vec{Y}} \ket{\vec{X}}\bra{\vec{Y}}$.
\end{notation}
By the commutative diagram \eqref{Equ:reflection}, when we reverse the orientation of a disc of a surface tangle, we switch $\bra{X}$ and $\ket{X}$ in its partition function. One can consider this as Frobenius reciprocity.
When we use the bra-ket notation for $n$-quons, we have an order on the tensor factors. Thus we also order the discs of surface tangles from 1 to $n$.
A change of the order corresponds to the action of a permutation on the tensor factors.
\begin{notation}
Let $LT_n^m$ be the set of labelled surface tangles with $m$ input discs and $n$ output discs, so that each disc has four boundary points.
\end{notation}
Then the partition function $Z$ is a surjective map from $LT_n^m$ to $Q_n^m$.
\begin{definition}
For a genus-$g$ labelled tangle $T$ in $LT_n^{m}$, we define the normalized quon $\ket{T}$ by
\begin{equation}
\ket{T}:=Z(S_g)^{-1} Z(T).
\end{equation}
\end{definition}
By Theorem \ref{Thm:unique extension}, the extension from spherical planar algebra to surface algebras is unique up to the choice of $\zeta=Z(S_0)$.
\begin{proposition}
The normalized quon $\ket{T}$ is independent of the choice of $\zeta$.
\end{proposition}
\begin{proof}
It follows from the joint relation \eqref{Equ:joint relation}.
\end{proof}
\begin{definition}
Let $T_n^m$ be the subset of $LT_n^m$ consisting of surface tangles.
We call $GQ_n^m:=Z(T_n^m)$ the space of graphic quon transformations and $GQ_n:=Z(T_n)$ the space of graphic $n$-quons.
\end{definition}
\subsection{From graphs to graphic quons}
Let $G_n$ be the set of oriented graphs on a surface whose edges are ordered from 1 to $n$. For $\Gamma \in G_n$, let us construct a surface tangle $T_\Gamma \in T_n$: We replace each oriented edge of $\Gamma$ by an output disc with four marked points; we replace each $k$-valent vertex of $\Gamma$ by a planar diagram with $2k$ boundary points as follows,
\begin{center}
\begin{tikzpicture}
\draw (0,0)--(0,1);
\draw[->] (0,0)--(0,.5);
\node at (1,.5) {$\to$};
\begin{scope}[shift={(2,0)}]
\fill[blue!20] (0,0) rectangle (1,1);
\draw[blue] (0,0) rectangle (1,1);
\draw[->,blue] (0,0)--(0,.5);
\end{scope}
\begin{scope}[shift={(-3,-1)}]
\foreach \x in {0,1,2,3}{
\draw (1.5,.8)--(\x,0);
}
\fill (1.5,.8) circle (.1);
\node at (1.5,.16) {$\cdots$};
\node at (4,.4) {$\to$};
\begin{scope}[shift={(5,0)},scale=.8]
\draw (.5,0) arc (180:0:.5);
\draw (2,0) arc (180:0:.5);
\draw (3.5,0) arc (180:0:.5);
\draw (5,0) arc (0:180: 2.5 and 1);
\node at (2.5,0.2) {$\cdots$};
\end{scope}
\end{scope}
\end{tikzpicture} ~~~.
\end{center}
Moreover, we obtain a graphic quon $\ket{T_\Gamma}$.
We can also define $\ket{T_\Gamma}$ directly in $\C \otimes \C^{op}$: Each $k$-valent vertex of $\Gamma$ is replaced by the rotationally invariant morphism
$\delta^{\frac{k}{2}}\mu_k \in \hom_{\C\otimes \C^{op}}(1,\gamma^k)$ in Equation \eqref{Equ:spider}. Each edge is an output disc with two marked points. The target space of each output disc is $\hom(\gamma,\gamma) \cong L^2(Irr)$.
In general, when we identify the string labelled by the Frobenius algebra $\gamma$ as a pair of parallel strings, it requires a shading in the middle \footnote{This identification is a classical trick in subfactor theory. The alternating shading is essential in the study of subfactor planar algebras.}.
Here we can lift the shading by the modular self-duality of MTCs. This is used in a crucial way in \S \ref{Sec:Fourier duality}.
\begin{definition}
A non-zero $n$-quon is called positive if all of its coefficients are non-negative.
\end{definition}
\begin{proposition}\label{Prop:positive}
For any $\Gamma \in G_n$, $\ket{T_\Gamma}$ is positive. Equivalently, its dual $\bra{T_\Gamma}$ is a positive linear functional on the tensor power of $L^2(Irr)$.
\end{proposition}
\begin{proof}
For any $\vec{X} \in Irr^{n}$, we label the $j^{\rm th}$ edge of $\Gamma$ by $\beta_{X_j}=d(X_j)^{-1} 1_{X_j} \otimes 1_{X_j}^{op}$. We label each $k$-valent vertex of $\Gamma$ by $\delta^{\frac{k}{2}}\mu_k$. Since $\mu_k$ is a positive linear combination of terms $\alpha \otimes \alpha^{op}$ by Equation \eqref{Equ:spider}, it is enough to show that for each choice of $\alpha \otimes \alpha^{op}$ the value is non-negative. For each choice, we obtain a fully labelled surface tangle in $\C$ and its opposite in $\C^{op}$. Thus the value is the product of a complex-conjugate pair, which is non-negative. Therefore $\bra{T_\Gamma} \vec{X} \rangle \geq 0$.
\end{proof}
If the graph $\Gamma$ is connected, then $\ket{T_{\Gamma}}$ is usually entangled with respect to any bipartition.
So we call $\bra{T_\Gamma}$ a {\it topologically entangled measurement} on quons.
\begin{definition}
For $\Gamma \in G_n$, we define $\overline{\Gamma} \in G_n$ by reversing the orientations of all edges of $\Gamma$.
\end{definition}
\begin{proposition}\label{Prop:Z2}
For any $\Gamma \in G_n$,
\begin{equation}
\ket{T_{\overline{\Gamma}}}=\ket{T_\Gamma}.
\end{equation}
\end{proposition}
\begin{proof}
For any $\vec{X} \in Irr^n$,
\begin{align*}
&\langle \vec{X} \ket{T_{\overline{\Gamma}}}
=\langle \overline{\vec{X}} \ket{T_\Gamma}
=\overline{\langle \vec{X} \ket{T_\Gamma}}
=\langle \vec{X} \ket{T_\Gamma},
\end{align*}
where $\overline{\vec{X}}=\overline{X_1}\otimes \cdots \otimes \overline{X_n}$ and
the last equality follows from Proposition \ref{Prop:positive}. So $\ket{T_{\overline{\Gamma}}}=\ket{T_\Gamma}$.
\end{proof}
There are many interesting graphs on surfaces. The symmetry of (oriented) graphs leads to the symmetry of graphic quons.
For example, there are five Platonic solids on the sphere: the tetrahedron, cube, octahedron, dodecahedron and icosahedron.
Their numbers of edges are 6, 12, 12, 30 and 30 respectively.
The value of the tetrahedron in $\C$ is well known as the $6j$ symbol. Here we are working in $\C \otimes \C^{op}$, so the value becomes the absolute square of the $6j$ symbol. If the dimension of the hom space is not 1, then we need to sum these $6j$-symbol squares over an ONB. The sum is a good quantity for understanding the global properties of $6j$ symbols, as it is independent of the choice of the ONB.
We take an oriented tetrahedron and order the edges from 1 to 6 as shown in Fig.~\ref{Fig: tetrahedron}.
\begin{figure}\label{Fig: tetrahedron}
\begin{tikzpicture}
\coordinate (O) at (0,0);
\foreach \x in {0,1,2}
{
\coordinate (A\x) at ({cos(120*\x)},{sin(120*\x)});
\coordinate (C\x) at ({(cos(120*\x)+cos(120*(\x+1)))/2},{(sin(120*\x)+sin(120*(\x+1)))/2});
\coordinate (D\x) at ({(cos(120*\x)+cos(120*(\x+1)))/1.5},{(sin(120*\x)+sin(120*(\x+1)))/1.5});
\draw (O)--(A\x);
\draw[->,dashed] (A\x)--(C\x);
\draw[white] (.5,0) arc (0:120*\x:.5) coordinate (B\x);
\draw[->] (O)--(B\x);
\draw[white] (.5,0) arc (0:120*\x+20:.5) coordinate (E\x);
}
\node at (D0) {$1$};
\node at (D1) {$2$};
\node at (D2) {$3$};
\node at (E0) {$4$};
\node at (E1) {$5$};
\node at (E2) {$6$};
\draw[dashed] (A0) -- (A1) -- (A2) -- (A0);
\end{tikzpicture}.
\caption{Tetrahedron: The first three edges are outside and the last three edges are inside. They are ordered by the angle from $0^{\circ}$ to $360^{\circ}$.
The first three edges are oriented anti-clockwise; the last three edges are oriented outwards.
We consider the tetrahedron as a graph on the sphere. Dashed lines indicate that the first three edges are at the back of the sphere. }
\end{figure}
We denote this graph by $\Gamma_6$.
Then we obtain a 6-quon in terms of $6j$-symbol squares, denoted by
\begin{equation}\label{Equ:6j}
\ket{T_{\Gamma_6}}=
\sum_{\vec{X}\in Irr^6}
\left|{{{X_{1}X_{2}X_{3}}\choose{X_{4}X_{5}X_{6}}}}\right|^{2} \ket{\vec{X}}.
\end{equation}
For a general MTC $\C$, it can be difficult to compute the coefficients of these graphic quons. Indeed, closed forms of $6j$ symbols are known only for a few examples. Nevertheless, we can manipulate these graphic quons pictorially through their graphic definition, even when we do not know closed algebraic forms for their coefficients.
\subsection{GHZ and Max}\label{Sec: GHZ Max}
Greenberger, Horne and Zeilinger introduced a multipartite resource state for quantum information, called the GHZ state, denoted by $\GHZ$ \cite{GHZ}.
In \cite{JafLiuWoz-HS}, Jaffe, Wozniakowski and the author found another resource state following topological intuition, called $\Max$. Both generalize the Bell state.
For the $3$-qubit case,
\begin{align*}
\GHZ&=2^{-1/2} (|000\rangle+|111\rangle), \\
\Max&=2^{-1}(|000\rangle+|011\rangle+|101\rangle+|110\rangle).
\end{align*}
We observe that $\GHZ$ and $\Max$ are Fourier duals of each other:
\begin{equation}
\Max=(F\otimes F\otimes F)^{\pm1}\GHZ,
\end{equation}
where $F$ is the discrete Fourier transform.
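This observation can be verified by a direct computation. The following Python sketch checks the $3$-qubit case, where $F$ is the $2\times 2$ discrete Fourier transform (the Hadamard matrix up to this normalization); the encoding of basis states as bit patterns is an assumption of the example.

```python
# Check that Max = (F x F x F) GHZ for three qubits, with F the 2x2
# discrete Fourier transform (a direct computation of the observation above).
import math
from itertools import product

F = [[1/math.sqrt(2), 1/math.sqrt(2)],
     [1/math.sqrt(2), -1/math.sqrt(2)]]

# amplitudes indexed by the 3-bit string ijk
GHZ = [0.0] * 8
GHZ[0b000] = GHZ[0b111] = 1/math.sqrt(2)

Max = [0.0] * 8
for b in (0b000, 0b011, 0b101, 0b110):
    Max[b] = 1/2

# apply F to each tensor factor of GHZ
out = [0.0] * 8
for i, j, k in product(range(2), repeat=3):
    for a, b, c in product(range(2), repeat=3):
        out[(i << 2) | (j << 1) | k] += (F[i][a] * F[j][b] * F[k][c]
                                         * GHZ[(a << 2) | (b << 1) | c])

assert all(abs(x - y) < 1e-12 for x, y in zip(out, Max))
```

Since $F$ is real and $F^2=1$ here, the same computation also checks the inverse direction $(F^{\otimes 3})^{-1}\GHZ=\Max$.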
In tensor networks, $\GHZ$ and $\Max$ are represented as two trivalent vertices:
\begin{tikzpicture}
\begin{scope}[shift={(0,0)},xscale=-.5,yscale=.5]
\draw (0,0)--(-1,-1);
\draw (0,0)--(0,-1);
\draw (0,0)--(1,-1);
\draw[black] (0,0) circle (.2);
\fill[black] (0,0) circle (.2);
\end{scope}
\end{tikzpicture}
and
\begin{tikzpicture}
\begin{scope}[shift={(0,0)},xscale=-.5,yscale=.5]
\draw (0,0)--(-1,-1);
\draw (0,0)--(0,-1);
\draw (0,0)--(1,-1);
\fill[white] (0,0) circle (.2);
\draw[black] (0,0) circle (.2);
\end{scope}
\end{tikzpicture}.
They have been considered as two fundamental tensors in \cite{Laf03}, see also \cite{JLW-Quon,Bia17,Coe17}.
It is shown in \cite{JLW-Quon} that $\GHZ$ and $\Max$ are graphic quons, and the corresponding surface tangles are given by
\begin{center}
\begin{tikzpicture}
\begin{scope}[scale=.25]
\draw[blue] (.5,6.5) circle (4);
\foreach \x in {0,1}{
\foreach \y in {0,1}{
\foreach \u in {0}{
\foreach \v in {1,2,3}{
\coordinate (A\u\v\x\y) at (\x+1.5*\u,\y+3*\v);
}}}}
\foreach \u in {0}{
\foreach \v in {1,2,3}{
\fill[blue!20] (A\u\v00) rectangle (A\u\v11);
\draw (A\u\v00) rectangle (A\u\v11);
\draw[->] (A\u\v00)--++(0,.5);
\node at (.5,3*\v+.5) {\v};
}}
\draw (A0101) to [bend left=30] (A0200);
\draw (A0201) to [bend left=30] (A0300);
\draw (A0301) to [bend left=-30] (A0100);
\draw (A0111) to [bend left=-30] (A0210);
\draw (A0211) to [bend left=-30] (A0310);
\draw (A0311) to [bend left=30] (A0110);
\node at (7,6) {and};
\node at (20,5) {.};
\begin{scope}[shift={(20,6)},rotate=90]
\draw[blue] (.5,6.5) circle (4);
\foreach \x in {0,1}{
\foreach \y in {0,1}{
\foreach \u in {0}{
\foreach \v in {1,2,3}{
\coordinate (A\u\v\x\y) at (\x+1.5*\u,\y+3*\v);
}}}}
\foreach \u in {0}{
\foreach \v in {1,2,3}{
\fill[blue!20] (A\u\v00) rectangle (A\u\v11);
\draw (A\u\v00) rectangle (A\u\v11);
\draw[->] (A\u\v01)--++(.5,0);
\node at (.5,3*\v+.5) {\v};
}}
\draw (A0101) to [bend left=30] (A0200);
\draw (A0201) to [bend left=30] (A0300);
\draw (A0301) to [bend left=-30] (A0100);
\draw (A0111) to [bend left=-30] (A0210);
\draw (A0211) to [bend left=-30] (A0310);
\draw (A0311) to [bend left=30] (A0110);
\end{scope}
\end{scope}
\end{tikzpicture}
\end{center}
Inspired by this observation, we generalize $\GHZ$ and $\Max$ to $n$-quons on genus-$g$ surfaces for the MTC $\C$.
\begin{definition}
Let us define the genus-$g$ tangles $GHZ_{n,g}$ and $Max_{n,g}$ in $T_n$ as follows:
\begin{equation}
GHZ_{n,g}=
\raisebox{-1cm}{
\begin{tikzpicture}
\pgftransformcm{1}{0}{0}{1}{}
\begin{scope}[scale=.6]
\begin{scope}[shift={(0,0,-1)}]
\foreach \x in {1,2,3,4,5,6}
{
\draw[dashed] (2*\x,0)--++(0,1);
\draw[dashed] (2*\x+1,0)--++(0,1);
}
\foreach \x in {1,2,3}
{
\draw[dashed] (2*\x+1,0)--++(0,1);
}
\foreach \x in {1,2,3,4,5}
{
\draw[dashed] (2*\x+1,1)--++(1,0);
}
\draw[dashed] (2,0)--++(0,2);
\draw (2,2)--++(11,0)--++(0,-3);
\foreach \x in {4,5}
{
\draw[dashed] (2*\x+1,0)--++(1,0);
}
\draw[dashed] (2*4,0)--++(0,-1)--++(5,0)--++(0,1);
\end{scope}
\begin{scope}
\foreach \x in {1,2,3,4,5,6}
{
\draw (2*\x,0)--++(0,1);
\draw (2*\x+1,0)--++(0,1);
}
\foreach \x in {1,2,3}
{
\fill[blue!20] (2*\x,0)--++(1,0)--++(0,0,-1)--++(-1,0);
\draw[blue] (2*\x,0)--++(1,0)--++(0,0,-1);
\draw[->,blue] (2*\x,0)--++(.5,0);
\draw[blue,dashed] (2*\x,0)--++(0,0,-1)--++(1,0);
\draw (2*\x+1,0)--++(0,1);
\node at (2*\x+.5,0,-.5) {\x};
}
\foreach \x in {1,2,3,4,5}
{
\draw (2*\x+1,1)--++(1,0);
}
\draw (2,0)--++(0,2)--++(11,0)--++(0,-2);
\foreach \x in {4,5}
{
\draw (2*\x+1,0)--++(1,0);
}
\draw (2*4,0)--++(0,-1)--++(5,0)--++(0,1);
\end{scope}
\end{scope}
\end{tikzpicture}
}.
\end{equation}
\begin{equation}
Max_{n,g}=
\raisebox{-1cm}{
\begin{tikzpicture}
\pgftransformcm{1}{0}{0}{1}{}
\begin{scope}[scale=.6]
\begin{scope}[shift={(0,0,-1)}]
\foreach \x in {1,2,3,4,5,6}
{
\draw[dashed] (2*\x,0)--++(0,1);
\draw[dashed] (2*\x+1,0)--++(0,1);
}
\foreach \x in {1,2,3}
{
\draw[dashed] (2*\x+1,0)--++(0,1);
}
\foreach \x in {1,2,3,4,5}
{
\draw[dashed] (2*\x+1,1)--++(1,0);
}
\draw[dashed] (2,0)--++(0,2);
\draw (2,2)--++(11,0)--++(0,-3);
\foreach \x in {4,5}
{
\draw[dashed] (2*\x+1,0)--++(1,0);
}
\draw[dashed] (2*4,0)--++(0,-1)--++(5,0)--++(0,1);
\end{scope}
\begin{scope}
\foreach \x in {1,2,3,4,5,6}
{
\draw (2*\x,0)--++(0,1);
\draw (2*\x+1,0)--++(0,1);
}
\foreach \x in {1,2,3}
{
\fill[blue!20] (2*\x,0)--++(1,0)--++(0,0,-1)--++(-1,0);
\draw[blue] (2*\x,0)--++(1,0)--++(0,0,-1);
\draw[->,blue] (2*\x+1,0)--++(0,0,-.5);
\draw[blue,dashed] (2*\x,0)--++(0,0,-1)--++(1,0);
\draw (2*\x+1,0)--++(0,1);
\node at (2*\x+.5,0,-.5) {\x};
}
\foreach \x in {1,2,3,4,5}
{
\draw (2*\x+1,1)--++(1,0);
}
\draw (2,0)--++(0,2)--++(11,0)--++(0,-2);
\foreach \x in {4,5}
{
\draw (2*\x+1,0)--++(1,0);
}
\draw (2*4,0)--++(0,-1)--++(5,0)--++(0,1);
\end{scope}
\end{scope}
\end{tikzpicture}
}.
\end{equation}
\end{definition}
Here we draw the tangles for $n=3$, $g=2$; the general case is analogous.
The corresponding tensor-network notations can be generalized (up to a scalar) as
\raisebox{-.5cm}{
\begin{tikzpicture}
\foreach \x in {1,2,3,4,5,6}
{\draw (\x,0)--++(0,1);
}
\foreach \x in {2,3,4,5}
{\fill[black] (\x,1) circle (.1);
}
\foreach \x in {5}
{\fill[black] (\x,0) circle (.1);
}
\draw (1,1)--(6,1);
\draw (4,0)--(6,0);
\foreach \x in {4,5}
{
\draw[blue] (\x+.3,.5) to [bend left=30] (\x+.7,.5);
\draw[blue] (\x+.2,.5) to [bend left=-30] (\x+.8,.5);
}
\end{tikzpicture}}
\text{ and }
\raisebox{-.5cm}{
\begin{tikzpicture}
\foreach \x in {1,2,3,4,5,6}
{\draw (\x,0)--++(0,1);
}
\draw (1,1)--(6,1);
\draw (4,0)--(6,0);
\foreach \x in {2,3,4,5}
{
\fill[white] (\x,1) circle (.1);
\draw (\x,1) circle (.1);
}
\foreach \x in {5}
{
\fill[white] (\x,0) circle (.1);
\draw (\x,0) circle (.1);
}
\foreach \x in {4,5}
{
\draw[blue] (\x+.3,.5) to [bend left=30] (\x+.7,.5);
\draw[blue] (\x+.2,.5) to [bend left=-30] (\x+.8,.5);
}
\end{tikzpicture}}~~.
\begin{remark}
From the tensor-network to the quon language, we fatten a string to a cuboid. The relations of the two Frobenius algebras become topological isotopies in two orthogonal directions, indicated by black and white.
\end{remark}
\begin{proposition}\label{Prop:GHZ}
For $n,g \geq 0$,
\begin{align}
\GHZ_{n,g}&= \sum_{X\in Irr} d(X)^{2-n-2g} \overbrace{|XX\cdots X \rangle}^{n \text{ entries}}.
\end{align}
\end{proposition}
\begin{proof}
By the joint relation \eqref{Equ:joint relation}, the coefficient of $\ket{\vec{X}}$ in $\GHZ_{n,g}$ is given by
\begin{equation}
Z(S_0)^{-1}\sum_{\vec{Y} \in Irr^g}
\raisebox{-2cm}{
\begin{tikzpicture}
\pgftransformcm{1}{0}{0}{1}{}
\begin{scope}[scale=1]
\begin{scope}[shift={(0,0,-1)}]
\foreach \x in {1,2,3,6}
{
\draw[dashed] (2*\x,0)--++(0,1);
\draw[dashed] (2*\x+1,0)--++(0,1);
}
\foreach \x in {1,2,3}
{
\draw[dashed] (2*\x+1,0)--++(0,1);
}
\foreach \x in {1,2,3,4,5}
{
\draw[dashed] (2*\x+1,1)--++(1,0);
}
\draw[dashed] (2,0)--++(0,2);
\draw (2,2)--++(11,0)--++(0,-3);
\foreach \x in {4,5}
{
\draw[] (2*\x+1,0)--++(1,0);
}
\draw[dashed] (2*4,0)--++(0,-1)--++(5,0)--++(0,1);
\end{scope}
\begin{scope}
\foreach \x in {1,2,3,6}
{
\draw (2*\x,0)--++(0,1);
\draw (2*\x+1,0)--++(0,1);
}
\foreach \x in {1,2,3}
{
\fill[blue!20] (2*\x,0)--++(1,0)--++(0,0,-1)--++(-1,0);
\draw[blue] (2*\x,0)--++(1,0)--++(0,0,-1);
\draw[->,blue] (2*\x+1,0)--++(-.5,0);
\draw[blue,dashed] (2*\x,0)--++(0,0,-1)--++(1,0);
\draw (2*\x+1,0)--++(0,1);
\node at (2*\x+.5,0,-.5) {$\beta_{X_\x}$};
}
\foreach \x in {4,5}
{
\fill[blue!20] (2*\x,0)--++(1,0)--++(0,0,-1)--++(-1,0);
\draw[blue] (2*\x,0)--++(1,0)--++(0,0,-1);
\draw[->,blue] (2*\x,0)--++(.5,0);
\draw[blue] (2*\x,0)--++(0,0,-1)--++(1,0);
}
\foreach \x in {1,2}
{
\node at (2*\x+6.5,0,-.5) {$\beta_{Y_\x}$};
}
\begin{scope}[shift={(0,1)}]
\foreach \x in {4,5}
{
\fill[blue!20] (2*\x,0)--++(1,0)--++(0,0,-1)--++(-1,0);
\draw[blue] (2*\x,0)--++(1,0);
\draw[->,blue] (2*\x+1,0)--++(-.5,0);
\draw[blue,dashed] (2*\x,0)--++(0,0,-1)--++(1,0)--++(0,0,1);
}
\foreach \x in {1,2}
{
\node at (2*\x+6.5,0,-.5) {$\beta_{Y_\x}$};
}
\end{scope}
\foreach \x in {1,2,3,4,5}
{
\draw (2*\x+1,1)--++(1,0);
}
\draw (2,0)--++(0,2)--++(11,0)--++(0,-2);
\foreach \x in {4,5}
{
\draw (2*\x+1,0)--++(1,0);
}
\draw (2*4,0)--++(0,-1)--++(5,0)--++(0,1);
\draw[dashed] (2*4,0,-1)--++(0,-1,0);
\end{scope}
\end{scope}
\end{tikzpicture}
}.
\end{equation}
Since $\beta_{X}=d(X)^{-1}1_{X_D}$ and $1_{X_D}$ is a minimal projection, the coefficient is nonzero only when $\ket{\vec{X}}=|XX\cdots X \rangle$, for some $X\in Irr$. In this case, the coefficient is $d(X)^{2-n-2g}$.
\end{proof}
For $\vec{X}\in Irr^n$, let $\dim(\vec{X},g)$ be the dimension of the vector space associated to the genus-$g$ surface with boundary points $X_1,X_2,\ldots,X_n$ in $\C$.
Then
\begin{align*}
\dim(\vec{X},0)&=\dim\hom_{\C}(1,\vec{X}) \\
\dim(\vec{X},g)&=\sum_{\vec{Y}\in Irr^g} \dim\hom_{\C}(1,\vec{X}\otimes \vec{Y}\otimes \theta_1(\vec{Y})),
\end{align*}
where $\theta_1(\vec{Y})=\overline{Y_g}\otimes \cdots \otimes \overline{Y_1}$.
\begin{proposition}\label{Prop:Max}
For $n,g \geq 0$,
\begin{align}
\Max_{n,g}&=\delta^{2-n-2g} \sum_{\vec{X}\in Irr^n} \dim(\vec{X},g) |\vec{X} \rangle.
\end{align}
\end{proposition}
\begin{proof}
By the joint relation \eqref{Equ:joint relation}, the coefficient of $\ket{\vec{X}}$ in $\Max_{n,g}$ is given by
\begin{equation}
Z(S_0)^{-1}\sum_{\vec{Y} \in Irr^g}
\raisebox{-2cm}{
\begin{tikzpicture}
\pgftransformcm{1}{0}{0}{1}{}
\begin{scope}[scale=1]
\begin{scope}[shift={(0,0,-1)}]
\foreach \x in {1,2,3,6}
{
\draw[dashed] (2*\x,0)--++(0,1);
\draw[dashed] (2*\x+1,0)--++(0,1);
}
\foreach \x in {1,2,3}
{
\draw[dashed] (2*\x+1,0)--++(0,1);
}
\foreach \x in {1,2,3,4,5}
{
\draw[dashed] (2*\x+1,1)--++(1,0);
}
\draw[dashed] (2,0)--++(0,2);
\draw (2,2)--++(11,0)--++(0,-3);
\foreach \x in {4,5}
{
\draw[] (2*\x+1,0)--++(1,0);
}
\draw[dashed] (2*4,0)--++(0,-1)--++(5,0)--++(0,1);
\end{scope}
\begin{scope}
\foreach \x in {1,2,3,6}
{
\draw (2*\x,0)--++(0,1);
\draw (2*\x+1,0)--++(0,1);
}
\foreach \x in {1,2,3}
{
\fill[blue!20] (2*\x,0)--++(1,0)--++(0,0,-1)--++(-1,0);
\draw[blue] (2*\x,0)--++(1,0)--++(0,0,-1);
\draw[->,blue] (2*\x+1+1,0,-1)--++(0,0,.5);
\draw[blue,dashed] (2*\x,0)--++(0,0,-1)--++(1,0);
\draw (2*\x+1,0)--++(0,1);
\node at (2*\x+.5,0,-.5) {$\beta_{X_\x}$};
}
\foreach \x in {4,5}
{
\fill[blue!20] (2*\x,0)--++(1,0)--++(0,0,-1)--++(-1,0);
\draw[blue] (2*\x,0)--++(1,0)--++(0,0,-1);
\draw[->,blue] (2*\x+1,0,-1)--++(0,0,.5);
\draw[blue] (2*\x,0)--++(0,0,-1)--++(1,0);
}
\foreach \x in {1,2}
{
\node at (2*\x+6.5,0,-.5) {$\beta_{Y_\x}$};
}
\begin{scope}[shift={(0,1)}]
\foreach \x in {4,5}
{
\fill[blue!20] (2*\x,0)--++(1,0)--++(0,0,-1)--++(-1,0);
\draw[blue] (2*\x,0)--++(1,0);
\draw[->,blue] (2*\x+1,0,-1)--++(0,0,.5);
\draw[blue,dashed] (2*\x,0)--++(0,0,-1)--++(1,0)--++(0,0,1);
}
\foreach \x in {1,2}
{
\node at (2*\x+6.5,0,-.5) {$\beta_{Y_\x}$};
}
\end{scope}
\foreach \x in {1,2,3,4,5}
{
\draw (2*\x+1,1)--++(1,0);
}
\draw (2,0)--++(0,2)--++(11,0)--++(0,-2);
\foreach \x in {4,5}
{
\draw (2*\x+1,0)--++(1,0);
}
\draw (2*4,0)--++(0,-1)--++(5,0)--++(0,1);
\draw[dashed] (2*4,0,-1)--++(0,-1,0);
\end{scope}
\end{scope}
\end{tikzpicture}
}.
\end{equation}
Applying Equation \eqref{Equ:spider}, the coefficient is
\begin{align*}
&\delta^{2-n-2g} \sum_{\vec{Y} \in Irr ^g} \sum_{\alpha_1,\alpha_2 \in ONB(\vec{X}\otimes \vec{Y} \otimes \theta_1(\vec{Y}))} \langle \alpha_1 \otimes \alpha_1^{op}, \alpha_2 \otimes \alpha_2^{op} \rangle\\
=&\delta^{2-n-2g}\sum_{\vec{Y}\in Irr^g} \dim\hom_{\C}(1,\vec{X}\otimes \vec{Y}\otimes \theta_1(\vec{Y}))\\
=&\delta^{2-n-2g} \dim(\vec{X},g).
\end{align*}
\end{proof}
\begin{definition}
Let us define the generating function for $\GHZ$ and $\Max$,
\begin{align}
\GHZ_n(z)=\sum_{g=0}^{\infty} \GHZ_{n,g} z^g,\\
\Max_n(z)=\sum_{g=0}^{\infty} \Max_{n,g} z^g.
\end{align}
\end{definition}
\begin{proposition}\label{Prop:GHZ rational}
For $n\geq0$,
\begin{align}
\GHZ_n(z)&=\sum_{X\in Irr} \frac{ d(X)^{4-n}}{ d(X)^2 -z} \overbrace{|X X\ldots X \rangle}^{n \text{ entries}}.
\end{align}
\end{proposition}
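The proposition is a geometric-series computation. By Proposition \ref{Prop:GHZ},
\begin{align*}
\GHZ_n(z)&=\sum_{X\in Irr}\sum_{g=0}^{\infty} d(X)^{2-n-2g} z^g \overbrace{|X X\ldots X \rangle}^{n \text{ entries}}
=\sum_{X\in Irr} d(X)^{2-n}\sum_{g=0}^{\infty}\left(\frac{z}{d(X)^{2}}\right)^{g} \overbrace{|X X\ldots X \rangle}^{n \text{ entries}}\\
&=\sum_{X\in Irr} \frac{ d(X)^{4-n}}{ d(X)^{2}-z} \overbrace{|X X\ldots X \rangle}^{n \text{ entries}},
\end{align*}
where the geometric series converges for $|z|<1$, since $d(X)\geq 1$ for all $X \in Irr$.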
The coefficients of $\GHZ_n(z)$ are all rational functions. It is less obvious that the coefficients of $\Max_n(z)$ are also rational functions; we prove this in Theorem \ref{Thm:Maxz}.
\section{Fourier duality}\label{Sec:Fourier duality}
In this section, we study the Fourier duality on graphic quons.
Without loss of generality, we only consider surface tangles in $T_n$, i.e., all discs are output discs. Then their partition functions are graphic quons in $GQ_n$.
Recall that the SFT $\FS$ is a $90^{\circ}$ rotation of the output disc. The corresponding genus-0 tangle is given by
\begin{equation}
\FS=
\raisebox{-.5cm}{
\begin{tikzpicture}
\begin{scope}[scale=1]
\fill[blue!20] (0,0+1)--(1,0+1)--(1.5,.5+1)--(.5,.5+1)--(0,0+1);
\draw[blue] (0,0+1)--(1,0+1)--(1.5,.5+1)--(.5,.5+1)--(0,0+1);
\draw[->,blue] (.5,1.5)--(.25,1.25);
\fill[blue!20] (0,0)--(1,0)--(1.5,.5)--(.5,.5)--(0,0);
\draw[blue] (0,0)--(1,0)--(1.5,.5);
\draw[blue,dashed] (1.5,.5)--(.5,.5)--(0,0);
\draw[->,blue] (0,0)--(.5,0);
\draw (0,0)--++(0,1);
\draw (1,0)--++(0,1);
\draw[dashed] (.5,.5)--++(0,1);
\draw (1.5,.5)--++(0,1);
\end{scope}
\end{tikzpicture}
}
\end{equation}
The action of $\FS$ on the quantum coordinate $\{ \beta_X \}_{ X \in Irr}$ is identical to the $S$ matrix of $\C$.
We define the action of $\vec{\FS}$ on $T_n$ as the action of $\FS$ on all output discs. We define the action of $\vec{S}$ on $GQ_n$ as the $n^{\rm th}$ tensor power of $S$.
\begin{theorem}\label{Thm:Fourier duality}
For any unitary MTC $\C$, the following commutative diagram holds,
\begin{center}
\begin{tikzpicture}
\begin{scope}[node distance=4cm, auto, xscale=1,yscale=1]
\foreach \x in {0,1,2,3} {
\foreach \y in {0,1,2,3} {
\coordinate (A\x\y) at ({2*\x},{.7*\y});
}}
\foreach \y in {0,3}{
\node at (A0\y) {surface tangles};
\node at (A3\y) {graphic quons};
\draw[->] (A1\y) to node {$Z$} (A2\y);
}
\draw[->] (A02) to node [swap] {$\vec{\FS}$} (A01);
\draw[->] (A32) to node [swap] {$\vec{S}$} (A31);
\end{scope}
\end{tikzpicture}.
\end{center}
\end{theorem}
\begin{proof}
It follows from Proposition \ref{Prop:SFT} and Theorem \ref{Thm:unique extension}.
\end{proof}
In general, if we apply a global $90^{\circ}$ rotation to a labelled surface tangle, then its partition function transforms by conjugation by $S$.
By Proposition \ref{Prop:Z2}, we have the following.
\begin{corollary}
For any oriented graph $\Gamma \in G_n$ on the surface,
\begin{equation}
\vec{S}^2\ket{T_\Gamma}=\ket{T_\Gamma}.
\end{equation}
\end{corollary}
So we call the graphic quons $\ket{T_\Gamma}$ and $\vec{S}\ket{T_\Gamma}$ Fourier duals of each other.
\begin{remark}
By Proposition \ref{Prop:positive}, a Fourier dual pair of quons are both positive. It is an interesting phenomenon that the modular transformation $S$ preserves this positivity; it is difficult to construct such positive Fourier duals algebraically.
\end{remark}
\begin{corollary}
Note that $Max_{n,g}= \vec{\FS}(GHZ_{n,g})$, for any $n,g\geq 0$, so
\begin{equation} \label{Equ:MaxGHZ}
\Max_{n,g}= \vec{S}\GHZ_{n,g}.
\end{equation}
\end{corollary}
\begin{theorem}[Verlinde formula]
For any unitary MTC $\C$ and any $n,g \geq 0$,
\begin{align}
\dim(\vec{X},g)&= \sum_{Y\in Irr} \left(\prod_{i=1}^n S_{X_i}^{Y}\right) (S_{Y}^1)^{2-n-2g}.
\end{align}
\end{theorem}
\begin{proof}
Note that $\GHZ_{n,g}$ and $\Max_{n,g}$ are computed in Propositions \ref{Prop:GHZ} and \ref{Prop:Max}, and $d(X)=\delta S_X^1$.
The statement follows from comparing the coefficients on both sides of Equation \eqref{Equ:MaxGHZ}.
\end{proof}
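As a sanity check, the formula can be evaluated for small $n$ and $g$ in a concrete example. The following Python sketch is a hypothetical illustration using the Fibonacci category ($Irr=\{1,\tau\}$, $d(\tau)=\varphi$); it compares the right-hand side of the formula against directly computed dimensions in three small cases.

```python
# Check of the higher-genus Verlinde formula
#   dim(X, g) = sum_Y (prod_i S_{X_i}^Y) (S_Y^1)^{2-n-2g}
# for the Fibonacci category (a hypothetical example; 0 = unit, 1 = tau).
import math

phi = (1 + math.sqrt(5)) / 2
delta = math.sqrt(2 + phi)
S = [[1/delta, phi/delta], [phi/delta, -1/delta]]

def verlinde(Xs, g):
    n = len(Xs)
    total = 0.0
    for Y in range(2):
        prod = 1.0
        for Xi in Xs:
            prod *= S[Xi][Y]
        total += prod * S[Y][0] ** (2 - n - 2*g)
    return total

# n = 0, g = 1: the genus-1 space has dimension |Irr| = 2
assert abs(verlinde([], 1) - 2) < 1e-12
# n = 1, g = 1, X = tau: dim hom(1, tau) + dim hom(tau, tau (x) tau) = 0 + 1
assert abs(verlinde([1], 1) - 1) < 1e-12
# n = 3, g = 0: recovers the fusion multiplicity dim hom(1, tau^3) = 1
assert abs(verlinde([1, 1, 1], 0) - 1) < 1e-12
```

The $n=0$, $g=1$ case recovers the familiar fact that the genus-1 space has dimension equal to the number of simple objects.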
The higher-genus Verlinde formula was first proved by Moore and Seiberg in the framework of CFT in \cite{MooSei89}. Here we prove it for any unitary MTC as the Fourier duality of $\GHZ$ and $\Max$. The unitary condition is not necessary in the proof.
\begin{theorem}\label{Thm:Maxz}
For any $n\geq0$,
\begin{equation}
\Max_n(z)=\sum_{\vec{X}\in Irr^n} \sum_{Y\in Irr} \left(\prod_{i=1}^n S_{X_i}^{Y}\right) \frac{ d(Y)^{4-n}}{ d(Y)^2 -z} \ket{\vec{X}}.
\end{equation}
\end{theorem}
\begin{proof}
By Equation \eqref{Equ:MaxGHZ}, we have
\begin{equation}
\Max_{n}(z)= \vec{S}\GHZ_{n}(z).
\end{equation}
By Proposition \ref{Prop:GHZ rational}, the statement holds.
\end{proof}
It is interesting that the coefficients of $\Max_n(z)$, namely the generating functions of $\{\dim(\vec{X},g)\}_{g\in \mathbb{N}}$ for $\vec{X}\in Irr^n$ and all $n\geq 0$, live in the low-dimensional vector space spanned by $\left\{\displaystyle \frac{1}{ d(X)^2 -z}\right\}_{X \in Irr}$.
Note that the genus-0 $\GHZ$ and $\Max$ can be defined through the cycle graph and the dipole graph,
\begin{center}
\begin{tikzpicture}
\begin{scope}[shift={(-6,.5)}]
\draw (1,0)-- (4,0);
\draw (1,0) to [bend left=30] (4,0);
\foreach \x in {1,2,3,4}
{
\fill (\x,0) circle (.1);
}
\node at (2.5,.2) {$\cdots$};
\end{scope}
\node at (-1.5,0) {,};
\node at (1.5,0) {.};
\fill (0,0) circle (.1);
\fill (0,1) circle (.1);
\draw (0,0) arc (-90:90:1 and .5);
\draw (0,0) arc (-90:90:.5 and .5);
\draw (0,0) arc (270:90:.5 and .5);
\draw (0,0) arc (270:90:1 and .5);
\node at (0,.5) {$\cdots$};
\end{tikzpicture}
\end{center}
The two graphs are dual to each other.
In general, for an oriented graph $\Gamma \in G_n$ on the sphere, we obtain a genus-0 tangle $T_\Gamma$.
If we do not lift the shading, then the tangle $T_\Gamma$ has an alternating shading, and all distinguished intervals of the discs are unshaded. When we apply $\vec{\FS}$ to $T_\Gamma$, all distinguished intervals become shaded. By contracting the unshaded regions to points, we obtain an oriented graph $\hat{\Gamma} \in G_n$ such that $T_{\hat{\Gamma}}=\vec{\FS}(T_{\Gamma})$. By Theorem \ref{Thm:Fourier duality}, we obtain the following.
\begin{theorem}\label{Cor:graph dual}
For any oriented graph $\Gamma \in G_n$ on the sphere,
\begin{equation} \label{Equ:graph}
\ket{T_{\hat{\Gamma}}}=\vec{S}\ket{T_{\Gamma}}.
\end{equation}
\end{theorem}
If we forget the orientation, then $\hat{\Gamma}$ is the dual graph of $\Gamma$ on the sphere.
Thus the graphic duality coincides with the Fourier duality of quons on the sphere.
However, this is not true on general surfaces. One needs further assumptions for graphs on surfaces: the faces are simply connected and the edges are contractible. We call such graphs {\it local}. Then Equation \eqref{Equ:graph} remains true for local graphs.
There are interesting graphs on surfaces that are not local. In fact, the graphs for $\GHZ$ and $\Max$ on higher-genus surfaces are not local.
So the quon language provides a natural extension of the graphic duality, which is compatible with the algebraic Fourier duality.
Among the five Platonic solids on the sphere, there are two dual pairs and one self-dual solid, the tetrahedron.
We thus obtain three identities from the Fourier duality. The self-duality of the tetrahedron gives the self-duality of 6j-symbols.
\begin{theorem}[6j-symbol self-duality]
For any MTC $\C$, and any $\vec{X}\in Irr^6$,
\begin{equation} \label{Equ:6jRelation}
\left|{{{X_{6}~X_{5}~X_{4}}\choose{\overline{X_{3}}~\overline{X_{2}}~\overline{X_{1}}}}}\right|^{2}
= \sum_{\vec {Y}\in Irr^6} \left(\prod_{k=1}^{6}S_{X_{k}}^{Y_{k}} \right)
\left|{{{Y_{1}Y_{2}Y_{3}}\choose{Y_{4}Y_{5}Y_{6}}}}\right|^{2}
\end{equation}
\end{theorem}
\begin{proof}
We take the tetrahedron $\Gamma_6$ in Fig~\ref{Fig: tetrahedron}.
Its dual graph $\hat{\Gamma_6}$ is given by the second graph below. The third graph is isotopic to the second by a $180^{\circ}$ rotation.
\begin{equation}\label{Equ:6j self dual}
\begin{tikzpicture}
\coordinate (O) at (0,0);
\foreach \x in {0,1,2}
{
\coordinate (A\x) at ({cos(120*\x)},{sin(120*\x)});
\coordinate (C\x) at ({(cos(120*\x)+cos(120*(\x+1)))/2},{(sin(120*\x)+sin(120*(\x+1)))/2});
\coordinate (D\x) at ({(cos(120*\x)+cos(120*(\x+1)))/1.5},{(sin(120*\x)+sin(120*(\x+1)))/1.5});
\draw (O)--(A\x);
\draw[->,dashed] (A\x)--(C\x);
\draw[white] (.5,0) arc (0:120*\x:.5) coordinate (B\x);
\draw[->] (O)--(B\x);
\draw[white] (.5,0) arc (0:120*\x+20:.5) coordinate (E\x);
}
\node at (D0) {$1$};
\node at (D1) {$2$};
\node at (D2) {$3$};
\node at (E0) {$4$};
\node at (E1) {$5$};
\node at (E2) {$6$};
\draw[dashed] (A0) -- (A1) -- (A2) -- (A0);
\node at (2,0) {$\to$};
\node at (6,0) {$=$};
\begin{scope}[shift={(4,0)},xscale=-1]
\coordinate (O) at (0,0);
\foreach \x in {0,1,2}
{
\coordinate (A\x) at ({cos(120*\x)},{sin(120*\x)});
\coordinate (C\x) at ({(cos(120*\x)+cos(120*(\x+1)))/2},{(sin(120*\x)+sin(120*(\x+1)))/2});
\coordinate (D\x) at ({(cos(120*\x)+cos(120*(\x+1)))/1.5},{(sin(120*\x)+sin(120*(\x+1)))/1.5});
\draw[dashed] (O)--(A\x);
\draw[->] (A\x)--(C\x);
\draw[white] (.5,0) arc (0:120*\x:.5) coordinate (B\x);
\draw[->] (A\x)--(B\x);
\draw[white] (.5,0) arc (0:120*\x+20:.5) coordinate (E\x);
}
\node at (D0) {$5$};
\node at (D1) {$4$};
\node at (D2) {$6$};
\node at (E0) {$2$};
\node at (E1) {$1$};
\node at (E2) {$3$};
\draw (A0) -- (A1) -- (A2) -- (A0);
\end{scope}
\begin{scope}[shift={(8,0)},xscale=1]
\coordinate (O) at (0,0);
\foreach \x in {0,1,2}
{
\coordinate (A\x) at ({cos(120*\x)},{sin(120*\x)});
\coordinate (C\x) at ({(cos(120*\x)+cos(120*(\x+1)))/2},{(sin(120*\x)+sin(120*(\x+1)))/2});
\coordinate (D\x) at ({(cos(120*\x)+cos(120*(\x+1)))/1.5},{(sin(120*\x)+sin(120*(\x+1)))/1.5});
\draw (O)--(A\x);
\draw[->] (A\x)--(C\x);
\draw[white] (.5,0) arc (0:120*\x:.5) coordinate (B\x);
\draw[->] (A\x)--(B\x);
\draw[white] (.5,0) arc (0:120*\x+20:.5) coordinate (E\x);
}
\node at (D0) {$6$};
\node at (D1) {$5$};
\node at (D2) {$4$};
\node at (E0) {$3$};
\node at (E1) {$2$};
\node at (E2) {$1$};
\draw[dashed] (A0) -- (A1) -- (A2) -- (A0);
\end{scope}
\end{tikzpicture}
\end{equation}
By Theorem \ref{Cor:graph dual}, we have $\ket{T_{6j}}=\vec{S}\ket{T_{\hat{6j}}}$.
Comparing the coefficients using Equation \eqref{Equ:6j}, we obtain Equation \eqref{Equ:6jRelation}.
\end{proof}
In the special case of quantum $SU(2)$, the identity for the 6j-symbol self-duality was discovered by Barrett in \cite{Bar03}, based on an interesting identity of J. Roberts \cite{Rob95}. The identity was later generalized to some other cases related to $SU(2)$ in \cite{FNR07}.
To generalize from the triangle to all regular polygons, our ordering of the edges of the tetrahedron differs slightly from Barrett's choice.
To recover Barrett's original formula, we take the following tetrahedron:
\begin{center}
\begin{tikzpicture}
\coordinate (O) at (0,0);
\foreach \x in {0,1,2}
{
\coordinate (A\x) at ({cos(120*\x)},{sin(120*\x)});
\coordinate (C\x) at ({(cos(120*\x)+cos(120*(\x+1)))/2},{(sin(120*\x)+sin(120*(\x+1)))/2});
\coordinate (D\x) at ({(cos(120*\x)+cos(120*(\x+1)))/1.5},{(sin(120*\x)+sin(120*(\x+1)))/1.5});
\draw (O)--(A\x);
\draw[->,dashed] (A\x)--(C\x);
\draw[white] (.5,0) arc (0:120*\x:.5) coordinate (B\x);
\draw[->] (O)--(B\x);
\draw[white] (.5,0) arc (0:120*\x+20:.5) coordinate (E\x);
}
\node at (D0) {$3$};
\node at (D1) {$1$};
\node at (D2) {$2$};
\node at (E0) {$4$};
\node at (E1) {$5$};
\node at (E2) {$6$};
\draw[dashed] (A0) -- (A1) -- (A2) -- (A0);
\node at (2,0) {$\to$};
\node at (6,0) {$=$};
\node at (10,0) {$\to$};
\begin{scope}[shift={(4,0)},xscale=-1]
\coordinate (O) at (0,0);
\foreach \x in {0,1,2}
{
\coordinate (A\x) at ({cos(120*\x)},{sin(120*\x)});
\coordinate (C\x) at ({(cos(120*\x)+cos(120*(\x+1)))/2},{(sin(120*\x)+sin(120*(\x+1)))/2});
\coordinate (D\x) at ({(cos(120*\x)+cos(120*(\x+1)))/1.5},{(sin(120*\x)+sin(120*(\x+1)))/1.5});
\draw[dashed] (O)--(A\x);
\draw[->] (A\x)--(C\x);
\draw[white] (.5,0) arc (0:120*\x:.5) coordinate (B\x);
\draw[->] (A\x)--(B\x);
\draw[white] (.5,0) arc (0:120*\x+20:.5) coordinate (E\x);
}
\node at (D0) {$5$};
\node at (D1) {$4$};
\node at (D2) {$6$};
\node at (E0) {$1$};
\node at (E1) {$3$};
\node at (E2) {$2$};
\draw (A0) -- (A1) -- (A2) -- (A0);
\end{scope}
\begin{scope}[shift={(8,0)},xscale=1]
\coordinate (O) at (0,0);
\foreach \x in {0,1,2}
{
\coordinate (A\x) at ({cos(120*\x)},{sin(120*\x)});
\coordinate (C\x) at ({(cos(120*\x)+cos(120*(\x+1)))/2},{(sin(120*\x)+sin(120*(\x+1)))/2});
\coordinate (D\x) at ({(cos(120*\x)+cos(120*(\x+1)))/1.5},{(sin(120*\x)+sin(120*(\x+1)))/1.5});
\draw (O)--(A\x);
\draw[->] (A\x)--(C\x);
\draw[white] (.5,0) arc (0:120*\x:.5) coordinate (B\x);
\draw[->] (A\x)--(B\x);
\draw[white] (.5,0) arc (0:120*\x+20:.5) coordinate (E\x);
}
\node at (D0) {$5$};
\node at (D1) {$4$};
\node at (D2) {$6$};
\node at (E0) {$1$};
\node at (E1) {$3$};
\node at (E2) {$2$};
\draw[dashed] (A0) -- (A1) -- (A2) -- (A0);
\end{scope}
\begin{scope}[shift={(12,0)},xscale=1]
\coordinate (O) at (0,0);
\foreach \x in {0,1,2}
{
\coordinate (A\x) at ({cos(120*\x)},{sin(120*\x)});
\coordinate (C\x) at ({(cos(120*\x)+cos(120*(\x+1)))/2},{(sin(120*\x)+sin(120*(\x+1)))/2});
\coordinate (D\x) at ({(cos(120*\x)+cos(120*(\x+1)))/1.5},{(sin(120*\x)+sin(120*(\x+1)))/1.5});
\draw (O)--(A\x);
\draw[->] (A\x)--(C\x);
\draw[white] (.5,0) arc (0:120*\x:.5) coordinate (B\x);
\draw[->] (A\x)--(B\x);
\draw[white] (.5,0) arc (0:120*\x+20:.5) coordinate (E\x);
}
\node at (D0) {$6$};
\node at (D1) {$4$};
\node at (D2) {$5$};
\node at (E0) {$1$};
\node at (E1) {$2$};
\node at (E2) {$3$};
\draw[dashed] (A0) -- (A1) -- (A2) -- (A0);
\end{scope}
\end{tikzpicture}
\end{center}
The first arrow is the graphic duality. The $=$ is a rotation. The last arrow is a vertical reflection. By Propositions \ref{Prop:positive} and \ref{Prop:Z2}, the 6-quons corresponding to the last two graphs are the same.
We can generalize the tetrahedron to a sequence of self-dual graphs on the sphere:
\begin{center}
\begin{tikzpicture}
\coordinate (O) at (0,0);
\foreach \x in {0,1}
{
\draw[white] (1,0) arc (0:180*\x:1) coordinate (A\x);
\fill (0,0) circle (.05);
\draw (O)--(A\x);
}
\draw[dashed] (A0) to [bend right=30] (A1) to [bend right=30] (A0);
\end{tikzpicture}
\quad,
\begin{tikzpicture}
\coordinate (O) at (0,0);
\foreach \x in {0,1,2}
{
\draw[white] (1,0) arc (0:120*\x:1) coordinate (A\x);
\draw (O)--(A\x);
}
\draw[dashed] (A0) -- (A1) -- (A2) -- (A0);
\end{tikzpicture}
\quad,
\begin{tikzpicture}
\coordinate (O) at (0,0);
\foreach \x in {0,1,2,3}
{
\draw[white] (1,0) arc (0:360/4*\x:1) coordinate (A\x);
\draw (O)--(A\x);
}
\draw[dashed] (A0) -- (A1) -- (A2) -- (A3) -- (A0);
\end{tikzpicture}
\quad,
\begin{tikzpicture}
\coordinate (O) at (0,0);
\foreach \x in {0,1,2,3,4}
{
\draw[white] (1,0) arc (0:360/5*\x:1) coordinate (A\x);
\draw (O)--(A\x);
}
\draw[dashed] (A0) -- (A1) -- (A2) -- (A3) -- (A4) -- (A0);
\end{tikzpicture}
\quad $\cdots$
\end{center}
We order and orient the edges of each graph similarly to $\Gamma_6$ in Fig~\ref{Fig: tetrahedron}, and denote the oriented graph by $\Gamma_{2n}$, for $n\geq 2$. Then we obtain a $2n$-quon, denoted by
\begin{equation}
\ket{T_{\Gamma_{2n}}}=
\sum_{\vec{X}\in Irr^{2n}}
\left|{{{X_{1}\phantom{aa}X_{2}\phantom{aa} \cdots~X_{n}}\choose{X_{n+1}X_{n+2}\cdots X_{2n}}}}\right|^{2} \ket{\vec{X}}.
\end{equation}
\begin{theorem}
For any MTC $\C$, and any $\vec{X}\in Irr^{2n}$, $n\geq 2$,
\begin{equation}
\left|{{{X_{2n}~X_{2n-1}\cdots X_{n+1}}\choose{\overline{X_{n}} \phantom{aa} \overline{X_{n-1}} ~\cdots \overline{X_{1}}}}}\right|^{2}
= \sum_{\vec {Y}\in Irr^{2n}} \left(\prod_{k=1}^{2n}S_{X_{k}}^{Y_{k}} \right)
\left|{{{Y_{1} \phantom{aa} Y_{2} \phantom{aa} \cdots ~Y_{n}}\choose{Y_{n+1}Y_{n+2}\cdots Y_{2n}}}}\right|^{2} \;.
\end{equation}
\end{theorem}
\begin{proof}
The dual graph of $\Gamma_{2n}$ is obtained similarly to Equation \eqref{Equ:6j self dual}.
The statement then follows from Theorem \ref{Cor:graph dual}.
\end{proof}
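Writing out the smallest case $n=2$ of this theorem (the self-dual 2-gon $\Gamma_4$, the first graph in the sequence above), the identity reads
\begin{equation*}
\left|{{{X_{4}~X_{3}}\choose{\overline{X_{2}}~\overline{X_{1}}}}}\right|^{2}
= \sum_{\vec {Y}\in Irr^4} \left(\prod_{k=1}^{4}S_{X_{k}}^{Y_{k}} \right)
\left|{{{Y_{1}~Y_{2}}\choose{Y_{3}~Y_{4}}}}\right|^{2}\;,
\end{equation*}
a direct specialization obtained by substituting $n=2$ into the general formula.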
\section{INTRODUCTION}
Astrophysical observations ranging from the scale of the horizon ($\sim$ 15,000
Mpc) to the typical spacing between galaxies ($\sim$ 1 Mpc) are all consistent
with a Universe that was seeded by a nearly scale-invariant fluctuation spectrum
and that is dominated today by dark energy ($\sim 70 \%$) and Cold Dark Matter
($\sim 25\%$), with baryons contributing only $\sim 5\%$ to the energy density
\citep[][]{planck2015,guo16}. This cosmological model has provided a compelling
backbone to galaxy formation theory, a field that is becoming increasingly
successful at reproducing the detailed properties of galaxies, including their
counts, clustering, colors, morphologies, and evolution over time
\citep{vogelsberger14,Schaye15}. As described in this review, there are
observations below the scale of $\sim 1$ Mpc that have proven more
problematic to understand in the $\Lambda$CDM\ framework. It is not yet clear whether
the small-scale issues with $\Lambda$CDM\ will be accommodated by a better
understanding of astrophysics or dark matter physics, or if they will require a
radical revision of cosmology, but any correct description of our Universe must
look very much like $\Lambda$CDM\ on large scales. It is with this in mind that we
discuss the small-scale challenges to the current paradigm. For concreteness, we
assume that the default $\Lambda$CDM\ cosmology has parameters
$h=H_0/(100 \,{\rm km \, s}^{-1} \,{\rm Mpc}^{-1}) = 0.6727$, $\Omega_{m} = 0.3156$,
$\Omega_{\Lambda} = 0.6844$, $\Omega_b = 0.04927$, $\sigma_8 = 0.831$, and
$n_s = 0.9645$ \citep{planck2015}.
Given the scope of this review, we must sacrifice detailed discussions for a
broader, high-level approach. There are many recent reviews or overview
papers that cover, in more depth, certain aspects of this review. These include \citet{frenk2012}, \citet{peebles2012a}, and \citet{primack2012} on the
historical context of $\Lambda$CDM\ and some of its basic predictions;
\citet{willman2010} and \citet{mcconnachie2012} on searches for and observed
properties of dwarf galaxies in the Local Group; \citet{feng2010},
\citet{porter2011}, and \citet{strigari2013} on the nature of and searches for
dark matter; \citet{kuhlen2012b} on numerical simulations of cosmological structure
formation; and \cite{brooks2014a}, \citet{weinberg2015} and \citet{del-popolo2017} on small-scale
issues in $\Lambda$CDM. Additionally, we will not discuss cosmic acceleration (the
$\Lambda$ in $\Lambda$CDM) here; that topic is reviewed in \citet{weinberg2013}.
Finally, space does not allow us to address the possibility that the challenges
facing $\Lambda$CDM\ on small scales reflect a deeper problem in our understanding of
gravity. We point the reader to reviews by \citet{Milgrom2002}, \citet{famaey2012}, and
\citet{McGaugh2015}, which compare Modified Newtonian Dynamics (MOND) to $\Lambda$CDM\
and provide further references on this topic.
\vspace{-0.1cm}
\subsection{Preliminaries: how small is a small galaxy?}
\label{subsec:small_galaxy}
This is a review on small-scale challenges to the $\Lambda$CDM\ model. The past
$\sim 12$ years have seen transformative discoveries that have fundamentally
altered our understanding of ``small scales'' -- at least in terms of the
low-luminosity limit of galaxy formation.
Prior to 2004, the smallest galaxy known was Draco, with a stellar mass of
$M_{\star} \simeq 5 \times 10^5 M_{\odot}$. Today, we know of galaxies 1000 times less
luminous. While essentially all Milky Way satellites discovered before 2004 were
found via visual inspection of photographic plates (with the exceptions of
the Carina and Sagittarius dwarf spheroidal galaxies), the advent of large-area digital sky surveys
with deep exposures and accurate star-galaxy separation algorithms has
revolutionized the search for and discovery of faint stellar systems in the
Milky Way (see \citealt{willman2010} for a review of the search for faint
satellites). The Sloan Digital Sky Survey (SDSS) ushered in this revolution,
doubling the number of known Milky Way satellites in the first five years of
active searches. The PAndAS survey discovered a similar population of faint
dwarfs around M31 \citep{richardson2011}. More recently the DES survey has
continued this trend \citep{koposov2015, drlica-wagner2015}. All told, we know
of $\sim 50$ satellite galaxies of the Milky Way and $\sim 30$ satellites of M31
today \citep[][updated on-line catalog]{mcconnachie2012}, most of which are
fainter than any galaxy known at the turn of the century. They are also
extremely dark-matter-dominated, with mass-to-light ratios within their stellar
radii exceeding $\sim 1000$ in some cases \citep{walker2009,wolf2010}.
Given this upheaval in our understanding of the faint galaxy frontier over the
last decade or so, it is worth pausing to clarify some naming conventions. In
what follows, the term ``dwarf'' will refer to galaxies with
$M_{\star} \lesssim 10^9 M_{\odot}$. We will further subdivide dwarfs into three mass
classes: Bright Dwarfs ($M_{\star} \approx 10^{7-9} M_{\odot}$), Classical Dwarfs
($M_{\star} \approx 10^{5-7} M_{\odot}$), and Ultra-faint Dwarfs
($M_{\star} \approx 10^{2-5} M_{\odot}$). Note that another common classification for
dwarf galaxies is between dwarf spheroidals (dSphs) and dwarf irregulars (dIrrs).
Dwarfs with gas and ongoing star formation are usually labeled dIrr. The term
dSph is reserved for dwarfs that lack gas and have no ongoing star formation.
Note that the vast majority of field dwarfs (meaning that they are not
satellites) are dIrrs. Most dSph galaxies are satellites of larger systems.
\begin{figure*}[th!]
\includegraphics[width=\textwidth]{Figures/fig1_trimmed.pdf}
\caption{\textit{Approaching the threshold of galaxy formation.} Shown
are images of dwarf galaxies spanning six orders of magnitude in
stellar mass. In each panel, the dwarf's stellar mass is listed in the lower-left corner and a scale bar corresponding to 200 pc is shown in the lower-right corner. The LMC, WLM,
and Pegasus are dwarf irregular (dIrr) galaxies that have gas and ongoing
star formation. The remaining six galaxies shown are gas-free dwarf spheroidal (dSph) galaxies and are not currently forming stars. The faintest galaxies shown here are only detectable in limited volumes around the Milky Way; future surveys may reveal many more such galaxies at greater distances. Image credits: Eckhard Slawik (LMC); ESO/Digitized Sky Survey 2 (Fornax); \citeauthor{massey2007} (2007; WLM, Pegasus, Phoenix); ESO (Sculptor); Mischa Schirmer (Draco), Vasily Belokurov and Sergey Koposov (Eridanus II, Pictoris I). \vspace{2cm}
\label{fig:dwarfs}
}
\end{figure*}
Figure \ref{fig:dwarfs} illustrates the morphological differences among galaxies
that span these stellar mass ranges. From top to bottom we see three dwarfs
each that roughly correspond to Bright, Classical, and Ultra-faint Dwarfs,
respectively.
\begin{summary}[ADOPTED DWARF GALAXY NAMING CONVENTION]
\noindent {\bf Bright Dwarfs:} $M_{\star} \approx 10^{7-9} M_{\odot}$ \\ -- the faint
galaxy completeness limit for field galaxy surveys
\bigskip
\noindent {\bf Classical Dwarfs:} $M_{\star} \approx 10^{5-7} M_{\odot}$ \\ -- the
faintest galaxies known prior to SDSS
\bigskip
\noindent {\bf Ultra-faint Dwarfs:} $M_{\star} \approx 10^{2-5} M_{\odot}$ \\ --
detected within limited volumes around M31 and the Milky Way
\end{summary}
With these definitions in hand, we move to the cosmological model within which
we aim to explain the counts, stellar masses, and dark matter content of these
dwarfs.
\subsection{Overview of the $\Lambda$CDM\ model}
\label{subsec:lcdm}
The $\Lambda$CDM\ model of cosmology is the culmination of a century of work on the
physics of structure formation within the framework of general relativity. It
also marks the confluence of particle physics and astrophysics over the past
four decades: the particle nature of dark matter directly determines essential
properties of non-linear cosmological structure. While the $\Lambda$CDM\ model is
phenomenological at present -- the actual physics of dark matter and dark energy
remain as major theoretical issues -- it is highly successful at explaining the
large-scale structure of the Universe and basic properties of galaxies that form
within dark matter halos.
In the $\Lambda$CDM\ model, cosmic structure is seeded by primordial adiabatic
fluctuations and grows by gravitational instability in an expanding
background. The primordial power spectrum as a function of wavenumber $k$ is
nearly scale-invariant\footnote{Recent measurements find $n = 0.968 \pm 0.006$
\citep{planck2015}, i.e., a small but statistically significant deviation from
true scale invariance.}, $P(k)\propto k^n$ with $n\simeq 1$.
Scales that re-enter the horizon when the Universe is radiation-dominated grow
extremely slowly until the epoch of matter domination, leaving a scale-dependent
suppression of the primordial power spectrum that goes as $k^{-4}$ at large
$k$. This suppression of power is encapsulated by the ``transfer function" $T(k)$, which is defined as the ratio of amplitude of a density perturbation in the post-recombination era to its primordial value as a function of perturbation wavenumber $k$.
This processed power
spectrum is the input for structure formation calculations; the dimensionless
processed power spectrum, defined by
\begin{equation}
\label{eq:5}
\Delta^2(k, a)=\frac{k^3}{2\pi^2}\,P(k)\,T^2(k)\,d^2(a)\,,
\end{equation}
therefore rises as $k^4$ for scales larger than the comoving horizon at
matter-radiation equality (corresponding to $k=0.008\,{\rm Mpc}^{-1}$) and is
approximately independent of $k$ for scales that re-enter the horizon well
before matter-radiation equality. Here, $d(a)$ is the linear growth function,
normalized to unity at $a=1$. The processed $z=0$ ($a=1$) linear power spectrum for $\Lambda$CDM\ is shown by the solid
line in Figure \ref{fig:pofk}. The asymptotic shape behavior is most easily seen in the bottom panel, which spans the
wave number range of cosmological interest.
For a more complete discussion of primordial fluctuations and the processed power spectrum
we recommend that readers consult \citet{mo2010}.
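The asymptotic shapes described above can be sketched numerically. The following Python snippet is an illustration only, not part of the review's analysis: it assumes the classic BBKS (1986) fitting formula for the CDM transfer function with shape parameter $\Gamma = \Omega_m h$ (a far cruder approximation than the transfer functions behind Figure \ref{fig:pofk}), and leaves the normalization arbitrary.

```python
import math

# Illustrative sketch only (not from the review): BBKS (1986) fitting formula
# for the CDM transfer function, with shape parameter Gamma = Omega_m * h.
# Normalization of the power spectrum is arbitrary here.
OMEGA_M, H, N_S = 0.3156, 0.6727, 0.9645

def transfer_bbks(k):
    """BBKS transfer function; k in h/Mpc, q = k / (Omega_m * h)."""
    q = k / (OMEGA_M * H)
    return (math.log(1.0 + 2.34 * q) / (2.34 * q)
            * (1.0 + 3.89 * q + (16.1 * q) ** 2
               + (5.46 * q) ** 3 + (6.71 * q) ** 4) ** -0.25)

def delta2(k):
    """Dimensionless power spectrum Delta^2(k) of the text, up to norm."""
    return k ** 3 * k ** N_S * transfer_bbks(k) ** 2 / (2.0 * math.pi ** 2)

def loglog_slope(f, k, eps=1e-3):
    """Numerical d(ln f)/d(ln k)."""
    return (math.log(f(k * (1.0 + eps))) - math.log(f(k))) / math.log(1.0 + eps)

# delta2 rises as ~k^4 on large scales (T -> 1) and flattens at large k
```

On large scales the log-log slope comes out near $3 + n \simeq 4$, while at large $k$ the $k^{-4}$ suppression from the transfer function nearly cancels it, as described in the text.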
It is useful to associate each wavenumber with a mass scale set by its
characteristic length $r_l=\lambda/2=\pi/k$. In the early Universe, when
$\delta \ll 1$, the total amount of matter contained within a sphere of comoving
Lagrangian radius $r_l$ at $z=0$ is
\begin{eqnarray}
\label{eq:mlin}
M_l&=&\frac{4\,\pi}{3}\,r_l^3\,\rho_{\rm m} = \frac{\Omega_{\rm m}\,H_0^2}{2\,G}\,r_l^3\,\\
&=&1.71 \times 10^{11}\,M_{\odot} \left(\frac{\Omega_{\rm m}}{0.3}\right)
\left(\frac{h}{0.7}\right)^2\,\left(\frac{r_l}{1\,{\rm Mpc}}\right)^3\, .
\end{eqnarray}
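As a quick numerical check of the coefficient in Equation \ref{eq:mlin}, the Python fragment below evaluates the same expression; the critical-density constant $2.775\times10^{11}\,h^2\,M_{\odot}\,{\rm Mpc}^{-3}$ is a standard value assumed here rather than quoted from the text.

```python
import math

# rho_crit = 3 H0^2 / (8 pi G) expressed as 2.775e11 h^2 Msun/Mpc^3
# (standard constant, assumed here; not quoted in the text).
RHO_CRIT_H2 = 2.775e11  # Msun / Mpc^3, in units of h^2

def linear_mass(r_l_mpc, omega_m=0.3, h=0.7):
    """Matter mass in a comoving sphere of Lagrangian radius r_l, in Msun."""
    rho_m = omega_m * h ** 2 * RHO_CRIT_H2  # mean matter density today
    return 4.0 * math.pi / 3.0 * r_l_mpc ** 3 * rho_m

# linear_mass(1.0) reproduces the 1.71e11 Msun coefficient of Eq. (eq:mlin)
```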
\begin{figure}
\centering
\includegraphics[width=0.95\textwidth]{Figures/powerSpec_withWDM_zoom.pdf} \\
\includegraphics[width=0.95\textwidth]{Figures/powerSpec_withWDM_full_box.pdf}
\caption{The $\Lambda$CDM\ dimensionless power spectrum (solid lines, Equation \ref{eq:5}) plotted
as a function of linear wavenumber $k$ (bottom axis) and corresponding linear
mass $M_l$ (top axis). The bottom panel spans all physical scales relevant for
standard CDM models, from the particle horizon to the
free-streaming scale for dark matter composed of standard 100 GeV WIMPs on the
far right. The top panel zooms in on the scales of interest for this review,
marked with a rectangle in the bottom panel. Known
dwarf galaxies are consistent with occupying a relatively narrow 2 decade
range of this parameter space -- $10^{9}-10^{11}\,M_{\odot}$ -- even though dwarf
galaxies span approximately 7 decades in stellar mass. The effect of WDM models on the
power spectrum is illustrated by the dashed, dotted, and dash-dotted lines, which
map to the (thermal) WDM particle masses listed. See Section \ref{subsubsec:linear} for a discussion
of power suppression in WDM.}
\label{fig:pofk}
\end{figure}
The mapping between wave number and mass scale is illustrated by the top and bottom axis in Figure \ref{fig:pofk}.
The processed linear power spectrum for $\Lambda$CDM\ shown in the bottom panel (solid line)
spans the horizon scale to a typical mass cutoff scale for the most common cold
dark matter candidate ($\sim 10^{-6} M_{\odot}$; see discussion in
Section~\ref{subsec:particle_physics}). A line at $\Delta = 1$ is plotted for
reference, showing that fluctuations born on comoving length scales smaller
than $r_l \approx 10\,h^{-1} \, {\rm Mpc} \approx 14 \,{\rm Mpc}$ have gone non-linear today. The
top panel is zoomed in on the small scales of relevance for this review (which
we define more precisely below). Typical regions on these scales have collapsed
into virialized objects today. These collapsed objects -- dark matter halos --
are the sites of galaxy formation.
\subsection{Dark matter halos}
\label{subsec:lcdm_small}
\subsubsection{Global properties}
Soon after overdense regions of the Universe become non-linear, they stop expanding, turn around, and collapse, converting potential energy into kinetic energy in the process.
The result is virialized dark matter halos with masses given by
\begin{equation}
\label{eq:mvir}
M_{\rm{vir}}=\frac{4\,\pi}{3}\,R_{\rm{vir}}^3\,\Delta\,\rho_{\rm m}\,,
\end{equation}
where $\Delta \sim 300$ is the virial over-density parameter, defined here
relative to the background matter density. As discussed below, the value of
$M_{\rm{vir}}$ is ultimately a definition that requires some way of defining a halo's
outer edge ($R_{\rm{vir}}$). This is done via a choice for $\Delta$. The numerical
value for $\Delta$ is often chosen to match the over-density one predicts for a
virialized dark matter region that has undergone an idealized spherical collapse
\citep[][]{bryan1998}, and we will follow that convention here. Note that given
a virial mass $M_{\rm{vir}}$, the virial radius, $R_{\rm{vir}}$, is uniquely defined by
Equation \ref{eq:mvir}. Similarly, the virial {\em velocity}
\begin{equation}
V_{\rm{vir}} \equiv \sqrt{\frac{G M_{\rm{vir}}}{R_{\rm{vir}}}},
\end{equation}
is also uniquely defined. The parameters $M_{\rm{vir}}$, $R_{\rm{vir}}$, and $V_{\rm{vir}}$ are
equivalent mass labels -- any one determines the other two, given a specified over-density parameter $\Delta$.
\begin{marginnote}[]
\entry{Galaxy Clusters}{$M_{\rm{vir}}\approx 10^{15} M_{\odot}$ $V_{\rm{vir}} \approx 1000 \, {\rm km \, s}^{-1}$}
\entry{Milky Way}{$M_{\rm{vir}}\approx 10^{12} M_{\odot}$ $V_{\rm{vir}} \approx 100 \, {\rm km \, s}^{-1}$}
\entry{Smallest Dwarfs}{$M_{\rm{vir}}\approx 10^{9} M_{\odot}$ $V_{\rm{vir}} \approx 10 \, {\rm km \, s}^{-1}$}
\end{marginnote}
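The virial scalings in the margin note follow from the two equations above, as a few lines of Python illustrate. The gravitational constant in astrophysical units, $G \simeq 4.301\times10^{-9}\,{\rm Mpc}\,({\rm km\,s^{-1}})^2\,M_{\odot}^{-1}$, and the critical-density constant are assumptions not quoted in the text.

```python
import math

# Assumed constants (not quoted in the text):
G = 4.301e-9                             # Mpc (km/s)^2 / Msun
RHO_M = 0.3156 * 0.6727 ** 2 * 2.775e11  # mean matter density, Msun/Mpc^3

def r_vir(m_vir, delta=333.0):
    """Virial radius in Mpc, inverting Eq. (eq:mvir)."""
    return (3.0 * m_vir / (4.0 * math.pi * delta * RHO_M)) ** (1.0 / 3.0)

def v_vir(m_vir, delta=333.0):
    """Virial velocity in km/s, V_vir = sqrt(G M_vir / R_vir)."""
    return math.sqrt(G * m_vir / r_vir(m_vir, delta))

# Since R_vir ~ M^(1/3), also V_vir ~ M^(1/3): a factor of 1000 in mass is a
# factor of 10 in velocity, matching the cluster / Milky Way / smallest-dwarf
# entries in the margin note at the order-of-magnitude level.
```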
One nice implication of Equation \ref{eq:mvir} is that a present-day object with
virial mass $M_{\rm{vir}}$ can be associated directly with a linear perturbation with
mass $M_l$. Equating the two gives
\begin{equation}
\label{eq:rvir}
R_{\rm{vir}}=0.15 \, \left(\frac{\Delta}{300}\right)^{-1/3}\,r_l\,.
\end{equation}
We see that a collapsed halo of size $R_{\rm{vir}}$ is approximately 7 times smaller in
physical dimension than the comoving linear scale associated with that mass
today.
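The factor of $\sim 7$ is pure arithmetic and can be checked in two lines (Python, for illustration):

```python
# Equating M_l (Eq. eq:mlin) with M_vir (Eq. eq:mvir) gives
# R_vir = Delta^(-1/3) r_l; for Delta = 300 this is the 0.15 of Eq. (eq:rvir).
collapse_factor = 300.0 ** (1.0 / 3.0)  # r_l / R_vir, roughly 6.7
coefficient = 1.0 / collapse_factor     # roughly 0.15
```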
With this in mind, Equations 3--6 allow us to self-consistently define ``small
scales" for both the linear power spectrum and collapsed objects:
$M \lesssim 10^{11} \, M_{\odot}$. As we will discuss, potential problems
associated with galaxies inhabiting halos with $V_{\rm{vir}} \simeq 50 \, {\rm km \, s}^{-1}$ may
point to a power spectrum that is non-CDM-like at scales $r_l \lesssim 1 \,{\rm Mpc}$.
\begin{summary}[WE DEFINE ``SMALL SCALES'' AS THOSE SMALLER THAN:]
\vspace{-0.25cm}
\begin{equation*}
\label{eq:4}
M \approx 10^{11}\,M_{\odot} \leftrightarrow k \approx 3\,{\rm Mpc}^{-1}
\leftrightarrow r_l \approx 1\,{\rm Mpc}
\leftrightarrow R_{\rm{vir}} \approx 150 \,{\rm kpc}
\leftrightarrow V_{\rm{vir}} \approx 50\,{\rm km \, s}^{-1}\,.
\end{equation*}
\end{summary}
As alluded to above, a common point of confusion is that the halo mass
definition is subject to the assumed value of $\Delta$, which can vary by a
factor of $\sim 3$ depending on the author. For the spherical collapse
definition, $\Delta \simeq 333$ at $z=0$ (for our fiducial cosmology) and
asymptotes to $\Delta = 178$ at high redshift \citep[][]{bryan1998}. Another common
choice is a fixed $\Delta = 200$ at all $z$ (often labeled $M_{200m}$ in the
literature). Finally, some authors prefer to define the virial overdensity as
$200$ times the critical density, which, according to Equation \ref{eq:mvir}
would mean $\Delta(z) = 200 \rho_{c}(z)/\rho_m(z)$. Such a mass is commonly
labeled ``$M_{200}$'' in the literature. For most purposes (e.g., counting
halos), the precise choice does not matter, as long as one is consistent with the
definition of halo mass throughout an analysis: every halo has the same center,
but its outer radius (and mass contained within that radius) shifts depending on
the definition. In what follows, we use the spherical collapse definition
($\Delta = 333$ at $z=0$) and adhere to the convention of labeling that mass
``$M_{\rm{vir}}$".
Before moving on, we note that it is also possible (and perhaps even preferable)
to give a halo a ``mass" label that is directly tied to a physical feature
associated with a collapsed dark matter object rather than simply adopting a
$\Delta$. \citet{more2015} have advocated the use of a ``splash-back'' radius,
where the density profile shows a sharp break (this typically occurs at
$\sim 2 R_{\rm{vir}}$). Another common choice is to tag halos based not on a mass but
on $V_{\rm{max}}$, which is the peak value of the circular velocity $V_c(r) = \sqrt{G M(<r)/r}$
as one steps out from the halo center. For any individual halo, the value of
$V_{\rm{max}}$ ($\gtrsim V_{\rm{vir}}$) is linked to the internal mass profile or density
profile of the system, which is the subject of the next subsection. As
discussed below, the ratio $V_{\rm{max}}/V_{\rm{vir}}$ increases as the halo mass decreases.
\begin{textbox}[ht]\section{ROBUST PREDICTIONS FROM CDM-ONLY SIMULATIONS}
A defining characteristic of CDM-based hierarchical structure formation is
that the smallest scales collapse first -- a fact that arises directly from
the shape of the power spectrum (Figure 1) and that lies at the heart of many
robust predictions for the counts and structure of dark matter halos today.
As discussed below, baryonic processes can alter these predictions to various
degrees, but pure dark matter simulations have provided a well-defined set of
basic predictions used to benchmark the theory.
\subsection{The dark matter profiles of individual halos are cuspy and dense [Figure \ref{fig:nfw}]}
The density profiles of individual $\Lambda$CDM\ halos increase steadily towards small
radii, with an overall normalization and detailed shape that reflects the halo's
mass assembly. At fixed mass, early-forming halos tend to be denser than
later-forming halos. As with the mass function, both the shape {\em and}
normalization of dark matter halo density structure are predicted by $\Lambda$CDM, with
a well-quantified prediction for the scatter in halo concentration at fixed
mass.
\subsection{There are many more small halos than large ones [Figure \ref{fig:massfunc}]}
The comoving number density of dark matter halos rises steeply towards small
masses, $dn/dM \propto M^{\alpha}$ with $\alpha \simeq -1.9$. At large halo
masses, counts fall off exponentially above the mass scale that is just going
non-linear today. Importantly, both the shape and normalization of the mass
function are robustly predicted by the theory.
\subsection{Substructure is abundant and almost self-similar [Figure \ref{fig:mf}]}
Dark matter halos are filled with substructure, with a mass function that rises
as $dN/dm \propto m^{\alpha_s}$ with $\alpha_s \simeq -1.8$ down to the low-mass
free-streaming scale ($m \ll 1 M_{\odot}$ for canonical models). Substructure
reflects the high-density cores of smaller merging halos that survive the
hierarchical assembly process. Substructure counts are nearly self-similar with
host mass, with the most massive subhalos seen at
$m_{\rm max} \sim 0.2 M_{\rm host}$.
\end{textbox}
\subsubsection{Abundance}
In principle, the mapping between the initial spectrum of density fluctuations
at $z \rightarrow \infty$ and the mass spectrum of collapsed (virialized) dark matter
halos at later times could be extremely complicated: as a given scale becomes
non-linear, it could affect the collapse of nearby regions or larger scales. In
practice, however, the mass spectrum of dark matter halos can be modeled
remarkably well with a few simple assumptions. The first of these was made by
\citet{press1974}, who assumed that the mass spectrum of collapsed objects could
be calculated by extrapolating the overdensity field using linear theory even
into the highly non-linear regime and using a spherical collapse model
\citep{gunn1972}. In the Press-Schechter model, the dark matter halo mass
function -- the abundance of dark matter halos per unit mass per unit volume at
redshift $z$, often written as $n(M,z)$ -- depends only on the rms amplitude of
the linear dark matter power spectrum, smoothed using a spherical tophat filter
in real space and extrapolated to redshift $z$ using linear theory. Subsequent
work has put this formalism on more rigorous mathematical footing
\citep{bond1991, cole1991, sheth2001}, and this extended Press-Schechter (EPS) theory
yields abundances of dark matter halos that are perhaps surprisingly accurate
(see \citealt{zentner2007} for a comprehensive review of EPS theory). This accuracy is tested
through comparisons with large-scale numerical simulations.
Simulations and EPS theory both find a universal form for $n(M,z)$: the comoving
number density of dark matter halos is a power law with log slope of
$\alpha \simeq -1.9$ for $M \ll M^*$ and is exponentially suppressed for
$M \gg M^*$, where $M^* = M^*(z)$ is the characteristic mass of fluctuations going
non-linear at the redshift $z$ of interest\footnote{The black line in Figure \ref{fig:massfunc} illustrates the
mass function of $\Lambda$CDM\ dark matter halos.}. Importantly, given an initial power spectrum of density
fluctuations, it is possible to make highly accurate predictions within $\Lambda$CDM\
for the abundance, clustering, and merger rates of dark matter halos at any
cosmic epoch.
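A toy integration illustrates one consequence of the low-mass slope $\alpha \simeq -1.9$: halo counts are dominated by the smallest halos, while the mass they lock up is not. The Python sketch below uses arbitrary mass limits (illustrative choices, not from the text).

```python
# Unnormalized integrals of the low-mass power law dn/dM ~ M^alpha.
ALPHA = -1.9

def counts(m_lo, m_hi):
    """Integral of M^alpha dM: relative number of halos in [m_lo, m_hi]."""
    a = ALPHA + 1.0
    return (m_hi ** a - m_lo ** a) / a

def halo_mass(m_lo, m_hi):
    """Integral of M * M^alpha dM: relative mass locked in those halos."""
    a = ALPHA + 2.0
    return (m_hi ** a - m_lo ** a) / a

# one decade of small halos outnumbers the three decades above it,
# yet contains less total mass
```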
\subsubsection{Internal structure}
\citet{dubinski1991} were the first to use $N$-body simulations to show that the
internal structure of a CDM dark matter halo does not follow a simple power-law,
but rather bends from a steep outer profile to a mild inner cusp obeying
$\rho(r) \sim 1/r$ at small radii. More than twenty years later, simulations
have progressed to the point that we now have a fairly robust understanding of
the structure of $\Lambda$CDM\ halos and the important factors that govern halo-to-halo
variance \citep[e.g.,][]{navarro2010,diemer2015,klypin2016}, at least for dark-matter-only simulations.
To first approximation, dark matter halo profiles can be described by a nearly
universal form over all masses, with a steep fall-off at large radii
transitioning to a mildly divergent cusp towards the center. A common way to
characterize this is via the NFW functional form \citep{navarro1997}, which
provides a good (but not perfect) description of dark matter profiles:
\begin{equation}
\label{eq:nfw}
\rho(r) = \frac{4 \rho_{-2}}{(r/r_{-2})(1+r/r_{-2})^2}.
\end{equation}
Here, $r_{-2}$ is a characteristic radius where the log-slope of the density
profile is $-2$, marking a transition point from the inner $1/r$ cusp to an
outer $1/r^3$ profile. The second parameter, $\rho_{-2}$, sets the value of
$\rho(r)$ at $r=r_{-2}$. In practice, dark matter halos are better described by
the three-parameter \citet{Einasto65} profile \citep{Navarro2004,Gao2008}.
However, for the small halos of most concern for this review, NFW fits do almost
as well as Einasto in describing the density profiles of halos in
simulations \citep{Dutton2014}. Given that the NFW form is slightly simpler, we
have opted to adopt this approximation for illustrative purposes in this review.
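For reference, Equation \ref{eq:nfw} and the quantities derived from it can be written down in a few lines; the enclosed mass follows from integrating the profile analytically, and the unit conventions (kpc, $M_{\odot}$, km s$^{-1}$) are a convenience choice of ours, not something mandated by the profile itself:

```python
import numpy as np

G = 4.30091e-6  # gravitational constant in kpc (km/s)^2 / Msun

def rho_nfw(r, rho_m2, r_m2):
    """NFW density: rho = 4 rho_-2 / [(r/r_-2)(1 + r/r_-2)^2]."""
    x = r / r_m2
    return 4.0 * rho_m2 / (x * (1.0 + x) ** 2)

def mass_nfw(r, rho_m2, r_m2):
    """Mass enclosed within r (analytic integral of the NFW profile)."""
    x = r / r_m2
    return 16.0 * np.pi * rho_m2 * r_m2 ** 3 * (np.log(1.0 + x) - x / (1.0 + x))

def vcirc_nfw(r, rho_m2, r_m2):
    """Circular velocity V_c = sqrt(G M(<r)/r), in km/s for kpc, Msun inputs."""
    return np.sqrt(G * mass_nfw(r, rho_m2, r_m2) / r)
```

By construction $\rho(r_{-2}) = \rho_{-2}$ and the logarithmic slope at $r_{-2}$ is exactly $-2$, transitioning between the inner $1/r$ cusp and the outer $1/r^3$ fall-off.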
\begin{figure}
\centering
\includegraphics[width=\linewidth]{Figures/nfw_combined.pdf}
\caption{{\em Right:} The density profiles of median NFW dark matter halos at
$z=0$ with masses that span galaxy clusters ($M_{\rm{vir}} = 10^{15} M_{\odot}$, black)
to the approximate HI cooling threshold that is expected to correspond to the
smallest dwarf galaxies ($M_{\rm{vir}} \approx 10^8 M_{\odot}$, yellow). The lines are
color coded by halo virial mass according to the bar on the right and are
separated in mass by 0.5 dex. We see that (in the median) massive halos are
denser than low-mass halos at a fixed {\em physical} radius. However, at a
fixed small fraction of the virial radius, smaller halos are typically
slightly denser than larger halos, reflecting the concentration-mass relation.
This is demonstrated by the dotted line which connects $\rho(r)$ evaluated at
$r = \epsilon R_{\rm{vir}}$ for halos over a range of masses. We have chosen
$\epsilon = 0.015$ because this value provides a good match to observed galaxy
half-light radii over a wide range of galaxy luminosities under the assumption
that galaxies occupy halos according to abundance matching (see Section
\ref{sec:AM} and Figure \ref{fig:AM}). Interestingly, the characteristic dark
matter density at this `galaxy radius' increases only by a factor of $\sim 6$
over almost seven orders of magnitude in halo virial mass. {\em Left:} The
equivalent circular velocity curves $V_c(r) \equiv \sqrt{G M(<r)/r}$ for the
same halos plotted on the right. The dashed line connects the radius $R_{\rm{max}}$
where the circular velocity is maximum ($V_{\rm{max}}$) for each rotation curve. The
dotted line tracks the $R_{\rm{vir}}$ -- $V_{\rm{vir}}$ relation. The ratio $R_{\rm{max}}/R_{\rm{vir}}$
decreases towards smaller halos, reflecting the mass-concentration
relation. The ratio $V_{\rm{max}}/V_{\rm{vir}}$ likewise increases with increasing
concentration, i.e., towards smaller halos. }
\label{fig:nfw}
\end{figure}
As Equation \ref{eq:nfw} makes clear, two parameters (e.g., $\rho_{-2}$
and $r_{-2}$) are required to determine a halo's NFW density profile. For a fixed
halo mass $M_{\rm{vir}}$ (which fixes $R_{\rm{vir}}$), the second parameter is often expressed
as the halo concentration: $c = R_{\rm{vir}}/r_{-2}$. Together, a $M_{\rm{vir}}-c$
combination completely specifies the profile. In the median, and over the mass
and redshift regime of interest to this review, halo concentrations increase
with decreasing mass and redshift: $c \propto M_{\rm{vir}}^{-a} \, (1+z)^{-1}$, with
$a \simeq 0.1$ \citep[][]{bullock2001}. Though halo concentration correlates with halo mass, there is significant scatter ($\sim 0.1$ dex) about the median at fixed $M_{\rm{vir}}$
\citep{Jing2000,bullock2001}. Some fraction of this scatter is driven by the
variation in halo mass accretion history \citep{wechsler2002,ludlow2016}, with
early-forming halos having higher concentrations at fixed final virial mass.
Both the mass dependence of halo concentration and the correlation between formation time and concentration at fixed virial mass are consequences of the hierarchical build-up of halos in $\Lambda$CDM: low-mass halos assemble earlier, when the mean density of the Universe was higher, and therefore have higher concentrations than high-mass halos (e.g., \citealt{navarro1997,wechsler2002}). At the very smallest masses, the
concentration-mass relation likely flattens, reflecting the shape of the
dimensionless power spectrum \citep[see our Figure 1 and the discussion
in][]{ludlow2016}; at the highest masses and redshifts, characteristic of very
rare peaks, the trend seems to reverse \citep[$a < 0$;][]{klypin2016}.
The right panel of Figure \ref{fig:nfw} summarizes the median NFW density
profiles for $z=0$ halos with masses that span those of large galaxy clusters
($M_{\rm{vir}} = 10^{15} M_{\odot}$) to those of the smallest dwarf galaxies
($M_{\rm{vir}} = 10^8 M_{\odot}$). We assume the $c-M_{\rm{vir}}$ relation from \citet{klypin2016}.
These profiles are plotted in physical units (unscaled to the virial radius) in
order to emphasize that higher mass halos are denser at every radius than lower
mass halos (at least in the median). However, at a fixed small fraction of the
virial radius, small halos are slightly denser than larger ones. This is a
result of the concentration-mass relation. Under the ansatz of
abundance matching (Section \ref{sec:AM}, Figure \ref{fig:AM}), galaxy sizes
(half-mass radii) track a fixed fraction of their host halo virial radius:
$r_{\rm gal} \simeq 0.015 R_{\rm{vir}}$ \citep{kravtsov2013}. This relation is plotted
as a dotted line such that the dotted line intersects each solid line at that
$r = 0.015\, R_{\rm{vir}}$, where $R_{\rm{vir}}$ is that particular halo's virial radius. We
see that small halos are slightly denser at the typical radii of the galaxies
they host than are larger halos. Interestingly, however, the density range is
remarkably small, with a local density of dark matter increasing by only a
factor of $\sim 6$ over the full mass range of halos that are expected to host
galaxies, from the smallest dwarfs to the largest cD galaxies in the universe.
On the left we show the same halos, now presented in terms of the implied
circular velocity curves: $V_c \equiv \sqrt{GM(<r)/r}$. The dotted line in left
panel intersects $V_{\rm{vir}}$ at $R_{\rm{vir}}$ for each value of $M_{\rm{vir}}$. The dashed line does
the same for $V_{\rm{max}}$ and its corresponding radius $R_{\rm{max}}$. Higher mass systems,
with lower concentrations, typically have $V_{\rm{max}} \simeq V_{\rm{vir}}$, but for smaller
halos the ratio deviates noticeably from unity and can be as large as
$\sim 1.5$ for high-concentration outliers. Note also that the lowest-mass
halos have $R_{\rm{max}} \ll R_{\rm{vir}}$ and thus it is the value of $V_{\rm{max}}$ (rather than
$V_{\rm{vir}}$) that is more closely linked to the observable ``flat'' region of a
galaxy rotation curve. For our ``small-scale'' mass of $M_{\rm{vir}} = 10^{11} M_{\odot}$,
typically $V_{\rm{max}} \simeq 1.2\, V_{\rm{vir}} \simeq 60 \, {\rm km \, s}^{-1}$.
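The $V_{\rm{max}}/V_{\rm{vir}}$ values quoted above follow directly from the NFW form. A short sketch, using the standard NFW result that $V_c$ peaks at $R_{\rm{max}} \simeq 2.163\, r_{-2}$, recovers $V_{\rm{max}} \simeq 1.2\, V_{\rm{vir}}$ for a concentration $c \simeq 10$ typical of small halos:

```python
import numpy as np

def f_nfw(x):
    """NFW enclosed-mass shape function: M(< x r_-2) is proportional to f(x)."""
    return np.log(1.0 + x) - x / (1.0 + x)

X_MAX = 2.1626  # x at which V_c peaks for an NFW halo (standard result)

def vmax_over_vvir(c):
    """V_max/V_vir for an NFW halo of concentration c = R_vir/r_-2."""
    return np.sqrt((f_nfw(X_MAX) / X_MAX) / (f_nfw(c) / c))

def rmax_over_rvir(c):
    """R_max/R_vir for an NFW halo of concentration c."""
    return X_MAX / c
```

Since $V_c^2(r) \propto f(x)/x$ with $x = r/r_{-2}$, the ratio is a monotonically increasing function of $c$: low-concentration cluster halos have $V_{\rm{max}} \approx V_{\rm{vir}}$, while concentrated dwarf halos do not.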
\begin{figure}[t!]
\centering
\includegraphics[width=0.7\textwidth]{Figures/mf1_bolshoi.pdf}
\caption{Steep mass functions. The black solid line shows the $z=0$ dark matter
halo mass function ($M_{\rm halo} = M_{\rm{vir}}$) for the full population of halos
in the universe as approximated by \citet{sheth2001}. For comparison, the
magenta lines show the subhalo mass functions at $z=0$ (defined as
$M_{\rm halo} = M_{\rm sub} = M_{\rm peak}$, see text) for host halos at four
characteristic masses ($M_{\rm{vir}} = 10^{12}, 10^{13}, 10^{14},$ and
$10^{15} M_{\odot}$) with units given along the right-hand axis. Note that the
subhalo mass functions are almost self-similar with host mass, roughly
shifting to the right by $10\times$ for every decade increase in host mass.
The low-mass slope of the subhalo mass function is similar to that of the
field halo mass function. Both field and subhalo mass functions are expected to rise
steadily to the cutoff scale of the power spectrum, which for fiducial CDM
scenarios is $ \ll 1 M_{\odot}$.}
\label{fig:massfunc}
\end{figure}
\subsection{Dark matter substructure}
It was only just before the turn of the century that $N$-body simulations set
within a cosmological CDM framework were able to robustly resolve the
substructure {\em within} individual dark matter halos
\citep{Ghigna1998,klypin1999a}. It soon became clear that the dense centers of
small halos are able to survive the hierarchical merging process: dark matter
halos should be filled with substructure. Indeed, subhalo counts are nearly self-similar with
host halo mass. This was seen as welcome news for
cluster-mass halos, as the substructure could be easily identified with cluster
galaxies. However, as we will discuss in the next section, the fact that Milky-Way-size halos are filled with substructure is less clearly consistent with what
we see around the Galaxy.
Quantifying subhalo counts, however, is not so straightforward. Counting by
mass is tricky because the definition of ``mass'' for an extended distribution
orbiting within a collapsed halo is even more fraught with subjective decisions
than virial mass. When a small halo is accreted into a large one, mass is
preferentially stripped from the outside. Typically, the standard virial
overdensity ``edge'' is subsumed by the ambient host halo. One option is to
compute the mass that is bound to the subhalo, but even these masses vary from
halo finder to halo finder. The value of a subhalo's $V_{\rm{max}}$ is better defined,
and often serves as a good tag for quantifying halos.
Another option is to tag bound subhalos using the maximum virial mass that the halos had
at the time they were first accreted\footnote{This maximum mass is similar to the virial mass at the time of accretion,
though infalling halos can begin losing mass prior to first crossing the virial radius.} onto a host, $M_{\rm peak}$. This is a
useful option because stars in a central galaxy belonging to a halo at accretion
will be more tightly bound than the dark matter. The resultant satellite's
stellar mass is most certainly more closely related to $M_{\rm peak}$ than the
bound dark matter mass that remains at $z=0$. Moreover, the subsequent mass
loss (and even $V_{\rm{max}}$ evolution) could change depending on the baryonic content
of the {\em host} because of tidal heating and other dynamical effects
\citep{DOnghia2010}. For these reasons, we adopt $M_{\rm sub} = M_{\rm peak}$ for
illustrative purposes here.
The magenta lines in Figure \ref{fig:massfunc} show the median subhalo mass
functions ($M_{\rm sub} = M_{\rm peak}$) for four characteristic host halo masses
($M_{\rm{vir}} = 10^{12}$--$10^{15} M_{\odot}$) according to the results of
\citet{rodriguez-puebla2016}. These lines are normalized to the right-hand
vertical axis. Subhalos are counted only if they exist within the virial radius
of the host, which means the counting volume increases as
$\propto M_{\rm{vir}} \propto R_{\rm{vir}}^3$ for these four lines. For comparison, the black
line (normalized to the left vertical axis) shows the global halo mass function
\citep[as estimated via the fitting function from][]{sheth2001}. The subhalo
mass function rises with a similar (though slightly shallower) slope as the field halo mass function and is
also roughly self-similar in host halo mass.
\subsection{Linking dark matter halos to galaxies}
\label{sec:AM}
\begin{figure}[t]
\includegraphics[width=\textwidth]{Figures/mass_functions.pdf}
\caption{The thick black line shows the global dark matter mass function. The
dotted line is shifted to the left by the cosmic baryon fraction for each halo
$M_{\rm{vir}} \rightarrow f_b M_{\rm{vir}}$. This is compared to the observed stellar mass
function of galaxies from \citet[][magenta stars]{bernardi2013} and
\citeauthor{wright2017} (2017; cyan squares). The shaded bands
demonstrate a range of faint-end slopes
$\alpha_g = -1.62$ to $-1.32$. This range of power laws will
produce dramatic differences at the scales of the classical Milky Way satellites
($M_{\star} \simeq 10^{5-7} M_{\odot}$). Pushing large sky surveys down below
$10^6 M_{\odot}$ in stellar mass, where the differences between the power law
range shown would exceed a factor of ten, would provide a powerful constraint on our
understanding of the low-mass behavior. Until then, this mass regime can only
be explored without large completeness corrections in the vicinity of the
Milky Way. }
\label{fig:mf}
\end{figure}
How do we associate dark matter halos with galaxies? One simple approximation is
to assume that each halo is allotted its cosmic share of baryons
$f_b = \Omega_b/\Omega_m \approx 0.15$ and that those baryons are converted to
stars with some constant efficiency $\epsilon_\star$:
$M_{\star} = \epsilon_\star \, f_b \, M_{\rm{vir}}$. Unfortunately, as shown in Figure
\ref{fig:mf}, this simple approximation fails miserably. Galaxy stellar masses
do not scale linearly with halo mass; the relationship is much more
complicated. Indeed, the goal of forward modeling galaxy formation from known
physics within the $\Lambda$CDM\ framework is an entire field of its own (galaxy
evolution; \citealt{somerville2015}). Though galaxy formation theory has
progressed significantly in the last several decades, many problems remain unsolved.
Other than forward modeling galaxy formation, there are two common approaches
that give an independent assessment of how galaxies relate to dark matter halos.
The first involves matching the observed volume density of galaxies of a given
stellar mass (or other observable such as luminosity, velocity width, or baryon
mass) to the predicted abundance of halos of a given virial mass. The second
way is to measure the mass of the galaxy directly and to infer the dark matter
halo properties based on this dynamical estimator.
\subsubsection{Abundance matching} As illustrated in Figure \ref{fig:mf}, the
predicted mass function of collapsed dark matter halos has a considerably
different normalization and shape than the observed stellar mass function of
galaxies. The difference grows dramatically at both large and small masses, with
a maximum efficiency of $\epsilon_\star \simeq 0.2$ at the stellar mass scale of
the Milky Way ($M_{\star} \approx 10^{10.75} M_{\odot}$). This basic mismatch in shape
has been understood since the earliest galaxy formation models set within the
dark matter paradigm \citep{white1978} and is generally recognized as one of the
primary constraints on feedback-regulated galaxy formation
\citep{white1991,benson2003,somerville2015}.
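The logic of abundance matching can be illustrated with a deliberately simplified sketch: match the normalized cumulative abundances of a pure power-law halo mass function against a Schechter-like stellar mass function. The slopes are taken from the discussion in this section; the normalizations, mass ranges, and the $10^{11}\,M_{\odot}$ cutoff are arbitrary choices for illustration, not the calibrated inputs used in the published relations:

```python
import numpy as np

# Toy mass functions with arbitrary normalizations (slopes from the text).
halo_mf = lambda M: M ** -1.9                          # dn/dM for halos
gal_mf = lambda Ms: Ms ** -1.47 * np.exp(-Ms / 1e11)   # dn/dM* for galaxies

def cumulative(mf, masses):
    """n(>M) via trapezoidal integration of dn/dlnM over the grid (toy accuracy)."""
    integrand = mf(masses) * masses                    # dn/dlnM
    pieces = 0.5 * (integrand[:-1] + integrand[1:]) * np.diff(np.log(masses))
    n = np.zeros_like(masses)
    n[:-1] = np.cumsum(pieces[::-1])[::-1]             # suffix sums give n(>M_i)
    return n

Mh = np.logspace(9, 15, 400)      # halo masses [Msun]
Mst = np.logspace(5, 11.5, 400)   # stellar masses [Msun]
n_h = cumulative(halo_mf, Mh)
n_g = cumulative(gal_mf, Mst)

def mstar_of_mvir(Mvir):
    """Assign the galaxy mass whose normalized cumulative abundance
    matches that of the halo (rank matching, pinned at the grid ends)."""
    frac = np.interp(Mvir, Mh, n_h / n_h[0])
    return np.interp(frac, (n_g / n_g[0])[::-1], Mst[::-1])
```

Even this toy version reproduces the qualitative behavior discussed here: a monotonic $M_{\star}-M_{\rm{vir}}$ relation whose efficiency $M_{\star}/M_{\rm{vir}}$ drops steeply towards low halo masses, simply because the halo mass function rises more steeply than the stellar mass function.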
\begin{figure}[t]
\includegraphics[width=0.8\textwidth]{Figures/plot_behroozi_alt.pdf}
\caption{Abundance matching relation from Behroozi et al.~(in preparation).
Gray (magenta) shows a scatter of 0.2 (0.5) dex about the median relation.
The dashed line is a power-law extrapolation below the regime where large sky
surveys are currently complete. The cyan band shows how the extrapolation
would change as the faint-end slope of the galaxy stellar mass function ($\alpha$) is varied
over the same range illustrated by the shaded gray band in Figure \ref{fig:mf}.
Note that the enumeration of $M_{\star} = 10^5 M_{\odot}$ galaxies could provide a
strong discriminator on faint-end slope, as the $\pm 0.15$ range in $\alpha$ shown
maps to an order of magnitude difference in the halo mass associated with this galaxy
stellar mass and a corresponding
factor of $\sim 10$ shift in the galaxy/halo counts shown in Figure \ref{fig:massfunc}.
}
\label{fig:AM}
\end{figure}
At the small masses that most concern this review, dark matter halo counts
follow $dn/dM \propto M^{\alpha}$ with a steep slope $\alpha_{dm} \simeq -1.9$
compared to the observed stellar mass function slope of $\alpha_g = -1.47$
\citep[][which is consistent with the updated GAMA results shown in Figure \ref{fig:mf}]{baldry2012}. Current surveys that cover enough sky to provide a global
field stellar mass function reach a completeness limit of
$M_{\star} \approx 10^{7.5} M_{\odot}$. At this mass, galaxy counts are more than two
orders of magnitude below the naive baryonic mass function $f_b M_{\rm{vir}}$. The
shaded band illustrates how the stellar mass function would extrapolate to
the faint regime for a range of faint-end slopes $\alpha$ that are marginally
consistent with observations at the completeness limit.
One clear implication of this comparison is that galaxy formation efficiency
($\epsilon_\star$) must vary in a non-linear way as a function of $M_{\rm{vir}}$ (at
least if $\Lambda$CDM\ is the correct underlying model). Perhaps the cleanest way to
illustrate this is to adopt the simple assumption of Abundance Matching (AM): that
galaxies and dark matter halos are related in a one-to-one way, with the most
massive galaxies inhabiting the most massive dark matter halos
\citep[][]{frenk1988,kravtsov2004a,conroy2006,moster2010,behroozi2013}. The
results of such an exercise are presented in Figure \ref{fig:AM} (as derived by
Behroozi et al., in preparation). The gray band shows the median
$M_{\star} - M_{\rm{vir}}$ relation with an assumed 0.2 dex of scatter in $M_{\star}$ at
fixed $M_{\rm{vir}}$. The magenta band expands the scatter to 0.5 dex. This relation
is truncated near the completeness limit in \citet{baldry2012}. The central
dashed line in Figure \ref{fig:AM} shows the median relation that comes from
extrapolating the \citet{baldry2012} mass function with their best-fit
$\alpha_g = -1.47$ down to the stellar mass regime of Local Group dwarfs. The
cyan band brackets the range for the two other faint-end slopes shown in Figure
\ref{fig:mf}: $\alpha_g = -1.62$ and $-1.32$.
Figure \ref{fig:AM} allows us to read off the virial mass expectations for
galaxies of various sizes. We see that Bright Dwarfs at the limit of detection
in large sky surveys ($M_{\star} \approx 10^8 M_{\odot}$) are naively associated with
$M_{\rm{vir}} \approx 10^{11} M_{\odot}$ halos. Galaxies with stellar masses similar to
the Classical Dwarfs at $M_{\star} \approx 10^6 M_{\odot}$ are associated with
$M_{\rm{vir}} \approx 10^{10} M_{\odot}$ halos. As we will discuss in Section
\ref{sec:solutions}, galaxies at this scale with $M_{\star}/M_{\rm{vir}} \approx 10^{-4}$
are at the critical scale where feedback from star formation may not be
energetic enough to alter halo density profiles significantly. Finally,
Ultra-faint Dwarfs with $M_{\star}\approx 10^4 M_{\odot}$, $M_{\rm{vir}}\approx 10^{9} M_{\odot}$,
and $M_{\star}/M_{\rm{vir}} \approx 10^{-5}$ likely sit at the low-mass extreme of galaxy
formation.
\begin{marginnote}[]
\entry{Bright Dwarfs}{$M_{\star}\approx 10^{8} M_{\odot}$;
$M_{\rm{vir}} \approx 10^{11} M_{\odot}$; $M_{\star}/M_{\rm{vir}} \approx 10^{-3}$}
\entry{Classical Dwarfs}{$M_{\star}\approx 10^6 M_{\odot}$;
$M_{\rm{vir}}\approx 10^{10} M_{\odot}$; $M_{\star}/M_{\rm{vir}} \approx 10^{-4}$}
\entry{Ultra-faint Dwarfs}{$M_{\star}\approx 10^4 M_{\odot}$;
$M_{\rm{vir}}\approx 10^{9} M_{\odot}$; $M_{\star}/M_{\rm{vir}} \approx 10^{-5}$}
\end{marginnote}
\subsubsection{Kinematic Measures} An alternative way to connect to the dark
matter halo hosting a galaxy is to determine the galaxy's dark matter mass
kinematically. This, of course, can only be done within a central radius probed
by the baryons. For the small galaxies of concern for this review, extended
mass measurements via weak lensing or hot gas emission are infeasible. Instead,
masses (or mass profiles) must be inferred within some inner radius, defined
either by the stellar extent of the system for dSphs and/or the outer rotation
curves for rotationally-supported gas disks.
Bright dwarfs, especially those in the field, often have gas disks with ordered
kinematics. If the gas extends far enough out, rotation curves can be
extracted that extend as far as the flat part of the galaxy rotation curve
$V_{\rm flat}$. If care is taken to account for non-trivial velocity
dispersions in the mass extraction \citep[e.g.,][]{kuzio-de-naray2008}, then we
can associate $V_{\rm flat} \approx V_{\rm{max}}$.
Owing to the difficulty in detecting them, the faintest galaxies known are all
satellites of the Milky Way or M31 and are dSphs. These lack rotating gas
components, so rotation curve measurements are impossible. Instead, dSphs are
primarily stellar dispersion-supported systems, with masses that are best probed
by velocity dispersion measurements obtained star-by-star for the closest dwarfs
\citep[e.g.,][]{walker2009,simon2011,kirby2014}. For systems of this kind, the
mass can be measured accurately within the stellar half-light radius
\citep{walker2009}. The mass within the de-projected (3D) half-light radius
($r_{1/2}$) is relatively robust to uncertainties in the stellar velocity
anisotropy and is given by $M(<r_{1/2}) = 3\, \sigma_\star^2 \, r_{1/2} / G$, where
$\sigma_\star$ is the measured, luminosity-weighted line-of-sight velocity dispersion
\citep{wolf2010}. This formula is equivalent to saying that the circular
velocity at the half-light radius is
$V_{1/2} = V_c(r_{1/2} )= \sqrt{3} \,\sigma_\star$. The value of $V_{1/2}$
($\le V_{\rm{max}}$) provides a one-point measurement of the host halo's rotation curve
at $r=r_{1/2}$.
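The \citet{wolf2010} estimator is simple enough to evaluate directly. As an illustration (with input values chosen to be broadly representative of a classical dSph, not drawn from any specific galaxy), a system with $\sigma_\star = 10\,{\rm km\,s^{-1}}$ and $r_{1/2} = 300$ pc implies $M(<r_{1/2}) \approx 2\times10^7\,M_{\odot}$:

```python
G = 4.30091e-3  # gravitational constant in pc (km/s)^2 / Msun

def wolf_mass(sigma_los, r_half_3d):
    """M(<r_1/2) = 3 sigma^2 r_1/2 / G (Wolf et al. 2010 estimator).
    sigma_los in km/s, r_half_3d (deprojected 3D half-light radius) in pc;
    returns mass in Msun."""
    return 3.0 * sigma_los ** 2 * r_half_3d / G

def v_half(sigma_los):
    """Circular velocity at the half-light radius: V_1/2 = sqrt(3) sigma."""
    return 3 ** 0.5 * sigma_los
```

The same inputs give $V_{1/2} = \sqrt{3}\,\sigma_\star \approx 17\,{\rm km\,s^{-1}}$, a single-point constraint on the host halo's rotation curve.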
\subsection{Connections to particle physics}
\label{subsec:particle_physics}
Although the idea of ``dark'' matter had been around since at least \citet{zwicky1933}, it was not until rotation curve measurements of galaxies in the 1970s revealed the need for significant amounts of non-luminous matter \citep{freeman1970, rubin1978, bosma1978, rubin1980} that dark matter was taken seriously by the broader astronomical community (and shortly thereafter, it was recognized that dwarf galaxies might serve as sensitive probes of dark matter; \citealt{aaronson1983, faber1983, lin1983}). Very quickly, particle physicists realized the potential implications for their discipline as well.

Dark matter candidates were grouped into categories based on their effects on structure formation. ``Hot'' dark matter (HDM) particles remain relativistic until relatively late in the Universe's evolution and smooth out perturbations even on super-galactic scales; ``warm'' dark matter (WDM) particles have smaller initial velocities, become non-relativistic earlier, and suppress perturbations on galactic scales (and smaller); and CDM has negligible thermal velocity and does not suppress structure formation on any scale relevant for galaxy formation.

Standard Model neutrinos were initially an attractive (hot) dark matter candidate; by the mid-1980s, however, this possibility had been excluded on the basis of general phase-space arguments \citep{tremaine1979}, the large-scale distribution of galaxies \citep{white1983a}, and properties of dwarf galaxies \citep{lin1983}. The lack of a suitable Standard Model candidate for particle dark matter has led to significant work on particle physics extensions of the Standard Model. From a cosmology and galaxy formation perspective, the unknown particle nature of dark matter means that cosmologists must make assumptions about dark matter's origins and particle physics properties and then
investigate the resulting cosmological implications.
\begin{marginnote}[]
\entry{Cold Dark Matter (CDM)}{\\$m \sim 100 \,{\rm GeV}$, $v_{\rm th}^{z=0}
\approx 0\,{\rm km \, s}^{-1}$}
\entry{Warm Dark Matter (WDM)}{\\$m \sim 1 \,{\rm keV}$, $v_{\rm th}^{z=0} \sim 0.03 \,{\rm km \, s}^{-1}$}
\entry{Hot Dark Matter (HDM)}{\\$m \sim 1\,{\rm eV}$, $v_{\rm th}^{z=0} \,\sim 30 \,{\rm km \, s}^{-1}$}
\end{marginnote}
A general class of models
that are appealing in their simplicity is that of \textit{thermal relics}.
Production and destruction of dark matter particles are in equilibrium so long
as the temperature of the Universe $kT$ is larger than the mass of the dark
matter particle $m_{\rm DM}c^2$. At lower temperatures, the abundance is
exponentially suppressed, as destruction (via annihilation) dominates over
production. At some point, the interaction rate of dark matter particles drops
below the Hubble rate, however, and the dark matter particles ``freeze out'' at
a fixed number density (see, e.g., \citealt{kolb1990}; this is also known as chemical decoupling). Amazingly,
if the annihilation cross section is typical of weak-scale physics, the
resulting freeze-out density of thermal relics with $m\sim 100\,{\rm GeV}$ is
approximately equal to the observed density of dark matter today (e.g., \citealt{jungman1996}). This subset of
thermal relics is referred to as \textit{weakly-interacting massive particles
(WIMPs)}. The observation that new physics at the weak scale naturally leads
to the correct abundance of dark matter in the form of WIMPs
is known as the ``WIMP miracle'' \citep{feng2008} and has been the
basic framework for dark matter over the past 30 years.
WIMPs are not the only viable dark matter candidate, however, and it is
important to note that the WIMP miracle could be a red herring. \textit{Axions},
which are particles invoked to explain the strong CP problem of quantum
chromodynamics (QCD), and right-handed neutrinos (often called \textit{sterile
neutrinos}), which are a minimal extension to the Standard Model of particle
physics that can explain the observed baryon asymmetry and why neutrino masses
are so small compared to other fermions, are two other hypothetical particles
that may be dark matter (among a veritable zoo of additional possibilities; see \citealt{feng2010} for a recent review). While WIMPs, axions, and sterile neutrinos are capable of producing the observed abundance of dark matter in the present-day Universe, they can have very different effects on the mass spectrum of cosmological perturbations.
While the cosmological perturbation spectrum is initially set by physics in the very early universe (inflation in the standard scenario), the microphysics of
dark matter affects the evolution of those fluctuations at later times. In the standard WIMP
paradigm, the low-mass end of the CDM hierarchy is set by first collisional
damping (subsequent to chemical decoupling but prior to kinetic decoupling of
the WIMPs), followed by free-streaming (e.g., \citealt{hofmann2001, bertschinger2006}). For typical 100 GeV WIMP candidates,
these processes erase cosmological perturbations with $M \lesssim 10^{-6}\,M_{\odot}$
(i.e., Earth mass; \citealt{green2004}). Free-streaming also sets the low-mass end of the mass
spectrum in models where sterile neutrinos decouple from the plasma while
relativistic. In this case, the free-streaming scale can be approximated by the
(comoving) size of the horizon when the sterile neutrinos become
non-relativistic. The comoving horizon size at $z = 10^7$, corresponding to
$m\approx 2.5 \,{\rm keV}$, is approximately 50 kpc, which is significantly
smaller than the scale derived above for $L^*$ galaxies. keV-scale sterile
neutrinos are therefore observationally-viable dark matter candidates (see
\citealt{adhikari2016} for a recent, comprehensive review). QCD axions are
typically $\sim \mu{\rm eV}$-scale particles but are produced out of thermal
equilibrium \citep{kawasaki2013}. Their free-streaming scale is significantly smaller than that of a typical WIMP (see Section~\ref{subsubsec:linear}).
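The $\sim 50$ kpc figure quoted above for keV-scale sterile neutrinos is easy to verify at the order-of-magnitude level. Deep in radiation domination the comoving horizon is approximately the comoving Hubble radius $c/(aH)$, with $H(z) \approx H_0 \sqrt{\Omega_r}\,(1+z)^2$. The values of $H_0$ and $\Omega_r$ below are assumed fiducial numbers, not taken from the text:

```python
import numpy as np

# Assumed fiducial cosmology for this estimate.
H0 = 70.0          # Hubble constant [km/s/Mpc]
OMEGA_R = 9.0e-5   # radiation density (photons + relativistic neutrinos)
C = 299792.458     # speed of light [km/s]

def comoving_horizon_rd_kpc(z):
    """Comoving horizon ~ c/(aH) deep in radiation domination, in kpc,
    using H(z) ~ H0 sqrt(Omega_r) (1+z)^2 so that c/(aH) = c/(H0 sqrt(Omega_r) (1+z))."""
    hubble_radius_mpc = C / (H0 * np.sqrt(OMEGA_R) * (1.0 + z))
    return hubble_radius_mpc * 1e3
```

Evaluating this at $z = 10^7$ indeed returns a comoving scale of a few tens of kpc, consistent with the free-streaming scale quoted for a $m \approx 2.5\,{\rm keV}$ sterile neutrino.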
The previous paragraphs have focused on the effects of collisionless damping and
free-streaming -- direct consequences of the particle nature of dark matter --
in the linear regime of structure formation. Dark matter microphysics can also
affect the non-linear regime of structure formation. In particular, dark matter
self-interactions -- scattering between two dark matter particles -- will affect
the phase space distribution of dark matter. Within observational constraints,
dark matter self-interactions could be relevant in the dense centers of dark
matter halos. By transferring kinetic energy from high-velocity particles to
low-velocity particles, scattering transfers ``heat'' to the centers of dark
matter halos, reducing their central densities and making their velocity
distributions nearly isothermal. This would have a direct effect on galaxy
formation, as galaxies form within the centers of dark matter halos and the
motions of their stars and gas trace the central gravitational potential. These
effects are discussed further in Section~\ref{subsubsec:nonlinear}.
The particle nature of dark matter is therefore reflected in the cosmological
perturbation spectrum, in the abundance of collapsed dark matter structures as a
function of mass, and in the density and velocity distribution of dark matter in
virialized dark matter halos.
\begin{textbox}[ht]\section{THREE CHALLENGES TO BASIC $\bm{\Lambda}$CDM\ PREDICTIONS}
There are three classic problems associated with the small-scale predictions
for dark matter in the $\Lambda$CDM\ framework. Other anomalies exist, including
some that we discuss in this review, but these three are important because 1)
they concern basic predictions about dark matter that are fundamental to the
hierarchical nature of the theory; and 2) they have received significant
attention in the literature.
\subsection{Missing Satellites and Dwarfs [Figures~\ref{fig:massfunc}--\ref{fig:MSP_AM}]}
The observed stellar mass functions of field galaxies and satellite galaxies in
the Local Group are much flatter at low masses than predicted dark matter halo
mass functions: $dn/dM_{\star} \propto M_{\star}^{\alpha_g}$ with
$\alpha_{g} \simeq -1.5$ (vs. $\alpha \simeq -1.9$ for dark matter). The issue
is most acute for Galactic satellites, where completeness issues are less of a
concern. There are only $\sim 50$ known galaxies with $M_\star > 300 M_{\odot}$
within $300$ kpc of the Milky Way compared to as many as $\sim 1000$ dark
subhalos (with $M_{\rm sub} > 10^{7} M_{\odot}$) that could conceivably host
galaxies. One solution to this problem is to posit that galaxy formation
becomes increasingly inefficient as the halo mass drops. The smallest dark
matter halos have simply failed to form stars altogether.
\subsection{Low-density Cores vs. High-density Cusps [Figure~\ref{fig:cuspcore}]}
The central regions of dark-matter dominated galaxies as inferred from rotation
curves tend to be both less dense (in normalization) and less cuspy (in inferred
density profile slope) than predicted for standard $\Lambda$CDM\ halos (such as those
plotted in Figure~\ref{fig:nfw}). An important question is whether baryonic feedback alters
the structure of dark matter halos.
\subsection{Too-Big-to-Fail [Figure~\ref{fig:tbtf}]}
The local universe contains too few galaxies with central densities indicative
of $M_{\rm{vir}} \simeq 10^{10} M_{\odot}$ halos. Halos of this mass are generally
believed to be too massive to have failed to form stars, so the fact that they
are missing is hard to understand. The stellar mass associated with this halo
mass scale ($M_\star \simeq 10^6 M_{\odot}$, Figure \ref{fig:AM}) may be too small
for baryonic processes to alter their halo structure (see Figure
\ref{fig:feedback}).
\end{textbox}
\section{OVERVIEW OF PROBLEMS}
\label{sec:problems}
The CDM paradigm as summarized in the previous section emerged among other
dark matter variants in the early 1980s \citep{peebles1982,blumenthal1984,davis1985}
with model parameters gradually settling to their current precise state (including $\Lambda$) in the
wake of overwhelming evidence from large-scale galaxy clustering, supernovae
measurements of cosmic acceleration, and cosmic microwave background studies,
among other data. The 1990s saw the first $N$-body simulations to resolve the
internal structure of CDM halos on small scales. Almost immediately researchers
pinpointed the two most well-known challenges to the theory: the cusp-core
problem \citep{flores1994,moore1994} and the missing satellites problem
\citep{klypin1999,moore1999}. This section discusses these two classic issues
from a current perspective and goes on to describe a third problem, too-big-to-fail\ \citep{boylan-kolchin2011}, which is in some sense a confluence of the
first two. Finally, we conclude this section with a more limited discussion of
two other challenges faced by $\Lambda$CDM\ on small scales: the apparent planar
distributions seen for Local Group satellites and the dynamical scaling
relations seen in galaxy populations.
\subsection{Missing Satellites}
\label{subsec:msp}
\begin{figure}[t]
\begin{minipage}{.50\textwidth}
\includegraphics[width=\linewidth]{Figures/halo_circle.pdf}
\end{minipage}
\begin{minipage}{.49\textwidth}
\includegraphics[width=\linewidth]{Figures/3Dplot_V3.pdf}
\end{minipage}
\caption{The Missing Satellites Problem: Predicted $\Lambda$CDM\ substructure (left) vs. known Milky Way satellites
(right). The image on the left shows the $\Lambda$CDM\ dark matter distribution within a sphere of radius 250 kpc around the center of a Milky-Way size dark matter halo (simulation by V. Robles and T. Kelley in collaboration with the authors). The image on the right (by M. Pawlowski in collaboration with the authors)
shows the current census of Milky Way satellite galaxies, with galaxies discovered since 2015 in red. The Galactic disk is represented by a circle of radius 15 kpc at the center and the outer sphere has a radius of 250 kpc. The 11 brightest (classical) Milky Way satellites are labeled by name. Sizes of the symbols are not to scale but are rather proportional to the log of each satellite galaxy's stellar mass. Currently, there are $\sim 50$ satellite galaxies of the Milky Way compared to thousands of predicted subhalos with $M_{\rm peak} \gtrsim 10^7\,M_{\odot}$. }
\label{fig:satellites}
\end{figure}
The highest-resolution cosmological simulations of MW-size halos in the $\Lambda$CDM\,
paradigm have demonstrated that dark matter (DM) clumps exist at all resolved
masses, with no break in the subhalo mass function down to the numerical
convergence limit
\citep[e.g.,][]{springel2008,VL2,GHALO,garrison-kimmel2014,Griffen2016}. We
expect thousands of subhalos with masses that are (in principle) large enough to
have supported molecular cooling ($M_{\rm peak} \gtrsim 10^7~M_{\odot}$). Meanwhile,
only $\sim 50$ satellite galaxies down to $\sim 300~M_{\odot}$ in stars are known to orbit
within the virial radius of the Milky Way \citep{drlica-wagner2015}. Even though there
is real hope that future surveys could bring the census of ultra-faint dwarf
galaxies into the hundreds \citep{tollerud2008, hargis2014}, it seems unlikely there are
thousands of undiscovered dwarf galaxies to this limit within the virial volume of the Milky
Way. The current situation is depicted in Figure \ref{fig:satellites}, which
shows the dark matter distribution around a Milky Way size galaxy as predicted by a $\Lambda$CDM\ simulation next to a map of the known galaxies of the Milky Way on the same scale.
Given the discussion of abundance matching in Section \ref{sec:AM} and the
associated Figure~\ref{fig:AM}, it is reasonable to expect that dark matter
halos become increasingly inefficient at making galaxies at low masses and at
some point go completely dark. Physical mass scales of interest in this regard
include the mass below which
reionization UV feedback likely suppresses gas accretion,
$M_{\rm{vir}} \approx 10^9 \, M_{\odot}$ \citep[$V_{\rm{max}} \gtrsim 30 \, {\rm km \, s}^{-1}$; e.g.,][]{efstathiou1992,bullock2000, benson2002, bovill2009, sawala2016a}, and the minimum mass for atomic cooling in the early Universe, $M_{\rm{vir}} \approx 10^8 \, M_{\odot}$
\citep[$V_{\rm{max}} \gtrsim 15 \, {\rm km \, s}^{-1}$; see, e.g.,][]{rees1977}.
According to Figure \ref{fig:AM}, these physical effects are likely to become
dominant in the regime of ultra-faint galaxies $M_{\star} \lesssim 10^5 M_{\odot}$.
\begin{figure}[t]
\includegraphics[width=0.7\textwidth]{Figures/sham_mw.pdf}
\caption{``Solving" the Missing Satellites Problem with abundance matching. The cumulative count of dwarf galaxies around the Milky Way (magenta) plotted down to completeness limits from
\citet{garrison-kimmel2017}. The gray shaded region shows the predicted
stellar mass function from the dark-matter-only ELVIS simulations
\citep{garrison-kimmel2014} combined with the fiducial AM relation shown in
Figure \ref{fig:AM}, assuming zero scatter. If the faint-end slope of the stellar mass function is shallower (dashed) or steeper (dotted), the predicted abundance of satellites with $M_{\star} > 10^4\,M_{\odot}$ throughout the Milky Way's virial volume differs by a factor of 10. Local Group counts can therefore serve as strong constraints on galaxy formation models.}
\label{fig:MSP_AM}
\end{figure}
The question then becomes: can we simply adopt the abundance-matching relation derived from
field galaxies to ``solve" the Missing Satellites Problem down to the scale of
the classical MW satellites (i.e., $M_{\rm{vir}} \simeq 10^{10} M_{\odot}$ $\leftrightarrow$ $M_{\star} \simeq 10^6 M_{\odot}$)? Figure \ref{fig:MSP_AM} \citep[modified from][]{garrison-kimmel2017} shows that the answer is
likely ``yes." Shown in magenta is the cumulative count of Milky Way
satellite galaxies within 300 kpc of the Galaxy plotted down to the stellar mass completeness limit within that volume. The shaded band shows the $68\%$ range of
predicted stellar mass functions from the dark-matter-only ELVIS simulations \citep{garrison-kimmel2014} combined with the AM relation shown in Figure \ref{fig:AM} with zero scatter. The agreement is not perfect, but there is no over-prediction.
The dashed lines show how the predicted satellite stellar mass functions would
change for different assumed (field galaxy) faint-end slopes in calculating the AM relation. An important avenue going forward will be to push these comparisons down to
the ultra-faint regime, where strong baryonic feedback effects are expected to
begin shutting down galaxy formation altogether.
\begin{figure}[!htb]
\includegraphics[width=0.8\textwidth]{Figures/rotCurves_littleTHINGS.pdf}
\caption{The Cusp-Core problem. The dashed line shows the naive $\Lambda$CDM\ expectation (NFW, from dark-matter-only simulations) for a typical rotation curve of a $V_{\rm{max}} \approx 40 \, {\rm km \, s}^{-1}$ galaxy. This rotation curve
rises quickly, reflecting a central density profile that rises as a cusp with $\rho \propto 1/r$. The data points show
the rotation curves of two example galaxies of this size from the LITTLE THINGS survey \citep{oh2015}, which are more slowly rising and better fit by a density profile with a constant-density core \citep[][cyan line]{Burkert1995}.}
\label{fig:cuspcore}
\end{figure}
\subsection{Cusps, Cores, and Excess Mass}
As discussed in Section 1, $\Lambda$CDM\ simulations that include only dark matter
predict that dark matter halos should have density profiles that rise steeply at
small radius $\rho(r) \propto r^{-\gamma}$, with $\gamma \simeq 0.8-1.4$ over
the radii of interest for small galaxies \citep{navarro2010}. This is in
contrast to many (though not all) low-mass dark-matter-dominated galaxies with
well-measured rotation curves, which prefer fits with constant-density cores
($\gamma \approx 0-0.5$; e.g.,
\citealt{McGaugh2001,Marchesini2002,simon2005,deBlok2008,kuzio-de-naray2008}). A
related issue is that fiducial $\Lambda$CDM\ simulations predict more dark matter in
the central regions of galaxies than is measured for the galaxies that they
should host according to AM. This ``central density problem" is an issue of
normalization and exists independent of the precise slope of the central density
profile \citep{alam2002,oman2015}. While these problems are in principle
distinct issues, as the second refers to a tension in total cumulative mass and
the first is an issue with the derivative, it is likely that they point to a common
tension. Dark-matter-only $\Lambda$CDM\ halos are too dense and too cuspy in their
centers compared to many observed galaxies.
Figure \ref{fig:cuspcore} summarizes the basic problem. Shown as a dashed line
is the typical circular velocity curve predicted for an NFW $\Lambda$CDM~ dark matter
halo with $V_{\rm{max}} \approx 40 \, {\rm km \, s}^{-1}$ compared to the observed
rotation curves for two galaxies with the same asymptotic velocity
from \citet{oh2015}. The observed rotation curves rise
much more slowly than the $\Lambda$CDM\ expectation, reflecting central densities that are
lower and profiles that are more core-like than the fiducial prediction.
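The contrast in Figure \ref{fig:cuspcore} is straightforward to reproduce numerically. The sketch below (our own illustration, not the fitting procedure of \citealt{oh2015}) compares circular velocity curves, $V_c = \sqrt{G M(<r)/r}$, for an NFW cusp and a Burkert core, with densities and scale radii chosen by hand so that both halos peak near $V_{\rm{max}} \approx 40 \, {\rm km \, s}^{-1}$:

```python
import math

G = 4.30091e-6  # gravitational constant in kpc (km/s)^2 / Msun

def v_circ_nfw(r, rho_s, r_s):
    """Circular velocity of an NFW halo: rho ~ 1/[x(1+x)^2], x = r/r_s."""
    x = r / r_s
    m_enc = 4.0 * math.pi * rho_s * r_s**3 * (math.log(1.0 + x) - x / (1.0 + x))
    return math.sqrt(G * m_enc / r)

def v_circ_burkert(r, rho_0, r_0):
    """Circular velocity of a Burkert (cored) halo:
    rho = rho_0 r_0^3 / [(r + r_0)(r^2 + r_0^2)]."""
    x = r / r_0
    m_enc = 2.0 * math.pi * rho_0 * r_0**3 * (
        math.log(1.0 + x) + 0.5 * math.log(1.0 + x**2) - math.atan(x))
    return math.sqrt(G * m_enc / r)

# Illustrative parameters (chosen by hand so both halos peak near 40 km/s):
rho_s, r_s = 5.5e6, 5.0  # Msun/kpc^3, kpc
rho_0, r_0 = 5.5e6, 5.0  # Msun/kpc^3, kpc

radii = [0.5 * (i + 1) for i in range(80)]  # 0.5 to 40 kpc
v_nfw = [v_circ_nfw(r, rho_s, r_s) for r in radii]
v_bur = [v_circ_burkert(r, rho_0, r_0) for r in radii]
```

With these (assumed) parameters both curves reach the same $V_{\rm{max}}$, but the cuspy NFW halo is already near $24 \, {\rm km \, s}^{-1}$ at 1 kpc while the cored halo is still near $9 \, {\rm km \, s}^{-1}$ there, mirroring the slowly rising observed rotation curves.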
\subsection{Too-Big-To-Fail}
\begin{figure}[!htb]
\begin{minipage}{0.5\textwidth}
\includegraphics[width=0.99\linewidth]{Figures/tbtf_fig2b.pdf}
\end{minipage}
\begin{minipage}{0.5\textwidth}
\includegraphics[width=0.99\linewidth]{Figures/papastergis_fig.pdf}
\end{minipage}
\caption{The Too-Big-to-Fail Problem. {\em Left:} Data points show the circular
velocities of classical Milky Way satellite galaxies with
$M_{\star} \simeq 10^{5-7} M_{\odot}$ measured at their half-light radii
$r_{1/2}$. The magenta lines show the circular velocity curves of subhalos
from one of the (dark matter only) Aquarius simulations. These are
specifically the subhalos of a Milky Way-size host that have peak maximum
circular velocities $V_{\rm{max}} > 30 \, {\rm km \, s}^{-1}$ at some point in their histories.
Halos that are this massive are likely resistant to strong star formation
suppression by reionization
and thus naively too big to have failed to form stars \citep[modified
from][]{boylan-kolchin2012}. The existence of a large population of such
satellites with greater central masses than any of the Milky Way's dwarf
spheroidals is the original Too-Big-to-Fail problem. {\it Right:} The same
problem -- a mismatch between central masses of simulated dark matter systems
and observed galaxies -- persists for field dwarfs (magenta points), indicating it is not a
satellite-specific process (modified from \citealt{papastergis2017}). The field
galaxies shown all have stellar masses in the range $5.75 \leq \log_{10}(M_{\star}/M_{\odot}) \leq 7.5$. The gray curves
are predictions for $\Lambda$CDM\ halos from the fully self-consistent hydrodynamic
simulations of \citet{fitts2016} that span the same stellar mass range in the simulations
as the observed galaxies.}
\label{fig:tbtf}
\end{figure}
\label{subsec:tbtf}
As discussed above, a straightforward and natural solution to the missing
satellites problem within $\Lambda$CDM\ is to assign the known Milky Way satellites to
the largest dark matter subhalos (where largest is in terms of either
present-day mass or peak mass) and attribute the lack of observed galaxies in
the remaining smaller subhalos to galaxy formation physics. As pointed out by
\citet{boylan-kolchin2011}, this solution makes a testable prediction: the
inferred central masses of Milky Way satellites should be consistent with the
central masses of the most massive subhalos in $\Lambda$CDM\ simulations of Milky
Way-mass halos. Their comparison of observed central masses to $\Lambda$CDM\
predictions from the
Aquarius \citep{springel2008} and Via Lactea II \citep{diemand2008} simulations revealed
that
the most massive $\Lambda$CDM\ subhalos were systematically too centrally dense to host the
bright Milky Way satellites \citep{boylan-kolchin2011,
boylan-kolchin2012}. While there are subhalos with central masses
comparable to the Milky Way satellites,
these subhalos were never among the $\sim10$ most massive
(Figure~\ref{fig:tbtf}). Why would galaxies fail to form in the most massive
subhalos, yet form in dark matter satellites of lower mass? The most massive
satellites should be ``too big to fail'' at forming galaxies if the lower-mass
satellites are capable of doing so (thus the origin of the name of this
problem).
In short, while the \textit{number} of massive subhalos in dark-matter-only simulations matches the number of classical dwarfs observed (see Figure~\ref{fig:MSP_AM}), the \textit{central densities} of these simulated subhalos are higher than the central densities observed in the real galaxies (see Figure~\ref{fig:tbtf}).
While too-big-to-fail was originally identified for satellites of the Milky
Way, it was subsequently found to exist in Andromeda \citep{tollerud2014} and
field galaxies in the Local Group (those outside the virial radius of the Milky
Way and M31; \citealt{kirby2014}). Similar discrepancies were also pointed out
for more isolated low-mass galaxies, first based on HI rotation curve data
\citep{ferrero2012} and subsequently using velocity width measurements
\citep{papastergis2015, papastergis2016}. This version of too-big-to-fail\ in the field is also manifested in the velocity function of field galaxies\footnote{We note that the mismatch between the observed and predicted velocity function can also be interpreted as a ``missing dwarfs" problem if one considers the discrepancy as one in numbers at fixed $V_{\rm halo}$. We believe, however, that the more plausible interpretation is a discrepancy in $V_{\rm halo}$ at fixed number density.} (\citealt{zavala2009,klypin2015, trujillo-gomez2016,schneider2016}, though see \citealt{maccio2016} and \citealt{brooks2017} for arguments that no discrepancy exists). The generic observation in the
low-redshift Universe, then, is that the inferred central masses of galaxies
with $10^5 \lesssim M_{\star}/M_{\odot} \lesssim 10^8$ are $\sim 50\%$ smaller than expected from
dissipationless $\Lambda$CDM\ simulations.
The too-big-to-fail and core/cusp problems would be naturally connected if
low-mass galaxies generically have dark matter cores, as this would reduce their
central densities relative to CDM expectations\footnote{For a sense of the problem, the amount of mass that
would need to be removed to alleviate the issue on classical dwarf scales is $\sim 10^7 M_{\odot}$ within $\sim 300$ pc}.
However, the problems are, in
principle, separate: one could imagine galaxies with large constant-density cores that still have
too much central mass relative to CDM predictions (solving the
core/cusp problem but not too-big-to-fail), or galaxies with cuspy profiles but
overall lower density amplitudes than CDM predicts (solving too-big-to-fail but not
core/cusp).
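To put the footnote's numbers in context, the quoted mass scale translates directly into a circular velocity offset via $V_c = \sqrt{G M(<r)/r}$. A minimal sketch (our own arithmetic, for illustration only):

```python
import math

G = 4.30091e-6  # gravitational constant in kpc (km/s)^2 / Msun

def v_circ(m_enclosed, r_kpc):
    """Circular velocity (km/s) implied by a mass m_enclosed (Msun)
    interior to radius r_kpc (kpc)."""
    return math.sqrt(G * m_enclosed / r_kpc)

# Removing ~1e7 Msun from within ~300 pc lowers V_c(300 pc) by ~12 km/s
# in quadrature -- a large change for galaxies with V_max ~ 20-40 km/s.
dv = v_circ(1.0e7, 0.3)
```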
\begin{figure}[t]
\begin{minipage}{.5\textwidth}
\includegraphics[scale=0.4]{Figures/VPOS2.pdf}
\end{minipage}
\begin{minipage}{.5\textwidth}
\includegraphics[scale=0.4]{Figures/GPoA_velocities.pdf}
\end{minipage}
\caption{Planes of Satellites. \textit{Left}: Edge-on view of the satellite distribution around the Milky Way
\citep[updated from][]{pawlowski2015}, with satellite galaxies in yellow, young halo globular clusters and star clusters in blue, and newly-discovered objects (unconfirmed dwarf galaxies or star clusters) shown as green triangles. The red lines in the center indicate the positions and orientations of streams in the MW halo. The gray wedges span 24 degrees about the plane of the MW disk, where satellite discovery may be obscured by the Galaxy.
\textit{Right}: The satellite distribution around Andromeda \citep[modified by M. Pawlowski from][]{ibata2013} where the red points
are satellites belonging to the identified kinematic plane. Triangles pointing up are receding relative to M31. Triangles pointing down are approaching.
}
\label{fig:planes}
\end{figure}
\subsection{Satellite Planes}
\label{subsec:planes}
\citet{kunkel1976} and \citet{lynden-bell1976} pointed out that satellite galaxies
appeared to lie in a polar great circle around the Milky Way.
To the extent that such an alignment is unexpected in a theory of structure formation, this observation
pre-dates all other small-scale structure issues in the Local Group by
approximately two decades. The anisotropic distribution of Galactic satellites
received scant attention until a decade ago, when \citet{kroupa2005} argued that
it proved that satellite galaxies cannot be related to dark matter substructures
(and thereby constituted another crisis for CDM). Kroupa et al.~examined classical,
pre-SDSS dwarf galaxies in and around the Milky Way and found that the observed
distribution was strongly non-spherical. From this analysis, based
on the distribution of angles between the normal of the best-fitting plane of
dwarfs and the position vector of each MW satellite in the Galacto-centric
reference frame, Kroupa et al.~argued that
``the mismatch between the number and spatial distribution of MW dwarves
compared to the theoretical distribution challenges the claim that the MW
dwarves are cosmological sub-structures that ought to populate the MW halo.''
This claim was quickly disputed by \citet{zentner2005}, who investigated the
spatial distribution of dark matter subhalos in simulated CDM halos and determined
that it was highly inconsistent with a spherical distribution. They found that
the planar distribution of MW satellites was marginally consistent with being a
random sample of the subhalo distributions in their simulations, and furthermore,
the distribution of satellites they considered likely to be luminous
(corresponding to the more massive subhalos) was even more consistent with
observations. A similar result was obtained at roughly the same time by
\citet{kang2005}. Slightly later, \citet{metz2007} argued that the distribution of MW
satellite galaxies was inconsistent, at the 99.5\% level, with isotropic or
prolate substructure distributions (as might be expected in $\Lambda$CDM).
Related analysis of Milky Way satellite objects has further supported the idea
that the configuration is highly unusual compared to $\Lambda$CDM\ subhalo
distributions \citep{pawlowski2012}, with the 3D motions of satellites
suggesting that there is a preferred orbital pole aligned perpendicular to the
observed spatial plane \citep{pawlowski2013a}. The left hand side of Figure
\ref{fig:planes} shows the current distribution of satellites (galaxies and star
clusters) around the Milky Way looking edge-on at the planar configuration.
Note that the disk of the Milky Way could, in principle, bias discoveries away
from the MW disk axis, but it is not obvious that the orbital poles would be
biased by this effect. Taken together, the orbital poles and spatial
configuration of MW satellites are highly unusual for a randomly drawn sample of
$\Lambda$CDM\ subhalos \citep{Pawlowski2015a}.
As shown in the right-hand panel of Figure~\ref{fig:planes}, the M31 satellite galaxies also
show evidence for having a disk-like configuration \citep{metz2007}. Following
the discovery of new M31 satellites and the characterization of their
velocities, \citet{conn2013} and \citet{ibata2013} presented evidence that 15 of
27 Andromeda dwarf galaxies indeed lie in a thin plane, and further, that the
southern satellites are mostly approaching us with respect to M31, while the
northern satellites are mostly receding (as coded by the direction of the red
triangles in Figure~\ref{fig:planes}). This suggests that the plane could be rotationally
supported. Our view of this plane is essentially edge-on, meaning we have
excellent knowledge of in-plane motions and essentially no knowledge of
velocities perpendicular to the plane. Nevertheless, even a transient plane of
this kind would be exceedingly rare for $\Lambda$CDM\ subhalos
\citep[e.g.,][]{ahmed2017}.
Work in a similar vein has argued for the existence of planar structures in the
Centaurus A group \citep{tully2015} and for rotationally-supported systems of
satellites in a statistical sample of galaxies from the SDSS
\citep{ibata2015}. \citet{Libeskind2015} have used $\Lambda$CDM\ simulations to suggest
that some alignment of satellite systems in the local Universe may be naturally
explained by the ambient shear field, though they cannot explain thin planes
this way. Importantly, \citet{Phillips2015} have re-analyzed the SDSS data and
argued that it is not consistent with a ubiquitous co-rotating satellite
population and more likely reflects a statistical fluctuation. More data that
enables a statistical sample of hosts down to fainter satellites will be needed
to determine whether the configurations seen in the Local Group are common.
\begin{figure}[t!]
\begin{minipage}{.5\textwidth}
\includegraphics[width=0.95\linewidth]{Figures/RARforJB.pdf}
\end{minipage}
\begin{minipage}{.5\textwidth}
\includegraphics[width=0.95\linewidth]{Figures/2017_07_20_CDM_Oman_pts.pdf}
\end{minipage}
\caption{Regularity vs. Diversity. {\em Left:} The radial acceleration relation from \citet[][slightly modified]{McGaugh2016} showing the centripetal acceleration observed in rotation curves, $g_{\rm obs} = V^2/r$, plotted versus the expected acceleration from observed baryons $g_{\rm bar}$ for 2700 individual data points from 153 galaxy rotation curves. Large squares show the mean and the dashed lines show the rms width. {\em Right:} Green points show the circular velocities of observed
galaxies measured at $2~{\rm kpc}$ as a function of $V_{\rm{max}}$
from \citet{oman2015} as re-created by \citet{creasey2017}. For comparison, the gray band shows expectations from dark
matter only $\Lambda$CDM\ simulations. There is much more scatter at fixed $V_{\rm{max}}$ than
predicted by the simulations. Note that the galaxies used in the RAR in the left-hand panel have $V_{\rm{max}}$ values that span the range shown on the right. The
tightness of the acceleration relation is remarkable (consistent with zero scatter given observational error, red cross), especially given the
variation in central densities seen on the right.}
\label{fig:scalings}
\end{figure}
\subsection{Regularity in the Face of Diversity}
Among the more puzzling aspects of galaxy phenomenology in the context of $\Lambda$CDM\ are
the tight scaling relations between dynamical properties and baryonic
properties, even within systems that are dark matter dominated. One well-known
example of this is the baryonic Tully-Fisher relation \citep{mcgaugh2012}, which
shows a remarkably tight connection between the total baryonic mass of a galaxy (gas plus
stars) and its circular velocity $V_{\rm flat}$ ($\simeq V_{\rm{max}}$): $M_b \propto V_{\rm flat}^4$.
Understanding this correlation with $\Lambda$CDM\ models requires care for the low-mass
galaxies of most concern in this review \citep{brook2016}.
A generalization of the baryonic Tully-Fisher relation known as the radial acceleration relation (RAR) was recently introduced by
\citet{McGaugh2016}. Plotted in the left-hand panel of Figure \ref{fig:scalings}, the RAR
shows a tight correlation between the radial acceleration traced by rotation curves ($g_{\rm obs} = V^2/r$) and that predicted solely by the observed distribution of baryons ($g_{\rm bar}$)\footnote{This type of relation is what is generally expected in MOND, though the precise shape of the relation depends on the MOND interpolation function assumed (see \citealt{McGaugh2016} for a brief discussion).}. The upper-right ``high-acceleration" portion of the relation corresponds to baryon-dominated regions of (mostly large) galaxies. Here the relation tracks the one-to-one line, as it must. Below a characteristic acceleration of $a_0 \simeq 10^{-10}$ m s$^{-2}$, however, rotation curve points begin to peel away from the line, towards accelerations larger than can be explained by the baryons alone. It is this additional acceleration that we attribute to dark matter. The outer parts of large galaxies contribute to this region, as do virtually all parts of small galaxies. It is surprising, however, that the dark matter contribution in the low-acceleration regime tracks the baryonic distribution so closely, particularly in light of the diversity in rotation curves seen among galaxies at fixed $V_{\rm flat}$, as we now discuss.
The right-hand panel of Figure \ref{fig:scalings} illustrates the diversity in rotation curve shapes seen from galaxy to galaxy. Shown is a slightly modified version of a figure introduced by \citet{oman2015} and re-created by \citet{creasey2017}. Each data point
corresponds to a single galaxy rotation curve. The horizontal axis shows the
observed value of $V_{\rm flat}$ ($\approx V_{\rm{max}}$) for each galaxy and the vertical axis plots the value of the circular
velocity at $2$ kpc from the galaxy center. Note that at fixed $V_{\rm flat}$,
galaxies demonstrate a huge diversity in central
densities. Remarkably, this diversity is apparently correlated with the baryonic content in such a way as to drive the tight relation seen on the left.
The gray band in the right panel shows the expected relationship
between $V_{\rm{max}}$ and $V_c(2 {\rm kpc})$ for halos in $\Lambda$CDM\ dark-matter-only simulations.
Clearly, the real galaxies demonstrate much more diversity than is naively predicted.
The real challenge, as we see it, is to understand how galaxies can have
so much diversity in their rotation curve shapes compared to naive $\Lambda$CDM\ expectations while
also yielding tight correlations with baryonic content. The fact that there is a tight correlation with {\em baryonic} mass and not stellar mass (which presumably correlates more closely with total feedback energy) makes the question all the more interesting.
\section{SOLUTIONS}
\label{sec:solutions}
\subsection{Solutions within $\Lambda$CDM}
In this subsection, we explore some of the most popular and promising solutions
to the problems discussed above. We take as our starting point the basic $\Lambda$CDM\
model plus reionization, i.e., we take it as a fundamental prediction of $\Lambda$CDM\
that the heating of the intergalactic medium to $\sim 10^4\,{\rm K}$ by cosmic
reionization will suppress galaxy formation in halos with virial temperatures
below $\sim10^4\,{\rm K}$ (or equivalently, with $V_{\rm{vir}} \lesssim 20\,{\rm km \, s}^{-1}$) at
$z \lesssim 6$.
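The quoted correspondence between virial temperature and circular velocity follows from $T_{\rm{vir}} = \mu m_p V_{\rm{vir}}^2 / (2 k_B)$. A quick numerical check (our own, assuming a mean molecular weight $\mu \approx 0.6$ for ionized primordial gas):

```python
import math

k_B = 1.380649e-23  # Boltzmann constant, J/K
m_p = 1.67262e-27   # proton mass, kg
mu = 0.6            # mean molecular weight (assumed, ionized primordial gas)

def v_vir_kms(t_vir):
    """Virial circular velocity (km/s) for a halo of virial temperature
    t_vir (K), from T_vir = mu m_p V_vir^2 / (2 k_B)."""
    return math.sqrt(2.0 * k_B * t_vir / (mu * m_p)) / 1.0e3

# T_vir = 1e4 K corresponds to V_vir ~ 17 km/s, i.e. the ~20 km/s
# reionization suppression scale quoted in the text.
v20 = v_vir_kms(1.0e4)
```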
\subsubsection{Feedback-induced cores}
\label{subsubsec:feedback}
Many of the most advanced hydrodynamic simulations today have shown that it is
possible for baryonic feedback to erase the central cusps shown in the density profiles of Figure~\ref{fig:nfw} and produce core-like density profiles as inferred from rotation curves such as those shown in Figure~\ref{fig:cuspcore} \citep{mashchenko2008, Pontzen2012, madau2014a, onorbe2015,read2016a}.
One key prediction is that the effect of core creation will vary with the mass
in stars formed \citep{governato2012,di-cintio2014}. If galaxies form enough
stars, there will be enough supernova energy to redistribute dark matter and
create significant cores. If too many baryons end up in stars, however, the
excess central mass can compensate and drag dark matter back in. At the other
extreme, if too few stars are formed, there will not be enough energy in supernovae
to alter halo density structure and the resultant dark matter distribution will
resemble dark-matter-only simulations. While the possible importance of
supernova-driven blowouts for the central dark matter structure of dwarf
galaxies was already appreciated by
\citet{navarro1996a} and \citet{gnedin2002}, an important recent development is
the understanding that even low-level star formation over an extended period can
drive gravitational potential fluctuations that lead to dark matter core
formation.
\begin{figure}[t!]
\includegraphics[width=\textwidth]{Figures/msmh_slope.pdf}
\caption{The impact of baryonic feedback on the inner profiles of dark matter
halos. Plotted is the inner dark matter density slope $\alpha$
at $r = 0.015 R_{\rm{vir}}$ as a function of $M_\star/M_{\rm{vir}}$ for simulated galaxies
at $z = 0$. Larger values of $\alpha \approx 0$ imply core profiles, while lower values of $\alpha \lesssim -0.8$ imply cusps. The shaded gray band shows the expected range of dark matter profile slopes for NFW profiles as derived from dark-matter-only simulations (including concentration scatter). The filled magenta stars and shaded purple band (to guide the eye) show the predicted inner density slopes from the NIHAO cosmological hydrodynamic simulations by \citet{tollet2016}. The cyan stars show a similar prediction from the entirely independent FIRE-2 simulations \citep[][Chan et al., in preparation]{fitts2016, hopkins2017}. Note that dark matter core formation peaks in efficiency at $M_{\star}/M_{\rm{vir}} \approx 0.005$, in the regime of the brightest dwarfs. Both simulations find that for $M_{\star}/M_{\rm{vir}} \lesssim 10^{-4}$, the impact of baryonic feedback is negligible. This critical ratio, below which core formation via stellar feedback is difficult, corresponds to the regime of classical dwarfs and ultra-faint dwarfs.}
\label{fig:feedback}
\end{figure}
This general behavior is illustrated in Figure \ref{fig:feedback}, which shows
the impact of baryonic feedback on the inner slopes of dark matter halos $\alpha$
measured at $1-2\%$ of the halo virial radii. Core-like density profiles have $\alpha \rightarrow 0$. The magenta stars show results from
the NIHAO hydrodynamic simulations as a function of $M_{\star}/M_{\rm{vir}}$, the ratio of
stellar mass to the total halo mass \citep{tollet2016}. The cyan stars show
results from an entirely different set of simulations from the FIRE-2 collaboration \citep[][Chan et al., in preparation]{wetzel2016,fitts2016,garrison-kimmel2017a}.
The shaded gray band shows the expected slopes
for NFW halos with the predicted range of concentrations from dark-matter-only simulations.
We see that both sets of simulations find core formation to be fairly efficient at
$M_{\star}/M_{\rm{vir}} \approx 0.005$. This ``peak
core formation" ratio maps to $M_{\star} \simeq 10^{8-9} \,M_{\odot}$, corresponding to the
brightest dwarfs. At ratios below $M_{\star}/M_{\rm{vir}} \approx 10^{-4}$, however, the
impact of baryonic feedback is negligible. The ratio
below which core formation is difficult corresponds to
$M_{\star} \approx 10^{6} M_{\odot}$ -- the
mass-range of interest for the too-big-to-fail problem.
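For reference, the gray band in Figure \ref{fig:feedback} can be understood analytically: an NFW profile, $\rho \propto 1/[x(1+x)^2]$ with $x = r/r_s$, has logarithmic slope $d\ln\rho/d\ln r = -(1 + 2x/(1+x))$. Evaluated at $r = 0.015 R_{\rm{vir}}$ for an illustrative (assumed) range of dwarf-halo concentrations $c = R_{\rm{vir}}/r_s$, this gives $\alpha \approx -1.3$ to $-1.5$, far from the core values $\alpha \approx 0$ produced by efficient feedback:

```python
def nfw_log_slope(r_over_rvir, c):
    """Logarithmic density slope d ln(rho) / d ln(r) of an NFW profile
    at radius r_over_rvir * R_vir, for concentration c = R_vir / r_s."""
    x = r_over_rvir * c
    return -(1.0 + 2.0 * x / (1.0 + x))

# Slope at 1.5% of R_vir for illustrative dwarf-halo concentrations:
slopes = [nfw_log_slope(0.015, c) for c in (10.0, 15.0, 20.0)]
```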
\begin{figure}[tb!]
\includegraphics[width=\textwidth]{Figures/core_raddenpros_02_03_2017.pdf}
\caption{Dark matter density profiles from full hydrodynamic FIRE-2
simulations \citep{fitts2016}. Shown are three different
galaxy halos, each at mass $M_{\rm{vir}} \approx 10^{10} M_{\odot}$. Solid lines show
the hydro runs while the dashed show the same halos run with dark matter only.
The hatched band at the left of each panel marks the region where numerical
relaxation may artificially modify density profiles and the vertical dotted line shows
the half-light radius of the galaxy formed. The stellar mass of the galaxy
formed increases from left to right: $M_{\star} \approx 5\times10^5$,
$4 \times 10^6$, and $10^7 M_{\odot}$, respectively. As $M_{\star}$ increases, so
does the effect of feedback. In the smallest galaxy, feedback has virtually no effect on the density
structure of its host halo.}
\label{fig:alex}
\end{figure}
The effect of feedback on density profile shapes as a function of stellar mass
is further illustrated in Figure~\ref{fig:alex}. Here we show simulation
results from \citet{fitts2016} for three galaxies (from a
cosmological sample of fourteen), all formed in halos with
$M_{\rm{vir}}(z=0) \approx 10^{10} M_{\odot}$ using the FIRE-2 galaxy formation prescriptions
(\citealt{hopkins2014} and in preparation). The dark matter density profiles of the resultant
hydrodynamical runs are shown as solid black lines in each panel, with stellar
mass labeled, increasing from left to right. The dashed lines in each panel
show dark-matter-only versions of the same halos. We see that only in the run that
forms the most stars ($M_{\star} \approx 10^7 M_{\odot}$,
$M_{\star}/M_{\rm{vir}} \approx 10^{-3}$) does the feedback produce a large core.
Being conservative, for systems with $M_{\star}/M_{\rm{vir}} \lesssim 10^{-4}$, feedback is likely to be ineffective in altering dark matter profiles significantly compared to dark-matter-only simulations.
\begin{summary}[SCALE WHERE FEEDBACK BECOMES INEFFECTIVE IN PRODUCING CORES]
\begin{equation*}
\label{eq:10}
M_{\star}/M_{\rm{vir}} \approx 10^{-4} \leftrightarrow M_{\star} \approx 10^6 M_{\odot}
\leftrightarrow M_{\rm{vir}} \approx 10^{10} M_{\odot}
\end{equation*}
\end{summary}
It is important to note that while many independent groups are now obtaining
similar results in cosmological simulations of dwarf galaxies
\citep{governato2012, munshi2013, madau2014a, chan2015, onorbe2015, tollet2016,fitts2016} --
indicating a threshold mass of $M_{\star} \sim 10^6\,M_{\odot}$ or
$M_{\rm{vir}} \sim 10^{10}\,M_{\odot}$ -- this is \textit{not} an ab initio $\Lambda$CDM\
prediction, and it depends on various adopted parameters in galaxy formation
modeling. For example, \citet{sawala2016a} do not obtain cores in their
simulations of dwarf galaxies, yet they still produce systems that match observations
well owing to a combination of feedback effects that lower central densities of
satellites (thereby avoiding the too-big-to-fail\ problem). On the other hand,
the very high resolution, non-cosmological simulations presented in
\cite{read2016a} produce cores in galaxies of \textit{all} stellar masses. We
note that Read et al.'s galaxies have somewhat higher $M_{\star}$ at a given
$M_{\rm{vir}}$ than the cosmological runs cited above; this leads to
additional feedback energy per unit dark matter mass, likely explaining the
differences with cosmological simulations and pointing to a testable prediction
for dwarf galaxies' $M_{\star}/M_{\rm{vir}}$.
\subsubsection{Resolving too-big-to-fail}
\label{subsubsec:expl_tbtf}
The baryon-induced cores described in Section~\ref{subsubsec:feedback} have their
origins in stellar feedback. The existence of such cores for galaxies above the
critical mass scale of $M_{\star} \approx 10^{6}\,M_{\odot}$ would explain why
$\sim$half of the classical Milky Way dwarfs -- those above this mass -- have
low observed densities. However, about half of the MW's classical dwarfs have
$M_{\star} < 10^{6}\,M_{\odot}$, so the scenario described in
Section~\ref{subsubsec:feedback} does not explain these systems' low central
masses. Several other mechanisms exist to reconcile $\Lambda$CDM\ with the internal
structure of low-mass halos, however.
Interactions between satellites and the Milky Way -- tidal stripping, disk
shocking, and ram pressure stripping -- all act as additional forms of feedback
that can reduce the central masses of satellites. Many numerical simulations of
galaxy formation point to the importance of such interactions (which are
generally absent in dark-matter-only simulations\footnote{We note that capturing
these effects is extremely demanding numerically, and it is not clear that any
published cosmological hydrodynamical simulation of a Milky Way-size system
can resolve the mass within $300-500$ pc of satellite galaxies with the
accuracy required to address this issue.}), and these environmental influences
are often invoked in explaining too-big-to-fail\ (e.g., \citealt{zolotov2012,
arraki2014, brooks2014, brook2015, wetzel2016, tomozeiu2016, sawala2016a, dutton2016}). In
many of these papers, environmental effects are limited to 1-2 virial radii from
the host galaxy. Several Local Group galaxies reside at greater distances. While
only a handful of these systems have $M_{\star} < 10^6 M_{\odot}$ (most are $M_{\star} \sim 10^7 M_{\odot}$), these galaxies provide an initial test of the importance of external feedback: if
environmental factors are key in setting the central densities of low-mass
systems, satellites should differ systematically from field galaxies.
\citet{kirby2014} find no such difference; further progress will likely have to
await both the discovery of fainter systems and larger optical telescopes to
provide the spectroscopic samples needed for dynamical analyses. Other forms of feedback may persist
to larger distances. For example, \citet{benitez-llambay2013} note that ``cosmic web
stripping'' (ram pressure from large-scale filaments or pancakes) may be
important in dwarf galaxy evolution.
None of these solutions would explain too-big-to-fail\ in isolated field
dwarfs. However, a number of factors could influence the conversion between
observed HI line widths and the underlying gravitational potential, complicating
the interpretation of systematically low densities (for a discussion of some of
these issues, see \citealt{papastergis2017}). Some examples are: (1) gas
may not have the radial extent necessary to reach the maximum of the dark matter
halo rotation curve; (2) the contribution of non-rotational support (pressure
from turbulent motions) may be non-negligible and not correctly handled; and (3)
determinations of inclination angles of galaxies may be systematically
wrong. \citet{maccio2016} find good agreement between their simulations and the
observed abundance of field dwarf galaxies in large part because the gas
distributions in the simulated dwarfs do not extend to the peak of the dark
matter rotation curve (see also \citealt{kormendy2016} for a similar conclusion reached via different considerations). A more complete understanding of observational samples
and very careful comparisons between observations and simulations are crucial
for quantifying the magnitude of any discrepancies between observations and theory.
\subsubsection{Explaining planes}
\label{subsubsec:expl_planes}
Even prior to the \citet{ibata2013} result on the potential
rotationally-supported plane in M31, multiple groups continued to study the
observed distribution of satellite galaxies, their orbits, and the consistency
of these with $\Lambda$CDM. \citet{libeskind2009} and \citet{lovell2011} argued that
the MW satellite configuration and orbital distribution are consistent with
predictions from $\Lambda$CDM\ simulations, while \citet{metz2009} and
\citet{pawlowski2012} argued that evidence of a serious discrepancy had only
become stronger. A major point of disagreement was whether or not filamentary
accretion within $\Lambda$CDM\ is sufficient to explain satellite orbits. Given that
SDSS only surveyed about 1/3 of the northern sky (centered on the North Galactic
Pole, thereby focusing on the portion of the sky where the polar plane was
claimed to lie), areal coverage was a serious concern when trying to
understand the significance of the polar distribution of satellites. DES has
mitigated this concern somewhat, but it is also surveying near the polar
plane. \citet{pawlowski2016} has recently argued that incomplete sky coverage is
\textit{not} the driving factor in assessing phase-space alignments in the Milky Way;
future surveys with coverage nearer the Galactic plane should definitively test
this assertion.
Following \citet{ibata2013}, the question of whether the M31 configuration (right-hand side of Figure 11) is
expected in $\Lambda$CDM\ also became a topic of substantial interest. The general
consensus of work rooted in $\Lambda$CDM\ is that planes qualitatively similar to (though not as thin as) the M31 plane are
not particularly uncommon in $\Lambda$CDM\ simulations, but that these planes are not
rotationally-supported structures (e.g., \citealt{bahl2014, gillet2015,
buck2016}). Since we
view the M31 plane almost perfectly edge-on, proper motions of dwarf galaxies in
the plane would provide a clean test of its nature. Should this plane turn out
to be rotationally supported, it would be \textit{extremely} difficult to
explain with our current understanding of the $\Lambda$CDM\ model. These proper motions
may be possible with a combination of \textit{Hubble} and \textit{James Webb
Space Telescope} data. \citet{skillman2017} presented preliminary observations
of three plane and three non-plane galaxies, finding no obvious differences
between the two sets of galaxies. Future observations of this sort could help
shed light on the M31 plane and its nature.
\subsubsection{Explaining the radial acceleration relation}
\label{subsubsec:expl_relations}
Almost immediately after \citet{McGaugh2016} published their RAR paper, \citet{keller2017} responded by demonstrating that a similar relation can be obtained
using $\Lambda$CDM\ hydrodynamic simulations of disk galaxies. Importantly, however, the systems
simulated did not include low-mass galaxies, which are dark-matter-dominated throughout. The smallest galaxies are the ones with low acceleration in their centers as well as in their outer parts, and they remain the most puzzling to explain (see \citealt{Milgrom2016} for a discussion related to this issue).
More recently, \citet{Navarro2016} have argued that $\Lambda$CDM\ can naturally produce an acceleration relation similar to that shown in Figure 12. A particularly compelling section of their argument follows directly from abundance-matching (Figure 6): the most massive disk galaxies that exist are not expected to be in halos much larger than $5 \times 10^{12} M_{\odot}$. This sets a maximum acceleration scale ($\sim 10^{-10}$ m s$^{-2}$) above which any observed acceleration {\it must} track the baryonic acceleration. The implication is that any mass discrepancy attributable to dark matter will only begin to appear at accelerations below this scale. Stated slightly differently, any successful model of galaxy formation set within a $\Lambda$CDM\ context \textit{must} produce a relation that begins to peel away from the one-to-one line only below the characteristic acceleration scale observed.
It remains to be seen whether the absolute normalization and shape of the RAR in the low-acceleration regime can be reproduced in $\Lambda$CDM\ simulations that span the full range of galaxy types that are observed to obey the RAR. As stated previously, these same simulations must also simultaneously reproduce the observed diversity of galaxies at fixed $V_{\rm{max}}$ that is seen in the data (e.g., as shown in the right-panel of Figure 12).
\subsection{Solutions requiring modifications to $\Lambda$CDM }
\subsubsection{Modifying linear theory predictions}
\label{subsubsec:linear}
As discussed in Section~\ref{subsec:particle_physics}, the dominant impact of dark matter particle nature on the linear theory power spectrum for CDM models is in the high-$k$ cut-off (see labeled curves in Figure~\ref{fig:pofk}). This cut-off is set by the
free-streaming or collisional damping scale associated with CDM and is of order 1 comoving pc
(corresponding to perturbations of $10^{-6}\,M_{\odot}$) for canonical WIMPs \citep{green2004} or 0.001 comoving pc (corresponding to $10^{-15}\,M_{\odot}$) for a $m\approx 10 \mu{\rm eV}$ QCD axion \citep{nambu1990}.
In these models, the dark matter halo
hierarchy should therefore extend 18 to 27 orders of magnitude below the mass
scale of the Milky Way ($10^{12}\,M_{\odot}$; see Fig.~\ref{fig:pofk}).
A variety of dark matter models result in a truncation of linear perturbations
at much larger masses, however. For example, WDM models have an effective free-streaming length $\lambda_{\rm fs}$ that scales inversely with particle mass \citep{bode2001, viel2005}; in the Planck \citeyearpar{planck2015} cosmology, this relation is approximately
\begin{equation}
\label{eq:lambda_fs}
\lambda_{\rm fs}=33 \left(\frac{m_{\rm WDM}}{1\,{\rm keV}}\right)^{-1.11}\,{\rm kpc}
\end{equation}
and the corresponding free-streaming mass is
\begin{eqnarray}
\label{eq:m_fs}
M_{\rm fs}&=&\frac{4\pi}{3}\,\rho_{\rm m}\,\left(\frac{\lambda_{\rm fs}}{2}\right)^{3}\nonumber\\
&=&2\times 10^7\,\left(\frac{m_{\rm WDM}}{1\,{\rm keV}} \right)^{-3.33}\,M_{\odot} .
\end{eqnarray}
The effects of
power spectrum truncation are not limited to the free-streaming scale, however:
power is substantially suppressed for significantly larger scales (smaller
wavenumbers $k$). A characterization of the scale at which power is
significantly affected is given by the half-mode scale $k_{\rm
hm}=2\,\pi/\lambda_{\rm hm}$, where the transfer function is reduced by 50\%
relative to CDM. The half-mode wavelength $\lambda_{\rm hm}$ is approximately fourteen times larger than the free-streaming length \citep{schneider2012}, meaning that structure below $\sim 5 \times 10^{10}\,M_{\odot}$ is significantly different from CDM in a 1 keV thermal dark matter model:
\begin{equation}
M_{\rm hm}=5.5\times 10^{10} \, \left(\frac{m_{\rm WDM}}{1\,{\rm keV}} \right)^{-3.33}\,M_{\odot}\,.
\end{equation}
Examples of power suppression for several thermal WDM models are shown by the dashed, dotted, and dash-dotted lines in Fig. \ref{fig:pofk}.
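As an illustrative check of these scalings (rounded numbers, using only the relations above), a thermal relic with $m_{\rm WDM}=2\,{\rm keV}$ has
\begin{equation*}
\lambda_{\rm fs}\approx 33\times2^{-1.11}\,{\rm kpc}\approx 15\,{\rm kpc}\,,\quad
M_{\rm fs}\approx 2\times10^{7}\times2^{-3.33}\,M_{\odot}\approx 2\times10^{6}\,M_{\odot}\,,\quad
M_{\rm hm}\approx 5\times10^{9}\,M_{\odot}\,.
\end{equation*}
Note that $M_{\rm hm}/M_{\rm fs}=14^{3}\approx 2700$ independent of the particle mass: halo abundances and densities are affected more than three decades above the free-streaming mass itself.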
The lack of small-scale power in models with warm (or hot) dark matter is a
testable prediction. As the free-streaming length is increased and higher-mass
dark matter substructure is erased, the expected number of dark matter
satellites inside of a Milky Way-mass halo decreases. The observed number of
dark-matter-dominated satellites sets a lower limit on the number of subhalos
within the Milky Way, and therefore, a lower limit on the warm dark matter
particle mass. \citet{polisensky2011} find this constraint is $m>2.3\,{\rm keV}$
(95\% confidence) while \citet{lovell2014} find $m>1.6\,{\rm keV}$; these
differences come from slightly different cosmologies, assumptions about the
mass of the Milky Way's dark matter halo, and modeling of completeness limits
for satellite detections.
It is important to note that particle mass and the free-streaming scale are
\textit{not} uniquely related: the free-streaming scale depends on the particle
production mechanism and is set by the momentum distribution of the dark matter
particles. For example, a resonantly-produced sterile neutrino can have a much
``cooler'' momentum distribution than a particle of the same mass that is
produced by a process in thermal equilibrium \citep{shi1999}. Constraints therefore must be
computed separately for each production mechanism \citep{merle2015, venumadhav2016}. As an example, the effects of Dodelson-Widrow \citeyearpar{dodelson1994} sterile neutrinos, which are produced through non-resonant oscillations from active neutrinos, can be matched to effects of thermal relics via the following relation:
\begin{equation}
\label{eq:viel-bozek}
m(\nu_s)=3.9\,{\rm keV}\,\left(\frac{m_{\rm thermal}}{1\,{\rm keV}}\right)^{1.294}\,\left(\frac{\Omega_{\rm DM}h^2}{0.1225}\right)^{-1/3}
\end{equation}
\citep{abazajian2006,bozek2016}.
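As a worked example of this mapping (rounded, and applicable only to non-resonant Dodelson-Widrow production), a $2\,{\rm keV}$ thermal relic corresponds to
\begin{equation*}
m(\nu_s)\approx 3.9\times 2^{1.294}\,{\rm keV}\approx 9.6\,{\rm keV}
\end{equation*}
for $\Omega_{\rm DM}h^2=0.1225$: the sterile neutrino must be substantially heavier than its thermal counterpart because its momentum distribution is warmer at fixed particle mass.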
The effects of power spectrum suppression are not limited to pure number counts of dark matter halos: since cosmological structure forms hierarchically, the erasure of small perturbations affects the collapse of more massive objects. The primary result of this effect is to delay the assembly of halos of a given mass relative to the case of no power spectrum suppression. Since the central densities of dark matter halos reflect the density of the Universe at the time of their formation, models with reduced small-scale power also result in shallower central gravitational potentials at fixed total mass for halos within 2-3 dex of the free-streaming mass. This effect is highlighted in the lower-middle panel of Figure~\ref{fig:non-cdm-images}. It compares $V_{\rm{max}}$ values for a CDM simulation and a WDM simulation that assumes a thermal-equivalent mass of 2 keV but is otherwise identical to the CDM simulation. Massive halos ($V_{\rm{max}} \gtrsim 50 \,{\rm km \, s}^{-1}$) have identical structure; at lower masses, WDM halos have systematically lower $V_{\rm{max}}$ values than their CDM counterparts. This effect comes from a reduction of $V_{\rm{max}}$ for a given halo in the WDM runs, {\em not} from there being fewer objects. The reduction in central density due to power spectrum suppression for halos near or just below the half-mode mass (but significantly more massive than the free-streaming mass) is how WDM can solve the too-big-to-fail\ problem \citep{anderhalden2013}.
\begin{figure}
\begin{minipage}{0.325\textwidth}
\includegraphics[width=0.99\linewidth]{Figures/cdm_img.png}
\end{minipage}
\begin{minipage}{0.325\textwidth}
\includegraphics[width=0.99\linewidth]{Figures/sidm_img.png}
\end{minipage}
\begin{minipage}{0.325\textwidth}
\includegraphics[width=0.99\linewidth]{Figures/wdm_img.png}
\end{minipage}
\begin{minipage}{0.325\textwidth}
\includegraphics[width=0.99\linewidth]{Figures/cdm_vs_sidm_density.pdf}
\end{minipage}
\begin{minipage}{0.325\textwidth}
\includegraphics[width=0.99\linewidth]{Figures/CDM_WDM_vmax_clipped.pdf}
\end{minipage}
\begin{minipage}{0.325\textwidth}
\includegraphics[width=\linewidth]{Figures/vmax_func.pdf}
\end{minipage}
\caption{Dark matter phenomenology in the halo of the Milky Way. The three images in the upper row show the same Milky-Way-size dark matter halo simulated with
CDM, SIDM ($\sigma/m = 1~{\rm cm^2/g}$), and WDM (a Shi-Fuller resonant model with a thermal equivalent mass of $2~{\rm keV}$). The left panel in the bottom row shows the dark matter density profiles of the same three halos while the bottom-right panel shows the subhalo velocity functions for each as well. The middle panel on the bottom shows that while the host halos have virtually identical density structure in WDM and CDM, individual subhalos identified in both simulations have smaller $V_{\rm{max}}$ values in WDM \citep{bozek2016}. This effect can explain the bulk of the differences seen in the $V_{\rm{max}}$ functions (bottom right panel). Note that SIDM does not reduce the abundance of substructure (unless the power spectrum is truncated) but it does naturally produce large constant-density cores in the dark matter distribution. WDM does not produce large constant-density cores at Milky Way-mass scales but does result in fewer subhalos near the free-streaming mass and reduces $V_{\rm{max}}$ of a given subhalo (through reduced concentration) near the half-mode mass ($M_{\rm halo} \lesssim 10^{10}\,M_{\odot}$ for the plotted 2 keV thermal equivalent model).
}
\label{fig:non-cdm-images}
\end{figure}
\subsubsection{Modifying non-linear predictions}
\label{subsubsec:nonlinear}
The non-linear evolution of CDM is described by the Collisionless Boltzmann
equation. Gravitational interactions are the only ones that are relevant for CDM
particles, and these interactions operate in the mean field limit (that is,
gravitational interactions between individual DM particles are negligible
compared to interactions between a dark matter particle and the large-scale
gravitational potential). The question of how strong the constraints are on
non-gravitational interactions between individual dark matter particles is
therefore crucial for evaluating non-CDM models.
There has been long-standing interest in models that involve dark matter
self-interactions \citep{carlson1992, spergel2000}. In its simplest form,
self-interacting dark matter (SIDM, sometimes called collisional dark matter) is characterized by an energy-exchange interaction cross section $\sigma$. The mean free path $\lambda$ of dark matter particles is then
$\lambda=(n\,\sigma)^{-1}$, where $n$ is the local number density of dark matter
particles. Since the mass of the dark matter particle is not known, it is often
useful to express the mean free path as $(\rho\,\sigma/m)^{-1}$ and to quantify
self-interactions in terms of the cross section per unit particle mass,
$\sigma/m$. If $\lambda(r)/r \ll 1$ at radius $r$ from the center of a dark matter halo, many scattering events occur per local
dynamical time and SIDM acts like a fluid, with conductive transport of heat. In
the opposite regime, $\lambda(r)/r \gg 1$, particles are unlikely to scatter
over a local dynamical time and SIDM is effectively an optically thin (rarefied)
gas, with elastic scattering between dark matter particles. Most work in recent years has
been far from the fluid limit.
As originally envisioned by \citet{spergel2000} in the context of solving the missing satellites and cusp/core problems, the mean free path for
self-interactions is of order $1\,{\rm kpc} \lesssim \lambda \lesssim 1\,{\rm Mpc}$ at
densities characteristic of the Milky Way's dark matter halo (0.4
GeV/${\rm cm^3}$; \citealt{read2014}), leading to self-interaction cross sections of
$400 \gtrsim \sigma/m \gtrsim 0.4\,{\rm cm^2/g}$
($800 \gtrsim \sigma/m \gtrsim 0.8 \,{\rm barn/GeV}$). This scale ($\sim$barn/GeV) is
enormous in particle physics terms -- it is comparable to the cross-section for
neutron-neutron scattering -- yet it remains difficult to exclude observationally. It is important to emphasize that the dark matter particle self-interaction strength can, in principle, be completely decoupled from the dark matter's interaction strength with Standard Model particles and thus standard direct detection constraints offer no absolute model-independent limits on $\sigma/m$ for the dark matter. Astrophysical constraints are therefore essential for understanding dark matter physics.
Though the SIDM cross section estimates put forth
by \citet{spergel2000} were based on analytic arguments, the interaction scale they
proposed to alleviate the cusp/core problem does overlap (at the low end) with more modern results based on fully self-consistent cosmological simulations. Several groups have now run cosmological simulations
with dark matter self-interactions and have found that models with
$\sigma/m \approx 0.5-10 \,{\rm cm^2/g}$ produce dark matter cores in dwarf galaxies with sizes $\sim 0.3-1.5\,
{\rm kpc}$ and central densities $2-0.2 \times 10^{8}\,M_{\odot}\,{\rm kpc}^{-3}=7.4-0.74
\,{\rm GeV\,cm^{-3}}$ that can alleviate the cusp/core and too-big-to-fail\ problems discussed above (e.g., \citealt{vogelsberger2012,peter2013,fry2015,elbert2015}). SIDM does not, however, significantly alleviate the missing satellites problem, as the substructure counts in SIDM simulations are almost identical to those in CDM simulations (\citealt{Rocha2013}; see Figure~\ref{fig:non-cdm-images}).
One important constraint on possible SIDM models comes from galaxy clusters. The high central dark matter densities observed in clusters exclude SIDM models with $\sigma/m \gtrsim 0.5\,{\rm cm^2/g}$, though SIDM with $\sigma/m \simeq 0.1\,{\rm cm^2/g}$ may be preferred over CDM (e.g., \citealt{kaplinghat2016,Elbert2016}). This means that in order for SIDM to alleviate the small-scale problems that arise in standard CDM and also match constraints on the galaxy cluster scale, it needs to have a velocity-dependent cross section $\sigma(v)$ that decreases as the rms speed of dark matter particles involved in the scattering rises from the scale of dwarfs ($v \sim 10 ~{\rm km \, s}^{-1}$) to galaxy clusters ($v \sim 1000 ~{\rm km \, s}^{-1}$). Velocity-dependent scattering cross sections are not uncommon among Standard Model particles.
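The required velocity dependence is mild. Taking the fiducial numbers above (order-of-magnitude estimates, not fits to data), the cross section must fall by a factor of $\sim 10$ (from $\sim 1$ to $\sim 0.1\,{\rm cm^2/g}$) while the characteristic velocity rises by a factor of $\sim 100$:
\begin{equation*}
\sigma \propto v^{-\alpha}\,, \qquad 100^{\alpha}\approx 10 \;\Rightarrow\; \alpha\approx 0.5\,.
\end{equation*}
For comparison, classical Rutherford-like scattering through a light mediator scales much more steeply ($\sigma \propto v^{-4}$ at high velocity), so particle models can easily accommodate, and often naturally predict, the needed behavior.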
Figure~\ref{fig:non-cdm-images} shows the results of three high-resolution cosmological simulations (performed by V. Robles, T. Kelley, and B. Bozek in collaboration with the authors) of the same Milky Way mass halo done with CDM, SIDM ($\sigma/m = 1 \,{\rm cm^2/g}$), and WDM (a 7 keV resonant model, with thermal-equivalent mass of 2 keV). The images show density maps spanning 600 kpc. It is clear that while WDM produces many fewer subhalos than CDM, the SIDM model yields a subhalo distribution that is very similar to CDM, with only slightly less substructure near the halo core, which itself is slightly lower density than the CDM case.
These visual impressions are quantified in the bottom three panels, which show the main halo density profiles (left) and the subhalo $V_{\rm{max}}$ functions for all three simulations (right). The middle panel shows the relationship between the $V_{\rm{max}}$ values of individual halos identified in both CDM and WDM simulations \citep{bozek2016}. The left panel shows clearly that SIDM produces a large, constant-density core in the main halo, while the WDM profile is almost identical to the CDM case. However, for mass scales close to the half-mode suppression mass of the WDM model ($M_{\rm halo} \lesssim 10^{10}\,M_{\odot}$ for this case), the density structure is affected significantly. This effect accounts for most of the difference seen in the right panel: WDM subhalos have $V_{\rm{max}}$ values that are greatly reduced compared to their CDM counterparts, meaning there is a $V_{\rm{max}}$-dependent shift \textit{leftward} at fixed number (i.e., subhalos at this mass scale are not being destroyed, which would result in a reduction in number at fixed $V_{\rm{max}}$).
Finally, we conclude by noting that it is possible to write down SIDM models that have both truncated power spectra and significant self-interactions. Such models produce results that are a hybrid between traditional WDM and SIDM with scale-invariant spectra \citep[e.g.][]{cyr-racine2016, vogelsberger2016}. Specifically, it is possible to modify dark matter in such a way that it produces both fewer subhalos (owing to power spectra effects) and constant density cores (owing to particle self-interactions) and thus solve the substructure problem and core/cusp problem simultaneously without appealing to strong baryonic feedback.
\section{Current Frontiers}
\label{sec:frontiers}
\subsection{Dwarf galaxy discovery space in the Local Group}
\label{subsec:discoveries}
The tremendous progress in identifying and characterizing faint stellar systems
in the Local Group has led to a variety of new questions. For one, these
discoveries have blurred what was previously a clear difference between dwarf
galaxies and star clusters, leading to the question, ``what is a galaxy?''
\citep{willman2012}. DES has identified several new satellite galaxies, many of
which appear to be clustered around the Large Magellanic Cloud (LMC; \citealt{drlica-wagner2015}). The putative association of these satellites with the LMC is intriguing
\citep{jethwa2016,sales2017}, as the nearly self-similar nature of dark matter
substructure implies that the LMC -- which is likely to be hosted by a halo of
$M_{\rm peak} \sim 10^{11}\,M_{\odot}$ \citep{boylan-kolchin2010} -- could itself contain
multiple dark matter satellites above the mass threshold required for galaxy
formation. Satellites of the LMC and even fainter dwarfs will be attractive
targets for ongoing and future observations to test basic predictions of $\Lambda$CDM\
\citep{wheeler2015}.
The 800 pound gorilla in the dwarf discovery landscape is the Large Synoptic
Survey Telescope (LSST). Currently under construction and set to begin
operations in 2022, LSST has the potential to expand dwarf galaxy
discovery space substantially: by the end of the survey, co-added LSST data will
be sensitive to galaxies ten times more distant (at fixed luminosity) than SDSS,
or equivalently, LSST will be able to detect galaxies that are one hundred times
fainter than SDSS at the same distance. This means that LSST should be complete
for galaxies with $L_\star \gtrsim 2 \times 10^3\,L_{\odot}$ within $\sim 1\,{\rm Mpc}$ of the
Galaxy, dramatically increasing the census of very faint galaxies beyond $\sim
100\,{\rm kpc}$ from the Earth.
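The quoted depth gain is simply inverse-square flux scaling (an idealized argument; in practice, dwarf detection limits are set by resolved star counts and surface brightness, but the scaling is approximately the same): for a limiting flux $f_{\rm lim}\propto L/d^{2}$, the maximum detection distance obeys $d_{\rm max}\propto L^{1/2}$, so a factor of 10 in distance at fixed $L$ corresponds to a factor of $10^{2}=100$ in $L$ at fixed distance.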
One of the unique features of LSST data sets will be the ability to explore the
properties of low-mass, \textit{isolated} dark matter halos (i.e., those that
have not interacted with a more massive system such as the Milky Way), thereby
separating out the effects of environment from internal feedback and dark matter
physics. Given the predictions discussed in Sec.~\ref{subsubsec:feedback}, any
new discoveries with $M_{\star} \lesssim 10^{6}\,M_{\odot}$ at $\sim 1\,{\rm Mpc}$ from the Milky
Way and M31 will be attractive targets for discriminating between baryonic
feedback and dark matter physics. At this distance, spectrographs on 10m-class
telescopes will not be sufficient to measure kinematics of resolved stars;
planned 30m-class telescopes will be uniquely suited to this task.
In addition to hosting surviving satellites, galactic halos also act as a
graveyard for satellite galaxies that have been disrupted through tidal
interactions. These disrupted satellites can form long-lived tidal streams; more
generally, the stars from these satellites are part of a galaxy's stellar halo
(which may also encompass stars from globular clusters or other
sources). Efforts are underway to disentangle disrupted satellites from other
stars in the Milky Way halo via chemistry and kinematics (see \citealt{bland-hawthorn2016} for a recent review).
\subsection{Dwarfs beyond the Local Group}
An alternate avenue to probing deeper within the Local Group is to search for
low-mass galaxies further away (but still in the very local Universe). The Dark
Energy Camera (DECam) and Subaru/Hyper Suprime-Cam are being used by several
groups to search for very faint companions in a variety of systems (from NGC
3109, itself a dwarf galaxy at $\sim 1.3\,{\rm Mpc}$, to Centaurus A, a relatively
massive elliptical galaxy at $\approx 3.8\,{\rm Mpc}$; \citealt{sand2015a,
crnojevic2016, carlin2016}). Searches for the \textit{gaseous} components of
galaxies that would otherwise be missed by surveys have also proven fruitful,
with a number of individual discoveries \citep{giovanelli2013, sand2015,
tollerud2016}.
Recently, the rediscovery of ultra-diffuse dwarf galaxies \citep{impey1988,
dalcanton1997, koda2015, van-dokkum2015} has led to significant interest in
these odd systems, which have sizes comparable to the Milky Way but luminosities
comparable to bright dwarf galaxies. Ultra-diffuse dwarfs have been discovered
predominantly in galaxy clusters, but if similar systems -- perhaps with even
lower luminosities -- exist near the Local Group, they could have escaped
detection. Understanding the formation and evolution of ultra-diffuse dwarfs, as
well as their dark matter content and connection to the broader galaxy
population, has the potential to alter our current understanding of faint
stellar systems.
\subsection{Searches for starless dwarfs}
Very low mass dark matter halos \textit{must} be starless, should they
exist. Detecting starless halos would represent a strong confirmation of the
$\Lambda$CDM\ model (and would place stringent constraints on the possible solutions to problems
covered in this review); accordingly, astronomers and physicists are exploring a
variety of possibilities for detecting such halos.
A promising technique for inferring the presence of the predicted population of
low-mass, dark substructure within the Milky Way is through subhalos' effects on
very cold (low velocity dispersion) stellar streams \citep{ibata2002,
carlberg2009a, yoon2011}. Dark matter substructure passing through a stream will
perturb the orbits of the stars, creating gaps and bunches in the
stream. Although many physical phenomena may produce similar effects, and the
very existence of gaps themselves remains a matter of debate, large samples of
cold streams would likely provide the means to test the abundance of low-mass
($M_{\rm vir} \sim 10^{5-6}\,M_{\odot}$) substructure in the Milky Way. We note that the
streams from disrupting satellite galaxies discussed above are not suitable for
this technique, as they are produced with stellar velocity dispersions large
enough that subhalos' effects will go undetected.

Blind surveys for HI gas provide yet another path to searching for starless (or
extremely faint) substructure in the very nearby Universe. Some ultra-compact
high-velocity clouds (UCHVCs) may be gas-bearing ``mini-halos'' that are devoid
of stars (e.g., \citealt{blitz1999}).
Most of the probes we have discussed so far rely on electromagnetic signatures
of dark matter. Gravitational lensing is unique in that it is sensitive to
\textit{mass} alone, potentially providing a different window into low-mass dark
matter halos.
\citet{vegetti2010, vegetti2012} have detected two relatively low-mass dark
matter subhalos within lensed galaxies using this technique. The galaxies are at
cosmological distances, making it difficult to identify any stellar component
associated with the subhalos; Vegetti et al. quote upper limits on the
luminosities of detected subhalos of $\sim 5\times 10^{6-7}\,L_{\odot}$, comparable
to classical dwarfs in the Local Group. The inferred dynamical masses are much
higher, however: within 300 pc, Milky Way satellites all have $M_{300} \approx
10^7\,M_{\odot}$ \citep{strigari2008}, while the detected subhalos have $M_{300} \approx (1-10)\times
10^{8}\,M_{\odot}$. It remains to be seen whether this is related to the lens
modeling or if the substructure in lensing galaxies is fundamentally different
from that in the Local Group.
More recently, ALMA has emerged as a promising tool for detecting dark matter
halo substructure via spatially-resolved spectroscopy of lensed
galaxies. This technique was discussed in \citet{hezaveh2013}, and recently, a
subhalo with a total mass of $\sim 10^{9}\,M_{\odot}$ within $\sim 1\,{\rm kpc}$ was
detected with ALMA \citep{hezaveh2016}. At present, the detected substructure is significantly more massive than the
hosts of dwarf galaxies in the Local Group: the velocity dispersion of the
substructure is $\sigma_{\rm DM} \sim 30\,{\rm km \, s}^{-1}$ as opposed to
$\sigma_{\star} \approx 5-10\,{\rm km \, s}^{-1}$ for Local Group dwarf satellites. This value
of $\sigma_{\rm DM}$ is indicative of a galaxy similar to the Small Magellanic
Cloud, which has $M_{\star} \sim 5\times 10^8\,M_{\odot}$ and
$M_{\rm{vir}} \sim (5-10) \times 10^{10}\,M_{\odot}$. The discovery of
additional lens systems, and the enhanced resolution and sensitivity of ALMA in
its completed configuration, promise to reveal lower-mass substructure, perhaps
down to scales similar to Local Group satellites but at cosmological distances
and in very different host galaxies.
\subsection{Indirect signatures of dark matter}
If dark matter is indeed a standard WIMP, two dark matter particles can
annihilate into Standard Model particles with electromagnetic signatures. This
process is exceedingly rare, on average; as discussed in
Section~\ref{subsec:particle_physics}, the freeze-out of dark matter
annihilations is what sets the relic density of dark matter in the WIMP
paradigm. Nevertheless, the annihilation rate is proportional to the local value
$\rho_{\rm DM}^2$, meaning that the centers of dark matter halos are potential
sites for annihilations. While the brightest source of such annihilations in the
sky should be the Galactic Center, foregrounds make unambiguous detection of
annihilating dark matter toward the Galaxy challenging. Dwarf
spheroidal galaxies have somewhat lower predicted annihilation fluxes owing both
to their greater distances and lower masses, but they have the significant
advantage of being free of foreground contamination. The {\it Fermi}
$\gamma$-ray telescope has surveyed MW dwarfs extensively, with no conclusive
evidence for dark matter annihilation products. The upper limits on combined
dwarf data from {\it Fermi} are already placing moderate tension on the most
basic ``WIMP miracle'' predictions for the annihilation cross section for WIMPs
with $m \lesssim 100\,{\rm GeV}$ \citep{ackermann2015}. Searches for annihilation from starless dark matter subhalos within the Milky Way via the \textit{Fermi} point source catalog have not
yielded any detections to date \citep{calore2016}.
On cosmic scales, dark matter annihilations may contribute to the extragalactic
gamma-ray background \citep{zavala2010}. The expected contributions of dark matter depend
sensitively on the mass spectrum of dark matter halos and subhalos, as well as
the relation between concentration and mass for very low mass systems. These
relations can be estimated by a variety of methods (though generally not
simulated directly, owing to the enormous range of scales that contribute), with
uncertainties being grouped into a ``boost factor'' that describes unresolved
annihilations.
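Because the annihilation rate scales as $\rho_{\rm DM}^2$, a halo's annihilation luminosity is dominated by its dense center and rises steeply with concentration, which is the essence of the boost factor. As an illustrative sketch (ours, not taken from the cited works), the volume integral of $\rho^2$ for an NFW profile, $\rho(r)=\rho_s/[(r/r_s)(1+r/r_s)^2]$, has the closed form $4\pi\rho_s^2 r_s^3/3$; the snippet below verifies this numerically and shows how the luminosity at fixed virial mass grows when the concentration is doubled.

```python
import math

def nfw_rho2_integral_numeric(x_max=100.0, n=100_000):
    """Integrate 4*pi*x^2*rho(x)^2 with rho = 1/(x(1+x)^2), in units of
    rho_s^2 * r_s^3; the integrand simplifies to 4*pi/(1+x)^4."""
    dx = x_max / n
    total = 0.0
    for i in range(n + 1):
        x = i * dx
        w = 0.5 if i in (0, n) else 1.0      # trapezoid-rule weights
        total += w * 4.0 * math.pi / (1.0 + x) ** 4
    return total * dx

def relative_luminosity(c):
    """Annihilation luminosity at fixed M_vir (and hence fixed r_vir), up to
    a constant: L ~ rho_s^2 * r_s^3 ~ c^3 / [ln(1+c) - c/(1+c)]^2."""
    m = math.log(1.0 + c) - c / (1.0 + c)
    return c ** 3 / m ** 2

analytic = 4.0 * math.pi / 3.0               # closed-form result
numeric = nfw_rho2_integral_numeric()
boost = relative_luminosity(20.0) / relative_luminosity(10.0)
print(f"numeric = {numeric:.5f}, analytic = {analytic:.5f}")
print(f"doubling concentration (10 -> 20) multiplies L by {boost:.2f}")
```

Since low-mass halos are predicted to be more concentrated than massive ones, summing unresolved subhalos in this way can substantially boost the total predicted signal.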
If dark matter is a sterile neutrino rather than a WIMP-like particle,
self-annihilation will not be seen. Sterile neutrinos decay radiatively to an
active neutrino and a photon, however; for all of the relevant sterile neutrino
parameter space, this decay is effectively at rest and a clean signature is
therefore a spectral line at half the rest mass energy of the dark matter
particle, $E_{\gamma}=m_{\rm DM}/2$. While there is no \textit{a priori}
expectation for the mass of the sterile neutrino, arguments from
Section~\ref{subsubsec:linear} point to $E_{\gamma} \gtrsim 1$ keV, so searches in
the soft X-ray band are constraining. The most promising recent result in this
field is the detection of a previously unknown X-ray line near 3.51 keV in the
spectra of individual galaxy clusters, stacked galaxy clusters, and the halo of
M31 \citep{bulbul2014, boyarsky2014}. X-ray observations and satellite counts in
M31 rule out an oscillation (\citealt{dodelson1994}) origin for this line if it
indeed originates from sterile neutrino dark matter \citep{horiuchi2014},
leaving heavy scalar decay and possibly resonant conversion as possible
production mechanisms \citep{merle2015}. A definitive test of the origin of the
3.5 keV line was expected from the \textit{Hitomi} satellite, as it had the
requisite energy resolution to see the thermal broadening of the line due to
virial motions (i.e., the line width from a halo with mass $M_{\rm{vir}}$ should be
$\sim V_{\rm{vir}}/c$). With \textit{Hitomi}'s untimely demise, tests of the line's
origin may have to wait for \textit{Athena}.
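The expected line width is simple arithmetic. In the sketch below (the virial velocity is an assumed round number for a massive cluster, not a value from the cited papers), a 3.5 keV line from a halo with $V_{\rm vir}\sim 1000\,{\rm km\,s^{-1}}$ is Doppler-broadened by roughly 12 eV, which a few-eV-resolution calorimeter of the kind \textit{Hitomi} carried could resolve:

```python
# Virial broadening of the putative 3.5 keV sterile-neutrino decay line.
# V_VIR_KM_S = 1000 km/s is an assumed, representative cluster value.
C_KM_S = 299_792.458     # speed of light [km/s]
E_LINE_EV = 3500.0       # observed line energy [eV]
V_VIR_KM_S = 1000.0      # assumed cluster virial velocity [km/s]

m_sterile_keV = 2 * E_LINE_EV / 1000.0        # E_gamma = m_DM / 2  ->  7 keV
delta_E_eV = E_LINE_EV * V_VIR_KM_S / C_KM_S  # line width ~ E * (V_vir / c)

print(f"implied sterile-neutrino mass: {m_sterile_keV:.1f} keV")
print(f"virial line width for a cluster: {delta_E_eV:.1f} eV")
```

A line of astrophysical (atomic) origin would instead track the thermal and turbulent broadening of the emitting gas, which is the discriminating test such a measurement would provide.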
\subsection{The high-redshift Universe}
While studies of low-mass dark matter halos are most easily conducted in the
very nearby Universe owing to the faintness of the galaxies they host, there are
avenues at higher redshifts that may provide alternate windows into the
spectrum of density perturbations. One potentially powerful probe at $z\sim 2-6$
is the \textit{Lyman-$\alpha$ forest} of absorption lines produced by neutral
hydrogen in the intergalactic medium between us and high-redshift quasars (see
\citealt{mcquinn2016} for a recent review and further details). This hydrogen
probes the density field in the quasi-linear regime (i.e., it is in
perturbations that are just starting to collapse) and can constrain the dark
matter power spectrum to wavenumbers as large as $k \sim 10\,h\,{\rm Mpc}^{-1}$. Any
model that reduces the power on this scale relative to $\Lambda$CDM\ expectations will
predict different absorption patterns. In particular, WDM will suppress power on
these scales.
\citet{viel2013} used Lyman-$\alpha$ flux power spectra from 25 quasar
sightlines to constrain the mass of thermal relic WDM particle to $m_{\rm WDM,
th} > 3.3\,{\rm keV}$ at 95\% confidence. This translates into a density
perturbation spectrum that must be very close to $\Lambda$CDM\ down to $M \sim
10^{8}\,M_{\odot}$ \citep{schneider2012} and would rule out the possibility that
free-streaming has direct relevance for the scales of classical dwarfs (and
larger-mass systems). The potential complication with this interpretation is the
relationship between density and temperature in the intergalactic medium, as
pressure or thermal motions can mimic the effects of dark matter
free-streaming.
Counts of galaxies in the high-redshift Universe also trace the spectrum of
collapsed density perturbations at low masses, albeit in a non-trivial
manner. The mere existence of galaxies at high redshift places an upper limit on
the free-streaming length of dark matter (so long as all galaxies form within
dark matter halos) in much the same way that the existence of substructure in
the local Universe does \citep{schultz2014}. \citet{menci2016} have placed limits on the masses of
thermal relic WDM particles of 2.4 keV (2.1 keV) at $68\%$ (95\%) confidence
based on the detection of a single galaxy in the
\textit{Hubble} Frontier Fields at $z \sim 6$ with absolute UV magnitude of
$M_{\rm UV}=-12.5$ \citep{livermore2017}. While this stated constraint is very strong, and the
technique is promising, correctly modeling faint, high-redshift galaxies --
particularly lensed ones -- can be very challenging. Furthermore, the true
redshift of the galaxy can only be localized to $\Delta z \sim 1$; the rapid
evolution of the halo mass function at high redshift further complicates
constraints. With the upcoming \textit{James Webb Space Telescope}, the
high-redshift frontier will be pushed fainter and to higher redshifts, raising
the possibility of placing strong constraints on the free-streaming length of
dark matter through structures in the early Universe.
\begin{textbox}[ht]\section{The challenge of detecting ``empty'' dark matter halos}
The detection of abundant, baryon-free, low-mass dark matter halos would be an
unambiguous validation of the particle dark matter paradigm, would strongly constrain
particle physics models, and would eliminate many of the dark matter candidates
for the origin of the small-scale issues described in this review. Why is this
such a challenging task? \\
The answer lies in the densities of low-mass dark
matter halos compared to other astrophysical objects. From
Equation~\eqref{eq:mvir}, the average density within a halo's virial radius is
200 times the cosmic matter density. For the most abundant low-mass halos in
standard WIMP models -- those just above the free-streaming scale of
$\sim 10^{-6}\,M_{\odot}$ -- the virial radius is approximately $0.1\,{\rm
pc}$.
This is the equivalent of the mass of the Earth spread over a distance that is
\textit{significantly} larger than the Solar System (the mean distance between
Pluto and the Sun is $\sim 2\times 10^{-4}\,{\rm pc}$). Even the lowest-mass,
earliest-collapsing CDM structures are incredibly diffuse compared to typical
astrophysical objects. Although there may be $\mathcal{O} (10^{17})$ Earth-mass
dark matter subhalos within the Milky Way's $\approx 300\,{\rm kpc}$ virial radius,
detecting them is a daunting challenge.
\end{textbox}
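The numbers in the box follow directly from $\bar{\rho}=200\,\Omega_m\rho_{\rm crit}$. In the back-of-the-envelope sketch below (the cosmological parameters are assumed round values, not taken from the text), an Earth-mass halo evaluated at the $z=0$ mean matter density has a virial radius of a few tenths of a parsec; halos collapsing at higher redshift, where the mean density is larger by $(1+z)^3$, are correspondingly smaller, consistent with the $\sim 0.1$ pc quoted above.

```python
import math

# Assumed round cosmological parameters (illustrative only).
H = 0.7                       # dimensionless Hubble parameter
OMEGA_M = 0.3                 # matter density parameter
RHO_CRIT = 2.775e11 * H**2    # critical density [Msun / Mpc^3]

def virial_radius_pc(m_vir_msun, z=0.0):
    """Radius enclosing 200x the mean matter density at redshift z, in pc."""
    rho_m = OMEGA_M * RHO_CRIT * (1.0 + z) ** 3 / 1e18   # [Msun / pc^3]
    volume = m_vir_msun / (200.0 * rho_m)                # [pc^3]
    return (3.0 * volume / (4.0 * math.pi)) ** (1.0 / 3.0)

r_earth_halo = virial_radius_pc(1e-6)   # Earth-mass halo at z = 0
pluto_orbit_pc = 2e-4                   # Sun--Pluto distance quoted in the box
print(f"r_vir(1e-6 Msun, z=0) = {r_earth_halo:.2f} pc")
print(f"ratio to Pluto's orbit: {r_earth_halo / pluto_orbit_pc:.0f}x")
```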
\section{Summary and Outlook}
\label{sec:outlook}
Small-scale structure sits at the nexus of astrophysics, particle physics, and
cosmology. Within the standard $\Lambda$CDM\ model, most properties of small-scale
structure can be modeled with high precision in the limit that baryonic physics
is unimportant. And yet, the level of agreement between theory and observations
remains remarkably hard to assess, in large part because of hard-to-model
effects of baryonic physics on first-principles predictions. Given the stakes --
absent direct detection of dark matter on Earth, indirect evidence from
astrophysics provides the strongest clues to dark matter's nature -- it is
essential to take potential discrepancies seriously and to explore all avenues
for their resolution.
We have discussed three main classes of problems in this review: (1) counts and
(2) densities of low-mass objects, and (3) tight scaling relations between the
dark and luminous components of galaxies. All of these issues may have their
origin in baryonic physics, but they may also point to the need for a
phenomenological theory that goes beyond $\Lambda$CDM. Understanding which of these two
options is correct is pressing for both astrophysics and particle physics.
In our opinion, the search for abundant dark matter halos with inferred virial masses
substantially lower than the expected threshold of galaxy formation
($M_{\rm{vir}} \sim 10^8\,M_{\odot}$) is the most urgent calling in this field
today. The existence of these structures is an unambiguous prediction of all
WIMP-based dark matter models (though it is not unique to WIMP models), and
confirmation of the existence of dark matter halos with $M \sim 10^{6}\,M_{\odot}$ or
less would strongly constrain particle physics of dark matter and effectively
rule out any role of dark matter free-streaming in galaxy formation. Here, too, accurate
predictions for the number of expected dark subhalos
will require an honest accounting of baryon physics -- specifically the destructive effects of central
galaxies themselves \citep[e.g.,][]{garrison-kimmel2017a}. Of nearly
equal importance is characterizing the central dark matter density structure of very faint
($M_{\star} \lesssim 10^{6}\,M_{\odot}$) galaxies, as a prediction of many recent
high-resolution cosmological simulations within the $\Lambda$CDM\ paradigm is that
stellar feedback from galaxies below this threshold mass should not modify their
host dark matter halos' cuspy density profile shape. The detection of ubiquitous
cores in very low-mass galaxies therefore has the potential to falsify the
$\Lambda$CDM\ paradigm.
While some of the tests of the paradigm are clear, their implementation is
difficult. Dark matter substructure is extremely diffuse compared to baryonic
matter, making its detection highly challenging. The smallest galaxies have very few
stars to base accurate dynamical studies upon. Nevertheless, a variety of
independent probes of the small-scale structure of dark matter are now feasible,
and the LSST era will likely provide a watershed for our understanding of the
nature of dark matter and the threshold of galaxy formation. It is not
far-fetched to think that improved astrophysical data, theoretical
understanding, and numerical simulations will provide a definitive test of $\Lambda$CDM\
within the next decade, even without the direct detection of particle dark
matter on Earth.
\section*{DISCLOSURE STATEMENT}
The authors are not aware of any affiliations, memberships, funding, or
financial holdings that might be perceived as affecting the objectivity of this
review.
\section*{ACKNOWLEDGMENTS}
It is a pleasure to thank our collaborators and colleagues for helpful
discussions and for making important contributions to our perspectives on this
topic. We specifically thank Peter Behroozi, Brandon Bozek, Peter Creasey, Sandy Faber, Alex Fitts, Shea Garrison-Kimmel, Andrea Macci\`{o}, Stacy McGaugh, Se-Heon Oh, Manolis Papastergis, Marcel Pawlowski, Victor Robles, Laura Sales, Eduardo Tollet, Mark Vogelsberger, and Hai-Bo Yu for feedback and help in preparing the figures. MBK
acknowledges support from The University of Texas at
Austin, from NSF grant AST-1517226, and from NASA grants HST-AR-12836, HST-AR-13888, HST-AR-13896, and HST-AR-14282 from the Space Telescope Science Institute (STScI), which is operated by AURA, Inc., under NASA contract NAS5-26555.
JSB was supported by NSF grant AST-1518291 and by NASA through HST theory grants (programs AR-13921, AR-13888, and AR-14282) awarded by STScI.
This work used computational
resources granted by the Extreme Science and Engineering Discovery Environment
(XSEDE), which is supported by National Science Foundation grant numbers
OCI-1053575 and ACI-1053575. Resources supporting this work were also provided by the NASA High-End Computing (HEC) Program through the NASA Advanced Supercomputing (NAS) Division at Ames Research Center.
\section{NULL MODELS: reshuffling the sequences}\label{sec:null}
In the main text, in order to check whether the sequences produced by
our ERRW model are correlated, we have compared them to reshuffled
versions of the sequences. More precisely, given a trajectory
$\mathcal{S}$ of visited nodes (concepts), it is possible to define
two null models based on the following two reshuffling procedures
\cite{tria2014dynamics}. The simplest procedure consists of a global
reshuffling of all the elements of $\mathcal{S}$ (indicated as
\textit{glob} in Figure 4 of the main text). This method indeed
destroys any correlations in the sequence, but it also
modifies the rate at which new concepts appear, ultimately
changing the exponent of the Heaps' law. By contrast,
this rate can be preserved by
defining a second version of the null model, based on a local
reshuffling (indicated as \textit{loc} in Figure 4 of the main
text). In this second procedure we reshuffle all the elements
in $\mathcal{S}$ only after their
first appearance, such that a concept cannot be randomly replaced in
the sequence before the actual time it has been discovered.
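A minimal implementation of the two null models might look as follows (this is our sketch of the procedure described above, not code from Ref.~\cite{tria2014dynamics}). For the local reshuffling we keep each first appearance at its original position and fill every other slot with a random repeat of an already-discovered concept, which preserves the discovery times and hence the Heaps' curve:

```python
import random
from collections import Counter

def global_reshuffle(seq, rng=random):
    """Null model 'glob': permute the whole sequence, destroying correlations
    but also altering the rate at which new concepts appear."""
    out = list(seq)
    rng.shuffle(out)
    return out

def local_reshuffle(seq, rng=random):
    """Null model 'loc': keep every first appearance at its original position;
    each remaining slot gets a random repeat of an already-discovered concept,
    so nothing occurs before its original discovery time."""
    first_index = {}
    for i, c in enumerate(seq):
        first_index.setdefault(c, i)
    repeats = Counter(seq)
    for c in repeats:
        repeats[c] -= 1                    # one copy stays at the discovery slot
    pool, out = [], []
    for i, c in enumerate(seq):
        if first_index[c] == i:            # a discovery: keep it in place
            out.append(c)
            pool.extend([c] * repeats[c])  # its repeats become available
        else:                              # a repeat slot: draw from the pool
            j = rng.randrange(len(pool))
            pool[j], pool[-1] = pool[-1], pool[j]
            out.append(pool.pop())
    return out

def heaps_curve(seq):
    """Number of distinct concepts discovered up to each time step."""
    seen, curve = set(), []
    for c in seq:
        seen.add(c)
        curve.append(len(seen))
    return curve

rng = random.Random(0)
s = [rng.choice("abcde") for _ in range(50)]
loc = local_reshuffle(s, rng)
glob = global_reshuffle(s, rng)
print(heaps_curve(loc) == heaps_curve(s))   # local shuffle preserves the Heaps' curve
```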
\section{CORRELATIONS produced by ERRWs on real networks}\label{sec:corr_real}
In the main text, we have shown how the ERRW model on small-world (SW)
networks is
able to produce correlated sequences of concepts. We have also
proposed a study case of the ERRW model on real topologies
extracted from empirical data. In particular, we have explored the
cognitive growth of science by extracting empirical sequences of
relevant concepts in different scientific fields.
For each of the fields considered, we have then tuned the
reinforcement parameter of our model
in order to
produce sequences with the same Heaps' exponents as the empirical ones
(see Figure 3 and Table 1 of the main text). Here, we investigate
correlations in the sequences produced by ERRWs on real networks.
Figure \ref{SI_abs_corr} reports the same quantities we used
to study correlations in sequences produced by ERRWs on synthetic
small-world networks (see Figure 4 of the main text), namely the
frequency distribution $f(\Delta t)$ of inter-event times $\Delta t$
between consecutive occurrences of the same concept
(Figure \ref{SI_abs_corr}(a)), the number $M_l$ of different
subsequences of length $l$ as a function of $l$
(Figure \ref{SI_abs_corr}(b)), and the average normalized entropy of
the sequence (Figure \ref{SI_abs_corr}(c)). In each plot, results are
compared to the two null models defined in Section \ref{sec:null} of
this Supplemental Material, confirming the correlated nature of the
sequences. Furthermore, the comparison with the same statistics
obtained for ERRWs on SW networks (see Figure 4 of the main text)
confirms again that small-world topologies represent a good choice
for modeling the relations among concepts.
\section{CORRELATIONS produced by ERRWs on synthetic networks}
In the main text we have implemented the ERRW model on small-world
networks, which proved to be good topologies for modeling the structure
of relations among concepts (see Section \ref{sec:corr_real} of this
text and
Refs~\cite{gravino2012complex,motter2002topology,benedek2017semantic}). In
addition to the plots in Figure 4 of the main text, where we studied
the correlations produced by an edge-reinforced random walk over a SW
network with fixed link rewiring probability $p$ and a fixed amount of
reinforcement, $\delta w=0.01$, here we show the curves of the average
entropy of the sequence (Figure \ref{SI_en}) and the frequency distribution
$f(\Delta t)$ of inter-event times $\Delta t$ between consecutive
occurrences of the same concept (Figure \ref{SI_int}) for different values of
reinforcement, ranging from $\delta w=0.001$ to $\delta w=1$. Three
cases of SW networks with $N=10^6$ nodes are considered, with link
rewiring probability $p=0.001$ (Fig.~\ref{SI_en}(a-d) and
Fig.~\ref{SI_int}(a-d)), $p=0.01$ (Fig.~\ref{SI_en}(e-h) and
Fig.~\ref{SI_int}(e-h)), and $p=0.1$ (Fig.~\ref{SI_en}(i-l) and
Fig.~\ref{SI_int}(i-l)).
All the curves are compared to the corresponding
null models as defined in Section \ref{sec:null} of this Supplemental
Material.
\bigskip
\begin{figure*}
\includegraphics[width=1\textwidth]{SM_Fig1-eps-converted-to.pdf}
\caption{\label{SI_abs_corr} Correlations among concepts for
the growth of knowledge in science (Astronomy shown)
produced by an ERRW model. The ERRW is tuned to the
empirical data by selecting the reinforcement $\delta w$
that reproduces the Heaps' exponent $\beta$ obtained by
fitting the associated Heaps' curve as a power law (for the
Astronomy case shown $\delta w=330$).~\textbf{(a)} Frequency
distribution of inter-event times $\Delta t$ between
consecutive occurrences of the same concept (node in our
model).~\textbf{(b)} Number $M_{l}$ of different
subsequences of length $l$ as a function of
$l$.~\textbf{(c)} Normalized entropy of the sequence of
visited nodes as a function of $n$, the number of times the
nodes have been visited (see the main text for details). In
each panel, blue circles show average values over 20
different realizations, while triangles and crosses refer to those of (globally and locally)
reshuffled sequences.}
\end{figure*}
\begin{figure*}
\includegraphics[width=1\textwidth]{SM_Fig2_1-eps-converted-to.pdf}
\includegraphics[width=1\textwidth]{SM_Fig2_2-eps-converted-to.pdf}
\includegraphics[width=1\textwidth]{SM_Fig2_3-eps-converted-to.pdf}
\caption{\label{SI_en} Correlations among concepts produced by an edge-reinforced random walk on a SW network for different values of link probability $p$ and reinforcement $\delta w$ (see the main text for details). Normalized entropy of the sequence of visited nodes as a function of $n$, the number of times the nodes have been visited. In each panel, blue circles show average values over 20 different realizations, while triangles and crosses refer to those of (globally and locally) reshuffled sequences.}
\end{figure*}
\begin{figure*}
\includegraphics[width=1\textwidth]{SM_Fig3_1-eps-converted-to.pdf}
\includegraphics[width=1\textwidth]{SM_Fig3_2-eps-converted-to.pdf}
\includegraphics[width=1\textwidth]{SM_Fig3_3-eps-converted-to.pdf}
\caption{\label{SI_int} Correlations among concepts produced
by an edge-reinforced random walk on a SW network for
different values of link probability $p$ and reinforcement
$\delta w$ (see the main text for details). Frequency
distribution of inter-event times $\Delta t$ between
consecutive occurrences of the same concept (node in our
model). In each panel, blue circles show average values over 20 different realizations,
while triangles and crosses refer to those of (globally and locally) reshuffled sequences.}
\end{figure*}
\section{The effect of the average degree on the reinforcement}
To better understand the wide range of values obtained for the
reinforcement parameter from the analysis of the
growth of knowledge in different scientific
fields (see Table 1 of the main text), we looked at the relation
between the exponent $\beta$ extracted from the Heaps' law
and the reinforcement $\delta w$ in networks
with different average node degree.
Figure \ref{SI_avg_k} shows the Heaps' exponent $\beta$ as a function
of the reinforcement $\delta w$. Each curve corresponds to Erd\H{o}s-R\'enyi
random graphs with $N=10^5$ nodes and
average degrees $\langle k \rangle$ ranging from 6 to
80. As expected, the
average degree significantly impacts the reinforcement. In particular,
the higher the value of $\langle k \rangle$, the stronger the reinforcement
$\delta w$ has to be in order to produce the same Heaps' exponent.
This is easily understood by considering the choices available to a
walker arriving at a node with one reinforced link among its
edges. If the node has a
high degree, the probability of selecting that specific link among all
the others is smaller, and the walker will more easily select a
new link, leading to a previously undiscovered node and therefore to
a higher $\beta$. To maintain a given discovery rate in
networks with higher $\langle k \rangle$, higher values of
reinforcement will then need to be considered.
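The mechanism just described can be illustrated with a minimal simulation (our sketch with assumed parameter values, not the production code behind Table 1 of the main text): an edge-reinforced walker on an Erd\H{o}s-R\'enyi graph discovers new nodes more slowly when $\delta w$ is larger, since reinforced links are resampled, and more quickly when $\langle k \rangle$ is larger, since each reinforced link is diluted among more neighbors:

```python
import random

def er_graph(n, avg_k, rng):
    """Erdos-Renyi random graph G(n, p) with p = <k>/(n-1), as adjacency lists."""
    p = avg_k / (n - 1)
    adj = [[] for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < p:
                adj[i].append(j)
                adj[j].append(i)
    return adj

def errw_discoveries(adj, steps, dw, rng):
    """Edge-reinforced random walk: every traversed edge gains weight dw on top
    of its initial weight 1. Returns the number of distinct nodes visited."""
    extra = {}                                             # added weight per edge
    node = next(i for i, nbrs in enumerate(adj) if nbrs)   # non-isolated start node
    seen = {node}
    for _ in range(steps):
        nbrs = adj[node]
        w = [1.0 + extra.get((min(node, v), max(node, v)), 0.0) for v in nbrs]
        nxt = rng.choices(nbrs, weights=w)[0]
        edge = (min(node, nxt), max(node, nxt))
        extra[edge] = extra.get(edge, 0.0) + dw            # reinforce the used edge
        node = nxt
        seen.add(node)
    return len(seen)

graph_rng = random.Random(7)
sparse = er_graph(1500, 10, graph_rng)     # <k> = 10
dense = er_graph(1500, 40, graph_rng)      # <k> = 40
weak = errw_discoveries(sparse, 4000, dw=0.01, rng=random.Random(1))
strong = errw_discoveries(sparse, 4000, dw=10.0, rng=random.Random(1))
strong_dense = errw_discoveries(dense, 4000, dw=10.0, rng=random.Random(1))
print(f"discoveries: weak dw={weak}, strong dw={strong}, strong dw on dense graph={strong_dense}")
```

With strong reinforcement the sparse-graph walker discovers far fewer nodes than the weakly reinforced one, while the same strong reinforcement on the denser graph is diluted and discovery speeds back up, in line with the trend in Figure \ref{SI_avg_k}.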
\begin{figure*}
\includegraphics[width=0.6\textwidth]{SM_Fig4-eps-converted-to.pdf}
\caption{\label{SI_avg_k} ERRW on ER networks with $N=10^5$ and average degree $\langle k \rangle$. Heaps' exponent $\beta$ as a function of reinforcement $\delta w$.
}
\end{figure*}
\section{Comparing ERRWs to the network version of the urn models}
Here we clarify some aspects regarding similarities and differences
between our ERRW model and the urn models proposed by Tria
et al. \cite{tria2014dynamics}, together with their network versions.
\\
In the main text, we state that for the edge-reinforced random walk
(ERRW) model, the conditional probability
$\text{Prob}\left[X_{t+1}=i|i_{0},i_{1},\ldots,i_{t}\right] $ that, at
time step $t+1$, the walker is at node $i$, after a trajectory
$\mathcal{S}=(i_{0},i_{1},i_{2},\ldots,i_t)$, depends on the whole
history of the visited nodes, namely on the frequency but also on the
precise order in which they have been visited. This is different
from what happens in the basic version of the urn model.
Using the notation introduced by Tria et al. \cite{tria2014dynamics},
in the main text, by urn model (UM) we referred to the basic urn model,
i.e. the urn model without semantic. In this case, each ball in the
urn has the same probability of being extracted. Since there might be
multiple balls of the same color, the probability to extract a given
color will depend on the number of balls of that color, and also on
the total number of balls in the urn. The number of balls of a given
color at time $t$ depends on how many times balls of that color have
been extracted up to time $t$ (i.e. on how many times the color has
been reinforced), but it does not depend on the specific order of
appearance in the sequence of extracted balls. The number of balls in
the urn at time $t$ depends on the number of balls initially present
in the urn, plus the ones added by means of the reinforcement mechanism
($\rho$ additional balls for every $t$), plus the balls representing
the ``adjacent possible" ($\nu +1$ additional balls, every time a
color is extracted for the first time).\\
For example, let us consider the UM with parameters $\rho=1$ and $\nu=0$, and let us
indicate as $R$, $B$, $G$ balls respectively of color Red, Blue and Green. By $\mathcal{U}_t$ we indicate the urn at time $t$, while $\mathcal{S}_t$ represents the sequence of extracted colors from the urn at time $t$, which will trigger a reinforcement at $t+1$ of $\rho=1$ new balls of color $X$ every time a ball of color $X$ is extracted, and a
further addition of $\nu+1 =1$ balls of new colors every time a color is extracted for the first
time (novelty).\\
A possible evolution, starting from an initial condition with one red ball in the urn at time $t=1$, is the following:
\noindent
At $t=1$, $\mathcal{U}_1= \{ R \}$. A $R$ ball is drawn: $\mathcal{S}_1= (R)$. $R$ is reinforced and $B$ is added to the urn. \\
At $t=2$, $\mathcal{U}_2= \{ R,R,B \}$. A $B$ ball is drawn: $\mathcal{S}_2= (R,B)$. $B$ is reinforced and $G$ is added;\\
At $t=3$, $\mathcal{U}_3= \{ R,R,B,B,G \}$. A $R$ ball is drawn: $\mathcal{S}_3 = (R, B, R)$. $R$ is reinforced;\\
At $t=4$, $\mathcal{U}_4= \{ R,R,R,B,B,G \}$. A $R$ ball is drawn: $\mathcal{S}_4= (R, B, R, R)$. $R$ is reinforced;\\
At $t=5$, $\mathcal{U}_5= \{ R,R,R,R,B,B,G \}$. A $B$ ball is drawn: $\mathcal{S}_5=(R, B, R, R, B)$. $B$ is
reinforced;\\
At $t=6$, $\mathcal{U}_6= \{ R,R,R,R,B,B,B,G \}$.
\noindent
Now, the probabilities of extracting balls of different colors at time $t=6$ are respectively: $p_{R}=1/2, p_{B}=3/8$ and $p_{G}=1/8$.\\
Notice that another possible evolution, starting from the same initial condition, is the following:
\noindent
At $t=1$, $\hat{\mathcal{U}}_1= \{ R \}$. A $R$ ball is drawn: $\hat{\mathcal{S}}_1= (R)$. $R$ is reinforced and $B$ is added to the urn. \\
At $t=2$, $\hat{\mathcal{U}}_2= \{ R,R,B \}$. A $R$ ball is drawn: $\hat{\mathcal{S}}_2= (R,R)$. $R$ is reinforced;\\
At $t=3$, $\hat{\mathcal{U}}_3= \{ R,R,R,B \}$. A $B$ ball is drawn: $\hat{\mathcal{S}}_3 = (R, R, B)$. $B$ is reinforced and $G$ is added;\\
At $t=4$, $\hat{\mathcal{U}}_4= \{ R,R,R,B,B,G \}$. A $B$ ball is drawn: $\hat{\mathcal{S}}_4= (R, R, B, B)$. $B$ is reinforced;\\
At $t=5$, $\hat{\mathcal{U}}_5= \{ R,R,R,B,B,B,G \}$. A $R$ ball is drawn: $\hat{\mathcal{S}}_5=(R, R, B, B, R)$. $R$ is reinforced;\\
At $t=6$, $\hat{\mathcal{U}}_6= \{ R,R,R,R,B,B,B,G \}$.
\noindent
Although the two sequences generated at time $t=5$ are different, namely $\hat{\mathcal{S}}_5 \neq \mathcal{S}_5 $, they contain the same number of entries for each color, and the two urns at time $t=6$ will be equal, namely $\hat{\mathcal{U}}_6=\mathcal{U}_6$, so that the probabilities of extracting balls of different colors at time $t=6$ will be $p_{R}=1/2, p_{B}=3/8$ and $p_{G}=1/8$ also for the second urn evolution.
This simple example shows that the probability of extracting a color at a given time depends on the number of balls of each color, but not on the precise order of the extracted balls. \\
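The two evolutions above can also be replayed programmatically. The sketch below (our illustration of the basic UM with $\rho=1$, $\nu=0$; the queue of new colors is an assumed labeling) feeds both draw orders from the text into the same update rule and confirms that the resulting urns, and hence the extraction probabilities, coincide at $t=6$:

```python
from collections import Counter
from fractions import Fraction

# Colors added on novelties, in order of first extraction (assumed labels,
# matching the text: B enters after R's first draw, G after B's first draw).
NOVELTY_QUEUE = ["B", "G", "Y"]

def run_urn(draws, rho=1, nu=0):
    """Basic urn model (no semantics): reinforce the drawn color with `rho`
    copies and, on a color's first extraction, add `nu + 1` new-color balls."""
    urn = Counter({"R": 1})
    seen = set()
    queue = list(NOVELTY_QUEUE)
    for color in draws:
        assert urn[color] > 0, "the draw must be possible given the urn"
        urn[color] += rho                 # reinforcement
        if color not in seen:             # novelty: the adjacent possible grows
            seen.add(color)
            for _ in range(nu + 1):
                urn[queue.pop(0)] += 1
    return urn

def probabilities(urn):
    total = sum(urn.values())
    return {c: Fraction(n, total) for c, n in urn.items() if n > 0}

u1 = run_urn(["R", "B", "R", "R", "B"])   # first evolution in the text
u2 = run_urn(["R", "R", "B", "B", "R"])   # second evolution in the text
print(probabilities(u1))                  # p_R = 1/2, p_B = 3/8, p_G = 1/8
print(u1 == u2)                           # same urn despite different orders
```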
Our focus until now has been on the basic UM proposed by Tria et al.
There is however a more refined version of the model proposed in
Ref.~\cite{tria2014dynamics}, called urn model with {\it semantic
triggering}, from now on UMS. In this second version, the authors
propose an urn model that is also able to reproduce the correlations
of empirical sequences. The model is based on the introduction of
{\it semantic labels} attached to the balls (different balls and
colors might share the same label), together with a mechanism
named semantic triggering. The semantic triggering mechanism
is able to produce correlated sequences, but it also requires the
addition of a third parameter, namely $\eta$, to the model. Notice,
instead, that the model we propose in this paper does not need labels
or additional mechanisms. In our model correlations emerge naturally
from the co-evolution of the walker dynamics and the network.
Finally, in the Supplementary Information of
Ref.~\cite{tria2014dynamics} the authors discuss how to map urn models
into a growing network framework. Such a
mapping is exact only in the case when $\eta=1$, which actually
corresponds to the simple UM without semantic and thus without
correlations. Conversely, when $\eta < 1$, i.e. in the case of the
UMS in which the model is able to produce correlated sequences,
the mapping is not one-to-one. The key difference is
that in a network the
connections are always well defined (a link exists or not): the
possibility of going from a node $n_{A}$ to any other node is
restricted to the neighbors of $n_A$, while for the case of the urn
model the possibility of drawing any ball $X$ after the extraction of
a given ball $A$ is always probabilistic. As a consequence, the
network framework of the urn model presented in S.I. of
Ref.~\cite{tria2014dynamics} works exactly only for the very specific
case $\eta=1$ corresponding to a fully connected
network (where a walker can move from each node to every other node,
in the same way as any ball can be drawn from an urn after the
extraction of any other ball).
\end{document}
\section{Introduction}
Intentionally added impurities, i.e., dopants, can completely alter the physical properties of the host material. While in some cases, the additional electrons or holes contributed by the dopants dramatically modify the electronic structure, thereby changing properties like the electrical conductivity\cite{doped_semiconductor} and magnetism\cite{DMS}, in other cases, the small doping-induced perturbation is enough to alter the atomic arrangement (crystal structure) of the host system (e.g. yttrium stabilized zirconia). Hafnia (HfO$_2$), a well known linear dielectric material\cite{hafnia_review_Hong_Zhu, hafnia_high_k_JAP_review, hafnia_high_k_progress_physics_Robertson, hafnia_high_k_rampi, hafnia_high_k_tang}, is likely an example of the latter, as doped thin films of this material have been recently observed to exhibit ferroelectric (FE) behavior through the formation of a non-equilibrium polar phase.\cite{hafnia_ferro_observation, undoped_hafnia} Despite a great number of experimental and theoretical studies,\cite{Huan_ferroelectricity, hafnia_surface_energy_Materlik,hafnia_surface_energy_Rohit,hafnia_mono_to_tetra_surface_energy} the origin of this novel functionality, which has applications in FE-field effect transistors\cite{device_fet} and FE-random access memories,\cite{device_feram} has not been completely understood.
In the most likely mechanism, some ``suitable'' combination of surface energy, mechanical stresses, oxygen vacancies, dopants and the electrical history of the hafnia film is believed to stabilize the polar orthorhombic Pca2$_1$ (P-O1) phase over the equilibrium monoclinic (M) phase of hafnia, thus enabling FE behavior.\cite{hafnia_review, hafnia_dopants_effects, dopants_hafnia_influence_RSC, TEM_PO1_observation_hafnia} The disappearance of ferroelectricity in the absence of a capping electrode and with increasing film thickness suggests the critical role of the mechanical stresses\cite{hafnia_ferro_observation, stress_hafnia_rohit, hzo_strain_Min_Park, zirconia_stress_Kisi, mechanical_stress_Al, mechanical_stress_Y} and surface energies,\cite{hafnia_surface_energy_Materlik, hafnia_surface_energy_Rohit, hafnia_mono_to_tetra_surface_energy} respectively. Similarly, the demonstration of the ``wake-up effect'' (on application of external electric fields) hints at the role that the electrical history of the film plays in stabilizing the FE phase.\cite{hystereses_deform_Tony, electric_field_ovac_movement, electric_field_structure} Dopants, too, have been found to increase the stability ``window'' of the P-O1 phase as reflected in an increase in both the magnitude of the measured polarization and the critical thickness of the hafnia film (below which FE behavior is observed).\cite{undoped_hafnia} Some insight into the role of dopants has emerged from recent empirical studies,\cite{dopants_hafnia_influence_RSC,hafnia_dopants_effects} which have indicated the trend of dopants with higher ionic radii leading to enhanced polarization.
Nevertheless, the true role of the dopants in the formation of the P-O1 phase remains unclear, given that doping is traditionally known to stabilize the high-temperature tetragonal (T) or cubic phases of hafnia.\cite{dopants_stabilize_T_or_C, dopants_stabilize_T, hafnia_hong_design_dopants} Two critical questions, important from both application and theoretical standpoints, raised by these recent studies\cite{hafnia_ferro_observation, mechanical_stress_Al, mechanical_stress_Y, dopants_hafnia_influence_RSC,hafnia_dopants_muller} on FE doped hafnia are: (1) which dopants favor the polar phase the most, and at what concentration? and (2) do dopants play a critical role in stabilizing this polar phase in hafnia films, and if so, which attributes of a dopant (chemical or physical) are relevant?
\begin{figure}
\centering
\includegraphics[scale=0.57]{scheme_v6.pdf}
\caption{The overall scheme of this work illustrating the three-stage selection process and the modeling conditions imposed in each stage.}
\label{Fig:scheme}
\end{figure}
In this contribution, we address these questions using high-throughput first-principles density functional theory (DFT) computations. To address the first question, we follow a three-stage down-selection strategy, illustrated in Fig. \ref{Fig:scheme}, wherein we examine the influence of nearly 40 dopants on the energetics of the relevant low-energy phases of hafnia, including the M ($P2_1/c$), T ($P4_2/nmc$), P-O1 ($Pca2_1$), another polar P-O2 ($Pmn2_1$), and high-pressure OA ($Pbca$) phases. Based on these energy changes, the initial set of nearly 40 dopants in Stage 1 is down-selected to 14 dopants in Stage 2, and finally, to the 6 most promising dopants, i.e., Ca, Sr, Ba, Y, La and Gd, in Stage 3. In agreement with empirical observations,\cite{hafnia_dopants_effects, dopants_hafnia_influence_RSC} our study reveals that these 6 dopants favor the stabilization of the P-O1 phase of hafnia. To answer the second question, the computational data obtained in Stage 3 were analyzed. Clear trends emerged: dopants with larger ionic radii and lower electronegativity stabilize the P-O1 phase the most, consistent with the experimental observations.\cite{dopants_hafnia_influence_RSC} The root cause of these trends is traced to the formation of an additional bond between the dopant and the 2$^{\rm nd}$ nearest-neighbor oxygen atom. Based on these findings, we search the entire Periodic Table, predicting the lanthanides, the lower half of the alkaline earth metals (i.e., Ca, Sr, Ba) and Y as the most favorable dopants to promote ferroelectricity in hafnia.
\section{Theoretical Methods}
Our work is based on electronic structure DFT calculations, performed using the Vienna {\it Ab Initio} Simulation Package\cite{vasp} (VASP) employing the Perdew-Burke-Ernzerhof exchange-correlation functional\cite{PBE} and the projector-augmented wave methodology.\cite{PAW} A 3$\times$3$\times$3 Monkhorst-Pack mesh\cite{monkhorst} for k-point sampling was adopted and a basis set of plane waves with kinetic energies up to 500 eV was used to represent the wave functions. For each doped phase, spin polarized computations were performed and all atoms were allowed to relax until atomic forces were smaller than 10$^{-2}$ eV/\AA.
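For concreteness, the settings stated above can be collected into an input sketch. The following is a hypothetical INCAR fragment, not the authors' actual input; the relaxation-related tags (IBRION, ISIF) are assumptions chosen to be consistent with the stated procedure (fixed cell volume in Stages 1 and 2, relaxed volume in Stage 3):

```
# Hypothetical VASP INCAR fragment (PBE + PAW are selected via POTCAR files).
ENCUT  = 500        # plane-wave kinetic-energy cutoff (eV), as stated
ISPIN  = 2          # spin-polarized calculations
EDIFFG = -0.01      # relax until all forces < 10^-2 eV/Angstrom
IBRION = 2          # conjugate-gradient ionic relaxation (an assumption)
ISIF   = 2          # ions relaxed, cell fixed (Stages 1-2); ISIF = 3 in Stage 3

# KPOINTS file: 3x3x3 Monkhorst-Pack mesh, as stated in the text
```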
To determine the energy ordering of phases in doped hafnia, we define the relative energy of a phase $\alpha$ with respect to the equilibrium M phase in the presence of a dopant D as
\begin{equation}\label{eq:rel_energy}
\Delta E^{\rm \alpha-M}_{\rm D} = E^\alpha_{\rm D} - E^{\rm M}_{\rm D},
\end{equation}
where $E^\alpha_{\rm D}$ and $E^{\rm M}_{\rm D}$ are the DFT computed energies of the doped $\alpha$ and M phases, respectively. To highlight the direct role of a dopant in stabilizing the phase $\alpha$, we subtract from Eq. \ref{eq:rel_energy} a term corresponding to the energy of dopant-free pure phases:
\begin{equation}\label{eq:rel_rel_energy}
\Delta E^{\alpha-{\rm M}}_{\rm D-Pure} = \left(E^\alpha_{\rm D} - E^{\rm M}_{\rm D}\right) - \left(E^\alpha_{\rm Pure} - E^{\rm M}_{\rm Pure}\right),
\end{equation}
where $E^\alpha_{\rm Pure}$ and $E^{\rm M}_{\rm Pure}$ are the DFT computed energies of pure $\alpha$ and M phases, respectively. $\Delta E^{\rm \alpha-M}_{\rm D-Pure}$ represents the change in the relative energy of the phase $\alpha$ with respect to the M phase solely due to the introduction of the dopant D. Thus, a dopant with negative $\Delta E^{\rm \alpha-M}_{\rm D-Pure}$ favors (or stabilizes) the phase $\alpha$ over the M phase more than in the dopant-free pure case. Further, if $\alpha$ happens to be one of the polar phases, one can expect such dopants to enhance FE behavior in hafnia.
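The two quantities above are straightforward to evaluate from DFT total energies. The following is a minimal sketch (not the authors' code) with hypothetical placeholder energies; it illustrates how a dopant can favor a phase (negative $\Delta E^{\rm \alpha-M}_{\rm D-Pure}$) even while the M phase remains the ground state (positive $\Delta E^{\rm \alpha-M}_{\rm D}$):

```python
def relative_energy(E_alpha_doped, E_M_doped):
    """Eq. (1): energy of the doped phase alpha relative to the doped M phase."""
    return E_alpha_doped - E_M_doped

def dopant_induced_shift(E_alpha_doped, E_M_doped, E_alpha_pure, E_M_pure):
    """Eq. (2): change in the alpha-vs-M relative energy caused solely by the
    dopant; a negative value means the dopant stabilizes phase alpha."""
    return (E_alpha_doped - E_M_doped) - (E_alpha_pure - E_M_pure)

# Hypothetical supercell energies in eV (made-up numbers for illustration):
E_PO1_pure, E_M_pure = -1043.2, -1045.0
E_PO1_doped, E_M_doped = -1040.1, -1041.2

dE = relative_energy(E_PO1_doped, E_M_doped)        # +1.1 eV: M still the ground state
shift = dopant_induced_shift(E_PO1_doped, E_M_doped,
                             E_PO1_pure, E_M_pure)  # -0.7 eV: dopant favors P-O1
```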
Five different phases of hafnia were considered, including M, T, P-O1, P-O2 and OA, as they were either empirically observed or theoretically predicted to have low energy under conditions for which hafnia films display FE behavior.\cite{Huan_ferroelectricity,Hafnia_ph_dig} Equivalent 32 formula-unit (96 atom) supercells, starting from the structures documented in our previous work,\cite{stress_hafnia_rohit} were constructed to carry out the energy calculations. For each phase, three levels of substitutional doping concentration, namely, 3.125\%, 6.25\% and 12.5\% were studied by replacing 1, 2 and 4 Hf atom(s), respectively, by the dopant atom(s).
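The supercell bookkeeping described above can be sketched as follows. This is an illustrative script, assuming a 32-formula-unit HfO$_2$ cell and simple ionic charge counting (each O vacancy compensates a charge of $+2$); it reproduces the stated doping concentrations and shows why trivalent dopants require a fractional vacancy at one dopant per cell:

```python
from fractions import Fraction

N_FU = 32  # 32 HfO2 formula units = 96 atoms per supercell

def doping_concentration(n_dopants):
    """Cation-site doping fraction when n_dopants Hf atoms are replaced."""
    return n_dopants / N_FU

def oxygen_vacancies(n_dopants, dopant_valence, hf_valence=4):
    """Number of O vacancies (charge +2 each) needed to neutralize
    substitutional dopants of the given valence; may be fractional."""
    charge_deficit = n_dopants * (hf_valence - dopant_valence)
    return Fraction(charge_deficit, 2)

concentrations = [100 * doping_concentration(n) for n in (1, 2, 4)]
# -> [3.125, 6.25, 12.5] percent, as in the three doping levels studied.

# Divalent dopant (e.g. Sr): one vacancy per dopant atom.
# Trivalent dopant (e.g. Y): half a vacancy per dopant, which is why the
# 3.125% trivalent case is awkward to model and was deferred to Stage 3.
```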
To overcome the high computational cost associated with accurately modeling the effect of $\sim$40 dopants on the energetics of the five phases of hafnia, we carry out this work in three stages, as illustrated in Fig. \ref{Fig:scheme}. Moving down the stages, a balance between computational accuracy and cost is maintained by increasing the modeling sophistication on the one hand and retaining only the promising dopants, with substantially negative $\Delta E^{\rm PO1-M}_{\rm D-Pure}$ and $\Delta E^{\rm PO2-M}_{\rm D-Pure}$, on the other. We restrict the initial set of dopants to elements from rows 3, 4 and 5 of the Periodic Table (see Fig. \ref{Fig:scheme}), with the exception of Gd, which is included since empirical observations of ferroelectricity have been made in this case. In Stage 1, we model these dopants in the aforementioned five phases at 3.125\% doping concentration, under the assumptions of a fixed simulation-cell volume and the absence of oxygen vacancies (O$_{\rm vac}$). The relatively large size of the supercells considered and the small perturbations expected at such a low doping concentration form the rationale underlying these assumptions. Promising dopants from Stage 1 that energetically favor the polar phases were selected for more in-depth study in Stage 2. Their influence on the phase stability was again studied at a doping concentration of 3.125\%, but now in the presence of an appropriate concentration of O$_{\rm vac}$ (determined through the study of the electronic structures of doped hafnia phases, as discussed in the Supplementary Information), expected to be present in real systems owing to the different oxidation states of the dopant and the hafnium ion. Finally, in Stage 3, promising dopants selected from Stage 2 were studied at multiple doping concentrations of 3.125\%, 6.25\% and 12.5\%. The volume of the supercell was relaxed and an appropriate number of O$_{\rm vac}$ were introduced to achieve charge neutrality.
The doped hafnia structures obtained in Stage 3 were later examined to draw key chemical trends.
\section{Results and Discussion}
\subsection{Stage 1}
\begin{figure}[h]
\centering
\includegraphics[scale=0.725]{dopant_energy_fig1_v4.pdf}
\caption{Phase stability of hafnia in the presence of different dopants and under the constraints of Stage 1, as computed using (a) Eq. \ref{eq:rel_energy} and (b) Eq. \ref{eq:rel_rel_energy}. In panel (a), solid symbols represent data from this work while open symbols signify results from previous studies.\cite{dopants_stabilize_T_or_C,Huan_ferroelectricity} The lines are guides to the eye.}
\label{Fig:stage1}
\end{figure}
As stated above and illustrated in Fig. \ref{Fig:scheme}, the influence of $\sim$40 dopants on the phase stability of hafnia under the assumptions of fixed volume and the absence of O$_{\rm vac}$ was studied in Stage 1. The energies of the different phases of hafnia at 3.125\% doping concentration are presented in Fig. \ref{Fig:stage1} and are found to be consistent with the limited past studies available (shown as open symbols).\cite{Huan_ferroelectricity,dopants_stabilize_T_or_C} In the case of pure hafnia, the small energy difference between the equilibrium M and the P-O1 phases should be noted, signaling that even minor perturbations, perhaps introduced by extrinsic factors such as dopants, stresses, etc., may be sufficient to stabilize the polar P-O1 phase as the ground state. Further, the P-O1 and OA phases are extremely close in energy, in agreement with previous studies.\cite{hafnia_LDA_vs_GGA, Huan_ferroelectricity, hafnia_surface_energy_Materlik} This energetic proximity is a manifestation of the remarkable structural similarity between the two phases.
As captured in Fig. \ref{Fig:stage1}(a), the M phase remains the equilibrium phase for all the dopants considered at 3.125\% doping concentration, although the energy differences among the hafnia phases change significantly. The relative energy of the T phase varies substantially more with the choice of dopant (e.g., Ge, Au) than that of the P-O1, P-O2 and OA phases, possibly owing to the different coordination environment experienced by a dopant cation in the T phase (CN = 8) versus the other phases (CN = 7) considered here. Interestingly, the T phase of Pd- and Pt-doped hafnia collapses into the P-O1 phase (see the Supplementary Information for details) upon atomic relaxation (resulting in the absence of these data points in Fig. \ref{Fig:stage1}). An important implication of this finding is that even small perturbations can result in T to P-O1 phase transformations, which can be a potential pathway for the formation of the P-O1 phase in hafnia. We will continue to encounter this collapse of the T phase into the P-O1 phase in later stages of this work.
Owing to the large energy scale and the small doping level, the influence of dopants on the phase stability appears feeble in Fig. \ref{Fig:stage1}(a). This picture, however, changes substantially when we re-plot the data using Eq. \ref{eq:rel_rel_energy}, as shown in Fig. \ref{Fig:stage1}(b). We caution here that the quantity $\Delta E_{\rm D-Pure}^{\rm \alpha-M}$ plotted in Fig. \ref{Fig:stage1}(b) only helps us identify the phase(s) a dopant prefers over the M phase, and not the lowest-energy ground state of hafnia, which is instead determined by the quantity $\Delta E_{\rm D}^{\rm \alpha - M}$. Two key trends can be observed in Fig. \ref{Fig:stage1}(b): (1) row 4 and row 5 dopants follow very similar phase stability trends when moving from left to right across the Periodic Table, with the row 5 dopants inducing larger energy variations, and (2) dopants from the alkaline earth metals and groups 3, 10, 11 and 12 of the Periodic Table tend to favor the P-O1 and/or the P-O2 phases in hafnia, leading to the following shortlisted candidates further studied in Stage 2: Ca, Sr, Ba, Y, La, Cu, Zn, Pd, Ag, Cd, Pt, Au, Hg and Gd. Interestingly, several of these dopants, such as Y, La, Sr and Ba, have been empirically\cite{dopants_hafnia_influence_RSC,hafnia_dopants_effects} shown to promote substantial FE behavior in hafnia films, already highlighting an agreement between our initial results and experiments. Another vital chemical insight, which will be strengthened in later sections, is that dopants with low electronegativity tend to stabilize the polar phases in hafnia.
\subsection{Stage 2}
In Stage 2, we increase the modeling sophistication by introducing appropriate charge-neutralizing O$_{\rm vac}$ for the 3.125\% doped hafnia systems. Two issues, concerning the number of O$_{\rm vac}$ and their placement site in the 32 formula-unit supercell, should be addressed. Since all the Stage 2 dopants except Y, La, Au and Gd are divalent, only one O$_{\rm vac}$ per dopant cation needs to be added (as confirmed by the electronic structure studies discussed in the Supplementary Information). For Y, La, Au and Gd, however, a partial O$_{\rm vac}$ would be required at the 3.125\% doping level. To avoid practical computational issues, these trivalent dopants were transferred directly to Stage 3. The remaining 10 divalent dopants were studied in Stage 2 with a single O$_{\rm vac}$.
With respect to the placement of this single O$_{\rm vac}$, we argue that it should occupy a nearest-neighbor site of the dopant cation, owing to the electrostatic attraction expected between the negatively charged dopant and the positively charged O$_{\rm vac}$ defects. Restricting the configurational space to cases in which the O$_{\rm vac}$ is closest to the dopant, and taking into account the symmetry of the different hafnia phases, we are left with 7 distinct choices for the M, P-O1 and OA phases, 5 for the P-O2 and 2 for the T phase. These choices can be further classified into two categories based on the number of Hf-O bonds that need to be broken to introduce an O$_{\rm vac}$: one category involves breaking 3 bonds, the other 4. For the representative cases of Pd- and Pt-doped hafnia, energies for all possible configurations (i.e., 7 for the M, P-O1 and OA, 5 for the P-O2 and 2 for the T) were computed, and it was found that O$_{\rm vac}$ sites involving 3 broken Hf-O bonds are always energetically preferred, with the exception of the T phase, which has only one type of O$_{\rm vac}$ site, involving 4 broken Hf-O bonds. Thus, we further reduce the configurational space to cases involving the breakage of only 3 Hf-O bonds in the M, P-O1, OA and P-O2 phases. This leaves us with 3 distinct choices for the M, P-O1 and OA phases, and 2 choices each for the P-O2 and T phases. For each phase, only the lowest-energy configuration was used to obtain the phase stability trends presented in Fig. \ref{Fig:stage2}. To summarize, in Stage 2 we computed the phase stability of hafnia at a dopant concentration of 3.125\% for the 10 shortlisted divalent elements, with the restrictions that the O$_{\rm vac}$ occupies a nearest-neighbor site of the dopant and, in the M, P-O1, OA and P-O2 phases, an O site with 3 Hf-O bonds. The volume of the supercell was again kept fixed.
\begin{figure}
\centering
\includegraphics[scale=0.725]{vac_energy_fig2_v5.pdf}
\caption{The relative energies of 3.125\% doped hafnia for the limited set of 10 divalent dopants of Stage 2 in the presence of a charge-neutralizing O$_{\rm vac}$. For ease of comparison, the results of Stage 2 (open symbols) are overlaid on those of Stage 1 (lighter solid symbols).}
\label{Fig:stage2}
\end{figure}
The findings of Stage 2 are overlaid on the results of Stage 1 for the selected set of 10 divalent dopants in Fig. \ref{Fig:stage2}. The transition metals that favored the polar phase(s) in Stage 1 do not substantially stabilize the polar phase(s) once O$_{\rm vac}$ are introduced, as $\Delta E_{\rm D-Pure}^{\rm \alpha-M}$ of both polar phases can be seen to shift upward after the O$_{\rm vac}$ introduction (e.g., compare the open and solid symbols for Cu and Zn in Fig. \ref{Fig:stage2}). On the other hand, the T phase is consistently favored by the addition of O$_{\rm vac}$, owing to the lowering of the coordination number of the vacancy-neighboring Hf atoms from 8 to 7, which is energetically preferred and is also the reason why the M phase is the equilibrium phase of hafnia. This behavior is consistent with a past study.\cite{hafnia_vacancy_stabilization} The Cu- and Ag-doped T phases were, however, found to collapse into the polar P-O1 phase. Further investigations are necessary to identify what triggers this collapse of the T phase into the P-O1 phase. Nevertheless, the alkaline earth metals Ca, Sr and Ba continue to stabilize the polar phases in Stage 2, leaving us with our next set of promising candidates for Stage 3, i.e., Ca, Sr, Ba, Y, La, Au and Gd.
\subsection{Stage 3}
From the initial set of $\sim$40 dopants, we are now left with the 7 most promising candidates in Stage 3 that favor the polar phase(s) in hafnia. Owing to the smaller number of dopants involved, we now lift the modeling constraints imposed in the previous stages and investigate the influence of these dopants at varying concentrations. For the divalent dopants, we studied three doping concentrations: 3.125\%, 6.25\% and 12.5\%. For the trivalent dopants, we studied only the 6.25\% and 12.5\% doping concentrations, owing to the difficulty associated with modeling a partial O$_{\rm vac}$ at the 3.125\% doping level, as mentioned earlier. The volume of the supercell was relaxed and an appropriate number of O$_{\rm vac}$ were introduced to achieve charge neutrality. Unfortunately, in the case of Au, the phases did not retain their structural identity (i.e., the relaxed structures from our computations were so distorted that they could not be unambiguously associated with the starting structures) at the higher doping concentrations of 6.25\% and 12.5\%, and we therefore exclude this case from our results.
\begin{figure}[h]
\centering
\includegraphics[scale=0.78]{vrelax_energy2_v10.pdf}
\caption{Phase stability of hafnia in the presence of (a) Ca, (b) Sr, (c) Ba, (d) Y, (e) La, and (f) Gd, as a function of doping concentration. While the phases mostly retained their structural identity upon doping (solid symbols), in some limited cases, especially at higher doping concentrations, the doped phases could not be clearly identified upon relaxation. Such cases are represented by open symbols based on their starting phase.}
\label{Fig:stage3}
\end{figure}
The results of Stage 3 are presented in Fig. \ref{Fig:stage3}. We first note that while in many cases the doped hafnia phases retained their structural identity upon relaxation, there were a few cases, especially at 12.5\% doping concentration, where either the doped phases could not be clearly identified or the starting phase transformed into another phase upon relaxation. We represent these unusual cases by open symbols based on their starting structure. The following key observations can be made from Fig. \ref{Fig:stage3}: (1) all of the Stage 3 dopants stabilize the P-O1 and/or the P-O2 phases with increasing doping concentration; (2) while at the 3.125\% doping level a substantial energy difference exists between the polar phases and the equilibrium M phase, at the 6.25\% doping level the P-O1 phase becomes extremely close in energy to the M phase; (3) at the high doping concentration of 12.5\%, no conclusive statements about the ground state of hafnia can be made, as the hafnia phases lose their structural identity at such a high doping level; (4) in some doped cases, the T and even the P-O2 phase collapsed into the P-O1 phase upon relaxation, suggesting that these dopants prefer to form the relatively low-energy polar P-O1 phase; and (5) between the two polar phases considered, i.e., P-O1 and P-O2, the former is clearly favored over the latter, consistent with the experimental observations of this phase.\cite{TEM_PO1_observation_hafnia}
One important limitation/assumption of the above study, pertaining to the dopant and O$_{\rm vac}$ arrangement, should be mentioned here. Higher doping concentrations (6.25\% and 12.5\%) lead to a rather challenging modeling problem: the expansion of the configurational space. For instance, in the case of 6.25\% Sr-doped hafnia, the two Sr atoms may lie on any two sites of the cation sub-lattice and the associated two O$_{\rm vac}$ on any two sites of the anion sub-lattice. Even after accounting for the symmetry of the system, a huge number of such configurations are possible, and it is not at all trivial to determine which among them is energetically preferred. Further, to conclusively determine the phase stability of doped hafnia, one would have to ascertain the lowest-energy configuration of each phase. Although methods such as cluster expansion\cite{cluster_expansion} can be used to surmount this problem of a large configurational space, these approaches are extremely computationally demanding. Nevertheless, we obtained an estimate of the scale of energy variations expected in our doped hafnia systems owing to the different possible configurations by computing the energies of 10 diverse configurations of the 6.25\% Sr-doped P-O1 phase at various dopant-dopant distances. A standard deviation of just $\sim$8 meV/f.u. in the energies of these configurations was found, suggesting that the scale of energy variations owing to different possible dopant configurations is rather small compared to the relative energies among the different phases of hafnia. Thus, we expect the trends observed in Fig. \ref{Fig:stage3} and the conclusions drawn in the preceding discussion to hold even when multiple possible configurations of the doped hafnia phases are considered.
The results in Fig. \ref{Fig:stage3} clearly suggest that certain dopants, especially Ca, Sr, Ba, La, Y and Gd, can substantially lower the relative energy between the P-O1 and the equilibrium M phases, although no situation was encountered in which a polar phase had the lowest energy. This indicates that dopants \textit{alone} cannot stabilize a polar phase as the ground state in hafnia; they can only \textit{assist} other factors prevalent in hafnia films, such as the surface energy, mechanical stresses and electric fields, in forming the polar phase. The disappearance of FE behavior in the absence of the aforementioned crucial factors\cite{hafnia_review} and the empirical observation of FE behavior in pure hafnia films\cite{undoped_hafnia} further corroborate this conclusion.
\subsection{Learning from the DFT data}
\begin{figure}
\centering
\includegraphics[scale=0.65]{energy_vs_radius_v8.pdf}
\caption{Chemical trends in the relative energies of the M, T and P-O1 phases of hafnia with (a) ionic radius and electronegativity of a divalent (solid symbols) or trivalent (open symbols) dopant at 6.25\% doping concentration. Some cases of the T phase collapsed into the P-O1 phase upon relaxation and are omitted here for clarity. (b) The distance between the dopant and the closest 2$^{\rm nd}$ nearest-neighbor oxygen in the 6.25\% doped P-O1 and M phases.}
\label{Fig:chemical_trends}
\end{figure}
In order to reveal the dominant attributes of a dopant that help stabilize a polar phase in hafnia, we plot in Fig. \ref{Fig:chemical_trends}(a) the relative energies of the most relevant M, T and P-O1 phases against the ionic radius\cite{ionic_radii_Shannon} and the electronegativity\cite{electronegativity_pauling} of the Stage 3 dopants at the 6.25\% doping level. With the dopants grouped on the basis of their valency, a clear chemical trend of dopants with \textit{larger ionic radius and lower electronegativity favoring the polar P-O1 phase} in hafnia is evident from the figure. The trend of increasing stability of the polar $Pca2_1$ phase with increasing dopant radius matches very well with the experimental observations\cite{dopants_hafnia_influence_RSC} of higher polarization in hafnia systems with larger dopants. We further note that the trivalent dopants considered here, owing to their ionic radii being comparable to that of Hf, stabilize the P-O1 phase at lower strains than the divalent dopants. Thus, trivalent dopants seem to be a superior choice to promote ferroelectricity in hafnia.
To understand the root cause of the aforementioned chemical trends, the relaxed structures of the doped hafnia phases were carefully examined. In Fig. \ref{Fig:chemical_trends}(b), we plot the distance between the dopant and the closest 2$^{\rm nd}$ nearest-neighbor oxygen for the M and P-O1 phases as a function of the ionic radii of the Stage 3 dopants. Although this dopant-oxygen distance remains largely unaffected upon doping in the M phase (with the exception of Ba doping), it is substantially reduced in the P-O1 phase, suggesting the formation of an additional dopant-oxygen bond. Further, as is evident from the figure, this additional bond becomes consistently shorter for dopants with larger ionic radii and lower electronegativity (not shown here). Cumulatively, these observations strongly suggest that the formation of an energy-lowering bond between the dopant cation and the 2$^{\rm nd}$ nearest-neighbor oxygen in the P-O1 phase is the root cause of its stabilization with respect to the M phase upon doping.
Based on the aforementioned findings and the observed chemical trends, we search the entire Periodic Table for dopants with low electronegativity and large ionic radius that would potentially favor the polar $Pca2_1$ phase in hafnia. Excluding the elements studied in this work and those that are radioactive, the lanthanide series elements emerge as good dopant candidates matching these criteria. Thus, combining all the findings from our computations, we predict that \textit{the lanthanide series elements, the lower half of the alkaline earth metals (Ca, Sr and Ba) and Y are the most favorable dopants to promote ferroelectricity in hafnia}.
\subsection{Connection with experiments}
\begin{figure}[h]
\centering
\includegraphics[scale=0.6]{exp_fig6_v1.pdf}
\caption{Trends in the measured remnant polarization of doped hafnia films with (a) dopant ionic radii and (b) doping concentration. The results are reproduced from Ref. \citenum{dopants_hafnia_influence_RSC} with permission from The Royal Society of Chemistry.}
\label{Fig:exp_trends}
\end{figure}
Some noteworthy agreements between the theoretical predictions made in this study and the empirical observations of Starschich et al.\cite{dopants_hafnia_influence_RSC} (major results reproduced in Fig. \ref{Fig:exp_trends}) and Schroeder et al.\cite{hafnia_dopants_effects} can be drawn: (1) the dopants that showed substantial polarization in the empirical studies, such as Sr, Ba, Gd, Y and La, were also found here to significantly stabilize the polar P-O1 phase; (2) the trend of dopants with larger ionic radii stabilizing the polar P-O1 phase matches well with the experimental observation of high remnant polarization for larger dopants (see Fig. \ref{Fig:exp_trends}(a)); and (3) in agreement with the experiments, we found a doping concentration of 6.25\% to be the most appropriate to stabilize the polar phase. As reproduced in Fig. \ref{Fig:exp_trends}(b), with increasing doping concentration the measured polarization in hafnia films first increases, reaches a maximum around the 5-8\% doping level, and then gradually decreases. Similar behavior is evident from this study as well: with increasing doping concentration, the polarization would initially rise owing to the enhanced stabilization of the polar P-O1 phase; beyond a critical doping concentration, however, the distortions introduced in the structure would diminish the polarization of the polar phase, resulting in a gradual decrease in the measured polarization. Overall, the remarkable similarities between our computations and the empirical observations lend confidence to the assumptions made in modeling the hafnia systems and to the predictions made in this study.
\section{Conclusions}
In summary, we investigated the influence of $\sim$40 dopants on the phase stability of hafnia using density functional theory calculations. A three-stage down-selection strategy was adopted to efficiently search for promising dopants that favor the polar phases in hafnia. In Stage 1, the selected dopants were modeled under the constraints of 3.125\% substitutional doping concentration, the absence of charge-neutralizing oxygen vacancies, and fixed volume. From this stage, 10 divalent and 4 trivalent dopants that favor the polar $Pca2_1$ and/or $Pmn2_1$ phase in hafnia were selected for Stage 2. While the trivalent dopants were studied directly in the next stage, the divalent dopants in Stage 2 were modeled in the presence of an appropriate oxygen vacancy, from which Ca, Sr and Ba were found to favor the polar $Pca2_1$ phase and were selected for Stage 3.
In Stage 3, the remaining promising candidates, i.e., Ca, Sr, Ba, Y, La and Gd doped hafnia systems, were comprehensively studied at various doping concentrations with an appropriate number of charge-compensating oxygen vacancies. For all these dopants, increasing the doping concentration enhanced the stabilization of the polar $Pca2_1$ phase. However, no case was encountered in which a polar phase became the ground state, suggesting that dopants \textit{alone} may not induce ferroelectricity in bulk hafnia and can only \textit{assist} other factors, such as surface energy, strain and electric fields. Empirical measurements of relatively high remnant polarization have been reported for the dopants identified here, suggesting good agreement between experiments and our computations. Indeed, the doping concentration of around 5-8\% at which maximum polarization is empirically observed matches well with our predictions.
Finally, clear chemical trends of dopants with larger ionic radii and lower electronegativity favoring the polar $Pca2_1$ phase in bulk hafnia were identified. For this polar phase, an additional bond between the dopant cation and the 2$^{\rm nd}$ nearest-neighbor oxygen was identified as the root cause of this observation. Further, trivalent dopants, owing to their ionic radii being comparable to that of Hf, were found to favor the polar $Pca2_1$ phase at lower strains than the divalent dopants. Based on these insights, we were able to go beyond the dopant elements considered in the DFT calculations and conclude that the entire lanthanide series, the lower half of the alkaline earth metals (Ca, Sr, Ba) and Y are the most favorable dopants to promote ferroelectricity in hafnia. These insights can be used to tailor the ferroelectric characteristics of hafnia films by selecting dopants with an appropriate combination of ionic radius and electronegativity.
\begin{acknowledgement}
Financial support of this work through Grant No. W911NF-15-1-0593 from the Army Research Office (ARO) and partial computational support through an Extreme Science and Engineering Discovery Environment (XSEDE) allocation (number TG-DMR080058N) are acknowledged.
\end{acknowledgement}
\begin{suppinfo}
Discussion on the need of oxygen vacancy introduction in doped hafnia using electronic structure studies and the methodology adopted to characterize different phases of doped hafnia.
\end{suppinfo}
\newpage
\section{TOC Graphic}
\includegraphics[scale=0.57]{scheme_v6.pdf}
\end{document}
\section*{Results}
{\bf Qumode Probes.} The use of qumode probes allows for the determination of the eigenvalues of an observable of a system of interest. It further allows one to measure the occupation probabilities of the associated eigenstates for the particular system state. This may be achieved even when there is no \emph{a priori} knowledge of the system state, or the eigenvalues or eigenstates of the system operator. From the measurement of these eigenvalues and occupation probabilities, we can estimate the moments of the observable, allowing for its characterisation with respect to the (possibly unknown) system state.
The qumode probing protocol consists of three components. The first of these is the system under investigation, which is described by some generic state $\rho_{\mathrm{sys}}$. The protocol does not in general need a particular form for the system or its state, and it may inhabit a discrete or continuous Hilbert space. The second component is the continuous variable qumode, described by its quadratures $x$ and $p$, often referred to as `position' and `momentum' \cite{gerry2005}. We shall take these quadratures to be in their dimensionless form (that is, in terms of the creation and annihilation operators $a$ and $a^\dagger$ of the mode, we have $x=(a+a^\dagger)/2$ and $p=(a-a^\dagger)/2i$).
The final component is the interaction between system and qumode. We shall consider an interaction Hamiltonian of the form $gx\otimes H_{\mathrm{int}}$, where the first subspace belongs to the qumode and the second to the system \cite{liu2016}. Hence, the interaction acts on the system with a strength that depends on the qumode position quadrature, with $g$ an overall coupling strength. The associated evolution operator (in natural units $\hbar=1$) is $U(t)=\exp(-igx\otimes H_{\mathrm{int}}t)$; the qumode is thus dephased in this quadrature at a rate dependent on the system operator $H_{\mathrm{int}}$, motivating the use of a phase estimation-type algorithm.
We label the eigenstates of the system operator $H_{\mathrm{int}}$ as $\ket{u_n}$, with associated eigenvalues $E_n$. Thus, when the system is in such an eigenstate, and the qumode in a quadrature eigenstate $\ket{x}$, the effect of the interaction can be written
\begin{equation}
\label{eqeigenstates}
\ket{x}\otimes\ket{u_n}\to e^{-igxE_nt}\ket{x}\otimes\ket{u_n}.
\end{equation}
In general, the qumode can be in a superposition of the quadrature eigenstates $\ket{\psi_q}=\int dx G(x)\ket{x}$, and the system state can always be expressed in the basis defined by the eigenstates of $H_{\mathrm{int}}$: $\rho_{\mathrm{sys}}=\sum_{mn}c_{mn}\ket{u_m}\bra{u_n}$. Owing to the linearity of quantum mechanics, Eq.~\eqref{eqeigenstates} can be extended to such states, and one can perform a partial trace over the system to obtain an expression for the qumode state after running the interaction for a time $t$:
\begin{equation}
\label{eqintstate}
\rho_q(t)=\int\int dxdx'G(x)G^*(x')L(x,x',t)\ket{x}\bra{x'},
\end{equation}
where, analogous to the qubit probe protocols \cite{elliott2016, johnson2016}, we define the dephasing function $L(x,x',t)\equiv\mathrm{Tr}(\rho_{\mathrm{sys}}\exp(-ig(x-x')H_{\mathrm{int}}t))=\sum_n P_n\exp(-ig(x-x')E_nt)$, where $P_n=c_{nn}$. This resembles a characteristic function for the system operator.
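As a concrete check, the dephasing function can be evaluated numerically. The sketch below assumes a hypothetical three-level spectrum $\{E_n\}$ with populations $\{P_n\}$ (values chosen purely for illustration, not taken from any particular system):

```python
import cmath

# Toy spectrum and populations of H_int (illustrative values only)
E = [0.0, 1.0, 2.5]          # eigenvalues E_n
P = [0.5, 0.3, 0.2]          # occupation probabilities P_n (sum to 1)
g, t = 1.0, 1.0              # coupling strength and interaction time

def dephasing(x, xp):
    """L(x, x', t) = sum_n P_n exp(-i g (x - x') E_n t)."""
    return sum(pn * cmath.exp(-1j * g * (x - xp) * en * t)
               for pn, en in zip(P, E))

# Diagonal elements (x = x') are untouched: L(x, x, t) = sum_n P_n = 1,
# while off-diagonal coherences are suppressed, |L(x, x', t)| <= 1.
print(abs(dephasing(0.7, 0.7)))   # -> 1.0
print(abs(dephasing(1.0, 0.0)) <= 1.0)   # -> True
```

This makes the characteristic-function analogy explicit: $L$ is the Fourier transform of the eigenvalue distribution $\{P_n, E_n\}$ evaluated at $g(x-x')t$.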
\begin{figure}
\includegraphics[width=\linewidth]{./figure2.pdf}
\caption{{\bf Quantum circuit for qumode probing.} A qumode prepared in momentum eigenstate $\ket{p_0}$ interacts with the system through a controlled gate $U_x=\exp(-igxH_{\mathrm{int}}t)$ dependent on the qumode position quadrature $x$. Measuring the qumode in the momentum quadrature then directly samples the statistics of the system operator $H_{\mathrm{int}}$ for state $\rho_{\mathrm{sys}}$.}
\label{figcircuit}
\end{figure}
Let us now suppose that after an interaction time $\tau$ a measurement is made of the qumode state. Inspired by the qubit-based protocols, we shall measure in a basis conjugate to that which defines the interaction Hamiltonian, here the momentum quadrature. Further, we shall for now take the initial qumode state to be prepared in a momentum eigenstate $\ket{p_0}=(1/\sqrt{2\pi})\int dx \exp(ip_0x)\ket{x}$. This protocol is illustrated in \figref{figcircuit}. Then, the probability of the measurement resulting in the outcome $\ket{p}$ is given by (see Technical Appendix)
\begin{align} \label{eqinfinitesq}
P(p)=\bopk{p}{\rho_q(\tau)}{p}=\sum_n P_n (\delta(p-(p_0-gE_n\tau)))^2.
\end{align}
Thus, the probability distribution $P(p)$ is non-zero only at the points $p=p_0-gE_n\tau$, where it takes values $P_n$. The measurement of the qumode state hence directly samples the same distribution as that of a measurement of the operator $H_{\mathrm{int}}$ on the state $\rho_{\mathrm{sys}}$, with the mapping from qumode measurement outcomes to the spectrum of the system operator given by $E=(p_0-p)/g\tau$. With repeated measurements, one can then obtain an estimate of the probability distribution $P(p)$ (and hence $P(E)$). This thus allows for the estimation of the spectrum, and the moments of the system operator $\langle H_{\mathrm{int}}^m\rangle=\sum_n P_nE_n^m$. In contrast to the analogous qubit probing protocols, here these properties can be obtained directly, without the need for post-processing of the measurement outcomes.
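The sampling and moment estimation can be sketched in a few lines. The following simulation assumes the ideal (infinitely squeezed) probe, with a hypothetical spectrum and populations; each shot returns $p=p_0-gE_n\tau$ with probability $P_n$, and eigenvalues and moments are recovered from the inverse mapping $E=(p_0-p)/g\tau$:

```python
import random

random.seed(1)

# Hypothetical spectrum/populations of H_int; p0, g, tau are protocol settings
E = [0.0, 1.0, 2.5]
P = [0.5, 0.3, 0.2]
p0, g, tau = 0.0, 1.0, 1.0

# Ideal probe: each shot samples the statistics of H_int directly
shots = [p0 - g * random.choices(E, weights=P)[0] * tau
         for _ in range(200_000)]

# Map outcomes back to eigenvalues and estimate moments <H^m> = sum_n P_n E_n^m
energies = [(p0 - p) / (g * tau) for p in shots]
mean = sum(energies) / len(energies)
second = sum(e**2 for e in energies) / len(energies)

# Exact values for this toy spectrum: <H> = 0.8, <H^2> = 1.55
print(mean, second)
```

No post-processing beyond the linear rescaling is required, in contrast to the qubit protocols where moments must be extracted from interference fringes.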
A caveat to the above is that we have neglected the presence of the natural evolution of the system under its bare Hamiltonian $H_0$ during the running of the protocol. For this to be valid, we require that the interaction occurs on timescales much faster than the natural evolution ($gH_\mathrm{{int}}\gg H_0$), and that the natural evolution has negligible effect on the system state during the running of the protocol ($H_0\tau\ll1$), hence imposing a maximum allowable running time for the protocol. These restrictions are lifted when the bare Hamiltonian and the interaction Hamiltonian commute, in which case the natural evolution does not affect the outcome of the qumode measurement.
{\bf Robustness to experimental imperfections.} The above derivation of the qumode state after performing the protocol assumed an initial qumode state prepared in a momentum quadrature eigenstate. In practice, one can only approximate such a state. Specifically, there is a finite resolution (`bin' size) in the precision with which one can measure the momentum quadrature, and only a finite level of squeezing in a given quadrature (a quadrature eigenstate corresponds to infinite squeezing). We can generalise the above results to encompass each of these imperfections, and thus determine the regime of parameters for which the protocol is valid, by considering initial qumode states $G(x)$ corresponding to finite bins and squeezed states.
First, we consider the case where there is a finite bin size for the quadrature measurement. A bin size of $L$ centered on $p_0$ will constrain the initial momentum quadrature value $p_0+k$ to be within the range $-L/2\leq k\leq L/2$, and hence we can express the initial state as
\begin{equation}
G(x)=\frac{1}{\sqrt{2\pi L}}\int_{-\frac{L}{2}}^{\frac{L}{2}}dk e^{i(p_0+k)x}.
\end{equation}
With this initial distribution, the final probability distribution for the qumode measurement is given by (see Technical Appendix)
\begin{equation}
\label{eqfinalbin}
P(p)=\sum_n\begin{cases}
\frac{P_n}{L} & -\frac{L}{2}\leq p-p_0+gE_n\tau\leq \frac{L}{2}\\
0 & \mathrm{otherwise}.
\end{cases}
\end{equation}
The further consequence of a finite bin size in the final measurement of the qumode can also be accounted for, by integrating this probability over the size of each bin.
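The resulting top-hat structure is easy to verify numerically. The sketch below (toy two-level spectrum, illustrative parameters) implements Eq.~\eqref{eqfinalbin}, checks that it is normalised, and checks the resolvability condition that eigenvalue separations exceed $L/g\tau$:

```python
# Finite measurement bin of width L: each eigenvalue contributes a top-hat
# of height P_n/L centred at p0 - g*E_n*tau (illustrative parameters)
E = [0.0, 1.0]
P = [0.6, 0.4]
p0, g, tau, L = 0.0, 1.0, 1.0, 0.2

def prob_density(p):
    total = 0.0
    for pn, en in zip(P, E):
        centre = p0 - g * en * tau
        if -L / 2 <= p - centre <= L / 2:
            total += pn / L
    return total

# Riemann sum over [-2, 2]: the density integrates to 1
dp = 1e-4
norm = sum(prob_density(-2 + i * dp) * dp for i in range(int(4 / dp)))
print(norm)  # close to 1

# The two top-hats are disjoint when |E_1 - E_0| exceeds L/(g*tau)
print(abs(E[1] - E[0]) > L / (g * tau))  # -> True
```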
For the second case, with finite squeezing, we consider an initial distribution with a Gaussian uncertainty in the value of the momentum quadrature, centred on $p_0$:
\begin{equation}
G(x)=\left(\frac{s^2}{\pi}\right)^{\frac{1}{4}}\frac{1}{\sqrt{2\pi}}e^{ip_0x}\int dq\, e^{-\frac{s^2q^2}{2}}e^{iqx}.
\end{equation}
Here, $s$ corresponds to the dimensionless squeezing factor \cite{liu2016}, parameterising the squeezing in the momentum quadrature (note that $s=1$ corresponds to an unsqueezed coherent state, defined as an eigenstate of the annihilation operator, $a\ket{\alpha}=\alpha\ket{\alpha}$). Inserting this into Eq.~\eqref{eqintstate} and following through the protocol (see Technical Appendix), we find that the final probability distribution for the qumode is given by
\begin{equation}
\label{eqfinalsqueeze}
P(p)=\frac{s}{\sqrt{\pi}}\sum_nP_n e^{-s^2(p-p_0+g\tau E_n)^2}.
\end{equation}
The forms of the final probability distributions Eqs.~\eqref{eqfinalbin} and \eqref{eqfinalsqueeze} are unsurprising, as they mirror the uncertainty in the initial momentum distribution; that is, the spread of the initial momentum quadrature value $p_0$ ultimately sets the uncertainty in $p$ for the final distributions. Because of this finite initial uncertainty, the final probability distributions are no longer identical to the distribution of the system operator $H_{\mathrm{int}}$, but inherit the uncertainty of the initial state. With finite measurement bins, we are limited to a resolution $L$ in $p$, corresponding to a resolution in the spectrum of $H_{\mathrm{int}}$ of $\Delta E=L/g\tau$. For finite squeezing, the squeezing factor sets the spread of the final distribution, with greater squeezing narrowing it. From Eq.~\eqref{eqfinalsqueeze}, the standard deviation of the final momentum distribution is $\sigma_p=1/(\sqrt{2}s)$, corresponding to a standard deviation for the system operator eigenvalues of $\sigma_E=1/(\sqrt{2}sg\tau)$. Thus, in both cases the precision can be increased by running the protocol for longer or increasing the coupling strength, and further by decreasing the bin size or increasing the squeezing, respectively. These values of $\Delta E$ and $\sigma_E$ can be set equal to the desired accuracy to define the valid parameter regime. We illustrate the consequence of different levels of precision in \figref{figexample}.
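The width scaling can be confirmed directly from Eq.~\eqref{eqfinalsqueeze}. The sketch below (same hypothetical three-level spectrum as before, illustrative parameters) builds the Gaussian-mixture distribution, checks its normalisation, and verifies numerically that the peak widths scale as $1/s$, so doubling the squeezing halves the spread:

```python
import math

# Finite-squeezing outcome distribution,
# P(p) = (s/sqrt(pi)) * sum_n P_n exp(-s^2 (p - p0 + g*tau*E_n)^2)
E = [0.0, 1.0, 2.5]
P = [0.5, 0.3, 0.2]
p0, g, tau = 0.0, 1.0, 1.0

def prob_density(p, s):
    pref = s / math.sqrt(math.pi)
    return pref * sum(pn * math.exp(-s**2 * (p - p0 + g * tau * en) ** 2)
                      for pn, en in zip(P, E))

def peak_width(s):
    """Numerical std of a single Gaussian component (the E = 0 peak alone)."""
    dp, lo = 1e-3, -5.0
    pts = [lo + i * dp for i in range(10000)]
    w = [(s / math.sqrt(math.pi)) * math.exp(-s**2 * p**2) for p in pts]
    mean = sum(p * x * dp for p, x in zip(pts, w))
    return math.sqrt(sum((p - mean) ** 2 * x * dp for p, x in zip(pts, w)))

# Peak widths scale as 1/s: doubling s halves the spread of each peak,
# so eigenvalue resolution improves with squeezing (and with g*tau)
print(peak_width(2.0) / peak_width(4.0))  # close to 2

# Normalisation check of the full mixture at s = 4
dp = 1e-3
norm = sum(prob_density(-6 + i * dp, 4.0) * dp for i in range(12000))
print(norm)  # close to 1
```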
\begin{figure}
\includegraphics[width=\linewidth]{./figure3.pdf}
\caption{{\bf Effect of finite squeezing.} Without perfect squeezing, the distribution sampled by measuring the qumode is spread around the actual distribution of the interaction Hamiltonian. This is illustrated for various levels of precision $\alpha=\Delta/sg\tau$, where $\Delta$ is the difference between the eigenvalues of $H_{\mathrm{int}}$. This example $H_{\mathrm{int}}$ has 5 evenly-spaced eigenstates with randomly-selected populations.}
\label{figexample}
\end{figure}
Interestingly, we note that as with the proposal for power of one qumode computation \cite{liu2016}, one can trade off a decreased squeezing with increased running time $\tau$, and vice versa. While it is tempting to then conclude that these uncertainties in the final distribution can thus be negated by a sufficiently increased running time, this is not necessarily the case in general, due to the constraint imposed on $\tau$ for the effects of the system's natural evolution $H_0$ to be neglected.
{\bf Applications for Thermodynamics.}
When the system is known to be in a thermal state $\rho_\Theta(\beta)=\exp(-\beta H_\Theta)/Z(\beta)$ with respect to a Hamiltonian $H_\Theta$, it is possible to use the qumode as a thermodynamical probe, by engineering $H_{\mathrm{int}}$ to be proportional to this $H_\Theta$. Here, $\beta=1/T$ is the inverse temperature (we employ units in which Boltzmann's constant $k_B=1$), and $Z(\beta)=\mathrm{Tr}(\exp(-\beta H_\Theta))$ is the partition function. In particular, note that this can be achieved even when the system is thermalised with respect to its natural Hamiltonian $H_0$. In this case, because the interaction Hamiltonian will be proportional to the natural Hamiltonian, the two hence commute and there will be no restriction on the magnitude of $g$ or the time for which the protocol can be run, as noted above.
To see this, consider that we estimate from the protocol the eigenvalues $\{E_n\}$ of $H_\Theta$, along with their respective probabilities $P_n$ for the state $\rho_\Theta(\beta)$. One can then construct for each eigenvalue (with degeneracy $g_n$) an equation of the form
\begin{equation} \label{eqZeq}
\log (Z(\beta))+\beta E_n=\log (g_n/P_n).
\end{equation}
Suppose we assume known values of only two degeneracies, $g_{n_0}$ and $g_{n_1}$. Since Eq.~\eqref{eqZeq} can be rewritten as $\beta=\log(P_{n_0} g_{n_1}/(P_{n_1} g_{n_0}))/(E_{n_1}-E_{n_0})$, we see that the qumode probe can be used as a non-destructive thermometer for quantum systems. While traditional thermometry relies on thermal equilibrium being established between the system and the probe \cite{correa2015individual}, here the temperature can be measured without this constraint. Similar probes using qubits \cite{johnson2016, sabin2014} instead of a qumode also do not require equilibration. However, a qumode thermometer has the advantage that its precision can be tuned through the amount of squeezing of the probe itself. Precision is thus not constrained by the number of probes, which can be a limitation even for thermometers that exploit quantum advantages in precision using quantum metrology \cite{stace2010quantum, sabin2014}.
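The thermometry step reduces to a single logarithm once the probed $\{E_n, P_n\}$ are in hand. The sketch below fabricates a hypothetical three-level thermal state at a known $\beta$, then recovers that $\beta$ from two levels with known degeneracies:

```python
import math

# Hypothetical spectrum and degeneracies; beta_true is what we try to recover
beta_true = 0.7
E = [0.0, 1.0, 2.0]
degen = [1, 2, 1]            # g_n; only g_0 and g_1 are assumed known below

# Thermal populations P_n = g_n exp(-beta E_n) / Z, as sampled by the probe
Z = sum(g * math.exp(-beta_true * e) for g, e in zip(degen, E))
P = [g * math.exp(-beta_true * e) / Z for g, e in zip(degen, E)]

# beta = log(P_{n0} g_{n1} / (P_{n1} g_{n0})) / (E_{n1} - E_{n0})
n0, n1 = 0, 1
beta_est = math.log(P[n0] * degen[n1] / (P[n1] * degen[n0])) / (E[n1] - E[n0])
print(beta_est)  # -> 0.7 up to floating-point error
```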
When the effects of finite squeezing and bin size are considered, it is necessary that we can resolve the different eigenvalues of the interaction Hamiltonian (i.e. $\min_{n\neq n'}|E_n-E_{n'}|\gtrsim\sigma_E$). Otherwise, we must treat nearby eigenvalues as degenerate, thus limiting the precision to which we can estimate $\beta$ by the uncertainty $\sigma_E$. We note that to resolve a \textit{particular} $E_n$ to precision $\sigma_E$, the number of measurements required scales as $\mathcal{N} \sim 1/(\sigma_E^2 P_n)$, where $1/\sigma_E \gtrsim 1/s$. Thus, the total number of measurements needed to estimate $\beta$ to precision $\sigma_E$ is bounded by $\sum_{i=0}^1\mathcal{N}_i$, where $\mathcal{N}_i \sim \min(1/(\sigma_E^2 P_{n_i}))$ for $i=0,1$, with the minimisation taken over all levels with known degeneracies $g_n$.
With a known $g_{n_0}$ and temperature, the full set of degeneracies $\{g_n\}$ can be found using $g_n=P_n g_{n_0} \exp(\beta(E_n-E_{n_0}))/P_{n_0}$. Measuring this for a range of $\beta$ enables reconstruction of the full partition function using $Z(\beta)=\sum_n g_n \exp(-\beta E_n)$. With access to the partition function and temperature, important thermodynamical quantities such as free energies, heat capacities \cite{glazer2002} and von Neumann entropy can also be reconstructed.
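The reconstruction of the full set of degeneracies and the partition function can be sketched in the same toy setting (all numbers are hypothetical, chosen only to exercise the formulas):

```python
import numpy as np

# Hypothetical spectrum; suppose the protocol yields {E_n} and {P_n} at known beta,
# with only the degeneracy g_{n0} of the lowest level known a priori.
E = np.array([0.0, 1.0, 2.5])
g_true = np.array([1.0, 2.0, 1.0])
beta = 0.7
w = g_true * np.exp(-beta * E)
P = w / w.sum()

# Reconstruct all degeneracies from the measured {E_n}, {P_n} and g_{n0} ...
n0 = 0
g = P * g_true[n0] * np.exp(beta * (E - E[n0])) / P[n0]
# ... and then the partition function.
Z = np.sum(g * np.exp(-beta * E))
```

Repeating this at several values of $\beta$ would trace out $Z(\beta)$ itself.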
An important application is in understanding the free energy landscape of a physical system. The Jarzynski equality \cite{jarzynski1997nonequilibrium} and its quantum counterpart \cite{tasaki2000jarzynski, mukamel2003quantum} connect the free energy difference between two thermal states of a system to the work done $W$ in a far-from-equilibrium process. Since it is possible to sample the probability distribution $P(W)$ of work done on a quantum system, it has been proposed that the free energy difference can be extracted from $P(W)$. This method for extracting free energy differences is not always efficient, however, for instance when large negative values of work are involved and at low temperatures \cite{paz2014}. By probing $\{E_n\}$ and $\{P_n\}$ directly using our model, we see that $F(\beta)=-\log(Z(\beta))/\beta$ can still be recovered efficiently in those regimes. It is also possible to reconstruct the heat capacity $C=\beta^2 \partial^2 \log(Z(\beta))/\partial \beta^2$ using this method, which can be used to probe quantum critical points \cite{liang2015}.
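Continuing with a toy spectrum of our own choosing, the free energy and heat capacity follow from $\log Z(\beta)$; the curvature can be cross-checked against the equivalent energy-variance expression $C=\beta^2(\langle E^2\rangle-\langle E\rangle^2)$:

```python
import numpy as np

# Toy spectrum (hypothetical), assumed reconstructed via the qumode protocol.
E = np.array([0.0, 1.0, 2.5])
g = np.array([1.0, 2.0, 1.0])
beta = 0.7

def logZ(b):
    return np.log(np.sum(g * np.exp(-b * E)))

F = -logZ(beta) / beta          # free energy F(beta) = -log(Z)/beta

# Heat capacity C = beta^2 d^2 log Z / d beta^2, via a central finite difference ...
h = 1e-4
C = beta**2 * (logZ(beta + h) - 2 * logZ(beta) + logZ(beta - h)) / h**2

# ... which should match beta^2 times the thermal energy variance.
P = g * np.exp(-beta * E); P /= P.sum()
C_check = beta**2 * (np.sum(P * E**2) - np.sum(P * E)**2)
```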
It is also possible to probe thermodynamical quantities of systems that are perturbed far from equilibrium. In particular, we can study the average work performed on a system due to a sudden quench in the interaction Hamiltonian. Further, we can determine the irreversible portion of this work $\langle W_{\text{irr}}\rangle$ \cite{tasaki2000jarzynski, campisi2011colloquium, dorner2012emergent}, defined as the difference between $\langle W\rangle$, the average work done on the system during the quench, and $\Delta F$, the change in free energy had the system evolved adiabatically from the thermal state of the initial interaction Hamiltonian to that of the final interaction Hamiltonian. The irreversible work, motivated by the fluctuation theorems and its connections with various entropy measures \cite{deffner2010generalized, tasaki2000jarzynski}, is a widely-used measure of irreversibility, and has been shown to be a signature of some second-order phase transitions \cite{mascarenhas2014}. Note that the average work is also an interesting quantity to study in its own right, and its behaviour across a critical point has been connected to first-order quantum phase transitions \cite{mascarenhas2014}.
Consider initial and final interaction Hamiltonians $H_{\text{int}}^{(0)}$ and $H_{\text{int}}^{(1)}$, satisfying $H_{\text{int}}^{(0)}, H_{\text{int}}^{(1)}\gg H_0/g$. The change in free energy $\Delta F$ may be calculated as above by using the qumode to probe the free energies of the respective thermal states $\rho_{\Theta_0}$ and $\rho_{\Theta_1}$ of the interaction Hamiltonians. Under a quench, the system state is unchanged, and remains in the initial thermal state $\rho_{\Theta_0}$. Thus, the average work done by the quench is given by $\langle W \rangle=\text{Tr}(\rho_{\Theta_0}H_{\text{int}}^{(1)})-\text{Tr}(\rho_{\Theta_0}H_{\text{int}}^{(0)})=\sum_{m} E^{(1)}_m P^{(1)}_m-\sum_n E^{(0)}_n P^{(0)}_n$, where $E^{(i)}$ and $P^{(i)}$ are the eigenvalues and occupation probabilities of the state $\rho_{\Theta_0}$ under $H_{\text{int}}^{(i)}$ for $i=0,1$. These quantities can be determined by the qumode probe, and hence one can calculate the average work along with its irreversible portion.
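A sketch of this quench-work bookkeeping for a hypothetical two-level system (the Hamiltonians and $\beta$ below are arbitrary illustrative choices, in units with $g=1$):

```python
import numpy as np

# Hypothetical 2x2 initial/final interaction Hamiltonians.
H0 = np.array([[0.0, 1.0], [1.0, 0.0]])
H1 = np.array([[0.5, 0.8], [0.8, -0.5]])
beta = 1.0

def thermal(H, b):
    E, U = np.linalg.eigh(H)
    w = np.exp(-b * E)
    return (U * (w / w.sum())) @ U.conj().T

def logZ(H, b):
    return np.log(np.sum(np.exp(-b * np.linalg.eigvalsh(H))))

rho0 = thermal(H0, beta)                        # state frozen by the sudden quench

W = np.trace(rho0 @ (H1 - H0)).real             # average quench work <W>
dF = -(logZ(H1, beta) - logZ(H0, beta)) / beta  # free energy change Delta F
W_irr = W - dF                                  # irreversible work (non-negative)
```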
Finally, we remark that it is also possible to use the qumode probe to find the overlaps of the ground states of a parameter-dependent Hamiltonian at two different values of the parameter $\lambda$. This quantity has been used to characterise regions of criticality defining quantum phase transitions in the Dicke model \cite{zanardi2006ground}. For pure states, the overlap probability is just the state fidelity between the ground states. Let $H_{\text{int}}(\lambda=\lambda_c)$ be the interaction Hamiltonian at the quantum critical point. Then the state fidelity of the ground states of $H_{\text{int}}(\lambda<\lambda_c)$ and $H_{\text{int}}(\lambda>\lambda_c)$ can be measured by concatenating two qumode probe quantum circuits. The first qumode probe circuit evolves under $H_{\text{int}}(\lambda<\lambda_c)$ and is used to prepare the ground state $\ket{u_{\lambda<\lambda_c}}$ of $H_{\text{int}}(\lambda<\lambda_c)$. We then use $\ket{u_{\lambda<\lambda_c}}$ as the state input to the second qumode probe circuit, which evolves under $H_{\text{int}}(\lambda>\lambda_c)$. The final probability of obtaining a zero eigenvalue is then $P_0=|\bra{u_{\lambda<\lambda_c}} u_{\lambda>\lambda_c}\rangle|^2$, which is the sought-after overlap probability.
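As an illustration with a toy parameter-dependent Hamiltonian $H(\lambda)=\sigma_x+\lambda\sigma_z$ (our choice, not one from the text), the quantity the concatenated circuits would return is just the squared inner product of the two ground states:

```python
import numpy as np

# Toy parameter-dependent Hamiltonian H(lmbda) = sigma_x + lmbda*sigma_z (illustrative).
sx = np.array([[0.0, 1.0], [1.0, 0.0]])
sz = np.array([[1.0, 0.0], [0.0, -1.0]])

def ground_state(lmbda):
    _, U = np.linalg.eigh(sx + lmbda * sz)
    return U[:, 0]                # eigenvector of the lowest eigenvalue

u_below = ground_state(-0.5)      # stands in for lambda < lambda_c
u_above = ground_state(+0.5)      # stands in for lambda > lambda_c

# Overlap probability the concatenated probe circuits would report:
P0 = abs(np.vdot(u_below, u_above))**2
```

The global sign ambiguity of the eigenvectors drops out of $P_0$, as it must for a fidelity.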
{\bf Candidates for Experimental Implementations.} We now suggest some current experimental setups that would be ideal candidates to test our protocol. We consider two interaction Hamiltonians, the quantum Rabi model and the Dicke model, both of which describe light-matter interactions, and are hence ubiquitous in quantum technologies.
The first Hamiltonian, the quantum Rabi model, is given by \cite{gerry2005}
\begin{equation}
\label{eqquantumrabi}
H_{QR}=gx\otimes\sigma_x,
\end{equation}
where $\sigma_x$ is the usual Pauli $x$ matrix \cite{nielsen2010}, taking the role of the interaction Hamiltonian $H_{\mathrm{int}}$. The Hamiltonian was originally conceived as a description of a quantised light field interacting with a two-level system. One often encounters it in its simplified guise as the Jaynes-Cummings Hamiltonian, obtained by neglecting the counter-rotating terms $a\sigma^-$ and $a^\dagger\sigma^+$; nevertheless, such systems are typically more accurately described by the full Rabi Hamiltonian.
As noted above, the majority of current quantum technologies involve light-matter interactions, and so one can find many examples of systems utilising such interactions. Sometimes, as with ion traps \cite{blatt2012}, Rydberg atoms \cite{urban2009}, and laser-driven tunnelling of ultracold atoms in optical lattices \cite{jaksch2003}, the continuous variable mode is treated as a classical field (although in the case of ion traps, interactions between the internal states of the ions and their quantised motional state are well described by the above Hamiltonian). In principle, our protocol can be applied to such systems by replacing the classical light with a qumode light field, though to achieve similar transition rates one would need either a highly-populated field or a very strong coupling. While being optimistic about such possibilities, we shall for concreteness highlight examples where the fully-quantum interaction is realised.
Both cavity \cite{gleyzes2007, hamsen2017} and circuit \cite{lahaye2009, forn2010, yoshihara2017} quantum electrodynamics experiments couple a continuous variable mode (cavity fields in the former, nanomechanical resonators in the latter) to a two-level system (atoms and superconducting qubits, respectively) through a quantum Rabi Hamiltonian Eq.~\eqref{eqquantumrabi} in the (ultra)strong coupling regime. Such setups operate in a regime where the qumode measurement resolution can be much finer than the differences between the interaction Hamiltonian eigenvalues, which for this example are of order unity. For example, in Ref. \cite{forn2010} the coupling between qubit and resonator gives $g\approx 10^{10}$, and the Q-factor of $10^3$ and resonance frequency of 8.2~GHz lead to $g\tau\sim 200$ when the protocol is run for times of the order of the resonator lifetime. With the parameters of Ref. \cite{hamsen2017}, we would have $g\tau=40$ when $\tau$ is the cavity lifetime. Thus, for both these examples, the spread $\sigma_E$ would be much smaller than the differences between the measured eigenvalues.
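The quoted order of magnitude for $g\tau$ can be checked directly from the cited numbers, taking the resonator lifetime implied by the Q-factor as $\tau = Q/\omega_r = Q/(2\pi f_r)$ (an order-of-magnitude sketch only):

```python
import numpy as np

# Numbers quoted in the text for the circuit-QED example:
# coupling g ~ 1e10 (angular frequency units), Q-factor 1e3, resonance 8.2 GHz.
g = 1e10
Q = 1e3
f_r = 8.2e9

tau = Q / (2 * np.pi * f_r)   # resonator lifetime implied by the Q-factor
g_tau = g * tau               # dimensionless phase accumulated by the probe
```

With these numbers $g\tau\approx 2\times 10^2$, consistent with the $g\tau\sim 200$ quoted above.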
While Eq.~\eqref{eqquantumrabi} makes it clear that the protocol can be used to probe moments of $\sigma_x$ for a two-level system, it can straightforwardly be applied more generally. First, the physical motivation and derivation of the Hamiltonian does not necessarily require that the system has only two states, and can be rederived for any number of states, by replacing $\sigma_x$ with the appropriate $x$ spin operator for the number of states. Secondly, by illuminating an array of such systems with the same light field, the Hamiltonian becomes an interaction between the light field and the sum of the individual spin operators for each system, and thus probes moments of the total spin operator, as well as correlations between individual spins. Finally, it is possible to probe the spin operator in any chosen direction, by appropriate rotation of the individual spins prior to probing (e.g. applying a Hadamard gate \cite{nielsen2010} to the system allows probing of $\sigma_z$).
A related Hamiltonian that takes the desired form is the Dicke model, which describes the interaction between an ensemble of identical two-level atoms interacting with a common quantised light field. It is given by
\begin{equation}
\label{eqdicke}
H_D=gx\otimes J_x,
\end{equation}
where $\bm{J}$ is a spin operator describing the collective excitations of the ensemble. The model has been realised experimentally with cold atoms trapped in an optical cavity, in the context of studying quantum phase transitions \cite{baumann2010}. Here, we see it also as a possible route to non-destructively probe atomic ensembles. This setting is particularly apt for our proposal, as the inclusion of the optical cavity facilitates the use of tools from quantum optics, a primary manifestation of continuous variable quantum systems. The strong light-matter coupling and long cavity lifetimes achievable in such systems provide favourable conditions for testing our protocol. Specifically, a ratio of 0.2 for the collective interaction strength to cavity decay rate has been achieved \cite{baumann2010}, corresponding to $g\tau\sim\mathcal{O}(10^{-3}-10^{-2})$ when the protocol is run for a time comparable to the cavity lifetime. This allows the number of atomic excitations to be resolved to within a few hundred even with no squeezing, which is quite sharp when compared to the total number of atoms ($10^5$). Further, we note that extensions of the above experiment have succeeded in additional manipulation of the trapped atoms, by confining them to various lattice structures \cite{landig2016}.
{\bf Discussion.} We have shown that properties of quantum systems may be imprinted onto continuous variable qumode ancillae to allow for non-destructive probing of the system. For an appropriate choice of interaction operator and initial qumode state, the spectrum of the desired observable, and the occupation probabilities of its eigenstates for a particular system state are mapped directly on to the qumode state. The qumode state then behaves according to the same statistics as this operator, and thus measurement of the qumode reproduces the same result as a direct measurement of the system would, while avoiding particular drawbacks associated with direct measurements of practical implementations of quantum technologies. Further, the direct recovery of the measurement statistics is in contrast to analogous protocols with qubit ancillae, where one must first employ post-processing such as taking derivatives \cite{elliott2016} or Fourier transforms \cite{dorner2013, mazzola2013, johnson2016} of the measurement outcomes. As our proposal is feasible with typical parameters for contemporary experiments, it could find immediate use in the field.
We note that while qumode probes are non-destructive to the system, in general they will not be non-demolition. The backaction on the system state will project it to the eigenstate (or in the case of degeneracies, eigenspace) associated with the measurement outcome, in much the same way as a direct measurement of the system would. For an interaction operator with the eigenstates known, this projection could perhaps be employed as a form of probabilistic state engineering \cite{verstraete2009, pedersen2014, elliott2015, joshi2016}, with the particular state created being heralded by the qumode measurement outcome, potentially adding further utility to the addition of our protocol to the quantum technologies toolbox.
After completion of this manuscript, we became aware of a recent work employing a qumode-based protocol to measure thermodynamical work \cite{cerisola2017}.
{\bf Acknowledgements.} The authors acknowledge financial support from Singapore National Research Foundation Awards NRF-NRFF2016-02 and NRF-NRFF2013-01, the Engineering and Physical Sciences Research Council (Doctoral Training Account), John Templeton Foundation Grant 53914 ``Occam's quantum mechanical razor: Can quantum theory admit the simplest understanding of reality?'', and the Clarendon Fund, Merton College of the University of Oxford. T.J.E. thanks the Centre for Quantum Technologies, National University of Singapore for their hospitality.
\section*{Technical Appendix}
Here we derive Eq.~\eqref{eqinfinitesq} which gives the probability of the momentum measurement resulting in the outcome $\ket{p}$, denoted $P(p)$, when the initial state is infinitely squeezed. Recall that for a qumode prepared in a given state $G(x)$, its state after interacting with the system for a time $\tau$ is given by Eq.~\eqref{eqintstate}, which we reproduce here for convenience:
\begin{equation}
\rho_q(\tau)=\int\int dxdx'G(x)G^*(x')L(x,x',\tau)\ket{x}\bra{x'},
\end{equation}
with $L(x,x',\tau)=\sum_n P_n\exp(-ig(x-x')E_n\tau)$.
For the infinitely squeezed initial state we then have
\begin{align}
P(p)&=\bopk{p}{\rho_q(\tau)}{p}\nonumber\\
&=\frac{1}{(2\pi)^2}\int\int dx dx'e^{i(x-x')(p_0-p)}L(x,x',\tau) \nonumber \\
&=\frac{1}{(2\pi)^2}\sum_n P_n \int\int dx dx'e^{i(x-x')(p_0-p-gE_n\tau)} \nonumber\\
&=\sum_n P_n (\delta(p-(p_0-gE_n\tau)))^2,
\end{align}
as given in Eq.~\eqref{eqinfinitesq}.
We also derive Eqs.~\eqref{eqfinalbin} and \eqref{eqfinalsqueeze}, which describe the sampled probability distribution when the initial state has a finite bin size, or is finitely squeezed. Inserting the explicit initial qumode state for a finite bin of size $L$, $G(x)=1/(\sqrt{2\pi L})\int_{-\frac{L}{2}}^{\frac{L}{2}}dk \exp(i(p_0+k)x)$, we have that
\begin{align}
\rho_q(\tau)&=\frac{1}{2\pi L}\sum_n P_n \int\int dxdx'\int_{-\frac{L}{2}}^{\frac{L}{2}}\int_{-\frac{L}{2}}^{\frac{L}{2}}dkdk'\nonumber\\&\times e^{i(p_0+k)x-i(p_0+k')x'-ig(x-x')E_n\tau}\ket{x}\bra{x'}.
\end{align}
Thus, we have that
\begin{align}
P(p)&=\frac{1}{(2\pi)^2L}\sum_n P_n \int\int dxdx'\int_{-\frac{L}{2}}^{\frac{L}{2}}\int_{-\frac{L}{2}}^{\frac{L}{2}}dkdk'\nonumber\\&\times e^{i(p_0-p+k)x-i(p_0-p+k')x'-ig(x-x')E_n\tau} \nonumber \\ &=\frac{1}{(2\pi)^2L}\sum_n P_n \int dx\int_{-\frac{L}{2}}^{\frac{L}{2}}dk e^{i(p_0-p+k-gE_n\tau)x}\nonumber\\&\times\int dx'\int_{-\frac{L}{2}}^{\frac{L}{2}}dk' e^{-i(p_0-p+k'-gE_n\tau)x'}\nonumber\\&=\frac{1}{L}\sum_nP_n\left(\int_{-\frac{L}{2}}^{\frac{L}{2}}dk \delta(p-(p_0-gE_n\tau+k))\right)^2\nonumber\\& =\sum_n\begin{cases}
\frac{P_n}{L} & -\frac{L}{2}\leq p-p_0+gE_n\tau\leq \frac{L}{2}\\
0 & \mathrm{otherwise},
\end{cases}
\end{align}
in agreement with Eq.~\eqref{eqfinalbin}.
We now do the same for the finitely squeezed state, which has initial wavefunction $G(x)=(s^2/\pi)^{\frac{1}{4}}(1/\sqrt{2\pi})\int dq\,e^{i(p_0+q)x}e^{-s^2q^2/2}$. The state of the qumode after the interaction for time $\tau$ is given by
\begin{align}
\rho_q(\tau)&=\frac{s}{\sqrt{\pi}}\frac{1}{2\pi}\sum_n P_n \int\int dxdx'\int\int dqdq'\nonumber\\&\times e^{i(p_0+q)x-i(p_0+q')x'-ig(x-x')E_n\tau}e^{-\frac{s^2(q^2+{q'}^2)}{2}}\ket{x}\bra{x'},
\end{align}
and the associated probability distribution is given by
\begin{align}
P(p)&=\frac{s}{4\pi^{\frac{5}{2}}}\sum_n P_n \int dx\int dqe^{i(p_0-p+q-gE_n\tau)x}e^{-\frac{s^2q^2}{2}}\nonumber\\&\times \int dx' \int dq'e^{-i(p_0-p+q'-gE_n\tau)x'}e^{-\frac{s^2{q'}^2}{2}}\nonumber\\&=\frac{s}{\sqrt{\pi}}\sum_nP_n\left(\int dq \delta(p-(p_0-gE_n\tau+q))e^{-\frac{s^2q^2}{2}}\right)^2\nonumber\\&=\frac{s}{\sqrt{\pi}}\sum_nP_n e^{-s^2(p-p_0+g\tau E_n)^2},
\end{align}
as given in Eq.~\eqref{eqfinalsqueeze}. In the limit of infinite squeezing, this reduces to Eq.~\eqref{eqinfinitesq}.
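A quick numerical sanity check of Eq.~\eqref{eqfinalsqueeze} (with hypothetical $E_n$, $P_n$, and protocol parameters of our choosing): each Gaussian carries weight $P_n$, so the sampled distribution is correctly normalised and peaks at $p_0-g\tau E_n$.

```python
import numpy as np

# Hypothetical interaction eigenvalues and occupation probabilities.
E = np.array([0.0, 1.0, 2.5])
P_n = np.array([0.5, 0.3, 0.2])
p0, g_tau, s = 0.0, 10.0, 5.0     # illustrative protocol parameters

# Eq. (eqfinalsqueeze): a sum of Gaussians of weight P_n centred at p0 - g*tau*E_n.
p = np.linspace(-40.0, 10.0, 200001)
Pp = (s / np.sqrt(np.pi)) * sum(
    Pn * np.exp(-s**2 * (p - p0 + g_tau * En)**2) for Pn, En in zip(P_n, E))

norm = Pp.sum() * (p[1] - p[0])   # Riemann sum: should integrate to 1
```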
\section{Introduction}
Hybrid semiconductor/superconductor (S) devices are becoming promising platforms to host topological superconductivity and thus
Majorana zero modes~\cite{Kitaev2001,Fu2008,Sau2010,Oreg2010,Lutchyn2010,Alicea2012}. These technological advances also make it possible to perform fundamental studies of more basic mesoscopic objects, the
Andreev bound states (ABSs), which characterize any coherent weak link between two superconducting electrodes~\cite{Beenakker1991}. In this respect, the direct detection of the current-carrying ABSs
through tunneling experiments~\cite{Pillet2010,Sueur2008} or through microwave spectroscopy~\cite{Zazunov2003,Kos2013,Bretheau2013,Bretheau2013_2,Janvier2015,Larsen2015,Lange2015} has constituted a great achievement whose extension to the
topological case is being pursued by several groups~\cite{Virtanen2013,Peng2016,Klees2017,Wiedenmann2016,Woerkom2016}. In particular, the microwave experiments of Ref.~\cite{Janvier2015} in metallic atomic contacts demonstrated the
possibility of quantum manipulation of the ABSs, an approach which could be now extended to the hybrid semiconductor devices.
The experiments on atomic contacts have also demonstrated that {\it odd}-parity states, in which an excess quasiparticle is trapped within the subgap levels,
are long lived and can get a significant population when the contact is close to perfect transmission and the phase difference approaches $\pi$~\cite{Zgirski2011}.
While this ``poisoning'' mechanism can be detrimental to all qubit proposals based on Majorana zero modes~\cite{Rainis2012} or ABSs~\cite{Olivares2014,Zazunov2014}, the spin
degree of freedom of long lived odd states can become itself the basis for another type of qubit. This is precisely the idea behind the Andreev spin qubit
(ASQ) proposal of Nazarov and coworkers~\cite{Chtchelkatchev2003,Padurariu2010}.
The ASQ was first proposed to be realized in metallic atomic contacts with strong spin-orbit coupling (using Pb, for instance), which would be responsible for the splitting
of the spin states~\cite{Chtchelkatchev2003,Padurariu2010,Beri2008}. Hybrid nanowires now provide another possible platform for its realization due to their strong spin-orbit interaction and the
tunability of their conduction channels~\cite{Woerkom2016,Goffman2017}. While most experimental progress along this line has been achieved on high-quality InSb/S and InAs/S hybrid nanowires~\cite{Mourik2012,Das2012,Albrecht2016,Zhang2016}, recent developments also include proximity-coupled strips in two-dimensional electron gases (2DEGs)~\cite{Suominen2017,Lee2017}, which are promising
platforms in view of their potential scalability and tunability.
There are, however, a number of uncertainties which hinder the feasibility of this realization. In the first place,
the single channel theory of ABSs in a Rashba nanowire predicts spin-degenerate states for zero Zeeman field and thus suggests that high fields are
needed to remove this degeneracy~\cite{Cheng2012,Yokoyama2014}. Moreover, this theory also predicts vanishing current matrix elements between the odd states, thus making the visibility of their transitions in microwave experiments negligible.
The aim of the present work is to analyze theoretically the ABS structure and the current matrix elements relevant for the even and odd transitions
in superconductor/nanowire/superconductor junctions. We show that even when only the lowest subband is occupied the influence of the higher
subbands is essential both for the energy splitting of the ABSs at zero field~\cite{Reynoso2012,Murani2016} as well as for obtaining finite matrix elements between the odd states having different spin polarizations. Our approach allows us to obtain analytical results for all relevant quantities as a function of the model parameters such as length,
width, and chemical potential in the nanowire region. Our analysis thus provides a powerful tool to guide experiments in the development of ASQs
based on semiconducting nanowires.
The paper is organized as follows.
In Sec.~\ref{Model}, we introduce a model describing multichannel nanowire Josephson junctions in the energy regime of single-channel transport, and obtain an effective Hamiltonian by projecting the full model onto the subspace spanned by subgap ABSs. By solving this Hamiltonian, we find analytical expressions for the Andreev energy levels. In Sec.~\ref{Current}, we define four distinct states corresponding to possible occupation-number configurations of the ABSs. We refer to the state in which the Andreev levels with negative energies are occupied as the ``ground state'', and to the state occupied by two quasiparticles with different spins as the ``excited state''. We use the term ``odd state'' for a single-quasiparticle occupation of the ABSs. We analytically derive the matrix elements of the current operator for even states (ground and excited states) and odd states, and further analyze their dependence on controllable parameters such as the Zeeman field, chemical potential, and junction length. In Sec.~\ref{Exp}, we discuss the feasibility of observing transitions between odd states in actual experiments.
Finally, in Sec.~\ref{Conclusion} we provide some concluding remarks.
\section{Model Hamiltonian}\label{Model}
\begin{figure}[!t]
\includegraphics[width=\columnwidth]{Fig1_Setup.pdf}
\caption{Top: Schematic illustration of a quasi-one-dimensional nanowire proximity coupled to $s$-wave superconductors (S, cyan)
forming a Josephson junction with a length $L$ and a width $W$.
The nanowire supports many channels, Rashba spin-orbit coupling, a potential barrier, and magnetic fields $B_x$ and $B_y$.
Bottom: Dispersion relation of the lowest two transverse subbands in the nanowire in the absence of magnetic fields, see Eq.~\eqref{Dispersion}. The case of $\eta=0$ is drawn with dashed lines and the $\eta \ne 0$ case with solid lines, where the subband coupling $\eta$ is given by Eq.~\eqref{Eta}. The second subbands with energy $E^{\perp}_{01}$ [Eq.~\eqref{T_energy}], which do not couple to the lowest ones through the spin-orbit coupling, are not shown for clarity.
Two co-propagating electrons (blue and red filled circles) with different Fermi velocities due to the finite $\eta$ are reflected as holes (blue and red empty circles), respectively, through Andreev reflection processes at $x=L$ (dotted lines). Multiple reflections at $x= 0$ and $L$ lead to the formation of Andreev levels.
}
\label{Fig1:Setup}
\end{figure}
The model system we consider is schematically depicted in Fig.~\ref{Fig1:Setup}.
Electrons in a quasi-one-dimensional nanowire are confined in the $y$ and $z$ directions by a harmonic potential and are free to move in the $x$ direction.
Two superconducting electrodes separated by a distance $L$ are proximity coupled to this nanowire forming a Josephson junction.
The Bogoliubov-de Gennes (BdG) Hamiltonian for this cylindrical Josephson junction is
\begin{align}
H_{\text{BdG}} = \left(H_0-\mu\right) \tau_z + H_R \tau_z + H_Z + H_S, \label{H_BdG}
\end{align}
where $\mu$ is the chemical potential.
Here $H_0$ describes the quasi-one-dimensional nanowire given by
\begin{align}
H_0 = \frac{p^2_x+p^2_y+p^2_z}{2m} + U_{b}(x)+U_{c}(y,z), \label{H0}
\end{align}
where $m$ is the effective mass of the conduction electrons in the nanowire, $U_{b}(x) = U_0 \delta(x-x_0)$ represents the potential barrier at $x=x_0$, which allows tuning of the junction transmission, and $U_c(y,z)= m \omega^2_0 (y^2+z^2)/2$ is the harmonic confinement potential, where $\omega_0$ is the angular frequency.
We define an effective diameter of the nanowire $W=2 \sqrt{\hbar/(m \omega_0)}$.
We assume that a magnetic field is applied along the $x$ and $y$ directions,
and that an electric field is present along the $z$ direction~\cite{Oreg2010,Zuo2017}.
The Rashba spin-orbit interaction $H_R$ and the Zeeman interaction $H_Z$ are given by
\begin{align}
H_R &= -\alpha p_x\sigma_y + \alpha p_y \sigma_x, \label{SOI}\\
H_Z &= \frac{g\mu_B}{2}\left(B_x \sigma_x + B_y \sigma_y \right),
\end{align}
where $\alpha$ is the strength of the spin-orbit coupling and $B_x$ and $B_y$ are components of the applied magnetic field in the $x$ and $y$ directions, respectively. The Pauli matrices $\sigma_{x,y,z}$ and $\tau_{x,y,z}$ act in the spin and Nambu spaces, respectively. $H_S$ is the induced $s$-wave pairing potential due to the proximity effect,
\begin{align}
H_S = \Delta(x) \left(\cos\phi(x)\,\tau_x - \sin\phi(x)\,\tau_y \right), \label{Spairing}
\end{align}
where the induced gap $\Delta(x)$ and the superconducting phase $\phi(x)$ are given by $\Delta(x) e^{i \phi(x)} = \Delta_0 e^{i \phi_L}$ at $x < 0$ and $ \Delta_0 e^{i \phi_R}$ at $x > L$. In the normal region of $0 < x < L$, $\Delta(x) = 0$.
The superconducting phase difference is defined by $\phi = \phi_R - \phi_L$. Below, we assume that the potential barrier and the Zeeman field are weak so that we can treat $U_{b}(x)$ and $H_Z$ as perturbations.
To make the discussion simpler, we define an effective one-dimensional (1D) BdG Hamiltonian by integrating out the $y$ and $z$ degrees of freedom.
The sum of the kinetic and confinement terms in Eq.~\eqref{H0} associated with the $y$ and $z$ coordinates is $(p^2_y+p^2_z)/(2m)+U_c(y,z)$ which
has the eigenvalues
\begin{align}
E^{\perp}_{n_y n_z} = \hbar \omega_0 (n_y + n_z +1) = \frac{4 \hbar^2}{m W^2}(n_y + n_z +1),\label{T_energy}
\end{align}
where $n_y, n_z = 0, 1, 2,...$. The eigenstates $\phi^{\perp}_{n_y n_z s}(y,z)$ (with $s=\uparrow, \downarrow$) corresponding to the lowest two eigenvalues $\hbar \omega_0$ and $2 \hbar \omega_0$ are given by,
\begin{align}
\phi^{\perp}_{00 s}(y,z) &= \frac{2}{\sqrt{\pi}W} e^{-2 (y^2 + z^2)/W^2} \chi_s, \nonumber\\
\phi^{\perp}_{10 s}(y,z) &= \frac{4 \sqrt{2} y}{\sqrt{\pi}W^2} e^{-2 (y^2 + z^2)/W^2} \chi_s, \nonumber\\
\phi^{\perp}_{01 s}(y,z) &= \frac{4 \sqrt{2} z}{\sqrt{\pi}W^2} e^{-2 (y^2 + z^2)/W^2} \chi_s,
\end{align}
where $\chi_{\uparrow (\downarrow)} = (1/\sqrt{2})(1, i (-i))^{T}$ are eigenstates of $\sigma_y$.
We note that $\phi^{\perp}_{10 s}(y,z)$ and $\phi^{\perp}_{01 s}(y,z)$ are degenerate transverse modes with energy $2 \hbar \omega_0$.
However, the modes $\phi^{\perp}_{01 s}(y,z)$ do not couple to $\phi^{\perp}_{00 s'}(y,z)$ through the spin-orbit interaction,
\begin{align}
\int\int dy dz~ \phi^{\perp \dagger}_{00 s'}(y,z)~ H_R~ \phi^{\perp}_{01 s}(y,z) = 0,
\end{align}
meaning that $\phi^{\perp}_{01 s}(y,z)$ do not contribute to the modification of the lowest subbands.
By projecting $H_{\text{BdG}}$ onto the subspace spanned by the lowest two relevant transverse subbands,
$\{\phi^{\perp}_{00\uparrow},\phi^{\perp}_{00\downarrow},\phi^{\perp}_{10\uparrow},\phi^{\perp}_{10\downarrow} \}$, followed by integrating out the $y$ and $z$ coordinates, we obtain
\begin{gather}
H^{1D}_{\text{BdG}}~ \Psi(x) = \varepsilon~ \Psi(x), \nonumber\\
H^{1D}_{\text{BdG}} = \left(H'_0-\mu\right) \tau_z + H'_R \tau_z + H'_Z + H_S, \label{1DBdG}
\end{gather}
where $\Psi(x) = (\psi^{e}(x),\psi^{h}(x))^{T}$ with
\begin{align}
\psi^{e}(x) &= (\psi^{e}_{0\uparrow}, \psi^{e}_{0\downarrow}, \psi^{e}_{1\uparrow}, \psi^{e}_{1\downarrow})^{T}, \nonumber\\
\psi^{h}(x) &= (\psi^{h}_{0\downarrow}, -\psi^{h}_{0\uparrow}, \psi^{h}_{1\downarrow}, -\psi^{h}_{1\uparrow})^{T},
\end{align}
where the subscripts $js$ on the $\psi^{e/h}_{js}$ denote the transverse quantum numbers $j=0,1$ and the spins $s=\uparrow, \downarrow$.
$H'_0$, $H'_R$, and $H'_Z$ are the representations of $H_0$, $H_R$, and $H_Z$, respectively, in the subspace,
\begin{align}
H'_0 &= \frac{p^2_x}{2m} + E^{\perp}_{+} + E^{\perp}_{-}\Sigma_z + U_{b}(x), \label{H'0}\\
H'_R &= -\alpha p_x \tilde{\sigma}_z + \eta \tilde{\sigma}_y \Sigma_y, \label{H'R} \\
H'_Z &= \frac{g\mu_B}{2}\left(B_x \tilde{\sigma}_y + B_y \tilde{\sigma}_z \right),
\end{align}
where $E^{\perp}_{\pm} = (E^{\perp}_{00} \pm E^{\perp}_{10})/2$, the Pauli spin matrices $\tilde{\sigma}_{x,y,z}$ act in the spin space with basis $\{\chi_{\uparrow},\chi_{\downarrow}\}$, and $\Sigma_{x,y,z}$ are Pauli matrices acting on the space for the transverse degree of freedom.
The coefficient $\eta$ in Eq.~\eqref{H'R} describes the coupling between the different transverse subbands with opposite spins, and is given by
\begin{align}
\eta &= \int dy dz~ \phi^{\perp \dagger}_{00 \downarrow}(y,z)
\left(-i\hbar \alpha \frac{\partial}{\partial y} \sigma_x \right) \phi^{\perp}_{10 \uparrow}(y,z) \nonumber\\
&=\frac{\sqrt{2} \alpha \hbar}{W}. \label{Eta}
\end{align}
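The value $\eta=\sqrt{2}\alpha\hbar/W$ can be checked numerically: the spin matrix element $\chi^{\dagger}_{\downarrow}\sigma_x\chi_{\uparrow}=i$ cancels the $-i$, leaving the spatial integral $\int\!\!\int dy\,dz~\phi^{\perp}_{00}\,\partial_y\phi^{\perp}_{10}=\sqrt{2}/W$. A sketch in arbitrary units of our choosing ($\hbar=\alpha=1$, $W=2$):

```python
import numpy as np

# Numerical check of Eq. (Eta): eta = alpha*hbar * int int dy dz phi00 * d(phi10)/dy.
hbar, alpha, W = 1.0, 1.0, 2.0

def phi00(y, z):
    return (2.0 / (np.sqrt(np.pi) * W)) * np.exp(-2.0 * (y**2 + z**2) / W**2)

def dphi10_dy(y, z):
    # derivative of phi10 = (4*sqrt(2)*y / (sqrt(pi)*W^2)) exp(-2(y^2+z^2)/W^2)
    return (4.0 * np.sqrt(2.0) / (np.sqrt(np.pi) * W**2)) \
        * (1.0 - 4.0 * y**2 / W**2) * np.exp(-2.0 * (y**2 + z**2) / W**2)

# Riemann sum over a grid wide enough for the Gaussian tails to be negligible.
y = np.linspace(-5 * W, 5 * W, 801)
Y, Z = np.meshgrid(y, y)
dA = (y[1] - y[0])**2
eta = alpha * hbar * np.sum(phi00(Y, Z) * dphi10_dy(Y, Z)) * dA
```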
In the effective 1D model described by $H^{1D}_{\text{BdG}}$, the details of the system geometry such as
dimensionality, subband states, and their energies
enter through the parameters $E^{\perp}_{\pm}$ and $\eta$.
If we construct a model Hamiltonian for a 1D nanowire starting from a 2DEG with a hard-wall confinement potential of width $W_{2d}$, the parameters are given by
$E^{\perp}_{+}=5 \pi^2 \hbar^2/(4mW^2_{2d})$, $E^{\perp}_{-}=-3 \pi^2 \hbar^2/(4 mW^2_{2d})$, and
$\eta = 8 \alpha \hbar/(3 W_{2d})$. Since, from the experimental point of view, quasi-one-dimensional wires can be made from either cylindrical nanowires or 2DEG heterostructures~\cite{Suominen2017}, we provide the results for the Andreev levels and current matrix elements of Josephson junctions in a model for a 2DEG-based nanowire in App.~\ref{app:2Dnanowire}. We emphasize that although the specific forms of $E^{\perp}_{\pm}$ and $\eta$ depend on the dimensionality and confinement potential, the form of $H^{1D}_{\text{BdG}}$ in Eq.~\eqref{1DBdG} with $E^{\perp}_{\pm}$ and $\eta$ as parameters, and the resulting analytical expressions, for instance Eq.~\eqref{PBdG} below, are independent of such geometrical differences.
We first examine $H'_0 + H'_R$ without the potential barrier $U_{b}(x)$. In particular, we focus on the energy regime $E \lesssim E^{\perp}_{10}$, where spinful electrons move in a single channel (see Fig.~\ref{Fig1:Setup}). The dispersion relation in this energy regime is given by~\cite{Yokoyama2014,Reynoso2012,Murani2016}
\begin{align}
E(k_x) = \frac{\hbar^2 k^2_x}{2m} + E^{\perp}_{+}
- \sqrt{\left( E^{\perp}_{-} \mp \alpha \hbar k_x\right)^2 + \eta^2}, \label{Dispersion}
\end{align}
and the Fermi velocities $v_{j=1,2}$ of the co-propagating electrons in the different spin subbands are
\begin{align}
v_1 &= \frac{\hbar k^{e}_{x1}}{m} + \frac{\alpha \left(E^{\perp}_{-} - \alpha \hbar k^{e}_{x1} \right)}{\sqrt{\left( E^{\perp}_{-} - \alpha \hbar k^{e}_{x1}\right)^2 + \eta^2}},\nonumber\\
v_2 &= \frac{\hbar k^{e}_{x2}}{m}- \frac{\alpha \left(E^{\perp}_{-} + \alpha \hbar k^{e}_{x2} \right)}{\sqrt{\left( E^{\perp}_{-} + \alpha \hbar k^{e}_{x2}\right)^2 + \eta^2}},\label{Velocity}
\end{align}
where $k^{e}_{xj}$ are wave vectors of the electrons.
If $\eta=0$, which means there is no mixing between the transverse subbands,
we find that $v_1 = v_2$ because Eq.~\eqref{Velocity} reduces to $v_1=\hbar k^{e}_{x1}/m - \alpha$ and $v_2=\hbar k^{e}_{x2}/m + \alpha$ and Eq.~\eqref{Dispersion} gives $k^{e}_{x1} - k^{e}_{x2} = 2 m \alpha/\hbar$. If $\eta$ is finite, $v_1 \neq v_2$.
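This velocity splitting can be verified numerically from Eqs.~\eqref{Dispersion} and \eqref{Velocity} (a sketch in dimensionless units $\hbar=m=1$; all parameter values below are illustrative choices of ours):

```python
import numpy as np

# Check that a finite subband coupling eta splits the Fermi velocities of the
# two co-propagating electrons, while eta = 0 gives v1 = v2.
alpha, Ep, Em, mu = 0.5, 3.0, -1.0, 2.5

def E_branch(k, sign, eta):
    # Eq. (Dispersion): sign = +1 (-1) selects the upper (lower) sign.
    return 0.5 * k**2 + Ep - np.sqrt((Em - sign * alpha * k)**2 + eta**2)

def kF(sign, eta):
    lo, hi = 0.0, 10.0            # bisection for E_branch(k) = mu
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if E_branch(mid, sign, eta) < mu:
            lo = mid
        else:
            hi = mid
    return lo

def velocity(sign, eta):
    # Eq. (Velocity) for the corresponding branch.
    k = kF(sign, eta)
    root = np.sqrt((Em - sign * alpha * k)**2 + eta**2)
    return k + sign * alpha * (Em - sign * alpha * k) / root

v1, v2 = velocity(+1, 1.0), velocity(-1, 1.0)   # eta finite: v1 != v2
u1, u2 = velocity(+1, 0.0), velocity(-1, 0.0)   # eta = 0:    v1 == v2
```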
The eigenstates $\psi^{e}_{R,j=1,2} (\psi^{e}_{L,j=1,2})$ of electrons moving to the right (left) with the velocity $v_j$ are given by
\begin{align}
\psi^{e}_{R,1} &= - \mathcal{T} \psi^{e}_{L,1}
= \frac{e^{i k^{e}_{x1} x}}{\sqrt{|v_1|}} \left( \sin\frac{\theta_1}{2},0,0, -\cos\frac{\theta_1}{2}\right)^{T}, \nonumber\\
\psi^{e}_{R,2} &= \mathcal{T} \psi^{e}_{L,2}
= \frac{e^{i k^{e}_{x2} x}}{\sqrt{|v_2|}} \left( 0, \sin\frac{\theta_2}{2},\cos\frac{\theta_2}{2}, 0 \right)^{T}, \label{Eigenstates}
\end{align}
where $\mathcal{T} = -i \tilde{\sigma}_y \Sigma_0 \mathcal{C}$ is the time-reversal operator, with $\mathcal{C}$ indicating complex conjugation, and
\begin{align}
\theta_1 &= \text{arccos}\left[\frac{1}{\alpha}\left(v_1 -\frac{\hbar k^{e}_{x1}}{m} \right) \right], \nonumber\\
\theta_2 &= \text{arccos}\left[\frac{1}{\alpha}\left(-v_2 +\frac{\hbar k^{e}_{x2}}{m} \right) \right].
\end{align}
For $\eta = 0$, $\theta_1 = \theta_2 = \pi$ and thus the spinors of the eigenstates have the forms $\psi^{e}_{R,1}, \psi^{e}_{L,2} \propto (1,0,0,0)^{T}$ and $\psi^{e}_{R,2}, \psi^{e}_{L,1} \propto (0,1,0,0)^{T}$, independent of the spin-orbit coupling and the momenta. The angles deviate from $\pi$ when $\eta$ is finite. In particular, in the limit $|E|, |E^{\perp}_{-}| \gg m \alpha^2, \eta$, they are expressed as
\begin{align}
\text{cos}~\theta_{1(2)} \approx -1 + \frac{\eta^2}{2(E^{\perp}_{-} \mp \alpha \sqrt{2 m E})^2},\label{Approx_theta}
\end{align}
where the $-$ sign is for $\theta_{1}$ and $+$ for $\theta_{2}$.
We will see below that the different Fermi velocities and different spin directions of two co-propagating electrons
are a crucial ingredient for manipulating the Andreev levels.
In the following, we take into account the proximity-induced superconducting term given by Eq.~\eqref{Spairing}.
The corresponding BdG Hamiltonian is $\left(H'_{0} - \mu \right) \tau_z + H'_R\tau_z + H_S$.
For further evaluation, we linearize the dispersion relation in Eq.~\eqref{Dispersion} in the normal region around the chemical potential $\mu$ far from the bottom of the subbands,
\begin{align}
E^{e(h)}_{R,j} &= \mu \pm \hbar v_j \left( k^{e(h)}_{xj} -k_{F_j}\right), \nonumber\\
E^{e(h)}_{L,j} &= \mu \mp \hbar v_j \left( k^{e(h)}_{xj} + k_{F_j}\right), \label{LinearizedDispersion}
\end{align}
where the upper sign is for an electron and the lower for a hole.
In the normal region without a potential barrier, coherent superpositions of electrons and holes produced by Andreev reflections at the interfaces between the normal and superconducting regions give rise to the ABSs.
Perfect Andreev reflection at these interfaces connects time-reversed states.
For instance, an electron with $E^{e}_{R,1}$ is converted to a hole with $E^{h}_{R,1}$, as illustrated in Fig.~\ref{Fig1:Setup}.
We also assume that the spinor parts of the eigenstates in Eq.~\eqref{Eigenstates} do not change significantly within the subgap energy regime $|\varepsilon| < \Delta_0$ so that $\theta_{j=1,2}$ are fixed as $\theta_{j}(k^{e}_{xj}) = \theta_{j}(k_{F_j})$. This is a good approximation provided that the subband separation is larger than the induced superconducting gap, $2 |E^{\perp}_{-}| \gg \Delta_0$.
By matching the wave functions at the interfaces, we obtain four normalized ABSs
$\Psi_{j\lambda}(x)$ for $|\varepsilon| < \Delta_0$, where $j=1,2$ and $\lambda = \pm$.
The states $\Psi_{1-}(x)$ and $\Psi_{2+}(x)$ have the component structure
\begin{align}
(\psi^{e}_{0\uparrow},0, 0, \psi^{e}_{1\downarrow},\psi^{h}_{0\downarrow}, 0, 0, -\psi^{h}_{1\uparrow})^{T},\label{Structure_1}
\end{align}
while $\Psi_{1+}(x)$ and $\Psi_{2-}(x)$ have
\begin{align}
(0,\psi^{e}_{0\downarrow},\psi^{e}_{1\uparrow},0,0,-\psi^{h}_{0\uparrow},\psi^{h}_{1\downarrow},0)^{T},\label{Structure_2}
\end{align}
which are orthogonal to the states $\Psi_{1-}(x)$ and $\Psi_{2+}(x)$.
Further details on the ABSs are given in App.~\ref{app:ABS}.
The matching condition yields the following transcendental equation for the
Andreev level,
\begin{align}
\beta^{2} e^{i (k^{e}_{xj}-k^{h}_{xj})L + i \lambda \phi} = 1, \label{TranscendentalEqn}
\end{align}
where $\beta = \varepsilon/\Delta_0 - i \sqrt{1-(\varepsilon/\Delta_0)^2}$.
In the limit of either $\Delta_0 L / (\hbar v_j) \ll 1$ or $\varepsilon \ll \Delta_0$ and by using $e^{i (k^{e}_{xj}-k^{h}_{xj})L} = e^{i 2 \varepsilon L /(\hbar v_j)}$ from the linearized dispersion relation, the energy-phase relations, $\varepsilon_j(\phi)$ for $\Psi_{j+}(x)$ and $-\varepsilon_j(\phi)$ for $\Psi_{j-}(x)$, can be evaluated as
\begin{align}
\varepsilon_j(\phi) = \Delta_0~ \frac{\text{cos}(\phi/2)}{1+L_j~ \text{sin}(\phi/2)}, \label{EnergyPhaseRelation}
\end{align}
where $L_j = \Delta_0 L/(\hbar v_j)$. The difference between $\varepsilon_1(\phi)$ and $\varepsilon_2(\phi)$ is given by
\begin{align}
\varepsilon_{1}(\phi)-\varepsilon_{2}(\phi)= \frac{(\Delta_0/2) (L_2-L_1)~\text{sin}~\phi}{\left(1+L_1~ \text{sin}(\phi/2) \right)\left(1+L_2~ \text{sin}(\phi/2) \right)}. \label{AE_difference}
\end{align}
This clearly shows a spin-splitting of ABSs and also manifests that the splitting comes from the finite value of $L_2-L_1 \propto (v_1 - v_2) L$. The degeneracies of the Andreev levels at $\phi = 0$ and $\pi$ are protected by the time reversal symmetry~\cite{Padurariu2010,Beri2008}.
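As a check of Eqs.~\eqref{EnergyPhaseRelation} and \eqref{AE_difference}, in the short-junction limit $L \to 0$ both levels reduce to the well-known result $\varepsilon_j(\phi) \to \Delta_0~ \text{cos}(\phi/2)$ of a perfectly transmitting junction, and the splitting vanishes linearly in $L$,
\begin{align}
\varepsilon_{1}(\phi)-\varepsilon_{2}(\phi) \approx \frac{\Delta^2_0 L}{2 \hbar}\left(\frac{1}{v_2} - \frac{1}{v_1}\right) \text{sin}~\phi,
\end{align}
consistent with the absence of spin splitting in short junctions.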
We include the effects of the potential barrier $U_{b}(x)$, which tunes the junction transmission, and of the Zeeman field $H'_Z$ by using perturbation theory. We map $U_{b}(x)$ and $H'_Z$ onto the subspace spanned by the basis $\{\Psi_{1+},\Psi_{1-},\Psi_{2+},\Psi_{2-}\}$, leading to a mapped BdG Hamiltonian $H^{P}_{\text{BdG}}$ as
\begin{align}
H^{P}_{\text{BdG}} =
\begin{pmatrix}
\varepsilon_1 + \mathbb{B}_{y1} & 0 & \mathbb{B}_x & \mathbb{U} \\
0 & -\varepsilon_1 - \mathbb{B}_{y1} & \mathbb{U}^{*} & \mathbb{B}^{*}_x \\
\mathbb{B}^{*}_x & \mathbb{U} & \varepsilon_2 - \mathbb{B}_{y2} & 0 \\
\mathbb{U}^{*} & \mathbb{B}_x & 0 & -\varepsilon_2 + \mathbb{B}_{y2}
\end{pmatrix}, \label{PBdG}
\end{align}
where $\left( H^{P}_{\text{BdG}}\right)_{j k}$ is computed by
\begin{align}
\left( H^{P}_{\text{BdG}}\right)_{j k} = \int^{\infty}_{-\infty} dx
\Psi^{\dagger}_{j}(x) H^{1D}_{\text{BdG}} \Psi_{k}(x),
\end{align}
where $j,k\in \{1+,1-,2+,2-\}$.
The $\mathbb{U}$ term is $U_{b}(x)$ expanded in this basis and is given by
\begin{align}
\mathbb{U} = -i 2 U_0 e^{i(k_{F_1}+k_{F_2})x_0}
\sqrt{\frac{\kappa_1 \kappa_2}{N_1 N_2}}~ \text{cos}\left(\frac{\theta_1-\theta_2}{2} \right), \label{PU}
\end{align}
and the Zeeman terms expanded in the basis have the forms
\begin{align}
&\mathbb{B}_{y1(y2)} = \frac{g \mu_B B_y}{2}~ \text{cos}~ \theta_{1(2)}, \label{PBy}\\
&\mathbb{B}_x = i 2 \left(\frac{g \mu_B B_x}{2}\right) \sqrt{\frac{\kappa_1 \kappa_2}{N_1 N_2}} \frac{\kappa_1 + \kappa_2}{(k_{F_1} - k_{F_2})^2} \nonumber\\
&\hspace{50pt}\times\left( 1+ e^{i(k_{F_1} - k_{F_2}) L}\right) \text{cos}\left(\frac{\theta_1-\theta_2}{2} \right), \label{PBx}
\end{align}
where $\kappa_{1(2)} = (1/(\hbar v_{1(2)})) \sqrt{\Delta^2_0-\varepsilon^2_{1(2)}(\phi)}$ and $N_{1(2)} = 2 (1+ \kappa_{1(2)} L)$.
In deriving Eq.~\eqref{PBx}, we assumed that $|k_{F_1}-k_{F_2}| \gg |\kappa_1 + \kappa_2|$.
The Hamiltonian $H^{P}_{\text{BdG}}$ is a good approximation provided that $|\mathbb{U}|, |\mathbb{B}_x|, |\mathbb{B}_{y1}|, |\mathbb{B}_{y2}| \ll \Delta_0$ and that $\phi \sim \pi$ where Andreev levels are close to zero energy.
The $H^{P}_{\text{BdG}}$ reflects the properties of the ABSs $\Psi_{j\lambda}$.
For the diagonal elements, the $+/-$ sign in front of the terms $\mathbb{B}_{y1}$ (or $\mathbb{B}_{y2}$) indicates the spin polarization direction of the corresponding basis state. As $U_{b}(x)$ describes spin-conserving scattering, the off-diagonal element $\mathbb{U}$ couples the basis states of the same spin polarization, i.e., $\Psi_{1-}$ and $\Psi_{2+}$, or $\Psi_{1+}$ and $\Psi_{2-}$, shown in Eqs.~\eqref{Structure_1} and \eqref{Structure_2}. The Zeeman component in the $x$-direction, which results in the $\mathbb{B}_x$ element, mixes the different spin states, $\Psi_{1\pm}$ and $\Psi_{2\pm}$, but does not mix $\Psi_{j+}$ and $\Psi_{j-}$ (with $j=1,2$) due to the cancellation of contributions from an electron and a hole. Note that the magnitude of $\mathbb{B}_x$ is significantly reduced from its bare value $g \mu_B B_x/2$ by the factor $\sqrt{\kappa_1 \kappa_2} (\kappa_1 + \kappa_2)/(k_{F_1}-k_{F_2})^2$, and oscillates with the length $L$. $H^{P}_{\text{BdG}}$ has two positive Andreev levels, $\varepsilon^{+}_{A1}(\phi)$ and
$\varepsilon^{+}_{A2}(\phi)$, and two negative Andreev levels, $\varepsilon^{-}_{A1}(\phi)=-\varepsilon^{+}_{A1}(\phi)$ and
$\varepsilon^{-}_{A2}(\phi)=-\varepsilon^{+}_{A2}(\phi)$:
\begin{align}
&\varepsilon^{+}_{A1}(\phi)
= \sqrt{\left(\frac{\varepsilon_1(\phi) + \varepsilon_2(\phi)
+ \mathbb{B}_{y1} -\mathbb{B}_{y2}}{2} \right)^{2} + |\mathbb{U}|^{2}} \nonumber\\
&\hspace{20pt}- \sqrt{\left(\frac{\varepsilon_1(\phi) - \varepsilon_2(\phi)
+ \mathbb{B}_{y1} + \mathbb{B}_{y2}}{2} \right)^{2} + |\mathbb{B}_x|^{2}}, \nonumber\\
&\varepsilon^{+}_{A2}(\phi)
= \sqrt{\left(\frac{\varepsilon_1(\phi) + \varepsilon_2(\phi)
+ \mathbb{B}_{y1} -\mathbb{B}_{y2}}{2} \right)^{2} + |\mathbb{U}|^{2}} \nonumber\\
&\hspace{20pt}+ \sqrt{\left(\frac{\varepsilon_1(\phi) - \varepsilon_2(\phi)
+ \mathbb{B}_{y1} + \mathbb{B}_{y2}}{2} \right)^{2} + |\mathbb{B}_x|^{2}}. \label{Andreevlevels}
\end{align}
These Andreev energy levels are plotted in Fig.~\ref{Fig2:AE}(a) in the absence of Zeeman field and for realistic parameters.
The corresponding normalized ABSs are given by
\begin{align}
\Psi^{+}_{A1}(\phi) &= -\Xi \Psi^{-}_{A1}(\phi) = \frac{1}{\sqrt{N(\phi)}}
\begin{pmatrix}
\tilde{f}(\phi) g(\phi) \\
- f(\phi) \tilde{g}^{*}(\phi) \\
- g(\phi) \tilde{g}^{*}(\phi) \\
f(\phi) \tilde{f}(\phi)
\end{pmatrix}, \nonumber\\
\Psi^{+}_{A2}(\phi) &= \Xi \Psi^{-}_{A2}(\phi) = \frac{1}{\sqrt{N(\phi)}}
\begin{pmatrix}
g(\phi) \tilde{g}(\phi) \\
f(\phi) \tilde{f}(\phi) \\
\tilde{f}(\phi) g(\phi) \\
f(\phi) \tilde{g}(\phi)
\end{pmatrix}, \label{ABS}
\end{align}
where $\Xi$ is the particle hole symmetry operator,
\begin{align}
\Xi = \begin{pmatrix}
0 & 1 & 0 & 0 \\
1 & 0 & 0 & 0 \\
0 & 0 & 0 & -1 \\
0 & 0 & -1 & 0
\end{pmatrix} \mathcal{C},
\end{align}
satisfying $\Xi H^{P}_{\text{BdG}}(\phi) \Xi^{-1} = - H^{P}_{\text{BdG}}(\phi)$.
The components of the ABSs are
\begin{align}
f(\phi) &= \varepsilon^{+}_{A1}(\phi) +\varepsilon^{+}_{A2}(\phi) - \varepsilon_1(\phi) - \varepsilon_2(\phi) - \mathbb{B}_{y1} +\mathbb{B}_{y2}, \nonumber\\
\tilde{f}(\phi) &= -\varepsilon^{+}_{A1}(\phi) +\varepsilon^{+}_{A2}(\phi) - \varepsilon_1(\phi) + \varepsilon_2(\phi) - \mathbb{B}_{y1} - \mathbb{B}_{y2}, \nonumber\\
g(\phi) &= 2 \mathbb{U}, \nonumber\\
\tilde{g}(\phi) &= 2 \mathbb{B}_x, \label{ABS_components}
\end{align}
and $N(\phi) = 4 \left[(\varepsilon^{+}_{A2}(\phi))^{2} - (\varepsilon^{+}_{A1}(\phi))^{2}\right] f(\phi)\tilde{f}(\phi)$ is the normalization factor.
The energy difference between $\varepsilon^{+}_{A1}(\phi)$ and $\varepsilon^{+}_{A2}(\phi)$,
which corresponds to the splitting of two odd states defined in Eq.~\eqref{Odd_states} below, is given by
\begin{align}
&|\varepsilon^{+}_{A1}(\phi)-\varepsilon^{+}_{A2}(\phi)| \nonumber\\
& \hspace{10pt}= 2 \sqrt{\left(\frac{\varepsilon_1(\phi) - \varepsilon_2(\phi)
+ \mathbb{B}_{y1} + \mathbb{B}_{y2}}{2} \right)^{2} + |\mathbb{B}_x|^{2}}. \label{Odd_energy}
\end{align}
We note that it is independent of $\mathbb{U}$ and hence of the transmission probability in the normal region.
This splitting is plotted in Figs.~\ref{Fig2:Result1}(a) and \ref{Fig3:Result2}(a) for different values of $\mu, B_x$, and $B_y$.
On the other hand, their sum $|\varepsilon^{+}_{A1}(\phi)+\varepsilon^{+}_{A2}(\phi)|$, which is the energy difference between ground and excited states (see Eq.~\eqref{Even_states}),
\begin{align}
&|\varepsilon^{+}_{A1}(\phi)+\varepsilon^{+}_{A2}(\phi)| \nonumber\\
& \hspace{10pt}= 2 \sqrt{\left(\frac{\varepsilon_1(\phi) + \varepsilon_2(\phi)
+ \mathbb{B}_{y1} -\mathbb{B}_{y2}}{2} \right)^{2} + |\mathbb{U}|^{2}},\label{Even_energy}
\end{align}
depends on $\mathbb{U}$, but is independent of $\mathbb{B}_x$, as shown in Fig.~\ref{Fig2:Result1}(c).
Moreover the dependence on $B_y$ is very weak, as shown in Fig.~\ref{Fig3:Result2}(c), in comparison with the dependence of the odd states
plotted in Fig.~\ref{Fig3:Result2}(a).
This can be understood by comparing the terms $\mathbb{B}_{y1} + \mathbb{B}_{y2}$ in Eq.~\eqref{Odd_energy} and $\mathbb{B}_{y1} - \mathbb{B}_{y2}$ in Eq.~\eqref{Even_energy} in the limit $|\mu|, |E^{\perp}_{-}| \gg m \alpha^2, \eta$,
\begin{align}
&\mathbb{B}_{y1} + \mathbb{B}_{y2} \approx -g \mu_B B_y, \nonumber\\
&\mathbb{B}_{y1} - \mathbb{B}_{y2} \approx g \mu_B B_y \frac{\alpha \eta^2 E^{\perp}_{-} \sqrt{2 m \mu}}{\left[ (E^{\perp}_{-})^2 -2 \alpha^2 m \mu \right]^2}, \label{Approx_By}
\end{align}
where we used Eq.~\eqref{Approx_theta}. This implies that $|\mathbb{B}_{y1} + \mathbb{B}_{y2}| \gg |\mathbb{B}_{y1} - \mathbb{B}_{y2}|$, which leads
to the strong (weak) dependence of the odd (even) states on $B_y$.
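The approximations in Eq.~\eqref{Approx_By} follow directly from $\mathbb{B}_{y1(y2)} = (g \mu_B B_y/2)~\text{cos}~\theta_{1(2)}$ and Eq.~\eqref{Approx_theta} evaluated at $E = \mu$: writing $A = E^{\perp}_{-}$ and $a = \alpha \sqrt{2 m \mu}$,
\begin{align}
\mathbb{B}_{y1} - \mathbb{B}_{y2} &\approx \frac{g \mu_B B_y}{2}\, \frac{\eta^2}{2} \left[ \frac{1}{(A-a)^2} - \frac{1}{(A+a)^2}\right] \nonumber\\
&= \frac{g \mu_B B_y}{2}\, \frac{\eta^2}{2}\, \frac{4 A a}{\left(A^2 - a^2\right)^2},
\end{align}
which reproduces the second line of Eq.~\eqref{Approx_By}, while in the sum the $\eta^2$ corrections of the two subbands are subleading, so $\mathbb{B}_{y1} + \mathbb{B}_{y2} \approx -g \mu_B B_y$.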
However, it is found that changing $\mu$ changes both $|\varepsilon^{+}_{A1}-\varepsilon^{+}_{A2}|$ and $|\varepsilon^{+}_{A1}+\varepsilon^{+}_{A2}|$, as shown in Figs.~\ref{Fig2:Result1}(a), (c) and \ref{Fig3:Result2}(a), (c).
The different dependencies of the even and odd states on the system parameters allow us to control $|\varepsilon^{+}_{A1}(\phi)-\varepsilon^{+}_{A2}(\phi)|$ independently by changing $B_x$ or $B_y$ without changing $|\varepsilon^{+}_{A1}(\phi)+\varepsilon^{+}_{A2}(\phi)|$. This is one of our main results.
\begin{figure}[!t]
\includegraphics[width=\columnwidth]{Fig2_AE.pdf}
\caption{Subgap energies of the Josephson junction as a function of the superconducting phase difference $\phi$ without Zeeman field.
(a) Andreev levels plotted from Eq.~\eqref{Andreevlevels}. The levels colored blue and red are formed by the Andreev reflection processes
marked by blue and red dashed lines in Fig.~\ref{Fig1:Setup}, respectively. They have spinor structures orthogonal to each other, but
are coupled through the current operator if the Zeeman field $B_x$ is finite. (b) Same plot as in (a), but in the occupation-number picture. Two even states, the ground state $|g\rangle$ and the excited state $|e\rangle$, and two odd states, $|o1\rangle$ and $|o2\rangle$, are present; the spin splitting between the odd states, due to the finite Fermi-velocity difference and finite $L$, appears everywhere except at $\phi = \pi$. In (a) and (b), we used the system parameters $\hbar \alpha = 20$ meV nm, $W = 200$ nm, $L = 300$ nm, $\Delta_0 = 165~\mu$eV, $g$-factor $= 12$, $U_0 = 16.5$ meV nm, $\mu = 0.5$ meV, and $m=0.023~m_e$.
}
\label{Fig2:AE}
\end{figure}
\section{Current operator}\label{Current}
To describe the microwave response of the nanowire Josephson junction,
we calculate the current operator matrix, whose off-diagonal elements determine the transitions induced by the coupling to the external radiation, in the subspace of the low-energy ABSs given in Eq.~\eqref{ABS} and analyze their dependence on the system parameters.
In the subgap energy region, there are two even states, ground state $|g\rangle$ with an energy $(\varepsilon^{-}_{A1}+\varepsilon^{-}_{A2})/2$ and
excited state $|e\rangle$ with an energy $(\varepsilon^{+}_{A1}+\varepsilon^{+}_{A2})/2$. The states are defined by
\begin{align}
\gamma_{A1+}|g\rangle = \gamma_{A2+}|g\rangle =0,\hspace{20pt}
|e\rangle = \gamma^{\dagger}_{A1+} \gamma^{\dagger}_{A2+}|g\rangle, \label{Even_states}
\end{align}
where $\gamma_{A1\pm(A2\pm)} = \int dx (\Psi^{\pm}_{A1(A2)}(x))^{\dagger} \Phi (x)$, with the Nambu field operator $\Phi (x)$, are the Bogoliubov operators.
By adding or removing a single quasiparticle from the even states, we have
two odd states $|o1\rangle$ and $|o2\rangle$,
\begin{align}
|o1\rangle = \gamma^{\dagger}_{A1+}|g\rangle,\hspace{20pt}
|o2\rangle = \gamma^{\dagger}_{A2+}|g\rangle, \label{Odd_states}
\end{align}
and their energies are $(\varepsilon^{+}_{A1}+\varepsilon^{-}_{A2})/2$ and $(\varepsilon^{-}_{A1}+\varepsilon^{+}_{A2})/2$, respectively.
Fig.~\ref{Fig2:AE}(b) shows the plot of these energies of the even and odd states in the case of zero Zeeman field.
The particle hole symmetry of the ABSs given in Eq.~\eqref{ABS} implies the relations
\begin{align}
\gamma^{\dagger}_{A1+} = -\gamma_{A1-}, \hspace{20pt}
\gamma^{\dagger}_{A2+} = \gamma_{A2-}.
\end{align}
\begin{figure}[!t]
\centering
\includegraphics[width=\columnwidth]{Fig3_Result1.pdf}
\caption{Excitation spectra and matrix elements of the current operator (in units of $J_0 = e \Delta_0/h$)
as a function of $\phi$ at $B_y =0$ for odd (a, b) and even (c, d) transitions [see Eqs.~\eqref{Andreevlevels}, \eqref{EvenParity}, and \eqref{OddParity}]. We plot for different values of $\mu$ and $B_x$; $\mu = 0.51$ meV and $B_x = 50$ mT (black solid lines), $0.41$ meV and $50$ mT (black dashed), $0.51$ meV and $100$ mT (green solid), and $0.41$ meV and $100$ mT (green dashed).
The other system parameters are the same as in Fig.~\ref{Fig2:AE}. Contrary to
the results (a) and (b) for the odd states which depend on both $\mu$ and $B_x$, the results (c) and (d) for the even states are
independent of the value of $B_x$. The heights of the peaks at $\phi = \pi$ shown in (b) and (d) depend on $\mu$ but are independent of $B_x$.
}
\label{Fig2:Result1}
\end{figure}
\begin{figure}[!t]
\centering
\includegraphics[width=\columnwidth]{Fig4_Result2.pdf}
\caption{(a)-(d) Same plots as in Fig.~\ref{Fig2:Result1}, but for different values of $B_y$; $\mu = 0.51$ meV and $B_y = 10$ mT (black solid lines), $0.41$ meV and $10$ mT (black dashed), $0.51$ meV and $20$ mT (green solid),
and $0.41$ meV and $20$ mT (green dashed). Here $B_x = 50$ mT is used.
The excitation spectrum $|\varepsilon^{+}_{A2}+\varepsilon^{+}_{A1}|$ and the $|\langle e |\hat{J}|g \rangle|$ for the even states depend only weakly on $B_y$ compared with the corresponding quantities for the odd states.
}
\label{Fig3:Result2}
\end{figure}
The current operator for the BdG Hamiltonian $H^{1D}_{\text{BdG}}$ in Eq.~\eqref{1DBdG} is
\begin{align}
\hat{J} = \sum_{m,n} J_{m,n} \hat{\gamma}^{\dagger}_{m}\hat{\gamma}_{n},
\end{align}
where $m,n \in \{A1+,A1-,A2+,A2-\}$. $J_{m,n}$ are the matrix elements of the current operator, and are obtained from
the ABSs in Eq.~\eqref{ABS}~\cite{Olivares2014}. The diagonal matrix elements determine the supercurrent
carried by even and odd states. In the ground and excited states, these are
\begin{align}
\langle g | \hat{J} | g \rangle &= - \langle e | \hat{J} | e \rangle \nonumber\\
&= \sum_{m=A1-,A2-} J_{m,m} \langle g |\gamma^{\dagger}_{m} \gamma_{m} | g \rangle \nonumber\\
&= J_{A1-,A1-} + J_{A2-,A2-},
\end{align}
and in the odd states,
\begin{align}
\langle o1 | \hat{J} | o1 \rangle &= - \langle o2 | \hat{J} | o2 \rangle \nonumber\\
&=\langle g | \gamma_{A1+}\hat{J} \gamma^{\dagger}_{A1+} | g \rangle \nonumber\\
&= J_{A1+,A1+},
\end{align}
where the matrix elements are given by
\begin{align}
J_{A1+,A1+} = -J_{A1-,A1-} = \frac{-e}{\hbar}\frac{\partial \varepsilon^{+}_{A1}(\phi)}{\partial \phi}, \nonumber\\
J_{A2+,A2+} = -J_{A2-,A2-} = \frac{-e}{\hbar}\frac{\partial \varepsilon^{+}_{A2}(\phi)}{\partial \phi}.
\end{align}
The current matrix element between the ground and excited states $\langle e| \hat{J} |g \rangle$ is
\begin{align}
\langle e| \hat{J} |g \rangle &= \langle g| \gamma_{A2+} \gamma_{A1+}\hat{J} |g \rangle \nonumber\\
&= J_{A1+,A2-} + J_{A2+,A1-}, \label{EvenParity}
\end{align}
and the element between the odd states is
\begin{align}
\langle o2| \hat{J} |o1 \rangle &= \langle g| \gamma_{A2+} \hat{J} \gamma^{\dagger}_{A1+} |g \rangle \nonumber\\
&= J_{A2+,A1+} + J_{A1-,A2-}, \label{OddParity}
\end{align}
where
\begin{align}
J_{A1+,A2-} &= J_{A2+,A1-} = \left(J_{A2-,A1+}\right)^{*} = \left(J_{A1-,A2+}\right)^{*} \nonumber\\
&= \frac{-e}{2 \hbar} \frac{g^{*}(\phi)}{\varepsilon^{+}_{A2}(\phi) + \varepsilon^{+}_{A1}(\phi)} \frac{\partial (\varepsilon_{1}(\phi) + \varepsilon_{2}(\phi))}{\partial \phi}. \label{EvenParity_2}
\end{align}
and
\begin{align}
J_{A1+,A2+} &= J_{A2-,A1-} = \left(J_{A2+,A1+}\right)^{*} = \left(J_{A1-,A2-}\right)^{*} \nonumber\\
&= \frac{-e}{2 \hbar} \frac{\tilde{g}(\phi)}{\varepsilon^{+}_{A2}(\phi) - \varepsilon^{+}_{A1}(\phi)} \frac{\partial (\varepsilon_{1}(\phi) - \varepsilon_{2}(\phi))}{\partial \phi}. \label{OddParity_2}
\end{align}
The remaining matrix elements $J_{A1+,A1-} = (J_{A1-,A1+})^{*}$ and $J_{A2+,A2-} = (J_{A2-,A2+})^{*}$ are zero as $\gamma^{\dagger}_{Aj+} \gamma_{Aj-} = (-1)^{j}(\gamma_{Aj-})^2 = 0$, where $j=1,2$.
Below, we discuss the dependence of $\langle o2| \hat{J} |o1 \rangle$ and $\langle e| \hat{J} |g \rangle$ on tunable system parameters, like $B_{x,y}$, $v_j$, $L$, and $\mu$.
Before discussing in detail the dependence, we examine the case of $\eta = 0$ and $L \rightarrow 0$ in order to check the consistency of our perturbative results with previous theoretical~\cite{Zazunov2003,Kos2013,Olivares2014,Ivanov1999,Desposito2001} as well as experimental~\cite{Janvier2015,Zgirski2011} studies on Josephson junctions in the short-junction limit.
As there is no transverse-subband mixing in this case, we have $v_1 = v_2$ and
\begin{align}
\varepsilon_1(\phi) = \varepsilon_2(\phi) = \Delta_0 \text{cos} \frac{\phi}{2},\hspace{15pt}
\mathbb{U} = -i \frac{U_0 \Delta_0}{\hbar v_1} \left|\text{sin}\frac{\phi}{2} \right|.
\end{align}
Then from Eq.~\eqref{OddParity_2} we see that $\langle o2| \hat{J} |o1 \rangle = 0$,
regardless of $B_{x,y}$ and $\mu$. On the other hand, we find for the even states that
\begin{align}
\langle e| \hat{J} |g \rangle\big|_{\eta, L =0} = \frac{-2e}{\hbar} \frac{\mathbb{U}^{*}}{\sqrt{\varepsilon_{1}^{2}(\phi)+|\mathbb{U}|^2}}\frac{\partial \varepsilon_{1} (\phi)}{\partial \phi}.
\end{align}
If we further assume that the Zeeman field is absent, it can be expressed as
\begin{align}
\langle e| \hat{J} |g \rangle\big|_{\eta, L, B_x, B_y =0} = -i \frac{e}{\hbar} \frac{\Delta^{2}_0 \sqrt{1-T}}{\varepsilon_A(\phi)} \text{sin}^{2}\frac{\phi}{2},
\end{align}
where $T = 1-|U_0/(\hbar v_{1})|^2$ is the transmission probability in the normal region in our weak scattering limit
and $\varepsilon_A(\phi) = \Delta_0 \sqrt{1-T~ \text{sin}^2(\phi/2)}$. This result is consistent with the previous results~\cite{Janvier2015,Desposito2001} in the limit of perfect transmission. As is already known, this even-transition matrix element is finite
even in the absence of Rashba spin-orbit coupling, Zeeman field, and multichannel structure.
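This consistency can also be seen at the level of the Andreev energy: inserting $\varepsilon_1(\phi) = \Delta_0~ \text{cos}(\phi/2)$ and $|\mathbb{U}| = \Delta_0 \sqrt{1-T}~ |\text{sin}(\phi/2)|$ into the denominator gives
\begin{align}
\varepsilon_{1}^{2}(\phi)+|\mathbb{U}|^2 = \Delta^2_0 \left[ 1 - T~\text{sin}^2 \frac{\phi}{2}\right] = \varepsilon^2_A(\phi),
\end{align}
i.e., the standard Andreev level of a single-channel junction with transmission probability $T$.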
With finite $\eta$ and $L$, we analyze the matrix elements between the even and odd states by considering their dependence on $\mathbb{U}$, $\mathbb{B}_x$, and $\mathbb{B}_{y1,y2}$. From Eqs.~\eqref{EvenParity} and \eqref{OddParity}, we get
\begin{align}
\langle e| \hat{J} |g \rangle &\propto \frac{\mathbb{U}^{*}}{\varepsilon^{+}_{A2}(\phi) + \varepsilon^{+}_{A1}(\phi)}, \nonumber\\
\langle o2| \hat{J} |o1 \rangle &\propto \frac{\mathbb{B}_x}{\varepsilon^{+}_{A2}(\phi) - \varepsilon^{+}_{A1}(\phi)}.
\end{align}
Each of these matrix elements follows the same dependence on the system parameters as the corresponding energy, $|\varepsilon^{+}_{A2} + \varepsilon^{+}_{A1}|$ for the even states and $|\varepsilon^{+}_{A2} - \varepsilon^{+}_{A1}|$ for the odd states, which we discussed above. Specifically, varying $\mathbb{U}$ ($\mathbb{B}_x$) changes the element $\langle e| \hat{J} |g \rangle$ ($\langle o2| \hat{J} |o1 \rangle$) while the other element $\langle o2| \hat{J} |o1 \rangle$ ($\langle e| \hat{J} |g \rangle$) remains unchanged, as clearly shown in Figs.~\ref{Fig2:Result1}(b) and (d), in which these elements are plotted for different values of $B_x$.
Also, due to the dependence of the energies on $B_y$ described by Eq.~\eqref{Approx_By},
$|\langle o2| \hat{J} |o1 \rangle|$ changes significantly with $B_y$ (Fig.~\ref{Fig3:Result2}(b)), while
$|\langle e| \hat{J} |g \rangle|$ changes only weakly with $B_y$ (Fig.~\ref{Fig3:Result2}(d)).
We consider the matrix elements at $\phi = \pi$ for further detailed analysis. The $\langle o2| \hat{J} |o1 \rangle$ term at $\phi = \pi$ is obtained from Eqs.~\eqref{AE_difference}, \eqref{Andreevlevels}, and \eqref{ABS_components}:
\begin{align}
\langle o2| \hat{J} |o1 \rangle\big|_{\phi=\pi} &= \frac{-e\Delta_0}{2\hbar}\frac{\mathbb{B}_x}{\sqrt{\left(\mathbb{B}_{y1}+\mathbb{B}_{y2}\right)^2/4 + |\mathbb{B}_x|^2}} \nonumber\\
&\hspace{30pt}\times \frac{L_1 - L_2}{(1+L_1)(1+L_2)},
\end{align}
where $L_j = \Delta_0 L/(\hbar v_j)$. As this element is proportional to $\mathbb{B}_x (L_1-L_2)$,
finite values of $B_x$, $L$, and $|v_1-v_2|$ are required for it to be nonzero.
When we assume that $B_y=0$, its magnitude can be further simplified as
\begin{align}
&\left|\langle o2| \hat{J} |o1 \rangle\right|\big|_{\phi=\pi, B_{y} =0} =
\frac{e\Delta_0}{2\hbar}\frac{L_1 - L_2}{(1+L_1)(1+L_2)} \nonumber\\
&\hspace{20pt} = \frac{e\Delta_0}{2\hbar}\frac{L_1 - L_2}{\left[1+(L_1+L_2)/2\right]^2} + \mathcal{O}((L_1-L_2)^3), \label{Odd_transition_sim}
\end{align}
which is independent of both $\mathbb{B}_x$ and $\mathbb{U}$,
except at the singular point $\mathbb{B}_x = 0$, where $\langle o2| \hat{J} |o1 \rangle = 0$.
The independence on $\mathbb{B}_x$ is shown in Fig.~\ref{Fig2:Result1}(b) in which the peak heights of
$\langle o2| \hat{J} |o1 \rangle$ at $\phi=\pi$ remain unchanged for different values of $B_x$.
For the dependence on $L$, Eq.~\eqref{Odd_transition_sim} has its maximum value at $L=L_c$ where
\begin{align}
L_c= \frac{2 \hbar}{\Delta_0}\left(\frac{1}{v_1} + \frac{1}{v_2} \right)^{-1},
\end{align}
in the limit $|L_1 - L_2|\ll 1$.
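The value of $L_c$ follows from maximizing the leading term in Eq.~\eqref{Odd_transition_sim}: writing $(L_1+L_2)/2 = L/L_c$ and noting $L_1 - L_2 \propto L$, the magnitude is proportional to
\begin{align}
\frac{L}{\left(1+L/L_c\right)^2},
\end{align}
whose derivative with respect to $L$ vanishes at $L = L_c$.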
A word of caution is in order regarding the validity of this estimate of $L_c$, which is of the order of the coherence length $\hbar v_j/\Delta_0$.
The energy-phase relation $\varepsilon_j(\phi)$ in Eq.~\eqref{EnergyPhaseRelation}
is valid when either $\Delta_0 L / (\hbar v_j) \ll 1$ or $\varepsilon \ll \Delta_0$ is fulfilled. Therefore, $L_c$ should be qualitatively correct, since
$\varepsilon_{j}(\phi)=0 \ll \Delta_0$ at $\phi=\pi$.
The $\langle e| \hat{J} |g \rangle$ matrix element at $\phi=\pi$, which is given by
\begin{align}
\langle e| \hat{J} |g \rangle\big|_{\phi=\pi} &= \frac{e\Delta_0}{2\hbar}\frac{\mathbb{U}^{*}}{\sqrt{\left(\mathbb{B}_{y1}-\mathbb{B}_{y2}\right)^2/4 + |\mathbb{U}|^2}} \nonumber\\
&\hspace{30pt}\times \frac{2 + L_1 + L_2}{(1+L_1)(1+L_2)},
\end{align}
is independent of $\mathbb{B}_x$ but depends on $\mathbb{U}$, which is associated with the transmission probability in the normal region.
However, similarly to the case of $\langle o2| \hat{J} |o1 \rangle$, if $B_{y}=0$,
the magnitude of this element depends on neither $\mathbb{B}_x$ nor $\mathbb{U}$, as
\begin{align}
&\left|\langle e| \hat{J} |g \rangle\right|\big|_{\phi=\pi, B_{y} =0} =
\frac{e\Delta_0}{2\hbar}\frac{2 + L_1 + L_2}{(1+L_1)(1+L_2)} \nonumber\\
&\hspace{20pt} = \frac{e\Delta_0}{\hbar}\frac{1}{1+(L_1+L_2)/2} + \mathcal{O}((L_1-L_2)^2),
\end{align}
except at the singular point $\mathbb{U} =0$, where $\langle e| \hat{J} |g \rangle =0$. Note also that it decreases as $L$ increases.
In the above calculation, we have neglected the orbital effect of the magnetic field $B_x$, which would lead to a longitudinal magnetic flux $\Phi$ piercing our cylindrical nanowire. In App.~\ref{app:Orbital_effect}, we show that there is no first-order correction to
the dispersion relation in Eq.~\eqref{Dispersion}, the leading correction being of second order in $\Phi$.
Therefore, the above results for the ABSs and the matrix elements should remain valid
up to first order in $B_x$ with respect to the orbital effect.
\section{Experimental observation of odd transitions}\label{Exp}
We now briefly discuss the feasibility of observing the odd transitions in an actual experiment.
We consider an experimental setup where our nanowire Josephson junction is embedded in a superconducting ring which is inductively coupled to a microwave resonator. A similar setup for a superconducting atomic contact was used in Ref.~\cite{Janvier2015}.
In the dispersive limit (i.e., far from resonance), the visibility of the transition is determined by the cavity pull $\chi$, which is fixed by the coupling to the nanowire and can be written for the odd transitions as
\begin{align}
\chi_{odd} \propto \frac{|\langle o2| \hat{J} |o1 \rangle|^2}{\omega_R - \omega_A},
\end{align}
where $\hbar \omega_A= |\varepsilon^+_{A1}-\varepsilon^+_{A2}|$ is the Andreev transition energy and $\omega_R$ is the resonator frequency. The proportionality constant depends on the mutual inductance and the impedance of the resonator, which can be assumed to be of the same order as in Ref.~\cite{Janvier2015}. One stringent condition for the direct detection of the odd transitions is
\begin{align}
\chi_{odd} > \Delta \omega = \frac{\omega_R}{Q},
\end{align}
which means that the shift of the resonance frequency set by $\chi_{odd}$ has to be larger than the width of the resonance $\Delta \omega \sim \omega_R/Q$, with $Q$ the
resonator quality factor. We take as a reference
the typical values of $\chi \sim 3$ MHz
in the experiments of Ref.~\cite{Janvier2015} where even transitions were observed, i.e. $\chi_{even}\sim 3$ MHz. If we assume similar conditions so that the proportionality constant is the same for both even and odd transitions, we estimate
$\chi_{odd}$ as
\begin{align}
\chi_{odd} \sim \chi_{even} \left| \frac{\langle o2| \hat{J} |o1 \rangle}{\langle e| \hat{J} |g \rangle}\right|^2 \sim 0.03~\text{MHz},
\end{align}
where we assume that the Andreev transition energies are much smaller than
$\hbar \omega_R$ and that $|\langle o2| \hat{J} |o1 \rangle/\langle e| \hat{J} |g \rangle| \sim 0.1$ around $\phi = \pi$, from the results shown in Fig.~\ref{Fig2:Result1}. Therefore, if we assume
$\omega_R \sim 2-10$ GHz, the condition for the quality factor to observe the odd transitions is given by
\begin{align}
Q > \frac{\omega_R}{\chi_{odd}} \sim 0.6 \times 10^{5} - 0.3 \times 10^{6},
\end{align}
which is challenging, but still within present technological capabilities. It should also be noticed that this high-$Q$ requirement could be relaxed provided that
a larger inductive coupling between the nanowire junction and the resonator is
achieved or by working with a larger number of photons in the resonator than in Ref.~\cite{Janvier2015}.
Another approach would be to use an indirect detection technique such as the {\it shelving} method, which is well known in atomic physics~\cite{Dehmelt1975,Bergquist1986,Sauter1986}; its extension to circuit-QED-like experiments could be explored~\cite{Devoret}.
\section{Concluding remarks}\label{Conclusion}
We have analyzed the ABSs and the current matrix elements in multichannel nanowire Josephson junctions. We found analytical expressions for the Andreev energy levels and the matrix elements, including the effects of a Zeeman field and a potential barrier, by using perturbation theory, and investigated their dependence on the system parameters. We have shown that the multichannel structure of the nanowire, in combination with the Rashba spin-orbit interaction, plays a fundamental role in breaking the degeneracy between opposite-spin ABSs in the absence of a Zeeman field and gives rise to finite matrix elements for transitions between the odd states in the presence of a small Zeeman field. In particular, the energy difference and the matrix elements between the odd states are found to depend strongly on the field, while those between the even states remain almost unchanged.
Contrary to the Zeeman effect, the barrier determining the transmission probability in the normal region only affects the even transitions, without affecting the odd transitions. Regarding the dependence on the junction length $L$, there exists a length scale $L_c$ at which the odd-transition matrix elements reach their maximum, while the corresponding even-transition elements decrease monotonically with the length.
Our results may provide a way to selectively control the even and odd transitions by tuning the system parameters, and could be used
to guide experiments in the realization of an Andreev spin qubit.
{\it Note added:} During the process of writing this manuscript we became aware of
a related work by van Heck, V\"{a}yrynen, and Glazman~\cite{Heck2017}, addressing
the effect of Zeeman and spin-orbit coupling on the properties of Andreev states in semiconducting nanowire junctions.
We point out that the two works correspond to different regimes: ours is the
multichannel regime at small Zeeman fields, while
Ref.~\cite{Heck2017} considers the single-channel regime over a wide range of Zeeman fields.
\acknowledgments
We thank B. Braunecker, M. Devoret, M. Goffman, H. Pothier, L. Tosi and C. Urbina for useful discussions.
This work has been supported by the Spanish MINECO through Grant No.~FIS2014-55486-P
and through the ``Mar\'{\i}a de Maeztu'' Programme for Units of Excellence in R\&D (MDM-2014-0377).
\section{Introduction and Summary}
Recently, a new and interesting formulation of non-relativistic theories was proposed in \cite{Batlle:2016iel} and further elaborated in
\cite{Gomis:2016zur,Batlle:2017cfa,Kluson:2017ufb,Kluson:2017vwp}
\footnote{For a recent proposal of a non-relativistic string, see
also \cite{Harmark:2017rpg}.}.
These theories belong to the class of systems with reduced symmetries that have been analyzed recently from different points of view. One very important subject is non-relativistic holography, which is a very useful tool for the study of strongly correlated systems in condensed matter; for a recent review, see \cite{Hartnoll:2016apf}. Non-relativistic symmetries also play a fundamental role in the recently proposed renormalizable quantum theory of gravity known today as Ho\v{r}ava-Lifshitz gravity \cite{Horava:2009uw}; for a recent review and an extensive list of references, see \cite{Wang:2017brl}. There is also an interesting connection between Ho\v{r}ava-Lifshitz gravity
and Newton-Cartan gravity \cite{Hartong:2016yrf,Hartong:2015zia}. In fact, Newton-Cartan gravity and its relation to different limits has also been studied recently in a series of papers
\cite{Bergshoeff:2017btm,Bergshoeff:2016lwr,
Hartong:2015xda,Bergshoeff:2015uaa,Andringa:2010it,Bergshoeff:2015ija,Bergshoeff:2014uea}.
Another way to define non-relativistic theories is to perform the non-relativistic limit at the level of the action for a particle, string or p-brane. The first example of such an object was the non-relativistic string introduced in \cite{Gomis:2000bd,Danielsson:2000gi}. These actions were obtained by a non-relativistic ``stringy'' limit where the time direction and one spatial direction along the string are large. The stringy limit of the superstring in $AdS_5\times S^5$ was also formulated in
\cite{Gomis:2005pg}, where it was argued that it provides another soluble sector of the AdS/CFT correspondence; for related work, see \cite{Sakaguchi:2006pg,Sakaguchi:2007zsa}. The non-relativistic limit was further extended to higher dimensional objects in string theory, as for example p-branes
\cite{Gomis:2004pw,Kluson:2006xi,Brugues:2004an,Gomis:2005bj}.
It is important to stress that there is also a non-relativistic limit of the relativistic string where only the time direction is large. In this case the non-relativistic string does not vibrate and it represents a collection of non-relativistic massless particles.
All these limits were very carefully analyzed in \cite{Batlle:2016iel}, where a general procedure for implementing the non-relativistic limit of different relativistic actions was proposed. The main idea is to start with an action for a relativistic extended object with coordinates $X$
\begin{equation}
S=\int \mathcal{L}(X)
\end{equation}
and assume that the Lagrangian density is pseudo-invariant under a set of relativistic symmetries $\delta_R$
\begin{equation}\label{deltaRmL}
\delta_R \mathcal{L}=\partial_\mu F^\mu \ .
\end{equation}
Then, in order to find the non-relativistic limit of this action, we introduce a dimensionless parameter $\omega$ and define different non-relativistic limits by an appropriate rescaling of the coordinates and parameters in the Lagrangian density. We can then presume that the Lagrangian density and the symmetry transformations can be expanded in powers of $\omega$
\begin{eqnarray}
\delta_R&=&\delta_0+\omega^{-2}\delta_{-2}+\dots \ , \nonumber \\
\mathcal{L}&=&\omega^2\mathcal{L}_2+\mathcal{L}_0+\omega^{-2}\mathcal{L}_{-2}+\dots \ , \nonumber \\
F^\mu&=&\omega^2 F^\mu_2+F^\mu_0+\omega^{-2}F^\mu_{-2}+\dots \ ,\nonumber \\
\end{eqnarray}
where the first term in the expansion of the relativistic symmetry $\delta_R$ is the non-relativistic transformation $\delta_0$. The equation (\ref{deltaRmL}) then implies an infinite set of equations when we compare expressions of the same order in $\omega$
\begin{eqnarray}\label{genexp}
& &\delta_0 \mathcal{L}_2=\partial_\mu F^\mu_2 \ , \nonumber \\
& &\delta_0\mathcal{L}_0+\delta_{-2}\mathcal{L}_2=\partial_\mu F^\mu_0 \ , \nonumber \\
& & \delta_0 \mathcal{L}_{-2}+\delta_{-2}\mathcal{L}_0+\delta_{-4}\mathcal{L}_2=
\partial_\mu F^\mu_{-2} \ . \nonumber \\
\end{eqnarray}
The special case occurs when the Lagrangian density is invariant under the relativistic symmetry, so that $F^\mu=0$. Then from the previous equations we see that $\mathcal{L}_2$ is invariant under the non-relativistic symmetry, while
$\mathcal{L}_0$ is generally not.
It is further important to stress that $\mathcal{L}_2$ contributes to the action with the factor $\omega^2$ and hence gives the dominant contribution in the limit $\omega \rightarrow \infty$, while $\mathcal{L}_0$ remains finite and the terms proportional to $\mathcal{L}_{-2},\mathcal{L}_{-4},\dots$ vanish.
Since this general procedure is very interesting, we believe it is useful to explore it in more detail. In particular, we would like to formulate it using the canonical form of the action, where the Lagrangian density is expressed through the corresponding Hamiltonian. This turns out to be very useful since it allows us to straightforwardly identify the physical degrees of freedom in the limit $\omega\rightarrow \infty$. More precisely, we introduce the scaling of the non-relativistic directions at the level of the action and then find the corresponding Hamiltonian for finite $\omega$. The corresponding canonical action is invariant under relativistic transformations by definition, and we also determine the form of these transformations for the rescaled variables at finite $\omega$. Then we discuss the properties of the resulting Lagrangian density depending on the scaling of the tension of the original p-brane and on the number of non-relativistic dimensions. We argue that the non-relativistic Lagrangian $\mathcal{L}_0$ is invariant under the non-relativistic symmetries on condition that the Lagrangian $\mathcal{L}_2$ vanishes, in agreement with the general discussion in \cite{Batlle:2016iel}. We also argue that when $\mathcal{L}_2$ is non-zero, the variation of the Lagrangian density $\mathcal{L}_0$ under non-relativistic transformations exactly cancels the variation $\delta_{-2}\mathcal{L}_2$. On the other hand, in this case it is not quite clear how to deal with the divergent term in the Lagrangian, which however can be canceled when we allow the p-brane to couple to an appropriate $(p+1)$-form field, exactly as in
\cite{Gomis:2005bj}. However, the presence of a background $(p+1)$-form
breaks the original relativistic symmetry to the subgroup that leaves this background field invariant, and hence the symmetry group is reduced. More precisely, when we cancel the divergent term, the Lagrangian density $\mathcal{L}_2$ is zero and hence the Lagrangian density $\mathcal{L}_0$ has to be invariant under the reduced group of symmetries. Of course, there is one exception, which is the fundamental string: in flat space-time its Lagrangian density $\mathcal{L}_2$ is a total derivative and hence can be ignored \cite{Batlle:2016iel}.
We also determine the Hamiltonian constraint for the non-relativistic p-brane and we show that it is linear in momenta.
As the final part of our work we focus on the particle-like limit of the
p-brane, when only the time direction is large. Using the canonical form of
the action we easily find the corresponding Lagrangian and we show that
it is invariant under Galilean transformations.
Let us outline our results. We propose different non-relativistic limits for the p-brane when the relativistic action has the canonical form. We discuss two particular cases where the corresponding Hamiltonian constraint takes a very simple form, even if the procedure is completely general and serves as an analogue of the procedure suggested in \cite{Batlle:2016iel}. Using the canonical form of the action we can easily find the dynamical degrees of freedom
and the corresponding Hamiltonian. When we then perform the inverse transformation, we
derive a Lagrangian density that differs from the one derived from the relativistic Lagrangian density by the absence of the kinetic term for the non-relativistic coordinates, as is most clearly seen in the example of the particle-like limit of the relativistic p-brane. We show that these two Lagrangian densities agree in the case of the fundamental string, as in \cite{Batlle:2016iel}.
This paper is organized as follows. In the next section (\ref{second})
we introduce the non-relativistic limit of the canonical form of the action and discuss the symmetries of the theory.
In section (\ref{third}) we analyze the particular case when the matrix $\tilde{G}_{ij}$ is non-singular. In section (\ref{fourth}) we discuss the possibility of eliminating the divergent term by coupling the p-brane to a $(p+1)$-form, and we derive the corresponding non-relativistic Hamiltonian. In section
(\ref{fifth}) we perform the particle-like non-relativistic limit of the p-brane and the fundamental string.
\section{ Non-Relativistic Limit of p-Brane Canonical Action}\label{second}
In this section we formulate our proposal for how to define the non-relativistic p-brane using the canonical form of the action. The starting point is the action for a relativistic p-brane
\begin{eqnarray}\label{actnongauge}
S&=&-\tilde{\tau}_p\int d^{p+1}\xi
\sqrt{-\det \mathbf{A}_{\alpha\beta}}
\ , \nonumber \\
& &\mathbf{A}_{\alpha\beta}=\eta_{AB}\partial_\alpha \tilde{x}^A\partial_\beta \tilde{x}^B \ , \nonumber \\
\end{eqnarray}
where $\tilde{x}^A \ , A=0,\dots,d$ label the embedding of the p-brane in the
target space-time and where $\eta_{AB}=\mathrm{diag}(-1,\underbrace{1,\dots,1}_d)$.
It is important to stress that the action is invariant under the relativistic symmetry
\begin{equation}\label{Lororig}
\tilde{x}'^A=\tilde{\Lambda}^A_{ \ B}\tilde{x}^B+b^A \ , \quad \tilde{\Lambda}^C_{ \ A}\eta_{CD}\tilde{\Lambda}^D_{ \ B}=
\eta_{AB} \ ,
\end{equation}
where $\tilde{\Lambda}^A_{ \ B}$ and $b^A$ are constants.
The action (\ref{actnongauge})
was the starting point for the definition of the non-relativistic
limit presented in \cite{Batlle:2016iel}. As was argued there, it is possible to define $p+1$ different non-relativistic limits
according to the number $q+1$ of embedding coordinates,
$q=0,\dots,p$, that are rescaled. Explicitly, we have
\begin{eqnarray}\label{scalx}
\tilde{x}^\mu=\omega X^\mu \ , \quad \mu=0,\dots,q \ ,
\quad
\tilde{x}^M=X^M \ , \quad M=q+1,\dots,d \ ,\quad
\tilde{\tau}=\frac{\tau}{\omega^{k_q}} \ ,
\nonumber \\
\end{eqnarray}
where the number $k_q$ depends on the form of the non-relativistic limit.
Inserting (\ref{scalx}) into the definition of the matrix $\mathbf{A}_{\alpha\beta}$ we obtain
\begin{eqnarray}\label{defbAomega}
\mathbf{A}_{\alpha\beta}&=&\omega^2\tilde{G}_{\alpha\beta}+\mathbf{a}_{\alpha\beta} \ ,
\nonumber \\
\tilde{G}_{\alpha\beta}&\equiv& \partial_\alpha X^\mu \partial_\beta X_\mu \ , \quad
\mathbf{a}_{\alpha\beta}=\partial_\alpha X^M\partial_\beta X_M \ . \nonumber \\
\end{eqnarray}
Observe that we can write the matrix $\mathbf{A}_{\alpha\beta}$ as
\begin{equation}
\mathbf{A}_{\alpha\beta}=G_{AB}\partial_\alpha X^A\partial_\beta X^B=
G_{\mu\nu}\partial_\alpha X^\mu\partial_\beta X^\nu+
G_{MN}\partial_\alpha X^M\partial_\beta X^N \ ,
\end{equation}
where $G_{\mu\nu}=\omega^2\eta_{\mu\nu} \ , \quad
G_{MN}=\delta_{MN}$.
Our proposal is to define the non-relativistic limit with the help of the
canonical form of the action. To do this, we develop the Hamiltonian formalism for the p-brane at finite $\omega$ and take the limit $\omega\rightarrow \infty$ only after we
derive the canonical form of the action.
Note that $k_q$ is an integer that will be determined by the requirement that
the Lagrangian density contains terms at most of order $\omega^2$. Using (\ref{defbAomega})
we find the following conjugate momenta
\begin{eqnarray}
p_\mu&=&-\frac{\tau_p}{\omega^{k_q}}\omega^2\partial_\beta X_\mu (\mathbf{A}^{-1})^{\beta 0}
\sqrt{-\det \mathbf{A}} \ , \nonumber \\
p_M&=&-\frac{\tau_p}{\omega^{k_q}}
\partial_\alpha X_M (\mathbf{A}^{-1})^{\alpha 0} \sqrt{-\det\mathbf{A}} \ .
\end{eqnarray}
Then it is easy to see that the bare Hamiltonian vanishes
\begin{eqnarray}
H_B=\int d^p\xi (p_\mu \partial_0X^\mu
+p_M\partial_0 X^M-\mathcal{L})=0
\nonumber \\
\end{eqnarray}
while we have the following collection of primary constraints
\begin{eqnarray}
\mathcal{H}_i&=&p_\mu \partial_i X^\mu+p_M\partial_i X^M\approx 0 \ , \nonumber \\
\tilde{\mathcal{H}}_\tau&=&\frac{1}{\omega^2}p_\mu\eta^{\mu\nu}p_\nu+p_M p^M+\frac{\tau_p^2}{\omega^{2k_q}} \det \mathbf{A}_{ij}
\approx 0 \nonumber \\
\end{eqnarray}
so that the Lagrangian density has the form
\begin{eqnarray}\label{mLomega}
\mathcal{L}&=&p_\mu\partial_0 X^\mu+p_M\partial_0 X^M-
\lambda^\tau
(\frac{1}{\omega^2}p_\mu\eta^{\mu\nu}p_\nu+p_M p^M+ \frac{\tau_p^2}{\omega^{2k_q}}\det \mathbf{A}_{ij})-
\nonumber \\
&-&\lambda^i (p_\mu \partial_i X^\mu+p_M\partial_i X^M) \ .
\nonumber \\
\end{eqnarray}
Let us now discuss the Lorentz transformations (\ref{Lororig})
in more detail. It is instructive to write them in the form
\begin{equation}
\left(\begin{array}{cc}
\tilde{x}'^\mu \\
\tilde{x}'^M \\ \end{array}\right)=
\left(\begin{array}{cc}
\tilde{\Lambda}^\mu_{ \ \rho} & \tilde{\Lambda}^\mu_{ \ K} \\
\tilde{\Lambda}^M_{ \ \rho} & \tilde{\Lambda}^M_{ \ K} \\ \end{array}
\right)
\left(\begin{array}{cc}
\tilde{x}^\rho \\
\tilde{x}^K \\ \end{array}\right) \ .
\end{equation}
If we replace the original variables with the rescaled ones, we obtain
\begin{eqnarray}\label{trans1}
X'^\mu&=&\tilde{\Lambda}^\mu_{ \ \nu}X^\nu+\frac{1}{\omega}\tilde{\Lambda}^\mu_{ \ M}X^M \ ,
\nonumber \\
X'^M&=&\omega \tilde{\Lambda}^M_{ \ \nu}X^\nu+\tilde{\Lambda}^M_{ \ N}X^N \ , \nonumber \\
\end{eqnarray}
where
$\tilde{\Lambda}^A_{ \ B}$ has to obey the equation
\begin{eqnarray}\label{tLambdarule}
\tilde{\Lambda}^A_{ \ C}\tilde{\Lambda}^B_{ \ D}\eta_{AB}=\eta_{CD} \ .
\end{eqnarray}
It is natural to require that the transformation rule for $X^M$ is finite in the limit $\omega\rightarrow \infty$ and hence we perform the following rescaling
\begin{equation}
\tilde{\Lambda}^M_{ \ \nu}=\frac{1}{\omega}\Lambda^M_{ \ \nu} \ .
\end{equation}
Further, from (\ref{trans1}) we see that $\tilde{\Lambda}^\mu_{ \ \nu} \ ,
\tilde{\Lambda}^M_{ \ N}$ are not rescaled:
\begin{equation}
\tilde{\Lambda}^\mu_{ \ \nu}=\Lambda^\mu_{ \ \nu} \ , \quad
\tilde{\Lambda}^M_{ \ N}=\Lambda^M_{ \ N} \ .
\end{equation}
On the other hand, if we decompose (\ref{tLambdarule}) into the
corresponding components we obtain
\begin{eqnarray}
& & \Lambda^\rho_{ \ \mu}\eta_{\rho\sigma}\Lambda^\sigma_{ \ \nu}+
\frac{1}{\omega^2}\Lambda^M_{ \ \mu}\delta_{MN}\Lambda^N_{ \ \nu}=\eta_{\mu\nu} \ ,
\nonumber \\
& &\Lambda^\mu_{ \ \rho}\eta_{\mu\nu}\tilde{\Lambda}^\nu_{ \ M}+
\frac{1}{\omega}\Lambda^N_{ \ \rho}\delta_{NK}\Lambda^K_{ \ M}=0 \ , \nonumber \\
& &\tilde{\Lambda}^\mu_{ \ M}\eta_{\mu\nu}\Lambda^\nu_{ \ \rho}+
\frac{1}{\omega}\Lambda^N_{ \ M}\delta_{NK}\Lambda^K_{ \ \rho}=0 \ , \nonumber \\
& &\frac{1}{\omega^2}\Lambda^\mu_{ \ M}\eta_{\mu\nu}\Lambda^\nu_{ \ N}+\Lambda^K_{ \ M}
\delta_{KL}\Lambda^L_{ \ N}=\delta_{MN} \ \nonumber \\
\end{eqnarray}
and we see that we have to demand the following scaling rule for
$\tilde{\Lambda}^\mu_{ \ M}$
\begin{equation}
\tilde{\Lambda}^\mu_{ \ M}=
\frac{1}{\omega}\Lambda^\mu_{ \ M} \ .
\end{equation}
Using these results in (\ref{trans1}) we obtain the final form of the
Lorentz transformations for the rescaled variables:
\begin{equation}
X'^\mu=\Lambda^\mu_{ \ \nu}X^\nu+\frac{1}{\omega^2}\Lambda^\mu_{ \ M}X^M \ , \quad
X'^M=\Lambda^M_{ \ N}X^N+\Lambda^M_{ \ \nu}X^\nu
\end{equation}
or, in infinitesimal form, with $\Lambda^\mu_{ \ \nu}=\delta^\mu_{ \ \nu}+
\omega^\mu_{ \ \nu} \ , \Lambda^\mu_{\ M}=\lambda^\mu_{ \ M} \ ,
\Lambda^M_{ \ \nu}=\lambda^M_{ \ \nu} \ , \Lambda^M_{ \ N}=
\delta^M_{ \ N}+\omega^M_{ \ N}$,
\begin{eqnarray}
\delta X^\mu&=&X'^\mu-X^\mu=\omega^\mu_{ \ \nu}X^\nu+\frac{1}{\omega^2}
\lambda^\mu_{ \ M}X^M \ , \nonumber \\
\delta X^M&=&X'^M-X^M=\omega^M_{ \ N}X^N+\lambda^M_{ \ \nu}X^\nu
\nonumber \\
\end{eqnarray}
so that
\begin{eqnarray}
\delta_0 X^\mu=\omega^\mu_{ \ \nu}X^\nu \ , \quad
\delta_0 X^M=\omega^M_{ \ N}X^N+\lambda^M_{ \ \nu}X^\nu \ , \quad
\delta_{-2}X^\mu=\lambda^\mu_{ \ M}X^M \ . \nonumber \\
\end{eqnarray}
It is important to stress that the parameters $\omega^\mu_{ \ \nu},
\lambda^M_{ \ \mu}$ can themselves be expanded in powers of $\omega^{-2}$, so that
we obtain an infinite number of terms in the expansion of the Lorentz transformations, in agreement with the general definition
(\ref{genexp}). However, for our purposes the terms given above are sufficient.
Since we consider the Lagrangian density in the canonical form, we also
have to find the corresponding transformation rules for the conjugate
momenta. For simplicity we consider the infinitesimal form of the
transformations and demand that the combination $p_\mu
\partial_0X^\mu+p_M\partial_0X^M$ is invariant
\begin{equation}
\delta p_\mu \partial_0X^\mu+p_\mu \partial_0\delta X^\mu+\delta p_M \partial_0X^M+
p_M\partial_0\delta X^M=0
\end{equation}
which implies the following transformation rules
\begin{equation}
\delta p_\mu=-p_\nu \omega^\nu_{ \ \mu}-p_M\lambda^M_{ \ \mu} \ , \quad
\delta p_M=-p_N \omega^N_{ \ M}-\frac{1}{\omega^2}p_\mu \lambda^\mu_{ \ M} \ .
\nonumber \\
\end{equation}
It is clear that the Lagrangian density
(\ref{mLomega}) is invariant under these transformations,
since it is manifestly
invariant under Lorentz transformations and the rules
given above are the ordinary Lorentz transformations rewritten in terms of the rescaled variables. The situation is different when
we consider a specific form of the non-relativistic Lagrangian density
and study its properties in the limit $\omega \rightarrow \infty$.
It is important to stress that the Lagrangian density (\ref{mLomega})
is exact in $\omega$, and we can perform its expansion in powers of $\omega^2$ exactly as in \cite{Batlle:2016iel}, even in the case when
the matrix $\tilde{G}_{ij}$ is singular. For simplicity we restrict ourselves to two particular cases that allow us to find simple results which nevertheless capture the main properties of the procedure introduced above. We start with the case when the matrix $\tilde{G}_{ij}$ is non-singular.
\section{The First Case: $\tilde{G}_{ij}$ is Non-singular Matrix}\label{third}
As the first possibility we consider the case when $\tilde{G}_{ij}$ is a non-singular matrix.
Note that it is a $p\times p$ matrix of the form $\partial_i X^\mu \eta_{\mu\nu}
\partial_j X^\nu$, where $\partial_i X^\mu$ is a $p\times (q+1)$ matrix with $q\leq p$. In the case $q+1=p$ the matrix $\tilde{G}_{ij}$ is generically non-singular and we can write
\begin{equation}
\det (
\omega^2 \tilde{G}_{ij}+\mathbf{a}_{ij})=\omega^{2p}\det \tilde{G} \det (\delta_i^j
+\frac{1}{\omega^{2}}\tilde{G}^{ik}\mathbf{a}_{kj}) \ .
\end{equation}
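For clarity, let us note that this follows from the standard expansion $\det(1+M)=1+\mathrm{tr}\, M+O(M^2)$, which gives
\begin{equation}
\det(\omega^2 \tilde{G}_{ij}+\mathbf{a}_{ij})=\omega^{2p}\det \tilde{G}
\left(1+\frac{1}{\omega^2}\tilde{G}^{ij}\mathbf{a}_{ji}+O(\omega^{-4})\right) \ ,
\end{equation}
so that, together with the prefactor $\tau_p^2/\omega^{2k_q}$ and the choice $k_q=p$, we obtain precisely the finite combination $\tau_p^2\det \tilde{G}_{ij}+\omega^{-2}\tau_p^2\det \tilde{G}_{ij}\tilde{G}^{ij}\mathbf{a}_{ji}$ that appears in the Lagrangian density below.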
If we choose $k_q=p$ we find that the Lagrangian density has the form
\begin{eqnarray}
\mathcal{L}&=&p_\mu\partial_0 X^\mu+p_M\partial_0 X^M-\nonumber \\
&-&\lambda^\tau
(\frac{1}{\omega^2}p_\mu\eta^{\mu\nu}p_\nu+p_M p^M +\tau_p^2 \det \tilde{G}_{ij}+\frac{1}{\omega^2}\tau_p^2
\det \tilde{G}_{ij}\tilde{G}^{ij}\mathbf{a}_{ji})
\nonumber \\
&-&\lambda^i (p_\mu \partial_i X^\mu+p_M\partial_i X^M) \nonumber \\
\end{eqnarray}
so that we can easily take the limit $\omega\rightarrow \infty$ and we obtain
\begin{eqnarray}
\mathcal{L}&=&p_\mu\partial_0 X^\mu+p_M\partial_0 X^M-\nonumber \\
&-&\lambda^\tau
(p_M p^M +\tau_p^2\det \tilde{G}_{ij})-\lambda^i (p_\mu \partial_i X^\mu+p_M\partial_i X^M) \ . \nonumber \\
\end{eqnarray}
It is easy to see that this Lagrangian density is invariant
under the transformations
\begin{eqnarray}
\delta X^\mu=\omega^\mu_{ \ \nu}X^\nu \ ,
\delta X^M=\omega^M_{ \ N}X^N+\lambda^M_{ \ \nu}X^\nu \ , \nonumber \\
\delta p_\mu=-p_\nu \omega^\nu_{ \ \mu}-p_M\lambda^M_{ \ \mu} \ , \quad
\delta p_M=-p_N \omega^N_{ \ M}
\nonumber \\
\end{eqnarray}
using the fact that in the limit $\omega\rightarrow \infty$ we have the conditions
\begin{eqnarray}
\omega_{\rho \sigma}+\omega_{\sigma\rho}=0 \ , \quad
\omega_{KL}+\omega_{LK}=0 \ .
\nonumber \\
\end{eqnarray}
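For clarity, let us verify this invariance explicitly. Using the antisymmetry of $\omega_{\mu\nu}$ and $\omega_{MN}$ we find
\begin{eqnarray}
\delta (p_M p^M)&=&-2\omega_{NM}p^N p^M=0 \ , \quad
\delta \tilde{G}_{ij}=(\omega_{\mu\nu}+\omega_{\nu\mu})\partial_i X^\mu \partial_j X^\nu=0 \ , \nonumber \\
\delta (p_\mu \partial_i X^\mu+p_M\partial_i X^M)&=&
-p_M\lambda^M_{ \ \mu}\partial_i X^\mu+p_M \lambda^M_{ \ \nu}\partial_i X^\nu=0 \ ,
\nonumber \\
\end{eqnarray}
so that each term in the Lagrangian density is separately invariant.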
As the next step we determine the canonical equations of motion from the
extended Hamiltonian
\begin{equation}
H=\int d^p\xi \left(\lambda^\tau
(p_M p^M +\tau_p^2 \det \tilde{G}_{ij})
+\lambda^i (p_\mu \partial_i X^\mu+p_M\partial_i X^M)\right)
\end{equation}
so that we have the following collection of canonical equations of motion
\begin{eqnarray}
\partial_0 X^M&=&\pb{X^M,H}=2\lambda^\tau p^M+\lambda^i\partial_i X^M \ , \nonumber \\
\partial_0 p_M&=&\pb{p_M,H}=\partial_i (\lambda^i p_M) \ , \nonumber \\
\partial_0 X^\mu&=&\pb{X^\mu,H}=\lambda^ i\partial_i X^\mu \ , \nonumber \\
\partial_0 p_\mu&=&\pb{p_\mu,H}=\partial_i[2\lambda^\tau \tau_p^2
\partial_j X_\mu \tilde{G}^{ji}\det\tilde{G}_{ij}]+\partial_i[\lambda^i p_\mu] \ ,
\nonumber \\
& & p_M p^M+\tau_p^2\det \tilde{G}_{ij}=0 \ , \quad p_\mu\partial_i X^\mu+p_M
\partial_i X^M=0 \ . \nonumber \\
\end{eqnarray}
Let us try to solve these equations of motion in the spatial gauge
$X^i=\xi^i$. Then the equation of motion for $X^i$ implies
$\lambda^i=0$, and the equation of motion for $p_M$ implies
that $p_M$ can depend on $\xi^i$ only. Further, we see from the Hamiltonian constraint that the only possibility is to demand that $t$ as well as $X^M,p_M$ depend on $\xi^i$. Without loss of generality we presume that they depend on $\xi^1$ only, and hence the matrix $\tilde{G}_{ij}$ is diagonal
in the form
\begin{equation}
\tilde{G}_{ij}=\mathrm{diag}(1-t'^2,1,\dots,1) \ .
\end{equation}
Then the equations of motion for $p_i$ with $i\neq 1$ are automatically satisfied by $p_i=0$, which is also in agreement with the spatial diffeomorphism constraints implying $p_i=-p_M\partial_i X^M$. For $i=1$ the equation of motion for $p_1$ has the form
\begin{equation}
\partial_0 p_1=\partial_1[2\lambda^\tau \tau_p^2]
\end{equation}
which determines the value of the Lagrange multiplier $\lambda^\tau$,
since $p_1=-p_M\partial_1 X^M$ and $p_M$ depends on $\xi^1$ only:
\begin{eqnarray}
2\frac{\partial_1^2t}{\partial_1t}=\frac{\partial_1 \lambda^\tau}{\lambda^\tau} \ ,
\nonumber \\
\end{eqnarray}
which has the solution
\begin{equation}
\lambda^\tau=C(\partial_1t)^2 \ ,
\end{equation}
where $C$ is a constant. Using this result we finally find that
$X^M=2C(\partial_1t)^2 p^M\xi^0+k^M$, where $k^M$ can in principle depend on $\xi^1$. We see that the non-relativistic p-brane moves freely in the transverse space, where however the coordinates depend on $\xi^1$ through the
function $\partial_1t$. The simplest possibility is to choose $t=k\xi^1$, where $k$ has to obey the condition $k>1$. Then we can choose $\lambda^\tau=1$ for $C=\frac{1}{k^2}$, and $X^M$ has the following time dependence
\begin{equation}
X^M=2p^M\xi^0 \ , p_Mp^M=\tau_p^2(k^2-1) \ .
\end{equation}
Let us now determine the Lagrangian for this non-relativistic p-brane. Using the equations of motion for $X^M$ and $X^\mu$ we easily find the corresponding Lagrangian density
\begin{equation}
\mathcal{L}=\frac{1}{4\lambda^\tau}(\partial_0 X^M-\lambda^i\partial_i X^M)
(\partial_0 X_M-\lambda^j\partial_j X_M)-\lambda^\tau\tau_p^2\det \tilde{G}_{ij} \ .
\end{equation}
As the next step we
eliminate the Lagrange multipliers using their equations of motion
\begin{eqnarray}\label{lagmuleq}
& &\frac{1}{4(\lambda^\tau)^2}(\partial_0 X^M-\lambda^i\partial_i X^M)^2
+\tau_p^2\det \tilde{G}_{ij}=0 \ , \nonumber \\
& &
\partial_i X^M (\partial_0 X_M-\lambda^j\partial_j X_M)=0 \ . \nonumber \\
\end{eqnarray}
To proceed further, let us analyze the matrix $F_{ij}=\partial_i X^M\partial_j X_M$. This is a $p\times p$ matrix given as a product of the $p\times (d-q)$ matrix $\partial_i X^M$, the $(d-q)\times (d-q)$ matrix $\delta_{MN}$ and the transposed matrix, and hence it has rank
$\mathrm{min}(p,d-q)$. This matrix is non-singular if $p<d-q$, which leads to the condition (using the fact that $q=p-1$)
\begin{equation}\label{plessd1}
p<\frac{d+1}{2} \ .
\end{equation}
Let us presume this is the case; then we can solve the last equation in
(\ref{lagmuleq}) for $\lambda^i$ as
\begin{equation}
\lambda^i=F_{0j}F^{ji} \ , \quad F_{\alpha\beta}\equiv \partial_\alpha X^M\partial_\beta X_M
\end{equation}
so that the first equation in (\ref{lagmuleq}) gives
\begin{equation}
(\lambda^\tau)^2=-\frac{1}{4\tau_p^2\det \tilde{G}_{ij}}(F_{00}-F_{0k}F^{kj}F_{j0})=
-\frac{1}{4\tau_p^2\det\tilde{G}_{ij}\det F_{ij}}
\det F_{\alpha\beta}
\end{equation}
and hence the Lagrangian density has the final form
\begin{equation}\label{mLF}
\mathcal{L}=\tau_p\sqrt{-\det \tilde{G}_{ij}\frac{\det F_{\alpha\beta}}{\det F_{ij}}} \ .
\end{equation}
Of course, this expression is well defined provided that the matrix
$F_{\alpha\beta}=\partial_\alpha X^M \partial_\beta X_M$ is non-singular. This is a $(p+1)\times (p+1)$ matrix
with rank $\mathrm{min}(p+1,d-(p-1))$, which implies
\begin{equation}
p<\frac{d}{2} \ ,
\end{equation}
a stronger condition than
(\ref{plessd1}), but one that can certainly be satisfied.
Finally note that (\ref{mLF}) is invariant under
scaling transformations
\begin{equation}
X'^\mu=\lambda^{-\frac{1}{p}}X^\mu \ , \quad X'^M=\lambda X^M \ .
\end{equation}
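This invariance is easily checked: under the scaling above $\tilde{G}_{ij}\rightarrow \lambda^{-\frac{2}{p}}\tilde{G}_{ij}$ and $F_{\alpha\beta}\rightarrow \lambda^2 F_{\alpha\beta}$, so that
\begin{equation}
\det \tilde{G}_{ij}\rightarrow \lambda^{-2}\det \tilde{G}_{ij} \ , \quad
\frac{\det F_{\alpha\beta}}{\det F_{ij}}\rightarrow
\frac{\lambda^{2(p+1)}}{\lambda^{2p}}\frac{\det F_{\alpha\beta}}{\det F_{ij}}=
\lambda^2\frac{\det F_{\alpha\beta}}{\det F_{ij}} \ ,
\end{equation}
and the factors $\lambda^{-2}$ and $\lambda^2$ cancel inside the square root in (\ref{mLF}).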
It is interesting that the Lagrangian density
derived from the Hamiltonian is not unique. For example, let us consider the equation of motion for $X^\mu$
\begin{equation}
\partial_0 X^\mu=\lambda^i\partial_i X^\mu \ .
\end{equation}
Multiplying this equation by $\partial_j X_\mu$ we obtain
\begin{equation}
\partial_0 X^\mu\partial_j X_\mu=\lambda^i \tilde{G}_{ij}
\end{equation}
that can be solved for $\lambda^i$ using the fact that $\tilde{G}_{ij}$ is
non-singular
\begin{equation}
\lambda^i=\tilde{G}^{ij}\tilde{G}_{j0} \ , \quad \tilde{G}_{\alpha\beta}=
\partial_\alpha X^\mu \partial_\beta X_\mu \ .
\end{equation}
Then the equation of motion for $\lambda^\tau$ has the form
\begin{eqnarray}
(\lambda^\tau)^2=-\frac{1}{4\tau_p^2 \det\tilde{G}_{ij}}
(\mathbf{a}_{00}-2\tilde{G}_{0j}\tilde{G}^{ji}\mathbf{a}_{i0}+\tilde{G}_{0i}\tilde{G}^{ik}\mathbf{a}_{kl}
\tilde{G}^{lj}\tilde{G}_{j0})
\end{eqnarray}
and hence we obtain the following Lagrangian density
\begin{equation}
\mathcal{L}=-\tau_p \sqrt{(\mathbf{a}_{00}-2\tilde{G}_{0j}\tilde{G}^{ji}\mathbf{a}_{i0}+
\tilde{G}_{0i}\tilde{G}^{ik}\mathbf{a}_{kl}\tilde{G}^{lj}\tilde{G}_{j0})\det\tilde{G}_{ij}} \ .
\end{equation}
We argued that the form of the non-relativistic Lagrangian density depends
on the value of the coefficient $k_q$. Let us consider the second possibility,
$k_q=p-1$, so that the Lagrangian density is equal to
\begin{eqnarray}
\mathcal{L}&=&p_\mu\partial_0 X^\mu+p_M\partial_0 X^M-\nonumber \\
&-&\lambda^\tau
(\frac{1}{\omega^2}p_\mu\eta^{\mu\nu}p_\nu+p_M p^M +\omega^2\tau_p^2 \det \tilde{G}_{ij}+\tau_p^2
\det \tilde{G}_{ij}\tilde{G}^{ij}\mathbf{a}_{ji})
\nonumber \\
&-&\lambda^i (p_\mu \partial_i X^\mu+p_M\partial_i X^M) \nonumber \\
\end{eqnarray}
so that, expanding in powers of $\omega$, we obtain
\begin{eqnarray}
\mathcal{L}&=&\omega^2 \mathcal{L}_2+\mathcal{L}_0 \ , \quad
\mathcal{L}_2=-\lambda^\tau \tau_p^2\det \tilde{G}_{ij} \ , \nonumber \\
\mathcal{L}_0&=&p_\mu\partial_0 X^\mu+p_M\partial_0 X^M-\nonumber \\
&-&\lambda^\tau
(p_M p^M +\tau_p^2\det \tilde{G}_{ij}\tilde{G}^{ij}\mathbf{a}_{ij})-\lambda^i (p_\mu \partial_i X^\mu+p_M\partial_i X^M) \ . \nonumber \\
\end{eqnarray}
Now we would like to analyze the transformation rules for the different terms in the Lagrangian. Clearly we have $\delta_{0}\mathcal{L}_2=0$, while
\begin{eqnarray}
\delta_0 \mathcal{L}_0&=&-\tau_p^2 \det \tilde{G}_{ij}\tilde{G}^{ij}\delta_0 \mathbf{a}_{ij}=\nonumber \\
&=&-\tau_p^2\det \tilde{G}_{ij}\tilde{G}^{ij}(\lambda^M_{ \ \nu}\partial_i X^\nu
\delta_{MN}\partial_j X^N+\partial_i X^M\delta_{MN}\lambda^N_{ \ \nu}
\partial_j X^\nu) \ . \nonumber \\
\end{eqnarray}
On the other hand it is easy to see that
\begin{eqnarray}
\delta_{-2}\mathcal{L}_2&=&-\tau_p^2\det \tilde{G}_{kl}\delta_{-2} \tilde{G}_{ij}\tilde{G}^{ji}=
\nonumber \\
&=&-\tau_p^2\det \tilde{G}_{kl}\tilde{G}^{ij}(\partial_i (\lambda^\mu_{ \ M}X^M)
\eta_{\mu\nu}\partial_j X^\nu+\partial_i X^\mu \eta_{\mu\nu}
\partial_j (\lambda^\nu_{ \ M}X^M)) \nonumber \\
\end{eqnarray}
using the fact that $\delta_{-2}X^\mu=\lambda^\mu_{ \ M}X^M$.
Now it is easy to see that $\delta_{-2}\mathcal{L}_2+\delta_0\mathcal{L}_0=0$
thanks to the conditions
\begin{eqnarray}
\lambda^\mu_{ \ M}\eta_{\mu\rho}+\delta_{MK}\lambda^K_{ \ \rho}=0 \ , \quad
\eta_{\rho\nu}\lambda^\nu_{ \ M}+\lambda^K_{ \ \rho}\delta_{KM}=0 \ .
\nonumber \\
\end{eqnarray}
It is important to stress that the variation of the Lagrangian density $\mathcal{L}_0$ proportional to $\lambda^M_{ \ \mu}$ is compensated by the variation of the Lagrangian density $\mathcal{L}_2$, in agreement with \cite{Batlle:2016iel}. On the other hand, it is not completely clear how to deal with the presence of the divergent term in the Lagrangian when we analyze the corresponding equations of motion. A well defined procedure for eliminating this term is to couple the p-brane to a background $(p+1)$-form.
We will show that this can be done in the canonical approach too.
\section{Elimination of Divergent Term}\label{fourth}
We begin this section with the massive relativistic particle, whose canonical action has the form
\begin{equation}
S=\int d\tau (p_t \dot{t}+p_M\dot{X}^M-
e(-\frac{1}{\omega^2}p_t^2+p_M^2+\tilde{m}^2))
\end{equation}
and we see that the limit $\omega\rightarrow
\infty$ gives the result
\begin{equation}
S=\int d\tau (p_t \dot{t}+p_M\dot{X}^M-
e(p_M^2+\tilde{m}^2)) \ .
\end{equation}
From the Hamiltonian constraint $p_M^2+\tilde{m}^2=0$ we see that
the non-relativistic limit can only be defined if we scale $\tilde{m}^2=
\frac{1}{\omega^2}m^2$. On the other hand, the dynamics is then trivial since the Hamiltonian constraint implies $p_Mp^M=0$,
whose only solution is $p_M=0$. In order to resolve this
problem, let us couple the particle to a background
electromagnetic field so that the action has the form
\begin{equation}
S=-\tilde{m}\int d\tau \sqrt{-g_{AB}\dot{X}^A\dot{X}^B}+M\int d\tau A_A\dot{X}^A
\end{equation}
so that the conjugate momentum is
\begin{equation}
p_A=\frac{\tilde{m} g_{AB}\dot{X}^B}{\sqrt{-g_{AB}\dot{X}^A\dot{X}^B}}+MA_A
\end{equation}
which implies the following constraint
\begin{equation}
\mathcal{H}_\tau =(p_A-MA_A)g^{AB}(p_B-MA_B)+\tilde{m}^2\approx 0 \ .
\end{equation}
We define the non-relativistic limit by choosing $g_{AB}=\mathrm{diag}(-\omega^2,1,\dots,1)$,
$g^{AB}=\mathrm{diag}(-\frac{1}{\omega^2},1,\dots,1)$ and also
$A_0=\omega^2$. Then the Hamiltonian constraint has the form
\begin{equation}
\mathcal{H}_\tau=p_M p^M-\frac{1}{\omega^2}p_t^2+2Mp_t-M^2\omega^2+\tilde{m}^2\approx 0
\end{equation}
Now we see that we obtain a well defined limit when we scale $\tilde{m}^2$ as
$\tilde{m}^2=M^2\omega^2$, so that the Hamiltonian constraint for the non-relativistic particle has
the form
\begin{equation}
\mathcal{H}_\tau=p_M p^M+2Mp_t \approx 0 \ .
\end{equation}
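Let us note that solving this constraint for $p_t$ gives the familiar non-relativistic dispersion relation: identifying the energy as $E=-p_t$, we obtain
\begin{equation}
E=\frac{p_M p^M}{2M} \ ,
\end{equation}
which is the energy of a free Galilean particle of mass $M$.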
This is clearly a non-relativistic Hamiltonian constraint, and we see that
the coupling of the particle to the gauge field was crucial. It is now easy
to determine the corresponding Lagrangian using the equations of motion
for $X^M$ and for $t$
\begin{equation}
\dot{X}^M=\pb{X^M,H}=2ep_M \ , \quad
\dot{t}=\pb{t,H}=2Me
\end{equation}
and hence
\begin{equation}
L=p_M\dot{X}^M+p_t\dot{t}-H=
ep_M p^M=\frac{1}{4e}\dot{X}^M\dot{X}_M=\frac{M}{2\dot{t}}
\dot{X}^M\dot{X}_M \ ,
\end{equation}
where in the last step we used the equation of motion for $t$. Note that this Lagrangian has the same form as the Lagrangian for the non-relativistic particle derived in \cite{Andringa:2012uz}.
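As a quick numerical cross-check of this Legendre transform (with arbitrary illustrative values, not part of the derivation itself), one can verify that the canonical expression $p_M\dot{X}^M+p_t\dot{t}-H$, evaluated on the equations of motion, reproduces $\frac{M}{2\dot{t}}\dot{X}^M\dot{X}_M$:

```python
# Numerical cross-check (illustrative only): the Legendre transform of
# H = e (p_M p^M + 2 M p_t) reproduces L = (M / (2 tdot)) Xdot^M Xdot_M.
import random

random.seed(0)
M = 1.7          # coupling constant of the particle to the gauge field
e = 0.3          # lapse (Lagrange multiplier)
p = [random.uniform(-1, 1) for _ in range(3)]   # transverse momenta p_M
p_t = random.uniform(-1, 1)                     # momentum conjugate to t

# Equations of motion: Xdot^M = 2 e p^M, tdot = 2 M e
Xdot = [2 * e * pM for pM in p]
tdot = 2 * M * e

# L = p_M Xdot^M + p_t tdot - e (p^2 + 2 M p_t); the p_t dependence cancels
p2 = sum(pM * pM for pM in p)
L_canonical = (sum(pM * xd for pM, xd in zip(p, Xdot))
               + p_t * tdot - e * (p2 + 2 * M * p_t))

# Closed form from the text: L = (M / (2 tdot)) Xdot^M Xdot_M
L_closed = (M / (2 * tdot)) * sum(xd * xd for xd in Xdot)

assert abs(L_canonical - L_closed) < 1e-9
```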
As the next step we consider the fundamental string coupled to a background NSNS two-form, for which the action has the form
\begin{eqnarray}
S=-\tilde{\tau} \int d\tau d\sigma \sqrt{-\det g_{\alpha\beta}}
+\tilde{\tau}\int d\tau d\sigma B_{AB}\partial_\tau X^A\partial_\sigma X^B
\nonumber \\
\end{eqnarray}
so that we have the following conjugate momenta
\begin{equation}
p_A=-\tilde{\tau} G_{AB}\partial_\beta X^B g^{\beta\tau}\sqrt{-\det g_{\alpha\beta}}+
\tilde{\tau} B_{AB}\partial_\sigma X^B
\end{equation}
which implies the following Hamiltonian constraint
\begin{equation}
\mathcal{H}_\tau =(p_A+\tilde{\tau} B_{AC}\partial_\sigma X^C)G^{AB}(p_B+\tilde{\tau} B_{BD}\partial_\sigma X^D)+\tilde{\tau}^2 G_{AB}\partial_\sigma X^A
\partial_\sigma X^B\approx 0 \ .
\end{equation}
In order to define the stringy non-relativistic limit
we choose the longitudinal components of the metric as $G_{\mu\nu}=\omega^2
\eta_{\mu\nu} \ , G^{\mu\nu}=\frac{1}{\omega^2}\eta^{\mu\nu} \ , \mu,\nu=0,1$, so that we obtain the Hamiltonian constraint in the form
\begin{eqnarray}
\mathcal{H}_\tau&=&\frac{1}{\omega^2}p_\mu \eta^{\mu\nu}p_\nu+
2p_\mu\tilde{\tau} \frac{1}{\omega^2}\eta^{\mu\nu}B_{\nu\rho}\partial_\sigma X^\rho+p_M G^{MN}p_N+\nonumber \\
&+&\frac{1}{\omega^2}\tilde{\tau}^2 B_{\mu\rho}\partial_\sigma X^\rho \eta^{\mu\nu}
B_{\nu\lambda}\partial_\sigma X^\lambda+\tilde{\tau}^2 \omega^2
\eta_{\mu\nu}\partial_\sigma X^\mu \partial_\sigma X^\nu+
\tilde{\tau}^2 \partial_\sigma X^M\partial_\sigma X_M
\approx 0 \ .
\nonumber \\
\end{eqnarray}
We see that the divergent term can be eliminated by a suitable choice of the background NSNS two-form, namely
$B_{\mu\nu}=\omega^2 \epsilon_{\mu\nu} \ ,
\epsilon_{01}=-1$. Further, the string tension is not rescaled,
$\tilde{\tau}=\tau$,
and hence the Hamiltonian constraint has the form
\begin{equation}
\mathcal{H}_\tau=2\tau p_\mu \eta^{\mu\nu}\epsilon_{\nu\rho}\partial_\sigma X^\rho+p_Mp^M+\tau^2 \partial_\sigma X^M\partial_\sigma X_M\approx 0 \ .
\end{equation}
This is the same form of the Hamiltonian constraint as was derived in
\cite{Gomis:2004ht}.
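The cancellation of the divergent terms rests on the two-dimensional identity $\epsilon_{\mu\rho}\eta^{\mu\nu}\epsilon_{\nu\lambda}=-\eta_{\rho\lambda}$ for $\epsilon_{01}=-1$; a small numerical sketch (purely illustrative) confirms it:

```python
# Check (illustrative) that with B_{mu nu} = omega^2 eps_{mu nu}, eps_{01} = -1,
# the O(omega^2) terms in the Hamiltonian constraint cancel: the B^2 term
# reduces to -tau^2 omega^2 eta_{mu nu} ..., against +tau^2 omega^2 eta_{mu nu} ...
eta = [[-1.0, 0.0], [0.0, 1.0]]          # eta_{mu nu} = eta^{mu nu} numerically
eps = [[0.0, -1.0], [1.0, 0.0]]          # eps_{01} = -1

# M_{rho lam} = eps_{mu rho} eta^{mu nu} eps_{nu lam}
M = [[sum(eps[mu][rho] * eta[mu][nu] * eps[nu][lam]
          for mu in range(2) for nu in range(2))
      for lam in range(2)] for rho in range(2)]

# The identity eps.eta.eps = -eta is what makes the divergences cancel.
for rho in range(2):
    for lam in range(2):
        assert abs(M[rho][lam] + eta[rho][lam]) < 1e-12
```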
The generalization of this procedure to the case of a $p$-brane is
straightforward. We presume that the $p$-brane couples to a
$C^{(p+1)}$ form,
so that the action has the form
\begin{equation}
S=-\tilde{\tau}_p\int d^{p+1}\xi\sqrt{-\det \mathbf{A}_{\alpha\beta}}
+\tilde{\tau}_p\int C^{(p+1)} \ ,
\end{equation}
where
\begin{equation}
C^{(p+1)}=\frac{1}{(p+1)!}C_{A_1\dots A_{p+1}}dX^{A_1}\wedge \dots \wedge dX^{A_{p+1}}=
\frac{1}{(p+1)!}\epsilon^{\alpha_1\dots \alpha_{p+1}}
C_{A_1\dots A_{p+1}}\partial_{\alpha_1}X^{A_1}\dots \partial_{\alpha_{p+1}}X^{A_{p+1}}
d^{p+1}\xi
\end{equation}
so that we have the following conjugate momenta
\begin{equation}
p_A=-\tilde{\tau}_p G_{AB}\partial_\beta X^B(\mathbf{A}^{-1})^{\beta 0}
\sqrt{-\det \mathbf{A}}+\frac{\tilde{\tau}_p}{p!}C_{AA_2\dots A_{p+1}}
\epsilon^{i_2\dots i_{p+1}}\partial_{i_2}X^{A_2}\dots
\partial_{i_{p+1}}X^{A_{p+1}} \ .
\end{equation}
\end{equation}
Then it is easy to see that the Hamiltonian constraint has
the form
\begin{eqnarray}
\mathcal{H}_\tau&=&(p_A-\frac{\tilde{\tau}_p}{p!}C_{AA_2\dots A_{p+1}}
\epsilon^{i_2\dots i_{p+1}}\partial_{i_2}X^{A_2}\dots
\partial_{i_{p+1}}X^{A_{p+1}})G^{AB}\times \nonumber \\
&\times &(p_B-\frac{\tilde{\tau}_p}{p!}C_{BB_2\dots B_{p+1}}
\epsilon^{j_2\dots j_{p+1}}\partial_{j_2}X^{B_2}\dots
\partial_{j_{p+1}}X^{B_{p+1}})+\tilde{\tau}_p^2 \det \mathbf{A}_{ij}\approx 0 \ .
\nonumber \\
\end{eqnarray}
Now we presume that the metric has the form $
G_{\mu\nu}=\omega^2 \eta_{\mu\nu} \ , G_{MN}=\delta_{MN} \ ,
G^{\mu\nu}=\frac{1}{\omega^2}\eta^{\mu\nu} \ , \mu,\nu=0,\dots,p$
so that we have
\begin{equation}
\det \mathbf{A}_{ij}=\omega^{2p}\det \tilde{G}_{ij}+\omega^{2p-2}\det \tilde{G}_{ij}
\tilde{G}^{kl}\mathbf{a}_{kl}+O(\omega^{2p-4}) \ ,
\end{equation}
where $\tilde{G}_{ij}=\eta_{\mu\nu}\partial_i X^\mu\partial_j X^\nu$ and $\mathbf{a}_{ij}=\partial_i X^M\partial_j X_M$,
and hence we obtain a finite result if
$\tilde{\tau}_p^2 \omega^{2(p-1)}=\tau_p^2$ and if we choose the components of the
$(p+1)$ form along the $0,\dots,p$ directions in the form
\begin{equation}\label{Cback}
C_{\mu_0\dots\mu_{p}}=\omega^{p+1}\epsilon_{\mu_0\mu_1\dots\mu_{p}} \ .
\end{equation}
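The two leading orders of the expansion of $\det\mathbf{A}_{ij}$ in $\omega^2$ used above can also be verified numerically. The following pure-Python sketch (a random $p=3$ example, purely illustrative) checks that the top coefficients of $\det(\omega^2\tilde{G}+\mathbf{a})$, viewed as a polynomial in $\omega^2$, are $\det\tilde{G}$ and $\det\tilde{G}\,\tilde{G}^{kl}\mathbf{a}_{kl}$:

```python
# Illustrative check of det(w*Gt + a) = w^p det Gt + w^(p-1) det Gt tr(Gt^{-1} a) + ...
# with w playing the role of omega^2, for random symmetric p = 3 matrices.
import itertools
import random

random.seed(1)
p = 3

def rand_sym(n):
    # random symmetric matrix, shifted on the diagonal to be safely invertible
    m = [[random.uniform(-1, 1) for _ in range(n)] for _ in range(n)]
    return [[(m[i][j] + m[j][i]) / 2 + (2.0 if i == j else 0.0)
             for j in range(n)] for i in range(n)]

Gt, a = rand_sym(p), rand_sym(p)

def polymul(u, v):
    out = [0.0] * (len(u) + len(v) - 1)
    for i, ui in enumerate(u):
        for j, vj in enumerate(v):
            out[i + j] += ui * vj
    return out

def det_poly(G, A):
    """Coefficients (in w) of det(w*G + A), via the Leibniz formula."""
    n = len(G)
    coeffs = [0.0] * (n + 1)
    for perm in itertools.permutations(range(n)):
        sign = 1
        for i in range(n):
            for j in range(i + 1, n):
                if perm[i] > perm[j]:
                    sign = -sign
        term = [1.0]
        for i in range(n):
            term = polymul(term, [A[i][perm[i]], G[i][perm[i]]])
        for k, c in enumerate(term):
            coeffs[k] += sign * c
    return coeffs

def solve(M, b):
    """Solve M x = b by Gaussian elimination with partial pivoting."""
    n = len(M)
    A = [row[:] + [bv] for row, bv in zip(M, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for c in range(col, n + 1):
                A[r][c] -= f * A[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (A[r][n] - sum(A[r][c] * x[c] for c in range(r + 1, n))) / A[r][r]
    return x

coeffs = det_poly(Gt, a)                      # det(w*Gt + a) in powers of w
zero = [[0.0] * p for _ in range(p)]
det_Gt = det_poly(zero, Gt)[0]                # det(Gt) itself
# tr(Gt^{-1} a) = sum_k (Gt^{-1} a)_{kk}, computed column by column
tr = sum(solve(Gt, [a[i][k] for i in range(p)])[k] for k in range(p))

assert abs(coeffs[p] - det_Gt) < 1e-9
assert abs(coeffs[p - 1] - det_Gt * tr) < 1e-9
```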
To proceed further we use the fact that
\begin{eqnarray}
& &\frac{\tilde{\tau}_p^2}{\omega^2(p!)^2}
C_{\mu\mu_2\dots\mu_{p+1}}\epsilon^{i_2\dots i_{p+1}}
\partial_{i_2}X^{\mu_2}\dots\partial_{i_{p+1}}X^{\mu_{p+1}}
\eta^{\mu\nu}
C_{\nu\nu_2\dots\nu_{p+1}}\epsilon^{j_2\dots j_{p+1}}
\partial_{j_2}X^{\nu_2}\dots\partial_{j_{p+1}}X^{\nu_{p+1}}=
\nonumber \\
&=&-\frac{\tilde{\tau}_p^2\omega^{2p}}{p!}\epsilon^{i_2\dots i_{p+1}}\epsilon^{j_2\dots j_{p+1}}
\tilde{G}_{i_2j_2}\dots \tilde{G}_{i_{p+1}j_{p+1}}=
-\tilde{\tau}_p^2\omega^{2p}\det \tilde{G}_{ij} \nonumber \\
\end{eqnarray}
and we see that these two divergent contributions cancel. In other words, we arrive at the
following final form of the Hamiltonian constraint
\begin{equation}\label{mHtaup}
\mathcal{H}_\tau=-2\frac{\tau_p}{p!}p^\mu\epsilon_{\mu\mu_2\dots\mu_{p+1}}
\epsilon^{i_2\dots i_{p+1}}\partial_{i_2}X^{\mu_2}
\dots\partial_{i_{p+1}}X^{\mu_{p+1}}+p_M p^M+\tau_p^2
\det \tilde{G}_{ij}\tilde{G}^{kl}\mathbf{a}_{kl}\approx 0
\end{equation}
and we see that this Hamiltonian constraint is linear in the longitudinal momenta. Note that the Hamiltonian constraint is invariant under the
transformations
\begin{equation}\label{Lorsubgroup}
\delta X^\mu=\omega^\mu_{ \ \nu}X^\nu \ , \quad
\delta X^M=\omega^M_{ \ N}X^N
\ ,
\end{equation}
where we observe the important fact that the mixed
term $\lambda^M_{ \ \mu}X^\mu$ is absent in the variation of $X^M$. This is a consequence of the fact that the presence of the background $(p+1)$ form
breaks the original Lorentz symmetry to the transformations
(\ref{Lorsubgroup}) that leave the background $(p+1)$ form
(\ref{Cback}) invariant.
It is instructive to compare the Hamiltonian constraint (\ref{mHtaup})
with the one that was derived in \cite{Kluson:2017vwp}, where the Hamiltonian analysis
of a non-BPS Dp-brane was performed. The Hamiltonian constraint derived
in \cite{Kluson:2017vwp} can be easily truncated to the case of the $p$-brane, and we obtain
\begin{equation}
\mathcal{H}_{\tau}^{sq.r.}=
p_M p^M+\tau_p^2
\det \tilde{G}_{ij}\tilde{G}^{kl}\mathbf{a}_{kl}-\tau_p
\sqrt{-p_\mu (\eta^{\mu\nu}-\partial_i X^\mu \tilde{G}^{ij}
\partial_j X^\nu)p_\nu} \ .
\end{equation}
Since we are interested in
the physical content of the theory, it is natural to consider the
gauge fixed theory, and hence we impose the spatial static gauge
\begin{equation}
X^i=\xi^i
\end{equation}
so that $\tilde{G}_{ij}=\delta_{ij}$, and hence (\ref{mHtaup}) reduces to
\begin{eqnarray}
\mathcal{H}_\tau
=-2\tau_p p_0+p_M p^M+\tau_p^2 \delta^{ij}\mathbf{a}_{ij}\approx 0 \ , \nonumber \\
\end{eqnarray}
which agrees with $\mathcal{H}_\tau^{sq.r.}$ evaluated in the spatial static gauge as well, and hence these two Hamiltonian constraints are equivalent.
\section{Particle Limit of p-Brane Action}\label{fifth}
Finally, we consider the particle-like non-relativistic limit of the
$p$-brane action, where only the time direction is rescaled
\begin{eqnarray}\label{partlimit}
\tilde{x}^0&=&\omega t \ , \quad \tilde{x}^M= X^M \ , \quad M=1,\dots,d \ , \nonumber \\
\tilde{\tau}_p&=&\frac{1}{\omega} \tau_p \ . \quad
\end{eqnarray}
In this case we have the following primary constraints
\begin{eqnarray}
\mathcal{H}_i&=&p_t\partial_i t+p_M\partial_i X^M
\approx 0 \ , \nonumber \\
\mathcal{H}_\tau&=&-\frac{1}{\omega^2}p_t^2+p_M p^M+\frac{\tau_p^2}{\omega^2}
\det \mathbf{A}_{ij} \approx 0 \ , \nonumber \\
\end{eqnarray}
where $\mathbf{A}_{ij}=-\omega^2 \partial_i t\partial_j t+\mathbf{a}_{ij}$.
Now we can write
\begin{eqnarray}
\det \mathbf{A}_{ij}&=&\frac{1}{p!}\epsilon^{i_1\dots i_p}\epsilon^{j_1\dots j_p}
(-\omega^2 \partial_{i_1}t\partial_{j_1}t+\mathbf{a}_{i_1 j_1})\times
\dots (-\omega^2 \partial_{i_p}t\partial_{j_p}t+\mathbf{a}_{i_p j_p})=\nonumber \\
&=&-\omega^2\frac{1}{(p-1)!}\epsilon^{i_1\dots i_p}\epsilon^{j_1\dots j_p}
\partial_{i_1}t\partial_{j_1}t\mathbf{a}_{i_2 j_2}\dots \mathbf{a}_{i_pj_p}+
\det \mathbf{a} \ , \nonumber \\
\end{eqnarray}
where all terms of higher order in $\omega^2$ vanish due to the antisymmetry of $\epsilon^{i_1\dots i_p}$, since the matrix $\partial_i t\partial_j t$ has rank one. Then it is easy to see that
the Hamiltonian constraint has the form
\begin{eqnarray}
\mathcal{H}_\tau=-\frac{1}{\omega^2}p_t^2+p_M p^M-
\frac{\tau_p^2}{(p-1)!}\epsilon^{i_1\dots i_p}\epsilon^{j_1\dots j_p}
\partial_{i_1}t\partial_{j_1}t\mathbf{a}_{i_2 j_2}\dots \mathbf{a}_{i_pj_p}
+\frac{\tau_p^2}{\omega^2}\det\mathbf{a} \approx 0 \nonumber \\
\end{eqnarray}
and we see that it is well defined for $\omega\rightarrow \infty$, where we obtain
\begin{equation}
\mathcal{H}_\tau=p_M p^M-
\frac{\tau_p^2}{(p-1)!}\epsilon^{i_1\dots i_p}\epsilon^{j_1\dots j_p}
\partial_{i_1}t\partial_{j_1}t\mathbf{a}_{i_2 j_2}\dots \mathbf{a}_{i_pj_p} \approx 0 \ .
\end{equation}
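Both ingredients of this limit, the truncation of $\det\mathbf{A}_{ij}$ to terms at most linear in $\omega^2$ and the identification of the linear coefficient with the $\epsilon$-$\epsilon$ contraction above, can be checked numerically. The following pure-Python sketch (random $p=3$ data, purely illustrative) does so:

```python
# Sketch verifying, for p = 3 random data, that det A_ij = det(a_ij - s u_i u_j)
# (s = omega^2, u_i = partial_i t) is exactly linear in s, and that the slope is
# the epsilon-epsilon contraction appearing in the Hamiltonian constraint.
import itertools
import math
import random

random.seed(2)
p = 3
u = [random.uniform(-1, 1) for _ in range(p)]
m = [[random.uniform(-1, 1) for _ in range(p)] for _ in range(p)]
a = [[(m[i][j] + m[j][i]) / 2 + (2.0 if i == j else 0.0)
     for j in range(p)] for i in range(p)]

def perm_sign(perm):
    sign = 1
    for i in range(len(perm)):
        for j in range(i + 1, len(perm)):
            if perm[i] > perm[j]:
                sign = -sign
    return sign

def det(M):
    n = len(M)
    total = 0.0
    for perm in itertools.permutations(range(n)):
        prod = 1.0
        for i in range(n):
            prod *= M[i][perm[i]]
        total += perm_sign(perm) * prod
    return total

def f(s):
    """det(a - s * u u^T)."""
    return det([[a[i][j] - s * u[i] * u[j] for j in range(p)] for i in range(p)])

# (i) exactly linear in s: the rank-one factor u_i u_j cannot appear twice
assert abs(f(2.0) - 2.0 * f(1.0) + f(0.0)) < 1e-9

# (ii) slope = -(1/(p-1)!) eps^{i1..ip} eps^{j1..jp} u_{i1} u_{j1} a_{i2 j2}...a_{ip jp}
contraction = 0.0
for I in itertools.permutations(range(p)):
    for J in itertools.permutations(range(p)):
        term = perm_sign(I) * perm_sign(J) * u[I[0]] * u[J[0]]
        for k in range(1, p):
            term *= a[I[k]][J[k]]
        contraction += term
slope = f(1.0) - f(0.0)
assert abs(slope + contraction / math.factorial(p - 1)) < 1e-9
```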
It is easy to see that this Hamiltonian constraint is invariant under the non-relativistic transformations
\begin{equation}
\delta p_M=-\omega_M^{ \ N}p_N \ , \quad \delta X^M=\omega^M_{ \ N}X^N+
\lambda^M t \ ,
\end{equation}
using the fact that $\delta \mathbf{a}_{ij}=\lambda^M(\partial_i t\partial_j X_M+
\partial_i X_M\partial_j t)$ together with the antisymmetry of
$\epsilon^{i_1\dots i_p}$.
It is also instructive to determine the corresponding Lagrangian. Note that
the total Hamiltonian has the form
\begin{equation}
H=\int d^p\xi (\lambda^\tau \mathcal{H}_\tau+\lambda^i\mathcal{H}_i)
\end{equation}
so that we have the following equations of motion
\begin{eqnarray}
\partial_0 X^M=\pb{X^M,H}=2\lambda^\tau p^M+\lambda^i\partial_i X^M \ , \quad
\partial_0 t=\pb{t,H}=\lambda^i\partial_i t
\nonumber \\
\end{eqnarray}
and hence the Lagrangian density has the form
\begin{eqnarray}
\mathcal{L}&=&p_M\partial_0 X^M+p_t\partial_0 t-\lambda^\tau \mathcal{H}_\tau-
\lambda^i\mathcal{H}_i=\nonumber \\
&=&\frac{1}{4\lambda^\tau}
(\mathbf{a}_{00}-2\lambda^i\mathbf{a}_{i0}+\lambda^i\lambda^j \mathbf{a}_{ij})
+
\lambda^\tau \frac{\tau_p^2}{(p-1)!}\epsilon^{i_1\dots i_p}\epsilon^{j_1\dots j_p}
\partial_{i_1}t\partial_{j_1}t\mathbf{a}_{i_2 j_2}\dots \mathbf{a}_{i_pj_p} \ ,
\nonumber \\
\end{eqnarray}
where again $\mathbf{a}_{\alpha\beta}=\partial_\alpha X^M\partial_\beta X_M$.
To proceed further we
solve the equations of motion for $\lambda^i$ and $\lambda^\tau$
\begin{eqnarray}
& & \mathbf{a}_{i0}-\mathbf{a}_{ij}\lambda^j=0 \ , \nonumber \\
& & -\frac{1}{4(\lambda^\tau)^2}
(\mathbf{a}_{00}-2\lambda^i\mathbf{a}_{i0}+\lambda^i\lambda^j \mathbf{a}_{ij})+
\frac{\tau_p^2}{(p-1)!}\epsilon^{i_1\dots i_p}\epsilon^{j_1\dots j_p}
\partial_{i_1}t\partial_{j_1}t\mathbf{a}_{i_2 j_2}\dots \mathbf{a}_{i_pj_p} =0 \ .
\nonumber \\
\end{eqnarray}
If we presume that $\mathbf{a}_{ij}$ has an inverse, we can solve the first equation as
\begin{equation}
\lambda^i=\mathbf{a}^{ij}\mathbf{a}_{j0} \ ,
\end{equation}
so that the equation of motion for $\lambda^\tau$ has the form
\begin{eqnarray}
-\frac{1}{4(\lambda^\tau)^2}
\frac{\det \mathbf{a}_{\alpha\beta}}{\det \mathbf{a}_{ij}}+
\frac{\tau_p^2}{(p-1)!}\epsilon^{i_1\dots i_p}\epsilon^{j_1\dots j_p}
\partial_{i_1}t\partial_{j_1}t\mathbf{a}_{i_2 j_2}\dots \mathbf{a}_{i_pj_p}=0 \nonumber \\
\end{eqnarray}
and hence the Lagrangian density has the form
\begin{equation}
\mathcal{L}=\tau_p\sqrt{\frac{\det\mathbf{a}_{\alpha\beta}}{\det\mathbf{a}_{ij}}\frac{1}{(p-1)!}
\epsilon^{i_1\dots i_p}\epsilon^{j_1\dots j_p}
\partial_{i_1}t\partial_{j_1}t\mathbf{a}_{i_2 j_2}\dots \mathbf{a}_{i_pj_p}} \ .
\end{equation}
It is clear that
this analysis is valid for $p>1$. The case $p=1$ will be studied
separately in the next subsection.
\subsection{The Case of Fundamental String}
In this case the Hamiltonian constraint has the form
\begin{eqnarray}
\mathcal{H}_\tau=-\frac{1}{\omega^2}p_t^2+p_M p^M
-\tau_F^2 (\partial_\sigma t\partial_\sigma t-\frac{1}{\omega^2}\partial_\sigma X^M
\partial_\sigma X_M) \approx 0
\nonumber \\
\end{eqnarray}
which, in the limit $\omega \rightarrow \infty$, implies the
following Hamiltonian constraint
\begin{equation}\label{mHtauparticle}
\mathcal{H}_\tau=
p_M p^M-\tau_F^2 \partial_\sigma t\partial_\sigma t \approx 0 \ ,
\end{equation}
which agrees with the Hamiltonian constraint found in
\cite{Batlle:2016iel}.
It is interesting to find the corresponding Lagrangian density. To do this we again use the canonical equations of motion
\begin{eqnarray}
\partial_\tau X^M&=&\pb{X^M,H}=2\lambda^\tau p_M+\lambda^\sigma \partial_\sigma X^M \ ,
\nonumber \\
\partial_\tau t&=&\pb{t,H}=\lambda^\sigma \partial_\sigma t
\nonumber \\
\end{eqnarray}
and hence the Lagrangian density has the form
\begin{eqnarray}
\mathcal{L}&=&p_M\partial_\tau X^M+p_t\partial_\tau t-\lambda^\tau \mathcal{H}_\tau-
\lambda^\sigma \mathcal{H}_\sigma=
\nonumber \\
&=&\frac{1}{4\lambda^\tau}(\partial_\tau X^M-\lambda^\sigma \partial_\sigma X^M)
(\partial_\tau X_M-\lambda^\sigma \partial_\sigma X_M)+\lambda^\tau \tau_F^2
\partial_\sigma t\partial_\sigma t \ . \nonumber \\
\end{eqnarray}
Solving the equation of motion for $\lambda^\sigma$ we obtain
\begin{equation}
\lambda^\sigma=\frac{\mathbf{a}_{\tau\sigma}}{\mathbf{a}_{\sigma\sigma}}
\end{equation}
while the equation of motion for $\lambda^\tau$ has the form
\begin{equation}
-\frac{1}{4(\lambda^\tau)^2}
\left(\mathbf{a}_{\tau\tau}-\frac{\mathbf{a}_{\tau\sigma}^2}{\mathbf{a}_{\sigma\sigma}}\right)
+\tau^2_F\partial_\sigma t\partial_\sigma t =0 \ .
\end{equation}
Inserting this result into the original Lagrangian density we finally obtain
\begin{equation}
\mathcal{L}=
\tau_F\sqrt{\frac{\det\mathbf{a}_{\alpha\beta}}{\mathbf{a}_{\sigma\sigma}}\partial_\sigma
t\partial_\sigma t} \ .
\end{equation}
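As a numerical sanity check (with arbitrary illustrative values, not part of the derivation), one can verify that eliminating $\lambda^\sigma$ and $\lambda^\tau$ indeed reproduces this closed form:

```python
# Illustrative check (random data) that solving for lambda^sigma and lambda^tau
# in the multiplier form of the Lagrangian reproduces the closed form
#   L = tau_F * sqrt( det(a_{alpha beta}) / a_{sigma sigma} * (d_sigma t)^2 ).
import math
import random

random.seed(3)
tau_F = 1.3
a_ss = 2.0 + random.random()        # a_{sigma sigma}
a_ts = random.uniform(-1, 1)        # a_{tau sigma}
a_tt = a_ts * a_ts / a_ss + 1.0 + random.random()   # keep det a_{alpha beta} > 0
dt = 0.7                            # d_sigma t

lam_s = a_ts / a_ss                                  # from the lambda^sigma equation
Q = a_tt - 2 * lam_s * a_ts + lam_s ** 2 * a_ss      # = det a_{alpha beta} / a_ss
lam_t = 0.5 * math.sqrt(Q / (tau_F ** 2 * dt ** 2))  # from the lambda^tau equation

L_multiplier = Q / (4 * lam_t) + lam_t * tau_F ** 2 * dt ** 2
det_a = a_tt * a_ss - a_ts ** 2
L_closed = tau_F * math.sqrt(det_a / a_ss * dt ** 2)

assert abs(L_multiplier - L_closed) < 1e-9
```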
It is interesting that this Lagrangian does not have the same form as the
Lagrangian found in \cite{Batlle:2016iel}. Explicitly, the Lagrangian
density derived there has the form
\begin{eqnarray}\label{Lorg}
\mathcal{L}&=&-\tau_F
\sqrt{(\partial_\tau t\partial_\sigma X^M-\partial_\sigma t
\partial_\tau X^M)(\partial_\tau t \partial_\sigma X_M-\partial_\sigma t
\partial_\tau X_M)}=\nonumber \\
&=&-\tau_F
\sqrt{\partial_\tau t\partial_\tau t \mathbf{a}_{\sigma\sigma}-
2\partial_\sigma t\partial_\tau t\mathbf{a}_{\tau\sigma}+
\partial_\sigma t\partial_\sigma t \mathbf{a}_{\tau\tau}}=-\tau_F
\sqrt{\mathbf{B}} \ .
\nonumber \\
\end{eqnarray}
From (\ref{Lorg}) we obtain the following conjugate momenta
\begin{eqnarray}
p_t&=&-\tau_F \frac{\partial_\tau t \mathbf{a}_{\sigma\sigma}-\partial_\sigma t
\mathbf{a}_{\tau\sigma}}
{\sqrt{\mathbf{B}}} \ ,
\nonumber \\
p_M&=&-\tau_F\frac{\partial_\tau X_M\partial_\sigma t
\partial_\sigma t-\partial_\sigma t\partial_\tau t \partial_\sigma X_M}
{\sqrt{\mathbf{B}}} \ .
\nonumber \\
\end{eqnarray}
We again find that the bare Hamiltonian vanishes,
while we have the following collection of primary constraints
\begin{eqnarray}
\mathcal{H}_\sigma&=&
p_M\partial_\sigma X^M+p_t\partial_\sigma t \approx 0 \ , \nonumber \\
\mathcal{H}_\tau&=&p_Mp^M-\tau_F^2 \partial_\sigma t\partial_\sigma t\approx 0 \ .
\nonumber \\
\end{eqnarray}
It is interesting that in the case of the fundamental string we can find the same form of the Lagrangian density as in \cite{Batlle:2016iel}. To see this, note that the equation of motion for $t$ implies
\begin{equation}
\lambda^\sigma=\frac{\partial_\tau t}{\partial_\sigma t}
\end{equation}
so that the Lagrangian has the form
\begin{equation}\label{mLtt}
\mathcal{L}=\frac{1}{4\lambda^\tau}\left(\mathbf{a}_{\tau\tau}-2\frac{\partial_\tau t}{\partial_\sigma t}\mathbf{a}_{\tau\sigma}+\frac{\partial_\tau t\partial_\tau t}
{\partial_\sigma t\partial_\sigma t}\mathbf{a}_{\sigma\sigma}\right)+\lambda^\tau
\tau_F^2\partial_\sigma t\partial_\sigma t \ .
\end{equation}
Then solving the equation of motion for $\lambda^\tau$ we find
\begin{equation}
\lambda^\tau=-\frac{1}{2\tau_F}
\sqrt{\frac{\mathbf{a}_{\tau\tau}-2\frac{\partial_\tau t}{\partial_\sigma t}\mathbf{a}_{\tau\sigma}+\frac{\partial_\tau t\partial_\tau t}{\partial_\sigma t
\partial_\sigma t}\mathbf{a}_{\sigma\sigma}}{\partial_\sigma t\partial_\sigma t}} \ .
\end{equation}
Inserting this result back into (\ref{mLtt}) we finally obtain
\begin{equation}
\mathcal{L}=-\tau_F\sqrt{\mathbf{a}_{\tau\tau}\partial_\sigma t\partial_\sigma t-
2\partial_\tau t\partial_\sigma t \mathbf{a}_{\tau\sigma}+\partial_\tau t
\partial_\tau t\mathbf{a}_{\sigma\sigma}} \ ,
\end{equation}
which agrees with (\ref{Lorg}).
\acknowledgments{This work was
supported by the Grant Agency of the Czech Republic under the grant
P201/12/G028. }
\section{Introduction}
\par Unsupervised learning has long been an intriguing field in artificial intelligence. Human and animal learning is largely unsupervised: we discover the structure of the world mostly by observing it, not by being told the name of every object, which would correspond to supervised learning \cite{lecun15nature}.
A system capable of predicting what is going to happen by just watching large collections of unlabeled video data needs to build an internal representation of the world and its dynamics \cite{mathieu2015deep}. When considering the vast amount of unlabeled data generated every day, unsupervised learning becomes one of the key challenges to solve in the road towards general artificial intelligence.
Based on how a human would provide a high-level summary of a video, we hypothesize that there are three key components to understanding such content, namely the \textit{foreground}, the \textit{motion} and the \textit{background}. These three elements would tell us, respectively, what the main objects in the video are, what they are doing, and where they are located. We propose a framework that explicitly disentangles these three components in order to build strong features for action recognition, where the supervision signals can be generated without requiring expensive and time-consuming human annotations.
The proposal is inspired by the observation that infants with no prior visual knowledge tend to group things that move as connected wholes and that move separately from one another \cite{elizabeth90cognitivemotion}. Based on this intuition, we can build a similar unsupervised pipeline that segments foreground from background using global motion, i.e.~the rough moving directions of objects. The foregrounds segmented across the video can be used to model both the global motion (e.g.~translation or stretch) and the local motion (i.e.~transformation of detailed appearance) from a pair of foregrounds at different time steps.
Since background motion is mostly caused by camera movement, we restrict the use of motion to the foreground and rely on appearance to model the background.
The contributions of this work are two-fold: (1) disentangling motion, foreground and background features in videos through a human-like motion-aware mechanism, and (2) learning strong video features that improve performance on the action recognition task.
\section{Related Work}
Leveraging large collections of unlabeled videos has proven beneficial for unsupervised training of image models thanks to the implicit properties they exhibit in the temporal domain, e.g. visual similarity between patches in consecutive frames \cite{wang2015unsupervised} and temporal coherence and order \cite{misra2016shuffle}. Since learning to predict future frames forces the model to construct an internal representation of the world dynamics, several works have addressed such task by predicting global features of future frames with Recurrent Neural Networks (RNN) \cite{srivastava2015unsupervised} or pixel level predictions by means of multi-scale Convolutional Neural Networks (CNN) trained with an adversarial loss \cite{mathieu2015deep}. The key role played by motion has been exploited for future frame prediction tasks by explicitly decomposing content and motion \cite{villegas2017decomposing} and for unsupervised training of video-level models \cite{luo2017motionprediction}. Similarly in spirit, separate foreground and background streams have been found to increase the quality of generative video models \cite{vondrick2016generating}.
Techniques exploiting explicit foreground and background segmentations in video generally require expensive annotation methods, limiting their application to labeled data. However, the findings by Pathak et al. \cite{pathak2017learning} show how models trained on noisy annotations learn to generalize and perform well when finetuned for other tasks. Such noisy annotations can be generated by unsupervised methods, thus alleviating the cost of annotating data for the target task. In this work we study our proposed method using manual annotations; evaluating the performance drop when replacing such annotations with segmentations generated in an unsupervised manner remains future work.
\section{Methodology}
\begin{figure*}[ht]
\centering
\includegraphics[width=\linewidth]{./figs/architecture.pdf}\\
\caption{System architecture. Please note that in this work, the masks used to generate ground truth are from manual annotations while uNLC will be utilized in our future work.}
\label{fig:architecture}
\end{figure*}
We adopt an autoencoder-style architecture to learn features in an unsupervised manner. The encoder maps input clips to feature tensors by applying a series of 3D convolutions and max-pooling operations \cite{du2015c3d}. Unlike traditional autoencoder architectures, the bottleneck features are partitioned into three splits, which are then used as input for three different reconstruction tasks, as depicted in Figure \ref{fig:architecture}.
\textbf{Disentangling of foreground and background:} depending on the nature of the training data, the reconstruction of frames may become dominated either by the foreground or by the background. We explicitly split the reconstruction task to guarantee that neither part dominates over the other. The partitioned foreground and background features are passed to two different decoders for reconstruction. While segmentation masks are often obtained by manual labeling, it is worth noting that they can be obtained without supervision as well, e.g.~by using methods based on motion perceptual grouping such as uNLC \cite{pathak2017learning}. The latter approach has proven beneficial for unsupervised pre-training of CNNs \cite{pathak2017learning}.
\textbf{Disentangling of foreground motion:} leveraging motion information can provide a boost in action recognition performance when paired with appearance models \cite{simonyan2014two}. We encourage the model to learn motion-related representations by solving a predictive learning task where the foreground in the last frame needs to be reconstructed from the foreground in the first frame. Given a pair of foregrounds at timesteps $t_1$ and $t_2$, namely $\left(f_{t_{1}}, f_{t_{2}} \right)$, we aim to estimate a function $M$ from the motion features $m_{t_1\rightarrow t_2}$ spanning $t_1$ to $t_2$ that maps $f_{t_{1}}$ to $f_{t_{2}}$ in a deep feature space $G$:
\begin{equation}
G \left( f_{t_{2}} \right) = M \left( G(f_{t_{1}}), m_{t_1 \rightarrow t_2} \right)
\end{equation}
Throughout this work, the space of encoded features is used for $G$, and $M$ is parametrized by a deterministic version of cross convolution \cite{visualdynamics16}. The foreground decoder weights are shared among all foreground reconstruction tasks. Gradients coming from the reconstruction of $f_{t_{2}}$ are blocked from backpropagating through $G( f_{t_{1}})$ during training in order to prevent $G( f_{t_{1}})$ from storing information about $f_{t_{2}}$.
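Since $M$ is parametrized by a cross convolution, its core operation is a cross-correlation of the feature maps of $f_{t_1}$ with kernels predicted from the motion features. The following minimal sketch illustrates only this operation on toy data; the shapes, the single-channel setting and the fixed kernel are our own illustrative assumptions, not the exact architecture used here:

```python
# Minimal sketch of the cross-convolution idea behind M: kernels predicted from
# the motion features are cross-correlated with the feature map of f_{t1}.
# Shapes and the hand-picked kernel are illustrative assumptions only.

def cross_correlate_2d(feature, kernel):
    """'Valid' 2D cross-correlation of one feature map with one kernel."""
    fh, fw = len(feature), len(feature[0])
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for i in range(fh - kh + 1):
        row = []
        for j in range(fw - kw + 1):
            row.append(sum(feature[i + di][j + dj] * kernel[di][dj]
                           for di in range(kh) for dj in range(kw)))
        out.append(row)
    return out

# toy example: a 4x4 feature map of f_{t1} and a 2x2 "predicted" kernel
feat_t1 = [[float(i + j) for j in range(4)] for i in range(4)]
kernel = [[0.0, 1.0], [0.0, 0.0]]   # pure shift-selector kernel
pred = cross_correlate_2d(feat_t1, kernel)
# with this kernel the output is the input shifted by one pixel horizontally
assert pred[0][0] == feat_t1[0][1]
```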
\textbf{Frame selection:} assuming that the background semantics stay close throughout the short clips, only the background in the first frame is reconstructed. First and last frames are chosen to perform foreground reconstruction, since they represent the most challenging pair in the clip.
\textbf{Loss function:} the model is optimized to minimize the L1 loss between the original frames and their reconstruction. In particular, the loss function is defined from a decomposition of the input video volume $x$ of $T$ frames into the foreground $x_{fg}$ and background $x_{bg}$ volumes:
\begin{equation}
\begin{split}
x_{fg} = x \cdot b_{fg} \\
x_{bg} = x \cdot (1- b_{fg})
\end{split}
\end{equation}
where $b_{fg}$ corresponds to a volume of binary values, so that $1$ corresponds to foreground pixels and $0$ to background ones.
This decomposition allows defining the reconstruction loss $L_{rec}(x)$ over the video volume $x$ as the sum of three terms:
\begin{equation}
\label{eq:loss}
L_{rec}(x) = L^{1}_{fg}(x) + L^{1}_{bg}(x) + L^{T}_{fg}(x)
\end{equation}
where the components $L^{1}_{fg}$, $L^{1}_{bg}$ and $L^{T}_{fg}$ represent the reconstruction losses for the first foreground, the first background, and the last foreground, respectively. These three terms are particularizations at the first ($t=1$) and last ($t=T$) frames of the generic foreground $L^{t}_{fg}(x)$ and background $L^{t}_{bg}(x)$ reconstruction losses:
\begin{equation}
L^{t}_{fg}(x) = \frac{1}{A^t}\sum_{i,j}{W^t[i,j] \cdot \left|\hat{x}^{t}_{fg}[i,j] - x_{fg}[i,j,t]\right|}
\end{equation}
\begin{equation}
L^{t}_{bg}(x) = \frac{1}{A^t}\sum_{i,j}{\left|\hat{x}^t_{bg}[i,j] - x_{bg}[i,j,t]\right|}
\end{equation}
where $\hat{x}^t$ denotes a reconstructed foreground/background at time $t$, $A^t$ is the area of the reconstructed frame at time $t$, and $W^t$ is an element-wise weighting mask at time $t$ designed to balance the contributions of foreground and background pixels:
\begin{equation}
W^{t}[i,j]=
\begin{cases}
1 & \text{if } (i,j) \in \text{background}\\
\max\left[ 1, \frac{A^t_{bg}}{A^t_{fg}} \right] & \text{if } (i,j) \in \text{foreground}
\end{cases}
\end{equation}
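To make the weighting concrete, the following pure-Python sketch (a toy illustration of ours, not the released training code) computes the weighted foreground term $L^{t}_{fg}$ for a single frame:

```python
# Sketch of the weighted foreground reconstruction loss L^t_fg for one frame,
# following the equations above; frame size and values are toy data and the
# helper name is ours, not from the original implementation.

def fg_loss(x_hat, x, b_fg):
    """Weighted L1 between reconstructed and true foreground of one frame."""
    h, w = len(x), len(x[0])
    area = h * w
    n_fg = sum(b_fg[i][j] for i in range(h) for j in range(w))
    n_bg = area - n_fg
    w_fg = max(1.0, n_bg / n_fg) if n_fg else 1.0   # W^t on foreground pixels
    loss = 0.0
    for i in range(h):
        for j in range(w):
            weight = w_fg if b_fg[i][j] else 1.0
            x_fg = x[i][j] * b_fg[i][j]              # x_fg = x . b_fg
            loss += weight * abs(x_hat[i][j] - x_fg)
    return loss / area

# toy 2x2 frame with one foreground pixel: its weight is max(1, 3/1) = 3
x    = [[1.0, 0.5], [0.25, 0.0]]
b    = [[1,   0  ], [0,    0  ]]
xhat = [[0.5, 0.0], [0.0,  0.0]]
assert abs(fg_loss(xhat, x, b) - (3 * 0.5) / 4) < 1e-12
```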
During preliminary experiments, we observed that the reconstruction of the first foreground always outperformed the reconstruction of the last one by a large margin, given the increased difficulty of the latter task. In order to obtain a finer reconstruction of the last foreground, we introduce an L2 loss $L_{feat}$ on $G(f_{t_{2}})$. The pseudo ground truth for this task is obtained by extracting the first foreground features from the encoder fed with the temporally reversed clip. The final loss to optimize is the following:
\begin{equation}
L_{total}(x) = L_{rec}(x) + L_{feat}(x)
\end{equation}
\section{Experimental setup}
Please note again that we show results trained with ground truth masks to check the feasibility of our proposal; the purely unsupervised framework, generating masks with uNLC \cite{pathak2017learning}, remains future work.
\textbf{Dataset:} there are 24 classes out of 101 in UCF-101 with localization annotations \cite{UCF101,THUMOS15}. Following \cite{pathak2017learning}, we first evaluate the proposed framework with supervised annotations and use the bounding boxes in this subset of UCF-101 for such purpose. Evaluating the proposal on weak annotations collected by means of unsupervised methods remains future work. We follow the original training and test splits and also hold out 10\% of the training videos as a validation set in order to perform early stopping and prevent the network from overfitting the training data.
\textbf{Training details:} videos are split into clips of 16 frames each. These clips are then resized to $128\times128$ and their pixel values are scaled and shifted to $[-1, 1]$. The clips are randomly flipped temporally or horizontally for data augmentation. Weight decay with rate $10^{-3}$ is added as regularization. The network is trained for 125 epochs with the Adam optimizer and a learning rate of $10^{-4}$ on batches of 40 clips.
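The preprocessing just described can be sketched as follows (a toy pure-Python illustration on a tiny clip; the actual pipeline operates on $16\times128\times128$ tensors):

```python
# Sketch of the clip preprocessing described above: pixel values scaled and
# shifted to [-1, 1], plus random temporal or horizontal flips.
import random

random.seed(4)

def preprocess(clip, rng=random):
    """clip: list of frames, each a list of rows of pixel values in [0, 255]."""
    out = [[[2.0 * px / 255.0 - 1.0 for px in row] for row in frame]
           for frame in clip]
    if rng.random() < 0.5:                       # temporal flip
        out = out[::-1]
    if rng.random() < 0.5:                       # horizontal flip
        out = [[row[::-1] for row in frame] for frame in out]
    return out

clip = [[[0, 255]], [[255, 0]]]                  # two 1x2 toy frames
proc = preprocess(clip)
flat = [px for frame in proc for row in frame for px in row]
assert all(-1.0 <= px <= 1.0 for px in flat)     # values now live in [-1, 1]
assert {abs(px) for px in flat} == {1.0}         # 0 -> -1, 255 -> 1
```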
\section{Results}
\begin{figure*}[ht]
\centering
\includegraphics[width=\linewidth]{./figs/results.pdf}\\
\caption{Reconstruction results on the test set. For each example, the top row shows the reconstruction while the bottom one contains the ground truth. Each column shows the segmentation of foreground in first frame, background in first frame and foreground in last frame, respectively.}
\label{fig:recon_results}
\end{figure*}
We evaluated our model on the test set for the reconstruction task. To better demonstrate the efficiency of our proposed pretraining pipeline, we also trained the network to perform action recognition with the pretrained features.
\textbf{Reconstruction task:} reconstruction results on the test set are shown in Figure \ref{fig:recon_results}. From these results, we can clearly see that the network can already predict foreground segmentations similar to the ground truth. However, the image reconstructions are still blurry. We argue that this is due to the properties of the L1 loss we adopt \cite{mathieu2015deep}. One interesting fact is that the network has learned to generalize the foreground to other moving objects in the scene even though they are not included in the annotations. For example, in the result shown in the top-right corner, instead of only segmenting the person, the dog walking beside the person is also included. This suggests that the network has successfully learned to identify foreground from motion cues.
Besides foreground and background features, these results also demonstrate a good extraction of motion features. The learned motion features capture both global motions, e.g.~translation of the foreground, and local motions, e.g.~changes of human pose. In the bottom-center result, the kernels generated from the motion features successfully shift the object from the right to the middle and change its gesture.
\textbf{Action recognition:} a good pretraining pipeline should show better performance than random initialization on typical discriminative tasks, especially when training data is scarce \cite{misra2016shuffle,luo2017motionprediction,pathak2017learning,vondrick2016generating,wang2015unsupervised}. We also conducted comparative experiments on the task of action recognition. By discarding the decoders in our framework and training a linear softmax layer on top of the disentangled features, we obtain a simple network for action recognition. For the first experiment, we pretrain our encoder on the subset of UCF-101 with the settings discussed above and then fine-tune the whole action recognition network, with the added softmax layer, on the same subset. As baselines, we trained two other action recognition networks, one with all weights initialized randomly and another pretrained with an unsupervised autoencoder architecture. This autoencoder shares the same 3D convolutional encoder architecture as ours, while its decoder is a mirrored version of the encoder with the pooling operations replaced by convolutions.
During training, we observed that our pretrained model reached 90\% accuracy on the training set immediately after one epoch, while the randomly initialized network took 130 epochs to achieve it. All three models reached around 96\% accuracy at the end of training and encountered severe overfitting problems. The accuracy of the different methods on the validation set during training is shown in Figure \ref{fig:val_results}. The best accuracy obtained on the test set with our pretrained model is 62.5\%, while it drops to 52.2\% and 56.8\% when using a random initialization and the autoencoder as pretraining scheme, respectively, as shown in Table \ref{tab:test_acc}. We observe a margin of more than 10\% in accuracy between our proposed method and random initialization on both the validation and test sets. This further demonstrates that, with our proposal, the network learns features that generalize better. These results are especially promising given the small amount of data used during pretraining, which is just a fraction of UCF-101. While this demonstrates the efficiency of the approach, using a larger dataset for pretraining should provide additional gains and better generalization capabilities.
\begin{table}
\centering
\caption{Action recognition accuracy of different methods on the test subset of UCF-101.}
\label{tab:test_acc}
\resizebox{0.6\linewidth}{!}{
\begin{tabular}{cc}
\toprule
\textbf{Method} & \textbf{Accuracy} \\
\midrule
Random initialization & 52.2\% \\
\midrule
Pretrained (autoencoder) & 56.8\% \\
\midrule
Pretrained (ours) & \textbf{62.5\%} \\
\bottomrule
\end{tabular}
}
\end{table}
\begin{figure}
\centering
\includegraphics[width=\linewidth]{./figs/val_figure.png}\\
\caption{Action recognition accuracy on the validation set. This figure shows the accuracy of each method on the validation set during training.}
\label{fig:val_results}
\end{figure}
\section{Conclusions}
This work has proposed a novel framework for unsupervised learning of video features capable of disentangling motion, foreground and background.
Our method mainly exploits motion in videos and is inspired by human perceptual grouping based on motion cues.
Our experiments using ground truth boxes render convincing results on both frame reconstruction and action recognition, showing the potential of the proposed architecture.
However, multiple aspects of our work still need to be explored. As future work, we plan to (1) introduce unsupervised learning for foreground segmentation as well, as proposed in uNLC \cite{pathak2017learning}; (2) train with a larger amount of unlabeled data; (3) introduce an adversarial loss to improve the sharpness of the reconstructed frames \cite{mathieu2015deep}; and (4) fill the gap of absent motion features between the first and last frames by reconstructing a random frame of the clip.
Our model and source code are publicly available at \url{https://imatge-upc.github.io/unsupervised-2017-cvprw/}.
\section*{Acknowledgments}
The Image Processing Group at UPC is supported by the projects TEC2013-43935-R and TEC2016-75976-R, funded by the Spanish Ministerio de Economia y Competitividad and the European Regional Development Fund (ERDF). The Image Processing Group at UPC is a SGR14 Consolidated Research Group recognized and sponsored by the Catalan Government (Generalitat de Catalunya) through its AGAUR office.
The contribution from the Barcelona Supercomputing Center has been supported by project TIN2015-65316, funded by the Spanish Ministry of Science and Innovation, and contract 2014-SGR-1051 of the Generalitat de Catalunya.
\section{Introduction}
\label{sec:introduction}
The development of computational capacities in the last decades has allowed physicists to use numerical simulations to study physical properties at the atomic scale with the help of statistical physics.
In particular, Molecular Dynamics (MD) consists in integrating the equations of motion of the atoms in order to sample probability measures in a high-dimensional space~\cite{book_frenkel_2001,book_tuckerman_2010,book_leimkuhler_2015}.
However, traditional microscopic methods suffer from limitations in terms of accessible time and length scales, which drives the development of mesoscopic coarse-grained methods.
These mesoscopic models aim at greatly reducing the number of degrees of freedom explicitly described, and thus the computational cost, while retaining some properties absent from more macroscopic models such as hydrodynamics.
Smoothed Dissipative Particle Dynamics (SDPD)~\cite{espanol_2003} belongs to this class of mesoscopic methods.
It couples a Lagrangian particle discretization of the Navier-Stokes equations, Smoothed Particle Hydrodynamics (SPH)~\cite{lucy_1977,monaghan_1977}, with thermal fluctuations modeled as in Dissipative Particle Dynamics with Energy conservation (DPDE)~\cite{avalos_1997,espanol_1997}.
It is thus able to deal with hydrodynamics at the nanoscale and has been shown to give results consistent with MD for a wide range of resolutions, at equilibrium and for shock waves~\cite{faure_2016}, or for dynamical properties such as the diffusion coefficient of a colloid in a SDPD bath~\cite{vazquez_2009,litvinov_2009}.
SDPD has in particular been used to study colloids~\cite{vazquez_2009,bian_2012}, polymer suspensions~\cite{litvinov_2008} and fluid mixtures~\cite{petsev_2016}.
One of the main challenges for mesoscopic models incorporating fluctuations is to develop efficient, stable and parallelizable numerical schemes for the integration of their stochastic dynamics.
Most schemes are based on a splitting strategy~\cite{trotter_1959,strang_1968} where the Hamiltonian part is integrated through a Velocity Verlet scheme~\cite{verlet_1967}.
A traditional and popular algorithm first proposed for Dissipative Particle Dynamics~\cite{hoogerbrugge_1992} and later extended to DPDE~\cite{stoltz_2006} relies on a pairwise treatment of the fluctuation/dissipation part~\cite{shardlow_2003}.
The adaptation of this scheme to dynamics preserving various invariants has led to a class of schemes called Shardlow-like Splitting Algorithms (SSA)~\cite{lisal_2011}.
A major drawback in this strategy is the complexity of its parallelization~\cite{larentzos_2014}.
Other schemes have been recently proposed in~\cite{homman_2016} to enhance its use in parallel simulations.
All these schemes are however hindered by instabilities when internal energies become negative.
This especially happens at small temperatures or when small heat capacities are considered, typically for small mesoparticles.
It has been proposed to use Monte Carlo principles to sample the invariant measure of DPDE, by resampling the velocities along the lines of centers according to a Maxwell-Boltzmann distribution and redistributing the energy variation into internal energies according to some prescription~\cite{langenberg_2016}.
This approach leads however to a dynamics which is not consistent with DPDE.
It was proposed in~\cite{stoltz_2017} to correct discretization schemes for DPDE by rejecting unlikely or forbidden moves through a Metropolis procedure, which prevents the appearance of negative internal energies and improves the stability of the integration schemes.
There exist relatively few references in the literature about the integration of the full SDPD dynamics.
Most works focus on numerical schemes in the isothermal setting~\cite{litvinov_2010}, avoiding the need to preserve the total energy during the simulation.
In a previous article~\cite{faure_2016}, we introduced an adaptation of the Shardlow splitting to SDPD, allowing a good control of the energy conservation.
The aim of this work is to provide more details about the possible integration of SDPD in an energy conserving framework and most importantly to increase the stability for small particle sizes by adapting the Metropolization procedure described in~\cite{stoltz_2017}.
This article is organized as follows.
We first present in Section~\ref{sec:equations} the equations of SDPD as reformulated in~\cite{faure_2016}.
In Section~\ref{sec:schemes}, we recall the Shardlow splitting for SDPD and introduce a Metropolis step to enhance the stability of the algorithm.
We evaluate the properties of the Shardlow and Metropolis schemes by means of numerical simulations in Section~\ref{sec:results}.
Our conclusions are gathered in Section~\ref{sec:conclusion}.
\section{Smoothed Dissipative Particle Dynamics}
\label{sec:equations}
At the hydrodynamic scale, the dynamics of the fluid is governed by the Navier-Stokes equations~\eqref{eq:navier-stokes}, which read in their Lagrangian form when the heat conduction is neglected (for times $t\geq0$ and positions $\vect{x}$ in a domain $\Omega\subset \mathbb{R}^3$):
\begin{equation}
\label{eq:navier-stokes}
\begin{aligned}
{\rm D}_t\rho + \rho\,{\rm div}_{\vect{x}}\vect{v} &= 0,\\
\rho {\rm D}_t\vect{v} &= {\rm div}_{\vect{x}}\left(\mtx{\sigma}\right),\\
\rho{\rm D}_t\left(u + \frac12\vect{v}^2\right) &= {\rm div}_{\vect{x}}\left(\mtx{\sigma}\vect{v}\right).
\end{aligned}
\end{equation}
In these equations, the material derivative used in the Lagrangian description is defined as
\[
D_t f(t,\vect{x}) = \partial_t f(t,\vect{x}) + \vect{v}(t,\vect{x})\vect{\nabla}_{\vect{x}}f(t,\vect{x}).
\]
The unknowns are $\rho(t,\vect{x}) \in \mathbb{R}_+$ the density of the fluid, $\vect{v}(t,\vect{x}) \in \mathbb{R}^3$ its velocity, $u(t,\vect{x}) \in \mathbb{R}$ its internal energy and $\mtx{\sigma}(t,\vect{x}) \in \mathbb{R}^{3\times 3}$ the stress tensor:
\begin{equation}
\label{eq:stress-tensor}
\mtx{\sigma} = P\mtx{\mathrm{Id}} + \eta(\vect{\nabla} \vect{v} + (\vect{\nabla}\vect{v})^T) + \left(\zeta-\frac23\eta\right){\rm div}(\vect{v})\mtx{\mathrm{Id}},
\end{equation}
where $P$ is the pressure of the fluid, $\eta$ the shear viscosity and $\zeta$ the bulk viscosity.
In the following, we first present the SPH discretization of the Navier-Stokes equations in Section~\ref{sec:sph} before introducing the SDPD equations reformulated in terms of internal energies~\cite{faure_2016} in Section~\ref{sec:eom-sdpd}.
\subsection{Smoothed Particle Hydrodynamics}
\label{sec:sph}
Smoothed Particle Hydrodynamics~\cite{lucy_1977,monaghan_1977} is a Lagrangian discretization of the Navier-Stokes equations~(\ref{eq:navier-stokes}) on a finite number $N$ of fluid particles playing the role of interpolation nodes.
These fluid particles are associated with a portion of fluid of mass $m$.
They are located at positions $\vect{q}_i \in \Omega$ and have a momentum $\vect{p}_i \in\mathbb{R}^{3}$.
The internal degrees of freedom are represented by an internal energy $\varepsilon_i$.
In general, the energies are bounded below.
Upon shifting the minimum of the internal energy, we may consider that the internal energies remain positive ($\varepsilon_i>0$).
\subsubsection{Approximation of field variables and their gradients}
\label{sec:approx-sph}
A key ingredient in the SPH discretization is the use of a particle-based interpolation of the field variables.
This leads to an approximation of the field variables by averaging over their values at the particle positions weighted by a smoothing kernel function $W$.
The kernel is generally required to be non-negative, regular, normalized as $\int_{\Omega} W(\vect{r})d\vect{r} = 1$ and with finite support~\cite{book_liu_2003}.
We introduce the smoothing length $h$ defined such that $W(\vect{r})=0$ if $\abs{\vect{r}} \geq h $.
In the sequel, we use the notation $r = \abs{\vect{r}}$.
In this work, we rely on a cubic spline~\cite{liu_2003}, whose expression reads
\begin{equation}
\label{eq:sdpd-cubic-w}
W(\vect{r}) = \left\{
\begin{array}{cl}
\displaystyle \frac{8}{\pi h^3} \left(1-6\frac{r^2}{h^2}+6\frac{r^3}{h^3}\right) & \displaystyle \text{ if } r \leq \frac{h}{2},\\[1em]
\displaystyle \frac{16}{\pi h^3} \left(1-\frac{r}{h}\right)^3 & \displaystyle \text{ if } \frac{h}{2} \leq r \leq h,\\[1em]
0 & \displaystyle \text{ if } r \geq h.
\end{array}
\right.
\end{equation}
The field variables are then approximated as
\begin{equation}
\label{eq:sph-approx}
f(\vect{x}) \approx \sum_{i=1}^N f_i W(\vect{x}-\vect{q}_i),
\end{equation}
where $f_i$ denotes the value of the field $f$ on the particle~$i$.
The approximation of the gradient $\vect{\nabla}_{\vect{x}} f$ is obtained by differentiating equation~\eqref{eq:sph-approx}, which yields
\[
\vect{\nabla}_{\vect{x}} f(\vect{x}) \approx \sum_{i=1}^N f_i \vect{\nabla}_{\vect{x}}W(\vect{x}-\vect{q}_i).
\]
In order to have more explicit expressions, we introduce the function $F$ such that $\vect{\nabla}_{\vect{r}} W(\vect{r}) = -F(\abs{\vect{r}})\vect{r}$.
In the case of the cubic spline~(\ref{eq:sdpd-cubic-w}), it reads
\[
F(r) = \left\{
\begin{array}{cl}
\displaystyle \frac{48}{\pi h^5} \left(2-3\frac{r}{h}\right) & \displaystyle \text{ if } r \leq \frac{h}{2},\\[1em]
\displaystyle \frac{48}{\pi h^4} \frac1{r} \left(1-\frac{r}{h}\right)^2& \displaystyle \text{ if } \frac{h}{2} \leq r \leq h,\\[1em]
0 & \displaystyle \text{ if } r \geq h.
\end{array}
\right.
\]
The gradient approximation can then be rewritten as
\[
\vect{\nabla}_{\vect{x}} f(\vect{x}) \approx -\sum_{i=1}^N f_i F(\abs{\vect{x}-\vect{q}_i})(\vect{x}-\vect{q}_i).
\]
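A minimal implementation of the cubic-spline kernel $W$ and of the associated function $F$, for scalar distances $r$, could read as follows (a sketch for illustration, not tied to a specific SPH code):

```python
import numpy as np

# Minimal sketch of the cubic-spline kernel W(r) and of the function F(r)
# defined by grad_r W(r) = -F(|r|) r; h is the smoothing length and r a
# scalar distance.
def W(r, h):
    u = r / h
    if u <= 0.5:
        return 8.0 / (np.pi * h**3) * (1.0 - 6.0 * u**2 + 6.0 * u**3)
    if u <= 1.0:
        return 16.0 / (np.pi * h**3) * (1.0 - u)**3
    return 0.0

def F(r, h):
    u = r / h
    if u <= 0.5:
        return 48.0 / (np.pi * h**5) * (2.0 - 3.0 * u)
    if u <= 1.0:
        return 48.0 / (np.pi * h**4) * (1.0 - u)**2 / r
    return 0.0
```

Both branches of $W$ and $F$ match at $r=h/2$, and the kernel integrates to one over $\mathbb{R}^3$.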
In order to simplify the notation, we define the following quantities for two particles $i$ and $j$:
\[
\vect{r}_{ij} = \vect{q}_i - \vect{q}_j,\quad
r_{ij} = \abs{\vect{r}_{ij}},\quad
\vect{e}_{ij} = \frac{\vect{r}_{ij}}{r_{ij}},\quad
F_{ij} = F(r_{ij}).
\]
We can associate a density $\rho_i$ and a volume $\mathcal{V}_i$ with each particle as
\begin{equation}
\label{eq:sdpd-rho-v}
\rho_i(\vect{q}) = \sum_{j=1}^N mW(\vect{r}_{ij}),\quad
\mathcal{V}_i(\vect{q}) = \frac{m}{\rho_i(\vect{q})}.
\end{equation}
The corresponding approximations of the density gradient evaluated at the particle points read
\begin{equation}
\label{eq:gradient-rho}
\vect{\nabla}_{\vect{q}_j} \rho_i = \left\{
\begin{array}{cl}
m F_{ij}\vect{r}_{ij} & \text{ if } j\neq i,\\[.5em]
-m \sum\limits_{k\neq i} F_{ik}\vect{r}_{ik} & \text{ if } j=i.
\end{array}
\right.
\end{equation}
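The density summation can be sketched as follows (repeating the kernel definition for self-containment; the particle mass $m$ and smoothing length $h$ are illustrative values):

```python
import numpy as np

# Sketch of the per-particle density rho_i and volume V_i = m / rho_i;
# the kernel is repeated here for self-containment, and m, h are
# illustrative values.
def W(rvec, h):
    u = np.linalg.norm(rvec) / h
    if u <= 0.5:
        return 8.0 / (np.pi * h**3) * (1.0 - 6.0 * u**2 + 6.0 * u**3)
    if u <= 1.0:
        return 16.0 / (np.pi * h**3) * (1.0 - u)**3
    return 0.0

def densities(q, m, h):
    # rho_i = sum_j m W(q_i - q_j); the j = i term contributes m W(0)
    return np.array([sum(m * W(qi - qj, h) for qj in q) for qi in q])

rng = np.random.default_rng(1)
q = rng.uniform(0.0, 2.0, size=(10, 3))   # 10 particles in a 2x2x2 box
m, h = 1.0, 1.0
rho = densities(q, m, h)
vol = m / rho                             # V_i = m / rho_i
```

Note that each $\rho_i$ is bounded below by the self term $mW(0)$, and that the densities depend only on relative positions.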
\subsubsection{Thermodynamic closure}
\label{sec:thermo-closure}
As in the Navier-Stokes equations, an equation of state is required to close the set of equations provided by the SPH discretization.
This equation of state relates the entropy $S_i$ of the mesoparticle $i$ with its density $\rho_i(\vect{q})$ (as defined by~\eqref{eq:sdpd-rho-v}) and its internal energy $\varepsilon_i$ through an entropy function
\begin{equation}
\label{eq:sdpd-eos}
S_i(\varepsilon_i,\vect{q})=\mathcal{S}(\varepsilon_i,\rho_i(\vect{q})).
\end{equation}
The equation of state $\mathcal{S}$ can be computed by microscopic simulations or by an analytic expression modeling the material behavior.
It is then possible to assign to each particle a temperature
\[
T_i(\varepsilon_i,\vect{q}) = \left[\frac1{\partial_{\varepsilon}\mathcal{S}}\right](\varepsilon_i,\rho_i(\vect{q})),
\]
pressure
\[
P_i(\varepsilon_i,\vect{q}) = -\frac{\rho_i(\vect{q})^2}{m}\left[\frac{\partial_{\rho}\mathcal{S}}{\partial_{\varepsilon}\mathcal{S}}\right](\varepsilon_i,\rho_i(\vect{q})),
\]
and heat capacity at constant volume
\[
C_i(\varepsilon_i,\vect{q}) = -\left[\frac{(\partial_{\varepsilon} \mathcal{S})^2}{\partial_{\varepsilon}^2\mathcal{S}}\right](\varepsilon_i,\rho_i(\vect{q})).
\]
To simplify the notation, we omit in Section~\ref{sec:eom-sdpd} the dependence of $T_i$, $P_i$ and $C_i$ on the variables $\varepsilon_i$ and $\vect{q}$.
\subsection{Equations of motion for SDPD}
\label{sec:eom-sdpd}
Smoothed Dissipative Particle Dynamics~\cite{espanol_2003} is a top-down mesoscopic method relying on the SPH discretization of the Navier-Stokes equations with the addition of thermal fluctuations which are modeled by a stochastic force.
In its energy reformulation~\cite{faure_2016}, SDPD is a set of stochastic differential equations for the following variables: the positions $\vect{q}_i\in\Omega\subset\mathbb{R}^{3}$, the momenta $\vect{p}_i\in\mathbb{R}^{3}$ and the energies $\varepsilon_i\in \mathbb{R}$ for $i=1\dots N$.
The dynamics can be split into several elementary dynamics, the first one being a conservative dynamics derived from the pressure part of the stress tensor~\eqref{eq:stress-tensor} (see Section~\ref{sec:conservative-sdpd}) and a set of pairwise fluctuation and dissipation dynamics stemming from the viscous terms in~\eqref{eq:stress-tensor} coupled with random fluctuations (see Section~\ref{sec:fd-sdpd}).
\subsubsection{Conservative forces}
\label{sec:conservative-sdpd}
The elementary force between particles $i$ and $j$ arising from the discretization of the pressure gradient in the Navier-Stokes momentum equation reads
\begin{equation}
\label{eq:cons-forces}
\vect{\mathcal{F}}_{{\rm cons},ij} = m^2\left(\frac{P_i}{\rho_i^2}+\frac{P_j}{\rho_j^2}\right)F_{ij}\vect{r}_{ij}.
\end{equation}
In its original formulation~\cite{espanol_2003}, this conservative dynamics clearly appears as a Hamiltonian dynamics with a potential $\mathcal{E}(\vect{q},S_i)$ relating the energy with the positions and the particle entropy $S_i$.
The entropies are then invariants of this subdynamics.
In the energy reformulation, the entropies are no longer considered as such.
Instead the focus is on the total energy
\[
E(\vect{q},\vect{p},\varepsilon) = \sum_{i=1}^N \varepsilon_i + \frac{\vect{p}_i^2}{2m},
\]
which is preserved by the dynamics.
This can be ensured by computing the variation of the particle volume as
\[
{\rm d}\mathcal{V}_i = -\frac{m}{\rho_i^2}{\rm d}\rho_i = \sum_{j\neq i} \frac{m^2}{\rho_i^2}F_{ij}\vect{r}_{ij}\cdot\vect{v}_{ij}{\rm d} t,
\]
leading to the variation of the internal energy given by
\[
{\rm d}\varepsilon_i = - P_i{\rm d}\mathcal{V}_i+ T_i{\rm d} S_i = -\sum_{j\neq i}\frac{m^2P_i}{\rho_i(\vect{q})^2}F_{ij}\vect{r}_{ij}\cdot\vect{v}_{ij}\,{\rm d} t.
\]
This allows us to write the conservative part of the dynamics as
\begin{equation}
\label{eq:sdpd-cons}
\left\{
\begin{aligned}
{\rm d}\vect{q}_i &= \frac{\vect{p}_i}{m}\,{\rm d} t,\\
{\rm d}\vect{p}_i &= \sum_{j\neq i} \vect{\mathcal{F}}_{{\rm cons},ij}\,{\rm d} t,\\
{\rm d}\varepsilon_i &= -\sum_{j\neq i}\frac{m^2P_i}{\rho_i(\vect{q})^2}F_{ij}\vect{r}_{ij}\cdot\vect{v}_{ij}\,{\rm d} t.
\end{aligned}
\right.
\end{equation}
This dynamics preserves by construction the total momentum $\displaystyle \sum_{i=1}^N \vect{p}_i$ and the total energy $E(\vect{q},\vect{p},\varepsilon)$.
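These conservations can be checked numerically: the kinetic-energy drift $\sum_i (\vect{p}_i/m)\cdot{\rm d}\vect{p}_i/{\rm d}t$ and the internal-energy drift $\sum_i {\rm d}\varepsilon_i/{\rm d}t$ cancel exactly, whatever the pressures and densities. In the sketch below, $P_i$, $\rho_i$ and $F_{ij}$ are random stand-ins rather than values computed from an equation of state or a kernel.

```python
import numpy as np

# Numerical check that the conservative dynamics conserves the total
# energy and momentum: pressures, densities and F_ij are random
# stand-ins (only the symmetry F_ij = F_ji matters).
rng = np.random.default_rng(2)
N, m = 6, 1.0
q = rng.normal(size=(N, 3))
p = rng.normal(size=(N, 3))
P = rng.uniform(1.0, 2.0, size=N)       # stand-in pressures P_i
rho = rng.uniform(0.8, 1.2, size=N)     # stand-in densities rho_i
Fij = rng.uniform(0.0, 1.0, size=(N, N))
Fij = 0.5 * (Fij + Fij.T)               # F_ij = F_ji, as for a radial kernel

dp = np.zeros((N, 3))                   # momentum drift
deps = np.zeros(N)                      # internal-energy drift
for i in range(N):
    for j in range(N):
        if j == i:
            continue
        rij = q[i] - q[j]
        vij = (p[i] - p[j]) / m
        coef = m**2 * (P[i] / rho[i]**2 + P[j] / rho[j]**2) * Fij[i, j]
        dp[i] += coef * rij
        deps[i] += -m**2 * P[i] / rho[i]**2 * Fij[i, j] * (rij @ vij)

dE = float((p / m * dp).sum() + deps.sum())   # total-energy drift
```

The cancellation is an algebraic identity (relabel $i\leftrightarrow j$ in the pairwise sums), so it holds to machine precision.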
\subsubsection{Fluctuation and Dissipation}
\label{sec:fd-sdpd}
In order to give the expression of the viscous and fluctuating part of the dynamics, we define the relative velocity for a pair of particles $i$ and $j$ as
\[
\vect{v}_{ij} = \frac{\vect{p}_i}{m}-\frac{\vect{p}_j}{m}.
\]
The viscous term in the Navier-Stokes equations~(\ref{eq:navier-stokes}) is discretized by a pairwise dissipative force, while the thermal fluctuations are modeled by a pairwise stochastic force.
In the spirit of DPDE, the pairwise fluctuation and dissipation dynamics for $i<j$ is chosen of the following form:
\begin{equation}
\label{eq:sdpd-simple-fluct}
\left\{
\begin{aligned}
{\rm d}\vect{p}_i &= -\mtx{\Gamma}_{ij}\vect{v}_{ij}\,{\rm d} t + \mtx{\Sigma}_{ij}{\rm d}\vect{B}_{ij},\\
{\rm d}\vect{p}_j &= \mtx{\Gamma}_{ij}\vect{v}_{ij}\,{\rm d} t - \mtx{\Sigma}_{ij}{\rm d}\vect{B}_{ij},\\
{\rm d}\varepsilon_i &= \frac12\left[\vect{v}_{ij}^T\mtx{\Gamma}_{ij}\vect{v}_{ij} - \frac{\textrm{Tr}(\mtx{\Sigma}_{ij}\mtx{\Sigma}_{ij}^T)}{m}\right]{\rm d} t -\frac12 \vect{v}_{ij}^T\mtx{\Sigma}_{ij}{\rm d}\vect{B}_{ij},\\
{\rm d}\varepsilon_j &= \frac12\left[\vect{v}_{ij}^T\mtx{\Gamma}_{ij}\vect{v}_{ij} - \frac{\textrm{Tr}(\mtx{\Sigma}_{ij}\mtx{\Sigma}_{ij}^T)}{m}\right]{\rm d} t -\frac12 \vect{v}_{ij}^T\mtx{\Sigma}_{ij}{\rm d}\vect{B}_{ij},
\end{aligned}
\right.
\end{equation}
where $\vect{B}_{ij}$ is a $3$-dimensional vector of standard Brownian motions and $\mtx{\Gamma}_{ij}$, $\mtx{\Sigma}_{ij}$ are $3\times3$ symmetric matrices.
By construction, \eqref{eq:sdpd-simple-fluct} preserves the total momentum in the system since ${\rm d} \vect{p}_i = -{\rm d}\vect{p}_j$.
Furthermore, as in DPDE, the equations for the energy variables are determined to ensure the conservation of the total energy $E(\vect{q},\vect{p},\varepsilon)$.
As $\displaystyle {\rm d} \varepsilon_i = -\frac12 {\rm d} \left(\frac{\vect{p}_i^2}{2m} + \frac{\vect{p}_j^2}{2m}\right)$, It\^o calculus yields the resulting equations in~\eqref{eq:sdpd-simple-fluct}.
We consider friction and fluctuation matrices of the form
\begin{equation}
\label{eq:fluct-gamma}
\begin{aligned}
\mtx{\Gamma}_{ij} &= \gamma^{\parallel}_{ij}(\varepsilon_i,\varepsilon_j,\vect{q})\mtx{P}^{\parallel}_{ij} + \gamma^{\perp}_{ij}(\varepsilon_i,\varepsilon_j,\vect{q})\mtx{P}^{\perp}_{ij},\\
\mtx{\Sigma}_{ij} &= \sigma^{\parallel}_{ij}(\varepsilon_i,\varepsilon_j,\vect{q})\mtx{P}^{\parallel}_{ij} + \sigma^{\perp}_{ij}(\varepsilon_i,\varepsilon_j,\vect{q})\mtx{P}^{\perp}_{ij},
\end{aligned}
\end{equation}
with the projection matrices $\mtx{P}^{\parallel}_{ij}$ and $\mtx{P}^{\perp}_{ij}$ given by
\[
\mtx{P}_{ij}^{\parallel} = \vect{e}_{ij}\otimes\vect{e}_{ij},\quad
\mtx{P}_{ij}^{\perp} = \mtx{\mathrm{Id}} - \mtx{P}_{ij}^{\parallel}.
\]
Introducing the coefficients
\[
\kappa_{ij}^{\parallel} = \left(\frac{10}3\eta+4\zeta\right)\frac{m^2F_{ij}}{\rho_i\rho_j},\quad \kappa_{ij}^{\perp} =\left(\frac{5\eta}{3}-\zeta\right)\frac{m^2F_{ij}}{\rho_i\rho_j},
\]
defined from the fluid viscosities $\eta$ and $\zeta$ appearing in the stress tensor~(\ref{eq:stress-tensor}), we can choose the friction and fluctuations coefficients as
\begin{equation}
\label{eq:sdpd-gamma-sigma}
\begin{aligned}
d_{ij}(\varepsilon_i,\varepsilon_j,\vect{q}) &= k_{\rm B}\frac{T_iT_j}{(T_i+T_j)^2}\left(\frac1{C_i}+\frac1{C_j}\right),\\
\gamma^{\theta}_{ij}(\varepsilon_i,\varepsilon_j,\vect{q}) &= \kappa_{ij}^{\theta}\left( 1 - d_{ij}(\varepsilon_i,\varepsilon_j,\vect{q}) \right),\\
\sigma^{\theta}_{ij}(\varepsilon_i,\varepsilon_j,\vect{q}) &= 2\sqrt{\kappa_{ij}^{\theta} k_{\rm B}\frac{T_iT_j}{T_i+T_j}}.
\end{aligned}
\end{equation}
As shown in~\cite{faure_2016}, this ensures that measures of the form (with $g$ a given smooth function)
\begin{equation}
\label{eq:sdpd-energy-minv}
\mu({\rm d}\vect{q}\,{\rm d}\vect{p}\,{\rm d} \varepsilon) = g\left(E(\vect{q},\vect{p},\varepsilon),\sum\limits_{i=1}^N\vect{p}_i\right)\prod_{i=1}^N\frac{\exp\left(\frac{S_i(\varepsilon_i,\vect{q})}{k_{\rm B}}\right)}{T_i(\varepsilon_i,\vect{q})}\,{\rm d}\vect{q}\,{\rm d}\vect{p}\,{\rm d} \varepsilon
\end{equation}
are left invariant by the elementary dynamics~\eqref{eq:sdpd-simple-fluct}.
Alternative fluctuation/dissipation relations are possible (such as constant $\sigma$ parameters), but the relations~(\ref{eq:sdpd-gamma-sigma}) allow one to recover the original SDPD~\cite{espanol_2003}.
\subsubsection{Complete equations of motion}
\label{sec:full-eom}
Gathering all the terms, the SDPD equations of motion reformulated in the position, momentum and internal energy variables read
\begin{equation}
\label{eq:sdpd-energy}
\left\{
\begin{aligned}
{\rm d}\vect{q}_i =&\, \frac{\vect{p}_i}{m}\,{\rm d} t,\\
{\rm d}\vect{p}_i =& \sum_{j\neq i} m^2\left(\frac{P_i}{\rho_i^2}+\frac{P_j}{\rho_j^2}\right)F_{ij}\vect{r}_{ij}\,{\rm d} t - \mtx{\Gamma}_{ij}\vect{v}_{ij}\,{\rm d} t + \mtx{\Sigma}_{ij}{\rm d}\vect{B}_{ij},\\
{\rm d} \varepsilon_i =& \sum_{j\neq i} -\frac{m^2P_i}{\rho_i^2}F_{ij}\vect{r}_{ij}^T\vect{v}_{ij}\,{\rm d} t + \frac12 \left[\vect{v}_{ij}^T\mtx{\Gamma}_{ij}\vect{v}_{ij} -\frac1{m}\textrm{Tr}(\mtx{\Sigma}_{ij}\mtx{\Sigma}_{ij}^T)\right]{\rm d} t
- \frac12 \vect{v}_{ij}^T\mtx{\Sigma}_{ij}{\rm d}\vect{B}_{ij},
\end{aligned}
\right.
\end{equation}
with $\mtx{\Sigma}_{ij}$ and $\mtx{\Gamma}_{ij}$ given by~\eqref{eq:fluct-gamma} and~(\ref{eq:sdpd-gamma-sigma}).
The dynamics~\eqref{eq:sdpd-energy} preserves the total momentum $\sum\limits_{i=1}^N\vect{p}_i$ and the total energy $E(\vect{q},\vect{p},\varepsilon)$ since all the elementary subdynamics ensure these conservations.
\subsection{Reduced units for SDPD}
\label{sec:scaling-sdpd}
In SDPD, the mass $m$ of the fluid particles allows us to change the resolution of the method.
We introduce the particle size $\displaystyle K = \frac{m}{m_0}$, where $m_0$ is the mass of one microscopic particle (\emph{e.g.} a molecule).
Since we deal with different particle sizes in the following, it is convenient to introduce reduced units for each size $K$:
\begin{equation}
\label{eq:reduced-units}
\begin{aligned}
\widetilde{m}_K &= Km_0,\\
\widetilde{l}_K &= \left(\frac{Km_0}{\rho}\right)^{\frac13},\\
\widetilde{\varepsilon}_K &= K k_{\rm B}T,
\end{aligned}
\end{equation}
where $\widetilde{m}_K$ is the mass unit, $\widetilde{l}_K$ the length unit, $\widetilde{\varepsilon}_K$ the energy unit and $\rho$ the average density of the fluid.
With such a set of reduced units, the time unit is
\[
\widetilde{t}_K = \widetilde{l}_K\sqrt{\frac{\widetilde{m}_K}{\widetilde{\varepsilon}_K}} = \frac{m_0^{\frac56}K^{\frac13}}{\rho^{\frac13} \sqrt{k_{\rm B}T}}.
\]
In the following, we select time steps before expressing them in terms of $\widetilde{t}_K$, with $K$ the particle size used in the simulations.
This explains the use of non round time steps in Section~\ref{sec:results}.
The smoothing length $h_K$ defining the cut-off radius in~\eqref{eq:sdpd-cubic-w} also needs to be adapted to the size of the SDPD particles so that the approximations~(\ref{eq:sph-approx}) continue to make sense.
In order to keep the average number of neighbors roughly constant in the smoothing sum, $h_K$ should be rescaled as
\[
h_K = h\left(\frac{Km_0}{\rho}\right)^{\frac13} = h\,\widetilde{l}_K.
\]
In this work, we take $h=2.5$, which corresponds to a typical number of 60-70 neighbors, a commonly accepted number~\cite{liu_2003}.
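The reduced units above can be computed as in the following sketch; the microscopic mass $m_0$, density $\rho$ and temperature $T$ are illustrative values, not taken from a specific fluid.

```python
# Sketch of the reduced units for a particle of size K; m0, rho and T
# are illustrative values, not taken from a specific fluid.
kB = 1.380649e-23       # Boltzmann constant (J/K)
m0 = 3.0e-26            # mass of one microscopic particle (kg), illustrative
rho = 1000.0            # average fluid density (kg/m^3), illustrative
T = 300.0               # temperature (K), illustrative
K = 125                 # particle size (microscopic particles per mesoparticle)

m_K = K * m0                          # mass unit
l_K = (K * m0 / rho) ** (1.0 / 3.0)   # length unit
e_K = K * kB * T                      # energy unit
t_K = l_K * (m_K / e_K) ** 0.5        # time unit
h_K = 2.5 * l_K                       # rescaled smoothing length (h = 2.5)
```

The time unit computed this way coincides with the closed-form expression $m_0^{5/6}K^{1/3}/(\rho^{1/3}\sqrt{k_{\rm B}T})$ given above.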
\section{Integration schemes}
\label{sec:schemes}
In the following, we describe several numerical schemes for the integration of SDPD.
They all rely on a splitting strategy~\cite{trotter_1959,strang_1968} where the full dynamics is divided in simpler elementary dynamics that are consecutively integrated.
Since the conservative part of the dynamics~(\ref{eq:sdpd-cons}) can be viewed as a Hamiltonian dynamics, it is natural to resort to a symplectic scheme such as the widely used Velocity-Verlet scheme~\cite{verlet_1967} which ensures a good energy conservation in the long term~\cite{hairer_2003,book_hairer_2002}.
This algorithm is briefly described in Section~\ref{sec:verlet}.
There is however no definite way to deal with the fluctuation/dissipation part described in Section~\ref{sec:fd-sdpd}.
Due to its close similarities with DPDE, we propose in the following to adapt some schemes devoted to the integration of DPDE to the SDPD setting.
One approach to integrate SDPD, described in~\cite{faure_2016}, is based on the algorithm proposed by Shardlow~\cite{shardlow_2003} for DPD and its subsequent adaptations to DPDE~\cite{stoltz_2006}.
The dynamics is split into a Hamiltonian part, discretized through a Velocity-Verlet algorithm~(\ref{eq:sdpd-verlet}), and elementary pairwise fluctuation/dissipation dynamics that are successively integrated.
We first recall in Section~\ref{sec:ssa} the Shardlow-like splitting scheme (SSA) used in~\cite{faure_2016}.
While this provides a way to integrate SDPD preserving its invariants (approximately for the energy), it suffers from stability issues especially for small particle sizes, when the internal and kinetic energy are of the same scale.
We thus explore methods to improve the stability of these integration algorithms in Section~\ref{sec:metropolis}, relying on the ideas developed in~\cite{stoltz_2017} where a Metropolis acceptance-rejection step is included to correct the biases of the numerical discretization of the fluctuation/dissipation part.
\subsection{Integrating the Hamiltonian part of the dynamics}
\label{sec:verlet}
It is convenient to consider the conservative part of the dynamics~(\ref{eq:sdpd-cons}) in its original formulation in the position, momentum and entropy variables~\cite{espanol_2003} in order to take advantage of the conservation of the entropies $S_i$.
The internal energies are related to the positions and entropies by an energy function $\mathcal{E}(S_i,\rho_i(\vect{q}))$, which allows us to write the Hamiltonian as
\[
H(\vect{q},\vect{p},S) = \sum_{i=1}^N \frac{\vect{p}_i^2}{2m} + \mathcal{E}(S_i,\rho_i(\vect{q})).
\]
The dynamics~(\ref{eq:sdpd-cons}) can thus be recast in Hamiltonian form as
\[
\left\{
\begin{aligned}
{\rm d}\vect{q}_i &= \frac{\vect{p}_i}{m}\,{\rm d} t,\\
{\rm d}\vect{p}_i &= -\vect{\nabla}_{\vect{q}_i} H(\vect{q},\vect{p},S)\,{\rm d} t = \sum_{j\neq i} \vect{\mathcal{F}}_{{\rm cons},ij}\,{\rm d} t.
\end{aligned}
\right.
\]
The Velocity-Verlet scheme~\cite{verlet_1967} allows one to integrate such dynamics while conserving the Hamiltonian $H$ to a very good approximation over long times.
This corresponds to the following integration scheme:
\begin{equation}
\label{eq:sdpd-verlet}
\left\{
\begin{aligned}
\vect{p}^{n+\frac12}_i &= \vect{p}^n_i + \sum_{j\neq i}\vect{\mathcal{F}}_{{\rm cons},ij}^n \frac{\Delta t}{2},\\
\vect{q}^{n+1}_i &= \vect{q}^n_i + \frac{\vect{p}^{n+\frac12}_i}{m}\Delta t,\\
\vect{p}^{n+1}_i &= \vect{p}^{n+\frac12}_i + \sum_{j\neq i}\vect{\mathcal{F}}_{{\rm cons},ij}^{n+1}\frac{\Delta t}{2}.
\end{aligned}
\right.
\end{equation}
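The structure of this scheme (half kick, drift, force update, half kick) can be sketched as follows; for readability, a simple harmonic force stands in for the SPH conservative forces, which makes the long-time energy conservation easy to check.

```python
import numpy as np

# Sketch of the velocity-Verlet scheme for a generic conservative force;
# a harmonic force stands in for the SPH pressure forces.
def velocity_verlet(q, p, force, m, dt, nsteps):
    f = force(q)
    for _ in range(nsteps):
        p = p + 0.5 * dt * f        # half kick
        q = q + dt * p / m          # drift
        f = force(q)                # forces at the new positions
        p = p + 0.5 * dt * f        # second half kick
    return q, p

m, k = 1.0, 1.0
force = lambda q: -k * q            # stand-in for the conservative forces
q0, p0 = np.array([1.0]), np.array([0.0])
q1, p1 = velocity_verlet(q0, p0, force, m, dt=1e-2, nsteps=10_000)
E0 = p0 @ p0 / (2.0 * m) + 0.5 * k * (q0 @ q0)
E1 = p1 @ p1 / (2.0 * m) + 0.5 * k * (q1 @ q1)
```

Over the 10\,000 steps of this run the energy error stays bounded at order $\Delta t^2$, rather than drifting, as expected from a symplectic scheme.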
\subsection{Shardlow-like Splitting Algorithm}
\label{sec:ssa}
We present here a first possibility for the integration of the fluctuation/dissipation dynamics introduced in~\cite{faure_2016} based on existing schemes for DPD~\cite{shardlow_2003} and DPDE~\cite{stoltz_2006}.
If we neglect the dependence of $\Gamma$ and $\Sigma$ on $\varepsilon_i$, the elementary dynamics~(\ref{eq:sdpd-simple-fluct}) on the momenta can be viewed as a standard Ornstein-Uhlenbeck process and solved analytically.
We provided in~\cite{faure_2016} the corresponding expression for the updated momenta after a time step $\Delta t$ as
\begin{equation}
\label{eq:ssa-ou}
\begin{pmatrix}
\vect{p}_i^{n+1}\\[.5em]
\vect{p}_j^{n+1}
\end{pmatrix}
=
\begin{pmatrix}
\vect{p}_i^{n}\\[.5em]
\vect{p}_j^{n}
\end{pmatrix}
+ \sum\limits_{\theta\in\{\parallel,\perp\}} \mtx{P}_{ij}^{\theta}\left[\frac{m}2 \left(\alpha_{ij}^{\theta}(\varepsilon_i^n,\varepsilon_j^n,\vect{q}^n)-1\right) \vect{v}_{ij}^n + \zeta_{ij}^{\theta}(\varepsilon_i^n,\varepsilon_j^n,\vect{q}^n) \vect{G}_{ij}^n \right]
\begin{pmatrix}
\vect{1}\\[.5em]
\vect{-1}
\end{pmatrix},
\end{equation}
where $\vect{G}_{ij}^n$ is a standard 3-dimensional Gaussian variable and for $\theta \in \{\parallel,\perp\}$,
\[
\begin{aligned}
\alpha_{ij}^{\theta}(\varepsilon_i,\varepsilon_j,\vect{q}) &= \exp\left(-\frac{2\gamma_{ij}^{\theta}(\varepsilon_i,\varepsilon_j,\vect{q})\Delta t}{m}\right),\\
\zeta_{ij}^{\theta}(\varepsilon_i,\varepsilon_j,\vect{q}) &= \sigma_{ij}^{\theta}(\varepsilon_i,\varepsilon_j,\vect{q})\sqrt{\frac{m(1-(\alpha_{ij}^{\theta}(\varepsilon_i,\varepsilon_j,\vect{q}))^2) }{4\gamma_{ij}^{\theta}(\varepsilon_i,\varepsilon_j,\vect{q})}}.
\end{aligned}
\]
The integration of the momenta with~(\ref{eq:ssa-ou}) induces a variation of the kinetic energy which is then redistributed symmetrically in the internal energies as suggested in~\cite{marsh_1998,stoltz_2006}.
This guarantees the exact conservation of the energy during this elementary step.
The new internal energies are finally given by
\[
\left\{
\begin{aligned}
\varepsilon_i^{n+1} &= \varepsilon_i^n -\frac12\left[\frac{\left(\vect{p}_i^{n+1}\right)^2}{2m} + \frac{\left(\vect{p}_j^{n+1}\right)^2}{2m} - \frac{\left(\vect{p}_i^n\right)^2}{2m} - \frac{\left(\vect{p}_j^n\right)^2}{2m}\right],\\
\varepsilon_j^{n+1} &= \varepsilon_j^n -\frac12\left[\frac{\left(\vect{p}_i^{n+1}\right)^2}{2m} + \frac{\left(\vect{p}_j^{n+1}\right)^2}{2m} - \frac{\left(\vect{p}_i^n\right)^2}{2m} - \frac{\left(\vect{p}_j^n\right)^2}{2m}\right].
\end{aligned}
\right.
\]
Thermodynamic variables like the temperatures $T_i$, $T_j$ and heat capacities $C_i$, $C_j$ are updated with the equation of state using the new internal energies, before turning to another pair of particles.
Let us however remark that the pairwise Shardlow-like algorithm is sequential by nature and its parallelization requires a convoluted method~\cite{larentzos_2014}.
Moreover, and maybe more importantly, there is no mechanism preventing the appearance of negative internal energies during the simulation.
This situation happens when the fluctuations are large with respect to the internal energies: typically at low temperature or when the particle sizes are small (so that their heat capacities are small as well).
This leads to stability issues unless very small time steps are used.
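One such pair update (an Ornstein-Uhlenbeck step on the relative velocity, projected on the parallel and orthogonal directions, followed by the symmetric energy redistribution) can be sketched as follows; the friction and fluctuation coefficients $\gamma^{\theta}$, $\sigma^{\theta}$ are arbitrary stand-ins here, not the values prescribed by the fluctuation/dissipation relation.

```python
import numpy as np

# Sketch of one SSA pair update: exact OU step on the relative velocity,
# then symmetric redistribution of the kinetic-energy variation into the
# internal energies. gamma/sigma values are arbitrary stand-ins.
rng = np.random.default_rng(3)
m, dt = 1.0, 1e-2
qi, qj = np.array([0.0, 0.0, 0.0]), np.array([1.0, 0.2, -0.3])
pi_, pj = rng.normal(size=3), rng.normal(size=3)
ei, ej = 5.0, 4.0                       # internal energies

eij = (qi - qj) / np.linalg.norm(qi - qj)
P_par = np.outer(eij, eij)              # projection on e_ij
P_perp = np.eye(3) - P_par
vij = (pi_ - pj) / m

dp = np.zeros(3)
for P_theta, gamma, sigma in ((P_par, 2.0, 0.7), (P_perp, 1.0, 0.5)):
    alpha = np.exp(-2.0 * gamma * dt / m)
    zeta = sigma * np.sqrt(m * (1.0 - alpha**2) / (4.0 * gamma))
    G = rng.normal(size=3)
    # exact OU update of the projected relative velocity, recast on p_i
    dp += P_theta @ (0.5 * m * (alpha - 1.0) * vij + zeta * G)

pi_new, pj_new = pi_ + dp, pj - dp      # opposite momentum updates
dK = (pi_new @ pi_new + pj_new @ pj_new - pi_ @ pi_ - pj @ pj) / (2.0 * m)
ei_new, ej_new = ei - 0.5 * dK, ej - 0.5 * dK   # symmetric redistribution
```

Whatever the coefficients, the update conserves the pair momentum and the total energy of the pair exactly; the stability issue is that nothing prevents the new internal energies from becoming negative.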
\subsection{Metropolized integration scheme}
\label{sec:metropolis}
To avoid instabilities related to negative internal energies while keeping reasonable time steps, it has been proposed to include a Metropolis step to reject impossible or unlikely moves~\cite{stoltz_2017}.
In the following, we show how this procedure can be used for SDPD.
First, we reformulate the pairwise dynamics~(\ref{eq:sdpd-simple-fluct}) as an overdamped Langevin dynamics in the relative velocity $\vect{v}_{ij}$ variable only, see Section~\ref{sec:fd-overdamped}.
We then construct proposed moves for the Metropolized scheme and compute the corresponding acceptance ratio in Section~\ref{sec:ratio}.
A simplified version of the Metropolized scheme is introduced in Section~\ref{sec:approx-met} where the computation of the Metropolis ratio is avoided and the rejection occurs only to avoid negative internal energies.
\subsubsection{Reformulation of the fluctuation and dissipation dynamics as an overdamped Langevin dynamics}
\label{sec:fd-overdamped}
In order to simplify the Metropolization of the integration scheme, we show that the elementary fluctuation-dissipation dynamics~(\ref{eq:sdpd-simple-fluct}) can be described only in terms of the relative velocity $\vect{v}_{ij}$ and formulated as an overdamped Langevin dynamics.
Since the dynamics~(\ref{eq:sdpd-simple-fluct}) preserve the momentum $\vect{p}_i+\vect{p}_j$, the momenta $\vect{p}_i$ and $\vect{p}_j$ can be rewritten as a function of $\vect{v}_{ij} $ as:
\[
\vect{p}_i = \frac{\vect{p}_i+\vect{p}_j}2 + \frac{\vect{p}_i-\vect{p}_j}{2} = \frac{\vect{p}_i^0 + \vect{p}_j^0}2 + \frac{m}2 \vect{v}_{ij} = \vect{p}_i^0 + \frac{m}2(\vect{v}_{ij} - \vect{v}_{ij}^0),
\]
and
\[
\vect{p}_j = \vect{p}_j^0 - \frac{m}2(\vect{v}_{ij} - \vect{v}_{ij}^0).
\]
This already shows how to express the momenta $\vect{p}_i$ and $\vect{p}_j$ in terms of $\vect{v}_{ij}$.
In addition, the kinetic energy formulated in the relative velocity reads
\[
\frac{\vect{p}_i^2+\vect{p}_j^2}{2m} = \frac{\left(\vect{p}_i^0\right)^2+\left(\vect{p}_j^0\right)^2}{2m} + \frac{m}4(\vect{v}_{ij} - \vect{v}_{ij}^0)^2.
\]
The conservation of the energy $\frac{\vect{p}_i^2+\vect{p}_j^2}{2m}+\varepsilon_i+\varepsilon_j$ and the fact that ${\rm d} \varepsilon_i = {\rm d} \varepsilon_j$ provide the expression of the internal energies as a function of the relative velocity:
\begin{equation}
\label{eq:energies-vij}
\varepsilon_i = \varepsilon_i^0 - \frac{m}8\left(\left[\vect{v}_{ij}\right]^2 - \left[\vect{v}_{ij}^0\right]^2\right),\quad \varepsilon_j = \varepsilon_j^0 - \frac{m}8\left(\left[\vect{v}_{ij}\right]^2 - \left[\vect{v}_{ij}^0\right]^2\right).
\end{equation}
Using this relation, the dynamics~(\ref{eq:sdpd-simple-fluct}) can in fact be rewritten as an effective dynamics on the relative velocity only, as
\begin{equation}
\label{eq:fd-effective}
{\rm d} \vect{v}_{ij} = -\frac2m\mtx{\Gamma}_{ij}\vect{v}_{ij} {\rm d} t + \frac2m\mtx{\Sigma}_{ij}{\rm d}\vect{B}_{ij},
\end{equation}
where $\mtx{\Gamma}_{ij}$, $\mtx{\Sigma}_{ij}$ are functions of the relative velocity through~\eqref{eq:energies-vij}.
We claim that the dynamics~(\ref{eq:fd-effective}) can be written more explicitly as an overdamped Langevin dynamics under the form
\begin{equation}
\label{eq:fd-overdamped}
{\rm d} \vect{v}_{ij} = \left(-\mtx{M}(\vect{v}_{ij})\vect{\nabla}_{\vect{v}_{ij}}\mathcal{U}(\vect{v}_{ij}) + {\rm div}_{\vect{v}_{ij}}(\mtx{M})(\vect{v}_{ij})\right) {\rm d} t + \sqrt2\mtx{M}^{\frac12}(\vect{v}_{ij}){\rm d}\vect{B}_{ij},
\end{equation}
with the diffusion matrix
\[
\mtx{M}(\vect{v}_{ij}) = \frac{2\left[\sigma^{\parallel}(\varepsilon_i,\varepsilon_j,\vect{q})\right]^2}{m^2}\mtx{P}_{ij}^{\parallel} + \frac{2\left[\sigma^{\perp}(\varepsilon_i,\varepsilon_j,\vect{q})\right]^2}{m^2}\mtx{P}_{ij}^{\perp},
\]
and the potential
\[
\mathcal{U}(\vect{v}_{ij}) = U\left(\varepsilon_i^0- \frac{m}{8}\left(\left[\vect{v}_{ij}\right]^2-\left[\vect{v}_{ij}^0\right]^2\right),\vect{q}\right) + U\left(\varepsilon_j^0- \frac{m}{8}\left(\left[\vect{v}_{ij}\right]^2-\left[\vect{v}_{ij}^0\right]^2\right),\vect{q}\right),
\]
where
\[
U(\varepsilon_i,\vect{q}) = \log T_i(\varepsilon_i,\vect{q}) - \frac1{k_{\rm B}}S_i(\varepsilon_i,\vect{q}).
\]
Let us emphasize that the reformulation~\eqref{eq:fd-overdamped} is the key element for the Metropolis stabilization.
We now check that~\eqref{eq:fd-overdamped} holds.
By definition
\[
\mtx{M}^{\frac12}(\vect{v}_{ij}) = \frac{\sqrt{2}}{m}\mtx{\Sigma}_{ij}.
\]
It therefore suffices to check that
\[
-\frac2m\mtx{\Gamma}_{ij}\vect{v}_{ij} = -\mtx{M}(\vect{v}_{ij})\vect{\nabla}_{\vect{v}_{ij}}\mathcal{U}(\vect{v}_{ij}) + {\rm div}_{\vect{v}_{ij}}(\mtx{M})(\vect{v}_{ij}).
\]
We first compute the gradient of the potential $\mathcal{U}$:
\[
\begin{aligned}
\vect{\nabla}_{\vect{v}_{ij}}\mathcal{U}(\vect{v}_{ij}) &= \left(\vect{\nabla}_{\vect{v}_{ij}} \varepsilon_i\right) \partial_{\varepsilon}U(\varepsilon_i,\vect{q}) + \left(\vect{\nabla}_{\vect{v}_{ij}} \varepsilon_j\right) \partial_{\varepsilon}U(\varepsilon_j,\vect{q})\\
&= -\frac{m}4\vect{v}_{ij} \left( \frac{\partial_{\varepsilon_i}T_i}{T_i} - \frac{\partial_{\varepsilon_i}S_i}{k_{\rm B}} + \frac{ \partial_{\varepsilon_j}T_j}{T_j} - \frac{\partial_{\varepsilon_j} S_j}{ k_{\rm B}} \right)\\
&= -\frac{m}4\vect{v}_{ij} \left( \frac1{T_i}\left[\frac1{C_i}-\frac1{k_{\rm B}}\right] + \frac1{T_j}\left[\frac1{C_j}-\frac1{k_{\rm B}}\right] \right).
\end{aligned}
\]
Upon application of the matrix $\mtx{M}$,
\[
\begin{aligned}
\mtx{M}(\vect{v}_{ij})\vect{\nabla}_{\vect{v}_{ij}}\mathcal{U}(\vect{v}_{ij})
&= -\frac1{2m} \sum_{\theta\in\{\parallel,\perp\}} \sigma^{\theta}(\varepsilon_i,\varepsilon_j,\vect{q})^2\left( \frac1{T_i}\left[\frac1{C_i}-\frac1{k_{\rm B}}\right] + \frac1{T_j}\left[\frac1{C_j}-\frac1{k_{\rm B}}\right] \right) \mtx{P}_{ij}^{\theta}\vect{v}_{ij}\\
&= \frac2{m} \sum_{\theta\in\{\parallel,\perp\}} \kappa_{ij}^{\theta}\left(1 - \frac{k_{\rm B}}{T_i+T_j}\left(\frac{T_j}{C_i} + \frac{T_i}{C_j}\right) \right) \mtx{P}_{ij}^{\theta}\vect{v}_{ij}.
\end{aligned}
\]
The divergence of $\mtx{M}$ with respect to the relative velocity reads
\[
\begin{aligned}
{\rm div}_{\vect{v}_{ij}}(\mtx{M})(\vect{v}_{ij}) &= \frac2{m^2}\mtx{P}_{ij}^{\parallel} \vect{\nabla}_{\vect{v}_{ij}}\left(\left[\sigma^{\parallel}(\varepsilon_i,\varepsilon_j,\vect{q})\right]^2\right) + \frac2{m^2}\mtx{P}_{ij}^{\perp} \vect{\nabla}_{\vect{v}_{ij}}\left(\left[\sigma^{\perp}(\varepsilon_i,\varepsilon_j,\vect{q})\right]^2\right)\\
&= -\frac1{2m} \sum_{\theta\in\{\parallel,\perp\}} \left(\partial_{\varepsilon_i}+\partial_{\varepsilon_j}\right) \left(\left[\sigma^{\theta}(\varepsilon_i,\varepsilon_j,\vect{q})\right]^2\right) \mtx{P}_{ij}^{\theta}\vect{v}_{ij}\\
&= -\frac2{m} k_{\rm B}
\sum_{\theta\in\{\parallel,\perp\}} \kappa_{ij}^{\theta} \left(
\frac{T_j(\partial_{\varepsilon_i}T_i)}{T_i+T_j} - \frac{T_iT_j(\partial_{\varepsilon_i} T_i)}{(T_i+T_j)^2}
+ \frac{T_i(\partial_{\varepsilon_j}T_j)}{T_i+T_j} - \frac{T_iT_j(\partial_{\varepsilon_j} T_j)}{(T_i+T_j)^2}
\right)
\mtx{P}_{ij}^{\theta}\vect{v}_{ij}\\
&= \frac2{m}
\sum_{\theta\in\{\parallel,\perp\}} \kappa_{ij}^{\theta} \left(
d_{ij}
- \frac{k_{\rm B}}{T_i+T_j}\left[\frac{T_j}{C_i}+\frac{T_i}{C_j}\right]
\right)
\mtx{P}_{ij}^{\theta}\vect{v}_{ij}.
\end{aligned}
\]
The desired result follows from
\[
\begin{aligned}
-\mtx{M}(\vect{v}_{ij})\vect{\nabla}_{\vect{v}_{ij}}\mathcal{U}(\vect{v}_{ij}) + {\rm div}_{\vect{v}_{ij}}(\mtx{M})(\vect{v}_{ij}) &= -\frac2m \left( (1-d_{ij})\kappa_{ij}^{\parallel}\mtx{P}_{ij}^{\parallel} + (1-d_{ij})\kappa_{ij}^{\perp}\mtx{P}_{ij}^{\perp}\right) \vect{v}_{ij}\\
&= -\frac2m\mtx{\Gamma}_{ij}\vect{v}_{ij}.
\end{aligned}
\]
In view of~\eqref{eq:fd-overdamped}, it can be immediately deduced that the following measure on the relative velocity (at fixed positions $\vect{q}$, the total momentum and the total energy of the pair being conserved) is left invariant by the overdamped Langevin dynamics~(\ref{eq:fd-overdamped}):
\begin{equation}
\label{eq:minv-vij}
\nu(d \vect{v}_{ij}) = Z_{ij}^{-1}\exp\left(-\mathcal{U}(\vect{v}_{ij})\right){\rm d} \vect{v}_{ij}
= Z_{ij}^{-1}\frac{
\exp\left(
k_{\rm B}^{-1}
\left[
S_i\left(\varepsilon_i,\vect{q}\right)
+
S_j\left(\varepsilon_j,\vect{q}\right)
\right]
\right)
}{
T_i\left(\varepsilon_i,\vect{q}\right)
T_j\left(\varepsilon_j,\vect{q}\right)
}
{\rm d} \vect{v}_{ij}.
\end{equation}
\subsubsection{Metropolis ratio}
\label{sec:ratio}
We consider~(\ref{eq:ssa-ou}) as the proposed move.
In terms of the relative velocity, it reads
\begin{equation}
\label{eq:proposal}
\begin{aligned}
\vect{v}_{ij}^{n+1} &=
\sum_{\theta\in\{\parallel,\perp\}}
\alpha_{ij}^{\theta}(\varepsilon_i^n,\varepsilon_j^n,\vect{q}^n)\mtx{P}_{ij}^{\theta}\vect{v}_{ij}^n
+ \zeta_{ij}^{\theta}(\varepsilon_i^n,\varepsilon_j^n,\vect{q}^n)\mtx{P}_{ij}^{\theta}\vect{G}^n\\
&= \mtx{\mathcal{A}}(\vect{q}^n,\vect{v}_{ij}^n)\vect{v}_{ij}^n + \mtx{\mathcal{B}}(\vect{q}^n,\vect{v}_{ij}^n)\vect{G}_{ij}^n,
\end{aligned}
\end{equation}
with
\[
\mtx{\mathcal{A}}(\vect{q}^n,\vect{v}_{ij}^n) =
\alpha_{ij}^{\parallel}(\varepsilon_i^n,\varepsilon_j^n,\vect{q}^n)\mtx{P}_{ij}^{\parallel}
+
\alpha_{ij}^{\perp}(\varepsilon_i^n,\varepsilon_j^n,\vect{q}^n)\mtx{P}_{ij}^{\perp},
\]
and
\[
\mtx{\mathcal{B}}(\vect{q}^n,\vect{v}_{ij}^n) =
\zeta_{ij}^{\parallel}(\varepsilon_i^n,\varepsilon_j^n,\vect{q}^n)\mtx{P}_{ij}^{\parallel}
+
\zeta_{ij}^{\perp}(\varepsilon_i^n,\varepsilon_j^n,\vect{q}^n)\mtx{P}_{ij}^{\perp}.
\]
The momenta and internal energies can then be updated as
\begin{equation}
\label{eq:proposal-update}
\left\{
\begin{aligned}
\vect{p}_i^{n+1} &= \vect{p}_i^n + \frac{m}{2}(\vect{v}_{ij}^{n+1}-\vect{v}_{ij}^n),\\
\vect{p}_j^{n+1} &= \vect{p}_j^n - \frac{m}{2}(\vect{v}_{ij}^{n+1}-\vect{v}_{ij}^n),\\
\varepsilon_i^{n+1} &= \varepsilon_i^n - \frac{m}{8}\left(\left[\vect{v}_{ij}^{n+1}\right]^2-\left[\vect{v}_{ij}^n\right]^2\right),\\
\varepsilon_j^{n+1} &= \varepsilon_j^n - \frac{m}{8}\left(\left[\vect{v}_{ij}^{n+1}\right]^2-\left[\vect{v}_{ij}^n\right]^2\right).
\end{aligned}
\right.
\end{equation}
The internal temperatures $T_i$, $T_j$ and the heat capacities $C_i$, $C_j$ are updated accordingly.
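The update~\eqref{eq:proposal-update} conserves the pair momentum and energy by construction, which can be sketched as follows (vectors as plain tuples; the helper name is illustrative):

```python
def apply_relative_velocity_move(p_i, p_j, eps_i, eps_j, v_old, v_new, m):
    """Update momenta and internal energies for the move v_old -> v_new:
    each momentum receives +/- (m/2)(v_new - v_old), and the kinetic-energy
    change is split evenly between the two internal energies, so the total
    momentum and total energy of the pair are conserved."""
    dp = tuple(0.5 * m * (vn - vo) for vn, vo in zip(v_new, v_old))
    p_i_new = tuple(p + d for p, d in zip(p_i, dp))
    p_j_new = tuple(p - d for p, d in zip(p_j, dp))
    dv2 = sum(vn * vn for vn in v_new) - sum(vo * vo for vo in v_old)
    de = m * dv2 / 8.0
    return p_i_new, p_j_new, eps_i - de, eps_j - de
```

The conservation holds exactly provided $\vect{v}_{ij}^n = (\vect{p}_i^n-\vect{p}_j^n)/m$, as in the derivation above.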
In order to decide whether we update the configuration with the proposed move or keep the current one, we first check whether $\varepsilon_i^{n+1}$ or $\varepsilon_j^{n+1}$ is negative, in which case the proposal is rejected.
Otherwise, we compute a Metropolis ratio that determines the acceptance probability.
The probability to accept the proposed move from $\vect{v}$ to $\vect{v}'$ is $\min(1,A_{\Delta t}(\vect{v},\vect{v}'))$ with
\[
A_{\Delta t}(\vect{v},\vect{v}') = \frac{\nu(\vect{v}')T_{\Delta t}(\vect{v}',\vect{v})}{\nu(\vect{v})T_{\Delta t}(\vect{v},\vect{v}')},
\]
where $T_{\Delta t}$ is the transition kernel associated with the proposal.
In the following, we omit all the dependence on the positions $\vect{q}^n$, which remain constant in this subdynamics, to simplify the notation.
The probability that~(\ref{eq:proposal}) proposes $\vect{v}'$ starting from $\vect{v}$ is given by
\[
T_{\Delta t}(\vect{v},\vect{v}') =
\frac1{(2\pi)^{\frac32}\abs{\mtx{\mathcal{B}}(\vect{v})}}
\exp\left(-\frac12(\vect{v}'-\mtx{\mathcal{A}}(\vect{v})\vect{v})^T\mtx{\mathcal{B}}(\vect{v})^{-2}(\vect{v}'-\mtx{\mathcal{A}}(\vect{v})\vect{v})\right),
\]
with the inverse matrix $\displaystyle \mtx{\mathcal{B}}(\vect{v})^{-1} = \frac1{\zeta_{ij}^{\parallel}(\vect{v})}\mtx{P}_{ij}^{\parallel} + \frac1{\zeta_{ij}^{\perp}(\vect{v})}\mtx{P}_{ij}^{\perp}$ and the determinant $\abs{\mtx{\mathcal{B}}(\vect{v})} = \zeta_{ij}^{\parallel}(\vect{v})\zeta_{ij}^{\perp}(\vect{v})^2$.
For the direct move, the transition probability simply reads
\[
T_{\Delta t}(\vect{v}_{ij}^n,\vect{v}_{ij}^{n+1}) =
\frac1{(2\pi)^{\frac32}\abs{\mtx{\mathcal{B}}(\vect{v}_{ij}^n)}}
\exp\left(-\frac{\left(\vect{G}^n\right)^T\vect{G}^n}2\right),
\]
while, for the reverse move,
\[
T_{\Delta t}(\vect{v}_{ij}^{n+1},\vect{v}_{ij}^n) =
\frac1{(2\pi)^{\frac32}\abs{\mtx{\mathcal{B}}(\vect{v}_{ij}^{n+1})}}
\exp\left(-\frac12(\vect{v}_{ij}^n-\mtx{\mathcal{A}}(\vect{v}_{ij}^{n+1})\vect{v}_{ij}^{n+1})^T\mtx{\mathcal{B}}(\vect{v}_{ij}^{n+1})^{-2}(\vect{v}_{ij}^n-\mtx{\mathcal{A}}(\vect{v}_{ij}^{n+1})\vect{v}_{ij}^{n+1})\right).
\]
Using~(\ref{eq:minv-vij}) with the reference taken at iteration $n$,
\[
\begin{aligned}
\log\left(\frac{\nu(\vect{v}_{ij}^{n+1})}{\nu(\vect{v}_{ij}^n)}\right) =& \sum_{k\in\{i,j\}} \frac1{k_{\rm B}}\left[S_k\left(\varepsilon_k^n- \frac{m}{8}\left(\left[\vect{v}_{ij}^{n+1}\right]^2-\left[\vect{v}_{ij}^n\right]^2\right)\right) - S_k(\varepsilon_k^n)\right]\\
&- \log\left[T_k\left(\varepsilon_k^n - \frac{m}{8}\left(\left[\vect{v}_{ij}^{n+1}\right]^2-\left[\vect{v}_{ij}^n\right]^2\right)\right)\right] + \log[T_k(\varepsilon_k^n)].
\end{aligned}
\]
Finally, the acceptance ratio is given by
\begin{equation}
\label{eq:metropolis-ratio}
A_{\Delta t}(\vect{v}_{ij}^n,\vect{v}_{ij}^{n+1}) = \exp(a(\vect{v}_{ij}^n,\vect{v}_{ij}^{n+1})),
\end{equation}
with
\[
\begin{aligned}
a(\vect{v}_{ij}^n,\vect{v}_{ij}^{n+1}) &= -\log\left(T_{\Delta t}(\vect{v}_{ij}^n,\vect{v}_{ij}^{n+1})\right) + \log\left(T_{\Delta t}(\vect{v}_{ij}^{n+1},\vect{v}_{ij}^n)\right) + \log\left(\frac{\nu(\vect{v}_{ij}^{n+1})}{\nu(\vect{v}_{ij}^n)}\right)\\
&= \frac{\left(\vect{G}^n\right)^T\vect{G}^n}2 + \log\abs{\mtx{\mathcal{B}}(\vect{v}_{ij}^n)}
- \frac12(\vect{v}_{ij}^n-\mtx{\mathcal{A}}(\vect{v}_{ij}^{n+1})\vect{v}_{ij}^{n+1})^T\mtx{\mathcal{B}}(\vect{v}_{ij}^{n+1})^{-2}(\vect{v}_{ij}^n-\mtx{\mathcal{A}}(\vect{v}_{ij}^{n+1})\vect{v}_{ij}^{n+1})\\
&\quad\,- \log\abs{\mtx{\mathcal{B}}(\vect{v}_{ij}^{n+1})} + \frac1{k_{\rm B}}\left(S_i(\varepsilon_i^{n+1}) - S_i(\varepsilon_i^n) + S_j(\varepsilon_j^{n+1}) - S_j(\varepsilon_j^n)\right)\\
&\quad\,- \log\left(T_i(\varepsilon_i^{n+1})\right) + \log(T_i(\varepsilon_i^n)) - \log\left(T_j(\varepsilon_j^{n+1})\right) + \log(T_j(\varepsilon_j^n)).
\end{aligned}
\]
Starting from a configuration $(\vect{p}_i^n,\vect{p}_j^n,\varepsilon_i^n,\varepsilon_j^n)$, the overall algorithm (Exact Metropolis Scheme or EMS) to integrate the fluctuation/dissipation for a pair $(i,j)$ of particles is organized as follows:
\begin{enumerate}
\item Compute a proposed move for $\vect{v}_{ij}^{n+1}$ with~(\ref{eq:proposal}).
\item If the following energy bound does not hold
\begin{equation}
\label{eq:energy-bound}
\min\left(\varepsilon_i^n,\varepsilon_j^n\right) > \frac{m}8\left([\vect{v}_{ij}^{n+1}]^2-[\vect{v}_{ij}^n]^2\right),
\end{equation}
the move is rejected: $(\vect{p}_i^{n+1},\vect{p}_j^{n+1},\varepsilon_i^{n+1},\varepsilon_j^{n+1})=(\vect{p}_i^n,\vect{p}_j^n,\varepsilon_i^n,\varepsilon_j^n)$.\\
If the bound is satisfied, the algorithm continues.
\item Compute the acceptance ratio with~\eqref{eq:metropolis-ratio}.
\item Draw $U_{ij}^n \sim \mathcal{U}[0,1]$ and compare it with $A_{\Delta t}(\vect{v_{ij}^n},\vect{v}_{ij}^{n+1})$. If $U_{ij}^n > A_{\Delta t}(\vect{v_{ij}^n},\vect{v}_{ij}^{n+1})$, the move is rejected and $(\vect{p}_i^{n+1},\vect{p}_j^{n+1},\varepsilon_i^{n+1},\varepsilon_j^{n+1})=(\vect{p}_i^n,\vect{p}_j^n,\varepsilon_i^n,\varepsilon_j^n)$.\\
Otherwise it is accepted and the momenta and internal energies are updated with~\eqref{eq:proposal-update}, along with the internal temperatures $T_i$, $T_j$ and heat capacities $C_i$, $C_j$.
\end{enumerate}
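The four steps above can be sketched in a one-dimensional toy setting; the coefficient functions and the log-density passed as arguments are illustrative stand-ins for the OU coefficients and the invariant measure, not the actual SDPD expressions:

```python
import math
import random

def ems_step(v, eps_i, eps_j, m, alpha, zeta, log_nu):
    """One Metropolized fluctuation/dissipation step for a pair, in 1D.
    alpha(e_i, e_j) and zeta(e_i, e_j) play the roles of the OU
    coefficients; log_nu is the log of the invariant measure."""
    # Step 1: propose an OU move for the relative velocity.
    g = random.gauss(0.0, 1.0)
    v_new = alpha(eps_i, eps_j) * v + zeta(eps_i, eps_j) * g
    de = m * (v_new**2 - v**2) / 8.0
    ei_new, ej_new = eps_i - de, eps_j - de
    # Step 2: reject moves leading to non-positive internal energies.
    if min(ei_new, ej_new) <= 0.0:
        return v, eps_i, eps_j, False
    # Step 3: log of the Metropolis ratio a = log(nu' T'/ (nu T)).
    z_old, z_new = zeta(eps_i, eps_j), zeta(ei_new, ej_new)
    rev = (v - alpha(ei_new, ej_new) * v_new) / z_new
    a = (0.5 * g * g + math.log(abs(z_old))
         - 0.5 * rev * rev - math.log(abs(z_new))
         + log_nu(ei_new, ej_new) - log_nu(eps_i, eps_j))
    # Step 4: accept with probability min(1, exp(a)).
    if random.random() > math.exp(min(0.0, a)):
        return v, eps_i, eps_j, False
    return v_new, ei_new, ej_new, True
```

In one dimension $\abs{\mtx{\mathcal{B}}}$ reduces to $|\zeta|$, and the quantity $a$ mirrors the expression below~\eqref{eq:metropolis-ratio}; by construction each step conserves $\varepsilon_i+\varepsilon_j+\frac{m}{4}\vect{v}_{ij}^2$.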
\subsubsection{Approximate Metropolized scheme}
\label{sec:approx-met}
Since the computation of the Metropolis ratio may be cumbersome in practical simulations, we propose a simplified and approximate scheme where we only reject moves that cause internal energies to become negative.
It avoids the need to actually compute the Metropolis acceptance ratio.
As for the complete Metropolized scheme, we use the expression~(\ref{eq:proposal}) as the proposed evolution for the relative velocities.
We then check whether the updated internal energies remain positive and reject the moves that do not satisfy this property.
When a move is rejected, the current configuration is kept as the new configuration and counted as usual in the averages.
Otherwise the move is accepted and the velocities and internal energies, along with the internal temperatures and heat capacities, are updated accordingly.
When no stability issues (\emph{i.e.} negative internal energies) arise, the Approximate Metropolis Scheme (AMS) is equivalent to SSA.
\section{Numerical results}
\label{sec:results}
In the following, we test the accuracy of our schemes for the ideal gas equation of state given by
\begin{equation}
\label{eq:pg-eos}
\mathcal{S}_{\rm ideal}(\varepsilon,\rho) = \frac32(K-1)k_{\rm B}\ln(\varepsilon) - \frac12(K-1)k_{\rm B}\ln(\rho).
\end{equation}
The interest of this model is that the marginal distribution for the internal energies $\varepsilon_i$ has an analytic expression:
\begin{equation}
\label{eq:dist-analytic}
\overline{\mu}_{\beta,\varepsilon}({\rm d} \varepsilon) = \frac{ \beta^{ \frac{C_K}{k_{\rm B}}} }{ \Gamma\left(\frac{C_K}{k_{\rm B}}\right) }\varepsilon^{\frac{C_K}{k_{\rm B}}-1}\exp\left(-\beta\varepsilon\right)\,{\rm d}\varepsilon,
\end{equation}
where $C_K = \frac32(K-1)k_{\rm B}$ is the heat capacity in the equation of state~\eqref{eq:pg-eos} and $\Gamma$ is the Gamma function.
This distribution is plotted in Figure~\ref{fig:distribution-k} for various particle sizes.
\begin{figure}[!ht]
\centering
\includegraphics{ig-diste-k}
\caption{Distribution of internal energies (in units of $Kk_{\rm B}T$) with the ideal gas equation of state.}
\label{fig:distribution-k}
\end{figure}
As the size $K$ decreases, very small internal energies become more likely and stability issues arise.
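Since the marginal~\eqref{eq:dist-analytic} is a Gamma distribution with shape $C_K/k_{\rm B}$ and rate $\beta$, it can be cross-checked against standard Gamma sampling; the following sketch works in reduced units with $k_{\rm B}=1$ (the names are illustrative):

```python
import math
import random

def ideal_gas_energy_density(eps, shape, beta):
    """Analytic marginal density of the internal energy: a Gamma
    distribution with shape C_K / k_B and rate beta."""
    return beta**shape / math.gamma(shape) * eps**(shape - 1) * math.exp(-beta * eps)

# Sanity check in reduced units (k_B = 1): for K = 5 the shape is
# C_K / k_B = 3/2 (K - 1) = 6, and the mean should be C_K / (k_B beta).
K, beta = 5, 1.0
shape = 1.5 * (K - 1)
random.seed(0)
samples = [random.gammavariate(shape, 1.0 / beta) for _ in range(200_000)]
mean = sum(samples) / len(samples)
```

The small shape parameter obtained for small $K$ makes the density pile up near $\varepsilon = 0$, which is exactly the regime where negative-energy instabilities appear.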
Our simulations have been carried out on a 3-dimensional system of 1000 particles initialized on a simple cubic lattice with an initial temperature $T = 1000$~K.
The internal energies are chosen so that $T_i(\varepsilon_i,\rho_i(\vect{q})) = T$, with the density $\rho_i(\vect{q})$ evaluated from the initial distribution of the positions, while the velocities are drawn from the Boltzmann distribution.
We let the system equilibrate for a time $\tau_{\rm therm} = 50$ to obtain an equilibrated initial configuration.
The shear viscosity is set to $\eta = \num{2e-3}$~Pa.s and we neglect the bulk viscosity $\zeta$.
\subsection{Integrating the fluctuation/dissipation dynamics}
\label{sec:results-fd}
We first investigate the properties of the integration schemes for the fluctuation/dissipation part only and do not couple SSA and the Metropolized schemes with Velocity Verlet.
While SSA is quite stable for large particles ($K>10$) for which time steps as large as $\Delta t = 5$ can be used with no occurrence of a negative internal energy during a simulation time $\tau_{\rm sim}$, stability issues arise for smaller particles.
At $K=5$, we need a time step $\Delta t < 0.025$ to avoid the appearance of negative internal energies with SSA. As a comparison, the stability limit for the Verlet scheme at $K=5$ is $\Delta t = 0.8$.
As the particle size decreases further, it becomes impossible to run simulations and no admissible time step has been found for $K=2$.
With the rejection of moves provoking negative energies, the Metropolized schemes are stable at any time step for all particle sizes.
When they are not coupled to Velocity Verlet, the SSA and Metropolized schemes preserve the energy exactly by construction.
We can however compare the bias in the distributions of internal energies for the different schemes.
Figure~\ref{fig:distribution-dt} shows the distributions of internal energy for the exact and approximate Metropolized scheme using the ideal gas equation of state~(\ref{eq:pg-eos}) with $K=5$ obtained with a simulation time $\tau_{\rm sim}=20000$, compared with the analytic distribution~\eqref{eq:dist-analytic}.
In practice, the distributions $\nu_{\Delta t}$ obtained from the numerical simulations are approximated using histograms computed on $50$ configurations extracted at regular time intervals from the simulations.
This ensures a constant number of sampling points for all time steps.
The histograms consist of $N_{\rm bins} = 150$ bins uniformly distributed between $\varepsilon_{\rm min} = 0.1k_{\rm B}T$ and $\varepsilon_{\rm max} = 20k_{\rm B}T$.
\begin{figure}[!ht]
\centering
\subfigure[Distributions at $\Delta t =1$.]{\includegraphics[width=.45\columnwidth]{schemes-dist-energy-5.pdf}\label{fig:dist-dt-1}}
\subfigure[Bias with respect to the time step]{\includegraphics[width=.45\columnwidth]{error-dist-energy.pdf}\label{fig:dist-bias}}
\caption{Comparisons of the internal energy distributions with the ideal gas equation of state for the exact (EMS) and approximate (AMS) Metropolized algorithms: for $\Delta t =1$ in~\protect\subref{fig:dist-dt-1} and with the error~\eqref{eq:error-dist} with respect to the time step in~\protect\subref{fig:dist-bias}.}
\label{fig:distribution-dt}
\end{figure}
A more quantitative measurement of the bias is to evaluate the quadratic error with respect to the theoretical distribution
\begin{equation}
\label{eq:error-dist}
\mathcal{E}(\Delta t) = \sqrt{\frac{\int_0^{+\infty} \left[\nu_{\Delta t}(\varepsilon)-\overline{\mu}_{\beta,\varepsilon}(\varepsilon)\right]^2~{\rm d} \varepsilon}{\int_0^{+\infty}\overline{\mu}_{\beta,\varepsilon}(\varepsilon)^2\,{\rm d} \varepsilon}}.
\end{equation}
Due to the Metropolis procedure, the only source of error for EMS is of statistical nature.
This is not guaranteed for AMS but no systematic bias is apparent up to $\Delta t = 5$.
The agreement with the theoretical distribution is a bit deteriorated when the full acceptance ratio is not computed and AMS displays a $30\%$ larger error compared to the exact Metropolization.
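The discretized version of the error~\eqref{eq:error-dist}, with the histogram and the analytic density evaluated on the same uniform grid, can be sketched as follows (illustrative helper; the integrals are approximated by Riemann sums):

```python
import math

def distribution_error(hist, density, e_min, e_max):
    """Relative L2 error between a normalized histogram `hist` (list of
    bin heights) and an analytic density, both discretized on the same
    uniform grid over [e_min, e_max]."""
    n = len(hist)
    de = (e_max - e_min) / n
    num = den = 0.0
    for k, h in enumerate(hist):
        mu = density(e_min + (k + 0.5) * de)  # density at the bin center
        num += (h - mu) ** 2 * de
        den += mu ** 2 * de
    return math.sqrt(num / den)
```

A perfect histogram gives an error of $0$ and a vanishing one an error of $1$, so the values reported in Figure~\ref{fig:distribution-dt} are directly interpretable as relative deviations.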
We also observe the scaling of the rejection rate with the time step in Figure~\ref{fig:rejection}.
The Metropolized scheme displays rejection rates between $0.1\%$ and $0.2\%$ for time steps between $\Delta t = 1$ and $\Delta t = 5$.
For our system size ($1000$ particles), this means that, on average, several fluctuation/dissipation interactions are rejected at each time step.
Most of these rejections are due to the Metropolis ratio rather than to the appearance of negative internal energies, which accounts for only about one rejection in a thousand.
When we only reject forbidden moves that would cause negative energies, the rejection rate of this approximate Metropolization is between $0.005\%$ and $0.01\%$, which is about $20$ times smaller than the overall rejection rate of the exact Metropolization.
This is however two orders of magnitude larger than the occurrence of negative internal energies with the exact Metropolis scheme.
By rejecting authorized but unlikely moves (leading to small energies, for instance), EMS is less prone to the appearance of negative internal energies.
A linear fit in log scale shows that the total rejection rate for both the exact and approximate Metropolization scales as $\Delta t^{0.42}$.
The rejection of negative energies with the exact Metropolis scheme follows a similar scaling, roughly as $\Delta t^{0.5}$.
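Such exponents can be extracted by an ordinary least-squares fit in log-log coordinates (a generic sketch, not the fitting code used for the figures):

```python
import math

def loglog_slope(dts, rates):
    """Least-squares slope of log(rate) vs log(dt): the exponent p in
    rate ~ C * dt**p."""
    xs = [math.log(x) for x in dts]
    ys = [math.log(y) for y in rates]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))
```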
\begin{figure}
\centering
\subfigure[EMS (Total)]{\includegraphics[width=.3\columnwidth]{met-rejection.pdf}\label{fig:rejection-met}}
\subfigure[EMS (Negative energies)]{\includegraphics[width=.3\columnwidth]{met-rejection-negative.pdf}\label{fig:rejection-met-neg}}
\subfigure[AMS]{\includegraphics[width=.3\columnwidth]{am-rejection.pdf}\label{fig:rejection-am}}
\caption{Rejection rates for~\protect\subref{fig:rejection-met} the exact Metropolization and~\protect\subref{fig:rejection-am} the approximate Metropolization. The rejection rate due to negative internal energies in the exact Metropolis scheme is also displayed in~\protect\subref{fig:rejection-met-neg}.}
\label{fig:rejection}
\end{figure}
\subsection{Integrating the full SDPD}
\label{sec:results-full}
We now turn to the numerical integration of the full dynamics and study the behavior of the full schemes coupling the Velocity Verlet scheme for the integration of the conservative part of the dynamics and either SSA or its Metropolized versions for the fluctuation/dissipation dynamics.
To evaluate the energy conservation and the scheme stability, we run $n_{\rm sim}=10$ independent simulations with $K=5$ during a time $\tau_{\rm sim}=1000$ for each time step $\Delta t$ and average the results.
The schemes obtained by superimposing~\eqref{eq:sdpd-verlet} and either SSA or the Metropolized scheme lead to linear energy drifts which have already been observed in DPDE~\cite{lisal_2011,homman_2016} and in SDPD~\cite{faure_2016}.
This is illustrated in Figure~\ref{fig:energy-time-met} where the total energy with respect to time is displayed for different time steps in the case of the Metropolized scheme.
\begin{figure}[!ht]
\centering
\includegraphics{met-energy-5}
\caption{Time evolution of the total energy with the Metropolized algorithm for different time steps. A linear drift in the energy is observed as is usual in such methods.}
\label{fig:energy-time-met}
\end{figure}
We characterize the energy drift by fitting the time evolution of the energy on a linear function and plot the resulting slope in Figure~\ref{fig:energy-drift} for SSA, Metropolis and its approximate version.
\begin{figure}[!ht]
\centering
\includegraphics{drift-energy-ig}
\caption{Slope of the energy drift with respect to the time step for the two Metropolized schemes (EMS and AMS).}
\label{fig:energy-drift}
\end{figure}
We observe a similar energy drift for all methods.
As for the integration of the fluctuation/dissipation part alone, SSA is limited to small time steps to prevent stability issues from arising, while the Metropolized schemes greatly increase the admissible time steps.
However, the time steps for which the dynamics is stable are much smaller with the full dynamics than those reported in Section~\ref{sec:results-fd}.
Although the conservative interactions are bounded in SDPD unlike the DPDE simulations in~\cite{stoltz_2017}, they still induce a stringent stability limit on the time steps.
This observation leads us to consider multiple time step implementations (MTS) where the fluctuation/dissipation is integrated with a larger time step.
We introduce the time steps $\Delta t_{\rm VV}$ used to integrate the conservative part with the Velocity-Verlet scheme and $\Delta t_{\rm FD} = \theta\Delta t_{\rm VV}$ used for the discretization of the fluctuation/dissipation with a Metropolized scheme (EMS or AMS).
We test this approach with $\theta = 5$ and $\theta = 10$.
The algorithm then reads:
\begin{enumerate}
\item $\theta$ consecutive steps of Velocity Verlet with $\Delta t = \Delta t_{\rm VV}$.
\item One step of EMS or AMS with $\Delta t = \theta\Delta t_{\rm VV}$.
\end{enumerate}
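A minimal sketch of this multiple-time-step cycle (with `verlet_step` and `fd_step` as stand-ins for the actual integrators):

```python
def mts_step(state, verlet_step, fd_step, dt_vv, theta):
    """One multiple-time-step cycle: theta Velocity Verlet sub-steps with
    time step dt_vv, followed by a single Metropolized
    fluctuation/dissipation step with the larger step theta * dt_vv."""
    for _ in range(theta):
        state = verlet_step(state, dt_vv)
    return fd_step(state, theta * dt_vv)
```

Both integrators advance the system by the same physical time $\theta\Delta t_{\rm VV}$ per cycle, so the splitting remains consistent.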
We plot in Figure~\ref{fig:ratio-mts} the slope of the energy drift compared to their single time step (STS) version (with a time step $\Delta t = \Delta t_{\rm VV}$).
For both the exact and the approximate Metropolization, the energy drift rate is smaller for the multiple time step approach when the time step is large enough.
Moreover, the reduction of the energy drift is more pronounced for larger $\theta$: the rate is divided by $6$ for $\theta = 10$ at $\Delta t = 0.392$, whereas it is only halved for $\theta = 5$.
\begin{figure}[!ht]
\centering
\includegraphics{drift-energy-mts}
\caption{Ratio between the slope of the energy drift for the multiple time steps Metropolized schemes (EMS and AMS) and their STS versions (with $\Delta t = \Delta t_{\rm VV}$). }
\label{fig:ratio-mts}
\end{figure}
\begin{table}[!ht]
\centering
\begin{tabular}{cc}
\toprule
Scheme & Time ($\mu$s/iter/part)\\
\hline
Verlet & 37.66\\
SSA & 49.48\\
EMS & 63.39 \\
AMS & 51.07 \\
EMS with MTS & 43.57\\
AMS with MTS & 40.92\\
\bottomrule
\end{tabular}
\caption{Comparison of the computation time per iteration per particle for the Shardlow-like algorithm and the Metropolis schemes. For the multiple time step implementations, the time steps ratio is set to $\theta = 5$. The time for Velocity Verlet only is given as a reference.}
\label{tab:schemes-time}
\end{table}
We measure the time per iteration and per particle for the different Metropolis schemes with $\Delta t=\num{2.24e-2}$ or $\Delta t_{\rm VV}=\num{2.24e-2}$ and gather them in Table~\ref{tab:schemes-time}.
For the multiple time step algorithms, the number of iterations is the number of Verlet steps (thus the same as in the STS case).
The integration of the fluctuation/dissipation dynamics with SSA represents a quarter of the total computational time.
The integration of the fluctuation/dissipation part with the exact Metropolization is about twice as long since we need to compute the reverse move and estimate the Metropolis ratio.
This results in an overall increase by $30\%$ of the total simulation time.
However, much larger time steps can be chosen with EMS while SSA suffers from stringent stability limitations.
There is almost no overhead when resorting to the approximate Metropolization which also greatly improves the stability and is as good as EMS in terms of energy conservation.
With the multiple time step strategy, the time needed for the fluctuation/dissipation is greatly reduced, as expected, by a factor $\theta$.
\subsection{Simulation of nonequilibrium systems}
\label{sec:results-noneq}
While all the previous simulations were carried out in an equilibrium situation, the Metropolization procedures we propose are also suited to nonequilibrium settings.
In particular, SDPD has been applied to model shock waves~\cite{faure_2016} and reactive waves~\cite{faure_2017c}.
Stability is a crucial issue for these phenomena since they involve dramatic changes in the thermodynamic states of the material.
We thus illustrate the enhanced stability of the Metropolized schemes in non equilibrium situations by simulating a shock wave.
The system is initialized as previously described but with $N=23400$ particles organized on a $10 \times 10 \times 234$ lattice with periodic boundary conditions in the $x$- and $y$-directions.
In the $z$-direction, two walls are located at each end of the system and formed of ``virtual'' SDPD particles as described in~\cite{bian_2012,faure_2016}.
These virtual particles interact with the real SDPD particles through the conservative forces~(\ref{eq:cons-forces}) and a repulsive Lennard-Jones potential that ensures the impermeability of the walls.
After the system equilibration during $\tau_{\rm therm} = 50$, the lower wall is given a constant velocity $v_{\rm P} = 1661$~m.s$^{-1}$ in the $z$-direction.
To obtain the profiles of physical properties at a given time, the simulation box is divided into $n_{\rm sl}=100$ slices regularly distributed along the $z$-axis in which the physical properties are averaged.
Since a shock wave is a stationary process in the reference frame of the shock front, we can average profiles over time after shifting the position of the shock front to $z=0$.
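The slice averaging can be sketched as follows (hypothetical helper; a simplified binning along $z$ with periodic wrapping, ignoring the wall particles):

```python
def slice_profile(zs, values, z_len, n_sl):
    """Average a per-particle quantity in n_sl slices uniformly
    distributed along the z-axis of a box of length z_len."""
    sums = [0.0] * n_sl
    counts = [0] * n_sl
    for z, v in zip(zs, values):
        k = min(int((z % z_len) / z_len * n_sl), n_sl - 1)
        sums[k] += v
        counts[k] += 1
    return [s / c if c else float("nan") for s, c in zip(sums, counts)]
```

Profiles computed this way at successive times are then shifted so that the shock front sits at $z=0$ before being averaged.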
We plot in Figure~\ref{fig:shock-profiles} the density profile for the Metropolized schemes (EMS and AMS) with $\Delta t = 0.045$.
This choice is governed by the piston velocity $v_{\rm P}$ and ensures that the piston does not move by more than $20\%$ of the characteristic distance between two particles.
This avoids instabilities in the conservative part of the dynamics.
\begin{figure}[!ht]
\centering
\includegraphics{shock}
\caption{Density profile in the reference frame of the shock front for the exact and approximate Metropolization.}
\label{fig:shock-profiles}
\end{figure}
The Rankine-Hugoniot relations make use of the conservation of mass, momentum and energy to predict the thermodynamic properties in the shocked state from the initial thermodynamic conditions and the velocity of the particles in the shocked region.
The properties estimated from the SDPD simulations, namely the velocity of the shock wave $v_S$, the density $\rho_s$, pressure $P_s$ and temperature $T_{S}$ in the shocked state, all agree very well with the theoretical predictions as can be seen in Table~\ref{tab:shock-properties}.
\begin{table}[!ht]
\centering
\begin{tabular}{ccccc}
\toprule
Scheme & $v_S$ (km.s$^{-1}$)& $\rho_S$ (kg.m$^{-3}$)& $P_S$ (GPa)& $T_S$ (K)\\
\hline
EMS & 2254 & 4173 & 4.50 & 7816\\
AMS & 2268 & 4151 & 4.49 & 7836\\
RH & 2314 & 4075 & 4.58 & 8244\\
\bottomrule
\end{tabular}
\caption{Average observables in the shocked state: SDPD compared to the Rankine-Hugoniot (RH) predictions.}
\label{tab:shock-properties}
\end{table}
Let us point out that this simulation would not have been possible with SSA since negative internal energies appear very early in the simulation even for time steps as small as $\Delta t = 10^{-4}$.
The Metropolization procedure can thus be particularly useful in nonequilibrium simulations where stability issues are aggravated.
\section{Conclusion}
\label{sec:conclusion}
In this work, we have introduced a Metropolis procedure for the integration of the fluctuation/dissipation part in SDPD.
This adaptation of the Metropolized schemes for DPDE has led to a significant increase in the stability of the dynamics for small particle sizes.
This allows us to carry out simulations that traditional schemes could not achieve due to the very stringent time-step limitations they must respect.
It appears that an approximate version of the Metropolis step where only the negative internal energies are rejected is actually enough to ensure the stability of the algorithm and does not display a larger bias or energy drift than its exact version.
With the addition of the Metropolis step, the integration of the fluctuation/dissipation part is stable for very large time steps, and the limit on the admissible time steps arises from the conservative part. A multiple-time-step approach has been tested in which a smaller time step was used for the Velocity Verlet integration and a larger one for the Metropolized SSA scheme.
This resulted in a similar energy drift at a reduced computational cost.
The relevance of the Metropolization in nonequilibrium situations has been illustrated by the simulation of a shock wave.
While traditional schemes such as SSA fail to perform the simulation for small particles, due to the aggravated stability issues, both the exact and approximate Metropolized schemes have allowed us to recover the correct physical properties of the shock wave.
Our Metropolis schemes still suffer from the difficult parallelization of SSA on which they are based.
It would be most beneficial to adapt a stabilization procedure based on rejecting moves leading to negative energies to parallel schemes~\cite{larentzos_2014,homman_2016} in order to deal with larger systems.
\section*{Acknowledgments}
We thank J.-B. Maillet for helpful discussions.
The work of G.S. was funded by the Agence Nationale de la Recherche, under grant ANR-14-CE23-0012 (COSMOS) and by the European Research Council under the European Union's Seventh Framework Programme (FP/2007-2013) / ERC Grant Agreement number 614492.
\section{Introduction}
As is well known, the efficient generation and manipulation of
spin-polarized current are the crucial elements in the field of spintronics.
\cite{spin} Considerable efforts have been devoted to direct spin-current
injection from ferromagnetic (FM) electrodes into nonmagnetic materials via
tunnel junctions, such as graphene,\cite{graphene1,graphene2}
silicon,\cite{silicon} and quantum dots (QDs).\cite{QDE1,QDE2,QDT1,QDT2} It is shown both
experimentally\cite{QDE1,QDE2} and theoretically\cite{QDT1} that the
spin-current polarization can be electrically manipulated in transport
through a single QD weakly coupled to one FM and one normal-metallic (NM)
electrode, but the polarization is limited by the typical polarization of
ferromagnets, $30$--$40\%$.\cite{sp} Interestingly, when the coupling of the
FM lead with the QD is much stronger than that of the NM lead, the
polarization of spin injection in the strong coupling regime is greatly
enhanced, far beyond the intrinsic polarization of ferromagnets,\cite{QDT2}
due to the FM exchange-field induced spin-splitting of the QD level.
However, high spin polarization strongly depends on the left-right asymmetry
of the QD-lead coupling in this case.
In molecular spintronics, single-molecule magnets (SMMs) with a large
molecular spin have magnetic bistability induced by the easy-axis magnetic
anisotropy, which has potential application in data storage and information
processing. Therefore, transport properties through SMMs have been
intensively investigated in recent years.\cite{Wernsdorfer,exp1,Kim} For
instance, the spin-filter effect\cite{Spinfilter1,Spinfilter2} and
thermoelectrically induced pure spin-current\cite{Xing2,Xing3} are
identified in SMM junctions with NM leads. When the SMM is attached to two
FM electrodes, the tunnel magnetoresistance,\cite{JBTMR1,Xie1,JBTMR2,JBTMR3}
memristive properties,\cite{Timm3} spin Seebeck effect,\cite{SSeebeck,SSeebeck1}
spin-resolved dynamical conductance,\cite{Scon} and
other spin-related properties have been theoretically studied. Due to the
spin asymmetry, the spin-polarized transport through a SMM coupled to one FM
and one NM lead has received much attention. It is both
theoretically\cite{Timm1,JB,Rossier,Xing1,JB1} and experimentally\cite{exp2,exp3} verified
that the spin switching of such a FM-SMM-NM junction can be realized by
spin-polarized currents. On the other hand, the spin-polarized charge
current itself can exhibit behaviors such as the spin-diode\cite{JBdiode}
and negative differential conductance.\cite{Timm2,Xue} However, the
spin-current through the magnetic molecular junction has received little
attention in these works.
In this paper, we adopt the master-equation approach to study spin-current
injection through the FM-SMM-NM junction in the weak coupling regime. The
system under investigation is shown in Fig.1, where the magnetization of the
FM lead is collinear with the magnetic easy axis of the SMM. Experimentally,
this transport setup can be realized with a FM scanning tunneling microscope
tip coupled to a magnetic adatom or SMM placed on a NM
surface.\cite{exp1,exp2,Rossier,JBdiode,Xie2} The SMM is modeled as a single-level QD
with a local uniaxial anisotropic spin.\cite{Timm1,JBTMR1} Both FM and
antiferromagnetic (AFM) exchange couplings are discussed in this work. We
find that the output spin-current polarization through the SMM junction is
greatly enhanced and can reach $90\%$, even though the spin polarization is typically
$40\%$ in the FM lead. The enhancement of the spin polarization is attributed to
the easy-axis magnetic anisotropy of the SMM and the spin-flip
process.\cite{JBTMR1,Xie1,Xie2,Xie3}
\begin{figure}[tph]
\begin{center}
\includegraphics[width=0.9\columnwidth]{Fig1.eps}
\end{center}
\caption{(Color online) Schematic diagram of a SMM weakly coupled to FM and
NM electrodes. The magnetization of the FM lead is collinear with the easy axis
of the SMM (taken as the $z$-axis). Bias voltages $V/2$ and $-V/2$ are applied to
the left (L) and right (R) electrodes, respectively.}
\label{Fig:1}
\end{figure}
\section{Model and method}
The total Hamiltonian of the SMM tunnel junction shown in Fig. 1 is written
as
\begin{equation}
H=H_{SMM}+H_{leads}+H_{T}. \label{Eq1}
\end{equation}
The first term is the giant-spin Hamiltonian of the SMM\cite{Timm1,JBTMR1,Shen}
\begin{eqnarray}
H_{SMM} &=&\sum_{\sigma }(\varepsilon -eV_{g})d_{\sigma }^{\dag }d_{\sigma
}+Ud_{\uparrow }^{\dag }d_{\uparrow }d_{\downarrow }^{\dag }d_{\downarrow }
\notag \\
&&-K(S^{z})^{2}-J\mathbf{s\cdot S}.
\end{eqnarray}
Here, $\varepsilon $ is the energy of the orbital level (OL) of the magnetic
molecule, which can be tuned by the gate voltage $V_{g}$, and the operator
$d_{\sigma }^{\dag }$ ($d_{\sigma }$) creates (annihilates) an electron with
spin $\sigma $ in the molecular OL. $U$ is the Coulomb energy of the two
electrons in the molecule, and $K$ ($K>0$) denotes the easy-axis anisotropy
of the SMM. The spin operator of the OL is defined as
$\mathbf{s}\equiv \sum_{\sigma \sigma ^{\prime }}d_{\sigma }^{\dag }(\mathbf{\sigma }_{\sigma \sigma ^{\prime }}/2)d_{\sigma ^{\prime }}$,
where $\mathbf{\sigma }\equiv (\mathbf{\sigma }_{x},\mathbf{\sigma }_{y},\mathbf{\sigma }_{z})$
represents the vector of Pauli matrices. $J$ describes the spin-exchange
coupling between the spin $\mathbf{s}$ of OL electrons and the local spin
$\mathbf{S}$ of the molecule, which can be either FM ($J>0$) or AFM ($J<0$).
By introducing the $z$ component $S_{t}^{z}$ of the total spin operator
$\mathbf{S}_{t}\equiv \mathbf{s}+\mathbf{S}$, many-body eigenstates of the SMM can be
written as $\left\vert n,S_{t};S_{t}^{z}\right\rangle $, where $n$ denotes
the charge state of the SMM and $S_{t}$ is the quantum number of the total
spin $\mathbf{S}_{t}$.
The second term of Eq.(\ref{Eq1}) describes noninteracting electrons in the
electrodes, $H_{leads}=\sum_{\alpha =L,R}\sum_{\mathbf{k}\sigma
}\varepsilon _{\alpha \mathbf{k}\sigma }c_{\alpha \mathbf{k}\sigma }^{\dag
}c_{\alpha \mathbf{k}\sigma }$, where $\varepsilon _{\alpha \mathbf{k}\sigma
}$ is the energy of an electron with wave vector $\mathbf{k}$ and spin
$\sigma $ in lead $\alpha $, and $c_{\alpha \mathbf{k}\sigma }^{\dag }$
($c_{\alpha \mathbf{k}\sigma }$) is the corresponding electronic creation
(annihilation) operator. Assuming $\rho _{\alpha \sigma }$ is the density of
states of electrons with spin $\sigma $ in the lead $\alpha $, we can define
the spin polarization of the ferromagnetic lead as $p_{\alpha }=(\rho
_{\alpha \sigma }-\rho _{\alpha \overline{\sigma }})/(\rho _{\alpha \sigma
}+\rho _{\alpha \overline{\sigma }})$. In our calculation, polarizations of
the left FM and right NM leads are chosen as $p_{L}=0.4$ and $p_{R}=0$,
respectively.
The coupling between the leads and the SMM is described by the tunneling
Hamiltonian $H_{T}=\sum_{\alpha \mathbf{k}\sigma }(t_{\alpha }c_{\alpha
\mathbf{k}\sigma }^{\dag }d_{\sigma }+t_{\alpha }^{\ast }d_{\sigma }^{\dag
}c_{\alpha \mathbf{k}\sigma })$, where $t_{\alpha }$ is the tunnel matrix
element between the lead $\alpha $ and the SMM, and the spin-dependent
tunnel-coupling strength is denoted by $\Gamma _{\alpha \sigma }=2\pi \rho
_{\alpha \sigma }\left\vert t_{\alpha }\right\vert ^{2}$. Furthermore, we
can rewrite the tunnel-coupling strength as $\Gamma _{\alpha \sigma }=\Gamma
_{\alpha }(1\pm p_{\alpha })/2$ with the sign $+$ ($-$) corresponding to
$\sigma =\uparrow $ ($\downarrow $), and define $\Gamma _{\alpha }=\Gamma
_{\alpha \uparrow }+\Gamma _{\alpha \downarrow }$ and $\Gamma =(\Gamma
_{L}+\Gamma _{R})/2$. For simplicity, the bias voltage $V$ is applied
symmetrically on the SMM tunnel junction with $\mu _{L}=eV/2$ and $\mu
_{R}=-eV/2$.
\begin{figure}[pth]
\begin{center}
\includegraphics[width=1\columnwidth]{Fig2.eps}
\end{center}
\caption{(Color online) Spin-current polarization for the FM (a) and AFM (b)
exchange couplings as a function of the bias and gate voltages. The
parameters are: $S=2$, $\protect\varepsilon=0.5$ meV, $|J|=0.4$ meV, $U=1$
meV, $K=0.05$ meV, $k_{B}T=0.04$ meV, $p_{L}=0.4$, $p_{R}=0$, and
$\Gamma=\Gamma_{L}=\Gamma_{R}=0.001$ meV.}
\label{Fig:2}
\end{figure}
We analyze spin-polarized transport through the SMM junction in both
the sequential and cotunneling regimes in the weak-coupling limit, i.e., $\Gamma
\ll k_{B}T$. The electron tunneling processes are assumed to be
stochastic and Markovian, and the time evolution of the SMM can be described
by the master equation,\cite{Xie1,Xie3,Timm5,Koch2}
\begin{eqnarray}
\frac{dP_{i}}{dt} &=&\sum_{\alpha \alpha ^{\prime }\sigma \sigma ^{\prime
}i^{\prime }\neq i}[-(W_{\alpha \sigma ,\alpha ^{\prime }\sigma ^{\prime
}}^{i,i^{\prime }}+W_{\alpha \sigma }^{i,i^{\prime }})P_{i} \notag \\
&&+(W_{\alpha \sigma }^{i^{\prime },i}+W_{\alpha ^{\prime }\sigma ^{\prime
},\alpha \sigma }^{i^{\prime },i})P_{i^{\prime }}], \label{master}
\end{eqnarray}
with $P_{i}$ denoting the probability of the molecular many-body eigenstate
$\left\vert i\right\rangle $. The transition rates $W$ in Eq.~(\ref{master})
can be calculated perturbatively by the generalized Fermi's golden
rule based on the $T$-matrix.\cite{Bruus} Moreover, the rate $W_{\alpha
\sigma }^{i,i^{\prime }}$ denotes the sequential tunneling transition from
the state $\left\vert i\right\rangle $ to $\left\vert i^{\prime
}\right\rangle $ due to a spin-$\sigma $ electron tunneling of lead $\alpha
$, and $W_{\alpha \sigma ,\alpha ^{\prime }\sigma ^{\prime }}^{i,i^{\prime }}$
stands for the cotunneling transition with a spin-$\sigma $ electron of lead
$\alpha $ being transferred to a spin-$\sigma ^{\prime }$ electron of lead
$\alpha ^{\prime }$. With the stationary conditions $\frac{dP_{i}}{dt}=0$
and $\sum_{i}P_{i}=1$, we can obtain the steady-state transport
properties. Finally, the current of spin-$\sigma $ electrons through lead
$\alpha $ is defined as\cite{Xie3,Koch2}
\begin{align}
I_{\alpha \sigma }& =e(-1)^{\delta _{R\alpha }}\sum_{\alpha ^{\prime }\neq
\alpha \sigma ^{\prime }ii^{\prime }}[(n_{i^{\prime }}-n_{i})W_{\alpha
\sigma }^{i,i^{\prime }}P_{i} \notag \\
& +(W_{\alpha \sigma ,\alpha ^{\prime }\sigma ^{\prime }}^{i,i^{\prime
}}-W_{\alpha ^{\prime }\sigma ^{\prime },\alpha \sigma }^{i,i^{\prime
}})P_{i}],
\end{align}
and thus we have the total charge current $I_{\alpha }=I_{\alpha \uparrow
}+I_{\alpha \downarrow }$ as well as the spin-current $I_{\alpha
s}=I_{\alpha \uparrow }-I_{\alpha \downarrow }$. The polarization of
spin-current is defined as
\begin{equation*}
\chi =(I_{\alpha \uparrow }-I_{\alpha \downarrow })/(I_{\alpha \uparrow
}+I_{\alpha \downarrow }).
\end{equation*}
In addition, the magnetization of the SMM is $\left\langle
S_{t}^{z}\right\rangle =\sum_{i}S_{ti}^{z}P_{i}$. With this framework,
we can calculate the spin transport properties of the SMM junction;
the results are presented below.
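In practice, the stationary conditions reduce to a small linear system. The following sketch (our illustration with placeholder rates, not the authors' code) assembles the balance equations of Eq.~(\ref{master}) from a total rate matrix $W$, replaces one of them by the normalization $\sum_{i}P_{i}=1$, and solves for the occupation probabilities.

```python
# Illustration only: steady state of a Markovian master equation
# dP_i/dt = sum_j (W[j][i] P_j - W[i][j] P_i) = 0 with sum_i P_i = 1.
# W[i][j] is the total transition rate from state i to state j
# (sequential plus cotunneling contributions, summed over leads and spins),
# with the convention W[i][i] == 0.

def steady_state(W):
    n = len(W)
    # Balance equations A P = 0; the last row is replaced by normalization.
    A = [[W[j][i] if j != i else -sum(W[i]) for j in range(n)] for i in range(n)]
    A[-1] = [1.0] * n
    b = [0.0] * (n - 1) + [1.0]
    # Gaussian elimination with partial pivoting (n is small here).
    for c in range(n):
        piv = max(range(c, n), key=lambda r: abs(A[r][c]))
        A[c], A[piv] = A[piv], A[c]
        b[c], b[piv] = b[piv], b[c]
        for r in range(c + 1, n):
            f = A[r][c] / A[c][c]
            for k in range(c, n):
                A[r][k] -= f * A[c][k]
            b[r] -= f * b[c]
    P = [0.0] * n
    for r in range(n - 1, -1, -1):
        P[r] = (b[r] - sum(A[r][k] * P[k] for k in range(r + 1, n))) / A[r][r]
    return P
```

The spin-resolved currents and the polarization $\chi$ then follow by inserting the resulting $P_{i}$ into the current formula above.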
\section{Results and discussion}
In Fig. 2, we show the dependence of the spin-current polarization on the
gate and bias voltages for the FM and AFM spin-exchange couplings. We first
find that the polarization $\chi $ is asymmetric under reversal of the bias
voltage $V$, since the tunnel-coupling strength between the molecule and the FM
electrode is spin-dependent ($\Gamma _{L\uparrow }>\Gamma _{L\downarrow }$).
At lower bias voltages, the cotunneling processes dominate electron
transport in the regions marked by the electron occupation numbers $n=0$,
$1$, or $2$,\cite{JBTMR1,Xie1} and the sequential tunneling processes start
to take part in transport at larger bias voltages. Although the
polarization of the FM lead is $0.4$, the spin-current polarization $\chi $ can
exceed $0.4$, and even $0.9$, in the sequential and cotunneling
regions [Figs. 2(a) and (b)]. Moreover, because the energy of the molecular
state $\left\vert 1,5/2;S_{t}^{z}\right\rangle $ is larger than that of the
molecular state $\left\vert 1,3/2;S_{t}^{z}\right\rangle $ for the AFM
case,\cite{Xie3} the voltage dependence of the spin polarization $\chi $ is very
different for the two types of couplings, or even opposite in some regions.
The charge current $I_{c}$, spin current $I_{s}$, differential conductance
$G$, magnetization $\left\langle S_{t}^{z}\right\rangle $, and spin
polarization $\chi $ are shown in Fig. 3 for the FM spin-exchange coupling.
The spin-flip induced by the spin-exchange coupling plays a key
role\cite{JBTMR1,Xie1,JBdiode,Xie3} in transport through the SMM. The
magnetization $\left\langle S_{t}^{z}\right\rangle $ is positive at
negative bias voltages [Fig. 3(c)], since the states with positive
$S_{t}^{z}$ dominate the steady transport processes; it becomes negative at
positive bias voltages. The charge current $I_{c}$ (black solid line)
and spin current $I_{s}$ (red dashed line) versus the bias voltage $V$ are
shown in Fig. 3(a) at the gate voltage $V_{g}=-0.4$ mV, where the degenerate
ground states of the SMM are $\left\vert 0,2;\pm 2\right\rangle $. The spin
current $I_{s}$ is negative over most of the bias-voltage region except at high
positive bias, where all the transport channels are open. The variation of the
differential conductance $G$ with respect to the bias voltage $V$ is plotted
in Fig. 3(b), which displays three sequential resonant peaks at positive
bias. The peak-$1$ corresponds to the spin-down sequential transition
\left\vert 0,2;-2\right\rangle \Leftrightarrow \left\vert
1,5/2;-5/2\right\rangle $, and the peak-$2$ (peak-$3$) corresponds to the
spin-up transition $\left\vert 0,2;-2\right\rangle \Leftrightarrow
\left\vert 1,3/2;-3/2\right\rangle $ ($\left\vert 1,5/2;-5/2\right\rangle
\Leftrightarrow \left\vert 2,2;-2\right\rangle $). At negative bias, the
peak-$1^{\prime }$ corresponds to the spin-up transition $\left\vert
0,2;2\right\rangle \Leftrightarrow \left\vert 1,5/2;5/2\right\rangle $, and
the peak-$2^{\prime }$ (peak-3$^{\prime }$) corresponds to the spin-down
transition $\left\vert 0,2;2\right\rangle \Leftrightarrow \left\vert
1,3/2;3/2\right\rangle $ ($\left\vert 1,5/2;5/2\right\rangle \Leftrightarrow
\left\vert 2,2;2\right\rangle $). The height of peak-$1$ is lower than that
of the peak-$1^{\prime }$, since spin-up (spin-down) electrons are majority
(minority) in the $L$-electrode.
\begin{figure}[pth]
\begin{center}
\includegraphics[width=0.9\columnwidth]{Fig3.eps}
\end{center}
\caption{(Color online) In the case of FM exchange coupling ($J=0.4$ meV):
(a) charge current $I_{c}$ and spin current $I_{s}$, (b) differential
conductance $G$, (c) magnetization $\left\langle S_{t}^{z}\right\rangle $,
and (d) spin-current polarization $\protect\chi$ as a function of the bias
voltage $V $ for different gate voltages $V_{g}$. The current and
differential conductance are scaled in units of $I_{0}=2e\Gamma /\hbar $ and
$G_{0}=10^{-3}e^{2}/h$, respectively.}
\label{Fig:3}
\end{figure}
\begin{figure}[pth]
\begin{center}
\includegraphics[width=0.9\columnwidth]{Fig4.eps}
\end{center}
\caption{(Color online) In the case of AFM exchange coupling ($J=-0.4$ meV):
(a) charge current $I_{c}$ and spin current $I_{s}$, (b) differential
conductance $G$, (c) magnetization $\left\langle S_{t}^{z}\right\rangle $,
and (d) spin-current polarization $\protect\chi$ as a function of the bias
voltage $V $ for different gate voltages $V_{g}$.}
\label{Fig:4}
\end{figure}
The spin-current polarization $\chi $ as a function of the bias voltage $V$
is shown in Fig. 3(d) for different gate voltages. At low bias voltages
around $V=0$ mV, the electron transport is dominated by elastic cotunneling
processes, and thus the polarization $\chi $ approaches the spin polarization
of the FM lead, namely $\chi \sim 0.4$. With a slight increase of the bias voltage
$V$, the inelastic cotunneling processes start to take part in electron
transports. For the gate voltage $V_{g}=-0.4$ mV (black solid line), at
positive bias the transport current is carried mainly by the spin-down
electrons tunneling through the SMM via the virtual transition $\left\vert
0,2;-2\right\rangle \Leftrightarrow \left\vert 1,5/2;-5/2\right\rangle $
\cite{JBTMR1} and the polarization $\chi $ changes from positive to negative
values. At negative bias, the transport is dominated by the spin-up
(spin-majority) electrons tunneling via the virtual transition $\left\vert
0,2;2\right\rangle \Leftrightarrow \left\vert 1,5/2;5/2\right\rangle $, and
the polarization $\chi $ is enhanced, larger than $0.8$. When the bias
voltage increases further to the threshold value, sequential tunneling
begins to dominate the electron transport. The polarization reaches its
lowest value, $\chi \sim -0.7$, at peak-$1$ of the differential conductance, as
displayed in Fig. 3 (b), where the transport is dominated by spin-down
electrons. It reduces to $-0.4$ ($-0.1$) approximately between the peak-$1$
and peak-$2$ (peak-$2$ and peak-$3$), where more spin-up electrons take part
in the transport. On the other hand, the polarization $\chi $ at the
conductance peak-$1^{\prime }$ reaches its highest value of about $0.9$, where
the spin-up electrons dominate the transport. It reduces to about $0.7$
($0.4$) between the peak-$1^{\prime }$ and peak-$2^{\prime }$
(peak-$2^{\prime }$ and peak-$3^{\prime }$). When the absolute value of the
bias voltage $V$ is high enough, all transition channels enter the transport
window and the polarization $\chi $ remains nearly constant, close to
$0.2$. This situation is the same as in QDs.\cite{QDT1} At the gate
voltage $V_{g}=1$ mV the ground states of SMM become $\left\vert 1,5/2;\pm
5/2\right\rangle $, where the spin-polarization curve $\chi $ (pink dashed
line) varies approximately from $0.4$ to $0.2$ with increasing bias $V$.
Since the gate voltages $V_{g}=2.4$ mV and $V_{g}=-0.4$ mV are symmetric
with respect to $V_{g}=1$ mV, which is the electron-hole symmetry
point,\cite{JBTMR1,JBdiode} the corresponding polarization curves (blue dot-dash and
black solid lines) exhibit a symmetric behavior, as seen in Fig. 3(d). For
$V_{g}=2.4$ mV, the highest polarization $\chi $ located at peak-$1$ is
mainly contributed by the spin-up transitions $\left\vert
2,2;-2\right\rangle \Leftrightarrow \left\vert 1,5/2;-5/2\right\rangle $.
Figure 4 presents the spin-polarized transport through the SMM tunnel
junction with the AFM exchange interaction between the transport electrons
and the molecular giant-spin. Different from the FM case, the polarization
$\chi $ at the gate voltage $V_{g}=-0.4$ mV increases from $0.4$ to about
$0.7$ at positive bias. This behavior is attributed to the spin-up
virtual transition $\left\vert 0,2;-2\right\rangle \Leftrightarrow
\left\vert 1,3/2;-3/2\right\rangle $, while $\chi $ decreases to a
negative value of about $-0.4$ at negative bias due to the spin-down
virtual transition $\left\vert 0,2;2\right\rangle \Leftrightarrow \left\vert
1,3/2;3/2\right\rangle $. When the bias voltage increases to a certain
higher value, the sequential tunneling dominates the electronic transport,
and the differential conductance $G$ possesses four peaks for both positive
and negative bias voltages, as shown in Fig. 4(b). The peak-$1$, arising mainly from
the transitions $\left\vert 0,2;-2\right\rangle \Leftrightarrow \left\vert
1,3/2;-3/2\right\rangle $, is higher than the peak-$1^{\prime }$, which is
contributed by the transitions $\left\vert 0,2;2\right\rangle
\Leftrightarrow \left\vert 1,3/2;3/2\right\rangle $. The corresponding
polarization $\chi $ at peak-$1$ (peak-$1^{\prime }$) reaches the highest
(lowest) value of about $0.7$ ($-0.4$). The conductance peak-$2$ and peak-$3$
are related to the transitions $\left\vert 0,2;-2\right\rangle
\Leftrightarrow \left\vert 1,5/2;-5/2\right\rangle $ and $\left\vert
1,5/2;-5/2\right\rangle \Leftrightarrow \left\vert 2,2;-2\right\rangle $,
respectively, and a small dip appears for the polarization curve (black
solid line) between the two peaks. In contrast, the polarization curve is
convex between the peak-$2^{\prime }$ and peak-$3^{\prime }$. An additional
peak-$4$ and peak-$4^{\prime }$ emerge with a further increase of the bias
voltage $V$. These two peaks arise mainly from the transitions
$\left\vert 1,3/2;-3/2\right\rangle \Leftrightarrow \left\vert
2,2;-2\right\rangle $ and $\left\vert 1,3/2;3/2\right\rangle \Leftrightarrow
\left\vert 2,2;2\right\rangle $, respectively. More interestingly, without
the reversal of bias, the polarization $\chi $ for $V_{g}=2$ mV (blue
dot-dash line) can be reversed from about $-0.6$ to $0.4$ within the
positive-bias range.
\section{Conclusion}
In conclusion, we have shown that the FM-SMM-NM junction can work as an
efficient spin-current injector. By the master equation approach,
the spin-polarized transport properties are systematically investigated in
both the sequential and cotunneling regimes. Our results demonstrate that
the transport exhibits a very asymmetric behavior with respect to the zero
bias. The spin-flip process, which originates from the spin-exchange
interaction between the SMM and the transport electrons, leads to the
amplification of the spin polarization injected from the FM electrode, and a
very high polarization of the spin-current is obtained. Furthermore, both
the magnitude and sign of the spin polarization are tunable by the gate or
bias voltages, suggesting an electrically-controllable spin device in
molecular spintronics.
\begin{acknowledgements}
This work was supported by the National Natural Science Foundation of China
under Grant Nos. 11504260, 11447163, 11574186, 11275118, and 11504240, and by STIP under Grant Nos. 2014147 and 2016170.
\end{acknowledgements}
\section{Introduction}
\label{sec:introduction}
Autonomous vehicle planning and control
frameworks~\cite{KTINTH15,apex} often follow the hierarchical planning architecture
outlined by Firby~\cite{Firby89} and Gat~\cite{Gat98}.
The key idea here is to separate the complications involved in low-level
hardware control from high-level planning decisions to accomplish the navigation
objective.
A typical example of such separation of concerns is proving the controllability
property (the vehicle can be steered from any start point to an arbitrary neighborhood
of the target point) of the motion-primitives of the vehicle, followed by a search
(path-planning) for an obstacle-free path (called the \emph{roadmap}), and then
utilizing the controllability property to compose the low-level primitives to
follow the path (path-following).
However, in the absence of the controllability property, it is not always
possible to follow arbitrary roadmaps with the given motion-primitives.
In these situations we need to study a motion planning problem that is
aware of the motion-primitives available to the controller.
We study this motion planning problem in the simpler setting of systems modeled as
constant-rate multi-mode systems~\cite{ATW12}---switched systems with a
constant-rate dynamics (vector) in every mode---and study the reachability
problem for non-convex safety sets.
Alur et al.~\cite{ATW12} studied this problem for convex safety sets and
showed that it can be solved in polynomial time.
Our key result is that even when the safety set is defined by
polyhedral obstacles, the problem of deciding reachability is undecidable.
On a positive side we show that if the safety set is an open set
defined by linear inequalities, the problem is decidable and can be
solved using a variation of cell-decomposition algorithm~\cite{SS83}.
We present a novel algorithm, inspired by bounded model-checking~\cite{clarke2001bounded}
and equipped with acceleration, to decide reachability.
We use the Z3 theorem prover as the constraint-satisfaction engine for the
quadratic formulas in our implementation.
We show the efficiency of our algorithm by comparing its performance with
the popular sampling-based algorithm \emph{rapidly-exploring
random tree} (RRT) as implemented in the \textit{Open Motion Planning
Library (OMPL)}.
For a detailed survey of motion planning algorithms we refer to the
excellent expositions by Latombe~\cite{latombe2012robot} and
LaValle~\cite{Lav06}.
The motion-planning problem while respecting system dynamics can be
modeled~\cite{frazzoli2000robust} in the framework of hybrid
automata~\cite{ACHH92,Hen96}; however, the reachability problem is
undecidable even for simple stopwatch automata~\cite{HKPV98}.
There is a vast literature on decidable subclasses of hybrid
automata~\cite{ACHH92,BBM98}.
Most notable among these classes are initialized rectangular hybrid
automata~\cite{HKPV98}, two-dimensional piecewise-constant derivative
systems~\cite{AMP95}, timed automata~\cite{alurDill94}, and
discrete-time control for hybrid automata~\cite{HK99}.
For a review of related work on multi-mode systems we refer to~\cite{AFMT13,ATW12}.
\section{Motivating Example}
\label{sec:motivation}
Let us consider a two-dimensional
multi-mode system with three modes $m_1$, $m_2$, and $m_3$, shown geometrically with
their rate-vectors in Figure~\ref{fig:l-shaped}(a).
We consider the reach-while-avoid problem in the arena given in
Figure~\ref{fig:l-shaped}(b) with two rectangular obstacles $\mathcal{O}_1$ and $\mathcal{O}_2$
and source and target points $\point{x}_s$ and $\point{x}_t$, respectively.
In particular, we are interested in the question whether it is possible to move
a point-robot from point $\point{x}_s$ to point $\point{x}_t$ using directions dictated by
the multi-mode system given in Figure~\ref{fig:l-shaped}(a) while avoiding
passing through or even grazing any obstacle.
It follows from our results in Section~\ref{sec:undec} that in general the
problem of deciding reachability is undecidable even with polyhedral obstacles.
However, the example considered in Figure~\ref{fig:l-shaped} has an interesting
property that the safety set can be represented as a union of finitely many
polyhedral open sets (cells).
This property, as we show later, makes the problem decidable.
In fact, if we decompose the workspace into cells using any off-the-shelf
cell-decomposition algorithm, we only need to consider the sequences of obstacle-free
cells to decide reachability.
In particular, for a given sequence of obstacle-free convex sets such
that the starting point is in the first set and the target point is in the last set,
one can write a linear program checking whether there is a sequence of intermediate
states, one in each intersection of successive sets, such that these points are
reachable in sequence using the constant-rate multi-mode system.
Our key observation is that one need not consider cell sequences longer than
the total number of cells, since for reachability it does not help the
system to leave a cell and enter it again.
This approach, however, is not very efficient, since one
needs to consider all sequences of cells.
However, this result provides an upper bound on the number of ``meta-steps''
through the cells that the system needs to take in order to reach the
target, and hints at a bounded model-checking~\cite{clarke2001bounded}
approach.
We progressively increase the bound $k$ and ask whether there is a sequence of points
$\point{x}_0, \ldots, \point{x}_{k+1}$ such that $\point{x}_0 = \point{x}_s$, $\point{x}_{k+1} = \point{x}_t$, and
for all $0 \leq i \leq k$ we have that $\point{x}_i$ can reach $\point{x}_{i+1}$ using the
rates provided by the multi-mode system (convex cone of rates translated to
$\point{x}_i$ contains $\point{x}_{i+1}$) and the line segment $\lambda \point{x}_i + (1-\lambda)
\point{x}_{i+1}$ does not intersect any obstacle.
Notice that if this condition is satisfied, then the system can safely move from
point $\point{x}_i$ to $\point{x}_{i+1}$ by carefully choosing a scaling down of the
rates so as to stay in the safety set, as illustrated in Figure~\ref{fig:l-shaped}.
Let us first consider $k=0$ and notice that one can reach point $\point{x}_t$ from
$\point{x}_s$ using just the mode $m_1$; unfortunately, the line segment
connecting these points passes through both obstacles.
In this case we increase the bound by $1$ and consider the problem of finding a
point $\point{x}$ such that the system can reach from $\point{x}_s$ to $\point{x}$ and also from
$\point{x}$ to $\point{x}_t$, and the line segment connecting $\point{x}_s$ with $\point{x}$, and $\point{x}$
with $\point{x}_t$ do not intersect any obstacles.
It is easy to see from Figure~\ref{fig:l-shaped} that this is indeed the
case: we can alternate modes $m_1, m_2$ from $\point{x}_s$ to $\point{x}$, and
modes $m_1, m_3$ from $\point{x}$ to $\point{x}_t$.
Hence, there is a schedule that steers the system from $\point{x}_s$ to $\point{x}_t$, as
shown in Figure~\ref{fig:l-shaped}(c).
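The two checks needed at each meta-step (the displacement lies in the convex cone of the rate vectors, and the connecting segment is obstacle-free) can be sketched as follows for the planar example above. The function and variable names are ours, not the paper's implementation, and obstacles are restricted to axis-aligned rectangles for simplicity.

```python
# Sketch of the per-step feasibility test for the 2D example (our naming,
# not the paper's code). A step x -> y is feasible if y - x lies in the
# convex cone of the rate vectors and the segment [x, y] does not touch
# any (axis-aligned rectangular) obstacle.

def in_cone(rates, v, eps=1e-9):
    """Is v a nonnegative combination of the given 2D rate vectors?"""
    for a, c in rates:  # single-ray case: v parallel to a rate, same direction
        if abs(a * v[1] - c * v[0]) < eps and a * v[0] + c * v[1] >= 0:
            return True
    for i in range(len(rates)):  # two-ray case: solve a 2x2 system per pair
        for j in range(i + 1, len(rates)):
            (a, c), (b, d) = rates[i], rates[j]
            det = a * d - b * c
            if abs(det) < eps:
                continue
            t1 = (v[0] * d - b * v[1]) / det
            t2 = (a * v[1] - c * v[0]) / det
            if t1 >= -eps and t2 >= -eps:
                return True
    return False

def segment_hits_box(p, q, box):
    """Does segment p-q touch the closed box (xmin, ymin, xmax, ymax)?"""
    xmin, ymin, xmax, ymax = box
    t0, t1 = 0.0, 1.0  # parametric clipping, Liang-Barsky style
    for d, lo, hi, s in ((q[0] - p[0], xmin, xmax, p[0]),
                         (q[1] - p[1], ymin, ymax, p[1])):
        if d == 0.0:
            if s < lo or s > hi:
                return False
        else:
            ta, tb = (lo - s) / d, (hi - s) / d
            if ta > tb:
                ta, tb = tb, ta
            t0, t1 = max(t0, ta), min(t1, tb)
            if t0 > t1:
                return False
    return True

def step_feasible(p, q, rates, obstacles):
    v = (q[0] - p[0], q[1] - p[1])
    return in_cone(rates, v) and not any(segment_hits_box(p, q, b) for b in obstacles)
```

For the arena above, the direct step $\point{x}_s \to \point{x}_t$ fails the segment test, while the two steps through $\point{x}$ both succeed, matching the $k=1$ schedule.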
\begin{figure}[t]
\centering
\begin{tikzpicture}[scale=0.4]
\tikzstyle{lines}=[draw=black!30,rounded corners]
\tikzstyle{vectors}=[-latex, rounded corners]
\tikzstyle{rvectors}=[-latex,very thick, rounded corners]
\draw[lines] (4.2,0)--(10.2,0);
\draw[lines] (7, 3.2)--(7,-3);
\draw[->, ultra thick, red] (7, 0) --node[black, right]{$~~~m_1$} (9, 1.8) node[left]{$$};
\draw[->, ultra thick, green!50!black] (7, 0) --node[black, right]{$m_2$} (7, -1.8) node[left]{$$};
\draw[->, ultra thick, blue] (7, 0) --node[black, left]{$m_3~$} (5, 1.8) node[left]{$$};
\node at (10, -1) {$(a)$};
\end{tikzpicture}
\hfill
\begin{tikzpicture}[scale=0.7]
\draw (0,0) rectangle (4, 4);
\filldraw[fill=black!40!white, draw=black] (0.15, 1) rectangle(3.75, 0.25);
\filldraw[fill=black!40!white, draw=black] (3, 3.95) rectangle(3.75, 1.05);
\draw[-,orange](0,1) -- (4, 1);
\draw[-,orange](0,0.25) -- (4, 0.25);
\draw[-,orange](3.75,0) -- (3.75, 4);
\draw[-,orange](0.15,0) -- (0.15, 4);
\draw[-,orange](0, 1.05) -- (4, 1.05);
\draw[-,orange](0,3.95) -- (4, 3.95);
\node [fill, draw, circle, minimum width=3pt, inner sep=0pt, pin={[fill=white, outer sep=2pt]225:$\point{x}_s$}] at (0.1,0.1) {};
\node [fill, draw, circle, minimum width=3pt, inner sep=0pt, pin={[fill=white, outer sep=2pt]135:$\point{x}_t$}] at (3.9,3.9) {};
\node at (2, .6) {$\mathcal{O}_1$};
\node at (3.38, 2) {$\mathcal{O}_2$};
\node at (4.8, 0) {$(b)$};
\end{tikzpicture}
\hfill
\begin{tikzpicture}[scale=0.7]
\draw (0,0) rectangle (4, 4);
\filldraw[fill=black!40!white, draw=black] (0.15, 1) rectangle(3.75,
0.25);
\filldraw[fill=black!40!white, draw=black] (3, 3.95) rectangle(3.75, 1.05);
\draw [black!30, thick] (0.1, 0.1) -- (3.9, 0.1);
\draw [black!30, thick] (3.9, 0.1) -- (3.9, 3.9);
\node [fill, draw, circle, minimum width=3pt, inner sep=0pt, pin={[fill=white, outer sep=2pt]225:$\point{x}_s$}] at (0.1,0.1) {};
\node [fill, draw, circle, minimum width=3pt, inner sep=0pt, pin={[fill=white, outer sep=2pt]135:$\point{x}_t$}] at (3.9,3.9) {};
\node [fill, draw, circle, minimum width=3pt, inner sep=0pt,
pin={[fill=white, outer sep=2pt]225: $\point{x}$}] at (3.9,0.1) {};
\def0.15{0.15}
\def0.15{0.15}
\def0.3{0.15}
\foreach \i in {0,...,24}{
\draw[->,red](0.1+0.3*\i,0.1)--(0.1+0.15+0.3*\i, 0.1+0.15);
\draw[->,green!50!black](0.1+0.15+0.3*\i,0.1+0.15)--(0.1+0.15+0.3*\i, 0.1);
}
\def0.15{0.15}
\def0.15{0.15}
\def0.3{0.3}
\foreach \i in {0,...,12}{
\draw[->,blue](3.9,0.1+0.3*\i)--(3.9-0.15, 0.1+0.3*\i+0.15);
\draw[->,red](3.9-0.15, 0.1+0.3*\i+0.15)--(3.9, 0.1+0.3*\i+2*0.15);
}
\node at (2, .6) {$\mathcal{O}_1$};
\node at (3.38, 2) {$\mathcal{O}_2$};
\node at (4.8, 0) {$(c)$};
\end{tikzpicture}
\caption{ a) A multi-mode system, b) an
``L''-shaped arena consisting of obstacles $\mathcal{O}_1$ and $\mathcal{O}_2$ with
start and target points $\point{x}_s$ and $\point{x}_t$ along with the
cell-decomposition shown by orange lines, and
c) a safe schedule from $\point{x}_s$ to $\point{x}_t$. }
\label{fig:l-shaped}
\end{figure}
The property we need to check to ensure a safe schedule is the following:
there exists a sequence of points $\point{x}_s=\point{x}_0,\point{x}_1,\point{x}_2,\dots, \point{x}_n=\point{x}_t$ such that
for all $0\leq \lambda \leq 1$ and for all $i$, the point $\lambda \point{x}_i+(1-\lambda)\point{x}_{i+1}$
on the line segment joining $\point{x}_i$ and $\point{x}_{i+1}$ does not lie in any obstacle $\mathcal{O}$.
This can be thought of as a first-order formula of the
form $\exists X \forall Y F(X, Y)$ where $F(X, Y)$ is a linear formula.
By invoking the Tarski-Seidenberg theorem we know that checking the satisfiability
of this property is decidable.
However, one can also perform a direct quantifier elimination, based on the
Fourier-Motzkin elimination procedure, to obtain existentially quantified quadratic
constraints that can be efficiently checked using theorem provers such as Z3
(\url{https://github.com/Z3Prover/z3}).
This gives us a complete procedure to decide reachability for multi-mode systems
when the safety set can be represented as a union of finitely many polyhedral open sets.
\section{Problem Formulation}
\label{sec:problem}
\noindent{\bf Points and Vectors.}
Let $\mathbb R$ be the set of real numbers.
We represent the states in our system as points in $\mathbb R^n$, which is equipped
with the standard \emph{Euclidean norm} $\norm{\cdot}$.
We denote points in this state space by $\point{x}, \point{y}$, vectors by $\vec{r}, \vec{v}$, and
the $i$-th coordinate of point $\point{x}$ and vector $\vec{r}$ by $\point{x}(i)$ and $\vec{r}(i)$,
respectively.
The distance $\norm{\point{x}, \point{y}}$ between points $\point{x}$ and $\point{y}$ is defined as
$\norm{\point{x} - \point{y}}$.
\noindent{\bf Boundedness and Interior.}
We denote an {\em open ball} of radius $d \in {\mathbb R}_{\geq 0}$ centered at $\point{x}$ as
$\ball{d}{\point{x}} {=} \set{\point{y} {\in} \mathbb R^n \::\: \norm{\point{x},\point{y}} < d}$.
We denote a closed ball of radius $d \in {\mathbb R}_{\geq 0}$ centered at $\point{x}$ as
$\overline{\ball{d}{\point{x}}}$.
We say that a set $S \subseteq \mathbb R^n$ is {\em bounded} if there exists
$d \in {\mathbb R}_{\geq 0}$ such that, for all $\point{x}, \point{y} \in S$, we have
$\norm{\point{x},\point{y}} \leq d$.
The {\em interior} of a set $S$, $\interior(S)$, is the set of all points
$\point{x} \in S$, for which there exists $d > 0$ s.t. $\ball{d}{\point{x}} \subseteq S$.
\noindent{\bf Convexity.} A point $\point{x}$ is a \emph{convex
combination} of a finite set of points $X = \set{\point{x}_1, \point{x}_2, \ldots, \point{x}_k}$ if
there are $\lambda_1, \lambda_2, \ldots, \lambda_k \in [0, 1]$ such that
$\sum_{i=1}^{k} \lambda_i = 1$ and $\point{x} = \sum_{i=1}^k \lambda_i \cdot \point{x}_i$.
We say that $S \subseteq \mathbb R^n$ is {\em convex} iff, for all
$\point{x}, \point{y} \in S$ and all $\lambda \in [0,1]$, we have
$\lambda \point{x} + (1-\lambda) \point{y} \in S$ and moreover,
$S$ is a {\em convex polytope} if there exists $k \in \mathbb N$, a
matrix $A$ of size $k \times n$ and a vector $\vec{b} \in \mathbb R^k$ such that $\point{x}
\in S$ iff $A\point{x} \leq \vec{b}$.
A closed \emph{hyper-rectangle} is a convex polytope that can
be characterized as $\point{x}(i) \in [a_i, b_i]$ for each $i \leq n$ where $a_i, b_i
\in \mathbb R$.
\begin{definition}
\label{def:BMMS}
A (constant-rate) multi-mode system (MMS) is a tuple $\mathcal{H} = (M, n, R)$ where:
$M$ is a finite nonempty set of \emph{modes},
$n$ is the number of continuous variables, and
$R : M \to \mathbb R^n$ maps each mode to a rate vector
whose $i$-th entry specifies the change in the value of the $i$-th
variable per time unit.
For computational purposes, we assume that all real numbers in the
description of an MMS are rational.
\end{definition}
\begin{example}
An example of a 2-dimensional multi-mode system $\mathcal{H} = (M, n, R)$ is shown in
Figure~\ref{fig:l-shaped}(a) where $M = \set{ m_1, m_2, m_3}$, $n = 2$, and the
rate vector is such that $R(m_1) = (1, 1)$, $R(m_2) = (0, -1)$, and $R(m_3) =
(-1, 1)$.
\end{example}
A \emph{schedule} of an MMS specifies a timed sequence of mode switches.
Formally, a \emph{schedule} is defined as a finite or infinite sequence of
\emph{timed actions}, where a timed action $(m, t) \in M \times {\mathbb R}_{\geq 0}$ is a
pair consisting of a mode and a time delay.
A finite \emph{run} of an MMS $\mathcal{H}$ is a finite sequence of states and timed
actions $r = \seq{\point{x}_0, (m_1, t_1), \point{x}_1, \ldots, (m_k, t_k), \point{x}_k}$
such that for all $1 \leq i \leq k$ we have that
$\point{x}_i = \point{x}_{i-1} + t_i \cdot R(m_i)$.
For such a run $r$ we say that $\point{x}_0$ is the \emph{starting state}, while
$\point{x}_k$ is its \emph{terminal state}.
An \emph{infinite run} of an MMS $\mathcal{H}$ is similarly defined to be an infinite
sequence $\seq{\point{x}_0, (m_1, t_1), \point{x}_1, (m_2, t_2), \ldots}$ such that for all
$i \geq 1$ we have that $\point{x}_i = \point{x}_{i-1} + t_i \cdot R(m_i)$.
Given a finite schedule $\sigma = \seq{(m_1, t_1), (m_2, t_2), \ldots, (m_k,
t_k)}$ and a state $\point{x}$, we write $\text{\it Run}(\point{x}, \sigma)$ for the (unique)
finite run $\seq{\point{x}_0, (m_1, t_1), \point{x}_1, (m_2, t_2), \ldots,
\point{x}_k}$ such that $\point{x}_0 = \point{x}$.
In this case, we also say that the schedule $\sigma$ steers the MMS $\mathcal{H}$ from the state
$\point{x}_0$ to the state $\point{x}_k$.
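The run semantics is easy to make concrete: starting from $\point{x}_0$, each timed action $(m_i, t_i)$ translates the state by $t_i \cdot R(m_i)$. A minimal Python sketch, using the rates of the example system in Figure~\ref{fig:l-shaped}(a) (the dictionary encoding and function name are illustrative assumptions, not part of the paper's tool):

```python
# Rates of the example MMS: R(m1)=(1,1), R(m2)=(0,-1), R(m3)=(-1,1).
RATES = {"m1": (1.0, 1.0), "m2": (0.0, -1.0), "m3": (-1.0, 1.0)}

def run(x0, schedule, rates=RATES):
    """Compute Run(x0, schedule): the sequence of states visited,
    where x_i = x_{i-1} + t_i * R(m_i)."""
    states = [tuple(x0)]
    for mode, t in schedule:
        r = rates[mode]
        states.append(tuple(x + t * ri for x, ri in zip(states[-1], r)))
    return states
```

For instance, one time unit of $m_1$ followed by one time unit of $m_2$ steers the system from $(0,0)$ through $(1,1)$ to $(1,0)$.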
We consider the problem of MMS reachability within a given \emph{safety set}
$S$.
We specify the safety set by a pair $(\mathcal{W}, \mathcal{O})$, where
$\mathcal{W} \subseteq \mathbb R^n$ is called the \emph{workspace} and
$\mathcal{O} = \set{\mathcal{O}_1, \mathcal{O}_2, \ldots, \mathcal{O}_k}$ is a finite set of
\emph{obstacles}.
In this case the safety set $S$ is characterized as $S_{\mathcal{W} \backslash \mathcal{O}} = \mathcal{W} \setminus \bigcup_{i=1}^{k} \mathcal{O}_i$.
We assume in the rest of the
paper that $\mathcal{W} = \mathbb R^n$ and for all $1 \leq i \leq k$, $\mathcal{O}_i$ is a
\emph{convex} (not necessarily closed) polytope specified by a set of linear
inequalities.
We say that a finite run
$\seq{\point{x}_0, (m_1, t_1), \point{x}_1, (m_2, t_2), \ldots}$ is $S$-safe if for
all $i \geq 0$ we have that $\point{x}_i \in S$ and
$\point{x}_i + \tau_{i+1} \cdot R(m_{i+1}) \in S$ for all $\tau_{i+1} \in [0, t_{i+1}]$.
Notice that if $S$ is a convex set, then $\point{x}_i \in S$ for all $i \geq 0$
already implies that, for all $i \geq 0$ and all $\tau_{i+1} \in
[0, t_{i+1}]$, we have $\point{x}_i + \tau_{i+1} \cdot R(m_{i+1}) \in S$.
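For a convex safety set this observation reduces safety checking to a membership test on the finitely many states of the run. A small sketch for an open hyper-rectangular $S$ (the box shape is an assumption for illustration):

```python
def run_is_safe_convex(states, lows, highs):
    """For the convex open box S = prod_i (lows[i], highs[i]), a run
    is S-safe iff every visited state lies in S: each segment between
    consecutive states then stays in S by convexity."""
    return all(all(lo < x < hi for x, lo, hi in zip(s, lows, highs))
               for s in states)
```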
We say that a schedule $\sigma$ is $S$-safe from a state $\point{x}$, or is $(S,
\point{x})$-safe, if the
corresponding unique run $\text{\it Run}(\point{x}, \sigma)$ is $S$-safe.
Sometimes we simply call a schedule or a run safe when the safety set and
the starting state are clear from the context.
We say that a state $\point{x}'$ is $S$-safe reachable from a state $\point{x}$ if there
exists a finite schedule $\sigma$ that is $S$-safe at $\point{x}$ and steers the
system from state $\point{x}$ to $\point{x}'$.
We are interested in solving the following problem.
\begin{definition}[Reachability]
Given a constant-rate multi-mode system $\mathcal{H} = (M, n, R)$, safety
set $S$, start state $\point{x}_s$, and target state $\point{x}_t$, the
reachability problem $\textsc{Reach}(\mathcal{H}, S_{\mathcal{W} \backslash \mathcal{O}}, \point{x}_s, \point{x}_t)$ is to decide
whether there exists an $S$-safe finite schedule that steers the
system from state $\point{x}_s$ to $\point{x}_t$.
\end{definition}
Alur \emph{et al.}~\cite{ATW12} gave a polynomial-time algorithm to
decide whether a state $\point{x}_t$ is $S$-safe reachable from a state $\point{x}_0$
for an MMS $\mathcal{H}$ and a convex safety set $S$. In particular, they
characterized the following necessary and sufficient condition.
\begin{theorem} [\cite{ATW12}]
\label{thmcms}
Let $\mathcal{H} = (M, n, R)$ be a multi-mode system and let $S \subset \mathbb R^n$ be an
open, convex safety set.
Then, there is an $S$-safe schedule from $\point{x}_s \in S$ to $\point{x}_t \in S$ if and only if there is $\vec{t} \in {\mathbb R}_{\geq 0}^{|M|}$
satisfying:
$ \point{x}_s + \sum_{i=1}^{|M|} R(m_i) \cdot \vec{t}(i) = \point{x}_t$.
\end{theorem}
A key property of this result is that if $\point{x}_t$ is reachable from $\point{x}_s$ without
considering the safety set, then it is also reachable inside an arbitrary convex
safety set, as long as both $\point{x}_s$ and $\point{x}_t$ lie strictly in its
interior.
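For the three-mode planar system of Figure~\ref{fig:l-shaped}(a) the condition of Theorem~\ref{thmcms} can even be solved in closed form: the constraints $\vec t(1) - \vec t(3) = \Delta x$ and $\vec t(1) - \vec t(2) + \vec t(3) = \Delta y$ become nonnegative once the free variable $\vec t(3)$ is chosen large enough. The sketch below is specialized to these rates; it is an illustration, not a general linear-programming routine:

```python
def dwell_times(xs, xt):
    """Nonnegative (t1, t2, t3) with
    t1*(1,1) + t2*(0,-1) + t3*(-1,1) == xt - xs.
    Setting t3 = s gives t1 = dx + s and t2 = dx + 2*s - dy,
    which are all nonnegative for s large enough."""
    dx, dy = xt[0] - xs[0], xt[1] - xs[1]
    s = max(0.0, -dx, (dy - dx) / 2.0)
    return (dx + s, dx + 2.0 * s - dy, s)
```

Since these three rate vectors positively span the plane, a nonnegative solution exists for every pair of points, matching the theorem: inside a convex safety set the only question is the feasibility of the displacement equation.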
We study the extension of this theorem for the reachability problem with
non-convex safety sets. A key contribution of this paper is a precise
characterization of the decidability of the reachability problem for
multi-mode systems.
\begin{theorem}
\label{thm:main}
Given a constant-rate multi-mode system $\mathcal{H}$, workspace
$\mathcal{W} = \mathbb R^n$, obstacle set $\mathcal{O}$, start state $\point{x}_s$, and target
state $\point{x}_t$, the reachability problem
$\textsc{Reach}(\mathcal{H}, S_{\mathcal{W} \setminus \mathcal{O}}, \point{x}_s, \point{x}_t)$ is in general
undecidable.
However, if the obstacle set $\mathcal{O}$ is given as finitely many closed
polytopes, each defined by a finite set of linear inequalities, then
reachability is decidable.
\end{theorem}
\section{Decidability}
\label{sec:decidability}
We prove the decidability part of Theorem~\ref{thm:main} in this section.
\begin{theorem}
\label{thm:dec}
For an MMS $\mathcal{H} = (M, n, R)$, a safety set $S$, a start state $\point{x}_s$,
and a target state $\point{x}_t$, the problem $\textsc{Reach}(\mathcal{H}, S_{\mathcal{W} \backslash \mathcal{O}}, \point{x}_s, \point{x}_t)$
is decidable if $\mathcal{O}$ is given as finitely many closed polytopes.
\end{theorem}
For the rest of this section let us fix an MMS $\mathcal{H} =
(M, n, R)$,
a start state $\point{x}_s$, and a target state $\point{x}_t$. Before we prove
this theorem, we define the notion of a cell cover (related to, but
distinct from, the cell decomposition introduced
in~\cite{latombe2012robot}).
\begin{definition}[Cell Cover]
Given a safety set $S \subseteq \mathbb R^n$, a cell of $S$ is an open, convex
set that is a subset of $S$. A \emph{cell cover} of $S$ is a
collection $\mathcal{C} = \set{c_1, \ldots, c_N}$ of cells whose union
equals $S$. Cells $c, c' \in \mathcal{C}$ are \emph{adjacent} if and only
if $c \cap c'$ is non-empty.
\end{definition}
A \emph{channel} in $S$ is a finite sequence
$\seq{c_1, c_2, \ldots, c_N}$ of cells of $S$ such that $c_i$ and
$c_{i+1}$ are adjacent for all $1 \leq i < N$. It follows that
$\cup_{1 \leq i \leq N} c_i$ is a path-connected open set.
A $\mathcal{C}$-channel is a channel whose cells are in cell cover
$\mathcal{C}$.
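Finding a $\mathcal{C}$-channel is a graph search on the adjacency relation of the cover. The sketch below assumes cells are open hyper-rectangles given as (lower corner, upper corner) pairs; the representation and names are illustrative:

```python
from collections import deque

def adjacent(c1, c2):
    """Open boxes are adjacent iff their intersection is non-empty,
    i.e. the coordinate intervals overlap in every dimension."""
    (lo1, hi1), (lo2, hi2) = c1, c2
    return all(max(a, c) < min(b, d)
               for a, b, c, d in zip(lo1, hi1, lo2, hi2))

def channel(cells, start, goal):
    """Breadth-first search over the adjacency graph of the cover;
    returns a sequence of cell names from start to goal, or None."""
    queue, seen = deque([[start]]), {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for name, box in cells.items():
            if name not in seen and adjacent(cells[path[-1]], box):
                seen.add(name)
                queue.append(path + [name])
    return None
```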
Given a channel $\pi = \seq{c_1, \ldots, c_N}$, a multi-mode
system $\mathcal{H} = (M, n, R)$, start and target states
$\point{x}_s, \point{x}_t \in S$, we say that $\pi$ is a \emph{witness} to
reachability if the following linear program is feasible:
\begin{gather}
\bigexists_{0 \leq i \leq N} \point{x}_i \,.
\Big( \point{x}_s = \point{x}_0 \wedge \point{x}_t = \point{x}_N \Big) \wedge
\bigwedge_{1 \leq i < N} \Big(\point{x}_i \in (c_i \cap c_{i+1})\Big)
\,\wedge \label{eq4} \\
\bigexists_{1 \leq i \leq N, m \in M} t_i^{(m)} \,.
\bigwedge_{\substack{1 \leq i \leq N \\ m \in M}} \Big(t_i^{(m)} \geq 0\Big)
\wedge \bigwedge_{1
\leq i \leq N} \Big(\point{x}_i = \point{x}_{i-1} + \sum_{m\in M} R(m) \cdot
t_i^{(m)}\Big) \enspace.\nonumber
\end{gather}
\begin{lemma}
\label{witness}
If $S$ is an open safety set, there exists a finite $S$-safe
schedule that solves $\textsc{Reach}(\mathcal{H}, S, \point{x}_s, \point{x}_t)$ if and only if
$S$ contains a witness channel $\seq{c_1, c_2, \ldots, c_N}$ for some
$N \in \mathbb{N}$.
\end{lemma}
\begin{proof}
($\Leftarrow$) If $\seq{c_1, c_2, \ldots, c_N}$ is a witness channel,
then for $0 < i \leq N$, $\point{x}_{i-1}$ and $\point{x}_i$ are in $c_i$.
Theorem~\ref{thmcms} guarantees the existence of a $c_i$-safe
schedule for each $i$. The concatenation of these schedules is a
solution to $\textsc{Reach}(\mathcal{H}, S, \point{x}_s, \point{x}_t)$.
($\Rightarrow$)
The run of a finite schedule that solves
$\textsc{Reach}(\mathcal{H}, S, \point{x}_s, \point{x}_t)$ defines a closed, bounded subset $P$ of $S$.
Since $S$ is open, every point $\point{x} \in P$ is contained in a cell
of $S$.
Collectively, these cells form an open cover of $P$.
By compactness, then, there is a finite subcover of $P$.
If any element of the subcover is entered by the run more than once, there
exists another run that is contained in that cell between the first entry and
the last exit.
For such a run, if two elements of the subcover are entered at the same time,
the one with the earlier exit time is redundant.
Therefore, there is a subcover in which no two elements are entered by
the run of the schedule at the same time.
This subcover can be ordered according to the time at which the run enters
each cell to produce a sequence that satisfies the definition of witness
channel.
\qed
\end{proof}
\begin{lemma}
\label{generic-witness-to-c-witness}
If $S$ is an open safety set and $\mathcal{C}$ a cell cover of
$S$, there exists a witness channel for $\textsc{Reach}(\mathcal{H}, S, \point{x}_s, \point{x}_t)$
iff there exists a witness $\mathcal{C}$-channel.
\end{lemma}
\begin{proof}
One direction is obvious. Suppose therefore that there exists a
witness channel; let $\sigma$ be the finite schedule whose existence is
guaranteed by Lemma~\ref{witness}. The path that is traced in the MMS $\mathcal{H}$
when steered by $\sigma$ is a bounded closed subset $P$ of $S$
because it is the continuous image of a compact interval of the real
line. (The time interval in which $\mathcal{H}$ moves from $\point{x}_s$ to
$\point{x}_t$.) Since $\mathcal{C}$ is an open cover of $P$, there exists a
finite subset of $\mathcal{C}$ that covers $P$; specifically, there is an
irredundant finite subcover such that no two cells are entered at
the same time during the run of $\sigma$. This subcover can be
ordered according to entry time to produce a sequence of cells that
satisfies the definition of witness channel. \qed
\end{proof}
\begin{lemma}
\label{cell-decomposition-to-cover}
If $\mathcal{O}$ is a finite set of closed polytopes, then a finite cell
cover of the safety set $S$ is computable.
\end{lemma}
\begin{proof}
If $\mathcal{O}$ is a finite set of closed polytopes, one can apply the
vertical decomposition algorithm of~\cite{latombe2012robot} to
produce a cell \emph{decomposition}. Each cell $C$ of dimension less
than $n$ in this decomposition that is not contained in
the obstacles (and hence is entirely contained in $S$) is replaced
by a convex open set obtained as follows. Let $B$ be an
$n$-dimensional box around a point of $C$ that is contained in $S$. The
desired set is the convex hull of the vertices of $C$
and $B$. \qed
\end{proof}
\begin{proof}[of Theorem~\ref{thm:dec}]
Lemmas~\ref{witness}--\ref{generic-witness-to-c-witness}
imply that $\textsc{Reach}(\mathcal{H}, S, \point{x}_s, \point{x}_t)$ is decidable if a finite
cell cover of $S$ is available. If $\mathcal{O}$ is given as a finite set
of closed polytopes, each presented as a set of linear inequalities,
then Lemma~\ref{cell-decomposition-to-cover} applies. \qed
\end{proof}
\begin{algorithm}[t]
\KwIn{MMS $\mathcal{H} = (M, n, R)$, two points $\point{x}_s, \point{x}_t$, workspace $\mathcal{W}$,
obstacle set $\mathcal{O}$, and an upper bound $B$ on the number of cells in a cell cover. }
\KwOut{NO, if no safe schedule exists within the bound $B$, and otherwise
such a schedule.}
\BlankLine
$k \leftarrow 0$\;
\While{$k \leq B $}{
Check if the following formula is satisfiable:
\begin{eqnarray*}
\bigexists\limits_{1 \leq i \leq k} \point{x}_i && \bigexists\limits_{1 \leq i \leq k, m \in M}
t_i^{(m)} \text{ s.t. }
\left( \point{x}_s = \point{x}_1 \wedge \point{x}_t = \point{x}_k \right) \wedge
\bigwedge\limits_{\stackrel{1\leq i \leq k}{m \in M}} t_i^{(m)} \geq 0 \wedge \\
&
\bigwedge\limits_{i=2}^{k} & \left(\point{x}_i = \point{x}_{i-1} + \sum_{m\in M} R(m) \cdot
t_i^{(m)}\right) \wedge
\bigwedge\limits_{i=2}^{k} \textsc{ObstacleFree}(\point{x}_{i-1}, \point{x}_{i})
\end{eqnarray*}
\lIf{not satisfiable}{
$k \leftarrow k+1$
}
\uElse
{
Let $\sigma$ be an empty sequence\;
\For{$i = 1$ to $k-1$ }{
$\sigma = \sigma :: \textsc{Reach\_Convex}(\mathcal{H}, \point{x}_i, \point{x}_{i+1}, S)$}
\Return $\sigma$\;
}
}
\Return NO\;
\caption{\textsc{BoundedMotionPlan}($\mathcal{H}, \mathcal{W}, \mathcal{O}, \point{x}_s, \point{x}_t, B$)}
\label{alg:mainLoop}
\end{algorithm}
\begin{algorithm}[h!]
\caption{\label{algo:mms} $\textsc{Reach\_Convex}(\mathcal{H}, \point{x}_s, \point{x}_t, S)$}
\KwIn{MMS $\mathcal{H} = (M, n, R)$, two points $\point{x}_s, \point{x}_t$, convex, open, safety set $S$}
\KwOut{NO if no $S$-safe schedule from $\point{x}_s$ to $\point{x}_t$ exists and otherwise
such a schedule.}
$t_1 =
\min\limits_{m \in M} \max \set{\tau \::\: \point{x}_s + \tau \cdot R(m) \in S}$\;
$t_2 =
\min\limits_{m \in M} \max \set{\tau \::\: \point{x}_t + \tau \cdot R(m) \in S}$\;
$t_\text{safe} = \min \set{t_1, t_2}$\;
Check whether the following linear program is feasible:
\begin{align}
\point{x}_s + \sum_{m\in M} R(m) \cdot t^{(m)} = \point{x}_t
& \text{ and } &
t^{(m)} \geq 0 \text{ for all $m \in M$}~\label{eqn3}
\end{align}
\lIf{no satisfying assignment exists}{\Return NO}
\Else{
Find an assignment $\{t^{(m)}\}_{m\in M}$.
Set $l = \lceil (\sum_{m \in M} t^{(m)})/t_\text{safe}\rceil $.
{\bf return} the schedule $\seq{(m_k, t_k)}_{k=1}^{l \cdot |M|}$ where,
for an enumeration $M = \set{m_1, \ldots, m_{|M|}}$ of the modes,
\[
m_k = m_{((k-1) \bmod |M|) + 1} \text{ and } t_k = t^{(m_k)} / l
\text{ for $k = 1, 2, \ldots, l \cdot |M|$.}
\]
}
\end{algorithm}
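The interleaving step of $\textsc{Reach\_Convex}$ is easy to sketch: the dwell times are chopped into $l$ rounds so that no single timed action runs for longer than $t_\text{safe}$. The box safety set, dwell times, and helper names below are assumptions for illustration:

```python
import math

# Rates of the example MMS from the figure (an assumed instance).
RATES = {"m1": (1.0, 1.0), "m2": (0.0, -1.0), "m3": (-1.0, 1.0)}

def chop_schedule(dwell, t_safe):
    """Split the dwell times into l rounds, each playing every mode
    for dwell[m]/l, where l = ceil(sum(dwell)/t_safe)."""
    l = max(1, math.ceil(sum(dwell.values()) / t_safe))
    return [(m, dwell[m] / l) for _ in range(l) for m in dwell]

def simulate(x0, schedule, rates=RATES):
    """Replay the schedule and return all visited states."""
    states = [tuple(x0)]
    for m, t in schedule:
        states.append(tuple(x + t * r
                            for x, r in zip(states[-1], rates[m])))
    return states
```

For instance, steering from $(0.5, 0.5)$ to $(3.5, 0.5)$ inside the open box $(0,4)^2$ with dwell times $t^{(m_1)} = t^{(m_2)} = 3$ and $t_\text{safe} = 0.5$ yields $l = 12$ rounds, each of which nets a displacement of $(0.25, 0)$ while never leaving the box.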
The algorithm implicit in the proof of Theorem~\ref{thm:dec} requires
one to compute the cell cover in advance and enumerate
sequences of cells in order to decide reachability. We next present
an algorithm inspired by bounded model
checking~\cite{clarke2001bounded} that implicitly enumerates sequences
of cells of increasing length until the upper bound on the number of cells
is reached, or a safe schedule from the source point to the target
point is discovered. The key idea is to guess a sequence of points
$\point{x}_1, \ldots, \point{x}_N$ starting from the source point and ending in
the target point such that for every $1 \leq i < N$ the point
$\point{x}_{i+1}$ is reachable from $\point{x}_i$ using the rates provided by the
multi-mode system. Moreover, we need to check that the line segment
connecting $\point{x}_i$ and $\point{x}_{i+1}$ does not intersect any obstacle,
i.e.,
$\bigforall_{0 \leq \lambda \leq 1} (\lambda \point{x}_i + (1-\lambda) \point{x}_{i+1}) \not \in \cup_{j = 1}^{k} \mathcal{O}_j$.
We write $\textsc{ObstacleFree}(\point{x}_i, \point{x}_{i+1})$ for this condition.
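Since each obstacle is an intersection of half-spaces $A\point{x} \leq \vec{b}$, the universal quantifier over $\lambda$ can be eliminated exactly: substituting $\point{x}_i + \lambda(\point{x}_{i+1} - \point{x}_i)$ into a row of $A$ yields a one-variable bound on $\lambda$, and the segment meets the obstacle iff the resulting bounds leave a common point in $[0,1]$. A sketch of this check for fixed endpoints (the encoding and function names are illustrative):

```python
def segment_hits_polytope(p, q, A, b):
    """True iff some point p + lam*(q-p) with lam in [0,1] satisfies
    A x <= b.  This is one-variable Fourier-Motzkin elimination:
    each row contributes a lower or an upper bound on lam."""
    lo, hi = 0.0, 1.0
    for row, bi in zip(A, b):
        c0 = sum(r * pj for r, pj in zip(row, p))              # A_j . p
        c1 = sum(r * (qj - pj) for r, pj, qj in zip(row, p, q))
        if abs(c1) < 1e-12:
            if c0 > bi:        # this row is violated for every lam
                return False
        elif c1 > 0:
            hi = min(hi, (bi - c0) / c1)
        else:
            lo = max(lo, (bi - c0) / c1)
    return lo <= hi

def obstacle_free(p, q, obstacles):
    """ObstacleFree(p, q): the segment avoids every polytope."""
    return not any(segment_hits_polytope(p, q, A, b)
                   for A, b in obstacles)
```

For the L-shaped arena of Figure~\ref{fig:l-shaped}(b), the diagonal segment from $\point{x}_s$ to $\point{x}_t$ hits $\mathcal{O}_1$, while the segment hugging the bottom wall does not.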
Algorithm~\ref{alg:mainLoop} sketches a bounded-step algorithm to
decide reachability for multi-mode systems; by Theorem~\ref{thm:dec}, it
always terminates for multi-mode systems whose closed obstacles are
defined by linear inequalities.
Notice that at line~$2$ of Algorithm~\ref{alg:mainLoop} we need to check the feasibility of
the constraint system, which is of the form $\exists X \forall Y F(X, Y)$, where
the universal quantification is implicit in the test $\textsc{ObstacleFree}$.
If the solver we use has full support for the
$\forall$ quantifier, we can apply it directly to the above constraint. In our
experiments, we used the Z3 solver (\url{https://github.com/Z3Prover/z3})
to implement Algorithm~\ref{alg:mainLoop} and found that the
solver was unable to solve some instances in this form.
Fortunately, the universal quantification in our constraints has a very special
form and can easily be removed using the Fourier-Motzkin elimination procedure,
which results in quadratic constraints that are efficiently solvable by the Z3
solver.
In Section~\ref{sec:experiments} we present the experimental results on some
benchmarks to demonstrate scalability.
\section{Undecidability}
\label{sec:undec}
In this section we give a sketch of the proof of the following undecidability
result.
\begin{theorem}
\label{thm:undec}
Given a constant-rate multi-mode system $\mathcal{H}$, convex workspace
$\mathcal{W}$, obstacles set $\mathcal{O}$, start state $\point{x}_s$ and target state $\point{x}_t$,
the reachability problem $\textsc{Reach}(\mathcal{H}, S_{\mathcal{W} \backslash \mathcal{O}}, \point{x}_s, \point{x}_t)$ is
in general undecidable.
\end{theorem}
\begin{proof}{(Sketch.)}
We prove the undecidability of this problem by a reduction from
the halting problem for two-counter machines, which is known to be
undecidable~\cite{Min67}.
Given a two-counter machine $\mathcal{A}$ with instructions $L = \ell_1$, $\dots$,
$\ell_{n-1}, \ell_{halt}$, we construct a multi-mode system $\mathcal{H}_\mathcal{A}$ along
with a non-convex safety set $S_{\mathcal{W} \backslash \mathcal{O}}$ characterized using linear constraints.
The idea is to simulate the unique run of the two-counter machine $\mathcal{A}$ via the unique
safe schedule of the MMS $\mathcal{H}_\mathcal{A}$, going through a sequence of modes such
that a pre-specified target point is reachable iff the counter machine halts.
\noindent \textbf{Modes}. For every increment/decrement instruction $\ell_i$ of the counter machine we
have two modes $\mathcal{M}_i$ and $\mathcal{M}_{ik}$, where $k$ is the index of the unique
instruction $\ell_k$ to which the control shifts in $\mathcal{A}$ from $\ell_i$.
For every zero check instruction $\ell_i$, we have four modes $\mathcal{M}^1_i,
\mathcal{M}^2_i, \mathcal{M}_{ik}$ and $\mathcal{M}_{im}$, where $k,m$ are respectively the indices of
the unique instructions $\ell_k, \ell_m$ to which the control shifts from
$\ell_i$ depending on whether the counter value is $>0$ or $=0$.
There are three modes $\mathcal{M}_{halt}, \mathcal{M}^{c_1}_{halt}$ and $\mathcal{M}^{c_2}_{halt}$
corresponding to the halt instruction.
We have a special ``initial'' mode $\mathcal{I}$ which is the first mode to be
applied in any safe schedule.
\noindent \textbf{Variables}. The MMS $\mathcal{H}_\mathcal{A}$ has two variables $c_1, c_2$ that store the values
of the two counters.
There is a single variable $s_0$ used to enforce that mode $\mathcal{I}$
is the first mode applied.
For every increment or decrement instruction $\ell_i$,
there are variables $w_{ij}, x_{ij}$, where $j$ is the index of the unique
instruction
$\ell_j$ to which control shifts from $\ell_i$.
Finally, we define a variable $z_{i\#}$ for each zero-check instruction $\ell_i$.
\noindent \textbf{Simulation}. A simulation of the two-counter machine going through instructions
$\ell_0$, $\ell_1$, $\ell_2$, $\dots, \ell_y, \ell_{halt}$ is achieved by going through modes
$\mathcal{I}, \mathcal{M}_{0}, \mathcal{M}_{01}$, then $\mathcal{M}_1$ (or $\mathcal{M}^1_1$ or $\mathcal{M}^2_1$), $\dots, \mathcal{M}_y, \mathcal{M}_{y~halt}$, in order, spending exactly one unit of time in each mode.
Starting from a point $\point{x}_s$ with $s_0=1$ and $v=0$ for all variables $v$ other than $s_0$, we
want to reach a point $\point{x}_t$ where $w_{halt}=1$ and
$v=0$ for all variables $v$ other than $w_{halt}$.
The idea is to start in mode $\mathcal{I}$ and spend one unit of time there,
obtaining $s_0=0, w_{01}=1$
(spending any amount of time other than one violates safety, see Lemma~\ref{one}).
Growing $w_{01}$ represents that the current instruction is $\ell_0$, and the next one is $\ell_1$.
Next, we shift to mode $\mathcal{M}_0$,
spend one unit of time there to obtain $x_{01}=1, w_{01}=0$.
This is followed by
mode $\mathcal{M}_{01}$, where $x_{01}$ becomes 0, and one of the variables $z_{1\#}, w_{12}$ attain 1, depending on whether
$\ell_1$ is a zero check instruction or not (again, spending a time other than
one in $\mathcal{M}_0, \mathcal{M}_{01}$ violates safety, see Lemma~\ref{two}).
In general, while at a mode $\mathcal{M}_{ij}$, the next instruction
$\ell_k$ after $\ell_j$ is chosen by ``growing'' the variable $w_{jk}$ if $\ell_j$ is not a zero-check instruction, or
by ``growing'' the variable $z_{j\#}$ if $\ell_j$ is a zero-check instruction.
In parallel, $x_{ij}$ decreases to 0, so that the invariant $x_{ij}+w_{jk}=1$ or $x_{ij}+z_{j\#}=1$ is maintained.
This way of choosing modes, together with the requirement that one unit of time be spent in each mode,
is necessary to adhere to the safety set, as can be seen from Lemmas~\ref{two} and~\ref{three}.
\begin{itemize}
\item In the former case, the control shifts from $\mathcal{M}_{ij}$ to mode
$\mathcal{M}_j$ where variable $x_{jk}$ grows at rate 1 while $w_{jk}$ grows at rate -1, so that
$x_{jk}+w_{jk}=1$. Control shifts from $\mathcal{M}_j$ to $\mathcal{M}_{jk}$, where the next instruction $\ell_g$ after $\ell_k$ is chosen
by growing the variable $w_{kg}$ if $\ell_k$ is not a zero-check instruction, or
the variable $z_{k\#}$ if $\ell_k$ is a zero-check instruction.
\item In the latter case, one of the modes $\mathcal{M}^1_j,\mathcal{M}^2_j$ is chosen from $\mathcal{M}_j$ where
$z_{j\#}$ grows at rate -1.
Assume $\ell_j$ is the instruction ``If the counter value is $>0$, then goto $\ell_m$, else goto $\ell_h$".
If $\mathcal{M}^1_j$ is chosen, then the variable $x_{jm}$ grows at rate 1
while if $\mathcal{M}^2_j$ is chosen, then the variable $x_{jh}$ grows at rate 1.
In this case, we have $z_{j\#}+x_{jm}=1$ or $z_{j\#}+x_{jh}=1$.
From $\mathcal{M}^1_j$, control shifts to $\mathcal{M}_{jm}$, while from
$\mathcal{M}^2_j$, control shifts to $\mathcal{M}_{jh}$.
\end{itemize}
Continuing in the above fashion, we eventually reach mode $\mathcal{M}_{y~halt}$
where $x_{y~halt}$ decreases to 0 while the variable $w_{halt}$ grows to 1, so that
$x_{y~halt}+w_{halt}=1$ (see Lemma~\ref{last}, which enforces this).
Starting from $\point{x}_s$---which lies in the hyperplane $H_0$
given as $s_0+w_{0j}=1$ where $\ell_j$ is the unique instruction following
$\ell_0$---a safe execution stays in $H_0$ as long as control stays in the initial mode
$\mathcal{I}$.
Control then switches to mode $\mathcal{M}_0$, moving to the hyperplane $H_1$ given by
$w_{0j}+x_{0j}=1$. Note that $H_0 \cap H_1$ is non-empty; the two hyperplanes intersect at the
point where $w_{0j}=1$ and all other variables are $0$.
Spending a unit of time at $\mathcal{M}_0$, control switches to mode $\mathcal{M}_{0j}$, and
to the hyperplane $H_2$ given by $x_{0j}+w_{jk}=1$, assuming that
$\ell_j$ is not a zero-check instruction.
Again, note that $H_1 \cap H_2$ is non-empty; the two intersect
at the point where $c_1=1$, $x_{0j}=1$, and all other variables are zero.
This continues, and we obtain a safe transition from hyperplane $H_i$ to
$H_{i+1}$ as dictated by the simulation of the two-counter machine.
The sequence of safe hyperplanes leads to the hyperplane $H_{last}$, given by
$w_{halt}=1$ and all other variables $0$, iff the two-counter machine halts.
Appendix~\ref{app:undec-eg}
gives an example of the reduction
from a two-counter machine.
\qed
\end{proof}
\section{Experimental Results}
\label{sec:experiments}
In this section, we discuss some preliminary results obtained with an
implementation of Algorithm~\ref{alg:mainLoop}.
In order to show the competitiveness of the proposed algorithm, we compare its
performance with a popular implementation of the RRT algorithm~\cite{Lav06} on a
collection of micro-benchmarks (some of which are inspired by~\cite{saha}).
\subsection{Experimental Setup}
Rapidly-exploring Random
Tree (RRT)~\cite{Lav06} is a space-filling data structure that is used to search a
region by incrementally building a tree.
It is constructed by selecting random points in the state space and can provide
better coverage of reachable states of a system than mere simulations.
There are many versions of RRTs available; we use the \emph{Open Motion Planning
Library (OMPL)} implementation of RRT for our experiments. The
OMPL library (\url{http://ompl.kavrakilab.org})
contains many
state-of-the-art, sampling-based motion-planning algorithms, and we used the RRT API it provides.
The results for RRT were obtained with the goal bias parameter set to $0.05$ and obstacles implemented
as a \texttt{StateValidityCheckerFunction()} as described in the documentation~\cite{sucan2012the-open-motion-planning-library}.
We implemented our algorithm on top of the Z3
solver~\cite{DeMoura:2008:ZES:1792734.1792766}. The
implementation encodes the formulae in first-order logic over the reals and checks
for a satisfying assignment. Our algorithm was implemented in Python 2.7; the OMPL implementation is in C++.
The experiments with Algorithm~\ref{alg:mainLoop}
and RRT were performed on a computer running Ubuntu 14.10
with an Intel Core i7-4510 2.00 GHz quad-core CPU and 8 GB RAM.
We compared the two algorithms by executing them on a
set of micro-benchmarks whose
obstacles are hyper-rectangular, though our
algorithm can handle general polyhedral obstacles.
\begin{figure}[t]
\begin{center}
\begin{tikzpicture}
\draw (0,0) rectangle (7, 4);
\filldraw[fill=black!40!white, draw=black] (1, 0) rectangle(2, 3.5);
\filldraw[fill=black!40!white, draw=black] (2.5, 4) rectangle(3.5, 0.5);
\filldraw[fill=black!40!white, draw=black] (4, 0) rectangle(5, 3.5);
\filldraw[fill=black!40!white, draw=black] (5.5, 4) rectangle(6.5, 0.5);
\node [fill, draw, circle, minimum width=3pt, inner sep=0pt, pin={[fill=white, outer sep=2pt]225:$\point{x}_s$}] at (0.2,0.1) {};
\node [fill, draw, circle, minimum width=3pt, inner sep=0pt, pin={[fill=white, outer sep=2pt]135:$\point{x}_t$}] at (6.9,3.9) {};
\draw [black!30, thick] (0.2, 0.1) -- (0.2, 3.9);
\node [fill, draw, circle, minimum width=3pt, inner sep=0pt, pin={[fill=white, outer sep=2pt]135:$\point{x}_1$}] at (0.2,3.9) {};
\draw [black!30, thick] (0.2, 3.9) -- (2.2, 3.9);
\node [fill, draw, circle, minimum width=3pt, inner sep=0pt, pin={[fill=white, outer sep=2pt]135:$\point{x}_2$}] at (2.2,3.9) {};
\draw [black!30, thick] (2.2, 0.2) -- (2.2, 3.9);
\node [fill, draw, circle, minimum width=3pt, inner sep=0pt, pin={[fill=white, outer sep=2pt]225:$\point{x}_3$}] at (2.2,0.2) {};
\draw [black!30, thick] (2.2, 0.2) -- (3.7, 0.2);
\node [fill, draw, circle, minimum width=3pt, inner sep=0pt, pin={[fill=white, outer sep=2pt]225:$\point{x}_4$}] at (3.7,0.2) {};
\draw [black!30, thick] (3.7, 3.9) -- (3.7, 0.2);
\node [fill, draw, circle, minimum width=3pt, inner sep=0pt, pin={[fill=white, outer sep=2pt]135:$\point{x}_5$}] at (3.7,3.9) {};
\draw [black!30, thick] (3.7, 3.9) -- (5.2, 3.9);
\node [fill, draw, circle, minimum width=3pt, inner sep=0pt, pin={[fill=white, outer sep=2pt]135:$\point{x}_6$}] at (5.2,3.9) {};
\draw [black!30, thick] (5.2, 0.2) -- (5.2, 3.9);
\node [fill, draw, circle, minimum width=3pt, inner sep=0pt, pin={[fill=white, outer sep=2pt]225:$\point{x}_7$}] at (5.2,0.2) {};
\draw [black!30, thick] (6.9, 0.2) -- (5.2, 0.2);
\node [fill, draw, circle, minimum width=3pt, inner sep=0pt, pin={[fill=white, outer sep=2pt]225:$\point{x}_8$}] at (6.9,0.2) {};
\draw [black!30, thick] (6.9, 0.2) -- (6.9, 3.9);
\node at (1.5, 1.75) {$\mathcal{O}_1$};
\node at (3, 1.75) {$\mathcal{O}_2$};
\node at (4.5, 1.75) {$\mathcal{O}_3$};
\node at (6, 1.75) {$\mathcal{O}_4$};
\end{tikzpicture}
\begin{tikzpicture}
\draw (0,0) rectangle (4, 4);
\filldraw[fill=black!40!white, draw=black](0.5, 3) rectangle (3.5, 3.5);
\filldraw[fill=black!40!white, draw=black](0.5, 1) rectangle (1, 3);
\filldraw[fill=black!40!white, draw=black](0.5, 0.5) rectangle (3.5, 1);
\filldraw[fill=black!40!white, draw=black](1.25, 1.25) rectangle (3.5, 1.5);
\filldraw[fill=black!40!white, draw=black](1.25, 2.5) rectangle (3.5, 2.75);
\filldraw[fill=black!40!white, draw=black](3.25, 1.5) rectangle (3.5, 2.5);
\filldraw[fill=black!40!white, draw=black](2, 1.65) rectangle (3, 1.85);
\filldraw[fill=black!40!white, draw=black](2, 2.15) rectangle (3, 2.3);
\filldraw[fill=black!40!white, draw=black](2, 1.85) rectangle (2.15, 2.15);
\node [fill, draw, circle, minimum width=3pt, inner sep=0pt, pin={[fill=white, outer sep=2pt]225:$\point{x}_s$}] at (0.1,0.1) {};
\draw [black!30, thick] (0.1, 0.1) -- (3.9, 0.1);
\node [fill, draw, circle, minimum width=3pt, inner sep=0pt, pin={[fill=white, outer sep=2pt]225:$\point{x}_1$}] at (3.9,0.1) {};
\draw [black!30, thick] (3.9, 0.1) -- (3.9, 2.85);
\node [fill, draw, circle, minimum width=3pt, inner sep=0pt, pin={[fill=white, outer sep=2pt]355:$\point{x}_2$}] at (3.9,2.85) {};
\draw [black!30, thick] (3.9, 2.85) -- (1.15, 2.85);
\node [fill, draw, circle, minimum width=3pt, inner sep=0pt, pin={[outer sep=-2pt]120:$\point{x}_3$}] at (1.15,2.85) {};
\draw [black!30, thick] (1.15, 1.6) -- (3.075, 1.6);
\node [fill, draw, circle, minimum width=3pt, inner sep=0pt, pin={[outer sep=-3pt]225:$\point{x}_4$}] at (1.15,1.6) {};
\draw [black!30, thick] (1.15, 1.6) -- (1.15, 2.85);
\node [fill, draw, circle, minimum width=3pt, inner sep=0pt, pin={[outer sep=-3pt]355:$\point{x}_5$}] at (3.075,1.6) {};
\draw [black!30, thick] (3.075, 1.6) -- (3.075, 2);
\node [fill, draw, circle, minimum width=3pt, inner sep=0pt, pin={[outer sep=-3pt]355:$\point{x}_6$}] at (3.075,2) {};
\draw [black!30, thick] (3.075, 2) -- (2.3, 2);
\node [fill, draw, circle, minimum width=3pt, inner sep=0pt, pin={[outer sep=-3pt]180:$\point{x}_t$}] at (2.3,2) {};
\end{tikzpicture}
\begin{tikzpicture}
\draw (0,0) rectangle (4, 4);
\filldraw[fill=black!40!white, draw=black] (0.15, 1) rectangle(3.75, 0.25);
\filldraw[fill=black!40!white, draw=black] (3, 3.95) rectangle(3.75, 1.05);
\node [fill, draw, circle, minimum width=3pt, inner sep=0pt, pin={[fill=white, outer sep=2pt]180:$\point{x}_s$}] at (2.85,3.7) {};
\node [fill, draw, circle, minimum width=3pt, inner sep=0pt, pin={[fill=white, outer sep=2pt]0:$\point{x}_t$}] at (3.85,3.7) {};
\node at (2, .6) {$\mathcal{O}_1$};
\node at (3.38, 2) {$\mathcal{O}_2$};
\node at (0, -0.6) {};
\node at (-1, 0) {};
\end{tikzpicture}
\end{center}
\caption{a) Snake-shaped arena with four obstacles (left), b) Maze-shaped
arena with three $C$-shaped patterns (middle) and c) modified L-shaped
arena (right).}
\label{fig:modified_Lshaped}
\label{fig:maze-shaped}
\end{figure}
\begin{table}[t]
\begin{center}
\begin{tabular}{ | c | c | c | c | c | c |}
\hline
\multirow{2}{*}{\textbf{Dimension}} & \multirow{2}{*}{\textbf{Arena Size}} &
\multicolumn{2}{| c |}{~~~~~\textbf{OMPLRRT}~~~~~} & \multicolumn{2}{| c |}{~~~~~\textbf{BoundedMotionPlan}~~~~~} \\
\cline{3-6}
& & Time(s) & Nodes & Time(s) & Witness Length \\
\hline
2 & 100 $\times$ 100 & 0.011 & 8 & 0.012 & 2 \\
\hline
2 & 1000 $\times$ 1000 & 0.076 & 245 & 0.012 & 2 \\
\hline
3 & 100 $\times$ 100 & 0.107 & 4836 & 0.183 & 2 \\
\hline
3 & 1000 $\times$ 1000 & 1.9 & 1800 & 0.19 & 2 \\
\hline
4 & 100 $\times$ 100 & 1.2 & 612 & 0.201 & 2 \\
\hline
4 & 1000 $\times$ 1000 & 94.39 & 2857 & 0.206 & 2 \\
\hline
5 & 100 $\times$ 100 & 3.12 & 778 & 2.69 & 2 \\
\hline
5 & 1000 $\times$ 1000 & 149.4 & 2079 & 2.68 & 2 \\
\hline
6 & 1000 $\times$ 1000 & 105 & 3822 & 15.3 & 2 \\
\hline
7 & 1000 $\times$ 1000 & 319.63 & 2639 & 190.3 & 2 \\
\hline
\end{tabular}
\end{center}
\caption{Summary of results for the $L$-shaped arena}
\label{ellShaped}
\end{table}
We considered the following microbenchmarks.
\begin{itemize}
\item \textbf{L-shaped arena.}
This class of microbenchmarks contains examples with a hyper-rectangular
workspace and certain ``L''-shaped obstacles as shown in
Figure~\ref{fig:l-shaped}.
The initial vertex is the lower left vertex of the square ($\point{x}_s$) and the
target is the right upper vertex of the square ($\point{x}_t$).
Our algorithm solves this problem with bound $B = 2$,
returning the sequence $\seq{\point{x}_1, \point{x}, \point{x}_t}$ shown in the figure.
The \textsc{Rrt} algorithm, in contrast, samples mostly points that lie
on the other side of the obstacles; if the control modes are not in the
direction of the line segments $\point{x}_1 \point{x}$ and $\point{x}\px_t$,
the tree grows in arbitrary directions and hits the obstacles repeatedly,
wasting a large number of iterations and slowing its growth.
We experimented with L-shaped examples for dimensions ranging from $2$ to
$7$. In most of the cases, we found that the performance of
the \textsc{BoundedMotionPlan} algorithm was better than that of
OMPLRRT. Another important point is that RRT and other simulation-based
algorithms do not perform well as the input size increases, as can be
clearly seen from the running times obtained on increasing arena
sizes in Table~\ref{ellShaped}. Our algorithm also worked better than RRT
in higher dimensions ($\geq 3$).
\item \textbf{Snake-shaped arena.}
The name comes from the serpentine appearance of the safe sets in these arenas.
The motivation to study these microbenchmarks comes from motion planning
problems in regular environments.
The arena has rectangular obstacles coming from the top and the
bottom (as shown in Figure~\ref{fig:maze-shaped} for two dimensions)
alternately.
The starting point is the lower left vertex $\point{x}_s$ and the target point is
$\point{x}_t$.
A sample free-path through the arena is also shown in the figure.
The \textsc{Rrt} algorithm performs well in lower dimensions but fails to
terminate in higher dimensions.
The results for this class of obstacles are summarised in Table~\ref{snakeShaped}. Experiments
were performed for up to 3 dimensions and 4 obstacles.
\item \textbf{Maze-shaped arena.}
These benchmarks mimic the motion planning situations where the task of the
robot is to navigate through a maze.
We model a maze using finitely many concentric ``C''-shaped obstacles with
different orientations as shown
in Figure~\ref{fig:maze-shaped}.
The task is to navigate from the lower left outer corner to the
center point of the square.
This kind of arena seems to be particularly challenging for the RRT algorithm
and the growth of the tree seems to be quite slow.
Also, the performance of our tool degrades as the bound increases due to an
increase in the number of constraints; hence, these examples
require more time than the other two microbenchmarks.
However, as shown in Table~\ref{mazeShaped}, \textsc{OmplRrt} and
\textsc{BoundedMotionPlan}
perform almost equally well, with the latter being slightly better.
\item \textbf{Modified L-shaped obstacles.}
This set of microbenchmarks contains a hyper-rectangular workspace and two
hyper-rectangular obstacles arranged in an ``L''-shaped fashion as shown in
Figure~\ref{fig:modified_Lshaped}. The initial vertex lies very close to one
of the obstacles, and the target vertex lies very close to the start
vertex but on the other side of the obstacle. Our algorithm solves
this problem with bound $B = 3$, while the \textsc{Rrt} algorithm
spends its time sampling from the bigger obstacle-free part of the arena. The
results are summarised in Table~\ref{modifiedEll}.
\end{itemize}
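All of the microbenchmarks above use axis-aligned hyper-rectangular obstacles, for which checking that a candidate witness segment stays in the safe set reduces to a segment-versus-box intersection test. The following sketch is not our actual implementation; the function name and the use of the classical slab method are illustrative choices, but the check works in any dimension:

```python
def segment_hits_box(p, q, lo, hi, eps=1e-9):
    """Check whether the segment p -> q intersects the axis-aligned
    box [lo, hi] (slab method, works in any dimension)."""
    t0, t1 = 0.0, 1.0  # parameter interval of the segment inside all slabs
    for a, b, l, h in zip(p, q, lo, hi):
        d = b - a
        if abs(d) < eps:
            if a < l or a > h:  # parallel to this slab and outside it
                return False
            continue
        ta, tb = (l - a) / d, (h - a) / d
        if ta > tb:
            ta, tb = tb, ta
        t0, t1 = max(t0, ta), min(t1, tb)
        if t0 > t1:            # the slab intervals no longer overlap
            return False
    return True
```

A witness $\seq{\point{x}_s,\point{x}_1,\ldots,\point{x}_t}$ is then safe if no consecutive segment hits any obstacle box.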
\begin{table}[t]
\begin{center}
\begin{tabular}{ | c | c | c | c | c | c | c | c |}
\hline
\multirow{2}{*}{\textbf{Dim.}} & \multirow{2}{*}{\textbf{Arena Size}} & \multirow{2}{*}{\textbf{Obstacles}} & \multicolumn{3}{| c |}{\textbf{OMPLRRT}} & \multicolumn{2}{| c |}{\textbf{BoundedMotionPlan}} \\ \cline{4-8}
& & & Time(s) & Nodes & Nodes in Path & Time(s) & Witness Length \\
\hline
2 & 350 $\times$ 350 & 3 & 3.56 & 13142 & 72 & 2.54 & 4 \\
\hline
2 & 350 $\times$ 350 & 4 & 4.12 & 15432 & 96 & 4.23 & 5 \\
\hline
2 & 3500 $\times$ 3500 & 3 & 4.79 & 15423 & 83 & 2.57 & 4 \\
\hline
3 & 350 $\times$ 350 & 3 & 102.3 & 86314 & 67 & 96.43 & 4 \\
\hline
3 & 3500 $\times$ 3500 & 3 & 100.22 & 1013 & 27 & 96.42 & 4 \\
\hline
\end{tabular}
\end{center}
\caption{Summary of results for the snake-shaped arena}
\label{snakeShaped}
\end{table}
\begin{table}[t]
\begin{center}
\begin{tabular}{ | c | c | c | c | c | c | c | c | }
\hline
\multirow{2}{*}{\textbf{Dim.}} & \multirow{2}{*}{\textbf{Arena Size}} & \multirow{2}{*}{\textbf{Obstacles}} & \multicolumn{3}{| c |}{\textbf{OMPLRRT}} & \multicolumn{2}{| c |}{\textbf{BoundedMotionPlan}} \\ \cline{4-8}
& & & Time(s) & Nodes & Nodes in Path & Time(s) & Bound \\
\hline
2 & $600 \times 600$ & 2 & 1.8 & 9500 & 60 & 1.3 & 4 \\
\hline
2 & $6000 \times 6000$ & 3 & 23.5 & 11256 & 78 & 45.23 & 5 \\
\hline
3 & $600 \times 600$ & 2 & 132.6 & 90408 & 71 & 120.3 & 5 \\
\hline
3 & $6000 \times 6000$ & 3 & 1002.6 & 183412 & 93 & 953.4 & 5 \\
\hline
\end{tabular}
\end{center}
\caption{Summary of results for the maze-shaped arena}
\label{mazeShaped}
\end{table}
\begin{table}[h]
\begin{center}
\centering
\begin{tabular}{ | c | c | c | c | c | c | c | }
\hline
\textbf{Dimension } & \textbf{Arena Size} & \multicolumn{3}{| c |}{\textbf{OMPLRRT}} & \multicolumn{2}{| c |}{\textbf{BoundedMotionPlan}} \\
\hline
& & Time(s) & Nodes & Nodes in Path & Time(s) & Bound \\
\hline
2 & $100 \times 100$ & 0.445 & 27387 & 40 & 0.126 & 3 \\
\hline
2 & $1000 \times 1000$ & 2.57 & 38612 & 47 & 0.132 & 3 \\
\hline
3 & $100 \times 100$ & 115.23 & 57645 & 71 & 92.1 & 3 \\
\hline
3 & $1000 \times 1000$ & 675.62 & 183412 & 93 & 95.23 & 3 \\
\hline
4 & $100 \times 100$ & 287.32 & 64230 & 65 & 283.23 & 3 \\
\hline
4 & $1000 \times 1000$ & 923.45 & 192453 & 78 & 292.53 & 3 \\
\hline
5 & $100 \times 100$ & 523.62 & 73422 & 69 & 534.45 & 3 \\
\hline
5 & $1000 \times 1000$ & 1043 & 223900 & 72 & 533.96 & 3\\
\hline
\end{tabular}
\end{center}
\caption{Summary of results for the modified L-shaped obstacles}
\label{modifiedEll}
\end{table}
The microbenchmarks presented above involve situations where
the target point is reachable from the source point.
It is interesting to see the performance of the two algorithms in cases where
there is no path from the source to the target point.
In the cases when an upper bound on the size of a cell decomposition can be
imposed, our algorithm is capable of producing a negative answer.
Table~\ref{negtab} summarizes the performance of \textsc{OmplRrt}
and \textsc{BoundedMotionPlan} for $L$-shaped arenas when the target
point is not reachable. The timeout for
RRT was set to 500 seconds, and, as expected, it did not terminate before
the timeout. On the other hand, \textsc{BoundedMotionPlan} performed well, with running times close to those when the target point is reachable.
\begin{table}[t]
\begin{center}
\centering
\begin{tabular} { | c | c | c | c |}
\hline
\multirow{2}{*}{~~~~\textbf{Dimension}~~~~} & \multicolumn{2}{| c |}{~~~~~~~~~~\textbf{OMPLRRT}~~~~~~~~~} & {~~~~\textbf{BoundedMotionPlan}~~~~} \\ \cline{2-4}
& Time(s) & Nodes & Time(s) \\ \hline
2 & 500 (TO) & 5301778 & 0.0088 \\ \hline
3 & 500 (TO) & 7892122 & 0.032 \\ \hline
4 & 500 (TO) & 4325621 & 0.056 \\ \hline
5 & 500 (TO) & 5624609 & 2.73 \\ \hline
6 & 500 (TO) & 4992951 & 18.34 \\ \hline
7 & 500 (TO) & 3765123 & 213.23 \\ \hline
\end{tabular}
\end{center}
\caption{Summary of results for the unreachable L-shaped obstacles.}
\label{negtab}
\end{table}
\noindent\textbf{Discussion}.
Our implementation of \textsc{BoundedMotionPlan}, even though
preliminary, compares favorably with a state-of-the-art implementation
of \textsc{Rrt}. \textsc{BoundedMotionPlan}, in addition, can naturally deal
with restrictions on the dynamics of the MMS, that is, with systems
such that the positive linear span of the mode vectors is not
$\mathbb R^n$.
A trend observed in our experiments is that if a large fraction of the
arena is covered by obstacles, then the probability of a randomly sampled
point lying in the obstacle region is high, which renders RRT ineffective
by wasting a large number of iterations.
Another trend is that as the arena size increases, it becomes more
difficult for RRT to navigate to the destination points, even with higher
values of the goal bias.
Our algorithm performs best when it terminates early
(target reachable from the source with short witnesses), while its
performance degrades as the bound or the dimension increases, since the
number of constraints introduced by the Fourier-Motzkin-like procedure
implemented in our algorithm grows exponentially with the dimension,
exhibiting the curse of dimensionality.
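To illustrate the source of this blow-up, here is a minimal sketch of classical Fourier-Motzkin elimination of a single variable from a system of linear inequalities (our procedure is only Fourier-Motzkin-like; the constraint representation below is an assumption made for illustration). Eliminating one variable pairs every lower bound with every upper bound, so repeated elimination can square the number of constraints at every step:

```python
def fm_eliminate(constraints, j):
    """Eliminate variable j from a list of constraints, where each
    constraint (coeffs, bound) encodes sum_i coeffs[i]*x[i] <= bound."""
    pos, neg, rest = [], [], []
    for c, b in constraints:
        (pos if c[j] > 0 else neg if c[j] < 0 else rest).append((c, b))
    out = list(rest)
    # Every (upper bound on x_j) x (lower bound on x_j) pair yields a new
    # constraint: |pos| * |neg| of them, driving the growth over repeated
    # eliminations.
    for cp, bp in pos:
        for cn, bn in neg:
            s, t = cp[j], -cn[j]
            c = [t * a + s * a2 for a, a2 in zip(cp, cn)]
            c[j] = 0.0  # coefficient of x_j cancels exactly
            out.append((c, t * bp + s * bn))
    return out
```

For example, eliminating $x_0$ from $\{x_0 + x_1 \le 3,\ -x_0 \le -1\}$ leaves the single constraint $x_1 \le 2$.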
\section{Conclusion}
\label{sec:conclusion}
In this paper we studied the motion planning problem for constant-rate
multi-mode systems with non-convex safety sets specified by a set of
convex obstacles.
We showed that while the general problem is undecidable even in this simple
setting of linearly defined obstacles, decidability can be recovered by making
appropriate assumptions on the obstacles.
Moreover, our algorithm performs satisfactorily when compared to well-known
algorithms for motion planning, and can easily be adapted to provide
semi-algorithms for motion-planning problems for objects with polyhedral
shapes.
While the algorithm is complete for classes of safety sets for which a
bound on the size of a cell cover can be effectively computed,
bounds based on cell decompositions of the safety set may be too large to
be of practical use. This situation is akin to that encountered in
bounded model checking of finite-state systems, in which bounds based
on the radius of the state graph are usually too large. We are
therefore motivated to look at extensions of the algorithm that
incorporate practical termination checks.
\section{Introduction}
The theory of algorithmic randomness \cite{Downey} tries to explain what kind of properties make an individual element of a sample space appear random. Mostly the theory deals with infinite binary sequences. One of the first works in the field can be attributed to Borel \cite{Borel} with his notion of normal numbers. The emergence of computability theory made it possible to formalize the notion of randomness with respect to certain classes of algorithms. Loosely speaking, an object is considered random when any kind of algorithm fails to find sufficient patterns in its structure. There are many kinds of randomness definitions within the theory; however, most of them arise from three paradigms of randomness:
\begin{enumerate}
\item Unpredictability
\item Incompressibility
\item Measure theoretical typicalness
\end{enumerate}
Each paradigm has its own tools to explore randomness. In relation to the unpredictability paradigm, different kinds of effective martingales are considered \cite{Schnorr1971}. The incompressibility paradigm, on the other hand, works with a notion of complexity, the most famous example being Kolmogorov complexity \cite{Kolmogorov}. Finally, the measure-theoretic typicalness paradigm tries to effectivize the notion of a nullset in the measure-theoretic sense, the most prominent example being Martin-L\"{o}f randomness tests \cite{Lof}. Interestingly, one can choose an appropriate definition from each paradigm so that the arising classes of random infinite sequences coincide. This interconnection is thought to reflect the universality of the notion. In almost all of these studies, the full computational strength of a Turing machine is used. On the one hand, this guarantees the generality of the resulting theory, because any kind of effective process can be designed as an instance of a Turing machine. However, nothing prevents us from considering weaker variants of computational machines. For example, one could consider polynomial-time computability, which gives rise to the notion of resource-bounded measure \cite{Lutz}. One can go even further and consider finite state machines, thus hitting the bottom of the hierarchy of computational machines. Most paradigms of randomness have variants adapted to finite state machines. As for the unpredictability paradigm, there are automatic martingales \cite{Stimm} and finite state predicting machines \cite{OConnor}. As for the incompressibility paradigm, there are several variants of automatic complexity for finite strings \cite{Shallit, Hyde, Calude2, Becher, Shen}. On top of that, there are approaches \cite{Busse} to studying the notion of genericity from computability theory in the context of automata theory.
From the perspective of measure-theoretic typicalness, we have the notion of regular nullsets defined by Staiger \cite{Null, Monadic}. These regular nullsets correspond to B\"{u}chi-recognizable $\omega$-languages of measure zero. Our aim is to replicate the construction of Martin-L\"{o}f randomness tests, where nullsets are defined as intersections of classes having some algorithmic property, within an automata-theoretic framework.
\section{Background}
In this section we review the concepts on which the main part of the paper relies.
\subsection{Finite State Machines}
Let $S$ be a finite set, elements of which are to be interpreted as states. Consider a binary alphabet $\Sigma=\{0,1\}$. By concatenating elements of $\Sigma$, we obtain words, e.g.\ $00$, $101$, etc. We denote the collection of all words by $\Sigma^*$, which is a monoid under concatenation with the empty string $\varepsilon$ as identity element. Let $\Sigma$ act on $S$ from the right via a function $f: S \times \Sigma \rightarrow S$;
we can extend $f$ to an action $\cdot$ of the full monoid $\Sigma^*$ by defining
$$
s \cdot \varepsilon = s \mbox{ and }
s \cdot a = f(s,a) \mbox{ and } s \cdot (xy) = (s \cdot x) \cdot y
$$
for all states $s \in S$, letters $a \in \Sigma$, and words $x,y \in \Sigma^*$. We choose some state $s_0$ to be the \emph{starting state}. Furthermore, we are going to assume that all states are reachable from $s_0$, i.e.\ for every $s\in S$ there is $w\in \Sigma^*$ such that $s_0\cdot w = s$. A \emph{Finite State Machine~(FSM)} is given as a triple of data: $(S,f,s_0)$. Given two words $x,y$ such that $x=yz$ for some word $z$, $y$ is said to be a \emph{prefix} of $x$, denoted $y\preceq x$. In case $x\neq y$, it is said to be a \emph{strict prefix}, denoted $y\prec x$. The term \emph{language} refers to any subcollection of $\Sigma^*$.
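The extended action can be sketched directly: it simply folds $f$ over the letters of a word. The helper and the example machine below are ours, chosen for illustration:

```python
def make_fsm(states, f, s0):
    """Return the extended action w -> s0 . w of the FSM (S, f, s0)."""
    def run(word):
        s = s0
        for a in word:   # s . (a1 a2 ... an) = ((s . a1) . a2) ... . an
            s = f(s, a)
        return s
    return run

# Example machine (assumed for illustration): parity of the number of 1s.
parity = make_fsm({'even', 'odd'},
                  lambda s, a: s if a == '0' else
                               ('odd' if s == 'even' else 'even'),
                  'even')
```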
\paragraph*{Connected Components of FSM}
Suppose we are given an FSM, $M=(S,f,s_0)$. We can preorder $S$ as follows:
\[
x\ge y\, \Leftrightarrow \,\exists w\in \Sigma^* \text{ such that } x\cdot w = y
\]
In other words, $x\ge y$ iff $y$ is reachable from $x$, for $x,y\in S$. This relation is reflexive and transitive (a preorder), so it is possible to define the following equivalence relation $\sim$ on $S$:
\[
x\sim y \, \Leftrightarrow x\ge y\text{ and } y\ge x
\]
The above equivalence relation says that $x,y\in S$ belong to the same connected component. By taking the quotient of $S$ under $\sim$, we end up with the collection of connected components $[M]$, together with the inherited partial order. We call $g\in [M]$ a \emph{leaf connected component} if it is minimal, i.e.\ there is no element of $[M]$ strictly less than $g$. We denote the collection of leaf connected components of $M$ by $(M)$.
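Leaf components can be computed from the reachability relation. A small sketch (function names are ours; a brute-force transitive closure suffices for the small state sets considered here):

```python
from itertools import product

def leaf_components(states, f, alphabet=('0', '1')):
    """Leaf connected components of an FSM: mutual-reachability classes
    from which no state outside the class is reachable."""
    reach = {s: {s} for s in states}   # reflexive reachability
    changed = True
    while changed:                      # transitive closure of one-step moves
        changed = False
        for s, a in product(states, alphabet):
            t = f(s, a)
            if not reach[t] <= reach[s]:
                reach[s] |= reach[t]
                changed = True
    # s ~ t iff mutually reachable
    comps = {frozenset(t for t in states
                       if s in reach[t] and t in reach[s]) for s in states}
    # a component is a leaf iff it is closed under reachability
    return [c for c in comps if all(reach[s] <= c for s in c)]
```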
\paragraph*{Finite Automata}
Let us enrich the structure of a finite state machine, $(S,f,s_0)$, by introducing a set of \emph{accepting states} $F\subseteq S$. In order to simplify the notation, we reserve the letter $M$ for all of the machines to be introduced. A \emph{(deterministic) finite automaton}, $M$, is given by a quadruple of data: $(S,f,s_0,F)$. A word $w\in \Sigma^*$ is said to be accepted by $M$ if:
\[
s_0\cdot w\in F
\]
Given a finite automaton $M$, the language of $M$, $L(M)$, corresponds to the collection of all words accepted by $M$. A language $L\subseteq \Sigma^*$ is said to be \emph{regular} if there is a finite automaton $M$ recognizing it, i.e. $L=L(M)$.
\paragraph*{Automatic Relation}
The membership of some word $w$ in a regular language $L$ can easily be checked on a corresponding finite automaton of $L$. Although this property is rather convenient, it is quite limiting in the sense that it is unary. There is a way to extend the notion of `regularity' or `automaticity' to relations of arbitrary arity. Given an input of $k$ words $(w_1,w_2,\ldots,w_k)$, it is possible to convolute them into one block in the following manner:
\[\left[
\begin{array}{ c c c c c c}
( & \cdots & w_1 & \cdots & \#\# & )\\
(& \cdots & w_2 & \cdots & \cdots & )\\
(& \cdots & \cdots & \cdots & \cdots & )\\
(& \cdots & w_k & \cdots & \# & )
\end{array}
\right]
\]
where the length of the block is equal to the length of the longest word, while the empty spaces at the end of shorter words are filled with a special symbol, $\#$. Each column of such a block can be considered a single symbol coming from the product space $\Sigma_{\#}^k$, where $\Sigma_{\#}$ refers to $\Sigma$ with the special symbol added, i.e.\ $\Sigma_{\#}=\{0,1,\#\}$. By analogy with finite state machines, let $\Sigma_{\#}^k$ act on some finite set $S$, forming a function $f$:
\[
f:S\times \Sigma_{\#}^k\to S
\]
This function is then extended to the action of $(\Sigma_{\#}^k)^*$ on $S$, defining a finite automaton $M=(S,f,s_0,F)$ with some $s_0\in S$ and $F\subseteq S$. A block $u$ of the above type is said to be accepted by $M$ if the run of $u$ on $M$ finishes at an accepting state, i.e.\ $s_0\cdot u\in F$. A relation $R\subseteq (\Sigma^*)^k$ is said to be automatic if there is a finite automaton $M$ which recognizes the elements of $R$ given in the block form as above. A function $\phi:(\Sigma^*)^k \to (\Sigma^*)^m$ is said to be automatic if its graph forms an automatic relation, i.e.\ $graph(\phi)=\{(x,\phi(x))\mid x\in (\Sigma^*)^k\}\subseteq (\Sigma^*)^{k+m}$ is automatic. Let us give an example of an automatic relation:\\
Let $R$ be a binary relation on words, $R\subseteq (\Sigma^*)^2$, such that $(x,y)\in R$ iff $\vert x \vert \le \vert y \vert$. We can construct the following finite automaton $M=(S,f,s_0,F)$ recognizing $R$. Firstly, we set $S=\{s_0,s_1\}$ with $F = \{s_0\}$. As for the function $f$, we set:
\begin{align*}
s_0\cdot\begin{pmatrix}a\\b\end{pmatrix} = \begin{cases} s_1, & \text{ if } b=\#\\s_0,& \text{ otherwise }\end{cases}
&&
s_1\cdot\begin{pmatrix} a\\b \end{pmatrix} = s_1, \text{ for all } a,b
\end{align*}
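The convolution and this two-state automaton can be sketched as follows (a minimal illustration; function names are ours). The automaton rejects as soon as the second component is padded, which happens exactly when $y$ ends before $x$:

```python
def convolve(x, y, pad='#'):
    """Pad the shorter word with '#' and zip into column symbols."""
    n = max(len(x), len(y))
    return list(zip(x.ljust(n, pad), y.ljust(n, pad)))

def shorter_or_equal(x, y):
    """Run the two-state automaton for |x| <= |y| on the convolution of
    (x, y): s0 is accepting, s1 is an absorbing rejecting state."""
    state = 's0'
    for a, b in convolve(x, y):
        if state == 's0' and b == '#':   # y exhausted before x
            state = 's1'
    return state == 's0'
```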
Automatic relations enjoy quite convenient properties from a logical point of view. This fact is captured by the following result of Khoussainov and Nerode \cite{Kho}:
\begin{proposition}
Let $R$ be a first-order definable relation on $(\Sigma^*)^k$ from given functions $(f_1,f_2,\ldots,f_n)$ and relations $(R_1,R_2,\ldots,R_m)$. If each of these functions and relations is automatic, then $R$ is also automatic.
\end{proposition}
\paragraph*{Automatic Family}
In computability theory, there is a notion of uniformly computably enumerable sets. This notion allows one to look at a collection of sets from a single effective frame of reference. A parallel notion in automata theory is that of an \emph{automatic family} \cite{Jain}.
\begin{definition}[Automatic family \cite{Jain}]
An automatic family is a collection of languages $\mathcal{U} = (U_i)_{i\in I}$ such that:
\begin{enumerate}
\item $I$ is a regular set;
\item $\{(x,i)\mid x\in U_i\}$ is an automatic relation.
\end{enumerate}
\end{definition}
\subsection{Infinite sequences}
While we refer to elements of $\Sigma^*$ as strings or words, we refer to a one-way infinite sequence over $\Sigma$ as a \emph{sequence} or an $\omega$-\emph{word}. Alternatively, an $\omega$-word $X$ can be thought of as a function:
\[X:\mathbb{N}\to \{0,1\}
\]
Given a sequence $X$, its elements can be indexed as $X=X_1X_2X_3\ldots$. Given a word $w$ and a sequence $X$, one can concatenate them, forming a sequence $Y=wX$. Herein, $w$ is called a \emph{prefix} of $Y$, denoted $w\prec Y$. If the length of $w$ is $i$, $\vert w \vert = i$, then $w$ is denoted by $Y[i]$. The collection of all prefixes of $Y$ is denoted by $Pref(Y)$. On the other hand, $X$ is called a \emph{suffix} of $Y$, denoted $X=Y[i:]$. The collection of all sequences, $\{0,1\}^{\mathbb{N}}$, is also called the \emph{Cantor space}. A subcollection of the Cantor space is usually referred to as a \emph{class} and denoted with italicized capital letters. Given a language $L$ and a class $\mathcal{C}$, we can take their product:
\[L\cdot \mathcal{C} = \{wX\mid w\in L,\, X\in\mathcal{C}\}
\]
\paragraph*{B\"{u}chi Automata}
Automata which process sequences are usually called $\omega$-automata. The simplest of these are \emph{B\"{u}chi automata}. As we are going to work with deterministic B\"{u}chi automata, we limit ourselves to reviewing such automata only. However, the reader should note that the term B\"{u}chi automaton generally refers to a nondeterministic B\"{u}chi automaton. A deterministic B\"{u}chi automaton $M$ is given by a quadruple $M=(S,f,s_0,F)$, just like a finite automaton. However, the acceptance condition needs to be revised. A sequence $X$ induces an infinite run of states, $S(X)$, in the following manner:
\begin{align*}
S(X)_1 = s_0 && S(X)_n \cdot X_n = S(X)_{n+1}, \text{ for all } n
\end{align*}
Let us denote the collection of elements of $S$ which appear infinitely often in $S(X)$ by $I(X)$. A sequence $X$ is accepted by $M$ iff $s\in F$ for some $s\in I(X)$, i.e.\ some accepting state appears infinitely often in $S(X)$. The collection of all sequences accepted by $M$ forms the class $L(M)$. A class $\mathcal{C}$ is said to be recognized by a deterministic B\"{u}chi automaton if there is $M$ as above such that $L(M)=\mathcal{C}$. A B\"{u}chi automaton $M$ is said to be of measure $m$ if $\mu(L(M))=m$, where $\mu$ is the Lebesgue measure on the Cantor space.
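For an ultimately periodic sequence $X=uv^{\omega}$, the set $I(X)$ can be computed exactly: run the machine on $u$, then iterate $v$ until the state at the start of a $v$-pass repeats; the states visited from that repetition onwards are exactly those occurring infinitely often. A sketch, assuming a nonempty $v$ (the names are illustrative):

```python
def infinity_set(f, s0, u, v):
    """I(X) for X = u v^omega on the deterministic machine (f, s0);
    v must be nonempty."""
    s = s0
    for a in u:                     # consume the finite prefix u
        s = f(s, a)
    entries, passes = {}, []
    while s not in entries:         # one full pass of v per iteration
        entries[s] = len(passes)
        visited = []
        for a in v:
            s = f(s, a)
            visited.append(s)
        passes.append(visited)
    k = entries[s]                  # passes from index k on repeat forever
    return {x for p in passes[k:] for x in p}
```

$X$ is then B\"{u}chi-accepted iff this set intersects $F$.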
\paragraph*{Muller Automata}
To review the structure of deterministic Muller automata, we start with a finite state machine $(S,f,s_0)$. We enrich its structure by a subset of the powerset of $S$, i.e.\ an accepting collection $F\subseteq \mathcal{P}(S)$. A Muller automaton $M$ is given by a quadruple: $M=(S,f,s_0,F)$. A sequence $X$ is said to be accepted by $M$ if
\[
I(X)\in F
\]
Again, the collection of all sequences accepted by $M$ forms the language of $M$, $L(M)$. A class $\mathcal{C}$ is said to be recognized by a deterministic Muller automaton if there is $M$ as above such that $L(M)=\mathcal{C}$. A Muller automaton $M$ is said to be of measure $m$ if $\mu(L(M))=m$, where $\mu$ is the Lebesgue measure on the Cantor space.
\paragraph*{Equivalence of B\"{u}chi and Muller automata}
It turns out that nondeterministic B\"{u}chi automata are equivalent to deterministic Muller automata in expressive power. This result is known as McNaughton's theorem \cite{Mc}.
\subsection{Measure on the Cantor space}
The Cantor space can be given the Lebesgue measure, $\mu$, full details of which can be found in the book by Oxtoby \cite{Oxtoby}. For the sake of completeness, we review the essentials needed for the upcoming discussion. The measure $\mu$ can be thought of as the product Bernoulli measure induced by the equiprobable measure $\mu_0$ on $\{0,1\}$. Given a word $w$, let
\[
[w] = \{w\}\cdot \{0,1\}^{\mathbb{N}}
\]
More generally, given a language $L$, let
\[
[L]=L\cdot \{0,1\}^{\mathbb{N}}
\]
be the class of all sequences extending elements of $L$. The classes $[w]$ form the basic open classes, with $\mu[w]=2^{-\vert w \vert}$. A language $W$ is said to be an $\alpha$-cover of a class $\mathcal{C}$ if:
\[
\mathcal{C}\subseteq \bigcup_{w\in W}[w]\quad \text{and} \quad\sum_{w\in W}\mu[w] \le \alpha
\]
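The weight half of the $\alpha$-cover condition can be checked mechanically for a finite $W$, since $\mu[w]=2^{-\vert w \vert}$. A small sketch using exact rationals (function names are ours, for illustration only):

```python
from fractions import Fraction

def cover_weight(W):
    """sum_{w in W} mu[w] = sum_{w in W} 2^(-|w|) for a finite language W."""
    return sum(Fraction(1, 2 ** len(w)) for w in W)

def is_alpha_cover_weight(W, alpha):
    """Check only the weight condition of an alpha-cover (the containment
    condition depends on the class C and is not checked here)."""
    return cover_weight(W) <= alpha
```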
\begin{definition}
A class $\mathcal{C}$ is said to be of measure $0$ if there is a $2^{-n}$-cover of $\mathcal{C}$ for all $n\ge 0$. A class $\mathcal{C}$ is said to be of measure $1$ if its complement $\overline{\mathcal{C}}$ is of measure $0$.
\end{definition}
\paragraph*{Disjunctive sequences}
We review the definition of the \emph{disjunctive property} \cite{Calude} for sequences, which will be useful later on.
\begin{definition}[Disjunctive sequence \cite{Calude}]
An infinite sequence $X$ is said to be disjunctive if any word $w\in \Sigma^*$ appears in $X$ as a subword.
\end{definition}
\noindent
Let $\mathcal{D}$ denote the collection of all disjunctive sequences. It has been shown \cite{Calude} that $\mu(\mathcal{D})=1$.
\section{Definitions}
\subsection{Main Definitions}
In this section we are going to define a notion of randomness tests in the context of automata theory. In doing so, we draw inspiration from the original definition of effective randomness tests by Martin-L\"{o}f \cite{Lof}. The main contribution of the mentioned paper is a process for defining measure-zero classes in an algorithmic fashion. More precisely, one considers a uniformly recursively enumerable collection of languages, $\mathcal{V}=(V_i)_{i\in \mathbb{N}}$, such that the measure of an individual class satisfies $\mu[V_i]\le 2^{-i}$. The effective nullset corresponding to the test $\mathcal{V}$ is given by $\bigcap_{i\in \mathbb{N}}[V_i]$. This shows that the definition of randomness tests requires only two concepts:
\begin{itemize}
\item Uniform collection of languages.
\item Condition on their measure.
\end{itemize}
In order to translate the concept of randomness tests into the domain of automata theory, we adopt the following strategy. For the uniform collection of languages, we use the notion of an automatic family. As for the condition on their measures, we let the measures of these sets be arbitrarily small. These ideas are captured in the following definitions.
\begin{definition}[Martin-L\"{o}f automatic randomness tests]
Let $\mathcal{U}=(U_i)_{i\in I}$ be an automatic family. We say that $\mathcal{U}$ forms a Martin-L\"{o}f automatic randomness test~(MART) if:
\[
\mu[U_i]\le 2^{-\vert i \vert} \text{ for all }i\in I
\]
\end{definition}
\noindent
The above definition is a direct analog of Martin-L\"{o}f randomness tests, because every individual class has a condition on its measure. This is an example of a local condition on the measures. Instead, one could also impose a global condition on the measures. This way, we obtain an analog of weak 2-randomness \cite{Downey}.
\begin{definition}[Automatic randomness tests]
Let $\mathcal{U}=(U_i)_{i\in I}$ be an automatic family. We say that $\mathcal{U}$ forms an Automatic randomness test~(ART) if
\[
\liminf_{i\in I}\mu[U_i]=0
\]
\end{definition}
\noindent
The corresponding nullsets are defined in the manner of Martin-L\"{o}f randomness tests.
\begin{definition}[Covering]
Given an (M)ART $\mathcal{U}=(U_i)_{i\in I}$, let
\[F(\mathcal{U})=\bigcap_{i\in I}[U_i]
\]
be its covering region. An infinite sequence $X$ is said to be covered by $\mathcal{U}$ if it belongs to the covering region of $\mathcal{U}$, i.e. $X\in F(\mathcal{U})$. A pair of (M)ART's $(\mathcal{U},\mathcal{V})$ is said to be equivalent if $F(\mathcal{U})=F(\mathcal{V})$.
\end{definition}
\noindent
Finally, we define a notion of a random sequence in parallel with the original definition by Martin-L\"{o}f:
\begin{definition}[Random Sequence]
An infinite sequence $X$ is said to be \emph{Martin-L\"{o}f automatic random~(MAR)} if $X$ is not covered by any MART. Similarly, an infinite sequence $X$ is said to be \emph{automatic random~(AR)} if $X$ is not covered by any ART.
\end{definition}
\noindent
The theory of ARTs is more general and richer than that of MARTs. For this reason, we focus our attention on ARTs. Towards the end of the paper, we show that ARTs are not equivalent to MARTs.
\subsection{Examples}
Let us provide some examples of automatic randomness tests. Just as groups are understood through their actions, ARTs are understood through the type of sequences they cover. Let us construct an automatic randomness test covering an ultimately periodic infinite sequence $X$, i.e. $X=uv^{\omega}$ for some $u,v\in \Sigma^*$. To build a covering ART, $\mathcal{U}=(U_i)_{i\in I}$, we take:
\[
I = uv^*, \quad U_i = \{i\}
\]
In some sense, ultimately periodic sequences are too rigid, so it does not take significant effort to come up with an ART covering them. For a more complex example, consider the following class of infinite sequences:
\[\mathcal{C} = \{X\mid X_{2i}=0 \text{ for all }i\in \mathbb{N}\}
\]
In other words, the class $\mathcal{C}$ is the collection of sequences having $0$ at every even-numbered position. In terms of computability theory, each member $X$ of $\mathcal{C}$ has the form $X=A\oplus 0^{\omega}$. To construct an ART $\mathcal{U}=(U_i)_{i\in I}$ covering the class $\mathcal{C}$, we take:
\[I=(00)^*,\quad U_i=(\Sigma 0)^{\frac{\vert i \vert}{2}}
\]
It is clear that $(U_i)_{i\in I}$ is a valid automatic family. Since $\mu[U_i]=2^{-\frac{\vert i \vert}{2}}$, we have $\lim_{i}\mu[U_i]=0$. Moreover, $X\in \mathcal{C}$ if and only if $X\in [U_i]$ for all $i\in I$. Hence, $\mathcal{C}$ is exactly the covering region of $\mathcal{U}$. With reference to computability theory, the Turing degree of $X=A\oplus 0^{\omega}$ is that of $A$. As $A$ can be chosen to be a sequence of any Turing degree, we have the following fact:
\begin{proposition}[Covering sequences of any complexity]
There is an ART covering sequences of arbitrarily high Turing degree.
\end{proposition}\noindent
This result might come as unexpected, as sequences covered by an ART might be thought to be of low computational complexity.
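The measure computation for the covering family $U_{0^{2k}}=(\Sigma 0)^k$ above can be spot-checked by brute-force enumeration. The following sketch assumes the binary alphabet; all function names are illustrative, not from the paper:

```python
from itertools import product

def U(k):
    # U_{0^{2k}} = (Sigma 0)^k: length-2k binary words with '0'
    # at every even position (positions 2, 4, ..., 2k)
    return {"".join(b + "0" for b in bits)
            for bits in product("01", repeat=k)}

def cylinder_measure(words):
    # measure of the cylinder class [W] for a set W of
    # pairwise prefix-incomparable words
    return sum(2.0 ** -len(w) for w in words)

for k in range(1, 7):
    # matches mu[U_i] = 2^{-|i|/2} with |i| = 2k
    assert cylinder_measure(U(k)) == 2.0 ** -k
```

Since all words in $U_i$ have the same length, the cylinders are disjoint and the measure is just $\vert U_i\vert\cdot 2^{-\vert i \vert}$.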
\section{Properties}
In this section we study properties of ARTs. We first focus on immediate properties.
\subsection{Immediate Properties}
We are going to show that an ART can be assumed to have a simple form.
\begin{theorem}[Assumptions on ART]
Let $\mathcal{U}$ be an ART. Then there is an equivalent ART $\mathcal{V}=(V_j)_{j\in J}$ satisfying the following properties:
\begin{itemize}
\item $J = 0^*$;
\item $(V_j)_{j\in J}$ is a decreasing sequence, i.e. $V_i\supseteq V_j$ for $i\prec j$;
\item The length of any word contained in $V_i$ is at least $\vert i \vert+1$, i.e. $x\in V_i\Rightarrow \vert x \vert > \vert i \vert$.
\end{itemize}
\end{theorem}
\begin{proof}
Firstly, we construct an automatic family $\mathcal{W} = (W_j)_{j\in J}$ as an intermediate step. Let $J=0^*$, and construct each member of the automatic family as follows:
\[W_j = \{x\mid \forall i[\vert i \vert \le \vert j \vert\Rightarrow \exists y\in U_i(y\prec x)] \}
\]
Due to the first-order definability property of automatic structures, $(W_j)_{j\in J}$ is a proper automatic family. From the definition it is clear that $(W_j)_{j\in J}$ is a decreasing sequence of languages. As for measure-theoretic properties, we have $\mu[W_j]\le \mu[U_i]$ for all $i$ such that $\vert i \vert \le \vert j \vert$. This shows that $\lim_{j\in J} \mu[W_j] = 0$, which makes $\mathcal{W}$ a proper ART. It is clear from the construction that $\mathcal{U}$ and $\mathcal{W}$ are equivalent. So far, we have satisfied all of the desired conditions, except for the condition on the length of words. To address this condition, let us consider the following function, $\phi:0^* \to \Sigma^*$:
\[
\phi(j) = \min\nolimits_{ll}(W_j),\ \text{the unique } x\in W_j \text{ such that } x\le_{ll}y \text{ for all } y\in W_j
\]
where $ll$ refers to the \emph{length-lexicographic} linear order (sometimes called \emph{shortlex}), defined as follows:
\[
x\le_{ll} y \Leftrightarrow \vert x \vert < \vert y \vert \text{ or } \left(\vert x \vert = \vert y \vert \text{ and } x\le_{lex}y\right)
\]
where $lex$ refers to the usual lexicographic order. Both $lex$ and $ll$ are automatic relations. Hence, the function $\phi$ is automatic, due to the first-order definability property. Being an automatic function, $\phi$ has some pumping constant $c$. We argue that no element of $W_j$ is shorter than $\vert j \vert - c$ for any $j\in 0^*$. Otherwise, there is a pair $(x,j)$ such that $\phi(j)=x$ and $\vert j \vert > \vert x \vert + c$. Applying the pumping lemma to this instance, we obtain arbitrarily long $i$ such that $\phi(i)=x$; in particular, $x\in W_i$ and hence $\mu[W_i]\ge 2^{-\vert x \vert}$ for arbitrarily long $i$. However, this contradicts the fact that $\lim_{j\in J}\mu[W_j] = 0$. Hence, the following condition on the length of words holds:
\[
x\in W_j \Rightarrow \vert x \vert \ge \vert j \vert - c
\]
Finally, we get rid of the constant $c$ in order to construct the desired ART:
\[
V_j = W_{j0^{c+1}}
\]
Observe that all other desired conditions remain intact under this shift in indexing. This completes our construction. As a final remark, let us observe that the conclusion of the theorem also holds if we replace ART with MART, since the resulting randomness test preserves the MART property.
\end{proof}
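The intermediate family $(W_j)_{j\in J}$ from the proof can be computed by brute force for a toy instance. In the sketch below, the family from the Examples subsection is encoded as a plain dictionary and words are enumerated only up to a cut-off length; all names are illustrative:

```python
from itertools import product

def has_proper_prefix_in(U, x):
    # is some y in U a proper prefix of x?
    return any(y != x and x.startswith(y) for y in U)

def W(family, j_len, max_len=6, alphabet="01"):
    # W_j = { x | for every index i with |i| <= |j|,
    #             some y in U_i is a proper prefix of x }
    small = [U for i, U in family.items() if len(i) <= j_len]
    words = ("".join(t) for n in range(max_len + 1)
             for t in product(alphabet, repeat=n))
    return {x for x in words
            if all(has_proper_prefix_in(U, x) for U in small)}

# U_{0^{2k}} = (Sigma 0)^k, the covering family for the class C
family = {"": {""}, "00": {"00", "10"},
          "0000": {"0000", "0010", "1000", "1010"}}

# the W_j form a decreasing sequence, as claimed in the proof
assert W(family, 4) <= W(family, 2) <= W(family, 0)
```

The real construction relies on first-order definability to get an automatic family; the enumeration here only illustrates the set-theoretic content of the definition.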
\paragraph*{Universal randomness tests}In the original theory of Martin-L\"{o}f randomness~(MLR) there is a notion of a \emph{universal ML test}. This test subsumes all other Martin-L\"{o}f tests, so that testing a sequence against a universal ML test reveals its randomness properties. In our notation, a universal ART $\mathcal{V}$ would satisfy the following property: for any ART $\mathcal{U}$, we have $F(\mathcal{U})\subseteq F(\mathcal{V})$. We are going to show that there is no such universal test in our setting. First, we verify an auxiliary fact.
\begin{lemma}
Suppose we are given a finite automaton $M=(S,f,s_0,F)$. Assume that for every state $q\in S$, there is a word $u$ such that $q\cdot u\in F$. Then the measure of the set of all sequences which never visit an accepting state when run on $M$ is zero.
\end{lemma}
\begin{proof}
Let $\{s_0,s_1,\ldots,s_n\}$ be the states of $M$. We construct a word $w$ such that processing $w$ from any state results in a visit to some accepting state. The desired word $w=w_0w_1\ldots w_n$ is constructed inductively. Choose $w_0$ such that $s_0\cdot w_0\in F$. For any $i>0$, choose $w_i$ such that $(s_i\cdot w_0w_1\ldots w_{i-1})\cdot w_i\in F$. In this way the final word $w$ satisfies the given requirement. Let us denote the class of sequences which never visit an accepting state by $\mathcal{C}$. Given any $X\in \mathcal{C}$, it is clear that $X$ does not contain $w$ as a subword. Thus, $X$ is not disjunctive. This implies that $\mathcal{C}$ is a subcollection of the nondisjunctive sequences. Since the latter has measure zero, the former must also have measure zero.
\end{proof}
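The inductive construction of the word $w$ in the proof can be sketched directly. Transition tables are given as nested dictionaries, and all names are illustrative:

```python
from collections import deque

def run(delta, state, word):
    # process `word` from `state` and return the resulting state
    for a in word:
        state = delta[state][a]
    return state

def word_to_accept(delta, state, accepting, alphabet="01"):
    # BFS for a shortest u with state . u in the accepting set
    seen, queue = {state}, deque([(state, "")])
    while queue:
        q, u = queue.popleft()
        if q in accepting:
            return u
        for a in alphabet:
            r = delta[q][a]
            if r not in seen:
                seen.add(r)
                queue.append((r, u + a))
    raise ValueError("no accepting state reachable")

def universal_visiting_word(delta, states, accepting):
    # Lemma 1: w = w_0 w_1 ... w_n, where w_i drives
    # (s_i . w_0...w_{i-1}) into an accepting state
    w = ""
    for s in states:
        w += word_to_accept(delta, run(delta, s, w), accepting)
    return w

def visits_accepting(delta, state, word, accepting):
    # does the run of `word` from `state` pass through an accepting state?
    for a in word:
        state = delta[state][a]
        if state in accepting:
            return True
    return False
```

On any small automaton in which every state can reach the accepting set, processing the resulting $w$ from any state visits an accepting state, as the lemma requires.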
\begin{theorem}[No universal test for ART]
There is no universal ART. In other words, there is no ART $\mathcal{V}$ such that for any ART $\mathcal{U}$ we have $F(\mathcal{U})\subseteq F(\mathcal{V})$.
\end{theorem}
\begin{proof}
Let us assume the contrary, i.e. that there is a universal randomness test $\mathcal{V}=(V_j)_{j\in J}$. This implies that for an arbitrary ART $\mathcal{U}=(U_i)_{i\in I}$ we have:
\[
F(\mathcal{U})\subseteq F(\mathcal{V})
\]
Since $\liminf_j \mu[V_j]=0$, there is an index $j\in J$ such that $\mu[V_j]<1$. We wish to show that the class $[V_j]$ is not dense, i.e. there is some string $w$ which cannot be extended to an element of $[V_j]$. If this is the case, one can easily construct an ART $\mathcal{U}=(U_i)_{i\in I}$ so as to achieve a contradiction:
\[
U_i=\{wi\}
\]
where $I=0^*$. Clearly, the infinite sequence $w0^{\omega}$ belongs to $F(\mathcal{U})$, yet $w0^{\omega}\not\in F(\mathcal{V})$ due to the fact that $w0^\omega\not\in [V_j]$. To complete the proof, we are left to show that $[V_j]$ is not a dense class. Observe that, being a member of an automatic family, $V_j$ is a regular language on its own, with some underlying finite automaton $M$. Since $[V_j]$ is the set of extensions of all strings in $V_j$, it can be viewed as the collection of all infinite sequences visiting some accepting state of $M$. To show that $[V_j]$ is not dense, it suffices to show the existence of some state $q$ in $M$ such that for any word $u$, we have $q\cdot u\not\in F$. If there is no such state, then according to Lemma 1 we have $\mu[V_j]=1$. As we are given that $\mu[V_j]<1$, there must be such a state $q$. Let $w$ be a word such that $s_0 \cdot w=q$. Then $w$ witnesses the fact that $[V_j]$ is not dense. Together with the earlier argument, this completes the proof.
\end{proof}
\subsection{Machine version of ART}
One of the common practices in automata theory is the search for a machine characterization of a particular phenomenon. In our case, we are interested in the covering of an infinite sequence $X$ by some ART $\mathcal{U}$. A natural question to ask is whether this can be checked on some machine model. Since $X$ is an infinite sequence, we need to deal with $\omega$-automata. It turns out that for a given ART $\mathcal{U}$, its covering region $F(\mathcal{U})$ can be described as the language of some deterministic B\"{u}chi automaton of measure zero. Moreover, the converse statement also holds.
\begin{theorem}[Characterization of a covering region]
Let $\mathcal{U}$ be an ART. Then $F(\mathcal{U})$ is recognized by a deterministic B\"{u}chi automaton of measure zero.
\end{theorem}
\noindent
We are going to provide two genuinely different proofs. The first proof is based on a direct construction of the desired B\"{u}chi automaton. The second proof is based on an indirect argument, where we do not know what the corresponding automaton looks like.
\begin{proof}[Direct Construction]
According to Theorem 1, we can assume that $\mathcal{U}=(U_i)_{i\in I}$, where $I=0^*$ and the length of the words in $U_i$ is larger than that of $i$ for all $i\in I$. Our aim is to construct a deterministic B\"{u}chi automaton $M$ such that $F(\mathcal{U})=L(M)$. The automaton $M$ is based on deterministic movements of multiple markers inside an automaton. The idea is similar to the construction of a deterministic finite automaton equivalent to a given nondeterministic finite automaton. Recall that $X\in F(\mathcal{U})$ if and only if for all $i\in I$, there is $x_i\in U_i$ such that $x_i\prec X$. So, $M$ should verify whether the above condition is satisfied while reading the elements of $X$. The idea is to check for the existence of $x_i$ for each $i$, one by one. Suppose we want to check the existence of $x_i$ for $i=0^n$. Let $N$ be an automaton recognizing the automatic relation corresponding to $\mathcal{U}$. First, we feed the following block into $N$:
\[
\begin{pmatrix}
X[n]\\
0^n
\end{pmatrix}
\]
Then we associate a red marker with this process to keep track of the state of $N$ as subsequent blocks of the form $(X_i,\#)$ are fed. If the red marker ever visits an accepting state of $N$, it disappears. Observe that the condition on the existence of $x_i$ is now replaced by the condition on the disappearance of the created red marker. This procedure can be performed for all $n$ as we read the elements of $X$ bit by bit. The global condition, $X\in F(\mathcal{U})$, can be formulated as the eventual disappearance of all created red markers. This procedure poses two kinds of problems:
\begin{itemize}
\item The number of red markers might grow indefinitely.
\item As new red markers are constantly produced, we might fail to keep track of the disappearance of older red markers.
\end{itemize}
To address the first issue, we merge all red markers corresponding to the same state. This ensures that at any given stage, we have at most $\vert N \vert$ (the number of states of $N$) red markers. In order to address the second issue, we introduce a distinction between old and new red markers. We refer to the old red markers as grey markers. We then divide the overall dynamics into phases. At the beginning of each phase, all existing red markers are transformed into grey markers. Grey markers behave just like red markers. The end of each phase is characterized by the disappearance of all grey markers, after which the next phase begins. Observe that the disappearance of all created red markers is equivalent to witnessing the ends of these phases infinitely often. Thus, if one assigns a state to each possible configuration of markers and declares the states marking the end of a phase to be accepting, then the acceptance condition of the resulting B\"{u}chi automaton captures exactly this condition. This completes the construction.
\end{proof}
\begin{proof}[Indirect Construction]
Given an ART $\mathcal{U}=(U_i)_{i\in I}$ satisfying the properties given in Theorem 1, we wish to build a deterministic B\"{u}chi automaton of measure zero recognizing the language $F(\mathcal{U})$. Since $F(\mathcal{U})$ is of measure zero, it suffices to show that $F(\mathcal{U})$ is recognized by some deterministic B\"{u}chi automaton. Recall that a given class $\mathcal{C}$ is recognizable by a deterministic B\"{u}chi automaton if and only if there is a regular language $R$ satisfying:
\[
X\in \mathcal{C} \Leftrightarrow Pref(X)\cap R\text{ is an infinite set}
\]
So we need to exhibit a regular language $R$ satisfying the above condition for $F(\mathcal{U})$. Recall that $X\in F(\mathcal{U})$ iff for all $i\in I$, there is $x_i\in U_i$ such that $x_i\prec X$. Since $(U_i)_{i\in I}$ is assumed to be a decreasing sequence, we have $X\in F(\mathcal{U})$ if and only if there are infinitely many $i\in I$ for which the mentioned property holds, i.e. $\exists^{\infty} i\in I$ such that there is $x_i\in U_i$ with $x_i\prec X$. We then turn each $U_i$ into a prefix-free language, which gives us a new automatic family $\mathcal{V} = (V_i)_{i\in I}$. Formally,
\[
V_i = \{x\in U_i: \forall y\in U_i(y\not\prec x)\}
\]
We can easily verify that $X\in F(\mathcal{U})$ if and only if $\exists^{\infty}i\in I$ such that there is $x_i\in V_i$ with $x_i\prec X$. For the forward direction, given $X\in F(\mathcal{U})$, we have $\exists^{\infty}i\in I$ such that there is $x_i\in U_i$ with $x_i\prec X$. For each such $i\in I$, we select $y_i\preceq x_i\prec X$ such that $y_i\in V_i$. On the other hand, suppose that $\exists^{\infty}i\in I$ such that there is $x_i\in V_i$ with $x_i\prec X$. By virtue of $V_i$ being a subset of $U_i$, we have that $\exists^{\infty}i\in I$ such that there is $x_i\in U_i$ with $x_i\prec X$. Finally, let us define the desired regular language $R$:
\[
R = \bigcup_{i\in I}V_i = \{x\mid \exists i(x\in V_i)\}
\]
Due to the first-order definability for automatic structures, $R$ is indeed a regular language. Let us show that it satisfies the desired properties. Suppose that $X\in F(\mathcal{U})$; then we know $\exists^{\infty}i\in I$ such that there is $x_i\in V_i$ with $x_i\prec X$. Since the initial $\mathcal{U}$ is assumed to satisfy the condition on the length of words, we have
\[
x\in V_i \Rightarrow \vert x \vert > \vert i \vert
\]
Together with the former observation, we conclude that $R\cap Pref(X)$ is indeed an infinite set. On the other hand, suppose that $R\cap Pref(X)$ is infinite. Since each $V_i$ is a prefix-free language, no two elements of $Pref(X)$ can belong to the same $V_i$. Hence, there are infinitely many $i\in I$ such that there is $x_i\in V_i$ with $x_i\prec X$. This implies that $X\in F(\mathcal{U})$. Thus, $R$ indeed satisfies the desired properties.
\end{proof}
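The prefix-free reduction $U_i \mapsto V_i$ used in the indirect proof is easy to realize concretely for finite sets of words. A minimal sketch (illustrative names, not from the paper):

```python
def prefix_free_part(U):
    # V = { x in U : no proper prefix of x lies in U }
    return {x for x in U
            if not any(x[:k] in U for k in range(len(x)))}

U = {"0", "01", "10", "101"}
V = prefix_free_part(U)

# V is prefix-free ...
assert all(not (x != y and y.startswith(x)) for x in V for y in V)
# ... and every member of U extends a member of V, so the
# cylinder classes [U] and [V] coincide
assert all(any(x.startswith(y) for y in V) for x in U)
```

In the paper the same operation is carried out by a first-order definition over the automatic family, which is what keeps $\mathcal{V}$ automatic.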
\noindent
Now we state the converse direction of the previous theorem.
\begin{theorem}
Let $M$ be a deterministic B\"{u}chi automaton of measure zero. Then there is an ART $\mathcal{U}$ such that $L(M)=F(\mathcal{U})$.
\end{theorem}
\begin{proof}
Let $R$ be the language corresponding to $M$ when it is viewed as a finite automaton instead of a deterministic B\"{u}chi automaton. Set $I=0^*$ and define $\mathcal{U} = (U_i)_{i\in I}$ as follows:
\[
U_i = \{x\mid \vert x \vert \ge \vert i \vert,\, x\in R\}
\]
Due to the first-order definability of automatic structures, $\mathcal{U}$ is a proper automatic family. Moreover, $\mathcal{U}$ is a decreasing family in the sense that $U_i\supseteq U_j$ for $i\prec j$. Recall that $X\in L(M)$ if and only if $Pref(X)\cap R$ is an infinite set. The latter is equivalent to saying that $X\in [U_i]$ for all $i\in I$, i.e. $X\in F(\mathcal{U})$. This argument shows that $L(M)=F(\mathcal{U})=\bigcap_{i\in I}[U_i]$. Since the measure of the Cantor space itself is bounded, we may apply the Lebesgue dominated convergence theorem (continuity of measure from above) to the previous equation, which gives us:
\[
\mu(L(M)) = \lim_{i\in I} \mu[U_i]
\]
As $\mu(L(M))=0$, we have that $\mathcal{U}$ forms a proper ART satisfying the desired properties.
\end{proof}
\begin{corollary}
Let $\mathcal{C}$ be a class of the Cantor space. The following are equivalent:
\begin{enumerate}
\item There is an ART $\mathcal{U}$ such that $\mathcal{C}=F(\mathcal{U})$.
\item There is a deterministic B\"{u}chi automaton $M$ of measure zero such that $\mathcal{C}=L(M)$.
\end{enumerate}
\end{corollary}
\noindent
We have encountered deterministic B\"{u}chi automata of measure zero. A natural question to ask would be: when is a given deterministic B\"{u}chi automaton of measure zero? To answer this question, we need to verify a few facts, given in the form of lemmas.
We should note that some of the following results also appear in the work of Staiger \cite{Null,Monadic}, but we wish to give our own proofs.
\begin{lemma}
Let $M=(S,f,s_0)$ be an FSM. Then the run of any disjunctive sequence $X$ reaches a leaf connected component, i.e. $\exists i\in \mathbb{N}$ such that $s_0\cdot X[i]\in g$ for some $g\in (M)$.
\end{lemma}
\begin{proof}
Firstly, let us observe that for any state $q\in S$ there is a word $w$ such that $q\cdot w$ is in some leaf connected component. Following the idea in the proof of Lemma 1, it is possible to construct a single word $w$ such that, no matter from which state one starts, processing $w$ brings one to a leaf connected component. A disjunctive sequence $X$ contains $w$ as a subword by definition. Hence, the run of a disjunctive sequence $X$ on $M$ ends up in some leaf connected component.
\end{proof}
\begin{lemma}
Given a disjunctive sequence $X$, any suffix of $X$ is also a disjunctive sequence.
\end{lemma}
\begin{proof}
Suppose that we want to prove that $X[i:]=X_{i+1}X_{i+2}\ldots$ is disjunctive for some $i$. Let us pick some word $w$. To show that $w$ appears as a subword in $X[i:]$, observe that $0^iw$ appears as a subword in $X$, since $X$ is disjunctive. Any occurrence of $0^iw$ in $X$ yields an occurrence of $w$ starting after position $i$; hence $w$ appears as a subword in $X[i:]=X_{i+1}X_{i+2}\ldots$.
\end{proof}
\begin{theorem}[Disjunctive sequences and automata]
Let $M=(S,f,s_0)$ be an FSM. Then for any disjunctive sequence $X$, we have that $I(X)=g$ for some $g\in (M)$.
\end{theorem}
\begin{proof}
According to Lemma 2, processing any disjunctive sequence $X$ leads the automaton $M$ to some leaf connected component. Let $g\in (M)$ be the leaf connected component in which the run of $X$ ends up. Clearly $I(X)\subseteq g$, so we need to show that $g\subseteq I(X)$. For an arbitrary state $q\in g$, we can apply the argument used in the proof of Lemma 1 to the FSM $g$ with accepting state $q$: there is a word $w_q$ such that processing $w_q$ from any state of $g$ visits $q$ at least once. Now, if $I(X)\neq g$, then there is a state $q\in g$ which is never visited from some point onwards during the run of $X$. This implies that for some $i\in \mathbb{N}$, $X[i:]=X_{i+1}X_{i+2}\ldots$ does not contain $w_q$ as a subword. Thus, $X[i:]$ is not disjunctive, which by Lemma 3 contradicts our initial assumption.
\end{proof}
\noindent
Finally, we provide a condition for a deterministic B\"{u}chi automaton to be of measure zero.
\begin{theorem}[Measure and B\"{u}chi automata]
Let $M=(S,f,s_0,F)$ be a deterministic B\"{u}chi automaton. Then $\mu(L(M))=0$ if and only if $F\cap g=\emptyset$ for all leaf connected components $g\in (M)$.
\end{theorem}
\begin{proof}
Forward direction $(\Rightarrow):$\\
Assume that $q\in F\cap g$ for some leaf connected component $g$. Since we assume that all states are reachable from the starting state $s_0$, there is a word $w$ such that $s_0\cdot w=q$. Now consider any disjunctive sequence $X\in\mathcal{D}$. Applying Theorem 5 to the component $g$ with starting state $q$, we infer that the run of $wX$ visits every element of $g$ infinitely often. This means that $wX$ is an accepted sequence. Since the measure of the set of all disjunctive sequences is one, we have $\mu(w\mathcal{D})=2^{-\vert w \vert}$. Hence $\mu(L(M))\ge 2^{-\vert w \vert}>0$. Thus, the forward direction follows.\\
Backward direction $(\Leftarrow):$\\
Assume $F\cap g=\emptyset$ for all leaf components $g\in (M)$. According to Theorem 5, for any disjunctive $X$, $I(X)=g$ for some leaf component $g\in (M)$. This means that any sequence accepted by $M$ is not disjunctive, and $L(M)$ is a subcollection of the nondisjunctive sequences. As the latter has measure zero, the former must also have measure zero. This completes the backward direction, and the overall proof of the given theorem.
\end{proof}
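The criterion of the theorem is effectively checkable. A sketch, with deterministic transition tables as nested dictionaries and leaf components detected by mutual reachability (all names are illustrative):

```python
def reachable(delta, q, alphabet="01"):
    # all states reachable from q (including q itself)
    seen, stack = {q}, [q]
    while stack:
        s = stack.pop()
        for a in alphabet:
            r = delta[s][a]
            if r not in seen:
                seen.add(r)
                stack.append(r)
    return seen

def is_measure_zero(delta, states, accepting, alphabet="01"):
    # mu(L(M)) = 0 iff no accepting state lies in a leaf
    # (bottom) strongly connected component
    reach = {q: reachable(delta, q, alphabet) for q in states}
    for q in states:
        # q lies in a leaf SCC iff every state reachable from q
        # can reach q back; that SCC is then exactly reach[q]
        if all(q in reach[r] for r in reach[q]):
            if reach[q] & set(accepting):
                return False
    return True
```

For instance, with a state $1$ that absorbs everything, accepting $\{0\}$ yields $L(M)=\{0^\omega\}$ (measure zero), while accepting $\{1\}$ yields positive measure.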
\noindent
Since deterministic Muller automata are close to deterministic B\"{u}chi automata, we provide the same type of condition for deterministic Muller automata. Later, this observation turns out to be useful for us.
\begin{theorem}[Measure and Muller Automata]
Let $M=(S,f,s_0,F)$ be a deterministic Muller automaton. Then $\mu(L(M))=0$ if and only if $F\cap (M) = \emptyset$.
\end{theorem}
\begin{proof}
The proof is similar to that of the above theorem. Forward direction\\$(\Rightarrow):$\\
Again, assume that $g\in F$ for some leaf component $g\in(M)$. Let us choose an arbitrary state $q\in g$. By our standing assumption on automata, there is a word $w$ such that $s_0\cdot w=q$. Applying Theorem 5 to the connected component $g$ with starting state $q$, we observe that for $wX$ with $X\in \mathcal{D}$, we have $I(wX)=g$. Since the measure of the set of disjunctive sequences is one, $\mu(w\mathcal{D})=2^{-\vert w \vert}$. This implies that $\mu(L(M))\ge 2^{-\vert w \vert}>0$. This completes the forward direction.\\
Backward direction, $(\Leftarrow):$\\
Assume $g\not\in F$ for every leaf connected component $g\in(M)$. According to Theorem 5, for any disjunctive sequence $X$, we have $I(X)=g$ for some leaf component $g\in(M)$. Hence, any sequence $X$ accepted by $M$ is not disjunctive, and $L(M)$ is a subcollection of the nondisjunctive sequences. As the latter has measure zero, the former must also have measure zero. This completes the backward direction, and the whole proof of the given theorem.
\end{proof}
\subsection{Characterization}
\noindent
In this section we summarize our observations in the form of a characterization result for sequences covered by an ART. Again, note that the equivalence of the last three statements is known from Staiger \cite{Null,Monadic}.
\begin{theorem}[Characterization]
Given an infinite sequence $X$ the following are equivalent:
\begin{enumerate}
\item $X\in F(\mathcal{U})$ for some ART $\mathcal{U}$.
\item $X$ is accepted by a deterministic B\"{u}chi automaton of measure zero.
\item $X$ is accepted by a deterministic Muller automaton of measure zero.
\item $X$ is not a disjunctive sequence.
\end{enumerate}
\end{theorem}
\begin{proof}
$(1\Rightarrow 2):$\\
Suppose a sequence $X$ is covered by an ART $\mathcal{U}$. By Theorem 3, $F(\mathcal{U})$ is recognized by a deterministic B\"{u}chi automaton $M$ of measure zero. So clearly, $X$ is accepted by $M$, which has the desired properties.\\
$(2\Rightarrow 3):$\\
Suppose that a sequence $X$ is accepted by a deterministic B\"{u}chi automaton $M$ of measure zero. It is known that nondeterministic B\"{u}chi automata are equivalent to deterministic Muller automata. As any deterministic B\"{u}chi automaton can be viewed as a nondeterministic one, every language recognized by a deterministic B\"{u}chi automaton is also recognized by some deterministic Muller automaton. So, there is a deterministic Muller automaton $N$ such that $L(M)=L(N)$; moreover, $N$ is of measure zero, since $\mu(L(N))=\mu(L(M))=0$. As $X\in L(N)$, we have that $X$ is accepted by $N$.\\
$(3\Rightarrow 4):$\\
Suppose that a sequence $X$ is accepted by a deterministic Muller automaton $M$ of measure zero with accepting collection $F$. According to Theorem 7, we have $F\cap(M)=\emptyset$. Combining this observation with Theorem 5, we obtain that any sequence accepted by $M$ is nondisjunctive. In particular, $X$ is not disjunctive. \\
$(4 \Rightarrow 1):$\\
Suppose that $X$ is nondisjunctive, i.e. there is some word $w$ which does not appear in $X$ as a subword. To construct an ART covering $X$, we take:
\[U_i = \{x\mid \vert x \vert = \vert i \vert, \, x \text{ does not contain }w\text{ as a subword}\}
\]
with $I=0^*$. Clearly, $\mathcal{U}=(U_i)_{i\in I}$ is an automatic family. Furthermore, the measure $\mu[U_i]$ can be bounded in a straightforward manner. Assuming that $\vert w \vert = d$, consider an element $u\in U_{0^{dk}}$ for some $k\in \mathbb{N}$. As $u$ can be divided into $k$ blocks of size $d$, and none of the blocks is allowed to equal $w$, we have:
\[\mu[U_{0^{dk}}]\le \left(1-\frac{1}{2^d}\right)^k
\]
As the right-hand side approaches zero as $k\to \infty$, we have $\liminf_{i\in I} \mu[U_i]=0$. This shows that $(U_i)_{i\in I}$ is a proper ART. Since $X\in [U_i]$ for all $i\in I$, we have $X\in F(\mathcal{U})$, i.e. $X$ is covered by $\mathcal{U}$.
\end{proof}
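The block-wise bound used in $(4\Rightarrow 1)$ can be spot-checked by enumeration over the binary alphabet (illustrative names, not from the paper):

```python
from itertools import product

def avoiding_fraction(w, n, alphabet="01"):
    # fraction of length-n words with no occurrence of w;
    # this equals mu[U_{0^n}] for the test built in the proof
    good = sum(1 for t in product(alphabet, repeat=n)
               if w not in "".join(t))
    return good / len(alphabet) ** n

w, d = "01", 2
for k in range(1, 7):
    # mu[U_{0^{dk}}] <= (1 - 2^{-d})^k, as in the proof
    assert avoiding_fraction(w, d * k) <= (1 - 2.0 ** -d) ** k
```

For $w="01"$ and $k=1$ the bound is tight: exactly $3$ of the $4$ length-2 words avoid $01$.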
\begin{theorem}[Combinatorial characterization of AR]
An infinite sequence $X$ is automatic random~(AR) if and only if it is disjunctive.
\end{theorem}
\begin{proof}
Focusing on $(1)$ and $(4)$ of the above theorem~(Theorem 8), we obtain the following equivalence:
\[X \text{ is covered by some ART }\Leftrightarrow X\text{ is not disjunctive }
\]
This is equivalent to saying:
\[X \text{ is AR}\Leftrightarrow X \text{ is disjunctive }
\]
In this way, we obtain a complete combinatorial characterization of the AR condition.
\end{proof}
\subsection{On relation between ART and MART}
At last, we would like to say a few words regarding the relationship between ART and MART. Clearly ART subsumes MART, because each MART can be viewed as an ART. A natural question to ask is whether these notions are equivalent: do they coincide, or do they give rise to different classes of randomness? We are going to show that these two notions are not equivalent, i.e. there is a class $\mathcal{C}$ such that $\mathcal{C} = F(\mathcal{U})$ for some ART $\mathcal{U}$, yet $\mathcal{C} \neq F(\mathcal{V})$ for any MART $\mathcal{V}$.
\begin{theorem}
ART and MART are not equivalent as randomness notions.
\end{theorem}
\begin{proof}
Let us consider the class presented as an example earlier in the paper:
\[
\mathcal{C} = \{X\mid X_{2i}=0\text{ for all } i\in \mathbb{N}\}
\]
We have shown that there is an ART $\mathcal{U} = (U_i)_{i\in I}$ given as:
\[
I=(00)^*,\quad U_i=(\Sigma 0)^{\frac{\vert i \vert}{2}}
\]
such that $\mathcal{C} = F(\mathcal{U})$. Observe that for each $x\in U_i$, we have $\vert x \vert = \vert i \vert$. Moreover, $\vert U_i \vert = 2^{\frac{\vert i \vert}{2}}$. We need to show that $\mathcal{C} \neq F(\mathcal{V})$ for any MART $\mathcal{V}$. Assume the contrary: there is such a MART $\mathcal{V}$. Let us apply Theorem 1 to transform $\mathcal{V}$ into a more favorable MART $\mathcal{W} = (W_e)_{e\in E}$ satisfying:
\begin{itemize}
\item $E = 0^*$.
\item $(W_e)_{e\in E}$ is decreasing, i.e. $W_i\supseteq W_j$ for $i\prec j$.
\item The length of words contained in $W_e$ is larger than $\vert e \vert$, i.e. $x\in W_e\Rightarrow \vert x \vert > \vert e \vert$.
\end{itemize}
Let us consider $W_{0^{2k}}$ for some $k$. For the sake of notational convenience, we write $kk$ in place of $0^{2k}$. Any $X\in \mathcal{C}$ has some prefix $x_{2k}\prec X$ in $W_{kk}$, satisfying $\vert x_{2k}\vert > 2k$. This means that any $x\in U_{kk}$ has some extension $y_x$ in $W_{kk}$. Let $c$ be a pumping constant corresponding to the automatic relation for $\mathcal{W}$. For any string $y\in W_{kk}$ with $\vert y\vert > 2k + c$, we can apply the pumping lemma to pump $y$ down to some $y'\in W_{kk}$ with $\vert y' \vert \le 2k + c$; since the pumping can be arranged to occur in the tail of the convolution, past position $2k$, the word $y'$ still extends the same $x$. Hence, for any $x\in U_{kk}$ we may assume that its extension $y_x$ has length at most $2k+c$. Finally, we estimate the measure of $[W_{kk}]$:
\[
\mu[W_{kk}]\ge \sum_{x\in U_{kk}}2^{-\vert y_x \vert} \ge 2^{k}\cdot 2^{-2k-c}=2^{-k-c}> 2^{-2k}
\]
provided that $k> c$. This contradicts the requirement that $\mathcal{W}$ be a MART.
\end{proof}
\noindent
So far, we have seen a difference between ART and MART at the level of classes, i.e. there is a class $\mathcal{C}\subseteq \{0,1\}^{\mathbb{N}}$ which is covered by some ART and by no MART. It turns out the difference goes further, to the level of individual sequences. In particular, we are going to show the existence of a sequence $X$ which is random with respect to MART, yet not random with respect to ART.
\begin{theorem}
There is a sequence $X$ which is Martin-L\"{o}f automatic random~(MAR), yet it is not automatic random~(AR).
\end{theorem}
\begin{proof}
Let $A$ be some disjunctive sequence and $X=A\oplus 0^{\omega}=A_10A_20\ldots$. Clearly, $X$ is not AR, for there is an ART covering $X$, as shown in the Examples subsection. We are left to show that $X$ is MAR, i.e. that no MART covers $X$. Assume the contrary: suppose there is some MART $\mathcal{U}=(U_i)_{i\in I}$ covering $X$, which can be assumed to satisfy the properties given in Theorem 1. Let us restrict our attention to the even-indexed languages, $U_{0^{2k}}$. Again, for the sake of notational convenience, we replace the index $0^{2i}$ by $ii$. Let us consider an induced family of languages, $\mathcal{V}=(V_i)_{i\in I}$ with $I=0^*$:
\[
V_i = \{a_1a_2\ldots a_n\mid (a_10a_20\ldots a_n\in U_{ii})\text{ or }(a_10a_20\ldots a_n0\in U_{ii})\}
\]
It is possible to construct a finite automaton recognizing $\mathcal{V}$ from one recognizing $\mathcal{U}$ by `skipping' the $0$ transitions. This argument shows that $\mathcal{V}$ is indeed an automatic family. Furthermore, we have $A\in \bigcap_i[V_i]$. This shows that $\mathcal{V}$ is not an ART, for $A$ is a disjunctive sequence. In order to understand $\mathcal{V}$ better, let us switch to the realm of B\"{u}chi automata. As we know, there is some B\"{u}chi automaton $M$ recognizing $\bigcap_i[U_{ii}]$. A modification of $M$, again consisting of `skipping' over the $0$ transitions, gives us a B\"{u}chi automaton $N$ recognizing $\bigcap_i[V_i]$. As for $N$, we know the following facts: it is of positive measure, and $A\in L(N)$. The latter fact can be interpreted as the presence of an accepting state in the leaf connected component in which the run of $A$ ends up. Thus there is a prefix $w\prec A$ such that for any disjunctive sequence $X\in \mathcal{D}$, we have $wX\in L(N)$, or equivalently $wX\in [V_i]$ for all $i$. Now we can use this observation to derive a lower bound for the measure of $U_{ii}$, thus achieving a contradiction. Given any string $v$ of length $\vert i \vert$ such that $w\preceq v$, there is an extension of $v$ in $V_i$. Translating this statement to $U_{ii}$, we have: given $v=a_1a_2\ldots a_{\vert i \vert} \succeq w$, there is some string $u_v\in U_{ii}$ extending the interleaved word $a_10a_20\ldots a_{\vert i \vert}$. Thanks to the pumping lemma, the length of $u_v$ can be assumed to be no greater than $2\vert i \vert + c$, where $c$ is a pumping constant corresponding to the automatic relation for $\mathcal{U}$. Just as in the previous proof, we now estimate $\mu[U_{ii}]$:
\[
\mu[U_{ii}]\ge \sum_{v}2^{-\vert u \vert} \ge 2^{\vert i \vert - \vert w \vert}2^{-2\vert i \vert - c} = 2^{-\vert i \vert - \vert w \vert - c} > 2^{-2\vert i \vert}
\]
given large enough $\vert i \vert$. This clearly violates the conditions for being a MART, contradicting the initial assumption. Hence, $\mathcal{U}$ cannot be a MART.
\end{proof}
\noindent
So far, we have seen that it is impossible, in general, to impose the condition $\mu[U_i]\le 2^{-\vert i \vert}$ on an ART without altering its randomness properties. The question now is: what can we say about individual measures at all? Can we impose some weaker condition? It turns out that the individual measures can be assumed to decrease exponentially.
\begin{theorem}
Suppose that $\mathcal{C}=F(\mathcal{U})$ for some ART $\mathcal{U}$. Then there is an ART $\mathcal{V} = (V_j)_{j\in J}$ subsuming $\mathcal{U}$, such that $\mu[V_j]\le \gamma^{\vert j \vert}$ for some $\gamma<1$.
\end{theorem}
\begin{proof}
Invoking the machine characterization of ART, there is a deterministic B\"{u}chi automaton of measure zero, $M=(S,f,s_0,F)$, recognizing $\mathcal{C}$. By Theorem 6, none of the accepting states of $M$ is in a leaf connected component. Applying the argument used in the proof of Lemma 1, there is a word $w$ which brings any state of $M$ into a leaf connected component. This means that for any $X\in \mathcal{C}$, $X$ does not contain $w$ as a subword. Let $\mathcal{C}_w$ be the collection of all sequences not having $w$ as a subword. By the previous argument, $\mathcal{C}\subseteq \mathcal{C}_w$. Recall that in the proof of Theorem 9, we constructed an ART, $\mathcal{W}=(W_i)_{i\in I}$, corresponding to $\mathcal{C}_w$:
\[
W_i = \{x\mid \vert x \vert = \vert i \vert, \, x \text{ does not contain }w\text{ as a subword}\}
\]
We have shown that $\mathcal{W}$ is indeed an ART such that $\mathcal{C}_w = F(\mathcal{W})$. Furthermore, we have shown that:
\[
\mu[W_{0^{dk}}]\le \left(1-\frac{1}{2^d}\right)^k
\]
where $d=\vert w \vert$. Observe that $([W_i])_{i\in I}$ is a decreasing sequence, in the sense that $[W_i]\supseteq [W_j]$ for $i\prec j$. Hence, the condition $X\in F(\mathcal{W})$ can be stated as: $\exists^{\infty}i\in I$ with $x_i\in W_i$ and $x_i\prec X$. Thus, choosing any infinite collection of members from $\mathcal{W}$ results in the exact same covering region. We only need to ensure that this choice selects a regular language of indices. The collection $J = (0^d)^+ = 0^d(0^d)^*$ forms a regular language. So we set a new ART $\mathcal{V} = (V_j)_{j\in J}$ with $V_j = W_j$. Given $j\in J$ such that $\vert j \vert = dk$, we have:
\[
\mu[V_j] \le \left(1-\frac{1}{2^d}\right)^{k} = \gamma^{\vert j \vert}, \text{ where } \gamma = \left(1 - \frac{1}{2^d}\right)^{\frac{1}{d}}
\]
This completes the construction.
\end{proof}
\section{Discussions}
We have defined and investigated properties of randomness tests in the context of automata theory. These investigations led to quite unexpected connections between randomness tests and $\omega$-automata such as those of B\"{u}chi and Muller. Furthermore, a purely combinatorial characterization of automatic randomness, in the form of the disjunctive property for sequences, has been found. Let us compare our results with automatic randomness notions arising from other paradigms. For the unpredictability paradigm considered in \cite{Stimm,OConnor}, automatic random sequences correspond to normal sequences in the sense of \cite{Borel}. Similarly, for the incompressibility paradigm considered in \cite{Becher,Shen}, random sequences correspond to normal ones. Clearly, normality is a much stronger property than disjunctivity. Hence, the current automatic randomness tests result in a much larger class of random sequences. It is an open question whether one could modify the definition of automatic randomness tests so that the resulting randomness class coincides with that of normal sequences. Moreover, we hope that many more notions from the theory of algorithmic randomness will find their counterparts in the context of automata theory.
\section*{Acknowledgement}
We would like to thank Frank Stephan for numerous discussions held in person as well as via e-mail.
\medskip
\bibliographystyle{unsrt}
\section{Introduction}
Performance of conventional receivers (correlator or matched filter), designed for additive white Gaussian noise (AWGN) channels, deteriorates in harsh environments such as industrial and mining sites due to the impulsive nature of the noise plus interference
\cite{ding2013first, cheffena2016}.
In wireless communication systems such as WSNs (wireless sensor networks), IoT (Internet of Things), and M2M (machine-to-machine) networks deployed in mining, industrial, home, power-line, and underwater settings, impulse noise occurs due to ignition, lightning, hardware impairments, ice cracking, etc.
Hence, in an impulse noise environment, robust signal pre-processing techniques are required
to ensure proper functionality, quality, and performance throughout the system's operation.
The degrading impact of impulse noise can be minimized or mitigated using non-linearity-based mitigators. Non-linear impulse noise mitigation methods such as clipping and blanking are simple. However, these methods are suboptimal and sensitive to the choice of threshold value
\cite{zhidkov2006, kim2006comparative,guney}. Some methods estimate the impulse noise, using null carriers or training data, and subtract it from the received signal \cite{lin2013}.
However, the occurrence of impulse noise samples is completely random. Hence, training- or estimation-based impulse noise mitigation methods may not be useful in a practical system. In \cite{el2010, niranjayan2013,ekrem2007ultra}, various non-linear receiver structures are analyzed for impulse noise scenarios in UWB systems. However, their performance depends on the accuracy of the receiver model parameters and the feasibility of their estimation. In our earlier work \cite{san2016, sharma2017}, the sparsity of the ultra-wideband (UWB) signal is exploited to mitigate impulse noise and narrowband interference effects in UWB systems.
However, the computational complexity of such receivers becomes large for UWB signal vectors with high sampling frequencies and long frame durations.
In this paper, we propose a novel signal cluster-detection based receiver design to mitigate impulse noise in a UWB system.
The received UWB signal forms clusters due to the signal propagation characteristics \cite{molish2006, kim2006comparative, san2016,ncc2017, yang2016variance,silva2016} and hence is also called a cluster-sparse signal.
The proposed cluster detection algorithm easily differentiates between UWB signal clusters and impulse noise.
The time-hopping binary phase shift keying (TH-BPSK) UWB system is considered for bit error rate (BER) performance analysis in the presence of Bernoulli-Gaussian (BG) impulse noise in
AWGN and multipath IEEE 802.15.4a channels to validate the proposed algorithm.\\
\textit{Notations}: Small and bold small letters represent
scalars and vectors, respectively.
$\left\Vert (\cdot) \right \Vert_{2}$ is the Euclidean norm of a signal $(\cdot)$, and $\mathcal{N}(\eta, \sigma^{2})$ represents the Gaussian probability density function (pdf) with mean $\eta$ and variance $\sigma^{2}$. Symbols $\langle .,.\rangle $ and ``$\ast$" represent the inner product and convolution between two vectors, respectively.
$Pr\{\mathcal{B}\}$ and $|(\cdot)|$ denote the probability of event $\mathcal{B}$
and the absolute magnitude value of $(\cdot)$, respectively.
\vspace{-1em}
\section{System Model}\label{sect2}
In this section, BG impulse noise and basic TH-BPSK UWB system models are described. The BG impulse noise $i(t)$ is represented as \cite{san2016}
\begin{equation}\label{eq1}
i(t)=b(t)k(t),
\end{equation}
where $b(t)$ is the Bernoulli random sequence and $k(t)$ is the Gaussian distributed noise process with zero mean and $\sigma^{2}_{I}$ variance.
The received signal $\textbf{r}$ is expressed as
\begin{equation}\label{system1}
\textbf{r}=\textbf{s}+\textbf{i}+\textbf{n} \in \mathbb{R}^{N},
\end{equation}
where $\textbf{s}$ is the TH-BPSK modulated desired multipath UWB signal, $\textbf{i}$ is the discrete representation of $i(t)$ at the Nyquist rate, and $\textbf{n}$ is Gaussian (background) noise with zero mean and
$\sigma^{2}_{n}$ variance.
The impulse noise, $\textbf{i}$, models impulse interference or harsh environment noise in the system and is sparse in nature \cite{san2016}. Hence, total effective noise power in the system can be written as $\sigma^{2}=\sigma^{2}_{n}+p \sigma^{2}_{I}$, where $p$ is the probability of impulse noise samples that occur in a given time duration and is expressed as $p=(\# \ \text{impulse noise samples})/N$. The signal-to-noise ratio (SNR) and signal-to-impulse noise ratio (SINR) are defined as $\text{SNR}=\frac{\sigma^{2}_{s}}{\sigma^{2}_{n}}$ and $\text{SINR}=\frac{\sigma^{2}_{s}}{\sigma^{2}_{I}}$, respectively, where
$\sigma^{2}_{s}$ is the signal power and is considered unity, and $\sigma^{2}_{I} \gg \sigma^{2}_{n}$. Further, in (\ref{system1}) inter-symbol-interference and inter-pulse-interference are assumed to be zero.
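As a concrete illustration, the BG impulse noise of (\ref{eq1}) and the received vector of (\ref{system1}) can be simulated as below. This is a minimal sketch under the assumptions above; the function names and parameter values are ours, and the desired signal is left as a zero placeholder.

```python
import numpy as np

def bg_impulse_noise(N, p, sigma_I, rng=None):
    """Sample Bernoulli-Gaussian impulse noise i = b * k as in eq. (1):
    b is a Bernoulli(p) occurrence indicator and k is zero-mean
    Gaussian noise with variance sigma_I**2."""
    rng = np.random.default_rng(rng)
    b = rng.random(N) < p                    # Bernoulli occurrence indicator
    k = rng.normal(0.0, sigma_I, size=N)     # Gaussian amplitudes
    return b * k

# Received vector r = s + i + n as in eq. (2), for an assumed frame length.
N, p, sigma_I, sigma_n = 4096, 0.01, 100.0, 0.1
rng = np.random.default_rng(0)
s = np.zeros(N)                              # placeholder for the UWB signal
i = bg_impulse_noise(N, p, sigma_I, rng=1)
n = rng.normal(0.0, sigma_n, size=N)
r = s + i + n
# Total effective noise power: sigma_n**2 + p * sigma_I**2
```

On average a fraction $p$ of the samples carry an impulse, matching the occupancy definition $p=(\#\text{ impulse noise samples})/N$ used above.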
\vspace{-1em}
\section{Proposed receiver design}\label{sect3}
\subsection{Cluster detection algorithm}
This subsection presents a new cluster detection algorithm (CDA) for the proposed receiver design. It is known that the UWB signal cluster is symmetric around the maximum absolute peak value of the transmitted pulse \cite{molish2006, san2016, ncc2017}. This signal cluster symmetry can be used to differentiate between signal cluster and impulse noise samples. Since the symmetry of the UWB signal is observed irrespective of the type of transmitted pulse, the proposed method is independent of the UWB pulse type and can be used with any UWB pulse.
Let $\mathcal{H}_{\textbf{i}}$ and $\mathcal{H}_{\textbf{s}}$ be the two hypotheses that label samples, frame by frame, as impulse noise samples and desired signal samples, respectively; they are expressed as
\begin{equation} \label{hp1}
\begin{split}
\mathcal{H}_{\textbf{i}}: \textbf{r}= \textbf{s}+\textbf{i}+\textbf{n} \in \mathbb{R}^{N}, \\
\mathcal{H}_{\textbf{s}}: \textbf{r}= \textbf{s}+\textbf{n} \in \mathbb{R}^{N}.
\end{split}
\end{equation}
The maximum absolute peak value ($P^1_{max}$) and the corresponding time index ($I^{1}_{max}$) are calculated from the received signal $\textbf{r}$ and expressed as
\begin{equation}\label{pr1}
\left[P^1_{max}, I^{1}_{max}\right]=\max (|\textbf{r}|).
\end{equation}
The sample $P^1_{max}=|\textbf{r}(I^{1}_{max})|$ belongs either to $\mathcal{H}_{\textbf{i}}$ or $\mathcal{H}_{\textbf{s}}$.
The classification of sample $\textbf{r}(I^{1}_{max})$ is done as
\begin{equation}\label{pr2}
|\textbf{r}(I^{1}_{max})-\textbf{r}(I^{1}_{max}+1)| \underset{\mathcal{H}_{\textbf{s}}}{\overset{\mathcal{H}_{\textbf{i}}}{\gtreqless}} \mu,
\end{equation}
where $\mu$ is a constant that depends on the transmitted UWB pulse.
If the sample $\textbf{r}(I^{1}_{max}) \in \mathcal{H}_{\textbf{s}}$, we conclude that no impulse noise is present in the signal $\textbf{r}$ and the peak value $P^1_{max}=|\textbf{r}(I^{1}_{max})| \ \in \ \mathcal{H}_{\textbf{s}}$ represents the center of the first signal cluster detected at this position. In this case, we feed the signal $\textbf{r}$ to the conventional receiver for signal demodulation. However, if $\textbf{r}(I^{1}_{max}) \notin \mathcal{H}_{\textbf{s}}$, i.e., if $\textbf{r}(I^{1}_{max}) \in \mathcal{H}_{\textbf{i}}$, then sample $\textbf{r}(I^{1}_{max})$ represents the impulse noise sample and hence,
$\textbf{r}(I^{1}_{max})$ is assigned a zero value to remove this impulse noise sample. Again, the maximum absolute peak value of the modified signal $\textbf{r}$ is calculated and classified using (\ref{pr1}) and (\ref{pr2}), respectively. This procedure is repeated until the $i^{\text{th}}$ maximum absolute peak valued sample $\textbf{r}(I^{i}_{max})$ of $\textbf{r}$ belongs to $\mathcal{H}_{\textbf{s}}$. At that point a signal cluster is detected, and the modified signal $\textbf{r}$ is applied to the conventional receiver for signal demodulation.
This CDA is very simple and does not require multiplication or division operations. It requires only one subtraction per iteration for differentiating between signal cluster and impulse noise samples and is summarized in \textbf{Algorithm} \textbf{\ref{algo}}.
The parameter $\mu$ in \textbf{Algorithm} \textbf{\ref{algo}} can be decided based on the transmitted UWB pulse $\textbf{w}$. Using the maximum absolute peak value $P^{w}_{max}$ and the corresponding index $I^{w}_{max}$ of pulse $\textbf{w}$ at the transceiver, parameter $\mu$ can be selected such that
$\mu \geq |\textbf{w}(I^{w}_{max})-\textbf{w}(I^{w}_{max}-1)|$
(or $\mu \geq |\textbf{w}(I^{w}_{max})-\textbf{w}(I^{w}_{max}+1)|$ due to pulse symmetry).
The values of $\textbf{w}(I^{w}_{max})$ and $\textbf{w}(I^{w}_{max}\pm 1)$ are known a priori at the receiver in a UWB communication system, and an appropriately low value of $\mu$ can be selected according to the above expression for good system performance.
Further, in \textbf{Algorithm} \textbf{\ref{algo}}, the vector $\textbf{e}_{I^{i}_{max}} \in \mathbb{R}^{N}$ has entry `1' at the $I^{i}_{max}$ position and `0's at the remaining entries.
\vspace{-.5em}
\begin{algorithm}
\caption{Cluster-Detection Algorithm (CDA)}
\label{algo}
\begin{algorithmic}
\State
Initialize: $\mu \geq |\textbf{w}(I^{w}_{max})-\textbf{w}(I^{w}_{max}-1)|$, \ $i=1$\\
Input: received signal $\textbf{r} \in \mathbb{R}^{N}$ \\
Output: estimated signal $\hat{\textbf{s}} \in \mathbb{R}^{N}$ \\
Calculate: $[P^{i}_{max}, I^{i}_{max}]=\max (|\textbf{r}|)$ \\
\textbf{While}: $|\textbf{r}(I^{i}_{max})-\textbf{r}(I^{i}_{max}+1)|\geq \mu$\\
Update $\textbf{r}=\textbf{r}-\textbf{e}_{I^{i}_{max}}\textbf{r}$ \\
Set $i=i+1$\\
Calculate: $[P^{i}_{max}, I^{i}_{max}]=\max (|\textbf{r}|)$ \\
\textbf{End} \\
Update $\hat{\textbf{s}}=\textbf{r}$
\end{algorithmic}
\end{algorithm}
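A minimal Python sketch of \textbf{Algorithm} \textbf{\ref{algo}} is given below, assuming a real-valued sampled pulse $\textbf{w}$ and the $\kappa$-scaled choice of $\mu$ suggested later in the simulation section; the function names are ours.

```python
import numpy as np

def cluster_detection(r, w, kappa=2.5):
    """Cluster-detection algorithm (CDA), a minimal sketch.

    Repeatedly blanks the largest-magnitude sample of r until the
    symmetry test |r[I] - r[I+1]| < mu flags the peak as the centre
    of a signal cluster.  mu is set from the transmitted pulse w as
    mu = kappa * |w[I_w] - w[I_w - 1]|, with kappa in [2, 3].
    """
    r = np.asarray(r, dtype=float).copy()
    I_w = int(np.argmax(np.abs(w)))
    mu = kappa * abs(w[I_w] - w[I_w - 1])
    while True:
        I = int(np.argmax(np.abs(r)))            # current absolute peak
        nxt = r[I + 1] if I + 1 < len(r) else 0.0
        if abs(r[I] - nxt) < mu:                 # symmetric -> signal cluster
            break
        r[I] = 0.0                               # impulse sample: blank it
    return r
```

As in the algorithm, each iteration costs a single subtraction and comparison beyond the peak search; the loop terminates once the residual peak passes the symmetry test.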
\vspace{-1em}
\subsection{False alarm and miss-detection probabilities}
The probability of false alarm $p_{f}$ can be calculated as
\begin{equation}\label{pf1}
p_{f} = Pr\{|\textbf{r}(I^{i}_{max})-\textbf{r}(I^{i}_{max}+1)| \geq \mu |\mathcal{H}_{\textbf{s}}\}.
\end{equation}
Let $\tilde{r}_{s}|\mathcal{H}_{\textbf{s}}=\textbf{r}(I^{i}_{max})-\textbf{r}(I^{i}_{max}+1)=
\textbf{s}(I^{i}_{max})+\textbf{n}(I^{i}_{max})-\textbf{s}(I^{i}_{max}+1)-\textbf{n}(I^{i}_{max}+1)$;
then $\tilde{r}_{s}|\mathcal{H}_{\textbf{s}}$ is distributed as $\tilde{r}_{s}|\mathcal{H}_{\textbf{s}}\sim \mathcal{N}\left(0, 2((1-\rho_{s})\sigma_{s}^{2}+\sigma_{n}^{2})\right)$, where $\rho_{s}$ represents the correlation between two consecutive samples of signal $\textbf{s}$, while the noise samples are independent of each other.
The $p_{f}$ in (\ref{pf1}) can be written as
\begin{equation}\label{pf2}
p_{f} = \frac{1}{\sqrt{2\pi \sigma_{\tilde{r}_{s}}^{2}}} \int_{\mu}^{\infty} \exp\left(-\frac{x^2}{2\sigma_{\tilde{r}_{s}}^{2}}\right)dx+\frac{1}{\sqrt{2\pi \sigma_{\tilde{r}_{s}}^{2}}} \int_{-\infty}^{-\mu} \exp\left(-\frac{x^2}{2\sigma_{\tilde{r}_{s}}^{2}}\right)dx,
\end{equation}
where $\sigma_{\tilde{r}_{s}}^{2}=2((1-\rho_{s})\sigma_{s}^{2}+\sigma_{n}^{2})$.
Therefore, $p_{f}=2 Q{\left(\frac{\mu}{\sqrt{2((1-\rho_{s})\sigma_{s}^{2}+\sigma_{n}^{2})}}\right)}$.
Similarly, the probability of miss-detection $p_{m}$ is expressed as
\begin{equation}\label{pf3}
p_{m} = Pr\{|\textbf{r}(I^{i}_{max})-\textbf{r}(I^{i}_{max}+1)| < \mu |\mathcal{H}_{\textbf{i}}\}.
\end{equation}
Let $\tilde{r}_{i}|\mathcal{H}_{\textbf{i}}=\textbf{r}(I^{i}_{max})-\textbf{r}(I^{i}_{max}+1)$; then $\tilde{r}_{i}|\mathcal{H}_{\textbf{i}}$ is distributed as $\tilde{r}_{i}|\mathcal{H}_{\textbf{i}}\sim \mathcal{N}(0, 2((1-\rho_{s})\sigma_{s}^{2}+\sigma_{n}^{2}+p\sigma_{I}^{2}))$. After some intermediate steps, $p_{m}$ can be written as
$p_{m}=1-2 Q{\left(\frac{\mu}{\sqrt{2((1-\rho_{s})\sigma_{s}^{2}+\sigma_{n}^{2}+p\sigma_{I}^{2})}}\right)}$.
The proposed impulse noise rejection method selects the parameter $\mu$ based on the transmitted UWB pulse. Hence, the proposed method does not need to find an optimal threshold, unlike clipping- or blanking-based receivers.
In general, an optimal threshold based on received signal statistics, such as the signal and noise powers, is difficult to compute.
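The two closed-form probabilities above can be evaluated directly with the Gaussian tail function $Q(\cdot)$. The helper below is a sketch under the variance definitions given in this subsection; the function names are ours.

```python
from math import erfc, sqrt

def Q(x):
    """Gaussian tail probability Q(x) = 0.5 * erfc(x / sqrt(2))."""
    return 0.5 * erfc(x / sqrt(2.0))

def p_false_alarm(mu, rho_s, sigma_s2, sigma_n2):
    """p_f = 2 Q( mu / sqrt(2((1 - rho_s) sigma_s^2 + sigma_n^2)) )."""
    return 2.0 * Q(mu / sqrt(2.0 * ((1.0 - rho_s) * sigma_s2 + sigma_n2)))

def p_miss(mu, rho_s, sigma_s2, sigma_n2, p, sigma_I2):
    """p_m = 1 - 2 Q( mu / sqrt(2((1-rho_s) sigma_s^2 + sigma_n^2
    + p sigma_I^2)) )."""
    return 1.0 - 2.0 * Q(
        mu / sqrt(2.0 * ((1.0 - rho_s) * sigma_s2 + sigma_n2 + p * sigma_I2)))
```

Note the expected behaviour: a strongly correlated pulse (large $\rho_{s}$) and a moderate $\mu$ make $p_{f}$ negligible, while a large impulse variance $\sigma_{I}^{2}$ widens the $\mathcal{H}_{\textbf{i}}$ distribution and so keeps $p_{m}$ small.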
\subsection{Convergence analysis of the proposed CDA}
Let $\textbf{s}, \textbf{r} \in \mathbb{R}^{N}$ be the desired UWB and received signals in the frame for a particular data symbol.
In the proposed CDA, signal for $(i+1)^{\text{th}}$ iteration is written as
$\textbf{r}_{i+1}= \textbf{r}_{i}- \textbf{e}_{I^{i}_{max}}\textbf{r}_{i}, i=1,2,... $ where $\textbf{e}_{I^{i}_{max}} \in \mathbb{R}^{N}$ has entry `1' at the $I^{i}_{max}$ position and `0's at the remaining entries. Further, $\lVert \textbf{r}_{i+1}\rVert_{2}^{2}=\lVert \textbf{r}_{i}- \textbf{e}_{I^{i}_{max}}\textbf{r}_{i}\rVert_{2}^{2}=\lVert \textbf{r}_{i}\rVert_{2}^{2}- \textbf{r}_{i}(I^{i}_{max})^{2}$.
Therefore, $\lVert \textbf{r}_{i+1}\rVert_{2} <\lVert \textbf{r}_{i}\rVert_{2}$ (whenever $\textbf{r}_{i}(I^{i}_{max})^{2}\neq 0$), which can also be written as
$\lVert \textbf{r}_{i+1}-\textbf{s}\rVert_{2} \le \beta\lVert \textbf{r}_{i}-\textbf{s}\rVert_{2}$, where $\beta \in (0,1)$.
The distance between the signal $\textbf{r}_{i}$ and the desired signal $\textbf{s}$ at the $i^{\text{th}}$ iteration is then bounded as $ \lVert \textbf{r}_{i} -\textbf{s} \rVert_{2}\le \beta^{i} \lVert \textbf{r} -\textbf{s} \rVert_{2}$. Hence, as $i \rightarrow \infty$, $\lVert \textbf{r}_{i} -\textbf{s} \rVert_{2} \rightarrow 0$, i.e., $\textbf{r}_{i} \rightarrow \textbf{s}$.
In practical implementation, the proposed algorithm will have some finite distance between the desired signal $\textbf{s}$ and the received signal $\textbf{r}_{i}$ after $i^{\text{th}}$ iteration and hence, can be expressed as $\lVert \textbf{r}_{i} -\textbf{s} \rVert_{2} \rightarrow \epsilon_{0}$, where $\epsilon_{0}\geq 0$. The parameter $\epsilon_{0}$ depends on the SNR, number of iterations, and parameter $\mu$.
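The energy recursion $\lVert \textbf{r}_{i+1}\rVert_{2}^{2}=\lVert \textbf{r}_{i}\rVert_{2}^{2}- \textbf{r}_{i}(I^{i}_{max})^{2}$ can be checked numerically; the sketch below (illustrative only, not the full receiver) blanks the current peak at each step and verifies that the residual norm is strictly decreasing.

```python
import numpy as np

# Numerical check of the energy recursion: blanking the current peak
# strictly decreases the residual norm while the peak is non-zero.
rng = np.random.default_rng(7)
r = rng.normal(size=64)          # a stand-in residual vector
norms = [np.linalg.norm(r)]
for _ in range(10):
    I = int(np.argmax(np.abs(r)))  # peak index of the current residual
    r[I] = 0.0                     # one CDA blanking step
    norms.append(np.linalg.norm(r))
assert all(a > b for a, b in zip(norms, norms[1:]))
```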
\subsection{BER performance}
In this subsection, we analyze the BER performance of the proposed receiver.
Let $\hat{\textbf{s}}$ be the output of the CDA. The signal $\hat{\textbf{s}}$ includes the background Gaussian noise and hence $\left\Vert\hat{\textbf{s}}-\textbf{s}\right\Vert_{2} \geq 0$.
Therefore,
signal $\hat{\textbf{s}}$ can be written as $\hat{\textbf{s}}=\textbf{s}+\textbf{e}$, where $\textbf{e}$ is the residual additive Gaussian noise in the signal $\hat{\textbf{s}}$.
The pdf of $\textbf{e}$ is Gaussian and is given by $\mathcal{N}(0, \sigma^{2}_{e})$. In general, $\sigma^{2}_{e} \geq \sigma^{2}_{n}$ because a few impulse noise samples may appear similar in amplitude to the Gaussian background noise and hence may not be filtered out by \textbf{Algorithm} \textbf{\ref{algo}}, remaining present in the output signal $\hat{\textbf{s}}$. The probability of overlap between the desired signal and impulse noise samples is low due to the sparse nature of both $\textbf{s}$ and $\textbf{i}$. Therefore, the probability of assigning a zero value to a desired signal sample during cluster detection is almost zero, and the proposed receiver design mostly avoids any desired signal power deterioration. However, the signal power deterioration due to blanking of UWB signal samples that overlap with impulse noise is analyzed in the next subsection.
This paper considers correlation-based coherent receiver for data symbol detection. Thus, the correlator output $\zeta$ for a positive transmitted data symbol is written as
\begin{equation}\label{cr1}
\zeta=\langle\textbf{s}+\textbf{e}, \boldsymbol \phi \rangle,
\end{equation}
where $\boldsymbol \phi$ is the template signal. The template signal is generated using UWB pulse $\textbf{w}$ and channel impulse response (CIR) $\textbf{h}$ with known time hopping code as $\boldsymbol \phi=\textbf{h}\ast \textbf{w}$.
Correlator output $\zeta$ is Gaussian distributed, i.e.,
\begin{equation}\label{cr2}
\zeta \sim \mathcal{N}( \lVert\textbf{w}\rVert^{2}_{2} \sum_{l=0}^{L-1}|\alpha_{l}|^{2}, \lVert\textbf{w}\rVert^{2}_{2}\sigma^{2}_{e}\sum_{l=0}^{L-1}|\alpha_{l}|^{2}),
\end{equation}
where $\alpha_{l}$ is the channel coefficient of $l^{\text{th}}$ path and $L$ is the total number of resolved paths in CIR $\textbf{h}$.
The bit error probability $p_{pr}(e|\textbf{h})$ in the presence of impulse noise for the given CIR $\textbf{h}$ using the proposed correlator based receiver design in TH-BPSK system is given as
\begin{equation}\label{ber_pr}
p_{pr}(e|\textbf{h})=Q\left( \sqrt{\frac{ (1-\rho)\lVert\textbf{w}\rVert^{2}_{2}\sum_{l=0}^{L-1}|\alpha_{l}|^{2}}{\sigma^{2}_{e}}} \right),
\end{equation}
where $Q(\cdot)$ is the tail probability of normal Gaussian distribution and all the transmitted symbols are equally likely in (\ref{ber_pr}).
In the absence of impulse noise ($\sigma^{2}_{e}=\sigma^{2}_{n}$), $ p_{pr}(e|\textbf{h})$ in (\ref{ber_pr}) corresponds to the conventional TH-BPSK system. For the AWGN channel, (\ref{ber_pr}) is expressed as
$ p_{pr}(e)= Q\left( \sqrt{\frac{(1-\rho)\lVert\textbf{w}\rVert^{2}_{2}}{\sigma^{2}_{e}}} \right)$.
The factor $\rho$ depends on the blanking of UWB signal samples and $\rho\rightarrow 0$ as the sparsity of UWB signal $\textbf{s}$ and/or multipath channel diversity increases
for a fixed sparsity level of impulse noise.
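The conditional bit error probability in (\ref{ber_pr}) maps directly to code. The helper below (our naming) evaluates it for a given pulse energy, channel gains, and residual noise power; it is a sketch of the formula, not of the full receiver chain.

```python
from math import erfc, sqrt

def ber_th_bpsk(rho, w_energy, alpha2_sum, sigma_e2):
    """Conditional BER of the proposed TH-BPSK receiver:
    p = Q( sqrt((1 - rho) * ||w||^2 * sum_l |alpha_l|^2 / sigma_e^2) ),
    where Q is the Gaussian tail probability."""
    Q = lambda x: 0.5 * erfc(x / sqrt(2.0))
    return Q(sqrt((1.0 - rho) * w_energy * alpha2_sum / sigma_e2))
```

For the AWGN channel, setting the channel-gain sum to one recovers the expression $Q\big(\sqrt{(1-\rho)\lVert\textbf{w}\rVert^{2}_{2}/\sigma^{2}_{e}}\big)$ given above, and the BER increases monotonically with the residual noise power $\sigma^{2}_{e}$.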
\subsection{UWB signal and impulse noise samples overlap}
In this subsection, the effect of overlapping impulse noise sample on UWB signal is analyzed for the proposed CDA based receiver.
The number of samples, $\Omega$, in a frame duration, $T_{f}$, at the sampling frequency, $F_{s}$, can be expressed as $\Omega=\lceil T_{f} \times F_{s}\rceil$, where $\lceil (\cdot)\rceil$ denotes the ceiling of $(\cdot)$. The total numbers of samples of the desired UWB signal and of the impulse noise in a frame duration are written as $\Omega_{\textbf{s}}=\lceil L \Omega_{\textbf{w}} \rceil$ and $\Omega_{\textbf{i}}=\lceil p \Omega \rceil$, respectively, where $\Omega_{\textbf{w}}$ is the number of non-zero samples in the UWB pulse $\textbf{w}$.
Due to the sparse nature of the UWB signal and the impulse noise,
$\Omega \gg \Omega_{\textbf{s}} \gg \Omega_{\textbf{i}}$, and their occupancy rates in a frame are $\Omega_{\textbf{s}}/\Omega$ and $\Omega_{\textbf{i}}/\Omega$, respectively.
The probability that a single impulse noise sample occurs within the duration of a desired UWB signal cluster is written as
$p_{\textbf{s},\textbf{i}}=\tilde{\Omega}_{\textbf{i}}/\Omega_{\textbf{w}}$, where $\tilde{\Omega}_{\textbf{i}}=\Omega_{\textbf{i}}/L$ (with $\tilde{\Omega}_{\textbf{i}}< \Omega_{\textbf{w}}$) is the relative impulse noise sample occupancy in a single UWB signal cluster.
Therefore, the probability that at least one of the $L$ clusters contains impulse noise is expressed as
$p_{\textbf{s},\textbf{i},k}=\sum_{k=1}^{L}\binom{L}{k} p_{\textbf{s},\textbf{i}}^{k}(1-p_{\textbf{s},\textbf{i}})^{L-k}$, where $\binom{L}{k}$ is a binomial coefficient.
Thus, the probability $p_{\textbf{s},\textbf{i},k}$ reduces as $p$ decreases or $L$ increases.
Hence, the desired UWB signal sample's blanking probability $p_{\textbf{s},\textbf{i},k}$ is smaller for a multipath channel than for an AWGN channel.
Further, the desired UWB signal energy loss in a cluster, $E_{\textbf{s},loss,l}, \ l=1,2,\ldots,L$, due to blanking of a signal sample in the receiver design lies in the range $ \lVert\alpha_{min}\textbf{w}_{min} \rVert^{2}_{2}$ to $ \lVert \alpha_{max}\textbf{w}_{max} \rVert^{2}_{2}$, where $\alpha_{min}=\min_{l} \{\alpha_{l}\}_{l=1}^{L}, \ \alpha_{max}=\max_{l} \{\alpha_{l}\}_{l=1}^{L},\
\textbf{w}_{min}= \min _{i} \{\textbf{w}_{i}\}_{i=1}^{\Omega_{\textbf{w}}}$, and
$\textbf{w}_{max}= \max _{i} \{\textbf{w}_{i}\}_{i=1}^{\Omega_{\textbf{w}}}$.
Therefore, effective signal energy loss in a frame is expressed as $E_{\textbf{s},loss}=p_{\textbf{s},\textbf{i},k} E_{\textbf{s},loss,l}$ and is smaller for a multipath channel as compared to an AWGN channel, i.e., $E_{\textbf{s},loss, \text{multipath}}\leq E_{\textbf{s},loss, \text{AWGN}}$ due to the low value of $p_{\textbf{s},\textbf{i},k}$.
In other words, the received signal power is spread over a larger number of low-energy pulses
in a multipath channel; hence, blanking a single sample results in much less signal energy loss than in an AWGN channel, which concentrates more energy in the single received pulse.
Hence, in this work, the multipath channel diversity (which reduces the effective value of $p_{\textbf{s},\textbf{i},k}$) and the desired received UWB signal sparsity (which reduces the overlapping probability of the signal and impulse noise)
add robustness against blanking loss in the proposed UWB receiver design.
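The binomial overlap probability above is simple to evaluate; the helper below (our naming) computes it and makes it easy to verify numerically that, for a fixed impulse occupancy, it decreases as the number of resolved paths $L$ grows.

```python
from math import comb

def p_overlap(L, Omega_i, Omega_w):
    """Probability that at least one of the L clusters is hit by an
    impulse sample: sum_{k=1}^{L} C(L, k) p^k (1-p)^{L-k}, with the
    per-cluster hit probability p = (Omega_i / L) / Omega_w."""
    p_si = (Omega_i / L) / Omega_w
    return sum(comb(L, k) * p_si**k * (1.0 - p_si)**(L - k)
               for k in range(1, L + 1))
```

The sum equals $1-(1-p_{\textbf{s},\textbf{i}})^{L}$, which illustrates the claim in the text: spreading a fixed number of impulse samples over more clusters lowers the chance that any desired cluster is blanked.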
\vspace{-1.2em}
\section{Simulation Results and Discussion}\label{sect4}
This section presents a performance evaluation of the proposed receiver design in comparison with the conventional and the non-linear blanking-based receivers. In \cite{zhidkov2006, juwono2016performance,epple2016advanced}, the non-linear blanking receiver is analyzed for mitigating the impulse noise effect in OFDM systems. However, we use it for a UWB system, with suitable modification and parameter selection, for the first time in the UWB literature in order to mitigate impulse noise. All simulations are carried out for the TH-BPSK UWB system at an $F_{s}=16$ GHz sampling frequency using the second-derivative Gaussian pulse $\textbf{w}$ \cite{san2016,sharmanew,ding2013first} with pulse width parameter $\tau=0.4$ nanoseconds and a single frame per data symbol. Transmitter and receiver synchronization is assumed, with perfect channel state information at the receiver.
The TH code is generated using a chip duration of $1$
nanosecond and a cardinality of $3$ for the
AWGN channel and $6$ for the multipath IEEE 802.15.4a channels.
In simulation results, legend ``BPSK" represents the BER performance of the conventional receiver in the impulse noise free system, ``Theory" represents semi-analytical results using
(\ref{ber_pr}) and, ``BR" and ``CDA" represent BER performance using the blanking receiver \cite{zhidkov2006, juwono2016performance,epple2016advanced} and the proposed CDA receiver in the presence of impulse noise, respectively.
The received signal $\textbf{r}$, blanking output signal $\textbf{y}$ (using the blanking method in \cite{zhidkov2006, juwono2016performance,epple2016advanced, rabie2014}), and the output of the proposed CDA algorithm $\hat{\textbf{s}}$ for five frame time duration in multipath channel model CM1 \cite{molish2006} are shown in Fig. \ref{signal}. The blanking non-linearity is applied to the received signal $\textbf{r}$. Samples of $\textbf{r}$ are assigned zero value if $|\textbf{r}_{i}| \geq T, \ i=1,2,...,N$, where $T$ is a constant threshold value.
To mitigate impulse noise effect, threshold $T$ for the blanking based receiver in UWB system is selected such that false alarm and miss-detection probabilities are minimized. The optimal value of $T$ is derived as \cite{zhidkov2006, juwono2016performance,epple2016advanced}
\begin{equation}\label{th1}
T_{opt}=\min_{T} \left\{ Pr(\mathcal{H}_{\textbf{s}}) p_{f,T}+
Pr(\mathcal{H}_{\textbf{i}}) p_{m,T} \right\}.
\end{equation}
Similar to $p_{f}$ (in eq (\ref{pf1})) and $p_{m}$ (in eq (\ref{pf3})), $p_{f,T}$ and $p_{m,T}$ are calculated and expressed as $2 Q{\left(\frac{T}{\sqrt{\sigma_{s}^{2}+\sigma_{n}^{2}}}\right)}$
and
$\left(1-2Q{\left(\frac{T}{\sqrt{\sigma_{s}^{2}+\sigma_{n}^{2}+p\sigma_{I}^{2}}}\right)}\right)$,
respectively.
The exact solution of (\ref{th1}) is difficult due to multiple $Q(\cdot)$ functions. Hence, the $Q(\cdot)$ function is approximated
using a method in \cite{karagiannidis}. On equating the derivative of (\ref{th1}) to zero, a sub-optimal value of $T$ is obtained. For example, at an $\text{SINR}=-40$ dB with $p=0.01$, sub-optimal values of $T_{opt}$ equal to 4 and 2.5 are obtained for
$\text{SNR}$ of -2 and 5 dB, respectively.
In the simulations, we have used a fixed value of $T$ throughout the entire range of SNR. However, an
SNR-specific $T$ can be selected using a look-up table method at the receiver, which requires frame-based SNR estimation, thereby increasing the computational complexity of the receiver.
In Fig. \ref{signal}, we have considered $\text{SINR}=-40$ dB, $\text{SNR}=20$ dB, blanking threshold $T=4$ (for blanking receiver), and impulse noise probability $p=0.01$ in this simulation setup. The amplitude of impulse noise samples is very high as observed in Fig. \ref{signal} (top subfigure) and the desired signal $\textbf{s}$ is completely buried within the impulse noise. In the blanking based receiver \cite{zhidkov2006, juwono2016performance,epple2016advanced}, high amplitude samples of impulse noise are blanked (assigned zero value), while low amplitude impulse samples are present at the output of blanking unit in signal as shown in Fig. \ref{signal} (middle subfigure).
Hence, the performance of the blanking-based receiver deteriorates due to the presence of these few impulse noise samples and is sensitive to the threshold value $T$.
On the other hand, all the samples of impulse noise are removed with the proposed algorithm without any modification in the desired signal as observed in Fig. \ref{signal} (bottom subfigure).
\begin{figure}[h]
\vspace{-1em}
\centering{
\includegraphics[height=100mm,width=85mm]{figure11_2}}
\caption{\small The received signal $\textbf{r}$ (at the top), blanking output signal $\textbf{y}$ (using \cite{zhidkov2006, juwono2016performance,epple2016advanced}) and signal $\hat{\textbf{s}}$ (at the bottom) at the output of the proposed CDA. }
\label{signal}
\vspace{-1em}
\end{figure}
Further, the mean square error (MSE) between the desired multipath received signal $\textbf{s}$ and the CDA output signal $\hat{\textbf{s}}$ is calculated and defined as
\begin{equation}\label{error11}
\text{\textit{MSE}}=\frac{\lVert\textbf{s} -\hat{\textbf{s}} \rVert_{2}^{2}}{\tilde{N}},
\end{equation}
where $\tilde{N}$ is the number of samples in $\textbf{s}$.
In (\ref{error11}), signal $\textbf{s}$ is fixed and CDA output signal $\hat{\textbf{s}}$ changes after each iteration. Hence, MSE in (\ref{error11}) changes after each iteration. Simulation results are plotted in Fig. \ref{mse2} (left) for $\text{SINR}=-40$ dB and $\text{SNR}=20$ dB in multipath communication channel model CM1 using various values of the parameter $\mu$ in \textbf{Algorithm} \textbf{\ref{algo}}.
The rate of decrease in MSE with the number of iterations is the same for all values of $\mu$. However, high values of $\mu=1, 2, 3$ saturate at a higher error floor compared to lower values ($\mu=0.2, 0.3, 0.4, 0.5$), as observed in Fig. \ref{mse2} (left). Further, $\mu=0.3$ provides the lowest error floor, as shown in Fig. \ref{mse2} (left).
Based on these empirical results, $\mu=\kappa |\textbf{w}(I^{w}_{max})-\textbf{w}(I^{w}_{max}-1)|$, where $\kappa \in [2, 3]$.
Further, at a constant threshold value, the BER performance of the blanking-based receiver (\cite{zhidkov2006, juwono2016performance,epple2016advanced}) varies with SINR, as observed in Fig. \ref{mse2} (right). On the other hand, the proposed receiver provides consistent results irrespective of the chosen SINR values, as observed in
Fig. \ref{mse2} (right).
Further, the BER performance of both the proposed and blanking receivers converges around $\text{SINR}=0$ dB due to the
low amplitude of the impulse noise samples at these SINR values; hence, the impulse noise behaves similarly to Gaussian noise at $\text{SINR}=0$ dB.
\begin{figure}[h]
\vspace{-.5em}
\centering{
\includegraphics[height=75mm,width=125mm, trim = 25 0 0 0]{mse1}}
\captionsetup{justification=centering}
\caption{\footnotesize{ Performance analysis of the proposed receiver}}
\label{mse2}
\vspace{-1em}
\end{figure}
Next, the BER performance of TH-BPSK UWB system in the presence of impulse noise using the proposed receiver and the blanking non-linearity based receiver in \cite{zhidkov2006, juwono2016performance,epple2016advanced} is analyzed in AWGN and multipath IEEE 802.15.4a channel CM1 \cite{molish2006}. Results are shown in Fig. \ref{awgn+cm1} and Fig. \ref{11}.
In AWGN channel, $\text{SINR}=-30$ dB, $p=0.01$, $T=2.5,4$ (for blanking receiver), and frame duration $T_{f}=10$ nanoseconds are chosen, while in CM1 channel, $\text{SINR}=-20, -30$ dB, $p=0.01$, $T=2.5$, and $T_{f}=60$ nanoseconds are considered.
The blanking receiver exhibits a bit error floor for both threshold values and both SINR values in the presence of impulse noise, in both the AWGN and CM1 channels, as shown in Fig. \ref{awgn+cm1} and Fig. \ref{11}.
The BER performance of the proposed receiver in the presence of impulse noise is close to that of the conventional (BPSK) receiver in the impulse-noise-free scenario, and is free from any bit error floor, as observed in Fig. \ref{awgn+cm1} and Fig. \ref{11}.
Further, the BER performance of the CDA based receiver degrades marginally in the AWGN channel due to a non-zero (but small)
$p_{\textbf{s},\textbf{i},k}$, unlike in the multipath channel, for which $p_{\textbf{s},\textbf{i},k}$ is close to zero.
The proposed receiver needs around $\lceil p \Omega \rceil$ iterations per frame; hence it is computationally efficient and, unlike blanking receivers, free from any SNR-dependent threshold selection.
\begin{figure}[h]
\vspace{-.7em}
\centering{
\includegraphics[height=65mm,width=100mm]{combine_theory_2_222}}
\caption{ Average BER vs. SNR performance of TH-BPSK system using the proposed and the blanking based receiver (\cite{zhidkov2006, juwono2016performance,epple2016advanced}) in the presence of impulse noise in AWGN channel.}
\label{awgn+cm1}
\end{figure}
\begin{figure}[h]
\centering{
\includegraphics[height=65mm,width=100mm]{figure_11}}
\caption{Average BER vs. SNR performance of TH-BPSK system using the proposed and the blanking based receiver (\cite{zhidkov2006, juwono2016performance,epple2016advanced}) in the presence of impulse noise in CM1 (blue and red lines correspond to $\text{SINR}=-20, -30$ dB, respectively) channels.}
\label{11}
\end{figure}
\section{Conclusion}\label{sect5}
A signal cluster sparsity based receiver design for impulse noise mitigation in a UWB system is proposed.
The proposed receiver is robust and achieves improved bit error rate performance (close to that of the impulse-noise-free system) compared to the blanking non-linearity based receiver in the presence of Bernoulli-Gaussian impulse noise, for both single-path and multipath channels. The work presented in this paper is helpful for the robust operation and analysis of UWB based devices, such as WSN, IoT, and M2M devices, that operate extensively in harsh impulse noise environments and hence require robust receiver designs in practical applications.
In future work, the proposed cluster sparsity based receiver design can be extended to multiuser communication in the presence of impulse noise.
\bibliographystyle{ieeetran}
\footnotesize
\section{Introduction}
The present paper is a continuation of our paper \cite{DaR2} where we have characterised Komatsu spaces of ultradifferentiable functions and ultradistributions on compact manifolds in terms of the eigenfunction expansions related to positive elliptic operators. In particular, these classes include the spaces of analytic, Gevrey and smooth functions as well as the corresponding dual spaces of distributions and ultradistributions, in both Roumieu and Beurling settings.
In particular, this extended the earlier characterisations of analytic functions on compact manifolds in terms of the eigenfunction expansions by Seeley \cite{see:exp} (see also \cite{see:ex}), and characterisations of Gevrey spaces and ultradistributions on tori \cite{Tag1} and on compact Lie groups and homogeneous spaces \cite{DaR1}.
For example, if $E$ is a positive elliptic pseudo-differential operator on a compact manifold $X$ without boundary and $\lambda_j$ denote its eigenvalues arranged in ascending order, then smooth functions on $X$ can be characterised in terms of their Fourier
coefficients:
\begin{equation}\label{EQ:smooth}
f\in C^{\infty}(X) \; \Longleftrightarrow \;
\forall N\; \exists C_{N}:
|\widehat{f}(j,k)|\leq C_{N} \lambda_{j}^{-N}
\textrm{ for all } j\geq 1, 1\leq k\leq d_{j},
\end{equation}
where $\hat{f}(j,k)= \left(f, e^{k}_j\right)_{L^2}$ with $e_j^k$ being the $k^{th}$ eigenfunction corresponding to the eigenvalue $\lambda_j$ (of multiplicity $d_j$), see \eqref{EQ:FC}.
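The characterisation \eqref{EQ:smooth} can be illustrated numerically in the simplest case $X=\mathbb{T}^1$, $E=-d^2/dx^2$ (so that $\lambda_n=n^2$), where the Fourier coefficients are computed by the FFT; the test functions below are our own illustrative choices, not taken from the text.

```python
import numpy as np

# X = T^1, E = -d^2/dx^2: e_n(x) ~ e^{inx}, lambda_n = n^2, and (EQ:smooth)
# predicts faster-than-polynomial decay of |f_hat(n)| exactly for smooth f.
N = 2048
x = 2 * np.pi * np.arange(N) / N

smooth = np.exp(np.cos(x))        # real-analytic, hence smooth
kink = np.abs(x - np.pi)          # continuous but not differentiable at pi

c_smooth = np.abs(np.fft.fft(smooth)) / N
c_kink = np.abs(np.fft.fft(kink)) / N

n = 101  # a moderately high (odd) frequency
print(c_smooth[n])   # negligible: rapid decay
print(c_kink[n])     # only ~ 1/n^2 decay
```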
If $X$ and $E$ are analytic, the result of Seeley \cite{see:exp} can be reformulated
as
\begin{equation}\label{EQ:analytic}
f \textrm{ is analytic } \Longleftrightarrow
\exists L>0\; \exists C:
|\widehat{f}(j,k)|\leq C e^{-L\lambda_j^{1/\nu}}
\textrm{ for all } j\geq 1, 1\leq k\leq d_{j},
\end{equation}
where $\nu$ is the order of the pseudo-differential operator $E$.
In \cite{DaR2} we extended such characterisations to Gevrey classes and, more generally, to Komatsu classes of ultradifferentiable functions and the corresponding classes of ultradistributions.
In this paper we continue this analysis showing that the appearing spaces of coefficients with respect to expansions in eigenfunctions of positive elliptic operators are perfect spaces in the sense of the theory of sequence spaces (see e.g. K{\"o}the \cite{Kothe:BK-top-vector-spaces-I}).
Consequently, we obtain tensor representations for linear mappings between spaces of ultradifferentiable functions and the corresponding spaces of ultradistributions. Such discrete representations in a given basis are useful in different areas of time-frequency analysis, in partial differential equations, and in numerical investigations. Due to possible multiplicities of eigenvalues the mappings acquire a tensor structure rather than the matrix structure that would arise in the case of simple eigenvalues, and our results are new in both situations.
Our analysis is based on the global Fourier analysis on compact manifolds which was consistently developed in \cite{DR}, with a number of subsequent applications, for example to the spectral properties of operators \cite{Delgado-Ruzhansky:JFA-2014}, or to the wave equations for the Landau Hamiltonian \cite{RT:LMP}. The corresponding version of the Fourier analysis based on expansions with respect to biorthogonal systems of eigenfunctions of non-self-adjoint operators has been developed in \cite{RT:IMRN}, with a subsequent extension in \cite{RT:MMNP}.
The obtained characterisations of Komatsu classes found their applications, for example for the well-posedness problems for weakly hyperbolic partial differential equations \cite{Garetto-Ruzhansky:wave-eq}. The spaces of coefficients of eigenfunction expansions in ${\mathbb R}^n$ with respect to the eigenfunctions of the harmonic oscillator have been analysed in \cite{GPR}, and the corresponding Komatsu classes have been investigated in \cite{VV}. The original Komatsu spaces of ultradifferentiable functions and ultradistributions have appeared in the works \cite{KO1, KO2, KO3} by Komatsu (see also Rudin \cite{Rudin:bk-RandCanalysis-1974}), extending the original works by Roumieu \cite{Roumieu:1962}. The universality of the spaces of Gevrey functions on the torus has been established in \cite{Tag2}.
The regularity properties of spaces of distributions and ultradistributions have been analysed in \cite{Pilipovic-Scarpalezos:PAMS-2001}, and their convolution properties appeared in \cite{Pilipovic-Prangoski:Roumieu-MM-2014}.
The characterisations in terms of the eigenfunction expansions provide for descriptions alternative to those using the classical Fourier analysis, with applications in the theory of partial differential equations, see e.g. \cite{Rodino:bk-Gevrey}.
For some other applications of this type of analysis one can see e.g. \cite{Carmichael-Kamiski-Pilipovic:BK,Delcroix-Hasler-Pilipovic:periodic}.
\smallskip
The paper is organised as follows. In Section \ref{SEC:Fourier} we briefly recall the constructions leading to the global Fourier analysis on compact manifolds. In Section \ref{SEC:seqspaces} we very briefly recall the relevant definitions from the theory of sequence spaces.
In Section \ref{SEC:Komatsu} we present the main results of this paper and their proofs.
In Section \ref{SEC:Beurling} we first recall the definitions for Beurling version of the spaces and then give the statement of the corresponding adjointness Theorem \ref{THM:adj} in this case.
\smallskip
In this paper we adopt the notation $\mathbb{N}_0=\mathbb{N}\cup\{0\}$.
\section{Fourier analysis on compact manifolds}
\label{SEC:Fourier}
Let $X$ be a closed $C^{\infty}$-manifold of dimension $n$ endowed with a fixed measure $dx.$ We first recall an abstract statement from \cite[Theorem 2.1]{DR} giving rise to the Fourier analysis on $L^2(X)$.
\begin{thm}\label{THM:DR-inv}
Let ${\mathcal H}$ be a complex Hilbert space and let $\mathcal{H^{\infty}}\subset \mathcal H$ be a dense linear subspace of $\mathcal H.$ Let $\{d_j\}_{j\in\mathbb{N}_0}\subset \mathbb{N}$ and let $\{e^{k}_{j}\}_{{j\in\mathbb{N}_{0}, 1\leq k\leq d_j}}$ be an orthonormal basis of $\mathcal H$ such that $e^{k}_j\in\mathcal{H}^{\infty}$ for all $j$ and $k$. Let $H_j:={\rm span}\{e^{k}_{j}\}_{k=1}^{d_j},$ and let $P_j:{\mathcal H}\rightarrow H_j$ be the orthogonal projection. For $f\in{\mathcal H},$ we denote $\hat{f}(j,k):=(f,e^{k}_j)_{{\mathcal H}}$ and let $\hat f(j)\in \mathbb{C}^{d_j}$ denote the column of $\hat f(j,k),$ $1\leq k\leq d_j.$ Let $T: {\mathcal H}^{\infty}\rightarrow {\mathcal H}$ be a linear operator. Then the following conditions (i)-(iii) are equivalent.
\begin{enumerate}
\item For each $j\in\mathbb N_{0},$ we have $T(H_j)\subset H_j.$
\item For each $l\in\mathbb{N}_0$ there exists a matrix $\sigma(l)\in\mathbb{C}^{d_l\times d_l}$ such that for all $e^{k}_j$,
$$\hat{Te^{k}_{j}}(l,m)=\sigma(l)_{mk}\delta_{jl}.$$
\item If in addition all $e_{j}^{k}$ are in the domain of $T^{*}$, then for each $l\in\mathbb{N}_0$ there exists a matrix $\sigma(l)\in\mathbb{C}^{d_l\times d_l}$ such that for all $f\in{{\mathcal H}}^{\infty} $ we have
$$\hat{Tf}(l)=\sigma(l)\hat{f}(l).$$
The matrices in (ii) and (iii) coincide.
The equivalent properties (i)--(iii) follow from the condition:
\item For each $j\in\mathbb{N}_0,$ we have $TP_j=P_jT$ on ${\mathcal H}^{\infty}.$
If, in addition, $T$ extends to a bounded operator $T\in{\mathcal L}({\mathcal H}),$ then (iv) is equivalent to (i)--(iii).
\end{enumerate}
\end{thm}
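In finite dimensions the equivalence of (i) and (iii) is just the statement that an operator preserving each $H_j$ is block-diagonal in the basis $\{e^{k}_j\}$; a small numerical sketch, with arbitrary illustrative blocks:

```python
import numpy as np

# Toy model: H = C^5 with H_0 = span{e_1, e_2} (d_0 = 2) and
# H_1 = span{e_3, e_4, e_5} (d_1 = 3).  An operator T preserving each H_j
# is block-diagonal, and T f_hat(l) = sigma(l) f_hat(l) blockwise.
rng = np.random.default_rng(0)
sigma0 = rng.standard_normal((2, 2))
sigma1 = rng.standard_normal((3, 3))

T = np.block([[sigma0, np.zeros((2, 3))],
              [np.zeros((3, 2)), sigma1]])

f = rng.standard_normal(5)
Tf = T @ f
assert np.allclose(Tf[:2], sigma0 @ f[:2])   # (Tf)^(0) = sigma(0) f^(0)
assert np.allclose(Tf[2:], sigma1 @ f[2:])   # (Tf)^(1) = sigma(1) f^(1)
```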
Under the assumptions of Theorem \ref{THM:DR-inv} we have the direct sum decomposition
$$
{\mathcal H} =\oplus_{j=0}^{\infty}H_j, \quad H_j=\textrm{ span }\{e^{k}_j\}_{k=1}^{d_j},
$$
and we have $d_j=\dim H_j.$ Here we will consider ${\mathcal H}=L^2(X)$ for a compact manifold $X$ with $H_j$ being the eigenspaces of an elliptic positive pseudo-differential operator $E.$
The eigenvalues of $E$ (counted without multiplicities) form a sequence $\{\lambda_j\}_{j\in\mathbb{N}_0}$, which we order so that
$$0=:\lambda_0<\lambda_1<\lambda_2<\cdots.$$
For each eigenvalue $\lambda_j,$ there is the corresponding finite dimensional eigenspace $H_j$ of functions on $X,$ which are smooth due to the ellipticity of $E.$ We set
$$d_0:=\dim H_0, \quad H_0:=\ker E, ~~\lambda_0:= 0.$$
Since the operator $E$ is elliptic, it is Fredholm, hence also $d_0<\infty.$
We denote by $\Psi^{\nu}_{+e}(X)$ the space of positive elliptic pseudo-differential operators of order $\nu>0$ on $X$.
Here we recall a useful relation between the sequences $\lambda_j$ and $d_j$ of eigenvalues of $E\in \Psi^{\nu}_{+e}(X)$ and their multiplicities from \cite{DR}.
\begin{prop}\label{PROP:dlambdas}
Let $X$ be a closed manifold of dimension $n$, and let $E\in \Psi^{\nu}_{+e}(X)$, with $\nu>0.$ Then there exists a constant $C>0$ such that we have
$$d_j\leq C(1+\lambda_j)^{\frac{n}{\nu}}$$ for all $j\geq 1.$ Moreover, we also have
$$\sum^{\infty}_{j=1}d_j(1+\lambda_j)^{-q}<\infty \;\textrm{ if and only if } \quad q>\frac{n}{\nu}.$$
\end{prop}
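For $X=\mathbb{T}^2$ and $E=-\Delta$ (so $n=2$, $\nu=2$, and $n/\nu=1$), the eigenvalues are the integers representable as sums of two squares and $d_j$ counts lattice points; the following rough sketch checks the bound of Proposition \ref{PROP:dlambdas} over a finite range (the cutoff $R$ is an arbitrary choice):

```python
from collections import Counter

# T^2, E = -Laplacian: eigenvalues lambda = a^2 + b^2, (a, b) in Z^2, with
# multiplicity d = #{(a, b) : a^2 + b^2 = lambda}; Proposition
# (PROP:dlambdas) with n = 2, nu = 2 predicts d <= C (1 + lambda).
R = 60  # arbitrary cutoff
mult = Counter(a * a + b * b for a in range(-R, R + 1) for b in range(-R, R + 1))
lams = sorted(lam for lam in mult if 0 < lam <= R * R)  # complete counts

C = max(mult[lam] / (1 + lam) for lam in lams)   # attained at lambda = 1
s2 = sum(mult[lam] * (1 + lam) ** -2 for lam in lams)  # q = 2 > n/nu: finite
print(C, s2)
```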
For $f\in L^{2}(X),$ by definition we have the Fourier series decomposition
$$f=\sum_{j=0}^{\infty}\sum_{k=1}^{d_j}\hat{f}(j,k)e^{k}_{j}.$$
The Fourier coefficients of $f\in L^2(X)$ with respect to the orthonormal basis $\{e^{k}_j\}$ are denoted by
\begin{equation}\label{EQ:FC}
\mathcal{F}f(j,k)=\hat{f}(j,k):= \left(f, e^{k}_j\right)_{L^2}.
\end{equation}
We denote the space of Fourier coefficients by
\begin{equation}\label{EQ:sigma}
\Sigma=\{v=(v_l)_{l\in\mathbb{N}_{0}},~ v_{l}\in\mathbb{C}^{d_l}\}.
\end{equation}
Since $\{e^{k}_j\}_{j\geq 0,\, 1\leq k\leq d_j}$ is a complete orthonormal system in $L^{2}(X)$ we have the Plancherel formula
$$||f||^{2}_{L^2(X)}=\sum_{j=0}^{\infty}\sum_{k=1}^{d_j}|\hat{f}(j,k)|^{2}=||\hat f||^{2}_{l^{2}(\mathbb N_{0},\Sigma)}=:\sum_{j=0}^{\infty}||\hat{f}(j)||^{2}_{\mathtt{HS}},$$
where we interpret $\hat f$ as an element of the space
$$l^{2}(\mathbb N_{0},\Sigma)=\left\{h: \mathbb N_{0}\rightarrow \prod_{j}\mathbb{C}^{d_j}: h(j)\in \mathbb{C}^{d_j} , \sum_{j=0}^{\infty}\sum_{k=1}^{d_j}|{h}(j,k)|^{2}<\infty \right\}.$$
We endow $l^{2}(\mathbb N_{0},\Sigma)$ with the norm
$$||h||_{l^{2}(\mathbb N_{0},\Sigma)}=
\left(\sum_{j=0}^{\infty}\sum_{k=1}^{d_j}|{h}(j,k)|^{2}\right)^{1/2}.$$
We can think of $\mathcal{F}=\mathcal{F}_{X}$ as the Fourier transform, which is an isometry from $L^{2}(X)$ onto $l^{2}(\mathbb N_{0},\Sigma).$ The inverse of this Fourier transform can then be expressed by
$$(\mathcal{F}^{-1}h)(x)=\sum_{j=0}^{\infty}\sum_{k=1}^{d_j}h(j,k)e^{k}_{j}(x).$$
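A finite-dimensional analogue of $\mathcal F$, $\mathcal F^{-1}$ and the Plancherel identity, with the orthonormal eigenvectors of a symmetric positive matrix standing in for $\{e^{k}_j\}$ (a sketch, not the manifold setting):

```python
import numpy as np

# Finite-dimensional analogue: orthonormal eigenvectors of a symmetric
# positive matrix E play the role of {e_j^k}; F f = (f, e_j) and
# F^{-1} h = sum_j h(j) e_j, with the Plancherel identity.
rng = np.random.default_rng(1)
A = rng.standard_normal((6, 6))
E = A @ A.T + 6 * np.eye(6)      # symmetric positive definite
lam, U = np.linalg.eigh(E)       # columns of U: orthonormal eigenbasis

f = rng.standard_normal(6)
f_hat = U.T @ f                  # Fourier coefficients (f, e_j)
f_back = U @ f_hat               # inverse transform

assert np.allclose(f_back, f)                        # inversion
assert np.isclose(np.sum(f**2), np.sum(f_hat**2))    # Plancherel
```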
If $f\in L^{2}(X)$ we can write
\begin{equation}
\hat{f}(j)= \begin{pmatrix}
\hat{f}(j,1)\\
\vdots\\
\hat{f}(j,d_j)
\end{pmatrix} \in \mathbb C^{d_j},
\end{equation}
thus always thinking of the Fourier transform as a column vector. In particular, we think of
$$\hat{e^{k}_{j}}(l)=\left(\hat{e^{k}_{j}}(l,m)\right)_{m=1}^{d_l}$$ as a column, and we notice that
$$\hat{e^{k}_{j}}(l,m)=\delta_{jl}\delta_{km}.$$
\section{Sequence spaces and sequential linear mappings}
\label{SEC:seqspaces}
We briefly recall that a sequence space $E$ is a linear subspace of
$$\mathbb{C}^{\mathbb Z}=\{a=(a_j)|a_j\in\mathbb{C}, j\in \mathbb{Z}\}.$$
The dual $\hat{E}$ ($\alpha$-dual in the terminology of G. K{\"o}the \cite{Kothe:BK-top-vector-spaces-I}) is the sequence space defined by
$$\hat{E}=\{a\in \mathbb{C}^{\mathbb Z}: \sum_{j\in \mathbb{Z}} |u_j|\,|a_j|<\infty
\textrm{ for all }u\in E\}.$$
A sequence space $E$ is called {\em perfect} if $\hat{\hat{E}}=E$.
A sequence space is called {\em normal} if $u=(u_j)\in E$ implies $|u|=(|u_j|)\in E.$
Any dual space $\hat{E}$ is normal, so that, in particular, any perfect space is normal.
A pairing ${\langle\cdot,\cdot\rangle}_{E}$ on $E$ is a bilinear function on $E\times\hat{E}$ defined by
$$\langle u,v\rangle_{E}=\sum_{j\in \mathbb{Z}}{u_jv_j}\in\mathbb{C},$$
which converges absolutely by the definition of $\hat{E}.$
\begin{defn} $\phi: E\rightarrow \mathbb{C}$ is called a {\em sequential linear functional} if there exists some $a\in\hat{E}$ such that $\phi(u)=\langle u,a\rangle_E$ for all $u\in E.$ We abuse the notation by also writing $a: E\rightarrow \mathbb{C}$ for this mapping.\end{defn}
\begin{defn} A mapping $f:E\rightarrow F$ between two sequence spaces is called a {\em sequential linear mapping} if
\begin{enumerate}
\item $f$ is algebraically linear,
\item for any $v\in \hat F,$ the composed mapping $v\circ f\in\hat{E}.$
\end{enumerate}
\end{defn}
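As a toy illustration of the pairing $\langle\cdot,\cdot\rangle_{E}$ and of membership in an $\alpha$-dual, one can check numerically that a sequence with $e^{-\sqrt j}$-type decay pairs absolutely with any polynomially bounded sequence; the decay and growth rates below are illustrative choices of ours:

```python
import math

# Pairing <u, v> = sum_j u_j v_j: a sequence with e^{-sqrt(j)}-type decay
# pairs absolutely with a polynomially bounded sequence (the pattern of
# the Komatsu classes below and their alpha-duals).
u = lambda j: math.exp(-math.sqrt(j))
v = lambda j: (1.0 + j) ** 3

def abs_pairing(terms):
    return sum(abs(u(j) * v(j)) for j in range(terms))

p2000, p3000 = abs_pairing(2000), abs_pairing(3000)
print(p3000 - p2000)   # partial sums stabilise: absolute convergence
```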
\section{Tensor representations for Komatsu classes and their $\alpha$-duals}
\label{SEC:Komatsu}
Let $M_{k}$ be a sequence of positive numbers such that
\medskip
\noindent
(M.0) $M_0=1$,\\
(M.1) (Stability) $M_{k+1}\leq AH^{k}M_k, $ $k=0,1,2,\ldots,$\\
(M.2) $M_{2k}\leq AH^{k}\min_{0\leq q\leq k} M_qM_{k-q},$ $k=0,1,2,...,$ for some $A, H>0$.
\medskip
In a sequence of papers \cite{KO1,KO2,KO3} Komatsu investigated classes of ultradifferentiable functions on ${\mathbb{R}}^{n}$ associated to the sequence $\{M_k\}$, namely, the spaces of functions $\psi\in C^{\infty}(\mathbb{R}^{n})$ such that for every compact
$K\subset\mathbb{R}^{n}$ there exist $h>0$ and a constant $C>0$ such that
\begin{equation}\label{EQ:Komatsu-Rn}
\sup_{x\in K}|\partial^{\alpha}\psi(x)|\leq Ch^{|\alpha|}M_{|\alpha|}
\end{equation}
holds for all multi-indices $\alpha\geq 0$.
Similarly to the case of usual distributions, given a space of ultradifferentiable functions satisfying \eqref{EQ:Komatsu-Rn} we can define a space of ultradistributions as its dual.
We now recall the analogous definition of the Komatsu ultradifferentiable functions $\Gamma_{\{\mathcal{M}_k\}}(X)$ and its $\alpha$-dual $\left[\Gamma_{\{\mathcal{M}_k\}}(X)\right]^{\wedge}$.
Here, as before, $X$ is a compact manifold without boundary and
$E\in\Psi_{+e}^{\nu}(X)$ with $\nu>0$.
\begin{defn} The class $\Gamma_{\{M_k\}}(X)$ is the space of $C^{\infty}$ functions $\phi$ on $X$ such that there exist $h>0$ and $C>0$ such that we have
$$||E^{k}\phi||_{L^2(X)}\leq Ch^{\nu k}M_{\nu k},\quad k=0,1,2,\ldots,$$
where $\nu\in\mathbb{N}$ is the order of the positive elliptic pseudo-differential operator $E.$\end{defn}
In \cite{DaR2} we have characterised the class $\Gamma_{\{M_k\}}(X)$ in terms of the eigenvalues of the operator $E$.
We assume that
\medskip
\noindent
(M.3) \quad For some $l,C_{l}>0$ we have $k!\leq C_{l} l^{k}M_{k}$, for all $k\in\mathbb{N}_{0}.$
\medskip
In the sequel, for $w_l\in{\mathbb{C}}^{d_l}$ we write
$$||w_l||_{\mathtt{HS}}:=\left(\sum_{j=1}^{d_l}\left|(w_l)_j\right|^{2}\right)^{1/2}.$$
\begin{thm}[{\cite{DaR2}}]\label{THM:gamma}
Assume conditions (M.0), (M.1), (M.2), (M.3). Then $\phi\in\Gamma_{\{M_k\}}(X)$ if and only if there exist constants $C>0$ and $L>0$ such that
$$||\hat{\phi}(l)||_{\mathtt{HS}}\leq C\exp\{-M(L\lambda_{l}^{1/\nu})\}, \quad \textrm{ for all } l\geq 1,$$
where $$M(r):=\sup_{k\in\mathbb{N}_0}\log\frac{r^{\nu k}}{M_{\nu k}}.$$
\end{thm}
\begin{ex}
As an example, for the (Gevrey-Roumieu) class of ultradifferentiable functions
$$\gamma^{s}(X)=\Gamma_{\{(k!)^{s}\}}(X),\quad 1< s<\infty,$$
we have $M(r)\simeq r^{1/s}$. This is also true for $s=1$, characterising the class of analytic functions if the manifold is analytic. The class $\gamma^{s}(X)$ coincides with the usual Gevrey class of functions on a manifold $X$ defined in terms of their localisations.
\end{ex}
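For the Gevrey weight $M_k=(k!)^{s}$ the associated function can be evaluated directly from its definition, and one observes numerically that $M(r)/r^{1/s}$ stabilises (near $s$, by Stirling's formula), consistent with $M(r)\simeq r^{1/s}$; a sketch taking $\nu=1$:

```python
import math

# M(r) = sup_k log( r^k / M_k ) for the Gevrey weight M_k = (k!)^s
# (taking nu = 1): numerically M(r)/r^{1/s} stabilises near s,
# consistent with M(r) ~ r^{1/s} up to constants.
def M_assoc(r, s, kmax=20000):
    return max(k * math.log(r) - s * math.lgamma(k + 1) for k in range(kmax))

s = 2.0
for r in (1e4, 1e6, 1e8):
    print(M_assoc(r, s) / r ** (1.0 / s))
```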
Based on Theorem \ref{THM:gamma} we can then write
\begin{multline*}
\Gamma_{\{\mathcal{M}_k\}}(X)=\left\{[\hat{\phi}(l)]_{l\in{\mathbb{N}}_{0}}: \phi\in C^{\infty}(X),\; \exists C>0\; \exists L>0 \textrm{ such that } \right. \\ \left.
||\hat{\phi}(l)||_{\mathtt{HS}}\leq C\exp\{-M(L\lambda_l^{1/\nu})\} , \forall l \geq 1 \right\}.
\end{multline*}
For $\phi\in \Gamma_{\{\mathcal{M}_k\}}(X)$ we will write $\phi\approx \left[\hat{\phi}(l)\right]_{l\in{\mathbb{N}}_{0}}$ so that $\Gamma_{\{\mathcal{M}_k\}}(X)$ can be thought of as a sequence space, but it will be convenient to view it as a subspace of $\Sigma$ defined in \eqref{EQ:sigma}, taking into account the dimensions of the eigenspaces of the operator $E$.
Next we recall the definition of the $\alpha$-dual of the space $\Gamma_{\{\mathcal{M}_k\}}(X)$ (following \cite{DaR2}).
The $\alpha$-dual of the space $\Gamma_{\{\mathcal{M}_k\}}(X)$ of ultradifferentiable functions, denoted by $[\Gamma_{\{\mathcal{M}_k\}}(X)]^{\wedge},$ is given by
$$\left\{v=(v_l)_{l\in\mathbb{N}_0}\in \Sigma, v_l\in{\mathbb C}^{d_l}: \sum_{l=0}^{\infty}\sum_{j=1}^{d_l}|(v_l)_j||\hat{\phi}(l,j)|<\infty, \textrm{ for all } \phi\in \Gamma_{\{\mathcal{M}_k\}}(X) \right\}.$$
We also recall the following characterisations of the $\alpha$-duals established in \cite{DaR2}.
\begin{thm}\label{THM:gammahat}
Assume conditions (M.0), (M.1), (M.2), (M.3). The following statements are equivalent
\begin{enumerate}
\item $v\in [\Gamma_{\{\mathcal{M}_k\}}(X)]^{\wedge}$;
\item for every $L>0$ we have
$$\sum_{l=0}^{\infty}\exp\left(-M(L\lambda_l^{1/\nu})\right)||v_l||_{\mathtt{HS}}<\infty;$$
\item for every $L>0$ there exists $K_L>0$ such that
$$||v_l||_{\mathtt{HS}}\leq K_{L}\exp\left(M(L\lambda_l^{1/\nu})\right)$$ holds for all $l\in\mathbb{N}_0.$
\end{enumerate}
\end{thm}
We will now show that the space $\Gamma_{\{\mathcal{M}_k\}}(X)$ is perfect. In the
proof as well as in further proofs the following estimate will be useful:
\begin{equation}\label{EQ:weyllaw}
||e_l^{j}||_{L^{\infty}(X)}\leq C\lambda^{\frac{n-1}{2\nu}}_{l} \;\textrm{ for all }\;l\geq 1.
\end{equation}
This estimate follows, for example, from the local Weyl law \cite[Theorem 5.1]{Hor}, see also \cite[Lemma 8.5]{DR}.
\begin{thm}\label{P:perfect}
Let $X$ be a compact manifold and assume conditions (M.0), (M.1), (M.2), (M.3). Then
$\Gamma_{\{\mathcal{M}_k\}}(X)$ is a perfect space, that is, we have
$$\Gamma_{\{\mathcal{M}_k\}}(X)=\left[\hat{\Gamma_{\{\mathcal{M}_k\}}(X)}\right]^{\wedge},$$
where
$$\left[\hat{\Gamma_{\{\mathcal{M}_k\}}(X)}\right]^{\wedge}=\left\{w=(w_l)_{l\in{\mathbb{N}}_{0}}\in \Sigma: \sum^{\infty}_{l=0}\sum_{j=1}^{d_l}\left|(w_l)_j\right|\left|(v_l)_j\right|<\infty, \forall v \in \left[\Gamma_{\{\mathcal{M}_k\}}(X)\right]^{\wedge}\right\}.$$
\end{thm}
To prove this we first establish the following lemma:
\begin{lem} \label{L:perfect1}
We have $w\in \left[\hat{\Gamma_{\{\mathcal{M}_k\}}(X)}\right]^{\wedge}$ if and only if there exists $L>0$ such that
$$\sum_{l=0}^{\infty}\exp\left(M(L\lambda_l^{1/\nu})\right)||w_l||_{\mathtt{HS}}<\infty.$$
\end{lem}
\begin{proof}[Proof of Lemma \mbox{\ref{L:perfect1}}]
$\Longrightarrow$:
For $L>0$ we consider the echelon space
$$
D_{L}=\left\{v=(v_{l})\in\Sigma: \exists C>0: |(v_l)_j|\leq C\exp(M(L\lambda_{l}^{1/\nu})),\forall 1\leq j\leq d_l\right\},
$$
where $\Sigma=\{v=(v_l)_{l\in\mathbb{N}_{0}},~ v_{l}\in\mathbb{C}^{d_l}\}$
is as in \eqref{EQ:sigma}.
By the diagonal transform we have
$D_{L}\cong l^{\infty},$ and since $l^{\infty}$ is a perfect space we have $\widehat{D_L}\cong l^{1},$ given by
$$\widehat{D_L}=\left\{w=(w_l)\in\Sigma:\sum_{l=0}^{\infty}\sum_{j=1}^{d_l}\exp(M(L\lambda_{l}^{1/\nu}))|(w_l)_j|<\infty\right\}.$$
By Theorem \ref{THM:gammahat} we know that $\widehat{\Gamma_{\{M_k\}}(X)}=\cap_{L>0}D_L,$ and hence $\left[\widehat{\Gamma_{\{M_k\}}(X)}\right]^{\wedge}=\cup_{L>0}\widehat{D_L}.$
This means that $w\in \left[\widehat{\Gamma_{\{M_k\}}(X)}\right]^{\wedge}$ if and only if there exists $L_2>0,$ such that we have
$$\sum_{l=0}^{\infty}\sum_{j=1}^{d_l}\exp(M(L_2\lambda_{l}^{1/\nu}))|(w_l)_j|<\infty.$$
Let $1\leq p< q\leq\infty$ and let $a\in\mathbb{C}^{d\times d}.$ Then we have the estimates
\begin{equation}\label{EQ:in}
\|a\|_{\ell^p(\mathbb{C})}\leq d^{2\left(\frac1p-\frac1q\right)}\|a\|_{\ell^q(\mathbb{C})}
\quad\textrm{ and } \quad
\|a\|_{\ell^q(\mathbb{C})}\leq d^{\frac{2}{q}}\|a\|_{\ell^p(\mathbb{C})},
\end{equation}
see e.g. \cite[Lemma 3.2]{DaR1} for a simple proof.
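A quick numerical sanity check of the inequalities \eqref{EQ:in}, for the illustrative choice $d=7$, $p=1$, $q=2$:

```python
import numpy as np

# Check of (EQ:in) for a d x d matrix a, viewed as a vector with d^2
# entries (whence the exponents 2(1/p - 1/q) and 2/q); here p = 1, q = 2.
rng = np.random.default_rng(2)
d = 7
a = rng.standard_normal((d, d))

def lp_norm(a, p):
    return float(np.sum(np.abs(a) ** p) ** (1.0 / p))

assert lp_norm(a, 1) <= d ** (2 * (1 - 1 / 2)) * lp_norm(a, 2) + 1e-12
assert lp_norm(a, 2) <= d ** (2 / 2) * lp_norm(a, 1) + 1e-12
```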
In particular, we have $d^{-1}||a||_{l^{1}}\leq ||a||_{\mathtt{HS}}\leq d ||a||_{l^{1}}$ for $a\in\mathbb{C}^{d\times d}.$ Here we also note the estimate: for every $q,$ $L>0$ and $\delta>0$ there exists $C>0$ such that
\begin{equation}\label{EQ:estlambdas}
\lambda_{l}^{q}e^{-\delta M\left(L\lambda_{l}^{1/\nu}\right)}\leq C,
\end{equation}
see e.g. \cite[(2.15)]{DaR2}.
These estimates and Proposition \ref{PROP:dlambdas} imply
\begin{eqnarray}
& & \sum_{l=0}^{\infty}\exp(M(L\lambda_{l}^{1/\nu}))||w_l||_{\mathtt{HS}} \\
&\leq& \sum_{l=0}^{\infty}d_{l}\exp(M(L\lambda_{l}^{1/\nu}))||w_l||_{l^{1}(\mathbb{C}^{d_l})}\nonumber\\
&\leq& C\sum_{l=0}^{\infty}(1+\lambda_{l})^{\frac{n}{\nu}}\exp(-M(L\lambda_{l}^{1/\nu}))\exp(2M(L\lambda_{l}^{1/\nu}))||w_l||_{l^{1}(\mathbb{C}^{d_l})}\nonumber\\
&\leq& C^{\prime}\sum_{l=0}^{\infty}\sum_{j=1}^{d_l}\exp(2M(L\lambda_{l}^{1/\nu}))|(w_l)_j|\nonumber\\
&\leq& C^{\prime\prime} \sum_{l=0}^{\infty}\sum_{j=1}^{d_l}\exp(M(L_2\lambda_{l}^{1/\nu}))|(w_l)_j|<\infty,\nonumber
\end{eqnarray}
where
$L_2=L\sqrt{A} H,$ with $A,H>0$ the constants in (M.2). The last inequality holds provided
$\exp(2M(L\lambda_{l}^{1/\nu}))\leq\exp(M(L_2\lambda_{l}^{1/\nu}))$, which follows from property (M.2).
$\Longleftarrow$:
The converse follows similarly, using the estimates \eqref{EQ:in}.
\end{proof}
We can now prove Theorem \ref{P:perfect}.
\begin{proof}[Proof of Theorem \ref{P:perfect}]
We always have $$\Gamma_{\{M_k\}}(X)\subseteq \left[\hat{\Gamma_{\{M_k\}}(X)}\right]^{\wedge}$$ from the definition.
So we need to prove that
$\left[\hat{\Gamma_{\{M_k\}}(X)}\right]^{\wedge}\subseteq \Gamma_{\{M_k\}}(X).$
Let $w=(w_l)_{l\in{\mathbb{N}}_{0}}\in \left[\hat{\Gamma_{\{M_k\}}(X)}\right]^{\wedge}$, $w_{l}=\left(w_{l}^{j}\right)_{j=1}^{d_l},$ and define
$$\phi(x):=\sum_{l=0}^{\infty}w_l \cdot e_{l}(x)=\sum_{l=0}^{\infty}\sum_{j=1}^{d_l} w^{j}_{l} e^{j}_{l}(x),$$
where the series converges because of Lemma \ref{L:perfect1} and the estimates \eqref{EQ:weyllaw} and \eqref{EQ:estlambdas}.
Then we have
\begin{eqnarray} \hat{\phi}(m,k)&=&\left(\phi, e^{k}_{m}\right)_{L^2}\nonumber\\
&=& \int_{X} \phi(x)\overline{e^{k}_{m}(x)}dx\nonumber\\
&=&\sum_{l=0}^\infty\sum_{j=1}^{d_l}\int_{X} w^{j}_{l}e^{j}_{l}(x)\overline{e^{k}_{m}(x)}dx\nonumber\\
&=&\sum_{l=0}^\infty\sum_{j=1}^{d_l} w_{l}^{j}\delta_{lm}\delta_{jk}=w^{k}_{m}, \quad 1\leq k\leq d_m.\end{eqnarray}
This gives $$||\hat{\phi}(m)||_{\mathtt{HS}}=||w_m||_{\mathtt{HS}}.$$
Now since $w\in \left[\hat{\Gamma_{\{M_k\}}(X)}\right]^{\wedge},$ by Lemma \ref{L:perfect1} there exists $L>0$ such that
$$\sum_{l=0}^{\infty}\exp\left(M(L\lambda^{1/\nu}_{l})\right)||w_l||_{\mathtt{HS}}<\infty.$$
Since $\hat{\phi}(l)=w_{l},$ it follows that there exists $C>0$ such that
$$||\hat{\phi}(l)||_{\mathtt{HS}}\leq C\exp\left(-M(L\lambda^{1/\nu}_{l})\right).$$
By Theorem \ref{THM:gamma}, we have $\phi\in \Gamma_{\{M_k\}}(X).$ Hence we have shown that
$$\left[\hat{\Gamma_{\{M_k\}}(X)}\right]^{\wedge}\subseteq \Gamma_{\{M_k\}}(X).$$
So we have $\left[\hat{\Gamma_{\{M_k\}}(X)}\right]^{\wedge}=\Gamma_{\{M_k\}}(X),$ and hence $\Gamma_{\{M_k\}}(X)$ is a perfect space.
\end{proof}
Next we proceed to prove the equivalence of two expressions for the duality.
\begin{lem}
Let $v\in \Gamma_{\{M_k\}}(X)$ and $w\in\widehat{\Gamma_{\{M_{k}\}}}(X)$. Then
$$\sum_{k=0}^{\infty}||(\hat{v}_{k})||_{\mathtt{HS}}||(w_{k})||_{\mathtt{HS}}<\infty$$ if and only if $$\sum_{k=0}^{\infty}\sum_{i=1}^{d_k}|(\hat{v_{k}})_i| |(w_k)_i|<\infty.$$
\end{lem}
\begin{proof}
$\Longrightarrow$: This direction follows from the Cauchy--Schwarz inequality
$$
\sum_{i=1}^d |a_i b_i| \leq \left(\sum_{i=1}^d a_i^2\right)^{1/2} \left(\sum_{i=1}^d b_i^2\right)^{1/2},
$$
applied for each $k$.
$\Longleftarrow$:
We will be using the equality
$$\left(\sum_{i=1}^{n}|a_i|\right)\left(\sum_{i=1}^n|b_i|\right)=\sum_{i=1}^{n}|a_i||b_i|+\sum_{i=1}^{n}|a_i|\left(\sum_{j=1}^n|b_j|-|b_i|\right)$$ for any $a_i,b_i\in\mathbb{R}$,
yielding
\begin{eqnarray}\label{4.5}
&&||(\hat{v}_{k})||_{\mathtt{HS}}||(w_{k})||_{\mathtt{HS}}\nonumber\\
&\leq& \sum_{i=1}^{d_k}|(\hat{v}_{k})_i|\sum_{i=1}^{d_k} |(w_k)_i|\nonumber\\
&=& \sum_{i=1}^{d_k}|(\hat{v}_{k})_i| |(w_k)_i| +\sum_{i=1}^{d_k}|(w_k)_{i}|\left(\sum_{j=1}^{d_k}|(\hat{v}_k)_j|-|(\hat{v}_k)_i|\right).
\end{eqnarray}
We consider the second term in the above inequality, that is,
\begin{equation}\label{4.6}
\left(\sum_{i=1}^{d_k}|(w_k)_{i}|\left(\sum_{j=1}^{d_k}|(\hat{v}_k)_j|-|(\hat{v}_k)_i|\right)\right)\leq C\left(\sum_{i=1}^{d_k}|(w_{k})_i|(d_ke^{-M(L\lambda_{k}^{1/\nu})})\right),
\end{equation}
for some $C>0$ and $L>0.$
Then using \eqref{4.6} in \eqref{4.5} we get
\begin{equation}\label{4.7}
\sum_{k=0}^\infty||(\hat{v}_{k})||_{\mathtt{HS}}||(w_{k})||_{\mathtt{HS}}
\leq\sum_{k=0}^\infty \sum_{i=1}^{d_k}|(w_k)_i| \left(|(\hat{v}_{k})_i|+Cd_ke^{-M(L\lambda_{k}^{1/\nu})}\right).
\end{equation}
Now let $|{(\hat{u}_{k})_{i}}|=|(\hat{v}_{k})_i|+Cd_ke^{-M(L\lambda_{k}^{1/\nu})}$, for $i=1,2,...,d_{k},$ and $k\in\mathbb{N}_{0}.$
So then we have
$$\sum_{k=0}^\infty||(\hat{v}_{k})||_{\mathtt{HS}}||(w_{k})||_{\mathtt{HS}}\leq\sum_{k=0}^\infty \sum_{i=1}^{d_k}|(w_k)_i| |{(\hat{u}_{k})_{i}}|.$$
Now for some $C^{\prime\prime}>0$ and $L_{2}>0$, we have
\begin{eqnarray}
||{\hat{u}_{k}}||_{\mathtt{HS}}&=&\left(\sum_{i=1}^{d_k}\left(|(\hat{v}_{k})_i|^{2}+C^2d_k^{2}e^{-2M(L\lambda_{k}^{1/\nu})}+2Cd_k|(\hat{v}_{k})_i|e^{-M(L\lambda_{k}^{1/\nu})})\right)\right)^{1/2}\nonumber\\
&\leq& C^{\prime\prime}e^{-M(L_{2}\lambda_{k}^{1/\nu})},
\end{eqnarray}
that is, $u\in \Gamma_{\{M_{k}\}}(X).$
Indeed, this holds since
\begin{eqnarray}
&&\left(\sum_{i=1}^{d_k}\left(|(\hat{v}_{k})_i|^{2}+C^2d_k^{2}e^{-2M(L\lambda_{k}^{1/\nu})}+2Cd_k|(\hat{v}_{k})_i|e^{-M(L\lambda_{k}^{1/\nu})}\right)\right)^{1/2}\nonumber\\
&\leq& \left(\sum_{i=1}^{d_k}\left(C^2e^{-2M(L\lambda_{k}^{1/\nu})}+C^2 d_{k}^{2}e^{-2M(L\lambda_{k}^{1/\nu})}+2C^2 d_{k}e^{-2M(L\lambda_{k}^{1/\nu})}\right)\right)^{1/2}\nonumber\\
&\leq&\left(\sum_{i=1}^{d_k}C^2\left(1+d_k\right)^{2}e^{-2M(L\lambda_{k}^{1/\nu})}\right)^{1/2}\nonumber\\
&\leq& C(1+d_{k})^{3/2}e^{-M(L\lambda_{k}^{1/\nu})}\nonumber\\
&\leq& C^{\prime}e^{-\frac{1}{2}M(L\lambda_{k}^{1/\nu})}\nonumber\\
&\leq& C^{\prime\prime}e^{-M(L_{2}\lambda_{k}^{1/\nu})},\nonumber
\end{eqnarray}
where $L_{2}=\frac{L}{\sqrt A H},$ with $A, H$ the constants in condition $(M.2).$
Now, since $w\in\widehat{\Gamma_{\{M_{k}\}}}(X),$ from \eqref{4.7} we have
$$\sum_{k=0}^\infty||(\hat{v}_{k})||_{\mathtt{HS}}||(w_{k})||_{\mathtt{HS}}\lesssim \sum_{k=0}^\infty \sum_{i=1}^{d_k}|(w_k)_i| |(\hat{u}_{k})_i|<\infty,$$ completing the proof.
\end{proof}
\begin{thm}[Adjointness Theorem]\label{THM:adj}
Let $\{M_k\}$ and $\{N_k\}$ satisfy conditions $(M.{0})$-$(M.{3}).$ A linear mapping $f:\Gamma_{\{M_k\}}(X)\rightarrow \Gamma_{\{N_k\}}(X)$ is sequential if and only if $f$ is represented by an infinite tensor $(f_{kjli}), $ ~ $k,j\in \mathbb{N}_{0},$ $1\leq l\leq d_{j}$ and $1\leq i\leq d_k$ such that for any $u\in\Gamma_{\{M_k\}}(X)$ and $v\in\hat{\Gamma_{\{N_k\}}}(X)$ we have
\begin{equation} \label{EQ:f1}
\sum_{j=0}^{\infty}\sum_{l=1}^{d_j}|f_{kjli}|\,|\hat{u}(j,l)|<\infty, \quad\textrm{for all } k\in\mathbb{N}_{0},\; i=1,2,\ldots,d_k,
\end{equation} and
\begin{equation}\label{EQ:f2}
\sum_{k=0}^{\infty}\sum_{i=1}^{d_k}\left|(v_k)_i\right|\left|{\left(\sum_{j=0}^{\infty}f_{kj}\hat{u}(j)\right)_{i}}\right|<\infty.
\end{equation}
Furthermore, the adjoint mapping $\hat{f}:\hat{\Gamma_{\{N_k\}}}(X)\rightarrow \hat{\Gamma_{\{M_k\}}}(X)$ defined by the formula $\hat{f}(v)=v\circ f$ is also sequential, and the transposed matrix ${(f_{kj})}^{t}$ represents $\hat{f}$, with $f$ and $\hat f$ related by $\langle f(u),v\rangle=\langle u,\hat f (v)\rangle.$
\end{thm}
Let us summarise the ranges for indices in the used notation as well as give more explanation to \eqref{EQ:f2}.
For $f:\Gamma_{\{M_k\}}(X)\rightarrow \Gamma_{\{N_k\}}(X)$ and $u\in \Gamma_{\{M_k\}}(X)$ we write
\begin{equation}\label{EQ:notf}
\mathbb C^{d_k}\ni f(u)_k=\sum_{j=0}^{\infty}f_{kj}\widehat{u}(j)=
\sum_{j=0}^\infty \sum_{l=1}^{d_j} f_{kjl}\widehat{u}(j,l),\quad k\in\mathbb{N}_0,
\end{equation}
so that
\begin{equation}\label{EQ:notf2}
f_{kjl}\in \mathbb C^{d_k},\;
f_{kjli}\in\mathbb C,\quad k,j\in\mathbb{N}_0,\; 1\leq l\leq d_j,\; 1\leq i \leq d_k,
\end{equation}
and
\begin{equation}\label{EQ:notf3}
\mathbb C\ni (f(u)_k)_i=f(u)_{ki} = \sum_{j=0}^\infty\sum_{l=1}^{d_j} f_{kjli}\widehat{u}(j,l),\quad k\in\mathbb{N}_0,\; 1\leq i \leq d_k,
\end{equation}
where we view $f_{kj}$ as a matrix, $f_{kj}\in\mathbb{C}^{d_k\times d_j}$, and the product of the matrices has been explained in \eqref{EQ:notf}.
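The blockwise action \eqref{EQ:notf}--\eqref{EQ:notf3} can be mimicked in a toy example with two eigenspaces ($d_0=1$, $d_1=2$) and arbitrary illustrative blocks $f_{kj}$:

```python
import numpy as np

# Two eigenspaces with d_0 = 1, d_1 = 2; f is given by blocks
# f_kj in C^{d_k x d_j}, and f(u)_k = sum_j f_kj u_hat(j) acts blockwise
# on the Fourier coefficients, as in (EQ:notf).
dims = [1, 2]
rng = np.random.default_rng(3)
f_blocks = {(k, j): rng.standard_normal((dk, dj))
            for k, dk in enumerate(dims) for j, dj in enumerate(dims)}
u_hat = [rng.standard_normal(dj) for dj in dims]

f_u = [sum(f_blocks[k, j] @ u_hat[j] for j in range(len(dims)))
       for k in range(len(dims))]

# Componentwise form (EQ:notf3): f(u)_{ki} = sum_{j,l} f_{kjli} u_hat(j,l).
k, i = 1, 0
val = sum(f_blocks[k, j][i, l] * u_hat[j][l]
          for j in range(len(dims)) for l in range(dims[j]))
assert np.isclose(val, f_u[k][i])
```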
\begin{rem}
Let us now briefly describe how the tensor $(f_{kjli})$, $k,j\in \mathbb{N}_{0},$ $1\leq l\leq d_{j}$, $1\leq i\leq d_k$, is constructed given a
sequential mapping $f:\Gamma_{\{M_k\}}(X)\rightarrow \Gamma_{\{N_k\}}(X)$.
For every $k\in \mathbb{N}_{0}$ and $1\leq i\leq d_k$, define the family
$v^{ki}=\left(v^{ki}_{j}\right)_{j\in\mathbb{N}_{0}}$ such that each $v^{ki}_{j}\in \mathbb{C}^{d_j}$ is defined by
\begin{equation}\label{EQ:defv}
v^{ki}_{j}(l)=\begin{cases}
1,~~~~~j=k, l=i,\\
0,~~~~~ \textrm{otherwise}.
\end{cases}
\end{equation}
Then $v^{ki}\in \left[\Gamma_{\{N_{k}\}}(X)\right]^{\wedge}$, and since
$f$ is sequential we have $v^{ki}\circ f\in\left[\Gamma_{\{M_{k}\}}(X)\right]^{\wedge}$, and we can write $v^{ki}\circ f=\left((v^{ki}\circ f)_{j}\right)_{j\in\mathbb{N}_0},$ where $(v^{ki}\circ f)_{j}\in\mathbb{C}^{d_j}.$
Then for each $1\leq l \leq d_j$ we set
\begin{equation}\label{EQ:deff}
f_{kjli}:=(v^{ki}\circ f)_{j}(l),
\end{equation}
the $l^{th}$ component of the vector $(v^{ki}\circ f)_{j}\in\mathbb{C}^{d_j}.$
The formula \eqref{EQ:deff} will be established in the proof of Theorem \ref{THM:adj}.
In particular, since for $\phi\in\Gamma_{\{M_{k}\}}(X)$ we have $f(\phi)\in\Gamma_{\{N_{k}\}}(X),$
it will be a consequence of \eqref{EQ: 4.27} and \eqref{EQ: 4.28} later on that
\begin{equation}
\label{EQ:deff2}
v^{ki}\circ f(\phi)=(\widehat{f(\phi)})(k,i)=\sum_{j=0}^{\infty}\sum_{l=1}^{d_j}f_{kjli}\hat{\phi}(j,l),
\end{equation}
so that the tensor $(f_{kjli})$ is describing the transformation of the Fourier coefficients of $\phi$ into those of $f(\phi)$.
Another meaning of condition \eqref{EQ:f1} is that if for each $k\in \mathbb{N}_{0}$ and $1\leq i\leq d_k$ we define
$$
f^{ki}(j,l):=f_{kjli},
$$
then $f^{ki}\in \left[\Gamma_{\{M_{k}\}}(X)\right]^{\wedge}$. Condition \eqref{EQ:f2} is a continuity condition saying that for every $u\in\Gamma_{\{M_{k}\}}(X)$ we have
$$
\sum_{j=0}^{\infty}\sum_{l=1}^{d_j}f_{kjli}\hat{u}(j,l)\in \Gamma_{\{N_{k}\}}(X).
$$
\end{rem}
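In any finite truncation, the adjointness relation $\langle f(u),v\rangle=\langle u,\hat f(v)\rangle$ of Theorem \ref{THM:adj} is simply the transpose identity $(Fu)\cdot v=u\cdot(F^{t}v)$ for the bilinear (unconjugated) pairing, with all blocks stacked into one matrix $F$. A minimal numerical sketch (names ours, illustrative only):

```python
def dot(a, b):
    """Bilinear pairing sum_i a_i b_i (no complex conjugation),
    matching the pairing <u, v> used in the text."""
    return sum(x * y for x, y in zip(a, b))

def matvec(F, x):
    """Apply the stacked (truncated) matrix F to a stacked coefficient vector."""
    return [dot(row, x) for row in F]

def transpose(F):
    """The transposed matrix, representing the adjoint mapping."""
    return [list(col) for col in zip(*F)]
```

With these, `dot(matvec(F, u), v) == dot(u, matvec(transpose(F), v))` holds for any truncation, mirroring $\langle f(u),v\rangle=\langle u,\hat f(v)\rangle$.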
To prove Theorem {\ref{THM:adj}} we first establish the following lemma.
\begin{lem} \label{L:L1}
Let $(f_{kjli})_{k,j\in{\mathbb{N}_{0}}, 1\leq l\leq d_j, 1\leq i\leq d_k}$ be an infinite tensor satisfying \eqref{EQ:f1} and \eqref{EQ:f2}. Then for any $u\in\Gamma_{\{M_k\}}(X)$ and $v\in \left[\Gamma_{\{N_k\}}(X)\right]^{\wedge},$ we have
$$\lim_{n\rightarrow\infty}\sum_{k=0}^{\infty}\sum_{i=1}^{d_k}\left|(v_k)_i\right|\left|{\left(\sum_{0\leq j\leq n}f_{kj}\hat{u}(j)\right)_{i}}\right|=\sum_{k=0}^{\infty}\sum_{i=1}^{d_k}\left|(v_k)_i\right|\left|{\left(\sum_{j=0}^{\infty}f_{kj}\hat{u}(j)\right)_{i}}\right|.$$
\end{lem}
\begin{proof}[Proof of Lemma \mbox{\ref{L:L1}}]
Let $u\in \Gamma_{\{M_k\}}(X)$ and $u\approx \left(\hat u(l)\right)_{l\in {\mathbb{N}}_{0}}.$ Define $u^{n}:=\left(\hat u ^{(n)}(l)\right)_{l\in {\mathbb{N}}_{0}}$ by setting
\begin{equation}
\hat{u}^{(n)}(l)=\begin{cases}
\hat u(l), & l\leq n, \\
0, & l>n.\end{cases}\nonumber
\end{equation}
Then for any $w\in \hat{\Gamma_{\{M_k\}}}(X),$ $\langle u-u^{n}, w\rangle\rightarrow 0$ as $n\rightarrow \infty.$ This is true since $\sum_{l=0}^{\infty}\left|\hat{u}(l)\cdot w_l \right|<\infty$, so that
$$\left| \langle u-u^{n}, w\rangle \right|\leq \sum_{l> n}\left|\hat{u}(l)\cdot w_{l}\right|\rightarrow {0}$$ as $n\rightarrow \infty.$
Now for any $u\in \Gamma_{\{M_k\}}(X)$ and $v\in \left[\Gamma_{\{N_k\}}(X)\right]^{\wedge} $ and from \eqref{EQ:f1} and \eqref{EQ:f2} we have
\begin{multline}\label{EQ:long}
\langle f(u), v \rangle=\sum_{k=0}^{\infty} \left(f(u)\right)_{k}\cdot v_{k}
=\sum_{k=0}^{\infty}\left(\sum_{j=0}^{\infty}f_{kj}\hat{u}(j)\right) \cdot v_{k}
\\
=\sum_{k=0}^{\infty}\sum_{j=0}^\infty\sum_{\ell=1}^{d_j}\sum_{i=1}^{d_k} f_{kj\ell i}\widehat{u}(j,\ell)(v_k)_{i}
=\sum_{j=0}^{\infty}\sum_{\ell=1}^{d_j}\widehat{u}(j,\ell) \sum_{k=0}^{\infty}\sum_{i=1}^{d_k}
f_{kj\ell i}(v_k)_i
\\
=\sum_{j=0}^{\infty}\sum_{\ell=1}^{d_j}\widehat{u}(j,\ell) \sum_{k=0}^{\infty}
f_{kj\ell }\cdot v_k
=\sum_{j=0}^\infty\hat{u}(j)\cdot (v\circ f)_{j}=\langle u, v\circ f\rangle,
\end{multline}
where
$$\mathbb C^{d_j}\ni (v\circ f)_{j}=\left\{\sum_{k=0}^{\infty}
f_{kj\ell }\cdot v_k\right\}_{\ell=1}^{d_j},\quad j\in \mathbb{N}_{0},$$ and
$$v\circ f=\left\{(v\circ f)_{j}\right\}_{j=0}^{\infty}.$$
Since $f$ is sequential, $v\circ f\in\left[\Gamma_{\{M_k\}}(X)\right]^{\wedge}$ and
$$\sum_{j=0}^{\infty}\hat{u}(j)\cdot(v\circ f)_j=\langle u, v\circ f\rangle=\left(v\circ f\right)(u),$$ so that we can write $\left(v\circ f\right)_{j}\in \mathbb{C}^{d_j}$ and also $\left(v\circ f\right)(u)=\langle v, f(u)\rangle$. So for any $u\approx(\hat{u}(j))_{j\in\mathbb{N}_0}\in \Gamma_{\{M_k\}}(X)$, from the definition of $\left[\Gamma_{\{M_k\}}(X)\right]^{\wedge}$ we have $$
\sum_{j\in \mathbb{N}_0}\sum_{l=1}^{d_j}\left|(v\circ f)_{jl}\right|\left|\hat{u}(j,l)\right|<\infty.$$ Hence the series $\sum_{j=0}^{\infty}\left|(v\circ f)_{j}\cdot\hat{u}(j)\right|$ is convergent.
We can then conclude that $v\circ f\in \left[\Gamma_{\{M_k\}}(X)\right]^{\wedge} $ and we have
$$\langle f(u)-f(u^{n}), v\rangle=\langle u-u^{n}, v\circ f\rangle\rightarrow 0$$ as $n\rightarrow \infty.$ Therefore,
$$\langle f(u), v\rangle=\lim_{n\rightarrow\infty}\langle f(u^{n}), v\rangle,$$ for all $u\in \Gamma_{\{M_{k}\}}(X)$ and $v\in[\Gamma_{\{N_k\}}(X)]^{\wedge}.$
Hence for any $u\in \Gamma_{\{M_k\}}(X)$ and $v\in \left[\Gamma_{\{N_k\}}(X)\right]^{\wedge}$ we have
$$\lim_{n\rightarrow\infty}\sum_{k=0}^{\infty}v_k\cdot\left(\sum_{0\leq j\leq n}f_{kj}\hat{u}(j)\right) =\sum_{k=0}^{\infty}v_k\cdot\left(\sum_{j=0}^{\infty}f_{kj}\hat{u}(j)\right),$$
that is,
$$\lim_{n\rightarrow\infty}\sum_{k=0}^{\infty}\sum_{i=1}^{d_k}(v_k)_i\left(\sum_{j\leq n}f_{kj}\hat{u}(j)\right)_i =\sum_{k=0}^{\infty}\sum_{i=1}^{d_k}(v_k)_{i}\left(\sum_{j=0}^{\infty}f_{kj}\hat{u}(j)\right)_i.$$ Now we will use the fact that if $u\in \Gamma_{\{M_k\}}(X)$ then $|u|\in\Gamma_{\{M_k\}}(X),$ where $|u|=\left(\widehat{|u|}_j\right)_{j\in\mathbb{N}_{0}},$ $\widehat{|u|}_{j}\in \mathbb{R}^{d_j},$ with
\begin{align}
\widehat{|u|}_{j} &:= \begin{bmatrix}
|\hat{u}(j,1)| \\
|\hat{u}(j,2)| \\
\vdots \\
|\hat{u}(j,d_j)|
\end{bmatrix},\nonumber
\end{align} in view of Theorem \ref{P:perfect}. The
same is true for the dual space $\left[\Gamma_{\{N_k\}}(X)\right]^{\wedge}.$
This argument then gives
$$\lim_{n\rightarrow\infty}\sum_{k=0}^{\infty}\sum_{i=1}^{d_k}\left|(v_k)_i\right|\left|{\left(\sum_{0\leq j\leq n}f_{kj}\hat{u}(j)\right)_{i}}\right|=\sum_{k=0}^{\infty}\sum_{i=1}^{d_k}\left|(v_k)_i\right|\left|{\left(\sum_{j=0}^{\infty}f_{kj}\hat{u}(j)\right)_{i}}\right|.$$ The proof is complete.
\end{proof}
\begin{proof}[Proof of Theorem \mbox{\ref{THM:adj}}]
Let us assume first that the mapping $f:\Gamma_{\{M_k\}}(X)\rightarrow\Gamma_{\{N_{k}\}}(X)$ can be represented by $f=(f_{kjli})_{k,j\in\mathbb{N}_{0},
1\leq l\leq d_j, 1\leq i\leq d_k},$ an infinite tensor such that\begin{equation}\sum_{j=0}^{\infty}\sum_{l=1}^{d_j}|f_{kjli}||\hat{u}(j,l)|<\infty, ~~\textrm{for ~all}~k\in\mathbb{N}_{0},~ i=1,2,\ldots,d_k,\end{equation} and
\begin{equation}\sum_{k=0}^{\infty}\sum_{i=1}^{d_k}\left|(v_k)_i\right|\left|{\left(\sum_{j=0}^{\infty}f_{kj}\hat{u}(j)\right)_{i}}\right|<\infty\end{equation} hold for all $u\in\Gamma_{\{M_k\}}(X)$ and $v\in[\Gamma_{\{N_k\}}(X)]^{\wedge}.$
Let $u_{1}\approx \left(\hat{u}_{1}(p)\right)_{p\in{\mathbb{N}}_{0}}$ be such that for some fixed $j\in{\mathbb{N}}_{0}$ and $1\leq l\leq d_j$ we have
\begin{equation} \hat{u_1}(p,q)=\begin{cases}
{1},~~~~ p=j, \; q=l,\\
0, ~~~~~~\textrm{otherwise}.\end{cases}\nonumber\end{equation}
Then $u_{1}\in \Gamma_{\{M_k\}}(X)$ so $fu_1=f(u_{1})\in\Gamma_{\{N_k\}}(X)$ and
\begin{eqnarray}\label{EQ:4.21}
\left(fu_1\right)_{k}&=&\sum_{p=0}^{\infty}f_{kp}\hat{u}_{1}(p)\nonumber\\
&=&\sum_{p=0}^{\infty}\sum_{q=1}^{d_p}f_{kpq}\hat{u_{1}}(p,q)\nonumber\\
&=& \sum_{q=1}^{d_j}f_{kjq}\hat{u_1}(j,q)\nonumber\\
&=&f_{kjl}\in \mathbb{C}^{d_k}.
\end{eqnarray}
We now first show that
$$\widehat{\left(fu\right)}(k)=\sum_{j=0}^{\infty}\sum_{l=1}^{d_j}f_{kjl}\hat{u}(j,l),$$ where $f_{kjli}\in \mathbb{C}$ for each $k,j\in\mathbb{N}_{0},$ $1\leq l \leq d_j$ and $1\leq i\leq d_k.$
The way in which $f$ has been defined we have
$$(fu)_{k}= \sum_{j=0}^{\infty}\sum_{l=1}^{d_j}f_{kjl}\hat{u}(j,l),\quad f_{kjl}\in \mathbb{C}^{d_k}.$$
Also since $u\in\Gamma_{\{M_k\}}(X)$, from our assumption we have $fu\in\Gamma_{\{N_k\}}(X)$ and $fu\approx \left(\hat{(fu)}(j)\right)_{j\in\mathbb{N}_{0}},$ so $(fu)_{k}\approx\hat{(fu)}(k).$
We can then write $\hat{(fu)}(k)=\sum_{j}\sum_{l=1}^{d_j}f_{kjl}\hat{u}(j,l).$
Since we know that $v\in{\left[\Gamma_{\{N_k\}}(X)\right]^{\wedge}}$ and $fu\in\Gamma_{\{N_k\}}(X),$ we have
$$\sum_{k=0}^{\infty}\sum_{i=1}^{d_k}|(v_k)_i||(\widehat{(fu)}(k))_i|=\sum_{k=0}^{\infty}\sum_{i=1}^{d_k}|(v_k)_i||\sum_{j=0}^{\infty}\sum_{l=1}^{d_j} f_{kjli}\hat{u}(j,l)|<\infty.$$
In particular using the definition of $u_1$ and \eqref{EQ:4.21} we get
\begin{eqnarray}\label{EQ:4.22}\sum_{k=0}^{\infty}\sum_{i=1}^{d_k}|(v_k)_i|\left|\sum_{p=0}^{\infty} \sum_{q=1}^{d_p}f_{kpqi}\hat{u_1}(p,q)\right|=\sum_{k=0}^{\infty}\sum_{i=1}^{d_k}|(v_k)_i||f_{kjli}|<\infty,\end{eqnarray}
for any $j\in \mathbb{N}_{0}$ and $1\leq l\leq d_j.$\\
Now for any $u\in \Gamma_{\{M_k\}}(X)$ consider
$$J=\sum_{j=0}^{\infty}\sum_{l=1}^{d_j}|\sum_{k=0}^{\infty}\sum_{i=1}^{d_k}(v_{k})_{i}f_{kjli}| |\hat{u}(j,l)|.$$
Then we consider the series
$$I_{n}:=\sum_{j\leq n}\sum_{l=1}^{d_j}|\sum_{k=0}^{\infty}\sum_{i=1}^{d_k}(v_{k})_{i}f_{kjli}| |\hat{u}(j,l)|,$$
so that we have
\begin{eqnarray}
I_{n}&=&\sum_{j\leq n}\sum_{l=1}^{d_j}|\sum_{k=0}^{\infty}\sum_{i=1}^{d_k}(v_{k})_{i}f_{kjli}| |\hat{u}(j,l)|\nonumber\\
&=&\sum_{j\leq n}\sum_{l=1}^{d_j}|\sum_{k=0}^{\infty}\sum_{i=1}^{d_k}(v_{k})_{i}f_{kjli}\hat{u}(j,l)|.\nonumber
\end{eqnarray}
Let $\epsilon=(\epsilon_i)_{1\leq i\leq d_{k}},$ $k\in{\mathbb{N}}_{0}$, be such that $\epsilon_i\in\mathbb{C}$ and $|\epsilon_i|=1, $ for all $i$ and such that
$$\left|\sum_{k=0}^{\infty}\sum_{i=1}^{d_k}(v_{k})_{i}f_{kjli}\hat{u}(j,l)\right|=\sum_{k=0}^{\infty}\sum_{i=1}^{d_k}(v_{k})_{i}f_{kjli}\hat{u}(j,l)\epsilon_{i}.$$
Then
\begin{eqnarray}
I_n &=&\sum_{j\leq n}\sum_{l=1}^{d_j}\sum_{k=0}^{\infty}\sum_{i=1}^{d_k}(v_{k})_{i}f_{kjli}\hat{u}(j,l)\epsilon_{i}\nonumber\\
&\leq & \sum_{k=0}^{\infty}\sum_{i=1}^{d_k}|(v_{k})_{i}|\left|\sum_{j\leq n}\sum_{l=1}^{d_j}f_{kjli}\hat{u}(j,l)\epsilon_{i} \right|.
\end{eqnarray}
It follows from Lemma \ref{L:L1} that
$$\lim_{n\rightarrow\infty}\sum_{k=0}^{\infty}\sum_{i=1}^{d_k}|(v_{k})_{i}|\left|\sum_{j\leq n}\sum_{l=1}^{d_j}f_{kjli}\hat{u}(j,l)\epsilon_{i} \right| = \sum_{k=0}^{\infty}\sum_{i=1}^{d_k}|(v_{k})_{i}|\left|\sum_{j=0}^{\infty}\sum_{l=1}^{d_j}f_{kjli}\hat{u}(j,l)\epsilon_{i} \right|<\infty.$$
Then \begin{equation}\label{EQ:4.24} J=\sum_{j=0}^{\infty}\sum_{l=1}^{d_j}|\sum_{k=0}^{\infty}\sum_{i=1}^{d_k}(v_{k})_{i}f_{kjli}| |\hat{u}(j,l)|<\infty.\end{equation}
So we have proved that if $(f_{kjli})$ satisfies
\begin{itemize}
\item $\sum_{j=0}^{\infty}\sum_{l=1}^{d_j}|f_{kjli}||\hat{u}(j,l)|<\infty$,
\item $\sum_{k=0}^{\infty}\sum_{i=1}^{d_k}\left|(v_k)_i\right|\left|{\left(\sum_{j=0}^{\infty}f_{kj}\hat{u}(j)\right)_{i}}\right|<\infty$,
\end{itemize}
then for any $u\in{\Gamma_{\{M_{k}\}}(X)}$ and $v\in\left[\Gamma_{\{N_{k}\}}(X)\right]^{\wedge}$ we obtain, from \eqref{EQ:4.22} and \eqref{EQ:4.24} respectively, that
\begin{enumerate}
\item $\sum_{k=0}^{\infty}\sum_{i=1}^{d_k}|(v_k)_i||f_{kjli}|<\infty$,
\item $\sum_{j=0}^{\infty}\sum_{l=1}^{d_j}|\sum_{k=0}^{\infty}\sum_{i=1}^{d_k}(v_{k})_{i}f_{kjli}| |\hat{u}(j,l)|<\infty.$
\end{enumerate}
Now recall that for $f: \Gamma_{\{M_{k}\}}(X)\rightarrow \Gamma_{\{N_{k}\}}(X)$ we have
$$(f(u))_{k}=\sum_{j=0}^{\infty}\sum_{l=1}^{d_j}f_{kjl}\hat{u}(j,l),$$
for any $u\in{\Gamma_{\{M_{k}\}}(X)},$ then for any $v\in\left[\Gamma_{\{N_{k}\}}(X)\right]^{\wedge}$, the composed mapping $v\circ f: \Gamma_{\{M_{k}\}}(X)\rightarrow \mathbb{C}$ is given by
\begin{eqnarray}(v\circ f)(u)&=&\sum_{k=0}^{\infty}v_k\cdot (f(u))_k=\sum_{k=0}^{\infty}\sum_{i=1}^{d_k}(v_{k})_{i}\left(\sum_{j=0}^{\infty} \sum_{l=1}^{d_j}f_{kjli}\hat u(j,l)\right)\nonumber\\
&=&\sum_{j=0}^{\infty}\sum_{l=1}^{d_j}\left(\sum_{k=0}^{\infty} \sum_{i=1}^{d_k}(v_{k})_{i}f_{kjli}\right)\hat u(j,l).\end{eqnarray}
So by (2) we get that
$$\left|(v\circ f)(u)\right|\leq\sum_{j=0}^{\infty}\sum_{l=1}^{d_j}|\sum_{k=0}^{\infty}\sum_{i=1}^{d_k}(v_{k})_{i}f_{kjli}| |\hat{u}(j,l)|<\infty.$$
So $\hat f(v)=(\hat f (v)_{j l})_{j\in \mathbb{N}_0,\, 1\leq l\leq d_j},$ with $\hat{f}(v)_{jl}= \sum_{k=0}^{\infty}\sum_{i=1}^{d_k}(v_{k})_{i}f_{kjli},$ belongs to $[\Gamma_{\{M_{k}\}}(X)]^{\wedge}$ (by the definition of $[\Gamma_{\{M_{k}\}}(X)]^{\wedge}$), that is, $f$ is sequential.
Then the relation $\langle f(u), v\rangle=\langle u,\hat f(v)\rangle$ also holds.
\medskip
Now to prove the converse part we assume that $f:\Gamma_{\{M_k\}}(X)\rightarrow \Gamma_{\{N_k\}}(X)$ is sequential. We have to show that $f$ can be represented as $f\approx(f_{kjli})_{k,j\in\mathbb{N}_{0},1\leq l\leq d_j, 1\leq i\leq d_k}$ and satisfies \eqref{EQ:f1} and \eqref{EQ:f2}.
Define for $k,i$ where $k\in \mathbb{N}_{0}$ and $1\leq i\leq d_k,$ the sequence $u^{ki}=\left(u^{ki}_{j}\right)_{j\in\mathbb{N}_{0}}$ such that $u^{ki}_{j}\in \mathbb{C}^{d_j}$ and $u^{ki}_j(l)=\hat{u^{ki}}({j,l})$, given by
\[
u^{ki}_{j}(l)=\hat{u^{ki}}({j,l})=\begin{cases}
1,~~~~~j=k, l=i,\\
0,~~~~~ \textrm{otherwise}.
\end{cases}
\]
Then $u^{ki}\in \left[\Gamma_{\{N_{k}\}}(X)\right]^{\wedge}.$
Now since $f$ is sequential we have $u^{ki}\circ f\in\left[\Gamma_{\{M_{k}\}}(X)\right]^{\wedge}$ and $u^{ki}\circ f=\left((u^{ki}\circ f)_{j}\right)_{j\in\mathbb{N}_0},$ where $(u^{ki}\circ f)_{j}\in\mathbb{C}^{d_j}.$ We denote $u^{ki}\circ f=\left(f^{ki}_{j}\right)_{j\in{\mathbb{N}_{0}}},$ where $f^{ki}_j=(u^{ki}\circ f)_{j}.$ Then $(f^{ki}_{j})_{j\in\mathbb{N}_{0}}\in \left[\Gamma_{\{M_{k}\}}(X)\right]^{\wedge}$ and $f^{ki}_{j}\in\mathbb{C}^{d_j}.$
Then for any $\phi\approx \left(\hat{\phi}(j)\right)_{j\in\mathbb{N}_0}\in \Gamma_{\{M_k\}}(X)$ we have
\begin{equation}\sum_{j=0}^{\infty}\sum_{l=1}^{d_j}|f^{ki}_{jl}||\hat{\phi}(j,l)|<\infty.\end{equation}
For $\phi\in\Gamma_{\{M_{k}\}}(X)$ we can write $f(\phi)\in\Gamma_{\{N_{k}\}}(X).$ We can write $$f(\phi)=\left((f(\phi))^{\wedge}(p)\right)_{p\in\mathbb{N}_{0}}.$$ So
\begin{eqnarray}\label{EQ: 4.27}
u^{ki}\circ f(\phi)&=&\sum_{j=0}^{\infty}u^{ki}_j\cdot\widehat{(f(\phi))}(j)\nonumber\\
&=&\sum_{j=0}^{\infty}\sum_{l=1}^{d_j}u^{ki}_{jl}\widehat{(f(\phi))}(j,l)\nonumber\\
&=&(\widehat{f(\phi)})(k,i)~(\textrm{from~ the ~definition~ of~} u^{ki}).\end{eqnarray}
We have $u^{ki}\circ f=(f^{ki}_{j})_{j\in\mathbb{N}_0}\in \left[\Gamma_{\{M_k\}}(X)\right]^{\wedge},$ so
\begin{eqnarray}\label{EQ: 4.28}
(u^{ki}\circ f)(\phi)
&=&\sum_{j=0}^{\infty}f^{ki}_{j}\hat{\phi}(j)\nonumber\\
&=&\sum_{j=0}^{\infty}\sum_{l=1}^{d_j}f^{ki}_{jl}\hat{\phi}(j,l).
\end{eqnarray}
From \eqref{EQ: 4.27} and \eqref{EQ: 4.28} we have $(\widehat{f(\phi)})(k,i)=\sum_{j=0}^{\infty}\sum_{l=1}^{d_j}f^{ki}_{jl}\hat{\phi}(j,l).$\\
Hence $(f(\phi))_{ki}=\sum_{j=0}^{\infty}\sum_{l=1}^{d_j}f^{ki}_{jl}\hat{\phi}(j,l),~~k\in\mathbb{N}_{0},$ and $1\leq i\leq d_k,$ that is $f$ is represented by the tensor $\left\{(f^{ki}_{jl})\right\}_{k,j\in\mathbb{N}_0, 1\leq i\leq d_k, 1\leq l\leq d_j}$.\\
If we denote $f^{ki}_{jl}$ by $f^{ki}_{jl}= f_{kjli},$ we can say that $f$ is represented by the tensor $(f_{kjli})_{k,j\in\mathbb{N}_0, 1\leq l\leq d_j, 1\leq i\leq d_k}.$
Also let $v\in\left[\Gamma_{\{N_k\}}(X)\right]^{\wedge}.$ Since $f(\phi)\in \Gamma_{\{N_k\}}(X)$ for $\phi\in \Gamma_{\{M_k\}}(X),$ then from the definition of $\left[\Gamma_{\{N_k\}}(X)\right]^{\wedge}$ we have
$$\sum_{k=0}^{\infty}\sum_{i=1}^{d_k}|(v_k)_i|\left|\sum_{j=0}^{\infty}\sum_{l=1}^{d_j}f_{kjli}\hat{\phi}(j,l)\right|<\infty.$$
This completes the proof of Theorem \ref{THM:adj}.
\end{proof}
\section{Beurling class of ultradifferentiable functions and ultradistributions}
\label{SEC:Beurling}
In this section we briefly summarise the counterparts of the results of the previous section for the case of Komatsu classes of Beurling type ultradifferentiable functions and ultradistributions.
For a more detailed description of these spaces, as well as of their duals and $\alpha$-duals in the sense of K\"othe, we refer to \cite{DaR2}.
The class $\Gamma_{(M_k)}(X)$ is the space of $C^{\infty}$ functions $\phi$ on $X$ such that for every $h>0$ there exists $C_{h}>0$ such that
\begin{equation}
||E^{k}\phi||_{L^2(X)}\leq C_{h}h^{\nu k}M_{\nu k}, \quad k=0,1,2,\ldots
\end{equation}
For these spaces, we replace condition (M.3) by condition
\medskip
\noindent
(M.3$'$) \quad for every $ l>0$ there exists $ C_{l}>0 $ such that
$k!\leq C_{l} l^{k}M_{k}$, for all $k\in\mathbb{N}_{0}.$
\medskip
The counterpart of \cite[Theorem \ref{THM:gamma} and Theorem \ref{THM:gammahat}]{DaR2} holds for this class as well, namely, we have
\begin{thm}
\label{THM: Beurling 1}
Assume conditions (M.0), (M.1), (M.2), (M.3$'$). We have $\phi\in\Gamma_{(M_k)}(X)$ if and only if for every $L>0$ there exists $C_L>0$ such that
$$||\hat{\phi}(l)||_{\mathtt{HS}}\leq C_L\exp\{-M(L\lambda_{l}^{1/\nu})\}, \quad \textrm{ for all } l\geq 1.$$
For the dual space and for the $\alpha$-dual, the following statements are equivalent:
\begin{enumerate}
\item $v\in\Gamma^{\prime}_{(M_k)}(X);$
\item $v\in \left[\Gamma_{(M_k)}(X)\right]^{\wedge}$;
\item there exists $L>0$ such that we have
$$\sum_{l=0}^{\infty}\exp\left(-M(L\lambda_{l}^{1/\nu})\right)||v_{l}||_{\mathtt{HS}}<\infty;$$
\item there exist $L>0$ and $K>0$ such that $$||v_l||_{\mathtt{HS}}\leq K\exp\left(M(L\lambda_{l}^{1/\nu})\right)$$ holds for all $l\in\mathbb{N}_{0}.$
\end{enumerate}
\end{thm}
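The bounds in Theorem \ref{THM: Beurling 1} are stated through the associated function $M(\cdot)$. Assuming the standard Komatsu definition $M(r)=\sup_{k\in\mathbb{N}_0}\ln\left(r^{k}M_0/M_k\right)$ (the paper fixes this earlier; we restate it here only as an assumption), it can be evaluated numerically as in the following sketch (names ours):

```python
import math

def assoc_M(r, M):
    """Associated function M(r) = sup_k ln(r^k M_0 / M_k), computed over a
    finite truncation of the sequence M = (M_0, M_1, ...).  The formula is
    the standard Komatsu definition, assumed here for illustration."""
    if r <= 0:
        return 0.0
    return max(k * math.log(r) + math.log(M[0]) - math.log(M[k])
               for k in range(len(M)))
```

For instance, with the Gevrey-type choice $M_k=k!$ the function $M(r)$ is nonnegative and increasing in $r$, which is what makes the weights $\exp\left(-M(L\lambda_l^{1/\nu})\right)$ in the theorem decay.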
Again we note that given this characterisation of $\alpha$-duals, one can prove that they are perfect, in a way similar to the proof of Theorem \ref{P:perfect}, namely,
that
\begin{equation}
\Gamma_{(M_k)}(X)=\left(\left[\Gamma_{(M_k)}(X)\right]^{\wedge}\right)^{\wedge}.
\end{equation}
Finally we can state the counterpart of the adjointness Theorem \ref{THM:adj}.
\begin{thm}[Adjointness Theorem Beurling Case]
\label{THM:AdjB}
Let $\{M_k\}$ and $\{N_k\}$ satisfy conditions (M.{0})--(M.$3'$). A linear mapping $f:\Gamma_{(M_k)}(X)\rightarrow \Gamma_{(N_k)}(X)$ is sequential if and only if $f$ is represented by an infinite tensor $(f_{kjli}), $ ~ $k,j\in \mathbb{N}_{0},$ $1\leq l\leq d_{j}$ and $1\leq i\leq d_k$ such that for any $u\in\Gamma_{(M_k)}(X)$ and $v\in\hat{\Gamma_{(N_k)}}(X)$ we have
\begin{equation} \label{EQ:f11}
\sum_{j=0}^{\infty}\sum_{l=1}^{d_j}|f_{kjli}||\hat{u}(j,l)|<\infty, ~~\textrm{for ~all}~k\in\mathbb{N}_{0}, ~i=1,2,...,d_k,
\end{equation} and
\begin{equation}\label{EQ:f21}
\sum_{k=0}^{\infty}\sum_{i=1}^{d_k}\left|(v_k)_i\right|\left|{\left(\sum_{j=0}^{\infty}f_{kj}\hat{u}(j)\right)_{i}}\right|<\infty.
\end{equation}
Furthermore, the adjoint mapping $\hat{f}:\hat{\Gamma_{(N_k)}}(X)\rightarrow \hat{\Gamma_{(M_k)}}(X)$ defined by the formula $\hat{f}(v)=v\circ f$ is also sequential, and the transposed matrix ${(f_{kj})}^{t}$ represents $\hat{f}$, with $f$ and $\hat f$ related by $\langle f(u),v\rangle=\langle u,\hat f (v)\rangle.$
\end{thm}
The proof of Theorem \ref{THM:AdjB} is similar to the corresponding proof of Theorem \ref{THM:adj} for the spaces $\Gamma_{\{M_{k}\}}(X),$ so we omit it.
\section{Introduction}
The Hong-Ou-Mandel experiment \cite{HOM} with two single photons, recently repeated with neutral atoms \cite{AHOM}, manifests the proportionality relation between the visibility of interference and the degree of partial distinguishability due to internal states of bosons \cite{Mandel1991}. Extending this fundamental relation to $N$ identical bosons and fermions \cite{FHOM} is an important open problem with applications in the fields of quantum information and computation, quantum state engineering, and quantum metrology \cite{KLM,Peruzzo,AA,E1,E2,E3,E4,Metcalf,ULO,RWPh,QMet}.
The theory has advanced considerably in recent years \cite{MPBF,SUN,PartDist,Tichy,Tamma,Rohde,MCMS,BB,GL,CrSp}; however, our understanding of the relation between partial distinguishability of $N$ identical particles and their interference on a multiport is still not complete. The non-trivial quantum-to-classical transition of more than two photons \cite{4PhDist,4PhExp,NonM} and the recent observation of a collective (triad) geometric phase in genuine three-photon interference \cite{Triad2}, i.e., a phase attributed to the three photons as a whole, add to the complexity of the problem. The triad phase can be understood as a multi-particle realization of the Pancharatnam-Berry phase \cite{Panch,Berry} in the internal Hilbert space; it is also defined quite similarly to the Bargmann geometric invariant \cite{Bargm,ExpBargm} for three quantum states. Therefore, distinguishability of identical particles is a global property that cannot be reduced to considering distinguishability of only pairs of states \cite{JS}. There is also a similarity to the fully entangled $N$-particle state, which exhibits genuine $N$-particle interference with a collective phase \cite{GHSZ,GHZ,MInt} (due to phases in the individual Mach-Zehnder interferometers), demonstrated in another recent experiment \cite{Triad1}. In view of these important relations, the collective phases of identical particles deserve to be thoroughly investigated. Is it possible to have multiparticle collective phases for $N$ identical particles, independent of the collective phases of $R<N$ particles, and how can one arrange such a case? How should one approach the characterization of multiparticle interference and collective phases in the general case? Can collective phases be detected by popular criteria of quantum behavior?
We formulate a general framework able to provide answers to the above questions and explore the relation between distinguishability due to the internal state discrimination \cite{Hel,Chefles} of identical particles and their interference on a multiport. We also find that weighted graph theory illustrates the relation between partial distinguishability and multiparticle interference and allows us to simplify the proofs of some of the presented results.
In section \ref{sec2} we discuss the general framework for our analysis, introduce the notion of multiparticle interference, define what we call the genuine $N$-particle interference for $N\ge 3$, and make a connection between weighted graph theory (in a generalized form) and the partial distinguishability of independent identical particles in general (mixed) internal states. In section \ref{sec3} we concentrate on identical particles in pure internal states, introduce the notion of a collective multiparticle phase, and prove two theorems on the existence of the genuine (circle-dance) multiparticle interference; there we analyze in detail the case $N=4$ and give an example of the circle-dance interference with single photons in Gaussian spectral states. In section \ref{sec4} we show that the collective $N$-particle phase governing the circle-dance interference of identical particles is a consequence of the genuine $N$th-order quantum correlations between them. Section \ref{sec5} contains the conclusion. Some mathematical details from sections \ref{sec2}-\ref{sec4} are relegated to the appendices.
\section{Permutation cycles, multiparticle interference and graph theory}
\label{sec2}
We consider interference on a unitary multiport of noninteracting identical particles (either bosons or fermions) coming from independent sources. For example, in the case of single photons, a multiport can be a spatial arrangement of beam splitters and phase shifters, or an integrated optical network, as in Refs. \cite{KLM,E1,E2,E3,E4,Metcalf,ULO,RWPh}. Applications are possible even in the case of interacting particles, e.g., to multiparticle scattering in a fixed number of discrete channels \cite{MCMS}, where the scattering into a set of discrete channels plays the role of a unitary transformation.
We fix the number of input and output ports of a multiport to be $M$ and the total number of particles to be $N$. We assume that the input to a multiport is given by the states $|k_1^{(a)}\rangle \langle k_1^{(a)}| \otimes \rho^{(1)}, \ldots, |k_N^{(a)}\rangle \langle k_N^{(a)}| \otimes\rho^{(N)}$, where $\rho^{(i)}$ is the internal state of particle $i$ and $|k_i^{(a)}\rangle$ stands for the quantum mode of input port $k_i$ of the multiport. A multiport performs a unitary transformation ($U$) between the input and output modes as follows $|k^{(a)}\rangle = \sum_{l=1}^M U_{kl}|l^{(b)}\rangle$, or in the second-quantization notation $\hat{a}^\dag_{k,j} = \sum_{l=1}^MU_{kl}\hat{b}^\dag_{l,j}$, where $\hat{a}^\dag_{k,j}$ ($\hat{b}^\dag_{l,j}$) creates a particle in the input mode $|k^{(a)}\rangle$ (respectively, output mode $|l^{(b)}\rangle$) and an internal state $|j\rangle\in \mathcal{H}$.
Throughout the text, when discussing the input and output of a multiport, we will use the following notations: the vector $\mathbf{k} = (k_1,\ldots,k_N)$ will stand for the sequence of input ports occupied by particles (arranged in nondecreasing order when there is more than one particle per port), whereas the vector $\mathbf{l} = (l_1,\ldots,l_N)$ will stand for the same for the output ports.
We will also use $\mathbf{n} = (n_1,\ldots,n_M)$ and $\mathbf{m} = (m_1,\ldots,m_M)$ for the occupations of the input and output ports, respectively, where $n_i$ ($m_i$) denotes the number of particles in the input (output) port $i$. In the main text only the input with at most one particle per port is considered, with the input ports fixed to be $\mathbf{k}=(1,\ldots,N)$, fig.~\ref{F1}(a) (an arbitrary input configuration $\mathbf{n} = (n_1,\ldots,n_M)$ is considered in appendix \ref{appA}).
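The passage between the port sequence $\mathbf{l}$ and the occupation vector $\mathbf{m}$ is pure bookkeeping and can be sketched as follows (function names are ours, purely illustrative):

```python
from collections import Counter

def occupations(ports, M):
    """Occupation vector m = (m_1, ..., m_M) from a sequence of ports
    (ports are labelled 1..M, as in the text)."""
    c = Counter(ports)
    return [c.get(i, 0) for i in range(1, M + 1)]

def port_sequence(m):
    """Nondecreasing port sequence l recovered from an occupation vector m."""
    return [i + 1 for i, n in enumerate(m) for _ in range(n)]
```

The two functions are inverse to each other up to the nondecreasing ordering convention adopted in the text.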
\begin{figure}[htb]
\begin{center}
\includegraphics[width=0.5\textwidth]{fig1ab.eps}
\caption{\textbf{(a)} $N$ identical particles are impinging on a multiport with a unitary matrix $U$. The circular arrows illustrate the $N$-particle cycle $(1,2,3,\ldots,N)$ responsible for the circle-dance interference with deterministically distinguishable particles $\alpha$ and $\beta$ for $\beta\ne \alpha\pm 1$ (\textit{mod} $N$). \textbf{(b)} A weighted directed graph representation of particle distinguishability in panel \textbf{(a)} in the case of pure internal states $|\phi_k\rangle$, $k=1,\ldots,N$, where each particle is a vertex and a directed edge $k\to l$ has the complex weight $w(k,l)\equiv - \ln(\langle\phi_l|\phi_{k}\rangle) = d_{kl} + i\theta_{kl}$, where $d_{kl}$ serves as the distance between the vertices (here $d_{kl} =\infty$ is indicated by the absence of such an edge) and $\theta_{kl}$ is the phase in the direction $k\to l$. Only two $(R\ge 3)$-particle cycles on a finite path -- $1\to2\to \ldots \to 5 \to 1$ and its inverse -- contribute to output probability in panel \textbf{(a)}, therefore the latter depends on a single collective (five-particle) phase along the edges $\theta_{(1,2,3,4,5)} = \theta_{12} + \theta_{23} + \theta_{34} + \theta_{45} + \theta_{51}$ (see the text).}\label{F1}
\end{center}
\end{figure}
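The edge weights of fig.~\ref{F1}(b) can be computed directly from the internal-state overlaps; with the caption's convention $w(k,l)=-\ln\langle\phi_l|\phi_k\rangle=d_{kl}+i\theta_{kl}$, the phase is $\theta_{kl}=-\arg\langle\phi_l|\phi_k\rangle$. A minimal sketch (names ours, $0$-based labels):

```python
import cmath

def edge_weight(overlap):
    """w(k, l) = -ln <phi_l|phi_k> = d_kl + i*theta_kl, as in fig. 1(b).
    `overlap` is the complex number <phi_l|phi_k>."""
    w = -cmath.log(overlap)
    return w.real, w.imag            # (distance d_kl, phase theta_kl)

def collective_phase(overlaps):
    """Collective phase theta_(1,...,N) = sum of edge phases along the cycle
    1 -> 2 -> ... -> N -> 1; `overlaps[j]` is the overlap along the j-th edge
    of the cycle (an illustrative convention)."""
    return sum(edge_weight(z)[1] for z in overlaps)
```

Note that only the phases add along the cycle, while the real parts $d_{kl}$ suppress the magnitude of the corresponding cycle contribution.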
The internal states define identical particle distinguishability which affects their interference on a linear multiport. For example, in the HOM experiment \cite{HOM} the main source of photon distinguishability was the arrival time which was recorded to relate it to the dip in the coincidence counting. One can characterize the state of partial distinguishability of identical particles by the degree of possible internal state discrimination. We are mainly interested in the probability of an output configuration $\mathbf{m}$ without account of the internal states (i.e., when the internal states are not resolved by the particle counting detectors or simply ignored) which is mathematically expressed by summation over the probabilities with resolved internal states~\footnote{For example, in the HOM experiment case we would be interested in the probability of two single photons leaving certain output ports $l_1$ and $l_2$ of a balanced beam splitter, and not in the probability of one photon leaving mode $l_1$ at time $\tau_1$ and the other leaving port $l_2$ at time $\tau_2$.}. The relation between discrimination of the internal states and its effect on multiparticle interference will be briefly considered in section \ref{sec4} (particle counting with discrimination of the internal states is also analyzed to the necessary detail in appendix \ref{appA}). Single photons on an optical multiport, as in Refs. \cite{KLM,E1,E2,E3,E4,Metcalf,ULO,RWPh}, are the main application of the theory. To some extent, the internal state discrimination is routinely done with single photons (e.g., detection of the time of arrival serves as the partial state discrimination).
We will use the fact \cite{PartDist} that just one complex-valued function on the symmetric group $ \mathcal{S}_N$ for $N$ particles accounts for the effect of partial distinguishability, if one ignores the information on the internal states at a multiport output (this function can be thought of as a generalization of Mandel's interference visibility $\&$ distinguishability parameter \cite{HOM,Mandel1991} for $N\ge 3$ photons; for a full discussion the reader should consult Ref. \cite{PartDist}). The exact form of the probability of an output configuration $\mathbf{m}$ for interfering identical particles on a linear multiport was studied before \cite{PartDist,BB} (for completeness, we also provide a brief derivation in appendix \ref{appA}). For independent particles in internal states $\rho^{(1)},\ldots, \rho^{(N)}$ impinging on the input ports $\mathbf{k} = (1,\ldots,N)$ of a unitary multiport $U$, the probability to detect a configuration $\mathbf{m}= (m_1,\ldots,m_M)$ in the output ports reads \cite{PartDist,BB}
\begin{equation}
p_N(\mathbf{l}|\mathbf{k}) = \frac{1}{\mathbf{m}!} \sum_{\tau,\sigma\in\mathcal{S}_N}J(\tau^{-1}\sigma)
\prod_{k=1}^N U^*_{k,l_{\tau(k)}}U_{k,l_{\sigma(k)}},
\en{E1}
where $\mathbf{m}!\equiv m_1!\ldots m_M!$ and the $J$-function accounts for particle distinguishability. In our case it factorizes \cite{PartDist} according to the disjoint cycle decomposition (see, for instance, Ref. \cite{Stanley}) of its argument. Denoting by $cyc(\sigma)$ the set of disjoint cycles of a permutation~$\sigma$, we get
\begin{eqnarray}
\label{E3}
J(\sigma) &=& \prod_{\nu\in cyc(\sigma)} g(\nu),\nonumber\\
g(\nu)& \equiv& (\pm1)^{|\nu|-1} \mathrm{Tr}\bigl( \rho^{(k_{|\nu|})}\rho^{(k_{|\nu|-1})}\ldots \rho^{(k_{1})}\bigr),
\end{eqnarray}
where $\nu= (k_1,\ldots,k_{|\nu|})$, $k_\alpha\in \{1,\ldots,N\}$, stands for the cycle $k_1\to k_2\to \ldots \to k_{|\nu|}\to k_1$ and $|\nu|$ for its length, the minus sign is due to the signature of a cycle $\mathrm{sgn}(\nu) = (-1)^{|\nu|-1}$ \cite{Stanley} and applies to fermions.
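For pure internal states $\rho^{(k)}=|\phi_k\rangle\langle\phi_k|$ the trace in Eq.~(\ref{E3}) reduces to the cyclic product of overlaps $\langle\phi_{k_2}|\phi_{k_1}\rangle\langle\phi_{k_3}|\phi_{k_2}\rangle\cdots\langle\phi_{k_1}|\phi_{k_{|\nu|}}\rangle$, so the $J$-function can be evaluated directly from the cycle decomposition. An illustrative sketch (names ours; `overlap(a, b)` returns $\langle\phi_a|\phi_b\rangle$ with $0$-based labels):

```python
def cycles(perm):
    """Disjoint cycles of a permutation given as a list: perm[i] = sigma(i)."""
    seen, out = set(), []
    for start in range(len(perm)):
        if start in seen:
            continue
        cyc, i = [], start
        while i not in seen:
            seen.add(i)
            cyc.append(i)
            i = perm[i]
        out.append(cyc)
    return out

def J(perm, overlap, fermions=False):
    """J(sigma) of Eq. (3) for pure internal states: the product over disjoint
    cycles of (+-1)^{|nu|-1} times the cyclic product of overlaps
    <phi_{k_{j+1}}|phi_{k_j}> around each cycle."""
    result = 1 + 0j
    for cyc in cycles(perm):
        g = (-1) ** (len(cyc) - 1) if fermions else 1
        for a, b in zip(cyc, cyc[1:] + cyc[:1]):
            g *= overlap(b, a)        # <phi_{k_{j+1}} | phi_{k_j}>
        result *= g
    return result
```

For the identity permutation every cycle has length one and $J=1$; for a transposition $J=\pm|\langle\phi_k|\phi_l\rangle|^2$, reproducing the familiar two-particle (HOM-type) weight.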
The specific form of probability in Eqs. (\ref{E1}) and (\ref{E3}) can be understood without a detailed derivation. Indeed, for identical particles the probability $p_N(\mathbf{l}|\mathbf{k})$ must be symmetric under permutations of either output ports $\mathbf{l}$ or input ports $\mathbf{k}$ (particles do not have labels). Let us consider a specific $N$-particle transition amplitude on a multiport $\mathcal{A}(\mathbf{k}\to \mathbf{l})\equiv \prod_{k=1}^N U_{k,l_{k}} $ with $\mathbf{l} = (l_1,\ldots,l_N)$ (i.e., particle $k$ goes to output port $l_k$). By the above symmetry of probability, any permutation $\sigma$ of identical particles over the output ports gives another valid transition amplitude contributing to the probability, $\mathcal{A}({\mathbf{k}\to\sigma(\mathbf{l})})= \prod_{k=1}^N U_{k,l_{\sigma(k)}}$. The probability, linear in the amplitude and its complex conjugate, is given by the summation over two permutations $\sigma, \tau\in \mathcal{S}_N$ in the two amplitudes as in Eq. (\ref{E1}) (with the signature of a permutation in the case of fermions), where there is also a factor equal to the scalar product in $\mathcal{H}^{\otimes N}$ of the internal states, similarly permuted, with the result dependent only on the relative permutation, described by the function $J(\tau^{-1}\sigma)$ in Eq. (\ref{E1}). The disjoint cycles of $\tau^{-1}\sigma$ contribute independent factors, since they permute different particles (the cross-cycle particle distinguishability does not contribute), hence the $J$-function must be in the form of Eq. (\ref{E3}) (which is easily established by considering pure internal states). Finally, by using the arbitrary permutations, we have permuted $m_l$ particles in output port $l$ as well, thus have performed the multiple counting of identical terms, hence the factor $\mathbf{m}!$ in the denominator.
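For small $N$, Eq.~(\ref{E1}) can be evaluated by brute force over $\mathcal{S}_N$. The sketch below (bosons, pure internal states, input ports $(1,\ldots,N)$; names ours, indices $0$-based) reproduces, for instance, the HOM coincidence probability $(1-|\langle\phi_1|\phi_2\rangle|^2)/2$ on a balanced beam splitter.

```python
import itertools, math

def J_pure(perm, overlap):
    """J(sigma) of Eq. (3) for bosons and pure internal states: the product
    over disjoint cycles of the cyclic product of overlaps."""
    seen, val = set(), 1 + 0j
    for start in range(len(perm)):
        if start in seen:
            continue
        cyc, i = [], start
        while i not in seen:
            seen.add(i)
            cyc.append(i)
            i = perm[i]
        for a, b in zip(cyc, cyc[1:] + cyc[:1]):
            val *= overlap(b, a)      # <phi_{k_{j+1}} | phi_{k_j}>
    return val

def output_probability(U, l_out, overlap):
    """Brute-force Eq. (1) for bosons:
    p = (1/m!) sum_{tau,sigma} J(tau^-1 sigma)
        prod_k conj(U[k][l_tau(k)]) U[k][l_sigma(k)].
    Illustrative sketch only; `overlap(a, b)` returns <phi_a|phi_b>."""
    N = len(l_out)
    m_fact = 1
    for port in set(l_out):
        m_fact *= math.factorial(l_out.count(port))
    p = 0j
    for tau in itertools.permutations(range(N)):
        for sigma in itertools.permutations(range(N)):
            rel = [tau.index(sigma[k]) for k in range(N)]   # tau^-1 sigma
            amp = 1 + 0j
            for k in range(N):
                amp *= (U[k][l_out[tau[k]]].conjugate()
                        * U[k][l_out[sigma[k]]])
            p += J_pure(rel, overlap) * amp
    return (p / m_fact).real
```

For fully indistinguishable photons the coincidence probability vanishes (the HOM dip) and the bunched outputs each occur with probability $1/2$.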
Since the concept of distinguishability of identical particles is independent of the unitary transformation employed by a multiport, below we will focus on the state of particle distinguishability itself when discussing multiparticle interference on a multiport. Indeed, on a generic multiport, i.e., one where none of the matrix elements $U_{kl}$ is zero, all permutations of particles can contribute to the coincidence count output probability and one can observe the discussed examples of multiparticle interference on such a multiport (obviously, an optimization is possible by selecting a particular multiport). We will return to this when discussing a specific example in section \ref{sec3}.
From the above discussion, one can derive the physical meaning of the cycles in Eq. (\ref{E3}): to each such cycle can be associated the interference of \textit{only} the particles involved in the cycle. Below we will frequently use the term ``$R$-particle interference" which simply means the contribution from an $R$-cycle in Eq. (\ref{E3}) to the output probability in Eq. (\ref{E1}). Note, however, that a general summation term in Eq. (\ref{E3}) consists of simultaneous and independent interferences of $S_1,\ldots,S_d$ particles according to a partition of $N = S_1+\ldots +S_d$.
The relative significance of the contribution of $R$-particle interference depends on the $g$-weights of the $R$-cycles in the respective probability. Obviously, there is always two-particle interference, unless the particles have orthogonal internal states, i.e., $\rho^{(k)}\rho^{(l)}= 0$ for all $k\ne l$ (distinguishable particles, i.e., the classical case \cite{PartDist}). It may happen that some cycles have zero $g$-weight such that there is just two-particle and $N$-particle interference on a multiport. We will say that this case realizes the ``genuine $N$-particle interference", meaning that no $R$-particle interference with $3\le R\le N-1$ is realized at the same time.
Obviously, no $N$-particle interference contributes to the output probabilities, if one sends a subset of $N$ particles to a multiport. Similarly, no $N$-particle interference contributes to the respective (marginal) output probabilities, when $N$ particles are sent to a multiport input, but at the output the information on some of the particles is lost, or discarded (by binning together the output configurations $\mathbf{m}$ containing a given configuration $\mathbf{m}^\prime $ of less than $N$ particles). This is due to the simple fact that the marginal probability of an output configuration for $R$ out of $N$ particles depends \textit{only} on the $d$-cycles with $d\le R$. Indeed, one can use the above discussion of Eqs. (\ref{E1})-(\ref{E3}) to arrive at this conclusion. The probability $p_N(\mathbf{l}^{\prime}|\mathbf{k})$ of an output $\mathbf{m}^\prime$ with $R<N$ particles $\mathbf{l}^\prime=(l^\prime_1,\ldots,l^\prime_R)$ is obtained by partitioning the transition $\mathbf{k}\to \mathbf{l}$ as $(\mathbf{k}^\prime\to\mathbf{l}^\prime,\mathbf{k}^{\prime\prime}\to\mathbf{l}^{\prime\prime})$, summing up over $\mathbf{l}^{\prime\prime}$ (which simply removes the respective matrix elements $U_{kl^{\prime\prime}}$ by the unitarity of $U$) and averaging over all $(R,N-R)$-partitions $(\mathbf{k}^{\prime},\mathbf{k}^{\prime\prime})$ of the input ports, i.e.,
\begin{equation}
p_N(\mathbf{l}^{\prime}|\mathbf{k})
= \left({N \atop R}\right)^{-1} \sum_{ \mathbf{k}^{\prime}\subset \mathbf{k} } p_R(\mathbf{l}^{\prime}|\mathbf{k}^{\prime}),
\en{E10}
where the summation is over all subsets $\mathbf{k}^\prime$ having $R$ indices (for a formal mathematical derivation, see appendix \ref{appD}). In Eq. (\ref{E10}) $p_R(\mathbf{l}^{\prime}|\mathbf{k}^{\prime})$ is the probability for $R$ particles at input ports $\mathbf{k}^{\prime}$ to end up at output ports $\mathbf{l}^{\prime}$, given similarly as in Eqs. (\ref{E1}) and (\ref{E3}) but now for $R$ particles, thus it depends only on $d$-cycles with $d\le R$.
By using the Cauchy-Schwarz inequality for the trace-product of operators, one can prove an important upper bound on the $R$-cycle $g$-weight in terms of the 2-cycle $g$-weights of the same particles (see appendix \ref{appB})
\begin{equation}
|\mathrm{Tr}(\rho^{(k_1)}\ldots \rho^{(k_R)})|^2 \le \prod_{\alpha=1}^R \mathrm{Tr}(\rho^{(k_\alpha)}\rho^{(k_{\alpha+1})})
\en{E11}
where the index $\alpha$ is taken $\textit{mod}\; R$.
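The bound in Eq. (\ref{E11}) is straightforward to check numerically. The following Python sketch (an illustration only; the internal dimension, cycle length, and random seed are arbitrary choices) draws random mixed internal states and compares the two sides of the inequality for one $R$-cycle.

```python
import numpy as np

rng = np.random.default_rng(7)

def random_density_matrix(d, rng):
    # rho = A A^dagger / Tr(A A^dagger) is a generic full-rank mixed state
    a = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    rho = a @ a.conj().T
    return rho / np.trace(rho).real

d, R = 4, 5
rhos = [random_density_matrix(d, rng) for _ in range(R)]

# Left-hand side of Eq. (E11): |Tr(rho^(k_1) ... rho^(k_R))|^2
prod = np.eye(d, dtype=complex)
for rho in rhos:
    prod = prod @ rho
lhs = abs(np.trace(prod))**2

# Right-hand side: product of the two-cycle weights Tr(rho^(k_a) rho^(k_{a+1})),
# with the index taken mod R
rhs = np.prod([np.trace(rhos[k] @ rhos[(k + 1) % R]).real for k in range(R)])

assert lhs <= rhs + 1e-12
```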
Up to now, the treatment of identical particle distinguishability was based mainly on combinatorics (permutations), but Eq. (\ref{E11}) suggests a graph interpretation. This, however, requires generalizing the concept of a graph. Let us think of identical particles as vertices, with vertex $i$ being associated with internal state $\rho^{(i)}$ and all the vertices being connected by edges. Our main object of study is a cycle $\nu= (k_1,\ldots, k_R)$ on the edges connecting vertices $k_1,\ldots,k_R$ to which we set a complex weight
\begin{eqnarray}
\label{weight}
w(\nu) &\equiv& -\ln\left(\mathrm{Tr}\bigl\{ \rho^{(k_{|\nu|})}\rho^{(k_{|\nu|-1})}\ldots \rho^{(k_{1})}\bigr\}\right) \nonumber\\
& = & D_{(k_1,\ldots,k_R)} + i\theta_{(k_1,\ldots,k_R)},
\end{eqnarray}
where $D_{(k_1,\ldots,k_R)}$ is the path length of the cycle, whereas $\theta_{(k_1,\ldots,k_R)}$ (see also fig. \ref{F1}(b)) we will call the collective $R$-particle phase of the cycle. Note that reversing the cycle orientation changes the sign of the cycle phase (a nonzero cycle phase selects a direction of the path along the cycle).
A larger path distance of a cycle means a smaller contribution to the output probability on a multiport
via Eq.~(\ref{E3}), whereas Eq. (\ref{E11}) bounds twice the path distance of a higher-order cycle from below by the sum of the two-vertex cycle distances on the same edges (a generalization of the usual path addition on a graph). In the case of pure internal states, when the inequality in Eq. (\ref{E11}) becomes an equality, one recovers the usual additive distance on a weighted graph and the phases lead to an additional directed graph (see the next section).
The completely indistinguishable particles and the distinguishable (classical) particles have degenerate graphs. Completely indistinguishable particles from independent sources have the same pure internal state (see Refs. \cite{PartDist,BB}) and map to the zero-distance graph with all the vertices coinciding (which makes sense, since the particles are completely indistinguishable). The deterministically distinguishable particles, $\rho^{(i)}\rho^{(j)}=0$ for $i\ne j$, are mapped to a set of vertices lying at infinite distance from each other (the infinite distance between two vertices will be illustrated as the absence of the respective edge, as in fig. \ref{F1}(b)).
\section{Particles in pure internal states and $N$-particle interference}
\label{sec3}
Consider particles in pure internal states $\rho^{(k)} = |\phi_k\rangle\langle \phi_k|$, $k=1,\ldots,N$. Let us set $ \langle \phi_k|\phi_l\rangle \equiv r_{kl}e^{i\theta_{kl}} = \exp\{-d_{kl} +i \theta_{kl}\}$, where $r_{kl} = e^{-d_{kl}}$ is the absolute value and $\theta_{kl} = -\theta_{lk}$ is the phase of the inner product. We will call $\theta_{kl}$ the mutual phase of particle $l$ with respect to $k$. For the path distance along an $R$-particle cycle $(k_1,\ldots,k_R)$ we now have from Eq. (\ref{weight})
\begin{equation}
D_{(k_1,\ldots,k_R)} =\sum_{\alpha=1}^R d_{k_\alpha,k_{\alpha+1}},
\en{gGraph}
whereas for the collective $R$-particle phase along the cycle
\begin{equation}
\theta_{(k_1,\ldots,k_R) } = \theta_{k_1 k_2} + \theta_{k_2 k_3} + \ldots + \theta_{k_R k_1}
\en{Rphase}
(note the difference between the two concepts of the phase: for $R=2$ the mutual phase $\theta_{kl}$ can be arbitrary, whereas $\theta_{(k,l)}=0$).
We can interpret $d_{kl}$ as the distance between the vertices $k$ and $l$ in a usual distance-weighted graph, whereas the collective phase $\theta_{(k_1,\ldots,k_R)}$ (\ref{Rphase}) leads to a weighted directed graph, see fig.~\ref{F2}, where each directed edge $k\to l$ acquires a real weight $\theta_{kl}$. The two-graph representation is just a single directed graph with complex weights on the edges given by a Hermitian adjacency matrix $w_{kl}\equiv -\ln(\langle \phi_l|\phi_k\rangle) = d_{kl} + i\theta_{kl}$.
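The complex-weighted graph just described can be sketched numerically. The snippet below (illustrative; random pure states in an arbitrary internal dimension) builds the Hermitian adjacency matrix $w_{kl} = d_{kl} + i\theta_{kl}$ and checks that the distances are symmetric while a cycle phase changes sign under orientation reversal.

```python
import numpy as np

rng = np.random.default_rng(1)

def random_pure_state(d, rng):
    v = rng.normal(size=d) + 1j * rng.normal(size=d)
    return v / np.linalg.norm(v)

N, d = 4, 6
phis = [random_pure_state(d, rng) for _ in range(N)]

# Hermitian adjacency matrix w_kl = -ln<phi_l|phi_k> = d_kl + i*theta_kl
w = np.zeros((N, N), dtype=complex)
for k in range(N):
    for l in range(N):
        if k != l:
            # np.vdot conjugates its first argument: <phi_l|phi_k>
            w[k, l] = -np.log(np.vdot(phis[l], phis[k]))

# Hermiticity: distances d_kl are symmetric, phases theta_kl antisymmetric
assert np.allclose(w, w.conj().T)

# Collective phase of the cycle (1,2,3): oriented sum of the mutual phases;
# reversing the orientation flips its sign
theta = w.imag
cycle_phase = theta[0, 1] + theta[1, 2] + theta[2, 0]
assert np.isclose(theta[1, 0] + theta[2, 1] + theta[0, 2], -cycle_phase)
```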
\begin{figure}[htb]
\begin{center}
\includegraphics[width=0.35\textwidth]{fig2.eps}
\caption{The directed graph for the multiparticle phases, where the mutual phase $\theta_{kl} \equiv \mathrm{arg}(\langle\phi_k|\phi_{l}\rangle)$ weights the directed edge $k\to l$. An $(R\ge 3)$-particle collective phase corresponds to a closed oriented path on $R$ vertices. Each closed path on the edges has a corresponding multiparticle phase obtained by summation of the mutual phases on the edges passed (with the minus sign, when passing an edge in the direction inverse to the indicated one). The directed paths $1\to 2 \to 3\to 1$, $1\to3\to4\to 1$ and $1\to 2\to4\to 1$ allow one to express any closed oriented path on the edges $1,2,3,4$. }\label{F2}
\end{center}
\end{figure}
Consider the directed graph representation of the collective phases, assuming that none of $ r_{kl} $ is zero, i.e., the internal states are only partially distinguishable by the state discrimination \cite{Hel,Chefles}. The two-particle collective phase $\theta_{(k,l)} = 0$, since the two directed edges $k\to l$ and $l\to k$ cancel each other. For $N=3$ all permutations in $\mathcal{S}_3$ are
cycles themselves. Since the collective phases of three particles on the same vertices can differ only by a sign (transposition of two particles reverses the path orientation in fig.~\ref{F2}, or $\theta_{(i,j,k)} = \mathrm{sgn}(\sigma)\theta_{(1,2,3)}$, for $\sigma(1,2,3) = (i,j,k)$), consider the 3-cycle $\nu=(1,2,3)$. It has the phase $\theta_{(1,2,3)} \equiv \theta_{12} + \theta_{23} +\theta_{31}$. From the above discussion and section \ref{sec2} it is now clear that precisely this phase governs the genuine three-particle interference of Ref. \cite{Triad2}, observed experimentally as a variation in the output probability according to this phase (by keeping $r_{kl}$ fixed, while varying the mutual phases $\theta_{kl}$).
We are interested in setups where there is a collective $N$-particle phase that cannot be reconstructed from detection of fewer than $N$ particles, e.g., from any marginal probability. We will call such a phase the ``genuine $N$-particle phase". This notion is similar to the triad phase introduced in Ref. \cite{Triad2}. From the above discussion and section \ref{sec2}, it follows that the two notions coincide for $N=3$. Moreover, only the $N$-particle collective phase
$\theta_{(k_1,\ldots,k_N)}$ can serve as a genuine $(N\ge 4)$-particle phase, and only when it is independent of all the $R$-particle collective phases $\theta_{(l_1,\ldots,l_R)}$ for $3\le R\le N-1$. We can now state our first result.
\medskip
\noindent\textit{Theorem 1.--} Identical particles in pure internal states with no two states being orthogonal do not allow for a genuine $N$-particle phase in an interference experiment on a linear multiport.
\textit{Proof.--} By the above discussion, the theorem will follow from an analysis of the collective phases in Eq.~(\ref{Rphase}). By the linear relations between the mutual and collective phases, the set of all cycles contains as many independent collective phases as there are independent mutual phases $\theta_{kl}$, i.e., exactly \mbox{$(N-1)(N-2)/2$} phases. Indeed, there are $N(N-1)/2$ mutual phases $\theta_{kl}$, but since they are defined only up to the global phase of a state, exactly \mbox{$N-1$} of them can be preset to given values by employing the global phase transformation $|\phi_k\rangle \to e^{-i\theta_k}|\phi_k\rangle$, resulting in a phase shift $\theta_{kl} \to \theta_{kl}+\theta_k - \theta_l$. We can now state the following lemma, which implies theorem 1.
\textit{Lemma 1.--} For $N$ particles in non-orthogonal pure internal states there are $(N-1)(N-2)/2$ independent triad phases, e.g., $\theta_{(1,k,l)}$ for $2\le k< l \le N$. Therefore, all $R$-particle phases, $3\le R \le N$, are linear combinations of the triad phases.
\textit{Proof.--} Let us start with $N=4$ particles, which corresponds to the fully-connected subgraph on the vertices $1,2,3,4$ in fig.~\ref{F2}. In this case, we have eight different $3$-cycles, i.e., closed oriented paths on three edges, divided into two groups of four phases: $\theta_{(1,2,3)}$, $\theta_{(1,3,4)}$, $\theta_{(1,2,4)}$, and $\theta_{(2,3,4)}$, and their sign-inverted counterparts (obtained by transposition of two particles in a cycle). Moreover, we have
\begin{equation}
\theta_{(2,3,4)} = \theta_{(1,2,3)} + \theta_{(1,3,4)} - \theta_{(1,2,4)}.
\en{E6}
We need to show that we can express three independent mutual phases as functions of the triad phases $\theta_{(1,2,3)}$, $\theta_{(1,3,4)}$ and $\theta_{(1,2,4)}$. Selecting the mutual phases $\theta_{12}$, $\theta_{23}$, and $\theta_{34}$ as the basis (while setting the remaining mutual phases to zero, by using the arbitrariness of the global phase of an internal state) we obtain
\begin{equation}
\theta_{12} = \theta_{(1,2,4)},\; \theta_{23} = \theta_{(1,2,3)}-\theta_{(1,2,4)},\; \theta_{34} = \theta_{(1,3,4)}.
\en{Ph2}
Thus all $R$-particle phases are expressed through the phases $\theta_{(1,2,3)}$, $\theta_{(1,3,4)}$ and $\theta_{(1,2,4)}$, which concludes the proof for $N=4$.
Consider now $N> 4$ particles (it is enough to consider just five particles, as in fig. \ref{F2}). We have to show that the set of phases $\theta_{(1,k,l)}$ forms a basis for all possible triad phases. For any triad phase $\theta_{(i,k,l)}$, in the graph representation the corresponding oriented path lies within the complete subgraph on the vertices $1,i,k,l$, which returns us to $N\le 4$ particles $1,i,k,l$, i.e., to the case considered above. Q.E.D.
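The linear relation of Eq. (\ref{E6}) between the triad phases can be confirmed numerically for random pure states; the following sketch (arbitrary seed and internal dimension) checks it modulo $2\pi$.

```python
import numpy as np

rng = np.random.default_rng(3)

def random_pure_state(d, rng):
    v = rng.normal(size=d) + 1j * rng.normal(size=d)
    return v / np.linalg.norm(v)

# Four random pure internal states (0-based labels for particles 1..4)
phis = [random_pure_state(5, rng) for _ in range(4)]

def triad_phase(i, j, k):
    # theta_(i,j,k) = theta_ij + theta_jk + theta_ki, theta_ab = arg<phi_a|phi_b>
    t = lambda a, b: np.angle(np.vdot(phis[a], phis[b]))
    return t(i, j) + t(j, k) + t(k, i)

lhs = triad_phase(1, 2, 3)   # theta_(2,3,4) in the paper's 1-based labels
rhs = triad_phase(0, 1, 2) + triad_phase(0, 2, 3) - triad_phase(0, 1, 3)

# Eq. (E6), compared modulo 2*pi via the complex exponential
assert np.isclose(np.exp(1j * lhs), np.exp(1j * rhs))
```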
In section \ref{sec2} we have introduced the notion of the genuine $N$-particle interference and in this section the notion of the genuine $N$-particle phase. Theorem 1 relates the two notions for pure internal states, by stating, in other words, that the genuine $N$-particle phase could be found only in a genuine $N$-particle interference, i.e., when the $R$-particle interferences are forbidden for all $3\le R\le N-1$. (We do not know if this relation extends to \textit{mixed} internal states, since the collective phase of a higher-order cycle in Eq. (\ref{weight}) has no simple linear dependence on the collective phases of lower-order cycles on the same edges.)
\subsection{Genuine $N$-particle phase and distinguishability}
\label{sec3A}
Theorem 1 states that a genuine $(N\ge 4)$-particle phase may appear only if some particles have orthogonal internal states, i.e., when they behave as distinguishable classical particles with respect to each other, since their internal states can be deterministically discriminated. The deterministic distinguishability prevents quantum interference of two particles \cite{HOM,Mandel1991}. For $N=3$, removing one edge prevents the existence of $3$-cycles and therefore three-particle interference. On the other hand, the genuine $(N\ge 4)$-particle interference is possible when the respective graph consists of just one cycle, see fig. \ref{F1}(b), which prevents $R$-particle interference for $3\le R \le N-1$ simply by the absence of an edge. Such an interference would differ conceptually both from the three-particle interference observed in Ref. \cite{Triad2} and from the genuine $N$-particle interference from the fully-entangled $N$-particle state \cite{GHSZ,MInt} reported in Ref.~\cite{Triad1} with three photons (by requiring the deterministic distinguishability of particles).
We must find conditions on the internal states of particles that allow for a variable $N$-particle phase (\ref{Rphase}) while the corresponding graph consists of a single cycle.
A given set of parameters $\{r_{kl}\ge 0, 0\le \theta_{kl}< 2\pi; 1\le k<l\le N\}$ comes from the inner products of $N$ vectors $|\phi_k\rangle$, $k=1,\ldots, N$, i.e., satisfies $\langle\phi_k|\phi_l\rangle = r_{kl}e^{i\theta_{kl}}$, if and only if the respective matrix $H_{kl}\equiv r_{kl}e^{i\theta_{kl}}$ (with $H_{kk}=1$) is a positive semidefinite Hermitian matrix. The set of conditions $\sum_{l=1 }^N(1-\delta_{kl}) |H_{kl}| =\sum_{l=1}^N (1-\delta_{kl})r_{kl} \le 1 $ for $k=1,\ldots,N$ (here $\delta_{kl}$ is the Kronecker delta) is sufficient for $H$ to be such a matrix, since the eigenvalues $\lambda_1,\ldots,\lambda_N$ of $H$ are bounded as follows (by the Gershgorin circle theorem \cite{Gersh})
\begin{equation}
|\lambda_k - 1| \le \sum_{l=1}^N (1-\delta_{kl})r_{kl} \le 1
\en{EQ3}
and therefore are all non-negative. Note that the mutual phases $\theta_{kl}$ remain free parameters. We can now formulate our second result.
\medskip
\noindent\textit{Theorem 2.--} Identical particles in linearly independent internal states $|\phi_1\rangle, \ldots, |\phi_N\rangle$, with each state orthogonal to all others except two,
\begin{eqnarray}
\label{E9}
\langle\phi_k|\phi_{k\pm 1}\rangle \ne 0,\quad
\langle\phi_k|\phi_{l}\rangle = 0, \quad l\ne k\pm 1 \quad \mathrm{mod} \; N,\quad
\end{eqnarray}
can realize the genuine $N$-particle interference on a multiport, governed by a genuine collective $N$-particle phase Eq.~(\ref{Rphase}) due to the permutation cycle $1\to 2 \to \ldots \to N \to 1$ and its inverse (independent from the two-particle interference parameters), whereas there is no $R$-particle interference for \mbox{$3\le R \le N-1$.}
\textit{Proof.--} From the above discussion it is clear that theorem 2 follows if there exist state vectors with the inner products as in Eq. (\ref{E9}) and free mutual phases (thus allowing for a free $N$-particle collective phase). By Gershgorin's circle theorem (\ref{EQ3}), such state vectors do exist under the condition that \mbox{$ |\langle\phi_k|\phi_{k-1}\rangle| + |\langle\phi_k|\phi_{k+1}\rangle| \le 1$}, for $k=1,\ldots, N$ (\textit{mod} $N$). \mbox{Q.E.D.}
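The Gershgorin-based construction in the proof can be illustrated numerically: build the Gram matrix of Eq. (\ref{E9}) with nearest-neighbor overlaps of modulus $r$ (satisfying $2r\le 1$) and free phases, verify positive semidefiniteness, and recover explicit state vectors from its Hermitian square root. The values below are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(5)
N, r = 5, 0.4                    # 2*r <= 1 meets the Gershgorin condition

# Gram matrix of Eq. (E9): unit diagonal, nearest neighbours on the
# N-cycle coupled with modulus r and free (random) phases, zeros elsewhere
H = np.eye(N, dtype=complex)
for k in range(N):
    phase = np.exp(1j * rng.uniform(0, 2 * np.pi))
    H[k, (k + 1) % N] = r * phase
    H[(k + 1) % N, k] = r * phase.conjugate()

# Gershgorin: |lambda - 1| <= 2*r, so every eigenvalue is >= 1 - 2*r >= 0
evals = np.linalg.eigvalsh(H)
assert evals.min() >= 1 - 2 * r - 1e-9

# Explicit state vectors: columns of the Hermitian square root S of H,
# so that <phi_k|phi_l> = (S^dagger S)_kl = H_kl
w, V = np.linalg.eigh(H)
S = V @ np.diag(np.sqrt(np.clip(w, 0, None))) @ V.conj().T
assert np.allclose(S.conj().T @ S, H)
```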
Due to the graphical representation by a polygon with the particles as the vertices and no internal edges, see fig.~\ref{F1}(b), such single-cycle $N$-particle interference can be termed the ``circle-dance interference".
Let us further discuss an explicit example of the circle-dance interference with $N=4$ particles and how to detect the respective four-particle collective phase in an experiment. First, we construct the required vectors. One can always find an orthonormal basis $(|e_1\rangle, |e_2\rangle, |e_3\rangle,|e_4\rangle)$ such that the matrix $\Phi_{kl} \equiv \langle e_k|\phi_l\rangle$ reads~\footnote{Since only the cyclic order $(1,2,3,4)$ of the state vectors is significant, the form of Eq. (\ref{E7}) is preserved (though in another basis) for the cyclic permutations of the state vectors.}
\begin{eqnarray}
&&\!\!\!\! \Phi = \left(\begin{array}{cccc}
1 & 0 & 0 & 0 \\
r_{12}e^{i\theta_{12}} & \sqrt{1-r^2_{12}} & 0 & 0 \\
0 & \frac{r_{23}}{\sqrt{1-r^2_{12}}}e^{i\theta_{23}} & \frac{\sqrt{1-r^2_{12} - r^2_{23}}}{\sqrt{1-r^2_{12}}} & 0 \\
r_{14} e^{i\theta_{14}} & - \frac{r_{12}r_{14}}{\sqrt{1-r^2_{12}}}e^{i(\theta_{14}-\theta_{12})} & \Phi_{34} & \Phi_{44}
\end{array} \right), \nonumber\\
&& \!\!\! \Phi_{34} = \frac{r_{34}(1-r^2_{12})e^{i\theta_{34}} + r_{12}r_{23}r_{14}e^{i(\theta_{14} - \theta_{12}-\theta_{23})}}{\left[(1-r^2_{12})(1-r^2_{12}-r^2_{23})\right]^\frac12},\nonumber\\
&& \!\!\! \Phi_{44} = \left( \frac{1-r^2_{12} - r^2_{14}}{1-r^2_{12}} - |\Phi_{34}|^2 \right)^{\frac12},
\label{E7}\end{eqnarray}
where we use $\Phi_{kk}$ to satisfy the normalization condition (set to be real by the freedom of the global phase of a quantum state) and the rest of the row elements to satisfy the cross-state inner products. Obviously, $r_{k,k+1}$ satisfy certain conditions for the above construction to make sense (to satisfy the normalization of the state vectors the expressions in the square roots must be positive). While the set of four conditions $ r_{k,k-1} +r_{k,k+1} \le 1$ (obtained from Gershgorin's circle theorem (\ref{EQ3})) is sufficient, an explicit analysis in this case shows that $ \sum_{k=1}^4 r^2_{k,k+1} \le 1$ is also sufficient (see appendix \ref{appNEW}). For equal values of $r_{k,k+1}\equiv r$ the two conditions coincide, giving $r\le 1/2$.
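The construction of Eq. (\ref{E7}) can be verified directly: the sketch below (with the illustrative choice $r_{k,k+1}=0.4$, which satisfies $\sum_k r^2_{k,k+1}\le 1$, and random mutual phases) builds the matrix $\Phi$ and checks the normalization, the nearest-neighbor overlaps, and the orthogonality of the non-neighbor states.

```python
import numpy as np

rng = np.random.default_rng(11)

# Illustrative two-particle overlap moduli and free mutual phases
r12 = r23 = r34 = r14 = 0.4
th12, th23, th34, th14 = rng.uniform(0, 2 * np.pi, size=4)

s12 = np.sqrt(1 - r12**2)
Phi34 = (r34 * (1 - r12**2) * np.exp(1j * th34)
         + r12 * r23 * r14 * np.exp(1j * (th14 - th12 - th23))) \
        / np.sqrt((1 - r12**2) * (1 - r12**2 - r23**2))
Phi44 = np.sqrt((1 - r12**2 - r14**2) / (1 - r12**2) - abs(Phi34)**2)

# Rows are the state vectors |phi_1>, ..., |phi_4> in the basis |e_j>
Phi = np.array([
    [1, 0, 0, 0],
    [r12 * np.exp(1j * th12), s12, 0, 0],
    [0, r23 * np.exp(1j * th23) / s12,
     np.sqrt(1 - r12**2 - r23**2) / s12, 0],
    [r14 * np.exp(1j * th14),
     -r12 * r14 * np.exp(1j * (th14 - th12)) / s12, Phi34, Phi44],
])

G = Phi.conj() @ Phi.T                  # Gram matrix G_kl = <phi_k|phi_l>
assert np.allclose(np.diag(G), 1)                         # normalized
assert np.isclose(G[0, 2], 0) and np.isclose(G[1, 3], 0)  # non-neighbours
assert np.isclose(G[0, 1], r12 * np.exp(1j * th12))       # <phi_1|phi_2>
assert np.isclose(G[2, 3], r34 * np.exp(1j * th34))       # <phi_3|phi_4>
```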
Whereas the three-particle interference is not possible with the states in Eq.~(\ref{E7}), there is the four-particle interference due to the cycle $\nu_4=(1,2,3,4)$ with the $g$-weight
\begin{equation}
g(\nu_4) = \pm \prod_{k=1}^4 \langle\phi_k|\phi_{k+1}\rangle = \pm \exp\{ -D_{(1,2,3,4)} +i\theta_{(1,2,3,4)}\}.
\en{E8}
Besides the deterministic distinguishability of non-neighboring particles on the cycle $\nu_4=(1,2,3,4)$, the circle-dance interference requires the state vectors to remain linearly independent (similarly to the three-particle case \cite{Triad2}), i.e., unambiguously distinguishable \cite{Chefles}. Indeed, one can verify that imposing linear dependence of the state vectors $|\phi_k\rangle$ (specifically, in the case of Eq. (\ref{E7}), by setting $\Phi_{44} = 0$) will result in a condition on the mutual phases (making them functions of $r_{k,k+1}$).
We have stated above that any generic multiport can realize the circle-dance interference, i.e., any multiport with no matrix element being zero. Let us consider, as an example, single photons in the symmetric four-port of Ref. \cite{ZZH} corresponding to the diamond-shaped arrangement of four balanced beamsplitters with one phase plate $\varphi$ inserted into one of the internal paths,
\begin{equation}
U = \frac12 \left(\begin{array}{ccrc}
\; e^{i\varphi} & \; e^{i\varphi} & 1 & 1\\
-e^{i\varphi} & -e^{i\varphi} & 1 & 1\\
-1 & \; \, 1 & -1 & 1\\
\;\, 1 & -1 & -1 & 1
\end{array}\right).
\en{U4}
In general, there are $ \frac{N!}{R(N-R)!} $ different $R$-cycles with $N$ particles \cite{Stanley}. The symmetric group $\mathcal{S}_4$ contains six 2-cycles, eight 3-cycles, and six 4-cycles, therefore, there are three permutations which are not cycles themselves, but products of disjoint 2-cycles. Since only the neighbor particles in the cyclic order $(1,2,3,4)$ are connected by the edges in the corresponding graph representation, fig. \ref{F1}(b), only the following eight permutations, besides the trivial $I$, contribute to the probability (divided into groups according to their cycle structure):
\begin{eqnarray}
\label{P1}
&& C_2: \; (1,2),\, (2,3),\, (3,4),\, (1,4); \nonumber\\
&& C_{2\times2}:\; (1,2)(3,4), \,(1,4)(2,3); \nonumber\\
&&C_4: \; (1,2,3,4),\, (4,3,2,1).
\end{eqnarray}
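The cycle-structure counting used above is easy to confirm by enumerating $\mathcal{S}_4$:

```python
from collections import Counter
from itertools import permutations

def cycle_type(perm):
    # Lengths of the disjoint cycles of a permutation of {0, ..., n-1}
    seen, lengths = set(), []
    for start in range(len(perm)):
        if start not in seen:
            length, j = 0, start
            while j not in seen:
                seen.add(j)
                j = perm[j]
                length += 1
            lengths.append(length)
    return tuple(sorted(lengths, reverse=True))

counts = Counter(cycle_type(p) for p in permutations(range(4)))
assert counts[(1, 1, 1, 1)] == 1   # the identity
assert counts[(2, 1, 1)] == 6      # six 2-cycles
assert counts[(3, 1)] == 8         # eight 3-cycles
assert counts[(2, 2)] == 3         # three products of disjoint 2-cycles
assert counts[(4,)] == 6           # six 4-cycles
```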
We can rearrange the summation in the probability formula in Eqs. (\ref{E1}) and (\ref{E3}) to represent it as follows
\begin{equation}
p_N(\mathbf{l}|\mathbf{k}) = \frac{1}{\mathbf{m}!} \sum_{\sigma\in \mathcal{S}_N}\mathrm{per}(\mathcal{U}^*(I)\circ \mathcal{U}(\sigma))\prod_{\nu\in cyc(\sigma)} g(\nu)
\en{P2}
where the symbol ``$\circ$" denotes the Hadamard (by-element) product of matrices, $\mathcal{U}_{\alpha,\beta}(\sigma) \equiv U_{k_{\sigma(\alpha)},l_\beta}$, and
\begin{equation}
\mathrm{per}(A) = \sum_{\sigma\in\mathcal{S}_N} \prod_{k=1}^N A_{k,\sigma(k)}.
\en{P3}
For $N=4$ photons on the symmetric multiport of Eq. (\ref{U4}) with any of the phase values $\varphi\in \{ 0,\pi,\pm\pi/2\}$ and the coincidence detection, $\mathbf{l} = \mathbf{k} = (1,2,3,4)$, for the cycles given in Eq. (\ref{P1}) we get
\begin{equation}
\mathrm{per}(\mathcal{U}^*(I)\circ\, \mathcal{U}(\sigma)) = \left\{\begin{array}{cc} {3}/{32}, & \sigma \in \{I,\, C_{2\times2}\},\\
-{1}/{32}, & \sigma \in \{ C_2,\, C_4\}.
\end{array} \right.
\en{P4}
Using that (for bosons) $g(I)=1$, $g(k,k+1) = r^2_{k,k+1}$, $g(1,2,3,4)$ from Eq. (\ref{E8}), and that $g(4,3,2,1) = g^*(1,2,3,4)$, we obtain from Eqs. (\ref{P2}) and (\ref{P4}) the probability for the coincidence count as follows
\begin{eqnarray}
\label{p4}
&& p_4(\mathbf{k}|\mathbf{k})= \frac{1}{32}\Bigl\{ 3\left(1+ r^2_{12}r^2_{34} + r^2_{14}r^2_{23}\right) \nonumber\\
&&\quad - \sum_{k=1}^4 r^2_{k,k+1} - 2\prod_{k=1}^4r_{k,k+1}\cos\theta_{(1,2,3,4)} \Bigr\}.
\end{eqnarray}
(For fermions, the only change in formula (\ref{p4}) is the sign of the last two terms, due to the minus sign of $g(k,k+1)$ and $g(1,2,3,4)$.)
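Eqs. (\ref{P1})-(\ref{p4}) can be checked end to end: the sketch below (bosons, $\varphi=0$, the illustrative value $r_{k,k+1}=0.4$ and random mutual phases) evaluates the coincidence probability via Eq. (\ref{P2}), with only the Gram matrix of the internal states as input, and compares it with the closed formula of Eq. (\ref{p4}).

```python
import numpy as np
from itertools import permutations

rng = np.random.default_rng(2)

# Balanced four-port of Eq. (U4) with phase phi = 0
U = 0.5 * np.array([[ 1,  1,  1, 1],
                    [-1, -1,  1, 1],
                    [-1,  1, -1, 1],
                    [ 1, -1, -1, 1]], dtype=complex)

# Gram matrix of the internal states: only nearest neighbours on the
# cycle (1,2,3,4) overlap (r = 0.4 is realizable, see the text)
r = 0.4
theta = rng.uniform(0, 2 * np.pi, size=4)   # theta_{12}, ..., theta_{41}
G = np.eye(4, dtype=complex)
for k in range(4):
    G[k, (k + 1) % 4] = r * np.exp(1j * theta[k])
    G[(k + 1) % 4, k] = G[k, (k + 1) % 4].conjugate()

def permanent(A):
    n = len(A)
    return sum(np.prod([A[i, s[i]] for i in range(n)])
               for s in permutations(range(n)))

def cycles(perm):
    # Disjoint cycles of a permutation of {0, ..., n-1}
    seen, out = set(), []
    for start in range(len(perm)):
        if start not in seen:
            c, j = [], start
            while j not in seen:
                seen.add(j)
                c.append(j)
                j = perm[j]
            out.append(c)
    return out

def g_weight(cycle):
    # g(nu) = <phi_k1|phi_k2> ... <phi_kR|phi_k1> (bosons, Eq. (E8))
    return np.prod([G[cycle[a], cycle[(a + 1) % len(cycle)]]
                    for a in range(len(cycle))])

# Eq. (P2): coincidence count, input = output = (1,2,3,4), m! = 1
p = 0
for s in permutations(range(4)):
    M = U.conj() * U[list(s), :]            # U*(I) Hadamard-times U(sigma)
    p += permanent(M) * np.prod([g_weight(c) for c in cycles(s)])
p = p.real

# Closed formula, Eq. (p4), with all r_{k,k+1} = r
theta4 = theta.sum()                        # collective phase theta_(1,2,3,4)
p_closed = (3 * (1 + 2 * r**4) - 4 * r**2
            - 2 * r**4 * np.cos(theta4)) / 32

assert np.isclose(p, p_closed)
```

The brute-force sum automatically discards the permutations containing a zero-weight cycle, leaving exactly the nine permutations of Eq. (\ref{P1}).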
Finally, we note that the circle-dance interference is possible also with identical particles in mixed internal states. Indeed, if $\mathrm{Tr}(\rho^{(k)}\rho^{(l)})=0$ for $l\ne k\pm 1$, then by Eq. (\ref{E11}) any cycle $\nu$ passing the edge $kl$ has zero $g$-weight, which prevents any $R$-particle interference with \mbox{$3\le R\le N-1$}.
\subsection{Four-particle circle-dance interference with single photons having Gaussian spectral profiles}
Let us analyze in more detail the case of single photons having Gaussian spectral shapes and different polarizations. We consider each photon in a pure state $|\Phi_k\rangle = |\varphi_k\rangle |P_k\rangle$, where $|P_k\rangle = \alpha_{k} |v\rangle+ \beta_{k} |h\rangle$, with $|v\rangle$ and $|h\rangle$ being the polarization basis, $|\alpha_{k}|^2 + |\beta_{k}|^2 = 1$, and (in the frequency basis $|\omega\rangle$)
\begin{equation}
|\varphi_k\rangle = \int d\omega \varphi_k (\omega) |\omega\rangle,
\label{vetor}
\end{equation}
with
\begin{equation}
\varphi_k (\omega) = \left ( \frac{1}{\sqrt{ \pi} \Delta_k} \right )^{1/2} \exp{\left (- \frac{(\omega - \omega_{0k})^2}{2 \Delta_k^2} - i \tau_k (\omega - \omega_{0k})\right )}.
\label{varphi_i}
\end{equation}
Here we use the polarization state $|P_k\rangle$ to satisfy the necessary orthogonality conditions of theorem 2, thus we introduce an angle
$\chi$ and set
\begin{eqnarray}
| P_1 \rangle = | v \rangle,\quad | P_2 \rangle = \cos{\chi} |v\rangle+ \sin{\chi} |h\rangle, \nonumber\\
|P_3 \rangle = | h \rangle, \quad |P_4 \rangle = \sin{\chi} |v\rangle - \cos{\chi} |h\rangle.
\label{4vetores}
\end{eqnarray}
We also have
\begin{equation}
\langle \varphi_k | \varphi_j \rangle = \sqrt{\frac{2 \Delta_k \Delta_j}{\Delta_k^2 + \Delta_j^2}} \hspace{1 mm} e^{-\eta_{kj}+i\xi_{kj} } ,
\label{produtointerno}
\end{equation}
with
\begin{eqnarray}
\eta_{kj} &=& \frac{1}{2} \left ( \frac{\Delta_k^2 \Delta_j^2}{\Delta_k^2 + \Delta_j^2} \right) \left[ (\tau_k - \tau_j)^2 - \left( \frac{\omega_{0k}}{\Delta_k^2} + \frac{\omega_{0j}}{\Delta_j^2} \right)^2 \right] \nonumber \\
&+& \frac{1}{2} \left ( \frac{\omega_{0k}^2}{\Delta_k^2} + \frac{\omega_{0j}^2}{\Delta_j^2} \right ), \nonumber\\
\xi_{kj} &= &\left ( \frac{\Delta_k^2 \Delta_j^2}{\Delta_k^2 + \Delta_j^2} \right )\left( \frac{\omega_{0k}}{\Delta_k^2} + \frac{\omega_{0j}}{\Delta_j^2} \right) (\tau_j - \tau_k)\nonumber\\
& +& (\tau_k \omega_{0k} - \tau_j \omega_{0j}).
\label{coeficiente1}
\end{eqnarray}
From the definition $\langle \Phi_k | \Phi_j \rangle=r_{kj} e^{ i\theta_{kj}}$, we get
\begin{eqnarray}
r_{12} &=& \cos{\chi} \sqrt{\frac{2 \Delta_1 \Delta_2}{\Delta^2_1 + \Delta^2_2}} e^{-\eta_{12}}, \quad \theta_{12} = \xi_{12}, \nonumber\\
r_{23} &=& \sin{\chi} \sqrt{\frac{2 \Delta_2 \Delta_3}{\Delta^2_2 + \Delta^2_3}} e^{-\eta_{23}} , \quad \theta_{23} = \xi_{23}, \nonumber\\
r_{34} &=& \cos{\chi} \sqrt{\frac{2 \Delta_3 \Delta_4}{\Delta^2_3 + \Delta^2_4}} e^{-\eta_{34}}, \quad \theta_{34} = \pi + \xi_{34}, \nonumber\\
r_{41} &=& \sin{\chi} \sqrt{\frac{2 \Delta_4 \Delta_1}{\Delta^2_4 + \Delta^2_1}} e^{-\eta_{14}}, \quad \theta_{41} = \xi_{41}.
\label{r_kj}
\end{eqnarray}
For $\Delta_k\ne \Delta_l$ the analysis becomes quite involved, thus below we consider all spectral widths to be the same, $\Delta_k = \Delta$. In this case we have
\begin{equation}
\eta_{k,k+1} = \frac{(\omega_{0k}-\omega_{0,k+1})^2}{4\Delta^2}+\frac{\Delta^2}{4}(\tau_k - \tau_{k+1})^2
\en{eta_kj}
and
\begin{equation}
\xi_{k,k+1}=\frac{1}{2} (\tau_k + \tau_{k+1})(\omega_{0k}-\omega_{0,k+1}).
\en{xi_kj}
From Eqs. (\ref{Rphase}), (\ref{r_kj}) and (\ref{xi_kj}) the four-particle collective phase becomes
\begin{eqnarray}
\label{4Phase}
\theta_{(1,2,3,4)} &=& \pi+ \frac12 \Bigl[ (\omega_{04}-\omega_{02})(\tau_1-\tau_3) \nonumber\\
&&+ (\omega_{03}-\omega_{01})(\tau_4-\tau_2)\Bigr].
\end{eqnarray}
Eqs. (\ref{eta_kj}) and (\ref{4Phase}) involve the differences of the four central frequencies and the four arrival times, hence they contain only six real parameters in addition to the polarization angle $\chi$ of Eq. (\ref{4vetores}). One can therefore arrange for the two-particle interference parameters $r_{kj}$ (\ref{r_kj}) to remain fixed, which takes up only four free parameters of the total seven, thus leaving the four-particle collective phase (\ref{4Phase}) to vary on a three-parameter manifold. Therefore, the four-particle circle-dance interference can be observed with photons in Gaussian spectral profiles when one can arrange for variable central frequencies of the spectral states and variable photon arrival times.
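The phase bookkeeping leading to Eq. (\ref{4Phase}) can be checked with a few lines of code (arbitrary illustrative central frequencies and arrival times, equal widths):

```python
import numpy as np

rng = np.random.default_rng(9)

# Illustrative central frequencies omega_{0k} and arrival times tau_k
w0 = rng.uniform(-1, 1, size=4)
tau = rng.uniform(-1, 1, size=4)

def xi(k, j):
    # Eq. (xi_kj): mutual phase of the Gaussian overlap for equal widths
    return 0.5 * (tau[k] + tau[j]) * (w0[k] - w0[j])

# theta_(1,2,3,4) = theta_12 + theta_23 + theta_34 + theta_41, with the
# extra pi coming from theta_34 = pi + xi_34 in Eq. (r_kj)
theta4 = np.pi + xi(0, 1) + xi(1, 2) + xi(2, 3) + xi(3, 0)

# Closed form of Eq. (4Phase)
theta4_closed = np.pi + 0.5 * ((w0[3] - w0[1]) * (tau[0] - tau[2])
                               + (w0[2] - w0[0]) * (tau[3] - tau[1]))
assert np.isclose(theta4, theta4_closed)
```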
The above example serves only as an illustration, since it may be experimentally challenging. There are other possible ways to arrange for the circle-dance interference; one such way is reported elsewhere \cite{Cleo}.
\section{The circle-dance interference and multiparticle correlations}
\label{sec4}
What is the reason behind the fact that the collective phase of the circle-dance interference of $N$ particles cannot be detected in an experiment with fewer than $N$ particles (irrespective of whether some particles are simply not sent to a multiport, there are particle losses, or one bins together the output configurations containing a given configuration with fewer than $N$ particles)? Here we show that such a phase is a signature of the genuine $N$th-order quantum correlations between the particles, i.e., the correlations not reducible to the lower-order ones.
We will use the fact that detection of $R$ photons is related to the $R$th-order correlation function \cite{Glauber} of the quantum field. This relation also extends to our case, when the internal states of identical particles are not resolved (which corresponds to a generalized measurement, mathematically expressed by summing up the probabilities with the resolved internal states). Therefore, we can adopt as the unnormalized $R$th-order correlation function $Q_R(\mathbf{l}^\prime)$ in $R$ output ports $\mathbf{l}^\prime=(l_1,\ldots,l_R)$ the following expression
\begin{equation}
Q_R(\mathbf{l}^\prime) = \left\langle \sum_{\mathbf{j}}\left(\prod_{\alpha=1}^R \hat{b}^\dag_{l_\alpha, j_\alpha} \right)\left(\prod_{\alpha=1}^R\hat{b}_{l_\alpha, j_\alpha} \right)\right\rangle,
\en{CF}
where we sum over the basis states in $\mathcal{H}^{\otimes R}$, $\hat{b}^\dag_{l,j}$ creates a particle in output port $l$ of a multiport and an internal state $j$, and the average is taken with the input state. For $N$ particles in input $\mathbf{k}$, the corresponding function $Q_R(\mathbf{l}^\prime|\mathbf{k})$, as the probability $p_N(\mathbf{l}^\prime|\mathbf{k})$ in Eq. (\ref{E10}), depends only on the $d$-cycles with $d\le R$, since the two are proportional (see appendix \ref{appE})
\begin{equation}
Q_R(\mathbf{l}^\prime|\mathbf{k}) = \mathbf{m}^\prime! \left({N \atop R}\right)p_N(\mathbf{l}^\prime|\mathbf{k}) = \mathbf{m}^\prime! \sum_{ \mathbf{k}^{\prime}\subset \mathbf{k}} p_R(\mathbf{l}^{\prime}|\mathbf{k}^{\prime}).
\en{Id}
By Eq. (\ref{Id}) all $Q_R(\mathbf{l}^\prime|\mathbf{k})$ for $R\le N-1$ are independent of the $N$-particle collective phase of Eq. (\ref{Rphase}).
Note that $Q_R(\mathbf{l}^\prime|\mathbf{k})$ contains both quantum and classical correlations, in the classical case (distinguishable particles in orthogonal internal states) it is proportional to the respective classical probability. The classical correlation function is therefore independent of the collective phases (since particles in the orthogonal internal states do not have collective phases).
Eq. (\ref{Id}), together with the results of the previous sections, means that some popular criteria for distinguishing quantum and classical behavior of identical particles in unitary linear multiports, as in Ref. \cite{Wal}, and the recently introduced nonclassicality criteria for interference in multiport interferometry \cite{Nonclass}, based on the second-order correlation, will not be able to detect quantum $R$-particle phases for $R\ge 3$, since those are related to higher than second-order correlations. Therefore, though such criteria may detect some quantum behavior at a multiport output, they are far from being sufficient for this purpose.
\subsection{Discrimination of internal states and multiparticle interference}
\label{sec4A}
Up to now we have assumed that particle detection at the output of a linear multiport does not resolve the internal states of particles. Let us now discuss an internal state resolving detection (see appendix \ref{appA} for the mathematical details of state resolving detection).
If a detector resolves the internal state of at least one particle, the $N$-particle interference does not contribute to the respective probability. Let us illustrate this using the circle-dance $N$-particle interference with identical particles in pure internal states, which requires the internal states to be linearly independent, i.e., unambiguously distinguishable by the scheme of Ref. \cite{Chefles}. The unambiguous discrimination scheme of Ref. \cite{Chefles} runs as follows. The state $|\phi_k\rangle$ is identified with some probability $p_k$, the corresponding measurement operator being $\Pi_k = p_k|\phi^{(\perp)}_k\rangle\langle \phi^{(\perp)}_k|$, where $\langle\phi^{(\perp)}_k|\phi_l\rangle = \delta_{kl}$, whereas an inconclusive result corresponds to $\Pi_0 = \openone- \sum_{k=1}^N \Pi_k$. In the graph representation, see fig. \ref{F1}(b), even a single such internal-state detection with a conclusive result implies that vertex $k$ has no edges (since a particular input particle is detected at an output port, it does not participate in the permutations of particles in the quantum amplitudes, as discussed in section \ref{sec2}). A broken edge in the cycle of fig. \ref{F1}(b) means that the $N$-particle interference is not observed (nor, in this case, the $(R\ge 3)$-particle interference). The inconclusive result, on the other hand, does not affect the terms in the probability coming from the $(R\ge 2)$-cycles, due to
$\langle \phi_k|\Pi_0|\phi_l\rangle = \langle \phi_k|\phi_l\rangle$ for $k\ne l$ (the edges of the respective graph retain their weights), while simultaneously attenuating the permutations with fixed points, since for each fixed point we now have $\langle \phi_k|\Pi_0|\phi_k\rangle = 1-p_k$, instead of $g(k)=1$ in the state non-resolving detection. Since in the case of the circle-dance interference there are just the two-cycles $(k,k+1)$ and the full $N$-cycle, such a generalized detection attenuates the two-particle interference (each two-cycle permutation has $N-2$ fixed points), while the $N$-cycle term, having no fixed points, is left intact. Thus the unambiguous state discrimination would separate the outcomes into those with either destroyed or relatively enhanced circle-dance interference.
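The two properties of the inconclusive element $\Pi_0$ used above can be verified directly. A small sketch (ours) builds the dual states $|\phi^{(\perp)}_k\rangle$ for three generic linearly independent states and checks that $\Pi_0$ preserves the off-diagonal overlaps while attenuating the diagonal to $1-p_k$; the equal success probabilities $p_k$ are an illustrative choice kept safely inside the positivity bound for $\Pi_0$.

```python
import numpy as np

rng = np.random.default_rng(7)
N = 3
# N linearly independent (generically non-orthogonal) states, columns of Phi
Phi = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))
Phi /= np.linalg.norm(Phi, axis=0)

# dual (reciprocal) vectors |phi_k^perp>, defined by <phi_k^perp|phi_l> = delta_{kl}
Duals = np.linalg.inv(Phi).conj().T

# equal success probabilities p_k, chosen inside the positivity bound for Pi_0
lam_max = np.linalg.eigvalsh(Duals @ Duals.conj().T).max()
p = np.full(N, 0.5 / lam_max)

Pi = [p[k] * np.outer(Duals[:, k], Duals[:, k].conj()) for k in range(N)]
Pi0 = np.eye(N) - sum(Pi)
assert np.linalg.eigvalsh(Pi0).min() > -1e-12   # Pi_0 is a valid POVM element

G = Phi.conj().T @ Pi0 @ Phi   # matrix of <phi_k|Pi_0|phi_l>
S = Phi.conj().T @ Phi         # Gram matrix of <phi_k|phi_l>
off = ~np.eye(N, dtype=bool)
print("off-diagonal overlaps preserved:", np.allclose(G[off], S[off]))
print("diagonal attenuated to 1 - p_k :", np.allclose(np.diag(G), 1 - p))
```

Both checks print `True`: the graph edges keep their weights, while every fixed point picks up the factor $1-p_k$.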
The above discussion indicates that the internal state discrimination, even if partial (e.g., the resolution of the arrival times of single photons), can strongly affect the multiparticle interference. Whether one is technically able to perform generalized measurements on the internal states of identical particles, as required in the above unambiguous state discrimination scheme, is another problem that depends on the particular physical setup, the type of identical particles used and the degrees of freedom that serve as the Hilbert space of the internal states.
\section{Conclusion}
\label{sec5}
We have provided a general framework which allows one to study the complex relation between particle distinguishability and higher-order interference effects of independent identical bosons or fermions on a linear multiport. We have introduced the collective geometric phases of identical particles, which govern multiparticle interferences, and related them to the higher-order quantum correlations acquired between independent particles via propagation in a linear multiport. We have also opened the discussion on the exact relation between the state-discrimination distinguishability and the distinguishability in the multiparticle interference, by showing that the genuine $N$-particle interference for $N\ge 4$ independent particles in pure internal states requires each particle to be unambiguously distinguishable from all others and deterministically distinguishable from all but two. We show, for instance, that the unambiguous internal state discrimination is detrimental to the interference. However, the latter gets an enhanced visibility if the measurement result is inconclusive.
Throughout the work we have seen an interesting connection of the partial distinguishability theory to the theory of weighted graphs. We show, for instance, how the usual concept of a weighted directed graph with $N$ vertices appears in the interference of $N$ identical particles in pure internal states, where the weights are defined by the inner products of the internal states. Though one could also obtain all our results by using only the combinatorics of permutations, this connection is interesting by itself. Indeed, the weighted graph theory is intimately linked with one of the most studied computationally hard problems, namely, the traveling salesman problem \cite{TSP}. Such a connection, and what it has to say about the computational complexity of partially distinguishable identical particles, is worth exploring in future publications.
\medskip
\section{Acknowledgements}
V.S. was supported by the National Council for Scientific and Technological Development (CNPq) of Brazil, grant 304129/2015-1, and by the S{\~a}o Paulo Research Foundation (FAPESP), grant 2015/23296-8. M. E. O. was supported by FAPESP, grant number 2017/06774-9.
\medskip
\section{Introduction}
Training a deep neural network (NN) is a highly non-convex optimization problem that we usually solve using convex methods. For each extra layer we
add to the network, the problem becomes more non-convex, i.e. more curvature is added to the error surface, making the optimization harder.
Yet, it is commonplace to add unnecessary curvature at the output layer even though this does not expand the space of functions that the NN can represent. This curvature is then back-propagated through all the previous layers, causing a detrimental increase in the number of ripples in the error surfaces of especially the lower layers, which are already the toughest ones to train. This is done, in part, so that we can all pretend that the outputs are probabilities, even though they really are not.
In the following, we show that saturating output activation functions, such as the softmax, impede learning on a number of standard classification tasks. Moreover, we present results showing that the utility of the softmax does not stem from the normalization, as some have speculated [\cite{Goodfellow2016DeepLearning, Keck2012FeedforwardCoin}]. In fact, the normalization makes things worse. Rather, the advantage is in the exponentiation of error gradients. This exponential gradient boosting is shown to speed up convergence and improve generalization.
\subsection{Squashers \& Saturation}
Historically, output squashers, such as the logistic (aka sigmoid) and tanh functions, have been used as a simple way of reducing the impact of outliers on the learned model. For example, if you fit a model to a small dataset with a good amount of outliers, those outliers can produce very large error gradients that will push the model towards a hypothesis that favors said outliers, leading to poor generalization. Squashing the output will reduce those large error gradients, and thus reduce the negative influence of the outliers on the learned model. However, if you have a small dataset, you should not use a neural network in the first place; other methods are likely to work better. And if you have a large dataset, the impact of any outliers will be minuscule. Therefore, the outlier argument is not very relevant in the context of deep learning. What is relevant, however, is that squashing functions saturate, resulting in very small gradients that appear in the error surface as vast flat plateaus, which slow down learning and can even cause the optimizer to get stuck [\cite{LeCun2012, Glorot2010}]. This observation was part of the motivation behind applying the now popular ReLU activation (rectified linear units) to convolutional neural nets [\cite{Krizhevsky2012, NairRectifiedMachines, Jarrett2009WhatRecognition}]. Surely, the massive success of ReLUs (and other related variants) speaks to the importance of avoiding saturating activations. Yet, somehow this knowledge is currently not being applied at the output layer! We contend that, for the very same reason that saturating units in the hidden layers should be avoided, linear output activations are to be preferred.
\subsection{The Softmax Function}
The most famous culprit, among the saturating non-linearities, is of course the softmax function [\cite{BridleProbabilisticRecognition}], $y_j = \frac{\exp(x_j)}{\sum_i \exp(x_i)}$. This is the de facto standard activation used for multi-class classification with one-hot target vectors. Even though it is technically not a squasher, but a normalized exponential, it suffers from the same problem of saturation. That is, when the normalization term (the denominator) grows large, the output goes towards zero.
The original motivation behind the softmax function was not dealing with outliers, but rather to treat the outputs of a NN as probabilities conditioned on the inputs. As intriguing as this may sound, we must remember that in most cases the outputs of the softmax would actually \emph{not} be true probabilities. To claim that the outputs are probabilities, we must make the assumption, used in the derivation of the function [\cite{BridleProbabilisticRecognition}], of a within-class Gaussian distribution of the data. In practice, we say that the outputs may be interpreted as probabilities, as they lie in the range $[0;1]$ and sum to unity [\cite{Bishop1995NeuralRecognition, Bishop2007PatternEdn}]. However, if these are sufficient criteria for calling outputs probabilities, then the normalization might just as well be applied after training, which would not make the probabilistic interpretation any less correct. This way, we can avoid the problem of saturation during training, while still pretending that the outputs are probabilities (in case that is relevant to the given application). Another potential drawback of the normalization is that it bounds the function at both ends s.t. $f:\mathbb{R}\to [0;1]$. Consequently, when we apply it at the output layer, s.t. $y=f(x)$, where the error delta ${\frac{\partial \mathcal{L}(y,t)}{\partial x}} = y-t$, and $t\in\{0,1\}$, we effectively bound the gradients too, which affects all the previous layers during back-propagation.
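The claim that normalization can be deferred until after training is easy to check: the softmax is strictly monotone, so applying it post hoc never changes the predicted class, and the normalized outputs satisfy the two "probability" criteria (non-negative, sum to one). A minimal sketch of this point (ours):

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())   # shift by max for numerical stability
    return e / e.sum()

rng = np.random.default_rng(0)
for _ in range(1000):
    x = rng.normal(scale=5.0, size=10)   # raw (linear) network outputs
    y = softmax(x)
    # softmax is strictly monotone, so the predicted class is unchanged,
    # and the normalized outputs look like "probabilities" after the fact
    assert np.argmax(y) == np.argmax(x)
    assert abs(y.sum() - 1.0) < 1e-12 and np.all(y >= 0)
print("post-hoc softmax preserves the ranking of linear outputs")
```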
\subsection{The Main Idea}
Simply put, our main idea is to apply a bit of common sense to the aforementioned situation. Namely, that we are solving highly non-convex optimization problems using a convex method: backpropagation [\cite{Rumelhart1986LearningErrors}] with stochastic gradient descent (SGD). Even in saying those words, it appears evident that making the problem more non-convex (for no good reason) has to be a bad idea. Following that simple logic, output activations should \emph{always} be linear (the identity function) unless there is some advantage in adding non-linearity that somehow outweighs the price that must be paid in added non-convexity. Thus, we take the view that the only things that really matter are the speed of convergence and the final classification accuracy. We do not care about probabilistic interpretations, loss functions, or even, to some extent, mathematical correctness. Training a neural network is the process of iteratively pushing some weights in the right direction, and while doing so, we want to make the most of what we have: our error gradients. This does not entail imposing pointless bounds on them, or allowing them to become very small for no good reason.
Table \ref{tab:mnist} shows what happened when we first applied this view on real data; the MNIST dataset [\cite{LeCun1998GradientRecognition}]. Training a simple three-layer NN (fully connected) with ReLUs in the hidden layers, we compared the median results obtained over twenty trials with sigmoid, tanh, and linear output activations. The learning rate was fixed, and carefully tuned for each setting, and neither dropout [\cite{Hinton2012ImprovingDetectors}], batch normalization [\cite{Ioffe2015}], nor weight decay was used. The NN trained for 100 epochs, and the point of convergence is set to be the epoch where the minimum classification error was observed. This experiment was repeated multiple times with other hidden activations and weight initialization schemes, and they all gave the same result: with linear output activations, the number of epochs needed to converge is reduced by approximately 25 percent (and moderate improvements in generalization were observed as well). Note that the softmax is not included in the table for the very simple reason that it gave miserable results on this NN configuration.
\begin{table}
\setlength{\tabcolsep}{3pt}
\def1.2{1.2}
\center
\begin{tabular}{@{}l c c@{}}
\toprule
\textbf{Output Activation} & \textbf{Error} & \textbf{Convergence} \\
\midrule
Sigmoid & 1.8 & 98.5 \\
Tanh & 1.7 & 95.0 \\
Linear & 1.7 & \textbf{73.5} \\
\bottomrule
\end{tabular}
\caption{Median results (20 trials) on MNIST for a 392-50-10 NN with ReLUs in the hidden layers; final classification error \& no. of epochs needed to converge.}
\label{tab:mnist}
\end{table}
\section{Gradient Boosting}
When we first tried to train a convolutional neural network (CNN) on the CIFAR-10 data [\cite{Krizhevsky2009LearningImages}], with linear instead of softmax outputs, we expected to see at least a hint of improvement. This was not the case. On the contrary, the softmax clearly won that battle. The reason for this lies in the exponentiation of the outputs. For a moment, stop thinking about the softmax in a probabilistic context, and instead view it as the equivalent of linear outputs, with a mean squared error loss, combined with non-linear boosting of the error deltas, $y - t$. From this perspective, it becomes clear that when we have $y_j = \frac{\exp(x_j)}{\sum_i \exp(x_i)}$, nothing changes with respect to the one-hot classification, but large errors will be exponentiated. This allows the optimizer to take bigger steps towards a minimum, thus leading to faster convergence. An intuitive interpretation of this would be that when we are confident about an error, we can take an exponentially larger step towards minimizing that error. The idea bears some resemblance to momentum, where we gradually speed things up when the error gradients are consistent.
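This reading of the softmax rests on the standard identity that, with one-hot targets and the cross-entropy loss, the error delta at the logits is exactly $y - t$. A finite-difference sketch (ours) confirming it:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def cross_entropy(x, t):
    return -np.sum(t * np.log(softmax(x)))

rng = np.random.default_rng(1)
x = rng.normal(size=5)          # logits
t = np.zeros(5); t[2] = 1.0     # one-hot target

# analytic delta: y - t
delta = softmax(x) - t
# numerical gradient of the loss w.r.t. the logits, central differences
eps = 1e-5
num = np.array([(cross_entropy(x + eps * e_i, t)
                 - cross_entropy(x - eps * e_i, t)) / (2 * eps)
                for e_i in np.eye(5)])
assert np.allclose(delta, num, atol=1e-6)
print("softmax + cross-entropy delta at the logits equals y - t")
```

So swapping the softmax for linear outputs with a mean squared error loss keeps the delta $y - t$, and the only real difference is whether (and how) that delta is boosted.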
\subsection{Exponential Boosting}
If exponentiation of error deltas is good, and saturation is bad, it follows that using an ``un-normalized'' softmax, so to speak, should yield an improvement. That is, simply use linear outputs, $y = x$, but compute the error deltas as ${\frac{\partial \mathcal{L}(y,t)}{\partial x}} = \alpha \exp(y) - t$. Alternatively, we can think of it as an exponential output activation with an incorrect gradient formulation, $y - t$, imposed on it (this is in fact how we implemented it). As seen in Figure \ref{fig:cnn5_median}, this simple change does in fact lead to a consistent boost in performance. The result was obtained on CIFAR-10 with a 5-layer CNN; four convolutional layers followed by an affine output layer with linear outputs and exponential gradient boosting (exp-GB), and batch normalization in all layers. We set $\alpha = 0.1$, which has worked well in all our experiments; sometimes $\alpha = 0.01$ is slightly better. To further boost the non-linear interaction between the outputs and the targets, we used larger target values, $t \in \{0,16\}$ instead of $t \in \{0,1\}$. As can be seen in the histograms of Figure \ref{fig:hist} (from a different experiment), this produces much larger gradients. The deltas are roughly in $[-6;10]$, as opposed to the bounded errors of the softmax, which lie in $[-1;1]$. The idea of picking better target values is not new. To reduce the risk of saturating with logistic units, \cite{LeCun2012} recommend choosing targets at the point of the maximum of the second derivative.
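A minimal single-layer sketch of exp-GB as described (our own illustrative setup; the layer sizes, learning rate, and step count are made up): the forward pass stays linear, and only the backward delta is replaced by $\alpha\exp(y) - t$ with scaled one-hot targets.

```python
import numpy as np

rng = np.random.default_rng(2)
n_in, n_out, alpha, lr = 20, 10, 0.1, 1e-3
W = rng.normal(scale=0.1, size=(n_out, n_in))
x = rng.normal(size=n_in)
t = np.zeros(n_out); t[3] = 16.0        # scaled one-hot target, t in {0, 16}

for _ in range(200):
    y = W @ x                           # forward pass: linear outputs
    delta = alpha * np.exp(y) - t       # exp-GB: imposed, exponentially boosted delta
    W -= lr * np.outer(delta, x)        # standard SGD step with the boosted delta

print("predicted class:", int(np.argmax(W @ x)))   # prints 3, the target class
```

Note the equilibrium this induces: the target class settles where $\alpha\exp(y) = 16$, i.e. well above zero, while negative-class outputs drift downwards with an ever smaller delta, which is exactly the ``don't care'' behavior discussed below.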
Another potential advantage of the exponentiation is that $\exp(x)$ asymptotically approaches zero as $x$ goes to negative infinity. This is especially advantageous with one-hot target vectors, because we do not care about exact output values as long as the correct class has the largest value. Hence, we can mostly ignore any negative errors in outputs for the negative classes. This can be seen as a relaxation of the optimization problem, where we are essentially trying to solve an inequality for the negative classes instead of an exact equality. With linear activations \emph{without} exp-GB, the optimizer would always try to push the outputs for the negative classes towards zero. This can lead to situations where an otherwise correct output (i.e. the maximum value belongs to the node representing the target class) for a given input, $x_i$, leads to a weight update that renders the output incorrect the next time that $x_i$ is seen, in exchange for the mean output for the negative classes being slightly closer to zero than on the previous iteration. That is a bad trade, but we avoid this problem when we exponentiate our gradients.
\subsection{Cubic Boosting}
Although we can often ignore large negative outputs that yield large negative error deltas, we cannot ignore all of them. This raises the question whether we may further boost performance by also allowing for the exponentiation of large \emph{negative errors}. The answer is: yes we can! An obvious candidate would be a mirrored exponential function, $y = \alpha \sgn(x) \exp(\lvert x \rvert - 1) + \beta$, where $\sgn(\cdot)$ is the sign function. However, this function does not have a nice and cozy place for us to put all those gradients that we do not need to worry about, so it does not work well. Instead, we use a simple polynomial that has a conveniently flat area around $[-1; 1]$, $y= \alpha x^3 + \beta$; let's call it pow3-GB. Taking another look at Figure \ref{fig:cnn5_median}, we see that this does
indeed work better; following exactly the same trend as observed with exp-GB, that the error drops significantly faster than with the softmax. We set $\alpha = 0.001$, $\beta = 0.4$, and use target values $t \in \{0, 10\}$.
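A quick numeric look (ours) at where pow3-GB actually drives the outputs, using the stated $\alpha = 0.001$, $\beta = 0.4$ and targets $t \in \{0, 10\}$: the delta vanishes at well-separated equilibria for the two classes, and is nearly constant across the flat plateau $[-1;1]$.

```python
import numpy as np

alpha, beta = 0.001, 0.4
t_neg, t_pos = 0.0, 10.0

def delta(x, t):
    return alpha * x**3 + beta - t     # pow3-GB error delta

# equilibrium outputs, i.e. where the delta vanishes, for each class
x_pos = ((t_pos - beta) / alpha) ** (1 / 3)
x_neg = -((beta - t_neg) / alpha) ** (1 / 3)
print(f"positive-class outputs settle near x = {x_pos:.1f}")   # ~ 21.3
print(f"negative-class outputs settle near x = {x_neg:.1f}")   # ~ -7.4

# inside [-1, 1] the delta is essentially constant: the flat plateau of x^3
xs = np.linspace(-1, 1, 5)
print("delta spread over [-1, 1]:", np.ptp(delta(xs, t_neg)))  # 0.002
```

So the scheme pushes the two classes towards a large margin while keeping the gradient landscape flat in the region we do not care about.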
\begin{figure*}[t!]
\begin{subfigure}[t]{0.5\textwidth}
\centering
\includegraphics[scale=.5]{figs/misclassification-cnn02.png}
\caption{Median misclassification rates (20 trials) for a 5-layer CNN with softmax outputs, linear outputs with exponential boosting (exp-GB), and linear outputs with cubic boosting (pow3-GB).}
\label{fig:cnn5_median}
\end{subfigure}
~
\begin{subfigure}[t]{0.5\textwidth}
\centering
\includegraphics[scale=.5]{figs/misclassification.png}
\caption{Misclassification rates for a 10-layer CNN with softmax outputs (for three different learning rates), exponential boosting (exp-GB), and cubic boosting (pow3-GB).}
\label{fig:cnn10_mis}
\end{subfigure}
\caption{Classification Errors on CIFAR-10}
\label{fig:mis}
\end{figure*}
\section{Experiments}
We now carefully study the performance of gradient boosting (GB) for image-classification on CIFAR-10 [\cite{Krizhevsky2009LearningImages}] and ImageNet 2012 [\cite{Russakovsky15}], and the pixel-level task of semantic segmentation on the PASCAL VOC 2012 dataset [\cite{Everingham10}].
\textbf{CIFAR-10 Classification.} In this experiment, our purpose is not to get state-of-the-art results but rather to learn how increased depth may affect our method. We use an (almost) all convolutional network with ten layers; following the principle presented in [\cite{SpringenbergDBR14}], but with batch normalization, and the average pooling layer replaced by a fully-connected one. The latter was done to make computation more deterministic, so as to allow for better evaluation of the effects of changing various parameters. Note that pooling involves atomic operations on the GPU, which can result in relatively large variance in output. For this experiment, we used a fixed learning rate and carefully tuned it with the purpose of getting the best result within ten epochs. We use the same $\alpha$ and $\beta$ values as in our previous experiment, but this time we use different target values. $t \in \{0,6\}$ produced better results for exp-GB. With pow3-GB it seemed a good idea to try negative target values for the negative classes since the function is not bounded at the lower end; we saw a significant improvement when using $t \in \{-2,10\}$.
Figure \ref{fig:cnn10_mis} shows how the classification error evolved during training. For softmax, we show results from trying three different learning rates to ensure that our choice of $1.0$ really is a good one. We note that the overall trend is the same as for the 5-layer CNN; for the first 2-5 epochs, the error rates drop significantly faster with GB than with the softmax. The histograms in Figure \ref{fig:hist} show the distribution of the output error deltas for the first batch of epoch 1 and epoch 4. The larger target values used for GB are clearly reflected; resulting in sharper distributions clustered around the negated target values. This is of course most significant on the first iteration, but the trend is still very clear in the fourth (and tenth) epoch. This boosting of the output errors has a very significant effect on the gradient signals received in the hidden layers during backpropagation. Figure \ref{fig:rms_hid} shows this effect very nicely via the root mean square (RMS) of the gradients. With exp-GB, the RMS of the hidden layer gradients is an order of magnitude higher than with the softmax; for pow3-GB it is more than two orders of magnitude. Interestingly, the hidden-layer RMS-gradients recorded for pow3-GB grow rapidly from the second epoch and onwards. A similar trend is seen for exp-GB, albeit less dramatically, and for the softmax there is only a slight upwards trend, and only in the top layers. This correlates well with the error rates (see Figure \ref{fig:cnn10_mis}); the softmax gets stuck early on, and the linear activations with gradient boosting continue to learn through all ten epochs. All in all, this seems to indicate that gradient boosting may help alleviate the infamous problem of vanishing gradients [\cite{Hochreiter1991UntersuchungenNetzen,Goodfellow2016DeepLearning}] in deep neural networks.
\begin{figure*}[t!]
\centering
\begin{subfigure}[t]{0.5\textwidth}
\centering
\includegraphics[scale=0.65]{figs/grad_allcnn_softm_vs_exp_pow3_lix12_e1.png}
\end{subfigure}%
\\
\begin{subfigure}[t]{0.5\textwidth}
\centering
\includegraphics[scale=0.65]{figs/grad_allcnn_softm_vs_exp_pow3_lix12_e4.png}
\end{subfigure}%
\caption{{\bf Top:} The distribution of output layer error gradients for softmax, linear with exponential boosting, and linear with cubic boosting at the start of epoch 1; training a 10-layer CNN on CIFAR-10. {\bf Bottom}: same, but for epoch 4.}
\label{fig:hist}
\end{figure*}
\begin{figure}
\centering
\includegraphics[scale=.58]{figs/rms_all_grad_allcnn_softm_vs_exp_pow3.png}
\caption{RMS of error gradients over ten epochs of training a 10-layer CNN on CIFAR-10. {\bf Left:} output layer. {\bf Rest:} every second hidden layer. Note the upwards trend from epoch 2 onwards in the hidden layers for GB.}
\label{fig:rms_hid}
\end{figure}
\textbf{ImageNet Classification.} The ImageNet 2012 dataset [\cite{Russakovsky15}] consists of ${\sim}1.3$ million RGB images that are much larger than the tiny images of CIFAR-10. Training a state-of-the-art model on this data can take weeks. Thus, for this experiment, we will again consider only the first ten epochs of training on a relatively shallow and well-known architecture, AlexNet [\cite{Krizhevsky2012}]. Figure \ref{fig:imagenet} and Table \ref{tab:imagenet} show the median classification errors over five trials. With exp-GB, we get a median reduction in the top1-error of 4.52 percentage points, and a 3.74-point reduction of the top5-error; i.e. the minimum errors achieved within ten epochs. Thus, the result follows the general trend of our previous experiments. However, there are two important differences. First, the pow3-GB did not work well, whereas exp-GB clearly still outperformed the softmax. Secondly, we had to use batch normalization (BN) to get good results.
With respect to the failure of pow3-GB, the explanation is likely found in the 100-fold increase in the number of classes; compared to the ten classes of CIFAR-10. Because such large error gradients are back-propagated from the output layer, the errors in the hidden layers simply blow up too much, when one thousand errors are multiplied and summed instead of just ten. In a way, this is the opposite problem of what we could expect to see with the softmax, where the saturation is likely to be worse with more classes (as the normalization term grows), thus producing very small gradients. With respect to why BN becomes more important, again, the reason is that the magnitude of the back-propagated gradients depends on the number of classes. The larger gradients result in bigger and faster changes in the distributions of the activations in the hidden layers; thus magnifying internal covariate shifts, and increasing the need for BN.
For this experiment, we used (carefully tuned) fixed learning rates of 0.01 and 0.001 for the softmax and exp-GB, respectively. For exp-GB we set $\alpha = 0.01$ and used target values, $t \in \{0,10\}$.
\begin{figure*}[t!]
\begin{subfigure}[t]{0.5\textwidth}
\centering
\includegraphics[scale=.5]{figs/top1-imagenet.png}
\caption{{\bf Top1-Error:} AlexNet with BN.}
\label{fig:top1}
\end{subfigure}
~
\begin{subfigure}[t]{0.5\textwidth}
\centering
\includegraphics[scale=.5]{figs/top5-imagenet.png}
\caption{{\bf Top5-Error:} AlexNet with BN.}
\label{fig:top5}
\end{subfigure}
\caption{Median Classification Errors on ImageNet 2012}
\label{fig:imagenet}
\end{figure*}
\begin{table}
\setlength{\tabcolsep}{3pt}
\def1.2{1.2}
\center
\begin{tabular}{@{}l c c@{}}
\toprule
\textbf{Output Activation} & \textbf{Top1-Error} & \textbf{Top5-Error} \\
\midrule
Softmax & 58.47 & 33.55 \\
Exp-GB & \textbf{53.95} & \textbf{29.81} \\
\bottomrule
\end{tabular}
\caption{Minimum error reached (median over 5 trials) training AlexNet with BN on ImageNet 2012.}
\label{tab:imagenet}
\end{table}
\textbf{Semantic Segmentation.} We now evaluate our method for the pixel-level classification task of semantic segmentation. The goal in semantic segmentation is to determine class labels for every single pixel in an image. Prior work [\cite{PixelNet,Hariharan15,Long15}] in this direction uses a fully convolutional network with the standard softmax and multi-class cross-entropy loss for optimization. In this experiment, we use the PixelNet architecture [\cite{PixelNet}]. This model uses a VGG-16 [\cite{SimonyanZ14a}] architecture (pre-trained on ImageNet) followed by a multi-layer perceptron that is used to do per-pixel inference over multi-scale descriptors. It has achieved state-of-the-art performance for various pixel-level tasks such as semantic segmentation, depth/surface normal estimation, and boundary detection.
We evaluate our findings on the heavily benchmarked Pascal VOC 2012 dataset. Similar to prior work [\cite{PixelNet,Hariharan15,Long15}], we make use of additional labels collected on 8498 images by \cite{hariharan11}. We keep a small set of 100 images for validation to analyze convergence, and use the same settings as used for analysis in [\cite{PixelNet}]: a single scale $224{\times}224$ image is used as input to train the model. All the hyper-parameters are kept constant except the initial learning rate\footnote{The initial $lr=1{\times}10^{-3}$ for softmax, $lr=1{\times}10^{-4}$ for exp-GB, and $lr=5{\times}10^{-5}$ for pow3-GB. Lowering the learning rate for softmax deteriorates the performance.}. We report results on the Pascal VOC-2012 test set (evaluated on the PASCAL server) using the standard metrics of region intersection over union (\textbf{IoU}) averaged over classes (higher is better).
Table~\ref{tab:voc_2012} shows our results (both per-class and \textbf{mIoU}) for GB and the standard softmax. We observe that the model trained using {\em exp-GB} converged after 40 epochs, whereas the {\em softmax} model converged after 60 epochs. As seen in Table~\ref{tab:voc_2012}, our method provides \textbf{33\%} faster convergence, while yielding a slightly better performance (\textbf{E-40} vs. \textbf{S-60}). We see a significant \textbf{3\%} boost in the first 40 epochs with exp-GB (\textbf{E-40} vs. \textbf{S-40}).
Additionally, recent work [\cite{varol17,Wang15}] in the computer vision community has formulated regression problems, such as depth and surface normal estimation, in a classification paradigm, in the hope of easier optimization and better performance. From these experiments, we however infer that it is likely not the {\em softmax$+$cross-entropy} that boosts the performance. Rather, it is the use of one-hot encoding of the target vectors.
\begin{table*}[t!]
\tiny{
\setlength{\tabcolsep}{3pt}
\def1.2{1.2}
\center
\begin{tabular}{@{}l c c c c c c c c c c c c c c c c c c c c c@{}p{0.3cm}@{}c@{}}
\toprule
& bg & aero & bike & bird & boat & bottle & bus & car & cat & chair & cow & table & dog & horse & mbike & person & plant & sheep & sofa & train & tv & &\textbf{mIoU}\\
\midrule
\textbf{E-40} & 92.3 &
\textbf{87.9} & \textbf{43.2} & 73.6 & 54.6 & \textbf{68.8} & 83.9 & \textbf{82.2} & 77.1 & \textbf{27.7} & 62.0 & 50.8 & \textbf{74.6} & \textbf{75.5} & \textbf{80.6} & \textbf{78.7} & 47.4 & \textbf{74.0} & 43.2 & \textbf{76.0} & \textbf{60.1} & & \textbf{67.3} \\
\textbf{P-40} & 92.3 &
86.0 & 39.1 & \textbf{74.1} & 49.4 & 66.3 & \textbf{84.7} & 79.7 & 77.4 & 26.4 & 63.2 & 51.6 & 71.4 & 74.7 & 79.8 & 76.6 & 45.1 & 70.6 & 47.5 & 71.8 & 59.9 & & 66.1 \\
\textbf{S-40} & 91.9 &
84.9 & 38.5 & 66.8 & 54.0 & 63.4 & 79.8 & 72.9 & 72.7 & 25.4 & 63.6 & 55.4 & 68.2 & 72.7 & 75.5 & 76.2 & 46.7 & 71.6 & 42.8 & 71.2 & 58.4 & & 64.4 \\
\midrule
\textbf{S-60} & \textbf{92.4} &
86.7 & 39.8 & 72.4 & \textbf{58.0} & 65.6 & 82.9 & 78.9 & \textbf{77.8} & 26.6 & \textbf{66.1} & \textbf{59.2} & 71.6 & 74.2 & 77.5 & 77.1 & \textbf{49.3} & 73.8 & \textbf{45.7} & 73.9 & 58.4 & & 67.1 \\
\bottomrule
\end{tabular}
\vspace{0.2cm}
\caption{\textbf{Evaluation on Pascal VOC-2012 for Semantic Segmentation:} We found our analysis consistent for the pixel-level task of semantic segmentation. With only 40 epochs, our formulation exceeds the performance using Softmax$+$Cross-Entropy for 60 epochs. \textbf{E} denotes exp-GB$+$mean-squared-error; \textbf{P} denotes pow3-GB$+$mean-squared-error; and \textbf{S} denotes softmax$+$cross-entropy-loss.}
\label{tab:voc_2012}
}
\vspace{-0.1cm}
\end{table*}
\textbf{Summary.} We evaluated our findings on two standard tasks of classification, i.e. image classification and pixel-level classification, on heavily benchmarked datasets. We observe a consistent trend across all these tasks: (1) the softmax impedes learning; (2) exp-GB$+$mean-squared-error converges \textbf{25-35\%} faster than the standard softmax$+$cross-entropy, while also delivering slightly better performance (rather than trading accuracy for speed). We believe our results are important not just from a convergence perspective, but also from the point-of-view of having a general loss function for both classification and regression tasks.
\section{Further Analysis}
We can take a slightly more theoretical view on gradient boosting, and why it speeds up the convergence, by reasoning about second-order properties of the error surfaces induced by exp-GB and pow3-GB. This is typically done with the Hessian matrix, $H$, of second derivatives, which tells us something about the rate of change in the error for a single step of gradient descent. To keep things simple, we will consider only the case of a single output activation, i.e. a single dimension, so we do not need the full Hessian; $\frac{d^2}{dx^2}f$ will do. We will look at $\frac{\partial^2 E}{\partial x^2}$, where $E$ is the sum-of-squares error, $E(y, t) = \frac{1}{2} \sum_i (y_i - t_i)^2$. For our purpose we can simply ignore the summation in $E$, and just analyze $\frac{\partial^2 E}{\partial x^2}$ for a single example, $(x,t)$. Let us start by comparing the Hessians for linear, softmax, exponential, and cubic activations.
For a linear activation, $y=x$, the Hessian is simply
\begin{align}
H_{linear} = \frac{\partial^2}{\partial x^2} \frac{1}{2} \lVert y - t \rVert^2 = 1
\end{align}
Re-writing the softmax activation as $y = \frac{e^x}{s}$, where $s = \sum_i e^{x_i}$ is a proxy for the normalization term (treated as a constant here), we get
\begin{align}
H_{softmax} = \frac{\partial^2}{\partial x^2} \frac{1}{2} \left\lVert \frac{e^{x}}{s} - t \right\rVert^2 = \frac{e^{2x}}{s^2} - \frac{e^x}{s} \left(t - \frac{e^x}{s}\right)
\end{align}
and for the exponential and cubic activations we have
\begin{align}
H_{exp} &= \frac{\partial^2}{\partial x^2} \frac{1}{2} \lVert e^{x} - t\rVert^2 = e^{2x} - e^x (t - e^x)
\\
H_{pow3} &= \frac{\partial^2}{\partial x^2} \frac{1}{2} \lVert x^3 - t\rVert^2 = 9x^4 - 6x(t - x^3)
\end{align}
If we consider the situation where $x$ is near some local minimum, the residual $y - t$ is small, so the second term in each of the above Hessians, which carries a factor of that residual, becomes negligible. Thus, we will ignore the second terms, and simply compare the growth of all the first terms, as we move $x$ away from that local optimum. It then becomes evident that (locally) $H_{pow3} > H_{exp} > H_{softmax} > H_{linear}$, because for $x$ of moderate magnitude above $\ln s$ we get $9x^4 > e^{2x} > {\frac{e^{2x}}{s^2}} > 1$ for all $s > 1$. Unsurprisingly, it all depends on the magnitude of the normalization term of the softmax, $s = \sum_i e^{x_i}$. If $s$ is very small $H_{softmax}$ will blow up, so we need to assess the probability of that happening. At the onset of training, it is reasonable to assume that the inputs to the softmax will be evenly distributed around zero. Thus, half of the $x_i$'s are positive, guaranteeing that $s>1$ since $e^x > 1$ for all $x>0$. To see what happens later, we can consider a numerical example for one thousand classes. Even when the model is trained well, such that the $x_i$'s for the 999 negative classes are likely to be negative and contribute very little to $s$ as $e^{x_i} \ll 1$---it still takes only a single $x_i \ge 0$ to make $s>1$ (likely the one for the positive class). It seems reasonable to claim that this will be the case most of the time.
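As a sanity check on this ordering, the following sketch estimates each curvature by central finite differences. The evaluation point $x = 2$, target $t = 1$, and fixed normalization $s = 5$ are illustrative choices, and $s$ is held constant, as in the derivation above:

```python
# Finite-difference sanity check of the curvature ordering derived above.
# The values x = 2, t = 1, s = 5 are illustrative choices; s is treated
# as a constant, as in the derivation in the text.
import math

def second_derivative(f, x, h=1e-4):
    """Central finite-difference estimate of f''(x)."""
    return (f(x + h) - 2.0 * f(x) + f(x - h)) / h**2

t, s, x = 1.0, 5.0, 2.0
losses = {
    "linear":  lambda z: 0.5 * (z - t) ** 2,
    "softmax": lambda z: 0.5 * (math.exp(z) / s - t) ** 2,
    "exp":     lambda z: 0.5 * (math.exp(z) - t) ** 2,
    "pow3":    lambda z: 0.5 * (z ** 3 - t) ** 2,
}
H = {name: second_derivative(f, x) for name, f in losses.items()}

# Curvatures order as claimed once x moves away from the optimum:
assert H["pow3"] > H["exp"] > H["softmax"] > H["linear"]
# The "exp" estimate matches the closed form e^{2x} - e^x (t - e^x):
assert abs(H["exp"] - (math.exp(2 * x) - math.exp(x) * (t - math.exp(x)))) < 1e-3
```

The check is local by construction: the ordering concerns the range of pre-activations the optimizer actually traverses, not an asymptotic statement.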
To back up this claim, we take a look at the actual $x_i$'s recorded during training of the 10-layer CNN from our CIFAR-10 experiment in the previous section. Figure \ref{fig:sum} shows how the normalization term, $s$, of the softmax actually behaved. It starts out with a value of 2,342 and increases monotonically from there.
\begin{figure}
\centering
\includegraphics[scale=.5]{figs/softmax_input.png}
\caption{Magnitude of the softmax normalization term, $s = \sum_i e^{x_i}$, recorded for a 10-layer CNN trained on CIFAR-10. It starts out with $s = 2,342$ and blows up from there.}
\label{fig:sum}
\end{figure}
However, we need to remember that for GB the Hessians are a little different, as we are just boosting the error gradients, $y-t$. Thus, the second derivatives are just the derivatives of those deltas, with $H_{exp-GB}= e^x$, $H_{pow3-GB}=3x^2$, and $H_{softmax}=y(1-y) \le \frac{1}{4}$ (with the multi-class cross-entropy loss, whose delta is also $y-t$)---which only adds to our point that GB can minimize the error faster than the softmax.
\section{Conclusion}
Our results suggest fundamental changes in deep network training, and to our perception of the omnipresent softmax function. In a way, all that we have done is to apply common sense to the challenge of training deep non-convex models using the convex method of gradient descent. Specifically: do not make the problem more non-convex than it needs to be. Whenever we add curvature to the error surface, we make the optimization harder, and we should always keep this in mind when deciding how to configure our models during training.
Acting on this insight, e.g. by skipping the normalization term of the softmax, we get a significant improvement in our NN training---and at no other cost than a few minutes of coding. The only drawback is the introduction of some new hyper-parameters, $\alpha$, $\beta$, and the target values. However, these have been easy to choose, and we do not expect that a lot of tedious fine-tuning is required in the general case.
From this perspective, our work---and much of literature---is concerned with treating the symptoms rather than the cause. The cause of our problems is our use of gradient-based optimizers. Perhaps one day we will have a better learning algorithm, but until we do: be careful what you back-propagate!
\bibliographystyle{apalike}
\section{Introduction}
Let $C \subset \Bbb P^2$ be an irreducible plane curve over an algebraically closed field $k$ of characteristic $p \ge 0$ and let $k(C)$ be its function field.
For a point $P \in \Bbb P^2$, if the function field extension $k(C)/\pi_P^*k(\Bbb P^1)$ induced by the projection $\pi_P$ is Galois, then $P$ is called a Galois point for $C$.
This notion was introduced by Yoshihara (\cite{miura-yoshihara, yoshihara1}).
When a Galois point $P$ is a smooth point of $C$, $P$ is called an inner Galois point.
Only a few examples of plane curves with two inner Galois points are known (see the Table in \cite{yoshihara-fukasawa}).
In this note, we give new examples, which update the Table in \cite{yoshihara-fukasawa}.
Let $p>0$ and let $q \ge 3$ be a power of $p$.
We consider the curve $H$ defined by
$$X^qZ+XZ^q-Y^{q+1}=0, $$
which is called the Hermitian curve.
For the natural embedding of $H$ in $\Bbb P^2$ of degree $q+1$, Homma determined the distribution of Galois points (\cite{homma}).
To produce examples of plane curves with two Galois points, we would like to consider other birational embeddings.
We show the following.
\begin{theorem} \label{hermitian}
For the Hermitian curve $H$ of degree $q+1$, there exists a morphism $\varphi: H \rightarrow \Bbb P^2$ such that
\begin{itemize}
\item[(a)] the morphism $\varphi: H \rightarrow \varphi (H)$ is birational,
\item[(b)] the degree of $\varphi(H)$ is $q^3+1$, and
\item[(c)] there exist exactly two inner Galois points on $\varphi(H)$.
\end{itemize}
\end{theorem}
For the proof, it is important that two subgroups $G_1$ and $G_2$ of the full automorphism group ${\rm Aut}(H)$ of order $q^3$ act on the set $H(\Bbb F_{q^2})$ of all $\Bbb F_{q^2}$-rational points, which consists of $q^3+1$ points, on the Hermitian curve $H$.
The automorphism groups of the Suzuki and Ree curves have a similar property.
We will obtain the following.
\begin{theorem} \label{suzuki}
Let $p=2$, $q_0 \ge 2$ a power of $2$, and let $q=2q_0^2$.
For the Suzuki curve $\hat{C}$, which is the smooth projective model of the affine curve defined by
$$ x^q+x=y^{2q_0}(y^q+y), $$
there exists a morphism $\varphi: \hat{C} \rightarrow \Bbb P^2$ such that
\begin{itemize}
\item[(a)] the morphism $\varphi: \hat{C} \rightarrow \varphi (\hat{C})$ is birational,
\item[(b)] the degree of $\varphi(\hat{C})$ is $q^2+1$, and
\item[(c)] there exist exactly two inner Galois points on $\varphi(\hat{C})$.
\end{itemize}
\end{theorem}
\begin{theorem} \label{ree}
Let $p=3$, $q_0 \ge 3$ a power of $3$, and let $q=3q_0^2$.
For the Ree curve $\hat{C}$, which is the smooth projective model of the affine curve defined by
$$ y_1^q-y_1=x^{q_0}(x^q-x) \ \mbox{ and } \ y_2^q-y_2=x^{q_0}(y_1^q-y_1), $$
there exists a morphism $\varphi: \hat{C} \rightarrow \Bbb P^2$ such that
\begin{itemize}
\item[(a)] the morphism $\varphi: \hat{C} \rightarrow \varphi (\hat{C})$ is birational,
\item[(b)] the degree of $\varphi(\hat{C})$ is $q^3+1$, and
\item[(c)] there exist exactly two inner Galois points on $\varphi(\hat{C})$.
\end{itemize}
\end{theorem}
\section{Hermitian curves}
Let $P_1=(1:0:0)$ and $P_2=(0:0:1) \in H$.
We consider the subgroup
$$G_{1}:=\left\{
\left(\begin{array}{ccc}
1 & a^q & b \\
0 & 1 & a \\
0 & 0 & 1
\end{array}\right) \ ; \
a \in \Bbb F_{q^2}, \ b^q+b=a^{q+1}
\right\} $$
of ${\rm PGL}(3, k)$, which is of order $q^3$.
For any $\sigma \in G_{1}$, it follows that $\sigma(H)=H$, $\sigma(P_1)=P_1$ and $\sigma(H(\Bbb F_{q^2})\setminus \{P_1\})=H(\Bbb F_{q^2}) \setminus \{P_1\}$ (see also \cite[pp.~643--644]{hkt}).
We take $x=X/Z$ and $y=Y/Z$.
Note that $k(H)=k(x,y)$ and $y^{q^2}-y \in k(H)^{G_1}$.
Since $[k(x,y):k(y)]=q$ and $[k(y):k(y^{q^2}-y)]=q^2$, it follows that $k(y^{q^2}-y)=k(H)^{G_1}$ and $k(H)^{G_1} \cong k(\Bbb P^1)$.
Similarly, we define
$$G_{2}:=\left\{
\left(\begin{array}{ccc}
1 & 0 & 0 \\
c & 1 & 0 \\
d & c^q & 1
\end{array}\right) \ ; \
c \in \Bbb F_{q^2}, \ d^q+d=c^{q+1}
\right\}. $$
Note that $k(H)^{G_2}=k((y/x)^{q^2}-(y/x))$.
Then, $H/G_{i}\cong \Bbb P^1$ for $i=1, 2$, $G_{1} \cap G_{2}=\{1\}$, and
$$ \{P_1\} \cup \{\sigma(P_2) \ | \ \sigma \in G_{1}\}=H(\Bbb F_{q^2})=\{P_2\}\cup\{\tau(P_1) \ | \ \tau \in G_{2}\}. $$
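The cardinality $\# H(\Bbb F_{q^2}) = q^3+1$ used here can be verified by brute force for the smallest case $q=3$. The sketch below (an illustrative check, not part of the proof) models $\Bbb F_9$ as $\Bbb F_3[i]$ with $i^2=-1$, which works because $x^2+1$ is irreducible over $\Bbb F_3$:

```python
# Brute-force count of points on the Hermitian curve X^q Z + X Z^q = Y^{q+1}
# over F_{q^2} for q = 3 (illustrative check, not part of the proof).
# F_9 is modelled as pairs (a, b) = a + b*i with i^2 = -1, since the
# polynomial x^2 + 1 is irreducible over F_3.
q = 3

def add(u, v):
    return ((u[0] + v[0]) % 3, (u[1] + v[1]) % 3)

def mul(u, v):
    a, b = u
    c, d = v
    return ((a * c - b * d) % 3, (a * d + b * c) % 3)

def power(u, n):
    r = (1, 0)
    for _ in range(n):
        r = mul(r, u)
    return r

F9 = [(a, b) for a in range(3) for b in range(3)]

# Affine chart Z = 1: the equation becomes x^q + x = y^{q+1}.
affine = sum(1 for x in F9 for y in F9
             if add(power(x, q), x) == power(y, q + 1))

# Adding the unique point (1:0:0) on the line Z = 0:
assert affine + 1 == q**3 + 1  # 28 points
```

For $q=3$ this counts the $27$ affine points in the chart $Z=1$ (the left side is the trace $\Bbb F_9 \rightarrow \Bbb F_3$ and the right side the norm) plus the single point at infinity.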
It follows from \cite[Theorem 1]{fukasawa} that we have a morphism $\varphi: H \rightarrow \Bbb P^2$ such that $\varphi$ is birational onto $\varphi(H)$, $\deg \varphi(H)=q^3+1$ and there exist two inner Galois points.
To determine the number of inner Galois points on $\varphi(H)$, we consider the image $\varphi(H(\Bbb F_{q^2}))$.
As in the proof of \cite[Theorem 1]{fukasawa}, $\varphi$ is represented by
$$ \left(\frac{1}{y^{q^2}-y}:\frac{x^{q^2}}{y^{q^2}-x^{q^2-1}y}:1\right). $$
Then, $\varphi(P_1)=(0:1:0)$, $\varphi(P_2)=(1:0:0)$ and $\varphi(H(\Bbb F_{q^2}))=\varphi(H) \cap \{Z=0\}$.
Let $P=(\alpha:\beta:1) \in H(\Bbb F_{q^2}) \setminus \{P_1, P_2\}$.
Then, $y-\beta$ is a local parameter at $P$.
Let $u=y-\beta$ and $v=(y/x)-(\beta/\alpha)$.
Note that
$$y^{q^2}-y=u^{q^2}-u, \ (y/x)^{q^2}-(y/x)=v^{q^2}-v, \mbox{ and }
\frac{x^{q^2}(y^{q^2}-y)}{y^{q^2}-x^{q^2-1}y}=\frac{u}{v} \times \frac{u^{q^2-1}-1}{v^{q^2-1}-1}. $$
On the other hand,
$$ \frac{dv}{dy}=\frac{x-y\frac{dx}{dy}}{x^2}=-x^{q-2}. $$
It follows that $\varphi(P)=(-\alpha^{q-2}:1:0)$.
When $\alpha^q+\alpha \ne 0$, the fiber $\varphi^{-1}(\varphi(P))$ contains at least $q+1$ points (that is, $\varphi(P)$ is a singular point of $\varphi(H)$).
We consider the case where $\alpha^{q-1}+1=0$.
Then, $\varphi(P)=(1:\alpha:0)$ and the projection $\pi_{\varphi(P)}$ is represented by
$$ \left(-\alpha \frac{1}{y^{q^2}-y}+\frac{x^{q^2}}{y^{q^2}-x^{q^2-1}y}:1\right)=\left(\frac{x-\alpha}{y} \times \frac{(x-\alpha)^{q^2-1}y^{q^2-1}-x^{q^2-1}}{(y^{q^2}-1)(y^{q^2-1}-x^{q^2-1})}:1\right). $$
It follows that the ramification index at $P$ is equal to $q$.
This implies that the intersection multiplicity of $\varphi(H)$ and the tangent line at $\varphi(P)$ is $q+1$.
Assume that $\varphi(R)$ is an inner Galois point for some $R \in H$.
Then, the associated Galois group $G_{\varphi(R)}$ is of order $q^3$, which is a Sylow $p$-subgroup of ${\rm Aut}(H) \cong {\rm PGU}(3, q)$ (see \cite[pp.~643--644]{hkt}).
Therefore, there exists $P \in H(\Bbb F_{q^2})$ such that $\sigma (P)=P$ for any $\sigma \in G_{\varphi(R)}$.
Then, the order of the pull-back of a linear polynomial given by the tangent line at $P$ is $q^3$ or $q^3+1$, and hence, $\varphi^{-1}(\varphi(P))=\{P\}$.
It follows that $P=P_1$ or $P_2$, and hence, $R=P_1$ or $P_2$.
The proof of Theorem \ref{hermitian} is completed.
\section{Suzuki curves}
See \cite{hansen-stichtenoth}, \cite{gkt} or \cite[Section 12.2]{hkt} for properties of the Suzuki curves.
We take $x=X/Z$ and $y=Y/Z$.
Let $p=2$, $q_0$ a power of $2$, $q=2q_0^2$, and let $C \subset \Bbb P^2$ be (the projective closure of) the curve defined by
$$ x^q+x=y^{2q_0}(y^q+y).$$
The smooth model of $C$ is denoted by $\hat{C}$ with normalization $r: \hat{C} \rightarrow C$.
Let $P_{\infty}=(1:0:0)$ and $P_2=(0:0:1) \in C$.
It is known that $P_{\infty}$ is a unique singular point of $C$ and $r^{-1}(P_{\infty})$ consists of a unique point $P_1 \in \hat{C}$.
We consider the subgroup
$$G_{1}:=\left\{
\left(\begin{array}{ccc}
1 & a^{2q_0} & b \\
0 & 1 & a \\
0 & 0 & 1
\end{array}\right) \ ; \
a, b \in \Bbb F_{q}
\right\} $$
of ${\rm PGL}(3, k)$, which is of order $q^2$.
For any $\sigma \in G_{1}$, it follows that $\sigma(P_{\infty})=P_{\infty}$ and $\sigma(C(\Bbb F_{q})\setminus \{P_\infty\})=C(\Bbb F_{q}) \setminus \{P_\infty\}$.
In particular, there exists an inclusion $G_1 \hookrightarrow {\rm Aut}(\hat{C})$.
Note that $k(C)=k(x,y)$ and $y^{q}+y \in k(C)^{G_1}$.
Since $[k(x,y):k(y)]=q$ and $[k(y):k(y^{q}+y)]=q$, it follows that $k(y^{q}+y)=k(C)^{G_1}$ and $k(C)^{G_1} \cong k(\Bbb P^1)$.
Let $h:=xy+x^{2q_0}+y^{2q_0+2}$ and let $\psi$ be the rational transformation of $\Bbb A^2$ given by
$$ (x, y) \mapsto (y/h, x/h). $$
Then, $\psi$ induces an involution of $\hat{C}$ and $\psi(P_1)=P_2$.
Let $G_2:=\psi G_1 \psi \subset {\rm Aut}(\hat{C})$, which is of order $q^2$.
Note that $k(C)^{G_2}=k((x/h)^q+(x/h))$.
Then, $\hat{C}/G_{i}\cong \Bbb P^1$ for $i=1, 2$, $G_{1} \cap G_{2}=\{1\}$, and
$$ \{P_1\} \cup \{\sigma(P_2) \ | \ \sigma \in G_{1}\}=\hat{C}(\Bbb F_{q})=\{P_2\}\cup\{\tau(P_1) \ | \ \tau \in G_{2}\}. $$
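This point count, $\#\hat{C}(\Bbb F_{q}) = q^2+1$, can likewise be checked by brute force for the smallest case $q_0=2$, $q=8$ (an illustrative check, not part of the proof), modelling $\Bbb F_8$ as bit-polynomials over $\Bbb F_2$ modulo $x^3+x+1$:

```python
# Brute-force count of affine F_q-points on the Suzuki curve
# x^q + x = y^{2 q_0}(y^q + y) for q_0 = 2, q = 2 q_0^2 = 8 (illustrative
# check).  F_8 is modelled as 3-bit polynomials over F_2 modulo x^3 + x + 1.
q0, q = 2, 8
MOD = 0b1011  # x^3 + x + 1, irreducible over F_2

def mul(u, v):
    r = 0
    while v:
        if v & 1:
            r ^= u
        u <<= 1
        if u & 0b1000:
            u ^= MOD
        v >>= 1
    return r

def power(u, n):
    r = 1
    for _ in range(n):
        r = mul(r, u)
    return r

# Every (x, y) in F_8 x F_8 satisfies the equation, since t^q = t on F_q
# makes both sides vanish; together with P_1 this gives q^2 + 1 points.
affine = sum(1 for x in range(8) for y in range(8)
             if power(x, q) ^ x == mul(power(y, 2 * q0), power(y, q) ^ y))
assert affine + 1 == q**2 + 1  # 65 points
```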
It follows from \cite[Theorem 1]{fukasawa} that we have a morphism $\varphi: \hat{C} \rightarrow \Bbb P^2$ such that $\varphi$ is birational onto $\varphi(\hat{C})$, $\deg \varphi(\hat{C})=q^2+1$ and there exist two inner Galois points.
To determine the number of inner Galois points on $\varphi(\hat{C})$, we consider the image $\varphi(\hat{C}(\Bbb F_{q}))$.
As in the proof of \cite[Theorem 1]{fukasawa}, $\varphi$ is represented by
$$ \left(\frac{1}{y^{q}+y}:\frac{h^{q}}{x^{q}+h^{q-1}x}:1\right). $$
Then, $\varphi(P_1)=(0:1:0)$, $\varphi(P_2)=(1:0:0)$ and $\varphi(\hat{C}(\Bbb F_{q}))=\varphi(\hat{C}) \cap \{Z=0\}$.
Let $P=(\alpha:\beta:1) \in C(\Bbb F_{q}) \setminus \{P_\infty, P_2\}$.
Then, $y+\beta$ is a local parameter at $P$.
Let $u=y+\beta$ and $v=(x/h)+(\alpha/h(\alpha, \beta))$.
Note that
$$y^{q}+y=u^{q}+u, \ (x/h)^{q}+(x/h)=v^{q}+v, \mbox{ and }
\frac{h^q(y^{q}+y)}{x^{q}+h^{q-1}x}=\frac{u}{v} \times \frac{u^{q-1}+1}{v^{q-1}+1}. $$
On the other hand,
$$ \frac{dv}{dy}=\frac{y^{2q_0}h+x(y^{2q_0+1}+x)}{h^2}=\frac{h^{2q_0}}{h^2}=h^{2q_0-2}, $$
using the conditions $x^2=h^{2q_0}+x^{2q_0}y^{2q_0}+y^{4q_0+2}$ and $x^{2q_0}=h+xy+y^{2q_0+2}$.
It follows that $\varphi(P)=(h^{2q_0-2}(\alpha, \beta):1:0)$.
Note that $h^{q_0-1}(\gamma^{2q_0+1}, 0)=h^{q_0-1}(0, \gamma)=(\gamma^{2q_0+2})^{q_0-1}=1/\gamma$, for any $\gamma \in \Bbb F_q \setminus \{0\}$.
It follows that the set $\varphi(C(\Bbb F_q)\setminus \{P_{\infty}, P_2\})$ coincides with $\{Z=0\}(\Bbb F_q) \setminus \{(0:1:0), (1:0:0)\}$, and the fiber $\varphi^{-1}(\varphi(P))$ contains at least two points (that is, $\varphi(P)$ is a singular point of $\varphi(\hat{C})$) for any $P \in C(\Bbb F_q)\setminus \{P_{\infty}, P_2\}$.
Assume that $\varphi(R)$ is an inner Galois point for some $R \in \hat{C}$.
Then, the associated Galois group $G_{\varphi(R)}$ is of order $q^2$, which is a Sylow $2$-subgroup of the Suzuki group ${\rm Sz}(q)$ (see \cite[p.~564]{hkt}).
Therefore, there exists $P \in \hat{C}(\Bbb F_{q})$ such that $\sigma (P)=P$ for any $\sigma \in G_{\varphi(R)}$.
Similarly to the proof of Theorem \ref{hermitian}(c), $R=P_1$ or $P_2$.
The proof of Theorem \ref{suzuki} is completed.
\section{Ree curves}
See \cite{pedersen} or \cite[Section 12.4]{hkt} for properties of the Ree curves.
Let $p=3$, $q_0$ a power of $3$, $q=3q_0^2$, and let $C \subset \Bbb P^3$ be (the projective closure of) the space curve defined by
$$ y_1^q-y_1=x^{q_0}(x^q-x) \ \mbox{ and } \ y_2^q-y_2=x^{q_0}(y_1^q-y_1), $$
where $(x, y_1, y_2)$ and $(X:Y_1:Y_2:Z)$ are systems of affine and homogeneous coordinates of $\Bbb A^3$ and $\Bbb P^3$ respectively.
The smooth model of $C$ is denoted by $\hat{C}$ with normalization $r: \hat{C} \rightarrow C$.
Let $P_{\infty}=(0:0:1:0)$ and $P_2=(0:0:0:1) \in C$.
It is known that $P_{\infty}$ is a unique singular point of $C$ and $r^{-1}(P_{\infty})$ consists of a unique point $P_1 \in \hat{C}$.
We consider the subgroup
$$G_{1}:=\left\{
\left(\begin{array}{cccc}
1 & 0 & 0 & a \\
a^{q_0} & 1 & 0 & b \\
a^{2q_0} & -a^{q_0} & 1 & c \\
0 & 0 & 0 & 1
\end{array}\right) \ ; \
a, b, c \in \Bbb F_{q}
\right\} $$
of ${\rm PGL}(4, k)$, which is of order $q^3$.
For any $\sigma \in G_{1}$, it follows that $\sigma(P_{\infty})=P_{\infty}$ and $\sigma(C(\Bbb F_{q})\setminus \{P_{\infty}\})=C(\Bbb F_{q}) \setminus \{P_\infty\}$.
In particular, there exists an inclusion $G_1 \hookrightarrow {\rm Aut}(\hat{C})$.
Note that $k(C)=k(x,y_1, y_2)$ and $x^{q}-x \in k(C)^{G_1}$.
Since $[k(x,y_1, y_2):k(x)]=q^2$ and $[k(x):k(x^{q}-x)]=q$, it follows that $k(x^{q}-x)=k(C)^{G_1}$ and $k(C)^{G_1} \cong k(\Bbb P^1)$.
Let $\psi$ be the involution of $\hat{C}$ induced by
$$ (x, y_1, y_2) \mapsto (w_6/w_8, w_{10}/w_8, w_9/w_8), $$
as in \cite[p.126]{pedersen} or \cite[p. 577]{hkt} (see also \cite{eid, eid-duursma}).
It follows that $\psi(P_1)=P_2$.
Let $G_2:=\psi G_1 \psi \subset {\rm Aut}(\hat{C})$, which is of order $q^3$.
Note that $k(C)^{G_2}=k((w_6/w_8)^q-(w_6/w_8))$.
Then, $\hat{C}/G_{i}\cong \Bbb P^1$ for $i=1, 2$, $G_{1} \cap G_{2}=\{1\}$, and
$$ \{P_1\} \cup \{\sigma(P_2) \ | \ \sigma \in G_{1}\}=\hat{C}(\Bbb F_{q})=\{P_2\}\cup\{\tau(P_1) \ | \ \tau \in G_{2}\}. $$
It follows from \cite[Theorem 1]{fukasawa} that we have a morphism $\varphi: \hat{C} \rightarrow \Bbb P^2$ such that $\varphi$ is birational onto $\varphi(\hat{C})$, $\deg \varphi(\hat{C})=q^3+1$ and there exist two inner Galois points.
To determine the number of inner Galois points on $\varphi(\hat{C})$, we consider the image $\varphi(\hat{C}(\Bbb F_{q}))$.
As in the proof of \cite[Theorem 1]{fukasawa}, $\varphi$ is represented by
$$ \left(\frac{1}{x^{q}-x}:\frac{w_8^{q}}{w_6^{q}-w_8^{q-1}w_6}:1\right). $$
Then, $\varphi(P_1)=(0:1:0)$, $\varphi(P_2)=(1:0:0)$ and $\varphi(\hat{C}(\Bbb F_{q}))=\varphi(\hat{C}) \cap \{Z=0\}$.
Let $P=(\alpha:\beta:\gamma:1) \in C(\Bbb F_{q}) \setminus \{P_\infty, P_2\}$.
Then, $x-\alpha$ is a local parameter at $P$.
Let $u=x-\alpha$ and $A=(w_6/w_8)-(w_6/w_8)(\alpha, \beta)$.
Note that
$$x^{q}-x=u^{q}-u, \ (w_6/w_8)^{q}-(w_6/w_8)=A^{q}-A, \mbox{ and }
\frac{w_8^q(x^{q}-x)}{w_6^{q}-w_8^{q-1}w_6}=\frac{u}{A} \times \frac{u^{q-1}-1}{A^{q-1}-1}. $$
On the other hand,
$$ \frac{dA}{dx}=\frac{w_4^{3q_0}w_8-w_6w_7^{3q_0}}{w_8^2}=\frac{w_8^{3q_0}}{w_8^2}=w_8^{3q_0-2}. $$
It follows that $\varphi(P)=(w_8^{3q_0-2}(\alpha, \beta, \gamma):1:0)$.
Note that $w_8^{3q_0-2}(\delta^{-1}, 0, 0)=-\delta^2$, $w_8^{3q_0-2}(0, \delta, 0)=\delta^{3q_0-3}$ and $w_8^{3q_0-2}(0, \delta^{-q_0-1},0)=\delta^2$, for any $\delta \in \Bbb F_q \setminus \{0\}$.
It follows from $\sqrt{-1} \not\in \Bbb F_q$ that the set $\varphi(C(\Bbb F_q)\setminus \{P_{\infty}, P_2\})$ coincides with $\{Z=0\}(\Bbb F_q) \setminus \{(0:1:0), (1:0:0)\}$, and the fiber $\varphi^{-1}(\varphi(P))$ contains at least two points (that is, $\varphi(P)$ is a singular point of $\varphi(\hat{C})$) for any $P \in C(\Bbb F_q)\setminus \{P_{\infty}, P_2\}$.
Assume that $\varphi(R)$ is an inner Galois point for some $R \in \hat{C}$.
Then, the associated Galois group $G_{\varphi(R)}$ is of order $q^3$, which is a Sylow $3$-subgroup of the Ree group ${\rm Ree}(q)$ (see \cite[p.~575]{hkt}).
Similarly to the proof of Theorem \ref{hermitian}(c), $R=P_1$ or $P_2$.
The proof of Theorem \ref{ree} is completed.
\section{Introduction}
A broad range of many-body nonequilibrium systems have in common that different degrees of freedom within them undergo motion on two well-separated time-scales, and that the faster degrees of freedom are the only ones directly subject to external driving. Such separation can occur if a faster set of active particles acts as a bath for a heavier, more slowly relaxing set of larger, extended degrees of freedom, as in the example of a polymer immersed in a mixture of self-propelling particles \cite{nikola2016active_polymer}. Alternatively, in many systems one can usefully identify coarse-grained variables describing global features of the many-body dynamics, which may relax more slowly than the coordinates of individual particles. Such order parameters might then be thought of as a set of slowly-varying constraints on the driven fast dynamics, as for example in \cite{sasa2015kuramoto_thermo}.
In all such cases, it is possible in principle for the particular configuration of a set of slow variables to have a significant influence on the specific nonequilibrium steady-state reached by the fast variables. Thus, in general, a feedback loop can arise in which the slow variables first establish the features of the fast steady-state, and then the statistics of this steady-state in turn determine the stochastic dynamics of the resulting local motion in slow variable space. The goal of this paper is to characterize the dynamical attractors of slow variable evolution in terms of the particular, special properties of the fast steady-states to which they give rise.
Nonequilibrium systems with time-scale separation have been extensively studied over the last several decades. The most common context in which they have come up is the formalization of the concept of a ``thermal bath'': explicitly modelling the fast bath degrees of freedom as a Hamiltonian system, and studying their effects on the slow variables. In this way, one can in some cases recover the effective friction tensor \cite{Berry1993chaotic_bath}, and the corresponding noise term, related by the fluctuation-dissipation theorem \cite{jarzynski1995chotic_thermalization}. There is also an extensive literature studying the conditions and effects of deviations from this basic result, which are generally termed ``anomalous diffusion''; see e.g., \cite{lutz2001fractional_Lang}. Within this context, the ``slow'' degrees of freedom lack their own dynamics, and are considered only as probes of the fast bath. More recent studies have considered the minimal dissipation required from an external agent to slowly move such probes. A geometric interpretation of this bound was presented in \cite{zulkowski2012geometry}, and extended to nonequilibrium baths in \cite{zulkowski2013neq_geom}, as well as to reversible external protocols in \cite{machta2015dissipation}. Systems where slow variables have their own dynamics under a conservative coupling to the fast bath have received relatively little attention, excepting notable recent work on a simple harmonic oscillator probe in \cite{polkovnikov2016neg_mass}, and a more general exploration in \cite{Maes2015neq_fluid,Maes2015Lin_t_seper}, where some formal results relating dissipation and forces on the slow variables were derived.
Most of this previous work has relied on the projection operator technique to adiabatically eliminate fast variables and obtain the reduced Fokker-Planck equations for the slow variables, as in Ch.~6.4 of \cite{gardiner1985handbook}, or see \cite{bo2016time-sc_sep_functionals} for a recent review. The straightforward implication of this approach is that at long times, probability density in slow variable space is expected to accumulate in locations where inward mean drift is strong, and where local diffusion is low. Here, we first derive this effect for a general class of Langevin systems using a response-field path integral framework that makes clear the relationship between the reduced Fokker-Planck parameters and the absorption and thermalization of drive energy in the fast steady-state. Some related path-integral system reduction techniques have been studied before (e.g., \cite{feynman1963influence_func,bravi2017sub_network}), but in substantially different contexts. Having established a means of explicitly calculating the parameters of the multiplicative-noise stochastic process governing the slow variables, we then proceed to analyse the implications for what we term ``least rattling feedback,'' in which slow variables dynamically fine-tune themselves to bring about fast variable steady-states that attenuate force fluctuations so as to lower the slow variable effective temperature.
The tendency of slow variables in driven systems to move thermophoretically towards regions of lower effective temperature has been noticed in the past, most commonly in situations where the slow variables find a way to reduce the influx of energy from the drive (as in \cite{magiera2015trapping, corte2008chaikin_spun_colloids}). As we shall see here, however, a striking alternative can arise if the fast variables are capable of exhibiting regular, integrable dynamics; in such a case, least rattling stability can coexist with strong coupling to, and absorption of work from, the external drive.
In section \ref{sec:anal_slow} of this article, we will present the derivation of our main analytical result, which establishes a relationship between force fluctuations in fast driven variables and the resulting effective temperature experienced by the slow variables in a driven system. In section \ref{sec:KR_cart}, we will carry out a numerical analysis of the kicked rotor on a cart: a time-scale separated, damped, driven dynamical system that is ideally suited for demonstrating the predictive power of the ``least rattling'' framework. Not only will this analysis draw clear connections to methods of equilibrium statistical physics and show how they generalize in such a nonequilibrium scenario, but it will also underline how ``least rattling'' helps to explain the non-trivial relationship between dissipation rate and local kinetic stability in driven systems.
\section{Analytical slow dynamics} \label{sec:anal_slow}
In this section, we lay out a general formalism for extracting slow dynamics in stochastic systems with strong time-scale separation. We will model ``slow'' variables $x_{a}$ and ``fast'' variables $y_{i}$ that evolve according to a coupled system of Langevin equations. Our approach will be to integrate out the fast degrees of freedom and develop an effective theory for the dynamics of the slow variables that is controlled by a small parameter $\epsilon$ which quantifies the time-scale separation between fast and slow. As we carry out this integration, we will show that the effects on $x_{a}$ from the fast steady-state of the $y_{i}$ variables at leading order in $\epsilon$ are an average force and, more subtly, a random force and renormalized drag that are calculated from the two-point correlation function of the forces acting between $x_{a}$ and $y_{i}$. These latter effects are identified as an emergent, position-dependent effective temperature experienced by the slow coordinates.
\subsection{Setup}
While the method we present here is not restricted to this context, it is easiest to illustrate on systems whose dynamics are given by first-order equations, as below. In fact, it works the same way for other types of fast dynamics, such as inertial or discrete dynamics, as long as there is a fast relaxation to a steady-state.
\begin{align}
&\eta \,\dot{x}_a = F_a(x_a,y_i,t) + \sqrt{2\,T\,\eta}\,\xi_a \nonumber\\
&\mu \,\dot{y}_i = f_i(x_a,y_i,t) + \sqrt{2\,T\,\mu}\,\xi_i.
\label{eq:general_sys}
\end{align}
Here the noise $\xi$ is the usual Gaussian white noise: $\<\xi_a(t)\>=0$ and $\<\xi_a(t)\, \xi_b(s)\> = \delta_{a,b}\, \delta(t-s)$. Taking the limit $\mu/\eta \equiv \epsilon \ll 1$ amounts to explicitly separating $x_a$ as slow modes, and $y_i$ as fast ones ($a,b,c$ index the slow configuration space, and $i,j,k$ the fast one). The natural physical interpretation of this system, as overdamped dynamics in a thermal bath of temperature $ T $, with two different damping coefficients $\mu$ and $\eta$, noise amplitudes given by Einstein's relation, and forces $F_a$ and $f_i$, will be implied from now on for concreteness, but is not at all necessary. With a slight adjustment the system could as well represent underdamped dynamics, such as in the kicked rotor model system we characterize below.
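For concreteness, a minimal Euler--Maruyama discretization of eq.~\ref{eq:general_sys} for one slow and one fast coordinate looks as follows. The specific forces $F$ and $f$ (a hypothetical harmonic coupling, with the fast variable tracking the slow one) are illustrative stand-ins, not a system studied in this paper:

```python
# Minimal Euler--Maruyama integration of the coupled Langevin system above,
# for one slow and one fast coordinate.  The forces are illustrative
# stand-ins (a hypothetical harmonic coupling), chosen only to show the scheme.
import numpy as np

rng = np.random.default_rng(0)
eta, mu, T = 1.0, 0.01, 0.5          # epsilon = mu / eta = 0.01
F = lambda x, y: -y                  # force on the slow coordinate
f = lambda x, y: -(y - x)            # fast coordinate relaxes toward x

dt, n_steps = 1e-4, 200_000          # dt << mu resolves the fast relaxation
x, y = 1.0, 0.0
for _ in range(n_steps):
    x += (F(x, y) / eta) * dt + np.sqrt(2 * T * dt / eta) * rng.normal()
    y += (f(x, y) / mu) * dt + np.sqrt(2 * T * dt / mu) * rng.normal()
```

The only numerical requirement is that the integration step resolves the fast relaxation time $\mu$; the slow variable then effectively samples the fast steady-state, which is the regime the expansion exploits.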
\subsection{Results}
The detailed derivation of the effective slow dynamics is relegated to Appendix \ref{app:slow_deriv}. Here we mention only the key steps in the derivation. First, we rescale time $t \rightarrow \mu\, t$, making the slow dynamics obey $\dot{x}_a = \epsilon\,F_a + \sqrt{2\,T\,\epsilon}\,\xi_a$, while the relaxation time of the fast variables becomes $ \Ord{1} $. Second, we express the probability of slow trajectories in terms of the Martin-Siggia-Rose path integral (also termed the response-field formalism) \cite{cardy2008neq_SM_turbul}, and third, we perform a cumulant expansion controlled by $\epsilon$:
\begin{widetext}
\begin{align}
P[x(t)]
&=\frac{1}{Z_x} \int \mathcal{D} \tilde{x} \<\exp{-\int dt \; \left\{i \tilde{x}_a\(\dot{x}_a - \epsilon\, F_a(x,y,t)\) + \epsilon\, T\,\tilde{x}_a^{\;2} \right\}}\>_{y|x(t)}
\label{eq:MSR_slow} \nonumber\\
&= \frac{1}{Z_x} \int \mathcal{D} \tilde{x} \;\exp{-\int dt \; \left\{i \,\tilde{x}_a\dot{x}_a + \epsilon\, T\,\tilde{x}_a^{\;2} - i \,\epsilon\,\tilde{x}_a \<F_a\>_y + \frac{\epsilon^2}{2} \tilde{x}_a \tilde{x}_b \<F_a, F_b\>_y + \Ord{\epsilon^3}\right\}}.
\end{align}
\end{widetext}
where $ Z_x $ is the normalization, and $ \tilde{x}(t) $ is the auxiliary ``response'' field. In the last line, we see that the $\Ord{\epsilon^2}$ term in the expansion, like the temperature $T$, comes in $\propto \tilde{x}^2$, and thus gives a correction to the noise on the slow dynamics -- this is the effect that we will focus on throughout the rest of this paper. Doing this more carefully (as shown in the Appendix), we obtain the effective slow dynamics, which constitute our main analytical result:
\begin{align} \label{eq:eff_slow}
&\quad \gamma_{ab}\,\cdot \,\dot{x}_b = \epsilon \<F_a\>_{y|fix\; x} + \sqrt{2\,\epsilon D}_{ab}\,\cdot\,\xi_b \\
&\gamma_{ab}(x) = \delta_{a,b} + \epsilon \int dt' \;(t-t')\<i \, \tilde{y}_i\, \partial_b f_i \at{t'}, F_a \at{t} \>_{y|fix\; x} \nonumber\\
&D_{ab}(x) = T \,\delta_{a,b} + \frac{\epsilon}{2} \int dt' \<F_a \at{t'}, F_b \at{t}\>_{y | fix \; x}. \nonumber
\end{align}
where the matrix square root is defined by $B\equiv \sqrt{D} \;\Leftrightarrow\; B.B^T=D$. Dots denote It\^o products, which will be typical here (see sec.\ref{app:noise_corr}). Note that only the connected components of the expectations appear in the expressions for $ \gamma(x) $ and $ D(x) $ (denoted by commas), and thus are insensitive to any deterministic motion of the fast variables. Further note that there is also an $\Ord{\epsilon}$ correction of the damping coefficient, which, for a fully conservative (undriven) system, matches the noise correction to preserve Einstein's relation, as it must (see sec.\ref{app:equil}). For non-conservative forces, however, this will not be the case, and the ratio of the effective noise to damping amplitudes can be used to define an effective temperature tensor $T_{eff}(x_a)\equiv \gamma^{-1}.D.\(\gamma^{-1}\)^T$, which will generally depend on the slow coordinates -- i.e., the noise on slow variables becomes multiplicative.
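As a minimal worked example of eq.~\ref{eq:eff_slow} (with hypothetical ingredients chosen for tractability, not a system treated in this paper), suppose a single slow coordinate couples linearly to a single fast mode, $F = c(x)\, y$, the fast force $f$ is independent of $x$ (so $\partial_b f_i = 0$ and $\gamma = 1$), and the driven fast steady-state has $\<y\> = 0$ with an exponential correlator $\<y \at{t'}, y \at{t}\> = \sigma^2 e^{-|t-t'|/\tau}$. Then the mean force vanishes, while
\begin{align*}
D(x) = T + \frac{\epsilon}{2}\, c(x)^2 \int dt'\, \sigma^2\, e^{-|t-t'|/\tau} = T + \epsilon\, c(x)^2\, \sigma^2 \tau,
\end{align*}
so that $T_{eff}(x) = T + \epsilon\, c(x)^2\, \sigma^2 \tau$: the slow coordinate experiences a hotter effective bath wherever the coupling amplitude $|c(x)|$, and hence the transmitted force fluctuation, is larger.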
\subsection{Least Rattling}
The significance of the above formal result is that to extract the effective slow dynamics we need not know everything about the fast modes, but only the mean and variance of the force fluctuations $F_a$ in the $y_i$ (fast) steady-state at fixed $x_a$ (slow d.o.f.). All other details of the fast dynamics become irrelevant by the same mechanism as for the central limit theorem. The slow dynamics thus follow the simple equation \ref{eq:eff_slow}, which can often be solved analytically. Its qualitative behavior is guided by a competition between the mean drift along the average force $ \<F(x)\> $ and a median drift down the effective temperature gradients $ T_{eff}(x) $. While the former effect is larger by a factor $ 1/\epsilon $, it is a vector quantity, and as such, may be suppressed by averaging in the case of high-dimensional disordered fast dynamics. This is in contrast to $ T_{eff} $, which comes in as a positive-definite tensor, making it robust to averaging-out. Without rigorously exploring this trade-off for now, in this work we simply choose to focus on the effect of $ T_{eff}(x) $, which guides the slow variables towards regions in their configuration space that yield more orderly, less chaotic, or less ``rattling'' fast dynamics (see sec.\ref{app:regularity_Teff}). We suggest that this effect might underlie the self-organization recently studied in many non-equilibrium systems \cite{redner2013phase-sep_colloids, schaller2010flock_fillament}.
We now expand on a few of the points mentioned above. First, how general is this method? Its scope is basically inherited from the regime of applicability of the Central Limit Theorem (CLT): our requirement of strong time-scale separation amounts to the condition that fast fluctuations decorrelate faster than the dynamical time-scale of the slow variables. Their effect on the slow coordinates then adds up as a sum of i.i.d. random variables, satisfying the conditions of the CLT. Thus any fast fluctuations must either decorrelate quickly (e.g., due to thermal noise or chaos), thereby contributing to the Gaussian noise amplitude, or not decorrelate at all (as with integrable behavior), contributing to the mean force $ \<F\> $. This requirement could notably be broken if some fast fluctuations decay slower than exponentially -- a scenario that leads to effective colored noise and anomalous diffusion, but retains much of the general intuition from eq.\ref{eq:eff_slow}.
This framework is particularly useful in cases where fast dynamics can be in several qualitatively different dynamical phases, controlled by the slow variables. E.g., if a fast variable undergoes a transition from chaotic to integrable behavior as a function of some slow coordinate, then we will typically expect its effect to transition from a noise contribution to an average force contribution respectively -- as we will see in the toy system below. Making this precise and describing the relevant universality classes of these transitions based on their symmetry structure can be done within the broader framework of renormalization group flow. This could allow extracting the effective slow dynamics, much like it allows finding large scale physics for quantum or statistical fields \cite{kardar2007SM_fields}.
Finally, we mentioned above that while the average force $ \<F\> $ causes the \emph{mean} of the $ x_{a} $-ensemble (slow variables) to drift, the multiplicative It\^o noise set by the effective temperature bath $ T_{eff}(x) $ drives a drift only of the \emph{median} of that same ensemble. This latter effect arises because the probability distribution $ p(x_{a}) $ grows increasingly heavy-tailed with time (log-normal distributions are typical), so that while the mean remains fixed, the median drifts towards the low-noise regions. This means that any finite ensemble of trajectories will also settle in the low-noise region, and the mean will never be realized experimentally. Some aspects of this ergodicity-breaking phenomenon were discussed in \cite{peters2013_PRL_ergBreak}, and a similar problem was considered in \cite{schnitzer93-Smoluch_for_chemotaxis}. The key point for us is that the least-rattling effect is inherently non-ergodic, and is observed only by monitoring the system over time.
\section{Toy Model}\label{sec:KR_cart}
To illustrate the above results, we consider a toy model designed to be the simplest possible example capturing all the qualitative features we might expect of more general scale-separated driven systems of interest. Specifically, we take the kicked-rotor-on-a-cart setup shown in fig.\ref{fig:cart_charact}a. The fast kicked-rotor (Chirikov standard map) dynamics are chosen as the simplest system that can realize both chaotic and integrable behaviors in different parameter regimes. Essentially, the system is a rigid pendulum that experiences no external forces except for periodic kicks of a uniform force field (as though gravity gets turned on in brief bursts), and is given by the first two lines of eq.\ref{eq:KR_cart}. We model the system as immersed in a thermal bath by adding a small damping and noise (see the third line of eq.\ref{eq:KR_cart}), whose effects have been studied in \cite{feudel1996map100attr, kraut1999KR_noise}. The point relevant for the following analysis is that when the driving force amplitude (henceforth called ``kicking strength'') is large, the rotor dynamics are fully chaotic, but if the kicking strength drops below a critical value ($K\lesssim 5$), periodic orbits appear in the configuration space, and are made globally attractive in the presence of damping, quickly making the dynamics integrable (we refer to this phenomenon below as ``dynamical regularization''). Thus, by controlling the effective drive strength, it is possible to switch between chaotic and regular regimes of the fast dynamics.
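To make the regularization transition concrete, the following is a minimal numerical sketch (not the full model of eq.\ref{eq:KR_cart}): a stroboscopic damped standard map with the cart frozen, $T_0=0$, and the damping accumulated over each kick period lumped into a single factor $(1-b)$. All parameter values and initial conditions are illustrative choices.

```python
import numpy as np

def damped_kicked_rotor(K, b=0.1, n_kicks=20000, theta0=1.0, v0=0.5):
    """Stroboscopic sketch of the fast rotor: one map step per kick, with
    the per-period damping lumped into the factor (1 - b)."""
    theta, v = theta0, v0
    vs = np.empty(n_kicks)
    for i in range(n_kicks):
        v = (1.0 - b) * v - K * np.sin(theta)   # kick plus damping
        theta = (theta + v) % (2.0 * np.pi)      # free rotation between kicks
        vs[i] = v
    return vs

v_regular = damped_kicked_rotor(K=3.0)   # below K_c ~ 5: falls onto a periodic attractor
v_chaotic = damped_kicked_rotor(K=8.0)   # above K_c: fully chaotic
std_regular = v_regular[-1000:].std()    # late-time velocity fluctuations
std_chaotic = v_chaotic[-1000:].std()
```

Late-time velocity fluctuations are large in the chaotic regime and collapse once the dynamics regularize onto a periodic attractor.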
We then fasten the pivot of the fast kicked rotor on a slow cart that can slide back and forth in a highly viscous medium, perpendicular to the direction of the kick accelerations (i.e., along the symmetry axis of the rotor dynamics -- see fig. \ref{fig:cart_charact}a). The cart is pulled by the tension in the rod, which depends on the fast dynamics, while the global cart position $ x $ can feed back on the fast dynamics by having a kicking field that varies along the cart's track $ K(x) $. This way we have slow variables conservatively coupled to driven fast dynamics, and a feedback loop controlled through the arbitrary form of $ K(x) $ -- providing a flexible testing ground. Overall, we argue that, while vastly simplified, this model captures essential physical features of many multi-particle nonequilibrium systems of potential interest.
\subsection{Model Setup}
The toy model explored here is presented in fig. \ref{fig:cart_charact}a: the kicked rotor is attached to a massless cart moving on a highly-viscous track, which ensures that the cart's velocity is much smaller than the rotor's. The exact equations of motion for the system can be derived from a force-balance, and in their dimensionless form become:
\begin{align}
& c\,\dot{x} = -\partial_x U(x) + \sqrt{2\,T\,c}\;\xi_x
+\underbrace{\sin\theta \,\(v^2 -\ddot{x}\,\sin \theta\)}_{\equiv F_x}\nonumber\\
& \dot{\theta} = v \nonumber\\
& \dot{v} = - K(x)\, \sin \theta \;\delta(t-n) \nonumber\\
&\hspace{7em}- b\, v + \sqrt{2\,T\,b}\;\xi_v
- \ddot{x}\,\cos \theta
\label{eq:KR_cart}
\end{align}
where all lengths are measured in units of rotor arm length, time in units of kicking period, and the angle $\theta$ is $2\pi$-periodic. Note that for practical reasons (see Appendix \ref{app:KR_cart}), we also assumed that the cart is momentarily pinned down during each kick, so as to remove the term $ \frac{1}{2} K(x)\,\sin 2\theta \;\delta(t-n) $ that should otherwise be included in $ F_x $ due to the direct coupling of the kicks to the cart. For now, we can motivate this by saying that the interesting problem is where the driving force affects the slow dynamics only by means of the fast ones, and not directly, while this chosen implementation can simply be viewed as an additional component of the drive protocol. Additionally, to provide more modelling freedom, we can include an arbitrary potential $ U(x) $ acting directly on the cart to produce a conservative force. Time-scale separation in this model implies that back reaction from cart dynamics on the rotor is small -- i.e., here that $\ddot{x} \ll K$ (by differentiating the last line, we see that indeed $\ddot{x} \sim \Ord{v^3/c} \ll 1$ for $ c \gg 1 $). Thus the leading-order feedback from the slow variables onto fast dynamics comes from $ x $-dependence of $ K $, which we have full control over, making for a convenient toy-model. We also independently assume that $b \ll 1$ so that fast dynamics are close to the ideal kicked rotor and retain its features.
\subsection{Analytical Evaluation}
For large $K$ (above the dynamical regularization threshold, i.e. $ K \gtrsim 5$), the steady-state of the fast dynamics is fully chaotic, and thus thermal -- i.e., we assume thermalization of the entirety of drive energy among the fast fluctuations, as happens in \cite{cohen2013drive_thermalization} for example. This way, the steady-state distribution is Boltzmann, which is here uniform over $\theta$ and Gaussian over $v$, whose variance we can call $ T_R $ (rotor temperature). The symmetry of this state over $ \theta $ and $ v $ gives $ \<F_x\>_{s.s.}=0 $, making the fluctuations dominant. The only remaining parameter we need to find is then $ T_R $, which is fully constrained by energy balance as follows. In general, to keep an ergodic system at an effective temperature that is higher than that of its bath requires dissipation \cite{horowitz2017diss_neq_distr}:
\begin{align}
&\delta Q = \int dt\, v \circ \(b\,v - \sqrt{2\,T_0\,b}\;\xi_v\)
=b \(\<v^2\>-T_0\) \delta t \nonumber\\
&\mathcal{P} \equiv \pd{Q}{t} = b \, \(T_{eff} - T_0\).
\label{eq:power_Teff}
\end{align}
(for 1D systems with mass=1). Moreover, we can find the power exerted by the kicking force to be $\mathcal{P}=K^2/4$ in the chaotic regime, which in the steady-state must balance the dissipated power. This allows us to extract the effective rotor temperature: $T_R \sim T_0 + \frac{K^2}{4\,b}+\Ord{\frac{1}{c}}$ (see sec.\ref{app:KR_ss} for details).
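As a rough numerical check of this energy balance (a sketch with illustrative parameters, the cart frozen and $T_0=0$, and damping lumped per kick period), a damped standard map run in the chaotic regime should thermalize to $\<v^2\> \approx K^2/4b$:

```python
import numpy as np

K, b = 8.0, 0.1            # chaotic regime, weak damping (illustrative values)
theta, v = 1.0, 0.5
v2_samples = []
for i in range(30000):
    v = (1.0 - b) * v - K * np.sin(theta)   # kick, with damping lumped per period
    theta = (theta + v) % (2.0 * np.pi)
    if i >= 2000:                           # discard the thermalization transient
        v2_samples.append(v * v)

T_R_measured = sum(v2_samples) / len(v2_samples)
T_R_predicted = K**2 / (4.0 * b)            # energy balance with T_0 = 0
```

The time-averaged $v^2$ should agree with the prediction up to the $\Ord{b}$ corrections neglected in lumping the damping.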
This, however, only gives us information about the fast behavior, while the $x$-noise correction that we want will also depend on the nature of the rotor-cart coupling. We thus need to evaluate the $x$-force correlations and $\delta T_x =\frac{1}{2 c} \int dt \<F_x(t),F_x(s)\>$, where, as above, $F_x = v^2 \,\sin\theta -\ddot{x}\,\sin^2 \theta$ = (centripetal $F_c$) -- (inertia $F_i$) is the force on the cart. The calculation is relatively straightforward and detailed in sec.\ref{app:KR_noise}, where we also show that the $\gamma$ damping-coefficient correction vanishes by symmetry of the $ (\theta,v) $ distribution. We find that, while the inertial term can be ignored at leading order, the correlations of the centripetal force give us $\delta T_x = \frac{1}{2c}\int dt \<F_c,F_c\> = K^2/16c$. Note here that this multiplicative noise correction should be interpreted in the It\^o sense, as the $ \<F_c,F_c\> $ correlations decay on a time-scale faster than the kick period (see Appendix \ref{app:noise_corr}).
For $ K \lesssim 5 $, on the other hand, the rotor moves periodically in one of the regular attractors. This means that the cart experiences no additional stochasticity other than that from the thermal bath, giving a low $ T_{eff}=T_0 $, but as some of these attractors spontaneously break the left-right symmetry of the problem, we get $ \<F_x\>_{s.s.} \neq 0 $. As the motion in most of these attractors is very simple -- $ n \in \mathbb{Z}$ full revolutions of the rotor per kick -- we can estimate this force explicitly: $ \<F_x\>_{ss} = \int_0^1 dt \; v^2(t)\, \sin \theta(t) \sim b\,v_n + v_n^3/2c $, where $ v_n \equiv 2\pi n $, and $ \theta(t) $ and $ v(t) $ were estimated by integrating the equations of motion \ref{eq:KR_cart} at leading order (see sec.\ref{app:KR_ordered}).
Compiling the resulting predictions for the cart motion, we get:
\begin{align} \label{eq:cart_result}
c\, \dot{x} =& -\partial_x U(x) + \<F\> + \sqrt{2\,c\,T_{eff}}\cdot \xi\\
&\<F\> = \begin{cases}
v_n \(b + v_n^2/2c\) & K\lesssim 5\\
0 & K\gtrsim 5
\end{cases}\nonumber\\
&T_{eff} = \begin{cases}
T_0 & K\lesssim 5\\
T_0 + K^2/16c & K\gtrsim 5
\end{cases} \nonumber
\end{align}
with $ v_n\equiv 2\pi n $ and $ n $ some random integer, typically smaller than $ \Ord{\sqrt{T_R}} $ (since the rotor first explores its phase-space thermally before finding one of the regular attractors). Another quantity we can easily estimate for the two phases is the energy dissipation rate:
\begin{align}
\dot{Q}=\begin{cases}
v_n^2 \(b + v_n^2/2c\)& K\lesssim 5\\
K^2/4 & K\gtrsim 5
\end{cases}
\end{align}
Numerical simulations confirm these predictions in fig.\ref{fig:cart_charact} c, d, and e respectively.
\subsection{Numerical Tests}
To verify the above analytical results, we run numerical simulations of the full system dynamics in eq. \ref{eq:KR_cart}. To begin, we check the cart dynamics for different values of ($ x $-independent) $ K $ (and $ U(x)=0 $). Fig. \ref{fig:cart_charact}b shows typical cart trajectories for $ K $ in the regular and chaotic regimes. More systematically, plotting the apparent average drift $ \<F\> $ and fluctuations $ T_{eff} $ for multiple realizations at each $ K $, we get the plots in panels c and d of fig. \ref{fig:cart_charact}, respectively. We thus see quantitative agreement between the prominent features of these plots and the results of eq. \ref{eq:cart_result} -- shown here as black lines. Finally, fig. \ref{fig:cart_charact}e shows the heat dissipation rate in the different possible steady-states: while lowering $ T_{eff} $ corresponds to decreased dissipation within the chaotic phase, this rule is violated if we enter a regular dynamic attractor.
Note that as the original problem is stated exactly, and our method allows for full analytical treatment of the slow variables, there are no fitting parameters in any of the curves we are comparing against throughout the numerical study. We use $ c=5\times 10^4,\, b=0.1 $ for all simulations, and to emphasize the effects from fluctuations of the fast variables, we take the actual thermal bath to be at a vanishingly low temperature $T_0 \sim 0$, unless otherwise stated.
\begin{figure}
\includegraphics[width=0.4\textwidth]{cart_charact.pdf}
\caption{a. schematic of the kicked-rotor-on-a-cart toy model. b. numerical realizations of typical cart trajectories over time for driving force below ($ K=3 $: regular / ordered regime) and above ($ K=8 $: chaotic regime) the ordering transition (at $ K_c \sim 5 $), along with samples of the corresponding fast $ (\theta,v) $ dynamics. c,d,e. average force, fluctuations, and dissipation rates in the cart dynamics, measured from trajectories as in panel (b), for the various values of $ K $, along with analytical predictions (in black) from eq.\ref{eq:cart_result}} \label{fig:cart_charact}
\end{figure}
While fig. \ref{fig:cart_charact} shows agreement of one- and two-point functions of cart position with our analytical prediction, we have yet to check that the fast dynamics can really be approximated by an effective thermal bath. One convincing way to do this is to introduce a non-trivial potential landscape $U(x)$ acting on the cart's position $ x $, and check the resulting steady-state distribution $p(x)$ against Boltzmann statistics at the predicted temperature $ T_{eff} $. Figure \ref{fig:distributions}a shows the agreement between the histogram produced by this simulation and the curve for the expected Boltzmann distribution.
To see that the $ T_{eff}(x) $ landscape remains the appropriate description even for non-uniform $ K(x) $, we can calculate the steady-state distribution in a $K(x)$ landscape, now letting $U(x)=0$. The expected distribution for free diffusion in a temperature landscape can easily be found using, e.g., Fokker-Planck equation, and gives $p(x)\propto 1/T(x)$ (note that this arises precisely because our effective slow dynamics have It\^o multiplicative noise -- for Stratonovich it would be $ 1/\sqrt{T(x)} $). This is well confirmed by simulations in fig. \ref{fig:distributions}b, thus showing that at least in the steady-state, probability density does indeed collect in low-temperature regions.
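The $p(x)\propto 1/T(x)$ prediction can be reproduced directly from the effective slow equation alone, without simulating the rotor. Below is a sketch of free It\^o diffusion on a periodic track with an assumed temperature landscape (all parameter values are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
c = 10.0                                          # slow (cart) damping coefficient
T = lambda x: 1.0 + 0.5 * np.sin(2 * np.pi * x)   # assumed temperature landscape
dt = 1e-3
x = rng.random(2000)                              # ensemble of carts, uniform start
for _ in range(20000):
    # Ito Euler-Maruyama: noise amplitude evaluated at the pre-step position
    x = (x + np.sqrt(2.0 * T(x) * dt / c) * rng.standard_normal(x.size)) % 1.0

# Steady state should follow p(x) ~ 1/T(x): mass collects where T is low
frac_low_T = (x > 0.5).mean()    # T < 1 on the (0.5, 1) half of the track
frac_high_T = 1.0 - frac_low_T
```

The low-temperature half of the track ends up with roughly twice the occupancy of the high-temperature half, as expected from $p\propto 1/T$.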
\begin{figure}
\includegraphics[width=0.5\textwidth]{distributions.pdf}
\caption{Histograms (grey) of the steady-state cart positions in simulation with: a. constant $K(x)=K$ and in a potential landscape $U(x)$ plotted in blue (black curve gives the expected Boltzmann distribution); b. $U(x)=const$ and $K(x)$ landscape in red (black curve shows $ 1/T_{eff}(x) $ -- solution of the Fokker-Planck equation) c. $U(x)$ as in a. and $ K(x) $ step-function (analytical prediction plotted in black is given by eq.\ref{eq:2-T-wells}). d. $U(x)=const$ and $K(x)$ as in b., but shifted down to dip below the critical $ K_c\sim 5 $ value -- dotted red line (black curve again shows $ 1/T_{eff}(x)$ outside of ordered region) -- this shows that probability gets localized at the two transition points at long times.} \label{fig:distributions}
\end{figure}
The last natural test that we mention here is to see how the $ T_{eff}(x) $ landscape can counteract the forces of $ U(x) $ -- specifically, changing the relative stability of the minima in a double-well potential. This setup is shown in fig. \ref{fig:distributions}c, where the higher-energy potential well is stabilized by having a lower $ T_{eff} $. The numerical result is correctly predicted by the steady-state solution of the Fokker-Planck equation with the expected effective temperatures in each well (labels $L$ and $R$ denote left and right wells respectively), as shown in fig. \ref{fig:distributions}c:
\begin{align}
p(x) = & \frac{1}{Z}\begin{cases}
\frac{1}{T_L}\e{-U(x)/T_L} \qquad &x<0 \quad (L)\\
\frac{1}{T_R}\e{-U(x)/T_R+\Delta} \qquad &x>0 \quad (R)
\end{cases} \label{eq:2-T-wells}\\
& \Delta \equiv U(0)\(\frac{1}{T_R} - \frac{1}{T_L}\) \nonumber
\end{align}
In the limit of a discrete jump process between the two wells (wells with equal internal entropy separated by a high barrier), this exact solution becomes well approximated by that obtained from current-matching with the expected jump rates: $r_\rightarrow = \e{-(U(0)-U_L)/T_L}$ and $r_\leftarrow = \e{-(U(0)-U_R)/T_R}$. The key non-equilibrium feature in these solutions is the dependence of the probabilities in either well on the barrier height $U(0)$ via $\Delta$ -- for higher barriers the temperature difference becomes more important. This example gives the first non-trivial application of thermodynamic intuition from the $ T_{eff}(x) $ landscape to the solution of our non-equilibrium system. Projecting this concept onto a broader context, we note that this setup is a particular realization of the class of iterative-annealing problems (e.g., chaperoned protein folding \cite{todd1996chap_protein_fold}).
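For concreteness, the current-matching estimate reads (a sketch under the stated equal-internal-entropy, high-barrier assumptions):

```latex
% Zero net current between the wells:  r_-> p_L = r_<- p_R
\begin{align*}
\frac{p_L}{p_R} \;=\; \frac{r_\leftarrow}{r_\rightarrow}
 \;=\; \exp\!\left[\frac{U(0)-U_L}{T_L}-\frac{U(0)-U_R}{T_R}\right]
 \;=\; e^{-U_L/T_L}\; e^{\,U_R/T_R}\; e^{-\Delta},
\end{align*}
```

which agrees with eq.\ref{eq:2-T-wells} evaluated at the two well bottoms, up to the $T_R/T_L$ prefactor accounting for the wells' internal entropy.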
\subsection{Least rattling}
\begin{figure}
\includegraphics[width=0.5\textwidth]{cart_drift.pdf}
\caption{Typical cart trajectories in linear $ K(x) $ landscape ($ U(x)=const $) all starting from one point, along with their mean (purple) and median (brown). Black curve shows the analytical prediction for the median, while mean is expected to be constant at small times. Inset shows the regularization transition at $ K_c \sim 5 $, where effective temperature drops abruptly to 0, and median departs from the smooth decay. $ x $-axis is labelled in units of $ K $} \label{fig:drift}
\end{figure}
Having confirmed the steady-state and thermal properties of the slow behaviors, we next want to look at the predictive power of our formalism for transient behaviors and currents, again in the presence of inhomogeneous fast dynamics. The first example we consider is transient cart motion in linearly varying $K(x)=\kappa\,x$. The simulation results are shown in fig. \ref{fig:drift}. As mentioned above, free diffusion in a temperature gradient results in a median drift to low $T$, as observed here. Explicitly, the slow dynamics in this case $ c\,\dot{x}=\frac{\kappa}{2\sqrt{2}}x \cdot \xi $ can be solved exactly to give $ \ln x(t) = \ln x_0 -\(\frac{\kappa}{4c}\)^2\,t + \frac{\kappa}{2\sqrt{2}\,c} \, \mathcal{N}(0,t) $ (with $ \mathcal{N} $ giving the normal distribution with variance $ t $), from which we can read off the mean $ x(t) = x_0 $ and median $ x = x_0\,\exp{-t\,\(\kappa/4c\)^2} $ behaviors. The latter is plotted in black in fig.\ref{fig:drift} and well reproduces the simulation result in brown. Note that for any finite ensemble of trajectories, or for a bounded system, the mean will eventually go to low temperatures as well, but not as cleanly or predictably -- so the constant mean value is not practically realizable at long times. The inset focuses on the crossover into regular dynamics, where we see that the symmetry-broken drift-force $ \<F_x\> $ can either take the cart to the $ K=0 $ absorbing state (as detailed in the further inset), or back out into the chaotic regime. In the latter case, the cart typically diffuses back down to the transition again. The resulting oscillations cause a (transient) accumulation of probability around the critical point, giving a peculiar realization of self-organized criticality. This critical region itself is also interesting as the correlations of the fast variables persist for long times, and can thus break the time-scale separation assumption -- but this will have little effect on the global system behavior. 
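The mean/median dichotomy can be checked directly from the exact It\^o solution, without integrating the SDE numerically (a sketch with illustrative values of $\sigma$ and $t$, using $x(t)=x_0\exp(-\sigma^2 t/2 + \sigma W_t)$ for $dx=\sigma x\,dW$):

```python
import numpy as np

rng = np.random.default_rng(1)
x0, sigma, t, N = 1.0, 1.0, 1.0, 20000
# Exact Ito solution of dx = sigma * x * dW (zero-drift multiplicative noise):
#   x(t) = x0 * exp(-sigma^2 t / 2 + sigma * W_t)
W = np.sqrt(t) * rng.standard_normal(N)
x = x0 * np.exp(-0.5 * sigma**2 * t + sigma * W)

mean_x = x.mean()        # stays near x0 (exactly x0 in expectation)
median_x = np.median(x)  # decays as x0 * exp(-sigma^2 t / 2)
```

The ensemble mean stays pinned at $x_0$ while the median decays exponentially, exactly as in fig.\ref{fig:drift}.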
The overall takeaway here is the emergent ``least rattling:'' slow dynamics drift towards regions where fast ones are less stochastic.
To further illustrate the importance of the regularization transition on the slow dynamics, we consider the probability distribution $p(x)$ in the presence of $K(x)$ landscape (and no potential $U=0$), as in fig. \ref{fig:distributions}b., but shifted down such that it dips slightly below the regularization transition at its lowest point -- fig. \ref{fig:distributions}d. The resulting small zero-temperature region in $x$, corresponding to integrable fast dynamics, becomes absorbing, collecting most of the probability density over time (see fig. \ref{fig:distributions}d). Note again that probability accumulates at the critical transition points, giving the two-pronged shape. We stress here the observed sharp localization transition of the steady-state distribution as soon as some regular regime of the fast dynamics becomes accessible -- i.e., the slow variables find the regularized region even if it requires some fine-tuning. (This trade-off between least-rattling and entropic forces can be made quantitative.)
\subsection{Anomalous diffusion}
The last example we present shows that besides an effective temperature landscape, the regular dynamical phase accessible to this model can give rise to apparent anomalous diffusion. To begin, Fig. \ref{fig:pump}a shows an implementation of the B\"uttiker--Landauer ratchet using our model: periodic $ U(x) $ and $ K(x) $ landscapes, with a relative phase-offset of $ \pi/2 $, drive a steady-state pumped current, in this case to the right. Intuitively, this happens because a higher effective temperature in the right half of the potential well makes it easier for the cart to overcome the right potential barrier than the left one. The interesting behavior appears when we shift the $ K(x) $ wave downward to straddle the transition point at $ K \sim 5 $ (fig. \ref{fig:pump}b). In this case the pumped current reverses direction and becomes an order of magnitude \emph{larger} -- even if we reduce the amplitude of the $ K(x) $ variation. To understand this, it helps to look at some typical realizations of barrier-crossing trajectories at the bottom of fig. \ref{fig:pump}. While in panel a. transitions are achieved by stochastic fluctuations that are exponentially suppressed by the Boltzmann factor, in panel b., these are achieved by a directed symmetry-broken drift force $ \<F\> > \partial_x U$, and thus the crossing probability is just the probability of the fast dynamics finding the appropriate regular attractors. These ballistic-like trajectories of the cart in the regular regime can be usefully thought of as anomalous super-diffusion with exponent $ \alpha =2 $. Also, insofar as the barrier crossing becomes easier as we lower $ K(x) $ through the critical value, we can say that the diffusion becomes stronger, thus showing non-monotonicity with $ K $ -- reminiscent of the findings in \cite{spiechowicz2017non-monot_diffusion}.
\begin{figure*}
\includegraphics[width=1\textwidth]{pumping.pdf}
\caption{Simulated steady-state cart-position distributions for the shown $ U(x) $ (blue) and $ K(x) $ (red) landscapes ($ x $ is periodic). These result in a pumped steady-state current (block arrows), with the typical barrier-crossing trajectories shown at the bottom. Unlike in all the above simulations, thermal bath temperature $ T_0 = 10^{-4} > 0 $ was taken in these to smooth out the critical behavior. Straddling the critical point with $ K(x) $ in panel b produces a ten-fold larger (and reversed) current, even for smaller absolute variation in $ K(x) $} \label{fig:pump}
\end{figure*}
\section{Discussion} \label{sec:discussion}
The equilibrium partition function that is computed for the Boltzmann distribution is a powerful formal tool for making predictive calculations in thermally fluctuating systems. Its success stems from two key simplifying assumptions: first, that energy only enters or leaves the system of interest in the form of heat exchanged at a single temperature, and second, that the system and surrounding heat bath uniformly sample joint states of constant energy. This latter ergodic assumption essentially amounts to eliminating time from the picture, so that energy and probability become interchangeable.
The nonequilibrium scenario is generally less tractable than its equilibrium counterpart both because time has not been eliminated from our description of the system, and also because energy is permitted to enter and leave the system via different couplings to the external environment. Thus, the specific approach to modelling some nonequilibrium systems we have described here seeks to recover some of the desirable advantages of the equilibrium description by exploiting time-scale separation in two ways: first, by only allowing nonequilibrium drives to couple to a fast subset of variables, and second, by ``partially removing'' time from the picture by replacing the fast variables with a timeless thermal bath approximation. This ``conveys'' the entire time-dependence of the problem into the resulting effective slow dynamics.
Adopting such an approach by no means recovers the simplicity of the equilibrium picture; however, it does give rise to a relatively tractable effective description of the dynamics. As we have seen, slow variables in such a scenario not only experience a mean force landscape from the steady-state of the fast variables, but are also expected to drift in the direction of decreasing fictitious temperature set by the fast-variable force fluctuations. Crucially, the latter effect is non-ergodic, thus capturing the breaking of ergodicity typical of driven dynamics in a simple and tractable picture.
We have established that this effective picture is quantitatively predictive of the diffusive and stationary behavior of distributions for such slow variables in a simple rotor-on-cart toy model. The tendency of such systems to gravitate to values of slow variables that reduce the effective temperature of fast ones suggests an interesting relationship between dissipation and kinetic stability in driven systems. Although nonequilibrium steady-states are not in general required to be extrema of the average dissipation rate, it is true that the minimum required dissipation to maintain an effective temperature scales with $ T_{eff} $. Accordingly, there may be a subset of systems where the drift to lower effective temperature is indeed accompanied by a drop in dissipation. However, for cases where dissipation instead goes to maintaining dynamically regular motions, steady-state behavior might be dominated by a highly dissipative, stable attractor of low $T_{eff}$.
Moreover, if fast variables can undergo a dynamical ordering transition that is controlled by the slow coordinates, the corresponding drop in effective temperature can be dramatic. This opens up the intriguing possibility that dynamical ordering in fast variables might serve as a mechanism for the long-term kinetic stability of slow variables. Furthermore, if dynamical ordering can only occur for rare, finely-tuned choices of slow variables, this stability could appear as a tendency toward self-organized fine-tuning in the slow-variable dynamics.
Accordingly, we suggest that an interesting future set of applications for the least rattling approach may lie in the active matter setting, where it is frequently the case that coarse-grained macroscopic features of active particle mixtures relax more slowly than the strongly driven microscopic components. The diversity of self-organized dynamically-ordered collective behaviors exhibited by such systems is well-known \cite{schaller2010flock_fillament}, and it may be useful to characterize these behaviors in terms of their possibly fine-tuned relationships to driven force fluctuations on the microscopic level. Future work must focus on generalizing our current approach to modelling the dynamics of such coarse-grained variables.
\section{Introduction}
\begin{figure*}[th]
\centering
\includegraphics[width=0.45\linewidth]{figures/teaser_fp}
\enskip
\includegraphics[width=0.45\linewidth]{figures/teaser_bp}
\caption{\normalfont (Left) Forward
projection enables users to: (a) select any data point $\mathbf{x}$, (b)
interactively change its high-dimensional feature values, and (c) observe
the change $\Delta \mathbf{y}$ in the point's two-dimensional projection.
(Right) Backward projection enables users to: (a) select any node
corresponding to the two-dimensional projection of a data point
$\mathbf{x}$, (b) move the node arbitrarily in the plane, and (c) observe the change
$\Delta \mathbf{x}$ in the point's high-dimensional feature values. \label{fig:teaser}
}
\end{figure*}
Dimensionality reduction (DR) is an effective technique for analyzing and
visualizing high-dimensional datasets across domains, from sciences to
engineering. Dimensionality-reduction algorithms such as principal component
analysis (PCA) and multidimensional scaling (MDS) automatically reduce the
number of dimensions in data while maximally preserving structures, typically
quantified as similarities, correlations or distances between data points. This
makes visualization of the data possible using conventional spatial techniques.
For example, analysts generally use scatter plots to visualize the data after
reducing the number of dimensions to two, encoding the reduced dimensions in a
two-dimensional position.
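As a concrete illustration of the PCA case, the two scatter-plot axes are simply the top two right-singular directions of the centered data matrix. The sketch below uses synthetic data (the dimensions and variance scales are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(2)
# Synthetic data: two high-variance directions plus low-variance noise dims
X = rng.standard_normal((500, 10)) * np.array([5.0, 3.0] + [0.5] * 8)

Xc = X - X.mean(axis=0)                     # center the data
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
Y = Xc @ Vt[:2].T                           # 2-D coordinates for the scatter plot
explained = s[:2]**2 / (s**2).sum()         # share of variance on the two axes
```

Plotting the rows of `Y` gives the familiar PCA scatter plot; `explained` quantifies how much structure the two axes preserve.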
\noindent{\bf DR Challenges:} Most DR (also called manifold learning or
distance embedding) algorithms are driven by complex numerical optimizations.
Dimensions derived by these methods generally lack clear, easy-to-interpret
mappings to the original data dimensions, forcing users to treat DR methods as
black boxes. In particular, data analysts with limited experience in DR have
difficulty in interpreting the meaning of the projection axes and the position
of scatter plot nodes~\cite{Sedlmair_2013, Brehmer_2014}. \textit{What do the
axes mean?} is probably users' most frequent question when looking at scatter
plots in which points (nodes) correspond to dimensionally reduced data. Most
scatter-plot visualizations of dimensionally reduced data are viewed as static
images. One reason is that tools for computing and plotting these
visualizations, such as Matlab and R, provide limited interactive exploration
functionalities. Another reason is that few interaction and visualization
techniques that go beyond brushing-and-linking or cluster-based coloring to
allow dynamic reasoning with these visualizations.
\noindent{\bf Enriching User Experience with DRs:} In this paper, we introduce
two interactions, \fp and \bp (Figure~\ref{fig:teaser}), to help analysts explore
and reason about scatter plot representations of dimensionally reduced data,
facilitating a dynamic what-if analysis. We contribute two related
visualization techniques, \pl and \fm, to facilitate the effective use of the
proposed interactions. We also introduce Praxis, a new interactive DR exploration
tool implementing our interaction and visualization techniques for data analysis.
Our techniques enable users to interactively explore: 1) the most
important features that determine the vertical and horizontal axes of
projections, 2) how changing feature values (dimensions) of a data point
changes the point's projected location (two-dimensional representation) and 3)
how changing the projected position of a data point changes the high-dimensional
values of that point. We demonstrate our techniques first using a
PCA-based linear DR and then a nonlinear, deep
autoencoder-based~\cite{hinton2006reducing} DR.
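For the linear PCA case both interactions have a closed form, sketched below on synthetic data (for the autoencoder the analogous roles would be played by the encoder and decoder): forward projection maps a feature change $\Delta \mathbf{x}$ through the projection matrix, and backward projection returns the minimum-norm feature change realizing a planar move $\Delta \mathbf{y}$.

```python
import numpy as np

rng = np.random.default_rng(3)
X = rng.standard_normal((200, 6))
V = np.linalg.svd(X - X.mean(axis=0), full_matrices=False)[2][:2]  # 2 x 6 projection

def forward(dx):
    """Forward projection: 2-D node displacement for a feature change dx."""
    return V @ dx

def backward(dy):
    """Backward projection: minimum-norm feature change realizing a planar
    move dy. V has orthonormal rows, so its pseudoinverse is V.T."""
    return V.T @ dy

dy = forward(np.array([1.0, 0, 0, 0, 0, 0]))   # nudge feature 0, watch the node move
dx = backward(np.array([0.3, -0.2]))           # drag the node, recover a feature change
```

Since $V V^T = I$, dragging a node and forward-projecting the recovered feature change reproduces the drag exactly.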
We assess the computational effectiveness of our methods by analyzing their
time and accuracy under varying numbers of samples and dimensions. We
then conduct a user study in which twelve data scientists performed
exploratory data analysis tasks using Praxis. The results suggest that our
visual interactions are scalable and intuitive and can be effective for
exploring dimensionality reductions and generating hypotheses about the
underlying data.
We also observe that our techniques belong to a class of interactions that
bidirectionally couple the data and its visual representation: \emph{dynamic
visualization interactions}~\cite{Victor_2013}. We look at dynamic
visualization interactions under the visual embedding
model~\cite{Demiralp_2014} and discuss the properties of effective
interactions that the model suggests.
\section{Related Work}
Our work is related to prior efforts in understanding and
improving user experience with dimensionality reductions.
\subsection{Direct Manipulation in DR}
Direct manipulation has a long history in human-computer
interaction~\cite{sutherland1964sketchpad,kay1977personal,borning1981programming}
and visualization research (e.g.~\cite{shneiderman1982direct}). Direct
manipulation techniques aim to improve user engagement by minimizing the
\textit{perceived} distance between the interaction source and the target
object~\cite{hutchins1985direct}.
Developing direct manipulation interactions to guide DR formation and modify
the underlying data is a focus of prior research~\cite{Buja_2008,Endert_2012,
Gleicher_2013,Jeong_2009,Johansson_2009,Williams_2004}. For example,
X/GGvis~\cite{Buja_2008} supports changing the weights of dissimilarities input
to the MDS stress function along with the coordinates of the embedded points
to guide the projection process. Similarly, iPCA~\cite{Jeong_2009} enables
users to interactively modify the weights of data dimensions in computing
projections. Endert \etal \cite{Endert_2011} apply similar ideas to additional
dimensionality-reduction methods while incorporating user feedback through
spatial interactions in which users can express their intent by dragging points
in the plane.
Our work is closely related to earlier approaches using direct manipulation to
modify data in DR visualizations~\cite{Jeong_2009,
viau2010flowvizmenu,Schreck_2009,crnovrsanin2009proximity,mamani2013user}.
Like our \fp and unconstrained \bp techniques, iPCA enables interactive forward
and backward projections for PCA-based DRs. However, iPCA recomputes the full
PCA for each forward and backward projection, which can suffer from jitter and
scalability issues, as noted in~\cite{Jeong_2009}. Using out-of-sample
extrapolation, \fp avoids re-running dimensionality-reduction algorithms. From
the visualization point of view, this is not just a computational convenience,
but also has perceptual and cognitive advantages, such as preserving the
constancy of scatter-plot representations. For example, re-running (training)
a dimensionality reduction algorithm with a new data sample added can
significantly alter a two-dimensional scatter plot of the dimensionally reduced
data, even though all the original inter-data point similarities may remain
unchanged. In contrast to iPCA, we also enable users to interactively define
constraints on feature values and perform constrained \bp.
We refer readers to a recent survey~\cite{sacha2017visual} for a detailed
discussion of prior research on visual interaction with dimensionality
reduction.
\subsection{Visualization in DR Scatter Plots}
Prior work introduces various visualizations in planar scatter plots of DRs, in
order to improve the user experience by communicating projection
errors~\cite{Chuang_2012,Stahnke_2016,Aupetit_2007,Lespinats_2010}, changes in
projected positions~\cite{Jeong_2009}, data properties and
clustering results~\cite{Stahnke_2016,clustrophile:idea16}, and contributions
of original data dimensions in reduced dimensions~\cite{gabriel1971biplot}.
Low-dimensional projections are generally lossy representations of the
original data relations; therefore, it is useful to convey both overall and
per-point dimensionality-reduction errors to users when desired. Researchers
visualized errors in DR scatter plots using Voronoi
diagrams~\cite{Aupetit_2007,Lespinats_2010} and corrected (undistorted) the
errors by adjusting the projection layout with respect to the examined
point~\cite{Chuang_2012,Stahnke_2016}.
Biplot was introduced~\cite{gabriel1971biplot} to visualize the magnitude and
sign of a data attribute's contribution to the first two or three principal
components as line vectors in PCA. \Pl reduce to biplots when PCA is used for
dimensionality reduction. Our \textit{proline} construction algorithm is
general and reflects the underlying out-of-sample extension method used. On the
other hand, biplots are based on singular-value decomposition and always use
PCA forward projection, regardless of the actual DR used. Additionally, \pl
differ from biplots in being interactive visual objects rather than static
vectors, and they are decorated to communicate distributional characteristics
of the underlying data point.
Stahnke \etal~\cite{Stahnke_2016} use a grayscale map to visualize how a
single attribute value changes between data points in DR scatter
plots. We use \fm, a grayscale map, to visualize the feasible regions in the
constrained \bp interaction.
\subsection{Out-of-sample Extension and Back Projection for DR}
We compute forward projections using out-of-sample extension (or
extrapolation)~\cite{Maaten_2009}. Out-of-sample extension is the projection of
a new data point into an existing DR (e.g., a learned manifold model) using only
the properties of the already computed DR. It is conceptually equivalent to
testing a trained machine-learning model with data that was not part of the
training set. For linear DR methods, out-of-sample extension is often
performed by applying the learned linear transformation to the new data point.
For autoencoders, the trained network defines the transformation from the
high-dimensional to low-dimensional data representation~\cite{Bengio_2004}.
Back or backward projection maps a low-dimensional data point back into the
original high-dimensional data space. For linear DRs, back projection is
typically done by applying the inverse of the learned linear DR mapping. For
nonlinear DRs, earlier research proposed DR-specific backward-projection
techniques. For example, iLAMP~\cite{dos2012ilamp} introduces a back-projection
method for LAMP~\cite{joia2011lamp} using local neighborhoods and demonstrates
its viability over synthetic datasets~\cite{dos2012ilamp}. Researchers also
investigated general backward projection methods using radial basis
functions~\cite{monnig2014inverting,amorim2015facing}, treating backward
projection as an interpolation problem.
Autoencoders~\cite{hinton2006reducing}, neural-network-based DR models, are a
promising approach to computing backward projections. An autoencoder model
with multiple hidden layers can learn a nonlinear dimensionality reduction
function (encoding) as well as the corresponding backward projection
(decoding) as part of the DR process. Inverting DRs is, however, an ill-posed
problem, since many high-dimensional points can map to the same planar
position. In addition to augmenting what-if analysis, the ability to define
constraints over a back projection can also ease the computational burden.
Praxis also enables users to interactively set equality and boundary
constraints over back projections through an intuitive interface.
We presented initial versions of \fp, \bp, and \pl earlier as part of
Clustrophile, an exploratory visual clustering analysis
tool~\cite{clustrophile:idea16}. We give here a focused discussion of our
revised interaction and visualization techniques, introduce Praxis, a new
visualization tool that implements our techniques for exploratory data analysis
using DR, and provide a thorough computational and user-performance evaluation.
The current work also introduces \fm, a new visualization technique to
facilitate \bp interactions.
\section{Interacting with Linear Dimensionality Reductions}
We demonstrate our methods first on principal component analysis (PCA), one of
the most frequently used linear dimensionality-reduction techniques; note that
the discussion here applies as well to other linear dimensionality-reduction
methods. PCA computes (learns) a linear orthogonal transformation of the
empirically centered data into a new coordinate frame in which the axes
represent maximal variability. The orthogonal axes of the new coordinate frame
are called principal components.
To reduce the number of dimensions to two, for example,
we project the centered data matrix, rows of which correspond to data
samples and columns to features (dimensions), onto the first two principal
components, $\mathbf{e_0}$ and $\mathbf{e_1}$.
Details of PCA along with its many formulations and interpretations can be
found in standard textbooks on machine learning or data mining (e.g.,
\cite{Bishop_2006,Hastie_2005}).
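As a concrete sketch of this step (using NumPy; the function and variable names are ours, not Praxis's actual implementation), the two-dimensional PCA projection can be computed from the singular value decomposition of the centered data:

```python
import numpy as np

def pca_project(X, k=2):
    """Project the rows of X onto the first k principal components.

    Returns the projected coordinates Y, the component matrix E
    (columns e_0, e_1, ...), and the column means used for centering.
    """
    mu = X.mean(axis=0)
    Xc = X - mu                               # empirically center the data
    # Principal components are the right singular vectors of Xc.
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    E = Vt[:k].T                              # shape: (n_features, k)
    Y = Xc @ E                                # planar coordinates
    return Y, E, mu
```

The component matrix $\mathbf{E}$ returned here is the same $\begin{bmatrix}\mathbf{e_0} & \mathbf{e_1}\end{bmatrix}$ used by the forward and backward projections discussed next.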
\subsection{Forward Projection}
\Fp enables users to interactively change the feature values of a data point
$\mathbf{x}$ and observe how these hypothesized changes in data modify the
current projected location $\mathbf{y}$ (Figure~\ref{fig:fpinaction}). This is
useful because understanding the importance and sensitivity of features
(dimensions) is a key goal in exploratory data analysis. In the case
of PCA, we obtain the two-dimensional position change vector
$\Delta\mathbf{y}$ by projecting the data change vector $\mathbf{x^\prime}$
onto the principal components: $\Delta\mathbf{y} = \Delta
\mathbf{x}\;\mathbf{E}$, where $\mathbf{E} = \begin{bmatrix}\mathbf{e_0} &
\mathbf{e_1} \end{bmatrix}$.
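In code, the forward projection for a linear DR is a single matrix product. A minimal sketch (assuming, as above, that `E` holds the two principal components as columns; names are ours):

```python
import numpy as np

def forward_project(delta_x, E):
    """Map a hypothesized change in feature values to a change in the
    projected position: delta_y = delta_x @ E.

    delta_x : (n_features,) change applied to the data point x
    E       : (n_features, 2) principal components [e_0 e_1] as columns
    """
    return delta_x @ E
```

The new planar position of the point is then simply `y + forward_project(delta_x, E)`.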
\begin{figure}[t]
\centering
\includegraphics[width=\linewidth]{figures/forward}
\caption{\normalfont Through forward projection, a user can quickly explore how much the
difference in the {\fontfamily{cmss}\selectfont StudentSkills} index value explains the planar projection
difference between Portugal (blue node) and Korea.\label{fig:fpinaction}}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[width=\linewidth]{figures/prolines}
\caption{ \normalfont Proline construction. For a given dimension (feature) ${x}_i$ of
a point $\mathbf{x}$ in a dataset explored, we construct a proline by
connecting the forward projections of data points regularly sampled from a
range of $\mathbf{x}$ values, where all features are fixed but ${x}_i$
varies. A proline also encodes the forward projections for the ${x}_i$ values
in $\left[\mu_i-\sigma_i, \mu_i+\sigma_i\right]$ with thick green and red
line segments, providing a basic directional statistical context. $\mu_i$ is
the mean of the $i$th dimension in the dataset, the green segment represents
forward projections for ${x}_i$ values in $\left[{x}_i,
\mu_i+\sigma_i\right]$ and the red segment for ${x}_i$ values in
$\left[\mu_i-\sigma_i, {x}_i\right]$.\label{fig:pl}}
\end{figure}
%
%
%
%
\begin{figure}[t]
\centering
\includegraphics[width=\linewidth]{figures/forward_prolines}
\caption{
\normalfont Forward projection with prolines. StudentSkills is
revealed as a key feature differentiating
Portugal from Korea. Observe that a value of 515 for StudentSkills would
be reasonable with respect to the feature
distribution ($\mu_i < 515 < \mu_i+\sigma_i$), but not enough to make
Portugal close to Korea in the projection plane. By visually comparing the
lengths (variability) of different proline paths, the user can easily
recognize which dimensions contribute most to determining the position of
points in the dimensionally reduced space. For instance, a change in the
feature value of WorkingLongHours (the shortest proline) would produce only a
very small change in the projection.\label{fig:fp_prolines}}
\end{figure}
\subsection{Prolines: Visualizing Forward Projections}
It is desirable to see in advance what forward projection paths look like for
each feature. Users can then start inspecting the dimensions that look
interesting or important.
\Pl visualize forward projection paths using a linear range of possible values
for each feature and data point (Figure~\ref{fig:pl}). Let $\mathbf{x}_i$ be
the value of the $i$th feature for the data point $\mathbf{x}$. We first
compute the mean $\mu_i$, standard deviation $\sigma_i$, minimum $min_i$ and
maximum $max_i$ values for the feature in the dataset and devise a range $ I =
\left[min_i, max_i\right]$. We then iterate over the range with step size
$c\sigma_i$, compute the forward projections as discussed above, and then
connect them as a path. The constant $c$ controls the step size with which we
iterate over the range and is set to $c=1/8$ (i.e., a step of $\sigma_i/8$)
for the examples shown here.
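The construction above can be sketched as follows (a simplified version in NumPy; names and defaults are ours, and the actual Praxis implementation may differ):

```python
import numpy as np

def proline(x, i, X, E, mu, c=1/8):
    """Sample the forward-projection path obtained by varying feature i.

    x  : (n_features,) the inspected data point
    X  : (n_samples, n_features) dataset, used for feature statistics
    E  : (n_features, 2) principal components; mu : column means
    Returns an (n_steps, 2) array of planar points, connected as a path.
    """
    sigma_i = X[:, i].std()
    lo, hi = X[:, i].min(), X[:, i].max()
    values = np.arange(lo, hi + 1e-9, c * sigma_i)  # step size c * sigma_i
    path = []
    for v in values:
        x_v = x.copy()
        x_v[i] = v                     # all features fixed except x_i
        path.append((x_v - mu) @ E)    # forward projection of the sample
    return np.asarray(path)
```

For a linear DR such as PCA the resulting path is a straight segment (each step displaces the projected point by the same planar vector); the sampling scheme is what lets the same construction carry over to nonlinear out-of-sample extensions.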
In addition to providing an advance snapshot of forward projections, \pl can be
used to provide summary information conveying the relationship between the
feature distribution and the projection space. We display along each
\textit{proline} a small light-blue circle indicating the position that the
data point would assume if it had a feature value corresponding to the mean of
its distribution; similarly, we display two small arrows indicating a variation
of one standard deviation ($\sigma_i$) from the mean ($\mu_i$). The segment
identified by the range $\left[\mu_i - \sigma_i, \mu_i +
\sigma_i\right]$ is highlighted and further divided into two segments. The
green segment shows the positions that the data point would assume by
increasing its feature value; the red one indicates a decreasing value. This
enables users to infer the relationship between the feature space and the
direction of change in the projection space.
\begin{figure}[t]
\centering
\includegraphics[width=\linewidth]{figures/backward_2}
\caption{\normalfont (Top) Unconstrained backward projection usage:
Curious about the projection difference between
Turkey and Italy, similar countries in some respects, the user moves the node
associated with Turkey (blue circle) towards Italy (a). The feature values of
Turkey are automatically updated (b) to satisfy the new projected position as
the node is moved. (Bottom) Constrained backward projection usage. Considering
that the features {\fontfamily{cmss}\selectfont LifeExpectancy},
{\fontfamily{cmss}\selectfont SelfReportedHealth} and {\fontfamily{cmss}\selectfont LifeSatisfaction} are unmodifiable,
the user puts a lock (equality constraint) on their values. Through a dedicated
interface (Figure~\ref{fig:praxis}d) the user also sets the upper bound for the
feature {\fontfamily{cmss}\selectfont StudentSkills} to 490 (inequality constraint). When performing
\bp, the feature values of Turkey are updated in order to respect the
user-defined constraints (c). \label{fig:bpinaction}}
\end{figure}
\subsection{Backward Projection}
\Bp as an interaction technique is a natural complement of \fp. Consider the
following scenario: a user looks at a projection and, seeing a cluster of
points and a single point projected far from this group, asks what changes in
the feature values of the outlier point would bring the outlier near
the cluster. Now, the user can play with different dimensions using \fp to
move the current projection of the outlier point near the cluster. It would be
more natural, however, to move the point directly and observe the change.
The formulation of \bp is the same as that of \fp: $\Delta \mathbf{y} = \Delta
\mathbf{x}\;\mathbf{E}$. In this case, however, $\Delta \mathbf{x}$ is unknown
and we need to solve the equation.
As formulated, the problem is underdetermined and, in general, there can be
an infinite number of data points (feature values) that project to the same planar
position. Therefore, we propose both unconstrained and constrained backward
projections, for which users can define equality as well as inequality constraints.
In the case of unconstrained backward projection, we find $\Delta \mathbf{x}$
by solving a regularized least-squares optimization problem:
\begin{equation*}
\begin{aligned}
& \underset{\Delta\mathbf{x}}{\text{minimize}}
& & \|{\Delta\mathbf{x}}\|^2 \\
& \text{subject to}
& & \Delta\mathbf{x}\;\mathbf{E} = \Delta\mathbf{y}
\end{aligned}
\end{equation*}
Note that this is equivalent to setting $\Delta \mathbf{x}=
\Delta\mathbf{y}\;\mathbf{E}^T$.
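Because $\mathbf{E}$ has orthonormal columns, this closed form needs no solver; a minimal sketch (names are ours):

```python
import numpy as np

def back_project(delta_y, E):
    """Minimum-norm feature change realizing a planar displacement.

    Solves  min ||dx||^2  subject to  dx @ E = dy.  Since the columns
    of E are orthonormal (E.T @ E = I), the solution is dy @ E.T.
    """
    return delta_y @ E.T
```

Any other feasible $\Delta\mathbf{x}$ differs from this one by a vector in the null space of $\mathbf{E}^T$ and therefore has a larger norm.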
For constrained backward projection, we find $\Delta \mathbf{x}$
by solving the following quadratic optimization problem:
\begin{equation*}
\begin{aligned}
& \underset{ \Delta\mathbf{x}}{\text{minimize}} & & \|{\Delta\mathbf{x}}\;\mathbf{E} - \Delta\mathbf{y}\|^2 \\
& \text{subject to} & & \mathbf{C}\Delta\mathbf{x} = \mathbf{d}\\
& & & \mathbf{lb} \leq \Delta\mathbf{x} \leq \mathbf{ub}
\end{aligned}
\end{equation*}
$\mathbf{C}$ is the design matrix of equality constraints, $\mathbf{d}$ is the
constant vector of equalities, and $\mathbf{lb}$ and $\mathbf{ub}$ are the vectors
of lower and upper boundary constraints.
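Praxis solves these quadratic programs with CVXOPT; as an illustrative, hedged sketch, the same problem can be set up with SciPy's SLSQP solver (function and variable names are ours):

```python
import numpy as np
from scipy.optimize import minimize

def constrained_back_project(delta_y, E, C, d, lb, ub):
    """Solve  min ||dx @ E - dy||^2
       s.t.   C @ dx = d  and  lb <= dx <= ub  (element-wise)."""
    n = E.shape[0]
    res = minimize(
        lambda dx: np.sum((dx @ E - delta_y) ** 2),          # objective
        x0=np.zeros(n),
        method="SLSQP",
        bounds=list(zip(lb, ub)),                            # lb <= dx <= ub
        constraints=[{"type": "eq", "fun": lambda dx: C @ dx - d}],
    )
    return res.x
```

A dedicated QP solver such as CVXOPT exploits the quadratic structure directly, but the general-purpose formulation above expresses the same optimization.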
\subsection{Guiding Backward Projections}
\begin{figure}[t]
\centering
\includegraphics[width=\linewidth]{figures/bp_prolines}
\caption{\normalfont With the addition of
\textit{projection marks}, \pl can be used as guides while the user performs
\bp. The figure shows the same constrained \bp example as
Figure~\ref{fig:bpinaction}: while the user drags the projected data point,
prolines indicate with a green or red color if their corresponding feature
value is increasing or decreasing. Projection marks, represented as little blue
circles, move along each proline as the user performs \bp and indicate the
current feature values. This is particularly useful for checking the position
of each value with respect to its feature distribution. In the case of
inequality constraints, values that do not satisfy a constraint generate a
black proline. \label{fig:bp_prolines}}
\end{figure}
\noindent{\bf Projection Marks:} It is important to note that, since more than
one data point in the multidimensional space can project to the same position,
forward and backward projections may not always correspond. For this reason, we
add to our prolines visualization a set of \textit{projection marks}
(Figure~\ref{fig:bp_prolines}) dynamically indicating the current value for
each feature while the user performs \bp. At the same time, dragging a data
point highlights the green or red segment of each proline based on the increase
or decrease of each feature, showing which dimensions correlate with each
other. By combining forward projection paths with \bp, the user can infer how
fast each value is changing in relation to its feature distribution.
\begin{figure}[t]
\centering
\includegraphics[width=\linewidth]{figures/constraints1}
\caption{\normalfont Feasibility map. The feasibility map samples the
projection plane through backward projection and shows which regions are not
admissible based on a set of user-defined constraints. Here (a) a user
defines a lower bound for the {\fontfamily{cmss}\selectfont
EducationalAttainment} value through the interface provided (described in
Section~\ref{sec:praxis}) and (b) a dark area is drawn onto the projection
plane, indicating that moving the projected point (Greece) in that
region would break the constraint.\label{fig:fm}}
\end{figure}
\noindent {\bf Feasibility Maps:} Constrained backward projection enables users
to semantically regulate the mapping into unprojected high-dimensional data
space. For example, we do not expect an Age field to be negative or greater
than 150, even though such a value can constitute a better solution in an
unconstrained backward projection scenario.
We propose the \fm visualization as a way to quickly see the feasible space
determined by a given set of constraints. Instead of manually checking if a
position in the projection plane satisfies the desired range of values
(considering both equality and inequality constraints), it is desirable to know
in advance which regions of the plane correspond to admissible solutions. In
this sense, \fm is a conceptual generalization of \pl to the constrained backward projection
interaction.
To generate a \fm, we sample the projection plane on a regular grid and evaluate
the feasibility at each grid point based on the constraints imposed by the user,
obtaining a binary mask over the projection plane. We render this binary mask
over the projection as an interpolated grayscale heatmap, where darker areas
indicate infeasible planar regions (Figure~\ref{fig:fm}). With accuracy
determined by the number of \bp samples, the user can see which positions a
data point can assume in the projection plane without breaking the constraints.
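A simplified sketch of the sampling step (here we test only box constraints against the minimum-norm back projection, whereas the full interaction evaluates the constrained problem; names and defaults are ours):

```python
import numpy as np

def feasibility_map(x0, E, lb, ub, extent=2.0, resolution=20):
    """Binary feasibility mask over a grid of planar displacements.

    Each grid displacement (dy0, dy1) around the point's current
    projection is back-projected via the minimum-norm solution
    dx = dy @ E.T, and the cell is feasible when x0 + dx satisfies the
    box constraints lb <= x <= ub.  The boolean array returned can be
    rendered as an interpolated grayscale heatmap.
    """
    ticks = np.linspace(-extent, extent, resolution)
    mask = np.zeros((resolution, resolution), dtype=bool)
    for r, dy1 in enumerate(ticks):          # vertical displacement
        for c, dy0 in enumerate(ticks):      # horizontal displacement
            dx = np.array([dy0, dy1]) @ E.T  # unconstrained back projection
            x = x0 + dx
            mask[r, c] = bool(np.all(x >= lb) and np.all(x <= ub))
    return mask
```

The `resolution` parameter trades off accuracy against the number of back projections evaluated, matching the accuracy/sample-count trade-off described above.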
Note that, when dealing with linear dimensionality-reduction techniques, the
\fm produces boundaries that are orthogonal to the prolines of the
constrained features. Nevertheless, generating the \fm by sampling the
projection plane has the advantage of being independent of the
dimensionality-reduction technique used.
In \bp, if a data point is dragged to a position that does not satisfy a
constraint, its color and the color of its corresponding projection marks turn
to black. If the user drops the data point in an infeasible position, the point
is automatically moved through animation back to the last feasible position
to which it was dragged (Figure~\ref{fig:fminteraction}).
\begin{figure}
\centering
\includegraphics[width=\linewidth]{figures/constraints2}
\caption{\normalfont Visualizing broken constraints. If a projected point is dragged
through backward projection onto an infeasible region (a), its color and the
color of the projection marks associated with broken constraints turn black.
When the point is released, its position is restored to the last admissible
value computed through backward projection (b).
\label{fig:fminteraction}}
\end{figure}
\section{Praxis}\label{sec:praxis}
To study the usage of our interaction and visualization techniques, we
introduce Praxis, a new interactive tool integrating them. Through a data
panel (Figure~\ref{fig:praxis}b), users can load a dataset in CSV format and
visualize its PCA projection as a scatter plot (Figure~\ref{fig:praxis}a),
using the first two principal components as axes of the projection plane.
Results of forward and backward projection, along with the two visualizations
prolines and feasibility map, are displayed in the projection plane. The id
(name) of a data point is shown on mouse hover, while clicking performs
selection, showing its feature values in a dedicated sidebar panel
(Figure~\ref{fig:praxis}c). In particular, the \textit{Selection Details}
panel is used for performing forward projection (clicking on a dimension makes
its value modifiable) and for inspecting changes in feature values when
backward projection is used. Three buttons enable the user, for each feature,
to 1) reset it to its original value, 2) enable or disable inequality
constraints, and 3) lock its value to a specific number (equality constraint).
Double-clicking the row associated with a feature displays a histogram
representing its distribution below the selected row, showing some basic
statistics (Figure~\ref{fig:praxis}d). The current value of the feature is
represented by a blue line and a cyan line indicates the distribution mean.
Bins of the histogram are colored similarly to \pl: green for increasing
values, red for decreasing values with respect to the original feature value.
Dragging one of the two black handles lets the user set or unset lower and
upper bounds for a feature distribution, thus defining a set of constraints for
a specific data point.
Finally, selecting a data point in the projection plane displays two buttons
that respectively enable 1) resetting its feature values (and position) to
their original value and 2) showing a tooltip on top of its $k$ current
nearest neighbors, in order to facilitate reasoning about the similarity with
other data samples (especially when performing \bp).
Praxis is a web application based on a client-server model. We implemented its
interface using JavaScript with the help of the D3~\cite{d3} and
ReactJS~\cite{react} libraries. A separate analytics server carries out the
computations required by the projection techniques. We implemented the server
in Python, using the SciPy~\cite{scipy}, NumPy~\cite{numpy},
scikit-learn~\cite{scikit-learn} and CVXOPT~\cite{cvxopt} libraries. We solve
the quadratic optimization problems generated by constrained \bp using CVXOPT.
\section{Evaluation}
To evaluate our methods, we first conduct a user study with analysts and then
perform a computational model analysis assessing accuracy and scalability.
\subsection{User Study}
We evaluate user experience with our techniques through a user study with
twelve data scientists. We have two goals. First, to assess the effectiveness
of our projection and visualization techniques in what-if analysis of
dimensionally reduced data. Second, to understand how the use of the
techniques differs with task type and complexity.
\noindent{\bf Participants:} We recruited twelve participants with at least two
years' experience in data science. Their areas of expertise included
healthcare analytics, genomics and machine learning. Participants ranged from
25 to 55 years old, and all had at least a Master's degree in science or
engineering. Ten participants regularly applied dimensionality reductions in
their data analysis, using Matlab, R and Python. All participants were
familiar with using PCA, while six had used MDS before. Four participants cited
additional projection methods that they had previously used, including t-SNE
and autoencoder.
\begin{figure*} \centering
\includegraphics[width=0.9\linewidth]{figures/praxis_interface}
\caption{\normalfont Praxis interface. Praxis is an interactive tool for
DRs, integrating our projection interactions along with the related
visualizations. After importing a dataset and choosing a projection method
(a), a scatter plot is displayed using the reduced two dimensions (b). When
a point is selected, its feature values can be seen and modified from a
table panel (c) that also allows entering constraints for each feature by
double-clicking on a specific row of the table (d). A data table listing
all rows in the dataset is also included (e). \label{fig:praxis}}
\end{figure*}
\noindent{\bf Tasks and Data:} Participants were asked to perform the
following six high-level tasks using Praxis. We used a tabular
dataset~\cite{Stahnke_2016,oecdDataset} containing eight socio-economic
development indices for thirty-four countries belonging to the Organisation for
Economic Co-operation and Development (OECD). The dataset was a CSV file with
34 rows and nine columns, where one column represented country names.
Participants were free to use any combination of interactions and
visualizations to complete a given task.
\begin{enumerate}
\item[T1:] What four development indices contribute the most in determining the
position of points in the projection plane? Can you rank them based on
their relevance?
\item[T2:] Can you explain why Portugal is in that specific position of the
projection plane, distant from the other European countries?
\item [T3:] Suppose Chile has a near-term plan to attain a development level similar to Greece but could increase spending in only one of the
development index areas. On which area would you advise the Chilean
government to focus its resources?
%
\item [T4:] Consider the cluster formed by Turkey and Mexico. Compare it to
the cluster formed by Asian countries.
\item [T5:] Suppose Israel cannot increase its education spending for the
foreseeable future due to budgetary constraints. Would it be reasonable
for the country to attain a development level similar to Canada?
\item [T6:] Given that the Italian government would not allow
WorkingLongHours to increase beyond the distribution mean, say which
countries could be considered similar to Italy if it was able to improve
its StudentSkills index value to 500.
\end{enumerate}
\noindent{\bf Procedure:} The study took place in the
experimenter's office; one user at a time used Praxis running on
the experimenter's laptop. The study had three stages. In the first,
participants were briefed about the experiment
and filled out a pre-experiment questionnaire eliciting information about
their experience in data science and use of dimensionality-reduction
techniques. In the second stage, participants were introduced to the Praxis
interface and to our techniques via a training dataset. Five minutes were
dedicated to showing each user how to perform \fp, \bp and constraints
formulation. Participants were then introduced to a new dataset and asked to
complete the six tasks above. Task duration was manually timed and subject
responses were collected through think-aloud protocol. Participants had at most
two trials to perform each task; in the event of a failure, they moved on to
the next task. For each task we also recorded whether the task was completed
with or without the experimenter's help. In the last stage, participants were
asked to complete a post-experiment questionnaire to gather subjective
feedback.
\begin{table}
\setlength{\tabcolsep}{3pt}
\centering
\begin{tabular}{*{8}{c}}
& \multicolumn{3}{c}{performance} & \multicolumn{4}{c}{techniques used} \\
\cmidrule(lr){2-4}
\cmidrule(lr){5-8}
Task & C & H & I & \emph{forward} & \pl & \emph{backward} & \emph{feasibility} \\
\midrule
T1 & 10 & 2 & 0 & 2 & 12 & 0 & 0 \\
T2 & 12 & 0 & 0 & 0 & 12 & 0 & 0 \\
T3 & 11 & 1 & 0 & 8 & 9 & 4 & 0 \\
T4 & 12 & 0 & 0 & 1 & 12 & 2 & 0 \\
T5 & 11 & 1 & 0 & 2 & 12 & 11 & 0 \\
T6 & 9 & 2 & 1 & 2 & 2 & 2 & 10 \\
\bottomrule
\end{tabular}
\caption{\normalfont Results of user study. Of a total of twelve participants, the table
indicates for each task how many of them completed the task with no help (C),
completed it with help (H) or did not complete the task (I). We also show the
number of participants who used each of our proposed techniques to perform
single tasks. Note that both \pl and \fm can give users visual information
without requiring them to perform \fp or \bp (whereas, for instance, \fp is
intrinsically bound to \pl).\label{tab:results}}
\end{table}
\begin{figure}[h]
\centering
\includegraphics[width=\linewidth]{figures/time_log}
\caption{\normalfont Task completion time. Average log time for participants to
complete the six assigned tasks.\label{fig:time_results}}
\end{figure}
\noindent{\bf Results and Discussion:} We adopt a task performance criterion
similar to that employed in \cite{Stahnke_2016}. For each task, we count the
number of participants who completed the task (C), completed the task with help
(H) or were unable to complete the task (I). Similarly, we also report the
frequency of the interaction (\fp and \bp) and visualization (\pl and \fm)
techniques used by users to complete each task. We list these results in
Table~\ref{tab:results}. Figure~\ref{fig:time_results} shows the average log
time spent on each task, inclusive of the cases in which the participant did
not complete the task. All completed tasks were performed in less than 30
seconds. Task completion times and their standard errors reflect the intrinsic
complexities of the tasks.
Overall, \pl proved to be a simple yet powerful visualization technique for
exploring dimensionally reduced data as well as reasoning about the
dimensionality reduction. \Pl were used 59 times by participants over the
course of the six tasks performed. One participant mentioned he particularly
liked ``how prolines generate meaningful axes in a scatter plot where a clear
mapping to data dimensions is unclear,'' whereas another described prolines as
a ``great way to understand dimensionality reductions, especially for people
who used to treat them as a black box.'' The second most frequently used
technique was \bp (used 19 times), followed by \fp (15 times). Note that the
use of \fp always involves displaying \pl, which incorporates paths of \fp
locations for hypothetical feature values. Participants used \fp when they
wanted to interactively change the feature values and see precisely the
projection change of the corresponding data point. In particular, one
participant declared, ``I feel \bp is more natural to use and useful to see
which features correlate to each other, but I would prefer \fp for more precise
control over feature values.'' Also, despite its lower incidence of use, \fm
was employed by the participants when the task was sufficiently complex (T6).
\begin{figure*}[t]
\centering
\includegraphics[width=0.9\linewidth]{figures/analysis}
\caption{\normalfont Accuracy and time performance results for PCA. We note that
the accuracy and time performance of \fp and \bp are mostly insensitive to the
number of samples and dimensions. Accuracy is instead tied to the amount of
change introduced by the user. The computational time for generating \pl shows
a linear dependence on the number of dimensions. Note that time is displayed in
microseconds in charts (e-h) and in milliseconds in charts (i-l).
\label{fig:analysis}}
\end{figure*}
\subsection{Model Analysis}
Since they are intended for use in interactive applications, the computational
complexity of the proposed techniques needs to adhere to certain responsiveness
requirements. At the same time, forward and backward projection methods need to
be accurate enough at estimating changes in the dimensionally reduced space as
well as in the underlying multidimensional data so as not to lead the user to
false assumptions. We evaluate our proposed techniques for PCA in terms of time
and accuracy over varying numbers of samples and dimensions of the input dataset
and also over the amount of change introduced by the user (i.e. how much a
feature value is modified in the case of \fp, how much a data point is moved in
the case of \bp).
In our evaluation we iteratively perform \fp and unconstrained \bp on
automatically generated Gaussian random multivariate distributions, changing
either the number of data samples or the number of data dimensions and leaving
the other one fixed. We apply our techniques for each data point of the
original distribution. The \fp algorithm is then applied to each dimension with
an amount of change in $\left\{\sigma_i/8, \sigma_i/4, \sigma_i/2,
\sigma_i\right\}$, where $\sigma_i$ is the standard deviation for the current
feature. \Bp is performed in eight possible directions of movement (horizontal
and vertical axes plus diagonals), with an amount of change in $\left\{m/80,
m/40, m/20, m/10\right\}$, where $m$ is the width of the projection plane.
Accuracy and time performances are determined for each execution of the two
techniques and then averaged over all dimensions (directions), data samples and
test iterations. All experimental results presented were generated on a
MacBook Pro, 2015 edition.
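For concreteness, both techniques admit compact closed forms in the PCA case. The following Python sketch is our own minimal reconstruction using standard PCA algebra, not the Praxis implementation; it performs \fp by re-projecting a modified data point and unconstrained \bp by taking the minimum-norm preimage of a 2D location:

```python
import numpy as np

rng = np.random.default_rng(42)
X = rng.normal(size=(200, 10))  # samples x dimensions, as in the evaluation

# PCA via SVD of the centered data
mean = X.mean(axis=0)
_, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
Vk = Vt[:2].T  # (dimensions x 2) basis with orthonormal columns

def forward_project(x):
    # fp: 2D position of a (possibly user-modified) data point
    return (x - mean) @ Vk

def backward_project(z):
    # unconstrained bp: minimum-norm high-dimensional preimage of z
    return mean + z @ Vk.T
```

Because the columns of `Vk` are orthonormal, re-projecting a back-projected point recovers it exactly, and changing feature $i$ by $\delta$ moves the projection by $\delta$ times row $i$ of `Vk`, which is consistent with \pl being straight lines under PCA.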
\noindent{\bf Time Performance:} Figure~\ref{fig:analysis} shows that the
execution time for both \fp (e,f) and \bp (g,h) is on the order of microseconds
and is influenced neither by the number of samples nor by the amount of change;
charts f and h show a linear dependence on the number of dimensions that does
not, however, significantly affect the time performance. Even when dealing with
larger datasets (e.g. $>$ 500 samples, $>$ 100 dimensions), both techniques are
suitable for interactive data analysis tools. Figure~\ref{fig:analysis} also
shows the time performance of \pl (i,j) and \fm (k,l), respectively assuming
the computation of each proline with an average resolution of 5 \fp samples and
the generation of the \fm with a resolution of 100 \bp samples. In particular,
we note that the time to compute \pl depends linearly on the number of
dimensions.
\noindent{\bf Accuracy:} To assess the accuracy of our techniques, we
introduce a new similarity criterion for data-point neighborhoods in
dimensionally reduced spaces. For each execution of the algorithms on a data
point, we compute two sets of neighbors: 1) the $n$ closest neighbors in the
projection plane after performing \fp or after moving the data point with \bp,
and 2) the $n$ closest neighbors in the projection plane after performing the
dimensionality reduction on the multidimensional data, after it has been
modified through \fp or by \bp. Optimally, these two neighborhoods should
contain the same elements, which should have the same relative distance from
the data point on which the technique is performed. We define a neighborhood
correlation index $c_n=c_e\times c_o$, where $c_e$ is the percentage of elements
that appear in both neighborhoods, whereas $c_o$ is the percentage of elements
whose distance from the data point considered remains in the same order. The
index varies between 0 and 1, with 1 corresponding to very similar
neighborhoods.
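A minimal sketch of this index follows; it is our own reading of the definition (in particular, we interpret $c_o$ as the fraction of shared neighbors whose distance rank is identical in both neighborhoods), and the function names are illustrative:

```python
import numpy as np

def knn(Z, i, n):
    # indices of the n closest neighbors of point i in the 2D embedding Z
    d = np.linalg.norm(Z - Z[i], axis=1)
    return [j for j in np.argsort(d) if j != i][:n]

def neighborhood_correlation(P, Q, i, n=10):
    # c_n = c_e * c_o for the manipulated point i in embeddings P and Q
    A, B = knn(P, i, n), knn(Q, i, n)
    common = set(A) & set(B)
    c_e = len(common) / n            # fraction of shared neighbors
    if not common:
        return 0.0
    a = [j for j in A if j in common]
    b = [j for j in B if j in common]
    c_o = sum(x == y for x, y in zip(a, b)) / len(common)
    return c_e * c_o
```

Identical embeddings yield an index of 1, and any pair of embeddings yields a value in $[0, 1]$, as stated above.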
Figure~\ref{fig:analysis} shows that the accuracy of both \fp (a,b) and \bp
(c,d) is mostly insensitive to the number of samples or dimensions. Instead, we
notice a strong dependence on the amount of change introduced by the user. This
shows that our proposed techniques are well suited for local changes in the
data, and greater user modifications could possibly alter the properties of the
dimensionality reduction.
\section{Interacting with Nonlinear Dimensionality Reductions}
We have so far demonstrated our interaction methods on PCA, a linear projection
method. What about interacting with nonlinear dimensionality reductions?
There are out-of-sample extrapolation methods for many nonlinear
dimensionality-reduction techniques that make the extension of \fp with \pl
possible~\cite{Bengio_2004}. As for \bp, its computation will be
straightforward in certain cases (e.g. when an
autoencoder~\cite{hinton2006reducing} is used). In general, however, some form
of constrained optimization specific to the dimensionality-reduction algorithm
will be needed. Nonetheless, it is highly desirable to develop general methods
that apply across dimensionality-reduction algorithms.
Below we discuss an application of our techniques to an autoencoder-based
nonlinear dimensionality reduction.
\subsection{Autoencoder-based dimensionality reduction}
An autoencoder is an artificial neural network model that can learn a
low-dimensional representation (or encoding) of data in an unsupervised
fashion~\cite{rumelhart1986learning}. Autoencoders that use multiple hidden
layers can discover nonlinear mappings between high-dimensional datasets and
their low-dimensional representations. Unlike many other
dimensionality-reduction methods, an autoencoder gives mappings in both
directions between the data and low-dimensional
spaces~\cite{hinton2006reducing}, making it a natural candidate for application
of the interactions introduced here.
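To illustrate the two directions of the mapping, here is a deliberately tiny, untrained autoencoder sketch (random weights and a smaller architecture than the network we actually trained; purely to show the mechanics of \bp reducing to decoding):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy untrained autoencoder 784 -> 32 -> 2 -> 32 -> 784 with tanh units.
sizes = [784, 32, 2, 32, 784]
Ws = [rng.normal(scale=0.05, size=(a, b)) for a, b in zip(sizes, sizes[1:])]

def encode(x):
    for W in Ws[:2]:
        x = np.tanh(x @ W)
    return x  # 2D embedding

def decode(z):
    for W in Ws[2:]:
        z = np.tanh(z @ W)
    return z  # 784 reconstructed feature values

# Backward projection: the user drags a projected point; decoding the new
# 2D location yields updated feature values directly, with no constrained
# optimization needed.
x = rng.normal(size=784)
z_moved = encode(x) + np.array([0.1, -0.2])
x_updated = decode(z_moved)
```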
\begin{figure}
\centering
\includegraphics[width=\linewidth]{figures/autoencoder_mnist}
\caption{\normalfont Backward projection with autoencoder. A user explores a
dimensionality reduction of handwritten digits from the MNIST
database~\protect\cite{lecunmnist} using backward projection. The two-dimensional
projection is obtained with a deep autoencoder. Projected data points are colored
based on the digit represented. By moving a node of the digit 7, back projection
enables the user to see how its feature values (pixels) are updated. In particular,
the user observes a smooth transition from 7 to 1, with 9 as an intermediate
representation.\label{fig:autoencoders}}
\end{figure}
Figure~\ref{fig:autoencoders} shows how \bp can be applied in exploring a
two-dimensional, autoencoder-based projection of handwritten digits from the
MNIST database~\cite{lecunmnist}. To this end, we first train an autoencoder
with seven fully connected layers using the 60,000-digit images from the
training set of the MNIST database.
Each image has $28\text{ px} \times 28\text{ px} = 784$ features. The
seven layers of the autoencoder have sizes 784, 128, 32, 2, 32, 128 and 784.
After training the network, we plot the encoded low-dimensional representations
of 100 digits from the MNIST test set as circular nodes in the plane. We color
each node by their associated digit. By performing \bp on a data point, it is
possible to observe the change in its feature values (pixel intensities) as the
point is moved around the projection plane. Our technique shows, in this case,
how one handwritten digit can gradually be transformed into another.
\begin{figure} \centering
\includegraphics[width=\linewidth]{figures/autoencoder_prolines}
\caption{\normalfont Prolines (a) from an autoencoder-based projection of
MNIST images~\protect\cite{lecunmnist}. A user can opt to show only the $n$
most relevant prolines through Praxis' \textit{Selection Details} panel
(b). Fifty most relevant prolines (c) corresponding to the 50 pixels with
the highest variability (d). In contrast to PCA, \pl in this case are not
straight lines. Depending on the user selection (b), the image in the
\textit{Selection Details} panel can alternatively display: 1) the feature
values (pixels) of the selected data point, 2) the difference map between
the original pixels and their value after performing \fp or \bp, and 3) an
interactive image for setting constraints on the values of each
pixel.\label{fig:autoencoder_prolines} }
\end{figure}
Using the same neural network model, we integrate support for autoencoder-based
dimensionality reduction in our tool \textit{Praxis}. In particular,
Figure~\ref{fig:autoencoder_prolines} shows that \pl are not straight lines for
nonlinear dimensionality-reduction methods.
\section{A Model for Dynamic Visualization Interactions}
The interaction techniques introduced here belong to a class of interactions
that tightly couples data with its visual representation so that when users
interactively change one, they can observe a corresponding change in the other.
For example, through \fp, users observe how the visual representation (2D
position) changes as they change the value of a dataset's attributes.
Conversely, users can see how the data changes through \bp as they change the
visual representation. This class of interactions is essential for realizing
dynamic visualizations (e.g.~\cite{Victor_2013,kondo2014dimpvis}) and we call
them {\em dynamic visualization interactions} or {\em dynamic interactions} for
short.
\begin{figure}[t]
\includegraphics[width=\linewidth]{figures/ve}
\caption{\normalfont Visual embedding is a function that
preserves structures in the data domain $X$ within the embedded
perceptual space $Y$ (adapted from \protect\cite{Demiralp_2014}).\label{fig:ve}}
\end{figure}
\begin{figure}[t]
\includegraphics[width=\linewidth]{figures/ve_extended}
\caption{
\normalfont Bidirectionally coupling data and its visual
representations. Visual embedding suggests that a change in the data should
be reflected with a proportional change in its visual representation using
$f$, the visual embedding function. Conversely, change in a visual
representation should be reflected by a proportional change in the
corresponding data using $f^{-1}$.\label{fig:ve_extended}
}
\end{figure}
We now look at dynamic interactions under the visual embedding
model~\cite{Demiralp_2014}. The visual embedding model provides a functional
view of data visualization and posits that a good visualization is a structure-
or relation-preserving mapping from the data domain to the range (co-domain) of
visual encoding variables (Figure~\ref{fig:ve}). Visual embedding immediately
gives us criteria on which dynamic interactions should be considered effective
(Figure~\ref{fig:ve_extended}): 1) a change in data (e.g., induced by user
through direct manipulation) should cause a proportional change in its visual
representation and 2) a perceptual change in a visual encoding value (e.g., by
dragging nodes in a scatter plot or changing the height of a bar in a bar
chart) should be reflected by a change in data value that is proportional to
the perceived change. However, to enable a dynamic interaction on a
visualization, we need to have access to both the visualization function $f$
and its inverse $f^{-1}$. The visual embedding model also suggests why
implementing back mapping to the data space can be challenging.
We consider three basic forms of the visualization function $f$ in
Figure~\ref{fig:ve_classes} through examples using a toy dataset in
Figure~\ref{fig:ve_example_table}. When the visualization function $f$ is
one-to-one, a dynamic interaction over $f$ is straightforward as $f^{-1}$
exists. When $f$ is one-to-many (still invertible but not necessarily a
proper function), $f^{-1}$ exists and is determined by the target of
interaction. Consider the example in Figure~\ref{fig:ve_classes}. We visualize how
X values change for each NAME with a line chart. We also visualize the
correlation of X and Y with a scatter plot. If a user moves a point up or down
in the line chart, the corresponding change in X can be easily computed and the
scatter plot can be updated in a brush-and-link fashion. Essentially, the
one-to-many case can be seen as a collection of multiple one-to-one
visualizations.
The most interesting case is when $f$ is many-to-one and hence not invertible.
A frequent source of such visualization functions is summary data
aggregations, which are lossy. We can consider dimensionality reduction as a
form of aggregation. A simple example of a many-to-one visualization is the bar
chart (Figure~\ref{fig:ve_classes}) that shows the mean X for the data points
grouped by TYPE, A and B. Now, in a dynamic interaction scenario, how should
we update the data values if a user changes the heights of the bars? Our \bp
solves a similar problem under a more complex visualization function,
dimensionality reduction. In
general, constructing a dynamic interaction over many-to-one visualization
functions would require imposing a set of assumptions over data in the form
of, e.g., constraints or models. This presents a challenging yet important
future research direction.
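To make the many-to-one case concrete, here is a minimal sketch of one possible constraint for pushing a new bar height back onto the underlying group members (an equal-shift assumption of our own choosing, not a prescription of the model):

```python
def update_group(values, new_mean):
    # Invert a mean-aggregation bar under an "equal shift" constraint:
    # every group member moves by the same delta so the mean matches.
    delta = new_mean - sum(values) / len(values)
    return [v + delta for v in values]
```

For example, `update_group([1, 2, 3], 4)` returns `[3, 4, 5]`. Other constraints (e.g., proportional scaling, or holding some members fixed) would invert the same bar differently, which is precisely why assumptions must be made explicit.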
\begin{figure}[h]
\includegraphics[width=\linewidth]{figures/ve_classes}
\caption{\normalfont Visualization classes. Three basic forms of the visualization
function $f$. Implementing a dynamic interaction is clearly challenging when
$f^{-1}$ does not exist.\label{fig:ve_classes} }
\end{figure}
\begin{figure}[h]
\includegraphics[width=\linewidth]{figures/ve_example_table}
\caption{ \normalfont Toy tabular dataset used for the three examples
in Figure~\ref{fig:ve_classes}. \label{fig:ve_example_table}}
\end{figure}
\section{Visual Analysis Is Like Doing Experiments}
This paper introduces \fp, \bp, and the related visualizations, \pl and \fm, to
improve user experience in exploratory data analysis using DR. We demonstrate
these techniques on PCA- and autoencoder-based DRs. We also contribute a new
tool, Praxis, implementing our techniques for DR-based data exploration. We
evaluate our work through a computational performance analysis, along with a
user study. Our visual interactions are scalable, intuitive to use and
effective for DR-based exploratory data analysis. We situate our techniques in
a class of visualization interactions at large that we discuss under the
visual embedding model.
Data analysis is an iterative process in which analysts essentially run mental
experiments on data, asking questions, (re)forming and testing hypotheses.
Tukey and Wilk~\cite{Tukey_1966} were among the first in observing the
similarities between data analysis and doing experiments, listing eleven
similarities between the two. In particular, one of these similarities states
that ``Interaction, feedback, trial and error are all essential; convenience is
dramatically helpful.'' In fact, data can be severely underutilized (e.g.,
\textit{dead}~\cite{Haas_2011,Victor_2013}) without what-if analysis.
However, to perform data analysis as if we are running data experiments,
dynamic visual interactions that bidirectionally couple data and its visual
representations must be one of our tools. Our work here contributes to the
nascent toolkit needed for performing visual analysis in a similar way to
running experiments.
\section{Acknowledgments}
The authors thank Paulo Freire for inspiring the name ``Praxis.''
\bibliographystyle{SIGCHI-Reference-Format}
\section{Introduction}
\label{intro}
Let $\mathbb{H}$ be the upper half plane of $\mathbb{C}$, i.e., the set of complex numbers with positive imaginary part. Let $\Gamma$ denote the full modular group $\textrm{SL}_{2}(\mathbb{Z})$, i.e., the group of $2\times2$ integer matrices of determinant~$1$, and let $j(\tau)$ be the well-known modular $j$-invariant. As one of the most famous functions in number theory, the modular $j$-invariant possesses numerous interesting properties. For example, it generates the function field of $\Gamma\backslash\mathbb{H}^{*}$, where $\mathbb H^*=\mathbb H \cup \mathbb{Q} \cup \{\infty\}$, and its value at an imaginary quadratic point $\tau$, called a singular modulus, generates a ring class field of the imaginary quadratic field $\mathbb{Q}(\tau)$ \cite{Sh}. It is no exaggeration to say that the modular $j$-invariant appears everywhere in the areas of number theory related to modular forms. Even in the recent, active study of sign changes of the Fourier coefficients of modular forms, Asai et al.\ \cite{AKN} showed that the signs of the Fourier coefficients of $\frac{1}{j(\tau)}$ alternate. These interesting properties of the modular $j$-invariant have motivated a great variety of studies, a famous one being the study of traces of singular moduli, defined as follows.
Let $d$ be a positive integer congruent to $0$ or $3$ modulo $4$. We denote by $Q_d$ the set of positive-definite binary quadratic forms $Q(X, Y)=[a, b, c]=aX^2+bXY+cY^2$, $a, b, c \in \mathbb{Z}$, of discriminant $-d$, with the usual action of $\Gamma$. To each $Q \in Q_d$, we associate the unique root $\alpha_Q \in \mathbb H$ of $Q(X, 1)$ and set $w_Q:=|\bar{\Gamma}_{Q}|$, where $\bar{\Gamma}_{Q}$ is the group of automorphisms of $Q$, i.e., the stabilizer of $Q$. Then the trace of singular moduli of discriminant $-d$ is defined by
$$
t(d)=\sum_{Q\in Q_{d}/\Gamma}\frac{j(\alpha_{Q})-744}{|\bar{\Gamma}_{Q}|}
$$
with the convention $t(0)=2, t(-1)=-1$ and $t(d)=0$ for $d<-1$ or $d \equiv 1, 2 \pmod4$. The traces of singular moduli play a key role in recovering some arithmetic information encoded in the Fourier coefficients of modular forms. For example, Kaneko \cite{K} showed that
$$
a(n)=\frac{1}{n}\left\{\sum_{r\in\mathbb{Z}}t(n-r^{2})+\sum_{\substack{r\geq1\\r\,odd}}\left((-1)^{n}t(4n-r^{2})-t(16n-r^{2})\right)\right\}
$$
for $n\geq1$, where $a(n)$ are the Fourier coefficients of the modular $j$-invariant. Inspired by Kaneko's work, Ohta \cite{O} derived analogous formulas for some genus zero modular functions such as the modular function
\begin{align*}
\frac{\eta^{24}(\tau)}{\eta^{24}(2\tau)}&=\frac{1}{q}\prod_{n=1}^{\infty}\frac{(1-q^{n})^{24}}{(1-q^{2n})^{24}}\\
&=\frac{1}{q}-24+276q-2048q^2+11202q^3-49152q^4+184024q^5-614400q^6+O(q^{7})\\
&=\sum_{n=-1}^{\infty}c(n)q^{n},
\end{align*}
where $\eta(\tau)$ is the Dedekind eta function defined by
$$
\eta(\tau)=q^{1/24}\prod_{n=1}^{\infty}(1-q^{n})
$$
with $q=\exp(2\pi i\tau)$, and he obtained that
\begin{align*}
c(n)
&=\frac{1}{n}\left\{\sum_{r\in\mathbb{Z}}t(n-r^{2})+\sum_{\substack{r\geq1\\r\,\,odd}}(-1)^{n}t(4n-r^{2})+24\sum_{\substack{d|n\\d\,\,odd}}d\right\}.
\end{align*}
This modular function has certain properties analogous to the modular $j$-invariant. For example, it is a generator for the function field of $\Gamma_{0}(2)\backslash\mathbb{H}^{*}$, where $\Gamma_0(2)$ denotes the congruence subgroup of level $2$, i.e.
$$
\Gamma_0(2)=\left\{
\begin{bmatrix}
a & b \\
c & d \\
\end{bmatrix}
\in \Gamma\bigg| c \equiv 0 \pmod2
\right\}.
$$
Also, if one observes the Fourier coefficients $c(n)$ of $\frac{\eta^{24}(\tau)}{\eta^{24}(2\tau)}$, one may note that the signs of the first eight $c(n)$'s alternate, like those of $\frac{1}{j(\tau)}$.
Recently, motivated by Kaneko's and Ohta's work, Matsusaka and Osanai \cite[Theorem 1.3]{MO} extended Ohta's formula for $c(n)$ to a case involving certain generalized traces of singular moduli, and moreover derived a striking asymptotic formula for $c(n)$: as $n\to\infty$,
\begin{equation} \label{asym}
c(n)\sim\frac{e^{2\pi\sqrt{n}}}{2n^{3/4}}\times
\begin{cases}
-1, & \mbox{if $n\equiv0\pmod{2}$};\\
1, & \mbox{if $n\equiv1\pmod{2}$}
\end{cases}.
\end{equation}
As a consequence of the asymptotic formula \eqref{asym}, the coefficient $c(n)$ possesses a sign-change property for large $n$. In this paper,
we extend the above observations and obtain a full-range description of the oscillatory behavior of $c(n)$. Namely, we have the following result.
\begin{Theorem}
\label{main}
For every integer $n\geq-1$, we have $(-1)^{n+1}c(n)>0$, i.e.,
$$
\begin{cases}
c(n)>0, &\mbox{if $n$ is odd};\\
c(n)<0, &\mbox{if $n$ is even}.
\end{cases}
$$
\end{Theorem}
An immediate consequence of Theorem \ref{main} is the following.
\begin{Corollary}
The Fourier coefficients $c(n)$ never vanish.
\end{Corollary}
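For illustration, Theorem \ref{main} can be checked numerically from the product expansion of $\eta^{24}(\tau)/\eta^{24}(2\tau)$ given above. The following sketch (ours, not part of the proof) expands the product to a finite order and verifies the sign pattern:

```python
def eta_quotient_coeffs(N):
    # f[k] = c(k-1): coefficients of q * eta(tau)^24 / eta(2*tau)^24
    #      = prod_{n>=1} (1-q^n)^24 / (1-q^{2n})^24, truncated at q^N.
    f = [0] * (N + 1)
    f[0] = 1
    for n in range(1, N + 1):
        for _ in range(24):            # multiply by (1 - q^n)^24
            for i in range(N, n - 1, -1):
                f[i] -= f[i - n]
        for _ in range(24):            # divide by (1 - q^{2n})^24
            for i in range(2 * n, N + 1):
                f[i] += f[i - 2 * n]
    return f

f = eta_quotient_coeffs(100)
assert f[:5] == [1, -24, 276, -2048, 11202]          # c(-1), ..., c(3)
assert all((-1) ** k * f[k] > 0 for k in range(len(f)))  # (-1)^{n+1} c(n) > 0
```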
\section{Proof of Theorem \ref{main}}
We need the following lemmas.
\begin{Lemma}[Robin] \label{robin}
For $n\geq3$,
$$
\sum_{d|n}d<e^{\gamma}n\log\log{n}+\frac{n}{\log\log{n}},
$$
where $\gamma$ is Euler's constant.
\end{Lemma}
(see, e.g., \cite[Theorem 2]{R}).
\begin{Remark}
Under the Riemann Hypothesis, the above estimation can be sharpened as follows:
$$
\sum_{d|n}d<e^{\gamma}n\log\log{n}.
$$
\end{Remark}
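A quick numerical sanity check of Robin's bound (ours, purely for illustration; Lemma \ref{robin} of course holds for all $n\geq3$ unconditionally):

```python
import math

def sigma(n):
    # sum of the positive divisors of n
    return sum(d for d in range(1, n + 1) if n % d == 0)

EULER_GAMMA = 0.5772156649015329

def robin_bound(n):
    ll = math.log(math.log(n))
    return math.exp(EULER_GAMMA) * n * ll + n / ll

assert all(sigma(n) < robin_bound(n) for n in range(3, 2000))
```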
\begin{Lemma}[Choi, Kim and Lim]\label{ckl}
The trace of singular moduli, $t(d)$, has the following sign-change property.
\begin{equation}
\label{tdsign}
\begin{cases}
t(d)>0, &\mbox{if $d\equiv0\pmod{4}$};\\
t(d)<0, &\mbox{if $d\equiv 3\pmod{4}$}.
\end{cases}
\end{equation}
\end{Lemma}
(see, e.g., \cite[Theorem 1]{CKL}).
\begin{Lemma}[Choi, Kim and Lim] \label{ckl2}
The trace of singular moduli, $t(d)$, has a lower bound and an upper bound as follows.
\begin{enumerate}
\item[(i).]{If $d\equiv0\pmod{4}$, then
$$
\exp(\pi\sqrt{d})-\frac{1}{2}(2\pi d)^{\frac{3}{2}}\exp\left(\frac{\pi}{3}\sqrt{d}\right)\leq t(d)\leq
\exp(\pi\sqrt{d})+\frac{1}{2}(2\pi d)^{\frac{3}{2}}\exp\left(\frac{\pi}{3}\sqrt{d}\right).
$$
}
\item[(ii).] {If $d\equiv3\pmod{4}$, then
$$
-\exp(\pi\sqrt{d})-\frac{1}{2}(2\pi d)^{\frac{3}{2}}\exp\left(\frac{\pi}{3}\sqrt{d}\right)\leq t(d)\leq
\frac{1}{2}(2\pi d)^{\frac{3}{2}}\exp\left(\frac{\pi}{3}\sqrt{d}\right)-\exp(\pi\sqrt{d}).
$$
}
\end{enumerate}
\end{Lemma}
(see, e.g., \cite[(31)]{CKL}).
\begin{Lemma}[Zagier] \label{zagier}
For every positive integer $n$, we have
$$
\sum_{|r|<2\sqrt{n}}t(4n-r^{2})=
\begin{cases}
-4, &\mbox{if $n$ is a square}; \\
2, &\mbox{if $4n+1$ is a square};\\
0, &\mbox{otherwise}.
\end{cases}
$$
\end{Lemma}
(see, e.g., \cite[Theorem 2]{Z}).
\begin{Lemma}[Ohta]\label{ohta}
For every positive integer $n$, we have
\begin{align*}
c(n)
&=\frac{1}{n}\left\{\sum_{r\in\mathbb{Z}}t(n-r^{2})+\sum_{\substack{r\geq1\\r\,\,odd}}(-1)^{n}t(4n-r^{2})+24\sum_{\substack{d|n\\d\,\,odd}}d\right\}.
\end{align*}
\end{Lemma}
(see, e.g., \cite[Theorem 2.1]{O}).
\bigskip
\begin{proof}[\textbf{Proof of Theorem \ref{main}}]
Note that it suffices to prove the sign-change property for the sequence $\{nc(n)\}_{n \ge -1}$, which, by Lemma \ref{ohta}, is the same as
$$
\left\{\sum_{r\in\mathbb{Z}}t(n-r^{2})+\sum_{\substack{r\geq1\\r\,\,odd}}(-1)^{n}t(4n-r^{2})+24\sum_{\substack{d|n\\d\,\,odd}}d \right\}_{n \ge -1}.
$$
We divide our proof into four cases.
\medskip
\textit{Case I: $n \equiv 0 \pmod 4$.}
\medskip
We need to show that $nc(n)<0$ in this case. Suppose $n=4k$ for some $k \ge 0$; then we have
\begin{align*}
&\sum_{r\in\mathbb{Z}}t(4k-r^{2})+\sum_{\substack{r\geq1\\r\, odd}}t(16k-r^{2})+24\sum_{\substack{d|k\\2\nmid d}}d\\
&=\sum_{|r|\leq\sqrt{4k}}t(4k-r^{2})+\sum_{r\geq0}t(16k-r^{2})-\sum_{\substack{r\geq0\\r\,even}}t(16k-r^{2})+24\sum_{\substack{d|k\\2\nmid d}}d\\
& \quad \textrm{(By the definition of $t(d)$)} \\
&=\sum_{|r|\leq\sqrt{4k}}t(4k-r^{2})+\frac{1}{2}\sum_{|r|\leq\sqrt{16k}}t(16k-r^{2})+\frac{t(16k)}{2}-\sum_{\substack{r\geq0\\r\,even}}t(16k-r^{2})+24\sum_{\substack{d|k\\2\nmid d}}d\\
&=\sum_{|r|\leq\sqrt{4k}}t(4k-r^{2})+\frac{1}{2}\sum_{|r|\leq\sqrt{16k}}t(16k-r^{2})+\left(\frac{t(16k)}{2}-t(16k) \right)-\sum_{\substack{r>1 \\r\,even}}t(16k-r^{2})+24\sum_{\substack{d|k\\2\nmid d}}d\\
&=\sum_{|r|\leq\sqrt{4k}}t(4k-r^{2})+\frac{1}{2}\sum_{|r|\leq\sqrt{16k}}t(16k-r^{2})-\frac{t(16k)}{2}-\sum_{\substack{r> 1\\r\,even}}t(16k-r^{2})+24\sum_{\substack{d|k\\2\nmid d}}d.
\end{align*}
By Lemma \ref{zagier}, we know that
$$
\sum_{|r|<\sqrt{4k}}t(4k-r^{2})+\frac{1}{2}\sum_{|r|<\sqrt{16k}}t(16k-r^{2})\leq3.
$$
Together with $t(0)=2$, this implies that
$$
\sum_{|r|\leq\sqrt{4k}}t(4k-r^{2})+\frac{1}{2}\sum_{|r|\leq\sqrt{16k}}t(16k-r^{2})\leq9.
$$
Moreover, it is clear that
$$
\sum_{\substack{d|k\\2\nmid d}}d\leq\sum_{d|k}d.
$$
Thus, we have
\begin{align*}
&\sum_{|r|\leq\sqrt{4k}}t(4k-r^{2})+\frac{1}{2}\sum_{|r|\leq\sqrt{16k}}t(16k-r^{2})-\frac{t(16k)}{2}-\sum_{\substack{r>1\\r\,even}}t(16k-r^{2})+24\sum_{\substack{d|k\\2\nmid d}}d\\
&\leq 9-\sum_{\substack{r>1\\r\,even}}t(16k-r^{2})-\frac{t(16k)}{2}+24\sum_{d|k}d.
\end{align*}
Since $16k-r^{2}\equiv0\pmod{4}$ when $r$ is even, Lemma \ref{ckl} gives
$$
\sum_{\substack{r>1\\r\,even}}t(16k-r^{2})>0.
$$
\textbf{Claim:}
When $k \ge 3$, we have
$$
9+24\sum_{d|k}d-\frac{t(16k)}{2}<0.
$$
Indeed, by Lemma \ref{robin} and Lemma \ref{ckl2}, we have
$$
\sum_{d|k}d<e^{\gamma}k\log\log{k}+\frac{k}{\log\log{k}}
$$
and
$$
t(16k) \ge \exp\left(\pi\sqrt{16k}\right)-\frac{1}{2}(32\pi k)^{\frac{3}{2}}\exp\left(\frac{\pi}{3}\sqrt{16k} \right).
$$
Hence, it suffices to show that when $k \ge 3$, the following inequality holds,
$$
9+24\left(e^{\gamma}k\log\log{(k)}+\frac{k}{\log\log{k}}\right)<\frac{1}{2}\left(\exp\left(\pi\sqrt{16k}\right)-\frac{1}{2}(32\pi k)^{\frac{3}{2}}\exp\left(\frac{\pi}{3}\sqrt{16k}\right)\right).
$$
The proof of this inequality is elementary, and hence we omit it here. Thus we have shown that $c(n)=c(4k)<0$ for $k \ge 3$. Combining this estimate with the values $c(0)=-24$, $c(4)=-49152$ and $c(8)=-5373952$, we get the desired result.
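The omitted inequality can also be checked numerically for a finite range of $k$; the following sketch (ours, not part of the proof) compares $9+24\sum_{d|k}d$ directly against the lower bound on $t(16k)/2$ from Lemma \ref{ckl2}(i):

```python
import math

def sigma(k):
    # sum of the positive divisors of k
    return sum(d for d in range(1, k + 1) if k % d == 0)

def lhs(k):
    # 9 + 24 * sum of divisors of k
    return 9 + 24 * sigma(k)

def rhs(k):
    # lower bound on t(16k)/2 from Lemma ckl2(i)
    return 0.5 * (math.exp(math.pi * math.sqrt(16 * k))
                  - 0.5 * (32 * math.pi * k) ** 1.5
                  * math.exp(math.pi / 3 * math.sqrt(16 * k)))

assert all(lhs(k) < rhs(k) for k in range(3, 500))
```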
\medskip
\textit{Case II: $n \equiv 2 \pmod 4$.}
\medskip
Suppose $n=4k+2$ for some $k \ge 0$. Similarly, we start with Lemma \ref{ohta} to find
\begin{align*}
nc(n)&=\sum_{r\in\mathbb{Z}}t(4k+2-r^{2})+\sum_{\substack{r\geq1\\r \, odd}}t(16k+8-r^{2})+24\sum_{\substack{d|(2k+1)}}d.
\end{align*}
Note that $4k+2-r^{2}\equiv 1 \ \textrm{or} \ 2\pmod{4}$, and hence by the definition of $t(d)$, it follows that $t(4k+2-r^2)=0$ for all $r \in \mathbb{Z}$, which implies
$$
\sum_{r \in \mathbb{Z}} t(4k+2-r^2)=0.
$$
Repeating the argument of Case I with $4k+2$ in place of $4k$, we can deduce that
$$
\sum_{\substack{r\geq1\\r\, odd}}t(16k+8-r^{2})+24\sum_{\substack{d|(2k+1)}}d<0
$$
for all $k \ge 2$. Combining this estimate with the values $c(2)=-2048$ and $c(6)=-614400$, we get $c(n)=c(4k+2)<0$.
Thus, from the previous two cases, we see that $c(n)<0$ for $n$ even.
\medskip
\textit{Case III: $n \equiv 1 \pmod 4$.}
\medskip
Let $n=4k+1$ for some $k \ge 0$. In this case, we have
$$
nc(n)=\sum_{r\in\mathbb{Z}}t(4k+1-r^{2})-\sum_{\substack{r\geq1\\r\, odd}}t(16k+4-r^{2})+24\sum_{\substack{d|(4k+1)}}d.
$$
Since $4k+1-r^{2}\equiv 0 \ \textrm{or} \ 1\pmod{4}$, Lemma \ref{ckl}, together with the convention $t(d)=0$ for $d\equiv1,2\pmod{4}$, gives
$$
\sum_{r\in\mathbb{Z}}t(4k+1-r^{2})>0.
$$
Moreover, for $r$ odd, we have $16k+4-r^{2}\equiv3\pmod{4}$. Using Lemma \ref{ckl} again, we find that
$$
\sum_{\substack{r\geq1\\r\, odd}}t(16k+4-r^{2})<0.
$$
Therefore, $nc(n)>0$ for $n=4k+1$.
\medskip
\textit{Case IV: $n \equiv 3 \pmod 4$.}
\medskip
Finally, for the case $n=4k+3, k \ge -1$, again, we start with
\begin{align*}
nc(n)&=\sum_{r\in\mathbb{Z}}t(4k+3-r^{2})-\sum_{\substack{r\geq1\\r \, odd}}t(16k+12-r^{2})+24\sum_{\substack{d|(4k+3)}}d\\
&=\sum_{|r|\leq\sqrt{4k+3}}t(4k+3-r^{2})-\sum_{\substack{1\leq r\leq\sqrt{16k+12}\\r \, odd}}t(16k+12-r^{2})+24\sum_{\substack{d|(4k+3)}}d\\
&= \left( \sum_{|r|\leq\sqrt{4k+3}}t(4k+3-r^{2})-t(16k+11) \right)-\sum_{\substack{3\leq r\leq\sqrt{16k+12}\\r \, odd}}t(16k+12-r^{2})+24\sum_{\substack{d|(4k+3)}}d.
\end{align*}
Clearly, if $r$ is odd, then $16k+12-r^{2}\equiv3\pmod{4}$, so by Lemma \ref{ckl}, $t(16k+12-r^{2})<0$, and hence
$$
\sum_{\substack{3\leq r\leq\sqrt{16k+12}\\r \, odd}}t(16k+12-r^{2})<0.
$$
Meanwhile, for $r \in \mathbb{Z}$ we have $4k+3-r^{2}\equiv 2 \ \textrm{or} \ 3 \pmod{4}$; since $t(d)=0$ for $d\equiv2\pmod{4}$, it follows that
$$
\sum_{|r|\leq\sqrt{4k+3}}t(4k+3-r^{2})=\sum_{\substack{|r|\leq\sqrt{4k+3}\\r \, even}}t(4k+3-r^{2}).
$$
Thus, by Lemma \ref{ckl2}, (ii) and an easy calculation, we see that for $k \ge 3$,
\begin{eqnarray*}
&& \sum_{|r|\leq\sqrt{4k+3}}t(4k+3-r^{2})-t(16k+11)\\
&& \ge \exp \left(\pi \sqrt{16k+11} \right)-\frac{\left[2\pi(16k+11)\right]^{\frac{3}{2}}}{2} \exp \left(\frac{\pi \sqrt{16k+11}}{3} \right)\\
&& \quad -2 \sqrt{4k+3} \left( \exp\left(\pi \sqrt{4k+3}\right)+\frac{\left[2\pi(4k+3)\right]^{\frac{3}{2}}}{2} \exp \left( \frac{\pi \sqrt{4k+3}}{3} \right) \right)\\
&&>0.
\end{eqnarray*}
Thus, when $n=4k+3$ with $k \ge 3$, we have $c(n)>0$. Combining this estimate with the values $c(-1)=1$, $c(3)=11202$, $c(7)=1881471$ and $c(11)=91231550$, it follows that $c(n)>0$ for all $n=4k+3$ with $k \ge -1$.
Clearly, the arguments in Case III and Case IV imply that $c(n)>0$ for $n$ odd.
\begin{Remark}
For more general cases, say $p\geq3$ with $(p-1)\mid 24$, one needs to consider the generalized traces of singular moduli (see, e.g., \cite{CJKK}); these will be treated in subsequent work of the authors \cite{HY}.
\end{Remark}
\end{proof}
\section{Introduction}
In a recent article \cite{p187}, random coding error exponents and expurgated
exponents were analyzed for the generalized likelihood
decoder (GLD), where the decoded message is randomly selected under a
probability distribution
that is proportional to a general exponential function of the empirical joint
distribution of the codeword and the channel output vectors.
In Section V of \cite{p187}, Theorem 2 provides an expurgated
exponent which is applicable to this decoder (and hence also to the optimal
maximum likelihood decoder). The proof of that theorem is based on two steps of a
certain expurgation procedure. Nir Weinberger has brought to my attention that
there is a certain gap in that proof, as the second expurgation step might
interfere with the first step (more details will follow in Section 2 of this
letter). The purpose of this letter is to provide an alternative proof to the
above mentioned theorem.
\section{Setup and Background}
Consider a discrete memoryless channel (DMC),
designated by a matrix of single--letter input--output
transition probabilities $\{W(y|x),~x\in{\cal X},~y\in{\cal Y}\}$. Here the channel input
symbol $x$ takes on values in a finite input alphabet ${\cal X}$, and the
channel output symbol $y$ takes on values in a finite output alphabet ${\cal Y}$.
When the channel is fed by a vector
$\bx=(x_1,\ldots,x_n)\in{\cal X}^n$, it outputs a vector
$\by=(y_1,\ldots,y_n)\in{\cal Y}^n$ according to
\begin{equation}
\label{channel}
W(\by|\bx)=\prod_{t=1}^n W(y_t|x_t).
\end{equation}
A code ${\cal C}_n\subseteq{\cal X}^n$ is a collection of $M=e^{nR}$ channel input
vectors,
$\{\bx_0,\bx_1,\ldots,\bx_{M-1}\}$, $R$ being the coding rate in nats per
channel
use. It is assumed that all messages, $m=0,1,\ldots,M-1$, are equally likely.
As is very common in the information theory literature, we consider
the random coding regime.
The random coding ensemble considered
is the ensemble of constant composition codes, where each
codeword is drawn independently under the uniform distribution within a given
type class ${\cal T}(Q_X)$, i.e., the set of all
vectors in ${\cal X}^n$ whose empirical distribution is given by $Q_X$.
Once the code has been randomly selected, it is
revealed to
both the encoder and the decoder.
When the transmitter wishes to convey a message $m$, it transmits the
corresponding code-vector $\bx_m$ via the channel, which in turn,
stochastically maps it into an
$n$--vector $\by$ according to (\ref{channel}).
Upon receiving $\by$,
the stochastic generalized likelihood decoder randomly selects the
estimated message $\hat{m}$ according to a generalized version of
the induced posterior distribution
of the transmitted
message, i.e.,
\begin{eqnarray}
\label{posterior}
\mbox{Pr}\{\hat{m}=m_0|\by\}=
\frac{\exp\{ng(\hat{P}_{\bx_{m_0}\by})\}}{\sum_{m=0}^{M-1}
\exp\{ng(\hat{P}_{\bx_m\by})\}},
\end{eqnarray}
where $\hat{P}_{\bx_m\by}$ is the empirical joint distribution induced
by $(\bx_m,\by)$ and $g(\cdot)$ is an arbitrary continuous function.
For example,
\begin{equation}
\label{OLD}
g(\hat{P}_{\bx_m\by})=\sum_{x,y}\hat{P}_{\bx_m\by}(x,y)\ln W(y|x)
\end{equation}
corresponds to the ordinary likelihood decoder, where (\ref{posterior}) is
the correct underlying posterior probability of message $m_0$. This framework also
allows additional important stochastic decoders, where $g$ corresponds to a
mismatched metric $\tilde{W}$ or to the empirical mutual information, as
discussed in \cite{p187}.
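The GLD posterior in (\ref{posterior}) can be sketched numerically: compute the empirical joint distribution of each codeword with $\by$, evaluate $g$, and normalize $\exp\{ng(\cdot)\}$ over the codebook. The channel, codebook, and received vector below are assumed toy values. Note that with the matched metric (\ref{OLD}), $\exp\{ng(\hat{P}_{\bx_m\by})\}$ equals $W(\by|\bx_m)$ exactly, so the output is the true posterior.

```python
import math
from collections import Counter

def empirical_joint(x, y):
    """Empirical joint distribution induced by the pair (x, y)."""
    n = len(x)
    return {xy: k / n for xy, k in Counter(zip(x, y)).items()}

def g_likelihood(P_hat, W):
    # Matched metric: sum_{x,y} P_hat(x,y) ln W(y|x).
    return sum(p * math.log(W[a][b]) for (a, b), p in P_hat.items())

def gld_posterior(codebook, y, W):
    """Probability that the GLD selects each message, given y."""
    n = len(y)
    scores = [math.exp(n * g_likelihood(empirical_joint(xm, y), W))
              for xm in codebook]
    Z = sum(scores)
    return [s / Z for s in scores]

W = {0: {0: 0.9, 1: 0.1}, 1: {0: 0.1, 1: 0.9}}  # toy BSC
codebook = [[0, 0, 0, 0], [1, 1, 1, 1]]
post = gld_posterior(codebook, [0, 0, 0, 1], W)
```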
As mentioned above, in Section V of \cite{p187}, an expurgated
error exponent is derived. Specifically,
letting $Q_{XY}$ denote a generic joint distribution over ${\cal X}\times{\cal Y}$,
and letting $I_Q(X;Y)$ denote the mutual information
induced by $Q_{XY}$,
we define the following.
Let
\begin{equation}
\alpha(R,Q_Y)=\sup_{\{Q_{X|Y}:~I_Q(X;Y)\le R\}}[g(Q_{XY})-I_Q(X;Y)]+R,
\end{equation}
and
\begin{eqnarray}
\Gamma(Q_{XX'},R)
&=&\inf_{Q_{Y|XX'}}\left\{D(Q_{Y|X}\|W|Q_X)+
I_Q(X';Y|X)+\right.\nonumber\\
& &\left.[\max\{g(Q_{XY}),\alpha(R,Q_Y)\}-
g(Q_{X'Y})]_+\right\}\\
&\equiv&\inf_{Q_{Y|XX'}}\left\{\bE_Q\log[1/W(Y|X)]-H(Y|X,X')+\right.\nonumber\\
& &\left.[\max\{g(Q_{XY}),\alpha(R,Q_Y)\}-g(Q_{X'Y})]_+\right\},
\end{eqnarray}
where $D(Q_{Y|X}\|W|Q_X)$ is defined in the usual manner (see also
\cite{p187}).
The main result in \cite[Section V]{p187} is the following:
\begin{theorem}
There exists a sequence of constant composition codes,
$\{{\cal C}_n,~n=1,2,\ldots\}$,
with composition $Q_X$, such that
\begin{equation}
\liminf_{n\to\infty}\left[-\frac{\log P_{\mbox{\tiny
e}|m}({\cal C}_n)}{n}\right]\ge E_{\mbox{\tiny ex}}^{\mbox{\tiny gld}}(R,Q_X),
\end{equation}
where
\begin{equation}
\label{ckmstyle}
E_{\mbox{\tiny ex}}^{\mbox{\tiny gld}}(R,Q_X)
=\inf[\Gamma(Q_{XX'},R)+I_Q(X;X')]-R,
\end{equation}
where the infimum is over all joint distributions $\{Q_{XX'}\}$ such that
$I_Q(X;X')\le R$ and $Q_{X'}=Q_X$.
\end{theorem}
The proof in \cite{p187} contains two main steps of expurgation. In the first step, we
confine attention to the subset of constant composition codes $\{{\cal C}_n\}$ with the property
\begin{equation}
\label{good}
\sum_{m'\ne m}\exp\{ng(\hat{P}_{\bx_{m'}\by})\}\ge
\exp\{n\alpha(R-\epsilon,\hat{P}_{\by})\}~~~\forall~m,\by
\end{equation}
where $\epsilon > 0$ is arbitrarily small. It is proved in \cite[Appendix
B]{p187} that the vast majority of
constant composition codes satisfy (\ref{good}) for large $n$. In the second expurgation
step (see \cite[Appendix C]{p187}), at most $M\cdot
(n+1)^{|{\cal X}|^2}e^{-n\epsilon/2}$ ``bad''
codewords are eliminated from the codebook in order to guarantee the
desired maximum error probability performance for the remaining part of the
code.
The gap in the proof of \cite[Theorem 2]{p187}
is in the following point: after the second expurgation step, it is no
longer guaranteed that eq.\ (\ref{good}) still holds for every $m$ and $\by$,
since the summation on the left--hand side of (\ref{good}) is now
taken over a smaller number of codewords.
Fortunately, Theorem 2 of \cite{p187} is still correct
(as will be proved in the next section by a completely different route), at least when
$g(Q_{XY})$ is an affine functional of $Q_{XY}$, which is the case for the
ordinary matched/mismatched stochastic likelihood decoder (\ref{OLD}), with or
without a ``temperature'' parameter (see the discussion around eqs.\ (5)--(7) of
\cite{p187}). This affinity
assumption is used only at the last step of our derivation below. Thus, when
$g(Q_{XY})$ is not affine, one merely backs off from the last step of the
derivation and takes the second-to-last expression
as the formula of the expurgated exponent.
\section{Corrected Proof of \cite[Theorem 2]{p187}}
Assuming that message $m$ was
transmitted, the probability of error of the GLD, for a given code ${\cal C}_n$, is given by
\begin{equation}
P_{\mbox{\tiny e}|m}({\cal C}_n)
=\sum_{m^\prime\ne m}\sum_{\by}W(\by|\bx_m)\frac{
\exp\{ng(\hat{P}_{\bx_{m^\prime}\by})\}}
{\exp\{ng(\hat{P}_{\bx_m\by})\}+\sum_{m^\prime\ne m}
\exp\{ng(\hat{P}_{\bx_{m^\prime}\by})\}}
\end{equation}
and so, for $\rho\ge 1$,
\begin{equation}
[P_{\mbox{\tiny e}|m}({\cal C}_n)]^{1/\rho}
\le\sum_{m^\prime\ne m}\left[\sum_{\by}W(\by|\bx_m)
\frac{\exp\{ng(\hat{P}_{\bx_{m^\prime}\by})\}}
{\exp\{ng(\hat{P}_{\bx_m\by})\}+\sum_{m^\prime\ne m}
\exp\{ng(\hat{P}_{\bx_{m^\prime}\by})\}}\right]^{1/\rho},
\end{equation}
where we have used the inequality $(\sum_i a_i)^s\le\sum_i a_i^s$, which holds
whenever $s\le 1$ and $a_i\ge 0$ for all $i$ \cite[Exercise
4.15(f)]{Gallager68}.
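The inequality $(\sum_i a_i)^s\le\sum_i a_i^s$ for $s\le 1$ is easy to spot-check numerically; the following is just an illustrative sanity check over random instances, not a proof.

```python
import random

# Spot-check subadditivity of x -> x^s for s in (0, 1] on random
# nonnegative vectors (Gallager's Exercise 4.15(f)).
rng = random.Random(1)
for _ in range(1000):
    a = [rng.random() * 10 for _ in range(rng.randint(1, 8))]
    s = rng.random()  # s in (0, 1)
    assert sum(a) ** s <= sum(ai ** s for ai in a) + 1e-9
```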
Let ${\cal G}_\epsilon={\cal B}_\epsilon^c$ be defined as in \cite{p187},
that is, the set of codes for which (\ref{good}) holds, and
consider the fact (proved in Appendix B therein), that $\mbox{Pr}\{{\cal B}_\epsilon\}\le
\exp(-e^{n\epsilon}+n\epsilon+1)$.
We now
take the expectation over the randomness of the (incorrect part of the)
codebook,
${\cal C}_n^m={\cal C}_n\setminus\{\bx_m\}$ (where all wrong codewords are drawn from
a given type $Q_X$), except $\bx_m$, which is kept fixed
for now. When dealing with the pairwise error probability from
$m$ to $m'$, we do this in two steps: we first average over all codewords
except $\bx_m$ and $\bx_{m'}$, and then average over the randomness of $\bx_{m'}$.
\begin{eqnarray}
& & \bE\left\{[P_{\mbox{\tiny e}|m}({\cal C}_n)]^{1/\rho}\bigg|\bx_m\right\}\nonumber\\
&\le&\sum_{m^\prime\ne m}\sum_{{\cal C}_n^m}P({\cal C}_n^m)\left[\sum_{\by}W(\by|\bx_m)
\frac{\exp\{ng(\hat{P}_{\bx_{m^\prime}\by})\}}
{\exp\{ng(\hat{P}_{\bx_m\by})\}+\sum_{m^\prime\ne m}
\exp\{ng(\hat{P}_{\bx_{m^\prime}\by})\}}\right]^{1/\rho}\nonumber\\
&=&\sum_{m^\prime\ne m}\sum_{{\cal C}_n^m\in{\cal G}_\epsilon}
P({\cal C}_n^m)\left[\sum_{\by}W(\by|\bx_m)
\frac{\exp\{ng(\hat{P}_{\bx_{m^\prime}\by})\}}
{\exp\{ng(\hat{P}_{\bx_m\by})\}+\sum_{m^\prime\ne m}
\exp\{ng(\hat{P}_{\bx_{m^\prime}\by})\}}\right]^{1/\rho}+\nonumber\\
& &\sum_{m^\prime\ne m}\sum_{{\cal C}_n^m\in{\cal B}_\epsilon}P({\cal C}_n^m)
\left[\sum_{\by}W(\by|\bx_m)
\frac{\exp\{ng(\hat{P}_{\bx_{m^\prime}\by})\}}
{\exp\{ng(\hat{P}_{\bx_m\by})\}+\sum_{m^\prime\ne m}
\exp\{ng(\hat{P}_{\bx_{m^\prime}\by})\}}\right]^{1/\rho}\nonumber\\
&\le&\sum_{m^\prime\ne m}\sum_{{\cal C}_n^m\in{\cal G}_\epsilon}
P({\cal C}_n^m)\left[\sum_{\by}W(\by|\bx_m)
\cdot\min\left\{1,\frac{\exp\{ng(\hat{P}_{\bx_{m^\prime}\by})\}}
{\exp\{ng(\hat{P}_{\bx_m\by})\}+\exp\{n\alpha(R-\epsilon,\hat{P}_{\by})\}
}\right\}\right]^{1/\rho}+\nonumber\\
& &\sum_{m^\prime\ne m}\sum_{{\cal C}_n^m\in{\cal B}_\epsilon}P({\cal C}_n^m)\cdot 1^{1/\rho}
\nonumber\\
&\le&\sum_{m^\prime\ne m}\sum_{{\cal C}_n^m}P({\cal C}_n^m)\left[\sum_{\by}W(\by|\bx_m)
\cdot\min\left\{1,\frac{\exp\{ng(\hat{P}_{\bx_{m^\prime}\by})\}}
{\exp\{ng(\hat{P}_{\bx_m\by})\}+\exp\{n\alpha(R-\epsilon,\hat{P}_{\by})\}
}\right\}\right]^{1/\rho}+\nonumber\\
& &e^{nR}\cdot\exp(-e^{n\epsilon}+n\epsilon+1)
\nonumber\\
&\lexe&\sum_{m^\prime\ne m}\bE\left(\left[\sum_{\by}W(\by|\bx_m)
\cdot\min\left\{1,\frac{\exp\{ng(\hat{P}_{\bx_{m^\prime}\by})\}}
{\exp\{ng(\hat{P}_{\bx_m\by})\}+\exp\{n\alpha(R-\epsilon,\hat{P}_{\by})\}
}\right\}\right]^{1/\rho}\bigg|\bx_m\right)\nonumber\\
&\exe&\sum_{m^\prime\ne
m}\bE\left\{\exp[-n\Gamma(\hat{P}_{\bx_m\bx_{m'}})/\rho]\bigg|\bx_m\right\}\nonumber\\
&\exe&\sum_{Q_{X'|X}}\bE\{N_m(Q_{X'|X})|\bx_m\}\exp\{-n\Gamma(Q_{XX'})/\rho\}\nonumber\\
&\exe&\max_{Q_{X'|X}}\exp\{n[R-I_Q(X;X')]\}\cdot
\exp\{-n\Gamma(Q_{XX'})/\rho\}\nonumber\\
&=&\exp\left\{-n\min_{Q_{X'|X}}[\Gamma(Q_{XX'})/\rho+I_Q(X;X')-R]\right\},
\end{eqnarray}
where $I_Q(X;X')$ is the mutual information induced by $Q_{XX'}$ and
$N_m(Q_{X'|X})=|{\cal T}(Q_{X'|X}|\bx_m)\cap{\cal C}_n^m|$, ${\cal T}(Q_{X'|X}|\bx_m)$
being the conditional type class pertaining to $Q_{X'|X}$ given $\bx_m$.
Since this bound is
independent of $\bx_m$, it also holds for the unconditional expectation,
$\bE[P_{e|m}({\cal C}_n)]^{1/\rho}$. Now, for a given code ${\cal C}_n$,
index the messages $\{m\}$ in
decreasing order of $\{P_{e|m}({\cal C}_n)\}$. Then,
\begin{equation}
\frac{1}{M}\sum_{m=1}^M[P_{e|m}({\cal C}_n)]^{1/\rho}\ge
\frac{1}{M}\sum_{m=1}^{M/2}[P_{e|m}({\cal C}_n)]^{1/\rho}\ge
\frac{1}{M}\cdot\frac{M}{2}[P_{e|M/2}({\cal C}_n)]^{1/\rho}=
\frac{1}{2}\cdot[\max_mP_{e|m}({\cal C}_n')]^{1/\rho},
\end{equation}
where ${\cal C}_n'$ is the good half of ${\cal C}_n$. Thus,
\begin{eqnarray}
\bE\left\{[\max_m P_{e|m}({\cal C}_n')]^{1/\rho}\right\}
&\le& 2
\bE\left\{\frac{1}{M}\sum_{m=1}^M
P_{e|m}({\cal C}_n)^{1/\rho}\right\}\nonumber\\
&\lexe&\exp\left\{-n\min_{Q_{X'|X}}[\Gamma(Q_{XX'})/\rho+I_Q(X;X')-R]\right\}
\end{eqnarray}
which means that there exists a code of size $M/2$ with
\begin{equation}
[\max_mP_{e|m}({\cal C}_n')]^{1/\rho}\le
\exp\left\{-n\min_{Q_{X'|X}}[\Gamma(Q_{XX'})/\rho+I_Q(X;X')-R]\right\},
\end{equation}
or equivalently,
\begin{equation}
\max_m P_{e|m}({\cal C}_n')\le
\exp\left(-n\min_{Q_{X'|X}}\{\Gamma(Q_{XX'})+\rho[I_Q(X;X')-R]\}\right),
\end{equation}
and since this holds for every $\rho\ge 1$, we have
\begin{equation}
\max_m P_{e|m}({\cal C}_n')\le
\exp\left(-n\sup_{\rho\ge 1}\min_{Q_{X'|X}}\{\Gamma(Q_{XX'})+\rho[I_Q(X;X')-R]\}\right).
\end{equation}
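The ordering argument above is a standard expurgation step: after sorting, the worst codeword of the better half is bounded via the average of $P_{e|m}^{1/\rho}$. A small Python demonstration (with assumed random error probabilities) makes this concrete.

```python
import random

def good_half_bound(p_err, rho=2.0):
    """After sorting P_{e|m} in decreasing order, the worst codeword of
    the kept (better) half satisfies
        max_m P_{e|m}(C_n') <= ( (2/M) * sum_m P_{e|m}^{1/rho} )^rho."""
    M = len(p_err)
    srt = sorted(p_err, reverse=True)
    good_half = srt[M // 2:]                    # the M/2 best codewords
    avg_frac = sum(p ** (1 / rho) for p in p_err) / M
    return max(good_half), (2 * avg_frac) ** rho

rng = random.Random(0)
p = [rng.random() * 1e-3 for _ in range(100)]   # assumed toy values
worst_kept, bound = good_half_bound(p)
assert worst_kept <= bound
```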
Now, consider the exponent,
\begin{eqnarray}
\label{maxmin}
E_{\mbox{\tiny ex}}(R,Q_X)&\dfn& \sup_{\rho\ge
1}\min_{Q_{X'|X}}\{\Gamma(Q_{XX'})+\rho[I_Q(X;X')-R]\}\\
&=&\sup_{\rho\ge
0}\min_{Q_{XX'}}\{\Gamma(Q_{XX'})+I_Q(X;X')-R+\rho[I_Q(X;X')-R]\},
\end{eqnarray}
where the marginals of $Q_{XX'}$ are constrained to the given fixed
composition, $Q_X$.
Using the definitions in \cite{p187},
\begin{eqnarray}
\Gamma(Q_{XX'})+I_Q(X;X')&=&\inf_{Q_{Y|XX'}}\left\{-\bE_Q\ln
W(Y|X)-H(Y|X,X')+\right.\nonumber\\
& &\left.I_Q(X;X')+[\max\{g(Q_{XY}),\alpha(R,Q_Y)\}-g(Q_{X'Y})]_+\right\}\nonumber\\
&=&\inf_{Q_{Y|XX'}}\left\{-\bE_Q\ln[W(Y|X)Q(X)Q(X')]-H_Q(X,X',Y)+\right.\nonumber\\
& &\left.[\max\{g(Q_{XY}),\alpha(R,Q_Y)\}-g(Q_{X'Y})]_+\right\},
\end{eqnarray}
thus,
\begin{eqnarray}
& &\min_{Q_{XX'}}\{\Gamma(Q_{XX'})+I_Q(X;X')-R+\rho[I_Q(X;X')-R]\}\nonumber\\
&=&\min_{Q_{XX'Y}}\left\{-\bE_Q\ln[W(Y|X)Q(X)Q(X')]-H_Q(X,X',Y)+\right.\nonumber\\
& &\left.\rho[I_Q(X;X')-R]+
[\max\{g(Q_{XY}),\alpha(R,Q_Y)\}-g(Q_{X'Y})]_+\right\}.
\end{eqnarray}
Now, the first term on the right--most side
is linear (and hence convex) in $Q_{XX'Y}$ since $Q_X$ is fixed,
the second term is convex, and the third term is convex for a given $Q_X$. As
for the fourth term, it is convex at least in the case where $g$ is affine in $Q$
(e.g., matched/mismatched likelihood metric with/without a temperature
parameter) because the function $f(x)=[x]_+$ is monotonic and convex and we
argue that $\alpha(R,Q_Y)$ is also convex since it is given by the supremum
over a family of convex functions of $Q_Y$ (as $g$ is linear and $-I_Q(X;X')$
is convex in $Q_Y$ for a given $Q_{X|Y}$). The maximum between two convex
functions is convex.
Since the objective is affine (and hence concave) in $\rho$,
we can interchange the minimization
and the maximization to obtain,
\begin{eqnarray}
E_{\mbox{\tiny
ex}}(R,Q_X)&=&\inf_{Q_{XX'}}\left\{\Gamma(Q_{XX'})+I_Q(X;X')-R+\sup_{\rho\ge
0}\rho[I_Q(X;X')-R]\right\}\nonumber\\
&=&\inf_{\{Q_{XX'}:~I_Q(X;X')\le R\}}[\Gamma(Q_{XX'})+I_Q(X;X')-R]\nonumber\\
&=&E_{\mbox{\tiny ex}}^{\mbox{\tiny gld}}(R,Q_X).
\end{eqnarray}
If the supremum and the
minimum cannot be interchanged, then, of course, the formula of the expurgated exponent
remains as in (\ref{maxmin}).
\section*{Acknowledgment}
I would like to thank Dr.\ Nir Weinberger for drawing my attention to the gap
in the proof of Theorem 2 in \cite{p187}.
\section{Introduction}
\label{section:intro}
Methods from algebraic topology have only recently emerged in the machine learning community, most prominently under the term \emph{topological data analysis (TDA)} \cite{Carlsson09a}. Since TDA enables us to infer relevant topological and geometrical information from data, it can offer a novel and potentially beneficial perspective on various machine learning problems. Two compelling benefits of TDA are (1) its versatility, i.e., we are not restricted to any particular kind of data (such as images, sensor
measurements, time-series, graphs, etc.) and (2) its robustness to noise.
Several works have demonstrated that TDA can be
beneficial in a diverse set of problems, such as studying the
manifold of natural image patches \cite{Carlsson12a}, analyzing activity
patterns of the visual cortex \cite{Singh08a}, classification of 3D
surface meshes \cite{Reininghaus14a,Li14a}, clustering \cite{Chazal13a}, or
recognition of 2D object shapes \cite{Turner2013}.
Currently, the most widely-used tool from TDA is \emph{persistent homology}
\cite{Edelsbrunner02a, Edelsbrunner2010}. Essentially\footnote{We will make
these concepts more concrete in Sec.~\ref{section:background}.}, persistent homology
allows us to track topological changes as we analyze data at multiple
``scales''. As the scale changes, topological features (such as connected components, holes,
etc.) appear and disappear. Persistent homology associates a \emph{lifespan}
to these features in the form of a \emph{birth} and a \emph{death} time.
The collection of (birth, death) tuples forms a multiset that can be visualized as a persistence diagram or a barcode, also referred to as a
\emph{topological signature} of the data. However, leveraging these signatures for learning
purposes poses considerable challenges, mostly due to their unusual structure as a
multiset. While there exist suitable metrics to compare
signatures (e.g., the Wasserstein metric), they are highly
impractical for learning, as they require solving optimal matching
problems.
\noindent
\textbf{Related work.} In order to deal with these issues, several strategies
have been proposed. In \cite{Adcock13a} for
instance, Adcock et al. use invariant theory to ``coordinatize'' the space of
barcodes. This makes it possible to map barcodes to fixed-size vectors, which
can then be fed to standard machine learning techniques, such as support vector machines (SVMs).
Alternatively, Adams et al. \cite{Adams17a} map barcodes to so-called
\emph{persistence images} which, upon discretization, can also be
interpreted as vectors and used with standard learning techniques.
Along another line of research, Bubenik \cite{Bubenik15a} proposes a mapping
of barcodes into a Banach space. This has been shown to be particularly
viable in a statistical context (see, e.g., \cite{Chazal15a}). The mapping
outputs a representation referred to as a \emph{persistence landscape}.
Interestingly, under a specific choice of parameters, barcodes are mapped
into $L_2(\mathbb{R}^2)$ and the inner-product in that space can be used to
construct a valid kernel function. Similar kernel-based techniques
have also recently been studied by Reininghaus et al. \cite{Reininghaus14a}, Kwitt et al. \cite{Kwitt15a} and Kusano et al. \cite{Kusano16a}.
While all previously mentioned approaches retain certain stability properties of the original representation with respect to common metrics in TDA (such as the Wasserstein or Bottleneck distances), they also share one common \emph{drawback}: the mapping of topological signatures to a representation that is compatible with existing learning techniques is \emph{pre-defined}. Consequently, it is fixed and therefore \emph{agnostic} to any specific learning task. This is clearly suboptimal, as
the eminent success of deep neural networks (e.g., \cite{Krizhevsky12a,He16a})
has shown that \emph{learning} representations is
a preferable approach. Furthermore, kernel-based techniques
\cite{Reininghaus14a,Kwitt15a,Kusano16a}, for instance, additionally suffer from scalability
issues, as training typically scales poorly with the number of samples (e.g.,
roughly cubically in the case of kernel SVMs).
In the spirit of
end-to-end training, we therefore aim for an approach that allows us to
learn a \emph{task-optimal} representation of topological signatures.
We additionally remark that, e.g., Qi et al. \cite{Qi16a} or Ravanbakhsh et al. \cite{Ravanbakhsh17a}
have proposed architectures that can handle \emph{sets}, but only of fixed size.
In our context, this is impractical, as the capability of handling sets of
varying cardinality is a prerequisite for using persistent homology in a machine
learning setting.
\vskip1ex
\noindent
\textbf{Contribution}. To realize this idea, we advocate a novel
input layer for deep neural networks that takes a topological signature
(in our case, a persistence diagram), and computes
a parametrized projection that can be learned during network training. Specifically,
this layer is designed such that its output is stable with respect to the 1-Wasserstein
distance (similar to \cite{Reininghaus14a} or \cite{Adams17a}).
To demonstrate the versatility of this approach, we present
experiments on 2D object shape classification and the classification of
social network graphs. On the latter, we improve the state-of-the-art by
a large margin, clearly demonstrating the power of combining TDA with
deep learning in this context.
\begin{figure}
\begin{center}
\includegraphics[width=1.0\textwidth]{diagrams}
\end{center}
\caption{Illustration of the proposed network \emph{input layer} for topological signatures.
Each signature, in the form of a persistence diagram $\mathcal{D} \in \mathbb{D}$ (\emph{left}), is projected w.r.t. a collection of \emph{structure elements}. The layer's learnable parameters $\boldsymbol{\theta}$ are the locations $\boldsymbol{\mu}_i$ and the scales $\boldsymbol{\sigma}_i$ of these elements; $\nu \in \mathbb{R}^+$ is set a priori and meant to discount the impact of points
with low persistence (and, in many cases, of low discriminative power). The layer output $\mathbf{y}$ is a concatenation of the projections. In this illustration, $N=2$ and hence $\mathbf{y} = (y_1,y_2)^\top$.\label{fig:idea}}
\end{figure}
\vspace{-10pt}
\section{Background}
\label{section:background}
For space reasons, we only provide a brief overview of the concepts that are
relevant to this work and refer the reader to \cite{Hatcher2002} or
\cite{Edelsbrunner2010} for further details.
{\bf Homology. }
The key concept of homology theory is to study the properties of some object $X$ by means of (commutative) algebra. In particular, we assign to $X$ a sequence of modules $C_0,\ C_1, \dots$ which
are connected by homomorphisms $\partial_n: C_n \rightarrow C_{n-1}$ such that
$\im \partial_{n+1} \subseteq \ker \partial_{n}$. A structure of this form is called a \emph{chain complex} and by studying its homology groups
$H_n = \ker \partial_{n} / \im \partial_{n+1}$ we can derive properties of $X$.
\vskip0.5ex
A prominent example of a homology theory is \emph{simplicial homology}.
It is the homology theory used throughout this work, and we now make
the ideas presented above concrete.
Let $K$ be a simplicial complex and $K_n$ its $n$-skeleton. Then we set $C_n(K)$ as the
vector space generated (freely) by $K_n$ over $\mathbb{Z}/2\mathbb{Z}$\footnote{Simplicial homology is not specific
to $\mathbb{Z}/2\mathbb{Z}$, but it's a typical choice, since it allows us to interpret $n$-chains as sets
of $n$-simplices.}.
The connecting homomorphisms $\partial_n:C_n(K) \rightarrow C_{n-1}(K)$
are called boundary operators. For a simplex
$\sigma = [x_0, \dots, x_n] \in K_n$, we define them as
$\partial_n(\sigma) = \sum_{i=0}^n [x_0, \dots, x_{i-1}, x_{i+1}, \dots, x_n]$
and linearly extend this to $C_n(K)$, i.e., $\partial_n(\sum \sigma_i) = \sum \partial_n(\sigma_i)$.
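The boundary operators and the resulting homology groups are computable by linear algebra over $\mathbb{Z}/2\mathbb{Z}$. The following Python sketch (an illustration, not code from the paper) builds the columns of $\partial_1$ as bitmasks for a hollow triangle and recovers its Betti numbers via Gaussian elimination over GF(2).

```python
from itertools import combinations

def boundary_columns(simplices, faces):
    """Columns of the boundary operator over Z/2, one bitmask per
    n-simplex; bit i is set iff faces[i] is a codimension-1 face."""
    idx = {f: i for i, f in enumerate(faces)}
    return [sum(1 << idx[f] for f in combinations(s, len(s) - 1))
            for s in simplices]

def rank_gf2(columns):
    """Rank over GF(2) via the classic XOR-basis elimination."""
    basis = {}                       # leading bit -> reduced column
    for c in columns:
        while c:
            p = c.bit_length() - 1
            if p in basis:
                c ^= basis[p]
            else:
                basis[p] = c
                break
    return len(basis)

# Hollow triangle: three vertices, three edges, no 2-simplex.
verts = [(0,), (1,), (2,)]
edges = [(0, 1), (0, 2), (1, 2)]
d1 = boundary_columns(edges, verts)
betti0 = len(verts) - rank_gf2(d1)   # ker d0 = C0, minus im d1
betti1 = len(edges) - rank_gf2(d1)   # no 2-simplices, so rank d2 = 0
```

As expected for a topological circle, this yields one connected component and one hole.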
{\bf Persistent homology.}
Let $K$ be a simplicial complex and $(K^i)_{i=0}^{m}$ a sequence of simplicial complexes such that $\emptyset=K^0 \subseteq K^1 \subseteq \dots \subseteq K^m = K$. Then, $(K^i)_{i=0}^{m}$ is called a \emph{filtration} of $K$. If we use the extra information provided by the filtration of $K$, we obtain the following
sequence of chain complexes (\emph{left}),
\begin{center}
\includegraphics[width=\textwidth]{sandbox}
\end{center}
where $C_n^i = C_n(K_n^i)$
and $\iota$ denotes the inclusion.
This then leads to the concept of \emph{persistent homology groups}, defined by
\[
H_n^{i,j} = \ker \partial_{n}^i / (\im \partial_{n+1}^j \cap \ker \partial_n^i) \quad\text{for}\quad i \leq j\enspace.
\]
The ranks, $\beta_n^{i,j} = \rank H_n^{i,j}$, of these homology groups (i.e., the \emph{$n$-th
persistent Betti numbers}), capture the number of homological features of dimensionality
$n$ (e.g., connected components for $n=0$, holes for $n=1$, etc.) that persist from $i$ to (at least) $j$.
In fact, according to \cite[Fundamental Lemma of Persistent Homology]{Edelsbrunner2010},
the quantities
\begin{equation}
\mu_n^{i,j} = (\beta_n^{i, j-1} - \beta_n^{i,j}) - (\beta_n^{i-1, j-1} - \beta_n^{i-1, j})
\quad\text{for}\quad i < j
\label{eqn:munij}
\end{equation}
encode all the information about the persistent Betti numbers of dimension $n$.
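Given a table of persistent Betti numbers, the multiplicities above are a direct inclusion-exclusion computation. The sketch below (an illustration with an assumed toy Betti table) recovers them for a filtration in which a second connected component appears at step 2 and is merged at step 3.

```python
def multiplicities(beta, m):
    """Recover the diagram multiplicities mu^{i,j} from a table of
    persistent Betti numbers beta[(i, j)] (absent entries are zero)."""
    b = lambda i, j: beta.get((i, j), 0)
    mu = {}
    for i in range(1, m + 1):
        for j in range(i + 1, m + 1):
            k = (b(i, j - 1) - b(i, j)) - (b(i - 1, j - 1) - b(i - 1, j))
            if k > 0:
                mu[(i, j)] = k
    return mu

# Toy 0-dimensional example: a vertex enters at step 1, a second vertex
# at step 2, and an edge joining them at step 3.
beta = {(1, 1): 1, (1, 2): 1, (1, 3): 1,
        (2, 2): 2, (2, 3): 1, (3, 3): 1}
mu = multiplicities(beta, 3)   # the component born at 2 dies at 3
```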
{\bf Topological signatures}.
A typical way to obtain a filtration of $K$ is to consider sublevel sets
of a function $f:C_0(K)\rightarrow \mathbb{R}$. This function can be easily lifted to higher-dimensional
chain groups of $K$ by
\[
f([v_0, \dots, v_n]) = \max \{f([v_i]): 0 \leq i \leq n\}\enspace.
\]
Given $m = |f(C_0(K))|$, we obtain $(K^i)_{i=0}^m$ by setting $K^0 =\emptyset$ and
$K^i = f^{-1}((-\infty, a_i])$ for $1 \leq i \leq m$, where $a_1 < \cdots < a_m$
is the sorted sequence of values of $f(C_0(K))$.
If we construct a multiset such that, for $i<j$, the
point $(a_i, a_j)$ is inserted with multiplicity $\mu_n^{i,j}$, we effectively
encode the persistent homology of dimension $n$ w.r.t. the sublevel set
filtration induced by $f$.
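In dimension $0$, the sublevel-set construction above reduces to a union-find computation: components are born at vertex values and, by the elder rule, the younger of two merging components dies at the (max-lifted) edge value. The following Python sketch, with assumed toy inputs, illustrates this; it is not the paper's implementation.

```python
def persistence_0d(vertex_vals, edges):
    """0-dimensional sublevel-set persistence of f on a graph: process
    edges by lifted value max(f(u), f(v)); each merge kills the younger
    component (elder rule). The essential class is omitted."""
    parent = list(range(len(vertex_vals)))
    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]   # path halving
            v = parent[v]
        return v
    pairs = []
    for u, v in sorted(edges,
                       key=lambda e: max(vertex_vals[e[0]], vertex_vals[e[1]])):
        ru, rv = find(u), find(v)
        if ru == rv:
            continue
        if vertex_vals[ru] < vertex_vals[rv]:
            ru, rv = rv, ru                 # ru is the younger component
        pairs.append((vertex_vals[ru], max(vertex_vals[u], vertex_vals[v])))
        parent[ru] = rv
    return pairs

vals = [0.0, 1.0, 0.5]        # assumed f on three vertices
edges = [(0, 1), (1, 2)]      # a path graph
dgm = persistence_0d(vals, edges)
```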
Upon adding diagonal points with infinite multiplicity, we obtain the following structure:
\vskip1ex
\begin{defn}[Persistence diagram]
Let $\Delta = \{x \in \mathbb{R}^2_{\Delta}: \textup{\text{mult}}(x) = \infty\}$ be the multiset of the
diagonal $\mathbb{R}^2_{\Delta} = \{(x_0, x_1) \in \mathbb{R}^2: x_0 = x_1\}$, where $\textup{\text{mult}}$ denotes the multiplicity function and let
$\mathbb{R}^2_{\star} = \{(x_0, x_1) \in \mathbb{R}^2: x_1 > x_0\}$.
A persistence diagram, $\mathcal{D}$, is a multiset of the form
\[
\mathcal{D} = \{x: x \in \mathbb{R}^2_{\star}\} \cup \Delta\enspace.
\]
We denote by $\mathbb{D}$ the set of all persistence diagrams $\mathcal{D}$ such
that $|\mathcal{D} \setminus \Delta|<\infty\enspace.$
\label{defn:pd}
\end{defn}
For a given complex $K$ of dimension $n_{\max}$ and a function $f$ (of the discussed form),
we can interpret persistent homology as a mapping $(K, f) \mapsto (\mathcal{D}_0, \dots, \mathcal{D}_{n_{\max}-1})$,
where $\mathcal{D}_i$ is the diagram of dimension $i$.
We can additionally add a metric structure to the space of persistence diagrams by
introducing the notion of distances.
\begin{defn}[Bottleneck, Wasserstein distance]
For two persistence diagrams $\mathcal{D}$ and $\mathcal{E}$,
we define their Bottleneck ($\textup{\text{w}}_{\infty}$) and Wasserstein ($\textup{\text{w}}_p^q$) distances by
\begin{equation}
\textup{\text{w}}_{\infty}(\mathcal{D}, \mathcal{E}) = \inf\limits_{\eta} \sup\limits_{\mathbf{x} \in \mathcal{D}} ||\mathbf{x} - \eta(\mathbf{x})||_{\infty}
~~\text{and}~~
\textup{\text{w}}_p^q(\mathcal{D}, \mathcal{E}) = \inf\limits_{\eta} \left(\ \sum\limits_{\mathbf{x} \in \mathcal{D}} ||\mathbf{x} - \eta(\mathbf{x})||_q^p\enspace\right)^{\frac{1}{p}},
\label{eqn:wassersteinbottleneck}
\end{equation}
where $p, q \in \mathbb{N}$ and the infimum is taken over all bijections $\eta:\mathcal{D}\rightarrow\mathcal{E}$.
\end{defn}
Essentially, this facilitates studying stability/continuity properties of topological
signatures
w.r.t. metrics in the filtration or complex space; we refer the
reader to \cite{Cohen-Steiner2007},\cite{Cohen-Steiner2010}, \cite{Chazal2009} for a selection of important stability results.
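For tiny diagrams, the Wasserstein distance can be evaluated by brute force: augment each diagram with the diagonal projections of the other's off-diagonal points (diagonal-to-diagonal pairs cost nothing) and minimize over all bijections. This Python sketch is illustrative only and clearly infeasible beyond a handful of points.

```python
from itertools import permutations

def wasserstein_1(D, E, q=2):
    """Brute-force w_1^q between two small diagrams given as lists of
    off-diagonal (birth, death) points."""
    proj = lambda p: ((p[0] + p[1]) / 2,) * 2   # closest diagonal point
    A = list(D) + [proj(p) for p in E]
    B = list(E) + [proj(p) for p in D]
    def dist(a, b):
        if a[0] == a[1] and b[0] == b[1]:
            return 0.0                          # diagonal-to-diagonal is free
        return (abs(a[0] - b[0]) ** q + abs(a[1] - b[1]) ** q) ** (1 / q)
    return min(sum(dist(a, b) for a, b in zip(A, perm))
               for perm in permutations(B))

D = [(0.0, 1.0)]
E = [(0.0, 1.2)]
w = wasserstein_1(D, E)   # direct matching beats sending both to the diagonal
```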
\begin{rem}
By setting $\mu_n^{i, \infty} = \beta_n^{i, m} - \beta_n^{i-1, m}$, we extend Eq.~\eqref{eqn:munij} to features which never disappear, also referred to as essential. This change can be lifted to $\mathbb{D}$ by setting
$\mathbb{R}^2_{\star} = \{(x_0, x_1) \in \mathbb{R} \times ( \mathbb{R}\cup \{\infty\}): x_1 > x_0\}$.
In Sec.~\ref{section:experiments}, we will see that essential features can offer discriminative
information.
\end{rem}
\section{A network layer for topological signatures}
In this section, we introduce the proposed (parametrized) network layer for
topological signatures (in the form of persistence diagrams).
The key idea is to take any $\mathcal{D}$ and define a projection w.r.t. a collection
(of fixed size $N$) of \emph{structure elements}.
In the following, we set
$\mathbb{R}^+:=\{ x \in \mathbb{R}: x > 0\}$ and $\mathbb{R}^+_0:=\{ x \in \mathbb{R}: x \geq 0\}$, resp., and
start by rotating points of $\mathcal{D}$ such that points on $\mathbb{R}^2_{\Delta}$ lie on
the $x$-axis, see Fig.~\ref{fig:idea}. The $y$-axis can then be interpreted as
the \emph{persistence} of features.
Formally, we let $\mathbf{b}_0$ and $\mathbf{b}_1$
be the unit vectors
in directions $(1,1)^\top$ and
$(-1, 1)^\top$ and define a mapping $\rho: \mathbb{R}^2_{\star} \cup \mathbb{R}^2_{\Delta} \rightarrow \mathbb{R} \times \mathbb{R}_0^+$
such that $\mathbf{x} \mapsto (\inprod{\mathbf{x}}{\mathbf{b}_0}, \inprod{\mathbf{x}}{\mathbf{b}_1})$.
This rotates points in $\mathbb{R}^2_{\star} \cup \mathbb{R}^2_{\Delta}$ clockwise by $\pi/4$.
We will later see that this construction is
beneficial for a closer analysis of the layers' properties.
Similar to \cite{Reininghaus14a,Kusano16a},
we choose exponential functions as structure elements, but other choices are possible (see Lemma~\ref{lem:lemma1}). In contrast
to \cite{Reininghaus14a,Kusano16a}, however, our structure elements \emph{are not} at
fixed locations (i.e., one element per point in $\mathcal{D}$); rather, their
locations and scales are learned during training.
\begin{defn}
Let $\boldsymbol{\mu} =(\mu_0, \mu_1)^\top \in \mathbb{R} \times \mathbb{R}^+, \boldsymbol{\sigma} = (\sigma_0, \sigma_1)\in \mathbb{R}^+ \times \mathbb{R}^+$ and $\nu \in \mathbb{R}^+$. We define
\[
s_{\boldsymbol{\mu}, \boldsymbol{\sigma}, \nu}: \mathbb{R} \times \mathbb{R}_0^+ \rightarrow \mathbb{R}
\]
as follows:
\begin{equation}
s_{\boldsymbol{\mu}, \boldsymbol{\sigma}, \nu}\big((x_0,x_1)\big)
=
\left\{
\begin{array}{ll}
e^{-\sigma_0^2(x_0-\mu_0)^2 - \sigma_1^2(x_1 -\mu_1)^2}, & x_1 \in [\nu, \infty) \\
\\
e^{-\sigma_0^2(x_0-\mu_0)^2 - \sigma_1^2(\ln(\frac{x_1}{\nu})\nu + \nu -\mu_1)^2}, & x_1 \in (0, \nu) \\
\\
0, &x_1 = 0
\end{array}
\right.
\label{eqn:structuredef}
\end{equation}
A persistence diagram $\mathcal{D}$ is then projected w.r.t. $s_{\boldsymbol{\mu}, \boldsymbol{\sigma}, \nu}$ via
\begin{equation}
S_{\boldsymbol{\mu}, \boldsymbol{\sigma}, \nu}: \mathbb{D} \rightarrow \mathbb{R}, \quad\quad \mathcal{D} \mapsto \sum\limits_{\mathbf{x} \in \mathcal{D}} s_{\boldsymbol{\mu}, \boldsymbol{\sigma}, \nu} (\rho (\mathbf{x})) \enspace.
\label{eqn:layerdefinition2}
\end{equation}
\end{defn}
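The rotated coordinates and the piecewise definition of the structure element translate directly into code. The Python sketch below is a minimal illustration of the layer's forward pass, with assumed toy values for $\mathcal{D}$, $\boldsymbol{\mu}$, $\boldsymbol{\sigma}$ and $\nu$; it is not the authors' implementation.

```python
import math

SQRT2 = math.sqrt(2.0)

def rho(b, d):
    """Rotate (birth, death) clockwise by pi/4: the diagonal maps to the
    x-axis and the second coordinate becomes the scaled persistence."""
    return ((b + d) / SQRT2, (d - b) / SQRT2)

def s_elem(x0, x1, mu, sigma, nu):
    """Structure element: a Gaussian for x1 >= nu, a log-warped Gaussian
    on (0, nu), and 0 on the diagonal (x1 = 0)."""
    if x1 == 0.0:
        return 0.0
    t = x1 if x1 >= nu else math.log(x1 / nu) * nu + nu
    return math.exp(-sigma[0] ** 2 * (x0 - mu[0]) ** 2
                    - sigma[1] ** 2 * (t - mu[1]) ** 2)

def project(diagram, mu, sigma, nu):
    """Sum the structure element over all rotated diagram points; this is
    one coordinate of the layer output."""
    return sum(s_elem(*rho(b, d), mu, sigma, nu) for (b, d) in diagram)

y = project([(0.0, 1.0), (0.2, 0.3)], mu=(1.0, 1.0),
            sigma=(1.0, 1.0), nu=0.1)
```

Note that the output is well defined for diagrams of any cardinality, which is precisely what makes the construction usable as a network input layer.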
\begin{rem}
Note that
$s_{\boldsymbol{\mu}, \boldsymbol{\sigma}, \nu}$ is continuous in $x_1$ as
$$\lim\limits_{x\rightarrow \nu} x = \lim\limits_{x\rightarrow \nu} \ln\left(\frac{x}{\nu}\right)\nu + \nu \quad \text{and} \quad \lim\limits_{x_1\rightarrow 0} s_{\boldsymbol{\mu}, \boldsymbol{\sigma}, \nu}\big((x_0, x_1)\big)
= 0
= s_{\boldsymbol{\mu}, \boldsymbol{\sigma}, \nu}\big((x_0, 0)\big)\enspace$$
and $e^{(\cdot)}$ is continuous. Further, $s_{\boldsymbol{\mu}, \boldsymbol{\sigma}, \nu}$ is differentiable on $\mathbb{R} \times \mathbb{R}^+$, since
\[
1 = \lim\limits_{x_1\rightarrow \nu^{+}} \frac{\partial x_1}{\partial x_1}
\quad \text{and} \quad
\lim\limits_{x_1\rightarrow \nu^-} \frac{\partial\left(\ln\left(\frac{x_1}{\nu}\right)\nu + \nu\right)}{\partial x_1} =
\lim\limits_{x_1\rightarrow \nu^-} \frac{\nu}{x_1} =
1\enspace.
\]
\end{rem}
Also note that we use the log-transform in Eq.~\eqref{eqn:structuredef} to guarantee
that $s_{\boldsymbol{\mu}, \boldsymbol{\sigma}, \nu}$ satisfies the conditions of Lemma~\ref{lem:lemma1}; this is, however,
only one possible choice.
Finally, given a collection of structure elements $S_{\boldsymbol{\mu}_i, \boldsymbol{\sigma}_i, \nu}$, we
combine them to form the output of the network layer.
\begin{defn}
\label{defn:layer}
Let $N \in \mathbb{N}$, $\boldsymbol{\theta} = (\boldsymbol{\mu}_i, \boldsymbol{\sigma}_i)_{i=0}^{N-1} \in \big((\mathbb{R} \times \mathbb{R}^+) \times (\mathbb{R}^+ \times \mathbb{R}^+)\big)^N$ and $\nu \in \mathbb{R}^+$. We define
$$
\mathcal{S}_{\boldsymbol{\theta}, \nu}: \mathbb{D} \rightarrow (\mathbb{R}^+_0)^N, \quad \mathcal{D} \mapsto \big(S_{\boldsymbol{\mu}_i, \boldsymbol{\sigma}_i, \nu}(\mathcal{D})\big)_{i=0}^{N-1}
$$
as the concatenation of all $N$ mappings defined in Eq.~\eqref{eqn:layerdefinition2}.
\end{defn}
Importantly, a network layer implementing Def.~\ref{defn:layer} is trainable via backpropagation, as
(1) $s_{\boldsymbol{\mu}_i, \boldsymbol{\sigma}_i, \nu}$ is differentiable in $\boldsymbol{\mu}_i,\boldsymbol{\sigma}_i$, (2) $S_{\boldsymbol{\mu}_i, \boldsymbol{\sigma}_i, \nu}(\mathcal{D})$ is a finite sum of $s_{\boldsymbol{\mu}_i, \boldsymbol{\sigma}_i, \nu}$ and (3) $\mathcal{S}_{\boldsymbol{\theta}, \nu}$ is just a concatenation.
\section{Theoretical properties}
\label{section:theory}
In this section, we demonstrate that the proposed layer is stable w.r.t. the 1-Wasserstein distance
$\textup{\text{w}}_1^q$, see Eq.~\eqref{eqn:wassersteinbottleneck}. In fact, this claim will follow
from a more general result, stating sufficient conditions on functions $s: \mathbb{R}^2_{\star} \cup \mathbb{R}^2_{\Delta} \rightarrow \mathbb{R}_0^+$ such that a construction in the form of Eq.~\eqref{eqn:structuredef} is
stable w.r.t. $\textup{\text{w}}_1^q$.
\begin{lem}
\label{lem:lemma1}
Let
\[
s: \mathbb{R}^2_{\star} \cup \mathbb{R}^2_{\Delta} \rightarrow \mathbb{R}_0^+
\] have the following properties:
\begin{enumerate}[label=(\roman*),ref=(\roman*)]
\item \label{lem:lemma1:prop1} $s$ is Lipschitz continuous w.r.t. $\|\cdot\|_q$ with Lipschitz constant $K_s$,
\item \label{lem:lemma1:prop2} $s(\mathbf{x}) = 0$,~~for $\mathbf{x} \in \mathbb{R}^2_{\Delta}$.
\end{enumerate}
Then, for two persistence diagrams $\mathcal{D}, \mathcal{E} \in \mathbb{D}$, it holds that
\begin{equation}
\label{thm:stability:1}
\left|\sum\limits_{x \in \mathcal{D}} s(x) - \sum\limits_{y \in \mathcal{E}} s(y) \right| \leq K_s \cdot \textup{\text{w}}_1^q(\mathcal{D}, \mathcal{E})\enspace.
\end{equation}
\end{lem}
\begin{proof}\renewcommand{\qedsymbol}{}
see Appendix~\ref{appendix:proofs}
\end{proof}
\vspace{-5pt}
\begin{rem}
At this point, we want to clarify that Lemma \ref{lem:lemma1} is \emph{not} specific
to $s_{\boldsymbol{\mu}, \boldsymbol{\sigma}, \nu}$ (e.g., as defined in Eq.~\eqref{eqn:structuredef}). Rather,
Lemma \ref{lem:lemma1} yields sufficient
conditions to construct a $\textup{\text{w}}_1$-stable input layer.
Our choice of $s_{\boldsymbol{\mu}, \boldsymbol{\sigma}, \nu}$ is just a natural example that fulfils those requirements and, hence,
$\mathcal{S}_{\boldsymbol{\theta}, \nu}$ is just \emph{one} possible representative of a whole family
of input layers.
\end{rem}
With the result of Lemma~\ref{lem:lemma1} in mind, we turn to the specific case of
$\mathcal{S}_{\boldsymbol{\theta}, \nu}$ and analyze its stability properties w.r.t. $\textup{\text{w}}_1^q$.
The following lemma is important in this context.
\begin{lem}
\label{lem:boundedfirstorderderivatives}
$s_{\boldsymbol{\mu}, \boldsymbol{\sigma}, \nu}$ has absolutely bounded first-order partial derivatives w.r.t. $x_0$ and $x_1$ on $\mathbb{R} \times \mathbb{R}^{+}$.
\end{lem}
\vspace{-10pt}
\begin{proof}\renewcommand{\qedsymbol}{}
see Appendix~\ref{appendix:proofs}
\end{proof}
\begin{thm}
\label{cor:stability}$\mathcal{S}_{\boldsymbol{\theta}, \nu}$ is Lipschitz continuous with
respect to $\textup{\text{w}}_1^q$ on $\mathbb{D}$.
\end{thm}
\vspace{-10pt}
\begin{proof}
Lemma~\ref{lem:boundedfirstorderderivatives} immediately implies that
$s_{\boldsymbol{\mu}, \boldsymbol{\sigma}, \nu}$ from Eq.~\eqref{eqn:structuredef} is
Lipschitz continuous w.r.t. $\|\cdot\|_q$.
Consequently, $s = s_{\boldsymbol{\mu}, \boldsymbol{\sigma}, \nu} \circ \rho$ satisfies
property \ref{lem:lemma1:prop1} from Lemma~\ref{lem:lemma1}; property \ref{lem:lemma1:prop2}
from Lemma~\ref{lem:lemma1} is satisfied by construction.
Hence, $S_{\boldsymbol{\mu}, \boldsymbol{\sigma}, \nu}$ is Lipschitz continuous w.r.t. $\textup{\text{w}}_1^q$.
Consequently, $\mathcal{S}_{\boldsymbol{\theta}, \nu}$ is Lipschitz in each coordinate
and therefore Lipschitz continuous.
\end{proof}
Interestingly, the stability result of Theorem~\ref{cor:stability} is
comparable to the stability results in \cite{Adams17a} or \cite{Reininghaus14a} (which are
also w.r.t. $\textup{\text{w}}_1^q$ and in the setting of diagrams with finitely-many points).
However, contrary to previous works, if we chop off the input layer after network training,
we obtain a mapping $\mathcal{S}_{\boldsymbol{\theta}, \nu}$ of
persistence diagrams that is \emph{specifically tailored
to the learning task} on which the network was trained.
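To make the construction concrete, the following is a minimal sketch of an input layer in the spirit of $\mathcal{S}_{\boldsymbol{\theta}, \nu}$: each output coordinate sums one Gaussian-like structure element over all points of a diagram, yielding a fixed-size, permutation-invariant vector. The coordinate transform $\rho$ and the $\nu$-dependent log-scaling of the actual definition are omitted here, and \texttt{mu}, \texttt{sigma} merely stand in for the learnable parameters:

```python
import numpy as np

def input_layer(diagram, mu, sigma):
    """Fixed-size representation of a variable-size diagram: the k-th
    output sums a Gaussian-like structure element with center mu[k] and
    scale sigma[k] over all diagram points. Permutation-invariant by
    construction; rho and the nu-dependent log-scaling are omitted."""
    x = diagram[:, None, :]                      # (n, 1, 2)
    d2 = ((sigma * (x - mu)) ** 2).sum(axis=-1)  # (n, k) squared distances
    return np.exp(-d2).sum(axis=0)               # (k,)

D = np.array([[0.0, 1.0], [0.2, 0.9]])           # a tiny diagram
mu = np.array([[0.0, 1.0], [0.5, 0.5]])          # k = 2 "learned" centers
sigma = np.ones_like(mu)
v = input_layer(D, mu, sigma)
```

In the actual layer, \texttt{mu} and \texttt{sigma} would be optimized jointly with the network weights via backpropagation.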
\section{Experiments}
\label{section:experiments}
To demonstrate the versatility of the proposed approach, we present experiments
with two very different types of data: (1) 2D object shapes, represented
as binary images, and (2) social network graphs, given by their adjacency matrices.
In both cases, the learning task is \emph{classification}.
In each experiment we ensured a balanced group size (per label) and used a 90/10 random training/test split; all reported results are averaged over five runs with fixed $\nu = 0.1$.
In practice, points in input diagrams were thresholded at $0.01$ for computational reasons.
Additionally, we conducted a reference experiment on all datasets using simple vectorization
(see Sec.~\ref{subsection:vectorization})
of the persistence diagrams in combination with a linear SVM.
\noindent
\textbf{Implementation}. All experiments were implemented in \texttt{PyTorch}\footnote{\url{https://github.com/pytorch/pytorch}}, using
\texttt{DIPHA}\footnote{\url{https://bitbucket.org/dipha/dipha}} and \texttt{Perseus}~\cite{Perseus_MischaikowK13}.
Source code is publicly available at \url{https://github.com/c-hofer/nips2017}.
\subsection{Classification of 2D object shapes}
\label{subsection:shapeclassification}
\begin{figure}[!t]
\begin{center}
\includegraphics[width=0.85\textwidth]{pht.pdf}
\caption{Height function filtration of a ``clean'' (\emph{left}, \textcolor{darkgreen}{green} points) and a ``noisy''
(\emph{right}, \textcolor{babyblue}{blue} points) shape along direction
$\mathbf{d} = (0, -1)^\top$. This example demonstrates the insensitivity of homology towards noise, as the
added noise only (1) slightly shifts the dominant points (upper left corner)
and (2) produces additional points close to the diagonal, which have little
impact on the Wasserstein distance and the output of our layer.
\label{fig:npht}}
\end{center}
\end{figure}
We apply persistent homology combined with our proposed input layer to two
different datasets of binary 2D object shapes: (1) the \texttt{Animal} dataset,
introduced in \cite{Bai09a}, which consists of
20 different animal classes, 100 samples each; (2) the \texttt{MPEG-7}
dataset which consists of 70 classes of different object/animal contours,
20 samples each (see \cite{Latecki00a} for more details).
\noindent
\textbf{Filtration.}
The requirements to use persistent homology on 2D shapes are twofold: \emph{First}, we need
to assign a simplicial complex to each shape; \emph{second}, we need to appropriately
filtrate the complex. While, in principle, we could analyze contour features,
such as curvature, and choose a sublevel set filtration based on that, such a strategy
requires substantial preprocessing of the discrete data (e.g., smoothing). Instead,
we choose to work with the raw pixel data and leverage the \emph{persistent homology
transform}, introduced by Turner et al. \cite{Turner2013}. The filtration in that
case is based on sublevel sets of the \emph{height function}, computed from multiple
directions (see Fig.~\ref{fig:npht}). Practically, this means that we \emph{directly construct a simplicial
complex from the binary image}.
We set $K_0$ as the set of all pixels which are contained in the object.
Then, a 1-simplex $[\mathbf{p}_0, \mathbf{p}_1]$ is in the 1-skeleton $K_1$ iff
$\mathbf{p}_0$ and $\mathbf{p}_1$ are 4--neighbors on the pixel grid.
To filtrate the constructed complex, we denote by $\mathbf{b}$ the barycenter of
the object and by $r$ the radius of its bounding circle around $\mathbf{b}$. Finally,
we define, for $[\mathbf{p}] \in K_0$ and $\mathbf{d} \in \mathbb{S}^1$,
the filtration function by $f([\mathbf{p}]) = \nicefrac{1}{r}\cdot \inprod{\mathbf{p} - \mathbf{b}}{\mathbf{d}}$.
Function values are lifted to $K_1$ by taking the maximum, cf. Sec.~\ref{section:background}.
Finally, let $\mathbf{d}_1, \dots, \mathbf{d}_{32}$ be 32 equidistantly distributed directions in $\mathbb{S}^1$,
starting from $(1,0)$.
For each shape, we get
a vector of persistence diagrams $(\mathcal{D}_i)_{i=1}^{32}$ where $\mathcal{D}_i$ is the
0-th diagram obtained by filtration along $\mathbf{d}_i$.
As most objects do not differ in homology groups of \emph{higher} dimensions ($>0$), we
did not use the corresponding persistence diagrams.
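The filtration values can be sketched directly from the raw pixel data. In the snippet below, \texttt{height\_filtration} is a hypothetical helper (not the authors' implementation): vertices get $f([\mathbf{p}]) = \nicefrac{1}{r}\cdot\langle \mathbf{p}-\mathbf{b}, \mathbf{d}\rangle$ and each edge between 4-neighbors gets the maximum of its endpoint values:

```python
import numpy as np

def height_filtration(img, d):
    """Vertex and edge filtration values of the height-function
    filtration of a binary image along direction d (a sketch;
    height_filtration is a hypothetical helper)."""
    pts = np.argwhere(img > 0).astype(float)   # object pixels = K_0
    b = pts.mean(axis=0)                       # barycenter
    r = np.linalg.norm(pts - b, axis=1).max()  # bounding-circle radius
    f0 = (pts - b) @ np.asarray(d) / r         # f on vertices
    idx = {tuple(p): v for p, v in zip(pts.astype(int), f0)}
    f1 = []                                    # f lifted to 1-simplices
    for (i, j), v in idx.items():
        for q in ((i + 1, j), (i, j + 1)):     # 4-neighbor edges
            if q in idx:
                f1.append(max(v, idx[q]))
    return f0, f1

img = np.zeros((5, 5), dtype=int)
img[1:4, 1:4] = 1                              # a 3x3 square object
f0, f1 = height_filtration(img, (0.0, -1.0))
```

By construction, all values lie in $[-1, 1]$ regardless of the object's size.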
\noindent
\textbf{Network architecture.} While the full network is listed in the
\emph{supplementary material} (Fig. \ref{fig:shape_network_arch}), the key architectural choices are: $32$
independent input branches, i.e., one for each filtration direction.
Further, the $i$-th branch gets, as input, the vector of persistence diagrams from directions $\mathbf{d}_{i-1}, \mathbf{d}_i$ and $\mathbf{d}_{i+1}$. This is a straightforward approach to capture
dependencies among the filtration directions. We use cross-entropy loss to train
the network for $400$ epochs, using stochastic gradient descent (SGD) with
mini-batches of size $128$ and an initial learning rate of $0.1$ (halved every
$25$-th epoch).
\noindent
\textbf{Results.} Fig.~\ref{fig:shaperesults} shows a selection of
2D object shapes from both datasets, together with the obtained
classification results. We list the two best ($\dagger$) and two worst
($\ddagger$) results as reported in \cite{Wang2014}.
While using topological signatures falls short of the state-of-the-art, the proposed architecture still outperforms several approaches that are
specifically tailored to the problem.
Most notably, our approach \emph{does not} require any specific data
preprocessing, whereas all other competitors listed in Fig.~\ref{fig:shaperesults} require, e.g., some sort of contour extraction. Furthermore, the proposed
architecture readily generalizes to 3D with the only difference that in this case $\mathbf{d}_i \in \mathbb{S}^2$.
Fig.~\ref{tbl:vectorization} (\emph{Right}) shows an exemplary visualization of the position of the learned structure elements for the
\texttt{Animal} dataset.
\subsection{Classification of social network graphs}
\label{subsection:graphclassification}
\begin{figure}[!t]
\footnotesize
\centering
\includegraphics[width=0.57\textwidth]{ShapeSample.pdf}
\hfill
\begin{tabular}[b]{lcc}
\toprule
& \texttt{MPEG-7} & \texttt{Animal} \\
\midrule
$^\ddagger$Skeleton paths & $86.7$ & $67.9$\\
$^\ddagger$Class segment sets & $90.9$ & $69.7$ \\
$^\dagger$ICS & $96.6$ & $78.4$ \\
$^\dagger$BCF & $97.2$ & $83.4$ \\
\midrule
\textbf{Ours} & $91.8$ & $69.5$ \\
\bottomrule
\end{tabular}
\caption{\emph{Left}: some examples from the \texttt{MPEG-7} (\emph{bottom})
and \texttt{Animal} (\emph{top}) datasets. \emph{Right}: Classification results,
compared to the two best ($\dagger$) and two worst ($\ddagger$) results reported in \cite{Wang2014}.\label{fig:shaperesults}}
\end{figure}
In this experiment, we consider the problem of graph classification,
where vertices are unlabeled and edges are undirected. That is, a
graph $\mathcal{G}$ is given by $\mathcal{G}=(V,E)$, where $V$ denotes the set of vertices
and $E$ denotes the set of edges.
We evaluate our approach on the challenging problem of
social network classification, using the two largest
benchmark datasets from \cite{Yanardag15a}, i.e., \texttt{reddit-5k} (5 classes, 5k graphs) and \texttt{reddit-12k} (11 classes, $\approx$12k graphs).
Each sample in these datasets represents a discussion graph and the classes indicate
\emph{subreddits} (e.g., \emph{worldnews}, \emph{video}, etc.).
\noindent
\textbf{Filtration.}
The construction of a simplicial complex from $\mathcal{G} = (V, E)$ is straightforward:
we set $K_0 = \{[v]: v \in V\}$ and $K_1 = \{[v_0, v_1]: \{v_0, v_1\} \in E\}$. We choose a very simple filtration based on the
\emph{vertex degree}, i.e., the number of incident edges to a vertex
$v \in V$. Hence, for $[v_0] \in K_0$ we get
$f([v_0]) = \deg(v_0)/\max_{v \in V} \deg(v)$ and again lift $f$ to $K_1$ by taking the maximum.
Note that chain groups are trivial for dimension $>1$, hence, all features in dimension $1$ are \emph{essential}.
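The degree filtration and its persistent homology can be sketched with a single union-find pass (the textbook elder-rule algorithm, not necessarily the implementation used here). Edges that merge two components kill the younger $0$-dimensional class; edges that close a cycle each create an \emph{essential} $1$-dimensional class, consistent with the remark above:

```python
def degree_persistence_0(n, edges):
    """Persistence of the vertex-degree sublevel filtration of a graph
    (a textbook union-find sketch). Returns the non-essential 0-dim
    diagram, the essential 0-dim births (one per connected component),
    and the essential 1-dim births (one per independent cycle)."""
    deg = [0] * n
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    f = [d / max(deg) for d in deg]              # normalized degree filtration
    parent = list(range(n))
    def find(a):                                 # union-find with path halving
        while parent[a] != a:
            parent[a] = parent[parent[a]]
            a = parent[a]
        return a
    diagram0, essential1 = [], []
    # Edges enter the filtration at the max value of their endpoints.
    for u, v in sorted(edges, key=lambda e: max(f[e[0]], f[e[1]])):
        ru, rv = find(u), find(v)
        if ru == rv:
            essential1.append(max(f[u], f[v]))   # edge closes a cycle
            continue
        if f[ru] < f[rv]:                        # elder rule: keep the
            ru, rv = rv, ru                      # older (smaller) birth
        diagram0.append((f[ru], max(f[u], f[v])))
        parent[ru] = rv
    essential0 = [f[r] for r in {find(a) for a in range(n)}]
    return diagram0, essential0, essential1

dgm0, ess0, ess1 = degree_persistence_0(4, [(0, 1), (0, 2), (0, 3)])
```

For the star graph above, three components merge at the hub's (maximal) degree value, and no cycle exists, so \texttt{ess1} is empty.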
\noindent
\textbf{Network architecture.} Our network has four input branches: two for
each dimension ($0$ and $1$) of the homological features, split into \emph{essential} and
\emph{non-essential} ones, see Sec.~\ref{section:background}. We train
the network for $500$ epochs using SGD and cross-entropy loss with an initial learning
rate of $0.1$ (\texttt{reddit-5k}) or $0.4$ (\texttt{reddit-12k}). The full network
architecture is listed in the \emph{supplementary material} (Fig. \ref{fig:graph_network_arch}).
\noindent
\textbf{Results.} Fig.~\ref{fig:graphresults} (\emph{right}) compares
our proposed strategy to state-of-the-art approaches from the literature.
In particular, we compare against (1) the graphlet kernel (GK)
and deep graphlet kernel (DGK) results from \cite{Yanardag15a},
(2) the Patchy-SAN (PSCN) results from \cite{Niepert16a} and (3)
a recently reported graph-feature + random forest approach (RF) from \cite{Barnett16a}.
As we can see, using topological signatures in our proposed
setting considerably outperforms the current
state-of-the-art on both datasets.
This is an interesting observation, as PSCN \cite{Niepert16a} for instance,
also relies on node degrees and an extension of the convolution operation to graphs.
Further, the results reveal that including \emph{essential} features is key
to these improvements.
\subsection{Vectorization of persistence diagrams}
\label{subsection:vectorization}
Here, we briefly present a reference experiment we conducted following Bendich et al. \cite{Bendich2016}.
The idea is to directly use the persistence diagrams as features via \emph{vectorization}.
For each point $(b, d)$ in a persistence diagram $\mathcal{D}$ we calculate its \emph{persistence}, i.e.,
$d - b$. We then sort the calculated persistences by magnitude from high to low and take the first $N$
values. Hence, we get, for each persistence diagram, a vector of dimension $N$ (if $|\mathcal{D}\setminus \Delta| < N$,
we pad with zero). We used this technique on all four datasets. As can be seen from the results in
Table~\ref{tbl:vectorization} (averaged over 10 cross-validation runs), vectorization performs poorly on \texttt{MPEG-7} and \texttt{Animal} but
can lead to competitive rates on \texttt{reddit-5k} and \texttt{reddit-12k}. Nevertheless, the obtained
performance is considerably inferior to our proposed approach.
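As a sketch, the vectorization amounts to:

```python
import numpy as np

def vectorize(diagram, N):
    """Top-N persistence vector of a diagram: persistences d - b sorted
    from high to low, truncated to N entries, zero-padded if needed."""
    p = np.sort(diagram[:, 1] - diagram[:, 0])[::-1][:N]
    return np.pad(p, (0, N - len(p)))

D = np.array([[0.0, 3.0], [0.5, 1.0], [2.0, 2.1]])
v = vectorize(D, 5)   # descending persistences, zero-padded to length 5
```

This discards all birth-time information, which is one plausible reason for its weak performance on the shape datasets.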
\begin{figure}[t!]
\footnotesize
\begin{tabular}[b]{lccccccr}
\toprule
& \multicolumn{6}{c}{$N$} & \multirow{2}{*}{\bf Ours}\\
& 5 & 10 & 20 & 40 & 80 & 160 & \\
\midrule
\texttt{MPEG-7} & $81.8$ & $82.3$ & $79.7$ & $74.5$ & $68.2$ &$64.4$&$\mathbf{91.8}$\\
\texttt{Animal} & $48.8$ & $50.0$ & $46.2$ & $42.4$ & $39.3$ &$36.0$&$\mathbf{69.5}$ \\
\texttt{reddit-5k} & $37.1$ & $38.2$ & $39.7$ & $42.1$ & $43.8$ &$45.2$&$\mathbf{54.5}$\\
\texttt{reddit-12k} & $24.2$ & $24.6$ & $27.9$ & $29.8$ & $31.5$ &$31.6$&$\mathbf{44.5}$ \\
\bottomrule
\end{tabular}
\hfill
\includegraphics[height=2.5cm]{Animal-Learned.pdf}
\caption{
\label{tbl:vectorization} \emph{Left}: Classification accuracies for a linear SVM trained on vectorized
(in $\mathbb{R}^N$) persistence diagrams (see Sec.~\ref{subsection:vectorization}). \emph{Right}: Exemplary visualization of the learned structure elements (in $0$-th dimension) for the \texttt{Animal} dataset and filtration direction
$\mathbf{d} = (-1,0)^\top$.
Centers of the learned elements are marked in \textcolor{blue}{blue}.}
\end{figure}
\begin{figure}[t!]
\vspace*{0.25cm}
\footnotesize
\centering
\includegraphics[width=0.45\textwidth]{structure-small}
\hfill
\begin{tabular}[b]{lcc}
\toprule
& \texttt{reddit-5k} & \texttt{reddit-12k} \\
\midrule
GK \cite{Yanardag15a} & $41.0$ & $31.8$ \\
DGK \cite{Yanardag15a} & $41.3$ & $32.2$ \\
PSCN \cite{Niepert16a} & $49.1$ & $41.3$ \\
RF \cite{Barnett16a} & $50.9$ & $42.7$ \\
\midrule
\textbf{Ours} (w/o essential) &$49.1$& $38.5$ \\
\textbf{Ours} (w/ essential) & $\mathbf{54.5}$ & $\mathbf{44.5}$ \\
\bottomrule
\end{tabular}
\caption{\emph{Left}: Illustration of graph filtration by vertex degree,
i.e., $f \equiv \deg$ (for different choices of $a_i$, see Sec.~\ref{section:background}). \emph{Right}: Classification results
as reported in \cite{Yanardag15a} for GK and DGK, Patchy-SAN (PSCN) as reported in
\cite{Niepert16a} and feature-based random-forest (RF)
classification from \cite{Barnett16a}. \label{fig:graphresults}}
\end{figure}
Finally, we remark that in both experiments, tests with the kernel of \cite{Reininghaus14a}
turned out to be computationally impractical: (1) on shape data, due to the need to evaluate
the kernel for all filtration directions; (2) on graphs, due to the large number of samples
and the number of points in each diagram.
\vspace{-5pt}
\section{Discussion}
\label{section:discussion}
We have presented, to the best of our knowledge, the first approach
towards learning \emph{task-optimal} stable representations of topological
signatures, in our case persistence diagrams. Our particular
realization of this idea, i.e., as an input layer to deep neural networks,
not only enables us to learn with topological signatures, but also to
use them as additional (and potentially complementary) inputs to
existing deep architectures. From a theoretical point of view, we remark that the
presented \emph{structure elements} are not restricted to
exponential functions, so long as the conditions of Lemma~\ref{lem:lemma1}
are met. One drawback of the proposed approach, however, is the artificial
bending of the persistence axis (see Fig.~\ref{fig:idea}) by a logarithmic
transformation;
in fact, other strategies might be possible and better
suited in certain situations. A detailed investigation of this issue is left for future work.
From a practical perspective, it is also worth pointing out that,
in principle, the proposed layer could be used to handle any kind of input that
comes in the form of multisets (of $\mathbb{R}^n$), whereas previous works only allow to
handle sets of fixed size (see Sec.~\ref{section:intro}).
In summary, we argue that our experiments
show strong evidence that topological features of data can be beneficial
in many learning tasks, not necessarily to replace existing inputs, but
rather as a complementary source of discriminative information.
\clearpage
\section{Model for bilayer graphene}
Our low energy four-band Hamiltonian for homogeneous BLG is adopted from Ref.~\onlinecite{Koshino2013ett},
\begin{equation}
\bar{H}=
\begin{bmatrix}
H_{0}^+ & U^\dagger\\
U & H_{0}^-
\end{bmatrix}
\,,
\label{eqn:H}
\end{equation}
with basis $(F_{At}^{K^\xi},F_{Bt}^{K^\xi},F_{Ab}^{K^\xi},F_{Bb}^{K^\xi})$, where $F$ denotes envelope function, $\xi=\pm 1$ for the $K$ and $K'$ valley, and $t$ stands for top layer and $b$ for bottom layer.
The two-band Dirac Hamiltonian for a single layer is
\begin{equation}
H_0^\pm =
\begin{bmatrix}
\pm V/2 & \xi k_x + ik_y\\
\xi k_x - ik_y & \pm V/2
\end{bmatrix}
\,,
\end{equation}
where $V=eV_i$ is the interlayer potential.
The interlayer interaction is
\begin{equation}
U=\frac{\gamma_1}{3}\left(1+2
\begin{bmatrix}
\cos \frac{2\pi}{3}\delta & \cos \frac{2\pi}{3}\left(\delta+1\right)\\
\cos \frac{2\pi}{3}\left(\delta-1\right) & \cos \frac{2\pi}{3}\delta
\end{bmatrix}
\right)\,,
\end{equation}
where the interlayer coupling\cite{Zhang2008des} $\gamma_1=0.4\,\mathrm{eV}$ and the stacking order $\delta\in [1,2]$ with $\delta=1,1.5,2$ corresponding to $AB$, $SP$ and $BA$ stacking.
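A numerical sketch of this Hamiltonian (not the production code; we set $\hbar v = 1$ so that momenta carry units of energy, a convention chosen here purely for illustration):

```python
import numpy as np

GAMMA1 = 0.4   # eV, interlayer coupling gamma_1

def U_matrix(delta):
    """Interlayer block U for stacking parameter delta in [1, 2]."""
    c = lambda z: np.cos(2 * np.pi / 3 * z)
    return GAMMA1 / 3 * (1 + 2 * np.array([[c(delta), c(delta + 1)],
                                           [c(delta - 1), c(delta)]]))

def H_blg(kx, ky, delta, V, xi=+1):
    """Homogeneous four-band Hamiltonian in the basis
    (F_At, F_Bt, F_Ab, F_Bb); hbar * v = 1 (illustrative units)."""
    H0 = lambda s: np.array([[s * V / 2, xi * kx + 1j * ky],
                             [xi * kx - 1j * ky, s * V / 2]])
    U = U_matrix(delta)
    return np.block([[H0(+1), U.conj().T], [U, H0(-1)]])

bands = np.linalg.eigvalsh(H_blg(0.05, 0.0, delta=1.5, V=0.1))  # SP stacking
```

For $AB$ stacking ($\delta = 1$) at $k = 0$ and $V = 0$, this reproduces the expected band structure: two split bands at $\pm\gamma_1$ and two touching bands at zero energy.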
To describe the domain walls,
the homogeneous Hamiltonian $\bar{H}$ has to be modified to account for the local change in stacking.
This is done by replacing the momentum perpendicular to the wall $k_\perp$ by the operator $-i\partial/\partial x_\perp$ and making the stacking parameter spatially dependent, $\delta(x_\perp)$,
resulting in the real space Hamiltonian $H(x_\perp)$.
For the tensile wall $x_\perp=y$ while for the shear wall $x_\perp=x$.
The distribution $\delta(x_\perp)$ is found in Ref.~\onlinecite{Alden2013sst} to be
\begin{equation}
\delta(x_\perp)=\frac2\pi\arctan\left(e^{\pi{x_\perp}/{l}}\right)+1\,,
\end{equation}
where the width $l=10.1\,\mathrm{nm}$ for the tensile wall and $l=6.2\,\mathrm{nm}$ for the shear wall.
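As a quick sanity check of this profile (lengths in nm):

```python
import numpy as np

def delta(x, l):
    """Stacking profile delta(x) of the wall: AB (delta = 1) far on one
    side, BA (delta = 2) far on the other, SP (delta = 1.5) at x = 0."""
    return (2 / np.pi) * np.arctan(np.exp(np.pi * x / l)) + 1

l_tensile, l_shear = 10.1, 6.2     # wall widths in nm
d_center = delta(0.0, l_tensile)   # SP stacking at the wall center
```

The arctan form guarantees a smooth, monotonic interpolation between the two stacking orders over a distance of order $l$.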
\section{Optical conductivity of the domain wall}
There are two ways to approximate $\sigma(x_\perp)$.
The first is to diagonalize the homogeneous Hamiltonian $\bar{H}$ for a given stacking order $\delta$,
use the Kubo formula to find the homogeneous optical conductivity $\bar{\sigma}(\delta)$,
then map it to $\bar{\sigma}(x_\perp)$ using the stacking distribution $\delta(x_\perp)$ of the domain wall.
We call this the ``adiabatic'' approach; it cannot account for the presence of the edge states.
The second is to diagonalize the real space Hamiltonian $H$ in coordinate basis and use the Kubo formula to find the nonlocal conductivity $\Sigma(x_\perp,x_\perp')$, which is then localized by $\sigma(x_\perp)=\int \Sigma(x_\perp,x_\perp')dx_\perp'$.
This ``lattice'' approach is what we use for our calculations.
Let us start from the calculation of $\bar{\sigma}(\delta)$.
The conductivity consists of two parts, an interband conductivity $\bar{\sigma}^I$ from optical transitions between the four bands, and a Drude-like intraband conductivity $\bar{\sigma}^D$.
Except for specific stacking orders such as $AB$ and $BA$ stacking, the conductivities are anisotropic.
We consider only the diagonal elements of the conductivity, $\bar{\sigma}_{xx}$ and $\bar{\sigma}_{yy}$, and neglect $\bar{\sigma}_{xy}$ and $\bar{\sigma}_{yx}$ which are small.\cite{Shimazaki2015gdp}
The interband conductivity is calculated using the Kubo formula,
\begin{equation}
\bar{\sigma}^I_{\alpha\alpha}=\frac{g_s g_v i \hbar}{4\pi^2}\int dk_xdk_y\sum_{n\neq m}-\frac{f_m-f_n}{E_m-E_n}\frac{e^2v^2M_\alpha^*M_\alpha}{\hbar\omega(1+i\eta)-(E_m-E_n)}\,.
\label{eqn:kubo}
\end{equation}
Here $\alpha=x$ or $y$.
The spin and valley degeneracies are $g_s=g_v=2$.
The summation goes over all pairs of states $\ket{n}$ and $\ket{m}$,
where the energy of the state $\ket{n}$ is $E_n$ and its occupation number $f_n$ is given by the Fermi-Dirac distribution, $f_n=1/(1+e^{(E_n-\mu)/k_BT})$.
The matrix element is defined as $M_\alpha=\braket{m|s_\alpha\otimes\tau_0|n}$ where $s_\alpha$ are the Pauli matrices acting on the sublattice and $\tau_0$ is the identity matrix acting on the layer degree of freedom.
The phenomenological damping rate is $\eta$.
The intraband conductivity $\bar{\sigma}^D$ arises from the $n=m$ part of the summation,
where the fraction $\frac{f_m-f_n}{E_m-E_n}$ is replaced by the derivative $\frac{df_n}{dE_n}$ in the limit $E_m-E_n\to 0$, so that
\begin{equation}
\bar{\sigma}^D_{\alpha\alpha}=\frac{g_s g_v i \hbar}{4\pi^2}\int dk_xdk_y\sum_{n}-\frac{df_n}{dE_n}\frac{e^2v^2M_\alpha^*M_\alpha}{\hbar\omega(1+i\eta)}\,.
\label{eqn:kubo_D}
\end{equation}
The total conductivity $\bar{\sigma}=\bar{\sigma}^I+\bar{\sigma}^D$
can be readily found given the Hamiltonian $\bar{H}(\delta)$, the chemical potential $\mu$, the temperature $T$, the frequency $\omega$, the interlayer bias $V_i$ and the damping rate $\eta$.
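A numerical sketch of the interband part, Eq.~\eqref{eqn:kubo}, on a small $k$-grid (illustrative only: we set $e = \hbar = v = 1$, take $\alpha = x$, and represent the current vertex as a Pauli matrix on the sublattice index times the identity on the layer index in the basis $(F_{At}, F_{Bt}, F_{Ab}, F_{Bb})$, which is one possible reading of $s_\alpha \otimes \tau_0$):

```python
import numpy as np

def H_blg(kx, ky, delta, V, g1=0.4, xi=+1):
    # Four-band BLG Hamiltonian of the model section (hbar * v = 1).
    c = lambda z: np.cos(2 * np.pi / 3 * z)
    U = g1 / 3 * (1 + 2 * np.array([[c(delta), c(delta + 1)],
                                    [c(delta - 1), c(delta)]]))
    H0 = lambda s: np.array([[s * V / 2, xi * kx + 1j * ky],
                             [xi * kx - 1j * ky, s * V / 2]])
    return np.block([[H0(+1), U.conj().T], [U, H0(-1)]])

def sigma_inter(delta, V, mu, omega, eta=0.05, T=0.025, nk=21, kmax=1.0):
    """Interband Kubo conductivity on a small k-grid (a sketch, not the
    production code); e = hbar = v = 1 and alpha = x."""
    sx = np.array([[0.0, 1.0], [1.0, 0.0]])
    J = np.kron(np.eye(2), sx)        # identity on layer, s_x on sublattice
    ks = np.linspace(-kmax, kmax, nk)
    dk = ks[1] - ks[0]
    acc = 0.0
    for kx in ks:
        for ky in ks:
            E, W = np.linalg.eigh(H_blg(kx, ky, delta, V))
            f = 1.0 / (1.0 + np.exp((E - mu) / T))   # Fermi-Dirac occupations
            M = W.conj().T @ J @ W                   # matrix elements M_x
            for n in range(4):
                for m in range(4):
                    dE = E[m] - E[n]
                    if m == n or abs(dE) < 1e-9:     # skip degenerate pairs
                        continue
                    acc += -(f[m] - f[n]) / dE * abs(M[m, n]) ** 2 \
                           / (omega * (1 + 1j * eta) - dE)
    gs = gv = 2
    return gs * gv * 1j / (4 * np.pi ** 2) * acc * dk ** 2

s = sigma_inter(delta=1.0, V=0.1, mu=0.0, omega=0.3)
```

The real (dissipative) part is positive for $\omega, \eta > 0$, since each term in the sum contributes with the sign fixed by the monotonicity of the Fermi-Dirac distribution.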
\begin{figure}[t]
\begin{center}
\includegraphics[width=2.1 in]{sigma_b_c_t}
\includegraphics[width=2.1 in]{sigma_b_c_s}
\end{center}
\caption{
\textbf{a}. Local conductivity $\sigma_\perp$ for the tensile wall,
where the contribution from optical transitions involving the bound states (thick curves) are separated from the contribution of transitions involving only the continuum (thin curves).
Parameters: $\mu=0.1\,\mathrm{eV}$, $T=300\,\mathrm{K}$, $\omega=890\,\mathrm{cm^{-1}}$, $V_i=0.1\,\mathrm{V}$ and $\eta=0.1$.
\textbf{b}. Similar quantities for the shear wall.
In both cases the bound states produce a prominent peak in the real part of $\sigma_\perp$.
}
\label{fig:sigma_bc}
\end{figure}
The calculation of the nonlocal conductivity $\Sigma$ is very similar.
The system is discretized in the $x_\perp$ direction into a grid of size $N$, so that the Hamiltonian $H$ has $4N$ bands.
The integration over $k_\perp$ is removed, and the matrix element is calculated at every grid point, $M_\alpha(x_\perp)=\braket{m(x_\perp)|s_\alpha\otimes\tau_0|n(x_\perp)}$, leading to the following nonlocal
conductivities
\begin{equation}
{\Sigma}^I_{\alpha\alpha}(x_\perp,x_\perp')=\frac{g_s g_v i \hbar}{4\pi^2}\int dk_\parallel\sum_{n\neq m}-\frac{f_m-f_n}{E_m-E_n}\frac{e^2v^2M_\alpha^*(x_\perp)M_\alpha(x_\perp')}{\hbar\omega(1+i\eta)-(E_m-E_n)}\,,
\label{eqn:kubo_nonlocal}
\end{equation}
\begin{equation}
{\Sigma}^D_{\alpha\alpha}(x_\perp,x_\perp')=\frac{g_s g_v i \hbar}{4\pi^2}\int dk_\parallel\sum_{n}-\frac{df_n}{dE_n}\frac{e^2v^2M_\alpha^*(x_\perp)M_\alpha(x_\perp')}{\hbar\omega(1+i\eta)}\,.
\label{eqn:kubo_D_nonlocal}
\end{equation}
For our calculations the nonlocal $\Sigma$ is then localized by integration over $x_\perp'$ and denoted $\sigma_\alpha\equiv\sigma_{\alpha\alpha}$, where $\alpha=\perp$ or $\parallel$.
\begin{figure}[ht]
\begin{center}
\includegraphics[width=2.1 in]{sigma_aniso_t}
\includegraphics[width=2.1 in]{sigma_aniso_s}
\end{center}
\caption{
\textbf{a}. The local conductivity $\sigma_\alpha$ is highly anisotropic at the tensile wall.
Parameters: $\mu=0.1\,\mathrm{eV}$, $T=300\,\mathrm{K}$, $\omega=890\,\mathrm{cm^{-1}}$, $V_i=0$ and $\eta=0.1$.
\textbf{b}. Similar quantities for the shear wall.
}
\label{fig:sigma_aniso}
\end{figure}
As the bound state wavefunctions are localized at the domain wall, optical transitions involving these bound states give rise to conductivity peaks at the wall, as shown in Fig.~\ref{fig:sigma_bc}.
The domain wall also introduces anisotropy to the local conductivity,
as shown in Fig.~\ref{fig:sigma_aniso}a for the tensile wall and \ref{fig:sigma_aniso}b for the shear wall.
Away from the wall, conductivities in the $\perp$ and the $\parallel$ direction have the same value, as expected, but at the wall they can be drastically different.
\section{Fitting the s-SNOM profiles}
To fit the experimental s-SNOM profiles, we calculate conductivities at $T=300\,\mathrm{K}$ and $\omega=890\,\mathrm{cm^{-1}}$, while treating the chemical potential $\mu$, the interlayer bias $V_i$ and the damping rate $\eta$ as fitting parameters.
As shown in Fig.~\ref{fig:SNOM_parameters}, changes to these three parameters have drastic effects on the resulting s-SNOM signal around the domain wall.
An increase in $\mu$ increases the plasmon wavelength and decreases the strength of the signal,
a change to $V_i$ changes the signal strength at the wall,
while an increase in $\eta$ decreases the overall amplitude of the oscillations.
This shows that one can reliably determine these three parameters in the fitting procedure.
In Fig.~\ref{fig:SNOM_tensile} we show our fits to the experimental near-field amplitude profiles for the tensile wall along with the phase $\phi$ of the s-SNOM signal.
Also shown are the plasmonic wavelength profile $\lambda_p$ and the plasmonic damping profile $\gamma$ used for the fit.
Parameters used for the series of fits for $V_g=(60,0,-40,-80)\,\mathrm{V}$ are:
$\mu=(0.17,0.21,0.25,0.27)\,\mathrm{eV}$,
$V_i=(0.25,0.2,-0.1,-0.2)\,\mathrm{V}$,
and $\eta=(0.2,0.15,0.1,0.12)$.
Fits for the shear wall are shown in Fig.~\ref{fig:SNOM_shear}.
The parameters used for $V_g=(30,-20,-50,-80,-110)\,\mathrm{V}$
are:
$\mu=(0.16,0.21,0.23,0.24,0.25)\,\mathrm{eV}$,
$V_i=(0.3,0.35,0.35,0.35,0.4)\,\mathrm{V}$,
and $\eta=(0.2,0.2,0.2,0.2)$.
\begin{figure}[t]
\begin{center}
\includegraphics[width=2.1 in]{parameter_mu}
\includegraphics[width=2.1 in]{parameter_v}
\includegraphics[width=2.1 in]{parameter_g}
\end{center}
\caption{Comparison of near-field amplitude profiles under different fitting parameters.
The black curve in every panel is calculated at $\mu=0.21\,\mathrm{eV}$, $V_i=0\,\mathrm{V}$, and $\eta=0.15$ for the tensile wall.
In each panel one of the three parameters is varied.
\textbf{a}. Varying the chemical potential changes the plasmon wavelength and the overall amplitude.
\textbf{b}. Changing the interlayer bias $V_i$ alters signal strength at the wall.
\textbf{c}. Increasing the damping rate $\eta$ decreases the overall amplitude of the oscillations.
}
\label{fig:SNOM_parameters}
\end{figure}
\begin{figure}[ht]
\begin{center}
\includegraphics[width=1.55 in]{t60_s3}
\includegraphics[width=1.55 in]{t60_lp}
\includegraphics[width=1.55 in]{t0_s3}
\includegraphics[width=1.55 in]{t0_lp}
\\
\includegraphics[width=1.55 in]{t60_p3}
\includegraphics[width=1.55 in]{t60_gp}
\includegraphics[width=1.55 in]{t0_p3}
\includegraphics[width=1.55 in]{t0_gp}
\\
\includegraphics[width=1.55 in]{tn40_s3}
\includegraphics[width=1.55 in]{tn40_lp}
\includegraphics[width=1.55 in]{tn80_s3}
\includegraphics[width=1.55 in]{tn80_lp}
\\
\includegraphics[width=1.55 in]{tn40_p3}
\includegraphics[width=1.55 in]{tn40_gp}
\includegraphics[width=1.55 in]{tn80_p3}
\includegraphics[width=1.55 in]{tn80_gp}
\end{center}
\caption{Fits for the near-field profiles for the tensile wall. \textbf{a}. $V_g=60\,\mathrm{V}$.
\textbf{b}. $V_g=0\,\mathrm{V}$.
\textbf{c}. $V_g=-40\,\mathrm{V}$.
\textbf{d}. $V_g=-80\,\mathrm{V}$.
In each panel the normalized experimental near-field amplitude profile $\bar{s}_3$ is shown in gray, the simulated amplitude $\bar{s}_3$ and phase $\phi$ profiles are shown in blue and red.
Also shown are the plasmon wavelength profile $\lambda_p$ and damping profile $\gamma$ used for the fit.
}
\label{fig:SNOM_tensile}
\end{figure}
\begin{figure}[ht]
\begin{center}
\includegraphics[width=1.55 in]{s30_s3}
\includegraphics[width=1.55 in]{s30_lp}
\includegraphics[width=1.55 in]{sn20_s3}
\includegraphics[width=1.55 in]{sn20_lp}
\\
\includegraphics[width=1.55 in]{s30_p3}
\includegraphics[width=1.55 in]{s30_gp}
\includegraphics[width=1.55 in]{sn20_p3}
\includegraphics[width=1.55 in]{sn20_gp}
\\
\includegraphics[width=1.55 in]{sn50_s3}
\includegraphics[width=1.55 in]{sn50_lp}
\includegraphics[width=1.55 in]{sn80_s3}
\includegraphics[width=1.55 in]{sn80_lp}
\\
\includegraphics[width=1.55 in]{sn50_p3}
\includegraphics[width=1.55 in]{sn50_gp}
\includegraphics[width=1.55 in]{sn80_p3}
\includegraphics[width=1.55 in]{sn80_gp}
\\
\includegraphics[width=1.55 in]{sn110_s3}
\includegraphics[width=1.55 in]{sn110_lp}
\\
\includegraphics[width=1.55 in]{sn110_p3}
\includegraphics[width=1.55 in]{sn110_gp}
\end{center}
\caption{Fits for the near-field profiles for the shear wall. \textbf{a}. $V_g=30\,\mathrm{V}$.
\textbf{b}. $V_g=-20\,\mathrm{V}$.
\textbf{c}. $V_g=-50\,\mathrm{V}$.
\textbf{d}. $V_g=-80\,\mathrm{V}$.
\textbf{e}. $V_g=-110\,\mathrm{V}$.
In each panel the normalized experimental near-field amplitude profile $\bar{s}_3$ is shown in gray, the simulated amplitude $\bar{s}_3$ and phase $\phi$ profiles are shown in blue and red.
Also shown are the plasmon wavelength profile $\lambda_p$ and damping profile $\gamma$ used for the fit.
}
\label{fig:SNOM_shear}
\end{figure}
\section{Dielectric function in the band gap}
In this section we derive the effective 1D dielectric function $\varepsilon_\mathrm{1D}(k_\parallel,\omega)$ of the domain wall
when the chemical potential lies within the band gap.
The pole of $1/\varepsilon_\mathrm{1D}$ determines the dispersion of the 1D plasmon propagating along the wall.
In the absence of external fields, the total electric potential $\Phi$ of the sheet in the quasistatic limit is determined by the charge density $\rho$ and current density $j$ on the sheet,
\begin{equation}
\Phi=V_2 \ast \rho = V_2 \ast \frac{i}{\omega}\nabla \cdot \mathbf{j}\,,
\label{eqn:phi_j}
\end{equation}
where the Coulomb kernel is $V_2(\mathbf{r})=1/(\kappa|\mathbf{r}|)$ with $\mathbf{r}=(x,y)$, and $\ast$ denotes convolution, $A\ast B=\int A(\mathbf{r}-\mathbf{r}')B(\mathbf{r}')d\mathbf{r}'$.
For ease of notation we assume that the domain wall lies on the $y$-axis.
When the chemical potential is in the gap, $|\mu|<V/2$ (and the temperature and frequency are low, $k_B T\ll V$ and $\hbar \omega\ll V$),
only the bound states contribute to the optical response.
The charge density is zero on the sheet and the current only flows along the domain wall,
so we can make the simplifications $j_x=0$ and $\Phi=\phi(x)e^{ik_y y}$.
Eq.~\eqref{eqn:phi_j} can then be rewritten as
\begin{equation}
\phi(x) = V_{1} \ast \frac{i}{\omega}\partial_y j_y = V_{1} \ast \frac{k_y^2}{i\omega}\int\Sigma_{yy}(x,x')\phi(x')dx'\,.
\label{eqn:phi_1D}
\end{equation}
Here the 1D Coulomb kernel is $V_1(x)=\int V_2 dy=\frac2\kappa K_0(k_y|x|)$, where $K_0$ is the modified Bessel function of the second kind.
Note that we removed the $k_y$ dependence in $\Sigma_{yy}$ as the plasmon wavelength $\lambda_y\sim k_y^{-1}$ is much larger than all other length scales in the problem, so that we can make the approximation $\Sigma(x,x',k_y)\simeq \Sigma(x,x',0)$.
At frequencies $\hbar \omega\ll V$, there are no allowed optical transitions and the conductivity comes purely from the Drude response,
\begin{equation}
\Sigma_{yy}(x,x')=g_s\sum_{K,K'}\sum_{j=1}^{N} \frac{iD_{yy,j}}{\pi(\omega-\xi_j v_j k_y)}|\psi_j(x)|^2|\psi_j(x')|^2\,,
\end{equation}
where $\psi_j$ is the wavefunction of the $j$-th bound state at energy $E_j=\mu$ and $g_s=2$ is the spin degeneracy.
The Drude weight $\frac1\pi D_{yy,j}=\frac{e^2}{h}|v_j|$ is directly proportional to the particle group velocity $v_j=\partial E_j/\hbar\partial k_y$, and $\xi_j$ is the sign of $v_j$.
The summation over the $K$ and the $K'$ valleys can be reduced by noting that
every bound state has a counterpart in the other valley with a velocity that is equal in magnitude but opposite in direction.
Since the width of the wavefunctions, $\sim l$, is much smaller than the plasmon wavelength $\lambda_y$, the particle density distributions $|\psi_j(x)|^2$ can be approximated as $\delta$-functions, with the characteristic width $l$ entering only through the cutoff of the Coulomb kernel.
Eq.~\eqref{eqn:phi_1D} then becomes
\begin{equation}
\phi(l)\simeq\left(\sum_{j=1}^{N}\frac{k_y^2}{\kappa(\omega^2-v_j^2k_y^2)}2K_0(k_yl)g_s\frac{2e^2}{h}|v_j|\right)\phi(l)=\left(1-\frac{\varepsilon_{1D}}{\kappa}\right)\phi(l)\,.
\end{equation}
For small arguments, $K_0(z)\simeq \log(A/z)$ with $A=2e^{-\gamma_E}\simeq 1.12$, where $\gamma_E\approx 0.577$ is the Euler--Mascheroni constant, and so the 1D dielectric function is
\begin{equation}
\varepsilon_{\mathrm{1D}}\left(k_y, \omega\right) =
\kappa - \frac{8 e^2}{h}\, \ln \left(\frac{A}{k_y l}\right)
k_y^2\sum_{j = 1}^{N}
\frac{ |v_{j}|}{\omega^2 - k_y^2 v_{j}^2}\,.
\label{eqn:1D_plasmon}
\end{equation}
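The plasmon dispersion $\omega(k_y)$ follows from the zero of $\varepsilon_\mathrm{1D}$. A numerical sketch (illustrative units with $e^2/h = 1$, $\kappa = 1$, and made-up bound-state velocities, not fitted values):

```python
import numpy as np

def eps_1d(ky, omega, vs, l, kappa=1.0):
    """Effective 1D dielectric function of the final equation, in units
    with e^2 / h = 1; vs holds the group velocities |v_j| of the N
    bound states crossing the chemical potential."""
    A = 2 * np.exp(-0.577)             # small-argument K_0 constant
    drude = sum(abs(v) / (omega ** 2 - (ky * v) ** 2) for v in vs)
    return kappa - 8 * np.log(A / (ky * l)) * ky ** 2 * drude

def plasmon_omega(ky, vs, l, kappa=1.0):
    """Solve eps_1d = 0 for the 1D plasmon frequency by bisection,
    starting just above the highest single-particle pole ky * |v_j|."""
    lo = max(abs(v) for v in vs) * ky * (1 + 1e-9)  # eps large negative here
    hi = 100 * lo                      # assumed above the root for these params
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if eps_1d(ky, mid, vs, l, kappa) < 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

w = plasmon_omega(ky=0.01, vs=[1.0, 0.8], l=1.0)   # illustrative parameters
```

The logarithmic factor makes the resulting dispersion nearly, but not exactly, linear in $k_y$.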
\bibliographystyle{naturemag}