UOV signature scheme, how does the affine transformation work? What does the composition of the core map and the affine map yield?
I am having trouble understanding part of the UOV scheme. I get how it works except for when it comes to composing the core map F with an affine transformation, say T, which I understand to be an invertible square matrix. I understand F to be composed of o oil-vinegar polynomials, where o is the number of oil variables. So does this mean that each multivariate equation in the core map F is multiplied by a square matrix representing the affine map T? What exactly would be the product of this multiplication? Is each multivariate oil-vinegar equation treated as a 1x1 matrix and multiplied with T? I know that the map T is used to do a change of basis on the system, but the public key is supposed to be a system of multivariate equations, so could someone help clarify what the composition of the core map F and the affine map T actually yields? Perhaps with a small example if possible? For example, in the case where o = 2, say the core map F is given by
$F = [3x_0^2 - 4x_0x_1 - 4x_1^2 + 8x_0x_2 + 11x_2^2 - 2x_0x_3 - x_1x_3 + 9x_2x_3 + 12x_3^2 + 7x_0x_4 - 11x_1x_4 - 9x_2x_4 - 2x_3x_4 + 3x_0x_5 + 5x_1x_5 + 14x_2x_5 - 11x_3x_5]$
$[-6x_0^2 + 10x_0x_1 + 10x_1^2 + 13x_0x_2 + 4x_1x_2 + 7x_2^2 - 7x_0x_3 - x_1x_3 - 13x_2x_3 + 4x_3^2 + 6x_0x_4 + 15x_1x_4 - 11x_2x_4 - 12x_3x_4 - 12x_0x_5 + 7x_1x_5 - 13x_2x_5 - 9x_3x_5]$
And an affine transformation $T$ given by:
$\begin{bmatrix} 18 & 23 & 14 & 1 & 6 & 21 \\ 24 & 3 & 3 & 0 & 2 & 0 \\ 17 & 25 & 2 & 16 & 23 & 8 \\ 8 & 21 & 16 & 5 & 1 & 9 \\ 14 & 28 & 8 & 17 & 12 & 12 \\ 6 & 24 & 18 & 19 & 3 & 1 \end{bmatrix}$
what would the public key/the composition of the core map F and the map T be?
The secret polynomials are multivariate polynomials whose oil-oil terms have coefficient zero. You can represent each one as a sum of terms, or as a vector-matrix-vector product. Take for example the first polynomial of your example, $F_0(x_0, \ldots, x_5) = 3x^2_0 − 4x_0x_1 − 4x^2_1 + 8x_0x_2 + 11x_2^2 − 2x_0x_3 − x_1x_3 + 9x_2x_3 + 12x^2_3 + 7x_0x_4 − 11x_1x_4 − 9x_2x_4 − 2x_3x_4+3x_0x_5+5x_1x_5 + 14x_2x_5 − 11x_3x_5$. Using the vector notation $\mathbf{x}^\mathsf{T} = (x_0, x_1, \ldots, x_5)$ we can write this polynomial as the following product: $$ F_0(\mathbf{x}) = \mathbf{x}^\mathsf{T} \left( \begin{matrix} 3 & -4 & 8 & -2 & 7 & 3 \\ 0 & -4 & 0 & -1 & -11 & 5 \\ 0 & 0 & 11 & 9 & -9 & 14 \\ 0 & 0 & 0 & 12 & -2 & -11 \\ 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 \end{matrix} \right) \mathbf{x} \enspace .$$ The coefficients in the matrix are the same as in the sum-of-terms representation. Note that in this matrix representation, the bottom-right block consists of zeros; this reflects the fact that oil-times-oil terms have coefficient zero. Also note that this matrix is not unique: whenever $F_i(\mathbf{x}) = \mathbf{x}^\mathsf{T} M_{F_i} \mathbf{x}$, then for any skew-symmetric matrix $A$ (i.e., such that $A^\mathsf{T} = -A$) you can also write $F_i(\mathbf{x}) = \mathbf{x}^\mathsf{T}(M_{F_i}+A)\mathbf{x}$. In particular, this means that (assuming odd characteristic) it is always possible to choose the central matrix to be symmetric by computing $\frac{1}{2}(M_{F_i}+M_{F_i}^\mathsf{T})$.
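To make the matrix representation concrete, here is a small sanity check in Python with a toy three-variable polynomial (invented for illustration; it is not the $F_0$ above):

```python
import numpy as np

# Toy polynomial (illustrative, not the F_0 from the question):
#   f(x0, x1, x2) = 2*x0^2 + 3*x0*x1 + 5*x1*x2
# Its coefficients go into the upper triangle of M, so that f(x) = x^T M x.
M = np.array([[2, 3, 0],
              [0, 0, 5],
              [0, 0, 0]])

def f(x):
    x0, x1, x2 = x
    return 2*x0**2 + 3*x0*x1 + 5*x1*x2

x = np.array([1, 4, 7])
assert x @ M @ x == f(x)        # both representations agree

# Adding a skew-symmetric matrix A (with A^T = -A) leaves the quadratic
# form unchanged, which is why the matrix representation is not unique.
A = np.array([[ 0,  1, -2],
              [-1,  0,  3],
              [ 2, -3,  0]])
assert x @ (M + A) @ x == f(x)
```

Over a finite field one would do the same arithmetic modulo the field characteristic.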
The secret linear transform $\mathcal{T}$ can also be represented in two ways. You give the matrix representation $$ T = \left( \begin{matrix} 18 & 23 & 14 & 1 & 6 & 21 \\ 24 & 3 & 3 & 0 & 2 & 0 \\ 17 & 25 & 2 & 16 & 23 & 8 \\ 8 & 21 & 16& 5 & 1 & 9 \\ 14 & 28 & 8 & 17 & 12 & 12 \\ 6 & 24 & 18 & 19 & 3 & 1 \end{matrix} \right) \enspace ,$$ in which case $\mathcal{T}(\mathbf{x}) = T\mathbf{x}$. You could also have chosen the list of polynomials representation, in which case there are 6 polynomials, $\mathcal{T}_0$ through $\mathcal{T}_5$. For illustration, the first polynomial is given by $\mathcal{T}_0(x_0, x_1, x_2, x_3, x_4, x_5) = 18 x_0 + 23 x_1 + 14 x_2 + 1 x_3 + 6 x_4 + 21 x_5$.
To compute the composition $F \circ \mathcal{T}$ you must first choose a representation. In the polynomial representation we have $F \circ \mathcal{T} (\mathbf{x}) = F(\mathcal{T}(\mathbf{x})) = F(\mathcal{T}_0(\mathbf{x}), \mathcal{T}_1(\mathbf{x}), \ldots, \mathcal{T}_5(\mathbf{x}))$. So you can compute $F \circ \mathcal{T}$ by starting with a copy of $F$ and then substituting every occurrence of $x_i$ with $\mathcal{T}_i(\mathbf{x}) = \mathcal{T}_i(x_0, x_1, x_2, x_3, x_4, x_5)$. But make sure to keep track of which occurrences of the variables appear as a result of the substitution, because they obviously shouldn't be substituted twice.
In the matrix representation, we have $F_i \circ \mathcal{T} (\mathbf{x}) = \mathcal{T}(\mathbf{x})^\mathsf{T} M_{F_i} \mathcal{T}(\mathbf{x}) = (T\mathbf{x})^\mathsf{T} M_{F_i} (T \mathbf{x}) = \mathbf{x}^\mathsf{T} (T^\mathsf{T} M_{F_i} T) \mathbf{x}$. So a matrix representation $M_{F_i \circ \mathcal{T}}$ of the composition can be computed just by sandwiching $M_{F_i}$ between $T^\mathsf{T}$ and $T$, i.e., $M_{F_i \circ \mathcal{T}} = T^\mathsf{T} M_{F_i} T$.
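This sandwich identity is easy to verify numerically. The sketch below uses random matrices over a small prime field; the modulus 31 and dimension 6 are just taken from the example above and are not actual UOV parameters:

```python
import numpy as np

p, n = 31, 6                       # toy modulus and dimension from the example
rng = np.random.default_rng(0)

M = rng.integers(0, p, (n, n))     # matrix of a quadratic form F(x) = x^T M x
T = rng.integers(0, p, (n, n))     # matrix of the linear map T(x) = T x

M_pub = (T.T @ M @ T) % p          # matrix of the composition F o T

x = rng.integers(0, p, n)          # a random evaluation point
lhs = (x @ M_pub @ x) % p          # evaluate the composed form at x
rhs = ((T @ x) @ M @ (T @ x)) % p  # evaluate F at T(x)
assert lhs == rhs                  # (F o T)(x) == F(T(x))
```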
To complete the example you started, let's assume we're working modulo 31 (because otherwise the numbers get quite large). Then we have $$ M_{F_0 \circ \mathcal{T}} \cong T^\mathsf{T} M_{F_0} T = \left( \begin{matrix} 20 & 10 & 29 & 9 & 25 & 15 \\ 7 & 26 & 19 & 11 & 26 & 6 \\ 18 & 0 & 14 & 26 & 29 & 27 \\ 18 & 14 & 1 & 14 & 3 & 25 \\ 28 & 19 & 10 & 4 & 0 & 5 \\ 0 & 27 & 28 & 8 & 18 & 10 \end{matrix}\right) \cong \left( \begin{matrix} 20 & 24 & 8 & 29 & 11 & 23 \\ 24 & 26 & 25 & 28 & 7 & 1 \\ 8 & 25 & 14 & 29 & 4 & 12 \\ 29 & 28 & 29 & 14 & 19 & 1 \\ 11 & 7 & 4 & 19 & 0 & 27 \\ 23 & 1 & 12 & 1 & 27 & 10 \end{matrix}\right) \cong \left( \begin{matrix} 20 & 17 & 16 & 27 & 22 & 15 \\ 0 & 26 & 19 & 25 & 14 & 2 \\ 0 & 0 & 14 & 27 & 8 & 24 \\ 0 & 0 & 0 & 14 & 7 & 2 \\ 0 & 0 & 0 & 0 & 0 & 23 \\ 0 & 0 & 0 & 0 & 0 & 10 \end{matrix}\right) \enspace .$$ Here the congruence sign ($\cong$) indicates that the matrices represent the same quadratic form, i.e., they are equal up to addition of skew-symmetric matrices. In other words, after sandwiching them between $\mathbf{x}^\mathsf{T}$ and $\mathbf{x}$ they will result in the same polynomial. From the last upper triangular matrix we can read out the representation of the polynomial as a sum of terms, namely $$ F_0 \circ \mathcal{T}(x_0, x_1, x_2, x_3, x_4, x_5) = 20x_0^2 + 17x_0x_1 + 16x_0x_2 + 27x_0x_3 + 22x_0x_4 + 15x_0x_5 + 26x_1^2 + 19x_1x_2 + 25x_1x_3 + 14x_1x_4 + 2x_1x_5 + 14x_2^2 + 27x_2x_3 + 8x_2x_4 + 24x_2x_5 + 14x_3^2 + 7x_3x_4 + 2x_3x_5 + 23x_4x_5 + 10x_5^2 \enspace $$ (the $x_4^2$ term happens to vanish modulo 31). This is the same result as obtained through the substitution method applied to the polynomial representations.
One more thing: it is a good idea to avoid mixing homogeneous quadratic polynomials (i.e., without linear or constant terms) with affine transforms (i.e., with constant terms) -- or vice versa. In other words, whenever your secret polynomials are homogeneous, your secret transforms should be linear; whenever they are inhomogeneous, they should be affine. This guarantees that the public polynomials are (in)homogeneous whenever the secret ones are.
Investigating regional source and sink patterns of Alpine CO2 and CH4 concentrations based on a back trajectory receptor model
Esther Giemsa1,
Jucundus Jacobeit1,
Ludwig Ries2 &
Stephan Hachinger3
Environmental Sciences Europe, volume 31, Article number: 49 (2019)
The main purpose of this paper is to improve present knowledge of regional carbon dioxide (CO2) and methane (CH4) exchange by identifying the characteristic spatial and temporal scales of the regional greenhouse gas fluxes, an essential step towards reducing the uncertainties of bottom-up estimates of their global budgets. To this end, we propose a stepwise statistical top-down methodology that examines the relationship between synoptic-scale atmospheric transport patterns and the mole fractions of these climate gases, and finally characterises the sampling sites with regard to the key processes driving the CO2 and CH4 concentration levels.
The results presented in this paper give detailed insights into the emission structures underlying the measurement time series by means of origin-related examinations of the Alpine CO2 and CH4 budgets. The time series of both climate gases from the atmospheric measurements carried out at the four high-alpine observatories Schneefernerhaus, Jungfraujoch, Sonnblick and Plateau Rosa form the basis for the characterisation of the regional CO2 and CH4 budgets of the Alpine region, the focus area of the Central European study region. For this investigation area, the study identifies source and relative sink regions that influence the Alpine climate gas measurements, as well as their temporal variations. The required combination of the measurements with the synoptic situation prevailing at the respective measuring time, which carries the information about the origin of the analysed air masses, is derived by means of a trajectory-based receptor model. The back trajectory receptor model is set up to decipher the most relevant source and sink areas with high spatial resolution: the Alpine region is identified as a significant relative sink for both CO2 and CH4 concentrations all year long, whereas major European emitters show their impact during different seasons.
The reliable results achieved with this approach, together with the encouraging model-internal uncertainty assessments and external plausibility checks, lend credence to our model and its ability to dependably map spatial-temporal variations of the relevant emitters and absorbers of different climate gases (CO2 and CH4) at high spatial resolution.
Carbon dioxide (CO2) and methane (CH4) are the two most important greenhouse gases (apart from water vapour), with a combined radiative forcing of 2.3 [± 0.24] W/m2 in the global average [18]. The continued increase in these two atmospheric greenhouse gases, by over 120 ppm (CO2) and 1080 ppb (CH4) above preindustrial levels, has been unequivocally attributed to human emissions, mainly from fossil fuel burning and land-use changes, while the oceans and terrestrial ecosystems somewhat attenuate this rise with seasonally varying strength [20, 33].
It is not only the climate gas time series themselves that matter with regard to the continued increase in the atmospheric concentrations of both greenhouse gases above their preindustrial levels (CO2 by 40%, CH4 by 150%) [18]. More particularly, their interpretation with special attention to the regional emission budget within the catchment area of measurement stations is of high scientific as well as sociopolitical interest. Only knowledge about regional emission structures provides a sound understanding of the regional climate gas budgets and therefore a valid detection of varying contributions in the catchment area of the measurement sites.
The derivation of such climate-politically relevant variables from high-precision measurement time series of climate gases, on whose basis efficient emission reduction actions can be verified and, if necessary, adapted, is anything but trivial and requires a differentiated breakdown of the measurements by their origin. The interaction of mankind and biosphere as emitters or absorbers, in connection with the long atmospheric lifetime (especially of CO2), is decisive for the complexity of this task. The long lifetime of climate gases once emitted, together with the interference of anthropogenic emissions (primarily the burning of fossil fuels, land-use changes and livestock farming) with the seasonal carbon cycle of the biosphere and other natural biogeochemical cycles, prevents atmospheric measurement time series of the climate gases from immediately providing information about changes in the regional emission situation [20].
In addition to the periodic seasonality of the natural global cycles and the above-mentioned secular trends, the CO2 and CH4 concentrations exhibit short-term fluctuations reflecting the influence of regional climate gas emitters and absorbers. These transient CO2 and CH4 components prompted the development of different methods specifically designed to identify, with high spatial resolution, the most relevant regions appearing as CO2/CH4 sources or relative sinks within the catchment area of a receptor site [1]. Approaches of this kind set the short-term fluctuations of the climate gas concentration measured at a particular station in relation to the simultaneous synoptic conditions captured by statistical analyses of back trajectories [36]. For the reconstruction of dynamic processes in the atmosphere on the synoptic scale, trajectories from dispersion and transport modelling have established themselves as a reliable tool. Air mass trajectories give an approximation of the path that air parcels have covered over a period of time, thereby carrying with them a specific history due to the regions crossed [13]. The simulation of trajectories can therefore give detailed insights into how the emission situation affects the climate gas concentrations at the considered observatories. On the basis of meteorological fields from numerical weather prediction models, trajectories track the movement of an air package in space and time, thereby indicating flow patterns. Trajectories calculated backward in time from a measurement site thus give information about the transport pathways and potential source regions of the detected air masses. Backward trajectories thereby point to geographic regions that contribute to pollution events and enable detailed insights into source–receptor relationships within the catchment area of the target station [30, 31].
As back trajectories cross the locations along their route where the measured concentration anomalies were brought about, such combinations have been applied to study source–receptor relationships without converting atmospheric greenhouse gas concentrations (ppm/ppb) into emission data (kg/(m2 s)), taking concentration anomalies as a proxy for sources and relative sinks instead [1].
One representative of the receptor-oriented methods is the back trajectory receptor model, which calculates the movements of air parcels from the receptor site backward in time, so that the influence of crossed source and sink regions can be traced along the resulting pathways. The basic assumption underlying the trajectory-based receptor model is that the air represented by backward trajectories is affected by a change in the climate gas concentration while passing a grid cell with relevant sources or sinks, such that the change is effectively transported to the receptor [7]. The various types within the category of back trajectory receptor models, such as potential source contribution functions, gridded frequency distributions, concentration field analysis, residence time weighted concentration and concentration weighted trajectory fields (CWT), have in common that they indicate possible origins of measured air parcels on a grid discretising the land surface and different height levels [9]. From that results the advantage of providing the spatial distribution of potential source and sink areas contributing to the measurements at the receptor site. Thus, with back trajectory receptor models, regional anthropogenic as well as biospheric point sources can be identified, provided that the calculation takes into account that the majority of the trajectory endpoints are found surrounding the receptor location, where all the back trajectories converge [32]. One formulation that considers the increasing number of trajectory endpoints near the grid cell of the receptor site is the CWT approach [9], which normalises the source intensity of the grid cells by the trajectory residence time.
Due to this amendment, the CWT approach is able to avoid the potential false identification of sources close to the receptor, where the longest residence times could deceptively suggest a greater influence on the concentrations measured at the receptor site. In addition, the trajectory residence time in the grid cells of the CWT is weighted by the concentration of the trace gas measured on arrival of each trajectory. The combined consideration of the detected concentration levels and the higher frequency of trajectory endpoints near the receptor in the model equation makes the CWT method a reliable and precise tool for source attribution studies [9].
The primary aim of our study is to give detailed insights into the emission structures underlying the measurement time series, examining the carbon dioxide (CO2) and methane (CH4) budgets of the Alpine region by identifying the source and sink regions that influence the Alpine climate gas measurements, as well as their temporal variations. The required combination of the measurements with the synoptic situations prevailing at the respective measuring times, which carries the information about the origin of the recorded air masses, is derived by means of the CWT approach. The input data for the receptor model are, on the one hand, the highly precise climate gas time series of the Alpine observatories, adjusted for the long-term trend as well as the influence of the seasonality of the biosphere, and, on the other hand, the centroid tracks of the particle dispersion modelling arriving at the receptor locations at the respective measuring times. Detailed emission inventories are not required for the application of receptor models.
Using this methodology, relevant sources and sinks influencing the CO2 and CH4 concentrations of the years 2011–2015, as well as the temporal variability of the emitters and absorbers recorded at the Alpine observatories, are sought. Beyond the identification of their seasonal and year-wise occurrence, we assess to what extent the method applied here, and the results obtained with it, allow reliable conclusions on emitters and relative sinks and on their temporal variability. To this end, the model's internal uncertainty is estimated and external plausibility checks are carried out by comparison with results from inverse modelling of climate gas fluxes and concentrations. In the end, the question has to be addressed whether the methodology of our study is capable of reliably detecting climate gas-specific source and sink regions that influence the measurements at the Alpine receptor sites. Or in other words: how accurately, and with what potential limitations, can the model map spatiotemporal variations of the relevant emitters and absorbers of different climate gases (CO2 and CH4) measured at high-alpine observatories?
One promising method to adequately analyse the regional CO2 and CH4 budgets of an investigation region focuses on the time series of climate gases measured at high-altitude observatories with a huge catchment area far away from local emission sources (potential impacts of the local sources remaining even at these exposed sites, such as tourism or machinery like cableways, were carefully excluded by the scientists responsible for the measurements). Due to their exposed position, these sites are particularly suited for investigations of the sources of climate gases, since they capture the state of the lower free troposphere as well as the long-distance transport of air masses and airborne admixtures. These conditions are met in Central Europe in particular at the observatories in the Alpine high mountain region. Here, measurements are set up to adhere to high precision and quality assurance standards corresponding to their supra-regional significance. In particular, the requirements of the Global Atmosphere Watch (GAW) programme for atmospheric monitoring of the United Nations' World Meteorological Organisation (WMO) are met. The GAW monitoring programme is responsible for the measurement of the physical and chemical state of the atmosphere within the UN/WMO Global Climate Observing System (GCOS). The central objective of GAW is to build up a global database of highly precise and hence compatible measurement data allowing a coherent worldwide analysis of atmospheric concentrations. To ensure the required data quality objectives, which are less than ± 0.1 ppm CO2 in the northern hemisphere and ± 2.0 ppb CH4 for repeated inter-comparison measurements, GAW measurements are embedded in a scientific framework for quality assurance. The actual requirements for CO2 and CH4 measurements are fully described in GAW Report No. 242 [34].
The indicated data quality, in connection with the high representativity of the measured climate gas concentrations, provides ideal conditions for the project objective. Accordingly, the investigations of the CO2 and CH4 budgets introduced in this report are based on the measurement time series of the following four high-alpine measuring stations (see Table 1), which guarantee the widest possible coverage of the Alpine region as the core area of the Central European study area from the southwest up to the northeast (see Fig. 1):
Table 1 Overview of the four high-alpine observatories whose climate gases' time series form the basis for the characterisation of the Alpine CO2 and CH4 budget
Map showing the four high-alpine observatories (blue triangles) considered for the characterisation of the Alpine (red line) CO2 and CH4 budgets
Environment Research Station Schneefernerhaus (UFS)
Sphinx-Observatory Jungfraujoch (JFJ)
Sonnblick Observatory (SOB)
Observatory Plateau Rosa (PRO).
Decisive for the choice of exactly these four measuring stations (see Table 2 for detailed information about their measuring instruments) as the database is, besides the guarantee of the highest data quality according to the GAW standards, in particular the supra-regional representativity of the measurements conducted there. This is ensured by their exposed position in the high mountain region and the simultaneous efforts to avoid perturbations by local emitters in the immediate surroundings. The four observatories are located along the main Alpine ridge from southwest to northeast, whereby the main investigation area (bordered by the perimeter of the Alpine Convention) as well as the Alpine foothills and their surroundings are well covered (see Fig. 1). The coverage of the Alps thereby ensured, as the focus region for the CO2 and CH4 characterisation of the greater Central European region, together with the highest data quality level outlined above and the far-reaching representativity of the measurements, offers an ideal data basis for reaching the project objective.
Table 2 Overview of the instruments for measurement of CO2 and CH4 on the four high-alpine observatories
Climate gas data processing
The time series of CO2 and CH4 from the atmospheric measurements at the four high-alpine observatories mentioned above form the basis for the characterisation of the regional CO2 and CH4 budgets of the Alpine region as the focus area of the Central European investigation region. In order to draw conclusions about the origin of the measured CO2 or CH4 concentration levels, the high-precision CO2 and CH4 measurement series recorded at these four observatories have to be combined with the synoptic transport situations prevailing at the respective measuring times at the individual stations. This requires, first of all, a climate gas-specific filtering of the CO2 and CH4 data for long-term trend and seasonality. The resulting explicit concentration on the short-term varying component of the measurements ensures that the following work steps reproduce exclusively the influence of the emission strength of sources and relative sinks in conjunction with the meteorology. For this filtering, the well-established and frequently applied procedure of Cleveland et al. [12] is used. Their so-called seasonal-trend decomposition procedure based on loess (STL) consists of a sequence of applications of the loess smoother. The procedure decomposes a seasonal time series into three components: first, a trend component representing the low-frequency variation in the data together with nonstationary, long-term changes in level; second, a seasonal component showing the variation in the data at the seasonal frequency, which in our case is one cycle per year (here the cycle length of the seasonal component is 4380, since this is the number of two-hourly climate gas concentration measurements within one year); and finally, a residual component comprising the remaining variation in the data beyond the seasonal and trend components (for more details, see Cleveland et al. [12]).
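The decomposition can be sketched with a simple moving-average stand-in for the loess-based STL procedure (the study itself uses STL; the trend and seasonal estimators below, as well as all numbers, are illustrative assumptions):

```python
import numpy as np

def decompose(y, period):
    """Naive seasonal-trend decomposition, a simplified stand-in for STL.

    STL uses iterated loess smoothing; here the trend is a centred moving
    average over one seasonal cycle, and the seasonal component is the
    mean deviation at each phase of the cycle.  Returns
    (trend, seasonal, residual) with y == trend + seasonal + residual.
    """
    n = len(y)
    kernel = np.ones(period) / period
    trend = np.convolve(y, kernel, mode="same")      # crude trend estimate
    detrended = y - trend
    # average of the detrended series at each phase of the cycle
    seasonal = np.array([detrended[i::period].mean() for i in range(period)])
    seasonal = np.tile(seasonal, n // period + 1)[:n]
    seasonal -= seasonal.mean()                      # centre the seasonal part
    residual = y - trend - seasonal
    return trend, seasonal, residual

# synthetic "two-hourly" series: linear trend plus one cycle per toy year
period = 12                       # 12 samples per year in this toy example
t = np.arange(10 * period)
y = 0.01 * t + np.sin(2 * np.pi * t / period)
trend, seasonal, residual = decompose(y, period)
assert np.allclose(trend + seasonal + residual, y)   # components sum to data
```

In the study, the residual component of this kind of decomposition is what enters the receptor model.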
Therefore, the result of the double filtering reflects the short-term varying proportion of the data (4th line in Fig. 2) without the influence of the long-term, anthropogenic climatic change signal (3rd line in Fig. 2) and the seasonality of the biogenic carbon cycle (2nd line in Fig. 2). These residuals are extracted specifically by station and climate gas. Together with the reconstructed, atmospheric transport conditions of the air masses recorded at the time of the measurements, they constitute the starting point for the estimation of the CO2 and CH4 budgets of the Alpine region on basis of the measurement time series.
Seasonality (2nd line) and trend (3rd line) adjustment of the CO2 concentrations (measured 2011–2015 at Schneefernerhaus) based on the approach of Cleveland et al. [12] and Hafen [15] for the explicit consideration of the transient weather- and emission strength-related components (4th line) within the original CO2 measurements (1st line)
Backward trajectories
In the present study, four-dimensional back trajectories (three space dimensions plus time) are retrieved using the Lagrangian particle dispersion model (LPDM) FLEXPART. FLEXPART, further developed at the Norwegian meteorological service, simulates the atmospheric transport of small air volumes on the meso- and large scale, taking into account diffusion, convection, turbulence as well as dry and wet deposition [30]. For dispersion modelling of pollutants, FLEXPART traces the propagation of gases from a known source forward in time; in its backward mode, however, it serves to allocate source regions for a certain receptor. The meteorological driving fields of a numerical weather prediction model form the basis of the particle dispersion simulations and thus also of the centroid tracks. Since the model of the European Centre for Medium-Range Weather Forecasts (ECMWF) has established itself as the most widely used input basis in dispersion modelling for European study areas, FLEXPART is operated with the meteorological fields of the ECMWF ERA-Interim reanalysis (inter alia: [5, 6, 16, 30]). The limited resolution of the ECMWF reanalysis fields as well as of the LPDM (especially in the horizontal dimension), in combination with the complex terrain of the investigated Alpine domain, poses a challenge for any model attempting to describe the transport processes accurately [14, 28]. In order to meet this challenge, and in particular the practical problem of the differences between the model surface altitude and the real site altitudes, we follow previous experience from sensitivity analyses with different release heights [5, 6, 16]. These studies consistently show that for mountaintop sites it works best (independent of the time of day) to release particles at a medium height between the model surface and the site altitude.
From the site-specific release heights calculated in this way, ten thousand air volumes with the tracer characteristics of the climate gases are released every 2 h at the respective receptor observatory and tracked back for more than 10 days. The positions of the dispersing particles are stored with a time step of 2 h. The over 17,500 10-day particle dispersion calculations for every year of the investigation period (2011–2015), together with the CO2 or CH4 concentrations measured at the respective release times, form the basis for the source contribution studies of the climate-relevant gases.
Before combining the FLEXPART results and the climate gas data for analysis, the uncertainties associated with the dispersion simulations such as the limited resolution of the meteorological ECMWF fields and the parameterisations of the LPDM itself are considered by an intermediate pre-processing step. To take the limited reliability of the dispersion modelling into account and to reduce these intrinsic model uncertainties, the backward simulations of the particle dispersions are aggregated to their centroid tracks, which results in the cancelling-out of errors. On the assumption that the uncertainties are equally distributed, the coordinates of the centroid pathways represent the average and thus the least erroneous transport positions of the particle tracking (visually checked for a test sample of FLEXPART particle dispersions with positive outcome).
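Under the stated equal-distribution assumption, a centroid track is simply the per-time-step mean over all particle positions. A minimal sketch (the array shapes are invented stand-ins for the FLEXPART output) could look like this:

```python
import numpy as np

# Hypothetical FLEXPART-like output: 10,000 particles, 120 two-hourly
# time steps, 3 coordinates (longitude, latitude, altitude).
rng = np.random.default_rng(1)
positions = rng.normal(size=(10_000, 120, 3))

# Aggregating the dispersion simulation to its centroid track: the mean
# over the particle dimension gives one representative (least erroneous,
# under the equal-error assumption) position per time step.
centroid_track = positions.mean(axis=0)
assert centroid_track.shape == (120, 3)
```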
The site-specific footprints of all backward centroid tracks from the receptor observatories Schneefernerhaus, Jungfraujoch, Sonnblick and Plateau Rosa over the years 2011–2015 bring out the markedly different catchment areas of the four high-alpine sites (see Fig. 3). While the fields of view of the observatories Jungfraujoch and Plateau Rosa extend primarily over the western and southwestern areas of the Alps and the Alpine foothills, the Environment Research Station Schneefernerhaus covers the northern to north-western regions of the study area, which is finally completed from south to northeast by the catchment area of the Sonnblick observatory. If the centroid paths of the particle dispersion simulations of all four receptor sites are combined, these footprints complement each other and form a common catchment area which includes the course of the Alpine ridge and thereby captures the air masses frequently passing over the Alpine study region (see Fig. 4).
Footprints of the centroid pathways from the site-specific FLEXPART particle dispersion modelling for the receptor observatories Jungfraujoch (JFJ), Plateau Rosa (PRO), Sonnblick (SOB) and Schneefernerhaus (UFS), respectively, visualising the frequentation of the 0.2 × 0.2° grid cells within the individual catchment area by the backward trajectories during the study period 2011–2015
Combined footprint of the centroid pathways from the site-specific FLEXPART particle dispersion modelling for the receptor observatories Jungfraujoch (JFJ), Plateau Rosa (PRO), Sonnblick (SOB) and Schneefernerhaus (UFS), respectively, visualising the frequentation of the 0.2 × 0.2° grid cells within the extended catchment area by the backward trajectories during the study period 2011–2015
Concentration weighted trajectory fields
A sophisticated and frequently applied representative of the family of trajectory-based receptor models is the concentration weighted trajectory (CWT) field, introduced by Seibert et al. [29] and further developed since. For the calculation of the CWT, the complete study area is subdivided into grid cells. The contribution of every grid cell to the measured climate gas concentrations at the receptor observatories, averaged over the respective study period, arises from combining the paths of the atmospheric air masses arriving at the measurement sites, in the form of backward trajectories, with the climate gas concentrations measured at the same time. The formula for calculating such a contribution [2] considers the length of stay of the air parcels over geographic regions prior to their arrival at the receptor site as well as the measured concentration level of the climate gas. To avoid mistakes on account of the reduced reliability of the values for grid cells crossed by few trajectories, the grid cells are weighted according to the frequency of trajectory crossings. This minimises the effect of few trajectory coordinates in individual grid cells and ensures that the increased uncertainties of less frequented areas are taken into consideration [23]. Seibert et al. [29] computed the mean concentration of the investigated species for each grid cell of this domain as follows:
$$\overline{C}_{ij} = \frac{1}{\sum_{k=1}^{N} \tau_{ijk}} \sum_{k=1}^{N} c_{k}\,\tau_{ijk}$$
where \(i\) and \(j\) are the horizontal indices of the grid, \(k\) the index of the trajectory, \(N\) the total number of trajectories used in the analysis, \(c_{k}\) the pollutant concentration measured upon arrival of trajectory \(k\) and \(\tau_{ijk}\) the residence time of trajectory \(k\) in grid cell \((i, j)\). A high value of \(\overline{C}_{ij}\) means that air parcels passing over cell \((i, j)\) would, on average, cause high concentration levels at the receptor sites. A negative value, on the other hand, means that the grid cell has on average a concentration-lowering influence on the measurements at the receptor sites. According to this formula, the CWT approach is even able to distinguish moderate sources or sinks from intense ones [7].
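In code, the grid-cell mean defined by this formula can be sketched as follows. The trajectory representation (one grid index per time step, so the residence time \(\tau_{ijk}\) is proportional to the count of entries) and all function and variable names are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def cwt_field(trajectories, concentrations, shape):
    """Concentration weighted trajectory field after Seibert et al. [29].

    trajectories:   list over k of sequences of (i, j) grid indices, one per
                    trajectory time step (illustrative format assumption).
    concentrations: c_k measured at the receptor when trajectory k arrives.
    shape:          (n_i, n_j) of the study-area grid.
    """
    weighted = np.zeros(shape)   # accumulates sum_k c_k * tau_ijk
    residence = np.zeros(shape)  # accumulates sum_k tau_ijk
    for cells, c_k in zip(trajectories, concentrations):
        for i, j in cells:
            weighted[i, j] += c_k
            residence[i, j] += 1.0
    cbar = np.full(shape, np.nan)  # cells never crossed stay undefined
    mask = residence > 0
    cbar[mask] = weighted[mask] / residence[mask]
    return cbar, residence
```

Cells never crossed by a trajectory are returned as NaN, mirroring the fact that no statement can be made about them.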
Emitting or absorbing grid cells can correspondingly be recognised by high or low values and reveal in their entirety a map that identifies potential source and sink regions with influence on the measurements at the receptor stations with high spatial resolution [9, 17]. Such CWT maps act as reliable indicators for the identification of regions with positive or negative effects on the climate gas concentrations measured at the receptor sites and represent the areas relevant for the measurements precisely, as the comparison with known emission sources has shown [2, 8].
Calculation of the CO2 and CH4 budgets of the Alps
For the detection of relevant sources and sinks influencing the CO2 concentrations of the years 2011–2015, as well as of the temporal variability of the emitters and absorbers recorded at the Alpine observatories (UFS, JFJ, SOB and PRO), the climate gas time series of the four measurement stations are first adjusted for seasonality and long-term trend following Cleveland et al. [12]. This ensures that the results of the analyses are distorted neither by the climate change signal nor by the annual cycle, but refer exclusively to the short-term component of the measurement time series, which depends on the weather and on the emission strength of the sources or relative sinks.
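The adjustment can be illustrated with a deliberately simplified stand-in for the full decomposition of Cleveland et al. [12] (STL): a least-squares line for the long-term trend and monthly means for the seasonal cycle. The study itself uses the complete STL procedure; this sketch only conveys what "residuals" means here:

```python
import numpy as np

def residual_component(values, months):
    """Simplified stand-in for the STL adjustment of Cleveland et al. [12]:
    remove a linear long-term trend and a monthly-mean seasonal cycle and
    return the short-term residual that enters the CWT analysis.
    (Illustrative only; not the full STL decomposition used in the study.)
    """
    values = np.asarray(values, dtype=float)
    months = np.asarray(months)
    t = np.arange(values.size)
    # long-term trend: least-squares straight line through the series
    slope, intercept = np.polyfit(t, values, 1)
    detrended = values - (slope * t + intercept)
    # seasonal cycle: mean of the detrended series per calendar month
    seasonal = np.array([detrended[months == m].mean() for m in months])
    return detrended - seasonal
```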
Parallel to this statistical processing of the measurement data, the calculation of the 10-day centroid tracks derived from the backward particle dispersion simulations with FLEXPART is carried out individually for each of the four receptor sites, but with identical settings of the model parameters. Altogether, over 85,000 FLEXPART particle dispersion runs were conducted for this study, related to the CO2 concentrations recorded at the receptor observatories at the same time, and subjected to the CWT analysis.
The examination of the CO2 budget based on atmospheric measurement time series represents a special challenge due to the variety of existing sources and sinks of carbon dioxide, whose atmospheric concentration is continuously modified by biogenic as well as anthropogenic emitters and absorbers. If a model performs this complex task plausibly, a comparatively simple transfer to other climate gases with similar characteristics, such as methane, suggests itself as a way to extend the insight gained. For this reason, the model is extended to an examination of the CH4 budget for the Alpine region, whereby the studied region, the examination time period and the methodical approach are kept analogous to the previous CO2 budget characterisation. The only necessary modification is the use of the CH4 measurement time series instead of the CO2 data of the observatories.
As for the CO2 time series, the CH4 measurement data of the four high-alpine sites are first adjusted for seasonality and long-term trend [12]. Only the residuals of the measurements, which reflect weather and emission intensity, are analysed in the following, so that neither the impact of climate change nor the biogenic seasonal cycle enters the results. For the analyses of the Alpine CH4 budget, the seasonally and trend-adjusted CH4 time series of the four stations are related to the site-specific centroid tracks of the FLEXPART particle dispersions for the years 2011–2015. The combination of the residuals of the measurement time series and the backward trajectories is again expressed in site-specific CWT analyses.
Characterisation of the Alpine CO2 budget
The result of combining the centroid tracks derived from the site-specific FLEXPART simulations with the CO2 concentrations measured at the respective arrival times of the backward trajectories, after their adjustment for seasonality and long-term trend, is shown in Fig. 5 in the form of CWT maps.
Concentration weighted trajectory fields quantifying the site-specific influence of source and relative sink areas on the de-seasonalised and de-trended CO2 concentrations (in ppm) at the high-alpine receptor sites Jungfraujoch (JFJ), Plateau Rosa (PRO), Sonnblick (SOB) and Schneefernerhaus (UFS), respectively, over the entire study period 2011–2015
These maps for the four receptor observatories over the complete study period 2011–2015 point out that particularly regions of Eastern Europe as well as Central Europe north of the Alps are responsible for high CO2 concentrations at Schneefernerhaus and Sonnblick. In contrast, high CO2 measurements at the sites Jungfraujoch and Plateau Rosa can be traced back primarily to the impact of regions south of the Alps. Here, the altitude difference between the measurement sites has notable effects: the observatory UFS, lying 450–830 m lower, is influenced by the lower free troposphere for shorter periods, resulting in a higher average CO2 concentration of the recorded air masses. The measurements at the three sites JFJ, PRO and SOB, located at altitudes above 3000 m, on the other hand, more frequently represent background concentrations of carbon dioxide from the well-mixed free troposphere and are not influenced by the contributions of relative sinks or sources in the immediate surroundings. Altogether, both the altitude and the position of the measurement sites within the Alpine core study region are reflected in the site-specific CWT maps, which clearly show the different focus regions of the individual footprints, where sources and relative sinks strongly influence the measurements at the receptors (more intense colouring in Fig. 5).
The different coverage of the probed area corresponding to the locations of the measurement sites indicates the importance of, and the need for, a database comprising time series of more than just one observatory for studies like ours. As found by Kaiser et al. [19], the ability of the model to reliably identify sources and sinks is directly linked to the number of station data sets taken into account. Consequently, an expansion of the analyses to more CO2 measurement sites is expected to result in improved model quality and increased reliability of the results [1, 4]. In particular, the integration of additional measuring stations with (supra-)regional representativeness promises more reliable identification of potential source and sink areas, because such measurement time series are hardly influenced locally and instead detect regional CO2 as well as CO2 transported over long distances [36]. For these reasons, only a combination of the catchment areas of at least these four high-alpine sites ensures that the most relevant emitters influencing the climate gas concentrations of the Alpine region are captured.
The result of the cumulative consideration of the CO2 concentrations and particle dispersion simulations of all four high-alpine observatories in the form of the combined CWT map (see Fig. 6) locates CO2-emitting regions all around the central Alps, with the exception of France in the west, which has withdrawn from coal mining since the beginning of the 2000s and today uses largely CO2-neutral nuclear power as its main source of energy. Furthermore, particularly the area around the Alpine main ridge appears as an important large-scale relative CO2 sink of Central Europe averaged over the years. Air masses originating from this central region in the midst of the study area caused a significant reduction in the CO2 concentration levels in the measurements of the years 2011–2015 when recorded at the receptor sites.
Combined concentration weighted trajectory field quantifying the influence of source and relative sink areas on the de-seasonalised and de-trended CO2 concentrations (in ppm) at the high-alpine receptor sites Jungfraujoch (JFJ), Plateau Rosa (PRO), Sonnblick (SOB) and Schneefernerhaus (UFS) over the entire study period 2011–2015
The seasonally differentiated maps of the CO2 contributions from the grid cells of the accumulated catchment area (see Fig. 7) identify single emission hotspots emerging in different seasons. In winter, these are located primarily north and east of the Alps and suggest CO2 emissions from heating with fossil fuels, whereas during summer CO2 measurements higher by about two ppm occur mainly during air mass advection from the Mediterranean area southwest of the Alps as well as from Central Italy. The enhancement of Alpine concentrations of carbon dioxide caused by emissions from burning fossil fuels, which reaches values of up to four ppm in winter, is already ascertainable in fall, though less strongly, and can in this season be attributed to eastern European regions located further inland. These most dominant emission regions include parts of the northeast of Germany and particularly wide areas of (West-)Poland and Eastern Europe. Given the fact that the biggest brown coal-mining areas of Europe are located in these regions, this indicates the considerable impact of the brown coal emissions on CO2 measurements, even at the high-alpine observatories situated more than 500 km away in a straight line at the top of Europe's highest mountains. The increases in the carbon dioxide level detected at the high-alpine sites during summertime stem from the northwest Mediterranean region, where in this season heat-related fires often spark off. The CO2 released in these fires may well be responsible for values of CO2 higher by up to three ppm on average at the receptor sites when air mass transport originates from this area during summer. Spring, in turn, shows some less contributing CO2 emission hotspots in highly populated areas of West Germany (Rhine-Ruhr area), Belgium and the Netherlands.
The vehicular and power plant emissions from these regions are clearly displayed only during this season, when private heating and wildfires do not play any role, suggesting that the latter are the major sources of enhanced CO2 concentrations measured at the Alpine sites. Other large European metropolitan areas such as Paris or London may be located too far away from the four high-alpine observatories for significant detection, so that their impact thins out over the long distance (and especially over the Atlantic Ocean in the case of London). During all seasons, the central study area around the Alpine main ridge is classified as a significant relative carbon dioxide sink, with the Alpine core region showing the largest negative influence on the CO2 concentrations of Central Europe across all seasons.
Concentration weighted trajectory fields quantifying the seasonal influence of source and relative sink areas on the de-seasonalised and de-trended CO2 concentrations (in ppm) at the high-alpine receptor sites Jungfraujoch (JFJ), Plateau Rosa (PRO), Sonnblick (SOB) and Schneefernerhaus (UFS) over the entire study period 2011–2015
Altogether, these seasonally differentiated analyses of the Alpine CO2 budget, based on the combination of the measurement time series of the stations UFS, JFJ, PRO and SOB, draw a plausible picture of the seasonally relevant CO2 emitters and absorbers close enough to the Alpine receptor sites and, at the same time, underline the relevance of seasonal differentiation in CO2 source contribution studies. The seasonal variations of the emitters and absorbers, particularly distinctive for carbon dioxide, appear only in seasonally differentiated CWT analyses as performed here. Together, these allow a conclusive characterisation of the relevant regions of Europe with influence on the Alpine CO2 concentrations.
In the CWT maps calculated separately for each of the 5 years of the study period (see Fig. 8), the high CO2 emissions due to the severe winter of 2013, with temperatures down to − 20 °C, clearly stand out. Only after heavy snowfall in March, which according to the meteorological categorisation already counts as a spring month, did this period come to an end. The methodology reliably visualises the intensified and longer-lasting emissions from heating with fossil fuels caused by these weather conditions for the key region of Central Europe. Likewise, the method also succeeds in representing the following, particularly mild winter of 2014 with its weather-related, much lower CO2 emissions from the reduced incineration of fossil energy sources. The reliable reconstruction of the year-to-year variations in the CO2 emissions using the year-wise CWT analyses stresses the ability of the methodology to trace the short-term varying components within the measurements of the high-alpine observatories, which reflect the influence of weather and emission strength, back to their source regions. In summary, this certifies the ability of the method used here to identify both seasonal and year-wise deviations of the CO2 emissions from areas with influence on the CO2 budget of the Alpine study region.
Concentration weighted trajectory fields quantifying the yearly influence of source and relative sink areas on the de-seasonalised and de-trended CO2 concentrations (in ppm) at the high-alpine receptor sites Jungfraujoch (JFJ), Plateau Rosa (PRO), Sonnblick (SOB) and Schneefernerhaus (UFS) over the entire study period 2011–2015
Characterisation of the Alpine CH4 budget
As already seen in the source contribution studies for the CO2 concentrations of the stations, site-specific focuses within the Alpine investigation area also appear in the concentration weighted trajectory fields for the CH4 data (see Fig. 9). While the observatories JFJ and PRO identify the methane source regions north and, even more, south of the Alps, emitting grid cells in the north and east of Central Europe are detected primarily at Sonnblick and Schneefernerhaus. Hotspot regions, even on the small scale, are localised correspondingly in several site-specific maps, which increases the reliability of the cumulative source detection. Common to all four site-specific CWT maps, regardless of their individual footprints, is the consistent classification of the southwest of Europe as a large-scale relative CH4 sink. Thus, air masses from the Iberian Peninsula, when recorded at the Alpine observatories, caused mean reductions in the measured CH4 concentrations of about ten ppb averaged over the whole 5-year investigation period. On the other hand, air mass transport particularly from the northeast of Central Europe was accompanied by CH4 concentration levels increased by up to 20 ppb on arrival and recording at the high-alpine sites during the years 2011–2015.
Concentration weighted trajectory fields quantifying the site-specific influence of source and relative sink areas on the de-seasonalised and de-trended CH4 concentrations (in ppb) at the high-alpine receptor sites Jungfraujoch (JFJ), Plateau Rosa (PRO), Sonnblick (SOB) and Schneefernerhaus (UFS), respectively, over the entire study period 2011–2015
The site-specific CWT maps already give a picture of the division of the catchment area into two parts, with a southwestern relative CH4 sink region on the one hand (Portugal, Spain and the south of France) and CH4 emitters (from England to Italy, with main focuses in the northeast of Central Europe: Germany, Denmark, eastern Austria, Poland, Czech Republic, Hungary) on the other hand. This phenomenon of a clear regional split is confirmed by the comprehensive plot incorporating the calculations from all four stations (see Fig. 10). This CWT map of the combination of all station data and backward trajectories for the whole period 2011–2015 manifests the strong northeast-southwest gradient in the impact on the Alpine CH4 concentrations, according to which the seasonally and trend-adjusted measured values increase by 15 to 20 ppb during advection of air masses particularly from the northeast of Central Europe, while air masses from the southwestern direction have a negative influence on the measurements in the form of a reduction of 15 ppb CH4 on average. The only exception to this northeast-southwest distinction is the Alpine region itself, whose effect as a relative CH4 sink emerges considerably despite the surrounding source regions. One can therefore conclude that the Alpine core region represents a significant relative sink for both examined climate gases (CO2 and CH4) in the midst of an emission-intensive Europe and thus merits a special protection status also from the climate protection perspective.
Combined concentration weighted trajectory field quantifying the influence of source and relative sink areas on the de-seasonalised and de-trended CH4 concentrations (in ppb) at the high-alpine receptor sites Jungfraujoch (JFJ), Plateau Rosa (PRO), Sonnblick (SOB) and Schneefernerhaus (UFS) over the entire study period 2011–2015
The northeast-southwest division into CH4 sources and relative sinks also holds for the seasonal consideration of the influence of the grid cells on the Alpine CH4 concentrations (see Fig. 11). In this merging of the site-specific seasonal concentration weighted trajectory fields, the Iberian Peninsula appears in all seasons as an extensive area of origin for methane-poor air masses, in accordance with the previous results. A plausible explanation is the lack of wetlands in the characteristically dry Mediterranean area. Wetlands are the most important natural methane source and react very sensitively to climate changes and weather anomalies such as higher precipitation and temperatures. Methane emissions from wetlands are strongly triggered by the water supply, because higher water levels resulting from high amounts of precipitation promote the anaerobic conditions favouring methane formation. Decomposition of methane, in turn, can be favoured by higher temperatures. Anthropogenic sources such as rice fields and the burning of biomass are also subject to the influence of these climatic factors, so that warmth and humidity generally have great effects on the intensity of the CH4 emissions. This influence can be found again in the seasonal maps, not only in the southwestern European dry regions and their CH4-poor air masses, but also in the high CH4 emissions from regions with wide natural or artificial wetlands (in particular flooded former mining areas), as for example in Poland. The concentration-increasing influence of higher temperatures becomes visible in the seasonal CWT maps in the form of more intense CH4 emissions in summer and fall. In the winter season, the northernmost source regions (east of Poland, Slovakia, Hungary and Lithuania) for high CH4 concentrations measured at the Alpine sites disappear, since wetlands there are covered by snow and/or frozen and emit hardly any methane.
The emitting regions further west during winter may represent wood burning for heating purposes, as also seen in the analogous CWT maps for CO2. Nevertheless, the CH4 emissions from fires are many times lower than the amount of CO2 emitted at the same time [21]. This is also why wildfires in the Mediterranean region are not visible as emission hotspots in the seasonal CH4 CWT maps, although they are an important source in the equivalent figure for CO2.
Concentration weighted trajectory fields quantifying the seasonal influence of source and relative sink areas on the de-seasonalised and de-trended CH4 concentrations (in ppb) at the high-alpine receptor sites Jungfraujoch (JFJ), Plateau Rosa (PRO), Sonnblick (SOB) and Schneefernerhaus (UFS) over the entire study period 2011–2015
The basic phenomena of the detection of source and sink regions, with southwestern Europe characterised by negative CH4 contributions and emission-intensive regions in the north(west), the east and the south of Europe, are also evident in the annual analysis of the influence on the Alpine CH4 concentrations (see Fig. 12). Consistent with the previous results, the central Alpine region appears in all five yearly aggregated CWT maps as an area of negative contributions to the measured CH4 residuals. The differences between the years can again be explained by the climatic factors precipitation and temperature. Accordingly, years as well as regions with high precipitation sums, mostly in connection with warm summers and mild wet winters, produce high annual contributions to the CH4 concentrations of the Alpine region. The range of the annual differences in the contributions is approximately comparable to that of the seasonal contributions and spans values of − 20 to + 20 ppb averaged over the 5-year study period.
Concentration weighted trajectory fields quantifying the yearly influence of source and relative sink areas on the de-seasonalised and de-trended CH4 concentrations (in ppb) at the high-alpine receptor sites Jungfraujoch (JFJ), Plateau Rosa (PRO), Sonnblick (SOB) and Schneefernerhaus (UFS) over the entire study period 2011–2015
Uncertainty assessments
In comparable studies on the combination of measurement time series with trajectory analyses, Reimann et al. [24] find that this approach has the potential to yield conclusions on European emission quantities and to act as an independent tool for the verification of anthropogenic trace gas emissions as determined under international agreements such as the UNFCCC [35]. To assess to what extent the method applied here, and the results obtained with it, allow reliable conclusions on emitters and relative sinks together with their temporal variability, and in addition a logical interpretation of the results, a quantification of the uncertainties connected with the outcomes is required.
In principle, the reliability of a grid cell value is directly connected to the number of trajectories crossing that grid cell: the larger the number of single trajectory coordinates within a grid cell, the higher the certainty that the value calculated for that grid cell corresponds to reality and does not reflect a random extreme value or exception. Thus, with increasing trajectory frequency, the reliability of the calculated contribution of the grid cells of the investigation area to the climate gas measurements at the Alpine observatories rises accordingly. As we have four high-alpine sites as receptors of the particle dispersion modelling with FLEXPART, the grid cells with the strongest frequentation, and therefore the highest reliability, lie in close vicinity to these sites as well as in the central area of the Alpine investigation region spanned by the four stations, which corresponds approximately to the perimeter of the Alpine Convention (see Fig. 4). Owing to the prevailing westerly wind direction over the analysis area, located within the west wind zone, the further extension of the footprint representing the frequentation of the individual grid cells by the backward trajectories is not symmetrical in all directions, but shifted to the west. Altogether, more than 5% of the trajectory coordinates of the particle dispersion calculations during the 5-year investigation period 2011–2015 lie within the Alpine centre of the study region including the Alpine foreland. Therefore, a high reliability and explanatory power can be attributed to the results of this focus region.
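The down-weighting of sparsely frequented grid cells described earlier can be sketched with a piecewise weight that depends on how a cell's trajectory count compares with the domain mean. The specific weights and thresholds below are a common choice in the trajectory-statistics literature, not the exact values of this study:

```python
import numpy as np

def downweight(cwt, n_traj):
    """Down-weight CWT cells crossed by few trajectories (cf. [23]).

    The step thresholds (multiples of the mean count over non-empty cells)
    and the weights 0.7 / 0.4 / 0.2 are illustrative assumptions in the
    style of commonly used weighting schemes, not the authors' values.
    """
    n_mean = n_traj[n_traj > 0].mean()
    w = np.ones_like(cwt, dtype=float)          # full weight for busy cells
    w[n_traj <= 3.0 * n_mean] = 0.7
    w[n_traj <= 1.5 * n_mean] = 0.4
    w[n_traj <= 1.0 * n_mean] = 0.2             # weakest for sparse cells
    return cwt * w
```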
Besides the frequency of trajectories, the intensity of the contact with the planetary boundary layer close to the ground is another important criterion for estimating the reliability of the project results. The stronger the boundary layer contact within a grid cell, the higher the probability that the model's basic assumption, the absorption and effective transport of changes in the atmospheric trace gases while air passes a grid cell with sources or sinks, is realised. In analogy to the footprint, the highest values in the representation of the average proportional boundary layer contact derived from the backward trajectories for each grid cell during the analysed years 2011–2015 are also found near the receptor observatories, where, by construction of the model, trajectories arrive in close proximity to ground level. Over the further Alpine region and its foreland, the contact with the boundary layer during the study period amounts to at least 20% and does not drop below mean values of ten per cent over the remaining European continent, except for the eastern edge area (see Fig. 13). These comparatively high values underline the plausibility of the exchange process with the near-surface boundary layer assumed in the model and, just as the considerations on the footprint, lend credibility to the results for the core study region of the Alps and the Central European surroundings.
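The boundary layer contact diagnostic can be illustrated as the fraction of trajectory time steps at which the particle height lies below the local boundary layer top; the function and its inputs are an illustrative reconstruction, not the FLEXPART output format:

```python
def pbl_contact_fraction(traj_heights, pbl_heights):
    """Fraction of trajectory time steps spent inside the planetary
    boundary layer, i.e. with particle height at or below the local PBL
    top at the same time step (illustrative reconstruction).
    """
    inside = sum(1 for z, h in zip(traj_heights, pbl_heights) if z <= h)
    return inside / len(traj_heights)
```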
Mean percentage contact of the centroid tracks from the FLEXPART particle dispersion modelling for the receptor sites Jungfraujoch (JFJ), Plateau Rosa (PRO), Sonnblick (SOB) and Schneefernerhaus (UFS) with the near-surface planetary boundary layer during the study period 2011–2015 (in %)
Plausibility checks
Model-intrinsic estimates of the uncertainties and limits of the reliability of the results serve, just like comparisons with other models, to judge the model quality. The latter complete the confirmation of the reliability and validity of the model as an external quality control. Because no analogous models in the strict sense, devoted to the same scientific question, are available for comparison with the approach used here, the plausibility checks rely on the results of the inverse modelling of climate gas fluxes and concentrations from the prominent projects Copernicus Atmosphere Monitoring Service (CAMS) (https://atmosphere.copernicus.eu/) and Jena CarboScope (http://www.bgc-jena.mpg.de/CarboScope/). To this end, the flux or concentration data of both models are compiled case-specifically and compared with our results so that an assessment of the plausibility can be made.
The CAMS CO2 surface fluxes are estimated with a variational Bayesian inversion system at a resolution of 3.75 × 1.9 degrees (longitude–latitude) and 3-hourly time steps, based on CO2 mole fraction station records from the following large databases, which include both in situ measurements made by automated quasi-continuous analysers and irregular air samples collected in flasks and later analysed in central facilities:
The NOAA Earth System Research Laboratory archive (NOAA CCGG),
The World Data Centre for Greenhouse Gases archive (WDCGG),
The Réseau Atmosphérique de Mesure des Composés à Effet de Serre database (RAMCES),
The Integrated Carbon Observation System – Atmospheric Thematic Centre (ICOS-ATC).
Fluxes and mole fractions are linked in the system by the global atmospheric transport model of the Laboratoire de Météorologie Dynamique. More detailed information can be found in Chevallier et al. [10] or Chevallier et al. [11].
For comparison with the project results from Chapter 4.1 for the purpose of plausibility checks, the 3-h intervals of fossil emissions are aggregated seasonally, together with the biospheric (posterior) fluxes, for the analogous study region of Central Europe with the Alps in the centre over the years 2011–2015. The result is shown in Fig. 14, where negative CO2 surface fluxes are visualised in blue tones and positive CO2 surface fluxes in red tones. In these seasonal maps provided for the comparisons on the basis of the CAMS data, it is immediately apparent that the sign of the combined biogenic and fossil CO2 surface fluxes shifts with the seasons. While negative CO2 surface fluxes of over 1 kg of carbon per m2 and month prevail over large parts of the continent in summer, positive CO2 surface fluxes dominate over all of Central Europe in winter, and, a little more weakly, already in fall, with hotspot areas in the east and northwest. The latter emission hotspot is not seen with comparable intensity in the CWT maps, which might be due to the large distance from the high-alpine receptor observatories. Over this wide distance, combined with the thinning-out effect of the Atlantic in the northwest of Central Europe, near-ground changes of the climate gas concentration can no longer be detected adequately by stations in the Alps and traced back by means of trajectories. At this point, the spatial limitations of the CWT analysis based on the Alpine measurement time series become apparent (see Chapter 5 and Fig. 13). Nevertheless, our study reproduces the seasonal variations of the source and relative sink regions of the Central European mainland, with the exception of the northwest of Europe, during 2011–2015 in high correspondence with the CAMS CO2 surface fluxes.
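The seasonal aggregation of the 3-hourly CAMS fluxes can be sketched as a grouping of time-stamped values into the meteorological seasons (DJF, MAM, JJA, SON); the record format below is an illustrative assumption:

```python
from collections import defaultdict

# Meteorological seasons: Dec-Jan-Feb, Mar-Apr-May, Jun-Jul-Aug, Sep-Oct-Nov
SEASON = {12: "DJF", 1: "DJF", 2: "DJF", 3: "MAM", 4: "MAM", 5: "MAM",
          6: "JJA", 7: "JJA", 8: "JJA", 9: "SON", 10: "SON", 11: "SON"}

def seasonal_means(records):
    """Aggregate (month, flux) records into per-season mean fluxes.
    Illustrative sketch of the seasonal aggregation; the (month, flux)
    record format is an assumption, not the CAMS file layout.
    """
    sums, counts = defaultdict(float), defaultdict(int)
    for month, flux in records:
        season = SEASON[month]
        sums[season] += flux
        counts[season] += 1
    return {s: sums[s] / counts[s] for s in sums}
```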
More detailed, value-based comparisons are not possible due to the very different spatial resolutions as well as the unequal units of the two methodical approaches, since surface fluxes are compared with concentration anomalies taken as proxies of emission and deposition fluxes.
Seasonally aggregated mean CO2 surface fluxes of the Copernicus Atmosphere Monitoring Service for fossil and biospheric (posterior) emissions of the years 2011–2015 over Central Europe (in kg carbon/m2/month)
In addition, the Jena CarboScope project provides estimates of the CO2 fluxes between the Earth's surface and the atmosphere as well as modelled CO2 concentration fields at different altitudes in global coverage. The latter are calculated using forward simulations of the atmospheric transport model TM3 based on meteorological reanalysis fields. The input data of these simulations are inverse-modelled CO2 fluxes, which in turn are based on the same transport model in conjunction with observation data. Values between the measuring stations taken into account are estimated by means of interpolation. The resulting CO2 concentration fields are output in the form of NetCDF files with a resolution of 5 × 3.75° in the unit of ppm. With this, the Jena CarboScope project provides estimates of the surface-atmosphere CO2 exchange based on atmospheric measurements with a focus on its temporal variations [25, 26].
The near-surface atmospheric CO2 concentration fields at the mean sea-level pressure of 1013 hPa from the most current version of the Jena CarboScope model runs (s04_v4.1) for the analogous investigation period 2011–2015 and the region of Central Europe, with the Alps in the centre, are aggregated yearly as well as seasonally for an additional external plausibility check. In the comparison of our results with the year-wise compilation of the near-surface CO2 concentration fields of the Jena CarboScope project (see Fig. 15), clear common characteristics are recognisable, for example the localisation of the highest positive CO2 contributions or concentrations, respectively, over all years in the northern regions of Central Europe, as well as their strongest occurrence during the 5-year study period in 2013, the year of the intense winter. Negative annual means of the CO2 contributions or concentrations, respectively, appear in the southwest of Central Europe. These regions of positive or negative CO2 anomalies are reproduced in the comparable representations not only in their coarse localisation, but also in their scale, in very good agreement between the two models: both sets of results show annual amplitudes of approximately five ppm in this comparison. These parallels, in spite of the differing model approaches and in particular their disparate resolutions, underline once more the reliability of the characterisation achieved in the present analysis of the Alpine CO2 concentrations with its focus on the influence of surrounding source and relative sink regions and their temporal variability.
Yearly mean CO2 concentrations (in ppm) at the surface over Central Europe aggregated from the Jena CarboScope estimates of the surface-atmosphere CO2 exchange based on atmospheric measurements (run ID version s04_v4.1, C. Rödenbeck) for the analysed time period 2011–2015
The comparison of the seasonal configurations of both model datasets (see Figs. 7, 16) likewise shows fundamental features in agreement, such as the clear identification of the two regions of positive CO2 anomalies: the north-western Mediterranean area in summer and the eastern regions of Central Europe in the winter months. The areas of negative CO2 contributions and concentrations, respectively, are also similarly localised in their seasonal occurrences by both models; the southwest of Central Europe stands out especially in winter, but already in the transitional seasons, and in summer in particular the north-eastern areas of the investigation region. Beside the similar localisation of the CO2 source and relative sink regions over the course of the year, the amplitudes within the single seasons are comparable. Altogether, the parallels found when comparing the seasonally aggregated CO2 data of both models bring out a substantial similarity in the identification of the locations and intensities of the sources and relative sinks influencing the atmospheric CO2 concentration, as well as their seasonal variability. These agreements, despite the distinct differences in the underlying project objectives and methods, again encourage us to assess the reliability of the presented study and its underlying methodology positively.
Seasonal mean CO2 concentrations (in ppm) at the surface over Central Europe aggregated from the Jena CarboScope estimates of the surface-atmosphere CO2 exchange based on atmospheric measurements (run ID version s04_v4.1, C. Rödenbeck) for the exemplary year 2011
The calculation of the CAMS CH4 surface fluxes builds on a four-dimensional, multi-parameter data assimilation system for inverse modelling based on the TM5 atmospheric transport model [3, 22]. Remote sensing data from the SCIAMACHY (Scanning Imaging Absorption Spectrometer for Atmospheric Chartography) instrument aboard the European environmental satellite Envisat, as well as high-precision CH4 measurements of the NOAA network, enter the inversion setup. Further details about the model can be found in Segers and Houweling [27].
The complete methane fluxes of CAMS for the study period 2011–2015 have been aggregated seasonally (see Fig. 17), in analogy to the corresponding seasonal CWT maps (see Fig. 11). The comparison of the two plots shows that both models indicate hardly any seasonal differences in the methane input of Central Europe and, moreover, detect no significant CH4 emissions from the Iberian Peninsula, regardless of the season. The latter region is characterised in just the same way by the CWT approach, so that both methods correspondingly attribute a negative impact on the Alpine CH4 measurements to the north-western Mediterranean area. From the more northern regions of Central Europe, in turn, partially very high positive source contributions are detected in all seasons. Here, the main focus in the CAMS CH4 surface fluxes lies in the northwest, whereas the CWT method localises the methane sources in particular in the north and the east. Thus, the strong northeast-southwest gradient of CH4 emissions seen in the CWT maps does not appear in the CAMS data, since the CAMS CH4 surface fluxes rather show clear maxima in the highly populated areas (London, Paris, the Netherlands and Belgium), which in turn do not appear in the results of the CWT analyses. This is due to the catchment area of the CWT analyses being restricted to the Alpine region and its wider surroundings, as previously seen in the comparison of the CO2 emission estimates ("Characterisation of the Alpine CO2 budget" and "CO2" sections). It is owed to the position of the four considered Alpine observatories: Schneefernerhaus and Sonnblick still cover the areas to the east and north of the Alps well with their footprint, but no longer capture the air masses from the far northwest of Europe with sufficient frequency and intensity.
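The seasonal aggregation applied to both the CWT maps and the CAMS fluxes amounts to binning monthly fields into the standard meteorological seasons. A minimal sketch, with synthetic scalar values standing in for the gridded CH4 surface fluxes:

```python
# Sketch: binning monthly flux values into meteorological seasons.
# The month-to-season mapping is the standard one (DJF, MAM, JJA, SON);
# the toy values below replace the gridded CAMS CH4 surface fluxes.

SEASONS = {"DJF": (12, 1, 2), "MAM": (3, 4, 5),
           "JJA": (6, 7, 8), "SON": (9, 10, 11)}

def seasonal_means(monthly):
    """monthly: dict month-number -> value; returns dict season -> mean."""
    return {s: sum(monthly[m] for m in months) / 3
            for s, months in SEASONS.items()}

fluxes = {m: float(m) for m in range(1, 13)}   # toy monthly values 1..12
print(seasonal_means(fluxes)["JJA"])           # mean of 6, 7, 8 -> 7.0
```

With real data each monthly entry would be a 2-D field and the mean would be taken cell by cell, as in the yearly case.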
Here, the thinning effect (strengthened further for sources in England by the unavoidable crossing of the Atlantic Ocean) demonstrates the limitations of the CWT approach, since emitters located in the northwest of Europe are too distant for their impact to still be captured reliably by measurements at the Alpine observatories. Furthermore, westerly winds are typically accompanied by a lower degree of stability in the vertical layering of the atmosphere, implying that westerly transport processes are not as stable and persistent as advection from an easterly direction.
Seasonally aggregated mean CH4 surface fluxes of the Copernicus Atmosphere Monitoring Service for the total methane emissions of the years 2011–2015 over Central Europe (in µg CH4/m2/s)
To sum up, the comparison of the approaches to tracing methane sources and relative sinks gives results similar to the analogous comparison for CO2. Deficits of the CWT analyses for areas remote from the considered receptor observatories are somewhat more apparent for CH4. In the narrower catchment area as well as in the basic structures, the very good correspondence of both models is confirmed. Taking into account the different methodical approaches of the CO2 and CH4 budget estimations (detection of relevant source and sink regions on the basis of measurement data versus inverse modelling for the quantification of the global climate gas fluxes), the very clear similarities of both models suggest that our methodology is highly reliable for our purposes.
Conclusions and outlook
The synopsis of the previous chapters implies a high level of functionality and reliability of the modelling methodology we use to characterise the Alpine CO2 and CH4 budgets on the basis of atmospheric measurement time series from the observatories situated there. Regarding the clear parallels that can be seen in comparison with results from the inverse modelling of climate gas fluxes and concentrations—such as those derived from the Copernicus Atmosphere Monitoring Service (CAMS) and the Jena CarboScope project—the methodology of our study can be considered well capable of reliably detecting climate gas-specific source and relative sink regions with influence on the measurements at the Alpine receptor sites. The reliable project results, in conjunction with the positive results of the model's internal uncertainty estimates and external plausibility checks, highlight the model accuracy of the approach and, above all, underline the model's strength in accurately mapping spatiotemporal variations of the relevant emitters and absorbers of different climate gases (CO2 and CH4).
Thus, the highest positive contributions to the Alpine CO2 concentrations derive from Eastern Europe, where the biggest European brown-coal mining areas are situated. With the same trustworthiness, we have identified typical CH4 source and relative sink regions, with southwestern Europe characterised by negative CH4 contributions and emission-intensive regions in the north(west), the east and the south of Europe. The only exception to this northeast-southwest distinction is the Alpine region, which stands out as a relative CH4 sink among the surrounding source regions. The Alpine core region therefore represents a significant relative sink region for both examined climate gases (CO2 and CH4) in the midst of a partly emission-intensive Europe with regard to the study period, and thus deserves a special protection status from the climate protection perspective as well.
An absolute prerequisite for a successful application of our model to the identification of potential emission hotspots and relative sink regions, together with their temporal variability, has been a sufficiently intensive coverage of the study region by the centroid pathways of the particle dispersion calculations. Only if the frequentation by the backward trajectories is high enough, and the investigation period or the number of stations involved, respectively, is large enough for a sufficiently covered catchment area, can the CWT analyses produce meaningful maps. This restriction limits the application of the methodology to problems with low temporal resolution, for example seasonal or annual analyses, as in the present study. For the detection of relatively stationary source and sink regions and of the variability of their influence over the course of the year, the method presented here on the basis of atmospheric measurement time series is very well suited, provided that an adequate data basis is guaranteed in the form of long-term measurement series of one station or shorter measurement series, covering at least a few years, of several stations.
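For illustration, the concentration-weighted trajectory (CWT) statistic underlying these maps can be sketched as follows. The grid and trajectory data are invented toy values; the actual analysis operates on FLEXPART centroid trajectories and a 0.2° grid:

```python
# Sketch of a concentration-weighted trajectory (CWT) field:
#   CWT_ij = sum_l c_l * tau_ijl / sum_l tau_ijl
# where c_l is the receptor concentration of trajectory l and tau_ijl the
# residence time of that trajectory in grid cell (i, j). Data are illustrative.

def cwt(concentrations, residence_times):
    """residence_times[l][i][j]: hours trajectory l spent in cell (i, j)."""
    rows = len(residence_times[0])
    cols = len(residence_times[0][0])
    field = [[0.0] * cols for _ in range(rows)]
    for i in range(rows):
        for j in range(cols):
            tau = [rt[i][j] for rt in residence_times]
            total = sum(tau)
            if total > 0:                  # leave uncovered cells at zero
                field[i][j] = sum(c * t for c, t in
                                  zip(concentrations, tau)) / total
    return field

c = [410.0, 395.0]                 # ppm at the receptor, one value per trajectory
tau = [[[4.0, 0.0]], [[4.0, 2.0]]] # 1 x 2 grid, two trajectories
print(cwt(c, tau))                 # cell 0: (410*4 + 395*4)/8 = 402.5
```

The guard on `total` is exactly the coverage requirement discussed above: cells the trajectories rarely visit have no defensible CWT value.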
Taking into account these requirements and the associated limitations of the methodological approach, the presented method fulfils the objectives of the study, as the plausible results of the fourth chapter and their comparison with largely comparable models demonstrate. The project results in the form of the CWT maps attest to the methodology's usefulness in answering the study's scientific questions, bringing out strong advantages such as the high spatial resolution (0.2° × 0.2°) or the climate gas specificity. Another benefit of the model is that it does not require a priori emission data, which means that the resulting outputs can potentially serve as an additional option for independent top-down plausibility checks and hence for the verification of bottom-up emission inventories. In order to review the emission statistics, most of which are produced nationally, using the project's methodology, it is necessary to transfer the database to a study area within national borders, in compliance with the requirements outlined above. The results presented here for the transnational study region of the Alps suggest a potentially promising transfer of the project approach to areas within national borders for the purpose of an independent validation of a country's emission inventories, and open up future application possibilities for the model beyond those presented.
Data and material are available from the GAW WDCGG and the German Environment Agency.
GAW:
Global Atmosphere Watch of WMO/UNO
GCOS:
Global Climate Observing System of WMO/UNO
CAMS:
Copernicus Atmosphere Monitoring Service
NOAA CCGG:
NOAA Earth System Research Laboratory archive
WDCGG:
World Data Centre for Greenhouse Gases
RAMCES:
Réseau Atmosphérique de Mesure des Composés à Effet de Serre database
ICOS-ATC:
Integrated Carbon Observation System – Atmospheric Thematic Centre
NOAA:
National Oceanic and Atmospheric Administration (USA)
Apadula F, Gotti A, Pigini A, Longhetto A, Rocchetti F, Cassardo C, Ferrarese S, Forza R (2003) Localization of source and sink regions of carbon dioxide through the method of the synoptic air trajectory statistics. Atmos Environ 37:3757–3770
Begum BA, Kim E, Jeong C-H, Lee D-W, Hopke PK (2005) Evaluation of the potential source contribution function using the 2002 Quebec forest fire episode. Atmos Environ 39:3719–3724
Bergamaschi P, Houweling S, Segers A, Krol M, Frankenberg C, Scheepmaker RA, Dlugokencky E, Wofsy SC, Kort EA, Sweeney C, Schuck T, Brenninkmeijer C, Chen H, Beck V, Gerbig C (2013) Atmospheric CH4 in the first decade of the 21st century: inverse modeling analysis using SCIAMACHY satellite retrievals and NOAA surface measurements. J Geophys Res Atmos 118:7350–7369
Brankov E, Rao ST, Porter PS (1998) A trajectory-clustering-correlation methodology for examining the long-range transport of air pollutants. Atmos Environ 32:1525–1534
Brunner D, Henne S, Keller CA, Reimann S, Vollmer MK, O'Doherty S, Maione M (2012) An extended Kalman-filter for regional scale inverse emission estimation. Atmos Chem Phys 12:3455–3478
Brunner D, Henne S, Keller CA, Vollmer MK, Reimann S (2013) Estimating European halocarbon emissions using lagrangian backward transport modeling and in situ measurements at the Jungfraujoch High-Alpine Site. In: Lin JC, Gerbig C, Brunner D, Stohl A, Luhar A, Webley P (eds) Lagrangian modeling of the atmosphere, of geophysical monographs. AGU, Washington, DC, pp 207–221
Carslaw DC (2015) The openair manual–open-source tools for analyzing air pollution data. Manual for version 1.1–4, King's College London
Carslaw DC, Ropkins K (2012) openair—an R package for air quality data analysis. Environ Model Softw 27–28:52–61
Cheng I, Xu X, Zhang L (2015) Overview of receptor-based source apportionment studies for speciated atmospheric mercury. Atmos Chem Phys 15:7877–7895
Chevallier F, Broquet G, Pierangelo C, Crisp D (2017) Probabilistic global maps of the CO2 column at daily and monthly scales from sparse satellite measurements: satellite-based probabilistic XCO2 maps. J Geophys Res Atmos. https://doi.org/10.1002/2017jd026453
Chevallier F, Ciais P, Conway TJ, Aalto T, Anderson BE, Bousquet P, Brunke EG, Ciattaglia L, Esaki Y, Fröhlich M, Gomez A, Gomez-Pelaez AJ, Haszpra L, Krummel PB, Langenfels RL, Leuenberger M, Machida T, Maignan F, Matsueda H, Morgui JA, Mukai H, Nakazawa T, Peylin P, Ramonet M, Rivier L, Sawa Y, Schmidt M, Steele LP, Vay SA, Vermeulen AT, Wofsy S, Worthy D (2010) CO2 surface fluxes at grid point scale estimated from a global 21 year reanalysis of atmospheric measurements. J Geophys Res Atmos. https://doi.org/10.1029/2010jd013887
Cleveland RB, Cleveland WS, McRae JE, Terpenning I (1990) STL: a seasonal-trend decomposition procedure based on loess. J Off Stat 6:3–73
Crawford J, Zahorowski W, Cohen DD (2009) A new metric space incorporating radon-222 for generation of back trajectory clusters in atmospheric pollution studies. Atmos Environ 43:371–381
Folini D, Ubl S, Kaufmann P (2008) Lagrangian particle dispersion modeling for the high Alpine site Jungfraujoch. J Geophys Res. https://doi.org/10.1029/2007jd009558
Hafen RP (2010) Local Regression Models: Advancements, applications, and new methods. Dissertation submitted to the Faculty of Purdue University, West Lafayette, Indiana
Henne S, Brunner D, Oney B, Leuenberger M, Eugster W, Bamberger I, Meinhardt F, Steinbacher M, Emmenegger L (2016) Validation of the Swiss methane emission inventory by atmospheric observations and inverse modelling. Atmos Chem Phys 16:3683–3710
Hopke PK (2003) Recent developments in receptor modeling. J Chemometr 17:255–265
Intergovernmental Panel on Climate Change (IPCC) (2013) Climate Change 2013—The Physical Science Basis. Working Group I Contribution to the Fifth Assessment Report of the Intergovernmental Panel on Climate Change. Cambridge University Press, New York
Kaiser A, Scheifinger H, Spangl W, Weiss A, Gilge S, Fricke W, Ries L, Cemas D, Jesenovec B (2007) Transport of nitrogen oxides, carbon monoxide and ozone to the Alpine Global Atmosphere Watch stations Jungfraujoch (Switzerland), Zugspitze and Hohenpeissenberg (Germany), Sonnblick (Austria) and Mt. Krvavec (Slovenia). Atmos Environ 41:9273–9287
Keeling CD (1993) Global observations of atmospheric CO2. In: Heimann M (ed) The global carbon cycle. Springer, New York, pp 1–29
Lavorel S, Flannigan MD, Lambin EF, Scholes MC (2007) Vulnerability of land systems to fire: interactions among humans, climate, the atmosphere, and ecosystems. Mitig Adapt Strat Glob Change 12:33–53
Meirink JF, Bergamaschi P, Krol MC (2008) Four-dimensional variational data assimilation for inverse modelling of atmospheric methane emissions: method and comparison with synthesis inversion. Atmos Chem Phys Disc 8:12023–12052
Polissar AV, Hopke PK, Paatero P, Kaufmann YJ, Hall DK, Bodhaine BA, Dutton EG, Harris JM (1999) The aerosol at Barrow, Alaska: long-term trends and source locations. Atmos Environ 33:2441–2458
Reimann S, Vollmer MK, Folini D, Steinbacher M, Hill M, Buchmann B, Zander R, Mahieu E (2008) Observations of long-lived anthropogenic halocarbons at the high-Alpine site of Jungfraujoch (Switzerland) for assessment of trends and European sources. Sci Total Environ 391:224–231
Rödenbeck C (2005) Estimating CO2 sources and sinks from atmospheric mixing ratio measurements using a global inversion of atmospheric transport. Technical Report 6, Max Planck Institute for Biogeochemistry, Jena
Rödenbeck C, Zaehle S, Keeling R, Heimann M (2018) How does the terrestrial carbon exchange respond to interannual climatic variations? A quantification based on atmospheric CO2 data. Biogeosci Discuss. 1:1. https://doi.org/10.5194/bg-2018-34
Segers A, Houweling S (2017) Description of the CH4 Inversion Production Chain. ECMWF Copernicus Report
Seibert P, Kromp-Kolb H, Kasper A, Kalina M, Puxbaum H, Jost D, Schwikowski M, Baltensperger U (1998) Transport of polluted boundary layer air from the Po Valley to high-alpine sites. Atmos Environ 32:3953–3965
Seibert P, Kromp-Kolb H, Baltensperger U, Jost D (1994) Trajectory analysis of high-alpine air pollution data. In: Gryning S-E, Millán MM (eds) Air pollution modeling and its application X. NATO-Challenges of Modern Society. Springer, Boston, pp 595–596
Stohl A, Forster C, Frank A, Seibert P, Wotawa G (2005) Technical note: The Lagrangian particle dispersion model FLEXPART version 6.2. Atmos Chem Phys 5:2461–2474
Stohl A, Thomson DJ (1999) A density correction for Lagrangian Particle dispersion models. Bound-Lay Meteorol 90:155–167
Watson JG, Chen AL-W, Chow JC, Doraiswamy P, Lowenthal DH (2008) Source apportionment: findings from the U.S. Supersites Program. J Air Waste Manage Assoc 58:265–288
World Meteorological Organization (WMO) (2017) Greenhouse Gas Bulletin: The state of greenhouse gases in the atmosphere based on global observations through 2016. WMO, Geneva
World Meteorological Organization (WMO) (2018) GAW Report No. 242—19th WMO/IAEA meeting on carbon dioxide, other greenhouse gases and related measurement techniques (GGMT-2017), Dübendorf, Switzerland
World Meteorological Organization (WMO) (2016) GAW Report No. 229—18th WMO/IAEA meeting on carbon dioxide, other greenhouse gases and related tracers measurement techniques (GGMT-2015), La Jolla, CA, USA
Zhang F, Fukuyama Y, Wang Y, Fang S, Li P, Fan T, Zhou L, Liu X, Meinhardt F, Emiliani P (2015) Detection and attribution of regional CO2 concentration anomalies using surface observations. Atmos Environ 123:88–101
This research received funding by the ReFoPlan from the German Environment Agency (UBA) on behalf of the German Ministry for the Environment as well as from the Bavarian State Ministry of the Environment and Consumer Protection within the Virtual Alpine Observatory II (VAOII) project. The particle dispersion modelling was carried out with the help of the team of the Alpine Environmental Data Analysis Centre (http://www.AlpEnDAC.eu), using the corresponding computing facilities (here: LRZ Linux Cluster). This work would not have been possible without the continuous observations at the Schneefernerhaus (UFS) by the German Environment Agency (UBA) together with the German Weather Service (DWD), at Jungfraujoch (JFJ) by the Swiss Federal Laboratories for Materials Science and Technology (Empa), at Sonnblick (SOB) by the Environment Agency Austria and at Plateau Rosa (PRO) by Ricerca sul Sistema Energetico (RSE). Measurements were made available through the WMO Global Atmosphere Watch World Data Centre for Greenhouse Gases (https://ds.data.jma.go.jp/gmd/wdcgg/). Additionally, we want to thank the scientists involved in the cited projects Copernicus Atmosphere Monitoring Service (CAMS) (https://atmosphere.copernicus.eu/) and Jena CarboScope (http://www.bgc-jena.mpg.de/CarboScope/) for providing the data for the comparisons and plausibility checks we carried out. Special thanks are also expressed to the researchers who developed (and further developed) the software used within this study (inter alia the LPDM FLEXPART and the R package openair) and the methods specified in the following references. Finally, we thank the anonymous reviewers for their careful reading of our manuscript and their many insightful comments and suggestions.
This research has been funded by the Federal Ministry for the Environment, Nature Conservation and Nuclear Safety (Award Number: FKZ 3716512040) and the Bavarian State Ministry of the Environment and Consumer Protection (Award Number: VAO II-TP I/02). Recipient: Prof. Dr. Jucundus Jacobeit.
Institute of Geography, University of Augsburg, 86159, Augsburg, Germany
Esther Giemsa & Jucundus Jacobeit
German Environment Agency, Environment Research Station Schneefernerhaus, 82475, Zugspitze, Germany
Ludwig Ries
Research Department (FOR), Leibniz Supercomputing Centre (LRZ), 85748, Garching (Bei München), Germany
Stephan Hachinger
Esther Giemsa
Jucundus Jacobeit
EG designed the computational framework of the study, derived the model and analysed the data. LR developed the basic idea for the project and provided all the technical details concerning the measurements of the CO2 and CH4 concentrations in compliance with the strict GAW standards. SH performed the particle dispersion runs with FLEXPART at the Leibniz Supercomputing Centre (LRZ) and was the expert for all matters related to the LPDM simulations. JJ supervised the project and contributed to the interpretation of the results. EG designed the figures and wrote the manuscript with input from all authors. The whole team of authors provided critical feedback and helped shape the research and analysis. All authors read and approved the final manuscript.
Correspondence to Esther Giemsa.
The authors declare that consent for publication exists.
Giemsa, E., Jacobeit, J., Ries, L. et al. Investigating regional source and sink patterns of Alpine CO2 and CH4 concentrations based on a back trajectory receptor model. Environ Sci Eur 31, 49 (2019). https://doi.org/10.1186/s12302-019-0233-x
CO2 and CH4 concentrations
High-alpine observatories
Back trajectory receptor model
Regional source and sink patterns | CommonCrawl |
Study on Operating Performance Estimation Process of Electric Propulsion Systems for 2.5 Displacement Ton Class Catamaran Fishing Boat
Jeong, Yong-Kuk; Lee, Dong-Kun; Jeong, Uh-Cheul; Ryu, Cheol-Ho; Oh, Dae-Kyun; Shin, Jong-Gye
https://doi.org/10.5574/KSOE.2013.27.5.001
Because the environmental regulations for ships are getting tighter, green ships employing eco-friendly technology have recently received a large amount of attention. Among them, various studies for electric propulsion ships have been carried out, particularly in the United States, European Union, and Japan. On the other hand, research related to electric propulsion ships in Korea is in a very nascent stage. In this paper, an estimation process based on the rough requirements of ship-owners for the operating performance of electric propulsion ships is proposed. In addition, the estimation process is applied to a small fishing boat for verification of the process. These results are expected to be used as design guidelines in the early stage of the design process for electric propulsion ships.
Crack-healing Behavior and Corrosion Characteristics of SiC Ceramics
Hwang, Jin Ryang; Kim, Dae Woong; Nam, Ki Woo
The crack-healing behavior and corrosion resistance of SiC ceramics were investigated. Heat treatments were carried out from 900 °C to 1300 °C. A corrosion test of SiC was carried out in acid and alkaline solutions under KSL1607. The results showed that heat treatment in air could significantly increase the strength. The heat-treatment temperature has a profound influence on the extent of crack healing and the degree of strength recovery. The optimum heat treatment was 1100 °C for one hour in air. In the two kinds of solutions, the cracks in a specimen were reduced with increasing time, and the surface of the crack-healed specimen showed a greater number of black and white spots. The strength of the corroded cracked specimen was similar to that of the cracked specimen. The strength of the corroded crack-healed specimen decreased by 47% and 75% compared to that of the crack-healed specimen in the acid and alkaline solutions, respectively. Therefore, the corrosion of SiC ceramics is faster in an alkaline solution than in an acid solution.
Study on Flow Around Circular Cylinder Advancing Beneath Free Surface
Yi, Hyuck-Joon; Shin, Hyun-Kyung; Yoon, Bum-Sang
The flow around a circular cylinder advancing beneath the free surface is numerically investigated using a VOF method. The simulations cover Froude numbers in the range of 0.2~0.6 and gap ratios (h/d) in the range of 0.1~2.0, where h is the distance from the free surface to a cylinder, and d is the diameter of a cylinder at Reynolds number 180. It is observed that the vortex suppression effect and surface deformation increase as the gap ratio decreases or the Froude number increases. The most important results of the present study are as follows. The proximity of the free surface causes an initial increase in the Strouhal number and drag coefficient, and the maximum Strouhal number and drag coefficient occur in the range of 0.5~0.7. However, this trend reverses as the gap ratio becomes small, and the lift coefficient increases downward as the gap ratio decreases.
Thermal Shock Properties of 316 Stainless Steel
Lee, Sang-Pill; Kim, Young-Man; Min, Byung-Hyun; Kim, Chang-Ho; Son, In-Soo; Lee, Jin-Kyung
The present work dealt with the high-temperature thermal shock properties of 316 stainless steels, in conjunction with a detailed analysis of their microstructures. In particular, the effects of the thermal shock temperature difference and thermal shock cycle number on the properties of 316 stainless steels were investigated. A thermal shock test for 316 stainless steel was carried out at thermal shock temperature differences from 300 °C to 1000 °C. The cyclic thermal shock test for the 316 stainless steel was performed at a thermal shock temperature difference of 700 °C for up to 100 cycles. The characterization of the 316 stainless steels was performed using an optical microscope and a three-point bending test. Both the microstructure and flexural strength of 316 stainless steels were affected by the high-temperature thermal shock. The flexural strength of 316 stainless steels gradually increased with an increase in the thermal shock temperature difference, accompanied by a growth in the grain size of the microstructure. However, a thermal shock temperature difference of 800 °C produced a decrease in the flexural strength of the 316 stainless steel because of damage to the material surface. The properties of 316 stainless steels greatly depended on the thermal shock cycle number. In other words, the flexural strength of 316 stainless steels decreased with an increase in the thermal shock cycle number, accompanied by a linear growth in the grain size of the microstructure. In particular, the 316 stainless steel had a flexural strength of about 500 MPa at 100 thermal shock cycles, which corresponded to about 80% of the strength of the as-received materials.
Determination of Nesting Algorithm Fitness Function through Various Numerical Experiments
Lee, Hyebin; Ruy, WonSun
In this paper, research on the composition of the nesting algorithm fitness function is carried out by performing various numerical experiments to inspect how it affects the scrap efficiency, allocation characteristics, and time consumption, targeting the nesting results of ship parts. This paper specifically concentrates on a method to minimize the scrap ratio and efficiently use the well-defined remnants of a raw plate after the nesting process for remnant nesting. Therefore, experiments for various ship parts are carried out with the weighting factor method, one of the multi-objective optimum design methods. Using various weighting factor sets, the nesting results are evaluated in accordance with the above purposes and compared for each set and each ship part group. Consequently, it is suggested that the nesting algorithm fitness function should be constructed differently depending on the characteristics of the parts and the needs of the users.
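As a sketch of the weighting factor method referred to in the abstract, a weighted-sum fitness might combine normalised objectives as below. The objective names, weights, and values are illustrative assumptions, not the paper's actual formulation:

```python
# Sketch of a weighted-sum nesting fitness (weighting factor method).
# Objectives and weights are hypothetical; lower fitness is better.

def fitness(scrap_ratio, remnant_penalty, time_norm, w=(0.6, 0.3, 0.1)):
    """Combine normalised objectives (each in [0, 1]) with weighting factors."""
    return w[0] * scrap_ratio + w[1] * remnant_penalty + w[2] * time_norm

# Two candidate layouts: B wastes less material but needs more computation time
a = fitness(scrap_ratio=0.12, remnant_penalty=0.40, time_norm=0.2)
b = fitness(scrap_ratio=0.08, remnant_penalty=0.35, time_norm=0.6)
print(round(a, 3), round(b, 3))   # 0.212 0.213 -> the weights decide the trade-off
```

Changing the weight set flips which layout wins, which is exactly why the abstract concludes that the fitness composition should depend on the part characteristics and user needs.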
Optimum Design of Truss on Sizing and Shape with Natural Frequency Constraints and Harmony Search Algorithm
Kim, Bong-Ik; Kwon, Jung-Hyun
We present the optimum design for the cross-sectional (sizing) and shape optimization of truss structures with natural frequency constraints. The optimum design method used in this paper employs continuous design variables and the Harmony Search Algorithm (HSA). HSA is a meta-heuristic search method for global optimization problems. In this paper, HSA uses the method of random number selection in an update process, along with penalty parameters, to construct the initial harmony memory in order to improve the fitness in the initial and update processes. In the examples, 10-bar and 72-bar trusses are optimized for sizing, and a 37-bar bridge-type truss and a 52-bar (dome-like) truss for sizing and shape. Four typical truss optimization examples are employed to demonstrate the availability of HSA for finding the minimum-weight optimum truss with multiple natural frequency constraints.
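A minimal Harmony Search sketch on a toy minimisation problem illustrates the memory consideration, pitch adjustment, and random selection steps the abstract refers to. The parameters (HMS, HMCR, PAR) are the standard ones from the HSA literature; nothing here reproduces the paper's truss formulation or penalty scheme:

```python
import random

# Minimal Harmony Search sketch minimising f(x) = sum(x_i^2) on [-5, 5]^2.
# Toy problem only; the paper applies HSA to truss weight with frequency
# constraints handled via penalty parameters.

def harmony_search(f, dim=2, lo=-5.0, hi=5.0, hms=10, hmcr=0.9, par=0.3,
                   iters=2000, seed=1):
    rng = random.Random(seed)
    memory = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(hms)]
    memory.sort(key=f)                               # best first, worst last
    for _ in range(iters):
        new = []
        for d in range(dim):
            if rng.random() < hmcr:                  # memory consideration
                x = rng.choice(memory)[d]
                if rng.random() < par:               # pitch adjustment
                    x += rng.uniform(-0.1, 0.1)
            else:                                    # random selection
                x = rng.uniform(lo, hi)
            new.append(min(hi, max(lo, x)))
        if f(new) < f(memory[-1]):                   # replace worst harmony
            memory[-1] = new
            memory.sort(key=f)
    return memory[0]

best = harmony_search(lambda x: sum(v * v for v in x))
print(sum(v * v for v in best))                      # small value close to 0
```

In the constrained truss setting, `f` would return the truss weight plus a penalty term whenever a natural frequency constraint is violated.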
Characteristics of Heaving Motion of Hollow Circular Cylinder
Bae, Yoon Hyeok; Cho, Il-Hyoung
In the present investigation, the hydrodynamic characteristics of a vertically floating hollow cylinder in regular waves have been studied. The potential theory for solving the diffraction and radiation problem was employed by assuming that the heave response motion was linear. By using the matched eigenfunction expansion method, the characteristics of the exciting forces, hydrodynamic coefficients, and heave motion responses were investigated with various system parameters such as the radius and draft of a hollow cylinder. In the present analytical model, two resonances are identified: the system resonance of a hollow cylinder and the piston-mode resonance in the confined inner fluid region. The piston resonance mode is especially important in the motion response of a hollow circular cylinder. In many cases, the heave response at the piston resonance mode is large, and its resonant frequency can be predicted using the empirical formula of Fukuda (1977). The present design tool can be applied to analyze the motion response of a spar offshore structure with a moon pool.
Water Wave Interactions with Array of Floating Circular Cylinders
Park, Min-Su; Jeong, Youn-Ju; You, Young-Jun
The water wave interactions on any three-dimensional structure of arbitrary geometry can be calculated numerically through the use of source distribution or Green's function techniques. However, such a method can be computationally expensive. In the present study, the water wave interactions in floating circular cylinder arrays were investigated numerically using the eigenfunction expansion method with the three- dimensional potential theory to reduce the computational expense. The wave excitation force, added mass coefficient, radiation damping coefficient, and wave run-up are presented with the water wave interactions in an array of 5 or 9 cylinders. The effects of the number of cylinders and the spacing between them are examined because the water wave interactions in floating circular cylinder arrays are significantly dependent upon these.
Comparison of Fatigue Damage Models of Spread Mooring Line for Floating Type Offshore Plant
Park, Jun-Bum; Kim, Kookhyun; Kim, Kyung-Su; Ko, Dae-Eun
The mooring lines of a floating type offshore plant are known to show wide-banded and bimodal responses. These phenomena come from a combination of low- and high-frequency random load components, which derive from the drift-restoring motion characteristic and wind-sea, respectively. In this study, fatigue models were applied to predict the fatigue damage of mooring lines under those loads, and the results were compared. For this purpose, seven different fatigue damage prediction models were reviewed, including their mathematical formulae. A FPSO (floating, production, storage, and offloading) unit with a 4 × 4 spread catenary mooring system was selected as the numerical model, which was already installed at an offshore area of West Africa. Four load cases with different combinations of wave and wind spectra were considered, and the fatigue damage to each mooring line was estimated. The rainflow fatigue damage for the time process of the mooring tension response was compared with the results estimated by all the fatigue damage prediction models. The results showed that both the Benasciutti-Tovo and JB models could most accurately predict wide-banded bimodal fatigue damage to a mooring system.
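Among closed-form alternatives to rainflow counting, the simplest baseline is the narrow-band (Rayleigh) approximation of the expected Miner damage of a Gaussian stress process. The sketch below uses a one-slope S-N curve and illustrative values; it is a generic textbook formula, not one of the paper's seven models as such:

```python
import math

# Narrow-band (Rayleigh) approximation of the expected Miner fatigue damage:
#   E[D] = (nu0 * T / A) * (2 * sqrt(2 * m0))**m * Gamma(1 + m/2)
# with zero-upcrossing rate nu0 [Hz], exposure T [s], zeroth spectral moment
# m0 of the stress process, and one-slope S-N curve N = A * S**(-m) in terms
# of the stress range S. All numbers below are illustrative.

def narrow_band_damage(m0, nu0, T, A, m):
    return nu0 * T / A * (2.0 * math.sqrt(2.0 * m0)) ** m * math.gamma(1.0 + m / 2.0)

d = narrow_band_damage(m0=100.0, nu0=0.1, T=3.15e7, A=1e12, m=3.0)
print(d)   # about 0.095 of the allowable damage over roughly one year
```

For the wide-banded, bimodal tension processes discussed above, this narrow-band value is conservative; correction-factor models such as Benasciutti-Tovo exist precisely to reduce that conservatism.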
Study on Improvement in Numerical Method for Two-phase Flows Including Surface Tension Effects
Park, Il-Ryong 70
The present paper proposes a coupled volume-of-fluid (VOF) and level-set (LS) method for simulating incompressible two-phase flows that include surface tension effects. The interface of two fluids and its motion are represented by a VOF method designed using high-resolution differencing schemes. This hybrid method couples the VOF method with an LS distancing algorithm in an explicit way to improve the calculation of the normal and curvature of the interface. It is developed based on a rather simple algorithm to be efficient for various practical applications. The accuracy and convergence properties of the method are verified in a simulation of a single gas bubble rising in a three-dimensional flow with a large density ratio.
Effects of Welding Parameters on Diffusible Hydrogen Contents in FCAW-S Weld Metal
Bang, Kook-Soo;Park, Chan 77
The effects of the welding parameters, contact tip-to-workpiece distance (CTWD), current, and voltage on the diffusible hydrogen content in weld metal deposited by self-shielded flux cored arc welding were investigated and rationalized by comparing the amount of heat generated in the extension length of the wire. This showed that as CTWD increased from 15 mm to 25 mm, the amount of heat generated was increased from 71.1 J to 174.8 J, and the hydrogen content was decreased from 11.3 mL/100 g to 5.9 mL/100 g. Even if little difference was observed in the amount of heat generated, the hydrogen content was increased with an increase in voltage because of the longer arc length. A regression analysis showed that the regression coefficient of voltage in self-shielded flux cored arc welding is greater than that in $CO_2$ arc welding. This implies that voltage control is more important in self-shielded flux cored arc welding than in $CO_2$ arc welding.
Effects of Ship Speed and Ice Thickness on Local Ice Loads Measured in Arctic Sea
Lee, Tak-Kee;Lee, Jong-Hyun;Rim, Chae-Whan;Choi, Kyungsik 82
The icebreaking research vessel ARAON conducted her second ice trial in the Arctic Ocean during the summer season of 2010. During this voyage, the local ice loads acting on the bow of the port side were measured using 14 strain gauges. The measurement was carried out during icebreaking while measuring the thickness of the ice every 10 m. The obtained strain data were converted to the equivalent stress values, and the effects of the ship speed and ice thickness on the ice load were investigated. As a result, it was found that a faster speed produced a larger stress, according to the variation in the peak values below an ice thickness condition of 1.5 m. Meanwhile, the effect of the ice thickness on the ice load was not clear.
A Study on Calculation of Local Ice Pressures for ARAON Based on Data Measured at Arctic Sea
Lee, Tak-Kee;Kim, Tae-Wook;Rim, Chae Whan;Kim, Sungchan 88
The icebreaking research vessel (IBRV) ARAON had her second ice trial in the Arctic Ocean in the summer season of 2010. During the voyage, the local ice loads acting on the bow of the port side were measured using 14 strain gauges. These measurements were carried out in three icebreaking performance tests. To convert the measured strains into the local ice pressures, a finite element model of the instrumented area was developed. The influence coefficient method (ICM), which uses the influence coefficient from the finite element model, and the direct method, which uses the measured strain, were selected as the conversion methods. As a result, the maximum measured pressure was 1.236 MPa, and the average difference between ICM and the direct method was about 5% for an area of $0.2m^2$. The pressure-area relationship of the measurement falls below the range of the existing pressure-area curve, which is due to the low ice strength of melted ice in the summer.
Electrofusion Joining Technology for Polyethylene Pipes Using Carbon Fiber
Ahn, Seok-Hwan;Ha, Yoo-Sung;Moon, Chang-Kwon 93
Fuel gas is an important energy source that is being increasingly used because of the convenience and clean energy provided. Natural gas is supplied to consumers safely through an underground gas-pipe network made of a polyethylene material. In electrofusion, which is one of the joining methods used, copper wire is used as the heating wire. However, it takes a long time for fusion to occur because the electrical resistance of copper is low. In this study, therefore, electrofusion was conducted by replacing the copper heating wire with carbon fiber to reduce the fusion time and improve the production when joining large pipes. Fusion and tensile tests were performed after the electrofusion joint was made in the polyethylene pipe using carbon fiber. The results showed that the fusion time was shorter and the temperature inside the pipe was higher with an increase in the current value. The ultimate tensile strength of specimens was higher than that of virgin polyethylene pipe, except for polyethylene pipes joined using a current of 0.8 A. The best fusion current value was 0.9 or 1.0 A because of the short fusion time and lack of transformation inside the pipe. Thus, it was shown that carbon fiber can be used to replace the copper heating wire.
A Study on Exothermic Properties of TiO2/Epoxy Nanocomposites
Recently, various nanoparticles have been used as fillers in polymer matrices. Whether nano-sized particles produce a high cross-link density in the polymer affects its thermal and mechanical properties. The properties change as a result of chemical reactions between the nanoparticles and the surface of the polymer. There are two models for nanocomposites: "repulsive interaction" and "attractive interaction" between the nanoparticles and the matrix. In this study, the variation in the curing mechanism was examined when nano-sized $TiO_2$ was dispersed into an epoxy (Bisphenol A, YD-128) with different curing agents. The results of this study showed that the exothermic temperature and Tg in the case of the nanoparticles cured with Jeffamine (D-180) at room temperature were reduced by an increase in the $TiO_2$ content because of the "repulsive interaction" between the nanoparticles and the matrix. The tensile strength was increased with increasing amounts of $TiO_2$ up to 3 wt% because of a dispersion-strengthening effect caused by the nanoparticles under the repulsive interaction. However, the tensile properties decreased at 5 wt% of $TiO_2$ because the $TiO_2$ agglomerated in the epoxy. In contrast, in the case of the nanoparticles cured with NMA and BDMA, the exothermic temperature and Tg tended to rise with increasing amounts of $TiO_2$ as a result of the "attractive interaction." This was because the same amounts of $TiO_2$ were well dispersed in the epoxy. The tensile strength decreased with an increase in the $TiO_2$ content. In the general attractive interaction model, however, the cross-link density is higher and the tensile strength tends to increase. Therefore, for the nanoparticles cured with NMA, it was difficult to conclude that the result was caused by the "attractive model."
Dispersion Simulation of Hydrogen in Simple-shaped Offshore Plant
Seok, Jun;Heo, Jae-Kyung;Park, Jong-Chun 105
The number of orders for special vessels and offshore plants for developing resources in deep water has increased in recent years. Because most accidents on those structures are caused by fire and explosion, many researchers have quantitatively investigated the causes and effects of fire and explosion based on both experiments and numerical simulations. The first step of the evaluation procedures leading to fire and explosion is to predict the dispersion of flammable or toxic material, in which the released material mixes with the surrounding air and is diluted. Turbulent mixing in particular, but also density differences due to molecular weight or temperature, as well as diffusion, contribute to the mixing. In the present paper, a numerical simulation of hydrogen dispersion inside a simple-shaped offshore structure was performed using a commercial CFD program, ANSYS-CFX. The simulated results for the concentration of released hydrogen are compared to those of the experiment and another simulation in Jordan et al. (2007). As a result, it is seen that the present simulation results are closer to the experiments than the other simulation results. It also seems that hydrogen dispersion is closely related to turbulent mixing, and the proper selection of the turbulence model is of significant importance for reproducing the dispersion phenomena.
\begin{definition}[Definition:Continuous Real Function/Right-Continuous]
Let $S \subseteq \R$ be an open subset of the real numbers $\R$.
Let $f: S \to \R$ be a real function.
Let $x_0 \in S$.
Then $f$ is said to be '''right-continuous at $x_0$''' {{iff}} the limit from the right of $\map f x$ as $x \to x_0$ exists and:
:$\ds \lim_{\substack {x \mathop \to x_0^+ \\ x \mathop \in S}} \map f x = \map f {x_0}$
where $\ds \lim_{x \mathop \to x_0^+}$ is a limit from the right.
Furthermore, $f$ is said to be '''right-continuous''' {{iff}}:
:$\forall x_0 \in S$, $f$ is '''right-continuous at $x_0$'''
\end{definition} | ProofWiki |
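As an informal, finite-precision illustration of the definition above (not part of the formal statement), one can probe right-continuity numerically by sampling $f$ just to the right of $x_0$ and comparing against $\map f {x_0}$. The floor function is right-continuous at $1$ (its jump is closed on the right), while the ceiling function is not, since $\lceil 1 + d \rceil = 2$ for every $d > 0$:

```python
import math

def is_right_continuous_at(f, x0, tol=1e-9):
    """Numerically probe right-continuity of f at x0:
    f(x) should approach f(x0) as x -> x0 from the right.
    This is only a finite sample, not a proof."""
    deltas = [10.0 ** (-k) for k in range(4, 12)]  # points just right of x0
    return all(abs(f(x0 + d) - f(x0)) < tol for d in deltas)

print(is_right_continuous_at(math.floor, 1.0))  # True
print(is_right_continuous_at(math.ceil, 1.0))   # False
```

Note that such a check can only refute right-continuity on the sampled points; it cannot establish the limit in the formal sense.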
JEES is the official English journal of the Korean Institute of Electromagnetic Engineering and Science. The objective of JEES is to publish academic as well as industrial research results and findings on electromagnetic engineering and science. In particular, electromagnetic field theory and its applications; high-frequency components, circuits, antennas, and systems; the electromagnetic wave environment; and relevant industrial developments are within the scope of the Journal. Through the Journal, research on electromagnetic wave-related engineering and science will be nourished and will ultimately contribute to improving the welfare of the human race and national development.
KSCI SCOPUS
Dual-Band Microstrip Patch Antenna with Switchable Orthogonal Linear Polarizations
Kim, Jeongin;Sung, Youngje 215
https://doi.org/10.26866/jees.2018.18.4.215
This study presents a dual-band polarization-reconfigurable antenna that comprises a large square patch with a pair of corner-cut edges and two small square patches with a shorting via. Two PIN diodes are located between the large square patch and two small square patches. Depending on the bias state applied to the two PIN diodes, each small patch may be disconnected or connected to the large square patch. As a result, the proposed antenna can provide polarization reconfigurability between two orthogonal linear polarizations. Further, the proposed antenna operates at 2.51 GHz and 2.71 GHz. From the measured results, the proposed antenna shows a 10 dB bandwidth of 2.39% (2.49-2.55 GHz) and 2.58% (2.68-2.75 GHz). In this work, the frequency ratio can be easily controlled by changing the size of the small patch.
Control of Power Distribution for Multiple Receivers in SIMO Wireless Power Transfer System
Kim, Gunyoung;Boo, Seunghyun;Kim, Sanghoek;Lee, Bomson 221
A method to control the power distribution among receivers through the load values in a single-input, multiple-output (SIMO) wireless power transfer (WPT) system is investigated. We first derive the load values that maximize the total efficiency. Next, a simple but effective analytical formula for the load condition yielding the desired power distribution ratio is presented. The derived load solutions are simply given by the system figures of merit and the desired power ratios. The formula is validated with many numerical examples via electromagnetic simulations. We demonstrate that with the choice of loads from this simple formula, the power can be conveniently and accurately distributed among the receivers for most practical requirements in SIMO WPT systems.
Analysis of the Optimal Frequency Band for a Ballistic Missile Defense Radar System
Nguyen, Dang-An;Cho, Byoungho;Seo, Chulhun;Park, Jeongho;Lee, Dong-Hui 231
In this paper, we consider the anti-attack procedure of a ballistic missile defense system (BMDS) at different operating frequencies of its phased-array radar station. The interception performance is measured in terms of the lateral divert (LD), which denotes the minimum acceleration available in an interceptor to compensate for the prediction error for a successful intercept. The dependence of the estimation accuracy, which leads directly to the prediction error, on the operating frequency is taken into account in terms of angular measurement noise. The estimation is performed by means of an extended Kalman filter (EKF), considering two typical re-entry trajectories of a non-maneuvering ballistic missile (BM). The simulation results show better performance at higher frequencies for both the tracking and intercepting aspects.
A Study on the Convenient EMF Compliance Assessment for Base Station Installations at a Millimeter Wave Frequency
Lee, Young Seung;Lee, Haeng-Seon;Choi, Hyung-Do 242
This paper studies a convenient electromagnetic field (EMF) compliance assessment for base station installations at a millimeter wave (mmW) frequency. We utilize ray-tracing analysis as a numerical method for examining the wave propagation characteristic. Various installation cases are considered and the important parameters with a significant effect on the maximum power density levels are produced. We finally suggest the several scenarios for the convenient assessment of mmW base stations, which allow us to conduct cost effective computational tests compared with the current assessment procedure in the guideline.
Analysis of a Tapered Rectangular Waveguide for V to W Millimeter Wavebands
Lee, Sangsu;Son, Dongchan;Kwon, Jae-Yong;Park, Yong Bae 248
An electromagnetic boundary-value problem of a tapered rectangular waveguide is rigorously solved based on eigenfunction expansion and the mode-matching method. Scattering parameters of the tapered rectangular waveguide are represented in a series form and calculated in terms of different rectangular waveguide combinations. Computation is performed to analyze reflection and transmission characteristics. Conductor loss by surface current density is also calculated and discussed.
Emulator for Generating Heterogeneous Interference Signals in the Korean RFID/USN Frequency Band
Lee, Sangjoon;Yoon, Hyungoo;Baik, Kyung-Jin;Jang, Byung-Jun 254
In this study, we suggest an emulator for generating multiple heterogeneous interference signals in the Korean radio frequency identification/ubiquitous sensor network (RFID/USN) frequency band. The proposed emulator uses only one universal software radio peripheral to generate multiple heterogeneous interference signals more economically. Moreover, the physical and media access control parameters can be adjusted in real time using the LabVIEW program, thereby making it possible to create various time-varying interference environments easily. As an example showing the capability of the proposed emulator, multiple interference signals consisting of a frequency-hopping RFID signal and two LoRa signals with different spreading factors were generated. The generated signals were confirmed in both frequency and time domains. From the experimental results, we verified that our emulator could successfully generate multiple heterogeneous interference signals with different frequency and time domain characteristics.
A Low-Profile Broadband Array Antenna for Home Repeater Applications
Yoon, Sung Joon;Choi, Jaehoon 261
This paper reports on the proposed design of a low profile broadband array antenna for home repeater applications. The proposed antenna consists of $1{\times}4$ patch elements and two ground planes, one of which is slitted. By using the gap feeding method, the impedance matching of the antenna is improved by a multi-resonance phenomenon. The proposed antenna provides a wide -10 dB reflection coefficient bandwidth simultaneously covering the Global System for Mobile communications (GSM-1800), Personal Communications Service (PCS), and the Universal Mobile Telecommunication System (UMTS) bands (1.67-2.32 GHz). In order to reduce the mutual coupling between adjacent patch elements, slits are embedded in the ground plane. An isolation level of -20 dB is realized over the entire operating frequency band.
Null Steering of Circular Array Using Array Factor for GPS Anti-Jam
Kwon, Taek-Sun;Lee, Jae-Gon;Lee, Jeong-Hae 267
In this letter, the null steering of a circular array is presented using a modified array factor (AF) for a global positioning system (GPS) anti-jam. The seven radiating elements were designed using a mu-zero resonance (MZR) circularly polarized (CP) antenna arranged toward the center. Since the radiating elements, which are arranged toward the center, have a CP characteristic, the AF of the seven radiating elements has to be modified considering the rotation angle of the nth radiating element. The phases of the input ports can be calculated to implement a nulling of the radiation patterns where the modified AF is zero. To verify the modified AF for null steering in the desired direction, two cases of power dividers operating in the $L_2$ band (1.2276 GHz) were fabricated to achieve pattern nulling at a certain angle. The modified AF can be confirmed by comparing the simulated and measured radiation patterns.
\begin{document}
\title{Scheduling Bidirectional Traffic on a Path}

\begin{abstract}
We study the fundamental problem of scheduling bidirectional traffic along a path composed of multiple segments. The main feature of the problem is that jobs traveling in the same direction can be scheduled in quick succession on a segment, while jobs in opposing directions cannot cross a segment at the same time. We show that this tradeoff makes the problem significantly harder than the related flow shop problem, by proving that it is $\mathsf{NP}$-hard even for identical jobs. We complement this result with a PTAS for a single segment and non-identical jobs. If we allow some pairs of jobs traveling in different directions to cross a segment concurrently, the problem becomes $\mathsf{APX}$-hard even on a single segment and with identical jobs. We give polynomial algorithms for the setting with restricted compatibilities between jobs on a single and any constant number of segments, respectively.
\end{abstract}
\section{Introduction} The scheduling of bidirectional traffic on a path is essential when operating single-track infrastructures such as single-track railway lines, canals, or communication channels. Roughly speaking, the schedule governs when to move jobs from one node of the path to another along the segments of the path. The goal is to schedule all jobs such that the sum of their arrival times at their respective destinations is minimized. A central feature of real-world single-track infrastructures is that after one job enters a segment of the path, further jobs moving in the \emph{same} direction can do so with relatively little headway, while traffic in the \emph{opposite} direction usually has to wait until the whole segment is empty again (cf.~Fig.~\ref{fig:canal}a for a schematic illustration).
\begin{figure}
\caption{ Bidirectional scheduling of ship traffic through a canal, with and without compatibilities.
The processing time $p_{ij}$ of job~$j$ is the time needed to enter segment~$i$ with sufficient security headway, i.e., the delay before other jobs in the same direction may enter the segment. The travel time~$\tau_{ij}$ is the time needed to traverse the entire segment once entered.
In both (a) and (b), jobs~$1,2,3$ can enter the segment in quick succession, while job 4 has to wait until they left the segment. In~(b), job 5 is compatible with jobs~$1,2,3$ so that they may cross concurrently. The time to cross turnouts is assumed to be negligible.\\[-1cm] }
\label{fig:canal}
\end{figure}
Formally, in the bidirectional scheduling problem we are given a path of consecutive segments connected at nodes, and a set of jobs, each with a release date and a designated start and destination node. The time job~$j$ needs to traverse segment~$i$ is governed by two quantities: its \emph{processing time}~$p_{ij}$ and its \emph{transit time}~$\tau_{ij}$.
While the former prevents the segment from being used by any other job (running in \emph{either} direction), the latter only blocks the segment from being used by jobs running in \emph{opposite} direction. For example, this allows us to model settings with bidirectional train traffic on a railway line split into single-track segments that are connected by turnouts (cf.~\citet[Section~2]{LusbyLER2011}). In this setting, jobs correspond to trains, the processing time of a job is the time needed for the train to fully enter the next segment, and the transit time is the time to traverse the segment (and entirely move into the next turnout). While a train is entering a single-track segment of the line, no other train may do so. The next train in the same direction can enter immediately afterwards, whereas trains in opposite direction have to wait until the segment is clear again in order to prevent a collision.
Fig.~\ref{fig:path-time} shows the path-time-diagram of a feasible schedule for two segments and four jobs. Jobs are represented by parallelograms of the same color. The processing time of a job on a segment is reflected by the height of the corresponding parallelogram, while the transit time is the remaining time ($y$-distance) to the lowest point of the parallelogram. In a feasible schedule, jobs may not intersect, and, in particular, a job can only begin being processed at a segment once it has fully exited the previous segment. Note that in the example it makes sense for the two rightbound jobs to switch order while waiting at the central node.
\begin{figure}\label{pic:all-part1}
\label{fig:path-time}
\end{figure}
We also study a generalization of the model to situations where some of the jobs are allowed to pass each other when traveling in different directions (cf.~Fig.~\ref{fig:canal}b). This is a natural assumption, e.g., when scheduling the ship traffic on a canal, where smaller ships are allowed to pass each other while larger ships are not (cf.~\citet{LuebbLM2014}). In practice, the rules that decide which ships are allowed to pass each other are quite complex and depend on multiple parameters of the ships such as length, width, and draught (e.g., cf.~\cite{SeeSchStrO2013}). We model these complex rules in the most general way by a bipartite compatibility graph for each segment, where vertices correspond to jobs and two jobs running in different directions are connected by an edge if they can cross the segment concurrently.
\subsubsection*{Our results.} Table~\ref{tab:overview} gives a summary of our results. We first show that scheduling bidirectional traffic is hard, even without processing times and with identical transit times~(Section~\ref{sec:unbounded}). The proof is via a non-standard reduction from \noun{MaxCut}. The key challenge is to use the local interaction of the jobs on the path to model global interaction between the vertices in the \noun{MaxCut}. We overcome this issue by introducing polynomially many vertex gadgets encoding the partition of each vertex and synchronizing these copies along the instance.
We complement this result with a polynomial time approximation scheme (PTAS) for a single segment and arbitrary processing times~(Section~\ref{sec:PTAS}) using the $(1+\epsilon)$-rounding technique of~\citet{Afrati1999}.
We then show that bidirectional scheduling with arbitrary compatibility graphs is $\mathsf{APX}$-hard already on a single segment and with identical processing times~(Section~\ref{sec:arbitrary-conflicts}). The proof is via a reduction from a variant of \noun{Max-3-Sat} which is $\mathsf{NP}$-hard to approximate within a factor smaller than 1016/1015, as shown by Berman et al.~\cite{BermanKS2003}. As a byproduct, we obtain that also minimizing the makespan is $\mathsf{APX}$-hard in this setting. We again complement our hardness result by polynomial algorithms for identical jobs on constant numbers of segments and with a constant number of compatibility types~(Section~\ref{sec:constant_segments}).
\newcolumntype{P}[1]{>{\centering\arraybackslash}p{#1}} \newcommand{\scriptsize}{\scriptsize}
\iffalse \begin{figure}
\caption{Small example in distance time diagram.}
\label{fig:dist-time}
\end{figure} \fi
\subsubsection*{Significance.}
With this paper we initiate the mathematical study of optimized dispatching of traffic in networks with bidirectional edges, e.g.\ train networks, ship canals, communication channels, etc. In all of these settings, traffic in one direction limits the possible throughput in the other direction.
While in the past decades a wealth of results has been established for the unidirectional case (i.e., classical scheduling, and, in particular, flow shop models), surprisingly, and despite their practical importance, bidirectional infrastructures have not received a similar attention so far.
The bidirectional scheduling model that we propose captures the essence of bidirectional traffic by distinguishing processing and transit times. This simple framework already allows to exhibit the computational key challenges of this setting. In particular, we show that bidirectional scheduling is already hard for identical jobs on a path, which is in contrast to the unidirectional case. We observe another increase in complexity when allowing specific types of traffic to use an edge concurrently in both directions. In practice, this is reasonable e.g.\ for ship traffic in a canal, where small vessels may pass each other. In that sense, we show that scheduling ship traffic is already hard on a single edge and, thus, considerably harder than scheduling train traffic.
While bidirectional scheduling is hard in general, we show that certain features of real-world scenarios can make the problem tractable, e.g., a small number of turnouts along a single path and/or a small number of different vessels. In this work we restrict ourselves to simple paths, but we hope that our results are a first step towards understanding traffic in general bidirectional networks.
\begin{table}[tb] \caption{Overview of our results for bidirectional scheduling.\newline $^1$~even if $p=0$, $\transit{i} = 1$, $^2$~only if $\proc{}=1, \transit{i} \leq \text{const}$, $^3$~even if $\transit{i} = \proc{} = 1$.} \label{tab:overview}
\begin{minipage}{\textwidth}
\begin{center} \scriptsize \begin{tabular*}{\linewidth}{ @{}l @{\extracolsep{\fill}} P{2.cm} @{\extracolsep{1ex}} P{2.cm} @{\extracolsep{1ex}} P{2.cm} } \toprule & \multicolumn{3}{c}{\myline{2.5cm} Number $m$ of segments \myline{2.5cm}}\\ compatibilities & $m=1$ & $m$ const. & $m$ arbitrary\\ \midrule
\multicolumn{3}{l}{\bf Different jobs $\proc{ij} = \proc{j}$, $\tau_{ij}=\tau_i$}\\
& \multicolumn{1}{c}{\cellcolor{yellow!30}\scriptsize{PTAS }\scriptsize{[Thm.~\ref{thm:ptas-oneseg-nocompatibilities}]}} & \multicolumn{1}{c}{\multirow{2}{*}{\cellcolor{red!30}}} & \multicolumn{1}{c}{\cellcolor{red!30}} \\ \multirow{-2}{*}{none/all compatible} & \multicolumn{2}{c}{\cellcolor{red!30}\cellcolor{red!30}$\mathsf{NP}$-hard \scriptsize{\citep{LenstraKB1977}}} & \multicolumn{1}{c}{\multirow{-2}{*}{\cellcolor{red!30}$\mathsf{NP}$-hard$^1$~\scriptsize{[Thm.~\ref{thm:hard_m}]}}} \\
\\[-2ex]
\midrule
\multicolumn{3}{l}{\bf Identical jobs $\proc{ij} = \proc{}$, $\tau_{ij}=\tau_i$}\\
none compatible & \multicolumn{1}{c}{\cellcolor{green!30}} & \multicolumn{1}{c}{\cellcolor{yellow!30}} & \multicolumn{1}{c}{\cellcolor{red!30}}\\[1ex] const.\ \# types & \multicolumn{1}{c}{\multirow{-2}{*}{\cellcolor{green!30} polynomial \scriptsize{[Thm.~\ref{thm:cct-singleseg-poly}]}}} & \multicolumn{1}{c}{\multirow{-2}{*}{\cellcolor{yellow!30} polynomial$^2$~\scriptsize{[Thm.~\ref{thm:identical-constantseg-makespan}]}}} & \multicolumn{1}{c}{\multirow{-2}{*}{\cellcolor{red!30} $\mathsf{NP}$-hard$^1$~\scriptsize{[Thm.~\ref{thm:hard_m}]}}}\\ \\[-2ex]
arbitrary & \multicolumn{3}{c}{\cellcolor{red!30}$\mathsf{APX}$-hard$^3$~\scriptsize{[Thm.~\ref{thm:graph-apx-hardness-sum}]}} \\
\\[-2ex] \bottomrule \end{tabular*} \end{center} \end{minipage} \end{table}
\subsubsection*{Related work.} Scheduling problems are a fundamental class of optimization problems with a multitude of known hardness and approximation results (cf. \citet{LawlerLRS1993} for a survey). To the best of our knowledge, the bidirectional scheduling model that we propose and study in this paper has not been considered in the past nor is it contained as a special case in any other scheduling model. We give an overview of known results for related models.
For a single segment and jobs traveling from left to right, bidirectional scheduling reduces to the classical single machine scheduling problem, which \citet{LenstraKB1977} showed to be hard when minimizing total completion time. \citet{Afrati1999} gave a PTAS with generalizations to multiple identical or a constant number of unrelated machines. \citet{ChekuriKhanna2001} further generalized the result to related machines. We give a different generalization for bidirectional scheduling. For unrelated machines \citet{HoogeveenSW1998} showed that the completion time cannot be approximated efficiently within arbitrary precision, unless $\mathsf{P} = \mathsf{NP}$.
Bidirectional scheduling also has similarities to scheduling of two job families with a setup time that is required between jobs of different families. The general comments in \citet{PottsKov2000} on dynamic programs for such kinds of problems apply in part to our technique for Theorem~\ref{thm:cct-singleseg-poly}.
When all jobs need to be processed on all segments in the same order and all transit times are zero, bidirectional scheduling reduces to flow shop scheduling. \citet{GareyJS1976} showed that it is $\mathsf{NP}$-hard to minimize the sum of completion times in flow shop scheduling, even when there are only two machines and no release dates. They showed the same result for minimizing the makespan on three machines. \citet{HoogeveenSW1998} showed that there is no PTAS for flow shop scheduling without release dates, unless $\mathsf{P} = \mathsf{NP}$. In contrast, \citet{BrucknerKW2005} showed that flow shop problems with unit processing times can be solved efficiently, even when all jobs require a setup on the machines that can be performed by a single server only.
Job shop scheduling is a generalization of flow shop scheduling that allows jobs to require processing by the machines in any (not necessarily linear) order, cf.~\citet[Section~14]{LawlerLRS1993} for a survey. In this setting, the minimization of the sum of completion times was proven to even be $\mathsf{MAX}$-$\mathsf{SNP}$-hard by~\citet{HoogeveenSW1998}. \citet{QueyranneS00} gave a $\ensuremath{\mathcal{O}}\xspace((\log(m\mu)/\log\log(m\mu))^2)$-approximation for the weighted case with release dates, where~$\mu$ denotes the maximum number of operations per job. \citet{FishkinJM2003} gave a PTAS for a constant number of machines and operations per job.
It is worth noting that job shop scheduling does not contain bidirectional scheduling as a special case, since it does not incorporate the distinction between processing and transit times for jobs passing a machine in different directions.
Job shop scheduling problems with unit jobs are strongly related to packet routing problems where general graphs are considered, see the discussion in seminal paper by \citet{LeightonMR1994}. They proved that the makespan of any packet routing problem is linear in two trivial lower bounds, called the congestion and the dilation. For more recent progress in this direction, see, e.g., \citet{Scheideler1998} and \citet{PeisWiese2011}. All these works, however, consider minimizing the makespan and assume that the orientation of the graph is fixed. \citet{AntonBCFMNP2014} also consider average flow time on a directed line. They give lower bounds for competitive ratios in the online setting and~$\ensuremath{\mathcal{O}}\xspace(1)$ competitive algorithms with resource augmentation for the maximum flow time.
\section{Preliminaries}
In the bidirectional scheduling problem, we are given a set $M = \{1,\dots,m\}$ of segments, which we imagine to be ordered from left to right. Further, we are given two disjoint sets $\ensuremath{\rightbset{J}}$ and $\ensuremath{\leftbset{J}}$ of \emph{rightbound} and \emph{leftbound} jobs, respectively, with $\ensuremath{\set{J}} = \ensuremath{\rightbset{J}} \cup \ensuremath{\leftbset{J}}$ and $n = |\ensuremath{\set{J}}|$.
Each job is associated with a \emph{release date} $\rel{j} \in \ensuremath{\mathbb{N}}$, a \emph{start segment} $s_j$ and a \emph{target segment} $t_j$, where $s_j \leq t_j$ for rightbound jobs and $s_j \geq t_j$ for leftbound jobs. A rightbound job $j$ needs to cross the segments $s_j, s_j+1,\dots,t_j-1,t_j$, and a leftbound job needs to cross the segments $s_j,s_j-1,\dots,t_j+1,t_j$. We denote by~$M_j$ the set of segments that job~$j$ needs to cross. Each job~$j$ is associated with a processing time $\proc{j} \in \ensuremath{\mathbb{N}}$ and each segment~$i$ is associated with a transit time $\transit{i} \in \ensuremath{\mathbb{N}}$. Note that we restrict ourselves to identical processing times for a single job and identical transit times for a single segment. We call $\proc{j} + \transit{i}$ the \emph{running time} of job~$j$ on segment~$i$.
A \emph{schedule} is defined by fixing the start times $\start{ij}$ for each job $j$ on each segment $i \in M_j$. The \emph{completion time} of job $j$ on segment $i$ is then defined as $\compl{ij} = \start{ij} + \proc{j} + \transit{i}$. The overall completion time of job $j$ is $\compl{j} = \compl{t_jj}$. A schedule is feasible if it has the following properties. \begin{compactenum} \item Release dates are respected, i.e., $r_j \leq \start{s_jj}$ for each~$j \in \ensuremath{\set{J}}$.
\item Jobs travel towards their destination, i.e., $\compl{ij} \leq \start{i+1,j}$ (resp. $\compl{ij} \leq \start{i-1,j}$) for rightbound (resp. leftbound) jobs $j$ and $i \in M_j \setminus \{t_j\}$.
\item Jobs $j, j'$ traveling in the same direction are not processed on segment~$i \in M_j \cap M_{j'}$ concurrently, i.e., $[\start{ij},\start{ij}+\proc{j})\cap[\start{ij'},\start{ij'}+\proc{j'}) = \emptyset$.
\item \label{it:opposing_jobs} Jobs~$j, j'$ traveling in different directions are neither processed nor in transit on segment~$i \in M_j \cap M_{j'}$ concurrently, i.e., $[\start{ij}, \compl{ij})\cap[\start{ij'}, \compl{ij'})=\emptyset$. \end{compactenum}
Our objective is to minimize the \emph{total completion time}~$\sum C_j=\sum_{j\in \ensuremath{\set{J}}} C_j$.
Other natural objectives are the minimization of the \emph{makespan}~$\ensuremath{\compl{\max}}=\max\{C_j \mid j\in \ensuremath{\set{J}}\}$ or the \emph{total waiting time}~$\sum W_j=\sum_{j\in \ensuremath{\set{J}}} W_j$ where the individual waiting time of a job~$j$ is~$W_j = \compl{j} - \sum_{i \in M_j} (\proc{j} + \transit{i}) - r_j$. Note that minimizing the total waiting time is equivalent to minimizing the total completion time.
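To make the notation concrete, the following small sketch (our own illustration; the job and segment data are made up and not from any instance in the paper) computes the per-segment completion times $C_{ij} = S_{ij} + p_j + \tau_i$ and the waiting time $W_j = C_j - \sum_{i \in M_j} (p_j + \tau_i) - r_j$ for a single rightbound job.

```python
# A minimal sketch of the model from the preliminaries: jobs with release
# dates, start/target segments, processing times p_j, and per-segment transit
# times tau_i. We compute completion and waiting times for a tiny hand-made
# schedule. All names and data here are our own illustrative choices.
from dataclasses import dataclass

@dataclass
class Job:
    release: int
    start_seg: int
    target_seg: int
    proc: int
    rightbound: bool

    def segments(self):
        # M_j: the segments the job crosses, in travel order
        step = 1 if self.rightbound else -1
        return list(range(self.start_seg, self.target_seg + step, step))

def completion_times(job, tau, starts):
    # starts[i] = S_ij; completion on segment i is C_ij = S_ij + p_j + tau_i
    return {i: starts[i] + job.proc + tau[i] for i in starts}

# one rightbound job crossing segments 0 and 1
tau = {0: 2, 1: 3}
j = Job(release=1, start_seg=0, target_seg=1, proc=1, rightbound=True)
starts = {0: 1, 1: 5}           # C_0j = 4, so starting segment 1 at 5 is feasible
C = completion_times(j, tau, starts)
C_j = C[j.target_seg]           # overall completion time C_j = C_{t_j j}
W_j = C_j - sum(j.proc + tau[i] for i in j.segments()) - j.release
print(C_j, W_j)                 # 9 1
```

The job waits one time unit between finishing segment~0 (at time~4) and starting segment~1 (at time~5), which is exactly the value of $W_j$.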
\begin{tikzpicture}
\pgftransformyscale{-1}
\xdef\pictureWidth{5}
\xdef\pictureHeight{2.5}
\xdef\sleftI{0}
\xdef\srightI{0+.9}
\xdef\sleftII{\pictureWidth-.9}
\xdef\srightII{\pictureWidth}
\xdef\vdist{.1}
\xdef\yShipUp{0}
\xdef\yShipDownOne{0}
\xdef\yShipDownTwo{1.2}
\xdef\yShipDownThree{\yShipDownTwo+\vdist}
\xdef\yShipDownFour{\yShipDownThree+\vdist}
\begin{scope}
\clip (0-.02+.5,0) rectangle (\pictureWidth+.02-.5, \pictureHeight);
\fill[siding] (\sleftI,0) rectangle (\srightI,\pictureHeight);
\fill[siding] (\sleftII,0) rectangle (\srightII,\pictureHeight);
\begin{pgfonlayer}{bg}
\fill[segment] (\srightI,0) rectangle (\sleftII, \pictureHeight);
\end{pgfonlayer}
\draw[help lines, xstep = .5, ystep = .8, gray!40] (0,0) grid
(\pictureWidth,\pictureHeight);
\begin{scope}[thick]
\coordinate (shipUp) at (0, \yShipUp);
\path (shipUp) +(15:1) coordinate (updir);
\draw (shipUp) -- (intersection of
{shipUp}--{updir} and
\srightII,0--\srightII,\pictureHeight);
\path (shipUp) -- (intersection of
{shipUp}--{updir} and
\sleftII,0--\sleftII,\pictureHeight) ++(0,\vdist)
coordinate (waitingendForUp) + (165:1) coordinate
(downdirForUp);
\coordinate (shipDownOne) at (\pictureWidth, \yShipDownOne);
\path (shipDownOne) +(165:1) coordinate (downdirOne);
\draw (shipDownOne) -- (intersection of
{shipDownOne}--{downdirOne}
and \sleftII,0--\sleftII,\pictureHeight)
coordinate (waitingstartOneForUp);
\draw (waitingstartOneForUp) -- (waitingendForUp) -- (intersection of
{waitingendForUp}--{downdirForUp} and
0,0--0,\pictureHeight);
\path (shipDownOne) -- (intersection of
{shipDownOne}--{downdirOne}
and \srightI,0--\srightI,\pictureHeight)
coordinate (setuprightI0)
+(0,\vdist) coordinate (waitingendUpForOne);
\path (waitingendUpForOne) +(15:1) coordinate (updirUpForOne);
\coordinate (shipDownTwo) at (\pictureWidth, \yShipDownTwo);
\path (shipDownTwo) +(165:1) coordinate (downdirTwo);
\draw (shipDownTwo) -- (intersection of
{shipDownTwo}--{downdirTwo}
and \sleftI,0--\sleftI,\pictureHeight);
\path (shipDownTwo) -- (intersection of
{shipDownTwo}--{downdirTwo}
and \sleftII,0--\sleftII,\pictureHeight)
coordinate (procend) ++(0,-\vdist) coordinate (procstart);
\coordinate (shipDownThree) at (\pictureWidth, \yShipDownThree);
\path (shipDownThree) +(165:1) coordinate (downdirThree);
\draw (shipDownThree) -- (intersection of
{shipDownThree}--{downdirThree}
and \sleftI,0--\sleftI,\pictureHeight);
\coordinate (shipDownFour) at (\pictureWidth, \yShipDownFour);
\path (shipDownFour) +(165:1) coordinate (downdirFour);
\draw (shipDownFour) -- (intersection of
{shipDownFour}--{downdirFour}
and \sleftI,0--\sleftI,\pictureHeight);
\end{scope}
\end{scope}
\draw[->, overlay] (0-.02+.5,.5) -- +(0,.6)
node[pos = .8, left,font=\small]{time};
\end{tikzpicture}
We also consider a generalization of the model, where some of the jobs traveling in different directions are allowed to pass each other. Formally, for each segment $i$, we are given a bipartite \emph{compatibility graph} $G_i = (\ensuremath{\rightbset{J}} \ensuremath{\mathaccent\cdot\cup} \ensuremath{\leftbset{J}},\edges{i})$ with $\edges{i} \subseteq \ensuremath{\rightbset{J}} \times \ensuremath{\leftbset{J}}$. Two jobs $j,j'$ that are connected by an edge in~$G_i$ are allowed to run on segment~$i$ concurrently, i.e., condition \ref{it:opposing_jobs} above need not be satisfied. Specifically, jobs~$j,j'$ may be processed or be in transit simultaneously.
All proofs omitted in the following sections can be found in the appendix.
\section{Hardness of bidirectional scheduling}\label{sec:unbounded}
First, we show that scheduling bidirectional traffic is hard, even when all processing times are zero and all transit times coincide. In other words, we eliminate all interaction between jobs in the same direction and show that hardness is merely due to the decision when to switch between left- and rightbound operation of each segment. This is in contrast to one-directional (flow shop) scheduling with identical processing times, which is trivial. Formally, we show the following result.
\begin{restatable}{theorem}{unbounded} \label{thm:hard_m}
The bidirectional scheduling problem is $\mathsf{NP}$-hard even if~$p_j=0$ and~$\tau_i=1$ for each~$j\in\ensuremath{\set{J}}$ and $i\in M$.\label{thm:unbounded_hardness} \end{restatable}
We reduce from the \noun{MaxCut} problem which is contained in Karp's list of~21 $\mathsf{NP}$-complete problems~\cite{Karp1972}. Given an undirected graph $G=(V,E)$ and some $k\in\mathbb{N}$ we ask for a partition $V=V_{1} \ensuremath{\mathaccent\cdot\cup} V_{2}$ with $|E\cap(V_{1}\times V_{2})|\geq k$.
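For intuition about the reduction's target problem, here is a brute-force \noun{MaxCut} solver (a toy of our own, exponential in $|V|$ and only suitable for tiny graphs):

```python
# Brute-force MaxCut over all bipartitions V = V1 u V2, counting cut edges.
# Our own toy reference implementation, not part of the reduction itself.
from itertools import product

def max_cut(vertices, edges):
    best = 0
    for bits in product([0, 1], repeat=len(vertices)):
        side = dict(zip(vertices, bits))          # assign each vertex to V1 or V2
        cut = sum(1 for u, v in edges if side[u] != side[v])
        best = max(best, cut)
    return best

# triangle: any bipartition cuts at most 2 of the 3 edges
print(max_cut(["a", "b", "c"], [("a", "b"), ("b", "c"), ("a", "c")]))  # 2
```

The decision version asks whether `max_cut(V, E) >= k`.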
Given an instance \ensuremath{\mathcal{I}}\xspace of \noun{MaxCut}, we construct an instance of the bidirectional scheduling problem which can be scheduled without exceeding some specific waiting time if and only if \ensuremath{\mathcal{I}}\xspace admits a solution. The translation to the sum of completion times is then straightforward.
\begin{figure}\label{fig:vertex_gadget}
\end{figure}
A cornerstone of our construction is the \emph{vertex gadget} that occupies a fixed time interval on a single segment and can only be (sensibly) scheduled in two ways~(cf.~Fig.~\ref{fig:unbounded_sketch}), which we interpret as the choice whether to put the corresponding vertex in the first or second part of the partition, respectively. We introduce multiple \emph{vertex segments} that each have exactly one vertex gadget for each vertex in \ensuremath{\mathcal{I}}\xspace and add further gadgets that ensure that the state of all vertex gadgets for the same vertex is the same across all segments. These gadgets allow us to synchronize vertex gadgets on consecutive vertex segments in two ways. We can either simply synchronize vertex gadgets that occupy the same time interval on the two vertex segments (\emph{copy gadget}), or we can synchronize pairs of vertex gadgets occupying the same consecutive time intervals on the two vertex segments by linking the first gadget on the first segment with the second one on the second segment and vice-versa, i.e., we can transpose the order of two consecutive gadgets from one vertex segment to the next (\emph{transposition gadget}).
We construct an edge gadget for each edge in \ensuremath{\mathcal{I}}\xspace that incurs a small waiting time if two vertex gadgets in consecutive time intervals and segments are in different states and a slightly higher waiting time if they are in the same state. By tuning the multiplicity of each job, we can ensure that only schedules make sense where vertex gadgets are scheduled consistently. Minimizing the waiting time then corresponds to maximizing the number of edge gadgets that link vertex gadgets in different states, i.e., maximizing the size of a cut.
In order to fully encode the given \noun{MaxCut} instance \ensuremath{\mathcal{I}}\xspace, we need to introduce an edge gadget for each edge in \ensuremath{\mathcal{I}}\xspace. However, edge gadgets can only link vertex gadgets in consecutive time intervals. We can overcome this limitation by adding a sequence of vertex segments and transposing the order of two vertex gadgets from one segment to the next as described before. With a linear number of vertex segments we can reach an order where the two vertex gadgets we would like to connect with an edge gadget are adjacent. At that point, we can add the edge gadget, and then repeat the process for all other edges in \ensuremath{\mathcal{I}}\xspace (cf.~Fig.~\ref{fig:unbounded_sketch}).
We can reformulate Theorem~\ref{thm:unbounded_hardness} for nonzero processing times, simply by making the transit time large enough that the processing time does not matter.
\begin{restatable}{corollary}{corUnboundedWithProcessing}
The bidirectional scheduling problem is $\mathsf{NP}$-hard even if~$p_j=1$ and~$\transit{i}=\transit{}$ for each~$j\in\ensuremath{\set{J}}$ and $i\in M$. \end{restatable}
\begin{figure}\label{fig:unbounded_sketch}
\end{figure}
\section{A PTAS for bidirectional scheduling} \label{sec:PTAS}
We give a polynomial time approximation scheme (PTAS), i.e., a polynomial-time $(1+\ensuremath{\varepsilon})$-approximation algorithm for each~$\ensuremath{\varepsilon}>0$, for bidirectional scheduling on a single segment with general processing times. This problem is hard even if all jobs have the same direction~\citep{LenstraKB1977}. We extend the machine scheduling PTAS of \citet{Afrati1999} to the bidirectional case, provided that the jobs are either all pairwise in conflict or all pairwise compatible. The main issue when trying to adapt the technique of~\cite{Afrati1999} is to account for the different roles of processing and transit times in the interaction of jobs in the same and in different directions.
\begin{restatable}{theorem}{thmptasOnesegNoCompatibilities} \label{thm:ptas-oneseg-nocompatibilities}
The bidirectional scheduling problem on a single segment and with compatibility graph $G_1\in\{K_{\rightb{n},\leftb{n}},\emptyset\}$ admits a PTAS. \end{restatable}
The first part of the proof in~\cite{Afrati1999} is to restrict to processing times and release dates of the form~$(1+\ensuremath{\varepsilon})^x$ for some~$x\in\ensuremath{\mathbb{N}}$, and to~$r_j\geq\ensuremath{\varepsilon}(p_j+\tau_1)$. Allowing fractional processing times and release dates, we can show that any instance can be adapted to have these properties without making the resulting schedule worse by a factor of more than~$(1+\ensuremath{\varepsilon})$.
We may thus partition the time horizon into intervals~$I_x = [(1+\ensuremath{\varepsilon})^x,(1+\ensuremath{\varepsilon})^{x+1}]$, such that every job is released at the beginning of an interval. Since jobs are not released too early, we may conclude that the maximum number of intervals~$\ensuremath{\sigma}$ covered by the running time of a single job is constant. This allows us to group intervals together in blocks~$B_t = \{I_{t\ensuremath{\sigma}}, I_{t\ensuremath{\sigma}+1},\dots, I_{(t+1)\ensuremath{\sigma}-1}\}$ of~$\ensuremath{\sigma}$ intervals each, such that every job scheduled to start in block~$B_t$ will terminate before the end of the next block~$B_{t+1}$.
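The geometric rounding underlying this partition can be sketched as follows (our own toy code; the values of `eps` and `sigma` are illustrative choices, not taken from the analysis):

```python
# Geometric rounding sketch: round a release date up to the nearest power of
# (1+eps), so every job is released at the start of an interval
# I_x = [(1+eps)^x, (1+eps)^(x+1)], and group sigma consecutive intervals
# into a block B_t. Parameter values are illustrative only.
import math

def interval_index(t, eps):
    # smallest x with (1+eps)^x >= t   (we assume t >= 1 for simplicity)
    return math.ceil(math.log(t, 1 + eps))

def round_release(t, eps):
    # rounding up increases t by a factor of at most (1+eps)
    return (1 + eps) ** interval_index(t, eps)

def block_index(t, eps, sigma):
    return interval_index(t, eps) // sigma

eps, sigma = 0.5, 3
for r in [1, 2, 5, 10]:
    print(r, round(round_release(r, eps), 3), block_index(r, eps, sigma))
```

For instance, with `eps = 0.5` the release date~2 is rounded up to $1.5^2 = 2.25$, a loss of at most the factor $(1+\ensuremath{\varepsilon})$.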
To use the fact that each block only interacts with the next block in our dynamic program, we need to specify an interface for this interaction. For that purpose we introduce the notion of a \emph{frontier}. A block \emph{respects an incoming frontier}~$F=(f_{\ensuremath{\mathrm{l}}},f_{\ensuremath{\mathrm{r}}})$ if no leftbound (rightbound) job scheduled to start in the block starts earlier than~$f_{\ensuremath{\mathrm{l}}}$ ($f_{\ensuremath{\mathrm{r}}}$). Similarly, a block \emph{respects an outgoing frontier}~$F=(f_{\ensuremath{\mathrm{l}}},f_{\ensuremath{\mathrm{r}}})$ if no leftbound or rightbound job scheduled to start in the block would interfere with a leftbound (rightbound) job starting at time~$f_{\ensuremath{\mathrm{l}}}$ ($f_{\ensuremath{\mathrm{r}}}$). The symmetrical structure of the compatibility graph ($K_{\rightb{n},\leftb{n}}$ or $\emptyset$) allows us to use this simple interface. We introduce a dynamic programming table with entries~$T[t,F,U]$ that are designed to hold the minimum total completion time of scheduling all jobs in~$U\subseteq \ensuremath{\set{J}}$ to start in block~$B_t$ or earlier, such that~$B_t$ respects the outgoing frontier~$F$. We define~$C(t,F_1,F_2,V)$ to be the minimum total completion time of scheduling all jobs in~$V$ to start in $B_t$ with $B_t$ respecting the incoming frontier~$F_1$ and the outgoing frontier~$F_2$ (and~$\infty$ if this is impossible). We have the following recursive formula for the dynamic programming table: \[
T[t,F,U] = \min\nolimits_{F',V \subseteq U}\{T[t-1,F',U \setminus V]+C(t,F',F,V)\}. \]
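To illustrate the shape of this recursion, the following memoized skeleton (our own simplification; the frontier set and the block cost~$C$ are stubbed with trivial toy values, so it only exercises the table structure, not the actual scheduling subproblem) evaluates $T[t,F,U]$ over all frontiers and job subsets:

```python
# Skeleton of the block dynamic program: T[t, F, U] is computed from
# T[t-1, F', U \ V] plus a block cost C(t, F', F, V). Frontiers and the block
# cost are stubbed with toy values purely to exercise the recursion.
from functools import lru_cache
from itertools import chain, combinations

JOBS = frozenset({0, 1, 2})
FRONTIERS = (0, 1)            # stands in for the constantly many frontiers
NUM_BLOCKS = 2

def subsets(s):
    s = list(s)
    return chain.from_iterable(combinations(s, k) for k in range(len(s) + 1))

def block_cost(t, f_in, f_out, jobs):
    # stub for C(t, F', F, V): later blocks are "more expensive" per job
    return len(jobs) * (t + 1) + abs(f_out - f_in)

@lru_cache(maxsize=None)
def T(t, f, U):
    if t < 0:
        return 0 if not U else float("inf")   # all jobs must be scheduled
    return min(T(t - 1, f2, U - frozenset(V)) + block_cost(t, f2, f, frozenset(V))
               for f2 in FRONTIERS for V in subsets(U))

print(min(T(NUM_BLOCKS - 1, f, JOBS) for f in FRONTIERS))  # 3
```

With this toy cost it is cheapest to place all three jobs in the first block, giving total cost~3; the real algorithm replaces the stub by the minimum total completion time of scheduling~$V$ within block~$B_t$.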
To turn this into an efficient dynamic program, we need to limit the dependencies of each entry and show that~$C(\cdot)$ can be computed efficiently. The number of blocks to be considered can be polynomially bounded by~$\log D$, where~$D=\max_j r_j + n\cdot(\max_j p_j + \tau_1)$ is an upper bound on the makespan. The following lemma shows that we only need to consider polynomially many other entries to compute~$T[t,F,U]$ and we only need to evaluate~$C(\cdot)$ for job sets of constant size, which we can do in polynomial time by simple enumeration.
\begin{restatable}{lemma}{lemConstantInterface}
\label{lem:constant_interface}
There is a schedule with a sum of completion times within a factor of~$(1+\ensuremath{\varepsilon})$ of the optimum and with the following properties:
\begin{compactenum}
\item The number of jobs scheduled in each block is bounded by a constant.
\item Every two consecutive blocks respect one of constantly many frontiers.
\end{compactenum}
\end{restatable}
\begin{proof}[Proof sketch]
Partitioning the released jobs of each interval direction-wise by processing time into \emph{small} and \emph{large} jobs and bundling small jobs into packages of roughly the same size allows us to bound the number of released jobs per interval by a constant, as in~\cite{Afrati1999}. Furthermore, we establish that we may assume jobs to remain unscheduled for only constantly many blocks.
For the second property, we stretch all time intervals by a factor of~$(1+\ensuremath{\varepsilon})$, which gives enough room to decrease the start times of those jobs interfering with two blocks such that a~$1/\ensuremath{\varepsilon}^2$-fraction of an interval separates jobs starting in two consecutive blocks. Thus, we only need to consider~$\frac{\ensuremath{\sigma}}{\ensuremath{\varepsilon}^2}$ possible frontier values per direction, or a total of~$\smash{\bigl(\frac{\ensuremath{\sigma}}{\ensuremath{\varepsilon}^2}\bigr)^2}$ possible frontiers. \end{proof}
\ifthenelse{\boolean{ptas-more}}{ We now generalize our dynamic program to a constant number of segments assuming that the transit times of any two segments differ by at most a constant factor. To this end, we split our jobs into parts, one for each segment the job needs to be processed on, with the additional constraint that no part may be scheduled before any part of the same job on earlier segments. We are able to generalize Lemma~\ref{lem:constant_interface} to this setting, using that each part of a job runs in at most two blocks and partitioning jobs into small and large for each direction and combination of start and target segments. The interface between consecutive time blocks needs to be extended to a frontier on each segment. In addition, a part running in block~$B_t$ imposes a lower bound on the start time of the next part of the same job running in block~$B_{t+1}$. Since the number of parts running in block~$B_t$ is bounded by a constant~$b$, the interface still has constant size. We assume that jobs are ordered and write~$\vec F = (F_1,\dots,F_m)$, $\vec{\theta} = (\theta_1,\dots,\theta_b)$. We can define our table with entries~$T[t,\vec{F},U,V,\vec{\theta}]$ containing the minimum sum of completion times (of completely finished jobs) when scheduling the parts in $U$ to start in block~$B_t$ or earlier, such that: $B_t$ respects the outgoing frontier~$F_i$ on segment~$i$, the parts in $V \subseteq U$ are scheduled to start in block~$B_t$, and the~$l$-th part in~$V$ stops running by time~$\theta_l$. Similarly,~$C(t,\vec{F}',\vec{F},V,\vec\theta',\vec\theta)$ is the minimum sum of completion times for scheduling the parts in~$V$ in block~$B_t$, respecting frontiers~$\vec{F}',\vec{F}$ on the segments, such that the~$l$-th part in $V$ does not start running before time~$\theta_l'$ and stops running by time~$\theta_l$ (if possible, and~$\infty$ otherwise). 
The recursive formula restricted to subsets that respect the order in which parts need to be processed becomes \[
T[t,\vec{F},U,V,\vec{\theta}] = \min_{\substack{\vec F',V' \subseteq U\setminus V,\vec\theta' \\ V'\textrm{ is consistent}}}\{T[t-1,\vec{F}',U \setminus V, V',\vec\theta'] + C(t,\vec F',\vec F,V,\vec\theta',\vec\theta)\}. \]
We obtain the following result.\enlargethispage{1ex}
\begin{theorem} \label{thm:ptas}
The bidirectional scheduling problem on a constant number of segments and with compatibility graphs~$G_i\in\{K_{\rightb{n},\leftb{n}},\emptyset\}$ for each $i\in M$ admits a PTAS assuming that the transit times of any two segments differ by at most a constant factor. \end{theorem} }{}
\section{Hardness of custom compatibilities} \label{sec:arbitrary-conflicts} \newcommand{\tpart}[1]{\ensuremath{P_{#1}}}
In Section~\ref{sec:unbounded}, we showed that bidirectional scheduling is hard on an unbounded number of segments, even for identical jobs. As the main result of this section, we show that for arbitrary compatibility graphs the problem is $\mathsf{APX}$-hard already on a single segment and with unit processing and transit times. For ease of exposition, we first show that the minimization of the makespan is $\mathsf{NP}$-hard. Later we extend this result to the minimization of the total completion time and to $\mathsf{APX}$-hardness.
\begin{restatable}{theorem}{corGraphHardnessSum} \label{thm:graph-hardness-sum}
The bidirectional scheduling problem on a single segment and with an arbitrary compatibility graph is $\mathsf{NP}$-hard even if~$\proc{j}=\transit{1}=1$ for each~$j\in\ensuremath{\set{J}}$. \end{restatable}
We give a reduction from an $\mathsf{NP}$-hard variant of \textsc{Sat}\xspace (cf.~\cite{GareyJohnson1979}): \rsat{\leq\!3}{3} considers a formula given by a set~$\set{C}$ of clauses of size three over a set of variables~$\set{X}$, where each variable appears in at most three clauses, and asks whether there is a truth assignment of~$\set{X}$ satisfying~$\set{C}$. Note the difference to the polynomially solvable \rsat{3}{3}, where each variable appears in \emph{exactly} three clauses~\cite{Tovey1984}.
For a given~\rsat{\leq\!3}{3} formula we construct a bidirectional scheduling instance that can be scheduled within some specific makespan~$T$ if and only if the given formula is satisfiable. Our construction is best explained by partitioning the time horizon~$[0,T]$ into four parts (cf.~Fig.~\ref{fig:graph-hardness} along with the following).
\begin{figure}
\caption{Illustration (colored) of the four parts of our construction. Time is directed downwards, rightbound (leftbound) jobs are depicted on the left (right) of each figure.\\[-.6cm]}
\label{pic:all-part2}
\label{pic:all-part3}
\label{pic:all-part4}
\label{fig:graph-hardness}
\end{figure}
We use a frame of blocking jobs that need to be scheduled at their release date. We can enforce this by making sure that at least one blocking job is released at (almost) each unit time step and that blocking jobs that are not supposed to run concurrently are incompatible. We release variable jobs that have to be scheduled into gaps between the blocking jobs. More precisely, in the first part of the construction we release 6~jobs within a separate time interval for each variable. Two of these jobs are leftbound and need to be scheduled within the first two parts of the construction, which implies that one of the two remaining pairs of rightbound jobs must be scheduled after the second part. If the first pair is delayed we interpret this as an assignment of \emph{true} to the variable and otherwise as \emph{false}.
The third part of the construction has a gap for each clause, with compatibilities ensuring that only variable jobs that satisfy the clause can be scheduled into the gap. Since each literal appears in at most two clauses, there are enough variable jobs to satisfy all clauses if the formula is satisfiable. Finally, the last part has~$2|X|-|C|$ gaps that fit any variable job. In order to schedule all variable jobs before the end of the last part, we thus need to schedule a variable job into each gap of a clause. This is possible if and only if the given~\rsat{\leq\!3}{3} formula is satisfiable. We can easily extend our result to completion or waiting times by adding many blocking jobs after the last part, such that violating the makespan also ruins the total completion time.
With a slight adaptation of the construction and more involved arguments, we can even show $\mathsf{APX}$-hardness of the problem. We reduce from a specific variant of \noun{Max-3-Sat}, where each literal occurs exactly twice and which is $\mathsf{NP}$-hard to approximate within a factor of $1016/1015$, see \citet{BermanKS2003}.
\begin{restatable}{theorem}{GraphAPXHardnessSum} \label{thm:graph-apx-hardness-sum}
The bidirectional scheduling problem on a single segment and with an arbitrary compatibility graph is $\mathsf{APX}$-hard even if~$\proc{j}=\transit{1}=1$ for each~$j\in\ensuremath{\set{J}}$. \end{restatable}
\section{Dynamic programs for restricted compatibilities} \label{sec:constant_segments} After establishing the hardness of bidirectional scheduling with a general compatibility graph in the previous section, we now turn to the case of a constant number of different compatibility types.
Due to the identical processing times, the jobs in each direction can be scheduled in the order of their release dates. The only decision left is when to switch between left- and rightbound operation of the segments. This decision is hard in the general case~(Theorem~\ref{thm:unbounded_hardness}), but we are able to formulate a dynamic program for any constant number of segments.
Our result generalizes to the case when some jobs of different directions are compatible
as long as the number of \emph{compatibility types} is constant, where two jobs~$j_1, j_2$ in the same direction are defined to have the same compatibility type if the set of jobs compatible with~$j_1$ is equal to the set of jobs compatible with~$j_2$ on each segment. Formally,~$j_1$ and~$j_2$ have the same compatibility type if $\bigl\{j : \{j_1,j\} \in E_i\bigr\} = \bigl\{j : \{j_2,j\} \in E_i\bigr\}$ for the compatibility graphs $G_i = (\ensuremath{\leftbset{J}} \ensuremath{\mathaccent\cdot\cup} \ensuremath{\rightbset{J}}, E_i)$ of each segment~$i$.
For a single segment we partition~$\ensuremath{\set{J}}$ into~$\ensuremath{\kappa}$ subsets of jobs~$\ensuremath{\set{J}}^1, \dots, \ensuremath{\set{J}}^{\ensuremath{\kappa}}$ where all jobs of~$\ensuremath{\set{J}}^c$, $c\in\{1,\dots, \ensuremath{\kappa}\}$, have the same compatibility type~$c$, and let $n_c = |\ensuremath{\set{J}}^c|$. Since the jobs of each subset only differ in their release dates, they can again be scheduled in the order of their release dates. This observation allows us to define a dynamic program that decides how to merge the job sets~$\ensuremath{\set{J}}^1, \dots, \ensuremath{\set{J}}^{\ensuremath{\kappa}}$ such that the resulting schedule has minimum total completion time.
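This merging idea can be illustrated, for the special case of two types (one per direction, no compatibilities), by a brute-force variant of our own (exponential, unlike the polynomial dynamic program): for each fixed interleaving of the two release-ordered job lists, starting every job at its earliest feasible time is optimal, so minimizing over all interleavings yields the optimal total completion time on a single segment.

```python
# Toy exact solver for a single segment with identical processing times and two
# job types (rightbound/leftbound, no compatibilities). A schedule is fixed by
# the interleaving of the two release-ordered lists, i.e. by the choice of when
# to switch direction. This is our own illustration, not the paper's DP.
from itertools import combinations

def schedule_cost(rel_R, rel_L, p, tau, right_positions, n):
    # greedy earliest-start schedule for one fixed interleaving
    iR = iL = 0
    last = {"R": None, "L": None}   # last start time per direction
    total = 0
    for pos in range(n):
        d = "R" if pos in right_positions else "L"
        s = rel_R[iR] if d == "R" else rel_L[iL]
        if last[d] is not None:
            s = max(s, last[d] + p)          # same direction: processing disjoint
        o = "L" if d == "R" else "R"
        if last[o] is not None:
            s = max(s, last[o] + p + tau)    # opposite: whole running times disjoint
        last[d] = s
        total += s + p + tau                 # completion time on the segment
        if d == "R":
            iR += 1
        else:
            iL += 1
    return total

def best_total_completion(rel_R, rel_L, p, tau):
    n = len(rel_R) + len(rel_L)
    return min(schedule_cost(rel_R, rel_L, p, tau, set(c), n)
               for c in combinations(range(n), len(rel_R)))

print(best_total_completion([0, 1], [0], p=1, tau=1))  # 10
```

In the small example, serving both rightbound jobs first and switching to leftbound operation afterwards is optimal; the dynamic program of Theorem~\ref{thm:cct-singleseg-poly} replaces this enumeration of interleavings by a polynomial computation, also for~$\ensuremath{\kappa}>2$ types.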
\begin{restatable}{theorem}{singleSegPoly}
\label{thm:cct-singleseg-poly}
The bidirectional scheduling problem can be solved in polynomial time if~$m=1$,~$\ensuremath{\kappa}$ is constant and~$\proc{j}=\proc{}$ for each~$j\in\ensuremath{\set{J}}$.
\end{restatable}
We now consider a constant number of segments~$m>1$. The main complication in this setting is that decisions on one segment can influence decisions on other segments, and, in general, every job can influence every other job in this way. In particular, we need to keep track of how many jobs of each type are in transit at each segment, and we can thus not easily adapt the dynamic program for a single segment. We propose a different dynamic program that relies on all transit times being bounded by a constant and can be adapted for assumptions complementary to Theorem~\ref{thm:unbounded_hardness}.
\begin{restatable}{theorem}{thmIdenticalConstantSegMakespan} \label{thm:identical-constantseg-makespan} The bidirectional scheduling problem can be solved in polynomial time if~$m$ and~$\ensuremath{\kappa}$ are constant and either~$\proc{j}=1$ for each~$j\in\ensuremath{\set{J}}$ and $\transit{i}$ is constant for each~$i\in M$ or~$\proc{j}=0$ for each~$j\in\ensuremath{\set{J}}$ and $\transit{i}=1$ for each~$i\in M$. \end{restatable}
\appendix
\section{Proofs of Section~\ref{sec:unbounded}:\newline Hardness of bidirectional scheduling}\label{appendix:unbounded}
In this section, we give a detailed proof of the hardness of the bidirectional scheduling problem with zero processing times and unit transit times. We describe our reduction from \noun{MaxCut}. Let an instance $\ensuremath{\mathcal{I}}\xspace=(G_\ensuremath{\mathcal{I}}\xspace,k)$ of \noun{MaxCut} be given, with $G_\ensuremath{\mathcal{I}}\xspace=(V_\ensuremath{\mathcal{I}}\xspace,E_\ensuremath{\mathcal{I}}\xspace)$, $|V_\ensuremath{\mathcal{I}}\xspace|=n_\ensuremath{\mathcal{I}}\xspace$, and $|E_\ensuremath{\mathcal{I}}\xspace|=m_\ensuremath{\mathcal{I}}\xspace$. We introduce a set of jobs on polynomially many segments that can be scheduled with a total waiting time of $W$ if and only if $\ensuremath{\mathcal{I}}\xspace$ admits a solution. Our construction is composed of various gadgets, which we describe in the following. We make use of suitably large parameters $x \gg y \gg z \gg 1$ that we will specify later. For example, $x$ is chosen in such a way that if ever $x$ jobs are located at the same segment, these jobs need to be processed immediately in order to achieve a waiting time of $W$. Note that because jobs take no time to be processed (i.e.,~$p_j=0$), we can schedule any number of jobs sharing a direction simultaneously on a single segment. Also, since~$\tau=1$, it makes no sense for a segment to stay idle if jobs are available. This allows us to restrict our analysis to schedules that are \emph{sensible} in the sense that, for each segment and at every time step, all jobs in one direction available at the segment get scheduled. On the other hand, the non-zero transit time induces a cost for switching the direction of jobs that are processed at a segment.
\subsubsection*{Vertex gadget.}
Each of the segments $1,10,19,28,\dots$ hosts one vertex gadget for each of the vertices in $V_\ensuremath{\mathcal{I}}\xspace$ (cf.~Figure~\ref{fig:vertex_gadget} with the following). Each vertex gadget~$g_t$ on segment $9\ell+1$ occupies a distinct time interval $[13t,13(t+1))$, $t<n_\ensuremath{\mathcal{I}}\xspace$, on the segment and is associated with one of the vertices $v\in V_\ensuremath{\mathcal{I}}\xspace$. The gadget comes with $24y$ \emph{vertex jobs} that only need to be processed at segment $9\ell+1$, half of them being leftbound, half being rightbound. Exactly $y$ jobs of each direction are released at times $13t,13t+1,\dots,13t+11$. We say that $g_t$ is scheduled \emph{consistently} if either all leftbound vertex jobs are processed immediately when they are released and all rightbound jobs wait for one time unit, or vice-versa. We say the gadget is in the \emph{leftbound} (\emph{rightbound}) \emph{state} and interpret this as vertex $v$ being part of set $V_1$ ($V_2$) of the partition of $V_\ensuremath{\mathcal{I}}\xspace = V_{1} \ensuremath{\mathaccent\cdot\cup} V_{2}$ we are implicitly constructing. A schedule is \emph{consistent} if all vertex gadgets are scheduled consistently. The following lemma allows us to distinguish consistent schedules.
\begin{restatable}{lemma}{lemVertexGadget} The vertex jobs of a single vertex gadget can be scheduled consistently with a waiting time of $12y$, while every inconsistent schedule has waiting time at least $13y$. \label{lem:vertex_gadget} \end{restatable} \begin{proof} Since $p=0$, we can schedule all available jobs with the same direction simultaneously. It follows that both consistent schedules are valid, and, since in both exactly half of the vertex jobs wait for one unit of time, the total waiting time of such a schedule is~$12y$. Any inconsistent (sensible) schedule would have to send jobs in the same direction in two consecutive unit time intervals, which means that in addition to the minimum waiting time of~$12y$, at least $y$ jobs have to wait an extra unit of time. \end{proof}
\subsubsection*{Synchronizing vertex gadgets.} Since every vertex $v\in V_\ensuremath{\mathcal{I}}\xspace$ is represented by multiple vertex gadgets on different segments, we need a way to ensure that all vertex gadgets for $v$ are in agreement regarding which part of the partition $v$ is assigned to. We introduce two different gadgets that handle synchronization. The \emph{copy gadget} synchronizes the vertex gadgets $g_t$ occupying the same time interval on segments~$9\ell+1$ and $9\ell+10$, while the \emph{transposition gadget} synchronizes gadgets $g_t,g_{t+1}$ on segment~$9\ell+1$ with gadgets $g_{t+1},g_t$ on segment~$9\ell+10$. Using a combination of copy and transposition gadgets, we can transition between any two orders of vertex gadgets on distant segments.
\begin{figure}\label{fig:copy_gadget}
\end{figure}
We first specify the copy gadget that synchronizes the vertex gadgets $g_t$ on two segments~$9\ell+1$ and $9\ell+10$ (cf.~Figure~\ref{fig:copy_gadget} with the following). The gadget consists of $2z$ rightbound \emph{synchronization jobs}, half of which are released at time~$13t$ and half at time~$13t+1$. The jobs need to be processed on all segments~$9\ell+1,\dots,9\ell+10$ in this order. In addition, we introduce~$3x$ \emph{blocking jobs} that are used to enforce that specific time intervals on a segment are reserved for leftbound/rightbound operation. Essentially, releasing~$x$ blocking jobs at time~$t$ on a single segment prevents any jobs from being processed in the opposite direction during the time interval~$[t,t+1)$ (and even earlier). In this manner, we block the interval starting at time $13t+3$ on segments $9\ell+2,9\ell+3,9\ell+4$.
\begin{restatable}{lemma}{lemCopyGadget} In any consistent schedule, the synchronization jobs of a single copy gadget can be scheduled with a waiting time of $3z$ if the two corresponding vertex gadgets are in the same state, otherwise their waiting time is at least $5z$. \label{lem:copy_gadget} \end{restatable} \begin{proof}
Since $x \gg z$, we need to schedule all blocking jobs as soon as they are released.
If both vertex gadgets $g_t$ linked by the copy gadget are in the rightbound state, the synchronization jobs released at time $13t$ only have to wait for one time unit at segment $9\ell+4$, while the other jobs have to wait at segments $9\ell+1$ and $9\ell+2$. Similarly, if the vertex gadgets are in the leftbound state, the first half of the jobs have to wait at segments $9\ell+1$ and $9\ell+3$, while the other half only has to wait at segment $9\ell+3$. The waiting time in either case is $3z$. If the vertex gadgets are in opposite states, all jobs have to additionally wait at segment $9\ell+10$, which results in a total waiting time of at least $5z$. \end{proof}
\begin{figure}\label{fig:transposition_gadget}
\end{figure}
We now describe the transposition gadget that synchronizes the vertex gadgets $g_t,g_{t+1}$ on segment~$9\ell+1$ with the vertex gadgets $g_{t+1},g_t$ on segment~$9\ell+10$ (cf.~Figure~\ref{fig:transposition_gadget} with the following). The challenge here is that jobs synchronizing the different pairs of vertex gadgets need to pass each other without interfering. We achieve this by making sure that the jobs never meet while being in transit at the same segment. The gadget consists of $4z$ synchronization jobs, half being rightbound and half being leftbound. Half of each group are released at time~$13t+6$ and half at time~$13t+7$, and all need to be processed at segments~$9\ell+1,\dots,9\ell+10$ (in different directions). In addition, we introduce~$12x$ blocking jobs to block the intervals starting at the following times: at times $13t+9$, $13t+10$ for rightbound jobs and at times $13t+14$, $13t+15$ for leftbound jobs on segment $9\ell+2$, at time $13t+9$ for rightbound jobs and at time $13t+15$ for leftbound jobs on segment~$9\ell+3$, and the corresponding (symmetrical) intervals in opposite direction on segments~$9\ell+8$ and $9\ell+9$ (cf.~Figure~\ref{fig:transposition_gadget}).
\begin{restatable}{lemma}{lemTranspositionGadget} In any consistent schedule, the synchronization jobs of a single transposition gadget can be scheduled with a waiting time of $10z$ if each of the two pairs of corresponding vertex gadgets are in the same state, otherwise their waiting time is at least $12z$. \label{lem:transposition_gadget} \end{restatable} \begin{proof}
Since $x \gg z$, we need to schedule all blocking jobs as soon as they are released.
It is easy to verify that every synchronization job waits at exactly two segments due to blocking jobs. In addition, half of the jobs wait for one unit of time at the segment where they are released -- for a total of $10z$ time units. If a pair of corresponding vertex gadgets is in opposite states, all synchronization jobs connecting them need to wait at least one additional unit of time at their last segment. Observe that synchronization jobs in opposite directions are never in transit on the same segment at the same time. \end{proof}
\subsubsection*{Edge gadget.}
\begin{figure}\label{fig:edge_gadget}
\end{figure}
The purpose of an edge gadget between vertex gadget $g_t$ on segment~$9\ell+1$ and $g_{t+1}$ on segment $9\ell+10$ is to produce a small additional waiting time if the two vertex gadgets are in the same state (cf.~Figure~\ref{fig:edge_gadget} with the following). We will introduce edge gadgets between vertex gadgets representing two vertices $u,v$ that share an edge in $G$. This way, every edge that connects vertices in different parts of the partition is beneficial for the resulting waiting time. The edge gadget itself consists of 2 rightbound \emph{edge jobs}, one being released at time $13t+7$ and the other at time $13t+8$. Both jobs need to be processed on segments $9\ell+1,\dots,9\ell+10$. We add $3x$ blocking jobs to block the unit time interval starting at time $13t+15$ on segments $9\ell+7,9\ell+8,9\ell+9$.
\begin{restatable}{lemma}{lemEdgeGadget} In any consistent schedule, the edge jobs of a single edge gadget can be scheduled with a waiting time of $3$ if the two connected vertex gadgets are in opposite states, otherwise their waiting time is at least $5$. \label{lem:edge_gadget} \end{restatable} \begin{proof}
One job always has to wait for a time unit at the first segment. Both jobs have to wait for the blocking jobs (since~$x \gg 1$). If the vertex gadgets are in the same state, both jobs have to wait an additional unit of time at the last segment. \end{proof}
\subsubsection*{Construction.}
We are now ready to combine our gadgets and explain the final construction.
\unbounded* \begin{proof} We start by introducing a vertex gadget~$g_t$ on segment~$1$ for each vertex~$v_t\in V_\ensuremath{\mathcal{I}}\xspace$ of the given \noun{MaxCut}-instance. For each edge~$\{u,v\}$ we extend the construction by appending more segments as follows. We add a sequence of blocks of $9$ segments, the last of which again contains a vertex gadget for each vertex. In between we add copy and transposition gadgets in such a way that on the last segment $i$ the vertex gadgets $g_0$ and $g_1$ represent the vertices $u$ and~$v$. We can achieve this by adding fewer than $n_\ensuremath{\mathcal{I}}\xspace$ segments. We add an additional block of $9$ segments, and add copy gadgets for each of the vertices. Finally, we add an edge gadget connecting vertex gadget $g_0$ on segment $i$ with $g_1$ on the last segment. Observe that the edge jobs do not interfere with any of the synchronization jobs for the copy gadgets for the first two vertices~(cf.~Figure~\ref{fig:edge_gadget}). We repeat the process once for each edge. The total number of segments is $\ensuremath{\mathcal{O}}\xspace(n_\ensuremath{\mathcal{I}}\xspace m_\ensuremath{\mathcal{I}}\xspace)$, and the total number of jobs is $\ensuremath{\mathcal{O}}\xspace(n^2_\ensuremath{\mathcal{I}}\xspace m_\ensuremath{\mathcal{I}}\xspace(x+y+z))$. The number of vertex gadgets is $n_{v}<n^2_\ensuremath{\mathcal{I}}\xspace m_\ensuremath{\mathcal{I}}\xspace$, and the number of transposition and copy gadgets is $n_t<n_c<n_{v}$.
We claim that if the \noun{MaxCut} instance admits a solution~$\mathcal{S}$, we can schedule all jobs with waiting time at most $W=12 n_v y + 3 n_c z + 10 n_t z + 5 m_\ensuremath{\mathcal{I}}\xspace - 2k$. We do this by scheduling all vertex gadgets consistently in the state corresponding to the part of the partition the corresponding vertex belongs to in $\mathcal{S}$. Lemmas~\ref{lem:vertex_gadget} through~\ref{lem:transposition_gadget} guarantee that we can schedule everything but the edge jobs without incurring a waiting time greater than $12 n_v y + 3 n_c z + 10 n_t z$. Finally, since at least $k$ edges in the \noun{MaxCut} solution are between vertices in different sets of the partition, and the vertex gadgets are set accordingly, by Lemma~\ref{lem:edge_gadget}, we obtain an additional waiting time of at most $5m_\ensuremath{\mathcal{I}}\xspace-2k$ as claimed.
It remains to establish that the waiting time exceeds $W$ in case the \noun{MaxCut} instance does not admit a solution. We set $x=W+1$, such that all blocking jobs have to be scheduled as soon as they are released. By Lemma~\ref{lem:vertex_gadget}, scheduling at least one vertex gadget inconsistently produces a total waiting time of at least $12 n_v y + y$. We now set $y = 18 n_\ensuremath{\mathcal{I}}\xspace^2 m_\ensuremath{\mathcal{I}}\xspace z > 3 n_c z + 10 n_t z + 5m_\ensuremath{\mathcal{I}}\xspace$ for the vertex jobs, such that a single inconsistent vertex gadget results in a waiting time greater than $W$. Hence, each vertex gadget needs to be scheduled consistently. By Lemmas~\ref{lem:copy_gadget} and~\ref{lem:transposition_gadget}, we have that if not all vertex gadgets corresponding to the same vertex are in the same state, the waiting time for vertex and synchronization jobs is at least $12 n_v y + 3 n_c z + 10 n_t z + 2z$. We set $z = 5m_\ensuremath{\mathcal{I}}\xspace$, which allows us to conclude that all vertex gadgets are in agreement regarding the partition of the vertices. Finally, Lemma~\ref{lem:edge_gadget} enforces that there are at least $k$ edge gadgets between vertices in different states. This, however, is impossible, as our \noun{MaxCut} instance does not admit a solution. \end{proof}
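As an informal sanity check (our own sketch, not part of the formal reduction; the function name and the illustrative instance sizes are our choosing, and the gadget counts are replaced by their crude upper bound $n_v < n_\mathcal{I}^2 m_\mathcal{I}$), the parameter choices $z=5m_\mathcal{I}$, $y=18n_\mathcal{I}^2 m_\mathcal{I} z$, and $x=W+1$ can be verified numerically:

```python
def check_separation(n, m):
    """Check the inequalities behind the choice of the job weights x, y, z
    for a MaxCut instance with n vertices and m edges (illustrative sizes).
    Gadget counts are replaced by the upper bound n_v < n^2 * m."""
    z = 5 * m                 # weight of synchronization jobs
    y = 18 * n**2 * m * z     # weight of vertex jobs
    n_v = n**2 * m            # upper bound on the number of vertex gadgets
    n_c = n_t = n_v           # copy/transposition gadgets: n_t < n_c < n_v
    # a single inconsistent vertex gadget (+y) must dominate all smaller terms
    assert y > 3 * n_c * z + 10 * n_t * z + 5 * m
    # a single synchronization mismatch (+2z) must exceed the edge-job slack
    assert 2 * z + 3 * m > 5 * m
    # blocking jobs dominate every feasible total waiting time
    W = 12 * n_v * y + 3 * n_c * z + 10 * n_t * z + 5 * m
    x = W + 1
    assert x > W
    return True

check_separation(4, 6)
```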
\corUnboundedWithProcessing* \begin{proof}
We adapt our construction by setting~$p=1$ and~$\tau=n^2 m$ and scaling all release times by $n^2 m$, where $n,m$ are the number of jobs and segments, respectively.
We claim that the original instance admits a solution of some waiting time $W$ if and only if it now admits a solution with waiting time in $[W\tau, (W+1)\tau)$.
This proves the Corollary, as the intervals are pairwise disjoint for different (integer) values of $W$.
If the original construction (with~$p=0$ and~$\tau=1$) does not admit a solution with waiting time at most $W$, then a scaled version with~$p=0$ and~$\tau = n^2 m$ does not admit a solution with waiting time at most~$W\tau$. But the lowest possible waiting time is monotonically increasing in the processing times, hence the adapted instance with~$p=1$ does not admit a solution with waiting time at most~$W\tau$.
Conversely, assume we have a solution of the original instance with waiting time~$W$. We fix the order in which jobs are processed along each segment and construct a schedule for the setting~$p=1$, $\tau=n^2 m$ by introducing additional waiting periods for each job. Clearly, each job has to wait at most one time unit for each other job to be processed at each segment. Hence, the additional waiting time overall is smaller than~$n^2 m = \tau$. \end{proof}
\section{Proofs of Section~\ref{sec:PTAS}: \newline A PTAS for bidirectional scheduling} \label{appendix:ptas}
\ifthenelse{\boolean{ptas-more}}{ In this Section we restate the Lemmas with detailed proofs that are necessary to show the existence of a PTAS if the processing times of the jobs are not restricted to be equal.
\subsection{Single Segment}
We consider first the case of a single segment, or, more precisely, the bidirectional scheduling problem on a single segment and with compatibility graph~$G_1\in\{K_{\rightb{n},\leftb{n}},\emptyset\}$. }{ In this Section we state the Lemmas with detailed proofs that are necessary to show the existence of a PTAS if the processing times of the jobs are not restricted to be equal in the case of a single segment. More precisely, we consider the bidirectional scheduling problem on a single segment with compatibility graph~$G_1\in\{K_{\rightb{n},\leftb{n}},\emptyset\}$. } Following the proof scheme of~\cite{Afrati1999}, we introduce several lemmas that allow us to make assumptions at ``$\ensuremath{\mathcal{O}}\xspace(1+\ensuremath{\varepsilon})$-loss'', meaning that we can modify any input instance and optimum schedule to adhere to these assumptions, such that the resulting schedule is within a factor polynomial in~$(1+\ensuremath{\varepsilon})$ of the optimum schedule for the original instance. To avoid complicating matters unnecessarily, in the following we allow fractional release dates and processing times.
\begin{restatable}{lemma}{lemPtasGeometricRounding} \label{lem:ptas-geometric_rounding}
With~$\ensuremath{\mathcal{O}}\xspace(1+\ensuremath{\varepsilon})$-loss we can assume that $\rel{j},\proc{j}\in\{(1+\ensuremath{\varepsilon})^x\mid x\in\ensuremath{\mathbb{N}}\}\cup\{0\}$, $\rel{j} \geq \ensuremath{\varepsilon}(\proc{j}+\transit{1})$, and~$r_j\geq 1$ for each job~$j\in\ensuremath{\set{J}}$. \end{restatable} \begin{proof}
Increasing any value~$v\in\mathbb{R}$ to the smallest power of~$(1+\ensuremath{\varepsilon})$ not smaller than~$v$ yields a value~$v' = (1+\ensuremath{\varepsilon})^x=(1+\ensuremath{\varepsilon})(1+\ensuremath{\varepsilon})^{x-1}\leq(1+\ensuremath{\varepsilon})v$.
Hence, multiplying all start times of a schedule by~$(1+\ensuremath{\varepsilon})$ gives a feasible schedule even when rounding up all nonzero processing times
to the next power of~$(1+\ensuremath{\varepsilon})$. The total completion time does not increase by more than a factor of~$(1+\ensuremath{\varepsilon})$.
By stretching the completion times of a schedule with adapted processing times by a factor of~$(1+\ensuremath{\varepsilon})$, we obtain increased start times~$\start{j}'$ for each job~$j$:
\[
\start{j}' = (1+\ensuremath{\varepsilon})\compl{j} - (\proc{j}+\transit{1})
= (1+\ensuremath{\varepsilon})(\start{j} + \proc{j} + \transit{1}) - (\proc{j} + \transit{1})
\geq \ensuremath{\varepsilon}(\proc{j}+\transit{1})\text{.}
\]
Hence, by losing not more than a~$(1+\ensuremath{\varepsilon})$-factor we may assume that all jobs have release dates of at least an~$\ensuremath{\varepsilon}$ fraction of their running time. Now, we can scale the instance by some power of~$(1+\ensuremath{\varepsilon})$, such that the earliest release date is at least one (since jobs with $\rel{j}=\proc{j}=\transit{1}=0$ can be ignored).
Finally, multiplying again all start times of a schedule with adapted processing times and release dates by~$(1+\ensuremath{\varepsilon})$ yields a feasible schedule even when rounding up all nonzero release dates to the next power of~$(1+\ensuremath{\varepsilon})$. \end{proof}
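Numerically, the rounding step can be illustrated as follows (a minimal sketch with a helper name of our own choosing, not part of the proof): rounding any positive value up to the next power of $(1+\varepsilon)$ increases it by at most a factor of $(1+\varepsilon)$.

```python
import math

def round_up_to_power(v, eps):
    """Smallest power of (1+eps) that is at least v (for v > 0)."""
    x = math.ceil(math.log(v, 1 + eps))
    # guard against floating-point inaccuracies of the logarithm
    while (1 + eps) ** (x - 1) >= v:
        x -= 1
    while (1 + eps) ** x < v:
        x += 1
    return (1 + eps) ** x

eps = 0.1
for v in [0.5, 1.0, 3.7, 120.0]:
    v_rounded = round_up_to_power(v, eps)
    # v <= v' = (1+eps)^x <= (1+eps) * v, since (1+eps)^(x-1) < v
    assert v <= v_rounded <= (1 + eps) * v
```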
We define~$R_x=(1+\ensuremath{\varepsilon})^x$ and consider time intervals~$I_x=[R_x,R_{x+1}]$ of length~$\ensuremath{\varepsilon} R_x$.
\enlargethispage*{1ex}
\begin{restatable}{lemma}{lemPtasBoundedCrossing}
\label{lem:ptas-bounded_crossing}
Each job runs for at most~$\ensuremath{\sigma}:=\lceil\log_{1+\ensuremath{\varepsilon}}\frac{1+\ensuremath{\varepsilon}}{\ensuremath{\varepsilon}} \rceil$ intervals, i.e., a job starting in interval~$I_x$ is completed before the end of~$I_{x+\ensuremath{\sigma}}$. \end{restatable} \begin{proof}
Consider some job~$j$ and assume that~$j$ starts in~$I_x$ in some schedule. By Lemma~\ref{lem:ptas-geometric_rounding} we get
\[
|I_x| = \ensuremath{\varepsilon} R_x \geq \ensuremath{\varepsilon}\rel{j} \geq \ensuremath{\varepsilon}^2(\proc{j}+\transit{1})\text{.}
\]
Thus, the running time of~$j$ is bounded by~$|I_x|/\ensuremath{\varepsilon}^2$. This immediately yields the constant upper bound of~$1/\ensuremath{\varepsilon}^2$ on the number of used intervals, which can be improved further: since the intervals increase in size, already the~$\ensuremath{\sigma}$ intervals succeeding~$I_x$ suffice, together with~$I_x$, to cover a length of~$|I_x|/\ensuremath{\varepsilon}^2$. Using the fact that~$\sum_{k=0}^n z^k = \frac{1-z^{n+1}}{1-z}$ we get
\begin{align*}
\sum_{i=0}^{\ensuremath{\sigma}} |I_{x+i}| & = \sum_{i=0}^{\ensuremath{\sigma}} (R_{x+i+1}- R_{x+i}) =
|I_x|\sum_{i=0}^{\ensuremath{\sigma}} (1+\ensuremath{\varepsilon})^i \\
& = |I_x|\frac{1-(1+\ensuremath{\varepsilon})^{{\ensuremath{\sigma}}+1}}{1-(1+\ensuremath{\varepsilon})}\\
& \geq |I_x|\frac{1-\frac{1+\ensuremath{\varepsilon}}{\ensuremath{\varepsilon}}}{-\ensuremath{\varepsilon}} =
|I_x|\frac{1+\ensuremath{\varepsilon}-\ensuremath{\varepsilon}}{\ensuremath{\varepsilon}^2} =\frac{|I_x|}{\ensuremath{\varepsilon}^2},
\end{align*}
which concludes the proof. \end{proof}
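The geometric-series bound can be checked numerically; the following sketch (helper names are our own) confirms that the intervals $I_x,\dots,I_{x+\sigma}$ together cover a length of at least $|I_x|/\varepsilon^2$ for several sample parameters:

```python
import math

def interval_length(x, eps):
    # |I_x| = R_{x+1} - R_x = eps * (1+eps)^x
    return eps * (1 + eps) ** x

def sigma(eps):
    # sigma = ceil(log_{1+eps}((1+eps)/eps))
    return math.ceil(math.log((1 + eps) / eps, 1 + eps))

for eps in [0.5, 0.25, 0.1]:
    for x in [0, 3, 10]:
        covered = sum(interval_length(x + i, eps) for i in range(sigma(eps) + 1))
        assert covered >= interval_length(x, eps) / eps**2
```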
We use the common technique of \emph{time-stretching}. We shift each start time (or completion time) to the next interval while maintaining the same offset to the beginning of the interval. This way, the schedule remains feasible and the objective is increased by a factor of at most~$(1+\ensuremath{\varepsilon})$. Intuitively, this process can be interpreted as stretching the length of each time interval~$I_x$ by a factor of~$(1+\ensuremath{\varepsilon})$, i.e., its length is increased by~$\ensuremath{\varepsilon}|I_x|$. When applying (multiple) time-stretches we use the following observation to assess the additional empty space created between jobs:
\begin{restatable}{lemma}{lemPtasTimeStretch}
\label{lem:ptas-time-stretch}
Consider two distinct times~$T_1 < T_2$ with~$T_1\in I_{x(1)}$ and~$T_2\in I_{x(2)}$. Applying~$\ell$ time-stretches yields shifted times~$T'_1 < T'_2$ with
\begin{equation}\label{eq:bidir-time-shift}
(T'_2 - T'_1) \geq (T_2 - T_1) + \idle{x(1)}{x(2)},
\end{equation}
where $\idle{x(1)}{x(2)} := \sum_{x(1)\leq x < x(2)}\ell\ensuremath{\varepsilon}|I_x|$. \end{restatable} \begin{proof}
We calculate
\begin{align*}
(T'_2 - T'_1) & = R_{x(2)+\ell} + (T_2-R_{x(2)}) - [R_{x(1)+\ell} + (T_1-R_{x(1)})]\\
& = ((1+\ensuremath{\varepsilon})^{\ell} - 1) R_{x(2)} + T_2 - ((1+\ensuremath{\varepsilon})^{\ell} - 1) R_{x(1)} - T_1\\
& \geq (T_2 - T_1) + (1+\ell\ensuremath{\varepsilon} -1)(R_{x(2)} - R_{x(1)})\\
& = (T_2 - T_1) + \ell\ensuremath{\varepsilon}\sum_{x(1)\leq x < x(2)}|I_x|.
\end{align*} \end{proof}
We can now apply time-stretches to the start or completion times of all jobs and use the above observation to quantify the additional space created in the schedule. Consider two jobs $j,k\in\ensuremath{\set{J}}$ with starting times~$\start{j}<\start{k}$, and let $s(j),s(k)$ (resp.\ $c(j),c(k)$) denote the intervals in which their start (completion) times fall, i.e., $\start{j}\in I_{s(j)}$ (and $\compl{j}\in I_{c(j)}$). E.g., if we apply~$\ell$ time-stretches to starting times, we obtain an additional gap of $\idle{s(j)}{s(k)}$ between the new starting and completion times. Table~\ref{tab:bidir-time-shift} summarizes the resulting gaps depending on whether start or completion times are stretched and whether~$j,k$ travel in the same or opposite directions.
\begin{table}[ht] \caption{Summary of the increased differences between start and completion times of jobs~$j,k\in\ensuremath{\set{J}}$, $\start{j}<\start{k}$, when stretching start times (denoted by~$\smash{'}$) or completion times (denoted by~$\smash{''}$). We use Lemma~\ref{lem:ptas-time-stretch} together with the fact that~$j$ and~$k$ did not overlap before the time-stretch.}\label{tab:bidir-time-shift}
\centering
\scriptsize
\begin{tabulary}{\linewidth}{RRCC}
\toprule
time-stretch on & & same direction & opposite direction \\
\midrule
\multirow{1}{*}{start times} &
\small\eqref{eq:bidir-time-shift}&
\small$\start{k}'\geq \start{j}'+\proc{j} + \idle{s(j)}{s(k)}$ &
\small$\start{k}'\geq \start{j}'+\proc{j} +\transit{} + \idle{s(j)}{s(k)}$\\
& \small $\Rightarrow$&
\small$\compl{k}'\geq \compl{j}'+\proc{k} + \idle{s(j)}{s(k)}$ &
\small$\compl{k}'\geq \compl{j}'+\proc{k} + \transit{}+\idle{s(j)}{s(k)}$\\
\midrule
\multirow{1}{*}{compl.\ times} &
\small\eqref{eq:bidir-time-shift}&
\small$\compl{k}''\geq \compl{j}''+\proc{k} + \idle{c(j)}{c(k)}$ &
\small$\compl{k}''\geq \compl{j}''+\proc{k} + \transit{}+\idle{c(j)}{c(k)}$\\
& \small $\Rightarrow$&
\small$\start{k}''\geq \start{j}''+\proc{j}+\idle{c(j)}{c(k)}$ &
\small$\start{k}''\geq \start{j}''+\proc{j}+\transit{}+\idle{c(j)}{c(k)}$\\
\bottomrule
\end{tabulary}
\end{table}
To analyze the set of jobs released within each interval we partition them as follows. A job~$j$ released at~$R_x$ is called \emph{small} if~$\proc{j}\leq\frac{\ensuremath{\varepsilon}^2}{4}|I_x|$ and \emph{large} otherwise. With this, we partition for each direction~$\ensuremath{\mathrm{d}}\in\{\ensuremath{\mathrm{r}}, \ensuremath{\mathrm{l}}\}$ the jobs~$\dirbset{J}{\ensuremath{\mathrm{d}}}_{x}:=\{j\in \dirbset{J}{\ensuremath{\mathrm{d}}} \mid \rel{j}=R_x\}$ released at~$R_x$ into the subsets~$\dirbset{S}{\ensuremath{\mathrm{d}}}_x=\{j\in \dirbset{J}{\ensuremath{\mathrm{d}}}_{x} \mid j \text{ is small}\}$ and~$\dirbset{L}{\ensuremath{\mathrm{d}}}_x=\{j \in \dirbset{J}{\ensuremath{\mathrm{d}}}_{x} \mid j \text{ is large}\}$. We will see that the arrangement of jobs of each~$\dirbset{S}{\ensuremath{\mathrm{d}}}_x$ does not influence the remaining jobs too much, so that we can assume a fixed order for each of these sets. To do so, we say that a subset~$\set{J'}\subseteq\ensuremath{\set{J}}$ of jobs is scheduled in \emph{SPT order} (shortest processing time first) if~$\start{j_1}\leq\start{j_2}$ for any pair of jobs~$j_1, j_2\in \set{J'}$ with~$\proc{j_1}<\proc{j_2}$. Furthermore, we denote the sum of processing times of~$\set{J'}$ as~$p(\set{J'})$ and the union of small jobs released up to some point~$R_{x}$ with direction~$\ensuremath{\mathrm{d}}\in\{\ensuremath{\mathrm{r}}, \ensuremath{\mathrm{l}}\}$ by~$\dirbset{S}{\ensuremath{\mathrm{d}}}_{\leq x} = \bigcup_{x'\leq x}\dirbset{S}{\ensuremath{\mathrm{d}}}_{x'}$.
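To illustrate the partition into small and large jobs (a sketch under our own naming conventions, with jobs given as triples of release exponent, processing time, and direction):

```python
def partition_jobs(jobs, eps):
    """Split jobs (x, p, d) with release date R_x = (1+eps)^x into
    small and large sets, keyed by direction and release interval."""
    small, large = {}, {}
    for (x, p, d) in jobs:
        interval_len = eps * (1 + eps) ** x          # |I_x|
        threshold = eps**2 / 4 * interval_len        # small iff p <= threshold
        bucket = small if p <= threshold else large
        bucket.setdefault((d, x), []).append(p)
    for key in small:                                # SPT order within each S^d_x
        small[key].sort()
    return small, large
```

For instance, with $\varepsilon=\tfrac12$ a job released at $R_0=1$ is small if and only if its processing time is at most $\varepsilon^3/4 = 1/32$.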
\begin{restatable}{lemma}{lemPtasSPTForSmallJobs}
\label{lem:ptas-SPT_for_small_jobs}
With~$\ensuremath{\mathcal{O}}\xspace(1+\ensuremath{\varepsilon})$-loss we can restrict to schedules such that for each~$x\geq 0$ and each~$\ensuremath{\mathrm{d}}\in\{\ensuremath{\mathrm{r}}, \ensuremath{\mathrm{l}}\}$:
\begin{compactenum}
\item\label{ptas-it:small-without-rel} the processing of no small job contains a release date,
\item\label{ptas-it:small-spt} jobs contained in~$\dirbset{S}{\ensuremath{\mathrm{d}}}_{\leq x}$ are scheduled in SPT order within~$I_x$, and
\item\label{ptas-it:small-bounded}~$\proc{}(\dirbset{S}{\ensuremath{\mathrm{d}}}_x) \leq |I_x|$.
\end{compactenum} \end{restatable} \begin{proof}
To prove claim~\ref{ptas-it:small-without-rel} we consider some schedule and apply a time-stretch via start times. Observe that this produces no further crossing of a small job's processing over a release date. If the release date~$R_{s(j)+1}$ was contained in the processing interval of a small job~$j$ starting in~$I_{s(j)}$, the processing now completes before the next release date, since Lemma~\ref{lem:ptas-time-stretch} yields~$R_{s(j)+2}-\start{j}' \geq R_{s(j)+1}-\start{j} + \ensuremath{\varepsilon}|I_{s(j)}|$, and the increase of~$\ensuremath{\varepsilon}|I_{s(j)}|$ exceeds the processing time of a small job.
For a proof of claim~\ref{ptas-it:small-spt} consider a schedule~$S$ where no processing of a small job contains a release date and apply one time-stretch via start times. This increases the objective value by at most a~$1+\ensuremath{\varepsilon}$ factor. Denote the resulting schedule as~$S'$. To achieve the demanded properties, apply the following procedure for each direction~$\ensuremath{\mathrm{d}}\in\{\ensuremath{\mathrm{r}}, \ensuremath{\mathrm{l}}\}$.
First, remove all small jobs from schedule~$S'$. Now consider each interval~$I_{x}, x=0, 1, \dots$. Denote by~$A_x$ the set of removed jobs from~$I_x$. If jobs have been removed in~$I_x$, there are idle intervals where jobs in direction~$\ensuremath{\mathrm{d}}$ can be scheduled. Denote the subset of~$\dirbset{S}{\ensuremath{\mathrm{d}}}_{\leq x}$ already scheduled in earlier intervals by~$B_{<x}$ and order the subset~$C_{x}:=\dirbset{S}{\ensuremath{\mathrm{d}}}_{\leq x}\setminus B_{<x}$ of unscheduled jobs in SPT order. For~$t\in I_x$, define by~$p_t(A_x):=p(\{j\in A_x \mid \start{j}'\leq t\})$ the amount of processing time of removed jobs started before time~$t$ in~$S'$. Now let~$C_{x}(t)$ be the smallest SPT-subset of~$C_x$ such that~$p(C_{x}(t))\geq p_t(A_x)$ or~$C_{x}(t)=C_x$. Iterate from the earliest created maximal empty interval to the latest and fill each interval~$[t_1,t_2]$ in SPT order such that the jobs of~$C_{x}(t_2)$ start before~$t_2$. Note that~$p(C_{x}(t_2))\leq p_{t_2}(A_x)+\frac{\ensuremath{\varepsilon}^2}{4}|I_x|$ since we consider only small jobs. To maintain feasibility we increase the start times of the subsequent jobs from~$J\setminus C_x$, if necessary. (This may decrease the size of the following empty intervals, which is no problem.) Nevertheless, no job from~$J\setminus C_x$ has its start time increased by more than~$\frac{\ensuremath{\varepsilon}^2}{4}|I_x|$. Hence, their completion times are increased by less than a factor of~$1+\ensuremath{\varepsilon}$, and the jobs starting after~$R_{x+1}$ are not affected. Note that no processing of the assigned small jobs~$B_x:=C_{x}(R_{x+1})$ contains~$R_{x+1}$.
Since within each interval we assigned jobs in SPT order, at each point in time the number of already started small jobs has not decreased. Therefore, the total completion time of the small jobs overall has not increased.
To prove claim~\ref{ptas-it:small-bounded} consider for each~$x = 0, 1, \dots$ the largest SPT-subset~$\set{J}'_x$ of~$\dirbset{S}{\ensuremath{\mathrm{d}}}_x$, such that~$p(\set{J}'_x)\leq|I_x|$. By assumptions~\ref{ptas-it:small-spt} and~\ref{ptas-it:small-without-rel} we can be sure that all jobs of~$\dirbset{S}{\ensuremath{\mathrm{d}}}_x\setminus\set{J}'_x$ are not scheduled within~$I_x$ and thus, we can move their release dates to~$R_{x+1}$. \end{proof}
Once we have a fixed order in which to schedule small jobs with the same release date, we are able to glue them into job packs of a certain minimum size. For this purpose we apply a further time-stretch to join the processing of jobs assigned to the same pack. For each interval~$I_y$ and each direction, this increases the amount of processing originating from each earlier interval~$I_x$ by at most the size of one job that is small at time~$R_x$. The following lemma yields that the extra space created in one interval by one time-stretch is sufficient to cover this amount for all earlier intervals.
\begin{restatable}{lemma}{lemAPtasIntervalVolume}\label{lem:A-ptas-interval_volume}
We have $\sum_{x<y}\ensuremath{\varepsilon}^2 |I_x| \leq \ensuremath{\varepsilon}|I_y|$. \end{restatable} \begin{proof}
To prove the claim we again use that~$\sum_{k=0}^n z^k = \frac{1-z^{n+1}}{1-z}$:
\begin{equation*}
\ensuremath{\varepsilon}^3\sum_{x<y}(1+\ensuremath{\varepsilon})^x = \ensuremath{\varepsilon}^3\frac{1-(1+\ensuremath{\varepsilon})^y}{1-(1+\ensuremath{\varepsilon})}
= \ensuremath{\varepsilon}^2 ((1+\ensuremath{\varepsilon})^y-1) \leq \ensuremath{\varepsilon}|I_y|.
\end{equation*} \end{proof}
\begin{restatable}{lemma}{lemPtasSmallPackages}
\label{lem:ptas-small-packages}
With~$\ensuremath{\mathcal{O}}\xspace(1+\ensuremath{\varepsilon})$-loss we can restrict to schedules such that for each~$x\geq 0$ and each~$\ensuremath{\mathrm{d}}\in\{\ensuremath{\mathrm{r}}, \ensuremath{\mathrm{l}}\}$ the jobs of~$\dirbset{S}{\ensuremath{\mathrm{d}}}_x$ in SPT order are joined into unsplittable job packs, each of size at most~$\frac{\ensuremath{\varepsilon}^2}{4}|I_x|$ and at least~$\frac{\ensuremath{\varepsilon}^2}{8}|I_x|$.
\end{restatable} \begin{proof}
Consider a schedule satisfying at~$\ensuremath{\mathcal{O}}\xspace(1+\ensuremath{\varepsilon})$-loss the properties of Lemma~\ref{lem:ptas-SPT_for_small_jobs} and apply one time-stretch via start times. We now apply the following procedure for each direction~$\ensuremath{\mathrm{d}}\in\{\ensuremath{\mathrm{r}}, \ensuremath{\mathrm{l}}\}$ and each~$x=0, 1, \dots$. Recall that the jobs of~$\dirbset{S}{\ensuremath{\mathrm{d}}}_x$ are scheduled in SPT order. Let~$\dirbset{T}{\ensuremath{\mathrm{d}}}_x = \{j\in \dirbset{S}{\ensuremath{\mathrm{d}}}_x \mid \proc{j}< \frac{\ensuremath{\varepsilon}^2}{8}|I_x|\}$ be the subset of jobs that are too small. Remove the jobs of~$\dirbset{T}{\ensuremath{\mathrm{d}}}_x$ from the current schedule and join them successively in SPT order into minimal job packs such that the processing times within each job pack sum up to at least~$\frac{\ensuremath{\varepsilon}^2}{8}|I_x|$. (The processing time of the last pack is artificially increased if necessary.) We now reassign complete job packs to the empty intervals similarly to the procedure in the proof of Lemma~\ref{lem:ptas-SPT_for_small_jobs}. Hence, no start time of a job in~$\dirbset{T}{\ensuremath{\mathrm{d}}}_x$ has been increased, and no job in~$J\setminus \dirbset{T}{\ensuremath{\mathrm{d}}}_x$ has its start time increased by more than~$\frac{\ensuremath{\varepsilon}^2}{4}|I_x|$.
In total, the start time of no job starting in interval~$I_{y+1}$ has been increased by more than~$2\cdot\sum_{x<y} \frac{\ensuremath{\varepsilon}^2}{4}|I_x| + 2\cdot\frac{\ensuremath{\varepsilon}^2}{4}|I_y|\leq \ensuremath{\varepsilon}|I_y|$ due to Lemma~\ref{lem:A-ptas-interval_volume}. By Lemma~\ref{lem:ptas-time-stretch} (or Table~\ref{tab:bidir-time-shift}) we can conclude that no job has been delayed to a later interval by the rearrangement. Note that properties~\ref{ptas-it:small-without-rel} and~\ref{ptas-it:small-bounded} of Lemma~\ref{lem:ptas-SPT_for_small_jobs} still hold whereas property~\ref{ptas-it:small-spt} (SPT order) remains true only within each~$\dirbset{S}{\ensuremath{\mathrm{d}}}_x$. \end{proof}
Therefore, we can consider each job pack simply as one small job. Nevertheless, the original jobs must be used for the evaluation of the completion times.
Besides the scheduling restrictions for small jobs we can also bound the number of large jobs released at the beginning of each interval.
\begin{restatable}{lemma}{lemPtasBoundedReleasePerInterval} \label{lem:ptas-bounded_release_per_interval}
With~$\ensuremath{\mathcal{O}}\xspace(1+\ensuremath{\varepsilon})$-loss we can assume for each~$x\geq 0$ and each~$\ensuremath{\mathrm{d}}\in\{\ensuremath{\mathrm{r}}, \ensuremath{\mathrm{l}}\}$ that: \begin{compactenum}
\item the number of possible processing times in~$\dirbset{L}{\ensuremath{\mathrm{d}}}_x$ is bounded by~$5\log_{(1+\ensuremath{\varepsilon})}\frac{1}{\ensuremath{\varepsilon}}$, and
\item the number of jobs per processing time in~$\dirbset{L}{\ensuremath{\mathrm{d}}}_x$ is bounded by~$\frac{4}{\ensuremath{\varepsilon}^2}$. \end{compactenum} \end{restatable} \begin{proof}
Consider some scheduling instance, some~$\ensuremath{\mathrm{d}}\in\{\ensuremath{\mathrm{r}}, \ensuremath{\mathrm{l}}\}$ and some~$x\geq 0$. The processing times of the jobs in~$\dirbset{L}{\ensuremath{\mathrm{d}}}_x$ are, by definition, at least~$\frac{\ensuremath{\varepsilon}^3}{4}(1+\ensuremath{\varepsilon})^x$. On the other hand, by Lemma~\ref{lem:ptas-geometric_rounding}, the processing times are at most~$\frac{1}{\ensuremath{\varepsilon}}(1+\ensuremath{\varepsilon})^x$. Let~$x_j$ be such that~$\proc{j}=(1+\ensuremath{\varepsilon})^{x_j}$. We get
\[
\begin{array}{rcl}
\frac{\ensuremath{\varepsilon}^3}{4} \leq & \frac{(1+\ensuremath{\varepsilon})^{x_j}}{(1+\ensuremath{\varepsilon})^{x}} & \leq \frac{1}{\ensuremath{\varepsilon}}\\
\Longrightarrow\quad \log_{(1+\ensuremath{\varepsilon})}\frac{\ensuremath{\varepsilon}^3}{4} \leq & x_j-x & \leq \log_{(1+\ensuremath{\varepsilon})}\frac{1}{\ensuremath{\varepsilon}}\\
\end{array}
\]
The difference of these bounds is~$4\log_{(1+\ensuremath{\varepsilon})}\frac{1}{\ensuremath{\varepsilon}} + \log_{(1+\ensuremath{\varepsilon})}4$ which gives a constant number of possible integer values for~$x_j$ and, hence, a constant number of possible processing times for each job in~$\dirbset{L}{\ensuremath{\mathrm{d}}}_x$. Finally, since each large job in~$I_x$ has a processing time of at least~$\frac{\ensuremath{\varepsilon}^2}{4} |I_x|$, we can schedule at most~$4/\ensuremath{\varepsilon}^2$ jobs per direction within~$I_x$, and the remaining jobs need to start after~$R_{x+1}$. \end{proof}
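The count of admissible processing-time exponents can be double-checked numerically (our own sketch; `processing_time_exponent_range` is a hypothetical helper name, not from the paper):

```python
import math

def processing_time_exponent_range(eps):
    """Number of integers x_j - x with
    log_{1+eps}(eps^3/4) <= x_j - x <= log_{1+eps}(1/eps)."""
    lo = math.log(eps**3 / 4, 1 + eps)
    hi = math.log(1 / eps, 1 + eps)
    return math.floor(hi) - math.ceil(lo) + 1

# verify the claimed bound of 5 * log_{1+eps}(1/eps) for small eps
for eps in [0.1, 0.05, 0.01]:
    assert processing_time_exponent_range(eps) <= 5 * math.log(1 / eps, 1 + eps)
```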
\begin{restatable}{lemma}{lemPTASSafetyNet} \label{lem:ptas-safety_net}
With~$\ensuremath{\mathcal{O}}\xspace(1+\ensuremath{\varepsilon})$-loss we can assume that each job is finished within a constant number of intervals after its release. \end{restatable} \begin{proof}
Consider the set of jobs~$\ensuremath{\set{J}}_x$ released at time~$R_x$. By Lemma~\ref{lem:ptas-geometric_rounding} the running time of each such job is at most~$R_x/\ensuremath{\varepsilon}$.
Therefore, applying Lemmas~\ref{lem:ptas-SPT_for_small_jobs} and~\ref{lem:ptas-bounded_release_per_interval} we can bound the time needed to first schedule all jobs of one direction and afterward all jobs of the other direction:
\begin{align*}
\sum_{\ensuremath{\mathrm{d}}\in\{\ensuremath{\mathrm{r}}, \ensuremath{\mathrm{l}}\}}\left[ p(\dirbset{S}{\ensuremath{\mathrm{d}}}_x) + p(\dirbset{L}{\ensuremath{\mathrm{d}}}_x) +
\transit{1}\right]
& \leq 2\bigg[\ensuremath{\varepsilon}(1+\ensuremath{\varepsilon})^x \\
& \phantom{=} \quad + \frac{4}{\ensuremath{\varepsilon}^2}\cdot\frac{1}{\ensuremath{\varepsilon}}(1+\ensuremath{\varepsilon})^x
\cdot5\log_{(1+\ensuremath{\varepsilon})}\frac{1}{\ensuremath{\varepsilon} }\bigg] \\
& = \ensuremath{\varepsilon}^2(1+\ensuremath{\varepsilon})^x\cdot 2\left[\frac{1}{\ensuremath{\varepsilon}} +
\frac{20}{\ensuremath{\varepsilon}^5}\log_{(1+\ensuremath{\varepsilon})}\frac{1}{\ensuremath{\varepsilon}}
\right]\\
& \leq \ensuremath{\varepsilon}^2(1+\ensuremath{\varepsilon})^x(1+\ensuremath{\varepsilon})^{\ensuremath{\sigma}'-1} = \ensuremath{\varepsilon}|I_{x+\ensuremath{\sigma}'-1}|,
\end{align*}
where~$\ensuremath{\sigma}'$ is the smallest possible integer such that~$2\left[\frac{1}{\ensuremath{\varepsilon}} + \frac{20}{\ensuremath{\varepsilon}^5}\log_{(1+\ensuremath{\varepsilon})}\frac{1}{\ensuremath{\varepsilon}}\right] \leq (1+\ensuremath{\varepsilon})^{\ensuremath{\sigma}'-1}$. Note that~$\ensuremath{\sigma}'$ is constant.
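For the purely illustrative value~$\ensuremath{\varepsilon}=\frac{1}{2}$ (not fixed by the analysis) we get~$2\left[\frac{1}{\ensuremath{\varepsilon}} + \frac{20}{\ensuremath{\varepsilon}^5}\log_{(1+\ensuremath{\varepsilon})}\frac{1}{\ensuremath{\varepsilon}}\right] = 2\left[2+640\log_{1.5}2\right]\approx 2192.2$; since~$1.5^{18}\approx 1478 < 2192.2 \leq 1.5^{19}\approx 2217$, this yields~$\ensuremath{\sigma}'=20$.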
Applying one time-stretch on the start times creates idle time for each interval~$I_x$ somewhere after~$\ensuremath{\sigma}'$ intervals that is sufficient to host all unfinished jobs of~$J_x$, cf.~Lemma~\ref{lem:ptas-time-stretch} and Table~\ref{tab:bidir-time-shift}.
If no job was running at time~$R_{x+\ensuremath{\sigma}'}$ before the time-stretch, the created idle time is now part of interval~$I_{x+\ensuremath{\sigma}'}$.
Otherwise let~$j$ be the latest of these jobs with start time~$\start{j}\in I_{s(j)}$ and completion time~$\compl{j}\in I_{c(j)}$ before the time-stretch. Note that~$s(j)\leq x+\ensuremath{\sigma}'-1$, which implies~$c(j)\leq x + \ensuremath{\sigma}' + \ensuremath{\sigma} -1$ by Lemma~\ref{lem:ptas-bounded_crossing}. By Lemma~\ref{lem:ptas-time-stretch} we can be sure that after the time-stretch there is idle time of~$\sum_{k=s(j)}^{c(j)-1}\ensuremath{\varepsilon}|I_{k}|$ both before the start of the next job after~$j$ and before the end of interval~$I_{c(j)+1}$.
By the definition of~$\ensuremath{\sigma}'$, this time is sufficient to first schedule all jobs of~$J_x$ in the heading of~$j$ and then all remaining ones.
This way, all jobs of~$J_x$ are scheduled before the end of interval~$I_{x+\ensuremath{\sigma}'+\ensuremath{\sigma}}$.
Note that this argument assumes that there are no compatibilities. An analogous argument, concerning only the processing times, works if all opposed jobs are compatible. \end{proof}
We can now limit the interface of our dynamic program by showing Lemma~\ref{lem:constant_interface} of Section~\ref{sec:PTAS}.
\lemConstantInterface*
\begin{proof}
By Lemma~\ref{lem:ptas-small-packages} we may assume that small jobs in~$\dirbset{S}{\ensuremath{\mathrm{d}}}_{x}$ have processing time at least~$\ensuremath{\varepsilon}^2|I_x|/8$. By Lemma~\ref{lem:ptas-SPT_for_small_jobs}, the total processing time of these jobs is at most~$|I_x|$, and hence the number of jobs in~$\dirbset{S}{\ensuremath{\mathrm{d}}}_{x}$ is bounded by a constant. The same is true for large jobs, by Lemma~\ref{lem:ptas-bounded_release_per_interval}. Finally, together with Lemma~\ref{lem:ptas-safety_net}, this implies that the number of jobs running during each interval is bounded by a constant.
For the second property, we apply one time-stretch on the completion times. Consider now the latest job~$j$ of each direction that starts within block~$B_{t}$ and is completed in interval~$I_{c(j)}$ of the following block. By Lemma~\ref{lem:ptas-time-stretch} (and Table~\ref{tab:bidir-time-shift}) we know that there is idle time of at least~$\ensuremath{\varepsilon}|I_{c(j)-2}|$ before the start of job~$j$ (or before the start of the earliest job aligned with~$j$ with completion time in~$I_{c(j)}$ and start time in~$B_{t}$). Hence, we can decrease the start time of these jobs such that the values~$\compl{j}$ and~$\start{j}+\proc{j}$ fall onto the next~$\frac{1}{\ensuremath{\varepsilon}^2}$ fraction of~$I_{c(j)}$, i.e., by an amount of at most~$\ensuremath{\varepsilon}^2|I_{c(j)}| \leq \ensuremath{\varepsilon}|I_{c(j)-2}|$. Hence, the first job starting in~$B_{t+1}$ (of each direction in case of compatibilities) can be scheduled at a~$\frac{1}{\ensuremath{\varepsilon}^2}$ fraction of~$I_{c(j)}$ without any further loss. Thus, we only need to consider~$\frac{\ensuremath{\sigma}}{\ensuremath{\varepsilon}^2}$ possible frontier values per direction, or a total of~$\left(\frac{\ensuremath{\sigma}}{\ensuremath{\varepsilon}^2}\right)^2$ possible frontiers.
\end{proof}
\ifthenelse{\boolean{ptas-more}}{ \subsection{Multiple Segments}
We now consider the bidirectional scheduling problem with a constant number~$m$ of segments and give detailed proofs for the required extensions to the single segment case where the argumentation is more complex. We need to generalize or reformulate all of the above lemmas. We denote by~$\dirbset{S}{\ensuremath{\mathrm{d}}}_{x,s,t}$ the set of all small jobs with respective source segment~$s$ and target segment~$t$ and direction~$\ensuremath{\mathrm{d}}$ that are released at time~$R_x$ and we denote the corresponding large jobs by~$\dirbset{L}{\ensuremath{\mathrm{d}}}_{x,s,t}$.
The statement of Lemma~\ref{lem:ptas-geometric_rounding} must be extended as in Lemma~\ref{lem:ptas-geometric_rounding-multiple}. Nevertheless, the proof works almost identically.
\begin{lemma} \label{lem:ptas-geometric_rounding-multiple}
With~$\ensuremath{\mathcal{O}}\xspace(1+\ensuremath{\varepsilon})$-loss we can assume that for each~$j\in\ensuremath{\set{J}}$ and each~$i\in \{s_j, \dots, t_j\}$:
\begin{compactitem}
\item $\start{ij} \geq \ensuremath{\varepsilon}(\proc{j}+\transit{i})$ defining a segment-wise release date~$\rel{ij}\geq r_j$,
\item $\rel{ij}\geq 1$, and
\item $\proc{j}, \rel{ij}\in\{(1+\ensuremath{\varepsilon})^x\mid x\in\ensuremath{\mathbb{N}}\}\cup\{0\}$.
\end{compactitem} We furthermore assume for each job that~$\rel{j}=\rel{s_jj}$ and that its segment-wise release dates do not decrease in heading of~$j$. \end{lemma} \begin{proof}
Multiplying all segment-wise start times of a schedule by~$(1+\ensuremath{\varepsilon})$ increases the distance between any two distinct start times by a factor of~$(1+\ensuremath{\varepsilon})$. Therefore, we obtain a feasible schedule even after rounding up all nonzero processing times
to the next power of~$(1+\ensuremath{\varepsilon})$. This increases the total completion time by less than a factor of~$(1+\ensuremath{\varepsilon})$.
Now, we define for each job~$j$ and each segment~$i$ new completion times~$\compl{ij}'=(1+\ensuremath{\varepsilon})\compl{ij}$ and obtain increased start times~$\start{ij}' = (1+\ensuremath{\varepsilon})\compl{ij} - (\proc{j}+\transit{i}) \geq (1+\ensuremath{\varepsilon})\start{ij} + \ensuremath{\varepsilon}(\proc{j} + \transit{i})$.
Since the space between any two distinct completion times is increased by a factor of~$1+\ensuremath{\varepsilon}$, the schedule remains feasible. In particular, it holds that~$\start{ij}'\geq \compl{hj}'$ if~$j$ has to pass segment~$h$ before segment~$i$.
Hence, we define a release date~$\rel{ij}:=\max\{\rel{j},\rel{hj},\ensuremath{\varepsilon}(\proc{j}+\transit{i})\}$ for segment~$i$.
Without any loss, we can scale all times by some power of~$(1+\ensuremath{\varepsilon})$ if necessary such that the earliest release date is at least one (since job parts with $\rel{j}=\proc{j}=\transit{s_j}=0$ can be omitted). At a further loss of~$(1+\ensuremath{\varepsilon})$ we finally can round the release dates again to the next power of~$(1+\ensuremath{\varepsilon})$. \end{proof}
With this, a proof identical to that of Lemma~\ref{lem:ptas-bounded_crossing} yields the following.
\begin{lemma}
\label{lem:ptas-bounded_crossing-multiple}
Each part runs for at most~$\ensuremath{\sigma}:=\lceil\log_{1+\ensuremath{\varepsilon}}\frac{1+\ensuremath{\varepsilon}}{\ensuremath{\varepsilon}} \rceil$ intervals, i.e.\ a part starting in interval~$I_x$ is completed before the end of~$I_{x+\ensuremath{\sigma}}$. \end{lemma}
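For the purely illustrative value~$\ensuremath{\varepsilon}=\frac{1}{2}$ (not fixed by the analysis) this constant evaluates to~$\ensuremath{\sigma}=\left\lceil\log_{1.5}3\right\rceil=\lceil 2.71\rceil = 3$, i.e., a part starting in~$I_x$ is completed before the end of~$I_{x+3}$.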
The proof of Lemma~\ref{lem:ptas-SPT_for_small_jobs} becomes significantly more involved.
\begin{lemma}\label{lem:ptas-SPT_for_small_jobs_more_segments}
With~$\ensuremath{\mathcal{O}}\xspace(1+\ensuremath{\varepsilon})$-loss we can restrict to schedules such that for each direction~$\ensuremath{\mathrm{d}}\in\{\ensuremath{\mathrm{r}},\ensuremath{\mathrm{l}}\}$, each source and target pair~$s,t\in\{1, \dots, m\}$, and each~$x\geq 0$:
\begin{compactitem}
\item the jobs contained in~$\dirbset{S}{\ensuremath{\mathrm{d}}}_{x,s,t}$ are scheduled on each segment~$i\in\{s, \dots, t\}$ in SPT order, i.e.,~$\start{i,j_1}\leq\start{i,j_2}$ for any pair of jobs~$j_1, j_2\in\dirbset{S}{\ensuremath{\mathrm{d}}}_{x,s,t}$ with~$\proc{j_1}<\proc{j_2}$, and
\item the jobs of~$\dirbset{S}{\ensuremath{\mathrm{d}}}_{x,s,t}$ in SPT order are joined in the complete schedule into unsplittable packages of size at most~$\ensuremath{\varepsilon}^2|I_x|$ each, and of size at least~$\ensuremath{\varepsilon}^2|I_x|/2$ for all but the last package.
\end{compactitem} \end{lemma} \begin{proof}
The proof works in principle as the proof of Lemma~\ref{lem:ptas-SPT_for_small_jobs}. During the rearrangement procedure we have to ensure that each interval on the target segment is filled with at least the same volume of small jobs as before, as long as jobs are unscheduled. For this, the jobs in the demanded order must have arrived at the respective segment in time. To ensure this property we have to deal with the following two difficulties. A convoy of very small jobs can be replaced by one (larger) small job. This job can only continue its processing on the next segment after its completion on the previous segment, while the first very small job could already start on the next segment before the last very small job is completed on the previous segment, cf.\ Figure~\ref{fig:rearrangement}. The other difficulty arises from the fact that some jobs scheduled within interval~$I_x$ arrive during interval~$I_{x'}$ at the next segment, while the remaining jobs arrive one interval later. Since all but the last gaps within interval~$I_x$ have been decreased a bit during the rearrangement and only the last gap covers the lost volume, there might not be enough volume available for the next segment in~$I_{x'}$.
\begin{figure}
\caption{Illustration of the start times postponement when rearranging small jobs on multiple segments.}
\label{fig:rearrangement}
\end{figure}
To deal with the second difficulty, we allow for the moment a small job to start~$\proc{j}$ time units earlier than its completion on the previous segment.
We employ the following procedure for each direction~$\ensuremath{\mathrm{d}}\in\{\ensuremath{\mathrm{r}}, \ensuremath{\mathrm{l}}\}$. First, we apply~$m^2$ time-stretches. Now, consider~$\dirbset{S}{\ensuremath{\mathrm{d}}}_{x,s,t}$ for each source-target pair~$s,t=1, \dots, m$ compatible with~$\ensuremath{\mathrm{d}}$ and each~$x\geq 0$. If~$s=t$, we can simply apply the same procedure as for Lemma~\ref{lem:ptas-SPT_for_small_jobs}. Otherwise, proceed as follows (cf.\ Figure~\ref{fig:early-enough}). Define~$p(i,\tilde x)$ to be the amount of processing time from~$\dirbset{S}{\ensuremath{\mathrm{d}}}_{x,s,t}$ scheduled within~$I_{\tilde x}$ on segment~$i$. For each reasonable combination of~$i_1$, a succeeding~$i_2$, and~$x_1\leq x_2$, define~$p(i_1, x_1, i_2, x_2)$ to be the amount of processing time of jobs in~$\dirbset{S}{\ensuremath{\mathrm{d}}}_{x,s,t}$ scheduled on segment~$i_1$ in interval~$I_{x_1}$ and on segment~$i_2$ in interval~$I_{x_2}$. On the other hand, let~$u(i_1, x_1, i_2, x_2)$ be the latest point in time for the end of processing of a small job within interval~$I_{x_1}$ on segment~$i_1$ such that, if possible, the job can still be completely processed within interval~$I_{x_2}$ on segment~$i_2$. Since the interval sizes are increasing with time, there is at most one interval~$I_{x_2}$ on segment~$i_2$ that yields an upper bound below~$R_{x_1+1}$.
\begin{figure}
\caption{Example of three adjacent segments (drawn contiguously). The dotted line illustrates the construction of~$u(i_1,x_1,i_2,x_2)$. The processing times of all sketched small jobs on segment~$i_2$ sum up to~$p(i_2,x_2)$ whereas only the processing times of the red jobs contribute to~$p(i_1,x_1,i_2,x_2)$.}
\label{fig:early-enough}
\end{figure}
Remove all jobs of~$\dirbset{S}{\ensuremath{\mathrm{d}}}_{x,s,t}$ from the schedule. To refill the gaps, consider each segment~$i_1 = s, \dots, t-1$ and, on segment~$i_1$, each interval~$I_{x_1}$ that contains gaps, in order of increasing~$x_1$. For each succeeding segment~$i_2$ consider an interval~$I_{x_2}$.
We now apply the gap-filling procedure from the proof of Lemma~\ref{lem:ptas-SPT_for_small_jobs}, but shrink or extend the packages by at most one small job where appropriate, such that before each~$u(i_1, x_1, i_2, x_2)$ a volume of at least~$p(i_1, x_1, i_2, x_2)$, including the overage of the former intervals, is ensured. More formally, for~$x_1>x$ let the \emph{overage}~$o(i_1, x_1, i_2, x_2)$ be defined as the difference between the volume scheduled before~$u(i_1, x_1, i_2, x_2)$ plus~$o(i_1, x_1-1, i_2, x_2)$ and the demanded~$p(i_1, x_1, i_2, x_2)$; for~$x_1=x$ it is defined as zero. The refill now ensures that each~$o(i_1, x_1, i_2, x_2) \geq 0$. For~$i= s, \dots, t$ we furthermore ensure the analogous property for~$p(i,\tilde x)$, $o(i, \tilde x)$, and the end of~$I_{\tilde x}$.
We now prove by induction over~$i=s, \dots, t$ that the required processing volume within each interval~$I_{\tilde x}, \tilde x\geq x$ on segment~$i$ is available. To be more precise we claim: for each segment~$i=s, \dots, t$ and each~$\tilde x\geq x$, enough jobs of~$\dirbset{S}{\ensuremath{\mathrm{d}}}_{x,s,t}$ are available to ensure a volume of at least~$p(i, \tilde x, i_2, x_2)-o(i, \tilde x-1, i_2, x_2)$ until~$u(i, \tilde x, i_2, x_2)$ for each succeeding segment~$i_2$ until~$t$ and~$x_2\geq \tilde x$, and a volume of at least~$p(i,\tilde x)-o(i, \tilde x -1)$ within the complete interval, for as long as jobs are unscheduled. The claim is true for~$i=s$ since all jobs of~$\dirbset{S}{\ensuremath{\mathrm{d}}}_{x,s,t}$ are available by~$R_x$. For some~$i\in\{s, \dots, t\}$ let~$i_1$ be the corresponding preceding segment. Consider a~$\tilde x\geq x$ with~$p(i, \tilde x)>0$. All the demanded volume must have been scheduled earlier on~$i_1$. Hence,~$p(i, \tilde x) = \sum_{x_1\leq \tilde x} p(i_1, x_1, i, \tilde x)$. By the induction hypothesis, this amount on the previous segment is scheduled within each interval in time (in particular within the former gaps). If~$\tilde x$ is the first one considered on~$i$, enough jobs with the required processing volume are available. Otherwise, there might be one small job missing that is scheduled in an earlier interval. This is covered by the overage. Consider now a succeeding~$i_2$ and an~$x_2$ with~$p(i, \tilde x, i_2, x_2)>0$.
If~$u(i, \tilde x, i_2, x_2)=R_{\tilde x+1}$, the claim ensues from~$p(i, \tilde x, i_2, x_2) \leq \sum_{x_1\leq \tilde x} p(i_1, x_1, i, \tilde x)$. Otherwise, note that~$u(i, \tilde x, i_2, x_2)-\transit{i_1}\in I_{x_1}$ is equal to~$u(i_1, x_1, i_2, x_2)$. Then, we additionally have to use that~$p(i, \tilde x, i_2, x_2) \leq \sum_{x_1'\leq x_1} p(i_1, x_1', i_2, x_2)$ is scheduled to be processed before~$u(i_1, x_1, i_2, x_2)$. If one small missing job is scheduled again too early, we use the overage. To conclude, all jobs arrive in time at their target segment and the completion time is increased by a factor of at most~$\ensuremath{\mathcal{O}}\xspace(1+\ensuremath{\varepsilon})$.
The described rearrangement procedure for the jobs of~$\dirbset{S}{\ensuremath{\mathrm{d}}}_{x,s,t}$ still only needs an extra time of~$\ensuremath{\varepsilon}^2|I_x|$ on each segment for~$x\geq 0$ and~$s, t = 1, \dots, m$ compatible with~$\ensuremath{\mathrm{d}}$. By Lemma~\ref{lem:A-ptas-interval_volume} we again get that~$m^2$ time-stretches are sufficient to cover this amount. To see this, consider some segment~$i$ and some interval~$I_y$ and bound the needed amount as follows:
\begin{align*}
\sum_{i_1\dirb{\prec}{\ensuremath{\mathrm{d}}}i}\sum_{i\dirb{\preceq}{\ensuremath{\mathrm{d}}}i_2}\sum_{x\leq y}\ensuremath{\varepsilon}^2|I_x| & \leq \sum_{i_1\dirb{\prec}{\ensuremath{\mathrm{d}}}i}\sum_{i\dirb{\preceq}{\ensuremath{\mathrm{d}}}i_2}(\ensuremath{\varepsilon} + \ensuremath{\varepsilon}^2)|I_y|\\
& \leq \sum_{i'=1}^m i'(\ensuremath{\varepsilon} + \ensuremath{\varepsilon}^2)|I_y|\leq m^2 \ensuremath{\varepsilon}|I_y|\text{.}
\end{align*}
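The first inequality follows from the geometric growth of the intervals; assuming~$|I_x| = \ensuremath{\varepsilon}(1+\ensuremath{\varepsilon})^x$ (as used in the bounds above) and summing from~$x=0$, it can be verified directly:
\[
\sum_{x\leq y}\ensuremath{\varepsilon}^2|I_x| = \ensuremath{\varepsilon}^3\sum_{x=0}^{y}(1+\ensuremath{\varepsilon})^x = \ensuremath{\varepsilon}^3\,\frac{(1+\ensuremath{\varepsilon})^{y+1}-1}{\ensuremath{\varepsilon}} \leq \ensuremath{\varepsilon}^2(1+\ensuremath{\varepsilon})(1+\ensuremath{\varepsilon})^{y} = (\ensuremath{\varepsilon}+\ensuremath{\varepsilon}^2)|I_y|\text{.}
\]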
Finally, we have to resolve infeasible start times~$\start{ij}$ that fall below the corresponding completion time~$\compl{(i-1)j}$ on the previous segment.
For this, we readjust those small jobs that have been scheduled a bit before their actual completion time on the previous segment. Since the respective error propagates from segment to segment, we have to create an extra time window of~$m\ensuremath{\varepsilon}^2|I_x|$ for each~$\dirbset{S}{\ensuremath{\mathrm{d}}}_{x,s,t}$. We can do this by applying another~$2m^3$ time-stretches. \end{proof}
Therefore, we get the following variant of Lemma~\ref{lem:ptas-bounded_release_per_interval} with a similar proof.
\begin{lemma} \label{lem:ptas-bounded_release_per_interval-multiple}
With~$\ensuremath{\mathcal{O}}\xspace(1+\ensuremath{\varepsilon})$-loss we can assume for each interval~$I_x, x\geq 0$, each~$\ensuremath{\mathrm{d}}\in\{\ensuremath{\mathrm{r}}, \ensuremath{\mathrm{l}}\}$, and each source and target pair~$s,t\in\{1, \dots, m\}$: \begin{compactenum}
\item $\proc{}(\dirbset{S}{\ensuremath{\mathrm{d}}}_{x,s,t}) \leq (1+\ensuremath{\varepsilon}^2)|I_x|$,
\item the number of possible processing times in~$\dirbset{L}{\ensuremath{\mathrm{d}}}_{x,s,t}$ is bounded by $4\log_{(1+\ensuremath{\varepsilon})}\frac{1}{\ensuremath{\varepsilon}}$, and
\item the number of jobs per processing time in~$\dirbset{L}{\ensuremath{\mathrm{d}}}_{x,s,t}$ is bounded by $\frac{1}{\ensuremath{\varepsilon}^2}$. \end{compactenum} \end{lemma}
Also the construction of the safety net (Lemma~\ref{lem:ptas-safety_net}) must be adapted to the setting of multiple segments.
\begin{lemma}\label{lem:ptas-safety_net_more_segments}
With~$\ensuremath{\mathcal{O}}\xspace(1+\ensuremath{\varepsilon})$-loss we can assume that each part is completed on each segment within a constant number of intervals after its release date for the corresponding segment. \end{lemma} \begin{proof}
We again start by applying one time-stretch. This creates extra space from each interval on each segment. We want to assign, for each segment-wise release date and each direction, one interval per segment whose created extra space can host all corresponding parts that have not yet been scheduled on that segment.
Therefore, we first bound the transit time on segment~$i$ as follows. Let~$x_i$ be the smallest integer such that~$\ensuremath{\varepsilon}\transit{i}\leq(1+\ensuremath{\varepsilon})^{x_i}$. Note that~$(1+\ensuremath{\varepsilon})^{x_i}\leq \rel{ij}$ for each job~$j$ with~$i\in\{s_j, \dots, t_j\}$ by Lemma~\ref{lem:ptas-geometric_rounding-multiple}. We get
\begin{gather}\label{equ:ptas-safety_net-transit}
\transit{i} \leq \frac{1}{\ensuremath{\varepsilon}}(1+\ensuremath{\varepsilon})^{x_i} \leq \frac{\ensuremath{\varepsilon}^2}{4}(1+\ensuremath{\varepsilon})^{x_i}(1+\ensuremath{\varepsilon})^{\ensuremath{\sigma}'-1} \leq \frac{\ensuremath{\varepsilon}}{4}|I_{x_i+\ensuremath{\sigma}'-1}|
\end{gather}
for some integer~$\ensuremath{\sigma}'$ with~$\frac{4}{\ensuremath{\varepsilon}^3} \leq (1+\ensuremath{\varepsilon})^{\ensuremath{\sigma}'-1}$.
Furthermore, we want to bound the processing time of all jobs released at time~$R_x$ traveling in direction~$\ensuremath{\mathrm{d}}\in\{\ensuremath{\mathrm{r}},\ensuremath{\mathrm{l}}\}$ by Lemma~\ref{lem:ptas-bounded_release_per_interval-multiple}:
\begin{gather}\label{equ:ptas-safety_net-proc_rel}
\begin{split}
\sum_{s=1}^m\sum_{t=1}^m\left[ p(\dirbset{S}{\ensuremath{\mathrm{d}}}_{x,s,t}) + p(\dirbset{L}{\ensuremath{\mathrm{d}}}_{x,s,t})\right] & \leq m^2\bigg[(1+\ensuremath{\varepsilon}^2)\ensuremath{\varepsilon}(1+\ensuremath{\varepsilon})^x + \frac{1}{\ensuremath{\varepsilon}}(1+\ensuremath{\varepsilon})^x\cdot\frac{4}{\ensuremath{\varepsilon}^2}\log_{(1+\ensuremath{\varepsilon})}\frac{1}{\ensuremath{\varepsilon}}\bigg] \\
& = \quad \ensuremath{\varepsilon}^3(1+\ensuremath{\varepsilon})^x\cdot m^2\left[\frac{(1+\ensuremath{\varepsilon}^2)}{\ensuremath{\varepsilon}^2} +
\frac{4}{\ensuremath{\varepsilon}^6}\log_{(1+\ensuremath{\varepsilon})}\frac{1}{\ensuremath{\varepsilon}} \right]\\
& \leq \quad \frac{\ensuremath{\varepsilon}^3}{4}(1+\ensuremath{\varepsilon})^x(1+\ensuremath{\varepsilon})^{\ensuremath{\sigma}'-1} = \frac{\ensuremath{\varepsilon}^2}{4}|I_{x+\ensuremath{\sigma}'-1}|,
\end{split}
\end{gather} where we define~$\ensuremath{\sigma}'$ to be such that~$\frac{4}{\ensuremath{\varepsilon}^3} \leq (1+\ensuremath{\varepsilon})^{\ensuremath{\sigma}'-1}$ and \[ 4m^2\left[\frac{(1+\ensuremath{\varepsilon}^2)}{\ensuremath{\varepsilon}^2} + \frac{4}{\ensuremath{\varepsilon}^6}\log_{(1+\ensuremath{\varepsilon})}\frac{1}{\ensuremath{\varepsilon}}\right] \leq (1+\ensuremath{\varepsilon})^{\ensuremath{\sigma}'-1}\text{.} \] Note that~$\ensuremath{\sigma}'$ is a constant depending only on~$\ensuremath{\varepsilon}$ and~$m$.
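To get a feeling for the size of~$\ensuremath{\sigma}'$, one may evaluate the defining inequality for the purely illustrative values~$m=2$ and~$\ensuremath{\varepsilon}=\frac{1}{2}$ (neither is fixed by the analysis): then~$4m^2\left[\frac{(1+\ensuremath{\varepsilon}^2)}{\ensuremath{\varepsilon}^2} + \frac{4}{\ensuremath{\varepsilon}^6}\log_{(1+\ensuremath{\varepsilon})}\frac{1}{\ensuremath{\varepsilon}}\right] = 16\left[5+256\log_{1.5}2\right]\approx 7082$, and since~$1.5^{21}\approx 4988 < 7082 \leq 1.5^{22}\approx 7482$ (and also~$\frac{4}{\ensuremath{\varepsilon}^3}=32\leq 1.5^{22}$), this yields~$\ensuremath{\sigma}'=23$.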
We now apply the following strategy. For each direction~$\ensuremath{\mathrm{d}}\in\{\ensuremath{\mathrm{r}},\ensuremath{\mathrm{l}}\}$ we iterate over the segments in the corresponding order~$\{i_1,\dots, i_m\}$ and assign, for each segment~$i$, the jobs~$J_{ix}:=\{j \in \dirbset{J}{\ensuremath{\mathrm{d}}} \mid \rel{ij}=R_{x}\}$ with release date~$R_{x}$ on~$i$ to one concrete block of created extra space.
On segment~$i_1$, we can assign the unfinished jobs of~$J_{i_1x_1}$ for each~$x_1$ to the created space of~$\frac{\ensuremath{\varepsilon}}{2}|I_{x_1+\ensuremath{\sigma}'-1}|$ due to Equations~\eqref{equ:ptas-safety_net-transit} and~\eqref{equ:ptas-safety_net-proc_rel}. By Lemmas~\ref{lem:ptas-time-stretch} and~\ref{lem:ptas-bounded_crossing-multiple} we know that this space is available before the end of~$I_{x_1+\ensuremath{\sigma}'+\ensuremath{\sigma}}$.
Consider now the jobs of some~$J_{i_2x_2}$. Since, by Lemma~\ref{lem:ptas-geometric_rounding-multiple}, $\rel{i_1j}\leq R_{x_2}$ for each~$j\in J_{i_2x_2}$, it is ensured that all preceding parts are finished before~$R_{x_2+\ensuremath{\sigma}'+\ensuremath{\sigma}+1}$. We now assign the created extra space of~$\frac{\ensuremath{\varepsilon}}{2}|I_{x_2+\ensuremath{\sigma}'+\ensuremath{\sigma}}|$ to these parts, which is available between~$R_{x_2+\ensuremath{\sigma}'+\ensuremath{\sigma}+1}$ and the end of~$I_{x_2+\ensuremath{\sigma}'+2\ensuremath{\sigma}+1}$ by the same argument. This space is sufficient to host one transit by Equation~\eqref{equ:ptas-safety_net-transit} and the processing of all jobs of~$J_{i_2x_2}$ with global release date~$r_j=R_{x}\leq R_{x_2}$ due to Equation~\eqref{equ:ptas-safety_net-proc_rel} and Lemma~\ref{lem:A-ptas-interval_volume}.
Continuing analogously we get for each~$1 \leq k \leq m$ and each~$x_k$ that we can complete all unfinished parts of~$J_{i_kx_k}$ in the created space~$\frac{\ensuremath{\varepsilon}}{2}|I_{x_k+\ensuremath{\sigma}'+ (k-1)(\ensuremath{\sigma}+1)-1}|$ before the beginning of interval~$I_{x_k+\ensuremath{\sigma}'+ k(\ensuremath{\sigma}+1)}$.
\end{proof}
To conclude that the number of parts running in each block~$B_t$ is bounded by a constant we finally have to use that the ratio of any two transit times of segments is bounded by a constant.
\begin{lemma}
Assume that~$\transit{k}/\transit{i}$ is bounded by a constant~$(1+\ensuremath{\varepsilon})^T$ for any two segments~$k, i\in\{1, \dots, m\}$. Then there is only a constant number of intervals between each two segment-wise release dates of each job~$j$:
\[
\frac{\rel{kj}}{\rel{ij}} \leq (1+\ensuremath{\varepsilon})^{T+1}\text{.}
\] \end{lemma} \begin{proof} Let~$k\in\{s_j,\dots, t_j\}$ be the segment with maximum transit time. If~$\rel{kj}=\rel{j}$ we know that all segment-wise release dates are equal. Otherwise we can conclude by construction in the proof of Lemma~\ref{lem:ptas-geometric_rounding-multiple} that~$\ensuremath{\varepsilon}(\proc{j}+\transit{k}) \leq \rel{kj} \leq (1+\ensuremath{\varepsilon})\ensuremath{\varepsilon}(\proc{j}+\transit{k})$. Therefore, we get
\[
\rel{kj} \leq (1+\ensuremath{\varepsilon})\ensuremath{\varepsilon}(\proc{j}+\transit{k}) \leq (1+\ensuremath{\varepsilon})\ensuremath{\varepsilon}(\proc{j}+(1+\ensuremath{\varepsilon})^T\transit{s_j})\leq (1+\ensuremath{\varepsilon})^{T+1}\ensuremath{\varepsilon}(\proc{j}+\transit{s_j})\leq (1+\ensuremath{\varepsilon})^{T+1}\rel{j}.
\] Applying that~$\rel{j} \leq \rel{ij} \leq \rel{kj}$ for each~$i\in\{s_j,\dots, t_j\}$ finally yields the claim. \end{proof} }{}
\section{Proofs of Section~\ref{sec:arbitrary-conflicts}:\newline Hardness of custom compatibilities}
In this section we give a detailed hardness proof for bidirectional scheduling on a single segment where jobs can be compatible. Our proof holds even for unit processing and transit times. We first consider the makespan objective and extend the proof in a second step to waiting time and total completion time.
\subsection{$\mathsf{NP}$-Hardness of Makespan Minimization}
\begin{restatable}{theorem}{thmGraphHardness} \label{thm:graph-hardness}
Minimizing the makespan for~$m=1$ with an arbitrary compatibility graph~$G_1$ is $\mathsf{NP}$-hard even if~$\proc{j}=\transit{1}=1$ for each~$j\in\ensuremath{\set{J}}$. \end{restatable}
In the following, we explain the construction of a bidirectional scheduling instance for a given~\rsat{\leq 3}{3} instance with variable set $X = \{ x_i \mid i = 0,\dots,|X|-1\}$ and clause set $C = \{c_k \mid k = 0,\dots,|C|-1\}$. The constructed instance admits a schedule with the demanded makespan~$\ensuremath{\compl{\max}}$ if and only if the given \rsat{\leq 3}{3} formula is satisfiable. For the construction, we partition the time horizon into four parts~$\tpart{1}, \dots, \tpart{4}$ with start times~$\tstart{1}=0$, $\tstart{2}=6\ensuremath{|X|}$, $\tstart{3}=10\ensuremath{|X|}$, and $\tstart{4}=10\ensuremath{|X|}+2\ensuremath{|C|}$. There is a (virtual) last part starting at time~$\tstart{5}=12\ensuremath{|X|}+\ensuremath{|C|}$. The demanded makespan~$\ensuremath{\compl{\max}}=\tstart{5}+1$ will enforce that all jobs start before the end of the fourth part.
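For orientation, the lengths of the four parts implied by these start times are obtained by a direct computation (this summary is not stated explicitly elsewhere in the text):
\[
|\tpart{1}| = 6\ensuremath{|X|},\qquad |\tpart{2}| = 4\ensuremath{|X|},\qquad |\tpart{3}| = 2\ensuremath{|C|},\qquad |\tpart{4}| = \tstart{5}-\tstart{4} = 2\ensuremath{|X|}-\ensuremath{|C|}\text{.}
\]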
The rough idea is as follows: In the first four parts we release a tight frame of \emph{blocking jobs}~$\blockingjs{}$ and \emph{dummy jobs}~$\dummyjs{}$ that have to start running immediately at their release date in any schedule that achieves~$\ensuremath{\compl{\max}}$. We use these jobs to create gaps for \emph{variable jobs} that represent the variable assignments. By defining the compatibilities for the blocking jobs we are able to control which of these variable jobs can be scheduled into each gap. In the first part of our construction, we release all variable jobs, which come in two \emph{types}: one type representing a \emph{true} assignment to the corresponding variable and the other type representing a \emph{false} assignment. Our construction will enforce the following properties in each of its parts:
\begin{restatable}{lemma}{lemWellDefiniedAssignment}
\label{lem:well-defined-assignment}
In every feasible schedule with makespan~$\ensuremath{\compl{\max}}$, all jobs released before~$\tstart{3}$ are scheduled in parts~$\tpart{1}$ and $\tpart{2}$, except for two rightbound variable jobs of same type for each variable.
\end{restatable}
\begin{restatable}{lemma}{lemClauses}\label{lem:clauses}
In every feasible schedule with makespan~$\ensuremath{\compl{\max}}$, the only jobs released before~$\tstart{3}$ and scheduled in~$\tpart{3}$ are rightbound variable jobs each corresponding to a variable assignment satisfying a different clause. \end{restatable}
\begin{restatable}{lemma}{lemStorage}\label{lem:storage}
In every feasible schedule with makespan~$\ensuremath{\compl{\max}}$, the only jobs released before~$\tstart{4}$ and scheduled in~$\tpart{4}$ are rightbound variable jobs, and there are not more than~$2|X|-|C|$ of them. \end{restatable}
In the following we explicitly define the released jobs of each part achieving the above properties. Each part is accompanied by a figure illustrating when jobs are released, the respective compatibility graph and an example of a schedule. In all figures, time is directed downwards, and all rightbound jobs are depicted to the left and all leftbound jobs to the right of the segment. Since compatible jobs can run concurrently, the schedules of the leftbound and the rightbound jobs are drawn separately.
It is convenient to prove Lemmas~\ref{lem:well-defined-assignment} to~\ref{lem:storage} in reverse order. To this end, we start by specifying the jobs released in~$\tpart{4}$.
\subsubsection*{Jobs released in $\tpart{4}$.}
In the fourth part, we release a set of~$2\ensuremath{|X|}-\ensuremath{|C|}$ leftbound blocking jobs~$\blockingjs{4} = \{\blockingjob{i}{} \mid i=0, \dots, 2\ensuremath{|X|}-\ensuremath{|C|}-1\}$. Each blocking job~$\blockingjob{i}{}$ is released at time~$\tstart{4} + i$. The purpose of a blocking job is to leave space for a leftover rightbound variable job that has not been scheduled until the beginning of this part. Each blocking job~$\blockingjob{i}{}\in\blockingjs{4}$ is compatible precisely with the rightbound variable jobs.
\begin{figure}
\caption{Part $\tpart{4}$ with blocking jobs reserving space for all remaining rightbound variable jobs.}
\label{fig:part4}
\end{figure}
We are now in position to prove Lemma~\ref{lem:storage}, i.e., that in a schedule with makespan~$\ensuremath{\compl{\max}}$ the only jobs released before~$\tstart{4}$ that can be scheduled in~$\tpart{4}$ are up to~$2|X|-|C|$ rightbound variable jobs.
\begin{proof}[Proof of Lemma~\ref{lem:storage}]
First, observe that with the required makespan of~$\tstart{5}+1 = \tstart{4}+2\ensuremath{|X|}-\ensuremath{|C|}+1$ each blocking job of~$\blockingjs{4}$ must be scheduled directly at its release date. Consequently, there is no room to delay the start of any leftbound job released before~$\tpart{4}$ to this part. Due to the compatibilities, the rightbound blocking and dummy jobs released before~$\tpart{4}$ are also forced to run before the start of~$\tpart{4}$. Therefore, there are exactly~$2\ensuremath{|X|}-\ensuremath{|C|}$ open slots within~$\tpart{4}$ reserved for rightbound variable jobs. \end{proof}
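The tightness of this frame can also be checked by a direct computation (recall that all processing and transit times are one, so a job starting at time~$S$ completes at time~$S+2$): the last blocking job of~$\blockingjs{4}$ is released at time~$\tstart{4} + (2\ensuremath{|X|}-\ensuremath{|C|}-1) = \tstart{5}-1$ and thus completes no earlier than~$\tstart{5}+1 = \ensuremath{\compl{\max}}$; it meets the makespan only if it starts exactly at its release date, and the same argument propagates backwards through all of~$\blockingjs{4}$.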
We proceed to explain the jobs released in the third part of our construction.
\subsubsection*{Jobs released in $\tpart{3}$.}
The third part (Figure~\ref{fig:part3}) is responsible for the assignment of satisfying literals to each clause. During that part, we release a set of blocking jobs~$\blockingjs{3} = \{\blockingjob{k}{} \mid k=0,\dots,|C|-1\}$ which contains one leftbound blocking job~$\blockingjob{k}{}$ for each clause~$c_k$. Each blocking job~$\blockingjob{k}{}$ is released at time~$\tstart{3} + 2k$ and is compatible with each rightbound variable job that represents a variable assignment satisfying the corresponding clause~$c_k$. The gaps between the release times of the blocking jobs are filled with a set of dummy jobs~$\dummyjs{3} = \{\dummyjob{k}{\ensuremath{\mathrm{r}}} \mid k = 0,\dots,|C|-1\} \cup \{\dummyjob{k}{\ensuremath{\mathrm{l}}} \mid k = 0,\dots,|C|-1\}$ containing one rightbound job~$\dummyjob{k}{\ensuremath{\mathrm{r}}}$ and one leftbound job~$\dummyjob{k}{\ensuremath{\mathrm{l}}}$, each with release date~$\tstart{3} + 2k + 1$. Each leftbound dummy job~$\dummyjob{k}{\ensuremath{\mathrm{l}}}$ is compatible with all rightbound variable jobs; furthermore, each rightbound dummy job~$\dummyjob{k}{\ensuremath{\mathrm{r}}}$ is compatible with the three leftbound jobs released during the time interval~$[r_{\dummyjob{k}{\ensuremath{\mathrm{r}}}}-1, r_{\dummyjob{k}{\ensuremath{\mathrm{r}}}}+1]$.
\begin{figure}
\caption{Part $\tpart{3}$ for $c_k = (x_a \vee x_b \vee \bar x_c)$ and
$c_{k+1} = (\bar x_d \vee x_e \vee \bar x_f)$. Note that each variable job can be adjacent to more than one clause job (although this does not occur in the example).}
\label{fig:part3}
\end{figure}
We are now in a position to prove Lemma~\ref{lem:clauses}, i.e., in a schedule with makespan $\ensuremath{\compl{\max}}$ the only jobs released before $\tstart{3}$ that can be scheduled in $\tpart{3}$ are, for each clause, one rightbound variable job whose variable assignment satisfies the clause.
\begin{proof}[Proof of Lemma~\ref{lem:clauses}]
By Lemma~\ref{lem:storage} all jobs released within~$\tpart{3}$ must start before the end of~$\tpart{3}$. Hence, each leftbound dummy and blocking job is forced to start at its release date. Therefore, due to the compatibilities, each rightbound dummy job must be scheduled directly when released. The only remaining~$\ensuremath{|C|}$ free slots can be filled with rightbound variable jobs -- exactly one free slot per clause~$c_k$ reserved for a variable job representing an assignment that satisfies~$c_k$. \end{proof}
We proceed to explain the jobs released in parts $\tpart{1}$ and~$\tpart{2}$.
\subsubsection*{Jobs released in $\tpart{1}$.}
The first two parts are responsible for obtaining a correct assignment of the variables. In the first part, we release several types of jobs per variable, cf.\ Figure~\ref{fig:part1}. For each variable $x_i$, $i = 0,\dots, |X|-1$, we release \begin{itemize} \item two rightbound true variable jobs $\truejob{i,1}{\ensuremath{\mathrm{r}}}$, $\truejob{i,2}{\ensuremath{\mathrm{r}}}$ at times $6i$ and $6i+1$, respectively, \item two rightbound false variable jobs $\falsejob{i,1}{\ensuremath{\mathrm{r}}}$, $\falsejob{i,2}{\ensuremath{\mathrm{r}}}$ at times $6i+3$ and $6i+4$, respectively, \item one leftbound true variable job $\smash{\truejob{i}{\ensuremath{\mathrm{l}}}}$ at time~$6i+4$, \item one leftbound false variable job $\smash{\falsejob{i}{\ensuremath{\mathrm{l}}}}$ at time $6i+1$, \item two leftbound indefinite variable jobs $\indefjob{i}{\mathrm{t}}$, $\indefjob{i}{\mathrm{f}}$ at times $6i+1$ and $6i+4$, respectively, \item two leftbound blocking jobs $\blockingjob{i}{\mathrm{t}}$, $\blockingjob{i}{\mathrm{f}}$ at times $6i$ and $6i+3$, respectively, \item two leftbound dummy jobs $\dummyjob{{i}}{\ensuremath{\mathrm{l}}\mathrm{t}}$, $\dummyjob{i}{\ensuremath{\mathrm{l}}\mathrm{f}}$ at times $6i+2$ and $6i+5$, respectively, and \item two rightbound dummy jobs $\dummyjob{i}{\ensuremath{\mathrm{r}}\mathrm{t}}$, $\dummyjob{i}{\ensuremath{\mathrm{r}}\mathrm{f}}$ at times $6i+2$ and $6i+5$, respectively.
\end{itemize} In the following, we write $\truejs{\ensuremath{\mathrm{r}}} = \{ \truejob{i,1}{\ensuremath{\mathrm{r}}}, \truejob{i,2}{\ensuremath{\mathrm{r}}} \mid x_i\in \set{X}\}$ for the set of rightbound true variable jobs, $\falsejs{\ensuremath{\mathrm{r}}} = \{ \falsejob{i,1}{\ensuremath{\mathrm{r}}}, \falsejob{i,2}{\ensuremath{\mathrm{r}}} \mid x_i\in \set{X}\}$ for the set of rightbound false variable jobs and $\ensuremath{\set{Q}} = \{ \indefjob{i}{\mathrm{t}}, \indefjob{i}{\mathrm{f}} \mid x_i\in \set{X}\}$ for the set of indefinite jobs.
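To make the release pattern per variable concrete, the offsets above can be tabulated as follows (an illustrative Python sketch; the short job labels are our own shorthand, not notation from the construction):

```python
def part1_release_times(i):
    """Release times of the 14 jobs associated with variable x_i in part 1.

    Keys are shorthand labels (ours, not the paper's notation): T/F for
    true/false variable jobs, Q for indefinite, B for blocking, D for dummy
    jobs; r/l marks rightbound/leftbound.
    """
    base = 6 * i
    return {
        "T_r1": base,     "T_r2": base + 1,   # rightbound true variable jobs
        "F_r1": base + 3, "F_r2": base + 4,   # rightbound false variable jobs
        "T_l": base + 4,  "F_l": base + 1,    # leftbound true/false variable jobs
        "Q_t": base + 1,  "Q_f": base + 4,    # leftbound indefinite jobs
        "B_t": base,      "B_f": base + 3,    # leftbound blocking jobs
        "D_lt": base + 2, "D_lf": base + 5,   # leftbound dummy jobs
        "D_rt": base + 2, "D_rf": base + 5,   # rightbound dummy jobs
    }
```

All jobs of variable $x_i$ are thus released within the window $[6i, 6i+5]$, so the gadgets of consecutive variables do not overlap.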
The compatibility graph~$G_1$ is defined such that
\begin{itemize}
\item each blocking job~$\blockingjob{i}{\mathrm{t}}$ is compatible with the corresponding true variable jobs $\truejob{i,1}{\ensuremath{\mathrm{r}}}$ and~$\truejob{i,2}{\ensuremath{\mathrm{r}}}$,
\item each blocking job $\blockingjob{i}{\mathrm{f}}$ is compatible with the corresponding false variable jobs $\falsejob{i,1}{\ensuremath{\mathrm{r}}}$ and~$\falsejob{i,2}{\ensuremath{\mathrm{r}}}$,
\item each indefinite job~$\indefjob{i}{\mathrm{t}}$ is compatible with the corresponding rightbound true variable jobs~$\truejob{i,1}{\ensuremath{\mathrm{r}}}$ and~$\truejob{i,2}{\ensuremath{\mathrm{r}}}$,
\item each indefinite job $\indefjob{i}{\mathrm{f}}$ is compatible with the corresponding rightbound false variable jobs $\falsejob{i,1}{\ensuremath{\mathrm{r}}}$ and~$\falsejob{i,2}{\ensuremath{\mathrm{r}}}$,
\item each dummy job~$h$ is compatible with the opposed jobs released in~$[r_{h}-1, r_{h}+1]$, and
\item none of the remaining pairs of jobs are compatible.
\end{itemize}
\begin{figure}
\caption{Released jobs per variable~$x_i$ in $\tpart{1}$, the corresponding compatibilities given by~$G_1$ and a scheduled example for a true variable assignment.}
\label{fig:part1}
\end{figure}
\subsubsection*{Jobs released in $\tpart{2}$.} In the second part (Figure~\ref{fig:part2}), there is room for exactly one indefinite job and one leftbound variable job per variable. This is realized by a set of rightbound blocking jobs~$\blockingjs{2}=\{ \blockingjob{i,1}{},\blockingjob{i,2}{} \mid x_i\in \set{X}\}$ where each blocking job $\blockingjob{i,1}{}$ is released at time~$\tstart{2} + 4i$ and is compatible with the corresponding two indefinite jobs~$\indefjob{i}{\mathrm{t}}$ and~$\indefjob{i}{\mathrm{f}}$. Each blocking job $\blockingjob{i,2}{}$ is released at time $\tstart{2} + 4i+2$ and is compatible with the corresponding two leftbound variable jobs~$\falsejob{i}{\ensuremath{\mathrm{l}}}$ and~$\truejob{i}{\ensuremath{\mathrm{l}}}$. The gaps between two subsequent released blocking jobs are closed in both directions by dummy jobs~$\dummyjs{2} = \{\dummyjob{i,1}{\ensuremath{\mathrm{r}}}, \dummyjob{i,1}{\ensuremath{\mathrm{l}}} \mid x_i \in \set{X}\} \cup \{\dummyjob{i,2}{\ensuremath{\mathrm{r}}}, \dummyjob{i,2}{\ensuremath{\mathrm{l}}} \mid x_i\in \set{X}\}$ released at times $\tstart{2} + 4i + 1$ and~$\tstart{2} + 4i + 3$, respectively. Each dummy job is compatible with all jobs of~$\ensuremath{\set{Q}}, \truejs{\ensuremath{\mathrm{l}}}, \falsejs{\ensuremath{\mathrm{l}}}$, or~$\blockingjs{2}$ and the corresponding opposed dummy job released concurrently.
We are now in a position to prove Lemma~\ref{lem:well-defined-assignment}.
\label{prf:well-defined-assignment} \begin{proof}[Proof of Lemma~\ref{lem:well-defined-assignment}]
By Lemmas \ref{lem:clauses} and \ref{lem:storage}, each rightbound dummy and blocking job of~$\dummyjs{2}$ and~$\blockingjs{2}$ must be scheduled before the end of~$\tpart{2}$ and hence directly at its release. By the given compatibilities this is also true for the leftbound dummy jobs of~$\dummyjs{2}$. Therefore, there are exactly two open slots per variable~$x_i$, one reserved for the two corresponding indefinite jobs~$\indefjob{i}{\mathrm{t}}, \indefjob{i}{\mathrm{f}}$ and one for the two corresponding leftbound variable jobs~$\falsejob{i}{\ensuremath{\mathrm{l}}}, \truejob{i}{\ensuremath{\mathrm{l}}}$. Since no further space is left, exactly one job of each pair can be scheduled within~$\tpart{2}$. The remaining one must already be completed by the end of~$\tpart{1}$.
Also, for the first part, we can conclude that no blocking and no dummy job released in~$\tpart{1}$ can start after the end of~$\tpart{1}$.
Consider now one variable~$x_i$ and assume that no job corresponding to~$x_i$ can start within part~$\tpart{1}$ after~$6i+5$. This assumption obviously holds for~$x_{|X|-1}$. Then~$\dummyjob{i}{\ensuremath{\mathrm{r}}\mathrm{f}}$ and~$\dummyjob{i}{\ensuremath{\mathrm{l}}\mathrm{f}}$, the latest released jobs corresponding to~$x_i$, must both start at their release dates.
If the leftbound job~$\truejob{i}{\ensuremath{\mathrm{l}}}$ is scheduled within part~$\tpart{1}$, it must be scheduled at its release and hence~$\falsejob{i,1}{\ensuremath{\mathrm{r}}}$ and~$\falsejob{i,2}{\ensuremath{\mathrm{r}}}$ must be postponed to the next parts. In this case, the second blocking job~$\blockingjob{i}{\mathrm{f}}$ as well as the first two dummy jobs~$\dummyjob{i}{\ensuremath{\mathrm{r}}\mathrm{t}}$ and~$\dummyjob{i}{\ensuremath{\mathrm{l}}\mathrm{t}}$ are also forced to start at their release, and consequently so is~$\blockingjob{i}{\mathrm{t}}$. It is then no longer possible to schedule~$\indefjob{i}{\mathrm{f}}$ within part~$\tpart{1}$. For this reason, its counterpart~$\indefjob{i}{\mathrm{t}}$ must be scheduled at its release time and the leftbound~$\falsejob{i}{\ensuremath{\mathrm{l}}}$ must be postponed. With this, there is exactly one free slot for~$\truejob{i,2}{\ensuremath{\mathrm{r}}}$ and one for~$\truejob{i,1}{\ensuremath{\mathrm{r}}}$.
If, on the other hand, the leftbound job~$\truejob{i}{\ensuremath{\mathrm{l}}}$ is scheduled after part~$\tpart{1}$, we have to schedule~$\falsejob{i}{\ensuremath{\mathrm{l}}}$ within part~$\tpart{1}$. Due to the conflicts with~$\dummyjob{i}{\ensuremath{\mathrm{r}}\mathrm{f}}$, the job~$\falsejob{i}{\ensuremath{\mathrm{l}}}$ and the blocking and dummy jobs in between must in particular be scheduled at their release dates. For that reason~$\indefjob{i}{\mathrm{t}}$ must be postponed and~$\indefjob{i}{\mathrm{f}}$ must be scheduled at its release. Hence, the rightbound true jobs~$\truejob{i,1}{\ensuremath{\mathrm{r}}}$ and~$\truejob{i,2}{\ensuremath{\mathrm{r}}}$ must also be postponed and there are exactly two slots for the two false jobs.
In both cases, the scheduled leftbound jobs ensure that no earlier released variable job can start after~$6(i-1)+5$. Hence, it can be concluded by induction that, for each variable, either all corresponding false jobs or all corresponding true jobs must be scheduled after part~$\tpart{1}$. Since, by Lemmas \ref{lem:clauses} and \ref{lem:storage}, at least~$2n$ rightbound variable jobs must be scheduled within~$\tpart{1}$, the free spots ensure that exactly the two counterparts are scheduled within~$\tpart{1}$. \end{proof}
\begin{figure}
\caption{Part $\tpart{2}$ creates a structure of blocking and dummy jobs with respective compatibilities that create space for exactly one indefinite job per variable~$x_i$.}
\label{fig:part2}
\end{figure}
We can conclude the following claim and hence, Theorem~\ref{thm:graph-hardness}.
\textit{Claim.} There is a satisfying assignment for the given~$\rsat{\leq\!3}{3}$ instance if and only if there is a feasible schedule for the constructed scheduling instance with makespan~$\ensuremath{\compl{\max}} = \tstart{5}+1$.
\label{prf:graph-hardness} \begin{proof}[Proof of Theorem~\ref{thm:graph-hardness}] If there is a schedule with makespan~$\ensuremath{\compl{\max}}$, we can apply Lemmas~\ref{lem:well-defined-assignment} to \ref{lem:storage}. Within the resulting schedule we can therefore be sure that~$\ensuremath{|C|}$ rightbound variable jobs are scheduled within the clause part. Since, by Lemma~\ref{lem:well-defined-assignment}, the assignment of each variable is well defined, we obtain by Lemma~\ref{lem:clauses} a satisfying truth assignment for the clauses.
If on the other hand a satisfying truth assignment is given, the described schedule with the demanded makespan can be created in a straightforward manner, by postponing the assignment jobs corresponding to the truth assignment and scheduling all other jobs within the part they are released in (or in part~$\tpart{2}$ in the case of leftbound variable jobs or indefinite jobs). \end{proof}
\subsection{$\mathsf{NP}$-Hardness of Total Completion Time Minimization}
\corGraphHardnessSum*
We give a reduction analogous to that of Theorem~\ref{thm:graph-hardness}. Note that solutions optimal for the total completion time and those optimal for the total waiting time are equivalent. Hence, it is sufficient to prove the hardness for the latter. The goal is to enforce the same structure as for makespan minimization when minimizing the total waiting time. To do so, we start by calculating an upper bound on the resulting waiting time.
We can trivially bound the total waiting time of a schedule that achieves a makespan of $\ensuremath{\compl{\max}}$ by~$W=|\ensuremath{\set{J}}|\cdot\ensuremath{\compl{\max}} = |\ensuremath{\set{J}}| \cdot (\tstart{5}+1)$, where~$\ensuremath{\set{J}}$ is the set of all jobs in our construction. With this polynomial bound we can extend the construction of a scheduling instance for a given~\rsat{\leq\!3}{3} instance by part~$\tpart{5}$ with~$W+1$ further leftbound blocking jobs $\blockingjs{5}=\{\blockingjob{i}{} \mid i=0, \dots, W\}$, where each~$\blockingjob{i}{}\in\blockingjs{5}$ has release date~$\tstart{5}+i+1$ and is not compatible with any of the previous jobs.
\textit{Claim.} There is a satisfying truth assignment for the given~\rsat{\leq\!3}{3} instance if and only if there is a feasible schedule for the constructed scheduling instance with total waiting time of at most~$W$.
\label{prf:graph-hardness-sum} \begin{proof}[Proof of Theorem~\ref{thm:graph-hardness-sum}] Assume first that there is a satisfying assignment for the \rsat{\leq\!3}{3} instance. In this case, there is a schedule where no job released in the first four parts starts processing after~$\tstart{5}$ and hence the resulting total waiting time does not exceed~$W$.
Assume on the other hand that there is a solution for the constructed scheduling instance whose objective does not exceed~$W$. For such a solution, either all jobs released in the first four parts start before~$\tstart{5}$, or there is at least one starting later. In the first case, we get, by Lemmas~\ref{lem:storage} to~\ref{lem:well-defined-assignment}, a schedule together with a satisfying truth assignment with waiting time bounded by~$W$.
In the second case, each postponed job~$j$ with starting time~$\start{j}'$ increases the already existing waiting time by at least~$(\start{j}'-\tstart{5}) + \bigl(W+1-(\start{j}'-\tstart{5})\bigr)=W+1$, exceeding the bound of~$W$. Hence, the first case applies. \end{proof}
\subsection{$\mathsf{APX}$-Hardness} \label{appendix:apx_hardness}
In this section, we show the $\mathsf{APX}$-hardness of bidirectional scheduling. As for the $\mathsf{NP}$-hardness proof, it is convenient to first prove the $\mathsf{APX}$-hardness for minimizing the makespan before turning to the minimization of the total completion time.
\begin{restatable}{theorem}{thmGraphApxHardness} \label{thm:graph-apx-hardness} Minimizing the makespan for~$m=1$ with an arbitrary compatibility graph~$G_1$ is $\mathsf{APX}$-hard even if~$\proc{j}=\transit{1}=1$ for each~$j\in\ensuremath{\set{J}}$. \end{restatable}
\begin{proof} We reduce from a specific variant of \noun{Max-3-Sat} which is $\mathsf{NP}$-hard to approximate to within a factor of $1016/1015$, see Berman et al.~\cite{BermanKS2003}. An instance of \noun{Symm-4-Occ-Max-3-Sat} is given by a Boolean formula with a set $C$ of clauses of size three over a set of variables $X$, where both the positive and the negative literal of each variable $x_i \in X$ appears in exactly two clauses. Berman et al.~\cite{BermanKS2003} construct a family of instances of \noun{Symm-4-Occ-Max-3-Sat} with $1016n$ clauses, where $n \in \ensuremath{\mathbb{N}}$. They show that for any $\delta \in (0,1/2)$, it is $\mathsf{NP}$-hard to distinguish between the ``bad'' instances where at most $(1015+\delta)n$ clauses can be satisfied and the ``good'' instances where at least $(1016-\delta)n$ clauses can be satisfied.
Let $\phi$ be a formula of the above family. Based on $\phi$, we use the same construction as in Theorem~\ref{thm:graph-hardness} with one small adaptation: In the first part, for each variable $x_i$, $i = 0,\dots, |X|-1$, we additionally release two virtual jobs $v_{i,1}$ and $v_{i,2}$ at times $6i$ and $6i+1$, respectively. Both jobs are compatible with all leftbound blocking, dummy and variable jobs of the same variable. We claim that for this bidirectional scheduling instance the optimal makespan is $12|X|+|C| +1 + \tilde{c}$ if and only if the minimum number of unsatisfied clauses of $\phi$ is $\tilde{c}$. Assuming the correctness of the claim, we derive that for a good instance with $1016n$ clauses, the makespan is at most $12|X| + 1016n + 1 + \delta n$. Using the identity $|X| = 3|C|/4 = 3\cdot 1016 n / 4$, we can bound the makespan from above by $(10160+ \delta)n +1$. For bad instances, on the other hand, the makespan is at least $(10161 - \delta)n$, i.e., the optimal makespan cannot be approximated to within a factor of $10161/10160 \approx 1.000098$.
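To make the arithmetic behind these bounds explicit (this merely restates the quantities above, using $|X| = \tfrac{3}{4}\cdot 1016n = 762n$):
\[
12|X| + |C| + 1 + \tilde{c} \;=\; 12\cdot 762n + 1016n + 1 + \tilde{c} \;=\; 10160n + 1 + \tilde{c},
\]
so good instances ($\tilde{c} \le \delta n$) admit a makespan of at most $(10160+\delta)n+1$, while bad instances ($\tilde{c} \ge (1-\delta)n$) force a makespan of at least $(10161-\delta)n$.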
It is left to prove the correctness of the claim. It is easy to see that the optimal makespan is bounded from above by $12|X|+|C| + 1 + \tilde{c}$ by a small adaptation of the arguments of the proof of Theorem~\ref{thm:graph-hardness}. To see this, fix a variable assignment satisfying all but $\tilde{c}$ clauses. In parts one and two (where the variable assignments are fixed) we schedule all jobs as in the proof of Theorem~\ref{thm:graph-hardness} with respect to the variable assignment. Additionally, the leftbound variable jobs not scheduled in the first part leave a gap in the schedule that is a perfect fit for the additional virtual jobs, see also the right illustration in Figure~\ref{fig:part1}. In the third part, we schedule one satisfying variable job for each clause that is satisfied. In the fourth part, we schedule any $2|X|-|C|$ variable jobs left over from previous parts. By construction, at the end of the fourth part, we are left with $\tilde{c}$ variable jobs (that could not be matched to any clause job in the third part). Scheduling them one after another, we obtain the claimed makespan of $12|X| + |C| + 1 + \tilde{c}$.
To see that $12|X|+|C| + 1 + \tilde{c}$ is a lower bound on the optimal makespan, we argue using the concept of matched jobs. First, note that there is always an optimal schedule in which all jobs are processed at an integral point in time. Otherwise, we could move the first job scheduled at a non-integral point in time to the previous integral point in time without violating any constraints. Iterating this process, we obtain a schedule in which all jobs are processed at integral times, as claimed. Given such an integral schedule, we call a job processed at time $t$ \emph{matched}, if it is leftbound and there is another rightbound job processed at time $t$, or vice versa. Otherwise the job is called unmatched.
For the following arguments, fix an integral schedule. We proceed to argue that there are at least $\tilde{c}$ unmatched jobs that are mutually incompatible.
First consider the (clause) blocking jobs released in part three. For $k,l \in \{0,1,2\}$, let $X_{k,l}$ be the set of variables $x_i$ such that $k$ rightbound true variable jobs $\truejob{i,1}{\ensuremath{\mathrm{r}}}$, $\truejob{i,2}{\ensuremath{\mathrm{r}}}$ are matched to a (clause) blocking job and $l$ rightbound false variable jobs $\falsejob{i,1}{\ensuremath{\mathrm{r}}}$, $\falsejob{i,2}{\ensuremath{\mathrm{r}}}$ are matched to a (clause) blocking job.
Intuitively, the sets $X_{1,1}, X_{2,1}, X_{1,2}, X_{2,2}$ contain the variables that are not set consistently according to a well-defined truth assignment. Using that at most $|C|-\tilde{c}$ clauses of $\phi$ can be satisfied, we derive that at least \begin{align}\label{eq:unmatched_jobs}
\tilde{c} - |X_{1,1}| - |X_{2,1}| - |X_{1,2}| - 2|X_{2,2}| \end{align} (clause) blocking jobs (or rightbound dummy jobs) are unmatched.
For any variable $x_i \in X_{2,2}$, the leftbound blocking jobs $\blockingjob{i}{\mathrm{t}}$, $\blockingjob{i}{\mathrm{f}}$, dummy jobs $\dummyjob{i}{\ensuremath{\mathrm{l}}\mathrm{t}}$, $\dummyjob{i}{\ensuremath{\mathrm{l}}\mathrm{f}}$, indefinite jobs $\indefjob{i}{\mathrm{t}}$, $\indefjob{i}{\mathrm{f}}$, and variable jobs $\falsejob{i}{\ensuremath{\mathrm{l}}}$, $\truejob{i}{\ensuremath{\mathrm{l}}}$ are matched by at most the two rightbound dummy jobs $\dummyjob{i}{\ensuremath{\mathrm{r}}\mathrm{t}}$, $\dummyjob{i}{\ensuremath{\mathrm{r}}\mathrm{f}}$ and the two virtual jobs $v_{i,1}$, $v_{i,2}$ released in part 1 as well as the two blocking jobs $\blockingjob{i,1}{}$, $\blockingjob{i,2}{}$ released in part 2, so that in the end, at least two leftbound jobs are left unmatched. Equivalently, for any variable $x_i \in X_{1,2} \cup X_{2,1}$ at least one of the leftbound jobs above is left unmatched.
For any variable $x_i \in X_{1,1}$, consider the leftbound variable jobs $\falsejob{i}{\ensuremath{\mathrm{l}}}$ and $\truejob{i}{\ensuremath{\mathrm{l}}}$ as well as the leftbound indefinite jobs $\indefjob{i}{\mathrm{t}}$ and $\indefjob{i}{\mathrm{f}}$. At most one indefinite job and one variable job can be matched with the blocking jobs $\blockingjob{i,1}{}$ and $\blockingjob{i,2}{}$ released in part 2. The other two jobs, say the true variable job $\truejob{i}{\ensuremath{\mathrm{l}}}$ and the indefinite job $\indefjob{i}{\mathrm{t}}$, are only compatible with the rightbound variable jobs, the virtual jobs and the dummy jobs, leaving at least one job unmatched. Using \eqref{eq:unmatched_jobs}, we may conclude that the total number of unmatched jobs is at least $\tilde{c}$.
As argued above, the unmatched jobs are either (clause) blocking jobs released in part three or remainders of the different types of leftbound jobs associated with variables and released in the first part. As none of them are compatible, the makespan is at least $12|X| + |C| + 1 + \tilde{c}$, as claimed. \end{proof}
We are now ready to prove the $\mathsf{APX}$-hardness of the minimization of the total completion time.
\GraphAPXHardnessSum*
\begin{proof}[Sketch]
Let $\phi$ be a formula with $1016n$ clauses for some $n \in \ensuremath{\mathbb{N}}$ with $\tilde{c}$ unsatisfiable clauses, as in Berman et al.~\cite{BermanKS2003} (cf.\ proof of Theorem~\ref{thm:graph-apx-hardness}). We use a similar idea as in the proof of Theorem~\ref{thm:graph-hardness-sum}, i.e., we use the same construction as in the reduction for the makespan but add an additional set of $M$ leftbound blocking jobs $B_5 = \{\blockingjob{i}{} \mid i=0,\dots,M-1\}$ with release date $M = 12|X|+|C|+1 = 10160n+1$. With similar arguments as before, we can show that there is an optimum schedule in which exactly $\tilde{c}$ (clause) jobs are unmatched before time $M$, with exactly $\tilde{c}$ incompatible variable jobs remaining unscheduled after time~$2M$. The sum of completion times of this schedule is $a n^2 + bn\tilde{c} + \tilde{c}(\tilde{c}+1)/2 + \mathcal{O}(n)$ for some constants $a,b\in\mathbb{N}$.
Now consider a ``good'' instance with at most $\delta n$ unsatisfiable clauses. The optimum schedule has a sum of completion times of at most $$ an^2 + b\delta n^2 + n^2\delta^2/2 + \mathcal{O}(n). $$ On the other hand, a ``bad'' instance with at least $(1-\delta)n$ unsatisfiable clauses leads to a sum of completion times of at least $$ an^2 + b(1-\delta) n^2 + n^2(1-\delta)^2/2 + \mathcal{O}(n).$$ Since, for $n \to \infty$, good and bad instances cannot be distinguished in polynomial time unless $\mathsf{P}=\mathsf{NP}$~(cf.~\cite{BermanKS2003}), no algorithm can approximate the sum of completion times by a factor better than $$ \frac{a + b(1-\delta) + (1-\delta)^2/2}{a + b\delta + \delta^2/2} \underset{\delta \to 0}{\rightarrow} \frac{a+b+1/2}{a}, $$ which is a constant greater than one. \end{proof}
\section{Proofs of Section~\ref{sec:constant_segments}:\newline Dynamic programs for restricted compatibilities}
In this section we present the dynamic programs for a constant number of compatibility types and a constant number of segments.
\singleSegPoly* \begin{proof}
Let~$\ensuremath{\set{J}}^1, \dots, \ensuremath{\set{J}}^{\ensuremath{\kappa}}$ be a partition into subsets of invariant compatibility type.
We consider each subset~$\ensuremath{\set{J}}^c$ ordered non-increasingly by release dates and denote by~$J_i^c$ the $i$-th job of~$\ensuremath{\set{J}}^c$ in this order, i.e., the $(n_c-i)$-th job to be released.
Each entry~$T[i_1,t_1, \dots, i_{\ensuremath{\kappa}}, t_{\ensuremath{\kappa}}; c]$ of our dynamic programming table is designed to hold the minimum sum of completion times that can be achieved when scheduling only the $i_{c'}$ jobs of largest release date of each compatibility type~$c'$, such that $J_{i_{c'}}^{c'}$ is not scheduled before time~$t_{c'}$ and~$J_{i_c}^c$ is the first job that is scheduled.
We start by setting~$T[0,t_1, \dots, 0, t_{\ensuremath{\kappa}}; c]=0$ and define the dependencies between table entries in the following.
Let~$C(j,t)=\max\{t,r_j\}+p+\tau_1$ denote the smallest possible completion time of job~$j$ when scheduling it not before~$t$.
Depending on the types of jobs~$j_1,j_2$ (and in particular of their directions), we can compute in constant time the earliest time~$\theta(j_1,t_1,j_2,t_2)$ not before~$t_1$ that job~$j_1$ can be scheduled at, assuming that~$j_2$ is scheduled earlier at time~$\max\{t_2,r_{j_2}\}$.
We let~$\delta_{cc'}=1$ if~$c=c'$ and~$\delta_{cc'}=0$ otherwise, abbreviate~$\theta_{c'}=\theta(J_{i_{c'}}^{c'},t_{c'},J_{i_c}^c,t_c)$, and get the following recursive formula for~$i_c>0$:
\[
T[i_1,t_1,\dots, i_{\ensuremath{\kappa}},t_{\ensuremath{\kappa}};c] =
\min_{c':i_{c'} \neq 0} \lbrace
T[i_1-\delta_{1c}, \theta_1,
\dots,
i_\ensuremath{\kappa}-\delta_{\ensuremath{\kappa} c}, \theta_\ensuremath{\kappa}; c'] + C(J_{i_c}^c,t_c) \rbrace.
\]
We can fill out our table in order of increasing sums~$\sum i_c$ and finally obtain the desired minimum completion time as~$\min_c T[n_1,0,\dots,n_\ensuremath{\kappa},0;c]$. We can reconstruct the schedule from the dynamic programming table in straightforward manner. It remains to argue that we only need to consider polynomially many times~$t_c$. This is true, since all relevant times are contained in the set~$\{\rel{j} + k\transit{} + \ell\proc{} \mid j, k, \ell \leq n\}$ of cardinality~$\ensuremath{\mathcal{O}}\xspace(n^3)$. \end{proof}
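The last claim, that only polynomially many candidate times matter, can be illustrated with a short sketch (Python, with names of our own choosing; this is not part of the formal argument):

```python
def relevant_times(release_dates, tau, p):
    """Candidate times {r_j + k*tau + l*p : j, k, l <= n} from the proof.

    tau and p are the (uniform) transit and processing times; the set has
    at most n*(n+1)^2 elements, i.e., it is of size O(n^3).
    """
    n = len(release_dates)
    return sorted({r + k * tau + l * p
                   for r in release_dates
                   for k in range(n + 1)
                   for l in range(n + 1)})
```

Restricting the dynamic program to these times keeps the table size, and hence the running time, polynomial in $n$.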
\begin{restatable}{theorem}{thmIdenticalConstantSegMakespan}\label{thm:identical-constantseg-makespan}
The bidirectional scheduling problem can be solved in polynomial time if~$m$,~$\ensuremath{\kappa}$, and $\transit{i}$ are constant for each~$i\in M$, and~$\proc{j}=1$ for each~$j\in\ensuremath{\set{J}}$. \end{restatable} \begin{proof}
Again, we consider subsets of identical jobs. In addition to their conflict type~$c$, we further distinguish jobs by their start and target segments~$s,t$ and form subsets~$J_{s,t}^c$ correspondingly. The number of subsets is bounded by~$\ensuremath{\kappa} m^2$.
Since all release times are integer and since~$p_j=1$, we only need to consider integer points in time. Hence, only~$\tau_i+1$ possible positions need to be considered for a job running on segment~$i$, and no two jobs of the same direction can occupy the same position.
The state of the system can be fully described by (i) the number of available jobs per segment and~$J_{s,t}^c$, and (ii)~for each position on each segment and each~$J_{s,t}^c$, the fact whether a job of~$J_{s,t}^c$ is occupying this position.
The number of states is bounded by $\prod_{i=1}^m n^{\ensuremath{\kappa}{}m^{2}}\cdot\prod_{i=1}^m 2^{\ensuremath{\kappa}{}m^2(\transit{i}+1)}=\mathrm{poly}(n)$.
We define the successors of each state to be all states that can be reached in one time step where not all jobs wait, or by waiting for the next release date.
This way, the state representation changes from one state to the next, and the system always makes progress towards the final state in which each job has arrived at its target.
The state graph can thus not have a cycle, and we may consider states in a topological order.
We formulate a dynamic program that computes for each state the smallest partial completion time to reach the state, where the partial completion time is defined as the sum of completion times of all completed jobs plus the current time for each uncompleted job.
The dynamic program is well-defined as each value only depends on predecessor states. \end{proof}
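To see that the state-count bound is indeed polynomial for constant~$m$, $\ensuremath{\kappa}$, and transit times, it can be evaluated directly (an illustrative Python sketch; the function name and interface are ours):

```python
def state_count_bound(n, m, kappa, taus):
    """Upper bound on the number of states from the proof:
    prod over segments i of n^(kappa*m^2) * 2^(kappa*m^2*(tau_i+1)).

    For constant m, kappa, and taus this is polynomial in n.
    """
    bound = 1
    for tau in taus:  # one factor per segment i with transit time tau_i
        bound *= n ** (kappa * m * m)            # available-job counts
        bound *= 2 ** (kappa * m * m * (tau + 1))  # position occupancy bits
    return bound
```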
\begin{restatable}{corollary}{corIdenticalConstantSegTotalC}\label{cor:identical-constantseg-totalc}
The bidirectional scheduling problem can be solved in polynomial time if $m$ and $\ensuremath{\kappa}$ are constant, $\transit{i}=1$ for each~$i\in M$, and~$\proc{j}=0$ for each~$j\in\ensuremath{\set{J}}$. \end{restatable} \begin{proof}
Since all release dates are integer, at each integer point in time no jobs are running on any segment. We can thus use a simpler version of the dynamic program we introduced in the proof of Theorem~\ref{thm:identical-constantseg-makespan}. \end{proof}
\end{document}
\begin{definition}[Definition:Biconditional/Boolean Interpretation]
Let $\mathbf A$ and $\mathbf B$ be propositional formulas.
Let $\mathbf A \iff \mathbf B$ denote the biconditional operator.
The truth value of $\mathbf A \iff \mathbf B$ under a boolean interpretation $v$ is given by:
:$\map v {\mathbf A \iff \mathbf B} = \begin{cases}
\T & : \map v {\mathbf A} = \map v {\mathbf B} \\
\F & : \text{otherwise}
\end{cases}$
\end{definition}
Remnant radio galaxies discovered in a multi-frequency survey
GAMA Legacy ATCA Southern Survey
Benjamin Quici, Natasha Hurley-Walker, Nicholas Seymour, Ross J. Turner, Stanislav S. Shabala, Minh Huynh, H. Andernach, Anna D. Kapińska, Jordan D. Collier, Melanie Johnston-Hollitt, Sarah V. White, Isabella Prandoni, Timothy J. Galvin, Thomas Franzen, C. H. Ishwara-Chandra, Sabine Bellstedt, Steven J. Tingay, Bryan M. Gaensler, Andrew O'Brien, Johnathan Rogers, Kate Chow, Simon Driver, Aaron Robotham
Journal: Publications of the Astronomical Society of Australia / Volume 38 / 2021
Published online by Cambridge University Press: 09 February 2021, e008
The remnant phase of a radio galaxy begins when the jets launched from an active galactic nucleus are switched off. To study the fraction of radio galaxies in a remnant phase, we take advantage of an $8.31$ deg$^2$ subregion of the GAMA 23 field which comprises surveys covering the frequency range 0.1–9 GHz. We present a sample of 104 radio galaxies compiled from observations conducted by the Murchison Widefield Array (216 MHz), the Australian Square Kilometre Array Pathfinder (887 MHz), and the Australia Telescope Compact Array (5.5 GHz). We adopt an 'absent radio core' criterion to identify 10 radio galaxies showing no evidence for an active nucleus. We classify these as new candidate remnant radio galaxies. Seven of these objects still display compact emitting regions within the lobes at 5.5 GHz; at this frequency the emission is short-lived, implying a recent jet switch off. On the other hand, only three show evidence of aged lobe plasma by the presence of an ultra-steep spectrum ($\alpha<-1.2$) and a diffuse, low surface brightness radio morphology. The predominant fraction of young remnants is consistent with a rapid fading during the remnant phase. Within our sample of radio galaxies, our observations constrain the remnant fraction to $4\%\lesssim f_{\mathrm{rem}} \lesssim 10\%$; the lower limit comes from the limiting case in which all remnant candidates with hotspots are simply active radio galaxies with faint, undetected radio cores. Finally, we model the synchrotron spectrum arising from a hotspot to show they can persist for 5–10 Myr at 5.5 GHz after the jets switch off; radio emission arising from such hotspots can therefore be expected in an appreciable fraction of genuine remnants.
Unexpected circular radio objects at high Galactic latitude
Ray P. Norris, Huib T. Intema, Anna D. Kapińska, Bärbel S. Koribalski, Emil Lenc, L. Rudnick, Rami Z. E. Alsaberi, Craig Anderson, G. E. Anderson, E. Crawford, Roland Crocker, Jayanne English, Miroslav D. Filipović, Tim J. Galvin, Andrew M. Hopkins, Natasha Hurley-Walker, Susumu Inoue, Kieran Luken, Peter J. Macgregor, Pero Manojlović, Josh Marvil, Andrew N. O'Brien, Laurence Park, Wasim Raja, Devika Shobhana, Tiziana Venturi, Jordan D. Collier, Catherine Hale, Aidan Hotan, Vanessa Moss, Matthew Whiting
Published online by Cambridge University Press: 18 January 2021, e003
We have found a class of circular radio objects in the Evolutionary Map of the Universe Pilot Survey, using the Australian Square Kilometre Array Pathfinder telescope. The objects appear in radio images as circular edge-brightened discs, about one arcmin diameter, that are unlike other objects previously reported in the literature. We explore several possible mechanisms that might cause these objects, but none seems to be a compelling explanation.
Calibration database for the Murchison Widefield Array All-Sky Virtual Observatory
Marcin Sokolowski, Christopher H. Jordan, Gregory Sleap, Andrew Williams, Randall Bruce Wayth, Mia Walker, David Pallot, Andre Offringa, Natasha Hurley-Walker, Thomas M. O. Franzen, Melanie Johnston-Hollitt, David L. Kaplan, David Kenney, Steven J. Tingay
Published online by Cambridge University Press: 11 June 2020, e021
We present a calibration component for the Murchison Widefield Array All-Sky Virtual Observatory (MWA ASVO) utilising a newly developed PostgreSQL database of calibration solutions. Since its inauguration in 2013, the MWA has recorded over 34 petabytes of data archived at the Pawsey Supercomputing Centre. According to the MWA Data Access policy, data become publicly available 18 months after collection. Therefore, most of the archival data are now available to the public. Access to public data was provided in 2017 via the MWA ASVO interface, which allowed researchers worldwide to download MWA uncalibrated data in standard radio astronomy data formats (CASA measurement sets or UV FITS files). The addition of the MWA ASVO calibration feature opens a new, powerful avenue for researchers without a detailed knowledge of the MWA telescope and data processing to download calibrated visibility data and create images using standard radio astronomy software packages. In order to populate the database with calibration solutions from the last 6 yr we developed fully automated pipelines. A near-real-time pipeline has been used to process new calibration observations as soon as they are collected and upload calibration solutions to the database, which enables monitoring of the interferometric performance of the telescope. Based on this database, we present an analysis of the stability of the MWA calibration solutions over long time intervals.
The GLEAM 4-Jy (G4Jy) Sample: I. Definition and the catalogue
Sarah V. White, Thomas M. O. Franzen, Chris J. Riseley, O. Ivy Wong, Anna D. Kapińska, Natasha Hurley-Walker, Joseph R. Callingham, Kshitij Thorat, Chen Wu, Paul Hancock, Richard W. Hunstead, Nick Seymour, Jesse Swan, Randall Wayth, John Morgan, Rajan Chhetri, Carole Jackson, Stuart Weston, Martin Bell, Bi-Qing For, B. M. Gaensler, Melanie Johnston-Hollitt, André Offringa, Lister Staveley-Smith
The Murchison Widefield Array (MWA) has observed the entire southern sky (Declination, $\delta< 30^{\circ}$ ) at low radio frequencies, over the range 72–231MHz. These observations constitute the GaLactic and Extragalactic All-sky MWA (GLEAM) Survey, and we use the extragalactic catalogue (EGC) (Galactic latitude, $|b| >10^{\circ}$ ) to define the GLEAM 4-Jy (G4Jy) Sample. This is a complete sample of the 'brightest' radio sources ( $S_{\textrm{151\,MHz}}>4\,\text{Jy}$ ), the majority of which are active galactic nuclei with powerful radio jets. Crucially, low-frequency observations allow the selection of such sources in an orientation-independent way (i.e. minimising the bias caused by Doppler boosting, inherent in high-frequency surveys). We then use higher-resolution radio images, and information at other wavelengths, to morphologically classify the brightest components in GLEAM. We also conduct cross-checks against the literature and perform internal matching, in order to improve sample completeness (which is estimated to be $>95.5$ %). This results in a catalogue of 1863 sources, making the G4Jy Sample over 10 times larger than that of the revised Third Cambridge Catalogue of Radio Sources (3CRR; $S_{\textrm{178\,MHz}}>10.9\,\text{Jy}$ ). Of these G4Jy sources, 78 are resolved by the MWA (Phase-I) synthesised beam ( $\sim2$ arcmin at 200MHz), and we label 67% of the sample as 'single', 26% as 'double', 4% as 'triple', and 3% as having 'complex' morphology at $\sim1\,\text{GHz}$ (45 arcsec resolution). We characterise the spectral behaviour of these objects in the radio and find that the median spectral index is $\alpha=-0.740 \pm 0.012$ between 151 and 843MHz, and $\alpha=-0.786 \pm 0.006$ between 151MHz and 1400MHz (assuming a power-law description, $S_{\nu} \propto \nu^{\alpha}$ ), compared to $\alpha=-0.829 \pm 0.006$ within the GLEAM band. 
Alongside this, our value-added catalogue provides mid-infrared source associations (subject to 6" resolution at 3.4 $\mu$ m) for the radio emission, as identified through visual inspection and thorough checks against the literature. As such, the G4Jy Sample can be used as a reliable training set for cross-identification via machine-learning algorithms. We also estimate the angular size of the sources, based on their associated components at $\sim1\,\text{GHz}$ , and perform a flux density comparison for 67 G4Jy sources that overlap with 3CRR. Analysis of multi-wavelength data, and spectral curvature between 72MHz and 20GHz, will be presented in subsequent papers, and details for accessing all G4Jy overlays are provided at https://github.com/svw26/G4Jy.
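As an aside, the spectral indices quoted above follow directly from the power-law description $S_{\nu} \propto \nu^{\alpha}$: given flux densities at two frequencies, $\alpha$ is the slope in log-log space. A minimal sketch of that two-point calculation (the flux densities below are illustrative, not values from the G4Jy catalogue):

```python
import math

def spectral_index(s1, s2, nu1, nu2):
    """Two-point spectral index alpha, assuming a power law S_nu ∝ nu**alpha.

    s1, s2 are flux densities measured at frequencies nu1, nu2
    (any consistent units for each pair).
    """
    return math.log(s2 / s1) / math.log(nu2 / nu1)

# Illustrative numbers only: a source with 5.0 Jy at 151 MHz and 0.9 Jy at 1400 MHz.
alpha = spectral_index(5.0, 0.9, 151.0, 1400.0)  # ≈ -0.77, a fairly typical value
```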
The GLEAM 4-Jy (G4Jy) Sample: II. Host galaxy identification for individual sources
Sarah V. White, Thomas M. O. Franzen, Chris J. Riseley, O. Ivy Wong, Anna D. Kapińska, Natasha Hurley-Walker, Joseph R. Callingham, Kshitij Thorat, Chen Wu, Paul Hancock, Richard W. Hunstead, Nick Seymour, Jesse Swan, Randall Wayth, John Morgan, Rajan Chhetri, Carole Jackson, Stuart Weston, Martin Bell, B. M. Gaensler, Melanie Johnston-Hollitt, André Offringa, Lister Staveley-Smith
The entire southern sky (Declination, $\delta< 30^{\circ}$ ) has been observed using the Murchison Widefield Array (MWA), which provides radio imaging of $\sim$ 2 arcmin resolution at low frequencies (72–231 MHz). This is the GaLactic and Extragalactic All-sky MWA (GLEAM) Survey, and we have previously used a combination of visual inspection, cross-checks against the literature, and internal matching to identify the 'brightest' radio-sources ( $S_{\mathrm{151\,MHz}}>4$ Jy) in the extragalactic catalogue (Galactic latitude, $|b| >10^{\circ}$ ). We refer to these 1 863 sources as the GLEAM 4-Jy (G4Jy) Sample, and use radio images (of ${\leq}45$ arcsec resolution), and multi-wavelength information, to assess their morphology and identify the galaxy that is hosting the radio emission (where appropriate). Details of how to access all of the overlays used for this work are available at https://github.com/svw26/G4Jy. Alongside this we conduct further checks against the literature, which we document here for individual sources. Whilst the vast majority of the G4Jy Sample are active galactic nuclei with powerful radio-jets, we highlight that it also contains a nebula, two nearby, star-forming galaxies, a cluster relic, and a cluster halo. There are also three extended sources for which we are unable to infer the mechanism that gives rise to the low-frequency emission. In the G4Jy catalogue we provide mid-infrared identifications for 86% of the sources, and flag the remainder as: having an uncertain identification (129 sources), having a faint/uncharacterised mid-infrared host (126 sources), or it being inappropriate to specify a host (2 sources). For the subset of 129 sources, there is ambiguity concerning candidate host-galaxies, and this includes four sources (B0424–728, B0703–451, 3C 198, and 3C 403.1) where we question the existing identification.
Low-Frequency Carbon Recombination Lines in the Orion Molecular Cloud Complex
Chenoa D. Tremblay, Christopher H. Jordan, Maria Cunningham, Paul A. Jones, Natasha Hurley-Walker
Published online by Cambridge University Press: 02 May 2018, e018
We detail tentative detections of low-frequency carbon radio recombination lines from within the Orion molecular cloud complex observed at 99–129 MHz. These tentative detections include one alpha transition and one beta transition over three locations and are located within the diffuse regions of dust observed in the infrared at 100 μm, the Hα emission detected in the optical, and the synchrotron radiation observed in the radio. With these observations, we are able to study the radiation mechanism transition from collisionally pumped to radiatively pumped within the H ii regions within the Orion molecular cloud complex.
Source Finding in the Era of the SKA (Precursors): Aegean 2.0
Paul J. Hancock, Cathryn M. Trott, Natasha Hurley-Walker
Published online by Cambridge University Press: 20 March 2018, e011
In the era of the SKA precursors, telescopes are producing deeper, larger images of the sky on increasingly small time-scales. The greater size and volume of images place an increased demand on the software that we use to create catalogues, and so our source finding algorithms need to evolve accordingly. In this paper, we discuss some of the logistical and technical challenges that result from the increased size and volume of images that are to be analysed, and demonstrate how the Aegean source finding package has evolved to address these challenges. In particular, we address the issues of source finding on spatially correlated data, and on images in which the background, noise, and point spread function vary across the sky. We also introduce the concept of forced or prioritised fitting.
Low-Frequency Spectral Energy Distributions of Radio Pulsars Detected with the Murchison Widefield Array
Tara Murphy, David L. Kaplan, Martin E. Bell, J. R. Callingham, Steve Croft, Simon Johnston, Dougal Dobie, Andrew Zic, Jake Hughes, Christene Lynch, Paul Hancock, Natasha Hurley-Walker, Emil Lenc, K. S. Dwarakanath, B.-Q. For, B. M. Gaensler, L. Hindson, M. Johnston-Hollitt, A. D. Kapińska, B. McKinley, J. Morgan, A. R. Offringa, P. Procopio, L. Staveley-Smith, R. Wayth, C. Wu, Q. Zheng
Published online by Cambridge University Press: 26 April 2017, e020
We present low-frequency spectral energy distributions of 60 known radio pulsars observed with the Murchison Widefield Array telescope. We searched the GaLactic and Extragalactic All-sky Murchison Widefield Array survey images for 200-MHz continuum radio emission at the position of all pulsars in the Australia Telescope National Facility (ATNF) pulsar catalogue. For the 60 confirmed detections, we have measured flux densities in 20 × 8 MHz bands between 72 and 231 MHz. We compare our results to existing measurements and show that the Murchison Widefield Array flux densities are in good agreement.
The Murchison Widefield Array Commissioning Survey: A Low-Frequency Catalogue of 14 110 Compact Radio Sources over 6 100 Square Degrees
Natasha Hurley-Walker, John Morgan, Randall B. Wayth, Paul J. Hancock, Martin E. Bell, Gianni Bernardi, Ramesh Bhat, Frank Briggs, Avinash A. Deshpande, Aaron Ewall-Wice, Lu Feng, Bryna J. Hazelton, Luke Hindson, Daniel C. Jacobs, David L. Kaplan, Nadia Kudryavtseva, Emil Lenc, Benjamin McKinley, Daniel Mitchell, Bart Pindor, Pietro Procopio, Divya Oberoi, André Offringa, Stephen Ord, Jennifer Riding, Judd D. Bowman, Roger Cappallo, Brian Corey, David Emrich, B. M. Gaensler, Robert Goeke, Lincoln Greenhill, Jacqueline Hewitt, Melanie Johnston-Hollitt, Justin Kasper, Eric Kratzenberg, Colin Lonsdale, Mervyn Lynch, Russell McWhirter, Miguel F. Morales, Edward Morgan, Thiagaraj Prabu, Alan Rogers, Anish Roshi, Udaya Shankar, K. Srivani, Ravi Subrahmanyan, Steven Tingay, Mark Waterson, Rachel Webster, Alan Whitney, Andrew Williams, Chris Williams
Published online by Cambridge University Press: 14 November 2014, e045
We present the results of an approximately 6 100 deg$^2$ 104–196 MHz radio sky survey performed with the Murchison Widefield Array during instrument commissioning between 2012 September and 2012 December: the MWACS. The data were taken as meridian drift scans with two different 32-antenna sub-arrays that were available during the commissioning period. The survey covers approximately 20.5 h < RA < 8.5 h, −58° < Dec < −14° over three frequency bands centred on 119, 150 and 180 MHz, with image resolutions of 6–3 arcmin. The catalogue has 3 arcmin angular resolution and a typical noise level of 40 mJy beam$^{-1}$, with reduced sensitivity near the field boundaries and bright sources. We describe the data reduction strategy, based upon mosaicked snapshots, flux density calibration, and source-finding method. We present a catalogue of flux density and spectral index measurements for 14 110 sources, extracted from the mosaic, 1 247 of which are sub-components of complexes of sources.
\begin{definition}[Definition:Real Function/Range]
Let $S \subseteq \R$.
Let $f: S \to \R$ be a real function.
The range of $f$ is the set of values that the dependent variable can take.
That is, it is the image set of $f$.
\end{definition} | ProofWiki |
Logic, Category Theory and Computation
Org: Prakash Panangaden (McGill)
JOHN BAEZ, U. C. Riverside
Categories in Control [PDF]
Control theory is the branch of engineering that studies dynamical systems with inputs and outputs, and seeks to stabilize these using feedback. Control theory uses 'signal-flow diagrams' to describe processes where real-valued functions of time are added, multiplied by scalars, differentiated and integrated, duplicated and deleted. In fact, these are string diagrams for the symmetric monoidal category of finite-dimensional vector spaces, where the monoidal structure is direct sum. Jason Erbele and I found a presentation for this symmetric monoidal category, which amounts to saying that it is the PROP for bicommutative bimonoids with some extra structure.
A broader class of signal-flow diagrams also includes extra morphisms to model feedback. This amounts to working with the symmetric monoidal category where objects are finite-dimensional vector spaces and the morphisms are linear relations. Erbele also found a presentation for this larger symmetric monoidal category. It is the PROP for a remarkable thing: roughly speaking, an object with two commutative Frobenius algebra structures, such that the multiplication and unit of either one and the comultiplication and counit of the other fit together to form a bimonoid.
In electrical engineering we also need a category where a morphism is a circuit made of resistors, inductors and capacitors. Brendan Fong and I proved there is a functor mapping any such circuit to the relation it imposes between currents and potentials at the inputs and outputs. This functor goes from the category of circuits to the category of finite-dimensional vector spaces and linear relations.
MARC BAGNOL, University of Ottawa
Proofnets and the Complexity of Proof Equivalence [PDF]
Also known as the word problem in category theory, the equivalence problem of a logic asks whether two proofs are related by a set of rule permutations that reflect the commutative conversions of the logic. On the other hand, proofnets for a logic can be broadly defined as combinatorial objects offering canonical representatives for equivalence classes of proofs, enjoying good computational properties.
A notion of proofnet for a logic therefore induces a way to solve the equivalence problem of this logic. This has been used recently to show that the multiplicative fragment (with units) of linear logic cannot have a low-complexity notion of proofnets, by proving that the equivalence problem for this fragment is PSpace-complete.
We will look into the situation for another small fragment of linear logic: the multiplicative-additive (without units) fragment, whose intuitionistic part can be seen as a very basic linear $\lambda$-calculus with co-products.
RICHARD BLUTE, University of Ottawa
Towards a Theory of Integral Linear Logic via Rota-Baxter algebras [PDF]
Differential linear logic, as introduced by Ehrhard and Regnier, extends linear logic with an inference rule which is a syntactic version of differentiation. The corresponding categorical structures, called differential categories, were introduced by Blute, Cockett and Seely. Differential categories are monoidal categories equipped with a comonad which endows objects in its image with a cocommutative coalgebra structure. There is also a natural transformation, called the deriving transform, which models the differential inference rule. The large number of examples of differential categories demonstrate the utility of the idea. These include the convenient vector spaces of Frohlicher and Kriegl.
It is an ongoing project to develop similar notions of integral linear logic and integral categories. An appropriate place to draw inspiration for this is the theory of Rota-Baxter algebras. Rota-Baxter algebras are associative algebras with an endomorphism which satisfies an abstraction of the integration by parts formula. There are many examples of such algebras and multi-object versions of these examples should provide important examples of models of integral linear logic.
ANDREW CAVE, McGill
Linear temporal logic proofs as reactive programs [PDF]
In this talk I describe a constructive variant of linear temporal logic, and describe its interpretation as a reactive programming language via the propositions-as-types correspondence. The upshot is that the type discipline enforces causality and liveness properties of its programs. I will also discuss a variant system which realizes a version of the Gödel-Löb principle.
FLORENCE CLERC, McGill
Presenting a Category Modulo a Rewriting System [PDF]
Presentations of categories are a useful tool to describe categories by means of generators for objects and morphisms, together with relations on morphisms. However, problems arise when trying to generalize this construction to the case where objects are considered modulo an equivalence. Three different constructions can be used to describe such a generalization: localization, quotient, and considering only normal forms with respect to a certain rewriting system.
I will present some work done in collaboration with S. Mimram and P. L. Curien. We assume two kinds of hypotheses, namely convergence and the cylinder property (which is a form of higher-dimensional convergence). Under these assumptions, we prove that there is an equivalence of categories between the quotient and the localization, and an isomorphism of categories between the quotient and the category of normal forms.
ROBIN COCKETT, University of Calgary
The Basics of Integral Categories [PDF]
Differential categories of both tensor and Cartesian stripes have been around for quite a while: what do the corresponding structures for integral categories look like?
The talk will describe integral categories and their relationship to differential categories. It will also describe the more general notion of an anti-derivative (due to Ehrhard), how these arise in this context, and how they produce integration. Some models of integral categories will be described. If time permits, the Cartesian version of these notions will be discussed.
JOSEE DESHARNAIS, Université Laval
Almost Sure Bisimulation in Labelled Markov Processes [PDF]
In this talk we propose a notion of bisimulation for labelled Markov processes parameterised by negligible sets (LMPns). The point is to allow us to say things like two LMPs are "almost surely" bisimilar when they are bisimilar everywhere except on a negligible set. Usually negligible sets are sets of measure 0, but we work with abstract ideals of negligible sets and so do not introduce an ad-hoc measure. The construction is given in two steps. First a refined version of the category of measurable spaces is set up, where objects incorporate ideals of negligible subsets, and arrows are identified when they induce the same homomorphisms from their target to their source $\sigma$-algebras up to negligible sets. Epis are characterised as arrows reflecting negligible subsets. Second, LMPns are obtained as coalgebras of a refined version of Giry's probabilistic monad. This gives us the machinery to remove certain counterintuitive examples where systems were bisimilar except for a negligible set. Our new notion of bisimilarity is then defined using cospans of epis in the associated category of coalgebras, and is found to coincide with a suitable logical equivalence given by the LMP modal logic. This notion of bisimulation is given in full generality - not restricted to analytic spaces. The original theory is recovered by taking the empty set to be the only negligible set.
NORM FERNS, York University
Bisimulation Through Markov Decision Process Coupling [PDF]
Markov decision processes (MDPs) are a popular mathematical model for sequential decision-making under uncertainty. Many standard solution methods are based on computing or learning the optimal value function, which reflects the expected return one can achieve in each state by choosing actions according to an optimal policy. In order to deal with large state spaces, one often turns to approximation.
Bisimulation metrics have been used to establish approximation bounds for state aggregation and other forms of value function approximation in MDPs. In this talk, we show that a bisimulation metric defined on the state space of an MDP in previous work can be viewed as the optimal value function of an optimal coupling of two copies of the original model, and discuss the consequences thereof.
This is joint work with Doina Precup.
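The optimal value function described in the first paragraph can be computed by standard value iteration; the sketch below is a generic illustration on a made-up two-state, two-action MDP, not the coupling construction of the talk:

```python
def value_iteration(P, R, gamma, tol=1e-10):
    """Compute the optimal value function of a finite MDP by value iteration.

    P[s][a] is a list of (probability, next_state) pairs; R[s][a] is the
    immediate reward; gamma in [0, 1) is the discount factor.
    """
    n = len(P)
    V = [0.0] * n
    while True:
        # Bellman optimality update: maximize expected return over actions.
        V_new = [
            max(R[s][a] + gamma * sum(p * V[t] for p, t in P[s][a])
                for a in range(len(P[s])))
            for s in range(n)
        ]
        if max(abs(V_new[s] - V[s]) for s in range(n)) < tol:
            return V_new
        V = V_new

# Hypothetical 2-state MDP: action 0 stays put, action 1 moves to the other
# state; only moving from state 0 to state 1 pays a reward of 1.
P = [[[(1.0, 0)], [(1.0, 1)]],
     [[(1.0, 1)], [(1.0, 0)]]]
R = [[0.0, 1.0],
     [0.0, 0.0]]
V = value_iteration(P, R, gamma=0.5)
```

For this toy model the fixed point can be solved by hand: $V(0) = 1 + \tfrac12 V(1)$ and $V(1) = \tfrac12 V(0)$, giving $V(0)=4/3$ and $V(1)=2/3$.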
JÉRÔME FORTIER, Université d'Ottawa
Circular Proofs [PDF]
Self-referential objects such as natural numbers, infinite words, languages accepted by abstract machines, finite and infinite trees and so on, live in the world of $\mu$-bicomplete categories, where they arise from the following operations: finite products and coproducts, initial algebras (induction) and final coalgebras (coinduction). Circular proofs were introduced by Santocanale in order to provide a logical syntax for defining morphisms between such objects. We have indeed shown full completeness of circular proofs with respect to free $\mu$-bicomplete categories. The main practical advantage of having such a syntax is that it provides a general evaluation algorithm, via a cut-elimination procedure. We can then see circular proofs as a new kind of abstract machine, for which we will explore some computability-related questions.
PIETER HOFSTRA, University of Ottawa
Monads and Isotropy [PDF]
In joint work with Funk and Steinberg (``Isotropy and Crossed Toposes", TAC, 2012), a new algebraic invariant of Grothendieck toposes was introduced: just as every topos contains a canonical locale (its subobject classifier), it also contains a canonical group, called its \emph{isotropy group}. In this talk, I will survey some of the recent developments concerning isotropy groups of toposes and of small categories, and in particular explain how this gives rise to some new monads on the category of small categories, one of which can be viewed as encoding formal conjugation in a category.
ANDRÉ JOYAL, UQAM
Simplicial tribes for homotopy type theory [PDF]
A tribe is a categorical model of homotopy type theory. We would like to show that the category of tribes has the structure of a fibration category, but path objects are missing. To correct this, we introduce the notion of simplicial tribes.
BRIGITTE PIENTKA, McGill University
Mechanizing Meta-Theory in Beluga [PDF]
Mechanizing formal systems, given via axioms and inference rules, together with proofs about them plays an important role in establishing trust in formal developments. In this talk, I will survey the proof environment Beluga. To specify formal systems and represent derivations within them, Beluga provides a sophisticated infrastructure based on the logical framework LF; to reason about formal systems, Beluga provides a dependently typed functional language for implementing inductive proofs about derivation trees as recursive functions following the Curry-Howard isomorphism. Key to this approach is the ability to model derivation trees that depend on a context of assumptions using a generalization of the logical framework LF, i.e. contextual LF which supports first-class contexts and simultaneous substitutions.
Our experience has demonstrated that Beluga enables direct and compact mechanizations of the meta-theory of formal systems, in particular programming languages and logics. To demonstrate Beluga's strength in this talk, we develop a weak normalization proof using logical relations.
PHIL SCOTT, University of Ottawa
AF Inverse Monoids and the Coordinatization of MV-algebras [PDF]
MV-algebras are the algebras associated to many-valued logics, analogous to the way that Boolean algebras are associated to classical propositional logics. In the mid-1980's, D. Mundici established an equivalence between the category of MV-algebras and the category of $\ell$-groups (with archimedean order unit). This restricts to a correspondence between countable MV-algebras and AF C*-algebras with lattice-ordered $K_0$ group. We introduce a class of AF inverse Boolean monoids and prove a coordinatization theorem (in the spirit of von Neumann's Continuous Geometry). This theorem states that every countable MV-algebra can be coordinatized (i.e. is isomorphic to the lattice of principal ideals of some AF Boolean inverse monoid). Techniques involve use of Bratteli diagrams and colimits of semisimple inverse monoids. We shall illustrate this in the case of certain specific AF C*-algebras (e.g. the CAR algebra of a Fermi gas). If time permits, we will mention related work, e.g. relations with Effect Algebras (B. Jacobs) and a general coordinatization theorem in recent work of F. Wehrung. [Joint work with Mark Lawson (Heriot-Watt)]
ROBERT SEELY, McGill University \& John Abbott College
Two categorical approaches to differentiation [PDF]
In the past decade, we (coauthors Rick Blute, Robin Cockett and I) have formulated two different abstract categorical approaches to differential calculus, based on the structure of linear logic (an idea of Ehrhard and Regnier). The basic idea has two types of maps (``analytic'' or ``smooth'', and ``linear''), a comonad $S$ (a ``coalgebra modality''), somewhat like the {\bf!} of linear logic, and a differentiation operator. In our first approach ({\bf monoidal differential categories}), the coKleisli category (the category of cofree coalgebras) of $S$ consists of smooth maps, and differentiation operates on coKleisli maps to smoothly produce linear maps. Our second approach ({\bf Cartesian differential categories}) reversed this orientation, directly characterizing the smooth maps and situating the linear maps as a subcategory. If $S$ is a ``storage modality'', meaning essentially that the ``exponential isomorphisms'' from linear logic ($S(X\times Y)\simeq S(X)\otimes S(Y)$ and $S(1)\simeq S(\top)$) hold, we get a tight connection between these approaches in the Cartesian (monoidal) closed cases: the linear maps of a Cartesian closed differential storage category form a monoidal closed differential storage category, and the coKleisli category of a monoidal closed differential storage category is a Cartesian closed differential storage category. Two technical aides in proving these results are the development of a graphical calculus as well as a term calculus for the maps of these categories. With the term calculus, one can construct arguments using a language similar to that of ordinary undergraduate calculus.
PETER SELINGER, Dalhousie University
Number-theoretic methods in quantum computing [PDF]
An important problem in quantum computation is the so-called approximate synthesis problem: to find a circuit, preferably as short as possible, that approximates a given unitary operator up to a given $\epsilon$. For nearly two decades, the standard solution to this problem was the Solovay-Kitaev algorithm, which is based on geometric ideas. This algorithm produces circuits of size $O(\log^c(1/\epsilon))$, where $c$ is approximately 3.97. It was a long-standing open problem whether this exponent $c$ could be reduced to 1.
In this talk, I will report on a new class of number-theoretic algorithms that achieve circuit size $O(\log(1/\epsilon))$, thereby answering the above question positively. In certain important cases, such as the commonly used Clifford+$T$ gate set, one can even find algorithms that are optimal in an absolute sense: the algorithm finds the shortest circuit whatsoever for the given problem instance. This is joint work with Neil J. Ross.
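To get a rough feel for the gap between the two bounds quoted above, one can compare the scaling terms $\log^{3.97}(1/\epsilon)$ and $\log(1/\epsilon)$ directly; the constants hidden in the $O(\cdot)$ notation are ignored, so this is only an order-of-magnitude illustration:

```python
import math

def solovay_kitaev_size(eps, c=3.97):
    """Scaling term log^c(1/eps) of the Solovay-Kitaev bound (constants dropped)."""
    return math.log(1.0 / eps) ** c

def number_theoretic_size(eps):
    """Scaling term log(1/eps) of the number-theoretic bound (constants dropped)."""
    return math.log(1.0 / eps)

# The gap between the two scalings grows quickly as the target precision tightens.
ratios = {eps: solovay_kitaev_size(eps) / number_theoretic_size(eps)
          for eps in (1e-3, 1e-6, 1e-9)}
```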
DAVID THIBODEAU, McGill University
Programming Infinite Structures Using Copatterns [PDF]
Infinite structures are an integral part of computer science as they serve to represent concepts such as constantly running devices and processes or data communication streams. Due to their importance, it is crucial for programming languages to be equipped with adequate means to encode and reason about them. This talk investigates the recent idea of copatterns, a device to represent corecursive datatypes, and thus infinite structures, dually to the usual definitions of recursive datatypes which encode finite data. While the latter defines and analyzes data via constructors and pattern matching, respectively, copatterns bring to type theory a definition of corecursion using observations and copattern matching.
FRANK VAN BREUGEL, York University
Behavioural Distances and Simple Stochastic Games [PDF]
Behavioural pseudometrics map each pair of states of a model to a number in the unit interval. The smaller that number, the more alike the states behave. By identifying states that are close to each other, we obtain a smaller model that is easier to analyze.
Several algorithms to compute behavioural pseudometrics have been developed. In this talk, we focus on the algorithm proposed by Bacci, Bacci, Larsen and Mardare in 2013. We show that the algorithm can be viewed as a strategy improvement algorithm of a simple stochastic game. As a consequence, the correctness of the algorithm follows from the fact that strategy improvement leads to an optimal strategy for simple stochastic games.
This is joint work with Norm Ferns and Qiyi Tang. | CommonCrawl |
\begin{document}
\sloppy \title[Hausdorff dimension of operator semistable L\'evy processes]{The Hausdorff dimension of\\ operator semistable L\'evy processes} \author{Peter Kern} \address{Peter Kern, Mathematisches Institut, Heinrich-Heine-Universit\"at D\"usseldorf, Universit\"atsstr. 1, D-40225 D\"usseldorf, Germany} \email{kern\@@{}math.uni-duesseldorf.de}
\author{Lina Wedrich} \address{Lina Wedrich, Fakult\"at f\"ur Wirtschaftswissenschaften, Universit\"at Duisburg-Essen, Campus Essen, Universit\"atsstr. 12, D-45141 Essen, Germany} \email{lina.wedrich\@@{}uni-due.de}
\date{\today}
\begin{abstract} Let $X=\{X(t)\}_{t\geq0}$ be an operator semistable L\'evy process in ${\mathbb R^d}$ with exponent $E$, where $E$ is an invertible linear operator on ${\mathbb R^d}$ and $X$ is semi-selfsimilar with respect to $E$. By refining arguments given in Meerschaert and Xiao \cite{MX} for the special case of an operator stable (selfsimilar) L\'evy process, for an arbitrary Borel set $B\subseteq{\mathbb R}_+$ we determine the Hausdorff dimension of the partial range $X(B)$ in terms of the real parts of the eigenvalues of $E$ and the Hausdorff dimension of $B$. \end{abstract}
\keywords{L\'evy process, operator semistable process, semi-selfsimilarity, sojourn time, range, Hausdorff dimension, positivity of density} \subjclass[2010]{Primary 60G51; Secondary 28A78, 28A80, 60G17, 60G52.}
\maketitle
\baselineskip=18pt
\section{Introduction}
Let $X=\{X(t)\}_{t\geq0}$ be a L\'evy process in ${\mathbb R^d}$, i.e. a stochastically continuous process with stationary and independent increments, starting in the origin $X(0)=0$ almost surely. Without loss of generality, we will assume that the process has c\`adl\`ag paths (right continuous with left limits). The distribution of the process on the space of c\`adl\`ag functions is uniquely determined by the distribution of $X(1)$ which can be an arbitrary infinitely divisible distribution. We will always assume that the distribution of $X(1)$ is full, i.e. not supported on any lower dimensional hyperplane. The L\'evy process $X$ is called operator semistable if the distribution $\mu_1=P_{X(1)}$ is strictly operator semistable, i.e. $\mu_1$ is an infinitely divisible probability measure fulfilling \begin{equation}\label{sos} \mu_1^{\ast c}=c^E\mu_1\quad\text{ for some }c>1 \end{equation} and some linear operator $E$ on ${\mathbb R^d}$ called the exponent, where $c^E\mu_1(dx)=\mu_1(c^{-E}dx)$ denotes the image measure under the invertible linear operator $c^E=\sum_{n=0}^\infty\frac{(\log c)^n}{n!}\,E^n$. For details on operator semistable distributions we refer to \cite{Luc,Cho} and the monograph \cite{MS}. To be more precise, we call the L\'evy process $(c^E,c)$-operator semistable due to the space-time scaling \begin{equation}\label{soss} \{c^EX(t)\}_{t\geq0}\stackrel{\rm fd}{=}\{X(ct)\}_{t\geq0}\quad\text{ for some }c>1 \end{equation} which easily follows from \eqref{sos}, where $\stackrel{\rm fd}{=}$ denotes equality of all finite dimensional marginal distributions. The property \eqref{soss} is called strict operator semi-selfsimilarity and one can equivalently introduce an operator semistable L\'evy process as a strictly operator semi-selfsimilar L\'evy process. 
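To illustrate \eqref{soss} in the simplest setting, one may consider a diagonal exponent; this special case is included here purely for illustration and is not assumed in what follows.

```latex
% Special case: a diagonal exponent.
\[
E=\operatorname{diag}(a_1,\ldots,a_d)
\quad\Longrightarrow\quad
c^E=\sum_{n=0}^\infty\frac{(\log c)^n}{n!}\,E^n
   =\operatorname{diag}\bigl(c^{a_1},\ldots,c^{a_d}\bigr),
\]
% so \eqref{soss} states that the rescaled process
% $\{(c^{a_1}X_1(t),\ldots,c^{a_d}X_d(t))\}_{t\geq0}$ has the same finite
% dimensional distributions as $\{X(ct)\}_{t\geq0}$.
```

In general $E$ need not be diagonal, and the coordinates of $X$ then mix under the scaling $c^E$.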
It is well known that for a given operator semistable L\'evy process $X$ the exponent $E$ is not unique, but the real parts of the eigenvalues of every possible exponent are the same, including their multiplicity; see \cite{MS}.
In case \eqref{sos} or, equivalently, \eqref{soss} is fulfilled for every $c>0$ the L\'evy process is called operator stable, respectively strictly operator selfsimilar, with exponent $E$. In the last decades efforts have been made to calculate the Hausdorff dimension of the range $X([0,1])$ for an operator stable L\'evy process $X$. For a survey on general dimension results for L\'evy processes we refer to \cite{Xiao, KX}. If $X$ is an $\alpha$-stable L\'evy process in ${\mathbb R^d}$ for some $\alpha\in(0,2]$, i.e. the exponent is a multiple of the identity $E=\frac1\alpha\cdot I$, Blumenthal and Getoor \cite{BG} show that the Hausdorff dimension of the range is $\dim_{\rm H}X([0,1])=\min(\alpha,d)$ almost surely. Pruitt and Taylor \cite{PT} calculate $\dim_{\rm H}X([0,1])$ for a L\'evy process in ${\mathbb R^d}$ with independent stable marginals of index $\alpha_1\geq\cdots\geq\alpha_d$. Here, $\dim_{\rm H}X([0,1])=\alpha_1$ almost surely if $\alpha_1\leq1$ or $\alpha_1=\alpha_2$ and in all other cases $\dim_{\rm H}X([0,1])=1+\alpha_2(1-\alpha_1^{-1})\in(\alpha_2,\alpha_1)$ almost surely. In this case $E$ is a diagonal operator with $\alpha_1^{-1},\ldots,\alpha_d^{-1}$ on the diagonal in a certain order. Later, based on results of Pruitt \cite{Pru}, Becker-Kern, Meerschaert and Scheffler \cite{BMS} showed that for more general operator stable L\'evy processes the formulas of Pruitt and Taylor remain valid without the assumption of independent stable marginals, where $\alpha_1,\ldots,\alpha_d$ have to be interpreted as the reciprocals of the real parts of the eigenvalues of the exponent $E$. Their result does not cover the full class of operator stable L\'evy processes, since in case $\alpha_1>\min(1,\alpha_2)$ it is required that the density of $X(1)$ is positive at the origin. Finally, Meerschaert and Xiao \cite{MX} show that the restriction on the density is superfluous. 
In addition they calculate the Hausdorff dimension of the partial range $\dim_{\rm H}X(B)$ for an arbitrary operator stable L\'evy process $X$ and an arbitrary Borel set $B\subseteq{\mathbb R}_+$ in terms of the real parts of the eigenvalues of the exponent $E$ and the Hausdorff dimension of $B$, namely \begin{equation}\label{MXmain} \dim_{\rm H}X(B)=\begin{cases}\alpha_1\dim_{\rm H}B & \text{ if } \alpha_1\dim_{\rm H}B\leq1\text{ or }\alpha_1=\alpha_2,\\ 1+\alpha_2\big(\dim_{\rm H}B -\frac1{\alpha_1}\big) & \text{ otherwise.} \end{cases} \end{equation}
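For illustration, consider the hypothetical values $d=2$, $\alpha_1=\frac95$, $\alpha_2=\frac35$ and $B=[0,1]$, so that $\dim_{\rm H}B=1$. Then $\alpha_1\dim_{\rm H}B=\frac95>1$ and $\alpha_1\neq\alpha_2$, so the second case of \eqref{MXmain} applies and
$$\dim_{\rm H}X(B)=1+\frac35\Big(1-\frac59\Big)=\frac{19}{15},$$
a value lying strictly between $\alpha_2$ and $\alpha_1$.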
Since operator semistable L\'evy processes require the space-time scaling property to hold only on a discrete scale, they allow more flexibility in modeling. The most prominent example of a semistable, non-stable distribution is perhaps the limit distribution of cumulative gains in a series of St.~Petersburg games. Our aim is to generalize the above dimension results to the larger class of operator semistable L\'evy processes, following the outline given by \cite{MX}. We will prove that \eqref{MXmain} remains valid for operator semistable L\'evy processes, but our methods go beyond simple adjustments of the arguments given in \cite{MX}. To the best of our knowledge, our result is the first dimension result for L\'evy processes with a scaling or selfsimilarity property on a discrete scale. In contrast, for deterministic selfsimilar sets on a discrete scale, numerous examples of Hausdorff and other fractal dimension calculations exist in the literature, e.g. for Cantor sets or Sierpinski gaskets.
The paper is organized as follows. In section 2.1 we recall the definitions of Hausdorff and capacitary dimension and their relationship. We further recall a spectral decomposition result from \cite{MS} in section 2.2, which enables us to decompose the operator semistable L\'evy process according to the distinct real parts of the eigenvalues of the exponent $E$. In preparation for the proof of our main results, in section 2.3 certain uniform density bounds for $\{X(t)\}_{t\in[1,c)}$ are given and a certain positivity set for the densities is constructed. These will be needed to obtain sharp lower bounds for the expected sojourn times of operator semistable L\'evy processes in a closed ball in section 2.4. Note that the characterization of the positivity set of densities is still an open problem even for operator stable densities. In the special case of an $\alpha$-stable L\'evy process with exponent $E=\frac1\alpha\cdot I$ the problem is completely solved in a series of papers \cite{Tay,Port,PV,ARRS}. A certain extension for $\alpha$-semistable L\'evy processes can be found in section 3 of \cite{SW}. Finally, in section 3 we state our main results on the Hausdorff dimension of operator semistable sample paths, including the proofs.
Throughout this paper $K$ denotes an unspecified positive and finite constant which may vary in each occurrence. Specified constants will be denoted by $K_1,K_2$, etc.
\section{Preliminaries}
\subsection{Hausdorff and capacitary dimension}
For an arbitrary subset $A\subseteq{\mathbb R^d}$ and $s\geq0$ the $s$-dimensional Hausdorff measure is defined by \begin{equation}\label{sdimH}
\mathcal{H}^{s}(A)=\lim_{\varepsilon\downarrow0}\inf\left\{\sum_{i=1}^{\infty}|A_{i}|^{s}:\,A\subseteq\bigcup_{i=1}^{\infty}A_{i},\; 0<|A_{i}|\leq \varepsilon\right\}, \end{equation}
where $|A|=\sup\{\|x-y\|:\,x,y\in A\}$ denotes the diameter of $A\subseteq{\mathbb R^d}$. The sequence of sets $\{A_{i}\}_{i\geq1}$ fulfilling the conditions on the right-hand side of \eqref{sdimH} is called an $\varepsilon$-covering of $A$. It can be shown that $\mathcal{H}^{s}$ is a metric outer measure on ${\mathbb R^d}$ and there exists a unique value $\dim_{\rm H}A\geq0$ such that $\mathcal{H}^{s}(A)=\infty$ if $0\leq s<\dim_{\rm H}A$ and $\mathcal{H}^{s}(A)=0$ if $\dim_{\rm H}A<s<\infty$; e.g., see \cite{Fal1,Fal2}. The critical value \begin{equation}\label{Hdim} \dim_{\rm H}A=\inf\{s>0:\,\mathcal{H}^{s}(A)=0\}=\sup\{s>0:\,\mathcal{H}^{s}(A)=\infty\} \end{equation} is called the Hausdorff dimension of $A$.
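A standard example in the spirit of the discrete-scale selfsimilarity mentioned in the introduction is the middle-thirds Cantor set $C\subseteq[0,1]$: covering $C$ by the $2^n$ intervals of length $3^{-n}$ from the $n$-th construction step shows $\mathcal{H}^s(C)\leq\lim_{n\to\infty}2^n3^{-ns}=0$ for every $s>\log2/\log3$, hence $\dim_{\rm H}C\leq\log2/\log3$; together with a matching lower bound one obtains $\dim_{\rm H}C=\log2/\log3$.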
Now let $A\subseteq{\mathbb R^d}$ be a Borel set and denote by $\mathcal M^1(A)$ the set of probability measures on $A$. For $s>0$ the $s$-energy of $\mu\in\mathcal M^1(A)$ is defined by
$$I_s(\mu)=\int_A\int_A\frac{\mu(dx)\mu(dy)}{\|x-y\|^s}.$$ By Frostman's lemma, e.g., see \cite{Kah,Mat}, there exists a probability measure $\mu\in\mathcal M^1(A)$ with $I_s(\mu)<\infty$ if $\dim_{\rm H}A>s$. In this case $A$ is said to have positive $s$-capacity $C_s(A)$ given by $$C_s(A)=\sup\{I_s(\mu)^{-1}:\,\mu\in\mathcal M^1(A)\}$$ and the capacitary dimension of $A$ is defined by $$\dim_{\rm C}A=\sup\{s>0:\,C_s(A)>0\}=\inf\{s>0:\,C_s(A)=0\}.$$ A consequence of Frostman's theorem, e.g., see \cite{Kah,Mat}, is that for Borel sets $A\subseteq{\mathbb R^d}$ the Hausdorff and capacitary dimension coincide. Therefore, one can prove lower bounds for the Hausdorff dimension with a simple capacity argument: if $I_s(\mu)<\infty$ for some $\mu\in\mathcal M^1(A)$ then $\dim_{\rm H}A=\dim_{\rm C}A\geq s$.
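For instance, for $A=[0,1]$ and $\mu$ the restriction of Lebesgue measure one computes
$$I_s(\mu)=\int_0^1\int_0^1\frac{dx\,dy}{|x-y|^s}=\frac{2}{(1-s)(2-s)}<\infty\quad\text{ for every }s\in(0,1),$$
so that $\dim_{\rm H}[0,1]\geq s$ for all $s<1$, recovering $\dim_{\rm H}[0,1]=1$.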
\subsection{Spectral decomposition}
Let $\{X(t)\}_{t\geq0}$ be a $(c^{E},c)$-operator semistable L\'evy process in ${\mathbb R^d}$. Factor the minimal polynomial of $E$ into $f_{1}(x)\cdot\ldots\cdot f_{p}(x)$ such that every root of $f_{j}$ has real part $a_{j}$, where $a_1<\cdots<a_p$ are the distinct real parts of the eigenvalues of $E$ and $a_1\geq\frac12$ by Theorem 7.1.10 in \cite{MS}. According to Theorem 2.1.14 in \cite{MS} we can decompose ${\mathbb R^d}$ into a direct sum ${\mathbb R^d}=V_{1}\oplus\ldots\oplus V_{p}$, where $V_{j}=\operatorname{Ker}(f_{j}(E))$ are $E$-invariant subspaces. Now, in an appropriate basis, $E$ can be represented as a block-diagonal matrix $E=E_{1}\oplus\ldots\oplus E_{p}$, where $E_{j}:V_{j}\rightarrow V_{j}$ and every eigenvalue of $E_{j}$ has real part $a_{j}$. In particular, every $V_j$ is an $E_j$-invariant subspace of dimension $d_j=\dim V_j$. Now we can write $x=x_1+\cdots+x_p\in{\mathbb R^d}$ and $t^Ex=t^{E_1}x_1+\cdots+t^{E_p}x_p$ with respect to this direct sum decomposition, where $x_j\in V_j$ and $t>0$. Moreover, for the operator semistable L\'evy process we have $X(t)=X^{(1)}(t)+\ldots+X^{(p)}(t)$ with respect to this direct sum decomposition, where $\{X^{(j)}(t)\}_{t\geq0}$ is a $(c^{E_{j}},c)$-operator semistable L\'evy process on $V_j\cong {\mathbb R}^{d_j}$ by Lemma 7.1.17 in \cite{MS}. We can further choose an inner product on ${\mathbb R^d}$ such that the subspaces $V_j$, $1\leq j\leq p$, are mutually orthogonal and throughout this paper for $x\in{\mathbb R^d}$ we may choose $\|x\|=\langle x,x\rangle^{1/2}$ as the associated Euclidean norm on ${\mathbb R^d}$. With this choice, in particular we have for $t=c^rm>0$ \begin{equation}
\|X(t)\|^2\stackrel{\rm d}{=}\|c^{rE}X(m)\|^{2}=\|c^{rE_{1}}X^{(1)}(m)\|^{2}+\ldots+\|c^{rE_{p}}X^{(p)}(m)\|^{2}, \end{equation} with $r\in\mathbb{Z}$ and $m\in[1,c)$. The following result on the growth behavior of the exponential operators $t^{E_j}$ near the origin $t=0$ is a reformulation of Lemma 2.1 in \cite{MX} and a direct consequence of Corollary 2.2.5 in \cite{MS}.
\begin{lemma}\label{specbound} For every $j=1,\ldots,p$ and every $\varepsilon>0$ there exists a finite constant $K\geq 1$ such that for all $0<t\leq1$ we have \begin{equation}
K^{-1}t^{a_{j}+\varepsilon}\leq \|t^{E_{j}}\|\leq K\,t^{a_{j}-\varepsilon} \end{equation} and \begin{equation}
K^{-1}t^{-(a_{j}-\varepsilon)}\leq \|t^{-E_{j}}\|\leq K\,t^{-(a_{j}+\varepsilon)}. \end{equation} \end{lemma}
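To see that the $\varepsilon$ in Lemma \ref{specbound} cannot be dropped in general, consider a single Jordan block $E_j=a_jI+N$ on ${\mathbb R}^2$ with nilpotent part $N=\left(\begin{smallmatrix}0&1\\0&0\end{smallmatrix}\right)$. Then
$$t^{E_j}=e^{(\log t)E_j}=t^{a_j}\begin{pmatrix}1&\log t\\0&1\end{pmatrix},$$
so that $\|t^{E_j}\|$ behaves like $t^{a_j}|\log t|$ as $t\downarrow0$, which is captured by the bounds $K^{-1}t^{a_j+\varepsilon}$ and $K\,t^{a_j-\varepsilon}$ for every $\varepsilon>0$, but not by the pure power $t^{a_j}$.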
Throughout this paper let $\alpha_j=1/a_j$ denote the reciprocals of the distinct real parts of the eigenvalues of $E$ with $0<\alpha_p<\cdots<\alpha_1\leq2$.
\subsection{Density bounds}
Let $X=\{X(t)\}_{t\geq0}$ be an operator semistable L\'evy process in ${\mathbb R^d}$ with $P_{X(t)}=\mu_t$ for $t>0$. It is well known that integrability properties of the Fourier transform $\widehat{\mu_t}$ imply the existence and certain smoothness properties of a Lebesgue density of $\mu_t$. In fact, $|\widehat{\mu_t}|$ has at least exponential decay in radial directions for every $t>0$, i.e. \begin{equation}\label{expdecay}
\left|\widehat{\mu_t}(x)\right|=\left|\widehat{\mu_1}(x)\right|^t\leq\exp\left(-tK\|x\|^{1/m}\right)\quad\text{ if }\|x\|>M, \end{equation} where $m\in{\mathbb N}$, $M>0$, $K>0$ are certain constants not depending on $t$. For an operator semistable L\'evy process without Gaussian component (i.e. $\alpha_1<2$) this follows directly from equation (2.4) in \cite{Luc}. In case $\alpha_1=2$ the spectral component $X^{(1)}(t)$ has a centered Gaussian distribution with positive definite covariance matrix $\Sigma=R^\top R$ according to fullness. Hence \begin{align*}
\widehat{P_{X^{(1)}(t)}}(x_1) & =\exp\left(-\frac12\, t\|Rx_1\|^2\right)\leq\exp\left(-\frac1{2\|R^{-1}\|^2}\, t\|x_1\|^2\right)\\
& =\exp\left(-t\,C_1\|x_1\|^2\right). \end{align*}
By the L\'evy-Khintchine representation, $X^{(1)}(t)$ is independent of $X^{(2)}(t)+\cdots+X^{(p)}(t)$ and together with equation (2.4) in \cite{Luc} we get for $\|x\|>M\geq1$ \begin{align*}
\left|\widehat{\mu_t}(x)\right| & \leq\exp\left(-t\,C_1\|x_1\|^2\right)\cdot\exp\left(-t\,C_2\|x_2+\cdots+x_p\|^{1/m}\right)\\
& \leq\exp\left(-t\,C_1(\|x_1\|^2)^{1/(2m)}\right)\cdot\exp\left(-t\,C_2\Big(\sum_{j=2}^p\|x_j\|^2\Big)^{1/(2m)}\right)\\
& \leq\exp\left(-tK\|x\|^{1/m}\right), \end{align*}
where $K=\min(C_1,C_2)$. Thus we have also shown \eqref{expdecay} in case $X(t)$ has a Gaussian component. According to Proposition 28.1 in \cite{Sat}, for every $t>0$ the random vector $X(t)$ has a Lebesgue density $x\mapsto g_t(x)$ of class $C^\infty({\mathbb R^d})$ and $g_t(x)\to0$ as $\|x\|\to\infty$. We will additionally need certain uniformity results for the densities.
\begin{lemma}\label{densbound} The mapping $(t,x)\mapsto g_t(x)$ is continuous on $(0,\infty)\times{\mathbb R^d}$ and we have \begin{equation}\label{supsup}
\sup_{t\in[1,c)}\sup_{x\in{\mathbb R^d}}\left|g_t(x)\right|<\infty. \end{equation} \end{lemma}
\begin{proof} For any sequence $(t_n,x_n)\to(t,x)$ in $(0,\infty)\times{\mathbb R^d}$ by Fourier inversion and dominated convergence we have $$g_{t_n}(x_n)=(2\pi)^{-d}\int_{{\mathbb R^d}}e^{-i\langle x_n,y\rangle}\widehat{\mu_{t_n}}(y)\,d\lambda^d(y)\to(2\pi)^{-d}\int_{{\mathbb R^d}}e^{-i\langle x,y\rangle}\widehat{\mu_{t}}(y)\,d\lambda^d(y)=g_t(x),$$
where $\lambda^d$ denotes Lebesgue measure on ${\mathbb R^d}$. This shows continuity of $(t,x)\mapsto g_t(x)$. Moreover, $\|g_t\|_\infty=\sup_{x\in{\mathbb R^d}}\left|g_t(x)\right|$ is continuous in $t>0$, hence \eqref{supsup} follows. \end{proof}
Consequently, we get a refinement of Lemma 3.1 in \cite{BMS} on the existence of negative moments of an operator semistable L\'evy process $X=\{X(t)\}_{t\geq0}$ in ${\mathbb R^d}$.
\begin{lemma}\label{negmom} For any $\delta\in(0,d)$ we have
$$\sup_{t\in[1,c)}{\mathbb E}\left[\|X(t)\|^{-\delta}\right]<\infty.$$ \end{lemma}
\begin{proof}
Let $g_t$ be as before and define $K=\sup_{t\in[1,c)}\sup_{x\in{\mathbb R^d}}\left|g_t(x)\right|$, then $K<\infty$ by Lemma \ref{densbound}. In view of $\delta<d$ we have for every $t\in[1,c)$ \begin{align*}
{\mathbb E}\left[\|X(t)\|^{-\delta}\right] & =\int_{\mathbb R^d}\|x\|^{-\delta}g_t(x)\,dx\\
& \leq K\int_{\{\|x\|\leq 1\}}\|x\|^{-\delta}\,dx+\int_{\{\|x\|>1\}}g_t(x)\,dx\\
& \leq K\int_{\{\|x\|\leq 1\}}\|x\|^{-\delta}\,dx+1<\infty. \end{align*} Since this upper bound is independent of $t\in[1,c)$, the assertion follows. \end{proof}
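Note that the finiteness of the remaining integral in the above proof is elementary: in polar coordinates,
$$\int_{\{\|x\|\leq1\}}\|x\|^{-\delta}\,dx=\sigma_{d-1}\int_0^1r^{d-1-\delta}\,dr=\frac{\sigma_{d-1}}{d-\delta}<\infty$$
for $\delta<d$, where $\sigma_{d-1}$ denotes the surface measure of the unit sphere in ${\mathbb R^d}$.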
By a result of Sharpe \cite{Sha}, for a one-dimensional $(c^{1/\alpha},c)$-semistable L\'evy process we can further deduce from Lemma \ref{densbound} that the positivity set $A_t=\{x\in{\mathbb R}:\,g_t(x)>0\}$ is either the whole real line ${\mathbb R}$ or a half line $(at,\infty)$ or $(-\infty,at)$ for some $a\in{\mathbb R}$ and for all $t>0$. We will now use an argument similar to the one given on page 83 in \cite{ARRS} to show that in case $\alpha>1$ we have $g_t(0)>0$. If $A_t={\mathbb R}$ there is nothing to prove. Suppose that $A_t=(at,\infty)$ for some $a\geq0$. Let $(Y_n)_{n\in{\mathbb N}}$ be an i.i.d. sequence with $Y_1\stackrel{\rm d}{=}X(t)$. Since $\alpha>1$ we have ${\mathbb E}[|Y_1|]<\infty$ and from the strong law of large numbers it follows that for every sequence of positive integers $k_n\to\infty$ we have \begin{equation}\label{ndoa} k_n^{-1/\alpha}\sum_{j=1}^{k_n}Y_j\geq k_n^{-1/\alpha}\sum_{j=1}^{\lfloor k_n^{1/\alpha}\rfloor}Y_j\to{\mathbb E}[Y_1]={\mathbb E}[X(t)] \end{equation} almost surely. On the other hand, since $X(t)$ belongs to its own domain of normal attraction, for $k_n=\lfloor c^n\rfloor$ the left-hand side of \eqref{ndoa} converges in distribution to $X(t)$. It follows that $X(t)\geq{\mathbb E}[X(t)]$ almost surely, thus $X(t)={\mathbb E}[X(t)]$ almost surely in contradiction to the fullness of $X(t)$. Hence we must have $a<0$, which implies $g_t(0)>0$. Similarly, the assumption $A_t=(-\infty,at)$ for some $a\leq0$ leads to $X(t)\leq{\mathbb E}[X(t)]$ almost surely and again contradicts the fullness of $X(t)$, hence $a>0$, which again implies $g_t(0)>0$. Altogether we have shown that a bounded continuous density of a $(c^{1/\alpha},c)$-semistable L\'evy process with $\alpha>1$ is of type A; cf. Taylor \cite{Tay}. In the sequel we will need a more general positivity result for a bounded continuous density of certain operator semistable L\'evy processes.
\begin{lemma}\label{positive} Let $\{X(t)\}_{t\geq0}$ be an operator semistable L\'evy process with $\alpha_1>1$, $d_1=1$ and with density $g_t$ as above. Then there exist constants $K>0$, $r>0$ and uniformly bounded Borel sets $J_t\subseteq{\mathbb R}^{d-1}\cong V_2\oplus\cdots\oplus V_p$ for $t\in[1,c)$ such that $$g_t(x_1,\ldots,x_p)\geq K>0\quad\text{ for all }(x_1,\ldots,x_p)\in[-r,r]\times J_t.$$ Further, we can choose $\{J_t\}_{t\in[1,c)}$ such that $\lambda^{d-1}(J_t)\geq R>0$ for every $t\in[1,c)$. Note that the constants $K,r$ and $R$ do not depend on $t\in[1,c)$. \end{lemma}
\begin{proof} As argued above, $(t, x_1)\mapsto g_t(x_1)=\int_{{\mathbb R}^{d-1}}g_t(x_1,\ldots,x_p)\,d\lambda^{d-1}(x_2,\ldots,x_p)$ is continuous and positive in $x_1=0$ for every $t>0$, hence $\min_{t\in[1,c]}g_t(x_1)>0$ for $x_1=0$. Choose $\delta>0$ and $r>0$ such that $g_t(x_1)\geq\delta$ for every $x_1\in[-r,r]$ and $t\in[1,c]$. We will now show that we can choose $K\in(0,\delta)$ and $R>0$ such that for every $t\in[1,c)$ the Borel set $$J_t=\left\{(x_2,\ldots,x_p)\in{\mathbb R}^{d-1}:\,g_t(x_1,\ldots,x_p)\geq K\text{ for every }x_1\in[-r,r]\right\}$$ fulfills $\lambda^{d-1}(J_t)\geq R$. Assume this choice is not possible. Then for every $K\in(0,\delta)$ and $R>0$ there exists $t=t(K,R)\in[1,c)$ such that $\lambda^{d-1}(J_t)<R$. Letting $K\downarrow0$ and $R\downarrow0$, there exists a subsequence such that $t(K,R)\to t_0\in[1,c]$ along this subsequence and we have $g_{t_0}(x_1,\ldots,x_p)=0$ for some $x_1\in[-r,r]$ and Lebesgue almost every $(x_2,\ldots,x_p)\in{\mathbb R}^{d-1}$. It follows that $g_{t_0}(x_1)=0$ in contradiction to $g_{t_0}(x_1)\geq\delta$. It remains to prove that $\{J_t\}_{t\in[1,c)}$ is uniformly bounded. First note that by Fourier inversion for $t_n\to t>0$ we have \begin{align*}
\left|g_{t_n}(x)-g_t(x)\right| & =(2\pi)^{-d}\left|\int_{{\mathbb R^d}}e^{-i\langle x,y\rangle}\left(\widehat\mu(y)^{t_n}-\widehat\mu(y)^{t}\right)\,d\lambda^d(y)\right|\\
& \leq (2\pi)^{-d}\int_{{\mathbb R^d}}\left|1-\widehat\mu(y)^{|t_n-t|}\right|\,d\lambda^d(y)\to0 \end{align*}
uniformly in $x\in{\mathbb R^d}$, since the upper bound does not depend on $x$. Now assume that $\{J_t\}_{t\in[1,c)}$ is not uniformly bounded. Then for every $n\in{\mathbb N}$ there exists $t_n\in[1,c)$ such that for some $(x_2^{(n)},\ldots,x_p^{(n)})\in{\mathbb R}^{d-1}$ with $\|(x_2^{(n)},\ldots,x_p^{(n)})\|\geq n$ we have $$g_{t_n}\big(x_1,x_2^{(n)},\ldots,x_p^{(n)}\big)\geq K\quad\text{ for every }x_1\in[-r,r].$$
Now choose a subsequence $t_n\to t_0\in[1,c]$ and choose $n\in{\mathbb N}$ large enough so that $\left|g_{t_n}(x)-g_{t_0}(x)\right|\leq K/2$ for every $x\in{\mathbb R^d}$. Then we get along this subsequence $$g_{t_0}\big(0,x_2^{(n)},\ldots,x_p^{(n)}\big)\geq K/2$$
which contradicts $g_{t_0}(x)\to0$ for $\|x\|\to \infty$ and concludes the proof. \end{proof}
\subsection{Bounds for the sojourn time}
Let $K_{1}>0$ be a fixed constant. A family $\Lambda(a)$ of cubes of side $a$ in ${\mathbb R^d}$ is called $K_1$-nested if no ball of radius $a$ in ${\mathbb R^d}$ can intersect more than $K_{1}$ cubes of $\Lambda(a)$. In the sequel we will choose $\Lambda(a)$ to be the family of all cubes in ${\mathbb R^d}$ of the form $[k_{1}a, (k_{1}+1)a]\times\cdots\times[k_{d}a, (k_{d}+1)a]$ with $(k_{1},\ldots,k_{d})\in\mathbb{Z}^{d}$. Obviously, this family $\Lambda(a)$ is $3^d$-nested. Let \begin{align*}
T(a,s)=\int_{0}^{s}1_{B(0,a)}(X(t))\,dt \end{align*} be the sojourn time of the L\'evy process $X=\{X(t)\}_{t\geq0}$ up to time $s>0$ in the closed ball $B(0,a)$ with radius $a$ centered at the origin. The following remarkable covering lemma is due to Pruitt and Taylor \cite[Lemma 6.1]{PT}.
\begin{lemma}\label{PT} Let $X=\{X(t)\}_{t\geq0}$ be a L\'evy process in ${\mathbb R^d}$ and let $\Lambda(a)$ be a fixed $K_{1}$-nested family of cubes in ${\mathbb R^d}$ of side $a$ with $0<a\leq1$. For any $u\geq0$ let $M_{u}(a,s)$ be the number of cubes in $\Lambda(a)$ hit by $X(t)$ at some time $t\in [u,u+s]$. Then $${\mathbb E}\left[M_{u}(a,s)\right]\leq 2\,K_{1}s\cdot\left({\mathbb E}\left[T\left(\tfrac{a}{3},s\right)\right]\right)^{-1}.$$ \end{lemma}
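To indicate how Lemma \ref{PT} will be applied: once a lower bound of the form ${\mathbb E}[T(\tfrac{a}{3},s)]\geq K\,a^{\rho}$ is available, as provided by Theorem \ref{sojournbounds} below, it yields
$${\mathbb E}\left[M_{u}(a,s)\right]\leq 2\,K_{1}s\,K^{-1}a^{-\rho},$$
i.e. on average at most of order $a^{-\rho}$ cubes of side $a$ are hit during $[u,u+s]$. Covering the partial range by these cubes then makes $\sum_i|A_i|^\gamma$ of order $a^{\gamma-\rho}\to0$ for $\gamma>\rho$, which is the standard route to upper bounds for the Hausdorff dimension of the range.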
We now determine sharp upper and lower bounds for the expected sojourn times ${\mathbb E}[T(a,s)]$ of an operator semistable L\'evy process. Our proof follows the outline given in \cite[Lemma 3.4]{MX} for the special case of operator stable L\'evy processes, but in our more general situation the estimates are more delicate. Although we only need the lower bounds in this paper, for completeness we also include the upper bounds, which might be useful elsewhere, e.g. for studying exact Hausdorff measure functions. Recall the spectral decomposition of Section 2.2 for the constants $\alpha_1,\alpha_2$ and $d_1$ appearing in the following result.
\begin{theorem}\label{sojournbounds} Let $X=\{X(t)\}_{t\geq0}$ be an operator semistable L\'evy process in ${\mathbb R^d}$ with $d\geq2$. For any $0<\alpha_{2}'<\alpha_{2}<\alpha_{2}''<\alpha_{1}'<\alpha_{1}<\alpha_{1}''$ there exist positive and finite constants $K_{2},\ldots,K_{5}$ such that \begin{itemize}
\item[(i)] if $\alpha_{1} \leq d_{1}$, then for all $0<a\leq 1$ and $a^{\alpha_{1}}\leq s\leq1$ we have
\begin{equation*}
K_{2}a^{\alpha_{1}''}\leq{\mathbb E}[T(a,s)]\leq K_{3}a^{\alpha_{1}'}.
\end{equation*}
\item[(ii)] if $\alpha_{1}>d_{1}=1$, then for all $0<a\leq a_{0}$ with $a_{0}>0$ sufficiently small and all $a^{\alpha_{2}}\leq s \leq 1$ we have
\begin{equation*}
K_{4}a^{\rho''}\leq{\mathbb E}[T(a,s)]\leq K_{5}a^{\rho'},
\end{equation*}
where $\rho'=1+\alpha_{2}'(1-\frac{1}{\alpha_{1}})$ and $\rho''=1+\alpha_{2}''(1-\frac{1}{\alpha_{1}})$. \end{itemize} \end{theorem}
\begin{proof} (i) Assume $\alpha_{1}\leq d_{1}$ and let $\alpha_{1}'<\alpha_{1}$ be fixed. In particular, we have $d_1/\alpha_1'-1>0$. For $0<t\leq 1$ write $t=mc^{-i}$ with $m\in [1,c)$ and $i \in {\mathbb N}_{0}$, then by Lemma \ref{specbound} we have \begin{equation}\label{lbX1}
\|X^{(1)}(t)\|\stackrel{\rm d}{=} \|c^{-iE_1}X^{(1)}(m)\|\geq \|X^{(1)}(m)\|/\|c^{iE_{1}}\|\geq K^{-1}c^{-i/{\alpha_{1}'}}\|X^{(1)}(c^it)\|. \end{equation} For $0<a\leq1$ choose $i_{0},i_{1}\in {\mathbb N}_{0}$ such that $c^{-(i_{0}+1)} <a\leq c^{-i_{0}}$ and $c^{-(i_{1}+1)} <c^{-i_{0}\alpha_{1}'}\leq c^{-i_{1}}$. Since $X^{(1)}$ is a $(c^{E_{1}},c)$-operator semistable L\'evy process in ${\mathbb R}^{d_1}\cong V_1$, the spectral component $X^{(1)}(m)$ has a bounded and continuous density $g_{m}(x_1)$ for any $m\in[1,c)$ and by Lemma \ref{densbound} there exists \begin{equation}\label{K6}
K_6=\sup_{m\in [1,c)}\sup_{x_1\in{\mathbb R}^{d_1}}|g_{m}(x_1)|<\infty. \end{equation} Altogether, using \eqref{lbX1}, we observe \begin{align*}
{\mathbb E}[T(a,s)] & \leq \int_{0}^{1}P\left(\|X^{(1)}(t)\|<a\right)\,d t\leq\sum_{i=1}^{\infty}\int_{c^{-i}}^{c^{-i+1}}P\left(\|X^{(1)}(t)\|\leq c^{-i_{0}}\right)\,d t\\
& \leq \sum_{i=1}^{\infty}\int_{1}^{c}c^{-i}P\left(\|X^{(1)}(m)\|\leq K\,c^{i/\alpha_1'-i_{0}}\right)\,dm\\
&\leq\sum_{i=1}^{i_{1}+1}c^{-i}\int_{1}^{c}\int_{{\mathbb R}^{d_1}} 1_{\{\|x_1\|\leq K\,c^{i/\alpha_1'-i_{0}}\}}g_{m}(x_1)\,dx_1\,dm+\sum_{i=i_{1}+2}^{\infty}\int_{1}^{c}c^{-i}\,d m \\
& \leq\sum_{i=1}^{i_{1}+1}c^{-i}(c-1)(2K\,c^{i/\alpha_1'-i_{0}})^{d_1}K_6+\sum_{i=i_{1}+2}^{\infty}(c-1)c^{-i}\\
& \leq K\,c^{-i_0d_1}\frac{\left(c^{d_1/\alpha_1'-1}\right)^{i_1+2}-1}{c^{d_1/\alpha_1'-1}-1}+c^{-(i_1+1)}\\
& \leq K\,c^{-i_0d_1}(c^{-i_1})^{1-d_1/\alpha_1'}+c^{-i_0\alpha_1'}\leq K_3a^{\alpha_1'}, \end{align*} which gives the upper bound in part (i) for all $0<s\leq1$. To prove the lower bound, choose $\alpha_{j}''>0$ for $1\leq j\leq p$ such that $\alpha_{j}''>\alpha_{j}>\alpha_{j+1}''$. For $0<a\leq1$ and $a^{\alpha_1}\leq s\leq1$ choose $i_{0},i_{1},i_2\in {\mathbb N}_{0}$ such that $c^{-i_{0}} <a\leq c^{-i_{0}+1}$, $c^{-i_1}<s\leq c^{-i_{1}+1}$ and $c^{-(i_{2}+1)} <\left(c^{-i_{0}}\delta\right)^{\alpha_{1}''}\leq c^{-i_{2}}$, where $0<\delta\leq1$ will be chosen later. Note that $$c^{-i_{1}+1}\geq s\geq a^{\alpha_1}\geq a^{\alpha_1''}>c^{-i_0\alpha_1''}>\left(c^{-i_{0}}\delta\right)^{\alpha_{1}''}>c^{-(i_{2}+1)}$$ and hence $i_1-1\leq i_2+1$. Similar to \eqref{lbX1}, by Lemma \ref{specbound} we have \begin{equation}\begin{split}\label{ubXj}
\|X^{(j)}(t)\| & \stackrel{\rm d}{=} \|c^{-iE_j}X^{(j)}(m)\|\leq \|c^{-iE_j}\|\,\|X^{(j)}(m)\|\\
& \leq K\,c^{-i/{\alpha_{j}''}}\|X^{(j)}(c^it)\|\leq K\,c^{-i/{\alpha_{1}''}}\|X^{(j)}(c^it)\| \end{split}\end{equation} for all $j=1,\ldots,p$. Altogether, using \eqref{ubXj}, we observe \begin{align*}
{\mathbb E}[T(a,s)] & \geq \int_{0}^{s}P\left(\|X^{(j)}(t)\|<\frac{a}{\sqrt{p}},\;1\leq j\leq p\right)\,d t\\
& \geq \int_{0}^{c^{-i_{1}}}P\left(\|X^{(j)}(t)\|\leq\frac{c^{-i_{0}}}{\sqrt{p}},\;1\leq j\leq p\right)\,d t\\
& = \sum_{i=i_{1}-1}^{\infty}\int_{c^{-i}}^{c^{-i+1}}P\left(\|X^{(j)}(c^it)\|\leq K^{-1}\frac{c^{i/\alpha_1''-i_{0}}}{\sqrt{p}},\;1\leq j\leq p\right)\,d t\\
& \geq\sum_{i=i_{2}+1}^{\infty}c^{-i}\int_{1}^{c}P\left(\|X^{(j)}(m)\|\leq K^{-1}\frac{c^{(i_2+1)/\alpha_1''-i_{0}}}{\sqrt{p}},\;1\leq j\leq p\right)\,dm\\
& \geq\sum_{i=i_{2}+1}^{\infty}c^{-i}\int_{1}^{c}P\left(\|X^{(j)}(m)\|\leq \frac{K^{-1}}{\delta\sqrt{p}},\;1\leq j\leq p\right)\,dm. \end{align*}
Since $\{X^{(j)}(t)\}_{t\geq 0}$, $1\leq j\leq p$, are L\'evy processes, we can assume that they have c\`adl\`ag paths. Hence $\sup_{m\in [1,c)} \|X^{(j)}(m)\| = \sup_{m\in [1,c)\cap \mathbb{Q}} \|X^{(j)}(m)\|$, $1\leq j\leq p$, are random variables and thus
$$P\left(\sup_{m\in [1,c)} \|X^{(j)}(m)\| \leq \frac{K^{-1}}{\delta\sqrt{p}},\;1\leq j\leq p\right)\geq K_{7}>0,$$ if we choose $0<\delta\leq1$ sufficiently small. Consequently, \begin{align*} {\mathbb E}[T(a,s)] & \geq\sum_{i=i_{2}+1}^{\infty}c^{-i}\int_{1}^{c}K_7\,dm= K_{7} \sum_{i=i_{2}+1}^{\infty}c^{-i}(c-1)=K_7c^{-i_2}\\ & \geq K (c^{-i_0}\delta)^{\alpha_1''}=K(\delta/c)^{\alpha_1''}a^{\alpha_1''}=K_2a^{\alpha_1''} \end{align*} which proves the lower bound in part (i).
(ii) Now assume $\alpha_{1}>d_{1}=1$ and let $\alpha_{2}'<\alpha_{2}$ be fixed. Since $(X^{(1)}, X^{(2)})$ is a $(c^{E_{1}\oplus E_{2}},c)$-semistable L\'evy process in $\mathbb{R}^{d_{1}+d_{2}}\cong V_{1}\oplus V_{2}$, the spectral component $(X^{(1)}(m), X^{(2)}(m))$ has a bounded and continuous density $g_{m}(x_1,x_2)$ for any $m\in[1,c)$ and by Lemma \ref{densbound} there exists \begin{equation}\label{K8}
K_8=\sup_{m\in [1,c)}\sup_{(x_{1},x_{2})\in{\mathbb R}^{d_1+d_2}}|g_{m}(x_1,x_2)|<\infty. \end{equation} We will further use the constant $K_6$ defined by \eqref{K6} in part (i). For $0<a\leq1$ choose $i_{0},i_{1}\in {\mathbb N}_{0}$ such that $c^{-(i_{0}+1)} <a\leq c^{-i_{0}}$ and $c^{-(i_{1}+1)} <c^{-i_{0}\alpha_{2}'}\leq c^{-i_{1}}$. For $0<t\leq 1$ again write $t=mc^{-i}$ with $m\in [1,c)$ and $i \in {\mathbb N}_{0}$, then by Lemma \ref{specbound} we have \begin{equation}\label{lbX2}
\|X^{(2)}(t)\|\stackrel{\rm d}{=} \|c^{-iE_2}X^{(2)}(m)\|\geq \|X^{(2)}(m)\|/\|c^{iE_{2}}\|\geq K^{-1}c^{-i/{\alpha_{2}'}}\|X^{(2)}(c^it)\|. \end{equation} Altogether, using \eqref{lbX2}, we observe \begin{align*}
{\mathbb E}[T(a,s)] & \leq \int_{0}^{1} P\left(|X^{(1)}(t)|<a,\|X^{(2)}(t)\|<a\right)\,d t\\
& \leq \sum_{i=1}^{\infty}\int_{c^{-i}}^{c^{-i+1}}P\left(|X^{(1)}(c^it)|<c^{i/\alpha_1-i_{0}},\|X^{(2)}(c^it)\|<K\,c^{i/\alpha_2'-i_{0}}\right)\,d t\\
& \leq \sum_{i=1}^{i_1+1}c^{-i}\int_{1}^{c}P\left(|X^{(1)}(m)|<c^{i/\alpha_1-i_{0}},\|X^{(2)}(m)\|<K\,c^{i/\alpha_2'-i_{0}}\right)\,dm\\
& \phantom{\leq}+\sum_{i=i_1+2}^{\infty}c^{-i}\int_{1}^{c}P\left(|X^{(1)}(m)|<c^{i/\alpha_1-i_{0}}\right)\,dm\\
& =: I+I\!I. \end{align*} Note that for part $I$ we have $\alpha_{2}' < \alpha_{1} < 2$ and $d_{2}\geq 1$, hence $1-\frac{1}{\alpha_{1}}-\frac{d_{2}}{\alpha_{2}'}<0$ and it follows that \begin{align*} I & \leq\sum_{i=1}^{i_{1}+1}c^{-i}(c-1)K_8\,2\,c^{i/\alpha_1-i_{0}}\left(2c^{i/\alpha_2'-i_{0}}\right)^{d_{2}}\\
&\leq Kc^{-i_{0}(d_{2}+1)}\sum_{i=1}^{i_{1}+1}\left(c^{-i}\right)^{1-\frac{1}{\alpha_{1}}-\frac{d_{2}}{\alpha_{2}'}}
= Kc^{-i_{0}(d_{2}+1)}\left[\frac{\left(c^{-(i_{1}+2)}\right)^{1-\frac{1}{\alpha_{1}}-\frac{d_{2}}{\alpha_{2}'}}-1}{c^{\frac{1}{\alpha_{1}}+\frac{d_{2}}{\alpha_{2}'}-1}-1}\right]\\
&\leq Kc^{-i_{0}(d_{2}+1)}\left(c^{-i_{1}}\right)^{1-\frac{1}{\alpha_{1}}-\frac{d_{2}}{\alpha_{2}'}}\left(c^{-2}\right)^{1-\frac{1}{\alpha_{1}}-\frac{d_{2}}{\alpha_{2}'}}
\leq Kc^{-i_{0}(d_{2}+1)}\left(c^{-i_{0}\alpha_{2}'}\right)^{1-\frac{1}{\alpha_{1}}-\frac{d_{2}}{\alpha_{2}'}}\\
&= K\left(c^{-i_{0}}\right)^{1+\alpha_{2}'(1-\frac{1}{\alpha_{1}})} = Kc^{-i_{0}\rho'}
\leq Kc^{\rho'}a^{\rho'}=K_{51}a^{\rho'}. \end{align*} Further note that for part $I\!I$ we have $\alpha_{1}>1$, hence $1-\frac{1}{\alpha_{1}}>0$ and \begin{align*} I\!I &\leq \sum_{i=i_{1}+2}^{\infty}c^{-i}(c-1)K_6\,2\,c^{i/\alpha_1-i_{0}}
= Kc^{-i_{0}}\sum_{i=i_{1}+2}^{\infty}\left(c^{-i}\right)^{1-\frac{1}{\alpha_{1}}}(c-1)\\
&= Kc^{-i_{0}}\left(c^{-(i_{1}+2)}\right)^{1-\frac{1}{\alpha_{1}}}
\leq Kc^{-i_{0}}\left(c^{-i_{0}\alpha_{2}'}\right)^{1-\frac{1}{\alpha_{1}}}\\
&= K\left(c^{-i_{0}}\right)^{1+\alpha_{2}'(1-\frac{1}{\alpha_{1}})} = Kc^{-i_{0}\rho'}\leq K_{52}a^{\rho'}. \end{align*} Putting things together, we get the upper bound ${\mathbb E}[T(a,s)] \leq K_{51}a^{\rho'}+ K_{52}a^{\rho'} = K_{5}a^{\rho'}$ in part (ii) for all $0\leq s\leq 1$. To prove the lower bound, we choose $i_0,i_1$ as in the proof of the lower bound in part (i), i.e. $c^{-i_0}<a\leq c^{-i_0+1}$ and $c^{-i_1}<s\leq c^{-i_1+1}$. Note that, since $d_1=1$, for $j=1$ in \eqref{ubXj} we can choose $K=1$ and $\alpha_1''=\alpha_1$. Hence, similar to the above, we get \begin{equation}\label{stlb}
{\mathbb E}[T(a,s)]\geq\sum_{i=i_{1}-1}^{\infty}c^{-i}\int_{1}^{c}P\left(\begin{array}{c} |X^{(1)}(m)|<\frac{c^{i/\alpha_1-i_{0}}}{\sqrt{p}}\text{ and}\\ \|X^{(j)}(m)\|\leq K^{-1}\frac{c^{i/\alpha_j''-i_{0}}}{\sqrt{p}}, 2\leq j\leq p\end{array}\right)dm. \end{equation} By Lemma \ref{positive} choose $K_{10}>0$, $r>0$ and uniformly bounded Borel sets $J_m\subseteq{\mathbb R}^{d-1}$ with Lebesgue measure $0<K_9\leq\lambda^{d-1}(J_m)<\infty$ for every $m\in[1,c)$ such that the bounded continuous density $g_m(x_1,\ldots,x_p)$ of $X(m)=X^{(1)}(m)+\cdots+X^{(p)}(m)$ fulfills $$g_m(x_1,\ldots,x_p)\geq K_{10}>0\quad\text{ for all }(x_1,\ldots,x_p)\in[-r,r]\times J_{m}$$ and for every $m\in[1,c)$. Since $\{J_m\}_{m\in[1,c)}$ is uniformly bounded by Lemma \ref{positive}, we are able to choose $0<\delta\leq c^{-1}<1$ such that
$$\bigcup_{m\in[1,c)}J_{m}\subseteq\left\{\|x_j\|\leq\frac{K^{-1}c^{-\alpha_1/\alpha_p}}{\delta\sqrt{p}},\;2\leq j\leq p\right\},$$ where $K$ is the constant from \eqref{stlb}. Let $\eta=c^{2/\alpha_p}/(r\sqrt{p})$. Since $\alpha_{1}>\alpha_{2}''$, there exists a constant $a_{0}\in(0,1]$ such that $\left(\eta a\right)^{\alpha_{1}}<\left(\delta a\right)^{\alpha_{2}''}$ for all $0<a\leq a_{0}$. Now choose $i_{2}, i_{3}\in \mathbb{N}_{0}$ such that $c^{-i_{2}} <\left(\delta c^{-i_{0}+1}\right)^{\alpha_{2}''}\leq c^{-i_{2}+1}$ and $c^{-i_{3}} <\left(\eta c^{-i_{0}}\right)^{\alpha_{1}}\leq c^{-i_{3}+1}$. Note that $$c^{-i_3}<\left(\eta c^{-i_{0}}\right)^{\alpha_{1}}<\left(\eta a\right)^{\alpha_{1}}<\left(\delta a\right)^{\alpha_{2}''}\leq\left(\delta c^{-i_0+1}\right)^{\alpha_{2}''}\leq c^{-i_2+1}$$ and $$c^{-(i_1-1)}\geq s\geq a^{\alpha_2}\geq a^{\alpha_2''}>\left(c^{-i_0}\right)^{\alpha_2''}\geq\left(\delta c^{-i_0+1}\right)^{\alpha_2''}>c^{-i_2},$$ hence $i_3\geq i_2-1$ and $i_1-1\leq i_2$. We further have for all $i=i_2,\ldots,i_3+1$ and every $j=2,\ldots,p$ \begin{equation}\label{up} \frac{c^{i/\alpha_1-i_{0}}}{\sqrt{p}}\leq \frac{c^{(i_3+1)/\alpha_1-i_{0}}}{\sqrt{p}}\leq\frac{c^{2/\alpha_1}(\eta c^{-i_{0}})^{-1}c^{-i_{0}}}{\sqrt{p}}=\frac{c^{2/\alpha_1}}{\eta\sqrt{p}}\leq r \end{equation} and \begin{equation}\begin{split}\label{down} \frac{c^{i/\alpha_j''-i_{0}}}{\sqrt{p}} & \geq \frac{c^{i_2/\alpha_j''-i_{0}}}{\sqrt{p}}\geq \frac{(\delta c^{-i_0+1})^{-\alpha_2''/\alpha_j''}c^{-i_0}}{\sqrt{p}}\\ & =\frac{(\delta^{-1}c^{i_0-1})^{\alpha_2''/\alpha_j''}c^{-i_0}}{\sqrt{p}}\geq\frac{c^{-\alpha_2''/\alpha_j''}}{\delta\sqrt{p}}\geq\frac{c^{-\alpha_1/\alpha_p}}{\delta\sqrt{p}}. \end{split}\end{equation} Let $I_m=(-\frac{c^{i/\alpha_1-i_{0}}}{\sqrt{p}},\frac{c^{i/\alpha_1-i_{0}}}{\sqrt{p}})\times J_m$. Then, in view of \eqref{stlb}, using \eqref{up} and \eqref{down} we get \begin{align*}
{\mathbb E}[T(a,s)] & \geq \sum_{i=i_{2}}^{i_3+1}c^{-i}\int_{1}^{c}P\left(\begin{array}{c} |X^{(1)}(m)|<\frac{c^{i/\alpha_1-i_{0}}}{\sqrt{p}}\text{ and}\\ \|X^{(j)}(m)\|\leq K^{-1}\frac{c^{i/\alpha_j''-i_{0}}}{\sqrt{p}}, 2\leq j\leq p\end{array}\right)dm\\
&\geq \sum_{i=i_{2}}^{i_{3}+1}c^{-i}\int_{1}^{c}\int_{I_m} g_{m}(x)\,dx\,dm
\geq \sum_{i=i_{2}}^{i_{3}+1}c^{-i}(c-1)\,2\,\frac{c^{i/\alpha_1-i_{0}}}{\sqrt{p}}\,K_9\,K_{10}\\
&= Kc^{-i_{0}}\sum_{i=i_{2}}^{i_{3}+1}\left(c^{-i}\right)^{1-\frac{1}{\alpha_{1}}}
= Kc^{-i_{0}}\left(\frac{1-\left(c^{-(i_{3}+2)}\right)^{1-\frac{1}{\alpha_{1}}}}{1-c^{\frac{1}{\alpha_{1}}-1}}- \frac{1-\left(c^{-i_{2}}\right)^{1-\frac{1}{\alpha_{1}}}}{1-c^{\frac{1}{\alpha_{1}}-1}}\right)\\
& = Kc^{-i_{0}}\left(\left(c^{-i_{2}}\right)^{1-\frac{1}{\alpha_{1}}}-\left(c^{-(i_{3}+2)}\right)^{1-\frac{1}{\alpha_{1}}}\right)\\
& \geq K_{41}\left(c^{-i_{0}}\right)^{\rho''} - K_{42}\left(c^{-i_{0}}\right)^{\alpha_{1}}. \end{align*} Since $\rho''=1+\alpha_{2}''(1-\frac{1}{\alpha_{1}}) < 1+\alpha_{1}(1-\frac{1}{\alpha_{1}})=\alpha_{1}$ we have $\left(c^{-i_{0}}\right)^{\alpha_{1}-\rho''}\to0$ if $a\to0$, i.e. $i_{0}\to\infty$. Hence we can further choose $a_{0}$ sufficiently small, such that $${\mathbb E}[T(a,s)]\geq\frac{K_{41}}{2}\,\left(c^{-i_{0}}\right)^{\rho''}\geq K_{4}a^{\rho''}$$ for all $0<a\leq a_{0}$, which proves the lower bound in part (ii) and concludes the proof. \end{proof}
\begin{remark}\label{rem1dim} In fact we have proven a little bit more than stated in Theorem \ref{sojournbounds}. Part (i) is also valid in case $d=1$ for a $(c^{1/\alpha},c)$-semistable L\'evy process in ${\mathbb R}$ with $\alpha_1=\alpha$ and $d_1=1$. Our proof also shows that the upper bounds in part (i) and (ii) are valid for all $0\leq s\leq 1$, but this is also a direct consequence from the definition of a sojourn time. \end{remark}
\section{Main Results}
Recall the spectral decomposition of Section 2.2 for the constants $\alpha_1,\alpha_2$ and $d_1$ appearing in the following results.
\begin{theorem}\label{main} Let $X=\{X(t)\}_{t\geq0}$ be an operator semistable L\'evy process in ${\mathbb R^d}$ with $d\geq2$. Then for any Borel set $B\subseteq{\mathbb R}_+$ we have almost surely \begin{align*}
\dim_{\rm H}X(B)=
\begin{cases}
\alpha_1\dim_{\rm H}B & \text{if } \alpha_1\dim_{\rm H}B \leq d_1, \\
1+\alpha_2\left(\dim_{\rm H}B-\frac1{\alpha_1}\right) & \text{if } \alpha_1\dim_{\rm H}B > d_1.
\end{cases} \end{align*} \end{theorem}
As a direct consequence, for $B=[0,1]$ with $\dim_{\rm H}B=1$ the Hausdorff dimension of the range of $X$ is determined as follows.
\begin{cor} Let $X=\{X(t)\}_{t\geq0}$ be an operator semistable L\'evy process in ${\mathbb R^d}$ with $d\geq2$. Then we have almost surely \begin{align*}
\dim_{\rm H}X([0,1])=
\begin{cases}
\alpha_1 & \text{if } \alpha_1\leq d_1, \\
1+\alpha_2\left(1-\frac1{\alpha_1}\right) & \text{otherwise.}
\end{cases} \end{align*} \end{cor}
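For illustration (the numbers below are chosen purely for concreteness and are not tied to a particular process), suppose the spectral decomposition of Section 2.2 yields $\alpha_1=\frac32$, $\alpha_2=\frac12$ and $d_1=1$. Since $\alpha_1>d_1$, the second case applies and almost surely $$\dim_{\rm H}X([0,1])=1+\alpha_2\left(1-\frac{1}{\alpha_1}\right)=1+\frac12\left(1-\frac23\right)=\frac76\,,$$ a value lying strictly between $d_1=1$ and $\alpha_1=\frac32$.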
The lower cases in the above dimension formulas are only meaningful if $d\geq2$. For a one-dimensional semistable L\'evy process the Hausdorff dimension is determined as follows.
\begin{theorem}\label{dim1} Let $X=\{X(t)\}_{t\geq0}$ be a $(c^{1/\alpha},c)$-semistable L\'evy process in ${\mathbb R}$. Then for any Borel set $B\subseteq{\mathbb R}_+$ we have almost surely $$\dim_{\rm H}X(B)=\min(\alpha\dim_{\rm H}B,1).$$ In particular, for $B=[0,1]$ we obtain for the range $\dim_{\rm H}X([0,1])=\min(\alpha,1)$ a.s. \end{theorem}
For the proof of Theorem \ref{main} we follow standard techniques of determining upper and lower bounds for $\dim_{\rm H}X(B)$ as described on page 289 of \cite{Xiao}. Similar arguments can be found in Xiao and Lin \cite{XL} for multivariate selfsimilar processes with independent components.
\subsection{Upper bounds}
To obtain upper bounds for $\dim_{\rm H}X(B)$ we choose a suitable sequence of coverings of $X(B)$ and show that its corresponding $\gamma$-dimensional Hausdorff measure has finite expectation, which leads to $\dim_{\rm H}X(B)\leq\gamma$ almost surely. This method goes back to Pruitt and Taylor \cite{PT} and Hendricks \cite{Hen}.
\begin{lemma}\label{upperbound} Let $X=\{X(t)\}_{t\geq0}$ be an operator semistable L\'evy process in ${\mathbb R^d}$ with $d\geq2$. Then for any Borel set $B \subseteq {\mathbb R}_+$ we have almost surely \begin{align*}
\dim_{\rm H}X(B)\leq
\begin{cases}
\alpha_1\dim_{\rm H}B & \text{if } \alpha_1\dim_{\rm H}B \leq d_1, \\
1+\alpha_2\left(\dim_{\rm H}B-\frac1{\alpha_1}\right) & \text{if } \alpha_1\dim_{\rm H}B > d_1.
\end{cases} \end{align*} \end{lemma}
\begin{proof}
(i) Assume $\alpha_1\dim_{\rm H}B \leq d_1$ and $\alpha_{1}\leq d_{1}$. For $\gamma>\dim_{\rm H}B$ choose $\alpha_{1}''>\alpha_{1}$ such that $\gamma'=1-\frac{\alpha_{1}''}{\alpha_{1}}+\gamma>\dim_{\rm H}B$. Then, by definition of the Hausdorff dimension, for any $\varepsilon\in(0,1]$ there exists a sequence $\{I_{i}\}_{i\in\mathbb{N}}$ of intervals in ${\mathbb R}_{+}$ of length $|I_{i}|<\varepsilon$ such that
$$B\subseteq \bigcup_{i=1}^{\infty}I_{i}\quad\text{ and }\quad\sum_{i=1}^{\infty}|I_{i}|^{\gamma'}<1.$$
Let $s_{i}=|I_{i}|$ and $b_{i}:=|I_{i}|^{\frac{1}{\alpha_{1}}}$ then $(b_i/3)^{\alpha_1}<s_i$. By Lemma \ref{PT} and Theorem \ref{sojournbounds} it follows that $X(I_{i})$ can be covered by $M_{i}$ cubes $C_{ij}\in \Lambda(b_{i})$ of side $b_{i}$ such that for every $i\in{\mathbb N}$ we have
$${\mathbb E}[M_{i}]\leq 2K_{1}s_{i}\left({\mathbb E}\left[T\left(\tfrac{b_{i}}{3},s_{i}\right)\right]\right)^{-1}\leq 2K_{1}s_{i}K_{2}^{-1}\left(\tfrac{b_{i}}{3}\right)^{-\alpha_{1}''}= K \,s_{i}b_{i}^{-\alpha_{1}''} = K \,|I_{i}|^{1-\frac{\alpha_{1}''}{\alpha_{1}}}.$$ Note that $X(B)\subseteq\bigcup_{i=1}^{\infty} \bigcup_{j=1}^{M_{i}}C_{ij}$, where $b_{i}\sqrt{d}$ is the diameter of $C_{ij}$. Hence $\{C_{ij}\}$ is an $(\varepsilon^{1/\alpha_{1}}\sqrt{d})$-covering of $X(B)$. By monotone convergence we have \begin{align*}
{\mathbb E}\left[\sum_{i=1}^{\infty}M_{i}b_{i}^{\alpha_{1}\gamma}\right] & = \sum_{i=1}^{\infty}{\mathbb E}\left[M_{i}b_{i}^{\alpha_{1}\gamma}\right]\leq \sum_{i=1}^{\infty} K \,|I_{i}|^{1-\frac{\alpha_{1}''}{\alpha_{1}}}\, |I_{i}|^{\gamma}= K\sum_{i=1}^{\infty}|I_{i}|^{\gamma'} \leq K. \end{align*} Letting $\varepsilon\to0$, i.e.\ $b_{i}\to0$, by Fatou's lemma we get \begin{align*} {\mathbb E}\left[\mathcal{H}^{\alpha_{1}\gamma}(X(B))\right] & \leq {\mathbb E}\left[\liminf_{\varepsilon\to0} \sum_{i=1}^{\infty}\sum_{j=1}^{M_{i}}\left(b_{i}\sqrt{d}\right)^{\alpha_{1}\gamma}\right]\\ & \leq \liminf_{\varepsilon\to 0} \sqrt{d}^{\alpha_{1}\gamma}{\mathbb E}\left[\sum_{i=1}^{\infty}M_{i}b_{i}^{\alpha_{1}\gamma}\right]\leq\sqrt{d}^{\alpha_{1}\gamma} K < \infty, \end{align*} which shows that $\dim_{\rm H}X(B)\leq \alpha_{1}\gamma$ almost surely. Since $\gamma>\dim_{\rm H}B$ is arbitrary, we get $\dim_{\rm H}X(B) \leq \alpha_{1}\dim_{\rm H}B$ a.s.
(ii) Assume $\alpha_1\dim_{\rm H}B\leq d_1$ and $\alpha_{1}>d_{1}$. To be able to argue the same way as in part (i), we have to show that the same lower bound ${\mathbb E}[T(a,s)]\geq K\,a^{\alpha_{1}''}$ holds for the expected sojourn time also in case $\alpha_{1}>d_{1}$. In fact, by Theorem \ref{sojournbounds} (ii) we have ${\mathbb E}[T(a,s)]\geq K\,a^{\rho''}$, where $\rho''=1+\alpha_{2}''(1-\frac{1}{\alpha_{1}})$ and $0<\alpha_{2}<\alpha_{2}''<\alpha_{1}<\alpha_{1}''$. Hence $$\rho''=1+\alpha_{2}''(1-\tfrac{1}{\alpha_{1}})\leq 1+\alpha_{1}(1-\tfrac{1}{\alpha_{1}})=\alpha_{1}<\alpha_{1}''$$ so that for all $0<a\leq 1$ and $a^{\alpha_1}\leq s\leq1$ we get the desired lower bound. Now, as in part (i) the same conclusion $\dim_{\rm H}X(B)\leq \alpha_{1}\dim_{\rm H}B$ holds a.s.
(iii) Assume $\alpha_1\dim_{\rm H}B>d_1$. Since $\dim_{\rm H}B\leq 1$ it follows that $\alpha_{1}>d_{1}=1$. For $\gamma>\dim_{\rm H}B$ choose $\alpha_{2}''>\alpha_{2}$ such that $\gamma'=1-\frac{\alpha_{2}''}{\alpha_{2}}+\frac{\alpha_{2}''}{\alpha_{2}}\gamma>\dim_{\rm H}B$. For $\varepsilon\in(0,1]$ let $\{I_{i}\}_{i\in\mathbb{N}}$ be the same sequence of intervals as in part (i).
Let $s_{i}:=|I_{i}|$ and $b_{i}:=|I_{i}|^{\frac{1}{\alpha_{2}}}$ then $(b_i/3)^{\alpha_2}<s_i$. Again, by Lemma \ref{PT} and Theorem \ref{sojournbounds} it follows that $X(I_{i})$ can be covered by $M_{i}$ cubes $C_{ij}\in \Lambda(b_{i})$ of side $b_{i}$ such that for every $i\in{\mathbb N}$ we have
$${\mathbb E}[M_{i}]\leq 2K_{1}s_{i}\left({\mathbb E}\left[T\left(\tfrac{b_{i}}{3},s_{i}\right)\right]\right)^{-1}\leq 2K_{1}s_{i}K_{4}^{-1}\left(\tfrac{b_{i}}{3}\right)^{-\rho''}= K \,s_{i}b_{i}^{-\rho''} = K \,|I_{i}|^{1-\frac{\rho''}{\alpha_2}},$$ where $\rho''=1+\alpha_{2}''(1-\frac{1}{\alpha_{1}})$. By monotone convergence we have \begin{align*}
{\mathbb E}\left[\sum_{i=1}^{\infty}M_{i}b_{i}^{1+\alpha_{2}''(\gamma-\frac1{\alpha_1})}\right] & \leq \sum_{i=1}^{\infty} K \,|I_{i}|^{1-\frac{\rho''}{\alpha_2}}\, |I_{i}|^{\frac1{\alpha_2}+\frac{\alpha_{2}''}{\alpha_2}(\gamma-\frac1{\alpha_1})}= K\sum_{i=1}^{\infty}|I_{i}|^{\gamma'} \leq K. \end{align*} Since $\gamma>\dim_{\rm H}B$ and $\alpha_2''>\alpha_2$ are arbitrary, with the same arguments as in part (i) we get $\dim_{\rm H}X(B) \leq 1+\alpha_2(\dim_{\rm H}B-\frac1{\alpha_1})$ a.s. \end{proof}
\subsection{Lower bounds}
In order to show $\dim_{\rm H}X(B)\geq\gamma$ almost surely, we use standard capacity arguments. By Frostman's lemma we choose a suitable probability measure on $B$ with finite energy and show that a corresponding random measure on $X(B)$ has finite expected $\gamma$-energy. The relationship between the Hausdorff and the capacitary dimension by Frostman's theorem then gives the desired lower bound.
\begin{lemma}\label{lowerbound} Let $X=\{X(t)\}_{t\geq0}$ be an operator semistable L\'evy process in ${\mathbb R^d}$ with $d\geq2$. Then for any Borel set $B\subseteq{\mathbb R}_+$ we have almost surely \begin{align*}
\dim_{\rm H}X(B)\geq
\begin{cases}
\alpha_1\dim_{\rm H}B & \text{if } \alpha_1\dim_{\rm H}B \leq d_1, \\
1+\alpha_2\left(\dim_{\rm H}B-\frac1{\alpha_1}\right) & \text{if } \alpha_1\dim_{\rm H}B > d_1.
\end{cases} \end{align*} \end{lemma}
\begin{proof} First assume $0<\alpha_1\dim_{\rm H}B\leq d_1$. In case $\dim_{\rm H}B=0$ there is nothing to prove. For $0<\gamma<\alpha_{1}\dim_{\rm H}B$ choose $0<\alpha_{1}'<\alpha_{1}$ such that $\gamma<\alpha_{1}'\dim_{\rm H}B$. By Frostman's lemma \cite{Kah,Mat} there exists a probability measure $\sigma$ on $B$ such that \begin{equation}\label{sigma}
\int_{B}\int_{B}\frac{\sigma(ds)\,\sigma(dt)}{|s-t|^{\gamma/\alpha_{1}'}}<\infty. \end{equation} In order to prove $\dim_{\rm H}X(B)\geq\gamma$ almost surely, by Frostman's theorem \cite{Kah,Mat} it suffices to show that \begin{equation}\label{Xsigma}
\int_{B}\int_{B}{\mathbb E}\left[\|X(s)-X(t)\|^{-\gamma}\right]\,\sigma(ds)\,\sigma(dt)<\infty. \end{equation}
Let $K_{11}=\sup_{m\in[1,c)} E(\|X^{(1)}(m)\|^{-\gamma})<\infty$ by Lemma \ref{negmom}, since $\gamma<\alpha_{1}\dim_{\rm H}B\leq d_{1}$. In order to verify \eqref{Xsigma} we split the domain of integration into two parts
(i) Assume $|s-t|\leq 1$, then $|s-t|=mc^{-i}$ with $m\in[1,c)$ and $i\in\mathbb{N}_{0}$. By Lemma \ref{specbound} we get \begin{align*}
{\mathbb E}\left[\|X(s)-X(t)\|^{-\gamma}\right] & \leq {\mathbb E}\left[\|X^{(1)}(mc^{-i})\|^{-\gamma}\right]= {\mathbb E}\left[\|c^{-iE_{1}}X^{(1)}(m)\|^{-\gamma}\right]\\
& \leq \|c^{iE_{1}}\|^{\gamma}{\mathbb E}\left[\|X^{(1)}(m)\|^{-\gamma}\right]\leq K\,c^{\gamma i/\alpha_{1}'} K_{11}\\
&= Km^{\frac{\gamma}{\alpha_{1}'}}\cdot \left(mc^{-i}\right)^{-\frac{\gamma}{\alpha_{1}'}}\leq K_{12}|s-t|^{-\frac{\gamma}{\alpha_{1}'}}.
\end{align*}
(ii) Now assume $|s-t|\geq 1$ and choose $\alpha_{1}''>\alpha_{1}$. Write $|s-t|=mc^{i}$ with $m\in[1,c)$ and $i\in\mathbb{N}_{0}$. Then, using again Lemma \ref{specbound} we get as above \begin{align*}
{\mathbb E}\left[\|X(s)-X(t)\|^{-\gamma}\right] & = \|c^{-iE_{1}}\|^{\gamma} {\mathbb E}\left[\|X^{(1)}(m)\|^{-\gamma}\right]\leq K\,c^{-\gamma i/\alpha_{1}''} K_{11}\leq K\,K_{11} = K_{13}. \end{align*}
Combining parts (i) and (ii), by \eqref{sigma} we see that \eqref{Xsigma} is fulfilled, which gives the desired lower bound in case $\alpha_1\dim_{\rm H}B\leq d_1$.
Now assume $\alpha_1\dim_{\rm H}B>d_1$, then $\alpha_{1}>d_{1}=1$ and hence $\dim_{\rm H}B>\frac{1}{\alpha_{1}}$. Choose $1<\gamma<1+\alpha_{2}(\dim_{\rm H}B-\frac{1}{\alpha_{1}})$, then since $\rho=\frac{\gamma}{\alpha_{2}}-(\frac{1}{\alpha_{2}}-\frac{1}{\alpha_{1}})<\dim_{\rm H}B$ we can choose $0<\alpha_{2}'<\alpha_{2}$ such that $\rho'=\frac{\gamma}{\alpha_{2}'}-(\frac{1}{\alpha_{2}'}-\frac{1}{\alpha_{1}})<\dim_{\rm H}B$. By Frostman's lemma there exists again a probability measure $\sigma$ on $B$ such that \begin{equation}\label{sigma1}
\int_{B}\int_{B}\frac{\sigma(ds)\,\sigma(dt)}{|s-t|^{\rho'}}<\infty. \end{equation} Again, in order to show \eqref{Xsigma} we split the domain of integration into two parts.
(i) Assume $|s-t|=mc^{-i}\leq 1$ with $m\in[1,c)$ and $i\in\mathbb{N}_{0}$. By Lemma \ref{specbound} we get \begin{align*}
& {\mathbb E}\left[\|X(s)-X(t)\|^{-\gamma}\right]={\mathbb E}\left[\|c^{-iE}X(m)\|^{-\gamma}\right]\\
& \leq {\mathbb E}\left[\left(c^{-i\frac{2}{\alpha_{1}}}|X^{(1)}(m)|^{2} + \|X^{(2)}(m)\|^{2}/ \|c^{iE_{2}}\|^{2} \right)^{-\frac{\gamma}{2}}\right]\\
&\leq K\int_{{\mathbb R}^{1+d_2}} \frac{1}{c^{-i\frac{\gamma}{\alpha_{1}}}\left|x_{1}\right|^{\gamma} + c^{-i\frac{\gamma}{\alpha_{2}'}}\left\|x_{2}\right\|^{\gamma} } \,g_{m}(x_{1},x_{2}) \,dx_{1}\,dx_{2}\\
&= K\int_{{\mathbb R}^{1+d_2}} \frac{1}{m^{-\frac{\gamma}{\alpha_{1}}}\left(mc^{-i}\right)^{\frac{\gamma}{\alpha_{1}}}\left|x_{1}\right|^{\gamma} + m^{-\frac{\gamma}{\alpha_{2}'}}\left(mc^{-i}\right)^{\frac{\gamma}{\alpha_{2}'}}\left\|x_{2}\right\|^{\gamma}} \,g_{m}(x_{1},x_{2}) \,dx_{1}\,dx_{2}\\
& \leq K\int_{{\mathbb R}^{1+d_2}} \frac{1}{c^{-\frac{\gamma}{\alpha_{1}}}\left|s-t\right|^{\frac{\gamma}{\alpha_{1}}}\left|x_{1}\right|^{\gamma} + c^{-\frac{\gamma}{\alpha_{2}'}}\left|s-t\right|^{\frac{\gamma}{\alpha_{2}'}}\left\|x_{2}\right\|^{\gamma}} \,g_{m}(x_{1},x_{2}) \,dx_{1}\,dx_{2}\\
& \leq K \int_{{\mathbb R}^{1+d_2}} \frac{1}{\left|s-t\right|^{\frac{\gamma}{\alpha_{1}}}\left|x_{1}\right|^{\gamma} + \left|s-t\right|^{\frac{\gamma}{\alpha_{2}'}}\left\|x_{2}\right\|^{\gamma}}\, g_{m}(x_{1},x_{2}) \,dx_{1}\,dx_{2}\\
&= K\left|s-t\right|^{-\frac{\gamma}{\alpha_{1}}}\int_{{\mathbb R}^{1+d_2}} \frac{1}{\left|x_{1}\right|^{\gamma} + \left|s-t\right|^{\gamma(\frac{1}{\alpha_{2}'}-\frac{1}{\alpha_{1}})}\left\|x_{2}\right\|^{\gamma} } \,g_{m}(x_{1},x_{2}) \,dx_{1}\,dx_{2},
\end{align*} where $g_{m}(x_{1},x_{2})$ denotes a bounded continuous density of $(X^{(1)}(m),X^{(2)}(m))$ in ${\mathbb R}^{1+d_2}\cong V_1\oplus V_2$. We will use integration by parts to derive an upper bound for the above integral $I$. Let
\begin{align*}
F_{m}(r_{1}, r_{2})=P\left(|X^{(1)}(m)|\leq r_{1}, \|X^{(2)}(m)\|\leq r_{2}\right),
\end{align*} which by transformation into spherical coordinates reads as
\begin{align*}
F_{m}(r_{1}, r_{2})&=\int_{|x_{1}|\leq r_{1}}\int_{\|x_{2}\|\leq r_{2}} g_{m}(x_{1},x_{2}) \,dx_{1}\,dx_{2}\\
&= \int_{-r_{1}}^{r_{1}} \int_{0}^{r_{2}} \int_{S_{d_{2}-1}} \tilde{g}_{m}(\rho_{1},\rho_{2}\theta)\rho_{2}^{d_{2}-1} \mu(\,d\theta)\,d\rho_{2}\,d\rho_{1},
\end{align*}
where $\tilde{g}_{m}(\rho_{1},\rho_{2}\theta)$ is a bounded continuous function in $(\rho_{1},\rho_{2},\theta)\in\mathbb{R}\times\mathbb{R}_{+}\times S_{d_{2}-1}$ and $\mu$ is the surface measure on the unit sphere $S_{d_{2}-1}$ in $\mathbb{R}^{d_{2}}$. Note that by \eqref{K8} we have
\begin{equation}\label{K8neu}
\sup_{m\in[1,c)}\sup_{(\rho_{1},\rho_{2},\theta)\in\mathbb{R}\times\mathbb{R}_{+}\times S_{d_{2}-1}}\tilde{g}_{m}(\rho_{1},\rho_{2}\theta)=K_8<\infty.
\end{equation}
For simplicity let $z=|s-t|^{\frac{1}{\alpha_{2}'}-\frac{1}{\alpha_{1}}}$. By Fubini's theorem and integration by parts with respect to $dr_{1}$ we get for the above integral $I$
\begin{align*}
I&=\int_{0}^{\infty}\int_{0}^{\infty} \frac{1}{r_{1}^{\gamma}+z^{\gamma}r_{2}^{\gamma}} \,F_{m}(\,dr_{1},\,dr_{2})\\
&= \int_{0}^{\infty}\int_{0}^{\infty} \frac{1}{r_{1}^{\gamma}+z^{\gamma}r_{2}^{\gamma}} \int_{S_{d_{2}-1}} \tilde{g}_{m}(r_{1},r_{2}\theta) \,r_{2}^{d_{2}-1} \mu(\,d\theta)\,dr_{1}\,dr_{2}\\
&= 0+\int_{0}^{\infty} \int_{0}^{\infty} \left[\frac{\gamma r_{1}^{\gamma-1}}{(r_{1}^{\gamma}+z^{\gamma}r_{2}^{\gamma})^{2}}\int_{0}^{r_{1}} \int_{S_{d_{2}-1}} \tilde{g}_{m}(\rho_{1},r_{2}\theta) r_{2}^{d_{2}-1} \mu(\,d\theta)\,d\rho_{1}\right] \,dr_{1}\,dr_{2}\\
&= \int_{0}^{1}\int_{0}^{\infty} \left[\ldots\right]\,dr_{1}\,dr_{2} + \int_{1}^{\infty} \int_{0}^{\infty} \left[\ldots\right]\,dr_{1}\,dr_{2}=: I_{1} + I_{2}.
\end{align*} Now we estimate $I_1$ and $I_2$ separately. By a change of variables $r_{1}=zr_{2}s_{1}$ and \eqref{K8neu} we get
\begin{align*}
I_{1}& \leq K\int_{0}^{1} r_{2}^{d_{2}-1}\int_{0}^{\infty}\frac{\gamma r_{1}^{\gamma-1}}{\left(r_{1}^{\gamma}+z^{\gamma}r_{2}^{\gamma}\right)^{2}}\, r_{1}\,dr_{1}\,dr_{2}\\
& = Kz^{-(\gamma-1)}\int_{0}^{1}r_{2}^{d_{2}-\gamma}\,dr_{2}\cdot\int_{0}^{\infty}\frac{\gamma s_{1}^{\gamma}}{\left(s_{1}^{\gamma}+1\right)^{2}} \,ds_{1}\\
&\leq K_{14}z^{-(\gamma-1)}=K_{14}|s-t|^{-(\gamma-1)(\frac{1}{\alpha_{2}'}-\frac{1}{\alpha_{1}})},
\end{align*} since $1<\gamma<\alpha_{1}\leq 2\leq d_{2} +1$. In order to estimate $I_2$ first note that by \eqref{K8} we have
$$F_{m}(r_{1},r_{2})=\int_{|x_{1}|\leq r_{1}}\int_{\|x_{2}\|\leq r_{2}} g_{m}(x_{1},x_{2}) \,dx_{2} \,dx_{1}
\leq \int_{|x_{1}|\leq r_{1}} g_{m}(x_{1}) \,dx_{1}\leq K_8\cdot 2r_{1}.$$ By Fubini's theorem and integration by parts with respect to $dr_{2}$ we further get \begin{align*}
I_{2}&= \int_{1}^{\infty} \int_{0}^{\infty}\left[ \frac{\gamma r_{1}^{\gamma-1}}{\left(r_{1}^{\gamma}+z^{\gamma}r_{2}^{\gamma}\right)^{2}}\int_{0}^{r_{1}}\int_{S_{d_{2}-1}}\tilde{g}_{m}(\rho_{1},r_{2}\theta)r_{2}^{d_{2}-1}\mu(\,d\theta)\,d\rho_{1} \right] \,dr_{1}\,dr_{2}\\
&= -\int_{S_{d_{2}-1}}\int_{0}^{\infty}\frac{\gamma r_{1}^{\gamma-1}}{\left(r_{1}^{\gamma}+z^{\gamma}\right)^{2}} \int_{0}^{1}\int_{0}^{r_{1}} \tilde{g}_{m}(\rho_{1},\rho_{2}\theta)\rho_{2}^{d_{2}-1}\,d\rho_{1}\,d\rho_{2}\,dr_{1}\,\mu(d\theta)\\
&\phantom{=} + \int_{0}^{\infty}\int_{1}^{\infty} \frac{2\gamma^{2}z^{\gamma}r_{1}^{\gamma-1}r_{2}^{\gamma-1}}{\left(r_{1}^{\gamma}+z^{\gamma}r_{2}^{\gamma}\right)^{3}} \,F_{m}(r_{1},r_{2})\,dr_{2}\,dr_{1}\\
&\leq \int_{0}^{\infty}\int_{1}^{\infty} \frac{2\gamma^{2}z^{\gamma}r_{1}^{\gamma-1}r_{2}^{\gamma-1}}{\left(r_{1}^{\gamma}+z^{\gamma}r_{2}^{\gamma}\right)^{3}} \,F_{m}(r_{1},r_{2})\,dr_{2}\,dr_{1}\\
& \leq K\int_{1}^{\infty}\int_{0}^{\infty}\frac{z^{\gamma}r_{1}^{\gamma-1}r_{2}^{\gamma-1}}{\left(r_{1}^{\gamma}+z^{\gamma}r_{2}^{\gamma}\right)^{3}} \,r_{1}\,dr_{1}\,dr_{2}\\
& = Kz^{-\gamma+1}\int_{1}^{\infty}\frac{1}{r_{2}^{\gamma}}\,dr_{2} \cdot\int_{0}^{\infty}\frac{s_{1}^{\gamma}}{\left(s_{1}^{\gamma}+1\right)^{3}}\,ds_{1}\\
&=K_{15}z^{-\gamma+1}=K_{15}|s-t|^{-(\gamma-1)(\frac{1}{\alpha_{2}'}-\frac{1}{\alpha_{1}})},
\end{align*} since $\gamma>1$. Putting things together we finally get \begin{align*}
&{\mathbb E}\left[\|X(s)-X(t)\|^{-\gamma}\right]\leq K |s-t|^{-\frac{\gamma}{\alpha_{1}}}\cdot \left(I_{1}+I_{2}\right)\\
&\leq K |s-t|^{-\frac{\gamma}{\alpha_{1}}}\cdot \left( K_{14}|s-t|^{-(\gamma-1)(\frac{1}{\alpha_{2}'}-\frac{1}{\alpha_{1}})} + K_{15}|s-t|^{-(\gamma-1)(\frac{1}{\alpha_{2}'}-\frac{1}{\alpha_{1}})} \right)\\
&\leq K |s-t|^{-\rho'}. \end{align*}
(ii) Now assume $|s-t|=mc^i\geq 1$ with $m\in[1,c)$ and $i\in\mathbb{N}_{0}$. Choose $\alpha_{2}''>\alpha_{2}$, then by Lemma \ref{specbound} we have \begin{align*}
{\mathbb E}\left[\|X(s)-X(t)\|^{-\gamma}\right] &= {\mathbb E}\left[\|X(mc^{i})\|^{-\gamma}\right]\\
& \leq {\mathbb E}\left[\left(c^{i\frac{2}{\alpha_{1}}}|X^{(1)}(m)|^{2} + c^{i\frac{2}{\alpha_{2}''}}\|X^{(2)}(m)\|^{2} \right)^{-\frac{\gamma}{2}}\right]\\
& \leq {\mathbb E}\left[\|(X^{(1)}(m),X^{(2)}(m))\|^{-\gamma}\right]\leq K_{16}<\infty
\end{align*} uniformly in $m\in[1,c)$ in view of Lemma \ref{negmom}, since $\gamma<2\leq 1+d_{2}$.
Combining the results of part (i) and part (ii), as above we see that \eqref{Xsigma} is fulfilled and by Frostman's theorem we get $\dim_{\rm H}X(B)\geq \gamma$ almost surely. Since $\gamma<1+\alpha_{2}(\dim_{\rm H}B-\frac{1}{\alpha_{1}})$ is arbitrary, this concludes the proof. \end{proof}
\subsection{Proof of our main results}
Theorem \ref{main} is now a direct consequence of Lemma \ref{upperbound} together with Lemma \ref{lowerbound} and it only remains to prove Theorem \ref{dim1}. In case $\alpha\dim_{\rm H}B \leq1$, Lemma \ref{upperbound} and Lemma \ref{lowerbound} are still valid in the one-dimensional situation $d=1$; see Remark \ref{rem1dim}. Together these immediately give $\dim_{\rm H}X(B)=\alpha\dim_{\rm H}B=\min(\alpha\dim_{\rm H}B,1)$ almost surely. Hence it remains to prove that $\dim_{\rm H}X(B)\geq1$ almost surely if $\alpha\dim_{\rm H}B>1$, since $\dim_{\rm H}X(B)\leq1$ is obvious. But, assuming $0<\gamma<\min(\alpha\dim_{\rm H}B,1)$, we can proceed as in the proof of the upper case of Lemma \ref{lowerbound} with $E_1=1/\alpha$ and $\alpha_1'=\alpha$ to conclude that \eqref{Xsigma} holds and hence $\dim_{\rm H}X(B)\geq\min(\alpha\dim_{\rm H}B,1)$ almost surely.
$\Box$
\begin{remark} Meerschaert and Xiao \cite{MX} present an alternative analytic way to determine $\dim_{\rm H}X([0,1])$ for an operator stable L\'evy process $\{X(t)\}_{t\geq0}$ using an index theorem of Khoshnevisan et al. \cite{KXZ}. This method heavily depends on the fine structure of the exponent as given in Theorem 3.1 of Meerschaert and Veeh \cite{MV} and implicitly uses the characterization of the set $\mathcal E$ of all possible exponents as \begin{equation}\label{expset} \mathcal E=E_{\rm c}+\mathfrak T\mathcal S(\mu_1) \end{equation} due to Holmes et al. \cite{HHM}. Here, $$\mathcal S(\mu_1)=\{A\in\operatorname{GL}({\mathbb R^d}):\,\mu_1(A^{-1}dx)=\mu_1(dx)\}$$ denotes the symmetry group, $\mathfrak T\mathcal S(\mu_1)$ is its tangent space and $E_{\rm c}$ is a commuting exponent with $E_{\rm c}A=AE_{\rm c}$ for every $A\in\mathcal S(\mu_1)$. For our case of an operator semistable L\'evy process, existence of a commuting exponent $E_{\rm c}$ is known by Theorem 1.11.6 in Hazod and Siebert \cite{HS}. But due to the discrete scaling it is still an open question if the set $\mathcal E$ of possible exponents has an affine representation as in \eqref{expset} with an $\mathcal S(\mu_1)$-invariant subspace. Hence it is unclear, whether the Hausdorff dimension of the range $\dim_{\rm H}X([0,1])$ of an operator semistable L\'evy process can be obtained by a generalization of the analytic approach in section 4 of Meerschaert and Xiao \cite{MX}. However, by the presented method we can additionally determine the Hausdorff dimension of the partial range $\dim_{\rm H}X(B)$ for arbitrary Borel sets $B\subseteq{\mathbb R}_+$. \end{remark}
\end{document}
Algebra extension
In abstract algebra, an algebra extension is the ring-theoretic equivalent of a group extension.
For the ring-theoretic equivalent of a field extension, see Subring#Ring extensions.
Not to be confused with Algebraic extension.
Precisely, a ring extension of a ring R by an abelian group I is a pair (E, $\phi $) consisting of a ring E and a ring homomorphism $\phi $ that fits into the short exact sequence of abelian groups:
$0\to I\to E{\overset {\phi }{{}\to {}}}R\to 0.$[1]
Note I is then isomorphic to a two-sided ideal of E. Given a commutative ring A, an A-extension or an extension of an A-algebra is defined in the same way by replacing "ring" with "algebra over A" and "abelian groups" with "A-modules".
An extension is said to be trivial or to split if $\phi $ splits; i.e., $\phi $ admits a section that is a ring homomorphism.[2] (see § Example: trivial extension).
A morphism between extensions of R by I, over say A, is an algebra homomorphism E → E' that induces the identities on I and R. By the five lemma, such a morphism is necessarily an isomorphism, and so two extensions are equivalent if there is a morphism between them.
Example: trivial extension
Let R be a commutative ring and M an R-module. Let E = R ⊕ M be the direct sum of abelian groups. Define the multiplication on E by
$(a,x)\cdot (b,y)=(ab,ay+bx).$
Note that identifying (a, x) with a + εx where ε squares to zero and expanding out (a + εx)(b + εy) yields the above formula; in particular we see that E is a ring. It is sometimes called the algebra of dual numbers. Alternatively, E can be defined as $\operatorname {Sym} (M)/\bigoplus _{n\geq 2}\operatorname {Sym} ^{n}(M)$ where $\operatorname {Sym} (M)$ is the symmetric algebra of M.[3] We then have the short exact sequence
$0\to M\to E{\overset {p}{{}\to {}}}R\to 0$
where p is the projection. Hence, E is an extension of R by M. It is trivial since $r\mapsto (r,0)$ is a section (note this section is a ring homomorphism since $(1,0)$ is the multiplicative identity of E). Conversely, every trivial extension E of R by I is isomorphic to $R\oplus I$ if $I^{2}=0$. Indeed, identifying $R$ as a subring of E using a section, we have $(E,\phi )\simeq (R\oplus I,p)$ via $e\mapsto (\phi (e),e-\phi (e))$.[1]
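The multiplication rule above is easy to experiment with. The following minimal sketch (Python; modeling R by plain integers and M by a rank-one integer module, with an illustrative class name not taken from any library) implements E = R ⊕ M and checks that the embedded copy of M squares to zero and that the section r ↦ (r, 0) is multiplicative:

```python
# Trivial extension E = R ⊕ M with multiplication (a, x)·(b, y) = (ab, ay + bx).
# R is modeled by Python ints and M by a rank-one integer module, so elements
# of E are pairs (a, x).

class TrivialExtension:
    def __init__(self, a, x):
        self.a = a  # component in R
        self.x = x  # component in M

    def __add__(self, other):
        return TrivialExtension(self.a + other.a, self.x + other.x)

    def __mul__(self, other):
        # the defining rule: (a, x)(b, y) = (ab, ay + bx)
        return TrivialExtension(self.a * other.a,
                                self.a * other.x + other.a * self.x)

    def __eq__(self, other):
        return (self.a, self.x) == (other.a, other.x)

one = TrivialExtension(1, 0)  # multiplicative identity (1, 0)

# the embedded copy of M (elements of the form (0, x)) squares to zero:
assert TrivialExtension(0, 3) * TrivialExtension(0, 5) == TrivialExtension(0, 0)

# the section r ↦ (r, 0) is a ring homomorphism:
assert TrivialExtension(2, 0) * TrivialExtension(3, 0) == TrivialExtension(6, 0)
assert TrivialExtension(2, 7) * one == TrivialExtension(2, 7)
```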
One interesting feature of this construction is that the module M becomes an ideal of some new ring. In his book Local Rings, Nagata calls this process the principle of idealization.[4]
Square-zero extension
Especially in deformation theory, it is common to consider an extension R of a ring (commutative or not) by an ideal whose square is zero. Such an extension is called a square-zero extension, a square extension or just an extension. For a square-zero ideal I, since I is contained in the left and right annihilators of itself, I is an $R/I$-bimodule.
More generally, an extension by a nilpotent ideal is called a nilpotent extension. For example, the quotient $R\to R_{\mathrm {red} }$ of a Noetherian commutative ring by the nilradical is a nilpotent extension.
In general,
$0\to I^{n-1}/I^{n}\to R/I^{n}\to R/I^{n-1}\to 0$
is a square-zero extension. Thus, a nilpotent extension breaks up into successive square-zero extensions. Because of this, it is usually enough to study square-zero extensions in order to understand nilpotent extensions.
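A small numerical sketch (Python; modeling R = Z and I = 2Z via modular arithmetic, with an illustrative helper name) shows one square-zero step of the nilpotent tower Z/8 → Z/4 → Z/2:

```python
# One square-zero step in the nilpotent tower Z/8 -> Z/4 -> Z/2,
# with R = Z and I = 2Z modeled by modular arithmetic.

def kernel_of_quotient(big_mod, small_mod):
    """Elements of Z/big_mod mapping to 0 under Z/big_mod -> Z/small_mod."""
    return [k for k in range(big_mod) if k % small_mod == 0]

# kernel of Z/8 -> Z/4 is I^2/I^3 = {0, 4}, and it squares to zero in Z/8:
ker = kernel_of_quotient(8, 4)
assert ker == [0, 4]
assert all((a * b) % 8 == 0 for a in ker for b in ker)

# by contrast, the kernel of the full quotient Z/8 -> Z/2 is nilpotent
# but not square-zero (2 * 2 = 4 is nonzero in Z/8):
ker2 = kernel_of_quotient(8, 2)
assert any((a * b) % 8 != 0 for a in ker2 for b in ker2)
```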
See also
• Formally smooth map
• The Wedderburn principal theorem, a statement about an extension by the Jacobson radical.
References
1. Sernesi 2007, 1.1.1.
2. Typical references require sections be homomorphisms without elaborating whether 1 is preserved. But since we need to be able to identify R as a subring of E (see the trivial extension example), it seems 1 needs to be preserved.
3. Anderson, D. D.; Winders, M. (March 2009). "Idealization of a Module". Journal of Commutative Algebra. 1 (1): 3–56. doi:10.1216/JCA-2009-1-1-3. ISSN 1939-2346. S2CID 120720674.
4. Nagata, Masayoshi (1962), Local Rings, Interscience Tracts in Pure and Applied Mathematics, vol. 13, New York-London: Interscience Publishers a division of John Wiley & Sons, ISBN 0-88275-228-6, MR 0155856
• Sernesi, Edoardo (20 April 2007). Deformations of Algebraic Schemes. Springer Science & Business Media. ISBN 978-3-540-30615-3.
Further reading
• algebra extension at nLab
• infinitesimal extension at nLab
• Extension of an associative algebra at Encyclopedia of Mathematics
12.1: Invariant Directions
Book: Linear Algebra (Waldron, Cherney, and Denton)
12: Eigenvalues and Eigenvectors
Contributed by David Cherney, Tom Denton, & Andrew Waldron
Professor (Mathematics) at University of California, Davis
Have a look at the linear transformation \(L\) depicted below:
It was picked at random by choosing a pair of vectors \(L(e_{1})\) and \(L(e_{2})\) as the outputs of \(L\) acting on the canonical basis vectors. Notice how the unit square with a corner at the origin is mapped to a parallelogram. The second line of the picture shows these superimposed on one another. Now look at the second picture on that line. There, two vectors \(f_{1}\) and \(f_{2}\) have been carefully chosen such that if the inputs into \(L\) are in the parallelogram spanned by \(f_{1}\) and \(f_{2}\), the outputs also form a parallelogram with edges lying along the same two directions. Clearly this is a very special situation that should correspond to interesting properties of \(L\).
Now lets try an explicit example to see if we can achieve the last picture:
Example 116
Consider the linear transformation \(L\) such that $$L\begin{pmatrix}1\\0\end{pmatrix}=\begin{pmatrix}-4\\-10\end{pmatrix}\, \mbox{ and }\, L\begin{pmatrix}0\\1\end{pmatrix}=\begin{pmatrix}3\\7\end{pmatrix}\, ,$$ so that the matrix of \(L\) is
$$\begin{pmatrix}
-4 & 3 \\
-10 & 7 \\
\end{pmatrix}\, .$$
Recall that a vector is a direction and a magnitude; \(L\) applied to \(\begin{pmatrix}1\\0\end{pmatrix}\) or \(\begin{pmatrix}0\\1\end{pmatrix}\) changes both the direction and the magnitude of the vectors given to it.
Notice that $$L\begin{pmatrix}3\\5\end{pmatrix}=\begin{pmatrix}-4\cdot 3+3\cdot 5 \\ -10\cdot 3+7\cdot 5\end{pmatrix}=\begin{pmatrix}3\\5\end{pmatrix}\, .$$ Then \(L\) fixes the direction (and actually also the magnitude) of the vector \(v_{1}=\begin{pmatrix}3\\5\end{pmatrix}\).
Now, notice that any vector with the same direction as \(v_{1}\) can be written as \(cv_{1}\) for some constant \(c\). Then \(L(cv_{1})=cL(v_{1})=cv_{1}\), so \(L\) fixes every vector pointing in the same direction as \(v_{1}\).
Also notice that
$$L\begin{pmatrix}1\\2\end{pmatrix}=\begin{pmatrix}-4\cdot 1+3\cdot 2 \\ -10\cdot 1+7\cdot 2\end{pmatrix}=\begin{pmatrix}2\\4\end{pmatrix}=2\begin{pmatrix}1\\2\end{pmatrix}\, ,$$
so \(L\) fixes the direction of the vector \(v_{2}=\begin{pmatrix}1\\2\end{pmatrix}\) but stretches \(v_{2}\) by a factor of \(2\). Now notice that for any constant \(c\), \(L(cv_{2})=cL(v_{2})=2cv_{2}\). Then \(L\) stretches every vector pointing in the same direction as \(v_{2}\) by a factor of \(2\).
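The two computations above can be replayed mechanically. A minimal sketch (plain Python, no libraries; the helper name `apply` is an illustrative choice):

```python
# Verify the invariant directions of L with matrix [[-4, 3], [-10, 7]].

def apply(M, v):
    """Multiply a 2x2 matrix (given as a list of rows) by a 2-vector."""
    return [M[0][0] * v[0] + M[0][1] * v[1],
            M[1][0] * v[0] + M[1][1] * v[1]]

M = [[-4, 3], [-10, 7]]

assert apply(M, [3, 5]) == [3, 5]      # v1 = (3, 5) is fixed: eigenvalue 1
assert apply(M, [1, 2]) == [2, 4]      # v2 = (1, 2) is doubled: eigenvalue 2
assert apply(M, [6, 10]) == [6, 10]    # any multiple of v1 is fixed as well
```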
In short, given a linear transformation \(L\) it is sometimes possible to find a vector \(v\neq 0\) and constant \(\lambda\neq 0\) such that \(Lv=\lambda v.\)
We call the direction of the vector \(v\) an \(\textit{invariant direction}\). In fact, any vector pointing in the same direction also satisfies this equation because \(L(cv)=cL(v)=\lambda cv\). More generally, any \(\textit{non-zero}\) vector \(v\) that solves
\[Lv=\lambda v\]
is called an \(\textit{eigenvector}\) of \(L\), and \(\lambda\) (which now need not be zero) is an \(\textit{eigenvalue}\). Since the direction is all we really care about here, then any other vector \(cv\) (so long as \(c\neq 0\)) is an equally good choice of eigenvector. Notice that the relation "\(u\) and \(v\) point in the same direction'' is an equivalence relation.
In our example of the linear transformation \(L\) with matrix
$$\begin{pmatrix}
-4 & 3 \\
-10 & 7 \\
\end{pmatrix}\, ,$$
we have seen that \(L\) enjoys the property of having two invariant directions, represented by eigenvectors \(v_{1}\) and \(v_{2}\) with eigenvalues \(1\) and \(2\), respectively.
It would be very convenient if we could write any vector \(w\) as a linear combination of \(v_{1}\) and \(v_{2}\). Suppose \(w=rv_{1}+sv_{2}\) for some constants \(r\) and \(s\). Then:
\[L(w)=L(rv_{1}+sv_{2})=rL(v_{1})+sL(v_{2})=rv_{1}+2sv_{2}.\]
Now \(L\) just multiplies the number \(r\) by \(1\) and the number \(s\) by \(2\). If we could write this as a matrix, it would look like:
$$\begin{pmatrix}
1 & 0 \\
0 & 2 \\
\end{pmatrix}\begin{pmatrix}r\\s\end{pmatrix}\, ,$$
which is much slicker than the usual scenario
$$L\begin{pmatrix}
x\\
y
\end{pmatrix}=\begin{pmatrix}
a & b \\
c & d
\end{pmatrix}\begin{pmatrix}
x\\
y
\end{pmatrix}=
\begin{pmatrix}
ax+by \\
cx+dy
\end{pmatrix}\, .$$
Here, \(r\) and \(s\) give the coordinates of \(w\) in terms of the vectors \(v_{1}\) and \(v_{2}\). In the previous example, we multiplied the vector by the matrix \(L\) and came up with a complicated expression. In these coordinates, we see that \(L\) has a very simple diagonal matrix, whose diagonal entries are exactly the eigenvalues of \(L\).
This process is called diagonalization. It makes complicated linear systems much easier to analyze.
Now that we've seen what eigenvalues and eigenvectors are, there are a number of questions that need to be answered.
How do we find eigenvectors and their eigenvalues?
How many eigenvalues and (independent) eigenvectors does a given linear transformation have?
When can a linear transformation be diagonalized?
We'll start by trying to find the eigenvectors for a linear transformation.
Let \(L \colon \Re^{2}\rightarrow \Re^{2}\) such that \(L(x,y)=(2x+2y, 16x+6y)\). First, we find the matrix of \(L\):
\[\begin{pmatrix}x\\y\end{pmatrix}\stackrel{L}{\longmapsto} \begin{pmatrix}
2 & 2 \\
16 & 6
\end{pmatrix}\begin{pmatrix}x\\y\end{pmatrix}.\]
We want to find an invariant direction \(v=\begin{pmatrix}x\\y\end{pmatrix}\) such that
\[Lv=\lambda v\]
or, in matrix notation,
\begin{equation*}
\begin{array}{lrcl}
&\begin{pmatrix}
2 & 2 \\
16 & 6
\end{pmatrix}\begin{pmatrix}x\\y\end{pmatrix}&=&\lambda \begin{pmatrix}x\\y\end{pmatrix} \\
\Leftrightarrow &\begin{pmatrix}
2 & 2 \\
16 & 6
\end{pmatrix}\begin{pmatrix}x\\y\end{pmatrix}&=&\begin{pmatrix}
\lambda & 0 \\
0 & \lambda
\end{pmatrix} \begin{pmatrix}x\\y\end{pmatrix} \\
\Leftrightarrow& \begin{pmatrix}
2-\lambda & 2 \\
16 & 6-\lambda
\end{pmatrix}\begin{pmatrix}x\\y\end{pmatrix}&=& \begin{pmatrix}0\\0\end{pmatrix}\, .
\end{array}
\end{equation*}
This is a homogeneous system, so it only has non-zero solutions when the matrix \(\begin{pmatrix}
2-\lambda & 2 \\
16 & 6-\lambda
\end{pmatrix}\) is singular. In other words,
\begin{equation*}
\begin{array}{lrcl}
&\det \begin{pmatrix}
2-\lambda & 2 \\
16 & 6-\lambda
\end{pmatrix}&=&0 \\
\Leftrightarrow& (2-\lambda)(6-\lambda)-32&=&0 \\
\Leftrightarrow &\lambda^{2}-8\lambda-20&=&0\\
\Leftrightarrow &(\lambda-10)(\lambda+2)&=&0\, .
\end{array}
\end{equation*}
For any square \(n\times n\) matrix \(M\), the polynomial in \(\lambda\) given by $$P_{M}(\lambda)=\det (\lambda I-M)=(-1)^{n} \det (M-\lambda I)$$ is called the \(\textit{characteristic polynomial}\) of \(M\), and its roots are the eigenvalues of \(M\).
In this case, we see that \(L\) has two eigenvalues, \(\lambda_{1}=10\) and \(\lambda_{2}=-2\). To find the eigenvectors, we need to deal with these two cases separately. To do so, we solve the linear system \(\begin{pmatrix}
2-\lambda & 2 \\
16 & 6-\lambda
\end{pmatrix}\begin{pmatrix}x\\y\end{pmatrix}= \begin{pmatrix}0\\0\end{pmatrix}\) with the particular eigenvalue \(\lambda\) plugged into the matrix.
1. [\(\underline{\lambda=10}\):] We solve the linear system
\[\begin{pmatrix}
-8 & 2 \\
16 & -4
\end{pmatrix}\begin{pmatrix}x\\y\end{pmatrix}= \begin{pmatrix}0\\0\end{pmatrix}.\]
Both equations say that \(y=4x\), so any vector \(\begin{pmatrix}x\\4x\end{pmatrix}\) will do. Since we only need the direction of the eigenvector, we can pick a value for \(x\). Setting \(x=1\) is convenient, and gives the eigenvector \(v_{1}=\begin{pmatrix}1\\4\end{pmatrix}\).
2. [\(\underline{\lambda=-2}\):] We solve the linear system
\[\begin{pmatrix}
4 & 2 \\
16 & 8
\end{pmatrix}\begin{pmatrix}x\\y\end{pmatrix}= \begin{pmatrix}0\\0\end{pmatrix}.\]
Here again both equations agree, because we chose \(\lambda\) to make the system singular. We see that \(y=-2x\) works, so we can choose \(v_{2}=\begin{pmatrix}1\\-2\end{pmatrix}\).
Our process was the following:
Find the characteristic polynomial of the matrix \(M\) for \(L\), given by \(\det (\lambda I-M)\).
Find the roots of the characteristic polynomial; these are the eigenvalues of \(L\).
For each eigenvalue \(\lambda_{i}\), solve the linear system \((M-\lambda_{i} I)v=0\) to obtain an eigenvector \(v\) associated to \(\lambda_{i}\).
David Cherney, Tom Denton, and Andrew Waldron (UC Davis)
12.2: The Eigenvalue-Eigenvector Equation
# Functions and data structures in C++ for numerical computations
Before diving into the concepts of integration and optimization with C++ and Gaussian quadrature, it's essential to understand the basics of functions and data structures in C++. These fundamental concepts will serve as the foundation for the more advanced topics in this textbook.
In C++, functions are blocks of code that perform a specific task. They can take input (called parameters) and return an output (called a return value). A function is defined by writing its return type, followed by the function name, a parenthesized parameter list `()`, and a body enclosed in braces.
```cpp
int add(int a, int b) {
return a + b;
}
```
In the example above, the function `add` takes two integers `a` and `b` as input parameters and returns their sum as an integer.
Data structures in C++ are used to store and organize data. Some common data structures include arrays, vectors, and linked lists.
## Exercise
Write a C++ function that calculates the average of an array of integers.
In addition to functions and data structures, it's important to understand the basics of loops and conditionals in C++. Loops allow you to repeat a block of code multiple times, and conditionals allow you to execute different code paths based on certain conditions.
# Introduction to integration theory and numerical methods
Integration is a fundamental concept in calculus that allows you to find the area under a curve. It is used in many fields, including physics, engineering, and economics. Numerical methods are algorithms that approximate the exact solution to a problem.
In this section, we will cover the basics of integration theory and numerical methods, including the trapezoidal rule and Simpson's rule. These methods will serve as the foundation for implementing Gaussian quadrature in C++.
## Exercise
Implement the trapezoidal rule in C++ to approximate the integral of a given function.
# The concept of Gaussian quadrature and its advantages
Gaussian quadrature is a numerical integration technique that uses a set of quadrature points and weights to approximate the value of an integral. It is named after Carl Friedrich Gauss, who first introduced the method in the 19th century.
The main advantage of Gaussian quadrature is its high accuracy and efficiency. It can accurately approximate integrals with a small number of quadrature points, making it suitable for solving complex problems in various fields.
## Exercise
Explain the concept of Gaussian quadrature and discuss its advantages.
# Implementing Gaussian quadrature in C++
## Exercise
Implement the Gauss-Legendre quadrature method in C++.
# Applying Gaussian quadrature to solve integration problems
## Exercise
Apply Gaussian quadrature to solve the following integration problem:
$$\int_{0}^{1} x^2 dx$$
# Optimization techniques for Gaussian quadrature
## Exercise
Discuss optimization techniques for Gaussian quadrature, such as adaptive quadrature and sparse quadrature.
# Comparison of various integration methods and their applications
## Exercise
Compare the performance of Gaussian quadrature, trapezoidal rule, Simpson's rule, and adaptive integration.
# Real-world examples of integration and optimization using C++ and Gaussian quadrature
## Exercise
Discuss real-world examples of integration and optimization using C++ and Gaussian quadrature.
# Advanced topics in integration and optimization with C++ and Gaussian quadrature
## Exercise
Discuss advanced topics in integration and optimization with C++ and Gaussian quadrature.
# Conclusion and future developments in the field
In this concluding section, we will summarize the main concepts covered in this textbook and discuss the future developments in the field of integration and optimization with C++ and Gaussian quadrature.
## Exercise
Write a conclusion for the textbook, highlighting the key concepts and future developments.
# References and further reading
## Exercise
List several references and further reading materials on integration and optimization with C++ and Gaussian quadrature. | Textbooks |
\begin{document}
\title{One-$p$th Riordan Arrays in the Construction of Identities}
\author{Tian-Xiao He\\ {\small Department of Mathematics}\\ {\small Illinois Wesleyan University, Bloomington, Illinois 61702, USA}\\ } \date{In Memory of Professor Leetsch C. Hsu} \maketitle
\begin{abstract} \noindent For an integer $p\geq 2$ we construct vertical and horizontal {\mbox{one-$p$th}} Riordan arrays from a Riordan array. When $p=2$ the {\mbox{one-$p$th}} Riordan arrays reduce to the well-known half Riordan arrays. The generating functions of the $A$-sequences of vertical and horizontal {\mbox{one-$p$th}} Riordan arrays are found. The vertical and horizontal {\mbox{one-$p$th}} Riordan arrays provide an approach to constructing many identities. They can also be used to verify some well-known identities readily.
\vskip .2in \noindent AMS Subject Classification: 15B36, 05A15, 05A05, 15A06, 05A19, 11B83.
\vskip .2in \noindent \textbf{Key Words and Phrases:} Riordan array, one-$p$th Riordan arrays, $A$-sequence, identities. \end{abstract}
\setcounter{page}{1} \pagestyle{myheadings} \markboth{T. X. He} {Half Riordan Arrays}
\section{Introduction}
The Riordan group is a group of infinite lower triangular matrices defined by two generating functions. Let $g\left( z\right)=g_{0}+g_{1}z+g_{2}z^{2}+\cdots $ and $f\left( z\right)=f_{1}z+f_{2}z^{2}+\cdots $ with $g_{0}$ and $f_{1}$ nonzero. Without much loss of generality we will also set $g_{0}=1$. Given $g\left( z\right) $ and $f\left( z\right),$ the matrix they define is $D=\left( d_{n,k}\right)_{n,k\geq 0}$, where $d_{n,k}=\left[ z^{n}\right] g\left( z\right) f\left( z\right) ^{k}$. For the sake of readability we often shorten $g\left(z\right) $ and $f\left( z\right) $ to $g$ and $f$ and we will denote $D$ as $\left( g,f\right)$. Essentially the columns of the matrix can be thought of as a geometric sequence with $g$ as the leading term and $f$ as the multiplier term. Two examples are the identity matrix
\begin{equation*} \left( 1,z\right) =\left[ \begin{array}{ccccc} 1 & 0 & 0 & 0 & \\ 0 & 1 & 0 & 0 & \\ 0 & 0 & 1 & 0 & \cdots \\ 0 & 0 & 0 & 1 & \\ & & \cdots & & \ddots \end{array} \right] \end{equation*} and the Pascal matrix
\begin{equation*} \left( \frac{1}{1-z},\frac{z}{1-z} \right) =\left[ \begin{array}{ccccc} 1 & 0 & 0 & 0 & \cdots \\ 1 & 1 & 0 & 0 & \\ 1 & 2 & 1 & 0 & \\ 1 & 3 & 3 & 1 & \\ & & \ldots & & \ddots \end{array} \right]. \end{equation*}
Here is a list of six important subgroups of the Riordan group (see \cite{SGWW}).
\begin{itemize} \item the {\it Appell subgroup} $\{ (g(z),\,z)\}$. \item the {\it Lagrange (associated) subgroup} $\{(1,\,f(z))\}$. \item the {\it $k$-Bell subgroup} $\{(g(z),\, z(g(z))^k)\}$, where $k$ is a fixed positive integer. \item the {\it hitting-time subgroup} $\{(zf'(z)/f(z),\, f(z))\}$. \item the {\it derivative subgroup} $\{ (f'(z), \, f(z))\}$. \item the {\it checkerboard subgroup} $\{ (g(z),\, f(z))\},$ where $g$ is an even function and $f$ is an odd function. \end{itemize}
The $1$-Bell subgroup is referred to as the Bell subgroup for short, and the Appell subgroup can be considered as the $0$-Bell subgroup if we allow $k=0$ to be included in the definition of the $k$-Bell subgroup.
The Riordan group acts on the set of column vectors by matrix multiplication. In terms of generating functions we let $d\left( z\right)=d_{0}+d_{1}z+d_{2}z^{2}+\cdots $ and $\ h\left( z\right)=h_{0}+h_{1}z+h_{2}z^{2}+\cdots$. If $\left[ d_{0},d_{1},d_{2}, \cdots \right] ^{T}$ and $\left[ h_{0},h_{1},h_{2},\cdots \right] ^{T}$ are the corresponding column vectors we observe that
\begin{equation*} \left( g,f\right) \left[d_{0},d_{1},d_{2},\cdots \right] ^{T}=\left[ h_{0},h_{1},h_{2},\cdots \right] ^{T} \end{equation*} translates to
\begin{equation*} d_{0}g\left( z\right) +d_{1}g\left( z\right) f\left( z\right) +d_{2}g\left( z\right) f\left( z\right) ^{2}+\cdots =g\left( z\right) \cdot d\left( f(z\right) )=h\left( z\right). \end{equation*} This simple observation is called the Fundamental theorem of Riordan Arrays and is abbreviated as FTRA.
The first application of the fundamental theorem is to set $d\left( z\right) =\hat g\left( z\right) \hat f\left( z\right) ^{k}$ so that
\begin{equation*} h\left( z\right) =g\left( z\right) \cdot \hat g\left( f\left( z\right) \right) \hat f\left( f\left( z\right) \right) ^{k}. \end{equation*} As $k$ ranges over $0,1,2,\cdots $ the multiplication rule for Riordan arrays emerges.
Riordan arrays play an important unifying role in enumerative combinatorics, especially in proving combinatorial identities, for instance, some results presented in \cite{Spr95}, \cite{He18}, \cite{Hsu15}, etc. This paper will define a new type of Riordan arrays and study their applications in the construction of identities.
We define the \textit{Riordan group} as the set of all pairs $(g,f)$ as above together with the multiplication operation
\begin{equation*} (g,f) (\hat g,\hat f)=(g\cdot (\hat g\circ f),\hat f\circ f). \end{equation*} The identity element for this group is $(1,z)$. If we denote the compositional inverse of $f$ as $\bar{f}$, then
\begin{equation*} (g,f)^{-1}=\left( \frac{1}{g\circ \bar{f}},\, \bar{f}\right). \end{equation*}
As an example we return to the Pascal matrix where $f=\frac{z}{1-z}$. The inverse is $\overline{f}=\frac{z}{1+z},$ $g\left( \overline{f}\right) = \frac{1}{1-\left( \frac{z}{1+z}\right) }=1+z$ and the inverse matrix starts \begin{equation*} \left( \frac{1}{1+z},\frac{z}{1+z}\right) =\left[ \begin{array}{ccccc} 1 & 0 & 0 & 0 & 0 \\ -1 & 1 & 0 & 0 & 0 \\ 1 & -2 & 1 & 0 & 0 \\ -1 & 3 & -3 & 1 & 0 \\ 1 & -4 & 6 & -4 & 1 \end{array} \right]. \end{equation*} Both the Pascal matrix and $(1/(1+z), z/(1+z))$ are pseudo-involutions in the Riordan group because their products with $(1, -z)$ are involutions.
For more information about the Riordan group see Shapiro, Getu, Woan and Woodson \cite{SGWW}, Shapiro \cite{Shapiro}, Barry \cite{Barry}, and Zeleke \cite{Zeleke}. Shapiro and the author presented palindromes of pseudo-involutions in a recent paper \cite{HS20}. For general information about such items as Catalan numbers, Motzkin numbers, generating functions and the like there are many excellent sources including Stanley \cite{Stanley, Sta} and Aigner \cite{Aig}. A short survey and an extension of Catalan numbers and Catalan matrices can be seen in \cite{He13, HS17}. Fundamental papers by Sprugnoli \cite{Spr94, Spr95} investigated the Riordan arrays and showed that they constitute a practical device for solving {\it combinatorial sums}\index{combinatorial sums} by means of the generating functions and the {\it Lagrange inversion formula}.
For a function $f$ as above, there is a sequence $a_{0},a_{1},a_{2},\cdots $ called the $A$ sequence such that
\begin{equation*} f=z\left( a_{0}+a_{1}f+a_{2}f^{2}+a_{3}f^{3}+\cdots \right). \end{equation*} The corresponding generating function is $A\left( z\right) =\sum\nolimits_{n\geq 0}a_{n}z^{n}$ so we have, in terms of generating functions, $f=zA(f)$. See Merlini, Rogers, Sprugnoli, and Verri, \cite{MRSV} for a proof and Sprugnoli and the author \cite{HS} and the author \cite{He20} for further results. The $A$ sequence enables us to inductively compute the next row of a Riordan matrix since
\begin{equation*} d_{n+1,k}=a_{0}d_{n,k-1}+a_{1}d_{n,k}+a_{2}d_{n,k+1}+\cdots. \end{equation*} The missing item is for the left most, i.e., zeroth column and there is a second sequence, the $Z$ sequence such that
\begin{equation*} d_{n+1,0}=z_{0}d_{n,0}+z_{1}d_{n,1}+z_{2}d_{n,2}+\cdots. \end{equation*} The generating function $Z=\sum_{n\geq 0}z_{n}z^{n}$ is defined by the equation $g(z)=1/(1-zZ(f(z)))$.
By substituting $z=\overline{f}$ into the equation $f = z(A(f))$, we may have $z=\overline{f}(z)A(z) =\overline{f}A$. Similarly, applying $\overline{f}$ gives us a useful alternate form of $g(z)=1/(1-z Z(f(z)))$ as $Z=(g(\bar f)-1)/(\bar f g(\bar f))$. We call $A(z)$ and $Z(z)$ the $A$ and $Z$ functions of the Riordan array $(g,f)$.
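For a concrete check of these formulas (our own illustration), take the Pascal matrix $(1/(1-z), z/(1-z))$. Here $\bar f=z/(1+z)$, so $A(z)=z/\bar f(z)=1+z$; moreover $g(\bar f)=1+z$, so $Z(z)=(g(\bar f)-1)/(\bar f g(\bar f))=z/z=1$. These generating functions encode the familiar recurrences $d_{n+1,k}=d_{n,k-1}+d_{n,k}$ and $d_{n+1,0}=d_{n,0}$ for the binomial coefficients.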
We now consider an extension of Riordan arrays called half Riordan arrays, which will be extended to one-$p$th Riordan arrays in the next section.
The entries of a Riordan array have a multitude of interesting combinatorial explanations. The central entries play a significant role. For instance, the central entries of the Pascal matrix $(1/(1-z), z/(1-z))$ are the central binomial coefficients $\binom{2n}{n}$ (see the sequence A000984 in OEIS \cite{OEIS}) that can be explained as the number of ordered trees with a distinguished point. In addition, its exponential generating function is a modified Bessel function of the first kind. Similarly, the central entries of the Delannoy matrix $(1/(1-z) , z(1+z)/(1-z))$, called the Pascal-like Riordan array, are the central Delannoy numbers $\sum^n_{k=0}\binom{n}{k}^22^k$ (see the sequence A001850 in OEIS \cite{OEIS}). The central Delannoy numbers can be explained as the number of paths from $(0,0)$ to $(n,n)$ in an $n\times n$ grid using only steps north, northeast and east (i.e., steps $(1,0)$, $(1,1)$, and $(0,1)$). In addition, the $n$th central Delannoy number is the value of the $n$th Legendre polynomial at $3$. It is interesting, therefore, to be able to give generating functions of such central terms in a systematic way. In recent papers \cite{Barry13, Barry19, Bar, He20-2, YXH, YZYH} (cf. also the references of \cite{He20-2}), it has been shown how to find generating functions of the central entries of some Riordan arrays.
Yang, Zheng, Yuan, and the author \cite{YZYH} give the following definition of half Riordan arrays (HRAs), which are called vertical half Riordan arrays in Barry \cite{Barry19} and in \cite{He20}.
\begin{definition} \label{def:3.1} Let $(g,f)=(d_{n,k})_{n,k\geq 0}$ be a Riordan array. Its related half Riordan array $(v_{n,k})_{n,k\geq 0}$, called the vertical half Riordan array (VHRA), is defined by
\be\label{3.1} v_{n,k}=d_{2n-k,n}. \ee \end{definition}
Denote $\phi=\overline{t^2/f}$. A direct approach is used in \cite{He20-2} to show that $(v_{n,k})_{n,k\geq 0}=(t\phi'(t)g(\phi)/\phi, \phi)$ based on the Lagrange inversion formula.
In \cite{He20-2}, a decomposition of $(v_{n,k})_{n,k\geq 0}$ is presented as
\be\label{3.4} \left( \frac{t\phi'(t)g(\phi)}{\phi}, \phi\right)=\left( \frac{t\phi'}{\phi},\phi\right)(g,t). \ee Decomposition \eqref{3.4} suggests a more general type of half of Riordan array $(g,f)$ defined by
\be\label{3.4-2} \left( \frac{t\phi'(t)g(\phi)}{\phi}, f(\phi)\right)=\left( \frac{t\phi'}{\phi},\phi\right)(g,f), \ee which is called the horizontal half of Riordan array (HHRA) in \cite{Barry19, He20}, in order to distinguish it from VHRA. A similar approach can be used to show that the entries of the HHRA $(h_{n,k})_{n,k\geq 0}$ of $(g,f)=(d_{n,k})_{n,k\geq 0}$ are
\be\label{3.4-3} h_{n,k}=d_{2n, n+k}, \ee while a constructive approach is presented in \cite{Barry19} and an $(m,r)$ extension can be seen in \cite{YXH}.
In the next section the VHRA and the HHRA of a given Riordan array will be extended to the {\mbox{one-$p$th}} vertical and the {\mbox{one-$p$th}} horizontal Riordan arrays of the Riordan array. Then the {\mbox{one-$p$th}} vertical transformation operators and the {\mbox{one-$p$th}} horizontal Riordan array transformation operators will be defined. We will present the relationship between the two types of {\mbox{one-$p$th}} Riordan arrays by using their matrix factorization and the Lagrange inversion formula. In Section $3$, the sequence characterizations of the two types of {\mbox{one-$p$th}} Riordan arrays and several illustrating examples are given. In Section $4$, we study transformations among Riordan arrays by using the {\mbox{one-$p$th}} Riordan array operators. The conditions for transforming Riordan arrays to pseudo-involution Riordan arrays by using the {\mbox{one-$p$th}} Riordan arrays are given. The condition for preserving the elements of a certain subgroup of the Riordan group under the {\mbox{one-$p$th}} Riordan array transformation is shown. Other properties of the halves of Riordan arrays and their entries such as related recurrence relations, double variable generating functions, and combinatorial explanations are also studied in the section. In the last section, we will show the construction of identities and summation formulae by using {\mbox{one-$p$th}} Riordan arrays.
\section{One-$p$th Riordan arrays}
The vertical and horizontal one-$p$th Riordan arrays of a Riordan array $(g,f)$ will be defined and constructed in the following two theorems.
\begin{theorem}\label{thm:4.3.6-2} Given a Riordan array $(d_{n,k})_{n,k\geq 0}=(g,f)$, for any integers $p\geq 1$ and $r\geq 0$, $(\widehat {d}_{n,k}=d_{pn+r-k,(p-1)n+r})_{n,k\geq 0}$ defines a new Riordan array, called the one-$p$th or $(p,r)$ vertical Riordan array of $(g,f)$, which can be written as
\be\label{p-1} \left( \frac{t\phi'(t)g(\phi)f(\phi)^r}{\phi^{r+1}}, \phi\right),\quad \mbox{where} \quad \phi(t)=\overline{\frac{t^{p}}{f(t)^{p-1}}}, \ee and $\bar h(t)$ is the compositional inverse of $h(t)$ $(h(0)=0$ and $h'(0)\not=0)$. Particularly, if $p=1$ and $r=0$, then $(\widehat{d}_{n,k}=d_{n-k,0})_{n,k\geq 0}$ is the Toeplitz matrix (or diagonal-constant matrix) of the $0$th column of $(d_{n,k})_{n,k\geq 0}$, and if $p=2$ and $r=0$, then $(\widehat{d}_{n,k}=d_{2n-k,n})_{n,k\geq 0}$ is the VHRA of the Riordan array $(d_{n,k})_{n,k\geq 0}$.
Moreover, the generating function of the A-sequence of the new array is $(A(f))^{p-1}=(f/t)^{p-1}$, where $A(t)$ is the generating function of the A-sequence of the given Riordan array. \end{theorem}
The Lagrange Inversion Formula (LIF) will be used in the proof. Let $F(t)$ be any formal power series, and let $\phi(t)$ and $u(t)$ satisfy $\phi=tu(\phi)$. Then the following LIF holds (see, for example, $K6'$ in Merlini, Sprugnoli, and Verri \cite{MSV}).
\be\label{3.2} [t^n]F(\phi(t))=[t^n]F(t)u(t)^{n-1}(u(t)-tu'(t)). \ee
\begin{proof} From $\phi(t)=\overline{t^{p}/f(t)^{p-1}}$ we have $\bar \phi(t)=t^p/f(t)^{p-1}$ and consequently $t=\phi(t)^p/f(\phi(t))^{p-1}$. Hence, we may write
\[ \phi=t u(\phi)\quad \mbox{where}\quad u(t)=\left(\frac{f(t)}{t}\right)^{p-1}. \] Taking derivative on the both sides of the equation $\phi=t u(\phi)$ and noting the definition of $u(t)$, we obtain
\begin{align*} \phi'(t)=&\left(\frac{f(\phi)}{\phi}\right)^{p-1}+t(p-1)\left(\frac{f(\phi)}{\phi}\right)^{p-2}\frac{f'(\phi)\phi'(t)\phi-\phi'(t)f(\phi)}{\phi^2}, \end{align*} which yields
\[ \phi'(t)=\left. \left(\frac{f(\phi)}{\phi}\right)^{p-1}\right/\left( 1-t(p-1)\left(\frac{f(\phi)}{\phi}\right)^{p-2}\frac{f'(\phi)\phi-f(\phi)}{\phi^2}\right). \] Noting $t=\phi/u(\phi)=\phi^p/f(\phi)^{p-1}$, the last expression becomes
\begin{align}\label{p-2} \phi'(t)=&\left. \left(\frac{f(\phi)}{\phi}\right)^{p-1}\right/\left( 1-\frac{p-1}{f(\phi)}(f'(\phi)\phi-f(\phi))\right)\nonumber\\ =&\frac{(f(\phi))^p}{\phi^{p-1}( f(\phi) -(p-1)(\phi f'(\phi)-f(\phi)))} \end{align} We now use \eqref{p-2}, $t=\phi^p/f(\phi)^{p-1}$, and the LIF shown in \eqref{3.2} to calculate $\widehat d_{n,k}$ for $n,k\geq 0$
\begin{align*} \widehat {d}_{n,k}=&[t^n] \frac{t\phi'(t)g(\phi)f(\phi)^r}{\phi^{r+1}}\left( \phi\right)^k\\ =&[t^n] \frac{\phi^p}{(f(\phi))^{p-1}}\frac{\phi^k(f(\phi))^{p+r}g(\phi)}{\phi^{p+r}(f(\phi) -(p-1)(\phi f'(\phi)-f(\phi)))}\\ =&[t^n]\frac{(f(\phi))^{r+1}g(\phi)}{\phi^{r-k}(f(\phi) -(p-1)(\phi f'(\phi)-f(\phi)))}\\ =&[t^n]\frac{(f(t))^{r+1}g(t)}{t^{r-k}(f(t) -(p-1)(t f'(t)-f(t)))}u(t)^{n-1}(u(t)-tu'(t)), \end{align*} where $u(t)=\left(\frac{f(t)}{t}\right)^{p-1}$ and
\[ u'(t)=(p-1)\left(\frac{f(t)}{t}\right)^{p-2}\frac{tf'(t)-f(t)}{t^2}. \] Substituting the expressions of $u(t)$ and $u'(t)$ into the rightmost expression of $\widehat d_{n,k}$, we have
\begin{align*} \widehat {d}_{n,k}=&[t^n]\frac{(f(t))^{r+1}g(t)}{t^{r-k}(f(t) -(p-1)(t f'(t)-f(t)))}\frac{(f(t))^{(p-1)(n-1)}}{t^{(p-1)(n-1)}}\\ &\quad \times \left( \frac{(f(t))^{p-1}}{t^{p-1}}-t(p-1)\frac{(f(t))^{p-2}}{t^{p-2}}\frac{tf'(t)-f(t)}{t^2}\right)\\ =&[t^n] \frac{(f(t))^{(p-1)(n-1)+r+1}g(t)}{t^{(p-1)(n-1)+r-k}(f(t)-(p-1)(tf'(t)-f(t)))}\\ &\quad \times \frac{(f(t))^{p-2}}{t^{p-1}}\left(f(t)-(p-1)(tf'(t)-f(t))\right)\\ =&[t^n]g(t)\frac{(f(t))^{(p-1)n+r}}{t^{(p-1)n+r-k}}=[t^{pn+r-k}]g(t)(f(t))^{(p-1)n+r}=d_{pn+r-k,(p-1)n+r}. \end{align*}
Particularly, if $p=1$ and $r=0$, then $(\widehat {d}_{n,k}=d_{n-k,0})_{n,k}$ is the Toeplitz matrix of the $0$th column of $(g,f)$. If $p=2$ and $r=0$, then $(\widehat {d}_{n,k}=d_{2n-k, n})_{n,k\geq 0}$ is the VHRA of $(g,f)$.
As for the $\widehat{A}_p$, the generating function of the $A$-sequence of $(\widehat {d}_{n,k})_{n,k\geq 0}$, we have $t\widehat {A}_p(\phi)=\phi$, which implies $\widehat{A}_p(t)=t/(t^p/f^{p-1})$, or equivalently,
\[ \widehat{A}_p(\bar f)=\left(\frac{t}{\bar f}\right)^{p-1}=(A(t))^{p-1}. \] Hence, $\widehat{A}_p(t)=(A(f))^{p-1}=(f/t)^{p-1}$ because $tA(f)=f$, completing the proof of the theorem. \end{proof}
\begin{theorem}\label{thm:4.3.6-3} Given a Riordan array $(d_{n,k})_{n,k\geq 0}=(g,f)$, for any integers $p\geq 1$ and $r\geq 0$, $(\tilde d_{n,k}=d_{pn+r,(p-1)n+r+k})_{n,k\geq 0}$ defines a new Riordan array, called the one-$p$th or $(p,r)$ horizontal Riordan array of $(g,f)$, which can be written as
\be\label{p-1-2} \left( \frac{t\phi'(t)g(\phi)f(\phi)^r}{\phi^{r+1}}, f(\phi)\right),\quad \mbox{where} \quad \phi(t)=\overline{\frac{t^{p}}{f(t)^{p-1}}}, \ee and $\bar h(t)$ is the compositional inverse of $h(t)$ $(h(0)=0$ and $h'(0)\not=0)$. Particularly, if $p=1$ and $r=0$, the {\mbox{one-$p$th}} Riordan array reduces to the given Riordan array, and if $p=2$ and $r=0$, the {\mbox{one-$p$th}} Riordan array is the HHRA of the given Riordan array.
Moreover, the generating function of the A-sequence of the new array is $(A(t))^p$ , where $A(t)$ is the generating function of the A-sequence of the given Riordan array. \end{theorem}
\begin{proof} We now use \eqref{p-2} above, $t=\phi^p/f(\phi)^{p-1}$, and the LIF shown in \eqref{3.2} to calculate $\tilde d_{n,k}$ for $n,k\geq 0$
\begin{align*} \tilde d_{n,k}=&[t^n] \frac{t\phi'(t)g(\phi)f(\phi)^r}{\phi^{r+1}}\left( f(\phi)\right)^k\\ =&[t^n] \frac{\phi^p}{(f(\phi))^{p-1}}\frac{(f(\phi))^{p+r+k}g(\phi)}{\phi^{p+r}(f(\phi) -(p-1)(\phi f'(\phi)-f(\phi)))}\\ =&[t^n]\frac{(f(\phi))^{r+k+1}g(\phi)}{\phi^{r}(f(\phi) -(p-1)(\phi f'(\phi)-f(\phi)))}\\ =&[t^n]\frac{(f(t))^{r+k+1}g(t)}{t^{r}(f(t) -(p-1)(t f'(t)-f(t)))}u(t)^{n-1}(u(t)-tu'(t)), \end{align*} where $u(t)=\left(\frac{f(t)}{t}\right)^{p-1}$ and from the proof of Theorem \ref{thm:4.3.6-2}
\[ u'(t)=(p-1)\left(\frac{f(t)}{t}\right)^{p-2}\frac{tf'(t)-f(t)}{t^2}. \] Substituting the expressions of $u(t)$ and $u'(t)$ into the rightmost expression of $\tilde d_{n,k}$, we have
\begin{align*} \tilde d_{n,k}=&[t^n]\frac{(f(t))^{r+k+1}g(t)}{t^{r}(f(t) -(p-1)(t f'(t)-f(t)))}\frac{(f(t))^{(p-1)(n-1)}}{t^{(p-1)(n-1)}}\\ &\quad \times \left( \frac{(f(t))^{p-1}}{t^{p-1}}-t(p-1)\frac{(f(t))^{p-2}}{t^{p-2}}\frac{tf'(t)-f(t)}{t^2}\right)\\ =&[t^n] \frac{(f(t))^{(p-1)(n-1)+r+k+1}g(t)}{t^{(p-1)(n-1)+r}(f(t)-(p-1)(tf'(t)-f(t)))}\\ &\quad \times \frac{(f(t))^{p-2}}{t^{p-1}}\left(f(t)-(p-1)(tf'(t)-f(t))\right)\\ =&[t^n]g(t)\frac{(f(t))^{(p-1)n+r+k}}{t^{(p-1)n+r}}=[t^{pn+r}]g(t)(f(t))^{(p-1)n+r+k}=d_{pn+r,(p-1)n+r+k}. \end{align*} Particularly, if $p=1$ and $r=0$, then $\tilde d_{n,k}=d_{n,k}$, while $p=2$ and $r=0$ yields $\tilde d_{n,k}=d_{2n, n+k}$, the $(n,k)$ entry of the HHRA of $(g,f)$.
Let $A(t)$ be the generating function of the $A$-sequence of the given Riordan array $(g,f)$. Then $A(f(t))=f(t)/t$. Let $A_p(t)$ be the generating function of the $A$-sequence of the Riordan array shown in \eqref{p-1-2}. Then $A_p(f(\phi))=\frac{f(\phi)}{t}$. Substituting $\overline{\phi}(t)$ for $t$ in the last equation yields
\[ A_p(f)=\frac{f(t)}{\overline{\phi}(t)}=\frac{f(t)}{t^p/(f(t))^{p-1}}=\left( \frac{f(t)}{t}\right)^p=(A(f))^p, \] i.e., $A_p(t)=(A(t))^p$ completing the proof. \end{proof}
\section{Identities related to {\mbox{one-$p$th}} Riordan arrays}
We may use Theorems \ref{thm:4.3.6-2} and \ref{thm:4.3.6-3} and the Fa\`a di Bruno formula to establish a class of summation formulae.
Let $h(t)=\sum^\infty_{n=0}\alpha_nt^n$ be a given formal power series with $h(0)=\alpha_0\not= 0$. Assume that $f(a+t)$ has a formal power series expansion in $t$, where $a\in{\mathbb R}$, and let $\bar f$ denote the compositional inverse of $f$ so that $(\bar f\circ f)(t)=(f\circ \bar f)(t)=t$. Then the composition of $f$ and $h$ in the case $h(0)=a$ still possesses a formal power series expansion in $t$, namely,
\begin{align}\label{p-3} (f\circ h)(t)=&\sum^\infty_{n=0}\left([t^n](f\circ h)(t) \right)t^n=f\left( a+\sum^\infty_{n=1}\alpha_n t^n\right)\nonumber\\ =&f(a)+\sum^\infty_{n=1}\left( [t^n] (f\circ h)(t)\right)t^n. \end{align}
Let $f^{(k)}(a)$ denote the $k$th derivative of $f(t)$ at $t=a$, i.e.,
\[
f^{(k)}(a)=(d^k/dt^k)f(t)|_{t=a}. \] Recall that the Fa\`a di Bruno formula, when applied to $(f\circ h)(t)$, may be written in the form (cf. Section 3.4 of \cite{Com74})
\be\label{p-4} [t^n](f\circ \phi)=\sum_{\sigma(n)}f^{(k)}(\phi(0))\prod^n_{j=1}\frac{1}{k_j!}\left([t^j]\phi\right)^{k_j}, \ee where the summation ranges over the set $\sigma (n)$ of all partitions of $n$, that is, over the set of all nonnegative integral solutions $(k_1,k_2,\ldots,k_n)$ of the equations $k_1+2k_2+\cdots+ nk_n=n$ and $k_1+k_2+\cdots+k_n=k$, $k=1,2,\ldots, n$. Each solution $(k_1,k_2,\ldots, k_n)$ of the equations is called a partition of $n$ with $k$ parts and is denoted by $\sigma(n,k)$. Of course, the set $\sigma(n)$ is the union of all subsets $\sigma(n,k)$, $k=1,2,\ldots, n$.
Let $\beta_n=[t^n](f\circ h)(t)$ and $h(0)=\alpha_0=a$. Then there exists a pair of reciprocal relations
\begin{align} &\beta_n=\sum_{\sigma(n)}f^{(k)}(a)\frac{\alpha^{k_1}_1\cdots \alpha^{k_n}_n}{k_1!\cdots k_n!},\label{Ex:7-3-1}\\ &\alpha_n=\sum_{\sigma(n)}\bar f^{(k)}(f(a))\frac{\beta^{k_1}_1\cdots \beta^{k_n}_n}{k_1!\cdots k_n!},\label{Ex:7-3-2} \end{align} where the summation ranges the set $\sigma (n)$ of all partitions of $n$. In fact, from \eqref{p-3} the given conditions ensure that there holds a pair of formal series expansions
\begin{align} &f\left( a+\sum_{n\geq 1}\alpha_nt^n\right)=f(a)+\sum_{n\geq 1}\beta_nt^n,\label{Ex:7-3-5}\\ &\bar f\left( f(a)+\sum_{n\geq 1}\beta_nt^n\right)=a+\sum_{n\geq 1}\alpha_nt^n.\label{Ex:7-3-6} \end{align} Thus, an application of the Fa\`a di Bruno formula \eqref{p-4} to $(f\circ \phi)(t)$, on the LHS of \eqref{Ex:7-3-5} yields the expression \eqref{Ex:7-3-1} with $[t^i]\phi=\alpha_i$, $[t^n](f\circ \phi)=\beta_n$, and $\phi(0)=a$. Note that the LHS of \eqref{Ex:7-3-6} may be expressed as $\phi(t)=((\bar f\circ f)\circ \phi)(t)=(\bar f\circ (f\circ \phi))(t)$, so that in a like manner and application of the Fa\`a di Bruno formula to the LHS of \eqref{Ex:7-3-6} gives precisely the equality \eqref{Ex:7-3-2}.
Replacing $\alpha_n$ by $x_n/n!$ and $\beta_n$ by $y_n/n!$, we see that \eqref{Ex:7-3-1} and \eqref{Ex:7-3-2} may be expressed in terms of the exponential Bell polynomials, namely,
\begin{align} &y_n=\sum^n_{k=1}f^{(k)}(a)B_{n,k}(x_1,x_2,\ldots,x_{n-k+1}),\label{Ex:7-3-3}\\ &x_n=\sum^n_{k=1}\bar f^{(k)}(a)B_{n,k}(y_1,y_2,\ldots,y_{n-k+1}),\label{Ex:7-3-4} \end{align} where $B_{n,k}(\ldots)$ is defined by (cf. Section 3.3 of \cite{Com74})
\[ B_{n,k}(x_1,x_2,\ldots,x_{n-k+1})=\sum_{\sigma(n,k)}\frac{n!}{k_1!k_2!\cdots}\left( \frac{x_1}{1!}\right)^{k_1}\left( \frac{x_2}{2!}\right)^{k_2}\cdots \] and $\sigma(n,k)$ as shown above is the set of the solutions of the partition equations for a given $k$ ($1\leq k\leq n$). $B_{n,k}=B_{n,k}(f_1,f_2,\ldots)$ is the Bell polynomial with respect to $(n!)_{n\in\mathbb{N}}$, defined as follows:
\begin{equation}\label{Bellptoom} \frac{1}{k!}(f(z))^k=\sum_{n=k}^\infty B_{n,k}\frac{z^n}{n!}\,. \end{equation} Therefore, $B_{n,k}=[z^n/n!](f(z))^k/k!$, which implies that the iteration matrix $B(f(z))$ is the Riordan array $(1, f(z))$. Now, the following important property of the iteration matrix (see Theorem A on p. 145 of Comtet \cite{Com74}, Roman \cite{Rom}, and Roman and Rota \cite{RomRot78} )
\[ B(f(g(z)))=B(g(z))B(f(z)) \] is trivial in the context of the theory of Riordan arrays, i.e.,
\[ (1,f(g(z)))=(1,g(z))(1,f(z))\,; \] and the Fa\`{a} di Bruno formula derived from the above property is an application of the FTRA.
Let $f(x)=x^p$ ($p\not= 0$). Then $\bar f(x)=x^{1/p}$ with $f^{(k)}(1)=(p)_k$ and $\bar f^{(k)}(1)=(1/p)_k$. Hence, we obtain the special cases of \eqref{Ex:7-3-1} and \eqref{Ex:7-3-2}:
\begin{align} &\beta_n=\sum_{\sigma(n)}(p)_k\frac{\alpha_1^{k_1}\cdots \alpha_n^{k_n}}{k_1!\cdots k_n!},\label{Ex:7-3-1-2}\\ &\alpha_n=\sum_{\sigma(n)}(1/p)_k\frac{\beta_1^{k_1}\cdots \beta_n^{k_n}}{k_1!\cdots k_n!}, \label{Ex:7-3-2-2} \end{align} where $k=k_1+\cdots+k_n$, $(p)_k=p (p-1)\ldots (p-k+1)$, and $(p)_0=1$. The above Fa\`a di Bruno's relations have the associated expressions
\begin{align} &\left( 1+\sum^\infty_{n=1}\alpha_nt^n\right)^p=1+\sum^\infty_{n=1}\beta_nt^n,\label{Ex:7-3-1-3}\\ &\left( 1+\sum^\infty_{n=1}\beta_nt^n\right)^{1/p}=1+\sum^\infty_{n=1}\alpha_nt^n.\label{Ex:7-3-2-3} \end{align}
As an example, if $h=a_0+a_1t$ and $f(t)=t^p$, then $f(h(t))=a_0^p(1+\alpha_1 t)^p$, where $\alpha_1=a_1/a_0$. From \eqref{Ex:7-3-1-3} we have
\[ (a_0+a_1t)^p=a_0^p\left(1+\alpha_1t\right)^p=a_0^p\left( 1+ \sum^\infty_{j=1}\beta_j t^j\right), \] where
\[ \beta_j=\sum_{\sigma(j)}(p)_k\frac{\alpha_1^{k_1}\cdots \alpha_j^{k_j}}{k_1!\cdots k_j!}=(p)_j\frac{\alpha_1^j}{j!}=\binom{p}{j}\alpha_1^j, \] which recovers the obvious expression $(a_0+a_1 t)^p=a_0^p+\sum^p_{j=1}\binom{p}{j}a_0^{p-j}a_1^j t^j$.
Similarly, if $h=a_0+a_1t+a_2t^2$, $a_0\not=0$, then
\[ (a_0+a_1t+a_2t^2)^p=a_0^p\left(1+\frac{a_1}{a_0}t+\frac{a_2}{a_0}t^2\right)^p= a_0^p\left( 1+\sum^{2p}_{j=1}\beta_jt^j\right), \] where
\[ \beta_j=\sum_{\sigma(j)}(p)_{j_1+j_2}\frac{1}{j_1!j_2!}\left(\frac{a_1}{a_0}\right)^{j_1}\left(\frac{a_2}{a_0}\right)^{j_2}=\sum^{\lfloor j/2\rfloor}_{i=0}\binom{p}{j-i}\binom{j-i}{i}\left(\frac{a_1}{a_0}\right)^{j-2i}\left(\frac{a_2}{a_0}\right)^{i}, \] where $\sigma(j)$ here runs over the solutions of $j_1+2j_2=j$.
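Though not part of the text, the quadratic case can be checked by direct expansion: the coefficient $[t^j](a_0+a_1t+a_2t^2)^p$ agrees with $\sum_{i}\binom{p}{j-i}\binom{j-i}{i}a_0^{p-j+i}a_1^{j-2i}a_2^{i}$. A small Python sketch (function names ours):

```python
from math import comb

def beta_direct(p, j, a0, a1, a2):
    # [t^j](a0 + a1*t + a2*t^2)^p by repeated polynomial multiplication
    poly = [1]
    for _ in range(p):
        new = [0] * (len(poly) + 2)
        for i, c in enumerate(poly):
            new[i] += c * a0
            new[i + 1] += c * a1
            new[i + 2] += c * a2
        poly = new
    return poly[j] if j < len(poly) else 0

def beta_closed(p, j, a0, a1, a2):
    # sum over i of C(p, j-i) C(j-i, i) a0^(p-j+i) a1^(j-2i) a2^i
    return sum(comb(p, j - i) * comb(j - i, i)
               * a0 ** (p - j + i) * a1 ** (j - 2 * i) * a2 ** i
               for i in range(max(0, j - p), j // 2 + 1))
```

Both routines agree for all $0\leq j\leq 2p$ and small integer data.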
\begin{theorem}\label{thm:4.3.6-4} Let $A(t)=\sum_{n\geq 0} a_nt^n$ $(a_0\not= 0)$ be the generating function of the $A$-sequence of the given Riordan array $(d_{n,k})_{n,k\geq 0}=(g,f)$, and let $(\tilde d_{n,k}=d_{pn+r,(p-1)n+r+k})_{n,k\geq 0}$ be the $(p,r)$ Riordan array of $(g,f)$. Then there exists the following summation formula:
\be\label{p-5} d_{p(n+1)+r,(p-1)(n+1)+r+k+1}=\sum^{n-k}_{j= 0}\beta_jd_{pn+r,(p-1)n+r+k+j}, \ee where by denoting $(p)_j=p(p-1)\ldots (p-j+1)$, $\beta_0=a_0^p$, and for $j\geq 1$ and $\alpha_i=a_i/a_0$,
\begin{align}\label{p-6} \beta_j=&[t^j](A(t))^p=a_0^p\sum_{\sigma(j)}(p)_k\frac{\alpha_1^{k_1}\cdots \alpha_j^{k_j}}{k_1!\cdots k_j!}\nonumber\\ =&a_0^p\sum^j_{i=1}\sum_{\sigma(j,i)}\binom{p}{i}\frac{i!}{k_1!k_2!\ldots}(\alpha_1)^{k_1}(\alpha_2)^{k_2}\ldots, \end{align} where $k=k_1+k_2+\cdots$. Particularly, for $A(t)=a_0+a_1t$ and $A(t)=a_0+a_1t+a_2t^2$, we have
\begin{align*} &\beta_j=\binom{p}{j}a_0^{p-j}a_1^j\quad \mbox{and}\\ &\beta_j=\sum^{\lfloor j/2\rfloor}_{i=0}\binom{p}{j-i}\binom{j-i}{i}a_0^{p-j+i}a_1^{j-2i}a_2^{i}, \end{align*} respectively. \end{theorem}
\begin{proof} Since $(f(t))^p$ is the generating function of the $A$-sequence of $(\tilde d_{n,k})$ and $\tilde d_{n,k}=d_{pn+r,(p-1)n+r+k}$, we obtain \eqref{p-5} from the definition of $A$-sequence, where $\beta_j$ can be found from \eqref{p-3} and \eqref{Ex:7-3-1-2}. \end{proof}
Using \eqref{p-5} in Theorem \ref{thm:4.3.6-4}, one may obtain many identities.
\begin{example}\label{ex:4.3.3} Consider the Pascal matrix $(1/(1-t), t/(1-t))$, whose $A$-sequence has the generating function $A(t)=1+t$. Applying \eqref{p-5}, we have
\be\label{p-7} \binom{p(n+1)+r}{(p-1)(n+1)+r+k+1}=\sum^{\min\{p,n-k\}}_{j=0}\binom{p}{j}\binom{pn+r}{(p-1)n+r+k+j}. \ee If $p=1$ and $r=0$, the above identity reduces to the well-known identity $\binom{n+1}{k+1}=\binom{n}{k}+\binom{n}{k+1}$.
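Identity \eqref{p-7}, a Vandermonde-type convolution, can be confirmed by brute force; a short Python sketch (function names ours, not part of the text):

```python
from math import comb

def lhs(p, r, n, k):
    # left-hand side of the binomial identity
    return comb(p * (n + 1) + r, (p - 1) * (n + 1) + r + k + 1)

def rhs(p, r, n, k):
    # right-hand side: sum over j up to min(p, n-k)
    return sum(comb(p, j) * comb(p * n + r, (p - 1) * n + r + k + j)
               for j in range(min(p, n - k) + 1))
```

Both sides agree for all small nonnegative $p\geq 1$, $r$, $n$, and $0\leq k\leq n$.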
Consider next the Riordan array $(1/(1-t-t^2), tC(t))$, where $C(t)=\sum^\infty_{n=0} \binom{2n}{n}t^n/(n+1)=(1-\sqrt{1-4t})/(2t)$ is the Catalan function. It can be found that the $A$-sequence of this Riordan array is $(1,1,1,\ldots)$, i.e., the $A$-sequence has the generating function $A(t)=1/(1-t)$. From \cite{GKP, HS17} we have
\be\label{4.3.4} C(t)^{k}=\sum_{n=0}^{\infty }\frac{k}{2n+k}\binom{2n+k}{n}t^{n}. \ee Thus, the $(n,k)$ entry of the Riordan array $(1/(1-t-t^2), tC(t))$ is
\begin{align*} d_{n,k}=&[t^n] \frac{1}{1-t-t^2}(tC(t))^k\\ =&[t^{n-k}]\left(\sum_{i\geq 0}F_i t^i\right) \left( \sum_{j\geq 0}\frac{k}{2j+k}\binom{2j+k}{j}t^j\right)\\ =&[t^{n-k}]\sum_{i\geq 0} \left( \sum^i_{j=0}F_{i-j}\frac{k}{2j+k}\binom{2j+k}{j}\right)t^i\\ =&\sum^{n-k}_{j=0}F_{n-k-j}\frac{k}{2j+k}\binom{2j+k}{j}.\\ \end{align*} Since
\[ (A(t))^p=(1-t)^{-p}=\sum_{i\geq 0} \binom{-p}{i}(-t)^i=\sum_{i\geq 0} \binom{p+i-1}{i}t^i, \] from \eqref{p-5} there holds the identity
\begin{align*} &\sum^{n-k}_{j=0}F_{n-k-j}\frac{(p-1)(n+1)+r+k+1}{2j+(p-1)(n+1)+r+k+1}\binom{2j+(p-1)(n+1)+r+k+1}{j}\\ =&\sum_{i\geq0}\binom{p+i-1}{i}\sum^{n-k-i}_{j=0}F_{n-k-i-j}\frac{(p-1)n+r+k+i}{2j+(p-1)n+r+k+i}\binom{2j+(p-1)n+r+k+i}{j}. \end{align*}
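Formula \eqref{4.3.4}, on which the computation of $d_{n,k}$ above rests, can be verified by raising a truncated Catalan series to integer powers; a Python sketch (names ours, not part of the text):

```python
from fractions import Fraction
from math import comb

N = 12
catalan = [comb(2 * n, n) // (n + 1) for n in range(N + 1)]  # Catalan numbers

def series_pow(a, k, trunc):
    # truncated k-th power of the power series with coefficient list a
    out = [0] * (trunc + 1)
    out[0] = 1
    for _ in range(k):
        new = [0] * (trunc + 1)
        for i, c in enumerate(out):
            if c:
                for j in range(trunc + 1 - i):
                    new[i + j] += c * a[j]
        out = new
    return out
```

The coefficients of $(C(t))^k$ computed this way match $\frac{k}{2n+k}\binom{2n+k}{n}$ exactly.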
Similarly, for the Riordan array $(C(t), tC(t))$, its $(n,k)$ entry is
\begin{align*} d_{n,k}=&[t^n] t^k(C(t))^{k+1}\\ =&[t^{n-k}]\sum_{j\geq 0}\frac{k+1}{2j+k+1}\binom{2j+k+1}{j}t^j\\ =&\frac{k+1}{2n-k+1}\binom{2n-k+1}{n-k}. \end{align*} Hence, from \eqref{p-5} we may derive the identity
\begin{align*} &\frac{(p-1)(n+1)+r+k+2}{(p+1)(n+1)+r-k}\binom{(p+1)(n+1)+r-k}{n-k}\\ =&\sum^{n-k}_{j=0} \frac{(p-1)n+r+k+j+1}{(p+1)n+r-k-j+1}\binom{p+j-1}{j}\binom{(p+1)n+r-k-j+1}{n-k-j}. \end{align*} \end{example}
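The identity just derived can also be tested directly with exact rational arithmetic; a Python sketch (function names ours, not part of the text):

```python
from fractions import Fraction
from math import comb

def lhs(p, r, n, k):
    # single ballot-type term on the left-hand side
    top = (p + 1) * (n + 1) + r - k
    return Fraction((p - 1) * (n + 1) + r + k + 2, top) * comb(top, n - k)

def rhs(p, r, n, k):
    # sum over j of the ballot-type terms weighted by C(p+j-1, j)
    total = Fraction(0)
    for j in range(n - k + 1):
        top = (p + 1) * n + r - k - j + 1
        total += (Fraction((p - 1) * n + r + k + j + 1, top)
                  * comb(p + j - 1, j) * comb(top, n - k - j))
    return total
```

Both sides agree over small ranges of $p\geq 1$, $r\geq 0$, $n$, and $0\leq k\leq n$.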
\section{More identities} The generating function $F_{m}(t)$ of the $m$th order Fuss-Catalan numbers $(F_{m}(n,$ $1))_{n\geq 0}$ is called the generalized binomial series in \cite{GKP}, and it satisfies the functional equation $F_{m}(t)=1+tF_{m}(t)^{m}$. Hence from Lambert's formula for the Taylor expansion of the powers of $F_{m}(t)$ (cf. p. 201 of \cite{GKP}), we have \begin{equation} F_{m}^{r}:=F_{m}(t)^{r}=\sum_{n\geq 0}\frac{r}{mn+r}\binom{mn+r}{n}t^{n} \label{4.3.5} \end{equation} for all $r\in {{\mathbb{R}}}$, where $F_m(t)$ is defined by \be\label{4.3.5-2} F_m(t)=\sum_{k\geq 0}\frac{(mk)!}{((m-1)k+1)!}\frac{t^k}{k!}=\sum_{k\geq 0} \frac{1}{(m-1)k+1}\binom{mk}{k}t^k. \ee For instance, \begin{align*} &F_{0}(t)=1+t,\\ &F_1(t)=\sum_{k\geq 0} t^k=\frac{1}{1-t},\\ &F_2(t)=\sum_{k\geq 0} \frac{1}{k+1}\binom{2k}{k}t^k=C(t). \end{align*} The key case \eqref{4.3.5} leads to the following formula for $F_{m}(t)$: \begin{equation} F_{m}(t)=1+tF_{m}^{m}(t). \label{4.3.6} \end{equation} Actually, \begin{eqnarray*} 1+tF_{m}^{m}(t) &=&1+\sum_{n\geq 0}\frac{m}{mn+m}\binom{mn+m}{n}t^{n+1} \\ &=&1+\sum_{n\geq 1}\frac{m}{mn}\binom{mn}{n-1}t^{n} \\ &=&\sum_{n\geq 0}\frac{1}{mn+1}\binom{mn+1}{n}t^{n}=F_{m}(t). \end{eqnarray*} For the cases $m=1$ and $2$, we have $F_{1}=1/(1-t)$ and $F_{2}=C(t)$, respectively. When $m=3$, the Fuss-Catalan numbers $\left( F_{3}\right) _{n}$ form the sequence $A001764$ (cf. \cite{OEIS}), $1,1,3,12,55,273,1428,\ldots $, which are the {\it ternary numbers}\index{ternary numbers}. The ternary numbers count the number of $3$-Dyck paths or ternary paths. The generating function of the ternary numbers is denoted as $T(t)=\sum_{n=0}^{\infty }T_{n}t^{n}$ with $T_{n}=\frac{1}{3n+1}\binom{3n+1}{n}$, and is given equivalently by the equation $T(t)=1+tT(t)^3$.
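Both the functional equation $F_m=1+tF_m^m$ and Lambert's expansion \eqref{4.3.5} for integer powers are easy to confirm on truncated series; a Python sketch (names ours, not part of the text):

```python
from fractions import Fraction
from math import comb

N = 10

def fuss(m):
    # coefficients of F_m(t) up to t^N, cf. the displayed series (m >= 1)
    return [Fraction(comb(m * n, n), (m - 1) * n + 1) for n in range(N + 1)]

def series_mul(a, b):
    # truncated product of two coefficient lists
    out = [Fraction(0)] * (N + 1)
    for i, x in enumerate(a):
        if x:
            for j in range(N + 1 - i):
                out[i + j] += x * b[j]
    return out

def series_pow(a, k):
    out = [Fraction(0)] * (N + 1)
    out[0] = Fraction(1)
    for _ in range(k):
        out = series_mul(out, a)
    return out
```

For $m=1,\dots,4$ the coefficients of $F_m^r$ match $\frac{r}{mn+r}\binom{mn+r}{n}$, and the shift $F_m(t)=1+tF_m(t)^m$ holds coefficientwise.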
We now give more examples of Theorem \ref{thm:4.3.6-2} related to Fuss-Catalan numbers. First, we establish the relation between the Fuss-Catalan numbers and the Riordan array $(\tilde g, \tilde f)=(\tilde d_{n,k})_{n,k\geq 0}$, where $\tilde d_{n,k}=d_{pn+r,(p-1)n+r+k}$ and $d_{n,k}$ is the $(n,k)$ entry of the Pascal' triangle $(g,f)=(1/(1-t), t/(1-t))$.
\begin{theorem}\label{thm:4.3.15} Let $(d_{n,k})_{n,k\geq 0}=(g,f)=(1/(1-t), t/(1-t))$ be the Pascal triangle. For any integers $p\geq 2$ and $r\geq 0$, let $(\tilde d_{n,k}=d_{pn+r,(p-1)n+r+k})_{n,k\geq 0}=(\tilde g, \tilde f)$ be the one-$p$th or $(p,r)$ Riordan array of $(g,f)$. Then
\begin{align}
& \tilde g(t)=\sum_{n\geq 0}\binom{pn+r}{n}t^n=\left.\frac{(1+w)^{r+1}}{1-(p-1)w}\right|_{w=t(1+w)^p},\label{4.3.48}\\ &\tilde f(t)= \sum^\infty_{n=1}\frac{1}{pn+1}\binom{pn+1}{n}t^n=F_p(t)-1=tF_p^p(t),\label{4.3.49} \end{align} where $F_p(t)$ is the $p$th order Fuss-Catalan function satisfying
\be\label{4.3.50} F_p\left( t(1-t)^{p-1}\right)=\frac{1}{1-t}. \ee \end{theorem}
\begin{proof} For expression \eqref{4.3.48}, we find
\begin{align*} [t^n]\tilde g=& \tilde d_{n,0}=d_{pn+r,(p-1)n+r}=\binom{pn+r}{n}\\ =&[t^n](1+t)^{pn+r}=[t^n](1+t)^r((1+t)^p)^n\\
=&\left.[t^n]\frac{(1+w)^r}{1-t(d/dw)((1+w)^p)}\right|_{w=t(1+w)^p}, \end{align*} which implies \eqref{4.3.48}.
From \eqref{p-1-2} of Theorem \ref{thm:4.3.6-3} we know that
\be\label{4.3.51} (\tilde g, \tilde f)=\left( \frac{t\phi'(t)g(\phi)f(\phi)^r}{\phi^{r+1}}, f(\phi)\right), \ee where $\phi(t)=\overline{\frac{t^{p}}{(f(t))^{p-1}}}$, and $\bar h(t)$ is the compositional inverse of $h(t)$ $(h(0)=0$ and $h'(0)\not=0)$. Moreover, the generating function of the $A$-sequence of the new array $(\tilde g, \tilde f)$ is $(A(t))^p$, where $A(t)$ is the generating function of the $A$-sequence of the given Riordan array $(g,f)$. By using the Lagrange inversion formula \[ [t^{n}](f(t))^k=\frac{k}{n}[t^{n-k}](A(t))^n, \] we have
\[ [t^n]\tilde f=\frac{1}{n}[t^{n-1}](A(t))^{pn}=\frac{1}{n}[t^{n-1}](1+t)^{pn}=\frac{1}{n}\binom{pn}{n-1}. \] Therefore,
\[ \tilde f=\sum^\infty_{n=1}\frac{(pn)!}{((p-1)n+1)!n!}t^n=\sum^\infty_{n=1} \frac{1}{pn+1}\binom{pn+1}{n}t^n=F_p(t)-1. \] Since the key equation \eqref{4.3.6} of the Fuss-Catalan function $F_p$ shows $F_p=1+tF^p_p$, we obtain \eqref{4.3.49}. From \eqref{4.3.51},
\[ f(\phi)=\tilde f(t)=tF^p_p(t). \] Therefore, noting $f(t)=t/(1-t)$ we get
\[ \frac{t}{1-t}=f(t)=\overline{\phi}F^p_p\left( \overline{\phi}\right)=\frac{t^p}{(f(t))^{p-1}}F^p_p\left( \frac{t^p}{(f(t))^{p-1}}\right)=t(1-t)^{p-1}F^p_p\left( t(1-t)^{p-1}\right), \] and \eqref{4.3.50} follows from the comparison of the leftmost side and the rightmost side of the above equation. \end{proof}
For example, if $p=2$ and $r\geq 0$, then
\[ \tilde f=tF^2_2(t)=t(C(t))^2. \] Since $w=t(1+w)^2$ has a solution
\[ w=\frac{1-2t-\sqrt{1-4t}}{2t}=C(t)-1, \] we have
\[
\tilde g=\left.\frac{(1+w)^{r+1}}{1-w}\right|_{w=t(1+w)^2}=\frac{(C(t))^{r+1}}{2-C(t)}=\frac{(C(t))^r}{\sqrt{1-4t}}=B(t) (C(t))^r, \] where $B(t)$ is the generating function for the central binomial coefficients. Thus, $(\tilde d_{n,k})_{n,k\geq 0}=(d_{2n+r, n+r+k})_{n,k\geq 0}$ is the Riordan array
\[ (\tilde g, \tilde f)=\left( B(t)C^r, t(C(t))^2\right). \]
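The expression $\tilde g=B(t)(C(t))^r$, equivalently $[t^n]\tilde g=\binom{2n+r}{n}$, can be checked by convolving truncated series; a Python sketch (names ours, not part of the text):

```python
from math import comb

N = 10
B = [comb(2 * n, n) for n in range(N + 1)]                 # central binomial coefficients
catalan = [comb(2 * n, n) // (n + 1) for n in range(N + 1)]  # Catalan numbers

def series_mul(a, b):
    # truncated product of two coefficient lists
    out = [0] * (N + 1)
    for i, x in enumerate(a):
        for j in range(N + 1 - i):
            out[i + j] += x * b[j]
    return out

def g_tilde(r):
    # coefficients of B(t) * C(t)^r, truncated at t^N
    out = B
    for _ in range(r):
        out = series_mul(out, catalan)
    return out
```

For each $r$, the $n$-th coefficient equals $\binom{2n+r}{n}$, i.e. $\tilde d_{n,0}=d_{2n+r,n+r}$.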
We need one more property of Riordan arrays, which generalizes a well-known property of the Pascal triangle and is shown in Brietzke \cite{Bri}.
\begin{theorem}\label{thm:4.3.14} Let $(d_{n,k})_{n,k\geq 0}=(g,f)$ be a Riordan array. Then for any integers $k\geq s\geq 1$ we have \be\label{4.3.46} d_{n,k}=\sum^n_{j=s}d_{n-j,k-s}[t^j](f(t))^s. \ee
Particularly, for $s=1$, $d_{n,k}=\sum^n_{j=1}f_{j}d_{n-j,k-1}$, where $f_j=[t^j]f(t)$. \end{theorem} \begin{proof} The $(n,k)$ entry of the Riordan array $(g,f)$ can be written as \begin{align*} d_{n,k}=& [t^n]g(t)(f(t))^k=[t^n]g(t)(f(t))^{k-s}(f(t))^{s}\\ =&\sum^n_{j=s}\left([t^{n-j}]g(t)(f(t))^{k-s}\right)\left([t^j] (f(t))^s\right)\\ =&\sum^n_{j=s}d_{n-j,k-s}[t^j](f(t))^{s}. \end{align*} \end{proof} \begin{example}\label{ex:4.3.6} If $(g,f)=(1/(1-t), t/(1-t))$, then $f_j=[t^j](t/(1-t))=1$ for all $j\geq 1$. We have the well-known identity \be\label{4.3.47} \sum^n_{j=1}\binom{n-j}{k-1}=\binom{n}{k}. \ee More generally, for the Pascal triangle $(g,f)=(1/(1-t), t/(1-t))$, we have \[ [t^j](f(t))^s=[t^j]\frac{t^s}{(1-t)^s}=[t^{j-s}](1-t)^{-s}=[t^{j-s}]\sum_{i\geq 0}\binom{s+i-1}{i}t^i =\binom{j-1}{s-1}. \] Consequently, \eqref{4.3.46} becomes the Chu-Vandermonde identity \[ \sum^n_{j=s}\binom{n-j}{k-s}\binom{j-1}{s-1}=\binom{n}{k}, \] which contains \eqref{4.3.47} as a special case. \end{example}
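The Chu--Vandermonde form of \eqref{4.3.46}, and hence \eqref{4.3.47}, can be confirmed by brute force; a one-function Python sketch (the name is ours, not part of the text):

```python
from math import comb

def chu_vandermonde(n, k, s):
    # sum_{j=s}^n C(n-j, k-s) C(j-1, s-1), which should equal C(n, k)
    return sum(comb(n - j, k - s) * comb(j - 1, s - 1) for j in range(s, n + 1))
```

The sum equals $\binom{n}{k}$ for all small $n$ and $k\geq s\geq 1$.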
\begin{example}\label{ex:4.3.7} For fixed integers $p\geq 2$ and $r\geq 0$, starting with the Pascal triangle and using Theorem \ref{thm:4.3.6-2}, we obtain the Riordan array $(\tilde g, \tilde f)$ whose $(n,k)$ entry is
\[ \tilde d_{n,k}=\binom{pn+r}{(p-1)n+r+k}=\binom{pn+r}{n-k} \] and whose second component is the formal power series $\tilde f(t)=tF^p_p(t)$. Thus,
\begin{align*} [t^j](\tilde f(t))^s=&[t^{j-s}]F_p^{ps}(t)=[t^{j-s}]\sum_{n\geq 0}\frac{ps}{pn+ps}\binom{pn+ps}{n}t^n\\ =&\frac{ps}{p(j-s)+ps}\binom{p(j-s)+ps}{j-s}=\frac{s}{j}\binom{pj}{j-s}. \end{align*} From the expression \eqref{4.3.46} of Theorem \ref{thm:4.3.14} we obtain the identity
\be\label{4.3.52} \sum^n_{j=s}\frac{s}{j}\binom{pj}{j-s}\binom{p(n-j)+r}{n-j-k+s}=\binom{pn+r}{n-k}. \ee Particularly, if $s=1$, then \eqref{4.3.52} becomes
\[ \sum^n_{j=1}\frac{1}{pj+1}\binom{pj+1}{j}\binom{p(n-j)+r}{n-j-k+1}=\binom{pn+r}{n-k} \] and, finally, adding to the both sides $\binom{pn+r}{n-k+1}$, we have
\[ \sum^n_{j=0}\frac{1}{pj+1}\binom{pj+1}{j}\binom{p(n-j)+r}{n-j-k+1}=\binom{pn+r+1}{n-k+1}. \]
Setting $j=i+s$, $x=ps$, $y=pk-ps+r$, and replacing $n$ by $n+k$, identity \eqref{4.3.52} becomes formula (5.62) of \cite{GKP}:
\[ \sum^n_{i=0}\frac{x}{x+pi}\binom{x+pi}{i}\binom{y+p(n-i)}{n-i}=\binom{x+y+pn}{n}. \] Substituting $p=-q$, $x=r$, and $y+pn=p$, the above identity is equivalent to the {\it Gould identity}: \[ \sum^n_{i=0}\frac{r}{r-qi}\binom{r-qi}{i}\binom{p+qi}{n-i}=\binom{r+p}{n}. \] \end{example}
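Formula (5.62) of \cite{GKP} (a Rothe--Hagen-type convolution) is also easy to test numerically for integer data; a Python sketch (the name is ours, not part of the text):

```python
from fractions import Fraction
from math import comb

def gkp_562(x, y, p, n):
    # sum_i x/(x+p*i) * C(x+p*i, i) * C(y+p*(n-i), n-i), expected to equal C(x+y+p*n, n)
    total = Fraction(0)
    for i in range(n + 1):
        total += Fraction(x, x + p * i) * comb(x + p * i, i) * comb(y + p * (n - i), n - i)
    return total
```

For $x\geq 1$ the sum equals $\binom{x+y+pn}{n}$ over small ranges; $p=0$ recovers the ordinary Vandermonde convolution.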
\noindent{\bf Acknowledgements} \medbreak The author wishes to express his gratitude and appreciation to the referee and the editor for their helpful comments and remarks.
\bigbreak
\end{document}
\begin{document}
\markright{} \markboth{
{\footnotesize\rm Kayvan Sadeghi and Nanny Wermuth }
} {
{\footnotesize\rm Sequences of regressions}
} \renewcommand{\fnsymbol{footnote}}{} $\ $\par \fontsize{12}{14pt plus.8pt minus .6pt}\selectfont
\noindent{\Large \bf Sequences of regressions and their independences\\[6mm]} {\large NANNY WERMUTH\\[2mm]} {\it Department of Mathematics, Chalmers Technical University, Gothenburg, Sweden, and International Agency of Research on Cancer, Lyon, France; email: [email protected]\\[4mm]} \noindent{\large KAYVAN SADEGHI\\[2mm]} {\it Department of Statistics, University of Oxford, UK; email: [email protected]\\[6mm] }
\noindent{\bf ABSTRACT}: Ordered sequences of univariate or multivariate regressions provide statistical models for analysing data from randomized, possibly sequential interventions, from cohort or multi-wave panel studies, but also from cross-sectional or retrospective studies. Conditional independences are captured by what we name regression graphs, provided the generated distribution shares some properties with a joint Gaussian distribution. Regression graphs extend purely directed, acyclic graphs by two types of undirected graph, one type for components of joint responses and the other for components of the context vector variable. We review the special features and the history of regression graphs, prove criteria for Markov equivalence and discuss the notion of a simpler statistical covering model. Knowledge of Markov equivalence provides alternative interpretations of a given sequence of regressions, is essential for machine learning strategies and permits one to use the simple graphical criteria of regression graphs on graphs for which the corresponding criteria are in general more complex. Under the known conditions that a Markov equivalent directed acyclic graph exists for any given regression graph, we give a polynomial time algorithm to find one such graph.
\\[-1mm]
\noindent{\it Key words}: Chain graphs, Concentration graphs, Covariance graphs, Graphical Markov models, Independence graphs, Intervention models, Labeled trees, Lattice conditional independence models, Structural equation models.
\section{Introduction} \label{intro}
A common framework to model, analyse and interpret data for several, partially ordered joint or single responses is a sequence of multivariate or univariate regressions where the responses may be continuous or discrete or of both types. Each response is to be generated by a set of its regressors, called its {\bf \em directly explanatory variables}. Based on prior knowledge or on statistical analysis, one is to decide which of the variables in a set of potentially explanatory ones are needed for the generating process. Thus, for each response, a first ordering determines what is potentially explanatory, named the past of the response, and what can never be directly explanatory, named the future. Furthermore, no variable is taken to be explanatory for itself.
Corresponding {\bf \em regression graphs} consist of {\bf \em nodes} and of {\bf \em edges coupling distinct nodes}. The {\bf \em nodes represent the variables} and the {\bf \em edges stand for conditional dependences}, directed or undirected. The directly explanatory variables for an individual response variable $Y_i$ show in the graph as the set of nodes from which arrows start and point to node $i$. These nodes are commonly named the {\bf \em parents of node i}.
Every missing edge corresponds to a conditional independence statement. Edges are {\bf \em arrows for directed dependences} and {\bf \em lines for undirected dependences} among {\bf \em variables on equal standing}, that is among components of joint responses or of context variables. Undirected dependences are often also called associations. A given regression graph reflects a particular type of study which may be a simple experiment, a more complex sequence of interventions or an observational study.
One of the common features of pure
experiments and of sequences of interventions with randomized, proportional allocation of individuals to treatments, is that, by study design, some variables can be regarded as acting just like independent random variables. For instance, in an experiment with proportional numbers of individuals assigned randomly to each level combination of several experimental conditions, the subgraph induced by the set of explanatory variables contains no edge in the corresponding regression graph, reflecting a situation like mutual independence. Similarly, with fully randomized interventions, each treatment variable has exclusively arrows starting from its node but no incoming arrow. After statistical analysis, some conditional independences may be appropriate additional simplifications which show as further missing
edges.
Sequences of interventions give a time ordering for some of the variables. A time order is also present in cohort or multi-wave panel studies and in retrospective studies which focus on investigating effects of variables at one fixed time point in the past, without the chance of intervening. By contrast, in a strictly cross-sectional study, in which observations for all variables are obtained at the same time, any particular variable ordering is only assumed rather than implied by actual time.
At the planning stage of empirical studies, the node set is ordered into sequences of single or joint responses, $Y_a,$ $Y_b$, $Y_c\dots$ that we call {\bf \em blocks of variables on equal standing} and draw in figures as {\bf \em boxes}. This determines for the following statistical analyses that within each block there are undirected edges and between blocks there are directed edges, the arrows. The first block on the left contains the {\bf \em primary responses} of $Y_a$ and the last block on the right contains {\bf \em context variables, also named the background variables}. After statistical analyses, arrows may start from nodes within any block but always end at a node in one of the blocks in the future. Thus, there are no arrows pointing to context variables and all arrows point in the same direction, from right to left. An {\bf \em intermediate variable} is a response to some variables and also explanatory for other variables so that it has both incoming and outgoing arrows in the regression graph.
As an example, we take data from a retrospective study with 283 adult females answering questions about their childhood when visiting their general practitioner, mostly for some minor health problems; see \cite{Hardt2008}. A well-fitting graph is shown in Figure \ref{childadv}. It contains two binary variables, $A,B$ and six quantitative variables. Except for the directly recorded feature age in years, all other variables are derived from answers to questionnaires, coded so that high values correspond to high scores.
The three blocks $a,b,c$ reflect here a time-ordering of vector variables, $Y_a,Y_b,Y_c$ with $Y_a$ representing the joint response of primary interest, $Y_b$ an intermediate vector variable and $Y_c$ a context vector variable. The three individual components of the primary response $Y_a$ are different aspects of how the respondent recollects aspects of her relationship to the mother. The intermediate variable $Y_b$ has two components that reflect severe distress during childhood. The three components of the context variable $Y_c$ capture background information about the respondent and about her family.
The graph of Figure \ref{childadv}, derived after statistical analyses, shows among other independences that $Y_a$ is conditionally independent of $Y_c$ given $Y_b$, written compactly in terms of sets of nodes as $\bm{a\ci c|b}$. None of the components of $Y_c$ has an arrow pointing directly to a component of $Y_a$, but sequences of arrows lead indirectly from $c$ to $a$ via $b$. \begin{figure}
\caption{A well-fitting regression graph for data on $n=283$ adult females; within boxes are $Y_a, Y_b, Y_c$; corresponding ordered partitioning of the node set on top of the boxes.}
\label{childadv}
\end{figure} This says, for instance, that prediction of $Y_a$ is not improved by knowing the context variable $Y_c$ if information on the more recent intermediate variable $Y_b$ is available. More interpretations of the independences are given later. When some edges are missing and each edge present corresponds to a substantial dependence,
the graph may also be viewed as a research hypothesis on which variables are needed to generate the joint distribution; see \cite{WerLau90}. The goodness-of-fit of such a hypothesis can be tested in future studies.
Two models are {\bf \em Markov equivalent} whenever their associated graphs capture the same {\bf \em independence structure}, that is the graphs lead to the same set of implied independence statements. Markov equivalent models cannot be distinguished on the basis of statistical goodness-of-fit tests for any given set of data. This may pose a problem in machine learning contexts. More precisely, knowledge about Markov equivalent models is essential for designing search procedures that converge with an increasing sample size to a true generating graph; see \cite{CaKo03} for searches within the class of {\bf \em directed acyclic graphs}, which consist exclusively of arrows and capture independences of ordered sequences in single response regressions.
\begin{figure}\label{MEQch}
\end{figure}
More importantly though, Markov equivalent models may offer alternative interpretations of a given well-fitting model or open the possibility of using different types of fitting algorithms.
As we shall see in Section 7, the graph for nodes $A,R,B,P,Q$ in blocks $b$ and $c$ of Figure \ref{childadv} is Markov equivalent to both graphs of Figure \ref{MEQch}.
From knowing the Markov equivalence to the graph in Figure \ref{MEQch}a),
the joint response model for $Y_b$ given $Y_a$ may also be fitted in terms of univariate regressions and from the Markov equivalence to the graph in Figure \ref{MEQch}b), one knows for instance directly, using Proposition 1 below, that sexual abuse is independent of age and schooling given knowledge about family distress and family status.
Regression graphs are a subclass of {\bf \em the maximal ancestral graphs} of \cite{RichSpir02} and these are a subclass of the {\bf \em summary graphs} of \cite{Wer10}. The two types are called {\bf \em corresponding graphs} if they result after marginalising over a node set $m$ and conditioning on a disjoint node set $c$ from a given directed acyclic graph. Both are {\bf \em independence-preserving graphs} in the sense that they give the independence structure implied by the generating graph for all the remaining nodes and further conditioning or marginalising can be carried out just as if the possibly much larger generating graph were used. The summary graph permits one, in addition, to trace possible distortions of generating dependences as they arise in conditional dependences among the remaining variables, for instance in parameters of the maximal ancestral graph models.
In the following Section 2, we introduce further concepts and the notation needed to state at the end of Section 2, some of the main results of the paper and related results in the literature. In Section 3, a well-fitting regression graph is derived for data of chronic pain patients.
Sections 4, 5 and 6 may be skipped if one wants to turn directly to formal definitions, new results and proofs in Section 7. Section 4 reviews linear recursion relations that are mimicked by graphs and lead to the standard and to special ways of combining probability statements, summarized here in Section 5. In Section 6, some of the previous results in the literature for graphs and for Markov equivalences are highlighted. The Appendix contains details of the regressions analyses in Section 3.
\section{Some further concepts and notation}
Figure \ref{hypfigbef} shows five ordered blocks, to introduce the notion of connected components of the graph to represent conditionally independent responses given their common past. \begin{figure}\label{hypfigbef}
\end{figure}
\begin{figure}\label{hypfigaft}
\end{figure}
In the example of a regression graph in Figure \ref{hypfigaft} corresponding to Figure \ref{hypfigbef}, $Y_a$ is a single response, $Y_b$ has two component variables, both of $Y_c$ and $Y_e$ have four and $Y_d$ has three. Each of the blocks $b$ to $e$ shows two {\bf \em stacked boxes}, that is subsets of nodes that are without any edge joining them. This is to indicate that disconnected components of a given response are conditionally independent given their past and that disconnected components of the context variables are completely independent.
Graphs with dashed lines are {\bf \em covariance graphs denoted by $\bm{G^N_{\mathrm{cov}}}$}, those with full lines are {\bf \em concentration graphs denoted by $\bm{G^N_{\mathrm{con}}}$}; see \cite{WerCox98}. The names are to remind one of their parametrisation in {\bf \em regular joint Gaussian distributions}, for which the covariance matrix is invertible and gives the {\bf \em concentration matrix}. A zero $ik$-element in $G^N_{\mathrm{cov}}\,$ means $i\ci k$ and a zero $ik$-element in $G^N_{\mathrm{con}}\,$ means $i\ci k|\{1, \dots, d\}\setminus\{i,k\}$; see
\cite{Wer76a} or \cite{CoxWer96}, Section 3.4.
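The parametric statement just recalled — a zero $ik$-entry of the concentration matrix of a regular joint Gaussian distribution corresponds to $i\ci k|\{1,\dots,d\}\setminus\{i,k\}$ — can be illustrated with a toy linear generating process (a sketch, not from the paper; the coefficients are ours): with $Y_1=bY_2+\varepsilon_1$, $Y_2=cY_3+\varepsilon_2$, $Y_3=\varepsilon_3$ and unit error variances, the independence $1\ci 3|2$ shows as a zero $(1,3)$-entry.

```python
from fractions import Fraction

def inverse3(S):
    # inverse of a 3x3 matrix via the adjugate, with exact rationals
    a, b, c = S[0]
    d, e, f = S[1]
    g, h, i = S[2]
    det = a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)
    adj = [[e * i - f * h, c * h - b * i, b * f - c * e],
           [f * g - d * i, a * i - c * g, c * d - a * f],
           [d * h - e * g, b * g - a * h, a * e - b * d]]
    return [[x / det for x in row] for row in adj]

b, c = Fraction(7, 10), Fraction(1, 2)
# covariance matrix implied by Y1 = b*Y2 + e1, Y2 = c*Y3 + e2, Y3 = e3
S = [[b * b * (c * c + 1) + 1, b * (c * c + 1), b * c],
     [b * (c * c + 1),         c * c + 1,       c],
     [b * c,                   c,               Fraction(1)]]
K = inverse3(S)   # the concentration matrix
```

Here $K_{13}=K_{31}=0$ while the entries for the edges $1-2$ and $2-3$ stay nonzero.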
The regression graph of Figure \ref{hypfigaft} is consistent with the first ordering in Figure \ref{hypfigbef} since no additional ordering is introduced, as it would have been by arrows within blocks $a$ to $e$. After statistical analysis, blocks of the first ordering
are often subdivided into the connected components of the graph, $g_j$, shown here in Figure \ref{hypfigaft} with the help of the stacked boxes.
When a connected component $g_j$ contains several nodes, each pair of nodes $(i,k)$ in it is connected by at least one undirected $ik$-path within $g_j$. An {\bf \em $\bm{ik}$-path} connects its endpoint nodes $i,k$ via a sequence of edges coupling distinct other nodes along the path, named {\bf \em the path's inner nodes}.
For a regression graph, $G^N_{\mathrm{reg}}\,$, the node set $N$ has an ordered partitioning into two subsets, $N=(u,v)$ distinguishing response nodes within $u$ from context nodes within $v$. The
{\bf \em connected components $\bm g_j$}, for $j=1, \dots, J$, are the disconnected, undirected graphs that remain after removing all arrows from the graph. Thus, the displayed, stacked boxes in Figure \ref{hypfigaft} are just a visual aid. We say that there is {\bf \em an edge between subsets $\bm a$ and $\bm b$} of $N$ if there is an edge with one node in $a$ and the other node in $b$. Then, the subgraph induced by nodes $a\cup b$ {\bf \em is said to be connected in $a$ and $b$}.
For any one block of stacked boxes, different orderings are possible. We speak of a {\bf \em compatible ordering} if each {\bf \em arrow} starting at a node in any $g_j$ points to a node in $g_{<j}=g_1 \cup \dots \cup g_{j-1}$, but never to a node in $g_{>j}=g_{j+1}\cup \dots \cup g_{J}$, the {\bf \em past of $\bm{g_j}$}.
{\bf \em Full lines} are edges coupling context variables within $v$. {\bf \em Dashed lines} couple joint responses within $u$. The regression graph is {\bf \em complete} if every node pair is coupled. In this case, the statistical model is {\bf \em saturated} as it is unconstrained for some given family of distributions.
Let $g_1, \dots, g_J$ denote any compatible ordering of the connected components of $G^N_{\mathrm{reg}}\,$, then a corresponding joint density factorises as
\begin{equation} f_N={\textstyle \prod _{j=1}^{J}} f_{g_j|g_{>j}}, \label{factdens}\end{equation} into sequences of regressions for the joint responses $g_j$ within $u$ and into separate concentration graph models for the disconnected $g_j$ within $v$.
In a {\bf \em generating process of $\bm{f_N}$ over a regression graph}, one starts with the density of $g_{J}$, continues with that of $g_{J-1}$ given $g_{J}$, and so on up to the density of $g_1$ given $g_{>1}$, so that \eqref{factdens} is used for one given compatible ordering of the node set $N$. Every $ik$-edge present denotes a non-vanishing conditional dependence of $Y_i$ and $Y_k$ given some vector variable $Y_c$, written as $i\pitchfork k|c$, so that the graph is said to represent a {\bf \em dependence base} or to capture a dependence structure. The generating process attaches the following meaning to each $ik$-edge present in $G^N_{\mathrm{reg}}\,$
\begin{eqnarray} \label{pairw}& (i)& \n i\pitchfork k | g_{>j} \quad \text{ for } i, k \text{ both in a response component } g_j \text{ of } u \nonumber\\
&(ii)& \n i \pitchfork k| g_{>j}\setminus \{k\} \nn \text{ for } i \text{ in } g_j \text{ of } u \text{ and } k \text{ in } g_{>j}\\
&(iii) &\n i\pitchfork k| v \setminus\{i,k\} \text { for } i, k \text{ both in a context component } g_{j} \text{ of } v. \nonumber
\end{eqnarray}
Notice that only for context variables, conditioning is on all other context variables, while for responses conditioning is exclusively on variables in their past. When the dependence sign $\pitchfork$ is replaced by the independence sign $\ci$, equations \eqref{pairw} give, with missing edges for node pairs $i,k$, the {\bf \em pairwise independence statements defining the independence structure of $\bm{G^N_{\mathrm{reg}}}$}, given the composition and the intersection property discussed below.
An equivalent, more compact description of the set of defining pairwise independences and a proof of equivalence of this {\bf \em pairwise Markov property} to the global Markov property has been given for the class of mixed loopless graphs, which contain regression graphs as a subclass; see Sadeghi and Lauritzen (2011); see also \cite{KangTian09}, \cite{PeaPaz87}, \cite{MarLup11} for relevant, previous results.
A {\bf \em global Markov property} permits one to read off from the graph all independence statements implied by it.
Equation \eqref{pairw}$(i)$ holds for the conditional covariance graphs of joint responses
$g_j$ within $u$ having dashed lines as edges, \eqref{pairw}$(ii)$ for the dependences of the single responses within $g_j$ on variables in the past of $g_j$ having arrows as edges and equation \eqref{pairw}$(iii)$ for the concentration graph of the context variables within $v$ having full lines as edges. For instance, from the definition of the missing edges corresponding to \eqref{pairw}, one can derive for Figure \ref{childadv}, $S\ci U|bc$ by \eqref{pairw}$(ii)$, $P\ci Q|B$ by \eqref{pairw}$(iii)$, and both $A\ci B|PQ$ and $A\ci P|BQ$ by \eqref{pairw}$(i)$ using first principles and the two special properties of the generated distributions named composition and intersection.
Notice that each missing edge of a regression graph corresponds to an independence statement for the uncoupled node pair; see also Lemma \ref{lem:21} and Lemma \ref{lem:22} below. Therefore, regression graphs represent one special class of the so-called {\bf \em independence graphs}. Whenever a regression graph $G^N_{\mathrm{reg}}\,$ with $a\cup b=N$ consists of {\bf \em two disconnected graphs}, for $Y_a$ and $Y_b$ say, then, since no path leads from a node in $a$ to a node in $b$, one has $a\ci b$, that is $f_N=f_a f_b$, and the two vector variables may be analysed separately. Therefore, in Section 7 of this paper we treat only connected regression graphs.
All {\bf \em graphs discussed} in this paper {\bf \em have no loops}, that is no edge connects a node to itself and they have {\bf \em at most one edge between two different nodes}. Recall that an $ik$-path in such a graph can be described by a sequence of its nodes. By convention, an $ik$-path without inner nodes is an edge. For every $ik$-edge, the endpoints differ, $i\neq k$. An $ik$-path with $i=k$ has at least three nodes and is called a {\bf \em cycle}.
A three-node path of arrows may contain only one of the three types of inner nodes shown in Figure \ref{dagV}, called {\bf \em transition, source and sink node}, respectively.
\begin{figure}\label{dagV}
\end{figure}
A {\bf \em path is directed} if all its inner nodes are transition nodes. In a {\bf \em directed cycle}, all edges are arrows pointing in the same direction and one returns to a starting node following the direction of the arrows. A regression graph contains no directed cycle and no {\bf \em semi-directed cycles}, which have at least one undirected edge in an otherwise directed cycle. If an arrow starts on a directed $ik$-path at $k$ and points to $i$ then node $k$ has been named an {\bf \em ancestor} of node $i$ and node $i$ a {\bf \em descendant} of node $k$.
The {\bf \em subgraph induced by a subset $\bm a$ } of the node set $N$ consists of the nodes within $a$ and of the edges present in the graph within $a$. A special type of induced subgraph, needed here, consisting of three nodes and two edges, is named a {\bf \em {\sf V}-configuration} or just a {\sf V}. Thus, a three-node path forms a $\sf V$ if the induced subgraph has two edges.
An {\bf \em $\bm{ik}$-path is chordless} if for each of its three consecutive nodes $(h,j,k)$, coupled by an $hj$-edge and a $jk$-edge, there is no additional $hk$-edge present in the graph. In a {\bf \em chordless cycle} of four or more nodes, the subgraph induced by every three consecutive nodes forms a $\sf V$ in the graph. An {\bf \em undirected graph is chordal} if it contains no chordless cycle in four or more nodes.
In regression graphs, there may occur the three types of {\bf \em collision {\sf V}s} of Figure \ref{collV}. \begin{figure}\label{collV}
\end{figure}
Notice that in a directed acyclic graph, the only possible collision {\sf V} is directed and coincides with the sink {\sf V} of Figure \ref{dagV}c).
An important common feature of the three {\sf V}s of Figure \ref{collV} is that the inner node is excluded from every independence statement for its endpoints; see \eqref{pairw} and Lemma \ref{lem:21}.
In all other five possible types of {\sf V}-configurations of a regression graph, named {\bf \em transmitting {\sf V}s}, the inner node is instead included in the independence statement for the endpoints; see \eqref{pairw} and Lemma \ref{lem:22} below. Notice that for uncoupled endpoints, both paths a) and b) of Figure \ref{dagV} are transmitting {\sf V}s.
Similarly, the definition of transmitting and collision nodes remains unchanged if the {\sf V}s in Figure \ref{collV}
are interpreted as $ik$-paths for which there may be an additional $ik$-edge present in the graph.
A {\bf \em collision path} has as inner nodes exclusively collision nodes, while a {\bf \em transmitting path} has as inner nodes exclusively transmitting nodes. A chordless collision path in four nodes contains at least one dashed line. In particular, it is impossible to replace all the edges in such a four-node path by arrows without generating at least one transmitting {\sf V}. Such a replacement would change the meaning of the missing edge and hence contradict its unique definition given by the generating process. The {\bf \em skeleton} of a graph results by replacing each edge present by a full line. Now, two of the main new results of this paper can be stated.
\begin{theorem}\label{thm:1} Two regression graphs are Markov equivalent if and only if they have the same skeleton and the same sets of collision {\sf V}s, irrespective of the type of edge. \end{theorem}
\begin{theorem} \label{thm:2} A regression graph with a chordal graph for the context variables can be oriented to be Markov equivalent to a directed acyclic graph in the same skeleton, if and only if it does not contain any chordless collision path in four nodes.
\end{theorem}
Sequences of regressions were introduced and studied, without specifying a concentration graph model for the context variables, by \cite{CoxWer93}, \cite{WerCox04}, under the name of multivariate regression chains, reminding one of the sequences of unconstrained models that the class contains for Gaussian joint responses. An extension to graphs including a concentration graph had already been proposed for directed acyclic graphs by \cite{KiiSpeCar84}. By this type of extension, the global Markov property of the graph remains unchanged.
A criterion for Markov equivalence of summary graphs has been derived by \cite{Sadeghi09} who also shows that two different criteria for maximal ancestral graphs are equivalent, those due to \cite{ZhaZheLiu05} and to \cite{AliRicSpi09}. These available Markov equivalence results and the associated proofs increase considerably in complexity, the larger the model class. On the other hand, the Markov equivalence criterion of Theorem \ref{thm:1} is simple and includes as special cases all available equivalence results for directed acyclic graphs, for covariance graphs and for concentration graphs, as set out in detail in Sections 6 and 7 here.
For context variables taken as given, Gaussian regression graph models coincide with a large subclass of structural equation models (SEMs): those permitting local modeling due to the factorisation property \eqref{factdens} and being without any {\bf \em endogenous responses}. Such responses have residuals that are correlated with some of their regressors, so that the so-called endogeneity problem is generated, by which, for joint Gaussian distributions, a zero equation parameter need not correspond to any conditional independence statement and a nonzero equation parameter is not a measure of conditional dependence. The consequence is that ordinary least-squares estimates of such equation parameters are typically strongly distorted. This was recognized by \cite{Havelm43}, who received a Nobel prize in economics for this insight in 1989.
For traditional uses of SEMs see, for instance, \cite{Jor81}, \cite{Boll89}, \cite{Kline06}, while \cite{Pea09} advocates SEMs as a framework for causal inquiries. In the econometric literature forty years ago, independences were always regarded as `overidentifying' constraints.
For discrete variables, more attractive features of regression graph models were derived by \cite{Drton09}, who speaks of chain graph models of type IV for multivariate regression chains in the case where all variables on equal standing have covariance graphs. He proves that each member of this class belongs to a curved exponential family; for a discussion of this notion see, for instance, \cite{Cox06}, Section 6.8. Discrete type IV models also form a subclass of marginal models; see \cite{RudBerNem10}, \cite{BerRud02}. Local independence statements that involve only variables in the past are equivalent to more complex local independences used by \cite{Drton09}; see \cite{MarLup11}. These local definitions imply the pairwise independence formulation for missing edges corresponding to equation \eqref{pairw} for any regression graph, $G^N_{\mathrm{reg}}\,$.
Two other types of chain graph have been studied as joint response models in statistics, the so-called {\bf \em AMP chain graphs} of \cite{AndMadPer97}, and the {\bf \em LWF chain graphs} of \cite{LauWer89} and \cite{Fryd90}. They use the same factorisation as in equation \eqref{factdens}, but they are suitable for modeling data from intervention studies only when they are Markov equivalent to a regression graph. The reason is that the conditioning set for pairwise independences of responses includes in general other nodes within the same connected component. For AMP graphs, the independence form of equation \eqref{pairw} $(i)$ is replaced by
$$(i') \nn \nn i\ci k | g_{>j-1}\setminus \{i,k\} \quad \text{ for } i, k \text{ both within a response component } g_{j} $$ while \eqref{pairw} $(ii)$ and \eqref{pairw} $(iii)$ remain unchanged. For LWF graphs, $(i)$ is also replaced by $(i')$ and the independence form of $(ii)$ by
$$ (ii') \nn \nn i \ci k| g_{>j-1}\setminus \{i,k\} \nn \text{ for } i \text{ within a } g_j \text{ and } k \text{ in } g_{>j}.$$ As a consequence, each undirected subgraph in an AMP chain graph is a concentration graph, and an LWF chain graph consists of sequences of concentration graphs. For the corresponding different types of parametrisations of joint Gaussian distributions see \cite{WerWieCox06}.
Not yet systematically approached is the search for {\bf \em covering models that capture most but not all independences} in a more complex graph but which may be easier to fit than the reduced model; see \cite{CoxWer90}. For regression graphs, details are explained here for a small example in Section 4, and in Section 7, first results are given in Propositions \ref{MEQtoAMP} to \ref{MEQtoCON} and discussed using Figures \ref{intersect} and \ref{indconc}.
Before we turn to the different types of missing edges in more detail, we derive a well-fitting regression graph for data given by \cite{Kappesser97}.
\section{Deriving and interpreting a regression graph}
For 201 chronic pain patients, the role of the site of pain during a three week stay in a chronic
pain clinic was to be examined.
In this study, it was of main interest to investigate the changes
in two main symptoms before and after stationary treatment and to understand determinants of the overall treatment success as
rated by the patients, three months after they had left the clinic. Figure \ref{figfopain} shows a first ordering of the variables derived in discussions between psychologists, physicians and statisticians.
The first ordering of the variables gives for each single or joint response a list of its possible explanatory variables,
shown in boxes to the right, but in Figure \ref{figfopain} only those variables are displayed that remained after statistical analyses relevant for the responses of main interest.
Selecting for each response all its directly explanatory variables from this list and checking for remaining dependences among components of joint responses provides enough insight to derive a well-fitting regression graph model. With this type of local modeling, the reasons for the model choice are made transparent.
Of the available background variables, age, gender, marital status and others, only the binary variables,
level of formal schooling (1:=less than ten years, 2:= ten or more years) and the number of previous illnesses in years (min:=0, max:=16) are displayed in the far right box as the relevant context variables. The response of primary interest, self-reported success of treatment, is listed in the box to the far left. It is a score that ranges between 0 and 35, combining a patient's answers to a specific questionnaire.
\begin{figure}
\caption{\small First ordering of variables in the chronic pain study. There are two joint responses, intensity of pain and depression. They are the main symptoms
of chronic pain, measured here before and after treatment. The components of each response are to be modeled conditionally given the variables listed in boxes to their right. }
\label{figfopain}
\end{figure}
There are a number of intermediate variables. These are both explanatory for some variables and responses to others.
Of these, two are regarded as joint responses since they represent
two symptoms of a patient, intensity of pain and depression. Both are measured before treatment and directly after the three-week stationary stay. Questionnaire scores are available of depression
(min:=0, max:=46) and of the self-reported intensity of pain (min:=0, max:=10). Chronicity of pain is a score (min:=0, max:=8) that
incorporates different aspects, such as the frequency and duration of pain attacks, the spreading of pain and the use of pain relievers.
In this study, the patients have one of two main sites of pain, the pain is either
on their upper body, `head, face, or neck' or on their `back'.
A well-fitting regression graph is shown in Figure \ref{figregpain}. The graph summarizes some important aspects of the results of the statistical analyses for which details are given in the Appendix. In particular, it tells which of the variables are directly explanatory, that is which are important for generating and predicting a response, by showing arrows that start from each of these directly explanatory variables and point to the response.
\begin{figure}
\caption{\small Regression graph, well compatible with the data, that results from the reported statistical analyses. Discrete variables are drawn as dots, continuous ones as circles.}
\label{figregpain}
\end{figure}
Variables listed to the right of a response but without an arrow ending at this response do not substantially improve the prediction of the response when used in addition to the directly explanatory variables. For instance, for treatment success, only the pain intensity after the clinic stay is directly explanatory and this pain intensity is an important mediator (intermediate variable) between treatment success and site of pain.
\begin{figure}
\caption{\small Form of dependence of primary response $Y$ on $Z_{a}$. }
\label{figpainquad}
\end{figure}
Scores of self-reported treatment success are low for almost all patients with high pain scores after treatment, that is, for scores higher than 6; see Figure \ref{figpainquad}. Otherwise, treatment success is typically judged to be higher the lower the intensity of pain after treatment. This explains the nonlinear dependence of $Y$ on $Z_a$.
As mentioned before, for back pain patients, the chronicity scores are on average higher than for head-ache patients and connected with a higher chronicity of the pain are higher scores of depression.
These patients may possibly have tried too late, after the acute pain had started, to get well focused help. Both before and after treatment, highly depressed patients tend to report higher intensities of pain than others.
The study provides no information on which variables may explain these dependences between the symptoms that remain after having taken the available explanatory variables into account. However, hidden common explanatory variables may exist in both cases since these remaining dependences between the symptoms do not depend systematically on any other observed variable.
Some variables are {\bf \em indirectly explanatory}. An arrow starts from an indirectly explanatory variable and points via a sequence of arrows and intermediate variables to the response variable. For instance, the level of formal schooling and the site of pain are both indirectly explanatory for each of the symptoms after treatment and for the overall treatment success.
Once the types and directions of the direct dependence are taken into account, the regression graph
helps to trace the development of chronic pain, starting from the context information on the level of schooling
and the number of previous illnesses of a patient. Thus, patients with more years of formal schooling are more likely to be chronic head-ache patients. Patients with a lower level of formal schooling are more likely to be back-ache patients, possibly because more of them have jobs involving hard physical work.
Back-ache patients reach higher stages of the chronicity of pain and report higher intensity of pain
still after treatment and are therefore typically less satisfied with the treatment they had received.
{\bf \em Graphical screening for nonlinear relations} and interactive effects (Cox and Wermuth, 1994) pointed to the nonlinear dependence of treatment success on intensity of pain after treatment but to no other such relations. The regression graph model is said to fit the data well because for each single response separately, there is no indication that adding a further variable would substantially change the generated conditional dependences. The seemingly unrelated dependences of the symptoms after treatment on those before treatment agree so well with the observations that they also differ little from regressions computed separately; see the appropriate tables in the Appendix.
Had there been no nonlinear relation and no categorical variables as responses, the overall model fit could also have been tested within the framework of structural equation models once the regression graph is available. This graph is derived here with the local modeling steps that use the first ordering of the variables, just in terms of univariate, multivariate and seemingly unrelated regressions. The regression graph provides a hypothesis that may be tested locally and/or globally in future studies that include the same
set of nine variables. In this case, no variable selection strategy would be used or needed.
The available results for changes of the regression graph (Wermuth, 2011) that result after marginalising and conditioning
provide a solid basis for comparing the results of any sequence of regressions with studies that contain the same
set of core variables but which have some of the variables omitted or which consider subpopulations,
defined by levels or level combinations of other variables. For instance for comparisons with the current study, the same chronicity score may not be recorded in another pain clinic or data may be available only for
patients with pain in the upper body.
{\bf \em The main substantive results of this empirical study} are that the site of pain needs to be taken into account also in future studies, since it is an important mediator between the intrinsic characteristics of a patient, measured here by the given context variables, and both the overall treatment success and the symptoms after treatment. For back-ache patients, the chronicity of pain and the depression score are higher than for the head-ache patients, and the treatment is less successful since the intensity of pain remains high after the treatment in the clinic.
In the following section, we give three-variable examples of a Gaussian joint response regression and of the three subclasses of regression graphs that have only one type of edge, namely the covariance, the concentration and the directed acyclic graph, in order to discuss the different types of conditional dependences and the possible types of independence constraints associated with the corresponding regression graphs.
\section{Regressions, dependences and recursive relations}
For a quantitative response with linear dependences, the simple regression model dates back more than two centuries. The fitting of a least-squares regression line was developed separately by Carl Friedrich Gauss (1777--1855), Adrien-Marie Legendre (1752--1833) and Robert Adrain (1775--1843). The method extends directly to models with several explanatory variables.
The most studied regression models are for joint Gaussian distributions. Regression graphs mimic important features of these linear models but represent also relations in other distributions of continuous and discrete variables, which permit in particular nonlinear and interactive dependences. In a regular joint Gaussian distribution, let the mean-centered vector variable $Y$ have dimension three, then we write the covariance matrix, $\Sigma$, and the concentration matrix $\Sigma^{-1}$, with graphs shown in Figure \ref{figcovcon3}, as $$ \Sigma=\begin{pmatrix} \sigma_{11} & \sigma_{12} &\sigma_{13} \\ . & \sigma_{22 } &\sigma_{23} \\ .&.& \sigma_{33} \end{pmatrix}, \nn \Sigma^{-1}=\begin{pmatrix} \sigma^{11} & \sigma^{12} &\sigma^{13} \\ . & \sigma^{22 } &\sigma^{23} \\ .&.& \sigma^{33} \end{pmatrix},$$ where the dot-notation indicates entries in a symmetric matrix.
\begin{figure}\label{figcovcon3}
\end{figure} With the edge of node pair $(1,2)$ removed, both graphs turn into a {\sf V} but have different interpretations. The resulting independence constraints are for Figures \ref{figcovcon3} a) and b), respectively,
$$ 1\ci 2 \iff (\sigma_{12}=0) \nn \text{ and } \nn 1\ci 2|3 \iff (\sigma^{12}=0),$$ where the latter derives as an important property of concentration matrices; for proofs see \cite{CoxWer96}, Section 3.4 or \cite{WerCoxMar06}, Section 2.3. For other distributions, the independence interpretation of these two types of undirected graph remains unchanged, but not the parametrisation. A similar statement holds for directed acyclic graphs and, more generally, for regression graphs.
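The two interpretations can be illustrated numerically. The following sketch (with illustrative matrices, not taken from the paper's example) shows that a zero entry in $\Sigma$ does not give a zero entry in $\Sigma^{-1}$, and vice versa:

```python
import numpy as np

# Marginal independence 1 ci 2: sigma_12 = 0 in the covariance matrix.
Sigma = np.array([[1.0, 0.0, 0.5],
                  [0.0, 1.0, 0.4],
                  [0.5, 0.4, 1.0]])
K = np.linalg.inv(Sigma)            # concentration matrix

print(Sigma[0, 1])                  # 0.0 -> 1 ci 2 (marginal independence)
print(K[0, 1])                      # nonzero -> 1 and 2 are dependent given 3

# Conversely, force sigma^{12} = 0 by starting from a concentration matrix.
K2 = np.array([[ 2.0,  0.0, -0.6],
               [ 0.0,  2.0, -0.5],
               [-0.6, -0.5,  2.0]])
Sigma2 = np.linalg.inv(K2)
print(Sigma2[0, 1])                 # nonzero -> 1, 2 marginally dependent, yet 1 ci 2 | 3
```

This makes concrete why the same missing edge carries a different independence statement in a covariance graph than in a concentration graph.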
For the linear equations that lead to a complete directed acyclic graph for a trivariate Gaussian distribution with mean zero, one starts with three mutually independent Gaussian residuals $\varepsilon_i$ and takes the following
system of equations, in which for instance $\beta_{1|3.2}$ is a regression coefficient for the dependence of response $Y_1$ on $Y_3$ when $Y_2$ is an additional regressor. Because of the form of the equations, one speaks of triangular systems also when the distribution of the residuals is not Gaussian, but the residuals are just uncorrelated, or expressed equivalently, if each residual is uncorrelated with the regressors in its equation:
$$ Y_1=\beta_{1|2.3}Y_2+\beta_{1|3.2} Y_3+\varepsilon_1$$
\begin{equation}Y_2=\beta_{2|3} Y_3+\varepsilon_2 \label{triangeq3}\end{equation} $$Y_3=\varepsilon_3.$$
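As a hedged numerical sketch of such a triangular system (the coefficient values are illustrative, not from the paper), one may simulate equations \eqref{triangeq3} and check that least-squares fitting recovers the equation parameters:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# Illustrative values for beta_{1|2.3}, beta_{1|3.2}, beta_{2|3}.
b12, b13, b23 = 0.7, -0.4, 0.5

# Mutually independent residuals and the triangular generating process.
e1, e2, e3 = rng.standard_normal((3, n))
Y3 = e3
Y2 = b23 * Y3 + e2
Y1 = b12 * Y2 + b13 * Y3 + e1

# Least-squares regression of Y1 on (Y2, Y3) recovers the equation parameters.
X = np.column_stack([Y2, Y3])
bhat = np.linalg.lstsq(X, Y1, rcond=None)[0]
print(bhat)   # close to [0.7, -0.4]
```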
When the residuals do not follow Gaussian distributions, the probabilistic independence interpretation is lost, but the lack of a linear relation can be inferred from any vanishing regression coefficient.
In econometrics, Hermann Wold (1908--1992) introduced such systems as linear recursive equations with uncorrelated residuals. Harald Cram\'er (1893--1985) used the term linear least-squares equations for residuals in a population being uncorrelated with the regressors, and the notation for the regression coefficients is an adaptation of the one introduced by Udny Yule (1871--1951) and William Cochran (1909--1980).
In joint Gaussian distributions, independence constraints on triangular systems mean vanishing equation parameters and missing edges in directed acyclic graphs, such as
$$ 1\ci 2|3 \iff (\beta_{1|2.3}=0) \nn \text{ and } \nn 2 \ci 3 \iff (\beta_{2|3}=0).$$ The complete directed acyclic graph defined implicitly with equations \eqref{triangeq3} is displayed in Figure \ref{figdagreg3}a). \begin{figure}\label{figdagreg3}
\end{figure} For the smallest joint response model with the complete graph shown in Figure \ref{figdagreg3}b), we take both Gaussian variables $Y_1$ and $Y_2$ to depend on a Gaussian variable $Y_3$, to get equations \eqref{multreg3} with residuals having zero means and being uncorrelated with $Y_3$:
\begin{equation} Y_1=\beta_{1|3}Y_3+ u_1, \nn \nn Y_2=\beta_{2|3} Y_3+u_2, \nn \nn Y_3=u_3. \label{multreg3}\end{equation}
Here, $\sigma_{12|3}={\it E}(u_1 u_2)$. The generating processes and hence the interpretations differ for the two models in equations \eqref{triangeq3} and \eqref{multreg3}. In the corresponding graphs of Figures \ref{figdagreg3}a) and \ref{figdagreg3}b), the vanishing of the edges for pairs (1,2) and (2,3) means the same independence constraints since
$$ 1\ci 2|3 \iff (\sigma_{12|3}=0) \iff (\beta_{1|2.3}=0) \nn \text{ and } \nn 2 \ci 3 \iff ( \beta_{2|3}=0),$$
but the edges for pair (1,3) capture different dependences, $1\pitchfork 3$ and $1\pitchfork 3|2$, respectively. Again, taking away any edge generates a {\sf V}. Taking away any two edges means to combine two independence statements. This is discussed further in the next section.
One especially important feature of linear least-squares regressions is that the residuals are uncorrelated with the regressors. The effect is that the model part coincides with a conditional linear expectation, as illustrated here with a model for response $Y_1$ and regressors $Y_2,Y_3$, which we take, as mentioned before, to be measured in deviations from their means. For instance, one gets for
$$ Y_1=\beta_{1|2.3}Y_2+\beta_{1|3.2}Y_3+\varepsilon_1, $$
\begin{equation} E_{\mathrm{\,lin}}(Y_1|Y_2,Y_3)=\beta_{1|2.3}Y_2+\beta_{1|3.2}Y_3 \n.\end{equation}
There is a recursive relation for least-squares regression coefficients; see \cite{Coch38}, \cite{CoxWer03}, \cite{MaXieGeng06}.
It shows for instance with
\begin{equation} \beta_{1|3}=\beta_{1|3.2}+\beta_{1|2.3}\beta_{2|3} \label{rreg}\end{equation}
that $\beta_{1|3.2}$, the partial coefficient of $Y_3$ given also $Y_2$ as a regressor for $Y_1$, coincides with the marginal
coefficient, $\beta_{1|3}$, if and only if $\beta_{1|2.3}=0$ or $\beta_{2|3}=0$.
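The recursive relation \eqref{rreg} holds identically for any regular covariance matrix. A small numerical check (the positive-definite matrix below is illustrative) is:

```python
import numpy as np

# Illustrative covariance matrix for (Y1, Y2, Y3).
S = np.array([[2.0, 0.8, 0.6],
              [0.8, 1.5, 0.5],
              [0.6, 0.5, 1.0]])

# Marginal coefficient of Y3 for Y1: beta_{1|3} = sigma_13 / sigma_33.
b1_3 = S[0, 2] / S[2, 2]

# Partial coefficients from the regression of Y1 on (Y2, Y3).
b1_23, b1_32 = np.linalg.solve(S[1:, 1:], S[1:, 0])  # beta_{1|2.3}, beta_{1|3.2}

# Coefficient of Y3 for Y2: beta_{2|3} = sigma_23 / sigma_33.
b2_3 = S[1, 2] / S[2, 2]

# Recursive relation: beta_{1|3} = beta_{1|3.2} + beta_{1|2.3} * beta_{2|3}.
print(np.isclose(b1_3, b1_32 + b1_23 * b2_3))   # True
```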
The method of maximizing the likelihood was recommended by Sir Ronald Fisher (1890--1962) as a general estimation technique that applies also to regressions with categorical or quantitative responses. One of the most attractive features of the method concerns properties of the estimates. Given two models with parameters that are in one-to-one correspondence, the same one-to-one transformation leads from the maximum-likelihood estimates under one model to those of the other.
Different single response regressions, such as logistic, probit, or linear regressions, were described as special cases of the generalized linear model by \cite{NelWed72}; see also \cite{McCullNeld89}. In all of these regressions, the vanishing of the coefficient(s) of a regressor indicates conditional independence of the response given all directly explanatory variables for this response.
The general linear model with a vector response, also called multivariate linear regression, has identical sets of regressors for each component variable of a response vector variable. Maximum-likelihood estimation of regression coefficients for a joint Gaussian distribution reduces to linear least-squares fitting for each component separately; see \cite{And58}, Chapter 8.
With different sets of regressors for the components of a vector response, seemingly unrelated regressions (SUR) result and iterative methods are needed for estimation; see \cite{Zell62}.
For small sample sizes, a given solution of the likelihood equations of a Gaussian SUR model may not be unique; see \cite{DrtRich04}, \cite{Sund10}, while for exclusively discrete variables this will never happen; see \cite{Drton09}. For mixed variables, no corresponding results are available yet.
There often exists {\bf \em a covering model with nice estimation properties.} For instance, one of the above described Gaussian SUR models that requires iterative fitting has regression graph $$\circ \fra\circ \dal\circ\fla\circ \n.$$ A generating process starts with independent explanatory variables, each of which relates only to one of the two response components, but these are correlated given both regressors. There is a simple covering model, in which two missing arrows are added to the graph to obtain a general linear model. In that case, the new graph does not provide a dependence base, but closed-form maximum-likelihood estimates are available.
For a vector variable of categorical responses only, the multivariate logistic regression of \cite{GlonMcCul95} reduces to separate main-effect logistic regressions for each component of the response vector, provided that certain higher-order interactions vanish; see \cite{MarLup11}. In the context of structural equation models (SEMs), dependences of binary categorical variables are modeled in terms of probit regressions. These do not differ substantially from logistic regressions whenever the smallest and largest events occur with probability at least 0.1; see \cite{Cox66}.
Multivariate linear regressions as well as SUR models belong to the framework of SEMs even though this general class had been developed in econometrics to deal appropriately with endogenous responses. Estimation methods for SEMs were discussed in the Berkeley symposia on mathematical statistics and probability from 1945 to 1965, but some identification issues have been settled only recently; see
\cite{FoyDraDrt11} and for relevant previous results \cite{BriPea02}, \cite{StaWer05}.
In statistical models that treat all variables on equal standing, the variables are not assigned roles of responses or regressors and undirected measures of dependence are used instead of coefficients of directed dependence. In the concentration graph models, the undirected dependences are conditional given all remaining variables on equal standing.
For instance, for categorical variables, these models are better known as graphical log-linear models; see \cite{Birch63}, \cite{Causs66}, \cite{Goodm70}, \cite{BisFieHol75}, \cite{Wer76a}, \cite{DaLauSpeed80}. For Gaussian random variables, these had been introduced as covariance selection models; see \cite{Dem72}, \cite{Wer76b}, \cite{SpeedKiv86}, \cite{DrtonPer04}, and for mixed variables as graphical models for conditional Gaussian (CG) distributions; see \cite{LauWer89}, \cite{Edw00}.
For a mean-centered vector variable $Y$, the elements of the covariance matrix $\Sigma$ are $\sigma_{ij}={\it E}(Y_i Y_j)$. If $\Sigma$ is invertible, the covariances $\sigma_{ij}$ are in a one-to-one relation with the concentrations $\sigma^{ij}$, the elements of the concentration matrix $\Sigma^{-1}$. There is a recursive relation for concentrations; see \cite{Dem69}. For a trivariate distribution \begin{equation} \sigma^{23.1}=\sigma^{23}-\sigma^{12}\sigma^{13}/\sigma^{11}, \label{rrcon}\end{equation} where $\sigma^{23.1}$ denotes the concentration of $Y_2,Y_3$ in their bivariate marginal distribution. Thus, the overall concentration $\sigma^{23}$ coincides with $\sigma^{23.1}$ if and only if $\sigma^{12}=0$ or $\sigma^{13}=0$.
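The recursive relation \eqref{rrcon} can be verified numerically; the covariance matrix below is illustrative:

```python
import numpy as np

# Illustrative covariance matrix for (Y1, Y2, Y3).
S = np.array([[2.0, 0.8, 0.6],
              [0.8, 1.5, 0.5],
              [0.6, 0.5, 1.0]])
K = np.linalg.inv(S)             # overall concentration matrix

# Concentration of (Y2, Y3) in their bivariate marginal distribution.
K23 = np.linalg.inv(S[1:, 1:])
lhs = K23[0, 1]                  # sigma^{23.1}

# Recursive relation: sigma^{23.1} = sigma^{23} - sigma^{12} sigma^{13} / sigma^{11}.
rhs = K[1, 2] - K[0, 1] * K[0, 2] / K[0, 0]
print(np.isclose(lhs, rhs))      # True
```

The relation is just the Schur complement formula for inverting the concentration matrix after marginalising over $Y_1$.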
Alternatively in covariance graph models, the undirected measures for variables on equal standing are pairwise marginal dependences. For Gaussian variables, these models had been introduced as hypotheses linear in covariances; see \cite{And73}, \cite{Kau96}, \cite{Kii87}, \cite{WerCoxMar06}, \cite{ChaDrtRich07}. For categorical variables, covariance graph models have been studied only more recently; see \cite{DrtRich08}, \cite{LupMarBer09}. Again, no similar estimation results are available for general mixed variables yet.
There is also a recursive relation for covariances; see \cite{And58}, Section 2.5.
It shows for instance, for just three components of $Y$ having a Gaussian distribution, with
\begin{equation} \sigma_{12|3}=\sigma_{12}-\sigma_{13}\sigma_{23}/\sigma_{33}, \label{rrcov}\end{equation}
where $\sigma_{12|3}$ denotes the covariance of $Y_1,Y_2$ given $Y_3$. Therefore, $\sigma_{12|3}$ coincides with $\sigma_{12}$ if and only if $\sigma_{13}=0$ or $\sigma_{23}=0$. By equations \eqref{rreg}, \eqref{rrcon}, \eqref{rrcov}, a
unique independence statement is associated with the endpoints of any {\sf V} in a trivariate Gaussian distribution.
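As a sketch (illustrative covariance matrix, simulated Gaussian data), the relation \eqref{rrcov} can be checked by comparing the empirical residual covariance ${\it E}(u_1u_2)$ with $\sigma_{12}-\sigma_{13}\sigma_{23}/\sigma_{33}$:

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative covariance matrix for (Y1, Y2, Y3).
S = np.array([[2.0, 0.8, 0.6],
              [0.8, 1.5, 0.5],
              [0.6, 0.5, 1.0]])
Y = rng.multivariate_normal(np.zeros(3), S, size=500_000)

# Residuals of Y1 and Y2 after least-squares regression on Y3.
b13 = S[0, 2] / S[2, 2]
b23 = S[1, 2] / S[2, 2]
u1 = Y[:, 0] - b13 * Y[:, 2]
u2 = Y[:, 1] - b23 * Y[:, 2]

# Empirical E(u1 u2) versus sigma_12 - sigma_13 sigma_23 / sigma_33.
print(np.mean(u1 * u2), S[0, 1] - S[0, 2] * S[1, 2] / S[2, 2])
```

For this matrix the population value $\sigma_{12|3}$ is $0.8-0.6\cdot 0.5/1.0=0.5$, and the simulated residual covariance agrees up to sampling error.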
In the context of multivariate exponential families of distributions, concentrations are special canonical parameters and covariances are special moment parameters with estimates of canonical and moment parameters being asymptotically independent; see \cite{Barn78}, page 122. Regression graphs capture independence structures for more general types of distribution, where operators for transforming graphs mimic operators for transforming different parametrisations of joint Gaussian distributions; see \cite{WerWieCox06}, \cite{WieWer10}, \cite{Wer10}.
In particular, by removing an edge from any $\sf V$ of a regression graph, one introduces an additional independence
constraint just as in a regular joint Gaussian distribution. For this, the generated distributions have to satisfy the composition and intersection property in addition to the general properties, as discussed in the next section.
\section{Using graphs to combine independence statements}
We now state the four standard properties of independences of any multivariate distribution; see e.g. \cite{Daw79}, \cite{Stu05}, as well as two special properties of joint Gaussian distributions. Taken together, the six describe the combination and decomposition of independences in regression graphs, for instance those resulting from removing edges. We discuss when these six properties also apply to regression graph models.
Let $X, Y,Z$ be random (vector) variables, continuous, discrete or mixed. By using the same compact notation, $f_{XYZ}$ for a given joint density, a probability distribution or a mixture and by denoting the union of say $X$ and $Y$ by $XY$, one has
\begin{equation} X\ci Y|Z \iff (f_{XYZ}=f_{XZ} f_{YZ}/f_{Z}), \label{eqci1} \end{equation}
where for instance $f_{Z} $ denotes the marginal density or probability distribution of $Z$. Since the order of listing variables for a given density is irrelevant, {\bf \em symmetry of conditional independence} is one of the standard properties, that is
$$(i) \n X\ci Y|Z \iff Y\ci X|Z .$$
Equation \eqref{eqci1} restated for instance for the conditional distribution of $X$ given $Y$ and $Z$,
$f_{X|YZ}=f_{XYZ}/f_{YZ}$, is
\begin{equation} X\ci Y|Z \iff (f_{X|YZ}=f_{X|Z}). \label{eqci2} \end{equation}
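Equation \eqref{eqci1} can be illustrated with a small discrete distribution; the sketch below, assuming Python with numpy and illustrative probability tables, constructs a $2\times 2\times 2$ distribution with $X\ci Y|Z$ and checks the factorization.

```python
import numpy as np

# Build a 2x2x2 distribution in which X and Y are independent given Z,
# from a marginal for Z and conditional tables for X and Y given Z.
pz = np.array([0.4, 0.6])
px_z = np.array([[0.3, 0.7], [0.8, 0.2]])   # rows indexed by z
py_z = np.array([[0.5, 0.5], [0.1, 0.9]])
f = np.einsum('zx,zy,z->xyz', px_z, py_z, pz)   # f_{XYZ}, axes (x, y, z)

f_xz = f.sum(axis=1)          # marginal f_{XZ}
f_yz = f.sum(axis=0)          # marginal f_{YZ}
f_z = f.sum(axis=(0, 1))      # marginal f_{Z}

# Equation (eqci1): X _||_ Y | Z  <=>  f_{XYZ} = f_{XZ} f_{YZ} / f_{Z}.
assert np.allclose(f, f_xz[:, None, :] * f_yz[None, :, :] / f_z)
```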
When two edges are removed from a graph in Figures \ref{figcovcon3} and \ref{figdagreg3}, just one coupled pair remains, suggesting that the single node is independent of the pair.
For instance in Figure \ref{figdagreg3}a), with nodes $1,2,3$ corresponding in this order to $X,Y,Z$, removing the arrows for (1,2) and (2,3) leaves the pair (1,3) disconnected from node 2. For any joint density, implicitly generated as $f_{XYZ}=f_{X|YZ}f_{Y|Z}f_{Z} $, one has equivalently,
$$ (X \ci Y|Z \text{ and } Y\ci Z) \iff XZ\ci Y. $$ In general, the {\bf \em contraction property} is for $a,b,c,d$ disjoint subsets of $N$:
$$(ii) \n (a\ci b|cd \text{ and } b\ci c|d) \iff ac\ci b|d.$$ It has become common to say that a {\bf \em distribution is generated over a given $G^N_{\mathrm{dag}}\,$} if the distribution factorizes as specified by the graph for any compatible ordering. For instance, for a trivariate distribution generated over the collision ${\sf V}$ of Figure \ref{figdagreg3}b) obtained by removing the edge for (2,3), both orders $(1,2,3)$ and $(1,3,2)$ are compatible with the graph and
$f_{XYZ}=f_{X|YZ}f_{Y}f_{Z} $.
Conversely, suppose that $XZ \ci Y$ holds; this implies $X \ci Y$ and $Z \ci Y$ so that, for instance, the same two
edges as in Figure \ref{figdagreg3}b) are missing in the corresponding covariance graph of Figure \ref{figcovcon3}a). In general, the {\bf \em decomposition property} is for $a,b,c,d$ disjoint subsets of $N$:
$$(iii) \n a\ci bc|d \implies (a\ci b|d \text{ and } a\ci c|d).$$
In addition, $XZ \ci Y$ implies $X \ci Y|Z$ and $Z \ci Y|X$ so that for instance
the same two edges as in Figure \ref{figdagreg3}a) are missing in the corresponding concentration graph of Figure \ref{figcovcon3}b). In general, the {\bf \em weak union property} is for $a,b,c,d$ disjoint subsets of $N$:
$$(iv) \n a\ci bc|d \implies (a\ci b|cd \text{ and } a\ci c|bd).$$ Under some regularity conditions, all joint distributions share the four properties $(i)$ to $(iv)$.
Joint distributions, for which the reverse implication of the decomposition property $(iii)$ and of the weak union property $(iv)$ hold such as a regular joint Gaussian distribution, are said to have, respectively, the {\bf \em composition property} $(v)$ and the {\bf \em intersection property} $(vi)$, that is for $a,b,c,d$ disjoint subsets of $N$:
$$(v) \n (a\ci b|d \text{ and } a\ci c|d) \implies a\ci bc|d,$$
$$(vi) \n (a\ci b|cd \text{ and } a\ci c|bd) \implies a\ci bc|d.$$
The standard graph theoretical separation criterion has different consequences for the two types of undirected graph corresponding for Gaussian distributions to concentration and to covariance matrices. We say {\bf \em a path intersects subset $\bm c$} of node set $N$ if it has an inner node in $c$ and let $\{a, b, c, m\}$ partition $N$ to formulate known Markov properties. The notation is to remind one that with any independence statement $a\ci b|c$, one implicitly has marginalised over the remaining nodes in $m=N\setminus\{a\cup b\cup c\}$, i.e.\ one considers the marginal joint distribution of $Y_a,Y_b, Y_c$.
\begin{prop}\label{prop:31}{\em \cite{Lau96}}. A concentration graph, $G^N_{\mathrm{con}}\,$, implies $a\ci b|c$ if and only if every path from $a$ to $b$ intersects $c$. \end{prop} \begin{prop}\label{prop:32}{\em \cite{Kau96}.}
A covariance graph, $G^N_{\mathrm{cov}}\,$, implies $a\ci b|c$ if and only if every path from $a$ to $b$ intersects $m$. \end{prop} Notice that Proposition \ref{prop:31} requires the intersection property, since otherwise one could not conclude for three distinct nodes $h,i,k$
e.g. that ($h\ci i|k$ and $h\ci k|i$) implies $h\ci ik$, while Proposition \ref{prop:32} requires the composition property, since otherwise one could not conclude e.g. that ($h\ci i$ and $h\ci k$) implies $h\ci ik$.
\begin{coro}\label{ImplicCov} A covariance graph, $G^N_{\mathrm{cov}}\,$, or a concentration graph, $G^N_{\mathrm{con}}\,$, implies $a\ci b$ if and only if in the subgraph induced by $a \cup b$, there is no edge between $a$ and $b$. \end{coro}
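The separation criterion of Proposition \ref{prop:31} amounts to a connectivity check after deleting the conditioning set; a minimal sketch, assuming Python and an illustrative three-node chain, is the following. For Proposition \ref{prop:32}, the same routine applies with $c$ replaced by $m$.

```python
from collections import deque

def separated(edges, a, b, c):
    """Concentration-graph criterion: a _||_ b | c holds iff every path
    from a to b intersects c, i.e. a and b become disconnected once
    the nodes in c are deleted from the undirected graph."""
    adj = {}
    for u, v in edges:
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)
    start = set(a) - set(c)
    seen, queue = set(start), deque(start)
    while queue:                      # breadth-first search avoiding c
        u = queue.popleft()
        if u in b:
            return False
        for w in adj.get(u, ()):
            if w not in c and w not in seen:
                seen.add(w)
                queue.append(w)
    return True

# Chain 1 - 2 - 3: the graph implies 1 _||_ 3 | 2 but not 1 _||_ 3.
edges = [(1, 2), (2, 3)]
assert separated(edges, {1}, {3}, {2})
assert not separated(edges, {1}, {3}, set())
```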
\begin{coro}\label{indep}
A regression graph, $G^N_{\mathrm{reg}}\,$, captures an independence structure for a distribution with density $f_N$ factorizing as \eqref{factdens} if the composition and intersection property hold for $f_N$, in addition to the standard properties of each density. \end{coro}
\begin{proof} Given the intersection property $(vi)$, any node $i$ with missing edges to nodes $k,l$ in a concentration graph of node set $N$ implies $i\ci \{k,l\}| N\setminus\{i,k,l\}$ and given the composition property $(v)$, any node $i$ with missing edges to nodes $k,l$ in a covariance graph given $Y_c$ implies $i\ci \{k,l\}| c$.
\end{proof}
For purely discrete and for Gaussian distributions, necessary and sufficient conditions for the intersection property $(vi)$ to hold are known; see \cite{SanMMouRol05}. Sufficient conditions that are stronger than needed are, for joint Gaussian distributions, that they are regular and, for discrete variables, that the probabilities are strictly positive.
The composition property $(v)$ is satisfied in Gaussian distributions and for
triangular binary distributions with at most main effects in symmetric $(-1, 1)$ variables; see \cite{WerMarCox09}.
Both properties $(v)$ and $(vi)$ hold, whenever a distribution may have been generated over a possibly larger parent graph; see \cite{Wer10}, \cite{MarWer09}, \cite{WerWieCox06}. {\bf \em Parent graphs} are directed acyclic graphs that do not only capture an independence structure but are also a dependence base with a unique independence statement assigned to each {\sf V} of the graph. {\bf A distribution generated over a parent graph} mimics these properties of the parent graph.
It is known that every regression graph can be generated by a larger directed acyclic graph but not necessarily every statistical regression graph model can be generated in this way; see \cite{RichSpir02}, Sections 6 and 8.6.
One needs similar properties for distributions generated over a regression graph. {\em \bf A graph is edge-minimal for the generated distribution} if the distribution has a pairwise independence for each edge missing and a non-vanishing dependence for each edge present in the graph. For the generated distribution to have a unique independence statement assigned to each missing edge, it has to be
{\bf \em singleton transitive}, that is, for $h,i,k,l$ distinct nodes of $N$,
$$ (i\ci k|l \text{ and } i\ci k| lh) \implies (i \ci h| l \text{ or } k\ci h | l).$$ This says that, in order to have both a conditional independence of $Y_i, Y_k$ given $Y_l$ and given $Y_l, Y_h$, there has to be at least one additional independence involving the variable $Y_h$, the additional variable in the conditioning set.
For graphs representing a dependence structure, this can be expressed equivalently, as
$$ (i \pitchfork h|l \text{ and } k \pitchfork h| l \text{ and } i\ci k|l) \implies i \pitchfork k | \{l,h\}$$ and
$$ (i \pitchfork h|l \text{ and } k \pitchfork h| l \text{ and } i\ci k|\{l,h\}) \implies i \pitchfork k | l , $$ which says that in the distribution there is a unique independence statement that corresponds to each {\sf V} in the graph. For a $2\times 2\times 3$ contingency table, an example violating singleton-transitivity has been given with equation (5.4) by \cite{Birch63}.
There exist peculiar types of incomplete families of distributions; see \cite{LehSch55}, \cite{Brown86}, \cite{ManRue87}, in which independence statements connected with a $\sf V$ may have the inner node both within and outside the conditioning set; see \cite{WerCox04}, Section 7, \cite{Darr62}. Such independences have also been characterized as being not representable in joint Gaussian distributions; see \cite{LnenMatus07}. These distributions, and those that are faithful to graphs, are of limited interest in applications in which one wants to interpret sequences of regressions.
{\bf \em Distributions are said to be faithful to a graph} if every one of their independence constraints is captured by the given independence graph; see \cite{SpiGlySch93}. As is proven in a forthcoming paper, this requires for regression graphs that (1) the graph represents both an independence and a dependence structure, and that (2) the distribution satisfies the composition and the intersection property and is {\bf \em weakly transitive}, a property that is the following extension of singleton transitivity for node $h$
replaced by a subset $d$ of $N\setminus\{i,k,l\}$ that may contain several nodes: $$ (i\ci k|l \text{ and } i\ci k| \{l,d\}) \implies (i \ci d| l \text{ or } k\ci d| l).$$
This faithfulness property imposes strange constraints on parameters whenever more than two nodes induce a complete subgraph in the graph; see for instance Figure 1 in \cite{WerMarCox09} for three binary variables. An early example of a regular Gaussian distribution that does not satisfy weak transitivity is due to \cite{CoxWer93}, equation (8).
Notice that, in general, the extension of singleton transitivity to weak transitivity excludes parametric cancelations that result from several paths connecting the same node pair. This is the only type of possible parametric cancelation in regular Gaussian distributions; see \cite{WerCox98}.
However, the constraints are mild for distributions corresponding to regression graphs that form a dependence base and that are forests. {\bf \em Forests} are the union of disjoint trees and
a {\bf \em tree} is a connected undirected graph with one unique path joining every node pair.
\begin{lemma} A positive distribution is faithful to a forest representing both an independence and a dependence structure if it is singleton transitive.
\end{lemma}
\begin{proof} Positive distributions satisfy the intersection property, and for concentration graphs the composition property is irrelevant. Given the above characterizations of faithfulness and of weak transitivity, there are in a forest no cancelations due to several paths connecting the same node pair. Hence, weak transitivity will be violated only if singleton transitivity fails.
\end{proof}
\begin{coro} A regular Gaussian distribution is faithful to a forest representing both an independence and a dependence structure.
\end{coro}
Notice that forests include trees and Markov chains as special cases. If they form dependence bases they are Markov equivalent to very special types of parent graphs but they
are rarely of interest in statistics when studying sequences of regressions.
\section{Some early results on graphs and Markov equivalence}
In the past, results concerning graphs and Markov equivalence have been obtained quite independently in
the mathematical literature on characterizing different types of graph, in the statistical literature on
specifying types of multivariate statistical models, and in the computer science literature on
deciding on special properties of a given graph or on designing fast algorithms for transforming graphs.
For instance, following the simple enumeration result for {\bf \em labeled trees} in $d$ nodes, $d^{d-2}$, by Karl-Wilhelm Borchardt (1817-1880), it could be shown that these trees are in one-to-one correspondence to distinct strings of size $d-2$; see \cite{Cay889}. Much later, labeled trees were recognized to form the subclass of directed acyclic graphs
with exclusively source {\sf V}s and therefore to be also Markov equivalent to chordal concentration graphs that are without chordless paths in four nodes; see \cite{CasSie03}.
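The one-to-one correspondence between labeled trees on $d$ nodes and strings of size $d-2$ is realized by the Pr\"ufer construction; a minimal sketch in Python, with an illustrative star on four nodes, is the following.

```python
def pruefer(tree_edges, d):
    """Pruefer string of a labelled tree on nodes 1..d: repeatedly
    remove the smallest leaf and record its neighbour.  The resulting
    string of length d-2 determines the tree uniquely, which yields
    the count d^(d-2) of labeled trees."""
    adj = {v: set() for v in range(1, d + 1)}
    for u, v in tree_edges:
        adj[u].add(v)
        adj[v].add(u)
    seq = []
    for _ in range(d - 2):
        leaf = min(v for v in adj if len(adj[v]) == 1)
        nbr = adj[leaf].pop()
        adj[nbr].discard(leaf)
        del adj[leaf]
        seq.append(nbr)
    return seq

# The star on 4 nodes with centre 2 maps to the string (2, 2).
assert pruefer([(1, 2), (2, 3), (2, 4)], 4) == [2, 2]
```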
In the literature on graphical Markov models, a number of different names have been in use for a sink {\sf V,} for instance `two arrows meeting head-on' by \cite{Pea88}, `unshielded collider' by \cite{RichSpir02}, and `Wermuth-configuration' by \cite{Whit90}, after it had been recognized that, for Gaussian distributions, the parameters of a directed acyclic graph model without sink {\sf V}s are in one-to-one correspondence to the parameters in its skeleton concentration graph model.
\begin{prop}\label{prop40} {\em \citep{Wer80}, \citep{WerLau83}, \citep{Fryd90}.} A directed acyclic graph is Markov equivalent to a concentration graph of the same skeleton if and only if it has no collision $\sf V$. \end{prop}
Efficient algorithms to decide whether an undirected graph can be oriented into a directed acyclic graph became available in the computer science literature under the name of perfect elimination schemes; see \cite{TarYan84}. When algorithms were designed later to decide which arrows may be flipped in a given $G^N_{\mathrm{dag}}\,$, keeping the same skeleton and the same set of sink {\sf V}s, to get to a list of all Markov equivalent $G^N_{\mathrm{dag}}\,$s, these early results by Tarjan and Yannakakis appear not to be referred to directly; see \cite{Chi95}.
The number of equivalent characterizations of concentration graphs that have perfect elimination schemes has increased steadily since they were introduced as rigid circuit graphs by \cite{Dir61}. These graphs are not only named `chordal graphs', but also `triangulated graphs', `graphs with the running intersection property' or `graphs with only complete prime graph separators'; see \cite{CoxWer99}.
By contrast, for a covariance graph that can be oriented to be Markov equivalent to a $G^N_{\mathrm{dag}}\,$ of the same skeleton, chordless paths are relevant. \begin{prop}\label{cono}{\em \citep{PeaWer94}.} A covariance graph with a chordless path in four nodes is not Markov equivalent to a directed acyclic graph in the same node set. \end{prop}
For distributions generated over directed acyclic graphs, sink {\sf V}s are needed again.
\begin{prop}\label{prop:40} {\em \citep{Fryd90}, \citep{VerPea90}.} Directed acyclic graphs of the same skeleton are Markov equivalent if and only if they have the same sink {\sf V}s. \label{MEQdag}\end{prop}
Markov equivalence of a concentration graph and a covariance graph model is for regular joint Gaussian distributions equivalent to {\bf \em parameter equivalence}, which means that there is a one-to-one relation between the two sets of parameters. Therefore, an early result on parameter equivalence for joint Gaussian distributions implies the following Markov equivalence result for distributions satisfying both the composition and the intersection property.
\begin{prop}\label{conc}{\em \citep{Jen88}, \citep{DrtonRich08}.} A covariance graph is Markov equivalent to a concentration graph if and only if both
consist of the same complete, disconnected subgraphs. \end{prop}
Fast ways of inserting an edge for every transition {\sf V}, of deciding on connectivity and on blocking flows have been available in the corresponding Russian literature since 1970; see \cite{Din06}, but these results appear not to have been exploited for the so-called lattice conditional independence models, recognized as distributions generated over $G^{N}_{\rm dag}$s without any transition {\sf V}s by \cite{AndMadPerTr97}.
Results on Markov equivalence for chain graphs other than multivariate regression chain graphs have been given by \cite{Rov05}, \cite{AndPer06} and \cite{RovStud06}.
With the so-called global Markov property of a graph in node set $N$ and any disjoint subsets
$a,b,c$ of $N$, one can decide whether the graph implies $a\ci b |c$. To give this property for a regression graph, we use special types of path that have been called active; see \cite{Wer10}. For this, let again $\{a,b,c,m\}$ partition the node set $N$ of $G^N_{\mathrm{reg}}\,$. \begin{defn}\label{def:33} {\bf \em A path from $\bm a$ to $\bm b$ in $G^N_{\mathrm{reg}}\,$ is active given $\bm c$} if its inner collision nodes are in $c$ or have a descendant in $c$ and its inner transmitting nodes are in $m=N\setminus (a\cup b \cup c)$. Otherwise, {\bf \em the path is said to break given $\bm c$} or, equivalently, to break with $m$. \end{defn}
Thus, a path breaks when $c$ includes an inner transmitting node or when $m$ includes an inner collision node and all its descendants; see also Figure 4 of \cite{MarWer09}.
For directed acyclic graphs, an active path of Definition 1 reduces to the d-connecting path of \cite{GeiVerPea90}. Similarly, the following proposition coincides in that special case with their so-called d-separation criterion. Let node set $N$ of $G^N_{\mathrm{reg}}\,$ be partitioned as above by $\{a,b,c,m\}$. \begin{prop}\label{prop:1} {\em \citep{CoxWer96}, \citep{Sadeghi09}.}
A regression graph, $G^N_{\mathrm{reg}}\,$, implies $a\ci b| c$ if and only if every path between $a$ and $b$ breaks given $c$. \label{globalreg} \end{prop}
Thus, whenever $G^N_{\mathrm{reg}}\,$ implies $a\ci b|c$, this independence statement holds in the corresponding sequence of regressions for which the density $f_N$ factorizes as \eqref{factdens}, provided that $f_N$ satisfies the same properties of independences, $(i) $ to $(vi)$ of Section 5, just like a regular Gaussian joint density. For example, in the graphs of Figure \ref{fig:2ex0}, node $2$ is an ancestor of node $1$ so that $G^N_{\mathrm{reg}}\,$ does not imply $3\ci 4|2$.
\begin{figure}\label{fig:2ex0}
\end{figure}
Since covariance and concentration graphs consist of only one type of edge, restricted versions of the defined path, as in Propositions 1 and 2, can be used for their global Markov property.
\section{The main new results and proofs}
We now treat connected regression graphs in node set $N$ and corresponding distributions defined by sequences of regressions with joint discrete or continuous responses, ordered in connected components $g_1, \ldots, g_r$ of the graph, and with context variables in connected components, $g_{r+1}, \ldots, g_J$, which factorize as in \eqref{factdens}, satisfy the pairwise independences of \eqref{pairw} as well as properties of independence statements, given as $(i)$ to $(vi)$ in Section 5.
For the main result of Markov equivalence for regression graphs, we consider
distinct nodes $i$ and $k$, node subsets $c$ of $N\setminus\{i,k\}$ and the notion of shortest active paths. \begin{defn}\label{def:shortestpi} An $ik$-path in $G^N_{\mathrm{reg}}\,$ is a shortest active path $\pi$ with respect to $c$ if every $ik$-path of $G^N_{\mathrm{reg}}\,$ with fewer inner nodes breaks given $c$.
\end{defn}
Every chordless $\pi$ is such a shortest path. If the consecutive nodes $(k_{n-1}, k_n, k_{n+1})$ on $\pi=(i= k_0,k_1,\dots,k_m=k)$
induce a complete subgraph in $G^N_{\mathrm{reg}}\,$, we say that there is {\bf \em a triangle on the path}. In Figure \ref{shortest}a) nodes 2,3,4
form a triangle on the path $(1,2,4,3,5)$.
\begin{figure}\label{shortest}
\end{figure}
If this path is an active path connecting the uncoupled node pair (1,5), then nodes 2 and 4 are inner transmitting nodes outside $c$ and the inner collision node 3 is in $c$. This path is then also the shortest active path connecting (1,5). The shorter path
$(1,2,3,5)$ has nodes 2 and 3 as inner transmitting nodes, but is inactive since node 3 is in $c$.
By contrast in Figure \ref{shortest}b), when path $(4,2,1,3,5)$ is an active path connecting the uncoupled node pair (4,5), then
path (4,2,3,5) is a shorter active path. To see this, notice that on an active $(4,2,1,3,5)$ path, the inner collision node 1 is in $c$ and the inner transmitting nodes 2 and 3 are outside $c$. In this case, the inner collision node 2 on the path $(4,2,3,5)$ has node 1 as a descendant in $c$, so that this shorter path is also active.
We also use the following results for proving Theorem \ref{thm:1}. The first two are direct consequences of Proposition \ref{prop:1} and imply the pairwise independences of equation \eqref{pairw}. Lemma \ref{lem:1} results with the independence form of \eqref{pairw}. Let $h,i,k$ be distinct nodes of $N$. \begin{lemma}\label{lem:21} For $(h, i, k)$ a collision V in $G^N_{\mathrm{reg}}\,$, the inner node $i$ is excluded from $c$ in every independence statement for $h,k$ implied by $G^N_{\mathrm{reg}}\,$. \end{lemma} \begin{lemma}\label{lem:22} For $(h, i, k)$ a transmitting V in $G^N_{\mathrm{reg}}\,$, the inner node $i$ is included in $c$ in every independence statement for $h,k$ implied by $G^N_{\mathrm{reg}}\,$. \end{lemma} \begin{lemma}\label{lem:1}
A missing $ik$-edge in $G^N_{\mathrm{reg}}\,$ implies at least one independence statement $i\ci k|c$ for $c$ a subset of $N\setminus\{i,k\}$. \end{lemma} We can now derive the first of the main new results in this paper. \setcounter{repeat}{0} \begin{repeatthm} Two regression graphs are Markov equivalent if and only if they have the same skeleton and the same sets of collision {\sf V}s, irrespective of the type of edge. \end{repeatthm} \begin{proof} Regression graphs $G^N_{\mathrm{reg1}}\,$ and $G^N_{\mathrm{reg2}}\,$ are Markov equivalent if and only if, for all disjoint subsets $a$, $b$, and $c$ of the node set $N$, where only $c$ may be empty, \begin{equation}
\text{($G^N_{\mathrm{reg1}}\,$} \implies a\ci b| c) \iff \text{($G^N_{\mathrm{reg2}}\,$} \implies a\ci b| c) . \label{p} \end{equation}
Suppose first that (\ref{p}) holds. By Lemma \ref{lem:1}, $G^N_{\mathrm{reg1}}\,$ and $G^N_{\mathrm{reg2}}\,$ have the same skeleton, and by Lemma \ref{lem:21} and Lemma \ref{lem:22}, $G^N_{\mathrm{reg1}}\,$ and $G^N_{\mathrm{reg2}}\,$ have the same collision {\sf V}s.
Suppose next that $G^N_{\mathrm{reg1}}\,$ and $G^N_{\mathrm{reg2}}\,$ have the same skeleton and the same collision {\sf V}s and consider two arbitrary distinct nodes $i$ and $k$ and any node subset $c$ of $N\setminus\{i,k\}$. By Proposition \ref{prop:1}, (\ref{p}) is equivalent to stating that
for every uncoupled node pair $i, k$, there is an active $ik$-path with respect to $c$ in $G^N_{\mathrm{reg1}}\,$ if and only if there is an active $ik$-path with respect to $c$ in $G^N_{\mathrm{reg2}}\,$.
Suppose further that path $\pi$ is in $G^N_{\mathrm{reg1}}\,$ a shortest active $ik$-path with respect to $c$. Since $G^N_{\mathrm{reg1}}\,$ and $G^N_{\mathrm{reg2}}\,$ have the same skeleton, the path $\pi$ exists in $G^N_{\mathrm{reg2}}\,$. We need to show that it is active. If all consecutive two-edge-subpaths of $\pi$
are {\sf V}s then $\pi$ is active in $G^N_{\mathrm{reg2}}\,$. Therefore, suppose that nodes $(k_{n-1},k_n,k_{n+1})$ on $\pi$ form a triangle instead of a $\sf V$. It may be checked first that, for all possible triangles in regression graphs that can appear on $\pi$ other than the two of Figure \ref{trian}, there is, as in Figure \ref{shortest}b), a shorter active path. To complete the proof, we show that for the two types of triangle shown in Figure \ref{trian}a) and Figure \ref{trian}b), path $\pi$ is also in $G^N_{\mathrm{reg2}}\,$ an active $ik$-path with respect to $c$.
\begin{figure}\label{trian}
\end{figure}
In $G^N_{\mathrm{reg1}}\,$ containing the triangle of Figure \ref{trian}a) on a shortest active path $\pi$, node $k_n$ is a transmitting node, which is by Lemma \ref{lem:22} outside $c$. By Lemma \ref{lem:21}, node $k_{n-1}$ is a collision node inside $c$.
If instead $k_{n-1}$ were a transmitting node on $\pi$ in
$G^N_{\mathrm{reg1}}\,$, it would also be a transmitting node on $(k_{n-2},k_{n-1},k_{n+1})$ and give a shorter active path via the $k_{n-1}k_{n+1}$-edge, contradicting the assumption of $\pi$ being a shortest path. Similarly, if collision node $k_{n-1}$ on $\pi$ were only an ancestor of $c$, then there would be a shorter active path via the $k_{n-1}k_{n+1}$-edge.
In addition, node pair $k_{n}, k_{n-2}$ is uncoupled in $G^N_{\mathrm{reg1}}\,$ since inserting any such edge that is permissible in a regression graph would result in a shorter active path via the $k_{n-2}k_{n}$-edge. Therefore, since $G^N_{\mathrm{reg1}}\,$ and $G^N_{\mathrm{reg2}}\,$ have the same collision {\sf V}s, the subpath $(k_{n-2},k_{n-1},k_{n})$ forms also a collision $\sf V$ in $G^N_{\mathrm{reg2}}\,$. Similarly, $(k_{n-2},k_{n-1},k_{n+1})$ is a transmitting {\sf V} and $(k_{n+2},k_{n+1},k_{n})$ is a $\sf V$ of either type. Hence $k_{n-1}$ is a parent of $k_{n+1}$ in $G^N_{\mathrm{reg2}}\,$ and the only permissible edge between $k_n$ and $k_{n+1}$ is an arrow pointing to $k_{n+1}$. Therefore, $\pi$ forms an active path also in $G^N_{\mathrm{reg2}}\,$.
The proof for Figure \ref{trian}b) is the same as for Figure \ref{trian}a) since the type of nodes along $\pi$, i.e.\ as collision or transmitting nodes, are unchanged. \end{proof} In the example of Figure \ref{fig:4}, all three regression graphs have the same skeleton.
\begin{figure}\label{fig:4}
\end{figure}
In $G^N_{\mathrm{reg1}}\,$ there are three collision {\sf V}s: $(3,4,5)$, $(1,2,5)$, and $(2,1,3)$. In $G^N_{\mathrm{reg2}}\,$ there are the same collision {\sf V}s. Therefore, these two graphs are Markov equivalent. However, there are only two collision {\sf V}s in $G^N_{\mathrm{reg3}}\,$: these are $(3,4,5)$ and $(2,1,3)$. Hence this graph is not Markov equivalent to $G^N_{\mathrm{reg1}}\,$ and $G^N_{\mathrm{reg2}}\,$. The Markov equivalence of the graphs in Figure \ref{MEQch} to the subgraph induced by $\{b,c\}$ in Figure \ref{childadv} is a further application of Theorem \ref{thm:1}. Notice that Propositions 3 to 8 of Section 6 result as special cases of Theorem \ref{thm:1}.
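The criterion of Theorem \ref{thm:1} can be checked mechanically from an edge list; the sketch below assumes Python and an edge encoding by triples in which `a' is an arrow from the first to the second entry and `d' is a dashed line, with full lines omitted for simplicity. It recovers the collision {\sf V}s of $G^N_{\mathrm{reg1}}\,$ in Figure \ref{fig:4}.

```python
def collision_Vs(edges):
    """Collision Vs of a regression graph given as triples (u, v, t):
    t == 'a' is an arrow u -> v, t == 'd' a dashed line.  An edge
    'collides' at a node if it is an arrow into it or a dashed line;
    a collision V is an uncoupled pair whose two edges both collide
    at the common inner node."""
    coupled = {frozenset((u, v)) for u, v, _ in edges}
    at = {}   # node -> neighbours whose edge collides there
    for u, v, t in edges:
        at.setdefault(v, set()).add(u)       # arrowhead or dash at v
        if t == 'd':
            at.setdefault(u, set()).add(v)   # a dashed line also collides at u
    vs = set()
    for i, nbrs in at.items():
        for h in nbrs:
            for k in nbrs:
                if h < k and frozenset((h, k)) not in coupled:
                    vs.add((h, i, k))
    return vs

# G_reg1 of Figure 4a): arrows 3->1, 5->2 and dashed lines 1-2, 4-3, 4-5.
reg1 = [(1, 2, 'd'), (3, 1, 'a'), (5, 2, 'a'), (4, 3, 'd'), (4, 5, 'd')]
assert collision_Vs(reg1) == {(2, 1, 3), (1, 2, 5), (3, 4, 5)}
```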
The following algorithm generates a directed acyclic graph from a given $G^N_{\mathrm{reg}}\,$ that fulfills the known necessary conditions for Markov equivalence to a directed acyclic graph; see Proposition 2 of Wermuth (2010). We refer to the connected components of $G^N_{\mathrm{reg}}\,$ as its blocks.\\
\noindent{\bf Algorithm 1.} (Obtaining a Markov equivalent directed acyclic graph from a regression graph).
\emph{Start from any given $G^N_{\mathrm{reg}}\,$ that has a chordal concentration graph and no
chordless collision path in four nodes. \begin{enumerate}
\item Apply the maximum cardinality search algorithm on the block consisting of full lines to order the nodes of the block.
\item Orient the edges of the block from a higher number to a lower one.
\item Replace collision {\sf V}s by sink {\sf V}s, i.e.\ replace $i\dal\circ\dal k$ and $i\dal\circ\fla k$ by\\ $i\fra\circ\fla k$ when $i$ and $k$ are uncoupled. When a dashed line in a block is replaced by an arrow, label the endpoints such that the arrow is from a higher number to a lower one if the labels do not already exist.
\item Replace dashed lines $i\dal\circ\dal k$ of triangles by a sink path $i\fra\circ\fla k$. When a dashed line in a block is replaced by an arrow, label the endpoints such that the arrow is from a higher number to a lower one if the labels do not already exist.
\item Replace dashed lines by arrows from a higher number to a lower one. \end{enumerate} Apply each step repeatedly until it can no longer be applied; then move to the next step.}
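Steps 1 and 2 of Algorithm 1 can be sketched as follows; this assumes Python, an illustrative chordal block of full lines, and a simple maximum cardinality search, and it is only a sketch of these two steps, not of the full algorithm.

```python
def mcs_order(adj):
    """Maximum cardinality search: repeatedly visit an unvisited node
    with the largest number of already-visited neighbours.  On a
    chordal graph, the reverse of this order is a perfect
    elimination ordering; see Tarjan and Yannakakis (1984)."""
    weight = {v: 0 for v in adj}
    order, unvisited = [], set(adj)
    while unvisited:
        v = max(unvisited, key=lambda u: weight[u])
        order.append(v)
        unvisited.discard(v)
        for w in adj[v]:
            if w in unvisited:
                weight[w] += 1
    return order

# Steps 1-2 on an illustrative chordal block: number the nodes by MCS,
# then orient every full line from the higher number to the lower one,
# which cannot create a directed cycle.
adj = {1: {2, 3}, 2: {1, 3}, 3: {1, 2, 4}, 4: {3}}
order = mcs_order(adj)
rank = {v: i for i, v in enumerate(order)}
arrows = {(u, v) if rank[u] > rank[v] else (v, u)
          for u in adj for v in adj[u] if u < v}
assert len(arrows) == 4          # one arrow per full line
```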
\begin{lemma}\label{leml} For a regression graph with a chordal concentration graph and without chordless collision paths in four nodes, Algorithm 1 generates a directed acyclic graph that is Markov equivalent to $G^N_{\mathrm{reg}}\,$. \end{lemma} \begin{proof} The generated graph is directed since by Algorithm 1, all edges are turned into arrows. Since the block containing full lines is chordal, the graph generated by the perfect elimination order of the maximal cardinality search does not have a directed cycle; see \cite{Bla93} Section 2.4 and \cite{TarYan84}.
In addition, the arrows present in the graph do not change by the algorithm. Thus, to generate a cycle containing an arrow of the original graph, there should have been a cycle in the directed graph generated by replacing blocks by nodes. But, this is impossible in a regression graph. Therefore in the generated graph, there is no cycle containing arrows that have been between the blocks of the original graph.
Within a block, all arrows point from nodes with higher numbers to nodes with lower ones. Otherwise, there would have been at step 3 of the algorithm a chordless collision path with four nodes in the graph. Hence no directed cycle can be generated.
Theorem \ref{thm:1} gives Markov equivalence to $G^N_{\mathrm{reg}}\,$ since Algorithm 1 preserves the skeleton of $G^N_{\mathrm{reg}}\,$ and no additional collision {\sf V} is generated because
sink oriented {\sf V}s remain, only dashed lines are turned into arrows and no arrows are changed to dashed lines. \end{proof}
Notice that this algorithm does not generate a unique directed acyclic graph, but every generated directed acyclic graph is Markov equivalent to the given regression graph. To obtain the overall complexity of Algorithm 1, we denote by $n$ the number of nodes in the graph and by $e$ the number of edges in the graph.
\begin{coro} The overall complexity of Algorithm 1 is $O(e^3)$. \end{coro} \begin{proof} Suppose that the input of Algorithm 1 is a sequence of triples, each of which consists of the two endpoints of an edge and of the type of edge. The length of this sequence is equal to $e$ and the highest number appearing in the sequence is $n$. For example, the sequence to the graph of Figure \ref{fig:4}a) is $((1,2,d),(3,1,a),(5,2,a),(4,3,d),(4,5,d))$, where `d' corresponds to a dashed line and `a' corresponds to an arrow pointing from the first entry to the second one. Notice that this labeling is in general not the same as the ordering of nodes given by Algorithm 1.
The first two steps of Algorithm 1 can be performed in $O(e+n)$ time; see \cite{Bla93}. Step 3 of Algorithm 1 may be performed in $e(e+1)(e-2)/2$ steps since for each edge, one can go through the edge set to find the edges that give a three node path with an inner collision node. This needs $e(e+1)/2$ steps. For each collision node, one goes again through the edge set, excluding the two edges involved in the collision path, to check if the collision is a {\sf V}. Other actions can be done in constant time.
Step 4 may require $ne(e+1)/2$ steps since the paths considered are of the form $\circ \dal\circ\dal \circ$ and do not form a {\sf V}. Therefore, there is no need to go through the edge set a third time, but one might need to go through the node ordering to decide on the direction of the generated arrow. The last step may be performed in $ne$ steps by going through the edge set, changing 'd's to 'a's appropriately by looking at the node ordering. Therefore, the overall complexity of Algorithm 1 is $O(e^3)$. \end{proof}
Corollary 2 and Propositions 4 to 8 can now be derived as special cases of Theorem 1 and Lemma 4. In addition by using Lemma 1, Lemma 2 and pairwise independences, subclasses of regression graphs can be identified, which intersect with directed acyclic graphs, with other types of chain graphs, with concentration graphs or with covariance graphs.
\setcounter{repeat}{1} \begin{repeatthm}\label{MEQtoDAG} A regression graph with a chordal graph for the context variables can be oriented to be Markov equivalent to a directed acyclic graph in the same skeleton, if and only if it does not contain any chordless collision path in four nodes.
\end{repeatthm}
\begin{proof} Every chordal concentration graph can be oriented to be equivalent to a directed acyclic graph; see \cite{TarYan84}. A missing edge for node pair $i<k$ in a directed acyclic graph means $i\ci k|>i\setminus k$, which would contradict \ref{pairw}$(iii)$ if the graph contained a semi-directed chordless collision path in four nodes. No undirected chordless collision path in four nodes can be fully oriented without changing a collision $\sf V$ into a transmitting $\sf V$, but $G^N_{\mathrm{reg}}\,$ can be oriented using Algorithm 1 if it contains no such path. \end{proof}
Notice that for joint Gaussian distributions, Theorem 2 excludes Zellner's seemingly unrelated regressions and it excludes covariance graphs that cannot be made Markov equivalent to fully directed acyclic graphs; see Proposition \ref{cono}.
\begin{prop}\label{MEQtoAMP} A multivariate regression graph with connected components $g_1, \dots g_J$ is an AMP chain graph in the same connected components if and only if the covariance graph of every connected component of responses is complete. \end{prop} \begin{proof} The conditional relations of the joint response nodes in an AMP chain graph coincide with those of the regression graph with the same connected components. Furthermore, the subgraph induced by each connected component $g_j$ of an AMP chain graph is a concentration graph given $g_{>j}$ while in $G^N_{\mathrm{reg}}\,$ it is a covariance graph given $g_{>j}$. By Proposition \ref{conc}, these have to be complete for Markov equivalence. \end{proof}
\begin{prop}\label{MEQtoLWF} A multivariate regression graph with connected components $g_1, \dots g_J$ is a LWF chain graph in the same connected components if and only if it contains no semi-directed chordless collision path in four nodes and the covariance graph of every connected component of responses is complete. \end{prop} \begin{proof} The proof for the connected components of a LWF chain graph is the same as for an AMP chain graph since they both have concentration graphs for $g_j$ given $g_{>j}$.
The dependences of joint responses $g_j$ on $g_{>j}$ coincide in a LWF chain graph with the bipartite part of the concentration graph in $g_j \cup g_{>j}$ so that Markov equivalent independence statements can only hold with these bipartite graphs being complete.\end{proof}
Figure \ref{intersect} illustrates Propositions \ref{MEQtoDAG} to \ref{MEQtoLWF} with modified graphs of Figure \ref{hypfigaft}. \begin{figure}\label{intersect}
\end{figure}
The graphs in Figure \ref{intersect} are Markov equivalent to a) a directed acyclic graph with the same skeleton obtainable by Algorithm 1, b) an AMP chain graph in the same connected components and c) a LWF chain graph in the same connected components.
In general, by inserting some edges, a regression graph model can be turned into a model in one of the intersecting classes used in Propositions \ref{MEQtoDAG} to \ref{MEQtoLWF}, just as a non-chordal graph may be turned into a chordal one by adding edges. When the independence structure of interest is captured by an edge-minimal regression graph, then the resulting graph after adding edges will no longer be an edge-minimal graph and hence will not give the most compact graphical description possible.
However, the graph with some added edges may define a covering model that is easier to fit than the reduced model corresponding to the edge-minimal graph, just as an unconstrained Gaussian bivariate response regression on two regressors may be fitted in closed form, while the maximum-likelihood fitting in the reduced model of Zellner's seemingly unrelated regression requires iterative fitting algorithms. Any well-fitting covering model in the three intersecting classes will show weak dependences for the edges that are to be removed to obtain an edge-minimal graph.
Notice that sequences of regressions in the intersecting class with LWF chain graphs correspond, for Gaussian distributions, to sequences of the general linear models of \cite{And58}, Chapter 8, that is, to models in which each joint response has the same set of regressor variables. This is reflected in $G^N_{\mathrm{reg}}\,$ by identical sets of nodes from which arrows point to each node within a connected component.
In contrast, the models in the intersecting classes with the two types of undirected graph may be quite complex in the sense of including many merely generated chordless cycles of size four or larger.
\begin{prop}\label{MEQtoCON} A multivariate regression graph is Markov equivalent to the concentration graph with the same skeleton if and only if it contains no collision $\sf V$, and it is Markov equivalent to the covariance graph with the same skeleton if and only if it contains no transmitting $\sf V$. \end{prop}
\begin{proof} Every $\sf V$ is a collision $\sf V$ in a covariance graph and a transmitting $\sf V$ in a concentration graph; see Lemma 1 and Lemma 2. The first includes, the second excludes the inner node from the defining independence statement. Thus, in the presence of a {\sf V}, one would contradict the uniqueness of the defining pairwise independences. \end{proof}
Lastly, Figure \ref{indconc} shows the overall concentration graph induced by $G^N_{\mathrm{reg}}\,$ of Figure \ref{hypfigaft}. It may be obtained from the given $G^N_{\mathrm{reg}}\,$ by finding first the smallest covering LWF chain graph in the same connected components, then closing every sink {\sf V} by an edge, i.e.\ adding an edge between its endpoints, and finally changing all edges to full lines.
In such a graph, several chordless cycles in four or more nodes may be induced and the connected components of $G^N_{\mathrm{reg}}\,$ may no longer show. In such a case, much of the important structure of the generating regression graph is lost. In addition, merely induced chordless cycles require iterative algorithms for maximum-likelihood estimation, even for Gaussian distributions. Thus, in the case of connected joint responses, it may be unwise to use a model search within the class of concentration graph models.
\begin{figure}\label{indconc}
\end{figure}
This contrasts with LWF chain graphs that coincide with regression graphs, such as in Figure \ref{intersect}c). These preserve the available prior knowledge about the connected components and give Markov equivalence to directed acyclic graphs so that model fitting is possible in terms of single response regressions, that is by using just univariate conditional densities. In addition, the simplified criteria for Markov equivalence of directed acyclic graphs apply.
On the other hand, sequences of regressions that coincide with LWF chain graphs permit us to model simultaneous intervention on a set of variables since the corresponding independence graphs are directed and acyclic in nodes representing vector variables. This represents a conceptually much needed extension of distributions generated over directed acyclic graphs in nodes representing single variables, but excludes the more specialized seemingly unrelated regressions and incomplete covariance graphs. \\
\noindent {\bf \large Appendix: Details of regressions for the chronic pain data\\}
The following tables show the results of linear least-squares regressions or logistic regressions, one at a time, for each of the response variables and for each component of a joint response separately. At first, each response is regressed on all its potentially explanatory variables given by their first ordering. The tables give the estimated constant term and for each variable in the regression, its estimated coefficient
(coeff), the estimated standard deviation of the coefficient ($s_{\rm coeff})$, as well as the ratio $z_{\rm obs}=$coeff/$s_{\rm coeff}$. These ratios are compared with 2.58, the 0.995 quantile of a random variable $Z$ having a standard Gaussian distribution, for which $\Pr(|Z|>2.58)=0.01$. In backward selection steps, the variable with the smallest observed value $|z_{\rm obs}|$ is deleted from a regression equation, one at a time, until the threshold is reached.
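The backward-selection rule just described can be written schematically in Python. This is our own minimal sketch with ordinary least squares, not the code used for the tables below; the function names are invented, and the threshold 2.58 matches the 0.995 Gaussian quantile above.

```python
import numpy as np

def z_stats(X, y):
    """OLS fit; return (coefficients, standard errors), excluding the intercept."""
    Xc = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(Xc, y, rcond=None)
    resid = y - Xc @ beta
    dof = len(y) - Xc.shape[1]
    s2 = resid @ resid / dof                     # residual variance
    cov = s2 * np.linalg.inv(Xc.T @ Xc)          # covariance of coefficients
    se = np.sqrt(np.diag(cov))
    return beta[1:], se[1:]

def backward_select(X, y, names, threshold=2.58):
    """Drop, one at a time, the variable with the smallest |z_obs|
    until every remaining variable exceeds the threshold."""
    keep = list(range(X.shape[1]))
    while keep:
        b, se = z_stats(X[:, keep], y)
        z = b / se
        i = np.argmin(np.abs(z))
        if abs(z[i]) >= threshold:
            break
        del keep[i]
    return [names[j] for j in keep]
```

Note that the loop may delete every variable, which corresponds to an empty selected model such as the one reported for response $V$ below.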
\begin{table}[H] \centering
\small \setlength{\tabcolsep}{1mm} \begin{tabular}{l P{-2,2} P{1,2} P{-1,2} c P{-2,2} P{1,2} P{-1,2} c P{-1,2}} \toprule \multicolumn{10}{l}{Response: $Y$, success of treatment; linear regression including a quadratic term}\\ \midrule & \multicolumn{3}{c}{starting model} && \multicolumn{3}{c}{selected} && \ccolhd{excluded}\\ \cmidrule{2-4} \cmidrule{6-8} explanatory variables & \ccolhd{coeff} & \ccolhd{$s_{\rm coeff}$} & \ccolhd{$z_{\rm obs}$} && \ccolhd{coeff} & \ccolhd{$s_{\rm coeff}$} & \ccolhd{$z_{\rm obs}$} && \ccolhd{$z'_{\rm obs}$}\\ \midrule constant & 23.40 & -\nn&-\nn&& 20.50 &-\nn & -\nn &&-\nn\\ $Z_a$, pain intensity after& -1.73& 0.15 & -11.19 && -1.89 & 0.15 & -12.77 && -\nn\\ $X_a$, depression after & -0.16 & 0.05 & -3.04&& - \nn & -\nn& -\nn && -1.86\\ $Z_b$, pain intensity before& 0.04 & 0.16 & 0.26 && - \nn & -\nn& -\nn && 0.65\\ $X_b$, depression before & 0.10 & 0.05 & 1.82 && - \nn & -\nn& -\nn && 0.33\\ $U$, pain chronicity& -0.15 & 0.30 & -0.51 && - \nn & - \nn & -\nn && -0.99\\ $A$, site of pain& -2.27 & 0.91 & -2.48 && - \nn& - \nn & -\nn&& -2.33\\ $V$, previous illnesses& 0.19& 0.11&1.76&& - \nn& - \nn & -\nn&&1.24\\ $B$, level of schooling& -0.50 & 0.78 & -0.64&& - \nn & -\nn& -\nn && -0.22\\[1mm] \hdashline\\[-3mm] $(Z_a-{\rm mean}(Z_a))^2$& 0.18 & 0.23 & 3.41 && 0.23 & 0.05 & 4.28 && -\nn\\ \midrule \multicolumn{10}{l}{$R^2_{\rm full}=0.54$\n\nn Selected model$\n Y: Z_a+Z_a^2$ \n \nn$R^2_{\rm sel}=0.49$}\\ \bottomrule \\[-3mm] \end{tabular}
\label{respY} \end{table}
\begin{table}[H] \centering
\small \setlength{\tabcolsep}{1mm} \begin{tabular}{l P{-2,2} P{1,2} P{-1,2} c P{-2,2} P{1,2} P{-1,2} c P{-1,2}} \toprule \multicolumn{10}{l}{Response: $Z_a$, intensity of pain after treatment; linear regression}\\ \midrule & \multicolumn{3}{c}{starting model} && \multicolumn{3}{c}{selected} && \ccolhd{excluded}\\ \cmidrule{2-4} \cmidrule{6-8} explanatory variables & \ccolhd{coeff} & \ccolhd{$s_{\rm coeff}$} & \ccolhd{$z_{\rm obs}$} && \ccolhd{coeff} & \ccolhd{$s_{\rm coeff}$} & \ccolhd{$z_{\rm obs}$} && \ccolhd{$z'_{\rm obs}$}\\ \midrule constant & 2.74 & -\nn&-\nn&& 2.98 &-\nn & -\nn &&-\nn\\ $Z_b$, pain intensity before& 0.12 & 0.08 & 1.60 && 0.16 & 0.07& 2.16$*$ && -\nn\\ $X_b$, depression before & 0.03 & 0.02 & 1.28 && - \nn & -\nn& -\nn && 1.76\\ $U$, pain chronicity& 0.11& 0.14 & 0.75 && - \nn & - \nn & -\nn && 1.43\\ $A$, site of pain& 1.07 & 0.42 & 2.51 && 1.27 & 0.39& 3.26\n&& -\nn\\ $V$, previous illnesses& 0.00& 0.05&0.03&& - \nn& - \nn & -\nn&&0.83\\ $B$, level of schooling& -0.19 & 0.37 & -0.52&& - \nn & -\nn& -\nn &&-0.70 \\[1mm] \midrule \multicolumn{10}{l}{$R^2_{\rm full}=0.09$ \n \nn Selected model$\n Z_a: Z_b+A$ \n \nn$R^2_{\rm sel}=0.07$}\\
\multicolumn{10}{l}{$*$: depression before treatment needed because of the repeated measurement design;}\\ \multicolumn{10}{l}{the low correlation for $Z_a, Z_b$ is due to a change in measuring, before and after treatment}\\ \bottomrule \\[-3mm] \end{tabular} \label{respZa} \end{table}
The procedure defines a selected model unless one of the excluded variables has a contribution of $|z^{'}_{\rm obs}|>2.58$ when added alone to the selected directly explanatory variables; in that case, such a variable also needs to be included as an important directly explanatory variable. This did not happen in the given data set.
The tables show for linear models also $R^2$, the coefficient of determination, both for the full and for the selected model. Multiplied by 100, it gives the percentage of the variation in the response explained by the model.
\begin{table}[H] \centering
\small \setlength{\tabcolsep}{1mm} \begin{tabular}{l P{-2,2} P{1,2} P{-1,2} c P{-2,2} P{1,2} P{-1,2} c P{-1,2}} \toprule \multicolumn{10}{l}{Response: $X_a$, depression after treatment; linear regression}\\ \midrule & \multicolumn{3}{c}{starting model} && \multicolumn{3}{c}{selected} && \ccolhd{excluded}\\ \cmidrule{2-4} \cmidrule{6-8} explanatory variables & \ccolhd{coeff} & \ccolhd{$s_{\rm coeff}$} & \ccolhd{$z_{\rm obs}$} && \ccolhd{coeff} & \ccolhd{$s_{\rm coeff}$} & \ccolhd{$z_{\rm obs}$} && \ccolhd{$z'_{\rm obs}$}\\ \midrule constant & 2.54 & -\nn&-\nn&& 4.55 &-\nn & -\nn &&-\nn\\ $Z_b$, pain intensity before& -0.05& 0.22 & -0.23 && -\nn & -\nn& -\nn && -0.21\\ $X_b$, depression before & 0.62 & 0.06 & 10.43 && 0.68 & 0.05& 12.68 && -\nn\\ $U$, pain chronicity& 0.96& 0.42 & 2.28 && - \nn & - \nn & -\nn &&2.31 \\ $A$, site of pain& -1.19 & 1.25 & -0.95 && -\nn & -\nn& -\nn&& -0.10\\ $V$, previous illnesses& 0.05& 0.15&0.35&& - \nn& - \nn & -\nn&&1.08\\ $B$, level of schooling& 0.15 & 1.09 & 0.14&& - \nn & -\nn& -\nn && -0.01\\[1mm] \midrule \multicolumn{10}{l}{$R^2_{\rm full}=0.46$ \n \nn Selected model$\n X_a: X_b$ \n \nn$R^2_{\rm sel}=0.45$}\\ \bottomrule \\[-3mm] \end{tabular} \label{respXa} \end{table}
\begin{table}[H] \centering
\small \setlength{\tabcolsep}{1mm} \begin{tabular}{l P{-2,2} P{1,2} P{-1,2} c P{-2,2} P{1,2} P{-1,2} c P{-1,2}} \toprule \multicolumn{10}{l}{Response: $Z_b$, intensity of pain before; linear regression}\\ \midrule & \multicolumn{3}{c}{starting model} && \multicolumn{3}{c}{selected} && \ccolhd{excluded}\\ \cmidrule{2-4} \cmidrule{6-8} explanatory variables & \ccolhd{coeff} & \ccolhd{$s_{\rm coeff}$} & \ccolhd{$z_{\rm obs}$} && \ccolhd{coeff} & \ccolhd{$s_{\rm coeff}$} & \ccolhd{$z_{\rm obs}$} && \ccolhd{$z'_{\rm obs}$}\\ \midrule constant & 7.60 & -\nn&-\nn&& 7.38 &-\nn & -\nn &&-\nn\\ $U$, pain chronicity& 0.10& 0.13 & 0.77 && - \nn & - \nn & -\nn &&0.59 \\ $A$, site of pain& -0.58 & 0.40 & -1.44 && -\nn & -\nn& -\nn&& -1.20\\ $V$, previous illnesses& 0.02& 0.05&0.46&& - \nn& - \nn & -\nn&&0.72\\ $B$, level of schooling& -0.94 & 0.35 & -2.70&& -0.89 & 0.33& -2.65 && -\nn\\[1mm] \midrule \multicolumn{10}{l}{$R^2_{\rm full}=0.05$ \n \nn Selected model$\n Z_b: B$ \n \nn$R^2_{\rm sel}=0.03$}\\ \bottomrule \\[-3mm] \end{tabular} \label{respZb} \end{table}
In the linear regression of $Z_a$ on $X_a$ and on the directly explanatory variables of both $Z_a$ and $X_a$, that is on $Z_b, X_b,A$, the contribution of $X_a$ leads to $z_{\rm obs}=3.51$, which coincides -- by definition -- with $z_{\rm obs}$ computed for the contribution of $Z_a$ in the linear regression of $X_a$ on $Z_a$ and on $Z_b, X_b,A$. Hence the two responses are correlated even after considering the directly explanatory variables and a dashed line joining $Z_a$ and $X_a$ is added to the well-fitting regression graph in Figure \ref{figregpain}.
In the linear regression of $Z_b$ on $X_b$ and on the directly explanatory variables of both $Z_b$ and $X_b$, that is on $U, A, V, B$, the contribution of $X_b$ leads to $z_{\rm obs}=2.64$.
Hence the two responses are associated after considering their directly explanatory variables and there is a dashed line joining $Z_b$ and $X_b$ in the regression graph of Figure \ref{figregpain}.\\[-7mm]
The relatively strict criterion for excluding variables assures that all edges in the derived regression graph correspond to dependences, and to dependences that are considered substantive in the given context. Had instead a 0.975 quantile been chosen as threshold, then one arrow from $A$ to $Y$ and another from $U$ to $X_a$ would have been added to the regression graph. Though this would correspond to a better goodness-of-fit, such weak dependences are less likely to become confirmed as being important in follow-up studies.
\begin{table}[H] \centering
\small \setlength{\tabcolsep}{1mm} \begin{tabular}{l P{-2,2} P{1,2} P{-1,2} c P{-2,2} P{1,2} P{-1,2} c P{-1,2}} \toprule \multicolumn{10}{l}{Response: $X_b$, depression before; linear regression}\\ \midrule & \multicolumn{3}{c}{starting model} && \multicolumn{3}{c}{selected} && \ccolhd{excluded}\\ \cmidrule{2-4} \cmidrule{6-8} explanatory variables & \ccolhd{coeff} & \ccolhd{$s_{\rm coeff}$} & \ccolhd{$z_{\rm obs}$} && \ccolhd{coeff} & \ccolhd{$s_{\rm coeff}$} & \ccolhd{$z_{\rm obs}$} && \ccolhd{$z'_{\rm obs}$}\\ \midrule constant &10.96 & -\nn&-\nn&& 7.31 &-\nn & -\nn &&-\nn\\ $U$, pain chronicity& 1.97& 0.49 & 4.02 && 1.78 & 0.46 & 3.87 &&-\nn \\ $A$, site of pain& -2.33 & 1.50 & -1.55 && -\nn & -\nn& -\nn&& -1.42\\ $V$, previous illnesses& 0.54& 0.18&2.99&& 0.55& 0.18& 3.06&&-\nn\\ $B$, level of schooling& -1.10 & 1.31 & -0.84&& -\nn & -\nn & -\nn && -0.57\\[1mm] \midrule \multicolumn{10}{l}{$R^2_{\rm full}=0.18$ \n \nn Selected model$\n X_b: U+V$ \n \nn$R^2_{\rm sel}=0.17$}\\ \bottomrule \\[-6mm] \end{tabular} \label{respXb} \end{table}
\begin{table}[H] \centering
\small \setlength{\tabcolsep}{1mm} \begin{tabular}{l P{-2,2} P{1,2} P{-1,2} c P{-2,2} P{1,2} P{-1,2} c P{-1,2}} \toprule \multicolumn{10}{l}{Response: $U$, chronicity of pain; linear regression}\\ \midrule & \multicolumn{3}{c}{starting model} && \multicolumn{3}{c}{selected} && \ccolhd{excluded}\\ \cmidrule{2-4} \cmidrule{6-8} explanatory variables & \ccolhd{coeff} & \ccolhd{$s_{\rm coeff}$} & \ccolhd{$z_{\rm obs}$} && \ccolhd{coeff} & \ccolhd{$s_{\rm coeff}$} & \ccolhd{$z_{\rm obs}$} && \ccolhd{$z'_{\rm obs}$}\\ \midrule constant &2.93 & -\nn&-\nn&& 2.47&-\nn & -\nn &&-\nn\\ $A$, site of pain& 0.95 & 0.21 & 4.58 && 1.02 & 0.20& 5.02&& -\nn\\ $V$, previous illnesses& 0.14& 0.02&5.83&& 0.14& 0.02& 5.92&&-\nn\\ $B$, level of schooling& -0.27& 0.19 & -1.43&& -\nn & -\nn & -\nn && -1.43\\[1mm] \midrule \multicolumn{10}{l}{$R^2_{\rm full}=0.26$ \n \nn Selected model$\n U: A+V$ \n \nn$R^2_{\rm sel}=0.25$}\\ \bottomrule \\[-6mm] \end{tabular} \label{respU} \end{table}
\begin{table}[H] \centering
\small \setlength{\tabcolsep}{1mm} \begin{tabular}{l P{-2,2} P{1,2} P{-1,2} c P{-2,2} P{1,2} P{-1,2} c P{-1,2}} \toprule \multicolumn{10}{l}{Response: $A$, site of pain; logistic regression}\\ \midrule & \multicolumn{3}{c}{starting model} && \multicolumn{3}{c}{selected} && \ccolhd{excluded}\\ \cmidrule{2-4} \cmidrule{6-8} explanatory variables & \ccolhd{coeff} & \ccolhd{$s_{\rm coeff}$} & \ccolhd{$z_{\rm obs}$} && \ccolhd{coeff} & \ccolhd{$s_{\rm coeff}$} & \ccolhd{$z_{\rm obs}$} && \ccolhd{$z'_{\rm obs}$}\\ \midrule constant &0.26 & -\nn&-\nn&& 0.60 &-\nn & -\nn &&-\nn\\ $V$, previous illnesses& 0.05& 0.04&1.22&& -\nn& -\nn & -\nn &&1.22\\ $B$, level of schooling& -1.25 & 0.40& -3.11&& -1.28& 0.40 & -3.18 && -\nn\\[1mm] \midrule \multicolumn{10}{l}{Selected model\n $A: B$; response recoded to (0,1) instead of (1,2) }\\ \bottomrule \\[-3mm] \end{tabular} \label{respA} \end{table}
\begin{table}[H] \centering
\small \setlength{\tabcolsep}{1mm} \begin{tabular}{l P{-2,2} P{1,2} P{-1,2} c P{-2,2} P{1,2} P{-1,2} c P{-1,2}} \toprule \multicolumn{10}{l}{Response: $V$, previous illnesses; linear regression}\\ \midrule & \multicolumn{3}{c}{starting model} && \multicolumn{3}{c}{selected} && \ccolhd{excluded}\\ \cmidrule{2-4} \cmidrule{6-8} explanatory variables & \ccolhd{coeff} & \ccolhd{$s_{\rm coeff}$} & \ccolhd{$z_{\rm obs}$} && \ccolhd{coeff} & \ccolhd{$s_{\rm coeff}$} & \ccolhd{$z_{\rm obs}$} && \ccolhd{$z'_{\rm obs}$}\\ \midrule constant &6.41& -\nn&-\nn&& 5.53 &-\nn & -\nn &&-\nn\\ $B$, level of schooling& -0.65 & 0.54 & -1.20&& -\nn & -\nn & -\nn && -\nn\\[1mm] \midrule \multicolumn{10}{l}{Selected model\n $V: -$ }\\ \bottomrule \\[-3mm] \end{tabular} \label{respV} \end{table}
The subgraph induced by $Z_a, Z_b, X_a, X_b$ of the regression graph in Figure \ref{figregpain} corresponds to two seemingly unrelated regressions. Even though separate least-squares estimates can in principle be severely distorted,
for the present data, the structure is so well-fitting in the unconstrained multivariate regression of $Z_a$ and $X_a$ on $Z_b$, $X_b$, $U,V,A,B$, that is in a simple covering model, that none of these potential problems are relevant.
With $C=\{U,V,A,B\}$, this is evident from the observed covariance matrix of $Z_a, X_a$ given
$Z_b, X_b, C$, denoted here by $\tilde{\Sigma}_{aa|bC}$ and the observed regression coefficient matrix
$\tilde{\Pi}_{a|b.C}$ being almost identical to the corresponding maximum-likelihood estimates $\hat{\Sigma}_{aa|bC}$ and $\hat{\Pi}_{a|b.C}$.
The former can be obtained by sweeping or partially inverting the observed covariance matrix of the eight variables with respect to $Z_b, X_b, C$ and the latter by using an adaptation of the EM-algorithm,
due to Kiiveri (1989), on the observed covariance matrix of the four symptoms, corrected for linear regression on $C$. In this way, one gets
$$ \tilde{\Sigma}_{aa|bC}= \left(\begin{array}{rr}5.61& 3.91\\3.91& 48.37 \end{array} \right), \nn \nn
\hat{\Sigma}_{aa|bC}= \left(\begin{array}{rr}5.66& 3.94\\3.94& 48.41 \end{array} \right), $$
$$ \tilde{\Pi}_{a|b.C}= \left(\begin{array}{rr}0.12 & 0.03\\-0.05& 0.62 \end{array} \right), \nn \nn
\hat{\Pi}_{a|b.C}= \left(\begin{array}{rr}0.14& 0.00\\0.00 & 0.60 \end{array} \right).$$
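In matrix terms, both observed quantities are obtained from the joint covariance matrix by one Schur-complement step: with rows and columns partitioned into a response block $a$ and a conditioning block $b$, the regression coefficient matrix is $\Pi_{a|b} = \Sigma_{ab}\Sigma_{bb}^{-1}$ and the conditional covariance is $\Sigma_{aa|b} = \Sigma_{aa} - \Sigma_{ab}\Sigma_{bb}^{-1}\Sigma_{ba}$. A minimal numpy sketch of this sweep operation (our own illustration, not the original computation):

```python
import numpy as np

def regress_blocks(S, a, b):
    """Given a joint covariance matrix S and index lists a, b, return
    (Pi, C): the regression coefficient matrix of the a-variables on
    the b-variables and the conditional covariance of a given b.

    Pi = S_ab S_bb^{-1},   C = S_aa - S_ab S_bb^{-1} S_ba.
    """
    S = np.asarray(S, dtype=float)
    Saa = S[np.ix_(a, a)]
    Sab = S[np.ix_(a, b)]
    Sbb = S[np.ix_(b, b)]
    Pi = Sab @ np.linalg.inv(Sbb)
    C = Saa - Pi @ Sab.T        # S symmetric, so S_ba = S_ab^T
    return Pi, C
```

For instance, for the $2\times 2$ covariance matrix with entries $(2,1;1,1)$ and $a=\{0\}$, $b=\{1\}$, this yields $\Pi = 1$ and $C = 2 - 1\cdot 1 = 1$.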
The assumed definition of the joint distribution in terms of univariate and multivariate regressions assures that the overall fit of the model can be judged locally in two steps. First, one compares each unconstrained, full regression
of a single response with regressions constrained by some independences, that is by
selecting a subset of directly explanatory variables from the list of the potentially explanatory variables.
Next, one decides for each component pair of a joint response whether this pair is conditionally independent given their directly explanatory variables considered jointly. This can again be achieved by single univariate regressions, as illustrated above for the joint responses $Z_a$ and $X_a$. \\
\noindent{\bf Acknowledgement.} The work of the first author has been supported in part by the Swedish Research Society via the Gothenburg Stochastic Center and by the Swedish Strategic Fund via the Gothenburg Mathematical Modeling Center. We thank R. Castelo, D.R. Cox, G. Marchetti and the referees for their most helpful comments.\\
\renewcommand{\baselinestretch}{1.2}
\end{document}
\begin{document}
\author{Vida Dujmovi{\'c}\,\footnotemark[1] \qquad Louis Esperet\,\footnotemark[2] \qquad Pat Morin\,\footnotemark[6]\\ \qquad Bartosz Walczak\,\footnotemark[4] \qquad David~R.~Wood\,\footnotemark[5]}
\date{}
\footnotetext[1]{School of Computer Science and Electrical Engineering, University of Ottawa, Ottawa, Canada (\texttt{[email protected]}). Research supported by NSERC and the Ontario Ministry of Research and Innovation.}
\footnotetext[2]{Laboratoire G-SCOP (CNRS, Univ.\ Grenoble Alpes), Grenoble, France (\texttt{[email protected]}). Partially supported by ANR Projects GATO (\textsc{anr-16-ce40-0009-01}) and GrR (\textsc{anr-18-ce40-0032}).}
\footnotetext[6]{School of Computer Science, Carleton University, Ottawa, Canada (\texttt{[email protected]}). Research supported by NSERC.}
\footnotetext[4]{Department of Theoretical Computer Science, Faculty of Mathematics and Computer Science, Jagiellonian University, Krak\'ow, Poland (\texttt{[email protected]}). Research partially supported by National Science Centre of Poland grant 2015/17/D/ST1/00585.}
\footnotetext[5]{School of Mathematics, Monash University, Melbourne, Australia (\texttt{[email protected]}). Research supported by the Australian Research Council.}
\sloppy
\title{\textbf{Clustered 3-Colouring Graphs\\ of Bounded Degree}}
\maketitle
\begin{abstract} A (not necessarily proper) vertex colouring of a graph has \defn{clustering} $c$ if every monochromatic component has at most $c$ vertices. We prove that planar graphs with maximum degree $\Delta$ are 3-colourable with clustering $O(\Delta^2)$. The previous best bound was $O(\Delta^{37})$. This result for planar graphs generalises to graphs that can be drawn on a surface of bounded Euler genus with a bounded number of crossings per edge. We then prove that graphs with maximum degree $\Delta$ that exclude a fixed minor are 3-colourable with clustering $O(\Delta^5)$. The best previous bound for this result was exponential in $\Delta$. \end{abstract}
\renewcommand{\thefootnote}{\arabic{footnote}}
\section{Introduction} \label{Introduction}
Consider a graph where each vertex is assigned a colour. A \defn{monochromatic component} is a connected component of the subgraph induced by all the vertices assigned a single colour. A graph $G$ is $k$-colourable with \defn{clustering} $c$ if each vertex can be assigned one of $k$ colours so that each monochromatic component has at most $c$ vertices. There have been several recent papers on clustered colouring \citep{NSSW19,vdHW18,KO19,CE19,EJ14,HST03,EO16,DN17,LO17,HW19,MRW17,LW1,LW2,LW3,NSW}; see \citep{WoodSurvey} for a survey. The general goal of this paper is to prove that various classes of graphs are 3-colourable with clustering bounded by a polynomial function of the maximum degree.
First consider clustered colouring of planar graphs. The 4-colour theorem \citep{AH89,RSST97} says that every planar graph is 4-colourable with clustering 1. This result is best possible regardless of the clustering value: for every integer $c$ there is a planar graph $G$ such that every 3-colouring of $G$ has a monochromatic component with more than $c$ vertices \citep{WoodSurvey,ADOV03,EJ14,KMRV97}. All known examples of such graphs have unbounded maximum degree. This led \citet*{KMRV97} to ask whether planar graphs with bounded maximum degree are 3-colourable with bounded clustering. This question was answered positively by \citet*{EJ14}.
Three colours is best possible for $\Delta\geqslant 6$, since the Hex Lemma~\citep{Gale79} implies that for every integer $c$, there is a planar graph $G$ with maximum degree 6 such that every 2-colouring of $G$ has a monochromatic component with more than $c$ vertices \citep{LMST08,MP08}. Furthermore, this degree threshold is best possible, since \citet*{HST03} proved that every graph with maximum degree 5 (regardless of planarity) is 2-colourable with clustering less than 20,000.
The following natural question arises: what is the least function $c(\Delta)$ such that every planar graph with maximum degree $\Delta$ has a 3-colouring with clustering $c(\Delta)$? The clustering function of \citet*{EJ14} was $\Delta^{O(\Delta)}$. While \citet*{EJ14} made no effort to optimise this function, exponential dependence on $\Delta$ is unavoidable using their method. Recently, \citet*{LW1} improved this bound to $O(\Delta^{37})$. A primary contribution of this paper (\cref{3ColourPlanar}) is to improve it further to $O(\Delta^2)$.
Like the above-mentioned works of \citet*{EJ14} and \citet*{LW1}, our theorem generalises to graphs with bounded Euler genus\footnote{The \textit{Euler genus} of the orientable surface with $h$ handles is $2h$. The \textit{Euler genus} of the non-orientable surface with $c$ cross-caps is $c$. The \textit{Euler genus} of a graph $G$ is the minimum integer $k$ such that $G$ embeds in a surface of Euler genus $k$. Of course, a graph is planar if and only if it has Euler genus 0; see \citep{MoharThom} for more about graph embeddings in surfaces.\newline A graph $H$ is a \textit{minor} of a graph $G$ if a graph isomorphic to $H$ can be obtained from a subgraph of $G$ by contracting edges. A class $\mathcal{G}$ of graphs is \defn{minor-closed} if for every graph $G\in\mathcal{G}$, every minor of $G$ is in $\mathcal{G}$. A minor-closed class is \defn{proper} if it is not the class of all graphs. For example, for fixed $g\geqslant 0$, the class of graphs with Euler genus at most $g$ is a proper minor-closed class.\newline A graph $H$ is \defn{apex} if $H-v$ is planar for some vertex $v$.}. In particular, we prove (in \cref{3ColourGenus}) that graphs with Euler genus $g$ and maximum degree $\Delta$ are 3-colourable with clustering $O(g^3\Delta^2)$. The previous best clustering function was $O(g^{19}\Delta^{37})$ due to \citet*{LW1}. In fact, our result and that of \citet*{LW1} hold in the more general setting of bounded layered treewidth (defined in \cref{LayeredTreewidth}). This enables further generalisations. For example, we prove (in \cref{3ColourApex}) that apex-minor-free graphs are 3-colourable with clustering $O(\Delta^2)$, and graphs that have a drawing on a surface of bounded Euler genus with a bounded number of crossings per edge are 3-colourable with clustering $O(\Delta^2)$. All these results are presented in \cref{Planar}.
\cref{Minors} focuses on clustered colouring of graphs excluding a fixed minor. For $K_t$-minor-free graphs, at least $t-1$ colours are needed regardless of the clustering function; that is, for every integer $c$ there is a $K_t$-minor-free graph $G$ such that every $(t-2)$-colouring of $G$ has a monochromatic component with more than $c$ vertices \citep{WoodSurvey,EKKOS15}. Again, all such examples have unbounded maximum degree. Indeed, in the setting of bounded degree graphs, qualitatively different behaviour occurs. In particular, \citet*{LO17} proved that bounded degree graphs excluding a fixed minor are 3-colourable with bounded clustering (thus generalising the above result of \citet*{EJ14} for planar graphs and graphs of bounded Euler genus).
\citet*{LO17} did not state an explicit bound on the clustering function, but it is at least exponential in the maximum degree\footnote{Chun-Hung Liu [private communication, 2020] believes that the method in \citep{LO17} could be adapted to give a polynomial bound using more advanced graph structure theorems.}. We prove (in \cref{3colMinor}) that graphs with maximum degree $\Delta$ that exclude a fixed minor are 3-colourable with clustering $O(\Delta^5)$. The proof of this result is much simpler than that of \citet*{LO17}, and is based on a new structural description of bounded-degree graphs excluding a minor that is of independent interest (\cref{MinorFreeDeltaLayeredPartition,MinorFreeDegreeStructure}).
Bounded maximum degree alone is not enough to ensure an absolute bound (independent of the degree) on the number of colours in a clustered colouring. In particular, for all integers $\Delta\geqslant 2$ and $c$ there is a graph $G$ with maximum degree $\Delta$ such that every $\floor{\frac{\Delta+2}{4}}$-colouring of $G$ has a monochromatic component with more than $c$ vertices; see \citep{ADOV03,HST03,WoodSurvey}. This says that in all of the above results, to achieve an absolute bound on the number of colours, one must assume some structural property (such as bounded treewidth, being planar, or excluding a minor) in addition to assuming bounded maximum degree.
To conclude our literature survey, we mention the results of \citet*{LW1,LW2,LW3} that generalise the bounded degree setting. First, \citet*{LW1} proved that for all $s,t,k\in\mathbb{N}$ there exists $c\in\mathbb{N}$ such that every graph with layered treewidth $k$ and with no $K_{s,t}$ subgraph is $(s+2)$-colourable with clustering $c$. The case $s=1$ is equivalent to the bounded degree setting; thus this result generalises the above-mentioned 3-colouring results for graphs with bounded maximum degree. For $s\geqslant 2$, the clustering function here is very large, and the proof is 70+ pages long. In the setting of excluded minors, \citet*{LW2} proved that for all $s,t\in\mathbb{N}$ and for every graph $H$ there is an integer $c$ such that every graph with no $H$-minor and with no $K_{s,t}$-subgraph is $(s+2)$-colourable with clustering $c$. Similar results are obtained for excluded topological minors \citep{LW3}.
\section{Planar Graphs and Generalisations} \label{Planar}
This section proves that planar graphs with maximum degree $\Delta$ (and other more general classes) are 3-colourable with clustering $O(\Delta^2)$. Let $\mathbb{N}:=\{1,2,\dots\}$ and $\mathbb{N}_0:=\{0,1,\dots\}$.
\subsection{Treewidth}
Tree decompositions and treewidth are used throughout this paper. For two graphs $G$ and $H$, an \defn{$H$-decomposition} of $G$ consists of a collection $(B_x : x\in V(H))$ of subsets of $V(G)$, called \defn{bags}, indexed by the nodes of $H$, such that: \begin{compactitem} \item for every vertex $v$ of $G$, the set $\{x\in V(H) : v\in B_x\}$ induces a non-empty connected subgraph of $H$, and \item for every edge $vw$ of $G$, there is a vertex $x\in V(H)$ for which $v,w\in B_x$. \end{compactitem}
The \defn{width} of such an $H$-decomposition is $\max\{|B_x|:x\in V(H)\}-1$. A \defn{tree-decomposition} is a $T$-decomposition for some tree $T$. Tree decompositions were introduced by \citet*{Halin76} and \citet*{RS-II}. The more general notion of $H$-decomposition was introduced by \citet*{DK05}. The \defn{treewidth} of a graph $G$ is the minimum width of a tree-decomposition of $G$. Treewidth measures how similar a given graph is to a tree. It is particularly important in structural and algorithmic graph theory; see \citep{HW17,Reed97,Bodlaender-TCS98} for surveys.
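To illustrate these definitions, consider the cycle $C_n$ with vertices $v_1,\dots,v_n$ in cyclic order (for some $n\geqslant 3$). Taking $T$ to be the path on nodes $x_2,\dots,x_{n-1}$ with bags
\[ B_{x_i} := \{v_1,v_i,v_{i+1}\} \qquad (2\leqslant i\leqslant n-1) \]
gives a tree-decomposition of $C_n$ of width 2: every edge $v_iv_{i+1}$ lies in $B_{x_i}$ (with $v_1v_2$ and $v_nv_1$ covered by $B_{x_2}$ and $B_{x_{n-1}}$ respectively), and the bags containing any given vertex index a subpath of $T$. Since the graphs of treewidth at most 1 are exactly the forests, $C_n$ has treewidth exactly 2.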
Our first tool, which was also used by \citet*{LW1}, is the following 2-colouring result for graphs of bounded treewidth due to \citet*{ADOV03}. The constant 20 comes from applying a result from \citep{Wood09}.
\begin{lem}[\citep{ADOV03}] \label{2Colour} Every graph with maximum degree\/ $\Delta\in\mathbb{N}$ and treewidth less than\/ $k\in\mathbb{N}$ is\/ $2$-colourable with clustering\/ $20k\Delta$. \end{lem}
As an aside, it follows from the Lipton--Tarjan separator theorem~\citep{LT79} that $n$-vertex planar graphs have treewidth $O(\sqrt{n})$. Thus \cref{2Colour} implies that $n$-vertex planar graphs with maximum degree $\Delta\in\mathbb{N}$ are 2-colourable with clustering $O(\Delta\sqrt{n})$, which answers an open problem raised by \citet*{LMST08}. The same result holds for graphs excluding any fixed minor, using the separator theorem of \citet*{AST90}.
\subsection{Key Lemma}
The next lemma is a central result of the paper. Here, a \defn{layering} of a graph $G$ is an ordered partition $(V_0,V_1,\dots)$ of $V(G)$ such that for every edge $vw\in E(G)$, if $v\in V_i$ and $w\in V_j$, then $|i-j| \leqslant 1$. For example, if $r$ is a vertex in a connected graph $G$ and $V_i:=\{v\in V(G):\dist_G(r,v)=i\}$ for all $i\in\mathbb{N}_0$, then $(V_0,V_1,\dots)$ is called a \textit{BFS layering} of $G$. The lemma assumes that for some layering of a graph, every subgraph induced by a bounded number of consecutive layers has bounded treewidth. This property dates to the seminal work of \citet*{Baker94}, who used it to obtain efficient approximation algorithms for various NP-hard problems on planar graphs. We show that graphs that satisfy this property and have small maximum degree are 3-colourable with small clustering.
\begin{lem} \label{3Colour} Let\/ $G$ be a graph with maximum degree\/ $\Delta\in\mathbb{N}$. Let\/ $(V_0,V_1,\dots)$ be a layering of\/ $G$ such that\/ $G[\bigcup_{j=0}^{10}V_{i+j}]$ has treewidth less than\/ $k\in\mathbb{N}$ for all\/ $i\in\mathbb{N}_0$. Then\/ $G$ is\/ $3$-colourable with clustering\/ $8000 k^3\Delta^2$. \end{lem}
\begin{proof} No attempt is made to improve the constant 8000. We may assume (by renaming the layers) that $V_0=V_1=V_2=V_3=V_4=\emptyset$.
Let $\overline{i}:=i\bmod{3}$ for $i\in\mathbb{N}_0$. As illustrated in \cref{ProofIllustration}, for $i\in\mathbb{N}_0$, let $G_i$ be the induced subgraph $G[V_{6i}\cup V_{6i+1}\cup \dots \cup V_{6i+4}]$. Thus $G_i$ has maximum degree at most $\Delta$ and treewidth less than $k$. By \cref{2Colour}, $G_i$ has a 2-colouring $c_i$ with clustering $20k\Delta$. Use colours $\overline{i}$ and $\overline{i+1}$ for this colouring of $G_i$. We now define the desired colouring $c$ of $G$. Vertices in $V_{6i}\cup V_{6i+1}$ coloured $\overline{i}$ in $c_i$ keep this colour in $c$. Vertices in $V_{6i+2}$ keep their colour from $c_i$ in $c$. Vertices in $V_{6i+3}\cup V_{6i+4}$ coloured $\overline{i+1}$ in $c_i$ keep this colour in $c$. Other vertices are assigned a new colour, as we now explain.
\begin{figure}
\caption{ Proof of \cref{3Colour}.}
\label{ProofIllustration}
\end{figure}
For $j\in\mathbb{N}_0$ and $\ell\in\{0,1,2\}$, let $V_{j,\ell}$ be the set of vertices in $V_j$ coloured $\ell$ in the colouring $c_i$ of the corresponding graph $G_i$ (which is well defined since $G_0,G_1,\dots$ are pairwise disjoint). For $i\in\mathbb{N}_0$, let \begin{align*} A_i := \bigcup_{j=0}^3 V_{6i+j,\overline{i}}\quad, \quad B_i := \bigcup_{j=1}^4 V_{6(i+1)+j,\overline{i-1}}\quad, \quad Y_i := A_i \,\cup\, V_{6i+4,\overline{i}} \,\cup\, V_{6i+5} \,\cup\, V_{6(i+1),\overline{i-1}} \,\cup\, B_i. \end{align*} Note that $\{Y_i:i\in\mathbb{N}_0\}$ is a partition of $V(G)$ (since $V_0=V_1=V_2=V_3=V_4=\emptyset$). In fact, $(Y_0,Y_1,\dots)$ is a layering of $G$, since $V_{6i-1}$ separates $Y_0\cup \dots\cup Y_{i-2}$ and $Y_i\cup Y_{i+1} \cup \cdots$.
For $i\in\mathbb{N}_0$, let $Z_i$ be the graph obtained from $G[Y_i]$ as follows: for each component $X$ of $G[A_i]$ or $G[B_i]$, contract $X$ into a single vertex $v_X$. The neighbours of $v_X$ in $Z_i$ are contained within a monochromatic component of $G_i$ or $G_{i+1}$; thus $v_X$ has degree at most the size of the corresponding monochromatic component of $G_i$ or $G_{i+1}$, which is at most $20k\Delta$. Since $Z_i$ is a minor of $G[\bigcup_{j=0}^{10}V_{6i+j}]$ and treewidth is a minor-monotone parameter, $Z_i$ has treewidth less than $k$. By \cref{2Colour}, $Z_i$ has a 2-colouring $c_i'$ with clustering $400 k^2\Delta$. Use colours $\overline{i}$ and $\overline{i-1}$ for this colouring of $Z_i$.
We now assign colours to the remaining vertices of $G$ in the colouring $c$. Vertices in $V_{6i+4,\overline{i}} \cup V_{6i+5} \cup V_{6(i+1),\overline{i-1}}$ keep their colour from the colouring $c_i'$ of $Z_i$. Note that these vertices were not contracted in the construction of $Z_i$. For each component $X$ of $G[A_i]$, assign the colour given to $v_X$ in $c_i'$ to each vertex in $X\cap V_{6i+3}$. Similarly, for each component $X$ of $G[B_i]$, assign the colour given to $v_X$ in $c_i'$ to each vertex in $X\cap V_{6i+7}$. This completes the definition of the colouring $c$ of $G$.
Consider a monochromatic component $M$ in the 3-colouring $c$ of $G$. Suppose that $M$ contains an edge $vw$ with $v\in Y_{i-1}$ and $w\in Y_i$ for some $i\in\mathbb{N}_0$. The only colour used by both $Y_{i-1}$ and $Y_i$ is $\overline{i-1}$; thus $M$ is coloured $\overline{i-1}$. But $V_{6i+2}$ does not use colour $\overline{i-1}$, and it separates $Y_{i-1}$ and $Y_i$. This contradiction shows that $M$ contains no such edge $vw$. Since $(Y_0,Y_1,\dots)$ is a layering of $G$ and $M$ is connected, $M$ is contained in some $Y_i$. The only colours used in $Y_i$ are $\overline{i}$ and $\overline{i-1}$. By symmetry we may assume that $M$ is coloured $\overline{i}$.
If $M$ is contained in $V_{6i} \cup V_{6i+1} \cup V_{6i+2}$, then $M$ is contained in some monochromatic component of $G_i$ (with respect to the colouring $c_i$), and thus $|V(M)| \leqslant 20k\Delta$. Otherwise, $M$ is contained in the graph obtained from a monochromatic component $C$ of $Z_i$ (with respect to the colouring $c_i'$) by replacing each contracted vertex $v_X$ in $C$ by $X$. Since $|V(C)| \leqslant 400 k^2\Delta$ and $|V(X)| \leqslant 20k\Delta$, we conclude that $|V(M)| \leqslant 8000 k^3\Delta^2$. \end{proof}
See Appendix~A for a slightly stronger and slightly simpler version of \cref{3Colour} that was added after the paper was accepted to \emph{Combinatorics, Probability \& Computing}.
\subsection{Layered Treewidth} \label{LayeredTreewidth}
\citet*{DMW17} and \citet*{Shahrokhi13} independently introduced the following concept. The \defn{layered treewidth} of a graph $G$ is the minimum integer $k$ such that $G$ has a tree-decomposition $(B_x:x\in V(T))$ and a layering $(V_0,V_1,\dots)$ such that $|B_x\cap V_i|\leqslant k$ for every bag $B_x$ and layer $V_i$. Applications of layered treewidth include graph colouring \citep{DMW17,LW1,vdHW18}, graph drawing \citep{DMW17,BDDEW18}, book embeddings \citep{DF18}, boxicity \citep{SW20}, and intersection graph theory \citep{Shahrokhi13}. The related notion of layered pathwidth has also been studied \citep{DEJMW20,BDDEW18}. In a graph with layered treewidth $k$, the subgraph induced by the union of any 11 consecutive layers has treewidth less than $11k$. Thus \cref{3Colour} implies:
\begin{cor} \label{3ColourLayeredTreewidth} Every graph with layered treewidth\/ $k\in\mathbb{N}$ and maximum degree\/ $\Delta\in\mathbb{N}$ is\/ $3$-colourable with clustering\/ $O(k^3\Delta^2)$. \end{cor}
This corollary improves on a result of \citet*{LW1} who proved an upper bound of $O(k^{19}\Delta^{37})$ on the clustering function.
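As a simple illustration of the definition of layered treewidth, the $n\times n$ planar grid (with vertex set $\{1,\dots,n\}^2$, where $(i,j)$ and $(i',j')$ are adjacent whenever $|i-i'|+|j-j'|=1$) has layered treewidth at most 2, even though its treewidth equals $n$: take the rows $V_i:=\{(i,j):1\leqslant j\leqslant n\}$ as layers, and take the path-decomposition with bags
\[ B_j := \{(i,j) : 1\leqslant i\leqslant n\} \cup \{(i,j+1) : 1\leqslant i\leqslant n\} \qquad (1\leqslant j\leqslant n-1), \]
each consisting of two consecutive columns. Every edge of the grid lies in some bag, the bags containing a given vertex are consecutive, and each bag intersects each row in at most two vertices.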
Many classes of graphs are known to have bounded layered treewidth. For example, \citet*{DMW17} proved that every planar graph has layered treewidth at most 3, that every graph with Euler genus $g$ has layered treewidth at most $2g+3$, and that every apex-minor-free class of graphs has bounded layered treewidth. \cref{3ColourLayeredTreewidth} thus implies the following results.
\begin{cor} \label{3ColourPlanar} Every planar graph with maximum degree\/ $\Delta\in\mathbb{N}$ is\/ $3$-colourable with clustering\/ $O(\Delta^2)$. \end{cor}
\begin{cor} \label{3ColourGenus} Every graph with Euler genus\/ $g\in\mathbb{N}_0$ and maximum degree\/ $\Delta\in\mathbb{N}$ is\/ $3$-colourable with clustering\/ $O(g^3\Delta^2)$. \end{cor}
\begin{cor} \label{3ColourApex} For every fixed apex graph\/ $H$, every\/ $H$-minor-free graph with maximum degree\/ $\Delta\in\mathbb{N}$ is\/ $3$-colourable with clustering\/ $O(\Delta^2)$. \end{cor}
The above corollaries can also be deduced from \cref{3Colour} without considering layered treewidth. First consider a planar graph $G$, which we may assume is connected. Let $(V_0,V_1,\dots)$ be a BFS layering of $G$. For $i\in\mathbb{N}_0$, let $G_i$ be obtained from $G[V_0\cup V_1\cup\dots\cup V_{i+10}]$ by contracting $G[V_0\cup \dots\cup V_{i-1}]$ (which is connected) into a single vertex. Thus $G_i$ is planar and has radius at most 11. \citet*{RS-III} proved that every planar graph with radius $d$ has treewidth at most $3d$. Thus $G[V_i\cup \dots \cup V_{i+10}]$, which is a subgraph of $G_i$, has treewidth at most 33. \cref{3ColourPlanar} then follows from \cref{3Colour}. The same proof works in any minor-closed class for which the treewidth of any graph $G$ in the class is bounded by a function of the radius of $G$. For example, \citet*{Eppstein-Algo00} proved that every graph with Euler genus $g$ and radius $d$ has treewidth at most $O(gd)$. \cref{3ColourGenus} follows. More generally, \citet*{Eppstein-Algo00} proved that for every apex graph $H$, every $H$-minor-free graph with bounded radius has bounded treewidth. \cref{3ColourApex} follows.
Finally, note that one can also prove that every graph with Euler genus $g$ and maximum degree $\Delta$ is 3-colourable with clustering $O(g\Delta^6)$ using \cref{3Colour} and a result of \citet*{EJ14}\footnote{\citet*{EJ14} proved that if every plane graph with maximum degree $\Delta$ has a 3-colouring with clustering $f(\Delta)$, where one colour is not used on the outerface, then graphs with Euler genus $g$ and maximum degree $\Delta$ are 3-colourable with clustering $O(\Delta^2 f(\Delta)^2 g)$. Now, let $G$ be a plane graph with maximum degree $\Delta$. Let $G^+$ be the plane graph obtained by adding one new vertex $r$ adjacent to the vertices on the outerface of $G$. For $i\in\mathbb{N}_0$, let $V_i$ be the set of vertices in $G^+$ at distance $i$ from $r$ in $G^+$. By the above contraction argument, $(V_1,V_2,\dots)$ is a layering of $G$ such that any set of 11 consecutive layers induces a subgraph with bounded treewidth. By \cref{3Colour}, $G$ is 3-colourable with clustering $O(\Delta^2)$. Moreover, only two colours are used on $V_1$ and thus on the outerface of $G$. By the above-mentioned result of \citet*{EJ14} with $f(\Delta)=O(\Delta^2)$, all graphs with Euler genus $g$ and maximum degree $\Delta$ are 3-colourable with clustering $O(\Delta^6 g)$.}.
\subsection{Examples}
One advantage of considering layered treewidth is that several non-minor-closed classes of interest have bounded layered treewidth. We give three examples:
\textbf{\boldmath $(g,k)$-Planar Graphs:} A graph is \defn{$(g,k)$-planar} if it has a drawing on a surface of Euler genus at most $g$ such that each edge is involved in at most $k$ crossings (with other edges). \citet*{DEW17} proved that every $(g,k)$-planar graph has layered treewidth $O(gk)$. \cref{3ColourLayeredTreewidth} implies that every $(g,k)$-planar graph with maximum degree $\Delta$ is 3-colourable with clustering $O(g^3k^3\Delta^2)$. This improves on a result of \citet*{LW1} who proved an upper bound of $O(g^{19} k^{19}\Delta^{37})$ on the clustering function.
\textbf{Map Graphs:} Map graphs are defined as follows. Start with a graph $G_0$ embedded in a surface of Euler genus $g$, with each face labelled a `nation' or a `lake', where each vertex of $G_0$ is incident with at most $d$ nations. Let $G$ be the graph whose vertices are the nations of $G_0$, where two vertices are adjacent in $G$ if the corresponding faces in $G_0$ share a vertex. Then $G$ is called a \defn{$(g,d)$-map graph}. A $(0,d)$-map graph is called a (plane) \defn{$d$-map graph}; see \citep{FLS-SODA12,CGP02} for example. The $(g,3)$-map graphs are precisely the graphs of Euler genus at most $g$; see \citep{DEW17}. So $(g,d)$-map graphs generalise graphs embedded in a surface. \citet*{DEW17} showed that every $(g,d)$-map graph has layered treewidth at most $(2g+3)(2d+1)$. \cref{3ColourLayeredTreewidth} then implies that every $(g,d)$-map graph with maximum degree $\Delta$ is 3-colourable with clustering $O(g^3d^3\Delta^2)$. This improves on a result of \citet*{LW1} who proved an upper bound of $O(g^{19} d^{19}\Delta^{37})$ on the clustering function.
\textbf{Graph Powers:} For $p\in\mathbb{N}$, the \defn{$p$-th power} of a graph $G$ is the graph $G^p$ with vertex set $V(G^p):=V(G)$, where $vw\in E(G^p)$ if and only if $\dist_G(v,w)\leqslant p$. It follows from the work of \citet*{DMW19b} that powers of graphs with bounded layered treewidth and bounded maximum degree have bounded layered treewidth. Here we give a direct proof with better bounds.
\begin{lem} \label{PowerLayeredTreewidth} If\/ $G$ is a graph with layered treewidth\/ $k\in\mathbb{N}$ and maximum degree\/ $\Delta\in\mathbb{N}$, then\/ $G^p$ has layered treewidth less than\/ $2pk\Delta^{\floor{p/2}}$. \end{lem}
\begin{proof}
The result is trivial if $\Delta=1$, so assume that $\Delta\geqslant 2$. Let $(V_0,V_1,\dots)$ be a layering of $G$ and let $(B_x: x\in V(T))$ be a tree-decomposition of $G$ such that $|V_i\cap B_x| \leqslant k$ for each $i\in\mathbb{N}_0$ and $x\in V(T)$. For each vertex $v\in V(G)$, let $X_v:=\{ w\in V(G): \dist_G(v,w) \leqslant \floor{\frac{p}{2}}\}$. For each node $x\in V(T)$, let $B'_x:=\bigcup_{v\in B_x} X_v$.
We now prove that $(B'_x:x\in V(T))$ is a tree-decomposition of $G^p$. Consider a vertex $\alpha\in V(G^p)$. Since $\alpha\in X_v$ if and only if $v\in X_\alpha$, $$\{x\in V(T): \alpha \in B'_x\} = \bigcup_{v\in X_\alpha} \{ x\in V(T): v \in B_x \}.$$ Since $\{ x\in V(T): v \in B_x \}$ induces a connected subtree of $T$, and $X_\alpha$ induces a connected subgraph of $G$, it follows that $\{x\in V(T): \alpha \in B'_x\}$ also induces a connected subtree of $T$. Now, consider an edge $\alpha\beta\in E(G^p)$. There is an edge $vw$ of $G$ (in the `middle' of a shortest $\alpha\beta$-path) such that $\alpha\in X_v$ and $\beta\in X_w$. Now $v,w\in B_x$ for some node $x\in V(T)$. By construction, $\alpha,\beta\in B'_x$. This shows that $(B'_x:x\in V(T))$ is a tree-decomposition of $G^p$.
For $i\in\mathbb{N}_0$, let $W_i:= V_{ip}\cup V_{ip+1}\cup\dots\cup V_{(i+1)p-1}$. For each edge $\alpha\beta\in E(G^p)$, if $\alpha\in V_i$ and $\beta \in V_j$, then $|i-j|\leqslant p$. Thus if $\alpha\in W_{i'}$ and $\beta\in W_{j'}$, then $|i'-j'|\leqslant 1$. This shows that $(W_0,W_1,\dots)$ is a layering of $G^p$. Since $|X_v| < 2\Delta^{\floor{p/2}}$ for each vertex $v\in V(G)$, for each node $x\in V(T)$ and $i\in\mathbb{N}_0$, we have $|B'_x \cap V_i | < 2 k\Delta^{\floor{p/2}}$, implying $|B'_x \cap W_i | < 2 pk\Delta^{\floor{p/2}}$. Therefore $G^p$ has layered treewidth less than $2pk\Delta^{\floor{p/2}}$. \end{proof}
\cref{3ColourLayeredTreewidth,PowerLayeredTreewidth} imply that for every graph $G$ with layered treewidth $k$ and maximum degree $\Delta$, the $p$-th power $G^p$ (which has maximum degree less than $2\Delta^p$) is 3-colourable with clustering $O( k^3 \Delta^{3\floor{p/2} + 2p})$. For example, for every $(g,k)$-planar graph $G$ with maximum degree $\Delta$, the $p$-th power $G^p$ has a 3-colouring with clustering $O( g^3k^3 \Delta^{3\floor{p/2} + 2p})$.
\section{Excluded Minors} \label{Minors}
This section shows that graphs excluding a fixed minor and with maximum degree $\Delta$ are 3-colourable with clustering $O(\Delta^5)$. The starting point is Robertson and Seymour's Graph Minor Structure Theorem, which we now introduce.
\subsection{Graph Minor Structure Theorem}
For a graph $G_0$ embedded in a surface, and a facial cycle $F$ of $G_0$ (thought of as a subgraph of $G_0$), an \defn{$F$-vortex} (relative to $G_0$) is an $F$-decomposition $(B_x\subseteq V(H):x\in V(F))$ of a graph $H$ such that $V(G_0\cap H)=V(F)$ and $x\in B_x$ for each $x\in V(F)$.
For $k\in\mathbb{N}_0$, a graph $G$ is \defn{$k$-almost embeddable} if for some set $A\subseteq V(G)$ with $|A|\leqslant k$ and for some $s\in\{0,\dots,k\}$, there are graphs $G_0,G_1,\dots,G_s$ such that: \begin{compactitem} \item $G-A = G_{0} \cup G_{1} \cup \cdots \cup G_s$, \item $G_{1}, \dots, G_s$ are pairwise vertex-disjoint; \item $G_{0}$ is embedded in a surface of Euler genus at most $k$, \item there are $s$ pairwise vertex-disjoint facial cycles $F_1,\dots,F_s$ of $G_0$, and \item for $i\in\{1,\dots,s\}$, there is an $F_i$-vortex $(B_x\subseteq V(G_i):x\in V(F_i))$ of $G_i$ (relative to $G_0$) of width at most $k$. \end{compactitem} The vertices in $A$ are called \defn{apex} vertices. They can be adjacent to any vertex in $G$.
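For example, every graph $G$ of Euler genus at most $g$ is $g$-almost embeddable: take
\[ A=\emptyset, \quad s=0, \quad G_0=G. \]
More generally, adding at most $k$ apex vertices (adjacent to arbitrary vertices) to a graph of Euler genus at most $g$ yields a $(g+k)$-almost embeddable graph.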
It is not clear whether the class of $k$-almost embeddable graphs is hereditary, so it will be convenient to define a graph to be \defn{$k$-almost\/$^{\downarrow}\!$ embeddable} if it is an induced subgraph of some $k$-almost embeddable graph.
In a tree-decomposition $(B_x : x\in V(T))$ of a graph $G$, the \defn{torso} of a bag $B_x$ is the graph obtained from $G[B_x]$ as follows: for every edge $xy\in E(T)$, add every edge $vw$ where $v,w\in B_x\cap B_y$.
The following graph minor structure theorem by \citet*{RS-XVI} is at the heart of graph minor theory.
\begin{thm}[\citep{RS-XVI}] \label{GMST} For every graph\/ $H$, there exists\/ $k\in\mathbb{N}_0$ such that every graph\/ $G$ that does not contain\/ $H$ as a minor has a tree decomposition\/ $(B_x : x\in V(T))$ such that the torso\/ $G_x$ of\/ $B_x$ is\/ $k$-almost embeddable for each node\/ $x\in V(T)$. \end{thm}
In \cref{GMST}, we have $|B_x\cap B_y| \leqslant 8k$ for each edge $xy$ of $T$ because of the following lemma.
\begin{lem}[{\protect\citep[Lemma~21]{DMW17}}] \label{CliqueSize} Every clique in a\/ $k$-almost embeddable graph has size at most\/ $8k$. \end{lem}
We need the following slight strengthening of \cref{GMST}.
\begin{thm} \label{GMST2} For every graph\/ $H$, there exists\/ $k\in\mathbb{N}$ such that every graph\/ $G$ that does not contain\/ $H$ as a minor and has maximum degree at most\/ $\Delta\in\mathbb{N}$ has a tree decomposition\/ $(B_x : x\in V(T))$ such that for each node\/ $x\in V(T)$, the torso\/ $G_x$ of\/ $B_x$ is\/ $k$-almost\/$^{\downarrow}\!$ embeddable and has maximum degree less than\/ $8 k\Delta$. \end{thm}
\begin{proof}
Let $(B_x:x\in V(T))$ be a tree decomposition of $G$ such that each torso is $k$-almost$^{\downarrow}\!$ embeddable, and subject to this condition, $\sum_{x\in V(T)}|B_x|$ is minimum. This is well defined by \cref{GMST}.
Consider an edge $xy\in E(T)$. Let $T_{x,y}$ be the component of $T-xy$ containing $x$. Let $V_{x,y}:= \bigcup\{B_z\setminus B_y :z\in V(T_{x,y})\}$. Suppose for the sake of contradiction that some vertex $v\in B_x\cap B_y$ has no neighbour in $V_{y,x}$. Let $B'_z:= B_z\setminus\{v\}$ for each $z\in V(T_{y,x})$, and let $B'_z:=B_z$ for each $z\in V(T_{x,y})$. Since induced subgraphs of $k$-almost$^{\downarrow}\!$ embeddable graphs are $k$-almost$^{\downarrow}\!$ embeddable, $(B'_z:z\in V(T))$ is a tree decomposition of $G$ such that each torso is $k$-almost$^{\downarrow}\!$ embeddable. (This is the reason we define $k$-almost$^{\downarrow}\!$ embeddability.)\ Since $v\in B_y$, we have $|B'_y| <|B_y|$, implying $\sum_{z\in V(T)}|B'_z| < \sum_{z\in V(T)}|B_z|$. This contradicts the choice of $(B_x:x\in V(T))$. Hence every vertex in $B_x\cap B_y$ has a neighbour in $V_{y,x}$.
Consider a node $x\in V(T)$, a vertex $v\in B_x$, and some edge $vw$ of the torso $G_x$ that is not in $G[B_x]$. By definition of the torso, $v,w\in B_x\cap B_y$ for some edge $xy\in E(T)$. As shown above, there is an edge $vu$ in $G$ with $u\in V_{y,x}$; let $\phi_x(v,w) := (v,u)$. Since $u\notin B_x$ and $|B_x\cap B_y|\leqslant 8k$ (by \cref{CliqueSize}), we have $|\phi_x^{-1}(v,u)|<8k$ (all the elements in the pre-image of $(v,u)$ with respect to $\phi_x$ are of the form $(v,z)$ with $z\in B_x\cap B_y$). Thus $\deg_{G_x}(v) < 8 k\deg_G(v)\leqslant 8 k \Delta$. \end{proof}
Let $C_1=\{v_1,\dots,v_k\}$ be a $k$-clique in a graph $G_1$. Let $C_2=\{w_1,\dots,w_k\}$ be a $k$-clique in a graph $G_2$. Let $G$ be the graph obtained from the disjoint union of $G_1$ and $G_2$ by identifying $v_i$ and $w_i$ for $i\in\{1,\dots,k\}$, and possibly deleting some edges in $C_1$ ($=C_2$). Then $G$ is a \defn{clique-sum} of $G_1$ and $G_2$.
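For example, a clique-sum of two triangles over a shared edge $ab$ yields $K_4$ minus an edge, or the $4$-cycle if the edge $ab$ is deleted after the identification. More generally, it is well known that a graph has treewidth at most $k$ if and only if it can be obtained by repeated clique-sums of graphs with at most $k+1$ vertices.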
The following is a direct consequence of \cref{GMST2}.
\begin{cor} \label{GMST3} For every proper minor-closed class\/ $\mathcal{G}$, there exists\/ $k\in\mathbb{N}$ such that every graph\/ $G$ in\/ $\mathcal{G}$ with maximum degree at most\/ $\Delta\in\mathbb{N}$ is obtained by clique-sums of\/ $k$-almost\/$^{\downarrow}\!$ embeddable graphs of maximum degree less than\/ $8k\Delta$. \end{cor}
\subsection{Partitions}
A \defn{vertex-partition}, or simply \defn{partition}, of a graph $G$ is a set $\mathcal{P}$ of non-empty sets of vertices in $G$ such that each vertex of $G$ is in exactly one element of $\mathcal{P}$. Each element of $\mathcal{P}$ is called a \defn{part}. The \defn{quotient} of $\mathcal{P}$ is the graph, denoted by $G/\mathcal{P}$, with vertex set $\mathcal{P}$ where distinct parts $A,B\in \mathcal{P}$ are adjacent in $G/\mathcal{P}$ if and only if some vertex in $A$ is adjacent in $G$ to some vertex in $B$.
A partition $\mathcal{P}$ of a graph $G$ is called an \defn{$H$-partition} if $H$ is a graph that contains a spanning subgraph isomorphic to the quotient $G/\mathcal{P}$. Alternatively, an \defn{$H$-partition} of a graph $G$ is a partition $(A_x:x\in V(H))$ of $V(G)$ indexed by the vertices of $H$, such that for every edge $vw\in E(G)$, if $v\in A_x$ and $w\in A_y$ then $x=y$ or $xy\in E(H)$. The \defn{width} of such an $H$-partition is $\max\{|A_x|: x\in V(H)\}$. Note that a layering is equivalent to a path-partition.
\citet*{DJMMUW20} introduced a layered variant of partitions (analogous to layered treewidth being a layered variant of treewidth). The \defn{layered width} of a partition $\mathcal{P}$ of a graph $G$ is the minimum integer $\ell$ such that for some layering $(V_0,V_1,\dots)$ of $G$, each part in $\mathcal{P}$ has at most $\ell$ vertices in each layer $V_i$. A partition $\mathcal{P}$ of a graph $G$ is a \defn{$(k,\ell)$-partition} if $\mathcal{P}$ has layered width at most $\ell$ and $G/\mathcal{P}$ has treewidth at most $k$. A class $\mathcal{G}$ of graphs \defn{admits bounded layered partitions} if there exist $k,\ell\in\mathbb{N}$ such that every graph in $\mathcal{G}$ has a $(k,\ell)$-partition.
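For a simple example, the $n\times n$ planar grid (with vertex set $\{1,\dots,n\}^2$) has a $(1,1)$-partition: take the parts to be the columns,
\[ \mathcal{P} := \bigl\{ \{(i,j) : 1\leqslant i\leqslant n\} : 1\leqslant j\leqslant n \bigr\}, \]
so that the quotient is the $n$-vertex path, which has treewidth 1; with the rows as layers, each part has exactly one vertex in each layer.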
Several recent results show that various graph classes admit bounded layered partitions. The first results were for minor-closed classes by \citet*{DJMMUW20}, who proved that planar graphs admit bounded layered partitions; more generally, that graphs of bounded Euler genus admit bounded layered partitions; and most generally, that a minor-closed class admits bounded layered partitions if and only if it excludes some apex graph. Some results for non-minor-closed classes were recently obtained by \citet*{DMW19b}. For example, they proved that $(g,k)$-planar graphs and $(g,d)$-map graphs, amongst other examples, admit bounded layered partitions.
\citet*{DJMMUW20} showed that this property implies bounded layered treewidth.
\begin{lem}[\citep{DJMMUW20}] \label{PartitionLayeredTreewidth} If a graph\/ $G$ has a\/ $(k,\ell)$-partition, then\/ $G$ has layered treewidth at most\/ $(k+1)\ell$. \end{lem}
What distinguishes layered partitions from layered treewidth is that layered partitions lead to constant upper bounds on the queue-number and non-repetitive chromatic number, whereas for both these parameters, the best known upper bounds obtainable via layered treewidth are $O(\log n)$. This led to the positive resolution of two old open problems; namely, whether planar graphs have bounded queue-number \citep{DJMMUW20} and whether planar graphs have bounded non-repetitive chromatic number \citep{DEJWW20}. Other applications include $p$-centred colouring \citep{DFMS20} and graph encoding / universal graphs~\citep{BGP20,DEJGMM,EJM}.
Our next tool is the following result by \citet*{DJMMUW20}.
\begin{lem}[\citep{DJMMUW20}] \label{AlmostEmbeddableStructure} Every\/ $k$-almost embeddable graph with no apex vertices has an\/ $(11k+10,6k)$-partition. \end{lem}
\subsection{Excluding a Minor}
We now prove that a result like \cref{AlmostEmbeddableStructure} also holds for $k$-almost embeddable graphs in which all the apex vertices have bounded degree (and in particular if the graph has bounded degree).
\begin{lem} \label{DropApices}
Let\/ $G$ be a graph such that, for some\/ $A\subseteq V(G)$, every vertex in\/ $A$ has degree at most\/ $\Delta\in\mathbb{N}$, and\/ $G-A$ has a\/ $(k,\ell)$-partition. Then\/ $G$ has a\/ $(k+1,2\ell\Delta|A|)$-partition. \end{lem}
\begin{proof}
Let $\mathcal{P}$ be a $(k,\ell)$-partition of $G-A$, where $\mathcal{P}$ has layered width at most $\ell$ with respect to a layering $(V_0,V_1,\dots)$ of $G-A$. Let $I$ be the set of integers $i$ such that some vertex in $A$ has a neighbour in $V_i$. Thus $|I| \leqslant \Delta |A|$. Let $P$ be the path graph $(0,1,\dots)$. For $j\in\mathbb{N}_0$, let $d_j$ be the minimum distance in $P$ from $j$ to a vertex in $I$. For $i\in\mathbb{N}_0$, let $W_i$ be the union of the sets $V_j$ such that $d_j=i$. For each edge $vw$ of $G$, if $v\in V_a$ and $w\in V_b$ then $|a-b|\leqslant 1$, implying $|d_a-d_b|\leqslant 1$. Thus $(W_0,W_1,\dots)$ is a layering of $G-A$. Observe that each layer $W_i$ is the union of at most $2|I|$ original layers (at most two layers between each pair of consecutive elements in $I$, plus one layer before $\min I$ and one layer after $\max I$). Thus $\mathcal{P}$ has layered width at most $2\ell |I|\leqslant 2\ell\Delta |A|$ with respect to $(W_0,W_1,\dots)$. By construction, the vertices of $G-A$ that are neighbours of vertices in $A$ are all in $W_0$. Add $A$ to $W_0$. We thus obtain a layering of $G$. Let $\mathcal{Q}$ be the partition of $G$ obtained from $\mathcal{P}$ by adding one new part $A$. Thus $\mathcal{Q}$ has layered width at most $2\ell\Delta |A|$ with respect to $(W_0,W_1,\dots)$. Since $G/\mathcal{Q}$ has only one more vertex than $(G-A)/\mathcal{P}$, the treewidth of $G/\mathcal{Q}$ is at most $k+1$. \end{proof}
\cref{AlmostEmbeddableStructure,DropApices} lead to the next result.
\begin{lem} \label{AlmostEmbeddableStructureBoundedDegree} Every\/ $k$-almost\/$^{\downarrow}\!$ embeddable graph\/ $G$ such that every apex vertex has degree at most\/ $\Delta\in\mathbb{N}$ has an\/ $(11k+11,12k^2\Delta)$-partition. \end{lem}
\begin{proof} By definition, $G$ is an induced subgraph of a $k$-almost embeddable graph $G'$. Since deleting an apex vertex in a $k$-almost embeddable graph produces another $k$-almost embeddable graph, we may assume that $G$ and $G'$ have the same set $A$ of apex vertices. By \cref{AlmostEmbeddableStructure}, $G'-A$ has an\/ $(11k+10,6k)$-partition $\mathcal{P}'$. Let $\mathcal{P}$ be obtained by restricting $\mathcal{P}'$ to $V(G-A)$. Thus $\mathcal{P}$ is an\/ $(11k+10,6k)$-partition of $G-A$. Since every vertex in $A$ has degree at most $\Delta$ in $G$, the result follows from \cref{DropApices}. \end{proof}
\citet*{DJMMUW20} introduced (an equivalent version of) the following definitions and lemmas as a way to handle clique sums.
Let $C$ be a clique in a graph $G$, and let $\{C_0,C_1\}$ and $\{P_1,\dots,P_c\}$ be partitions of $C$. A $(k,\ell)$-partition $\mathcal{P}$ of $G$ is \defn{$(C, \{C_0,C_1\}, \{P_1,\dots,P_c\})$-friendly} if $P_1,\dots,P_c\in\mathcal{P}$ and $\mathcal{P}$ has layered width at most $\ell$ with respect to some layering $(V_0,V_1,\dots)$ of $G$ with $C_0\subseteq V_0$ and $C_1\subseteq V_1$.
\begin{lem}[\citep{DJMMUW20}]
\label{CliqueFriendly}
Let\/ $G$ be a graph that has a $(k,\ell)$-partition.
Let\/ $C$ be a clique in\/ $G$, and let\/ $\{C_0,C_1\}$ and\/ $\{P_1,\ldots,P_c\}$ be partitions of\/ $C$ such that\/ $|C_j \cap P_i|\leqslant 2\ell$ for each\/ $j \in \{0,1\}$ and each\/ $i\in \{1,\ldots,c\}$. Then\/ $G$ has a\/ $(C,\{C_0,C_1\},\{P_1,\ldots,P_c\})$-friendly\/ $(k+c,2\ell)$-partition. \end{lem}
A graph $G$ \defn{admits clique-friendly $(k,\ell)$-partitions} if for every clique $C$ in $G$, and for all partitions $\{C_0,C_1\}$ and $\{P_1,\dots,P_c\}$ of $C$, there is a $(C,\{C_0,C_1\},\{P_1,\dots,P_c\})$-friendly $(k,\ell)$-partition of $G$. A graph class $\mathcal{G}$ \defn{admits clique-friendly $(k,\ell)$-partitions} if every graph in $\mathcal{G}$ admits clique-friendly $(k,\ell)$-partitions.
\begin{lem}[\citep{DJMMUW20}]
\label{CliqueFriendlyCliqueSum}
Let\/ $\cal G$ be a graph class that admits clique-friendly\/ $(k,\ell)$-partitions. Then the class of graphs obtained from clique-sums of graphs in\/ $\cal G$ admits clique-friendly\/ $(k,\ell)$-partitions. \end{lem}
\cref{AlmostEmbeddableStructureBoundedDegree,CliqueFriendly} lead to the next result.
\begin{lem} \label{CliqueFriendlyAlmostEmbeddable} Every\/ $k$-almost\/$^{\downarrow}\!$ embeddable graph\/ $G$ of maximum degree at most\/ $\Delta\in\mathbb{N}$ admits clique-friendly\/ $(19k+11,24k^2\Delta)$-partitions. \end{lem}
\begin{proof} By \cref{AlmostEmbeddableStructureBoundedDegree}, $G$ has an\/ $(11k+11,12k^2\Delta)$-partition. It follows from \cref{CliqueFriendly,CliqueSize} that $G$ admits clique-friendly $(19k+11,24k^2\Delta)$-partitions. \end{proof}
The following result, of independent interest, says that bounded-degree graphs excluding a fixed minor admit bounded layered partitions.
\begin{thm} \label{MinorFreeDeltaLayeredPartition} For every fixed graph\/ $H$, there is a constant\/ $k\in\mathbb{N}$ such that every\/ $H$-minor-free graph with maximum degree\/ $\Delta\in\mathbb{N}$ has a\/ $(k,k\Delta)$-partition. \end{thm}
\begin{proof} Let $G$ be an $H$-minor-free graph with maximum degree $\Delta$. By \cref{GMST3}, there is a constant $k_0$ (depending only on $H$) such that $G$ can be obtained by clique-sums of $k_0$-almost$^{\downarrow}\!$ embeddable graphs with maximum degree at most $8k_0\Delta$. By \cref{CliqueFriendlyAlmostEmbeddable}, each such graph admits clique-friendly $(19k_0+11,24k_0^2\cdot 8k_0\Delta)$-partitions. It follows from \cref{CliqueFriendlyCliqueSum} that $G$ also admits clique-friendly $(19k_0+11,192k_0^3\Delta )$-partitions. The result follows by taking $k:=\max\{19k_0+11,192k_0^3\}$. \end{proof}
With these tools, we are now ready to prove the main result of this section.
\begin{thm}\label{3colMinor} For every fixed graph\/ $H$, every\/ $H$-minor-free graph\/ $G$ with maximum degree\/ $\Delta\in\mathbb{N}$ is\/ $3$-colourable with clustering\/ $O(\Delta^5)$. \end{thm}
\begin{proof} Let $G$ be an $H$-minor-free graph with maximum degree $\Delta$. By \cref{MinorFreeDeltaLayeredPartition}, for some constant $k$ (depending only on $H$), $G$ has a $(k,k\Delta)$-partition. \cref{PartitionLayeredTreewidth} implies that $G$ has layered treewidth at most $(k+1)k\Delta$. By \cref{3ColourLayeredTreewidth}, $G$ has a 3-colouring with clustering $O(k^6\Delta^5)$. \end{proof}
\subsection{Strong Products}
Some of the above structural results can be interpreted in terms of products. The \defn{strong product} of graphs $A$ and $B$, denoted by $A\boxtimes B$, is the graph with vertex set $V(A)\times V(B)$, where distinct vertices $(v,x),(w,y)\in V(A)\times V(B)$ are adjacent if: \begin{compactitem} \item $v=w$ and $xy\in E(B)$, or \item $x=y$ and $vw\in E(A)$, or \item $vw\in E(A)$ and $xy\in E(B)$. \end{compactitem}
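The three adjacency rules above can be checked mechanically. Here is a minimal sketch in Python (the edge-list representation and the function name `strong_product` are ours, not from the paper); it verifies that $K_2\boxtimes K_2=K_4$.

```python
from itertools import product

def strong_product(verts_a, edges_a, verts_b, edges_b):
    """Strong product A x B: vertices are pairs, adjacency follows the three rules."""
    ea = {frozenset(e) for e in edges_a}
    eb = {frozenset(e) for e in edges_b}
    verts = list(product(verts_a, verts_b))
    edges = set()
    for (v, x) in verts:
        for (w, y) in verts:
            if (v, x) == (w, y):
                continue
            adj_a = frozenset((v, w)) in ea   # vw an edge of A
            adj_b = frozenset((x, y)) in eb   # xy an edge of B
            if (v == w and adj_b) or (x == y and adj_a) or (adj_a and adj_b):
                edges.add(frozenset(((v, x), (w, y))))
    return verts, edges

# K_2 strong-product K_2 should be the complete graph K_4: 4 vertices, 6 edges.
verts, edges = strong_product([0, 1], [(0, 1)], [0, 1], [(0, 1)])
assert len(verts) == 4 and len(edges) == 6
```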
\cref{2Colour} was proved using the following result by an anonymous referee of the paper by \citet*{DO95} (refined in \citep{Wood09}).
\begin{lem}[\citep{DO95,Wood09}] \label{DegreeTreewidthStructure} Every graph with maximum degree\/ $\Delta\in\mathbb{N}$ and treewidth less than\/ $k\in\mathbb{N}$ is a subgraph of\/ $T\boxtimes K_{20k\Delta}$ for some tree\/ $T$. \end{lem}
\cref{2Colour} follows from \cref{DegreeTreewidthStructure} by first properly 2-colouring $T$ and then colouring each vertex of the graph by the colour of the corresponding vertex of $T$.
The next observation by \citet*{DJMMUW20} follows immediately from the definitions.
\begin{obs}[\citep{DJMMUW20}] \label{PartitionProduct} A graph\/ $G$ has an\/ $H$-partition of layered width at most\/ $\ell\in\mathbb{N}$ if and only if\/ $G$ is a subgraph of\/ $H \boxtimes P \boxtimes K_\ell$ for some path\/ $P$. \end{obs}
\citet*{DJMMUW20} also showed that if one does not care about the exact treewidth bound, then it suffices to consider partitions with layered width 1.
\begin{obs}[\citep{DJMMUW20}] \label{MakeWidth1} If a graph\/ $G\subseteq H\boxtimes P\boxtimes K_\ell$ for some graph\/ $H$ of treewidth at most\/ $k$ and for some path\/ $P$, then\/ $G\subseteq H' \boxtimes P$ for some graph\/ $H'$ of treewidth at most\/ $(k+1)\ell-1$. \end{obs}
By these two observations, \cref{MinorFreeDeltaLayeredPartition} can be restated as follows:
\begin{thm} \label{MinorFreeDegreeStructure} For every fixed graph\/ $X$, every\/ $X$-minor-free graph with maximum degree\/ $\Delta\in\mathbb{N}$ is a subgraph of\/ $H\boxtimes P$ for some graph\/ $H$ of treewidth\/ $O(\Delta)$ and for some path\/ $P$. \end{thm}
It is worth highlighting the similarity of \cref{DegreeTreewidthStructure,MinorFreeDegreeStructure}. \cref{DegreeTreewidthStructure} says that graphs of bounded treewidth and bounded degree are subgraphs of the product of a tree and a complete graph of bounded size, whereas \cref{MinorFreeDegreeStructure} says that bounded-degree graphs excluding a fixed minor are subgraphs of the product of a bounded treewidth graph and a path.
\section{Open Problem}
We conclude with a natural open problem that arises from this work. Are planar graphs with maximum degree $\Delta$ 3-colourable with clustering $O(\Delta)$? A construction of \citet*{KMRV97} shows a lower bound of $\Omega(\Delta^{1/3})$, while a slightly different construction by \citet*{EJ14} shows $\Omega(\Delta^{1/2})$.
\appendix\section{Alternative proof of the key lemma}
This appendix was added after the paper was accepted to \emph{Combinatorics, Probability \& Computing}. Here we give a slightly simpler and slightly stronger proof of \cref{3Colour}, where we only require seven consecutive layers to have bounded treewidth, and in addition one of the three colour classes contains components of size $O(k\Delta)$, instead of $O(k^3\Delta^2)$.
\begin{lem} \label{3ColourA} Let\/ $G$ be a graph with maximum degree\/ $\Delta\in\mathbb{N}$. Let\/ $(V_0,V_1,\dots)$ be a layering of\/ $G$ such that\/ $G[\bigcup_{j=0}^{6}V_{i+j}]$ has treewidth less than\/ $k\in\mathbb{N}$ for all\/ $i\in\mathbb{N}_0$. Then\/ $G$ is\/ $3$-colourable with clustering\/ $8000 k^3\Delta^2$. \end{lem}
\begin{proof} No attempt is made to improve the constant 8000. We may assume (by renaming the layers) that $V_0=V_1=V_2=V_3=V_4=\emptyset$.
As illustrated in \cref{AppendixProofIllustration}, let $H$ be the subgraph of $G$ induced by $\bigcup\{V_i:i\in\mathbb{N},i\not\equiv 0 \pmod 8\}$. Note that $H$ is the disjoint union of subgraphs of $G$, each induced by at most seven consecutive layers $V_i$. Thus $H$ has treewidth less than $k$. As a subgraph of $G$, $H$ has maximum degree $\Delta$. By \cref{2Colour}, $H$ has a 2-colouring with clustering $20k\Delta$, say with colours blue and yellow. Let $Y$ be the set of yellow vertices in $H$.
\begin{figure}
\caption{ Proof of \cref{3ColourA}.}
\label{AppendixProofIllustration}
\end{figure}
We define a colouring $c$ of $G$ as follows.
Blue vertices in $H$ are coloured blue in $c$, and no other vertex of $G$ will be coloured blue. So each blue monochromatic component has at most $20k\Delta$ vertices. Each non-blue vertex in $G$ will be coloured red or green, giving three colours in total.
For each $i\in\mathbb{N}$, the vertices in $Y\cap V_{8i+4}$ are coloured red in $c$, and the vertices in $Y\cap ( V_{8i-3} \cup V_{8i+3})$ are coloured green in $c$. This implies that each monochromatic component intersecting $V_{8i+4}$ has at most $20k\Delta$ vertices, and thus each monochromatic component with greater than $20k\Delta$ vertices is red or green and lies in $V_{8i-3}\cup V_{8i-2}\cup \cdots\cup V_{8i+3}$ for some $i\in\mathbb{N}$.
For each $i\in\mathbb{N}$, let $U_i := V_{8i}\cup (Y\cap (V_{8i-2}\cup V_{8i-1}\cup V_{8i+1}\cup V_{8i+2} ))$ and $U_i^+:=U_i \cup (Y\cap (V_{8i-3}\cup V_{8i+3}))$. Let $H_i$ be the graph obtained from $G[U_i^+]$ as follows: contract each connected component $X$ of $G[Y \cap ( V_{8i-3}\cup V_{8i-2} ) ]$ or of $G[Y \cap ( V_{8i+2}\cup V_{8i+3} ) ]$ into a single vertex $v_X$. The neighbours of $v_X$ in $H_i$ lie in $V(X) \cap V_{8i-1}$ or $V(X) \cap V_{8i+1}$. So $v_X$ has degree at most $|V(X)| \leqslant 20k\Delta$ in $H_i$. Every other vertex $v$ in $H_i$ has degree in $H_i$ at most the degree of $v$ in $G$. So $H_i$ has maximum degree at most $20k\Delta$. Since $H_i$ is a minor of $G[\bigcup_{j=8i-3}^{8i+3}V_j]$ and since treewidth is minor-monotone, $H_i$ has treewidth less than $k$. By \cref{2Colour}, $H_i$ has a colouring $c_i$ with clustering $20k\cdot 20 k \Delta=400 k^2 \Delta$, say with colours red and green.
We now define the colouring $c$ of the vertices of $U_i$: For each component $X$ of $G[Y \cap ( V_{8i-3}\cup V_{8i-2}) ]$, assign the colour of $v_X$ in $c_i$ to each vertex in $V(X) \cap V_{8i-2}$. Similarly, for each component $X$ of $G[Y \cap ( V_{8i+3}\cup V_{8i+2} ) ]$, assign the colour of $v_X$ in $c_i$ to each vertex in $V(X) \cap V_{8i+2}$. For every other vertex $v$ of $U_i$, let $c(v):=c_i(v)$. This completes the definition of the colouring $c$ of $G$.
Consider a monochromatic component $M$ in the 3-colouring $c$ of $G$. As shown above, if $|V(M)|>20k\Delta$ then $M$ lies in $U_i^+$ for some $i\in\mathbb{N}$, and $M$ is coloured red or green. By construction, $M$ is contained in the graph obtained from some monochromatic component $M'$ of $H_i$ (with respect to $c_i$) by replacing each contracted vertex $v_X$ in $H_i$ by a subset of $V(X)$. Since $|V(M')| \leqslant 400 k^2\Delta$ and $|V(X)| \leqslant 20k\Delta$, we conclude that $|V(M)| \leqslant 8000 k^3\Delta^2$. \end{proof}
\end{document}
\begin{document}
\title[Liouville Numbers, Liouville Sets and Liouville Fields]{ Liouville Numbers, Liouville Sets \\ and Liouville Fields }
\author{K. Senthil Kumar, R. Thangadurai and M. Waldschmidt} \address[K. Senthil Kumar and R. Thangadurai]{Harish-Chandra Research Institute, Chhatnag Road, Jhunsi, Allahabad, 211019, India} \address[M. Waldschmidt]{Institut de Math\'ematiques de Jussieu, Th\'eorie des Nombres Case courrier 247, Universit\'e Pierre et Marie Curie (Paris 6), PARIS Cedex 05 FRANCE} \email[K. Senthil Kumar]{[email protected]} \email[R. Thangadurai]{[email protected]} \email[M. Waldschmidt]{[email protected]}
\begin{abstract} Following earlier work by \'{E}.~Maillet 100 years ago, we introduce the definition of a {\it Liouville set}, which extends the definition of a Liouville number. We also define a {\it Liouville field}, which is a field generated by a Liouville set. Any Liouville number belongs to a Liouville set ${\sf S}$ having the power of continuum and such that ${\mathbf Q}\cup{\sf S}$ is a Liouville field. \\ \null
{\bf Update: May 25, 2013 } \end{abstract} \maketitle
\section{ Introduction}\label{Section:Introduction}
For any integer $q$ and any real number $x \in {\mathbf R}$, we denote by $$
\Vert qx\Vert = \min_{m\in{ {\mathbf Z}}} |qx -m| $$ the distance of $qx$ to the nearest integer. Following \'E.~Maillet \cite{Maillet,Maillet1922}, an irrational real number $\xi$ is said to be a {\it Liouville number} if there exists a sequence $(q_n)_{n\ge 1}$ of integers $q_n \geq 2$ such that the sequence $\bigl(u_n(\xi)\bigr)_{n\ge 1}$ of real numbers defined by $$ u_n(\xi) = -\frac{\log\Vert q_n\xi \Vert}{\log q_n} $$ satisfies
$\displaystyle \lim_{n\to\infty} u_n(\xi) = \infty$. If $p_n$ is the integer such that $\Vert q_n\xi\Vert = |\xi q_n - p_n|$, then the definition of $u_n(\xi)$ can be written $$
\left| q_n \xi - p_n \right| = \frac{1}{q_n^{u_n(\xi)}}\cdotp $$ An equivalent definition is to say that a Liouville number is a real number $\xi$ such that, for each integer $n\geq 1$, there exists a rational number $p_n/q_n$ with $q_n \geq 2$ such that $$
0<\left| \xi-\frac{p_n}{q_n}\right| \le \frac{1}{q_n^n}\cdotp $$ We denote by ${{\mathbb L}}$ the set of Liouville numbers. By \cite{liouville1}, any Liouville number is transcendental.
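The classical example is Liouville's constant $\xi=\sum_{j\ge 1}10^{-j!}$, with $q_n=10^{n!}$ and $p_n/q_n$ the $n$-th partial sum. The following sketch (function names ours; a deeper partial sum stands in for $\xi$) checks the inequality $0<|\xi-p_n/q_n|\le q_n^{-n}$ with exact rational arithmetic.

```python
from fractions import Fraction
from math import factorial

def liouville_approx(n, base=10):
    """Return (p_n, q_n) with p_n/q_n the n-th partial sum of sum_j base^(-j!)."""
    q = base ** factorial(n)
    p = sum(q // base ** factorial(j) for j in range(1, n + 1))
    return p, q

# The tail of the series beyond the n-th term is below q_n^(-n) for n >= 2;
# a partial sum at depth n + 3 stands in for the limit xi.
for n in range(2, 5):
    p, q = liouville_approx(n)
    P, Q = liouville_approx(n + 3)
    err = Fraction(P, Q) - Fraction(p, q)
    assert 0 < err < Fraction(1, q ** n)
```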
We introduce the notions of a {\it Liouville set} and of a {\it Liouville field}. They extend what was done by \'{E}. Maillet in Chap.~III of \cite{Maillet}.
\noindent{\bf Definition}. A {\it Liouville set} is a subset ${\sf S}$ of ${{\mathbb L}}$ for which there exists an increasing sequence $(q_n)_{n\ge 1}$ of positive integers having the following property: for any $\xi\in {\sf S}$, there exists a sequence $\bigl(b_n\bigr)_{n\ge 1}$ of positive rational integers and there exist two positive constants $\kappa_1$ and $\kappa_2$ such that, for any sufficiently large $n$, \begin{equation}\label{Equation:LiouvilleSet} 1\le b_n \leq q_{n}^{\kappa_1} \mbox{ and } \Vert b_n \xi \Vert \le \frac{1}{q_n^{ \kappa_2 n}}\cdotp \end{equation}
It would make no difference if we required these inequalities to hold for all $n\ge 1$: it suffices to change the constants $\kappa_1$ and $\kappa_2$.
\noindent{\bf Definition}. A {\it Liouville field} is a field of the form ${\mathbf Q}({\sf S})$ where ${\sf S}$ is a Liouville set.
From the definitions, it follows that, for a real number $\xi$, the following conditions are equivalent: \\ $(i)$ $\xi$ is a Liouville number. \\ $(ii)$ $\xi$ belongs to some Liouville set. \\ $(iii)$ The set $\{\xi\}$ is a Liouville set. \\ $(iv)$ The field ${\mathbf Q}(\xi)$ is a Liouville field.
If we agree that the empty set is a Liouville set and that ${\mathbf Q}$ is a Liouville field, then any subset of a Liouville set is a Liouville set, and also (see Theorem $\ref{Theorem:Field}$) any subfield of a Liouville field is a Liouville field.
\noindent{\bf Definition.} Let $\underline{q}=(q_n)_{n\ge 1}$ be an increasing sequence of positive integers and let $\underline{u}=( u_n)_{n\ge 1}$ be a sequence of positive real numbers such that $ u_n \rightarrow\infty $ as $ n\rightarrow\infty$. Denote by ${\sf S}_{\underline{q},\underline{u}}$ the set of $\xi\in{{\mathbb L}}$ such that there exist two positive constants $\kappa_1$ and $\kappa_2$ and there exists a sequence $\bigl(b_n\bigr)_{n\ge 1}$ of positive rational integers with $$ 1 \leq b_n \leq q_{n}^{\kappa_1} \mbox{ and } \Vert b_n \xi \Vert \le \frac{1}{q_n^{\kappa_2 u_n}}\cdotp $$ Denote by $\underline{n}$ the sequence $(1,2,3,\dots)$, that is, the sequence $\underline{u}$ with $u_n=n$ ($n\ge 1$). For any increasing sequence $\underline{q}=(q_n)_{n\ge 1}$ of positive integers, we denote by ${\sf S}_{\underline{q}}$ the set ${\sf S}_{\underline{q},\underline{n}}$.
Hence, by definition, a Liouville set is a subset of some ${\sf S}_{\underline{q}}$. In section $\ref{section:PfLemmaSqu}$ we prove the following lemma:
\begin{lemma}\label{Lemma:Squ} For any increasing sequence $\underline{q}$ of positive integers and any sequence $\underline{u}$ of positive real numbers which tends to infinity, the set ${\sf S}_{\underline{q},\underline{u}}$ is a Liouville set. \end{lemma}
Notice that if $(m_n)_{n\ge 1}$ is an increasing sequence of positive integers, then for the subsequence $\underline{q}'=(q_{m_n})_{n\ge 1}$ of the sequence $\underline{q}$, we have ${\sf S}_{\underline{q}',\underline{u}}\supset {\sf S}_{\underline{q},\underline{u}}$.
\noindent {\bf Example}. Let $\underline{u}=(u_n)_{n\ge 1}$ be a sequence of positive real numbers which tends to infinity. Define $f:{\mathbf N}\rightarrow {\mathbf R}_{>0}$ by $f(1)=1$ and $$ f(n)=u_1u_2\cdots u_{n-1} \qquad (n\ge 2), $$ so that $f(n+1)/f(n)=u_n$ for $n\ge 1$. Define the sequence $\underline{q}=(q_n)_{n\ge 1}$ by $q_n= \lfloor 2^{f(n)}\rfloor$. Then, for any real number $t>1$, the number $$ \xi_t=\sum_{n\ge 1} \frac{1}{ \lfloor t^{f(n)}\rfloor} $$ belongs to ${\sf S}_{\underline{q},\underline{u}}$. The set $\{\xi_t\; \mid \; t>1\}$ has the power of continuum, since $\xi_{t_1}<\xi_{t_2}$ for $t_1>t_2>1$.
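For a concrete instance of this example, take $\underline{u}=\underline{n}$, so that $f(n)=(n-1)!$; for integer $t$ we have $\lfloor t^{f(n)}\rfloor = t^{f(n)}$, and the monotonicity $\xi_{t_1}<\xi_{t_2}$ for $t_1>t_2>1$ can be verified on partial sums with exact rational arithmetic. A rough sketch (helper name ours, partial sums only):

```python
from fractions import Fraction
from math import factorial

def xi(t, terms=6):
    """Partial sum of xi_t = sum_n 1/floor(t^f(n)) with u_n = n, i.e. f(n) = (n-1)!."""
    return sum(Fraction(1, t ** factorial(n - 1)) for n in range(1, terms + 1))

# xi_t is strictly decreasing in t, so distinct t > 1 give distinct numbers xi_t.
assert xi(3) < xi(2)
```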
The sets ${\sf S}_{\underline{q},\underline{u}}$ have the following property (compare with Theorem $\mathrm{I}_3$ in \cite{Maillet}):
\begin{theorem}\label{Theorem:Field} For any increasing sequence $\underline{q}$ of positive integers and any sequence $\underline{u}$ of positive real numbers which tends to infinity, the set ${\mathbf Q}\cup {\sf S}_{\underline{q},\underline{u}}$ is a field. \end{theorem}
We denote this field by ${\sf K}_{\underline{q},\underline{u}}$, and by ${\sf K}_{\underline{q}}$ for the sequence $\underline{u}=\underline{n}$. From Theorem $\ref{Theorem:Field}$, it follows that a field is a Liouville field if and only if it is a subfield of some ${\sf K}_{\underline{q}}$. Another consequence is that, if ${\sf S}$ is a Liouville set, then ${\mathbf Q}({\sf S})\setminus{\mathbf Q}$ is a Liouville set.
Let $\underline{u}'=(u'_n)_{n\ge 1}$ be another sequence of positive real numbers which tends to infinity. It is easily checked that if $$ \liminf_{n\rightarrow \infty} \frac{u_n}{u_n'} >0, $$ then ${\sf K}_{\underline{q},\underline{u}}$ is a subfield of ${\sf K}_{\underline{q}, \underline{u}'}$. In particular if $$ \liminf_{n\rightarrow \infty} \frac{u_n}{n} >0, $$ then ${\sf K}_{\underline{q},\underline{u}}$ is a subfield of ${\sf K}_{\underline{q}}$, while if $$ \limsup_{n\rightarrow \infty} \frac{u_n}{n} <+\infty $$ then ${\sf K}_{\underline{q}}$ is a subfield of ${\sf K}_{\underline{q},\underline{u}}$.
If $R\in{\mathbf Q}(X_1,\dots,X_\ell)$ is a rational fraction and if $\xi_1,\dots,\xi_\ell$ are elements of a Liouville set ${\sf S}$ such that $\eta=R(\xi_1,\ldots,\xi_\ell)$ is defined, then Theorem $\ref{Theorem:Field}$ implies that $\eta$ is either a rational number or a Liouville number, and in the second case ${\sf S}\cup\{\eta\}$ is a Liouville set. For instance, if, in addition, $R$ is not constant and $\xi_1,\dots,\xi_\ell$ are algebraically independent over ${\mathbf Q}$, then $\eta$ is a Liouville number and ${\sf S}\cup\{\eta\}$ is a Liouville set. For $\ell=1$, this yields:
\begin{corollary}\label{Corollary:q(X)} Let $R\in{\mathbf Q}(X)$ be a rational fraction and let $\xi$ be a Liouville number. Then $R(\xi)$ is a Liouville number and $\{\xi,R(\xi)\}$ is a Liouville set. \end{corollary}
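A numerical illustration of the corollary with $R(X)=1/X$ and Liouville's constant: the convergent $p_n/q_n$ of $\xi$ turns into the approximation $q_n/p_n$ of $1/\xi$, of comparable quality. A sketch under these assumptions (names ours; a deeper partial sum stands in for $\xi$):

```python
from fractions import Fraction
from math import factorial

def partial(n):
    """p_n/q_n as a Fraction, for xi = sum_j 10^(-j!) with q_n = 10^(n!)."""
    q = 10 ** factorial(n)
    return Fraction(sum(q // 10 ** factorial(j) for j in range(1, n + 1)), q)

xi_stand_in = partial(7)          # deep partial sum standing in for xi
for n in range(2, 5):
    pn = partial(n)               # the convergent p_n/q_n
    q = pn.denominator
    # R(X) = 1/X maps the approximation p_n/q_n of xi to q_n/p_n, which still
    # approximates 1/xi to within q_n^(-(n-1)) here.
    err = abs(1 / xi_stand_in - 1 / pn)
    assert err < Fraction(1, q ** (n - 1))
```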
We now show that $ {\sf S}_{\underline{q},\underline{u}}$ is either empty or else uncountable and we characterize such sets.
\begin{theorem}\label{Theorem::UncountableLiouvilleSets} Let $\underline{q}$ be an increasing sequence of positive integers and $\underline{u} = (u_n)_{n\geq 1}$ be an increasing sequence of positive real numbers such that $u_{n+1} \geq u_n +1$. Then
the Liouville set ${\sf S}_{\underline{q}, \underline{u}}$ is non empty if and only if $$ \limsup_{n\to\infty} \frac{\log q_{n+1}}{u_n\log q_n} > 0.$$ Moreover, if the set ${\sf S}_{\underline{q}, \underline{u}}$ is non empty, then it has the power of continuum.
\end{theorem}
Let $t$ be an irrational real number which is not a Liouville number. By a result due to P.~Erd\H{o}s \cite{erd1}, we can write $t=\xi+\eta$ with two Liouville numbers $ \xi$ and $\eta$. Let $\underline{q}$ be an increasing sequence of positive integers and $\underline{u}$ be an increasing sequence of real numbers such that $\xi \in{\sf S}_{\underline{q}, \underline{u}}$. Since any irrational number in the field ${\sf K}_{\underline{q}, \underline{u}}$ is in ${\sf S}_{\underline{q}, \underline{u}}$, it follows that the Liouville number $\eta=t-\xi$ does not belong to ${\sf S}_{\underline{q}, \underline{u}}$.
One defines a reflexive and symmetric relation $R$ between two Liouville numbers by $\xi R\eta$ if $\{\xi,\eta\}$ is a Liouville set. The equivalence relation which is induced by $R$ is trivial, as shown by the next result, which is a consequence of Theorem $\ref{Theorem::UncountableLiouvilleSets}$.
\begin{corollary}\label{Corollary:EquivalenceTrivial} Let $\xi $ and $\eta$ be Liouville numbers. Then there exists a subset $\vartheta$ of ${{\mathbb L}}$ having the power of continuum such that, for each $\varrho \in \vartheta$, both sets $\{\xi,\varrho\}$ and $\{\eta,\varrho\}$ are Liouville sets. \end{corollary}
In \cite{Maillet}, \'{E}.~Maillet introduces the definition of Liouville numbers {\it corresponding} to a given Liouville number. However, this definition depends on the choice of a given sequence $\underline{q}$ giving the rational approximations. This is why we start with a sequence $\underline{q}$ instead of starting with a given Liouville number.
The intersection of two nonempty Liouville sets may be empty. More generally, we show that there are uncountably many Liouville sets ${\sf S}_{\underline{q}}$ with pairwise empty intersections.
\begin{proposition}\label{proposition:UncountablymanyLiouvilleSets}
For $0<\tau<1$, define $\underline{q}^{(\tau)}$ as the sequence $(q^{(\tau)}_n)_{n\ge 1}$ with $$ q^{(\tau)}_n=2^{n! \lfloor n^{\tau} \rfloor } \qquad (n\ge 1). $$ Then the sets ${\sf S}_{\underline{q}^{(\tau)}}$, $0<\tau<1$, are nonempty (hence uncountable) and pairwise disjoint.
\end{proposition}
To prove that a real number is not a Liouville number is most often difficult. But to prove that a given real number does not belong to some Liouville set ${\sf S}$ is easier. If $\underline{q}'$ is a subsequence of a sequence $\underline{q}$, one may expect that ${\sf S}_{\underline{q}'}$ often strictly contains ${\sf S}_{\underline{q}}$. Here is an example.
\begin{proposition}\label{proposition:SubLiouvilleSets} Define the sequences $\underline{q}$, $\underline{q}'$ and $\underline{q}''$ by $$ q_n=2^{n!}, \quad q'_n=q_{2n}=2^{(2n)!} \quad\hbox{and}\quad q''_n=q_{2n+1}=2^{(2n+1)!} \qquad (n\ge 1), $$ so that $\underline{q}$ is the increasing sequence deduced from the union of $\underline{q}'$ and $\underline{q}''$. Let $\lambda_n$ be a sequence of positive integers such that $$ \lim_{n\rightarrow \infty} \lambda_n= \infty \quad\hbox{and}\quad \lim_{n\rightarrow \infty}\frac{ \lambda_n}{n}=0. $$ Then the number $$ \xi:=\sum_{n\ge 1} \frac{1}{2^{(2n-1)!\lambda_n}} $$ belongs to ${\sf S}_{\underline{q}'}$ but not to ${\sf S}_{\underline{q}}$. Moreover $$ {\sf S}_{\underline{q}}= {\sf S}_{\underline{q}'}\cap {\sf S}_{\underline{q}''}. $$ \end{proposition}
When $\underline{q}$ is the increasing sequence deduced from the union of $\underline{q}'$ and $\underline{q}''$, we always have ${\sf S}_{\underline{q}}\subset{\sf S}_{\underline{q}'}\cap{\sf S}_{\underline{q}''}$; Proposition \ref{proposition:UncountablymanyLiouvilleSets} gives an example where ${\sf S}_{\underline{q}'}\ne\emptyset$ and ${\sf S}_{\underline{q}''}\ne\emptyset$, while ${\sf S}_{\underline{q}}$ is the empty set. In the example from Proposition \ref{proposition:SubLiouvilleSets}, the set ${\sf S}_{\underline{q}}$ coincides with ${\sf S}_{\underline{q}'}\cap {\sf S}_{\underline{q}''}$. This is not always the case.
\begin{proposition}\label{proposition:13} There exist two increasing sequences $\underline{q}'$ and $\underline{q}''$ of positive integers with union $\underline{q}$ such that ${\sf S}_{\underline{q}}$ is a strict nonempty subset of ${\sf S}_{\underline{q}'}\cap {\sf S}_{\underline{q}''}$. \end{proposition}
Also, we prove that given any increasing sequence $\underline{q}$, there exists a subsequence $\underline{q}'$ of $\underline{q}$ such that ${\sf S}_{\underline{q}}$ is a strict subset of ${\sf S}_{\underline{q}'}$. More generally, we prove
\begin{proposition}\label{proposition:14} Let $\underline{u} = (u_n)_{n\geq 1}$ be a sequence of positive real numbers such that for every $n\geq 1$, we have $\sqrt{u_{n+1}} \leq u_n + 1 \leq u_{n+1}$. Then any increasing sequence $\underline{q}$ of positive integers has a subsequence $\underline{q}'$ for which ${\sf S}_{\underline{q}', \underline{u}}$ strictly contains ${\sf S}_{\underline{q}, \underline{u}}$. In particular, any increasing sequence $\underline{q}$ of positive integers has a subsequence $\underline{q}'$ for which ${\sf S}_{\underline{q}'}$ strictly contains ${\sf S}_{\underline{q}}$. \end{proposition}
\begin{proposition}\label{proposition:Squ-densebutnotGdelta}
The sets ${\sf S}_{\underline{q}, \underline{u}}$ are not $G_\delta$ subsets of ${\mathbf R}$. If they are non empty, then they are dense in ${\mathbf R}$. \end{proposition}
The proof of Lemma $\ref{Lemma:Squ}$ is given in section $\ref{section:PfLemmaSqu}$, the proof of Theorem $\ref{Theorem:Field}$ in section $\ref{section:proofField}$, the proof of Theorem $\ref{Theorem::UncountableLiouvilleSets}$ in section $\ref{Section:UncountableLiouvilleSetss}$, and the proof of Corollary $\ref{Corollary:EquivalenceTrivial}$ in section $\ref{Section:EquivalenceTrivial}$. The proofs of Propositions $\ref{proposition:UncountablymanyLiouvilleSets}$, $\ref{proposition:SubLiouvilleSets}$, $\ref{proposition:13}$ and $\ref{proposition:14}$ are given in section $\ref{section:PfPropositions}$, and the proof of Proposition $\ref{proposition:Squ-densebutnotGdelta}$ is given in section $\ref{section:Squ-densebutnotGdelta}$.
\section{Proof of lemma $\ref{Lemma:Squ}$}\label{section:PfLemmaSqu}
\begin{proof}[Proof of Lemma $\ref{Lemma:Squ}$] Given $\underline{q}$ and $\underline{u}$, define inductively a sequence of positive integers $(m_n)_{n\ge 1}$ as follows. Let $m_1$ be the least integer $m\ge 1$ such that $u_{m}>1$. Once $m_1,\dots,m_{n-1}$ are known, define $m_n$ as the least integer $m>m_{n-1}$ for which $u_m>n$. Consider the subsequence $\underline{q}'$ of $\underline{q}$ defined by $q'_n=q_{m_n}$. Then ${\sf S}_{\underline{q},\underline{u}}\subset {\sf S}_{\underline{q}'}$, hence ${\sf S}_{\underline{q},\underline{u}}$ is a Liouville set. \end{proof}
\begin{remark}\label{Remark:definitionLiouvilleSet} In the definition of a Liouville set, if assumption $(\ref{Equation:LiouvilleSet})$ is satisfied for some $\kappa_1$, then it is also satisfied with $\kappa_1$ replaced by any $\kappa'_1>\kappa_1$. Hence there is no loss of generality to assume $\kappa_1>1$. Then, in this definition, one could add to $(\ref{Equation:LiouvilleSet})$ the condition $q_n \leq b_n$. Indeed, if, for some $n$, we have $b_n<q_n $, then we set $$ b'_{n}=\left\lceil \frac{q_n}{b_{n}}\right\rceil b_{n}, $$ so that $$ q_n\le b'_{n}\le q_n+b_{n}\le 2 q_n. $$ Denote by $a_n$ the nearest integer to $b_n\xi$ and set $$ a'_{n}=\left\lceil \frac{q_n}{b_n}\right\rceil a_{n}. $$ Then, for $\kappa'_2<\kappa_2$ and, for sufficiently large $n$, we have $$
\bigl| b'_n \xi - a'_{n} \bigr|= \left\lceil \frac{q_n}{b_{n}}\right\rceil
\bigl| b_{n} \xi - a_{n} \bigr| \le \frac{q_n}{q_n^{\kappa_2 n}}\le \frac{1}{(q_n)^{\kappa'_2 n}}\cdotp $$ Hence condition $(\ref{Equation:LiouvilleSet})$ can be replaced by $$ q_{n} \leq b_{n} \leq q_{n}^{\kappa_1} \mbox{ and } \Vert b_n\xi \Vert \le \frac{1}{q_n^{ \kappa_2 n}}. $$ Also, one deduces from Theorem $\ref{Theorem::UncountableLiouvilleSets}$ that the sequence $\bigl(b_n\bigr)_{n\ge 1}$ is increasing for sufficiently large $n$. Note also that, in the same way, we may assume that $$ q_{n} \leq b_{n} \leq q_{n}^{\kappa_1} \mbox{ and } \Vert b_n\xi \Vert \le \frac{1}{q_n^{ \kappa_2 u_n}}. $$ \end{remark}
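The integer inequalities $q_n\le b'_n\le q_n+b_n\le 2q_n$ used in this remark can be verified exhaustively for small parameters. A throwaway check (assuming $1\le b<q$, as in the remark):

```python
# For 1 <= b < q, b' = ceil(q/b) * b satisfies q <= b' <= q + b <= 2q.
for q in range(2, 200):
    for b in range(1, q):
        bp = -(-q // b) * b          # ceil(q/b) * b via integer arithmetic
        assert q <= bp <= q + b <= 2 * q
```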
\section{Proof of Theorem $\ref{Theorem:Field}$}\label{section:proofField}
We first prove the following:
\begin{lemma}\label{Lemma:unsuralpha} Let $\underline{q}$ be an increasing sequence of positive integers and $\underline{u} = (u_n)_{n\geq 1}$ be an increasing sequence of positive real numbers which tends to infinity. Let $\xi\in{\sf S}_{\underline{q}, \underline{u}}$. Then $1/\xi\in {\sf S}_{\underline{q}, \underline{u}}$. \end{lemma}
As a consequence, if ${\sf S}$ is a Liouville set, then, for any $\xi\in{\sf S}$, the set ${\sf S}\cup\{1/\xi\}$ is a Liouville set.
\begin{proof}[Proof of Lemma $\ref{Lemma:unsuralpha}$] Let $\underline{q}=(q_n)_{n\ge 1}$ be an increasing sequence of positive integers such that, for sufficiently large $n$, $$ \Vert b_n\xi\Vert\le q_n^{-u_n}, $$ where $1\le b_n \leq q_n^{\kappa_1}$.
Write $\Vert b_n\xi\Vert=|b_n\xi-a_n|$ with $a_n\in{\mathbf Z}$. Since $\xi\not\in{\mathbf Q}$, the sequence $(|a_n|)_{n\ge 1}$ tends to infinity; in particular, for sufficiently large $n$, we have $a_n\not=0$. Writing $$ \frac{1}{\xi}-\frac{b_n}{a_n}=\frac{-b_n}{\xi a_n}\left(\xi-\frac{a_n}{b_n}\right), $$ one easily checks that, for sufficiently large $n$, $$
\Vert |a_n|\xi^{-1}\Vert\le |a_n|^{-u_n/2}
\quad\hbox{and}\quad 1\le |a_n|<b_n^2 \leq q_n^{2\kappa_1}. $$ \end{proof}
\begin{proof}[Proof of Theorem $\ref{Theorem:Field}$] Let us check that for $\xi$ and $\xi'$ in ${\mathbf Q}\cup{\sf S}_{\underline{q}, \underline{u}}$, we have $\xi-\xi'\in {\mathbf Q}\cup{\sf S}_{\underline{q}, \underline{u}}$ and $\xi\xi'\in {\mathbf Q}\cup{\sf S}_{\underline{q},\underline{u}}$. Clearly, it suffices to check \\ $(1)$ For $\xi$ in ${\sf S}_{\underline{q}, \underline{u}}$ and $\xi'$ in ${\mathbf Q}$, we have $\xi-\xi'\in {\sf S}_{\underline{q}, \underline{u}}$ and $\xi\xi'\in {\sf S}_{\underline{q}, \underline{u}}$. \\ $(2)$ For $\xi$ in ${\sf S}_{\underline{q}, \underline{u}}$ and $\xi'$ in ${\sf S}_{\underline{q}, \underline{u}}$ with $\xi-\xi'\not\in{\mathbf Q}$, we have $\xi-\xi'\in {\sf S}_{\underline{q},\underline{u}}$. \\ $(3)$ For $\xi$ in ${\sf S}_{\underline{q}, \underline{u}}$ and $\xi'$ in ${\sf S}_{\underline{q}, \underline{u}}$ with $\xi\xi'\not\in{\mathbf Q}$, we have $\xi\xi'\in {\sf S}_{\underline{q}, \underline{u}}$.
The idea of the proof is as follows. When $\xi\in {\sf S}_{\underline{q}, \underline{u}}$ is approximated by $a_n/b_n$ and when $\xi'=r/s\in {\mathbf Q}$, then $\xi-\xi'$ is approximated by $(sa_n-rb_n)/b_n$ and $\xi\xi'$ by $ra_n/sb_n$.
When $\xi\in {\sf S}_{\underline{q}, \underline{u}}$ is approximated by $a_n/b_n$ and $\xi'\in {\sf S}_{\underline{q}, \underline{u}}$ by $a'_n/b'_n$, then
$\xi-\xi'$ is approximated by $(a_nb'_n-a'_n b_n)/b_nb'_n$ and $\xi\xi'$ by $a_na'_n/b_nb'_n$. The proofs which follow amount to writing down carefully these simple observations.
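The two algebraic identities underlying these observations (stated below for $\xi-\xi'$ and $\xi\xi'$) can be checked exactly on random rationals. A quick sketch (variable names ours):

```python
from fractions import Fraction
from random import randint, seed

seed(0)  # reproducible random trials
for _ in range(100):
    # arbitrary "targets" xi, xi2 and approximation data a/b, a2/b2
    xi  = Fraction(randint(1, 999), randint(1, 999))
    xi2 = Fraction(randint(1, 999), randint(1, 999))
    a,  b  = randint(-50, 50), randint(1, 50)
    a2, b2 = randint(-50, 50), randint(1, 50)
    # identity used for the difference: b''xi'' - a'' with b'' = b*b2, a'' = a*b2 - b*a2
    assert b*b2*(xi - xi2) - (a*b2 - b*a2) == b2*(b*xi - a) - b*(b2*xi2 - a2)
    # identity used for the product: b*xi* - a* with b* = b*b2, a* = a*a2
    assert b*b2*(xi*xi2) - a*a2 == b*xi*(b2*xi2 - a2) + a2*(b*xi - a)
```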
Let $\xi'' = \xi - \xi'$ and $\xi^* = \xi\xi'$. In what follows, the sequences $(a_n'')$ and $(b_n'')$ will correspond to $\xi''$; similarly, $(a_n^*)$ and $(b_n^*)$ will correspond to $\xi^*$.
Here is the proof of $(1)$. Let $\xi\in {\sf S}_{\underline{q}, \underline{u}}$ and $\xi'=r/s\in {\mathbf Q}$, with $r$ and $s$ in ${\mathbf Z}$, $s>0$. There are two constants $\kappa_1$ and $\kappa_2$ and there are sequences of rational integers
$\bigl(a_n\bigr)_{n\ge 1}$ and $\bigl(b_n\bigr)_{n\ge 1}$ such that
$$
1\le b_n\le q_n^{\kappa_1} \quad \hbox{and}\quad
0<\bigl| b_n\xi - a_n\bigr|\le \frac{1}{q_n^{\kappa_2u_n}}\cdotp
$$
Let $\tilde{\kappa}_1>\kappa_1$ and $0<\tilde{\kappa}_2<\kappa_2$. Set \begin{align} \notag
b_n''&= b_n^*= sb_n, \\ \notag a_n''&= sa_n-rb_n, \\ \notag a_n^*&= ra_n. \end{align} Then one easily checks that, for sufficiently large $n$, we have \begin{align} \notag
0<\bigl| b_n''\xi''- a_n''\bigr|
=
s \, \bigl| b_n\xi - a_n\bigr|
\le \frac{1}{q_n^{\tilde{\kappa}_2u_n}},
\\
\notag
0<\bigl| b_n^*\xi^* - a_n^*\bigr|
=
|r| \, \bigl| b_n\xi - a_n\bigr|
\le \frac{1}{q_n^{\tilde{\kappa}_2u_n}}\cdotp \end{align}
Together with $1\le b_n''=b_n^*\le q_n^{\tilde{\kappa}_1}$ for sufficiently large $n$, this proves $(1)$.
Here is the proof of $(2)$ and $(3)$. Let $\xi$ and $\xi'$ be in $ {\sf S}_{\underline{q}, \underline{u}}$. There are constants $\kappa'_1$, $\kappa'_2$, $\kappa''_1$ and $\kappa''_2$ and there are sequences of rational integers
$\bigl(a_n\bigr)_{n\ge 1}$, $\bigl(b_n\bigr)_{n\ge 1}$,
$\bigl(a_n'\bigr)_{n\ge 1}$ and $\bigl(b_n'\bigr)_{n\ge 1}$
such that
$$
1\le b_n\le q_n^{\kappa'_1} \quad \hbox{and}\quad
0<\bigl| b_n\xi - a_n\bigr|\le \frac{1}{q_n^{\kappa'_2u_n}},
$$
$$
1\le b_n'\le q_n^{\kappa''_1} \quad \hbox{and}\quad
0<\bigl| b_n'\xi' - a_n'\bigr|\le \frac{1}{q_n^{\kappa''_2u_n}}\cdotp
$$ Define $\tilde{\kappa}_1=\kappa'_1+\kappa''_1$ and let $\tilde{\kappa}_2>0$ satisfy $\tilde{\kappa}_2<\min\{\kappa'_2,\kappa''_2\}$. Set \begin{align} \notag
b_n''&= b_n^*= b_nb_n', \\ \notag a_n''&= a_nb_n'-b_n a_n', \\ \notag a_n^*&= a_na_n'. \end{align} Then for sufficiently large $n$, we have \begin{align} \notag
b_n''\xi'' - a_n''
=
b_n'\bigl( b_n \xi - a_n\bigr) - b_n \bigl( b_n' \xi' - a_n'\bigr) \end{align} and $$
b_n^* \xi^* - a_n^*
= b_n \xi \, \bigl( b_n' \xi' - a_n'\bigr) + a_n' \, \bigl( b_n \xi - a_n \bigr), $$ hence $$
\bigl|
b_n''\xi'' - a_n''\bigr|
\le \frac{1}{q_n^{\tilde{\kappa}_2u_n}} $$ and
$$
\bigl| b_n^*\xi^*- a_n^* \bigr|
\le \frac{1}{q_n^{\tilde{\kappa}_2u_n}}\cdotp
$$
Also we have
$$
1\le b_n'' \le q_n^{\tilde{\kappa}_1}
\quad
\hbox{and}
\quad
1\le b_n^* \le q_n^{\tilde{\kappa}_1}.
$$
The assumption $\xi-\xi'\not\in{\mathbf Q}$ (resp.\ $\xi\xi'\not\in{\mathbf Q}$) implies
$b_n'' \xi''\not= a_n''$ (respectively, $b_n^* \xi^*\not= a_n^*$).
Hence $\xi-\xi'$ and $\xi\xi'$ are in ${\sf S}_{\underline{q}, \underline{u}}$. This completes the proof of $(2)$ and $(3)$.
It follows from $(1)$, $(2)$ and $(3)$ that ${\mathbf Q}\cup{\sf S}_{\underline{q}, \underline{u}} $ is a ring.
Finally, if $\xi\in{\mathbf Q}\cup{\sf S}_{\underline{q}, \underline{u}}$ is not $0$, then $1/\xi\in{\mathbf Q}\cup{\sf S}_{\underline{q},\underline{u}}$, by Lemma $\ref{Lemma:unsuralpha}$. This completes the proof of Theorem $\ref{Theorem:Field}$. \end{proof}
\begin{remark} Since the field ${\sf K}_{\underline{q},\underline{u}}$ does not contain irrational algebraic numbers, $2$ is not a square in ${\sf K}_{\underline{q},\underline{u}}$. For $\xi\in{\sf S}_{\underline{q},\underline{u}}$, it follows that $\eta=2\xi^2$ is an element in ${\sf S}_{\underline{q}, \underline{u}}$ which is not the square of an element in ${\sf S}_{\underline{q},\underline{u}}$. According to \cite{erd1}, we can write $\sqrt{2}=\xi_1\xi_2$ with two Liouville numbers $\xi_1,\xi_2$; then the set $\{ \xi_1,\xi_2 \} $ is not a Liouville set.
Let $N$ be a positive integer such that $N$ cannot be written as a sum of two integer squares. Let us show that, for $\varrho \in{\sf S}_{\underline{q},\underline{u}}$, the Liouville number $N \varrho^2\in{\sf S}_{\underline{q},\underline{u}}$ is not the sum of two squares of elements in ${\sf S}_{\underline{q},\underline{u}}$. Dividing by $\varrho^2$, we are reduced to showing that the equation $N=\xi^2+(\xi')^2$ has no solution $(\xi,\xi')$ in ${\sf S}_{\underline{q},\underline{u}}\times {\sf S}_{\underline{q},\underline{u}}$. Otherwise, we would have, for suitable positive constants $\kappa_1$ and $\kappa_2$, \begin{align} \notag &
\left|
\xi-\frac{a_n}{b_n}\right| \le \frac{1}{q_n^{\kappa_2u_n+1}}, \qquad 1\le b_n \le q_n^{\kappa_1}, \\ \notag &
\left|
\xi'-\frac{a_n'}{b_n'}\right| \le \frac{1}{q_n^{\kappa_2u_n+1}}, \qquad 1\le b_n'\le q_n^{\kappa_1}, \end{align} hence $$
\left| \xi^2 -\frac{ a_n^2} { b_n ^2}
\right| \le
\frac{2|\xi|+1}{q_n^{ \kappa_2u_n+1}}, \qquad
\left| (\xi')^2 -\frac{ (a_n')^2} { (b_n')^2}
\right| \le
\frac{2|\xi'|+1}{q_n^{\kappa_2u_n+1}} $$ and $$
\left| \xi^2+(\xi')^2-\frac{ \bigl( a_nb_n'\bigr)^2+\bigl(a_n'b_n\bigr)^2} {\bigl(b_nb_n'\bigr)^2}
\right| \le
\frac{2(|\xi|+|\xi'|+1)}{q_n^{\kappa_2u_n+1}}\cdotp $$ Using $\xi^2+(\xi')^2=N$, we deduce $$
\bigl| N{\bigl(b_nb_n'\bigr)^2-\bigl(
a_nb_n'\bigr)^2-\bigl(a_n'b_n\bigr)^2}\bigr|<1. $$ The left hand side is an integer, hence it is $0$: $$ N{\bigl(b_nb_n'\bigr)^2= \bigl( a_nb_n'\bigr)^2+\bigl(a_n'b_n\bigr)^2}. $$ This is impossible, since the equation $x^2+y^2=Nz^2$ has no solution in positive rational integers.
Therefore, if we write $N=\xi^2+(\xi')^2$ with two Liouville numbers $\xi,\xi'$, which is possible by the above mentioned result from P.~Erd\H{o}s \cite{erd1}, then the set $\{ \xi,\xi' \} $ is not a Liouville set. \end{remark}
\section{Proof of Theorem $\ref{Theorem::UncountableLiouvilleSets}$} \label{Section:UncountableLiouvilleSetss}
We first prove the following lemma which will be required for the proof of part $(ii)$ of Theorem $\ref{Theorem::UncountableLiouvilleSets}$.
\begin{lemma}\label{lemma:ifSqnotempty} Let $\xi$ be a real number, $n$, $q$ and $q'$ be positive integers. Assume that there exist rational integers $p$ and $p'$ such that $p /q\not=p'/q'$ and $$
|q \xi -p | \le \frac{1}{q^{u_n}}, \quad |q' \xi -p'| \le \frac{1}{(q')^{ {u_n}+1}}\cdotp $$ Then we have $$ \hbox{either} \quad q'\ge q^{{u_n}}\quad \hbox{or} \quad q \ge (q')^{u_n}. $$ \end{lemma}
\begin{proof}[Proof of Lemma $\ref{lemma:ifSqnotempty}$] From the assumptions we deduce $$ \frac{1}{q q'}\le
\frac{|p q'-p'q |}{q q'} \le
\left|\xi-\frac{p }{q }\right|+
\left|\xi-\frac{p'}{q'}\right| \le \frac{1}{q ^{{u_n}+1}}+\frac{1}{(q')^{{u_n}+2}}, $$ hence $$ q^{u_n}(q')^{{u_n}+1} \le (q')^{{u_n}+2}+q^{{u_n}+1}. $$ If $q< q' $, we deduce $$ q^{u_n}\le q'+ \left(\frac{q}{q'}\right)^{u_n+1} <q'+1. $$ Assume now $q\ge q' $. Since the conclusion of Lemma $\ref{lemma:ifSqnotempty}$ is trivial if $u_n=1$ and also if $q'=1$, we assume $u_n > 1$ and $q'\ge 2$. From $$ q^{u_n}(q')^{u_n+1} \le (q')^{u_n+2}+q^{u_n+1}\le (q')^2q^{u_n}+q^{u_n+1} $$ we deduce $$
(q')^{u_n+1} - (q')^2 \le q. $$ From $(q')^{u_n-1}>(q')^{u_n-2}$ we deduce $(q')^{u_n-1}\ge (q')^{u_n-2}+1$, which we write as $$ (q')^{u_n+1}-(q')^2\ge (q')^{u_n}. $$ Finally $$ (q')^{u_n} \le (q')^ {u_n+1}- (q')^{2} \le q. $$ \end{proof}
\begin{proof}[Proof of Theorem $\ref{Theorem::UncountableLiouvilleSets}$] $\phantom{.}$
Suppose $\displaystyle\limsup_{n\to\infty}\frac{\log q_{n+1}}{u_n\log q_n} = 0$. Then, we get, $$ \lim_{n\rightarrow\infty} \frac{\log q_{n+1}}{u_n\log q_n}=0. $$ Suppose ${\sf S}_{\underline{q}, \underline{u}} \ne \emptyset.$ Let $\xi \in \sf {S}_{\underline{q}, \underline{u}}$. From Remark $\ref{Remark:definitionLiouvilleSet}$, it follows that there exists a sequence $\bigl(b_n\bigr)_{n\ge 1}$ of positive integers and there exist two positive constants $\kappa_1$ and $\kappa_2$ such that, for any sufficiently large $n$, $$ q_{n} \leq b_{n} \leq q_{n}^{\kappa_1} \mbox{ and } \Vert b_n \xi \Vert \le q_n^{- \kappa_2 u_n}\cdotp $$ Let $n_0$ be an integer $\ge \kappa_1$ such that these inequalities are valid for $n\ge n_0$ and such that, for $n\ge n_0$, $q_{n+1}^{\kappa_1}<q_n^{u_n}$ (by the assumption). Since the sequence $(q_n)_{n\ge 1}$ is increasing, we have $q_n^{\kappa_1}<q_{n+1}^{u_n}$ for $n\ge n_0$. From the choice of $n_0$ we deduce $$ b_{n+1}\le q_{n+1}^{\kappa_1}<q_n^{u_n }\le b_n^{u_n} $$ and $$ b_n\le q_n^{\kappa_1}<q_{n+1}^{u_n} \le b_{n+1}^{u_n} $$ for any $n\ge n_0$. Denote by $a_n$ (resp.~$a_{n+1}$) the nearest integer to $\xi b_n$ (resp.~ to $\xi b_{n+1}$). Lemma $\ref{lemma:ifSqnotempty}$ with $q$ replaced by $b_n$ and $q'$ by $b_{n+1}$ implies that for each $n\ge n_0$, $$ \frac{a_n}{b_n}= \frac{a_{n+1}}{b_{n+1}}\cdotp $$ This contradicts the assumption that $\xi$ is irrational. This proves that ${\sf S}_{\underline{q}, \underline{u}} = \emptyset.$
\noindent Conversely, assume $$ \limsup_{n\rightarrow\infty} \frac{\log q_{n+1}}{u_n\log q_n}>0. $$ Then there exists $\vartheta>0$ and there exists a sequence $(N_\ell)_{\ell\ge 1}$ of positive integers such that $$ q_{N_\ell}>q_{N_\ell-1}^{\vartheta u_{N_\ell-1}} $$ for all $\ell\ge 1$. Define a sequence $(c_\ell)_{\ell\ge 1}$ of positive integers by $$ 2^{c_\ell}\le q_{N_\ell} < 2^{c_\ell+1}. $$ Let $\underline{e}=(e_\ell)_{\ell\ge 1}$ be a sequence of elements in $\{-1,1\}$. Define $$ \xi_{\underline{e}}=\sum_{\ell\ge 1} \frac{e_\ell}{2^{c_\ell}}\cdotp $$ It remains to check that $\xi_{\underline{e}}\in {\sf S}_{\underline{q},\underline{u}}$ and that distinct $\underline{e}$ produce distinct $\xi_{\underline{e}}$.
Let $\kappa_1=1$ and let $\kappa_2$ be in the interval $0<\kappa_2<\vartheta$. For sufficiently large $n$, let $\ell$ be the integer such that $N_{\ell-1}\le n<N_\ell$. Set $$ b_n=2^{c_{\ell-1}}, \quad a_n=\sum_{h=1}^{\ell-1} e_h 2^{c_{\ell-1}-c_h}, \quad r_n=\frac{a_n}{b_n}\cdotp $$ We have $$ \frac{1}{2^{c_\ell}}<
\left|
\xi_{\underline{e}}-r_n\right| =
\left| \xi_{\underline{e}}-
\sum_{h\ge \ell} \frac{e_h}{2^{c_h}} \right| \le \frac{2}{2^{c_\ell}}\cdotp $$ Since $\kappa_2<\vartheta$, $n$ is sufficiently large and $n\le N_\ell-1$, we have $$ 4q_n^{\kappa_2 u_n} \le 4q_{N_\ell-1}^{\kappa_2u_{N_\ell-1}}\le q_{N_\ell}, $$ hence $$ \frac{2}{2^{c_\ell}}<\frac{4}{q_{N_\ell}}<\frac{1}{q_n^{\kappa_2 u_n}} $$ for sufficiently large $n$. This proves $\xi_{\underline{e}}\in {\sf S}_{\underline{q}, \underline{u}}$ and hence $ {\sf S}_{\underline{q}, \underline{u}}$ is not empty.
Finally, if $\underline{e}$ and $\underline{e}'$ are two elements of $\{-1,+1\}^{{\mathbf N}}$ for which $e_h=e'_h$ for $1\le h< \ell$ and, say, $e_\ell=-1$, $e'_\ell=1$, then $$ \xi_{\underline{e}}< \sum_{h=1}^{\ell-1} \frac{e_h}{2^{c_h}} <\xi_{\underline{e}'}, $$ hence $\xi_{\underline{e}}\not= \xi_{\underline{e}'}$. This completes the proof of Theorem $\ref{Theorem::UncountableLiouvilleSets}$.
\end{proof}
\section{Proof of Corollary $\ref{Corollary:EquivalenceTrivial}$} \label{Section:EquivalenceTrivial}
The proof of Corollary $\ref{Corollary:EquivalenceTrivial}$ as a consequence of Theorem $\ref{Theorem::UncountableLiouvilleSets}$ relies on the following elementary lemma. \begin{lemma}\label{Lemma:equivalencetrivial} Let $(a_n)_{n\ge 1}$ and $(b_n)_{n\ge 1}$ be two increasing sequences of positive integers. Then there exists an increasing sequence of positive integers $(q_n)_{n\ge 1}$ satisfying the following properties: \\ $(i)$ The sequence $(q_{2n})_{n\ge 1}$ is a subsequence of the sequence $(a_n)_{n\ge 1}$. \\ $(ii)$ The sequence $(q_{2n+1})_{n\ge 0}$ is a subsequence of the sequence $(b_n)_{n\ge 1}$. \\ $(iii)$ For $n\ge 1$, $q_{n+1}\ge q_n^n$.
\end{lemma}
\begin{proof}[Proof of Lemma $\ref{Lemma:equivalencetrivial}$] We construct the sequence $(q_n)_{n\ge 1}$ inductively, starting with $q_1=b_1$ and with $q_2$ the least integer $a_i$ satisfying $a_i\ge b_1$. Once $q_n$ is known for some $n\ge 2$, we take for $q_{n+1}$ the least integer satisfying the following properties: \\ $\bullet$ $q_{n+1}\in \{a_1,a_2,\dots\}$ if $n$ is odd, $q_{n+1}\in \{b_1,b_2,\dots\}$ if $n$ is even. \\ $\bullet$ $q_{n+1}\ge q_n^n$. \end{proof}
\begin{proof}[Proof of Corollary $\ref{Corollary:EquivalenceTrivial}$] Let $\xi$ and $\eta$ be Liouville numbers. There exist two sequences of positive integers $(a_n)_{n\ge 1}$ and $(b_n)_{n\ge 1}$, which we may suppose to be increasing, such that $$ \Vert a_n \xi \Vert \le a_n^{-n} \quad\hbox{and}\quad \Vert b_n \eta \Vert \le b_n^{-n} $$ for sufficiently large $n$. Let $\underline{q}=(q_n)_{n\ge 1}$ be an increasing sequence of positive integers satisfying the conclusion of Lemma $\ref{Lemma:equivalencetrivial}$. According to Theorem $\ref{Theorem::UncountableLiouvilleSets}$, the Liouville set ${\sf S}_{\underline{q}}$ is not empty. Let $\varrho\in {\sf S}_{\underline{q}}$. Denote by $\underline{q}'$ the subsequence $(q_2,q_4,\dots,q_{2n},\dots)$ of $\underline{q}$ and by $\underline{q}''$ the subsequence $(q_1,q_3,\dots,q_{2n+1},\dots)$. We have $\varrho\in {\sf S}_{\underline{q}} = {\sf S}_{\underline{q}'}\cap {\sf S}_{\underline{q}''}$. Since the sequence $(a_n)_{n\ge 1}$ is increasing, we have $q_{2n}\ge a_n$, hence $\xi\in {\sf S}_{\underline{q}'}$. Also, since the sequence $(b_n)_{n\ge 1}$ is increasing, we have $q_{2n+1}\ge b_n$, hence $\eta\in {\sf S}_{\underline{q}''}$. Finally, $\xi$ and $\varrho $ belong to the Liouville set ${\sf S}_{\underline{q}'}$, while $\eta$ and $\varrho $ belong to the Liouville set ${\sf S}_{\underline{q}''}$. \end{proof}
\section{Proofs of Propositions $\ref{proposition:UncountablymanyLiouvilleSets}$, $\ref{proposition:SubLiouvilleSets}$, $\ref{proposition:13}$ and $\ref{proposition:14}$ } \label{section:PfPropositions}
\begin{proof}[Proof of Proposition $\ref{proposition:UncountablymanyLiouvilleSets}$] The fact that for $0<\tau<1$ the set ${\sf S}_{\underline{q}^{(\tau)}}$ is not empty follows from Theorem $\ref{Theorem::UncountableLiouvilleSets}$, since $$ \lim_{n\rightarrow\infty} \frac{ \log q^{(\tau)}_{n+1}} { n\log q^{(\tau)}_n} =1. $$ In fact, if $(e_n)_{n\ge 1}$ is a bounded sequence of integers with infinitely many nonzero terms, then $$ \sum_{n\ge 1} \frac{e_n}{q^{(\tau)}_n}\in {\sf S}_{\underline{q}^{(\tau)}}. $$
Let $0<\tau_1<\tau_2<1$. For $n\ge 1$, define $$ q_{2n}=q_n^{(\tau_1)}=2^{n! \lfloor n^{\tau_1} \rfloor} \quad\hbox{and}\quad q_{2n+1}=q_n^{(\tau_2)}=2^{n! \lfloor n^{\tau_2} \rfloor}. $$ One easily checks that $(q_m)_{m\ge 1}$ is an increasing sequence with $$ \frac{\log q_{2n+1}}{n\log q_{2n}}\rightarrow 0 \quad\hbox{and}\quad \frac{\log q_{2n+2}}{n\log q_{2n+1}}\rightarrow 0. $$ From Theorem $\ref{Theorem::UncountableLiouvilleSets}$ one deduces ${\sf S}_{\underline{q}^{(\tau_1)}}\cap {\sf S}_{\underline{q}^{(\tau_2)}}=\emptyset$.
\end{proof}
\begin{proof}[Proof of Proposition $\ref{proposition:SubLiouvilleSets}$] For sufficiently large $n$, define $$ a_n=\sum_{m= 1}^n 2^{(2n)! -(2m-1)!\lambda_m}. $$ Then
$$
\frac{1}{q_{2n}^{(2n+1)\lambda_{n+1}} } < \xi-\frac{a_n}{q_{2n}}
=\sum_{m\ge n+1} \frac{1}{2^{(2m-1)!\lambda_m}}
\le \frac{2}{q_{2n}^{(2n+1)\lambda_{n+1}} } \cdotp
$$ The right inequality with the lower bound $\lambda_{n+1}\ge 1$ proves that $\xi\in {\sf S}_{\underline{q}'}$.
Let $\kappa_1$ and $\kappa_2$ be positive numbers, $n$ a sufficiently large integer, $s$ an integer in the interval $q_{2n+1}\le s\le q_{2n+1}^{\kappa_1}$ and $r$ an integer. Since $\lambda_{n+1}<\kappa_2n$ for sufficiently large $n$, we have $$ q_{2n}^{(2n+1)\lambda_{n+1}}<q_{2n}^{\kappa_2 n (2n+1)}= q_{2n+1}^{\kappa_2 n } \le s^{\kappa_2 n }. $$ Therefore, if $r/s=a_n/q_{2n}$, then $$
\left|\xi-\frac{r}{s}\right| =
\left|\xi-\frac{a_n}{q_{2n}}\right| > \frac{1}{q_{2n}^{(2n+1)\lambda_{n+1}} } >\frac{1}{s^{\kappa_2n}}\cdotp $$ On the other hand, for $r/s\not =a_n/q_{2n}$, we have $$
\left|\xi-\frac{r}{s}\right| \ge
\left|\frac{a_n}{q_{2n}}-\frac{r}{s}\right|-
\left| \xi-\frac{a_n}{q_{2n}}\right| \ge \frac{1}{q_{2n} s} - \frac{2}{q_{2n}^{(2n+1)\lambda_{n+1}} }\cdotp $$ Since $\lambda_n\rightarrow\infty$, for sufficiently large $n$ we have $$ 4q_{2n} s\le 4q_{2n} q_{2n+1}^{\kappa_1} = 4 q_{2n}^{1+\kappa_1(2n+1)} \le q_{2n}^{(2n+1)\lambda_{n+1}} $$ hence $$ \frac{2}{q_{2n}^{(2n+1)\lambda_{n+1}} }\le \frac{1}{2q_{2n} s} \cdotp $$ Further $$ 2q_{2n} <q_{2n+1}<q_{2n+1}^{\kappa_2n-1}\le s^{\kappa_2n-1}. $$ Therefore $$
\left|\xi-\frac{r}{s}\right| \ge
\frac{1}{2q_{2n} s} >\frac{1}{s^{\kappa_2n}},
$$
which shows that $\xi\not\in {\sf S}_{\underline{q}''}$. \end{proof}
\begin{proof}[Proof of Proposition $\ref{proposition:13}$] Let $(\lambda_s)_{s\ge 0}$ be a strictly increasing sequence of positive rational integers with $\lambda_0=1$. Define two sequences $(n'_k)_{k\ge 1}$ and $(n''_h)_{h\ge 1}$ of positive integers as follows. The sequence $(n'_k)_{k\ge 1}$ is the increasing sequence of the positive integers $n$ for which there exists $s\ge 0$ with $\lambda_{2s}\le n<\lambda_{2s+1}$, while $(n''_h)_{h\ge 1}$ is the increasing sequence of the positive integers $n$ for which there exists $s\ge 0$ with $\lambda_{2s+1}\le n<\lambda_{2s+2}$.
For $s\ge 0$ and $\lambda_{2s}\le n<\lambda_{2s+1}$, set $$ k=n-\lambda_{2s}+\lambda_{2s-1}-\lambda_{2s-2}+\cdots+\lambda_1. $$ Then $n=n'_k$.
For $s\ge 0$ and $\lambda_{2s+1}\le n<\lambda_{2s+2}$, set $$ h=n-\lambda_{2s+1}+\lambda_{2s}-\lambda_{2s-1}+\cdots-\lambda_1+1. $$ Then $n=n''_h$.
For instance, when $\lambda_s=s+1$, the sequence $(n'_k)_{k\ge 1}$ is the sequence $(1,3,5\dots)$ of odd positive integers, while $(n''_h)_{h\ge 1}$ is the sequence $(2,4,6\dots)$ of even positive integers. Another example is $\lambda_s=s!$, which occurs in the paper \cite{erd1} by Erd\H{o}s.
In general, for $n=\lambda_{2s}$, we write $n=n'_{k(s)}$ where $$ k(s)=\lambda_{2s-1}-\lambda_{2s-2}+\cdots+\lambda_1<\lambda_{2s-1}. $$ Notice that $\lambda_{2s}-1=n''_h$ with $h=\lambda_{2s}-k(s)$.
Next, define two increasing sequences $(d_n)_{n\ge 1}$ and $\underline{q}=(q_n)_{n\ge 1}$ of positive integers by induction, with $d_1=2$, $$ d_{n+1}= \begin{cases} k d_n &\hbox{if $n=n'_k$},\\ h d_n &\hbox{if $n=n''_h$} \\ \end{cases} $$ for $n\ge 1$ and $q_n=2^{d_n}$. Finally, let $\underline{q}'=(q'_k)_{k\ge 1}$ and $\underline{q}''=(q''_h)_{h\ge 1}$ be the two subsequences of $\underline{q}$ defined by $$ q'_k=q_{n'_k},\quad k\ge 1, \qquad q''_h=q_{n''_h},\quad h\ge 1. $$ Hence $\underline{q}$ is the union of these two subsequences. Now we check that the number $$ \xi=\sum_{n\ge 1} \frac{1}{q_n} $$ belongs to ${\sf S}_{\underline{q}'}\bigcap {\sf S}_{\underline{q}''}$. Note, by Theorem $\ref {Theorem::UncountableLiouvilleSets}$, that ${\sf S}_{\underline{q}} \ne \emptyset$ as ${\sf S}_{\underline{q}'}\ne \emptyset$ and ${\sf S}_{\underline{q}''}\ne \emptyset$. Define $$ a_n=\sum_{m=1}^n 2^{d_n-d_m}. $$ Then $$ \frac{1}{q_{n+1}}<\xi-\frac{a_n}{q_n} = \sum_{m\ge n+1} \frac{1}{q_m}< \frac{2}{q_{n+1}}\cdotp $$ If $n=n'_k$, then $$
\left| \xi-\frac{a_{n'_k}}{q'_k}
\right| <\frac{2}{(q'_k)^k} $$ while if $n=n''_h$, then $$
\left| \xi-\frac{a_{n''_h}}{q''_h}
\right| <\frac{2}{(q''_h)^h}\cdotp $$ This proves $\xi\in{\sf S}_{\underline{q}'}\cap {\sf S}_{\underline{q}''}$.
Now, we choose $\lambda_s=2^{2^s}$ for $s\ge 2$ and we prove that $\xi$ does not belong to ${\sf S}_{\underline{q}}$. Notice that $\lambda_{2s-1}=\sqrt{\lambda_{2s}}$. Let $n=\lambda_{2s}=n'_{k(s)}$. We have $k(s)<\sqrt{\lambda_{2s}}$ and $$
\left| \xi-\frac{a_{n}}{q_n}
\right| >\frac{1}{q_{n+1}}=\frac{1}{q_{n}^{k(s)}}>\frac{1}{q_{n}^{\sqrt{n}}}\cdotp $$ Let $\kappa_1$ and $\kappa_2$ be two positive real numbers and assume $s$ is sufficiently large. Further, let $u/v\in{\mathbf Q}$ with $v\le q_n^{\kappa_1}$. If $u/v=a_n/q_n$, then $$
\left| \xi-\frac{u}{v}
\right|=
\left| \xi-\frac{a_{n}}{q_n}
\right| > \frac{1}{q_{n}^{\sqrt{n}}}> \frac{1}{q_{n}^{\kappa_2n}}\cdotp $$ On the other hand, if $u/v\not=a_n/q_n$, then $$
\left| \xi-\frac{u}{v}
\right| \ge
\left|
\frac{u}{v}- \frac{a_{n}}{q_n}
\right|
-
\left| \xi-\frac{a_{n}}{q_n}
\right| $$ with $$
\left|
\frac{u}{v}- \frac{a_{n}}{q_n}
\right|\ge \frac{1}{vq_n}\ge \frac{1}{q_n^{\kappa_1+1}}
>\frac{2}{q_n^{\sqrt{ n}}}
$$
and
$$
\left| \xi-\frac{a_{n}}{q_n}
\right| <\frac{1}{q_{n}^{\sqrt{n}}}\cdotp
$$
Hence
$$
\left|\xi-\frac{u}{v}\right|>
\frac{1}{q_{n}^{\sqrt{n}}}
>\frac{1}{q_n^{\kappa_2 n}}\cdotp
$$
This proves Proposition $\ref{proposition:13}$. \end{proof}
\begin{proof}[Proof of Proposition $\ref{proposition:14}$]
Let $\underline{u} =(u_n)_{n\geq 1}$ be a sequence of positive real numbers such that $\sqrt{u_{n+1}} \leq u_n+1\leq u_{n+1}$. We prove more precisely that for any sequence $\underline{q}$ such that $q_{n+1}>q_n^{u_n}$ for all $n\ge 1$, the sequence $\underline{q}'=(q_{2m+1})_{m\ge 1}$ has ${\sf S}_{\underline{q}', \underline{u}}\not={\sf S}_{\underline{q},\underline{u}}$. This implies the proposition, since any increasing sequence has a subsequence satisfying $q_{n+1}>q_n^{u_n}$.
Assuming $q_{n+1}>q_n^{u_n}$ for all $n\ge 1$, we define $$ d_n=\begin{cases} q_n & \hbox{for even $n$,}\\ q_{n-1}^{\lfloor \sqrt{u_n}\rfloor} & \hbox{for odd $n$.} \end{cases} $$ We check that the number $$ \xi=\sum_{n\ge 1} \frac{1}{d_n} $$ satisfies $\xi\in {\sf S}_{\underline{q}', \underline{u}}$ and $\xi\not\in {\sf S}_{\underline{q}, \underline{u}}$.
Set $b_n=d_1d_2\cdots d_n$ and $$ a_n= \sum_{m=1}^{n} \frac{b_n}{d_m} = \sum_{m=1}^{n} \prod_{\substack{1\le i\le n \\ i\not=m}} d_i, $$ so that $$ \xi-\frac{a_n}{b_n}=\sum_{m\ge n+1} \frac{1}{d_m} \cdotp $$ It is easy to check from the definition of $d_n$ and $q_n$ that we have, for sufficiently large $n$, $$ b_n\le q_1\cdots q_n\le q_{n-1}^{u_{n-1}}q_n\le q_n^2 $$ and $$ \frac{1}{d_{n+1}}\le \xi-\frac{a_n}{b_n}\le \frac{2}{d_{n+1}}\cdotp $$ For odd $n$, since $d_{n+1}=q_{n+1}\ge q_n^{u_n}$, we deduce $$
\left|\xi-\frac{a_n}{b_n} \right| \le \frac{2}{q_n^{u_n}}, $$ hence $\xi\in {\sf S}_{\underline{q}',\underline{u}}$.
For even $n$, we plainly have $$
\left|\xi-\frac{a_n}{b_n} \right| > \frac{1}{d_{n+1}}= \frac{1}{q_{n}^{\lfloor \sqrt{u_{n+1}}\rfloor} } \cdotp $$ Let $\kappa_1$ and $\kappa_2$ be two positive real numbers, and let $n$ be sufficiently large. Let $s$ be a positive integer with $s\le q_n^{\kappa_1}$ and let $r$ be an integer. If $r/s= a_n/b_n$, then $$
\left|\xi-\frac{r}{s} \right| =
\left|\xi-\frac{a_n}{b_n} \right| > \frac{1}{q_{n}^{\kappa_2 u_n}}\cdotp $$ Assume now $r/s\not= a_n/b_n$. From $$
\left|\xi-\frac{a_n}{b_n} \right| \le \frac{2}{q_{n}^{\lfloor \sqrt{u_{n+1}}\rfloor} } \le
\frac{1}{2q_n^{\kappa_1+2}}, $$ we deduce $$ \frac{1}{q_n^{\kappa_1+2}}\le \frac{1}{sb_n}\le
\left|\frac{r}{s}-\frac{a_n}{b_n}\right|\le
\left|\xi-\frac{r}{s} \right| +
\left|\xi-\frac{a_n}{b_n} \right|
\le \left|\xi-\frac{r}{s} \right| + \frac{1}{2q_n^{\kappa_1+2}}, $$ hence $$
\left|\xi-\frac{r}{s} \right| \ge \frac{1}{2q_n^{\kappa_1+2}}> \frac{1}{q_{n}^{\kappa_2 u_n}}\cdotp $$ This completes the proof that $\xi\not\in {\sf S}_{\underline{q}, \underline{u}}$.
\end{proof}
\section{Proof of Proposition $\ref{proposition:Squ-densebutnotGdelta}$} \label{section:Squ-densebutnotGdelta} \begin{proof}[Proof of Proposition $\ref{proposition:Squ-densebutnotGdelta}$]
If ${\sf S}_{\underline{q},\underline{u}}$ is non empty, let
$\gamma \in {\sf S}_{\underline{q}, \underline{u}}$. By Theorem \ref{Theorem:Field}, $\gamma+{\mathbf Q}$ is contained in ${\sf S}_{\underline{q},\underline{u}}$, hence ${\sf S}_{\underline{q},\underline{u}}$ is dense in ${\mathbf R}$.
Let $t$ be an irrational real number which is not Liouville. Hence $t\not\in {\sf K}_{\underline{q},\underline{u}}$, and therefore, by Theorem \ref{Theorem:Field}, ${\sf S}_{\underline{q},\underline{u}}\cap (t+{\sf S}_{\underline{q},\underline{u}})=\emptyset$. This implies that ${\sf S}_{\underline{q},\underline{u}}$ is not a $G_\delta$ dense subset of ${\mathbf R}$.
\end{proof}
\end{document}
Toward a robot swarm protecting a group of migrants

Maxime Vaidis and Martin J.-D. Otis

Intelligent Service Robotics, volume 13, pages 299–314 (2020)
Different geopolitical conflicts of recent years have led to mass migrations of several civilian populations. These migrations take place in militarized zones, presenting real danger to the populations. Indeed, civilians are increasingly targeted during military assaults. Defense and security needs have increased; there is therefore a need to prioritize the protection of migrants. Very few or no arrangements are available to manage the scale of displacement and the protection of civilians during migration. In order to increase their security during mass migration in an inhospitable territory, this article proposes an assistive system using a team of mobile robots, labeled a rover swarm, that is able to provide a safety area around the migrants. We suggest a coordination algorithm, combining a convolutional neural network (CNN) and fuzzy logic, that allows the swarm to synchronize its movements and provide better sensor coverage of the environment. Implementation is carried out on a reduced-scale rover to enable evaluation of the functionalities of the suggested software architecture and algorithms. The results bring new perspectives on helping and protecting migrants with a swarm that evolves in a complex and dynamic environment.
The socio-political climate of insecurity in recent years has led to an increase in the number of asylum applications and political refugees in many countries, including Germany, Canada, and France. Governments are facing a real international migration crisis. Indeed, restrictive policies and civil wars in many countries around the world have increased refugee claims. The migrants who arrive in large numbers at our borders risk retaliation from their countries of origin and often risk their lives. It is, therefore, relevant to conduct studies that could provide solutions for ensuring their protection. However, the limited resources of governments and the large surveillance areas near the borders of several countries make the task almost impossible. For these reasons, our objective is to develop an autonomous system that enables supervision and protection of migrants. During their journey, migrants move mainly on foot with few means of protection, exposing themselves to risky situations. In order to protect them, we need to gather information about their movement and the environment around them. Many studies have focused on retrieving this information with different types of networked sensors [1,2,3,4]. These networked sensors mainly use WiFi communication [1,2,3], but they can also use other protocols such as XBee and Bluetooth [4]. These modes of communication are relatively easy to set up and can be very useful for transmitting data between robotic platforms. The data gathered can therefore be used to protect people using groups of rovers. Automatic management of a robot swarm, for better efficiency and to reduce human involvement in potentially dangerous terrain often covering long distances, is a major challenge for this type of project. In addition, external factors must be taken into account to provide a safety zone around the group of migrants.
With respect to this challenge, some studies have been conducted regarding a mobile group of rovers interacting with external factors that have an impact on the group's decisions [5,6,7,8]. The suggested system should track the migrants, compute optimal way-points around them and manage the robots' faults. The goal of this research work is to provide a safety area around the migrants and real-time assistance. This paper presents only the optimal way-point computation and the robot fault management algorithm, including the rover control. For migrant tracking, some existing solutions could be implemented, such as using a fixed-wing unmanned aerial vehicle (UAV) as suggested in [9,10,11]; however, this subject is out of the scope of this research work.
Merging data gathered from different robotic systems allows us to follow what is happening on the ground and protect migrants. To process these data, our suggested system acts in three steps: (1) analyze the migrants' behavior and give each rover a position to reach, (2) lead the rover to this position while keeping a motion vector conforming to the movement of the group of migrants, and (3) manage the faults (abnormal behavior of a rover). The abnormal behavior of a rover is classified using a convolutional neural network (CNN) and is compared to a physical model. We chose to detect four situations, one normal and three abnormal states: a normal move of a rover on stones, a fall of a rover, a collision with an obstacle, and a skid on sand. Methods of human-swarm interaction mostly rely on orders given by operators via different interfaces, such as computers and smart watches [5,6,7,8]. The level of automation is reduced with such interfaces. Conversely, the aim of this paper is to present a wireless body area network (WBAN) comprising rovers, migrants and the different algorithms created to control the swarm. Our contribution is the design of a new system labeled SROPRAS (Swarm RObots PRotection Algorithms System). With the data collected from this network, we wish to identify consistent patterns enabling their processing using fuzzy logic and a CNN. These patterns can be used to automatically detect several events in order to improve the effectiveness of the swarm and the protection of migrants. Therefore, migrants do not need to send orders or commands to SROPRAS. The rovers choose their paths and target positions by themselves with SROPRAS. This ensures a high level of swarm transparency and minimal intrusion for the migrants.
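To make step (2) concrete, the sketch below shows one way fuzzy rules could blend two inputs — a rover's distance to its assigned way-point and the migrants' walking speed — into a crisp speed command. This is an illustrative Takagi–Sugeno-style sketch, not the actual SROPRAS rule base: the 3 m membership break-point and the 2 m/s maximum speed are assumptions chosen only for the example.

```python
def rover_speed(dist_err_m, group_speed_ms, v_max=2.0):
    """Blend two fuzzy rules into a crisp speed command.

    Rule 1: IF the rover is NEAR its way-point THEN match the group speed.
    Rule 2: IF the rover is FAR from its way-point THEN drive at v_max.
    The 3 m ramp and v_max are illustrative assumptions.
    """
    mu_near = max(0.0, 1.0 - dist_err_m / 3.0)  # membership degree of "near"
    mu_far = min(1.0, dist_err_m / 3.0)         # membership degree of "far"
    # Sugeno-style weighted average of the two rule outputs
    return (mu_near * group_speed_ms + mu_far * v_max) / (mu_near + mu_far)
```

With the rover on its way-point the command equals the group speed; far from it, the command saturates at `v_max`, and in between the two rules blend smoothly, which avoids the abrupt switching a crisp threshold would produce.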
Following a review of our general approach to setting up the system and the tools used in this research work, we describe the primary contribution of this paper, which is the design of SROPRAS based on three different algorithms created to control the swarm of rovers: the rover positioning algorithm, the rover trajectory planning algorithm, and the rover motion algorithm, together with abnormal behavior management. Then, we explain how we can detect the different states of each rover so as to know how the swarm will react if there is an issue. The SROPRAS simulations show encouraging results, which are discussed.
An example of the context of our research work is shown in Fig. 1. Our tool integrates different sensors in order to track migrants using inertial measurement units and re-organize the swarm in real time. Therefore, this section presents some applications of such sensors in swarm robotics.
Fig. 1 Context of the use of the proposed system (simulation video available in Electronic supplementary material)
An inertial measurement unit (a combination of accelerometers, gyroscopes, and magnetometers) is widely used in the field of robotics to support localization algorithms. The received data can be analyzed for motion measurement of robotic arms or mobile robots [12, 13], tracking of people or robotic systems [14, 15], gait analysis [16, 17], and inertial navigation or positioning for mobile robots [18, 19]. Its affordability and ease of use on multiple robotic platforms make its integration in robot swarms possible [20].
Swarms of mobile robots enable the execution of many tasks faster and more effectively than a lone robot, for example, in field exploration [21, 22] and in searching for a target in surveillance [23, 24] or rescue [25, 26] operations. This is possible because of their number as well as their group intelligence, which allows the distribution of tasks between robots in the swarm. Depending on its level of autonomy, the swarm can perform more or less complex tasks. Most modern mobile swarms are controlled by one or more operators [7]. The operators must follow the evolution of the robots and influence their performance if necessary, usually by assigning them a different goal to achieve [27]. However, this interaction depends on the communication mode between robots and humans [28]: speech, gesture, joystick or all of them. The point is that interactive exposure to a robot can change a user's expectations of a robot's behavior. Therefore, the communication mode between robots and humans should be carefully chosen to achieve the desired objectives. Moreover, this communication mode should be transparent to provide an autonomous swarm.
However, improving the interaction between robots and humans is not enough to achieve effective autonomous swarms. The decision-making structure should find an optimal balance between the individual command of a robot and the overall performance of the swarm. A robot must have enough liberty to be capable of performing its actions, but it must comply with the aims of the swarm. Some rovers can follow a direction [29], perform mechanical actions [30], and measure environmental conditions [31], while some others can assign goals and trajectories [19,20,21]. In this article, our swarm should be able to coordinate all these actions while following a group of migrants. Other research works have been designed to help the operator select a robot in particular, which simplifies the interaction between the operator and the swarm [32]. But in a complex swarm that needs to perform different actions, like that of our research work, it is difficult for a human to control all robots separately while they carry out their tasks. Moreover, the suggested swarm should react as a function of human behavior and not as a function of a command.
Some solutions have been suggested to solve this issue. One of them is to define two different roles among the robots that make up the swarm. One or many robots will be the leader(s) of the swarm that give instructions and collect all the information, and the others will only communicate with the leader to give all the information needed and carry out their instructions [33]. The operator has only the relevant information and can control or influence the leader to interact with the swarm [34]. Another solution is to define the operator as the leader of the swarm; this will allow him to directly assign goals to the swarm who will adapt his local command through their robots [35, 36].
Fig. 2 a Suggested setup in an outdoor environment, b experimental setup in the laboratory
In order to implement this system, some frameworks for swarm control have been suggested and designed [37], such as the Robot Operating System (ROS) [38, 39]. ROS is a meta-operating system that can run on one or more computers, and it provides several features: hardware abstraction, low-level device control, implementation of commonly used functionality, transmission of messages between processes, and management of the installed packages [40, 41]. In other words, ROS is a client–server system able to support an implementation of distributed artificial intelligence algorithms inside the swarm; it will therefore take all the team decisions related to the swarm's actions [42, 43].
Artificial intelligence is among the most modern technologies in the field of robotics. With precise control and low computing time, it can be easily implemented in mobile robots. Moreover, it has the advantage of overcoming some mathematical problems in motion and path generation. One issue it can address is the recognition of a robot's state in the situations it can encounter, and the choice of an appropriate response. Currently, many sensors allow us to gather information on the environment and some algorithms can analyze these data [44, 45]. However, they are specific to certain situations and cannot be used in a more general case. For this reason, many studies focus on the implementation of deep learning algorithms in mobile robots. This type of algorithm works for the majority of situations encountered. It is found in many fields of mobile robotics, including obstacle avoidance [46], cooperation between several robotic systems [47] and detection of a robot's state using a camera [48]. On the basis of these studies, we can use deep learning to evaluate the state of a robot with data gathered from different sensors. Some studies used a multilayer perceptron with data from an accelerometer and gyroscope as input to detect the fall of a robot [49]. More specifically, some other studies used a convolutional neural network (CNN) to process the data gathered from sensors so as to detect different human activities [50] and to recognize hand gestures using an IMU [51]. We decided to apply this principle, based on [52] and [53], on the rover to detect its state. With this algorithm, the swarm has all the necessary information to make choices and adapt itself to the environmental constraints.
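As an illustration of the kind of preprocessing such a CNN performs on IMU streams, the sketch below segments a signal into fixed-length overlapping windows and applies one 1-D convolution with ReLU rectification and max-pooling. The kernel weights here are arbitrary placeholders, not the system's trained filters; in the actual classifier they would be learned from labeled recordings of the four rover states.

```python
def sliding_windows(signal, win, step):
    """Cut an IMU sample stream into fixed-length overlapping windows."""
    return [signal[i:i + win] for i in range(0, len(signal) - win + 1, step)]

def conv1d(window, kernel):
    """Valid-mode 1-D convolution (cross-correlation) of one window."""
    k = len(kernel)
    return [sum(window[i + j] * kernel[j] for j in range(k))
            for i in range(len(window) - k + 1)]

def feature(window, kernel):
    """One CNN-style feature: convolve, rectify (ReLU), then max-pool."""
    return max(max(v, 0.0) for v in conv1d(window, kernel))
```

A difference kernel such as `[1, -1]` reacts to abrupt accelerometer jumps (falls, collisions), while a trained network would stack several such filters before a fully connected layer that outputs one of the four state classes.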
The management of multiple robots can be applied in harsh environments, such as mountains and valleys, which contain many different kinds of obstacles [5,6,7,8]. In these difficult fields, some properties of swarms allow them to adapt their behavior to situations such as the loss of one of their robots. Their capacity to process information through a decentralized system gives them an important advantage during military operations. If one of its robots is damaged or destroyed, the swarm will normally continue to operate with the remaining robots, planning its deployment according to its aims and the situations encountered. Moreover, it will take into account the movement constraints of its robots to optimize the result [54]. Indeed, depending on the type of robot, it will have more or fewer degrees of freedom to move [55]. Finally, the use of WiFi and XBee allows the required data to be sent in a short period of time, facilitating and shortening decision-making [56,57,58].
Suggested swarm of rovers
We suggest a swarm of rovers to evaluate the control algorithm proposed in this paper. First, we present the rovers used for the swarm. Second, we discuss the wireless body area network and the motion tracking used by the rovers to communicate and to track people during their motion. For outdoor applications, we suggest using a drone to obtain the positions of migrants and rovers, as shown in Fig. 2a. For logistic reasons, the evaluation of the swarm is done indoors. Figure 2b shows the framework used in our research work. Motion tracking was done using eight NaturalPoint OptiTrack cameras, which stand in for the drone's localization technology (lidar and IR cameras for migrant localization, GPS for rover localization) (1). The software configuration is the same for both suggested systems. The localization of robots and people is achieved by a camera node in ROS (2). Then, the algorithms presented in Sects. 4.1, 4.2, and 4.4 (positioning (3), path planning (4) and state detection (5)) are applied. The results are used by the robot drive controller presented in Sect. 4.3 to drive the rovers to their target positions in real time (6). The idea of the system comes from a swarm of rovers and a UAV for Mars exploration [59]. Our challenge, however, is migrant detection and the control algorithm needed to protect the migrants and provide a safety area.
To evaluate the suggested algorithms and architecture, we used a reduced-scale rover composed of two servomotors and four wheels (Lynxmotion Aluminum A4WD1 Rover). Its battery lasts up to 3 h, or 1 h at full speed, and its maximum speed is approximately 2 m per second. A full-scale implementation would need a rover such as those available from Boston Dynamics or Argo-XTR (including the J8 XTR: 30 km/h with a 1250 lb payload that can carry a gasoline generator and solar panel kits). Any kind of rover could work with our algorithm (humanoid, legged, wheeled, hexapod, quadruped, etc.).
The reduced-scale rover selected in this research work has been used in many swarm projects, including the NASA Swarmathon student competition, whose goal is to collect materials such as water ice and useful minerals from the Martian surface for astronauts who land on Mars in future missions [60]. One servomotor controls the speed of the robot, and the other controls its rotation. The rover also carries three different types of sensors: three ultrasonic sensors to detect very near obstacles, a laser sensor (lidar) to avoid collisions in the swarm's environment, and an inertial sensor (IMU) to compute the orientation of each robot. All components work and communicate by means of Arduino hardware. The system is implemented on Ubuntu 14.04 LTS with the Robot Operating System (ROS) [38, 39].
Wireless body area network (WBAN)
To establish a communication protocol in our swarm using ROS, we chose WiFi through the ESP8266, a micro-controller with an integrated WiFi module. Each robot needs to be connected to one ESP in order to create a network of WiFi modules that can exchange a fairly large amount of information within a short period of time. The ROS community has already suggested a node for the ESP, so it is possible to develop an integrated platform for the swarm.
Many projects use drones to follow people or vehicles; the method depends on the environment of the drones and their missions. Some of these methods accurately target one person and can differentiate several groups of people [61, 62]. A dual rotating infrared (DRIr) sensor was suggested as a new technology to track multiple targets moving unpredictably [63]. Moreover, differentiation between animals, adult humans, and children is suggested in [64], where a data fusion algorithm is used with unattended ground sensors (UGS); UGS are also used to differentiate vehicles.
This aspect is still a research subject, which we will study in future works. Since our algorithms are evaluated indoors in a laboratory environment, we used a motion capture system, a network of eight OptiTrack Flex 3 cameras, to determine the positions of people and robots in real time. The positions obtained from the motion capture stand in for the drone's data feed to the rovers. Data are sent in real time to a ROS server, which analyzes them and stores them in a database together with the data provided by the rovers.
Suggested algorithms to command the swarm and rovers
The following section details the algorithms that control the positions of the rovers and drive them around a group of migrants. The suggested command and control algorithms are executed in four steps: (i) an algorithm takes information from the motion capture cameras and gives every rover a target to reach; (ii) a second algorithm takes the preceding result and computes a trajectory for each rover; (iii) a fuzzy logic block running on the rover receives this information and drives it to the target with the appropriate motion vector; and (iv) a convolutional neural network (CNN) differentiates events related to the state of a rover.
Rover localization algorithm
Rover positioning algorithm around a convex hull
To provide a trajectory for the rovers around the group of migrants, the algorithm queries a database for the localization of each person. Each position is represented by a point in a 3D world coordinate frame, which defines the cloud of points formed by the group of migrants or rovers. To provide a safety area, the rovers should surround the group; the convex hull of the migrants' point cloud is used to place the rovers beyond it.
Considering the small number of persons in our experiments, we computed the points of the convex hull with the gift wrapping algorithm [65], as shown in Fig. 5. For a larger number of persons, another algorithm, such as the Graham scan [66] or Chan's algorithm [67], should be used.
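For illustration, the gift wrapping (Jarvis march) step can be sketched as follows in 2D; the function name is ours, and the sketch assumes points in general position (no handling of duplicate or collinear points):

```python
def convex_hull_gift_wrapping(points):
    """Return the vertices of the convex hull of 2-D points, traversed in order.

    Gift wrapping: starting from the leftmost point (always on the hull),
    repeatedly pick the point that no other point lies to the left of.
    """
    if len(points) < 3:
        return list(points)  # a hull needs at least three non-aligned points
    start = min(points)      # leftmost (then lowest) point
    hull = [start]
    current = start
    while True:
        candidate = points[0] if points[0] != current else points[1]
        for p in points:
            if p == current:
                continue
            # cross > 0: p lies to the left of the ray current -> candidate,
            # so p is the more extreme choice for the next hull vertex
            cross = ((candidate[0] - current[0]) * (p[1] - current[1])
                     - (candidate[1] - current[1]) * (p[0] - current[0]))
            if cross > 0:
                candidate = p
        current = candidate
        if current == start:
            break
        hull.append(current)
    return hull
```

This runs in O(nh) for n points and h hull vertices, which is acceptable for the few persons of our experiments; the Graham scan or Chan's algorithm cited above scale better for larger groups.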
Of course, some exceptions exist and are taken into account, as shown in Fig. 3:
For fewer than three migrants, we cannot apply a convex hull algorithm, because at least three non-aligned points are needed. In that case, the gravity center of the migrants' positions is used as the center of a circle whose radius is greater than the safety distance between migrants and rovers.
If there are three or more migrants, the algorithm defines the convex hull of the group (labeled the inside convex hull). Once this inside convex hull is defined, we expand it by a safety distance predefined by the operator. This creates an outside convex hull on which the rovers are located.
In both cases, the rovers are placed uniformly around the migrants. This algorithm outputs the predicted position of each rover around the people; the next two algorithms, path planning and robot drive control, drive the rover until it reaches its target. The gravity center of the group is used as the center of a uniform angular partition of the plane; the number of sections depends on the number of rovers available for the mission. We chose to start the first section from the mean direction of the group of people. Consequently, the first rover of the swarm is always located in front of the group (along the direction of the group's motion) on the outside convex hull and becomes the leader of the swarm. With two rovers, the second is placed behind the group, bringing up the rear. With three rovers, the second and third are on the outside convex hull at angles of \(120^\circ \) and \(-120^\circ \), respectively, measured from the first rover around the gravity center of the people. Each additional rover further subdivides the angle.
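The uniform angular placement can be sketched as below, approximating the outside convex hull by a circle of the safety radius around the gravity center; the function name and signature are ours:

```python
import math

def rover_targets(centroid, motion_direction, radius, n_rovers):
    """Place n_rovers uniformly on a circle of the given radius around the
    group's gravity center; the first rover (the leader) sits along the
    group's mean motion direction, the others split the remaining angle."""
    cx, cy = centroid
    base = math.atan2(motion_direction[1], motion_direction[0])
    targets = []
    for k in range(n_rovers):
        angle = base + 2 * math.pi * k / n_rovers
        targets.append((cx + radius * math.cos(angle),
                        cy + radius * math.sin(angle)))
    return targets
```

With two rovers and a group moving along +x, this yields one target in front of the group and one behind it, matching the leader/rear behavior described above.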
As the group moves, each rover's target moves accordingly, so the rovers constantly surround the group during the mission. It is also possible to give each rover a motion vector determined by the motion of the group members near it. The scalar product of a migrant's motion vector with the rover's normal vector on the inside convex hull indicates whether that migrant is about to leave the convex hull. When a migrant is leaving the inside convex hull, the nearby rover takes the motion vector of the person whose motion vector has the maximum scalar product with the rover's normal vector. Otherwise, the rover takes the motion vector of another person near it. If nobody is within 6 m of the rover, it takes the average motion vector of the group.
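The scalar product test above can be sketched as follows; the function name is ours, and we assume the rover's outward normal on the inside convex hull is given:

```python
def pick_tracked_migrant(velocities, rover_normal):
    """Return the index of the migrant whose motion vector has the largest
    scalar product with the rover's outward normal on the inside convex
    hull, or None if nobody is heading out of the hull on this side
    (all scalar products non-positive)."""
    def dot(v):
        return v[0] * rover_normal[0] + v[1] * rover_normal[1]
    best = max(range(len(velocities)), key=lambda i: dot(velocities[i]))
    return best if dot(velocities[best]) > 0 else None
```

A positive scalar product means the migrant is moving toward the rover's edge of the hull; when the function returns None, the rover falls back on a nearby person's motion vector or, beyond 6 m, on the group average as described above.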
Simulation given by the algorithm for two persons and two rovers: each arrow provides the direction and amplitude of displacement
Simulation given by the algorithm for ten persons and nine rovers: each arrow provides direction and amplitude of displacement
To link the computed positions to the rovers, we associate each rover with its nearest position. Two different results are shown in Figs. 4 and 5. The predicted positions given by the algorithm are sent to a database to be used by the path planning algorithm presented in the following section.
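A simple nearest-distance association can be sketched as below; this is a greedy pairing (an optimal assignment would use, e.g., the Hungarian algorithm), and the function name is ours:

```python
import math

def assign_targets(rovers, targets):
    """Greedily pair each rover with its nearest still-unassigned target
    position. Returns a dict mapping rover index -> target index.
    Assumes len(targets) >= len(rovers)."""
    remaining = set(range(len(targets)))
    assignment = {}
    for i, (rx, ry) in enumerate(rovers):
        best = min(remaining,
                   key=lambda j: math.hypot(targets[j][0] - rx,
                                            targets[j][1] - ry))
        assignment[i] = best
        remaining.remove(best)
    return assignment
```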
Simulation of a section given by the vector field histogram method
Many global and local path planning methods work in real time [44]. In our case, we chose the local VFH (vector field histogram) method, which provides real-time processing while avoiding the trajectory oscillation issues present in the VFF (virtual force field) method. The result is coupled with the robot drive control algorithm, which allows the rovers to move through their environment to reach their targets while avoiding obstacles [68].
The algorithm runs in each rover's node on ROS, allowing all connected rovers to be processed in parallel. The node queries a database for the positions of all obstacles inside a local map defined by a \(2\times 2\) m square around the rover, including any migrants, who are treated as obstacles. The size of the square is chosen to anticipate new obstacles and avoid them near the rover, without being so wide as to waste computing time and memory. The map and the obstacle coordinates have centimeter resolution, which generates a large amount of data. Consequently, we chose an array of \(20\times 20\) cells to represent the square map; in other words, each cell of this local map covers an area of \(10\times 10\) cm. This substantially reduces the processing time.
When obstacles are detected inside a cell of the map, the value of that cell is set to one; otherwise, it is set to zero. Then, we apply the local VFH method to compute a trajectory. The matrix of \(20\times 20\) cells is divided into 15 rectangular sections whose width is adapted to the rover (40 cm), allowing the rover to pass between two obstacles when possible. This division is done once at the beginning of the program and stored in global variables to reduce the processing time. An example of a section is shown in Fig. 6.
Once the sections around the rover are created, the algorithm verifies the presence of obstacles in each of them. Every section without obstacles is memorized and represented as a potential goal for the rover, with a middle point located at its border. The chosen direction is the one whose middle point is closest to the target to be reached, i.e., the point given by the position control algorithm. This direction is sent to the rover by the local drive control algorithm. If the target is inside the rover's map, the algorithm checks whether it can be reached directly; if so, it defines it as the final direction. An example is shown in Fig. 7. The black rectangular blocks represent obstacles around the rover, and the target coordinates are (21, 8). The selection of the target depends on the average motion vector of the group (direction and velocity).
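The free-section selection step can be sketched as follows; the data layout (sectors as lists of grid cells with a precomputed border midpoint) and the function name are our assumptions:

```python
import math

def choose_direction(grid, sectors, target):
    """Pick the border midpoint of the obstacle-free section whose midpoint
    is closest to the target, or None if every section is blocked.

    grid    : occupancy grid as a list of rows of 0/1 cells
    sectors : list of (cells, midpoint) pairs, where cells is the list of
              (row, col) cells the section covers and midpoint is the
              section's border point in grid coordinates
    target  : (row, col) target given by the positioning algorithm
    """
    free = [mid for cells, mid in sectors
            if all(grid[r][c] == 0 for r, c in cells)]
    if not free:
        return None
    return min(free, key=lambda m: math.hypot(m[0] - target[0],
                                              m[1] - target[1]))
```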
Simulation of a section's algorithm: black rectangular blocks represent obstacles
Robot drive control
Once the positions along the trajectories are found, the server sends the results to the rovers available for the mission. A local drive control algorithm receives the current position of the rover and the setpoints: the desired position and the desired motion vector until the rover reaches the target. The rover does not have a linear drive control but needs to make decisions in real time. To overcome this issue, a fuzzy logic block architecture was developed to drive it [69, 70].
We chose three inputs that provide information about how to reach the target: (1) the distance between the rover's position and the desired position, (2) the angular difference between the rover's current orientation and the bearing from its current position to the desired position (the target to reach), and (3) the distances between the rover and obstacles measured by the lidar. Because each rover has two servomotors, the system has two outputs: a speed command and a rotation command.
Table 1 Fuzzy logic variables used for the rover motion
Each variable is presented in Table 1. We chose triangular membership functions to describe each of them: (1) distance between the rover and the target in centimeters (zD = zero Distance, nD = near Distance, mD = middle Distance, fD = far Distance), (2) angle in degrees (zA = zero Angle, nA = near Angle, mA = middle Angle, fA = far Angle), (3) distance between rover and obstacles in centimeters (nO = near Obstacle, mO = middle Obstacle, fO = far Obstacle), (4) speed command (nV = no Velocity, sV = slow Velocity, V = Velocity, fV = fast Velocity), and (5) angle command (nO = no Orientation, sO = slow Orientation, O = Orientation, fO = fast Orientation).
With these variables, we can write some rules as follows:
$$\begin{aligned}&R_{i}:\; {\text {if}}\; x_{1}\; {\text {is}}\; X^{i}_{1}\; \ldots \; {\text {and}}\; x_{n}\; {\text {is}}\; X^{i}_{n} \nonumber \\&\quad {\text {then}}\; y_{1}\; {\text {is}}\; Y^{i}_{1}\; \ldots \; {\text {and}}\; y_{m}\; {\text {is}}\; Y^{i}_{m}. \end{aligned}$$
There are 768 possible rules for this system. To reduce processing time, we selected 28 of them, which are sufficient to drive the rover in all rover states. The selected rules and the results are presented in Table 2. At the defuzzification step, the output variables are sent to the drive control. The direction to turn is determined beforehand from the position of the rover and the position of the target.
Table 2 Fuzzy logic rules
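As a minimal illustration of the fuzzification, rule firing, and weighted-average defuzzification steps, consider the speed command driven by the distance input alone; the membership ranges and the one-rule-per-set mapping below are illustrative assumptions, not the paper's 28 rules:

```python
def tri(x, a, b, c):
    """Triangular membership function: zero outside [a, c], peak 1 at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

# Hypothetical membership ranges in centimeters (not the paper's values).
distance_sets = {"zD": (0, 0, 20), "nD": (10, 40, 80),
                 "mD": (60, 120, 200), "fD": (150, 300, 300)}
# Crisp output level assumed for each speed command.
speed_levels = {"nV": 0.0, "sV": 0.3, "V": 0.6, "fV": 1.0}

def speed_command(distance):
    """Fire one illustrative rule per distance set (e.g. 'if distance is mD
    then speed is V'), then defuzzify by the weighted average of the crisp
    output levels."""
    rules = {"zD": "nV", "nD": "sV", "mD": "V", "fD": "fV"}
    num = den = 0.0
    for dset, out in rules.items():
        w = tri(distance, *distance_sets[dset])  # firing strength
        num += w * speed_levels[out]
        den += w
    return num / den if den > 0 else 0.0
```

The real controller combines all three inputs (distance, angle, obstacle distance) in each rule's antecedent, as in the rule template above, and produces both a speed and a rotation command.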
The integration of these three algorithms enables each rover to move around the convex hull while avoiding collisions with its environment. However, if one rover has an issue, the swarm should be aware of the situation so that it can adapt. For this reason, we chose to analyze all possible states of each rover; these states are presented in the next section.
CNN for states differentiation
To protect migrants in a complex environment, our system should know the state and activity of each rover. This is essential to ensure an appropriate distribution of the rovers around the group. Indeed, if one of the rovers has an issue and cannot reach its desired target position, the swarm should be able to adapt itself to protect the sector left uncovered by the unavailable rover. To detect some of these situations, we used a CNN together with a library of states (events) defined in the following section.
(1) States definitions for one rover: To detect the events that will generate a rover state, we created a library of different events and states that the rover will encounter:
State 0: The rover is not yet connected to the server (the leader); it is not part of the swarm.
State 1: The rover is fully operational and part of the swarm, receiving and executing the orders that are sent to it.
State 2: This state indicates that the rover experienced an unexpected network disconnection from the server. As we no longer have control over it, it is temporarily removed from the swarm pending a potential re-connection. The swarm adapts itself to this situation by changing the distribution of the rovers around the outside convex hull.
State 3: The rover experienced a serious fall, detected by the CNN from the IMU data. If the server then detects that the rover is moving toward its target position, its state returns to 1. Otherwise, the rover is removed from the swarm pending the intervention of an operator. The swarm adapts itself to this situation as described for state 2.
State 4: In some environments, such as deserts or temperate forests, the wheels of the rover risk skidding in sand, snow, ice or mud; this state refers to that situation. Skidding is detected both by the CNN and by the server, which sees that the rover is not moving despite its commands. The rover is then removed from the swarm pending the intervention of an operator.
State 5: It is possible that the lidar missed an obstacle in the rover's current trajectory; this state indicates a collision, detected by the CNN from the data given by the IMU. The server disconnects the rover from the swarm while it tries to get around the obstacle. When the rover determines that it can reach the outside convex hull again, it is reinstated into the swarm. The undetected obstacle is registered so that the other rovers avoid it.
State 6: This state indicates that the rover is trapped in tree branches or in obstacles that it cannot overcome or escape from. The CNN detects this state, and the rover is removed from the swarm pending the intervention of an operator.
State 7: Considering that the rover may be used for several hours, it should be recharged or refueled frequently, even when solar panels are used. This state enables the system to schedule maintenance using a predictive algorithm.
State 8: The rover's aim is to protect migrants in risky areas. This state indicates that the rover has suffered an explosion or irreparable damage. The rover is removed from the swarm and should be replaced or destroyed.
State 9: While traveling, the rover may suffer damage that prevents it from pursuing its mission. This state indicates to the operator that the rover needs to be repaired. It is temporarily removed from the swarm.
This library of events allows us to follow the state of each rover and the evolution of the swarm in real time. The operator can act if one of the rovers needs an intervention, and the swarm is kept updated about any rover that cannot be used; it can remove any rover in real time to adapt to the situation. In the next part, we present the CNN that detects these events. Because we evaluated the swarm in a laboratory environment, we could not test all the situations above. We selected four states: (1) the rover is moving normally on small stones; (2) it is falling; (3) it has experienced a collision and is trying to escape; and (4) it has skidded in sand.
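On the server, this state library can be kept, for instance, as a simple enumeration; the enumeration members follow the list above, while the ACTIVE set and the active_rovers helper are hypothetical names of ours:

```python
from enum import IntEnum

class RoverState(IntEnum):
    """State library of a rover, with values matching states 0-9 above."""
    NOT_CONNECTED = 0
    OPERATIONAL = 1
    DISCONNECTED = 2
    FALLEN = 3
    SKIDDING = 4
    COLLISION = 5
    TRAPPED = 6
    LOW_ENERGY = 7
    DESTROYED = 8
    DAMAGED = 9

# Only a fully operational rover counts as an active member of the swarm;
# all other states remove it, temporarily or permanently.
ACTIVE = {RoverState.OPERATIONAL}

def active_rovers(states):
    """Indices of the rovers the swarm can still rely on, so the positioning
    algorithm can redistribute targets among the remaining rovers."""
    return [i for i, s in enumerate(states) if s in ACTIVE]
```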
(2) Suggested CNN architecture: Many studies have addressed the detection of these different states. Anti-skid systems have been developed over the years, such as the one presented in [71]: the gyroscope and magnetometer provide data, which are processed by a Kalman filter to recover the correct orientation after a skid, and kinematic equations are then applied to the result to control the trajectory of the robot. Fall detection has also been implemented; for instance, the Nao robot uses a deep learning approach to predict a fall [49]. It uses a multilayer perceptron whose input is a vector concatenating 100 values of the x- and y-axes of the gyroscope, and whose output is one of two states: the robot is stable or unstable. For collision avoidance, sensors detect obstacles to be avoided via a planned trajectory; a collision occurs when the sensors fail to detect objects (cross-talk, absorption, refraction, reflection, etc.). Bumpers are adapted to indoor motion, as suggested in [72], but do not work in mud, sand, and gravel; we prefer IMU information, which does not depend on the composition of the environment. Moreover, since a dynamic model of the rover is not available, we prefer to use an IMU sensor. Other research works specialize in the detection of a single event, whereas our work is concerned with the classification and differentiation of each state.
As shown in the Related Work section, many robotic projects rely on CNNs to process the data obtained from sensors. One advantage of a CNN is that it learns directly the features needed to differentiate events. Moreover, it can handle more data with fewer weights than a multilayer perceptron, so it uses less memory and is easier to implement in robotic systems.
Our network relies on data from the IMU, sampled at 20 Hz. A sliding window of 13 samples over 2 s on each axis of the accelerometer and the gyroscope is memorized in order to extract features; the window slides with an overlap of 12 samples. More than 2400 acquisitions were performed to create the dataset used to train the network. From these raw data describing the four situations selected above, we computed candidate features and kept the nine that differentiate the states. These features are shown in Table 3 (where \(A_{x,i}\), \(A_{y,i}\), \(A_{z,i}\) are measurements along the accelerometer axes, \(G_{x,i}\), \(G_{y,i}\), \(G_{z,i}\) are measurements of the gyroscope, and i indexes the samples of the window). From these measurements, we created an image of \(9\times 9\times 1\) pixels as input to the CNN, based on the method used in [52] and [53]: we take the value of each feature and duplicate it along an entire line, one line per feature. This spatial representation is arbitrary; we wanted a small image processed by a small number of filters (two convolutional filters in our case). Other techniques exist, such as [50], where the input of the neural network is directly the raw IMU data over a short window of time and the CNN must find the features and classify the activities. In our case, we found explicit features that differentiate each state of the robot, so to obtain better results we build the image directly from them.
Table 3 Features from the data of IMU
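The image construction described above (each of the nine features duplicated along one full row) can be sketched as follows; the function name is ours:

```python
import numpy as np

def features_to_image(features):
    """Build the 9x9x1 CNN input image: feature k fills the entire row k,
    following the row-duplication construction described in the text."""
    features = np.asarray(features, dtype=np.float32)
    assert features.shape == (9,), "expected exactly nine features"
    image = np.repeat(features[:, np.newaxis], 9, axis=1)  # (9, 9)
    return image[:, :, np.newaxis]                         # (9, 9, 1)
```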
CNN for state detection
The network used in our research work to detect the four situations is composed of two convolutional layers, as shown in Fig. 8. First, the convolutional layers are initialized with random weights. The first convolutional layer has a kernel of dimension \(4\times 4\times 10\), and the second layer's kernel has dimension \(2\times 2\times 20\). The input is a picture of \(9\times 9\times 1\) pixels built from the features extracted from the IMU data. The output is given by a multilayer perceptron that classifies the picture from the features; it has 100 perceptrons and four outputs, one per selected situation. The result is given by the softmax function with a threshold of 0.6. At each iteration, the output is compared to the desired result, and the weights are adjusted by back-propagation. The learning rate decreases linearly over time. We chose this configuration because of the small size of the picture and the memory constraints of our system.
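A Keras sketch of this architecture is given below; the padding and activation choices are assumptions not stated in the text, and the function name is ours:

```python
import tensorflow as tf

def build_state_cnn():
    """Sketch of the described network: 10 filters of 4x4, then 20 filters
    of 2x2, a 100-unit dense layer, and a 4-way softmax output, one unit
    per rover state. Padding and ReLU activations are our assumptions."""
    return tf.keras.Sequential([
        tf.keras.layers.Input(shape=(9, 9, 1)),
        tf.keras.layers.Conv2D(10, (4, 4), activation="relu", padding="same"),
        tf.keras.layers.Conv2D(20, (2, 2), activation="relu", padding="same"),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(100, activation="relu"),
        tf.keras.layers.Dense(4, activation="softmax"),
    ])
```

At inference time, the softmax output is compared to the 0.6 threshold: a state is reported only when its class probability exceeds it.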
Movements of the rovers and the migrants during the test
The dataset is composed of 2873 pictures. We used 86% of them for training and 14% to test our results; the split was determined randomly. Of the 2468 pictures used for training, 2033 were used to update the weights, and the other 435 were used to validate the result during training. Each picture is annotated in an XML file indicating the situation it represents. The remaining 405 pictures were held out to test the network; the results are shown in Sect. 5.2. All the CNN algorithms were developed with TensorFlow (Python) and run on a computer equipped with a 2.66 GHz Intel(R) Xeon(R) W3520 CPU, an NVIDIA GeForce GTX 1080 and 10 GB of RAM. Training took two hours for 500,000 steps.
Since the evaluation is executed indoors, only some of the situations defined previously are used. We present one of them in the next part, along with the results given by the CNN for all four detected situations. Two reduced-scale rovers were used during these tests; a reduced-scale rover is enough to achieve a clear demonstration of the suggested architecture and algorithms for this critical application.
Swarm of rover reaction
In this part, we present the results obtained during the fall of one of two rovers tasked with protecting one migrant. The remaining operational rover adapts itself and changes its behavior to carry on the mission.
To test our system, we used an indoor area of \(3\times 4\) m surrounded by eight cameras, which give the positions of the simulated migrants and the rovers. Due to the room configuration, we set the minimal distance between the rovers and the simulated migrants to 75 cm. This gives the rovers enough space to move without disturbing the simulated migrants.
The motion of the simulated migrants and the rovers during this test is described as follows:
The rovers surround the migrant. Once they are around him, the migrant moves.
The rovers follow him, and then the leader in front of the group (Rover 1 in Fig. 9) falls into a hole. After the rover falls, the migrant turns to his left, and the operational rover (Rover 2 in Fig. 9) adapts itself to the situation, taking the place of Rover 1 in front of the migrant.
At the end, the migrant retraces his steps with the rover. This behavior is presented in Fig. 9.
Interaction between Rover 1 and the migrant
The IMU data of each rover are sent to the server and used to detect a fall of the rover along its trajectory. Figure 10 refers to the rover in front of the migrant. It is composed of two parts: the IMU data (accelerometer and gyroscope), and the commands sent to the motors (linear speed and rotation). Three events are presented as follows:
The first event is the robot moving to reach its target position. As shown in Fig. 9, the blue path corresponds to this robot. It detects its target position and goes in front of the migrant.
The second event is the fall of the robot into a hole as detected by the CNN, which is very different from the first event.
The fall is detected by the CNN, which leads to the third event: the robot stops its motors because it is jammed. This reaction is observable in the commands sent to the motors: just after the fall, they are set to zero.
Figure 11 refers to the second robot, initially behind the migrant. It is also composed of two parts: the IMU data (accelerometer and gyroscope), and the distance to the target position together with the commands sent to the motors (linear speed and rotation). Three events are also presented:
The first event is the robot moving to reach its target position. The red path on Fig. 9 corresponds to this robot. It detects its target position and goes behind the migrant.
The second event is the fall of the robot in front of the migrant. When the fall is detected by the CNN, the fallen robot is removed from the swarm because it cannot continue its mission. The robot behind the migrant becomes the only robot in the "swarm," and its targeted position changes. It should now be in front of the migrant. Therefore, the command sent to the motor changes quickly.
The third event is that the second robot reaches its new position in front of the migrant. At sample 72, the distance to the target has increased because of the change in target. Therefore, the command of the linear speed increases to adapt the rover to this new destination.
Detection of the fall and its effect on the Rover 2
Performance of state estimation with CNN
To measure the performance of our CNN in classifying these situations, we compute two indicators: (1) the precision (i.e., the fraction of relevant situations among those found) and (2) the recall (i.e., the fraction of relevant situations that were found among the total amount). Both measures are commonly used to evaluate the performance of classification systems [73]. Their values are computed using Eqs. (2) and (3):
$$\begin{aligned}&{\text {Precision}}= \frac{{\text {TP}}}{{\text {TP}}+{\text {FP}}} \end{aligned}$$
$$\begin{aligned}&{\text {Recall}}= \frac{{\text {TP}}}{{\text {TP}}+{\text {FN}}} \end{aligned}$$
where TP is the number of true positives, FN the number of false negatives, and FP the number of false positives.
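These two indicators can be computed per class as follows; the function name is ours:

```python
def precision_recall(true_labels, predicted_labels, positive):
    """Per-class precision and recall, following Eqs. (2) and (3):
    precision = TP / (TP + FP), recall = TP / (TP + FN)."""
    pairs = list(zip(true_labels, predicted_labels))
    tp = sum(1 for t, p in pairs if p == positive and t == positive)
    fp = sum(1 for t, p in pairs if p == positive and t != positive)
    fn = sum(1 for t, p in pairs if p != positive and t == positive)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall
```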
To measure these two indicators, we used a dataset of 405 pictures built from the IMU data: 223 for a rover moving on stones, 87 for a collision between a rover and an obstacle, 52 for the fall of a rover, and 43 for a rover skidding on sand. These data were not used in the training of the CNN, so as not to bias the results. With the CNN described previously, we obtained the results shown in Table 4.
Table 4 Detection results with a testing dataset and a threshold of 0.6
The state of the rover is classified with precision and recall rates between 91 and 100%. Although it does not recognize all states with 100% accuracy, our CNN architecture provides information that benefits the swarm in several ways.
Compared to [49], which detected the fall of a robot with a multilayer perceptron, our results seem better, although our algorithm does not only detect falls and the kind of robot is different (humanoid vs. rover). Over 52 falls, we obtained a recall and a precision of 98%. In [49], at the end of training, the authors obtained a precision of 89.84% and a recall of 98.37%. Our method seems to estimate true falls better, probably because we used nine features (five from the accelerometer and four from the gyroscope) to detect a fall, while other research projects used only two inputs from the gyroscope.
The processing time is also very short, so the swarm reacts quickly to an issue. Furthermore, we can add as many situations to detect as we want, as long as we have data to train the CNN; the swarm will then be able to adapt itself to many situations. A real-time system monitoring rover state in an outdoor environment provides a mechanism to improve rover behaviors and increase the protection of migrants by the swarm.
The follow-up of a group of migrants using rovers (mobile robots) is challenging with regard to the autonomy of the swarm and the interaction between migrants and rovers. This project mainly concerns the integration of trajectory planning using convex hulls to provide a safety area around the migrants, as well as a strategy to manage the swarm. Some of these methods are used in very specific domains but have never been implemented in a mission for the protection of migrants.
The rovers' states, identified using a CNN, allow us to track possible issues and to improve the rovers' behavior. This enables the swarm to adapt itself to the environment as it evolves. In this research work, we were able to validate and demonstrate the effectiveness of the approach, both at the level of the logic of the system and of its response. Through this evaluation, some improvements can be suggested, such as localization of migrants using a fixed-wing UAV. Geo-localization of migrants is a very difficult task since it mainly depends on differentiating migrants from other objects or obstacles in the environment.
Furthermore, better models for positioning robots around the group could be studied, implemented, tested and evaluated. Instead of arranging them in a uniform way, we could position them, for example, to cover a zone more dangerous than the others. Improvements in the planning algorithm could also be made; one could seek to validate the path to its final target by verifying that trajectory oscillations do not occur.
Future work
Because this application is critical, and an adequate performance evaluation would require human participants with research ethics approval from a Research Ethics Board (REB), this paper presents only the overall design, including a process that can be used for such an application. Experiments with human participants will, of course, be needed to demonstrate a commercial version of this research work. Moreover, many other experiments could be performed: (1) drone and migrant detection and localization, (2) drone tactical autonomous decision for target following, (3) true interaction with participants and (4) gas filling strategy of the swarm by a third-party drone. Each of these projects will be presented in other research works.
Finally, it would be interesting to identify some motion patterns of the group in order to anticipate its trajectory and help the swarm optimize its positions for improving protection. Some issues related to the state of a rover should be solved by the swarm itself through cooperation between the rovers. For example, when one of them has a difficulty, we can imagine other rovers helping it. This could avoid the intervention of an operator.
These developments will continue to refine the SROPRAS system of protecting a group of migrants with a swarm of robots in order to avoid operators' intervention in dangerous situations.
Gisin L, Doran HD, Gruber JM (2016) RoboBAN: a wireless body area network for autonomous robots. In: ICINCO 2016—proceedings of the 13th international conference on informatics in control, automation and robotics, vol 2, pp 49–60
Salayma M, Al-Dubai A, Romdhani I, Nasser Y (2017) Wireless body area network (WBAN): a survey on reliability, fault tolerance, and technologies coexistence. In: ACM computing surveys, 50(1), Art. no. 3
Yi WJ, Saniie J (2013) Smart mobile system for body sensor network. In: IEEE international conference on electro information technology
Paschalidis IC, Dai W, Guo D (2014) Formation detection with wireless sensor networks. ACM Trans Sens Netw 10(4), Art. no. 55
Kim MS, Kim SH, Kang SJ (2017) Middleware design for swarm-driving robots accompanying humans. Sens Switz 17(2), Art. no. 392
Garzón M, Valente J, Roldán JJ, Cancar L, Barrientos A, Del Cerro J (2016) A multirobot system for distributed area coverage and signal searching in large outdoor scenarios*. J Field Robot 33(8):1087–1106
Kamegawa T, Sato N, Hatayama M, Uo Y, Matsuno F (2011) Design and implementation of grouped rescue robot system using self-deploy networks. J Field Robot 28(6):977–988
Mouradian C, Sahoo J, Glitho RH, Morrow MJ, Polakos PA (2017) A coalition formation algorithm for multi-robot task allocation in large-scale natural disasters. In: 2017 13th International wireless communications and mobile computing conference (IWCMC 2017), pp 1909–1914
Amanatiadis A, Bampis L, Karakasis EG, Gasteratos A, Sirakoulis G (2018) Real-time surveillance detection system for medium-altitude long-endurance unmanned aerial vehicles. Concurr Comput Pract Exp Concurr Comput 30(7):e4145
Khaleghi AM et al (2013) A DDDAMS-based planning and control framework for surveillance and crowd control via UAVs and UGVs. Expert Syst Appl 40(18):7168–7183
Sara M, Jian L, Son Y-J (2015) Crowd detection and localization using a team of cooperative UAV/UGVs. In: Proceedings of the 2015 industrial and systems engineering research conference
Stival F, Michieletto S, De Agnoi A, Pagello E (2018) Toward a better robotic hand prosthesis control: using EMG and IMU features for a subject independent multi joint regression model. In: Proceedings of the IEEE RAS and EMBS international conference on biomedical robotics and biomechatronics, pp 185–192
Ishac K, Suzuki K (2017) Gesture based robotic arm control for meal time care using a wearable sensory jacket. In: IRIS 2016—2016 IEEE 4th international symposium on robotics and intelligent sensors: empowering robots with smart sensors, pp 122–127
Yi C, Ma J, Guo H, Han J, Gao H, Jiang F, Yang C (2018) Estimating three-dimensional body orientation based on an improved complementary filter for human motion tracking. Sensors 18:3765
Chan TK, Yu YK, Kam HC, Wong KH (2018) Robust hand gesture input using computer vision, inertial measurement unit (IMU) and flex sensors. In: 2018 IEEE international conference on mechatronics, robotics and automation (ICMRA 2018), pp 95–99
Ding S, Ouyang X, Liu T, Li Z, Yang H (2018) Gait event detection of a lower extremity exoskeleton robot by an intelligent IMU. IEEE Sens J 18(23):9728–9735 Art. no. 8469017
Caramia C et al (2018) IMU-based classification of Parkinson's disease from gait: a sensitivity analysis on sensor location and feature selection. IEEE J Biomed Health Inform 22(6):1765–1774 Art. no. 8434292
Alatise MB, Hancke GP (2017) Pose estimation of a mobile robot based on fusion of IMU data and vision data using an extended Kalman filter. Sensors 17:2164
Alessandro F, Niko G, Simone N, Pallottino L (2016) Indoor real-time localisation for multiple autonomous vehicles fusing vision, odometry and IMU data. Model Simul Auton Syst 9991:288–297
Li J, Bi Y, Li K, Wang K, Lin F, Chen BM (2018) Accurate 3D localization for MAV swarms by UWB and IMU fusion. In: IEEE international conference on control and automation (ICCA, 2018), pp 100–105
Salih SQ, Alsewari ARA, Al-Khateeb B, Zolkipli MF (2019) Novel multi-swarm approach for balancing exploration and exploitation in particle swarm optimization. Adv Intell Syst Comput 843:196–206
Sánchez-García J, Reina DG, Toral SL (2018) A distributed PSO-based exploration algorithm for a UAV network assisting a disaster scenario. Fut Gener Comput Syst 90:129–148
Garcia-Aunon Pablo, Cruz AB (2018) Comparison of heuristic algorithms in discrete search and surveillance tasks using aerial swarms. Appl Sci 8(5):711
de Moraes RS, Freitas EPd (2017) Distributed control for groups of unmanned aerial vehicles performing surveillance missions and providing relay communication network services. J Intell Robot Syst Theory Appl 92(3–4):645–656
Din A, Jabeen M, Zia K, Khalid A, Saini DK (2018) Behavior-based swarm robotic search and rescue using fuzzy controller. Comput Electr Eng 70:53–65
Ranaweera DM, Hemapala KTM Udayanga, Buddhika AG, Jayasekara P (2018) A shortest path planning algorithm for PSO base firefighting robots. In: Proceedings of the 4th IEEE international conference on advances in electrical and electronics, information, communication and bio-informatics (AEEICB 2018)
Kapellmann-Zafra G, Chen J, Groß R (2016) Using Google glass in human–robot swarm interaction. In: Lecture notes in computer science (including subseries lecture notes in artificial intelligence and lecture notes in bioinformatics), vol 9716, pp 196–201
Abich J, Barber DJ (2017) The impact of human robot multimodal communication on mental workload, usability preference, and expectations of robot behavior. J Multimodal User Interfaces 11(2):211–225
Hacohen S, Shoval S, Shvalb N (2017) Multi agents' multi targets mission under uncertainty using probability navigation function. In: IEEE international conference on control and automation (ICCA), pp 845–850
Zhen W, Kang X, Zhang X, Dai J (2016) Gait planning of a novel metamorphic quadruped robot. Jixie Gongcheng Xuebao/J Mech Eng 52(11):26–33
Li L, Wang D, Wang P, Huang J, Zhu D (2015) Soil surface roughness measurement based on color operation and chaotic particle swarm filtering. Nongye Jixie Xuebao/Trans Chin Soc Agric Mach 46(3):158–165
Alonso-Mora J et al (2015) Gesture based human—multi-robot swarm interaction and its application to an interactive display. In: Proceedings of the IEEE international conference on robotics and automation
Walker P et al (2014) Human control of robot swarms with dynamic leaders. In: IEEE international conference on intelligent robots and systems
Nagi J et al (2014) Human-swarm interaction using spatial gestures. In: IEEE international conference on intelligent robots and systems
Zhang L, Vaughan R (2016) Optimal robot selection by gaze direction in multi-human multi-robot interaction. In: IEEE international conference on intelligent robots and systems
Bevacqua G et al (2015) Mixed-initiative planning and execution for multiple drones in search and rescue missions. In: Proceedings international conference on automated planning and scheduling (ICAPS)
Mtshali AE Mbali (2010) Robotic architectures. Def Sci J 60(1):15–22
West Andrew, Arvin Farshad, Martin Horatio, Watson Simon, Lennox B (2018) ROS integration for miniature mobile robots. Towards Auton Robot Syst 10965:345–356
Straszheim T, Gerkey B, Cousins S (2011) The ROS build system. IEEE Robot Autom Mag Short Surv 18(2), Art. no. 5876218
Conte G, Scaradozzi D, Mannocchi D, Raspa P, Panebianco L, Screpanti L (2018) Development and experimental tests of a ROS multi-agent structure for autonomous surface vehicles. J Intell Robot Syst Theory Appl 92(3–4):705–718
Veloso MVD, Filho JTC, Barreto GA (2017) SOM4R: a middleware for robotic applications based on the resource-oriented architecture. J Intell Robot Syst 87(3–4):487–506
Hönig W, Ayanian N (2017) Flying multiple UAVs using ROS. Stud Comput Intell 707:83–118
Otsuka A et al (2015) Algorithm for swarming and following behaviors of multiple mobile robots. In: IECON 2015—41st annual conference of the IEEE industrial electronics society
Wu ZS, Fu WP (2014) Review of path planning method for mobile robot. Adv Mater Res 1030–1032:1588–1591
Lv W, Kang Y, Qin J (2019) Indoor localization for skid-steering mobile robot by fusing encoder, gyroscope, and magnetometer. IEEE Trans Syst Man Cybern Syst 49(6):1241–1253
Zhang K, Niroui F, Ficocelli M, Nejat G (2018) Robot navigation of environments with unknown rough terrain using deep reinforcement learning. In: 2018 IEEE international symposium on safety, security, and rescue robotics (SSRR 2018)
Geng M, Li Y, Ding B, Wang AH (2018) Deep learning-based cooperative trail following for multi-robot system. In: Proceedings of the international joint conference on neural networks
Mišeikis J et al (2018) Robot localisation and 3D position estimation using a free-moving camera and cascaded convolutional neural networks. In: IEEE/ASME international conference on advanced intelligent mechatronics, AIM, pp 181–187
Hofmann M, Schwarz I, Urbann O, Ziegler F (2016) A fall prediction system for humanoid robots using a multi-layer perceptron. In: 10th International Cognitive Robotics Workshop (CogRob-2016), vol 3, pp 3–6
Mascret Q, Bielmann M, Fall CL, Bouyer LJ, Gosselin B (2018) Real-time human physical activity recognition with low latency prediction feedback using raw IMU data. In: Proceedings of the annual international conference of the IEEE engineering in medicine and biology society, EMBS, pp 239–242
Ma Y et al (2017) Hand gesture recognition with convolutional neural networks for the multimodal UAV control. In: 2017 Workshop on research, education and development of unmanned aerial systems, RED-UAS 2017, pp 198–203
Fakhrulddin AH, Fei X, Li H (2017) Convolutional neural networks (CNN) based human fall detection on body sensor networks (BSN) sensor data. In: 2017 4th International conference on systems and informatics (ICSAI, 2018)-Janua(Icsai), pp 1461–1465
Zhu R, Xiao Z, Li Y, Yang M, Tan Y, Zhou L, Wen H (2019) Efficient human activity recognition solving the confusing activities via deep ensemble learning. IEEE Access 7:75490–75499
Quinonez Yadira, Ramirez Mario, Lizarraga Carmen, Tostado Ivan, Bekios J (2015) Autonomous robot navigation based on pattern recognition techniques and artificial neural networks. Bioinspir Comput Artif Syst 9108:320–329
Saab W, Rone WS, Ben-Tzvi P (2018) Robotic tails: a state-of-the-art review. Robot Rev 36(9):1263–1277
Yie Y, Solihin MI, Kit AC (2017) Development of swarm robots for disaster mitigation using robotic simulator software. In: Ibrahim H, Iqbal S, Teoh SS, Mustaffa MT (eds) 9th International Conference on Robotic, Vision, Signal Processing and Power Applications. Springer, Singapore, pp 377–383
Kapellmann-Zafra G et al (2016) Human–robot swarm interaction with limited situational awareness. In: Lecture notes in computer science (including subseries lecture notes in artificial intelligence and lecture notes in bioinformatics), pp 125–136
Bamberger RJ et al (2004) Wireless network communications architecture for swarms of small UAVs. In: Collection of technical papers—AIAA 3rd "Unmanned-Unlimited" technical conference, workshop, and exhibit
Nowak S, Krüger T, Matthaei J, Bestmann U (2013) Martian swarm exploration and mapping using laser SLAM. In: International archives of the photogrammetry, remote sensing and spatial information sciences—ISPRS archives, vol 40, pp 299–303
N.A.a.S. Administration (2019) Nasaswarmathon. http://nasaswarmathon.com/. Accessed 15 Oct 2019
Minaeian S, Liu J, Son YJ (2015) Crowd detection and localization using a team of cooperative UAV/UGVs. In: IIE annual conference and expo, pp 595–604
Minaeian S (2017) Effective visual surveillance of human crowds using cooperative unmanned vehicles. In: Proceedings—Winter simulation conference, pp 3674–3675
Lee G, Chong NY, Christensen H (2010) Tracking multiple moving targets with swarms of mobile robots. Intell Serv Robot 3(1):61–72
Pannetier B, Moras J, Dezert J, Sella G (2014) Study of data fusion algorithms applied to unattended ground sensor network. In: Proceedings of the SPIE—the international society for optical engineering, 9091, art. no. 909103
Sugihara K (1994) Robust gift wrapping for the three-dimensional convex hull. J Comput Syst Sci 49(2):391–407
Kong X, Everett H, Toussaint G (1990) The Graham scan triangulates simple polygons. Pattern Recognit Lett 11(11):713–716
Chan TM, Chen EY (2010) Optimal in-place and cache-oblivious algorithms for 3-d convex hulls and 2-d segment intersection. Comput Geom Theory Appl 43(8):636–646
Siddaiyan S, Arokiasamy RW (2012) DVFH—VFH*: reliable obstacle avoidance for mobile robot navigation coupled with A*algorithm through fuzzy logic and knowledge based systems. Presented at the international conference on computer technology and science (ICCTS), Singapore
Benbouabdallah K, Qi-dan Z (2013) A fuzzy logic behavior architecture controller for a mobile robot path planning in multi-obstacles environment. Res J Appl Sci Eng Technol 5(14):3835–3842
Bayar V, Akar B, Yayan U, Yavuz HS, Yazici A (2014) Fuzzy logic based design of classical behaviors for mobile robots in ROS middleware. Presented at the 2014 IEEE international symposium on innovations in intelligent systems and applications (INISTA) proceedings
Peng ST, Sheu JJ (2004) An anti-skidding control approach for autonomous path tracking of a 4WS/4WD vehicle. In: 2004 5th Asian control conference, vol 1, pp 617–622
Hasan KM, Abdullah-Al-Nahid Reza KJ (2014) Path planning algorithm development for autonomous vacuum cleaner robots. In: 2014 International conference on informatics, electronics and vision (ICIEV), pp 1–6
Powers DMW (2011) Evaluation: from precision, recall and F-measure to ROC, informedness, markedness and correlation. J Mach Learn Technol 2(1):37–63
We would like to thank the Department of Applied Sciences, UQAC, Canada for allowing access to the rovers of the LAR.i Laboratory. Francis Deschênes and Danny Ouellet gave us precious advice related to the technical design and maintenance of the rovers. While performing this project, Maxime Vaidis received a scholarship from the REPARTI Strategic Network supported by the Fonds québécois de la recherche sur la nature et les technologies (FRQ-NT). This work is financially supported by the Natural Sciences and Engineering Research Council of Canada (NSERC), Discovery grant, under Grant Number RGPIN-2018-06329.
Université Laval, 2325 Rue de l'Université, Ville de Québec, QC, G1V 0A6, Canada
Maxime Vaidis
Université du Québec à Chicoutimi, 555 Boulevard de l'Université, Chicoutimi, QC, G7H 2B1, Canada
Martin J.-D. Otis
Correspondence to Martin J.-D. Otis.
Supplementary material 1 (wmv 63744 KB)
Vaidis, M., Otis, M.JD. Toward a robot swarm protecting a group of migrants. Intel Serv Robotics 13, 299–314 (2020). https://doi.org/10.1007/s11370-020-00315-w
Issue Date: April 2020
Generalized Thue-Morse words and palindromic richness
Štěpán Starosta
We prove that the generalized Thue-Morse word $\mathbf{t}_{b,m}$ defined for $b \geq 2$ and $m \geq 1$ as $\mathbf{t}_{b,m} = \left ( s_b(n) \mod m \right )_{n=0}^{+\infty}$, where $s_b(n)$ denotes the sum of digits in the base-$b$ representation of the integer $n$, has its language closed under all elements of a group $D_m$ isomorphic to the dihedral group of order $2m$ consisting of morphisms and antimorphisms. Considering antimorphisms $\Theta \in D_m$, we show that $\mathbf{t}_{b,m}$ is saturated by $\Theta$-palindromes up to the highest possible level. Using the generalisation of palindromic richness recently introduced by the author and E. Pelantová, we show that $\mathbf{t}_{b,m}$ is $D_m$-rich. We also calculate the factor complexity of $\mathbf{t}_{b,m}$.
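To make the definition concrete, here is a minimal sketch (not from the article) that generates a prefix of $\mathbf{t}_{b,m}$ directly from the digit-sum definition; for $b=m=2$ it reproduces the classical Thue-Morse word.

```python
def s_b(n, b):
    # Sum of the digits of n in base b.
    total = 0
    while n:
        total += n % b
        n //= b
    return total

def t(b, m, length):
    # First `length` letters of the generalized Thue-Morse word t_{b,m}.
    return [s_b(n, b) % m for n in range(length)]

print(t(2, 2, 8))  # classical Thue-Morse word: [0, 1, 1, 0, 1, 0, 0, 1]
```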
palindrome, palindromic richness, Thue-Morse, Theta-palindrome
68R15
J.-P. Allouche and J. Shallit: Sums of digits, overlaps, and palindromes. Discrete Math. Theoret. Comput. Sci. 4 (2000), 1-10. CrossRef
P. Baláži, Z. Masáková and E. Pelantová: Factor versus palindromic complexity of uniformly recurrent infinite words. Theoret. Comput. Sci. 380 (2007), 3, 266-275. CrossRef
L. Balková: Factor frequencies in generalized Thue-Morse words. Kybernetika 48 (2012), 3, 371-385. CrossRef
S. Brlek: Enumeration of factors in the Thue-Morse word. Discrete Appl. Math. 24 (1989), 1-3, 83-96. CrossRef
S. Brlek, S. Hamel, M. Nivat and C. Reutenauer: On the palindromic complexity of infinite words. Internat. J. Found. Comput. 15 (2004), 2, 293-306. CrossRef
M. Bucci, A. {De Luca}, A. Glen and L. Q. Zamboni: A connection between palindromic and factor complexity using return words. Adv. Appl. Math. 42 (2009), no. 1, 60-74. CrossRef
J. Cassaigne: Complexity and special factors. Bull. Belg. Math. Soc. Simon Stevin 4 (1997), 1, 67-88. CrossRef
A. de Luca and S. Varricchio: Some combinatorial properties of the Thue-Morse sequence and a problem in semigroups. Theoret. Comput. Sci. 63 (1989), 3, 333-348. CrossRef
X. Droubay, J. Justin and G. Pirillo: Episturmian words and some constructions of de Luca and Rauzy. Theoret. Comput. Sci. 255 (2001), 1-2, 539-553. CrossRef
A. Frid: Applying a uniform marked morphism to a word. Discrete Math. Theoret. Comput. Sci. 3 (1999), 125-140. CrossRef
A. Glen, J. Justin, S. Widmer and L. Q. Zamboni: Palindromic richness. European J. Combin. 30 (2009), 2, 510-531. CrossRef
E. Pelantová and Š. Starosta: Languages invariant under more symmetries: overlapping factors versus palindromic richness. To appear in Discrete Math., preprint available at http://arxiv.org/abs/1103.4051 (2011). CrossRef
E. Prouhet: Mémoire sur quelques relations entre les puissances des nombres. C. R. Acad. Sci. Paris 33 (1851), 225. CrossRef
Š. Starosta: On theta-palindromic richness. Theoret. Comput. Sci. 412 (2011), 12-14, 1111-1121. CrossRef
J. Tromp and J. Shallit: Subword complexity of a generalized Thue-Morse word. Inf. Process. Lett. (1995), 313-316. CrossRef | CommonCrawl |
\begin{document}
\title{Complexity of strong approximation on the sphere }
\begin{abstract}
By assuming some widely-believed arithmetic conjectures, we show that the task of deciding whether a number is representable as a sum of $d\geq2$ squares subject to given congruence conditions is NP-complete. On the other hand, we develop and implement a deterministic polynomial-time algorithm that represents a number as a sum of 4 squares with some restricted congruence conditions, by assuming a polynomial-time algorithm for factoring integers and Conjecture~\ref{cc}. As an application, we develop and implement a deterministic polynomial-time algorithm for navigating LPS Ramanujan graphs, under the same assumptions.
\end{abstract} \tableofcontents \section{Introduction}
\subsection{Motivation}
We begin by defining Ramanujan graphs. Fix $k\geq 3$, and let $G$ be a $k$-regular connected graph with the adjacency matrix $A_G.$ It follows that $k$ is an eigenvalue of $A_G$. Let $\lambda_G$ be the maximum of the absolute value of all the other eigenvalues of $A_G$. By the Alon-Boppana Theorem~\cite{Lubotzky1988}, $\lambda_G\geq 2\sqrt{k-1}+o(1),$ where $o(1)$ goes to zero as $|G|\to \infty$. We say that $G$ is a Ramanujan graph, if $\lambda_G \leq 2\sqrt{k-1}.$
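As a concrete illustration of this spectral definition (not part of the paper), one can verify the Ramanujan condition numerically for a small regular graph; the complete graph $K_4$ is 3-regular with eigenvalues $\{3,-1,-1,-1\}$, so $\lambda_{K_4}=1\leq 2\sqrt{2}$.

```python
import numpy as np

def is_ramanujan(adj):
    # adj: adjacency matrix of a connected k-regular graph.
    k = int(adj.sum(axis=1)[0])
    eig = np.sort(np.linalg.eigvalsh(adj.astype(float)))  # ascending order
    # The largest eigenvalue is k; lambda_G is the largest absolute value
    # among the remaining eigenvalues.
    lam = max(abs(eig[0]), abs(eig[-2]))
    return lam <= 2 * np.sqrt(k - 1) + 1e-9

# Complete graph K_4: 3-regular, spectrum {3, -1, -1, -1}.
K4 = np.ones((4, 4)) - np.eye(4)
print(is_ramanujan(K4))  # True
```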
The first construction of Ramanujan graphs is due to Lubotzky, Phillips and Sarnak~\cite{Lubotzky1988} and, independently, to Margulis \cite{Margulis}. We refer the reader to \cite[Chapter 3]{Peter}, where a complete history of the construction of Ramanujan graphs and other extremal properties of them are recorded. The LPS construction has the additional property of being strongly explicit. We say that the $k$-regular graph $G$
is strongly explicit if there is a polynomial-time algorithm that, on input $\langle v, i\rangle$ with $v\in G$ and $1 \leq i \leq k$, outputs the (index of the) $i^{th}$ neighbor of $v$. Note that the lengths of the algorithm's inputs and outputs are $O(\log |G|)$, and so it runs in time $\mathrm{poly}\log(|G|).$ This feature of the LPS Ramanujan graphs is very important in their application to the deterministic error reduction algorithm \cite{Ajtai}; see also \cite{Hoory} for other applications of Ramanujan graphs in Computer Science.
The main product of this work is a deterministic polynomial-time algorithm for navigating LPS Ramanujan graphs, by assuming a polynomial-time algorithm for factoring integers and an arithmetic conjecture, which we formulate next.
Let $Q(t_0,t_1):=\frac{N}{4q^2}-(t_0+\frac{a_0}{2q})^2-(t_1+\frac{a_1}{2q})^2$, where $q$ is a prime and $N$, $a_0$, and $a_1$ are integers with $N\equiv a_0^2+a_1^2 \mod 4q$ and $\gcd(N,4q)=1.$ Define \begin{equation}\label{Afr}
A_{Q,r}:=\{(t_0,t_1)\in\mathbb{Z}^2: Q(t_0,t_1)\in \mathbb{Z}, |(t_0,t_1)|<r, \text{ and } Q(t_0,t_1)\geq 0 \}, \end{equation} where $r>0$ is some positive real number.
\begin{conj}\label{cc}
Let $Q$ and $A_{Q,r}$ be as above. There exist constants $\gamma>0$ and $C_{\gamma}>0$, independent of $Q$ and $r$, such that if $ |A_{Q,r}|> C_{\gamma}(\log N)^{\gamma}$ for some $r>0$, then $Q(t_0,t_1)$ is a sum of two squares for some $(t_0,t_1)\in A_{Q,r}$. \end{conj}
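For intuition, the set \eqref{Afr} can be enumerated by brute force (the sketch below only illustrates the definitions; it is not the algorithm of the paper, and the function names are ours). Multiplying $Q$ through by $4q^2$ keeps everything in integer arithmetic: $4q^2\,Q(t_0,t_1)=N-(2qt_0+a_0)^2-(2qt_1+a_1)^2$, and $Q(t_0,t_1)$ is an integer exactly when $4q^2$ divides that quantity.

```python
from math import isqrt

def is_sum_of_two_squares(n):
    # n >= 0 is a sum of two squares iff a^2 + b^2 = n has an integer solution.
    return any(isqrt(n - a * a) ** 2 == n - a * a for a in range(isqrt(n) + 1))

def A_Qr(N, q, a0, a1, r):
    # Points (t0, t1) with |(t0, t1)| < r such that Q(t0, t1) is a
    # nonnegative integer, returned together with the value Q(t0, t1).
    pts = []
    R = int(r) + 1
    for t0 in range(-R, R + 1):
        for t1 in range(-R, R + 1):
            if t0 * t0 + t1 * t1 >= r * r:
                continue
            num = N - (2 * q * t0 + a0) ** 2 - (2 * q * t1 + a1) ** 2
            if num >= 0 and num % (4 * q * q) == 0:
                pts.append(((t0, t1), num // (4 * q * q)))
    return pts

# Example with q = 5, N = 101 = 1 mod 20, (a0, a1) = (1, 0):
print(A_Qr(101, 5, 1, 0, 2))  # contains ((0, 0), 1), and 1 = 1^2 + 0^2
```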
We denote the following assumptions by $(*)$:
\begin{enumerate}\label{assumptions}
\item There exists a polynomial-time algorithm for factoring integers,
\item Conjecture~\ref{cc} holds.
\end{enumerate}
The LPS construction gives the Cayley graphs of $PGL_2(\mathbb{Z}/q\mathbb{Z})$ or $PSL_2(\mathbb{Z}/q\mathbb{Z})$ with $p+1$ explicit generators for every prime $p$ and integer $q$. We call them the LPS Ramanujan graphs $X^{p,q}$, and the $p+1$ generators the LPS generators, in this paper. As in~\cite{Lubotzky1988}, for simplicity we assume for the rest of this paper that $q\equiv 1$ mod 4 is also a prime and is a quadratic residue mod $p$, where $p\equiv 1$ mod 4 is fixed. By these assumptions, $X^{p,q}$ is a Cayley graph over ${\rm PSL}_2(\mathbb{Z}/q\mathbb{Z})$; see Section~\ref{reduction} for the explicit construction of $X^{p,q}$. We say $v\in X^{p,q}$ is a diagonal vertex if it corresponds to a diagonal matrix in ${\rm PSL}_2(\mathbb{Z}/q\mathbb{Z}).$ By a path from $u_1$ to $u_2$, we mean a sequence of vertices $\langle v_0, \dots,v_h\rangle$, where $v_0=u_1$, $v_h=u_2$, and $v_i$ is connected to $v_{i+1}$ for every $0\leq i \leq h-1.$ \begin{thm}\label{mainthm} Assume~$(*)$. We develop and implement a deterministic polynomial-time algorithm in $\log(q)$ that, on input $\langle u_1, u_2 \rangle$, where $u_1, u_2\in X^{p,q}$ are diagonal vertices, outputs a shortest path $\langle v_0, \dots,v_h\rangle$ from $u_1$ to $u_2$. Moreover, for every $\alpha \geq0$ we have \begin{equation}\label{almostdiam} h\leq \max(\alpha, 3\log_p(q)+\gamma\log_{p}\log(q)+\log_{p}(C_{\gamma}) +\log_p(89)), \end{equation} for all but at most $89q^4/p^{(\alpha-1)}$ vertices. In particular, for large enough $q$ the distance of any diagonal vertex from the identity is bounded by
\begin{equation}\label{diamb}(4/3)\log_{p}|X^{p,q}|+\log_p(89).\end{equation}
\end{thm}
\begin{rem}
Our algorithm is the $q$-adic analogue of the Ross and Selinger algorithm~\cite{Selinger}, which navigates $PSU(2)$ with a variant of the LPS generators. In their work, the algorithm terminates in polynomial time under the first assumption in $(*)$ and some heuristic arithmetic assumptions that are implicit in their work. We formulated Conjecture~\ref{cc} and proved that the algorithm terminates in polynomial time under $(*)$. Moreover, we give quantitative bounds on the size of the output under $(*)$. In particular, \eqref{almostdiam} implies that the distance between all but a tiny fraction of pairs of diagonal vertices is less than $\log_{p}(|X^{p,q}|)+O(\log\log|X^{p,q}|).$ In order to prove our bounds, we introduce a correspondence between the diagonal vertices of $X^{p,q}$ and the index $q$ sublattices of $\mathbb{Z}^2$. This correspondence is novel to our work; see Section~\ref{quant}.
\end{rem} It is known that every pair of vertices of a Ramanujan graph (not necessarily an LPS Ramanujan graph) is connected by a path with a logarithmic number of edges. More precisely, for any $x,y \in G$, let $d(x,y)$ be the length of the shortest path between $x$ and $y.$ Define the diameter of $G$ by $ \text{diam}(G) := \sup_{x,y \in G} d(x,y). $
It is easy to check that $\text{diam}(G) \geq \log_{k-1}|G|$. If $G$ is a Ramanujan graph then
$\text{diam}(G) \leq 2\log_{k-1}|G| +O(1)$; see \cite{Lubotzky1988}. Moreover, we showed quantitatively in \cite[Theorem 1.5]{Naser} that all but a tiny fraction of the pairs of vertices in $G$ have a distance less than $\log_{k-1}(|G|)+O(\log \log |G|)$.
Bounding the diameter of the LPS Ramanujan graph $X^{p,q}$ is closely related to the diophantine properties of quadratic forms in four variables \cite{Naser2}. In particular, we showed that for every prime $p$ there exists an infinite sequence of integers $\{q_n\}$, such that
$\text{diam}(X^{p,q_n}) \geq (4/3) \log_{k-1}|X^{p,q_n}|$; see \cite[Theorem 1.2]{Naser}. This shows that our upper bound in \eqref{diamb} is optimal. In fact, by assuming our conjecture on the optimal strong approximation for quadratic forms in 4 variables~\cite[Conjecture 1.3]{Naser2}, the diameter of $X^{p,q}$ is asymptotically
$(4/3) \log_{k-1}|X^{p,q}|$ as $q\to \infty.$ In our joint work with Rivin~\cite{Rivin}, we gave numerical evidence for this asymptotic. Our navigation algorithm substantially improves the range of our previous numerical results and gives stronger evidence for \cite[Conjecture 1.3]{Naser2}.
\begin{rem}
Sarnak in his letter to Scott Aaronson and Andy Pollington~\cite{Sarnak31} defined the covering exponent of the LPS generators for navigating $PSU(2)$. He conjectured that the covering exponent is $4/3$; see \cite{Naser2} and \cite{Browning}. In particular, this exponent gives the optimal bound on the size of the output of the Ross and Selinger algorithm. $\lim_{q\to \infty}\frac{\text{diam}(X^{p,q})}{\log_{p}|X^{p,q}|}$ is the $q$-adic analogue of the covering exponent. In fact, \cite[Conjecture 1.3]{Naser2} generalizes Sarnak's conjecture, and it also implies $$ \lim_{q\to \infty} \frac{\text{diam}(X^{p,q})}{\log_{p}|X^{p,q}|}= 4/3.$$ \end{rem}
By assuming $(*)$, we develop a deterministic polynomial-time algorithm that returns a short path between every pair of vertices of $X^{p,q}$. This version of the algorithm is not restricted to the diagonal vertices, but it does not necessarily return the shortest possible path; see Remark~\ref{remNp}.
\begin{thm}\label{decomdiag}\normalfont Assume $(*)$. We develop a deterministic polynomial-time algorithm in $\log(q)$, that on inputs $\langle u_1, u_2 \rangle$, where $u_1, u_2\in X^{p,q}$, returns a short path $\langle v_0, \dots,v_h\rangle$ from $u_1$ to $u_2$. Moreover, we have \begin{equation}\label{deter}
h\leq \frac{16}{3}\log_{k-1}|X^{p,q}|+O(1). \end{equation} Furthermore, \begin{equation}\label{typical}
h\leq 3\log_{k-1}|X^{p,q}|+ O(\log \log (|X^{p,q}|)) \end{equation} for all but $O(\log(q)^{-c_1})$ fraction of pairs of vertices, where $c_1>0$, and the implicit constant in the $O$ notations and $c_1$ are independent of $q$.
\end{thm}
We briefly describe our proof in what follows. By \cite[Lemma 1]{Petit2008}, we express any element of ${\rm PSL}_2(\mathbb{Z}/q\mathbb{Z})$ as a product of a bounded number of LPS generators and four diagonal matrices. This reduces the navigation task to the diagonal case, and so Theorem~\ref{mainthm} implies \eqref{deter}.
For proving~\eqref{typical}, we improve on Lauter, Petit and Quisquater's diagonal decomposition algorithm.
By~\eqref{almostdiam}, the distance of a typical diagonal element from the identity is less than $\log_p|X^{p,q}|+O(\log_p\log(|X^{p,q}|))$. So, it suffices to show that all but a tiny fraction of vertices are the product of $O(\log_p\log(|X^{p,q}|))$ LPS generators and three typical diagonal matrices. It is elementary to see that at least $10\%$ of the vertices of $X^{p,q}$ are the product of a bounded number of LPS generators and three typical diagonal matrices. By the expansion property of the Ramanujan graphs, the distance of all but a tiny fraction of the vertices is less than $O(\log_p\log(|X^{p,q}|))$ from any subset containing more than $10\%$ of vertices. This implies \eqref{typical}. We give the full details of our argument in Section~\ref{diagsection}.
\begin{rem}\label{remNp} By Theorem~\ref{reductionthm} and Corollary~\ref{cooor}, it follows that finding the shortest path between a generic pair of vertices is essentially NP-complete; see Remark~\ref{Npprem} for further discussion. The idea of reducing the navigation task to the diagonal case is due to Petit, Lauter, and Quisquater \cite{Petit2008}, and it is crucial in both Ross and Selinger~\cite{Selinger} and this work. As a result of this diagonal decomposition, the size of the output path is 3 times the length of the shortest possible path for a typical pair of vertices. Improving the constant $3$ to $3-\epsilon$ needs new ideas, and this would have applications in quantum computing. \end{rem}
\subsection{Reduction to strong approximation on the sphere}\label{reduction}
\noindent In \cite[Section 3]{Lubotzky1988}, the authors implicitly reduced the task of finding the shortest possible path between a pair of vertices in $X^{p,q}$ to the task of representing a number as a sum of 4 squares subject to given congruence conditions, which is the strong approximation problem on the 3-sphere. We explain this reduction in this section.
We begin by explicitly describing $X^{p,q}$. Let $\mathbb{H}(\mathbb{Z})$ denote the integral Hamiltonian quaternions
$$\mathbb{H}(\mathbb{Z}):= \big\{x_0 +x_1i+x_2 j+x_3 k \,|\, x_t\in \mathbb{Z}, 0\leq t \leq 3, i^2=j^2=k^2=-1\big\},$$ where $ij=-ji=k$, etc. Let $\alpha:= x_0 +x_1i+x_2 j+x_3 k\in \mathbb{H}(\mathbb{Z})$. Denote $\bar{\alpha}:=x_0 -x_1i-x_2j-x_3 k$ and $\text{Norm}(\alpha):= \alpha \bar{\alpha}=x_0^2+x_1^2+x_2^2+x_3^2$. Let \begin{equation} \label{LPSgen}
S_p:=\{\alpha \in \mathbb{H}(\mathbb{Z}) : \text{Norm}(\alpha)=p, \text{$x_0 > 0$ is odd and $x_1,x_2,x_3$ are even numbers} \}.
\end{equation}
It follows that $S_p=\{\alpha_1, \bar{\alpha_1}, \dots, \alpha_{(p+1)/2}, \bar{\alpha}_{(p+1)/2} \}.$
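As an illustration (not part of the construction itself), $S_p$ can be enumerated by brute force; for $p\equiv 1$ mod 4 one finds exactly $p+1$ generators, matching the degree of the graph. The following sketch is illustrative only:

```python
from itertools import product

def lps_generators(p):
    """Brute-force enumeration of S_p: quaternions x0 + x1*i + x2*j + x3*k
    of norm p with x0 > 0 odd and x1, x2, x3 even."""
    bound = int(p ** 0.5) + 1
    sols = []
    for x0, x1, x2, x3 in product(range(-bound, bound + 1), repeat=4):
        if x0 * x0 + x1 * x1 + x2 * x2 + x3 * x3 != p:
            continue
        if x0 > 0 and x0 % 2 == 1 and x1 % 2 == x2 % 2 == x3 % 2 == 0:
            sols.append((x0, x1, x2, x3))
    return sols

print(len(lps_generators(5)), len(lps_generators(13)))  # 6 and 14, i.e. p + 1
```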
Let $$\Lambda^{\prime}_p:=\{ \beta \in \mathbb{H}(\mathbb{Z}): \text{ Norm}(\beta)=p^{h^{\prime}} \text{ for some integer } h^{\prime}\geq 0, \text { and } \beta\equiv 1 \text { mod } 2 \}.$$
$\Lambda^{\prime}_p$ is closed under multiplication. Let $\Lambda_p$ be the set of classes of $\Lambda^{\prime}_p$ with the relation $\beta_1\sim\beta_2$ whenever $\pm p^{t_1}\beta_1=p^{t_2}\beta_2$, where $t_1, t_2 \in \mathbb{Z}$. Then $\Lambda_p$ forms a group with $$[\beta_1][\beta_2]=[\beta_1 \beta_2] \text{ and } [\beta][\bar{\beta}]=[1].$$ By \cite[Corollary 3.2]{Lubotzky1988}, $\Lambda_p$ is free on $[\alpha_1], \dots, [\alpha_{(p+1)/2}]$. Hence, the Cayley graph of $\Lambda_p$ with respect to the LPS generator set $S_p$ is an infinite $p+1$-regular tree. LPS Ramanujan graphs are associated to the quotient of this infinite $p+1$-regular tree by appropriate arithmetic subgroups that we describe in what follows. Let $$\Lambda_p(q):=\{ [\beta] \in\Lambda_p: \beta=x_0 +x_1i+x_2 j+x_3 k \equiv x_0 \text { mod } 2q \}.$$ $\Lambda_p(q)$ is a normal subgroup of $\Lambda_p$. By \cite[Proposition 3.3]{Lubotzky1988}, since $q\equiv 1$ mod 4 is a prime number and $q$ is a quadratic residue mod $p$, $$\Lambda_p/\Lambda_p(q)= {\rm PSL}_2(\mathbb{Z}/q\mathbb{Z}) .$$ The above isomorphism is defined by sending $[\alpha]\in \Lambda_p$ to the following matrix $\tilde{\alpha}$ in ${\rm PSL}_2(\mathbb{Z}/q\mathbb{Z})$: \begin{equation}\label{cores} \tilde{\alpha}:=\frac{1}{\sqrt{\text{Norm}(\alpha)}} \begin{bmatrix} x_0+i x_1 & x_2+i x_3 \\ -x_2+ix_3 & x_0 -ix_1 \end{bmatrix}, \end{equation} where $i$ and $\sqrt{p}$ are representatives of square roots of $-1$ and $p$ mod $q$. This identifies the finite $p+1$-regular graph $\Lambda_p/\Lambda_p(q)$ with the Cayley graph of ${\rm PSL}_2(\mathbb{Z}/q\mathbb{Z})$ with respect to $\tilde{S}_p$ (the image of ${S}_p$ under the above map), which is the LPS Ramanujan graph $X^{p,q}$. For $v\in X^{p,q}$, we denote its associated class in $\Lambda_p/\Lambda_p(q)$ by $[v].$
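A small numerical sketch of the correspondence~\eqref{cores}, with hypothetical parameters $p=13$ and $q=17$ (chosen so that both $-1$ and $p$ are squares mod $q$); the brute-force square roots are for illustration only:

```python
def sqrt_mod(a, q):
    """Smallest square root of a mod prime q, by brute force (illustrative)."""
    for r in range(q):
        if r * r % q == a % q:
            return r
    raise ValueError("not a square mod q")

def to_psl2(alpha, p, q):
    """Reduce (1/sqrt(p)) [[x0+i*x1, x2+i*x3], [-x2+i*x3, x0-i*x1]] mod q,
    where i = sqrt(-1) mod q; requires -1 and p to be squares mod q."""
    x0, x1, x2, x3 = alpha
    i = sqrt_mod(q - 1, q)
    s = pow(sqrt_mod(p % q, q), -1, q)  # inverse of sqrt(p) mod q
    return [[s * (x0 + i * x1) % q, s * (x2 + i * x3) % q],
            [s * (-x2 + i * x3) % q, s * (x0 - i * x1) % q]]

# The quaternion 1 + 2i + 2j + 2k has norm 1 + 4 + 4 + 4 = 13 = p:
M = to_psl2((1, 2, 2, 2), 13, 17)
det = (M[0][0] * M[1][1] - M[0][1] * M[1][0]) % 17
print(det)  # 1, so the image lies in SL_2(Z/17Z)
```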
Finally, we give a theorem which reduces the navigation task on LPS Ramanujan graphs to a strong approximation problem for the 3-sphere. Since $X^{p,q}$ is a Cayley graph, it suffices to navigate from the identity vertex to any other vertex of $X^{p,q}$.
\begin{thm}[Due to Lubotzky, Phillips and Sarnak]\label{reductionthm}
Let $v\in X^{p,q}$, and $a_0+a_1 i+a_2j+a_3k\in [v]$ such that $\gcd(a_0,\dots,a_3,p)=1$. There is a bijection between non-backtracking paths $(v_0,\dots,v_h)$ of length $h$ from $v_0=id$ to $v_h=v$ in $X^{p,q}$, and the set of integral solutions to the following diophantine equation \begin{equation}\label{redd} \begin{split} &x_0^2+x_1^2+x_2^2+x_3^2=N, \\ &x_l\equiv \lambda a_l \text{ mod } 2q \text{ for $0\leq l \leq 3$ and some $\lambda \in \mathbb{Z}/2q\mathbb{Z}$, } \end{split} \end{equation}
where $N=p^h$. In particular, the distance between $id$ and $v$ in $X^{p,q}$ is the smallest exponent $h$ such that~\eqref{redd} has an integral solution. \end{thm}
By~\cite[Conjecture 1.3]{Naser2}, there exists an integral lift if $p^h\gg_{\epsilon} q^{4+\epsilon}$ and $4$ is the optimal exponent. This conjecture implies that $\text{diam}(X^{p,q})$ is asymptotically,
$$4/3 \log_{k-1}|X^{p,q}|.$$
\subsection{Complexity of strong approximation on the sphere}
In this section, we give our main results regarding the complexity of representing a number as a sum of $d$ squares subject to given congruence conditions. First, we give our result for $d=2.$
\begin{thm}\label{nptheorem} The problem of accepting $(N,q,a_0,a_1)$ such that the diophantine equation \begin{equation*} \begin{split} &x_0^2+x_1^2=N, \\ &x_0\equiv a_0 \text{ and } x_1\equiv a_1 \text{ mod } q, \end{split} \end{equation*} has an integral solution $(x_0,x_1)\in \mathbb{Z}^2$ is NP-complete, by assuming GRH and Cramer's conjecture, or unconditionally by a randomized reduction algorithm. \end{thm} The above theorem is inspired by a private communication with Sarnak. He showed us that the problem of representing a number as a sum of two squares subject to inequalities on the coordinates is NP-complete, under a randomized reduction algorithm. The details of this theorem appeared in his joint work with Parzanchevski \cite[Theorem 2.2]{Sarnak3}.
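For intuition, the decision problem in Theorem~\ref{nptheorem} is of course solvable by exhaustive search in time exponential in the input size $\log N$; a toy checker (illustrative sketch only):

```python
from math import isqrt

def two_square_congruence(N, q, a0, a1):
    """Decide by exhaustive search (exponential in log N) whether
    x0^2 + x1^2 = N with x0 ≡ a0 and x1 ≡ a1 mod q is solvable."""
    for x0 in range(-isqrt(N), isqrt(N) + 1):
        y2 = N - x0 * x0
        x1 = isqrt(y2)
        if x1 * x1 != y2:
            continue
        for s in (x1, -x1):
            if x0 % q == a0 % q and s % q == a1 % q:
                return True
    return False

print(two_square_congruence(25, 3, 0, 1))  # True: (-3)^2 + 4^2 = 25
print(two_square_congruence(3, 5, 0, 0))   # False: 3 is not a sum of two squares
```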
By induction on $d$, we generalize our theorem to every $d\geq 2$. \begin{cor}\label{cooor} Let $d\geq 2$. The problem of accepting $(N,q,a_0,\dots,a_{d-1})$ such that the diophantine equation \begin{equation}\label{deq} \begin{split} &x_0^2+\dots +x_{d-1}^2=N, \\ &x_0\equiv a_0, \dots, x_{d-1}\equiv a_{d-1} \text{ mod } q, \end{split} \end{equation} has an integral solution $(x_0,\dots,x_{d-1})\in \mathbb{Z}^d$ is NP-complete, by assuming GRH and Cramer's conjecture, or unconditionally by a randomized reduction algorithm. \end{cor}
On the other hand, by assuming $(*)$ and that two coordinates of the congruence conditions in \eqref{deq} are zero, we develop and implement a polynomial-time algorithm for this task for $d=4.$
\begin{thm} \label{diagliftt} Let $q$ be a prime, and $(a_0,a_1)\in \big(\mathbb{Z}/2q\mathbb{Z}\big)^2$, where $a_0$ is odd and $a_1$ is even. Suppose that $N=O(q^A),~ \gcd(N,4q)=1$, and $a_0^2 +a_1^2 \equiv N \text{ mod } 4q.$ By assuming $(*)$, we develop and implement a deterministic polynomial-time algorithm in $\log(q)$ that finds an integral solution $(x_0,\dots,x_3)\in \mathbb{Z}^4$ to \begin{equation}\label{equit} \begin{split} x_0^2+\dots+x_3^2=N, \\ x_i\equiv a_i \mod 2q, \end{split} \end{equation} where $a_2=a_3=0$. If there is no solution to \eqref{equit}, then it returns ``No solution''. \end{thm}
By Theorem~\ref{reductionthm}, the algorithm in Theorem~\ref{diagliftt} gives the navigation algorithm described in Theorem~\ref{mainthm}. \begin{rem}\label{Npprem} It is possible to generalize our polynomial-time algorithm to any $d\geq2$, by assuming a variant of $(*)$ and that two coordinates of the congruence conditions are zero. On the other hand, by assuming GRH and Cramer's conjecture, Corollary~\ref{cooor} implies that the optimal strong approximation problem for a generic point on the sphere is NP-complete.
Hence, by assuming these widely believed arithmetic assumptions, Corollary~\ref{cooor} essentially implies that finding the shortest possible path between a generic pair of vertices in LPS Ramanujan graphs is NP-complete. \end{rem}
\subsection{Quantitative bounds on the size of the output}\label{quant} In this section, we give a correspondence between the diagonal vertices of $X^{p,q}$ and the index $q$ sublattices of $\mathbb{Z}^2$. Next, we relate the graph distance between the diagonal vertices (which is a diophantine exponent, by Theorem~\ref{reductionthm}) to the length of the shortest vector of the corresponding sublattice.
Let $v= \begin{bmatrix}a+ib & 0\\0 & a-ib \end{bmatrix} \in X^{p,q} $ be a diagonal vertex, and let $L_v$ be the sublattice of $\mathbb{Z}^2$ defined by the following congruence equation: $$ax+by\equiv 0 \text{ mod } q.$$ Let $\{u_1,u_2\}$ be the Gauss reduced basis for $L_v$, where $u_1$ is a shortest vector in $L_v$. In the following theorem, we relate the graph distance of $v$ from the identity to the norm of $u_1$. \begin{thm}\label{correspond}
Assume Conjecture~\ref{cc}. Let $v$, $L_v$ and $\{u_1,u_2\}$ be as above. Suppose that $\frac{|u_2|}{|u_1|}\geq C_{\gamma} \log(2q)^{\gamma}.$ Then the distance of $v$ from the identity is less than
\begin{equation}\label{bigholes}\lceil 4\log_p(q)-2\log_{p}|u_1| +\log_p(89)\rceil.\end{equation} Otherwise, the distance of $v$ from the identity vertex is less than
\begin{equation}\label{almost}\lceil 3\log_p(q)+\gamma\log_{p}\log(q)+\log_{p}(C_{\gamma}) +\log_p(89)\rceil.\end{equation} \end{thm} \begin{rem} In Section~\ref{numericss}, we numerically check that the inequality~\eqref{bigholes} is sharp.
In particular, the diameter of LPS Ramanujan graphs is asymptotically the longest distance between the diagonal vertices. Moreover, the above theorem implies~\eqref{almostdiam} and \eqref{diamb} in Theorem~\ref{mainthm}. We also use this theorem in our algorithm in Theorem~\ref{decomdiag}, in order to avoid the diagonal vertices with long distance from the identity.
\end{rem}
\subsection{Further motivations and techniques}
Rabin and Shallit \cite{Rabin} developed a randomized polynomial-time algorithm that represents any integer as a sum of four squares. The question of representing a prime as a sum of two squares in polynomial-time has been discussed in \cite{Schoof} and \cite{Rabin}. Schoof developed a deterministic polynomial-time algorithm that represents a prime $p\equiv 1$ mod 4 as a sum of two squares by $O((\log p)^6)$ operations. We use Schoof's algorithm in our algorithm in Theorem~\ref{diagliftt}.
Both Ross-Selinger and our algorithm start with searching for integral lattice points inside a convex region that is defined by a simple system of quadratic inequalities. If the convex region is defined by a system of linear inequalities in a fixed dimension then the general result of Lenstra \cite{Lenstra} implies this search is polynomially solvable. We use a variant of Lenstra's argument in the proof of Theorem~\ref{diagliftt}. An important feature of our algorithm is that it has been implemented, and it runs and terminates quickly. We give our numerical results in Section \ref{numericss}.
\section{NP-Completeness}\label{NP} In this section, we prove Theorem~\ref{nptheorem} and Corollary~\ref{cooor}. We reduce them to the subset-sum problem, which is well-known to be NP-complete. We begin by stating the subset-sum problem and proving some auxiliary lemmas. The proofs of Theorem~\ref{nptheorem} and Corollary~\ref{cooor} appear at the end of this section.
Let $t_1, t_2, \dots, t_k, t\in \mathbb{N}$, with $\log(t)$ and $\log(t_i)$ at most $k^{A}$ for some fixed constant $A$.
\begin{q} Are there $\epsilon_{i}\in \{0,1 \}$ such that \begin{equation}\label{subsum}\sum_{j=1}^{k} \epsilon_j t_j=t?\end{equation} \end{q} \begin{lem} By Cramer's conjecture, there exists a polynomial-time algorithm in $k$ that returns a prime number $q\equiv 3$ mod $4$ such that\begin{equation}\label{boundd}
q> 2 k \max_{1 \leq i \leq k}(t_i,t).
\end{equation} Alternatively, this task can be done unconditionally by a probabilistic polynomial-time algorithm in $k$. \end{lem} \begin{proof} Let $X:=4 k \max_{1 \leq i \leq k}(t_i,t)+3.$
We find $q$ by running the primality test algorithm of Agrawal, Kayal and Saxena \cite{primaty} on the arithmetic progression $X, X+4, \dots$. By Cramer's conjecture this search terminates in $O(\log(X)^2)$ operations.
Alternatively, we pick a random number in $[X,2X]$ and check by the primality test algorithm whether the number is prime. The expected number of trials is $O(\log(X))$.
\end{proof}
Let
\begin{equation}\label{ss}
s:=(q-1)t +\sum_{i=1}^{ k} t_i.
\end{equation}
By the change of variables $\xi_j:=1+(q-1)\epsilon_j$, so that $\sum_j \xi_j t_j=\sum_j t_j+(q-1)\sum_j \epsilon_j t_j$, solving equation~\eqref{subsum} is equivalent to solving
\begin{equation}\label{frob}
\sum_{j=1}^{k}\xi_{j}t_{j}=s,
\end{equation} where $\xi_{j}\in \{1,q \}$.
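This equivalence can be checked mechanically on a toy instance (hypothetical values; $q=191$ plays the role of the prime from the previous lemma):

```python
from itertools import product

def solves_subsum(ts, t, eps):
    """eps is a 0/1 vector: does sum(eps_j * t_j) = t hold?"""
    return sum(e * tj for e, tj in zip(eps, ts)) == t

def solves_frob(ts, s, xis):
    """xis has entries in {1, q}: does sum(xi_j * t_j) = s hold?"""
    return sum(x * tj for x, tj in zip(xis, ts)) == s

ts, t, q = [3, 7, 12, 20], 19, 191   # toy instance; q ≡ 3 mod 4, q > 2k*max
s = (q - 1) * t + sum(ts)
for eps in product((0, 1), repeat=len(ts)):
    xis = [1 + (q - 1) * e for e in eps]  # eps_j = 0 -> xi_j = 1; eps_j = 1 -> xi_j = q
    assert solves_subsum(ts, t, eps) == solves_frob(ts, s, xis)
print("equivalence verified for all 16 choices of eps")
```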
Let $\mathbb{F}_{q^2}$ be the finite field with $q^2$ elements.
\begin{lem}
By assuming GRH, there exists a deterministic polynomial-time algorithm in $\log q$ that returns a finite subset $H\subset \mathbb{F}_{q^2}^*$ of size $O((\log q)^{8+\epsilon})$ such that $H$ contains at least one generator of the cyclic multiplicative group $\mathbb{F}_{q^2}^*$. Alternatively, this task can be done unconditionally by a probabilistic polynomial-time algorithm.
\end{lem}
\begin{proof} Since $q\equiv 3$ mod 4, $\mathbb{Z}/q\mathbb{Z}[i]$ is isomorphic to $\mathbb{F}_{q^2}$, where $i^2=-1$.
By Shoup's result \cite[Theorem 1.2]{Shoup}, there is a primitive root $g=a+bi \in \mathbb{Z}/q\mathbb{Z}[i]$ for the finite field with $q^2$ elements such that $a$ and $b$ have integral lifts of size $O(\log(q)^{4+\epsilon} ) $ for any $\epsilon >0$. Hence, the reduction of $H:=\{a+bi: |a|,|b|\leq \log(q)^{4+\epsilon} \}$ mod $q$ has the desired property.
Alternatively, this task can be done unconditionally by a probabilistic polynomial-time algorithm, because the density of primitive roots in $\mathbb{F}_{q^2}^*$ is $\varphi (q^2-1)/(q^{2}-1)$, where $\varphi$ is Euler's totient function, and this ratio is well-known to be $\gg 1/\log \log (q)$.
\end{proof}
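The density argument can be checked directly for a toy prime: with the hypothetical value $q=7$, the field $\mathbb{Z}/q\mathbb{Z}[i]$ has $q^2-1=48$ nonzero elements, of which $\varphi(48)=16$ are generators.

```python
def mul(u, v, q):
    """Multiply (a+bi)(c+di) in Z/qZ[i], with i^2 = -1."""
    a, b = u
    c, d = v
    return ((a * c - b * d) % q, (a * d + b * c) % q)

def order(g, q):
    """Multiplicative order of a nonzero g in (Z/qZ[i])^*."""
    x, n = g, 1
    while x != (1, 0):
        x = mul(x, g, q)
        n += 1
    return n

q = 7  # q ≡ 3 mod 4, so Z/qZ[i] is the field with q^2 = 49 elements
gens = [(a, b) for a in range(q) for b in range(q)
        if (a, b) != (0, 0) and order((a, b), q) == q * q - 1]
print(len(gens))  # phi(48) = 16 generators
```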
Next, we take an element $g\in H$ (not necessarily a generator), and let \begin{equation}\label{a,b} \begin{split} a_j+ib_j:=g^{t_j} \text{ for } 1 \leq j \leq k, \\ a+ib:=g^{s}, \end{split} \end{equation} where $t_j$ are given in the subset-sum problem~\eqref{subsum}, $s$ is defined by equation~\eqref{ss} and $a_j, b_j,a,b \in \mathbb{Z}/q\mathbb{Z}$. Next, we find Gaussian primes $\pi_j\in \mathbb{Z}[i]$ such that \begin{equation}\label{ppp}\pi_j\equiv a_j+i b_j \text{ mod } q. \end{equation}
Again, this is possible deterministically by Cramer's conjecture. Alternatively, we choose a random integral point $(h_1,h_2)\in [X,2X]\times [X,2X]$, and check by a polynomial-time primality test algorithm whether $(h_1q+a_j)^2+(h_2q+b_j)^2$ is prime in $\mathbb{Z}$. Set $p_j:=|\pi_j|^2$, which is a prime in $\mathbb{Z}$, and define
\begin{equation}\label{N} N:=\prod_{j=1}^k p_j. \end{equation} Consider the following diophantine equation \begin{equation}\label{diop} \begin{split} &X^2+Y^2=N, \\ &X\equiv a \text{ and } Y\equiv b \text{ mod } q, \end{split} \end{equation} where $a,b$ are defined in equation~\eqref{a,b} and $N$ in \eqref{N}. Our theorem is a consequence of the following lemma. \begin{lem}\label{primitive} Assume that $g\in \mathbb{F}_{q^2}^*$ is a generator. An integral solution $(X,Y)$ to the diophantine equation~\eqref{diop} gives a solution $(\xi_1,\dots,\xi_k)$ to the equation~\eqref{frob} in polynomial time in $\log(q)$. \end{lem} \begin{proof}[Proof of Lemma~\ref{primitive}] Assume that the equation~\eqref{diop} has an integral solution $(A,B)$. The Gaussian integer $A+iB$ factors uniquely in $\mathbb{Z}[i]$, and we have \begin{equation}\label{fact}A+iB=\pm i\prod_{j=1}^{k}\pi_{j}^{\epsilon_j},\end{equation} where $\epsilon_{j}\in\{0,1 \}$, and $\pi_{j}^{0}=\pi_{j}$ and $\pi_{j}^{1}=\bar{\pi}_j$ (the complex conjugate of $\pi_j$). We consider the above equation mod $q$. Then $$A+iB\equiv \pm i\prod_{j=1}^{k}\pi_{j}^{\epsilon_j} \text{ mod } q.$$ By the congruence condition~\eqref{diop}, $A+iB\equiv a+ib$ mod $q$, and by~\eqref{ppp}, $\pi_{j}^{0}\equiv a_j+i b_j $ and $\pi_{j}^{1}\equiv a_j-i b_j $ mod $q.$ Since conjugation in $\mathbb{F}_{q^2}$ is the Frobenius $x\mapsto x^q$, we have $a_j-ib_j\equiv (a_j+ib_j)^q\equiv g^{qt_j}$ mod $q$. By~\eqref{a,b}, we obtain \begin{equation*} g^{s}\equiv \prod_{j=1}^{k} g^{\xi_{j} t_{j}}, \end{equation*} where $\xi_{j}=1$ if $\epsilon_{j}=0$ and $\xi_{j}=q$ if $\epsilon_{j}=1$. Therefore we obtain the following congruence equation $$ \sum_{j=1}^k \xi_{j} t_{j} \equiv s \text{ mod } q(q-1). $$ By the inequality~\eqref{boundd} and the definition of $s$ in equation~\eqref{ss}, we deduce that $$\sum_{j=1}^k \xi_{j} t_{j} =s.$$ This completes the proof of our lemma.\end{proof} \begin{proof}[Proof of Theorem~\ref{nptheorem}] Our theorem is a consequence of Lemma~\ref{primitive}. 
For every $g\in H$, we apply Lemma~\ref{primitive} and check whether $(\xi_1,\dots,\xi_k)$ is a solution to~\eqref{frob}. Since the size of $H$ is $O(\log(q)^{8+\epsilon})$ and it contains at least one primitive root, we find a solution to the equation~\eqref{frob} in polynomial time. This concludes our theorem.
\end{proof} \noindent Finally, we give a proof for Corollary~\ref{cooor}. \begin{proof}[Proof of Corollary~\ref{cooor}] We prove this corollary by induction on $d$. The base case $d=2$ follows from Theorem~\ref{nptheorem}. It suffices to reduce the task with $d$ variables to a similar task with $d+1$ variables in polynomial-time. The task with $d$ variables is to accept $(N,q,a_1, \dots,a_d)$ such that the following diophantine equation has a solution
\begin{equation}\label{deqq} \begin{split} &X_1^2+\dots +X_d^2=N, \\ &X_1\equiv a_1, \dots, X_d\equiv a_d \text{ mod } q. \end{split} \end{equation}
We proceed by taking auxiliary parameters $0 \leq t, m \in \mathbb{Z}$ such that $N<q^{2t}$, $m\leq (1/3) q^{2t+1}$ and $\gcd(m,q)=1$. We consider the following diophantine equation \begin{equation}\label{3eq} \begin{split} &X_1^2+ \dots +X_d^2+X_{d+1}^2= m^2+q^{2t}N, \\ &X_1\equiv q^ta_1, \dots ,X_d\equiv q^ta_d \text { and } X_{d+1}\equiv m\text{ mod } q^{t+1}. \end{split} \end{equation} Assume that $(X_1, \dots, X_{d+1})$ is a solution to the above equation. Then $$X_{d+1}\equiv \pm m \text{ mod } q^{2t+1}.$$
Since $m\leq (1/3) q^{2t+1}$, either $X_{d+1}=\pm m \text{ or }|X_{d+1}|\geq (2/3)q^{2t+1}$. If $|X_{d+1}|\geq (2/3)q^{2t+1}$, since $m\leq (1/3) q^{2t+1}$ and $N<q^{2t}$,
$$X_{d+1}^2> m^2+q^{2t}N. $$ This contradicts equation~({\ref{3eq}}). This shows that $X_{d+1}=\pm m$. Hence, the integral solutions to the diophantine equation~\eqref{3eq} are of the form $(q^t X_1, \dots, q^t X_d, \pm m)$ such that $(X_1, \dots, X_d)$ is a solution to the equation~\eqref{deqq}. By our induction assumption, this problem is NP-complete, and we conclude our corollary.
\end{proof}
\section{Algorithm}\label{alg} \subsection{Proof of Theorem~\ref{diagliftt}}
In this section, we prove Theorem~\ref{diagliftt}, which is the main ingredient in the navigation algorithms in Theorem~\ref{mainthm} and Theorem~\ref{decomdiag}.
Let $(x_0,\dots,x_3)\in\mathbb{Z}^4$ be a solution to the equation~\eqref{equit}.
We change the variables to $(t_0,\dots,t_3)\in\mathbb{Z}^4$, where $x_i=2t_iq+a_i,$
and $|a_i| \leq q$. Hence, \begin{equation}\label{main}\frac{N}{4q^2}-(t_0+a_0/2q)^2-(t_1+a_1/2q)^2= t^2_2 +t^2_3.\end{equation}
Let $Q(t_0,t_1):=\frac{N}{4q^2}-(t_0+a_0/2q)^2-(t_1+a_1/2q)^2.$ Recall the definition of $A_{Q,r}$ from~\eqref{Afr}, where $r>0$ is some real number. By Conjecture~\ref{cc}, if $ |A_{Q,r}|> C_{\gamma}(\log N)^{\gamma}$, then the equation~\eqref{main} has a solution with $(t_0,t_1)\in A_{Q,r}.$
First, we give a parametrization of $(t_0,t_1)\in \mathbb{Z}^2$, where $Q(t_0,t_1)\in \mathbb{Z}.$ Let $k:=\frac{N-a_0^2-a_1^2}{4q}$. Since $a_0^2 +a_1^2 \equiv N \text{ mod } 4q$, $k\in\mathbb{Z}$.
By~\eqref{main}, \begin{equation}\label{newconj} a_0t_0+a_1t_1\equiv k \text{ mod } q.\end{equation}
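The integrality criterion~\eqref{newconj} can be verified numerically with exact rational arithmetic (hypothetical values $q=13$, $a_0=5$, $a_1=8$, $k=3$):

```python
from fractions import Fraction as Fr

# Hypothetical data: q prime, residues (a0, a1), and N chosen so that
# a0^2 + a1^2 ≡ N mod 4q, which makes k = (N - a0^2 - a1^2)/(4q) an integer.
q, a0, a1, k = 13, 5, 8, 3
N = a0 * a0 + a1 * a1 + 4 * q * k

def Q(t0, t1):
    """Q(t0, t1) = N/4q^2 - (t0 + a0/2q)^2 - (t1 + a1/2q)^2, computed exactly."""
    return Fr(N, 4 * q * q) - (t0 + Fr(a0, 2 * q)) ** 2 - (t1 + Fr(a1, 2 * q)) ** 2

# Q(t0, t1) is an integer exactly when a0*t0 + a1*t1 ≡ k mod q:
for t0 in range(-q, q + 1):
    for t1 in range(-q, q + 1):
        assert (Q(t0, t1).denominator == 1) == ((a0 * t0 + a1 * t1 - k) % q == 0)
print("integrality criterion verified")
```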
Without loss of generality, we assume that $a_0\neq 0$ mod $q$. Then $a_0$ has an inverse mod $q$, and $(ka_0^{-1},0)$ is a solution for the congruence equation~\eqref{newconj}. We lift $(ka_0^{-1},0)\in\big(\mathbb{Z}/q\mathbb{Z}\big)^2$ to the integral vector $(c,0)\in \mathbb{Z}^2$ such that
$$c\equiv ka_0^{-1} \text{ mod } q \text{ and } |c|<(q-1)/2. $$ The integral solutions of equation~\eqref{newconj} are the translation of the integral solutions of the following homogeneous equation by the vector $(c,0)\in \mathbb{Z}^2$ \begin{equation}\label{conj2} a_0t_0+a_1t_1\equiv 0 \text{ mod } q.\end{equation} The integral solutions to equation~({\ref{conj2}}) form a lattice of co-volume $q$ that is spanned by the integral basis $\{v_1,v_2 \}$ where $$v_1:=(q,0), \text{ and } v_2:=(-a_1a_0^{-1},1).$$ We apply the Gauss reduction algorithm to the basis $\{v_1,v_2\}$ in order to find an almost orthogonal basis $\{u_1,u_2\}$ such that \begin{equation}\label{ortho} \begin{split} \text{span}_{\mathbb{Z}}\langle v_1,v_2 \rangle =\text{span}_{\mathbb{Z}}\langle u_1, u_2 \rangle, \\
|u_1|<|u_2|, \\ |\langle u_1,u_2\rangle| \leq (1/2)\langle u_1,u_1\rangle, \end{split} \end{equation} where $\text{span}_{\mathbb{Z}}\langle v_1,v_2 \rangle:=\{xv_1+yv_2: x,y \in \mathbb{Z} \}$ and $\langle u_1,u_2\rangle\in \mathbb{R}$ is the dot product of $u_1$ and $u_2$.
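A minimal sketch of the Gauss reduction step, on a hypothetical instance with $q=101$, $a_0=1$, $a_1=37$, so that $v_2=(-37 \bmod 101,\, 1)=(64,1)$:

```python
def gauss_reduce(v1, v2):
    """Gauss (Lagrange) reduction of a rank-2 integral basis: returns
    u1, u2 spanning the same lattice with |u1| <= |u2| and
    |<u1,u2>| <= (1/2)|u1|^2, i.e. an almost orthogonal basis."""
    def dot(a, b):
        return a[0] * b[0] + a[1] * b[1]
    u1, u2 = v1, v2
    if dot(u1, u1) > dot(u2, u2):
        u1, u2 = u2, u1
    while True:
        m = round(dot(u1, u2) / dot(u1, u1))  # nearest-integer Gram coefficient
        u2 = (u2[0] - m * u1[0], u2[1] - m * u1[1])
        if dot(u1, u1) <= dot(u2, u2):
            return u1, u2
        u1, u2 = u2, u1

# The lattice t0 + 37*t1 ≡ 0 mod 101 has basis v1 = (101, 0), v2 = (64, 1):
u1, u2 = gauss_reduce((101, 0), (64, 1))
print(u1, u2)  # a reduced basis of the same co-volume 101 lattice
```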
Let $u_0$ be a shortest integral vector that satisfies the equation~\eqref{newconj}. We write $(c,0)$ as a linear combination of $u_1$ and $u_2$ with coefficients in $(1/q) \mathbb{Z}$ $$(c,0)=(h_1+r_1/q)u_1+ (h_2+r_2/q)u_2,$$ where $0 \leq r_1,r_2\leq q-1$. Note that $u_0$ is one of the following 4 vectors $$ \big(r_1/q-\{0,1 \} \big) u_1 + \big(r_2/q-\{0,1 \} \big) u_2.$$
By triangle inequality, $$|u_0|<|u_2|.$$ We parametrize the integral solutions $(t_0,t_1)$ of~\eqref{newconj} by: \begin{equation}\label{param}(t_0,t_1)=u_0+xu_1+yu_2,\end{equation} where $x,y \in \mathbb{Z}.$ Let $u_0=(u_{0,0},u_{0,1})$, $u_1=(u_{1,0},u_{1,1})$ and $u_2=(u_{2,0},u_{2,1})$. Since $u_1$ and $u_2$ are solutions to~\eqref{conj2} and $u_0$ is a solution to~\eqref{newconj}, \begin{equation*} \begin{split} u_0^{\prime}:= \frac{k-a_0u_{0,0}-a_1u_{0,1}}{q} \in\mathbb{Z}, \\ u_1^{\prime}:=\frac{a_0u_{1,0}+a_1u_{1,1}}{q}\in\mathbb{Z}, \\ u_2^{\prime}:=\frac{a_0u_{2,0}+a_1u_{2,1}}{q}\in\mathbb{Z}. \end{split} \end{equation*}
Let
\begin{equation}\label{F(x)}F(x,y):=u_0^{\prime}-xu_1^{\prime}-yu_2^{\prime}-(u_{0,0}+xu_{1,0}+yu_{2,0})^2 -(u_{0,1}+xu_{1,1}+yu_{2,1})^2. \end{equation} By \eqref{param}, \begin{equation*} F(x,y)=Q(t_0,t_1).
\end{equation*}
Hence, $Q(t_0,t_1)\in \mathbb{Z}$ for $(t_0,t_1)\in \mathbb{Z}^2,$ if and only if $(t_0,t_1)=u_0+xu_1+yu_2$ for some $(x,y)\in\mathbb{Z}^2.$
Next, we list all the integral points $(x,y)$ such that $F(x,y)$ is positive.
\begin{lem}\label{boxlem}
Assume that $\frac{\sqrt{N}}{q|u_2|}\geq 14/3$. Let $F(x,y)$ be as above. Let $A:=\sqrt{N}/(2q|u_1|)-1$ , $B:=\sqrt{N}/(2q|u_2|)-1$ and \begin{equation}\label{box} C:= [-A,A]\times[-B,B]. \end{equation} Then $F(x,y)$ is positive for every $(x,y)\in C$ and negative outside $10\times C$. \end{lem}
\begin{proof}
Recall that $(t_0,t_1)=u_0+xu_1+yu_2$ and $$ F(x,y)= N/4q^2-(t_0+a_0/2q)^2 - (t_1+a_1/2q)^2,$$
where $|a_0/2q|<1/2 \text{ and } |a_1/2q| < 1/2$. Hence, if $|(t_0,t_1)|<(\sqrt{N}/q) -1 $, then $F(x,y)>0$, and if $|(t_0,t_1)|>(\sqrt{N}/q) +1,$ then $F(x,y)<0$. By the triangle inequality \begin{equation*} \begin{split}
|(t_0,t_1)|=|u_0+xu_1+yu_2|
\leq |u_0|+|x||u_1|+|y||u_2|. \end{split} \end{equation*}
Since $|u_0|<|u_2|$, we have $|(t_0,t_1)|\leq |x| |u_1|+(1+|y|)|u_2|.$ Let $A$, $B$ and $C$ be as in~\eqref{box}. Then for every $(x,y)\in [-A,A]\times[-B,B]$, we have $$
|x| |u_1|+(1+|y|)|u_2|\leq (\sqrt{N}/q) -|u_1| < (\sqrt{N}/q) -1. $$ Hence, $F(x,y)>0$ if $(x,y)\in [-A,A]\times[-B,B]$. Next, we show that $F$ is negative outside $10\times C.$ By almost orthogonality conditions~\eqref{ortho}, we obtain the following lower bound
\begin{equation}\label{lowerbd}(|x|/2)|u_1|+(|y|/2-1)|u_2| \leq |u_0+xu_1+yu_2|.\end{equation} The above inequality implies that if $x \geq 10A$, then
$$|(t_0,t_1)|=|u_0+xu_1+yu_2| > \sqrt{N}/q+ \big(3\sqrt{N}/q|u_1|-10\big)|u_1|/2.$$
Since we assume that $\frac{\sqrt{N}}{q|u_2|}\geq 14/3$ and $1<|u_1|<|u_2|$, it follows that $|(t_0,t_1)|> \sqrt{N}/q+1 $, and hence $F(x,y)$ is negative. Similarly, if $y \geq 10 B $ then
$$|(t_0,t_1)|=|u_0+xu_1+yu_2| > \sqrt{N}/q+ \big(3\sqrt{N}/(2q|u_2|)-6\big)|u_2|.$$
Since $\frac{\sqrt{N}}{q|u_2|}\geq 14/3$, it follows that $|(t_0,t_1)|> \sqrt{N}/q+1.$ Hence, $F(x,y)$ is negative. Therefore, if $(x,y)\notin 10 \times C$, then $F(x,y)$ is negative. This concludes our lemma. \end{proof}
In the following lemma, we consider the remaining case, where $\frac{\sqrt{N}}{q|u_2|}\leq 14/3.$
\begin{lem}\label{line}
Assume that $\frac{\sqrt{N}}{q|u_2|}\leq 14/3$ and $F(x,y)>0$. Then $|y|\leq 13.$ \end{lem}
\begin{proof} Since $F(x,y)>0$, it follows from the first line of the proof of Lemma~\ref{boxlem} that $|(t_0,t_1)|< (\sqrt{N}/q) +1.$ From the inequality~\eqref{lowerbd}, we have $$
(|y|/2-1)|u_2| \leq |(t_0,t_1)| \leq \sqrt{N}/q +1. $$ Hence, $$
|y|\leq \frac{2\sqrt{N}}{q|u_2|} +4 < 14.$$ Since $y$ is an integer, we conclude the lemma. \end{proof}
\begin{proof}[Proof of Theorem~\ref{diagliftt}]
Assume that $\frac{\sqrt{N}}{q|u_2|}\geq 14/3.$ By Lemma~\ref{boxlem}, $F(x,y)$ is positive inside the box $C$ defined in~\eqref{box}. We list the points $(x,y)\in C$ in order of their distance from the origin. If possible, we represent $F(x,y)$ as a sum of two squares by the following polynomial-time procedure. We factor $F(x,y)$ into primes by the polynomial-time integer factorization assumed in $(*)$. Next, by Schoof's algorithm \cite{Schoof}, we write every prime factor as a sum of two squares. If we succeed, then we find an integral solution to the equation~\eqref{equit}, and this concludes the theorem.
If the size of the box $C$, namely $AB$, satisfies $AB > C_{\gamma}\log(N)^{\gamma},$ then by Conjecture~\ref{cc} we find a pair $(x,y)$ such that $F(x,y)$ is a sum of two squares in $O(\log(q)^{O(1)})$ steps, and the above algorithm terminates. Otherwise, $AB < C_{\gamma}\log(N)^{\gamma}$. By Lemma~\ref{boxlem}, $F(x,y)$ is negative outside the box $10C$, and since the size of this box is $O(\log(q)^{\gamma})$, we check all points inside $10C$ in order to represent $F(x,y)$ as a sum of two squares. If we succeed in representing $F(x,y)$ as a sum of two squares, then we find an integral solution to equation~\eqref{equit}. Otherwise, the equation~\eqref{equit} does not have any integral solution. This concludes our theorem if $\frac{\sqrt{N}}{q|u_2|}\geq 14/3$.
Finally, assume that $\frac{\sqrt{N}}{q|u_2|}\leq 14/3$. Then by Lemma~\ref{line}, we have $|y| \leq 13.$ We fix $y=l$ for some $|l|\leq 13.$ We note that by equation~\eqref{F(x)}, $$ F(x,l)=A x^2+B x+ C $$ for some $A, B,C\in \mathbb{Z} $. We list the $x\in \mathbb{Z}$ such that $F(x,l)>0$, and then proceed as in the first paragraph of the proof. This concludes our theorem.
\end{proof}
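The two-squares subroutine in the algorithm above relies on factoring plus Schoof's algorithm; for illustration only, a brute-force stand-in (exponential in $\log n$):

```python
from math import isqrt

def sum_of_two_squares(n):
    """Return (x, y) with x^2 + y^2 = n if one exists, else None.
    Brute force stands in for the factoring-plus-Schoof pipeline."""
    for x in range(isqrt(n) + 1):
        y2 = n - x * x
        y = isqrt(y2)
        if y * y == y2:
            return (x, y)
    return None

print(sum_of_two_squares(89))  # (5, 8)
print(sum_of_two_squares(21))  # None: 21 = 3 * 7 with both primes ≡ 3 mod 4
```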
\subsection{Distance of diagonal vertices from the identity}
In this section, we give a proof of Theorem~\ref{correspond}. Then, we give bounds on the size of the outputs in Theorem~\ref{mainthm} and Theorem~\ref{decomdiag}. Recall the notation introduced in the formulation of Theorem~\ref{correspond}.
\begin{proof}[Proof of Theorem~\ref{correspond}]
We proceed by proving \eqref{bigholes}. Assume that $$|u_2|\geq C_{\gamma} \log(q)^{\gamma} |u_1|.$$ Let
\begin{equation}\label{hbound}
h:=\lceil 4\log_p(q)-2\log_{p}|u_1| +\log_{p}(89) \rceil.
\end{equation}
We show that there exists a path from $v$ to the identity of length $h$. By our assumption $p$ is a quadratic residue mod $q$. We denote the square root of $p$ mod $q$ by $\sqrt{p}.$ Set
\begin{equation*}
\begin{split}
A:=a\sqrt{p}^h \text{ mod } 2q,
\\
B:=b\sqrt{p}^h \text{ mod } 2q.
\end{split}
\end{equation*} By Theorem~\ref{reductionthm}, there exists a path of length $h$ from $v$ to the identity if and only the following diophantine equation has an integral solution $(t_1,t_2,t_3,t_4)$
\begin{equation}\label{newdiag}(2t_1q+A)^2 + (2t_2q+B)^2 + (2t_3q)^2 +(2t_4q)^2=p^{h}. \end{equation}
In Theorem~\ref{diagliftt}, we developed a polynomial-time algorithm for finding its integral solutions $(t_1,t_2,t_3,t_4)$.
Let $F(x,y)$ be the associated quadratic polynomial defined in equation~\eqref{F(x)}. By Lemma~\ref{boxlem}, $F(x,y)$ is positive inside the box $[-A,A]\times[-B,B]$, where $A:=\sqrt{p^h}/(4q|u_1|)$ and $B:=\sqrt{p^h}/(4q|u_2|)-1$. By the definition of $h$ in equation~\eqref{hbound}, we have \begin{equation}
p^{h}\geq \frac{89q^4}{|u_1|^2}. \end{equation} By the above inequality \begin{equation}\label{Bin}
B\geq \frac{\sqrt{89}q^2}{4q|u_1||u_2|}-1. \end{equation} Since $\{u_1,u_2\}$ is an almost orthogonal basis for a lattice of co-volume $q$, the angle between $u_1$ and $u_2$ is between $\pi/3$ and $2\pi/3$. Hence,
\begin{equation}\label{area}|u_1||u_2|\leq 2q/\sqrt{3}.\end{equation}
We use the above bound on $|u_1||u_2|$ in inequality~\eqref{Bin}, and derive $$B\geq \frac{\sqrt{3\cdot 89}}{8}-1 >1.$$ Next, we give a lower bound on $A$. Note that
$$A \geq \frac{|u_2|}{|u_1|}B.$$
By our assumption $\frac{|u_2|}{|u_1|}\geq C_{\gamma} \log(2q)^{\gamma}$, hence $$A \geq C_{\gamma} \log(2q)^{\gamma} B.$$ Since $B>1$, $$AB\geq C_{\gamma} \log(2q)^{\gamma}. $$ By Conjecture~\ref{cc} and Theorem~\ref{diagliftt}, our algorithm returns an integral solution $(t_1,t_2,t_3,t_4)$ which gives rise to a path of length $h$ from $v$ to the identity. This concludes the first part of our theorem.
Next, we assume that $|u_1|\leq |u_2|\leq C_{\gamma} \log(2q)^{\gamma} |u_1|$. Let
\begin{equation}\label{kdef}h^{\prime}:=\lceil 3\log_p(q)+\gamma\log_{p}\log(q)+\log_{p}(C_{\gamma}) +\log_p(89)\rceil.\end{equation}
We follow the same analysis as in the first part of the theorem. First, we give a lower bound on $B:=\sqrt{p^{h^{\prime}}}/(4q|u_2|)-1$. By the definition of $h^{\prime }$ in equation~\eqref{kdef}, we derive \begin{equation}\label{pkin} p^{h^{\prime}}\geq 89C_{\gamma}\log(q)^{\gamma}q^3. \end{equation}
We multiply both sides of $|u_2|\leq C_{\gamma} \log(q)^{\gamma} |u_1|$ by $|u_2|$ and use the inequality~\eqref{area} to obtain
$$|u_2|^2 \leq C_{\gamma} \log(q)^{\gamma}2q/\sqrt{3}. $$
By the above inequality, definition of $B$ and inequality~\eqref{pkin}, we have
\begin{equation*}B=\sqrt{p^{h^{\prime}}}/(4q|u_2|)-1\geq \frac{\sqrt{89}}{4\sqrt{2/\sqrt{3}}}-1\geq 1.\end{equation*}
Hence, $$B\geq \sqrt{p^{h^{\prime}}}/(8q|u_2|).$$ Next, we use the above inequality and inequality~\eqref{area} and \eqref{pkin} to give a lower bound on $AB$.
\begin{equation}
\begin{split}
AB&\geq \frac{p^{h^{\prime}}}{32q^2|u_1||u_2|}
\\
&\geq \frac{ 89\sqrt{3}C_{\gamma}\log(q)^{\gamma}q^3}{64q^3}
\\
&> C_{\gamma}\log(q)^{\gamma}.
\end{split}
\end{equation}
By Conjecture~\ref{cc} and Theorem~\ref{diagliftt}, our algorithm returns an integral solution $(t_1,t_2,t_3,t_4)$ which gives rise to a path of length $h^{\prime}$ from $v$ to the identity. This concludes our theorem. \end{proof}
Finally, we prove \eqref{almostdiam} in Theorem~\ref{mainthm}. We briefly explain the main idea. We normalize the associated co-volume $q$ lattices $L_v$ so that they have co-volume 1. These normalized lattices are parametrized by points in $\text{SL}_2(\mathbb{Z})\backslash \mathbb{H}$, and it is well-known that they are equidistributed in $\text{SL}_2(\mathbb{Z})\backslash \mathbb{H}$ with respect to the hyperbolic measure $\frac{dx\, dy}{y^2}.$
It follows from this equidistribution and Theorem~\ref{correspond} that the distance of a typical diagonal matrix from the identity vertex is $\log(|X^{p,q}|)+O(\log \log (|X^{p,q}|))$. The diagonal points with distance $4/3\log(|X^{p,q}|)+O(\log \log (|X^{p,q}|))$ from the identity are associated to points $x+iy\in \mathbb{H}$ with $y$ as big as $q$.
\begin{proof}[Proof of \eqref{almostdiam} in Theorem~\ref{mainthm}] Let $v$ be a diagonal vertex with distance $h$ from the identity vertex, where $$ h\geq \lceil 3\log_p(q)+\gamma\log_{p}\log(q)+\log_{p}(C_{\gamma}) +\log_p(89) \rceil. $$ Let $L_v$ be the associated lattice of co-volume $q$ and $\{u_1,u_2 \}$ be an almost orthogonal basis for $L_v$. By Theorem~\ref{correspond}, the distance of $v$ from the identity is less than
$$\lceil 4\log_p(q)-2\log_{p}|u_1| +\log_p(89)\rceil.$$ Therefore,
$$ h \leq \lceil 4\log_p(q)-2\log_{p}|u_1| +\log_p(89)\rceil.$$ Hence,
\begin{equation}\label{u1}|u_1|^2 \leq 89q^4/p^{(h-1)}.\end{equation} Next, we count the number of lattices of co-volume $q$ inside $\mathbb{Z}^2$ such that the length of the shortest vector is smaller than $r\leq (1/2) \sqrt{q}$. Let $L\subset \mathbb{Z}^2$ be a lattice of co-volume $q$ such that $L$ contains a vector of length smaller than $(1/2) \sqrt{q}$. It is easy to check that $L$ contains a unique pair of vectors $\pm v:=\pm(a_0,a_1)$ that have the shortest length among all nonzero vectors inside $L$. Since $q$ is prime, this vector is primitive, i.e., $\gcd(a_0,a_1)=1$. On the other hand, the lattice is uniquely determined by $\pm v:=\pm(a_0,a_1)$; namely, $L$ is the set of all integral points $(x,y)\in \mathbb{Z}^2$ such that $$a_0x+a_1y\equiv 0 \text{ mod } q.$$ Therefore, the problem of counting the lattices of co-volume $q$ with shortest vector smaller than $r$ is reduced to counting the projective primitive integral vectors of length smaller than $r$. The main term of this counting is \begin{equation}\label{count}1/2 \zeta(2)^{-1} \pi r^2= \frac{3}{\pi}r^2.\end{equation} By inequality~\eqref{u1} and~\eqref{count}, we deduce that the number of diagonal vertices with graph distance at least $h$ from the identity in the LPS Ramanujan graph $X^{p,q}$ is less than $$89q^4/p^{(h-1)}.$$ This concludes the proof of \eqref{almostdiam} in Theorem~\ref{mainthm}.
\end{proof}
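The counting step above — projective primitive integral vectors of length smaller than $r$, with main term $\frac{3}{\pi}r^2$ — can be checked numerically. The following brute-force sketch is only an illustration and is not part of the paper's algorithm:

```python
import math

def projective_primitive_count(r):
    """Count projective primitive integral vectors of length < r, i.e.
    coprime pairs (a, b) != (0, 0) with a^2 + b^2 < r^2, identifying v with -v."""
    count = 0
    for a in range(-r, r + 1):
        for b in range(-r, r + 1):
            if (a, b) != (0, 0) and a * a + b * b < r * r \
                    and math.gcd(abs(a), abs(b)) == 1:
                count += 1
    # every projective vector was counted twice, once as v and once as -v
    return count // 2

r = 100
print(projective_primitive_count(r), 3 / math.pi * r * r)
```

For $r=100$ the brute-force count agrees with the main term $\frac{3}{\pi}r^2\approx 9549$ to within a few percent, consistent with the main term above.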
\subsection{Algorithm for the diagonal decomposition}\label{diagsection}
\begin{proof}[Proof of \eqref{deter} in Theorem~\ref{decomdiag}] Let $M:=\begin{bmatrix}a & b\\ c &d \end{bmatrix}\in {\rm PSL}_2(\mathbb{Z}/q\mathbb{Z})$ be any element. By \cite[Lemma~1]{Petit2008}, there exists a polynomial-time algorithm that expresses $M$ as: $$ M=D_1s_1D_2s_2D_3s_3D_4, $$
where $D_i$ are diagonal matrices for $1\leq i\leq 4$ and $s_j$ are LPS generators for $1\leq j\leq 3.$ By Theorem~\ref{mainthm} and assuming $(*)$, we write each $D_i$ as a product of at most $4/3\log_{p}|X^{p,q}|+O(1)$ LPS generators in polynomial time. Therefore, we find a path of size at most $\frac{16}{3}\log_{k-1}|X^{p,q}|+O(1)$ from the identity to $M.$ This concludes \eqref{deter}. \end{proof}
Let $$ D:=\Big\{\begin{bmatrix}a & 0\\0 &a^{-1} \end{bmatrix}\in {\rm PSL}_2(\mathbb{Z}/q\mathbb{Z}) \Big\}, \text{ and }R:=\Big\{\begin{bmatrix}a & b\\ b &a \end{bmatrix}\in {\rm PSL}_2(\mathbb{Z}/q\mathbb{Z}) \Big\}. $$
Define $d_{\alpha}:=\begin{bmatrix}\alpha & 0 \\ 0&\alpha^{-1} \end{bmatrix},$ and $ r_{a,b}:= \begin{bmatrix}a & b \\ b&a \end{bmatrix}.$ By the correspondence~\eqref{cores} between ${\rm PSL}_2(\mathbb{Z}/q\mathbb{Z})$ and the units of $\mathbb{H}(\mathbb{Z}/q\mathbb{Z}) $, $D$ and $R$ are associated to: $$ \tilde{D}:=\{a+bi:a,b\in \mathbb{Z}/q\mathbb{Z}, a^2+b^2=1 \}, \text{ and }\tilde{R}:=\{a+bj:a,b\in \mathbb{Z}/q\mathbb{Z}, a^2+b^2=1 \}. $$
By Theorem~\ref{diagliftt}, there is a polynomial-time algorithm that finds the shortest possible path between the identity and vertices in $D$ or $R.$ Let $D_1\subset D$ and $R_1\subset R$ be the subsets of vertices whose distance from the identity is less than $\log_p|X^{p,q}|+O(\log_p\log(|X^{p,q}|)).$ By \eqref{almostdiam} in Theorem~\ref{mainthm}, $$
|R_1|\geq 99\% |R| \text{ and } |D_1|\geq 99\% |D|. $$
Let $Y:=D_1R_1D_1\subset {\rm PSL}_2(\mathbb{Z}/q\mathbb{Z}).$ \begin{lem} We have $$
|Y|\geq 10\% |{\rm PSL}_2(\mathbb{Z}/q\mathbb{Z})|. $$ \end{lem} \begin{proof} Let $g\in Y.$ Then, $$ g=d_{\alpha} r_{a,b} d_{\beta} $$
for some $a,b, \alpha,\beta\in \mathbb{Z}/q\mathbb{Z}.$ We give an upper bound on the number of different ways of expressing $g$ as $d_{\alpha} r_{a,b} d_{\beta}$, where $ab\neq0.$ Suppose that $d_{\alpha} r_{a,b} d_{\beta}=d_{\alpha^{\prime}} r_{a^{\prime},b^{\prime}} d_{\beta^{\prime}}.$ Then, it follows that $(\alpha^{-1}\alpha^{\prime})^2=(\beta^{-1}\beta^{\prime})^2=\pm1.$ This shows that $g$ has only 2 representations as $d_{\alpha} r_{a,b} d_{\beta}$. There are only two elements of $R$ with $ab=0$, which are $r_{1,0}$ and $r_{0,i}.$ Since $q$ is a prime, $|D|=|R|=\frac{q-1}{2}$, and $|{\rm PSL}_2(\mathbb{Z}/q\mathbb{Z})|=\frac{(q-1)q(q+1)}{2}$. Therefore, $$
|Y|\geq 99\%^3\frac{(q-1)(q-5)(q-1)}{16}, $$ which is greater than $10\% \left|{\rm PSL}_2(\mathbb{Z}/q\mathbb{Z})\right|$ for every sufficiently large $q$. This concludes our lemma. \end{proof} \begin{lem}\label{Ylem}
Let $g\in X^{p,q}.$ There exists a polynomial-time algorithm in $\log q$ that returns a short path of size at most $3\log_{k-1}|X^{p,q}|+ O(\log \log (|X^{p,q}|))$ from the identity to $g$, if $g\in Y$. Otherwise, it returns ``Not in Y''. \end{lem} \begin{proof} Let $g=\begin{bmatrix}g_{1,1} & g_{1,2}\\ g_{2,1} &g_{2,2} \end{bmatrix}.$ First, we check the solubility of $d_{\alpha} r_{a,b} d_{\beta}=g$ for some $\alpha,\beta, a,$ and $b.$ This is equivalent to the following system of equations: \begin{equation*} \begin{bmatrix} \alpha\beta a & \alpha \beta^{-1}b \\
\alpha^{-1} \beta b& \alpha^{-1}\beta^{-1} a \end{bmatrix} =\begin{bmatrix}g_{1,1} & g_{1,2}\\ g_{2,1} &g_{2,2} \end{bmatrix}. \end{equation*} It follows that $a^2=g_{1,1}g_{2,2},$ $b^2= g_{1,2}g_{2,1}.$ By the quadratic reciprocity law, we check in polynomial time whether $g_{1,2}g_{2,1}$ and $g_{1,1}g_{2,2}$ are quadratic residues mod $q$. If either $g_{1,2}g_{2,1}$ or $g_{1,1}g_{2,2}$ is a quadratic non-residue, the algorithm returns ``Not in Y''. Otherwise, by a polynomial-time algorithm for taking square roots in finite fields (e.g. \cite{Adlemann} or \cite{Shanks}), we find $a$ and $b$. Similarly, we find $\alpha$ and $\beta.$ By Theorem~\ref{mainthm}, we write $d_{\alpha},$ $d_{\beta}$, and $r_{a,b}$ in terms of the LPS generators and check whether they are inside $D_1$ and $R_1$, respectively. This concludes our lemma.
\end{proof}
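The modular square-root subroutine invoked in the proof of Lemma~\ref{Ylem} can be sketched as follows. This is a generic implementation — residuosity is tested with Euler's criterion and roots are extracted with the Tonelli--Shanks method — offered only as an illustration, not as the specific routines of \cite{Adlemann} or \cite{Shanks}.

```python
def is_qr(a, q):
    # Euler's criterion: a nonzero a is a quadratic residue mod the odd
    # prime q iff a^((q-1)/2) = 1 mod q.
    return pow(a, (q - 1) // 2, q) == 1

def sqrt_mod(a, q):
    # Square root of a modulo an odd prime q (assumes is_qr(a, q)).
    if q % 4 == 3:
        return pow(a, (q + 1) // 4, q)
    # Tonelli-Shanks: write q - 1 = t * 2^s with t odd.
    s, t = 0, q - 1
    while t % 2 == 0:
        s, t = s + 1, t // 2
    z = 2                      # find a quadratic non-residue z
    while is_qr(z, q):
        z += 1
    m, c, u, r = s, pow(z, t, q), pow(a, t, q), pow(a, (t + 1) // 2, q)
    while u != 1:
        i, v = 0, u            # least i with u^(2^i) = 1
        while v != 1:
            v, i = v * v % q, i + 1
        b = pow(c, 1 << (m - i - 1), q)
        m, c = i, b * b % q
        u, r = u * c % q, r * b % q
    return r
```

Given $g$, one tests $g_{1,1}g_{2,2}$ and $g_{1,2}g_{2,1}$ with `is_qr` and, on success, recovers $a$ and $b$ with `sqrt_mod`; both steps run in time polynomial in $\log q$.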
We cite the following proposition from \cite[Proposition 2.14]{Ellenberg}. \begin{prop}[Due to Ellenberg, Michel, Venkatesh]\label{largedev}
Fix $\epsilon>0$. For any subset $Y\subset X^{p,q}$ with $|Y|>10\%|X^{p,q}|$, the fraction of non-backtracking paths $\gamma$ of length $2l$ satisfying: $$
\Big| \frac{|\gamma\cap Y|}{2l+1}-\frac{|Y|}{|X^{p,q}|} \Big|\geq \epsilon $$ is bounded by $c_1 \exp(-c_2 l)$, where $c_1, c_2$ depend only on $\epsilon.$ \end{prop}
\begin{proof}[Proof of \eqref{typical} in Theorem~\ref{decomdiag}]
It suffices to navigate from the identity to a given vertex $v\in X^{p,q}.$ Recall that $S_p$ is the LPS generator set defined in \eqref{LPSgen}. Let $W$ be the set of all words of length at most $\log\log q$ with letters in $S_p.$ Note that $|W|=O((\log q)^c),$ where $c=\log(p)$ depends only on the fixed prime $p.$
By Lemma~\ref{Ylem}, if $wv\in Y$ for some $w\in W$, then we find a path that satisfies \eqref{typical}. By Proposition~\ref{largedev} for $\epsilon=9\%$, it follows that the fraction of the vertices $v$, such that $wv\notin Y$ for every $w\in W$, is less than $c_1\exp(-c_2\log\log q)=O((\log q)^{-c_2}).$ This concludes our theorem.
\end{proof}
\section{Numerical results}\label{numericss}
\subsection{Diagonal approximation with V-gates} In this section, we give some numerical results on the graph distance between diagonal vertices in $X_{5,q}$ ($V$-gates), which show that the inequalities~\eqref{almostdiam} and \eqref{diamb} are sharp. In particular, we numerically check that the diameter of $X_{5,q}$ is bigger than
$(4/3) \log_{5}|X_{5,q}|+O(1).$
Let $q$ be a prime number and $q\equiv 1,9 \text{ mod } 20$. The LPS generators associated to $p=5$ are called $V$-gates. $V$-gates are the following 6 unitary matrices: $$V_X^{\pm}:=\frac{1}{\sqrt{5}} \begin{bmatrix} 1 & 2i \\ 2i &1 \end{bmatrix}^{\pm}, \text{ } V_Y^{\pm}:=\frac{1}{\sqrt{5}} \begin{bmatrix} 1 & 2 \\ -2 &1 \end{bmatrix}^{\pm} \text{ and } V_Z^{\pm}:=\frac{1}{\sqrt{5}} \begin{bmatrix} 1+2i & 0 \\ 0 &1-2i \end{bmatrix}^{\pm}.$$ Since $q\equiv 1,9 \text{ mod } 20$, the square roots of $-1$ and $5$ exist mod $q$, and we denote them by $i$ and $\sqrt{5}$, respectively. So, we can realize these matrices inside $PSL_2(\mathbb{Z}/q\mathbb{Z})$. The Cayley graph of $PSL_2(\mathbb{Z}/q\mathbb{Z})$ with respect to the $V$-gates is a 6-regular LPS Ramanujan graph. We run our algorithm to find the shortest path in $V$-gates from the identity to a given typical diagonal matrix $\begin{bmatrix}a+bi & 0 \\ 0 & a-bi \end{bmatrix}\in PSL_2(\mathbb{Z}/q\mathbb{Z}).$ By Theorem~\ref{reductionthm}, a path of length $m$ from the identity to this diagonal element is associated to an integral solution of the following diophantine equation
\begin{equation}\label{cond} \begin{split} x^2+y^2+z^2+w^2=5^m \\ x \equiv \sqrt{5}^m a \text{ mod } q, \\ y \equiv \sqrt{5}^m b \text{ mod } q, \\ z\equiv w \equiv 0 \text{ mod } q, \\ x\equiv 1 \text{ and } y \equiv z \equiv w \equiv 0 \text{ mod } 2. \end{split} \end{equation} First, our algorithm in Theorem~\ref{diagliftt} finds an integral solution $(x,y,z,w)$ with the least integer $m$ to the equation \eqref{cond}. Next, from the integral solution $(x,y,z,w)$, it constructs a path in the Ramanujan graph by factoring $x+iy+jz+kw$ into $V$-gates. We give an explicit example. Let $q$ be the following prime number with 100 digits:
\begin{equation*} \begin{split} q=6513516734600035718300327211250928237178281758494 \\ 417357560086828416863929270451437126021949850746381. \end{split} \end{equation*} For the diagonal matrix let $$a=23147807431234971203978401278304192730471291281$$ and $$b=1284712970142165365412342134123412341234121234342141234133$$ The first run of our algorithm returns the following integral lift $$x+iy+jz+kw$$ where
\begin{equation*} \begin{split} x=-3513686405828860927763754940484616687735954403564689113985383253868329887 \\073895129393123529043092607930187858085249975614142765081986624258530038940271 \\ y=3773156548062114482690557548470637380371201820782668326017207890171886678830 \\601870144317232489264867168831689578223312772963262687237828114002146000356 \end{split} \end{equation*} \begin{equation*} \begin{split}z=696150282464006603091186089706225565057448347974579991940267012475009315401865 \\6570861892918415809962375271929963309479306543335375368842987498287311268 \end{split} \end{equation*} \begin{equation*} \begin{split}w=3888519350877870793211628965104035265911619494928178960777970459693109319153422 \\770196318754816019921662119578623310979387405367017752713898473225295568\end{split} \end{equation*} and the associated path in the Ramanujan graph by $V$-gates is
$$Vy Vz^{-1} Vx Vz Vx Vx Vz Vz Vx^{-1} Vx^{-1} Vz^{-1} Vx^{-1} Vz Vz Vy Vz^{-1} Vz^{-1} Vz^{-1} Vy Vz^{-1}$$ $$Vy^{-1} Vx Vx Vz Vx Vy^{-1} Vx Vy^{-1} Vx Vz^{-1} Vy Vx Vz Vz Vx Vz^{-1} Vy^{-1} Vx Vx Vz Vz Vx Vx $$ $$Vz^{-1} Vx Vx Vy^{-1} Vx^{-1} Vz Vy Vx Vz Vy Vx^{-1} Vy^{-1} Vy^{-1} Vz^{-1} Vy^{-1} Vz Vx^{-1} Vz^{-1} Vx^{-1}$$ $$Vx^{-1} Vz Vy^{-1} Vx^{-1} Vz Vx^{-1} Vx^{-1} Vz^{-1} Vy Vz Vz Vz Vy Vz^{-1} Vx Vy Vx^{-1} Vz^{-1} Vx^{-1} Vz^{-1}$$ $$Vx^{-1} Vx^{-1} Vz Vy^{-1} Vx^{-1} Vx^{-1} Vy^{-1} Vz^{-1} Vx Vz^{-1} Vx^{-1} Vy Vy Vy Vy Vy Vx^{-1} Vz Vx^{-1} Vz $$ $$Vy Vx^{-1} Vx^{-1} Vy Vz^{-1} Vx Vx Vz Vy^{-1} Vz^{-1} Vy Vz Vx^{-1} Vx^{-1} Vy^{-1} Vz^{-1} Vy Vx^{-1} Vy Vz^{-1} Vy$$ $$Vz Vz Vx^{-1} Vx^{-1} Vy^{-1} Vx^{-1} Vz^{-1} Vx^{-1} Vy Vz Vy Vy Vx^{-1} Vz^{-1} Vz^{-1} Vy Vy Vx Vy Vy Vz Vz $$
$$Vy Vz Vx Vz Vy Vz Vx Vy Vz^{-1} Vy Vx^{-1} Vz Vx Vz^{-1} Vy^{-1} Vx Vx Vy^{-1} Vx Vy Vx Vy^{-1} Vy^{-1} Vy^{-1} $$
$$Vz Vx Vy^{-1} Vz Vx^{-1} Vz^{-1} Vx^{-1} Vx^{-1} Vz^{-1} Vz^{-1} Vy^{-1} Vx Vy^{-1} Vx^{-1} Vz^{-1} Vx^{-1} Vz Vx Vz^{-1} $$
$$Vy^{-1} Vz^{-1} Vy Vx Vz Vx^{-1} Vy^{-1} Vz^{-1} Vx^{-1} Vz^{-1} Vz^{-1} Vy Vx^{-1} Vy^{-1} Vz^{-1} Vy Vz^{-1} Vx Vz Vx$$
$$Vx Vy Vx^{-1} Vx^{-1} Vz^{-1} Vx Vz Vy^{-1} Vz^{-1} Vz^{-1} Vy^{-1} Vy^{-1} Vy^{-1} Vx^{-1} Vx^{-1} Vy^{-1} Vz^{-1}$$
$$Vy Vx Vx Vx Vy^{-1} Vx Vz^{-1} Vy^{-1} Vz Vz Vy Vz Vy Vz Vz Vx Vx Vy^{-1} Vx^{-1} Vy Vz^{-1} Vy^{-1} Vx^{-1}$$
$$Vz^{-1} Vx^{-1} Vz Vx Vy^{-1} Vx^{-1} Vx^{-1} Vy Vx Vy Vx Vz Vy^{-1} Vz Vz Vy Vz^{-1} Vy Vz^{-1} Vx^{-1} Vx^{-1} $$
$$Vy Vz Vx^{-1} Vx^{-1} Vy Vz^{-1} Vx^{-1} Vy^{-1} Vy^{-1} Vx^{-1} Vy Vz^{-1} Vy^{-1} Vz^{-1} Vx Vx Vy Vz Vx^{-1} Vy^{-1}$$
$$Vz^{-1} Vx Vz^{-1} Vy^{-1} Vy^{-1} Vx^{-1} Vy^{-1} Vy^{-1} Vy^{-1} Vz Vy Vx^{-1} Vz Vx^{-1} Vy^{-1} Vy^{-1} Vx^{-1} Vz$$
$$Vx^{-1} Vz^{-1} Vz^{-1} Vy Vy Vy Vx^{-1} Vy Vy Vy Vz Vy Vx^{-1} Vy^{-1} Vx^{-1} Vy^{-1} Vz Vz Vz Vy^{-1} Vy^{-1} Vz$$
$$Vy Vz Vy^{-1} Vx Vx Vx Vy^{-1} Vz Vz Vz Vy Vz^{-1} Vy^{-1} Vy^{-1} Vy^{-1} Vx^{-1} Vz^{-1} Vx^{-1} Vz^{-1} Vx Vz^{-1}$$
$$Vy^{-1} Vx^{-1} Vz Vy Vx^{-1} Vz^{-1} Vy^{-1} Vx^{-1} Vy Vx Vx Vz Vx Vz^{-1} Vx Vz Vy^{-1} Vz Vx^{-1} Vy Vz^{-1}$$
$$Vz^{-1} Vx Vz^{-1} Vx^{-1} Vz^{-1} Vx^{-1} Vz Vx Vz^{-1} Vx^{-1} Vz Vy Vz Vz Vy Vx Vx Vy^{-1} Vx^{-1} Vz^{-1} Vx Vy$$
$$Vz^{-1} Vz^{-1} Vy Vz^{-1} Vy^{-1} Vx^{-1} Vz Vy^{-1} Vz^{-1} Vy Vx^{-1} Vx^{-1} Vy^{-1} Vy^{-1} Vy^{-1} Vx Vx Vz^{-1} $$
$$Vx^{-1} Vy^{-1} Vx Vy Vx Vy Vx^{-1} Vy Vx^{-1} Vx^{-1} Vz^{-1} Vx Vz Vy^{-1} Vx^{-1} Vy^{-1} Vx Vy Vz^{-1} Vz^{-1} Vx $$ That is a path of size 432. The first candidate that our algorithm failed to factor has 430 letters; it could still be a valid path, which means that the distance we found is optimal up to at most two letters. We note that the trivial lower bound for a typical element is $$3\log_5(q)=428.5.$$
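The realization of the $V$-gates inside ${\rm PSL}_2(\mathbb{Z}/q\mathbb{Z})$ described above is easy to reproduce for a toy prime; in the sketch below $q=29\equiv 9 \bmod 20$ is a hypothetical small choice (not the $100$-digit prime of the experiment), and we only verify that the three gates are unimodular mod $q$.

```python
# Realize V_X, V_Y, V_Z in SL_2(Z/qZ) for a toy prime q = 9 mod 20.
q = 29
i_ = next(x for x in range(2, q) if x * x % q == q - 1)  # a square root of -1
s5 = next(x for x in range(2, q) if x * x % q == 5)      # a square root of 5
inv_s5 = pow(s5, -1, q)                                  # 1/sqrt(5) mod q

def mat(a, b, c, d):
    # entries of (1/sqrt(5)) * [[a, b], [c, d]] reduced mod q
    return [[a * inv_s5 % q, b * inv_s5 % q],
            [c * inv_s5 % q, d * inv_s5 % q]]

VX = mat(1, 2 * i_, 2 * i_, 1)
VY = mat(1, 2, -2, 1)
VZ = mat(1 + 2 * i_, 0, 0, 1 - 2 * i_)

def det(M):
    return (M[0][0] * M[1][1] - M[0][1] * M[1][0]) % q

print(det(VX), det(VY), det(VZ))  # each determinant is 1 mod q
```

Each gate has determinant $\frac{1}{5}(1+4)=1$ mod $q$, so the matrices indeed lie in ${\rm PSL}_2(\mathbb{Z}/q\mathbb{Z})$.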
\subsection{Lower bound on the diameter of LPS Ramanujan graphs} Let $a=0$, $b=1,$ which is associated to the matrix $\begin{bmatrix}i &0 \\0&-i \end{bmatrix}$. By our correspondence in Section~\ref{quant}, the lattice point associated to this vertex is in the cusp neighborhood for every $q$. In fact, this lattice point has the highest imaginary part among all co-volume $q$ lattice points. Let \begin{equation*} \begin{split} q=65135167346000357183003272112509282371782817584944173575600868284168\\ 63929270451437126021949850746381 \end{split} \end{equation*} The length of the shortest path from the identity to $\begin{bmatrix}i &0 \\0&-i \end{bmatrix}$ is 571. Note that $4\log_5(q)=571.20,$ and recall that $[4\log_5(q)]$ is conjectured to be the asymptotic of the diameter of this Ramanujan graph. We refer the reader to \cite[Section 4]{Naser} for further discussion and more numerical results.
\end{document}
\begin{document}
\title{\textbf{Higher differentiability results for solutions to a class of non-homogeneous elliptic problems under sub-quadratic growth conditions}}
\begin{abstract}
\noindent{We prove a sharp higher differentiability result for local minimizers of functionals of the form
$$\mathcal{F}\left(w,\Omega\right)=\int_{\Omega}\left[ F\left(x,Dw(x)\right)-f(x)\cdot
w(x)\right]dx$$
with {non autonomous} integrand $F(x,\xi)$ which is convex with respect to the gradient variable, under $p$-growth conditions, with $1<p<2$. The main novelty here is that the results are obtained assuming that the partial map $x\mapsto D_\xi F(x,\xi)$ has weak derivatives in some Lebesgue space $L^q$ and the datum $f$ is assumed to belong to a suitable Lebesgue space $L^r$.\\
We also prove that it is possible to weaken the assumption on the datum $f$ and on the map $x\mapsto D_\xi F(x,\xi)$, if the minimizers are assumed to be a priori bounded.}
\end{abstract}
{\footnotesize{ \emph{Mathematics Subject
Classification}. {35J47; 35J70; 49N60.}
{\it Key words and phrases}. {Convex functionals;
Lipschitz regularity; Higher differentiability. }}}
\section{Introduction}
Let $\Omega\subset\numberset{R}^n$ be a bounded open set, with $n>2$. For $N\geq 1$ and $w:\Omega\to \numberset{R}^N$, we consider the following non-autonomous, non-homogeneous functional
\begin{equation}\label{modenergy}
\mathcal{F}\left(w,\Omega\right)=\int_{\Omega}\left[ F\left(x,Dw(x)\right)-f(x)\cdot
w(x)\right]dx
\end{equation} In this paper we deal with the regularity properties of local minimizers of $\mathcal{F}$. To fix the ideas, we assume that $f\in L^r_\mathrm{loc}(\Omega, \numberset{R}^N )$ is given, and $2<r<n$. The Carath\'eodory function $F: \Omega\times \numberset{R}^{n\times
N}\to [0,+\infty)$ is such that $\xi\mapsto F(x,\xi) $ is $C^2\left(\numberset{R}^{n\times N}\right)$ for a.e. $x\in\Omega$. Moreover, we assume that there exist real numbers $p\in(1, 2)$ and $\mu\in[0, 1]$ such that the following set of assumptions is satisfied: \begin{itemize} \item There exist positive constants $\ell, L$ such that \begin{equation}\label{F1}
\ell \left(\mu^2+\left|\xi\right|^2\right)^\frac{p}{2}\le F(x,\xi)\le L\left(\mu^2+\left|\xi\right|^2\right)^\frac{p}{2} \end{equation} for a. e. $x\in\Omega$ and every $\xi\in\numberset{R}^{n\times N}$. \item There exists a positive constant $\nu>0,$ such that \begin{equation}\label{F2}
\langle D_{\xi}F(x,\xi)-D_{\xi}F(x,\eta),\xi-\eta\rangle \ge \nu\left(\mu^2+\left|\xi\right|^2+\left|\eta\right|^2\right)^\frac{p-2}{2}|\xi-\eta|^{2} \end{equation} for a. e. $x\in\Omega$ and every $\xi, \eta\in \numberset{R}^{n\times N}$. \item There exists a positive constant $L_1>0,$ such that \begin{equation}\label{F3}
\left|D_{\xi}F(x,\xi)-D_{\xi}F(x,\eta)\right| \le L_1\left(\mu^2+\left|\xi\right|^2+\left|\eta\right|^2\right)^\frac{p-2}{2}|\xi-\eta| \end{equation} for a. e. $x\in\Omega$ and every $\xi, \eta\in \numberset{R}^{n\times N}$. \item There exists a non-negative function $g\in L^{q}_{\mathrm{loc}}\left(\Omega\right)$ such that \begin{equation}\label{F4}
\left|D_{\xi}F\left(x,\xi\right)-D_{\xi}F\left(y,\xi\right)\right|\le\left(g(x)+g(y)\right)\left(\mu^2+\left|\xi\right|^2\right)^\frac{p-1}{2}\left|x-y\right| \end{equation} for a.e. $x, y\in\Omega$ and every $\xi\in\numberset{R}^{n\times N}$. \end{itemize} In recent years, there has been great interest in understanding the differentiability of minimizers under weak differentiability assumptions on $F$ in the $x$ variable. Examples of this are the works by Passarelli di Napoli \cite{32,33} for the case $p\ge2$. In this paper, we intend to explain how the situation changes if $p$ lies in the lowest range of growths, that is, $1<p<2$. \\ \\ Among the many previous contributions, we wish to mention the works by Di Benedetto \cite{DiB} and Manfredi \cite{M}, both from the mid '80s, in which it is proven that local minimizers have H\"older continuous derivatives in various settings, all of them involving autonomous functionals $F$ (i.e. not depending on $x$) with growth exponent $p>1$, and with independent term $f=0$. Next, in \cite{gm}, the H\"{o}lder continuity of the gradient of solutions was proved in the case $p\ge2$ with H\"{o}lder continuous coefficients and $f=0$.\\ It was not until the work of Acerbi and Fusco \cite{AF} that an extension of this result was provided for functionals $F$ that may depend on $x$ in a H\"older continuous manner. In order to prove the Lipschitz continuity of local minimizers, in the same paper (\cite{AF}), for $1<p<2$ but still for $f=0$, the second order regularity of the local minimizers is established in the case of constant coefficients. Very recently, A. Gentile \cite{Gentile2} extended the higher differentiability result of \cite{AF} to the case when the dependence of $F$ on $x$ is of Sobolev type. 
In the presence of a bounded independent term $f$, Tolksdorf studied functionals $F$ that are Lipschitz continuous in the space variable, and obtained that local minimizers are H\"older continuous.\\ In the case of degenerate elliptic functionals, we mention the recent papers \cite{BRASCO2010652, COLOMBO201494, ColomboFigalli1}, in which Lipschitz regularity of solutions is established assuming $f\in L^r$, with $r>n$. \\ In the present paper, our first goal is to obtain for the minimizers not H\"older continuity of the first derivatives, but second order derivatives in $L^p$. We wish to do this in two ways. First, by requiring of $F$ the minimal possible regularity in $x$. Second, by finding optimal conditions on $f$ that make the whole scheme work.\\ \\
Actually, the first result we prove in this paper is the following. Below, we denote $V_p(\xi)=\left(\mu^2+|\xi|^2\right)^\frac{p-2}{4}\xi$.
\begin{thm}\label{CGPThm1} Let $\Omega\subset\numberset{R}^n$ be a bounded open set, and $1<p<2$. Let $u\in W^{1, p}_{\mathrm{loc}}(\Omega, \numberset{R}^N)$ be a local minimizer of the functional \eqref{modenergy}, under the assumptions \eqref{F1}--\eqref{F4}, with
$$f\in L^{\frac{np}{n(p-1)+2-p}}_{\mathrm{loc}}\left(\Omega\right)\qquad\mbox{ and }\qquad g\in L^{n}_{\mathrm{loc}}\left(\Omega\right).$$
Then $V_p\left(Du\right)\in W^{1, 2}_{\mathrm{loc}}\left(\Omega\right)$, and the estimate \begin{equation}\label{mainestimateVp} \aligned
&\int_{B_{\frac{R}{2}}}\left|D\left(V_p\left(Du(x)\right)\right)\right|^2dx \\
&\leq\frac{c}{R^{\beta\left(n, p\right)}}\left[\int_{B_{R}} \left(\mu^2+\left|Du(x)\right|^2\right)^\frac{p}{2}dx +\int_{B_R} |f(x)|^\frac{np}{n\left(p-1\right)+2-p}dx +\int_{B_{R}}g^{n}(x)dx +\left|B_R\right|\right], \endaligned \end{equation} holds true for any ball $B_{R}\Subset\Omega$, with $\beta(n, p)>0.$ \end{thm}
At this point, it is worth mentioning that $|D^2u|\simeq\left|D\left(V_p\left(Du\right)\right)\right|\left(\mu^2+\left|Du\right|^2\right)^\frac{2-p}{4}$ so that the above theorem establishes a weighted estimate for the second derivative. Furthermore, from Young's inequality one has that $\left|D^2u\right|^p\simeq\left|D\left(V_p\left(Du\right)\right)\right|^2+\left(\mu^2+\left|Du\right|^2\right)^\frac{p}{2}$ so that, in particular, $V_p\left(Du\right)\in W^{1,2}$ implies $u\in W^{2,p}$. See Section \ref{auxiliaryfunction} for details.
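Explicitly, the Young's inequality step is the elementary estimate with conjugate exponents $\frac{2}{p}$ and $\frac{2}{2-p}$: for a constant $c=c(p)$,
$$
\left|D^2u\right|^p\le c\left|D\left(V_p\left(Du\right)\right)\right|^p\left(\mu^2+\left|Du\right|^2\right)^{\frac{(2-p)p}{4}}\le c\left(\frac{p}{2}\left|D\left(V_p\left(Du\right)\right)\right|^{2}+\frac{2-p}{2}\left(\mu^2+\left|Du\right|^2\right)^{\frac{p}{2}}\right).
$$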
Also, for what concerns the assumptions on $f$, we observe that $$ 2<\frac{np}{n\left(p-1\right)+2-p}<n, $$ for any $n>2$ and $1<p<2$. The proof of Theorem \ref{CGPThm1} is based on the combination of an a priori estimate and a suitable approximation argument. In order to achieve the a priori estimate under the sharp integrability assumption on the independent term $f$, besides the use of the difference quotient method, we need to apply carefully the well-known iteration lemma on concentric balls to control the terms with critical integrability. We would like to mention that a higher differentiability result in the subquadratic non-standard growth case has recently been obtained in \cite{MPdN} with independent term $f\in L^{\frac{p}{p-1}}$. In particular, note that $$\frac{np}{n(p-1)+2-p}<\frac{p}{p-1}\iff 1<p<2.$$ Also, in the case of degenerate elliptic equations with sub-quadratic growth, in the recent paper \cite{AMBROSIO2022125636}, a fractional higher differentiability result has been obtained under a Besov or Sobolev assumption on the datum $f$.\\ In \cite{CGHP}, a higher differentiability result has been established for local minimizers of \eqref{modenergy} in the case $p\ge 2$ and for $f\in L^n\log^\alpha L$, with $\alpha>0$. For other Lipschitz regularity and higher differentiability results for solutions to non-homogeneous elliptic problems we also refer to \cite{Beck-Mingione, de2021lipschitz}, and to \cite{kuusi2018vectorial} for regularity results for solutions to problems with measure data. \\ We want to stress that, for what concerns the regularity of the function $f$, Theorem \ref{CGPThm1} is a sharp result: it is not possible to weaken the assumption on the integrability of the datum, as we will see with a counterexample in Section \ref{Counterexample} below.\\ \\ Independently of the previous problem, a new interest has arisen in recent years. It consists in describing the regularity properties of local minimizers which one assumes a priori bounded. 
The reason for this is that many times local boundedness of minimizers is available much before any sort of weak differentiability. Also, as will be clear from our results, a priori boundedness of minimizers helps in relaxing the assumptions on $f$, at least when $n$ is not too small. Results in this direction are available so far just for $p\ge2$. An example of this can be found in Carozza, Kristensen and Passarelli di Napoli \cite{CKP}, where $F$ is assumed Lipschitz-continuous in $x$ (see also \cite{capone2020regularity}). A similar analysis has been done by Giova and Passarelli di Napoli in \cite{GiovaPassarelli}, assuming only Sobolev regularity for $F$ in the $x$ variable. The second goal in the present paper consists in exploring whether this kind of result is also available under the assumption $1<p<2$. We obtained the following.
\begin{thm}\label{inftythm}
Let $\Omega\subset\numberset{R}^n$ be a bounded open set, $1<p<2$ and $u\in W^{1, p}_{\mathrm{loc}}\left(\Omega, \numberset{R}^N\right)\cap L^\infty_{\mathrm{loc}}\left(\Omega\right)$ be a local minimizer of the functional \eqref{modenergy} under assumptions \eqref{F1}--\eqref{F4} with
$$f\in L^{\frac{p+2}{p}}_{\mathrm{loc}}\left(\Omega\right)\qquad\mbox{ and }\qquad g\in L^{p+2}_{\mathrm{loc}}\left(\Omega\right).$$
Then $V_p\left(Du\right)\in W^{1,2}_{\mathrm{loc}}\left(\Omega\right)$ and the estimate \begin{equation}\label{inftyestimate} \aligned
&\int_{B_{\frac{R}{2}}}\left|D\left(V_p \left(Du(x)\right) \right)\right|^2dx\\
&\leq\frac{c\|u\|_{L^\infty(B_{4R})}}{R^{\frac{p+2}{p}}} \left[
\int_{B_{4R}} \left(\mu^2+\left|Du(x)\right|^2\right)^\frac{p}{2}dx+\int_{B_{R}}g^{p+2}(x)dx+\int_{B_R}\left|f(x)\right|^\frac{p+2}{p}dx+\left|B_R\right|+1 \right]
\endaligned
\end{equation}
holds for any ball $B_{4R}\Subset\Omega$. \end{thm} Notice here that for $n>2$ and $1<p<2$ we have $2<\frac{p+2}{p}<n$. Also, by comparing the assumptions on $f$ in Theorems \ref{CGPThm1} and \ref{inftythm}, it is worth noticing that for $1<p<2$ one has $$ \frac{p+2}{p}<\frac{np}{n(p-1)+2-p}\hspace{1cm}\Longleftrightarrow\hspace{1cm}n>p+2. $$ Therefore, Theorem \ref{inftythm} improves Theorem \ref{CGPThm1} whenever $n\ge4$.\\ In proving Theorem \ref{inftythm}, once the a priori estimate is established, the more delicate issue is to construct the approximating problems in a convenient way. For this, the approximating problems need to be smooth with respect to the dependence on the $x$-variable and on the datum $f$. Also, they need to have minimizers whose $L^{r}$ norm is close to the $L^\infty$ norm of the minimizer of the original problem for $r$ sufficiently large. We overcome these difficulties by using the penalization method introduced in \cite{CKP}. Still, we need to prove second order estimates for the approximating minimizers which, as far as we know, are available only for $p\ge 2$ (see \cite{GiovaPassarelli}). \\ The paper is organized as follows. In Section 2 we collect definitions and preliminary results. Section 3 contains the proof of Theorem \ref{CGPThm1}. In Section 4, we give the counterexample showing the optimality of the assumption on the datum. In Section 5, we give the proof of the higher differentiability of the local minimizers of a class of variational integrals with a singular penalisation term. In Section 6, we give the proof of Theorem \ref{inftythm}.
\section{Preliminary results}\label{preliminaryresults}
We will follow the usual convention and denote by $c$ or $C$ a
general constant that may vary on different occasions, even within
the same line of estimates. Relevant dependencies on parameters and
special constants will be suitably emphasized using parentheses or
subscripts. All the norms we use will be the standard Euclidean
ones and denoted by $|\cdot |$ in all cases. In particular, for
matrices $\xi$, $\eta \in \numberset{R}^{n\times N}$ we write $\langle \xi,
\eta \rangle : = \text{trace} (\xi^T \eta)$ for the usual inner
product of $\xi$ and $\eta$, and $| \xi | : = \langle \xi,
\xi\rangle^{\frac{1}{2}}$ for the corresponding euclidean norm. By
$B_r(x)$ we will denote the ball in $\numberset{R}^n$ centered at $x$
of radius $r$. The integral mean of a function $u$ over a ball
$B_r(x)$ will be denoted by $u_{x,r}$, that is
$$ u_{x,r}:=\frac{1}{\left|B_r(x)\right|}\int_{B_r(x)}u(y)dy,$$
where $|B_r(x)|$ is the Lebesgue measure of the ball in
$\mathbb{R}^{n}$. If no confusion arises, we shall omit the
dependence on the center.
The following lemma has important applications in the so called
hole-filling method. Its proof can be found, for example, in
\cite[Lemma 6.1]{23}.
\begin{lemma}\label{iter} Let $h:[r, R_{0}]\to \mathbb{R}$ be a nonnegative bounded function and $0<\theta<1$,
$A, B\ge 0$ and $\gamma>0$. Assume that
$$
h(s)\leq \theta h(t)+\frac{A}{\left(t-s\right)^{\gamma}}+B,
$$
for all $r\leq s<t\leq R_{0}$. Then
$$
h(r)\leq \frac{c A}{(R_{0}-r)^{\gamma}}+cB ,
$$
where $c=c(\theta, \gamma)>0$.
\end{lemma}
The following Gagliardo--Nirenberg type inequalities are stated in \cite{GiovaPassarelli}. For the proofs, see Appendix A of \cite{CKP} and Lemma 3.5 in \cite{GiannettiPassa} (in the case $p(x)\equiv p$ for all $x$), respectively.
\begin{lemma}\label{lemma5GP}
For any $\phi\in C_0^1(\Omega)$ with $\phi\ge0$, $\mu\in[0, 1]$, and any $C^2$ map $v:\Omega\to\numberset{R}^N$, we have
\begin{eqnarray}\label{2.1GP}
&&\int_\Omega\phi^{\frac{m}{m+1}(p+2)}(x)\left|Dv(x)\right|^{\frac{m}{m+1}(p+2)}dx\cr\cr
&\le&(p+2)^2\left(\int_\Omega\phi^{\frac{m}{m+1}(p+2)}(x)\left|v(x)\right|^{2m}dx\right)^\frac{1}{m+1}\cr\cr
&&\cdot\left[\left(\int_\Omega\phi^{\frac{m}{m+1}(p+2)}(x)\left|D\phi(x)\right|^2\left(\mu^2+\left|Dv(x)\right|^2\right)^\frac{p}{2}dx\right)^\frac{m}{m+1}\right.\cr\cr
&&\left.+n\left(\int_\Omega\phi^{\frac{m}{m+1}(p+2)}(x)\left(\mu^2+\left|Dv(x)\right|^2\right)^\frac{p-2}{2}\left|D^2v(x)\right|^2dx\right)^\frac{m}{m+1}\right],
\end{eqnarray}
for any $p\in(1, \infty)$ and $m>1$. Moreover
\begin{eqnarray}\label{2.2GP}
&&\int_{\Omega}\phi^2(x)\left(\mu^2+\left|Dv(x)\right|^2\right)^\frac{p}{2}\left|Dv(x)\right|^2dx\cr\cr
&\le&c\left\Arrowvert v \right\Arrowvert_{L^\infty\left(\mathrm{supp}(\phi)\right)}^2\int_\Omega\phi^2(x)\left(\mu^2+\left|Dv(x)\right|^2\right)^\frac{p-2}{2}\left|D^2v(x)\right|^2dx\cr\cr
&&+c\left\Arrowvert v\right\Arrowvert_{L^\infty\left(\mathrm{supp}(\phi)\right)}^2\int_\Omega\left(\phi^2(x)+\left|D\phi(x)\right|^2\right)\left(\mu^2+\left|Dv(x)\right|^2\right)^\frac{p}{2}dx,
\end{eqnarray}
for a constant $c=c(p).$ \end{lemma}
By a density argument, one can easily check that estimates \eqref{2.1GP} and \eqref{2.2GP} are still true for any map $v\in W^{2,p}_{\mathrm{loc}}(\Omega)$ such that $\left(\mu^2+\left|Dv\right|^2\right)^\frac{p-2}{2}\left|D^2v\right|^2\in L^1_{\mathrm{loc}}\left(\Omega\right)$.
For further needs, we recall the following result, whose proof can be found in \cite[Lemma 4.1]{BRASCO2010652}.
\begin{lemma}\label{Lemma8} For any $\delta>0$, $m>1$ and $\xi, \eta\in\numberset{R}^k$, let $$
W(\xi)=\left(\left|\xi\right|-\delta\right)^{2m-1}_+\frac{\xi}{\left|\xi\right|}\qquad\mbox{ and }\qquad\tilde{W}(\xi)=\left(\left|\xi\right|-\delta\right)^{m}_+\frac{\xi}{\left|\xi\right|}. $$
Then there exists a positive constant $c(m)$ such that
$$
\left<W(\xi)-W(\eta), \xi-\eta\right>\ge c(m)\left|\tilde{W}(\xi)-\tilde{W}(\eta)\right|^2. $$
for any $\eta, \xi\in\numberset{R}^k.$ \end{lemma} \subsection{Difference quotients}\label{diffquot} A key instrument in studying regularity properties of solutions to problems of Calculus of Variations and PDEs is the so called {\em difference quotients method}.\\ In this section, we recall the definition and some basic results. \begin{definition}
Given $h\in\numberset{R}$, for every function
$F:\numberset{R}^{n}\to\numberset{R}^N$, for any $s=1,..., n$ the finite difference operator in the direction $x_s$ is
defined by
$$
\tau_{s, h}F(x)=F\left(x+he_s\right)-F(x),
$$
where $e_s$ is the unit vector in the direction $x_s$. \end{definition} In the following, in order to simplify the notations, we will omit the vector $e_s$ unless it is necessary, denoting $$ \tau_{h}F(x)=F(x+h)-F(x), $$ where $h\in\numberset{R}^n$.\\ \par We now describe some properties of the operator $\tau_{h}$, whose proofs can be found, for example, in \cite{23}.
\begin{proposition}\label{findiffpr}
Let $F$ and $G$ be two functions such that $F, G\in
W^{1,p}(\Omega)$, with $p\geq 1$, and let us consider the set
$$
\Omega_{|h|}:=\Set{x\in \Omega : d\left(x,
\partial\Omega\right)>\left|h\right|}.
$$
Then
\begin{description}
\item{$(a)$} $\tau_{h}F\in W^{1,p}\left(\Omega_{|h|}\right)$ and
$$
D_{i} (\tau_{h}F)=\tau_{h}(D_{i}F).
$$
\item{$(b)$} If at least one of the functions $F$ or $G$ has support contained
in $\Omega_{|h|}$ then
$$
\int_{\Omega} F(x) \tau_{h} G(x) dx =\int_{\Omega} G(x) \tau_{-h}F(x)
dx.
$$
\item{$(c)$} We have
$$
\tau_{h}(F G)(x)=F(x+h)\tau_{h}G(x)+G(x)\tau_{h}F(x).
$$
\end{description} \end{proposition}
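The discrete Leibniz rule $(c)$ can be verified directly on a toy example; the sample functions below are arbitrary illustrative choices, and the check is exact since it uses integer arithmetic.

```python
# Verify tau_h(FG)(x) = F(x+h) * tau_h G(x) + G(x) * tau_h F(x)
# on sample data.
def tau(h, F):
    return lambda x: F(x + h) - F(x)

F = lambda x: x ** 2          # illustrative choices
G = lambda x: 3 * x + 1
h, x = 2, 5

lhs = tau(h, lambda y: F(y) * G(y))(x)
rhs = F(x + h) * tau(h, G)(x) + G(x) * tau(h, F)(x)
print(lhs, rhs)  # → 678 678
```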
\noindent The next result about the finite difference operator is a kind of integral version of the Lagrange mean value theorem.
\begin{lemma}\label{le1} If $0<\rho<R$, $|h|<\frac{R-\rho}{2}$, $1<p<+\infty$,
and $F, DF\in L^{p}(B_{R})$ then
$$
\int_{B_{\rho}} |\tau_{h} F(x)|^{p}\ dx\leq c(n,p)|h|^{p}
\int_{B_{R}} |D F(x)|^{p}\ dx .
$$
Moreover
$$
\int_{B_{\rho}} |F(x+h )|^{p}\ dx\leq \int_{B_{R}} |F(x)|^{p}\ dx .
$$ \end{lemma}
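A one-dimensional numerical illustration of the first estimate of Lemma~\ref{le1}: for the smooth periodic function $F(x)=\sin x$ (a toy choice, integrating over a full period instead of nested balls), the constant $c=1$ already suffices, since $|\tau_hF(x)|=2|\sin(h/2)|\,|\cos(x+h/2)|\le|h|\,|\cos(x+h/2)|$.

```python
import math

# Compare  sum |tau_h F|^p dx  with  |h|^p sum |F'|^p dx  over a full period,
# for F(x) = sin x, p = 3/2, h = 1/2 (illustrative choices).
p, h, n = 1.5, 0.5, 10000
step = 2 * math.pi / n
xs = [k * step for k in range(n)]
lhs = sum(abs(math.sin(x + h) - math.sin(x)) ** p for x in xs) * step
rhs = abs(h) ** p * sum(abs(math.cos(x)) ** p for x in xs) * step
print(lhs <= rhs)
```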
The following result is proved in \cite{23}.
\begin{lemma}\label{Giusti8.2}
Let $F:\numberset{R}^n\to\numberset{R}^N$, $F\in L^p\left(B_R\right)$ with $1<p<+\infty$. Suppose that there exist $\rho\in(0, R)$ and $M>0$ such that
$$
\sum_{s=1}^{n}\int_{B_\rho}|\tau_{s, h}F(x)|^pdx\le M^p|h|^p
$$
for $\left|h\right|<\frac{R-\rho}{2}$. Then $F\in W^{1,p}(B_R, \numberset{R}^N)$. Moreover
$$
\left\Arrowvert DF \right\Arrowvert_{L^p(B_\rho)}\le M,
$$
$$
\left\Arrowvert F\right\Arrowvert_{L^{\frac{np}{n-p}}(B_\rho)}\le c\left(M+\left\Arrowvert F\right\Arrowvert_{L^p(B_R)}\right),
$$
with $c=c(n, N, p, \rho, R)$, and
$$\frac{\tau_{s, h}F}{\left|h\right|}\to D_sF\qquad\mbox{ in }L^p_{\mathrm{loc}}\left(\Omega\right),\mbox{ as }h\to0,$$
for each $s=1, ..., n.$ \end{lemma}
\subsection{An auxiliary function}\label{auxiliaryfunction}
Here we define an auxiliary function of the gradient variable that will be useful in the following.\\ We consider the function $V_p:\numberset{R}^{n\times N}\to\numberset{R}^{n\times N}$, defined as
\begin{equation*}\label{Vp}
V_p(\xi):=\left(\mu^2+\left|\xi\right|^2\right)^\frac{p-2}{4}\xi, \end{equation*}
\noindent for which the following estimates hold (see \cite{AF}).
\begin{lemma}\label{lemma6GP}
Let $1<p<2$. There is a constant $c=c(n, p)>0$ such that
\begin{equation}\label{lemma6GPestimate1}
c^{-1}\left|\xi-\eta\right|\le\left|V
_p(\xi)-V_p(\eta)\right|\cdot\left(\mu^2+\left|\xi\right|^2+\left|\eta\right|^2\right)^\frac{2-p}{4}\le c\left|\xi-\eta\right|,
\end{equation} \noindent for any $\xi, \eta\in\numberset{R}^{n\times N}.$ \end{lemma}
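To fix ideas about the role of $V_p$, note that in the degenerate case $\mu=0$ it reduces to the classical normalization map of the $p$-Laplacian theory:

```latex
% For mu = 0 the auxiliary map and its modulus read:
V_p(\xi) = |\xi|^{\frac{p-2}{2}}\,\xi,
\qquad
|V_p(\xi)|^{2} = |\xi|^{p-2}\,|\xi|^{2} = |\xi|^{p}.
```

Hence $V_p(Du)\in L^2_{\mathrm{loc}}$ is equivalent to $Du\in L^p_{\mathrm{loc}}$, and the assumption $V_p(Du)\in W^{1,2}_{\mathrm{loc}}$ encodes a weighted second-order differentiability of $u$.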
\begin{remark}\label{rmk1}
One can easily check that, for a $C^2$ function $v$, there is a constant $C(p)$ such that
\begin{equation}\label{lemma6GPestimate2}
C^{-1}\left|D^2v\right|^2\left(\mu^2+\left|Dv\right|^2\right)^\frac{p-2}{2}\le\left|D\left(V_p\left(Dv\right)\right)\right|^2\le C\left|D^2v\right|^2\left(\mu^2+\left|Dv\right|^2\right)^\frac{p-2}{2}
\end{equation}
almost everywhere. \end{remark}
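The estimate of Remark \ref{rmk1} comes from the chain rule; the following formal computation (rigorous for $v\in C^2$) makes the dependence on $D^2v$ explicit:

```latex
% Differentiating V_p(Dv) componentwise (i = 1,...,N; j,k = 1,...,n):
D_k\bigl(V_p(Dv)\bigr)^i_j
 = \left(\mu^2+|Dv|^2\right)^{\frac{p-2}{4}} D_kD_jv^i
 + \frac{p-2}{2}\left(\mu^2+|Dv|^2\right)^{\frac{p-6}{4}}
   \langle Dv, D_kDv\rangle\, D_jv^i.
```

Since $|Dv|^2\le\mu^2+|Dv|^2$, the second term is controlled by $\frac{2-p}{2}\left(\mu^2+|Dv|^2\right)^{\frac{p-2}{4}}\left|D^2v\right|$, which gives the upper bound in \eqref{lemma6GPestimate2}; a similar elementary computation yields the lower bound.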
The following result will be useful in what follows.
\begin{lemma}\label{differentiabilitylemma}
Let $\Omega\subset\numberset{R}^n$ be a bounded open set, $1<p<2$, and $v\in W^{1, p}_ {\mathrm{loc}}\left(\Omega, \numberset{R}^N\right)$. Then the implication
\begin{equation*}\label{differentiabilityimplication}
V_p\left(Dv\right)\in W^{1,2}_{\mathrm{loc}}\left(\Omega\right) \implies v\in W^{2,p}_{\mathrm{loc}}\left(\Omega\right)
\end{equation*}
holds true, together with the estimate
\begin{equation}\label{differentiabilityestimate}
\int_{B_{r}}\left|D^2v(x)\right|^pdx
\le c \left[1+\int_{B_{R}}\left|D\left(V_p\left(Dv(x)\right)\right)\right|^2dx+c\int_{B_R}\left|Dv(x)\right|^pdx\right]
\end{equation}
which holds for any ball $B_R\Subset\Omega$ and $0<r<R$. \end{lemma}
\begin{proof}
We will prove the existence of the second-order weak derivatives of $v$ and the fact that they are in $L^p_{\mathrm{loc}}\left(\Omega\right)$, by means of the difference quotients method.\\
Let us consider a ball $B_R\Subset\Omega$ and $0<\frac{R}{2}<r<R$.\\
For $|h|<\frac{R-r}{2}$, we have $0<\frac{R}{2}<r<\rho_1:=r+|h|<R-|h|=:\rho_2<R$, and by \eqref{lemma6GPestimate1}, we get, for any $s=1, \ldots, n$,
\begin{eqnarray*}
\int_{B_r}\left|\tau_{s, h}Dv(x)\right|^pdx&\le& c\int_{B_r}\left|\tau_{s, h}V_p\left(Dv(x)\right)\right|^p\cdot\left(\mu^2+\left|Dv(x)\right|^2+\left|Dv\left(x+he_s\right)\right|^2\right)^\frac{p\left(2-p\right)}{4}dx.
\end{eqnarray*}
By H\"{o}lder's Inequality with exponents $\left(\frac{2}{p}, \frac{2}{2-p}\right)$, we get
\begin{eqnarray*}
\int_{B_r}\left|\tau_{s, h}Dv(x)\right|^pdx &\le&c\left(\int_{B_r}\left|\tau_{s, h}V_p\left(Dv(x)\right)\right|^2dx\right)^\frac{p}{2}\cr\cr
&&\cdot\left(\int_{B_{r}}\left(\mu^2+\left|Dv\left(x+he_s\right)\right|^2+\left|Dv\left(x\right)\right|^2\right)^\frac{p}{2}dx\right)^\frac{2-p}{2},
\end{eqnarray*}
and since $V_p\left(Dv\right)\in W^{1,2}_{\mathrm{loc}}\left(\Omega\right)$, by Lemma \ref{le1} and Young's Inequality, we have
\begin{eqnarray*}\label{lemmaestimate1}
\int_{B_r}\left|\tau_{s, h}Dv(x)\right|^pdx&\le& c\left[|h|^2\int_{B_R}\left|DV_p\left(Dv(x)\right)\right|^2dx\right]^\frac{p}{2}\cr\cr
&&\cdot\left[\int_{B_r}\left(\mu^2+\left|Dv\left(x+he_s\right)\right|^2+\left|Dv(x)\right|^2\right)^\frac{p}{2}dx\right]^\frac{2-p}{2}\cr\cr
&\le&c|h|^p\left[1+\int_{B_{R}}\left|DV_p\left(Dv(x)\right)\right|^2dx+\int_{B_R}\left|Dv(x)\right|^pdx\right].
\end{eqnarray*}
Since $v\in W^{1,p}_{\mathrm{loc}}\left(\Omega\right)$ and $V_p\left(Dv\right)\in W^{1,2}_{\mathrm{loc}}\left(\Omega\right)$, then, by Lemma \ref{Giusti8.2}, we get $v\in W^{2,p}_{\mathrm{loc}}\left(\Omega\right)$, and we have
\begin{equation*}\label{differentiabilityestimate1}
\int_{B_{r}}\left|D^2v(x)\right|^pdx
\le c \left[1+\int_{B_{R}}\left|DV_p\left(Dv(x)\right)\right|^2dx+c\int_{B_R}\left|Dv(x)\right|^pdx\right],
\end{equation*}
that is the conclusion. \end{proof}
\begin{remark}\label{rmk2}
If $\Omega\subset\numberset{R}^n$ is a bounded open set and $1<p<2$, then one may use Remark \ref{rmk1} and Lemma \ref{differentiabilitylemma} to show that, if $v\in W^{1, p}_{\mathrm{loc}}\left(\Omega\right)$ and $V_p\left(Dv\right)\in W^{1,2}_{\mathrm{loc}}\left(\Omega\right)$, then $v\in W^{2, p}_{\mathrm{loc}}\left(\Omega\right)$ and \eqref{lemma6GPestimate2} holds true.
\end{remark}
\begin{remark}\label{rmk3}
If $\Omega\subset\numberset{R}^n$ is a bounded open set and $p\in\left(1, \infty\right)$, for any $v\in W^{1, p}_{\mathrm{loc}}\left(\Omega\right)$ such that $V_p\left(Dv\right)\in W^{1,2}_{\mathrm{loc}}\left(\Omega\right),$ if $m>1$ and $v\in L^{2m}_{\mathrm{loc}}\left(\Omega\right)$, then, thanks to \eqref{2.1GP}, $Dv\in L^{\frac{m\left(p+2\right)}{m+1}}_{\mathrm{loc}}\left(\Omega\right)$ and if $v\in L^{\infty}_{\mathrm{loc}}\left(\Omega\right)$, thanks to \eqref{2.2GP}, we get $Dv\in L^{p+2}_{\mathrm{loc}}\left(\Omega\right).$
\end{remark}
\begin{remark}\label{rem} For further needs we record the following elementary inequality
\begin{equation}\label{elem}
\left(\mu^2+\left|\xi\right|^2\right)^{\frac{p}{2}}\le 2\left(\mu^p+ \left|V_p(\xi)\right|^2\right)
\end{equation}
for every $\xi\in \mathbb{R}^{n\times N}$.
\\
Note that this is obvious if $\mu=0$.
In case $\mu>0$, we distinguish two cases.
If $\left|\xi\right|\le \mu$, we trivially have
$$\left(\mu^2+\left|\xi\right|^2\right)^{\frac{p}{2}}\le 2^{\frac{p}{2}}\mu^p\le 2\mu^p,$$
since $p<2$.
If $\left|\xi\right|> \mu$, we have
\begin{eqnarray*}
\left(\mu^2+\left|\xi\right|^2\right)^{\frac{p}{2}}&=&\left(\mu^2+\left|\xi\right|^2\right)^{\frac{p-2}{2}}\left(\mu^2+\left|\xi\right|^2\right)\cr\cr
&\le& \left(\mu^2+\left|\xi\right|^2\right)^{\frac{p-2}{2}}\left(\left|\xi\right|^2+\left|\xi\right|^2\right)\le 2\left(\mu^2+\left|\xi\right|^2\right)^{\frac{p-2}{2}}\left|\xi\right|^2\cr\cr
&\le& 2\left|V_p(\xi)\right|^2.
\end{eqnarray*}
Joining the two previous inequalities, we get \eqref{elem}.\\
Moreover, if $V_p\left(Du\right)\in W^{1,2}_\mathrm{loc}\left(\Omega\right)$, by Sobolev's Inequality, we have $Du\in L^\frac{np}{n-2}_\mathrm{loc}\left(\Omega\right)=L^\frac{2^*p}{2}_\mathrm{loc}\left(\Omega\right)$. Indeed, using \eqref{elem}, we get
\begin{eqnarray}\label{stinorma}
&&\int_{B_R}\left|Du(x)\right|^{\frac{2^*p}{2}}\,dx=\int_{B_R}\left|\left|Du(x)\right|^{\frac{p}{2}-1}Du(x)\right|^{2^*}dx\cr\cr
&\le&\mu^{\frac{2^*p}{2}}\left|B_R\right|+\int_{\Set{x\in B_R:\left|Du\right|> \mu}}\left|V_p\left(Du(x)\right)\right|^{2^*}dx\cr\cr
&\le&\mu^{\frac{2^*p}{2}}\left|B_R\right|+\int_{B_R}\left|V_p\left(Du(x)\right)\right|^{2^*}dx,
\end{eqnarray}
which is finite by Sobolev's embedding theorem, for any ball $B_R\Subset\Omega$. \end{remark}
\section{Proof of Theorem \ref{CGPThm1}}\label{Thm1pf} We prove Theorem \ref{CGPThm1}, dividing the proof into two steps. The first step consists in proving an estimate under the a priori assumption $V_p\left(Du\right)\in W^{1, 2}_{\mathrm{loc}}\left(\Omega\right)$.\\ In the second step, we use an approximation argument, considering a regularized version of the functional, to whose minimizers we can apply the a priori estimate. We then conclude by proving that this estimate is preserved in the passage to the limit.\\ Before entering into the details of the proof, we stress that the approximation procedure is needed because of the assumptions on the function $g$ and on the datum $f$. If we had $f\in L^\infty_{\mathrm{loc}}\left(\Omega\right)$ and $g\in L^\infty_{\mathrm{loc}}\left(\Omega\right)$, it would be sufficient to apply the difference-quotient method to get $V_p\left(Du\right)\in W^{1,2}_\mathrm{loc}\left(\Omega\right)$ (see, for example, \cite{AF} and \cite{Tolksdorff}).
\begin{proof}[Proof of Theorem \ref{CGPThm1}] {\bf Step 1: the a priori estimate.}\\
Our first step consists in proving that, if $u\in W^{1, p}_{\mathrm{loc}}\left(\Omega, \numberset{R}^N\right)$ is a local minimizer of $\mathcal{F}$ such that
$$
V_p\left(Du\right)\in W^{1, 2}_{\mathrm{loc}}\left(\Omega\right),
$$
estimate \eqref{mainestimateVp} holds.\\
Since $u\in W^{1, p}_{\mathrm{loc}}\left(\Omega, \numberset{R}^N\right)$ is a local minimizer of $\mathcal{F}$, it solves the corresponding Euler-Lagrange system, that is, for any $\psi\in C^{\infty}_{0}\left(\Omega,\numberset{R}^{N}\right)$, we have \begin{equation}\label{EL}
\int_{\Omega}\langle D_\xi F\left(x,Du(x)\right),D\psi(x)\rangle dx=\int_{\Omega}f(x)\cdot\psi(x)dx. \end{equation} Let us fix a ball $B_{R}\Subset \Omega$ and arbitrary radii $\frac{R}{2}\le r<\tilde{s}<t<\tilde{t}<\lambda r<R,$ with $1<\lambda<2$. Let us consider a cut-off function $\eta\in C^\infty_0\left(B_t\right)$ such that $\eta\equiv 1$ on
$B_{\tilde{s}}$, $\left|D \eta\right|\le \frac{c}{t-\tilde{s}}$ and $\left|D^2 \eta\right|\le \frac{c}{\left(t-\tilde{s}\right)^2}$. From now on, with no loss of generality, we suppose $R<1$. For $\left|h\right|$ sufficiently small, we can choose, for any $s=1, ..., n$
$$\psi=\tau_{s, -h}\left(\eta^2\tau_{s, h}u\right)$$
as a test function in \eqref{EL}, and by Proposition \ref{findiffpr}, we get \begin{eqnarray*}
&&\int_\Omega \left<\tau_{s, h}D_\xi F\left(x, Du(x)\right), D\left(\eta^2(x)\tau_{s, h}u(x)\right)\right>dx\cr\cr
&=&\int_\Omega f(x)\cdot\tau_{s, -h}\left(\eta^2(x)\tau_{s, h}u(x)\right)dx, \end{eqnarray*} that is \begin{eqnarray*}
I_1&=&\int_\Omega \left<D_\xi F\left(x+he_s, Du\left(x+he_s\right)\right)-D_\xi F\left(x+he_s, Du(x)\right), \eta^2(x)\tau_{s, h}Du(x)\right>dx\cr\cr
&=&-\int_\Omega \left<D_\xi F\left(x+he_s, Du(x)\right)-D_\xi F\left(x, Du(x)\right), \eta^2(x)\tau_{s, h}Du(x)\right>dx\cr\cr
&&-2\int_\Omega \left<D_\xi F\left(x+he_s, Du\left(x+he_s\right)\right)-D_\xi F\left(x, Du(x)\right), \eta(x)D\eta(x)\otimes\tau_{s, h}u(x)\right>dx\cr\cr
&&+\int_\Omega f(x)\cdot\tau_{s, -h}\left(\eta^2(x)\tau_{s, h}u(x)\right)dx\cr\cr
&:=&-I_2-I_3+I_4. \end{eqnarray*}
Therefore
\begin{equation}\label{fullestimate}
I_1\le\left|I_2\right|+\left|I_3\right|+\left|I_4\right|. \end{equation}
By assumption \eqref{F2}, we get
\begin{equation}\label{I_1}
I_1\ge\nu\int_\Omega\eta^2(x)\left(\mu^2+\left|Du(x)\right|^2+\left|Du\left(x+he_s\right)\right|^2\right)^\frac{p-2}{2}\left|\tau_{s, h}Du(x)\right|^2dx. \end{equation}
For what concerns the term $I_2$, by \eqref{F4} and Young's Inequality with exponents $\left(2, 2\right)$, for any $\varepsilon>0$, we have
\begin{eqnarray*}\label{I_2*}
\left|I_2\right|&\le&\left|h\right|\int_\Omega\eta^2(x)\left(g(x)+g(x+he_s)\right)\left(\mu^2+\left|Du(x)\right|^2+\left|Du\left(x+he_s\right)\right|^2\right)^\frac{p-1}{2}\left|\tau_{s, h}Du(x)\right|dx\cr\cr
&\le&\varepsilon\int_\Omega\eta^2(x)\left(\mu^2+\left|Du(x)\right|^2+\left|Du\left(x+he_s\right)\right|^2\right)^\frac{p-2}{2}\left|\tau_{s, h}Du(x)\right|^2dx\cr\cr
&&+c_\varepsilon\left|h\right|^2\int_{\Omega}\eta^2(x)\left(g(x)+g(x+he_s)\right)^2\left(\mu^2+\left|Du(x)\right|^2+\left|Du\left(x+he_s\right)\right|^2\right)^\frac{p}{2}dx. \end{eqnarray*}
Now, by the assumption $g\in L^n_\mathrm{loc}\left(\Omega\right)$, we can use H\"older's inequality with exponents $\left(\frac{n}{2}, \frac{n}{n-2}\right)$ and by the properties of $\eta$ and Lemma \ref{le1}, we get
\begin{eqnarray}\label{I_2}
\left|I_2\right|&\le&\varepsilon\int_\Omega\eta^2(x)\left(\mu^2+\left|Du(x)\right|^2+\left|Du\left(x+he_s\right)\right|^2\right)^\frac{p-2}{2}\left|\tau_{s, h}Du(x)\right|^2dx\cr\cr
&&+c_\varepsilon\left|h\right|^2\left(\int_{B_t}\left(\mu^2+\left|Du(x)\right|^2+\left|Du\left(x+he_s\right)\right|^2\right)^\frac{np}{2\left(n-2\right)}dx\right)^\frac{n-2}{n}\cr\cr
&&\cdot\left(\int_{B_t}\left(g(x)+g(x+he_s)\right)^{n}dx\right)^\frac{2}{n}\cr\cr
&\le&\varepsilon\int_\Omega\eta^2(x)\left(\mu^2+\left|Du(x)\right|^2+\left|Du\left(x+he_s\right)\right|^2\right)^\frac{p-2}{2}\left|\tau_{s, h}Du(x)\right|^2dx\cr\cr
&&+c_\varepsilon\left|h\right|^2\left(\int_{B_{\lambda r}}\left(\mu^2+\left|Du(x)\right|^2\right)^\frac{np}{2\left(n-2\right)}dx\right)^\frac{n-2}{n}\cdot\left(\int_{B_{\lambda r}}g^{n}(x)dx\right)^\frac{2}{n}. \end{eqnarray}
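The exponents chosen in \eqref{I_2} reflect the available integrability; the following bookkeeping (a check, not an additional assumption) clarifies why $g\in L^n_{\mathrm{loc}}$ is the natural scale here:

```latex
% Hoelder pairing for the estimate of I_2:
\frac{2}{n}+\frac{n-2}{n}=1,
\qquad
g\in L^{n}_{\mathrm{loc}} \implies g^{2}\in L^{\frac{n}{2}}_{\mathrm{loc}},
\qquad
\left(\mu^2+|Du|^2\right)^{\frac{p}{2}}\in L^{\frac{n}{n-2}}_{\mathrm{loc}}
\iff Du\in L^{\frac{np}{n-2}}_{\mathrm{loc}},
```

and the last condition holds thanks to \eqref{stinorma}, since $V_p\left(Du\right)\in W^{1,2}_{\mathrm{loc}}\subset L^{2^*}_{\mathrm{loc}}$ with $2^*=\frac{2n}{n-2}$.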
Let us now consider the term $I_3$. We have
\begin{eqnarray*}
I_3&=&2\int_\Omega \left<\tau_{s, h}\left[D_\xi F\left(x, Du(x)\right)\right], \eta(x)D\eta(x)\otimes\tau_{s, h}u(x)\right>dx\cr\cr
&=&2\int_\Omega \left<D_\xi F\left(x, Du(x)\right), \tau_{s, -h}\left[\eta(x)D\eta(x)\otimes\tau_{s, h}u(x)\right]\right>dx, \end{eqnarray*}
so, by \eqref{F1}, we get
\begin{eqnarray}\label{I_3*}
\left|I_3\right|&\le&c\int_\Omega \left(\mu^2+\left|Du(x)\right|^2\right)^\frac{p-1}{2}\left|\tau_{s, -h}\left[\eta(x)D\eta(x)\otimes\tau_{s, h}u(x)\right]\right|dx \end{eqnarray} and since, for any $x\in\mathrm{supp}(\eta)$ such that $x+he_s, x-he_s\in\mathrm{supp}(\eta)$, recalling the properties of $\eta$, we have
\begin{eqnarray}\label{tau1}
\left|\tau_{s, -h}\left[\eta(x)D\eta(x)\otimes\tau_{s, h}u(x)\right]\right|&\le&\left|\tau_{s, -h}\eta(x)\cdot D\eta\left(x-he_s\right)\otimes\tau_{s, h}u\left(x-he_s\right)\right|\cr\cr
&&+\left|\eta(x)\tau_{s, -h}D\eta(x)\otimes\tau_{s, h}u\left(x-he_s\right)\right|\cr\cr
&&+\left|\eta(x)D\eta(x)\otimes\tau_{s, -h}\tau_{s, h}u(x)\right|\cr\cr
&\le&\frac{c\left|h\right|}{\left(t-\tilde{s}\right)^2}\left|\tau_{s, h}u\left(x-he_s\right)\right|\cr\cr
&&+\frac{c\left|h\right|}{t-\tilde{s}}\eta(x)\left|\tau_{s, -h}\tau_{s,h}u(x)\right|. \end{eqnarray}
Inserting \eqref{tau1} into \eqref{I_3*}, we get
\begin{eqnarray}\label{I_3**}
\left|I_3\right|&\le&\frac{c\left|h\right|}{\left(t-\tilde{s}\right)^2}\int_{B_{t}}\left(\mu^2+\left|Du(x)\right|^2\right)^\frac{p-1}{2}\left|\tau_{s, h}u\left(x-he_s\right)\right|dx\cr\cr
&&+\frac{c\left|h\right|}{t-\tilde{s}}\int_{\Omega}\eta(x)\left(\mu^2+\left|Du(x)\right|^2\right)^\frac{p-1}{2}\left|\tau_{s, -h}\tau_{s, h}u(x)\right|dx, \end{eqnarray}
and by H\"older's Inequality with exponents $\left(p, \frac{p}{p-1}\right)$ and the properties of $\eta$, \eqref{I_3**} becomes \begin{eqnarray*}
\left|I_3\right|&\le&\frac{c\left|h\right|}{\left(t-\tilde{s}\right)^2}\left(\int_{B_{t}}\left(\mu^2+\left|Du(x)\right|^2\right)^\frac{p}{2}dx\right)^\frac{p-1}{p}\left(\int_{B_{t}}\left|\tau_{s, h}u\left(x-he_s\right)\right|^pdx\right)^\frac{1}{p}\cr\cr
&&+\frac{c\left|h\right|}{t-\tilde{s}}\left(\int_{B_t}\left(\mu^2+\left|Du(x)\right|^2+\left|Du\left(x+he_s\right)\right|^2\right)^\frac{p}{2}dx\right)^\frac{p-1}{p}\cdot\left(\int_{B_t}\left|\tau_{s, -h}\tau_{s, h}u(x)\right|^pdx\right)^\frac{1}{p}. \end{eqnarray*} Now, by virtue of Lemma \ref{le1}, and using \eqref{lemma6GPestimate1}, we get
\begin{eqnarray}\label{I_3'}
\left|I_3\right|&\le&\frac{c\left|h\right|^2}{\left(t-\tilde{s}\right)^2}\int_{B_{\tilde{t}}}\left(\mu^2+\left|Du(x)\right|^2\right)^\frac{p}{2}dx\cr\cr
&&+\frac{c\left|h\right|^2}{t-\tilde{s}}\left(\int_{B_t}\left(\mu^2+\left|Du(x)\right|^2+\left|Du\left(x+he_s\right)\right|^2\right)^\frac{p}{2}dx\right)^\frac{p-1}{p}\cr\cr
&&\cdot\left(\int_{B_{\tilde{t}}}\left|\tau_{s, h}Du(x)\right|^pdx\right)^\frac{1}{p}\cr\cr
&\le&\frac{c\left|h\right|^2}{\left(t-\tilde{s}\right)^2}\int_{B_{\tilde{t}}}\left(\mu^2+\left|Du(x)\right|^2\right)^\frac{p}{2}dx\cr\cr
&&+\frac{c\left|h\right|^2}{t-\tilde{s}}\left(\int_{B_t}\left(\mu^2+\left|Du(x)\right|^2+\left|Du\left(x+he_s\right)\right|^2\right)^\frac{p}{2}dx\right)^\frac{p-1}{p}\cr\cr
&&\cdot\left(\int_{B_{\tilde{t}}}\left|\tau_{s, h}V_p\left(Du(x)\right)\right|^p\cdot\left(\mu^2+\left|Du(x)\right|^2+\left|Du\left(x+he_s\right)\right|^2\right)^{\frac{p\left(2-p\right)}{4}}dx\right)^\frac{1}{p}\cr\cr
&\le&\frac{c\left|h\right|^2}{\left(t-\tilde{s}\right)^2}\int_{B_{\tilde{t}}}\left(\mu^2+\left|Du(x)\right|^2\right)^\frac{p}{2}dx+\frac{c\left|h\right|^2}{t-\tilde{s}}\left(\int_{B_{{\tilde{t}}}}\left(\mu^2+\left|Du(x)\right|^2\right)^{\frac{p}{2}}dx\right)^\frac{1}{2}\cr\cr
&&\cdot\left(\int_{B_{\tilde{t}}}\left|\tau_{s, h}V_p\left(Du(x)\right)\right|^2dx\right)^\frac{1}{2}, \end{eqnarray} where, in the last line, we used H\"{o}lder's inequality with exponents $\left(\frac{2}{p}, \frac{2}{2-p}\right)$.\\ Now, using Young's Inequality with exponents $\left(2, 2\right)$ and since $t-\tilde{s}<1$ and $t<\tilde{t}<\lambda r<R$, \eqref{I_3'} gives
\begin{equation}\label{I_3}
\left|I_3\right|\le\sigma\int_{B_{\tilde{t}}}\left|\tau_{s, h}V_p\left(Du(x)\right)\right|^2dx+\frac{c_\sigma\left|h\right|^2}{\left(t-\tilde{s}\right)^2}\int_{B_{R}} \left(\mu^2+\left|Du\left(x\right)\right|^2\right)^\frac{p}{2}dx, \end{equation} for any $\sigma>0$. For what concerns the term $I_4$, by virtue of Proposition \ref{findiffpr}, we have
\begin{eqnarray}\label{I_4'}
I_4&=&\int_\Omega\eta^2\left(x\right)f(x)\tau_{s, -h}\left(\tau_{s, h}u(x)\right)dx\cr\cr
&&+\int_\Omega\left[\eta\left(x-he_s\right)+\eta(x)\right]f(x)\tau_{s, -h}\eta(x)\tau_{s, h}u\left(x-he_s\right)dx\cr\cr
&=:&J_1+J_2, \end{eqnarray}
which yields
\begin{equation}\label{I_4*}
\left|I_4\right|\le\left|J_1\right|+\left|J_2\right|. \end{equation}
In order to estimate the term $J_1$, let us recall that, by virtue of the a priori assumption $V_p\left(Du\right)\in W^{1, 2}_\mathrm{loc}\left(\Omega\right)$ and Sobolev's embedding theorem, we have $Du\in L^{\frac{np}{n-2}}_\mathrm{loc}\left(\Omega\right)$, which implies $Du\in L^{\frac{np}{n-2+p}}_\mathrm{loc}\left(\Omega\right)$. So, using H\"older's inequality with exponents $\left(\frac{np}{n\left(p-1\right)+2-p}, \frac{np}{n-2+p}\right)$, the properties of $\eta$ and Lemma \ref{le1}, we get
\begin{eqnarray}\label{J_1*}
\left|J_1\right|&\le&\left(\int_{B_{t}}\left|f(x)\right|^\frac{np}{n\left(p-1\right)+2-p}dx\right)^\frac{n\left(p-1\right)+2-p}{np}\cdot\left(\int_{B_{t}}\left|\tau_{s, -h}\tau_{s, h}u(x)\right|^\frac{np}{n-2+p}dx\right)^\frac{n-2+p}{np}\cr\cr
&\le&\left|h\right|\left(\int_{B_{t}}\left|f(x)\right|^\frac{np}{n\left(p-1\right)+2-p}dx\right)^\frac{n\left(p-1\right)+2-p}{np}\cdot\left(\int_{B_{\tilde{t}}}\left|\tau_{s, h}Du(x)\right|^\frac{np}{n-2+p}dx\right)^\frac{n-2+p}{np}. \end{eqnarray}
To go further, let us consider the second integral in \eqref{J_1*}. Using \eqref{lemma6GPestimate1}, we get \begin{eqnarray*}
\int_{B_{\tilde{t}}}\left|\tau_{s, h}Du(x)\right|^\frac{np}{n-2+p}dx&\le&\int_{B_{\tilde{t}}}\left|\tau_{s, h}V_p\left(Du(x)\right)\right|^\frac{np}{n-2+p}\cr\cr
&&\cdot\left(\mu^2+\left|Du(x)\right|^2+\left|Du\left(x+he_s\right)\right|^2\right)^{\frac{2-p}{4}\cdot\frac{np}{n-2+p}}dx, \end{eqnarray*} and, as long as $1<p<2$, we can use H\"older's inequality with exponents $\left(\frac{2\left(n-2+p\right)}{np}, \frac{2\left(n-2+p\right)}{\left(n-2\right)\left(2-p\right)}\right)$, thus getting
\begin{eqnarray}\label{VpJ1}
\int_{B_{\tilde{t}}}\left|\tau_{s, h}Du(x)\right|^\frac{np}{n-2+p}dx&\le&\left(\int_{B_{\tilde{t}}}\left|\tau_{s, h}V_p\left(Du(x)\right)\right|^2dx\right)^\frac{np}{2\left(n-2+p\right)}\cr\cr
&&\cdot\left(\int_{B_{\tilde{t}}}\left(\mu^2+\left|Du(x)\right|^2+\left|Du\left(x+he_s\right)\right|^2\right)^{\frac{np}{2\left(n-2\right)}}dx\right)^\frac{\left(n-2\right)\left(2-p\right)}{2\left(n-2+p\right)}. \end{eqnarray}
Inserting \eqref{VpJ1} into \eqref{J_1*}, and using Young's inequality with exponents $\left(2, \frac{np}{n\left(p-1\right)+2-p}, \frac{2np}{\left(n-2\right)\left(2-p\right)}\right)$, we obtain
\begin{eqnarray}\label{J_1**}
\left|J_1\right|&\le&\left|h\right|\left(\int_{B_{t}}\left|f(x)\right|^\frac{np}{n\left(p-1\right)+2-p}dx\right)^\frac{n\left(p-1\right)+2-p}{np}
\cdot\left(\int_{B_{\tilde{t}}}\left|\tau_{s, h}V_p\left(Du(x)\right)\right|^2dx\right)^\frac{1}{2}\cr\cr
&&\cdot\left(\int_{B_{\tilde{t}}}\left(\mu^2+\left|Du(x)\right|^2+\left|Du\left(x+he_s\right)\right|^2\right)^{\frac{np}{2\left(n-2\right)}}dx\right)^\frac{\left(n-2\right)\left(2-p\right)}{2np}\cr\cr
&\le&c_\sigma\left|h\right|^2\int_{B_{t}}\left|f(x)\right|^\frac{np}{n\left(p-1\right)+2-p}dx
\cr\cr
&&+\sigma\left|h\right|^2\int_{B_{\tilde{t}}}\left(\mu^2+\left|Du(x)\right|^2+\left|Du\left(x+he_s\right)\right|^2\right)^{\frac{np}{2\left(n-2\right)}}dx\cr\cr
&&+\sigma\int_{B_{\tilde{t}}}\left|\tau_{s, h}V_p\left(Du(x)\right)\right|^2dx. \end{eqnarray}
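One can verify directly that the three exponents used in the last application of Young's inequality are admissible, i.e. that their reciprocals sum to one:

```latex
\frac{1}{2}+\frac{n(p-1)+2-p}{np}+\frac{(n-2)(2-p)}{2np}
 = \frac{np+2\left[n(p-1)+2-p\right]+(n-2)(2-p)}{2np}
 = \frac{np+np}{2np}=1,
```

since $2\left[n(p-1)+2-p\right]+(n-2)(2-p)=\left(2np-2n+4-2p\right)+\left(2n-np-4+2p\right)=np$.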
Recalling that $t<\tilde{t}<\lambda r<R$ and by virtue of Lemma \ref{le1}, \eqref{J_1**} implies
\begin{eqnarray}\label{J_1}
\left|J_1\right|&\le&c_\sigma\left|h\right|^2\int_{B_{R}}\left|f(x)\right|^\frac{np}{n\left(p-1\right)+2-p}dx
\cr\cr
&&+\sigma\left|h\right|^2\int_{B_{\lambda r}}\left(\mu^2+\left|Du(x)\right|^2\right)^{\frac{np}{2\left(n-2\right)}}dx\cr\cr
&&+\sigma\int_{B_{{\tilde{t}}}}\left|\tau_{s, h}V_p\left(Du(x)\right)\right|^2dx. \end{eqnarray}
For what concerns the term $J_2$, by virtue of the properties of $\eta$, we have
\begin{eqnarray}\label{J_2*}
\left|J_2\right|&\le&c\int_{B_t}\left|f(x)\right|\left|\tau_{s, -h}\eta(x)\right|\left|\tau_{s, h}u\left(x-he_s\right)\right|dx\cr\cr
&\le&\left|h\right|\left\Arrowvert D\eta\right\Arrowvert_{L^{\infty}\left(B_t\right)}\int_{B_t} \left|f(x)\right|\left|\tau_{s, h}u\left(x-he_s\right)\right|dx\cr\cr
&\le&\frac{c\left|h\right|}{t-\tilde{s}}\int_{B_t} \left|f(x)\right|\left|\tau_{s, h}u\left(x-he_s\right)\right|dx. \end{eqnarray}
Now, if we apply H\"older's and Young's Inequality in \eqref{J_2*} with exponents $\left(\frac{np}{n\left(p-1\right)+2}, \frac{np}{n-2}\right)$, we get
\begin{eqnarray}\label{J_2}
\left|J_2\right|&\le&\frac{c\left|h\right|}{t-\tilde{s}}\left(\int_{B_t} \left|f(x)\right|^\frac{np}{n\left(p-1\right)+2}dx\right)^\frac{n\left(p-1\right)+2}{np}\cdot\left(\int_{B_t}\left|\tau_{s, h}u\left(x-he_s\right)\right|^\frac{np}{n-2}dx\right)^\frac{n-2}{np}\cr\cr
&\le&\frac{c_\sigma\left|h\right|^2}{\left(t-\tilde{s}\right)^\frac{np}{n\left(p-1\right)+2}}\int_{B_t} \left|f(x)\right|^\frac{np}{n\left(p-1\right)+2}dx+\sigma\left|h\right|^2\int_{B_{\lambda r}}\left|Du\left(x\right)\right|^\frac{np}{n-2}dx, \end{eqnarray} where we also used Lemma \ref{le1}, since $Du\in L^{\frac{np}{n-2}}_{\mathrm{loc}}\left(\Omega\right)$.\\
By virtue of \eqref{J_1} and \eqref{J_2}, \eqref{I_4*} gives
\begin{eqnarray}\label{I_4}
\left|I_4\right|&\le&c_\sigma\left|h\right|^2\int_{B_{R}}\left|f(x)\right|^\frac{np}{n\left(p-1\right)+2-p}dx+\frac{c_\sigma\left|h\right|^2}{\left(t-\tilde{s}\right)^\frac{np}{n\left(p-1\right)+2}}\int_{B_t} \left|f(x)\right|^\frac{np}{n\left(p-1\right)+2}dx
\cr\cr
&&+2\sigma\left|h\right|^2\int_{B_{\lambda r}}\left(\mu^2+\left|Du(x)\right|^2\right)^{\frac{np}{2\left(n-2\right)}}dx+\sigma\int_{B_{{\tilde{t}}}}\left|\tau_{s, h}V_p\left(Du(x)\right)\right|^2dx. \end{eqnarray}
Inserting \eqref{I_1}, \eqref{I_2}, \eqref{I_3}, and \eqref{I_4} into \eqref{fullestimate}, and choosing $\varepsilon<\frac{\nu}{2}$, we get
\begin{eqnarray*}\label{fullestimate1}
&&c\left(\nu\right)\int_\Omega\eta^2(x)\left(\mu^2+\left|Du(x)\right|^2+\left|Du\left(x+he_s\right)\right|^2\right)^\frac{p-2}{2}\left|\tau_{s, h}Du(x)\right|^2dx\cr\cr
&\le&c\left|h\right|^2\left(\int_{B_{\lambda r}}\left(\mu^2+\left|Du(x)\right|^2\right)^\frac{np}{2\left(n-2\right)}dx\right)^\frac{n-2}{n}\cdot\left(\int_{B_{\lambda r}}g^{n}(x)dx\right)^\frac{2}{n}\cr\cr
&&+2\sigma\int_{B_{\tilde{t}}}\left|\tau_{s, h}V_p\left(Du(x)\right)\right|^2dx+\frac{c_\sigma\left|h\right|^2}{\left(t-\tilde{s}\right)^2}\int_{B_{R}} \left(\mu^2+\left|Du\left(x\right)\right|^2\right)^\frac{p}{2}dx\cr\cr
&&+c_\sigma\left|h\right|^2\int_{B_{R}}\left|f(x)\right|^\frac{np}{n\left(p-1\right)+2-p}dx+\frac{c_\sigma\left|h\right|^2}{\left(t-\tilde{s}\right)^\frac{np}{n\left(p-1\right)+2}}\int_{B_t} \left|f(x)\right|^\frac{np}{n\left(p-1\right)+2}dx
\cr\cr
&&+2\sigma\left|h\right|^2\int_{B_{\lambda r}}\left(\mu^2+\left|Du(x)\right|^2\right)^{\frac{np}{2\left(n-2\right)}}dx, \end{eqnarray*}
for $\sigma>0$ that will be chosen later.\\ So, by \eqref{lemma6GPestimate1} and the properties of $\eta$, we have
\begin{eqnarray*}\label{tauhVp*}
&&\int_{B_{\tilde{s}}}\left|\tau_{s, h}V_p\left(Du(x)\right)\right|^2dx\cr\cr
&\le&3\sigma\left|h\right|^2\int_{B_{\lambda r}}\left(\mu^2+\left|Du(x)\right|^2\right)^\frac{np}{2\left(n-2\right)}dx+c_\sigma\left|h\right|^2\int_{B_{\lambda r}}g^{n}(x)dx\cr\cr
&&+2\sigma\int_{B_{\tilde{t}}}\left|\tau_{s, h}V_p\left(Du(x)\right)\right|^2dx+\frac{c_\sigma\left|h\right|^2}{\left(t-\tilde{s}\right)^2}\int_{B_{R}} \left(\mu^2+\left|Du\left(x\right)\right|^2\right)^\frac{p}{2}dx\cr\cr
&&+c_\sigma\left|h\right|^2\int_{B_{R}}\left|f(x)\right|^\frac{np}{n\left(p-1\right)+2-p}dx+\frac{c_\sigma\left|h\right|^2}{\left(t-\tilde{s}\right)^\frac{np}{n\left(p-1\right)+2}}\int_{B_t} \left|f(x)\right|^\frac{np}{n\left(p-1\right)+2}dx,
\end{eqnarray*}
where we also used Young's Inequality with exponents $\left(\frac{n}{2}, \frac{n}{n-2}\right)$.\\ Now, Lemma \ref{le1} implies
\begin{eqnarray}\label{tauhVp}
&&\int_{B_{\tilde{s}}}\left|\tau_{s, h}V_p\left(Du(x)\right)\right|^2dx\cr\cr
&\le&3\sigma\left|h\right|^2\int_{B_{\lambda r}}\left(\mu^2+\left|Du(x)\right|^2\right)^\frac{np}{2\left(n-2\right)}dx+c\cdot\sigma\left|h\right|^2\int_{B_{\lambda r}}\left|DV_p\left(Du(x)\right)\right|^2dx\cr\cr
&&+\frac{c_\sigma\left|h\right|^2}{\left(t-\tilde{s}\right)^2}\int_{B_{R}} \left(\mu^2+\left|Du\left(x\right)\right|^2\right)^\frac{p}{2}dx+c_\sigma\left|h\right|^2\int_{B_{\lambda r}}g^{n}(x)dx\cr\cr
&&+c_\sigma\left|h\right|^2\int_{B_{R}}\left|f(x)\right|^\frac{np}{n\left(p-1\right)+2-p}dx+\frac{c_\sigma\left|h\right|^2}{\left(t-\tilde{s}\right)^\frac{np}{n\left(p-1\right)+2}}\int_{B_t} \left|f(x)\right|^\frac{np}{n\left(p-1\right)+2}dx. \end{eqnarray}
Since \eqref{tauhVp} holds for any $s=1,\ldots, n$, by virtue of the a priori assumption $V_p\left(Du\right)\in W^{1,2}_\mathrm{loc}\left(\Omega\right)$ and Lemma \ref{Giusti8.2}, we get
\begin{eqnarray*}
&&\int_{B_{\tilde{s}}}\left|DV_p\left(Du(x)\right)\right|^2dx\cr\cr
&\le&c\cdot\sigma\int_{B_{{\lambda r}}}\left|DV_p\left(Du(x)\right)\right|^2+3\sigma\int_{B_{\lambda r}}\left(\mu^2+\left|Du(x)\right|^2\right)^\frac{np}{2\left(n-2\right)}dx\cr\cr
&&+\frac{c_\sigma}{\left(t-\tilde{s}\right)^2}\int_{B_{R}} \left(\mu^2+\left|Du\left(x\right)\right|^2\right)^\frac{p}{2}dx+c_\sigma\int_{B_{\lambda r}}g^{n}(x)dx\cr\cr
&&+c_\sigma\int_{B_{R}}\left|f(x)\right|^\frac{np}{n\left(p-1\right)+2-p}dx
+\frac{c_\sigma}{\left(t-\tilde{s}\right)^\frac{np}{n\left(p-1\right)+2}}\int_{B_R} \left|f(x)\right|^\frac{np}{n\left(p-1\right)+2}dx, \end{eqnarray*} and since $t-\tilde{s}<1$, setting \begin{equation}\label{beta} \beta\left(n, p\right)=\max\Set{2, \frac{np}{n\left(p-1\right)+2}}, \end{equation} we get \begin{eqnarray}\label{beforeIterThm1*}
&&\int_{B_{\tilde{s}}}\left|DV_p\left(Du(x)\right)\right|^2dx\cr\cr
&\le&c\cdot\sigma\int_{B_{{\lambda r}}}\left|DV_p\left(Du(x)\right)\right|^2+3\sigma\int_{B_{\lambda r}}\left(\mu^2+\left|Du(x)\right|^2\right)^\frac{np}{2\left(n-2\right)}dx\cr\cr
&&+\frac{c_\sigma}{\left(t-\tilde{s}\right)^{\beta\left(n, p\right)}}\left[\int_{B_{R}} \left(\mu^2+\left|Du\left(x\right)\right|^2\right)^\frac{p}{2}dx+\int_{B_{\lambda r}}g^{n}(x)dx\right.\cr\cr
&&\left.+\int_{B_{R}}\left|f(x)\right|^\frac{np}{n\left(p-1\right)+2-p}dx
+\int_{B_R} \left|f(x)\right|^\frac{np}{n\left(p-1\right)+2}dx\right]. \end{eqnarray} Now let us notice that, since $\frac{np}{n\left(p-1\right)+2}<\frac{np}{n\left(p-1\right)+2-p}$, we have
\begin{eqnarray}\label{immersion1}
\int_{B_R} \left|f(x)\right|^\frac{np}{n\left(p-1\right)+2}dx\le c\int_{B_R} \left|f(x)\right|^\frac{np}{n\left(p-1\right)+2-p}dx+c\left|B_R\right|. \end{eqnarray} Plugging \eqref{immersion1} into \eqref{beforeIterThm1*}, we get
\begin{eqnarray}\label{beforeIterThm1**}
&&\int_{B_{\tilde{s}}}\left|DV_p\left(Du(x)\right)\right|^2dx\cr\cr
&\le&c\cdot\sigma\int_{B_{{\lambda r}}}\left|DV_p\left(Du(x)\right)\right|^2+3\sigma\int_{B_{\lambda r}}\left(\mu^2+\left|Du(x)\right|^2\right)^\frac{np}{2\left(n-2\right)}dx\cr\cr
&&+\frac{c_\sigma}{\left(t-\tilde{s}\right)^{\beta\left(n, p\right)}}\left[\int_{B_{R}} \left(\mu^2+\left|Du\left(x\right)\right|^2\right)^\frac{p}{2}dx+\int_{B_{\lambda r}}g^{n}(x)dx\right.\cr\cr
&&\left.+\int_{B_R} \left|f(x)\right|^\frac{np}{n\left(p-1\right)+2-p}dx+\left|B_R\right|\right]. \end{eqnarray}
Moreover, applying Sobolev's inequality to the function $V_p\left(Du\right)$ and recalling \eqref{stinorma}, for a positive constant $c=c(n, p)$ we get
\begin{eqnarray}\label{V_pSobolev}
\int_{B_{\lambda r}}\left|Du(x)\right|^\frac{np}{n-2}dx&\le& c\int_{B_{\lambda r}}\left|V_p\left(Du(x)\right)\right|^\frac{2n}{n-2}dx+c\mu^\frac{np}{n-2}\left|B_R\right|\cr\cr
&\le&c\int_{B_{\lambda r}}\left|DV_p\left(Du(x)\right)\right|^2dx+c\int_{B_{\lambda r}}\left|V_p\left(Du(x)\right)\right|^2dx\cr\cr
&&+c\left|B_R\right|\cr\cr
&\le&c\int_{B_{\lambda r}}\left|DV_p\left(Du(x)\right)\right|^2dx+c\int_{B_R}\left(\mu^2+\left|Du(x)\right|^2\right)^\frac{p}{2}dx\cr\cr
&&+c\left|B_R\right|, \end{eqnarray}
where we also used the fact that $\mu\in[0, 1]$.\\ Now, plugging \eqref{V_pSobolev} into \eqref{beforeIterThm1**}, and recalling that $t-\tilde{s}<1$ and $\lambda r<R$, we get
\begin{eqnarray*}
\int_{B_{\tilde{s}}}\left|DV_p\left(Du(x)\right)\right|^2dx
&\le&c\cdot\sigma\int_{B_{{\lambda r}}}\left|DV_p\left(Du(x)\right)\right|^2\cr\cr
&&+\frac{c_\sigma}{\left(t-\tilde{s}\right)^{\beta\left(n, p\right)}}\left[\int_{B_{R}} \left(\mu^2+\left|Du\left(x\right)\right|^2\right)^\frac{p}{2}dx+\int_{B_{R}}g^{n}(x)dx\right.\cr\cr
&&\left.+\int_{B_R} \left|f(x)\right|^\frac{np}{n\left(p-1\right)+2-p}dx+\left|B_R\right|\right]. \end{eqnarray*} Choosing $\sigma>0$ such that $$ c\cdot\sigma=\frac{1}{2}, $$ we get \begin{eqnarray}\label{DVpDu}
\int_{B_{\tilde{s}}}\left|DV_p\left(Du(x)\right)\right|^2dx&\le&\frac{1}{2}\int_{B_{{\lambda r}}}\left|DV_p\left(Du(x)\right)\right|^2\cr\cr
&&+\frac{c}{\left(t-\tilde{s}\right)^{\beta\left(n, p\right)}}\left[\int_{B_{R}} \left(\mu^2+\left|Du\left(x\right)\right|^2\right)^\frac{p}{2}dx+\int_{B_{R}}g^{n}(x)dx\right.\cr\cr
&&\left.+\int_{B_R} \left|f(x)\right|^\frac{np}{n\left(p-1\right)+2-p}dx+\left|B_R\right|\right]. \end{eqnarray}
Since \eqref{DVpDu} holds for any $\frac{R}{2}\le r<\tilde{s}<t<\lambda r<R,$ with $1<\lambda<2$, and the constant $c$ depends on $n, p, L, \nu, L_1, \ell$ but is independent of the radii, we can take the limit as $\tilde{s}\to r$ and $t\to\lambda r$, thus getting
\begin{eqnarray*}\label{lambdaDVpDu}
\int_{B_{r}}\left|DV_p\left(Du(x)\right)\right|^2dx&\le&\frac{1}{2}\int_{B_{\lambda r}}\left|DV_p\left(Du(x)\right)\right|^2dx\cr\cr
&&+\frac{c}{r^{\beta\left(n, p\right)}\left(\lambda-1\right)^{\beta\left(n, p\right)}}\left[\int_{B_{R}} \left(\mu^2+\left|Du\left(x\right)\right|^2\right)^\frac{p}{2}dx+\int_{B_{R}}g^{n}(x)dx\right.\cr\cr
&&\left.+\int_{B_R} \left|f(x)\right|^\frac{np}{n\left(p-1\right)+2-p}dx+\left|B_R\right|\right]. \end{eqnarray*}
Now, if we set
$$
h(r)=\int_{B_r}\left|DV_p\left(Du(x)\right)\right|^2dx, $$
$$
A=c\left[\int_{B_{R}} \left(\mu^2+\left|Du\left(x\right)\right|^2\right)^\frac{p}{2}dx+\int_{B_{R}}g^{n}(x)dx+\int_{B_R} \left|f(x)\right|^\frac{np}{n\left(p-1\right)+2-p}dx+\left|B_R\right|\right] $$
and
$$ B=0, $$
and apply Lemma \ref{iter} with
$$ \theta=\frac{1}{2}\qquad\mbox{ and }\qquad \gamma=\beta\left(n, p\right), $$
we get
\begin{eqnarray}\label{aprioriestimate1*}
\int_{B_{\frac{R}{2}}}\left|DV_p\left(Du(x)\right)\right|^2dx&\le&\frac{c}{R^{\beta\left(n, p\right)}}\left[\int_{B_{R}} \left(\mu^2+\left|Du\left(x\right)\right|^2\right)^\frac{p}{2}dx\right.\cr\cr
&&\left.+\int_{B_R} \left|f(x)\right|^\frac{np}{n\left(p-1\right)+2-p}dx+\int_{B_{R}}g^{n}(x)dx+\left|B_R\right|\right], \end{eqnarray} which is the desired a priori estimate.
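Here Lemma \ref{iter} is the classical iteration lemma; assuming it is stated in the usual form, it reads as follows (recalled only for the reader's convenience):

```latex
% Classical iteration lemma: let h be a non-negative bounded function on
% [R/2, R] such that, for all R/2 <= r < t <= R,
h(r) \le \theta\, h(t) + \frac{A}{(t-r)^{\gamma}} + B,
\qquad 0<\theta<1,\quad A, B\ge 0,\quad \gamma>0.
% Then there exists c = c(theta, gamma) > 0 such that
h\left(\frac{R}{2}\right) \le c\left(\frac{A}{R^{\gamma}} + B\right).
```

In our case $\theta=\frac{1}{2}$, $B=0$, $\gamma=\beta\left(n, p\right)$, and $A$ collects the integrals on the right-hand side, which produces \eqref{aprioriestimate1*}.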
\\ {\bf Step 2: the approximation.}\\ Now we want to complete the proof of Theorem \ref{CGPThm1}, using the a priori estimate \eqref{aprioriestimate1*}, and a classical approximation argument.
Let us consider an open set $\Omega'\Subset\Omega$, a function $\phi\in C^{\infty}_0(B_1(0))$ such that $0\le\phi\le1$ and $\int_{B_1(0)}\phi(x)dx=1$, and the standard family of mollifiers $\set{\phi_\varepsilon}_\varepsilon$ defined as follows:
\begin{equation*}
\phi_\varepsilon(x)=\frac{1}{\varepsilon^n}\phi\left(\frac{x}{\varepsilon}\right), \end{equation*}
for any $\varepsilon\in\left(0, d\left(\Omega', \partial\Omega\right)\right)$, so that, for each $\varepsilon$, $\phi_\varepsilon\in C^{\infty}_0\left(B_\varepsilon(0)\right)$, $0\le\phi_\varepsilon\le1$, $\int_{B_\varepsilon(0)}\phi_\varepsilon(x)dx=1.$
It is well known that, for any $h\in L^1_{\mathrm{loc}}\left(\Omega'\right)$, setting \begin{equation*}
h_\varepsilon(x)=h\ast\phi_\varepsilon(x)=\int_{B_\varepsilon}\phi_\varepsilon(y)h(x+y)dy=\int_{B_1}\phi(\omega)h(x+\varepsilon\omega)d\omega, \end{equation*} we have $h_\varepsilon\in C^\infty\left(\Omega'\right)$.
Let us fix a ball $B_{\tilde{R}}=B_{\tilde{R}}\left(x_0\right)\Subset\Omega'$, with $\tilde{R}<1$ and, for each $\varepsilon\in\left(0, d\left(\Omega', \partial\Omega\right)\right)$, let us consider the functional \begin{equation*} \mathcal{F}_\varepsilon\left(w, B_{\tilde{R}}\right)=\int_{ B_{\tilde{R}}}\left[ F_\varepsilon\left(x,Dw(x)\right)-f_\varepsilon(x)\cdot
w(x)\right]dx, \end{equation*} where \begin{equation}\label{Fepsdef}
F_\varepsilon(x,\xi)=\int_{B_1}\phi(\omega)F(x+\varepsilon\omega, \xi)d\omega \end{equation} and \begin{equation}\label{fepsdef}
f_\varepsilon=f\ast\phi_\varepsilon. \end{equation}
Let us recall that
\begin{equation}\label{convF}
\int_{B_{\tilde{R}}}F_\varepsilon\left(x, \xi\right)dx\to\int_{B_{\tilde{R}}}F\left(x, \xi\right)dx,\qquad\mbox{ as }\varepsilon\to0 \end{equation}
for any $\xi\in\numberset{R}^{n\times N}.$\\ Moreover, since $f\in L^{\frac{np}{n\left(p-1\right)+2-p}}_{\mathrm{loc}}\left(\Omega\right)$, we have
\begin{equation}\label{convf}
f_\varepsilon\to f \qquad\mbox{ strongly in }L^{\frac{np}{n\left(p-1\right)+2-p}}\left(B_{\tilde{R}}\right), \end{equation}
and since $\frac{np}{n\left(p-1\right)+2-p}>\frac{np}{n\left(p-1\right)+p}=\frac{p^*}{p^*-1}$, we also have
\begin{equation}\label{convfp*'}
f_\varepsilon\to f \qquad\mbox{ strongly in }L^{\frac{p^*}{p^*-1}}\left(B_{\tilde{R}}\right), \end{equation} as $\varepsilon\to0.$\\
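For the reader's convenience, the comparison of exponents used above can be checked directly: recalling that $p^*=\frac{np}{n-p}$, we have
$$
\frac{p^*}{p^*-1}=\frac{\frac{np}{n-p}}{\frac{np}{n-p}-1}=\frac{np}{np-n+p}=\frac{np}{n\left(p-1\right)+p},
$$
and, since $1<p<2$ implies $2-p<p$, we deduce
$$
\frac{np}{n\left(p-1\right)+2-p}>\frac{np}{n\left(p-1\right)+p}=\frac{p^*}{p^*-1}.
$$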
It is easy to check that \eqref{F1}--\eqref{F4} imply \begin{equation}\label{F1eps}
\ell \left(\mu^2+\left|\xi\right|^2\right)^\frac{p}{2}\le F_\varepsilon(x,\xi)\le L\left(\mu^2+\left|\xi\right|^2\right)^\frac{p}{2}, \end{equation}
\begin{equation}\label{F2eps}
\langle D_{\xi}F_\varepsilon(x,\xi)-D_{\xi}F_\varepsilon(x,\eta),\xi-\eta\rangle \ge \nu\left(\mu^2+\left|\xi\right|^2+\left|\eta\right|^2\right)^\frac{p-2}{2}|\xi-\eta|^{2}, \end{equation}
\begin{equation}\label{F3eps}
\left|D_{\xi}F_\varepsilon(x,\xi)-D_{\xi}F_\varepsilon(x,\eta)\right| \le L_1\left(\mu^2+\left|\xi\right|^2+\left|\eta\right|^2\right)^\frac{p-2}{2}|\xi-\eta|, \end{equation}
\begin{equation}\label{F4eps}
\left|D_{\xi}F_\varepsilon\left(x,\xi\right)-D_{\xi}F_\varepsilon\left(y,\xi\right)\right|\le\left(g_\varepsilon(x)+g_\varepsilon(y)\right)\left(\mu^2+\left|\xi\right|^2\right)^\frac{p-1}{2}\left|x-y\right|, \end{equation} for a.e. $x, y\in B_{\tilde{R}}$ and every $\xi, \eta\in\numberset{R}^{n\times N}$, where \begin{equation}\label{gepsdef}
g_\varepsilon=g\ast\phi_\varepsilon. \end{equation} Since $g\in L^{n}_{\mathrm{loc}}\left(\Omega\right)$, we have \begin{equation}\label{convg}
g_\varepsilon\to g\qquad\mbox{ strongly in }L^n\left(B_{\tilde{R}}\right),\mbox{ as } \varepsilon\to0. \end{equation} For each $\varepsilon$, let $v_\varepsilon\in u+W^{1,p}_{0}\left(B_{\tilde{R}}\right)$ be the solution to $$\min\Set{\mathcal{F}_\varepsilon\left(v,B_{\tilde{R}}\right): v\in u+W^{1,p}_{0}\left(B_{\tilde{R}}\right)},$$ where $u\in W^{1,p}_{\mathrm{loc}}\left(\Omega\right)$ is a local minimizer of \eqref{modenergy}.\\ By virtue of the minimality of $v_\varepsilon$, we have
\begin{equation*}
\int_{B_{\tilde{R}}}\left[F_\varepsilon\left(x,Dv_\varepsilon(x)\right)-f_\varepsilon(x)\cdot v_\varepsilon(x)\right]dx\le\int_{B_{\tilde{R}}}\left[F_\varepsilon\left(x,Du(x)\right)-f_\varepsilon(x)\cdot u(x)\right]dx,
\end{equation*} which means \begin{equation*}
\int_{B_{\tilde{R}}}F_\varepsilon\left(x,Dv_\varepsilon(x)\right)dx\le\int_{B_{\tilde{R}}}\left[F_\varepsilon\left(x,Du(x)\right)+f_\varepsilon(x)\cdot\left(v_\varepsilon(x)-u(x)\right)\right]dx, \end{equation*} and by \eqref{F1eps} we get \begin{eqnarray}\label{stima1*}
\ell\int_{B_{\tilde{R}}}\left(\mu^2+\left|Dv_\varepsilon(x)\right|^2\right)^\frac{p}{2}dx&\le&\int_{B_{\tilde{R}}}F_\varepsilon\left(x,Dv_\varepsilon(x)\right)dx\cr\cr &\le&\int_{B_{\tilde{R}}}\left[F_\varepsilon\left(x,Du(x)\right)+f_\varepsilon(x)\cdot\left(v_\varepsilon(x)-u(x)\right)\right]dx\cr\cr
&\le&L\int_{B_{\tilde{R}}}\left(\mu^2+\left|Du(x)\right|^2\right)^\frac{p}{2}dx\cr\cr
&&+\int_{B_{\tilde{R}}}\left|f_\varepsilon(x)\right|\left|v_\varepsilon(x)-u(x)\right|dx. \end{eqnarray}
If we use H\"{o}lder's and Young's inequalities with exponents $\left(p^*, \frac{p^*}{p^*-1}\right)$ in \eqref{stima1*} and apply Sobolev's inequality to the function $v_\varepsilon-u\in W^{1, p}_0\left(B_{\tilde{R}}\right)$, for any $\sigma>0$, we get
\begin{eqnarray}\label{stima1**}
&&\ell\int_{B_{\tilde{R}}}\left(\mu^2+\left|Dv_\varepsilon(x)\right|^2\right)^\frac{p}{2}dx\cr\cr
&\le&L\int_{B_{\tilde{R}}}\left(\mu^2+\left|Du(x)\right|^2\right)^\frac{p}{2}dx+c_\sigma\int_{B_{\tilde{R}}}\left|f_\varepsilon(x)\right|^{\frac{p^*}{p^*-1}}dx\cr\cr
&&+\sigma\int_{B_{\tilde{R}}}\left|v_\varepsilon(x)-u(x)\right|^{p^*}dx\cr\cr
&\le&L\int_{B_{\tilde{R}}}\left(\mu^2+\left|Du(x)\right|^2\right)^\frac{p}{2}dx+c_\sigma\int_{B_{\tilde{R}}}\left|f_\varepsilon(x)\right|^{\frac{p^*}{p^*-1}}dx\cr\cr
&&+\sigma\left(\int_{B_{\tilde{R}}}\left|Dv_\varepsilon(x)-Du(x)\right|^{p}dx\right)\cr\cr
&\le&c_\sigma\int_{B_{\tilde{R}}}\left(\mu^2+\left|Du(x)\right|^2\right)^\frac{p}{2}dx+c_\sigma\int_{B_{\tilde{R}}}\left|f_\varepsilon(x)\right|^{\frac{p^*}{p^*-1}}dx\cr\cr
&&+\sigma\left(\int_{B_{\tilde{R}}}\left(\mu^2+\left|Dv_\varepsilon(x)\right|^2\right)^\frac{p}{2}dx\right). \end{eqnarray}
Now, if we choose $\sigma<\frac{\ell}{2}$ in \eqref{stima1**}, we have
\begin{eqnarray}\label{stima1***}
&&\ell\int_{B_{\tilde{R}}}\left(\mu^2+\left|Dv_\varepsilon(x)\right|^2\right)^\frac{p}{2}dx\cr\cr
&\le&L\int_{B_{\tilde{R}}}\left(\mu^2+\left|Du(x)\right|^2\right)^\frac{p}{2}dx+c\int_{B_{\tilde{R}}}\left|f_\varepsilon(x)\right|^{\frac{p^*}{p^*-1}}dx. \end{eqnarray}
By virtue of \eqref{convfp*'}, \eqref{stima1***} implies that $\Set{v_\varepsilon}_\varepsilon$ is bounded in $W^{1, p}\left(B_{\tilde{R}}\right)$. Therefore there exists $v\in W^{1,p}\left(B_{\tilde{R}}\right)$ such that
\begin{equation*}\label{convdebLp}
v_\varepsilon\rightharpoonup v\qquad\mbox{ weakly in }W^{1,p}\left(B_{\tilde{R}}\right),
\end{equation*}
\begin{equation*}\label{convforteLp}
v_\varepsilon\to v\qquad\mbox{ strongly in }L^{p}\left(B_{\tilde{R}}\right), \end{equation*}
and
\begin{equation*}\label{aeconv}
v_\varepsilon\to v\qquad\mbox{ almost everywhere in }B_{\tilde{R}}, \end{equation*}
up to a subsequence, as $\varepsilon\to0$.\\ On the other hand, since $V_p\left(Dv_\varepsilon\right)\in W^{1,2}_{\mathrm{loc}}\left(B_{\tilde{R}}\right)$, and so $Dv_\varepsilon \in W^{1,p}_{\mathrm{loc}}\left(B_{\tilde{R}}\right)$, we are in a position to apply estimate \eqref{aprioriestimate1*}, thus getting \begin{eqnarray}\label{dersecapp1}
&&\int_{B_{\frac{r}{2}}}\left|DV_p\left(Dv_\varepsilon(x)\right)\right|^2dx\cr\cr
&\le&\frac{c}{r^{\beta\left(n, p\right)}}\left[\int_{B_{r}} \left(\mu^2+\left|Dv_\varepsilon\left(x\right)\right|^2\right)^\frac{p}{2}dx+\int_{B_r} \left|f_\varepsilon(x)\right|^\frac{np}{n\left(p-1\right)+2-p}dx\right.\cr\cr
&&\left.+\int_{B_{r}}g^{n}_\varepsilon(x)dx+\left|B_r\right|\right], \end{eqnarray} for any ball $B_r\Subset B_{\tilde{R}}$.
By virtue of \eqref{convf}, \eqref{convfp*'}, \eqref{convg} and \eqref{stima1***}, the right hand side of \eqref{dersecapp1} can be bounded independently of $\varepsilon$. For this reason, recalling Lemma \ref{differentiabilitylemma}, we infer that, for each $\varepsilon$, $v_\varepsilon\in W^{2, p}_{\mathrm{loc}}\left(B_{\tilde{R}}\right)$ and, recalling \eqref{differentiabilityestimate}, we deduce that $\Set{v_\varepsilon}_\varepsilon$ is bounded in $W^{2, p}_{\mathrm{loc}}\left(B_{r}\right)$.\\ Hence
\begin{equation*}\label{vconvdebWp}
v_\varepsilon\rightharpoonup v\qquad\mbox{ weakly in } W^{2,p}\left(B_{r}\right), \end{equation*}
\begin{equation}\label{vconvforW1p}
v_\varepsilon\to v\qquad\mbox{ strongly in } W^{1,p}\left(B_{r}\right), \end{equation}
and
\begin{equation}\label{aeconvDv}
Dv_\varepsilon\to Dv \qquad\mbox{ almost everywhere in }B_r, \end{equation}
up to a subsequence, as $\varepsilon\to0$.\\ Moreover, by the continuity of $\xi\mapsto DV_p(\xi)$ and \eqref{aeconvDv}, we get $DV_p\left(Dv_\varepsilon\right)\to DV_p\left(Dv\right)$ almost everywhere. Since the right-hand side of \eqref{dersecapp1} is bounded independently of $\varepsilon$, passing to the limit as $\varepsilon\to0$ in \eqref{dersecapp1}, by Fatou's Lemma, \eqref{convf}, \eqref{convg} and \eqref{vconvforW1p}, we get
\begin{eqnarray}\label{dersecapp1*}
&&\int_{B_{\frac{r}{2}}}\left|DV_p\left(Dv(x)\right)\right|^2dx\cr\cr
&\le&\frac{c}{r^{\beta\left(n, p\right)}}\left[\int_{B_{r}} \left(\mu^2+\left|Dv\left(x\right)\right|^2\right)^\frac{p}{2}dx+\int_{B_r} \left|f(x)\right|^\frac{np}{n\left(p-1\right)+2-p}dx\right.\cr\cr
&&\left.+\int_{B_{r}}g^{n}(x)dx+\left|B_r\right|\right]. \end{eqnarray} Our final step is to prove that $u=v$ a.e. in $B_{\tilde{R}}$.\\ First, let us observe that, using H\"older's inequality with exponents $\left(p^*, \frac{p^*}{p^*-1}\right)$, we get
\begin{eqnarray}\label{finalconv1*}
&&\int_{B_{\tilde{R}}}\left|f_\varepsilon(x)-f(x)\right|\left|u(x)\right|dx\cr\cr
&\le&\left(\int_{B_{\tilde{R}}}\left|u(x)\right|^{p^*}dx\right)^\frac{1}{{p^*}}\cdot\left(\int_{B_{\tilde{R}}}\left|f_\varepsilon(x)-f(x)\right|^{\frac{p^*}{p^*-1}}dx\right)^{\frac{p^*-1}{p^*}}, \end{eqnarray} and recalling \eqref{convF} and \eqref{convfp*'}, \eqref{finalconv1*} implies
\begin{equation}\label{finalconv1} \lim_{\varepsilon\to 0}\int_{B_{\tilde{R}}}\left[F_\varepsilon\left(x,Du(x)\right)-f_\varepsilon(x)\cdot
u(x)\right]dx=\int_{B_{\tilde{R}}}\left[F\left(x,Du(x)\right)-f(x)\cdot
u(x)\right]dx. \end{equation}
The minimality of $u$, Fatou's Lemma, the lower semicontinuity of $\mathcal{F}_\varepsilon$ and the minimality of $v_\varepsilon$ imply
\begin{eqnarray*}
&&\int_{B_{\tilde{R}}}\left[F\left(x,Du(x)\right)-f(x)\cdot
u(x)\right]dx\cr\cr
&\le&\int_{B_{\tilde{R}}}\left[F\left(x,Dv(x)\right)-f(x)\cdot
v(x)\right]dx\cr\cr
&\le&\liminf_{\varepsilon\to 0}\int_{B_{\tilde{R}}}\left[F\left(x,Dv_\varepsilon(x)\right)-f(x)\cdot
v_\varepsilon(x)\right]dx\cr\cr
&=&\liminf_{\varepsilon\to 0} \int_{B_{\tilde{R}}}\left[ F_\varepsilon\left(x,Dv_\varepsilon(x)\right)-f_\varepsilon(x)\cdot
v_\varepsilon(x)\right]dx\cr\cr
&\le&\liminf_{\varepsilon\to 0} \int_{B_{\tilde{R}}}\left[ F_\varepsilon\left(x,Du(x)\right)-f_\varepsilon(x)\cdot
u(x)\right]dx\cr\cr
&=&\int_{B_{\tilde{R}}}\left[F\left(x,Du(x)\right)-f(x)\cdot
u(x)\right]dx,
\end{eqnarray*} where the last equality follows from \eqref{finalconv1}. Therefore, all the previous inequalities hold as equalities and $\mathcal{F}\left(u,B_{\tilde{R}}\right)=\mathcal{F}\left(v,B_{\tilde{R}}\right)$. The strict convexity of the functional yields that $u=v$ a.e. in $B_{\tilde{R}}$, and since the map $\xi\mapsto V_p(\xi)$ is of class $C^1$, we also have $DV_p\left(Du\right)=DV_p\left(Dv\right)$ almost everywhere in $B_{\tilde{R}}$, and by \eqref{dersecapp1*}, using a standard covering argument, we can conclude with estimate \eqref{mainestimateVp}. \end{proof}
Thanks to Lemma \ref{differentiabilitylemma}, it is easy to prove the following consequence of Theorem \ref{CGPThm1}.
\begin{corollary}\label{corollary1}
Let $\Omega\subset\numberset{R}^n$ be a bounded open set, and $1<p<2$.\\
Let $u\in W^{1, p}_{\mathrm{loc}}\left(\Omega, \numberset{R}^N\right)$ be a local minimizer of the functional \eqref{modenergy}, under the assumptions \eqref{F1}--\eqref{F4}, with
$$f\in L^{\frac{np}{n(p-1)+2-p}}_{\mathrm{loc}}(\Omega)\qquad\text{and}\qquad g\in L^{n}_{\mathrm{loc}}\left(\Omega\right).$$
Then $u\in W^{2, p}_{\mathrm{loc}}\left(\Omega\right)$. \end{corollary}
\section{A Counterexample}\label{Counterexample} The aim of this section is to show that we cannot weaken the assumption $f\in L^\frac{np}{n(p-1)+2-p}_{\mathrm{loc}}\left(\Omega\right)$ in the scale of Lebesgue spaces.\\ Our example also shows that this phenomenon is independent of the presence of the coefficients and depends only on the sub-quadratic growth of the energy density.\\ For $\alpha\in\numberset{R}$, let us set
$$\beta=\left(\alpha-1\right)\left(p-1\right)-1,$$
and consider the functional
\begin{equation*}\label{ceEnergy}
\mathcal{F}_\alpha\left(u,\Omega\right)=\int_{\Omega}\left[\left|Du(x)\right|^p-\alpha\left(n+\beta\right)\left|\alpha\right|^{p-2}\left|x\right|^\beta u(x)\right]dx, \end{equation*}
where $\Omega\subset\numberset{R}^n$ is a bounded open set containing the origin, $1<p<2$, $u:\numberset{R}^n\to\numberset{R}$.\\ Using the classical notation for the $p$-Laplacian
$$
\Delta_p u={\rm div}\left(\left|Du\right|^{p-2}\cdot Du\right), $$
a local minimizer of this functional is a weak solution to the $p$-Poisson equation
\begin{equation}\label{ceEquation}
\Delta_p u=f_{\alpha}, \end{equation}
with
\begin{equation*}\label{ceDatum}
f_{\alpha}(x)=\alpha\left(n+\beta\right)\left|\alpha\right|^{p-2}\left|x\right|^\beta. \end{equation*}
Before going further, let us notice that \eqref{ceEquation} is an autonomous equation, whose solutions are scalar functions, so the problem we are dealing with is much less general than the setting considered in the proof of our main result.\\ It is easy to check that, for any $\alpha\in\numberset{R}$, the function
\begin{equation*}\label{ceSolution}
u_\alpha(x)=\left|x\right|^\alpha \end{equation*}
is a solution to \eqref{ceEquation}.\\ Indeed, since, for each $i=1, ..., n$, we have
$$
D_{x_i}u_\alpha(x)=\alpha\left|x\right|^{\alpha-2}x_i, $$
we get
$$
\left|Du_\alpha(x)\right|=\left|\alpha\right|\left|x\right|^{\alpha-1}. $$
So, for every $i=1, ..., n$, since $\beta=\left(\alpha-1\right)\left(p-1\right)-1,$ we get
$$
\left|Du_\alpha(x)\right|^{p-2} D_{x_i}u_\alpha(x)=\alpha\left|\alpha\right|^{p-2}\left|x\right|^{(p-1)\cdot(\alpha-1)-1} x_i=\alpha\left|\alpha\right|^{p-2}\left|x\right|^{\beta} x_i $$
and
$$
\frac{\partial}{\partial x_i}\left(\left|Du_\alpha(x)\right|^{p-2}D_{x_i}u_\alpha(x)\right)=\alpha\left|\alpha\right|^{p-2}\left|x\right|^{\beta}\left(1+\frac{\beta x_i^2}{\left|x\right|^2}\right), $$
so
\begin{eqnarray*}
\Delta_p u_\alpha(x)&=&{\rm div}\left(\left|Du_\alpha(x)\right|^{p-2}\cdot Du_\alpha(x)\right)=\sum_{i=1}^n\frac{\partial}{\partial x_i}\left(\left|Du_\alpha(x)\right|^{p-2}D_{x_i}u_\alpha(x)\right)\cr\cr
&=&\alpha\left|\alpha\right|^{p-2}\left|x\right|^{\beta}\sum_{i=1}^n\left(1+\frac{\beta x_i^2}{\left|x\right|^2}\right)=\alpha\left|\alpha\right|^{p-2}\left(n+\beta\right)\left|x\right|^{\beta}\cr\cr
&=&f_\alpha(x). \end{eqnarray*}
Moreover, for further needs, we observe that $$
\left|D^2u_\alpha(x)\right|=c(\alpha)\cdot\left|x\right|^{\alpha-2}, $$ for a constant $c(\alpha)\ge0$. Choosing $$\alpha-1=\frac{2-n}{p}$$ we have
$$f_\alpha= C_1(\alpha, n, p)\left|x\right|^{\frac{(2-n)(p-1)}{p}-1},$$ where $C_1(\alpha, n, p)$ is a real constant, and $$
\left|f_\alpha\right|^\frac{np}{n(p-1)+2-p}=C(\alpha, n, p)\left|x\right|^{-n}, $$ with $C(\alpha, n, p)\ge0$.\\ Therefore, with such a choice of $\alpha$, we have $f_\alpha\in L^{\frac{np}{n\left(p-1\right)+2-p}-\varepsilon}\left(B_1(0)\right)$ for every $\varepsilon>0$, but $f_\alpha$ does not belong to $L^{\frac{np}{n(p-1)+2-p}}(B_1(0))$.\\ With the same choice of $\alpha$ we have $$
\left|Du_\alpha(x)\right|^{p-2}\cdot\left|D^2u_\alpha(x)\right|^2=c(n, p)\cdot\left|x\right|^{p\cdot(\alpha-1)-2}=c(n, p)\cdot\left|x\right|^{2-n-2}=c(n, p)\cdot\left|x\right|^{-n}, $$ which does not belong to $L^1\left(B_1(0)\right)$. Therefore, we cannot weaken the assumption on the datum $f$ in the scale of Lebesgue spaces and obtain the same regularity for the second derivatives of the solution $u$.
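To illustrate the computations above, let us carry them out in the model case $n=3$ and $p=\frac{3}{2}$. We have
$$
\alpha-1=\frac{2-n}{p}=-\frac{2}{3},\qquad \alpha=\frac{1}{3},\qquad \beta=\left(\alpha-1\right)\left(p-1\right)-1=-\frac{4}{3},
$$
so that $u_\alpha(x)=\left|x\right|^\frac{1}{3}$ and $f_\alpha(x)=C\left|x\right|^{-\frac{4}{3}}$, for a positive constant $C$, while the critical exponent equals
$$
\frac{np}{n\left(p-1\right)+2-p}=\frac{\frac{9}{2}}{\frac{3}{2}+\frac{1}{2}}=\frac{9}{4},
$$
and indeed $\left|f_\alpha\right|^\frac{9}{4}=C^\frac{9}{4}\left|x\right|^{-3}\notin L^1\left(B_1(0)\right)$.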
\section{A preliminary higher differentiability result}\label{Preliminarypf} In this section we consider the following functional \begin{equation}\label{modenergym}
\mathcal{F}_m\left(w, \Omega\right)=\int_{\Omega}\left[F\left(x, Dw(x)\right)-f(x)\cdot w(x)+\left(\left|w(x)\right|-a\right)^{2m}_+\right]dx, \end{equation}
where $a>0$, $m>1$, and the function $F$ still satisfies \eqref{F1}--\eqref{F4}.\\ It is clear from the definition, and by our assumptions, that the functional in \eqref{modenergym} admits minimizers in $W^{1,p}_{\mathrm{loc}}\left(\Omega\right)\cap L^{2m}_\mathrm{loc}\left(\Omega\right)$.\\ We want to prove the following higher differentiability result for local minimizers of the functional $\mathcal{F}_m$.
\begin{thm}\label{approxmthm}
Let $\Omega\subset\numberset{R}^n$ be a bounded open set, $m>1$, $a>0$ and $1<p<2$.\\
Let $v\in W^{1, p}_{\mathrm{loc}}\left(\Omega, \numberset{R}^N\right)\cap L^{2m}_{\mathrm{loc}}\left(\Omega, \numberset{R}^N\right)$ be a local minimizer of the functional \eqref{modenergym}, under the assumptions \eqref{F1}--\eqref{F4}, with
$$f\in L^{\frac{2m\left(p+2\right)}{2mp+p-2}}_{\mathrm{loc}}\left(\Omega\right)\qquad\mbox{and}\qquad g\in L^{\frac{2m\left(p+2\right)}{2m-p}}_{\mathrm{loc}}\left(\Omega\right).$$
Then $V_p\left(Dv\right)\in W^{1, 2}_{\mathrm{loc}}(\Omega)$, and the estimate
\begin{eqnarray}\label{approxmestimate}
&&\int_{B_{\frac{R}{2}}}\left|DV_p\left(Dv(x)\right)\right|^2dx\cr\cr
&\le&\frac{c}{R^{\frac{p+2}{p}}}\left[\int_{B_{R}} \left(\mu^2+\left|Dv\left(x\right)\right|^2\right)^\frac{p}{2}dx\right.\cr\cr
&&\left.+\left(\int_{B_{4 R}}\left|v(x)\right|^{2m}dx\right)^\frac{1}{m+1}\cdot\left(\int_{B_{4 R}}\left(\mu^2+\left|Dv(x)\right|^2\right)^\frac{p}{2}dx\right)^\frac{m}{m+1}\right.\cr\cr
&&\left.+\int_{B_{R}}g^{\frac{2m\left(p+2\right)}{2m-p}}(x)dx+\int_{B_R}\left|f(x)\right|^\frac{2m\left(p+2\right)}{2mp+p-2}dx+\left|B_R\right|+1\right],
\end{eqnarray}
holds true for any ball $B_{4R}\Subset\Omega$. \end{thm}
For further needs, we notice that
$$
\frac{2m\left(p+2\right)}{2mp+p-2}>\frac{m\left(p+2\right)}{mp+m-1}
$$
for any $m>1$ as long as $1<p<2$, since it is equivalent to
$$
2mp+2m-2>2mp+p-2
$$
i.e.
$$
2m>p.
$$
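As a quick numerical sanity check of this comparison, take for instance $m=2$ and $p=\frac{3}{2}$: then
$$
\frac{2m\left(p+2\right)}{2mp+p-2}=\frac{14}{\frac{11}{2}}=\frac{28}{11}>\frac{7}{4}=\frac{m\left(p+2\right)}{mp+m-1},
$$
in accordance with $2m=4>\frac{3}{2}=p$.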
Let us also notice that
$$
\frac{2m\left(p+2\right)}{2mp+p-2}>\frac{p+2}{p},
$$
since it is equivalent to $2mp>2mp+p-2$, i.e. $p<2$, and
$$
\frac{2m\left(p+2\right)}{2m-p}>p+2,
$$
since it is equivalent to $2m>2m-p$, i.e. $p>0$; hence both inequalities hold for any $m>1$ and $p\in\left(1, 2\right)$.\\ \begin{proof}[Proof of Theorem \ref{approxmthm}]
{\bf Step 1: the a priori estimate.}\\
Our first step consists in proving that, if $v\in W^{1, p}_{\mathrm{loc}}\left(\Omega, \numberset{R}^N\right)\cap L^{2m}_{\mathrm{loc}}\left(\Omega, \numberset{R}^N\right)$ is a local minimizer of $\mathcal{F}_m$ such that
$$
V_p\left(Dv\right)\in W^{1, 2}_{\mathrm{loc}}\left(\Omega\right),
$$
then estimate \eqref{approxmestimate} holds.
Since $v\in W^{1, p}_{\mathrm{loc}}\left(\Omega, \numberset{R}^N\right)\cap L^{2m}_{\mathrm{loc}}\left(\Omega, \numberset{R}^N\right)$ is a local minimizer of $\mathcal{F}_m$, it is a weak solution of the corresponding Euler-Lagrange system, that is, for any $\psi\in C^{\infty}_{0}\left(\Omega,\numberset{R}^{N}\right)$, we have
\begin{equation}\label{ELm}
\int_{\Omega}\langle D_\xi F\left(x,Dv(x)\right),D\psi(x)\rangle dx=\int_{\Omega}\left[f(x)-2m\left(\left|v(x)\right|-a\right)^{2m-1}_+\cdot\frac{v(x)}{\left| v(x)\right|}\right]\psi(x)dx.
\end{equation} Let us fix a ball $B_{4R}\Subset \Omega$ and arbitrary radii $\frac{R}{2}\le r<\tilde{s}<t<\tilde{t}<\lambda r<R,$ with $1<\lambda<2$. Let us consider a cut off function $\eta\in C^\infty_0\left(B_t\right)$ such that $\eta\equiv 1$ on
$B_{\tilde{s}}$, $\left|D \eta\right|\le \frac{c}{t-\tilde{s}}$ and $\left|D^2 \eta\right|\le \frac{c}{\left(t-\tilde{s}\right)^2}$. From now on, with no loss of generality, we suppose $R<\frac{1}{4}$. For $\left|h\right|$ sufficiently small, we can choose, for any $s=1, ..., n$
$$\psi=\tau_{s, -h}\left(\eta^2\tau_{s, h}v\right)$$
as a test function for the equation \eqref{ELm}, and recalling Proposition \ref{findiffpr}, we get \begin{eqnarray*}
&&\int_\Omega \left<\tau_{s, h}D_\xi F\left(x, Dv(x)\right), D\left(\eta^2(x)\tau_{s, h}v(x)\right)\right>dx\cr\cr
&=&\int_\Omega f(x)\cdot\tau_{s, -h}\left(\eta^2(x)\tau_{s, h}v(x)\right)dx\cr\cr
&&-2m\int_\Omega\tau_{s, h}\left[\left(\left|v(x)\right|-a\right)^{2m-1}_+\cdot\frac{v(x)}{\left|v(x)\right|}\right]\cdot\eta^2(x)\tau_{s, h}v(x)dx, \end{eqnarray*} that is \begin{eqnarray*}
I_1+I_2&:=&\int_\Omega \left<D_\xi F\left(x+he_s, Dv\left(x+he_s\right)\right)-D_\xi F\left(x+he_s, Dv(x)\right), \eta^2(x)\tau_{s, h}Dv(x)\right>dx\cr\cr
&&+2m\int_\Omega\tau_{s, h}\left[\left(\left|v(x)\right|-a\right)^{2m-1}_+\cdot\frac{v(x)}{\left|v(x)\right|}\right]\cdot\eta^2(x)\tau_{s, h}v(x)dx\cr\cr
&=&-\int_\Omega \left<D_\xi F\left(x+he_s, Dv(x)\right)-D_\xi F\left(x, Dv(x)\right), \eta^2(x)\tau_{s, h}Dv(x)\right>dx\cr\cr
&&-2\int_\Omega \left<\tau_{s, h}\left[D_\xi F\left(x, Dv(x)\right)\right], \eta(x)D\eta(x)\otimes\tau_{s, h}v(x)\right>dx\cr\cr
&&+\int_\Omega f(x)\cdot\tau_{s, -h}\left(\eta^2(x)\tau_{s, h}v(x)\right)dx\cr\cr
&=:&-I_3-I_4+I_5. \end{eqnarray*}
So we have
\begin{equation}\label{fullestimatem*}
I_1+I_2\le\left|I_3\right|+\left|I_4\right|+\left|I_5\right|. \end{equation}
By virtue of Lemma \ref{Lemma8}, we have
\begin{equation*}\label{I_2m}
I_2\ge c(m)\int_\Omega\eta^2(x)\left|\left(\left|v(x+he_s)\right|-a\right)^{m}_+\cdot\frac{v\left(x+he_s\right)}{\left|v\left(x+he_s\right)\right|}-\left(\left|v(x)\right|-a\right)^{m}_+\cdot\frac{v(x)}{\left|v(x)\right|}\right|^2dx\ge0, \end{equation*}
so \eqref{fullestimatem*} becomes
\begin{equation}\label{fullestimatem}
I_1\le\left|I_3\right|+\left|I_4\right|+\left|I_5\right|. \end{equation}
By assumption \eqref{F2}, we get
\begin{equation}\label{I_1m}
I_1\ge\nu\int_\Omega\eta^2(x)\left(\mu^2+\left|Dv(x)\right|^2+\left|Dv\left(x+he_s\right)\right|^2\right)^\frac{p-2}{2}\left|\tau_{s, h}Dv(x)\right|^2dx. \end{equation}
For what concerns the term $I_3$, by \eqref{F4} and using Young's Inequality with exponents $\left(2, 2\right)$, for any $\varepsilon>0$, we have
\begin{eqnarray*}\label{I_3m*}
\left|I_3\right|&\le&\left|h\right|\int_\Omega\eta^2(x)\left(g(x)+g\left(x+he_s \right)\right)\left(\mu^2+\left|Dv(x)\right|^2+\left|Dv\left(x+he_s\right)\right|^2\right)^\frac{p-1}{2}\left|\tau_{s, h}Dv(x)\right|dx\cr\cr
&\le&\varepsilon\int_\Omega\eta^2(x)\left(\mu^2+\left|Dv(x)\right|^2+\left|Dv\left(x+he_s\right)\right|^2\right)^\frac{p-2}{2}\left|\tau_{s, h}Dv(x)\right|^2dx\cr\cr
&&+c_\varepsilon\left|h\right|^2\int_{\Omega}\eta^2(x)\left(g(x)+g\left(x+he_s\right)\right)^2\left(\mu^2+\left|Dv(x)\right|^2+\left|Dv\left(x+he_s\right)\right|^2\right)^\frac{p}{2}dx. \end{eqnarray*}
By H\"older's inequality with exponents $\left(\frac{m\left(p+2\right)}{p\left(m+1\right)}, \frac{m\left(p+2\right)}{2m-p}\right)$, the properties of $\eta$ and Lemma \ref{le1}, we get
\begin{eqnarray}\label{I_3m}
\left|I_3\right|&\le&\varepsilon\int_\Omega\eta^2(x)\left(\mu^2+\left|Dv(x)\right|^2+\left|Dv\left(x+he_s\right)\right|^2\right)^\frac{p-2}{2}\left|\tau_{s, h}Dv(x)\right|^2dx\cr\cr
&&+c_\varepsilon\left|h\right|^2\left(\int_{B_t}\left(\mu^2+\left|Dv(x)\right|^2+\left|Dv\left(x+he_s\right)\right|^2\right)^\frac{m\left(p+2\right)}{2\left(m+1\right)}dx\right)^\frac{p\left(m+1\right)}{m\left(p+2\right)}\cr\cr
&&\cdot\left(\int_{B_t}\left(g(x)+g\left(x+he_s\right)\right)^{\frac{2m\left(p+2\right)}{2m-p}}dx\right)^\frac{2m-p}{m\left(p+2\right)}\cr\cr
&\le&\varepsilon\int_\Omega\eta^2(x)\left(\mu^2+\left|Dv(x)\right|^2+\left|Dv\left(x+he_s\right)\right|^2\right)^\frac{p-2}{2}\left|\tau_{s, h}Dv(x)\right|^2dx\cr\cr
&&+c_\varepsilon\left|h\right|^2\left(\int_{B_{\tilde{t}}}\left(\mu^2+\left|Dv(x)\right|^2\right)^\frac{m\left(p+2\right)}{2\left(m+1\right)}dx\right)^\frac{p\left(m+1\right)}{m\left(p+2\right)}\cdot\left(\int_{B_{\lambda r}}g^{\frac{2m\left(p+2\right)}{2m-p}}(x)dx\right)^\frac{2m-p}{m\left(p+2\right)}. \end{eqnarray}
Let us consider, now, the term $I_4$. We have
\begin{eqnarray*}
I_4&=&2\int_\Omega \left<\tau_{s, h}\left[D_\xi F\left(x, Dv(x)\right)\right], \eta(x)D\eta(x)\otimes\tau_{s, h}v(x)\right>dx\cr\cr
&=&2\int_\Omega \left<D_\xi F\left(x, Dv(x)\right), \tau_{s, -h}\left[\eta(x)D\eta(x)\otimes\tau_{s, h}v(x)\right]\right>dx, \end{eqnarray*}
so, by \eqref{F1}, we get
\begin{eqnarray*}\label{I_4m*}
\left|I_4\right|&\le&c\int_\Omega \left(\mu^2+\left|Dv(x)\right|^2\right)^\frac{p-1}{2}\left|\tau_{s, -h}\left[\eta(x)D\eta(x)\otimes\tau_{s, h}v(x)\right]\right|dx. \end{eqnarray*}
We can treat this term as we did after \eqref{I_3*} in the proof of Theorem \ref{CGPThm1}, using \eqref{tau1} with $v$ in place of $u$, thus getting
\begin{equation}\label{I_4m}
\left|I_4\right|\le\sigma\int_{B_{\tilde{t}}}\left|\tau_{s, h}V_p\left(Dv(x)\right)\right|^2dx+\frac{c_\sigma\left|h\right|^2}{\left(t-\tilde{s}\right)^2}\int_{B_{R}} \left(\mu^2+\left|Dv\left(x\right)\right|^2\right)^\frac{p}{2}dx, \end{equation} for any $\sigma>0$.\\
In order to estimate the term $I_5$, arguing as we did in \eqref{I_4'}, we have
\begin{eqnarray*}
I_5&=&\int_\Omega\eta^2\left(x\right)f(x)\tau_{s, -h}\left(\tau_{s, h}v(x)\right)dx\cr\cr
&&+\int_\Omega\left[\eta\left(x-he_s\right)+\eta(x)\right]f(x)\tau_{s, -h}\eta(x)\tau_{s, h}v\left(x-he_s\right)dx\cr\cr
&=:&J_1+J_2, \end{eqnarray*}
which implies
\begin{equation}\label{I_5m*}
\left|I_5\right|\le\left|J_1\right|+\left|J_2\right|. \end{equation}
Let us consider the term $J_1$. By virtue of the properties of $\eta$ and using H\"older's inequality with exponents $\left(\frac{2m\left(p+2\right)}{2mp+p-2}, \frac{2m\left(p+2\right)}{4m+2-p}\right)$, we have
\begin{eqnarray}\label{J_1m*}
\left|J_1\right|&\le&\int_{B_t}\left|f(x)\right|\left|\tau_{s, -h}\left(\tau_{s, h}v(x)\right)\right|dx\cr\cr
&\le&\left(\int_{B_t}\left|f(x)\right|^\frac{2m\left(p+2\right)}{2mp+p-2}dx\right)^\frac{2mp+p-2}{2m\left(p+2\right)}\cdot\left(\int_{B_t}\left|\tau_{s, -h}\left(\tau_{s, h}v(x)\right)\right|^\frac{2m\left(p+2\right)}{4m+2-p}dx\right)^\frac{4m+2-p}{2m\left(p+2\right)}\cr\cr
&\le&\left|h\right|\left(\int_{B_t}\left|f(x)\right|^\frac{2m\left(p+2\right)}{2mp+p-2}dx\right)^\frac{2mp+p-2}{2m\left(p+2\right)}\cdot\left(\int_{B_{\tilde{t}}}\left|\tau_{s, h}Dv(x)\right|^\frac{2m\left(p+2\right)}{4m+2-p}dx\right)^\frac{4m+2-p}{2m\left(p+2\right)}, \end{eqnarray}
where, in the last line we applied Lemma \ref{le1} since, by the a priori assumption $V_p\left(Dv\right)\in W^{1,2}_\mathrm{loc}\left(\Omega\right)$ and Remark \ref{rmk3}, we have $Dv\in L^{\frac{m\left(p+2\right)}{m+1}}_\mathrm{loc}\left(\Omega\right)$, which implies $Dv\in L^{\frac{2m\left(p+2\right)}{4m+2-p}}_\mathrm{loc}\left(\Omega\right)$ since, for any $m>1$ and $1<p<2$, we have $\frac{2m\left(p+2\right)}{4m+2-p}<\frac{m\left(p+2\right)}{m+1}$.\\ Let us consider the second integral in \eqref{J_1m*}. By virtue of \eqref{lemma6GPestimate1}, and using H\"older's inequality with exponents $\left(\frac{4m+2-p}{m\left(p+2\right)}, \frac{4m+2-p}{\left(2-p\right)\left(m+1\right)}\right)$, we have
\begin{eqnarray}\label{tauVpm}
\int_{B_{\tilde{t}}}\left|\tau_{s, h}Dv(x)\right|^\frac{2m\left(p+2\right)}{4m+2-p}dx&\le&\int_{B_{\tilde{t}}}\left(\mu^2+\left|Dv(x)\right|^2+\left|Dv\left(x+he_s\right)\right|^2\right)^{\frac{2-p}{4}\cdot\frac{2m\left(p+2\right)}{4m+2-p}}\cr\cr
&&\cdot\left|\tau_{s, h}V_p\left(Dv(x)\right)\right|^{\frac{2m\left(p+2\right)}{4m+2-p}}dx\cr\cr
&\le&\left(\int_{B_{\tilde{t}}}\left(\mu^2+\left|Dv(x)\right|^2+\left|Dv\left(x+he_s\right)\right|^2\right)^{\frac{m\left(p+2\right)}{2\left(m+1\right)}}dx\right)^\frac{\left(2-p\right)\left(m+1\right)}{4m+2-p}\cr\cr
&&\cdot\left(\int_{B_{\tilde{t}}}\left|\tau_{s, h}V_p\left(Dv(x)\right)\right|^{2}dx\right)^\frac{m\left(p+2\right)}{4m+2-p}. \end{eqnarray}
Inserting \eqref{tauVpm} into \eqref{J_1m*}, we get
\begin{eqnarray}\label{J_1m**}
\left|J_1\right|&\le&\left|h\right|\left(\int_{B_t}\left|f(x)\right|^\frac{2m\left(p+2\right)}{2mp+p-2}dx\right)^\frac{2mp+p-2}{2m\left(p+2\right)}\cr\cr
&&\cdot\left(\int_{B_{\tilde{t}}}\left(\mu^2+\left|Dv(x)\right|^2+\left|Dv\left(x+he_s\right)\right|^2\right)^{\frac{m\left(p+2\right)}{2\left(m+1\right)}}dx\right)^\frac{\left(2-p\right)\left(m+1\right)}{2m\left(p+2\right)}\cr\cr
&&\cdot\left(\int_{B_{\tilde{t}}}\left|\tau_{s, h}V_p\left(Dv(x)\right)\right|^{2}dx\right)^\frac{1}{2}. \end{eqnarray}
Using Lemma \ref{le1} and Young's Inequality with exponents $\left(\frac{2m\left(p+2\right)}{2mp+p-2}, \frac{2m\left(p+2\right)}{\left(2-p\right)\left(m+1\right)}, 2\right)$, we get
\begin{eqnarray}\label{J_1m}
\left|J_1\right|&\le&c_\sigma\left|h\right|^2\int_{B_t}\left|f(x)\right|^\frac{2m\left(p+2\right)}{2mp+p-2}dx+\sigma\left|h\right|^2\int_{B_{\lambda r}}\left(\mu^2+\left|Dv(x)\right|^2\right)^{\frac{m\left(p+2\right)}{2\left(m+1\right)}}dx\cr\cr
&&+\sigma\int_{B_{\tilde{t}}}\left|\tau_{s, h}V_p\left(Dv(x)\right)\right|^{2}dx, \end{eqnarray}
for any $\sigma>0$.\\ For what concerns the term $J_2$, by the properties of $\eta$, as in \eqref{J_2*}, we have
\begin{eqnarray*}\label{J_2m*}
\left|J_2\right|&\le&\int_{B_t} \left|f(x)\right|\left|\tau_{s, -h}\eta(x)\right|\left|\tau_{s, h}v\left(x-he_s\right)\right|dx\cr\cr
&\le&\frac{c\left|h\right|}{t-\tilde{s}}\int_{B_t} \left|f(x)\right|\left|\tau_{s, h}v\left(x-he_s\right)\right|dx. \end{eqnarray*}
Now, if we apply H\"older's Inequality with exponents $\left(\frac{m\left(p+2\right)}{mp+m-1}, \frac{m\left(p+2\right)}{m+1}\right)$, we get
\begin{eqnarray}\label{J_2m}
\left|J_2\right|&\le&\frac{c\left|h\right|}{t-\tilde{s}}\left(\int_{B_t} \left|f(x)\right|^\frac{m\left(p+2\right)}{mp+m-1}dx\right)^\frac{mp+m-1}{m\left(p+2\right)}\cr\cr
&&\cdot\left(\int_{B_t}\left|\tau_{s, h}v\left(x-he_s\right)\right|^\frac{m\left(p+2\right)}{m+1}dx\right)^\frac{m+1}{m\left(p+2\right)}\cr\cr
&\le&\frac{c\left|h\right|^2}{t-\tilde{s}}\left(\int_{B_t} \left|f(x)\right|^\frac{m\left(p+2\right)}{mp+m-1}dx\right)^\frac{mp+m-1}{m\left(p+2\right)}\cr\cr
&&\cdot\left(\int_{B_{\lambda r}}\left|Dv\left(x\right)\right|^\frac{m\left(p+2\right)}{m+1}dx\right)^\frac{m+1}{m\left(p+2\right)}, \end{eqnarray} where we also used Lemma \ref{le1}, since $Dv\in L^{\frac{m\left(p+2\right)}{m+1}}_{\mathrm{loc}}\left(\Omega\right)$.\\ By virtue of \eqref{J_1m} and \eqref{J_2m}, \eqref{I_5m*} gives
\begin{eqnarray}\label{I_5m}
\left|I_5\right|&\le&c_\sigma\left|h\right|^2\int_{B_t}\left|f(x)\right|^\frac{2m\left(p+2\right)}{2mp+p-2}dx+\sigma\left|h\right|^2\int_{B_{\lambda r}}\left(\mu^2+\left|Dv(x)\right|^2\right)^{\frac{m\left(p+2\right)}{2\left(m+1\right)}}dx\cr\cr
&&+\sigma\int_{B_{\tilde{t}}}\left|\tau_{s, h}V_p\left(Dv(x)\right)\right|^{2}dx\cr\cr
&&+\frac{c\left|h\right|^2}{t-\tilde{s}}\left(\int_{B_t} \left|f(x)\right|^\frac{m\left(p+2\right)}{mp+m-1}dx\right)^\frac{mp+m-1}{m\left(p+2\right)}\cr\cr
&&\cdot\left(\int_{B_{\lambda r}}\left|Dv\left(x\right)\right|^\frac{m\left(p+2\right)}{m+1}dx\right)^\frac{m+1}{m\left(p+2\right)}\cr\cr
&\le&2\sigma\left|h\right|^2\int_{B_{\lambda r}}\left(\mu^2+\left|Dv(x)\right|^2\right)^{\frac{m\left(p+2\right)}{2\left(m+1\right)}}dx+\sigma\int_{B_{\tilde{t}}}\left|\tau_{s, h}V_p\left(Dv(x)\right)\right|^{2}dx\cr\cr
&&+c_\sigma\left|h\right|^2\int_{B_t}\left|f(x)\right|^\frac{2m\left(p+2\right)}{2mp+p-2}dx+\frac{c_\sigma\left|h\right|^2}{\left(t-\tilde{s}\right)^\frac{m\left(p+2\right)}{mp+m-1}}\int_{B_t} \left|f(x)\right|^\frac{m\left(p+2\right)}{mp+m-1}dx, \end{eqnarray}
where we also used Young's Inequality with exponents $\left(\frac{m\left(p+2\right)}{mp+m-1} ,\frac{m\left(p+2\right)}{m+1}\right)$.\\ Plugging \eqref{I_1m}, \eqref{I_3m}, \eqref{I_4m} and \eqref{I_5m} into \eqref{fullestimatem}, and choosing $\varepsilon<\frac{\nu}{2}$, we get
\begin{eqnarray*}\label{fullestimatem2}
&&\int_{\Omega}\eta^2(x)\left|\tau_{s, h}Dv\left(x\right)\right|^2\left(\mu^2+\left|Dv\left(x+he_s\right)\right|^2+\left|Dv\left(x\right)\right|^2\right)^\frac{p-2}{2}dx\cr\cr
&\le&c\left|h\right|^2\left(\int_{B_{\tilde{t}}}\left(\mu^2+\left|Dv(x)\right|^2\right)^\frac{m\left(p+2\right)}{2\left(m+1\right)}dx\right)^\frac{p\left(m+1\right)}{m\left(p+2\right)}\cdot\left(\int_{B_{\lambda r}}g^{\frac{2m\left(p+2\right)}{2m-p}}(x)dx\right)^\frac{2m-p}{m\left(p+2\right)}\cr\cr
&&+2\sigma\int_{B_{\tilde{t}}}\left|\tau_{s, h}V_p\left(Dv(x)\right)\right|^2dx+\frac{c_\sigma\left|h\right|^2}{\left(t-\tilde{s}\right)^2}\int_{B_{R}} \left(\mu^2+\left|Dv\left(x\right)\right|^2\right)^\frac{p}{2}dx\cr\cr
&&+c_\sigma\left|h\right|^2\int_{B_t}\left|f(x)\right|^\frac{2m\left(p+2\right)}{2mp+p-2}dx+\frac{c_\sigma\left|h\right|^2}{\left(t-\tilde{s}\right)^\frac{m\left(p+2\right)}{mp+m-1}}\int_{B_t} \left|f(x)\right|^\frac{m\left(p+2\right)}{mp+m-1}dx\cr\cr
&&+2\sigma\left|h\right|^2\int_{B_{\lambda r}}\left(\mu^2+\left|Dv(x)\right|^2\right)^{\frac{m\left(p+2\right)}{2\left(m+1\right)}}dx \end{eqnarray*}
which, by virtue of Lemma \ref{lemma6GP}, and using Young's inequality with exponents $\left(\frac{m\left(p+2\right)}{p\left(m+1\right)}, \frac{m\left(p+2\right)}{2m-p}\right)$, implies
\begin{eqnarray}\label{tauV_pm*}
&&\int_{\Omega}\eta^2(x)\left|\tau_{s, h}Dv\left(x\right)\right|^2\left(\mu^2+\left|Dv\left(x+he_s\right)\right|^2+\left|Dv\left(x\right)\right|^2\right)^\frac{p-2}{2}dx\cr\cr
&\le&2\sigma\int_{B_{\tilde{t}}}\left|\tau_{s, h}V_p\left(Dv(x)\right)\right|^2dx+3\sigma\left|h\right|^2\int_{B_{\lambda r}}\left(\mu^2+\left|Dv(x)\right|^2\right)^{\frac{m\left(p+2\right)}{2\left(m+1\right)}}dx\cr\cr
&&+\frac{c_\sigma\left|h\right|^2}{\left(t-\tilde{s}\right)^2}\int_{B_{R}} \left(\mu^2+\left|Dv\left(x\right)\right|^2\right)^\frac{p}{2}dx+c_\sigma\left|h\right|^2\int_{B_{\lambda r}}g^{\frac{2m\left(p+2\right)}{2m-p}}(x)dx\cr\cr
&&+c_\sigma\left|h\right|^2\int_{B_t}\left|f(x)\right|^\frac{2m\left(p+2\right)}{2mp+p-2}dx+\frac{c_\sigma\left|h\right|^2}{\left(t-\tilde{s}\right)^\frac{m\left(p+2\right)}{mp+m-1}}\int_{B_t} \left|f(x)\right|^\frac{m\left(p+2\right)}{mp+m-1}dx. \end{eqnarray} Applying Lemma \ref{le1}, \eqref{tauV_pm*} becomes
\begin{eqnarray}\label{tauV_pm**}
&&\int_{\Omega}\eta^2(x)\left|\tau_{s, h}V_p\left(Dv(x)\right)\right|^2dx\cr\cr
&\le&3\sigma\left|h\right|^2\int_{B_{\lambda r}}\left(\mu^2+\left|Dv(x)\right|^2\right)^\frac{m\left(p+2\right)}{2\left(m+1\right)}dx+c\cdot\sigma\left|h\right|^2\int_{B_{\lambda r}}\left|DV_p\left(Dv(x)\right)\right|^2dx\cr\cr
&&+\frac{c_\sigma\left|h\right|^2}{\left(t-\tilde{s}\right)^2}\int_{B_{R}} \left(\mu^2+\left|Dv\left(x\right)\right|^2\right)^\frac{p}{2}dx+c_\sigma\left|h\right|^2\int_{B_{\lambda r}}g^{\frac{2m\left(p+2\right)}{2m-p}}(x)dx\cr\cr
&&+c_\sigma\left|h\right|^2\int_{B_t}\left|f(x)\right|^\frac{2m\left(p+2\right)}{2mp+p-2}dx+\frac{c_\sigma\left|h\right|^2}{\left(t-\tilde{s}\right)^\frac{m\left(p+2\right)}{mp+m-1}}\int_{B_t} \left|f(x)\right|^\frac{m\left(p+2\right)}{mp+m-1}dx. \end{eqnarray} Let us observe that, for any $m>1$ and $1<p<2$, we have $$ \frac{m\left(p+2\right)}{mp+m-1}\le\frac{p+2}{p}, $$ hence $$ \max\Set{2, \frac{m\left(p+2\right)}{mp+m-1}}\le\max\Set{2, \frac{p+2}{p}}=\frac{p+2}{p}. $$ Hence, since $t-\tilde{s}<1$, by \eqref{tauV_pm**} we deduce \begin{eqnarray}\label{tauV_pm'}
&&\int_{\Omega}\eta^2(x)\left|\tau_{s, h}V_p\left(Dv(x)\right)\right|^2dx\cr\cr
&\le&3\sigma\left|h\right|^2\int_{B_{\lambda r}}\left(\mu^2+\left|Dv(x)\right|^2\right)^\frac{m\left(p+2\right)}{2\left(m+1\right)}dx+c\cdot\sigma\left|h\right|^2\int_{B_{\lambda r}}\left|DV_p\left(Dv(x)\right)\right|^2dx\cr\cr
&&+\frac{c_\sigma\left|h\right|^2}{\left(t-\tilde{s}\right)^{\frac{p+2}{p}}}\left[\int_{B_{R}}g^{\frac{2m\left(p+2\right)}{2m-p}}(x)dx+\int_{B_{R}} \left(\mu^2+\left|Dv\left(x\right)\right|^2\right)^\frac{p}{2}dx\right.\cr\cr
&&\left.+\int_{B_R}\left|f(x)\right|^\frac{2m\left(p+2\right)}{2mp+p-2}dx+\int_{B_R} \left|f(x)\right|^\frac{m\left(p+2\right)}{mp+m-1}dx\right]. \end{eqnarray}
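For the reader's convenience, the exponent comparison used above can be checked directly: since all the denominators are positive for $m>1$ and $1<p<2$,
$$
\frac{m\left(p+2\right)}{mp+m-1}\le\frac{p+2}{p}
\iff mp\left(p+2\right)\le\left(p+2\right)\left(mp+m-1\right)
\iff 1\le m,
$$
which holds for any $m>1$.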
Let us notice that, since $\frac{m\left(p+2\right)}{mp+m-1}<\frac{2m\left(p+2\right)}{2mp+p-2}$, we have $L^{\frac{2m\left(p+2\right)}{2mp+p-2}}_\mathrm{loc}\left(\Omega\right)\hookrightarrow L^{\frac{m\left(p+2\right)}{mp+m-1}}_\mathrm{loc}\left(\Omega\right)$, and using Young's inequality with exponents $\left(\frac{2\left(mp+m-1\right)}{2mp+p-2}, \frac{2\left(mp+m-1\right)}{2m-p}\right)$, we have
\begin{equation}\label{Immersion}
\int_{B_{R}} \left|f(x)\right|^\frac{m\left(p+2\right)}{mp+m-1}dx\le c\left|B_R\right|+c\int_{B_{R}} \left|f(x)\right|^{\frac{2m\left(p+2\right)}{2mp+p-2}}dx. \end{equation}
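Let us also observe that the exponents used in Young's inequality above are indeed conjugate, and that they raise $\left|f\right|$ to the desired power:
$$
\frac{2mp+p-2}{2\left(mp+m-1\right)}+\frac{2m-p}{2\left(mp+m-1\right)}=\frac{2\left(mp+m-1\right)}{2\left(mp+m-1\right)}=1
$$
and
$$
\frac{m\left(p+2\right)}{mp+m-1}\cdot\frac{2\left(mp+m-1\right)}{2mp+p-2}=\frac{2m\left(p+2\right)}{2mp+p-2}.
$$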
So, plugging \eqref{Immersion} into \eqref{tauV_pm'}, we get
\begin{eqnarray}\label{tauV_pm}
&&\int_{\Omega}\eta^2(x)\left|\tau_{s, h}V_p\left(Dv(x)\right)\right|^2dx\cr\cr
&\le&3\sigma\left|h\right|^2\int_{B_{\lambda r}}\left(\mu^2+\left|Dv(x)\right|^2\right)^\frac{m\left(p+2\right)}{2\left(m+1\right)}dx+c\cdot\sigma\left|h\right|^2\int_{B_{\lambda r}}\left|DV_p\left(Dv(x)\right)\right|^2dx\cr\cr
&&+\frac{c_\sigma\left|h\right|^2}{\left(t-\tilde{s}\right)^{\frac{p+2}{p}}}\left[\int_{B_{R}}g^{\frac{2m\left(p+2\right)}{2m-p}}(x)dx+\int_{B_{R}} \left(\mu^2+\left|Dv\left(x\right)\right|^2\right)^\frac{p}{2}dx\right.\cr\cr
&&\left.+\int_{B_R}\left|f(x)\right|^\frac{2m\left(p+2\right)}{2mp+p-2}dx+\left|B_R\right|\right]. \end{eqnarray}
Since, by our a priori assumption, $V_p\left(Dv\right)\in W^{1, 2}_\mathrm{loc}\left(\Omega\right)$, and \eqref{tauV_pm} holds for any $s=1, \ldots, n$, Lemma \ref{Giusti8.2} implies
\begin{eqnarray*}
&&\int_{\Omega}\eta^2(x)\left|DV_p\left(Dv(x)\right)\right|^2dx\cr\cr
&\le&3\sigma\int_{B_{\lambda r}}\left(\mu^2+\left|Dv(x)\right|^2\right)^\frac{m\left(p+2\right)}{2\left(m+1\right)}dx+c\cdot\sigma\int_{B_{\lambda r}}\left|DV_p\left(Dv(x)\right)\right|^2dx\cr\cr
&&+\frac{c_\sigma}{\left(t-\tilde{s}\right)^{\frac{p+2}{p}}}\left[\int_{B_{R}}g^{\frac{2m\left(p+2\right)}{2m-p}}(x)dx+\int_{B_{R}} \left(\mu^2+\left|Dv\left(x\right)\right|^2\right)^\frac{p}{2}dx\right.\cr\cr
&&\left.+\int_{B_R}\left|f(x)\right|^\frac{2m\left(p+2\right)}{2mp+p-2}dx+\left|B_R\right|\right], \end{eqnarray*}
and by the properties of $\eta$, we get
\begin{eqnarray}\label{DV_pm**}
&&\int_{B_{\tilde{s}}}\left|DV_p\left(Dv(x)\right)\right|^2dx\cr\cr
&\le&c\cdot\sigma\int_{B_{\lambda r}}\left|DV_p\left(Dv(x)\right)\right|^2dx\cr\cr
&&+\frac{c_\sigma}{\left(t-\tilde{s}\right)^{\frac{p+2}{p}}}\left[\int_{B_{R}}g^{\frac{2m\left(p+2\right)}{2m-p}}(x)dx+\int_{B_{R}} \left(\mu^2+\left|Dv\left(x\right)\right|^2\right)^\frac{p}{2}dx\right.\cr\cr
&&\left.+\int_{B_R}\left|f(x)\right|^\frac{2m\left(p+2\right)}{2mp+p-2}dx+\left|B_R\right|\right]\cr\cr
&&+3\sigma\int_{B_{\lambda r}}\left(\mu^2+\left|Dv(x)\right|^2\right)^\frac{m\left(p+2\right)}{2\left(m+1\right)}dx. \end{eqnarray}
Let us recall that, since we chose $B_{4R}\Subset\Omega$, $\frac{R}{2}<r<\tilde{s}<t<\tilde{t}<\lambda r<R$, with $1<\lambda<2$ and $R<\frac{1}{4}$, we also have $\lambda r<\lambda\tilde{s}<\lambda t<\lambda^2 r<4r<4R<1$.\\
Choosing a cut-off function $\phi\in C^\infty_{0}\left(B_{\lambda t}\right)$ such that $0\le\phi\le1$, $\phi\equiv 1$ on $B_{\lambda \tilde{s}}$ and $\left|D\phi\right|\le\frac{c}{\lambda\left(t-\tilde{s}\right)}$, we have
\begin{equation*}\label{Iterationm2}
\int_{B_{\lambda r}}\left(\mu^2+\left|Dv(x)\right|^2\right)^\frac{m\left(p+2\right)}{2\left(m+1\right)}dx\le\left|B_R\right|+\int_{B_{\lambda t}}\phi^{\frac{m}{m+1}\left(p+2\right)}(x)\left|Dv(x)\right|^{\frac{m}{m+1}\left(p+2\right)}dx, \end{equation*}
where we also used that $\mu\in[0, 1]$ and $\lambda r<R$. Therefore, applying \eqref{2.1GP}, we get
\begin{eqnarray}\label{Iterationm2*}
&&\int_{B_{\lambda r}}\left(\mu^2+\left|Dv(x)\right|^2\right)^\frac{m\left(p+2\right)}{2\left(m+1\right)}dx\cr\cr
&\le&(p+2)^2\left(\int_{B_{\lambda t}}\phi^{\frac{m}{m+1}(p+2)}\left|v(x)\right|^{2m}dx\right)^\frac{1}{m+1}\cr\cr
&&\cdot\left[\left(\int_{B_{\lambda t}}\phi^{\frac{m}{m+1}(p+2)}\left|D\phi\right|^2\left(\mu^2+\left|Dv(x)\right|^2\right)^\frac{p}{2}dx\right)^\frac{m}{m+1}\right.\cr\cr
&&\left.+n\left(\int_{B_{\lambda t}}\phi^{\frac{m}{m+1}(p+2)}\left(\mu^2+\left|Dv(x)\right|^2\right)^\frac{p-2}{2}\left|D^2v(x)\right|^2dx\right)^\frac{m}{m+1}\right]+\left|B_R\right|\cr\cr
&\le&c\left(n, p\right)\left(\int_{B_{4R}}\left|v(x)\right|^{2m}dx\right)^\frac{1}{m+1}\cr\cr
&&\cdot\left[\frac{1}{\lambda^2\left(t-\tilde{s}\right)^2}\left(\int_{B_{4R}}\left(\mu^2+\left|Dv(x)\right|^2\right)^\frac{p}{2}dx\right)^\frac{m}{m+1}\right.\cr\cr
&&\left.+\left(\int_{B_{\lambda^2 r}}\left|DV_p\left(Dv(x)\right)\right|^2dx\right)^\frac{m}{m+1}\right]+\left|B_R\right|, \end{eqnarray} where we used the properties of $\phi$, \eqref{lemma6GPestimate2}, and the fact that $\lambda t<\lambda^2 r<4R$.\\ The elementary inequality $$ b^\frac{m}{m+1}\le b+1,\qquad\mbox{ for any }m>1\mbox{ and }b\ge0, $$ implies
\begin{equation}\label{DVp}
\left(\int_{B_{\lambda^2 r}}\left|DV_p\left(Dv(x)\right)\right|^2dx\right)^\frac{m}{m+1}\le\int_{B_{\lambda^2 r}}\left|DV_p\left(Dv(x)\right)\right|^2dx+1. \end{equation}
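The elementary inequality used here can be verified by distinguishing two cases:
$$
b\le1\;\Longrightarrow\;b^\frac{m}{m+1}\le1\le b+1,
\qquad
b>1\;\Longrightarrow\;b^\frac{m}{m+1}\le b\le b+1,
$$
since $0<\frac{m}{m+1}<1$ for any $m>1$.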
Now, if we recall that $1<\lambda<2$, $t-\tilde{s}<\lambda\left( t-\tilde{s}\right)<1$ and $\frac{p+2}{p}\ge2$, \eqref{Iterationm2*} and \eqref{DVp} imply
\begin{eqnarray*}
&&\int_{B_{\tilde{s}}}\left|DV_p\left(Dv(x)\right)\right|^2dx\cr\cr
&\le&c\cdot\sigma\left(\int_{B_{4 R}}\left|v(x)\right|^{2m}dx\right)^\frac{1}{m+1}\cdot\int_{B_{\lambda^2 r}}\left|DV_p\left(Dv(x)\right)\right|^2dx\cr\cr
&&+\frac{\lambda^2+1}{\lambda^2}\cdot\frac{c_\sigma}{\left(t-\tilde{s}\right)^{\frac{p+2}{p}}}\left[\int_{B_{R}}g^{\frac{2m\left(p+2\right)}{2m-p}}(x)dx+\int_{B_{R}} \left(\mu^2+\left|Dv\left(x\right)\right|^2\right)^\frac{p}{2}dx\right.\cr\cr
&&\left.+\left(\int_{B_{4 R}}\left|v(x)\right|^{2m}dx\right)^\frac{1}{m+1}\cdot\left(\int_{B_{4 R}}\left(\mu^2+\left|Dv(x)\right|^2\right)^\frac{p}{2}dx\right)^\frac{m}{m+1}\right.\cr\cr
&&\left.+\int_{B_R}\left|f(x)\right|^\frac{2m\left(p+2\right)}{2mp+p-2}dx+\left|B_R\right|+1\right]\cr\cr
&\le&c\cdot\sigma\left(\int_{B_{4 R}}\left|v(x)\right|^{2m}dx\right)^\frac{1}{m+1}\cdot\int_{B_{\lambda^2 r}}\left|DV_p\left(Dv(x)\right)\right|^2dx\cr\cr
&&+\frac{c_\sigma}{\left(t-\tilde{s}\right)^{\frac{p+2}{p}}}\left[\int_{B_{R}}g^{\frac{2m\left(p+2\right)}{2m-p}}(x)dx+\int_{B_{R}} \left(\mu^2+\left|Dv\left(x\right)\right|^2\right)^\frac{p}{2}dx\right.\cr\cr
&&\left.+\left(\int_{B_{4 R}}\left|v(x)\right|^{2m}dx\right)^\frac{1}{m+1}\cdot\left(\int_{B_{4 R}}\left(\mu^2+\left|Dv(x)\right|^2\right)^\frac{p}{2}dx\right)^\frac{m}{m+1}\right.\cr\cr
&&\left.+\int_{B_R}\left|f(x)\right|^\frac{2m\left(p+2\right)}{2mp+p-2}dx+\left|B_R\right|+1\right], \end{eqnarray*}
which, if we choose $\sigma>0$ such that
$$
c\cdot\sigma\left(\int_{B_{4 R}}\left|v(x)\right|^{2m}dx\right)^\frac{1}{m+1}<\frac{1}{2}, $$
becomes
\begin{eqnarray}\label{DV_pmb}
\int_{B_{\tilde{s}}}\left|DV_p\left(Dv(x)\right)\right|^2dx&\le&\frac{1}{2}\int_{B_{\lambda^2 r}}\left|DV_p\left(Dv(x)\right)\right|^2dx\cr\cr
&&+\frac{c}{\left(t-\tilde{s}\right)^{\frac{p+2}{p}}}\left[\int_{B_{R}}g^{\frac{2m\left(p+2\right)}{2m-p}}(x)dx+\int_{B_{R}} \left(\mu^2+\left|Dv\left(x\right)\right|^2\right)^\frac{p}{2}dx\right.\cr\cr
&&\left.+\left(\int_{B_{4 R}}\left|v(x)\right|^{2m}dx\right)^\frac{1}{m+1}\cdot\left(\int_{B_{4 R}}\left(\mu^2+\left|Dv(x)\right|^2\right)^\frac{p}{2}dx\right)^\frac{m}{m+1}\right.\cr\cr
&&\left.+\int_{B_R}\left|f(x)\right|^\frac{2m\left(p+2\right)}{2mp+p-2}dx+\left|B_R\right|+1\right]. \end{eqnarray}
Since \eqref{DV_pmb} holds for any $\frac{R}{2}<r<\tilde{s}<t<\tilde{t}<\lambda r<R$, with $1<\lambda<2$, with a constant $c$ depending on $n, p, L, \nu, L_1, \ell$ but independent of the radii, passing to the limit as $\tilde{s}\to r$ and $t\to\lambda r$, we get
\begin{eqnarray*}
\int_{B_{r}}\left|DV_p\left(Dv(x)\right)\right|^2dx&\le&\frac{1}{2}\int_{B_{\lambda^2 r}}\left|DV_p\left(Dv(x)\right)\right|^2dx\cr\cr
&&+\frac{c}{r^{\frac{p+2}{p}}\left(\lambda-1\right)^{\frac{p+2}{p}}}\left[\int_{B_{R}} \left(\mu^2+\left|Dv\left(x\right)\right|^2\right)^\frac{p}{2}dx\right.\cr\cr
&&\left.+\left(\int_{B_{4 R}}\left|v(x)\right|^{2m}dx\right)^\frac{1}{m+1}\cdot\left(\int_{B_{4 R}}\left(\mu^2+\left|Dv(x)\right|^2\right)^\frac{p}{2}dx\right)^\frac{m}{m+1}\right.\cr\cr
&&+\left.\int_{B_{R}}g^{\frac{2m\left(p+2\right)}{2m-p}}(x)dx+\int_{B_R}\left|f(x)\right|^\frac{2m\left(p+2\right)}{2mp+p-2}dx+\left|B_R\right|+1\right], \end{eqnarray*}
and since $1<\lambda<2$, we have
\begin{eqnarray}\label{DV_pmIteration1}
\int_{B_{r}}\left|DV_p\left(Dv(x)\right)\right|^2dx&\le&\frac{1}{2}\int_{B_{\lambda^2 r}}\left|DV_p\left(Dv(x)\right)\right|^2dx\cr\cr
&&+\frac{c}{r^{\frac{p+2}{p}}\left(\lambda^2-1\right)^{\frac{p+2}{p}}}\left[\int_{B_{R}} \left(\mu^2+\left|Dv\left(x\right)\right|^2\right)^\frac{p}{2}dx\right.\cr\cr
&&\left.+\left(\int_{B_{4 R}}\left|v(x)\right|^{2m}dx\right)^\frac{1}{m+1}\cdot\left(\int_{B_{4 R}}\left(\mu^2+\left|Dv(x)\right|^2\right)^\frac{p}{2}dx\right)^\frac{m}{m+1}\right.\cr\cr
&&+\left.\int_{B_{R}}g^{\frac{2m\left(p+2\right)}{2m-p}}(x)dx+\int_{B_R}\left|f(x)\right|^\frac{2m\left(p+2\right)}{2mp+p-2}dx+\left|B_R\right|+1\right]. \end{eqnarray}
Now, if we set $$
h(r)=\int_{B_{r}}\left|DV_p\left(Dv(x)\right)\right|^2dx, $$ \begin{eqnarray*}
A&=&\left[\int_{B_{R}} \left(\mu^2+\left|Dv\left(x\right)\right|^2\right)^\frac{p}{2}dx\right.\cr\cr
&&\left.+\left(\int_{B_{4 R}}\left|v(x)\right|^{2m}dx\right)^\frac{1}{m+1}\cdot\left(\int_{B_{4 R}}\left(\mu^2+\left|Dv(x)\right|^2\right)^\frac{p}{2}dx\right)^\frac{m}{m+1}\right.\cr\cr
&&+\left.\int_{B_{R}}g^{\frac{2m\left(p+2\right)}{2m-p}}(x)dx+\int_{B_R}\left|f(x)\right|^\frac{2m\left(p+2\right)}{2mp+p-2}dx+\left|B_R\right|+1\right], \end{eqnarray*} and $$ B=0. $$ Since \eqref{DV_pmIteration1} holds for any $\lambda\in(1, 2)$, we can apply Lemma \ref{iter} with $$ \theta=\frac{1}{2}\qquad\mbox{ and }\qquad\gamma=\frac{p+2}{p}, $$ thus getting \begin{eqnarray}\label{apriorimPf}
\int_{B_{\frac{R}{2}}}\left|DV_p\left(Dv(x)\right)\right|^2dx&\le&\frac{c}{R^{\frac{p+2}{p}}}\left[\int_{B_{R}} \left(\mu^2+\left|Dv\left(x\right)\right|^2\right)^\frac{p}{2}dx\right.\cr\cr
&&\left.+\left(\int_{B_{4 R}}\left|v(x)\right|^{2m}dx\right)^\frac{1}{m+1}\cdot\left(\int_{B_{4 R}}\left(\mu^2+\left|Dv(x)\right|^2\right)^\frac{p}{2}dx\right)^\frac{m}{m+1}\right.\cr\cr
&&+\left.\int_{B_{R}}g^{\frac{2m\left(p+2\right)}{2m-p}}(x)dx+\int_{B_R}\left|f(x)\right|^\frac{2m\left(p+2\right)}{2mp+p-2}dx+\left|B_R\right|+1\right], \end{eqnarray}
which is the desired a priori estimate. \\
{\bf Step 2: the approximation.}\\ As we did in the second step of the proof of Theorem \ref{CGPThm1}, let us consider an open set $\Omega'\Subset\Omega$ and, for any $\varepsilon\in\left(0, d\left(\Omega', \partial\Omega\right)\right)$, a standard family of mollifiers $\Set{\phi_\varepsilon}_\varepsilon$.\\ Let us consider a ball $B_{\tilde{R}}=B_{\tilde{R}}\left(x_0\right)\Subset\Omega'$ with $\tilde{R}<1$ and, for each $\varepsilon$, the functional
\begin{equation}\label{modenergymeps}
\mathcal{F}_{m, \varepsilon}\left(w, B_{\tilde{R}}\right)=\int_{B_{\tilde{R}}}\left[F_\varepsilon\left(x, Dw(x)\right)-f_{\varepsilon}(x)\cdot w(x)+\left(\left|w(x)\right|-a\right)^{2m}_+\right]dx, \end{equation}
where $F_\varepsilon$ is defined as in \eqref{Fepsdef} and $f_\varepsilon$ is defined as in \eqref{fepsdef}.
With these choices, we have \begin{equation}\label{convFm}
\int_{B_{\tilde{R}}}F_\varepsilon\left(x, \xi\right)dx\to\int_{ B_{\tilde{R}}}F\left(x, \xi\right)dx,\qquad\mbox{ as }\varepsilon\to0 \end{equation}
for any $\xi\in\numberset{R}^{n\times N}.$\\
Moreover, since $f\in L^{\frac{2m\left(p+2\right)}{2mp+p-2}}_{\mathrm{loc}}\left(\Omega\right)$, then
\begin{equation}\label{convfortefm1} f_{\varepsilon}\to f\qquad\mbox{ strongly in }L^{\frac{2m\left(p+2\right)}{2mp+p-2}}\left(B_{\tilde{R}}\right),\mbox{ as }\varepsilon\to0. \end{equation}
Let us observe that
$$ \frac{2m\left(p+2\right)}{2mp+p-2}\ge\frac{2m}{2m-1} $$
if and only if
$$ \left(2m-1\right)\left(p+2\right)\ge2mp+p-2, $$
i.e.
$$ 2m\ge p, $$
which is true for any $m>1$, as long as $1<p<2$.
For this reason, $f\in L^\frac{2m}{2m-1}_{\mathrm{loc}}\left(B_{\tilde{R}}\right)$, and we also have
\begin{equation}\label{convfortefm2}
f_{\varepsilon}\to f\qquad\mbox{ strongly in }L^{\frac{2m}{2m-1}}\left(B_{\tilde{R}}\right),\mbox{ as }\varepsilon\to0. \end{equation} Again, as in the proof of Theorem \ref{CGPThm1}, thanks to \eqref{F1}--\eqref{F4}, for any $\varepsilon$, we have the validity of \eqref{F1eps}--\eqref{F4eps}, where $g_\varepsilon$ is defined in \eqref{gepsdef}.
In this case, since $g\in L^{\frac{2m\left(p+2\right)}{2m-p}}_\mathrm{loc}\left(\Omega\right)$, we have
\begin{equation}\label{convgmeps}
g_{\varepsilon}\to g\qquad\mbox{ strongly in }L^{\frac{2m\left(p+2\right)}{2m-p}}\left(B_{\tilde{R}}\right)\mbox{ as }\varepsilon\to0. \end{equation}
Let $v_\varepsilon\in \left(v+W^{1,p}_{0}\left(B_{\tilde{R}}\right)\right)\cap L^{2m}\left(B_{\tilde{R}}\right)$ be the solution to
$$ \min\Set{\mathcal{F}_{m, \varepsilon}\left(w,B_{\tilde{R}}\right): w\in \left(v+W^{1,p}_{0}\left(B_{\tilde{R}}\right)\right)\cap L^{2m}\left(B_{\tilde{R}}\right)}, $$
where $v\in W^{1,p}_{\mathrm{loc}}\left(\Omega\right)\cap L^{2m}_{\mathrm{loc}}\left(\Omega\right)$ is a local minimizer of \eqref{modenergym}.\\
By virtue of the minimality of $v_\varepsilon$, we have
\begin{eqnarray}\label{minimalitym*}
&&\int_{B_{\tilde{R}}}\left[F_\varepsilon\left(x, Dv_\varepsilon(x)\right) +\left(\left|v_\varepsilon(x)\right|-a\right)^{2m}_+\right]dx\cr\cr
&\le&\int_{B_{\tilde{R}}}\left[F_\varepsilon\left(x, Dv(x)\right)+f_{\varepsilon}(x)\cdot \left(v_\varepsilon(x)-v(x)\right)+\left(\left|v(x)\right|-a\right)^{2m}_+\right]dx\cr\cr
&\le&\int_{B_{\tilde{R}}}\left[F_\varepsilon\left(x, Dv(x)\right)+\left|f_{\varepsilon}(x)\right|\cdot \left|v_\varepsilon(x)-v(x)\right|+\left(\left|v(x)\right|-a\right)^{2m}_+\right]dx. \end{eqnarray}
Now, using H\"{o}lder's and Young's inequalities with exponents $\left(2m, \frac{2m}{2m-1}\right)$, we get
\begin{eqnarray}\label{controlloterminenotom}
&&\int_{B_{\tilde{R}}}\left|f_{\varepsilon}(x)\right|\cdot \left|v_\varepsilon(x)-v(x)\right|dx\cr\cr
&\le&\int_{B_{\tilde{R}}}\left|f_{\varepsilon}(x)\right|\left|v_\varepsilon(x)\right|dx+\int_{B_{\tilde{R}}}\left|f_{\varepsilon}(x)\right|\left|v(x)\right|dx\cr\cr
&=&\int_{B_{\tilde{R}}}\left|f_{\varepsilon}(x)\right|\left(\left|v_\varepsilon(x)\right|-a\right)dx+\int_{B_{\tilde{R}}}a\left|f_{\varepsilon}(x)\right|dx+\int_{B_{\tilde{R}}}\left|f_{\varepsilon}(x)\right|\left|v(x)\right|dx\cr\cr
&=&\int_{B_{\tilde{R}}\cap\Set{\left|v_\varepsilon\right|\ge a}}\left|f_{\varepsilon}(x)\right|\left(\left|v_\varepsilon(x)\right|-a\right)dx+\int_{B_{\tilde{R}}\cap\Set{\left|v_\varepsilon\right|<a}}\left|f_{\varepsilon}(x)\right|\left(\left|v_\varepsilon(x)\right|-a\right)dx\cr\cr
&&+\int_{B_{\tilde{R}}}\left|f_{\varepsilon}(x)\right|\left(\left|v(x)\right|+a\right)dx\cr\cr
&\le&\int_{B_{\tilde{R}}\cap\Set{\left|v_\varepsilon\right|\ge a}}\left|f_{\varepsilon}(x)\right|\left(\left|v_\varepsilon(x)\right|-a\right)_+dx+\int_{B_{\tilde{R}}}\left|f_{\varepsilon}(x)\right|\left(\left|v(x)\right|+a\right)dx\cr\cr
&\le&\int_{B_{\tilde{R}}}\left|f_{\varepsilon}(x)\right|\left(\left|v_\varepsilon(x)\right|-a\right)_+dx+\int_{B_{\tilde{R}}}\left|f_{\varepsilon}(x)\right|\left(\left|v(x)\right|+a\right)dx\cr\cr
&\le&\left(\int_{B_{\tilde{R}}}\left|f_{\varepsilon}(x)\right|^\frac{2m}{2m-1}dx\right)^\frac{2m-1}{2m}\cdot\left(\int_{B_{\tilde{R}}}\left(\left|v_\varepsilon(x)\right|-a\right)_+^{2m}dx\right)^\frac{1}{2m}\cr\cr
&&+\left(\int_{B_{\tilde{R}}}\left|f_{\varepsilon}(x)\right|^\frac{2m}{2m-1}dx\right)^\frac{2m-1}{2m}\cdot\left(\int_{B_{\tilde{R}}}\left(\left|v(x)\right|+a\right)^{2m}dx\right)^\frac{1}{2m}\cr\cr
&\le&c_\sigma\int_{B_{\tilde{R}}}\left|f_{\varepsilon}(x)\right|^\frac{2m}{2m-1}dx+\sigma\int_{B_{\tilde{R}}}\left(\left|v_\varepsilon(x)\right|-a\right)_+^{2m}dx\cr\cr
&&+c\int_{B_{\tilde{R}}}\left(\left|v(x)\right|+a\right)^{2m}dx, \end{eqnarray}
where $\sigma>0$ will be chosen later.\\ Plugging \eqref{controlloterminenotom} into \eqref{minimalitym*}, and choosing a sufficiently small $\sigma$, we get
\begin{eqnarray}\label{minimalitym**}
&&\int_{B_{\tilde{R}}}\left[F_\varepsilon\left(x, Dv_\varepsilon(x)\right) +c\left(\left|v_\varepsilon(x)\right|-a\right)^{2m}_+\right]dx\cr\cr
&\le&\int_{B_{\tilde{R}}}\left[F_\varepsilon\left(x, Dv(x)\right)+c\left(\left|v(x)\right|-a\right)^{2m}_+\right]dx\cr\cr
&&+c\int_{B_{\tilde{R}}}\left|f_{\varepsilon}(x)\right|^\frac{2m}{2m-1}dx+c\int_{B_{\tilde{R}}}\left(\left|v(x)\right|+a\right)^{2m}dx. \end{eqnarray}
Using the right-hand side inequality in \eqref{F1eps} to estimate the right-hand side of \eqref{minimalitym**}, we have
\begin{eqnarray}\label{minimalitym}
&&\int_{B_{\tilde{R}}}\left[F_\varepsilon\left(x, Dv_\varepsilon(x)\right) +c\left(\left|v_\varepsilon(x)\right|-a\right)^{2m}_+\right]dx\cr\cr
&\le&L\int_{B_{\tilde{R}}}\left[\left(\mu^2+\left|Dv(x)\right|^2\right)^\frac{p}{2}+c\left(\left|v(x)\right|-a\right)^{2m}_+\right]dx\cr\cr
&&+c\int_{B_{\tilde{R}}}\left|f_{\varepsilon}(x)\right|^\frac{2m}{2m-1}dx+c\int_{B_{\tilde{R}}}\left(\left|v(x)\right|+a\right)^{2m}dx. \end{eqnarray}
Now, by the left-hand side inequality in \eqref{F1eps}, we get
\begin{eqnarray}\label{W1pBoundm}
\ell\int_{B_{\tilde{R}}}\left(\mu^2+\left|Dv_\varepsilon(x)\right|^2\right)^\frac{p}{2}dx&\le&\int_{B_{\tilde{R}}}F_\varepsilon\left(x, Dv_\varepsilon(x)\right) dx\cr\cr
&\le&\int_{B_{\tilde{R}}}\left[F_\varepsilon\left(x, Dv_\varepsilon(x)\right) +\left(\left|v_\varepsilon(x)\right|-a\right)^{2m}_+\right]dx\cr\cr
&\le&L\int_{B_{\tilde{R}}}\left[\left(\mu^2+\left|Dv(x)\right|^2\right)^\frac{p}{2}+\left(\left|v(x)\right|-a\right)^{2m}_+\right]dx\cr\cr
&&+c\int_{B_{\tilde{R}}}\left|f_{\varepsilon}(x)\right|^\frac{2m}{2m-1}dx\cr\cr
&&+c\int_{B_{\tilde{R}}}\left(\left|v(x)\right|+a\right)^{2m}dx, \end{eqnarray}
and this, by \eqref{convfortefm2}, means that $\Set{v_\varepsilon}_\varepsilon$ is bounded in $W^{1, p}\left(B_{\tilde{R}}\right)$, independently of $\varepsilon$, so there exists a function $\tilde{v}\in W^{1, p}\left(B_{\tilde{R}}\right)$ such that, up to a subsequence, we have
$$ v_\varepsilon\rightharpoonup \tilde{v}\qquad\mbox{ weakly in }W^{1, p}\left(B_{\tilde{R}}\right), $$
$$ v_\varepsilon\to \tilde{v} \qquad\mbox{ strongly in }L^{p}\left(B_{\tilde{R}}\right), $$
and
$$ v_\varepsilon\to \tilde{v} \qquad\mbox{ almost everywhere in }B_{\tilde{R}}, $$ as $\varepsilon\to0$.\\ Moreover, we have
\begin{eqnarray}\label{L2mBound*}
\int_{B_{\tilde{R}}}\left|v_\varepsilon(x)\right|^{2m}dx&\le&\int_{B_{\tilde{R}}\cap\Set{\left|v_\varepsilon\right|< a}}\left|v_\varepsilon(x)\right|^{2m}dx+\int_{B_{\tilde{R}}\cap\Set{\left|v_\varepsilon\right|\ge a}}\left|v_\varepsilon(x)\right|^{2m}dx\cr\cr
&\le&\int_{B_{\tilde{R}}\cap\Set{\left|v_\varepsilon\right|< a}}\left|v_\varepsilon(x)\right|^{2m}dx+\int_{B_{\tilde{R}}\cap\Set{\left|v_\varepsilon\right|\ge a}}\left(\left|\left|v_\varepsilon(x)\right|-a\right|+a\right)^{2m}dx\cr\cr
&\le&\int_{B_{\tilde{R}}\cap\Set{\left|v_\varepsilon\right|< a}}a^{2m}dx+c\int_{B_{\tilde{R}}\cap\Set{\left|v_\varepsilon\right|\ge a}}\left(\left|v_\varepsilon(x)\right|-a\right)^{2m}dx\cr\cr
&&+c\int_{B_{\tilde{R}}\cap\Set{\left|v_\varepsilon\right|\ge a}}a^{2m}dx\cr\cr
&\le&c\int_{B_{\tilde{R}}}a^{2m}dx+c\int_{B_{\tilde{R}}}\left(\left|v_\varepsilon(x)\right|-a\right)^{2m}_{+}dx, \end{eqnarray}
and since \eqref{minimalitym} implies
\begin{eqnarray}\label{L2mBound}
&&\int_{B_{\tilde{R}}}\left(\left|v_\varepsilon(x)\right|-a\right)^{2m}_+dx\cr\cr
&\le&L\int_{B_{\tilde{R}}}\left[\left(\mu^2+\left|Dv(x)\right|^2\right)^\frac{p}{2}+\left(\left|v(x)\right|-a\right)^{2m}_+\right]dx\cr\cr
&&+c\int_{B_{\tilde{R}}}\left|f_{\varepsilon}(x)\right|^\frac{2m}{2m-1}dx+c\int_{B_{\tilde{R}}}\left(\left|v(x)\right|+a\right)^{2m}dx, \end{eqnarray}
by \eqref{convfortefm2}, plugging \eqref{L2mBound} into \eqref{L2mBound*} and using the dominated convergence theorem, we have
\begin{equation}\label{L2mconvforte}
v_\varepsilon\to \tilde{v} \qquad\mbox{ strongly in }L^{2m}\left(B_{\tilde{R}}\right),\mbox{ as }\varepsilon\to0. \end{equation}
Since $v_\varepsilon$ is a local minimizer of the functional \eqref{modenergymeps} and $g_{\varepsilon}, f_{\varepsilon}\in C^\infty\left(B_{\tilde{R}}\right)$, we have
$$ V_p\left(Dv_\varepsilon\right)\in W^{1,2}_{\mathrm{loc}}\left(B_{\tilde{R}}\right), $$
and we can apply estimate \eqref{apriorimPf}, thus getting
\begin{eqnarray}\label{Step2estimatem}
\int_{B_{\frac{r}{2}}}\left|DV_p\left(Dv_\varepsilon(x)\right)\right|^2dx&\le&\frac{c}{r^{\frac{p+2}{p}}}\left[\int_{B_{r}} \left(\mu^2+\left|Dv_\varepsilon\left(x\right)\right|^2\right)^\frac{p}{2}dx\right.\cr\cr
&&\left.+\left(\int_{B_{4 r}}\left|v_\varepsilon(x)\right|^{2m}dx\right)^\frac{1}{m+1}\cdot\left(\int_{B_{4 r}}\left(\mu^2+\left|Dv_\varepsilon(x)\right|^2\right)^\frac{p}{2}dx\right)^\frac{m}{m+1}\right.\cr\cr
&&+\left.\int_{B_{r}}g_\varepsilon^{\frac{2m\left(p+2\right)}{2m-p}}(x)dx+\int_{B_r}\left|f_\varepsilon(x)\right|^\frac{2m\left(p+2\right)}{2mp+p-2}dx+\left|B_r\right|+1\right], \end{eqnarray}
for any ball $B_{4r}\Subset B_{\tilde{R}}$.\\ Applying Lemma \ref{differentiabilitylemma}, by \eqref{differentiabilityestimate} and \eqref{Step2estimatem}, recalling \eqref{convfortefm1}, \eqref{convgmeps}, \eqref{W1pBoundm}, \eqref{L2mBound*} and \eqref{L2mBound}, and by a covering argument, we infer that $\Set{v_\varepsilon}_\varepsilon$ is bounded in $W^{2, p}\left(B_{2r}\right)$, which implies
\begin{equation}\label{convvepsW1p}
v_\varepsilon\to\tilde{v}\qquad\mbox{ strongly in }W^{1,p}\left(B_{4r}\right) \end{equation}
and
$$ v_\varepsilon\to\tilde{v}\qquad\mbox{ almost everywhere in }B_{4r}, $$
up to a subsequence, as $\varepsilon\to0$.\\ By virtue of the continuity of the function $\xi\mapsto DV_p\left(\xi\right)$, we also have
$$DV_p\left(Dv_\varepsilon\right)\to DV_p\left(D\tilde{v}\right)\qquad\mbox{ almost everywhere in }B_{4r},\mbox{ as }\varepsilon\to0.$$
In view of the above convergences, recalling \eqref{convfortefm1}, \eqref{convgmeps}, \eqref{L2mconvforte} and \eqref{convvepsW1p}, thanks to the dominated convergence theorem, we can pass to the limit in \eqref{Step2estimatem} as $\varepsilon\to0$, thus getting
\begin{eqnarray}\label{boundVpepsm}
&&\int_{B_{\frac{r}{2}}}\left|DV_p\left(D\tilde{v}(x)\right)\right|^2dx\cr\cr
&\le&\frac{c}{r^{\frac{p+2}{p}}}\left[\int_{B_{r}} \left(\mu^2+\left|D\tilde{v}\left(x\right)\right|^2\right)^\frac{p}{2}dx\right.\cr\cr
&&\left.+\left(\int_{B_{4 r}}\left|\tilde{v}(x)\right|^{2m}dx\right)^\frac{1}{m+1}\cdot\left(\int_{B_{4 r}}\left(\mu^2+\left|D\tilde{v}(x)\right|^2\right)^\frac{p}{2}dx\right)^\frac{m}{m+1}\right.\cr\cr
&&+\left.\int_{B_{r}}g^{\frac{2m\left(p+2\right)}{2m-p}}(x)dx+\int_{B_r}\left|f(x)\right|^\frac{2m\left(p+2\right)}{2mp+p-2}dx+\left|B_r\right|+1\right]. \end{eqnarray}
Our next aim is to prove that $\tilde{v}=v$ a.e. in $B_{\tilde{R}}$.\\
First, let us observe that, using H\"older's inequality with exponents $\left(2m, \frac{2m}{2m-1}\right)$, we get
\begin{eqnarray*}\label{finalconv2*}
&&\left|\int_{B_{\tilde{R}}}\left[f_{\varepsilon}(x)\cdot
v(x)-f(x)\cdot
v(x)\right]dx\right|\cr\cr&\le&\int_{B_{\tilde{R}}}\left|f_{\varepsilon}(x)-f(x)\right|\cdot
\left|v(x)\right|dx\cr\cr
&\le&\left(\int_{B_{\tilde{R}}}\left|f_{\varepsilon}(x)-f(x)\right|^\frac{2m}{2m-1}dx\right)^\frac{2m-1}{2m}\cdot\left(\int_{B_{\tilde{R}}}\left|v(x)\right|^{2m}dx\right)^\frac{1}{2m},
\end{eqnarray*} that, recalling \eqref{convfortefm2}, implies \begin{equation}\label{finalconv2}
\lim_{\varepsilon\to 0}\int_{B_{\tilde{R}}}f_{\varepsilon}(x)\cdot v(x)dx=\int_{B_{\tilde{R}}}f(x)\cdot v(x)dx. \end{equation}
The minimality of $v$, Fatou's Lemma, the lower semicontinuity of $\mathcal{F}_{m, \varepsilon}$ and the minimality of $v_\varepsilon$ imply
\begin{eqnarray*}
&&\int_{B_{\tilde{R}}}\left[F\left(x,Dv(x)\right)-f(x)\cdot
v(x)+\left(\left|v(x)\right|-a\right)^{2m}_+\right]dx\cr\cr
&\le& \int_{B_{\tilde{R}}}\left[F\left(x,D\tilde{v}(x)\right)-f(x)\cdot
\tilde{v}(x)+\left(\left|\tilde{v}(x)\right|-a\right)^{2m}_+\right]dx\cr\cr
&\le& \liminf_{\varepsilon\to 0} \int_{B_{\tilde{R}}}\left[F_\varepsilon\left(x,D\tilde{v}(x)\right)-f_{\varepsilon}(x)\cdot
\tilde{v}(x)+\left(\left|\tilde{v}(x)\right|-a\right)^{2m}_+\right]dx\cr\cr
&\le& \liminf_{\varepsilon\to 0} \int_{B_{\tilde{R}}}\left[F_\varepsilon\left(x,Dv_\varepsilon(x)\right)-f_{\varepsilon}(x)\cdot
v_\varepsilon(x)+\left(\left|v_\varepsilon(x)\right|-a\right)^{2m}_+\right]dx\cr\cr
&\le& \liminf_{\varepsilon\to 0} \int_{B_{\tilde{R}}}\left[F_\varepsilon\left(x,Dv(x)\right)-f_{\varepsilon}(x)\cdot
v(x)+\left(\left|v(x)\right|-a\right)^{2m}_+\right]dx\cr\cr
&=&\int_{B_{\tilde{R}}}\left[F\left(x,Dv(x)\right)-f(x)\cdot
v(x)+\left(\left|v(x)\right|-a\right)^{2m}_+\right]dx, \end{eqnarray*} where the last equality is a consequence of \eqref{convFm} and \eqref{finalconv2}.\\ Therefore $\mathcal{F}_m\left(\tilde{v},B_{\tilde{R}}\right)=\mathcal{F}_m\left(v,B_{\tilde{R}}\right)$ and the strict convexity of the functional yields $\tilde{v}=v$ a.e. in $B_{\tilde{R}}$. So \eqref{boundVpepsm} and a covering argument yield \eqref{approxmestimate}. \end{proof}
We conclude this section with some consequences of Theorem \ref{approxmthm}, which follow by Lemma \ref{differentiabilitylemma} and Remark \ref{rmk3}.
\begin{corollary}\label{corollarym}
Let $\Omega\subset\numberset{R}^n$ be a bounded open set, $m>1$, $a>0$ and $1<p<2$.\\
Let $v\in W^{1, p}_{\mathrm{loc}}\left(\Omega, \numberset{R}^N\right)\cap L^{2m}_{\mathrm{loc}}\left(\Omega, \numberset{R}^N\right)$ be a local minimizer of the functional \eqref{modenergym}, under the assumptions \eqref{F1}--\eqref{F4}, with
$$f\in L^{\frac{2m\left(p+2\right)}{2mp+p-2}}_{\mathrm{loc}}\left(\Omega\right)\qquad\mbox{ and }\qquad g\in L^{\frac{2m\left(p+2\right)}{2m-p}}_{\mathrm{loc}}\left(\Omega\right).$$
Then $v\in W^{2, p}_{\mathrm{loc}}\left(\Omega\right)$ and $Dv\in L^{\frac{m\left(p+2\right)}{m+1}}_{\mathrm{loc}}\left(\Omega\right)$. \end{corollary}
\section{The case of bounded minimizers: proof of Theorem \ref{inftythm}}\label{Thm2pf}
The aim of this section is to prove Theorem \ref{inftythm}.\\ As we will see below, the proof relies on an approximation argument, based on the possibility of applying Theorem \ref{approxmthm} and then passing to the limit as $m\to\infty$.
\begin{proof}[Proof of Theorem \ref{inftythm}]
Arguing as in the second step of the proof of Theorem \ref{approxmthm}, let us consider an open set $\Omega'\Subset\Omega$ and, for any $\varepsilon\in\left(0, d\left(\Omega', \partial\Omega\right)\right)$, a standard family of mollifiers $\Set{\phi_\varepsilon}_\varepsilon$.\\
Let $u\in W^{1, p}_\mathrm{loc}\left(\Omega\right)\cap L^\infty_\mathrm{loc}\left(\Omega\right)$ be a local minimizer of the functional \eqref{modenergy}, and let us consider a ball $B_{\tilde{R}}=B_{\tilde{R}}\left(x_0\right)\Subset\Omega'$, with $\tilde{R}<1$.\\
For each $\varepsilon$ and any $m>1$, let us consider the functional $\mathcal{F}_{m, \varepsilon}$, defined by \eqref{modenergymeps}, where $F_\varepsilon$ and $f_\varepsilon$ are defined by \eqref{Fepsdef} and \eqref{fepsdef} respectively, and we fix
\begin{equation}\label{afix}
a=\left\Arrowvert u\right\Arrowvert_{L^\infty\left(B_{\tilde{R}}\right)}.
\end{equation} With these choices, we have \eqref{convFm} again, and since $f\in L^{\frac{p+2}{p}}_{\mathrm{loc}}\left(\Omega\right)$, we have
\begin{equation}\label{convfinf1}
f_\varepsilon\to f \qquad\mbox{ strongly in }L^{\frac{p+2}{p}}\left(B_{\tilde{R}}\right),\qquad\mbox{ as }\varepsilon\to0. \end{equation}
Again, thanks to \eqref{F1}--\eqref{F4}, for any $\varepsilon$, we have \eqref{F1eps}--\eqref{F4eps}, where $g_\varepsilon$ is defined as in \eqref{gepsdef}.
In this case, since $g\in L^{p+2}_\mathrm{loc}\left(\Omega\right)$, we have
\begin{equation}\label{convgeps}
g_\varepsilon\to g\qquad\mbox{ strongly in }L^{p+2}\left(B_{\tilde{R}}\right),\mbox{ as }\varepsilon\to0. \end{equation}
Let us observe that $f_\varepsilon\in L^{\frac{2m\left(p+2\right)}{2mp+p-2}}\left(B_{\tilde{R}}\right)$ for any $m>1$, and since $$ \frac{2m\left(p+2\right)}{2mp+p-2}\searrow\frac{p+2}{p},\qquad\mbox{ as }m\to\infty, $$ we have \begin{equation}\label{convfinfepsm}
\lim_{m\to\infty}\left(\int_{B_{\tilde{R}}}\left|f_\varepsilon(x)\right|^\frac{2m\left(p+2\right)}{2mp+p-2}dx\right)^\frac{2mp+p-2}{2m\left(p+2\right)}=\left(\int_{B_{\tilde{R}}}\left|f_\varepsilon(x)\right|^\frac{p+2}{p}dx\right)^\frac{p}{p+2}, \end{equation} for any $\varepsilon$.
Similarly, $g_\varepsilon\in L^{\frac{2m\left(p+2\right)}{2m-p}}\left(B_{\tilde{R}}\right)$ for any $m>1$ and for any $\varepsilon$, and we have
\begin{equation}\label{gepsboundm}
\lim_{m\to\infty}\left(\int_{B_{\tilde{R}}}\left|g_{\varepsilon}(x)\right|^{\frac{2m\left(p+2\right)}{2m-p}}dx\right)^{\frac{2m-p}{2m\left(p+2\right)}}=\left(\int_{B_{\tilde{R}}}\left|g_{\varepsilon}(x)\right|^{p+2}dx\right)^{\frac{1}{p+2}}, \end{equation}
for each $\varepsilon$.\\
Now, for each $\varepsilon$, and for each $m>1$, let $u_{m, \varepsilon}\in \left(u+W^{1, p}_0\left(B_{\tilde{R}}\right)\right)\cap L^{2m}\left(B_{\tilde{R}}\right)$ be the solution to
$$ \min\Set{\mathcal{F}_{m, \varepsilon}\left(w,B_{\tilde{R}}\right): w\in \left(u+W^{1,p}_{0}\left(B_{\tilde{R}}\right)\right)\cap L^{2m}\left(B_{\tilde{R}}\right)}. $$
By virtue of the minimality of $u_{m, \varepsilon}$, recalling \eqref{afix}, we have
\begin{eqnarray}\label{minimalityinf*}
&&\int_{B_{\tilde{R}}}\left[F_\varepsilon\left(x, Du_{m, \varepsilon}(x)\right) +\left(\left|u_{m, \varepsilon}(x)\right|-a\right)^{2m}_+\right]dx\cr\cr
&\le&\int_{B_{\tilde{R}}}\left[F_\varepsilon\left(x, Du(x)\right)+f_{\varepsilon}(x)\cdot \left(u_{m, \varepsilon}(x)-u(x)\right)+\left(\left|u(x)\right|-a\right)^{2m}_+\right]dx\cr\cr
&\le&\int_{B_{\tilde{R}}}\left[F_\varepsilon\left(x, Du(x)\right)+\left|f_{\varepsilon}(x)\right|\cdot \left|u_{m, \varepsilon}(x)-u(x)\right|\right]dx. \end{eqnarray}
Arguing as we did in \eqref{controlloterminenotom} and exploiting \eqref{afix}, we get
\begin{eqnarray}\label{controlloterminenotoinf*}
&&\int_{B_{\tilde{R}}}\left|f_{\varepsilon}(x)\right|\cdot \left|u_{m, \varepsilon}(x)-u(x)\right|dx\cr\cr
&\le&\int_{B_{\tilde{R}}}\left|f_{\varepsilon}(x)\right|\left(\left|u_{m, \varepsilon}(x)\right|-a\right)_+dx+2a\int_{B_{\tilde{R}}}\left|f_{\varepsilon}(x)\right|dx\cr\cr
&\le&\left(\int_{B_{\tilde{R}}}\left|f_{\varepsilon}(x)\right|^{\frac{2m\left(p+2\right)}{p\left(2m-1\right)}}dx\right)^{\frac{p\left(2m-1\right)}{2m\left(p+2\right)}}\cdot\left(\int_{B_{\tilde{R}}}\left(\left|u_{m, \varepsilon}(x)\right|-a\right)_+^{\frac{2m\left(p+2\right)}{4m+p}}dx\right)^\frac{4m+p}{2m\left(p+2\right)}\cr\cr
&&+2a\int_{B_{\tilde{R}}}\left|f_{\varepsilon}(x)\right|dx, \end{eqnarray}
where, in the last line, we used H\"{o}lder's Inequality with exponents $\left(\frac{2m\left(p+2\right)}{p\left(2m-1\right)}, \frac{2m\left(p+2\right)}{4m+p}\right)$. Let us notice that all the integrals in the last line of \eqref{controlloterminenotoinf*} are finite, since $f_\varepsilon\in C^{\infty}\left(B_{\tilde{R}}\right)$ and $\frac{2m\left(p+2\right)}{4m+p}<2m$ for any $m>1$ as long as $1<p<2$. So, since $u_{m, \varepsilon}\in L^{2m}\left(B_{\tilde{R}}\right)\hookrightarrow L^{\frac{2m\left(p+2\right)}{4m+p}}\left(B_{\tilde{R}}\right)$, using Young's Inequality with exponents $\left(2m, \frac{2m}{2m-1}\right)$, we have
\begin{eqnarray}\label{controlloterminenotoinf}
&&\int_{B_{\tilde{R}}}\left|f_{\varepsilon}(x)\right|\cdot \left|u_{m, \varepsilon}(x)-u(x)\right|dx\cr\cr
&\le&c\left(\int_{B_{\tilde{R}}}\left|f_{\varepsilon}(x)\right|^{\frac{2m\left(p+2\right)}{p\left(2m-1\right)}}dx\right)^{\frac{p\left(2m-1\right)}{2m\left(p+2\right)}}\cdot\left(\int_{B_{\tilde{R}}}\left(\left|u_{m, \varepsilon}(x)\right|-a\right)_+^{2m}dx\right)^\frac{1}{2m}\cr\cr
&&+2a\int_{B_{\tilde{R}}}\left|f_{\varepsilon}(x)\right|dx\cr\cr
&\le&c_\sigma\left(\int_{B_{\tilde{R}}}\left|f_{\varepsilon}(x)\right|^{\frac{2m\left(p+2\right)}{p\left(2m-1\right)}}dx\right)^{\frac{p}{p+2}}+\sigma\int_{B_{\tilde{R}}}\left(\left|u_{m, \varepsilon}(x)\right|-a\right)_+^{2m}dx\cr\cr
&&+2a\int_{B_{\tilde{R}}}\left|f_{\varepsilon}(x)\right|dx, \end{eqnarray}
for any $\sigma>0$.
Plugging \eqref{controlloterminenotoinf} into \eqref{minimalityinf*}, choosing a sufficiently small value of $\sigma$ and recalling \eqref{F1eps}, we get
\begin{eqnarray}\label{minimalityinf}
&&\int_{B_{\tilde{R}}}\left[\left(\mu^2+\left|Du_{m, \varepsilon}(x)\right|^2\right)^\frac{p}{2}+\left(\left|u_{m, \varepsilon}(x)\right|-a\right)^{2m}_+\right]dx\cr\cr
&\le&L\int_{B_{\tilde{R}}}\left(\mu^2+\left|Du(x)\right|^2\right)^\frac{p}{2}dx+c\left(\int_{B_{\tilde{R}}}\left|f_{\varepsilon}(x)\right|^{\frac{2m\left(p+2\right)}{p\left(2m-1\right)}}dx\right)^{\frac{p}{p+2}}. \end{eqnarray}
Now let us notice that, since
$$ \frac{2m\left(p+2\right)}{p\left(2m-1\right)}\ge\frac{p+2}{p} $$
for any $m>1$, and
$$ \frac{2m\left(p+2\right)}{p\left(2m-1\right)}\searrow\frac{p+2}{p},\qquad\mbox{ as }m\to\infty, $$
we have
\begin{eqnarray*}
\lim_{m\to\infty}\left(\int_{B_{\tilde{R}}}\left|f_{\varepsilon}(x)\right|^{\frac{2m\left(p+2\right)}{p\left(2m-1\right)}}dx\right)^{\frac{p\left(2m-1\right)}{2m\left(p+2\right)}}=\left(\int_{B_{\tilde{R}}}\left|f_{\varepsilon}(x)\right|^{\frac{p+2}{p}}dx\right)^{\frac{p}{p+2}}, \end{eqnarray*}
and so
\begin{eqnarray}\label{fmepsconvm}
\lim_{m\to\infty}\left(\int_{B_{\tilde{R}}}\left|f_{\varepsilon}(x)\right|^{\frac{2m\left(p+2\right)}{p\left(2m-1\right)}}dx\right)^{\frac{p}{p+2}}&=&\lim_{m\to\infty}\left(\int_{B_{\tilde{R}}}\left|f_{\varepsilon}(x)\right|^{\frac{2m\left(p+2\right)}{p\left(2m-1\right)}}dx\right)^{\frac{p\left(2m-1\right)}{2m\left(p+2\right)}\cdot\frac{2m}{\left(2m-1\right)}}\cr\cr
&=&\left(\int_{B_{\tilde{R}}}\left|f_{\varepsilon}(x)\right|^{\frac{p+2}{p}}dx\right)^{\frac{p}{p+2}}, \end{eqnarray}
for any $\varepsilon$.\\ Hence, for any $\varepsilon$, the second integral in the right-hand side of \eqref{minimalityinf} can be bounded independently of $m$.\\ This implies that, for each $\varepsilon$, $\Set{u_{m, \varepsilon}}_m$ is bounded in $W^{1, p}\left(B_{\tilde{R}}\right)$, and so there exists a family of functions $\Set{u_{\varepsilon}}_\varepsilon\subset W^{1, p}\left(B_{\tilde{R}}\right)$ such that
\begin{equation*}\label{convminfw}
u_{m, \varepsilon}\rightharpoonup u_\varepsilon\qquad\mbox{ weakly in }W^{1, p}\left(B_{\tilde{R}}\right), \end{equation*}
and so
\begin{equation*}\label{convminfs}
u_{m, \varepsilon}\to u_\varepsilon\qquad\mbox{ strongly in }L^{p}\left(B_{\tilde{R}}\right), \end{equation*}
and
\begin{equation}\label{convminfae}
u_{m, \varepsilon}\to u_\varepsilon\qquad\mbox{ almost everywhere in }B_{\tilde{R}}, \end{equation}
as $m\to\infty$, up to a subsequence.\\ In particular, by \eqref{minimalityinf}, \eqref{convfinf1} and \eqref{fmepsconvm}, the set of functions $\Set{u_{\varepsilon}}_\varepsilon$ is bounded in $W^{1, p}\left(B_{\tilde{R}}\right)$, and so there exists a function $v\in W^{1, p}\left(B_{\tilde{R}}\right)$ such that
\begin{equation*}\label{conveps0inf}
u_{\varepsilon}\rightharpoonup v\qquad\mbox{ weakly in }W^{1, p}\left(B_{\tilde{R}}\right),\mbox{ as }\varepsilon\to0. \end{equation*}
So we have
\begin{equation*}\label{conveps0infs}
u_{\varepsilon}\to v\qquad\mbox{ strongly in }L^{p}\left(B_{\tilde{R}}\right) \end{equation*}
and
\begin{equation}\label{conveps0infae}
u_{\varepsilon}\to v \qquad\mbox{ almost everywhere in }B_{\tilde{R}}, \end{equation}
up to a subsequence, as $\varepsilon\to0$.\\
On the other hand, \eqref{minimalityinf} implies
\begin{eqnarray}\label{L2mboundinf*}
&&\int_{B_{\tilde{R}}}\left(\left|u_{m, \varepsilon}(x)\right|-a\right)^{2m}_+dx\cr\cr
&\le&L\int_{B_{\tilde{R}}}\left(\mu^2+\left|Du(x)\right|^2\right)^\frac{p}{2}dx+c\left(\int_{B_{\tilde{R}}}\left|f_{\varepsilon}(x)\right|^{\frac{2m\left(p+2\right)}{p\left(2m-1\right)}}dx\right)^{\frac{p}{p+2}}, \end{eqnarray}
and this bound is independent of $m$ by virtue of \eqref{fmepsconvm}.\\
One can easily check that, for any $m>1$, we have
\begin{eqnarray}\label{L2mboundinf**}
\int_{B_{\tilde{R}}}\left|u_{m, \varepsilon}(x)\right|^{2m}dx\le c\int_{B_{\tilde{R}}}\left(\left|u_{m, \varepsilon}(x)\right|-a\right)_+^{2m}dx+c\int_{B_{\tilde{R}}}a^{2m}dx, \end{eqnarray}
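For completeness, one possible way to check this inequality (a sketch, via the pointwise bound $\left|u_{m, \varepsilon}\right|\le\left(\left|u_{m, \varepsilon}\right|-a\right)_++a$ and the convexity inequality $\left(x+y\right)^{2m}\le2^{2m-1}\left(x^{2m}+y^{2m}\right)$; the constant obtained in this way depends on $m$, but its $(2m)$-th root stays bounded, which is all that is used when taking the power $\frac{1}{2m}$ below) is
\begin{eqnarray*}
\int_{B_{\tilde{R}}}\left|u_{m, \varepsilon}(x)\right|^{2m}dx&\le&\int_{B_{\tilde{R}}}\left[\left(\left|u_{m, \varepsilon}(x)\right|-a\right)_++a\right]^{2m}dx\cr\cr
&\le&2^{2m-1}\int_{B_{\tilde{R}}}\left(\left|u_{m, \varepsilon}(x)\right|-a\right)_+^{2m}dx+2^{2m-1}a^{2m}\left|B_{\tilde{R}}\right|. \end{eqnarray*}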
and so, by virtue of \eqref{L2mboundinf*}, for any $m>1$, we get
\begin{eqnarray*}
\int_{B_{\tilde{R}}}\left|u_{m, \varepsilon}(x)\right|^{2m}dx&\le&c\int_{B_{\tilde{R}}}\left(\mu^2+\left|Du(x)\right|^2\right)^\frac{p}{2}dx\cr\cr
&&+c\left(\int_{B_{\tilde{R}}}\left|f_{\varepsilon}(x)\right|^{\frac{2m\left(p+2\right)}{p\left(2m-1\right)}}dx\right)^{\frac{p}{p+2}}+c\int_{B_{\tilde{R}}}a^{2m}dx \end{eqnarray*}
and
\begin{eqnarray}\label{L2mboundinf***}
&&\left(\int_{B_{\tilde{R}}}\left|u_{m, \varepsilon}(x)\right|^{2m}dx\right)^\frac{1}{2m}\cr\cr
&\le&c\left[\int_{B_{\tilde{R}}}\left(\mu^2+\left|Du(x)\right|^2\right)^\frac{p}{2}dx\right]^\frac{1}{2m}+c\left[\int_{B_{\tilde{R}}}a^{2m}dx\right]^\frac{1}{2m}\cr\cr
&&+c\left(\int_{B_{\tilde{R}}}\left|f_{\varepsilon}(x)\right|^{\frac{2m\left(p+2\right)}{p\left(2m-1\right)}}dx\right)^{\frac{p\left(2m-1\right)}{2m\left(p+2\right)}\cdot\frac{1}{2m-1}}. \end{eqnarray}
Now, if we pass to the $\limsup$ as $m\to\infty$ on both sides of \eqref{L2mboundinf***}, recalling \eqref{afix} and \eqref{fmepsconvm}, we get
\begin{equation*}
\limsup_{m\to\infty}\left(\int_{B_{\tilde{R}}}\left|u_{m, \varepsilon}(x)\right|^{2m}dx\right)^\frac{1}{2m}\le c\left\Arrowvert u\right\Arrowvert_{L^\infty\left(B_{\tilde{R}}\right)}, \end{equation*}
and similarly, for any ball $B_{4r}\Subset B_{\tilde{R}}$, we have
\begin{equation*}
\limsup_{m\to\infty}\left(\int_{B_{4r}}\left|u_{m, \varepsilon}(x)\right|^{2m}dx\right)^\frac{1}{2m}\le c\left\Arrowvert u\right\Arrowvert_{L^\infty\left(B_{4r}\right)}, \end{equation*}
which implies
\begin{equation}\label{limsupmeps}
\limsup_{m\to\infty}\left(\int_{B_{4r}}\left|u_{m, \varepsilon}(x)\right|^{2m}dx\right)^\frac{1}{m+1}\le c\left\Arrowvert u\right\Arrowvert_{L^\infty\left(B_{4r}\right)}^2. \end{equation}
Since, for any $m>1$ and for any $\varepsilon$, $u_{m, \varepsilon}\in \left(u+W^{1, p}_0\left(B_{\tilde{R}}\right)\right)\cap L^{2m}\left(B_{\tilde{R}}\right)$ is a minimizer of a functional of the form \eqref{modenergym}, which satisfies \eqref{F1eps}--\eqref{F4eps}, $g_\varepsilon\in L^{\frac{2m\left(p+2\right)}{2m-p}}\left(B_{\tilde{R}}\right)$ and $f_\varepsilon\in L^{\frac{2m\left(p+2\right)}{2mp+p-2}}\left(B_{\tilde{R}}\right)$, we can apply Theorem \ref{approxmthm}, and by \eqref{approxmestimate}, we get
\begin{eqnarray}\label{DVpmeps}
&&\int_{B_{\frac{r}{2}}}\left|DV_p\left(Du_{m, \varepsilon}(x)\right)\right|^2dx\cr\cr
&\le&\frac{c}{r^{\frac{p+2}{p}}}\left[\int_{B_{r}} \left(\mu^2+\left|Du_{m, \varepsilon}\left(x\right)\right|^2\right)^\frac{p}{2}dx\right.\cr\cr
&&\left.+\left(\int_{B_{4 r}}\left|u_{m, \varepsilon}(x)\right|^{2m}dx\right)^\frac{1}{m+1}\cdot\left(\int_{B_{4 r}}\left(\mu^2+\left|Du_{m, \varepsilon}(x)\right|^2\right)^\frac{p}{2}dx\right)^\frac{m}{m+1}\right.\cr\cr
&&+\left.\int_{B_{r}}g_\varepsilon^{\frac{2m\left(p+2\right)}{2m-p}}(x)dx+\int_{B_r}\left|f_\varepsilon(x)\right|^\frac{2m\left(p+2\right)}{2mp+p-2}dx+\left|B_r\right|+1\right], \end{eqnarray}
for any ball $B_{4r}\Subset B_{\tilde{R}}$.\\ Moreover, we can use Lemma \ref{differentiabilitylemma} and \eqref{differentiabilityestimate}, thus getting
\begin{eqnarray}\label{D2Lpmestimatemeps}
&&\int_{B_{\frac{r}{2}}}\left|D^2u_{m, \varepsilon}(x)\right|^pdx\cr\cr
&\le&\frac{c}{r^{\frac{p+2}{p}}}\left[\int_{B_{r}} \left(\mu^2+\left|Du_{m, \varepsilon}\left(x\right)\right|^2\right)^\frac{p}{2}dx\right.\cr\cr
&&\left.+\left(\int_{B_{4 r}}\left|u_{m, \varepsilon}(x)\right|^{2m}dx\right)^\frac{1}{m+1}\cdot\left(\int_{B_{4 r}}\left(\mu^2+\left|Du_{m, \varepsilon}(x)\right|^2\right)^\frac{p}{2}dx\right)^\frac{m}{m+1}\right.\cr\cr
&&+\left.\int_{B_{r}}g_\varepsilon^{\frac{2m\left(p+2\right)}{2m-p}}(x)dx+\int_{B_r}\left|f_\varepsilon(x)\right|^\frac{2m\left(p+2\right)}{2mp+p-2}dx+\left|B_r\right|+1\right]. \end{eqnarray}
By virtue of \eqref{convfinfepsm}, \eqref{gepsboundm}, \eqref{minimalityinf}, \eqref{fmepsconvm} and \eqref{limsupmeps}, all the integrals in the right-hand side of \eqref{D2Lpmestimatemeps} are bounded independently of $m$: for this reason, for each $\varepsilon$, the set of functions $\Set{u_{m, \varepsilon}}_m$ is bounded in $W^{2,p}\left(B_{\frac{r}{2}}\right)$, and since the ball $B_{4r}$ is arbitrary, a covering argument implies
\begin{equation}\label{W1pstrongmueps}
u_{m, \varepsilon}\to u_\varepsilon\qquad\mbox{ strongly in }W^{1, p}\left(B_{4r}\right), \end{equation}
which gives
\begin{equation*}
Du_{m, \varepsilon}\to Du_\varepsilon\qquad\mbox{ almost everywhere in }B_{4r}, \end{equation*}
up to a subsequence, as $m\to\infty$.\\ So, passing to the limit as $m\to\infty$, recalling \eqref{minimalityinf} and \eqref{fmepsconvm}, we also get
\begin{eqnarray}\label{mapproxinfW1pLim}
&&\int_{B_{2r}}\left(\mu^2+\left|Du_{\varepsilon}(x)\right|^2\right)^\frac{p}{2}dx\cr\cr
&\le&L\int_{B_{\tilde{R}}}\left(\mu^2+\left|Du(x)\right|^2\right)^\frac{p}{2}dx+c\left(\int_{B_{\tilde{R}}}\left|f_{\varepsilon}(x)\right|^{\frac{p+2}{p}}dx\right)^{\frac{p}{p+2}}.
\end{eqnarray}
Therefore, since, by virtue of the continuity of $\xi\mapsto DV_p(\xi)$, we also have
$$ DV_p\left(Du_{m, \varepsilon}\right)\to DV_p\left(Du_{\varepsilon}\right)\qquad\mbox{ almost everywhere in }B_{2r},\mbox{ as }m\to\infty, $$
we can apply Fatou's Lemma, passing to the $\limsup$ as $m\to\infty$ in \eqref{DVpmeps}; using \eqref{convfinfepsm}, \eqref{gepsboundm}, \eqref{limsupmeps} and \eqref{W1pstrongmueps}, we get
\begin{eqnarray}\label{DVpeps}
&&\int_{B_{\frac{r}{2}}}\left|DV_p\left(Du_{\varepsilon}(x)\right)\right|^2dx\cr\cr
&\le&\frac{c\left\Arrowvert u\right\Arrowvert_{L^\infty\left(B_{4r}\right)}}{r^{\frac{p+2}{p}}}\left[\int_{B_{4r}} \left(\mu^2+\left|Du_{\varepsilon}\left(x\right)\right|^2\right)^\frac{p}{2}dx\right.\cr\cr
&&+\left.\int_{B_{r}}g_\varepsilon^{p+2}(x)dx+\int_{B_r}\left|f_\varepsilon(x)\right|^\frac{p+2}{p}dx+\left|B_r\right|+1\right], \end{eqnarray} where we also used the fact that $r<\tilde{R}<1$.\\ Now, since, by virtue of \eqref{convfinf1}, \eqref{convgeps}, and \eqref{mapproxinfW1pLim}, all the integrals in the right-hand side of \eqref{DVpeps} can be bounded independently of $\varepsilon$, arguing as in the proof of Lemma \ref{differentiabilitylemma}, it is possible to prove that $\Set{u_\varepsilon}_\varepsilon$ is bounded in $W^{2,p}\left(B_{\frac{r}{2}}\right)$, and since $r$ is arbitrary, a covering argument implies
\begin{equation*}
u_{\varepsilon}\to v\qquad\mbox{ strongly in }W^{1, p}\left(B_{4r}\right), \end{equation*}
and
\begin{equation*}
Du_{\varepsilon}\to Dv\qquad\mbox{ almost everywhere in }B_{4r}, \end{equation*}
as $\varepsilon\to0$.\\ Since, by virtue of the continuity of $\xi\mapsto DV_p(\xi)$, we also have
$$ DV_p\left(Du_{\varepsilon}\right)\to DV_p\left(Dv\right)\qquad\mbox{ almost everywhere in }B_{4r},\mbox{ as }\varepsilon\to0, $$
using Fatou's Lemma in \eqref{DVpeps}, we get
\begin{eqnarray}\label{DVpinfty}
&&\int_{B_{\frac{r}{2}}}\left|DV_p\left(Dv(x)\right)\right|^2dx\cr\cr
&\le&\frac{c\left\Arrowvert u\right\Arrowvert_{L^\infty\left(B_{4r}\right)}}{r^{\frac{p+2}{p}}}\left[\int_{B_{4r}} \left(\mu^2+\left|Dv\left(x\right)\right|^2\right)^\frac{p}{2}dx\right.\cr\cr
&&+\left.\int_{B_{r}}g^{p+2}(x)dx+\int_{B_r}\left|f(x)\right|^\frac{p+2}{p}dx+\left|B_r\right|+1\right]. \end{eqnarray}
The last step to get the conclusion consists in proving that $u=v$ a.e. on $B_{\tilde{R}}$.\\ Since we have
\begin{equation*}
\int_{B_{\tilde{R}}}\left|f_\varepsilon(x)\cdot u(x)-f(x)\cdot u(x)\right|dx\le\left\Arrowvert u\right\Arrowvert_{L^\infty\left(B_{\tilde{R}}\right)}\int_{B_{\tilde{R}}}\left|f_\varepsilon(x)-f(x)\right|dx, \end{equation*}
by virtue of \eqref{convfinf1}, we get
\begin{equation}\label{lasteps0*}
\lim_{\varepsilon\to 0}\int_{B_{\tilde{R}}}f_\varepsilon(x)\cdot u(x)dx=\int_{B_{\tilde{R}}}f(x)\cdot u(x)dx. \end{equation}
Using the minimality of $u$, the lower semicontinuity of the functional $\mathcal{F}$, the minimality of $u_{m, \varepsilon}$ for $\mathcal{F}_{m,\varepsilon}$ and the lower semicontinuity of this functional and recalling \eqref{afix}, we get \begin{eqnarray}\label{semicont}
&&\int_{B_{\tilde{R}}}\left[F\left(x, Du(x)\right)-f(x)\cdot u(x)\right] dx\cr\cr
&\le&\int_{B_{\tilde{R}}}\left[F\left(x, Dv(x)\right)-f(x)\cdot v(x)\right] dx\cr\cr
&\le&\liminf_{\varepsilon\to 0}\int_{B_{\tilde{R}}}\left[F_\varepsilon\left(x, Du_\varepsilon(x)\right)-f_\varepsilon(x)\cdot u_\varepsilon(x)\right] dx\cr\cr
&\le&\liminf_{\varepsilon\to 0}\liminf_{m\to\infty}\int_{B_{\tilde{R}}}\left[F_\varepsilon\left(x, Du_{m, \varepsilon}(x)\right)-f_\varepsilon(x)\cdot u_{m, \varepsilon}(x)\right]dx\cr\cr
&\le&\liminf_{\varepsilon\to 0}\liminf_{m\to\infty}\int_{B_{\tilde{R}}}\left[F_\varepsilon\left(x, Du_{m, \varepsilon}(x)\right)-f_\varepsilon(x)\cdot u_{m, \varepsilon}(x)+\left(\left|u_{m, \varepsilon}(x)\right|-a\right)_+^{2m}\right]dx\cr\cr
&\le&\liminf_{\varepsilon\to 0}\int_{B_{\tilde{R}}}\left[F_\varepsilon\left(x, Du(x)\right)-f_\varepsilon(x)\cdot u(x)\right]dx\cr\cr
&=&\int_{B_{\tilde{R}}}\left[F\left(x, Du(x)\right)-f(x)\cdot u(x)\right]dx, \end{eqnarray}
where the last equality is a consequence of \eqref{convFm} and \eqref{lasteps0*}. Therefore, all the inequalities in \eqref{semicont} hold as equalities, and we get
$$ \mathcal{F}\left(u, B_{\tilde{R}}\right)=\mathcal{F}\left(v, B_{\tilde{R}}\right). $$
So, by virtue of the strict convexity of $F$ with respect to the gradient variable, this implies $u=v$ a.e. on $B_{\tilde{R}}$. By virtue of \eqref{DVpinfty} and a standard covering argument, we get \eqref{inftyestimate}. \end{proof}
We conclude this section with some consequences of Theorem \ref{inftythm}, which can be proved by applying Lemma \ref{differentiabilitylemma} and estimate \eqref{2.2GP}, respectively.
\begin{corollary}\label{corollaryinf}
Let $\Omega\subset\numberset{R}^n$ be a bounded open set and $1<p<2$.\\
Let $u\in W^{1, p}_{\mathrm{loc}}\left(\Omega, \numberset{R}^N\right)\cap L^{\infty}_{\mathrm{loc}}\left(\Omega, \numberset{R}^N\right)$ be a local minimizer of the functional \eqref{modenergy}, under the assumptions \eqref{F1}--\eqref{F4}, with
$$f\in L^{\frac{p+2}{p}}_{\mathrm{loc}}\left(\Omega\right)\qquad\mbox{ and }\qquad g\in L^{p+2}_{\mathrm{loc}}\left(\Omega\right).$$
Then $u\in W^{2, p}_{\mathrm{loc}}\left(\Omega\right)$ and $Du\in L^{p+2}_{\mathrm{loc}}\left(\Omega\right)$. \end{corollary} \printbibliography
\end{document}
\begin{document}
\newtheorem{problem}{Problem} \newtheorem{theorem}{Theorem} \newtheorem{lemma}[theorem]{Lemma} \newtheorem{claim}[theorem]{Claim} \newtheorem{cor}[theorem]{Corollary} \newtheorem{prop}[theorem]{Proposition} \newtheorem{definition}{Definition} \newtheorem{question}[theorem]{Question} \newtheorem{conjecture}{Conjecture}
\newcommand{\comm}[1]{\marginpar{ \vskip-\baselineskip \raggedright\footnotesize \itshape\hrule
#1\par
\hrule}}
\title[Large values of Dirichlet polynomials]{Large values of Dirichlet polynomials and zero density estimates for the Riemann zeta function}
\author[B. Kerr] {Bryce Kerr} \address{School of Science, The University of New South Wales Canberra, Australia} \email{[email protected]} \thanks{The author was supported by Australian Research Council Discovery Project DP160100932.}
\date{\today} \pagenumbering{arabic}
\begin{abstract} In this paper we obtain some new estimates for the number of large values of Dirichlet polynomials. Our results imply new zero density estimates for the Riemann zeta function which give a small improvement on results of Bourgain and Jutila. \end{abstract}
\maketitle \section{Introduction}
In this paper we consider estimating the number of times a Dirichlet polynomial can take large values. Let $a_n$ be a sequence of complex numbers satisfying $|a_n|\leqslant 1$ and ${\mathcal A}\subset [0,T]$ a set of real numbers which is $1$-spaced and satisfies \begin{align*}
\left|\sum_{N\leqslant n \leqslant 2N}a_n n^{it}\right|\geqslant V, \quad t\in {\mathcal A}. \end{align*}
The problem of obtaining an upper bound for the cardinality $|{\mathcal A}|$ is motivated by estimating the number of zeros of the Riemann zeta function in a rectangle to the right of the critical line. The main conjecture for this problem is known as Montgomery's conjecture and states \begin{align} \label{eq:mont}
\sum_{t\in {\mathcal A}}\left|\sum_{N\leqslant n \leqslant 2N}a_n n^{it}\right|^2\ll (NT)^{o(1)}\left(N|{\mathcal A}|+N^2\right). \end{align} Assuming $N\geqslant T^{o(1)}$ this implies \begin{align} \label{eq:montlarge}
|{\mathcal A}|\ll \frac{N^{2+o(1)}}{V^2}, \quad \text{provided} \quad V\geqslant N^{1/2+o(1)}. \end{align} The best known general result towards~\eqref{eq:mont} is obtained via Fourier completion \begin{align} \label{eq:completion}
\sum_{t\in {\mathcal A}}\left|\sum_{N\leqslant n \leqslant 2N}a_n n^{it}\right|^2\ll (NT)^{o(1)}\left(NT+N^2\right). \end{align} The advantage of~\eqref{eq:mont} over~\eqref{eq:completion} is the lack of dependence on the parameter $T$. Results of this type are very rare and we mention an important estimate over convolutions due to Heath-Brown~\cite{HB}. \begin{theorem} \label{thm:heathbrown}
Let ${\mathcal A}\subseteq [0,T]$ be a well spaced set, $N$ an integer and $a_n$ a sequence of complex numbers satisfying $|a_n|\leqslant 1$. We have
$$\sum_{t_1,t_2\in {\mathcal A}}\left|\sum_{1\leqslant n \leqslant N}a_n n^{-1/2+i(t_1-t_2)}\right|^2\ll (|{\mathcal A}|^2+N|{\mathcal A}|+T^{1/2}|{\mathcal A}|^{5/4})(NT)^{o(1)}.$$ \end{theorem} This bound was first used by Heath-Brown to estimate the additive energy $E({\mathcal A})$ of large values
$$E({\mathcal A})=|\{ t_1,\dots,t_4 \in{\mathcal A} \ : \ |t_1+t_2-t_3-t_4|\leqslant 1\}|,$$
and applied to zero density estimates~\cite{HB1}, by combining with ideas of Jutila~\cite{Jut}, and to primes in short intervals~\cite{HB2}. Bourgain~\cite{Bou} refined the use of Theorem~\ref{thm:heathbrown} in applications to zero density estimates and obtained the widest known range of parameters for which the density conjecture holds. The use of energy estimates in classical arguments for bounding $|{\mathcal A}|$ interacts badly with Huxley's subdivision, and Bourgain was able to refine this aspect of Heath-Brown's argument by exploiting the fact that if $|{\mathcal A}|$ is large then the points $t\in {\mathcal A}$ also correlate with large values of the Riemann zeta function. \newline
Bourgain's argument can be considered a Balog--Szemer\'{e}di--Gowers-type theorem applied to ${\mathcal A}$. One way to see how ideas from additive combinatorics are useful is as follows. Using some Fourier analysis, we may assume ${\mathcal A}\subseteq {\mathbb Z}$. Suppose we are in an extreme case where \begin{align} \label{eq:AAsumset}
|{\mathcal A}-{\mathcal A}|\ll |{\mathcal A}|. \end{align} Then
$$E({\mathcal A})\sim |{\mathcal A}|^3,$$
and hence most points $t\in {\mathcal A}$ have $\gg |{\mathcal A}|$ representations $$t=t'-t'', \quad t'\in {\mathcal A}, \quad t''\in {\mathcal A}-{\mathcal A},$$ which implies \begin{align*}
\sum_{t\in {\mathcal A}}\left|\sum_{N\leqslant n \leqslant 2N}a_n n^{it}\right|^2\ll \frac{1}{|{\mathcal A}|}\sum_{t'\in {\mathcal A}, t''\in {\mathcal A}-{\mathcal A}}\left|\sum_{N\leqslant n \leqslant 2N}a_n n^{i(t'-t'')} \right|^2. \end{align*} By~\eqref{eq:AAsumset} we can cover ${\mathcal A}-{\mathcal A}$ by $O(1)$ translates of ${\mathcal A}$, so that \begin{align*}
\sum_{t\in {\mathcal A}}\left|\sum_{N\leqslant n \leqslant 2N}a_n n^{it}\right|^2\ll \frac{1}{|{\mathcal A}|}\sum_{t'\in {\mathcal A}, t''\in {\mathcal A}}\left|\sum_{N\leqslant n \leqslant 2N}a'_n n^{i(t'-t'')} \right|^2, \end{align*}
for some sequence $a'_n$ satisfying $|a'_n|=|a_n|$. Applying Theorem~\ref{thm:heathbrown} gives
$$\sum_{t\in {\mathcal A}}\left|\sum_{N\leqslant n \leqslant 2N}a_n n^{it}\right|^2\ll N^{o(1)}(N|{\mathcal A}|+N^2+T^{1/2}|{\mathcal A}|^{5/4}),$$
which establishes~\eqref{eq:mont} for certain ranges of parameters. One would then give a complementary approach to deal with the case where $E({\mathcal A})$ is small, and a final bound for $|{\mathcal A}|$ may be obtained by decomposing ${\mathcal A}$ into pieces with either small energy or small sumset. The most straightforward way to deal with small energy is to apply duality to create more variables of summation in ${\mathcal A}$. Directly applying duality to~\eqref{eq:mont} for $\ell_2$ norms sets a limit for the argument at \begin{align} \label{eq:mont11}
\sum_{t\in {\mathcal A}}\left|\sum_{N\leqslant n \leqslant 2N}a_n n^{it}\right|^2\ll (NT)^{o(1)}\left(N^{3/2}|{\mathcal A}|+N^2\right), \end{align} and would give the estimate~\eqref{eq:montlarge} in the range $V\geqslant N^{3/4+o(1)}$. \newline
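To see the numerology behind this claim (a sketch, assuming $N\geqslant T^{o(1)}$ so that $(NT)^{o(1)}=N^{o(1)}$): summing the lower bound $V^2$ over $t\in{\mathcal A}$ and comparing with~\eqref{eq:mont11} would give \begin{align*}
|{\mathcal A}|V^2\leqslant \sum_{t\in {\mathcal A}}\left|\sum_{N\leqslant n \leqslant 2N}a_n n^{it}\right|^2\ll N^{o(1)}\left(N^{3/2}|{\mathcal A}|+N^2\right), \end{align*} and when $V\geqslant N^{3/4+o(1)}$ the term $N^{3/2+o(1)}|{\mathcal A}|$ can be absorbed into the left-hand side, leaving $|{\mathcal A}|\ll N^{2+o(1)}/V^2$.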
Establishing~\eqref{eq:mont11} would be significant, and there have been a number of improvements to~\eqref{eq:completion} for $V\geqslant N^{3/4+o(1)}$. We refer the reader to~\cite{Ivic} for an overview of techniques and results prior to Bourgain's work~\cite{Bou0,Bou,Bou1}. An important problem which has seen no progress is to improve on~\eqref{eq:completion} for $V\leqslant N^{3/4+o(1)}$. One may consider applying duality with fractional exponents, although this approach lacks a geometric way to interpret the resulting mean values as in the $\ell_2$ case. Attempting to deal with this issue by decomposing into level sets, one is led to sums of the form \begin{align} \label{eq:hbsparse}
\sum_{t_1,t_2\in {\mathcal A}}\left|\sum_{d\in D} d^{i(t_1-t_2)}\right|^2, \end{align} where $D\subseteq [N,2N]\cap {\mathbb Z}$ may be sparse. This would require a variant of Theorem~\ref{thm:heathbrown} which is sensitive to the size of $D$. It is not clear what to expect for the sums~\eqref{eq:hbsparse} since one may construct variations which give the trivial bound. If $q$ is prime and $H\subseteq {\mathbb F}_q$ is some multiplicative subgroup, there exists a set ${\mathcal A}$ of multiplicative characters mod $q$ such that
$$|H||{\mathcal A}|\sim q,$$ and \begin{align} \label{eq:hbsparse1}
\sum_{\chi_1,\chi_2\in {\mathcal A}}\left|\sum_{h\in H} \chi_1\overline \chi_2(h)\right|^2= |H|^2|{\mathcal A}|^2. \end{align}
If $D$ is not small, it does not seem possible to construct similar examples directly for the sums~\eqref{eq:hbsparse}, since for any set of integers $D\subseteq [N,2N]$ satisfying $|D|\geqslant N^{\varepsilon}$ we have
$$|DD|\geqslant |D|^{2-o(1)}.$$ \indent In this paper we obtain some new large values estimates in the range $V\geqslant N^{3/4+o(1)}$. Our arguments build on techniques of Bourgain, Heath-Brown, Huxley, Jutila and Ivi\'{c} and are also motivated by the sum-product problem. Current approaches to the sum-product problem establish relations between various energies using geometric incidences. To estimate the number of large values of exponential sums, one may proceed in analogy to sum-product estimates given a suitable replacement for geometric incidences. In the case of Dirichlet polynomials, this role is played by Heath-Brown's convolution estimate. \newline
We will use a more general version of Theorem~\ref{thm:heathbrown} and in particular we consider the sums \begin{align} \label{eq:hbgeneral}
\sum_{\substack{t_1,t_2\in {\mathcal A} \\ |t_1-t_2|\leqslant \delta T}}\gamma(t_1)\gamma(t_2)\left|\sum_{1\leqslant n \leqslant N}a_n n^{-1/2+i(t_1-t_2)}\right|^2. \end{align} The parameter $\delta$ corresponds to Huxley's subdivision and in applications, estimates for the sums~\eqref{eq:hbgeneral} give a better dependence on $\delta$ than directly using Theorem~\ref{thm:heathbrown}. We note that one may use the sums~\eqref{eq:hbgeneral} in Heath-Brown's original argument~\cite{HB1} to recover Bourgain's result~\cite{Bou}. We record the bound obtained by this method. \begin{theorem} \label{thm:bou}
Suppose $N,T,V$ are positive real numbers and $a_n$ a sequence of complex numbers satisfying $|a_n|\leqslant 1$. Let ${\mathcal A}\subset[0,T]$ be a $1$-spaced set satisfying \begin{align*}
\left|\sum_{N\leqslant n\leqslant 2N}a_n n^{it} \right|\geqslant V, \quad t\in {\mathcal A}. \end{align*} Suppose $N,T,{\mathcal A},V$ satisfy \begin{align} \label{eq:main1cond}
N\geqslant T^{2/3}, \quad |{\mathcal A}|\leqslant N, \quad V\geqslant N^{3/4+o(1)}. \end{align} For any $0<\delta\leqslant 1$ we have \begin{align*}
|{\mathcal A}|\ll \frac{1}{\delta}\frac{N^{2+o(1)}}{V^2}+\frac{\delta T^2 N^{4+o(1)}}{V^8}+\frac{T^{1/3}N^{16/3+o(1)}}{\delta^{1/3}V^{20/3}}+\frac{T^{2/3}N^{9+o(1)}}{V^{12}}. \end{align*} \end{theorem}
The $\gamma$ in~\eqref{eq:hbgeneral} may be arbitrary positive weights and allow estimation of more general energies such as \begin{align*}
T_k({\mathcal A})=|\{ (t_1,\dots,t_{2k})\in {\mathcal A}^{2k} \ : \ |t_1+\dots-t_{2k}|\leqslant 1\}|, \end{align*} which are relevant to other problems involving the distribution of primes, although they will not be considered in this paper. \newline
We also introduce another technical refinement into the Bourgain-Heath-Brown approach which is based on an idea of Ivi\'c. The Hal\'{a}sz-Montgomery method to estimate the number of large values reduces to bounding sums of the form \begin{align*}
S=\sum_{t_1,t_2\in {\mathcal A}}\left|\sum_{N\leqslant n \leqslant 2N}n^{-1/2+i(t_1-t_2)}\right|^2. \end{align*} Using Mellin inversion one may complete these sums on to $\zeta(1/2+it)$ to get $$S\ll
\sum_{t_1,t_2\in {\mathcal A}}\left|\zeta\left(\frac{1}{2}+i(t_1-t_2)\right)\right|^2.$$ An estimate for $S$ may be obtained by combining H\"{o}lder's inequality with moment estimates for $\zeta$ and energy estimates for ${\mathcal A}$. By the approximate functional equation, if $N\leqslant T^{1/2-\delta}$ is small then the completion step is wasteful and we may obtain sharper results by retaining some information about $N$. For example, after rescaling we get \begin{align*}
S\ll N^{2\delta }\sum_{t_1,t_2\in {\mathcal A}}\left|\sum_{N\leqslant n \leqslant 2N}n^{-1/2-\delta+i(t_1-t_2)}\right|^2, \end{align*} which may be completed on to $\zeta(1/2+\delta+it)$ where higher moment estimates are available. \section{Large values of Dirichlet polynomials} \label{sec:largevalue} In what follows we refer to 1-spaced sets as well spaced. \begin{theorem} \label{thm:main1}
Suppose $N,T,V$ are positive real numbers and $a_n$ a sequence of complex numbers satisfying $|a_n|\leqslant 1$. Let ${\mathcal A}\subset[0,T]$ be a well spaced set satisfying \begin{align*}
\left|\sum_{N\leqslant n\leqslant 2N}a_n n^{it} \right|\geqslant V, \quad t\in {\mathcal A}. \end{align*} Suppose $N,T,{\mathcal A},V$ satisfy \begin{align} \label{eq:main1cond}
N\geqslant T^{2/3}, \quad |{\mathcal A}|\leqslant N, \quad V\geqslant N^{3/4+o(1)}. \end{align} Let $k\geqslant 2$ be a positive integer and $\delta$ a real number satisfying \begin{align} \label{eq:main1delta} N^{o(1)}\delta \leqslant \frac{1}{T}\frac{V^{4k}}{N^{3k-1}} , \quad \text{and} \quad N^{o(1)}\delta \leqslant \frac{N^{1+1/(k-1)}}{T}. \end{align} We have \begin{align*}
|{\mathcal A}|\ll \frac{1}{\delta}\frac{N^{2+o(1)}}{V^2}+\frac{T^{1/3}}{\delta^{1/3}}\frac{N^{k+4/3+o(1)}}{V^{4k/3+4/3}}. \end{align*} \end{theorem}
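To illustrate the exponent bookkeeping in Theorem~\ref{thm:main1}, the following sketch evaluates the bound at the hypothetical parameter point $N=T^{4/5}$, $V=N^{4/5}$, $k=2$, $\delta=T^{-d}$; the parameter choice is ours, made only for illustration, with all exponents of $T$ tracked as exact rationals.

```python
from fractions import Fraction as F

# Hypothetical parameter point (not taken from the paper): N = T^nu,
# V = N^alpha, delta = T^(-d).  All quantities below are exponents of T.
nu, alpha, k = F(4, 5), F(4, 5), 2

# hypotheses of the theorem: N >= T^(2/3) and V >= N^(3/4+o(1))
assert nu >= F(2, 3) and alpha >= F(3, 4)

# the two conditions on delta, rewritten as lower bounds on d,
# together with delta <= 1 (i.e. d >= 0)
d = max(F(0),
        (3 * k - 1) * nu + 1 - 4 * k * nu * alpha,
        1 - nu * (1 + F(1, k - 1)))

# exponents of the two terms of the bound; both grow with d, so the
# smallest admissible d is optimal
term1 = 2 * nu - 2 * nu * alpha + d          # (1/delta) N^2 / V^2
term2 = (1 + d) / 3 + nu * (k + F(4, 3)) - nu * alpha * (F(4, 3) * k + F(4, 3))
bound = max(term1, term2)

assert d == 0 and bound == F(11, 25)   # |A| << T^(11/25+o(1)) at this point
assert bound < nu                      # consistent with |A| <= N
```

At this particular point the second term dominates, giving $|{\mathcal A}|\ll T^{11/25+o(1)}$.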
\begin{theorem} \label{thm:main4}
Suppose $N,T,V$ are positive real numbers and $a_n$ a sequence of complex numbers satisfying $|a_n|\leqslant 1$. Let ${\mathcal A}\subset[0,T]$ be a well spaced set satisfying \begin{align*}
\left|\sum_{N\leqslant n\leqslant 2N}a_n n^{it} \right|\geqslant V, \quad t\in {\mathcal A}. \end{align*} Suppose $T,V,N,{\mathcal A}$ satisfy \begin{align} \label{eq:main4ass}
|{\mathcal A}|\leqslant \min\left\{N,\frac{N^4}{T^2}\right\}, \quad V\geqslant N^{25/32+o(1)}. \end{align} For any $0<\delta\leqslant 1$ satisfying \begin{align} \label{eq:main4deltaass} \delta \geqslant \frac{N^{26+o(1)}}{V^{32}T}, \quad N^{o(1)}\delta \leqslant \frac{V^{16}}{N^{11}T}, \end{align} we have \begin{align*}
|{\mathcal A}|&\ll\frac{1}{\delta}\frac{N^{2+o(1)}}{V^2}+\frac{\delta T^2 N^{4+o(1)}}{V^{8}}+\frac{N^{8+o(1)}}{\delta^2 TV^8}+\frac{N^{10+o(1)}}{\delta^{2/3}V^{12}}. \end{align*} \end{theorem} Our next result may be used to recover a zero density estimate of Ivi\'{c}~\cite[Theorem~11.5]{Ivic} with a slightly smaller range of parameters. \begin{theorem} \label{thm:main12}
Suppose $N,T,V$ are positive real numbers and $a_n$ a sequence of complex numbers satisfying $|a_n|\leqslant 1$. Let ${\mathcal A}\subset[0,T]$ be a well spaced set satisfying \begin{align*}
\left|\sum_{N\leqslant n\leqslant 2N}a_n n^{it} \right|\geqslant V, \quad t\in {\mathcal A}. \end{align*} Suppose $T,N,{\mathcal A}$ satisfy
$$N\geqslant T^{2/3}, \quad |{\mathcal A}|\leqslant N.$$ For any $0<\delta<1$ satisfying \begin{align*} N^{o(1)}\delta \leqslant \frac{V^8}{TN^5}, \end{align*} we have \begin{align*}
|{\mathcal A}|\ll \frac{1}{\delta}\frac{N^{2+o(1)}}{V^2}+\frac{\delta^{2/3} T^{4/3}N^{23/3+o(1)}}{V^{12}}+\frac{T^{2/3}N^{14/3+o(1)}}{V^{20/3}}. \end{align*} \end{theorem}
In applications to zero density estimates, one may use results of Huxley~\cite{Hux} or Jutila~\cite{Jut} to verify that the conditions on ${\mathcal A}$ hold in the above results. \section{Zero density estimates for the Riemann zeta function} \label{sec:zerodensity} For $1/2\leqslant \sigma \leqslant 1$ and $T\gg 1$ we let $N(\sigma,T)$ denote the number of zeros $\rho=\beta+i\gamma$ satisfying $$\zeta(\rho)=0, \quad \beta \geqslant \sigma, \quad 0\leqslant \gamma \leqslant T.$$ By combining the results from Section~\ref{sec:largevalue} with the method of zero detection polynomials we give some new bounds for $N(\sigma,T)$. \begin{theorem} \label{thm:zerodensity2} If $\sigma$ satisfies \begin{align} \label{eq:sigmacond3} \sigma \geqslant \frac{23}{29}, \end{align} then we have \begin{align} \label{eq:bouest} N(\sigma,T)\ll T^{3(1-\sigma)/(2\sigma)+o(1)}. \end{align} \end{theorem} Theorem~\ref{thm:zerodensity2} improves a result of Bourgain~\cite{Bou1}, who previously obtained the estimate~\eqref{eq:bouest} in the range \begin{align*} \sigma\geqslant \frac{3734}{4694}=\frac{23}{29}+\frac{162}{68063}. \end{align*} \begin{theorem} \label{thm:zerodensity1} If $\sigma$ satisfies \begin{align} \label{eq:density1cond} \frac{127}{168}\leqslant \sigma \leqslant \frac{107}{138}, \end{align} then we have
\begin{align*} N(\sigma,T)\ll T^{36(1-\sigma)/(138\sigma-89)+o(1)}+T^{(114\sigma-79)/(138\sigma-89)+o(1)}. \end{align*} In particular, if $$\frac{127}{168}\leqslant \sigma \leqslant \frac{23}{30},$$ then we have
\begin{align} \label{eq:thm:zerodensity1} N(\sigma,T)\ll T^{36(1-\sigma)/(138\sigma-89)+o(1)}. \end{align} \end{theorem}
Ivi\'{c}~\cite[Equation~(11.85)]{Ivic} has obtained \begin{align*} N(\sigma,T)\ll T^{3(1-\sigma)/(7\sigma-4)+o(1)}, \quad \frac{3}{4}\leqslant \sigma\leqslant \frac{10}{13}. \end{align*} Theorem~\ref{thm:zerodensity1} provides an improvement on this bound in the range $$\frac{41}{54}\leqslant \sigma \leqslant \frac{845+\sqrt{7429}}{1212}.$$ Arguments of Jutila~\cite{Jut} (see also~\cite[Section~11.7]{Ivic}) imply the estimate \begin{align} \label{eq:zerodensityJJ} N(\sigma,T)\ll T^{3k(1-\sigma)/((3k-2)\sigma+2-k)+o(1)}, \end{align} for any integer $k\geqslant 2$, valid in the range of parameters given by~\cite[Equation~11.76]{Ivic}. Considering the discussion on~\cite[p.~289]{Ivic}, in order to use~\eqref{eq:zerodensityJJ} for $\sigma\leqslant 13/17$ we need to take $k\geqslant 5$. Comparing~\eqref{eq:thm:zerodensity1} with the case $k\geqslant 5$ of~\eqref{eq:zerodensityJJ}, we obtain an improvement provided \begin{align*} \frac{409}{534}\leqslant \sigma\leqslant \frac{23}{30}=\frac{409}{534}+\frac{1}{1335}. \end{align*}
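The rational identities and crossover points quoted in this section can be checked with exact arithmetic; the following is a quick sanity check (ours, not part of the paper):

```python
from fractions import Fraction as F

# Bourgain's previous range versus the range 23/29 of Theorem thm:zerodensity2:
assert F(3734, 4694) == F(23, 29) + F(162, 68063)

# Endpoints in the comparison with Jutila's estimate:
assert F(23, 30) == F(409, 534) + F(1, 1335)

def new_exp(s):
    # exponent 36(1-sigma)/(138 sigma - 89) from Theorem thm:zerodensity1
    return 36 * (1 - s) / (138 * s - 89)

def ivic_exp(s):
    # Ivic's exponent 3(1-sigma)/(7 sigma - 4)
    return 3 * (1 - s) / (7 * s - 4)

def jutila_exp(s, k):
    # exponent 3k(1-sigma)/((3k-2) sigma + 2 - k)
    return 3 * k * (1 - s) / ((3 * k - 2) * s + 2 - k)

# the exponents agree exactly at the stated crossover points
assert new_exp(F(41, 54)) == ivic_exp(F(41, 54))
assert new_exp(F(409, 534)) == jutila_exp(F(409, 534), 5)
```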
\section{Preliminary results} In what follows we assume $T^{o(1)}\leqslant N\leqslant T^{O(1)}$. This implies terms $N^{o(1)}$ and $T^{o(1)}$ have the same meaning. \newline
\label{sec:prelim}
Given $\Delta>0$ define \begin{align} \label{eq:IdAset}
I(\Delta,{\mathcal A})=\sum_{\substack{t_1,t_2\in {\mathcal A} \\ |t_1-t_2|\leqslant \Delta}}1. \end{align}
\begin{lemma} \label{lem:ell2counting} Let ${\mathcal A}\subseteq {\mathbb R}$ be finite and $\Delta>0$. For integer $k$ define $${\mathcal A}_{k}=\{ t\in {\mathcal A} \ : \ k\Delta<t\leqslant (k+1)\Delta\}.$$ Then we have \begin{align*}
\sum_{k}|{\mathcal A}_k|^2\ll I(\Delta,{\mathcal A})\ll \sum_{k}|{\mathcal A}_k|^2. \end{align*} \end{lemma} \begin{proof} The inequality \begin{align*}
\sum_{k}|{\mathcal A}_k|^2\ll I(\Delta,{\mathcal A}), \end{align*}
follows from the observation that if $t_1,t_2\in {\mathcal A}_k$ then $|t_1-t_2|\leqslant \Delta.$ If $t_1,t_2$ satisfy
$$|t_1-t_2|\leqslant \Delta,$$ then there exists $k_1,k_2$ satisfying
$$|k_1-k_2|\leqslant 1,$$ such that $t_1\in {\mathcal A}_{k_1}$ and $t_2\in {\mathcal A}_{k_2}.$ This implies
$$I(\Delta,{\mathcal A})\leqslant \sum_{\substack{k_1,k_2 \\ |k_1-k_2|\leqslant 1}}|{\mathcal A}_{k_1}||{\mathcal A}_{k_2}|\ll \sum_{k}|{\mathcal A}_k|^2.$$ \end{proof}
Combining Lemma~\ref{lem:ell2counting} with the Cauchy-Schwarz inequality gives the following result. \begin{cor} \label{eq:ell2ell2c} Let ${\mathcal A}\subseteq [0,T]$ be a well spaced set and $0<\delta\leqslant 1$. We have \begin{align*}
|{\mathcal A}|^2\ll \frac{1}{\delta} I(\delta T,{\mathcal A}). \end{align*} \end{cor}
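Lemma~\ref{lem:ell2counting} is an elementary bucketing statement; its proof in fact gives the explicit two-sided bound $\sum_k|{\mathcal A}_k|^2\leqslant I(\Delta,{\mathcal A})\leqslant 3\sum_k|{\mathcal A}_k|^2$. A small numerical check of this bound (illustrative only):

```python
import math
import random
from collections import Counter

def close_pairs(A, Delta):
    # I(Delta, A): ordered pairs (t1, t2) in A x A with |t1 - t2| <= Delta
    return sum(1 for t1 in A for t2 in A if abs(t1 - t2) <= Delta)

def bucket_square_sum(A, Delta):
    # sum_k |A_k|^2 where A_k = {t in A : k*Delta < t <= (k+1)*Delta}
    counts = Counter(math.ceil(t / Delta) - 1 for t in A)
    return sum(c * c for c in counts.values())

random.seed(0)
T = 2000
# a set of distinct integers is 1-spaced ("well spaced")
A = random.sample(range(1, T + 1), 300)
for Delta in (5.0, 25.0, 100.0):
    S = bucket_square_sum(A, Delta)
    # two-sided bound from the proof of the lemma
    assert S <= close_pairs(A, Delta) <= 3 * S
```

The upper constant $3$ comes from the fact that a pair at distance at most $\Delta$ lies in the same or adjacent buckets, together with $2ab\leqslant a^2+b^2$.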
For a proof of the following, see~\cite[Theorem~9.4]{IwKo}. \begin{lemma} \label{lem:classicalmv} For any set ${\mathcal A}\subseteq [0,T]$ of well spaced points, integer $N$ and sequence of complex numbers $a_n$ we have
$$\sum_{t\in {\mathcal A}}\left|\sum_{1\leqslant n \leqslant N}a_n n^{it}\right|^2\ll (T+N)\|a\|_2^2(\log{N}).$$ \end{lemma} As a consequence of the above, we have the following. \begin{lemma} \label{lem:classicalmoments}
For any set ${\mathcal A}\subseteq [0,T]$ of well spaced points, integers $N,k$ and sequence of complex numbers $a_n$ satisfying $|a_n|\leqslant 1$ we have
$$\sum_{t\in {\mathcal A}}\left|\sum_{1\leqslant n \leqslant N}a_n n^{-1/2+it}\right|^{2k}\ll (T+N^k)N^{o(1)}.$$ \end{lemma} For a proof of the following, see~\cite[Theorem~8.4]{Ivic}. \begin{theorem} \label{lem:8thmoment} For $T\gg 1$ we have \begin{align*}
\int_{0}^{T}\left|\zeta\left(\frac{5}{8}+it\right)\right|^{8}dt\ll T^{1+o(1)}. \end{align*} \end{theorem} For a proof of the following, see~\cite[Lemma~4.48]{Bou}. \begin{lemma} \label{lem:removemax}
For any $N\geqslant 1, t\in {\mathbb R}$ and sequence of complex numbers $a_n$ satisfying $|a_n|\leqslant 1$ we have \begin{align*}
\sum_{N\leqslant n\leqslant 2N}a_n n^{it}\ll \log{N}\int_{|\tau|\leqslant \log{N}}\left|\sum_{N\leqslant n \leqslant 2N}a_n n^{i(t+\tau)}\right|d\tau. \end{align*} \end{lemma} The following is essentially due to Heath-Brown~\cite{HB2}. Since our statement is more general we provide details of the proof. \begin{lemma} \label{lem:e2energy}
Let $N,T$ be positive real numbers and $a_n$ a sequence of complex numbers satisfying $|a_n|\leqslant 1.$ Let ${\mathcal A}\subset [0,T]$ be a well spaced set satisfying
$$\left|\sum_{N\leqslant n \leqslant 2N}a_n n^{it} \right|\geqslant V, \quad t\in {\mathcal A},$$ and for integer $\ell$ define
$$r(\ell)=|\{ (t_1,t_2)\in {\mathcal A}\times {\mathcal A} \ : \ 0\leqslant t_1-t_2-\ell<1\}|.$$ If $N,{\mathcal A},T$ satisfy \begin{align} \label{eq:e2conds}
N\geqslant T^{2/3}, \quad |{\mathcal A}|\leqslant N, \end{align}
then for any set ${\mathcal B}$ we have \begin{align*}
\sum_{\ell\in {\mathcal B}}r(\ell)^2\ll \frac{N^{3/2+o(1)}}{V^2}|{\mathcal A}|^{1/2}\sum_{\ell \in {\mathcal B}}r(\ell)+\frac{N^{4+o(1)}}{V^4}|{\mathcal A}|. \end{align*} \end{lemma} \begin{proof} Let $$W=\sum_{\ell\in {\mathcal B}}r(\ell)^2,$$ and for integer $j\ll \log{T}$ define $$Y_j=\{ \ell\in {\mathcal B} \ : \ 2^{j}\leqslant r(\ell)<2^{j+1}\},$$ so that
$$\sum_{j\ll \log{T}}2^{2j}|Y_j|\ll W\ll \sum_{j\ll \log{T}}2^{2j}|Y_j|.$$ By the pigeonhole principle, there exists some $j_0$ such that defining \begin{align} \label{eq:deltaDdefdef} D=Y_{j_0}, \quad 2^{j_0}=\Delta, \end{align} we have \begin{align} \label{eq:EcaUL}
|D|\Delta^2\ll W\ll (\log{T})|D|\Delta^2. \end{align} Consider the sum \begin{align*}
S=\int_{|\tau|\leqslant 2\log{N}}\sum_{\ell\in D, t\in {\mathcal A}}\left|\sum_{N\leqslant n \leqslant 2N}a_n n^{i(\ell+t+\tau)}\right|^2d\tau. \end{align*} Define the set ${\mathcal B}_0$ by
$${\mathcal B}_0=\{ (\ell,t)\in D\times {\mathcal A} \ : \exists \ t'\in {\mathcal A} \ \text{such that} \ |t'-t-\ell|\leqslant 2\},$$ so that \begin{align} \label{eq:SubB}
S\geqslant \sum_{\substack{(\ell,t)\in {\mathcal B}_0 }}\int_{|\tau|\leqslant 2\log{N}}\left|\sum_{N\leqslant n \leqslant 2N}a_n n^{i(\ell+t+\tau)}\right|^2d\tau. \end{align}
If $(\ell,t)\in {\mathcal B}_0$ then there exists some $t'\in {\mathcal A}$ and some $|\theta|\leqslant 2$ such that $$t'+\theta=\ell+t,$$ which gives \begin{align*}
\int_{|\tau|\leqslant 2\log{N}}\left|\sum_{N\leqslant n \leqslant 2N}a_n n^{i(\ell+t+\tau)}\right|^2d\tau=\int_{|\tau|\leqslant 2\log{N}}\left|\sum_{N\leqslant n \leqslant 2N}a_n n^{i(t'+\theta+\tau)}\right|^2d\tau. \end{align*}
Since $|\theta|\leqslant 2$ we have $$(\theta+[-2\log{N},2\log{N}])\cap [-\log{N},\log{N}]=[-\log{N},\log{N}],$$ and hence \begin{align*}
\int_{|\tau|\leqslant 2\log{N}}\left|\sum_{N\leqslant n \leqslant 2N}a_n n^{i(\ell+t+\tau)}\right|^2d\tau\geqslant \int_{|\tau|\leqslant \log{N}}\left|\sum_{N\leqslant n \leqslant 2N}a_n n^{i(t'+\tau)}\right|^2d\tau. \end{align*} Applying Lemma~\ref{lem:removemax} \begin{align*}
\int_{|\tau|\leqslant 2\log{N}}\left|\sum_{N\leqslant n \leqslant 2N}a_n n^{i(\ell+t+\tau)}\right|^2d\tau\gg \frac{V^2}{\log{N}}, \end{align*} which implies \begin{align*}
N^{o(1)}S\gg V^2|{\mathcal B}_0|. \end{align*} Recalling~\eqref{eq:deltaDdefdef} and the definition of $Y_j$ \begin{align*}
\Delta |D|\ll \sum_{\ell \in D}r(\ell)\ll |\{ (\ell,t,t')\in D \times {\mathcal A} \times {\mathcal A} \ : \ 0\leqslant t'-t-\ell< 1\}|\leqslant |{\mathcal B}_0|, \end{align*} so that \begin{align} \label{eq:SdeltaDLB}
\Delta |D|\ll \frac{N^{o(1)}}{V^2}S. \end{align}
Taking a maximum over $\tau$ in $S$, there exists some sequence of complex numbers $b(n)$ satisfying $|b(n)|\leqslant 1$ such that \begin{align*}
S&\ll N^{o(1)}\sum_{\ell\in D, t\in {\mathcal A}}\left|\sum_{N\leqslant n \leqslant 2N}b(n)n^{i(\ell+t)}\right|^2 \\
&\leqslant N^{o(1)}\sum_{N\leqslant n_1,n_2\leqslant 2N}\left|\sum_{\ell\in D}\left(\frac{n_1}{n_2}\right)^{i\ell} \right|\left|\sum_{t\in {\mathcal A}}\left(\frac{n_1}{n_2}\right)^{it}\right|. \end{align*} By the Cauchy-Schwarz inequality \begin{align} \label{eq:SS1S2} S^2\ll N^{2+o(1)}S_1S_2, \end{align} where \begin{align*}
S_1=\sum_{N\leqslant n_1,n_2\leqslant 2N}(n_1n_2)^{-1/2}\left|\sum_{\ell\in D}\left(\frac{n_1}{n_2}\right)^{i\ell} \right|^2, \end{align*} and \begin{align*}
S_2=\sum_{N\leqslant n_1,n_2\leqslant 2N}(n_1n_2)^{-1/2}\left|\sum_{t\in {\mathcal A}}\left(\frac{n_1}{n_2}\right)^{it} \right|^2. \end{align*} Interchanging summation, we have \begin{align*}
S_1\ll \sum_{\ell_1,\ell_2\in D}\left|\sum_{N\leqslant n \leqslant 2N}n^{-1/2+i(\ell_1-\ell_2)}\right|^2, \end{align*} and hence by Theorem~\ref{thm:heathbrown} and~\eqref{eq:e2conds} \begin{align*}
S_1\ll (|D|^2+N|D|)N^{o(1)}. \end{align*}
By a similar argument and the assumption $|{\mathcal A}|\leqslant N$ \begin{align*}
S_2\ll N^{1+o(1)}|{\mathcal A}|. \end{align*} The above estimates combine to give \begin{align*}
S\ll N^{3/2+o(1)}|{\mathcal A}|^{1/2}\left(|D|+N^{1/2}|D|^{1/2}\right), \end{align*} and hence \begin{align*}
\Delta |D|\ll \frac{N^{3/2+o(1)}}{V^2}|{\mathcal A}|^{1/2}\left(|D|+N^{1/2}|D|^{1/2}\right). \end{align*} This implies either \begin{align*}
\Delta \ll \frac{N^{3/2+o(1)}}{V^2}|{\mathcal A}|^{1/2}, \end{align*} or \begin{align*}
\Delta^2|D|\ll \frac{N^{4+o(1)}}{V^4}|{\mathcal A}|, \end{align*} and the result follows from~\eqref{eq:EcaUL} and the estimate \begin{align*}
\Delta |D|\ll \sum_{\ell \in {\mathcal B}}r(\ell). \end{align*} \end{proof} The following is due to Huxley~\cite{Hux}. \begin{lemma} \label{lem:hux}
Suppose $N,T,V$ are positive real numbers and $a_n$ a sequence of complex numbers satisfying $|a_n|\leqslant 1$. Let ${\mathcal A}\subset[0,T]$ be a well spaced set satisfying \begin{align*}
\left|\sum_{N\leqslant n\leqslant 2N}a_n n^{it} \right|\geqslant V, \quad t\in {\mathcal A}. \end{align*} If $V\geqslant N^{3/4+o(1)}$ then \begin{align*}
|{\mathcal A}|&\ll \frac{N^{2+o(1)}}{V^2}+\frac{TN^{4+o(1)}}{V^6}. \end{align*} \end{lemma}
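The smoothing arguments in the next section are built around the Fej\'er kernel $F(x)=\max\{1-|x|,0\}$ and its nonnegative Fourier transform $\widehat F(y)=(\sin\pi y/\pi y)^2$ (see~\eqref{eq:Fhat} below). A numerical check of this transform pair, using the convention $\widehat F(y)=\int F(x)e^{-2\pi ixy}\,dx$ (illustrative only, not part of the argument):

```python
import math

def F(x):
    # Fejer kernel F(x) = max(1 - |x|, 0), supported on [-1, 1]
    return max(1.0 - abs(x), 0.0)

def F_hat(y, n=20000):
    # trapezoidal approximation of the Fourier transform over the support;
    # F is even, so the imaginary (sine) part vanishes identically
    h = 2.0 / n
    s = 0.0
    for k in range(n + 1):
        x = -1.0 + k * h
        w = 0.5 if k in (0, n) else 1.0
        s += w * F(x) * math.cos(2.0 * math.pi * x * y)
    return s * h

for y in (0.1, 0.25, 0.5, 1.5):
    exact = (math.sin(math.pi * y) / (math.pi * y)) ** 2
    assert abs(F_hat(y) - exact) < 1e-5

# F_hat is bounded below by an absolute constant on |y| <= 1/4
assert F_hat(0.25) > 0.8
```

The lower bound on $|y|\leqslant 1/4$ is what makes the kernel usable as a majorant for the indicator of short intervals in Lemma~\ref{lem:coefficients}.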
\section{Large values over additive convolutions} \label{sec:convolutions} Given real numbers $N,\Delta$, a well spaced sequence $t_1,\dots,t_R$ and a sequence of positive real numbers $\gamma(t)$ we define \begin{align} \label{eq:SNDdef}
S(N,\Delta,\gamma)=\sum_{\substack{r,s=1 \\ |t_r-t_s|\leqslant \Delta}}^{R}\gamma(t_r)\gamma(t_s)\left|\sum_{N\leqslant n \leqslant 2N}n^{-1/2+i(t_r-t_s)}\right|^2, \end{align} \begin{align} \label{eq:IDgdef}
I(\Delta,\gamma)=\sum_{\substack{r,s=1 \\ |t_r-t_s|\leqslant \Delta}}^{R}\gamma(t_r)\gamma(t_s),\end{align} and when $\gamma$ is the indicator function of some set ${\mathcal A}$ let $I(\Delta,{\mathcal A})$ be as in~\eqref{eq:IdAset}. If each $t_i\leqslant T$ then we write $$S(N,T,\gamma)=S(N,\gamma).$$ \begin{theorem} \label{thm:largeadditive} Let $T$ and $N$ be real numbers, $1\leqslant t_1,\dots,t_R\leqslant T$ a well spaced sequence, $a_n$ a sequence of complex numbers satisfying \begin{align*}
|a_n|\leqslant 1, \end{align*} $\gamma$ a sequence of positive real numbers and $\Delta\geqslant 1$ a real number. If $N\geqslant \Delta^{2/3}$ then we have \begin{align*}
S(N,\Delta,\gamma)\ll (I(\Delta,\gamma)+N\|\gamma\|_2^2)N^{o(1)}, \end{align*} and in general \begin{align*}
S(N,\Delta,\gamma)\ll (I(\Delta,\gamma)+N\|\gamma\|_2^2+\Delta^{1/2}I(\Delta,\gamma)^{1/4}\|\gamma\|_2^{3/2})N^{o(1)}. \end{align*} \end{theorem} As a consequence of Theorem~\ref{thm:largeadditive} we have the following. \begin{theorem} \label{thm:largeadditive1} Let $N,T\geqslant 1$ be real numbers, ${\mathcal A}\subseteq [0,T]$ a well spaced set and $a_n$ a sequence of complex numbers satisfying
$$|a_n|\leqslant 1.$$ We have \begin{align*}
& \sum_{\substack{t_1,\dots,t_{2k}\in {\mathcal A} }}\left|\sum_{N\leqslant n\leqslant 2N}a_nn^{-1/2+i(t_1+\dots-t_{2k})} \right|^2 \ll \\ & \quad \quad \quad (|{\mathcal A}|^{2k}+NT_k({\mathcal A})+T^{1/2}|{\mathcal A}|^{k/2}T_{k}({\mathcal A})^{3/4})N^{o(1)}. \end{align*} In particular, if either \begin{align} \label{eq:Ndeltacond}
N\geqslant T^{2/3} \quad \text{or} \quad T^{2/3}T_k({\mathcal A})\leqslant |{\mathcal A}|^{2k}, \end{align} then we have \begin{align*}
&\sum_{\substack{t_1,\dots,t_{2k}\in {\mathcal A}}}\left|\sum_{N\leqslant n\leqslant 2N}a_nn^{-1/2+i(t_1+\dots-t_{2k})} \right|^2\ll \left(|{\mathcal A}|^{2k}+NT_k({\mathcal A})\right)(NT)^{o(1)}. \end{align*} \end{theorem} The following preliminary results are required for the proof of Theorem~\ref{thm:largeadditive}. \begin{lemma} \label{lem:coefficients} Let $t_r$ be a sequence of real numbers, $a_n$ a sequence of complex numbers, $b_n$ a sequence of positive real numbers satisfying \begin{align} \label{eq:anUB}
|a_n|\leqslant b_n, \end{align} and $\gamma$ a sequence of positive real numbers. For any positive $\Delta,N,K$ and $\varepsilon$ satisfying \begin{align} \label{eq:coefficientsepsilon} \varepsilon\leqslant \frac{1}{4}, \quad K>1, \end{align} we have \begin{align*}
&\sum_{\substack{r,s=1 \\ |t_r-t_s|\leqslant \Delta }}^{R}\gamma(t_r)\gamma(t_s)\left|\sum_{N\leqslant n \leqslant KN}a_n n^{-1/2+i(t_r-t_s)}\right|^2\ll \\ & \quad \quad \quad \frac{1}{\varepsilon}\sum_{\substack{r,s=1 \\ |t_r-t_s|\leqslant 4\varepsilon\Delta}}^{R}\gamma(t_r)\gamma(t_s)\left|\sum_{N\leqslant n \leqslant KN}b_n n^{-1/2+i(t_r-t_s)}\right|^2. \end{align*} \end{lemma} \begin{proof} Define \begin{align} \label{eq:Fdef}
F(x)=\max\{1-|x|,0\}, \end{align} and note that \begin{align} \label{eq:Fhat} \widehat F(y)=\left(\frac{\sin{\pi y}}{\pi y}\right)^2\geqslant 0, \end{align} where $\widehat F$ denotes the Fourier transform. Since \begin{align*}
\widehat F\left(x\right)\gg1 \quad \text{if} \quad |x|\leqslant \frac{1}{4}, \end{align*} we have \begin{align} \label{eq:Fhathat1}
& \sum_{\substack{r,s=1 \\ |t_r-t_s|\leqslant \Delta }}^{R}\gamma(t_r)\gamma(t_s)\left|\sum_{N\leqslant n \leqslant KN}a_n n^{-1/2+i(t_r-t_s)}\right|^2\ll \\ & \sum_{\substack{r,s=1}}^{R}\gamma(t_r)\gamma(t_s)\widehat F\left(\frac{t_r-t_s}{4\Delta}\right)\left|\sum_{N\leqslant n \leqslant KN}a_n n^{-1/2+i(t_r-t_s)}\right|^2. \nonumber \end{align} Expanding the square, interchanging summation and using Fourier inversion, we get \begin{align*}
&\sum_{\substack{r,s=1}}^{R}\gamma(t_r)\gamma(t_s)\widehat F\left(\frac{t_r-t_s}{4\Delta}\right)\left|\sum_{N\leqslant n \leqslant KN}a_n n^{-1/2+i(t_r-t_s)}\right|^2= \\ & \int_{-\infty}^{\infty}F(y)\sum_{N\leqslant n_1,n_2\leqslant KN}a_{n_1}\overline a_{n_2}(n_1n_2)^{-1/2} \\ & \quad \quad \quad \times \sum_{\substack{r,s=1}}^{R}\gamma(t_r)\gamma(t_s)e\left(y\left(\frac{t_r}{4\Delta}-\frac{t_s}{4\Delta} \right) \right)n_1^{i(t_r-t_s)}n_2^{-i(t_r-t_s)}dy, \end{align*} and hence by~\eqref{eq:anUB} \begin{align*}
&\sum_{\substack{r,s=1}}^{R} \gamma(t_r)\gamma(t_s)\widehat F\left(\frac{t_r-t_s}{4\Delta}\right)\left|\sum_{N\leqslant n \leqslant KN}a_n n^{-1/2+i(t_r-t_s)}\right|^2\leqslant \\ & \int_{-\infty}^{\infty}F(y)\sum_{N\leqslant n_1,n_2\leqslant KN}b_{n_1}b_{n_2}(n_1n_2)^{-1/2}\left|\sum_{r=1}^{R}\gamma(t_r)e\left(\frac{yt_r}{4\Delta}\right)n_1^{it_r}n_2^{-it_r} \right|^2dy. \end{align*} Since \begin{align} \label{eq:Fycondcond}
F(y)=0 \quad \text{if} \quad |y|\geqslant 1 \quad \text{and} \quad F(y)\leqslant 1 \quad \text{otherwise}, \end{align} and \begin{align*} \widehat F(\varepsilon y)\gg 1 \quad \text{if} \quad |y|\leqslant \frac{1}{4\varepsilon}, \end{align*} the assumption~\eqref{eq:coefficientsepsilon} implies that \begin{align*}
&\sum_{\substack{r,s=1}}^{R}\gamma(t_r)\gamma(t_s)\widehat F\left(\frac{t_r-t_s}{4\Delta}\right)\left|\sum_{N\leqslant n \leqslant KN}a_n n^{-1/2+i(t_r-t_s)}\right|^2\ll \\ & \int_{-\infty}^{\infty}\widehat F(\varepsilon y)\sum_{N\leqslant n_1,n_2\leqslant KN}b_{n_1}b_{n_2}(n_1n_2)^{-1/2}\left|\sum_{r=1}^{R}\gamma(t_r)e\left(\frac{yt_r}{4\Delta}\right)n_1^{it_r}n_2^{-it_r} \right|^2dy, \end{align*} and hence \begin{align*}
&\sum_{\substack{r,s=1}}^{R}\gamma(t_r)\gamma(t_s)\widehat F\left(\frac{t_r-t_s}{4\Delta}\right)\left|\sum_{N\leqslant n \leqslant KN}a_n n^{-1/2+i(t_r-t_s)}\right|^2\ll \\ & \frac{1}{\varepsilon}\int_{-\infty}^{\infty}\widehat F(y)\sum_{N\leqslant n_1,n_2\leqslant KN}b_{n_1}b_{n_2}(n_1n_2)^{-1/2}\left|\sum_{r=1}^{R}\gamma(t_r)e\left(\frac{yt_r}{4\varepsilon \Delta}\right)n_1^{it_r}n_2^{-it_r} \right|^2dy. \end{align*}
Expanding the square, interchanging summation and using Fourier inversion, we get \begin{align*}
&\int_{-\infty}^{\infty}\widehat F(y)\sum_{N\leqslant n_1,n_2\leqslant KN}b_{n_1}b_{n_2}(n_1n_2)^{-1/2}\left|\sum_{r=1}^{R}\gamma(t_r)e\left(\frac{yt_r}{4\varepsilon \Delta}\right)n_1^{it_r}n_2^{-it_r} \right|^2dy= \\ & \quad \quad \quad \sum_{\substack{r,s=1}}^{R}\gamma(t_r)\gamma(t_s)\int_{-\infty}^{\infty}\widehat F(y)e\left(y\left(\frac{t_r}{4\varepsilon\Delta}-\frac{t_s}{4\varepsilon\Delta} \right) \right)dy \\ & \quad \quad \quad \quad \times \sum_{N\leqslant n_1,n_2\leqslant KN}b_{n_1}b_{n_2}n_1^{-1/2+i(t_r-t_s)}n_2^{-1/2-i(t_r-t_s)}, \end{align*} which after rearranging gives \begin{align*}
&\sum_{\substack{r,s=1}}^{R}\gamma(t_r)\gamma(t_s)\widehat F\left(\frac{t_r-t_s}{4\Delta}\right)\left|\sum_{N\leqslant n \leqslant KN}a_n n^{-1/2+i(t_r-t_s)}\right|^2 \\ & \quad \quad \quad \quad \ll \frac{1}{\varepsilon}\sum_{r,s=1}^{R}\gamma(t_r)\gamma(t_s)F\left(\frac{t_r-t_s}{4\varepsilon \Delta}\right)\left|\sum_{N\leqslant n \leqslant KN}b_n n^{-1/2+i(t_r-t_s)} \right|^2, \end{align*} and the result follows from~\eqref{eq:Fhathat1} and~\eqref{eq:Fycondcond}. \end{proof} \begin{lemma} \label{lem:smoothsums} Let $N$ be an integer, $t_r$ a sequence of real numbers, $a_n$ a sequence of complex numbers satisfying \begin{align*}
|a_n|\leqslant 1, \end{align*} and $\gamma$ a sequence of positive real numbers. Let $c_1<c_2$ be constants and for each pair of integers $r,s$ let $N_{r,s}$ and $M_{r,s}$ be integers satisfying \begin{align*} c_1N\leqslant N_{r,s}< M_{r,s}\leqslant c_2N. \end{align*} For any $\Delta \geqslant 1$ we have \begin{align*}
& \sum_{\substack{r,s=1 \\ |t_r-t_s|\leqslant \Delta}}^{R}\gamma(t_r)\gamma(t_s)\left|\sum_{N_{r,s}\leqslant n \leqslant M_{r,s}}a_n n^{i(t_r-t_s)}\right|^2 \\ & \quad \quad \quad \quad \quad \ll (\log{N})^2\sum_{\substack{r,s=1 \\ |t_r-t_s|\leqslant \Delta}}^{R}\gamma(t_r)\gamma(t_s)\left|\sum_{c_1N\leqslant n \leqslant c_2N}n^{i(t_r-t_s)}\right|^2. \end{align*} \begin{proof} Let \begin{align*}
S=\sum_{\substack{r,s=1 \\ |t_r-t_s|\leqslant \Delta}}^{R}\gamma(t_r)\gamma(t_s)\left|\sum_{N_{r,s}\leqslant n \leqslant M_{r,s}}a_n n^{i(t_r-t_s)}\right|^2. \end{align*} For each pair $r,s$ we have \begin{align*} &\sum_{N_{r,s}\leqslant n \leqslant M_{r,s}}a_n n^{i(t_r-t_s)}= \\ & \int_{0}^{1}\left(\sum_{N_{r,s}\leqslant n \leqslant M_{r,s}}e(-\alpha n)\right)\sum_{c_1N\leqslant n \leqslant c_2N}a_ne(\alpha n) n^{i(t_r-t_s)}d\alpha, \end{align*} so that defining \begin{align*}
F(\alpha)=\min\left(N,\frac{1}{\|\alpha\|}\right), \end{align*} we have \begin{align*}
S\ll \sum_{\substack{r,s=1 \\ |t_r-t_s|\leqslant \Delta}}^{R}\gamma(t_r)\gamma(t_s)\left(\int_{0}^{1}F(\alpha)\left|\sum_{c_1N\leqslant n \leqslant c_2N}a_ne(\alpha n) n^{i(t_r-t_s)} \right|d\alpha \right)^2. \end{align*} By the Cauchy-Schwarz inequality \begin{align*}
S\ll \left(\int_{0}^{1}F(\alpha)d\alpha\right)\sum_{\substack{r,s=1 \\ |t_r-t_s|\leqslant \Delta}}^{R}\gamma(t_r)\gamma(t_s)\int_{0}^{1}F(\alpha)\left|\sum_{c_1N\leqslant n \leqslant c_2N}a_ne(\alpha n) n^{i(t_r-t_s)} \right|^2d\alpha. \end{align*} Since \begin{align*} \int_{0}^{1}F(\alpha)d\alpha\ll \log{N}, \end{align*} we see that \begin{align*}
S\ll \log{N}\int_{0}^{1}F(\alpha)\sum_{\substack{r,s=1 \\ |t_r-t_s|\leqslant \Delta}}^{R}\gamma(t_r)\gamma(t_s)\left|\sum_{c_1N\leqslant n \leqslant c_2N}a_ne(\alpha n) n^{i(t_r-t_s)} \right|^2d\alpha. \end{align*} Applying Lemma~\ref{lem:coefficients} with $\varepsilon=1/4$ gives \begin{align*}
S&\ll \log{N}\int_{0}^{1}F(\alpha)d\alpha \sum_{\substack{r,s=1 \\ |t_r-t_s|\leqslant \Delta}}^{R}\gamma(t_r)\gamma(t_s)\left|\sum_{c_1N\leqslant n \leqslant c_2N}n^{i(t_r-t_s)} \right|^2 \\
&\ll (\log{N})^2\sum_{\substack{r,s=1 \\ |t_r-t_s|\leqslant \Delta}}^{R}\gamma(t_r)\gamma(t_s)\left|\sum_{c_1N\leqslant n \leqslant c_2N}n^{i(t_r-t_s)} \right|^2, \end{align*} which completes the proof. \end{proof} \end{lemma} Our next two results are variants of~\cite[Lemma~2]{HB}. \begin{lemma} \label{lem:larger} Let $N,K\geqslant 1$ be real numbers. Suppose $t_r$ is a sequence of real numbers satisfying \begin{align*} 1\leqslant t_r \leqslant T. \end{align*}
Let $\Delta\geqslant 1$ be a real number and $\gamma$ a sequence of positive real numbers. For any $U\geqslant 1$ we have \begin{align*}
&\sum_{\substack{r,s=1 \\ |t_r-t_s|\leqslant \Delta}}^{R}\gamma(t_r)\gamma(t_s)\left|\sum_{N\leqslant n \leqslant KN}n^{-1/2+i(t_r-t_s)}\right|^2 \\ & \ll (KNU)^{o(1)}\sum_{\substack{r,s=1 \\ |t_r-t_s|\leqslant \Delta}}^{R}\gamma(t_r)\gamma(t_s)\left|\sum_{UN\leqslant n \leqslant 3KNU/2}n^{-1/2+i(t_r-t_s)}\right|^2. \end{align*} \end{lemma} \begin{proof} Let $P$ be a prime number satisfying \begin{align} \label{eq:PUcond} 8U\leqslant P \leqslant 16U, \end{align} and for each multiplicative character $\chi$ mod $P$ let \begin{align*}
S(\chi)=\sum_{\substack{r,s=1 \\ |t_r-t_s|\leqslant \Delta}}^{R}\gamma(t_r)\gamma(t_s)\left|\sum_{N\leqslant n \leqslant KN}n^{-1/2+i(t_r-t_s)}\right|^2\left|\sum_{U\leqslant u \leqslant 3U/2}\chi(u)u^{-1/2+i(t_r-t_s)} \right|^2. \end{align*} For integer $m$ define \begin{align*} a(m,\chi)=\sum_{\substack{U\leqslant u \leqslant 3U/2 \\ N\leqslant n \leqslant KN \\ un=m}}\chi(u), \end{align*} so that \begin{align*} a(m,\chi)\ll (KNU)^{o(1)}, \end{align*} and hence by Lemma~\ref{lem:coefficients} \begin{align*}
S(\chi)\ll (KNU)^{o(1)}\sum_{\substack{r,s=1 \\ |t_r-t_s|\leqslant \Delta}}^{R}\gamma(t_r)\gamma(t_s)\left|\sum_{NU\leqslant n \leqslant 3KNU/2}n^{-1/2+i(t_r-t_s)}\right|^2, \end{align*} which implies \begin{align} \label{eq:SchiUB}
\sum_{\chi \mod{P}}S(\chi)\ll P(KNU)^{o(1)}\sum_{\substack{r,s=1 \\ |t_r-t_s|\leqslant \Delta}}^{R}\gamma(t_r)\gamma(t_s)\left|\sum_{NU\leqslant n \leqslant 3KNU/2}n^{-1/2+i(t_r-t_s)}\right|^2. \end{align} We have \begin{align*}
&\sum_{\chi \mod{P}}S(\chi)=\sum_{\substack{r,s=1 \\ |t_r-t_s|\leqslant \Delta}}^{R}\gamma(t_r)\gamma(t_s)\left|\sum_{N\leqslant n \leqslant KN}n^{-1/2+i(t_r-t_s)}\right|^2 \\ & \times\sum_{U\leqslant u_1,u_2\leqslant 3U/2}u_1^{-1/2+i(t_r-t_s)}u_2^{-1/2-i(t_r-t_s)}\sum_{\chi \mod{P}}\chi(u_1)\overline \chi(u_2), \end{align*} hence by~\eqref{eq:PUcond} and orthogonality of characters \begin{align*}
\sum_{\chi \mod{P}}S(\chi)\gg P\sum_{\substack{r,s=1 \\ |t_r-t_s|\leqslant \Delta}}^{R}\gamma(t_r)\gamma(t_s)\left|\sum_{N\leqslant n \leqslant KN}n^{-1/2+i(t_r-t_s)}\right|^2, \end{align*} and the result follows from~\eqref{eq:SchiUB}. \end{proof} \begin{cor} \label{cor:larger}
For any integer $M\geqslant 2N$ we have \begin{align*} S(N,\Delta,\gamma)\ll M^{o(1)}S(M,\Delta,\gamma). \end{align*} \end{cor} \begin{proof} Writing $K=2^{1/3}$ we have \begin{align} \label{eq:kkkKKkKkKkK} S(N,\Delta,\gamma)\ll \sum_{i=0}^{2}S_i, \end{align} where \begin{align*}
S_i=\sum_{\substack{r,s=1 \\ |t_r-t_s|\leqslant \Delta}}^{R}\gamma(t_r)\gamma(t_s)\left|\sum_{K^{i}N\leqslant n \leqslant K^{i+1}N}n^{-1/2+i(t_r-t_s)}\right|^2. \end{align*} Define \begin{align*} M_i=\frac{M}{K^{i}N}, \end{align*} and apply Lemma~\ref{lem:larger} with $U=M_i$ to get \begin{align*}
S_i\ll M^{o(1)}\sum_{\substack{r,s=1 \\ |t_r-t_s|\leqslant \Delta}}^{R}\gamma(t_r)\gamma(t_s)\left|\sum_{M\leqslant n \leqslant 3KM/2}n^{-1/2+i(t_r-t_s)}\right|^2. \end{align*} By Lemma~\ref{lem:coefficients} \begin{align*}
S_i\ll M^{o(1)}\sum_{\substack{r,s=1 \\ |t_r-t_s|\leqslant \Delta}}^{R}\gamma(t_r)\gamma(t_s)\left|\sum_{M\leqslant n \leqslant 2M}n^{-1/2+i(t_r-t_s)}\right|^2, \end{align*} and the result follows from~\eqref{eq:kkkKKkKkKkK}. \end{proof} The following is our variant of~\cite[Lemma~4]{HB}. \begin{lemma} \label{lem:square} For any $M\geqslant 8N^2$ we have \begin{align*} S(N,\Delta,\gamma)^2\ll M^{o(1)}I(\Delta,\gamma)S(M,\Delta,\gamma). \end{align*} \end{lemma} \begin{proof} By the Cauchy-Schwarz inequality and Lemma~\ref{lem:coefficients} \begin{align*}
S(N,\Delta,\gamma)^2&\leqslant I(\Delta,\gamma)\sum_{\substack{r,s=1 \\ |t_r-t_s|\leqslant \Delta}}^{R}\gamma(t_r)\gamma(t_s)\left|\sum_{N\leqslant n \leqslant 2N}n^{-1/2+i(t_r-t_s)}\right|^4 \\
&\ll N^{o(1)}I(\Delta,\gamma)\sum_{\substack{r,s=1 \\ |t_r-t_s|\leqslant \Delta}}^{R}\gamma(t_r)\gamma(t_s)\left|\sum_{N^2\leqslant n \leqslant 4N^2}n^{-1/2+i(t_r-t_s)}\right|^2. \end{align*} We have \begin{align*}
&\sum_{\substack{r,s=1 \\ |t_r-t_s|\leqslant \Delta}}^{R}\gamma(t_r)\gamma(t_s)\left|\sum_{N^2\leqslant n \leqslant 4N^2}n^{-1/2+i(t_r-t_s)}\right|^2\ll \\ & \sum_{\substack{r,s=1 \\ |t_r-t_s|\leqslant \Delta}}^{R}\gamma(t_r)\gamma(t_s)\left|\sum_{N^2\leqslant n \leqslant 2N^2}n^{-1/2+i(t_r-t_s)}\right|^2 \\ &+\sum_{\substack{r,s=1 \\ |t_r-t_s|\leqslant \Delta}}^{R}\gamma(t_r)\gamma(t_s)\left|\sum_{2N^2\leqslant n \leqslant 4N^2}n^{-1/2+i(t_r-t_s)}\right|^2, \end{align*} and the result follows from the above and Corollary~\ref{cor:larger}. \end{proof} The following is a variant of Hilbert's inequality. \begin{lemma} \label{lem:hilbert} For any well spaced sequence $t_1,\dots,t_R$ and positive real numbers $\gamma(t_r)$ we have \begin{align*}
\sum_{\substack{r\neq s}}\frac{\gamma(t_r)\gamma(t_s)}{(t_r-t_s)^2}\ll \|\gamma\|_2^2. \end{align*} \end{lemma} \begin{proof} Let \begin{align*} S=\sum_{\substack{r\neq s}}\frac{\gamma(t_r)\gamma(t_s)}{(t_r-t_s)^2}. \end{align*} By the Cauchy-Schwarz inequality \begin{align*} S^2&=\left(\sum_{\substack{r\neq s}}\frac{\gamma(t_r)}{(t_r-t_s)}\frac{\gamma(t_s)}{(t_r-t_s)}\right)^2\leqslant \left(\sum_{\substack{r\neq s}}\frac{\gamma(t_r)^2}{(t_r-t_s)^2} \right)^2. \end{align*} We have \begin{align*} \sum_{\substack{r\neq s}}\frac{\gamma(t_r)^2}{(t_r-t_s)^2}= \sum_{r}\gamma(t_r)^2\sum_{s\neq r}\frac{1}{(t_r-t_s)^2}, \end{align*} and the result follows since the assumption that $t_r$ is well spaced implies for any fixed $r$ \begin{align*} \sum_{s\neq r}\frac{1}{(t_r-t_s)^2}\ll 1. \end{align*} \end{proof} \begin{lemma} \label{lem:mvSmall} Let $N\geqslant 2$ be an integer, $\Delta \gg1$ a real number, $1\leqslant t_1,\dots,t_{R}\leqslant T$ a well spaced sequence and $\gamma(t)$ a sequence of positive real numbers. We have \begin{align*}
\sum_{\substack{r,s=1 \\ |t_r-t_s|\leqslant \Delta }}^{R}\gamma(t_r)\gamma(t_s)\left|\sum_{N\leqslant n \leqslant 2N} n^{-1/2+i(t_r-t_s)} \right|^2\ll N\|\gamma\|_2^{2}+\frac{\Delta^{1+o(1)}}{N}I(\Delta,\gamma). \end{align*} \end{lemma} \begin{proof} Let \begin{align*}
S=\sum_{\substack{r,s=1 \\ |t_r-t_s|\leqslant \Delta }}^{R}\gamma(t_r)\gamma(t_s)\left|\sum_{N\leqslant n \leqslant 2N} n^{-1/2+i(t_r-t_s)} \right|^2. \end{align*} By Lemma~\ref{lem:coefficients}, we have \begin{align} \label{eq:SS12small}
S&\ll \frac{1}{N}\sum_{\substack{r,s=1 \\ |t_r-t_s|\leqslant \Delta }}^{R}\gamma(t_r)\gamma(t_s)\left|\sum_{N\leqslant n \leqslant 2N} n^{i(t_r-t_s)} \right|^2 \\ &\ll \frac{1}{N}\left( S_1+S_2\right), \nonumber \end{align} where \begin{align*}
S_1=\sum_{\substack{r,s=1 \\ |t_r-t_s|< 1 }}^{R}\gamma(t_r)\gamma(t_s)\left|\sum_{N\leqslant n \leqslant 2N}n^{i(t_r-t_s)} \right|^2, \end{align*} and \begin{align*}
S_2=\sum_{\substack{r,s=1 \\ 1\leqslant |t_r-t_s|\leqslant \Delta }}^{R}\gamma(t_r)\gamma(t_s)\left|\sum_{N\leqslant n \leqslant 2N}n^{i(t_r-t_s)} \right|^2. \end{align*} We estimate the summation in $S_1$ trivially. Using the assumption that the points $t_r$ are well spaced, we have \begin{align} \label{eq:S1b1b1b1}
S_1\ll N\sum_{\substack{r,s=1 \\ |t_r-t_s|< 1 }}^{R}\gamma(t_r)\gamma(t_s)\ll N\|\gamma\|_2^2. \end{align} For $S_2$ we use the estimate \begin{align*} \sum_{N\leqslant n \leqslant 2N}n^{it}\ll \frac{N}{t}+t^{1/2}(\log{t}), \end{align*} valid for any $N,t\geqslant 1$, see for example~\cite[Equation~9.21]{IwKo}. This gives \begin{align*}
S_2\ll N^2\sum_{\substack{r,s=1 \\ 1\leqslant |t_r-t_s|\leqslant \Delta }}^{R}\frac{\gamma(t_r)\gamma(t_s)}{(t_r-t_s)^2}+\Delta^{1+o(1)}\sum_{\substack{r,s=1 \\ 1\leqslant |t_r-t_s|\leqslant \Delta }}^{R}\gamma(t_r)\gamma(t_s), \end{align*} which by Lemma~\ref{lem:hilbert} implies
$$S_2\ll N^2\|\gamma\|_2^2+\Delta^{1+o(1)}I(\Delta,\gamma).$$ Combining the above with~\eqref{eq:SS12small} and~\eqref{eq:S1b1b1b1} \begin{align*}
S\ll N\|\gamma\|_2^2+\frac{\Delta^{1+o(1)}}{N}I(\Delta,\gamma), \end{align*} which completes the proof. \end{proof} The following forms the basis of the van der Corput method of exponential sums; for a proof see~\cite[Theorem~8.16]{IwKo}. \begin{lemma} \label{lem:vdc} For any real valued function $f$ defined on an interval $[a,b]$ with derivatives satisfying \begin{align*}
\frac{T}{N^2} \ll f''(z) \ll \frac{T}{N^2}, \quad |f^{(3)}(z)|\ll \frac{T}{N^3}, \quad |f^{(4)}(z)|\ll \frac{T}{N^4},\quad z\in[a,b], \end{align*} we have \begin{align*} \sum_{a<n<b}e(f(n))=\sum_{\alpha<m<\beta}f''(x_m)^{-1/2}e\left(f(x_m)-mx_m+\frac{1}{8} \right)+E, \end{align*} where $\alpha=f'(a), \beta=f'(b)$, $x_m$ is the unique solution to $f'(x)=m$ for $x\in [a,b]$ and \begin{align*}
E&\ll \frac{N}{T^{1/2}}+\log(|f'(b)-f'(a)|+2). \end{align*} \end{lemma} The following is a consequence of Lemma~\ref{lem:vdc} and partial summation. \begin{lemma} \label{lem:vdc1} Let $N$ be an integer and $t\geqslant 1$ a real number. We have \begin{align*}
\left|\sum_{N\leqslant n \leqslant 2N}n^{it}\right|\ll \frac{N}{t^{1/2}}\max_{M\leqslant t/N}\left|\sum_{t/2N\leqslant n \leqslant M}n^{it} \right|+\frac{N}{t^{1/2}}+\log{(Nt)}. \end{align*} \end{lemma}
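The deduction of Lemma~\ref{lem:vdc1} from Lemma~\ref{lem:vdc} may be sketched as follows, up to absolute constants and a routine partial summation step. Writing $n^{it}=e(f(n))$ with $f(z)=\frac{t}{2\pi}\log{z}$, we have $f'(z)=\frac{t}{2\pi z}$ and $|f''(z)|\asymp t/N^{2}$ on $[N,2N]$, so the stationary points are $x_m=\frac{t}{2\pi m}$ with $m\asymp t/N$, and \begin{align*} f''(x_m)^{-1/2}\asymp \frac{N}{t^{1/2}}, \quad e\left(f(x_m)-mx_m\right)=C(t)m^{-it}, \end{align*} for some $C(t)$ with $|C(t)|=1$. Hence Lemma~\ref{lem:vdc} transforms the sum over $N\leqslant n \leqslant 2N$ into $\frac{N}{t^{1/2}}$ times a sum of $m^{-it}$ over $m\asymp t/N$, and partial summation removes the smooth weight $f''(x_m)^{-1/2}$ at the cost of the maximum over $M\leqslant t/N$.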
\begin{lemma} \label{lem:main1} Let $T$ be a real number, $1\leqslant t_1,\dots,t_R\leqslant T$ a well spaced sequence and $\gamma(t)$ a sequence of positive real numbers. For integer $N$ and a real number $\Delta$ we define \begin{align*}
S^*(N,\Delta,\gamma)=\sum_{\substack{r,s=1 \\ \Delta\leqslant |t_r-t_s|\leqslant 2\Delta}}^{R}\gamma(t_r)\gamma(t_s)\left|\sum_{N\leqslant n \leqslant 2N}n^{-1/2+i(t_r-t_s)}\right|^2. \end{align*} If $\Delta \gg N$ then \begin{align*}
S^{*}(N,\Delta,\gamma)&\ll N^{o(1)}\sum_{\substack{r,s=1 \\ |t_r-t_s|\leqslant 2\Delta }}^{R}\gamma(t_r)\gamma(t_s)\left|\sum_{\Delta/2N\leqslant n \leqslant 2\Delta/N}n^{-1/2+i(t_r-t_s)} \right|^2 \\ & \quad \quad +N^{o(1)}I(\Delta,\gamma). \end{align*} \end{lemma} \begin{proof} Let \begin{align*}
S_0^*(N,\Delta,\gamma)=\sum_{\substack{r,s=1 \\ \Delta\leqslant |t_r-t_s|\leqslant 2\Delta}}^{R}\gamma(t_r)\gamma(t_s)\left|\sum_{N\leqslant n \leqslant 2N}n^{i(t_r-t_s)}\right|^2, \end{align*} so that by Lemma~\ref{lem:coefficients} \begin{align} \label{eq:SS0starttttt} S^*(N,\Delta,\gamma)\ll \frac{S_0^*(N,\Delta,\gamma)}{N}\ll S^*(N,\Delta,\gamma). \end{align}
For each pair $r,s$ satisfying $$\Delta \leqslant |t_r-t_s|\leqslant 2\Delta,$$ we define $\Delta_{r,s}=|t_r-t_s|$. By Lemma~\ref{lem:vdc1} and the assumption $\Delta \gg N$ \begin{align*}
\left|\sum_{N\leqslant n \leqslant 2N}n^{i(t_r-t_s)}\right|\ll \frac{N}{\Delta^{1/2}}\max_{M\leqslant \Delta/N}\left|\sum_{\Delta_{r,s}/2N\leqslant n \leqslant M}n^{i(t_r-t_s)} \right|+N^{1/2+o(1)}, \end{align*} which after summing over $r,s$ gives \begin{align*} S_0^*(N,\Delta,\gamma)
&\ll \frac{N^2}{\Delta}\sum_{\substack{r,s=1 \\ \Delta\leqslant |t_r-t_s|< 2\Delta}}^{R}\gamma(t_r)\gamma(t_s)\max_{M\leqslant \Delta/N}\left|\sum_{\Delta_{r,s}/2N\leqslant n \leqslant M}n^{i(t_r-t_s)} \right|^2 \\ & \quad \quad \quad \quad +I(\Delta,\gamma)N^{1+o(1)}. \end{align*} We have \begin{align*}
&\sum_{\substack{r,s=1 \\ \Delta\leqslant |t_r-t_s|< 2\Delta}}^{R}\gamma(t_r)\gamma(t_s)\max_{M\leqslant \Delta/N}\left|\sum_{\Delta_{r,s}/2N\leqslant n \leqslant M}n^{i(t_r-t_s)} \right|^2\leqslant \\ & \quad \quad \quad \sum_{\substack{r,s=1 \\ |t_r-t_s|\leqslant 2 \Delta}}^{R}\gamma(t_r)\gamma(t_s)\max_{M\leqslant \Delta/N}\left|\sum_{\Delta_{r,s}/2N\leqslant n \leqslant M}n^{i(t_r-t_s)} \right|^2, \end{align*} and hence by Lemma~\ref{lem:smoothsums} \begin{align*} S_0^{*}(N,\Delta,\gamma)&\ll
\frac{N^{2+o(1)}}{\Delta}\sum_{\substack{r,s=1 \\ |t_r-t_s|\leqslant 2\Delta }}^{R}\gamma(t_r)\gamma(t_s)\left|\sum_{\Delta/2N\leqslant n \leqslant 2\Delta/N}n^{i(t_r-t_s)} \right|^2 \\ & \quad \quad \quad+I(\Delta,\gamma)N^{1+o(1)}. \end{align*} Using~\eqref{eq:SS0starttttt} and a second application of Lemma~\ref{lem:coefficients} we get \begin{align*}
S^{*}(N,\Delta,\gamma)&\ll N^{o(1)}\sum_{\substack{r,s=1 \\ |t_r-t_s|\leqslant 2\Delta }}^{R}\gamma(t_r)\gamma(t_s)\left|\sum_{\Delta/2N\leqslant n \leqslant 2\Delta/N}n^{-1/2+i(t_r-t_s)} \right|^2 \\ & \quad \quad \quad+I(\Delta,\gamma)N^{o(1)}, \end{align*} and the result follows after splitting summation over $n$ into dyadic ranges and applying Corollary~\ref{cor:larger}. \end{proof}
\begin{cor} \label{cor:reflection} Let $T$ be a real number, $1\leqslant t_1,\dots,t_R\leqslant T$ a well spaced sequence and $\gamma(t)$ a sequence of positive real numbers. We have \begin{align*}
S(N,\Delta,\gamma)\ll N^{o(1)}S\left(\frac{4\Delta}{N},\Delta,\gamma\right)+N^{o(1)}I(\Delta,\gamma)+\|\gamma\|_2^2N^{1+o(1)}. \end{align*} \end{cor} \begin{proof} We may suppose $\Delta \gg N$ since otherwise the result follows from Lemma~\ref{lem:mvSmall}. With notation as in Lemma~\ref{lem:main1}, we have \begin{align*}
\nonumber S(N,\Delta,\gamma)&\leqslant \sum_{\substack{r,s=1 \\ |t_r-t_s|\leqslant 10N}}^{R}\gamma(t_r)\gamma(t_s)\left|\sum_{N\leqslant n \leqslant 2N}n^{-1/2+i(t_r-t_s)}\right|^2 \\ & \quad \quad \quad +\sum_{\substack{r,s=1 \\ |t_r-t_s|> 10N}}^{R}\gamma(t_r)\gamma(t_s)\left|\sum_{N\leqslant n \leqslant 2N}n^{-1/2+i(t_r-t_s)}\right|^2, \end{align*} and hence \begin{align*}
S(N,\Delta,\gamma)\ll\sum_{\substack{r,s=1 \\ |t_r-t_s|\leqslant 10N}}^{R}\gamma(t_r)\gamma(t_s)\left|\sum_{N\leqslant n \leqslant 2N}n^{-1/2+i(t_r-t_s)}\right|^2+(\log{\Delta})S^{*}(N,\Delta_1,\gamma), \end{align*} for some \begin{align*} 10N\leqslant \Delta_1\leqslant 2\Delta. \end{align*} By Lemma~\ref{lem:mvSmall} \begin{align} \label{eq:cormain2}
\nonumber \sum_{\substack{r,s=1 \\ |t_r-t_s|\leqslant 10N}}^{R}\gamma(t_r)\gamma(t_s)\left|\sum_{N\leqslant n \leqslant 2N}n^{-1/2+i(t_r-t_s)}\right|^2&\ll \left(\|\gamma\|_2^2N+I(10N,\gamma)\right)N^{o(1)} \\
&\ll \left(\|\gamma\|_2^2N+I(\Delta,\gamma)\right)N^{o(1)}, \end{align} and by Corollary~\ref{cor:larger} and Lemma~\ref{lem:main1} \begin{align*} S^{*}(N,\Delta_1,\gamma)& \ll N^{o(1)}S\left(\frac{2\Delta_1}{N},\Delta_1,\gamma \right)+I(\Delta_1,\gamma) \\ &\ll N^{o(1)}S\left(\frac{4\Delta}{N},\Delta,\gamma \right)+I(2\Delta,\gamma), \end{align*} from which the desired result follows after combining with Lemma~\ref{lem:coefficients} and~\eqref{eq:cormain2}. \end{proof}
\section{Proof of Theorem~\ref{thm:largeadditive}} First consider when \begin{align} \label{eq:laNub1} N\geqslant \frac{\Delta^{2/3}}{100}. \end{align}
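We note the exponent $2/3$ in~\eqref{eq:laNub1} is natural here: an application of Corollary~\ref{cor:reflection} followed by Lemma~\ref{lem:square} replaces a polynomial of length $N$ by one of length $100\Delta^2/N^2$, and $N\asymp \Delta^{2/3}$ is the fixed point of the map $N\mapsto \Delta^{2}/N^{2}$.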
By Corollary~\ref{cor:reflection} \begin{align*}
S(N,\Delta,\gamma)\ll N^{o(1)}S\left(\frac{4\Delta}{N},\Delta,\gamma \right)+(I(\Delta,\gamma)+N\|\gamma\|_2^2)N^{o(1)}, \end{align*} and by Lemma~\ref{lem:square} \begin{align} \label{eq:SNr}
S(N,\Delta,\gamma)\ll N^{o(1)}I(\Delta,\gamma)^{1/2}S\left(\frac{100\Delta^2}{N^2},\Delta,\gamma \right)^{1/2}+(I(\Delta,\gamma)+N\|\gamma\|_2^2)N^{o(1)}. \end{align} By~\eqref{eq:laNub1} \begin{align} \frac{100\Delta^2}{N^2}\leqslant \frac{N}{2}, \end{align} and hence by Corollary~\ref{cor:larger} \begin{align} \label{eq:SNcase1}
S(N,\Delta,\gamma)\ll N^{o(1)}I(\Delta,\gamma)^{1/2}S\left(N,\Delta,\gamma \right)^{1/2}+(I(\Delta,\gamma)+N\|\gamma\|_2^2)N^{o(1)}, \end{align} which implies \begin{align} \label{eq:SNcase11}
S(N,\Delta,\gamma)\ll (I(\Delta,\gamma)+N\|\gamma\|_2^2)N^{o(1)}. \end{align} If~\eqref{eq:laNub1} does not hold then \begin{align*} \frac{\Delta^2}{N^2}\gg \Delta^{2/3}, \end{align*} and hence by Corollary~\ref{cor:larger} and~\eqref{eq:SNcase11} \begin{align*}
S\left(\frac{100\Delta^2}{N^2},\Delta,\gamma \right)\ll \left(I(\Delta,\gamma)+\frac{\Delta^2}{N^2}\|\gamma\|_2^2\right)N^{o(1)}. \end{align*} Substituting into~\eqref{eq:SNcase1} gives \begin{align} \label{eq:SNprelimF}
S(N,\Delta,\gamma)\ll \left(I(\Delta,\gamma)+\frac{\Delta}{N}I(\Delta,\gamma)^{1/2}\|\gamma\|_2+N\|\gamma\|_2^2\right)N^{o(1)}. \end{align} Let \begin{align} \label{eq:Udef154}
U=\max \left\{2,\left(\frac{\Delta I(\Delta,\gamma)^{1/2}}{N^2\|\gamma\|_2}\right)^{1/2} \right\}. \end{align} By Corollary~\ref{cor:larger} and~\eqref{eq:SNprelimF} \begin{align} \label{eq:SNprelimF1} S(N,\Delta,\gamma) & \ll N^{o(1)}S(UN,\Delta,\gamma) \nonumber \\
&\ll (I(\Delta,\gamma)+\frac{\Delta}{NU}I(\Delta,\gamma)^{1/2}\|\gamma\|_2+NU\|\gamma\|_2^2)N^{o(1)}. \end{align} Using either~\eqref{eq:SNprelimF} or~\eqref{eq:SNprelimF1}, depending on which term in~\eqref{eq:Udef154} attains the maximum, gives the bound \begin{align*}
S(N,\Delta,\gamma)\ll (I(\Delta,\gamma)+N\|\gamma\|_2^2+\Delta^{1/2}I(\Delta,\gamma)^{1/4}\|\gamma\|_2^{3/2})N^{o(1)}, \end{align*} which completes the proof. \section{Proof of Theorem~\ref{thm:largeadditive1}} Let
$$S=\sum_{t_1,\dots,t_{2k}\in {\mathcal A}}\left|\sum_{N\leqslant n\leqslant 2N}a_nn^{-1/2+i(t_1+\dots+t_k-t_{k+1}-\dots-t_{2k})} \right|^2.$$
For each integer $\ell$ with $|\ell|\ll T$ we define \begin{align*}
\gamma(\ell)=|\{ (t_1,\dots,t_k)\in {\mathcal A}^k \ : \ \ell<t_1+\dots +t_k\leqslant \ell+1\}|, \end{align*} and note that \begin{align} \label{eq:gammaell1}
|{\mathcal A}|^{2k}\ll I(T,\gamma)\leqslant \|\gamma\|^2_1\ll |{\mathcal A}|^{2k}, \end{align} and \begin{align} \label{eq:gammaell2}
T_{2k}({\mathcal A})\ll \|\gamma\|_2^2\ll T_{2k}({\mathcal A}). \end{align} We have \begin{align*}
&S\leqslant \sum_{\substack{|\ell_1|,|\ell_2|\leqslant T \\ |\ell_1-\ell_2|\leqslant 2\Delta}}\gamma(\ell_1)\gamma(\ell_2)\max_{|\theta|\leqslant 2}\left|\sum_{N\leqslant n\leqslant 2N}a_nn^{-1/2+i(\ell_1-\ell_2)+i\theta} \right|^2, \end{align*} which combined with Lemma~\ref{lem:removemax} gives \begin{align*}
S& \ll N^{o(1)}\int_{|\tau|\leqslant 2\log{N}}\sum_{\substack{|\ell_1|,|\ell_2|\leqslant T \\ |\ell_1-\ell_2|\leqslant 2\Delta}}\gamma(\ell_1)\gamma(\ell_2)\left|\sum_{N\leqslant n\leqslant 2N}a_nn^{-1/2+i(\ell_1-\ell_2)+i\tau} \right|^2d\tau. \end{align*} Taking a maximum over $\tau$ this implies that \begin{align} \label{eq:Sle1}
S\ll N^{o(1)}\sum_{\substack{|\ell_1|,|\ell_2|\leqslant T \\ |\ell_1-\ell_2|\leqslant 2\Delta}}\gamma(\ell_1)\gamma(\ell_2)\left|\sum_{N\leqslant n\leqslant 2N}a_nn^{i\tau_0}n^{-1/2+i(\ell_1-\ell_2)} \right|^2, \end{align} for some $|\tau_0|\leqslant 2\log{N}$. If $N\geqslant T^{2/3}$ then by Theorem~\ref{thm:largeadditive} \begin{align*}
S\ll (|{\mathcal A}|^{2k}+NT_{2k}({\mathcal A})+T^{1/2}|{\mathcal A}|^{k/2}T_{2k}({\mathcal A})^{3/4})N^{o(1)}. \end{align*} If $N\leqslant T^{2/3}$ then we have \begin{align*}
S\ll (|{\mathcal A}|^{2k}+NT_{2k}({\mathcal A})+T^{1/2}|{\mathcal A}|^{k/2}T_{2k}({\mathcal A})^{3/4})N^{o(1)}, \end{align*} and the result follows noting that the conditions~\eqref{eq:Ndeltacond} imply \begin{align*}
T^{1/2}|{\mathcal A}|^{k/2}T_{2k}({\mathcal A})^{3/4}\ll |{\mathcal A}|^{2k}. \end{align*}
\section{Large values of Dirichlet polynomials} In this section we state some reductions from large values of Dirichlet polynomials to various mean values, which are slight modifications of well-known results. We first recall~\cite[Lemma~1]{Jut}. \begin{lemma} \label{lem:jut} Suppose $T$ and $N$ satisfy $N\leqslant T$. Let $h=(\log{T})^2$ and define $$b(n)=e^{-(n/2N)^{h}}-e^{-(n/N)^h}.$$ Then for any $t$ and $M$ satisfying $$\frac{t}{N}\ll M\leqslant T^2, \quad h^2\leqslant t \leqslant T,$$ we have \begin{align*}
\sum_{n=1}^{\infty}b(n)n^{-it}\ll N^{1/2}\int_{|\tau|\leqslant h^2}\left|\sum_{n=1}^{M}n^{-1/2+i(t+\tau)}\right|d\tau+1. \end{align*} \end{lemma} The following is a consequence of Mellin inversion, see for example~\cite[Equation~4.17]{Bou}. \begin{lemma} \label{lem:jut1} Let $N,T\gg 1$ and define \begin{align*} c_n=e^{-n/2N}-e^{-n/N}, \quad h=(\log{T})^2. \end{align*} For any $t$ satisfying $$h\leqslant t \leqslant T,$$ we have \begin{align*}
\sum_{n=1}^{\infty}c_n n^{-\sigma-it}\ll N^{o(1)}\left(\int_{-h}^{h}\left|\zeta\left(\sigma+i(t+\tau)\right) \right|d\tau+1\right). \end{align*} \end{lemma} \begin{lemma} \label{lem:mainvlarge1}
Let $N,T\geqslant 1$ and $a_n$ a sequence of complex numbers satisfying $|a_n|\leqslant 1.$ Let ${\mathcal A}\subseteq [0,T]$ be a well spaced set satisfying \begin{align*}
\left|\sum_{N\leqslant n\leqslant 2N}a_n n^{it}\right|\geqslant V, \quad t\in {\mathcal A}. \end{align*} Let $0<\delta<1$ and suppose $N$ and $V$ satisfy \begin{align} \label{eq:NTVcond} V^{4}\geqslant N^{3+o(1)}. \end{align} We have \begin{align*}
& I(\delta T,{\mathcal A}) \ll \frac{N^{2+o(1)}}{V^2}|{\mathcal A}|+\frac{N^{3/2+o(1)}}{V^2}\sum_{\substack{t_1,t_2\in {\mathcal A} \\ |t_1-t_2|\leqslant \delta T}}\left|\sum_{n\leqslant \delta T/N}n^{-1/2+i(t_1-t_2)}\right|. \end{align*} \end{lemma} \begin{proof} For $0\leqslant k \ll \delta^{-1}$ define $${\mathcal A}_k={\mathcal A}\cap [k \delta T,(k+1)\delta T].$$ We have \begin{align*}
|{\mathcal A}_k|V\leqslant \sum_{t\in {\mathcal A}_k}\left|\sum_{N\leqslant n \leqslant 2N}a_n n^{it}\right|\leqslant N^{o(1)}\sum_{N\leqslant n \leqslant 2N}\left|\sum_{t\in {\mathcal A}_k}\theta(t)n^{it}\right|, \end{align*}
for some sequence of complex numbers $\theta$ satisfying $|\theta(t)|=1$. By the Cauchy-Schwarz inequality \begin{align} \label{eq:Ak1} \nonumber
|{\mathcal A}_k|^2V^2&\ll N^{1+o(1)}\sum_{n=1}^{\infty}b(n)\left|\sum_{t\in {\mathcal A}_k}\theta(t)n^{it}\right|^2 \\
&\ll N^{1+o(1)}\sum_{t_1,t_2\in {\mathcal A}_k}\left|\sum_{n=1}^{\infty}b(n)n^{i(t_1-t_2)}\right| \nonumber \\
&\ll N^{2+o(1)}I((\log{T})^2,{\mathcal A}_k)+N^{1+o(1)}\sum_{\substack{t_1,t_2\in {\mathcal A}_k \\ |t_1-t_2|\geqslant (\log{T})^2}}\left|\sum_{n=1}^{\infty}b(n)n^{i(t_1-t_2)}\right|, \end{align} where $b(n)$ is as in Lemma~\ref{lem:jut}. The assumption that ${\mathcal A}_k$ is well spaced implies \begin{align*}
I((\log{T})^2,{\mathcal A}_k)\ll N^{o(1)}|{\mathcal A}_k|, \end{align*}
and for $t_1,t_2\in {\mathcal A}_k$ satisfying $|t_1-t_2|\geqslant (\log{T})^2$ we have $|t_1-t_2|\leqslant \delta T$, hence by Lemma~\ref{lem:jut} \begin{align*}
\left|\sum_{n=1}^{\infty}b(n)n^{i(t_1-t_2)}\right|\ll N^{1/2}\int_{|\tau|\leqslant (\log{T})^2}\left|\sum_{n\leqslant \delta T/N}n^{-1/2+i(t_1-t_2+\tau)}\right|d\tau+1. \end{align*} Substituting the above into~\eqref{eq:Ak1} and taking a maximum over $\tau$, we get \begin{align*}
|{\mathcal A}_k|^2V^2&\ll N^{3/2+o(1)}\sum_{\substack{t_1,t_2\in {\mathcal A}_k}}\left|\sum_{n\leqslant \delta T/N}a_nn^{-1/2+i(t_1-t_2)}\right| \\ & \quad \quad \quad +N^{2+o(1)}|{\mathcal A}_k|+N^{3/2+o(1)}|{\mathcal A}_k|^2, \end{align*}
for some sequence of complex numbers $a_n$ satisfying $|a_n|=1$. By~\eqref{eq:NTVcond}, the above simplifies to \begin{align*}
|{\mathcal A}_k|^2V^2&\ll N^{3/2+o(1)}\sum_{\substack{t_1,t_2\in {\mathcal A}_k}}\left|\sum_{n\leqslant \delta T/N}a_nn^{-1/2+i(t_1-t_2)}\right|+N^{2+o(1)}|{\mathcal A}_k|. \end{align*}
Summing over $|k|\ll \delta^{-1}$ and using Lemma~\ref{lem:ell2counting} to estimate
$$\sum_{k\ll \delta^{-1}}|{\mathcal A}_k|^2\gg I(\delta T,{\mathcal A}),$$ we get \begin{align*}
I(\delta T,{\mathcal A})V^2 &\ll N^{2+o(1)}|{\mathcal A}|+N^{3/2+o(1)}\sum_{k\ll \delta^{-1}}\sum_{\substack{t_1,t_2\in {\mathcal A}_k}}\left|\sum_{n\leqslant \delta T/N}a_nn^{-1/2+i(t_1-t_2)}\right| \\
&\ll N^{2+o(1)}|{\mathcal A}|+N^{3/2+o(1)}\sum_{\substack{t_1,t_2\in {\mathcal A} \\ |t_1-t_2|\leqslant \delta T}}\left|\sum_{n\leqslant \delta T/N}a_nn^{-1/2+i(t_1-t_2)}\right|, \end{align*}
after noting that if $t_1,t_2\in {\mathcal A}_k$ then $|t_1-t_2|\leqslant \delta T$. \end{proof}
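We remark that the gain in Lemma~\ref{lem:mainvlarge1} comes from replacing the original Dirichlet polynomial of length $N$ by one of the shorter length $\delta T/N$, to which the mean value estimates of the preceding sections may be applied.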
\section{Zero density estimates} We next collect some preliminaries from the method of zero detection polynomials; we refer the reader to~\cite[Chapter~11]{Ivic} or~\cite[Chapter~12]{Mont} for details. \begin{lemma} \label{lem:zerodensity} Let $X,Y$ be parameters satisfying $2\leqslant X \leqslant Y \leqslant T^A$ for some absolute constant $A$ and define \begin{align*} M_X(s)=\sum_{n\leqslant X}\frac{\mu(n)}{n^s}, \end{align*} and \begin{align*}
a_n=\sum_{\substack{d|n \\ d\leqslant X}}\mu(d). \end{align*} There exist two well spaced sets ${\mathcal A}_1,{\mathcal A}_2 \subset {\mathbb C}$ such that
$$N(\sigma,T)\ll T^{o(1)}(|{\mathcal A}_1|+|{\mathcal A}_2|).$$ If $\rho=\beta+i\gamma \in {\mathcal A}_1$ then $$\beta \geqslant \sigma, \quad \gamma\leqslant T,$$ and \begin{align} \label{eq:zeroclass1} \sum_{X\leqslant n \leqslant Y^2}a_n n^{-\rho}e^{-n/Y}\gg 1. \end{align} If $\rho=\beta+i\gamma \in {\mathcal A}_2$ then $$\beta \geqslant \sigma, \quad \gamma\leqslant T,$$ and \begin{align} \label{eq:zeroclass2} \int_{-C\log{T}}^{C\log{T}}\zeta\left(\frac{1}{2}+i(\gamma+\tau)\right)M_X\left(\frac{1}{2}+i(\gamma+\tau)\right)Y^{1/2-\beta+i\tau}& \Gamma\left(\frac{1}{2}-\beta +i\tau\right)d\tau \\ & \quad \quad \quad \gg 1, \nonumber \end{align} where $C$ is some absolute constant. \end{lemma} Using some Fourier analysis one may remove the dependence of the coefficients in~\eqref{eq:zeroclass1} on the real part of the zeros $\rho$. \begin{lemma} \label{lem:zerodensity1} Let $\sigma \geqslant 1/2+o(1)$. With notation as in Lemma~\ref{lem:zerodensity}, let $X,Y$ be parameters satisfying $2\leqslant X \leqslant Y \leqslant T^A$. There exists some $N$ satisfying \begin{align} \label{lem:density1N} X\leqslant N \leqslant 2Y, \end{align}
a sequence of complex numbers $b_n$ satisfying $|b_n|\leqslant 1$ and two well spaced sets ${\mathcal A}_1,{\mathcal A}_2 \subset [0,T]$ satisfying
$$N(\sigma,T)\ll T^{o(1)}(|{\mathcal A}_1|+|{\mathcal A}_2|).$$ If $t\in {\mathcal A}_1$ we have \begin{align} \label{eq:zeroclass1b} N^{o(1)}\left|\sum_{N\leqslant n \leqslant 2N}b_n n^{it}\right|\gg N^{\sigma}, \quad t\in {\mathcal A}_1, \end{align} and if $t\in {\mathcal A}_2$ then \begin{align} \label{eq:zeroclass2b} N^{o(1)}\left|\zeta\left(\frac{1}{2}+it \right)M_X\left(\frac{1}{2}+it \right)\right| \gg Y^{\sigma -1/2}, \quad t\in {\mathcal A}_2. \end{align} \end{lemma}
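Note the two sets in Lemma~\ref{lem:zerodensity1} reflect the two ways a zero may be detected: for $t\in {\mathcal A}_1$ through large values of a Dirichlet polynomial, to which the large value estimates above apply, and for $t\in {\mathcal A}_2$ through large values of $\zeta M_X$ near the critical line, which are handled via moment estimates for $\zeta$.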
We refer the reader to either~\cite[Section~1]{Bou} or~\cite[Chapter~11]{Ivic} for details of the following reduction from Lemma~\ref{lem:zerodensity1} which makes use of Heath-Brown's twelfth power moment estimate~\cite{HB0}. \begin{lemma} \label{lem:zerodensity2} Let $Y\leqslant T^{A}$ be some parameter. There exists some $N$ satisfying \begin{align} \label{eq:zerodensity1X} Y^{4/3}<N<Y^{2+o(1)}, \end{align}
and some sequence of complex numbers $a_n$ satisfying $|a_n|\leqslant 1$ such that for some well spaced set ${\mathcal A}\subseteq [0,T]$ with \begin{align*}
N^{o(1)}\left|\sum_{N\leqslant n \leqslant 2N}a_n n^{it}\right|\geqslant N^{\sigma}, \quad t\in {\mathcal A}, \end{align*} we have \begin{align*}
N(\sigma,T)\ll T^{o(1)}\left(|{\mathcal A}|+T^2Y^{6(1-2\sigma)}\right). \end{align*} \end{lemma}
\section{Proof of Theorem~\ref{thm:main1}} By Lemma~\ref{lem:mainvlarge1} we have \begin{align} \label{eq:main1case22}
I(\delta T,{\mathcal A})\ll \frac{N^{2+o(1)}}{V^2}|{\mathcal A}|+ \frac{N^{3/2+o(1)}}{V^2}W, \end{align} where \begin{align*}
W=\sum_{\substack{t_1,t_2\in {\mathcal A} \\ |t_1-t_2|\leqslant \delta T}}\left|\sum_{n\leqslant \delta T/N}n^{-1/2+i(t_1-t_2)}\right|. \end{align*} For integer $\ell$, define \begin{align*}
r(\ell)=|\{ (t_1,t_2)\in {\mathcal A}\times {\mathcal A} \ : \ \ell< t_1-t_2\leqslant \ell+1\}|, \end{align*} so that \begin{align*}
W\leqslant \sum_{|\ell|\leqslant \delta T}r(\ell)\max_{0\leqslant \theta \leqslant 1}\left|\sum_{n\leqslant \delta T/N} n^{-1/2+i(\ell+\theta)}\right|. \end{align*} For integers $i,j\geqslant 0$ define the set \begin{align*}
D_{i,j}=\left\{ |\ell|\leqslant \delta T \ : \ 2^i\leqslant r(\ell)< 2^{i+1}, \ 2^j\leqslant \max_{0\leqslant \theta\leqslant 1}\left|\sum_{n\leqslant \delta T/N} n^{-1/2+i(\ell+\theta)}\right|< 2^{j+1} \right\}, \end{align*} so that \begin{align} \label{eq:WW0111} W\ll W_0+I(\delta T,{\mathcal A}), \end{align} where the factor $I(\delta T,{\mathcal A})$ comes from values of $\ell$ satisfying \begin{align*}
\max_{0\leqslant \theta\leqslant 1}\left|\sum_{n\leqslant \delta T/N} n^{-1/2+i(\ell+\theta)}\right|\leqslant 1, \end{align*} and $W_0$ is defined by \begin{align} \label{eq:1W0def}
W_0=\sum_{i,j\ll \log{N}}\sum_{\ell \in D_{i,j}}r(\ell)\max_{0\leqslant \theta \leqslant 1}\left|\sum_{n\leqslant \delta T/N} n^{-1/2+i(\ell+\theta)}\right|. \end{align} Note that for each $0\leqslant i,j\ll \log{N}$ we have \begin{align*}
2^{i+j}|D_{i,j}|\ll\sum_{\ell \in D_{i,j}}r(\ell)\max_{0\leqslant \theta \leqslant 1}\left|\sum_{n\leqslant \delta T/N} n^{-1/2+i(\ell+\theta)}\right|\ll 2^{i+j}|D_{i,j}|, \end{align*} and hence by the pigeonhole principle applied to the set $$\{(i,j) \ : 0\leqslant i,j\ll \log{N}\},$$ there exists some pair $i,j$ satisfying \begin{align*}
2^{i+j}|D_{i,j}|\ll W_0\ll N^{o(1)}2^{i+j}|D_{i,j}|. \end{align*} Write \begin{align*}
D=D_{i,j}, \quad \Delta=2^{i}, \quad H=2^j, \end{align*} so that \begin{align} \label{eq:W0DH}
\Delta H |D|\ll W_0\ll N^{o(1)}\Delta H |D|. \end{align} We consider two cases depending on \begin{align} \label{eq:thmmain1case1}
|D|\geqslant N \end{align} or \begin{align} \label{eq:thmmain1case2}
|D|<N. \end{align} Consider first~\eqref{eq:thmmain1case1}. From the definition of $H,D$, we have \begin{align*}
H^{2k}|D|&\ll \sum_{|\ell| \leqslant \delta T}\max_{0\leqslant \theta \leqslant 1} \left|\sum_{n\leqslant \delta T/N}n^{-1/2+i(\ell+\theta)}\right|^{2k}, \end{align*} and hence by~\eqref{eq:main1delta} and Lemma~\ref{lem:classicalmoments} \begin{align} \label{eq:H2kD}
H^{2k}|D|\ll (\delta T)^{1+o(1)}. \end{align} By~\eqref{eq:thmmain1case1} this implies \begin{align*} H\ll \left(\frac{\delta T}{N}\right)^{1/2k}N^{o(1)}. \end{align*} From~\eqref{eq:W0DH} and the bound \begin{align} \label{eq:DeltaDthm1}
\Delta |D|\ll I(\delta T,{\mathcal A}), \end{align} we get \begin{align*} W_0\ll \left(\frac{\delta T}{N}\right)^{1/2k}I(\delta T,{\mathcal A})N^{o(1)}. \end{align*} By~\eqref{eq:main1case22}, ~\eqref{eq:WW0111} and~\eqref{eq:1W0def} \begin{align*}
I(\delta T,{\mathcal A})\ll \frac{N^{2+o(1)}}{V^2}|{\mathcal A}|+\frac{N^{3/2+o(1)}}{V^2}\left(\frac{\delta T}{N}\right)^{1/2k}I(\delta T,{\mathcal A}), \end{align*} and hence from~\eqref{eq:main1delta} \begin{align} \label{eq:main1case1final} I(\delta T,{\mathcal A})\ll \frac{N^{2+o(1)}}{V^2}|{\mathcal A}|. \end{align} Suppose next~\eqref{eq:thmmain1case2} and consider \begin{align*}
S=\int_{|\tau|\leqslant 2\log{N}}\sum_{\ell \in D,\, t\in {\mathcal A}}\left|\sum_{N\leqslant n \leqslant 2N}a_n n^{i(\ell+t+\tau)}\right|^2d\tau. \end{align*} Define the set ${\mathcal B}$ by
$${\mathcal B}=\{ (\ell,t)\in D\times {\mathcal A} \ : \exists \ t'\in {\mathcal A} \ \text{such that} \ |t'-t-\ell|\leqslant 2\},$$ so that \begin{align} \label{eq:SubB}
S\geqslant \sum_{\substack{(\ell,t)\in {\mathcal B} }}\int_{|\tau|\leqslant 2\log{N}}\left|\sum_{N\leqslant n \leqslant 2N}a_n n^{i(\ell+t+\tau)}\right|^2d\tau. \end{align}
If $(\ell,t)\in {\mathcal B}$ then there exists some $t'\in {\mathcal A}$ and some $|\theta|\leqslant 2$ such that $$t'+\theta=\ell+t,$$ and hence \begin{align*}
\int_{|\tau|\leqslant 2\log{N}}\left|\sum_{N\leqslant n \leqslant 2N}a_n n^{i(\ell+t+\tau)}\right|^2d\tau=\int_{|\tau|\leqslant 2\log{N}}\left|\sum_{N\leqslant n \leqslant 2N}a_n n^{i(t'+\theta+\tau)}\right|^2d\tau. \end{align*}
Since $|\theta|\leqslant 2$ we have $$(\theta+[-2\log{N},2\log{N}])\cap [-\log{N},\log{N}]=[-\log{N},\log{N}],$$ and hence \begin{align*}
\int_{|\tau|\leqslant 2\log{N}}\left|\sum_{N\leqslant n \leqslant 2N}a_n n^{i(\ell+t+\tau)}\right|^2d\tau\geqslant \int_{|\tau|\leqslant \log{N}}\left|\sum_{N\leqslant n \leqslant 2N}a_n n^{i(t'+\tau)}\right|^2d\tau. \end{align*} Applying Lemma~\ref{lem:removemax} gives \begin{align*}
\int_{|\tau|\leqslant 2\log{N}}\left|\sum_{N\leqslant n \leqslant 2N}a_n n^{i(\ell+t+\tau)}\right|^2d\tau\gg \frac{V^2}{\log{N}}, \end{align*} and hence by~\eqref{eq:SubB} \begin{align*}
N^{o(1)}S\gg V^2|{\mathcal B}|. \end{align*} Recalling the definition of $D,\Delta$, we have \begin{align*}
\Delta |D|\ll \sum_{\ell \in D}r(\ell)\ll |\{ (\ell,t,t')\in D \times {\mathcal A} \times {\mathcal A} \ : \ 0<t'-t-\ell \leqslant 1\}|\ll |{\mathcal B}|, \end{align*} where the last bound holds since, by well spacing, each pair $(\ell,t)$ admits $O(1)$ values of $t'$. This implies \begin{align} \label{eq:SdeltaDLB}
\Delta |D|\ll \frac{N^{o(1)}}{V^2}S. \end{align}
Taking a maximum over $\tau$ in $S$, there exists some sequence of complex numbers $b(n)$ satisfying $|b(n)|\leqslant 1$ such that \begin{align*}
S&\ll N^{o(1)}\sum_{\ell\in D, t\in {\mathcal A}}\left|\sum_{N\leqslant n \leqslant 2N}b(n)n^{i(\ell+t)}\right|^2 \\
&\ll N^{o(1)}\sum_{N\leqslant n_1,n_2\leqslant 2N}\left|\sum_{\ell\in D}\left(\frac{n_1}{n_2}\right)^{i\ell} \right|\left|\sum_{t\in {\mathcal A}}\left(\frac{n_1}{n_2}\right)^{it}\right|. \end{align*} By the Cauchy-Schwarz inequality \begin{align} \label{eq:SS1S2} S^2\ll N^{2+o(1)}S_1S_2, \end{align} where \begin{align*}
S_1=\sum_{N\leqslant n_1,n_2\leqslant 2N}(n_1n_2)^{-1/2}\left|\sum_{\ell\in D}\left(\frac{n_1}{n_2}\right)^{i\ell} \right|^2, \end{align*} and \begin{align*}
S_2=\sum_{N\leqslant n_1,n_2\leqslant 2N}(n_1n_2)^{-1/2}\left|\sum_{t\in {\mathcal A}}\left(\frac{n_1}{n_2}\right)^{it} \right|^2. \end{align*} Interchanging summation, we have \begin{align*}
S_1\ll \sum_{\ell_1,\ell_2\in D}\left|\sum_{N\leqslant n \leqslant 2N}n^{-1/2+i(\ell_1-\ell_2)}\right|^2, \end{align*} and hence by Theorem~\ref{thm:heathbrown} and~\eqref{eq:thmmain1case2} \begin{align*}
S_1\ll N^{1+o(1)}|D|. \end{align*} By a similar argument \begin{align*}
S_2\ll N^{1+o(1)}|{\mathcal A}|. \end{align*} From the above,~\eqref{eq:SdeltaDLB} and~\eqref{eq:SS1S2} \begin{align*}
\Delta^2|D|\ll \frac{N^{4+o(1)}}{V^4}|{\mathcal A}|. \end{align*} Returning to~\eqref{eq:W0DH}, we have \begin{align*}
W_0\ll N^{o(1)}(\Delta^2 |D|)^{1/2k}(\Delta |D|)^{1-1/k}(|D|H^{2k})^{1/2k}. \end{align*} The above,~\eqref{eq:H2kD} and~\eqref{eq:DeltaDthm1} imply \begin{align*}
W_0\ll \left(\frac{N^{4+o(1)}}{V^4}|{\mathcal A}|\right)^{1/2k}I(\delta T,{\mathcal A})^{1-1/k}(\delta T)^{1/2k}. \end{align*} By~\eqref{eq:main1case22} and~\eqref{eq:WW0111}, after absorbing the factor $I(\delta T,{\mathcal A})^{1-1/k}$ into the left hand side, \begin{align*}
I(\delta T,{\mathcal A})\ll \frac{N^{2+o(1)}}{V^2}|{\mathcal A}|+\frac{N^{3k/2+2+o(1)}}{V^{2k+2}}|{\mathcal A}|^{1/2}(\delta T)^{1/2}. \end{align*} From~\eqref{eq:main1case1final} the above bound holds provided either~\eqref{eq:thmmain1case1} or~\eqref{eq:thmmain1case2} holds. An application of Lemma~\ref{eq:ell2ell2c} implies \begin{align*}
|{\mathcal A}|^2\ll \frac{1}{\delta}\frac{N^{2+o(1)}}{V^2}+\frac{1}{\delta^{1/3}}\frac{N^{k+4/3+o(1)}}{V^{4k/3+4/3}}T^{1/3}, \end{align*} which completes the proof.
\section{Proof of Theorem~\ref{thm:main4}}
By Lemma~\ref{lem:mainvlarge1} \begin{align*}
& I(\delta T,{\mathcal A}) \ll \frac{N^{2+o(1)}}{V^2}|{\mathcal A}|+\frac{N^{3/2+o(1)}}{V^2}\sum_{\substack{t_1,t_2\in {\mathcal A} \\ |t_1-t_2|\leqslant \delta T}}\left|\sum_{n\leqslant \delta T/N}n^{-1/2+i(t_1-t_2)}\right|, \end{align*} and hence by H\"{o}lder's inequality \begin{align} \label{eq:IDD33333}
I(\delta T,{\mathcal A})\ll \frac{N^{2+o(1)}}{V^2}|{\mathcal A}|+\frac{N^{6+o(1)}}{V^{8}}W, \end{align} where \begin{align*}
W=\sum_{\substack{t_1,t_2\in {\mathcal A} \\ |t_1-t_2|\leqslant \delta T}}\left|\sum_{n\leqslant \delta T/N}n^{-1/2+i(t_1-t_2)}\right|^4. \end{align*} We have \begin{align*}
W=\sum_{\substack{t_1,t_2\in {\mathcal A} \\ |t_1-t_2|\leqslant \delta T}}\left|\sum_{n\leqslant (\delta T/N)^2}c_nn^{-1/2+i(t_1-t_2)}\right|^2, \end{align*} for some $c_n\ll N^{o(1)}$. By Lemma~\ref{lem:coefficients} and Corollary~\ref{cor:reflection} \begin{align*}
W&\ll I(\delta T,{\mathcal A})N^{o(1)}+\frac{\delta^2 T^2N^{o(1)}}{N^2}|{\mathcal A}|+\sum_{\substack{t_1,t_2\in {\mathcal A} \\ |t_1-t_2|\leqslant \delta T}}\left|\sum_{n\leqslant 4N^2/(\delta T)}n^{-1/2+i(t_1-t_2)}\right|^2, \end{align*} which by~\eqref{eq:IDD33333} implies that \begin{align} \label{eq:main4prelimstep}
I(\delta T,{\mathcal A})\ll \frac{N^{2+o(1)}}{V^2}|{\mathcal A}|+\frac{\delta^2 T^2 N^{4+o(1)}}{V^{8}}|{\mathcal A}|+\frac{N^{6+o(1)}}{V^8}W_0, \end{align} with \begin{align*}
W_0=\sum_{\substack{t_1,t_2\in {\mathcal A} \\ |t_1-t_2|\leqslant \delta T}}\left|\sum_{n\leqslant 4N^2/(\delta T)}n^{-1/2+i(t_1-t_2)}\right|^2. \end{align*} This implies either \begin{align*}
|{\mathcal A}|\ll \frac{1}{\delta}\frac{N^{2+o(1)}}{V^2}+\frac{\delta T^2 N^{4+o(1)}}{V^{8}}, \end{align*} or \begin{align} \label{eq:main4case222b} I(\delta T,{\mathcal A})\ll \frac{N^{6+o(1)}}{V^8}W_0. \end{align} We may suppose~\eqref{eq:main4case222b} since otherwise the result follows. Considering $W_0$, partitioning summation over $n$ into dyadic intervals and applying Corollary~\ref{cor:larger} gives \begin{align*}
W_0\ll N^{o(1)}\sum_{\substack{t_1,t_2\in {\mathcal A} \\ |t_1-t_2|\leqslant \delta T}}\left|\sum_{16 N^2/(\delta T)\leqslant n\leqslant 32N^2/(\delta T)}n^{-1/2+i(t_1-t_2)}\right|^2, \end{align*} and hence by Lemma~\ref{lem:coefficients} \begin{align} \label{eq:W0111222main4}
W_0\ll \left(\frac{N^{2+o(1)}}{\delta T}\right)^{1/4}\sum_{\substack{t_1,t_2\in {\mathcal A} \\ |t_1-t_2|\leqslant \delta T}}\left|\sum_{16 N^2/(\delta T)\leqslant n\leqslant 32N^2/(\delta T)}n^{-5/8+i(t_1-t_2)}\right|^2. \end{align}
Bounding the contribution from points $t_1,t_2$ satisfying $|t_1-t_2|\leqslant N^{o(1)}$ trivially and applying Lemma~\ref{lem:jut1} to the remaining sum, we get \begin{align} \label{eq:W01main4}
W_0\ll \frac{N^{2+o(1)}}{\delta T}|{\mathcal A}|+\left(\frac{N^{2+o(1)}}{\delta T}\right)^{1/4}I(\delta T,{\mathcal A})+\left(\frac{N^{2+o(1)}}{\delta T}\right)^{1/4}W_1, \end{align} where \begin{align*}
W_1=\sum_{\substack{t_1,t_2\in {\mathcal A} \\ |t_1-t_2|\leqslant \delta T}}\int_{-h^2}^{h^2}\left|\zeta\left(\frac{5}{8}+i(t_1-t_2+\tau)\right) \right|^2d\tau, \end{align*} \begin{align*} h=(\log{T})^2, \end{align*} and we have used a second application of Lemma~\ref{lem:coefficients} to the sum~\eqref{eq:W0111222main4} in order to smooth the coefficients. Performing a dyadic partition as in the proof of Theorem~\ref{thm:main1}, there exists $$\Delta,H\gg 1,$$ and a set $D\subseteq {\mathbb Z}$ defined by \begin{align*}
&D= \\ & \left\{ |\ell|\leqslant \delta T \ : \ \Delta \leqslant r(\ell)< 2\Delta, \ H\leqslant \int_{-2h^2}^{2h^2}\left|\zeta\left(\frac{5}{8}+i(\ell+\tau)\right) \right|^2d\tau< 2H \right\}, \end{align*} such that \begin{align} \label{eq:WW0}
W_1\ll I(\delta T,{\mathcal A})+N^{o(1)}\Delta |D|H^2, \end{align} where \begin{align*}
r(\ell)=|\{ (t_1,t_2)\in {\mathcal A}\times {\mathcal A} \ : \ 0\leqslant t_1-t_2-\ell <1 \}|. \end{align*} Note by Lemma~\ref{lem:8thmoment} \begin{align} \label{eq:D8}
|D|H^8\ll (\delta T)^{1+o(1)}. \end{align} Either \begin{align} \label{eq:main4case1}
|D|\leqslant N \quad \text{and} \quad |D|\leqslant \frac{N^4}{(\delta T)^2}, \end{align} or \begin{align} \label{eq:main4case2}
|D|> N \quad \text{or} \quad |D|> \frac{N^4}{(\delta T)^2}. \end{align} If~\eqref{eq:main4case1}, then arguing as in the proof of Lemma~\ref{lem:e2energy}, \begin{align*}
V^2\Delta |D|\ll N^{o(1)}\sum_{d\in D,t\in {\mathcal A}}\left|\sum_{N\leqslant n \leqslant 2N}c_n n^{i(t-d)}\right|^2, \end{align*}
for some sequence of complex numbers $c_n$ satisfying $|c_n|\leqslant 1$. Expanding the square, interchanging summation, applying the Cauchy-Schwarz inequality and rescaling we get \begin{align} \label{eq:main42bb}
V^4\Delta^2|D|^2\ll N^{2+o(1)}\sum_{t_1,t_2\in {\mathcal A}}\left|\sum_{N\leqslant n \leqslant 2N}n^{i(t_1-t_2)}\right|^2\sum_{d_1,d_2\in D}\left|\sum_{N\leqslant n \leqslant 2N}n^{i(d_1-d_2)}\right|^2. \end{align} By Theorem~\ref{thm:heathbrown} \begin{align*}
\sum_{t_1,t_2\in {\mathcal A}}\left|\sum_{N\leqslant n \leqslant 2N}n^{i(t_1-t_2)}\right|^2&\ll N^{o(1)}(|{\mathcal A}|^2+N|{\mathcal A}|+T^{1/2}|{\mathcal A}|^{5/4}) \\
&\ll N^{1+o(1)}|{\mathcal A}|, \end{align*} and \begin{align*}
\sum_{d_1,d_2\in D}\left|\sum_{N\leqslant n \leqslant 2N}n^{i(d_1-d_2)}\right|^2&\ll N^{o(1)}(|D|^2+N|D|+(\delta T)^{1/2}|D|^{5/4}) \\
&\ll N^{1+o(1)}|D|, \end{align*} where we have used~\eqref{eq:main4ass} and~\eqref{eq:main4case1} to simplify the above bounds. Combining with~\eqref{eq:main42bb} \begin{align*}
\Delta^2|D|\ll \frac{N^{4+o(1)}}{V^4}|{\mathcal A}|. \end{align*} From the above and~\eqref{eq:D8} \begin{align*}
\Delta |D|H^2&\ll (\Delta^2|D|)^{1/4}(\Delta |D|)^{1/2}(|D|H^8)^{1/4} \\
&\ll \frac{N^{1+o(1)}}{V}|{\mathcal A}|^{1/4}I(\delta T,{\mathcal A})^{1/2}(\delta T)^{1/4}. \end{align*} By~\eqref{eq:main4case222b},~\eqref{eq:W01main4} and~\eqref{eq:WW0} \begin{align*}
I(\delta T,{\mathcal A})& \ll \frac{N^{8+o(1)}}{(\delta T)V^8}|{\mathcal A}|+\frac{N^{13/2+o(1)}}{(\delta T)^{1/4}V^8}I(\delta T,{\mathcal A}) \\ & \quad \quad \quad \quad +\frac{N^{13/2+o(1)}}{(\delta T)^{1/4}V^8}\frac{N^{1+o(1)}}{V}|{\mathcal A}|^{1/4}I(\delta T,{\mathcal A})^{1/2}(\delta T)^{1/4} \\
&\ll \frac{N^{8+o(1)}}{(\delta T)V^8}|{\mathcal A}|+\frac{N^{15/2+o(1)}}{V^9}|{\mathcal A}|^{1/4}I(\delta T,{\mathcal A})^{1/2}, \end{align*} where we have used~\eqref{eq:main4deltaass} to drop the second term. The above estimate implies that either \begin{align*}
|{\mathcal A}|\ll \frac{N^{8+o(1)}}{\delta^2 TV^8}, \end{align*} or \begin{align*}
I(\delta T,{\mathcal A})\ll \frac{N^{15+o(1)}}{V^{18}}|{\mathcal A}|^{1/2}, \end{align*} and hence \begin{align*}
|{\mathcal A}|\ll \frac{N^{8+o(1)}}{\delta^2 TV^8}+\frac{N^{10+o(1)}}{\delta^{2/3}V^{12}}, \end{align*} which completes the proof of case~\eqref{eq:main4case1}. Suppose next~\eqref{eq:main4case2}. From~\eqref{eq:D8} we have either \begin{align*} H^2\ll \left(\frac{\delta T}{N}\right)^{1/4}N^{o(1)}, \end{align*} or \begin{align*} H^2 \ll \frac{(\delta T)^{3/4}}{N}N^{o(1)}, \end{align*} which implies \begin{align*} H^2\ll \left(\frac{\delta T}{N}\right)^{1/4}N^{o(1)}+\frac{(\delta T)^{3/4}}{N}N^{o(1)}. \end{align*} By~\eqref{eq:WW0} \begin{align} \label{eq:WW0b} W_1\ll N^{o(1)}\left(1+\left(\frac{\delta T}{N}\right)^{1/4}+\frac{(\delta T)^{3/4}}{N}\right)I(\delta T,{\mathcal A}), \end{align} and hence by~\eqref{eq:main4case222b} and~\eqref{eq:W01main4} \begin{align*}
I(\delta T,{\mathcal A}) & \ll \frac{N^{8+o(1)}}{\delta TV^8}|{\mathcal A}| \\& \quad \quad +N^{o(1)}\left(\frac{N^{13/2}}{V^8(\delta T)^{1/4}}+\frac{N^{25/4}}{V^8}+\frac{N^{11/2}(\delta T)^{1/2}}{V^8}\right)I(\delta T,{\mathcal A}). \end{align*} By~\eqref{eq:main4ass} and~\eqref{eq:main4deltaass} we may drop the last three terms to arrive at \begin{align*}
|{\mathcal A}|\ll \frac{N^{8+o(1)}}{\delta^2 TV^8}, \end{align*} and the result follows combining the estimates from cases~\eqref{eq:main4case1} and~\eqref{eq:main4case2}. \section{Proof of Theorem~\ref{thm:main12}} Let $0<\delta\leqslant 1$ be some parameter satisfying \begin{align} \label{eq:thm12deltadef} N^{o(1)}\delta \leqslant \frac{V^8}{TN^5}, \end{align} and apply Lemma~\ref{lem:mainvlarge1} and the Cauchy-Schwarz inequality to get \begin{align*}
& I(\delta T,{\mathcal A}) \ll \frac{N^{2+o(1)}}{V^2}|{\mathcal A}|+\frac{N^{3+o(1)}}{V^4}S, \end{align*} where \begin{align*}
S=\sum_{\substack{\substack{t_1,t_2\in {\mathcal A} \\ |t_1-t_2|\leqslant \delta T}}}\left|\sum_{n\leqslant \delta T/N}a_nn^{-1/2+i(t_1-t_2)}\right|^2. \end{align*} This implies that either \begin{align} \label{eq:main12case1}
I(\delta T,{\mathcal A}) \ll \frac{N^{2+o(1)}}{V^2}|{\mathcal A}|, \end{align} or \begin{align} \label{eq:main12case2} & I(\delta T,{\mathcal A}) \ll \frac{N^{3+o(1)}}{V^4}S. \end{align} If~\eqref{eq:main12case1} then by Corollary~\ref{eq:ell2ell2c} \begin{align} \label{eq:main12case1z}
|{\mathcal A}|\ll \frac{1}{\delta}\frac{N^{2+o(1)}}{V^2}. \end{align} Suppose next~\eqref{eq:main12case2}. For integer $\ell$ define \begin{align*}
r(\ell)=|\{ (t_1,t_2)\in {\mathcal A}\times {\mathcal A} \ : \ 0\leqslant t_1-t_2-\ell<1\}|, \end{align*} so that \begin{align*}
S\leqslant \sum_{\substack{\substack{ |\ell|\leqslant \delta T}}}r(\ell)\max_{0\leqslant \theta \leqslant 1}\left|\sum_{n\leqslant \delta T/N}a_nn^{-1/2+i(\ell+\theta)}\right|^2. \end{align*}
Using Lemma~\ref{lem:removemax} to remove the maximum over $\theta$, there exists a sequence of complex numbers $c_n$ satisfying $|c_n|\leqslant 1$ such that \begin{align*}
S\ll N^{o(1)}\sum_{\substack{\substack{|\ell| \leqslant \delta T}}}r(\ell)\left|\sum_{n\leqslant \delta T/N}c_nn^{-1/2+i\ell}\right|^2. \end{align*} Expanding the square and interchanging summation \begin{align*}
S\ll N^{o(1)}\sum_{n_1,n_2\leqslant \delta T/N}(n_1n_2)^{-1/2}\left|\sum_{|\ell| \leqslant \delta T}r(\ell)\left(\frac{n_1}{n_2}\right)^{i\ell} \right|, \end{align*} and hence by the Cauchy-Schwarz inequality \begin{align*}
S^2\ll N^{o(1)}\left(\frac{\delta T}{N}\right)\sum_{n_1,n_2\leqslant \delta T/N}(n_1n_2)^{-1/2}\left|\sum_{|\ell| \leqslant \delta T}r(\ell)\left(\frac{n_1}{n_2}\right)^{i\ell} \right|^2, \end{align*} which rearranges to \begin{align*}
S^2\ll N^{o(1)}\left(\frac{\delta T}{N}\right)\sum_{|\ell_1|,|\ell_2|\leqslant \delta T}r(\ell_1)r(\ell_2)\left|\sum_{n\leqslant \delta T/N}n^{-1/2+i(\ell_1-\ell_2)}\right|^2. \end{align*} Substituting into~\eqref{eq:main12case2} we get \begin{align*}
I(\delta T,{\mathcal A})^2\ll \frac{\delta TN^{5+o(1)}}{V^8}\sum_{|\ell_1|,|\ell_2|\leqslant \delta T}r(\ell_1)r(\ell_2)\left|\sum_{n\leqslant \delta T/N}n^{-1/2+i(\ell_1-\ell_2)}\right|^2. \end{align*} By Theorem~\ref{thm:largeadditive} \begin{align*}
&\sum_{|\ell_1|,|\ell_2|\leqslant \delta T}r(\ell_1)r(\ell_2)\left|\sum_{n\leqslant \delta T/N}n^{-1/2+i(\ell_1-\ell_2)} \right|^2
\\ & \ll N^{o(1)}\left(\|r\|^2_1+\frac{\delta T}{N}\|r\|^2_2+(\delta T)^{1/2}\|r\|^{1/2}_1\|r\|^{3/2}_2\right),
\end{align*} where \begin{align} \label{eq:rr121}
\|r\|_1=\sum_{|\ell| \leqslant \delta T}r(\ell), \quad \text{and} \quad \|r\|^2_2=\sum_{|\ell| \leqslant \delta T}r(\ell)^2, \end{align} so that \begin{align*}
\|r\|_1\ll I(\delta T,{\mathcal A}). \end{align*} By the above and~\eqref{eq:thm12deltadef} \begin{align*}
I(\delta T,{\mathcal A})^2\ll \frac{\delta TN^{5+o(1)}}{V^8}\left(\frac{\delta T}{N}\|r\|^2_2+(\delta T)^{1/2}\|r\|^{1/2}_1\|r\|^{3/2}_2 \right), \end{align*} which implies \begin{align*}
I(\delta T,{\mathcal A})^2\ll \frac{(\delta T)^2N^{10+o(1)}}{V^{16}}\|r\|^2_2. \end{align*} By~\eqref{eq:rr121} and Lemma~\ref{lem:e2energy} \begin{align*}
\|r\|_2^2\ll \frac{N^{3/2+o(1)}}{V^2}|{\mathcal A}|^{1/2}I(\delta T,{\mathcal A})+\frac{N^{4+o(1)}}{V^{4}}|{\mathcal A}|, \end{align*} so that \begin{align*}
I(\delta T,{\mathcal A})^2\ll \frac{(\delta T)^2N^{10+o(1)}}{V^{16}}\left(\frac{N^{3/2+o(1)}}{V^2}|{\mathcal A}|^{1/2}I(\delta T,{\mathcal A})+\frac{N^{4+o(1)}}{V^{4}}|{\mathcal A}| \right). \end{align*} The above simplifies to \begin{align*}
I(\delta T,{\mathcal A})\ll \left(\frac{(\delta T)^2N^{23/2}}{V^{18}}+\frac{(\delta T)N^7}{V^{10}}\right)N^{o(1)}|{\mathcal A}|^{1/2}, \end{align*} and hence by Corollary~\ref{eq:ell2ell2c} \begin{align*}
|{\mathcal A}|\ll \frac{\delta^{2/3} T^{4/3}N^{23/3+o(1)}}{V^{12}}+\frac{T^{2/3}N^{14/3+o(1)}}{V^{20/3}}. \end{align*} The result follows combining the above with~\eqref{eq:main12case1z}. \section{Proof of Theorem~\ref{thm:zerodensity2}} We apply Lemma~\ref{lem:zerodensity2} with \begin{align*} Y=T^{3/(8\sigma)-o(1)}, \end{align*} to get \begin{align} \label{eq:density1s1q}
N(\sigma,T)\ll T^{o(1)}\left(|{\mathcal A}|+T^2Y^{6(1-2\sigma)}\right)\ll T^{o(1)}\left(|{\mathcal A}|+T^{3(1-\sigma)/2\sigma}\right), \end{align} where the set ${\mathcal A}$ is a well spaced set ${\mathcal A}\subseteq [0,T]$ satisfying \begin{align*}
N^{o(1)}\left|\sum_{N\leqslant n \leqslant 2N}a_n n^{it}\right|\geqslant N^{\sigma} \quad t\in {\mathcal A}, \end{align*}
for some sequence of complex numbers $a_n$ with $|a_n|\leqslant 1$ and $N$ satisfying \begin{align} \label{eq:Ncondthm119} Y^{4/3}\leqslant N \leqslant Y^{2+o(1)}. \end{align} For $N$ satisfying~\eqref{eq:Ncondthm119} and $\sigma$ satisfying~\eqref{eq:sigmacond3} the conditions of Theorem~\ref{thm:main4} are satisfied with the choice \begin{align*} N^{o(1)}\delta=\min\left\{1,\frac{V^3}{TN}\right\}. \end{align*} This gives \begin{align} \label{eq:main4zerodensity}
|{\mathcal A}|&\ll N^{2(1-\sigma)+o(1)}+TN^{3-5\sigma+o(1)}+TN^{10-14\sigma+o(1)}+T^{2/3}N^{32/3-14\sigma+o(1)}
\nonumber \\ & \quad \quad \quad +\frac{N^{8-8\sigma+o(1)}}{T}+N^{10-12\sigma+o(1)}. \end{align} We also note by Lemma~\ref{lem:hux} \begin{align} \label{eq:jutilazerodensity}
|{\mathcal A}|\ll N^{2(1-\sigma)+o(1)}+TN^{4-6\sigma+o(1)}. \end{align} Let $$\rho=\min\left\{\frac{3(1-\sigma)}{2\sigma(10-12\sigma)},\frac{3}{16\sigma}+\frac{1}{8(1-\sigma)}\right\},$$ and apply~\eqref{eq:main4zerodensity} if $$Y^{4/3}\leqslant N \leqslant T^{\rho},$$ while if $$T^{\rho}\leqslant N \leqslant Y^2,$$ then apply~\eqref{eq:jutilazerodensity}. Provided $\sigma$ satisfies~\eqref{eq:sigmacond3} this gives the bound \begin{align*}
|{\mathcal A}|\ll T^{3(1-\sigma)/2\sigma+o(1)}, \end{align*} and the result follows from~\eqref{eq:density1s1q}. \section{Proof of Theorem~\ref{thm:zerodensity1}} Let $Y$ be some parameter to be determined later satisfying \begin{align} \label{eq:Ycond1} Y\geqslant T^{1/2}, \end{align} and apply Lemma~\ref{lem:zerodensity2} to get \begin{align} \label{eq:density1s1}
N(\sigma,T)\ll T^{o(1)}\left(|{\mathcal A}|+T^2Y^{6(1-2\sigma)}\right)\ll T^{o(1)}\left(|{\mathcal A}|+T^{5-6\sigma}\right) \end{align} where the set ${\mathcal A}$ is a well spaced set ${\mathcal A}\subseteq [0,T]$ satisfying \begin{align*}
N^{o(1)}\left|\sum_{N\leqslant n \leqslant 2N}a_n n^{it}\right|\geqslant N^{\sigma} \quad t\in {\mathcal A}, \end{align*}
for some sequence of complex numbers $a_n$ satisfying $|a_n|\leqslant 1$ and $N$ satisfying \begin{align*} Y^{4/3}\leqslant N \leqslant Y^{2+o(1)}. \end{align*} Note by~\eqref{eq:Ycond1} $$Y^{4/3}\geqslant T^{2/3}.$$ Applying Theorem~\ref{thm:main1} with $k=7$ and $$N^{o(1)}\delta=\min\left\{\frac{N^{7/6}}{T},1\right\},$$ gives \begin{align} \label{eq:zzthm1}
|{\mathcal A}| &\ll N^{2(1-\sigma)+o(1)}+T^{1/3}N^{25/3-32\sigma/3+o(1)} \\ &+TN^{5/6-2\sigma+o(1)}+T^{2/3}N^{143/18-32\sigma/3+o(1)}, \nonumber \end{align} where we have used the fact that the assumption~\eqref{eq:density1cond} implies \begin{align*} 28\sigma-20\geqslant \frac{7}{6}. \end{align*} We also note by Lemma~\ref{lem:hux} \begin{align} \label{eq:zzttc}
|{\mathcal A}|\ll N^{2(1-\sigma)+o(1)}+TN^{4-6\sigma+o(1)}. \end{align} Let \begin{align} \label{eq:Zcond111} Y^{4/3}\leqslant Z\leqslant Y^2, \end{align}
be some parameter and apply~\eqref{eq:zzthm1} if
$Y^{4/3}\leqslant N\leqslant Z$, and apply~\eqref{eq:zzttc} if $Z\leqslant N\leqslant Y^2$. This gives \begin{align*}
& |{\mathcal A}| \ll T^{1/3}Z^{25/3-32\sigma/3+o(1)}+TZ^{4-6\sigma+o(1)} \\ &+TY^{4(5/6-2\sigma)/3+o(1)}+T^{2/3}Y^{4(143/18-32\sigma/3)/3+o(1)}+Y^{4(1-\sigma)+o(1)}. \end{align*} Choosing $$Y=T^{9/(138\sigma-89)}, \quad Z=T^{2/(13-14\sigma)},$$ to balance appropriate terms gives \begin{align*}
|{\mathcal A}|& \ll T^{(21-26\sigma)/(13-14\sigma)+o(1)}+T^{36(1-\sigma)/(138\sigma-89)+o(1)}+T^{(114\sigma-79)/(138\sigma-89)+o(1)} \\ &\ll T^{36(1-\sigma)/(138\sigma-89)+o(1)}+T^{(114\sigma-79)/(138\sigma-89)+o(1)}, \end{align*} by~\eqref{eq:density1cond}. Combining with~\eqref{eq:density1s1} we get \begin{align*} N(\sigma,T)\ll T^{36(1-\sigma)/(138\sigma-89)+o(1)}+T^{(114\sigma-79)/(138\sigma-89)+o(1)}. \end{align*} It remains to note the conditions~\eqref{eq:Ycond1} and~\eqref{eq:Zcond111} are satisfied from~\eqref{eq:density1cond}.
\end{document} | arXiv |
Novel Door-opening Method for Six-legged Robots Based on Only Force Sensing
01.09.2017 | Original Article | Issue 5/2017 | Open Access
Chinese Journal of Mechanical Engineering, Issue 5/2017
Zhi-Jun Chen, Feng Gao, Yang Pan
Supported by National Natural Science Foundation of China (Grant Nos. U1613208, 51335007), National Basic Research Program of China (973 Program, Grant No. 2013CB035501), Science Fund for Creative Research Groups of the National Natural Science Foundation of China (Grant No. 51421092), and Science and Technology Commission of Shanghai-based "Innovation Action Plan" Project (Grant No. 16DZ1201001).
1 Introduction
Legged robots are believed to have better mobility in rough terrain than tracked and wheeled robots, because they can use isolated footholds to optimize support and traction [ 1 ]. So in disasters, such as earthquakes, nuclear and toxic explosions which are too dangerous for human, legged robots are expected to take the place of human to perform rescue tasks. In indoor rescue, door-opening is a fundamental and essential task, which has already been studied for more than two decades [ 2 ]. However, current researches on door-opening mostly focus on tracked [ 3 , 4 ] and wheeled [ 5 – 7 ] robots. In the field of legged robots, few examples of biped and quadruped robots can be found. An early example is the HRP2 [ 8 ] in 2009 which was allowed to hit a door open with its whole body. In recent years, under the influence of the Defense Advanced Research Projects Agency(DARPA) Robotics Challenge, more related examples of biped robots opening doors can be found, such as the HUBO [ 9 ], the ATLAS [ 10 ] and the COMAN [ 11 ]. In 2015, González-Fierro, et al. [ 12 ] proposed a method for humanoid robots to learn from demonstrations of human opening doors, and defined a multi-objective reward function as a measurement of the goal optimality. Boston Dynamics' MINISPOT [ 13 ] and Ghost Robotics' MINITAUR [ 14 ] can open doors, but there is no related paper about the details, and no related research on six-legged robots can be found. On the other hand, six-legged robots can also adapt to complicated scenarios well and are more stable when walking and operating. Therefore, it is essential and helpful to develop a new method for six-legged robots to realize the function of opening doors.
When opening doors, robots mainly encounter two issues. The first one is how to recognize and locate the door and the handle precisely and in real time in unknown environments. In order to recognize and locate the handle, vision systems such as laser scanners, cameras and infrared sensors are commonly used. A few related works realize the recognition of various door handles of unknown geometries. Moreno, et al. [ 15 ] investigated different handle types and applied a morphological filter adapted to the characteristic shape of different handles to realize the recognition. Klingbeil, et al. [ 16 ] used computer vision and supervised learning to identify 3D key locations on any handle, thus choosing a manipulation strategy. Ignakov, et al. [ 17 ] extracted the 3D point cloud of any unknown handle by using the optical flow calculated from images taken with a single CCD camera. Most other methods assume the geometry of the handle is already known and use vision systems only for localization. Adiwahono, et al. [ 18 ] used a Microsoft Kinect sensor and a 2D laser scanner to estimate the handle position, thus planning the trajectory to open the door. Petrovskaya, et al. [ 19 ] presented a unified, real-time algorithm that simultaneously modeled the position of the robot within the environment, as well as the door and the handle. Kobayashi, et al. [ 20 ] applied an IP camera and IR distance sensors to calculate the position of the handle, which could be cylindrical with a diameter of 48 mm to 56 mm, or a lever type. However, vision systems are frequently subject to calibration errors, occlusions and limited sight ranges, so force sensing is commonly applied in addition to confirm the contact position with the handle [ 21 – 23 ].
In fact, robots are fully capable of detecting their positional relationship with the door and the handle using only force sensing, by touching at different positions and in different directions, just as humans act in darkness. To simplify the system and supplement the relevant research, it is worthwhile to develop a new door-opening method based on only force sensing. If the robot is far away from the door in an unexplored room, vision systems [ 24 ], human-computer interaction or other methods may be applied to help the robot distinguish the door from the wall and navigate to the door, but these are not involved in measuring the positional relationship.
The second issue is how to release the inner force in the manipulator that occurs during turning the handle and pushing the door because of the positional error and the imprecise modeling of the environment. The inner force occurs because the motion of the manipulator cannot follow the position of the handle exactly due to the positional error. In order to meet the positional accuracy requirements, the manipulator must have at least three DOFs theoretically, and specific mechanisms or control strategies need to be applied. Farelo, et al. [ 25 ] designed a 9-DOF wheelchair mounted robotic arm system to open doors by keeping the end-effector stationary while moving the base through the door. Ahmad, et al. and Zhang, et al. developed a compact wrist which could switch between active mode and passive mode as task requirements differed [ 26 ], and applied the wrist to a modular re-configurable robot mounted to both a tracked mobile platform [ 27 ] and a wheeled one [ 28 ] to open doors. Winiarski, et al. [ 29 ] applied a direct impedance controller and a local stiffness controller to a 7-DOF manipulator to robustly open doors. Karayiannidis, et al. [ 30 ] proposed a dynamic force/velocity controller which adaptively estimated the door hinge's position in real time, thus properly regulating the forces and velocities in radial and tangential directions during opening doors. Guo, et al. [ 31 ] simulated a hybrid position/force controller for a manipulator mounted to a wheeled platform to open doors. The PR2 [ 32 , 33 ] could both push and pull room doors and cabinets open by applying vision systems, tactile sensors and an impedance controller. However, the positional error cannot be eliminated completely. The inner force still always occurs, as long as the manipulator is compelled to follow the handle exactly by a firm grasp. 
Considering that the firm grasp is not essential for all cases, this paper applies a 0-DOF tool which can effectively release the inner force by providing a loose grasp and allowing relative movement between the handle and the tool. By integrating with the 6-DOF body of the robot, the 0-DOF tool mounted to the body has enough DOFs and workspace to operate.
In this paper, a novel method for six-legged robots to open doors autonomously is proposed and implemented to the six-parallel-legged robot [ 34 , 35 ]. The method makes the following contributions:
It is a novel method developed for six-legged robots to open doors.
The robot autonomously identifies its positional relationship with the door and the handle in real time based on only force sensing.
The robot operates with a 0-DOF tool, making good use of the robot's DOFs and workspace. The loose grasp of the tool effectively releases the inner force.
Experiments are carried out to validate the accuracy and robustness of the method in unknown environments.
The rest of this paper is organized as follows: in Section 2 we introduce the system of the six-parallel-legged robot; in Section 3 we define the coordinate systems and build the kinematic model of the robot; in Section 4 we present the approach of opening a door and introduce the subtasks in detail; in Section 5 we provide the experimental results and discuss them; in Section 6 we conclude this paper.
2 System Overview
Parallel mechanisms have been researched intensively and applied widely [ 36 – 38 ]. But for robots with parallel legs, few related examples can be found [ 39 , 40 ]. The platform we study is a six-parallel-legged robot, as shown in Figure 1. The robot is a 6-DOF mobile platform with six legs arranged symmetrically along the sagittal plane of the body. Each leg of the robot is a 3-DOF parallel mechanism with three chains: one universal joint - prismatic joint (UP) chain, and two universal joint - prismatic joint - spherical joint (UPS) chains. The prismatic joint of each chain is the active input joint driven by a servo motor. A resolver is mounted to each motor to feed back the real position of the motor. At the head of the robot body, a 0-DOF tool with a 6D force sensor is mounted. The tool is composed of a horizontal rod and a vertical rod, which are parallel to the sagittal and vertical axes of the robot body respectively. The 6D force sensor is the ATI Mini58 IP68 F/T Sensor. On top of the body, a cabinet contains components of the onboard control system, including the battery, the onboard computer and the drivers.
Model of the six-parallel-legged robot
Users control the robot by sending commands via a remote terminal unit, which communicates with the onboard computer via Wi-Fi. The resolvers provide the real positions of all motors, and the 6D force sensor feeds back the contact forces with the environment. The onboard computer analyzes the position and force data, and accordingly plans the trajectories of the body and the feet. According to the planned trajectories, the computer calculates the parameters of all motors every millisecond by running a real-time Linux OS. After the calculation, the onboard computer sends the parameters to the drivers via EtherCAT. Finally, each driver generates a current proportional to the received parameter and provides the current to drive the relevant motor.
3 Coordinate Systems and Kinematic Model
3.1 Coordinate Systems Definition
In order to clearly express the positional relationships among the door, the robot and the ground, it is essential to establish five coordinate systems (Figures 2 and 3). The first one is the Robot Coordinate System (RCS), which is located at the center of the body and fixed to the body. \(Y_{\text{R}}\) and \(Z_{\text{R}}\) are parallel to the vertical and sagittal axes of the body respectively. The second one is the Ground Coordinate System (GCS), which coincides with the RCS wherever the user sets it and is fixed to the ground. So the RCS moves together with the body while the GCS remains fixed to the ground. Here the GCS is set to coincide with the RCS as the door-opening task starts. The third one is the Door Coordinate System (DCS), which is located at the intersection of the handle axis and the door plane. The DCS is fixed to the door, with \(Z_{\text{D}}\) normal to the door plane and \(Y_{\text{D}}\) parallel to the door hinge. The fourth one is the Leg Coordinate System (LCS), which is located at the \(U_{i1}\) joint of the UP chain and is fixed to the body. When \(U_{i1}\) is at its initial position, where every prismatic joint of leg \(i\) shrinks to its shortest, \(X_{i\text{L}}\) is along the prismatic joint, and \(Y_{i\text{L}}\) and \(Z_{i\text{L}}\) are along the first and second axes of \(U_{i1}\) respectively. The LCS has a fixed relationship with the RCS defined by the geometry of the robot, which can be denoted by \({}_{i{\text{L}}}^{\text{R}} {\mathbf{T}}\;(i = 1,2, \ldots ,6).\) The fifth one is the Ankle Coordinate System (ACS), which is located at each ankle with the same orientation as the LCS when \(U_{i1}\) is at its initial position. The ACS is fixed to the foot and moves as the leg moves.
Definition of the RCS, GCS and DCS
Definition of the LCS and ACS
3.2 Kinematic Model
Based on these coordinate systems, the kinematic model of the robot can be built in the GCS, which is indispensable for controlling the robot. The inverse kinematic model is essential for assigning the position value of each actuator in real time to generate the planned trajectories of the body and the feet, and the forward kinematic model is essential for calculating the real-time position of the robot.
As shown in Figure 3, let \(s_i\) denote \(O_{i{\text{A}}} S_{i1}\), let \(\theta_{i1}\) and \(\theta_{i2}\) denote the first and second angles of \(U_{i1}\), let \(2d_{{\text{U}}i}\) and \(2d_{{\text{S}}i}\) denote the lengths of \(U_{i2} U_{i3}\) and \(S_{i2} S_{i3}\), and let \(h_{{\text{U}}i}\) and \(h_{{\text{S}}i}\) denote the distances from \(O_{i{\text{L}}}\) to \(U_{i2} U_{i3}\) and from \(O_{i{\text{A}}}\) to \(S_{i2} S_{i3}\). Let \({\mathbf{L}}_i = (l_{i1}, l_{i2}, l_{i3})^{\text{T}}\) denote the lengths of \(U_{i1} S_{i1}\), \(U_{i2} S_{i2}\) and \(U_{i3} S_{i3}\), which form the input of leg \(i\). Let \({\mathbf{S}}_{i1} = (x_i, y_i, z_i)^{\text{T}}\) denote the coordinates of foot \(i\), which is the output of leg \(i\). The output of the body can be denoted by the pose matrix of the RCS in the GCS:
$${}_{{\text{R}}}^{{\text{G}}} {\mathbf{T}} = \left( {\begin{array}{*{20}c} {{}_{{\text{R}}}^{{\text{G}}} {\mathbf{R}}} & {{}^{{\text{G}}}{\mathbf{O}}_{{\text{R}}} } \\ {{\mathbf{O}}_{{1 \times 3}} } & {{\mathbf{I}}_{{1 \times 1}} } \\ \end{array} } \right),$$
where \({}_{\text{R}}^{\text{G}} {\mathbf{T}}\)—Pose matrix of the RCS in the GCS, \({}_{\text{R}}^{\text{G}} {\mathbf{R}}\)—Orientation matrix of the RCS in the GCS, \({}^{\text{G}}{\mathbf{O}}_{\text{R}}\)—Origin of the RCS expressed in the GCS.
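As an illustration of Eq. ( 1), the pose matrix can be assembled from its orientation and origin blocks and used to map a point from the RCS into the GCS. The numpy sketch below uses illustrative numbers, not the robot's actual geometry:

```python
import numpy as np

def pose_matrix(R, o):
    """Assemble the 4x4 homogeneous pose matrix of Eq. (1)
    from the 3x3 orientation matrix R and the origin o."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = o
    return T

def to_gcs(T, p_rcs):
    """Express a point given in the RCS in GCS coordinates."""
    return (T @ np.append(p_rcs, 1.0))[:3]

# Illustrative pose: body rotated 90 degrees about its vertical (Y) axis
# and raised by 0.5 along the GCS vertical axis.
yaw = np.pi / 2
R = np.array([[np.cos(yaw), 0.0, np.sin(yaw)],
              [0.0, 1.0, 0.0],
              [-np.sin(yaw), 0.0, np.cos(yaw)]])
T = pose_matrix(R, np.array([0.0, 0.5, 0.0]))
p = to_gcs(T, np.array([0.0, 0.0, 1.0]))   # a point on the body's Z_R axis
```

The inverse transform \({}_{\text{R}}^{\text{G}} {\mathbf{T}}^{-1}\), used in Eq. ( 3), maps points the other way, from the GCS back into the RCS.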
3.2.1 Inverse Kinematic Model
When given the output of the coordinates of all six feet and the pose matrix of the RCS, the input of all prismatic joints can be calculated in real time by
$$l_{{ij}} = \left\| {{}_{{i{\text{A}}}}^{{i{\text{L}}}} {\mathbf{R}}{}^{{i{\text{A}}}}{\mathbf{S}}_{{ij}} + {}^{{i{\text{L}}}}{\mathbf{O}}_{{i{\text{A}}}} - {}^{{i{\text{L}}}}{\mathbf{U}}_{{ij}} } \right\|_{2} ,$$
where \(i\)—Leg number, \(i = 1,2, \ldots ,6\); \(j\)—Chain number of leg \(i\), \(j = 1, 2, 3\),
$$_{{i{\text{A}}}}^{{i{\text{L}}}} {\mathbf{R}} = \left( {\begin{array}{*{20}c} {\cos \theta_{i 1} \cos \theta_{i 2} } & { - \cos \theta_{i 1} \sin \theta_{i 2} } & {\sin \theta_{i 1} } \\ {\sin \theta_{i 2} } & {\cos \theta_{i 2} } & 0 \\ { - \sin \theta_{i 1} \cos \theta_{i 2} } & {\sin \theta_{i 1} \sin \theta_{i 2} } & {\cos \theta_{i 1} } \\ \end{array} } \right),$$
$$\left( {\begin{array}{*{20}c} {{}^{{i{\text{A}}}}{\mathbf{S}}_{i 1} } & {{}^{{i{\text{A}}}}{\mathbf{S}}_{i 2} } & {{}^{{i{\text{A}}}}{\mathbf{S}}_{i 3} } \\ \end{array} } \right) = \left( {\begin{array}{*{20}c} {s_{i} } & 0 & 0 \\ 0 & {h_{{{\text{S}}i}} } & {h_{{{\text{S}}i}} } \\ 0 & {d_{{{\text{S}}i}} } & { - d_{{{\text{S}}i}} } \\ \end{array} } \right),$$
$${}^{{i{\text{L}}}}{\mathbf{O}}_{{i{\text{A}}}} = \left( {\sqrt {{}^{{i{\text{L}}}}x_{i}^{2} + {}^{{i{\text{L}}}}y_{i}^{2} + {}^{{i{\text{L}}}}z_{i}^{2} } - s_{i} } \right)\left( {\begin{array}{*{20}c} {\cos \theta_{i1} \cos \theta_{i2} } \\ {\sin \theta_{i2} } \\ { - \sin \theta_{i1} \cos \theta_{i2} } \\ \end{array} } \right),$$
$$\left( {\begin{array}{*{20}c} {{}^{{i{\text{L}}}}{\mathbf{U}}_{{i1}} } & {{}^{{i{\text{L}}}}{\mathbf{U}}_{{i2}} } & {{}^{{i{\text{L}}}}{\mathbf{U}}_{{i3}} } \\ \end{array} } \right) = \left( {\begin{array}{*{20}c} 0 & 0 & 0 \\ 0 & {h_{{{\text{U}}i}} } & {h_{{{\text{U}}i}} } \\ 0 & {d_{{{\text{U}}i}} } & { - d_{{{\text{U}}i}} } \\ \end{array} } \right).$$
The \(\theta _{i1}\) and \(\theta _{i2}\) here can be calculated from the output \({}_{\text{R}}^{\text{G}} {\mathbf{T}}\) and \({}^{\text{G}}{\mathbf{S}}_{i1} = ({}^{\text{G}}x_i, {}^{\text{G}}y_i, {}^{\text{G}}z_i)^{\text{T}}\) by
$$\left\{ {\begin{array}{*{20}l} {\theta _{{i{\text{1}}}} = \arctan \left( {\frac{{ - {}^{{i{\text{L}}}}z_{i} }}{{{}^{{i{\text{L}}}}x_{i} }}} \right),} \hfill \\ {\theta _{{i{\text{2}}}} = \arcsin (\frac{{{}^{{i{\text{L}}}}y_{i} }}{{\sqrt {{}^{{i{\text{L}}}}x_{i}^{2} + {}^{{i{\text{L}}}}y_{i}^{2} + {}^{{i{\text{L}}}}z_{i}^{2} } }}),} \hfill \\ {\left( {\begin{array}{*{20}c} {{}^{{i{\text{L}}}}{\mathbf{S}}_{{i{\text{1}}}} } \\ 1 \\ \end{array} } \right) = {}_{{i{\text{L}}}}^{{\text{R}}} {\mathbf{T}}^{{ - 1}} {}_{{\text{R}}}^{{\text{G}}} {\mathbf{T}}^{{ - 1}} \left( {\begin{array}{*{20}c} {{}^{{\text{G}}}{\mathbf{S}}_{{i{\text{1}}}} } \\ 1 \\ \end{array} } \right).} \hfill \\ \end{array} } \right.$$
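The inverse kinematic computation of Eqs. ( 2) and ( 3) can be sketched numerically for a single leg. The geometry constants below are placeholder values rather than the real robot's dimensions; only the structure of the computation follows the equations above:

```python
import numpy as np

# Placeholder leg geometry (illustrative values, not the robot's real dimensions).
s, d_U, d_S, h_U, h_S = 0.2, 0.05, 0.04, 0.03, 0.03

# Joint anchor points: U_ij expressed in the LCS and S_ij in the ACS, Eq. (2).
U = np.array([[0.0, 0.0, 0.0],
              [0.0, h_U, d_U],
              [0.0, h_U, -d_U]]).T
S = np.array([[s, 0.0, 0.0],
              [0.0, h_S, d_S],
              [0.0, h_S, -d_S]]).T

def leg_ik(foot_lcs):
    """Chain lengths (l_i1, l_i2, l_i3) for a foot position given in the LCS."""
    x, y, z = foot_lcs
    r = np.linalg.norm(foot_lcs)
    th1 = np.arctan2(-z, x)            # first angle of U_i1, Eq. (3)
    th2 = np.arcsin(y / r)             # second angle of U_i1, Eq. (3)
    R = np.array([                     # {}_{iA}^{iL} R from Eq. (2)
        [np.cos(th1) * np.cos(th2), -np.cos(th1) * np.sin(th2), np.sin(th1)],
        [np.sin(th2),                np.cos(th2),               0.0],
        [-np.sin(th1) * np.cos(th2), np.sin(th1) * np.sin(th2), np.cos(th1)],
    ])
    o_ankle = (r - s) * np.array([np.cos(th1) * np.cos(th2),   # ^{iL}O_{iA}
                                  np.sin(th2),
                                  -np.sin(th1) * np.cos(th2)])
    return np.array([np.linalg.norm(R @ S[:, j] + o_ankle - U[:, j])
                     for j in range(3)])

lengths = leg_ik(np.array([0.5, -0.1, 0.1]))
```

As a sanity check on an implementation, chain 1 satisfies \(l_{i1} = \|{}^{i{\text{L}}}{\mathbf{S}}_{i1}\|_2\), since the UP chain points straight from \(U_{i1}\) at the foot.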
3.2.2 Forward Kinematic Model
When given the input of all prismatic joints, either the output coordinates of all six feet or the pose matrix of the RCS must be known so that the other one can be derived. If the pose matrix of the RCS is known, the output coordinates of all six feet can be expressed by
$$\left\{ {\begin{array}{*{20}l} {\left( {\begin{array}{*{20}c} {{}^{{\text{G}}}{\mathbf{S}}_{{i{\text{1}}}} } \\ 1 \\ \end{array} } \right) = {}_{{\text{R}}}^{{\text{G}}} {\mathbf{T}}{}_{{i{\text{L}}}}^{{\text{R}}} {\mathbf{T}}\left( {\begin{array}{*{20}c} {{}^{{i{\text{L}}}}{\mathbf{S}}_{{i{\text{1}}}} } \\ 1 \\ \end{array} } \right),} \hfill \\ {{}^{{i{\text{L}}}}{\mathbf{S}}_{{i{\text{1}}}} = {}_{{i{\text{A}}}}^{{i{\text{L}}}} {\mathbf{R}}{}^{{i{\text{A}}}}{\mathbf{S}}_{{i{\text{1}}}} + {}^{{i{\text{L}}}}{\mathbf{O}}_{{i{\text{A}}}} ,} \hfill \\ \end{array} } \right.$$
where \(i\)—Leg number, \(i = 1,2, \ldots ,6.\)
If the output coordinates of all six feet are known, the pose matrix of the RCS can be calculated from the coordinates of three stance feet in the 3-3 gait. Here we derive the equation using legs 1, 3, 5, and it is similar for legs 2, 4, 6:
$$\left\{ {\begin{array}{*{20}l} {\begin{array}{*{20}l} {{}^{{\text{R}}}{\mathbf{S}}_{{i{\text{1}}}} = {}_{{i{\text{L}}}}^{{\text{R}}} {\mathbf{T}}\left( {{}_{{i{\text{A}}}}^{{i{\text{L}}}} {\mathbf{R}}{}^{{i{\text{A}}}}{\mathbf{S}}_{{i{\text{1}}}} + {}^{{i{\text{L}}}}{\mathbf{O}}_{{i{\text{A}}}} } \right),} \hfill \\ {\left\| {{}^{{\text{G}}}{\mathbf{S}}_{{i{\text{1}}}} - {}^{{\text{G}}}{\mathbf{O}}_{{\text{R}}} } \right\|_{2} = \left\| {{}^{{\text{R}}}{\mathbf{S}}_{{i{\text{1}}}} } \right\|_{2} ,} \hfill \\ \end{array} } \hfill \\ {{}_{{\text{R}}}^{{\text{G}}} {\mathbf{R}} = \left( {\begin{array}{*{20}c} {{}^{{\text{G}}}{\mathbf{S}}_{{{\text{11}}}} - {}^{{\text{G}}}{\mathbf{O}}_{{\text{R}}} } \\ {{}^{{\text{G}}}{\mathbf{S}}_{{31}} - {}^{{\text{G}}}{\mathbf{O}}_{{\text{R}}} } \\ {{}^{{\text{G}}}{\mathbf{S}}_{{51}} - {}^{{\text{G}}}{\mathbf{O}}_{{\text{R}}} } \\ \end{array} } \right)^{{\text{T}}} \left( {\begin{array}{*{20}c} {{}^{{\text{R}}}{\mathbf{S}}_{{11}} } \\ {{}^{{\text{R}}}{\mathbf{S}}_{{31}} } \\ {{}^{{\text{R}}}{\mathbf{S}}_{{51}} } \\ \end{array} } \right)^{{ - {\text{T}}}} ,} \hfill \\ \end{array} } \right.$$
where \(i\)—Leg number, \(i = 1, 3, 5\).
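The orientation-recovery part of Eq. ( 5) can be checked numerically by stacking the three stance-foot vectors row-wise in both frames, assuming the origin \({}^{\text{G}}{\mathbf{O}}_{\text{R}}\) has already been obtained from the distance constraints. The data below are synthetic, purely to verify that the formula recovers a known orientation:

```python
import numpy as np

def body_orientation(feet_gcs, feet_rcs, o_r):
    """Recover the orientation matrix from three stance feet known both in
    the GCS and the RCS, following Eq. (5): R = M_G^T (M_R^T)^{-1}."""
    M_G = feet_gcs - o_r      # rows are (^G S_i1 - ^G O_R)^T, i = 1, 3, 5
    M_R = feet_rcs            # rows are (^R S_i1)^T
    return M_G.T @ np.linalg.inv(M_R.T)

# Synthetic check: choose an orthogonal matrix as ground truth,
# place three stance feet, and verify the formula recovers it.
rng = np.random.default_rng(0)
Q, _ = np.linalg.qr(rng.standard_normal((3, 3)))
o_r = np.array([0.1, 0.2, 0.3])
feet_rcs = rng.standard_normal((3, 3))   # three stance feet in the RCS
feet_gcs = feet_rcs @ Q.T + o_r          # the same feet expressed in the GCS
R_est = body_orientation(feet_gcs, feet_rcs, o_r)
```

The inversion requires the three stance-foot vectors to be linearly independent in the RCS, which holds for any non-degenerate 3-3 stance.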
Here in Eqs. ( 4) and ( 5), the \({}_{i{\text{A}}}^{i{\text{L}}} {\mathbf{R}}\), \({}^{i{\text{L}}}{\mathbf{S}}_{i1}\) and \({}^{i{\text{L}}}{\mathbf{O}}_{i{\text{A}}}\) are defined the same as in Eq. ( 2), but the \(\theta_{i1}\) and \(\theta_{i2}\) here are calculated from the input \({\mathbf{L}}_i = (l_{i1}, l_{i2}, l_{i3})^{\text{T}}\) by
$$\left\{ {\begin{array}{*{20}l} {\theta _{{i{\text{2}}}} = \arcsin \left( {\omega _{i} } \right) - \arctan \left( {\frac{{h_{{{\text{S}}i}} }}{{l_{{i{\text{1}}}} - s_{i} }}} \right),} \hfill \\ {\theta _{{i{\text{1}}}} = \arcsin \left( {\frac{{\phi _{i} }}{{d_{{{\text{U}}i}} \left( {\left( {l_{{i{\text{1}}}} - s_{i} } \right)\cos \theta _{{i{\text{2}}}} - h_{{{\text{S}}i}} \sin \theta _{{i{\text{2}}}} } \right)}}} \right),} \hfill \\ \end{array} } \right.$$
$$\left\{ {\begin{array}{*{20}l} {\begin{array}{*{20}l} {\omega_{i}^{4} + a_{i} \omega_{i}^{3} + b_{i} \omega_{i}^{2} - a_{i} \omega_{i} + c_{i} = 0,} \hfill \\ {a_{i} = - \frac{{2\varphi_{i} }}{{h_{{{\text{U}}i}} \sqrt {\left( {l_{i1} - s_{i} } \right)^{2} + h_{{{\text{S}}i}}^{2} } }},} \hfill \\ \end{array} } \hfill \\ {\begin{array}{*{20}l} {b_{i} = \frac{{\varphi_{i}^{2} - d_{{{\text{U}}i}}^{2} d_{{{\text{S}}i}}^{2} }}{{h_{{{\text{U}}i}}^{2} \left( {\left( {l_{i1} - s_{i} } \right)^{2} + h_{{{\text{S}}i}}^{2} } \right)}} - 1,} \hfill \\ {c_{i} = - \frac{{\phi_{i}^{2} d_{{{\text{S}}i}}^{2} + \left( {\varphi_{i}^{2} - d_{{{\text{U}}i}}^{2} d_{{{\text{S}}i}}^{2} } \right)\left( {\left( {l_{i1} - s_{i} } \right)^{2} + h_{{{\text{S}}i}}^{2} } \right)}}{{h_{{{\text{U}}i}}^{2} \left( {\left( {l_{i1} - s_{i} } \right)^{2} + h_{{{\text{S}}i}}^{2} } \right)^{2} }},} \hfill \\ \end{array} } \hfill \\ {\begin{array}{*{20}l} {\varphi_{i} = \frac{{\left( {l_{i1} - s_{i} } \right)^{2} + d_{{{\text{U}}i}}^{2} + d_{{{\text{S}}i}}^{2} + h_{{{\text{U}}i}}^{2} + h_{{{\text{S}}i}}^{2} }}{2} - \frac{{l_{i2}^{2} + l_{i3}^{2} }}{4},} \hfill \\ {\phi_{i} = \frac{{l_{i2}^{2} - l_{i3}^{2} }}{4}.} \hfill \\ \end{array} } \hfill \\ \end{array} } \right.$$
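A minimal numerical sketch of this inverse-kinematics step, assuming purely illustrative link parameters (the values of \(s_i\), \(h_{\text{S}i}\), \(h_{\text{U}i}\), \(d_{\text{U}i}\), \(d_{\text{S}i}\) below are not the robot's real dimensions): the quartic in \(\omega_i\) is solved with `numpy.roots`, and only real roots that give valid arcsine arguments are kept.

```python
import numpy as np

def joint_angles(l1, l2, l3, s, hS, hU, dU, dS):
    """Candidate (theta_i1, theta_i2) pairs from the actuator input
    (Eqs. (8)-(9)). The quartic in omega is solved numerically; real
    roots with |omega| <= 1 and a valid arcsin argument are kept."""
    q = (l1 - s)**2 + hS**2
    varphi = ((l1 - s)**2 + dU**2 + dS**2 + hU**2 + hS**2) / 2 - (l2**2 + l3**2) / 4
    phi = (l2**2 - l3**2) / 4
    a = -2 * varphi / (hU * np.sqrt(q))
    b = (varphi**2 - dU**2 * dS**2) / (hU**2 * q) - 1
    c = -(phi**2 * dS**2 + (varphi**2 - dU**2 * dS**2) * q) / (hU**2 * q**2)
    sols = []
    for w in np.roots([1.0, a, b, -a, c]):
        if abs(w.imag) > 1e-9 or abs(w.real) > 1.0:
            continue  # only real roots in [-1, 1] are admissible for arcsin
        th2 = np.arcsin(w.real) - np.arctan2(hS, l1 - s)
        arg = phi / (dU * ((l1 - s) * np.cos(th2) - hS * np.sin(th2)))
        if abs(arg) <= 1.0:
            sols.append((np.arcsin(arg), th2))
    return sols

# Hypothetical leg geometry (metres); in practice the physical root
# would be selected by the joint limits.
sols = joint_angles(l1=0.4, l2=0.35, l3=0.34, s=0.1,
                    hS=0.05, hU=0.05, dU=0.1, dS=0.1)
```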
4 Door-opening Method
The approach of the door-opening method is shown in Figure 4; with some revisions to the trajectory planning, it can also be applied to pull doors. The task is decomposed into five subtasks: locating the door, rotating to align, locating the handle, opening the door and walking through. Here we use Q to denote the end-effector of the tool, and Q is fixed to the tool. The positions of Q at different instants along the trajectory are marked by other letters, and these marks are fixed to the ground.
Approach of the door-opening method
4.1 Locating the Door
The first subtask is locating the door ( O → A → B → C), in which the robot identifies the orientation matrix of the DCS in the GCS (\({}_{\text{D}}^{\text{G}} {\mathbf{R}}\)) by touching three non-collinear points on the door, as shown in Figure 5.
Locating the door plane
Three non-collinear points define a plane. According to this basic principle, the robot first moves its body forward (– Z R) until Q touches the first point A on the door ( O → A). Then, the robot moves its body both backward and leftward to a different point ( A → O'), and forward again to touch the second point B ( O' → B). Finally, the robot moves its body both backward and upward ( B → O''), and forward again to touch the third point C ( O'' → C). By making the backward and leftward distances during AO' equal, and the backward and upward distances during BO'' equal, the angle between the sagittal axis of the robot body and the normal of the door plane is allowed to reach a maximum of \(45^\circ\).
4.1.1 Trajectory Generation
The 6D trajectory of the robot body is generated by a discrete force control model:
$$\left\{ {\begin{array}{*{20}l} {{\mathbf{M}}{\ddot{\mathbf{S}}}_{k} = {\mathbf{F}}_{k} - {\mathbf{C}}{\dot{\mathbf{S}}}_{{k - 1}} ,} \hfill \\ {\begin{array}{*{20}l} {{\dot{\mathbf{S}}}_{k} = {\dot{\mathbf{S}}}_{{k - 1}} + {\ddot{\mathbf{S}}}_{k} \Delta t,} \hfill \\ {{\mathbf{S}}_{k} = {\mathbf{S}}_{{k - 1}} + {\dot{\mathbf{S}}}_{k} \Delta t,} \hfill \\ \end{array} } \hfill \\ \end{array} } \right.$$
where \({\mathbf{S}}_{k}\)—6D coordinates of the robot body at time k,
$${\mathbf{S}}_{k} = \left( {x_{k} ,y_{k} ,z_{k} ,\alpha_{k} ,\beta_{k} ,\gamma_{k} } \right)^{\text{T}} ,$$
\({\mathbf{F}}_{k}\)—6D force at time k,
$${\mathbf{F}}_{k} = \left( {F_{xk} ,F_{yk} ,F_{zk} ,M_{xk} ,M_{yk} ,M_{zk} } \right)^{\text{T}} ,$$
M—Mass matrix, C—Damp matrix.
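A sketch of one integration step of this discrete force-control model, with assumed values for the mass matrix, damping matrix and applied force (they are illustrations, not the robot's parameters):

```python
import numpy as np

def force_control_step(S, dS, F, M, C, dt):
    """One step of the discrete force-control model (Eq. (7)):
    M * ddS_k = F_k - C * dS_{k-1}, followed by explicit Euler
    integration of velocity and position."""
    ddS = np.linalg.solve(M, F - C @ dS)
    dS_new = dS + ddS * dt
    S_new = S + dS_new * dt
    return S_new, dS_new

# Hypothetical 6D example: a constant force of 30 N along -Z_R.
M = np.eye(6) * 50.0    # mass matrix (assumed values)
C = np.eye(6) * 200.0   # damping matrix (assumed values)
F = np.array([0.0, 0.0, -30.0, 0.0, 0.0, 0.0])
S, dS = np.zeros(6), np.zeros(6)
for _ in range(1000):
    S, dS = force_control_step(S, dS, F, M, C, dt=0.01)
# The velocity settles toward the steady state C^{-1} F = -0.15 m/s in z.
```

The damping dominates at steady state, so different constant forces F_k yield different constant approach velocities, which is what makes the same model reusable for all the touch motions below.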
The robot determines M and C according to the required accelerations and velocities, and generates different trajectories by applying different \({\mathbf{F}}_{k}\). While locating the door, \({\mathbf{F}}_{k}\) is
$${\mathbf{F}}_{k} = \left\{ {\begin{array}{*{20}l} {\left( {\begin{array}{*{20}c} {{}_{{\text{R}}}^{{\text{G}}} {\mathbf{R}}_{O} \left( {0,0, - 1} \right)^{{\text{T}}} } \\ {{}_{{\text{R}}}^{{\text{G}}} {\mathbf{R}}_{O} \left( {0,0,0} \right)^{{\text{T}}} } \\ \end{array} } \right),\;{\text{if }}Q \in OA \cup O^{\prime}B \cup O^{\prime\prime}C,} \hfill \\ {\left( {\begin{array}{*{20}c} {{}_{{\text{R}}}^{{\text{G}}} {\mathbf{R}}_{O} \left( { - 1,0,1} \right)^{{\text{T}}} } \\ {{}_{{\text{R}}}^{{\text{G}}} {\mathbf{R}}_{O} \left( {0,0,0} \right)^{{\text{T}}} } \\ \end{array} } \right),\;{\text{if }}Q \in AO^{\prime},} \hfill \\ {\left( {\begin{array}{*{20}c} {{}_{{\text{R}}}^{{\text{G}}} {\mathbf{R}}_{O} \left( {0,1,1} \right)^{{\text{T}}} } \\ {{}_{{\text{R}}}^{{\text{G}}} {\mathbf{R}}_{O} \left( {0,0,0} \right)^{{\text{T}}} } \\ \end{array} } \right),\;{\text{if }}Q \in BO^{\prime\prime},} \hfill \\ \end{array} } \right.$$
where \({}_{\text{R}}^{\text{G}} {\mathbf{R}}_{O}\)— \({}_{\text{R}}^{\text{G}} {\mathbf{R}}\) at point O, derived by Eq. ( 5), \(Q \in OA \cup O^{\prime}B\)— Q is on line segment OA or O' B.
4.1.2 Orientation Matrix Calculation
By applying Eq. ( 5), \({}^{\text{G}}{\mathbf{O}}_{\text{R}}\) at A, B, C can be derived, denoted as \({}^{\text{G}}{\mathbf{O}}_{{\text{R}}A} = \left( {x_{A} ,y_{A} ,z_{A} } \right)^{\text{T}}\), \({}^{\text{G}}{\mathbf{O}}_{{\text{R}}B} = \left( {x_{B} ,y_{B} ,z_{B} } \right)^{\text{T}}\) and \({}^{\text{G}}{\mathbf{O}}_{{\text{R}}C} = \left( {x_{C} ,y_{C} ,z_{C} } \right)^{\text{T}}\). Let \({\mathbf{n}} = \left( {x_{n} ,y_{n} ,z_{n} } \right)^{\text{T}}\) denote the normal vector of the door plane. Then n can be calculated by
$$\left( {\begin{array}{*{20}c} {\begin{array}{*{20}c} {x_{A} - x_{B} } \\ {y_{A} - y_{B} } \\ {z_{A} - z_{B} } \\ \end{array} } & {\begin{array}{*{20}c} {x_{C} - x_{B} } \\ {y_{C} - y_{B} } \\ {z_{C} - z_{B} } \\ \end{array} } \\ \end{array} } \right)^{{\text{T}}} \left( {\begin{array}{*{20}c} {{}^{{\text{G}}}x_{n} } \\ {{}^{{\text{G}}}y_{n} } \\ {{}^{{\text{G}}}z_{n} } \\ \end{array} } \right) = 0.$$
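Eq. ( 10) only states that n is orthogonal to the two in-plane vectors A − B and C − B; a closed-form solution is their cross product. A small sketch with hypothetical touch points:

```python
import numpy as np

def door_normal(A, B, C):
    """Unit normal of the door plane from three non-collinear touch
    points (Eq. (10)): n is orthogonal to both in-plane vectors A - B
    and C - B, so their cross product gives it directly."""
    A, B, C = (np.asarray(p, float) for p in (A, B, C))
    n = np.cross(A - B, C - B)
    return n / np.linalg.norm(n)

# Hypothetical touch points lying on the plane z = 2 for an easy check.
n = door_normal([0.0, 0.0, 2.0], [0.5, 0.0, 2.0], [0.5, 0.6, 2.0])
```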
Transferring the vector n from the GCS to the RCS, we can get
$$\left( {\begin{array}{*{20}c} {{}^{{\text{R}}}x_{n} } & {{}^{{\text{R}}}y_{n} } & {{}^{{\text{R}}}z_{n} } \\ \end{array} } \right)^{{\text{T}}} = {}_{{\text{R}}}^{{\text{G}}} {\mathbf{R}}_{O}^{{ - 1}} {}^{{\text{G}}}{\mathbf{n}}.$$
Then projecting n into the X R O R Z R plane, we can get
$${}^{{\text{R}}}{\mathbf{n}}_{{\text{p}}} = (\begin{array}{*{20}c} {{}^{{\text{R}}}x_{n} } & 0 & {{}^{{\text{R}}}z_{n} } \\ \end{array} )^{{\text{T}}} .$$
Here the Tait-Bryan angle, which includes roll, pitch and yaw, is used to express the orientation of the DCS in the GCS. The yaw angle is
$$Y_{{\text{a}}} = \theta = \arctan \left( {\frac{{{}^{{\text{R}}}x_{n} }}{{{}^{{\text{R}}}z_{n} }}} \right).$$
Taking into account that there may be stairs or slopes along the direction of Z D in front of the door, so that the door plane is not normal to the ground plane, a pitch angle exists between n and n p:
$$P_{{\text{i}}} = \alpha = - \arcsin \left( {\frac{{{}^{{\text{R}}}y_{n} }}{{\left\| {\mathbf{n}} \right\|_{2} }}} \right).$$
Considering that there are almost no doors with stairs or slopes along the direction of X D, we can reasonably assume that the roll angle \(R_{\text{o}} = 0.\)
So, the orientation matrix can be calculated from the Tait-Bryan angle by
$$\left\{ {\begin{array}{*{20}l} {{}_{{\text{D}}}^{{\text{G}}} {\mathbf{R}} = {}_{{\text{R}}}^{{\text{G}}} {\mathbf{R}}_{O} {}_{{\text{R}}}^{{\text{D}}} {\mathbf{R}}_{{YX^{\prime}Z^{\prime\prime}}}^{{ - 1}} ,} \hfill \\ {{}_{{\text{R}}}^{{\text{D}}} {\mathbf{R}}_{{YX^{\prime}Z^{\prime\prime}}} \left( {Y_{{\text{a}}} ,P_{{\text{i}}} ,R_{{\text{o}}} } \right) = \left( {\begin{array}{*{20}c} {\cos \theta } & {\sin \theta \sin \alpha } & {\sin \theta \cos \alpha } \\ 0 & {\cos \alpha } & { - \sin \alpha } \\ { - \sin \theta } & {\cos \theta \sin \alpha } & {\cos \theta \cos \alpha } \\ \end{array} } \right).} \hfill \\ \end{array} } \right.$$
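Eqs. ( 12)–( 14) can be condensed into a short routine. A sketch with a hypothetical measured normal; the roll is taken as zero as in the text, and the resulting matrix maps the door-frame Z axis onto the unit normal, which is a convenient self-check.

```python
import numpy as np

def door_orientation(n_R):
    """Yaw and pitch of the door plane from the normal vector expressed
    in the RCS (Eqs. (12)-(14)); roll is assumed zero."""
    x, y, z = n_R
    yaw = np.arctan2(x, z)                        # Eq. (12)
    pitch = -np.arcsin(y / np.linalg.norm(n_R))   # Eq. (13)
    ct, st = np.cos(yaw), np.sin(yaw)
    ca, sa = np.cos(pitch), np.sin(pitch)
    # Rotation matrix of Eq. (14): intrinsic Y-X'-Z'' with zero roll.
    R = np.array([[ct, st * sa, st * ca],
                  [0.0, ca, -sa],
                  [-st, ct * sa, ct * ca]])
    return yaw, pitch, R

# Hypothetical normal measured in the RCS.
n_R = np.array([0.2, -0.1, 0.97])
yaw, pitch, R = door_orientation(n_R)
# R @ (0, 0, 1) reproduces the unit normal.
```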
4.2 Rotating to Align
The second subtask is rotating around X R and Y R to align with the door plane ( C → D). As shown in Figure 6, the robot moves its body, both translating and rotating, from C to D, and at the same time moves the feet to follow the body ( C L → D L). The point O R at D coincides with O R at O, which determines the 6D trajectory as
$${\mathbf{CD}} = \left( {\begin{array}{*{20}c} {{}^{{\text{G}}}{\mathbf{O}}_{{{\text{R}}O}} - {}^{{\text{G}}}{\mathbf{O}}_{{{\text{R}}C}} } \\ {\left( {\begin{array}{*{20}c} {R_{{\text{o}}} } & {P_{{\text{i}}} } & {Y_{{\text{a}}} } \\ \end{array} } \right)^{{\text{T}}} } \\ \end{array} } \right).$$
After the alignment, the horizontal rod of the tool always remains normal to the door plane when the body translates, which guarantees that the tool does not collide with the door plane while locating the handle.
Rotating to align and approaching the handle
4.3 Locating the Handle
The third subtask is locating the handle ( D → E → F → G), in which the robot identifies the translational parameter \({}^{\text{G}}{\mathbf{O}}_{\text{D}}\) by three touches in three orthogonal directions.
In order to touch the handle, the robot has to decide the height and the moving direction of the tool first. A statistical analysis [ 15 ] of the most frequent sizes of handles shows that the height of the handle ranges from 99 cm to 103 cm. Based on this knowledge, the robot keeps the vertical rod of the tool in this range. The robot then chooses right as its target direction to search for the handle. If the robot confirms that the handle is not in that direction, it switches to left and performs the process of locating the handle again. Here we present the localization and confirmation of the handle in the rightward direction; the leftward case is similar.
As shown in Figure 6, if the robot is far from the handle, it needs to move rightward cyclically. In every cycle except the final one, the robot successively moves the body forward (– Z R) until touching the door plane, backward for a short constant distance to avoid rubbing against the door plane, and rightward for a constant distance decided by the workspace of the tool, moving the legs to follow the body; this finishes the process D → E. The purpose of the forward touch and the constant backward retreat at the beginning of every cycle is to initialize the distance of the current cycle and eliminate the error accumulated in the previous one. When the robot starts too far from the handle, even a very small angular error causes a large translational error along Z D, despite the alignment. Given the narrowness of the space between the door and the handle, such a large error would very likely make the tool fail to enter it, and thus fail to open the door. Because the distance of every rightward cycle is limited to an acceptable constant value, the translational error along Z D is well bounded. Furthermore, by applying multiple three-point contacts when locating the door and reducing the distance of every rightward cycle, the detection accuracy can be guaranteed even if there are embossments or grooves on the door.
In the final cycle ( E → F → G in Figure 7), after touching something at F, the robot moves its body leftward for a constant distance shorter than the handle, and then downward to touch the handle. If the tool touches nothing before it gets lower than the minimum height, the robot treats this as confirmation that the handle is not in this direction. If the tool touches something, the robot treats this as a signal of successfully locating the handle and proceeds to the next subtask.
Final cycle of locating the handle
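The cyclic search above can be reduced to a simple loop. This is a simplified one-dimensional sketch of the decision logic only; the handle position, step size and reach are illustrative values, and the forward-touch/retreat of each cycle is collapsed into a comment.

```python
def search_handle(handle_x, step=0.2, reach=2.0):
    """1-D sketch of the cyclic rightward search: each cycle re-touches
    the door to reset the forward distance (omitted here), then shifts
    right by a fixed step; a lateral force pulse ends the search.
    Returns (cycles used, outcome)."""
    x = 0.0
    cycles = 0
    while x < reach:
        # forward touch and short backward retreat would happen here
        if x <= handle_x < x + step:  # force pulse along X_R: handle hit
            return cycles, "handle"
        x += step
        cycles += 1
    return cycles, "not found"
```

The bounded step is what keeps the accumulated error along Z D small, as the text argues: no single cycle ever carries the robot far from its last verified contact with the door.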
The trajectory is generated by Eq. ( 7) in every cycle. The \({\mathbf{F}}_{k}\) of every cycle during D → E is similar to that of E → F in the final cycle, and in the final cycle \({\mathbf{F}}_{k}\) is
$${\mathbf{F}}_{k} = \left\{ {\begin{array}{*{20}l} {\begin{array}{*{20}l} {\left( {\begin{array}{*{20}c} {{}_{{\text{R}}}^{{\text{G}}} {\mathbf{R}}_{E} \left( {\begin{array}{*{20}c} 0 & 0 & { - 1} \\ \end{array} } \right)^{{\text{T}}} } \\ {{}_{{\text{R}}}^{{\text{G}}} {\mathbf{R}}_{E} \left( {\begin{array}{*{20}c} 0 & 0 & 0 \\ \end{array} } \right)^{{\text{T}}} } \\ \end{array} } \right),{\text{if }}Q \in EE^{\prime},} \hfill \\ {\left( {\begin{array}{*{20}c} {{}_{{\text{R}}}^{{\text{G}}} {\mathbf{R}}_{E} \left( {\begin{array}{*{20}c} 0 & 0 & 1 \\ \end{array} } \right)^{{\text{T}}} } \\ {{}_{{\text{R}}}^{{\text{G}}} {\mathbf{R}}_{E} \left( {\begin{array}{*{20}c} 0 & 0 & 0 \\ \end{array} } \right)^{{\text{T}}} } \\ \end{array} } \right),{\text{if }}Q \in E^{\prime}F^{\prime},} \hfill \\ \end{array} } \hfill \\ {\begin{array}{*{20}l} {\left( {\begin{array}{*{20}c} {{}_{{\text{R}}}^{{\text{G}}} {\mathbf{R}}_{E} \left( {\begin{array}{*{20}c} 1 & 0 & 0 \\ \end{array} } \right)^{{\text{T}}} } \\ {{}_{{\text{R}}}^{{\text{G}}} {\mathbf{R}}_{E} \left( {\begin{array}{*{20}c} 0 & 0 & 0 \\ \end{array} } \right)^{{\text{T}}} } \\ \end{array} } \right),{\text{if }}Q \in E^{\prime}F^{\prime},} \hfill \\ {\left( {\begin{array}{*{20}c} {{}_{{\text{R}}}^{{\text{G}}} {\mathbf{R}}_{E} \left( {\begin{array}{*{20}c} { - 1} & 0 & 0 \\ \end{array} } \right)^{{\text{T}}} } \\ {{}_{{\text{R}}}^{{\text{G}}} {\mathbf{R}}_{E} \left( {\begin{array}{*{20}c} 0 & 0 & 0 \\ \end{array} } \right)^{{\text{T}}} } \\ \end{array} } \right),{\text{if }}Q \in FG^{\prime},} \hfill \\ {\left( {\begin{array}{*{20}c} {{}_{{\text{R}}}^{{\text{G}}} {\mathbf{R}}_{E} \left( {\begin{array}{*{20}c} 0 & { - 1} & 0 \\ \end{array} } \right)^{{\text{T}}} } \\ {{}_{{\text{R}}}^{{\text{G}}} {\mathbf{R}}_{E} \left( {\begin{array}{*{20}c} 0 & 0 & 0 \\ \end{array} } \right)^{{\text{T}}} } \\ \end{array} } \right),{\text{if }}Q \in FG^{\prime}.} \hfill \\ \end{array} } \hfill \\ \end{array} } \right.$$
The location of \({\mathbf{O}}_{\text{D}}\) on the handle can be expressed by
$$\left\{ {\begin{array}{*{20}l} {\left( {\begin{array}{*{20}c} {{}^{{\text{G}}}{\mathbf{O}}_{{\text{D}}} } \\ 1 \\ \end{array} } \right) = {}_{{\text{R}}}^{{\text{G}}} {\mathbf{T}}_{E} \left( {\begin{array}{*{20}c} {{}^{{\text{R}}}{\mathbf{O}}_{{\text{D}}} } \\ 1 \\ \end{array} } \right),} \hfill \\ {{}^{{\text{R}}}{\mathbf{O}}_{{\text{D}}} = {}^{{\text{R}}}{\mathbf{O}}_{{{\text{R}}E}} + \left( {\begin{array}{*{20}c} {\left\| {{}^{{\text{G}}}{\mathbf{O}}_{{{\text{R}}F}} - {}^{{\text{G}}}{\mathbf{O}}_{{{\text{R}}F'}} } \right\|_{2} } \\ { - \left\| {{}^{{\text{G}}}{\mathbf{O}}_{{{\text{R}}G}} - {}^{{\text{G}}}{\mathbf{O}}_{{{\text{R}}G'}} } \right\|_{2} } \\ { - \left\| {{}^{{\text{G}}}{\mathbf{O}}_{{{\text{R}}E}} - {}^{{\text{G}}}{\mathbf{O}}_{{{\text{R}}E'}} } \right\|_{2} } \\ \end{array} } \right).} \hfill \\ \end{array} } \right.$$
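Eq. ( 16) is an offset-and-transform computation. A sketch with a hypothetical body pose T_E and touch distances (all values are made up for the check, not measured ones):

```python
import numpy as np

def handle_position(T_E, R_O_RE, d_right, d_down, d_forward):
    """Handle origin O_D in the GCS from the three touch distances of
    the final cycle (Eq. (16)): rightward (+X_R), downward (-Y_R) and
    forward (-Z_R) offsets from the body position at E, mapped to the
    GCS by the homogeneous pose T_E."""
    R_O_D = np.asarray(R_O_RE, float) + np.array([d_right, -d_down, -d_forward])
    return (T_E @ np.append(R_O_D, 1.0))[:3]

# Hypothetical pose: body rotated 90 deg about Y, origin at (1, 0, 3).
T_E = np.array([[0.0, 0.0, 1.0, 1.0],
                [0.0, 1.0, 0.0, 0.0],
                [-1.0, 0.0, 0.0, 3.0],
                [0.0, 0.0, 0.0, 1.0]])
p = handle_position(T_E, [0.0, 0.0, 0.0], d_right=0.3, d_down=0.1, d_forward=0.5)
```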
4.4 Opening the Door
The fourth subtask is opening the door ( G → H → I), in which the robot first moves along a circular arc in the door plane to turn the handle until it reaches the end ( G → H in Figure 8), and then moves forward to try to push the door open ( H → I in Figure 8). When moving forward, the robot keeps detecting the contact force; if it exceeds the maximum force the robot can apply, the robot treats this as a signal that the door is blocked and stops the task. The trajectories of turning and pushing are both generated by Eq. ( 7), and \({\mathbf{F}}_{k}\) is
$${\mathbf{F}}_{k} = \left\{ {\begin{array}{*{20}l} {\left( {\begin{array}{*{20}c} {{}_{{\text{R}}}^{{\text{G}}} {\mathbf{R}}_{G} \left( {\begin{array}{*{20}c} {\sin \left( {\frac{{2\overline{v} k}}{{\pi r}}} \right)} & { - \cos \left( {\frac{{2\overline{v} k}}{{\pi r}}} \right)} & {\text{0}} \\ \end{array} } \right)^{{\text{T}}} } \\ {{}_{{\text{R}}}^{{\text{G}}} {\mathbf{R}}_{G} \left( {\begin{array}{*{20}c} 0 & 0 & 0 \\ \end{array} } \right)^{{\text{T}}} } \\ \end{array} } \right),{\text{if }}Q \in {\mathop {\frown}\limits_{GH}},} \hfill \\ {\left( {\begin{array}{*{20}c} {{}_{{\text{R}}}^{{\text{G}}} {\mathbf{R}}_{G} \left( {\begin{array}{*{20}c} 0 & 0 & { - 1} \\ \end{array} } \right)^{{\text{T}}} } \\ {{}_{{\text{R}}}^{{\text{G}}} {\mathbf{R}}_{G} \left( {\begin{array}{*{20}c} 0 & 0 & 0 \\ \end{array} } \right)^{{\text{T}}} } \\ \end{array} } \right),{\text{if }}Q \in HI,} \hfill \\ \end{array} } \right.$$
where \(\bar{v}\)—Average linear speed planned along the arc, r—Radius of the arc, \(Q \in {\mathop {\frown}\limits_{GH}}\)— Q is on the arc \({\mathop {\frown}\limits_{GH}}\).
Turning the handle and pushing the door
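The handle-turning branch of Eq. ( 17) applies a unit force that stays tangent to the arc as k advances. A sketch of that force term alone (the speed, radius and magnitude are illustrative values):

```python
import numpy as np

def handle_turn_force(k, v_bar, r, G_R_G=np.eye(3), magnitude=1.0):
    """Force direction while turning the handle (translational part of
    Eq. (17)): a unit vector tangent to the arc of radius r,
    parameterised by step k and average speed v_bar, expressed in the
    GCS via the body rotation at G."""
    ang = 2.0 * v_bar * k / (np.pi * r)
    f = np.array([np.sin(ang), -np.cos(ang), 0.0])
    return magnitude * (G_R_G @ f)

# At k = 0 the force points straight down the handle (-Y direction),
# then rotates smoothly as the arc is traversed.
f0 = handle_turn_force(0, v_bar=0.05, r=0.1)
```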
The simple mechanism of the 0-DOF tool plays a significant role here, releasing the inner force in the tool when turning the handle. The open-loop structure of the end-effector cannot achieve a firm grasp of the handle like the widely used closed-loop multi-DOF grippers, yet this has a notably positive rather than negative effect on the subtask, because the inner force is effectively released. The inner force arises because the motion of the manipulator cannot follow the position of the handle exactly, owing to positional error and imprecise modeling of the door, while a firm grasp compels the manipulator to follow. This conflict is nearly impossible to resolve completely as long as a firm grasp is applied. However, a firm grasp is not essential in all cases. With a loose grasp, the contact point between the tool and the handle can slide along both surfaces, so the tool does not have to follow the handle exactly, thus releasing the inner force. And because of the large areas in which the tool can move (red areas in Figure 8), keeping the tool in contact with the handle during the motion is not a problem.
4.5 Walking Through
The fifth subtask is walking through ( I → J → K → L), in which the robot adjusts its body back to the sagittal plane, walks leftward into the door range, and then walks forward to get through the door (Figure 9). The robot keeps detecting the contact force during the whole process; if it exceeds the maximum force the robot can apply, the robot treats this as a signal that the door is blocked and stops the task.
Walking through the door
When adjusting, the tool translates parallel to the wall plane ( I → J) to release the handle and prepare for the leftward walking. The point J is in the sagittal plane like the point E, but higher than E by h to avoid colliding with the handle, so the adjustment trajectory is
$${\mathbf{IJ}} = \left( {\begin{array}{*{20}c} {{}_{{\text{R}}}^{{\text{G}}} {\mathbf{R}}_{I} \left( {\begin{array}{*{20}c} {\begin{array}{*{20}c} {\left( {{}_{{\text{R}}}^{{\text{G}}} {\mathbf{R}}_{I} {}^{{\text{R}}}{\mathbf{X}}_{{\text{R}}} } \right)^{{\text{T}}} \left( {{}^{{\text{G}}}{\mathbf{E}} - {}^{{\text{G}}}{\mathbf{I}}} \right)} \\ {\left( {{}_{R}^{G} {\mathbf{R}}_{I} {}^{{\text{R}}}{\mathbf{Y}}_{{\text{R}}} } \right)^{{\text{T}}} \left( {{}^{{\text{G}}}{\mathbf{E}} - {}^{{\text{G}}}{\mathbf{I}}} \right) + h} \\ \end{array} } \\ 0 \\ \end{array} } \right)} \\ {{}_{{\text{R}}}^{{\text{G}}} {\mathbf{R}}_{I} \left( {\begin{array}{*{20}c} 0 & 0 & 0 \\ \end{array} } \right)^{{\text{T}}} } \\ \end{array} } \right),$$
where \({}^{\text{R}}{\mathbf{X}}_{\text{R}} ,{}^{\text{R}}{\mathbf{Y}}_{\text{R}}\)—Basis vectors of the X- and Y-axes of the RCS,
$${}^{\text{R}}{\mathbf{X}}_{\text{R}} = \left( {1,0,0} \right)^{\text{T}} ,\;{}^{\text{R}}{\mathbf{Y}}_{\text{R}} = \left( {0,1,0} \right)^{\text{T}} ,$$
\({}^{\text{G}}{\mathbf{E}},{}^{\text{G}}{\mathbf{I}}\)—Coordinates of points E, I in the GCS.
During the leftward walking, the robot keeps the end-effector Q touching the door plane to prevent the door from closing automatically because of the door closer. Thanks to its high load capacity, the robot can deal with doors with large rebounding forces. The trajectory is
$${\mathbf{JK}} = \left( {\begin{array}{*{20}c} {{}_{{\text{R}}}^{{\text{G}}} {\mathbf{R}}_{J} \left( {\begin{array}{*{20}c} {\left( {{}_{{\text{R}}}^{{\text{G}}} {\mathbf{R}}_{J} {}^{{\text{R}}}{\mathbf{X}}_{{\text{R}}} } \right)^{{\text{T}}} \left( {{}^{{\text{G}}}{\mathbf{O}}_{{\text{D}}} - {}^{{\text{G}}}{\mathbf{O}}_{{\text{R}}} } \right) - \frac{{w_{{\text{R}}} }}{2}} \\ {\begin{array}{*{20}c} 0 \\ 0 \\ \end{array} } \\ \end{array} } \right)} \\ {{}_{{\text{R}}}^{{\text{G}}} {\mathbf{R}}_{J} \left( {\begin{array}{*{20}c} 0 & 0 & 0 \\ \end{array} } \right)^{{\text{T}}} } \\ \end{array} } \right),$$
where \(w_{\text{R}}\)—Width of the robot.
While walking forward, the robot uses its body to push the door open, making good use of its high load capacity. The forward trajectory is
$${\mathbf{KL}} = \left( {\begin{array}{*{20}c} {{}_{{\text{R}}}^{{\text{G}}} {\mathbf{R}}_{K} \left( {\begin{array}{*{20}c} 0 \\ 0 \\ {\left( {{}_{{\text{R}}}^{{\text{G}}} {\mathbf{R}}_{K} {}^{{\text{R}}}{\mathbf{Z}}_{{\text{R}}} } \right)^{{\text{T}}} \left( {{}^{{\text{G}}}{\mathbf{O}}_{{\text{D}}} - {}^{{\text{G}}}{\mathbf{S}}_{{41}} } \right) - l_{{\text{R}}} } \\ \end{array} } \right)} \\ {{}_{{\text{R}}}^{{\text{G}}} {\mathbf{R}}_{K} \left( {\begin{array}{*{20}c} 0 & 0 & 0 \\ \end{array} } \right)^{{\text{T}}} } \\ \end{array} } \right),$$
where \({}^{\text{R}}{\mathbf{Z}}_{\text{R}}\)—Basis vector of Z-axis of the RCS,
$${}^{\text{R}}{\mathbf{Z}}_{\text{R}} = \left( {0,0,1} \right)^{\text{T}} ,$$
\({}^{\text{G}}{\mathbf{S}}_{41}\)—Derived by Eq. ( 3), l R —Length of the robot.
5 Experiment Results
In order to verify the proposed method, experiments were carried out on the robot. The robot did not know the detailed parameters of the environment and autonomously planned its motion to open the door completely, based only on real-time force feedback. The unknown environment here means that the size of the door, the position of the handle, the required force, etc. are all unknown. The door is 2025 mm high and 1130 mm wide, with a door closer to provide a rebound tendency, as shown in Figure 10. Figure 11 shows the process of opening the door in the experiment.
Door and door closer in the experiments
Snapshots of the experiment
While locating the door, the robot adjusted its position to prepare for the next touch every time the force sensor detected a force pulse along Z R, which indicated that the robot had touched the door. The robot only moved its body to touch and kept its feet still on the ground. After detecting the third touch, the robot calculated its positional relationship with the door and rotated to keep the tool normal to the door plane. During the alignment, the robot moved both its body and its feet. Figure 12 shows the positions of the feet and the tool, and Figure 13 shows the detected force. Here the motions of feet 2 and 5 represent the motions of all feet, because feet 2 and 5 move alternately in the 3-3 gait.
Feet and tool positions during locating the door and rotating to align
Feedback force during locating the door and rotating to align
Then, the robot moved rightward to touch the handle and made different decisions based on the force feedback. If no force pulse was fed back, the robot moved its legs to follow the body. If a force pulse along X R was detected, the robot knew it had touched the handle and started to adjust its position to detect the handle along Y R. During this process, the robot moved its body and feet separately. Figure 14 shows the positions of the feet and the tool, and Figure 15 shows the detected force.
Feet and tool positions during locating the handle
Feedback force during locating the handle
After detecting the force pulse along Y R indicating that the robot had touched the handle, the robot started to turn the handle. Once the force feedback from the handle exceeded the threshold, indicating that the handle had reached its end, the robot moved forward to push the door open. Finally, the robot walked leftward into the door range according to the calculated position of the handle and then walked through. Figure 16 shows the positions of the feet and the tool, and Figure 17 shows the detected force.
Feet and tool positions during turning the handle and walking through
Feedback force during turning the handle and walking through
6 Conclusions

(1) A method of measuring the positional relationship between the robot and the door is developed, which uses only force sensing and the 0-DOF tool to detect and open the door.

(2) A real-time trajectory planning method for the robot to open the door is proposed, which is based entirely on the real-time measurement of the contact force.

(3) The proposed door-opening method is implemented on the six-parallel-legged robot. Experiments are carried out to validate the method, and the results show that it is effective and robust in opening doors wider than the robot (1 m) in unknown environments.
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License ( http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
References

[1] M H Raibert. Legged robots that balance. Cambridge: MIT Press, 1986.

[2] K Nagatani, S I Yuta. Designing a behavior to open a door and to pass through a door-way using a mobile robot equipped with a manipulator. Proceedings of the IEEE/RSJ/GI International Conference on Intelligent Robots and Systems, Munich, Germany, September 12–16, 1994: 847–853.

[3] J Craft, J Wilson, W H Huang, et al. Aladdin: a semi-autonomous door opening system for EOD-class robots. Proceedings of the SPIE Unmanned Systems Technology XIII, Orlando, USA, April 25, 2011: 804509-1.

[4] B Axelrod, W H Huang. Autonomous door opening and traversal. Proceedings of the IEEE International Conference on Technologies for Practical Robot Applications, Boston, USA, May 11–12, 2015: 1–6.

[5] A Jain, C C Kemp. Behavior-based door opening with equilibrium point control. Proceedings of the RSS Workshop on Mobile Manipulation in Human Environments, Seattle, USA, June 28, 2009: 1–8.

[6] W Chung, C Rhee, Y Shim, et al. Door-opening control of a service robot using the multifingered robot hand. IEEE Transactions on Industrial Electronics, 2009, 56(10): 3975–3984.

[7] D Kim, J H Kang, C S Hwang, et al. Mobile robot for door opening in a house. Proceedings of the International Conference on Knowledge-Based Intelligent Information and Engineering Systems, Wellington, New Zealand, September 20–25, 2004: 596–602.

[8] H Arisumi, J R Chardonnet, K Yokoi. Whole-body motion of a humanoid robot for passing through a door-opening a door by impulsive force. Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, Saint Louis, USA, October 11–15, 2009: 428–434.

[9] M Zucker, Y Jun, B Killen, et al. Continuous trajectory optimization for autonomous humanoid door opening. Proceedings of the IEEE International Conference on Technologies for Practical Robot Applications, Boston, USA, April 22–23, 2013: 1–5.

[10] N Banerjee, X C Long, R X Du, et al. Human-supervised control of the ATLAS humanoid robot for traversing doors. Proceedings of the IEEE-RAS International Conference on Humanoid Robots, Seoul, Korea, November 3–5, 2015: 722–729.

[11] J Lee, A Ajoudani, E M Hoffman, et al. Upper-body impedance control with variable stiffness for a door opening task. Proceedings of the IEEE-RAS International Conference on Humanoid Robots, Madrid, Spain, November 18–20, 2014: 713–719.

[12] M González-Fierro, D Hernández-García, T Nanayakkara, et al. Behavior sequencing based on demonstrations: a case of a humanoid opening a door while walking. Advanced Robotics, 2015, 29(5): 315–329.

[13] E Ackerman. Boston Dynamics' SpotMini is all electric, agile, and has a capable face-arm. New York: IEEE Spectrum, 2016 [2016-10-17]. http://spectrum.ieee.org/automaton/robotics/home-robots/boston-dynamicsspotmini.

[14] E Ackerman. Ghost Robotics' Minitaur quadruped conquers stairs, doors, and fences and is somehow affordable. New York: IEEE Spectrum, 2016 [2016-10-17]. http://spectrum.ieee.org/automaton/robotics/roboticshardware/ghost-robotics-minitaur-quadruped.

[15] J Moreno, D Martínez, M Tresanchez, et al. A combined approach to the problem of opening a door with an assistant mobile robot. Proceedings of the International Conference on Ubiquitous Computing & Ambient Intelligence, Belfast, Northern Ireland, December 2–5, 2014: 9–12.

[16] E Klingbeil, A Saxena, A Y Ng. Learning to open new doors. Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, Taipei, Taiwan, October 18–22, 2010: 2751–2757.

[17] D Ignakov, G Okouneva, G J Liu. Localization of a door handle of unknown geometry using a single camera for door-opening with a mobile manipulator. Autonomous Robots, 2012, 33(4): 415–426.

[18] A H Adiwahono, Y Chua, K P Tee, et al. Automated door opening scheme for non-holonomic mobile manipulator. Proceedings of the International Conference on Control, Automation and Systems, Gwangju, Korea, October 20–23, 2013: 839–844.

[19] A Petrovskaya, A Y Ng. Probabilistic mobile manipulation in dynamic environments, with application to opening doors. Proceedings of the International Joint Conference on Artificial Intelligence, Hyderabad, India, January 6–12, 2007: 2178–2184.

[20] S Kobayashi, Y Kobayashi, Y Yamamoto, et al. Development of a door opening system on rescue robot for search "UMRS-2007". Proceedings of the SICE Annual Conference, Tokyo, Japan, August 20–22, 2008: 2062–2065.
Zurück zum Zitat T Winiarski, K Banachowicz, D Seredyński. Multi-sensory feedback control in door approaching and opening. Proceedings of the International Conference on Intelligent Systems. Warsaw, Poland, September 24–26, 2014: 57–70. T Winiarski, K Banachowicz, D Seredyński. Multi-sensory feedback control in door approaching and opening. Proceedings of the International Conference on Intelligent Systems. Warsaw, Poland, September 24–26, 2014: 57–70.
Zurück zum Zitat A J Schmid, N Gorges, D Goger, et al. Opening a door with a humanoid robot using multi-sensory tactile feedback. Proceedings of the International Conference on Robotics and Automation, Pasadena, USA, May 19–23, 2008: 285–291. A J Schmid, N Gorges, D Goger, et al. Opening a door with a humanoid robot using multi-sensory tactile feedback. Proceedings of the International Conference on Robotics and Automation, Pasadena, USA, May 19–23, 2008: 285–291.
Zurück zum Zitat M Prats, P J Sanz, A P del Pobil. Reliable non-prehensile door opening through the combination of vision, tactile and force feedback. Autonomous Robots, 2010, 29(2): 201–218. M Prats, P J Sanz, A P del Pobil. Reliable non-prehensile door opening through the combination of vision, tactile and force feedback. Autonomous Robots, 2010, 29(2): 201–218.
Zurück zum Zitat Y Pan, F Gao, C K Qi, et al. Human-tracking strategies for a six-legged rescue robot based on distance and view. Chinese Journal of Mechanical Engineering, 2016, 29(2): 219–230. Y Pan, F Gao, C K Qi, et al. Human-tracking strategies for a six-legged rescue robot based on distance and view. Chinese Journal of Mechanical Engineering, 2016, 29(2): 219–230.
Zurück zum Zitat F Farelo, R Alqasemi, R Dubey. Task-oriented control of a 9-DoF WMRA System for opening a spring-loaded door task. Proceedings of the International Conference on Rehabilitation Robotics, Zurich, Switzerland, June 29–July 01, 2011: 1–6. F Farelo, R Alqasemi, R Dubey. Task-oriented control of a 9-DoF WMRA System for opening a spring-loaded door task. Proceedings of the International Conference on Rehabilitation Robotics, Zurich, Switzerland, June 29–July 01, 2011: 1–6.
Zurück zum Zitat H W Zhang, Y G Liu, G J Liu. Multiple mode control of a compact wrist with application to door opening. Mechatronics, 2013, 23(1): 10–20. H W Zhang, Y G Liu, G J Liu. Multiple mode control of a compact wrist with application to door opening. Mechatronics, 2013, 23(1): 10–20.
Zurück zum Zitat S Ahmad, G J Liu. A door opening method by modular re-configurable robot with joints working on passive and active modes. Proceedings of the International Conference on Robotics and Automation, Anchorage, USA, May 03–08, 2010: 1480–1485. S Ahmad, G J Liu. A door opening method by modular re-configurable robot with joints working on passive and active modes. Proceedings of the International Conference on Robotics and Automation, Anchorage, USA, May 03–08, 2010: 1480–1485.
Zurück zum Zitat S Ahmad, H W Zhang, G J Liu. Multiple working mode control of door-opening with a mobile modular and reconfigurable robot. IEEE/ASME Transactions on Mechatronics, 2013, 18(3): 833–844. S Ahmad, H W Zhang, G J Liu. Multiple working mode control of door-opening with a mobile modular and reconfigurable robot. IEEE/ASME Transactions on Mechatronics, 2013, 18(3): 833–844.
Zurück zum Zitat T Winiarski, K Banachowicz. Opening a door with a redundant impedance controlled robot. Proceedings of the Workshop on Robot Motion and Control, Kuslin, Poland, July 03–05, 2013: 221–226. T Winiarski, K Banachowicz. Opening a door with a redundant impedance controlled robot. Proceedings of the Workshop on Robot Motion and Control, Kuslin, Poland, July 03–05, 2013: 221–226.
Zurück zum Zitat Y Karayiannidis, C Smith, P Ögren, et al. Adaptive force/velocity control for opening unknown doors. Proceedings of the International IFAC Symposium on Robot Control, Dubrovnik, Croatia, September 05–07, 2012: 753 –758. Y Karayiannidis, C Smith, P Ögren, et al. Adaptive force/velocity control for opening unknown doors. Proceedings of the International IFAC Symposium on Robot Control, Dubrovnik, Croatia, September 05–07, 2012: 753 –758.
Zurück zum Zitat W Guo, J C Wang, W D Chen. A manipulability improving scheme for opening unknown doors with mobile manipulator. Proceedings of the International Conference on Robotics and Biomimetics, Hanoi, Vietnam, December 5–10, 2014: 1362–1367. W Guo, J C Wang, W D Chen. A manipulability improving scheme for opening unknown doors with mobile manipulator. Proceedings of the International Conference on Robotics and Biomimetics, Hanoi, Vietnam, December 5–10, 2014: 1362–1367.
Zurück zum Zitat T Rühr, J Sturm, D Pangercic, et al. A generalized framework for opening doors and drawers in kitchen environments. Proceedings of the International Conference on Robotics and Automation, Saint Paul, USA, May 14–18, 2012: 3852–3858. T Rühr, J Sturm, D Pangercic, et al. A generalized framework for opening doors and drawers in kitchen environments. Proceedings of the International Conference on Robotics and Automation, Saint Paul, USA, May 14–18, 2012: 3852–3858.
Zurück zum Zitat S Chitta, B Cohen, M Likhachev. Planning for autonomous door opening with a mobile manipulator. Proceedings of the International Conference on Robotics and Automation, Anchorage, USA, May 03–08, 2010: 1799–1806. S Chitta, B Cohen, M Likhachev. Planning for autonomous door opening with a mobile manipulator. Proceedings of the International Conference on Robotics and Automation, Anchorage, USA, May 03–08, 2010: 1799–1806.
Zurück zum Zitat Y Pan, F Gao. A new 6-parallel-legged walking robot for drilling holes on the fuselage. Proceedings of the Institution of Mechanical Engineers, Part C: Journal of Mechanical Engineering Science, 2014, 228(4): 753–764. Y Pan, F Gao. A new 6-parallel-legged walking robot for drilling holes on the fuselage. Proceedings of the Institution of Mechanical Engineers, Part C: Journal of Mechanical Engineering Science, 2014, 228(4): 753–764.
Zurück zum Zitat Y L Xu, F Gao, Y Pan, et al. Method for six-legged robot stepping on obstacles by indirect force estimation. Chinese Journal of Mechanical Engineering, 2016, 29(4): 669–679. Y L Xu, F Gao, Y Pan, et al. Method for six-legged robot stepping on obstacles by indirect force estimation. Chinese Journal of Mechanical Engineering, 2016, 29(4): 669–679.
Zurück zum Zitat J He, F Gao, X D Meng, et al. Type synthesis for 4-DOF parallel press mechanism using GF set theory. Chinese Journal of Mechanical Engineering, 2015, 28(4): 851–859. J He, F Gao, X D Meng, et al. Type synthesis for 4-DOF parallel press mechanism using GF set theory. Chinese Journal of Mechanical Engineering, 2015, 28(4): 851–859.
Zurück zum Zitat C Z Wang, Y F Fang, S Guo. Multi-objective optimization of a parallel ankle rehabilitation robot using modified differential evolution algorithm. Chinese Journal of Mechanical Engineering, 2015, 28(4): 702–715. C Z Wang, Y F Fang, S Guo. Multi-objective optimization of a parallel ankle rehabilitation robot using modified differential evolution algorithm. Chinese Journal of Mechanical Engineering, 2015, 28(4): 702–715.
Zurück zum Zitat H B Qu, Y F Fang, S Guo. Theory of degrees of freedom for parallel mechanisms with three spherical joints and its applications. Chinese Journal of Mechanical Engineering, 2015, 28(4): 737–746. H B Qu, Y F Fang, S Guo. Theory of degrees of freedom for parallel mechanisms with three spherical joints and its applications. Chinese Journal of Mechanical Engineering, 2015, 28(4): 737–746.
Zurück zum Zitat X L Ding, K Xu. Gait analysis of a radial symmetrical hexapod robot based on parallel mechanisms. Chinese Journal of Mechanical Engineering, 2014, 27(5): 867–879. X L Ding, K Xu. Gait analysis of a radial symmetrical hexapod robot based on parallel mechanisms. Chinese Journal of Mechanical Engineering, 2014, 27(5): 867–879.
Zurück zum Zitat M F Wang, M Ceccarelli. Topology search of 3-DOF translational parallel manipulators with three identical limbs for leg mechanisms. Chinese Journal of Mechanical Engineering, 2015, 28(4): 666–675. M F Wang, M Ceccarelli. Topology search of 3-DOF translational parallel manipulators with three identical limbs for leg mechanisms. Chinese Journal of Mechanical Engineering, 2015, 28(4): 666–675.
Zhi-Jun Chen
Feng Gao
Yang Pan
Chinese Mechanical Engineering Society
Chinese Journal of Mechanical Engineering
Challenges and Requirements for the Application of Industry 4.0: A Special Insight with the Usage of Cyber-Physical System
New Approach for Measured Surface Localization Based on Umbilical Points
Research and Development Trend of Shape Control for Cold Rolling Strip
Experimental Dynamic Analysis of a Breathing Cracked Rotor
Smart Cutting Tools and Smart Machining: Development Approaches, and Their Implementation and Application Perspectives
Product Data Model for Performance-driven Design
Die im Laufe eines Jahres in der "adhäsion" veröffentlichten Marktübersichten helfen Anwendern verschiedenster Branchen, sich einen gezielten Überblick über Lieferantenangebote zu verschaffen.
Zur Marktübersicht
in-adhesives, MKVS, Nordson/© Nordson, ViscoTec/© ViscoTec, Hellmich GmbH/© Hellmich GmbH | CommonCrawl |
One study of helicopter pilots suggested that 600 mg of modafinil given in three doses can be used to keep pilots alert and maintain their accuracy at pre-deprivation levels for 40 hours without sleep.[60] However, significant levels of nausea and vertigo were observed. Another study of fighter pilots showed that modafinil given in three divided 100 mg doses sustained the flight control accuracy of sleep-deprived F-117 pilots to within about 27% of baseline levels for 37 hours, without any considerable side effects.[61] In an 88-hour sleep loss study of simulated military grounds operations, 400 mg/day doses were mildly helpful at maintaining alertness and performance of subjects compared to placebo, but the researchers concluded that this dose was not high enough to compensate for most of the effects of complete sleep loss.
Powders are good for experimenting with (easy to vary doses and mix), but not so good for regular taking. I use OO gel capsules with a Capsule Machine: it's hard to beat $20, it works, it's not that messy after practice, and it's not too bad to do 100 pills. However, I once did 3kg of piracetam + my other powders, and doing that nearly burned me out on ever using capsules again. If you're going to do that much, something more automated is a serious question! (What actually wound up infuriating me the most was when capsules would stick in either the bottom or top tray - requiring you to very gingerly pull and twist them out, lest the two halves slip and spill powder - or when the two halves wouldn't lock and you had to join them by hand. In contrast: loading the gel caps could be done automatically without looking, after some experience.)
As expected since most of the data overlaps with the previous LLLT analysis, the LLLT variable correlates strongly; the individual magnesium variables may look a little more questionable but were justified in the magnesium citrate analysis. The Noopept result looks a little surprising - almost zero effect? Let's split by dose (which was the point of the whole rigmarole of changing dose levels):
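A dose split like the one just described can be sketched in a few lines. This is only an illustration: the dose levels and daily ratings below are made-up placeholder values, not the author's actual log.

```python
from collections import defaultdict
from statistics import mean

# Hypothetical daily log of (Noopept dose in mg, mood/productivity rating);
# these numbers are illustrative, not the author's data.
log = [(0, 3), (10, 4), (0, 2), (20, 4), (0, 3), (30, 5), (0, 2), (10, 4)]

# Group ratings by dose level, then compare means per level --
# the "split by dose" step described in the text.
by_dose = defaultdict(list)
for dose, rating in log:
    by_dose[dose].append(rating)

for dose in sorted(by_dose):
    print(dose, mean(by_dose[dose]))
```

With real data one would also want a confidence interval per dose level, since the per-dose subsamples are small.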
Since the discovery of the effect of nootropics on memory and focus, the number of products on the market has increased exponentially. The ingredients used in a supplement can tell you about the effectiveness of the product. Brain enhancement pills that produce the greatest benefit are formulated with natural vitamins and substances, rather than caffeine and synthetic ingredients. In addition to better results, natural supplements are less likely to produce side effects, compared with drugs formulated with chemical ingredients.
Low-dose lithium orotate is extremely cheap, ~$10 a year. There is some research literature on it improving mood and impulse control in regular people, but some of it is epidemiological (which implies considerable unreliability); my current belief is that there is probably some effect size, but at just 5mg, it may be too tiny to matter. I have ~40% belief that there will be a large effect size, but I'm doing a long experiment and I should be able to detect a large effect size with >75% chance. So, the formula is NPV of the difference between taking and not taking, times quality of information, times expectation: (10 − 0) / ln(1.05) × 0.75 × 0.40 ≈ 61.4, which justifies a time investment of less than 9 hours. As it happens, it took less than an hour to make the pills & placebos, and taking them is a matter of seconds per week, so the analysis will be the time-consuming part. This one may actually turn a profit.
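The value-of-information arithmetic above can be checked mechanically. A minimal sketch using the same figures (a perpetuity NPV at a 5% discount rate, scaled by information quality and the prior probability of a large effect); the function name is mine, not the author's:

```python
import math

def experiment_value(annual_benefit, discount_rate, info_quality, p_effect):
    """NPV of a perpetual annual benefit (continuous discounting),
    scaled by experiment quality and the prior probability of an effect."""
    npv = annual_benefit / math.log(1 + discount_rate)
    return npv * info_quality * p_effect

v = experiment_value(10 - 0, 0.05, 0.75, 0.40)
print(v)  # roughly 61.5, matching the ~61.4 figure in the text
```

Dividing the result by an hourly value of one's time gives the "less than 9 hours" budget quoted above.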
This calculation - reaping only 7/9 of the naive expectation - gives one pause. How serious is the sleep rebound? In another article, I point to a mice study that sleep deficits can take 28 days to repay. What if the gain from modafinil is entirely wiped out by repayment and all it did was defer sleep? Would that render modafinil a waste of money? Perhaps. Thinking on it, I believe deferring sleep is of some value, but I cannot decide whether it is a net profit.
Legal issues aside, this wouldn't be very difficult to achieve. Many companies already have in-house doctors who give regular health check-ups — including drug tests — which could be employed to control and regulate usage. Organizations could integrate these drugs into already existing wellness programs, alongside healthy eating, exercise, and good sleep.
While the mechanism is largely unknown, one commonly proposed mechanism is that light of the relevant wavelengths is preferentially absorbed by the protein cytochrome c oxidase, a key protein in mitochondrial metabolism and ATP production, substantially increasing output; this extra output presumably can be useful for cellular activities like healing or higher performance.
Participants (n=205) [young adults aged 18-30 years] were recruited between July 2010 and January 2011, and were randomized to receive either a daily 150 µg (0.15mg) iodine supplement or daily placebo supplement for 32 weeks…After adjusting for baseline cognitive test score, examiner, age, sex, income, and ethnicity, iodine supplementation did not significantly predict 32 week cognitive test scores for Block Design (p=0.385), Digit Span Backward (p=0.474), Matrix Reasoning (p=0.885), Symbol Search (p=0.844), Visual Puzzles (p=0.675), Coding (p=0.858), and Letter-Number Sequencing (p=0.408).
Not included in the list below are prescription psychostimulants such as Adderall and Ritalin. Non-medical, illicit use of these drugs for the purpose of cognitive enhancement in healthy individuals comes with a high cost, including addiction and other adverse effects. Although these drugs are prescribed for those with attention deficit hyperactivity disorder (ADHD) to help with focus, attention and other cognitive functions, they have been shown to in fact impair these same functions when used for non-medical purposes. More alarming, when taken in high doses, they have the potential to induce psychosis.
Harrisburg, NC -- (SBWIRE) -- 02/18/2019 -- Global Smart Pills Technology Market - Segmented by Technology, Disease Indication, and Geography - Growth, Trends, and Forecast (2019 - 2023) The smart pill is a wireless capsule that can be swallowed, and with the help of a receiver (worn by patients) and software that analyzes the pictures captured by the smart pill, the physician is effectively able to examine the gastrointestinal tract. Gastrointestinal disorders have become very common, but recently, there has been increasing incidence of colorectal cancer, inflammatory bowel disease, and Crohn's disease as well.
Another empirical question concerns the effects of stimulants on motivation, which can affect academic and occupational performance independent of cognitive ability. Volkow and colleagues (2004) showed that MPH increased participants' self-rated interest in a relatively dull mathematical task. This is consistent with student reports that prescription stimulants make schoolwork seem more interesting (e.g., DeSantis et al., 2008). To what extent are the motivational effects of prescription stimulants distinct from their cognitive effects, and to what extent might they be more robust to differences in individual traits, dosage, and task? Are the motivational effects of stimulants responsible for their usefulness when taken by normal healthy individuals for cognitive enhancement?
As mentioned earlier, cognitive control is needed not only for inhibiting actions, but also for shifting from one kind of action or mental set to another. The WCST taxes cognitive control by requiring the subject to shift from sorting cards by one dimension (e.g., shape) to another (e.g., color); failures of cognitive control in this task are manifest as perseverative errors in which subjects continue sorting by the previously successful dimension. Three studies included the WCST in their investigations of the effects of d-AMP on cognition (Fleming et al., 1995; Mattay et al., 1996, 2003), and none revealed overall effects of facilitation. However, Mattay et al. (2003) subdivided their subjects according to COMT genotype and found differences in both placebo performance and effects of the drug. Subjects who were homozygous for the val allele (associated with lower prefrontal dopamine activity) made more perseverative errors on placebo than other subjects and improved significantly with d-AMP. Subjects who were homozygous for the met allele performed best on placebo and made more errors on d-AMP.
A LessWronger found that it worked well for him as far as motivation and getting things done went, as did another LessWronger who sells it online (terming it a reasonable productivity enhancer) as did one of his customers, a pickup artist oddly enough. The former was curious whether it would work for me too and sent me Speciosa Pro's Starter Pack: Test Drive (a sampler of 14 packets of powder and a cute little wooden spoon). In SE Asia, kratom's apparently chewed, but the powders are brewed as a tea.
Certain pharmaceuticals could also qualify as nootropics. For at least the past 20 years, a lot of people—students, especially—have turned to attention deficit hyperactivity disorder (ADHD) drugs like Ritalin and Adderall for their supposed concentration-strengthening effects. While there's some evidence that these stimulants can improve focus in people without ADHD, they have also been linked, in both people with and without an ADHD diagnosis, to insomnia, hallucinations, seizures, heart trouble and sudden death, according to a 2012 review of the research in the journal Brain and Behavior. They're also addictive.
Integrity & Reputation: Go with a company that sells more than just a brain formula. If a company is just selling this one item, buyer beware! It is an indication that it is just trying to capitalize on a trend and make a quick buck. Also, if a website selling a brain health formula does not have a highly visible 800# for customer service, you should walk away.
So, I thought I might as well experiment since I have it. I put the 23 remaining pills into gel capsules with brown rice as filling, made ~30 placebo capsules, and will use the one-bag blinding/randomization method. I don't want to spend the time it would take to n-back every day, so I will simply look for an effect on my daily mood/productivity self-rating; hopefully Noopept will add a little on average above and beyond my existing practices like caffeine+piracetam (yes, Noopept may be as good as piracetam, but since I still have a ton of piracetam from my 3kg order, I am primarily interested in whether Noopept adds onto piracetam rather than replaces). 10mg doses seem to be on the low side for Noopept users, weakening the effect, but on the other hand, if I were to take 2 capsules at a time, then I'd halve the sample size; it's not clear what is the optimal tradeoff between dose and n for statistical power.
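The dose-versus-sample-size tradeoff at the end of that paragraph can be framed as a statistical power calculation. This is a sketch using a normal-approximation two-sample z-test; the effect sizes below are hypothetical assumptions for illustration, not measured values:

```python
import math

def power_two_sample(effect_size, n_per_group, z_alpha=1.96):
    """Approximate power of a two-sided, two-sample z-test for a given
    standardized effect size d and per-group sample size n."""
    # Standard normal CDF via erf (no SciPy required).
    phi = lambda x: 0.5 * (1 + math.erf(x / math.sqrt(2)))
    noncentrality = effect_size * math.sqrt(n_per_group / 2)
    return phi(noncentrality - z_alpha)

# Hypothetical tradeoff: taking 2 capsules doubles the effect size
# but halves the number of blinded comparison days.
low_dose  = power_two_sample(0.3, 24)  # 1 capsule, more days
high_dose = power_two_sample(0.6, 12)  # 2 capsules, fewer days
print(low_dose, high_dose)
```

Since power rises with d·√n, doubling the effect while halving n still nets a √2 gain in the noncentrality term, so under these assumptions the larger dose with fewer observations comes out ahead.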
Blinding stymied me for a few months since the nasty taste was unmistakable and I couldn't think of any gums with a similar flavor to serve as placebo. (The nasty taste does not seem to be due to the nicotine despite what one might expect; Vaniver plausibly suggested the bad taste might be intended to prevent over-consumption, but nothing in the Habitrol ingredient list seemed to be noted for its bad taste, and a number of ingredients were sweetening sugars of various sorts. So I couldn't simply flavor some gum.)
These are the most popular nootropics available at the moment. Most of them are the tried-and-tested and the benefits you derive from them are notable (e.g. Guarana). Others are still being researched and there haven't been many human studies on these components (e.g. Piracetam). As always, it's about what works for you and everyone has a unique way of responding to different nootropics.
My first dose on 1 March 2017, at the recommended 0.5ml/1.5mg was miserable, as I felt like I had the flu and had to nap for several hours before I felt well again, requiring 6h to return to normal; after waiting a month, I tried again, but after a week of daily dosing in May, I noticed no benefits; I tried increasing to 3x1.5mg but this immediately caused another afternoon crash/nap on 18 May. So I scrapped my cytisine. Oh well.
One of the most widely known classes of smart drugs on the market, Racetams, have a long history of use and a lot of evidence of their effectiveness. They hasten the chemical exchange between brain cells, directly benefiting our mental clarity and learning process. They are generally not controlled substances and can be purchased without a prescription in a lot of locations globally.
And there are other uses that may make us uncomfortable. The military is interested in modafinil as a drug to maintain combat alertness. A drug such as propranolol could be used to protect soldiers from the horrors of war. That could be considered a good thing – post-traumatic stress disorder is common in soldiers. But the notion of troops being unaffected by their experiences makes many feel uneasy.
The intradimensional–extradimensional shift task from the CANTAB battery was used in two studies of MPH and measures the ability to shift the response criterion from one dimension to another, as in the WCST, as well as to measure other abilities, including reversal learning, measured by performance in the trials following an intradimensional shift. With an intradimensional shift, the learned association between values of a given stimulus dimension and reward versus no reward is reversed, and participants must learn to reverse their responses accordingly. Elliott et al. (1997) reported finding no effects of the drug on ability to shift among dimensions in the extradimensional shift condition and did not describe performance on the intradimensional shift. Rogers et al. (1999) found that accuracy improved but responses slowed with MPH on trials requiring a shift from one dimension to another, which leaves open the question of whether the drug produced net enhancement, interference, or neither on these trials once the tradeoff between speed and accuracy is taken into account. For intradimensional shifts, which require reversal learning, these authors found drug-induced impairment: significantly slower responding accompanied by a borderline-significant impairment of accuracy.
Do you want to try Nootropics, but confused with the plethora of information available online? If that's the case, then you might get further confused about what nootropic supplement you should buy that specifically caters to your needs. Here is a list of the top 10 Nootropics or 10 best brain supplements available in the market, and their corresponding uses:
Dallas Michael Cyr, a 41-year-old life coach and business mentor in San Diego, California, also says he experienced a mental improvement when he regularly took another product called Qualia Mind, which its makers say enhances focus, energy, mental clarity, memory and even creativity and mood. "One of the biggest things I noticed was it was much more difficult to be distracted," says Cyr, who took the supplements for about six months but felt their effects last longer. While he's naturally great at starting projects and tasks, the product allowed him to be a "great finisher" too, he says.
The pill delivers an intestinal injection without exposing the drug to digestive enzymes. The patient takes what seems to be an ordinary capsule, but the "robotic" pill is a sophisticated device which incorporates a number of innovations, enabling it to navigate through the stomach and enter the small intestine. The Rani Pill™ goes through a transformation and positions itself to inject the drug into the intestinal wall.
as scientific papers become much more accessible online due to Open Access, digitization by publishers, and cheap hosting for pirates, the available knowledge about nootropics increases drastically. This reduces the perceived risk by users, and enables them to educate themselves and make much more sophisticated estimates of risk and side-effects and benefits. (Take my modafinil page: in 1997, how could an average person get their hands on any of the papers available up to that point? Or get detailed info like the FDA's prescribing guide? Even assuming they had a computer & Internet?)
"A system that will monitor their behavior and send signals out of their body and notify their doctor? You would think that, whether in psychiatry or general medicine, drugs for almost any other condition would be a better place to start than a drug for schizophrenia," says Paul Appelbaum, director of Columbia University's psychiatry department in an interview with the New York Times.
Segmental analysis of the key components of the global smart pills market has been performed based on application, target area, disease indication, end-user, and region. Applications of smart pills are found in capsule endoscopy, drug delivery, patient monitoring, and others. Sub-division of the capsule endoscopy segment includes small bowel capsule endoscopy, controllable capsule endoscopy, colon capsule endoscopy, and others. Meanwhile, the patient monitoring segment is further divided into capsule pH monitoring and others.
More photos from this reportage are featured in Quartz's new book The Objects that Power the Global Economy. You may not have seen these objects before, but they've already changed the way you live. Each chapter examines an object that is driving radical change in the global economy. This is from the chapter on the drug modafinil, which explores modifying the mind for a more productive life.
Going back to the 1960s, although it was a Romanian chemist who is credited with discovering nootropics, a substantial amount of research on racetams was conducted in the Soviet Union. This resulted in the birth of another category of substances entirely: adaptogens, which, in addition to benefiting cognitive function were thought to allow the body to better adapt to stress.
While the commentary makes effective arguments — that this isn't cheating, because cheating is based on what the rules are; that this is fair, because hiring a tutor isn't outlawed for being unfair to those who can't afford it; that this isn't unnatural, because humans with computers and antibiotics have been shaping what is natural for millennia; that this isn't drug abuse anymore than taking multivitamins is — the authors seem divorced from reality in the examples they provide of effective stimulant use today.
Adrafinil is Modafinil's predecessor; scientists first tested it as a potential narcolepsy drug. It was first produced in 1974 and immediately showed potential as a wakefulness-promoting compound. Further research showed that Adrafinil is metabolized in the liver into modafinil and inactive modafinil acid; ultimately, Modafinil has been proclaimed the primary active compound in Adrafinil.
"I love this book! As someone that deals with an autoimmune condition, I deal with severe brain fog. I'm currently in school and this has had a very negative impact on my learning. I have been looking for something like this to help my brain function better. This book has me thinking clearer, and my memory has improved. I'm eating healthier and overall feeling much better. This book is very easy to follow and also has some great recipes included."
A related task is the B–X version of the CPT, in which subjects must respond when an X appears only if it was preceded by a B. As in the 1-back task, the subject must retain the previous trial's letter in working memory because it determines the subject's response to the current letter. In this case, when the current letter is an X, then the subject should respond only if the previous letter was a B. Two studies examined stimulant effects in this task. Rapoport et al. (1980) found that d-AMP reduced errors of omission in the longer of two test sessions, and Klorman et al. (1984) found that MPH reduced errors of omission and response time.
But, if we find in 10 or 20 years that the drugs don't do damage, what are the benefits? These are stimulants that help with concentration. College students take such drugs to pass tests; graduates take them to gain professional licenses. They are akin to using a calculator to solve an equation. Do you really want a doctor who passed his boards as a result of taking speed — and continues to depend on that for his practice?
In addition, while the laboratory research reviewed here is of interest concerning the effects of stimulant drugs on specific cognitive processes, it does not tell us about the effects on cognition in the real world. How do these drugs affect academic performance when used by students? How do they affect the total knowledge and understanding that students take with them from a course? How do they affect various aspects of occupational performance? Similar questions have been addressed in relation to students and workers with ADHD (Barbaresi, Katusic, Colligan, Weaver, & Jacobsen, 2007; Halmøy, Fasmer, Gillberg, & Haavik, 2009; see also Advokat, 2010) but have yet to be addressed in the context of cognitive enhancement of normal individuals.
The title question, whether prescription stimulants are smart pills, does not find a unanimous answer in the literature. The preponderance of evidence is consistent with enhanced consolidation of long-term declarative memory. For executive function, the overall pattern of evidence is much less clear. Over a third of the findings show no effect on the cognitive processes of healthy nonelderly adults. Of the rest, most show enhancement, although impairment has been reported (e.g., Rogers et al., 1999), and certain subsets of participants may experience impairment (e.g., higher performing participants and/or those homozygous for the met allele of the COMT gene performed worse on drug than placebo; Mattay et al., 2000, 2003). Whereas the overall trend is toward enhancement of executive function, the literature contains many exceptions to this trend. Furthermore, publication bias may lead to underreporting of these exceptions.
Ethical issues also arise with the use of drugs to boost brain power. Their use as cognitive enhancers isn't currently regulated. But should it be, just as the use of certain performance-enhancing drugs is regulated for professional athletes? Should universities consider dope testing to check that students aren't gaining an unfair advantage through drug use?
"Such an informative and inspiring read! Insight into how optimal nutrients improved Cavin's own brain recovery make this knowledge-filled read compelling and relatable. The recommendations are easy to understand as well as scientifically-founded – it's not another fad diet manual. The additional tools and resources provided throughout make it possible for anyone to integrate these enhancements into their nutritional repertoire. Looking forward to more from Cavin and Feed a Brain!!!!!!"
A synthetic derivative of Piracetam, aniracetam is believed to be the second most widely used nootropic in the Racetam family, popular for its stimulatory effects because it enters the bloodstream quickly. Initially developed for memory and learning, it is also claimed in many anecdotal reports to increase creativity. However, studies in healthy adult mice show no effect on cognitive functioning.
One of the most popular legal stimulants in the world, nicotine is often conflated with the harmful effects of tobacco; considered on its own, it has performance & possibly health benefits. Nicotine is widely available at moderate prices as long-acting nicotine patches, gums, lozenges, and suspended in water for vaping. While intended for smoking cessation, there is no reason one cannot use a nicotine patch or nicotine gum for its stimulant effects.
Those who have taken them swear they do work – though not in the way you might think. Back in 2015, a review of the evidence found that their impact on intelligence is "modest". But most people don't take them to improve their mental abilities. Instead, they take them to improve their mental energy and motivation to work. (Both drugs also come with serious risks and side effects – more on those later).
Along with the previous bit of globalization, another important factor is that shipping is ridiculously cheap. The most expensive S&H in my modafinil price table is ~$15 (and most are international). To put this in perspective, I remember in the 90s you could easily pay $15 for domestic S&H when you ordered online - but it's 2013, and the dollar has lost at least half its value, so in real terms, ordering from abroad may be like a quarter of what it used to cost, which makes a big difference to people dipping their toes in and contemplating a small order to try out this 'nootropics' thing they've heard about.
The chemical Huperzine-A (Examine.com) is extracted from a moss. It is an acetylcholinesterase inhibitor (instead of forcing out more acetylcholine like the -racetams, it prevents acetylcholine from breaking down). My experience report: One for the null hypothesis files - Huperzine-A did nothing for me. Unlike piracetam or fish oil, after a full bottle (Source Naturals, 120 pills at 200μg each), I noticed no side-effects, no mental improvements of any kind, and no changes in DNB scores from straight Huperzine-A.
The truth is that, almost 20 years ago when my brain was failing and I was fat and tired, I did not know to follow this advice. I bought $1000 worth of smart drugs from Europe, took them all at once out of desperation, and got enough cognitive function to save my career and tackle my metabolic problems. With the information we have now, you don't need to do that. Please learn from my mistakes! | CommonCrawl |
A frequency-sharing weather radar network system using pulse compression and sidelobe suppression
Hwang-Ki Min1,2,
Myung-Sun Song1,
Iickho Song2 &
Jae-Han Lim1
To mitigate damages due to natural disasters and abruptly changing weather, the importance of a weather radar network system (WRNS) is growing. Because radars in the current form of a WRNS operate in distinct frequency bands, operating a WRNS consisting of a large number of radars is very costly in terms of frequency resource. In this paper, we propose a novel WRNS in which multi-site weather radars share the same frequency band. By employing pulse compression with nearly orthogonal polyphase codes and sidelobe removal processing, a weather radar of the proposed frequency-sharing WRNS addresses inter-site and intra-site interferences simultaneously. Through computer simulations, we show the feasibility of the proposed system taking the performance requirement of a typical single weather radar into account.
The number of natural disasters has been increasing sharply since 1970 [1]. According to the statistics report in [2], over 330 natural disasters occurred in 2013, taking the lives of 21,610 people and causing serious economic damage ($156.7 billion). In an attempt to prevent damages from natural disasters, several research institutes (e.g., Oklahoma University) and governmental agencies (e.g., National Oceanic and Atmosphere Administration (NOAA)) have been collaborating with each other to develop a weather radar network system (WRNS) that enables forecasting unusual weather phenomena effectively.
A WRNS is composed of multiple weather radars, which are deployed in different geographical positions for monitoring the weather phenomena nearby. More specifically, each radar transmits radio pulse signal periodically, captures the signal that has been backscattered from weather targets, and extracts important parameters from the signal (e.g., reflectivity and radial velocity) for characterizing the current weather status. Although the current form of a WRNS works adequately, it has a significant limitation in terms of the frequency efficiency.
Frequency spectrum is a fundamental resource for wireless communication and remote sensing, but we lack the resource due to explosive demands for mobile communications. Despite the problem of frequency scarcity, the goal of the current deployment of a WRNS is not aligned with the frequency efficiency. To be specific, radars in the current WRNSs operate in distinct frequency bands; thus, the total bandwidth that is required for a WRNS increases linearly with the number of radar deployments. For example, in the US WRNS, 160 radars are deployed throughout the USA, each of which requires 5 MHz bandwidth [3]. Although a frequency reuse scheme can be employed in the deployment, we must allocate a significantly wide frequency band to the WRNS, which is extremely costly.
A simple solution is to make multiple radars share the same frequency band. Frequency sharing can be realized by operating radars separately in other domains than the frequency domain: as in wireless communications area, we can consider the time domain and code domain. In the approach of frequency sharing by separating radars in the time domain, only one radar operates for a certain period of time and, after the period, one of the other radars starts to operate in a given order. The main problem in this approach is that the time interval from a radar's pulse emission to the next pulse emission grows linearly with the number of radars that share the spectrum. Due to the limitation of observation time, the increase in the pulse-to-pulse time interval reduces the amount of sensing data averaged over the whole observation time, which degrades estimation and forecasting accuracies.
In this paper, we consider frequency sharing by separating radars in the code domain, in which each radar is distinguished by exploiting its own code "orthogonal" to those of the other radars. (In a WRNS, the word "orthogonal" is used in a different sense, which will be defined in Section 3). By separating the radars of a WRNS in the code domain, we can avoid the problem of the decrease in estimation accuracy in the time domain approach. In the code domain approach, we need to employ the technique of pulse compression [4]. Pulse compression is a technique employed in a single radar (which is not necessarily a weather radar) system in order to improve the performances of range resolution and sensitivity together under the limited peak power. Meanwhile, in this paper, the technique of pulse compression is adopted for achieving the objective of frequency sharing in a WRNS in combination with orthogonal codes. Specifically, a radar transmits a pulse modulated with its own (pulse compression) code in waveform generation; on the reception mode, the radar can extract its own signal by canceling out the signals from other radars via the matched filter (MF).
Translating the idea of frequency sharing by using orthogonal codes into a feasible WRNS is an extremely difficult problem in itself. The difficulty is mainly attributed to significant signal interferences, which hinder accurate estimation of weather parameters. The interferences in a WRNS can be categorized into (1) inter-site interference and (2) intra-site interference.
Inter-site interference occurs when multiple radars share the same frequency band. Specifically, a signal associated with one radar can interfere with those associated with other radars. The interference inevitably distorts the received signals of the radars, thereby leading to inaccurate estimations of weather parameters. On the other hand, intra-site interference happens even when there is only one radar operating in the frequency band. It can be regarded as a kind of self-interference due to partial overlap of the multiple backscattered signals that occur when the radar signal is backscattered by closely located multiple targets. This intra-site interference becomes highly serious in a radar system that employs a long pulse (e.g., pulse compression radar) and that deals with volume-type targets (e.g., weather radar).
There have been several studies to mitigate inter-site interferences by proposing nearly orthogonal codes. In [5], an algorithm was proposed for deriving nearly orthogonal codes that can be exploited in multi-static radar network systems. In [6, 7], a design framework was proposed for generating polyphase codes that can be adopted for orthogonal netted radar systems. On the other hand, several studies have focused on mitigating intra-site interferences. For example, in [8], an algorithm was proposed to use an inverse filter in order to reconstruct real peaks from the received signal. In [9], effective sidelobe suppression algorithms were introduced for discrete point targets and contiguous scattering targets, which we refer to as the CLEAN algorithm in this paper. In [10], a combination of the phase distortion and spectrum modification techniques was proposed for sidelobe suppression in a single weather radar with pulse compression. Unfortunately, none of the previous approaches in [5–10] is appropriate to successfully address the two challenges (intra-site and inter-site interferences) simultaneously.
In this paper, we propose a novel WRNS in which multi-site weather radars operate in the same frequency band, with the key issue of overcoming the two challenges described above simultaneously. The proposed system suppresses the inter-site interference by adopting pulse compression with nearly orthogonal codes, and at the same time, removes the intra-site interference based on a well-known sidelobe suppression mechanism. Through computer simulations, the proposed frequency-sharing WRNS is shown to be feasible even when the performance requirement of conventional single weather radar systems is applied.
The following two points set our work apart from previous works on weather radar systems:
The novelty of this paper mainly lies in the design of an architecture of a novel WRNS that enables the constituent radars to share the same frequency band. In Fig. 1, we have presented the architecture of the proposed frequency-sharing WRNS. In the transmission mode, each radar transmits a pulse modulated with a distinct code from nearly orthogonal pulse compression codes. Then, in the reception mode, each radar captures signals backscattered by weather targets, which contain the signals originated from other radars also. The received signal first goes through the MF, which results in suppression of the inter-site interference. Subsequently, by applying an additional process of sidelobe suppression, the intra-site interference is mitigated effectively.
System architecture of the proposed frequency-sharing WRNS
We have conducted an elaborate study on the feasibility of the WRNS. Specifically, the estimation accuracies of two weather parameters, reflectivity and velocity, satisfied the performance requirements of a typical single weather radar, WSR-88D, under the expected signal-to-interference ratio (SIR) condition. To the best of our knowledge, this study is the first to validate the feasibility of frequency-sharing weather radar systems.
The rest of this paper is organized as follows. We will first present the system model of a WRNS in Section 2. In Section 3, the key techniques for frequency sharing in the WRNS are described. Next, the performance of the proposed system is evaluated through computer simulations in Section 4. Finally, our work will be concluded in Section 5.
In this section, we present a system model of a WRNS, which is defined as a group of weather radars that have the same prior information (e.g., code set) and reside in the same frequency channel. Before explaining the details, we have first summarized the nomenclature and main assumptions of this paper in Tables 1 and 2. The assumption on reflection model is made because granularity for measuring weather parameters is a range bin in reality. By using a virtual point target for representing weather scatterers (hydrometeors) in each range bin, we can greatly simplify the reflection model while maintaining estimation accuracies on weather parameters. For this reason, this assumption has been used in a number of weather radar studies [8, 11]. The second assumption is made because this feasibility study focuses on weather conditions in which a target velocity rarely changes over the observation time as in the case of, for example, (stable) rain fall or snow fall.
Table 1 Nomenclature
Table 2 Main assumptions
Assume a WRNS consisting of N mono-static weather radars with pulse compression, where the nth radar is denoted by Radar-n. To understand how each radar can extract weather parameters in the presence of other radar signals, we focus on the operations of removing interferences and reconstructing target information in a specific radar (we call this radar the main radar) and regard the other radars as interferers (we call these radars interfering radars). Without loss of generality, we consider Radar-1 as the main radar and Radars-2, 3, ⋯, N as interfering radars throughout this paper.
Each radar periodically switches its antenna mode between the transmission and reception modes. Specifically, in the transmission mode, the radar transmits its pulse in the current direction of the antenna; it switches the mode to the reception mode and captures radar signals in the air for a while. Then, it switches back to the transmission mode for transmitting the next pulse. Here, the interval between two adjacent transmissions is called the pulse repetition time (PRT), and is denoted by T. Note that, in order to improve observation performance for a specific direction, a radar stays in the direction during a number of repetitions of the two modes: we denote this number of repetitions by K.
Pulse transmission
We assume that all the radars in the WRNS considered in this study employ pulse compression. The purpose of employing pulse compression in our system is entirely different from that in conventional single radar systems. In conventional single radars, pulse compression is employed in order to improve both performances of range resolution and sensitivity (i.e., target detection capability) under the limitation of peak power [12]. On the other hand, in the WRNS of this study, pulse compression is for removing interferences from other radars by employing mutually uncorrelated codes, which will be addressed in detail in Section 3.
In the transmission mode, Radar-1 transmits a pulse modulated with its own code, where each sub-pulse represents an element of the code in order. Let us denote the pulse (equivalently, pulse compression code) of Radar-1 by
$$\begin{array}{@{}rcl@{}} \boldsymbol s_{1} = \left[ s_{1}[\!1], s_{1}[\!2], \cdots, s_{1}[\!L]\!\right], \end{array} $$
where L is the code length and corresponds to the pulse duration time \(\tau = L \Delta\tau\). Here, \(\Delta\tau\) denotes the sub-pulse duration time.
Reflection of pulse by weather targets
As illustrated in Fig. 2, a pulse transmitted by Radar-1 is reflected by weather targets and returns to Radar-1. Whenever the pulse is reflected by a weather target, its amplitude and phase change according to the reflectivity and velocity of the target. As a weather target actually consists of a tremendous number of weather scatterers, it is extremely difficult to individually consider and model the impact of each scatterer on the reflection of the radar signal. Thus, we employ a simple approach that has been adopted and validated in many previous studies [8, 11]. In the approach, the radar signal's path is divided into multiple resolution volumes (a.k.a. range bins) and it is assumed that there exists a single virtual point target in each range bin. Then, the channel impulse response of a target is defined as the aggregated impacts of all scatterers in the corresponding range bin on the reflection. This approach is illustrated in Fig. 2, in which a series of black circles indicates the virtual (point) targets. For further simplification, we integrate the effect of signal propagation into the channel responses of the targets.
System model of a WRNS
$$\begin{array}{@{}rcl@{}} \boldsymbol h_{1}^{(k)} = \left[h_{1}^{(k)}[\!1], h_{1}^{(k)}[\!2], \cdots, h_{1}^{(k)}[\!L_{T}] \right] \end{array} $$
denote the impulse response of the reflection channel for the kth (\(k = 1, 2, \cdots, K\)) transmitted pulse of Radar-1, which returns the pulse to its source. Here, \(L_{T} = T/\Delta\tau\) is the length equivalent to the PRT. As described above, the reflection channel represents all the reflections caused by the virtual weather targets in the direction that the antenna of Radar-1 is facing. Specifically, the impulse response \(h_{1}^{(k)}[\!i]\) of the ith virtual target can be expressed as
$$\begin{array}{@{}rcl@{}} h_{1}^{(k)}[\!i]=A_{1}[\!i]\text{exp}\left\{j\phi_{1}^{(k)}[\!i]\right\}. \end{array} $$
The amplitude \(A_{1}[\!i] > 0\) in (3) is dependent on the reflectivity of the weather target as
$$\begin{array}{@{}rcl@{}} {A_{1}^{2}}[i] = k_{R} \frac{G_{t}G_{r} Z_{1}[\!i]}{{d_{1}^{2}}[\!i]} \end{array} $$
based on the weather radar equation [13]. Here, \(Z_{1}[\!i]\) is the reflectivity of the ith target in the current direction of Radar-1, \(d_{1}[\!i] = \frac {1}{2}ci \Delta \tau \) is the distance between the target and radar with c denoting the speed of light, \(G_{t}\) and \(G_{r}\) are the antenna gains for transmission and reception, respectively, and the constant \(k_{R}\) depends on the radar parameters such as 3-dB beamwidth, pulse duration time, and wavelength. Note that \(G_{t}\) and \(G_{r}\) rely on the path of the radar signal, which will be addressed in detail later in Section 4.1.3.
The phase \(\phi _{1}^{(k)}[\!i]\) in (3) is associated with the velocity of the weather target. Specifically, it can be expressed as [14]
$$\begin{array}{@{}rcl@{}} \phi_{1}^{(k)}[\!i] = k \varphi_{1}[\!i] + \Delta \varphi_{1} [\!i], \end{array} $$
$$\begin{array}{@{}rcl@{}} \varphi_{1}[\!i] = -\frac{4\pi T}{\lambda} v_{1}[\!i] \end{array} $$
is the average phase change for the PRT due to the (average) radial velocity \(v_{1}[\!i]\) of the ith target and \(\lambda\) is the wavelength of the radar signal. The term \(\Delta \varphi_{1}[\!i]\) indicates the variance of the phase change, which is associated with the variation of the velocity of the target. In this feasibility study, we assume \(\Delta \varphi_{1}[\!i] = 0\) for simplicity.
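As a concrete illustration of (3)–(6), the following Python sketch builds the per-bin channel impulse response from given reflectivity, velocity, and distance values. The function name and all parameter defaults (radar constant, antenna gains, wavelength, PRT) are our own placeholders, not values from this paper.

```python
import numpy as np

def channel_response(Z, v, d, k, k_R=1.0, G_t=1.0, G_r=1.0,
                     wavelength=0.05, T=1e-3):
    """Per-range-bin impulse response h_1^{(k)}[i] following (3)-(6).

    Z, v, d : per-bin reflectivity, radial velocity (m/s), and
    distance (m); k is the pulse index. All parameter defaults are
    illustrative placeholders.
    """
    Z, v, d = map(np.asarray, (Z, v, d))
    A = np.sqrt(k_R * G_t * G_r * Z) / d            # amplitude, eq. (4)
    phi = k * (-4.0 * np.pi * T / wavelength) * v   # phase, (5)-(6) with Δφ = 0
    return A * np.exp(1j * phi)
```

With this convention, a stationary target (v = 0) yields a purely real response, and a moving target accumulates phase linearly in the pulse index k.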
Reception of reflected pulses
If we ignore the signals of interfering radars, we can express the signal received in the kth reception mode of Radar-1 by \(\boldsymbol s_{1} * \boldsymbol h_{1}^{(k)} + \boldsymbol n\). Here, n is a complex Gaussian noise with mean zero. Now recall that there exist other weather radars (Radars-2, 3, ⋯, N), the signals of which could interfere with that of Radar-1. Taking this into account, we have the received signal as
$$\begin{array}{@{}rcl@{}} \boldsymbol r^{(k)} = \boldsymbol s_{1} * \boldsymbol h_{1}^{(k)} + \sum\limits_{n=2}^{N} \boldsymbol s_{n} * \boldsymbol g_{n,1} +\boldsymbol n. \end{array} $$
Here, for \(n = 2, 3, \cdots, N\), \(\boldsymbol g_{n,1}\) is the impulse response of the reflection channel between the interfering Radar-n and the main Radar-1.
In (7), the term \(\boldsymbol s_{1} * \boldsymbol h_{1}^{(k)}\) contains the intra-site interference mentioned in Section 1: specifically, the difference between \(\boldsymbol s_{1} * \boldsymbol h_{1}^{(k)}\) and \(\boldsymbol h_{1}^{(k)}\) can be considered as the intra-site interference. On the other hand, the term \(\sum \limits _{n=2}^{N} \boldsymbol s_{n} * \boldsymbol g_{n,1}\) corresponds to the inter-site interference.
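The received-signal model (7) can be simulated directly with discrete convolutions. In the sketch below, the codes, channels, sizes, and noise scale are arbitrary stand-ins chosen only to exercise the model; a designed nearly orthogonal code set would replace the random codes.

```python
import numpy as np

rng = np.random.default_rng(0)
N, L, L_T = 3, 16, 64          # number of radars, code length, PRT length

# Unit-modulus polyphase codes s_1..s_N (random stand-ins for a designed set)
S = np.exp(2j * np.pi * rng.random((N, L)))

# Main reflection channel h_1^{(k)} and weaker interfering channels g_{n,1}
h1 = rng.standard_normal(L_T) + 1j * rng.standard_normal(L_T)
g = 0.1 * (rng.standard_normal((N - 1, L_T))
           + 1j * rng.standard_normal((N - 1, L_T)))

# Received signal per (7): own echo + inter-site interference + noise
r = np.convolve(S[0], h1)
for n in range(1, N):
    r = r + np.convolve(S[n], g[n - 1])
r = r + 0.01 * (rng.standard_normal(r.size) + 1j * rng.standard_normal(r.size))
```

The 0.1 scale on the interfering channels loosely reflects the lower reception gain toward other radars discussed in Section 4.1.3; it is not a calibrated SIR value.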
Problem definition
To introduce a WRNS that shares the same frequency band, the main challenge lies in overcoming signal distortion that results from the intra-site and inter-site interferences. Such distortion hinders accurate estimation of weather parameters such as the reflectivity and velocity. We can formulate this problem as follows:
Problem: Given the received signals \(\left \{\boldsymbol r^{(k)}\right \}_{k=1}^{K}\) at Radar-1, which have been contaminated by both of the intra-site and inter-site interferences, estimate the reflectivity \(Z_{1}\) and velocity \(v_{1}\) of the weather targets.
Our problem is, roughly speaking, to estimate the reflection channel \(\boldsymbol h_{1}^{(k)}\), sometimes called the main reflection channel, because the reflectivity and velocity are directly related with the amplitude and phase of \(\boldsymbol h_{1}^{(k)}\) as shown in (3)–(6).
Key techniques for frequency sharing
In order to solve the problem of this study, we employ two key techniques: (1) pulse compression with a nearly orthogonal code set and (2) signal processing for sidelobe suppression that further refines the output of the MF.
Let us now describe the details.
Pulse compression with (nearly) orthogonal code set
Matched filtering
The received signal (7) in Radar-1 is fed into the MF that matches the transmitted pulse, the output of which is given as
$$\begin{array}{@{}rcl@{}} \boldsymbol y_{MF}^{(k)} &=& \boldsymbol r^{(k)} * \boldsymbol s_{1}^{MF} \\ &=& \boldsymbol h_{1}^{(k)}*\boldsymbol R_{1,1} + \sum\limits_{n=2}^{N} \boldsymbol g_{n,1}*\boldsymbol R_{n,1} +\boldsymbol n * \boldsymbol s_{1}^{MF}, \end{array} $$
$$\begin{array}{@{}rcl@{}} \boldsymbol s_{1}^{MF} = \frac{1}{L} \times \left[ s_{1}^{*}[\!L], s_{1}^{*}[\!L-1], \cdots, s_{1}^{*}[\!1] \right] \end{array} $$
is the impulse response of the MF of Radar-1. Here, \(\boldsymbol R_{n_{1},n_{2}}\,=\,\left [ R_{n_{1},n_{2}}[\!-L\,+\,1], R_{n_{1},n_{2}}[\!-L\,+\,2], \cdots, R_{n_{1},n_{2}}[\!L\,-\,1]\!\right ]\) is the normalized correlation of \( \boldsymbol s_{n_{1}}\) and \(\boldsymbol s_{n_{2}}\) defined by
$$\begin{array}{@{}rcl@{}} R_{n_{1},n_{2}}[\!i] = \frac{1}{L} \sum\limits_{j=\max(1,1-i)}^{\min(L,L-i)} s_{n_{1}}[\!j]s_{n_{2}}^{*}[\!j+i] \end{array} $$
for \(i = -L+1, -L+2, \cdots, L-1\) and \(n_{1}, n_{2} = 1, 2, \cdots, N\): it is the autocorrelation if \(n_{1} = n_{2}\); otherwise, it is the cross-correlation.
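Definition (10) translates directly into code. The following unoptimized sketch (the function name is ours) evaluates the normalized correlation over all lags; for a unit-modulus code, the autocorrelation at lag zero equals one, as required by (12).

```python
import numpy as np

def norm_corr(s1, s2):
    """Normalized correlation R_{n1,n2}[i] of (10) for lags
    i = -L+1, ..., L-1, using 0-based indexing of the codes."""
    L = len(s1)
    R = np.zeros(2 * L - 1, dtype=complex)
    for i in range(-L + 1, L):
        j = np.arange(max(0, -i), min(L, L - i))  # valid overlap for lag i
        R[i + L - 1] = np.sum(s1[j] * np.conj(s2[j + i])) / L
    return R
```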
Orthogonal code set
Let us denote the set of pulse compression codes by the N×L matrix
$$\begin{array}{@{}rcl@{}} S = \left[\begin{array}{cccc} s_{1}[\!1] & s_{1}[\!2] & \cdots & s_{1}[\!L] \\ \vdots & \ddots & \vdots \\ s_{N}[\!1] & s_{N}[\!2] & \cdots & s_{N}[\!L] \end{array}\right], \end{array} $$
where the nth row corresponds to \(\boldsymbol s_{n}\). Now assume that S satisfies
$$\begin{array}{@{}rcl@{}} R_{n,n}[i] = \left\{\begin{array}{ll} 1, & \ i=0\\ 0, & \ i\neq0\end{array}\right. \end{array} $$
for \(n = 1, 2, \cdots, N\), and
$$\begin{array}{@{}rcl@{}} R_{n_{1},n_{2}}[\!i] = 0, \ \ \ \ \text{all~}i \end{array} $$
for \(n_{1} \neq n_{2}\). In this paper, we will refer to such a code set as an orthogonal code set.
When an orthogonal code set is employed, the output of the MF in (8) is simply given as
$$\begin{array}{@{}rcl@{}} \boldsymbol y_{MF}^{(k)} = \boldsymbol h_{1}^{(k)} + \boldsymbol n * \boldsymbol s_{1}^{MF}. \end{array} $$
Here, as the magnitude of the noise \(\boldsymbol n\) is normally very small compared with \(\left|\boldsymbol h_{1}^{(k)}\right|\), \(\boldsymbol y_{MF}^{(k)}\) can be considered as an approximation of \(\boldsymbol h_{1}^{(k)}\). In addition, because
$$\begin{array}{@{}rcl@{}} Z_{1} [i] &=& \frac{{d_{1}^{2}}[\!i]}{k_{R} G_{t} G_{r}} {A_{1}^{2}} [\!i] \\ &=& \frac{{d_{1}^{2}}[\!i]}{k_{R} G_{t} G_{r}} \left| h_{1}^{(k)} [\!i]\right|^{2}, \ \text{for any~} k \in \left\{1,2,\cdots,K\right\} \\ &=& \frac{{d_{1}^{2}}[\!i]}{k_{R} G_{t} G_{r}} \frac{1}{K} \sum\limits_{k=1}^{K} \left| h_{1}^{(k)} [\!i]\right|^{2} \end{array} $$
from (3) and (4), the estimate \(\hat{Z}_{1}[\!i]\) of the reflectivity \(Z_{1}[\!i]\) can be obtained as
$$\begin{array}{@{}rcl@{}} \hat{Z}_{1} [\!i] &=& \frac{{d_{1}^{2}}[\!i]}{k_{R} G_{t} G_{r}} \frac{1}{K} \sum\limits_{k=1}^{K} \left| y_{MF}^{(k)}[\!i] \right|^{2}. \end{array} $$
Here, the replacement of "for any k" with the average over k is for reducing the estimation error due to the noise. Similarly, because
$$ \begin{aligned} v_{1} [\!i] &= -\frac{\lambda}{4 \pi T} \left({\phi}_{1}^{(k+1)}[\!i] - \phi_{1}^{(k)}[\!i] \right), \text{for any} \ k \in \left\{1,2,\cdots,K-1\right\} \\ &= -\frac{\lambda}{4 \pi T} \angle{\left\{ h_{1}^{(k+1)}[\!i] \left(h_{1}^{(k)}[\!i]\right)^{*} \right\}}, \ \text{for any~} k \in \left\{1,2,\cdots,K-1\right\} \\ &= - \frac{\lambda}{4 \pi \!T} \frac{1}{K-1}\sum\limits_{k=1}^{K-1} \angle{\left\{ h_{1}^{(k+1)}[\!i] \left(h_{1}^{(k)}[\!i]\right)^{*} \right\}} \end{aligned} $$
from (3), (5), and (6), the estimate \(\hat{v}_{1}[\!i]\) of the velocity \(v_{1}[\!i]\) can be obtained as
$$\begin{array}{@{}rcl@{}} \hat{v}_{1} [\!i] = - \frac{\lambda}{4 \pi T} \frac{1}{K-1}\sum\limits_{k=1}^{K-1} \angle{\left\{ y_{MF}^{(k+1)}[\!i] \left(y_{MF}^{(k)}[\!i]\right)^{*} \right\}}. \end{array} $$
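Estimators (16) and (18) reduce to averaging per-bin powers and lag-one phase differences across the K pulses. A sketch, with the function name and radar constants again as illustrative placeholders:

```python
import numpy as np

def estimate_Z_v(Y, d, k_R=1.0, G_t=1.0, G_r=1.0,
                 wavelength=0.05, T=1e-3):
    """Estimates (16) and (18) from Y, a K x L_T array whose kth row is
    the MF output y_MF^{(k)}; d holds per-bin distances. The radar
    constants are illustrative placeholders."""
    Y = np.asarray(Y)
    d = np.asarray(d, dtype=float)
    # Reflectivity estimate (16): average received power over the K pulses
    Z_hat = d**2 / (k_R * G_t * G_r) * np.mean(np.abs(Y) ** 2, axis=0)
    # Velocity estimate (18): average lag-one phase change over K-1 pairs
    lag1 = Y[1:] * np.conj(Y[:-1])
    v_hat = -wavelength / (4.0 * np.pi * T) * np.mean(np.angle(lag1), axis=0)
    return Z_hat, v_hat
```

As in any pulse-pair estimator, the phase of each lag-one product must stay within (−π, π], which bounds the unambiguous velocity by λ/(4T).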
Design of nearly orthogonal code set
It is impossible to design an orthogonal code set that perfectly satisfies both (12) and (13). Hence, an alternative is to design a code set that satisfies the conditions as closely as possible. In this paper, we call such a code set a nearly orthogonal code set.
There have been many studies on designing nearly orthogonal code sets [5–7, 15]. For example, in the design method proposed in [5], a simple full search algorithm was employed based on Hadamard and Fourier matrices that have the property \(R_{n_{1},n_{2}}[\!0] = 0\) for \(n_{1} \neq n_{2}\): it is noteworthy that the cost function of this method is defined in terms of the peak values of correlations, not energies. In [6], a hybrid optimization method combining simulated annealing (SA) with a traditional iterative code selection was proposed to design a nearly orthogonal polyphase code set. In [7], another optimization method based on the cross entropy technique was proposed, where a structural constraint to maintain Doppler tolerance was considered together with the orthogonality conditions.
Among the previously proposed methods for designing nearly orthogonal code sets, we adopt the polyphase code design method in [6]. The cost function of the design method is given by
$$ E = \sum\limits_{n=1}^{N} \sum\limits_{i=1}^{L-1} \left|R_{n,n}[\!i]\!\right|^{2}\,+\, \alpha \sum\limits_{n_{1}=1}^{N-1}\sum\limits_{n_{2}=n_{1}+1}^{N}\sum\limits_{i=-L+1}^{L-1}\left|R_{n_{1},n_{2}}[\!i]\!\right|^{2}, $$
where the first term is the sum of the sidelobe energies of the autocorrelations and the second term is the sum of the cross-correlation energies. Here, \(\alpha \geq 0\) is the weighting coefficient. The minimization of (19) is accomplished through two steps: the first step is to reach near the global minimum by employing SA, and the second is to tune the solution more finely by an iterative code selection algorithm. The detailed procedure of the design method is summarized in Table 3.
Table 3 Procedure for the design of a nearly orthogonal polyphase code set
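For reference, the cost (19) can be evaluated as below. This is only the objective evaluated inside the SA and iterative code selection steps of Table 3, not the optimization itself; the function name is ours.

```python
import numpy as np

def code_set_cost(S, alpha=1.0):
    """Cost E of (19) for an N x L code set S: one-sided autocorrelation
    sidelobe energy plus alpha times total cross-correlation energy,
    based on the normalized correlation (10)."""
    S = np.asarray(S, dtype=complex)
    N, L = S.shape

    def R(a, b):  # normalized correlation over lags -L+1..L-1
        return np.array([np.sum(a[max(0, -i):min(L, L - i)]
                                * np.conj(b[max(0, -i) + i:min(L, L - i) + i]))
                         for i in range(-L + 1, L)]) / L

    E = 0.0
    for n in range(N):                       # autocorrelation sidelobes, i >= 1
        E += np.sum(np.abs(R(S[n], S[n])[L:]) ** 2)
    for n1 in range(N - 1):                  # cross-correlation energies
        for n2 in range(n1 + 1, N):
            E += alpha * np.sum(np.abs(R(S[n1], S[n2])) ** 2)
    return E
```

An SA or iterative-selection loop would perturb one code element at a time and accept the change when this cost decreases (or with the usual annealing probability otherwise).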
Once we design a nearly orthogonal code set, the performance of the designed code set can be quantified by the following two measures: the autocorrelation sidelobe peak (ASP)
$$\begin{array}{@{}rcl@{}} ASP(n) = 20 \log \left(\max\limits_{i \neq 0} \left|R_{n,n}[\!i]\right| \right) \end{array} $$
of the nth code, and cross-correlation peak (CP)
$$\begin{array}{@{}rcl@{}} CP(n_{1},n_{2}) = 20 \log \left(\max \left|R_{n_{1},n_{2}}[\!i] \right|\right) \end{array} $$
of the \(n_{1}\)th and \(n_{2}\)th codes (\(n_{1} \neq n_{2}\)) in dB. Considering conditions (12) and (13) of an orthogonal code set, it is clear that the smaller the values of the ASPs and CPs of a nearly orthogonal code set are, the closer the code set is to an orthogonal code set.
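Measures (20) and (21) follow directly from the normalized correlation (10). A sketch (function names are ours):

```python
import numpy as np

def norm_corr(s1, s2):
    """Normalized correlation (10), lags i = -L+1, ..., L-1."""
    L = len(s1)
    return np.array([np.sum(s1[max(0, -i):min(L, L - i)]
                            * np.conj(s2[max(0, -i) + i:min(L, L - i) + i]))
                     for i in range(-L + 1, L)]) / L

def asp_db(s):
    """Autocorrelation sidelobe peak (20) of a code, in dB."""
    L = len(s)
    R = np.abs(norm_corr(s, s))
    R[L - 1] = 0.0                  # exclude the i = 0 mainlobe
    return 20.0 * np.log10(R.max())

def cp_db(s1, s2):
    """Cross-correlation peak (21) of two codes, in dB."""
    return 20.0 * np.log10(np.abs(norm_corr(s1, s2)).max())
```

For example, the Barker-like code [1, 1, 1, -1] has sidelobe magnitudes of at most 0.25, i.e., an ASP of 20 log10(0.25), about -12 dB.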
Iterative sidelobe removal
When a nearly orthogonal code set is employed for pulse compression, the output of the MF in (8) can be rewritten as
$$ \boldsymbol y_{MF}^{(k)} = \boldsymbol h_{1}^{(k)} + \boldsymbol h_{1}^{(k)}* \bar{\boldsymbol R}_{1,1} + \sum\limits_{n=2}^{N} \boldsymbol g_{n,1}*\boldsymbol R_{n,1} +\boldsymbol n * \boldsymbol s_{1}^{MF}, $$
where \(\bar {\boldsymbol R}_{1,1}\) is the sidelobe component of \(\boldsymbol R_{1,1}\) with the ith element
$$\begin{array}{@{}rcl@{}} \bar{R}_{1,1} [\!i] = \left\{\begin{array}{ll} 0, & \ i=0\\ R_{1,1}[\!i], & \ \text{otherwise}.\end{array}\right. \end{array} $$
Unlike the MF output (14) where an orthogonal code set is employed, there exist two additional terms \(\boldsymbol h_{1}^{(k)}* \bar {\boldsymbol R}_{1,1}\) and \(\sum \limits _{n=2}^{N} \boldsymbol g_{n,1}*\boldsymbol R_{n,1}\) in (22), which makes it difficult to directly estimate \(\boldsymbol h_{1}^{(k)}\) from the MF output. The first term is caused by the non-zero autocorrelation sidelobe, i.e., \(\bar {\boldsymbol R}_{1,1} \neq \boldsymbol 0\), and therefore, can be regarded as distortion from autocorrelation sidelobe. In contrast, the second one comes from the non-zero cross-correlation, i.e., R n,1≠0, and can be regarded as residual interferences from other radars after matched filtering.
The degree of performance degradation due to the distortion from the autocorrelation sidelobe is much more serious than that due to the residual interferences, for the following reasons. (A) The amplitude of the main reflection channel \(\boldsymbol h_{1}^{(k)}\) is usually much larger than that of the interfering channel g n,1 because the antenna reception gain G r in \(\boldsymbol h_{1}^{(k)}\) is likely to be much higher than that in g n,1: we will elaborate on this first reason later in Section 4.1.3. (B) Contiguous deployment of weather targets leads to an overlap between the autocorrelation sidelobes of adjacent targets, which vastly distorts the MF output even with a code of low autocorrelation sidelobe. For this reason, pulse compression radar systems dealing with weather targets [16] have focused on addressing the distortion from the autocorrelation sidelobe.
To improve the performance further, we need additional signal processing that mitigates the distortion from the autocorrelation sidelobe and can be integrated with a nearly orthogonal code set. In this study, we apply an iterative sidelobe removal algorithm based on the CLEAN algorithm [9], which was proven to remove sidelobe components effectively in a binary-coded pulse compression radar system dealing with contiguous scattering targets. The core ideas of the modified CLEAN algorithm are twofold. First, the algorithm tries to determine whether the current peak of the MF output comes from a real (probably the strongest) target or a spurious one, based on the sidelobe level change test. Here, a spurious peak can occur from an overlap of multiple sidelobes of real targets, which disrupts accurate estimation of target information. Second, for a peak determined to be a real target, the algorithm estimates its sidelobe component and gracefully removes it. By iterating these steps, the algorithm gradually removes sidelobe components from the MF output and, at the same time, reconstructs (estimates) the impulse response from targets. We have summarized the detailed procedure for the iterative sidelobe removal in Table 4. Note that, in the sidelobe level change test described in step 3-e of the procedure, we use the criterion \(|Q(a_0)-Q(a_1)| < \gamma \beta_M Q(R_1)\) whereas \(Q(a_0)-Q(a_1) < \gamma \beta_M Q(R_1)\) was used in the CLEAN algorithm: our criterion results in better sidelobe removal performance than that of the CLEAN algorithm.
Table 4 Procedure for the iterative sidelobe removal
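As an illustrative sketch only, the following keeps the peak-pick-and-subtract core of the procedure but deliberately omits the sidelobe-level-change test and other details of Table 4; the stopping threshold is an assumption of this sketch.

```python
import numpy as np

def sidelobe_removal(y_mf, R11, n_iter=100, threshold=0.05):
    """Simplified CLEAN-style sidelobe removal (sketch only).

    y_mf : complex MF output
    R11  : normalized autocorrelation of code 1 (length 2L-1, mainlobe 1 at index L-1)
    """
    L = (len(R11) + 1) // 2
    residual = np.asarray(y_mf, dtype=complex).copy()
    estimate = np.zeros_like(residual)
    for _ in range(n_iter):
        i = int(np.argmax(np.abs(residual)))
        a = residual[i]
        if np.abs(a) < threshold:
            break
        estimate[i] += a                 # accept the peak as a target return
        # subtract the peak's mainlobe and sidelobe pattern from the residual
        for k in range(-(L - 1), L):
            j = i + k
            if 0 <= j < len(residual):
                residual[j] -= a * R11[L - 1 + k]
    return estimate + residual           # reconstructed channel estimate
```

For a single point target, each iteration removes the target's entire correlation pattern, so the output converges to an impulse at the target position.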
As we will show in the next section, the iterative sidelobe removal procedure produces an output y (k) which approximates the main reflection channel \(\boldsymbol h_{1}^{(k)}\). Finally, by using y (k) instead of the MF output \(\boldsymbol y_{MF}^{(k)}\) in (16) and (18), the estimates of the reflectivity and velocity can be obtained as
$$\begin{array}{@{}rcl@{}} \hat{Z}_{1} [\!i] &=& \frac{{d_{1}^{2}}[\!i]}{k_{R} G_{t} G_{r}} \frac{1}{K} \sum\limits_{k=1}^{K} \left| y^{(k)}[\!i] \right|^{2} \end{array} $$
$$\begin{array}{@{}rcl@{}} \hat{v}_{1} [\!i] = - \frac{\lambda}{4 \pi T} \frac{1}{K-1}\sum\limits_{k=1}^{K-1} \angle{\left\{ y^{(k+1)}[\!i] \left(y^{(k)}[\!i]\right)^{*} \right\}}, \end{array} $$
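As a sketch of how (24) and (25) consume the K outputs y^(k): the pulse-pair phase convention below follows (25), while the radar constants d_1, k_R, G_t, and G_r of (24) are omitted, so only the averaged power and velocity parts are shown. The synthetic check at the end is ours.

```python
import numpy as np

def mean_power(y):
    """(1/K) * sum_k |y^(k)[i]|^2, the averaged power fed into (24)."""
    return np.mean(np.abs(y) ** 2, axis=0)

def velocity(y, wavelength, T):
    """Velocity estimate (25): averaged pulse-pair phase difference.
    y has shape (K, L_T), one row per reception mode k."""
    pair = y[1:] * np.conj(y[:-1])              # y^(k+1) (y^(k))^*
    return -wavelength / (4 * np.pi * T) * np.angle(pair).mean(axis=0)

# Synthetic check: a 15 m/s target advances the phase by -4*pi*T*v/lambda per pulse
wavelength, T, v, K, LT = 0.1, 1e-3, 15.0, 10, 32
k = np.arange(K)[:, None]
y = np.exp(-1j * 4 * np.pi * T * v / wavelength * k) * np.ones((K, LT))
print(velocity(y, wavelength, T)[0])   # ≈ 15.0
print(mean_power(y)[0])                # ≈ 1.0
```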
In this section, we examine the feasibility of the proposed frequency sharing scheme for a WRNS through computer simulations. All programs for the simulations have been implemented in MATLAB and are available in [17].
Simulation conditions
Configuration of the radar network
We first assume a simple radar network consisting of N = 3 weather radars with pulse compression. In the radar network, as described in Section 2, Radar-1 operates as the main radar while Radar-2 and Radar-3 operate as interfering ones against the main radar. Next, we artificially construct three reflection channels: \(\boldsymbol h_{1}^{(k)}\) (from Radar-1 to Radar-1), g 2,1 (from Radar-2 to Radar-1), and g 3,1 (from Radar-3 to Radar-1). Specifically, we design a set of three functions, which are then adopted as the amplitudes of the impulse responses of the three reflection channels. For the phase components of the impulse responses, each of which is associated with the velocities of the weather targets in the corresponding reflection channel as (5) and (6), we simply set all the velocities of the weather targets in each reflection channel to a constant: specifically, v 1[ i]=15, v 2,1[ i]=5, and v 3,1[ i]=−5.
In this feasibility study, we employ two amplitude sets for computer simulations. We have shown the amplitude sets in Fig. 3, where every member function has been normalized with its maximum magnitude. (Although the functions seem continuous, they are actually in the discrete-time domain.) In Set A, in order to make the analysis simple and clear, we assume that the main reflection channel \(\boldsymbol h_{1}^{(k)}\) consists of two steps of reflectivity ("high" and "low") and only the interference from Radar-2 exists. In Set B, it is assumed that both of the interferences from Radar-2 and Radar-3 exist and the amplitude functions are more complicated. Note that the member functions should be re-scaled according to the preset signal-to-interference ratio (SIR) of the radar network before being given as the amplitudes of the reflection channels.
Normalized amplitudes of the reflection channels \(\boldsymbol h_{1}^{(k)}\), g 2,1, and g 3,1
Design of pulse compression codes
By using the design method described in Section 3.1.3, we have designed a nearly orthogonal polyphase code set with length L=128 and P=4 phases for N=3 radars: specifically, we have obtained the 3×128 code set matrix S=[s n [ i] ] for n=1,2,3 and i=1,2,⋯,128 with \(s_{n}[\!i] \in \left \{1,\exp \left (j\frac {\pi }{2}\right),\exp \left (j\pi \right),\exp \left (j\frac {3\pi }{2}\right)\right \}\).
Remember that the performance of a nearly orthogonal code set can be quantified in terms of the ASP and CP in (20) and (21). The ASPs and CPs of the designed code set are as follows:
$$ \begin{aligned} ASP(1) =-21.50,\,& ASP(2)\,=\,-21.00, & \!ASP(3)\,=\,-20.41, \\ CP(1,2)\,=\,-19.22,\,& CP(1,3)\,=\,-18.55,& \!CP(2,3)\,=\,-19.47. \end{aligned} $$
Because we address signal processing of the received signal in the main radar only, the correlations R 1,1, R 2,1, and R 3,1 are only needed as apparent in (8). We have shown the magnitudes of those correlations in Fig. 4.
Magnitudes of correlations R 1,1, R 2,1, and R 3,1
Expected SIR
In this paper, the SIR at Radar-1 in the WRNS is evaluated as
$$\begin{array}{@{}rcl@{}} \text{SIR} = 20 \log \frac{\left\|\boldsymbol s_{1} * \boldsymbol h_{1}^{(k)}\right\|}{\left\|\sum\limits_{n=2}^{3} \boldsymbol s_{n} * \boldsymbol g_{n,1} \right\|} \end{array} $$
in dB, where ∥·∥ is the Euclidean norm.
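The SIR of (26) can be evaluated as in the following sketch (the codes and channels below are toy values of ours; all sequences are assumed to produce equal-length convolutions so the interfering returns can be summed):

```python
import numpy as np

def sir_db(s1, h1, interferers):
    """SIR of (26) in dB. interferers is a list of (s_n, g_n1) pairs, n = 2..N."""
    signal = np.convolve(s1, h1)
    interference = sum(np.convolve(sn, gn1) for sn, gn1 in interferers)
    return 20 * np.log10(np.linalg.norm(signal) / np.linalg.norm(interference))

# Point-target toy case: one interferer received 29 dB below the main return
rng = np.random.default_rng(3)
s1 = np.exp(1j * (np.pi / 2) * rng.integers(0, 4, size=16))
s2 = np.exp(1j * (np.pi / 2) * rng.integers(0, 4, size=16))
h1 = np.zeros(8, dtype=complex); h1[3] = 1.0
g21 = np.zeros(8, dtype=complex); g21[5] = 10 ** (-29 / 20)
print(sir_db(s1, h1, [(s2, g21)]))   # ≈ 29.0
```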
Because a radar amplifies the received signals with its antenna gain G r , the gain should be included in calculating the SIR. In this paper, G r has already been considered as a factor of the channel amplitude as shown in (4). Here, we should note that a weather radar employs a directional antenna, of which the gain is maximum in the antenna direction (mainlobe direction) and the maximum gain is much larger than the gains of other directions (sidelobe direction): for example, in a typical single weather radar WSR-88D [18], the gain difference between the mainlobe and sidelobe directions is at least 29 dB.
For simplicity, in our simulation, we exploit two different gain values when Radar-1 amplifies the received signals: the mainlobe gain along the mainlobe direction, and the sidelobe gain along the sidelobe directions. Based on the discussions above, the sidelobe gain is set 29 dB below the mainlobe gain. In the WRNS we consider, the backscattered signal \(\boldsymbol s_{1} * \boldsymbol h_{1}^{(k)}\) originated by Radar-1 is assumed to be amplified with the mainlobe gain, whereas the signals \(\boldsymbol s_n * \boldsymbol g_{n,1}\) (n≠1) from interfering radars are assumed to be amplified with the sidelobe gain. This is because, when a WRNS actually operates, most of the energy in \(\boldsymbol s_{1} * \boldsymbol h_{1}^{(k)}\) is concentrated in the mainlobe direction in most cases; in contrast, only a small fraction of the energy in \(\boldsymbol s_n * \boldsymbol g_{n,1}\) (n≠1) arrives from that direction.
From this simplification, the SIR of the WRNS can be expected to be around 29 dB if we assume that other factors (antenna H/W, weather conditions, etc.) are the same or similar over the weather radars in the network.
Performance requirements
Performance of the proposed WRNS can be quantified in terms of errors in the estimations of reflectivity and velocity. Specifically, the error in reflectivity estimation (simply, reflectivity error) in dB is calculated as
$$\begin{array}{@{}rcl@{}} Z_{err}[\!i] &=& 10\log \hat{Z}_{1}[\!i] - 10\log Z_{1}[\!i] \\ &=& 10\log \frac{1}{K} \sum\limits_{k=1}^{K} \left| y^{(k)}[\!i] \right|^{2} - 10\log \left| h_{1}^{(k)}[\!i] \right|^{2} \end{array} $$
from (15) and (24). Here, recall that y (k) is the output of the iterative sidelobe removal algorithm. For convenience, we define a new symbol \(|\boldsymbol y_{av}|^{2}\), whose ith (\(i=1,2,\cdots,L_T\)) element is \(\frac {1}{K}\sum \limits _{k=1}^{K} \left |y^{(k)}[\!i]\right |^{2}\): this can be regarded as an estimate of the squared amplitude \(\left | \boldsymbol h_{1}^{(k)}\right |^{2}\) of the main reflection channel. In simulations, we set the value of K to 10.
The performance requirements of the WSR-88D on reflectivity and velocity estimations are as follows [19]: (A) when the signal-to-noise ratio (SNR) is higher than 10 dB, the reflectivity error should be less than 1 dB, and (B) when the SNR is higher than 8 dB, the velocity error should be less than 1 m/s. Although the WSR-88D is a single radar (not radar network) and a simple pulse radar (not pulse compression radar), we refer to the criteria of the WSR-88D in the evaluation of the proposed WRNS. Taking the worst case in the requirements into account, we set the SNR of the WRNS to 8 dB in simulations.
Result from Set A
When the SIR is 29 dB
Let us first see the performance of the proposed scheme when Set A is employed for the reflection channels and the SIR is set to its expected value of 29 dB. Figure 5 shows the outputs obtained in each step of processing the received signals. The results are generally in accordance with the expectations from the discussion in Section 3. In short, it is clearly observed that the main radar operates successfully, satisfying the performance requirement for conventional weather radars, under the influence of interferences from other radars in the same frequency band. Specifically, we can make the following observations. (A) Compared with the received signal shown in the sub-figure (b), the MF output in the sub-figure (c) is much closer to the main reflection channel in the sub-figure (a), especially in terms of the magnitude. (B) Nonetheless, it is not reasonable to consider the MF output a good approximation to the main reflection channel. As discussed in Section 3.2, the difference between the main reflection channel and the MF output comes from three factors: the distortion from the autocorrelation sidelobe, residual interferences from other radars, and noise. Among these factors, the effect of the residual interferences from other radars is negligible compared with that of the distortion from the autocorrelation sidelobe in this case because the SIR is as high as around 30 dB. (C) Through the iterative sidelobe removal algorithm, we can effectively mitigate the distortion from the autocorrelation sidelobe as shown in the sub-figure (d). Finally, the distortion from the remaining high-frequency noise is removed by averaging the outputs of the sidelobe removal process as shown in the sub-figure (e). (D) In the sub-figures (f) and (g), it is observed that the performances on the reflectivity and velocity estimations practically satisfy the requirements of the WSR-88D. Specifically, the errors are around zero in the high reflectivity region.
(E) Based on a comparison of the variance of the reflectivity errors with that of the velocity errors, it can be inferred that the phase estimation is more sensitive to noise than the amplitude estimation.
Output of each processing step when SIR = 29 dB for Set A. a Channel amplitudes: the main reflection channel \(\left |\boldsymbol h_{1}^{(k)}\right |\) (dotted line), and interfering channel (solid line). b Received signal r (k) in the first reception mode (k=1). c MF output \(\boldsymbol y_{MF}^{(1)}\) with respect to the input r (1). d Output y (1) of the sidelobe removal with respect to the input \(\boldsymbol y_{MF}^{(1)}\). e Estimate |y av |2 of the squared amplitude \(\left |\boldsymbol h_{1}^{(k)}\right |^{2}\) of the main reflection channel. f Reflectivity error. g Velocity error
When the SIR is 8 dB
We next consider a case where the SIR condition is worse than 29 dB. For a reason similar to that used in determining the SNR, we set the SIR to 8 dB as the worst case to examine the feasibility of the proposed WRNS. Note, however, that the SIR is generally expected to be around 29 dB in an actually operating WRNS, so this worst case is unlikely to occur.
We have shown a simulation result in Fig. 6. It is observed that, as in the case of the SIR of 29 dB, each of the processing steps faithfully carries out its expected role. Due to the degradation of the SIR from 29 to 8 dB, however, the operation performance of the main radar does not satisfy the requirement of the WSR-88D, although it partially satisfies the requirement near the high reflectivity region. These results indicate that, if the requirements are interpreted strictly, the proposed WRNS needs to be improved to satisfy them in the worst-case SIR scenario (8 dB).
Output of each processing step when SIR = 8 dB for Set A. For detailed explanation for each sub-figure, see the notes at the bottom of Fig. 5
Performance versus SIR
Let us now investigate the average performance of the proposed WRNS with respect to the SIR. For every value of the SIR from 10 to 40 dB at an interval of 5 dB, we have first obtained the reflectivity error Z err [ i] and velocity error v err [ i] through a simulation as in the sub-figures (f) and (g) of Figs. 5 and 6, and then calculated the averages and standard deviations of the two absolute errors over the whole range (all i's): we denote these measures by mean(|Z err |), std(|Z err |), mean(|v err |), and std(|v err |).
To make the results more reliable, these four measures have been averaged over 50 repetitions of the simulations. In Fig. 7, we have shown the average error with respect to the SIR.
Estimation performance with respect to the SIR for Set A
From the results, it is observed that the operation performance of the main radar on reflectivity estimation satisfies the requirement of the WSR-88D when the SIR is higher than around 18 dB in terms of the mean. Even if we take the standard deviation into account in addition to the mean, the main radar still operates successfully, satisfying the requirement when the SIR is around its expected value (29 dB). Similarly, the performance on velocity estimation also satisfies the requirement when the SIR is around its expected value.
Result from Set B
As a more complicated scenario, let us now adopt Set B instead of Set A. The simulation results are shown in Figs. 8, 9, and 10. Generally, we can make observations similar to those in the case of Set A. Particularly, it is observed that the operation performance of the main radar satisfies the requirement under the expected SIR condition although the number of interfering radars increases and the reflection channels get more complicated.
Output of each processing step when SIR = 29 dB for Set B. For detailed explanation for each sub-figure, see the notes at the bottom of Fig. 5
Output of each processing step when SIR = 8 dB for Set B. For detailed explanation for each sub-figure, see the notes at the bottom of Fig. 5
Estimation performance with respect to the SIR for Set B
In this paper, we have proposed a novel weather radar network system (WRNS) in which multi-site weather radars share the same frequency band. In the proposed frequency-sharing WRNS, the inter-site interference and intra-site interference are removed simultaneously by adopting pulse compression with nearly orthogonal polyphase codes and an iterative sidelobe removal algorithm.
Through computer simulations, we have validated the feasibility of the proposed frequency-sharing WRNS. Specifically, the estimation accuracies on two weather parameters, reflectivity and velocity, satisfied the performance requirements of a typical single weather radar, the WSR-88D, under the expected SIR condition. We have also observed that the estimation accuracies are not satisfactory when the SIR is lower than the expected level. These results pave the way for possible improvements of the proposed system, which will be the theme of our future work.
In addition to the fact that (to the best of our knowledge) this study is the first to validate the feasibility of frequency-sharing weather radar systems, the beauty of this work is that we can apply the proposed architecture to other radar applications in which the scarcity of frequency resource is serious. For example, the proposed architecture with minor modifications (e.g., selection of codes) can be employed in vehicular radar systems in which radars share the same frequency band.
Billion-Dollar U.S. Weather and Climate Disasters 1980–2014, National Centers for Environmental Information. http://www.ncdc.noaa.gov/billions/events. Accessed 08 Apr 2016.
D Guha-Sapir, P Hoyois, R Below, Annual Disaster Statistical Review 2013: The Numbers and Trends (CRED, Brussel, 2014).
2700–2900 MHz: 4b. Meteorological Aids Service. http://www.ntia.doc.gov/files/ntia/publications/compendium/2700.00-2900.00_01MAR14-1.pdf. Accessed 08 Apr 2016.
EC Farnett, GH Stevens, in Radar Handbook, ed. by MI Skolnik. Pulse compression radar (McGraw-HillNew York, 1990), pp. 1–39. Chap. 10.
N Lee, J Chun, in Proceedings of the IEEE Radar Conf. Orthogonal pulse compression code design for waveform diversity in multistatic radar systems, (2008), pp. 1–6.
H Deng, Polyphase code design for orthogonal netted radar systems. IEEE Tr. Signal Process. 52(11), 3126–3135 (2004).
HA Khan, Y Zhang, C Ji, CJ Stevens, DJ Edwards, D O'Brien, Optimizing polyphase sequences for orthogonal netted radar. IEEE Signal Process. Lett. 13(10), 589–592 (2006).
AS Mudukutore, V Chandrasekar, RJ Keeler, Pulse compression for weather radars. IEEE Tr. Geosci. Remote Sens. 36(1), 125–142 (1998).
H Deng, Effective clean algorithm for performance-enhanced detection of binary coding radar signals. IEEE Tr. Signal Process. 52(1), 72–78 (2004).
H Wang, Z Shi, J He, Compression with considerable sidelobe suppression effect in weather radar. EURASIP J. Wirel. Comm. Netw. 2013(97), 1–8 (2013).
NJ Bucci, H Urkowitz, in Proceedings of the IEEE Nat. Radar Conf. Testing of doppler tolerant range sidelobe suppression in pulse compression meteorological radar, (1993), pp. 206–211.
F O'Hora, J Bech, Improving weather radar observations using pulse-compression techniques. Meteorol. Appl. 14(4), 389–401 (2007).
MA Richards, Fundamentals of Radar Signal Processing (McGraw-Hill, New York, 2005).
RK Hersey, MA Richards, JH McClellan, in Proceedings of the IEEE Radar Conf. Analytical and computer model of a doppler weather radar system, (2002), pp. 438–444.
SR Park, I Song, S Yoon, J Lee, A new polyphase sequence with perfect even and good odd cross-correlation functions for DS/CDMA systems. IEEE Tr. Vehic. Techn. 51(5), 855 (2002).
JM Kurdzo, BL Cheong, RD Palmer, G Zhang, JB Meier, A pulse compression waveform for improved-sensitivity weather radar observations. J. Atmos. Ocean. Technol. 31(12), 2713 (2014).
MATLAB Source Codes. http://kr.mathworks.com/matlabcentral/fileexchange/55192-wrns-simulator. Accessed 08 Apr 2016.
National Research Council, Weather Radar Technology Beyond NEXRAD (National Academy Press, Washington, D.C., 2002).
DS Zrnic, Doppler Radar for USA Weather Surveillance (NOAA National Severe Storms Laboratory, Norman, 2012).
The authors wish to thank the Associate Editor and anonymous reviewers for their constructive suggestions and helpful comments. This work was supported by the ICT R &D program of MSIP/IITP [B0101-16-222, Development of Core Technologies to Improve Spectral Efficiency for Mobile Big-Bang] and by the National Research Foundation of Korea, with funding from MSIP, under Grant NRF-2015R1A2A1A01005868.
Electronics and Telecommunications Research Institute, 218 Gajeong-ro, Yuseong-gu, Daejeon, 34129, Korea
Hwang-Ki Min, Myung-Sun Song & Jae-Han Lim
School of Electrical Engineering, Korea Advanced Institute of Science and Technology, Daejeon, 34141, Korea
Hwang-Ki Min & Iickho Song
Correspondence to Jae-Han Lim.
H-KM, M-SS, IS, and J-HL proposed a novel weather radar network system (WRNS) in which multi-site weather radars operate in the same frequency band. The proposed system suppresses the inter-site interference by adopting pulse compression with nearly orthogonal codes, and, at the same time, removes the intra-site interference based on a sidelobe suppression technique. H-KM, M-SS, IS, and J-HL showed the feasibility of the proposed frequency-sharing WRNS through computer simulations, taking the performance requirement of a typical single weather radar, WSR-88D, into account. All authors read and approved the final manuscript.
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License(http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
Min, HK., Song, MS., Song, I. et al. A frequency-sharing weather radar network system using pulse compression and sidelobe suppression. J Wireless Com Network 2016, 100 (2016). https://doi.org/10.1186/s13638-016-0595-3
Weather radar network
Frequency sharing
Pulse compression
Sidelobe suppression | CommonCrawl |
# Fundamentals of data preprocessing and feature engineering
1. Data Cleaning
Data cleaning is the process of identifying and correcting or removing errors, inconsistencies, and inaccuracies in the dataset. It is important to ensure that the data is reliable and accurate before proceeding with the analysis. Some common techniques for data cleaning include:
- Handling missing values: Missing values can occur due to various reasons, such as data entry errors or incomplete data. There are several strategies for handling missing values, including imputation (replacing missing values with estimated values) or deletion (removing rows or columns with missing values).
- Removing duplicates: Duplicates can distort the analysis and lead to biased results. It is important to identify and remove duplicate records from the dataset.
- Handling outliers: Outliers are extreme values that deviate significantly from the rest of the data. They can affect the analysis and lead to incorrect conclusions. Outliers can be handled by either removing them or transforming them to a more appropriate value.
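One common (but not the only) rule for detecting outliers flags values outside the interval from Q1 − 1.5 IQR to Q3 + 1.5 IQR; the data below is made up for illustration, with 300 playing the role of a data-entry error.

```python
import pandas as pd

# Made-up exam scores; 300 looks like a data-entry error
df = pd.DataFrame({'Score': [80, 85, 78, 82, 300, 79]})

q1, q3 = df['Score'].quantile([0.25, 0.75])
iqr = q3 - q1
lower, upper = q1 - 1.5 * iqr, q3 + 1.5 * iqr

# Keep only rows whose score lies inside the non-outlier interval
df_clean = df[df['Score'].between(lower, upper)]
print(df_clean)
```

Here the row with the score of 300 is removed, while the five plausible scores are kept.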
For example, let's consider a dataset of student exam scores. Suppose there is a missing value for one student's score. We can handle this missing value by imputing it with the mean score of all the other students.
```python
import pandas as pd
data = {'Name': ['John', 'Alice', 'Bob', 'Emily', 'Michael'],
'Score': [80, 90, 75, None, 85]}
df = pd.DataFrame(data)
mean_score = df['Score'].mean()
df['Score'].fillna(mean_score, inplace=True)
print(df)
```
Output:
```
Name Score
0 John 80.0
1 Alice 90.0
2 Bob 75.0
3 Emily 82.5
4 Michael 85.0
```
In this example, the missing value for Emily's score is imputed with the mean score of 82.5.
## Exercise
Consider a dataset of house prices. The dataset contains a column for the number of bedrooms, but some values are missing. Choose one of the following strategies to handle the missing values:
a) Impute the missing values with the median number of bedrooms.
b) Delete the rows with missing values.
Write the code to implement your chosen strategy.
### Solution
```python
import pandas as pd
data = {'Price': [100000, 150000, 200000, 250000],
'Bedrooms': [3, None, 4, 2]}
df = pd.DataFrame(data)
# Strategy: Impute missing values with the median number of bedrooms
median_bedrooms = df['Bedrooms'].median()
df['Bedrooms'].fillna(median_bedrooms, inplace=True)
print(df)
```
Output:
```
Price Bedrooms
0 100000 3.0
1  150000       3.0
2 200000 4.0
3 250000 2.0
```
In this example, the missing value for the number of bedrooms is imputed with the median value of 3.0.
2. Data Transformation
Data transformation involves converting the dataset into a suitable format for analysis. This can include scaling, normalization, encoding categorical variables, and creating new features.
- Scaling: Scaling is the process of transforming the values of numerical variables to a specific range. It is important to scale variables that have different units or scales to ensure that they contribute equally to the analysis. Common scaling techniques include min-max scaling and standardization.
- Normalization: Normalization is the process of transforming the values of numerical variables to a standard scale, usually between 0 and 1. Normalization is useful when the magnitude of the variables is not important, but the relative differences between the values are.
- Encoding categorical variables: Categorical variables need to be encoded into numerical values before they can be used in machine learning algorithms. This can be done using techniques such as one-hot encoding or label encoding.
- Creating new features: Feature engineering involves creating new features from the existing ones to improve the performance of the machine learning model. This can include combining variables, creating interaction terms, or transforming variables.
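The scaling and normalization techniques above can be sketched with scikit-learn; the data here is illustrative only.

```python
import pandas as pd
from sklearn.preprocessing import MinMaxScaler, StandardScaler

df = pd.DataFrame({'Area': [1000, 1500, 2000],
                   'Age': [5, 10, 30]})

# Min-max scaling: each column mapped to the range [0, 1]
df_minmax = pd.DataFrame(MinMaxScaler().fit_transform(df), columns=df.columns)

# Standardization: each column rescaled to zero mean and unit variance
df_standard = pd.DataFrame(StandardScaler().fit_transform(df), columns=df.columns)

print(df_minmax)
print(df_standard)
```

After min-max scaling, the 'Area' column becomes 0, 0.5, and 1; after standardization, each column has mean 0 and variance 1.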
Let's consider a dataset of car prices. Suppose the dataset contains a categorical variable for the car's color, with values 'red', 'blue', and 'green'. We can encode this categorical variable using one-hot encoding.
```python
import pandas as pd
data = {'Price': [10000, 15000, 20000],
'Color': ['red', 'blue', 'green']}
df = pd.DataFrame(data)
df_encoded = pd.get_dummies(df, columns=['Color'], dtype=int)
print(df_encoded)
```
Output:
```
Price Color_blue Color_green Color_red
0 10000 0 0 1
1 15000 1 0 0
2 20000 0 1 0
```
In this example, the categorical variable 'Color' is encoded into three binary variables: 'Color_blue', 'Color_green', and 'Color_red'.
## Exercise
Consider a dataset of customer information. The dataset contains a categorical variable for the customer's occupation, with values 'teacher', 'engineer', and 'doctor'. Choose one of the following strategies to encode the categorical variable:
a) One-hot encoding
b) Label encoding
Write the code to implement your chosen strategy.
### Solution
```python
import pandas as pd
data = {'Name': ['John', 'Alice', 'Bob'],
'Occupation': ['teacher', 'engineer', 'doctor']}
df = pd.DataFrame(data)
# Strategy: One-hot encoding
df_encoded = pd.get_dummies(df, columns=['Occupation'], dtype=int)
print(df_encoded)
```
Output:
```
Name Occupation_doctor Occupation_engineer Occupation_teacher
0 John 0 0 1
1 Alice 0 1 0
2 Bob 1 0 0
```
In this example, the categorical variable 'Occupation' is encoded into three binary variables: 'Occupation_doctor', 'Occupation_engineer', and 'Occupation_teacher'.
3. Feature Engineering
Feature engineering involves creating new features from the existing ones to improve the performance of the machine learning model. This can include combining variables, creating interaction terms, or transforming variables. Feature engineering requires domain knowledge and creativity to identify meaningful features that capture important information in the data.
Some common techniques for feature engineering include:
- Polynomial features: Polynomial features are created by taking the powers or interactions of the existing features. This can capture non-linear relationships between the variables.
- Feature scaling: Scaling the features to a specific range can improve the performance of certain machine learning algorithms. This is especially important when the features have different units or scales.
- Feature selection: Feature selection involves selecting a subset of the most relevant features for the model. This can improve the model's performance and reduce overfitting.
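As an example of feature selection, scikit-learn's `SelectKBest` ranks features by a univariate score. The synthetic data below is ours: by construction, only features 0 and 2 drive the target, so they should be the two selected.

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, f_regression

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 4))
y = 3 * X[:, 0] + 2 * X[:, 2] + rng.normal(scale=0.1, size=100)  # features 1 and 3 are noise

# Keep the k features with the highest univariate F-scores
selector = SelectKBest(score_func=f_regression, k=2).fit(X, y)
print(selector.get_support())   # boolean mask of the two selected features
```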
Let's consider a dataset of housing prices. Suppose the dataset contains two numerical variables: 'Area' (in square feet) and 'Rooms' (number of rooms). We can create a new feature by taking the interaction between these two variables.
```python
import pandas as pd
data = {'Price': [100000, 150000, 200000],
'Area': [1000, 1500, 2000],
'Rooms': [3, 4, 5]}
df = pd.DataFrame(data)
df['Area_Rooms'] = df['Area'] * df['Rooms']
print(df)
```
Output:
```
Price Area Rooms Area_Rooms
0 100000 1000 3 3000
1 150000 1500 4 6000
2 200000 2000 5 10000
```
In this example, a new feature 'Area_Rooms' is created by taking the interaction between the 'Area' and 'Rooms' variables.
## Exercise
Consider a dataset of customer information. The dataset contains two numerical variables: 'Age' and 'Income'. Choose one of the following strategies to create a new feature:
a) Take the square root of the 'Income' variable.
b) Take the ratio of 'Income' to 'Age'.
Write the code to implement your chosen strategy.
### Solution
```python
import pandas as pd
data = {'Name': ['John', 'Alice', 'Bob'],
'Age': [30, 25, 35],
'Income': [50000, 60000, 70000]}
df = pd.DataFrame(data)
# Strategy: Take the ratio of 'Income' to 'Age'
df['Income_Age_Ratio'] = df['Income'] / df['Age']
print(df)
```
Output:
```
Name Age Income Income_Age_Ratio
0 John 30 50000 1666.666667
1 Alice 25 60000 2400.000000
2 Bob 35 70000 2000.000000
```
In this example, a new feature 'Income_Age_Ratio' is created by taking the ratio of 'Income' to 'Age'.
# Supervised learning: linear regression, logistic regression, decision trees, and ensemble methods
1. Linear Regression
Linear regression is a simple and widely used algorithm for predicting a continuous target variable based on one or more input features. It assumes a linear relationship between the input features and the target variable. The goal of linear regression is to find the best-fitting line that minimizes the difference between the predicted and actual values.
The equation for linear regression can be written as:
$$y = \beta_0 + \beta_1x_1 + \beta_2x_2 + ... + \beta_nx_n$$
where $y$ is the target variable, $x_1, x_2, ..., x_n$ are the input features, and $\beta_0, \beta_1, \beta_2, ..., \beta_n$ are the coefficients that determine the slope of the line.
To find the best-fitting line, we use the method of least squares, which minimizes the sum of the squared differences between the predicted and actual values. The coefficients can be estimated using various optimization algorithms, such as gradient descent.
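For a small problem, the least-squares coefficients also have a closed form via the normal equations, $\beta = (X^\mathsf{T}X)^{-1}X^\mathsf{T}y$; the following NumPy sketch uses made-up data lying exactly on a line.

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0])
y = np.array([3.0, 5.0, 7.0, 9.0])         # lies exactly on y = 1 + 2x

X = np.column_stack([np.ones_like(x), x])  # first column of ones gives beta_0
beta = np.linalg.solve(X.T @ X, X.T @ y)   # solves the normal equations
print(beta)   # ≈ [1. 2.]
```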
Let's consider a dataset of house prices. Suppose we want to predict the price of a house based on its size (in square feet). We can use linear regression to find the best-fitting line.
```python
import pandas as pd
from sklearn.linear_model import LinearRegression
data = {'Size': [1000, 1500, 2000, 2500],
'Price': [100000, 150000, 200000, 250000]}
df = pd.DataFrame(data)
X = df[['Size']]
y = df['Price']
model = LinearRegression()
model.fit(X, y)
print('Intercept:', model.intercept_)
print('Coefficient:', model.coef_[0])
```
Output:
```
Intercept: 0.0
Coefficient: 100.0
```
In this example, the best-fitting line is $y = 100x$ (the intercept is essentially zero, since each price in the data is exactly 100 times the size), where $y$ is the predicted price and $x$ is the size of the house.
## Exercise
Consider a dataset of student exam scores. The dataset contains two input features: 'Hours' (number of study hours) and 'Score' (exam score). Use linear regression to find the best-fitting line that predicts the exam score based on the number of study hours. Write the code to implement linear regression.
### Solution
```python
import pandas as pd
from sklearn.linear_model import LinearRegression
data = {'Hours': [5, 7, 10, 12],
'Score': [60, 70, 80, 90]}
df = pd.DataFrame(data)
X = df[['Hours']]
y = df['Score']
model = LinearRegression()
model.fit(X, y)
print('Intercept:', model.intercept_)
print('Coefficient:', model.coef_[0])
```
Output:
```
Intercept: 39.82758620689655
Coefficient: 4.137931034482759
```
In this example, the best-fitting line is approximately $y = 39.83 + 4.14x$, where $y$ is the predicted exam score and $x$ is the number of study hours.
2. Logistic Regression
Logistic regression is a popular algorithm for binary classification problems, where the target variable has two possible outcomes. It models the probability of the positive class as a function of the input features. Logistic regression uses the logistic function (also known as the sigmoid function) to map the linear combination of the input features to a probability value between 0 and 1.
The equation for logistic regression can be written as:
$$P(y=1|x) = \frac{1}{1 + e^{-(\beta_0 + \beta_1x_1 + \beta_2x_2 + ... + \beta_nx_n)}}$$
where $P(y=1|x)$ is the probability of the positive class, $x_1, x_2, ..., x_n$ are the input features, and $\beta_0, \beta_1, \beta_2, ..., \beta_n$ are the coefficients.
To find the best-fitting line, logistic regression uses maximum likelihood estimation, which maximizes the likelihood of the observed data given the model parameters. The coefficients can be estimated using various optimization algorithms, such as gradient descent.
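The logistic (sigmoid) function itself is a one-liner; a minimal sketch showing how it squashes any linear combination of features into a probability:

```python
import math

def sigmoid(z):
    """Map any real number to a probability strictly between 0 and 1."""
    return 1.0 / (1.0 + math.exp(-z))

# A positive linear combination pushes the probability toward 1,
# a negative one pushes it toward 0, and zero gives exactly 0.5.
print(sigmoid(2.0))   # ~0.881
print(sigmoid(0.0))   # 0.5
print(sigmoid(-2.0))  # ~0.119
```

Note the symmetry: $\sigma(z) + \sigma(-z) = 1$, which is why the two classes' probabilities always sum to one.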
Let's consider a dataset of email messages. Suppose we want to classify whether an email is spam or not based on the presence of certain keywords. We can use logistic regression to model the probability of an email being spam.
```python
import pandas as pd
from sklearn.linear_model import LogisticRegression
data = {'Keyword1': [1, 0, 1, 0],
'Keyword2': [0, 1, 1, 0],
'Spam': [1, 0, 1, 0]}
df = pd.DataFrame(data)
X = df[['Keyword1', 'Keyword2']]
y = df['Spam']
model = LogisticRegression()
model.fit(X, y)
print('Intercept:', model.intercept_[0])
print('Coefficient 1:', model.coef_[0][0])
print('Coefficient 2:', model.coef_[0][1])
```
Output:
```
Intercept: -0.4054651081081644
Coefficient 1: 0.4054651081081644
Coefficient 2: 0.4054651081081644
```
In this example, the logistic regression model is $P(y=1|x) = \frac{1}{1 + e^{-(-0.405 + 0.405x_1 + 0.405x_2)}}$, where $P(y=1|x)$ is the probability of the email being spam, and $x_1$ and $x_2$ indicate the presence of keyword 1 and keyword 2, respectively.
## Exercise
Consider a dataset of customer information. The dataset contains two input features: 'Age' (in years) and 'Income' (in dollars). Use logistic regression to model the probability of a customer purchasing a product based on their age and income. Write the code to implement logistic regression.
### Solution
```python
import pandas as pd
from sklearn.linear_model import LogisticRegression
data = {'Age': [25, 30, 35, 40],
'Income': [50000, 60000, 70000, 80000],
'Purchase': [0, 1, 1, 0]}
df = pd.DataFrame(data)
X = df[['Age', 'Income']]
y = df['Purchase']
model = LogisticRegression()
model.fit(X, y)
print('Intercept:', model.intercept_[0])
print('Coefficient 1:', model.coef_[0][0])
print('Coefficient 2:', model.coef_[0][1])
```
Output:
```
Intercept: -0.0001670518960310375
Coefficient 1: -0.0001670518960310375
Coefficient 2: 0.0001670518960310375
```
In this example, the fitted intercept and coefficients are all close to zero, so the model predicts a probability of about 0.5 for every customer; with only four samples, logistic regression cannot learn a meaningful relationship between age, income, and purchasing.
3. Decision Trees
Decision trees are a popular algorithm for both classification and regression problems. They model the target variable as a tree-like structure of decisions and their possible consequences. Each internal node of the tree represents a decision based on one of the input features, and each leaf node represents a predicted value or class.
The decision tree algorithm recursively partitions the input space into subsets based on the values of the input features. The goal is to create partitions that are as pure as possible, meaning that they contain mostly samples of the same class or have similar predicted values. The purity of a partition is typically measured using metrics such as Gini impurity or entropy.
To make predictions, a new sample is traversed down the tree from the root to a leaf node based on the values of its input features. The predicted value or class is then determined by the majority class or the average value of the samples in the leaf node.
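Gini impurity is straightforward to compute by hand; a minimal sketch (the helper function below is illustrative, not scikit-learn's implementation):

```python
def gini(labels):
    """Gini impurity: probability that two randomly drawn samples differ in class."""
    n = len(labels)
    counts = {}
    for label in labels:
        counts[label] = counts.get(label, 0) + 1
    return 1.0 - sum((c / n) ** 2 for c in counts.values())

print(gini([0, 0, 0, 0]))  # 0.0   -> perfectly pure partition
print(gini([0, 0, 1, 1]))  # 0.5   -> maximally impure for two classes
print(gini([0, 1, 1, 1]))  # 0.375
```

A split is chosen to minimize the weighted impurity of the resulting child partitions.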
Let's consider a dataset of customer information. Suppose we want to classify whether a customer will churn or not based on their age and income. We can use a decision tree to make predictions.
```python
import pandas as pd
from sklearn.tree import DecisionTreeClassifier
data = {'Age': [25, 30, 35, 40],
'Income': [50000, 60000, 70000, 80000],
'Churn': [0, 1, 1, 0]}
df = pd.DataFrame(data)
X = df[['Age', 'Income']]
y = df['Churn']
model = DecisionTreeClassifier()
model.fit(X, y)
print('Feature importance:', model.feature_importances_)
```
Output:
```
Feature importance: [0.4 0.6]
```
In this example, the decision tree model splits the input space based on the customer's income and age. The feature importance indicates that income is more important than age in predicting churn.
## Exercise
Consider a dataset of student exam scores. The dataset contains two input features: 'Hours' (number of study hours) and 'Score' (exam score). Use a decision tree to predict whether a student will pass or fail based on their study hours and exam score. Write the code to implement a decision tree.
### Solution
```python
import pandas as pd
from sklearn.tree import DecisionTreeClassifier
data = {'Hours': [5, 7, 10, 12],
'Score': [60, 70, 80, 90],
'Pass': [0, 1, 1, 1]}
df = pd.DataFrame(data)
X = df[['Hours', 'Score']]
y = df['Pass']
model = DecisionTreeClassifier()
model.fit(X, y)
print('Feature importance:', model.feature_importances_)
```
Output:
```
Feature importance: [0. 1.]
```
In this example, the decision tree model splits the input space based on the student's exam score. The feature importance indicates that exam score is more important than study hours in predicting whether a student will pass or fail.
4. Ensemble Methods
Ensemble methods combine multiple models to make more accurate predictions. They are based on the principle of "wisdom of the crowd," where the collective decision of multiple models is often better than that of a single model. Ensemble methods can be used for both classification and regression problems.
Two popular ensemble methods are bagging and boosting:
- Bagging: Bagging (short for bootstrap aggregating) involves training multiple models on different subsets of the training data and combining their predictions. Each model is trained on a random sample of the training data with replacement. The final prediction is typically the majority vote (for classification) or the average (for regression) of the individual predictions.
- Boosting: Boosting involves training multiple models sequentially, where each model tries to correct the mistakes of the previous models. The training data is reweighted at each step to focus on the samples that were misclassified by the previous models. The final prediction is a weighted combination of the individual predictions.
Ensemble methods can improve the performance and robustness of machine learning models, especially when the individual models have different strengths and weaknesses.
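The resample-train-vote loop of bagging can be sketched with a deliberately crude "model", a threshold stump trained on bootstrap samples (everything below is a toy illustration, not a production ensemble):

```python
import random
from collections import Counter

random.seed(0)

# Toy training set: (feature, label); the label is 1 when the feature exceeds 5
data = [(x, int(x > 5)) for x in range(10)]

def train_stump(sample):
    """A 'model' here is just a threshold: the mean feature of its bootstrap sample."""
    threshold = sum(x for x, _ in sample) / len(sample)
    return lambda x: int(x > threshold)

# Bagging: train each stump on a bootstrap sample (drawn with replacement)
models = []
for _ in range(25):
    sample = [random.choice(data) for _ in data]
    models.append(train_stump(sample))

def predict(x):
    votes = Counter(m(x) for m in models)
    return votes.most_common(1)[0][0]  # majority vote across the ensemble

print(predict(8), predict(1))  # 1 0
```

Each stump sees a slightly different sample, so their thresholds vary, but the majority vote is stable.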
Let's consider a dataset of customer information. Suppose we want to classify whether a customer will churn or not based on their age and income. We can use a random forest, which is an ensemble of decision trees, to make predictions.
```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
data = {'Age': [25, 30, 35, 40],
'Income': [50000, 60000, 70000, 80000],
'Churn': [0, 1, 1, 0]}
df = pd.DataFrame(data)
X = df[['Age', 'Income']]
y = df['Churn']
model = RandomForestClassifier()
model.fit(X, y)
print('Feature importance:', model.feature_importances_)
```
Output:
```
Feature importance: [0.4 0.6]
```
In this example, the random forest model is an ensemble of decision trees. The feature importance indicates that income is more important than age in predicting churn.
## Exercise
Consider a dataset of student exam scores. The dataset contains two input features: 'Hours' (number of study hours) and 'Score' (exam score). Use a random forest to predict whether a student will pass or fail based on their study hours and exam score. Write the code to implement a random forest.
### Solution
```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
data = {'Hours': [5, 7, 10, 12],
'Score': [60, 70, 80, 90],
'Pass': [0, 1, 1, 1]}
df = pd.DataFrame(data)
X = df[['Hours', 'Score']]
y = df['Pass']
model = RandomForestClassifier()
model.fit(X, y)
print('Feature importance:', model.feature_importances_)
```
Output:
```
Feature importance: [0.4 0.6]
```
In this example, the random forest model is an ensemble of decision trees. The feature importance indicates that exam score is more important than study hours in predicting whether a student will pass or fail.
# Unsupervised learning: clustering, dimensionality reduction, and anomaly detection
1. Clustering
Clustering is the process of grouping similar data points together based on their characteristics. It is used to discover natural groupings or clusters in the data. Clustering can be used for various purposes, such as customer segmentation, document clustering, and image segmentation.
There are many clustering algorithms, but two popular ones are K-means clustering and hierarchical clustering:
- K-means clustering: K-means clustering is an iterative algorithm that partitions the data into K clusters, where K is a user-defined parameter. The algorithm starts by randomly initializing K cluster centroids and assigns each data point to the nearest centroid. It then updates the centroids by taking the mean of the data points assigned to each cluster. This process is repeated until convergence, where the centroids do not change significantly.
- Hierarchical clustering: Hierarchical clustering builds a hierarchy of clusters by recursively merging or splitting clusters based on their similarity. The algorithm starts with each data point as a separate cluster and iteratively merges the most similar clusters until a single cluster is formed.
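The assign-then-update loop of K-means can be sketched in a few lines of plain Python (one-dimensional points and hand-picked initial centroids, purely for illustration):

```python
# Points on a line forming two obvious groups
points = [1.0, 2.0, 3.0, 10.0, 11.0, 12.0]
centroids = [1.0, 12.0]  # hypothetical initial centroids

for _ in range(10):  # assignment + update steps until convergence
    clusters = [[], []]
    for p in points:
        # Assignment step: each point goes to its nearest centroid
        nearest = min(range(2), key=lambda k: abs(p - centroids[k]))
        clusters[nearest].append(p)
    # Update step: each centroid moves to the mean of its assigned points
    centroids = [sum(c) / len(c) for c in clusters]

print(centroids)  # [2.0, 11.0]
```

Here the algorithm converges after a single iteration: the centroids settle at the means of the two groups.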
Let's consider a dataset of customer information. Suppose we want to cluster the customers based on their age and income. We can use K-means clustering to group similar customers together.
```python
import pandas as pd
from sklearn.cluster import KMeans
data = {'Age': [25, 30, 35, 40, 45, 50],
'Income': [50000, 60000, 70000, 80000, 90000, 100000]}
df = pd.DataFrame(data)
X = df[['Age', 'Income']]
model = KMeans(n_clusters=2)
model.fit(X)
print('Cluster labels:', model.labels_)
```
Output:
```
Cluster labels: [0 0 0 1 1 1]
```
In this example, the K-means clustering algorithm assigns each customer to one of two clusters based on their age and income.
## Exercise
Consider a dataset of student exam scores. The dataset contains two input features: 'Hours' (number of study hours) and 'Score' (exam score). Use K-means clustering to group the students based on their study hours and exam scores. Write the code to implement K-means clustering.
### Solution
```python
import pandas as pd
from sklearn.cluster import KMeans
data = {'Hours': [5, 7, 10, 12, 3, 8],
'Score': [60, 70, 80, 90, 50, 75]}
df = pd.DataFrame(data)
X = df[['Hours', 'Score']]
model = KMeans(n_clusters=2)
model.fit(X)
print('Cluster labels:', model.labels_)
```
Output:
```
Cluster labels: [0 0 1 1 0 0]
```
In this example, the K-means clustering algorithm assigns each student to one of two clusters based on their study hours and exam scores.
2. Dimensionality Reduction
Dimensionality reduction is the process of reducing the number of input features while preserving most of the information in the data. It is used to overcome the curse of dimensionality, where high-dimensional data can be difficult to analyze and visualize. Dimensionality reduction can also improve the performance and efficiency of machine learning algorithms.
There are two main types of dimensionality reduction techniques: feature selection and feature extraction.
- Feature selection: Feature selection involves selecting a subset of the most relevant features for the model. This can be done based on statistical measures, such as correlation or mutual information, or using machine learning algorithms that rank the importance of features.
- Feature extraction: Feature extraction involves transforming the input features into a lower-dimensional space. This can be done using techniques such as principal component analysis (PCA) or t-distributed stochastic neighbor embedding (t-SNE). These techniques aim to find a new set of features that capture most of the information in the original features.
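Feature extraction can be sketched with scikit-learn's `PCA`; because 'Age' and 'Income' in these toy datasets are perfectly correlated, a single principal component captures essentially all of the variance (a sketch, separate from the feature-selection example below):

```python
import pandas as pd
from sklearn.decomposition import PCA

data = {'Age': [25, 30, 35, 40, 45, 50],
        'Income': [50000, 60000, 70000, 80000, 90000, 100000]}
df = pd.DataFrame(data)

# Project the two (perfectly correlated) features onto one principal component
pca = PCA(n_components=1)
X_new = pca.fit_transform(df[['Age', 'Income']])
print('Explained variance ratio:', pca.explained_variance_ratio_)
```

An explained variance ratio near 1.0 means almost no information is lost by dropping from two dimensions to one.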
Let's consider a dataset of customer information. Suppose we want to reduce the dimensionality of the dataset by selecting the most relevant features. We can use feature selection based on correlation to select the features with the highest correlation with the target variable.
```python
import pandas as pd
from sklearn.feature_selection import SelectKBest
from sklearn.feature_selection import f_regression
data = {'Age': [25, 30, 35, 40, 45, 50],
'Income': [50000, 60000, 70000, 80000, 90000, 100000],
'Churn': [0, 1, 1, 0, 1, 0]}
df = pd.DataFrame(data)
X = df[['Age', 'Income']]
y = df['Churn']
selector = SelectKBest(score_func=f_regression, k=1)
X_new = selector.fit_transform(X, y)
print('Selected feature:', X.columns[selector.get_support()])
```
Output:
```
Selected feature: Index(['Income'], dtype='object')
```
In this example, feature selection based on correlation selects the 'Income' feature as the most relevant feature for predicting churn.
## Exercise
Consider a dataset of student exam scores. The dataset contains two input features, 'Hours' (number of study hours) and 'Attendance' (number of classes attended), and a target variable 'Score' (exam score). Use feature selection based on correlation to select the input feature most relevant for predicting the exam score. Write the code to implement feature selection.
### Solution
```python
import pandas as pd
from sklearn.feature_selection import SelectKBest
from sklearn.feature_selection import f_regression
data = {'Hours': [5, 7, 10, 12],
'Score': [60, 70, 80, 90],
'Attendance': [80, 90, 95, 100]}
df = pd.DataFrame(data)
X = df[['Hours', 'Attendance']]
y = df['Score']
selector = SelectKBest(score_func=f_regression, k=1)
X_new = selector.fit_transform(X, y)
print('Selected feature:', X.columns[selector.get_support()])
```
Output:
```
Selected feature: Index(['Hours'], dtype='object')
```
In this example, feature selection based on correlation selects the 'Hours' feature as the most relevant feature for predicting the exam score.
3. Anomaly Detection
Anomaly detection is the process of identifying rare or unusual data points that deviate significantly from the normal pattern. It is used to detect outliers or anomalies in the data, which may indicate fraudulent activity, errors, or unusual behavior. Anomaly detection can be used in various domains, such as fraud detection, network intrusion detection, and predictive maintenance.
There are many anomaly detection algorithms, but two popular ones are isolation forest and one-class SVM:
- Isolation forest: Isolation forest is an algorithm that isolates anomalies by randomly partitioning the data into subsets and then identifying anomalies as data points that require fewer partitions to be isolated. The intuition behind isolation forest is that anomalies are rare and can be easily separated from the normal data points.
- One-class SVM: One-class SVM is an algorithm that learns a boundary around the normal data points and classifies any data point outside the boundary as an anomaly. The algorithm finds the optimal hyperplane that maximizes the margin between the normal data points and the boundary.
Let's consider a dataset of customer transactions. Suppose we want to detect fraudulent transactions based on the transaction amount. We can use the isolation forest algorithm to identify the fraudulent transactions.
```python
import pandas as pd
from sklearn.ensemble import IsolationForest
data = {'Amount': [100, 200, 300, 400, 500, 1000],
'Fraud': [0, 0, 0, 0, 0, 1]}
df = pd.DataFrame(data)
X = df[['Amount']]
y = df['Fraud']
model = IsolationForest(contamination=0.1)
model.fit(X)
print('Anomaly scores:', model.decision_function(X))
```
Output:
```
Anomaly scores: [0.148 0.148 0.148 0.148 0.148 -0.148]
```
In this example, the isolation forest algorithm assigns an anomaly score to each transaction; lower (negative) scores indicate more anomalous points, so the unusually large transaction of 1000 is flagged as likely fraudulent.
## Exercise
Consider a dataset of student exam scores. The dataset contains two input features: 'Hours' (number of study hours) and 'Score' (exam score). Use the isolation forest algorithm to detect any unusual or anomalous student exam scores. Write the code to implement anomaly detection.
### Solution
```python
import pandas as pd
from sklearn.ensemble import IsolationForest
data = {'Hours': [5, 7, 10, 12, 3, 8],
'Score': [60, 70, 80, 90, 50, 75]}
df = pd.DataFrame(data)
X = df[['Hours', 'Score']]
model = IsolationForest(contamination=0.1)
model.fit(X)
print('Anomaly scores:', model.decision_function(X))
```
Output:
```
Anomaly scores: [0.148 0.148 0.148 0.148 0.148 0.148]
```
In this example, the isolation forest algorithm assigns an anomaly score to each student, where lower (negative) scores indicate more anomalous combinations of study hours and exam scores; here the scores are all similar, so no student stands out as an anomaly.
# Model evaluation and performance metrics
1. Accuracy
Accuracy is a common performance metric for classification problems. It measures the proportion of correctly classified samples out of the total number of samples. Accuracy is calculated as:
$$\text{Accuracy} = \frac{\text{Number of correctly classified samples}}{\text{Total number of samples}}$$
Accuracy is a simple and intuitive metric, but it can be misleading when the classes are imbalanced or when the cost of misclassification is different for different classes.
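The accuracy formula is simple enough to compute by hand; a minimal sketch on made-up labels:

```python
y_true = [0, 1, 1, 0, 1, 0]  # actual classes (hypothetical)
y_pred = [0, 1, 0, 0, 1, 1]  # model predictions (hypothetical)

# Count positions where the prediction matches the true label
correct = sum(t == p for t, p in zip(y_true, y_pred))
accuracy = correct / len(y_true)
print('Accuracy:', accuracy)  # 4 correct out of 6 -> ~0.667
```

This is exactly what `sklearn.metrics.accuracy_score` computes.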
Let's consider a binary classification problem. Suppose we have a dataset of customer information and we want to predict whether a customer will churn or not based on their age and income. We can use a logistic regression model and calculate the accuracy of the model.
```python
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
data = {'Age': [25, 30, 35, 40],
'Income': [50000, 60000, 70000, 80000],
'Churn': [0, 1, 1, 0]}
df = pd.DataFrame(data)
X = df[['Age', 'Income']]
y = df['Churn']
model = LogisticRegression()
model.fit(X, y)
y_pred = model.predict(X)
accuracy = accuracy_score(y, y_pred)
print('Accuracy:', accuracy)
```
Output:
```
Accuracy: 0.5
```
In this example, the logistic regression model has an accuracy of 0.5, which means that 50% of the samples are correctly classified.
## Exercise
Consider a binary classification problem. The dataset contains two input features: 'Hours' (number of study hours) and 'Score' (exam score). Use a logistic regression model to predict whether a student will pass or fail based on their study hours and exam score. Calculate the accuracy of the model. Write the code to implement logistic regression and calculate the accuracy.
### Solution
```python
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
data = {'Hours': [5, 7, 10, 12],
'Score': [60, 70, 80, 90],
'Pass': [0, 1, 1, 1]}
df = pd.DataFrame(data)
X = df[['Hours', 'Score']]
y = df['Pass']
model = LogisticRegression()
model.fit(X, y)
y_pred = model.predict(X)
accuracy = accuracy_score(y, y_pred)
print('Accuracy:', accuracy)
```
Output:
```
Accuracy: 1.0
```
In this example, the logistic regression model has an accuracy of 1.0, which means that all the samples are correctly classified.
2. Precision, Recall, and F1 Score
Precision, recall, and F1 score are performance metrics for binary classification problems that take into account both the true positive and false positive rates.
- Precision measures the proportion of true positive predictions out of the total number of positive predictions. It is calculated as:
$$\text{Precision} = \frac{\text{True Positives}}{\text{True Positives} + \text{False Positives}}$$
- Recall (also known as sensitivity or true positive rate) measures the proportion of true positive predictions out of the total number of actual positive samples. It is calculated as:
$$\text{Recall} = \frac{\text{True Positives}}{\text{True Positives} + \text{False Negatives}}$$
- F1 score is the harmonic mean of precision and recall. It provides a balanced measure of the model's performance. F1 score is calculated as:
$$\text{F1 Score} = 2 \times \frac{\text{Precision} \times \text{Recall}}{\text{Precision} + \text{Recall}}$$
Precision, recall, and F1 score are useful when the classes are imbalanced or when the cost of false positives and false negatives is different.
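All three metrics can be computed directly from the confusion-matrix counts; a minimal sketch on made-up labels:

```python
y_true = [1, 0, 1, 1, 0, 1]  # actual classes (hypothetical)
y_pred = [1, 1, 1, 0, 0, 1]  # model predictions (hypothetical)

tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))  # true positives
fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))  # false positives
fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))  # false negatives

precision = tp / (tp + fp)
recall = tp / (tp + fn)
f1 = 2 * precision * recall / (precision + recall)
print(precision, recall, f1)  # 0.75 0.75 0.75
```

With 3 true positives, 1 false positive, and 1 false negative, precision and recall coincide here, so the F1 score equals both.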
Let's consider a binary classification problem. Suppose we have a dataset of customer information and we want to predict whether a customer will churn or not based on their age and income. We can use a logistic regression model and calculate the precision, recall, and F1 score of the model.
```python
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import precision_score, recall_score, f1_score
data = {'Age': [25, 30, 35, 40],
'Income': [50000, 60000, 70000, 80000],
'Churn': [0, 1, 1, 0]}
df = pd.DataFrame(data)
X = df[['Age', 'Income']]
y = df['Churn']
model = LogisticRegression()
model.fit(X, y)
y_pred = model.predict(X)
print('Precision:', precision_score(y, y_pred))
print('Recall:', recall_score(y, y_pred))
print('F1 Score:', f1_score(y, y_pred))
```
Precision tells us how trustworthy the model's positive predictions are, recall tells us how many of the actual churners it catches, and the F1 score balances the two in a single number.
# Bias and variance trade-off in machine learning models
In machine learning, the bias-variance trade-off is a fundamental concept that helps us understand the performance of our models. It refers to the trade-off between the model's ability to fit the training data (low bias) and its ability to generalize to unseen data (low variance).
A model with high bias is too simple and makes strong assumptions about the data, leading to underfitting. It fails to capture the underlying patterns and relationships in the data. On the other hand, a model with high variance is too complex and tries to fit the noise in the training data, leading to overfitting. It performs well on the training data but fails to generalize to new data.
To find the right balance between bias and variance, we need to choose an appropriate model complexity. This can be done through techniques like cross-validation and regularization.
1. Bias
Bias measures how far off the predictions of a model are from the true values. A model with high bias consistently underestimates or overestimates the target variable. It is usually caused by a model that is too simple or has strong assumptions about the data.
For example, consider a linear regression model that assumes a linear relationship between the input features and the target variable. If the true relationship is non-linear, the model will have high bias and fail to capture the underlying patterns in the data.
2. Variance
Variance measures the variability of the model's predictions for different training sets. A model with high variance is sensitive to the specific training data and tends to overfit. It performs well on the training data but fails to generalize to new data.
For example, consider a decision tree model with no depth limit. It can create complex decision boundaries that perfectly fit the training data. However, it is likely to have high variance and overfit the noise in the data.
3. Bias-Variance Trade-off
The goal is to find the right balance between bias and variance. A model with too much bias will have high error on both the training and test data, while a model with too much variance will have low error on the training data but high error on the test data.
For example, consider a polynomial regression model. As the degree of the polynomial increases, the model becomes more flexible and can fit the training data better. However, if the degree is too high, the model will have high variance and overfit the noise in the data.
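One way to see the trade-off concretely is to fit polynomials of increasing degree to noisy synthetic data with `numpy.polyfit`: the training error keeps shrinking as the degree grows, even after the model starts fitting noise, which is why training error alone cannot reveal overfitting. A sketch on synthetic data:

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 20)
# Noisy samples of a non-linear function
y = np.sin(2 * np.pi * x) + rng.normal(0, 0.1, size=x.shape)

errors = {}
for degree in (1, 3, 9):
    coeffs = np.polyfit(x, y, degree)   # least-squares polynomial fit
    y_fit = np.polyval(coeffs, x)
    errors[degree] = float(np.mean((y_fit - y) ** 2))
    print(f'degree {degree}: training MSE = {errors[degree]:.4f}')
```

Degree 1 underfits the sine wave (high bias), degree 3 captures it well, and degree 9 drives the training error even lower by chasing the noise (high variance); only a held-out test set would expose the degree-9 model's poor generalization.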
## Exercise
Consider a classification problem where you have a dataset with two input features and two classes. You train a logistic regression model and find that it has high accuracy on the training data but low accuracy on the test data. What can you conclude about the bias and variance of the model?
### Solution
Based on the given information, we can conclude that the model has low bias and high variance. The model performs well on the training data (low bias) but fails to generalize to new data (high variance). This indicates that the model is overfitting the training data.
# Reinforcement learning: Markov decision processes, Q-learning, and policy gradients
Reinforcement learning is a type of machine learning that focuses on training agents to make decisions in an environment. It is commonly used in scenarios where an agent interacts with an environment and learns to take actions that maximize a reward signal.
One key concept in reinforcement learning is Markov decision processes (MDPs). MDPs are mathematical models that describe the interaction between an agent and an environment. They are defined by a set of states, actions, transition probabilities, and reward functions.
1. Markov decision processes (MDPs)
MDPs are used to model decision-making problems in which the outcome is partly random and partly under the control of the decision maker. They are characterized by the Markov property, which states that the future state depends only on the current state and action, and not on the past history.
For example, consider a robot navigating a maze. The state of the robot could be its current position in the maze, and the actions could be moving up, down, left, or right. The transition probabilities would describe the likelihood of moving from one state to another based on the chosen action.
2. Q-learning
Q-learning is a popular algorithm used in reinforcement learning to find the optimal policy for an agent. The goal of Q-learning is to learn a Q-function, which estimates the expected future rewards for taking a particular action in a given state.
For example, in the maze navigation problem, the Q-function would estimate the expected future rewards for moving in a certain direction from a given position in the maze. The agent can then choose the action with the highest Q-value to maximize its rewards.
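The heart of Q-learning is the update rule $Q(s,a) \leftarrow Q(s,a) + \alpha\,[r + \gamma \max_{a'} Q(s',a') - Q(s,a)]$. A minimal sketch of a single update on a hypothetical two-state table (the state and action names are made up):

```python
alpha, gamma = 0.5, 0.9   # learning rate and discount factor

# Hypothetical Q-table: state s1 already knows 'right' is worth 1.0
Q = {('s0', 'right'): 0.0, ('s0', 'left'): 0.0,
     ('s1', 'right'): 1.0, ('s1', 'left'): 0.0}

# The agent takes 'right' in s0, receives reward 0, and lands in s1
s, a, r, s_next = 's0', 'right', 0.0, 's1'
best_next = max(Q[(s_next, a2)] for a2 in ('right', 'left'))
Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
print(Q[('s0', 'right')])  # 0.45 -- the value of s1 propagates back one step
```

Repeated over many episodes, these one-step updates propagate reward information backwards through the state space until the Q-values converge.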
3. Policy gradients
Policy gradients is another approach to reinforcement learning that directly learns a policy function, which maps states to actions. The policy function is optimized by gradient ascent to maximize the expected cumulative reward.
For example, in the maze navigation problem, the policy function would directly map the current position in the maze to the next action to take. The agent can then follow the policy to navigate the maze and maximize its rewards.
## Exercise
Consider a robot learning to navigate a gridworld environment. The robot can move up, down, left, or right, and receives a reward of +1 for reaching the goal state and -1 for reaching a forbidden state. The robot uses Q-learning to learn the optimal policy.
What are the key components of the Q-learning algorithm?
### Solution
The key components of the Q-learning algorithm are:
- Q-function: Estimates the expected future rewards for taking a particular action in a given state.
- Exploration-exploitation trade-off: Balances between exploring new actions and exploiting the current knowledge.
- Learning rate: Controls the weight given to new information compared to previous knowledge.
- Discount factor: Determines the importance of future rewards compared to immediate rewards.
- Update rule: Updates the Q-values based on the observed rewards and the estimated future rewards.
# Deep learning: neural networks and backpropagation
Deep learning is a subfield of machine learning that focuses on training neural networks with multiple layers. Neural networks are computational models inspired by the structure and function of the human brain. They are composed of interconnected nodes, or neurons, that process and transmit information.
1. Neural networks
Neural networks are composed of layers of interconnected neurons. Each neuron receives input from the previous layer, applies a mathematical function to the input, and produces an output. The output of one neuron becomes the input for the next neuron in the network.
For example, in an image classification task, the input layer of a neural network would receive pixel values of an image. The hidden layers would process the input and extract features, and the output layer would produce the predicted class label.
2. Backpropagation
Backpropagation is a key algorithm used to train neural networks. It involves two main steps: forward propagation and backward propagation. In forward propagation, the input is fed through the network, and the output is calculated. In backward propagation, the error between the predicted output and the true output is calculated and used to update the weights of the network.
For example, in the image classification task, backpropagation calculates the difference between the predicted class label and the true class label. This error is then used to update the weights of the network, so that the next prediction is more accurate.
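A single forward and backward pass can be sketched for a one-neuron "network" with a sigmoid activation and squared-error loss (toy numbers, not a full network):

```python
import math

# Single sigmoid neuron: y_hat = sigmoid(w * x + b), loss = 0.5 * (y_hat - y)^2
x, y = 1.0, 1.0   # one training example (input, target)
w, b = 0.0, 0.0   # initial weight and bias
lr = 1.0          # learning rate (large, so the step is visible)

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Forward propagation
z = w * x + b
y_hat = sigmoid(z)  # 0.5 at the initial weights

# Backward propagation (chain rule): dL/dz = (y_hat - y) * sigmoid'(z)
dz = (y_hat - y) * y_hat * (1 - y_hat)
w -= lr * dz * x    # gradient step on the weight
b -= lr * dz        # gradient step on the bias

print(w, b)  # both move to 0.125, toward predicting the target 1.0
```

Real networks repeat exactly this pattern layer by layer, propagating the error gradient backwards through every weight.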
3. Activation functions
Activation functions are mathematical functions applied to the output of a neuron. They introduce non-linearity to the network, allowing it to learn complex patterns and relationships in the data. Common activation functions include the sigmoid function, the rectified linear unit (ReLU) function, and the hyperbolic tangent function.
For example, the sigmoid function maps the output of a neuron to a value between 0 and 1, representing the probability of a certain class. The ReLU function, on the other hand, sets negative values to zero and keeps positive values unchanged.
## Exercise
What is the purpose of backpropagation in training neural networks?
### Solution
The purpose of backpropagation is to update the weights of a neural network based on the difference between the predicted output and the true output. It calculates the error and propagates it backwards through the network, allowing the network to learn from its mistakes and make more accurate predictions in the future.
# Natural language processing and its applications in machine learning
Natural language processing (NLP) is a subfield of artificial intelligence that focuses on the interaction between computers and human language. It involves the development of algorithms and models that enable computers to understand, interpret, and generate human language.
1. Text preprocessing
Text preprocessing is an important step in NLP that involves cleaning and transforming raw text data into a format that can be easily understood by machine learning models. It typically includes tasks such as tokenization, removing stop words, stemming or lemmatization, and converting text to numerical representations.
For example, tokenization involves splitting a piece of text into individual words or tokens. Removing stop words involves removing common words like "the" and "and" that do not carry much meaning. Stemming or lemmatization involves reducing words to their base or root form.
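A minimal preprocessing pipeline combining these steps might look as follows. The stop-word list and the suffix-stripping rule here are deliberately tiny illustrations; real systems use larger stop-word lists and proper stemmers or lemmatizers.

```python
import re

STOP_WORDS = {"the", "and", "a", "is", "to"}  # tiny illustrative stop-word list

def preprocess(text):
    # Tokenization: lowercase the text and split it into word tokens.
    tokens = re.findall(r"[a-z']+", text.lower())
    # Stop-word removal: drop common words that carry little meaning.
    tokens = [t for t in tokens if t not in STOP_WORDS]
    # Crude suffix stripping, as a stand-in for real stemming.
    return [t[:-1] if t.endswith("s") else t for t in tokens]

print(preprocess("The cats and the dog"))  # ['cat', 'dog']
```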
2. Sentiment analysis
Sentiment analysis is a common application of NLP that involves determining the sentiment or emotion expressed in a piece of text. It can be used to analyze customer reviews, social media posts, or any other text data that contains subjective information.
For example, sentiment analysis can be used to determine whether a customer review is positive, negative, or neutral. This information can be valuable for businesses to understand customer feedback and make informed decisions.
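The simplest form of sentiment analysis counts words from small positive and negative lexicons; the word lists below are hypothetical and far smaller than those used in practice, but the structure of the approach is the same.

```python
POSITIVE = {"great", "good", "excellent", "love"}
NEGATIVE = {"bad", "terrible", "poor", "hate"}

def sentiment(review):
    # Score = (# positive words) - (# negative words).
    words = review.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

print(sentiment("great product and I love it"))   # positive
print(sentiment("I hate this bad product"))       # negative
```

Modern systems replace the hand-built lexicons with learned models, but the output labels are the same.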
3. Named entity recognition
Named entity recognition is another application of NLP that involves identifying and classifying named entities in text. Named entities can be anything from names of people, organizations, locations, dates, or any other specific entity.
For example, named entity recognition can be used to identify names of people, organizations, and locations in news articles. This information can be used for various purposes, such as information retrieval or knowledge graph construction.
## Exercise
What is the purpose of text preprocessing in natural language processing?
### Solution
The purpose of text preprocessing in natural language processing is to clean and transform raw text data into a format that can be easily understood by machine learning models. It involves tasks such as tokenization, removing stop words, stemming or lemmatization, and converting text to numerical representations. Text preprocessing helps to reduce noise and irrelevant information in the data, and allows the models to focus on the important features of the text.
# Big data and distributed machine learning
Big data refers to large and complex data sets that cannot be easily processed or analyzed using traditional data processing techniques. With the increasing volume, velocity, and variety of data being generated, it has become essential to develop new approaches and technologies to handle big data.
1. Challenges of big data
Big data poses several challenges in terms of storage, processing, and analysis. Traditional data processing techniques may not be able to handle the size and complexity of big data. Additionally, big data is often unstructured or semi-structured, requiring specialized tools and algorithms for analysis.
For example, consider a social media platform that generates a large amount of data in the form of posts, comments, and user interactions. Analyzing this data to gain insights and make informed decisions would require scalable and distributed processing systems.
2. Distributed machine learning
Distributed machine learning is an approach that enables the training and inference of machine learning models on large-scale distributed systems. It leverages the power of parallel processing and distributed computing to handle big data and speed up the training process.
For example, distributed machine learning can be used to train a deep learning model on a cluster of machines. Each machine processes a subset of the data and updates the model parameters accordingly. This allows for faster training and scalability to handle large datasets.
3. MapReduce and Spark
MapReduce and Spark are two popular frameworks for distributed computing that are commonly used in big data analytics and distributed machine learning. They provide abstractions and APIs for processing and analyzing large-scale data in parallel.
For example, MapReduce can be used to process and analyze large log files to extract useful information. Spark, on the other hand, provides a more general-purpose framework that supports various data processing tasks, including machine learning.
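The MapReduce programming model can be illustrated with its classic example, word counting. This sketch runs the map and reduce phases sequentially in plain Python; a real MapReduce or Spark cluster would execute the map calls in parallel across machines and shuffle the intermediate pairs before reducing.

```python
from collections import defaultdict
from itertools import chain

def map_phase(line):
    # Map: emit a (word, 1) pair for every word in the line.
    return [(word, 1) for word in line.lower().split()]

def reduce_phase(pairs):
    # Reduce: sum the counts emitted for each word.
    counts = defaultdict(int)
    for word, n in pairs:
        counts[word] += n
    return dict(counts)

lines = ["the quick brown fox", "the lazy dog"]
word_counts = reduce_phase(chain.from_iterable(map_phase(l) for l in lines))
print(word_counts["the"])  # 2
```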
## Exercise
What are the challenges posed by big data in terms of storage, processing, and analysis?
### Solution
The challenges posed by big data in terms of storage, processing, and analysis include:
- Storage: Big data requires scalable and distributed storage systems to handle the large volume of data.
- Processing: Traditional data processing techniques may not be able to handle the size and complexity of big data. Specialized tools and algorithms are required for efficient processing.
- Analysis: Big data is often unstructured or semi-structured, requiring specialized techniques and algorithms for analysis. Additionally, the sheer volume of data can make analysis challenging and time-consuming.
# Ethical considerations in machine learning
Machine learning has the potential to bring about significant advancements and benefits in various fields. However, it also raises important ethical considerations that need to be addressed. Ethical considerations in machine learning involve issues such as fairness, transparency, accountability, and privacy.
1. Fairness
Fairness in machine learning refers to the absence of bias or discrimination in the algorithms and models used. It is important to ensure that machine learning systems do not unfairly disadvantage certain groups of people based on factors such as race, gender, or socioeconomic status.
For example, a facial recognition system that is trained on biased data may have higher error rates for certain racial or ethnic groups. This can lead to unfair treatment or discrimination in areas such as law enforcement or hiring.
2. Transparency
Transparency in machine learning refers to the ability to understand and interpret the decisions made by machine learning models. It is important to have transparency in order to identify and address any biases or errors in the models.
For example, a credit scoring model that uses machine learning should be transparent in its decision-making process. This allows individuals to understand why they were denied credit and take appropriate actions if they believe the decision was unfair or discriminatory.
3. Accountability
Accountability in machine learning refers to the responsibility and liability for the decisions made by machine learning models. It is important to establish clear lines of accountability to ensure that decisions made by machine learning systems are fair, unbiased, and in compliance with legal and ethical standards.
For example, if an autonomous vehicle causes an accident, it is important to determine who is responsible for the accident - the vehicle manufacturer, the software developer, or the user. Clear accountability mechanisms need to be in place to address such situations.
## Exercise
Why is fairness an important ethical consideration in machine learning?
### Solution
Fairness is an important ethical consideration in machine learning because machine learning systems have the potential to perpetuate or amplify existing biases and discrimination. If machine learning algorithms are trained on biased data or use biased features, they can result in unfair treatment or discrimination against certain groups of people. Fairness ensures that machine learning systems do not unfairly disadvantage or harm individuals based on factors such as race, gender, or socioeconomic status.
# Real-world case studies and practical applications
Machine learning has found applications in a wide range of fields, including healthcare, finance, marketing, transportation, and more. It has the potential to revolutionize these industries by enabling more accurate predictions, automated decision-making, and improved efficiency.
1. Healthcare
Machine learning is being used in healthcare to improve diagnosis, treatment, and patient outcomes. For example, machine learning algorithms can analyze medical images to detect diseases such as cancer at an early stage. They can also predict patient outcomes and suggest personalized treatment plans based on a patient's medical history and genetic information.
In a study published in the journal Nature Medicine, researchers used machine learning to predict the risk of developing cardiovascular disease in patients. The algorithm analyzed electronic health records and identified patterns and risk factors that were not previously known. This information can help doctors intervene earlier and prevent cardiovascular events.
2. Finance
Machine learning is transforming the finance industry by enabling more accurate risk assessment, fraud detection, and automated trading. Machine learning algorithms can analyze large volumes of financial data and identify patterns and trends that humans may miss. This can help financial institutions make better investment decisions and reduce the risk of fraud.
In the field of credit scoring, machine learning algorithms are used to assess the creditworthiness of individuals and businesses. These algorithms analyze various data points, such as credit history, income, and employment status, to predict the likelihood of default. This information helps lenders make informed decisions about granting loans.
3. Marketing
Machine learning is revolutionizing the field of marketing by enabling personalized advertising and customer segmentation. Machine learning algorithms can analyze customer data, such as browsing history and purchase behavior, to predict customer preferences and tailor marketing campaigns accordingly. This can lead to higher conversion rates and customer satisfaction.
Companies like Amazon and Netflix use machine learning algorithms to recommend products and movies to their customers. These algorithms analyze customer data, such as past purchases and ratings, to predict customer preferences and make personalized recommendations. This improves the customer experience and increases sales.
## Exercise
Think of another industry or domain where machine learning can have a significant impact. Describe one potential application of machine learning in that industry or domain.
### Solution
One potential application of machine learning in the transportation industry is autonomous vehicles. Machine learning algorithms can analyze sensor data from cameras, lidar, and radar to detect and classify objects on the road, such as other vehicles, pedestrians, and traffic signs. This information can be used to make real-time decisions, such as braking or changing lanes, to ensure safe and efficient navigation. Autonomous vehicles have the potential to reduce accidents, improve traffic flow, and provide mobility solutions for people who are unable to drive. | Textbooks |
\begin{document}
\title{The drop box location problem}
\begin{abstract} For decades, voting-by-mail and the use of ballot drop boxes has substantially grown within the United States (U.S.), and in response, many U.S.~election officials have added new drop boxes to their voting infrastructure. However, existing guidance for locating drop boxes is limited. In this paper, we introduce an integer programming model, the drop box location problem (DBLP), to locate drop boxes. The DBLP considers criteria of cost, voter access, and risk. The cost of the drop box system is determined by the fixed cost of adding drop boxes and the operational cost of a collection tour by a bipartisan team who regularly collects ballots from selected locations. The DBLP utilizes covering sets to ensure each voter is in close proximity to a drop box and incorporates a novel measure of access to measure the ability to use multiple voting pathways to vote. The DBLP is shown to be NP-Hard, and we introduce a heuristic to generate a large number of feasible solutions for policy makers to select from a posteriori. Using a real-world case study of Milwaukee, WI, U.S., we study the benefit of the DBLP. The results demonstrate that the proposed optimization model identifies drop box locations that perform well across multiple criteria. The results also demonstrate that the trade-off between cost, access, and risk is non-trivial, which supports the use of the proposed optimization-based approach to select drop box locations.
\end{abstract}
\begin{keywords} Community-Based Operations Research, OR in Government, Decision-Making, Inequality, Voting Systems, Election Risk, Equity, Integer Programming \end{keywords} \renewcommand{\arabic{footnote}}{\arabic{footnote}} \section{Introduction}\label{sec:intro}
During the 2020 General election within the United States (U.S.), a record 46\% of U.S. voters cast a ballot by mail or absentee in-person \citep{mit_election_data__science_lab_voting_2021}. Approximately 41\% of these voters cast a ballot using a drop box \citep{noauthor_sharp_2020}, which are temporary or permanent fixtures similar to United States Postal Service (USPS) postboxes. Many states increased the number of drop boxes during 2020 in response to increased use of the vote-by-mail system and to help mitigate health risks associated with in-person voting \citep{corasaniti_postal_2020}.
In total, forty states and Washington, D.C.~allowed some use of ballot drop boxes \citep{huord_where_2020}.
However, the increase in drop box use is likely not a one time event. The use of non-traditional voting methods within the United States has steadily grown since 1996 \citep{scherer_majority_2021}. A recent survey of Wisconsin, U.S. election clerks found that approximately 78\% of election clerks would like some use of ballot drop boxes in future elections, and this percentage is higher among clerks from jurisdictions with a large voting age population \citep{burden_experiences_2021}. Many states have since introduced legislation to expand the number of drop boxes available to voters\footnote{There are challenges to some proposals and even calls to restrict the use of these resources \citep{vasilogambros_lawmakers_2020}.} \citep{vasilogambros_lawmakers_2020}.
Reasons for casting a ballot using a drop box include the perceived security they offer, anticipated mail delays, and a lack of voter confidence in the USPS \citep{nw_2020_2020}. For many individuals, drop boxes are also in close proximity of their home, work, or daily routine \citep{stewart_survey_2016}. Arguably, the primary benefit of drop boxes is the increased accessibility they offer to the voting infrastructure compared to in-person voting.
Studies suggest that adding drop boxes to a voting system can increase voter turnout \citep{collingwood_drop_2018, mcguire_does_2020}.
\citet{mcguire_does_2020} found that a decrease in one mile to the nearest drop box increases the probability of voting by 0.64 percent. This finding aligns with the hypothesis of election participation first offered by \citet{downs_economic_1957}. According to this hypothesis, potential voters decide whether to vote by comparing the cost (e.g., time) of voting and the potential benefits from voting. It was later argued that voting cost is the significant driver of voter turnout \citep{sigelman_cost_1982,haspel_location_2005}. We posit that the election infrastructure plays a large role in determining the cost to vote \citep{cantoni_precinct_2020, mcguire_does_2020, collingwood_drop_2018}. Thus, if we can improve the accessibility of ballot drop boxes to voters by appropriately designing the drop box infrastructure, then we can increase voter participation, particularly among groups who previously had a high cost to vote and low turnout.
Although drop boxes can increase voter participation,
there are many challenges associated with identifying drop box locations and managing the drop box voting system. First, drop boxes can pose a large financial cost.
Drop boxes can cost \$6,000 \citep{CISA_ballot_2020}, and designated video surveillance cameras that increase drop box security can cost up to \$4,000 \citep{schaefer_ri_2020}.
Second, with an increased number of drop boxes, substantial time and resources must be devoted to collecting ballots. During the election period, it is recommended that bipartisan teams regularly collect ballots \citep{CISA_ballot_2020}. If drop boxes are not strategically placed or if there are a large number of drop boxes, this route may be costly and leave less time to devote to other election tasks. Third, there are security risks associated with ballot drop boxes that must be addressed \citep{scala_evaluating_2021},
although drop boxes are considered reliable \citep{elections_project_staff_drop_2020,scala_evaluating_2021}.
If the drop box specific security risks are mitigated appropriately, adding drop boxes to a voting system makes an adversarial attack on the electoral process more challenging. This improves the overall security of the voting system, since the system becomes more distributed \citep{scala_evaluating_2021}.
In addition to the previously mentioned challenges, elections are administered by state and local governments within the U.S., and each may have different voting processes. While the vote-by-mail process is typically similar across different jurisdictions within the U.S., each jurisdiction may have unique challenges or preferences that necessitate a detailed analysis of potential drop box system designs.
In light of these complexities, existing guidelines for selecting drop box locations are often insufficient to support election administrators.
In 2020, the Cybersecurity and Infrastructure Security Agency \citep{CISA_ballot_2020} recommended that a drop box be placed at the primary municipal building,
that there be a drop box for every 15,000–20,000 registered voters, and that more drop boxes be added where there may be communities with historically low absentee ballot return rates.
However, these guidelines are not prescriptive enough to support administrators in identifying an appropriate portfolio of drop box locations.
To our knowledge, the only analytical approach to selecting drop box locations
uses a Geographic Information System (GIS) to determine the locations that served the most voters, allowing for a maximum drive time of 10 minutes \citep{greene_vote-by-mail_2015}.
This approach overlooks many of the trade-offs within the voting system and ignores socioeconomic differences between voters that may make voting more challenging for some individuals.
Without adequate decision support tools, election administrators may ultimately select drop box locations that perform poorly across multiple criteria by which voting systems are measured.
In this paper, we propose an integer program (IP) to support election administrators in determining how ballot drop boxes should be used in their voting systems when allowed by law\footnote{The ability to use or not use drop boxes and in what capacity is typically set by state law.}.
We formalize the IP as the drop box location problem (DBLP).
To our knowledge, the DBLP is the first mathematical model of the ballot drop box system to support election planning. The DBLP seeks to minimize the capital and operational cost of the drop box system, ensure equity of access to the voting system, and mitigate risks associated with the drop box system. Loosely, we let \emph{access} refer to the proximity of the voting infrastructure (e.g., polling places, drop boxes) to voters and the ease with which voters can cast a ballot.
Expanding access through the use of drop boxes is an important aspect of the DBLP, since voter turnout is highly correlated with the distance needed to travel to cast a ballot \citep{cantoni_precinct_2020}. We measure access to the drop box voting system using conventional covering sets. In addition, we propose a function based on concepts from discrete choice theory to measure the level of access a voter has to the multiple voting pathways offered by the voting system.
The remainder of the paper is structured as follows. In Section \ref{sec:lit}, we review the management science literature related to elections.
In Section \ref{sec:model}, we discuss measures by which the ballot drop box system can be assessed. We then formalize the drop box location problem (DBLP) and introduce an IP formulation of the DBLP.
In Section \ref{sec:solnmethods}, we discuss solution methods for the DBLP and introduce a heuristic to quickly generate a collection of feasible solutions for election officials to select from a posteriori. In Section \ref{sec:casestudy}, we introduce a case study of Milwaukee, WI, U.S. using real-world data. Using this case study, we demonstrate the value of our integer programming approach compared to rules-of-thumb that may otherwise be used. We find that the DBLP outperforms the rules-of-thumb with respect to nearly all criteria considered. We then investigate the trade-off between cost, access, and risk within potential drop box system designs. We find that the trade-off is non-trivial, and the optimization-based approach provides value.
We conclude with a brief discussion in Section \ref{sec:discussion}.
\section{Literature Review}\label{sec:lit}
Much of the management science literature aimed at supporting election planning focuses primarily on in-person voting processes.
Some research focuses on identifying and describing the in-person voting process including quantifying the arrival rate of in-person voters, the attrition rate of polling place queues, the check-in service rate, the time to vote, and poll worker characteristics \citep{spencer2010long,stein_waiting_2020}.
This research also studied how voting requirements (e.g., the introduction of voting identification requirements) impacts voting times \citep{stein_waiting_2020}.
Queueing theory has been widely used to analyze lines at polling locations and identify mitigating practices to avoid long lines \citep{stewart2015waiting,schmidt_designing_2021}.
Since voting machines have been recognized as a bottleneck in the in-person voting process \citep{yang2009all}, a stream of papers has focused on the allocation of voting machines to polling locations \citep{allen_mitigating_2006,wang2015efficiency,edelstein2010queuing}.
Other research has focused on risks of voting systems rather than operational design. The Election Assistance Commission (EAC) \citep{eac_elections_2009} analyzed threats to voting processes in the U.S.
for seven voting technology types. \citet{scala_evaluating_2021} identified security threats for mail-in voting processes and offered a relative score for each to identify the most important threats to address. They identify three drop-box related threats.
First, a misallocation of drop boxes can suppress voter turnout.
Second, a drop box can be damaged or destroyed. Third, ballots within a drop box can be stolen or manipulated. They find the likelihood of drop box risks to be relatively low compared to other risks \citep{scala_evaluating_2021}.
\citet{fitzsimmons_selecting_2020} study geographic-based risks by introducing a control problem to study how voter turnout can be manipulated through the strategic selection of polling locations.
A few papers attempt to detect disruptions or security incidents following an election \citep{highton_long_2006,allen_mitigating_2006}.
There are no known papers intended to support election administrators in planning and managing the vote-by-mail system.
Our proposed integer program addresses the risks of the drop box system \citep{scala_evaluating_2021} and employs concepts from the facility location literature.
Facility location problems are defined by a set of demands (e.g., voters) and a set of facilities (e.g., drop boxes) that can serve the needs of the demands. Arguably the most widely used facility location model is the maximal covering location problem (MCLP) \citep{church_maximal_1974}. In the MCLP, a demand is ``covered'' by, or can be served by, a predetermined set of locations called the \emph{covering set}. Facility locations are selected to maximize the number of demands covered by at least one facility. The location set covering problem (LSCP) instead requires that all demands are covered and the cost of the selected facility locations is minimized \citep{toregas_location_1971}.
The IP introduced within this paper extends the covering tour problem (CTP) \citep{gendreau_covering_1997}, which is a variant of the LSCP, by considering additional constraints and objective function terms. These changes allow us to accurately model the drop box voting system.
A CTP instance is defined by an undirected weighted graph with two mutually exclusive and exhaustive set of nodes, the \emph{tour} nodes and \emph{coverage nodes}. The objective of the CTP is to find a Hamiltonian tour of minimal length over a subset of the tour nodes such that each coverage node is covered by at least one node visited by the tour. The CTP is NP-Hard since the traveling salesman problem (TSP) can be reduced to it \citep{gendreau_covering_1997}.
Several solution methods, including exact \citep{gendreau_covering_1997,baldacci_scatter_2005} and heuristic \citep{murakami_generalized_2018,vargas_dynamic_2017}, have been proposed for the CTP. This paper represents the first known application of a CTP variation to voting systems.
\section{Problem Definition}\label{sec:model} \newcommand\drawredrect{
\begin{tikzpicture}
\tikz\draw[red,thick,dotted] (0,0) rectangle (2ex,1.25ex);
\end{tikzpicture} }
Election administrators in the U.S.~face many questions regarding the use of ballot drop boxes including whether drop boxes should be added to their local voting system and how drop boxes may affect voting performance measures. If election administrators decide to add drop boxes, they must decide how many drop boxes to add and where they should be located. The DBLP introduced in this section identifies the optimal placement of drop boxes once election administrators decide to add drop boxes to the voting system. However, election administrators can use the model during the election planning process to assess the cost, access, and risk of a potential drop box system. This can inform their decision of whether or not to add any drop boxes to the voting system.
The decisions surrounding the use of drop boxes are complex due to the number of potential locations for drop boxes, concerns about equity within the voting process, and the multiple criteria by which voting systems are measured. The most widely reported election performance metrics in the U.S.~are the number of individuals registered to vote and the fraction of eligible voters that cast a ballot, known as \emph{voter turnout} \citep{mit_election_data__science_lab_elections_2022}. In most states, there are multiple pathways by which voters can cast a ballot, and the accessibility of each pathway can influence voter turnout.
Figure \ref{fig:voting} describes the two main pathways, which are typically divided into `in-person' or `absentee'. With in-person voting, a voter obtains and casts a ballot at their assigned polling location, typically on election day. With absentee voting\footnote{Some states, such as Washington, use the ``absentee'' voting process as their primary voting method. Thus, we use ``absentee'' loosely in this paper, and sometimes refer to it as the vote-by-mail process.}, a voter requests a ballot be sent to them and the completed ballot is then returned either through the mail or using a drop box. In some states, voters must provide a reason to vote absentee, while in 34 states there is ``no-excuse'' absentee voting \citep{national_conference_of_state_legislatures_voting_2022}.
In addition to voter-based election metrics, the cost and security of the voting system is a key concern. The cost of an election is comprised of both infrastructure-based costs (e.g., polling locations) and resource based costs (e.g., staff).
The security of a voting system is not typically measured or reported to the public, despite being a major concern of officials and the public.
\begin{figure}
\caption{Typical pathways to cast a ballot, divided into in-person and absentee, and the component corresponding to the use of ballot drop boxes (\protect\drawredrect).}
\label{fig:voting}
\end{figure}
In this paper, we are concerned with a sub-pathway of the vote-by-mail process where the voter submits a ballot using a drop box.
In this pathway, a voter first requests and receives a ballot through the mail. They then decide to submit a ballot using a drop box rather than through the mail (or not returning it at all). This decision is influenced by the proximity of a drop box to the voter and the distrust the voter has in the USPS \citep{mcguire_does_2020}. A team of poll workers then collects ballots from the drop boxes, and the ballots are processed at an official election building. In this paper, we focus on the system related to the steps outlined in red (\protect\drawredrect), since they are the steps that are unique to the drop box system and are influenced by the locations of the drop boxes.
\subsection{Assessing Drop Box Infrastructure}\label{sec:metrics} There are two metrics typically used to assess the vote-by-mail system: the proportion of requested ballots that are returned and the number of ballots rejected \citep{mit_election_data__science_lab_elections_2022}. The use of ballot drop boxes can lower the rejection rate of mail ballots by reducing the time it takes a ballot to return to election officials. As a result, a voter can be notified of an incorrectly marked ballot more quickly to allow the voter to resubmit their ballot before the election deadline. This is a benefit that we do not explicitly consider in our model. We also posit based on empirical research that a well-designed drop box system can lead to a higher proportion of returned mail ballots and a higher voter turnout by improving the accessibility of the voting infrastructure \citep{downs_economic_1957,mcguire_does_2020}.
We elaborate on how access to the voting system is measured.
We employ the concept of \emph{coverage} to measure the access voters have to the drop box system.
Under the concept of coverage, a voter covered by a selected drop box location is assumed to have access to the drop box voting system. The locations that provide a voter coverage are called its \emph{covering set}.
Covering sets are flexible and can be defined to account for different modes of transportation, vehicle ownership, and other socioeconomic factors.
However,
drop boxes are a subcomponent of a larger voting system, and coverage overlooks the access provided by non-drop box voting pathways. In reality, some individuals may have better access to in-person voting than others, and adding drop boxes near them may not substantially benefit them.
This necessitates a second measure of access that distinguishes access to the complete voting infrastructure from coverage by the drop box system.
We introduce an access function based on the multinomial/conditional logit model from discrete choice theory \citep{aloulou_application_2018} to capture this dynamic. The application of discrete choice theory to questions within political science is most commonly used to explain or predict choices within a multi-candidate (or party) election \citep{glasgow_discrete_2008}. Discrete choice models have also been used to predict how individuals interact with infrastructure in different application domains. One of the earliest cases of this was the application of a conditional logit model to predict the use of the Bay Area Rapid Transit prior to its construction \citep{train_discrete_2009}. To the best of our knowledge, our paper represents the first use of a function based on discrete choice theory to model access within an optimization model.
The function we introduce makes use of some parameters. Let $v^1_{w} > 0$ be a measure of accessibility\footnote{In its exact form, $v^1_{w} = e^{U_w^1}$ where $U_w^1$ represents the utility of voting using the non-drop box voting system.} to the non-drop box voting system (e.g., in-person polling locations) for voters $w$. This can be determined, for example, by the distance to the nearest polling location. Let $a_{nw} > 0$ be a measure of the access\footnote{In its exact form, $a_{nw} = e^{U_{wn}}$ where $U_{wn}$ represents the utility of voting by using drop box $n$.} that a drop box at location $n$ would provide to $w$. This can be determined in part by the proximity of the location to the voters' places of residence and work and by the various transportation modes available between the voters and the drop box location. Based on empirical studies, the value of $a_{nw}$ should be increasing with decreasing distance \citep{mcguire_does_2020}. Finally, let $v^0_{w} > 0$ be the propensity of $w$ not to vote\footnote{In its exact form, $v^0_{w} = e^{U_w^0}$ where $U_w^0$ represents the utility of not voting.}. This could be informed by the historical non-voting rate (complement of turnout) or using surveys. Using these parameters, we introduce the following \emph{access function} to measure the access a group of individuals $w$ has to all voting pathways where $N^*$ represents the set of selected drop box locations: \begin{align}
A_w(N^*) := \frac{v^1_{w}+\sum_{j \in N^*} a_{jw} }{v^0_{w}+v^1_{w}+\sum_{j \in N^*} a_{jw} } \nonumber \end{align} The access function takes values between zero and one. A value closer to one means that the voting system, including the new ballot drop boxes, is more accessible to individuals $w$, whereas a value closer to zero means that the voting system is relatively inaccessible to individuals $w$. In this way, a higher access function value suggests higher turnout for $w$.
The access function can still be used when its strict utility-based interpretation is not reasonable or is not feasible due to data availability, since the benefit of the access function is a result of its structure.
First, the access function models access as a non-binary measure.
Second, adding any drop box to the voting system increases the value of the access function but to varying degrees based on the locations of the voter and the drop box.
Third, each voter has some heterogeneous level of access to non-drop box voting methods captured by $v_w^1$, and this access is treated as a constant within the scope of the decision of where to locate drop boxes. Each voter also has a heterogeneous access function value when no drop boxes are added to the voting system, $A_w(\emptyset) = \frac{v_w^1}{v_w^0 + v_w^1}$, which is reflective of heterogeneous turnout rates.
Fourth, the benefit of adding a drop box near a voter is marginally decreasing as the access function value increases.
This incentivizes placing drop boxes near populations with low levels of access to other voting pathways.
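To make these properties concrete, the following Python sketch (with invented parameter values, not taken from this paper's data) evaluates the access function for a single voter population and illustrates the diminishing marginal benefit of an additional drop box.

```python
# Access function A_w(N*) = (v1 + sum_j a_jw) / (v0 + v1 + sum_j a_jw).
# Parameter values below are invented for illustration only.

def access(v0, v1, a, selected):
    """Access of one voter population given the selected drop box locations."""
    reach = v1 + sum(a[j] for j in selected)
    return reach / (v0 + reach)

v0, v1 = 2.0, 1.0                      # propensity not to vote; non-box access
a = {"lib": 1.5, "fire": 0.5}          # accessibility of two candidate boxes

base = access(v0, v1, a, [])           # A_w(empty) = v1 / (v0 + v1)
one = access(v0, v1, a, ["lib"])
two = access(v0, v1, a, ["lib", "fire"])
only_fire = access(v0, v1, a, ["fire"])

assert 0 < base < one < two < 1        # every added box raises access
# The benefit of adding "fire" shrinks once "lib" is already present:
assert two - one < only_fire - base
```

The final assertion reflects the fourth property above: the same drop box contributes less to the access function value when the population already has good access.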
While it is desirable to increase voter turnout and access to the voting system, expanding the use of ballot drop boxes may increase the financial cost of managing the election.
The costs of the ballot drop box system can be broken into two major groups: fixed and operational. \emph{Fixed costs} represent the ``per drop box'' costs such as the initial purchase and costs of securing and maintaining the drop box.
Each location may have a different fixed cost due to varying installation and security equipment requirements.
Once drop boxes are installed, jurisdictions incur an \emph{operational cost} for a bipartisan team to collect ballots from the drop boxes \citep{CISA_ballot_2020}. The operational cost is determined, in part, by the distance between drop boxes, the opportunity cost of bipartisan team's time, and the frequency at which the ballots are collected during an election. We assume that a bipartisan team collects ballots from all drop boxes whenever a collection is conducted, and the drop boxes are visited in an order that minimizes the operational cost, referred to as the \emph{collection tour}.
In addition to introducing new financial costs, drop boxes introduce three types of risks to the voting process that can be mitigated through design requirements.
The first risk is that ballot drop boxes can be misallocated in a way that causes voter suppression \citep{scala_evaluating_2021}. There are two components to this risk. The first is the potential to misallocate drop boxes such that access to the drop box voting system is inequitable. The second is the potential to misallocate drop boxes such that the access to the entire voting system defined by the multiple voting pathways is inequitable.
These risks are reflected by the number of voters covered by a drop box, using the same definition of coverage introduced earlier, and the value of the access function for each voter, respectively.
We can mitigate the risk of voter suppression by requiring that each voter is covered by at least one drop box and that the value of the access function meets some minimal threshold for all voters.
The second risk is that a drop box could be damaged or destroyed \citep{scala_evaluating_2021}. A nefarious actor could influence an election by targeting drop boxes that provide access to certain voters. The impact of this risk can be mitigated by requiring all voters to be covered by multiple drop boxes, so that voters have redundant access to the drop box system.
The last risk is that ballots submitted to a drop box could be stolen or manipulated.
The impact of this risk can be mitigated by ensuring
that the collection tour has a low cost.
When the collection tour has a low cost, election officials can collect ballots often, leaving fewer at risk.
Other implicit design choices also mitigate this third risk. For example, requiring a bipartisan team to collect ballots, rather than one individual, reduces the risk of an insider attack. Likewise, incorporating security-related costs, such as the cost of a video surveillance system, into the fixed cost of a drop box mitigates the risks associated with that box.
There are additional risks and mitigations associated with the voting process that are not unique to the drop box infrastructure. For example, there is a risk of an insider attack on ballots stored at an election building after being collected from the drop boxes \citep{scala_evaluating_2021}. However, these additional risks are outside the scope of the system considered in this paper (see Figure \ref{fig:voting}).
\subsection{The Drop Box Location Problem (DBLP)}\label{sec:DBLP}
We now formally introduce an IP formulation of the drop box location problem (DBLP) using the sets, parameters, and variables presented in Table \ref{drop box:T:notation}.
\begin{table}[hbt!] \centering \caption{Notation}\label{drop box:T:notation} \begin{tabular*}{\columnwidth}{@{}l@{\extracolsep{\fill}}c@{\extracolsep{\fill}}p{0.8\columnwidth}}
\multicolumn{3}{l}{\textbf{Sets}}\\ \hline $W$ & = & the set of voter populations \\
$N$ & = & the potential drop box locations \\
$T \subseteq N$ & = & the locations at which a drop box must be placed \\
$E$ & = & all pairs $i \in N$, $j\in N$ such that $i \neq j$ and $(j,i)\notin E$ \\
$N_w \subseteq N$ & = & the drop box locations that cover $w \in W$, $|N_w| \geq q$ \\
\hline \\ \end{tabular*}
\begin{tabular*}{\columnwidth}{@{}l@{\extracolsep{\fill}}c@{\extracolsep{\fill}}p{0.8\columnwidth}} \multicolumn{3}{l}{\textbf{Parameters}}\\ \hline
$s$ & = & the start and end of the collection tour \\ $f_j$ & = & the fixed cost of placing a drop box at location $j \in N$ \\ $c_{ij}$ & = & the operational cost of traveling between $i \in N$ and $j \in N$ in the collection tour \\
$v^0_{w}$ & = & the propensity of $w \in W$ not to vote \\ $v^1_{w}$ & = & the accessibility of the non-drop box voting system to $w \in W$ \\ $a_{jw}$ & = & the accessibility of location $j \in N$ to $w \in W$ \\ $r$ & = & minimal allowable value for the access function \\ $q$ & = & minimal number of drop boxes covering each $ w \in W$ \\
\hline \\ \end{tabular*}
\begin{tabular*}{\columnwidth}{@{}l@{\extracolsep{\fill}}c@{\extracolsep{\fill}}p{0.8\columnwidth}} \multicolumn{3}{l}{\textbf{Decision Variables}}\\ \hline $x_{ij}$ & = & 1 if the collection tour moves between $i$ and $j$ $(i,j) \in E$ and 0 otherwise \\ $y_j$ & $=$ & 1 if a drop box is placed at location $j \in N$ and 0 otherwise \\
\hline \\ \end{tabular*}
\end{table}
The DBLP selects drop box locations from a set of potential locations, $N$. Potential drop box locations can be identified using existing guidelines \citep{CISA_ballot_2020, mcguire_does_2020}.
Let $y_n$ be a decision variable that equals one
if a drop box is located at location $n \in N$ and zero otherwise. Once drop box locations are selected, a collection tour over them must be found to determine the operational cost of the drop box system. Let $x_{ij}$ be a decision variable
that equals one if the collection tour travels between drop box $i$ and drop box $j$, $(i,j) \in E$, and zero otherwise, where $E$ represents all pairs $i \in N$, $j\in N$ such that $i \neq j$ and $(j,i)\notin E$. We assume the collection tour always begins and ends at a drop box\footnote{When a drop box is not located at $s$, then the model is still valid. Simply let $f_s = 0$ and $a_{sw} = 0$ for each $w \in W$, while $s$ is not a member of any covering set.} located at $s$ (e.g., the primary municipal building).
Let $T$
represent the locations at which there must be a drop box within our solution (e.g., existing drop box locations).
The set $T$ is always non-empty, since $T=\{s\}$ in the extreme case.
For each location $j \in N$, let $f_j$ equal the fixed cost of
a drop box at $j$. Let $c_{ij}$ represent the operational cost of traveling between drop boxes $(i,j) \in E$ on the collection tour.
Using this notation, we formalize the three goals of the DBLP. The first goal is to minimize the total cost associated with the selected drop box locations. The total cost of the drop box system is the sum of the fixed costs and the cost of the collection tour, $z_1 := \sum_{j \in N} f_j y_j + \sum_{(i,j)\in E} c_{ij} x_{ij} $.
The value of $z_1$ serves as the objective\footnote{Election administrators likely have a fixed budget, but the amount allocated to managing the drop box system is likely not predetermined. Thus, we wish to minimize the proportion of the budget allocated to the drop box system.} function in the IP formulation of the DBLP.
The second goal of the DBLP is to equitably improve the accessibility of the voting system. Let $W$ denote the collection of voter populations. Let $N_w \subseteq N$ represent the drop boxes that cover $w \in W$.
We ensure equitable access to the drop box system\footnote{When $q \geq 1$.} by requiring that each voter is covered by $q$ drop boxes. Reasonable values of $q$ are $0$, $1$, or $2$. The cardinality of each covering set must be at least $q$, $|N_w| \geq q$ for all $w\in W$, otherwise the problem is infeasible.
We ensure equitable access to all voting pathways by requiring that the access function value is at least $r$ for each $w\in W$, $\min_{w \in W} A_w (N') \geq r$ where $N' = \{n \in N : y_n = 1\}$ are the selected drop box locations.
This constraint can be viewed as a second objective for the DBLP using the epsilon-constraint approach for multi-objective optimization problems \citep{mavrotas_effective_2009}.
The third goal of the DBLP is to mitigate the risks associated with the drop box voting system. The risk of misallocating drop boxes in a way that leads to voter suppression is addressed by the second goal of the DBLP. The risk of ballots being susceptible to manipulation once submitted to a drop box is addressed by minimizing the cost of the collection tour, which is captured within $z_1$. The risk that damage to or destruction of drop boxes degrades voter access to the voting system is mitigated by ensuring each voter is covered by $q$ drop boxes when $q \geq 2$.
If the optimal solution to the DBLP locates two or fewer drop boxes\footnote{It can be easily checked whether two or fewer drop boxes are needed to satisfy the constraints of the model.}, the collection tour visiting the drop box locations is trivial. Thus, we assume that at least three drop boxes are selected in the optimal solution. Under this assumption, we can formulate the DBLP using the following IP.
\begin{align}
\underset{x,y}{\min} \ & z_1 = \sum_{(i,j) \in E} c_{ij} x_{ij} + \sum_{j \in N} f_j y_j \label{model:obj1} \\
\text{s.t.} \
& r(v^0_{w}+v^1_{w}+\sum_{j \in N} a_{jw}y_{j} ) \leq v^1_{w}+\sum_{j \in N} a_{jw}y_{j} & \forall \ w\in W \label{model:obj2} \\
& \sum_{j \in N_w} y_j \geq q & \forall \ w\in W \label{model:basecoverage}\\
& y_j = 1 & \forall \ j \in T \label{model:existing} \\
& \sum_{i \in N : (i,j) \in E} x_{ij} = 2y_j & \forall \ j \in N \label{model:balance} \\
& \sum_{\substack{(i,j)\in E : i \in S, j \in N \setminus S \\ \hspace{0.55cm}\text{ or } j \in S, i \in N \setminus S}} x_{ij} \geq 2y_t & \substack{\forall S \subset N, \ 2 \leq |S| \leq |N|-2,\\ T\setminus S \neq \emptyset, t \in S} \label{model:subtourelim} \\
& y_{j} \in \{0,1\} & \forall \ j \in N \label{model:ybin}\\
& x_{ij} \in \{0,1\} & \forall \ (i, j) \in E \label{model:xbin} \end{align}
The objective \eqref{model:obj1} is to minimize the total cost of the drop box system. Constraint set \eqref{model:obj2} requires that the value of the access function is at least $r$ for each $w \in W$.
Constraint set \eqref{model:basecoverage} ensures that each $w \in W$ is covered by at least $q$ drop boxes within their respective covering set.
Constraint set \eqref{model:existing} ensures that a drop box is added at each location in $T$. Constraint sets \eqref{model:balance} and \eqref{model:subtourelim} are used to determine the collection tour over the selected drop box locations using constraints originally introduced for the CTP \citep{gendreau_covering_1997}. Constraint set \eqref{model:balance} ensures that each selected drop box location is visited by the collection tour exactly once. Constraint set \eqref{model:subtourelim} introduces subtour elimination constraints. Note that these constraints differ from the subtour elimination constraints commonly seen in the TSP, since not all locations $N$ must be visited by the collection tour. The bound on the summation refers to the edges in $E$ such that the edge is incident to one node in $S$ and one in $N\setminus S$.
Constraint sets \eqref{model:ybin} and \eqref{model:xbin} require the decision variables to be binary.
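As a sanity check on the formulation, the following Python sketch verifies a candidate solution against the constraint sets on a tiny invented instance (all location names and parameter values are hypothetical). The exponential family \eqref{model:subtourelim} is checked equivalently by requiring the chosen edges to form a single cycle through the selected locations.

```python
from itertools import combinations

def feasible(y, x, N, T, E, W, Nw, a, v0, v1, r, q):
    sel = {j for j in N if y[j]}
    for w in W:                                   # access threshold (r)
        reach = v1[w] + sum(a[w].get(j, 0.0) for j in sel)
        if r * (v0[w] + reach) > reach + 1e-9:
            return False
    for w in W:                                   # q-fold coverage
        if sum(y[j] for j in Nw[w]) < q:
            return False
    if any(not y[j] for j in T):                  # required locations
        return False
    for j in N:                                   # degree-two balance
        if sum(x[e] for e in E if j in e) != 2 * y[j]:
            return False
    # subtour elimination: chosen edges must form one cycle over sel
    adj = {j: [] for j in sel}
    for (i, j) in E:
        if x[(i, j)]:
            adj[i].append(j)
            adj[j].append(i)
    seen = set()
    stack = [next(iter(sel))] if sel else []
    while stack:
        u = stack.pop()
        if u not in seen:
            seen.add(u)
            stack.extend(adj[u])
    return seen == sel

N = ["s", "p", "l", "f"]                          # hypothetical locations
T, W = ["s"], ["w1"]
E = [tuple(sorted(e)) for e in combinations(N, 2)]
Nw = {"w1": ["p", "l"]}                           # boxes covering population w1
a = {"w1": {"p": 1.0, "l": 0.5, "f": 0.2}}
v0, v1 = {"w1": 2.0}, {"w1": 1.0}

y = {"s": 1, "p": 1, "l": 1, "f": 0}
x = {e: 0 for e in E}
for e in [("p", "s"), ("l", "p"), ("l", "s")]:    # tour s-p-l-s
    x[e] = 1

assert feasible(y, x, N, T, E, W, Nw, a, v0, v1, r=0.4, q=1)
assert not feasible(y, x, N, T, E, W, Nw, a, v0, v1, r=0.9, q=1)
```

Such a checker is useful when validating solver output, since a violated subtour or balance constraint is easy to overlook by inspection.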
\subsection{Model Properties}
The DBLP is challenging to solve using existing solution techniques. This idea is formalized in Theorem \ref{thm:nphard}, which states that the DBLP is NP-Hard. A proof is provided in the Supplementary materials \ref{appx:proofs}.
\begin{theorem}\label{thm:nphard} The DBLP is NP-Hard. \end{theorem}
In some instances, the DBLP integer program may be large due to a large number of voter populations.
We present a condition that allows us to determine when certain constraints from constraint set \eqref{model:obj2} can be removed from the IP, which reduces the size of the integer program instance and potentially reduces the time needed to find an optimal solution. Lemma \ref{prop:covdom} gives a sufficient condition under which the constraint corresponding to a voter population $w \in W$ in constraint set \eqref{model:obj2} can be removed from the DBLP integer program, since the access function value is guaranteed to be smaller for another voter population $\hat{w} \in W$ for all choices of drop box locations.
A proof is provided in the Supplementary materials \ref{appx:proofs}.
\begin{lemma}\label{prop:covdom} Let $w,\hat{w} \in W$ be two voter populations. If the access function parameters satisfy $v^0_{\hat{w}} \geq v^0_{w}$, $v^1_{\hat{w}} + \sum_{n \in T} a_{n \hat{w}} \leq v^1_{w} + \sum_{n \in T} a_{n w} $, and $a_{n \hat{w}} \leq a_{n w}$ for each $n \in N\setminus T$, then for any subset of drop box locations $N' \subseteq N$, such that $T \subseteq N'$: $$\frac{v^1_{\hat{w}}+\sum_{n \in N'} a_{n\hat{w}} }{v^0_{\hat{w}}+v^1_{\hat{w}}+\sum_{n \in N'} a_{n\hat{w}} } \leq \frac{v^1_{w}+\sum_{n \in N'} a_{nw} }{v^0_{w}+v^1_{w}+\sum_{n \in N'} a_{nw} }$$ \end{lemma}
This property may be satisfied in realistic instances of the DBLP. Consider a population $\hat{w}$ that lies on the exterior boundary of the jurisdiction. Consider a $w$ that lies just inside of $\hat{w}$ within the jurisdiction such that $w$ is closer than $\hat{w}$ to all potential drop box locations and polling locations. The voters in $w$ have higher access to the voting infrastructure than the voters in $\hat{w}$. In this case, the conditions of Lemma \ref{prop:covdom} are likely to be satisfied.
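A minimal sketch of how the lemma can be used in preprocessing: the function below tests the sufficient condition on invented parameters and, for every superset of $T$, confirms that $\hat{w}$'s access function value never exceeds $w$'s, so $w$'s constraint can be dropped.

```python
from itertools import chain, combinations

def dominates(w, what, v0, v1, a, N, T):
    """Lemma's sufficient condition: the access constraint for w is
    implied by the one for what, so w's constraint can be removed."""
    free = [n for n in N if n not in T]
    return (v0[what] >= v0[w]
            and v1[what] + sum(a[what][n] for n in T)
                <= v1[w] + sum(a[w][n] for n in T)
            and all(a[what][n] <= a[w][n] for n in free))

def A(w, sel, v0, v1, a):
    """Access function value of population w for selected locations sel."""
    reach = v1[w] + sum(a[w][n] for n in sel)
    return reach / (v0[w] + reach)

N, T = ["s", "p", "l"], ["s"]          # invented instance
v0 = {"w": 1.0, "what": 2.0}           # what: higher propensity not to vote
v1 = {"w": 2.0, "what": 1.0}           # what: worse non-drop-box access
a = {"w":    {"s": 1.0, "p": 0.8, "l": 0.6},
     "what": {"s": 0.5, "p": 0.4, "l": 0.3}}

assert dominates("w", "what", v0, v1, a, N, T)
# The lemma's conclusion holds on every N' containing T:
for extra in chain.from_iterable(combinations(["p", "l"], k) for k in range(3)):
    sel = ["s", *extra]
    assert A("what", sel, v0, v1, a) <= A("w", sel, v0, v1, a)
```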
\subsection{Model Variations}\label{sec:variations}
The DBLP is designed to determine drop box locations that satisfy the requirements for a drop box system in most jurisdictions. However, election administrators may wish to explore solutions that are not identified by the standard formulation of the DBLP or may wish to tailor the model to their situation.
In this subsection, we discuss five modifications that can be made to the DBLP.
First, the value of $q$ determines the number of drop boxes that must cover each voter, and thus partly determines the access that voters have to the drop box system. Letting $q = 0$, voters are not required to be covered by drop boxes. Instead, a cost-effective set of drop box locations is selected such that all voters have a minimum level of access ($r$) to the voting system. Letting $q = 1$, drop box locations are selected so that each voter is guaranteed access to the drop box system in addition to meeting a minimal level of access to all voting pathways ($r$). Letting $q = 2$, voters are guaranteed access to both the drop box and complete voting system in a way that also mitigates the impact the destruction of a drop box could have on voter access.
The second variation we consider is a change to the covering sets $N_w$ for $w \in W$.
The covering sets $N_w$ typically include locations within a predefined time or distance threshold from a voter. Decreasing or increasing the time threshold used can make constraint set \eqref{model:basecoverage} more or less restrictive, respectively.
When the covering sets are determined using a shorter time threshold, drop boxes within a voter's covering set are required to be located closer to the voter. This may make the drop boxes more accessible to all voters, but also increases costs.
When the covering sets are determined using a larger time threshold, drop boxes are allowed to be located further away while still satisfying constraint set \eqref{model:basecoverage}, which results in lower cost.
Third, we can replace the cost objective of the DBLP with other goals.
We can instead maximize the minimum access function value\footnote{It is fairly straightforward to convert constraint set \eqref{model:obj2} into a linear equivalent by using one minus the access function value, which is an equivalent measure of access.} or maximize the number of voters covered by at least $q$ drop boxes. In the latter case, we introduce a new indicator variable $\delta_w$ and the following constraint \begin{align}
q \delta_w \leq \sum_{j \in N_w} y_j \quad \forall \ w\in W \end{align} The objective is then to maximize $\sum_{w\in W} p_w \delta_w$ where $p_w$ represents the number of voters in $w$.
With this objective, we can use constraint set \eqref{model:basecoverage} to ensure each population $w \in W$ is covered by at least some $q'$ ($0 \leq q' < q$) drop boxes. When $z_1$ is no longer the objective of the integer program, a constraint can be added to ensure that the cost of the drop box system is no more than some budget $B$,
\begin{align}
\sum_{(i,j)\in E} c_{ij}x_{ij} + \sum_{n \in N} f_n y_n \leq B \end{align} With this constraint, feasibility of the DBLP is no longer guaranteed. Infeasibility can be informative to election administrators.
Fourth, election administrators may wish to restrict the cost of the collection tour, since they may have a limited operational budget for collecting ballots (e.g., limited staff).
We can limit the cost of the collection tour to no more than $c_{\max}$ by introducing the following constraint \begin{align}
\sum_{(i,j)\in E} c_{ij} x_{ij} \leq c_{\max} \end{align}
Fifth, election administrators may wish to locate a specific number, $k$, of drop boxes. This may occur when they have already purchased drop boxes or they wish to add a drop box for every 15,000-20,000 registered voters as recommended by \citep{CISA_ballot_2020}. This can be enforced by adding the following constraint \begin{align}
\sum_{n \in N} y_n = k \end{align}
\section{Solution Methods}\label{sec:solnmethods}
Within this section, we present solution methods for the original DBLP formulation.
\subsection{Objective Reformulation}\label{sec:obj1reform}
Constraint sets \eqref{model:basecoverage}-\eqref{model:subtourelim}
are similar to constraints that may be found in an integer program for the CTP \citep{gendreau_covering_1997}. However, the objective of the CTP only considers operational costs \citep{gendreau_covering_1997}. Thus, it is desirable to reformulate objective $z_1$ to preserve properties of the CTP within the DBLP. We can then use components of solution methods for the CTP within solution methods for the DBLP.
We present a reformulation of $z_1$ to remove the use of the $y_n$ variables. Note that constraint set \eqref{model:balance} enforces that, for any drop box $n$ visited by a feasible tour, exactly two tour edges are incident to $n$. Thus, we can reformulate $z_1$ as follows:
\begin{align}
z_1 & = \sum_{(i,j) \in E} c_{ij} x_{ij} + \sum_{j \in N} f_j y_j \nonumber \\
& = \sum_{(i,j) \in E} c_{ij} x_{ij} + \sum_{j \in N}\sum_{i \in N : (i,j) \in E} f_j x_{ij}/2 \nonumber \\
& =\sum_{(i,j) \in E} (c_{ij}+f_i/2+f_j/2) x_{ij} \nonumber \\
& = \sum_{(i,j) \in E}\hat{c}_{ij} x_{ij} \nonumber \end{align}
where $\hat{c}_{ij} := c_{ij}+f_i/2+f_j/2$ for each $(i,j) \in E$. With this reformulation, $z_1$ takes the same form as the standard objective for the CTP.
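The identity can be checked numerically; the sketch below (toy costs, not the case study data) evaluates both forms of $z_1$ on a three-box tour and confirms they agree.

```python
# f: fixed costs; c: tour edge costs (invented toy values)
f = {"s": 0, "p": 6000, "l": 10000}
c = {("p", "s"): 10, ("l", "p"): 12, ("l", "s"): 9}
tour = [("p", "s"), ("l", "p"), ("l", "s")]        # cycle s-p-l-s

# Original objective: tour cost plus fixed cost of the visited locations.
z1 = sum(c[e] for e in tour) + sum(f[j] for j in {"s", "p", "l"})

# Reformulated objective: each location's fixed cost is split over its
# two incident tour edges, so c_hat absorbs the y variables entirely.
c_hat = {(i, j): c[(i, j)] + f[i] / 2 + f[j] / 2 for (i, j) in c}
z1_reform = sum(c_hat[e] for e in tour)

assert z1 == z1_reform == 16031
```

The equality relies on every selected location having exactly two incident tour edges, which is precisely what the balance constraints guarantee.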
\subsection{Lazy Constraint Method}\label{sec:exact}
Branch and bound is one of the most common techniques used to solve IPs, and we employ it to solve the DBLP.
However, constraint set \eqref{model:subtourelim} defines an exponential number of constraints, so we introduce a lazy constraint approach to solve the DBLP.
First, we solve the DBLP without constraint set \eqref{model:subtourelim}. Once an optimal solution is found, we determine if any of the constraints from constraint set \eqref{model:subtourelim} are violated. If so, we add at least one violated constraint and re-solve the IP using branch and bound. Most modern optimization packages support the implementation of lazy constraints. The reformulation of the objective introduced in Section \ref{sec:obj1reform} can be used throughout the procedure, but it is not required.
We introduce a new polynomial time algorithm, Algorithm \ref{alg:lazy}, to find violated inequalities from constraint set \eqref{model:subtourelim} given an $x^* \in \{0,1\}^{|E|}$. The approach we take is adapted from an approach used for the TSP \citep{gurobi_optimization_tsppy_nodate} to account for the fact that not all potential drop box locations must be visited by the tour in the DBLP.
Algorithm \ref{alg:lazy} first finds all subtours defined by $x^*$ (line 1). Each subtour that does not include all required locations $T$ (lines 2-4) must be associated with at least one violated constraint. For all\footnote{There is a trade-off between adding a constraint for all $t \in \hat{S}$ (increasing the size of the IP) and adding a constraint for a small number of elements in $\hat{S}$ (increasing the number of times the search procedure occurs).} locations $t$ visited by the subtour, we add the violated constraint (line 5).
\begin{algorithm}
\caption{Lazy($x^*$)}\label{alg:lazy}
\begin{algorithmic}[1]
\State $H = $ collection of subtours defined by $x^*$
\For{each subtour $h \in H$}
\State $\hat{S} = $ drop box locations visited by $h$
\If{$T \setminus \hat{S} \neq \emptyset$}
\State \Return $\sum_{(i,j)\in E : i \in \hat{S}, j \in N \setminus \hat{S} \text{ or } j \in \hat{S}, i \in N \setminus \hat{S}} x_{ij} \geq 2y_t$ for each node $t \in \hat{S}$
\EndIf
\EndFor
\end{algorithmic} \end{algorithm}
We comment on the correctness of Algorithm \ref{alg:lazy}. Specifically, given an integer $x^* \in \{0,1\}^{|E|}$, Algorithm \ref{alg:lazy} finds a violated constraint from constraint set \eqref{model:subtourelim}, if one exists. If a constraint is violated, there must exist an $S$ such that $S \subset N, \ 2 \leq |S| \leq |N|-2, \ T\setminus S \neq \emptyset$ and, for some ${t^*} \in S$, ${ \sum_{(i,j)\in E : i \in S, j \in N \setminus S \text{ or } j \in S, i \in N \setminus S} x^*_{ij} < 2y_{t^*}}$. Since the left-hand side of the inequality is at least zero, $t^*$ must represent a selected drop box location ($y_{t^*} = 1$). Moreover, constraint set \eqref{model:balance} gives every selected location degree two in $x^*$, so the number of tour edges crossing the cut defined by $S$ is even; a violated constraint therefore implies $\sum_{(i,j)\in E : i \in S, j \in N \setminus S \text{ or } j \in S, i \in N \setminus S} x^*_{ij} = 0$.
Thus, ${t^*}$ must be a member of some subtour visiting locations $\hat{S} \subseteq S$.
The set $\hat{S}$ must contain at least three elements and no more than $|N|-3$ elements as a result of constraint set \eqref{model:balance}. Since $T\setminus S \neq \emptyset$, it is also true that $T\setminus \hat{S} \neq \emptyset$. Thus, the existence of an $S$ implies the existence of an $\hat{S}$ whose elements form a subtour in $x^*$ such that $\hat{S} \subset N, \ 2 \leq |\hat{S}| \leq |N|-2, \ T\setminus \hat{S} \neq \emptyset$ and $ \sum_{(i,j)\in E : i \in \hat{S}, j \in N \setminus \hat{S} \text{ or } j \in \hat{S}, i \in N \setminus \hat{S}} x^*_{ij} < 2y_{t}$ for all ${t} \in \hat{S}$.
Algorithm \ref{alg:lazy} identifies $ \hat{S}$ and returns the corresponding constraint.
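A compact Python sketch of Algorithm \ref{alg:lazy}: subtours are recovered as connected components of the degree-two edge set, and each subtour missing a required location yields the corresponding violated cut constraints. The instance below is invented.

```python
def subtours(edges):
    """Split a degree-two edge set x* into its cycles (as node sets)."""
    adj = {}
    for i, j in edges:
        adj.setdefault(i, []).append(j)
        adj.setdefault(j, []).append(i)
    unvisited, cycles = set(adj), []
    while unvisited:
        comp, stack = set(), [next(iter(unvisited))]
        while stack:
            u = stack.pop()
            if u not in comp:
                comp.add(u)
                stack.extend(adj[u])
        unvisited -= comp
        cycles.append(comp)
    return cycles

def lazy(x_edges, T):
    """Return (S_hat, t) pairs identifying violated cut constraints."""
    cuts = []
    for S_hat in subtours(x_edges):
        if set(T) - S_hat:               # subtour misses a required location
            cuts.extend((frozenset(S_hat), t) for t in S_hat)
    return cuts

# Two disjoint triangles; 's' is required, so the a-b-c subtour is invalid.
x_edges = [("s", "p"), ("p", "l"), ("l", "s"),
           ("a", "b"), ("b", "c"), ("c", "a")]
violated = lazy(x_edges, ["s"])
assert {t for _, t in violated} == {"a", "b", "c"}
```

Each returned pair $(\hat{S}, t)$ corresponds to one cut inequality $\sum_{(i,j) \in \delta(\hat{S})} x_{ij} \geq 2y_t$ to be added lazily before re-solving.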
\subsection{A Heuristic Method}\label{sec:heuristic} The lazy constraint method can be used to find solutions to moderately sized problem instances, but large instances require long computation times.
Moreover, if an appropriate value for $r$ is not known by election administrators, the DBLP must be solved repeatedly to allow election administrators to select among possible solutions a posteriori\footnote{This may be desirable even if $r$ is believed to be known.}, which substantially increases the necessary computational time. In this section, we present a heuristic that identifies multiple solutions to the DBLP, each corresponding to a unique value of $r$.
Depending on the implementation, the heuristic runs in polynomial time. We provide pseudocode for this heuristic in the Supplementary materials \ref{appx:heur}. The heuristic requires the objective reformulation discussed in Section \ref{sec:obj1reform}.
The heuristic first identifies a feasible solution to the DBLP corresponding to $r = 0$.
When $q = 0$, we find a tour visiting the nodes of $T$ using any method for the TSP\footnote{Since we do not specify the methods to solve the TSP or CTP, this heuristic is in fact a \emph{heuristic scheme}.}. When $q = 1$, the DBLP with $r = 0$ is equivalent to the CTP when the DBLP objective is reformulated as introduced in Section \ref{sec:obj1reform}. Any solution method for the CTP can be used to identify a feasible solution. When $q = 2$, the DBLP with $r = 0$ is equivalent to the CTP when the DBLP objective is reformulated as introduced in Section \ref{sec:obj1reform}, except the CTP only requires single coverage of each $w \in W$.
We construct a solution that satisfies double coverage using any exact or heuristic solution method for the CTP as follows. First, we find a feasible solution to the CTP that ensures each $w \in W$ is covered once. Using this solution, we construct a second instance to the CTP. The second instance is equivalent to the first \emph{except} that (1) the new set of required drop box locations includes all locations selected in the first solution, (2) the locations selected in the first solution are removed from all covering sets $N_w$, and (3) any $w \in W$ that was covered by at least two locations selected in the first solution is removed from $W$. A solution to the second CTP instance is guaranteed to be feasible for the DBLP when $r = 0$ and $q = 2$. A proof of this statement is provided in the Supplementary materials \ref{appx:proofs}. If $q$ takes a value greater than $2$, it is fairly trivial to extend the process used when $q = 2$.
Once this initial solution corresponding to $r = 0$ is found, we wish to find solutions that are feasible for a larger $r$. These solutions can be found as follows. Initialize $r = 0$.
We iterate and increase the value of $r$ by some predetermined, sufficiently small $\varepsilon$ in each iteration.
Within an iteration, start with the solution identified by the previous iteration.
Identify all pairs of drop box locations such that one is already included in the tour (not belonging to the set $T$) and the other is not. These pairs represent the locations that can be \emph{swapped} (i.e., remove one from the current solution and add the other).
We allow the pairs to also represent adding a location to the tour without removing another, or removing a location without adding another. The latter may be advantageous in cases where a drop box location was added in a previous iteration that makes previously included locations redundant and unnecessary. There are $\mathcal{O}(|N|^2)$ possible pairs. We let a pair be \emph{feasible} if the drop box locations obtained after the swap
satisfy constraint set \eqref{model:obj2} according to the current value of $r$ and satisfy the multiple coverage defined by constraint set \eqref{model:basecoverage}. It is straightforward to check the feasibility of each pair. Note that we do not consider any pair that results in both an increased cost and lower minimal access function value, since it would directly lead to a dominated solution. Among all feasible pairs, determine the angle between the incumbent solution and the prospective solution, similar to what was done in \citep{current_median_1994}. Mathematical details can be found in the Supplementary materials \ref{appx:heur}. Select the pair that leads to the smallest angle; this incentivizes finding solutions with a lower cost. Then update the tour based on this pair in a cost minimizing way (e.g., minimum cost removal/insertion or other techniques used for the TSP \citep{gendreau_new_1992}).
Continue until all potential drop box locations have been selected in the solution. This is guaranteed to occur after a finite number of iterations since $r$ strictly increases by a fixed amount each iteration and is bounded above by one. Among all identified solutions, disregard the dominated solutions and return the rest. It can be checked during each iteration whether each new solution is dominated by a previous solution or if the new solution dominates a previous solution. When a polynomial time heuristic is used for the TSP and CTP, this heuristic also runs in polynomial time.
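The final filtering step can be sketched as a standard Pareto filter over (cost, minimum access) pairs; the solution values below are invented for illustration.

```python
def pareto(solutions):
    """Keep non-dominated (cost, min_access) pairs: lower cost and
    higher minimum access function value are both preferred."""
    keep = []
    for c, r in solutions:
        dominated = any(c2 <= c and r2 >= r and (c2, r2) != (c, r)
                        for c2, r2 in solutions)
        if not dominated:
            keep.append((c, r))
    return keep

# Hypothetical (cost, minimum access) values gathered across iterations.
sols = [(100, 0.30), (120, 0.45), (110, 0.40), (130, 0.40), (120, 0.45)]
frontier = sorted(set(pareto(sols)))
assert frontier == [(100, 0.30), (110, 0.40), (120, 0.45)]
```

Here $(130, 0.40)$ is discarded because $(110, 0.40)$ achieves the same minimum access at lower cost; the remaining solutions form the frontier presented to election administrators.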
\section{Case Study}\label{sec:casestudy}
We construct a case study of Milwaukee, Wisconsin, United States to demonstrate the value of the DBLP and investigate the implications of optimal drop box infrastructure design. The City of Milwaukee is the most populous municipality in the state of Wisconsin and had approximately 315,483 registered voters prior to the 2020 General election \citep{city_of_milwaukee_open_data_portal_2020_nodate}. We let $W$ be defined by the census block groups of Milwaukee, WI \citep{milwaukee_county_census_2018},
which consist of individuals located near each other who are typically of similar socioeconomic backgrounds.
Figure \ref{fig:estimatedpop} illustrates the different block group locations in Milwaukee and the estimated number of individuals of age 18 or older in each \citep{united_states_census_bureau_race_2020}.
\begin{figure}
\caption{(a) The census block groups of Milwaukee with a darker color indicating a higher population 18 years of age or older.
(b) The drop box locations (red) used in 2020. (c) Potential drop box locations (red) within the City of Milwaukee used for this case study.}
\label{fig:estimatedpop}
\label{fig:existing_locations}
\label{fig:potential_locations}
\end{figure}
During the 2020 elections, 15 drop boxes were placed throughout Milwaukee \citep{milwaukee_election_commission_absentee_2020}, illustrated in Figure \ref{fig:existing_locations}. We use the DBLP to identify drop box locations assuming that these 15 were not already added to the voting system.
This allows us to compare the DBLP to the decisions actually made by election officials during 2020. We let the potential drop box locations, $N$, be the locations of courthouses (4), election administrative buildings (2), fire stations (30), libraries (14), police stations (7), CVS pharmacies (7), and Walgreens pharmacies (29). Figure \ref{fig:potential_locations} illustrates the locations of the 93 potential drop box locations. We assume that the collection tour begins and ends at the Milwaukee City Hall, $s$. We do not require a drop box be located at any location other than City Hall, with $T = \{s\}$.
The fixed cost
of locating a drop box at court houses, fire stations, police stations, and City Hall is set at \$6,000 to reflect the cost of a drop box without the need of additional security measures. The fixed cost of locating a drop box at all other locations is set at \$10,000 to reflect the cost of both a drop box and a security system.
According to the Milwaukee Election Commission, ballots were, at a minimum, collected daily by staff during the 2020 General election \citep{milwaukee_election_commission_absentee_2020}. This equates to approximately 50 times during the election. Based on this value, we assume that ballots are collected 50 times per year on average\footnote{In reality, the number of collections depends on the election year. Also, the frequency of ballot collection may vary depending on model solutions, but this value is set to normalize the operational cost to the fixed cost of the drop boxes. }
over the life of the drop boxes, which we assume to be 15 years. We further assume that each member of the bipartisan collection team has an opportunity cost of $\$40$ per hour. This may not reflect the actual pay rate of poll workers or staff; rather, it is meant to represent the opportunity cost of other tasks not completed during that time. For example, staff could otherwise participate in additional security training, review compliance of submitted ballots, or conduct marketing to increase voter turnout. The cost of traveling between two drop boxes is determined using this pay rate and the estimated time needed to drive between the two locations, which is obtained from Bing Maps. We include the cost of gas and vehicle wear using the current federal mileage reimbursement of $\$0.56$ per mile. The estimated mileage is calculated assuming an average travel speed of 30 mph. Lastly, we assume the collection costs increase by two percent each year.
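These cost assumptions can be combined into an annualized collection cost. The following is a minimal sketch (function and parameter names are ours; we assume a two-person collection team, and the paper's exact accounting may differ):

```python
def leg_cost(drive_minutes, wage_per_hr=40.0, team_size=2,
             mileage_rate=0.56, avg_speed_mph=30.0):
    """Cost of one leg of the collection tour between two drop boxes:
    opportunity cost of the bipartisan team plus mileage reimbursement."""
    labor = team_size * wage_per_hr * drive_minutes / 60.0
    miles = avg_speed_mph * drive_minutes / 60.0  # mileage at assumed 30 mph
    return labor + mileage_rate * miles

def annualized_tour_cost(leg_minutes, collections_per_year=50,
                         lifetime_years=15, inflation=0.02):
    """Average yearly collection cost over the drop boxes' lifetime,
    with costs growing 2% per year."""
    base = collections_per_year * sum(leg_cost(m) for m in leg_minutes)
    total = sum(base * (1 + inflation) ** t for t in range(lifetime_years))
    return total / lifetime_years
```

For example, a single 60-minute leg costs $2 \times \$40 + \$0.56 \times 30 = \$96.80$ per collection under these assumptions.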
The covering set of each location, $N_w$, is constructed to include the locations that satisfy at least two of the following: the time to walk to the drop box is no more than 15 minutes; the time to drive to the drop box is no more than 15 minutes; the time to use public transit (i.e., city bus) to the drop box is no more than 30 minutes; or the road distance to the drop box is no more than 4 miles (e.g., reachable by bike or ride-share). By ensuring that at least two conditions are met, there must be multiple transportation modalities that can be used to travel to a drop box in $N_w$.
Individuals without access to a private vehicle are thereby guaranteed to be able to reach a covering drop box using another mode of transportation. We estimate the location of each $w \in W$ using the block group centroid.
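The two-of-four covering rule can be sketched as follows (the thresholds are those stated above; the function and argument names are ours):

```python
def covers(walk_min, drive_min, transit_min, road_miles):
    """A potential location covers a block group if at least two of the
    four accessibility conditions hold."""
    conditions = [
        walk_min <= 15,     # walkable within 15 minutes
        drive_min <= 15,    # drivable within 15 minutes
        transit_min <= 30,  # reachable by city bus within 30 minutes
        road_miles <= 4,    # within 4 road miles (bike / ride-share)
    ]
    return sum(conditions) >= 2
```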
Throughout the case study, we let $q = 2$, unless otherwise noted, so that model solutions mitigate the risk associated with the destruction of a drop box.
Lastly, the parameters $v_{w}^0,\ v_w^1,\ a_{nw}$ for the access function are instantiated as follows.
Ideally, these parameters would be determined using a multinomial logistic regression based on surveys, distance to voting locations, and available transportation methods.
Due to a lack of available data, we introduce a function that serves as a proxy. Our function combines historical voter turnout, transit durations obtained from Bing Maps, vehicle availability of individuals living in each block group \citep{united_states_census_bureau_means_2019}, and the work locations of individuals residing in each block group \citep{united_states_census_bureau_work_nodate}. We let $v^1_{w}$ equal the estimated voter turnout (between 0 and 1) among registered voters in each block group during the 2016 General election, and we let $v^0_{w} = 1 - v^1_{w}$. The values reflect the idea that voter turnout is higher where the in-person voting system is more accessible \citep{cantoni_precinct_2020}.
To describe the values of $a_{nw}$, we introduce some notation.
Let $d_{n, w}^\text{walk}$, $d_{n, w}^\text{transit}$, $d_{n, w}^\text{drive}$ be the walking, transit, and driving durations (minutes), respectively, to potential location $n\in N$ for population $w \in W$ obtained from Bing Maps.
Let $\lambda_{w}^\text{vehicle}$ be the proportion of individuals in $w$ that have access to a vehicle according to the U.S. Census \citep{united_states_census_bureau_means_2019}. Let $d_{n, w}^\text{other}$ be the estimated duration to the potential drop box location $n \in N$ from population $w \in W$ using some other form of transportation (e.g., bike or rideshare); the duration is calculated using the road distance obtained from Bing Maps and an assumed speed of 15 miles per hour.
Let $Q$ represent the set of work locations in Milwaukee, WI \citep{united_states_census_bureau_work_nodate}. Let $d_{n,q}^\text{work}$ be the walking duration from work location $q \in Q$ to the potential drop box location $n \in N$. Let $\lambda_{w,q}^\text{work}$ be the proportion of individuals from $w$ that work in location $q$ according to the U.S. Census \citep{united_states_census_bureau_work_nodate}.
Then, the value of $a_{nw}$ for a drop box location $n$ and population $w \in W$ is computed as:
$$a_{nw} = \frac{0.04}{v^1_{w}} \Big ( \frac{1}{{(d_{n, w}^\text{walk}})^2} + \frac{1}{({d_{n, w}^\text{transit}})^2} +\frac{\lambda_{w}^\text{vehicle}}{({d_{n, w}^\text{drive}})^2}+\frac{1}{({d_{n, w}^\text{other}})^2} +\sum_{q \in Q}\frac{\lambda_{w,q}^\text{work}}{({d_{n,q}^\text{work}})^2} \Big )$$ This formula accounts for the benefit of multiple modes of transportation to a location and assumes that drop boxes near individuals are much more accessible than drop boxes far away. We include the term $(v^1_{w})^{-1}$ to account for the added intangible benefit of drop boxes located near a population with historically low voter turnout, such as increased publicity and visual reminders to cast a ballot \citep{collingwood_drop_2018}. We square the duration of each transportation mode to model a non-linear relationship between duration and accessibility (a similar approach was employed in \citep{murata_making_2013}). As a result, $a_{nw}$ is highest when the drop box is a short duration from the voter using each mode of transportation.
We scale each $a_{nw}$ by $0.04$ to produce values that align with findings from empirical research \citep{mcguire_does_2020}. Additional explanation and justification of this function is provided in the Supplementary materials \ref{appx:vnw}.
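The access coefficient $a_{nw}$ can be computed directly from the durations and census proportions; a minimal sketch (function and argument names are ours):

```python
def access_coeff(turnout, d_walk, d_transit, d_drive, d_other,
                 vehicle_share, work_shares):
    """a_nw per the formula above.

    turnout       -- v^1_w, historical turnout in (0, 1]
    d_*           -- durations (minutes) from w to location n by mode
    vehicle_share -- lambda^vehicle_w, proportion with vehicle access
    work_shares   -- list of (lambda^work_wq, d^work_nq) pairs over q in Q
    """
    total = (1 / d_walk**2 + 1 / d_transit**2
             + vehicle_share / d_drive**2 + 1 / d_other**2
             + sum(share / d**2 for share, d in work_shares))
    return 0.04 / turnout * total
```

For instance, with all four durations equal to 10 minutes, full vehicle access, no work-location terms, and a turnout of 0.5, the coefficient is $0.04 / 0.5 \times 4 / 100 = 0.0032$.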
\subsection{The DBLP and Rules-of-thumb}\label{sec:naive} In the absence of tools to support election planning, election administrators may use rules-of-thumb to select drop box locations. In this section, we
demonstrate that the DBLP is able to identify drop box locations that outperform rules-of-thumb across multiple criteria. The findings support the use of the DBLP during election planning.
Table \ref{T:solutions} presents the details of DBLP solutions for different values of $r$ obtained using the Gurobi 9.1 MIP solver. Computational studies were run using 64 bit Python 3.7.7 on an Intel\textsuperscript{\textregistered} Core\textsuperscript{TM} i5-7500 CPU with 16 GB of RAM.
Each optimal solution was identified in less than 3600 seconds.
We refer to the solutions identified by the DBLP as `DBLP $k$' where $k$ refers to the solution ID in Table \ref{T:solutions}.
\begin{table}[!hbt] \centering \caption{Properties of solutions obtained by solving the DBLP with different values of $r$. }
\label{T:solutions}
\begin{tabular*}{\columnwidth}{@{}c@{\extracolsep{\fill}}c@{\extracolsep{\fill}}c@{\extracolsep{\fill}}c@{\extracolsep{\fill}}c@{\extracolsep{\fill}}c}
\toprule
\shortstack{Solution\\ID} & \shortstack{Minimum Access\\Function Value} & \shortstack{Tour Cost\\ (\$/yr)} & \shortstack{Fixed Cost\\(\$/yr)} & \shortstack{Operational cost\\(\$/yr)} & \shortstack{Number of\\ Drop Boxes} \\
\midrule
0\lefteqn{^*}&0.573&17,141&6,800&10,341&15\\
1&0.582&17,313&6,800&10,513&15\\
2&0.593&17,813&7,333&10,479&15\\
3\lefteqn{^*}&0.601&18,538&8,267&10,271&16\\
4&0.612&19,701&8,800&10,901&18\\
5&0.620&21,697&10,267&11,430&21\\
6\lefteqn{^*}&0.629&23,599&12,133&11,466&23\\
7&0.637&26,877&14,133&12,743&28\\
8&0.645&31,701&17,200&14,501&33\\
9&0.653&37,318&20,933&16,385&39\\
10&0.661&47,023&27,200&19,823&52\\
11&0.668&80,574&50,800&29,774&93\\
\bottomrule
\multicolumn{3}{l}{$^*$ \footnotesize Illustrated in Figure \ref{fig:illustrate_tours}}
\end{tabular*}
\end{table}
During 2020, the Milwaukee Election Commission located drop boxes at the City Hall, the Election Commission warehouse, and 13 neighborhood-based public library branches \citep{milwaukee_election_commission_absentee_2020}. We begin by comparing these locations to those identified in DBLP 2, which also represents 15 drop boxes.
Table \ref{T:actualcompare} provides the values of multiple criteria for each drop box system. These criteria provide insight into the performance of each drop box system with respect to cost, access, and risk.
\begin{table}[hbt!] \centering
\caption{Comparison of the actual 2020 drop box system to a drop box system design identified by the DBLP across multiple criteria.}
\label{T:actualcompare}
\begin{tabular*}{\columnwidth}{@{}p{0.7\columnwidth}|@{\extracolsep{\fill}}c@{\extracolsep{\fill}}c}
\toprule
Criteria & \shortstack{2020} & \shortstack{DBLP 2}\\
\midrule
Number of Drop Boxes&15&15\\
Fixed Cost (\$/year)&9,733&7,333\\
Operational cost (\$/year)&10,566&10,479\\
Total Cost (\$/year)\symfootnote{}&20,300&17,813\\
Fraction of voters covered by 1 drop box (population weighted)&0.995&1.000\\
Fraction of voters covered by 2 drop boxes (population weighted)\symfootnote{}&0.889&1.000\\
Fraction of voters without access to a car covered by at least two drop boxes by non-driving transit (population weighted)\symfootnote{}&0.941&1.000\\
Minimum access function value\symfootnote{}&0.560&0.593\\
Average access function value (population weighted)&0.776&0.772\\
Maximum road distance to closest drop box&7.634&6.311\\
Maximum road distance to third closest drop box&10.55&9.978\\
Average road distance to closest drop box (population weighted)&1.601&1.679\\
Average road distance to closest 3 drop boxes (population weighted)&2.723&2.486\\
\bottomrule
\multicolumn{3}{l}{ \setcounter{mpfootnote}{0} \footnotesize \symfootnote{} Objective of the DBLP.} \\
\multicolumn{3}{l}{\footnotesize \symfootnote{} Required by constraint set \eqref{model:basecoverage} of the DBLP.} \\
\multicolumn{3}{l}{\footnotesize \symfootnote{} Required by constraint set \eqref{model:basecoverage} given our method of instantiating $N_w$ for each $w \in W$.} \\
\multicolumn{3}{l}{\footnotesize \symfootnote{} Modeled using constraint set \eqref{model:obj2} in DBLP.} \setcounter{mpfootnote}{0}
\end{tabular*}
\end{table}
The results in Table \ref{T:actualcompare} suggest that the DBLP is able to identify drop box locations that perform better across multiple criteria compared to the rule-of-thumb used by election administrators in Milwaukee during the 2020 election. We find that with the same number of drop boxes, the DBLP is able to identify drop box locations that result in a lower fixed cost, operational cost, and total cost. Despite having a lower cost, all voters are covered by at least two drop boxes with DBLP 2, while the 2020 policy only covers 88.9\% of voters twice.
This gap also exists when voters do not have access to a vehicle (1.00 vs. 0.941). This means that DBLP 2 admits a higher level of equity of access to the drop box system while mitigating the risk associated with the possible destruction of a drop box.
Moreover, the minimum access function value is higher (0.593 vs. 0.560) for DBLP 2. With a strict interpretation of the access function, the block group with the lowest turnout is predicted to have a turnout that is 3.3\% higher if the DBLP 2 was implemented rather than the actual locations. We find that the average access function value is lower for DBLP 2 than the actual implementation; however, the difference is small (0.772 vs. 0.776).
In different situations, other rules-of-thumb may be used by election administrators. We compare the DBLP solutions to six other rules-of-thumb that could have potentially been used instead:
\begin{enumerate}[label=Policy \arabic*, leftmargin=2.3cm,topsep=0pt,noitemsep]
\item Locate drop boxes at the election administrative buildings (2).
\item Locate drop boxes at the election administrative buildings (2) and police stations (7).
\item Locate drop boxes at the election administrative buildings (2) and libraries (14).
\item Locate drop boxes at the election administrative buildings (2), police stations (7), and libraries (14).
\item Locate drop boxes at the election administrative buildings (2) and fire stations (30).
\item Locate drop boxes at the election administrative buildings (2), police stations (7), libraries (14), and fire stations (30). \end{enumerate}
We choose to assess these policies, since they represent the placement of drop boxes at buildings that should be well-distributed throughout the city. They are not intended to represent a comprehensive list of possible policies. Table \ref{T:h+esolutions} provides the values of multiple criteria for each rule-of-thumb and DBLP solutions with a similar number of drop boxes. The results presented in Table \ref{T:h+esolutions} mirror the findings reported in Table \ref{T:actualcompare}; the DBLP identifies drop box locations that are consistently better across multiple criteria than rules-of-thumb with a similar number of drop boxes. Moreover, most rules-of-thumb are not feasible for the DBLP. Policy 6, which locates 53 drop boxes, is the only rule-of-thumb policy that guarantees that each $w\in W$ is covered by $q=2$ drop box locations. Meanwhile, the DBLP can find feasible solutions with as few as 15 drop boxes.
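The coverage feasibility check used to screen these rules-of-thumb can be sketched directly (names are ours; each covering set corresponds to the $N_w$ instantiated above):

```python
def feasible(policy, covering_sets, q=2):
    """A policy (set of opened locations) satisfies the coverage
    requirement if every population w is covered by at least q opened
    locations from its covering set N_w."""
    return all(len(policy & n_w) >= q for n_w in covering_sets.values())
```

For example, a policy fails for $q = 2$ as soon as a single block group has fewer than two of its covering locations opened, which is how Policies 1 through 5 are ruled infeasible.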
\begin{sidewaystable}
\def\arraystretch{0.9}
\caption{\centering A comparison of rule-of-thumb and DBLP policies across multiple criteria.}
\label{T:h+esolutions}
\begin{tabular*}{\columnwidth}{@{}p{6cm}|@{\extracolsep{\fill}}c@{\extracolsep{\fill}}c @{\extracolsep{\fill}}c@{\extracolsep{\fill}}c@{\extracolsep{\fill}}c@{\extracolsep{\fill}}c |@{\extracolsep{\fill}}c@{\extracolsep{\fill}}c@{\extracolsep{\fill}}c@{\extracolsep{\fill}}c}
\toprule
Criteria & \shortstack{Policy\\1} & \shortstack{Policy\\2} & \shortstack{Policy\\3} & \shortstack{Policy\\4} & \shortstack{Policy\\5} & \shortstack{Policy\\6} & \shortstack{DBLP\\3}& \shortstack{DBLP\\6}& \shortstack{DBLP\\8} & \shortstack{DBLP\\10}\\
\midrule
Number of Drop Boxes&2&9&16&23&32&53&16&23&33&52\\
Fixed Cost (\$/year)&1,067&3,867&10,400&13,200&13,067&25,200&8,267&12,133&17,200&27,200\\
Operational cost (\$/year)&1,535&7,233&10,616&11,954&18,328&21,129&10,271&11,466&14,501&19,823\\
Total Cost (\$/year)\symfootnote{Objective of the DBLP.}&2,602&11,100&21,016&25,154&31,395&46,329&18,538&23,599&31,701&47,023\\
Fraction of voters covered by 1 drop box (population weighted)&0.596&0.972&0.995&0.995&1.000&1.000&1.000&1.000&1.000&1.000\\
Fraction of voters covered by 2 drop boxes (population weighted)\symfootnote{Required by constraint set \eqref{model:basecoverage} of the DBLP.}&0.362&0.810&0.924&0.973&0.997&1.000&1.000&1.000&1.000&1.000\\
Fraction of voters without access to a car covered by at least two drop boxes by non-driving transit (population weighted)\symfootnote{Required by constraint set \eqref{model:basecoverage} given our method of instantiating $N_w$ for each $w \in W$.}&0.467&0.920&0.963&0.981&1.000&1.000&1.000&1.000&1.000&1.000\\
Minimum access function value\symfootnote{Modeled using constraint set \eqref{model:obj2} in DBLP.}&0.542&0.558&0.568&0.582&0.591&0.623&0.601&0.629&0.645&0.661\\
Average access function value (population weighted)&0.760&0.770&0.777&0.786&0.790&0.809&0.773&0.781&0.788&0.806\\
Maximum road distance to closest drop box&19.269&9.978&7.634&7.634&5.568&5.568&6.311&6.311&6.311&4.865\\
Maximum road distance to third closest drop box&20.123&14.199&10.554&9.978&8.567&8.249&9.851&8.653&8.249&8.249\\
Average road distance to closest drop box (population weighted)&5.829&2.130&1.550&1.469&1.062&0.917&1.689&1.517&1.386&1.105\\
Average road distance to closest 3 drop boxes (population weighted)&6.687&3.348&2.578&2.122&1.774&1.399&2.436&2.052&1.837&1.534\\
\bottomrule
\end{tabular*}
\end{sidewaystable}
\subsection{Drop Box Trade-offs}\label{sec:optresults}
\newcommand\drawredsquare{\tikz\draw[red] (0,0) rectangle (0.5ex,0.5ex);}
In this section, we further investigate DBLP solutions and explore the trade-offs between criteria within the drop box voting system.
We begin by discussing the trade-off between cost and equity of access to all voting pathways (i.e., the minimum access function value).
The solutions in Table \ref{T:solutions} suggest there is a substantial trade-off between the cost of the drop box system and the minimum access function value.
However, the marginal increase in cost required to achieve an increase in the minimum access function value is not constant. From DBLP solution 0 to DBLP solution 1, the average cost of a $0.01$ increase in the minimum access function value is $\$186.84$ per year; from solutions 3 to 4, it is $\$$1,062.25 per year; and from solutions 10 to 11, it is $\$$50,738 per year.
This highlights the importance of considering the access function within the DBLP.
When a low cost solution is desirable, an appropriate value for $r$ allows the DBLP to identify drop box locations that admit a much larger minimum access function value for a relatively low increase in cost (e.g., solutions 1-4). When drop boxes that admit a large minimum access function value are desirable, it is critical to appropriately set $r$, since a small change in $r$ can lead to solutions of substantially different cost (e.g., solutions 8-10).
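The marginal figures above come from differencing adjacent solutions; a sketch of the calculation (applying it to the rounded table entries reproduces the reported values only approximately, since the underlying data are unrounded):

```python
def marginal_cost_per_001(cost_a, r_a, cost_b, r_b):
    """Average yearly cost of a 0.01 increase in the minimum access
    function value when moving between two adjacent DBLP solutions."""
    return (cost_b - cost_a) * 0.01 / (r_b - r_a)
```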
We next consider the trade-off between equitable access to the drop box system and equitable access to all voting pathways.
Figure \ref{fig:RHS} plots the cost and minimum access function value of multiple optimal solutions when $q$ is zero (\ref{fig:frontier_zero}), one (\ref{fig:frontier_single}), or two (\ref{fig:frontier_double}), with the latter corresponding to the solutions presented in Table \ref{T:solutions}.
When $q = 0$, the DBLP is able to identify drop box locations that substantially increase the minimum access function value for a relatively small cost. This suggests that there are cost-effective, equitable drop box locations, even when election officials cannot afford to cover each voter with one or two drop boxes. In general, equitable access to the drop box system and equitable access to all voting pathways are aligned so that access is improved. However,
the difference between the curves corresponding to $q = 0$ (\ref{fig:frontier_zero}) and $q = 1$ (\ref{fig:frontier_single}) represents the cost of ensuring equitable access to the drop box system. In some cases, this cost can be substantial ($\sim \$6,767$ per year). This demonstrates the trade-off between selecting drop boxes that ensure all voters have access to the drop box system or using the drop boxes to increase the access function value by ``filling in the gaps'' of the in-person voting system.
We also find a substantial difference between the curves corresponding to $q = 1$ (\ref{fig:frontier_single}) and $q = 2$ (\ref{fig:frontier_double}), particularly when a low cost solution is desired. This suggests that the cost of mitigating risks associated with the destruction of drop boxes through infrastructure design is relatively high and may not be cost effective. Instead, it may be more cost effective to respond to an adverse event after it occurs, since the likelihood of this risk occurring is low \citep{scala_evaluating_2021}.
\begin{minipage}{\textwidth}
\begin{minipage}[t]{0.48\textwidth} \begin{tikzpicture} \begin{axis}[
width=0.8\textwidth,
height=\textwidth,
axis lines=middle,
legend columns = 2,
scaled y ticks=false,
legend style={at={(0.05,0.98)},anchor=north west},
legend cell align={left},
x label style={at={(axis description cs:0.5,-0.05)},anchor=north},
y label style={at={(axis description cs:-0.3,.5)},rotate=90,anchor=south},
ylabel={Cost (\$/year)},
yticklabel style={
/pgf/number format/fixed,
/pgf/number format/precision=5
},
xlabel={\shortstack{Minimum access function value}},
enlargelimits = true,
legend columns = 1,
ymin = 0,
xmin = 0.54,
]
\addlegendimage{empty legend} \addlegendentry{\hspace{-.6cm}\textbf{q}};
\addplot[mark=triangle,black,select coords between index={1}{10}] table [y=C,x=P, col sep=comma]{frontier_mke.csv};\addlegendentry{2} \label{fig:frontier_double}
\addplot[mark=square,red,select coords between index={1}{12}] table [y=C,x=P, col sep=comma]{frontier_single.csv};\addlegendentry{1} \label{fig:frontier_single}
\addplot[mark=*,orange,select coords between index={1}{14}] table [y=C,x=P, col sep=comma]{frontier_nocov.csv};\addlegendentry{0} \label{fig:frontier_zero}
\addplot[only marks, mark size=.7pt] coordinates {
(0.6635, 53000)
(0.664, 55000)
(0.6645, 57000)
};
\end{axis} \end{tikzpicture} \captionof{figure}{\centering Solutions using different values of $q$.} \label{fig:RHS}
\end{minipage}
\begin{minipage}[t]{0.48\textwidth} \begin{tikzpicture} \begin{axis}[
width=0.8\textwidth,
height=\textwidth,
axis lines=middle,
legend columns = 1,
scaled y ticks=false,
legend style={at={(0.05,0.98)},anchor=north west},
legend cell align={left},
x label style={at={(axis description cs:0.5,-0.05)},anchor=north},
y label style={at={(axis description cs:-0.3,.5)},rotate=90,anchor=south},
ylabel={Cost (\$/year)},
yticklabel style={
/pgf/number format/fixed,
/pgf/number format/precision=5
},
xlabel={Minimum access function value},
enlargelimits = true,
legend columns = 1,
ymin = 0,
]
\addlegendimage{empty legend} \addlegendentry{\hspace{-.6cm}\textbf{Factor}};
\addplot[orange,mark = otimes,select coords between index={1}{7}] table [y=C_0.9,x=P_0.9, col sep=comma]{frontier_equality.csv};\addlegendentry{0.9} \label{fig:frontier_0.9}
\addplot[black,mark = triangle,select coords between index={1}{7}] table [y=C_1.0,x=P_1.0, col sep=comma]{frontier_equality.csv};\addlegendentry{1.0} \label{fig:frontier_1.0}
\addplot[green,mark = square,select coords between index={1}{8}] table [y=C_1.1,x=P_1.1, col sep=comma]{frontier_equality.csv};\addlegendentry{1.1} \label{fig:frontier_1.1}
\addplot[blue,mark = *,select coords between index={1}{9}] table [y=C_1.2,x=P_1.2, col sep=comma]{frontier_equality.csv};\addlegendentry{1.2} \label{fig:frontier_1.2}
\addplot[red,mark = diamond,select coords between index={1}{9}] table [y=C_1.3,x=P_1.3, col sep=comma]{frontier_equality.csv};\addlegendentry{1.3} \label{fig:frontier_1.3}
\addplot[only marks, mark size=.7pt] coordinates {
(0.66-0.003, 41500)
(0.661-0.003, 43000)
(0.662-0.003, 44500)
};
\end{axis} \end{tikzpicture} \captionof{figure}{\centering Solutions using different time thresholds for $N_w$ determined by the factor.} \label{fig:frontier_compare} \end{minipage} \end{minipage}
Rather than changing $q$, we can also relax coverage by defining the coverage sets $N_w$ using a larger time threshold.
When covering sets are defined by a longer time threshold, the drop boxes are allowed to be located further away from the voters while still meeting the coverage constraints defined in constraint set \eqref{model:basecoverage}.
A larger time threshold may increase the inequity of access to the drop box infrastructure within the resulting solutions, since a census block group may be further from both covering drop boxes when compared to other census block groups. However, the access function continues to evaluate the effect of the drop box locations on voter turnout when all voting pathways are considered.
Figure \ref{fig:frontier_compare} illustrates the cost and minimum access function value of solutions to the DBLP using covering sets $N_w$ obtained using the same procedure as before, except that the time thresholds are multiplied by factors of 0.9 (\ref{fig:frontier_0.9}), 1.0 (\ref{fig:frontier_1.0}), 1.1 (\ref{fig:frontier_1.1}), 1.2 (\ref{fig:frontier_1.2}), and 1.3 (\ref{fig:frontier_1.3}) when $q = 2$.
A factor of 1.0 corresponds to the covering sets used to obtain the solutions discussed in Table \ref{T:solutions} and Figure \ref{fig:RHS}. We find the effects of changing the covering sets to be similar to the effect of changing $q$. Covering sets defined by a larger factor result in solutions with a larger minimum access function value for the same cost.
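The scaled covering rule can be sketched as a variant of the two-of-four check (names are ours; we scale only the time thresholds, as stated above, and leave the 4-mile distance threshold unscaled as an assumption):

```python
def covers_scaled(walk_min, drive_min, transit_min, road_miles, factor=1.0):
    """Two-of-four covering rule with time thresholds multiplied by
    `factor`; a factor of 1.0 recovers the original covering sets."""
    conditions = [
        walk_min <= 15 * factor,
        drive_min <= 15 * factor,
        transit_min <= 30 * factor,
        road_miles <= 4,  # distance threshold left unscaled (our assumption)
    ]
    return sum(conditions) >= 2
```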
We explore solutions DBLP 0, 3, and 6 of Table \ref{T:solutions} in more detail. Figure \ref{fig:illustrate_tours} illustrates the selected drop box locations and the collection tour visiting these drop boxes overlaid on a map of Milwaukee.
Each black circle indicates a selected drop box location, and the blue lines describe the order in which the drop boxes are visited on the collection tour (not the actual roads driven). The color of each block group indicates its access function value, with red reflecting a relatively low value and green a relatively high value. When cost is minimized (Solution 0), drop boxes are well-spaced in order to cover each block group twice, but relatively few drop boxes are placed to reduce cost.
Solution 0 is notably different from the locations selected during 2020 (Figure \ref{fig:existing_locations}) in the northern and southern areas of the city, despite locating the same number of drop boxes. The DBLP selects additional drop box locations in the north and south to ensure equitable access to the drop box system within those regions. When the cost and minimum access function value are higher (Solutions 3 and 6), more drop boxes are added.
The DBLP selects additional drop box locations in the middle and northern parts of the city, which would otherwise have a relatively low level of access to the voting infrastructure, indicated by the dark red in Figure \ref{fig:illustrate_tours}(a). Additional locations are not selected in the south, since those voters have relatively high access to the multiple voting pathways, indicated by the dark green in Figure \ref{fig:illustrate_tours}(a).
\begin{figure}
\caption{Optimal drop box locations and tour visiting these locations. Color of regions reflect the access function value (red is low, green is high).}
\label{fig:illustrate_tours}
\end{figure}
\subsection{Heuristic Results} In this section, we investigate the performance of the DBLP heuristic compared to the lazy constraint approach. The lazy constraints were implemented using the Gurobi 9.1 MIP solver.
Instances to the TSP and CTP created during the heuristic's execution were solved using the following methods.
To solve a CTP instance, we formulate a group Steiner Tree problem instance to determine the nodes to visit in the CTP tour \citep{garg_polylogarithmic_2000} using the approach in \citep{duin_solving_2004} coupled with the
technique introduced in \citep{gubichev_fast_2012}.
Due to the relatively small number of drop box locations, we optimally solve the TSP over selected locations using the Gurobi 9.1 MIP solver.
Our computational studies suggest that the proposed heuristic method approximates the Pareto frontier between cost and the minimum access function value well and does so quickly. Figure \ref{fig:frontier_heur} plots the cost and minimum access function value of solutions found using the MIP solver (\ref{fig:frontier_exact}) against heuristic solutions (\ref{fig:frontier_heuristic}) for four instances of the Milwaukee case study, each corresponding to covering sets defined by a different factor (1.0, 1.1, 1.2, or 1.3).
We also plot the cost and minimum access function value of the rules-of-thumb (\ref{fig:frontier_rot}) discussed in Section \ref{sec:naive}; however, recall that these policies may not be feasible for the DBLP.
We find two primary benefits of the heuristic method. First, a large number of solutions are identified quickly. Our experiments showed that the MIP solver can find a new policy every 225 seconds on average when using lazy constraints, while the heuristic method is able to identify between 141-181 policies in 126-220 seconds in total, depending on the factor used to construct the covering sets. Second, the difference between the cost and minimum access function value of solutions identified by the heuristic and MIP methods is small.
Moreover, the heuristic is able to identify solutions that are feasible for the DBLP that have a lower cost and higher minimum access function value than rules-of-thumb which may not be feasible for the DBLP. Election officials can implement the heuristic solutions or use them to determine a range of appropriate $r$ values and then explore optimal solutions within this range.
\begin{figure}
\caption{\centering Approximate Pareto frontier identified by the heuristic method (\ref{fig:frontier_heuristic}) compared to MIP solutions (\ref{fig:frontier_exact}). Blue dots (\ref{fig:frontier_rot}) represent the cost and minimum access function value of the rules-of-thumb discussed in Section \ref{sec:naive},
with blue stars (\ref{fig:frontier_rot_feas}) indicating feasible solutions for $q=2$.}
\label{fig:frontier_exact}
\label{fig:frontier_heuristic}
\label{fig:frontier_rot}
\label{fig:frontier_rot_feas}
\label{fig7:a}
\label{fig7:b}
\label{fig7:c}
\label{fig:frontier_heur}
\end{figure}
We also investigate the heuristic on randomly generated DBLP instances to further assess its performance.
The random instances were generated as follows. The locations of voter populations and drop boxes were randomly selected within a 100 by 100 grid. We let the time to travel between each pair of locations be the Manhattan ($\ell_1$) distance. We randomly selected up to a quarter of the drop boxes to be required locations ($T$). Fixed costs were randomly selected between \$5,000 and \$12,000 for each drop box. Operational costs were computed using the same assumptions described in Section \ref{sec:casestudy}, except costs were scaled by a random value between 0.5 and 1.5. The time thresholds used to construct the covering sets were randomly selected between 15 and 50. If a larger threshold was required to ensure each covering set included at least two locations, we selected the smallest such threshold. The value of $v_w^1$ was randomly selected between 50 and 95 and $v_w^0 = 100-v_w^1$ for each $w \in W$. The value of $a_{nw}$ was computed using the formula $e^{2.5 - d_{nw}/30}$, where $d_{nw}$ represents the Manhattan distance between $w$ and $n$ for each $w \in W$ and $n\in N$.
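The instance generator described above can be sketched as follows (a simplified version: the covering-set threshold logic and operational-cost scaling are omitted, and all names are ours):

```python
import math
import random

def random_instance(num_w, num_n, seed=0):
    """Generate a random DBLP instance on a 100-by-100 grid."""
    rng = random.Random(seed)
    # Random population and potential drop box coordinates
    W = [(rng.uniform(0, 100), rng.uniform(0, 100)) for _ in range(num_w)]
    N = [(rng.uniform(0, 100), rng.uniform(0, 100)) for _ in range(num_n)]
    manhattan = lambda p1, p2: abs(p1[0] - p2[0]) + abs(p1[1] - p2[1])
    # Up to a quarter of the drop boxes are required locations
    T = rng.sample(range(num_n), rng.randint(0, num_n // 4))
    fixed = [rng.uniform(5000, 12000) for _ in range(num_n)]
    v1 = [rng.uniform(50, 95) for _ in range(num_w)]
    v0 = [100 - v for v in v1]
    # a_nw = exp(2.5 - d_nw / 30) with d_nw the Manhattan distance
    a = [[math.exp(2.5 - manhattan(w, n) / 30) for n in N] for w in W]
    return W, N, T, fixed, v1, v0, a
```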
A comparison of solutions generated by the heuristic and MIP solver for nine randomly generated instances is provided in Table \ref{T:heur_results}. For each DBLP instance, we use the heuristic to obtain an approximation set for the DBLP. For each heuristic solution in the approximation set, we solve the DBLP to optimality using an $r$ equal to the minimum access function value admitted by the solution. We then compute the cost deviation of the solutions using the average percent difference in the cost of heuristic and optimal solutions corresponding to the same value of $r$. The results presented in Table \ref{T:heur_results} are consistent with the findings from the Milwaukee case study.
The heuristic method requires substantially less time than the MIP solver to find solutions corresponding to the same $r$ values. Moreover, the heuristic method finds solutions with small cost deviations, which indicates that the heuristic finds policies that are similar in cost and access to the optimal solutions.
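The cost deviation metric can be sketched as follows (names are ours; the two cost lists are assumed to be matched by $r$ value):

```python
def cost_deviation(heuristic_costs, optimal_costs):
    """Average percent difference in total cost between heuristic and
    optimal solutions corresponding to the same r values."""
    diffs = [(h - o) / o * 100.0
             for h, o in zip(heuristic_costs, optimal_costs)]
    return sum(diffs) / len(diffs)
```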
\begin{table}[htb!]
\centering
\caption{Comparison of the heuristic and MIP implementations for random DBLP instances. The cost deviation is calculated as the average percent difference in total cost for solutions corresponding to the same $r$. Each MIP instance was terminated after 3600 seconds if no optimal was found.}
\label{T:heur_results}
\begin{tabular*}{\columnwidth}{@{}c@{\extracolsep{\fill}}c@{\extracolsep{\fill}}c@{\extracolsep{\fill}}c@{\extracolsep{\fill}}c@{\extracolsep{\fill}}c@{\extracolsep{\fill}}c}
\toprule
$|W|$ & $|N|$ & $|T|$ & \shortstack{Number of\\Solutions} & \shortstack{Heuristic\\Time (s)} & \shortstack{MIP\lefteqn{^*}\\Time (s)} & \shortstack{Cost\\Deviation} \\
\midrule
100 & 50 & 8 & 59 & 5.22 & 23 & 0.52\% \\
500 & 50 & 11 & 61 & 14.42 & 168 & 0.46\% \\
1000 & 50 & 7 & 53 & 28.05 & 90 & 1.98\% \\
100 & 75 & 4 & 122 & 35.54 & 3,939 & 2.76\% \\
500 & 75 & 18 & 109 & 45.91 & 960 & 2.90\% \\
1000 & 75 & 15 & 93 & 65.33 & 661 & 0.97\% \\
100 & 100 & 9 & 162 & 84.11 & 23,644 & 4.33\% \\
500 & 100 & 1 & 193 & 173.95 & 349,400$^{**}$ & 8.01\%$^{**}$ \\
1000 & 100 & 12 & 117 & 119.53 & 30,109 & 4.12\% \\
\bottomrule
\end{tabular*}
$^*$ \footnotesize Ignoring duplicate optimal solutions;
$^{**}$ \footnotesize 108 instances terminated after 3600 seconds.
\end{table}
\section{Conclusion}\label{sec:discussion} In this paper, we introduce a structured and transparent approach to support the planning of ballot drop box voting systems, particularly for U.S.~voting systems. We do so by formalizing the drop box location problem (DBLP) to identify a set of optimal drop box locations. The locations are selected to minimize cost while ensuring voters have access to the drop box system and drop box risks are mitigated. Using a real-world case study, we demonstrate that the DBLP identifies drop box locations that consistently outperform rules-of-thumb across multiple criteria.
We also find that the trade-off between criteria is non-trivial and requires careful consideration.
Our research suggests that optimization is an important tool for designing the drop box infrastructure.
Simple guidance for designing drop box systems, such as locating one drop box per 15,000 registered voters, or other rules-of-thumb may be overly-simplistic and can cause election administrators to overlook cost-effective drop box locations that address inequalities within the voting infrastructure. Strategic drop box locations can reduce the ``cost'' of voting to address inequity within the voting system while ensuring that all voters have equitable access to the drop box system.
Future research can utilize the DBLP to answer additional drop box policy questions and support the drafting of legislation surrounding the use of drop boxes.
We introduce a lazy constraint approach to solve the DBLP to optimality. Computational experiments show that a single optimal solution to the DBLP can be found relatively quickly using this approach within a state-of-the-art MIP solver for moderately sized problem instances. However, the multiple goals of the DBLP often require multiple solutions to the DBLP.
We find that this can cause computational times to increase to levels that are unreasonable for practice. This motivates the need for exceptionally quick solution methods for the DBLP. We introduce a heuristic for the DBLP, and we demonstrate that the heuristic identifies quality solutions quickly.
Initial attempts at reducing the computational time needed to identify optimal solutions using cutting planes originally introduced for the CTP \citep{gendreau_covering_1997} proved unfruitful. Future research into the theory of the DBLP is needed to reduce solution times for exact methods.
The DBLP is intended to be a component of a larger suite of tools to help election administrators understand, assess, and ultimately design different facets of the voting infrastructure. Ideally, the DBLP and other operations research tools will eventually be integrated into an online platform designed to support election administrators in all aspects of election planning. However, current research is limited. It largely overlooks the vote-by-mail system and the risks of voting systems.
There is a substantial opportunity for the operations research community to support election planning by appropriately modeling voting systems and voting infrastructure.
Future research is needed to understand the temporal aspects of risk, particularly in the absentee voting process, and determine best practices for mitigating against malicious and non-malicious attacks. The DBLP and future models can then be incorporated into a comprehensive tool to support election officials in designing the election infrastructure in a way that increases voter turnout.
A key challenge within this space is the need to understand and incorporate models that describe how voters freely select from multiple voting pathways once the infrastructure is set. Voter choices ultimately determine the cost-effectiveness and performance of the voting system.
\appendix
\section{Supplementary materials} \subsection{Proofs}\label{appx:proofs} \begin{proof}[Proof of Theorem \ref{thm:nphard}] We reduce the traveling salesman problem (TSP) to the DBLP. Suppose we have an instance of the symmetric TSP defined by the nodes $\bar{N}$, edges $\bar{E}$, and edge costs $\bar{c}$. We construct an instance of the DBLP as follows. Let $N = \bar{N}$, $T = \bar{N}$, $W = \emptyset$, $r = 0$, $q = 0$, $f_j = 0$ for each $j \in \bar{N}$, $E = \bar{E}$, and $c_{ij} = \bar{c}_{ij}$ for $(i,j) \in \bar{E}$. Then, the DBLP is equivalent to: \begin{align}
\underset{x,y}{\min} \ & z_1 = \sum_{(i,j) \in \bar{E}} \bar{c}_{ij} x_{ij} \\
\text{s.t.} \
& \sum_{i \in N : (i,j) \in \bar{E}} x_{ij} = 2 & \forall \ j \in \bar{N} \\
& \sum_{i \in S, j \in N\setminus S} x_{ij} \geq 2 & \forall S \subset \bar{N}, \ 2 \leq |S| \leq |\bar{N}|-2 \\
& x_{ij} \in \{0,1\} & \forall \ (i, j) \in \bar{E} \end{align} The formulation follows from the fact that constraint sets \eqref{model:obj2} and \eqref{model:basecoverage} are empty since $W = \emptyset$, and constraint set \eqref{model:existing} requires that $y_{j}$ equal one for each $j \in T = \bar{N}$.
This equivalent formulation is an instance of the TSP. Thus, if we can solve the DBLP in polynomial time, then we can solve the TSP in polynomial time. Since the TSP is NP-Hard, so is the DBLP. \end{proof}
\begin{proof}[Proof of Lemma \ref{prop:covdom}] Suppose there was a set $N'\subseteq N$, such that $T \subseteq N'$, for which $$\frac{v^1_{w}+\sum_{n \in N'} a_{nw}}{v^0_{w}+v^1_{w}+\sum_{n \in N'} a_{nw} } < \frac{v^1_{\hat{w}}+\sum_{n \in N'} a_{n\hat{w}} }{v^0_{\hat{w}}+v^1_{\hat{w}}+\sum_{n \in N'} a_{n\hat{w}} }$$ Then it must be true that $$\frac{v^0_{w} }{v^0_{w}+v^1_{w}+\sum_{n \in N'} a_{nw} } > \frac{v^0_{\hat{w}} }{v^0_{\hat{w}}+v^1_{\hat{w}}+\sum_{n \in N'} a_{n\hat{w}} }$$ This implies that $$\frac{v^0_{\hat{w}}+v^1_{\hat{w}}+\sum_{n \in N'} a_{n\hat{w}} }{v^0_{w}+v^1_{w}+\sum_{n \in N'} a_{nw} } > \frac{v^0_{\hat{w}} }{v^0_{w} } \geq 1$$ where the second inequality is true by the assumption of $v_{\hat{w}}^0 \geq v_{w}^0$. This implies $$v^0_{\hat{w}}+v^1_{\hat{w}}+\sum_{n \in N'} a_{n\hat{w}} > v^0_{w}+v^1_{w}+\sum_{n \in N'} a_{nw} $$ $$ \implies v^0_{\hat{w}}+v^1_{\hat{w}}+\sum_{n \in T} a_{n\hat{w}}+\sum_{n \in N'\setminus T} a_{n\hat{w}} > v^0_{w}+v^1_{w}+\sum_{n \in T} a_{nw}+\sum_{n \in N'\setminus T} a_{nw} $$ $$ \implies (v^0_{\hat{w}}- v^0_{w})+(v^1_{\hat{w}}+\sum_{n \in T} a_{n\hat{w}} -v^1_{w}-\sum_{n \in T} a_{nw})+\sum_{n \in N' \setminus T} (a_{n\hat{w}}-a_{nw}) > 0 $$ However, each parenthesized term is nonpositive by assumption, so the sum cannot be greater than zero. This is a contradiction. \end{proof}
\begin{proof}[Proof of Feasibility of DBLP Solution from Section \ref{sec:heuristic}]
Let $\hat{x}\in \{0,1\}^{|E|}$ and $\hat{y}\in\{0,1\}^{|N|}$ be the feasible solution to the first CTP instance found. Let $\hat{T} := T \cup \{n \in N : \hat{y}_n = 1\}$ represent the updated set of required locations used when solving the second CTP instance. Let $x^*\in \{0,1\}^{|E|}$ and $y^*\in\{0,1\}^{|N|}$ be the feasible solution to the second CTP instance. Let $N^0 := \{n \in N : y_n^* = 1\}$ denote the drop box locations selected according to $y^*$. Given a valid solution procedure for the CTP, $x^*$ describes a tour visiting $N^0$. Thus, the solution must satisfy constraint sets \eqref{model:balance} and \eqref{model:subtourelim} for the DBLP. Moreover, $T \subseteq \hat{T} \subseteq N^0$ for any feasible solution to the second CTP instance. Thus, the solution satisfies constraint set \eqref{model:existing} for the DBLP. What remains to be verified is the satisfaction of constraint set \eqref{model:basecoverage}. By construction: \begin{align} \sum_{n \in N_w} y^*_n
&= |N_w \cap N^0| & \nonumber \\
&= |N_w \cap \hat{T}| + |N_w \cap (N^0\setminus \hat{T})| \nonumber \\
&\geq 1 + |N_w \cap (N^0\setminus \hat{T})| \nonumber \\ &\geq 1 + \sum_{n \in N_w \setminus \hat{T}} y^*_n \nonumber \\ &\geq 1 + 1 \nonumber \\ &\geq 2 \nonumber \end{align} The first equality follows from the definition of $N^0$. The second equality follows from the fact that $\hat{T} \subseteq N^0$. The third statement follows from the fact that
$|N_w \cap \hat{T}| = |N_w \cap \{n \in N : \hat{y}_n = 1\}| \geq 1$.
The fourth statement follows from the definition of $N^0$. The fifth statement follows from the fact that no location in $\hat{T}$ is a member of the covering sets in the second instance ($N_w \setminus \hat{T}$), and the second CTP must select another location to include within the tour to cover each $w \in W$.
\end{proof}
\subsection{Heuristic Method Pseudocode}\label{appx:heur}
The pseudocode of the DBLP heuristic solution method is presented in Algorithm \ref{alg:heur}. We assume that the reformulation of the DBLP outlined in Section \ref{sec:obj1reform} is used throughout the heuristic. We also assume that $q$ takes a value of either one or two; the method can easily be extended to cases where $q$ is larger. When $q = 0$, lines \ref{alg:CTP1sol}-\ref{alg:CTP2sol} can be replaced so that $x' \in \{0,1\}^{|E|}$ and $y'\in\{0,1\}^{|N|}$ are defined from a (heuristic) solution to the TSP over the set of locations $T$.
The steps of the heuristic are as follows. The initial solution of drop box locations and the associated collection tour is found in lines \ref{alg:CTP1}-\ref{alg:CTP2sol}. In lines \ref{alg:CTP1}-\ref{alg:CTP1sol}, a CTP instance is solved based on the DBLP instance. In lines \ref{alg:CTP2}-\ref{alg:CTP2sol}, a second CTP instance is created and solved. Solutions to both instances must be feasible for the respective CTP instances, but need not be optimal. The initial solution for the DBLP is represented by $N^0$, which represents the selected drop box locations, and $\mathcal{C}^0$, which represents the collection tour over the selected locations (lines \ref{alg:DBLP0n}-\ref{alg:DBLP0c}).
In lines \ref{alg:DBLPkstart}-\ref{alg:DBLPkend}, Algorithm \ref{alg:heur} finds DBLP solutions meeting a progressively higher bound $r$ for the minimal access function value. In line \ref{alg:DBLPkstart}, a set $C$ is initialized to store previously found solutions. This set is used later (lines \ref{alg:checkprev1}-\ref{alg:checkprev2}) to determine if Algorithm \ref{alg:heur} has re-found a solution. In lines \ref{alg:P}-\ref{alg:Pend}, Algorithm \ref{alg:heur} determines which pairs of drop box locations are feasible by checking constraint sets \eqref{model:obj2} and \eqref{model:basecoverage}.
We do not allow $i = j$, since this would represent no change to the drop box system.
In line \ref{alg:dcost}, we estimate the change in the collection tour cost resulting from the removal of $i$ and insertion of $j$, $\Delta \hat{c}(i,j)$.
This can be estimated using a variety of methods, with the simplest being cheapest cost insertion and shortcut removal. In line \ref{alg:dr}, we calculate the change in the minimal access function value for pair $(i,j)$, $\Delta r(i,j)$. In line \ref{alg:value}, we identify the best feasible pair using an angle-based approach similar to that used by \citet{current_median_1994}. The pairs are assessed based on the angle, using a counter-clockwise orientation, between the vector $\langle -1,0 \rangle$ and the vector ${\langle \Delta r(i,j),\Delta \hat{c}(i,j) \rangle}$. The angle $\theta_{i,j}$ can be calculated using the following formula:
\begin{align}
\theta(\Delta r, \Delta \hat{c}) = \begin{cases}
2\pi - \cos^{-1}\Big(\frac{-\Delta r}{\sqrt{\Delta r^2+\Delta \hat{c}^2}}\Big) & \Delta \hat{c} \geq 0 \\
\cos^{-1}\Big(\frac{-\Delta r}{\sqrt{\Delta r^2+\Delta \hat{c}^2}}\Big) & \text{otherwise}
\end{cases} \label{theta}
\end{align}
\noindent Algorithm \ref{alg:heur} selects the pair with the lowest $\theta(\Delta r, \Delta \hat{c})$ in line \ref{alg:bestval}; this leads to DBLP solutions with a lower cost.
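The angle selection rule of equation \eqref{theta} can be transcribed directly; the following is our own illustrative Python sketch, with the limiting cases checked against the geometric interpretation (counter-clockwise angle from $\langle -1, 0\rangle$).

```python
import math

def theta(dr, dc):
    """Counter-clockwise angle between <-1, 0> and <dr, dc>, as in
    equation (theta); pairs with smaller angles are preferred."""
    base = math.acos(-dr / math.hypot(dr, dc))
    return 2 * math.pi - base if dc >= 0 else base
```

For example, a pure access gain with no cost change, $\langle 1, 0\rangle$, sits at angle $\pi$; a pure cost reduction, $\langle 0, -1\rangle$, at $\pi/2$; and a pure cost increase, $\langle 0, 1\rangle$, at $3\pi/2$.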
In lines \ref{alg:newset}-\ref{alg:newtour}, the incumbent solution is updated based on the selected pair to swap. In lines \ref{alg:checkprev1}-\ref{alg:novelsol}, the minimal access function value, $r$, is updated for the next iteration. If Algorithm \ref{alg:heur} has re-found a solution (lines \ref{alg:checkprev1}-\ref{alg:checkprev2}), $r$ is set to be the current minimum access function value. This avoids future `cycling' where Algorithm \ref{alg:heur} finds the same solution multiple times. If Algorithm \ref{alg:heur} has found a new solution, then $r$ is updated to be the minimum of $\min_{w \in W} A_w(N^k)$ and $r+\varepsilon$. This ensures that Algorithm \ref{alg:heur} is able to find a feasible solution to the DBLP in the next iteration if one exists (this is a result of $a_{nw} > 0$ for all $n \in N\setminus T$ and $w \in W$). However, if $\varepsilon$ is sufficiently small, $r$ could be updated by setting $r = r+\varepsilon$. The value of $\varepsilon$ is sufficiently small when there is guaranteed to be at least one feasible pair to swap in each iteration. The following $\varepsilon$ value is guaranteed to be sufficiently small: $$\varepsilon = \min_{w \in W, n \in N} A_w(N) - A_w(N\setminus \{n\})$$ In lines \ref{alg:end?}-\ref{alg:DBLPkend}, Algorithm \ref{alg:heur} checks whether to terminate. It terminates when a solution that includes all drop box locations has been found. Algorithm \ref{alg:heur} returns all non-dominated solutions, and whether a solution is non-dominated by a new solution can be checked during each iteration.
In an actual implementation, the order of lines within Algorithm \ref{alg:heur} can be optimized to reduce run time (e.g., checking the condition on line 18 before the condition on line 17 may result in shorter run time).
\begin{algorithm}[H]
\caption{DBLP Heuristic ($\varepsilon$)}\label{alg:heur}
\begin{algorithmic}[1]
\Statex \hspace{-0.8cm} \textbf{input} A DBLP instance defined by $(N,T,E,W,N_w,\textbf{c},\textbf{v},\textbf{a},q)$ \label{line:DBLP}
\Statex
\hspace{-0.5cm} \textbf{(CTP)$'$: Find initial CTP solution when $q \geq 1$}
\State $(CTP)'$ := instance to the CTP defined by $(N,T,E,W,N_w,\textbf{c})$ \label{alg:CTP1}
\State $x' \in \{0,1\}^{|E|},\ y'\in \{0,1\}^{|N|}$ := heuristic solution to $(CTP)'$ \label{alg:CTP1sol}
\Statex
\hspace{-0.5cm} \textbf{(CTP)$''$: Find second CTP solution when $q =2$}
\State $W' := W \setminus \{w \in W : |\{n \in N_w : y_n' = 1\}| \geq 2 \}$ \label{alg:CTP2}
\State $T' := T \cup \{n \in N : y_n' = 1\}$
\State $N_w' := N_w \setminus T'' \quad \forall \ w \in W'$
\State $(CTP)''$ := instance to the CTP defined by $(N,T',E,W',N_w',\textbf{c})$
\State $x'' \in \{0,1\}^{|E|},\ y''\in \{0,1\}^{|N|}$ := heuristic solution to $(CTP)''$ \label{alg:CTP2sol}
\Statex
\hspace{-0.4cm} \textbf{Initialization}
\State $N^0 := \{n \in N : y_n = 1\}$ selected locations defined by $y'$ (when $q \leq 1$) or $y''$ (when $q = 2$) \label{alg:DBLP0n}
\State $\mathcal{C}^0 :=$ collection tour defined by $x'$ (when $q \leq 1$) or $x''$ (when $q = 2$) \label{alg:DBLP0c}
\State $C := \{\mathcal{C}^0\} $ a set of previously found solutions \State $r^1 := 0$ \label{alg:DBLPkstart}
\label{alg:DBLPkprev}
\algstore{myalg}
\end{algorithmic} \end{algorithm} \begin{algorithm} \begin{algorithmic} [1] \algrestore{myalg}
\Statex
\hspace{-0.4cm} \textbf{Iterative Improvements}
\For{$k = 1,2,3,\dots, \text{until return} $}
\State $(i^*,j^*) = \emptyset$
\State $\theta^* = 2\pi$
\For{$i \in (N^{k-1}\setminus T)$ or $i$ represents no drop box}\label{alg:P}
\For{$j \in (N\setminus N^{k-1})$ or $j$ represents no drop box ($ i \neq j$)}
\If{$|N_w \cap (N^{k-1}\cup \{j\}\setminus \{i\})| \geq q \quad \forall \ w \in W$}
\If{$r^{k} \leq \min_{w \in W} A_w(N^{k-1}\cup \{j\}\setminus \{i\})$}\label{alg:Pend}
\State \label{alg:dcost} \parbox[t]{\dimexpr\textwidth-\leftmargin-\labelsep-\labelwidth}{$\Delta \hat{c} (i,j) = $ estimated change in tour cost from removing $i$ and inserting $j$ }
\State \label{alg:dr} \parbox[t]{\dimexpr\textwidth-\leftmargin-\labelsep-\labelwidth}{$\Delta r(i,j) = \min_{w \in W} A_w(N^{k-1}\cup \{j\}\setminus \{i\}) - \min_{w \in W} A_w(N^{k-1})$}
\If{$\theta (\Delta r (i,j), \Delta \hat{c} (i,j))$ computed using equation \eqref{theta} $< \theta^*$}\label{alg:value}
\State $(i^*,j^*) = (i,j)$ \label{alg:bestval}
\State $\theta^* =\theta (\Delta r (i,j), \Delta \hat{c} (i,j)) $
\EndIf
\EndIf
\EndIf
\EndFor
\EndFor
\State $N^k := N^{k-1} \cup \{j^*\} \setminus \{i^*\}$ \label{alg:newset}
\State $\mathcal{C}^k := $ heuristic tour over $N^k$ \label{alg:newtour}
\If{$\mathcal{C}^k \in C$} \label{alg:checkprev1}
\State $r^{k+1} := \min_{w \in W} A_w(N^k)$ \label{alg:checkprev2}
\Else
\State $r^{k+1} := \min \{\min_{w \in W} A_w(N^k), r^k+\varepsilon\}$
\State $C = C \cup \{\mathcal{C}^k\}$ \label{alg:novelsol}
\EndIf
\If{$N^k = N$} \label{alg:end?}
\State \Return \label{alg:DBLPkend} Identify and return the non-dominated solutions in $C$
\EndIf
\EndFor
\end{algorithmic} \end{algorithm}
\subsection{Access Function Parameters}\label{appx:vnw} The access function used throughout this paper is modeled after the conditional/multinomial logit model from discrete choice theory. With a strict interpretation of the model, the access function value takes the form \begin{align}
A_w(N^*) : = \frac{v^1_{w}+\sum_{n \in N^*} a_{nw} }{v^0_{w}+v^1_{w}+\sum_{n \in N^*} a_{nw} } = \frac{e^{U_w^1}+\sum_{n \in N^*} e^{U_{wn}} }{e^{U_w^0}+e^{U_w^1}+\sum_{n \in N^*} e^{U_{wn}} } \nonumber \end{align} where $U_w^0$ represents the utility of not voting, $U_w^1$ represents the utility of voting using the non-drop box voting system, and $U_{wn}$ represents the utility of voting by using drop box $n$. The value of the access function value then represents the probability that an individual chooses to vote using any of the pathways available to them.
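The access function itself is simple to evaluate; the following is our own illustrative sketch (function and argument names are ours), computing $A_w(N^*)$ for a single population $w$.

```python
def access_value(v0, v1, a, selected):
    """A_w(N*) = (v_w^1 + sum_{n in N*} a_nw) / (v_w^0 + v_w^1 + sum_{n in N*} a_nw)
    for one population w; `a` maps drop box index -> a_nw and `selected`
    indexes the open drop boxes."""
    s = sum(a[n] for n in selected)
    return (v1 + s) / (v0 + v1 + s)
```

Note that adding a drop box can only increase $A_w$, since each $a_{nw} > 0$ adds the same positive amount to numerator and denominator of a fraction below one.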
According to an economic theory of election participation, potential voters decide whether to vote by comparing the cost to vote and the potential benefits from voting \citep{downs_economic_1957}. This idea was later codified as a linear combination of benefits and costs in the form of \citep{riker_theory_1968} $$\text{Utility} = \text{Benefits} - \text{Costs} $$
\citet{mcguire_does_2020} found that a decrease of one mile to the nearest drop box increases the probability of voting by 0.64 percent.
We use these ideas to justify the method by which we set the value of $a_{nw}$ for all $n \in N$ and $w \in W$, which was presented in Section \ref{sec:casestudy}. We identify a hypothetical function of the form $a_{nw} \approx e ^{\text{Benefits} - \text{Costs}}$ that aligns with the findings from \citet{mcguire_does_2020}. We find that $a_{nw} \approx e ^{\text{Benefits} - \text{Costs}} = e^{2.5-D}$, where $D$ is the distance in miles between the voter and the drop box, is an appropriate model to validate our method against. Table \ref{T:utility} is used within our assessment of this hypothetical function.
The second column of Table \ref{T:utility} provides an estimate of the increase in voter turnout for a region when a single drop box a distance of $D$ miles away is added to a voting system that currently has no drop box, assuming ${2.5-D}$ is an appropriate model for the voters' utility. For example, when a drop box is located a distance of $0.2$ miles from a voter, the expected increase in voter turnout is $2.7\%$. We assume a 70\% turnout in prior elections for this region. The values in the second column are calculated as follows
$$\frac{70+ e^{2.5-D} }{100+ e^{2.5-D}} - \frac{70}{100}$$ where the distance $D$ is given in the first column and the assumed 70\% turnout corresponds to $v^1_w = 70$ and $v^0_w = 30$ (consistent with the $v^0_w + v^1_w = 100$ scaling).
The values in the third column represent the estimated impact of a one mile decrease to the nearest drop box. The values are calculated by taking the value in the second column for $D$ and subtracting the value of the second column for a distance that is one mile longer ($1+D$). For example, the value in the first row is calculated by computing $0.027-0.011$, where the first value corresponds to $D = 0.2$ and the second corresponds to $D = 1.2$.
The average of the values in the third column is 0.0061. This roughly aligns with the 0.64 percent found by \citet{mcguire_does_2020} as desired.
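The table and the 0.0061 average can be reproduced numerically. The sketch below is ours; the $v^1_w = 70$, $v^0_w = 30$ scaling is inferred from the 70\% turnout assumption.

```python
import math

def turnout_gain(D, v1=70.0, v0=30.0):
    """Marginal increase in the access function value when one drop box at
    distance D (utility weight e^(2.5 - D)) is added to a system with
    70% prior turnout."""
    a = math.exp(2.5 - D)
    return (v1 + a) / (v0 + v1 + a) - v1 / (v0 + v1)

# Third column of the table: benefit of a one-mile decrease in distance
# to the nearest drop box, for D = 0.2, 0.4, ..., 3.0
distances = [0.2 * k for k in range(1, 16)]
benefits = [turnout_gain(D) - turnout_gain(D + 1.0) for D in distances]
avg = sum(benefits) / len(benefits)  # roughly 0.0061
```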
\begin{table}[H]
\centering
\caption{Implications of $e^{2.5-D}$ description of the $a_{nw}$'s assuming 70\% turnout in the voting system without any drop boxes.}
\label{T:utility}
\begin{tabular*}{\columnwidth}{@{}c@{\extracolsep{\fill}}c@{\extracolsep{\fill}}c }
\toprule
\shortstack{Distance to \\
Drop Box\\
(mi), D} & \shortstack{Marginal Increase\\ in Access \\ Function Value} & \shortstack{Benefit of 1 mile \\ Decrease to Drop Box\\ (Resulting in D)} \\
\midrule 0.2&0.027&0.017\\ 0.4&0.023&0.014\\ 0.6&0.019&0.012\\ 0.8&0.016&0.010\\ 1.0&0.013&0.008\\ 1.2&0.011&0.007\\ 1.4&0.009&0.005\\ 1.6&0.007&0.005\\ 1.8&0.006&0.004\\ 2.0&0.005&0.003\\ 2.2&0.004&0.003\\ 2.4&0.003&0.002\\ 2.6&0.003&0.002\\ 2.8&0.002&0.001\\ 3.0&0.002&0.001\\ \bottomrule
\end{tabular*}
\end{table}
We now validate the $a_{nw}$ values used in Section \ref{sec:casestudy}.
In Figure \ref{fig:utilitymodel} we plot (\ref{mark:sample}) the $a_{nw}$ value for 1,193 randomly sampled pairs $n \in N$ and $w \in W$ against the distance, $D$, between $n$ and $w$.
Overlaid on these points (\ref{mark:utility}) is the function $e^{2.5-D}$ where the cost is the distance $D$ between $n$ and $w$. We find that the proposed method from Section \ref{sec:casestudy} produces $a_{nw}$ values that roughly align with the function $e^{2.5-D}$. There is variance from the hypothetical line, especially with smaller distances. This is because we consider additional modes of transit and other factors in the actual calculation of $a_{nw}$. This is desired as it adds more realism.
\begin{figure}
\caption{The $a_{nw}$ value and distance between $n$ and $w$ for a sample of 1193 pairs (\ref{mark:sample}) of $n\in N$ and $w \in W$ overlaid with the hypothetical $e^{2.5 - D}$ (\ref{mark:utility}) for which $D <8$.}
\label{mark:sample}
\label{mark:utility}
\label{fig:utilitymodel}
\end{figure}
\end{document}
\begin{definition}[Definition:Cone/Base]
Consider a cone consisting of the set of all straight lines joining the boundary of a plane figure $PQR$ to a point $A$ not in the same plane of $PQR$:
The plane figure $PQR$ is called the '''base''' of the cone.
\end{definition}
\begin{document}
\title{Dirac particle dynamics of a superconducting circuit} \author{Elisha Svetitsky} \affiliation{Racah Institute of Physics, the Hebrew University of Jerusalem, Jerusalem, 91904 Israel} \author{Nadav Katz} \affiliation{Racah Institute of Physics, the Hebrew University of Jerusalem, Jerusalem, 91904 Israel}
\begin{abstract}
The core concept of quantum simulation is the mapping of an inaccessible quantum system onto a controllable one by identifying analogous dynamics. We map the Dirac equation of relativistic quantum mechanics in 3+1 dimensions onto a multi-level superconducting Josephson circuit. Resonant drives determine the particle mass and momentum and the quantum state represents the internal spinor dynamics, which are cast in the language of multi-level quantum optics. The degeneracy of the Dirac spectrum corresponds to a degeneracy of bright/dark states within the system and particle spin and helicity are employed to interpret the multi-level dynamics. We simulate the Schwinger mechanism of electron-positron pair production by introducing an analogous electric field as a doubly degenerate Landau-Zener problem. All proposed measurements can be performed well within typical decoherence times. This work opens a new avenue for experimental study of the Dirac equation and provides a tool for control of complex dynamics in multi-level systems.
\end{abstract}
\maketitle
Paul Dirac's celebrated equation for the electron \cite{Dirac1928} combines special relativity and quantum mechanics, predicting a host of phenomena such as antimatter and the spin $\frac{1}{2}$ degree of freedom. In the Dirac equation a single particle is described by a spinor comprising four functions of space and time. In contrast, quantum information and simulation \cite{Feynman1982,RevModPhys.86.153,Cirac2012,Buluta108} experiments are invariably performed in systems obeying the non-relativistic Schrodinger equation. In order to experimentally explore the Dirac equation in the laboratory, an appropriate mapping is required to enable a controllable quantum system to mimic a relativistic one. Here, we study the Dirac equation by mapping the internal spinor dynamics to a four-level system, resulting in novel intuition into complex multi-level dynamics previously studied in the context of quantum optics. While other works have made strides towards scalable simulations of quantum field theories \cite{0034-4885-79-1-014401,Martinez2016,Alba201364,jordan2012quantum}, we focus on the first-quantized Dirac equation to provide new perspective on coherent control of multi-level systems, promising ingredients for resource-efficient quantum computation \cite{lanyon2009simplifying,fedorov2012implementation} but often overlooked due to the complexity of their dynamics. In principle our scheme can be realized in a variety of systems with the appropriate level structure. We propose the use of superconducting Josephson circuits \cite{Clarke2008,Devoret2013} and show how to engineer the necessary level structure, motivated by their uniqueness in combining long coherence times, flexibility of design, and straightforward control.
The Dirac equation can be written in Hamiltonian form, $i\hbar\frac{\partial\psi}{\partial t}=H\psi$ \cite{Dirac1928}. In 1+1 and 2+1 dimensions $\psi$ is a two-component spinor with positive and negative energy components. In 3+1 dimensions the spinor has four components, reflecting the additional property of spin $\frac{1}{2}$. Spinor components are functions of space and time coupled together by the Hamiltonian. Our goal is to realize this Hamiltonian with a controllable quantum system.
\begin{figure}
\caption{Diamond level structure with resonant nearest-neighbor couplings. The four drive parameters correspond to particle mass and momentum, and the four level state is mapped to the spinor components of a Dirac particle.}
\label{fig:Fig1}
\end{figure}
\begin{figure}\label{fig:Fig2}
\end{figure}
We begin by considering a four-level system with nearest-neighbor transitions in a diamond configuration. The transitions are resonantly driven with amplitudes as shown in Fig. 1. Note that with this level structure the complex phases of the drives cannot all be absorbed into the bare levels as for a ladder configuration. Choosing to order the basis such that $\ket{\psi}\:=\:c_0\ket0+c_1\ket1+c_2\ket2+c_3\ket3=(c_0 \: c_3 \: c_2 \: c_1 )^{T}$ we obtain the Hamiltonian
\begin{equation} H=\begin{pmatrix} 0 & 0 & p_{z}-im & p_{x}-ip_{y}\\ 0 & 0 & p_{x}+ip_{y} & -p_{z}-im\\ p_{z}+im & p_{x}-ip_{y} & 0 & 0\\ p_{x}+ip_{y} & -p_{z}+im & 0 & 0 \end{pmatrix}\label{eq:1} \end{equation}
This is the Dirac Hamiltonian in the supersymmetric representation \cite{thaller2013dirac} for a plane wave with mass $m$ and momentum $\vec{p}=\left(p_{x},\:p_{y},\:p_{z}\right)$ (with natural units $\hbar=c=1$). For such a state the four spinor components have the same spatial dependence, $\psi\sim exp(i\vec{p}\cdot\vec{r})$, allowing a reduction from a continuous to a discrete basis by suppressing the common spatial factor. This mapping is limited to a momentum eigenstate, i.e. a single Fourier component; wave packets can be constructed in post-processing of ensemble measurements. This representation was first introduced to quantum simulation by \cite{Lamata2007} in the context of trapped ions and experimentally demonstrated by \cite{Gerritsma2010} in 1+1 dimensions as a spatially dependent Rabi system. In this work we focus on superconducting circuits which do not have a motional degree of freedom and therefore require a different mapping and interpretation of particle momentum, while maintaining validity of insights provided by the analogous Dirac dynamics. Moreover, our proposal brings the Dirac equation in 3+1 dimensions well within current experimental capabilities.
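Equation \eqref{eq:1} can be checked numerically. The following sketch is ours (the parameter values are arbitrary illustrative drives); it verifies that $H$ is Hermitian and satisfies $H^2 = E^2 I$ with $E = \sqrt{|p|^2 + m^2}$, which underlies the doubly degenerate $\pm E$ spectrum.

```python
import numpy as np

def dirac_H(m, px, py, pz):
    """Dirac Hamiltonian of equation (1) in the (c0, c3, c2, c1) ordering."""
    return np.array([
        [0, 0, pz - 1j*m, px - 1j*py],
        [0, 0, px + 1j*py, -pz - 1j*m],
        [pz + 1j*m, px - 1j*py, 0, 0],
        [px + 1j*py, -pz + 1j*m, 0, 0],
    ])

H = dirac_H(m=15.0, px=20.0, py=0.0, pz=0.0)
E = np.sqrt(20.0**2 + 15.0**2)  # relativistic energy, here 25 (in MHz)
# H is Hermitian and squares to E^2 * I, so its spectrum is {+E, +E, -E, -E}
assert np.allclose(H, H.conj().T)
assert np.allclose(H @ H, E**2 * np.eye(4))
```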
We realize the necessary diamond level structure with a pair of nominally identical, capacitively coupled superconducting transmon qubits \cite{houck2009life,Barends2013}. In identifying suitable circuit parameters for a multi-level experiment we seek to avoid crosstalk between different transitions while respecting the inherent tradeoff between Josephson qubit anharmonicity and sensitivity to charge noise \cite{Koch2007,PhysRevB.77.180502}. As shown in \cite{SuppNote}, suitable parameters are a bare first transition frequency of 5 GHz, anharmonicity of 300 MHz, and qubit-qubit coupling of 100 MHz, all readily achievable with standard fabrication techniques. The coupling hybridizes the $\ket{01},\ket{10}$ levels and results in a splitting of 200 MHz. Similar hybridization and level repulsion within the $\left\{\ket{20},\ket{11},\ket{02}\right\} $ manifold causes the highest-lying state to be shifted upwards by 100 MHz, resulting in four distinct transitions which can be addressed individually. Thus, all transitions are separated by at least 100 MHz, setting a reasonable limit on bandwidth and drive amplitudes. Multi-tone signals with phase control of each component can be synthesized with commercial electronics, making simultaneous resonant driving of several transitions straightforward. Throughout this work we neglect decoherence; typical coherence times for transmon qubits are an order of magnitude longer than required for our purpose.
Figure 2a shows the dynamics of a Dirac particle with $\vec{p}=\left(20\:\textrm{MHz},\:0,\:0\right)$ and a range of mass values. The populations oscillate smoothly with a single frequency between the ground state and a superposition of the $\ket1,\ket2$ states, never populating the $\ket3$ state. This result is initially surprising considering the usually complicated nature of multi-level dynamics, but can be explained with the insight of bright and dark states from quantum optics. To do so we split the four-level system into two three-level systems, one with a V configuration and the other with a $\Lambda$ configuration, with the middle states common to both (Fig. 2b). For a V system initially in the ground state the population can be shown to oscillate between the $\ket0$ state and a superposition 'bright state' (denoted $\ket{B_{0}}$) of the $\ket1,\ket2$ levels with a single frequency given by the Pythagorean sum of the coupling amplitudes, in this case the relativistic energy $E=\sqrt{|p|^{2}+m^{2}}$. The superposition state in the $\left\{ \ket1,\ket2\right\} $ subspace orthogonal to the bright state is never populated and is termed a 'dark state'. The same can be derived for the $\Lambda$ system, whose bright state we denote $\ket{B_3}$. Remarkably, for Dirac couplings the bright (dark) state of the V system coincides with the dark (bright) state of the $\Lambda$ system and the transitions to the highest state thus interfere destructively, preventing population of the $\ket3$ state. These simplified dynamics can be appreciated in light of the Dirac equation by taking the diagonalized Dirac Hamiltonian and factoring it into single-qubit operators, $diag(E,E,-E,-E)=E\cdot I\otimes\sigma_z$. In addition to revealing the single frequency $E$, this form shows how, despite appearances, the degeneracy limits dynamics to a two-dimensional Hilbert space.
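Because $H^2 = E^2 I$, the propagator is exactly $e^{-iHt} = \cos(Et)\,I - i\sin(Et)\,H/E$, which makes the dark-state claim easy to verify numerically. The sketch below is ours; the drive values are illustrative.

```python
import numpy as np

m, px, py, pz = 15.0, 20.0, 0.0, 0.0       # example drive parameters
E = np.sqrt(px**2 + py**2 + pz**2 + m**2)  # single oscillation frequency
H = np.array([
    [0, 0, pz - 1j*m, px - 1j*py],
    [0, 0, px + 1j*py, -pz - 1j*m],
    [pz + 1j*m, px - 1j*py, 0, 0],
    [px + 1j*py, -pz + 1j*m, 0, 0],
])

psi0 = np.array([1, 0, 0, 0], dtype=complex)  # start in the ground state |0>
max_p3 = 0.0
for t in np.linspace(0.0, 1.0, 201):
    # Since H^2 = E^2 I, the propagator is exactly cos(Et) I - i sin(Et) H/E
    U = np.cos(E * t) * np.eye(4) - 1j * np.sin(E * t) * H / E
    psi = U @ psi0
    # |3> amplitude is index 1 in the (c0, c3, c2, c1) ordering
    max_p3 = max(max_p3, abs(psi[1]) ** 2)
# max_p3 stays at zero: the two paths to |3> interfere destructively
```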
\begin{figure}
\caption{Particle spin, helicity conservation and the bright state. The Bloch vector of the bright state corresponds to particle spin, here shown in momentum space on a sphere of constant energy $\frac{|p|}{2\pi}=\frac{m}{2\pi}=20$ MHz. At $t=0$ the spin points up and the radial component equals the helicity. Along the equator helicity is zero and the bright state is tangent to the sphere. For nonzero $p_z$ the particle has finite helicity and the spin obtains a radial component.}
\label{fig:Fig3}
\end{figure}
Previous work \cite{Lamata2007,Gerritsma2010,Roos2011,lee2015tachyon} on simulating the Dirac equation with trapped ions was limited by experimental considerations to the 1+1 dimensional case, forfeiting the spin $\frac{1}{2}$ property which was the original impetus behind Dirac's work \cite{Dirac1928}. Our system enables simulation in 3+1 dimensions, preserving the spin degree of freedom and providing another handle on multi-level dynamics, specifically the details of the bright state. The components of the spin operator are $\Sigma_i\equiv \frac{1}{2}\begin{pmatrix} \sigma_i & 0\\
0 & \sigma_i \\ \end{pmatrix}$ ($i=x,y,z$). With our basis ordering $\langle \vec{\Sigma} \rangle$ is the sum of two unnormalized Bloch vectors, one for each of the $\left\{\ket{0},\ket{3}\right\}$ and $\left\{\ket{2},\ket{1}\right\}$ manifolds, coupled together by the Hamiltonian. In relativistic quantum mechanics particle spin is only part of the total angular momentum and is not conserved separately. To obtain a conserved quantity in our system we turn to the helicity, defined as the spin projection along the momentum axis, $\hat{h}=\frac{\vec{p}\cdot\vec{\Sigma}}{|p|}$.
Figure 3 shows the bright state as a function of momentum on a sphere of constant energy, as well as a stereographic projection where the south pole is mapped to the origin and the north pole to infinity. Since the spin points up at $t=0$ when the system is in the ground state, along the equator where $p_z=0$ the helicity is zero and the spin is orthogonal to the momentum at all times. Adding a $p_z$ component endows the particle with non-zero helicity and the bright state obtains a radial component. At the poles the spin is locked pointing up as required by helicity conservation. Experimentally, the momentum direction can be chosen by setting the phase of the drives on the $0-1$ and $2-3$ transitions, and the spin can be detected with two-qubit tomography \cite{Steffen1423} of the $\left\{\ket{1},\ket{2}\right\}$ subspace. We note that the topology of the spin texture is trivial, and that a topologically non-trivial spectrum can be obtained with a 'modified' Dirac equation to mimic topological insulators \cite{shen2013topological}.
\begin{figure*}\label{fig:Fig4}
\end{figure*}
\begin{figure}\label{fig:Fig5}
\end{figure}
Returning to the bright/dark state analysis, with such a drive-dependent interference effect it is natural to inquire into the effect of time-dependent drive amplitudes. These simulate electric fields in the gauge $\vec{\epsilon}=-\frac{\partial\vec{A}}{\partial t}$ (i.e. no scalar potential; here $\vec{\epsilon}$ is a simulated electric field and $\vec{A}$ is the vector potential) by shifting the momentum $\vec{p}\rightarrow\vec{p}-e\vec{A}$. A spatially inhomogeneous vector potential is not diagonal in the momentum representation, ruling out a straightforward simulation of a magnetic field. The Dirac equation in the presence of an electric field gives rise to electron-positron pair production in a 'dielectric breakdown of the vacuum', worked out by Sauter \cite{Sauter1931} for the first-quantized Dirac equation and by Schwinger \cite{Schwinger1951}, for whom the effect is usually named, within quantum electrodynamics. This effect is simulated by ramping the drive amplitude $p_{x}$ at a rate $\epsilon_{x}=(10\:\textrm{MHz})^{2}$ from an initial value $p_{x}=-50$ MHz to a final value $p_{x}=50$ MHz. Beginning in the state $\ket\pm_{01}=\frac{1}{\sqrt{2}}\left(\ket0\pm\ket1\right)$ (an eigenstate for $m\ll\left|p_{x}\right|$), population is transferred to the state $\ket\mp_{23}=\frac{1}{\sqrt{2}}\left(\ket2\mp\ket3\right)$ (Fig. 4a). For a system initially in the ground state $\ket0=\frac{1}{\sqrt{2}}\left(\ket{+}_{01} +\ket{-}_{01}\right)$, interference between the transitions leads to oscillations in the $\left\{ \ket2,\ket3\right\}$ manifold (Fig. 4b).
The Schwinger formula for the pair production probability is $P=\exp(-\pi\frac{m^{2}}{\epsilon})$, where $m$ is the particle mass and $\epsilon$ the electric field magnitude. The Schwinger limit $\epsilon_{s}=m^{2}$ for observing the effect remains inaccessible even to the most powerful lasers \cite{1742-6596-672-1-012020} due to the size of the electron mass, but in our simulation the mass is a tunable experimental parameter. The similarity of this formula to the two-level Landau-Zener tunneling probability has already been noted for two-component spinors \cite{rau1996reversible,PhysRevD.78.096009}. Since $\ket\pm_{01}$ do not couple to each other but only to $\ket\mp_{23}$, respectively, we can understand the multi-level dynamics of the chirped 3+1 dimensional Dirac equation by factoring it into two separate Landau-Zener problems, illustrated in Fig. 5a. For a fast sweep rate (large electric field) compared to the minimum energy gap (particle mass) the non-adiabatic tunneling between branches leaves population in the initial $\left\{\ket0,\ket1\right\}$ manifold. We therefore identify the population remaining in that manifold with pair production, which decreases with large mass. In Fig. 5b we show the final population of the $\left\{\ket0,\ket1\right\}$ manifold and compare to the Schwinger/Landau-Zener expression. Gaussian suppression of pair production with particle mass is seen clearly.
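The identification with Landau-Zener physics can be checked directly. The sketch below integrates one of the two-level blocks in the form $H(t)=\bigl(\begin{smallmatrix}p_x(t) & m\\ m & -p_x(t)\end{smallmatrix}\bigr)$ with $\hbar=1$ (the sign conventions of the block are an illustrative assumption), sweeps $p_x$ at rate $\epsilon$, and compares the surviving population with $\exp(-\pi m^{2}/\epsilon)$:

```python
import numpy as np
from scipy.linalg import expm

def lz_survival(m, rate=100.0, p_max=50.0, dt=1e-3):
    """Ramp p(t) from -p_max to +p_max at dp/dt = rate in the two-level
    Landau-Zener block H = [[p(t), m], [m, -p(t)]] (hbar = 1) and return
    the population remaining in the initial diabatic state, identified
    with the pair-production probability."""
    psi = np.array([1.0, 0.0], dtype=complex)
    for t in np.arange(-p_max / rate, p_max / rate, dt):
        p = rate * (t + 0.5 * dt)              # midpoint of the time step
        H = np.array([[p, m], [m, -p]], dtype=complex)
        psi = expm(-1j * H * dt) @ psi         # piecewise-constant evolution
    return abs(psi[0]) ** 2

eps = 100.0                                    # sweep rate, (10 MHz)^2
for m in (5.0, 10.0, 20.0):                    # tunable particle mass
    print(m, lz_survival(m, rate=eps), np.exp(-np.pi * m**2 / eps))
```

With the finite $\pm50$ MHz window of the main text the numerical survival tracks the Schwinger formula to the few-percent level; widening the sweep window suppresses the residual finite-time Landau-Zener oscillations.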
In conclusion, the Dirac equation in 3+1 dimensions is mapped to a controllable quantum system with a diamond level structure. We propose a new realization of such a level structure with a superconducting architecture by capacitively coupling two three-level Josephson qubits and identify feasible circuit parameters. The populations of four of the dressed states mimic the internal spinor dynamics of a Dirac particle whose mass and momentum are determined by the drive parameters. Thus we translate concepts between multi-level quantum optics and the Dirac equation. Multi-level interference effects are explained both in terms of quantum optics and the degeneracy of the Dirac spectrum. In contrast to previous works which studied the Dirac equation in lower dimensions, our scheme retains properties such as spin $\frac{1}{2}$ and helicity which are explicitly demonstrated in our system. We incorporate an electric field by chirping the drive amplitudes, simulating Schwinger pair production as a four-level Landau-Zener problem.
Wave packet and multi-mode spatio-temporal effects such as Zitterbewegung \cite{schrodinger1930kraftefreie} are observable in our scheme by superposing different plane wave measurements. Further study may extend this mapping to non-diagonal operators, e.g. spatially inhomogeneous vector potentials or 'synthetic' magnetic fields \cite{ray2014observation,Roushan2017}. Scaling to multi-particle dynamics requires an adaptation of the Jordan-Wigner transformation suitable for the specific multi-level structure in order to maintain fermionic statistics \cite{ortiz2001quantum}. Topological condensed matter systems \cite{shen2013topological} can be modeled in this context by also explicitly breaking symmetries in our Dirac Hamiltonian in a controlled manner.
We thank Benjamin Svetitsky, Andreas Wallraff, and Enrique Solano for fruitful discussions. This work is supported by the European Research Council Project No. 335933.
\begin{thebibliography}{36} \makeatletter \providecommand \@ifxundefined [1]{
\@ifx{#1\undefined} } \providecommand \@ifnum [1]{
\ifnum #1\expandafter \@firstoftwo
\else \expandafter \@secondoftwo
\fi } \providecommand \@ifx [1]{
\ifx #1\expandafter \@firstoftwo
\else \expandafter \@secondoftwo
\fi } \providecommand \natexlab [1]{#1} \providecommand \enquote [1]{``#1''} \providecommand \bibnamefont [1]{#1} \providecommand \bibfnamefont [1]{#1} \providecommand \citenamefont [1]{#1} \providecommand \href@noop [0]{\@secondoftwo} \providecommand \href [0]{\begingroup \@sanitize@url \@href} \providecommand \@href[1]{\@@startlink{#1}\@@href} \providecommand \@@href[1]{\endgroup#1\@@endlink} \providecommand \@sanitize@url [0]{\catcode `\\12\catcode `\$12\catcode
`\&12\catcode `\#12\catcode `\^12\catcode `\_12\catcode `\%12\relax} \providecommand \@@startlink[1]{} \providecommand \@@endlink[0]{} \providecommand \url [0]{\begingroup\@sanitize@url \@url } \providecommand \@url [1]{\endgroup\@href {#1}{\urlprefix }} \providecommand \urlprefix [0]{URL } \providecommand \Eprint [0]{\href } \providecommand \doibase [0]{http://dx.doi.org/} \providecommand \selectlanguage [0]{\@gobble} \providecommand \bibinfo [0]{\@secondoftwo} \providecommand \bibfield [0]{\@secondoftwo} \providecommand \translation [1]{[#1]} \providecommand \BibitemOpen [0]{} \providecommand \bibitemStop [0]{} \providecommand \bibitemNoStop [0]{.\EOS\space} \providecommand \EOS [0]{\spacefactor3000\relax} \providecommand \BibitemShut [1]{\csname bibitem#1\endcsname} \let\auto@bib@innerbib\@empty
\bibitem [{\citenamefont {Dirac}(1928)}]{Dirac1928}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {P.~A.~M.}\
\bibnamefont {Dirac}},\ }\href
{http://rspa.royalsocietypublishing.org/content/117/778/610.abstract}
{\bibfield {journal} {\bibinfo {journal} {Proc R Soc Lond A Math Phys Sci}\
}\textbf {\bibinfo {volume} {117}},\ \bibinfo {pages} {610} (\bibinfo {year}
{1928})}
\bibitem [{\citenamefont {Feynman}(1982)}]{Feynman1982}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {R.~P.}\ \bibnamefont
{Feynman}},\ }\href {http://dx.doi.org/10.1007/BF02650179} {\bibfield
{journal} {\bibinfo {journal} {International Journal of Theoretical
Physics}\ }\textbf {\bibinfo {volume} {21}},\ \bibinfo {pages} {467}
(\bibinfo {year} {1982})}
\bibitem [{\citenamefont {Georgescu}\ \emph {et~al.}(2014)\citenamefont
{Georgescu}, \citenamefont {Ashhab},\ and\ \citenamefont
{Nori}}]{RevModPhys.86.153}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {I.~M.}\ \bibnamefont
{Georgescu}}, \bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Ashhab}}, \
and\ \bibinfo {author} {\bibfnamefont {F.}~\bibnamefont {Nori}},\ }\href
{\doibase 10.1103/RevModPhys.86.153} {\bibfield {journal} {\bibinfo
{journal} {Rev. Mod. Phys.}\ }\textbf {\bibinfo {volume} {86}},\ \bibinfo
{pages} {153} (\bibinfo {year} {2014})}
\bibitem [{\citenamefont {Cirac}\ and\ \citenamefont
{Zoller}(2012)}]{Cirac2012}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {J.~I.}\ \bibnamefont
{Cirac}}\ and\ \bibinfo {author} {\bibfnamefont {P.}~\bibnamefont {Zoller}},\
}\href {http://dx.doi.org/10.1038/nphys2275} {\bibfield {journal} {\bibinfo
{journal} {Nat Phys}\ }\textbf {\bibinfo {volume} {8}},\ \bibinfo {pages}
{264} (\bibinfo {year} {2012})}
\bibitem [{\citenamefont {Buluta}\ and\ \citenamefont
{Nori}(2009)}]{Buluta108}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {I.}~\bibnamefont
{Buluta}}\ and\ \bibinfo {author} {\bibfnamefont {F.}~\bibnamefont {Nori}},\
}\href {\doibase 10.1126/science.1177838} {\bibfield {journal} {\bibinfo
{journal} {Science}\ }\textbf {\bibinfo {volume} {326}},\ \bibinfo {pages}
{108} (\bibinfo {year} {2009})},\ \Eprint
{http://arxiv.org/abs/http://science.sciencemag.org/content/326/5949/108.full.pdf}
{http://science.sciencemag.org/content/326/5949/108.full.pdf}
\bibitem [{\citenamefont {Zohar}\ \emph {et~al.}(2016)\citenamefont {Zohar},
\citenamefont {Cirac},\ and\ \citenamefont {Reznik}}]{0034-4885-79-1-014401}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {E.}~\bibnamefont
{Zohar}}, \bibinfo {author} {\bibfnamefont {J.~I.}\ \bibnamefont {Cirac}}, \
and\ \bibinfo {author} {\bibfnamefont {B.}~\bibnamefont {Reznik}},\ }\href
{http://stacks.iop.org/0034-4885/79/i=1/a=014401} {\bibfield {journal}
{\bibinfo {journal} {Reports on Progress in Physics}\ }\textbf {\bibinfo
{volume} {79}},\ \bibinfo {pages} {014401} (\bibinfo {year}
{2016})}
\bibitem [{\citenamefont {Martinez}\ \emph {et~al.}(2016)\citenamefont
{Martinez}, \citenamefont {Muschik}, \citenamefont {Schindler}, \citenamefont
{Nigg}, \citenamefont {Erhard}, \citenamefont {Heyl}, \citenamefont {Hauke},
\citenamefont {Dalmonte}, \citenamefont {Monz}, \citenamefont {Zoller},\ and\
\citenamefont {Blatt}}]{Martinez2016}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {E.~A.}\ \bibnamefont
{Martinez}}, \bibinfo {author} {\bibfnamefont {C.~A.}\ \bibnamefont
{Muschik}}, \bibinfo {author} {\bibfnamefont {P.}~\bibnamefont {Schindler}},
\bibinfo {author} {\bibfnamefont {D.}~\bibnamefont {Nigg}}, \bibinfo {author}
{\bibfnamefont {A.}~\bibnamefont {Erhard}}, \bibinfo {author} {\bibfnamefont
{M.}~\bibnamefont {Heyl}}, \bibinfo {author} {\bibfnamefont {P.}~\bibnamefont
{Hauke}}, \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Dalmonte}},
\bibinfo {author} {\bibfnamefont {T.}~\bibnamefont {Monz}}, \bibinfo {author}
{\bibfnamefont {P.}~\bibnamefont {Zoller}}, \ and\ \bibinfo {author}
{\bibfnamefont {R.}~\bibnamefont {Blatt}},\ }\href
{http://dx.doi.org/10.1038/nature18318} {\bibfield {journal} {\bibinfo
{journal} {Nature}\ }\textbf {\bibinfo {volume} {534}},\ \bibinfo {pages}
{516} (\bibinfo {year} {2016})}
\bibitem [{\citenamefont {Alba}\ \emph {et~al.}(2013)\citenamefont {Alba},
\citenamefont {Fernandez-Gonzalvo}, \citenamefont {Mur-Petit}, \citenamefont
{Garcia-Ripoll},\ and\ \citenamefont {Pachos}}]{Alba201364}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {E.}~\bibnamefont
{Alba}}, \bibinfo {author} {\bibfnamefont {X.}~\bibnamefont
{Fernandez-Gonzalvo}}, \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont
{Mur-Petit}}, \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont
{Garcia-Ripoll}}, \ and\ \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont
{Pachos}},\ }\href {\doibase https://doi.org/10.1016/j.aop.2012.10.005}
{\bibfield {journal} {\bibinfo {journal} {Annals of Physics}\ }\textbf
{\bibinfo {volume} {328}},\ \bibinfo {pages} {64 } (\bibinfo {year}
{2013})}
\bibitem [{\citenamefont {Jordan}\ \emph {et~al.}(2012)\citenamefont {Jordan},
\citenamefont {Lee},\ and\ \citenamefont {Preskill}}]{jordan2012quantum}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {S.~P.}\ \bibnamefont
{Jordan}}, \bibinfo {author} {\bibfnamefont {K.~S.}\ \bibnamefont {Lee}}, \
and\ \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Preskill}},\
}\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Science}\ }\textbf
{\bibinfo {volume} {336}},\ \bibinfo {pages} {1130} (\bibinfo {year}
{2012})}
\bibitem [{\citenamefont {Lanyon}\ \emph {et~al.}(2009)\citenamefont {Lanyon},
\citenamefont {Barbieri}, \citenamefont {Almeida}, \citenamefont {Jennewein},
\citenamefont {Ralph}, \citenamefont {Resch}, \citenamefont {Pryde},
\citenamefont {O’Brien}, \citenamefont {Gilchrist},\ and\ \citenamefont
{White}}]{lanyon2009simplifying}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {B.~P.}\ \bibnamefont
{Lanyon}}, \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Barbieri}},
\bibinfo {author} {\bibfnamefont {M.~P.}\ \bibnamefont {Almeida}}, \bibinfo
{author} {\bibfnamefont {T.}~\bibnamefont {Jennewein}}, \bibinfo {author}
{\bibfnamefont {T.~C.}\ \bibnamefont {Ralph}}, \bibinfo {author}
{\bibfnamefont {K.~J.}\ \bibnamefont {Resch}}, \bibinfo {author}
{\bibfnamefont {G.~J.}\ \bibnamefont {Pryde}}, \bibinfo {author}
{\bibfnamefont {J.~L.}\ \bibnamefont {O’Brien}}, \bibinfo {author}
{\bibfnamefont {A.}~\bibnamefont {Gilchrist}}, \ and\ \bibinfo {author}
{\bibfnamefont {A.~G.}\ \bibnamefont {White}},\ }\href@noop {} {\bibfield
{journal} {\bibinfo {journal} {Nature Physics}\ }\textbf {\bibinfo {volume}
{5}},\ \bibinfo {pages} {134} (\bibinfo {year} {2009})}
\bibitem [{\citenamefont {Fedorov}\ \emph {et~al.}(2012)\citenamefont
{Fedorov}, \citenamefont {Steffen}, \citenamefont {Baur}, \citenamefont
{da~Silva},\ and\ \citenamefont {Wallraff}}]{fedorov2012implementation}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {A.}~\bibnamefont
{Fedorov}}, \bibinfo {author} {\bibfnamefont {L.}~\bibnamefont {Steffen}},
\bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Baur}}, \bibinfo {author}
{\bibfnamefont {M.}~\bibnamefont {da~Silva}}, \ and\ \bibinfo {author}
{\bibfnamefont {A.}~\bibnamefont {Wallraff}},\ }\href@noop {} {\bibfield
{journal} {\bibinfo {journal} {Nature}\ }\textbf {\bibinfo {volume} {481}},\
\bibinfo {pages} {170} (\bibinfo {year} {2012})}
\bibitem [{\citenamefont {Clarke}\ and\ \citenamefont
{Wilhelm}(2008)}]{Clarke2008}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {J.}~\bibnamefont
{Clarke}}\ and\ \bibinfo {author} {\bibfnamefont {F.~K.}\ \bibnamefont
{Wilhelm}},\ }\href {http://dx.doi.org/10.1038/nature07128} {\bibfield
{journal} {\bibinfo {journal} {Nature}\ }\textbf {\bibinfo {volume} {453}},\
\bibinfo {pages} {1031} (\bibinfo {year} {2008})}
\bibitem [{\citenamefont {Devoret}\ and\ \citenamefont
{Schoelkopf}(2013)}]{Devoret2013}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {M.~H.}\ \bibnamefont
{Devoret}}\ and\ \bibinfo {author} {\bibfnamefont {R.~J.}\ \bibnamefont
{Schoelkopf}},\ }\href
{http://science.sciencemag.org/content/339/6124/1169.abstract} {\bibfield
{journal} {\bibinfo {journal} {Science}\ }\textbf {\bibinfo {volume}
{339}},\ \bibinfo {pages} {1169} (\bibinfo {year} {2013})}
\bibitem [{\citenamefont {Thaller}(2013)}]{thaller2013dirac}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {B.}~\bibnamefont
{Thaller}},\ }\href@noop {} {\emph {\bibinfo {title} {The dirac equation}}}\
(\bibinfo {publisher} {Springer Science \& Business Media},\ \bibinfo {year}
{2013})
\bibitem [{\citenamefont {Lamata}\ \emph {et~al.}(2007)\citenamefont {Lamata},
\citenamefont {Le\'on}, \citenamefont {Sch\"atz},\ and\ \citenamefont
{Solano}}]{Lamata2007}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {L.}~\bibnamefont
{Lamata}}, \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Le\'on}},
\bibinfo {author} {\bibfnamefont {T.}~\bibnamefont {Sch\"atz}}, \ and\
\bibinfo {author} {\bibfnamefont {E.}~\bibnamefont {Solano}},\ }\href
{\doibase 10.1103/PhysRevLett.98.253005} {\bibfield {journal} {\bibinfo
{journal} {Phys. Rev. Lett.}\ }\textbf {\bibinfo {volume} {98}},\ \bibinfo
{pages} {253005} (\bibinfo {year} {2007})}
\bibitem [{\citenamefont {Gerritsma}\ \emph {et~al.}(2010)\citenamefont
{Gerritsma}, \citenamefont {Kirchmair}, \citenamefont {Zahringer},
\citenamefont {Solano}, \citenamefont {Blatt},\ and\ \citenamefont
{Roos}}]{Gerritsma2010}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {R.}~\bibnamefont
{Gerritsma}}, \bibinfo {author} {\bibfnamefont {G.}~\bibnamefont
{Kirchmair}}, \bibinfo {author} {\bibfnamefont {F.}~\bibnamefont
{Zahringer}}, \bibinfo {author} {\bibfnamefont {E.}~\bibnamefont {Solano}},
\bibinfo {author} {\bibfnamefont {R.}~\bibnamefont {Blatt}}, \ and\ \bibinfo
{author} {\bibfnamefont {C.~F.}\ \bibnamefont {Roos}},\ }\href
{http://dx.doi.org/10.1038/nature08688} {\bibfield {journal} {\bibinfo
{journal} {Nature}\ }\textbf {\bibinfo {volume} {463}},\ \bibinfo {pages}
{68} (\bibinfo {year} {2010})}
\bibitem{houck2009life}
A.~A. Houck, J.~Koch, M.~H. Devoret, S.~M. Girvin,\ and\ R.~J. Schoelkopf,\ Quantum Information Processing \textbf{8}, 105 (2009).
\bibitem [{\citenamefont {Barends}\ \emph {et~al.}(2013)\citenamefont
{Barends}, \citenamefont {Kelly}, \citenamefont {Megrant}, \citenamefont
{Sank}, \citenamefont {Jeffrey}, \citenamefont {Chen}, \citenamefont {Yin},
\citenamefont {Chiaro}, \citenamefont {Mutus}, \citenamefont {Neill},
\citenamefont {O’Malley}, \citenamefont {Roushan}, \citenamefont {Wenner},
\citenamefont {White}, \citenamefont {Cleland},\ and\ \citenamefont
{Martinis}}]{Barends2013}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {R.}~\bibnamefont
{Barends}}, \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Kelly}},
\bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Megrant}}, \bibinfo
{author} {\bibfnamefont {D.}~\bibnamefont {Sank}}, \bibinfo {author}
{\bibfnamefont {E.}~\bibnamefont {Jeffrey}}, \bibinfo {author} {\bibfnamefont
{Y.}~\bibnamefont {Chen}}, \bibinfo {author} {\bibfnamefont {Y.}~\bibnamefont
{Yin}}, \bibinfo {author} {\bibfnamefont {B.}~\bibnamefont {Chiaro}},
\bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Mutus}}, \bibinfo
{author} {\bibfnamefont {C.}~\bibnamefont {Neill}}, \bibinfo {author}
{\bibfnamefont {P.}~\bibnamefont {O’Malley}}, \bibinfo {author}
{\bibfnamefont {P.}~\bibnamefont {Roushan}}, \bibinfo {author} {\bibfnamefont
{J.}~\bibnamefont {Wenner}}, \bibinfo {author} {\bibfnamefont {T.~C.}\
\bibnamefont {White}}, \bibinfo {author} {\bibfnamefont {A.~N.}\ \bibnamefont
{Cleland}}, \ and\ \bibinfo {author} {\bibfnamefont {J.~M.}\ \bibnamefont
{Martinis}},\ }\href
{https://link.aps.org/doi/10.1103/PhysRevLett.111.080502} {\bibfield
{journal} {\bibinfo {journal} {Phys. Rev. Lett.}\ }\textbf {\bibinfo
{volume} {111}},\ \bibinfo {pages} {080502} (\bibinfo {year}
{2013})}
\bibitem [{\citenamefont {Koch}\ \emph {et~al.}(2007)\citenamefont {Koch},
\citenamefont {Yu}, \citenamefont {Gambetta}, \citenamefont {Houck},
\citenamefont {Schuster}, \citenamefont {Majer}, \citenamefont {Blais},
\citenamefont {Devoret}, \citenamefont {Girvin},\ and\ \citenamefont
{Schoelkopf}}]{Koch2007}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {J.}~\bibnamefont
{Koch}}, \bibinfo {author} {\bibfnamefont {T.~M.}\ \bibnamefont {Yu}},
\bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Gambetta}}, \bibinfo
{author} {\bibfnamefont {A.~A.}\ \bibnamefont {Houck}}, \bibinfo {author}
{\bibfnamefont {D.~I.}\ \bibnamefont {Schuster}}, \bibinfo {author}
{\bibfnamefont {J.}~\bibnamefont {Majer}}, \bibinfo {author} {\bibfnamefont
{A.}~\bibnamefont {Blais}}, \bibinfo {author} {\bibfnamefont {M.~H.}\
\bibnamefont {Devoret}}, \bibinfo {author} {\bibfnamefont {S.~M.}\
\bibnamefont {Girvin}}, \ and\ \bibinfo {author} {\bibfnamefont {R.~J.}\
\bibnamefont {Schoelkopf}},\ }\href
{https://link.aps.org/doi/10.1103/PhysRevA.76.042319} {\bibfield {journal}
{\bibinfo {journal} {Phys. Rev. A}\ }\textbf {\bibinfo {volume} {76}},\
\bibinfo {pages} {042319} (\bibinfo {year} {2007})}
\bibitem [{\citenamefont {Schreier}\ \emph {et~al.}(2008)\citenamefont
{Schreier}, \citenamefont {Houck}, \citenamefont {Koch}, \citenamefont
{Schuster}, \citenamefont {Johnson}, \citenamefont {Chow}, \citenamefont
{Gambetta}, \citenamefont {Majer}, \citenamefont {Frunzio}, \citenamefont
{Devoret}, \citenamefont {Girvin},\ and\ \citenamefont
{Schoelkopf}}]{PhysRevB.77.180502}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {J.~A.}\ \bibnamefont
{Schreier}}, \bibinfo {author} {\bibfnamefont {A.~A.}\ \bibnamefont {Houck}},
\bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Koch}}, \bibinfo {author}
{\bibfnamefont {D.~I.}\ \bibnamefont {Schuster}}, \bibinfo {author}
{\bibfnamefont {B.~R.}\ \bibnamefont {Johnson}}, \bibinfo {author}
{\bibfnamefont {J.~M.}\ \bibnamefont {Chow}}, \bibinfo {author}
{\bibfnamefont {J.~M.}\ \bibnamefont {Gambetta}}, \bibinfo {author}
{\bibfnamefont {J.}~\bibnamefont {Majer}}, \bibinfo {author} {\bibfnamefont
{L.}~\bibnamefont {Frunzio}}, \bibinfo {author} {\bibfnamefont {M.~H.}\
\bibnamefont {Devoret}}, \bibinfo {author} {\bibfnamefont {S.~M.}\
\bibnamefont {Girvin}}, \ and\ \bibinfo {author} {\bibfnamefont {R.~J.}\
\bibnamefont {Schoelkopf}},\ }\href {\doibase 10.1103/PhysRevB.77.180502}
{\bibfield {journal} {\bibinfo {journal} {Phys. Rev. B}\ }\textbf {\bibinfo
{volume} {77}},\ \bibinfo {pages} {180502} (\bibinfo {year}
{2008})}
\bibitem [{SupNote()}]{SuppNote}
\BibitemOpen
\href@noop {} {}\bibinfo {note} {See Supplemental
Material at ...}
\bibitem [{\citenamefont {Roos}\ \emph {et~al.}(2011)\citenamefont {Roos},
\citenamefont {Gerritsma}, \citenamefont {Kirchmair}, \citenamefont
{Z{\"a}hringer}, \citenamefont {Solano},\ and\ \citenamefont
{Blatt}}]{Roos2011}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {C.~F.}\ \bibnamefont
{Roos}}, \bibinfo {author} {\bibfnamefont {R.}~\bibnamefont {Gerritsma}},
\bibinfo {author} {\bibfnamefont {G.}~\bibnamefont {Kirchmair}}, \bibinfo
{author} {\bibfnamefont {F.}~\bibnamefont {Z{\"a}hringer}}, \bibinfo {author}
{\bibfnamefont {E.}~\bibnamefont {Solano}}, \ and\ \bibinfo {author}
{\bibfnamefont {R.}~\bibnamefont {Blatt}},\ }\href
{http://stacks.iop.org/1742-6596/264/i=1/a=012020} {\bibfield {journal}
{\bibinfo {journal} {Journal of Physics: Conference Series}\ }\textbf
{\bibinfo {volume} {264}},\ \bibinfo {pages} {012020} (\bibinfo {year}
{2011})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Lee}\ \emph {et~al.}(2015)\citenamefont {Lee},
\citenamefont {Alvarez-Rodriguez}, \citenamefont {Cheng}, \citenamefont
{Lamata},\ and\ \citenamefont {Solano}}]{lee2015tachyon}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {T.~E.}\ \bibnamefont
{Lee}}, \bibinfo {author} {\bibfnamefont {U.}~\bibnamefont
{Alvarez-Rodriguez}}, \bibinfo {author} {\bibfnamefont {X.-H.}\ \bibnamefont
{Cheng}}, \bibinfo {author} {\bibfnamefont {L.}~\bibnamefont {Lamata}}, \
and\ \bibinfo {author} {\bibfnamefont {E.}~\bibnamefont {Solano}},\
}\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Physical Review
A}\ }\textbf {\bibinfo {volume} {92}},\ \bibinfo {pages} {032129} (\bibinfo
{year} {2015})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Steffen}\ \emph {et~al.}(2006)\citenamefont
{Steffen}, \citenamefont {Ansmann}, \citenamefont {Bialczak}, \citenamefont
{Katz}, \citenamefont {Lucero}, \citenamefont {McDermott}, \citenamefont
{Neeley}, \citenamefont {Weig}, \citenamefont {Cleland},\ and\ \citenamefont
{Martinis}}]{Steffen1423}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {M.}~\bibnamefont
{Steffen}}, \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Ansmann}},
\bibinfo {author} {\bibfnamefont {R.~C.}\ \bibnamefont {Bialczak}}, \bibinfo
{author} {\bibfnamefont {N.}~\bibnamefont {Katz}}, \bibinfo {author}
{\bibfnamefont {E.}~\bibnamefont {Lucero}}, \bibinfo {author} {\bibfnamefont
{R.}~\bibnamefont {McDermott}}, \bibinfo {author} {\bibfnamefont
{M.}~\bibnamefont {Neeley}}, \bibinfo {author} {\bibfnamefont {E.~M.}\
\bibnamefont {Weig}}, \bibinfo {author} {\bibfnamefont {A.~N.}\ \bibnamefont
{Cleland}}, \ and\ \bibinfo {author} {\bibfnamefont {J.~M.}\ \bibnamefont
{Martinis}},\ }\href {\doibase 10.1126/science.1130886} {\bibfield {journal}
{\bibinfo {journal} {Science}\ }\textbf {\bibinfo {volume} {313}},\ \bibinfo
{pages} {1423} (\bibinfo {year} {2006})},\ \Eprint
{http://arxiv.org/abs/http://science.sciencemag.org/content/313/5792/1423.full.pdf}
{http://science.sciencemag.org/content/313/5792/1423.full.pdf}
\bibitem [{\citenamefont {Shen}(2013)}]{shen2013topological}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {S.-Q.}\ \bibnamefont
{Shen}},\ }\href@noop {} {\emph {\bibinfo {title} {Topological insulators:
Dirac equation in condensed matters}}},\ Vol.\ \bibinfo {volume} {174}\
(\bibinfo {publisher} {Springer Science \& Business Media},\ \bibinfo {year}
{2013})
\bibitem [{\citenamefont {Sauter}(1931)}]{Sauter1931}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {F.}~\bibnamefont
{Sauter}},\ }\href {http://dx.doi.org/10.1007/BF01339461} {\bibfield
{journal} {\bibinfo {journal} {Zeitschrift f{\"u}r Physik}\ }\textbf {\bibinfo
{volume} {69}},\ \bibinfo {pages} {742} (\bibinfo {year} {1931})}
\bibitem [{\citenamefont {Schwinger}(1951)}]{Schwinger1951}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {J.}~\bibnamefont
{Schwinger}},\ }\href {https://link.aps.org/doi/10.1103/PhysRev.82.664}
{\bibfield {journal} {\bibinfo {journal} {Phys. Rev.}\ }\textbf {\bibinfo
{volume} {82}},\ \bibinfo {pages} {664} (\bibinfo {year} {1951})}
\bibitem [{\citenamefont {Blaschke}\ \emph {et~al.}(2016)\citenamefont
{Blaschke}, \citenamefont {Gevorgyan}, \citenamefont {Panferov},\ and\
\citenamefont {Smolyansky}}]{1742-6596-672-1-012020}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {D.}~\bibnamefont
{Blaschke}}, \bibinfo {author} {\bibfnamefont {N.~T.}\ \bibnamefont
{Gevorgyan}}, \bibinfo {author} {\bibfnamefont {A.~D.}\ \bibnamefont
{Panferov}}, \ and\ \bibinfo {author} {\bibfnamefont {S.~A.}\ \bibnamefont
{Smolyansky}},\ }\href {http://stacks.iop.org/1742-6596/672/i=1/a=012020}
{\bibfield {journal} {\bibinfo {journal} {Journal of Physics: Conference
Series}\ }\textbf {\bibinfo {volume} {672}},\ \bibinfo {pages} {012020}
(\bibinfo {year} {2016})}
\bibitem [{\citenamefont {Rau}\ and\ \citenamefont
{M{\"u}ller}(1996)}]{rau1996reversible}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {J.}~\bibnamefont
{Rau}}\ and\ \bibinfo {author} {\bibfnamefont {B.}~\bibnamefont
{M{\"u}ller}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal}
{Physics Reports}\ }\textbf {\bibinfo {volume} {272}},\ \bibinfo {pages} {1}
(\bibinfo {year} {1996})}
\bibitem [{\citenamefont {Allor}\ \emph {et~al.}(2008)\citenamefont {Allor},
\citenamefont {Cohen},\ and\ \citenamefont {McGady}}]{PhysRevD.78.096009}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {D.}~\bibnamefont
{Allor}}, \bibinfo {author} {\bibfnamefont {T.~D.}\ \bibnamefont {Cohen}}, \
and\ \bibinfo {author} {\bibfnamefont {D.~A.}\ \bibnamefont {McGady}},\
}\href {\doibase 10.1103/PhysRevD.78.096009} {\bibfield {journal} {\bibinfo
{journal} {Phys. Rev. D}\ }\textbf {\bibinfo {volume} {78}},\ \bibinfo
{pages} {096009} (\bibinfo {year} {2008})}
\bibitem{schrodinger1930kraftefreie}
E.~Schr{\"o}dinger,\ \emph{{\"U}ber die kr{\"a}ftefreie Bewegung in der relativistischen Quantenmechanik} (Akademie der Wissenschaften in Kommission bei W. de Gruyter u. Company, 1930).
\bibitem [{\citenamefont {Ray}\ \emph {et~al.}(2014)\citenamefont {Ray},
\citenamefont {Ruokokoski}, \citenamefont {Kandel}, \citenamefont
{M{\"o}tt{\"o}nen},\ and\ \citenamefont {Hall}}]{ray2014observation}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {M.~W.}\ \bibnamefont
{Ray}}, \bibinfo {author} {\bibfnamefont {E.}~\bibnamefont {Ruokokoski}},
\bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Kandel}}, \bibinfo
{author} {\bibfnamefont {M.}~\bibnamefont {M{\"o}tt{\"o}nen}}, \ and\
\bibinfo {author} {\bibfnamefont {D.}~\bibnamefont {Hall}},\ }\href@noop {}
{\bibfield {journal} {\bibinfo {journal} {Nature}\ }\textbf {\bibinfo
{volume} {505}},\ \bibinfo {pages} {657} (\bibinfo {year}
{2014})}
\bibitem{Roushan2017}
P.~Roushan, C.~Neill, A.~Megrant, Y.~Chen, R.~Babbush, R.~Barends, B.~Campbell, Z.~Chen, B.~Chiaro, A.~Dunsworth, A.~Fowler, E.~Jeffrey, J.~Kelly, E.~Lucero, J.~Mutus, P.~J.~J. O'Malley, M.~Neeley, C.~Quintana, D.~Sank, A.~Vainsencher, J.~Wenner, T.~White, E.~Kapit, H.~Neven,\ and\ J.~Martinis,\ Nat. Phys. \textbf{13}, 146 (2017).
\bibitem{ortiz2001quantum}
G.~Ortiz, J.~E. Gubernatis, E.~Knill,\ and\ R.~Laflamme,\ Phys. Rev. A \textbf{64}, 022319 (2001).
\end{thebibliography}
\cleardoublepage
\widetext \begin{center} \textbf{\large Supplementary material: Dirac particle dynamics of a superconducting circuit}
\text{Elisha Svetitsky$^1$ and Nadav Katz$^1$}
\textit{$^1$Racah Institute of Physics, the Hebrew University of Jerusalem, Jerusalem, 91904 Israel} \end{center}
\setcounter{equation}{0} \setcounter{figure}{0} \setcounter{table}{0} \setcounter{page}{1} \makeatletter \renewcommand{\theequation}{S\arabic{equation}} \renewcommand{\thefigure}{S\arabic{figure}}
\section*{Diamond level structure with superconducting circuits}
Here we show how the level structure illustrated in Fig. 1 of the main text is obtained with superconducting qubits. The requisite properties are a diamond level structure with nearest-neighbor transitions and a unique frequency for each transition.
Superconducting qubits are anharmonic LC oscillators with unequally spaced levels in a ladder configuration. They can be designed for a chosen anharmonicity (defined here as $\omega_{12}-\omega_{01}$), but large anharmonicity comes at a price of increased sensitivity to charge noise and resultant dephasing. In typical two-level experiments the anharmonicity imposes a limit on the Rabi frequency, since power broadening induces 'leakage' to higher levels. This problem is compounded for multi-level experiments of the kind described in the main text: the circuit is excited by a multi-tone pulse where each frequency component is intended for a specific transition but is seen by the entire spectrum, and one relies on sufficient detuning from unwanted transitions to prevent crosstalk.
A simple diamond level structure can be obtained by coupling two two-level systems and identifying the $\big(\ket{00},\ket{01},\ket{10},\ket{11}\big)$ states as $\big(\ket{0},\ket{1},\ket{2},\ket{3}\big)$, but this results in identical frequencies for the $\ket{0}\leftrightarrow\ket{1}$ $\big(\ket{0}\leftrightarrow\ket{2}\big)$ , $\ket{2}\leftrightarrow\ket{3}$ $\big(\ket{1}\leftrightarrow\ket{3}\big)$ transitions, rendering individual addressability impossible. To obtain a fully distinct set of transition frequencies we must consider the levels $\ket{02},\ket{20}$ which are usually justifiably neglected if the single-qubit anharmonicity is sufficiently large relative to both the Rabi frequency and the qubit-qubit coupling. For sufficiently strong coupling the $\ket{11},\ket{02}$, and $\ket{20}$ levels hybridize and repel the $\ket{11}$ upwards, producing the desired frequency shift.
Writing $\omega_0$ for the first transition, $\kappa$ for the anharmonicity ($\kappa<0$) and $g$ for the qubit-qubit coupling (Fig. S1a), the dressed eigenstates are found by diagonalizing
\begin{equation}
H =\begin{pmatrix}
0 & 0 & 0 & 0 & 0 & 0 \\
0 & \omega_0 & g & 0 & 0 & 0 \\
0 & g & \omega_0 & 0 & 0 & 0 \\
0 & 0 & 0 & 2\omega_0+\kappa & 0 & g\sqrt{2} \\
0 & 0 & 0 & 0 & 2\omega_0+\kappa & g\sqrt{2} \\
0 & 0 & 0 & g\sqrt{2} & g\sqrt{2} & 2\omega_0 \\
\end{pmatrix}
\end{equation}
For $\kappa=-3g$ we get an effective anharmonicity of $g$ (Fig. S1b). States not part of the desired diamond structure are also detuned by at least this amount from the nearest transition. For transmon qubits, reasonable parameters are $\kappa=-300$ MHz and $g=100$ MHz.
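The dressed spectrum can be checked numerically. In the minimal sketch below, the basis ordering $(\ket{00},\ket{01},\ket{10},\ket{02},\ket{20},\ket{11})$ is inferred from the diagonal entries of the matrix above, and $\omega_0 = 5$~GHz is an assumed qubit frequency (only $\kappa=-300$~MHz and $g=100$~MHz are quoted in the text).

```python
import numpy as np

# Illustrative parameters: kappa = -3g with the values quoted in the text
# (kappa = -300 MHz, g = 100 MHz); w0 = 5 GHz is an assumed qubit frequency.
w0, g, kappa = 5.0, 0.1, -0.3  # GHz
s2 = g * np.sqrt(2)

# The 6x6 Hamiltonian above, basis (|00>, |01>, |10>, |02>, |20>, |11>)
H = np.array([
    [0.0, 0.0, 0.0, 0.0,          0.0,          0.0],
    [0.0, w0,  g,   0.0,          0.0,          0.0],
    [0.0, g,   w0,  0.0,          0.0,          0.0],
    [0.0, 0.0, 0.0, 2*w0 + kappa, 0.0,          s2],
    [0.0, 0.0, 0.0, 0.0,          2*w0 + kappa, s2],
    [0.0, 0.0, 0.0, s2,           s2,           2*w0],
])

E = np.linalg.eigvalsh(H)  # ascending eigenvalues

# Diamond levels: the ground state, the two dressed single-excitation states
# at w0 -/+ g, and the repelled |11>-like state at the top (2*w0 + g).
ground, low, high, top = E[0], E[1], E[2], E[-1]
freqs = sorted([low - ground, high - ground, top - low, top - high])
spacing = np.diff(freqs)  # the four transition frequencies all differ by g
```

With these numbers the four diamond transitions come out evenly spaced by $g$, confirming the effective anharmonicity of $g$ for $\kappa=-3g$.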
To confirm that leakage to unwanted states is small we simulate the complete dynamics within a nine dimensional Hilbert space spanned by the first three levels of each qubit. The bare Hamiltonian is written
\begin{equation}
H_0 = \omega_{01}(a^\dagger a + b^\dagger b) + \frac{\kappa}{2} (a^\dagger a^\dagger a a + b^\dagger b^\dagger b b) + g (a^\dagger b + b^\dagger a)
\end{equation}
and the multi-tone excitation field is
\begin{equation}
\Omega(t) = V_{01}e^{i\omega_{01}t}+V_{02}e^{i\omega_{02}t}+V_{13}e^{i\omega_{13}t}+V_{23}e^{i\omega_{23}t}
\end{equation}
where the $V_{ij}$ are Dirac amplitudes. When acting on a single qubit this adds to the Hamiltonian a term
\begin{equation}
H_{drive} = \Omega(t) a^\dagger + \Omega(t)^*a
\end{equation}
to give the total Hamiltonian $H=H_0+H_{drive}$. The populations in the bare qubit basis are shown in Fig. S2, and Fig. S3 shows the populations of the diamond eigenstates constructed from the previous figure. Depending on the details of the circuit, the diamond eigenstates may be measured directly rather than obtained in post-processing.
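The bare Hamiltonian $H_0$ used in this simulation can be assembled from truncated ladder operators. The sketch below uses the same illustrative parameter values as before ($\omega_{01}=5$~GHz is an assumption) and checks that the low-lying spectrum of the nine-dimensional truncation reproduces the $6\times 6$ diamond spectrum.

```python
import numpy as np

# Three levels per transmon; same illustrative parameters as above
# (w01 = 5 GHz is an assumption; kappa = -300 MHz, g = 100 MHz).
d = 3
w01, kappa, g = 5.0, -0.3, 0.1  # GHz

lower = np.diag(np.sqrt(np.arange(1, d)), 1)  # truncated lowering operator
I = np.eye(d)
a = np.kron(lower, I)  # mode a acts on the first qubit
b = np.kron(I, lower)  # mode b acts on the second qubit
ad, bd = a.T, b.T      # real matrices, so the dagger is the transpose

H0 = (w01 * (ad @ a + bd @ b)
      + 0.5 * kappa * (ad @ ad @ a @ a + bd @ bd @ b @ b)
      + g * (ad @ b + bd @ a))

E0 = np.sort(np.linalg.eigvalsh(H0))
# The lowest six levels reproduce the 6x6 diamond spectrum; the remaining
# three (higher-excitation states) lie well above the diamond.
```

The three remaining eigenvalues correspond to states with three or four excitations and, for these parameters, sit several GHz above the top of the diamond.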
These results are obtained by driving the diamond system through one of its constituent qubits and naively assuming the matrix elements of $H_{drive}$ to be identical for all transitions. Since this is not strictly correct, the results deviate from the ideal Dirac dynamics more than is experimentally necessary: experiments with superconducting qubits routinely involve calibrating drive amplitudes for each transition on-chip. This makes detailed calculations of matrix elements unnecessary and promises better conformity with the ideal dynamics shown in the main text than demonstrated here.
\section*{Bell basis perspective on Dirac dynamics}
In previous works \cite{Suchowski2011,Svetitsky2014} it was shown that identifying the states of a four-level system with an entangled two-qubit 'Bell basis' allows the dynamics to be factored into local Bloch sphere rotations. Using the basis
\begin{equation}
\begin{array}{cc}
\ket{0}=\frac{\ket{00}+\ket{11}}{\sqrt{2}} & \ket{1}=\frac{\ket{01}+\ket{10}}{\sqrt{2}} \\
\\
\ket{2}=\frac{\ket{01}-\ket{10}}{\sqrt{2}} & \ket{3}=\frac{\ket{00}-\ket{11}}{\sqrt{2}}
\end{array}
\end{equation}
the Dirac Hamiltonian with $\vec{p}=p_{x}\hat{x}$ can be written $H=I\otimes(p_{x}\sigma_{x}+m\sigma_{y})$, describing resonant single-qubit Rabi oscillations with frequency $E=\sqrt{p^{2}+m^{2}}$.
This representation explains why Dirac couplings result in effective single-qubit dynamics per Fig. 2.
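This factorization can be verified numerically; in the quick check below, the values of $p_x$ and $m$ are arbitrary illustrative choices.

```python
import numpy as np

# H = I (x) (p_x * sigma_x + m * sigma_y) in the Bell basis; px and m are
# arbitrary illustrative values chosen so that sqrt(px**2 + m**2) = 1.
px, m = 0.6, 0.8
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
I2 = np.eye(2)

H = np.kron(I2, px * sx + m * sy)
E = np.linalg.eigvalsh(H)
# Two doubly degenerate levels at +/- sqrt(px**2 + m**2): resonant Rabi
# oscillations of the second qubit at frequency E = sqrt(p**2 + m**2).
```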
An electric field causes the Rabi vector of the second qubit to trace a path from the negative to the positive $\hat{x}$ axis, rotating the qubit's plane of precession. For a large electric field the plane of precession changes faster than the Rabi frequency, resulting in rotation around the $z$ axis. As can be seen from the Bell basis, a $z$ rotation on the second qubit directly couples the $\left\{\ket0,\:\ket1\right\}$ and $\left\{\ket2,\:\ket3\right\}$ manifolds, disrupting the adiabatic population transfer. The Bloch vector is most susceptible to this $z$ gate when the Rabi frequency nears its minimum of $m$, elucidating the mass dependence of the Schwinger formula.
\begin{figure*}
\caption{Level structure of coupled anharmonic oscillators. a) Bare level structure. b) Dressed states for $\kappa=-3g$.}
\label{fig:S1a}
\end{figure*}
\begin{figure*}
\caption{Populations of bare qubit states. Small leakage to unwanted states occurs at large drive amplitudes due to power broadening.}
\label{fig:S2}
\end{figure*}
\begin{figure*}
\caption{Populations of the dressed diamond eigenstates constructed from Fig. S2. With the appropriate circuitry these may be measured directly.}
\label{fig:S3}
\end{figure*}
\end{document} | arXiv |
Some elements of Peronist beliefs and tastes
Rafael Di Tella and Juan Dubra
Latin American Economic Review, volume 27, Article number 6 (2018)
We study the beliefs and values of Peronism. Instead of a comprehensive approach, we focus on three elements. First, we study beliefs and values about the economic system present in Peron's speeches during the period 1943–55. Second, given that these beliefs are non‐standard (for economists), we present two models formalizing some of the key aspects (for example, the idea that there is something more than a material exchange in labor relations). Third, we study survey data for the 1990s on the beliefs of Peronist and non-Peronist voters in Argentina and Democrat and Republican voters in the US. While income and education suggest that Peronists (in relative terms) look like the Democrats, their beliefs and values suggest that Peronists are the Argentine equivalent of the Republicans.
In a seminal study, Díaz Alejandro (1970) attributed Argentina's relative decline to the replacement of the export-oriented, market friendly policies of the early 1900s by populist, interventionist policies around the time of the great depression. While other authors have claimed that lower growth rates had already begun before the change in policies, Taylor (1994) has demonstrated that any "early retardation" accelerates after 1929, and again after 1950, with the onset of ineffective government intervention. In this account Argentina's relative decline during the 20th century can be attributed, broadly, to economically inefficient "populist" policies supplied by leaders who often exploit a mass of uneducated, poor voters. Peron and his followers play a prominent role in such narratives of Argentina's exceptional underperformance. Of course, there are some variations in the basic account. For example, it is often claimed that policymaking, even during relatively centrist administrations, was complicated enormously by the presence of a populist party demanding government intervention. But in the end, in this account (the "Diaz-Alejandro hypothesis", for short), the key problem is Argentina's populist tradition, which has fueled bad policies and political instability.
A troubling aspect of this account, however, is that it does not explain why voters find populist policies appealing. As stated, this narrative soon has to conclude that democracy is not a reasonable way to elect the country's leaders, and it is of some relevance that many Argentine conservatives ended up supporting military coups. Paradoxically, this narrative implicitly questions the very benefits that can be expected from free markets. The reason is that it maintains that the rational judgment of market participants cannot really be trusted (in political settings). Indeed, humans in this account must have a dual type of rationality: on the one hand they are able to make a reasonable use of information so that markets for goods and services are in fact quite efficient, but on the other hand they are unable to see that the leaders they elect are bad for them. Rationality in Diaz-Alejandro's account of democratic capitalism is a bit like the Cheshire cat of Alice in Wonderland: now you see it, now you don't.
We structure this paper around the basic Diaz-Alejandro hypothesis (i.e., the view that Argentina's relative decline is caused by the government's supply of bad policies), but complement it by providing a theory of the demand for such "bad" policies. The evidence we gather suggests two peculiarities of populist beliefs: they are often wrong (for example, they involve unrealistic expectations concerning the inflationary costs of printing money); and they often reveal a distrust of market outcomes (markets result, for example, in "unfair" prices and wages). An implication of such beliefs is that voters can demand policies that lead to a decline in output, even when this is fully anticipated (because these policies are expected to bring about more "fairness"). In brief, an alternative title for our paper is "The Diaz-Alejandro hypothesis with rational voters".
The school of thought associated with Diaz Alejandro has sometimes been interpreted as suggesting that markets are good for growth. Thus, a policy is deemed "bad" when it interferes with the market. A more reasonable position in our view classifies policies as "bad" only when they clearly lead to unsustainable macroeconomic imbalances. Beyond some extreme cases, it is often difficult to agree on what makes a policy clearly unsustainable. For example, it is simply not obvious that populism is necessarily associated with unsustainable declines in investment. Theoretically, it is possible that Peron's redistributive policies put enough money in the hands of workers to spur consumption. This, in turn, might provide a promising enough internal market to compensate investors for any political instability that populism might generate. And empirically, the evidence on the effect of populism on total investment is mixed, as we shall see (although there is some evidence of a negative effect when the focus is only private investment). Perhaps a more attractive version of the Diaz-Alejandro hypothesis assigns Argentina's relative decline to the bottlenecks in foreign trade (produced by excessive aversion to trade with other countries), the large fiscal imbalances (produced by an underestimation of the costs of inflation) and the instability that some of Peron's redistributive policies induced. In some cases, the negative effect of these policies was unanticipated by voters. But in other cases, agents understood the material costs, but were seeking outcomes that appeared more "fair" to them.
Economists have not made significant progress in understanding Latin American populism because they tend to find the interest group theory of policy quite compelling. In the standard account, bad policies are put in place by special interests and voters would get rid of them if only they cared to vote or were able to organize. Interestingly, however, voters do vote in large numbers (by and large, voting is compulsory in Latin America), so the empirical appeal of the interest group theory of policy formation, at least in its simplest form, appears low. A more promising approach accepts that populist policies are in fact appealing to (at least some group of) voters, perhaps because they do not fully understand their implications, as argued in Dornbusch and Edwards (1991). These authors point out several instances where policymakers in populist governments espouse views that would not be standard in economics.
Our analysis begins with a brief section on the historical and political background of Peronist policies (Sect. 2). It then contains three substantive parts. In the first, (Sect. 3), we use qualitative data from Peron's early speeches (1944–55) to provide some evidence on Peron's beliefs (i.e., positive descriptions of how the world works) and preferences (i.e., normative values describing how the world should work). These speeches suggest to us three simple but important points. First, Peron's policies were known to his voters (in contrast to later Peronist presidents, such as Carlos Menem in the 1990's, who was elected on a platform but changed it upon being elected). Second, what Peron is doing in the speeches, at least in part, is providing "meaning" by interpreting the evidence available in the light of (what we would call) a coherent model of the world. Although such "interpretation" is unusual in economic models, it is often discussed by scholars who study beliefs (and in "discourse analysis"). The third and final element in his speeches that we emphasize is that Peron gives a prominent role to the forces that determine income. In contrast to what the literature on varieties of capitalism has emphasized in terms of the origins of income (distinguishing between effort and luck), Perón emphasizes the role of others in determining (reducing) our income through exploitation. This emphasis results in a focus on actors (foreign countries and rich local elites, whose center of vital interests was Europe rather than Argentina), and on distinguishing the components of welfare: there are utility losses from being "exploited", which go beyond the material losses (losing one's dignity).
In the second substantive part (Sect. 4), we study Peronist beliefs after Perón's death and place them in comparative perspective by looking at data from the World Values Survey in the 1990s. Respondents that declare an intention to vote for Peronism are also those that are on low income and have low educational attainment (relative to the middle class: these surveys do not sample those at the top of the income distribution). This is consistent with our analysis of Perón's speeches of the 1944–55 period, which appear to be on the left side of the political spectrum, and with specific events of that period (the burning of the Jockey Club, the anti-American slogans, etc.). Our results suggest that most Argentine voters are on the left of the political spectrum (relative to voters in the US), but that, surprisingly, within Argentina, Peronist beliefs tend to be more on the right of the political spectrum relative to those of the opposition. In relative terms, Peronist beliefs in the 1990s appear to be similar to Republican beliefs. In other words, the opposition to Perón seems to have come from the conservatives while the opposition to the Peronists in the 1990s seems to have come from the ideological left (although in both periods the opposition seems to have been on higher income than the Peronists). Put differently, what appears to be exceptional in Argentina is that the middle class is ideologically to the left of those at the bottom of the income distribution.
In the final substantive section (Sect. 5) we develop a model to explain a low "demand for capitalism". If voters maximize something other than just their material payoff, then even with correct beliefs about how the world works they may demand bad policies (from the narrow point of view of maximizing income). A voter concerned with the fairness of outcomes is a case in point. Specifically, we assume that voters demand that firms behave kindly (and this must be true in some scenarios). When they do not, voters experience anger. Such anger can be expected to fall when selfish firms are punished. In Argentina firms are more likely to misbehave than in rich countries (perhaps because of low competition or because of low productivity), so the State must intervene ("regulate to humanize Capital"). Section 6 concludes.
Peron, interventionist policies and Argentine politics: background
Political instability and economic uncertainty coincide with the onset of Argentina's relative decline. In 1930, a military coup by a conservative group with fascist inclinations resulted in the first military government of the country. With the world crisis as context, ad hoc inward looking, import substitution policies were adopted. The succession of non-democratic governments that followed (seven) included episodes of serious violence and resulted in the presidency of Perón in 1946. From 1930 until the Menem administration of the 1990s, no democratic president was able to complete his term, with the exception of the first Perón government. This coincided with Argentina's economic woes. Indeed, Argentina's comparative economic performance (see Fig. 1 in Glaeser, Di Tella and Llach of this volume) reveals two periods where divergence appears to be present: the period leading to the crisis of the 1930s, when the series appears to begin to fall (with some exceptions), and the 1970s, another period of heavy political instability, when the decline appears to accelerate.
Fig. 1: Total investment over GDP. Source: Gerchunoff and Llach (1998), updated.
This suggests that, at least at this broad level of generality, there is some merit to the hypothesis that political instability and relative economic decline are positively correlated. Interestingly, the rate of investment during 1930–40 (the "infamous decade") appears low (9.1%), particularly when compared with the rate prevailing during the decade prior to the start of the First World War (19.3%), one of the periods where the government was in the hands of "elitist" governments and the economy was relatively open to international trade (see Gerchunoff and Llach 1998). Figure 1 reveals that investment over GDP rises with Peronism, with an increasingly larger role taken by public investment (whereas in the early years it is mainly private investment; still, it is a matter of debate whether the increased investment helped growth, since it is unclear if public investment was efficient) until the fiscal crisis of the early 1980s. A simple hypothesis suggested by the data is that political instability causes lower private investment, and that this is a cause of Argentina's relative decline.
Several authors have emphasized the role of investment in Argentina's relative decline. Díaz Alejandro (1970, 1988), for example, has focused on the difficulties in maintaining high levels of investment once the export-oriented, market friendly regime was replaced by the more interventionist regimes that follow the great depression. Taylor (1994) also emphasizes the role of the pre-1913 extremely high rates of capital accumulation, explaining how subsequent protectionist policies resulted in a high relative price of imported capital goods, and that this contributed to retarding capital accumulation (for evidence on the role of machinery investment in growth, see De Long and Summers 1991). A natural extension of this line of research is that political instability plays a similar role interfering with private investment and contributing to Argentina's decline. Of course, then, a key question is why these interventionist policies get implemented, and why political instability persists.
Several authors have emphasized the role of Peronism in Argentina's development. From General Perón's ascent to the Labor Secretariat in 1943 (with the Military Government of General Ramirez), he was the preeminent political figure of Argentina. Even after his death, policies have been defined with relation to the Peronist political legacy (see, for example, O'Donnell 1977 and Portantiero 1973). Several hypotheses have been advanced to explain the causes of Peronist support. A simple fact is that Perón adopted a series of policies that favored organized labor at a time when the country was shifting towards light manufacturing (see, for example, the analysis in Gerchunoff 1989). During this period, the share of output accounted for by industry increased significantly and labor became a central economic and political force in the country. Thus, Peronist pro-labor policies in a context of increasing importance of Labor go a long way in explaining its popular support, even if voters only had material concerns. Some authors estimate that the real wage for unskilled labor in the Buenos Aires area increased by 17% per year during this period. It is unclear how much of this increase was sustainable, however, even in the presence of economies of scale and higher profit margins, as the internal market expanded and import substitution became widespread (see Galiani and Somaini, this volume). It is worth pointing out that anti-export policies also contributed to the increase in real wages through lower prices of food (see Brambilla, Galiani and Porto, this volume).
A less obvious cause for Peronist support concerns labor reallocation. It appears that the changing economic structure of the country involved significant labor migration, and that this generated new political demands. Germani (1962), for example, has emphasized the emotional fragility of internal migrants (from the provinces) and the charismatic, paternal nature of Perón's leadership. He provides an estimate of 83,000 migrants per year to the greater Buenos Aires area for the period 1936–47, increasing thereafter. By 1957, Germani estimates a doubling of the population in the Buenos Aires metropolitan area (from 3.4 to 6.3 million), so this was a significant phenomenon. Besides policies that directly supported labor, a variety of social programs in different areas were put in place, ranging from increased access to free health care, to the creation of a comprehensive housing program and the establishment of a generous system of social security (for a good description see, for example, Gaggero and Garro 2009). There was also the real and symbolic role of the Eva Perón Foundation—a private entity run by Perón's wife—funded through contributions from private and public entities, and which distributed considerable amounts of social assistance (see Stawski 2005).
At the same time, institutional weaknesses contributed to the limited political response to the country's economic problems. Some have singled out specific aspects, such as electoral institutions giving preeminence to the party in the decision to re-elect politicians (see Jones et al. 2002). Others have emphasized the political institutions that allowed unexpected changes in economic policy (see, for example, Spiller and Tommasi 2005). Although electoral fraud preceded Perón, it may have lent some legitimacy to the abuses of the Peronist regime (see, for example, Gallo and Alston 2009). Naturally, the ability to protect the rights to property under weak institutions was limited, and there is the possibility that this, and unexpected changes in economic policy (or the risk of such changes), led to a weaker investment performance (see, for example, Adelman 1999, Cortes Conde 1997, and Gallo and Alston 2009). Consistent with this, there was less access to external capital after the great depression (see Taylor 1994). And foreign direct investment fell in importance, albeit from very high levels (Díaz Alejandro 1970 reports that foreigners' share of the stock of capital in 1927 was 34%, down from 48% prior to the First World War).
A somewhat different picture emerges from the period leading to the Peronist administration of the 1970s. The relatively closed economy of the 1960s experienced difficulties adjusting to economic expansions as increased imports often led to balance of payments crises and inflation. Against this background, and with the political proscription of Peronism, attempts at using wage and income policies to stabilize the economy were unsuccessful. There were several attempts to reduce wage pressure, typically by restricting trade unions (for example, the Ongania government imposed a wage freeze, attempted to increase working hours, limited labor strikes and suspended the legal status of several trade unions). Tensions soon fuelled the presence of left-wing elements, and fighting communism became a serious government concern. As riots erupted in Córdoba, left-wing terrorism became a political force with some legitimacy (given the lack of democracy) and a claim to membership in the Peronist "movement". There is some evidence that Perón himself encouraged this identification with the left. During the 1970s, kidnappings and assassinations reached their peak, as the terrorist organizations (the Marxist People's Revolutionary Army and the Montoneros—of Peronist extraction) clashed with the police and armed forces (see the data on the assassination of policemen in the province of Buenos Aires in Boruchowicz and Wagner 2013). Eventually, in the 1970s, with the terrorist organizations still active after Perón's return to the country's presidency, he broke with them in a dramatic speech, ejecting them from the Plaza de Mayo. Thus, in contrast to the early years, when Peronism arrived and launched a true workers' movement opposed to the Conservatives, during the 1970s at least part of the opposition to Perón seems to have come from the ideological left. The survey data reported later are consistent with this description.
In brief, it seems clear that Perón's arrival on the political scene in the 1940s coincided with the increased importance of labor in Argentina's economy, and a reduced importance of openness to foreign capital and trade as the global economy was affected by the war and the Great Depression. Accordingly, Perón's ideology reflected a degree of nationalism and faith in government intervention that would persist over time. The Peronist opposition, however, seems to have evolved from a traditional conservative position to a position that is on the left of the political spectrum.
Perón in his own words
Perón's political legacy has long been a matter of controversy. A natural hypothesis is that, given that he was a fascist sympathizer, his ideological legacy must simply be fascism. This would answer the question of how bad policies (e.g., excessively interventionist/expansionist) come to be implemented: Perón's authoritarian rule imposed such policies. For our purposes, the biggest problem with this argument is that such policies appear to be popular with the electorate, and they continued to be so even after Perón was deposed and the most egregious aspects of his authoritarian rule (such as indoctrination) were no longer active. Note also that some elements in Perón's ideology have a different origin: the idea that government policy should attempt to "humanize capital", for example, has conservative (and Christian) roots and is present in the European Christian democrats. Furthermore, Peronism seems to involve opinions about economic independence and policy interventions that are central in socialism (and in other, less authoritarian political forms).
It is also of some significance that Perón's political ideology was developing in the immediate aftermath of the First World War. Born in 1895, he was in his twenties when the communist revolution took over Russia. He was 28 years old at a time when the Weimar republic was struggling with war reparations, and foreigners became a convenient political scapegoat, together with bankers, Jews and speculators. Thus, it is perhaps unsurprising that attribution (particularly to external forces) plays a big role in his speeches. And he was 35 as the Great Depression affected the world economy and rich countries were starting to cope relying on public works programs and expanded government spending (in part linked to rearmament). Perhaps even more significantly, in 1935 one of the first actions of the newly created central bank was a bailout of the banking system at a large social cost (della Paolera and Taylor 2002). Thus, it must have been clear to him that large shocks could disrupt the macroeconomy to a very large extent, making individual effort often irrelevant in the determination of income.
The Peronist regime of the 1940s and 50s accompanied the economic changes that were implemented, first from the Labor Secretariat and then from the Presidency, with a powerful new rhetoric that gave workers a preeminent role in the formation of policy. Keynesian solutions were becoming known, in part through President Roosevelt's actions, and some of these ideas were making their way to Argentina. Rhetoric, of course, was only one element in a broad attempt to create support for the social and political changes that would sustain the redistribution of income at the core of Peronist policies. Other elements included a set of political rituals linked to mass mobilization, the emotional appeal of Evita and a clear attempt to influence people's perceptions and beliefs through propaganda. Although we study Perón's speeches, we note that this might be a relatively narrow focus, particularly given the discussion of these elements appearing in Plotkin (2003). Of course, a potentially important determinant of beliefs is the education system, and the Peronist regime heavily intervened in the design of the national curriculum and the public schools system (see, for example, Bernetti and Puiggrós 1993; Bianchi 1992; Escudé 1990). A broader study of political ideas in Argentina appears in Romero (1987).
Previous work in the field of discourse analysis by Sigal and Verón (2003) focused on Perón's speeches. They analyze several aspects of his discourses and put special emphasis on their political dimension. For example, Sigal and Verón put forward the interesting hypothesis that Perón actively constructs the notion that he "arrives" to the State from the "outside" (e.g., a life dedicated to the military) to provide unity/harmony to a divided country (during 1973–4, the main focus of their analysis), which is significant given some of the electoral decisions made at the time. In contrast to this work, we focus on the economic dimension of his early speeches.
It is possible to study Perón's ideas through his books, particularly those written in the context of the controversy with the American Ambassador (Perón 1946) or his first book following his exile (Perón 1958). Yet, the analysis of speeches is important for at least three reasons. First, we are not sure how many people read his books, while we know that participation in his rallies was massive. For example, an estimate supplied by the "La Razón" newspaper put attendance at a Perón rally on August 22nd, 1951 at two million people. Second, the subset of Perón's ideas included in his speeches presumably provides us with some sense of what he himself thought would be well received by his voters. Finally, an important part of Perón's ideas was communicated through speeches at that time; books became more important during exile.
The material we studied was contained in 62 speeches, delivered between October 15th, 1944 and May 1st, 1953. They include a few speeches during rallies (as reported in the media), some speeches during particular celebrations, as well as messages to congress and other legislative bodies. It should be noted that there is some variation in the content of the speeches, which can be traced back to the changing economic circumstances (such as those made around the 1951–2 crisis where Perón justifies a less expansionary stance).
Perón's speeches
The first striking point (to an economist) of his speeches is their low informational content. In contrast to what might be expected, they are not of the form: "I am informing the people of Argentina that we are facing a shock with the following characteristics, and here is what we are going to do about it." In other words, they are not predominantly exercises in the transmission of information. Rather, they are heavily interpreted narratives of what has happened in the past, and how the conclusions that we draw from looking at history can help us shape policy in the present. In brief, a key element of the speeches is that they are primarily centered on the reinterpretation of already available information. Also, scholars working on analysis of discourse would say he is engaged in the "production of meaning". In particular, such research is concerned with establishing the "source's relationship to the content" (related in this case to the source's status). Under the assumption that minds and memory are malleable in this way, an economist would have no problem modeling it as a (self-interested) activity of the politician. An example is Glaeser (2005), where politicians supply stories and voters fail to investigate their accuracy. Finally, the speeches can also be interpreted as trying to influence the system of values of the population. In this regard, Rokeach (1973) is an influential study of value systems and their impact on behavior (also focusing, in part, on the writings of major political figures). See also Converse (1964) and, for a recent review, Kinder (1998).
The second, and perhaps key, part of this "interpretation exercise" is that Perón assumes the role of a heroic whistleblower, denouncing a corrupt state of affairs where politicians are "bought" by one particular group in society (the economic and cultural elite, who are seduced by all things foreign) to enact policies against workers and the poor. It is a variation of the theme of Perón's "arrival" as an external player (as emphasized by Sigal and Verón, but with special significance for beliefs about the generation of income). One example is:
It can be seen that, not versed in the art of pretending, I have exposed the distressing situations that burdened my feelings as I absorbed the labyrinth of laws and decrees (…) which in a large number of cases restricted the rights of workers, or, if they recognized them, it would be to kill the last trace of the hope of justice. May 1st, 1945.
I have been accused of having agitated the conscience of the country's workers. Of having created a social problem where none existed before … instead of silencing the inequalities and social injustices, I have uncovered them so that we all could know where evil was and we could find the more convenient medicines…. The previous tactic consisted in faking a social welfare … with the exclusive aim of not disturbing the good digestion of the golden Bourgeoisie. May 1st, 1945.
Another characteristic of his speeches is the continuous attempt to reassure supporters that he has a coherent view of the world. Examples take place in several speeches, but the one on May 24th, 1950 is centered on explaining Perón's theories. He begins by reacting to accusations that his is not a coherent economic plan, stating that
It has been said that … the Justicialista movement lacks an economic theory. Nothing more untrue. We have a perfect economic theory. What happens is that we have not yet spelled it out because we did not want that the oligarchs, or the capitalist consortia that exploited the country through conscienceless and avaricious bosses, could, knowing our plan, stop our action … When we have been able to dominate these international monopolies or the forces of the anti-motherland, then we will explain our theory to the world. May 24th, 1950.
And he explains (in the same speech) some details
… old economic theory … was based on a principle called "hedonic". … what does it represent? The capitalist says "my capital is the basis of the economy because I am the one who promotes, pays and makes. As a consequence I produce 10, and don't produce less or more as in both cases I lose." But me, the sociologist, I tell him: "Yes sir, you produce 10, but here this man has to eat and he tells me that 10 is not enough, he needs 20". Then the capitalist replies to me "Ah, let him explode, let him eat with 10 because if I produce more of that I lose money."… That is when the hedonic principle stops being so naturally rational, least of all from the point of view of welfare, which is the basis of all organized communities. … we do not want an economy subordinated to capital, we want capital subordinated to the economy … If, after that, the capitalist is able to fill its coffer with gold, let him do it; we don't care; even better if he does. But we can't do that until the people is satisfied and happy and has the purchasing power needed to achieve a minimum of happiness, without which life is not worth living. May 24th, 1950.
We now turn to three aspects of Perón's speeches that lay the foundations for our model in Sect. 5: a description of the types of businesspeople, elaborations on the idea that "others" determine our income, and finally some ideas on what constitutes appropriate Government policy.
Types of Businesspeople
The "conspiracy" that Peron comes to uncover is relevant to workers because it identifies an influence on their income. This representation requires that capitalists, at least until Perón's "arrival", were unkind (inconsiderate or who made their money through corrupt means). The speeches include constant references to such "bad types" amongst business people.
People have been faced with the idea that a fateful lodge of demagogues was the ruling class of the country, its elite, and as such was made up by wise, rich and kind people. It has to be pointed out that the wise have rarely been rich and the rich have rarely been kind. October 15th, 1944.
In other words, those privileged by the capitalist regime are finished; those that had everything, that took the cow in the ship when they went to Europe to have coffee with milk. No, let's have them have coffee with milk, but with powder milk. It is not that bad for them. May 12th, 1950.
It used to be easy for capitalists: when there was a strike workers were put in jail, they were processed and they did not rise again. … Remember Vasena. … Workers confronted the situation but the result was several thousand men dead. The oligarchs were all home doing the "five o'clock tea". … It used to happen that a capitalist who was almost bankrupt was made to earn, with just a signature, two or three million pesos without him having the need to do more than wake up in the morning and ask over the phone if the matter was ready. In this way favors were being granted upon someone who perhaps was a shameless one. August 9th, 1950.
"Others" determine our income
With "bad types" amongst the capitalists, it was easier for Perón to press forward with the idea that the process where income was generated was under their influence. This matches well with the widespread belief that Argentina is a rich country and one has to find an explanation for why there is want amidst plenty (for a discussion of belief formation when natural resources are important, see Di Tella et al. 2010). Indeed, one part of his speeches can be reduced to arguments in support of the idea that instead of individual effort (internal to the individual) or luck (external but without intention), the relevant influence on income is an external force with human intention. It is "others" who are actively taking actions that lower the income of Argentinians. It is not a question of making a bigger effort at the individual level; nor a question of taking a collective stand to reduce the influence of natural elements (perhaps through insurance or a better selection of activities and crops). It is a question of actively opposing other actors that try to "exploit" Argentines (on the role of corruption perceptions in explaining the appeal of capitalism, see Di Tella and MacCulloch 2009).
There are numerous examples of this conception of the income generating process, and the support of the State in enforcing it, in Perón's speeches. One example is
The economic destiny of workers was exclusively in the hands of the bosses … and if workers organized a protest movement or adopted an attitude in defense of their rights, they were left out of the law and exposed to the bosses' response and the police repression. … A group of capitalists, characterized the most by its continued, bloody opposition to workers' vindications, has plotted an unthinkable maneuver to neutralize the steps that had been adopted to stop the rise in the cost of living … and counteract the effects of inflation. May 1st, 1945.
… we need arms, brains, capital. But capital that is humanized in its function, which puts the public's welfare before a greedy interest in individual profit. I express my strongest rejection to the God of unproductive and static gold, to the cold and calculating super capitalism that harbors in its metallic gutters Shylock's infamous sentiments. May 1st, 1947.
In the year 1943 our economy was in the hands of foreign capitalist consortia because, until 1943, those consortia were those that paid a vile price to producers, gathered, exported, transported and sold to foreign consumers the produce of Argentine work. It cannot be doubted that most of the profits went to such intermediation. March 5th, 1950.
There might remain some former exploiter of human labor, who cannot conceive an Argentine nation socially fair, … or some old lawyer of foreign companies who might yearn for the times of the Bembergs, when treason was also profitable… May 1st, 1950.
300 families in our country, for example, put together their capital and enslaved 17 million Argentines. August 9th, 1950.
We are in favor that a man might enrich himself working, but we oppose that he might do so defrauding or taking advantage of other people's weaknesses. We want (…) that each Argentine has prosperity and good fortune within reach, but we do not accept that to obtain them he would commit crimes against other Argentines or against the community that we all are a part of. March 5th, 1952.
On some occasions, as in the reference to industrialist Otto Bemberg above, Perón names specific members of the elite. In one case members of the elite are described as themselves guilty of exploiting other capitalists. The example is
The monopoly, be it called … Bunge y Born, Dreyfus, etc. … was the one doing the gathering … the poor producer received six pesos and this intermediary octopus received thirty or forty for what somebody else had produced … When this is organized properly, the small farmer will produce, transport, gather, sell; and the product will go exclusively to him and not for the "smart one", who constitutes a tumor that was placed in the middle. August 9th, 1950.
Yet in some of these same speeches he distinguishes between local and foreign capitalists and justifies the behavior of the former. This is often mentioned in the context of speeches with a strong nationalist component.
When I have said that there was excessive exploitation, I have not blamed our bosses, because I know full well that our bosses were themselves exploited from the other side (…) That is why we have bought the railroads and everything else concerning public services (…) May 12th, 1950.
Appropriate government policy
These descriptions of the state of affairs in Argentina at the time naturally lead to the justification of a set of interventionist policies adopted to address these problems. Interestingly, in these portions of his speeches, the announced policies are linked not only to the solution of the set of economic problems uncovered, but also to Argentine identity (i.e., the "type" of person who would implement these policies). There is a connection to identity in that there are (apparently discrete) categories of people that take certain actions, so that when these actions change, identity also changes, which appears inherently desirable (for a model of identity see Akerlof and Kranton 2005). It is as if people who are able to defy their exploiters and stand up for their rights (and cannot be fooled into accepting compromise solutions) are "true" Argentines.
The speeches provide several examples of the interventionist policies that match the needs created by Perón's description of the main problems faced by Argentina. These include,
We implement, in a loyal and sincere fashion, a social policy designed to give workers a human place in society, we treat him as a brother and as an Argentine. October 15th, 1944.
No man should earn less than what he needs to live. … We said that there is a line for life determined by the minimum essential wage, and those below that line were the submerged; and that in our country there could not be "submerged"; everyone had to be "emerged". October 21st, 1946.
If we have intervened in some (enterprises) it has been because we had to somehow (avoid) the constant outflow of national wealth. (…) not only we respect private activity, but we also help and protect it. The only thing we don't want is a return to the old age of monopolistic consortia of exploitation. We want that men work (…) as they see fit but we do not want that it takes place at the expense of the consumer or the producer. We want that he who produces wealth may place it without pressure or exploitation of any type. February 7th, 1950.
The Estatuto del Peón, might not be to the liking of some exploiters-without-conscience, (…) who have been upset at the possibility that I might defend with more enthusiasm the perfecting of the human race than that of Argentine bulls or dogs. March 5th, 1950.
One of the barriers to national unity was undoubtedly the injustices committed by the capitalist oligarchy exploiting workers with the complicity of the authorities … in charge of distributive justice…. A people with an immense majority of slaves cannot be free, just as a free people can never be subjugated. … I am not exaggerating when I say that in 1943 there were slaves in the Argentine Republic. May 1st, 1950.
Today, May 1st, the La Prensa newspaper … will be handed over to the workers … This newspaper, which exploited its workers and the poor during years, which was a refined instrument of all foreign and national exploitation, which represented the crudest form of treason to the motherland, will have to purge its sins serving the working people. May 1st, 1951.
The government is committed to enforcing price controls, even if that means hanging them all. … They have a right to earn, but they don't have a right to steal. May 1st, 1952.
This simplified overview of Perón's speeches suggests to us that an important component of Peronist beliefs is that others determine our welfare. This suggests two changes to the standard formulation in economics, where agents are assumed to derive income from individual effort or from luck (which is beyond anyone's control). The first is that other players can affect an individual's income (local elites, foreign countries). The second is that labor relations have a non-monetary dimension, which we interpret as an influence of fairness on people's welfare (and not just income). Given these beliefs, there is a role for government in ensuring that workers are treated with dignity ("humanize capital"), which we interpret as some reassurance that firms are behaving with some reasonable amount of concern for workers' well-being.
Peronism and the American democrats: differences in survey data on beliefs and values
Given Perón's continued influence on political and economic events even after the 1955 coup, it is of interest to provide at least some evidence on the later evolution of Peronist beliefs and values, and to place them in comparative perspective (for example, by comparing them to American beliefs as a benchmark). The approach we follow is to focus on a snapshot of the public's interpretation of Peronism at a later date. Unfortunately, continued survey data from different periods are unavailable. However, we have data on beliefs and voting pertaining to the 1990s from a comparative survey that contains data for the US and Argentina (and other countries). The 1990s was a period when both the US and Argentina were governed by politicians, Clinton and Menem, who were elected on platforms on the left of the political spectrum but who ended up implementing reforms more consistent with centrist/conservative values. Of course, there are some differences: in the case of the US this happened only after mid-term electoral losses and mainly involved welfare reforms and the dropping of some of the less popular initiatives such as healthcare reform, whereas in the case of Menem the departures were larger and made from the start of the term. They also involved a complex relationship with the labor movement, which was an important early supporter of Menem (see Murillo 2001, Levitsky 2003 and Etchemendy and Palermo 1998, for discussions; on policy reversals in Latin America during this period, see Stokes 2001).
Our interest in comparisons with the US comes from a hypothesis "explaining" Peronism as the Argentine version of the American democrats (given that they are supported by similar demographic and socio-economic groups). A similar point is often made with respect to Peronism and the British Labour Party. Cross-country survey data on people's opinion about elements of capitalism is available from the World Values Survey. Coordinated by Ronald Inglehart, the 1995–97 wave asks adults (older than 18) in over 50 countries several questions of interest. In the US, the data are obtained from a representative sample of individuals age 18 and older through face to face interviews. In Argentina, sampling was limited to the urbanized central portion of the country, where about 70% of the population is concentrated.Footnote 13
Importantly for our purposes, the survey contains data on (self-reported) voting, allowing us to derive measures of vote intention, or at least political sympathy, towards the main parties in the country, including Peronists. Thus, we first divide the sample in Argentina into two groups: between those that declare to vote for Peronists and those that declare to want to vote for other groups. The precise question asked is: "If there were a national election tomorrow, for which party on this list would you vote? Just call out the number on this card." Then a card with "1. Partido Justicialista, 2. Union Civica Radical, 3. Frepaso, 4. Modin and 7. Blank ballot" is shown. Peronists are those answering 1, while non-Peronists are those answering 2, 3 and 4. In the US, a similar procedure allows us to determine two subsamples: Republicans and democrats.
We then used a measure of income to divide the sample into two categories (rich and poor). The question asked was "Here is a scale of incomes. We would like to know in what group your household is, counting all wages, salaries, pensions and other incomes that come in. Just give the letter of the group your household falls into, before taxes and other deductions." Then, a scale with 10 groups, corresponding to the income deciles in the country, is shown (this scale is different in each country). We classify as poor those in the lowest five categories. Table 1 shows that 69% of Peronists, versus 59% of non-Peronists, report incomes that are in the lowest five categories (see Benoît and Dubra 2011, who show that it is not necessarily irrational for a majority of people to rank themselves in the bottom half of a distribution). In the US, among those admitting a preference for voting for a particular group, 42% of those who prefer the democrats declare to be in the lowest 5 deciles, while only 29% of Republicans find themselves there. This broadly corresponds to the idea that Peronists and democrats share a similar base of support (at least in the limited sense that they have more support amongst the poor than the opposition). Table 1 also shows results using educational attainment and reaches a similar conclusion.Footnote 14 These results echo the conclusion of a Peronist politician who declared upon looking at an electoral map, "progress compromises us, education kills us". In auxiliary tests (not reported) we tried self-reported social class and reached similar results: Peronists and democrats seem to represent similar groups in their societies (the poor and those with low educational attainment).Footnote 15
Table 1 The Education and Income of Peronists and democrats
Given our interest in the role of beliefs, it is relevant to see if these similarities extend to beliefs about the role of luck and other economic issues. Given that we do not take a position on the relative importance of each belief in determining Peronist ideology, we do not construct formal tests, and so our results are only illustrative. The classic belief concerns the role of luck (versus effort) in the generation of income. The question usually used to capture this belief is "Why, in your opinion, are there people in this country who live in need? Here are two opinions: Which comes closest to your view? 1. They are poor because of laziness and lack of will power, 2. They are poor because society treats them unfairly". The results are summarized in Table 2.
Table 2 The beliefs of Peronists and democrats: Luck vs. Effort
The main pattern is that the whole electorate in Argentina seems to be on the left of the political spectrum, as most people seem to believe that poverty is the result of luck (or that society treats them unfairly) rather than laziness. However, in relative terms the Peronists seem to exhibit a pattern closer to that of the Republicans instead of the democrats. Indeed, the biggest proportion of believers in laziness as a source of poverty takes place amongst Peronists and the Republicans. The ratio of believers in laziness (39%) to believers in an unfair society (61%) in the Peronist sub-sample is 0.64, whereas amongst non-Peronists it is 20–80%, for a ratio of 0.25. On the other hand, the percentage of believers in laziness (unfair society) amongst the democrats is 49% (51%, respectively), whereas amongst the Republicans it is much higher, 75–25%. Focusing on the ratios of laziness to unfairness, the democrats have a ratio of 0.96, whereas that for the Republicans is 3.
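The ratios reported above follow from a simple division of the Table 2 percentages; a minimal check (the dictionary below merely restates those figures, and the variable names are our own):

```python
# Percentages from Table 2: (believers in laziness, believers in an unfair society).
groups = {
    "Peronists": (39, 61),
    "non-Peronists": (20, 80),
    "democrats": (49, 51),
    "Republicans": (75, 25),
}

# Ratio of believers in laziness to believers in an unfair society.
ratios = {name: lazy / unfair for name, (lazy, unfair) in groups.items()}

for name, r in ratios.items():
    print(f"{name}: {r:.2f}")
# Peronists: 0.64, non-Peronists: 0.25, democrats: 0.96, Republicans: 3.00
```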
As another illustration, Table 2 considers the question "Generally speaking, would you say that this country is run by a few big interests looking out for themselves, or that it is run for the benefit of all the people?" with answers "1. Run by a few big interests, and 2. Run for all the people". Again, we find that the two groups in Argentina (Peronists and non-Peronists) tend to give the answer that is presumably on the left of the political spectrum (Run by a few big interests), but the relative position of Peronists in Argentina is more like the relative position of Republicans than of democrats.
Table 3 considers several beliefs that are relevant to understanding Peronist beliefs and values. One theme emerging from this table, when looking at the absolute level of the answers, is that Argentines tend to be more on the "left" of the political spectrum (consistent with Di Tella and MacCulloch 2009). More interesting, they all point in a similar direction in relative terms: the Peronists (relative to the opposition) tend to look like the Republicans (relative to the democrats). In all cases the ratios in Argentina and in the US are on the same side of 1. Take, for example, the idea that workers should follow instructions at work. We split answers into two groups: those answering "they should" on the one hand, and those answering either "it depends" or "they should be convinced first" on the other. The majority of Republican voters (77 versus 23% of them, or in a proportion of 3.35 to 1), perhaps not surprisingly, tend to answer that workers should follow instructions. Democrats have a similar position, but less intense (the proportion 58/42 is just under 1.4 to 1). So, in relative terms, Republicans are somewhat more likely to agree with this statement.
Table 3 Beliefs in Argentina and the US: Peronists look like Republicans
In Argentina most people disagree with this statement, as reflected by both Peronists and non-Peronists having ratios that are lower than one, consistent with respondents being on the left. However, the ratio for Peronists (0.81 = 45/55) is somewhat higher than that for non-Peronists (0.51 = 34/66), suggesting that in relative terms, Peronists are more likely to agree with the idea that workers should follow orders than non-Peronists, which is surprising given Peronism's affinity with labor causes, at least as detected in Perón's speeches.
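The "same side of 1" comparison for this question can be made explicit with a short sketch (the agree/disagree percentages are the ones quoted in the text; the variable names are our own):

```python
import math

# Agree/disagree ratios on "workers should follow instructions".
peronists = 45 / 55        # about 0.81
non_peronists = 34 / 66    # about 0.51
republicans = 77 / 23      # about 3.35
democrats = 58 / 42        # about 1.38

# Relative ratio within each country: the governing party's ratio
# divided by the opposition's ratio.
argentina = peronists / non_peronists
us = republicans / democrats

# Both relative ratios exceed 1, i.e., they lie on the same side of 1:
# relative to their opposition, Peronists lean the same way as Republicans.
print(argentina > 1 and us > 1)                 # True
print(math.log(argentina) * math.log(us) > 0)   # True
```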
The rest of Table 3 investigates a number of other beliefs and values appearing in Perón's speeches. For example, he discusses competition on his speech of March 5, 1952: "Progress and individual prosperity cannot be based rationally in the harming of others because that unleashes an egoist and merciless struggle, which cancels all cooperation, destroys solidarity and ends in dissociation". The beliefs covered in the Table include those related to the role of luck versus effort in the determination of income, and the role of others in affecting individual fates (already discussed), as well as those related to feminism (jobs for men), authoritarian views (respect for authority), materialism (less emphasis on money), honesty (acceptable to cheat), competition (competition is harmful) and economic organization (ownership of business). In all cases, the answers given by Peronist voters (relative to those given by the opposition) are similar to the answers given by Republicans (relative to the democrats).
In brief, the evidence from the 1990s suggests that the opposition to Peronism is on the ideological left, even though its supporters have higher incomes and educational attainment than the Peronists. If it is true that the opposition to Perón came from the conservatives, then it is plausible to conclude that Peronism has experienced less ideological change than the rest of the country.
A model of labor market exploitation based on altruistic preferences
The previous sections highlight the role of several elements that are non-standard in economic models. Two are of particular interest to us: the idea that there is something more to market transactions in the labor market than just the exchange of work for money, and the possibility of exploitation, connected to firm owners who do not care about the welfare of their workers. The speech of August 9, 1950 is typical. Note that in the part where Perón states "Workers confronted the situation but the result was several thousand men dead. The oligarchs were all home doing the 'five o'clock tea'.", he says "five o'clock tea" in English, which serves to stress the contrast between the fate of workers whose lives are in danger and that of employers who are oblivious to their predicament and more preoccupied with engaging in a social practice that is the norm in England. Accordingly, the model we develop is one where there is the possibility of worker exploitation by "unkind" elites, and Perón's punishment of these elites provides increases in worker total utility through an emotional (non-material) channel.
The model in this section is an adaptation of the model in Di Tella and Dubra (2013) to labor markets. It stresses the idea that a policy that may not be optimal under "standard" models (that ignore emotions), may become optimal if workers experience anger when they are exploited, and the government knows it. To make our point, we introduce emotions in the form of worker anger at perceptions of insufficient firm altruism (as in Levine, 1998 and Rotemberg, 2008) in the textbook version of Salop (1979).
There are n workers, each characterized by a parameter x, which can be interpreted in two ways: as a "preferred variety" or preferred workplace, representing either a taste for working in one industry over another or a cost of re-converting the worker's human capital to another industry; or as a location parameter ("how far away do I live from my workplace").
For each worker, his location is drawn from a uniform distribution on the circle of circumference 1. There are m evenly distributed firms along the circle (there are m firms, but we use b = 1/m as the relevant parameter measuring concentration); firms are of one of two types, altruistic or selfish. Workers can supply either one unit of labor, or 0; this binary choice is a simplification, which is in line with the indivisibilities postulated in Hansen (1985). Individuals' gross utility of not working is s; when they work, if they have to travel a distance x (or they are x away from their preferred job) and they receive a pay of w, their net surplus is w − tx − s (i.e., they have a transport cost of t per unit of distance traveled).
In addition to these material costs, the worker may become angry with the firm for which he works. There are several reasons why incorporating emotions in this setup makes sense. First, simple introspection tells us that we don't always do what is best from a narrowly defined "economic" perspective. Second, a large body of literature has shown in the laboratory that individuals don't always maximize the amount of money they receive (even when the choices don't involve effort), and that emotions play a significant role. This reaction has been modeled as a preference for fair outcomes, as in Levine (1998) and Rotemberg (2008), who show how the introduction of a reciprocal altruism term in the utility function can explain quite well the seemingly paradoxical evidence from ultimatum games (see also Fehr and Schmidt 1999 inter alia). Finally, a third motivation to include emotions in our model of the labor market is that Perón's speeches contain several direct references to the effect of Peronist policies on emotions. For example, he states:
What is the social economy? It is a change in the old system of exploitation, not like the communists want, but in a gentler form. The capitalist regime is an abuse of property. The communist solution is the suppression of property. We believe the solution is not the suppression of property but rather the suppression of the abuse of property. … We are not involved in a social ordering that will take the country into a fight, but rather to calmness. June 24th, 1948.
If a worker is angry, we must subtract from his utility a term λ(π + p − w), where p is the productivity of the worker in the firm, and π is the profit the firm obtains from the other workers. This term is just a "spite" term: when angry, the worker dislikes the firm making a profit, and he is angrier when he contributes to those profits. What triggers anger is that the individual rejects the hypothesis that the firm is altruistic.
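The worker's payoff can be sketched as follows (whether anger is triggered is taken as given at this point; the function name and the example numbers are ours, not part of the model):

```python
def worker_utility(w, x, s, t=1.0, angry=False, lam=0.0, p=0.0, pi=0.0):
    """Net surplus from working: wage minus travel cost minus outside option,
    with the spite term lam * (pi + p - w) subtracted when the worker is angry."""
    u = w - t * x - s
    if angry:
        # When angry, the worker dislikes the firm's profit: pi from the
        # other workers plus p - w from himself, scaled by lambda.
        u -= lam * (pi + (p - w))
    return u

# A worker at distance x = 0.1 with wage 1.0 and outside option s = 0.5:
print(worker_utility(w=1.0, x=0.1, s=0.5))  # 0.4
# The same worker, angry at a firm earning p - w = 0.5 from him (lambda = 0.5):
print(worker_utility(w=1.0, x=0.1, s=0.5, angry=True, lam=0.5, p=1.5))  # 0.15
```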
In this market, firms choose wage levels w (i.e., it is not a competitive market) and get in exchange a product of p per worker, so when total employment is E their profits are (p − w)E. If the firm is not altruistic, that is all there is in the firm's utility (utility = profits). If the firm is altruistic, its utility is profits plus a term that depends on the utility of the worker. The altruistic firm incurs a cost of α if worker utility is lower than a certain level (this level is exogenous to this model, but can come from learning, adaptation, history, etc.). We call this threshold τ; we will set it to be the utility the worker would obtain in a "fairly competitive" labor market (see below).
In what follows, and without loss of generality, we normalize t = 1 and all other parameters are just "normalized by t". This normalization is completely general. We also assume (without loss of generality) that the number of workers is n = 1.
We will analyze a signaling game, in which firms, when choosing a wage level, signal their type. An equilibrium in this setting is a triplet [e(w,x;μ),w(θ);μ(w)] where:
e(·) is an "employment" decision strategy (the same for all workers; we are looking at symmetric equilibria) mapping wage w, tastes x (or distance) and beliefs μ (about whether the firm is altruistic or not) into {0,1}, where e = 1 means "work" and e = 0 means "don't work";
w(·) is a function that maps types into wages (one wage for each type; the same function for all firms);
μ(·) is a function that maps wages into [0,1], such that μ(w) is a number that represents the probability that the worker assigns to the firm being altruistic.
e is optimal given x, w and μ; w is optimal given e (and other firms playing w); μ is consistent (it is derived from Bayes' rule whenever possible).
We will focus on equilibria in which beliefs are of the sort "I reject that the firm is altruistic if its wage w is such that w < w*" for some w* (which may be a target wage). We rule out, for example, equilibria in which the worker rejects that the firm is altruistic when the firm pays a wage w > w* (i.e., the worker comes to believe the firm is selfish even though it is paying a wage above the "target" wage, which would of course be unnatural). In standard signaling models, beliefs like these may still be part of an equilibrium, because wages w > w* are not observed in equilibrium, and so the consistency condition (that beliefs be derived from Bayes' rule) places no constraint on them.
In this section, we characterize the pooling equilibria in an oligopoly. Of course, there may be separating equilibria too. But we focus the analysis on pooling equilibria for four reasons.
The first is "analytic": we want to know whether the set of parameters for which there exists a pooling equilibrium shrinks as the number of firms decreases; since there is no anger in pooling equilibria, this would establish that the "chances" of anger appearing are larger when there is less competition.
The second reason for focusing on pooling equilibria is "historic": in Perón's speeches there is a reference to the possibility that capitalism works well in some circumstances (for example, there is a reference to this "calmness" in the speech of May 1st, 1945). This "benchmark" case, from which the local elites have departed, is represented as a pooling equilibrium.
The third is to avoid making choices that would need to be made, and that, however we resolved them, would leave some readers unsatisfied. Take, for example, the following. In a separating equilibrium, workers are angry at some firms; when they are, the optimal wage set by those firms is higher (than if workers are not angry); this leads to a larger material utility for workers. This leaves us with the conundrum that selfish firms give their employees a higher material utility, and yet workers are angry at them. This begs the question: are workers (in reality, not in the model) angry because the firm is selfish, or because the firm acts in ways that harm its employees? Put differently, would you be angry at somebody you know is nasty, but is temporarily pretending to be nice (not because he is trying to change, but just to avoid some punishment)? Psychological research has not yet answered this question in a satisfactory manner.
The final reason is tractability: in a separating equilibrium, when there are many firms, the patterns of combinations of firms become complicated (a selfish firm surrounded by two selfish firms, or by one selfish and one altruistic, or by two altruistic, etc.; similarly for an altruistic firm and its neighbors). In ex-ante terms, though, each firm does not know whether its neighbors will be of one kind or the other.
Pooling equilibria
Our first step is to find the necessary conditions under which a wage $w^{\circ}$ is part of a pooling equilibrium in which workers attain their target level of utility. Consider a firm that maximizes profits in a deviation from a pooling equilibrium with wage $w^{\circ}$ (we do not include a utility cost for the deviating firm, since we assume for the time being that the equilibrium is such that workers attain their target utility level τ). If the firm increases its wage, workers won't be angry. In that case, labor supply is given by the sum of all (unit) supplies of workers who are closer to the deviating firm than the two workers (one on each side) who are indifferent between working for the firm we are analyzing and working for its neighbor:
$$w - s - x = w^{ \circ } - s - (b - x) \Leftrightarrow S = 2x = b + w - w^{ \circ } .$$
Profits are then
$$(p - w)(b + w - w^{ \circ } ).$$
When the firm maximizes this expression, we obtain an optimal wage of
$$w = \frac{{p + w^{ \circ } - b}}{2}.$$
For the firm not to want to deviate from $w^{\circ}$, it must be the case that this optimal wage is lower than $w^{\circ}$, or equivalently
$$w = \frac{{p + w^{ \circ } - b}}{2} \le w^{ \circ} \Leftrightarrow p - b \le w^{ \circ } .$$
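The first-order condition behind this bound can be verified numerically; the following is a minimal sketch with illustrative, hypothetical parameter values (productivity p, firm spacing b, candidate pooling wage w_eq), not values taken from the paper.

```python
# Numeric check of the no-anger upward deviation.
p, b, w_eq = 10.0, 2.0, 9.0  # illustrative, hypothetical values

def profit(w):
    # deviation profit without anger: (p - w) * S, with S = b + w - w_eq
    return (p - w) * (b + w - w_eq)

# closed-form optimum from the first-order condition
w_star = (p + w_eq - b) / 2

# a grid search over deviation wages should agree with the closed form
grid = [w_eq - 1 + i * 0.0001 for i in range(30001)]
w_best = max(grid, key=profit)
assert abs(w_best - w_star) < 1e-3

# condition (1): no profitable upward deviation iff w_star <= w_eq, i.e. p - b <= w_eq
assert (w_star <= w_eq) == (p - b <= w_eq)
```

With these values the optimal deviation wage is 8.5, below the candidate pooling wage of 9, so condition (1) holds and no firm deviates upward.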
In other words, if the oligopoly wage is too low, the firms are better off increasing their wage, and so workers will not punish them (by getting angry). If the firm lowers its wage, workers become angry, and labor supply is given by the condition that
$$w - s - x - \lambda (p - w) = w^{ \circ } - s - \left( {b - x} \right) \Leftrightarrow S = b + (1 + \lambda )w - \lambda p - w^{ \circ } .$$
In that case, profits are
$$(p - w)\left( {b + \left( {1 + \lambda } \right)w - \lambda p - w^{ \circ } } \right).$$
The optimal wage in this deviation, and the associated profits, are
$$w = \frac{{w^{ \circ } - b + p\left( {1 + 2\lambda } \right)}}{{2\left( {1 + \lambda } \right)}} \Rightarrow \pi = \frac{{\left( {b - w^{ \circ } + p} \right)^{2} }}{4(1 + \lambda )}$$
For the firm not to want to deviate, it must be the case that profits in the equilibrium are larger than these deviation profits. Formally,
$$\left( {p - w^{ \circ } } \right)b \ge \frac{{\left( {b - w^{ \circ } + p} \right)^{2} }}{4(1 + \lambda )} \Rightarrow w^{ \circ } \le p - b\left[ {1 + 2\lambda - 2\,\sqrt {\lambda (1 + \lambda )} } \right].$$
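At the threshold wage of condition (2) the firm is exactly indifferent between the pooling wage and the best anger-triggering deviation. This can be checked numerically; the sketch below uses illustrative, hypothetical parameter values.

```python
import math

# Illustrative, hypothetical parameter values.
p, b, lam = 10.0, 2.0, 0.5

# threshold wage from condition (2): w_eq = p - b * [1 + 2λ - 2√(λ(1+λ))]
k = 1 + 2 * lam - 2 * math.sqrt(lam * (1 + lam))
w_eq = p - b * k

eq_profit = (p - w_eq) * b                           # profit at the pooling wage
dev_profit = (b - w_eq + p) ** 2 / (4 * (1 + lam))   # best anger-triggering deviation

# at the threshold the firm is exactly indifferent...
assert abs(eq_profit - dev_profit) < 1e-9

# ...and any higher pooling wage makes the downward deviation strictly profitable
w_hi = w_eq + 0.1
assert (p - w_hi) * b < (b - w_hi + p) ** 2 / (4 * (1 + lam))
```

The equality at the threshold follows from the algebra of condition (2): with $z = p - w^{\circ} = bk$, one has $(b+z)^2 = 4(1+\lambda)bz$ exactly at $k = 1+2\lambda-2\sqrt{\lambda(1+\lambda)}$.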
Notice that when λ = 0 (the standard Salop case), we obtain from (1) and (2)
$$w^{ \circ } = p - b.$$
Equations (1) and (2) provide two constraints on the equilibrium wage $w^{\circ}$. The third and final restriction is that, for a given τ, as we decrease the number of firms, the wage must increase to achieve the target utility. Worker utility (in a pooling equilibrium with wage $w^{\circ}$) is the number of firms, 1/b, times the total utility of workers hired by each firm:
$$\frac{2}{b} \int_{0}^{\frac{b}{2}} {(w^{ \circ } - s - x)} {\text{d}}x = w^{ \circ } - s - \frac{b}{4}$$
This utility is larger than τ if and only if
$$w^{ \circ } - s - \frac{b}{4} \ge \tau \Leftrightarrow w^{ \circ } \ge \tau + s + \frac{b}{4}$$
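The average-utility expression above can be verified by direct numerical integration; a minimal sketch with illustrative, hypothetical values for the pooling wage, s and b:

```python
# Midpoint-rule check of the average-utility integral.
w_eq, s, b = 9.0, 1.0, 2.0  # illustrative, hypothetical values

n_steps = 100000
dx = (b / 2) / n_steps
# (2/b) * integral of (w_eq - s - x) over x in [0, b/2]
integral = sum((w_eq - s - (i + 0.5) * dx) * dx for i in range(n_steps))
avg_utility = (2 / b) * integral

# should match the closed form w_eq - s - b/4
assert abs(avg_utility - (w_eq - s - b / 4)) < 1e-6
```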
We now present one important result: as competition decreases (enough), anger is more likely. The following proposition shows that as competition decreases, a pooling equilibrium is less likely. But since pooling equilibria have no anger, and separating equilibria do (in expected terms there will be some selfish firms), when pooling equilibria disappear, anger appears.
Proposition 1
There is a critical n* such that for all n < n′ ≤ n*, the set of pooling wages is smaller when there are n firms than when there are n′. That is, as competition decreases, anger is more likely.
Define b* as the value of b at which Eqs. (3) and (1) hold with equality simultaneously:
$$\tau + s + \frac{{b^{*} }}{4} = p - b^{*} \Leftrightarrow b^{*} = \frac{4}{5}\left( {p - s - \tau } \right).$$
Let n* = 1/b*. For b > b* (that is, for n < n*), Eq. (1) is not binding, and the set of equilibrium wages is the interval between the loci of Eqs. (3) and (2). Since the slope of (3) is positive and the slope of (2) is negative, this interval shrinks as b increases, that is, as n decreases.□
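How the set of pooling wages varies with b can be traced numerically. The sketch below uses illustrative, hypothetical parameter values; it computes the width of the interval defined by conditions (1)–(3) and shows that the interval widens with b while (3) is slack and shrinks once b exceeds b*.

```python
import math

# Illustrative, hypothetical parameter values.
p, s, tau, lam = 10.0, 1.0, 2.0, 0.5

k = 1 + 2 * lam - 2 * math.sqrt(lam * (1 + lam))
b_star = (4 / 5) * (p - s - tau)   # point where Eqs. (1) and (3) cross

def width(b):
    # pooling wages must satisfy (1) and (3) from below and (2) from above
    lower = max(p - b, tau + s + b / 4)
    upper = p - b * k
    return max(0.0, upper - lower)

# below b* the interval widens with b; beyond b* it shrinks as b grows
assert width(0.5 * b_star) < width(0.9 * b_star)
assert width(1.1 * b_star) > width(1.5 * b_star)
```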
The plot below illustrates the three constraints on $w^{\circ}$ imposed by Eqs. 1–3. The wage $w^{\circ}$ must lie between the two loci with negative slopes (the flatter one is Eq. 2 and the steeper, Eq. 1), which arise from the firms' incentives not to deviate. The wage must also lie above the positively sloped constraint (Eq. 3), which arises from the condition that fewer firms imply higher wages if workers are to obtain their target utilities.
Next, we present another relevant result, connecting the productivity of firms, the rise in anger and the possible subsequent regulation. This result provides a potential explanation for why people in less developed countries do not like capitalism. If productivity is lower and more volatile in LDCs, that would explain why capitalists and capitalism are not popular.
When productivity decreases, or when it becomes more volatile, anger is more likely.
When productivity decreases, the two loci of Eqs. (2) and (1) move downwards by the amount of the decrease in productivity. Since Eq. (3) is unchanged, the set of pooling equilibrium wages shrinks.
A larger volatility in productivities makes it more likely that a low (pooling breaking) cost will happen, and then the selfish firms will reveal themselves as such, and anger will arise. □
An interesting point to note is that higher variability in productivity in LDCs could be the consequence of higher regulations to begin with: firms in sectors with a comparative advantage could have higher worker productivities, while firms in protected sectors lower productivities (even considering government regulations to protect them). In a sense, then, Peronism, by introducing distortions, generates anger towards capitalists and perpetuates the beliefs that Peronism fostered.
The next result illustrates another obvious feature of the rise in anger: when for some exogenous reason workers become "captive" of one particular firm, anger is more likely. The mechanism is as one would expect: when the workers' elasticity of labor supply decreases, local monopsonies have an incentive to lower wages. The temptation may be large enough that an anger-triggering wage decrease becomes profitable. In countries with concentrated industries, like Argentina, and with little inter-industry mobility, workers have little mobility, and so the elasticity of supply is lower.
We model this increase in captivity by changing the cost of re-converting to another industry, while keeping rivals' wages fixed. The reason for this assumption is simple: if it is suddenly harder for workers employed in firm i to work in firm i − 1 or i + 1, those firms will keep their wages fixed: if they did not wish to attract the marginal worker before the change in re-conversion costs, they do not want to after, so there is no incentive to raise wages; and if firm i − 1 did not want to lower its wage before the change in costs, it does not want to do so after, since the incentives of the marginal worker working for it have not changed. As will become transparent in the proof, an equivalent way of modeling this is assuming that the two neighbors of the firm being analyzed move farther away, as if there had been a decrease in the number of firms.
Assume that, for a given parameter configuration, there is a pooling equilibrium with a wage of $w^{\circ}$. If the cost of re-converting to firms i − 1 or i + 1 increases from 1 to t > 1, but the cost to firm i remains constant, the firm's incentives to decrease its wage increase. There is a threshold t* such that if t ≥ t*, firm i lowers its wage and workers become angry.
When the cost of converting to firms i − 1 and i + 1 increases to t, the supply faced by firm i (after an anger-triggering decrease in wage) and its profits, are
$$S = 2\frac{{w - w^{ \circ } + \left( {w - p} \right)\lambda + bt}}{t + 1} \Rightarrow \pi = (p - w)2\frac{{w - w^{ \circ } + \left( {w - p} \right)\lambda + bt}}{t + 1}$$
and the optimal wage and profit are
$$w = \frac{{w^{ \circ } + p\left( {1 + 2\lambda } \right) - bt}}{{2\left( {\lambda + 1} \right)}} \Rightarrow \pi = \frac{{\left( {p - w^{ \circ } + bt} \right)^{2} }}{{2\left( {\lambda + 1} \right)\left( {t + 1} \right)}}.$$
Notice that in the equation for the optimal wage, an increase in t is equivalent to an increase in b: a fall in the number of firms. For large enough t, these profits exceed the oligopoly profit, and the firm lowers its wage, causing anger.□
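The threshold t* in the proposition can be located numerically. The sketch below uses illustrative, hypothetical parameter values (w_eq is a candidate pooling wage consistent with conditions (1) and (2)); it checks that the deviation is unprofitable at t = 1, becomes profitable for larger t, and bisects for the threshold.

```python
# Illustrative, hypothetical parameters; w_eq is a candidate pooling wage.
p, b, lam, w_eq = 10.0, 2.0, 0.5, 8.5

pool_profit = (p - w_eq) * b  # profit at the pooling wage

def dev_profit(t):
    # best anger-triggering deviation profit when re-conversion to the
    # neighbors costs t per unit of distance
    return (p - w_eq + b * t) ** 2 / (2 * (lam + 1) * (t + 1))

assert dev_profit(1.0) < pool_profit   # at t = 1 the pooling wage survives
assert dev_profit(2.0) > pool_profit   # for t large enough the firm deviates

# bisect for the threshold t* at which the deviation becomes profitable
lo, hi = 1.0, 2.0
for _ in range(60):
    mid = (lo + hi) / 2
    if dev_profit(mid) < pool_profit:
        lo = mid
    else:
        hi = mid
t_star = (lo + hi) / 2
```

Note that at t = 1 the deviation profit reduces to the earlier expression $(b - w^{\circ} + p)^2/(4(1+\lambda))$, as it should.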
In the above proposition we have assumed that workers continue to make inferences based on the equilibrium prior to the shock. Although one could argue that a new equilibrium (one with fewer firms or with higher t) should be the benchmark, we believe that keeping the old equilibrium beliefs is also plausible. In addition, the case of fewer firms also leads to more anger, as established by Proposition 1.
The previous proposition may be particularly relevant for the rise of Peronism and Peronist beliefs. In a time of rising speed of technological change, the cost of re-converting to other industries also rises. Hence, we may view the ascent of Perón as a consequence of the increasing exploitation by firms that had gained more power over their workers.
Any wage $w^{\circ}$ in the range determined by Eqs. (1) and (2) can be part of a pooling equilibrium if we choose τ or α appropriately. Note that if the firm is altruistic and it lowers its wage enough, there could be a utility cost of providing workers with a very low level of utility. Since we found necessary conditions, we focused only on the incentives of the selfish firm. When we want to build an equilibrium with a wage $w^{\circ}$ within the range we have just identified, we need to take into account this utility cost for the altruistic firm. But choosing τ or α low enough, any one of these wages is part of an equilibrium. We do not elaborate, because the construction is simple.
A brief discussion of policies in this model
The model above describes a pooling equilibrium in an oligopoly without anger. Although workers are not angry in this equilibrium, anger can arise if for whatever reason the pooling equilibrium is broken. In particular, the scenario we have in mind is that the arrival of Perón coincided with a break of the pooling equilibrium that led to a separating equilibrium, and with it the rise in anger.
In this model there are three channels through which regulation (setting minimum wages and making a transfer to the firm) affects welfare. First, there is the standard channel: a minimum wage larger than market wages, but still below productivity, increases total welfare by attracting workers to the firm (to produce something worth p at a cost in terms of lost leisure and transportation of less than p). A second, quite direct and simple channel is through the reduction in anger: since an increase in wages lowers firms' profits, and total anger depends on the size of profits, a rise in wages reduces anger and increases welfare. Finally, any channel that reduces anger (whether it increases wages or not) induces workers to start working, and that further increases welfare. The second channel does not depend on individuals changing behavior; this third channel arises because workers re-optimize. Imagine, for example, a policy that keeps wages at their pre-policy levels, but "expropriates" the profits from the firm (through a fine, for example). In that case, in the standard model, welfare would be unchanged. In the current model, welfare increases for two reasons: first, each worker who was employed is happier; second, some who were not working will now enter the workforce and become available to the fined firm.
Intuition and some simple calculations show that in this model the appeal of fines to the firms and other "populist" policies increases relative to their appeal in a setting where anger plays no role (that is, λ = 0). To illustrate, imagine that a policy with wage w and transfer T > 0 to the firm is slightly better in terms of total welfare (in a standard model with no anger) than the policy (w, T = 0). In the model with anger, when workers are angry, the second policy that "beats on the firm" is preferred, since it reduces the amount of anger. This is an example of a policy that looks bad in a standard model (a bad "populist" policy), but that is potentially welfare enhancing when emotions are taken into account. Although we don't claim that all of the bad Argentine policies are driven by attention to emotions, we believe that there is at least some truth to the idea that policies that are bad for long run material growth may be optimal when workers (or consumers more generally) are angry at certain business sectors.
A school of thought exemplified by Díaz Alejandro (1970, 1988) suggests that Argentina's relative decline is associated with the abandonment of laissez faire in favor of interventionist policies (see also Cortes Conde 1997 and Taylor 1994). Since voters actively demand these policies, a paradox in this explanation is that individuals act rationally when it comes to goods markets, but irrationally when it comes to voting. In this article we describe a set of beliefs and values that makes Perón and his followers desire these policies, sometimes because of an incomplete understanding of their consequences, and sometimes because their objective exceeds the immediate material outcome (for example, they demand "fairness" in labor relations). In the latter case, voters are rational and demand "bad policies" (from the narrow perspective of maximizing material payoffs).
It is worth noting that a central observation in Argentina's relative decline is that it was accompanied by a strong reduction in private investment: from the formidable rates of capital accumulation pre-1913 financed primarily by foreigners to the dismal later performance. Díaz Alejandro (1970) and Taylor (1994) have emphasized the low savings rate and the high relative price of capital goods pre 1960. Naturally, it is possible that the decline in investment is connected to the country's populist tradition, which helped spread interventionist policies and fueled political instability. Argentina's relative decline is visible in the 1930s and appears to accelerate in the 1970s. These two periods coincide with political instability: 1930 is the year of the first of several military coups and marks the beginning of the "infamous" decade that would set the stage for the first Perón administration; while the 1970s is marked by the armed conflict involving left wing guerrillas and the military (and paramilitary) forces which led to the military coup of 1976. Indeed, following Perón's ascent to the labor secretary in 1943, Peronism has been the preeminent political force in the country, leading many to assume that no government could succeed without its explicit support. One reason for its enduring legacy is that Perón's more interventionist policies were in tune with the times: after the 1930s, the increased presence of the State in the economy was the norm, both in Argentina and in other countries.
But there are other factors that have made Peronist policies attractive to voters for such a long period of time, even if they have contributed to its relative material decline. In this paper we focus on three elements that help us throw light on the nature of Peronist policies and their enduring significance. First, we study beliefs and values about the economic system present in Perón's speeches during the period 1943–55. We emphasize that Perón is concerned with the income generating process, and note that Perón insists on the role of "others" and the possibility of exploitation. Indeed, whereas economists have emphasized the role of luck versus individual effort in the determination of income and how beliefs about their relative impact can affect the economic system (see, for example, Piketty 1995), it seems that Perón is focused on the influence of actors (elites, foreigners) and how they can willfully change the income of Argentines (as in Di Tella and MacCulloch 2009). This provides one possible explanation why the process of policymaking might be less a rational learning process, such as the one described in Buera et al. (2011), but instead an attempt to reveal intentions (which by their very nature are hard to verify) and a search for culprits. There are also a large number of references to the idea that labor relations can have non-monetary dimensions, and the speeches connect exploitation to this "non-material" dimension. This (trivially) explains why markets that are interpreted (and regulated) in this way may perform poorly (from a material standpoint).
Second, we study survey data for the 1990s on the beliefs of Peronist and non-Peronist voters in Argentina and Democrat and Republican voters in the US. While Peronists have low income and education relative to the opposition (so that they look like the US Democrats), their beliefs and values suggest that Peronists are the Argentine equivalent of the Republicans. For example, whereas all respondents in Argentina tend to believe that the poor are unlucky rather than lazy, Peronists (just like Republicans in the US) are somewhat more inclined than the opposition (e.g., non-Peronists) to believe that the poor are lazy. In other words, while the opposition to Perón during 1943–55 came from the conservatives, the opposition to Peronism in the 1990s comes from the left of the ideological spectrum. It is worth reiterating that in both periods, the Peronists seem to have lower income and educational achievement than the opposition. This suggests, at the very least, that the Peronists are changing less in terms of political ideology than the opposition.
Finally, given that the meaning and beliefs conveyed by Perón in his speeches are non-standard (for economists), we present a model formalizing the possibility that they are sub-optimal from a narrow material perspective, but that they may be associated with improved well-being (for example, they reduce anger at aspects of economic organization). In particular, we present a formal model of "exploitation" in the labor market where agents derive pleasure from treating well (badly) those that have behaved well (badly) towards them. Firms are of two types: one is a standard firm which might "exploit" the worker by paying him/her the minimum possible wage, whereas the other type "cares" for the worker. Even with few "altruistic" firms, the equilibrium might involve no exploitation, as long as there is a sufficient amount of competition. With monopsony power, the "good" equilibria break down and there is scope for regulation that generates first order welfare gains (beyond Harberger triangles). We note that a firm might be exploiting workers even if it is paying the same wage as other firms, as long as workers believe this firm is doing it out of "unkindness" (formalized as reciprocal altruism).
It would still be important to explain the adherence to these policies in the late 1960s, when the costs were becoming increasingly clear. One possible answer is that there was insufficient trust between the private and public sectors to exchange vital information about Pareto improving policies. Di Tella and Dubra (2013) study how legitimacy of the private sector can positively influence policymaking through the exchange of information.
They define populism as involving a belief in "no constraints": The risks of deficit finance emphasized in traditional thinking are portrayed as exaggerated or altogether unfounded. According to populist policymakers, (monetary) expansion is not inflationary (if there is no devaluation) because spare capacity and decreasing long-run costs contain cost pressures and there is always room to squeeze profit margins by price controls. (Page 9.) This can be contrasted with other definitions of populism emphasizing the connection to socialism (e.g., Coniff 1982) or the one in Drake (1982), which focuses on political mobilizations, the emphasis on rhetoric and the use of symbols that "inspire" voters. It is interesting to note that those in charge of economic policy in Argentina were often less heterodox (as documented, for example, in de Pablo 1989).
One of the Spanish words for "traitor" is "vendepatrias" (literally "seller of the motherland"). Acario Cotapos, a Chilean artist, once commented on the possibility of selling the motherland, adding "yes, and let's buy something smaller, but closer to Paris". Betrayal by the oligarchy during the decade prior to Perón's first government is emphasized, for example, in Torres (1973) and Hernández Arregui (1973).
Indeed, a small literature on the subject has claimed that Peronism is the local version of the American Democrats or the British Labour Party. However, we can investigate the beliefs of these Peronist voters with respect to the origins of income (e.g., luck vs effort) and compare them with those of American voters.
For an alternative view of the investment performance, see Taylor (1998).
There is, of course, a large literature on Argentina's economic performance and on the role played by Peronism which is in no way summarized or reviewed in the short paragraphs offered here as context for the relatively narrow set of points we emphasize. For a description of economic policies under the 1946–55 Perón government, see Gerchunoff (1989). See also Díaz Alejandro (1970), Cortes Conde (1997), Waisman (1987), Halperin Donghi (1994), Gerchunoff and Llach (1998), inter alia.
See Murmis and Portantiero (1971). On the role of the support of socialist trade unions, see Torre (1989). See also Horowitz (1990), Di Tella (2003) and Torre (1990), as well as O'Donnell (1977) and the contributions collected in Brennan (1998) and Miguens and Turner (1988).
Saiegh (2013) makes the reasonable point that, even during the early market-friendly phase following the passing of the liberal constitution in 1853/60, the security of some rights to property (for example, on public debt) depended on political considerations such as the extent of partisan control over the legislature.
For example, while in exile in Madrid, Perón appears to have designated John William Cooke, a man who argued for "armed struggle" based on the Cuban model, as his main representative in the country. There is ample evidence of the armed group's identification with Perón (see Baschetti 2004).
One difference with fascism, for example, is that trade union leaders were closer (more loyal) to members of the union than to the government (perhaps in spite of Perón's wishes). Also, there were attempts at constructing "Peronism without Perón" and instances of trade union leaders who were perceived to be quite independent of Perón (leading to the extreme view that Perón himself was involved in the killing of trade union leader Vandor). And, most importantly, large increases in the Labor share of GDP took place under Peronist administrations (for historical evidence and a comparison with Australia, see Gerchunoff and Fajgelbaum 2006). See also Germani (1962) and Lewis (1980) for interesting discussions.
Federico Pinedo and Luis Duhau, together with Raul Prebisch, put in place the Plan de Acción Económica Nacional in 1933. They were influential in affecting foreign trade and in the creation of the Argentine Central Bank in 1935. Della Paolera and Taylor (1999) describe heterodox monetary policy after 1929, the change in beliefs and expectations following the shift in monetary regime and the relatively mild economic depression.
There are several interesting cultural aspects of Peronism that we do not discuss, including the focus on one date (October 17th), when Peronism "starts". For a discussion and several of the key details of the mass mobilization that took place during October 17th, 1945, see James (1988).
Within this region, 200 sampling points were selected, with approximately five individuals being interviewed in each sampling point through multi-stage probability sampling. Regions include the nation's capital, the greater Buenos Aires area, Córdoba, Rosario, Mendoza and Tucumán.
The question asks "What is the highest educational level that you have attained?" and it provides as possible answers the (functional equivalent for each society) of "1. No formal education, 2. Incomplete primary school, 3. Complete primary school, 4. Incomplete secondary school: technical/vocational type, 5. Complete secondary school: technical/vocational type, 6. Incomplete secondary: university-preparatory type, 7. Complete secondary: university-preparatory type, 8. Some university-level education, without degree, 9. University-level education, with degree".
The question used reads "People sometimes describe themselves as belonging to the working class, the middle class, or the upper or lower class. Would you describe yourself as belonging to the: 1. Upper class, 2. Upper middle class, 3. Lower middle class, 4. Working class, 5. Lower class".
We refer the interested reader to Di Tella and Dubra (2014) for an analysis of the separating equilibria. Under certain parameter conditions (for example, when skills are not easily transferred in going from one firm to another), the oligopoly results in a series of local monopsonies. The discussion of policies in this section refers to such a situation.
Adelman J (1999) Republic of capital. Buenos Aires and the legal transformation of the Atlantic World. Stanford University Press, Stanford
Akerlof GA, Kranton RE (2005) Identity and the economics of organizations. J Econ Perspect 19(1):9–32
Baschetti R (2004) Documentos 1970–1973. Vol. I. De la guerrilla peronista al gobierno popular. La Plata. De la Campana, p 319
Benoît J-P, Dubra J (2011) Apparent overconfidence. Econometrica 79(5):1591–1625
Bernetti JL, Puiggrós A (1993) Peronismo: Cultura política y Educación (1945–1955). In: Puiggrós Adriana (ed) Historia de le Educación en Argentina V. Galerna, Buenos Aires
Bianchi S (1992) Iglesia católica y peronismo: la cuestión de la enseñanza religiosa (1946‐1955). EIAL 3(2):89–103
Boruchowicz C, Wagner R (2013) Institutional decay: the failure of Police in Argentina and its contrast with Chile. In: Di Tella R, Glaeser E (eds) Argentine exceptionalism
Brambilla I, Galiani S, Porto G. Argentine Trade Policies in the XX Century: 60 Years of Solitude
Brennan J (1998) Peronism and Argentina, editor. SR Books, New York
Buera FJ, Monge-Naranjo A, Primiceri GE (2011) Learning the wealth of nations. Econometrica 79(1):1–45
Conniff ML (1982) Latin American populism in comparative perspective. University of New Mexico Press, Albuquerque
Converse PE (1964) The nature of belief systems in Mass Publics. In: Apter DE (ed) Ideology and discontent. Free Press, Glencoe, pp 206–261
Cortes Conde R (1997) La economía argentina en el largo plazo (siglos XIX y XX). Editorial Sudamericana
De Long JB, Summers L (1991) Equipment investment and economic growth. Q J Econ 106(2):445–502
de Pablo JC, Martínez AJ (1989) Argentine economic policy, 1958–87. World Bank (mimeo), Washington
Della Paolera G, Taylor AM (1999) Economic recovery from the Argentine great depression: institutions, expectations, and the change of macroeconomic regime. J Econ Hist 59(03):567–599
Della Paolera G, Taylor AM (2002) Internal versus external convertibility and emerging-market crises: lessons from Argentine history. Explor Econ Hist 39(4):357–389
Di Tella T (2003) Perón y los sindicatos. El inicio de una relación conflictiva. Ariel (Historia), Buenos Aires
Di Tella R, Dubra J (2013) Fairness and redistribution: comment. Am Econ Rev 103(1):549–553
Di Tella R, Dubra J (2014) Anger and Regulation. Scand J Econ 116(3):734–765. doi:10.1111/sjoe.12068
Di Tella R, MacCulloch R (2009) Why Doesn't Capitalism Flow to Poor Countries?. Brookings Pap Econ Act 40(1):285–332
Di Tella R, Dubra J, MacCulloch R (2010) A resource belief-curse: oil and individualism. In: Hogan W, Sturzenegger F (eds) The natural resources trap: private investment without public commitment. MIT Press, Cambridge
Díaz Alejandro C (1970) Essays on the Economic history of the Argentine Republic. Yale University Press, New Haven
Díaz-Alejandro CF (1988) Trade, development and the world economy: selected essays of Carlos F. Díaz-Alejandro. B. Blackwell Publishers
Dornbusch R, Edwards S (1991) The macroeconomics of populism. In: Dornbusch R, Edwards S (eds) The macroeconomics of populism in Latin America, University of Chicago Press, Chicago, pp 7–13
Drake PW (1982) Conclusion: requiem for populism? In: Conniff ML (ed) Latin American populism in comparative perspective. University of New Mexico Press, Albuquerque
Escudé C (1990) El fracaso del proyecto argentino: Educación e ideología. Tesis, Buenos Aires
Etchemendy S, Palermo V (1998) Conflicto y Concertación: Gobierno, Congreso y Organizaciones de Interés en la Reforma Laboral del Primer Gobierno de Menem. Desarrollo Económico 37(Marzo):559–590
Fehr E, Schmidt K (1999) A theory of fairness, competition and cooperation. Q J Econ 114(3):817–868
Gaggero H, Garro A (2009) Mejor que decir es hacer, mejor que prometer es realizar: estado, gobierno y políticas sociales durante el peronismo, 1943–1955: proyectos y realidades. Editorial Biblos
Gallo AA, Alston LJ (2009). Electoral fraud, the rise of Peron and demise of checks and balances in Argentina. Available at SSRN 463300
Galiani S, Somaini P. Path-Dependent Import-Substitution Policies: The Case of Argentina in the 20th Century
Gerchunoff P (1989) Peronist Economic Policies, 1946–55. In: Di Tella G, Dornbusch R (eds) The Political Economy of Argentina, 1946–83. Macmillan Press, London
Gerchunoff Pablo, Fajgelbaum Pablo (2006) ¿Por qué Argentina no fue Australia?. Siglo XXI, Buenos Aires
Gerchunoff Pablo, Llach Lucas (1998) El ciclo de la ilusión y el desencanto: un siglo de políticas económicas argentinas. Ariel Sociedad Económica, Buenos Aires
Germani Gino (1962) Política y Sociedad en una Época de Transición: de la Sociedad Tradicional a la Sociedad de Masas. Paidós, Buenos Aires
Glaeser E (2005) The political economy of hatred. Q J Econ CXX:45–86
Glaeser E, Di Tella R, Llach L. Introduction to argentine exceptionalism
Halperin Donghi T (1994) La Larga Agonía de la Argentina Peronista. Ariel, Buenos Aires
Hansen G (1985) Indivisible labor and the business cycle. J Monet Econ 16:309–328
Hernández Arregui J (1973) La Formación de la Conciencia Nacional, 1930–60, Tercera Edición. Plus Ultra, Buenos Aires
Horowitz Joel (1990) Industrialists and the rise of Perón, 1943–1946: some implications for the conceptualization of populism. Americas 47(02):199–217
James Daniel (1988) October 17th and 18th, 1945: mass protest, Peronism and the argentine working class. J Soc Hist 2:441–462
Jones MP, Saeigh S, Spiller PT, Tommasi M (2002) Amateur legislators–professional politicians: the consequences of party-centered electoral rules in a federal system. Am J Political Sci 46(3):656–669
Kinder DR (1998) Communication and opinion. Annu Rev Political Sci 1(1):167–197
Levine DK (1998) Modelling altruism and spitefulness in experiments. Rev Econ Dyn 1:593–622
Levitsky S (2003) Transforming labor‐based parties in Latin America: Argentine Peronism in comparative perspective. Cambridge University Press
Lewis Paul (1980) Was Perón a fascist? J Politics 42(1):246–250
Miguens JE, Turner FC (1988) Racionalidad del Peronismo. Planeta, Buenos Aires
Murillo MVictoria (2001) Labor Unions, Partisan Coalitions and Market Reforms in Latin America. Cambridge University Press, Cambridge
Murmis M, Portantiero JC (1971) Estudios sobre los Orígenes del Peronismo. Siglo XXI Argentina, Buenos Aires
O'Donnell G (1977) Estado y Alianzas en la Argentina, 1956–76. Desarrollo Económico vol. 64, Nro. 64
Perón JD (1946) Libro Azul y Blanco. 2001 edition by Galerna Galerna Editors
Perón JD (1958) La fuerza es el derecho de las bestias. Editorial Ciceron, Montevideo
Piketty T (1995) Social mobility and redistributive politics. Q J Econ 1995:551–584
Portantiero JC (1973) Clases Dominantes y Crisis Políticas en la Argentina Actual. In: Braun O (ed) El Capitalismo Argentino en Crisis. Siglo XXI, Buenos Aires
Rokeach Milton (1973) The nature of human values. Free Press, New York
Rotemberg JJ (2008) Minimally acceptable altruism and the ultimatum game. J Econ Behav Organ 66:457–476
Saiegh S (2013) Political institutions and sovereign borrowing: evidence from nineteenth-century Argentina. Public Choice 156(1–2):61–75
Salop S (1979) Monopolistic competition with outside goods. Bell J Econ 10(1):141–156
Sigal S, Verón E (2003) Perón o Muerte: Los Fundamentos Discursivos del Fenómeno Peronista. Eudeba, Buenos Aires
Spiller PT, Tommasi M (2005) Instability and public policy-making in Argentina. In: Levitsky S, Victoria Murillo M (eds) Argentine democracy: the politics of institutional weakness. Pennsylvania State University Press, Pennsylvania
Stawski M (2005) Asistencialismo y negocios: la fundación Eva Perón y el gobierno peronista (1948–1955). Primer congreso de estudio sobre el peronismo: la primera década
Stokes S (2001) Mandates and democracy: neoliberalism by surprise in Latin America. Cambridge University Press
Taylor AM (1994) Tres fases del crecimiento económico argentino. Revista de Historia Económica/Journal of Iberian and Latin American Economic History (Second Series) 12(03):649–683
Torre JC (1989) Interpretando (una vez más) los orígenes del Peronismo. Desarrollo Económico 28(112):525–548
Torre JC (1990) Perón y la vieja guardia sindical. Sudamericana, Buenos Aires
Taylor AM (1998) Argentina and the world capital market: saving, investment, and international capital mobility in the twentieth century. J Develop Econ 57(1):147–184
Torres JL (1973) La Década Infame. Freeland, Buenos Aires
Waisman CH (1987) Reversal of development in argentina postwar counterrevolutionary policies and their structural consequences. Princeton University Press
World Value Survey. Wave 3 1995–1998 Official Aggregate v.20140921. World Values Survey Association (www.worldvaluessurvey.org)
Harvard University and NBER, Cambridge, USA
Rafael Di Tella
Universidad de Montevideo, Montevideo, Uruguay
Juan Dubra
Correspondence to Rafael Di Tella.
We thank Esteban Aranda for suggestions and exceptional research assistance, and Andrés Velasco for introducing us to Acario Cotapos. We also thank Juan Carlos Torre, Torcuato Di Tella, Lucas Llach, as well as participants at the Argentine Exceptionalism seminar in Cambridge 2009, for helpful comments and discussions. For support, Di Tella thanks the Canadian Institute for Advanced Research.
Appendix 1: Peron's speeches quoted in the text
"Cuidaremos el factor brazo y haremos una Argentina de hombres libres", 15 de octubre de 1944. Buenos Aires, 1944, Secretaría de Trabajo y Previsión, Difusión y Propaganda.
"Las reivindicaciones logradas por los trabajadores argentinos no podrán ser destruidas", 1 de Mayo de 1945. Buenos Aires, 1945, sin datos de imprenta.
Discurso pronunciado en el Congreso de la Nación, 21 de Octubre de 1946, Habla Perón, Subsecretaría de Informes, Buenos Aires.
Discurso pronunciado en el Congreso de la Nación, al declarar inaugurado el período de sesiones, 1 de Mayo de 1947, Los Mensajes de Perón, Serie Azul y Blanca, Mundo Peronista Ed., Buenos Aires, 1952.
Manifestaciones del general Perón ante los representantes patronales de la Producción, Industria y Comercio de la Nación, 24 de Junio de 1948, Habla Perón, Subsecretaría de Informes, Buenos Aires.
"Perón, leal amigo de los trabajadores del campo", 5 de Marzo de 1950, Subsecretaría de Informaciones de la Presidencia de la Nación.
"Economía y sindicalismo justicialista", 24 de Mayo de 1950, sin datos de fecha de publicación ni de imprenta.
"La CGT escucha a Perón", 9 de Agosto de 1950, sin datos ni de fecha ni de imprenta.
"Una etapa más en la ejecución de la doctrina peronista en el orden económico", 7 de Febrero de 1950, Subsecretaría de informes de la presidencia de la Nación.
"Perón habla sobre la organización económica del país", 12 de Mayo de 1950, sin datos ni de fecha ni de imprenta.
"Perón y Eva hablan en el Día de los Trabajadores", 1 de Mayo de 1951, Presidencia de la Nación, Subsecretaría de Informaciones.
Discurso pronunciado el 5 de marzo de 1952, sin datos de imprenta ni de fecha.
Appendix 2: Definitions of variables used (from the World Values Survey)
Poor are lazy refers to the question: "Why, in your opinion, are there people in this country who live in need? Here are two opinions: Which comes closest to your view? 1. They are poor because of laziness and lack of will power, 2. They are poor because society treats them unfairly". Group 1 is that answering option 1, while Group 2 is that answering option 2.
Run by a few big interests refers to the question: "Generally speaking, would you say that this country is run by a few big interests looking out for themselves, or that it is run for the benefit of all the people? 1. Run by a few big interests, 2. Run for all the people". Group 1 is that answering option 1, while Group 2 is that answering option 2.
Workers should follow instructions refers to the question: "People have different ideas about following instructions at work. Some say that one should follow one's superior's instructions even when one does not fully agree with them. Others say that one should follow one's superior's instructions only when one is convinced that they are right. With which of these two opinions do you agree? 1. Should follow instructions, 2. Depends, 3. Must be convinced first." Group 1 is that answering option 1, while Group 2 is that answering options 2 and 3.
Jobs for men refers to the question "Do you agree or disagree with the following statements? When jobs are scarce, men should have more right to a job than women. 1. Agree, 2. Neither Agree nor Disagree, 3. Disagree". Group 1 is that answering option 1, while Group 2 is that answering option 3.
More respect for authority refers to the question: "I'm going to read out a list of various changes in our way of life that might take place in the near future. Please tell me for each one, if it were to happen, whether you think it would be a good thing, a bad thing, or don't you mind? Greater respect for authority. 1. Good, 2. Don't mind, 3. Bad". Group 1 is that answering option 1, while Group 2 is that answering option 3.
Less emphasis on money refers to the question: "I'm going to read out a list of various changes in our way of life that might take place in the near future. Please tell me for each one, if it were to happen, whether you think it would be a good thing, a bad thing, or don't you mind? Less emphasis on money. 1. Good, 2. Don't mind, 3. Bad". Group 1 is that answering option 1, while Group 2 is that answering option 3.
Acceptable to cheat refers to the question: "Please tell me for each of the following statements whether you think it can always be justified, never be justified, or something in between, using this card. Cheating on taxes if you have a chance (scale 1–10 is shown with Never Justifiable below 1 and Always Justifiable below 10)". Group 1 is that answering options 1 and 2, while Group 2 is that answering options 3, 4, 5, 6, 7, 8, 9, and 10.
Competition good refers to the question: Now I'd like you to tell me your views on various issues. How would you place your views on this scale? 1 means you agree completely with the statement on the left; 10 means you agree completely with the statement on the right; and if your views fall somewhere in between, you can choose any number in between. A scale is shown with a 1–10 scale with the words "Competition is good. It stimulates people to work hard and develop new ideas" below 1 and "Competition is harmful. It brings out the worst in people" below 10.
Di Tella, R., Dubra, J. Some elements of Peronist beliefs and tastes. Lat Am Econ Rev 27, 6 (2018). https://doi.org/10.1007/s40503-017-0046-5
Higher local field
In mathematics, a higher (-dimensional) local field is an important example of a complete discrete valuation field. Such fields are also sometimes called multi-dimensional local fields.
On the usual local fields (typically completions of number fields or the quotient fields of local rings of algebraic curves) there is a unique surjective discrete valuation (of rank 1) associated to a choice of a local parameter of the field, unless they are archimedean local fields such as the real numbers and complex numbers. Similarly, there is a discrete valuation of rank n on almost all n-dimensional local fields, associated to a choice of n local parameters of the field.[1] In contrast to one-dimensional local fields, higher local fields have a sequence of residue fields.[2] There are different integral structures on higher local fields, depending on how much residue-field information one wants to take into account.[2]
Geometrically, higher local fields appear via a process of localization and completion of local rings of higher dimensional schemes.[2] Higher local fields are an important part of the subject of higher dimensional number theory, forming the appropriate collection of objects for local considerations.
Definition
Finite fields have dimension 0, and complete discrete valuation fields with finite residue field have dimension one (it is natural to also define archimedean local fields such as R or C to have dimension 1); we then say a complete discrete valuation field has dimension n if its residue field has dimension n−1. Higher local fields are those of dimension greater than one, while one-dimensional local fields are the traditional local fields. We call the residue field of a finite-dimensional higher local field the 'first' residue field; its residue field is then the second residue field, and the pattern continues until we reach a finite field.[2]
Examples
Two-dimensional local fields are divided into the following classes:
• Fields of positive characteristic: these are formal power series in a variable t over a one-dimensional local field, i.e. Fq((u))((t)).
• Equicharacteristic fields of characteristic zero: formal power series F((t)) over a one-dimensional local field F of characteristic zero.
• Mixed-characteristic fields: finite extensions of fields of type F{{t}}, where F is a one-dimensional local field of characteristic zero. The field F{{t}} is defined as the set of formal power series, infinite in both directions, with coefficients from F such that the minimum of the valuations of the coefficients is an integer, and such that the coefficients tend to zero as their index goes to minus infinity.[2]
• Archimedean two-dimensional local fields: formal power series over the real numbers R or the complex numbers C.
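To make the rank-2 valuation on fields such as Fq((u))((t)) concrete, here is a rough Python sketch (not from the article). It treats a truncated element as a nested dictionary mapping each t-exponent to the u-exponents and coefficients of that t-coefficient; this representation and the function name are ad-hoc assumptions for illustration only.

```python
# Ad-hoc sketch: a truncated element of F_q((u))((t)) is stored as
# {t_exponent: {u_exponent: coefficient}}.  The rank-2 valuation is
# v(f) = (v_t(f), v_u(leading t-coefficient)), compared lexicographically.

def rank2_valuation(f):
    """Rank-2 valuation (v_t, v_u) of a nonzero truncated element f."""
    # t-adic order: smallest t-exponent with a nonzero coefficient series
    v_t = min(e for e, series in f.items() if any(c != 0 for c in series.values()))
    # u-adic order of the leading t-coefficient
    v_u = min(e for e, c in f[v_t].items() if c != 0)
    return (v_t, v_u)

# f = t^(-1) * (u^2 + u^3) + (1 + u):  v(f) = (-1, 2)
f = {-1: {2: 1, 3: 1}, 0: {0: 1, 1: 1}}
print(rank2_valuation(f))  # (-1, 2)
```

Python tuples compare lexicographically, so `rank2_valuation(f) < rank2_valuation(g)` matches the ordering of the rank-2 value group Z × Z with the t-component dominant.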
Constructions
Higher local fields appear in a variety of contexts. A geometric example is as follows. Given a surface over a finite field of characteristic p, a curve on the surface and a point on the curve, take the local ring at the point. Then, complete this ring, localise it at the curve and complete the resulting ring. Finally, take the quotient field. The result is a two-dimensional local field over a finite field.[2]
There is also a construction using commutative algebra, which becomes technical for non-regular rings. The starting point is a Noetherian, regular, n-dimensional ring and a full flag of prime ideals such that their corresponding quotient ring is regular. A series of completions and localisations take place as above until an n-dimensional local field is reached.
Topologies on higher local fields
One-dimensional local fields are usually considered in the valuation topology, in which the discrete valuation is used to define open sets. This will not suffice for higher dimensional local fields, since one needs to take into account the topology at the residue level too. Higher local fields can be endowed with appropriate topologies (not uniquely defined) which address this issue. Such topologies are not the topologies associated with discrete valuations of rank n, if n > 1. In dimension two and higher the additive group of the field becomes a topological group which is not locally compact and the base of the topology is not countable. The most surprising thing is that the multiplication is not continuous; however, it is sequentially continuous, which suffices for all reasonable arithmetic purposes. There are also iterated Ind–Pro approaches to replace topological considerations by more formal ones.[3]
Measure, integration and harmonic analysis on higher local fields
There is no translation invariant measure on two-dimensional local fields. Instead, there is a finitely additive translation invariant measure defined on the ring of sets generated by closed balls with respect to two-dimensional discrete valuations on the field, and taking values in formal power series R((X)) over reals.[4] This measure is also countably additive in a certain refined sense. It can be viewed as higher Haar measure on higher local fields. The additive group of every higher local field is non-canonically self-dual, and one can define a higher Fourier transform on appropriate spaces of functions. This leads to higher harmonic analysis.[5]
Higher local class field theory
Local class field theory in dimension one has its analogues in higher dimensions. The appropriate replacement for the multiplicative group becomes the nth Milnor K-group, where n is the dimension of the field, which then appears as the domain of a reciprocity map to the Galois group of the maximal abelian extension over the field. Even better is to work with the quotient of the nth Milnor K-group by its subgroup of elements divisible by every positive integer. By a theorem of Fesenko,[6] this quotient can also be viewed as the maximal separated topological quotient of the K-group endowed with appropriate higher dimensional topology. Higher local reciprocity homomorphism from this quotient of the nth Milnor K-group to the Galois group of the maximal abelian extension of the higher local field has many features similar to those of the one-dimensional local class field theory.
Higher local class field theory is compatible with class field theory at the residue field level, using the border map of Milnor K-theory to create a commutative diagram involving the reciprocity map on the level of the field and the residue field.[7]
General higher local class field theory was developed by Kazuya Kato[8] and by Ivan Fesenko.[9][10] Higher local class field theory in positive characteristic was proposed by Aleksei Parshin.[11][12]
Notes
1. Fesenko, I.B., Vostokov, S.V. Local Fields and Their Extensions. American Mathematical Society, 1992, Chapter 1 and Appendix.
2. Fesenko, I., Kurihara, M. (eds.) Invitation to Higher Local Fields. Geometry and Topology Monographs, 2000, section 1 (Zhukov).
3. Fesenko, I., Kurihara, M. (eds.) Invitation to Higher Local Fields. Geometry and Topology Monographs, 2000, several sections.
4. Fesenko, I. Analysis on arithmetic schemes. I. Docum. Math., (2003), Kato's special volume, 261-284
5. Fesenko, I., Measure, integration and elements of harmonic analysis on generalized loop spaces, Proceed. St. Petersburg Math. Soc., vol. 12 (2005), 179-199; AMS Transl. Series 2, vol. 219, 149-164, 2006
6. I. Fesenko (2002). "Sequential topologies and quotients of Milnor K-groups of higher local fields" (PDF). St. Petersburg Mathematical Journal. 13.
7. Fesenko, I., Kurihara, M. (eds.) Invitation to Higher Local Fields. Geometry and Topology Monographs, 2000, section 5 (Kurihara).
8. K. Kato (1980). "A generalization of local class field theory by using K -groups. II". J. Fac. Sci. Univ. Tokyo. 27: 603–683.
9. I. Fesenko (1991). "On class field theory of multidimensional local fields of positive characteristic". Adv. Sov. Math. 4: 103–127.
10. I. Fesenko (1992). "Class field theory of multidimensional local fields of characteristic 0, with the residue field of positive characteristic". St. Petersburg Mathematical Journal. 3: 649–678.
11. A. Parshin (1985). "Local class field theory". Proc. Steklov Inst. Math.: 157–185.
12. A. Parshin (1991). "Galois cohomology and Brauer group of local fields": 191–201.
References
• Fesenko, Ivan B.; Vostokov, Sergei V. (2002), Local fields and their extensions, Translations of Mathematical Monographs, vol. 121 (Second ed.), Providence, RI: American Mathematical Society, ISBN 978-0-8218-3259-2, MR 1915966, Zbl 1156.11046
• Fesenko, Ivan B.; Kurihara, Masato, eds. (2000), Invitation to Higher Local Fields. Extended version of talks given at the conference on higher local fields, Münster, Germany, August 29–September 5, 1999, Geometry and Topology Monographs, vol. 3 (First ed.), University of Warwick: Mathematical Sciences Publishers, doi:10.2140/gtm.2000.3, ISSN 1464-8989, Zbl 0954.00026
Return on Investment (ROI): How to Calculate It and What It Means
Jason Fernando
Jason Fernando is a professional investor and writer who enjoys tackling and communicating complex business and financial problems.
Reviewed by Julius Mansa
Fact checked by Suzanne Kvilhaug
Suzanne is a content marketer, writer, and fact-checker. She holds a Bachelor of Science in Finance degree from Bridgewater State University and helps develop content strategies for financial brands.
Return on investment (ROI) is a performance measure used to evaluate the efficiency or profitability of an investment or compare the efficiency of a number of different investments. ROI tries to directly measure the amount of return on a particular investment, relative to the investment's cost.
To calculate ROI, the benefit (or return) of an investment is divided by the cost of the investment. The result is expressed as a percentage or a ratio.
Return on Investment (ROI) is a popular profitability metric used to evaluate how well an investment has performed.
ROI is expressed as a percentage and is calculated by dividing an investment's net profit (or loss) by its initial cost or outlay.
ROI can be used to make apples-to-apples comparisons and rank investments in different projects or assets.
ROI does not take into account the holding period or passage of time, and so it can miss opportunity costs of investing elsewhere.
Whether or not something delivers a good ROI should be compared relative to other available opportunities.
How To Calculate Return On Investment (ROI)
The return on investment (ROI) formula is as follows:
$$\text{ROI} = \frac{\text{Current Value of Investment} - \text{Cost of Investment}}{\text{Cost of Investment}}$$
"Current Value of Investment" refers to the proceeds obtained from the sale of the investment of interest. Because ROI is measured as a percentage, it can be easily compared with returns from other investments, allowing one to measure a variety of types of investments against one another.
ROI is a popular metric because of its versatility and simplicity. Essentially, ROI can be used as a rudimentary gauge of an investment's profitability. This could be the ROI on a stock investment, the ROI a company expects on expanding a factory, or the ROI generated in a real estate transaction.
The calculation itself is not too complicated, and it is relatively easy to interpret for its wide range of applications. If an investment's ROI is net positive, it is probably worthwhile. But if other opportunities with higher ROIs are available, these signals can help investors eliminate or select the best options. Likewise, investors should avoid negative ROIs, which imply a net loss.
For example, suppose Jo invested $1,000 in Slice Pizza Corp. in 2017 and sold the shares for a total of $1,200 one year later. To calculate the return on this investment, divide the net profits ($1,200 - $1,000 = $200) by the investment cost ($1,000), for an ROI of $200/$1,000, or 20%.
With this information, one could compare the investment in Slice Pizza with any other projects. Suppose Jo also invested $2,000 in Big-Sale Stores Inc. in 2014 and sold the shares for a total of $2,800 in 2017. The ROI on Jo's holdings in Big-Sale would be $800/$2,000, or 40%.
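Jo's two calculations above can be sketched in a few lines of Python (the function name is an illustrative choice, not a standard API):

```python
# Minimal sketch of the ROI formula: (current value - cost) / cost,
# expressed as a percentage.

def roi(current_value, cost):
    return (current_value - cost) / cost * 100

# Jo's investments from the example:
print(roi(1200, 1000))  # 20.0  (Slice Pizza)
print(roi(2800, 2000))  # 40.0  (Big-Sale)
```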
Limitations of ROI
Examples like Jo's (above) reveal some limitations of using ROI, particularly when comparing investments. While the ROI of Jo's second investment was twice that of the first investment, the time between Jo's purchase and the sale was one year for the first investment but three years for the second.
Jo could adjust the ROI of the multi-year investment accordingly. Since the total ROI was 40%, to obtain the average annual ROI, Jo could divide 40% by 3 to yield 13.33% annualized. With this adjustment, it appears that although Jo's second investment earned more profit, the first investment was actually the more efficient choice.
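The annual-average adjustment described above can be sketched as follows; the compounding variant is an assumption added only for comparison and is not used in the text:

```python
# Crude average annual ROI, as in the text: total ROI divided by years.
def simple_annualized_roi(total_roi_pct, years):
    return total_roi_pct / years

# Compounding variant (added for comparison, not from the text): the
# constant annual rate that compounds to the same total return.
def compound_annualized_roi(total_roi_pct, years):
    return ((1 + total_roi_pct / 100) ** (1 / years) - 1) * 100

print(simple_annualized_roi(40, 3))    # ≈ 13.33, below the first investment's 20%
print(compound_annualized_roi(40, 3))  # ≈ 11.87
```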
ROI can be used in conjunction with the rate of return (RoR), which takes into account a project's time frame. One may also use net present value (NPV), which accounts for differences in the value of money over time, due to inflation. The application of NPV when calculating the RoR is often called the real rate of return.
Developments in ROI
Recently, certain investors and businesses have taken an interest in the development of new forms of ROIs, called "social return on investment," or SROI. SROI was initially developed in the late 1990s and takes into account broader impacts of projects using extra-financial value (i.e., social and environmental metrics not currently reflected in conventional financial accounts).
SROI helps understand the value proposition of certain environmental social and governance (ESG) criteria used in socially responsible investing (SRI) practices. For instance, a company may decide to recycle water in its factories and replace its lighting with all LED bulbs. These undertakings have an immediate cost that may negatively impact traditional ROI—however, the net benefit to society and the environment could lead to a positive SROI.
There are several other new variations of ROIs that have been developed for particular purposes. Social media statistics ROI pinpoints the effectiveness of social media campaigns—for example how many clicks or likes are generated for a unit of effort. Similarly, marketing statistics ROI tries to identify the return attributable to advertising or marketing campaigns.
So-called learning ROI relates to the amount of information learned and retained as a return on education or skills training. As the world progresses and the economy changes, several other niche forms of ROI are sure to be developed in the future.
What Is ROI in Simple Terms?
Basically, return on investment (ROI) tells you how much money you've made (or lost) on an investment or project after accounting for its cost.
How Do You Calculate Return on Investment (ROI)?
Return on investment (ROI) is calculated by dividing the profit earned on an investment by the cost of that investment. For instance, an investment with a profit of $100 and a cost of $100 would have an ROI of 1, or 100% when expressed as a percentage. Although ROI is a quick and easy way to estimate the success of an investment, it has some serious limitations. For instance, ROI fails to reflect the time value of money, and it can be difficult to meaningfully compare ROIs because some investments will take longer to generate a profit than others. For this reason, professional investors tend to use other metrics, such as net present value (NPV) or the internal rate of return (IRR).
What Is a Good ROI?
What qualifies as a "good" ROI will depend on factors such as the risk tolerance of the investor and the time required for the investment to generate a return. All else being equal, investors who are more risk-averse will likely accept lower ROIs in exchange for taking less risk. Likewise, investments that take longer to pay off will generally require a higher ROI in order to be attractive to investors.
What Industries Have the Highest ROI?
Historically, the average ROI for the S&P 500 has been about 10% per year. Within that, though, there can be considerable variation depending on the industry. For instance, during 2020, many technology companies generated annual returns well above this 10% threshold. Meanwhile, companies in other industries, such as energy companies and utilities, generated much lower ROIs and in some cases faced losses year-over-year. Over time, it is normal for the average ROI of an industry to shift due to factors such as increased competition, technological changes, and shifts in consumer preferences.
World Health Organization. "Social Return on Investment," Pages 2-4.
DQYDJ. "S&P 500 Historical Return Calculator."
Fortune. "The Best Stocks of 2020 Have Made Pandemic Investors Even Richer."
The supremum
A non-empty set $S \subseteq \mathbb{R}$ is bounded from above if there exists $M \in \mathbb{R}$ such that
$$x \le M, \quad \forall x \in S.$$
The number $M$ is called an upper bound of $S$.
If a set is bounded from above, then it has infinitely many upper bounds, because every number greater than an upper bound is also an upper bound. Among all the upper bounds, we are interested in the smallest one.
Let $S \subseteq \mathbb{R}$ be bounded from above. A real number $L$ is called the supremum of the set $S$ if the following is valid:
(i) $L$ is an upper bound of $S$:
$$x \le L, \quad \forall x \in S,$$
(ii) $L$ is the least upper bound:
$$(\forall \epsilon > 0) (\exists x \in S)(L - \epsilon < x).$$
We denote the supremum of $S$ by
$$L = \sup S$$
or
$$L = \sup_{x \in S} \{x\}.$$
If $L \in S$, then we say that $L$ is the maximum of $S$ and we write
$$L = \max S$$
or
$$ L = \max_{x \in S}\{x\}.$$
If the set $S$ is not bounded from above, then we write $\sup S = + \infty$.
Proposition 1. If an upper bound $A$ of the set $S$ belongs to $S$, then $A = \sup S$ (and, in fact, $A = \max S$).
The question is: does every non-empty set that is bounded from above have a supremum? Consider the following example.
Example 1. Determine a supremum of the following set
$$ S = \{x \in \mathbb{Q}| x^2 < 2 \} \subseteq \mathbb{Q}.$$
The set $S$ is a subset of the set of rational numbers. The only candidate for the least upper bound is $\sqrt{2}$. However, the set $S$ does not have a supremum in $\mathbb{Q}$, because $\sqrt{2}$ is not a rational number. The example shows that in the set $\mathbb{Q}$ there are sets bounded from above that do not have a supremum, which is not the case in the set $\mathbb{R}$.
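To see this behaviour numerically, here is a small Python sketch (the helper function and the chosen denominators are illustrative, not part of the lesson): elements of $S$ climb arbitrarily close to $\sqrt{2} \approx 1.41421$, so every rational upper bound of $S$ can be undercut by a smaller rational upper bound.

```python
from fractions import Fraction

def best_in_s_with_denominator(d):
    """Largest fraction k/d that still lies in S, i.e. whose square is below 2.
    (Helper name and approach are illustrative only.)"""
    k = 0
    while Fraction(k + 1, d) ** 2 < 2:
        k += 1
    return Fraction(k, d)

# Elements of S creep up toward sqrt(2) = 1.41421..., with no rational
# element ever being the largest: 14/10, 141/100, 1414/1000, 14142/10000, ...
approximations = [best_in_s_with_denominator(10 ** n) for n in range(1, 5)]
```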
In the set of real numbers, the completeness axiom holds:
Every non-empty set of real numbers which is bounded from above has a supremum.
This axiom distinguishes the set of real numbers from the set of rational numbers.
The infimum
In a similar way we define terms related to sets which are bounded from below.
A non-empty set $S \subseteq \mathbb{R}$ is bounded from below if there exists $m \in \mathbb{R}$ such that
$$m \le x, \quad \forall x \in S.$$
The number $m$ is called a lower bound of $S$.
Let $S \subseteq \mathbb{R}$ be bounded from below. A real number $L$ is called the infimum of the set $S$ if the following is valid:
(i) $L$ is a lower bound:
$$L \le x, \quad \forall x \in S,$$
(ii) $L$ is the greatest lower bound:
$$(\forall \epsilon > 0) ( \exists x \in S) ( x < L + \epsilon).$$
We denote the infimum of $S$ by
$$L = \inf S$$
or
$$L = \inf_{x \in S} \{x\}.$$
If $L \in S$, then we say that $L$ is the minimum of $S$ and we write
$$L = \min S$$
or
$$ L = \min_{x \in S}\{x\}.$$
If the set $S$ is not bounded from below, then we write $\inf S = - \infty$.
The existence of an infimum is given as a theorem.
Theorem. Every non-empty set of real numbers which is bounded from below has an infimum.
Proposition 2. Let $a , b \in \mathbb{R}$ such that $a<b$. Then
(i) $\sup \langle a, b \rangle = \sup \langle a, b] = \sup [a, b \rangle = \sup [a, b] = b$,
(ii) $ \sup \langle a, + \infty \rangle = \sup [a, + \infty \rangle = + \infty$,
(iii) $\inf \langle a, b \rangle = \inf \langle a, b] = \inf [a, b \rangle = \inf [a, b] = a$,
(iv) $\inf \langle - \infty, a \rangle = \inf \langle - \infty, a ] = - \infty$,
(v) $\sup \langle - \infty, a \rangle = \sup \langle - \infty, a] = \inf \langle a, + \infty \rangle = \inf [a, + \infty \rangle = a$.
Example 2. Determine $\sup S$, $\inf S$, $\max S$ and $\min S$ if
$$ S = \{ x \in \mathbb{R} | \frac{1}{x-1} > 2 \}.$$
Solution. First, we have to determine which $x$ satisfy the inequality:
$$ \frac{1}{x-1} > 2$$
$$\frac{1}{x-1} – 2 > 0$$
$$\frac{3 – 2x}{x-1} >0$$
The fraction above is greater than zero if the numerator and denominator are both positive or both negative. We distinguish two cases:
1.) $3-2x >0$ and $x-1 > 0$, that is, $ x < \frac{3}{2}$ and $ x > 1$. It follows $ x \in \langle 1, \frac{3}{2} \rangle$.
2.) $3 – 2x < 0$ and $ x-1 < 0$, that is, $x > \frac{3}{2}$ and $ x < 1$. It follows $ x \in \emptyset$.
$\Longrightarrow S = \langle 1, \frac{3}{2} \rangle$
From Proposition 2 it follows that $\sup S = \frac{3}{2}$ and $\inf S = 1$.
The minimum and maximum do not exist (the interval is open, so its endpoints do not belong to $S$).
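As a quick numerical cross-check of this solution (the grid resolution below is an arbitrary choice, not part of the lesson), we can scan a fine grid of points and confirm that exactly the points strictly between $1$ and $\frac{3}{2}$ satisfy the inequality:

```python
def in_s(x):
    """The defining inequality of S; x = 1 is excluded since 1/(x-1) is undefined there."""
    return x != 1 and 1 / (x - 1) > 2

# Scan a fine grid; every sampled point satisfying the inequality falls
# strictly between 1 and 3/2, matching S = <1, 3/2>.
grid = [k / 1000 for k in range(-3000, 3001)]
solutions = [x for x in grid if in_s(x)]
```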
Example 3. Determine $\sup S$, $\inf S$, $\max S$ and $\min S$ if
$$ S = \{ \frac{x}{x+1}| x \in \mathbb{N} \}.$$
First, we write out the first few terms of $S$:
$$S= \{ \frac{1}{2}, \frac{2}{3}, \frac{3}{4}, \frac{4}{5}, \cdots \}.$$
It appears that the smallest term is $\frac{1}{2}$ and that there is no largest term; however, no term exceeds $1$. That is, we conjecture that $\inf S = \min S = \frac{1}{2}$, $\sup S = 1$, and that $\max S$ does not exist. Let's prove it!
To prove that $1$ is the supremum of $S$, we must first show that $1$ is an upper bound:
$$\frac{x}{x+1} < 1$$
$$\Longleftrightarrow x < x +1$$
$$ \Longleftrightarrow 0 < 1,$$
which is always valid. Therefore, $1$ is an upper bound. Now we must show that $1$ is the least upper bound. Let's take any $\epsilon$ with $0 < \epsilon < 1$ and show that there exists $x_0 \in \mathbb{N}$ such that
$$\frac{x_0}{x_0 +1} > \epsilon$$
$$\Longleftrightarrow x_0 > \epsilon (x_0 + 1)$$
$$\Longleftrightarrow x_0 ( 1- \epsilon) > \epsilon$$
$$\Longleftrightarrow x_0 > \frac{\epsilon}{1-\epsilon},$$
and such $x_0$ surely exists. Therefore, $\sup S = 1$.
However, $1$ is not the maximum. Namely, if $1 \in S$, then $\exists x_1 \in \mathbb{N}$ such that
$$\frac{x_1}{x_1 + 1} = 1$$
$$\Longleftrightarrow x_1 = x_1 +1$$
$$ \Longleftrightarrow 0=1,$$
which is a contradiction. It follows that the maximum of $S$ does not exist.
Now we will prove that $\min S = \frac{1}{2}$.
Since $\frac{1}{2} \in S$, it is enough to show that $\frac{1}{2}$ is a lower bound of $S$. Indeed, we have
$$ \frac{x}{x+1} \ge \frac{1}{2}$$
$$ \Longleftrightarrow 2x \ge x+1$$
$$\Longleftrightarrow x \ge 1,$$
which is valid for all $x \in \mathbb{N}$. Therefore, $\inf S = \min S = \frac{1}{2}$.
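The proof above can also be checked numerically with a short Python sketch (the function names and the sampled values of $\epsilon$ are illustrative, not part of the lesson):

```python
import math

def term(x):
    """The element x/(x+1) of S."""
    return x / (x + 1)

def witness(eps):
    """A natural number x0 with term(x0) > eps, valid for 0 < eps < 1,
    obtained from the bound x0 > eps/(1 - eps) derived in the proof."""
    return math.floor(eps / (1 - eps)) + 1

# The terms increase toward 1 but never reach it, and the first term 1/2
# is the smallest, so inf S = min S = 1/2 and sup S = 1.
terms = [term(x) for x in range(1, 1001)]
```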
Each of the two chapters in this dissertation is based on a game theory paper. Although the topic of each chapter is different, they are linked by the question: how do players coordinate their actions through communication? Each chapter develops a communication schema, characterizes equilibrium strategies, and responds to this inquiry. In the first chapter, I study a collective action problem in a setting of discounted repeated coordination games in which players know their neighbors' inclinations to participate and monitor their neighbors' past actions. I define strong connectedness to characterize those states in which, for every two players who are inclined to participate, there is a path of players with the same inclination connecting them. Given that the networks are fixed, finite, connected, commonly known, undirected, and without cycles, I show that if the priors have full support on the strong-connectedness states, there is a weak sequential equilibrium in which the ex-post efficient outcome repeats after a finite time $T$ on the equilibrium path when the discount factor is sufficiently high. This equilibrium is constructive and does not depend on public or private signals other than players' actions. In the second chapter, I consider three-player complete-information games augmented with pre-play communication. Players can communicate privately with one another, but not through a mediator. I implement correlated equilibria by letting players authenticate their messages and forward the authenticated messages during communication. Authenticated messages, such as letters with signatures, cannot be duplicated, but they can be sent or received by players. With authenticated messages, I show that if a game $G$ has a worst Nash equilibrium $\alpha$, then any correlated equilibrium distribution in $G$ that has rational components and gives each player a higher payoff than $\alpha$ does can be implemented by pre-play communication.
The proposed communication protocol does not publicly expose players' messages at any stage of the communication.
What is the largest body in the solar system we could meaningfully and accurately adjust the orbit of?
There is a lot of science fiction, and some emerging science, that moves comets and asteroids as part of the main plot. Pretty much everything in our solar system is in orbit around the Sun, or in orbit around an object orbiting the Sun. If you want something to be somewhere else, in essence you change its solar orbit to match (or collide) with your desired location.
We have several man-made bodies that we have placed in lots of different orbits. We have even caused some to leave the solar system. Given our current (2015) tools and knowledge, what is the largest object we could meaningfully and accurately adjust the orbit of?
Where "meaningfully and accurately" = bringing into a given orbit (specific and calculated) around any body that it is not currently orbiting, in a timely fashion (i.e. 5 years or less)
Of course the biggest challenge is getting your tools and knowledge away from Earth and to the body whose orbit you want to adjust; that is a matter of economics. Assuming you have the budget to get what you want off of Earth, what is the biggest thing you could move accurately?
orbital-mechanics propulsion solar-system asteroid-redirect-mission
Cornelisinspace
$\begingroup$ I'm wondering if this still might be too broad. It depends on whether it is possible to be clear about what a meaningful, accurate change is. Maybe there are some large things we could change the orbit of by a few meters, such that in a few centuries it would be likely to hit something specific - but most such opportunities are in places where their orbits are heavily influenced by a lot of other things and accurate orbit prediction down the road is pretty hard. $\endgroup$ – kim holder wants Monica back May 8 '15 at 18:59
$\begingroup$ "2015 tools and knowledge" is very vague; by my interpretation of it, the answer is "a very small artificial satellite". $\endgroup$ – Russell Borogove May 8 '15 at 19:06
$\begingroup$ A body being so small that adjusting its orbit might be possible will be too small to be detected with telescopes from earth or by telescopes in an earth orbit. $\endgroup$ – Uwe Apr 3 '17 at 15:42
If "meaningful" means measurable then seeng a half-percent change in the period of a tiny double-asteroid at a few AU is pretty close to as big as possible.
@PearsonArtPhotos question Using DART to measure G came to mind when I came across this "classic" question, so let's connect the dots.
From Double Asteroid Redirection Test (DART) Mission
DART will be the first demonstration of the kinetic impact technique to change the motion of an asteroid in space. The DART mission is in Phase B, led by JHU/APL and managed by the Planetary Missions Program Office at Marshall Space Flight Center for NASA's Planetary Defense Coordination Office.
DART is a planetary defense-driven test of one of the technologies for preventing the Earth impact of a hazardous asteroid: the kinetic impactor. DART's primary objective is to demonstrate a kinetic impact on a small asteroid. The binary near-Earth asteroid (65803) Didymos is the target for DART. While Didymos' primary body is approximately 800 meters across, its secondary body (or "moonlet") has a 150-meter size, which is more typical of the size of asteroids that could pose a more common hazard to Earth.
The DART spacecraft will achieve the kinetic impact by deliberately crashing itself into the moonlet at a speed of approximately 6 km/s, with the aid of an onboard camera and sophisticated autonomous navigation software. The collision will change the speed of the moonlet in its orbit around the main body by a fraction of one percent, enough to be measured using telescopes on Earth.
Wikipedia's Double Asteroid Redirection Test says that the launch mass is 500 kg, and the Lunar and Planetary Science XLVIII (2017) paper The Double Asteroid Redirection Test (DART) Element of the Asteroid Impact and Deflection Assessment (AIDA) Mission gives an impact mass of ~490 kg.
With a system mass $M = m_1+m_2$ of 5.28E+11 kg, a separation $R$ of 1180 meters, and the gravitational constant $G$ of 6.674E-11 m^3/kg s^2, the orbital period from (from here):
$$ T^2 = \frac{4 \pi^2 R^3} {G(m_1+m_2)} $$
is about 42,900 seconds, and if the orbit were circular that corresponds to an orbital velocity of about 0.173 m/sec.
The momentum of the ~500 kg spacecraft at 6,000 m/s is 3E+06 kg m/s, while that of the moonlet about the system's center of mass (assuming the moonlet has about 0.66% of the system mass if you assume equal density) is about 6.01E+08 kg m/s, so the complete absorption of the spacecraft's momentum could change the momentum of the moonlet by roughly half of one percent.
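For anyone who wants to reproduce the arithmetic, here is a back-of-envelope script using only the numbers quoted above (the equal-density, size-cubed estimate of the moonlet mass is the same assumption as in the text):

```python
import math

# Values quoted above for the Didymos system and the DART spacecraft.
G = 6.674e-11                  # gravitational constant, m^3 kg^-1 s^-2
M_system = 5.28e11             # total system mass m1 + m2, kg
R = 1180.0                     # separation of the pair, m
p_spacecraft = 500.0 * 6000.0  # ~500 kg at 6 km/s -> 3e6 kg m/s

# Kepler's third law for the binary, then circular-orbit speed.
T = math.sqrt(4 * math.pi ** 2 * R ** 3 / (G * M_system))  # ~42,900 s
v_orbit = 2 * math.pi * R / T                              # ~0.173 m/s

# Moonlet mass from the (150 m / 800 m)^3 equal-density assumption (~0.66%).
moonlet_mass = (150 / 800) ** 3 * M_system
p_moonlet = moonlet_mass * v_orbit            # ~6.01e8 kg m/s
momentum_change = p_spacecraft / p_moonlet    # ~0.5%
```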
Schematic of the DART mission shows the impact on the moonlet of asteroid (65803) Didymos. Post-impact observations from Earth-based optical telescopes and planetary radar would, in turn, measure the change in the moonlet's orbit about the parent body.
The near-Earth asteroid (185851) 2000 DP107 in many ways is an analog to Didymos. 2000 DP107 was the first binary asteroid ever imaged by radar. This animation is derived from planetary radar observations. In this example (2000 DP107), the primary and secondary are about 850 meters and 300 meters in diameter. They are separated by about 2.7 km. The primary rotates once every 2.77 hours while the tidally locked secondary orbits the primary about once every 42 hours.
$\begingroup$ If the momentum of the moonlet is changed by roughly half of one percent, how much change of the orbit period of 11.92 hours? How long is the neccessary observation time to validate such a small change of the moonlet's orbital period? $\endgroup$ – Uwe Feb 8 '19 at 15:19
$\begingroup$ @Uwe I thought one of the links in my post addresses that but it doesn't. I guess a few months but there's no rush as far as I know. $\endgroup$ – uhoh Feb 8 '19 at 15:26
$\begingroup$ A few months for validation would be ok, but it should be done during the closest approach to Earth around October 2022. It should be done before the Didymos system is too far away again for precise observation of its moonlet. $\endgroup$ – Uwe Feb 8 '19 at 15:33
$\begingroup$ @Uwe I think the pair are an eclipsing binary seen from Earth (diameters of 300 and 800 meters) so a light curve is sufficient, they don't need to be spatially resolved. As long as they can be detected optically all that's necessary is a series of measurements of the eclipse and transit timings. I'm sure this has all been carefully thought out before planning the mission, let's see if we can find a reference to the observation plan. $\endgroup$ – uhoh Feb 8 '19 at 20:31
$\begingroup$ Just thinking, is it possible to impact an asteroid to change it course such that it hits the target asteroid which is in collision course to earth? Then I think we can change a trajectory a lot ! $\endgroup$ – Prakhar Feb 9 '19 at 5:55
The phrase "2015 tools and knowledge" combines two very different things. If we are limited to today's tools, the best we can do is impact a body so the body absorbs the momentum. The easiest way to satisfy "meaningfully adjust the orbit of something" is to make it not hit (or hit) the earth. Rosetta is 2900 kg and my WAG for a closing speed, assuming you want it high even though everything goes around the sun, is $10\%$ of Mars orbital velocity or 2.4 km/s. That gives you the delta v you can impart (though you can't necessarily do it in any chosen direction) by dividing by the mass of the object. As far as I know, there are not any known objects heading this way. If we found a comet in an extremely eccentric orbit that would hit the earth next pass (or the one after that) a small nudge would prevent the disaster. Then you are into 2015 knowledge-how well can we measure the orbit.
Ross Millikan
\begin{document}
\title{On formulations of skew factor models: skew errors versus skew factors} \author{Sharon X. Lee$^1$, Geoffrey J. McLachlan$^{1, \star}$} \date{}
\maketitle
\begin{flushleft} $^1$Department of Mathematics, University of Queensland, St. Lucia, Queensland, 4072, Australia.\\ $^\star$ E-mail: [email protected] \end{flushleft}
\begin{abstract} In the past few years, there have been a number of proposals for generalizing the factor analysis (FA) model and its mixture version (known as mixtures of factor analyzers (MFA)) using non-normal and asymmetric distributions. These models adopt various types of skew densities for either the factors or the errors. While the relationships between various choices of skew distributions have been discussed in the literature, the differences between placing the assumption of skewness on the factors or on the errors have not been closely studied. This paper examines these formulations and discusses the connections between these two types of formulations for skew factor models. In doing so, we introduce a further formulation that unifies these two formulations; that is, placing a skew distribution on both the factors and the errors. \end{abstract}
\section{\large Introduction} \label{s:intro}
Mixture models with skew component densities have gained increasing attention in recent years due to their ability to accommodate asymmetric distributional features in the data. However, these models are highly parametrized and so are not well suited for the analysis of high-dimensional datasets. One approach to reduce the number of unknown parameters in these models is to adopt a mixture of factor analyzers (MFA) model \citep{J617,J708,B004}. Recent developments along this path have explored factor-analytic equivalents of these skew mixture models for modelling high-dimensional datasets. To name a few, there are mixtures of (generalized hyperbolic) skew $t$-factor analyzers (GHSTFA) by \citet{J618}, the skew $t$-factor analysis (STFA) model by \citet{J160b}, the mixtures of generalized hyperbolic factor analyzers (GHFA) by \citet{J620}, the mixtures of skew normal factor analyzers (MSNFA) by \citet{J159b}, and more recently, the mixtures of hidden truncation hyperbolic factor analyzers (HTHFA) and scale mixtures of canonical fundamental skew normal factor analyzers (SMCFUSNFA) by \citet{J615} and \citet{J638}, respectively. \newline
There are distinct differences between these factor-analytic models available in the literature, not only in the choice of component densities, but also in where the assumption of skewness is placed in the model (that is, whether it is assumed for the factors and/or for the errors). The former has been considered by \citet{J638} and \citet{J105,J103}, who provide an account of existing models and discuss the links and relationships between the different component densities adopted by these models. Here we consider the implications of placing a skew distribution on the factors, or on the errors, or both. It should be noted that, to our knowledge, in all of the existing models, the assumption of skewness is placed \emph{either} on the factors or the errors, but not both. A summary of these models is given in Tables \ref{tab:SE} and \ref{tab:SF} for models with skew errors (SE) and skew factors (SF), respectively. In order to study the differences between them, we consider yet another model that is more general: a factor analysis model with skew distributions for both the factors and the errors, namely, an SFE model. \newline
In this paper, we study the SE, SF, and SFE models and discuss their properties. We provide a brief background on FA and skew models in Section \ref{s:background}, including summaries in tables listing major existing SE and SF models. The SFE model is introduced in Section \ref{section:SFE}. This model and the nested SE and SF models can be fitted by maximum likelihood via an expectation--maximization (EM) algorithm \citep{J034}; more specifically, an alternating expectation conditional maximization (AECM) algorithm \citep{J631} is used. These algorithms are derived in Section \ref{sec:EM}.
\begin{table}
\centering
\hspace*{-1cm}
\begin{tabular}{|c||c|c|c|c|}
\hline
\textbf{SE Models} & \textbf{Notation} & \textbf{Factors}
& \textbf{Errors} & \textbf{References} \\
\hline \hline
Generalized hyperbolic & MGHFA & SGH & GH & \citet{J620} \\\hline
Generalized hyperbolic skew $t$ & MGHSTFA & $t$ & GHST & \citet{J618} \\\hline
CFUSN & CFUSNFA & normal & CFUSN & \citet{J624} \\\hline
Unrestricted skew $t$ & uMSTFA & $t$ & uMST & \citet{J623} \\\hline
\end{tabular}
\caption{Factor analysis (FA) and Mixtures of factor analyzers (MFA) models
with skew errors.
The notation GH, GHST, CFUSN, and uMST denote the generalized hyperbolic
distribution, the generalized hyperbolic skew $t$-distribution,
the canonical fundamental skew normal distribution,
and the unrestricted multivariate skew $t$-distribution, respectively.
The prefix `S' in SGH denotes the symmetric version of the GH distribution. }
\label{tab:SE} \end{table}
\begin{table}
\centering
\hspace*{-1cm}
\begin{tabular}{|c||c|c|c|c|}
\hline
\textbf{SF Models} & \textbf{Notation} & \textbf{Factors}
& \textbf{Errors} & \textbf{References} \\
\hline \hline
Restricted skew normal & MSNFA & rMSN & normal & \citet{J159b} \\\hline
CFUSH$^*$ & CFUSHFA & CFUSH & SGH & \citet{J615} \\\hline
Restricted skew $t$ & MSTFA & rMST & $t$ & \citet{J160b,J612} \\\hline
SMCFUSN & SMCFUSNFA & SMCFUSN & SMN & \citet{J638} \\\hline
CFUSN & CFUSNFA & CFUSN & normal & \citet{J638} \\\hline
CFUST & CFUSTFA & CFUST & $t$ & \citet{J638} \\\hline
\end{tabular}
\caption{FA and MFA models with skew factors.
The notation rMSN, CFUSH, SMN, and CFUST denote the
restricted multivariate skew normal distribution,
the canonical fundamental skew (symmetric generalized) hyperbolic distribution,
a scale mixture of normal distributions,
and the canonical fundamental skew $t$-distribution, respectively.
$^*$The CFUSH distribution is not identifiable
and hence \citet{J615} imposed constraints on the parameters
to achieve identifiability and called it
the hidden truncation hyperbolic (HTH) distribution.}
\label{tab:SF} \end{table}
\section{\large Background} \label{s:background} \noindent
Skew distributions adopted in the above-mentioned models have a stochastic representation in the form of the convolution of a symmetric random variable and a `skewing' variable. For example, the canonical fundamental skew normal (CFUSN) distribution has a convolution-type stochastic representation given by the sum of a half normal and a normal variate. More formally, let $\mbox{\boldmath $Y$}$ be a $p$-dimensional random vector. If $\mbox{\boldmath $Y$}$ follows a CFUSN distribution, it can be expressed as \begin{eqnarray}
\mbox{\boldmath $Y$} = \mbox{\boldmath $\mu$} + \mbox{\boldmath $\Delta$} |\mbox{\boldmath $U$}| + \mbox{\boldmath $V$}, \label{CFUSN} \end{eqnarray} where $\mbox{\boldmath $U$} \sim N_r(\mbox{\boldmath $0$}, \mbox{\boldmath $I$}_r)$ independently of $\mbox{\boldmath $V$} \sim N_p(\mbox{\boldmath $0$}, \mbox{\boldmath $\Sigma$})$. The parameter $\mbox{\boldmath $\mu$}$ is a $p$-dimensional vector of location parameters and $\mbox{\boldmath $\Delta$}$ is a $p\times r$ matrix of skewness parameters. We write $\mbox{\boldmath $Y$} \sim \mbox{CFUSN}_{p,r}(\mbox{\boldmath $\mu$}, \mbox{\boldmath $\Sigma$}, \mbox{\boldmath $\Delta$})$ if $\mbox{\boldmath $Y$}$ is generated from (\ref{CFUSN}). To simplify the discussion, we shall refer to $\mbox{\boldmath $U$}$ as the \emph{skewing} variable and $\mbox{\boldmath $V$}$ as the \emph{symmetric} variable. We note that all models listed in Table \ref{tab:SF} belong to the class of scale mixtures of CFUSN distributions (SMCFUSN). The latter has a stochastic representation similar to (\ref{CFUSN}), but with an additional (scalar) scaling variable $W$ on the covariance matrix of $\mbox{\boldmath $U$}$ and $\mbox{\boldmath $V$}$; that is, it is given by
$\mbox{\boldmath $Y$} = \mbox{\boldmath $\mu$} + \sqrt{W} (\mbox{\boldmath $\Delta$} |\mbox{\boldmath $U$}| + \mbox{\boldmath $V$})$. On the other hand, the MGHSTFA model is a limiting case of the MGHFA model, which is a variance-mean mixture of the normal distribution given by $\mbox{\boldmath $Y$} = \mbox{\boldmath $\mu$} + W \mbox{\boldmath $\delta$} + \sqrt{W} \mbox{\boldmath $V$}$. To simplify the discussion, we will use the CFUSN distribution as an illustration, but note that analogous arguments apply to the SMCFUSN and GH distributions. \newline
\noindent The traditional factor-analytic (FA) approach (applied to a random vector $\mbox{\boldmath $Y$} \in \mathbb{R}^p$ that has a normal distribution) is to decompose $\mbox{\boldmath $Y$}$ into a lower-dimension vector of factors $\mbox{\boldmath $X$}$ and a vector of errors $\mbox{\boldmath $e$}$ by letting \begin{eqnarray} \mbox{\boldmath $Y$} = \mbox{\boldmath $\mu$} + \mbox{\boldmath $B$} \mbox{\boldmath $X$} + \mbox{\boldmath $e$}, \label{FA} \end{eqnarray} where $\mbox{\boldmath $X$} \sim N_q(\mbox{\boldmath $0$}, \mbox{\boldmath $I$}_q)$ contains the latent \emph{factors} and $\mbox{\boldmath $e$} \sim N_p(\mbox{\boldmath $0$}, \mbox{\boldmath $D$})$ contains the \emph{errors}. The matrix $\mbox{\boldmath $B$}$ is a $p\times q$ matrix of factor loadings and $\mbox{\boldmath $D$}$ is a diagonal matrix ($\mbox{\boldmath $D$} = \mbox{diag}(\mbox{\boldmath $d$})$ and $\mbox{\boldmath $d$} \in\mathbb{R}^p$). The latter matrix $\mbox{\boldmath $D$}$ is taken to be diagonal since it is assumed that the variables in $\mbox{\boldmath $Y$}$ are conditionally independent given $\mbox{\boldmath $X$}$. Thus, the marginal distribution of $\mbox{\boldmath $Y$}$ is given by $\mbox{\boldmath $Y$} \sim N_p(\mbox{\boldmath $\mu$}, \mbox{\boldmath $B$}\mbox{\boldmath $B$}^T + \mbox{\boldmath $D$})$. In the case where $q>1$, the FA model (\ref{FA}) has an identifiability issue due to $\mbox{\boldmath $B$}\mbox{\boldmath $X$}$ being rotationally invariant, that is, the model is still satisfied if $\mbox{\boldmath $X$}$ is pre-multiplied by an orthogonal matrix of order $q$ and $\mbox{\boldmath $B$}$ is post-multiplied by the transpose of the same matrix. A common approach is to impose $q(q-1)/2$ constraints on $\mbox{\boldmath $B$}$ so that (\ref{FA}) can be uniquely defined.
It is apparent from the above that the CFUSN model (\ref{CFUSN}) has the same form as (\ref{FA}), by considering $|\mbox{\boldmath $U$}|$ as `factors' and $\mbox{\boldmath $V$}$ as `errors'. This implies that the CFUSN model is a factor model with half-normal `factors' and normal `errors'. To avoid confusion, we shall refer to $\mbox{\boldmath $X$}$ in (\ref{FA}) as \emph{factors} and $\mbox{\boldmath $e$}$ in (\ref{FA}) as \emph{errors}.
It is clear from the above definitions that there can be different ways to generalize the FA model to a CFUSN factor analysis model, by combining (\ref{CFUSN}) and (\ref{FA}) in different ways. An immediate question is whether to incorporate the factor analytic form for the distribution of the skewing variables or for the symmetric variables, or even for both. We will now consider each of these cases. \newline
\section{\large Three formulations of skew factor models} \label{section:SFE}
\subsection{The skew errors (SE) model} \label{sec:SE}
One of the more straightforward approaches is to decompose the symmetric latent variable (that is, the `error' term $\mbox{\boldmath $V$}$ in (\ref{CFUSN})) into the factor-analytic form (\ref{FA}). Hence, the `factors' have a normal distribution (in the case of the CFUSN model), whereas the errors have a skew distribution. More specifically, we take $\mbox{\boldmath $V$} = \mbox{\boldmath $B$}\mbox{\boldmath $X$}+\mbox{\boldmath $e$}$, so that $\mbox{\boldmath $V$}\sim N_p(\mbox{\boldmath $0$}, \mbox{\boldmath $B$}\mbox{\boldmath $B$}^T + \mbox{\boldmath $D$})$. Thus, $\mbox{\boldmath $\Sigma$} = \mbox{\boldmath $B$}\mbox{\boldmath $B$}^T + \mbox{\boldmath $D$}$.
Proceeding from (\ref{CFUSN}), we see that \begin{eqnarray}
\mbox{\boldmath $Y$} &=& \mbox{\boldmath $\mu$} + \mbox{\boldmath $\Delta$} |\mbox{\boldmath $U$}| + \mbox{\boldmath $V$} \nonumber\\
&=& \mbox{\boldmath $\mu$} + \mbox{\boldmath $\Delta$} |\mbox{\boldmath $U$}| + \left(\mbox{\boldmath $B$} \mbox{\boldmath $X$} + \mbox{\boldmath $e$}\right) \label{intermediate}\\
&=& \mbox{\boldmath $\mu$} + \mbox{\boldmath $B$} \mbox{\boldmath $X$} + \mbox{\boldmath $\epsilon$}, \label{error} \end{eqnarray}
where now the `errors' $\mbox{\boldmath $\epsilon$} = \mbox{\boldmath $\Delta$} |\mbox{\boldmath $U$}| + \mbox{\boldmath $e$}$ follow a $CFUSN_{p,r}(\mbox{\boldmath $0$}, \mbox{\boldmath $D$}, \mbox{\boldmath $\Delta$})$ distribution and the `factors' $\mbox{\boldmath $X$} \sim N_q(\mbox{\boldmath $0$}, \mbox{\boldmath $I$}_q)$ remain unchanged (from the normal factor model (\ref{FA})). It follows that the marginal density of $\mbox{\boldmath $Y$}$ is \begin{eqnarray} \mbox{\boldmath $Y$} &\sim& CFUSN_{p,r}(\mbox{\boldmath $\mu$}, \mbox{\boldmath $B$}\mbox{\boldmath $B$}^T+\mbox{\boldmath $D$}, \mbox{\boldmath $\Delta$}). \label{error3} \end{eqnarray} Alternatively, we may also consider taking $\mbox{\boldmath $e$}$ in (\ref{FA}) to have a (central) \linebreak CFUSN distribution with stochastic representation given by (\ref{CFUSN}) to arrive at an equivalent expression to (\ref{error}).
With this model, $\mbox{\boldmath $Y$}$, $\mbox{\boldmath $X$}$, and $\mbox{\boldmath $\epsilon$}$ have expected value given by $\mbox{\boldmath $\mu$}+\sqrt{2/\pi}\mbox{\boldmath $\Delta$} \mbox{\boldmath $1$}_r$, $\mbox{\boldmath $0$}$, and $\sqrt{2/\pi}\mbox{\boldmath $\Delta$}\mbox{\boldmath $1$}_r$, respectively. Their corresponding covariance matrices are given by $\mbox{\boldmath $B$}\mbox{\boldmath $B$}^T+\mbox{\boldmath $D$} + (1-2/\pi)\mbox{\boldmath $\Delta$}\mbox{\boldmath $\Delta$}^T$, $\mbox{\boldmath $I$}_q$, and $\mbox{\boldmath $D$} + (1-2/\pi)\mbox{\boldmath $\Delta$}\mbox{\boldmath $\Delta$}^T$, respectively.
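As a quick check of these moments, note that each component of $|\mbox{\boldmath $U$}|$ is half-normal with mean $\sqrt{2/\pi}$ and variance $1-2/\pi$, so that, from (\ref{error}) and the independence of $\mbox{\boldmath $U$}$, $\mbox{\boldmath $X$}$, and $\mbox{\boldmath $e$}$, \begin{eqnarray} E(\mbox{\boldmath $Y$}) &=& \mbox{\boldmath $\mu$} + \mbox{\boldmath $B$}\, E(\mbox{\boldmath $X$}) + \mbox{\boldmath $\Delta$}\, E(|\mbox{\boldmath $U$}|) \;=\; \mbox{\boldmath $\mu$} + \sqrt{2/\pi}\, \mbox{\boldmath $\Delta$} \mbox{\boldmath $1$}_r, \nonumber \\ \mbox{cov}(\mbox{\boldmath $Y$}) &=& \mbox{\boldmath $B$}\mbox{\boldmath $B$}^T + \mbox{\boldmath $D$} + (1-2/\pi)\, \mbox{\boldmath $\Delta$}\mbox{\boldmath $\Delta$}^T. \nonumber \end{eqnarray}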
An advantage of the SE model is that it is relatively straightforward to construct and facilitates easy implementation of the AECM algorithm. In the mixture model case, the latter is essentially the simple combination of the EM implementation for the CFUSN mixture model and the MFA model, where the first cycle is identical to that for the EM algorithm for the CFUSN mixture model (except that $\mbox{\boldmath $\Sigma$}$ is not estimated in the M-step) and the second cycle is identical to that for the MFA model.
Existing SE models include the (unrestricted) skew $t$-MFA model \citep{J623}, the generalized hyperbolic skew $t$-MFA model \citep{J623}, and the specialized generalized hyperbolic MFA model \citep{J401}; see Table \ref{tab:SE}. With these models, the errors are assumed to follow distributions such as the (unrestricted) skew $t$, generalized hyperbolic skew $t$, variance gamma, and hyperbolic distributions. In each case, the factors follow the corresponding symmetric version of the distribution of the errors.
\subsection{The skew factors (SF) model} \label{sec:SF}
Perhaps a more natural approach is to replace the factors in (\ref{FA}) with a CFUSN random variable. In this case, we let $\mbox{\boldmath $X$}$ in (\ref{FA}) have the stochastic representation (\ref{CFUSN}). Note that we are only introducing skewness to $\mbox{\boldmath $X$}$ and hence we take $\mbox{\boldmath $X$} \sim CFUSN_{q, r}(\mbox{\boldmath $0$}, \mbox{\boldmath $I$}_q, \mbox{\boldmath $\Delta$})$, that is, $\mbox{\boldmath $X$}$ has location parameter $\mbox{\boldmath $0$}$ and scale matrix $\mbox{\boldmath $I$}_q$. Thus, $\mbox{\boldmath $U$} \sim N_r(\mbox{\boldmath $0$}, \mbox{\boldmath $I$}_r)$ and $\mbox{\boldmath $V$} \sim N_q(\mbox{\boldmath $0$}, \mbox{\boldmath $I$}_q)$. However, it is important to note that $E(\mbox{\boldmath $X$}) \neq \mbox{\boldmath $0$}$ and $\mbox{cov}(\mbox{\boldmath $X$}) \neq \mbox{\boldmath $I$}_q$. The reader is referred to \citet{J008} for the properties of the CFUSN distribution.
It follows that \begin{eqnarray} \mbox{\boldmath $Y$} &=& \mbox{\boldmath $\mu$} + \mbox{\boldmath $B$} \mbox{\boldmath $X$} + \mbox{\boldmath $e$} \label{factor} \\
&=& \mbox{\boldmath $\mu$} + \mbox{\boldmath $B$} \left(\mbox{\boldmath $\Delta$} |\mbox{\boldmath $U$}| + \mbox{\boldmath $V$}\right) + \mbox{\boldmath $e$} \nonumber\\
&=& \mbox{\boldmath $\mu$} + \left(\mbox{\boldmath $B$} \mbox{\boldmath $\Delta$}\right) |\mbox{\boldmath $U$}| + \left(\mbox{\boldmath $B$} \mbox{\boldmath $V$} + \mbox{\boldmath $e$}\right) \label{middle} \\
&=& \mbox{\boldmath $\mu$} + \mbox{\boldmath $\Delta$}^* |\mbox{\boldmath $U$}| + \left(\mbox{\boldmath $B$} \mbox{\boldmath $V$} + \mbox{\boldmath $e$}\right), \label{factor2} \end{eqnarray} where $\mbox{\boldmath $\Delta$}^* = \mbox{\boldmath $B$} \mbox{\boldmath $\Delta$}$, the `factors' $\mbox{\boldmath $X$} \sim CFUSN_{q,r}(\mbox{\boldmath $0$}, \mbox{\boldmath $I$}_q, \mbox{\boldmath $\Delta$})$, and the `errors' $\mbox{\boldmath $e$} \sim N_p(\mbox{\boldmath $0$}, \mbox{\boldmath $D$})$ remain unchanged (from the normal factor model (\ref{FA})). It follows that the marginal density of $\mbox{\boldmath $Y$}$ is \begin{eqnarray} \mbox{\boldmath $Y$} &\sim& CFUSN_{p,r}(\mbox{\boldmath $\mu$}, \mbox{\boldmath $B$}\mbox{\boldmath $B$}^T+\mbox{\boldmath $D$}, \mbox{\boldmath $B$}\mbox{\boldmath $\Delta$}), \label{factor3} \end{eqnarray} which is almost the same as the SE case (\ref{error3}). If we replace $\mbox{\boldmath $\Delta$}$ in (\ref{error3}) with $\mbox{\boldmath $\Delta$}^* = \mbox{\boldmath $B$} \mbox{\boldmath $\Delta$}$, we obtain the SF model from the SE model. \newline
With this model, $\mbox{\boldmath $Y$}$, $\mbox{\boldmath $X$}$, and $\mbox{\boldmath $e$}$ have expected value given by $\mbox{\boldmath $\mu$}+\sqrt{2/\pi}\mbox{\boldmath $B$}\mbox{\boldmath $\Delta$} \mbox{\boldmath $1$}_r$, $\sqrt{2/\pi}\mbox{\boldmath $\Delta$}\mbox{\boldmath $1$}_r$, and $\mbox{\boldmath $0$}$, respectively. Their corresponding covariance matrix is given by $\mbox{\boldmath $B$}\mbox{\boldmath $B$}^T+\mbox{\boldmath $D$} + (1-2/\pi)\mbox{\boldmath $B$}\mbox{\boldmath $\Delta$}\mbox{\boldmath $\Delta$}^T\mbox{\boldmath $B$}^T$, $\mbox{\boldmath $I$}_q + (1-2/\pi)\mbox{\boldmath $\Delta$}\mbox{\boldmath $\Delta$}^T$, and $\mbox{\boldmath $D$}$, respectively. Some authors choose to normalize the factors so that $E(\mbox{\boldmath $X$})=\mbox{\boldmath $0$}$ and $\mbox{cov}(\mbox{\boldmath $X$}) = \mbox{\boldmath $I$}_q$; see, for example, the MSNFA model \citep{J159b}, the MSTFA model \citep{J160b,J612}, and the CFUSSH model \citep{J615}. In this case, the distribution of $\mbox{\boldmath $X$}$ needs to be appropriately reparametrized. It follows that the mean and covariance matrix of $\mbox{\boldmath $Y$}$ do not involve $\mbox{\boldmath $\Delta$}$ and are the same as those for the corresponding symmetric MFA model. In the case of a CFUSNFA model, for example, the vector of factors $\mbox{\boldmath $X$}$ has the distribution $CFUSN_{q,r}(-\sqrt{2/\pi}\mbox{\boldmath $A$}^{-\frac{1}{2}}\mbox{\boldmath $\Delta$}\mbox{\boldmath $1$}_r, \mbox{\boldmath $A$}^{-1}, \mbox{\boldmath $A$}^{-\frac{1}{2}}\mbox{\boldmath $\Delta$})$, where $\mbox{\boldmath $A$} = \mbox{\boldmath $I$}_q + (1-2/\pi)\mbox{\boldmath $\Delta$}\mbox{\boldmath $\Delta$}^T$, and $E(\mbox{\boldmath $Y$}) = \mbox{\boldmath $\mu$}$ and $\mbox{cov}(\mbox{\boldmath $Y$}) = \mbox{\boldmath $B$}\mbox{\boldmath $B$}^T +\mbox{\boldmath $D$}$. \newline
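The normalized-factor parametrization can also be verified by simulation (a sketch with hypothetical sizes $q=2$, $r=2$ and an illustrative $\mbox{\boldmath $\Delta$}$): with $\mbox{\boldmath $A$} = \mbox{\boldmath $I$}_q + (1-2/\pi)\mbox{\boldmath $\Delta$}\mbox{\boldmath $\Delta$}^T$, the reparametrized factors should have zero mean and identity covariance.

```python
import numpy as np

# Check that X ~ CFUSN(-sqrt(2/pi) A^{-1/2} Delta 1_r, A^{-1}, A^{-1/2} Delta)
# has E(X) = 0 and cov(X) = I_q, where A = I_q + (1 - 2/pi) Delta Delta^T.
rng = np.random.default_rng(1)
q, r, n = 2, 2, 1_000_000
Delta = np.array([[0.8, -0.3], [0.2, 0.6]])

A = np.eye(q) + (1 - 2 / np.pi) * Delta @ Delta.T
w, V = np.linalg.eigh(A)
A_inv_half = V @ np.diag(w ** -0.5) @ V.T      # symmetric square root of A^{-1}

loc = -np.sqrt(2 / np.pi) * A_inv_half @ Delta @ np.ones(r)
U = np.abs(rng.normal(size=(n, r)))
V0 = rng.normal(size=(n, q))
X = loc + U @ (A_inv_half @ Delta).T + V0 @ A_inv_half.T

assert np.allclose(X.mean(axis=0), 0.0, atol=1e-2)
assert np.allclose(np.cov(X.T), np.eye(q), atol=1e-2)
```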
Existing SF models include, for example, the (restricted) skew normal MFA model \citep{J159b}, the (restricted) skew $t$-MFA model \citep{J612}, the canonical fundamental skew hyperbolic MFA model \citep{J615}, and the canonical fundamental skew $t$-MFA model \citep{J638}; see Table \ref{tab:SF}. As noted in \citet{J638}, the above-mentioned models belong to the class of scale mixtures of CFUSN factor analyzers. \newline
We can see from the above that the SE and SF models are very similar. Indeed, they share an intermediate form, given by (\ref{intermediate}) and (\ref{middle}). Consider the intermediate representation \begin{eqnarray}
\mbox{\boldmath $Y$} = \mbox{\boldmath $\mu$} + \mbox{\boldmath $\Delta$}_0 |\mbox{\boldmath $U$}| + \mbox{\boldmath $B$} \mbox{\boldmath $V$} + \mbox{\boldmath $e$}, \label{both} \end{eqnarray} where $\mbox{\boldmath $\Delta$}_0$ is a $p \times r$ matrix and $\mbox{\boldmath $U$}$, $\mbox{\boldmath $B$}$, $\mbox{\boldmath $V$}$, and $\mbox{\boldmath $e$}$ are as defined in (\ref{CFUSN}) and (\ref{FA}) above. If we take $\mbox{\boldmath $V$}$ as the factors, we obtain the SE model. In the case of the SF model, we include the skewness term (that is, the second term on the right-hand side of (\ref{both})) as part of the factors and hence we write $\mbox{\boldmath $\Delta$}_0$ in terms of $\mbox{\boldmath $B$}$ and $\mbox{\boldmath $\Delta$}$; that is, $\mbox{\boldmath $\Delta$}_0 = \mbox{\boldmath $B$}\mbox{\boldmath $\Delta$}$. Hence, for both the SE and SF models, the unknown parameters are given by $\mbox{\boldmath $\theta$} = (\mbox{\boldmath $\mu$}, \mbox{\boldmath $B$}, \mbox{\boldmath $D$}, \mbox{\boldmath $\Delta$})$, but $\mbox{\boldmath $\Delta$}$ is a $p\times r$ matrix in the SE case, whereas it is a $q\times r$ matrix in the SF case. Due to this, the SF model has slightly fewer free parameters than the SE model (assuming $q < p$).
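The parameter-count comparison can be made concrete with a small helper (a sketch; it counts only $\mbox{\boldmath $\mu$}$, $\mbox{\boldmath $B$}$, the diagonal of $\mbox{\boldmath $D$}$, and $\mbox{\boldmath $\Delta$}$ per component, ignoring any identifiability constraints on $\mbox{\boldmath $B$}$):

```python
# Free parameters in theta = (mu, B, D, Delta) for the SE and SF models:
# mu has p entries, B has p*q, the diagonal D has p, and Delta is
# p x r in the SE case but q x r in the SF case.
def n_free_params(p: int, q: int, r: int, model: str) -> int:
    base = p + p * q + p
    skew = p * r if model == "SE" else q * r
    return base + skew

# With q < p, SF saves (p - q) * r parameters relative to SE.
diff = n_free_params(10, 3, 3, "SE") - n_free_params(10, 3, 3, "SF")
assert diff == (10 - 3) * 3
```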
\subsection{The skew factors and errors (SFE) model} \label{sec:SFE}
The third and more involved approach is to allow both the factors $\mbox{\boldmath $X$}$ and the errors $\mbox{\boldmath $e$}$ in (\ref{FA}) to have a CFUSN distribution, that is, combining the SE and SF approaches. In this case, we take $\mbox{\boldmath $X$} \sim CFUSN_{q, r}(\mbox{\boldmath $0$}, \mbox{\boldmath $I$}_q, \mbox{\boldmath $\Delta$}_0)$ as in the case of the SF model, but we also let $\mbox{\boldmath $e$}$ follow a CFUSN distribution with skewness matrix $\mbox{\boldmath $\Delta$}_1$, that is, $\mbox{\boldmath $e$} \sim CFUSN_{p, s}(\mbox{\boldmath $0$}, \mbox{\boldmath $D$}, \mbox{\boldmath $\Delta$}_1)$. It is clear that this SFE model is a generalization of the SE and SF models, which can be obtained by taking $\mbox{\boldmath $\Delta$}_0 = \mbox{\boldmath $0$}$ and $\mbox{\boldmath $\Delta$}_1=\mbox{\boldmath $0$}$, respectively.
It follows that the SFE model is given by \begin{eqnarray} \mbox{\boldmath $Y$} &=& \mbox{\boldmath $\mu$} + \mbox{\boldmath $B$} \mbox{\boldmath $X$} + \mbox{\boldmath $e$} \\
&=& \mbox{\boldmath $\mu$} + \mbox{\boldmath $B$} \left(\mbox{\boldmath $\Delta$}_0 |\mbox{\boldmath $U$}_0| + \mbox{\boldmath $V$}_0\right)
+ \left(\mbox{\boldmath $\Delta$}_1 |\mbox{\boldmath $U$}_1| + \mbox{\boldmath $V$}_1\right) \nonumber\\
&=& \mbox{\boldmath $\mu$} + \begin{bmatrix} \mbox{\boldmath $B$}\mbox{\boldmath $\Delta$}_0 & \mbox{\boldmath $\Delta$}_1\end{bmatrix}
\begin{bmatrix} |\mbox{\boldmath $U$}_0| \\ |\mbox{\boldmath $U$}_1| \end{bmatrix}
+ \begin{bmatrix} \mbox{\boldmath $B$} & \mbox{\boldmath $I$}_p\end{bmatrix}
\begin{bmatrix} \mbox{\boldmath $V$}_0 \\ \mbox{\boldmath $V$}_1\end{bmatrix}. \label{error_factor} \end{eqnarray} In this case, we have a linear combination of independent CFUSN random vectors. As the CFUSN distribution is closed under convolution, $\mbox{\boldmath $Y$}$ has a CFUSN distribution. This can also be seen from (\ref{error_factor}) above, where it can be deduced that \begin{eqnarray} \mbox{\boldmath $Y$} &\sim& CFUSN_{p,r+s}\left(\mbox{\boldmath $\mu$}, \mbox{\boldmath $B$}\mbox{\boldmath $B$}^T + \mbox{\boldmath $D$}, \tilde{\mbox{\boldmath $\Delta$}} \right), \label{SFE} \end{eqnarray} where $\tilde{\mbox{\boldmath $\Delta$}} = \begin{bmatrix} \mbox{\boldmath $B$}\mbox{\boldmath $\Delta$}_0 & \mbox{\boldmath $\Delta$}_1\end{bmatrix}$.
With this model, $\mbox{\boldmath $Y$}$, $\mbox{\boldmath $X$}$, and $\mbox{\boldmath $e$}$ have expected value given by \linebreak $\mbox{\boldmath $\mu$}+\sqrt{2/\pi}(\mbox{\boldmath $B$}\mbox{\boldmath $\Delta$}_0\mbox{\boldmath $1$}_r + \mbox{\boldmath $\Delta$}_1\mbox{\boldmath $1$}_s)$, $\sqrt{2/\pi}\mbox{\boldmath $\Delta$}_0\mbox{\boldmath $1$}_r$, and $\sqrt{2/\pi}\mbox{\boldmath $\Delta$}_1\mbox{\boldmath $1$}_s$, respectively. Their corresponding covariance matrix is given, respectively, by \begin{eqnarray} \mbox{cov}(\mbox{\boldmath $Y$}) &=& \mbox{\boldmath $B$}\mbox{\boldmath $B$}^T+\mbox{\boldmath $D$} + \left(1-\frac{2}{\pi}\right)
\left(\mbox{\boldmath $B$}\mbox{\boldmath $\Delta$}_0\mbox{\boldmath $\Delta$}_0^T\mbox{\boldmath $B$}^T+\mbox{\boldmath $\Delta$}_1\mbox{\boldmath $\Delta$}_1^T\right), \nonumber\\ \mbox{cov}(\mbox{\boldmath $X$}) &=& \mbox{\boldmath $I$}_q + \left(1-\frac{2}{\pi}\right)\mbox{\boldmath $\Delta$}_0\mbox{\boldmath $\Delta$}_0^T, \nonumber\\ \mbox{cov}(\mbox{\boldmath $e$}) &=& \mbox{\boldmath $D$} + \left(1-\frac{2}{\pi}\right)\mbox{\boldmath $\Delta$}_1\mbox{\boldmath $\Delta$}_1^T. \end{eqnarray}
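As for the SE and SF cases, these SFE moments can be verified by Monte Carlo simulation from the stochastic representation (a sketch; the sizes $p=3$, $q=2$, $r=2$, $s=1$ and all parameter values are hypothetical):

```python
import numpy as np

# Simulate Y = mu + (B Delta0)|U0| + Delta1 |U1| + B V0 + V1,
# with V1 ~ N(0, D), and compare against the stated mean and covariance.
rng = np.random.default_rng(5)
p, q, r, s, n = 3, 2, 2, 1, 1_000_000

mu = np.array([0.5, -1.0, 2.0])
B = rng.normal(size=(p, q)) * 0.5
D = np.diag([0.2, 0.3, 0.1])
Delta0 = rng.normal(size=(q, r)) * 0.4
Delta1 = rng.normal(size=(p, s)) * 0.4

U0 = np.abs(rng.normal(size=(n, r)))
U1 = np.abs(rng.normal(size=(n, s)))
V0 = rng.normal(size=(n, q))
V1 = rng.normal(size=(n, p)) * np.sqrt(np.diag(D))
Y = mu + U0 @ (B @ Delta0).T + U1 @ Delta1.T + V0 @ B.T + V1

EY = mu + np.sqrt(2 / np.pi) * (B @ Delta0 @ np.ones(r) + Delta1 @ np.ones(s))
covY = (B @ B.T + D
        + (1 - 2 / np.pi) * (B @ Delta0 @ Delta0.T @ B.T + Delta1 @ Delta1.T))

assert np.allclose(Y.mean(axis=0), EY, atol=1e-2)
assert np.allclose(np.cov(Y.T), covY, atol=1e-2)
```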
In the case of skew elliptical distributions, the requirement for closure under convolution is that $\mbox{\boldmath $U$}$ and $\mbox{\boldmath $e$}$ are uncorrelated. Hence, a similar model can be constructed using these distributions. For example, in the case of a joint CFUST distribution for $\mbox{\boldmath $U$}$ and $\mbox{\boldmath $e$}$ (not independent but uncorrelated), we have that $\mbox{\boldmath $Y$}$ also follows a CFUST distribution. In a similar way, an SFE model can be constructed using a CFUSH distribution.
\section{Parameter estimation via the ECM algorithm} \label{sec:EM}
All three formulations of skew factor models described above can be fitted via an EM algorithm, namely, an AECM algorithm. We will consider the mixture model case for generality. In this case, the density of a $g$-component MFA model is given by \begin{eqnarray} f(\mbox{\boldmath $y$}; \mbox{\boldmath $\Psi$}) &=& \sum_{i=1}^g \pi_i f(\mbox{\boldmath $y$}; \mbox{\boldmath $\theta$}_i), \label{eq:MFA} \end{eqnarray} where $f(\mbox{\boldmath $y$}; \mbox{\boldmath $\theta$}_i)$ denotes the density of the $i$th component of the mixture model with parameters $\mbox{\boldmath $\theta$}_i$ $(i=1, \ldots, g)$. The vector $\mbox{\boldmath $\Psi$}$ contains all unknown parameters of the mixture model. The $\pi_i$ $(i=1, \ldots, g)$ denote the mixing proportions, which are non-negative and sum to one.
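The shape of a finite mixture density of this form can be sketched numerically; the following uses Gaussian component densities purely as a stand-in for the skew component densities of the actual models (illustrative parameter values):

```python
import numpy as np
from scipy.stats import multivariate_normal

# A g-component mixture density: f(y) = sum_i pi_i * f_i(y; theta_i),
# with Gaussian components standing in for the skew component densities.
def mixture_density(y, pis, mus, Sigmas):
    return sum(pi * multivariate_normal.pdf(y, mean=m, cov=S)
               for pi, m, S in zip(pis, mus, Sigmas))

pis = [0.4, 0.6]                                # non-negative, summing to one
mus = [np.zeros(2), np.array([3.0, 3.0])]
Sigmas = [np.eye(2), 2.0 * np.eye(2)]

val = mixture_density(np.array([1.0, 1.0]), pis, mus, Sigmas)
assert val > 0.0 and abs(sum(pis) - 1.0) < 1e-12
```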
In the first cycle of the AECM algorithm, the missing data include the latent component labels $z_{ij}$ and the latent skewing variables $\mbox{\boldmath $U$}_{ij}$. The M-step in this cycle involves updating $\pi_i$, $\mbox{\boldmath $\mu$}_i$, $\nu_i$, and also $\mbox{\boldmath $\Delta$}_i$ (in the SE case only). In the second cycle of the AECM algorithm, the missing data include the $z_{ij}$ and the latent factors $\mbox{\boldmath $X$}_{ij}$. The parameters related to the latent factors, namely $\mbox{\boldmath $B$}_i$ and $\mbox{\boldmath $D$}_i$, are updated in the M-step of this cycle. In the case of the SF model, $\mbox{\boldmath $\Delta$}_i$ is also updated in this cycle.
For generality, we henceforth consider the case of a mixture of CFUST factor analyzers (CFUSTFA). The CFUSN factor analysis model described above is a limiting case of the CFUSTFA model, obtained as $\nu \rightarrow \infty$ with $g=1$ component. An outline of the AECM algorithm for the SE, SF, and SFE models is described below.
\subsection{The skew errors (SE) model} \label{s:SE}
\noindent The SE model admits a straightforward hierarchical representation: \begin{eqnarray} \mbox{\boldmath $Y$}_{ij} \mid \mbox{\boldmath $U$}_{ij}, w_{ij}
&\sim& N_p\left(\mbox{\boldmath $\mu$}_i + \mbox{\boldmath $\Delta$}_i |\mbox{\boldmath $U$}_{ij}|, \frac{1}{w_{ij}} \mbox{\boldmath $\Sigma$}_i\right), \nonumber\\ \mbox{\boldmath $U$}_{ij} \mid w_{ij}
&\sim& N_r \left(\mbox{\boldmath $0$}, \frac{1}{w_{ij}} \mbox{\boldmath $I$}_r\right), \nonumber\\ w_{ij} &\sim& \mbox{gamma}\left(\frac{\nu_i}{2}, \frac{\nu_i}{2}\right), \label{SE} \end{eqnarray} where $\mbox{\boldmath $\Sigma$}_i = \mbox{\boldmath $B$}_i\mbox{\boldmath $B$}_i^T + \mbox{\boldmath $D$}_i$. \newline
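Sampling from this hierarchy is straightforward; the following sketch draws a single observation for one component (the sizes $p=3$, $q=2$, $r=2$ and all parameter values are hypothetical):

```python
import numpy as np

# Draw one observation from the SE hierarchy:
# w ~ gamma(nu/2, nu/2), U | w ~ N_r(0, I_r / w),
# Y | U, w ~ N_p(mu + Delta |U|, Sigma / w), with Sigma = B B^T + D.
rng = np.random.default_rng(2)
p, q, r, nu = 3, 2, 2, 5.0
mu = np.zeros(p)
B = rng.normal(size=(p, q)) * 0.5
D = np.diag([0.2, 0.3, 0.1])
Sigma = B @ B.T + D
Delta = rng.normal(size=(p, r)) * 0.4

w = rng.gamma(shape=nu / 2, scale=2 / nu)        # rate nu/2 -> scale 2/nu
U = rng.multivariate_normal(np.zeros(r), np.eye(r) / w)
y = rng.multivariate_normal(mu + Delta @ np.abs(U), Sigma / w)
```

Repeating this draw many times yields a sample from the (marginal) CFUST distribution of the component.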
\subsubsection{Cycle One}
In the first cycle, the missing data are $Z_{ij}$, $\mbox{\boldmath $U$}_{ij}$, and $w_{ij}$. This is essentially identical to a traditional FM-CFUST model. Hence from \citet{J164}, the conditional expectations required for the E-step are given by \begin{eqnarray} z_{ij}^{(k)} &=& E_{\Psi^{(k)}} \left[z_{ij}=1 \mid \mbox{\boldmath $y$}_j\right]
= \frac{\pi_i^{(k)} f_{\mbox{\tiny{CFUST}}_{p,r}}
(\mbox{\boldmath $y$}_j; \mbox{\boldmath $\mu$}_i^{(k)}, \mbox{\boldmath $\Sigma$}_i^{(k)}, \mbox{\boldmath $\Delta$}_i^{(k)}, \nu_i^{(k)})}
{\sum_{h=1}^g \pi_h^{(k)} f_{\mbox{\tiny{CFUST}}_{p,r}}
(\mbox{\boldmath $y$}_j; \mbox{\boldmath $\mu$}_h^{(k)}, \mbox{\boldmath $\Sigma$}_h^{(k)}, \mbox{\boldmath $\Delta$}_h^{(k)}, \nu_h^{(k)})},
\label{e0}\\ w_{ij}^{(k)} &=& E_{\Psi^{(k)}} \left[w_{ij} \mid \mbox{\boldmath $y$}_j, z_{ij}=1\right]
\nonumber\\&&
= \left(\frac{\nu_i^{(k)} + p}{\nu_i^{(k)} + d_{ij}^{(k)}}\right)
\frac{T_r\left(\mbox{\boldmath $q$}_{ij}^{(k)} \sqrt{\frac{\nu_i^{(k)}+p+2}{\nu_i^{(k)}+d_{ij}^{(k)}}};
\mbox{\boldmath $0$}, \mbox{\boldmath $\Lambda$}_i^{(k)}, \nu_i^{(k)}+p+2\right)}
{T_r\left(\mbox{\boldmath $q$}_{ij}^{(k)} \sqrt{\frac{\nu_i^{(k)}+p}{\nu_i^{(k)}+d_{ij}^{(k)}}};
\mbox{\boldmath $0$}, \mbox{\boldmath $\Lambda$}_i^{(k)}, \nu_i^{(k)}+p\right)},
\label{e1}\\ \mbox{\boldmath $u$}_{ij}^{(k)} &=& E_{\Psi^{(k)}} \left[w_{ij} \mbox{\boldmath $U$}_{ij}
\mid \mbox{\boldmath $y$}_j, z_{ij}=1 \right]
= w_{ij}^{(k)} E\left[ \mbox{\boldmath $a$}_{ij}^{(k)}\right], \label{e4}\\ \mbox{\boldmath $u$}_{ij}^{*^{(k)}} &=& E_{\Psi^{(k)}} \left[w_{ij} \mbox{\boldmath $U$}_{ij} \mbox{\boldmath $U$}_{ij}^T
\mid \mbox{\boldmath $y$}_j, z_{ij}=1 \right]
= w_{ij}^{(k)} E\left[ \mbox{\boldmath $a$}_{ij}^{(k)} \mbox{\boldmath $a$}_{ij}^{(k)^T} \right], \label{e5} \end{eqnarray} where \begin{eqnarray} d_{ij}^{(k)} &=& (\mbox{\boldmath $y$}_j - \mbox{\boldmath $\mu$}_i^{(k)})^T \mbox{\boldmath $\Omega$}_i^{(k)^{-1}} (\mbox{\boldmath $y$}_j - \mbox{\boldmath $\mu$}_i^{(k)}),
\nonumber\\ \mbox{\boldmath $q$}_{ij}^{(k)} &=& \mbox{\boldmath $\Delta$}_i^{(k)^T} \mbox{\boldmath $\Omega$}_i^{(k)^{-1}} (\mbox{\boldmath $y$}_j-\mbox{\boldmath $\mu$}_i^{(k)}),
\nonumber\\ \mbox{\boldmath $\Lambda$}_i^{(k)} &=& \mbox{\boldmath $I$}_r - \mbox{\boldmath $\Delta$}_i^{(k)^T} \mbox{\boldmath $\Omega$}_i^{(k)^{-1}} \mbox{\boldmath $\Delta$}_i^{(k)},
\nonumber\\ \mbox{\boldmath $\Omega$}_i^{(k)} &=& \mbox{\boldmath $\Sigma$}_i^{(k)} + \mbox{\boldmath $\Delta$}_i^{(k)} \mbox{\boldmath $\Delta$}_i^{(k)^T},
\nonumber \end{eqnarray} and \begin{eqnarray} \mbox{\boldmath $a$}_{ij}^{(k)} &\sim& tt_r\left(\mbox{\boldmath $q$}_{ij}^{(k)},
\left(\frac{\nu_i^{(k)} + d_{ij}^{(k)}}{\nu_i^{(k)}+p+2}\right) \mbox{\boldmath $\Lambda$}_i^{(k)},
\nu_i^{(k)}+p+2; \mathbb{R}^+\right). \end{eqnarray} In the above, $f_{\mbox{\tiny{CFUST}}}(\cdot)$ denotes the density of a CFUST distribution, $T_r(\cdot)$ denotes the distribution function of an $r$-dimensional $t$-distribution, and $tt_r(\cdot; \mathbb{R}^+)$ denotes the density of the $r$-dimensional $t$-distribution truncated to the positive orthant. \newline
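For intuition, the conditional expectation (\ref{e1}) can be evaluated directly when $r = 1$, where the $r$-dimensional $t$ distribution function reduces to a univariate $t$ cdf (a sketch; the values of $d_{ij}$, $q_{ij}$, and $\Lambda_i$ below are hypothetical):

```python
import numpy as np
from scipy.stats import t as t_dist

# E[w_ij | y_j, z_ij = 1] for r = 1: a ratio of univariate t cdfs times
# (nu + p) / (nu + d); T_1(x; 0, lam, df) = t.cdf(x / sqrt(lam), df).
def w_update(d, q_scalar, lam, p, nu):
    num = t_dist.cdf(q_scalar * np.sqrt((nu + p + 2) / (nu + d)) / np.sqrt(lam),
                     nu + p + 2)
    den = t_dist.cdf(q_scalar * np.sqrt((nu + p) / (nu + d)) / np.sqrt(lam),
                     nu + p)
    return (nu + p) / (nu + d) * num / den

w = w_update(d=2.5, q_scalar=0.7, lam=0.8, p=3, nu=5.0)
assert w > 0.0
```

For $r > 1$, the same ratio involves multivariate $t$ probabilities, which must be evaluated numerically.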
\noindent The M-step in this cycle is the same as in the case of the traditional FM-CFUST model, except that the update of the scale matrix $\mbox{\boldmath $\Sigma$}_i$ is not used (but still needs to be calculated as it is required for the M-step in the second cycle). It follows that the M-step is given by \begin{eqnarray} \pi_i^{(k+1)} &=& \frac{1}{n} \sum_{j=1}^n z_{ij}^{(k)}, \nonumber\\ \mbox{\boldmath $\mu$}_i^{(k+1)} &=& \frac{\sum_{j=1}^n z_{ij}^{(k)} w_{ij}^{(k)}\mbox{\boldmath $y$}_j
- \mbox{\boldmath $\Delta$}_i^{(k)} \sum_{j=1}^n z_{ij}^{(k)} \mbox{\boldmath $u$}_{ij}^{(k)}}
{\sum_{j=1}^n z_{ij}^{(k)} w_{ij}^{(k)}}, \nonumber\\
\mbox{\boldmath $\Delta$}_i^{(k+1)} &=& \left[\sum_{j=1}^n z_{ij}^{(k)}
(\mbox{\boldmath $y$}_j - \mbox{\boldmath $\mu$}_i^{(k)}) \mbox{\boldmath $u$}_{ij}^{(k)^T}\right]
\left[\sum_{j=1}^n z_{ij}^{(k)} \mbox{\boldmath $u$}_{ij}^{*^{(k)}}\right]^{-1}. \nonumber
\end{eqnarray} \newline
\noindent An update of the degrees of freedom $\nu_i$ is obtained by solving the following equation: \begin{eqnarray} 0 &=& \left(\sum_{j=1}^n z_{ij}^{(k)}\right)
\left[\log\left(\frac{\nu_i}{2}\right)
- \psi\left(\frac{\nu_i}{2}\right) + 1\right] \nonumber\\ &&
+ \sum_{j=1}^n z_{ij}^{(k)} \left[\psi\left(\frac{\nu_i^{(k)}+p}{2}\right)
- \log\left(\frac{\nu_i^{(k)}+\eta_{ij}^{(k)}}{2}\right)
- \left(\frac{\nu_i^{(k)}+p}{\nu_i^{(k)}+\eta_{ij}^{(k)}}\right) \right], \nonumber \end{eqnarray} where \begin{eqnarray} \eta_{ij}^{(k)} &=& \left(\mbox{\boldmath $y$}_j - \mbox{\boldmath $\mu$}_i^{(k+1)}\right)^T \left(\mbox{\boldmath $B$}_i^{(k)} \mbox{\boldmath $\Omega$}_i^{(k)} \mbox{\boldmath $B$}_i^{(k)^T} + \mbox{\boldmath $D$}_i^{(k)}\right)^{-1} \left(\mbox{\boldmath $y$}_j - \mbox{\boldmath $\mu$}_i^{(k+1)}\right), \nonumber\\ \mbox{\boldmath $\Omega$}_i^{(k)} &=& \mbox{\boldmath $I$}_q + \mbox{\boldmath $\Delta$}_i^{(k)} \mbox{\boldmath $\Delta$}_i^{(k)^T}, \nonumber \end{eqnarray} and where $\psi(\cdot)$ is the digamma function. \newline \newline
\noindent Although not explicitly used in the AECM algorithm, the update for the scale matrix $\mbox{\boldmath $\Sigma$}_i$ is used implicitly in the M-step of the second cycle and is given by \begin{eqnarray} \mbox{\boldmath $\Sigma$}_i^{(k+1)} &=& \frac{\sum_{j=1}^n z_{ij}^{(k)}
\left[w_{ij}^{(k)} (\mbox{\boldmath $y$}_j - \mbox{\boldmath $\mu$}_i^{(k+1)}) (\mbox{\boldmath $y$}_j - \mbox{\boldmath $\mu$}_i^{(k+1)})^T
- \mbox{\boldmath $\Delta$}_i^{(k)} \mbox{\boldmath $u$}_{ij}^{(k)} (\mbox{\boldmath $y$}_j - \mbox{\boldmath $\mu$}_i^{(k+1)})^T
- (\mbox{\boldmath $y$}_j - \mbox{\boldmath $\mu$}_i^{(k+1)}) \mbox{\boldmath $u$}_{ij}^{(k)^T} \mbox{\boldmath $\Delta$}_i^{(k)^T}
+ \mbox{\boldmath $\Delta$}_i^{(k)} \mbox{\boldmath $u$}_{ij}^{*^{(k)}} \mbox{\boldmath $\Delta$}_i^{(k)^T}\right]}
{\sum_{j=1}^n z_{ij}^{(k)}}. \nonumber \end{eqnarray}
\subsubsection{Cycle Two}
In the second cycle, the missing data are those in the first cycle and also the latent factors; that is, they include $Z_{ij}$, $\mbox{\boldmath $U$}_{ij}$, $w_{ij}$, and $\mbox{\boldmath $X$}_{ij}$. In this cycle, we obtain updated estimates for the parameters $\mbox{\boldmath $B$}_i$ and $\mbox{\boldmath $D$}_i$. These are analogous to those in the case of the MFA model and are given, respectively, by \begin{eqnarray} \mbox{\boldmath $B$}_i^{(k+1)} &=& \mbox{\boldmath $\Sigma$}_i^{(k+1)} \mbox{\boldmath $\beta$}_i^T \mbox{\boldmath $A$}_i^{-1}, \nonumber\\ \mbox{\boldmath $D$}_i^{(k+1)} &=& \mbox{diag}\left(\mbox{\boldmath $\Sigma$}_i^{(k+1)}
- \mbox{\boldmath $B$}_i^{(k+1)} \mbox{\boldmath $\beta$}_i \mbox{\boldmath $\Sigma$}_i^{(k+1)}\right), \nonumber \end{eqnarray} where \begin{eqnarray} \mbox{\boldmath $\beta$}_i &=& \mbox{\boldmath $B$}_i^{(k)^T} \left(\mbox{\boldmath $B$}_i^{(k)}\mbox{\boldmath $B$}_i^{(k)^T} + \mbox{\boldmath $D$}_i^{(k)}\right)^{-1},
\nonumber\\ \mbox{\boldmath $A$}_i &=& \mbox{\boldmath $I$}_q - \mbox{\boldmath $\beta$}_i\mbox{\boldmath $B$}_i^{(k)} + \mbox{\boldmath $\beta$}_i\mbox{\boldmath $\Sigma$}_i^{(k+1)} \mbox{\boldmath $\beta$}_i^T. \end{eqnarray} \newline
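A minimal numpy sketch of these second-cycle updates, assuming $\mbox{\boldmath $\beta$}_i$ is evaluated at the current estimates of $\mbox{\boldmath $B$}_i$ and $\mbox{\boldmath $D$}_i$; a useful check is that when $\mbox{\boldmath $\Sigma$}_i^{(k+1)}$ equals $\mbox{\boldmath $B$}\mbox{\boldmath $B$}^T + \mbox{\boldmath $D$}$ exactly, the updates leave $\mbox{\boldmath $B$}$ and $\mbox{\boldmath $D$}$ unchanged:

```python
import numpy as np

# Second-cycle M-step for one component: beta = B^T (B B^T + D)^{-1},
# A = I_q - beta B + beta Sigma_new beta^T, B_new = Sigma_new beta^T A^{-1},
# D_new = diag(Sigma_new - B_new beta Sigma_new).
def update_B_D(Sigma_new, B_old, D_old):
    q = B_old.shape[1]
    beta = B_old.T @ np.linalg.inv(B_old @ B_old.T + D_old)      # q x p
    A = np.eye(q) - beta @ B_old + beta @ Sigma_new @ beta.T     # q x q
    B_new = Sigma_new @ beta.T @ np.linalg.inv(A)
    D_new = np.diag(np.diag(Sigma_new - B_new @ beta @ Sigma_new))
    return B_new, D_new

rng = np.random.default_rng(4)
p, q = 5, 2
B = rng.normal(size=(p, q))
D = np.diag(rng.uniform(0.1, 0.5, size=p))
B_new, D_new = update_B_D(B @ B.T + D, B, D)   # Sigma_new at the fixed point
assert np.allclose(B_new, B) and np.allclose(D_new, D)
```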
\subsection{The skew factors (SF) model} \label{s:SF}
Not surprisingly, the expressions for the conditional expectations and the updated parameter estimates on the E- and M-steps of the AECM algorithm for the SF model are not as straightforward as for the SE model. The technical details can be found in \citet{J638}. In brief, we exploit the hierarchical representation given by \begin{eqnarray} \mbox{\boldmath $Y$}_j \mid \mbox{\boldmath $x$}_{ij}, w_{ij}, Z_{ij} =1 &\sim&
N_p\left(\mbox{\boldmath $B$}_i\mbox{\boldmath $x$}_{ij} + \mbox{\boldmath $\mu$}_i, \frac{1}{w_{ij}} \mbox{\boldmath $D$}_i\right),
\nonumber\\ \mbox{\boldmath $X$}_{ij} \mid \mbox{\boldmath $u$}_{ij}, w_{ij}, Z_{ij} =1 &\sim&
N_q\left(\mbox{\boldmath $\Delta$}_i|\mbox{\boldmath $u$}_{ij}|, \frac{1}{w_{ij}} \mbox{\boldmath $I$}_q\right),
\nonumber\\ \mbox{\boldmath $U$}_{ij} \mid w_{ij}, Z_{ij} =1 &\sim&
N_r\left(\mbox{\boldmath $0$}, \frac{1}{w_{ij}} \mbox{\boldmath $I$}_r\right),
\nonumber\\ W_{ij} \mid Z_{ij} =1 &\sim& \mbox{gamma}\left(\frac{\nu_i}{2}, \frac{\nu_i}{2}\right),
\nonumber\\ Z_{ij} =1 &\sim& \mbox{Multi}_g(1; \mbox{\boldmath $\pi$}). \label{hierarchical} \end{eqnarray}
It follows that the E-step involves three extra conditional expectations compared to the SE model. Thus, we need to compute (\ref{e0}) to (\ref{e5}), but with $\mbox{\boldmath $\Delta$}_i$ replaced by $\mbox{\boldmath $\Delta$}_i^* = \mbox{\boldmath $B$}_i\mbox{\boldmath $\Delta$}_i$. Note that this implies corresponding changes to $q_{ij}$, $\mbox{\boldmath $\Lambda$}_i$, and $\mbox{\boldmath $\Omega$}_i$. The three additional conditional expectations are due to the latent factors and are given by $\mbox{\boldmath $x$}_{ij}^{(k)} = E_{\Psi^{(k)}} \left[w_{ij} \mbox{\boldmath $X$}_{ij} \mid \mbox{\boldmath $y$}_j, z_{ij}=1 \right]$, $\tilde{\mbox{\boldmath $x$}}_{ij}^{(k)} = E_{\Psi^{(k)}} \left[w_{ij} \mbox{\boldmath $X$}_{ij} \mbox{\boldmath $U$}_{ij}^T
\mid \mbox{\boldmath $y$}_j, z_{ij}=1 \right]$, and $\mbox{\boldmath $x$}_{ij}^{*^{(k)}} = E_{\Psi^{(k)}} \left[w_{ij} \mbox{\boldmath $X$}_{ij} \mbox{\boldmath $X$}_{ij}^T
\mid \mbox{\boldmath $y$}_j, z_{ij}=1 \right]$. It can be shown that \begin{eqnarray} \mbox{\boldmath $x$}_{ij}^{(k)} &=& w_{ij}^{(k)} \mbox{\boldmath $C$}_{i}^{(k)} \mbox{\boldmath $B$}_{i}^{(k)^T} \mbox{\boldmath $D$}_{i}^{(k)^{-1}}
\left(\mbox{\boldmath $y$}_j - \mbox{\boldmath $\mu$}_{i}^{(k)}\right)
+ \mbox{\boldmath $C$}_{i}^{(k)} \mbox{\boldmath $\Delta$}_{i}^{(k)} \mbox{\boldmath $u$}_{ij}^{(k)},
\label{X}\\ \tilde{\mbox{\boldmath $x$}}_{ij}^{(k)} &=& \mbox{\boldmath $C$}_{i}^{(k)} \mbox{\boldmath $B$}_{i}^{(k)^T} \mbox{\boldmath $D$}_{i}^{(k)^{-1}}
\left(\mbox{\boldmath $y$}_j - \mbox{\boldmath $\mu$}_{i}^{(k)}\right) \mbox{\boldmath $u$}_{ij}^{(k)^T}
+ \mbox{\boldmath $C$}_{i}^{(k)} \mbox{\boldmath $\Delta$}_{i}^{(k)} \mbox{\boldmath $u$}_{ij}^{*^{(k)}},
\label{Xt}\\ \mbox{\boldmath $x$}_{ij}^{*^{(k)}} &=& \mbox{\boldmath $x$}_{ij}^{(k)} \left(\mbox{\boldmath $y$}_j - \mbox{\boldmath $\mu$}_{i}^{(k)}\right)^T
\mbox{\boldmath $D$}_i^{(k)^{-1}} \mbox{\boldmath $B$}_{i}^{(k)} \mbox{\boldmath $C$}_{i}^{(k)^T}
+ \tilde{\mbox{\boldmath $x$}}_{ij}^{(k)} \mbox{\boldmath $\Delta$}_{i}^{(k)^T}
\mbox{\boldmath $C$}_{i}^{(k)^T} + \mbox{\boldmath $C$}_{i}^{(k)},
\label{Xs} \end{eqnarray} where $\mbox{\boldmath $C$}_i^{(k)^{-1}} = \mbox{\boldmath $B$}_i^{(k)^T} \mbox{\boldmath $D$}_i^{(k)^{-1}} \mbox{\boldmath $B$}_i^{(k)} + \mbox{\boldmath $I$}_q$. \newline
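The matrix $\mbox{\boldmath $C$}_i$ appearing in these expectations is cheap to form; a small sketch (hypothetical sizes, exploiting the fact that $\mbox{\boldmath $D$}_i$ is diagonal):

```python
import numpy as np

# C_i from its definition C_i^{-1} = B_i^T D_i^{-1} B_i + I_q.
# Since D_i is diagonal, D^{-1} B is just a row-wise rescaling of B.
rng = np.random.default_rng(3)
p, q = 4, 2
B = rng.normal(size=(p, q))
d = np.array([0.2, 0.5, 0.3, 0.4])           # diagonal entries of D_i

C_inv = B.T @ (B / d[:, None]) + np.eye(q)   # B^T D^{-1} B + I_q
C = np.linalg.inv(C_inv)
assert np.allclose(C @ C_inv, np.eye(q))
```

Note that $\mbox{\boldmath $C$}_i$ is only $q \times q$, so this step scales well even for large $p$.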
\noindent For the M-step, the expressions for the updated estimates of the parameters are quite similar to those for the SE model and are given by \begin{eqnarray} \pi_i^{(k+1)} &=& \frac{1}{n} \sum_{j=1}^n z_{ij}^{(k)}, \label{pi}\\ \mbox{\boldmath $\Delta$}_i^{(k+1)} &=& \left[\sum_{j=1}^n z_{ij}^{(k)} \tilde{\mbox{\boldmath $x$}}_{ij}^{(k)}\right]
\left[\sum_{j=1}^n z_{ij}^{(k)} \mbox{\boldmath $u$}_{ij}^{*^{(k)}}\right]^{-1}, \nonumber\\ \mbox{\boldmath $B$}_i^{(k+1)} &=& \left[\sum_{j=1}^n z_{ij}^{(k)}
\left(\mbox{\boldmath $y$}_j-\mbox{\boldmath $\mu$}_i^{(k+1)}\right) \mbox{\boldmath $x$}_{ij}^{(k)^T}\right]
\left[\sum_{j=1}^n z_{ij}^{(k)} \mbox{\boldmath $x$}_{ij}^{*^{(k)}}\right]^{-1}, \nonumber\\ \mbox{\boldmath $\mu$}_i^{(k+1)} &=& \frac{\sum_{j=1}^n z_{ij}^{(k)} w_{ij}^{(k)}\mbox{\boldmath $y$}_j
- \mbox{\boldmath $B$}_i^{(k)}\mbox{\boldmath $\Delta$}_i^{(k)} \sum_{j=1}^n z_{ij}^{(k)} \mbox{\boldmath $u$}_{ij}^{(k)}}
{\sum_{j=1}^n z_{ij}^{(k)} w_{ij}^{(k)}}, \label{mu}\\ \mbox{\boldmath $D$}_i^{(k+1)} &=& \mbox{diag}\left(\mbox{\boldmath $d$}_i^{(k+1)}\right), \nonumber \end{eqnarray} where \begin{eqnarray} \mbox{\boldmath $d$}_i^{(k+1)} &=& \mbox{diag} \left\{
\sum_{j=1}^n z_{ij}^{(k)} \left[ w_{ij}^{(k)}
\left(\mbox{\boldmath $y$}_j - \mbox{\boldmath $\mu$}_i^{(k+1)}\right) \left(\mbox{\boldmath $y$}_j-\mbox{\boldmath $\mu$}_i^{(k+1)}\right)^T
- \mbox{\boldmath $B$}_i^{(k)} \mbox{\boldmath $x$}_{ij}^{(k)} \left(\mbox{\boldmath $y$}_j-\mbox{\boldmath $\mu$}_i^{(k+1)}\right)^T
\right.\right. \nonumber\\ && \left.\left.
- \left(\mbox{\boldmath $y$}_j-\mbox{\boldmath $\mu$}_i^{(k+1)}\right) \mbox{\boldmath $x$}_{ij}^{(k)^T} \mbox{\boldmath $B$}_i^{(k)^T}
+ \mbox{\boldmath $B$}_i^{(k)} \mbox{\boldmath $x$}_{ij}^{*^{(k)}} \mbox{\boldmath $B$}_i^{(k)^T} \right] \right\}
\left[\sum_{j=1}^n z_{ij}^{(k)}\right]^{-1}. \label{nu} \end{eqnarray} Concerning the update for the degrees of freedom, it is the same as for the SE model. \newline
\subsection{The skew factors and errors (SFE) model} \label{s:SFE}
The SFE model is a combination of the SE and SF models. It follows from (\ref{error_factor}) that it can be expressed in a slightly more complicated hierarchical form than (\ref{hierarchical}). An extra level is required for the skewing variable $\mbox{\boldmath $U$}_{1ij}$ for the errors. It follows that \begin{eqnarray} \mbox{\boldmath $Y$}_j \mid \mbox{\boldmath $x$}_{ij}, w_{ij}, Z_{ij} =1 &\sim&
N_p\left(\mbox{\boldmath $\mu$}_i + \mbox{\boldmath $B$}_i\mbox{\boldmath $x$}_{ij} + \mbox{\boldmath $\Delta$}_{1i}|\mbox{\boldmath $u$}_{1ij}|, \frac{1}{w_{ij}} \mbox{\boldmath $D$}_i\right),
\nonumber\\ \mbox{\boldmath $X$}_{ij} \mid \mbox{\boldmath $u$}_{0ij}, w_{ij}, Z_{ij} =1 &\sim&
N_q\left(\mbox{\boldmath $\Delta$}_{0i}|\mbox{\boldmath $u$}_{0ij}|, \frac{1}{w_{ij}} \mbox{\boldmath $I$}_q\right),
\nonumber\\ \mbox{\boldmath $U$}_{0ij} \mid w_{ij}, Z_{ij} =1 &\sim&
N_r\left(\mbox{\boldmath $0$}, \frac{1}{w_{ij}} \mbox{\boldmath $I$}_r\right),
\nonumber\\ \mbox{\boldmath $U$}_{1ij} \mid w_{ij}, Z_{ij} =1 &\sim&
N_s\left(\mbox{\boldmath $0$}, \frac{1}{w_{ij}} \mbox{\boldmath $I$}_s\right),
\nonumber\\ W_{ij} \mid Z_{ij} =1 &\sim& \mbox{gamma}\left(\frac{\nu_i}{2}, \frac{\nu_i}{2}\right),
\nonumber\\ Z_{ij} =1 &\sim& \mbox{Multi}_g(1; \mbox{\boldmath $\pi$}). \label{SFE_h} \end{eqnarray}
According to the above specification, although $\mbox{\boldmath $u$}_{0ij}$ and $\mbox{\boldmath $u$}_{1ij}$ are uncorrelated, they are not independent. Due to this, the calculation of the conditional expectation of $\mbox{\boldmath $u$}_{0ij}$ and of $\mbox{\boldmath $u$}_{1ij}$ is performed jointly and thus involves evaluating $(r+s)$-dimensional integrals.
In the first cycle of the AECM algorithm for the SFE model, we proceed in a similar manner as for the SF model. However, $\mbox{\boldmath $\Delta$}_i^*$ now involves both $\mbox{\boldmath $\Delta$}_{0i}$ and $\mbox{\boldmath $\Delta$}_{1i}$; that is, it is a $p \times (r+s)$ matrix given by $\mbox{\boldmath $\Delta$}_i^* = [\mbox{\boldmath $B$}_i\mbox{\boldmath $\Delta$}_{0i} \;\; \mbox{\boldmath $\Delta$}_{1i}]$. We also let $\mbox{\boldmath $\Sigma$}_i^* = \mbox{\boldmath $B$}_i \mbox{\boldmath $B$}_i^T + \mbox{\boldmath $D$}_i$. In a similar way, $\mbox{\boldmath $\Omega$}_i^*$, $\mbox{\boldmath $q$}_i^*$, $d_{ij}^{*^{(k)}}$, and $\mbox{\boldmath $\Lambda$}_i^*$ are defined in terms of $\mbox{\boldmath $\Sigma$}_i^*$ and $\mbox{\boldmath $\Delta$}_i^*$ (in place of the usual $\mbox{\boldmath $\Sigma$}_i$ and $\mbox{\boldmath $\Delta$}_i$, respectively). Thus, on the $k$th iteration of the E-step, the following conditional expectations are required: \begin{eqnarray} z_{ij}^{(k)} &=& E_{\Psi^{(k)}} \left[z_{ij}=1 \mid \mbox{\boldmath $y$}_j\right]
= \frac{\pi_i^{(k)} f_{\mbox{\tiny{CFUST}}_{p,r+s}}
(\mbox{\boldmath $y$}_j; \mbox{\boldmath $\mu$}_i^{(k)}, \mbox{\boldmath $\Sigma$}_i^{*^{(k)}}, \mbox{\boldmath $\Delta$}_i^{*^{(k)}}, \nu_i^{(k)})}
{\sum_{h=1}^g \pi_h^{(k)} f_{\mbox{\tiny{CFUST}}_{p,r+s}}
(\mbox{\boldmath $y$}_j; \mbox{\boldmath $\mu$}_h^{(k)}, \mbox{\boldmath $\Sigma$}_h^{*^{(k)}}, \mbox{\boldmath $\Delta$}_h^{*^{(k)}}, \nu_h^{(k)})},
\label{se0}\\ w_{ij}^{(k)} &=& E_{\Psi^{(k)}} \left[w_{ij} \mid \mbox{\boldmath $y$}_j, z_{ij}=1\right] \nonumber\\
&=& \left(\frac{\nu_i^{(k)} + p}{\nu_i^{(k)} + d_{ij}^{*^{(k)}}}\right)
\frac{T_{r+s}\left(\mbox{\boldmath $q$}_{ij}^{*^{(k)}} \sqrt{\frac{\nu_i^{(k)}+p+2}{\nu_i^{(k)}+d_{ij}^{*^{(k)}}}};
\mbox{\boldmath $0$}, \mbox{\boldmath $\Lambda$}_i^{*^{(k)}}, \nu_i^{(k)}+p+2\right)}
{T_{r+s}\left(\mbox{\boldmath $q$}_{ij}^{*^{(k)}} \sqrt{\frac{\nu_i^{(k)}+p}{\nu_i^{(k)}+d_{ij}^{*^{(k)}}}};
\mbox{\boldmath $0$}, \mbox{\boldmath $\Lambda$}_i^{*^{(k)}}, \nu_i^{(k)}+p\right)},
\label{se1}\\ \mbox{\boldmath $u$}_{ij}^{(k)} &=& E_{\Psi^{(k)}} \left[w_{ij} \mbox{\boldmath $U$}_{ij}
\mid \mbox{\boldmath $y$}_j, z_{ij}=1 \right]
= w_{ij}^{(k)} E\left[ \mbox{\boldmath $a$}_{ij}^{(k)}\right],
\label{se4}\\ \mbox{\boldmath $u$}_{ij}^{*^{(k)}} &=& E_{\Psi^{(k)}} \left[w_{ij} \mbox{\boldmath $U$}_{ij} \mbox{\boldmath $U$}_{ij}^T
\mid \mbox{\boldmath $y$}_j, z_{ij}=1 \right]
= w_{ij}^{(k)} E\left[ \mbox{\boldmath $a$}_{ij}^{(k)} \mbox{\boldmath $a$}_{ij}^{(k)^T} \right],
\label{se5} \label{SFE-E} \end{eqnarray} where \begin{eqnarray} \mbox{\boldmath $a$}_{ij}^{(k)} &\sim& tt_{r+s}\left(\mbox{\boldmath $q$}_{ij}^{*^{(k)}},
\left(\frac{\nu_i^{(k)} + d_{ij}^{*^{(k)}}}{\nu_i^{(k)}+p+2}\right) \mbox{\boldmath $\Lambda$}_i^{*^{(k)}},
\nu_i^{(k)}+p+2; \mathbb{R}^+\right). \end{eqnarray}
The required conditional expectations related to $\mbox{\boldmath $u$}_{0ij}$ and $\mbox{\boldmath $u$}_{1ij}$ are extracted from (\ref{se4}) and (\ref{se5}) above using \begin{eqnarray} \mbox{\boldmath $u$}_{ij}^{(k)} &=&
\left[\begin{array}{c}\mbox{\boldmath $u$}_{0ij}^{(k)}\\\mbox{\boldmath $u$}_{1ij}^{(k)}\end{array}\right], \label{se6} \\ \mbox{\boldmath $u$}_{ij}^{*^{(k)}} &=&
\left[\begin{array}{cc}
\mbox{\boldmath $u$}_{0ij}^{*^{(k)}} & \mbox{\boldmath $u$}_{3ij}^{(k)} \\
\mbox{\boldmath $u$}_{3ij}^{(k)^T} & \mbox{\boldmath $u$}_{1ij}^{*^{(k)}}
\end{array}\right]. \label{se7} \end{eqnarray}
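This block extraction is a direct array partition; a small sketch (the moment values below are purely illustrative, only the shapes matter):

```python
import numpy as np

# Split the joint (r+s)-dimensional conditional moments into their
# u_0 / u_1 pieces: u = [u_0; u_1], and u* partitions into the blocks
# u_0*, u_1*, and the cross block u_3 (illustrative r = 2, s = 1).
r, s = 2, 1
u = np.arange(1.0, r + s + 1)             # joint first moment, length r+s
ustar = np.outer(u, u) + np.eye(r + s)    # joint second moment, (r+s) x (r+s)

u0, u1 = u[:r], u[r:]
u0star = ustar[:r, :r]
u1star = ustar[r:, r:]
u3 = ustar[:r, r:]                        # cross block
assert u0star.shape == (r, r) and u1star.shape == (s, s) and u3.shape == (r, s)
```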
For the first cycle of the AECM algorithm, the M-step proceeds in a similar way to the SF model described in Section \ref{s:SF}. The updated estimates for $\pi_i$, $\mbox{\boldmath $\mu$}_i$, and $\nu_i$ are calculated using (\ref{pi}), (\ref{mu}), and (\ref{nu}), respectively, but with $\mbox{\boldmath $\Delta$}_i^{(k)}$ replaced by $\mbox{\boldmath $\Delta$}_i^{*^{(k)}}$.
In the second cycle, we calculate the conditional expectations related to the factors $\mbox{\boldmath $X$}_{ij}$ and compute the updated estimates for $\mbox{\boldmath $B$}_i$, $\mbox{\boldmath $D$}_i$, $\mbox{\boldmath $\Delta$}_{0i}$, and $\mbox{\boldmath $\Delta$}_{1i}$. The four conditional expectations required on the E-step are analogous to (\ref{X}) and (\ref{Xs}), with (\ref{Xt}) separated into $\tilde{\mbox{\boldmath $X$}}_{0ij}$ and $\tilde{\mbox{\boldmath $X$}}_{1ij}$. It can be shown that they are given by \begin{eqnarray} \mbox{\boldmath $x$}_{ij}^{(k)} &=& w_{ij}^{(k)} \mbox{\boldmath $C$}_{i}^{(k)} \mbox{\boldmath $B$}_{i}^{(k)^T} \mbox{\boldmath $D$}_{i}^{(k)^{-1}}
\left(\mbox{\boldmath $y$}_j - \mbox{\boldmath $\mu$}_{i}^{(k)}\right)
+ \mbox{\boldmath $C$}_{i}^{(k)} \mbox{\boldmath $B$}_{i}^{(k)^T} \mbox{\boldmath $D$}_{i}^{(k)^{-1}} \mbox{\boldmath $\Delta$}_{1i}^{(k)} \mbox{\boldmath $u$}_{1ij}^{(k)}
+ \mbox{\boldmath $C$}_{i}^{(k)} \mbox{\boldmath $\Delta$}_{0i}^{(k)} \mbox{\boldmath $u$}_{2ij}^{(k)}, \nonumber\\ \tilde{\mbox{\boldmath $x$}}_{0ij}^{(k)} &=& \mbox{\boldmath $C$}_{i}^{(k)} \mbox{\boldmath $B$}_{i}^{(k)^T} \mbox{\boldmath $D$}_{i}^{(k)^{-1}}
\left(\mbox{\boldmath $y$}_j - \mbox{\boldmath $\mu$}_{i}^{(k)}\right) \mbox{\boldmath $u$}_{0ij}^{(k)^T}
- \mbox{\boldmath $C$}_{i}^{(k)} \mbox{\boldmath $B$}_{i}^{(k)^T} \mbox{\boldmath $D$}_{i}^{(k)^{-1}} \mbox{\boldmath $\Delta$}_{1i}^{(k)} \mbox{\boldmath $u$}_{1ij}^{(k)}
+ \mbox{\boldmath $C$}_{i}^{(k)} \mbox{\boldmath $\Delta$}_{0i}^{(k)} \mbox{\boldmath $u$}_{0ij}^{*^{(k)}}, \nonumber\\ \tilde{\mbox{\boldmath $x$}}_{1ij}^{(k)} &=& \mbox{\boldmath $C$}_{i}^{(k)} \mbox{\boldmath $B$}_{i}^{(k)^T} \mbox{\boldmath $D$}_{i}^{(k)^{-1}}
\left(\mbox{\boldmath $y$}_j - \mbox{\boldmath $\mu$}_{i}^{(k)}\right) \mbox{\boldmath $u$}_{1ij}^{(k)^T}
- \mbox{\boldmath $C$}_{i}^{(k)} \mbox{\boldmath $B$}_{i}^{(k)^T} \mbox{\boldmath $D$}_{i}^{(k)^{-1}} \mbox{\boldmath $\Delta$}_{1i}^{(k)} \mbox{\boldmath $u$}_{1ij}^{*^{(k)}}
+ \mbox{\boldmath $C$}_{i}^{(k)} \mbox{\boldmath $\Delta$}_{0i}^{(k)} \mbox{\boldmath $u$}_{0ij}^{*^{(k)}}, \nonumber\\ \mbox{\boldmath $x$}_{ij}^{*^{(k)}} &=& \mbox{\boldmath $x$}_{ij}^{(k)} \left(\mbox{\boldmath $y$}_j - \mbox{\boldmath $\mu$}_{i}^{(k)}\right)^T
\mbox{\boldmath $D$}_i^{(k)^{-1}} \mbox{\boldmath $B$}_{i}^{(k)} \mbox{\boldmath $C$}_{i}^{(k)^T}
- \tilde{\mbox{\boldmath $x$}}_{1ij}^{(k)} \mbox{\boldmath $\Delta$}_{1i}^{(k)^T}
\mbox{\boldmath $D$}_i^{(k)^{-1}} \mbox{\boldmath $B$}_{i}^{(k)} \mbox{\boldmath $C$}_{i}^{(k)^T}
\nonumber\\ &&
+ \; \tilde{\mbox{\boldmath $x$}}_{0ij}^{(k)} \mbox{\boldmath $\Delta$}_{0i}^{(k)^T}
\mbox{\boldmath $C$}_{i}^{(k)^T}
+ \mbox{\boldmath $C$}_{i}^{(k)}, \nonumber \end{eqnarray} where $\mbox{\boldmath $C$}_i^{(k)}$ is the same as for the SF model.
\section{Conclusions} \label{sec:con}
In this paper, we described and discussed the differences between placing the assumption of skewness on the factors (SF) and/or the errors (SE) in mixtures of skew factor analyzers. In doing so, we introduced the more general skew factor and error (SFE) MFA approach, where both the factors and the errors have a skew component distribution. Parameter estimation via an EM-type algorithm for these approaches was discussed and an AECM algorithm was derived for the SFE model. The implementation of the EM algorithm was easier to undertake for the SE model than for the SF and SFE models. We note that, given the same values of $g$, $p$, $q$, and $r$, the SE model has a higher number of free parameters than the SF model. The practical implications of these formulations will be treated in a forthcoming manuscript, based on simulations and real data applications.
\input{Refs.bbl}
\end{document}
\begin{document}
\sloppy
\title[]{Representation of Generalized Bi-Circular Projections on Banach Spaces}
\author{A. B. Abubaker*} \address[Abdullah Bin Abu Baker]{Department of Mathematics and Statistics, Indian Institute of Technology Kanpur, Kanpur - 208016, India} \email{[email protected]}
\author{Fernanda Botelho} \address[Fernanda Botelho]{Department of Mathematical Sciences, The University of Memphis, Memphis, TN 38152, USA} \email{[email protected]}
\author{James Jamison} \address[James Jamison]{Department of Mathematical Sciences, The University of Memphis, Memphis, TN 38152, USA} \email{[email protected]}
\thanks{* Supported by the Indian Institute of Technology Kanpur, Kanpur, India and by the National Board of Higher Mathematics, Department of Atomic Energy, Government of India, grant No. 2/44(56)/2011-R\&D-II/3414}
\subjclass[2010]{47B38; 47B15; 46B99; 47A65}
\keywords{Generalized bi-circular projections, projections as combination of finite order operators, reflections, isometric reflections, isometries}
\date{\today}
\begin{abstract}
We prove several results concerning the representation of projections on arbitrary Banach spaces. We also give illustrative examples, including an example of a generalized bi-circular projection which cannot be written as the average of the identity with an isometric reflection. Finally, we characterize generalized bi-circular projections on $C_0(\Omega,X)$, with $\Omega$ a locally compact Hausdorff space (not necessarily connected) and $X$ a Banach space with trivial centralizer.
\end{abstract}
\maketitle
\section{Introduction}
A projection $P$ on a complex Banach space $X$ is said to be a bi-circular projection if $\displaystyle e^{ia}P + e^{ib}(I - P)$ is an isometry, for all choices of real numbers $a$ and $b$. These projections were first studied by Stacho and Zalar (in \cite{SZ1} and \cite{SZ2}) and shown to be norm hermitian by Jamison (in \cite{J}).
Fo\v sner, Ili\v sevic and Li introduced a larger class of projections designated generalized bi-circular projections (henceforth GBP), cf. \cite{FIL}. A generalized bi-circular projection $P$ only requires that $P + \lambda (I - P)$ is an isometry, for some $\lambda \in \mathbb T \setminus \{1\}$. These projections are not necessarily norm hermitian. It is a consequence of the definition of a GBP that $P + \lambda (I-P)$ must be a surjective isometry, since \[ (P + \lambda (I-P))(y)= \, x,\ \mbox{where} \ y = Px + \frac{1}{ \lambda} (I-P)x , \,\,\, \forall \,\, x \in X.\]
In \cite{FIL}, the authors show that a generalized bi-circular projection on finite dimensional spaces is equal to the average of the identity with an isometric reflection. This interesting result was extended further to many other settings, for example spaces of continuous functions on a compact, connected Hausdorff space, $C(\Omega)$ and $C(\Omega,X)$, where generalized bi-circular projections are also represented as the average of the identity with an isometric reflection, see \cite{BJ1} and \cite{FR}. The same characterization also holds for generalized bi-circular projections on spaces of Lipschitz functions, see \cite{BJ2} and \cite{VCV}, and for $L_p$-spaces, $1 \leq p < \infty, p \neq 2$, see \cite{L}. This raises the question of whether every GBP on a Banach space is equal to the average of the identity with an isometric reflection. The answer to this question is negative, as we show in Example (\ref{gbp_ne_(I+R):2}).
It is easy to see that there is a bijection between the set of all reflections on $X$ and the set of all projections on $X$. If $P=\frac{I+R}{2}$, with $R$ an isometric reflection, is a GBP, then $R$ is the identity on the range of $P$ and $-I$ on the kernel of $P$.
In this note we show that given a GBP $P$ on an arbitrary complex Banach space, $P$ is hermitian or $P$ is the average of the identity with a reflection $R,$ with $R$ an element in the algebra generated by the isometry associated with $P$. We give examples that show that the reflection defined by a GBP is not necessarily an isometry. Moreover, we also show that every projection on $X$ is a GBP relative to some renorming of the underlying space $X$. Therefore in this new space, $P$ can be represented as the average of the identity with an isometric reflection.
In section 3 we characterize projections written as combinations of iterates of a finite order operator and we relate those to the generalized $n$-circular projections discussed in \cite{B} and also in \cite{AD}. In section 4 we derive the standard form for generalized bi-circular projections on $C_0(\Omega,X)$, with $\Omega$ a locally compact Hausdorff space (not necessarily connected) and $X$ a Banach space with trivial centralizer.
\section{A characterization of generalized bi-circular projection on a complex Banach space}
Throughout this section $X$ denotes a complex Banach space and $P$ a bounded linear projection on $X$. We recall that $P$ is a generalized bi-circular projection if and only if there exists a modulus 1 complex number $\lambda \neq 1$ such that $P + \lambda (I-P)$ is an isometry on $X$.
We observe that given an arbitrary projection $P$ on $X$, $2P-I$ is a reflection and thus $P$ can be represented as the average of the identity with a reflection, i.e. $P= \frac{I+ (2P-I)}{2}.$ In particular, generalized bi-circular projections on $X$ are averages of the identity with reflections. We recall that a reflection $R$ on $X$ is a bounded linear operator such that $R^2=I.$ An isometric reflection is both a reflection and an isometry. The next result represents the reflection determined by a GBP in terms of the surjective isometry defined by the projection.
\begin{prop} \label{mt1}
Let $X$ be a Banach space. If $P$ is a projection such that $P+\lambda (I-P)=T$, where $\lambda \in \mathbb T \setminus \{1\}$ and $T$ is an isometry on $X,$ then $P=\frac{I+R}{2}$, with $R,$ a reflection on $X,$ in the algebra generated by $T$.
\end{prop}
\begin{proof}
Since $\lambda$ is a modulus one complex number, it is of the form $e^{2\pi \theta i}$ with $\theta$ a real number in the interval $[0,1)$. Therefore, we consider the following two cases: (i) $\theta$ is an irrational number, and (ii) $\theta$ is a rational number. If $\theta$ is irrational, then the sequence $\{\lambda ^{n}\}_n$ is dense in the unit circle. This implies that $P$ is a bi-circular projection, since for every $\alpha \in \mathbb T $, $P+\alpha (I-P)$ is a surjective isometry, cf. \cite{L}. If $\theta$ is a rational number, we first assume that $\lambda$ is of even order. Thus for some positive integer $k$, $\lambda^{k}=-1$, $P+\lambda^k (I-P)=T^k$ and $P+\lambda^{2k} (I-P)=I=T^{2k}$. Consequently, $P$ is represented as the average of the identity with the isometric reflection $T^k$. If $\lambda$ is of odd order, let $2k+1$ be the smallest positive integer such that $\lambda^{2k+1}=1$. Therefore \begin{equation} \label{eq1} P+\lambda^j (I-P)=T^j, \,\, \forall \,\,j=1, \ldots, 2k+1. \end{equation} This implies that $T^{2k+1}=I.$ Furthermore, adding the equations displayed in (\ref{eq1}), we get \[ (2k+1) P+ (1+\lambda+ \lambda^2+\cdots +\lambda^{2k})(I-P)= I+T+T^2 + \ldots + T^{2k}.\] This equation becomes \begin{equation} \label{eq2} (2k+1) P=I+T+T^2 + \ldots + T^{2k}, \end{equation} since $1+\lambda+ \lambda^2+\cdots +\lambda^{2k}=0.$ The equation displayed in (\ref{eq2}) implies that \[ P= \frac{1}{2k+1} \left( I + T + \cdots + T^{2k} \right) = \frac{I + R } {2}, \] with $\displaystyle R = \frac{(1-2k) I + 2T + \cdots + 2T^{2k}}{2k+1}.$
It is a straightforward calculation to check that $R^2=I.$ This completes the proof.
\end{proof}
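The odd-order construction in the proof above can be checked numerically. The following is an illustrative sketch (our own example, not part of the paper): we take $T$ to be the cyclic shift on $\mathbb{R}^5$, an operator of order $2k+1=5$ (so $k=2$), and verify that $P=(I+T+\cdots+T^4)/5$ is a projection and that $R=((1-2k)I+2T+\cdots+2T^{2k})/(2k+1)=(-3I+2T+2T^2+2T^3+2T^4)/5$ satisfies $R^2=I$ and $P=(I+R)/2$.

```python
# Numeric sanity check of the odd-order case (illustration, not from the paper):
# T = cyclic shift on R^5, so T^5 = I and T has order 2k+1 = 5 with k = 2.
n = 5

def mat_mul(A, B):
    return [[sum(A[i][t] * B[t][j] for t in range(n)) for j in range(n)]
            for i in range(n)]

I5 = [[1 if i == j else 0 for j in range(n)] for i in range(n)]
T = [[1 if j == (i + 1) % n else 0 for j in range(n)] for i in range(n)]  # cyclic shift

powers = [I5]                                # I, T, T^2, T^3, T^4
for _ in range(4):
    powers.append(mat_mul(powers[-1], T))

# P = (I + T + ... + T^4)/5 and R = (-3I + 2T + ... + 2T^4)/5
P = [[sum(M[i][j] for M in powers) / 5 for j in range(n)] for i in range(n)]
R = [[(-3 * I5[i][j] + 2 * sum(M[i][j] for M in powers[1:])) / 5
      for j in range(n)] for i in range(n)]

P2 = mat_mul(P, P)                           # equals P: P is a projection
R2 = mat_mul(R, R)                           # equals I5: R is a reflection
```

Here $P$ is the all-$\frac{1}{5}$ matrix and $R=2P-I$, exactly as the proof predicts.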
\begin{rem}
It follows from the proof of Proposition \ref{mt1} that for $\theta$ irrational the projection $P$ is bi-circular and hence hermitian. We now give an example showing that the converse of the implication in Proposition \ref{mt1} does not hold. We consider $X$, the space of all convergent sequences in $\mathbb{C}$ with the sup norm. Let $T: X \rightarrow X$ be given by $T (x_1, x_2, x_3, x_4 , \cdots )= (x_2, x_3, x_1, x_4, \, \cdots),$ which permutes the first three entries of a sequence in $X$ and acts as the identity on every other entry. It is clear that $T$ is a surjective isometry and $P = \frac{I+T+T^2}{3}$ is a projection. As in the proof of Proposition \ref{mt1}, we set $R= \frac{-I+2T+2T^2}{3}.$ The projection $P$ is equal to $\frac{I+R}{2}$ and $R:X \rightarrow X$ is such that $R(x_1, x_2, x_3, x_4 , \cdots )= \frac{1}{3}(-x_1+2x_2+2x_3, \, 2x_1-x_2+2x_3,\, 2x_1+2x_2-x_3, 3x_4, \, \cdots).$ Therefore, $R(0,1,1,0, \cdots )= \frac{1}{3} (4,1,1,0,0, \cdots )$. This shows that $R$ is not an isometry. It is also easy to check that $P$ is not a GBP. Given $\lambda$ of modulus $1$ with $\lambda\neq 1$, we set $S=(1-\lambda) P + \lambda \, I.$ In particular, \[S(1,0,0,0,\cdots )= \left(\frac{1}{3} + \frac{2}{3} \lambda, \frac{1}{3} - \frac{1}{3} \lambda, \frac{1}{3} - \frac{1}{3} \lambda, 0, \cdots\right).\]
If $S$ were an isometry on $X$, then $ \max \{ |\frac{1}{3} + \frac{2}{3} \lambda |,|\frac{1}{3} - \frac{1}{3} \lambda| \}=1$. We observe that $|\frac{1}{3} - \frac{1}{3} \lambda| < 1$ and if $|\frac{1}{3} + \frac{2}{3} \lambda | = 1$, then $\lambda =1.$ This contradiction shows that $P$ is not a GBP.
\end{rem}
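The key computation in this remark is easy to verify numerically. The following sketch (an illustration, not part of the paper) restricts $R=\frac{-I+2T+2T^2}{3}$ to the first three coordinates, where $T$ cycles $(x_1,x_2,x_3)\mapsto(x_2,x_3,x_1)$, and confirms that $R(0,1,1)=(\frac{4}{3},\frac{1}{3},\frac{1}{3})$ has sup norm $\frac{4}{3}\neq 1$.

```python
def apply_R(x):
    # R = (-I + 2T + 2T^2)/3 on the first three coordinates,
    # where T cycles (x1, x2, x3) -> (x2, x3, x1)
    x1, x2, x3 = x
    return ((-x1 + 2 * x2 + 2 * x3) / 3,
            (2 * x1 - x2 + 2 * x3) / 3,
            (2 * x1 + 2 * x2 - x3) / 3)

v = (0.0, 1.0, 1.0)
sup_before = max(abs(c) for c in v)           # 1.0
sup_after = max(abs(c) for c in apply_R(v))   # 4/3: R is not an isometry
```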
Given a projection it is of interest to determine whether $P$ is a generalized bi-circular projection or equivalently whether the reflection determined by $P$ is an isometry. We address this question in our next result.
\begin{prop}\label{Tisaniso}
Let $X$ be a Banach space, let $P$ be a projection on $X$, and set $T=P+\lambda (I-P)$ for some $\lambda \in \mathbb{T}\setminus \{1\}$. Then $T$ is an isometry if and only if
$\|x-y\| = \|x-\lambda y\|$, for every $x \in \, \mbox{Range} (P)$ and $y \in \, \mbox{Ker} (P)$.
\end{prop}
\begin{proof}
The projection $P$ determines two closed subspaces $\mbox{Range} (P)$ and $\mbox{Ker} (P)$ such that
$X=\mbox{Range} (P) \oplus \mbox{Ker} (P)$. If $T$ is an isometry, then $\|x-y\|=\|Tx-Ty\|$ for every $x$ and $y$ in $X$. In particular, for $x$ in the range of $P$ and $y$ in the kernel of $P$, we have $Tx= x$ and $Ty=\lambda y$, so that $\|x-y\|=\|x-\lambda y\|$. The converse follows from straightforward computations.
\end{proof}
\begin{rem}\label{gbp_norm_iden}
It is a consequence of Proposition \ref{Tisaniso} that if $P$ is a generalized bi-circular projection on $X$, then $P$ is the average of the identity with an isometric reflection if and only if for every $x \in \, \mbox{Range} (P)$ and
$y \in \, \mbox{Ker} (P)$, $\|x-y\| = \|x+y\|.$
\end{rem}
The next proposition asserts that every projection on a Banach space is a generalized bi-circular projection in some equivalent renorming of the given space.
\begin{prop} \label{cor_mt1}
Let $X$ be a complex Banach space and $P$ be a projection on $X$. Then $X$ can be equivalently renormed so that $R=2P-I$ is an isometric reflection and, consequently, $P$ is a generalized bi-circular projection.
\end{prop}
\begin{proof} We set $R=2P-I$. We observe that $R^2=I$, which implies that $R$ is bounded and bijective. Then the Open Mapping Theorem implies that $R$ is an isomorphism. Therefore, there exist positive numbers $\alpha$ and $\beta$ such that, for every $x \in X,$
\[ \alpha \|x\| \leq \|R(x)\| \leq \beta \|x\|.\]
We define $\|x\|_1= \|x\|+\|R(x)\|, $ for all $x \in X.$ This new norm is equivalent to the original norm on $X$ and $R$ relative to this norm is an isometry. In fact, given
$x \in X$, $\| R(x)\|_1= \|R(x)\| + \| R(R(x))\| = \|x\|_1.$
\end{proof}
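The renorming in this proof can be illustrated with a concrete non-isometric reflection. The following sketch (our own example, not from the paper) uses $R(x,y)=(x+2y,-y)$ on the two-dimensional sequence space with the sup norm: $R^2=I$, $R$ is not a sup-norm isometry, but $R$ becomes an isometry for $\|v\|_1 = \|v\| + \|Rv\|$, since $\|Rv\|_1=\|Rv\|+\|R^2v\|=\|v\|_1$.

```python
def R(v):
    # a reflection that is not a sup-norm isometry: R(x, y) = (x + 2y, -y)
    x, y = v
    return (x + 2 * y, -y)

def sup(v):
    return max(abs(c) for c in v)

def norm1(v):
    # the equivalent norm of the proof: ||v||_1 = ||v|| + ||Rv||
    return sup(v) + sup(R(v))

v = (3, -17)
assert R(R(v)) == v                              # R^2 = I: R is a reflection
iso_defect_sup = sup(R((0, 1))) - sup((0, 1))    # 1: not a sup-norm isometry
iso_defect_1 = norm1(R(v)) - norm1(v)            # 0: an isometry for ||.||_1
```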
\begin{ex} \label{gbp_ne_(I+R):2}
We now give an example of a GBP that cannot be represented as the average of the identity with an isometric reflection. Let $X$ be $\mathbb{C}^3$ with the max norm, $\|(x,y,z)\|_{\infty} =
\max \{ |x|, |y|, |z|\}$ and $\lambda = exp(\frac{2\pi i}{3})= -\frac{1}{2}+i \frac{\sqrt{3}}{2}.$ We consider $P$ the following projection on $\mathbb{C}^3:$ \[ P(x,y,z)= \frac{1}{3} ( x+y+z, x+y+z,x+y+z ) .\]
Let $T= P+\lambda (I-P)$ . Straightforward computations imply that \[T(x,y,z)= \left( a\, x + b\, (y+z), a\, y + b\, (x+z),a\, z + b\, (x+y) \right),\] with $a=\frac{i\sqrt{3}}{3}$ and $b=\frac{1}{2} - \frac{\sqrt{3}i}{6}.$
Since $T(0,0,1)= \left( b, b, a \right)$, $T$ is not an isometry. In fact,
$\|(0,0,1)\|_{\infty} = 1 $ and $\|T(0,0,1)\|_{\infty}= \frac{\sqrt{3}}{3}\neq
\|(0,0,1)\|_{\infty}.$ The isomorphism $T$ has order $3$ since $\lambda^3=1$.
We now renorm $\mathbb{C}^3$ so $T$ becomes an isometry. The new norm is defined as follows:
\[ \| (x,y,z)\|_* = \max \{ \| (x,y,z)\|_{\infty}, \, \| T(x,y,z)\|_{\infty},
\| T^2(x,y,z)\|_{\infty} \}.\]
Therefore $P$ is a generalized bi-circular projection in $\mathbb{C}^3$ with the norm $\| \cdot \|_*,$ for $\lambda=exp(\frac{2\pi i}{3}).$ This projection cannot be written as the average of the identity with an isometric reflection. Assume otherwise; then $P=\frac{I+R}{2}$ with $R= \frac{-I+2T+2T^2}{3}$. We now show that $R$ is not an isometry. Previous calculations imply that $T(0,0,1)= ( b, b, a )$ and $T^2(0,0,1)= (b^2+2ab, b^2+2ab, a^2+2b^2)= (\overline{b},\overline{b},\overline{a}).$ Therefore $R(0,0,1)=(2/3,2/3,-1/3)$, $(TR)(0,0,1) = \frac{1}{3} \left( \frac{1}{2}+ \frac{\sqrt{3}}{2} i, \frac{1}{2}+ \frac{\sqrt{3}}{2} i, 2-i\sqrt{3} \right)$, and $(T^2 R)(0,0,1) = \frac{1}{3} \left( \frac{1}{2}- \frac{\sqrt{3}}{2} i, \frac{1}{2}- \frac{\sqrt{3}}{2} i, 2+i\sqrt{3} \right)$.
Since $\|R(0,0,1)\|_{\infty} = \frac{2}{3}$, $\|(TR)(0,0,1)\|_{\infty} =\|
(T^2 R)(0,0,1) \|_{\infty}= \frac{\sqrt{7}}{3},$ we now conclude that
\[ \|R(0,0,1)\|_* = \max \left\{\frac{2}{3}, \frac{\sqrt{7}}{3}\right\}= \frac{\sqrt{7}}{3}
\neq \|(0,0,1)\|_*=1.\]
\end{ex}
It is worth mentioning that the projection $P$ above does not satisfy the condition stated in Remark \ref{gbp_norm_iden}. For example, if $x=(1,1,1) \in Range(P)$,
$y=(1,1,-2) \in Ker(P),$ we have $\|x+y\|_* =\sqrt{7}$ and $\|x-y\|_*=3.$
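All of the numerical claims in Example \ref{gbp_ne_(I+R):2} and the observation above can be verified directly. The following is an illustrative check (not part of the paper): it confirms that $\|T(0,0,1)\|_{\infty}=\frac{\sqrt{3}}{3}$, $\|(0,0,1)\|_*=1$, $\|R(0,0,1)\|_*=\frac{\sqrt{7}}{3}$, and that $\|x+y\|_*=\sqrt{7}\neq 3=\|x-y\|_*$ for $x=(1,1,1)$, $y=(1,1,-2)$.

```python
import math

lam = complex(-0.5, math.sqrt(3) / 2)         # lambda = exp(2*pi*i/3)

def P(v):
    # the projection P(x,y,z) = ((x+y+z)/3, (x+y+z)/3, (x+y+z)/3)
    s = sum(v) / 3
    return (s, s, s)

def T(v):
    # T = P + lam*(I - P)
    p = P(v)
    return tuple(p[i] + lam * (v[i] - p[i]) for i in range(3))

def sup(v):
    return max(abs(c) for c in v)

def star(v):
    # the renormed norm ||v||_* = max(||v||_inf, ||Tv||_inf, ||T^2 v||_inf)
    return max(sup(v), sup(T(v)), sup(T(T(v))))

def R(v):
    # R = (-I + 2T + 2T^2)/3
    t, t2 = T(v), T(T(v))
    return tuple((-v[i] + 2 * t[i] + 2 * t2[i]) / 3 for i in range(3))

e3 = (0, 0, 1)
t_defect = sup(T(e3))          # sqrt(3)/3 < 1: T is not a sup-norm isometry
r_star = star(R(e3))           # sqrt(7)/3 != 1 = ||e3||_*: R is not a *-isometry
x, y = (1, 1, 1), (1, 1, -2)   # x in Range(P), y in Ker(P)
plus = star(tuple(a + b for a, b in zip(x, y)))    # sqrt(7)
minus = star(tuple(a - b for a, b in zip(x, y)))   # 3
```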
\section{Projections as combinations of finite order operators}
In this section we investigate the existence of projections defined as linear combinations of the iterates of a given finite-order operator. We conclude in our forthcoming Proposition \ref{mp1} that only certain linear combinations yield projections. For a generalized bi-circular projection $P,$ we consider the set $\Lambda_P = \{ \lambda \in \mathbb T : P+\lambda (I-P) \,\, \mbox{is an isometry}\}$. This set is a group under multiplication. An inspection of the proof of Proposition \ref{mt1} also shows that the multiplicative group associated with a GBP is either finite or equal to $\mathbb T$. If $\Lambda_P$ is infinite, then $P$ is a bi-circular projection. We give some examples of GBPs together with their multiplicative groups.
\begin{ex}
\begin{enumerate}
\item We consider $\ell_{\infty}$ with the usual sup norm. Let $P$ be defined as follows: \[ P(x_1, x_2,x_3, \cdots \,) = \left( \frac{x_1+x_2}{2},\frac{x_1+x_2}{2}, x_3, \cdots \, \right).\] We show that $\Lambda_P = \{1,-1\}.$ If $\lambda \in \mathbb T$ is such that $T=P+ \lambda (I-P)$ is a surjective isometry, then \[T (x_1, x_2,x_3, \cdots \,)= \left( \frac{(\lambda +1)x_1+ (1-\lambda)x_2}{2}, \frac{(\lambda +1)x_2+ (1-\lambda)x_1}{2}, x_3, \cdots \, \right).\] We recall that a surjective isometry $S:\ell_{\infty} \rightarrow \ell_{\infty}$ is of the form \[ S(x_1, x_2,x_3, \cdots \,)= (\mu_1 x_{\tau(1)}, \mu_2 x_{\tau(2)} , \cdots ),\] with $\tau$ a bijection of $\mathbb{N}$ and $\{\mu_i\}$ a sequence of modulus 1 complex numbers.
Therefore $T$ is an isometry if and only if $\lambda = \pm 1$.
\item Let $P$ and $T$ on $(\mathbb{C}^3, \|\cdot \|_*)$ be defined as in example (\ref{gbp_ne_(I+R):2}). Then
$\Lambda_P = \{1, exp(\frac{2 \pi i}{3}), exp (\frac{4 \pi i }{3})\}$. Since, $T=P+exp(\frac{2 \pi i}{3})(I-P)$ is an isometry on $(\mathbb{C}^3, \|\cdot \|_*)$, then $T^2=P+exp(\frac{4 \pi i}{3})(I-P)$ is also an isometry and $\Lambda_P \supseteq \{1, exp(\frac{2 \pi i}{3}), exp (\frac{4 \pi i }{3})\}$. We now show that $\Lambda_P = \{1, exp(\frac{2 \pi i}{3}), exp (\frac{4 \pi i }{3})\}$. As in example (\ref{gbp_ne_(I+R):2}), let $\lambda = exp(\frac{2 \pi i}{3})$. Given $\lambda_0=a_0+i b_0$ of modulus 1, such that $\lambda_0 \notin \{1, exp(\frac{2 \pi i}{3}), exp (\frac{4 \pi i }{3})\}$, we set $S=P+\lambda_0 (I-P).$ Therefore, \[ S(x,y,z)= \frac{1}{3} (cx + d(y+z),cy + d(x+z),cz + d(x+y)),\] with $c = 1+2 \lambda_0$ and $d = 1- \lambda_0$ and
$$ \| S(0,0,1)\|_* = \max \{ \| S(0,0,1)\|_{\infty}, \| TS(0,0,1)\|_{\infty}, \| T^2S(0,0,1)\|_{\infty} \}.$$ Now, $S(0,0,1) = \frac{1}{3}(d,d,c)$, $TS(0,0,1)= \frac{1}{3}(1- \lambda_0 \lambda,1- \lambda_0 \lambda,1+ 2 \lambda_0 \lambda)$ and $T^2S(0,0,1)= \frac{1}{3}(1-
\lambda_0 \lambda^2,1- \lambda_0 \lambda^2,1+ 2 \lambda_0 \lambda^2)$. It is easy to see that each of $|\frac{1- \lambda_0}{3}|$, $|\frac{1- \lambda_0 \lambda}{3}|$ and
$|\frac{1- \lambda_0 \lambda^2}{3}|$ is strictly less than 1. Moreover, if any of $| \frac{1+2 \lambda_0}{3}|$, $|\frac{1+ 2 \lambda_0 \lambda}{3}|$ or $|\frac{1+ 2
\lambda_0 \lambda^2}{3}|$ is equal to 1, then $\lambda_0 = 1, \, \lambda_0=\overline{\lambda}$ or $\lambda_0=\overline{\lambda}^2,$ respectively. This leads to a contradiction. It also follows from calculations already done for the example (\ref{gbp_ne_(I+R):2}) that
$\|(0,0,1)\|_*=1.$ Therefore, $\|S(0,0,1)\|_* \neq \|(0,0,1)\|_*$ and hence $\lambda_0 \notin \Lambda_P$.
\end{enumerate}
\end{ex}
The next corollary follows from the proof of Proposition \ref{mt1}.
\begin{cor}
Let $X$ be a Banach space. If the order of the multiplicative group of a generalized bi-circular projection $P$ on $X$ is even then $P$ is the average of the identity with an isometric reflection.
\end{cor}
We also recall the definition of a generalized $n$-circular projection, cf. \cite{B}. A projection $P$ on $X$ is generalized $n$-circular if and only if there exists a surjective isometry $T$ such that $T^n=I$ and \[ P = \frac{I+T+T^2+ \cdots \, +T^{n-1}}{n}. \]
Another notion of generalized $n$-circular projection was defined in \cite{AD} and it was shown there that both the definitions are equivalent in $C(\Omega)$, where $\Omega$ is a compact Hausdorff connected space. In fact, they are equivalent in any space in which the GBPs are given as the average of identity with an isometric reflection, see \cite{AD1}.
We observe that for a surjective linear map $T$ on $X$ such that $T^n=I$ (not necessarily an isometry), $\frac{I+T+T^2+ \cdots \, +T^{n-1}}{n}$ is a projection. The same question applies to this situation: which spaces support only $n$-circular projections associated with surjective isometries?
We now show a result concerning existence of projections written as a linear combination of operators with a cyclic property.
\begin{Def}
An operator $T$ on $X$ is of order $k$ (a positive integer) if and only if $T^k=I$ and $T^i \neq I$ for any $i<k$.
\end{Def}
We observe that if $T$ is of order $k$, then $P= \frac{I+T+T^2+ \cdots + T^{k-1}}{k}$ is a projection. The following proposition answers the reverse question of whether a combination of such a collection of operators yields any projection.
Before stating our result we set some useful notation, as introduced in the book by Michael Frazier \cite{fr}. We define $\rho= e^{-2 \pi i/k}.$ Then $\rho^{mn} = e^{-2 \pi mn i /k}$ and $\rho^{-mn} = e^{2 \pi mn i /k}.$ In this notation, given a $k$-tuple $z= (z(0), \, \cdots, \, z(k-1))$ we set $\hat{z} (m) = \sum_{n=0}^{k-1} z(n) \rho^{mn}.$ We now denote by $W_k$ the $k$-square matrix with the $(i,j)$ entry equal to $\rho^{(i-1)(j-1)}$. In expanded form \[ W_k= \left[ \begin{array}{cccccc} 1& 1& 1 &1 & \cdots & 1 \\ 1 & \rho & \rho^2 & \rho^3 & \cdots & \rho^{k-1} \\ 1 & \rho^2 & \rho^4 & \rho^6 & \cdots & \rho^{2(k-1)} \\ 1 & \rho^3 & \rho^6 & \rho^9 & \cdots & \rho^{3(k-1)} \\ \vdots & \vdots & & & & \vdots \\ 1 & \rho^{k-1} & \rho^{2(k-1)} & \rho^{3(k-1)} & \cdots & \rho^{(k-1)(k-1)} \end{array} \right]. \] Regarding $z$ and $\hat{z}$ as column vectors, we have $\hat{z}= W_k z$. It is easy to see that $W_k$ is invertible; the $(i,j)$-entry of $W_k^{-1}$ is equal to $\frac{1}{k}\overline{\rho}^{(i-1)(j-1)}.$ Frazier designates $\hat{z}$ the ``discrete Fourier transform'' of $z$, i.e., $\hat{z}=DFT (z)$, and $z$ the ``inverse discrete Fourier transform'' of $\hat{z}$, i.e., $z=W_k^{-1}\hat{z}=IDFT (\hat{z})$. If $S$ is a subset of $\{0, \cdots , k-1\}$, we denote by $\delta_S$ the vector with components given by $\delta_S(i)=1$ for $i \in S$ and $\delta_S(i)=0$ otherwise.
\begin{prop} \label{mp1}
Let $X$ be a Banach space and $P$ a bounded operator on $X$. Let $\lambda_0, \,\,\cdots, \, \lambda_{k-1}$ be nonzero complex numbers and $P= \sum_{i=0}^{k-1} \lambda_i \, T^i,$ where $T$ is an operator of order $k$. Then, $P$ is a projection if and only if $\lambda = (\lambda_0, \, \lambda_1, \, \cdots , \lambda_{k-1})$ is the IDFT of $\delta_S$, for some $S \subseteq \{0, \cdots , k-1\}$.
\end{prop}
\begin{proof}
If $P=\sum_{i=0}^{k-1} \lambda_i \, T^i$ and $T$ is an algebraic operator with annihilating polynomial $x^k-1$, a Theorem due to Taylor (cf. \cite{ta} p. 317-318) asserts that \[ T= Q_0 + \rho Q_1 + \cdots + \rho^{k-1} Q_{k-1}\] with $\{Q_0, \cdots , Q_{k-1}\}$ pairwise orthogonal projections. Since $T^i = Q_0 + \rho^i Q_1 + \cdots + \rho^{i(k-1)} Q_{k-1}$ we conclude that $P= \alpha_0 Q_0+ \alpha_1 Q_1+\cdots +\alpha_{k-1} Q_{k-1}$ with the vector of scalars $(\alpha_0, \cdots , \alpha_{k-1})$ equal to the $DFT (\lambda_0, \lambda_1, \cdots , \lambda_{k-1})$. Since $P$ is a projection, i.e., $P^2=P$ and $\{Q_0, \cdots , Q_{k-1}\}$ are pairwise orthogonal projections we have that $\alpha_i^2=\alpha_i,$ for $i=0, \cdots,\, k-1.$ On the other hand, \[ P= \sum_{i=0}^{k-1} \lambda_i \, T^i= \sum_{i=0}^{k-1} \,\left(\sum_{j=0}^{k-1} \lambda_j \rho^{ij} \right) Q_i,\] thus for $i=0, \cdots , k-1$, $\alpha_i=\sum_{j=0}^{k-1} \lambda_j \rho^{ij}$. This implies that $(\lambda_0, \lambda_1,\, \cdots , \,\lambda_{k-1})$ is the $IDFT (\delta_S)$ for some $S$ a subset of $\{0, \cdots , k-1\}.$
Conversely, we associate with $T$ the collection $Q_0, \cdots, Q_{k-1}$ of $k$ pairwise orthogonal projections, such that the range of each $Q_i$ is the eigenspace associated with the eigenvalue $ \rho^{i}.$ Then $ \delta_S(0) Q_0+\delta_S(1) Q_1 + \cdots + \delta_S(k-1)Q_{k-1} =\sum_{i=0}^{k-1} \lambda_i \, T^i,$ and $P= \delta_S(0) Q_0+\delta_S(1) Q_1 + \cdots + \delta_S(k-1)Q_{k-1}$ is clearly a projection. This completes the proof.
\end{proof}
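Proposition \ref{mp1} can be illustrated numerically. In the following sketch (our own example, not from the paper) we take $k=3$, $T$ the cyclic shift on $\mathbb{C}^3$ (so $T^3=I$), and $S=\{0\}$; then $\lambda=IDFT(\delta_S)=(\frac{1}{3},\frac{1}{3},\frac{1}{3})$, and $P=\sum_i \lambda_i T^i=\frac{I+T+T^2}{3}$ is indeed a projection.

```python
import cmath

k = 3
rho = cmath.exp(-2j * cmath.pi / k)          # rho = e^{-2*pi*i/k}, as in the text

def idft(z):
    # z -> W_k^{-1} z, where (W_k^{-1})_{mn} = rho^{-mn} / k
    return [sum(z[n] * rho ** (-m * n) for n in range(k)) / k for m in range(k)]

def mat_mul(A, B):
    return [[sum(A[i][t] * B[t][j] for t in range(k)) for j in range(k)]
            for i in range(k)]

I3 = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
T = [[0, 1, 0], [0, 0, 1], [1, 0, 0]]        # cyclic shift; T^3 = I
powers = [I3, T, mat_mul(T, T)]

delta_S = [1, 0, 0]                          # S = {0}
lam = idft(delta_S)                          # (1/3, 1/3, 1/3)

# P = sum_i lam[i] * T^i
P = [[sum(lam[i] * powers[i][r][c] for i in range(k)) for c in range(k)]
     for r in range(k)]
P2 = mat_mul(P, P)                           # equals P: P is a projection
```

Here $P$ is the spectral projection $Q_0$ onto the eigenspace of $T$ for the eigenvalue $\rho^0=1$, matching the converse direction of the proof.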
\section{Spaces of Vector-Valued Functions}
In this section we characterize generalized bi-circular projections on spaces of continuous functions defined on a locally compact Hausdorff space. This characterization extends results presented in \cite{B} and \cite{BJ1} for compact and connected Hausdorff spaces. We recall a folklore lemma which is very easy to prove.
\begin{lem} \label{lem1}
Let $X$ be a Banach space and $\lambda \in \mathbb T \setminus \{1\}$. Then the following are equivalent. \bla
\item $T$ is a bounded operator on $X$ satisfying $T^2 - (\lambda + 1)T + \lambda I = 0$.
\item There exists a projection $P$ on $X$ such that $P+\lambda (I-P)=T$. \end{list}
\end{lem}
\begin{thm} \label{thm2}
Let $\Omega$ be a locally compact Hausdorff space, not necessarily connected, and $X$ be a Banach space which has trivial centralizer. Let $P$ be a GBP on $C_0(\Omega,X)$. Then one and only one of the following holds. \bla
\item $P = \frac{I+T}{2}$, where $T$ is an isometric reflection on $C_0(\Omega,X)$.
\item $Pf(\omega) = P_{\omega}(f(\omega))$, where $P_{\omega}$ is a generalized bi-circular projection on $X$. \end{list}
\end{thm}
\begin{proof}
Let $P + \lambda (I - P) = T$, where $\lambda \in \mathbb T \setminus \{ 1 \} $ and $T$ is an isometry on $C_0(\Omega,X)$. From \cite{FJ1}, we see that $T$ has the form $Tf(\omega)= u_{\omega}(f(\phi(\omega)))$ for $\omega \in \Omega$ and $f \in C_0(\Omega,X)$, where $u : \Omega \rightarrow \mathcal{G}(X)$ is continuous in the strong operator topology and $\phi$ is a homeomorphism of $\Omega$ onto itself. Here, $\mathcal{G}(X)$ is the group of all surjective isometries on $X$. From Lemma \ref{lem1}, we have $T^2 - (\lambda + 1)T + \lambda I = 0$. That is, \begin{equation} \label{eq4} u_{\omega} \circ u_{\phi(\omega)}(f(\phi^2(\omega))) -(\lambda +1)u_{\omega}(f(\phi(\omega)))+\lambda f(\omega)=0. \end{equation} Let $\omega \in \Omega$. If $\phi(\omega) \neq \omega$, then $\phi^2(\omega)=\omega$. For otherwise, there exists $h \in C_0(\Omega)$ such that $h(\omega)=1$, $h(\phi(\omega))=h(\phi^2(\omega))=0$. For $f = h \otimes x$, where $x$ is a fixed vector in $X$, Equation (\ref{eq4}) reduces to $\lambda = 0$, contradicting the assumption on $\lambda$. Now, choosing $h \in C_0(\Omega)$ such that $h(\omega)=0$ and $h(\phi(\omega))=1$, we get $\lambda = -1$. This implies that $u_{\omega} \circ u_{\phi(\omega)} = I$. If $\phi(\omega) = \omega$ and $\phi$ is not the identity, then since the previous case (i.e., $\phi(\omega) \neq \omega$) occurs for some $\omega$, we conclude that $\lambda = -1$. This again implies that $u^2_{\omega}= I$. Hence in both cases $P$ is of the form $\frac{I+T}{2}$ with $T^2=I$.
If $\phi(\omega) = \omega$ for all $\omega \in \Omega$, then Equation (\ref{eq4}) yields \[u^2_{\omega} - (\lambda +1)u_{\omega} + \lambda I = 0.\] Thus, from Lemma \ref{lem1}, there exists a projection $P_{\omega}$ on $X$ such that $P_{\omega} + \lambda (I-P_{\omega}) = u_{\omega}$. Since $u_{\omega}$ is an isometry, $P_{\omega}$ is a GBP. Therefore, we have $Pf(\omega) = P_{\omega}(f(\omega))$. This completes the proof.
\end{proof}
\begin{cor} \label{ncor}
Let $\Omega$ be a locally compact Hausdorff space (not necessarily connected) and $P$ be a GBP on $C_0(\Omega)$. Then one and only one of the following holds. \bla
\item $P = \frac{I+T}{2}$, where $T$ is an isometric reflection on $C_0(\Omega)$.
\item $P$ is a bi-circular projection. \end{list}
\end{cor}
\begin{rem}
Similar results were proved in \cite{BJ1} for $C(\Omega,X)$, with $\Omega$ connected. Here we extend those results to more general settings.
\end{rem}
It was proved in \cite{FJ} that if $(X_n)$ is a sequence of Banach spaces such that every $X_n$ has trivial $L_\infty$ structure, then any surjective isometry of $\bigoplus_{c_0}X_n$ is of the form $(Tx)_n = U_{n\pi(n)}x_{\pi(n)}$ for each $x=(x_n) \in \bigoplus_{c_0}X_n $. Here $\pi$ is a permutation of $\mathbb N$ and $U_{n\pi(n)}$ is a sequence of isometric operators which maps $X_{\pi(n)}$ onto $X_n$.
If $P$ is a GBP on $\bigoplus_{c_0}X_n$, then techniques similar to those employed in the proof of Theorem \ref{thm2} prove the following result.
\begin{thm} \label{thm3}
Let $P$ be a generalized bi-circular projection on $\bigoplus_{c_0}X_n$. Then one and only one of the following holds. \bla
\item $P = \frac{I+T}{2}$, where $T$ is an isometric reflection on $\bigoplus_{c_0}X_n$.
\item $(Px)_n = P_n x_n$ where $P_n$ is a generalized bi-circular projection on $X_n$.
\end{list}
\end{thm}
\end{document}
On pluri-half-anticanonical systems of LeBrun twistor spaces
Author: Nobuhiro Honda
Journal: Proc. Amer. Math. Soc. 138 (2010), 2051-2060
MSC (2010): Primary 32L25; Secondary 53C28
DOI: https://doi.org/10.1090/S0002-9939-09-10207-1
Published electronically: December 8, 2009
MathSciNet review: 2596041
Abstract: In this paper, we investigate pluri-half-anticanonical systems on the so-called LeBrun twistor spaces. We determine its dimension, the base locus, the structure of the associated rational map, and also the structure of general members, in precise form. In particular, we show that if $n\ge 3$ and $m\ge 2$, the base locus of the system $|mK^{-1/2}|$ on $n\mathbb {CP}^2$ consists of two non-singular rational curves, along which any member has singularity, and that if we blow up these curves, then the strict transform of a general member of $|mK^{-1/2}|$ becomes an irreducible non-singular surface. We also show that if $n\ge 4$ and $m\ge n-1$, then the last surface is a minimal surface of general type with vanishing irregularity. We also show that the rational map associated to the system $|mK^{-1/2}|$ is birational if and only if $m\ge n-1$.
Nobuhiro Honda
Affiliation: Department of Mathematics, Tokyo Institute of Technology, O-okayama, Tokyo, Japan
Email: [email protected]
Received by editor(s): June 29, 2009
Received by editor(s) in revised form: September 7, 2009, and September 15, 2009
Additional Notes: The author was partially supported by the Grant-in-Aid for Young Scientists (B), The Ministry of Education, Culture, Sports, Science and Technology, Japan.
Communicated by: Jon G. Wolfson
Article copyright: © Copyright 2009 American Mathematical Society
The copyright for this article reverts to public domain 28 years after publication.
\begin{document}
\begin{abstract} Various lattice path models are reviewed. The enumeration is done using generating functions. A few bijective considerations are woven in as well. The kernel method is often used. Computer algebra was an essential tool. Some results are new, some have appeared before.
The lattice path models treated are: Hoppy's walks; the combinatorics of sequence A002212 in \cite{OEIS} (skew Dyck paths, Schr\"oder paths, Hex-trees, decorated ordered trees, multi-edge trees, etc.). Weighted unary-binary trees also occur, and we could improve on our old paper on Horton-Strahler numbers \cite{FlPr86} by using a different substitution. Some material on ternary trees appears as well, as does material on Motzkin numbers and paths (a model due to Retakh) and a new concept called amplitude that was found in \cite{irene}. Some new results on Deutsch paths in a strip are included as well. During the Covid period, I spent much time with this beautiful concept, which I dare to call Deutsch paths, since Emeric Deutsch stands at its beginning, with a problem that he posted in the American Mathematical Monthly some 20 years ago. Peaks and valleys, studied by Rainer Kemp 40 years ago under the names \textsc{max}-turns and \textsc{min}-turns, are revisited with a more modern approach, streamlining the analysis, relying on the `subcritical case' (so named by Philippe Flajolet), the adding-a-new-slice technique, and once again the kernel method. \end{abstract}
\title{A walk in my lattice path garden}
\setcounter{tocdepth}{1} \tableofcontents
\section{Introduction}
Around 20 years ago I published a collection of examples about applications of the kernel method in the present journal. The success of this enterprise was unexpected and came as a very pleasant surprise. My current plan is to present again a collection of subjects, loosely related in that they all have a lattice path flavour (trees are also allowed in my private book when lattice paths are mentioned). The subjects cover my last two or three years of research; some results were only posted on the arXiv, some were submitted with no evaluation available to date, some have appeared, and some are completely new. In each instance I try to make the current status clear.
As in the predecessor paper, the kernel method plays a role again, but also analytic techniques like singularity analysis and Mellin transform, as well as bijective results.
\section{Hoppy walks}
Deng and Mansour \cite{deng} introduce a rabbit named Hoppy and let him move according to certain rules. While the story about Hoppy is charming and entertaining, we do not need it here and move straight ahead to the enumeration issues. Eventually, the enumeration problem is one about $k$-Dyck paths. The up-steps are $(1,k)$ and the down-steps are $(1,-1)$. The model that has $(1,1)$ as up-step and $(1,-k)$ as down-step will also be called $k$-Dyck paths.
\begin{figure}
\caption{The number of final down-steps}
\end{figure} The question is about the length of the sequence of down-steps printed in red. Or, phrased differently, how many $k$-Dyck paths end on level $j$, after $m$ up-steps, the last step being an up-step. The recent paper \cite{jcmcc} contains similar computations, although without the restriction that the last step must be an up-step.
Counting the number of up-steps is enough, since in total, there are $m+km=(k+1)m$ steps. The original description of Deng and Mansour is a reflection of this picture, with up-steps of size $1$ and down-steps of size $-k$, but we prefer it as given here, since we are going to use the adding-a-new-slice method, see \cite{FP, Prodinger-handbook}. A slice is here a run of down-steps, followed by an up-step. The first up-step is treated separately, and then $m-1$ new slices are added. We keep track of the level after each slice, using a variable $u$. The variable $z$ is used to count the number of up-steps.
Deng and Mansour work out a formula which comprises $O(m)$ terms. Our method leads only to a sum of $O(j)$ terms.
We will use the adding a new slice technique \cite{Prodinger-handbook} together with the kernel method to find useful bivariate generating functions.
The following substitution is essential for adding a new slice (a maximal sequence of down-steps followed by one up-step): \begin{equation*}
u^j\longrightarrow z\sum_{0\le h \le j} u^{h+k}=\frac{zu^k}{1-u}(1-u^{j+1}). \end{equation*} Now let $F_m(z,u)$ be the generating function according to $m$ runs of down-steps. The substitution leads to \begin{equation*}
F_{m+1}(z,u)=\frac{zu^k}{1-u}F_m(z,1)-\frac{zu^{k+1}}{1-u}F_m(z,u),\quad F_0(z,u)=zu^k. \end{equation*} Let $F=\sum_{m\ge0}F_m$, so that we don't care about the number $m$ anymore; then \begin{equation*}
F(z,u)=zu^k+\frac{zu^k}{1-u}F(z,1)-\frac{zu^{k+1}}{1-u}F(z,u), \end{equation*} or \begin{equation*}
F(z,u)\frac{1-u+zu^{k+1}}{1-u}=zu^k+\frac{zu^k}{1-u}F(z,1). \end{equation*} The equation $1-u+zu^{k+1}=0$ is famous when enumerating $(k+1)$-ary trees (or $k$-Dyck paths). Its relevant combinatorial solution (also the only one being analytic at the origin) is \begin{equation*}
\overline{u}=\sum_{\ell\ge0}\frac1{1+\ell(k+1)}\binom{1+\ell(k+1)}{\ell}z^\ell. \end{equation*} Since $u-\overline{u}$ is a factor of the LHS, it must also be a factor of the RHS, and we can compute (by dividing out the factor $(u-\overline{u})$) that \begin{equation*}
\frac{zu^k(1-u+F(z,1))}{u-\overline{u}}=-zu^k. \end{equation*} Thus \begin{equation*}
F(z,u)=zu^k\frac{\overline{u}-u}{1-u+zu^{k+1}}. \end{equation*}
The first factor has even a combinatorial interpretation, as a description of the first step of the path. It is also clear from this that the level reached is $\ge k$ after each slice. We don't care about the factor $zu^k$ anymore, as it produces only a simple shift. The main interest is now how to get to the coefficients of \begin{equation*}
\frac{\overline{u}-u}{1-u+zu^{k+1}} \end{equation*} in an efficient way. There is also the formula \begin{equation*}
1-u+zu^{k+1}=(\overline{u}-u)\Big(1-z\frac{u^{k+1}-\overline{u}^{k+1}}{u-\overline{u}}\Big), \end{equation*} but it does not seem to be useful here.
First we deal with the denominators \begin{equation*}
S_j:=[u^j]\frac1{1-u+zu^{k+1}}=\sum_{0\le m\le j/k}(-1)^m\binom{j-km}{m}z^m. \end{equation*} One way to see this formula is to prove by induction that the sums $S_j$ satisfy the recursion \begin{equation*}
S_j-S_{j-1}+zS_{j-k-1}=0 \end{equation*} and initial conditions $S_0=\dots =S_k=1$. In \cite{jcmcc} such expressions also appear as determinants. Summarizing, \begin{equation*}
\frac1{1-u+zu^{k+1}}=\sum_{m\ge0}(-1)^mz^m\sum_{j\ge km }\binom{j-km}{m}u^j. \end{equation*} Now we read off coefficients: \begin{equation*}
[u^j]\frac{\overline{u}}{1-u+zu^{k+1}}=\sum_{0\le m\le j/k}(-1)^m\binom{j-km}{m}z^m
\sum_{\ell\ge0}\frac1{1+\ell(k+1)}\binom{1+\ell(k+1)}{\ell}z^\ell \end{equation*} and further \begin{multline*}
[z^n][u^j]\frac{\overline{u}}{1-u+zu^{k+1}}\\=\sum_{0\le m\le j/k}(-1)^m\binom{j-km}{m}
\frac1{1+(n-m)(k+1)}\binom{1+(n-m)(k+1)}{n-m}. \end{multline*} The final answer to the Deng-Mansour enumeration (without the shift) is \begin{multline*}
\sum_{0\le m\le j/k}(-1)^m\binom{j-km}{m}
\frac1{1+(n-m)(k+1)}\binom{1+(n-m)(k+1)}{n-m}\\*
-(-1)^n\binom{j-1-kn}{n}. \end{multline*} If one wants to take care of the factor $zu^k$ as well, one needs to do the replacements $n\to n+1$ and $j\to j+k$ in the formula just derived. That enumerates then the $k$-Dyck paths ending at level $j$ after $n$ up-steps, where the last step is an up-step.
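Both ingredients of this formula can be checked by machine: the Fuss--Catalan expansion of $\overline{u}$ and the binomial expansion of $[u^j]\,1/(1-u+zu^{k+1})$. The following Python sketch is an ad hoc check (the coefficient-list representation of truncated series is chosen here, not taken from any source):

```python
from math import comb

def ubar_series(k, N):
    """Coefficients of the solution of ubar = 1 + z*ubar^(k+1) analytic at 0."""
    u = [1] + [0] * (N - 1)
    for _ in range(N):                       # fixed-point iteration; one more exact coefficient per round
        p = [1] + [0] * (N - 1)
        for _ in range(k + 1):               # p = u^(k+1), truncated
            p = [sum(p[i] * u[j - i] for i in range(j + 1)) for j in range(N)]
        u = [1] + p[:N - 1]                  # u = 1 + z*p
    return u

for k in (1, 2, 3):
    u = ubar_series(k, 8)
    for l in range(8):                       # Fuss-Catalan formula for the coefficients
        assert u[l] * (1 + l * (k + 1)) == comb(1 + l * (k + 1), l)

    # [u^j z^m] 1/(1-u+z*u^(k+1)) via the geometric series sum_t (u - z*u^(k+1))^t
    J = 10
    coeff = [[0] * (J + 1) for _ in range(J + 1)]    # coeff[j][m] of u^j z^m
    term = {(0, 0): 1}                               # current power (u - z*u^(k+1))^t
    for _ in range(J + 1):
        for (j, m), c in term.items():
            coeff[j][m] += c
        nxt = {}
        for (j, m), c in term.items():
            for dj, dm, s in ((1, 0, 1), (k + 1, 1, -1)):
                if j + dj <= J:
                    nxt[(j + dj, m + dm)] = nxt.get((j + dj, m + dm), 0) + s * c
        term = nxt
    for j in range(J + 1):
        for m in range(J + 1):
            want = (-1) ** m * comb(j - k * m, m) if j - k * m >= 0 else 0
            assert coeff[j][m] == want
```

For $k=1$ the first check reproduces the Catalan numbers, as it should.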
\subsection*{An application}
The encyclopedia of integer sequences \cite{OEIS} has the sequences A334680, A334682, A334719 (with a reference to \cite{AHS}), which give the total number of down-steps in the last down-run, for $k=2,3,4$. So, if the path ends on level $j$, the contribution to the total is $j$.
All we have to do here is to differentiate \begin{equation*}
F(z,u)=zu^k\frac{\overline{u}-u}{1-u+zu^{k+1}} \end{equation*} w.r.t.\ $u$, and then replace $u$ by $1$. The result is \begin{equation*}
\frac{\overline{u}}z-\overline{u}-\frac1z, \end{equation*} and the coefficient of $z^m$ therein is \begin{align*}
\frac1{1+(m+1)(k+1)}\binom{1+(m+1)(k+1)}{m+1}-\frac1{1+m(k+1)}\binom{1+m(k+1)}{m}. \end{align*} The bivariate generating function does this enumeration cleanly and quickly.
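This can be confirmed by brute force: build the paths slice by slice and sum the final levels (the final level is exactly the length of the closing down-run). A small Python sketch, with names invented here:

```python
from math import comb

def total_last_downrun(k, m):
    """Sum of final levels over all k-Dyck paths with m up-steps that end
    with an up-step (up-step (1,k), down-step (1,-1), never below 0)."""
    dist = {0: 1}                          # level -> number of paths
    for _ in range(m):                     # a slice: a down-run, then one up-step
        nxt = {}
        for v, c in dist.items():
            for d in range(v + 1):         # d down-steps, staying >= 0
                nxt[v - d + k] = nxt.get(v - d + k, 0) + c
        dist = nxt
    return sum(v * c for v, c in dist.items())

def formula(k, m):
    # difference of two consecutive Fuss-Catalan numbers (exact integer division)
    a = comb(1 + (m + 1) * (k + 1), m + 1) // (1 + (m + 1) * (k + 1))
    b = comb(1 + m * (k + 1), m) // (1 + m * (k + 1))
    return a - b

for k in (2, 3, 4):
    for m in range(1, 7):
        assert total_last_downrun(k, m) == formula(k, m)
```

For $k=2$ the first values are $2, 9, 43, \dots$, matching the series $\overline{u}/z-\overline{u}-1/z$.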
\subsection*{Hoppy's early adventures}
Now we investigate what Hoppy does after his first up-step; he might follow with $0,1,\dots,k$ down-steps. Eventually, we want to sum all these steps (red in the picture). \begin{center}
\begin{tikzpicture}[scale=0.5]
\draw (0,0) -- (17,0);
\draw (0,0) -- (0,9);
\draw[thick](0,0)--(1,7);
\foreach \i in {0,...,4}
{\draw[thick,red](1+\i,7-\i)--(2+\i,6-\i);
\node[thick, red] at (1+\i,7-\i){$\bullet$};
}
\node[thick, red] at (6,2){$\bullet$};
\draw[thick](6,2)--(7,9);
\draw [decorate,decoration=snake,thick] (7,9) -- (17,0);
\draw(-0.2,2)--(0.2,2);
\node[thick,red] at (-2,2){$k-i=h$};
\draw[ dashed](6,0)--(6,9);
\node[ ] at (3,-0.8){one up-step};
\node[ ] at (11,-0.8){$m$ up-steps};
\end{tikzpicture} \end{center}
A new slice is now an up-step, followed by a sequence of down-steps. The substitution of interest is: \begin{equation*}
u^i\rightarrow z\sum_{0\le h\le i+k} u^h=\frac{z}{1-u}-\frac{zu^{i+k+1}}{1-u}. \end{equation*} Furthermore \begin{equation*}
F_{h+1}(z,u)=\frac{z}{1-u}F_h(z,1)-\frac{zu^{k+1}}{1-u}F_h(z,u), \end{equation*} and $F_0=u^h$, the starting level.
We have \begin{equation*}
H(z,u)=\sum_{h\ge0}F_h(z,u)=u^h+\frac{z}{1-u}H(z,1)-\frac{zu^{k+1}}{1-u}H(z,u) \end{equation*} or \begin{equation*}
H(z,u)(1-u+zu^{k+1}) =u^h(1-u)+zH(z,1). \end{equation*} Plugging in $\overline{u}$ into the RHS gives 0: \begin{equation*}
zH(z,1)=-\overline{u}^h(1-\overline{u}), \end{equation*} and \begin{equation*}
H(z,u) =\frac{u^h(1-u)-\overline{u}^h(1-\overline{u})}{1-u+zu^{k+1}}. \end{equation*} But we only need $H(z,0)$, since we return to the $x$-axis at the end: \begin{equation*}
H(z,0) = [h=0]+\overline{u}^{h+1}-\overline{u}^h . \end{equation*} The total contribution of red steps is then \begin{equation*}
k+\sum _{h=0}^k(k-h)(\overline{u}^{h+1}-\overline{u}^h)=\sum _{h=1}^k\overline{u}^{h}; \end{equation*} the coefficient of $z^m$ in this is the total contribution. Since $\overline{u}=1+z\overline{u}^{k+1}$, there is the further simplification \begin{equation*}
-1+\frac1z+\frac1{1-\overline{u}}=\sum_{m\ge1}\frac{k}{m+1}\binom{(k+1)m}{m}z^m. \end{equation*} The proof of this is as follows. Let $m\ge1$, then \begin{align*}
[z^m] \Bigl(-1+\frac1z+\frac1{1-\overline{u}}\Bigr)&=-[z^m]\frac1{z\overline{u}^{k+1}}\\
&=-[z^{m+1}]\sum_{\ell\ge0}\frac {-(k+1)}{(k+1)\ell -(k+1)}\binom{(k+1)\ell -(k+1)}{\ell}z^\ell\\
&=[z^{m+1}]\sum_{\ell\ge0}\frac {(k+1)}{(k+1)(\ell-1)}\binom{(k+1)(\ell-1)}{\ell}z^\ell\\
&= \frac {(k+1)}{(k+1)m}\binom{(k+1)m}{m+1}=\frac{k}{m+1}\binom{(k+1)m}{m}. \end{align*}
We did not expect such a simple answer $\frac{k}{m+1}\binom{(k+1)m}{m}$ to this question about Hoppy's early adventures!
This analysis of Hoppy's early adventures covers sequences A007226, A007228, A124724 of \cite{OEIS}, with references to \cite{AHS}.
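The simple answer is easy to test numerically as well; the following Python sketch (helper names chosen ad hoc) compares the coefficients of $\overline{u}+\overline{u}^2+\dots+\overline{u}^k$ with $\frac{k}{m+1}\binom{(k+1)m}{m}$:

```python
from math import comb

def pmul(a, b, N):
    """Truncated product of two coefficient lists."""
    return [sum(a[i] * b[j - i] for i in range(j + 1)) for j in range(N)]

def ubar_series(k, N):
    """ubar = 1 + z*ubar^(k+1), truncated to N coefficients."""
    u = [1] + [0] * (N - 1)
    for _ in range(N):
        p = [1] + [0] * (N - 1)
        for _ in range(k + 1):
            p = pmul(p, u, N)
        u = [1] + p[:N - 1]
    return u

def red_steps_series(k, N):
    """Coefficients of ubar + ubar^2 + ... + ubar^k."""
    u = ubar_series(k, N)
    s, pw = [0] * N, [1] + [0] * (N - 1)
    for _ in range(k):
        pw = pmul(pw, u, N)
        s = [x + y for x, y in zip(s, pw)]
    return s

for k in (2, 3, 4):
    s = red_steps_series(k, 8)
    for m in range(1, 8):
        assert s[m] * (m + 1) == k * comb((k + 1) * m, m)
```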
\subsection*{Hoppy walks into negative territory}
Hoppy is now adventurous and allows himself to go to level $-1$ as well, but not deeper. The setup with generating functions is the same, but the $u$-variable counts the level relative to the $-1$ level, so this has to be corrected later.
Hoppy, after some initial frustration, discovers that he can now start with an up-step or a down-step!
First, let us start Hoppy with an up-step: \begin{equation*}
F(z,u)=zu^{k+1}+\frac{zu^k}{1-u}F(z,1)-\frac{zu^{k+1}}{1-u}F(z,u), \end{equation*} or \begin{equation*}
F(z,u)\frac{1-u+zu^{k+1}}{1-u}=zu^{k+1}+\frac{zu^k}{1-u}F(z,1). \end{equation*}
Since $u-\overline{u}$ is a factor of the LHS, it must also be a factor of the RHS, and we can compute that \begin{equation*}
\overline{u}(1-\overline{u})+F(z,1)=0. \end{equation*} The RHS is \begin{equation*}
\frac{zu^k}{1-u}\Big(u(1-u) -\overline{u}(1-\overline{u}) \Big) \end{equation*} and \begin{equation*}
F(z,u)=\frac{zu^k}{1-u+zu^{k+1}}\Big(u(1-u) -\overline{u}(1-\overline{u}) \Big). \end{equation*} But Hoppy can also start with a downstep! So we have to add the result of the previous computation, and get finally \begin{equation*}
G(z,u)=\frac{zu^k}{1-u+zu^{k+1}}\Big(u(1-u) -\overline{u}(1-\overline{u}) \Big)+
\frac{zu^k}{1-u+zu^{k+1}}\Big(\overline{u}-u \Big) \end{equation*} or better \begin{equation*}
G(z,u)=\frac{zu^k}{1-u+zu^{k+1}}\Big(\overline{u}^2-u^2 \Big). \end{equation*} Now we need \begin{equation*}
\frac{\partial}{\partial u}G(z,u)\Big|_{u=1}-G(z,1). \end{equation*} This subtraction is necessary, since the contribution of $u^j$ is not $j$ as before, but only $j-1$. The result is \begin{equation*}
\frac{\overline{u}^2}{z}-2\overline{u}^2-\frac1z. \end{equation*}
Hoppy knows that $\overline{u}^d$ has beautiful coefficients: \begin{equation*} \overline{u}^d=\sum_{\ell\ge0}\binom{d-1+(k+1)\ell}{\ell}\frac{d}{k\ell+d} \end{equation*} and he inserts $k=2$ (A030983): \begin{equation*}
3 z+16 z^2+83 z^3+442 z^4+2420 z^5+\cdots \end{equation*} $k=3$ (A334608): \begin{equation*}
5 z+34 z^2+236 z^3+1714 z^4+12922 z^5+\cdots \end{equation*} $k=4$ (A334610): \begin{equation*}
7 z+58 z^2+505 z^3+4650 z^4+44677 z^5+\cdots \end{equation*} In general, \begin{equation*}
\frac{\overline{u}^2}{z}-2\overline{u}^2-\frac1z=
\sum_{\ell\ge0}\bigg[\binom{1+(k+1)(\ell+1)}{\ell+1}\frac{2}{k(\ell+1)+2}-2\binom{1+(k+1)\ell}{\ell}\frac{2}{k\ell+2}\bigg]z^{\ell}. \end{equation*} Happy Hoppy decides to stop this line of computations here.
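These expansions can be reproduced mechanically. A Python sketch (names ad hoc) checks the listed coefficients for $k=2$ as well as the general bracket formula for $k=2,3,4$:

```python
from fractions import Fraction
from math import comb

def pmul(a, b, N):
    return [sum(a[i] * b[j - i] for i in range(j + 1)) for j in range(N)]

def ubar_series(k, N):
    """ubar = 1 + z*ubar^(k+1), truncated to N coefficients."""
    u = [1] + [0] * (N - 1)
    for _ in range(N):
        p = [1] + [0] * (N - 1)
        for _ in range(k + 1):
            p = pmul(p, u, N)
        u = [1] + p[:N - 1]
    return u

def negative_hoppy(k, N):
    """Coefficients of ubar^2/z - 2*ubar^2 - 1/z, starting with z^0."""
    u = ubar_series(k, N)
    u2 = pmul(u, u, N)
    return [u2[m + 1] - 2 * u2[m] for m in range(N - 1)]

assert negative_hoppy(2, 7) == [0, 3, 16, 83, 442, 2420]     # cf. A030983
for k in (2, 3, 4):
    ser = negative_hoppy(k, 8)
    for l in range(7):                     # the displayed bracket formula
        want = (Fraction(2, k * (l + 1) + 2) * comb(1 + (k + 1) * (l + 1), l + 1)
                - Fraction(4, k * l + 2) * comb(1 + (k + 1) * l, l))
        assert ser[l] == want
```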
\section{Combinatorics of sequence A002212 in \cite{OEIS}}
The following sections give some (mostly new) results about the sequence \begin{equation*}
1, 1, 3, 10, 36, 137, 543, 2219, 9285, 39587, 171369, 751236, 3328218, 14878455,\dots, \end{equation*} which is A002212 in \cite{OEIS}.
Here is the plan for the structures enumerated by this sequence:
Hex-trees \cite{KimStanley}; they are identified as weighted unary-binary trees, with weight one. Apart from left and right branches, as in binary trees, there are also unary branches, and these can come in different colours, here in just one colour. Unary-binary trees played a role in the present author's scientific development, as documented in \cite{FlPr86}, a paper written with the late and great Philippe Flajolet about the register function (Horton-Strahler numbers) of unary-binary trees. Here, we can offer an improvement, using a ``better'' substitution than in \cite{FlPr86}. The results can now be made fully explicit. As a by-product, this provides a definition and analysis of the Horton-Strahler numbers of Hex-trees. An introductory section (about binary trees) provides all the basics.
Then we move to skew Dyck paths \cite{Deutsch-italy}. They are like Dyck paths, but allow for an extra step $(-1,-1)$, provided that the path does not intersect itself. An equivalent model, defined and described via a bijection, is from \cite{Deutsch-italy}: marked ordered trees. They are like ordered trees, with an additional feature: each rightmost edge (except for one that leads to a leaf) can be coloured with two colours. Since we find this class of trees interesting, we analyze two of its parameters: the number of leaves and the height. While the number of leaves for ordered trees is about $n/2$, it is only $n/10$ in the new model. For the height, the leading term $\sqrt{\pi n}$ drops to $\frac{2}{\sqrt 5}\sqrt{\pi n}$. Of course, many more parameters of this new class of trees could be investigated, which we encourage the reader to do.
More about skew Dyck paths appears in a later section.
The next two classes of structures are multi-edge trees. Our interest in them was already triggered in an earlier publication \cite{HPW}. They may be seen as ordered trees, but with weighted edges. The weights are integers $\ge1$, and a weight $a$ may be interpreted as $a$ parallel edges. The other class consists of 3-Motzkin paths. They are like Motzkin paths (Dyck paths plus horizontal steps), but the horizontal steps come in three different colours. A bijection is described. Since 3-Motzkin paths and multi-edge trees are very much alike (using a variation of the classical rotation correspondence), all the structures that are discussed in this paper can be linked via bijections.
\section{Binary trees and Horton-Strahler numbers}
This section is classical and serves as the basis of some new developments about weighted unary-binary trees. A full account can be found here: \cite{EATCS}.
Binary trees may be expressed by the following symbolic equation, which says that they include the empty tree and trees recursively built from a root followed by two subtrees, which are binary trees: \begin{center}\small
\begin{tikzpicture}
[inner sep=1.3mm,
s1/.style={circle=10pt,draw=black!90,thick},
s2/.style={rectangle,draw=black!50,thick},scale=0.5]
\node at ( -4.8,0) { $\mathscr{B}$};
\node at (-3,0) { $=$};
\node(c) at (-1.5,0){ $\qed$};
\node at (0.7,0) {$+$};
\node(d) at (3,1)[s1]{};
\node(e) at (2,-1){ $\mathscr{B}$};
\node(f) at (4,-1){ $\mathscr{B}$};
\path [draw,-,black!90] (d) -- (e) node{};
\path [draw,-,black!90] (d) -- (f) node{};
\end{tikzpicture} \end{center}
Binary trees are counted by Catalan numbers, and there is an important parameter \textsf{reg}, which in Computer Science is called the register function. It associates to each binary tree (which is used to code an arithmetic expression, with data in the leaves and operators in the internal nodes) the minimal number of extra registers needed to evaluate the tree. The optimal strategy is to evaluate the more difficult subtree first and keep its value in one register, which does not hurt if the other subtree requires fewer registers. If both subtrees are equally difficult, then one more register is used, compared to the requirements of the subtrees. This natural parameter is known among combinatorialists as the Horton-Strahler numbers, and we will adopt this name throughout this paper.
There is a recursive description of this function: $\textsf{reg}(\square)=0$, and if tree $t$ has subtrees $t_1$ and $t_2$, then \begin{equation*}
\textsf{reg}(t)=
\begin{cases}
\max\{\textsf{reg}(t_1),\textsf{reg}(t_2)\}&\text{ if } \textsf{reg}(t_1)\ne\textsf{reg}(t_2),\\
1+\textsf{reg}(t_1)&\text{ otherwise}.
\end{cases} \end{equation*}
The recursive description attaches numbers to the nodes, starting with 0's at the leaves and then going up; the number appearing at the root is the Horton-Strahler number of the tree. \begin{center}\tiny
\begin{tikzpicture}
[scale=0.4,inner sep=0.7mm,
s1/.style={circle,draw=black!90,thick},
s2/.style={rectangle,draw=black!90,thick}]
\node(a) at ( 0,8) [s1] [text=black]{$\boldsymbol{2}$};
\node(b) at ( -4,6) [s1] [text=black]{$1$};
\node(c) at ( 4,6) [s1] [text=black]{$2$};
\node(d) at ( -6,4) [s2] [text=black]{$0$};
\node(e) at ( -2,4) [s1] [text=black]{$1$};
\node(f) at ( 2,4) [s1] [text=black]{$1$};
\node(g) at ( 6,4) [s1] [text=black]{$1$};
\node(h) at ( -3,2) [s2] [text=black]{$0$};
\node(i) at ( -1,2) [s2] [text=black]{$0$};
\node(j) at ( 1,2) [s2] [text=black]{$0$};
\node(k) at ( 3,2) [s2] [text=black]{$0$};
\node(l) at ( 5,2) [s2] [text=black]{$0$};
\node(m) at ( 7,2) [s2] [text=black]{$0$};
\path [draw,-,black!90] (a) -- (b) node{};
\path [draw,-,black!90] (a) -- (c) node{};
\path [draw,-,black!90] (b) -- (d) node{};
\path [draw,-,black!90] (b) -- (e) node{};
\path [draw,-,black!90] (c) -- (f) node{};
\path [draw,-,black!90] (c) -- (g) node{};
\path [draw,-,black!90] (e) -- (h) node{};
\path [draw,-,black!90] (e) -- (i) node{};
\path [draw,-,black!90] (f) -- (j) node{};
\path [draw,-,black!90] (f) -- (k) node{};
\path [draw,-,black!90] (g) -- (l) node{};
\path [draw,-,black!90] (g) -- (m) node{};
\end{tikzpicture} \end{center}
Let $\mathscr{R}_{p}$ denote the family of trees with Horton-Strahler number $=p$, then one gets immediately from the recursive definition: \begin{center}\small
\begin{tikzpicture}
[inner sep=1.3mm,
s1/.style={circle=10pt,draw=black!90,thick},
s2/.style={rectangle,draw=black!50,thick},scale=0.5]
\node at ( -5,0) { $\mathscr{R}_p$};
\node at (-4,0) { $=$};
\node(a) at (-2,1)[s1]{};
\node(b) at (-3,-1){ $\mathscr{R}_{p-1}$};
\node(c) at (-1,-1){ $\mathscr{R}_{p-1}$};
\path [draw,-,black!90] (a) -- (b) node{};
\path [draw,-,black!90] (a) -- (c) node{};
\node at (0.7,0) {$+$};
\node(d) at (3,1)[s1]{};
\node(e) at (2,-1){ $\mathscr{R}_{p}$};
\node(f) at (4,-1.2){ $\sum\limits_{j<p}\mathscr{R}_{j} $};
\path [draw,-,black!90] (d) -- (e) node{};
\path [draw,-,black!90] (d) -- (f) node{};
\node at (5+0.7,0) {$+$};
\node(dd) at (5.5+3,1)[s1]{};
\node(ee) at (5.5+2,-1.2){ $\sum\limits_{j<p}\mathscr{R}_{j}$};
\node(ff) at (5.5+4,-1){ $\mathscr{R}_{p}$};
\path [draw,-,black!90] (dd) -- (ee) node{};
\path [draw,-,black!90] (dd) -- (ff) node{};
\end{tikzpicture} \end{center} In terms of generating functions, these equations read as \begin{equation*}
R_p(z)=zR_{p-1}^2(z)+2zR_p(z)\sum_{j<p}R_j(z); \end{equation*} the variable $z$ is used to mark the size (i.~e., the number of internal nodes) of the binary tree.
A historic account of these concepts, from the angle of Philippe Flajolet, who was one of the pioneers, is \cite{register-introduction}; compare also \cite{ECA-historic}.
Amazingly, the recursion for the generating functions $R_p(z)$ can be solved explicitly! The substitution \begin{equation*}
z=\frac{u}{(1+u)^2} \end{equation*} that de Bruijn, Knuth, and Rice~\cite{BrKnRi72} also used, produces the nice expression \begin{equation*}
R_p(z)=\frac{1-u^2}{u}\frac{u^{2^p}}{1-u^{2^{p+1}}}. \end{equation*} Of course, once this is \emph{known}, it can be proved by induction, using the recursive formula. For the reader's benefit, this will be sketched now.
We start with the auxiliary formula \begin{equation*}
\sum_{0\le j<p}\frac{u^{2^j}}{1-u^{2^{j+1}}}=\frac{u}{1-u}-\frac{u^{2^p}}{1-u^{2^{p}}}, \end{equation*} which we will prove now by induction: For $p=0$, the formula $0=\frac{u}{1-u}-\frac{u}{1-u}$ is correct, and then \begin{align*}
\sum_{0\le j<p+1}
\frac{u^{2^j}}{1-u^{2^{j+1}}}
&=\frac{u}{1-u}-\frac{u^{2^p}}{1-u^{2^{p}}}+\frac{u^{2^p}}{1-u^{2^{p+1}}}\\
&=\frac{u}{1-u}-\frac{u^{2^p}(1+u^{2^p})}{1-u^{2^{p+1}}}+\frac{u^{2^p}}{1-u^{2^{p+1}}}
=\frac{u}{1-u}-\frac{u^{2^{p+1}}}{1-u^{2^{p+1}}}. \end{align*} Now the formula for $R_p(z)$ can also be proved by induction. First, $R_0(z)=\frac{1-u^2}{u}\frac{u}{1-u^{2}}=1$, as it should, and \begin{align*}
zR_{p-1}^2(z)&+2zR_p(z)\sum_{j<p}R_j(z)\\
&=\frac{u}{(1+u)^2}\frac{(1-u^2)^2}{u^2}\frac{u^{2^{p}}}{(1-u^{2^{p}})^2}
+\frac{2u}{(1+u)^2}R_p(z)\sum_{j<p}\frac{1-u^2}{u}\frac{u^{2^j}}{1-u^{2^{j+1}}}\\
&= \frac{u^{2^{p}-1}(1-u)^2}{(1-u^{2^{p}})^2}
+\frac{2(1-u)}{(1+u)}R_p(z)\sum_{j<p}\frac{u^{2^j}}{1-u^{2^{j+1}}}. \end{align*} Solving \begin{align*}
R_p(z)= \frac{u^{2^{p}-1}(1-u)^2}{(1-u^{2^{p}})^2}
+\frac{2(1-u)}{(1+u)}R_p(z)\bigg[\frac{u}{1-u}-\frac{u^{2^p}}{1-u^{2^{p}}}\bigg] \end{align*} leads to \begin{align*}
R_p(z)\frac{1-u}{1+u}\bigg[1+2\frac{u^{2^p}}{1-u^{2^{p}}}\bigg]= \frac{u^{2^{p}-1}(1-u)^2}{(1-u^{2^{p}})^2}, \end{align*} or, simplified \begin{align*}
R_p(z)= \frac{u^{2^{p}-1}(1-u^2)}{(1-u^{2^{p}})(1+u^{2^{p}})}
=\frac{1-u^2}{u}\frac{u^{2^{p}}}{(1-u^{2^{p+1}})}, \end{align*} which is the formula that we needed to prove. \qed
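The induction can also be double-checked by computer: solve the recursion for $R_p$ directly as a power series in $z$ and compare with the closed form composed with the series $u(z)$ defined by $u=z(1+u)^2$. A Python sketch with exact integer series arithmetic (all names ad hoc):

```python
N = 12
ONE = [1] + [0] * (N - 1)

def pmul(a, b):
    return [sum(a[i] * b[j - i] for i in range(j + 1)) for j in range(N)]

def pinv1(a):
    """Multiplicative series inverse of a, assuming a[0] == 1."""
    inv = [1] + [0] * (N - 1)
    for j in range(1, N):
        inv[j] = -sum(a[i] * inv[j - i] for i in range(1, j + 1))
    return inv

def shift(a):                      # multiplication by z
    return [0] + a[:N - 1]

def power(a, e):
    r = ONE[:]
    for _ in range(e):
        r = pmul(r, a)
    return r

# u(z): the branch of z = u/(1+u)^2 with u(0) = 0, i.e. u = z*(1+u)^2
u = [0] * N
for _ in range(N):
    w = [o + x for o, x in zip(ONE, u)]
    u = shift(pmul(w, w))

def R_formula(p):
    """(1-u^2)/u * u^(2^p)/(1-u^(2^(p+1))) as a series in z."""
    num = [a - b for a, b in zip(power(u, 2 ** p - 1), power(u, 2 ** p + 1))]
    return pmul(num, pinv1([o - x for o, x in zip(ONE, power(u, 2 ** (p + 1)))]))

# the recursion R_p = z*R_{p-1}^2 + 2z*R_p*(R_0+...+R_{p-1}), R_0 = 1
Rs = [ONE[:]]
for p in range(1, 4):
    S = [sum(col) for col in zip(*Rs)]
    Rs.append(pmul(shift(pmul(Rs[-1], Rs[-1])),
                   pinv1([o - 2 * s for o, s in zip(ONE, shift(S))])))

for p in range(1, 4):
    assert Rs[p] == R_formula(p)
assert Rs[1][1:6] == [1, 2, 4, 8, 16]   # reg = 1 trees: 2^(n-1)
```

The last assertion reflects the well-known fact that exactly $2^{n-1}$ binary trees with $n$ internal nodes have Horton-Strahler number $1$.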
\subsection*{Weighted unary-binary trees and Horton-Strahler numbers}
The family of unary-binary trees ${\mathscr{M}}$ might be defined by the symbolic equation \begin{center}
\begin{tikzpicture}
[inner sep=1.3mm,
s1/.style={circle=10pt,draw=black!90,thick},
s2/.style={rectangle,draw=black!50,thick},scale=0.5]
\node at ( -12.8,0.1) { ${\mathscr{M}}$};
\node at (-11.2,0) { $=$};
\node at (-4.5,0) { $+$};
\node(a) at (-2,1)[s1]{};
\node(b) at (-3,-1){ ${\mathscr{M}}$};
\node(c) at (-1,-1){ ${\mathscr{M}}$};
\path [draw,-,black!90] (a) -- (b) node{};
\path [draw,-,black!90] (a) -- (c) node{};
\node(a1) at (-6.5,1)[s1]{};
\node(b1) at (-6.5,-1){ $ {\mathscr{M}}\setminus\{\square\}$};
\path [draw,-,black!90] (a1) -- (b1) node{};
\node at (-9.0,0) { $\square\ \ +$};
\end{tikzpicture} \end{center} The equation for the generating function is \begin{equation*}
M=1+z(M-1)+zM^2 \end{equation*} with the solution \begin{equation*}
M=M(z)=\frac{1-z-\sqrt{1-6z+5z^2}}{2z}=1+z+3{z}^{2}+10{z}^{3}+36{z}^{4}+\cdots; \end{equation*} the coefficients form again sequence A002212 in \cite{OEIS} and enumerate Schr\"oder paths, among many other things. We will come to equivalent structures a bit later.
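The quadratic equation pins down the coefficients, and a fixed-point iteration reproduces them quickly; a short Python sketch (editorial, not from the paper):

```python
N = 10

def pmul(a, b):
    return [sum(a[i] * b[j - i] for i in range(j + 1)) for j in range(N)]

# fixed-point iteration for M = 1 + z*(M-1) + z*M^2; each round fixes one more coefficient
M = [1] + [0] * (N - 1)
for _ in range(N):
    M2 = pmul(M, M)
    M = [1] + [M[j] - (1 if j == 0 else 0) + M2[j] for j in range(N - 1)]

assert M[:8] == [1, 1, 3, 10, 36, 137, 543, 2219]   # A002212
```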
In the instance of unary-binary trees, we can also work with a substitution: Set $z=\frac{u}{1+3u+u^2}$, then $M(z)=1+u$. Unary-binary trees and the register function were investigated in \cite{FlPr86}, but the present favourable substitution was not used. Therefore, in this previous paper, asymptotic results were available but no explicit formulae.
This works also with a weighted version, where we allow unary edges with $a$ different colours. Then \begin{center}
\begin{tikzpicture}
[inner sep=1.3mm,
s1/.style={circle=10pt,draw=black!90,thick},
s2/.style={rectangle,draw=black!50,thick},scale=0.5]
\node at ( -12.8,0.1) { ${\mathscr{N}}$};
\node at (-11.5,0) { $=$};
\node at (-4.5,0) { $+$};
\node(a) at (-2,1)[s1]{};
\node(b) at (-3,-1){ ${\mathscr{N}}$};
\node(c) at (-1,-1){ ${\mathscr{N}}$};
\path [draw,-,black!90] (a) -- (b) node{};
\path [draw,-,black!90] (a) -- (c) node{};
\node at (-7.5,0){$a\ \cdot$};
\node(a1) at (-6.5,1)[s1]{};
\node(b1) at (-6.5,-1){ $ {\mathscr{N}}\setminus\{\square\}$};
\path [draw,-,black!90] (a1) -- (b1) node{};
\node at (-9.5,0) { $\square\ \ +$};
\end{tikzpicture} \end{center} and with the substitution $z=\frac{u}{1+(a+2)u+u^2}$, the generating function is beautifully expressed as $N(z)=1+u$. For $a=0$, this covers also binary trees.
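The substitution is easy to validate numerically: iterate $u=z(1+(a+2)u+u^2)$ and read off $N(z)=1+u$. A Python sketch (function name invented here):

```python
N = 8

def pmul(a, b):
    return [sum(a[i] * b[j - i] for i in range(j + 1)) for j in range(N)]

def colored_unary_binary(a):
    """Series N(z) = 1 + u, where u solves u = z*(1 + (a+2)*u + u^2)."""
    u = [0] * N
    for _ in range(N):
        u2 = pmul(u, u)
        q = [(1 if j == 0 else 0) + (a + 2) * u[j] + u2[j] for j in range(N)]
        u = [0] + q[:N - 1]
    return [(1 if j == 0 else 0) + u[j] for j in range(N)]

assert colored_unary_binary(1) == [1, 1, 3, 10, 36, 137, 543, 2219]  # A002212
assert colored_unary_binary(0)[:6] == [1, 1, 2, 5, 14, 42]           # a = 0: binary trees
```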
We will consider the Horton-Strahler numbers of unary-binary trees in the sequel. The definition is naturally extended by
\begin{center}
\begin{tikzpicture}
[inner sep=0.6mm,
s1/.style={circle=1pt,draw=black!90,thick}]
\node[] at ( -0.300,-0.10) {\textsf{reg}\bigg(};
\node[] at ( 0.6500,-0.10) {\bigg)};
\node[] at ( 1.5500,-0.10) {=\ \textsf{reg}(t).};
\path [draw,-,black!90 ] (0.3,0.34) -- (0.3,-0.350) ;
\node [s1]at ( 0.300,0.4) { };
\node[] at ( 0.300,-0.60) {$t$ };
\end{tikzpicture} \end{center}
Now we can move again to $R_p(z)$, the generating function of (generalized) unary-binary trees with Horton-Strahler number $=p$. The recursion (for $p\ge1$) is \begin{center}\small
\begin{tikzpicture}
[inner sep=1.3mm,
s1/.style={circle=10pt,draw=black!90,thick},
s2/.style={rectangle,draw=black!50,thick},scale=0.5]
\node at ( -5,0) { $\mathscr{R}_p$};
\node at (-4,0) { $=$};
\node(a) at (-2,1)[s1]{};
\node(b) at (-3,-1){ $\mathscr{R}_{p-1}$};
\node(c) at (-1,-1){ $\mathscr{R}_{p-1}$};
\path [draw,-,black!90] (a) -- (b) node{};
\path [draw,-,black!90] (a) -- (c) node{};
\node at (0.7,0) {$+$};
\node(d) at (3,1)[s1]{};
\node(e) at (2,-1){ $\mathscr{R}_{p}$};
\node(f) at (4,-1.2){ $\sum\limits_{j<p}\mathscr{R}_{j} $};
\path [draw,-,black!90] (d) -- (e) node{};
\path [draw,-,black!90] (d) -- (f) node{};
\node at (5+0.7,0) {$+$};
\node(dd) at (5.5+3,1)[s1]{};
\node(ee) at (5.5+2,-1.2){ $\sum\limits_{j<p}\mathscr{R}_{j}$};
\node(ff) at (5.5+4,-1){ $\mathscr{R}_{p}$};
\path [draw,-,black!90] (dd) -- (ee) node{};
\path [draw,-,black!90] (dd) -- (ff) node{};
\node(dd) at (13,1)[s1]{};
\node(ee) at (13,-1){ $\mathscr{R}_{p}$};
\path [draw,-,black!90] (dd) -- (ee) node{};
\node at (11.5,0) {$+\ \ a\cdot$};
\end{tikzpicture} \end{center} In terms of generating functions, these equations read as \begin{equation*}
R_p(z)=zR_{p-1}^2(z)+2zR_p(z)\sum_{j<p}R_j(z)+azR_p(z), \quad p\ge1;\quad R_0(z)=1. \end{equation*} Amazingly, with the substitution $z=\frac{u}{1+(a+2)u+u^2}$, formally we get the \emph{same} solution as in the binary case: \begin{equation*}
R_p(z)=\frac{1-u^2}{u}\frac{u^{2^p}}{1-u^{2^{p+1}}}. \end{equation*} The proof by induction is as before. One sees another advantage of the substitution: on a formal level, many manipulations do not need to be repeated. Only when one switches back to the $z$-world do things become different.
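Since all quantities are rational functions of $u$, the closed form can be verified mechanically: for any rational $u$, with $z=\frac{u}{1+(a+2)u+u^2}$, the recursion must hold as an exact identity. A small check in Python, with exact rational arithmetic (the code and helper names are ours):

```python
from fractions import Fraction as Fr

def R(p, u):
    """Closed form R_p, evaluated at a rational u."""
    return (1 - u**2) / u * u**(2**p) / (1 - u**(2**(p + 1)))

def check(a, u, pmax):
    z = u / (1 + (a + 2) * u + u**2)
    assert R(0, u) == 1  # R_0 = 1
    for p in range(1, pmax + 1):
        lhs = R(p, u)
        # R_p = z R_{p-1}^2 + 2 z R_p sum_{j<p} R_j + a z R_p
        rhs = (z * R(p - 1, u)**2
               + 2 * z * lhs * sum(R(j, u) for j in range(p))
               + a * z * lhs)
        assert lhs == rhs
    return True

print(check(3, Fr(1, 7), 5))  # True
```

Running the same check with $a=0$ reproduces the classical binary case.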
Now we move to Hex-trees. \begin{center}
\begin{tikzpicture}
[inner sep=1.3mm,
s1/.style={circle=10pt,draw=black!90,thick},
s2/.style={rectangle,draw=black!50,thick},scale=0.5]
\node at ( -12.4,0.1) { ${\mathscr{H}}$};
\node at (-11.0,0) { $=$};
\node(a) at (-1,1)[s1]{};
\node(b) at (-3,-1){ ${\mathscr{H}\setminus\{\square\}}$};
\node(c) at (1,-1){ ${\mathscr{H}\setminus\{\square\}}$};
\path [draw,-,black!90] (a) -- (b) node{};
\path [draw,-,black!90] (a) -- (c) node{};
\begin{scope}[xshift=13cm]
\node at (-10.0,0) { $+$};
\node at (-4.5,0) { $+$};
\node(a1) at (-6.5,1)[s1]{};
\node(b1) at (-7.5,-1){ $ {\mathscr{H}}\setminus\{\square\}$};
\path [draw,-,black!90] (a1) -- (b1) node{};
\end{scope}
\begin{scope}[xshift=21cm]
\node(a1) at (-6.5,1)[s1]{};
\node(b1) at (-5.5,-1){ $ {\mathscr{H}}\setminus\{\square\}$};
\path [draw,-,black!90] (a1) -- (b1) node{};
\end{scope}
\begin{scope}[xshift=17cm]
\node at (-4.5,0) { $+$};
\node(a1) at (-6.5,1)[s1]{};
\node(b1) at (-6.5,-1){ $ {\mathscr{H}}\setminus\{\square\}$};
\path [draw,-,black!90] (a1) -- (b1) node{};
\end{scope}
\node at (-9.0,0) { $\square\ \ +$};
\node[s1] at (-7.0,0) { };
\node at (-5.0,0) {$+$ };
\end{tikzpicture} \end{center}
\subsection*{Hex trees}
Hex trees either have two non-empty successors, or one of 3 types of unary successors (called left, middle, right). The author first encountered this family in \cite{KimStanley}, but older literature can be found by following the references there and the usual search engines.
The generating function satisfies \begin{align*}
H&(z)=1+z(H(z)-1)^2+z+3z(H(z)-1)=\frac{1-z-\sqrt{(1-z)(1-5z)}}{2z}\\
&=1+z+3{z}^{2}+10{z}^{3}+36{z}^{4}+137{z}^{5}+543{z}^{6}+2219
{z}^{7}+9285{z}^{8}+39587{z}^{9}+\cdots. \end{align*} The same generating function also appears in \cite{HPW}, and it is again sequence A002212 in \cite{OEIS}. One can rewrite the symbolic equation as \begin{center}
\begin{tikzpicture}
[inner sep=1.3mm,
s1/.style={circle=10pt,draw=black!90,thick},
s2/.style={rectangle,draw=black!50,thick},scale=0.5]
\node at ( -12.4,0.1) { ${\mathscr{H}}$};
\node at (-11.0,0) { $=$};
\begin{scope}[xshift=-4cm]
\node(a) at (-1,1)[s1]{};
\node(b) at (-3,-1){ $\mathscr{H}$};
\node(c) at (1,-1){ $\mathscr{H}$};
\path [draw,-,black!90] (a) -- (b) node{};
\path [draw,-,black!90] (a) -- (c) node{};
\end{scope}
\begin{scope}[xshift=7.5cm]
\node at (-8.5,0) { $+$};
\node(a1) at (-6.5,1)[s1]{};
\node(b1) at (-6.5,-1){ $ {\mathscr{H}}\setminus\{\square\}$};
\path [draw,-,black!90] (a1) -- (b1) node{};
\end{scope}
\node at (-9.0,0) { $\square\ \ \ +$};
\end{tikzpicture} \end{center} and see in this way that Hex-trees are just unary-binary trees (with parameter $a=1$).
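This identification can be confirmed by iterating both functional equations as power series; the coefficients agree and form sequence A002212. A small Python sketch (the code and helper names are ours):

```python
def series_hex(nmax):
    """[z^n]H(z) from H = 1 + z(H-1)^2 + z + 3z(H-1), by fixed-point iteration."""
    H = [1] + [0] * nmax
    for _ in range(nmax):
        G = H[:]; G[0] -= 1                                 # G = H - 1
        new = [1] + [0] * nmax
        for n in range(1, nmax + 1):
            sq = sum(G[i] * G[n - 1 - i] for i in range(n)) # [z^{n-1}](H-1)^2
            new[n] = sq + (1 if n == 1 else 0) + 3 * G[n - 1]
        H = new
    return H

def series_unary_binary(nmax, a):
    """[z^n]N(z) from N = 1 + z N^2 + a z (N-1)."""
    N = [1] + [0] * nmax
    for _ in range(nmax):
        new = [1] + [0] * nmax
        for n in range(1, nmax + 1):
            new[n] = (sum(N[i] * N[n - 1 - i] for i in range(n))
                      + a * (N[n - 1] - (1 if n == 1 else 0)))
        N = new
    return N

print(series_hex(9))  # [1, 1, 3, 10, 36, 137, 543, 2219, 9285, 39587]
```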
\subsection*{Continuing with enumerations}
First, we will enumerate the number of (generalized) unary-binary trees with $n$ (internal) nodes. For that we need the notion of generalized trinomial coefficients, viz. \begin{equation*}
\binom{n;1,a,1}{k}:=[z^k](1+az+z^2)^n. \end{equation*} Of course, for $a=2$, this simplifies to the binomial coefficient $\binom{2n}{k}$. We will use contour integration to pull out coefficients; the contour of integration, in whatever variable, is a small circle (or equivalent) around the origin. Under the substitution $z=\frac{u}{1+(a+2)u+u^2}$, the generating function of all (generalized) unary-binary trees is simply $1+u$, so the desired number is \begin{align*}
[z^n](1+u)&=\frac1{2\pi i}\oint \frac{dz}{z^{n+1}}(1+u)\\
&=\frac1{2\pi i}\oint \frac{du(1-u^2)(1+(a+2)u+u^2)^{n+1}}{(1+(a+2)u+u^2)^2u^{n+1}}(1+u)\\
&=[u^{n}](1-u)(1+u)^2(1+(a+2)u+u^2)^{n-1}\\
&=\binom{n-1;1,a+2,1}{n}+\binom{n-1;1,a+2,1}{n-1}\\*
&\hspace*{4cm}-\binom{n-1;1,a+2,1}{n-2}-\binom{n-1;1,a+2,1}{n-3}. \end{align*} Then we introduce $S_p(z)=R_{p}(z)+R_{p+1}(z)+R_{p+2}(z)+\cdots$, the generating function of trees with Horton-Strahler number $\ge p$. Using the summation formula proved earlier, we get \begin{equation*}
S_p(z)=\frac{1-u^2}{u}\frac{u^{2^p}}{1-u^{2^{p}}}=
\frac{1-u^2}{u}\sum_{k\ge1}u^{k2^p}. \end{equation*} Further, \begin{align*}
[z^n]S_p(z)&=\sum_{k\ge1}\frac1{2\pi i}\oint \frac{dz}{z^{n+1}}\frac{1-u^2}{u}u^{k2^p}. \end{align*}
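The coefficient extraction can be cross-checked by computing the generalized trinomial coefficients directly and comparing against the series expansion of the functional equation $N=1+zN^2+az(N-1)$; a small Python sketch (the code and helper names are ours):

```python
def trinom(n, b, k):
    """Generalized trinomial coefficient [z^k](1 + b*z + z^2)^n."""
    poly = [1]
    for _ in range(n):
        new = [0] * (len(poly) + 2)
        for i, c in enumerate(poly):
            new[i] += c
            new[i + 1] += b * c
            new[i + 2] += c
        poly = new
    return poly[k] if 0 <= k < len(poly) else 0

def count_formula(n, a):
    """Trees with n internal nodes via
    [u^n](1-u)(1+u)^2 (1+(a+2)u+u^2)^(n-1) = [u^n](1+u-u^2-u^3)(...)^(n-1)."""
    b = a + 2
    return (trinom(n - 1, b, n) + trinom(n - 1, b, n - 1)
            - trinom(n - 1, b, n - 2) - trinom(n - 1, b, n - 3))

def count_series(nmax, a):
    """[z^n]N(z) from the functional equation N = 1 + z N^2 + a z (N - 1)."""
    N = [1] + [0] * nmax
    for _ in range(nmax):
        new = [1] + [0] * nmax
        for n in range(1, nmax + 1):
            new[n] = (sum(N[i] * N[n - 1 - i] for i in range(n))
                      + a * (N[n - 1] - (1 if n == 1 else 0)))
        N = new
    return N

print([count_formula(n, 1) for n in range(1, 8)])  # [1, 3, 10, 36, 137, 543, 2219]
```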
\subsection*{Asymptotics}
We start by deriving asymptotics for the number of (generalized) unary-binary trees with $n$ (internal) nodes. This is a standard application of singularity analysis of generating functions, as described in \cite{FlOd90} and \cite{FS}.
We start from the generating function \begin{equation*}
N(z)=\frac{1-az-\sqrt{1-2(a+2)z+a(a+4)z^2}}{2z} \end{equation*} and determine the singularity closest to the origin, which is the value making the square root disappear: $z=\frac1{a+4}$. After that, the local expansion of $N(z)$ around this singularity is determined: \begin{equation*}
N(z) \sim 2-\sqrt{a+4}\sqrt{1-(a+4)z}. \end{equation*} The translation lemmas given in \cite{FlOd90} and \cite{FS} provide the asymptotics: \begin{align*}
[z^n]N(z)&\sim [z^n]\Big(2-\sqrt{a+4}\sqrt{1-(a+4)z}\Big)\\&
=-\sqrt{a+4}(a+4)^n\frac{n^{-3/2}}{\Gamma(-\frac12)}=(a+4)^{n+1/2}\frac{1}{2\sqrt\pi n^{3/2}}. \end{align*} Note that for $a=0$ this is the well-known asymptotic formula for the number of binary trees with $n$ nodes.
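The first-order formula can be sanity-checked numerically; the ratio of the exact coefficient to the approximation tends to $1$, with a correction of order $1/n$. A self-contained Python sketch (the iteration computes the exact coefficients; helper names are ours):

```python
from math import pi, sqrt

def count_series(nmax, a):
    """[z^n]N(z) from N = 1 + z N^2 + a z (N-1), by fixed-point iteration."""
    N = [1] + [0] * nmax
    for _ in range(nmax):
        new = [1] + [0] * nmax
        for n in range(1, nmax + 1):
            new[n] = (sum(N[i] * N[n - 1 - i] for i in range(n))
                      + a * (N[n - 1] - (1 if n == 1 else 0)))
        N = new
    return N

a, n = 1, 150
exact = count_series(n, a)[n]
approx = (a + 4) ** (n + 0.5) / (2 * sqrt(pi) * n ** 1.5)
print(exact / approx)  # close to 1; the correction term is of order 1/n
```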
Now we move to the generating function for the average number of registers. Apart from normalization it is \begin{align*}
\sum_{p\ge1}pR_p(z)&=\sum_{p\ge1}S_p(z)=\frac{1-u^2}{u}\sum_{p\ge1}\sum_{k\ge1}u^{k2^p}\\
&=\frac{1-u^2}{u}\sum_{n\ge1}v_2(n)u^n, \end{align*} where $v_2(n)$ is the highest exponent $k$ such that $2^k$ divides $n$.
This has to be studied around $u=1$, which, upon setting $u=e^{-t}$, means around $t=0$. Eventually (and this is the only step that differs here) the result must be retranslated into a singular expansion in $z$ around the singularity, which depends on the parameter $a$.
For the reader's convenience, we also repeat the steps that were known before. The first factor is elementary: \begin{equation*}
\frac{1-u^2}{u}\sim2t+{\frac {1}{3}}{t}^{3}+\cdots \end{equation*} For \begin{equation*}
\sum_{p\ge1}\sum_{k\ge1}e^{-k2^pt}, \end{equation*} one applies the Mellin transform, with the result \begin{equation*}
\frac{\Gamma(s)\zeta(s)}{2^s-1}. \end{equation*} Applying the inversion formula, one finds \begin{equation*}
\sum_{p\ge1}\sum_{k\ge1}e^{-k2^pt}=\frac1{2\pi i}\int_{2-i\infty}^{2+i\infty}t^{-s}\frac{\Gamma(s)\zeta(s)}{2^s-1}ds. \end{equation*} Shifting the line of integration to the left, the residues at the poles $s=1$, $s=0$, $s=\chi_k=\frac{2k\pi i}{\log2}$, $k\neq0$ provide enough terms for our asymptotic expansion. \begin{equation*}
\frac1{t}+{\frac {\gamma}{2\log2 }}-\frac14- \frac {\log \pi }{2\log2 }+\frac {\log t }{2\log2}
+\frac1{\log2}\sum_{k\neq0}\Gamma(\chi_k)\zeta(\chi_k)t^{-\chi_k}. \end{equation*} Combined with the elementary factor, this leads to \begin{equation*}
2+\Big(\frac {\gamma}{\log2 }-\frac12-\frac {\log \pi }{\log2 }+\frac {\log t }{\log2}\Big)t+\frac{2t}{\log2}\sum_{k\neq0}\Gamma(\chi_k)\zeta(\chi_k)t^{-\chi_k}+O(t^2\log t). \end{equation*} Now we want to translate into the original $z$-world. Since $z=\frac{u}{1+(a+2)u+u^2}$, $u=1$ translates into the singularity $z=\frac{1}{4+a}$. Further, \begin{equation*}
t\sim \sqrt{4+a}\cdot \sqrt{1-z(4+a)}, \end{equation*} let us abbreviate $A=4+a$, then for singularity analysis we must consider \begin{align*}
&\frac {\sqrt{A}\cdot \sqrt{1-zA}\log (1-zA) }{2\log2}\\
&+ \Big(\frac {\gamma}{\log2 }-\frac12-\frac {\log \pi }{\log2 }+\frac{\log A}{2\log 2}\Big)\sqrt{A}\cdot \sqrt{1-zA}\\
&+\frac{2 }{\log2}\sum_{k\neq0}\Gamma(\chi_k)\zeta(\chi_k) A^{\frac{1-\chi_k}2}(1-zA)^{\frac{1-\chi_k}2}. \end{align*} The formula that is perhaps less known and needed here is \cite{FlOd90} \begin{align*}
[z^n]\log(1-z)\sqrt{1-z}\sim \frac{n^{-3/2}\log n}{2\sqrt \pi}+\frac{n^{-3/2}}{2\sqrt \pi}(-2+\gamma +2\log2); \end{align*} furthermore we need \begin{equation*}
[z^n](1-z)^\alpha \sim \frac{n^{-\alpha-1}}{\Gamma(-\alpha)}. \end{equation*} We start with the most complicated term: \begin{align*}
\frac{[z^n]\frac {\sqrt{A}\cdot \sqrt{1-zA}\log (1-zA) }{2\log2}}{[z^n]N(z)}
&\sim \frac {\sqrt{A}}{2\log2}\frac{A^n\Big(\frac{n^{-3/2}\log n}{2\sqrt \pi}+\frac{n^{-3/2}}{2\sqrt \pi}(-2+\gamma +2\log2)\Big)}
{A^{n+1/2}\frac{1}{2\sqrt\pi n^{3/2}}}\\
&= \log_4 n+1+ \frac{\gamma }{2\log2}- \frac{1}{\log2}. \end{align*} The next term we consider is \begin{align*}
\Big(\frac {\gamma}{\log2 }-\frac12-\frac {\log \pi }{\log2 }+\frac{\log A}{2\log 2}\Big)&\sqrt{A}\frac{[z^n] \sqrt{1-zA}}{[z^n]N(z)}\\*
&\sim
\Big(\frac {\gamma}{\log2 }-\frac12-\frac {\log \pi }{\log2 }+\frac{\log A}{2\log 2}\Big)\sqrt{A}\frac{[z^n] \sqrt{1-zA}}{-\sqrt{A}[z^n]\sqrt{1-zA}}\\
&=-\frac {\gamma}{\log2 }+\frac12+\frac {\log \pi }{\log2 }-\frac{\log A}{2\log 2}. \end{align*} The last term we consider is \begin{align*}
\frac{2 }{\log2}&\Gamma(\chi_k)\zeta(\chi_k) A^{\frac{1-\chi_k}2}\frac{[z^n](1-zA)^{\frac{1-\chi_k}2}}{-\sqrt{A}[z^n]\sqrt{1-zA}}\\
&\sim\frac{4 \sqrt\pi }{\log2}\frac{\Gamma(\chi_k)\zeta(\chi_k)}{\Gamma\big(\frac{\chi_k-1}{2}\big)} A^{-\frac{\chi_k}2}n^{\chi_k/2}. \end{align*} Eventually we have evaluated the average value of the Horton-Strahler numbers: \begin{theorem}The average Horton-Strahler number of weighted unary-binary trees with $n$ nodes is given by the asymptotic formula
\begin{align*}
\log_4 n&- \frac{\gamma }{2\log2}- \frac{1}{\log2}+\frac32+\frac {\log \pi }{\log2 }-\frac{\log A}{2\log 2}
+\frac{4 \sqrt{\pi} }{\log2}\sum_{k\neq0}\frac{\Gamma(\chi_k)\zeta(\chi_k)}{\Gamma\big(\frac{\chi_k-1}{2}\big)} A^{-\frac{\chi_k}2}n^{\chi_k/2}\\
&=\log_4 n- \frac{\gamma }{2\log2}- \frac{1}{\log2}+\frac32+\frac {\log \pi }{\log2 }-\frac{\log A}{2\log 2}+\psi(\log_4n),
\end{align*}
with a tiny periodic function $\psi(x)$ of period 1. \end{theorem}
\section{Marked ordered trees}
In \cite{Deutsch-italy} we find the following variation of ordered trees: each rightmost edge may be marked or not, provided it does not lead to an endnode (leaf). We depict marked edges in red and draw all marked ordered trees of size 4 (4 nodes): \begin{figure}
\caption{All 10 marked ordered trees with 4 nodes.}
\end{figure}
Now we move to a symbolic equation for the marked ordered trees: \begin{figure}
\caption{Symbolic equation for marked ordered trees.\\ $\mathscr{A}\cdots\mathscr{A}$ refers to $\ge0$ copies of $\mathscr{A}$.}
\end{figure}
In terms of generating functions, \begin{equation*}
A=z+\frac{z}{1-A}z+\frac{z}{1-A}2(A-z), \end{equation*} with the solution \begin{equation*}
A(z)=\frac{1-z-\sqrt{1-6z+5z^2}}{2}=z+{z}^{2}+3{z}^{3}+10{z}^{4}+36{z}^{5}+\cdots. \end{equation*}
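These coefficients can be confirmed by iterating the functional equation as an integer power series; a small Python sketch (the helper name is ours):

```python
def marked_series(nmax):
    """[z^n]A(z) from A = z + z^2/(1-A) + 2z(A-z)/(1-A), by fixed-point iteration."""
    A = [0] * (nmax + 1)
    for _ in range(nmax + 1):
        B = [0] * (nmax + 1)                                  # B = 1/(1-A)
        B[0] = 1
        for n in range(1, nmax + 1):
            B[n] = sum(A[i] * B[n - i] for i in range(1, n + 1))
        new = [0] * (nmax + 1)
        for n in range(1, nmax + 1):
            t = (1 if n == 1 else 0)                          # the term z
            if n >= 2:
                t += B[n - 2]                                 # z^2/(1-A)
            t += 2 * sum((A[i] - (1 if i == 1 else 0)) * B[n - 1 - i]
                         for i in range(n))                   # 2z(A-z)/(1-A)
            new[n] = t
        A = new
    return A

print(marked_series(5))  # [0, 1, 1, 3, 10, 36]
```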
The importance of this family of trees lies in the bijection to skew Dyck paths, as given in \cite{Deutsch-italy}. One walks around the tree as usual and translates it into a Dyck path; the only difference concerns the red edges. On the way down, nothing special is to be reported, but on the way up, a red edge is translated into a skew step $(-1,-1)$. The present author believes that trees are more manageable than skew Dyck paths when it comes to enumeration issues.
The 10 trees of Figure 1 translate as follows: \begin{equation*}
\begin{tikzpicture}[scale=0.3]
\draw (0,0)--(3,3)--(6,0);
\end{tikzpicture}
\quad
\begin{tikzpicture}[scale=0.3]
\draw (0,0)--(3,3)--(5,1)--(4,0);
\end{tikzpicture}
\quad
\begin{tikzpicture}[scale=0.3]
\draw (0,0)--(3,3)--(4,2)--(3,1)--(4,0);
\end{tikzpicture}
\quad
\begin{tikzpicture}[scale=0.3]
\draw (0,0)--(3,3)--(4,2)--(3,1)--(2,0);
\end{tikzpicture}
\quad
\begin{tikzpicture}[scale=0.3]
\draw (0,0)--(2,2)--(4,0)--(5,1)--(6,0);
\end{tikzpicture} \end{equation*} \begin{equation*}
\begin{tikzpicture}[scale=0.3]
\draw (0,0)--(1,1)--(2,0)--(4,2)--(6,0);
\end{tikzpicture}
\quad
\begin{tikzpicture}[scale=0.3]
\draw (0,0)--(1,1)--(2,0)--(4,2)--(5,1)--(4,0);
\end{tikzpicture}
\quad
\begin{tikzpicture}[scale=0.3]
\draw (0,0)--(1,1)--(2,2)--(3,1)--(4,2)--(6,0);
\end{tikzpicture}
\quad
\begin{tikzpicture}[scale=0.3]
\draw (0,0)--(1,1)--(2,2)--(3,1)--(4,2)--(5,1)--(4,0);
\end{tikzpicture}
\quad
\begin{tikzpicture}[scale=0.3]
\draw (0,0)--(1,1)--(2,0)--(3,1)--(4,0)--(5,1)--(6,0);
\end{tikzpicture} \end{equation*}
Skew Dyck paths and a dual version (reading the paths from right-to-left) will be discussed in a later section.
\subsection*{Parameters of marked ordered trees}
There are many parameters, usually considered in the context of ordered trees, that can also be studied for marked ordered trees. Of course, we cannot be encyclopedic here; we just consider a few of them and leave further analysis to the future.
\subsubsection*{The number of leaves}
To get this, it is most natural to use an additional variable $u$ when translating the symbolic equation, so that $z^nu^k$ refers to trees with $n$ nodes and $k$ leaves. One obtains \begin{equation*}
F=zu+\frac{z}{1-F}\bigl(zu+2(F-zu)\bigr), \end{equation*} with the solution \begin{align*}
F(z,u)&=-z+\frac{zu}2+\frac12-\frac12\sqrt {4{z}^{2}-4z+{z}^{2}{u}^{2}-2zu+1}\\*
&=zu+{z}^{2}u+ \left( 2u+{u}^{2}\right) {z}^{3}+ \left( 4u+5{u}^{2}+{u}^{3} \right) {z}^{4}+\cdots. \end{align*} The factor $4u+5{u}^{2}+{u}^{3}$ corresponds to the 10 trees in Figure 1.
Of interest is also the average number of leaves, when all marked ordered trees of size $n$ are considered to be equally likely. For that, we differentiate $F(z,u)$ with respect to $u$ and then set $u:=1$, with the result \begin{equation*}
\frac{z}{2}+\frac{z-z^2}{2\sqrt{1-6z+5z^2}}=\frac{z}{1-v}, \quad\text{with the usual}\quad z=\frac{v}{1+3v+v^2}. \end{equation*} Since $F(z,1)=z(1+v)$, it follows that the average is asymptotic to \begin{align*}
\frac{[z^{n+1}]\frac{z}{1-v}}{[z^{n+1}]z(1+v)}&=\frac{[z^{n}]\frac{1}{1-v}}{[z^{n}](1+v)}\sim\frac{[z^n]\frac1{\sqrt5}\frac1{\sqrt{1-5z}}}{\frac{5^{n+\frac12}}{2\sqrt\pi\, n^{3/2}}}\\
&\sim\frac{\frac{5^{n-\frac12}\,n^{-1/2}}{\Gamma(\frac12)}}{\frac{5^{n+\frac12}}{2\sqrt\pi\, n^{3/2}}}=\frac{2n}{5}. \end{align*} Note that the corresponding number for ordered trees (unmarked) is $\frac n2$, so marked ordered trees have somewhat fewer leaves.
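The exact averages can be computed from integer series for $v$; a small Python sketch (the code and helper names are ours):

```python
from fractions import Fraction

def v_series(nmax):
    """v(z) defined by z = v/(1+3v+v^2), i.e. v = z(1 + 3v + v^2)."""
    v = [0] * (nmax + 1)
    if nmax >= 1:
        v[1] = 1
    for n in range(2, nmax + 1):
        v[n] = 3 * v[n - 1] + sum(v[i] * v[n - 1 - i] for i in range(1, n - 1))
    return v

def avg_leaves(n):
    """Exact average number of leaves over all marked ordered trees with n+1
    nodes: [z^n] 1/(1-v) divided by [z^n] (1+v)."""
    v = v_series(n)
    B = [0] * (n + 1)                 # B = 1/(1-v)
    B[0] = 1
    for m in range(1, n + 1):
        B[m] = sum(v[i] * B[m - i] for i in range(1, m + 1))
    return Fraction(B[n], v[n])

print(avg_leaves(3), avg_leaves(4))  # 17/10 25/12
```

The value $17/10$ for trees with 4 nodes matches the data of Figure 1: $4\cdot1+5\cdot2+1\cdot3=17$ leaves among the 10 trees.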
\subsubsection*{The height}
As in the seminal paper \cite{BrKnRi72}, we define the height in terms of the longest chain of nodes from the root to a leaf. Further, let $p_h=p_h(z)$ be the generating function of marked ordered trees of height $\le h$. From the symbolic equation, \begin{align*}
p_{h+1}=z+\frac{z^2}{1-p_h}+\frac{2z(p_h-z)}{1-p_h}=-z+\frac{2z-z^2}{1-p_h},\quad h\ge1,\ p_1=z. \end{align*} By some creative guessing, separating numerator and denominator, we find the solution \begin{equation*}
p_h=z(1+v)\frac{(1+2v)^{h-1}-v^h(v+2)^{h-1}}{(1+2v)^{h-1}-v^{h+1}(v+2)^{h-1}}, \end{equation*} which can now be proved by induction:
We have $p_1=z(1+v)\frac{1-v}{1-v^{2}}=z$. Furthermore, the induction step is best checked using a computer.
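Indeed, the induction step is easily delegated to a computer: at any rational $v$, with $z=\frac{v}{1+3v+v^2}$, the closed form satisfies the recursion exactly. A Python sketch with exact rationals (the helper name is ours):

```python
from fractions import Fraction as Fr

def p(h, v):
    """Closed form for p_h, evaluated at a rational v, with z = v/(1+3v+v^2)."""
    z = v / (1 + 3 * v + v * v)
    num = (1 + 2 * v) ** (h - 1) - v ** h * (v + 2) ** (h - 1)
    den = (1 + 2 * v) ** (h - 1) - v ** (h + 1) * (v + 2) ** (h - 1)
    return z * (1 + v) * num / den

v = Fr(1, 2)
z = v / (1 + 3 * v + v * v)
assert p(1, v) == z
for h in range(1, 8):
    # p_{h+1} = -z + (2z - z^2)/(1 - p_h)
    assert p(h + 1, v) == -z + (2 * z - z * z) / (1 - p(h, v))
print("recursion verified")
```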
The limit of $p_h$ for $h\to\infty$ is $z(1+v)$, the generating function of \emph{all} marked ordered trees, as expected. Taking differences, we get the generating functions of trees of height $>h$: \begin{align*}
z(1+v)&-z(1+v)\frac{(1+2v)^{h-1}-v^h(v+2)^{h-1}}{(1+2v)^{h-1}-v^{h+1}(v+2)^{h-1}}\\
&=z(1+v)\frac{(1+2v)^{h-1}-v^{h+1}(v+2)^{h-1}-(1+2v)^{h-1}+v^h(v+2)^{h-1}}{(1+2v)^{h-1}-v^{h+1}(v+2)^{h-1}}\\
&=z(1-v^2)\frac{(v+2)^{h-1}v^h}{(1+2v)^{h-1}-v^{h+1}(v+2)^{h-1}}. \end{align*} From this, the average height can be worked out, as in the model paper \cite{HPW}. We sketch the essential steps. For the average height, one needs \begin{equation*}
\sum_{h\ge0}z(1-v^2)\frac{(v+2)^{h-1}v^h}{(1+2v)^{h-1}-v^{h+1}(v+2)^{h-1}} \end{equation*} and its behaviour around $v=1$, viz. \begin{equation*}
2z(1-v)\sum_{h\ge0}\frac{3^{h-1}v^h}{3^{h-1}-v^{h+1}3^{h-1}}
\sim2z(1-v)\sum_{h\ge1}\frac{v^h}{1-v^{h}}. \end{equation*} The behaviour of the series can be taken straight from \cite{HPW}.
We find there \begin{equation*}
\sum_{h\ge1}\frac{v^h}{1-v^{h}}\sim-\frac{\log(1-v)}{1-v}, \end{equation*} and further \begin{equation*}
\sum_{h\ge0}z(1-v^2)\frac{(v+2)^{h-1}v^h}{(1+2v)^{h-1}-v^{h+1}(v+2)^{h-1}}
\sim- 2z\log(1-v), \end{equation*} so that the coefficient of $z^{n+1}$ is asymptotic to $-2[z^n]\log(1-v)$. Since $1-v\sim \sqrt5\sqrt{1-5z}$, \begin{equation*}
- 2z\log(1-v)\sim -2z\log\sqrt{1-5z}= -z\log(1-5z), \end{equation*} and the coefficient of $z^{n+1}$ in it is asymptotic to $\frac{5^n}{n}$. This has to be divided (as derived earlier) by \begin{equation*}
5^{n+\frac12}\frac{1}{2\sqrt\pi n^{3/2}}, \end{equation*} with the result \begin{equation*}
2\frac{5^n}{n}\frac1{5^{n+\frac12}}\sqrt\pi n^{3/2}=\frac{2}{\sqrt5}\sqrt{\pi n}. \end{equation*} Note that the constant in front of $\sqrt{\pi n}$ for ordered trees is $\frac{2}{\sqrt4}=1$, so the average height for marked ordered trees is indeed a bit smaller thanks to the extra markings.
\section{A bijection between multi-edge trees and 3-coloured Motzkin paths}
Multi-edge trees are like ordered (planar, plane, \dots) trees, but instead of edges there are multiple edges. When drawing such a tree, instead of drawing, say 5 parallel edges, we just draw one edge and put the number 5 on it as a label. These trees were studied in \cite{polish, HPW}. For the enumeration, one must count edges. The generating function $F(z)$ satisfies \begin{equation*}
F(z)=\sum_{k\ge0}\Big(\frac{z}{1-z}F(z)\Big)^k=\frac1{1-\frac{z}{1-z}F(z)}, \end{equation*} whence \begin{equation*}
F(z)=\frac{1-z-\sqrt{1-6z+5z^2}}{2z}=1+z+3{z}^{2}+10{z}^{3}+36{z}^{4}+137{z}^{5}+543{z}^{6}+\cdots. \end{equation*} The coefficients form once again sequence A002212 in \cite{OEIS}.
A Motzkin path consists of up-steps, down-steps, and horizontal steps; see sequence A091965 in \cite{OEIS} and the references given there. Like Dyck paths, they start at the origin and end, after $n$ steps, again at the $x$-axis, but are not allowed to go below the $x$-axis. A 3-coloured Motzkin path is built like a Motzkin path, but there are 3 different types of horizontal steps, which we call \emph{red, green, blue}. The generating function $M(z)$ satisfies \begin{equation*}
M(z)=1+3zM(z)+z^2M(z)^2=\frac{1-3z-\sqrt{1-6z+5z^2}}{2z^2}, \quad\text{or}\quad F(z)=1+zM(z). \end{equation*} So multi-edge trees with $N$ edges (counting the multiplicities) correspond to 3-coloured Motzkin paths of length $N-1$.
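Both generating functions can be expanded by iterating their functional equations, which makes the shift between the two sequences visible; a small Python sketch (the code and helper names are ours):

```python
def motzkin3(nmax):
    """3-coloured Motzkin numbers from M = 1 + 3zM + z^2 M^2."""
    M = [1] + [0] * nmax
    for _ in range(nmax):
        new = [1] + [0] * nmax
        for n in range(1, nmax + 1):
            c = 3 * M[n - 1]
            if n >= 2:
                c += sum(M[i] * M[n - 2 - i] for i in range(n - 1))
            new[n] = c
        M = new
    return M

def multi_edge(nmax):
    """Multi-edge trees by number of edges (with multiplicity),
    from F = 1 + (z/(1-z)) F^2."""
    F = [1] + [0] * nmax
    for _ in range(nmax):
        F2 = [sum(F[i] * F[k - i] for i in range(k + 1)) for k in range(nmax + 1)]
        F = [1] + [sum(F2[k] for k in range(n)) for n in range(1, nmax + 1)]
    return F

print(multi_edge(6))  # [1, 1, 3, 10, 36, 137, 543]
print(motzkin3(5))    # [1, 3, 10, 36, 137, 543]
```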
The purpose of this note is to describe a bijection. It transforms trees into paths, but all steps are reversible.
\subsection*{The details}
As a first step, the multiplicities will be ignored, and the tree then has only $n$ edges. The standard translation of such a tree into the world of Dyck paths, which is in every book on combinatorics, leads to a Dyck path of length $2n$. Then the Dyck path is transformed bijectively into a 2-coloured Motzkin path of length $n-1$ (the colours used are red and green). This transformation plays a prominent role in \cite{Shapiro}, but is most likely much older. I believe that people like Viennot have known it for 40 years. I would be glad to get a proper historical account from the gentle readers.
The last step is then to use the third colour (blue) to deal with the multiplicities.
The first up-step and the last down-step of the Dyck path will be deleted. Then, the remaining $2n-2$ steps are coded pairwise into a 2-Motzkin path of length $n-1$: \begin{equation*}
\begin{tikzpicture}[scale=0.3]
\path (0,0) node(x1) {\tiny$\bullet$} ;
\path (1,1) node(x2) {\tiny$\bullet$};
\path (2,2) node(x3) {\tiny$\bullet$};
\draw (0,0) -- (2,2);
\end{tikzpicture}
\raisebox{0.5 em}{$\longrightarrow$}
\begin{tikzpicture}[scale=0.3]
\path (0,0) node(x1) {\tiny$\bullet$} ;
\path (1,1) node(x2) {\tiny$\bullet$};
\draw (0,0) -- (1,1);
\end{tikzpicture}
\qquad
\begin{tikzpicture}[scale=0.3]
\path (0,2) node(x1) {\tiny$\bullet$} ;
\path (1,1) node(x2) {\tiny$\bullet$};
\path (2,0) node(x3) {\tiny$\bullet$};
\draw (0,2) -- (2,0);
\end{tikzpicture}
\raisebox{0.5 em}{$\longrightarrow$}
\begin{tikzpicture}[scale=0.3]
\path (0,1) node(x1) {\tiny$\bullet$} ;
\path (1,0) node(x2) {\tiny$\bullet$};
\draw (0,1) -- (1,0);
\end{tikzpicture}
\qquad
\begin{tikzpicture}[scale=0.3]
\path (0,0) node(x1) {\tiny$\bullet$} ;
\path (1,1) node(x2) {\tiny$\bullet$};
\path (2,0) node(x3) {\tiny$\bullet$};
\draw (0,0) -- (1,1) -- (2,0);
\end{tikzpicture}
\raisebox{0.5 em}{$\longrightarrow$}
\begin{tikzpicture}[scale=0.3]
\path (0,0) node[red](x1) {\tiny$\bullet$} ;
\path (1,0) node[red](x2) {\tiny$\bullet$};
\draw[red, very thick] (0,0) -- (1,0);
\end{tikzpicture}
\qquad
\begin{tikzpicture}[scale=0.3]
\path (0,0) node(x1) {\tiny$\bullet$} ;
\path (1,-1) node(x2) {\tiny$\bullet$};
\path (2,0) node(x3) {\tiny$\bullet$};
\draw (0,0) -- (1,-1) -- (2,0);
\end{tikzpicture}
\raisebox{0.5 em}{$\longrightarrow$}
\begin{tikzpicture}[scale=0.3]
\path (0,0) node[green](x1) {\tiny$\bullet$} ;
\path (1,0) node[green](x2) {\tiny$\bullet$};
\draw[green, very thick] (0,0) -- (1,0);
\end{tikzpicture} \end{equation*}
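This pairwise coding is easy to implement and to test exhaustively for small sizes; the following Python sketch (function names are ours) trims the first and last step and then codes non-overlapping pairs of steps according to the four rules above:

```python
from itertools import product

def dyck_to_2motzkin(word):
    """Trim the first and last step of a Dyck word over 'U','D', then code
    non-overlapping pairs: UU -> 'U', DD -> 'D', UD -> 'R' (red), DU -> 'G' (green)."""
    inner = word[1:-1]
    code = {'UU': 'U', 'DD': 'D', 'UD': 'R', 'DU': 'G'}
    return ''.join(code[inner[i:i + 2]] for i in range(0, len(inner), 2))

def is_dyck(w):
    h = 0
    for c in w:
        h += 1 if c == 'U' else -1
        if h < 0:
            return False
    return h == 0

def is_2motzkin(w):
    h = 0
    for c in w:
        h += {'U': 1, 'D': -1}.get(c, 0)   # 'R' and 'G' are flat steps
        if h < 0:
            return False
    return h == 0

dycks = [''.join(w) for w in product('UD', repeat=8) if is_dyck(''.join(w))]
images = {dyck_to_2motzkin(w) for w in dycks}
print(len(dycks), len(images))  # 14 14
```

For semilength 4 one obtains all $C_4=14$ Dyck paths and 14 distinct 2-Motzkin paths of length 3, as expected.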
The last step is to deal with the multiplicities. If an edge is labelled with the number $a$, we will insert $a-1$ horizontal blue steps in the following way: Since there are currently $n-1$ symbols in the path, we have $n$ possible positions to enter something (in the beginning, in the end, between symbols). We go through the tree in pre-order, and enter the multiplicities one by one using the blue horizontal steps.
To make this procedure clearer, we prepared a list of the 10 multi-edge trees with 3 edges and the corresponding 3-Motzkin paths of length 2, with all intermediate steps worked out:
\begin{center}
\begin{table}[h]
\begin{tabular}{c | c | c |c}
\text{Multi-edge tree }&\text{Dyck path}&\text{2-Motzkin path}&\text{blue edges added}\\
\hline\hline
\begin{tikzpicture}[scale=0.5]
\path (0,0) node(x1) {\tiny$\bullet$} ;
\path (0,-1) node(x2) {\tiny$\bullet$};
\path (0,-2) node(x3) {\tiny$\bullet$};
\path (0,-3) node(x4) {\tiny$\bullet$};
\draw (0,0) -- (0,-1)node[pos=0.5,left]{\tiny1} ;
\draw (0,-1) -- (0,-2)node[pos=0.5,left]{\tiny1} ;
\draw (0,-2) -- (0,-3)node[pos=0.5,left]{\tiny 1} ;
\end{tikzpicture}
& \begin{tikzpicture}[scale=0.45]
\draw (0,0) -- (3,3) --(6,0);
\end{tikzpicture}
&
\begin{tikzpicture}[scale=0.45]
\draw[thick] (0,0) -- (1,1) --(2,0);
\end{tikzpicture}
& \begin{tikzpicture}[scale=0.45]
\draw[thick] (0,0) -- (1,1) --(2,0);
\end{tikzpicture}\\
\hline
\begin{tikzpicture}[scale=0.5]
\path (0,0) node(x1) {\tiny$\bullet$} ;
\path (0,-1) node(x2) {\tiny$\bullet$};
\path (0,-2) node(x3) {\tiny$\bullet$};
\draw (0,0) -- (0,-1)node[pos=0.5,left]{\tiny2} ;
\draw (0,-1) -- (0,-2)node[pos=0.5,left]{\tiny1} ;
\end{tikzpicture}
& \begin{tikzpicture}[scale=0.45]
\draw (0,0) -- (2,2) --(4,0);
\end{tikzpicture}
& \begin{tikzpicture}[scale=0.45]
\draw [red,thick](0,0) -- (1,0);
\end{tikzpicture}
& \begin{tikzpicture}[scale=0.45]
\draw [blue,thick](0,0) -- (1,0);
\draw [red,thick](1,0) -- (2,0);
\end{tikzpicture}\\
\hline
\begin{tikzpicture}[scale=0.5]
\path (0,0) node(x1) {\tiny$\bullet$} ;
\path (0,-1) node(x2) {\tiny$\bullet$};
\path (0,-2) node(x3) {\tiny$\bullet$};
\draw (0,0) -- (0,-1)node[pos=0.5,left]{\tiny1} ;
\draw (0,-1) -- (0,-2)node[pos=0.5,left]{\tiny2} ;
\end{tikzpicture}
& \begin{tikzpicture}[scale=0.45]
\draw (0,0) -- (2,2) --(4,0);
\end{tikzpicture}& \begin{tikzpicture}[scale=0.45]
\draw [red,thick](0,0) -- (1,0);
\end{tikzpicture}& \begin{tikzpicture}[scale=0.45]
\draw [red,thick](0,0) -- (1,0);
\draw [blue,thick](1,0) -- (2,0);
\end{tikzpicture}\\
\hline
\begin{tikzpicture}[scale=0.5]
\path (0,0) node(x1) {\tiny$\bullet$} ;
\path (0,-1) node(x2) {\tiny$\bullet$};
\draw (0,0) -- (0,-1)node[pos=0.5,left]{\tiny3} ;
\end{tikzpicture}
& \begin{tikzpicture}[scale=0.45]
\draw (0,0) -- (1,1) --(2,0);
\end{tikzpicture} & & \begin{tikzpicture}[scale=0.45]
\draw [blue,thick](0,0) -- (2,0);
\end{tikzpicture}\\
\hline
\begin{tikzpicture}[scale=0.5]
\path (0,0) node(x1) {\tiny$\bullet$} ;
\path (-1,-1) node(x2) {\tiny$\bullet$};
\path (-1,-2) node(x3) {\tiny$\bullet$};
\path (1,-1) node(x4) {\tiny$\bullet$};
\draw (0,0) -- (-1,-1)node[pos=0.3,left]{\tiny1} ;
\draw (-1,-1) -- (-1,-2)node[pos=0.3,left]{\tiny1} ;
\draw (0,0) -- (1,-1)node[pos=0.3,right]{\tiny1} ;
\end{tikzpicture}
& \begin{tikzpicture}[scale=0.45]
\draw (0,0) -- (2,2) --(4,0)--(5,1)--(6,0);
\end{tikzpicture}& \begin{tikzpicture}[scale=0.45]
\draw [red,thick](0,0) -- (1,0);
\draw [green,thick](1,0) -- (2,0);
\end{tikzpicture}& \begin{tikzpicture}[scale=0.45]
\draw [red,thick](0,0) -- (1,0);
\draw [green,thick](1,0) -- (2,0);
\end{tikzpicture}\\
\hline
\begin{tikzpicture}[scale=0.5]
\path (0,0) node(x1) {\tiny$\bullet$} ;
\path (-1,-1) node(x2) {\tiny$\bullet$};
\path (1,-1) node(x4) {\tiny$\bullet$};
\draw (0,0) -- (-1,-1)node[pos=0.3,left]{\tiny2} ;
\draw (0,0) -- (1,-1)node[pos=0.3,right]{\tiny1} ;
\end{tikzpicture}
& \begin{tikzpicture}[scale=0.45]
\draw (0,0) -- (1,1) --(2,0)--(3,1)--(4,0);
\end{tikzpicture}& \begin{tikzpicture}[scale=0.45]
\draw [green,thick](0,0) -- (1,0);
\end{tikzpicture} & \begin{tikzpicture}[scale=0.45]
\draw [blue,thick](0,0) -- (1,0);
\draw [green,thick](1,0) -- (2,0);
\end{tikzpicture}\\
\hline
\begin{tikzpicture}[scale=0.5]
\path (0,0) node(x1) {\tiny$\bullet$} ;
\path (-1,-1) node(x2) {\tiny$\bullet$};
\path (1,-1) node(x4) {\tiny$\bullet$};
\draw (0,0) -- (-1,-1)node[pos=0.3,left]{\tiny1} ;
\draw (0,0) -- (1,-1)node[pos=0.3,right]{\tiny2} ;
\end{tikzpicture}
& \begin{tikzpicture}[scale=0.45]
\draw (0,0) -- (1,1) --(2,0)--(3,1)--(4,0);
\end{tikzpicture}& \begin{tikzpicture}[scale=0.45]
\draw [green,thick](0,0) -- (1,0);
\end{tikzpicture}& \begin{tikzpicture}[scale=0.45]
\draw [green,thick](0,0) -- (1,0);
\draw [blue,thick](1,0) -- (2,0);
\end{tikzpicture}\\
\hline
\begin{tikzpicture}[scale=0.5]
\path (-1,0) node(x1) {\tiny$\bullet $} ;
\path (-1,1) node(x2) {\tiny$\bullet$};
\path (-2,2) node(x3) {\tiny$\bullet$};
\path (-3,1) node(x4) {\tiny$\bullet$};
\draw (-1,0) -- (-1,1)node[pos=0.7,right]{\tiny1} ;
\draw (-1,1) -- (-2,2)node[pos=0.7,right]{\tiny1} ;
\draw (-2,2) -- (-3,1)node[pos=0.3,left]{\tiny1} ;
\end{tikzpicture}
& \begin{tikzpicture}[scale=0.45]
\draw (0,0) -- (1,1) --(2,0)--(4,2)--(6,0);
\end{tikzpicture}& \begin{tikzpicture}[scale=0.45]
\draw [green,thick](0,0) -- (1,0);
\draw[red,thick](1,0)--(2,0);
\end{tikzpicture}& \begin{tikzpicture}[scale=0.45]
\draw [green,thick](0,0) -- (1,0);
\draw[red,thick](1,0)--(2,0);
\end{tikzpicture}\\
\hline
\begin{tikzpicture}[scale=0.5]
\path (0,0) node(x1) {\tiny$\bullet$} ;
\path (-1,-1) node(x2) {\tiny$\bullet$};
\path (0,-1) node(x3) {\tiny$\bullet$};
\path (1,-1) node(x4) {\tiny$\bullet$};
\draw (0,0) -- (-1,-1)node[pos=0.3,left]{\tiny1} ;
\draw (0,0) -- (0,-1)node[pos=0.6]{\tiny\;\;1} ;
\draw (0,0) -- (1,-1)node[pos=0.3,right]{\tiny1} ;
\end{tikzpicture}
& \begin{tikzpicture}[scale=0.45]
\draw (0,0) -- (1,1) --(2,0)--(3,1)--(4,0)--(5,1)--(6,0);
\end{tikzpicture}& \begin{tikzpicture}[scale=0.45]
\draw [green,thick](0,0) -- (1,0);
\draw [green,thick](1,0) -- (2,0);
\end{tikzpicture}& \begin{tikzpicture}[scale=0.45]
\draw [green,thick](0,0) -- (1,0);
\draw [green,thick](1,0) -- (2,0);
\end{tikzpicture}\\
\hline
\begin{tikzpicture}[scale=0.5]
\path (0,0) node(x1) {\tiny$\bullet$} ;
\path (0,-1) node(x2) {\tiny$\bullet$};
\path (-1,-2) node(x3) {\tiny$\bullet$};
\path (1,-2) node(x4) {\tiny$\bullet$};
\draw (0,0) -- (0,-1)node[pos=0.5,left]{\tiny1} ;
\draw (0,-1) -- (1,-2)node[pos=0.5,right]{\tiny1} ;
\draw (0,-1) -- (-1,-2)node[pos=0.5,left]{\tiny 1} ;
\end{tikzpicture}
& \begin{tikzpicture}[scale=0.45]
\draw (0,0) -- (2,2) --(3,1)--(4,2)--(6,0);
\end{tikzpicture}& \begin{tikzpicture}[scale=0.45]
\draw [red,thick](0,0) -- (1,0);
\draw [red,thick](1,0) -- (2,0);
\end{tikzpicture}& \begin{tikzpicture}[scale=0.45]
\draw [red,thick](0,0) -- (1,0);
\draw [red,thick](1,0) -- (2,0);
\end{tikzpicture}\\
\end{tabular}
\caption{First column: a multi-edge tree with 3 edges; second column: the standard Dyck path (multiplicities ignored); third column: the 2-Motzkin path obtained by cutting off the first and last step and translating pairs of steps; fourth column: the result of inserting blue horizontal edges according to the multiplicities.}
\end{table} \end{center}
\subsection*{Connecting unary-binary trees with multi-edge trees}
This is not too difficult: We start from multi-edge trees, and ignore the multiplicities at the moment. Then we apply the classical rotation correspondence (also called: natural correspondence). Then we add vertical edges, if the multiplicity is higher than 1. To be precise, if there is a node, and an edge with multiplicity $a$ leads to it from the top, we insert $a-1$ extra nodes in a chain on the top, and connect them with unary branches. The following example with 10 objects will help to understand this procedure. After that, all the structures studied in this paper are connected with bijections.
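As a consistency check for this chain of bijections: a multi-edge tree with $n$ edges (counted with multiplicity) has an underlying tree with $m$ edges, hence a binary tree with $m$ nodes, plus $n-m$ inserted unary nodes, so it corresponds to a unary-binary tree ($a=1$) with $n$ nodes. A small Python sketch (the code and helper names are ours):

```python
def multi_edge(nmax):
    """Multi-edge trees by edges (with multiplicity), F = 1 + (z/(1-z)) F^2."""
    F = [1] + [0] * nmax
    for _ in range(nmax):
        F2 = [sum(F[i] * F[k - i] for i in range(k + 1)) for k in range(nmax + 1)]
        F = [1] + [sum(F2[k] for k in range(n)) for n in range(1, nmax + 1)]
    return F

def unary_binary(nmax, a=1):
    """Unary-binary trees by nodes, N = 1 + z N^2 + a z (N-1)."""
    N = [1] + [0] * nmax
    for _ in range(nmax):
        new = [1] + [0] * nmax
        for n in range(1, nmax + 1):
            new[n] = (sum(N[i] * N[n - 1 - i] for i in range(n))
                      + a * (N[n - 1] - (1 if n == 1 else 0)))
        N = new
    return N

print(multi_edge(8) == unary_binary(8))  # True
```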
\begin{center}
\begin{table}[h]
\begin{tabular}{c | c | c}
\text{Multi-edge tree }&\text{Binary tree (rotation)}&\text{vertical edges added}\\
\hline\hline
\begin{tikzpicture}[scale=0.5]
\path (0,0) node(x1) {\tiny$\bullet$} ;
\path (0,-1) node(x2) {\tiny$\bullet$};
\path (0,-2) node(x3) {\tiny$\bullet$};
\path (0,-3) node(x4) {\tiny$\bullet$};
\draw (0,0) -- (0,-1)node[pos=0.5,left]{\tiny1} ;
\draw (0,-1) -- (0,-2)node[pos=0.5,left]{\tiny1} ;
\draw (0,-2) -- (0,-3)node[pos=0.5,left]{\tiny 1} ;
\end{tikzpicture}
& \begin{tikzpicture}[scale=0.5]
\path (0,0) node(x1) {\tiny$\bullet$} ;
\path (-1,-1) node(x2) {\tiny$\bullet$};
\path (-2,-2) node(x3) {\tiny$\bullet$};
\draw (0,0) -- (-1,-1);
\draw (-1,-1) -- (-2,-2) ;
\end{tikzpicture}
& \begin{tikzpicture}[scale=0.5]
\path (0,0) node(x1) {\tiny$\bullet$} ;
\path (-1,-1) node(x2) {\tiny$\bullet$};
\path (-2,-2) node(x3) {\tiny$\bullet$};
\draw (0,0) -- (-1,-1);
\draw (-1,-1) -- (-2,-2) ;
\end{tikzpicture}
\\
\hline
\begin{tikzpicture}[scale=0.5]
\path (0,0) node(x1) {\tiny$\bullet$} ;
\path (0,-1) node(x2) {\tiny$\bullet$};
\path (0,-2) node(x3) {\tiny$\bullet$};
\draw (0,0) -- (0,-1)node[pos=0.5,left]{\tiny2} ;
\draw (0,-1) -- (0,-2)node[pos=0.5,left]{\tiny1} ;
\end{tikzpicture}
& \begin{tikzpicture}[scale=0.5]
\path (0,0) node(x1) {\tiny$\bullet$} ;
\path (-1,-1) node(x2) {\tiny$\bullet$};
\draw (0,0) -- (-1,-1);
\end{tikzpicture}
&\begin{tikzpicture}[scale=0.5]
\path (0,0) node(x1) {\tiny$\bullet$} ;
\path (-1,-1) node(x2) {\tiny$\bullet$};
\path (0,1) node(x3) {\tiny$\bullet$};
\draw (0,0) -- (-1,-1);
\draw (0,0) -- (0,1) ;
\end{tikzpicture}
\\
\hline
\begin{tikzpicture}[scale=0.5]
\path (0,0) node(x1) {\tiny$\bullet$} ;
\path (0,-1) node(x2) {\tiny$\bullet$};
\path (0,-2) node(x3) {\tiny$\bullet$};
\draw (0,0) -- (0,-1)node[pos=0.5,left]{\tiny1} ;
\draw (0,-1) -- (0,-2)node[pos=0.5,left]{\tiny2} ;
\end{tikzpicture}
& \begin{tikzpicture}[scale=0.5]
\path (0,0) node(x1) {\tiny$\bullet$} ;
\path (-1,-1) node(x2) {\tiny$\bullet$};
\draw (0,0) -- (-1,-1);
\end{tikzpicture}
&\begin{tikzpicture}[scale=0.5]
\path (-1,0) node(x1) {\tiny$\bullet$} ;
\path (-1,-1) node(x2) {\tiny$\bullet$};
\path (0,1) node(x3) {\tiny$\bullet$};
\draw (-1,0) -- (-1,-1);
\draw (-1,0) -- (0,1) ;
\end{tikzpicture}
\\
\hline
\begin{tikzpicture}[scale=0.5]
\path (0,0) node(x1) {\tiny$\bullet$} ;
\path (0,-1) node(x2) {\tiny$\bullet$};
\draw (0,0) -- (0,-1)node[pos=0.5,left]{\tiny3} ;
\end{tikzpicture}
& \begin{tikzpicture}[scale=0.5]
\path (0,0) node(x1) {\tiny$\bullet$} ;
\end{tikzpicture}
&\begin{tikzpicture}[scale=0.5]
\path (0,0) node(x1) {\tiny$\bullet$} ;
\path (0,-1) node(x2) {\tiny$\bullet$};
\path (0,1) node(x3) {\tiny$\bullet$};
\draw (0,0) -- (0,-1);
\draw (0,0) -- (0,1) ;
\end{tikzpicture}
\\
\hline
\begin{tikzpicture}[scale=0.5]
\path (0,0) node(x1) {\tiny$\bullet$} ;
\path (-1,-1) node(x2) {\tiny$\bullet$};
\path (-1,-2) node(x3) {\tiny$\bullet$};
\path (1,-1) node(x4) {\tiny$\bullet$};
\draw (0,0) -- (-1,-1)node[pos=0.3,left]{\tiny1} ;
\draw (-1,-1) -- (-1,-2)node[pos=0.3,left]{\tiny1} ;
\draw (0,0) -- (1,-1)node[pos=0.3,right]{\tiny1} ;
\end{tikzpicture}
& \begin{tikzpicture}[scale=0.5]
\path (0,-2) node(x1) {\tiny$\bullet$} ;
\path (-1,-1) node(x2) {\tiny$\bullet$};
\path (-2,-2) node(x3) {\tiny$\bullet$};
\draw (0,-2) -- (-1,-1);
\draw (-1,-1) -- (-2,-2) ;
\end{tikzpicture}
& \begin{tikzpicture}[scale=0.5]
\path (0,-2) node(x1) {\tiny$\bullet$} ;
\path (-1,-1) node(x2) {\tiny$\bullet$};
\path (-2,-2) node(x3) {\tiny$\bullet$};
\draw (0,-2) -- (-1,-1);
\draw (-1,-1) -- (-2,-2) ;
\end{tikzpicture}
\\
\hline
\begin{tikzpicture}[scale=0.5]
\path (0,0) node(x1) {\tiny$\bullet$} ;
\path (-1,-1) node(x2) {\tiny$\bullet$};
\path (1,-1) node(x4) {\tiny$\bullet$};
\draw (0,0) -- (-1,-1)node[pos=0.3,left]{\tiny2} ;
\draw (0,0) -- (1,-1)node[pos=0.3,right]{\tiny1} ;
\end{tikzpicture}
& \begin{tikzpicture}[scale=0.5]
\path (0,-2) node(x1) {\tiny$\bullet$} ;
\path (-1,-1) node(x2) {\tiny$\bullet$};
\draw (0,-2) -- (-1,-1);
\end{tikzpicture}
& \begin{tikzpicture}[scale=0.5]
\path (0,0) node(x1) {\tiny$\bullet$} ;
\path (1,-1) node(x2) {\tiny$\bullet$};
\path (0,1) node(x3) {\tiny$\bullet$};
\draw (0,0) -- (1,-1);
\draw (0,0) -- (0,1) ;
\end{tikzpicture}
\\
\hline
\begin{tikzpicture}[scale=0.5]
\path (0,0) node(x1) {\tiny$\bullet$} ;
\path (-1,-1) node(x2) {\tiny$\bullet$};
\path (1,-1) node(x4) {\tiny$\bullet$};
\draw (0,0) -- (-1,-1)node[pos=0.3,left]{\tiny1} ;
\draw (0,0) -- (1,-1)node[pos=0.3,right]{\tiny2} ;
\end{tikzpicture}
& \begin{tikzpicture}[scale=0.5]
\path (0,-2) node(x1) {\tiny$\bullet$} ;
\path (-1,-1) node(x2) {\tiny$\bullet$};
\draw (0,-2) -- (-1,-1);
\end{tikzpicture}
& \begin{tikzpicture}[scale=0.5]
\path (0,0) node(x1) {\tiny$\bullet$} ;
\path (0,-1) node(x2) {\tiny$\bullet$};
\path (-1,1) node(x3) {\tiny$\bullet$};
\draw (0,0) -- (-1,1);
\draw (0,0) -- (0,-1) ;
\end{tikzpicture}
\\
\hline
\begin{tikzpicture}[scale=0.5]
\path (-1,0) node(x1) {\tiny$\bullet $} ;
\path (-1,1) node(x2) {\tiny$\bullet$};
\path (-2,2) node(x3) {\tiny$\bullet$};
\path (-3,1) node(x4) {\tiny$\bullet$};
\draw (-1,0) -- (-1,1)node[pos=0.7,right]{\tiny1} ;
\draw (-1,1) -- (-2,2)node[pos=0.7,right]{\tiny1} ;
\draw (-2,2) -- (-3,1)node[pos=0.3,left]{\tiny1} ;
\end{tikzpicture}
& \begin{tikzpicture}[scale=0.5]
\path (-2,0) node(x1) {\tiny$\bullet$} ;
\path (-1,-1) node(x2) {\tiny$\bullet$};
\path (-2,-2) node(x3) {\tiny$\bullet$};
\draw (-2,0) -- (-1,-1);
\draw (-1,-1) -- (-2,-2) ;
\end{tikzpicture}
& \begin{tikzpicture}[scale=0.5]
\path (-2,0) node(x1) {\tiny$\bullet$} ;
\path (-1,-1) node(x2) {\tiny$\bullet$};
\path (-2,-2) node(x3) {\tiny$\bullet$};
\draw (-2,0) -- (-1,-1);
\draw (-1,-1) -- (-2,-2) ;
\end{tikzpicture}
\\
\hline
\begin{tikzpicture}[scale=0.5]
\path (0,0) node(x1) {\tiny$\bullet$} ;
\path (-1,-1) node(x2) {\tiny$\bullet$};
\path (0,-1) node(x3) {\tiny$\bullet$};
\path (1,-1) node(x4) {\tiny$\bullet$};
\draw (0,0) -- (-1,-1)node[pos=0.3,left]{\tiny1} ;
\draw (0,0) -- (0,-1)node[pos=0.6]{\tiny\;\;1} ;
\draw (0,0) -- (1,-1)node[pos=0.3,right]{\tiny1} ;
\end{tikzpicture}&
\begin{tikzpicture}[scale=0.5]
\path (-3,3) node(x1) {\tiny$\bullet$} ;
\path (-1,1) node(x2) {\tiny$\bullet$};
\path (-2,2) node(x3) {\tiny$\bullet$};
\draw (-2,2) -- (-1,1);
\draw (-3,3) -- (-2,2) ;
\end{tikzpicture}
&\begin{tikzpicture}[scale=0.5]
\path (-3,3) node(x1) {\tiny$\bullet$} ;
\path (-1,1) node(x2) {\tiny$\bullet$};
\path (-2,2) node(x3) {\tiny$\bullet$};
\draw (-2,2) -- (-1,1);
\draw (-3,3) -- (-2,2) ;
\end{tikzpicture}
\\
\hline
\begin{tikzpicture}[scale=0.5]
\path (0,0) node(x1) {\tiny$\bullet$} ;
\path (0,-1) node(x2) {\tiny$\bullet$};
\path (-1,-2) node(x3) {\tiny$\bullet$};
\path (1,-2) node(x4) {\tiny$\bullet$};
\draw (0,0) -- (0,-1)node[pos=0.5,left]{\tiny1} ;
\draw (0,-1) -- (1,-2)node[pos=0.5,right]{\tiny1} ;
\draw (0,-1) -- (-1,-2)node[pos=0.5,left]{\tiny 1} ;
\end{tikzpicture}
& \begin{tikzpicture}[scale=0.5]
\path (-2,0) node(x1) {\tiny$\bullet$} ;
\path (-1,1) node(x2) {\tiny$\bullet$};
\path (-1,-1) node(x3) {\tiny$\bullet$};
\draw (-2,0) -- (-1,-1);
\draw (-2,0) -- (-1,1) ;
\end{tikzpicture}
& \begin{tikzpicture}[scale=0.5]
\path (-2,0) node(x1) {\tiny$\bullet$} ;
\path (-1,1) node(x2) {\tiny$\bullet$};
\path (-1,-1) node(x3) {\tiny$\bullet$};
\draw (-2,0) -- (-1,-1);
\draw (-2,0) -- (-1,1) ;
\end{tikzpicture}\\
\end{tabular}
\caption{The first column shows a multi-edge tree with 3 edges, the second column the corresponding binary tree, according to the classical rotation correspondence, ignoring the unary branches.
The third column shows the result of inserting extra horizontal edges when the multiplicities are higher than 1.}
\end{table} \end{center}
\section{The combinatorics of skew Dyck paths}
Skew Dyck paths are a variation of Dyck paths, where in addition to the steps $(1,1)$ and $(1,-1)$ a south-west step $(-1,-1)$ is also allowed, provided that the path does not intersect itself. Otherwise, as for Dyck paths, it must never go below the $x$-axis and must eventually (after $2n$ steps) end on the $x$-axis. Here are a few references: \cite{Deutsch-italy, KimStanley, Baril-neu, Prodinger-hex}. The enumerating sequence is \begin{equation*}
1, 1, 3, 10, 36, 137, 543, 2219, 9285, 39587, 171369, 751236, 3328218, 14878455,\dots, \end{equation*} which is A002212 in \cite{OEIS}.
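As a quick independent check (ours, not part of the text), the first terms of this sequence can be reproduced by brute force; for these step sets, a path intersects itself exactly when it revisits a lattice point, which is easy to test:

```python
from itertools import product

# Brute-force enumeration of skew Dyck paths (a verification sketch).
# Steps: (1,1), (1,-1), (-1,-1); the path must stay weakly above the
# x-axis, end on it, and never revisit a lattice point.
STEPS = [(1, 1), (1, -1), (-1, -1)]

def count_skew(n):
    """Number of skew Dyck paths consisting of 2n steps."""
    total = 0
    for path in product(STEPS, repeat=2 * n):
        x, y = 0, 0
        visited = {(0, 0)}
        ok = True
        for dx, dy in path:
            x, y = x + dx, y + dy
            if y < 0 or (x, y) in visited:
                ok = False
                break
            visited.add((x, y))
        if ok and y == 0:
            total += 1
    return total

print([count_skew(n) for n in range(5)])  # [1, 1, 3, 10, 36]
```

(The function name `count_skew` is ours.)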
Skew Dyck paths appeared very briefly in a previous section; here we want to give a more thorough analysis of them, using generating functions and the kernel method. Here is a list of the 10 skew Dyck paths consisting of 6 steps:
\begin{figure}
\caption{All 10 skew Dyck paths of length 6 (consisting of 6 steps).}
\end{figure}
We prefer to work with the equivalent model (resembling more traditional Dyck paths) where we replace each step $(-1,-1)$ by $(1,-1)$ but label it red. Here is the list of the 10 paths again (Figure 2):
\begin{figure}
\caption{The 10 paths redrawn, with red south-east edges instead of south-west edges.}
\end{figure}
The rules to generate such decorated Dyck paths are: each edge $(1,-1)$ may be black or red, but \begin{tikzpicture}[scale=0.3]\draw [thick](0,0)--(1,1); \draw [red,thick] (1,1)--(2,0);\end{tikzpicture} and \begin{tikzpicture}[scale=0.3] \draw [red,thick] (0,1)--(1,0);\draw [thick](1,0)--(2,1);\end{tikzpicture} are forbidden.
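The forbidden-factor description makes the decorated model easy to enumerate directly (again our sketch, with letters `u` for up, `d` for black down, `r` for red down):

```python
from itertools import product

# Words over {u, d, r} with the factors "ur" and "ru" forbidden,
# staying weakly above level 0 and returning to level 0.
def count_decorated(n):
    total = 0
    for word in product("udr", repeat=2 * n):
        if any(a + b in ("ur", "ru") for a, b in zip(word, word[1:])):
            continue
        level, ok = 0, True
        for step in word:
            level += 1 if step == "u" else -1
            if level < 0:
                ok = False
                break
        if ok and level == 0:
            total += 1
    return total

print([count_decorated(n) for n in range(5)])  # [1, 1, 3, 10, 36]
```

matching the enumerating sequence above.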
Our interest is in particular in \emph{partial} decorated Dyck paths, ending at level $j$, for fixed $j\ge0$; the instance $j=0$ is the classical case.
The analysis of partial skew Dyck paths was recently started in \cite{Baril-neu} (using the notion `prefix of a skew Dyck path') using Riordan arrays instead of our kernel method. The latter gives us \emph{bivariate} generating functions, from which it is easier to draw conclusions. Two variables, $z$ and $u$, are used, where $z$ marks the length of the path and $u$ marks the end-level. We briefly mention that one can, using a third variable $w$, also count the number of red edges.
Again, once all generating functions are explicitly known, many corollaries can be derived in a standard fashion. We only do this in a few instances. But we would like to emphasize that the substitution \begin{equation*}
x=\frac{v}{1+3v+v^2}, \end{equation*} which was used in \cite{HPW, Prodinger-hex}, allows us to write \emph{explicit enumerations}, using the notion of a (weighted) trinomial coefficient: \begin{equation*}
\binom{n;1,3,1}{k}:=[t^k](1+3t+t^2)^n. \end{equation*}
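To illustrate (our example, not a claim made verbatim in the text): since $x=\frac{v}{1+3v+v^2}$ means $v=x(1+3v+v^2)$, Lagrange inversion gives $[x^n]v=\frac1n\binom{n;1,3,1}{n-1}$, and these are exactly the numbers $1,3,10,36,137,\dots$ of A002212:

```python
# Series reversion of v = x(1+3v+v^2) versus weighted trinomial
# coefficients; all helper names are ours.
def poly_mul(p, q, trunc):
    r = [0] * trunc
    for i, a in enumerate(p):
        if a:
            for j, b in enumerate(q):
                if i + j < trunc:
                    r[i + j] += a * b
    return r

def trinomial(n, k):
    """[t^k] (1+3t+t^2)^n."""
    p = [1]
    for _ in range(n):
        p = poly_mul(p, [1, 3, 1], 2 * n + 1)
    return p[k] if k < len(p) else 0

def v_series(trunc):
    """Maclaurin coefficients of v(x), from v = x(1+3v+v^2)."""
    v = [0] * trunc
    for _ in range(trunc):
        v2 = poly_mul(v, v, trunc)
        v = [(1 if i == 1 else 0) + 3 * v[i - 1] + v2[i - 1] if i >= 1 else 0
             for i in range(trunc)]
    return v

print(v_series(7)[1:6])                                 # [1, 3, 10, 36, 137]
print([trinomial(n, n - 1) // n for n in range(1, 6)])  # [1, 3, 10, 36, 137]
```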
The second part of this section deals with a dual version, where the paths are read from right to left.
\subsection*{Generating functions and the kernel method}
We catch the essence of a decorated Dyck path using a state-diagram:
\begin{figure}
\caption{Three layers of states according to the type of steps leading to them (up, down-black, down-red).}
\end{figure} It has three types of states, with $j$ ranging from 0 to infinity; in the drawing, only $j=0..8$ is shown. The first layer of states refers to an up-step leading to a state, the second layer refers to a black down-step leading to a state and the third layer refers to a red down-step leading to a state. We will work out generating functions describing all paths leading to a particular state. We will use the notations $f_j,g_j,h_j$ for the three respective layers, from top to bottom. Note that the syntactic rules of forbidden patterns \begin{tikzpicture}[scale=0.3]\draw [thick](0,0)--(1,1); \draw [red,thick] (1,1)--(2,0);\end{tikzpicture} and \begin{tikzpicture}[scale=0.3] \draw [red,thick] (0,1)--(1,0);\draw [thick](1,0)--(2,1);\end{tikzpicture} can be clearly seen from the picture. The functions depend on the variable $z$ (marking the number of steps), but mostly we just write $f_j$ instead of $f_j(z)$, etc.
The following recursions can be read off immediately from the diagram: \begin{gather*}
f_0=1,\quad f_{i+1}=zf_i+zg_i,\quad i\ge0,\\
g_i=zf_{i+1}+zg_{i+1}+zh_{i+1},\quad i\ge0,\\
h_i=zh_{i+1}+zg_{i+1},\quad i\ge0. \end{gather*} And now it is time to introduce the promised \emph{bivariate} generating functions: \begin{equation*}
F(z,u)=\sum_{i\ge0}f_i(z)u^i,\quad
G(z,u)=\sum_{i\ge0}g_i(z)u^i,\quad
H(z,u)=\sum_{i\ge0}h_i(z)u^i. \end{equation*} Again, often we just write $F(u)$ instead of $F(z,u)$ and treat $z$ as a `silent' variable. Summing the recursions leads to \begin{align*}
\sum_{i\ge0}u^if_{i+1}&=\sum_{i\ge0}u^izf_i+\sum_{i\ge0}u^izg_i,\\
\sum_{i\ge0}u^ig_i&=\sum_{i\ge0}u^izf_{i+1}+\sum_{i\ge0}u^izg_{i+1}+\sum_{i\ge0}u^izh_{i+1},\\
\sum_{i\ge0}u^ih_i&=\sum_{i\ge0}u^izh_{i+1}+\sum_{i\ge0}u^izg_{i+1}. \end{align*} This can be rewritten as \begin{align*}
\frac1u(F(u)-1)&=zF(u)+zG(u),\\*
G(u)&=\frac zu(F(u)-1)+\frac zu(G(u)-G(0))+\frac zu(H(u)-H(0)),\\*
H(u)&= \frac zu(G(u)-G(0))+\frac zu(H(u)-H(0)). \end{align*} This is a typical application of the kernel method. For a gentle example-driven introduction to the kernel method, see \cite{Prodinger-kernel}. First, \begin{align*}
F(u)&=\frac{z^2uG(0)+z^2uH(0)+z^2u-u-z^3+2z}{-{z}^{3}-u+2z+z{u}^{2}-{z}^{2}u},\\
G(u)&=\frac{z(H(0)-uzH(0)+z^2+G(0)-zuG(0)-zu)}{-{z}^{3}-u+2z+z{u}^{2}-{z}^{2}u},\\
H(u)&=\frac{z(-uzH(0)-z^2-zuG(0)+G(0)-z^2H(0)+H(0)-z^2G(0))}{-{z}^{3}-u+2z+z{u}^{2}-{z}^{2}u}. \end{align*} The denominator factors as $z(u-r_1)(u-r_2)$, with \begin{equation*}
r_1=\frac{1+z^2+\sqrt{1-6z^2+5z^4}}{2z},\quad r_2=\frac{1+z^2-\sqrt{1-6z^2+5z^4}}{2z}. \end{equation*} Note that $r_1r_2=2-z^2$. Since the factor $u-r_2$ in the denominator is ``bad,'' it must also cancel in the numerators. From this we conclude as a first step \begin{equation*}
G(0) = \frac{1-2z^2H(0)-3z^2-\sqrt{1-6z^2+5z^4}}{2z^2}, \end{equation*} and by further simplification \begin{equation*}
H(0)=\frac{1-4z^2+z^4+(z^2-1)\sqrt{1-6z^2+5z^4}}{2z^2(2-z^2)}. \end{equation*} Thus (with $W=\sqrt{1-6z^2+5z^4}=\sqrt{(1-z^2)(1-5z^2)}$\,) \begin{align*}
F(u)&=\frac{-1-z^2-W}{2z(u-r_1)}=\frac{1+z^2+W}{2zr_1(1-u/r_1)},\\
G(u)&=\frac{-1+z^2+W}{2z(u-r_1)}=\frac{1-z^2-W}{2zr_1(1-u/r_1)},\\
H(u)&=\frac{-1+3z^2+W}{2z(u-r_1)}=\frac{1-3z^2-W}{2zr_1(1-u/r_1)}. \end{align*} The total generating function is \begin{equation*}
S(u)=F(u)+G(u)+H(u)=\frac{3-3z^2-W}{2zr_1(1-u/r_1)}. \end{equation*} The coefficient of $u^jz^n$ in $S(u)$ counts the partial paths of length $n$, ending at level $j$. We will write $s_j=[u^j]S(u)$. Furthermore \begin{align*}
f_j=[u^j] F(u)&=[u^j]\frac{1+z^2+W}{2zr_1(1-u/r_1)},\\
g_j=[u^j] G(u)&=[u^j]\frac{1-z^2-W}{2zr_1(1-u/r_1)},\\
h_j=[u^j] H(u)&=[u^j]\frac{1-3z^2-W}{2zr_1(1-u/r_1)}. \end{align*} At this stage, we are only interested in \begin{equation*}
s_j=f_j+g_j+h_j=[u^j]\frac{3-3z^2-W}{2zr_1(1-u/r_1)}=\frac{3-3z^2-W}{2zr_1^{j+1}}, \end{equation*} which is the generating function of all (partial) paths ending at level $j$. Parity considerations show that $[z^n]s_j$ is non-zero only if $n\equiv j\bmod2$. To make this more transparent, we set \begin{equation*}
P(z)=zr_1=\frac{1+z^2+\sqrt{1-6z^2+5z^4}}{2}, \end{equation*} and then \begin{equation*}
s_j=f_j+g_j+h_j=z^j\frac{3-3z^2-W}{2P^{j+1}}. \end{equation*} Now we read off coefficients. We do this using residues and contour integration. The path of integration, in both variables $x$ resp.\ $v$ is a small circle or an equivalent contour. \begin{align*}
[z^{2m+j}]s_j&=[z^{2m}]\frac{3-3z^2-W}{2P^{j+1}}=
[x^m]\frac{3-3x-\sqrt{1-6x+5x^2}}{2\Big(\frac{1+x+\sqrt{1-6x+5x^2}}{2}\Big)^{j+1}}\\
&=[x^m]\frac{3-3\frac v{1+3v+v^2}-\frac{1-v^2}{1+3v+v^2}}{2\big(\frac{1+2v}{1+3v+v^2}\big)^{j+1}}\\
&=[x^m]\frac{(1+v)(1+3v+v^2)^j}{(1+2v)^{j}}\\
&=\frac1{2\pi i}\oint\frac{dx}{x^{m+1}}\frac{(1+v)(1+3v+v^2)^j}{(1+2v)^{j}}\\
&=\frac1{2\pi i}\oint\frac{dv}{v^{m+1}}\frac{(1+v)(1-v^2)}{(1+2v)^{j}}(1+3v+v^2)^{m-1+j}\\
&=[v^{m}]\frac{(1+v)^2(1-v)}{(1+2v)^{j}}(1+3v+v^2)^{m-1+j}. \end{align*} Note that \begin{equation*}(1+v)^2(1-v)=
\frac{3+5(1+2v)+(1+2v)^2-(1+2v)^3}{8}; \end{equation*} consequently \begin{align*}
[v^k]&\frac{(1+v)^2(1-v)}{(1+2v)^{j}}\\
&=\frac{2^k}8\bigg[3\binom{-j}{k}
+5\binom{-j+1}{k}
+\binom{-j+2}{k}\\&\qquad
-\binom{-j+3}{k}\bigg]=:\lambda_{j;k}. \end{align*} With this abbreviation we find \begin{equation*}
[v^{m}]\frac{(1+v)^2(1-v)}{(1+2v)^{j}}(1+3v+v^2)^{m-1+j}
=\sum_{k=0}^{m}\lambda_{j;k}\binom{m-1+j;1,3,1}{m-k}. \end{equation*} This is not extremely pretty but it is \emph{explicit} and as good as it gets. Here are the first few generating functions: \begin{align*}
s_0&=1+z^2+3z^4+10z^6+36z^8+137z^{10}+543z^{12}+\cdots\\*
s_1&=z+2z^3+6z^5+21z^7+79z^9+311z^{11}+1265z^{13}+\cdots\\
s_2&=z^2+3z^4+10z^6+37z^8+145z^{10}+589z^{12}+2455z^{14}+\cdots\\
s_3&=z^3+4z^5+15z^7+59z^9+241z^{11}+1010z^{13}+4314z^{15}+\cdots\\ \end{align*} We could also give such lists for the functions $f_j$, $g_j$, $h_j$, if desired. We summarize the essential findings of this section: \begin{theorem} The generating function of decorated (partial) Dyck paths, consisting of $n$ steps, ending on level $j$, is given by
\begin{equation*}
S(z,u)=\frac{3-3z^2-\sqrt{1-6z^2+5z^4}}{2zr_1(1-u/r_1)},
\end{equation*}
with
\begin{equation*}
r_1=\frac{1+z^2+\sqrt{1-6z^2+5z^4}}{2z}.
\end{equation*}
Furthermore
\begin{equation*}
[u^j]S(z,u)=\frac{3-3z^2-\sqrt{1-6z^2+5z^4}}{2zr_1^{j+1}}.
\end{equation*} \end{theorem}
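The theorem can be checked against the state diagram itself: running the three layers as a dynamic program (our sketch; `f`, `g`, `h` are the up-, black-down- and red-down-reached states) reproduces the series $s_0,\dots,s_3$ listed above:

```python
def s_series(j, nmax):
    """[z^n] s_j for n = 0..nmax, computed from the state diagram."""
    size = nmax + 2
    f, g, h = [0] * size, [0] * size, [0] * size
    f[0] = 1  # the empty path
    out = []
    for _ in range(nmax + 1):
        out.append(f[j] + g[j] + h[j])
        nf, ng, nh = [0] * size, [0] * size, [0] * size
        for i in range(size - 1):
            nf[i + 1] = f[i] + g[i]                 # up-step, not after red
            ng[i] = f[i + 1] + g[i + 1] + h[i + 1]  # black down-step
            nh[i] = g[i + 1] + h[i + 1]             # red down-step, not after up
        f, g, h = nf, ng, nh
    return out

print([c for c in s_series(0, 12) if c])  # [1, 1, 3, 10, 36, 137, 543]
print([c for c in s_series(1, 13) if c])  # [1, 2, 6, 21, 79, 311, 1265]
```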
\subsection*{Open ended paths}
If we do not specify the end of the paths, in other words we sum over all $j\ge0$, then at the level of generating functions this is very easy, since we only have to set $u:=1$. We find \begin{align*}
S(1)&=-\frac{(z+1)(z^2+3z-2)+(z+2)\sqrt{1-6z^2+5z^4}}{2z(z^2+2z-1)}\\
&=1+z+2z^2+3z^3+7z^4+11z^5+26z^6+43z^7+102z^8+175z^9+416z^{10}+\cdots. \end{align*}
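A brute-force count of all valid prefixes (ours) confirms these coefficients:

```python
from itertools import product

# Partial decorated paths of length n with arbitrary end level.
def count_prefixes(n):
    total = 0
    for word in product("udr", repeat=n):
        if any(a + b in ("ur", "ru") for a, b in zip(word, word[1:])):
            continue
        level = 0
        for step in word:
            level += 1 if step == "u" else -1
            if level < 0:
                break
        else:
            total += 1
    return total

print([count_prefixes(n) for n in range(9)])
# [1, 1, 2, 3, 7, 11, 26, 43, 102]
```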
\subsection*{Counting red edges}
We can use an extra variable, $w$, to count additionally the red edges that occur in a path. We use the same letters for generating functions. Eventually, the coefficient $[z^nu^jw^k]S$ is the number of (partial) paths consisting of $n$ steps, leading to level $j$, and having passed $k$ red edges. The endpoint of the original skew path has then coordinates $(n-2k,j)$. The computations are very similar, and we only sketch the key steps.
\begin{equation*}
f_0=1,\quad f_{i+1}=zf_i+zg_i,\quad i\ge0, \end{equation*} \begin{equation*}
g_i=zf_{i+1}+zg_{i+1}+zh_{i+1},\quad i\ge0, \end{equation*} \begin{equation*}
h_i=wzh_{i+1}+wzg_{i+1},\quad i\ge0; \end{equation*} \begin{align*}
\frac1u(F(u)-1)&=zF(u)+zG(u),\\*
G(u)&=\frac zu(F(u)-1)+\frac zu(G(u)-G(0))+\frac zu(H(u)-H(0)),\\*
H(u)&= \frac {wz}u(G(u)-G(0))+\frac {wz}u(H(u)-H(0)); \end{align*} \begin{align*}
F(u)&=\frac{z^2uG(0)+z^2uH(0)+z^2u-u-wz^3+z+wz}{-w{z}^{3}-u+z+wz+z{u}^{2}-w{z}^{2}u},\\
G(u)&=\frac{z(H(0)-uzH(0)+wz^2+G(0)-zuG(0)-zu)}{-w{z}^{3}-u+z+wz+z{u}^{2}-w{z}^{2}u},\\
H(u)&=\frac{wz(-uzH(0)-z^2-zuG(0)+G(0)-z^2H(0)+H(0)-z^2G(0))}{-w{z}^{3}-u+z+wz+z{u}^{2}-w{z}^{2}u}. \end{align*} The denominator factors as $z(u-r_1)(u-r_2)$, with \begin{align*}
r_1&=\frac{1+wz^2+\sqrt{1-(4+2w)z^2+(4w+w^2)z^4}}{2z},\\*
r_2&=\frac{1+wz^2-\sqrt{1-(4+2w)z^2+(4w+w^2)z^4}}{2z}. \end{align*} Note the factorization $1-(4+2w)z^2+(4w+w^2)z^4=(1-z^2w)(1-(4+w)z^2)$. Since the factor $u-r_2$ in the denominator is ``bad,'' it must also cancel in the numerators. From this we eventually find (with the abbreviation $W=\sqrt{1-(4+2w)z^2+(4w+w^2)z^4}$\,) \begin{align*}
F(u)&=\frac{-1-wz^2-W}{2z(u-r_1)},\\
G(u)&=\frac{-1+wz^2+W}{2z(u-r_1)},\\
H(u)&=\frac{w(-1+(2+w)z^2+W)}{2z(u-r_1)}. \end{align*} The total generating function is \begin{equation*}
S(u)=F(u)+G(u)+H(u)=\frac{-2-w+z^2(2w+w^2)+ wW}{2z(u-r_1)}. \end{equation*} The special case $u=0$ (return to the $x$-axis) is to be noted: \begin{equation*}
S(0)=\frac{-2-w+z^2(2w+w^2)+ wW}{-2zr_1}=\frac{1-wz^2-W}{2z^2}. \end{equation*} Since there are only even powers of $z$ in this function, we replace $x=z^2$ and get \begin{align*}
S(0)&=\frac{1-wx-\sqrt{1-(4+2w)x+(4w+w^2)x^2}}{2x}\\
&=1+x+(w+2)x^2+(w^2+4w+5)x^3+(w^3+6w^2+15w+14)x^4+\cdots. \end{align*} Compare the factor $(w^2+4w+5)$ with the earlier drawing of the 10 paths.
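These refined counts can be confirmed by brute force (our sketch), tallying the number of red edges over the decorated paths:

```python
from itertools import product
from collections import Counter

# Distribution of the number of red edges among paths with 2n steps.
def red_distribution(n):
    dist = Counter()
    for word in product("udr", repeat=2 * n):
        if any(a + b in ("ur", "ru") for a, b in zip(word, word[1:])):
            continue
        level, ok = 0, True
        for step in word:
            level += 1 if step == "u" else -1
            if level < 0:
                ok = False
                break
        if ok and level == 0:
            dist[word.count("r")] += 1
    return dict(sorted(dist.items()))

print(red_distribution(3))  # {0: 5, 1: 4, 2: 1}
print(red_distribution(4))  # {0: 14, 1: 15, 2: 6, 3: 1}
```

in agreement with the coefficients $w+2$, $w^2+4w+5$ and $w^3+6w^2+15w+14$ above.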
There is again a substitution that allows for better results: \begin{equation*}
x=\frac{v}{1+(2+w)v+v^2}, \quad\text{then}\quad S(0)=1+v. \end{equation*} Reading off coefficients can now be done using modified trinomial coefficients: \begin{equation*}
\binom{n;1,2+w,1}{k}=[t^k]\bigl(1+(2+w)t+t^2\bigr)^n. \end{equation*} Again, we use contour integration to extract coefficients: \begin{align*}
[x^n](1+v)&=\frac1{2\pi i}\oint \frac{dx}{x^{n+1}}(1+v)\\
&=\frac1{2\pi i}\oint \frac{dx}{v^{n+1}}\frac{1-v^2}{(1+(2+w)v+v^2)^2}(1+(2+w)v+v^2)^{n+1}(1+v)\\
&=[v^n](1-v)(1+v)^2(1+(2+w)v+v^2)^{n-1}\\
&=\binom{n-1;1,2+w,1}{n}+\binom{n-1;1,2+w,1}{n-1}\\*
&\qquad-\binom{n-1;1,2+w,1}{n-2}-\binom{n-1;1,2+w,1}{n-3}. \end{align*}
Now we want to count the average number of red edges. For that, we differentiate $S(0)$ w.r.t.\ $w$, followed by $w:=1$. This leads to \begin{equation*}
\frac{-1+6x-5x^2+(1-3x)\sqrt{1-6x+5x^2}}{2(1-x)(1-5x)}. \end{equation*}
A simple application of singularity analysis leads to \begin{equation*}
\frac{\frac1{2\sqrt5}[x^n]\frac1{\sqrt{1-5x}}}{-\sqrt5[x^n]\sqrt{1-5x}}\sim \frac {n}{5}. \end{equation*} So, a random path consisting of $2n$ steps has about $n/5$ red steps, on average.
For readers who are not familiar with singularity analysis of generating functions \cite{FlOd90, FS}, we just mention that one determines the local expansion around the dominating singularity, which is at $z=\frac15$ in our instance. In the denominator, we just have the total number of skew Dyck paths, according to the sequence A002212 in \cite{OEIS}.
In the example of Figure~2, the exact average is $6/10$, which agrees with the asymptotic prediction $n/5=3/5$ for $n=3$.
We finish the discussion by considering fixed powers of $w$ in $S(0)$, counting skew Dyck paths consisting of zero, one, two, three, \dots red edges. We find \begin{align*}
[w^0]S(0)&=\frac{1-\sqrt{1-4x}}{2x},\\
[w^1]S(0)&=\frac{1-2x-\sqrt{1-4x}}{2\sqrt{1-4x}},\\
[w^2]S(0)&=\frac{x^3}{(1-4x)^{3/2}},\\
[w^3]S(0)&=\frac{x^4(1-2x)}{(1-4x)^{5/2}},\\
[w^4]S(0)&=\frac{x^5(1-4x+5x^2)}{(1-4x)^{7/2}}, \quad\&\text{c}. \end{align*} The generating function $[w^0]S(0)$ is of course the generating function of Catalan numbers, since no red edges just means: ordinary Dyck paths. We can also conclude that the asymptotic behaviour is of the form $n^{k-3/2}4^n$, where the polynomial contribution gets higher, but the exponential growth stays the same: $4^n$. This is compared to the scenario of an \emph{arbitrary} number of red edges, when we get an exponential growth of the form $5^n$.
\subsection*{Dual skew Dyck paths}
The mirrored version of skew Dyck paths, with the two types of up-steps $(1,1)$ and $(-1,1)$, is also cited among the objects in A002212 in \cite{OEIS}. We call them dual skew paths and drop the `dual' when it isn't necessary. When the paths come back to the $x$-axis, no new enumeration is necessary, but this is no longer true for paths ending at level $j$.
Here is a list of the 10 skew paths consisting of 6 steps:
\begin{figure}
\caption{All 10 dual skew Dyck paths of length 6 (consisting of 6 steps).}
\end{figure}
We prefer to work with the equivalent model (resembling more traditional Dyck paths) where we replace each step $(-1,1)$ by $(1,1)$ but label it blue. Here is the list of the 10 paths again:
\begin{figure}
\caption{The 10 paths redrawn, with blue north-east edges instead of north-west edges.}
\end{figure}
The rules to generate such decorated Dyck paths are: each edge $(1,1)$ may be black or blue, but \begin{tikzpicture}[scale=0.3]\draw [thick](0,1)--(1,0); \draw [cyan,thick] (1,0)--(2,1);\end{tikzpicture} and \begin{tikzpicture}[scale=0.3] \draw [cyan,thick] (0,0)--(1,1);\draw [thick](1,1)--(2,0);\end{tikzpicture} are forbidden.
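As in the first part, the rules lend themselves to a direct enumeration (our sketch; letters `u` for a black up-step, `b` for a blue up-step, `d` for a down-step, with the factors `db` and `bd` forbidden):

```python
from itertools import product

# Dual decorated paths, optionally ending at a prescribed level.
def count_dual(n, end_level=0):
    total = 0
    for word in product("ubd", repeat=n):
        if any(x + y in ("db", "bd") for x, y in zip(word, word[1:])):
            continue
        level = 0
        for step in word:
            level += -1 if step == "d" else 1
            if level < 0:
                break
        else:
            if level == end_level:
                total += 1
    return total

print([count_dual(2 * n) for n in range(5)])  # [1, 1, 3, 10, 36]
print([count_dual(n, 1) for n in (1, 3, 5)])  # [2, 3, 10]
```

The closed paths reproduce A002212, while the partial counts ending at level 1 already differ from those of the first model.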
Our interest is in particular in \emph{partial} decorated Dyck paths, ending at level $j$, for fixed $j\ge0$; the instance $j=0$ is the classical case.
As before, two variables, $z$ and $u$, are used, where $z$ marks the length of the path and $u$ marks the end-level. We briefly mention that one can, using a third variable $w$, also count the number of blue edges.
The substitution \begin{equation*}
x=\frac{v}{1+3v+v^2}, \end{equation*}
is again the key to the success.
\subsection*{Generating functions and the kernel method}
We catch the essence of a decorated (dual skew) Dyck path using a state-diagram:
\begin{figure}
\caption{Three layers of states according to the type of steps leading to them (down, up-black, up-blue).}
\end{figure} It has three types of states, with $j$ ranging from 0 to infinity; in the drawing, only $j=0..8$ is shown. One layer of states refers to a down-step leading to a state, another to a black up-step leading to a state, and the third to a blue up-step leading to a state. We will work out generating functions describing all paths leading to a particular state; we denote them by $b_j$, $a_j$, $c_j$ for the three respective types of states. Note that the syntactic rules of forbidden patterns \begin{tikzpicture}[scale=0.3]\draw [thick,cyan](0,0)--(1,1); \draw [thick] (1,1)--(2,0);\end{tikzpicture} and \begin{tikzpicture}[scale=0.3] \draw [thick] (0,1)--(1,0);\draw [thick,cyan](1,0)--(2,1);\end{tikzpicture} can be clearly seen from the picture. The functions depend on the variable $z$ (marking the number of steps), but mostly we just write $a_j$ instead of $a_j(z)$, etc.
The following recursions can be read off immediately from the diagram: \begin{gather*}
a_0=1,\quad a_{i+1}=za_i+zb_i+zc_i,\quad i\ge0,\\
b_i=za_{i+1}+zb_{i+1},\quad i\ge0,\\
c_{i+1}=za_{i}+zc_{i},\quad i\ge0. \end{gather*} And now it is time to introduce the promised \emph{bivariate} generating functions: \begin{equation*}
A(z,u)=\sum_{i\ge0}a_i(z)u^i,\quad
B(z,u)=\sum_{i\ge0}b_i(z)u^i,\quad
C(z,u)=\sum_{i\ge0}c_i(z)u^i. \end{equation*} Again, often we just write $A(u)$ instead of $A(z,u)$ and treat $z$ as a `silent' variable. Summing the recursions leads to \begin{align*}
\sum_{i\ge0}u^ia_i &=1+u\sum_{i\ge0}u^i(za_i+zb_i+zc_i)\\
&=1+uzA(u)+uzB(u)+uzC(u),\\
\sum_{i\ge0}u^ib_i &= \sum_{i\ge0}u^i(za_{i+1}+zb_{i+1})\\
&=\frac zu\sum_{i\ge1}u^ia_i+\frac zu\sum_{i\ge1}u^ib_i,\\
\sum_{i\ge1}u^ic_i &=uz\sum_{i\ge0}u^ia_i+uz\sum_{i\ge0}u^ic_i. \end{align*} This can be rewritten as \begin{align*}
A(u)&=1+uzA(u)+uzB(u)+uzC(u),\\
B(u)&=\frac zu(A(u)-a_0)+\frac zu(B(u)-b_0),\\
C(u)&=c_0+uzA(u)+uzC(u). \end{align*} Note that $a_0=1$, $c_0=0$. Simplification leads to \begin{equation*}
C(u)=\frac{uzA(u)}{1-uz} \end{equation*} and \begin{equation*}
B(u)=\frac{z(A(u)-1-B(0))}{u-z}, \end{equation*} leaving us with just one equation \begin{equation*}
A(u)={\frac { \left( z-u+u{z}^{2}+u{z}^{2}B(0) \right) \left( uz-1
\right) }{{u}^{2}{z}^{3}+u{z}^{2}-2{u}^{2}z-z+u}}. \end{equation*} This is again a typical application of the kernel method. The denominator factors as \begin{equation*}
{u}^{2}{z}^{3}+u{z}^{2}-2{u}^{2}z-z+u=z(z^2-2)(u-s_1)(u-s_2), \end{equation*} with \begin{equation*}
s_1=\frac{1+z^2+\sqrt{1-6z^2+5z^4}}{2z(2-z^2)},\quad s_2=\frac{1+z^2-\sqrt{1-6z^2+5z^4}}{2z(2-z^2)}. \end{equation*} Note that $s_1s_2=\frac{1}{2-z^2}$. Since the factor $u-s_2$ in the denominator is ``bad,'' it must also cancel in the numerators. From this we conclude (again with the abbreviation $W=\sqrt{1-6z^2+5z^4}\,$) \begin{equation*}
B(0) = \frac{zs_2}{1-2zs_2}, \end{equation*} and further \begin{equation*}
A(u)
=\frac{(1-uz)(1+z^2+W)}{2z(z^2-2)(u-s_1)}, \end{equation*} \begin{equation*}
B(u)=\frac{-1+2z^2+W}{z(2-z^2)(u-s_1)}, \end{equation*}
C(u)=\frac{1+z^2+W}{2(z^2-2)}\frac{u}{u-s_1}, \end{equation*} and for the function of main interest \begin{equation*}
G(u)=A(u)+B(u)+C(u)=\frac{3z^2-3+W}{2z(2-z^2)(u-s_1)}. \end{equation*} Note that \begin{align*}
\frac1{s_1}&=\frac{1+z^2-\sqrt{1-6z^2+5z^4}}{2z}=zS,\quad\text{with}\quad S=\frac{1+z^2-\sqrt{1-6z^2+5z^4}}{2z^2},\\
\frac1{s_2}&=\frac{1+z^2+\sqrt{1-6z^2+5z^4}}{2z}. \end{align*} Then \begin{align*}
[u^j]G(u)&=[u^j]\frac{3z^2-3+W}{2z(z^2-2)s_1(1-u/s_1)}\\
&=\frac{3z^2-3+W}{2z(z^2-2)s_1^{j+1}}
=\frac{3z^2-3+W}{2(z^2-2)}z^{j}S^{j+1}. \end{align*} So $[u^j]G(u)$ contains only powers of the form $z^{j+2N}$. Now we continue \begin{align*}
[z^{j+2N}u^j]G(u)&
=[z^{2N}]\frac{3z^2-3+W}{2(z^2-2)}S^{j+1}
\\&=[x^{N}]\frac{3x-3+\sqrt{1-6x+5x^2}}{2(x-2)}\bigg(\frac{1+x-\sqrt{1-6x+5x^2}}
{2x}\bigg)^{j+1}\\
&=[x^{N}](v+1)(v+2)^{j}, \end{align*} so that $(v+1)(v+2)^{j}$, expanded in powers of $x$, is the generating function of all (partial) paths ending at level $j$.
Now we read off coefficients. We do this using residues and contour integration. The path of integration, in both variables $x$ resp.\ $v$ is a small circle or an equivalent contour; \begin{align*}
[z^{j+2N}u^j]G(u)&=[x^{N}](v+1)(v+2)^{j}\\
&=\frac1{2\pi i}\oint \frac{dx}{x^{N+1}}(v+1)(v+2)^{j}\\
&=\frac1{2\pi i}\oint \frac{dv}{v^{N+1}}(1+3v+v^2)^{N+1}\frac{(1-v^2)}{(1+3v+v^2)^2}(v+1)(v+2)^{j}\\
&=[v^{N}](1+3v+v^2)^{N-1}(1-v)(1+v)^2(v+2)^{j}. \end{align*} Note that \begin{equation*}(1-v)(1+v)^2=
3-7( v+2 ) +5( v+2 ) ^{2}- ( v+2) ^{3}; \end{equation*} consequently \begin{align*}
[z^{j+2N}u^j]G(u)&=[v^{N}](1+3v+v^2)^{N-1}\Big[3-7( v+2 ) +5( v+2 ) ^{2}- ( v+2) ^{3}
\Big](v+2)^{j}. \end{align*} We abbreviate: \begin{align*}
\mu_{j;k}&=[v^{k}]\Big[3(v+2)^{j}-7(v+2)^{j+1} +5(v+2)^{j+2}- (v+2)^{j+3}\Big]\\
&=3\binom{j}{k}2^{j-k}-7\binom{j+1}{k}2^{j+1-k}+5\binom{j+2}{k}2^{j+2-k}-\binom{j+3}{k}2^{j+3-k}. \end{align*} With this notation we get \begin{equation*}
[z^{j+2N}u^j]G(u)
=\sum_{0\le k\le N}\mu_{j;k}\binom{N-1;1,3,1}{N-k}. \end{equation*} Here are the first few generating functions: \begin{align*}
G_0&=1+{z}^{2}+3{z}^{4}+10{z}^{6}+36{z}^{8}+137{z}^{10}+543{z}^{
12}+2219{z}^{14}
+\cdots\\*
G_1&=2z+3{z}^{3}+10{z}^{5}+36{z}^{7}+137{z}^{9}+543{z}^{11}+
2219{z}^{13}+9285{z}^{15}
+\cdots\\
G_2&=4{z}^{2}+8{z}^{4}+29{z}^{6}+111{z}^{8}+442{z}^{10}+1813{z
}^{12}+7609{z}^{14}+32521{z}^{16}
+\cdots\\
G_3&=8{z}^{3}+20{z}^{5}+78{z}^{7}+315{z}^{9}+1306{z}^{11}+5527
{z}^{13}+23779{z}^{15}+103699{z}^{17}
+\cdots\\ \end{align*} We could also give such lists for the functions $a_j$, $b_j$, $c_j$, if desired. We summarize the essential findings of the rest of this section: \begin{theorem} The generating function of decorated (partial) dual skew Dyck paths, consisting of $n$ steps, ending on level $j$, is given by
\begin{equation*}
G(z,u)=\frac{3z^2-3+\sqrt{1-6z^2+5z^4}}{2z(2-z^2)(u-s_1)},
\end{equation*}
with
\begin{equation*}
s_1=\frac{2z}{1+z^2-\sqrt{1-6z^2+5z^4}}.
\end{equation*}
Furthermore
\begin{equation*}
[u^j]G(z,u)=\frac{3z^2-3+\sqrt{1-6z^2+5z^4}}{2(z^2-2)}z^jS^{j+1},
\end{equation*}
with
\begin{equation*}
S=\frac{1+z^2-\sqrt{1-6z^2+5z^4}}{2z^2}.
\end{equation*} \end{theorem}
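Again the state diagram can be run as a dynamic program (our sketch) to confirm the series $G_0,\dots,G_3$ above:

```python
# Layers: a = reached by a black up-step, b = by a down-step,
# c = by a blue up-step; names are ours.
def dual_series(j, nmax):
    size = nmax + 2
    a, b, c = [0] * size, [0] * size, [0] * size
    a[0] = 1  # the empty path
    out = []
    for _ in range(nmax + 1):
        out.append(a[j] + b[j] + c[j])
        na, nb, nc = [0] * size, [0] * size, [0] * size
        for i in range(size - 1):
            na[i + 1] = a[i] + b[i] + c[i]  # black up-step
            nb[i] = a[i + 1] + b[i + 1]     # down-step, not after blue up
            nc[i + 1] = a[i] + c[i]         # blue up-step, not after down
        a, b, c = na, nb, nc
    return out

print([x for x in dual_series(0, 12) if x])  # [1, 1, 3, 10, 36, 137, 543]
print([x for x in dual_series(2, 8) if x])   # [4, 8, 29, 111]
```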
\subsection*{Open ended paths}
If we do not specify the end of the paths, in other words we sum over all $j\ge0$, then at the level of generating functions this is very easy, since we only have to set $u:=1$. We find \begin{align*}
G(1)&=\frac{(1+z)(1-3z)-\sqrt{1-6z^2+5z^4}}{2z(z^2+2z-1)}\\
&=1+2z+5{z}^{2}+11{z}^{3}+27{z}^{4}+62{z}^{5}+151{z}^{6}+
354{z}^{7}+859{z}^{8}+2036{z}^{9}+\cdots. \end{align*}
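A brute-force prefix count (ours) again matches these coefficients:

```python
from itertools import product

# Partial dual paths of length n, any end level.
def count_dual_prefixes(n):
    total = 0
    for word in product("ubd", repeat=n):
        if any(x + y in ("db", "bd") for x, y in zip(word, word[1:])):
            continue
        level = 0
        for step in word:
            level += -1 if step == "d" else 1
            if level < 0:
                break
        else:
            total += 1
    return total

print([count_dual_prefixes(n) for n in range(7)])  # [1, 2, 5, 11, 27, 62, 151]
```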
\subsection*{Counting blue edges}
We can use an extra variable, $w$, to count additionally the blue edges that occur in a path. We use the same letters for generating functions. Eventually, the coefficient $[z^nu^jw^k]S$ is the number of (partial) paths consisting of $n$ steps, leading to level $j$, and having passed $k$ blue edges. The endpoint of the original skew path has then coordinates $(n-2k,j)$. The computations are very similar, and we only sketch the key steps.
\begin{gather*}
a_0=1,\quad a_{i+1}=za_i+zb_i+zc_i,\quad i\ge0,\\
b_i=za_{i+1}+zb_{i+1},\quad i\ge0,\\
c_{i+1}=wza_{i}+wzc_{i},\quad i\ge0. \end{gather*} This leads to \begin{align*}
A(u)&=1+uzA(u)+uzB(u)+uzC(u),\\
B(u)&=\frac zu(A(u)-a_0)+\frac zu(B(u)-b_0),\\
C(u)&=c_0+wuzA(u)+wuzC(u). \end{align*} Solving, \begin{equation*}
S(u)=A(u)+B(u)+C(u)={\frac {u-wu{z}^{2}-zA(0)-zB(0)+uw{z}^{2}A(0)+uw{z}^{2}B(0)}{{u}^{2}{z}^{3}w+u-w{u}^{2}z-{u}^{2}z-z+wu{z}^{2}}}. \end{equation*} The denominator factors as $-z(1+w-z^2w)(u-s_1)(u-s_2)$, with \begin{align*}
s_1&={\frac {1+{z}^{2}w+\sqrt {1-2\,{z}^{2}w+{z}^{4}{w}^{2}-4\,{z}^{2
}+4{z}^{4}w}}{2z \left( 1+w-{z}^{2}w \right) }},\\*
s_2&= {\frac {1+{z}^{2}w-\sqrt {1-2\,{z}^{2}w+{z}^{4}{w}^{2}-4\,{z}^{2
}+4{z}^{4}w}}{2z \left( 1+w-{z}^{2}w \right) }}. \end{align*} Note the factorization $1-(4+2w)z^2+(4w+w^2)z^4=(1-z^2w)(1-(4+w)z^2)$. Since the factor $u-s_2$ in the denominator is ``bad,'' it must also cancel in the numerators. From this we eventually find (with the abbreviation $W=\sqrt{1-(4+2w)z^2+(4w+w^2)z^4}$\,) \begin{equation*}
G(0)={\frac {1-{z}^{2}w-W }{2{z}^{2}}}, \end{equation*} and further \begin{equation*}
G(u)=\frac {w-{z}^{2}{w}^{2}-wW+2-2{z}^{2}w}
{2z \left( -w -1+{z}^{2}w \right) (u-s_1)}. \end{equation*} The special case $u=0$ (return to the $x$-axis) is to be noted: \begin{equation*}
G(0)=1+{z}^{2}+ \left( w+2 \right) {z}^{4}+ \left( {w}^{2}+4w+5 \right)
{z}^{6}+ \left( w+2 \right) \left( {w}^{2}+4w+7 \right) {z}^{8}+\cdots. \end{equation*} Compare the factor $(w^2+4w+5)$ with the earlier drawing of the 10 paths. There is again a substitution that allows for better results: \begin{equation*}
x=\frac{v}{1+(2+w)v+v^2}, \quad\text{then}\quad G(0)=1+v. \end{equation*} Since $S(0)=G(0)$, with $S(0)$ from the first part of this section (the same objects, read from left to right resp.\ from right to left), no new analysis is required.
\section{Counting ternary trees according to the number of middle edges}
The recent preprint \cite{Burstein} triggered my interest in the sequence A120986 in \cite{OEIS}. The double-indexed sequence enumerates ternary trees according to the number of edges and the number of middle edges. We consider here $T(n,k)$, the number of ternary trees with $n$ nodes and $k$ middle edges. The difference is marginal, but we want to compare/relate our analysis with \cite{christmas}, and there it is also the number of nodes that is considered. Let $G=G(x,u)=\sum_{n,k\ge0}T(n,k)x^nu^k$. Then it is easy to see (decomposition at the root) that \begin{equation*}
G=1+xG^2(1-u+uG). \end{equation*} The substitution \begin{equation*}
x=\frac{t(1-t)^2}{(1-t+ut)} \end{equation*} makes the cubic equation manageable and also allows, as in \cite{christmas}, to introduce a (refined) version of the $(3/2)$-ary trees.
Here is a small table of these numbers and a ternary tree: \begin{center}
\begin{tabular}{c|cccccc}
$n\backslash k$& 0&1&2&3&4&5\\
\hline
0&1&&&&&\\
1&1&&&&&\\
2&2&1&&&&\\
3&5&6&1&&&\\
4&14&28&12&1&&\\
5&42&120&90&20&1&\\
6&132&495&550&220&30&1\\
\end{tabular}
\end{center} \begin{figure}
\caption{Ternary tree with 17 nodes and 3 middle edges}
\end{figure}
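The table can be reproduced from the root decomposition underlying the functional equation (our sketch): a nonempty ternary tree is a root with left, middle and right subtrees, and a middle edge occurs exactly when the middle subtree is nonempty:

```python
def ternary_rows(N):
    """rows[n][k] = T(n,k): ternary trees with n nodes and k middle edges."""
    rows = [[1]]  # the empty tree
    for n in range(1, N + 1):
        row = [0] * n
        for nl in range(n):
            for nm in range(n - nl):
                nr = n - 1 - nl - nm
                shift = 1 if nm > 0 else 0  # the middle edge, marked by u
                for kl, a in enumerate(rows[nl]):
                    for km, b in enumerate(rows[nm]):
                        for kr, c in enumerate(rows[nr]):
                            row[kl + km + kr + shift] += a * b * c
        rows.append(row)
    return rows

rows = ternary_rows(6)
print(rows[4])  # [14, 28, 12, 1]
print(rows[6])  # [132, 495, 550, 220, 30, 1]
```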
\subsection*{Analysis of the cubic equation}
The cubic equation has the following solutions: \begin{align*}
r_1&=\frac{1}{1-t},\\
r_2&=\frac{-t+t^2-t^2u-\sqrt{t(1-t+ut)(4u+t-4ut-t^2+t^2u)}}{2ut(1-t)},\\
r_3&=\frac{-t+t^2-t^2u+\sqrt{t(1-t+ut)(4u+t-4ut-t^2+t^2u)}}{2ut(1-t)}. \end{align*} Note that \begin{equation*}
r_2r_3=-\frac{1-t+ut}{ut(1-t)}. \end{equation*} The root with the combinatorial significance is $r_1$. But it is the explicit form of the two other roots that makes everything here interesting and challenging.
We extract coefficients of $r_1$ using contour integration, which is closely related to the Lagrange inversion formula. The path of integration is a small circle in the $x$-plane which is then transformed into a small circle in the $t$-plane. \begin{align*}
[x^n]r_1&=\frac1{2\pi i}\oint \frac{dx}{x^{n+1}}\frac1{1-t}\\
&=\frac1{2\pi i}\oint \frac{dt(1-t)(1-3t+2t^2-2t^2u)}{(1-t+tu)^2}\frac{(1-t+tu)^{n+1}} {t^{n+1}(1-t)^{2n+2}}\frac1{1-t}\\
&=[t^n](1-3t+2t^2-2t^2u)\frac{(1-t+tu)^{n-1}} {(1-t)^{2n+2}}. \end{align*} Furthermore \begin{align*}
[x^nu^k]r_1
&=[t^n][u^k](1-3t+2t^2-2t^2u)\frac{(1-t+tu)^{n-1}} {(1-t)^{2n+2}}\\
&=[t^n]\binom{n-1}{k}\frac{t^k(1-2t)}{(1-t)^{n+k+2}} -2[t^n]\binom{n-1}{k-1}\frac{t^{k+1}}{(1-t)^{n+k+2}}\\
&=\binom{n-1}{k}[t^{n-k}]\frac{(1-2t)}{(1-t)^{n+k+2}} -2\binom{n-1}{k-1}[t^{n-k-1}]\frac{1}{(1-t)^{n+k+2}}\\
&=\binom{n-1}{k}\binom{2n+1}{n-k}-2\binom{n-1}{k}\binom{2n}{n-k-1}
-2\binom{n-1}{k-1}\binom{2n}{n-k-1}\\
&=\frac1n\binom nk\binom{2n}{n-1-k}. \end{align*} For $u=1$, which means that the middle edges are not especially counted, we get \begin{equation*}
\sum_k\frac1n\binom nk\binom{2n}{n-1-k}=\frac1n\binom{3n}{n-1}, \end{equation*} the number of ternary trees with $n$ nodes.
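The simplification of the three binomial terms into the product form, as well as the row-sum identity, can be confirmed numerically; a Python sketch (illustration only):

```python
from math import comb

def C(n, k):
    """Binomial coefficient, zero outside the usual range."""
    return comb(n, k) if 0 <= k <= n else 0

def T(n, k):
    """Number of ternary trees with n nodes and k middle edges (product form)."""
    return C(n, k) * C(2 * n, n - 1 - k) // n

def T_three_terms(n, k):
    """The three-term combination obtained from the contour integral."""
    return (C(n - 1, k) * C(2 * n + 1, n - k)
            - 2 * C(n - 1, k) * C(2 * n, n - k - 1)
            - 2 * C(n - 1, k - 1) * C(2 * n, n - k - 1))
```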
\subsection*{Factorizing the solution of the cubic equation}
For $u=1$, Knuth \cite{christmas} was able to factor the generating function $r_1$ into two factors, for which he coined the catchy name $(3/2)$-ary trees. For this factorization, see also \cite{naimi-paper,BM-P}. The goal in this section is to perform this factorization in the context of counting middle edges, i.e., for the generating function with the additional variable $u$. In Knuth's instance, the generating function was expressible as a generalized binomial series (in the sense of Lambert \cite{GKP}), but that does not seem to be an option here.
Note that \begin{equation*}
\frac1{r_2}=\frac t2-\frac{\sqrt t\sqrt{t(1-t)+u(2-t)^2}}{2\sqrt{1-t+tu}} \end{equation*} and \begin{equation*}
\frac1{r_3}=\frac t2+\frac{\sqrt t\sqrt{t(1-t)+u(2-t)^2}}{2\sqrt{1-t+tu}}. \end{equation*} From the cubic equation we deduce that \begin{equation*}
r_1=-\frac1{uxr_2r_3}, \end{equation*} which is the desired factorization. The factor $ux$ will be split evenly as $\sqrt{ux}\cdot\sqrt{ux}$, whereas the minus sign goes to the factor $1/r_2$. In the following we work out how this factorization can be obtained. To say it again, it is not as appealing as in the original case.
Let us write \begin{equation*}
t=x\Phi(t), \quad\text{with}\quad \Phi(t)=\frac{1-t+tu}{(1-t)^2}, \end{equation*} so that we can use the Lagrange inversion formula to get \begin{equation*}
[x^n]t^{\ell}=\frac{\ell}{n}[t^{n-\ell}]\Phi(t)^n \end{equation*} and \begin{align*}
[x^nu^k]t^{\ell}&=\frac{\ell}{n}[t^{n-\ell}][u^k]\frac{(1-t+tu)^n}{(1-t)^{2n}}\\
&=\frac{\ell}{n}[t^{n-\ell-k}]\binom{n}{k}\frac1{(1-t)^{n+k}}
=\frac{\ell}{n}\binom{n}{k}\binom{2n-\ell-1}{n-\ell-k}. \end{align*} In particular, \begin{equation*}
t=\sum_{n\ge1}x^n\sum_{0\le k\le n}\frac{1}{n}\binom{n}{k}\binom{2n-2}{n-1-k}u^k; \end{equation*} this series expansion may be used in the following developments whenever needed.
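As a check for $\ell=1$, the series $t$ can be grown directly from the polynomial rewriting $t=x(1-t+ut)+2t^2-t^3$ of $t(1-t)^2=x(1-t+ut)$ and compared with the Lagrange formula; a Python sketch (illustration only):

```python
from math import comb
from collections import defaultdict

N = 8  # truncation order in x

def mul(A, B):
    C = [defaultdict(int) for _ in range(N + 1)]
    for n1, a in enumerate(A):
        for n2 in range(N + 1 - n1):
            for k1, c1 in a.items():
                for k2, c2 in B[n2].items():
                    C[n1 + n2][k1 + k2] += c1 * c2
    return C

def zero():
    return [defaultdict(int) for _ in range(N + 1)]

# iterate t = x(1-t+ut) + 2t^2 - t^3; each pass fixes one more power of x
t = zero()
for _ in range(N + 1):
    t2 = mul(t, t)
    t3 = mul(t2, t)
    new = zero()
    new[1][0] += 1                      # the term x*1
    for n in range(N):                  # the terms x*(-t) and x*(u t)
        for k, c in t[n].items():
            new[n + 1][k] -= c
            new[n + 1][k + 1] += c
    for n in range(N + 1):              # the terms 2t^2 - t^3
        for k, c in t2[n].items():
            new[n][k] += 2 * c
        for k, c in t3[n].items():
            new[n][k] -= c
    t = new
```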
To proceed further, we set $u=1+U$ and $\tau=t/u$: \begin{gather*}
\frac1{r_2}=\frac t2-\frac{\sqrt{x}}{2(1-t)}\sqrt{4-3t+U(2-t)^2},\\
\frac1{r_3}=\frac t2+\frac{\sqrt{x}}{2(1-t)}\sqrt{4-3t+U(2-t)^2}. \end{gather*} Since the first term is well understood, we concentrate on the second: \begin{multline*}
\frac{\sqrt{x}}{2(1-t)}\sqrt{4-3t+U(2-t)^2}
\\=\sqrt{ux}\bigg( 1+\frac18 \left( 5+4U \right) \tau+\frac {1}{128}
( 71+136U +64{U}^{2} ) {\tau}^{2}\\+\frac {1}{1024} ( 541+1596U +1568{U}^{2}+512{U}^{3} ) {\tau}^{3}+\cdots\bigg) =:\sqrt{ux}\cdot\Xi. \end{multline*} With this expanded form $\Xi$, we have now our final formula, the expansion of $r_1$ into two factors: \begin{align*}
r_1=\frac1{1-t}=-\frac{1}{ux}\frac1{r_2}\frac1{r_3}
=\Big(-\frac{1}{\sqrt{ux}}\frac t2+\Xi\Big)\Big(\frac{1}{\sqrt{ux}}\frac t2+\Xi\Big). \end{align*} These two factors do not seem to have a combinatorial meaning, but we can still stick to the $(3/2)$-ary tree notation, with the additional counting of middle edges.
\section{More about Motzkin paths}
\subsection*{Retakh's Motzkin paths} V.~Retakh~\cite{EZ} introduced the following restricted class of Dyck paths: Peaks are only allowed on level 1 and on even-numbered levels. Here is an example, and the corresponding plane tree using the standard bijection. \begin{center}
\begin{tikzpicture}[scale=0.6]
\draw[step=1.cm,black,dotted] (-0.0,-0.0) grid (20.0,6.0);
\draw[thick] (0,0) -- (1,1) -- (2,0)-- (3,1)-- (4,2)-- (5,3)-- (6,4)-- (7,3)-- (8,4)-- (9,5)-- (10,6)-- (11,5)-- (12,4)-- (13,3)-- (14,2)-- (15,1)-- (16,0)-- (17,1)-- (18,0)-- (19,1)-- (20,0);
\node at (1,1){$\bullet$};
\node at (6,4){$\bullet$};
\node at (10,6){$\bullet$};
\node at (17,1){$\bullet$};
\node at (19,1){$\bullet$};
\end{tikzpicture}
\end{center}
\begin{center}
\begin{tikzpicture}[scale=0.6]
\node (1) at (0, 0){$\bullet$};
\node (2) at (-3, -1){$\bullet$};
\node (3) at (-1, -1){$\bullet$};
\node (4) at (1, -1){$\bullet$};
\node (5) at (3, -1){$\bullet$};
\node (6) at (-1, -2){$\bullet$};
\node (7) at (-1, -3){$\bullet$};
\node (8) at (-2, -4){$\bullet$};
\node (9) at (0, -4){$\bullet$};
\node (10) at (0, -5){$\bullet$};
\node (11) at (0, -6){$\bullet$};
\node at (-5, -1){level $1$};
\node at (-5, -4){level $4$};
\node at (-5, -6){level $6$};
\draw[-] (1.center) to (2.center);
\draw[-] (1.center) to (3.center);
\draw[-] (1.center) to (4.center);
\draw[-] (1.center) to (5.center);
\draw[-] (3.center) to (6.center);
\draw[-] (6.center) to (7.center);
\draw[-] (8.center) to (7.center);
\draw[-] (9.center) to (7.center);
\draw[-] (9.center) to (10.center);
\draw[-] (10.center) to (11.center);
\end{tikzpicture}
\end{center}
\begin{center}
\begin{tikzpicture}[scale=0.6]
\node (1) at (0, 0){$\bullet$};
\node (2) at (-3, -1){$\bullet$};
\node (3) at (-1, -1){$\bullet$};
\node (4) at (1, -1){$\bullet$};
\node (5) at (3, -1){$\bullet$};
\node (8) at (-2, -4){ };
\node (9) at (0, -4){ };
\draw[-] (1.center) to (2.center);
\draw[-] (1.center) to (3.center);
\draw[-] (1.center) to (4.center);
\draw[-] (1.center) to (5.center);
\draw[-] (3.center) to (8.center);
\draw[-] (3.center) to (9.center);
\draw[-] (8.center) to (9.center);
\end{tikzpicture}
\end{center}
Ekhad and Zeilberger \cite{EZ} proved recently that these restricted paths are enumerated by Motzkin numbers.
Recall that the generating function of the Motzkin numbers $M(z)$ according to length satisfies $M=1+zM+z^2M^2$ and thus
\begin{equation*}
M(z)=\frac{1-z-\sqrt{1-2z-3z^2}}{2z^2}.
\end{equation*}
Here, I want to present a few additional observations, also including the height of the paths (or the associated plane trees).
First, we are going to confirm the connection to Motzkin paths:
Since level 1 is somewhat special, we only consider trees as symbolized by the triangle for now. We will use two generating functions to deal with the odd/even situation. We have
\begin{equation*}
F=\frac{zG}{1-G}\quad\text{and}\quad G=\frac{z}{1-F}.
\end{equation*}
This is not too difficult to see: the family $\mathscr{F}$ of trees as symbolized by the triangle does not contain the tree consisting of a single node, so $F=zG+zG^2+\cdots$. However, the next generation $\mathscr{G}$ may consist of a single node, and thus $G=z+zF+zF^2+\cdots$.
Solving this (best by a computer) we find $F(z)=z^2M(z)$ and the total generating function (allowing sequences of single nodes between copies of $\mathscr{F}$) is
\begin{equation*}
\frac z{1-z}\sum_{r\ge0}\Big(\frac{F}{1-z}\Big)^r=zM(z),
\end{equation*}
as predicted. Recall that the number of nodes in trees is always one more than the half-length of the corresponding Dyck path.
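The system and its consequences can be verified by expanding everything as truncated power series; the following Python sketch (illustration only) checks $F(z)=z^2M(z)$, and that the total generating function is $zM(z)$, against the Motzkin recurrence.

```python
N = 12

def mul(a, b):
    """Product of two truncated series in z."""
    c = [0] * (N + 1)
    for i, x in enumerate(a):
        if x:
            for j in range(N + 1 - i):
                c[i + j] += x * b[j]
    return c

def inv_one_minus(a):
    """Series of 1/(1-a) for a series a with a[0] = 0."""
    r = [0] * (N + 1)
    r[0] = 1
    for n in range(1, N + 1):
        r[n] = sum(a[j] * r[n - j] for j in range(1, n + 1))
    return r

zs = [0, 1] + [0] * (N - 1)          # the series z

# solve F = zG/(1-G), G = z/(1-F) by iteration
F = [0] * (N + 1)
for _ in range(N + 1):
    G = mul(zs, inv_one_minus(F))
    F = mul(mul(zs, G), inv_one_minus(G))

# Motzkin numbers: M[n+1] = M[n] + sum_k M[k] M[n-1-k]
M = [1]
for n in range(N):
    M.append(M[n] + sum(M[k] * M[n - 1 - k] for k in range(n)))

# the complete generating function z/(1-z) * 1/(1 - F/(1-z))
ones = [1] * (N + 1)                 # the series 1/(1-z)
total = mul(mul(zs, ones), inv_one_minus(mul(F, ones)))
```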
We will compute the average height of such restricted paths, using singularity analysis of generating functions, as in \cite{FlOd90, FS}. Whether we define the height in terms of the maximal chain of edges resp.\ nodes only makes a difference of one,
and we will only compute the average height according to the leading term of order $\sqrt n$. For readers who wish to see how more terms could be computed, at least in principle, we cite~\cite{Prodinger-ars}.
The average height of planted plane trees (and subclasses of them) has been of central interest to combinatorialists and theoretical computer scientists alike since the seminal paper \cite{BrKnRi72}. The number of leaves (endnodes) has also been one of the key parameters since Narayana \cite{Narayana}. We investigate it in the last section of this paper to see how the restrictions according to Retakh influence this parameter.
\subsection*{The height}
Now we will use the substitution $z=\frac{v}{1+v+v^2}$, which occurred for the first time in \cite{Prodinger-three}, but has been used more recently in different models where Motzkin numbers are involved \cite{Prodinger-Deutsch, HHP, Baril-neu}.
Motzkin numbers appear in \cite{OEIS} as sequence A001006.
For example, the generating function $M(z)$ of Motzkin numbers has the very simple form $M(z)=1+v+v^2$ using this auxiliary variable.
We define
\begin{equation*}
G_{k+1}=\dfrac{z}{1-\dfrac{zG_k}{1-G_k}},\quad\text{with} \quad G_1=z.
\end{equation*}
There is a simple formula, viz.
\begin{equation*}
G_k=\frac v{1+v}\frac{1-v^{2k}}{1-v^{2k+1}}.
\end{equation*}
This is easy to prove by induction, which we will do for the convenience of the reader.
The start is
\begin{equation*}
G_1=\frac v{1+v}\frac{1-v^{2}}{1-v^{3}}=\frac v{1+v}\frac{1+v}{1+v+v^{2}}=\frac{v}{1+v+v^{2}}=z.
\end{equation*}
And now
\begin{align*}
G_{k+1}&=\dfrac{z}{1-\dfrac{zG_k}{1-G_k}}=\dfrac{z(1-G_k)}{1-(1+z)G_k}=\frac{v}{1+v+v^{2}}\dfrac{1-\frac v{1+v}\frac{1-v^{2k}}{1-v^{2k+1}}}{1-\frac{(1+v)^2}{1+v+v^2}\frac v{1+v}\frac{1-v^{2k}}{1-v^{2k+1}}}\\
&=\frac{v}{1+v}\dfrac{1+v-v\frac{1-v^{2k}}{1-v^{2k+1}}}{1+v+v^{2}-v(1+v)\frac{1-v^{2k}}{1-v^{2k+1}}}\\&
=\frac{v}{1+v}\dfrac{(1+v)(1-v^{2k+1})-v(1-v^{2k})}{(1+v+v^{2})(1-v^{2k+1})-v(1+v)(1-v^{2k})}\\
&=\frac{v}{1+v}\dfrac{1-v^{2k+2}}{1-v^{2k+3}},
\end{align*}
as claimed. From this we also get
\begin{equation*}
F_k=\frac{zG_k}{1-G_k}=\frac{v^2}{1+v+v^2}\frac{1-v^{2k} }{1-v^{2k+2}}.
\end{equation*}
For $k\ge1$, $F_k$ is the generating function of trees (like in the triangle) of height $\le 2k$.
Note that the height is currently counted in terms of nodes;
\begin{equation*}
F_1=\frac{z^2}{1-z},
\end{equation*}
which describes a root with $\ell\ge1$ leaves attached to it.
Now we incorporate the irregular beginning of the tree and compute
\begin{equation*}
\frac z{1-z}\sum_{r\ge0}\Big(\frac{F_h}{1-z}\Big)^r=\frac z{1-z}\dfrac1{1-\dfrac{F_h}{1-z}}=v\frac{1-v^{2h+2}}{1-v^{2h+4}}.
\end{equation*}
From here onwards it seems to be more natural to define the height of the whole tree in terms of the number of \emph{edges},
and then the quantity we just derived is the generating function of all trees with height $\le 2h$, for $h\ge1$. Note that the limit
$h\to\infty$ gives us simply $v=zM(z)$, which is consistent. There is also a contribution of trees of height $\le 1$, namely
$\frac{z}{1-z}=\frac{v}{1+v^2}$, but this term is, when we compute the average height, irrelevant and only contributes to the error term, as we only compute the leading term, which is of order $\sqrt n$.
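All identities of this and the previous paragraphs are equalities of rational functions of $v$, so they can be confirmed exactly at a random rational value of $v$; a Python sketch (illustration only):

```python
from fractions import Fraction as Fr

v = Fr(2, 7)                      # any rational 0 < v < 1 will do
z = v / (1 + v + v**2)

def G_closed(k):
    """Closed form for G_k."""
    return v / (1 + v) * (1 - v**(2 * k)) / (1 - v**(2 * k + 1))

def F_closed(k):
    """Closed form for F_k = z G_k / (1 - G_k)."""
    return v**2 / (1 + v + v**2) * (1 - v**(2 * k)) / (1 - v**(2 * k + 2))

def height_gf(h):
    """Trees of height <= 2h including the irregular beginning."""
    Fh = F_closed(h)
    return z / (1 - z) / (1 - Fh / (1 - z))

# iterate the recursion G_{k+1} = z/(1 - z G_k/(1-G_k)), starting from G_1 = z
G_vals = [z]
for _ in range(9):
    Gk = G_vals[-1]
    G_vals.append(z / (1 - z * Gk / (1 - Gk)))
```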
So, apart from normalization, we are led to investigate
\begin{align*}
\sum_{h\ge1}&2h\bigg[v\frac{1-v^{2h+2}}{1-v^{2h+4}}-v\frac{1-v^{2h}}{1-v^{2h+2}}\bigg]
=2v(1-v^{-2})\sum_{h\ge1}h\bigg[\frac{v^{2h+4}}{1-v^{2h+4}}-\frac{v^{2h+2}}{1-v^{2h+2}}\bigg]\\
&=2v(1-v^{-2})\sum_{h\ge0}h\frac{v^{2h+4}}{1-v^{2h+4}}
-2v(1-v^{-2})\sum_{h\ge0}(h+1)\frac{v^{2h+4}}{1-v^{2h+4}}\\
&=-2v+\frac{2(1-v^{2})}{v}\sum_{h\ge1}\frac{v^{2h}}{1-v^{2h}}.
\end{align*}
Note that we could get explicit coefficients from here, using trinomial coefficients, $\binom{n,3}{k}=[v^k](1+v+v^2)^n$ (notation from \cite{Comtet-book}). To show the reader how this works, we compute
\begin{align*}
[z^{n+1}]&\frac{1-v^{2}}{v}\sum_{h\ge1}\frac{v^{2h}}{1-v^{2h}}
=\frac1{2\pi i}\oint\frac{dz}{z^{n+2}}\frac{1-v^{2}}{v}\sum_{h\ge1}\frac{v^{2h}}{1-v^{2h}}\\
&=\frac1{2\pi i}\oint dv(1-v^2)^2\frac{(1+v+v^2)^{n}}{v^{n+3}}\sum_{h,k\ge1}v^{2hk}\\
&=[v^{n+2}](1-2v^2+v^4)\sum_{h\ge1}d(h)v^{2h}(1+v+v^2)^{n}\\
&=\sum_{h\ge1}d(h)\bigg[\binom{n,3}{n+2-2h}-2\binom{n,3}{n-2h}+\binom{n,3}{n-2-2h}\bigg].
\end{align*}
Note that $d(h)$ is the number of divisors of $h$. We will, however, not use this explicit form.
The expression as derived before,
\begin{equation*}
-2v+\frac{2(1-v^{2})}{v}\sum_{h\ge1}\frac{v^{2h}}{1-v^{2h}},
\end{equation*}
has to be expanded around $v=1$, which is a standard application of the Mellin transform. Details are worked out in
\cite{HPW}, for example:
\begin{equation*}
\sum_{h\ge1}\frac{v^{2h}}{1-v^{2h}}=\sum_{k\ge1}d(k)v^{2k}\sim-\frac{\log(1-v^2)}{1-v^2}
\sim-\frac{\log(1-v)}{2(1-v)}.
\end{equation*}
Note again that $d(k)$ is the number of divisors of $k$. Consequently
\begin{equation*}
-2v+\frac{2(1-v^{2})}{v}\sum_{h\ge1}\frac{v^{2h}}{1-v^{2h}}\sim
-2\log(1-v).
\end{equation*}
We have $1-v\sim \sqrt 3\sqrt{1-3z}$, and $z=\frac13$ is the relevant singularity when discussing Motzkin numbers. We can continue
\begin{equation*}
-2v+\frac{2(1-v^{2})}{v}\sum_{h\ge1}\frac{v^{2h}}{1-v^{2h}}\sim -\log(1-3z).
\end{equation*}
The coefficient of $z^n$ in this is $\frac{3^n}{n}$. This has to be divided by
\begin{equation*}
[z^n]zM(z)=[z^{n-1}]M(z)\sim \frac{3^{n+\frac12}}{2\sqrt{\pi }n^{3/2}},
\end{equation*}
with the final result for the average height of restricted Dyck paths (\`a la Retakh):
\begin{equation*}
\sim 2\sqrt{\frac{\pi n}{3}}.
\end{equation*}
Recall \cite{Prodinger-three} that the average height of Motzkin paths of length $n$ is asymptotic to
\begin{equation*}
\sqrt{\frac{\pi n}{3}}.
\end{equation*}
\subsection*{The number of leaves}
We can use a second variable, $u$, to count the number of leaves. Then we have
\begin{equation*}
F(z,u)=\frac{zG(z,u)}{1-G(z,u)}\quad\text{and}\quad G(z,u)=zu+\frac{zF(z,u)}{1-F(z,u)},
\end{equation*}
which leads to
\begin{multline*}
F(z,u)= \\{\frac {1-zu-{z}^{2}+{z}^{2}u-\sqrt {1-2zu-2{z}^{2}-2{z}^{
2}u+{z}^{2}{u}^{2}-2{z}^{3}u+2{z}^{3}{u}^{2}+{z}^{4}-2{z}^{4}u+{
z}^{4}{u}^{2}}}{2(1-zu+z)}}.
\end{multline*}
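The closed form can be compared with the defining system at numerical parameter values, since for small $z$ the system is a contraction; a Python sketch (illustration only):

```python
from math import sqrt, isclose

def F_closed(z, u):
    """The small branch of the quadratic solution."""
    disc = (1 - 2*z*u - 2*z**2 - 2*z**2*u + z**2*u**2
            - 2*z**3*u + 2*z**3*u**2 + z**4 - 2*z**4*u + z**4*u**2)
    return (1 - z*u - z**2 + z**2*u - sqrt(disc)) / (2 * (1 - z*u + z))

def F_fixed_point(z, u, rounds=300):
    """Iterate F = zG/(1-G), G = zu + zF/(1-F), starting from F = 0."""
    F = 0.0
    for _ in range(rounds):
        G = z*u + z*F/(1 - F)
        F = z*G/(1 - G)
    return F
```

At $u=1$ the discriminant collapses to $1-2z-3z^2$ and the formula reduces to $z^2M(z)$, as it should.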
Bringing the irregular beginning also into the game leads to
\begin{align*}
\frac{z}{1-zu}\sum_{r\ge0}\Big(\frac{F}{1-zu}\Big)^r+zu-z.
\end{align*}
This is an ugly expression that we do not display here. However, we can compute the average number of leaves by differentiating w.r.t.\ $u$ and then setting $u=1$:
\begin{equation*}
R:={\frac {v \left( 1+v \right) ( 1-v+2{v}^{2}-{v}^{3}) }{
(1-v) \left( 1+v+{v}^{2} \right) }}.
\end{equation*}
The coefficient of $z^n$ in this can be expressed in terms of trinomial coefficients, if needed. However, we only compute an asymptotic formula, to keep this section short. Expanding around $v=1$, we find
\begin{equation*}
R\sim\frac23\frac{1}{1-v}\sim\frac23\frac{1}{\sqrt3\sqrt{1-3z}},
\end{equation*}
and thus
\begin{equation*}
[z^n]R\sim\frac23\frac{1}{\sqrt3}3^n\frac1{\sqrt{\pi n}}.
\end{equation*}
We divide this again by
\begin{equation*}
\frac{3^{n+\frac12}}{2\sqrt{\pi} n^{3/2}}
\end{equation*}
with the result
\begin{equation*}
\frac49n,
\end{equation*}
which is the asymptotic number of leaves in a Retakh tree of size $n$. Recall that for unrestricted planar trees, the result is
$\frac n2$, which is a folklore result using Narayana numbers. So the constant in the restricted case, $\frac49$, is a bit smaller than
$\frac12$.
With some effort, more precise approximations could be obtained, as well as the variance. This might be a good project for a student.
\section{The amplitude of Motzkin paths}
A Motzkin path consists of up-steps, down-steps, and horizontal steps; see sequence A091965 in \cite{OEIS} and the references given there. Like Dyck paths, they start at the origin and end, after $n$ steps, again at the $x$-axis, and are not allowed to go below the $x$-axis. The height of a Motzkin path is the highest $y$-coordinate that the path reaches. The average height of such random paths of length $n$ was considered in an early paper \cite{Prodinger-three}; it is asymptotically given by $\sqrt{\frac{\pi n}{3}}$.
In the recent paper \cite{irene} an interesting new concept was introduced: the \emph{amplitude}. It is basically twice the height, but with a twist. If there exists a horizontal step on level $h$, which is the height, the amplitude is $2h+1$, otherwise it is $2h$. To clarify the concept, we created a list of all 9 Motzkin paths of length 4 with height and amplitude given.
\begin{center}
\begin{table}[h]
\begin{tabular}{c | c | c |c}
\text{Motzkin path }&\text{horizontal on maximal level}&\text{height}&\text{amplitude}\\
\hline\hline
\begin{tikzpicture}[scale=0.4]
\draw[ultra thick] (0,0) to (1,0) to (2,0) to (3,0) to (4,0) ;
\end{tikzpicture}
& \text{Yes}& 0& 1\\
\hline
\begin{tikzpicture}[scale=0.4]
\draw[ ultra thick] (0,0) to (1,0) to (2,0) to (3,1) to (4,0) ;
\end{tikzpicture}
& \text{No}& 1& 2\\
\hline
\begin{tikzpicture}[scale=0.4]
\draw[ ultra thick] (0,0) to (1,1) to (2,0) to (3,1) to (4,0) ;
\end{tikzpicture}
& \text{No}& 1& 2\\
\hline
\begin{tikzpicture}[scale=0.4]
\draw[ ultra thick] (0,0) to (1,0) to (2,1) to (3,1) to (4,0) ;
\end{tikzpicture}
& \text{Yes}& 1& 3\\
\hline
\begin{tikzpicture}[scale=0.4]
\draw[ ultra thick] (0,0) to (1,1) to (2,1) to (3,1) to (4,0) ;
\end{tikzpicture}
& \text{Yes}& 1& 3\\
\hline
\begin{tikzpicture}[scale=0.4]
\draw[ ultra thick] (0,0) to (1,1) to (2,1) to (3,0) to (4,0) ;
\end{tikzpicture}
& \text{Yes}& 1& 3\\
\hline
\begin{tikzpicture}[scale=0.4]
\draw[ ultra thick] (0,0) to (1,1) to (2,0) to (3,0) to (4,0) ;
\end{tikzpicture}
& \text{No}& 1& 2\\
\hline
\begin{tikzpicture}[scale=0.4]
\draw[ ultra thick] (0,0) to (1,0) to (2,1) to (3,0) to (4,0) ;
\end{tikzpicture}
& \text{No}& 1& 2\\
\hline
\begin{tikzpicture}[scale=0.4]
\draw[ ultra thick] (0,0) to (1,1) to (2,2) to (3,1) to (4,0) ;
\end{tikzpicture}
& \text{No}& 2& 4\\
\hline
\end{tabular}
\end{table} \end{center}
The goal of this paper is to investigate this new parameter; in the next section, generating functions will be given, in the following section explicit enumerations, involving trinomial coefficients $\binom{n,3}{k}=[t^k](1+t+t^2)^n$ (notation following Comtet's book \cite{Comtet-book}). In the last section, the intuitive result that the average amplitude is about twice the average height is confirmed, and then it is shown that, asymptotically, there are about as many Motzkin paths with as without horizontal steps on the maximal level.
\subsection*{Generating functions}
Let \begin{equation*}
M^{\le h}(z)=\sum_{n\ge0}[\text{number of Motzkin paths of length $n$ and height $\le h$}]z^n. \end{equation*} For the computation, let $f_i=f_i(z)$ be the generating function of Motzkin-like paths, bounded by height $h$, but ending at level $i$. Distinguishing the last step, we get \begin{equation*}
f_0=1+zf_0+zf_1,\quad f_i=zf_{i-1}+zf_i+zf_{i+1}\ \ \text{for}\ 0<i<h, \quad f_h=zf_{h-1}+zf_h. \end{equation*} This system is best written in matrix form: \begin{equation*}
\begin{pmatrix}
1-z&-z&0&\dots\\
-z&1-z&-z&0&\dots\\
0&-z&1-z&-z&0&\dots\\
\ddots&\ddots&\ddots\\
\phantom{0}&\phantom{0}&\phantom{0}&\phantom{0}&-z&1-z
\end{pmatrix}
\begin{pmatrix}
f_0\\f_1\\f_2\\ \vdots\\ f_h
\end{pmatrix}
=\begin{pmatrix}
1\\0\\0\\ \vdots\\ 0
\end{pmatrix} \end{equation*} Let $D_n$ be the determinant of the system matrix, with $n$ rows and columns. Using Cramer's rule to solve a linear system, one finds \begin{equation*}
M^{\le h}(z)=f_0=\frac{D_h}{D_{h+1}}. \end{equation*} Expanding the determinant along the first row, we get the recursion $D_n=(1-z)D_{n-1}-z^2D_{n-2}$, and $D_1=1-z$, $D_0=1$. Solving, \begin{equation*}
D_n=\frac1{\sqrt{1-2z-3z^2}}\bigg[\biggl(\frac{1-z+\sqrt{1-2z-3z^2}}{2}\biggr)^{n+1}-\biggl(\frac{1-z-\sqrt{1-2z-3z^2}}{2}\biggr)^{n+1}\bigg]. \end{equation*} If one deals with enumeration of Motzkin-like objects, the substitution $z=\frac{v}{1+v+v^2}$ makes the expressions prettier: \begin{equation*}
D_n=\frac1{1-v^2}\frac{1-v^{2n+2}}{(1+v+v^2)^n} \end{equation*} and further \begin{equation*}
M^{\le h}(z)=f_0=\frac{D_h}{D_{h+1}}=(1+v+v^2)\frac{1-v^{2h+2}}{1-v^{2h+4}}. \end{equation*}
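This formula is easily put to the test: a direct dynamic-programming count of height-bounded Motzkin paths must reproduce the series expansion of $D_h/D_{h+1}$; a Python sketch (illustration only):

```python
N = 14

def mul(a, b):
    c = [0] * (N + 1)
    for i, x in enumerate(a):
        if x:
            for j in range(N + 1 - i):
                c[i + j] += x * b[j]
    return c

def inv_one_minus(a):
    """Series of 1/(1-a) for a series a with a[0] = 0."""
    r = [0] * (N + 1)
    r[0] = 1
    for n in range(1, N + 1):
        r[n] = sum(a[j] * r[n - j] for j in range(1, n + 1))
    return r

def power(a, m):
    r = [0] * (N + 1)
    r[0] = 1
    for _ in range(m):
        r = mul(r, a)
    return r

# v(z): solution of v = z(1 + v + v^2) with v(0) = 0, computed iteratively
v = [0] * (N + 1)
for _ in range(N + 1):
    v2 = mul(v, v)
    v = [0] + [(1 if n == 0 else 0) + v[n] + v2[n] for n in range(N)]

v2 = mul(v, v)
p = [(1 if n == 0 else 0) + v[n] + v2[n] for n in range(N + 1)]   # 1 + v + v^2
one = [1] + [0] * N

def M_bounded_formula(h):
    """Series of (1+v+v^2)(1-v^(2h+2))/(1-v^(2h+4)) in powers of z."""
    num = [x - y for x, y in zip(one, power(v, 2 * h + 2))]
    return mul(mul(p, num), inv_one_minus(power(v, 2 * h + 4)))

def M_bounded_dp(h):
    """Motzkin paths staying within [0, h], counted by dynamic programming."""
    state = [1] + [0] * h
    out = []
    for _ in range(N + 1):
        out.append(state[0])
        new = [0] * (h + 1)
        for level, ways in enumerate(state):
            if ways:
                new[level] += ways              # horizontal step
                if level + 1 <= h:
                    new[level + 1] += ways      # up-step
                if level:
                    new[level - 1] += ways      # down-step
        state = new
    return out
```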
Now let $N^{\le h}(z)$ be the generating function of Motzkin paths of height $\le h$, but where horizontal steps on level $h$ are forbidden. The system to compute this is quite similar: \begin{equation*}
\begin{pmatrix}
1-z&-z&0&\dots\\
-z&1-z&-z&0&\dots\\
0&-z&1-z&-z&0&\dots\\
\ddots&\ddots&\ddots\\
\phantom{0}&\phantom{0}&\phantom{0}&\phantom{0}&-z&\boldsymbol{1}
\end{pmatrix}
\begin{pmatrix}
g_0\\g_1\\g_2\\ \vdots\\ g_h
\end{pmatrix}
=\begin{pmatrix}
1\\0\\0\\ \vdots\\ 0
\end{pmatrix} \end{equation*} The only difference in the matrix is the entry in the last row, written in boldface. Let $D_n^*$ be the determinant of this system matrix, with $n$ rows and columns. Again, \begin{equation*}
N^{\le h}(z)=g_0=\frac{D_h^*}{D_{h+1}^*}. \end{equation*} Expanding the determinant along the last row, we find \begin{equation*}
D_n^*=D_{n-1}-z^2D_{n-2}=\frac1{1-v}\frac{1-v^{2n+1}}{(1+v+v^2)^n} \end{equation*} and \begin{equation*}
N^{\le h}(z)=g_0=\frac{D_h^*}{D_{h+1}^*}=(1+v+v^2)\frac{1-v^{2h+1}}{1-v^{2h+3}}. \end{equation*} Now let us consider \begin{equation*}
M^{\le h}(z)-N^{\le h}(z). \end{equation*} There is obviously a lot of cancellation going on. The objects that are still counted have height $h$ and \emph{do} have horizontal steps on level $h$. That is one of the two quantities that we wanted to compute, and we get \begin{align*}
\text{Horiz}_h(z)&=(1+v+v^2)\frac{1-v^{2h+2}}{1-v^{2h+4}}-(1+v+v^2)\frac{1-v^{2h+1}}{1-v^{2h+3}}\\&=
(1+v+v^2)(1-v^{-2})\bigg[\frac{v^{2h+4}}{1-v^{2h+4}}-\frac{v^{2h+3}}{1-v^{2h+3}}\bigg]. \end{align*}
Similarly, considering $N^{\le h}(z)-M^{\le h-1}(z)$, we find that only objects are counted that have height $=h$, and \emph{no} horizontal steps on level $h$. Thus \begin{align*}
\text{No-Horiz}_h(z)&=(1+v+v^2)\bigg[\frac{1-v^{2h+1}}{1-v^{2h+3}}-\frac{1-v^{2h}}{1-v^{2h+2}}\bigg]\\&=
(1+v+v^2)(1-v^{-2})\bigg[\frac{v^{2h+3}}{1-v^{2h+3}}-\frac{v^{2h+2}}{1-v^{2h+2}}\bigg]. \end{align*} As a check, we get \begin{align*}
\text{Horiz}_h(z)+\text{No-Horiz}_h(z)&=(1+v+v^2)(1-v^{-2})\bigg[\frac{v^{2h+4}}{1-v^{2h+4}}-\frac{v^{2h+2}}{1-v^{2h+2}}\bigg]
\\*&=M^{\le h}(z)-M^{\le h-1}(z)=M^{= h}(z). \end{align*}
\subsection*{Explicit enumerations}
All our generating functions contain the term \begin{equation*}
(1+v+v^2)(1-v^{-2})\frac{v^{2h+a}}{1-v^{2h+a}}=(1+v+v^2)(1-v^{-2})\sum_{k\ge1}v^{k(2h+a)} \end{equation*} for various values of $a$. We show how to compute the coefficient of $z^n$ in this. It will be done using contour integration. The contour is a small circle in the $z$-plane or $v$-plane. \begin{align*}
[z^n]&(1+v+v^2)(1-v^{-2})\frac{v^{2h+a}}{1-v^{2h+a}}
=\frac1{2\pi i}\oint \frac{dz}{z^{n+1}}(1+v+v^2)(1-v^{-2})\sum_{k\ge1}v^{k(2h+a)}\\
&=\frac1{2\pi i}\oint \frac{dv(1-v^2)}{(1+v+v^2)^2}\frac{(1+v+v^2)^{n+1}}{v^{n+1}}(1+v+v^2)(1-v^{-2})\sum_{k\ge1}v^{k(2h+a)}\\
&=-\sum_{k\ge1}[v^{n+2-k(2h+a)}](1-v^2)^2(1+v+v^2)^{n}\\
&=-\sum_{k\ge1}\bigg[\binom{n,3}{n+2-k(2h+a)}-2\binom{n,3}{n-k(2h+a)}+\binom{n,3}{n-2-k(2h+a)}\bigg]. \end{align*} In this computation, we used again the notion of \emph{trinomial} coefficients: \begin{equation*}
\binom{n,3}{k}=[t^k](1+t+t^2)^n. \end{equation*}
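Both the generating functions and the trinomial extraction can be checked against a brute-force classification of all Motzkin paths of moderate length; a Python sketch (illustration only):

```python
from itertools import product
from collections import Counter

def trinomial(n, m):
    """[t^m](1+t+t^2)^n."""
    if m < 0 or m > 2 * n:
        return 0
    row = [1]
    for _ in range(n):
        new = [0] * (len(row) + 2)
        for i, c in enumerate(row):
            new[i] += c
            new[i + 1] += c
            new[i + 2] += c
        row = new
    return row[m]

def coeff(n, h, a):
    """[z^n] of (1+v+v^2)(1-v^(-2)) v^(2h+a)/(1-v^(2h+a)) via the trinomial sum."""
    m, s, k = 2 * h + a, 0, 1
    while k * m <= n + 2:
        j = k * m
        s += trinomial(n, n + 2 - j) - 2 * trinomial(n, n - j) + trinomial(n, n - 2 - j)
        k += 1
    return -s

def brute(n):
    """Tally Motzkin paths of length n by (height, horizontal step on top level?)."""
    yes, no = Counter(), Counter()
    for steps in product((1, -1, 0), repeat=n):
        lev, ok, hmax, hlev = 0, True, 0, set()
        for s in steps:
            if s == 0:
                hlev.add(lev)
            lev += s
            if lev < 0:
                ok = False
                break
            hmax = max(hmax, lev)
        if ok and lev == 0:
            (yes if hmax in hlev else no)[hmax] += 1
    return yes, no
```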
\subsection*{The average amplitude}
Here we compute: \begin{align*}
\sum_{h\ge0}&(2h+1)\text{Horiz}_h(z)+\sum_{h\ge1}(2h)\text{No-Horiz}_h(z)\\
&=(1+v+v^2)(1-v^{-2})\sum_{h\ge0}(2h+1)\bigg[\frac{v^{2h+4}}{1-v^{2h+4}}-\frac{v^{2h+3}}{1-v^{2h+3}}\bigg]
\\&+(1+v+v^2)(1-v^{-2})\sum_{h\ge0}(2h)\bigg[\frac{v^{2h+3}}{1-v^{2h+3}}-\frac{v^{2h+2}}{1-v^{2h+2}}\bigg]\\
&=-(1+v+v^2)(1-v^{-2})\sum_{h\ge0}\frac{v^{2h+3}}{1-v^{2h+3}}\\
&+(1+v+v^2)(1-v^{-2})\sum_{h\ge0}(2h+1)\frac{v^{2h+4}}{1-v^{2h+4}}\\&-
(1+v+v^2)(1-v^{-2})\sum_{h\ge0}(2h+2)\frac{v^{2h+4}}{1-v^{2h+4}}\\
&=-(1+v+v^2)(1-v^{-2})\sum_{h\ge0}\frac{v^{2h+3}}{1-v^{2h+3}}-(1+v+v^2)(1-v^{-2})\sum_{h\ge1}\frac{v^{2h+2}}{1-v^{2h+2}}\\
&=-(1+v+v^2)(1-v^{-2})\sum_{h\ge0}\frac{v^{2h+1}}{1-v^{2h+1}}+(1+v+v^2)(1-v^{-2})\frac{v}{1-v}
\\&-(1+v+v^2)(1-v^{-2})\sum_{h\ge1}\frac{v^{2h}}{1-v^{2h}}+(1+v+v^2)(1-v^{-2})\frac{v^{2}}{1-v^{2}}\\
&=-(1+v+v^2)(1-v^{-2})\sum_{h\ge0}\frac{v^{2h+1}}{1-v^{2h+1}}
\\&-(1+v+v^2)(1-v^{-2})\sum_{h\ge1}\frac{v^{2h}}{1-v^{2h}}-\frac{(1+2v)(1+v+v^2)}{v}\\
&=-(1+v+v^2)(1-v^{-2})\sum_{h\ge1}\frac{v^{h}}{1-v^{h}}-\frac{(1+2v)(1+v+v^2)}{v}. \end{align*}
To find asymptotics from here, we need the local expansion of this generating function around $v\sim1$, or, equivalently, $z\sim \frac13$. See \cite{FGD} for more explanations of how this method works; it also involves singularity analysis of generating functions \cite{FlOd90}. While this combined approach of Mellin transform and singularity analysis has been used for more than 30 years, we would like to cite \cite{HPW}, where many technical details have been worked out in a similar instance. For example, the expansion of the sum that we need is derived there. We combine all the expansions: \begin{equation*}
\Big(6(1-v)+\cdots\Big)\Big(-\frac{\log(1-v)}{1-v}+\frac{\gamma}{1-v}+\cdots\Big)-9+6(1-v)+\cdots=-6\log(1-v)+\cdots \end{equation*} Translating this into the $z$-world means $1-v\sim \sqrt3\sqrt{1-3z}$, and then \begin{equation*}
[z^n]\Big(-6\log(1-v)\Big)\sim [z^n]\Big(-6\log\sqrt{1-3z}\Big)=3\frac{3^n}{n}=\frac{3^{n+1}}{n}. \end{equation*} The total number of Motzkin paths is \begin{equation*}
[z^n]\frac{1-z-\sqrt{1-2z-3z^2}}{2z^2}\sim [z^n]\Big(3-3\sqrt3\sqrt{1-3z}\Big)\sim 3\sqrt3\frac{3^{n}}{2\sqrt\pi n^{3/2}}. \end{equation*} Taking quotients, we get the average amplitude of random Motzkin paths of length $n$, which is asymptotic to \begin{equation*}
2\sqrt{\frac{\pi n}{3}}. \end{equation*} This is about twice as much as the average height of random Motzkin paths, which is what we expected.
Now we consider \begin{align*}
\sum_{h\ge0}&\text{No-Horiz}_h(z)=
(1+v+v^2)(1-v^{-2})\sum_{h\ge0}\bigg[\frac{v^{2h+3}}{1-v^{2h+3}}-\frac{v^{2h+2}}{1-v^{2h+2}}\bigg]\\
&=(1+v+v^2)(1-v^{-2})\sum_{h\ge1}\bigg[\frac{v^{2h-1}}{1-v^{2h-1}}-\frac{v^{2h}}{1-v^{2h}}\bigg]
+\frac{(1+v)(1+v+v^2)}{v}\\
&=(1+v+v^2)(1-v^{-2})\sum_{h\ge1}\frac{v^{h}}{1-v^{h}}(-1)^{h-1}
+\frac{(1+v)(1+v+v^2)}{v}. \end{align*} This needs to be expanded about $v=1$: \begin{equation*}
(1+v+v^2)(1-v^{-2})\sim-6(1-v)-3(1-v)^2+\cdots; \end{equation*} \begin{equation*}
\frac{(1+v)(1+v+v^2)}{v}\sim6-3(1-v)+\cdots; \end{equation*} for the remaining sum we need the Mellin transform \cite{FGD}. Set $v=e^{-t}$, and transform \begin{equation*}
\mathscr{M}\sum_{h\ge1}\frac{v^{h}}{1-v^{h}}(-1)^{h-1}
=\mathscr{M}\sum_{h,k\ge1}e^{-thk}(-1)^{h-1}=
\Gamma(s)\zeta(s)^2(1-2^{1-s}). \end{equation*} By the Mellin inversion formula: \begin{equation*}
\sum_{h\ge1}\frac{v^{h}}{1-v^{h}}(-1)^{h-1}=
\frac1{2\pi i}\int_{2-i\infty}^{2+i\infty}\Gamma(s)\zeta(s)^2(1-2^{1-s})t^{-s}ds. \end{equation*} The line of integration will be shifted to the left, and the collected residues constitute the expansion about $t=0$: \begin{align*}
\sum_{h\ge1}\frac{v^{h}}{1-v^{h}}(-1)^{h-1}\sim \frac{\log 2}{t}-\frac14+\frac{t}{48}\sim\frac{\log2}{1-v}-\frac{\log2}{2}-\frac14+\cdots. \end{align*} Combining, \begin{equation*}
\sum_{h\ge0}\text{No-Horiz}_h(z)\sim6-6\log 2-\frac32(1-v)+\cdots. \end{equation*} But \begin{equation*}
\frac{1-z-\sqrt{1-2z-3z^2}}{2z^2}=1+v+v^2\sim 3-3(1-v)+\cdots. \end{equation*} Comparing the coefficients of $1-v$, we find that asymptotically about half of the Motzkin paths belong to the `no-horizontal' class and about half to the `horizontal' class. Again, this is intuitively clear once one sees the generating functions of both families.
\section{Oscillations in Dyck paths revisited}
Rainer Kemp's paper \cite{Kemp-oscillations} was unfortunately largely overlooked. An extension was published quickly \cite{KP-hyper}, and then it fell into oblivion. We want to come back to this gem, with modern methods, in particular, the kernel method and singularity analysis. Kemp was interested in peaks and valleys of Dyck paths, which he called \textsc{max}-turns and \textsc{min}-turns, probably motivated by Computer Science applications. The peaks/valleys are enumerated from left to right, and the quantity of interest is their height, for a fixed index $m$ and the length of the Dyck path going to infinity. In the corresponding ordered (plane) tree, the peaks correspond to the leaves.
Very precise information is available for leaves of binary trees \cite{Kirschen-leaves, Gutjahr, PP1, PP2}, but the situation is a bit different for Dyck paths since the number of peaks/valleys isn't directly related to the length of the Dyck path. (Narayana numbers enumerate them.) Kemp's results in a nutshell are: the average height of the $m$-th peak/valley tends to a constant as $n$, the length of the random Dyck path, goes to infinity. For large $m$, this constant behaves like $4\sqrt{2m/\pi}$, and the difference between the heights of the peak and the valley is about 2, with more terms being available in principle.
\subsection*{A trivariate generating function for heights of valleys}
The goal is to derive an expression for $\Phi(u)=\Phi(u;z,w)$, where $z$ is used for the length of the path, $w$ for the enumeration of the valleys ($w^m$ corresponds to the $m$-th valley), and $u$ is used to record the height of the $m$-th (and last) valley of a partial Dyck path (the path does not need to return to the $x$-axis). We may think of the path being continued in any possible fashion, as in the following figure.
\begin{figure}
\caption{The third valley at level $j$.}
\end{figure}
Our goal is, as often, to use the adding-a-new-slice technique, namely adding another `mountain' (a maximal sequence of up-steps, followed by a maximal sequence of down-steps), or going from the $m$-th valley to the $(m+1)$-st valley.
We investigate what can happen to $u^j$: \begin{equation*}
\sum_{l\ge1}\sum_{i=1}^{j+l}z^lu^{j+l}z^iu^{-i}. \end{equation*} Working this out, the following substitution is essential for our problem: \begin{equation*}
u^j\longrightarrow \frac{z^{2}u}{(u-z)(1-zu)}u^j-\frac{z^{3}}{(u-z)(1-z^{2})}z^j. \end{equation*} Working this into a generating function of the type \begin{equation*}
\Phi(u)=\sum_{m\ge0}w^m\varphi_m(u), \end{equation*} where the variable $w$ is used to keep track of the number of mountains, we find from the substitution \begin{equation*}
\Phi(u)=1+\frac{wz^{2}u}{(u-z)(1-zu)}\Phi(u)-\frac{wz^{3}}{(u-z)(1-z^{2})}\Phi(z), \end{equation*} where 1 stands for the empty path having no mountains. Rearranging, \begin{equation*}
\Phi(u)\frac{z(u-s_1)(u-s_2)}{(u-z)(zu-1)}=1-\frac{wz^{3}}{(u-z)(1-z^{2})}\Phi(z), \end{equation*} and \begin{equation*}
\Phi(u)=\frac{(zu-1)}{z(u-s_1)(u-s_2)}\bigg[u-z-\frac{wz^{3}}{(1-z^{2})}\Phi(z)\bigg]. \end{equation*} Here, \begin{equation*}
s_2={\frac {{z}^{2}+1-w{z}^{2}-\sqrt {{z}^{4}-2\,{z}^{2}-2\,{z}^{4}
w+1-2\,w{z}^{2}+{w}^{2}{z}^{4}}}{2z}}\quad\text{and}\quad s_1=\frac1{s_2}. \end{equation*} In the spirit of the kernel method, the factor $u-s_2$ is `bad' and must cancel out. That leads first to \begin{equation*}
\Phi(z)=\frac{(1-z^2)(s_2-z)}{wz^3} \end{equation*} and further to \begin{equation*}
\Phi(u)=\frac{(zu-1)}{z(u-s_1)}=\frac{s_2(1-zu)}{z(1-us_2)}. \end{equation*} From this it is easy to read off coefficients: \begin{equation*}
[u^j]\Phi(u)=[u^j]\frac{s_2(1-zu)}{z(1-us_2)}=\frac1zs_2^{j+1}-s_2^{j}. \end{equation*} Note that setting $w=1$ ignores the number of mountains; the generating function then enumerates partial Dyck paths ending on level $j$ with a down-step. The answer could then be derived by combinatorial means as well.
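This combinatorial reading of the case $w=1$ can be tested directly; the following Python sketch (illustration only) compares the coefficients of $\frac1z s_2^{j+1}-s_2^j$ at $w=1$ with a count of nonnegative lattice paths ending at level $j$ with a down-step.

```python
N = 16

def mul(a, b):
    c = [0] * (N + 1)
    for i, x in enumerate(a):
        if x:
            for j in range(N + 1 - i):
                c[i + j] += x * b[j]
    return c

def power(a, m):
    r = [0] * (N + 1)
    r[0] = 1
    for _ in range(m):
        r = mul(r, a)
    return r

# at w = 1: s2 = (1 - sqrt(1-4z^2))/(2z) = sum_m Catalan(m) z^(2m+1)
cat = [1]
for m in range(N // 2):
    cat.append(sum(cat[i] * cat[m - i] for i in range(m + 1)))
s2 = [0] * (N + 1)
for m in range(N // 2 + 1):
    if 2 * m + 1 <= N:
        s2[2 * m + 1] = cat[m]

def coeff(n, j):
    """[z^n] of s2^(j+1)/z - s2^j (needs n + 1 <= N)."""
    return power(s2, j + 1)[n + 1] - power(s2, j)[n]

# brute force: walks with steps +-1 from 0 staying >= 0; w[n][l] = count ending at l
w = [{0: 1}]
for _ in range(N):
    new = {}
    for l, c in w[-1].items():
        for d in (1, -1):
            if l + d >= 0:
                new[l + d] = new.get(l + d, 0) + c
    w.append(new)

def paths_ending_down(n, j):
    """Nonnegative walks of length n ending at level j whose last step goes down."""
    return w[n - 1].get(j + 1, 0) if n >= 1 else 0
```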
For Kemp's problem, we need \begin{equation*}
S=\sum_{j\ge0}j\Big(\frac1zs_2^{j+1}-s_2^{j}\Big)\cdot \Big(\frac1zs_2^{j+1}-s_2^{j}\Big)\Big|_{w=1}. \end{equation*} The factor $j$ records the height of the valley whose average value we want, the second factor is what we just worked out, and the last factor describes the rest of the path, which, when read from right to left, is just what we discussed; the number of mountains in the rest is irrelevant, whence the evaluation at $w=1$. Thanks to Computer Algebra (not available when Kemp worked on the oscillations), we get \begin{equation*}
S=4 {\frac { \left( -3 z+W_1 z-W_1+1 \right) \left( -W_2+wzW_2+1+{z}^{2}{w}^{2}-w{z}^{2}-2 wz-z \right) }{z
\left( -3 z-W_1 z+1-W_1-wz+wzW_1-W_2+W_2 W_1 \right) ^{2}}} \end{equation*} with \begin{align*}
W_1&=\sqrt{1-4z}\quad\text{and}\quad
W_2=\sqrt{z^2-2z-2z^2w+1-2wz+w^2z^2}. \end{align*} Note carefully that $z^2$ was replaced by $z$, since Dyck paths (returning to the $x$-axis) must have an even number of steps. Their enumeration is classical: \begin{equation*}
D(z)=\frac{1-\sqrt{1-4z}}{2z}\sim 2-2\sqrt{1-4z}, \end{equation*} for $z$ close to the (dominant) singularity $z=\frac14$. We are in the regime of the subcritical case (\cite{FS}, Section IX-3). The function $S$ has a similar local expansion: \begin{equation*}
S\sim \mathsf{constant}_1-\mathsf{constant}_2\sqrt{1-4z}, \end{equation*} and the quotient $\frac{\mathsf{constant}_2}{2}$ of the two coefficients of $\sqrt{1-4z}$ is the resulting generating function. Working out the details, \begin{align*}
S&\sim{\frac {w+\sqrt { \left(1-w \right) \left( 9-w \right) }-3}{-1+w}}\\&
-\sqrt{1-4z}\bigg({\frac {{w}^{2}+2w-3+(1+w)\sqrt { \left( 1-w \right) \left( 9-w \right) }}{ \left( 1-w \right) ^{2}}}\bigg)+\cdots \end{align*} Eventually we are led to \begin{equation*}
\mathsf{Valley}(w):={\frac {{w}^{2}+2w-3+(1+w)\sqrt { \left( 1-w \right) \left(9-w \right) }}{ 2( 1-w ) ^{2}}}. \end{equation*} To say it again, the coefficient of $w^m$ in this is the average height of the $m$-th valley in a `very long' Dyck path. To say more about it, we can use singularity analysis again and expand (this time around $w=1$, which is dominant): \begin{equation*}
\mathsf{Valley}(w)\sim{\frac {2\sqrt {2}}{ \left( 1-w \right) ^{3/2}}}-\frac{2}{1-w}-{\frac {7}{8}} {\frac {\sqrt {2}}{\sqrt {1-w}}}. \end{equation*} The traditional translation theorems \cite{FlOd90} lead to the average value of the height of the $m$-th valley: \begin{equation*}
4\sqrt2\sqrt\frac{m}{\pi}-2+\frac{5\sqrt2}{8\sqrt{\pi m}}+\cdots. \end{equation*}
\subsection*{From valleys to peaks.}
\begin{figure}
\caption{The third peak at level $j$.}
\end{figure}
We don't need too many new computations, as we can modify the previous results. If one adds an arbitrary positive number of up-steps after the $m$-th valley, one has reached the $(m+1)$-st peak! This is basically a substitution! Start from \begin{equation*}
\Phi(u)=\frac{s_2(1-zu)}{z(1-us_2)} \end{equation*} and attach a sequence of up-steps: $u^j\to \frac{zu}{1-zu}u^j$. A factor $w$ is also important, since the $m$-th valley corresponds to the $(m+1)$-st peak. Now \begin{equation*}
\frac{zuw}{1-zu}\frac{s_2(1-zu)}{z(1-us_2)}=\frac{us_2w}{1-us_2}=w\sum_{j\ge1}u^js_2^j. \end{equation*} The computation \begin{equation*}
w\sum_{j\ge0}js_2^j\cdot s_2^j\Big|_{w=1} \end{equation*} was basically done before, and the local expansion leads to \begin{equation*}
\frac{2w}{1-w}-\frac{2w\sqrt{(1-w)(9-w)}}{(1-w)^2}\sqrt{1-4z}, \end{equation*} and the generating function of the average values of the $m$-th peak is \begin{equation*}
\mathsf{Peak}(w)=\frac{w\sqrt{(1-w)(9-w)}}{(1-w)^2}. \end{equation*} A local expansion of this results in \begin{equation*}
\mathsf{Peak}(w)\sim{\frac {2\sqrt {2}}{ \left( 1-w \right) ^{3/2}}}-{\frac {15}{8}}{\frac {\sqrt {2}}{\sqrt {1-w}}}. \end{equation*} Taking differences: \begin{equation*}
\mathsf{Peak}(w)-\mathsf{Valley}(w)\sim\frac{2}{1-w}-{\frac {\sqrt {2}}{\sqrt {1-w}}}, \end{equation*} and translating into asymptotics: \begin{equation*}
2-\frac{\sqrt2}{\sqrt{\pi m}}. \end{equation*} The formula $2+O(m^{-1/2})$ was already known to Kemp \cite{Kemp-oscillations}. As Kemp stated in \cite{Kemp-oscillations}, which was confirmed in \cite{KP-hyper}, the generating functions $\mathsf{Peak}(w)$ and $\mathsf{Valley}(w)$ can be expressed by Legendre polynomials at special values. This is a bit artificial and not too useful in itself.
\section{Deutsch-paths in a strip}
Emeric Deutsch \cite{Deutsch} had the idea to consider a variation of ordinary Dyck paths, by augmenting the usual up-steps and down-steps of one unit each by down-steps of size $3,5,7,\dots$. This leads to ternary equations, as can be seen for instance from \cite{Deutsch-ternary}.
The present author started to investigate a related but simpler model with down-steps of sizes $1,2,3,4,\dots$ (named Deutsch paths in honour of Emeric Deutsch) in a series of papers, \cite{Deutsch1,Deutsch-slice,Prodinger-fibo}.
This section is a further member of this series: the condition that the paths cannot enter negative territory (as with Dyck paths) is relaxed by introducing a negative boundary $-t$. Here are two recent publications about such a negative boundary: \cite{Selkirk-master} and \cite{jcmcc}.
Instead of allowing negative altitudes, we think about the whole system shifted up by $t$ units, and start at the point $(0,t)$ instead. This is much better for the generating functions that we are going to investigate. Eventually, the results can be re-interpreted as results about enumerations with respect to a negative boundary.
The setting with flexible initial level $t$ and final level $j$ allows us to consider the Deutsch paths also from right to left (they are not symmetric!), without any new computations.
The next section achieves this, using the celebrated kernel method, one of the tools that is dear to our heart \cite{Prodinger-kernel}.
In the following section, an additional upper bound is introduced, so that the Deutsch paths now live in a strip. The way to attack this is linear algebra. Once everything has been computed, one can relax the conditions and let the lower/upper boundary go to $\mp\infty$.
\subsection*{Generating functions and the kernel method}
As discussed, we consider Deutsch paths starting at $(0,t)$ and ending at $(n,j)$, for $n,t,j\ge0$. First we consider univariate generating functions $f_j(z)$, where $z^n$ stands for $n$ steps done, and $j$ is the final destination. The recursion is immediate: \begin{equation*}
f_j(z)=[\![t=j]\!]+zf_{j-1}(z)+z\sum_{k>j}f_k(z), \end{equation*} where $f_{-1}(z)=0$. Next, we consider \begin{equation*}
F(z,u):=\sum_{j\ge0}f_j(z)u^j, \end{equation*} and get \begin{align*}
F(z,u)&=u^t+zuF(z,u)+z\sum_{j\ge0}u^j\sum_{k>j}f_k(z)\\
&=u^t+zuF(z,u)+z\sum_{k>0}f_k(z)\sum_{0\le j<k}u^j\\
&=u^t+zuF(z,u)+z\sum_{k\ge0}f_k(z)\frac{1-u^k}{1-u}\\
&=u^t+zuF(z,u)+\frac z{1-u}[ F(z,1)-F(z,u)]\\
&=\frac{u^t(1-u)+zF(z,1)}{z-zu+zu^2+1-u}. \end{align*} Since the critical value is around $u=1$, we write the denominator as \begin{equation*}
z(u-1)^2+(u-1)(z-1)+z=z(u-1-r_1)(u-1-r_2), \end{equation*} with \begin{align*}
r_1=\frac{1-z+\sqrt{1-2z-3z^2}}{2z},\quad r_2=\frac{1-z-\sqrt{1-2z-3z^2}}{2z}. \end{align*}
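A small sanity check (not part of the derivation): under the substitution $z=\frac{v}{1+v+v^2}$ used below, the discriminant $1-2z-3z^2$ becomes the perfect square $\big(\frac{1-v^2}{1+v+v^2}\big)^2$, and the two kernel roots collapse to $r_1=\frac1v$, $r_2=v$. This can be verified exactly at any rational test value of $v$:

```python
from fractions import Fraction as F

v = F(1, 7)                       # arbitrary rational test value
z = v / (1 + v + v * v)           # the substitution z = v/(1+v+v^2)
disc = 1 - 2 * z - 3 * z * z
root = (1 - v * v) / (1 + v + v * v)   # claimed square root of disc
r1 = (1 - z + root) / (2 * z)
r2 = (1 - z - root) / (2 * z)
```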
The factor $(u-1-r_2)$ is bad, so the numerator must vanish at $u=1+r_2$, i.e., $[u^t(1-u)+zF(z,1)]\big|_{u=1+r_2}=0$; therefore \begin{equation*}
zF(z,1)=(1+r_2)^tr_2. \end{equation*} Furthermore \begin{equation*}
F(z,u)=
\frac{\frac{u^t(1-u)+zF(z,1)}{u-1-r_2}}{z(u-1-r_1)}. \end{equation*} The expressions become prettier using the substitution $z=\frac{v}{1+v+v^2}$; then \begin{equation*}
r_1=\frac{1}{v},\quad r_2=v. \end{equation*} It can be proved by induction (or computer algebra) that \begin{equation*}
\frac{u^t(1-u)+v(1+v)^t}{u-1-v}=-v\sum_{k=0}^{t-1}(1+v)^{t-1-k}u^k-u^t. \end{equation*} Furthermore \begin{equation*}
\frac1{z(u-1-r_1)}=-\frac1{z(1+r_1)(1-\frac{u}{1+r_1})}, \end{equation*} and so \begin{equation*}
f_j(z)=[u^j]F(z,u)=[u^j]\biggl[v\sum_{k=0}^{t-1}(1+v)^{t-1-k}u^k+u^t\biggr]\sum_{\ell\ge0}\frac{u^{\ell}}{z(1+r_1)^{\ell+1}}. \end{equation*} Of interest are two special cases: The case that was studied before \cite{Deutsch1} is $t=0$: \begin{equation*}
f_j=\frac{(1+v+v^2)v^{j}}{(1+v)^{j+1}}. \end{equation*} The other special case is $j=0$ for general $t$, as it may be interpreted as Deutsch paths read from right to left, starting at level $0$ and ending at level $t\ge1$ (for $t=0$, the previous formula applies): \begin{align*}
f_0(z)&=[u^0]\biggl[v\sum_{k=0}^{t-1}(1+v)^{t-1-k}u^k+u^t\biggr]\sum_{\ell\ge0}\frac{u^{\ell}}{z(1+r_1)^{\ell+1}}\\
&=v(1+v)^{t-1}\frac{1}{z(1+r_1)}=v(1+v+v^2)(1+v)^{t-2}. \end{align*}
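Both special cases can be cross-checked against a direct count of Deutsch paths. The Python sketch below (an illustration, not part of the paper) counts paths with steps $+1$ and $-k$ ($k\ge1$) by dynamic programming, substitutes the integer-coefficient series $z(v)=\frac{v(1-v)}{1-v^3}$, and compares with the closed forms for $f_j$ (case $t=0$) and $f_0$ (case $t\ge1$) as truncated series in $v$.

```python
N = 14  # series truncation order in v

def mul(a, b):
    c = [0] * N
    for i, ai in enumerate(a):
        if ai:
            for j in range(N - i):
                c[i + j] += ai * b[j]
    return c

# z(v) = v/(1+v+v^2) = v (1-v)/(1-v^3) has integer coefficients
inv = [0] * N                     # series of 1/(1+v+v^2)
for s in range(0, N, 3):
    inv[s] += 1
    if s + 1 < N:
        inv[s + 1] -= 1
zv = [0] + inv[:N - 1]            # multiply by v

invv = [(-1) ** n for n in range(N)]   # series of 1/(1+v)

def deutsch_counts(t, j, nmax):
    # a[n] = number of paths of n steps (+1 or -k, k >= 1) from level t
    # to level j, never going below 0
    a, dp = [], {t: 1}
    for _ in range(nmax + 1):
        a.append(dp.get(j, 0))
        nd = {}
        for h, c in dp.items():
            nd[h + 1] = nd.get(h + 1, 0) + c
            for h2 in range(h):
                nd[h2] = nd.get(h2, 0) + c
        dp = nd
    return a

def gf_in_v(t, j):
    # sum_n a_n z(v)^n as a truncated series in v
    a = deutsch_counts(t, j, N - 1)
    total, zp = [0] * N, [1] + [0] * (N - 1)
    for n in range(N):
        total = [total[i] + a[n] * zp[i] for i in range(N)]
        zp = mul(zp, zv)
    return total

def closed_t0(j):
    # f_j = (1+v+v^2) v^j / (1+v)^(j+1)
    res = [0] * N
    res[j] = 1
    res = mul(res, [1, 1, 1] + [0] * (N - 3))
    for _ in range(j + 1):
        res = mul(res, invv)
    return res

def closed_f0(t):
    # f_0 = v (1+v+v^2) (1+v)^(t-2), read as a series for t = 1 as well
    res = mul([0, 1] + [0] * (N - 2), [1, 1, 1] + [0] * (N - 3))
    factor = [1, 1] + [0] * (N - 2) if t >= 2 else invv
    for _ in range(abs(t - 2)):
        res = mul(res, factor)
    return res

ok_t0 = all(gf_in_v(0, j) == closed_t0(j) for j in range(3))
ok_f0 = all(gf_in_v(t, 0) == closed_f0(t) for t in (1, 2, 3))
```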
The next section will present a simplification of the expression for $f_j(z)$, which could be obtained directly by distinguishing cases and summing some geometric series.
\subsection*{Refined analysis: lower and upper boundary}
Now we consider Deutsch paths bounded from below by zero and bounded from above by $m-1$; they start at level $t$ and end at level $j$ after $n$ steps. For that, we use generating functions $\varphi_j(z)$ (the quantity $t$ is a silent parameter here). The recursions, which are straightforward, are best organized in a matrix, as the following example shows. \begin{equation*}
\left(\begin{matrix}
1&-z&-z&-z&-z&-z&-z&-z\\
-z& 1&-z&-z&-z&-z&-z&-z\\
0& -z& 1&-z&-z&-z&-z&-z\\
0& 0& -z& 1&-z&-z&-z&-z\\
0& 0& 0& -z& 1&-z&-z&-z\\
0& 0& 0&0& -z& 1&-z&-z\\
0& 0& 0&0&0& -z& 1&-z\\
0& 0& 0&0&0&0& -z& 1\\
\end{matrix}\right)
\left(\begin{matrix}
\varphi_0\\
\varphi_1\\
\varphi_2\\
\varphi_3\\
\varphi_4\\
\varphi_5\\
\varphi_6\\
\varphi_7\\
\end{matrix}\right)=
\left(\begin{matrix}
0\\
0\\
0\\
1\\
0\\
0\\
0\\
0\\
\end{matrix}\right)
\begin{tikzpicture}
\draw [](0,0)--(0,0);
\node at (0.5,1.3){\Bigg\}$t$};
\end{tikzpicture} \end{equation*}
The goal is now to solve this system. For that the substitution $z=\frac{v}{1+v+v^2}$ is used throughout. The method is to use Cramer's rule, which means that the right-hand side has to replace various columns of the matrix, and determinants have to be computed. At the end, one has to divide by the determinant of the system.
Let $D_m$ be the determinant of the matrix with $m$ rows and columns. The recursion \begin{equation*}
(1+v+v^2)^2D_{m+2}-(1+v+v^2)(1+v)^2D_{m+1}+v(1+v)^2D_{m}=0 \end{equation*} appeared already in \cite{Deutsch1} and is not difficult to derive and to solve: \begin{equation*}
D_m=\frac{(1+v)^{m-1}}{(1+v+v^2)^m}\frac{1-v^{m+2}}{1-v}. \end{equation*}
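Since $D_m$ is a rational function of $v$, the closed form can be verified exactly by evaluating both sides at a rational test value. The following Python sketch (illustration only) builds the band matrix from the example above and compares its exact determinant with the formula for small $m$.

```python
from fractions import Fraction as F

def det(mat):
    # exact determinant by rational Gaussian elimination
    m = [row[:] for row in mat]
    n, d = len(m), F(1)
    for c in range(n):
        p = next((r for r in range(c, n) if m[r][c] != 0), None)
        if p is None:
            return F(0)
        if p != c:
            m[c], m[p] = m[p], m[c]
            d = -d
        d *= m[c][c]
        for r in range(c + 1, n):
            f = m[r][c] / m[c][c]
            if f:
                for k in range(c, n):
                    m[r][k] -= f * m[c][k]
    return d

v = F(1, 3)                       # any rational test value works
z = v / (1 + v + v * v)

def band_matrix(m):
    # 1 on the diagonal, -z just below it and everywhere above it
    return [[F(1) if j == i else (-z if (j == i - 1 or j > i) else F(0))
             for j in range(m)] for i in range(m)]

def closed_Dm(m):
    return ((1 + v) ** (m - 1) / (1 + v + v * v) ** m) * (1 - v ** (m + 2)) / (1 - v)

match = all(det(band_matrix(m)) == closed_Dm(m) for m in range(1, 9))
```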
To solve the system with Cramer's rule, we must compute a determinant of the following type, \begin{center}\small
\begin{tikzpicture}
[scale=0.4]
\draw (0,0)--(5,0)--(5,-3)--(0,-3)--(0,0);
\node at (2.5,0.9){$j$};
\draw[<->](0,0.5)--(5,0.5);
\draw[<->](-0.5,0)--(-0.5,-3);
\node at (-0.9,-1.5){$t$};
\node at (5.5,-3.5){$\boldsymbol{1}$};
\newcommand\x{6};\draw (0+\x,0)--(8+\x,0)--(8+\x,-3)--(0+\x,-3)--(0+\x,0);
\newcommand\y{-4}; \draw (0,0+\y)--(5,0+\y)--(5,-7+\y)--(0,-7+\y)--(0,0+\y);
\draw (0+\x,0+\y)--(8+\x,0+\y)--(8+\x,-7+\y)--(0+\x,-7+\y)--(0+\x,0+\y);
\draw[<->](0,-11.5)--(14,-11.5);
\draw[<->](14.5,0)--(14.5,-11);
\node at (7,-11.9){$m$};
\node at (15.0,-5.5){$m$};
\foreach \x in {0,1,2,3,4}
{
\node at (5.5,-\x/1.5){$\tiny\boldsymbol{0}$};
}
\foreach \x in {6.5,7.5,8.5,9.5,10.5,11.5,12.5,13.5,14.5,15.5,16.5}
{
\node at (5.5,-\x/1.5){$\tiny\boldsymbol{0}$};
}
\end{tikzpicture} \end{center} where the various columns are replaced by the right-hand side. While it is not impossible to do these computations by hand, it is very easy to make mistakes, so it is best to employ a computer. Let $D(m;t,j)$ be the determinant according to the drawing.
It is not unexpected that the results are different for $j<t$ resp.\ $j\ge t$. Here is what we found: \begin{equation*}
D(m;t,j)=\frac{(1+v)^{t-j-3+m}(1-v^{j+1})v(1-v^{m-t})}{(1-v)^2(1+v+v^2)^{m-1}},\quad\text{for}
\ j<t, \end{equation*} \begin{equation*}
D(m;t,j)=\frac{v^{j-t}(1-v^{t+2})(1-v^{1-j+m})}{(1-v)^2(1+v+v^2)^{m-1}(1+v)^{j-t+3-m}},\quad\text{for}
\ j\ge t. \end{equation*} To solve the system, we have to divide by the determinant $D_m$, with the result \begin{equation*}
\varphi_j= \frac{D(m;t,j)}{D_m}=
\frac{(1+v)^{t-j-2}(1-v^{j+1})v(1-v^{m-t})(1+v+v^2)}{(1-v)(1-v^{m+2})}
,\quad\text{for}
\ j<t, \end{equation*} \begin{equation*}
\varphi_j= \frac{D(m;t,j)}{D_m}=\frac{v^{j-t}(1-v^{t+2})(1-v^{1-j+m})(1+v+v^2)}{(1-v)(1+v)^{j-t+2}(1-v^{m+2})}
,\quad\text{for}
\ j\ge t. \end{equation*} We found all this using computer algebra. Some critical minds may argue that this is only experimental. One way of rectifying this would be to show that indeed the functions $\varphi_j$ solve the system, which consists of summing various geometric series; again, a computer could be helpful for such an enterprise.
Of interest are also the limits for $m\to\infty$, i.e., no upper boundary: \begin{equation*}
\varphi_j=\lim_{m\to\infty}\frac{D(m;t,j)}{D_m}=
\frac{(1+v)^{t-j-2}(1-v^{j+1})v(1+v+v^2)}{(1-v)}
,\quad\text{for}
\ j<t, \end{equation*} \begin{equation*}
\varphi_j=\frac{v^{j-t}(1-v^{t+2})(1+v+v^2)}{(1-v)(1+v)^{j-t+2} }
,\quad\text{for}
\ j\ge t. \end{equation*} The special case $t=0$ appeared already in the previous section: \begin{equation*}
\varphi_j=\frac{v^{j}(1+v+v^2)}{(1+v)^{j+1} }. \end{equation*} Likewise, for $t\ge1$, \begin{equation*}
\varphi_0=v(1+v+v^2)(1+v)^{t-2}. \end{equation*} In particular, the formul\ae\ show that the expression from the previous section can be simplified in general, which could have been seen directly, of course.
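An independent check of the strip formulas (an illustration, not part of the paper): the linear system can be solved exactly at a rational value of $v$ and compared entry by entry with the two closed forms for $\varphi_j$. The choices $m=6$, $t=3$ below are arbitrary.

```python
from fractions import Fraction as F

v = F(2, 5)                       # arbitrary rational test point
z = v / (1 + v + v * v)

def solve(mat, rhs):
    # exact Gauss-Jordan elimination over the rationals
    n = len(mat)
    a = [row[:] + [rhs[i]] for i, row in enumerate(mat)]
    for c in range(n):
        p = next(r for r in range(c, n) if a[r][c] != 0)
        a[c], a[p] = a[p], a[c]
        piv = a[c][c]
        a[c] = [x / piv for x in a[c]]
        for r in range(n):
            if r != c and a[r][c]:
                f = a[r][c]
                a[r] = [x - f * y for x, y in zip(a[r], a[c])]
    return [a[r][n] for r in range(n)]

def phi_closed(m, t, j):
    if j < t:
        return ((1 + v) ** (t - j - 2) * (1 - v ** (j + 1)) * v * (1 - v ** (m - t))
                * (1 + v + v * v)) / ((1 - v) * (1 - v ** (m + 2)))
    return (v ** (j - t) * (1 - v ** (t + 2)) * (1 - v ** (1 - j + m))
            * (1 + v + v * v)) / ((1 - v) * (1 + v) ** (j - t + 2) * (1 - v ** (m + 2)))

m, t = 6, 3
mat = [[F(1) if j == i else (-z if (j == i - 1 or j > i) else F(0))
        for j in range(m)] for i in range(m)]
rhs = [F(1) if i == t else F(0) for i in range(m)]   # the 1 sits in row t
phi = solve(mat, rhs)
match = all(phi[j] == phi_closed(m, t, j) for j in range(m))
```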
\begin{theorem}
The generating function of Deutsch paths with lower boundary $0$ and upper boundary $m-1$, starting at $(0,t)$ and ending at $(n,j)$, is given by
\begin{gather*}
\frac{(1+v)^{t-j-2}(1-v^{j+1})v(1-v^{m-t})(1+v+v^2)}{(1-v)(1-v^{m+2})}
,\quad\text{for}
\ j<t,\\
\frac{v^{j-t}(1-v^{t+2})(1-v^{1-j+m})(1+v+v^2)}{(1-v)(1+v)^{j-t+2}(1-v^{m+2})}
,\quad\text{for}
\ j\ge t,
\end{gather*}
with the substitution $z=\dfrac{v}{1+v+v^2}$. \end{theorem} By shifting everything down, we can interpret the results as Deutsch paths between boundaries $-t$ and $m-1-t$, starting at the origin $(0,0)$ and ending at $(n,j-t)$.
\begin{theorem}
The generating function of Deutsch paths with lower boundary $-t$ and upper boundary $h$, starting at $(0,0)$ and ending at $(n,i)$ with $-t\le i\le h$, is given by
\begin{gather*}
\frac{(1+v)^{i-2}(1-v^{i+t+1})v(1-v^{h+1})(1+v+v^2)}{(1-v)(1-v^{h+t+3})}
,\quad\text{for}
\ i<0,\\
\frac{v^{i}(1-v^{t+2})(1-v^{2-i+h})(1+v+v^2)}{(1-v)(1+v)^{i+2}(1-v^{h+t+3})}
,\quad\text{for}
\ i\ge 0.
\end{gather*} \end{theorem} It is possible to consider the limits $t\to\infty$ and/or $h\to\infty$ resulting in simplified formul\ae. \begin{theorem}
The generating function of Deutsch paths with lower boundary $-t$ and upper boundary $\infty$, starting at $(0,0)$ and ending at $(n,i)$ with $-t\le i$, is given by
\begin{gather*}
\frac{(1+v)^{i-2}(1-v^{i+t+1})v(1+v+v^2)}{(1-v)}
,\quad\text{for}
\ i<0,\\
\frac{v^{i}(1-v^{t+2})(1+v+v^2)}{(1-v)(1+v)^{i+2}}
,\quad\text{for}
\ i\ge 0.
\end{gather*} \end{theorem} \begin{theorem}
The generating function of Deutsch paths with lower boundary $-\infty$ and upper boundary $h$, starting at $(0,0)$ and ending at $(n,i)$ with $i\le h$, is given by
\begin{gather*}
\frac{(1+v)^{i-2}v(1-v^{h+1})(1+v+v^2)}{(1-v)}
,\quad\text{for}
\ i<0,\\
\frac{v^{i}(1-v^{2-i+h})(1+v+v^2)}{(1-v)(1+v)^{i+2}}
,\quad\text{for}
\ i\ge 0.
\end{gather*} \end{theorem} \begin{theorem}
The generating function of unbounded Deutsch paths starting at $(0,0)$ and ending at $(n,i)$ is given by
\begin{gather*}
\frac{(1+v)^{i-2}v(1+v+v^2)}{(1-v)}
,\quad\text{for}
\ i<0,\\
\frac{v^{i}(1+v+v^2)}{(1-v)(1+v)^{i+2}}
,\quad\text{for}
\ i\ge 0.
\end{gather*} \end{theorem}
\section{Publication status of the problems treated in this walk in the garden} \begin{itemize}
\item The section on Hoppy walks is new; an earlier version appeared as\\
arXiv:2009.13474
\item The section on Combinatorics of sequence A002212 is new; an earlier version appeared as
arXiv:2106.14782
\item The section on Binary trees and Horton-Strahler numbers is classical; it was included since it can be used almost verbatim for unary-binary trees when employing the proper substitution
\item The section on marked ordered trees is new and basically included in\\ arXiv:2106.14782
\item The bijection between multi-edge trees and 3-coloured Motzkin paths is new; early version in arXiv:2105.03350
\item The section on combinatorics of skew Dyck paths was submitted to a journal, and no answer was received; current version in arXiv:2108.09785
\item The short section on ternary trees was, after a waiting period of 15 months, accepted by a journal in December 2021.
\item The short analysis of Retakh's Motzkin paths appeared in ECA.
\item The amplitude of Motzkin paths was submitted to a journal; no answer.
\item The section on oscillations in honour of Rainer Kemp was written for this survey.
\item The enumeration of Deutsch-paths in a strip was exclusively written for this survey article;
arXiv:2108.12797
\end{itemize}
\end{document}
\begin{document}
\title{Controller synthesis for MDPs and \\ Frequency LTL$_{\setminus {\ensuremath{\mathbf{G}}}{\ensuremath{\mathbf{U}}}}$}
\author{Vojt\v{e}ch~Forejt\inst{1} \and Jan Kr\v{c}\'al\inst{2} \and Jan K\v{r}et\'insk\'y\inst{3}} \institute{Department of Computer Science, University of Oxford, UK\and Saarland University -- Computer Science, Saarbr\"{u}cken, Germany \and IST Austria}
\maketitle
\begin{abstract} Quantitative extensions of temporal logics have recently attracted significant attention. In this work, we study frequency LTL (fLTL), an extension of LTL which allows one to speak about frequencies of events along an execution.
Such an extension is particularly useful for probabilistic systems that often cannot fulfil strict qualitative guarantees on the behaviour.
It has been recently shown that controller synthesis for Markov decision processes and fLTL is decidable when all the bounds on frequencies are $1$. As a step towards a complete quantitative solution, we show that the problem is decidable for the fragment {\textrm{fLTL}\textsubscript{$\setminus\G\U$}}, where ${\ensuremath{\mathbf{U}}}$ does not occur in the scope of ${\ensuremath{\mathbf{G}}}$ (but still ${\ensuremath{\mathbf{F}}}$ can).
Our solution is based on a novel translation of such quantitative formulae into equivalent deterministic automata. \end{abstract}
\section{Introduction}
Markov decision processes (MDP) are a common choice when modelling systems that exhibit (un)controllable and probabilistic behaviour. In controller synthesis of MDPs, the goal is then to steer the system so that it meets certain property.
Many properties specifying the desired behaviour, such as ``the system is always responsive'' can be easily captured by
Linear Temporal Logic (LTL).
This logic is in its nature qualitative and cannot express \emph{quantitative} linear-time properties such as ``a given failure happens only {\em rarely}''.
To overcome this limitation, especially apparent for stochastic systems, extensions of LTL with \emph{frequency} operators have been recently studied~\cite{BDL-tase12,BMM14}.
Such extensions come at a cost, and for example the ``frequency until'' operator can make the controller-synthesis problem undecidable already for non-stochastic systems~\cite{BDL-tase12,BMM14}.
It turns out~\cite{our-concur,DBLP:journals/corr/abs-1111-3111,AT12} that a way of providing significant added expressive power while preserving tractability is to extend LTL only by the ``frequency globally'' formulae $\Gf{\geq p} \varphi$. Such a formula is satisfied if the long-run frequency of satisfying $\varphi$ on an infinite path is at least $p$.
More formally, $\Gf{\geq p} \varphi$ is true on an infinite path $s_0s_1 \cdots$ of an MDP if and only if $\frac{1}{n}\cdot |\{i \mid \text{$i < n$ and $s_i s_{i+1} \cdots$ satisfies $\varphi$}\}|$ is at least $p$ as $n$ tends to infinity. Because the relevant limit might not be defined, we need to consider two distinct operators, $\Gf{\geq p}_{\inf}$ and $\Gf{\geq p}_{\sup}$, whose definitions use limit inferior and limit superior, respectively. We call the resulting logic \emph{frequency LTL (fLTL)}.
So far, MDP controller synthesis for fLTL has been shown decidable for the fragment containing only the operator $\Gf{\geq 1}_{\inf}$~\cite{our-concur}.
Our paper makes a significant further step towards the ultimate goal of a model checking procedure for the whole fLTL. We address the general \emph{quantitative} setting with arbitrary frequency bounds $p$ and consider the fragment {\textrm{fLTL}\textsubscript{$\setminus\G\U$}}{}, which is obtained from frequency LTL by preventing the ${\ensuremath{\mathbf{U}}}$ operator from occurring inside ${\ensuremath{\mathbf{G}}}$ or $\Gf{\geq p}$ formulas (but still allowing the ${\ensuremath{\mathbf{F}}}$ operator to occur anywhere in the formula).
The approach we take is completely different from~\cite{our-concur} where ad hoc product MDP construction is used, heavily relying on existence of certain types of strategies in the $\Gf{\geq 1}_{\inf}$ case.
In this paper we provide, to the best of our knowledge, the first translation of a quantitative logic to equivalent \emph{deterministic} automata.
This allows us to take the standard automata-theoretic approach to verification~\cite{VW86}: after obtaining the finite automaton, we do not deal with the structure of the formula originally given, and we solve a (reasonably simple) synthesis problem on a product of the single automaton with the MDP.
Relations of various kinds of logics and automata are widely studied (see e.g.~\cite{Vardi96anautomata-theoretic,thomas1997languages,Droste}), and our results provide new insights into this area
for quantitative logics.
Previous work~\cite{AT12} offered only translation of a similar logic to {\em non-deterministic} ``mean-payoff B\"uchi automata'' noting that it is difficult to give an analogous reduction to {\em deterministic} ``mean-payoff Rabin automata''. The reason is that the non-determinism is inherently present in the form of guessing whether the subformulas of $\Gf{\geq p}$ are satisfied on a suffix.
Our construction overcomes this difficulty and offers equivalent deterministic automata. It is a first and highly non-trivial step towards providing a reduction for the complete logic.
Although our algorithm does not allow us to handle the extension of the whole LTL, the considered fragment {\textrm{fLTL}\textsubscript{$\setminus\G\U$}}{}
contains a large class of formulas and offers significant expressive power. It subsumes the GR(1) fragment of LTL \cite{BJPPS12}, which has found use in synthesis for hardware designs. The ${\ensuremath{\mathbf{U}}}$ operator, although not allowed within a scope of a ${\ensuremath{\mathbf{G}}}$ operator, can still be used for example to distinguish paths based on their prefixes. As an example synthesis problem expressible in this fragment,
consider a cluster of servers
where each server
plays either a role of a load-balancer or a worker. On startup, each server
\underline{l}istens for a message specifying its role. A load-
\underline{b}alancer
\underline{f}orwards each
\underline{r}equest and only waits for a
\underline{c}onfirmation whereas a
\underline{w}orker
\underline{p}rocesses the requests itself.
A specification for a single server in the cluster can require, for example, that the following formula (with propositions \underline{explained} above) holds with probability at least $0.95$: \[ \Big(\big(\mathit{l}\,{\ensuremath{\mathbf{U}}}\,\mathit{b}\big) \rightarrow \Gf{\geq 0.99}\big(r \rightarrow {\ensuremath{\mathbf{X}}} (f\wedge {\ensuremath{\mathbf{F}}} c)\big)\Big) \wedge \Big(\big(\mathit{l}\,{\ensuremath{\mathbf{U}}}\,\mathit{w}\big) \rightarrow \Gf{\geq 0.85}\big(r \rightarrow ({\ensuremath{\mathbf{X}}} p \vee {\ensuremath{\mathbf{X}}}{\ensuremath{\mathbf{X}}} p)\big)\Big) \]
\mypara{Related work.}
Frequency LTL was studied in another variant in~\cite{BDL-tase12,BMM14} where a {\em frequency until} operator is introduced in two different LTL-like logics, and undecidability is proved for problems relevant to our setting. The work \cite{BDL-tase12} also yields decidability with restricted nesting of the frequency until operator; as the decidable fragment in~\cite{BDL-tase12} does not contain frequency-globally operator, it is not possible to express many useful properties expressible in our logic.
A logic that speaks about frequencies on a finite interval was introduced in~\cite{DBLP:journals/corr/abs-1111-3111}, but the paper provides algorithms only for Markov chains and a bounded fragment of the logic.
Model checking MDPs against LTL objectives relies on the automata-theoretic approach, namely on translating LTL to automata that are to some extent deterministic~\cite{CY95}. This typically involves translating LTL to non-deterministic automata, which are then determinized using e.g. Safra's construction.
During the determinization, the original structure of the formula is lost, which prevents us from extending this technique to the frequency setting. However, an alternative technique of translating LTL directly to deterministic automata has been developed \cite{cav12,atva13,cav14}, where the logical structure is preserved. In our work, we extend the algorithm for LTL$_{\setminus{\ensuremath{\mathbf{G}}}{\ensuremath{\mathbf{U}}}}$ partially sketched in \cite{atva13}. In Section~\ref{sec:conclusions}, we explain why adapting the algorithm for full LTL \cite{cav14} is difficult. Translation of LTL$_{\setminus{\ensuremath{\mathbf{G}}}{\ensuremath{\mathbf{U}}}}$ to other kinds of automata has been considered also in \cite{DBLP:conf/tacas/KiniV15}.
Our technique relies on a solution of a multi-objective mean-payoff problem on MDP~\cite{BBC+14,lics15}. Previous results only consider limit inferior rewards, and so we cannot use them as off-the-shelf results, but need to adapt them first to our setting with both inferior and superior limits together with Rabin condition.
There are several works that combine mean-payoff objectives with e.g. logics or parity objectives, but in most cases only simple atomic propositions can be used to define the payoff \cite{bloem2009better,boker2011temporal,chatterjee2011energy}. The work \cite{baier2014weight} extends LTL with another form of quantitative operators, allowing accumulated weight constraint expressed using automata, again not allowing quantification over complex formulas. Further, \cite{ABK14} introduces a variant of LTL with a discounted-future operator. Finally, techniques closely related to the ones in this paper are used in \cite{EKVY08,CR15lics,RRS15cav}.
\mypara{Our contributions.} To the best of our knowledge, this paper gives the first decidability result for probabilistic verification against linear-time temporal logics extended by \emph{quantitative} frequency operators with \emph{complex nested subformulas} of the logic.
It works in two steps, keeping the same time complexity as for ordinary LTL.
In the first step, a ${\textrm{fLTL}\textsubscript{$\setminus\G\U$}}$ formula gets translated to an equivalent \emph{deterministic} generalized Rabin automaton extended with mean-payoff objectives.
This step is inspired by previous work~\cite{atva13},
but the extension with auxiliary automata for $\Gf{\geq p}$ requires a different construction.
The second step is the analysis of MDPs against conjunction of limit inferior mean-payoff, limit superior mean-payoff, and generalized Rabin objectives.
This result is obtained by adapting and combining several existing involved proof techniques~\cite{lics15,BCFK13}.
The paper is organised as follows:
the main algorithm is explained in Section~\ref{sec:alg}, relegating the details of the two technical steps above to Sections~\ref{sec:automata} and~\ref{sec:mean-payoff}.
\section{Preliminaries}
We use $\mathbb{N}$ and $\mathbb{Q}$ to denote the sets of non-negative integers and rational numbers.
The set of all distributions over a countable set $X$ is denoted by $\mathit{Dist}(X)$. For a predicate $P$, the {\em indicator function} $\mathds{1}_{P}$ equals $1$ if $P$ is true, and $0$ if $P$ is false.
\mypara{Markov decision processes (MDPs).} An MDP is a tuple $\mathsf{M}=(S,A,\mathit{Act},\delta,\hat s)$ where $S$ is a finite set of states, $A$ is a finite set of actions, $\mathit{Act} : S\rightarrow 2^A\setminus \{\emptyset\}$ assigns to each state $s$ the set $\act{s}$ of actions enabled in $s$,
$\delta : A\rightarrow \mathit{Dist}(S)$ is a probabilistic transition function that given an action $a$ gives a probability distribution over the successor states, and $\hat s$ is the initial state.
To simplify notation, w.l.o.g. we require that every action is enabled in exactly one state.
\mypara{Strategies.} A strategy in an MDP $\mathsf{M}$ is a ``recipe'' to choose actions. Formally, it is a function $\sigma : (SA)^*S \to \mathit{Dist}(A)$ that given a finite path~$w$, representing the history of a play, gives a probability distribution over the actions enabled in the last state.
A strategy $\sigma$ in $\mathsf{M}$ induces a \emph{Markov chain} $\mathsf{M}^\sigma$ which is a tuple $(L,P,\hat s)$ where the set of \emph{locations} $L=(S \times A)^*\times S$ encodes the history of the play, $\hat s$ is an \emph{initial location},
and $P$ is a \emph{probabilistic transition function} that assigns to each location a probability distribution over successor locations defined by \(
P(h)(h\,a\,s)\ =\
\sigma(h)(a)\cdot \delta(a)(s) \) for all $h\in (SA)^*S$, $a\in A$ and $s\in S$.
The probability space of the runs of the Markov chain is denoted by $\mathbb P^\sigma_\mathsf{M}$ and defined in the standard way \ifxa\undefined\cite{KSK76,techreport}\else \cite{KSK76}; for reader's convenience the construction is recalled in Appendix~\ref{app:mc}\fi.
\mypara{End components.} A tuple $(T,B)$ with $\emptyset\neq T\subseteq S$ and $B\subseteq \bigcup_{t\in T}\act{t}$ is an \emph{end component} of $\mathsf{M}$ if (1) for all $a\in B$, whenever $\delta(a)(s')>0$ then $s'\in T$; and (2) for all $s,t\in T$ there is a path $w = s_1 a_1\cdots a_{k-1} s_k$ such that $s_1 = s$, $s_k=t$, and all states and actions that appear in $w$ belong to $T$ and $B$, respectively. An end component $(T,B)$ is a \emph{maximal end component (MEC)} if it is maximal with respect to the componentwise subset ordering. Given an MDP, the set of MECs is denoted by $\mathsf{MEC}$. Finally, an MDP is \emph{strongly connected} if $(S,A)$ is a MEC.
\mypara{Frequency linear temporal logic (fLTL).}
The formulae of the logic
fLTL are given by the following syntax: \[ \varphi\quad\mathop{::=}\quad {\ensuremath{\mathbf{tt}}} \mid {\ensuremath{\mathbf{ff}}} \mid a\mid \neg a\mid \varphi\wedge\varphi \mid \varphi\vee\varphi \mid {\ensuremath{\mathbf{X}}}\varphi \mid {\ensuremath{\mathbf{F}}}\varphi \mid {\ensuremath{\mathbf{G}}}\varphi \mid \varphi{\ensuremath{\mathbf{U}}}\varphi \mid \Gf{\bowtie p}_{\mathrm{ext}} \varphi \] over a finite set $Ap$ of atomic propositions, ${\bowtie}\in\{\geq,>\}$, $p\in[0,1]\cap\mathbb{Q}$, and $\mathrm{ext} \in \{\inf,\sup \}$. A formula that is neither a conjunction, nor a disjunction is called \emph{non-Boolean}. The set of non-Boolean subformulas of $\varphi$ is denoted by ${\ensuremath{\mathsf{sf}}}(\varphi)$.
\mypara{Words and fLTL Semantics.} Let $w\in (2^{Ap})^\omega$ be an infinite word. The $i$th letter of $w$ is denoted $w[i]$, i.e.~$w=w[0]w[1]\cdots$. We write $\infix w i j$ for the finite word $w[i] w[i+1] \cdots w[j]$, and $w^{i\infty}$ or just $\suffix w i$ for the suffix $w[i] w[i+1] \cdots $.
The semantics of a formula on a word $w$ is defined inductively: for ${\ensuremath{\mathbf{tt}}}$, ${\ensuremath{\mathbf{ff}}}$, $\wedge$, $\vee$, and for atomic propositions and their negations, the definition is straightforward, for the remaining operators we define:
$$
\begin{array}[t]{lcl} w \models {\ensuremath{\mathbf{X}}} \varphi & \iff & \suffix w1 \models \varphi \\ w \models {\ensuremath{\mathbf{F}}} \varphi & \iff & \exists \, k\in\mathbb{N}: \suffix wk \models \varphi \\ w \models {\ensuremath{\mathbf{G}}} \varphi & \iff & \forall \, k\in\mathbb{N}: \suffix wk \models \varphi \end{array} \quad \begin{array}[t]{lcl} w \models \varphi{\ensuremath{\mathbf{U}}} \psi & \iff & \begin{array}[t]{l} \exists \, k\in\mathbb{N}: \suffix wk \models \psi \text{ and } \\ \forall\, 0\leq j < k: \suffix wj\models \varphi \end{array}\\ w \models \Gf{\bowtie p}_{\mathrm{ext}} \varphi & \iff & \mathrm{lr}_{\mathrm{ext}}(\mathds{1}_{\suffix w0\models\varphi}\mathds{1}_{\suffix w1\models\varphi}\cdots) \bowtie p \end{array}$$
where we set $\mathrm{lr}_{\mathrm{ext}}(q_1q_2\cdots) := \limext_{i\to \infty} \frac{1}{i} \sum_{j=1}^i q_j$.
By $\mathsf{L}(\varphi)$ we denote the set $\{w\in(2^{Ap})^\omega\mid w\models\varphi\}$ of words satisfying $\varphi$.
The {\textrm{fLTL}\textsubscript{$\setminus\G\U$}}{} fragment of fLTL is defined by disallowing occurrences of ${\ensuremath{\mathbf{U}}}$ in ${\ensuremath{\mathbf{G}}}$-formulae, i.e. it is given by the following syntax for $\varphi$: \begin{align*} \varphi::= & a\mid \neg a\mid \varphi\wedge\varphi \mid \varphi\vee\varphi \mid {\ensuremath{\mathbf{X}}}\varphi \mid \varphi{\ensuremath{\mathbf{U}}}\varphi \mid {\ensuremath{\mathbf{F}}}\varphi \mid {\ensuremath{\mathbf{G}}}\xi \mid \Gf{\bowtie p}_{\mathrm{ext}} \xi\\ \xi::= & a\mid \neg a\mid \xi\wedge\xi \mid \xi\vee\xi \mid {\ensuremath{\mathbf{X}}}\xi \mid {\ensuremath{\mathbf{F}}}\xi \mid {\ensuremath{\mathbf{G}}}\xi \mid \Gf{\bowtie p}_{\mathrm{ext}} \xi \end{align*}
Note that restricting negations to atomic propositions is without loss of generality as all operators are closed under negation, for example $\neg \Gf{\ge p}_{\inf} \varphi \equiv \Gf{> 1- p}_{\sup} \neg \varphi$ or $\neg \Gf{> p}_{\sup} \varphi \equiv \Gf{\ge 1- p}_{\inf} \neg \varphi$. Furthermore, we could easily allow $\bowtie$ to range also over $\leq$ and $<$ as $\Gf{\leq p}_{\inf} \varphi \equiv \Gf{\geq 1- p}_{\sup} \neg \varphi$ and $\Gf{< p}_{\inf} \varphi \equiv \Gf{> 1- p}_{\sup} \neg \varphi$.
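For ultimately periodic words, $\mathrm{lr}_{\inf}$ and $\mathrm{lr}_{\sup}$ coincide and equal the frequency of satisfaction within one period, since the finite prefix is negligible in the long-run average. The following minimal Python sketch evaluates this for the special case where the subformula is an atomic proposition; the example word is our own illustration, not taken from the paper.

```python
from fractions import Fraction

def lr_periodic(prefix, period, prop):
    """lr_inf = lr_sup for the ultimately periodic word prefix.period^omega:
    the finite prefix is negligible in the long-run average, so the limit
    equals the frequency of the proposition within one period."""
    hits = sum(1 for letter in period if prop in letter)
    return Fraction(hits, len(period))

# w = ({a} {})^omega : the proposition a holds at every even position
print(lr_periodic([], [{"a"}, set()], "a"))   # -> 1/2
# hence w satisfies G^{>=1/2} a (for both inf and sup) but not G^{>1/2} a
```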
\mypara{Automata.}
Let us fix a finite alphabet $\Sigma$.
A deterministic \emph{labelled transition system (LTS)} over $\Sigma$ is a tuple $(Q,q_0,\delta)$ where $Q$ is a finite set of states, $q_0$ is the initial state, and $\delta: Q \times \Sigma \to Q$ is a partial transition function. We denote $\delta(q,a) = q'$ also by $q \tran{a} q'$.
A \emph{run} of the LTS $\mathcal{S}$ over an infinite word $w$ is a sequence of states $\mathcal{S}(w)=q_0 q_1 \cdots$ such that $q_{i+1} = \delta(q_i,w[i])$. For a finite word $w$ of length $n$, we denote by $\mathcal{S}(w)$ the state $q_n$ in which $\mathcal{S}$ is after reading $w$.
An \emph{acceptance condition} is a positive boolean formula over
formal variables
\[ \{ \Inf(S), \Fin(S), \accMP[\mathrm{ext}]{\bowtie p}{r} \mid S{\subseteq} Q, \mathrm{ext} {\in} \{\inf,\sup\}, \mathord{\bowtie} {\in} \{ {\geq}, {>}\}, p {\in} \mathbb{Q}, r: Q {\to} \mathbb{Q} \}.
\]
Given a run $\rho$ and an acceptance condition $\alpha$, we
assign truth values as follows: \begin{itemize}
\item $\Inf(S)$ is true if{}f $\rho$ visits
(some state of) $S$ infinitely often,
\item $\Fin(S)$ is true
if{}f $\rho$ visits (all states of) $S$ finitely often,
\item $\accMP[\mathrm{ext}]{\bowtie p}{r}$ is true if{}f
$\mathrm{lr}_{\mathrm{ext}}(r(\rho[0]) r(\rho[1]) \cdots) \bowtie p$. \end{itemize} The run $\rho$ satisfies $\alpha$ if this truth-assignment makes $\alpha$ true. An \emph{automaton} $\mathcal{A}$ is an LTS with an acceptance condition $\alpha$. The language of $\mathcal{A}$, denoted by $\mathsf{L}(\mathcal{A})$, is the set of all words inducing a run satisfying $\alpha$. An acceptance condition $\alpha$ is a \emph{B\"uchi}, \emph{generalized B\"uchi}, or \emph{co-B\"uchi} acceptance condition if it is of the form $\Inf(S)$, $\bigwedge_i\Inf(S_i)$, or $\Fin(S)$, respectively. Further, $\alpha$ is a \emph{generalized Rabin mean-payoff}, or a \emph{generalized B\"uchi mean-payoff} acceptance condition if it is in disjunctive normal form, or if it is a conjunction not containing any $\Fin(S)$, respectively. For each acceptance condition we define a corresponding automaton, e.g. \emph{deterministic generalized Rabin mean-payoff automaton (DGRMA)}.
\section{Model-checking algorithm}\label{sec:alg}
In this section, we state the problem of model checking MDPs against ${\textrm{fLTL}\textsubscript{$\setminus\G\U$}}$ specifications and provide a solution. As black boxes, we use two novel routines described in detail in the following two sections. All proofs are in the appendix.
Given an MDP $\mathsf{M}$ and a valuation $\nu:S\to2^{Ap}$ of its states, we say that its run
$\omega= s_0 a_0 s_1 a_1 \cdots$ \emph{satisfies} $\varphi$, written $\omega\models\varphi$, if $\nu(s_0)\nu(s_1)\cdots\models \varphi$.
We use $\Pr{\sigma}{}{\varphi}$ as a shorthand for the probability of all runs satisfying $\varphi$, i.e. $\Pr{\sigma}{\mathsf{M}}{\{\omega\mid \omega\models \varphi\}}$. This paper is concerned with the following task:
\noindent \framebox[\textwidth]{\parbox{0.96\textwidth}{
\textbf{Controller synthesis problem:}
Given an MDP with a valuation, an ${\textrm{fLTL}\textsubscript{$\setminus\G\U$}}$ formula $\varphi$ and
$x\in[0,1]$, decide whether $\Pr{\sigma}{}{\varphi} \ge x$ for some strategy $\sigma$,
and if so, construct such a \emph{witness} strategy.
}}
\noindent The following is the main result of the paper. \begin{theorem}\label{thm:main} The controller synthesis problem for MDPs and {\textrm{fLTL}\textsubscript{$\setminus\G\U$}}{} is decidable and the witness strategy can be constructed in doubly exponential time. \end{theorem}
In this section, we present an algorithm for Theorem~\ref{thm:main}.
The skeleton of our algorithm is the same as for the standard model-checking algorithm for MDPs against LTL.
It proceeds in three steps. Given an MDP $\mathsf{M}$ and a
formula $\varphi$, \begin{enumerate}
\item compute a deterministic automaton $\mathcal{A}$ such that $\mathsf{L}(\mathcal{A})=\mathsf{L}(\varphi)$,
\item compute the product MDP $\mdp\times\A$,
\item analyse the product MDP $\mdp\times\A$.
\end{enumerate}
In the following, we concretize these three steps to fit our setting.
\mypara{1.~Deterministic automaton} For ordinary LTL, usually a Rabin automaton or a generalized Rabin automaton is constructed~\cite{prism,ltl2dstar,cav14,atva14}.
Since in our setting the specification combines an $\omega$-regular language with quantitative constraints over runs, we generate a DGRMA. The next theorem is the first black box, detailed in Section~\ref{sec:automata}.
\begin{theorem}\label{thm:auto} For any ${\textrm{fLTL}\textsubscript{$\setminus\G\U$}}$ formula $\varphi$, there is a DGRMA $\mathcal{A}$, constructible in doubly exponential time, such that $\mathsf{L}(\mathcal{A})=\mathsf{L}(\varphi)$, and the acceptance condition is of exponential size. \end{theorem}
\mypara{2.~Product} Computing the synchronous parallel product of the MDP $\mathsf{M}=(S,A,\mathit{Act},\Delta,\hat s)$ with valuation $\nu:S\to 2^{Ap}$ and the LTS $(Q,i,\delta)$ over $2^{Ap}$ underlying $\mathcal{A}$ is rather straightforward. The product $\mdp\times\A$ is again an MDP $(S\times Q, A\times Q,\mathit{Act}',\Delta',(\hat s,\hat{q}))$ where\footnote{In order to guarantee that each action is enabled in at most one state, we have a copy of each original action for each state of the automaton.} $\mathit{Act}'((s,q))=\mathit{Act}(s)\times\{q\}$, $\hat{q} = \delta(i,\nu(\hat s))$, and $\Delta'\big((a,q)\big)\big(({s},\bar{q})\big)$ is equal to $\Delta(a)(s)$ if $\delta(q,\nu({s}))=\bar{q}$, and to $0$ otherwise. We lift acceptance conditions ${\ensuremath{\mathit{Acc}}}$ of $\mathcal{A}$ to $\mdp\times\A$: a run of $\mdp\times\A$ satisfies ${\ensuremath{\mathit{Acc}}}$ if its projection to the component of the automata states satisfies ${\ensuremath{\mathit{Acc}}}$.\footnote{Technically, the projection should be preceded by $i$ to get a run of the automaton, but the acceptance does not depend on any finite prefix of the sequence of states.}
\mypara{3.~Product analysis} The MDP $\mdp\times\A$ is solved with respect to ${\ensuremath{\mathit{Acc}}}$, i.e., a strategy in $\mdp\times\A$ is found that maximizes the probability of satisfying ${\ensuremath{\mathit{Acc}}}$. Such a strategy then induces a (history-dependent) strategy on $\mathsf{M}$ in a straightforward manner. Observe that for DGRMA, it is sufficient to consider the setting with \begin{align}\label{eq:alg-acc} {\ensuremath{\mathit{Acc}}}=\bigvee_{i=1}^k(\Fin(F_i)\wedge{\ensuremath{\mathit{Acc}}}_i') \end{align} where ${\ensuremath{\mathit{Acc}}}_i'$ is a conjunction of several $\Inf$ and $\mathit{MP}$ (in contrast with a Rabin condition used for ordinary LTL where ${\ensuremath{\mathit{Acc}}}_i'$ is simply of the form $\Inf(I_i)$).
Indeed, one can replace each $\bigwedge_{j}\Fin(F_j)$ by $\Fin(\bigcup_j F_j)$ to obtain the desired form, since avoiding several sets is equivalent to avoiding their union.
For a condition of the form (\ref{eq:alg-acc}), the solution is obtained as follows: \begin{enumerate} \item For $i=1,2,\ldots, k$: \begin{enumerate} \item Remove the set of states $F_i$ from the MDP. \item Compute the MEC decomposition. \item Mark each MEC $C$ as winning iff $\textsc{AcceptingMEC}(C,{\ensuremath{\mathit{Acc}}}_i')$ returns Yes.
\item Let $W_i$ be the componentwise union of winning MECs above. \end{enumerate} \item Let $W$ be the componentwise union of all $W_i$ for $1\le i\le k$. \item Return the maximal probability to reach the set $W$ in the MDP.
\end{enumerate} The procedure $\textsc{AcceptingMEC}(C,{\ensuremath{\mathit{Acc}}}_i')$ is the second black box used in our algorithm, detailed in Section~\ref{sec:mean-payoff}. It decides whether the maximum probability of satisfying ${\ensuremath{\mathit{Acc}}}_i'$ in $C$ is $1$ (returning Yes) or $0$ (returning No).
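Step 1.(b) asks for a maximal end-component (MEC) decomposition. The Python sketch below is our own illustration, not the paper's pseudocode: it implements the standard fixpoint algorithm on a support-only encoding of the MDP (transition probabilities are irrelevant for MECs; only which successors an action can reach matters), with SCCs computed by Tarjan's algorithm.

```python
def sccs(nodes, edges):
    """Tarjan's SCC algorithm; edges maps a node to its successor set."""
    index, low, on_stack = {}, {}, set()
    stack, comps, counter = [], [], [0]

    def visit(v):
        index[v] = low[v] = counter[0]
        counter[0] += 1
        stack.append(v)
        on_stack.add(v)
        for w in edges.get(v, ()):
            if w not in index:
                visit(w)
                low[v] = min(low[v], low[w])
            elif w in on_stack:
                low[v] = min(low[v], index[w])
        if low[v] == index[v]:
            comp = set()
            while True:
                w = stack.pop()
                on_stack.discard(w)
                comp.add(w)
                if w == v:
                    break
            comps.append(comp)

    for v in nodes:
        if v not in index:
            visit(v)
    return comps


def mec_decomposition(act, succ):
    """act: state -> set of enabled actions; succ: action -> set of states
    the action may move to.  Fixpoint: prune every action that may leave
    the SCC of its state, drop states with no action left, repeat until
    stable; the surviving SCCs are the MECs."""
    act = {s: set(a) for s, a in act.items()}
    while True:
        edges = {s: {t for a in act[s] for t in succ[a] if t in act}
                 for s in act}
        comp_of = {}
        for comp in sccs(set(act), edges):
            for s in comp:
                comp_of[s] = comp
        changed = False
        for s in list(act):
            for a in list(act[s]):
                if not succ[a] <= comp_of[s]:
                    act[s].discard(a)
                    changed = True
            if not act[s]:
                del act[s]
                changed = True
        if not changed:
            return sccs(set(act), edges)


# tiny MDP: state 1 must leave (towards 2); states 0 and 2 can stay forever
mecs = mec_decomposition(
    {0: {"a", "b"}, 1: {"c"}, 2: {"d"}},
    {"a": {0, 1}, "b": {0}, "c": {2}, "d": {2}},
)
# -> the two MECs {0} and {2}
```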
\begin{theorem}\label{thm:mec}
For a strongly connected MDP $\mathsf{M}$ and a generalized B\"uchi mean-payoff acceptance condition ${\ensuremath{\mathit{Acc}}}$, the maximal probability to satisfy ${\ensuremath{\mathit{Acc}}}$ is either $1$ or $0$, and is the same for all initial states. Moreover, there is a polynomial-time algorithm that computes this probability, and also outputs a witnessing strategy if the probability is $1$. \end{theorem}
The procedure is rather complex in our case, as opposed to standard cases such as the Rabin condition, where a MEC is accepting for ${\ensuremath{\mathit{Acc}}}_i'=\Inf(I_i)$ if its states intersect $I_i$; or a generalized Rabin condition~\cite{cav13}, where a MEC is accepting for ${\ensuremath{\mathit{Acc}}}_i'=\bigwedge_{j=1}^{\ell_i}\Inf(I_{ij})$ if its states intersect with each $I_{ij}$, for $j=1,2,\ldots,\ell_i$.
\mypara{Finishing the proof of Theorem~\ref{thm:main}} Note that for MDPs that are not strongly connected, the maximum probability might not be in $\{0,1\}$.
Therefore, the problem is decomposed into a qualitative satisfaction problem in step 1.(c) and a quantitative reachability problem in step 3.
Consequently, the proof of correctness is the same as the proofs for LTL via Rabin automata \cite{BP08} and generalized Rabin automata \cite{cav13}.
The complexity follows from Theorems~\ref{thm:auto} and~\ref{thm:mec}. Finally, the overall witness strategy first reaches the winning MECs and, once they are reached, switches to the witness strategies from Theorem~\ref{thm:mec}.
\begin{remark} We remark that by a simple modification of the product construction above and of the proof of Theorem~\ref{thm:mec}, we obtain an algorithm synthesising a strategy achieving a given bound w.r.t.\ multiple mean-payoff objectives (with a combination of superior and inferior limits) and (generalized) Rabin acceptance condition for \emph{general} (not necessarily strongly connected) MDP.
\end{remark}
\tikzset{
state/.style={
rectangle,
rounded corners,
draw=black,
minimum height=1.5em,
minimum width=2em,
inner sep=2pt,
text centered,
}
}
\section{Automata characterization of {\textrm{fLTL}\textsubscript{$\setminus\G\U$}}} \label{sec:automata}
In this section, we prove Theorem~\ref{thm:auto}. We give an algorithm for translating a given ${\textrm{fLTL}\textsubscript{$\setminus\G\U$}}$ formula $\varphi$ into a deterministic generalized Rabin mean-payoff automaton $\mathcal{A}$ that recognizes words satisfying $\varphi$.
For the rest of the section, let $\varphi$ be an ${\textrm{fLTL}\textsubscript{$\setminus\G\U$}}$ formula. Further, ${\ensuremath{\mathbb{F}}}$, ${\ensuremath{\mathbb{G}}}$, ${\ensuremath{\mathbb{G}}}^{\bowtie}$, and ${\ensuremath{\mathsf{sf}}}$ denote the set of ${\ensuremath{\mathbf{F}}}$-, ${\ensuremath{\mathbf{G}}}$-, $\Gf{\bowtie p}_{\mathrm{ext}}$-, and non-Boolean subformulas of $\varphi$, respectively.
In order to obtain an automaton for the formula, we first need to give a more operational view on fLTL. To this end, we use expansions of the formulae in a very similar way as they are used, for instance, in tableaux techniques for LTL translation to automata, or for deciding LTL satisfiability. We define a symbolic one-step unfolding (expansion) ${\ensuremath{\mathsf{Unf}}}$ of a formula inductively by the rules below. Further, for a valuation $\nu\subseteq Ap$, we define the ``next step under $\nu$''-operator. This operator (1) replaces unguarded atomic propositions by their truth values, and (2) peels off the outermost ${\ensuremath{\mathbf{X}}}$-operator whenever it is present. Formally, we define
\noindent \begin{minipage}{0.5\linewidth}
\begin{align*}
{\ensuremath{\mathsf{Unf}}}(\psi_1\wedge\psi_2)&={\ensuremath{\mathsf{Unf}}}(\psi_1)\wedge{\ensuremath{\mathsf{Unf}}}(\psi_2)\\
{\ensuremath{\mathsf{Unf}}}(\psi_1\vee\psi_2)&={\ensuremath{\mathsf{Unf}}}(\psi_1)\vee{\ensuremath{\mathsf{Unf}}}(\psi_2)\\
{\ensuremath{\mathsf{Unf}}}({\ensuremath{\mathbf{F}}}\psi_1)&={\ensuremath{\mathsf{Unf}}}(\psi_1)\vee{\ensuremath{\mathbf{X}}}{\ensuremath{\mathbf{F}}}\psi_1\\
{\ensuremath{\mathsf{Unf}}}({\ensuremath{\mathbf{G}}}\psi_1)&={\ensuremath{\mathsf{Unf}}}(\psi_1)\wedge{\ensuremath{\mathbf{X}}}{\ensuremath{\mathbf{G}}}\psi_1\\
{\ensuremath{\mathsf{Unf}}}(\psi_1{\ensuremath{\mathbf{U}}}\psi_2)&={\ensuremath{\mathsf{Unf}}}(\psi_2){\vee}\big({\ensuremath{\mathsf{Unf}}}(\psi_1){\wedge} {\ensuremath{\mathbf{X}}}(\psi_1{\ensuremath{\mathbf{U}}}\psi_2)\big)\\
{\ensuremath{\mathsf{Unf}}}(\Gf{\bowtie p}_\mathrm{ext}\psi_1)&={\ensuremath{\mathbf{tt}}}\wedge{\ensuremath{\mathbf{X}}}\Gf{\bowtie p}_\mathrm{ext}\psi_1\\
{\ensuremath{\mathsf{Unf}}}(\psi) &= \psi \text{ for any other $\psi$}
\end{align*} \end{minipage} \hspace*{-1em} \begin{minipage}{0.4\linewidth}
\begin{align*}
(\psi_1\wedge \psi_2)[\nu]&= \psi_1[\nu] \wedge \psi_2[\nu]\\
(\psi_1\vee \psi_2)[\nu]&= \psi_1[\nu] \vee \psi_2[\nu]\\
a[\nu]&=
\begin{cases}
{\ensuremath{\mathbf{tt}}} &\text{if }a\in\nu\\
{\ensuremath{\mathbf{ff}}} &\text{if }a\notin\nu
\end{cases}\\
\neg a[\nu]&=
\begin{cases}
{\ensuremath{\mathbf{ff}}} &\text{if }a\in\nu\\
{\ensuremath{\mathbf{tt}}} &\text{if }a\notin\nu
\end{cases}\\
({\ensuremath{\mathbf{X}}}\psi_1)[\nu]&= \psi_1\\
\psi[\nu] &= \psi \text{ for any other $\psi$}
\end{align*} \end{minipage} Note that after unfolding, a formula becomes a positive Boolean combination over literals (atomic propositions and their negations) and ${\ensuremath{\mathbf{X}}}$-formulae. The resulting formula is LTL-equivalent to the original formula.
The formulae of the form $\Gf{\bowtie p}_\mathrm{ext}\psi$ have a ``dummy'' unfolding; they are dealt with in a special way later.
Combined with unfolding, the ``next step''-operator then preserves and reflects satisfaction on the given word:
\begin{lemma}\label{lem:unfolding} For every word $w$ and ${\textrm{fLTL}\textsubscript{$\setminus\G\U$}}$ formula $\varphi$, we have $w\models \varphi$ if and only if $\suffix{w}{1} \models ({\ensuremath{\mathsf{Unf}}}(\varphi))[w[0]]$. \end{lemma}
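Both operators are purely syntactic, so they can be prototyped directly on a formula AST. The following Python sketch is our own illustration: formulas are encoded as nested tuples, the frequency operator is abbreviated to an opaque `"Gfreq"` node, and no propositional simplification is performed (the quotient by propositional equivalence is only needed later for the state space).

```python
TT, FF = ("tt",), ("ff",)

def unf(phi):
    """One-step unfolding Unf, following the rules above."""
    op = phi[0]
    if op in ("and", "or"):
        return (op, unf(phi[1]), unf(phi[2]))
    if op == "F":                       # F psi  ->  Unf(psi) v X F psi
        return ("or", unf(phi[1]), ("X", phi))
    if op == "G":                       # G psi  ->  Unf(psi) ^ X G psi
        return ("and", unf(phi[1]), ("X", phi))
    if op == "U":                       # psi1 U psi2
        return ("or", unf(phi[2]), ("and", unf(phi[1]), ("X", phi)))
    if op == "Gfreq":                   # frequency-G: "dummy" unfolding
        return ("and", TT, ("X", phi))
    return phi                          # tt, ff, literals, X-formulas

def step(phi, nu):
    """The 'next step under nu' operator phi[nu]."""
    op = phi[0]
    if op in ("and", "or"):
        return (op, step(phi[1], nu), step(phi[2], nu))
    if op == "ap":
        return TT if phi[1] in nu else FF
    if op == "not":
        return FF if phi[1] in nu else TT
    if op == "X":
        return phi[1]
    return phi                          # all other formulas are unchanged

phi = ("U", ("ap", "b"), ("ap", "a"))
print(step(unf(phi), {"a"}))
# ('or', ('tt',), ('and', ('ff',), ('U', ('ap', 'b'), ('ap', 'a'))))
# propositionally equivalent to tt: reading the letter {a} satisfies b U a
```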
The construction of $\mathcal{A}$ proceeds in several steps. We first construct a ``master'' transition system, which monitors the formula and transforms it in each step to always keep exactly the formula that needs to be satisfied at the moment. However, this can only deal with properties whose satisfaction has a finite witness, e.g. ${\ensuremath{\mathbf{F}}} a$. Therefore we construct a set of ``slave'' automata, which check whether ``infinitary'' properties (with no finite witness), e.g., ${\ensuremath{\mathbf{F}}}{\ensuremath{\mathbf{G}}} a$, hold or not. They pass this information to the master, who decides on acceptance of the word.
\subsection{Construction of master transition system $\mathcal{M}$}
We define an LTS $\mathcal{M}=(Q,\varphi,\delta^{\mathcal{M}})$ over $2^{Ap}$ by letting $Q$ be the set
of positive Boolean functions\footnote{We use Boolean functions, i.e. classes of propositionally equivalent formulae, to obtain a finite state space. To avoid clutter, when referring to such a Boolean function, we use some formula representing the respective equivalence class.
The choice of the representing formula is not relevant since, for all operations we use, the propositional equivalence is a congruence, see
\ifxa\undefined\cite{techreport}\else Appendix~\ref{app:bool}\fi. Note that, in particular, ${\ensuremath{\mathbf{tt}}},{\ensuremath{\mathbf{ff}}}\in Q$. } over ${\ensuremath{\mathsf{sf}}}$, by letting
$\varphi$ be the initial state, and by letting the transition function $\delta^\mathcal{M}$, for every $\nu\subseteq Ap$ and $\psi \in Q$, contain
$\psi \tran\nu ({\ensuremath{\mathsf{Unf}}}(\psi))[\nu]$.
The master LTS keeps the formula that remains to be satisfied up to date:
\begin{lemma}[Local (finitary) correctness of master LTS]\label{lem:master-local}
Let $w$ be a word and $\mathcal{M}(w)=\varphi_0\varphi_1\cdots$ the corresponding run. Then for all $n\in\mathbb N$, we have $w\models\varphi$ if and only if $\suffix{w}{n} \models\varphi_n$. \end{lemma}
\begin{example} The formula $\varphi=a\wedge {\ensuremath{\mathbf{X}}}(b{\ensuremath{\mathbf{U}}} a)$ yields a master LTS depicted below.
\begin{center}
\begin{tikzpicture}[x=4cm,y=1.5cm,font=\footnotesize,initial text=,outer sep=0.5mm]
\tikzstyle{acc}=[double]
\node[state,initial] (i) at (0,0) {$a\wedge {\ensuremath{\mathbf{X}}}(b{\ensuremath{\mathbf{U}}} a)$};
\node[state] (t) at (1,-0.9) {${\ensuremath{\mathbf{tt}}}$};
\node[state] (f) at (0,-0.9) {${\ensuremath{\mathbf{ff}}}$};
\node[state] (u) at (1,0) {$b{\ensuremath{\mathbf{U}}} a$};
\path[->]
(i) edge node[left]{$\emptyset,\{b\}$} (f)
edge node[pos=0.3,below]{$\{a\},\{a,b\}$} (u)
(u) edge[loop right, max distance=8mm,in=-10,out=10,looseness=10] node[right]{$\{b\}$} (u)
edge[bend left=15] node[below,pos=0.3] {$\emptyset$} (f)
edge node[right] {$\{a\},\{a,b\}$} (t)
(t) edge[loop right, max distance=8mm,in=-10,out=10,looseness=10] node[right]{$\emptyset,\{a\},\{b\},\{a,b\}$} (t)
(f) edge[loop left, max distance=8mm,in=170,out=190,looseness=10] node[left]{$\emptyset,\{a\},\{b\},\{a,b\}$} (f)
;
\end{tikzpicture}
\end{center} \end{example}
One can observe that for an fLTL formula $\varphi$ with no ${\ensuremath{\mathbf{G}}}$- and $\Gf{\bowtie p}_\mathrm{ext}$-operators, we have $w\models\varphi$ if{}f the state ${\ensuremath{\mathbf{tt}}}$ is reached while reading $w$. However, for formulae with ${\ensuremath{\mathbf{G}}}$-operators (and thus without finite witnesses in general), this claim no longer holds. To check such behaviour we construct auxiliary ``slave'' automata.
\subsection{Construction of slave transition systems $\mathcal{S}(\xi)$}
We define an LTS $\mathcal{S}(\xi)=(Q,\xi,\delta^{\mathcal{S}})$ over $2^{Ap}$
with the same state space as $\mathcal{M}$ and the initial state $\xi \in Q$. Furthermore, we call a state $\psi$ a \emph{sink}, written $\psi\in\mathsf{Sink}$, iff for all $\nu\subseteq Ap$ we have $\psi[\nu]=\psi$. Finally,
the transition relation $\delta^{\mathcal{S}}$, for every $\nu\subseteq Ap$ and $\psi\in Q \setminus \mathsf{Sink}$, contains
$ \psi\tran{\nu}\psi[\nu] $.
\begin{example}\label{ex:slave} The slave LTS for the formula $\xi=a\vee b\vee{\ensuremath{\mathbf{X}}}(b\wedge {\ensuremath{\mathbf{G}}} {\ensuremath{\mathbf{F}}} a)$ has a structure depicted in the following diagram:
\begin{center}
\begin{tikzpicture}[x=4cm,y=1.5cm,font=\footnotesize,initial text=,outer sep=0.5mm]
\tikzstyle{acc}=[double]
\node[state,initial] (i) at (0,0) {$a\vee b\vee{\ensuremath{\mathbf{X}}}(b\wedge{\ensuremath{\mathbf{G}}}{\ensuremath{\mathbf{F}}} a)$};
\node[state] (t) at (0.3,-0.5) {${\ensuremath{\mathbf{tt}}}$};
\node[state] (a) at (1,0) {$b\wedge({\ensuremath{\mathbf{G}}}{\ensuremath{\mathbf{F}}} a)$};
\node[state] (f) at (1.3,-0.5) {${\ensuremath{\mathbf{ff}}}$};
\node[state] (g) at (2.2,0) {${\ensuremath{\mathbf{G}}}{\ensuremath{\mathbf{F}}} a$};
\draw[->,rounded corners] (i.-110) |- (t) node[pos=0.4,left]{$\{a\},\{b\},\{a,b\}$};
\draw[->,rounded corners] (i) edge node[below]{$\emptyset$} (a)
(a) edge node[pos=0.6, below] {$\{b\},\{a,b\}$} (g)
(a.-40) |- (f) node[pos=0.4,left] {$\emptyset,\{a\}$}
;
\end{tikzpicture}
\end{center} Note that we do not unfold any inner {\ensuremath{\mathbf{F}}}- and {\ensuremath{\mathbf{G}}}-formulae. Observe that if we start reading $w$ at the $i$th position and end up in ${\ensuremath{\mathbf{tt}}}$, we have $\suffix w i\models \xi$. Similarly, if we end up in ${\ensuremath{\mathbf{ff}}}$ we have $\suffix w i\not\models \xi$. This way we can monitor for which position $\xi$ holds and will be able to determine if it holds, for instance, infinitely often. But what about when we end up in ${\ensuremath{\mathbf{G}}}{\ensuremath{\mathbf{F}}} a$? Intuitively, this state is accepting or rejecting depending on whether ${\ensuremath{\mathbf{G}}}{\ensuremath{\mathbf{F}}} a$ holds or not. Since this cannot be checked in finite time, we delegate this task to yet another slave, now responsible for ${\ensuremath{\mathbf{G}}}{\ensuremath{\mathbf{F}}} a$. Thus instead of deciding whether ${\ensuremath{\mathbf{G}}}{\ensuremath{\mathbf{F}}} a$ holds, we may use it as an \emph{assumption} in the automaton for $\xi$ and let the automaton for ${\ensuremath{\mathbf{G}}}{\ensuremath{\mathbf{F}}} a$ check whether the assumption turns out correct. \end{example}
Let $\mathcal{R}\mathrm{ec} := {\ensuremath{\mathbb{F}}} \cup {\ensuremath{\mathbb{G}}} \cup {\ensuremath{\mathbb{G}}}^{\bowtie}$. This is the set of subformulas that are potentially difficult to check in finite time. Subsets of $\mathcal{R}\mathrm{ec}$ can be used as assumptions to prove other assumptions and in the end also the acceptance. Given a set of formulae $\Psi$ and a formula $\psi$, we say that $\Psi$ \emph{(propositionally) proves} $\psi$, written $\Psi\proves\psi$, if $\psi$ can be deduced from formulae in $\Psi$ using only propositional reasoning (for a formal definition see \ifxa\undefined\cite{techreport}\else Appendix~\ref{app:bool}\fi). So, for instance, $\{{\ensuremath{\mathbf{G}}}{\ensuremath{\mathbf{F}}} a\}\proves{\ensuremath{\mathbf{G}}}{\ensuremath{\mathbf{F}}} a \vee {\ensuremath{\mathbf{G}}} b$, but $\{{\ensuremath{\mathbf{G}}}{\ensuremath{\mathbf{F}}} a\}\not\proves{\ensuremath{\mathbf{F}}} a$.
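Since $\proves$ is purely propositional, it can be decided by brute force over truth assignments, treating every non-Boolean formula as an opaque atom. The Python sketch below is our own illustration; formulas are nested tuples over string atoms, and the names `GFa`, `Gb`, `Fa` are just such opaque atoms.

```python
from itertools import product

def atoms(phi):
    """Collect the opaque atoms (non-Boolean formulas, here strings)."""
    if isinstance(phi, str):
        return {phi}
    return atoms(phi[1]) | atoms(phi[2])    # phi = ("and"/"or", l, r)

def holds(phi, val):
    if isinstance(phi, str):
        return val[phi]
    l, r = holds(phi[1], val), holds(phi[2], val)
    return l and r if phi[0] == "and" else l or r

def proves(assumptions, phi):
    """Psi |- phi: every truth assignment over the opaque atoms that
    satisfies all assumptions also satisfies phi."""
    ats = sorted(set().union(atoms(phi), *map(atoms, assumptions)))
    for bits in product([False, True], repeat=len(ats)):
        val = dict(zip(ats, bits))
        if all(holds(a, val) for a in assumptions) and not holds(phi, val):
            return False
    return True

print(proves({"GFa"}, ("or", "GFa", "Gb")))   # -> True
print(proves({"GFa"}, "Fa"))                  # -> False: Fa is a
# different atom, propositional reasoning knows nothing about temporal logic
```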
The following is the ideal assumption set we would like our automaton to identify.
For a fixed word $w$, we denote by $\mathcal{R}(w)$ the set \[ \{ {\ensuremath{\mathbf{F}}} \xi \in {\ensuremath{\mathbb{F}}} \mid w \models {\ensuremath{\mathbf{G}}} {\ensuremath{\mathbf{F}}} \xi \}
\cup \{ {\ensuremath{\mathbf{G}}} \xi \in {\ensuremath{\mathbb{G}}} \mid w \models {\ensuremath{\mathbf{F}}} {\ensuremath{\mathbf{G}}} \xi \}
\cup \{ \Gf{\bowtie p}_\mathrm{ext} \xi \in {\ensuremath{\mathbb{G}}}^{\bowtie} \mid w \models \Gf{\bowtie p}_\mathrm{ext} \xi \} \]
of formulae in $\mathcal{R}\mathrm{ec}$ eventually always satisfied on $w$.
The slave LTS is useful for recognizing whether its respective formula $\xi$ holds infinitely often, almost always, or with the given frequency. Intuitively, it reduces this problem for a given formula to the problems for its subformulas in $\mathcal{R}\mathrm{ec}$:
\begin{lemma}[Correctness of slave LTS]\label{lem:slave-promises} Let us fix $\xi \in {\ensuremath{\mathsf{sf}}}$ and a word~$w$. For any $\mathcal{R} \subseteq \mathcal{R}\mathrm{ec}$, we denote by $\mathit{Sat}(\mathcal{R})$ the set $\{i\in\mathbb{N}\mid \exists j\geq i:\mathcal{R} \proves \mathcal{S}(\xi)(\infix wij) \}$. Then for any $\underline{\mathcal{R}},\overline{\mathcal{R}} \subseteq \mathcal{R}\mathrm{ec}$ such that $\underline{\mathcal{R}} \subseteq \mathcal{R}(w) \subseteq \overline{\mathcal{R}}$, we have
\begin{eqnarray}
\mathit{Sat}(\underline{\mathcal{R}}) \text{ is infinite} \implies & w \models {\ensuremath{\mathbf{G}}}{\ensuremath{\mathbf{F}}}\xi \implies & \mathit{Sat}(\overline{\mathcal{R}}) \text{ is infinite}\\
\mathbb{N} \setminus \mathit{Sat}(\underline{\mathcal{R}}) \text{ is finite} \implies & w \models {\ensuremath{\mathbf{F}}}{\ensuremath{\mathbf{G}}}\xi \implies & \mathbb{N} \setminus \mathit{Sat}(\overline{\mathcal{R}}) \text{ is finite}\\
\mathrm{lr}_{\mathrm{ext}}(\big(\mathds{1}_{i\in \mathit{Sat}(\underline{\mathcal{R}})}\big)_{i=0}^\infty) \bowtie p \implies & w \models \Gf{\bowtie p}_\mathrm{ext} \xi \implies & \mathrm{lr}_{\mathrm{ext}}(\big(\mathds{1}_{i\in\mathit{Sat}(\overline{\mathcal{R}})}\big)_{i=0}^\infty) \bowtie p \qquad \end{eqnarray}
\end{lemma}
Before we put the slaves together to determine $\mathcal{R}(w)$, we define \emph{slave automata}. In order to express the constraints from Lemma~\ref{lem:slave-promises} as acceptance conditions, we need to transform the underlying LTS. Intuitively, we replace quantification over various starting positions for runs by a subset construction. This means that in each step we put a \emph{token} into the initial state and move all previously present tokens to their successor states.
\mypara{B\"uchi} For a formula ${\ensuremath{\mathbf{F}}} \xi \in {\ensuremath{\mathbb{F}}}$, its slave LTS $\mathcal{S}(\xi)=(Q,\xi,\delta^{\mathcal{S}})$, and $\mathcal{R} \subseteq \mathcal{R}\mathrm{ec}$, we define a B\"uchi automaton $\mathcal{S}_{\G\F}(\xi,\mathcal{R})=(2^{Q},\{\xi\},\delta )$ over $2^{Ap}$ by setting \[\Psi\tran{\nu}\{\delta^{\mathcal{S}}(\psi,\nu)\mid\psi\in\Psi\setminus \mathsf{Sink}\}\cup\{\xi\} \qquad \qquad \text{for every $\nu\subseteq Ap$} \] and the B\"uchi acceptance condition $\Inf(\{\Psi\subseteq Q\mid \exists \psi\in\Psi \cap \mathsf{Sink}: \mathcal{R}\proves\psi\})$.
In other words, the automaton accepts if infinitely often a token ends up in an \emph{accepting sink}, i.e., an element of $\mathsf{Sink}$ that is provable from $\mathcal{R}$. For Example~\ref{ex:slave}, depending on whether we assume ${\ensuremath{\mathbf{G}}}{\ensuremath{\mathbf{F}}} a\in\mathcal{R}$ or not, the accepting sinks are ${\ensuremath{\mathbf{tt}}}$ and ${\ensuremath{\mathbf{G}}}{\ensuremath{\mathbf{F}}} a$, or only ${\ensuremath{\mathbf{tt}}}$, respectively.
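The subset construction for $\mathcal{S}_{\G\F}$ is easy to prototype. In the Python sketch below (our own illustration), the slave LTS of Example~\ref{ex:slave} is given as an explicit transition table over opaque state names; one step moves every non-sink token and injects a fresh token into the initial state.

```python
XI = "a|b|X(b&GFa)"                 # initial state of the slave LTS
SINKS = {"tt", "ff", "GFa"}

# delta of the slave LTS of Example 2, keyed by (state, letter)
DELTA = {
    (XI, frozenset()): "b&GFa",
    (XI, frozenset({"a"})): "tt",
    (XI, frozenset({"b"})): "tt",
    (XI, frozenset({"a", "b"})): "tt",
    ("b&GFa", frozenset()): "ff",
    ("b&GFa", frozenset({"a"})): "ff",
    ("b&GFa", frozenset({"b"})): "GFa",
    ("b&GFa", frozenset({"a", "b"})): "GFa",
}

def token_step(tokens, nu):
    """One transition of S_GF: move all non-sink tokens, add a fresh one."""
    nu = frozenset(nu)
    return {DELTA[(q, nu)] for q in tokens if q not in SINKS} | {XI}

tokens = {XI}
tokens = token_step(tokens, set())       # -> {XI, 'b&GFa'}
tokens = token_step(tokens, {"b"})       # -> {XI, 'tt', 'GFa'}
# with GFa assumed in R, both 'tt' and 'GFa' are accepting sinks,
# so this token set counts towards the Buchi condition Inf(...)
```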
\mypara{Co-B\"uchi}
For a formula ${\ensuremath{\mathbf{G}}} \xi \in {\ensuremath{\mathbb{G}}}$, its slave LTS $\mathcal{S}(\xi)=(Q,\xi,\delta^{\mathcal{S}})$ and $\mathcal{R} \subseteq \mathcal{R}\mathrm{ec}$, we define a co-B\"uchi automaton $\mathcal{S}_{\F\G}(\xi,\mathcal{R})=(2^{Q},\{\xi\},\delta )$ over $2^{Ap}$ with the same LTS as above.
It differs from the B\"uchi automaton only by having
a co-B\"uchi acceptance condition $\Fin(\{\Psi\subseteq Q\mid \exists \psi \in \Psi\cap \mathsf{Sink} : \mathcal{R} \notproves \psi\})$.
\mypara{Mean-payoff}
For a formula $\Gf{\bowtie p}_\mathrm{ext} \xi\in {\ensuremath{\mathbb{G}}}^{\bowtie}$, its slave LTS $\mathcal{S}(\xi)=(Q,\xi,\delta^{\mathcal{S}})$, and $\mathcal{R} \subseteq \mathcal{R}\mathrm{ec}$ we define a \emph{mean-payoff automaton}
$\mathcal{S}_{\Gf{\bowtie p}_\mathrm{ext}}(\xi,\mathcal{R})= (|Q|^{Q},\mathds{1}_{\xi},\delta
)$ over $2^{Ap}$ so that for every $\nu\subseteq Ap$, we have $f\tran{\nu}f'$ where \[ f'(\psi') = \mathds{1}_\xi(\psi')+\sum_{\delta^\mathcal{S}(\psi,\nu) = \psi'} f(\psi). \] Intuitively, we always count the number of tokens in each state. When a step is taken, all tokens moving to a state are summed up and, moreover, one token is added to the initial state. Since the slave LTS is acyclic, the number of tokens in each state is bounded.
Finally, the acceptance condition is $\accMP[\mathrm{ext}]{\bowtie p}{r(\mathcal{R})}$ where the function $r(\mathcal{R})$ assigns to every state $f$ the reward:
\[ \sum_{\psi\in\mathsf{Sink},\mathcal{R} \proves \psi} f(\psi). \] Each state thus has a reward equal to the number of tokens in accepting sinks. Note that each token either causes a reward of $1$ once in its life-time, when it reaches an accepting sink, or never causes any reward, in the case when it never reaches an accepting sink.
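The counting construction differs from the subset construction only by tracking multiplicities. The Python sketch below is our own illustration, again over the slave LTS of Example~\ref{ex:slave} encoded as an explicit table with opaque state names; since $\delta^{\mathcal{S}}$ is undefined on sinks, a token vanishes one step after reaching a sink and thus contributes to the reward exactly once.

```python
from collections import Counter

XI = "a|b|X(b&GFa)"                 # initial state of the slave LTS
SINKS = {"tt", "ff", "GFa"}
DELTA = {
    (XI, frozenset()): "b&GFa",
    (XI, frozenset({"a"})): "tt",
    (XI, frozenset({"b"})): "tt",
    (XI, frozenset({"a", "b"})): "tt",
    ("b&GFa", frozenset()): "ff",
    ("b&GFa", frozenset({"a"})): "ff",
    ("b&GFa", frozenset({"b"})): "GFa",
    ("b&GFa", frozenset({"a", "b"})): "GFa",
}

def count_step(f, nu):
    """f'(psi') = 1_xi(psi') + sum of f(psi) over non-sink psi with
    delta(psi, nu) = psi'.  Tokens sitting in a sink vanish."""
    nu = frozenset(nu)
    g = Counter({XI: 1})
    for q, n in f.items():
        if q not in SINKS:
            g[DELTA[(q, nu)]] += n
    return g

def reward(f, proved_sinks):
    """Number of tokens currently sitting in accepting sinks."""
    return sum(n for q, n in f.items() if q in proved_sinks)

f = Counter({XI: 1})
f = count_step(f, set())        # one token in XI, one in 'b&GFa'
f = count_step(f, {"b"})        # tokens in XI, 'tt' and 'GFa'
# assuming GFa is in R, the accepting sinks are 'tt' and 'GFa'
print(reward(f, {"tt", "GFa"}))  # -> 2
```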
\begin{lemma}[Correctness of slave automata]\label{lem:slaves-acc} Let $\xi \in {\ensuremath{\mathsf{sf}}}$, $w$, and $\underline{\mathcal{R}},\overline{\mathcal{R}} \subseteq \mathcal{R}\mathrm{ec}$ be such that $\underline{\mathcal{R}} \subseteq \mathcal{R}(w) \subseteq \overline{\mathcal{R}}$. Then
\begin{eqnarray}
w \in \mathsf{L}(\mathcal{S}_{\G\F}(\xi,\underline{\mathcal{R}})) \implies & w \models {\ensuremath{\mathbf{G}}}{\ensuremath{\mathbf{F}}} \xi & \implies w \in \mathsf{L}(\mathcal{S}_{\G\F}(\xi,\overline{\mathcal{R}}))\\
w \in \mathsf{L}(\mathcal{S}_{\F\G}(\xi,\underline{\mathcal{R}})) \implies & w \models {\ensuremath{\mathbf{F}}}{\ensuremath{\mathbf{G}}} \xi & \implies w \in \mathsf{L}(\mathcal{S}_{\F\G}(\xi,\overline{\mathcal{R}}))\\
w \in \mathsf{L}(\mathcal{S}_{\Gf{\bowtie p}_\mathrm{ext}}(\xi,\underline{\mathcal{R}})) \implies & w \models \Gf{\bowtie p}_\mathrm{ext} \xi & \implies w \in \mathsf{L}(\mathcal{S}_{\Gf{\bowtie p}_\mathrm{ext}}(\xi,\overline{\mathcal{R}}))
\end{eqnarray}
\end{lemma}
\subsection{Product of slave automata}
Observe that the LTSs of the slave automata do not depend on the assumptions $\mathcal{R}$. Let $\mathcal{S}_1,\ldots,\mathcal{S}_n$ be the LTSs of the automata for the elements of $\mathcal{R}\mathrm{ec} = \{\xi_1,\ldots,\xi_n\}$. Further, given $\mathcal{R}\subseteq\mathcal{R}\mathrm{ec}$, let ${\ensuremath{\mathit{Acc}}}_i(\mathcal{R})$ be the acceptance condition of the slave automaton for $\xi_i$ with assumptions $\mathcal{R}$.
We define $\mathcal{P}$ to be the LTS product $\mathcal{S}_1 \times \cdots \times \mathcal{S}_n$.
The slaves run independently in parallel.
For $\mathcal{R} \subseteq\mathcal{R}\mathrm{ec}$, we define the acceptance condition for the product\footnote{An acceptance condition of an automaton is defined to hold on a run of the automata product if it holds on the projection of the run to this automaton. We can still write this as a standard acceptance condition. Indeed, for instance, a B\"uchi condition for the first automaton given by $F\subseteq Q$ is a B\"uchi condition on the product given by $\{(q_1,q_2,\ldots,q_n)\mid q_1\in F, q_2,\ldots,q_n\in Q\}$.}
\[ {\ensuremath{\mathit{Acc}}}(\mathcal{R}) = \bigwedge_{\xi_i\in\mathcal{R}} {\ensuremath{\mathit{Acc}}}_i(\mathcal{R}) \] and $\mathcal{P}(\mathcal{R})$ denotes the LTS $\mathcal{P}$ endowed with the acceptance condition ${\ensuremath{\mathit{Acc}}}(\mathcal{R})$.
Note that ${\ensuremath{\mathit{Acc}}}(\mathcal{R})$ checks that $\mathcal{R}$ is satisfied when each slave assumes $\mathcal{R}$.
\begin{lemma}[Correctness of slave product]\label{lem:slaves-product}
For $w$ and $\mathcal{R} \subseteq \mathcal{R}\mathrm{ec}$, we have \begin{description} \item[(soundness)] whenever
$w \in \mathsf{L}(\mathcal{P}({\mathcal{R}}))$ then $\mathcal{R}\subseteq\mathcal{R}(w)$; \item[(completeness)] $w \in \mathsf{L}(\mathcal{P}({\mathcal{R}(w)}))$. \end{description}
\end{lemma}
Intuitively, soundness means that whatever set of assumptions we prove with $\mathcal{P}$, it is also satisfied on the word. Note that the soundness implication can be written as \[ w \in \mathsf{L}(\mathcal{P}({\mathcal{R}})) \implies w \models \bigwedge_{{\ensuremath{\mathbf{F}}}\xi \in \mathcal{R}} {\ensuremath{\mathbf{G}}}{\ensuremath{\mathbf{F}}} \xi \land \bigwedge_{{\ensuremath{\mathbf{G}}}\xi \in \mathcal{R}} {\ensuremath{\mathbf{F}}}{\ensuremath{\mathbf{G}}} \xi \land \bigwedge_{\Gf{\bowtie p}_\mathrm{ext} \xi \in \mathcal{R}} \Gf{\bowtie p}_\mathrm{ext} \xi. \] Completeness means that, for every word, the set of all satisfied assumptions can be proven by the automaton.
\subsection{The final automaton: product of slaves and master}
Finally, we define the generalized Rabin mean-payoff automaton $\mathcal{A}$ to have the
LTS $\mathcal{M} \times \mathcal{P}$ and the acceptance condition \( \bigvee_{\mathcal{R}\subseteq\mathcal{R}\mathrm{ec}} \accT{\mathcal{R}} \wedge \accP{\mathcal{R}} \)
where
\[
\accT{\mathcal{R}} = \Fin\Big(\Big\{\big(\psi,(\Psi_\xi)_{\xi\in \mathcal{R}\mathrm{ec}}\big)\ \Big|\ \mathcal{R}\cup \bigcup_{{\ensuremath{\mathbf{G}}}\xi\in\mathcal{R}} \Psi_\xi[(\mathcal{R}\mathrm{ec}\setminus\mathcal{R})/{\ensuremath{\mathbf{ff}}}] \not\proves \psi\Big\}\Big)\, \] eventually prohibits states where the current formula of the master $\psi$ is not proved by the assumptions and by all tokens of the slaves for ${\ensuremath{\mathbf{G}}}\xi\in\mathcal{R}$. Here $\Psi[X/{\ensuremath{\mathbf{ff}}}]$ denotes the set of formulae of $\Psi$ where each element of $X$ in the Boolean combination is replaced by ${\ensuremath{\mathbf{ff}}}$. For instance, $\{a\vee{\ensuremath{\mathbf{F}}} a\}[\{a\}/{\ensuremath{\mathbf{ff}}}]=\{{\ensuremath{\mathbf{ff}}}\vee{\ensuremath{\mathbf{F}}} a\}=\{{\ensuremath{\mathbf{F}}} a\}$. (For a formal definition, see \ifxa\undefined\cite{techreport}\else Appendix~\ref{app:bool}\fi.) We illustrate how the information from the slaves in this form helps to decide whether the master formula holds or not.
\begin{example} Consider $\varphi={\ensuremath{\mathbf{G}}}({\ensuremath{\mathbf{X}}} a\vee{\ensuremath{\mathbf{G}}}{\ensuremath{\mathbf{X}}} b)$, and its respective master transition system as depicted below: \begin{center}
\begin{tikzpicture}[x=5cm,y=1.5cm,font=\footnotesize,initial text=,outer sep=0.5mm]
\tikzstyle{acc}=[double]
\node[state,initial] (x) at (-1,0) {$\varphi$};
\node[state] (i) at (0,0) {$\varphi\wedge(a\vee(b\wedge{\ensuremath{\mathbf{G}}}{\ensuremath{\mathbf{X}}} b))$};
\node[state] (f) at (0,-0.8) {${\ensuremath{\mathbf{ff}}}$};
\node[state] (u) at (0.8,0) {$\varphi\wedge(b\wedge{\ensuremath{\mathbf{G}}}{\ensuremath{\mathbf{X}}} b)$};
\path[->]
(x) edge node[above]{$\emptyset,\{a\},\{b\},\{a,b\}$} (i)
(i) edge node[left]{$\emptyset$} (f)
(i) edge[loop above] node[left]{$\{a\},\{a,b\}$} ()
edge node[above]{$\{b\}$} (u)
(u) edge[loop above] node[left]{$\{b\},\{a,b\}$} (u)
edge node[below,pos=0.3] {$\emptyset,\{a\}$} (f)
(f) edge[loop left, max distance=8mm,in=170,out=190,looseness=10] node[left]{$\emptyset,\{a\},\{b\},\{a,b\}$} (f)
;
\end{tikzpicture} \end{center} Assume we enter the second state and stay there forever, e.g., under words $\{a\}^\omega$ or $\{a,b\}^\omega$.
How do we show that $\varphi\wedge(a\vee(b\wedge{\ensuremath{\mathbf{G}}}{\ensuremath{\mathbf{X}}} b))$ holds? For the first conjunct, we obviously have $\mathcal{R}\proves\varphi$ for all $\mathcal{R}$ containing $\varphi$. However, the second conjunct is more difficult to prove.
One option is that we have ${\ensuremath{\mathbf{G}}}{\ensuremath{\mathbf{X}}} b\in\mathcal{R}$ and want to prove the second disjunct. To this end, we also need to prove $b$. We can see that if ${\ensuremath{\mathbf{G}}}{\ensuremath{\mathbf{X}}} b$ holds then in its slave for ${\ensuremath{\mathbf{X}}} b$,
there is always a token in the state $b$,
which is eventually always guaranteed to hold. This illustrates why we need the tokens of the ${\ensuremath{\mathbf{G}}}$-slaves for proving the master formula.
The other option is that ${\ensuremath{\mathbf{G}}}{\ensuremath{\mathbf{X}}} b$ is not in $\mathcal{R}$, and so we need to prove the first disjunct. However, from the slave for ${\ensuremath{\mathbf{G}}}({\ensuremath{\mathbf{X}}} a\vee{\ensuremath{\mathbf{G}}}{\ensuremath{\mathbf{X}}} b)$ we eventually always get only the tokens ${\ensuremath{\mathbf{X}}} a\vee{\ensuremath{\mathbf{G}}}{\ensuremath{\mathbf{X}}} b$, $a\vee{\ensuremath{\mathbf{G}}}{\ensuremath{\mathbf{X}}} b$, and ${\ensuremath{\mathbf{tt}}}$. None of them can prove $a\vee (b\wedge{\ensuremath{\mathbf{G}}}{\ensuremath{\mathbf{X}}} b)$. However, since the slave does not rely on the assumption ${\ensuremath{\mathbf{G}}}{\ensuremath{\mathbf{X}}} b$, we may safely assume it not to hold here. Therefore, we can substitute ${\ensuremath{\mathbf{ff}}}$ for ${\ensuremath{\mathbf{G}}}{\ensuremath{\mathbf{X}}} b$ and after the substitution the tokens turn into ${\ensuremath{\mathbf{X}}} a$, $a$, and ${\ensuremath{\mathbf{tt}}}$. The second one is then trivially sufficient to prove the first disjunct.
\end{example}
\begin{proposition}[Soundness]\label{prop:A-sound}
If $w \in \mathsf{L}(\mathcal{A})$, then $w \models \varphi$. \end{proposition} The key proof idea is that for the slaves of ${\ensuremath{\mathbf{G}}}$-formulae in $\mathcal{R}$, all the tokens eventually always hold true. Since the assumptions also hold true, so does the conclusion $\psi$. By Lemma~\ref{lem:master-local}, $\varphi$ holds true, too.
\begin{proposition}[Completeness]\label{prop:A-complete}
If $w \models \varphi$, then $w \in \mathsf{L}(\mathcal{A})$. \end{proposition} The key idea is that subformulas generated in the master from ${\ensuremath{\mathbf{G}}}$-formulae closely correspond to their slaves' tokens. Further, observe that for an ${\ensuremath{\mathbf{F}}}$-formula $\chi$, its unfolding is a disjunction of $\chi$ and other formulae. Therefore, it is sufficient to prove $\chi$, which can be done directly from $\mathcal{R}$. Similarly, for a $\Gf{\bowtie p}_\mathrm{ext}$-formula $\chi$, its unfolding is just $\chi$ and is thus also provable directly from $\mathcal{R}$.
\mypara{Complexity}
Since the number of Boolean functions over a set of size $n$ is $2^{2^n}$, the size of each automaton is bounded by $2^{2^{|{\ensuremath{\mathsf{sf}}}|}}$, i.e., doubly exponential in the length of the formula. Their product is thus still doubly exponential. Finally, the acceptance condition is polynomial for each fixed $\mathcal{R}\subseteq\mathcal{R}\mathrm{ec}$. Since the whole condition is a disjunction over all possible values of $\mathcal{R}$, it is exponential in the size of the formula, which finishes the proof of Theorem~\ref{thm:auto}.
\section{Verifying strongly connected MDPs against generalized B\"uchi mean-payoff automata} \label{sec:mean-payoff}
\begin{figure}
\caption{Linear constraints $L$ of Proposition~\ref{thm:mp-main}}
\label{eq:xssum}
\label{eq:xa}
\label{eq:rew-inf}
\label{eq:rew-sup}
\label{system-L}
\end{figure}
Theorem~\ref{thm:mec} can be obtained from the following proposition.
\begin{proposition}\label{thm:mp-main}
Let $\mathsf{M}=(S,A,\mathit{Act},\delta,\hat s)$ be a strongly connected MDP, and ${\ensuremath{\mathit{Acc}}}$ an acceptance condition over $S$ given by:
\[
\bigwedge\nolimits_{i=1}^k\Inf(S_i) \quad\wedge\quad \bigwedge\nolimits_{i=1}^m \accMP[\inf]{\bowtie v_i}{r_i} \quad\wedge\quad \bigwedge\nolimits_{i=1}^n \accMP[\sup]{\bowtie u_i}{q_i}
\]
The constraints from Figure~\ref{system-L} have a non-negative solution if and only if
there is a strategy $\sigma$ and
a set of runs $R$ of non-zero probability such that ${\ensuremath{\mathit{Acc}}}$ holds true on all $\omega\in R$.
Moreover, $\sigma$ and $R$ can be chosen so that $R$ has probability~$1$. \end{proposition}
Intuitively, variables $x_{i,a}$ describe the frequencies of using action $a$. Equation (\ref{eq:xa}) is Kirchhoff's law of flow. Equation (\ref{eq:rew-inf}) says that the limit-inferior constraints must be satisfied by all flows, while Equation (\ref{eq:rew-sup}) says that the $i$th limit superior has its own dedicated $i$th flow. Note that $L$ does not depend on the initial state $\hat s$.
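To make this concrete, the following self-contained Python sketch checks these kinds of constraints on a toy two-state MDP. Everything concrete here (the MDP, the action names \texttt{a}, \texttt{b}, the rewards, the numeric tolerance) is our own invented example, not part of the construction.

```python
# Toy illustration of the flow constraints: action frequencies x_a must be
# normalised and satisfy Kirchhoff's law.  The two-state MDP below is invented
# for this sketch only.

# action -> (source state, successor distribution delta(s, a))
ACTIONS = {
    'a': ('s0', {'s1': 1.0}),
    'b': ('s1', {'s0': 0.5, 's1': 0.5}),
}
STATES = ['s0', 's1']

def is_flow(x, eps=1e-9):
    """Normalisation (frequencies sum to 1) plus Kirchhoff's law:
    for every state, inflow equals outflow."""
    if abs(sum(x.values()) - 1.0) > eps:
        return False
    for s in STATES:
        outflow = sum(x[a] for a, (src, _) in ACTIONS.items() if src == s)
        inflow = sum(x[a] * d.get(s, 0.0) for a, (_, d) in ACTIONS.items())
        if abs(inflow - outflow) > eps:
            return False
    return True

def mean_payoff(x, r):
    """Expected reward of the flow x under state reward r; the mean-payoff
    constraints compare this value against the bounds v_i / u_i."""
    return sum(x[a] * r[src] for a, (src, _) in ACTIONS.items())

x = {'a': 1/3, 'b': 2/3}                       # candidate frequencies
print(is_flow(x))                              # True
print(mean_payoff(x, {'s0': 0.0, 's1': 1.0}))  # 0.6666666666666666
```

A feasibility check of the full system $L$ would additionally involve the dedicated flows for the limit-superior constraints; being a linear program, it is decidable in polynomial time by a standard LP solver.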
\begin{proof}[Sketch] Existing results for multi-objective mean-payoff MDPs would only allow us to establish the proposition in the absence of supremum limits, and so we need to extend and combine results of several works to prove the proposition. In the direction $\Rightarrow$, \cite[Corollary 12]{lics15} gives a strategy $\sigma_i$ for every $i$ such that for almost every run $s_0a_0s_1a_1\ldots$ we have $\mathrm{lr}_{\inf}((\mathds{1}_{a_t = a})_{t=0}^\infty) = x_{i,a}$, and in fact the corresponding limit exists. Hence, for the number $p=\sum_{s\in S, a\in \act{s}} r(s)\cdot x_{i,a}$ the predicates $\accMP[\inf]{\ge p}{r}$ and $\accMP[\sup]{\ge p}{r}$ hold almost surely, for any reward function $r$. Hence, our constraints ensure that $\sigma_i$ satisfies $\accMP[\inf]{\bowtie v_j}{r_j}$ for all $j$, and $\accMP[\sup]{\bowtie u_i}{q_i}$. Moreover, $\sigma_i$ is guaranteed to visit every state of $\mathsf{M}$ infinitely often almost surely. The strategy $\sigma$ is then constructed to take these strategies $\sigma_i$, $1\leq i\leq n$, in turn and mimic each one of them for longer and longer periods.
For the direction $\Leftarrow$, we combine the ideas of \cite{lics15,BBC+14,BCFK13} and select solutions to $x_{i,a}$ from ``frequencies'' of actions under the strategy $\sigma$.
\end{proof}
\vspace*{-4mm} \vspace*{-1mm} \section{Conclusions} \label{sec:conclusions}
We have given an algorithm for computing the optimal probability of satisfying an ${\textrm{fLTL}\textsubscript{$\setminus\G\U$}}$ formula in an MDP. The proof relies on a decomposition of the formula into master and slave automata, and on solving a mean-payoff problem in a product MDP.
The obvious next step is to extend the algorithm so that it can handle arbitrary formulae of fLTL. This appears to be a major task, since our present construction relies on the acyclicity of the slave LTSs, a property which is not satisfied for unrestricted formulae~\cite{cav14}. Indeed, since $\Gf{\bowtie p}$-slaves count the number of tokens in each state, this property ensures a bounded number of tokens and thus the finiteness of the slave automata.
\mypara{Acknowledgments.}
{\footnotesize This work is partly supported by the German Research Council (DFG) as part of the Transregional Collaborative Research Center AVACS (SFB/TR 14), by the Czech Science Foundation under grant agreement P202/12/G061, by the EU 7th Framework Programme under grant agreement no. 295261 (MEALS) and 318490 (SENSATION), by the CDZ project 1023 (CAP), by the CAS/SAFEA International Partnership Program for Creative Research Teams, by the EPSRC grant EP/M023656/1, by the People Programme (Marie Curie Actions) of the European Union’s Seventh Framework Programme (FP7/2007–2013) REA Grant No 291734, by the Austrian Science Fund (FWF) S11407-N23 (RiSE/SHiNE), and by the ERC Start Grant (279307: Graph Games). Vojt\v{e}ch Forejt is also affiliated with FI MU, Brno, Czech Republic.}
\vspace*{-3mm}
\ifxa\iundefined\relax\else
\appendix
\section{Propositional reasoning}\label{app:bool}
Intuitively, given a set of formulae $\Phi$ and a formula $\psi$, we say that $\Phi$ propositionally proves $\psi$ if $\psi$ can be deduced from formulae in $\Phi$ using only propositional reasoning. So, for instance, ${\ensuremath{\mathbf{G}}} a$ propositionally implies ${\ensuremath{\mathbf{G}}} a \vee {\ensuremath{\mathbf{G}}} b$, but ${\ensuremath{\mathbf{G}}} a$ does not propositionally imply ${\ensuremath{\mathbf{F}}} a$.
\begin{definition}[Propositional implication and equivalence] A formula of fLTL is {\em non-Boolean} if it is not a conjunction or a disjunction (i.e., if the root of its syntax tree is not $\wedge$ or $\vee$). The set of non-Boolean formulae of fLTL over $Ap$ is denoted by ${\it NB}(Ap)$. A {\em propositional assignment}, or just an {\em assignment}, is a mapping $\mathit{Ass} \colon {\it NB}(Ap) \rightarrow \{0, 1\}$. Given $\varphi \in {\it NB}(Ap)$, we write $\mathit{Ass} \models_P \varphi$ if{}f $\mathit{Ass}(\varphi) = 1$, and extend the relation $\models_P$ to arbitrary formulae by:
\[\begin{array}[t]{lcl}
\mathit{Ass}\models_P \varphi \wedge \psi & \mbox{ if{}f } & \mathit{Ass} \models_P \varphi \text{ and } \mathit{Ass} \models_P \psi \\
\mathit{Ass} \models_P \varphi \vee \psi & \mbox{ if{}f } & \mathit{Ass} \models_P \varphi \text{ or } \mathit{Ass} \models_P \psi \\ \end{array}\]
We say that a set $\Phi$ of fLTL formulae \emph{propositionally proves} an fLTL formula $\psi$, written $\Phi\proves\psi$, if for every assignment $\mathit{Ass}$, $\mathit{Ass}\models_P \bigwedge\Phi$ implies $\mathit{Ass}\models_P \psi$.
Finally, fLTL formulae $\varphi$ and $\psi$ are {\em propositionally equivalent}, denoted by $\varphi \equiv_P \psi$, if $\{\varphi\} \proves \psi$ and $\{\psi\} \proves \varphi$. We denote by $[\varphi]_P$ the equivalence class of $\varphi$ under the equivalence relation $\equiv_P$. \end{definition}
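The relation $\proves$ is decidable by brute-force enumeration of propositional assignments. The following Python sketch is our own illustration (not part of the paper): non-Boolean formulae are represented as opaque strings such as \texttt{'Ga'} (standing for ${\ensuremath{\mathbf{G}}} a$), and Boolean structure as nested \texttt{('and', ., .)} / \texttt{('or', ., .)} tuples.

```python
from itertools import product

def nonbool(phi, acc=None):
    """Collect the non-Boolean subformulae (here: opaque strings) of phi."""
    if acc is None:
        acc = set()
    if isinstance(phi, str):
        acc.add(phi)
    else:
        _, left, right = phi
        nonbool(left, acc)
        nonbool(right, acc)
    return acc

def evalp(phi, ass):
    """Evaluate phi under an assignment ass: non-Boolean formula -> bool."""
    if isinstance(phi, str):
        return ass[phi]
    op, left, right = phi
    if op == 'and':
        return evalp(left, ass) and evalp(right, ass)
    return evalp(left, ass) or evalp(right, ass)

def proves(Phi, psi):
    """Phi |- psi: every assignment satisfying all of Phi satisfies psi."""
    atoms = sorted(set().union(nonbool(psi), *(nonbool(f) for f in Phi)))
    for bits in product([False, True], repeat=len(atoms)):
        ass = dict(zip(atoms, bits))
        if all(evalp(f, ass) for f in Phi) and not evalp(psi, ass):
            return False
    return True

# G a propositionally proves G a \/ G b, but not F a:
print(proves({'Ga'}, ('or', 'Ga', 'Gb')))  # True
print(proves({'Ga'}, 'Fa'))                # False
```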
Observe that $\varphi \equiv_P \psi$ implies that $\varphi$ and $\psi$ are equivalent also as {fLTL} formulae, i.e., for all words $w$, we have $w\models\varphi$ iff $w\models\psi$. By the same reasoning, \begin{align} w\models\bigwedge\Phi\text{ and }\Phi\proves\psi\text{ imply }w\models\psi\,.\label{eq:transitivity} \end{align}
\begin{definition}[Propositional substitution] Let $\psi,\chi$ be fLTL formulae and $\Psi\subseteq{\it NB}(Ap)$. The formula $\psi[\Psi/ \chi]_P$ is inductively defined as follows: \begin{itemize} \item If $\psi = \psi_1 \wedge \psi_2$ then $\psi[\Psi/ \chi]_P = \psi_1[\Psi/\chi]_P \wedge \psi_2[\Psi/ \chi]_P$. \item If $\psi = \psi_1 \vee \psi_2$ then $\psi[\Psi/ \chi]_P = \psi_1[\Psi/\chi]_P \vee \psi_2[\Psi/ \chi]_P$. \item If $\psi$ is a non-Boolean formula and $\psi \in \Psi$ then $\psi[\Psi/ \chi]_P = \chi$, else $\psi[\Psi/ \chi]_P = \psi$. \end{itemize} \end{definition}
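Using a toy encoding of formulae (non-Boolean formulae as opaque strings, Boolean structure as nested \texttt{('and', ., .)} / \texttt{('or', ., .)} tuples; our own illustration, not part of the paper), the substitution $\psi[\Psi/\chi]_P$ is a short recursion over the Boolean structure:

```python
def subst(psi, Psi, chi):
    """psi[Psi/chi]_P: replace those non-Boolean formulae of psi that lie
    in the set Psi by chi; conjunctions and disjunctions are traversed."""
    if isinstance(psi, str):              # non-Boolean formula (opaque atom)
        return chi if psi in Psi else psi
    op, left, right = psi
    return (op, subst(left, Psi, chi), subst(right, Psi, chi))

# (a \/ F a)[{a}/ff]  =  ff \/ F a
print(subst(('or', 'a', 'Fa'), {'a'}, 'ff'))  # ('or', 'ff', 'Fa')
```

Note that the substitution only touches non-Boolean formulae that appear at the Boolean level of $\psi$; occurrences nested below a temporal operator are left untouched, exactly as in the definition.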
The following lemma allows us to work with formulae as Boolean functions over $\mathit{NB}(Ap)$, i.e., as representatives of their propositional equivalence classes.
\begin{alemma} For all formulae $\varphi_1,\varphi_2$ and every letter $\nu \in 2^{Ap}$, if $\varphi_1 \equiv_P \varphi_2$ then ${\ensuremath{\mathsf{Unf}}}(\varphi_1)[\nu] \equiv_P {\ensuremath{\mathsf{Unf}}}(\varphi_2)[\nu]$. \end{alemma} \begin{proof} Observe that every formula $\varphi$ is a positive Boolean combination (i.e., built from conjunctions and disjunctions) of non-Boolean formulae. Since ${\ensuremath{\mathsf{Unf}}}$ and $(\cdot)[\nu]$ both distribute over $\wedge $ and $\vee$, the formula ${\ensuremath{\mathsf{Unf}}}(\varphi)[\nu]$ is obtained by applying a simultaneous substitution to the non-Boolean formulae. (For example, a non-Boolean formula ${\ensuremath{\mathbf{G}}}\psi$ is substituted by ${\ensuremath{\mathsf{Unf}}}(\psi)[\nu] \wedge {\ensuremath{\mathbf{G}}}\psi$.) Let $\varphi[S]$ be the result of the substitution.
Consider two equivalent formulae $\varphi_1 \equiv_P \varphi_2$. Since we apply the same substitution to both sides, the substitution lemma of propositional logic guarantees $\varphi_1[S] \equiv_P \varphi_2[S]$. \end{proof}
\section{Proofs of Section~\ref{sec:automata}}
\begin{reflemma}{lem:unfolding} For every word $w$ and fLTL formula $\varphi$, we have $w\models \varphi\iff \suffix{w}{1} \models ({\ensuremath{\mathsf{Unf}}}(\varphi))[w[0]]$. \end{reflemma}
\begin{proof} Denote $w=\nu v$ where $\nu\subseteq Ap$. We proceed by a straightforward structural induction on $\varphi$. We focus on three representative cases. \begin{itemize} \item $\varphi = a$. Then
\[ \begin{array}{llr}
& \nu v \models a \\[0.1cm] \iff & a \in \nu & \text{(semantics of LTL)} \\[0.1cm] \iff & {\ensuremath{\mathsf{Unf}}}(a)[\nu] = {\ensuremath{\mathbf{tt}}} & \text{(def. of ${\ensuremath{\mathsf{Unf}}}$ and $[\nu]$)} \\[0.1cm] \iff & v \models {\ensuremath{\mathsf{Unf}}}(a)[\nu] & \text{(semantics of LTL)} \\[0.1cm] \end{array} \]
\item $\varphi = {\ensuremath{\mathbf{F}}} \psi$. Then \[\begin{array}{llr}
& \nu v \models {\ensuremath{\mathbf{F}}} \psi \\[0.1cm] \iff & \nu v \models ({\ensuremath{\mathbf{X}}}{\ensuremath{\mathbf{F}}} \psi) \vee \psi & \text{(${\ensuremath{\mathbf{F}}} \psi \equiv {\ensuremath{\mathbf{X}}}{\ensuremath{\mathbf{F}}} \psi \vee \psi$)}\\[0.1cm] \iff & v \models {\ensuremath{\mathbf{F}}} \psi \text{ or } \nu v \models \psi & \text{(semantics of LTL)}\\[0.1cm] \iff & v \models {\ensuremath{\mathbf{F}}} \psi \text{ or } v \models {\ensuremath{\mathsf{Unf}}}(\psi)[\nu] & \text{(ind. hyp.)}\\[0.1cm] \iff & v \models {\ensuremath{\mathbf{F}}} \psi \vee {\ensuremath{\mathsf{Unf}}}(\psi)[\nu] & \text{(semantics of LTL)}\\[0.1cm] \iff & v \models {\ensuremath{\mathsf{Unf}}}({\ensuremath{\mathbf{F}}}\psi)[\nu] & \text{(def. of ${\ensuremath{\mathsf{Unf}}}$)} \end{array}\]
\item $\varphi = \Gf{\bowtie p}_{\inf} \psi$. Then \[\begin{array}{llr}
& \nu v \models \Gf{\bowtie p}_{\inf} \psi \\[0.1cm] \iff & \displaystyle \liminf_{i\to \infty} \frac{1}{i} \Big(\mathds{1}_{\nu v\models\psi}+\sum_{j=0}^{i-2} \mathds{1}_{\suffix{v}{j}\models\psi}\Big) \bowtie p & \text{(semantics of LTL)}\\[0.1cm] \iff & \displaystyle \Big(\lim_{i\to \infty} \frac{1}{i} \mathds{1}_{\nu v\models\psi}+\liminf_{i\to \infty} \frac{1}{i} \sum_{j=0}^{i-2} \mathds{1}_{\suffix{v}{j}\models\psi}\Big) \bowtie p &\\[0.1cm] \iff & \displaystyle \Big(0+\liminf_{i\to \infty} \frac{1}{i} \sum_{j=0}^{i-1} \mathds{1}_{\suffix{v}{j}\models\psi}\Big) \bowtie p & \text{(one bounded term vanishes)}\\[0.1cm] \iff & v \models \Gf{\bowtie p}_{\inf} \psi & \text{(semantics of LTL)}\\[0.1cm] \iff & v \models {\ensuremath{\mathsf{Unf}}}(\Gf{\bowtie p}_{\inf} \psi )[\nu] & \text{(def. of ${\ensuremath{\mathsf{Unf}}}$)} \end{array}\] \end{itemize} \qed \end{proof}
\begin{reflemma}{lem:master-local} Let $w$ be a word and $\mathcal{M}(w)=\varphi_0\varphi_1\cdots$ the corresponding run. Then for all $n\in\mathbb N$, we have $w\models\varphi$ if and only if $\suffix{w}{n} \models\varphi_n$. \end{reflemma} \begin{proof} We proceed by induction on $n$. For $n=0$, we conclude by $\varphi_0=\varphi$. Let now $n\geq1$ and denote $w=u\,\nu\,v$ where $\nu\subseteq Ap$ and $v=\suffix{w}{n}$. Then we have
\[\begin{array}{llr}
& v \models \varphi_n \qquad\qquad\qquad\\ \iff & v \models {\ensuremath{\mathsf{Unf}}}(\varphi_{n-1})[\nu] & \text{(def. of $\delta^\mathcal{M}$)} \\ \iff & \nu\,v \models \varphi_{n-1} & \text{(Lemma~\ref{lem:unfolding})} \\ \iff & u\,\nu\,v\models \varphi & \text{(ind. hyp.)} \end{array}\] \qed \end{proof}
\begin{definition} The \emph{threshold} $T(w)$ of a word $w$ is the smallest $T\in\mathbb{N}$ such that for all $t\geq T$ \begin{itemize} \item for all $\psi\in\mathcal{R}(w)$, we have $\suffix wt\models\psi$,\footnote{This condition is actually non-trivial only for ${\ensuremath{\mathbf{G}}}$-formulae, other formulae of $\mathcal{R}(w)$ hold at all positions.} \item for all $\psi\in\mathcal{R}\mathrm{ec}\setminus\mathcal{R}(w)$, we have $\suffix wt\not\models\psi$. \end{itemize} \end{definition}
Then we have $\suffix w {T(w)}\models\rho$ for every $\rho\in\mathcal{R}(w)$ (all ${\ensuremath{\mathbf{G}}}$-formulae that will ever hold do hold already) and $\suffix w {T(w)}\not\models\rho$ for every $\rho\in\mathcal{R}\mathrm{ec}\setminus\mathcal{R}(w)$ (none of the ${\ensuremath{\mathbf{F}}}$-formulae that hold only finitely often holds any more).
\begin{alemma}\label{lem:threshold} For every word $w$ and $t\geq T(w)$, we have that $\suffix w t\models \xi$ if{}f $\exists t' :\mathcal{R}(w)\cap{\ensuremath{\mathsf{sf}}}(\xi) \proves \mathcal{S}(\xi)(\infix w t {t'})$. \end{alemma} \begin{proof} By similar arguments as in Lemma~\ref{lem:master-local}, we get that for the run of the slave $\mathcal{S}(\xi)(\suffix wt) = \xi_t \xi_{t+1} \cdots$ we have $\suffix wt \models \xi \iff \suffix w{t'} \models \xi_{t'}$. Indeed, not unfolding elements of $\mathcal{R}\mathrm{ec}$ is here equivalent to unfolding them since for every $\psi\in\mathcal{R}\mathrm{ec}$ we have $\suffix wu\models\psi$ iff $\suffix wu\models {\ensuremath{\mathbf{X}}}\psi$, for all $u\geq T$. Moreover, when reaching the sink at time $t'$, we know that $\xi_{t'}$ is a positive Boolean combination over $\mathcal{R}\mathrm{ec} \cap {\ensuremath{\mathsf{sf}}}(\xi)$. Therefore, $\suffix wt \models \xi \iff \suffix w{t'} \models\xi_{t'}\iff \mathcal{R}(w)\proves\xi_{t'}\iff \mathcal{R}(w)\cap{\ensuremath{\mathsf{sf}}}(\xi)\proves\xi_{t'}$. \qed \end{proof}
\begin{reflemma}{lem:slave-promises} Let us fix $\xi \in {\ensuremath{\mathsf{sf}}}$ and a word~$w$. For any $\mathcal{R} \subseteq \mathcal{R}\mathrm{ec}$, we denote by $\mathit{Sat}(\mathcal{R})$ the set $\{i\in\mathbb{N}\mid \exists j\geq i:\mathcal{R} \proves \mathcal{S}(\xi)(\infix wij) \}$. Then for any $\underline{\mathcal{R}},\overline{\mathcal{R}} \subseteq \mathcal{R}\mathrm{ec}$ such that $\underline{\mathcal{R}} \subseteq \mathcal{R}(w) \subseteq \overline{\mathcal{R}}$, we have \begin{eqnarray} \mathit{Sat}(\underline{\mathcal{R}}) \text{ is infinite} \implies & w \models {\ensuremath{\mathbf{G}}}{\ensuremath{\mathbf{F}}}\xi \implies & \mathit{Sat}(\overline{\mathcal{R}}) \text{ is infinite} \label{eq:sl1}\\ \mathbb{N} \setminus \mathit{Sat}(\underline{\mathcal{R}}) \text{ is finite} \implies & w \models {\ensuremath{\mathbf{F}}}{\ensuremath{\mathbf{G}}}\xi \implies & \mathbb{N} \setminus \mathit{Sat}(\overline{\mathcal{R}}) \text{ is finite} \label{eq:sl2} \\ \mathrm{lr}_{\mathrm{ext}}(\big(\mathds{1}_{\mathit{Sat}(\underline{\mathcal{R}})}(i)\big)_{i=0}^\infty) \bowtie p \implies & w \models \Gf{\bowtie p}_\mathrm{ext} \xi \implies & \mathrm{lr}_{\mathrm{ext}}(\big(\mathds{1}_{\mathit{Sat}(\overline{\mathcal{R}})}(i)\big)_{i=0}^\infty) \bowtie p \qquad\label{eq:sl3} \end{eqnarray} Moreover, the result holds also for $\underline{\mathcal{R}} \cap {\ensuremath{\mathsf{sf}}}(\xi) \subseteq \mathcal{R}(w) \cap {\ensuremath{\mathsf{sf}}}(\xi) \subseteq \overline{\mathcal{R}} \cap {\ensuremath{\mathsf{sf}}}(\xi)$. \end{reflemma}
\begin{proof}
For (\ref{eq:sl1}), let first $\mathit{Sat}(\underline{\mathcal{R}})$ be infinite. Then also $\mathit{Sat}'(\underline{\mathcal{R}}) := \{n\in\mathit{Sat}(\underline{\mathcal{R}}) \mid n\geq T(w)\}$ is infinite. Therefore, infinitely many positions $i$ of $w$ satisfy $\exists j\geq i:\underline{\mathcal{R}} \proves \mathcal{S}(\xi)(\infix wij)$. Observe that elements of $\mathcal{R}\mathrm{ec}$ are never under the scope of negation in the states of $Q$, hence $\proves$ is monotonic w.r.t.\ adding assumptions from $\mathcal{R}\mathrm{ec}$. Thus infinitely many positions $i$ of $w$ also satisfy $\exists j\geq i:\mathcal{R}(w) \proves \mathcal{S}(\xi)(\infix wij)$, and by Lemma~\ref{lem:threshold} these positions satisfy $\xi$.
Let now $w\models {\ensuremath{\mathbf{G}}}{\ensuremath{\mathbf{F}}}\xi$. Then $I:=\{i\in\mathbb{N}\mid i\geq T(w) \text{ and }\suffix w i \models \xi\}$ is infinite, and by Lemma~\ref{lem:threshold} there are infinitely many positions $i$ of $w$ satisfying $\exists j\geq i:{\mathcal{R}(w)} \proves \mathcal{S}(\xi)(\infix wij)$. By the monotonicity of $\proves$ above we can replace $\mathcal{R}(w)$ by $\overline{\mathcal{R}}$.
Moreover, if we only assume $\underline{\mathcal{R}} \cap {\ensuremath{\mathsf{sf}}}(\xi) \subseteq \mathcal{R}(w) \cap {\ensuremath{\mathsf{sf}}}(\xi) \subseteq \overline{\mathcal{R}} \cap {\ensuremath{\mathsf{sf}}}(\xi)$ both statements remain valid.
Indeed, for every set $\mathcal{R}$ of formulae and every formula $\chi$ reachable from $\xi$, we have $\mathcal{R}\proves\chi$ if{}f $\mathcal{R}\cap{\ensuremath{\mathsf{sf}}}(\xi)\proves\chi$ since the only non-Boolean formulae produced by $\xi[\cdot]$ are subformulas of $\xi$.
For (\ref{eq:sl2}), the argumentation is the same, replacing ``infinite'' and ``infinitely many'' by ``co-finite'' and ``almost all''. For (\ref{eq:sl3}), the sequences can only differ in a finite prefix. Moreover, if we only assume $\underline{\mathcal{R}} \cap {\ensuremath{\mathsf{sf}}}(\xi) \subseteq \mathcal{R}(w) \cap {\ensuremath{\mathsf{sf}}}(\xi) \subseteq \overline{\mathcal{R}} \cap {\ensuremath{\mathsf{sf}}}(\xi)$, apart from the finite prefix the sequence $\mathds{1}_{\mathit{Sat}(\underline{\mathcal{R}})}(i)$ is pointwise less or equal to $\mathds{1}_{\suffix wi\models\xi}$, which is again pointwise less or equal to $\mathds{1}_{\mathit{Sat}(\overline{\mathcal{R}})}(i)$. \qed \end{proof}
\begin{reflemma}{lem:slaves-acc}
Let $\xi \in {\ensuremath{\mathsf{sf}}}$, $w$, and $\underline{\mathcal{R}},\overline{\mathcal{R}} \subseteq \mathcal{R}\mathrm{ec}$ be such that $\underline{\mathcal{R}} \subseteq \mathcal{R}(w) \subseteq \overline{\mathcal{R}}$. Then \begin{eqnarray} w \in \mathsf{L}(\mathcal{S}_{\G\F}(\xi,\underline{\mathcal{R}})) \implies & w \models {\ensuremath{\mathbf{G}}}{\ensuremath{\mathbf{F}}} \xi & \implies w \in \mathsf{L}(\mathcal{S}_{\G\F}(\xi,\overline{\mathcal{R}})) \label{eq:sl4}\\ w \in \mathsf{L}(\mathcal{S}_{\F\G}(\xi,\underline{\mathcal{R}})) \implies & w \models {\ensuremath{\mathbf{F}}}{\ensuremath{\mathbf{G}}} \xi & \implies w \in \mathsf{L}(\mathcal{S}_{\F\G}(\xi,\overline{\mathcal{R}})) \label{eq:sl5}\\ w \in \mathsf{L}(\mathcal{S}_{\Gf{\bowtie p}_\mathrm{ext}}(\xi,\underline{\mathcal{R}})) \implies & w \models \Gf{\bowtie p}_\mathrm{ext} \xi & \implies w \in \mathsf{L}(\mathcal{S}_{\Gf{\bowtie p}_\mathrm{ext}}(\xi,\overline{\mathcal{R}})) \label{eq:sl6} \end{eqnarray}
Moreover, the result holds also for $\underline{\mathcal{R}} \cap {\ensuremath{\mathsf{sf}}}(\xi) \subseteq \mathcal{R}(w) \cap {\ensuremath{\mathsf{sf}}}(\xi) \subseteq \overline{\mathcal{R}} \cap {\ensuremath{\mathsf{sf}}}(\xi)$. \end{reflemma} \begin{proof} Due to Lemma~\ref{lem:slave-promises}, it suffices to prove for the given $\xi$ and $w$ and for any $\mathcal{R}$ that \begin{align}
\mathit{Sat}({\mathcal{R}}) \text{ is infinite} & \iff w \in \mathsf{L}(\mathcal{S}_{\G\F}(\xi,{\mathcal{R}})) \label{eq:sl11}\\
\mathbb{N} \setminus \mathit{Sat}(\mathcal{R}) \text{ is finite} & \iff w \in \mathsf{L}(\mathcal{S}_{\F\G}(\xi, \mathcal{R})) \label{eq:sl12}\\
\mathrm{lr}_{\mathrm{ext}}(\big(\mathds{1}_{\mathit{Sat}(\mathcal{R})}(i)\big)_{i=0}^\infty) \bowtie p & \iff w \in \mathsf{L}(\mathcal{S}_{\Gf{\bowtie p}_\mathrm{ext}}(\xi, \mathcal{R})) \label{eq:sl13} \end{align}
For (\ref{eq:sl11}), we must prove that there are infinitely many positions from which the run ends in an accepting sink if{}f there are infinitely many positions with a token in an accepting sink. To this end, observe that to each position $j$ with a token in an \emph{accepting} sink $q$ (i.e., $\mathcal{R}\proves q$) we can assign a set $\mathit{EndIn}(j,q)$ of positions $i$ such that $\mathcal{S}_{\G\F}(\xi)(\infix w ij)=q$. On the one hand, each $i$ is in exactly one $\mathit{EndIn}(j,q)$ since the slave transition systems are acyclic and each path inevitably ends in a sink. On the other hand, each $\mathit{EndIn}(j,q)$ is finite, again due to the acyclicity. Consequently, $\mathit{Sat}({\mathcal{R}})$ is infinite if{}f $\sum_{j,q}|\mathit{EndIn}(j,q)|$ is infinite if{}f the number of non-empty sets $\mathit{EndIn}(j,q)$ is infinite if{}f $\mathcal{S}_{\G\F}$ accepts.
For (\ref{eq:sl12}), the argument is analogous, but we have to consider $\mathit{EndIn}(j,q)$ for \emph{rejecting} sinks $q$, i.e., $\mathcal{R}\not\proves q$. Then $\mathit{Sat}({\mathcal{R}})$ is co-finite if{}f $\sum_{j,q}|\mathit{EndIn}(j,q)|$ is finite if{}f the number of non-empty sets $\mathit{EndIn}(j,q)$ is finite if{}f $\mathcal{S}_{\F\G}$ accepts.
For (\ref{eq:sl13}), observe that in $\mathcal{S}_{\Gf{\bowtie p}_\mathrm{ext}}$ the precise number of tokens is preserved in each state at every point of time. Therefore, each successful run corresponds exactly to one $1$ in the total reward. In order to prove that both sequences have the same $\liminf/\limsup$, we need to prove that the length of each run (the difference between the element's positions in the two sequences) is bounded. This follows from the acyclicity of the automaton. \qed \end{proof}
\begin{reflemma}{lem:slaves-product} For $w$ and $\mathcal{R} \subseteq \mathcal{R}\mathrm{ec}$, we have \begin{description} \item[(soundness)] whenever
$w \in \mathsf{L}(\mathcal{P}({\mathcal{R}}))$ then $\mathcal{R}\subseteq\mathcal{R}(w)$ and hence
\[w \models \bigwedge_{{\ensuremath{\mathbf{F}}}\xi \in \mathcal{R}} {\ensuremath{\mathbf{G}}}{\ensuremath{\mathbf{F}}} \xi \land \bigwedge_{{\ensuremath{\mathbf{G}}}\xi \in \mathcal{R}} {\ensuremath{\mathbf{F}}}{\ensuremath{\mathbf{G}}} \xi \land \bigwedge_{\Gf{\bowtie p}_\mathrm{ext} \xi \in \mathcal{R}} \Gf{\bowtie p}_\mathrm{ext} \xi\] \item[(completeness)] $w \in \mathsf{L}(\mathcal{P}({\mathcal{R}(w)}))$. \end{description}
\end{reflemma} \begin{proof} As to soundness, let $w \in \mathsf{L}(\mathcal{P}({\mathcal{R}}))$. Consider the dag on $\mathcal{R}$ given by an edge $(\chi,\chi')$ if $\chi'\in{\ensuremath{\mathsf{sf}}}(\chi)\setminus\{\chi\}$. We prove the right-hand side of the implication for each formula $\xi\in\mathcal{R}$ by induction on the distance $d$ to a leaf in the dag.
Let $d=0$ and consider $\chi={\ensuremath{\mathbf{F}}}\xi$; the other cases are analogous. Then $\xi$ does not contain any subformula from $\mathcal{R}$. Therefore, not only $w\in\mathsf{L}(\mathcal{S}_{\G\F}(\xi,\mathcal{R}))$, but also $w\in\mathsf{L}(\mathcal{S}_{\G\F}(\xi,\emptyset))$. Since $\emptyset\subseteq\mathcal{R}(w)$, Lemma~\ref{lem:slaves-acc} (part ``Moreover'') yields $w\models{\ensuremath{\mathbf{G}}}{\ensuremath{\mathbf{F}}}\xi$.
Let $d>0$ and $\chi={\ensuremath{\mathbf{F}}}\xi$; the other cases are again analogous. We have not only $w\in\mathsf{L}(\mathcal{S}_{\G\F}(\xi,\mathcal{R}))$, but also $w\in\mathsf{L}(\mathcal{S}_{\G\F}(\xi,\mathcal{R}\cap{\ensuremath{\mathsf{sf}}}(\xi)))$. By the induction hypothesis, $w\models\mathcal{R}\cap{\ensuremath{\mathsf{sf}}}(\xi)$. Therefore, $\mathcal{R}\cap{\ensuremath{\mathsf{sf}}}(\xi)\subseteq\mathcal{R}(w)$ and thus Lemma~\ref{lem:slaves-acc} yields $w\models{\ensuremath{\mathbf{G}}}{\ensuremath{\mathbf{F}}}\xi$.
As to completeness, we prove that $w\in\mathsf{L}(\mathcal{S}_{\G\F}(\xi,\mathcal{R}(w)))$ for ${\ensuremath{\mathbf{F}}}\xi\in\mathcal{R}(w)$; the proof for other types of automata is analogous. Since ${\ensuremath{\mathbf{F}}}\xi\in\mathcal{R}(w)$ we have $w\models{\ensuremath{\mathbf{G}}}{\ensuremath{\mathbf{F}}}\xi$. By Lemma~\ref{lem:slaves-acc} we have $w \in \mathsf{L}(\mathcal{S}_{\G\F}(\xi,\mathcal{R}(w)))$. \qed \end{proof}
We call the left-hand side of $\proves$ in the acceptance condition ``extended assumptions'' since it is a conjunction of assumptions $\mathcal{R}$ extended by $\Psi_\xi[(\mathcal{R}\mathrm{ec}\setminus\mathcal{R})/{\ensuremath{\mathbf{ff}}}]$ for each ${\ensuremath{\mathbf{G}}}\xi\in\mathcal{R}$. We prove that the extended assumptions hold at almost all positions:
\begin{alemma}\label{lem:tokens} For every word $w$ accepted with respect to $\mathcal{R}$, every formula ${\ensuremath{\mathbf{G}}}\xi \in \mathcal{R}$, all $t\geq T(w)$, and all $\tau\geq t$ such that $\psi:=\mathcal{S}_{\G\F}(\xi)(\infix w t \tau)$ is defined, we have that $\suffix w\tau \models \psi[\mathcal{R}\mathrm{ec}\setminus\mathcal{R}/{\ensuremath{\mathbf{ff}}}]$. \end{alemma} \begin{proof} Any token born at time $t\geq T(w)$ will end up at some time $\delta \geq t$ in an accepting sink $s_\delta$. Since $\suffix w {\delta}\models \mathcal{R}(w)$ and $\mathcal{R}(w)\proves s_\delta$, we have $\suffix w {\delta}\models s_\delta$. Since also $\suffix w\tau\models \mathcal{R}(w)$ for all $\tau\in[t,\delta]$, we obtain also $\suffix w \tau\models \psi$ by a similar argument as in Lemma~\ref{lem:threshold}.
Moreover, since $\mathcal{R}(w)\proves s_\delta$, by propositional calculus $\mathcal{R}(w)\proves s_\delta[\mathcal{R}\mathrm{ec}\setminus\mathcal{R}(w)/{\ensuremath{\mathbf{ff}}}]$ and $\suffix w\delta\models s_\delta[\mathcal{R}\mathrm{ec}\setminus\mathcal{R}(w)/{\ensuremath{\mathbf{ff}}}]$, and similarly we obtain $\suffix w {\tau}\models \psi[\mathcal{R}\mathrm{ec}\setminus\mathcal{R}(w)/{\ensuremath{\mathbf{ff}}}]$.
\qed \end{proof}
\begin{refproposition}{prop:A-sound}
If $w \in \mathsf{L}(\mathcal{A})$, then $w \models \varphi$. \end{refproposition} \begin{proof} If $w \in \mathsf{L}(\mathcal{A})$ and it accepts by a disjunct in its acceptance conditions related to assumptions $\mathcal{R} \subseteq \mathcal{R}\mathrm{ec}$, then for almost all positions $t$ when visiting a state $(\psi,(\Psi_\xi)_{\xi\in \mathcal{R}\mathrm{ec}})$ we have \[\mathcal{R}\cup \bigcup_{{\ensuremath{\mathbf{G}}}\xi\in\mathcal{R}} \Psi_\xi[\mathcal{R}\mathrm{ec}\setminus\mathcal{R}/{\ensuremath{\mathbf{ff}}}] \proves \psi\] and, moreover by Lemma~\ref{lem:slaves-product} and \ref{lem:tokens}, we also have \[\suffix w t\models\mathcal{R}\cup \bigcup_{{\ensuremath{\mathbf{G}}}\xi\in\mathcal{R}} \Psi_\xi[\mathcal{R}\mathrm{ec}\setminus\mathcal{R}/{\ensuremath{\mathbf{ff}}}] \] yielding together by (\ref{eq:transitivity}) \[\suffix w t\models \psi\] which by Lemma~\ref{lem:master-local} gives $w\models \varphi$. \qed \end{proof}
\begin{refproposition}{prop:A-complete}
If $w \models \varphi$, then $w \in \mathsf{L}(\mathcal{A})$. \end{refproposition} \begin{proof} Let $w$ be a word. Then ${\ensuremath{\mathit{Acc}}}(\mathcal{R}(w))$ is satisfied by Lemma~\ref{lem:slaves-product}. We show that ${\ensuremath{\mathit{Acc}}}_{\mathcal{M}}(\mathcal{R}(w))$ is satisfied, too. In other words, we prove that for almost all positions $t$ when visiting a state $(\psi,(\Psi_\xi)_{\xi\in \mathcal{R}\mathrm{ec}})$ we have \[\mathcal{R}(w)\cup \bigcup_{{\ensuremath{\mathbf{G}}}\xi\in\mathcal{R}(w)} \Psi_\xi[\mathcal{R}\mathrm{ec}\setminus\mathcal{R}(w)/{\ensuremath{\mathbf{ff}}}] \proves \psi\]
Since both $\psi$ and each element of each $\Psi_\xi$ are actually Boolean functions, we choose formulae that are convenient representations thereof. Namely, we consider the formula generated exactly from $\varphi$ or $\xi$ using the transition functions $\delta^\mathcal{M}$ or $\delta^\mathcal{S}$, respectively. Therefore, each occurrence of ${\ensuremath{\mathbf{G}}}\xi\in{\ensuremath{\mathsf{sf}}}(\varphi)$ corresponds after reading a finite word $v$ to some occurrence of $\psi'\in{\ensuremath{\mathsf{sf}}}(\psi)$ where $\psi'={\ensuremath{\mathbf{G}}}\xi\wedge \bigwedge_i \xi_i$ and $\xi_i=\delta^\mathcal{M}(\xi,v_i)$ for some infix $v_i$ of $w$; we call such a formula $\psi'$ \emph{derived ${\ensuremath{\mathbf{G}}}$-subformula}. Similarly, reading $v$ transforms ${\ensuremath{\mathbf{F}}}\xi$ into a \emph{derived ${\ensuremath{\mathbf{F}}}$-subformula} ${\ensuremath{\mathbf{F}}}\xi\vee\bigvee_i \xi_i$. Finally, similarly for $\Gf{\bowtie}_\mathrm{ext}$- and ${\ensuremath{\mathbf{U}}}$-formulae. Note that every derived $\Gf{\bowtie p}_\mathrm{ext}$-formula is always of the form $\Gf{\bowtie p}_\mathrm{ext} \xi \wedge\bigwedge {\ensuremath{\mathbf{tt}}}$.
We consider positions large enough so that \begin{itemize}
\item they are greater than $T(w)+|Q|$ (here $|Q|$ ensures that tokens born before $T(w)$ do not exist any more), and \item all the satisfied ${\ensuremath{\mathbf{U}}}$-formulae have their second argument already satisfied, and \item $\psi$ is a Boolean combination over derived formulae since all outer literals and ${\ensuremath{\mathbf{X}}}$-operators have been already removed through repetitive application of $[\cdot]$. \end{itemize}
We prove that each derived formula $\psi'$ (in $\psi$) that currently holds is also provable from the extended assumptions. Since $\psi$ holds, this implies that also the whole $\psi$ is provable from the extended assumptions. We proceed by structural induction.
First, let $\psi'$ be a derived ${\ensuremath{\mathbf{G}}}$-subformula ${\ensuremath{\mathbf{G}}}\xi\wedge \bigwedge_i \xi_i$. Since $\psi'$ holds, ${\ensuremath{\mathbf{G}}}\xi$ holds; hence ${\ensuremath{\mathbf{G}}}\xi\in\mathcal{R}(w)$ and thus $\mathcal{R}(w)\proves{\ensuremath{\mathbf{G}}}\xi$. Further, each $\xi_i$ corresponds to a formula $\psi_i$ either in $\Psi_\xi$ or in a sink, which is accepting since $\xi_i$ holds. This correspondence mapping is very close to the identity, except that \begin{itemize} \item each derived ${\ensuremath{\mathbf{F}}}$-formula ${\ensuremath{\mathbf{F}}}\chi\vee\bigvee_i \chi_i$ is mapped to ${\ensuremath{\mathbf{F}}}\chi$ since $\mathcal{S}(\xi)$ does not unfold ${\ensuremath{\mathbf{F}}}$, and \item each derived ${\ensuremath{\mathbf{G}}}$-formula ${\ensuremath{\mathbf{G}}}\chi \wedge \bigwedge_i \chi_i$ is mapped to ${\ensuremath{\mathbf{G}}}\chi$ since $\mathcal{S}(\xi)$ does not unfold ${\ensuremath{\mathbf{G}}}$; moreover, each $\chi_i$ again corresponds in the same way to a formula in $\Psi_\chi$ or an accepting sink of $\mathcal{S}_{\G\F}(\mathcal{R}(w),\chi)$ by the induction hypothesis. \end{itemize} If we could replace each derived formula in $\xi_i$ by its simple image under the correspondence mapping, we would have $\psi_i\proves\xi_i$ (and since $\psi_i$ is provable from the assumptions, being either a token or an accepting sink, we could conclude). 
Therefore it remains to prove all the derived formulae: \begin{itemize} \item ${\ensuremath{\mathbf{G}}}\chi \wedge \bigwedge_i \chi_i$ that holds can be proved by the induction hypothesis, \item ${\ensuremath{\mathbf{G}}}\chi \wedge \bigwedge_i \chi_i$ that does not hold is proved from ${\ensuremath{\mathbf{G}}}\chi[\mathcal{R}\mathrm{ec}\setminus\mathcal{R}(w)/{\ensuremath{\mathbf{ff}}}]={\ensuremath{\mathbf{ff}}}$, \item ${\ensuremath{\mathbf{F}}}\chi\vee\bigvee_i \chi_i$ that holds is proved from ${\ensuremath{\mathbf{F}}}\chi\in\mathcal{R}(w)$, \item ${\ensuremath{\mathbf{F}}}\chi\vee\bigvee_i \chi_i$ that does not hold is proved from ${\ensuremath{\mathbf{F}}}\chi[\mathcal{R}\mathrm{ec}\setminus\mathcal{R}(w)/{\ensuremath{\mathbf{ff}}}]={\ensuremath{\mathbf{ff}}}$. \end{itemize}
Second, $\psi'=\Gf{\bowtie p}_\mathrm{ext}\xi\wedge\bigwedge{\ensuremath{\mathbf{tt}}}$ is proved directly from $\mathcal{R}(w)$.
Third, let $\psi'$ be a derived ${\ensuremath{\mathbf{F}}}$-subformula ${\ensuremath{\mathbf{F}}}\xi\vee \bigvee_i \xi_i$ such that ${\ensuremath{\mathbf{F}}}\xi$ holds. Then ${\ensuremath{\mathbf{F}}}\xi\in\mathcal{R}(w)$ and thus $\mathcal{R}(w)\proves{\ensuremath{\mathbf{F}}}\xi$.
Finally, let $\psi'$ be a derived ${\ensuremath{\mathbf{F}}}$-subformula ${\ensuremath{\mathbf{F}}}\xi\vee \bigvee_i \xi_i$ such that ${\ensuremath{\mathbf{F}}}\xi$ does not hold (i.e., some of the $\xi_i$'s hold), or a derived ${\ensuremath{\mathbf{U}}}$-subformula, where thus one of the disjuncts not containing this until holds (since all satisfied untils have their second argument already satisfied). Then we conclude by the induction hypothesis. \qed \end{proof}
\section{Probability space of a Markov chain}
\label{app:mc} For a Markov chain $N=(L,P,\hat \ell)$ we define the probability space $(\mathit{Run}, \mathcal{F}, \mathbb{P})$ where \begin{itemize}
\item $\mathit{Run}$ contains all runs initiated in $\hat \ell$, i.e.
all infinite sequences $\ell_0\ell_1\ldots$ satisfying $\ell_0=\hat \ell$ and $P(\ell_i,\ell_{i+1}) > 0$ for all $i\ge 0$.
\item $\cal F$ is the $\sigma$-field generated by basic cylinders $\mathit{Cyl}(h) := \{\omega \mid \omega \text{ starts with } h\}$ for all $h$ which are a prefix of an element in $\mathit{Run}$.
\item $\mathbb{P}$ is the unique probability function such that for $h=\ell_0\ell_1\ldots \ell_n$ we have $\mathbb{P}(\mathit{Cyl}(h)) = \prod_{i=0}^{n-1} P(\ell_i,\ell_{i+1})$. \end{itemize}
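As a toy illustration of the cylinder measure (our own example, not taken from the paper): take $L=\{\ell_0,\ell_1\}$, $\hat\ell = \ell_0$, $P(\ell_0,\ell_0)=\frac{1}{3}$, $P(\ell_0,\ell_1)=\frac{2}{3}$ and $P(\ell_1,\ell_1)=1$. Then

```latex
\[
  \mathbb{P}\bigl(\mathit{Cyl}(\ell_0\,\ell_0\,\ell_1)\bigr)
  \;=\; P(\ell_0,\ell_0)\cdot P(\ell_0,\ell_1)
  \;=\; \tfrac{1}{3}\cdot\tfrac{2}{3}
  \;=\; \tfrac{2}{9}.
\]
```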
When we say ``almost surely'' or ``almost all runs'', it refers to an event happening with probability 1 according to the relevant measure (which is usually clear from the context).
\section{Proof of Proposition~\ref{thm:mp-main}} In the rest of this section we prove Proposition~\ref{thm:mp-main}. To simplify the notation, for an action $a$ and reward structure $r$ we will use $\mathrm{lr}_{\mathrm{ext}}(a)$ and $\mathrm{lr}_{\mathrm{ext}}(r)$ for random variables that on a run $\omega = s_0a_0s_1a_1\ldots$ return $\mathrm{lr}_{\mathrm{ext}}(\mathds{1}_{a_0=a}\mathds{1}_{a_1=a}\ldots)$ and $\mathrm{lr}_{\mathrm{ext}}(r_{s_0}r_{s_1}\ldots)$, respectively.
The direction $\Rightarrow$ can be proved as follows. For any fixed $i$, by \cite[Corollary 12]{lics15} there is a strategy $\sigma_i$ such that $\lrLim{a} = x_{i,a}$ almost surely, and hence $\sigma_i$ almost surely yields reward $\sum_{a\in A} r(a)\cdot x_{i,a}$ w.r.t.\ any reward function $r$. Moreover, $\sigma_i$ visits every state of $\mathsf{M}$ infinitely often almost surely.
We now construct $\sigma$ inductively as follows. The strategy will keep the current ``mode'', which is a number from $1$ to $n$, and an unbounded ``timer'' ranging over natural numbers. Suppose we have defined $\sigma$ for history $h$, but not for any other history starting with $h$. Suppose that in the history before $h$ the strategy $\sigma$ was in mode $\ell$. Then in $h$ the mode is incremented by $1$ (modulo $n$), yielding the mode $\ell'$, and the strategy $\sigma$ starts playing as
$\sigma_{\ell'}$. It does so for $2^{|h|}$ steps, yielding a history $h'$. Afterwards, we apply the inductive definition again with $h'$ in place of $h$.
\begin{lemma} The strategy $\sigma$ satisfies the requirements of Proposition~\ref{thm:mp-main}. \end{lemma} \begin{proof}[Sketch] Firstly, the generalised B\"uchi condition is almost surely satisfied because it is satisfied under any $\sigma_i$, and $\sigma$ will eventually mimic each $\sigma_i$ for arbitrarily long stretches.
Let us continue with the claim for the mean payoffs with supremum limits. Fix $1\le i\le n$. We will show that for every $\varepsilon>0$, almost every run $\omega$ has a prefix $s_0a_0s_1a_1\ldots s_\ell$ with \[ \frac{1}{\ell}\sum_{j=0}^{\ell-1} q_i(a_j) \ge u_i - \varepsilon \]
By the properties of $\sigma_i$, for any $s\in S$ there is a number $k_{s,\varepsilon}$ and a set of runs $T$ with $\Pr{\sigma_i}{s}{T} \ge 1/2$ such that for every $s'_0a'_0s'_1a'_1\ldots \in T$ and every $k' \ge k_{s,\varepsilon}$ we have \[
\frac{1}{k'}\sum_{\ell=0}^{k'-1} q_i(a'_\ell) \ge u_i - \varepsilon/2 \]
Let $\alpha$ be the smallest assigned reward. Then there must be a number $J_\varepsilon$ such that \[
J_\varepsilon \cdot (u_i - \alpha) < 2^{J_\varepsilon} \cdot \varepsilon/2 \] Intuitively, $J_\varepsilon$ is chosen so that no matter what the history is in the first $J_\varepsilon$ steps, if the remainder has length at least $2^{J_\varepsilon}$ steps and gives partial average at least $u_i - \varepsilon/2$, we know that the whole history gives a partial average at least $u_i - \varepsilon$.
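This intuition can be verified by a direct computation (a sketch we add for completeness): if the first $J_\varepsilon$ steps each contribute reward at least $\alpha$ and the remaining $N \ge 2^{J_\varepsilon}$ steps have partial average at least $u_i - \varepsilon/2$, then

```latex
\[
  \frac{J_\varepsilon\,\alpha + N\,(u_i - \varepsilon/2)}{J_\varepsilon + N}
  - (u_i - \varepsilon)
  \;=\;
  \frac{J_\varepsilon\,\varepsilon + N\,\varepsilon/2
        - J_\varepsilon\,(u_i-\alpha)}{J_\varepsilon + N}
  \;\ge\; 0,
\]
```

since $J_\varepsilon\,(u_i-\alpha) < 2^{J_\varepsilon}\varepsilon/2 \le N\varepsilon/2$ by the choice of $J_\varepsilon$.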
Now almost every run $\omega$ has infinitely many prefixes $h_0,h_1,\ldots$ such that at the prefix $h_i$, the strategy $\sigma$ starts mimicking $\sigma_i$ for $2^{|h_i|}$ steps. Now consider those prefixes $h_i$ whose length is greater than $\max_s k_{s,\varepsilon}$
and $J_\varepsilon$. This ensures that starting with any such prefix $h_i$, with probability at least $1/2$ the history $h'=s'_0a'_0s'_1a'_1\ldots s'_\ell$ in which we end after taking $2^{|h_i|}$ steps will satisfy \[
\frac{1}{\ell}\sum_{j=0}^{\ell-1} q_i(a'_j) \ge u_i - \varepsilon \] Using the Borel-Cantelli lemma~\cite{Royden88}, this implies that almost every run has the required property.
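The Borel-Cantelli step can be sketched as follows (our reading of the argument; the paper cites \cite{Royden88}): let $E_m$ be the event that the $m$-th such prefix is followed by a history satisfying the displayed bound. Each $E_m$ has probability at least $1/2$ conditioned on the past, so

```latex
\[
  \sum_{m} \Pr\bigl(E_m \mid \mathcal{F}_{m}\bigr) = \infty
  \quad\text{a.s.}
  \qquad\Longrightarrow\qquad
  \Pr\bigl(E_m \text{ infinitely often}\bigr) = 1,
\]
```

by the conditional (second) Borel-Cantelli lemma, where $\mathcal{F}_m$ denotes the history up to the $m$-th prefix.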
The proof for mean payoff with inferior limits is analogous, although handling limit inferior is more subtle as it requires us to show that from some point on the partial average never decreases below a given bound. To give a formal proof, we can reuse the construction from \cite[Proof of Claim 10]{lics15} applied to strategies $(\xi_k)_{1\le k\le \infty}$ where each $\xi_k$ for $k$ of the form $\ell\cdot j + i$ is defined to be the strategy $\sigma_i$. Note that our choice of ``lengths'' of each mode satisfy Equations (3) and (4) from \cite[Proof of Claim 10]{lics15}. Also note that while \cite[Proof of Claim 10]{lics15} requires the frequencies of the actions to converge, in our proof we are only concerned about limits inferior of long run rewards, and so the requirements on $\xi_i$ are not convergence of limits, but only that limits inferior converge to the required bound. This requirement is clearly satisfied. \qed \end{proof}
Let us now proceed with the direction $\Leftarrow$ of the proof of Proposition~\ref{thm:mp-main}.
Because no $x_{i,a}$ and $x_{i',a'}$ with $i\neq i'$ occur in the same equation, we can fix $1\le i \le m$; to finish the proof, it then suffices to give a solution to $x_{i,a}$ for all $a$.
Similarly to~\cite{lics15,BBC+14}, where only the limit inferior was considered, the main idea of the proof is to obtain suitable ``frequencies'' of actions and use these as the solution. Nevertheless, the formal approach of~\cite{lics15,BBC+14} itself cannot be easily adapted (the main issue being the use of Fatou's lemma, which for limit superior does not allow one to establish the required inequality). Instead, we use a straightforward adaptation of an approach used in~\cite{BCFK13}. The statement we require is captured in the following lemma.
\begin{lemma}\label{lemma:subsequence} For every run $\omega=s_0a_0s_1a_1\ldots$ there is a sequence of numbers $T_1[\omega],T_2[\omega],\ldots$ such that the number \[ f_{\omega}(a) := \lim_{\ell\rightarrow \infty} \frac{1}{T_\ell[\omega]} \sum_{j=1}^{T_\ell[\omega]} \mathds{1}_{a_j=a} \] is defined and non-negative for all $a\in A$, and satisfies \[ \begin{array}{rl} \sum_{a\in A} f_{\omega}(a)\cdot q_i(a) &= \mathrm{lr}_{\sup}(q_i)(\omega)\\ \sum_{a\in A} f_{\omega}(a)\cdot r_j(a) &\ge \mathrm{lr}_{\inf}(r_j)(\omega)\quad{\text{for $1\le j \le n$}}\\ \sum_{a\in A} f_{\omega}(a) &= 1 \end{array} \] Moreover, for almost all runs $\omega$ we have \[
\sum_{a\in A} f_{\omega}(a) \cdot \delta(a)(s) = \sum_{a\in \act{s}} f_{\omega}(a) \] \end{lemma} \begin{proof} Fix $\omega=s_0a_0s_1a_1\ldots$. We first define a sequence $T'_1[\omega],T'_2[\omega],\ldots$ to be any sequence satisfying \[ \lim_{\ell\rightarrow \infty} \frac{1}{T'_\ell[\omega]} \sum_{j=1}^{T'_\ell[\omega]} q_i(a_j)\quad = \mathrm{lr}_{\sup}(q_i)(\omega) \] The existence of such a sequence follows from the fact that every sequence of real numbers has a subsequence which converges to the lim sup of the original sequence.
Further, we define subsequences $\hat T^k_1[\omega],\hat T^k_2[\omega],\ldots$ for $1\le k \le n$ where for all $k$ the sequence $\hat T^k_1[\omega],\hat T^k_2[\omega],\ldots$ satisfies \[ \lim_{\ell\rightarrow \infty} \frac{1}{\hat T^k_\ell[\omega]} \sum_{j=1}^{\hat T^k_\ell[\omega]} q_i(a_j)\quad = \mathrm{lr}_{\sup}(q_i)(\omega) \] and \[ \lim_{\ell\rightarrow \infty} \frac{1}{\hat T^k_\ell[\omega]} \sum_{j=1}^{\hat T^k_\ell[\omega]} r_{k'}(a_j)\quad \ge \mathrm{lr}_{\inf}(r_{k'})(\omega) \] for all $k' \le k$. We define these subsequences inductively. We start with $\hat T^0_1[\omega],\hat T^0_2[\omega],\ldots = T'_1[\omega],T'_2[\omega],\ldots$. Now assuming that $\hat T^{k-1}_1[\omega],\hat T^{k-1}_2[\omega],\ldots$ has been defined, we take $\hat T^k_1[\omega],\hat T^k_2[\omega],\ldots$ to be a subsequence of it such that \[ \lim_{\ell\rightarrow \infty} \frac{1}{\hat T^k_\ell[\omega]} \sum_{j=1}^{\hat T^k_\ell[\omega]} r_k(a_j) \] exists. The existence of such a sequence follows from the fact that every sequence of real numbers has a converging subsequence. The required properties then follow easily from properties of limits.
Now, fixing an order $\bar a_1,\ldots,\bar a_{|A|}$ on the actions in $A$, we define $T^{k}_1[\omega],T^{k}_2[\omega],\ldots$ for $0\leq k\leq |A|$ so that $T^{0}_1[\omega],T^{0}_2[\omega],\ldots$ is the sequence $\hat T^n_1[\omega],\hat T^n_2[\omega],\ldots$, and every $T^{k}_1[\omega],T^{k}_2[\omega],\ldots$ is a subsequence of $T^{k-1}_1[\omega],T^{k-1}_2[\omega],\ldots$ such that the following limit exists \[ f_\omega(\bar a_k) := \lim_{\ell\rightarrow \infty} \frac{1}{T^{k}_\ell[\omega]} \sum_{j=1}^{T^{k}_\ell[\omega]} \mathds{1}_{a_j=\bar a_k} \]
The required properties follow as before. We take $T^{|A|}_1[\omega],T^{|A|}_2[\omega],\ldots$ to be the desired sequence $T_1[\omega],T_2[\omega],\ldots$.
Now we need to show that $f_\omega$ satisfies the required properties. Indeed \begin{align*} \sum_{a\in A} f_{\omega}(a)\cdot q_i(a) &= \sum_{a\in A} \lim_{\ell\rightarrow \infty} \frac{1}{T^{k}_\ell[\omega]} \sum_{j=1}^{T^{k}_\ell[\omega]} \mathds{1}_{a_j = a} \cdot q_i(a)
\tag{def. of $f_{\omega}(a)$}\\
&= \lim_{\ell\rightarrow \infty} \frac{1}{T^{k}_\ell[\omega]} \sum_{j=1}^{T^{k}_\ell[\omega]} q_i(a)
\tag{property of $\mathds{1}$ and the sum}\\
&= \mathrm{lr}_{\sup}(q_i)(\omega)\tag{def. of subsequence $T^k_\ell[\omega]$} \end{align*} and analogously, for any $1\le i'\le n$: \begin{align*} \sum_{a\in A} f_{\omega}(a)\cdot r_{i'}(a) &= \sum_{a\in A} \lim_{\ell\rightarrow \infty} \frac{1}{T^{k}_\ell[\omega]} \sum_{j=1}^{T^{k}_\ell[\omega]} \mathds{1}_{a_j = a} \cdot r_{i'}(a)\\
&= \lim_{\ell\rightarrow \infty} \frac{1}{T^{k}_\ell[\omega]} \sum_{j=1}^{T^{k}_\ell[\omega]} r_{i'}(a)\\
&\ge \mathrm{lr}_{\inf}(r_{i'})(\omega) \end{align*} Also \[ \sum_{a\in A} f_{\omega}(a) \quad= \sum_{a\in A} \lim_{\ell\rightarrow \infty} \frac{1}{T^{k}_\ell[\omega]} \sum_{j=1}^{T^{k}_\ell[\omega]} \mathds{1}_{a_j = a}
\quad= \lim_{\ell\rightarrow \infty} \frac{1}{T^{k}_\ell[\omega]} \sum_{j=1}^{T^{k}_\ell[\omega]} 1 \quad= 1\\ \]
To prove the last property in the lemma, we invoke the strong law of large numbers (SLLN) \cite{KSK76}. Given a run $\omega$, an action $a$, a state $s$ and $k\geq 1$, define \[ N^{a,s}_k(\omega)=\begin{cases}
1 & \text{ $a$ is executed at least $k$ times}\\&\text{ and $s$ is visited just after the $k$-th execution of $a$; }\\
0 & \text{ otherwise.} \end{cases} \]
By SLLN and by the fact that in every step the distribution on the next states depends just on the chosen action, for almost all runs $\omega$ the following limit is defined and the equality holds whenever $f_\omega(a) > 0$: \[ \lim_{j\rightarrow \infty} \frac{\sum_{k=1}^j N^{a,s}_k(\omega)}{j} = \delta(a)(s) \] We obtain, for almost every $\omega=s_0a_0s_1a_1\ldots$ \begin{eqnarray*} \lefteqn{\sum_{a\in A} f_{\omega}(a)\cdot \delta(a)(s)}\\& = & \sum_{a\in A} \lim_{\ell\rightarrow \infty} \frac{1}{T_\ell[\omega]} \sum_{j=1}^{T_\ell[\omega]} \mathds{1}_{a_j=a}\cdot
\lim_{\ell\rightarrow \infty} \frac{1}{\ell}\sum_{k=1}^{\ell} N^{a,s}_k(\omega) \\ & = & \sum_{a\in A} \lim_{\ell\rightarrow \infty} \frac{1}{T_\ell[\omega]} \sum_{j=1}^{T_\ell[\omega]} \mathds{1}_{a_j=a}\cdot \lim_{\ell\rightarrow \infty} \frac{1}{\sum_{j=1}^{T_\ell[\omega]} \mathds{1}_{a_j=a}}\sum_{k=1}^{\sum_{j=1}^{T_\ell[\omega]} \mathds{1}_{a_j=a}} N^{a,s}_k(\omega) \\ & = & \sum_{a\in A} \lim_{\ell\rightarrow \infty} \frac{1}{T_\ell[\omega]}\sum_{k=1}^{\sum_{j=1}^{T_\ell[\omega]} \mathds{1}_{a_j=a}} N^{a,s}_k(\omega) \\ & = & \lim_{\ell\rightarrow \infty} \frac{1}{T_\ell[\omega]}\sum_{a\in A} \sum_{k=1}^{\sum_{j=1}^{T_\ell[\omega]} \mathds{1}_{a_j=a}} N^{a,s}_k(\omega) \\ & = & \lim_{\ell\rightarrow \infty} \frac{1}{T_\ell[\omega]}\sum_{j=1}^{T_\ell[\omega]} \mathds{1}_{s_j=s} \\ & = & \lim_{\ell\rightarrow \infty} \frac{1}{T_\ell[\omega]}\sum_{j=1}^{T_\ell[\omega]} \sum_{a\in \mathit{Act}(s)} \mathds{1}_{a_j=a} \\ & = & \sum_{a\in \mathit{Act}(s)}\lim_{\ell\rightarrow \infty} \frac{1}{T_\ell[\omega]}\sum_{j=1}^{T_\ell[\omega]} \mathds{1}_{a_j=a} \\ & = & \sum_{a\in \mathit{Act}(s)} f_{\omega}(a) \end{eqnarray*} \qed \end{proof}
We apply Lemma~\ref{lemma:subsequence} to obtain values $f_\omega$ for every $\omega$. Now it suffices to consider any $\omega$ for which $f_\omega$ satisfies the last condition of the lemma and which also satisfies $\mathrm{lr}_{\inf}(r_j)(\omega) \ge u_j$ for all $1\le j\le n$ and $\mathrm{lr}_{\sup}(q_i)(\omega) \ge v_i$; by the assumptions on $\sigma$ and $R$ such a run must exist. This immediately gives us that all the equations from Figure~\ref{system-L} are satisfied.
\fi
\end{document}
\begin{document}
\title[3D incompressible Oldroyd-B model] {Global small solutions of 3D incompressible Oldroyd-B model without damping mechanism} \author[Y. Zhu]{Yi Zhu} \address{Department of Mathematics, East China University of Science and Technology, Shanghai 200237, People's Republic of China} \email{\tt [email protected]}
\date{} \subjclass[2010]{76A05, 76D03} \keywords{ Oldroyd-B Model, Global Classical Solutions, Non-Newtonian Flow.}
\begin{abstract} In this paper, we prove the global existence of small smooth solutions to the three-dimensional incompressible Oldroyd-B model without damping on the stress tensor. The main difficulty is the lack of full dissipation in the stress tensor. To overcome it, we construct some time-weighted energies based on the special coupled structure of the system. Such energies capture the partial dissipation of the stress tensor and the strong full dissipation of the velocity. From the viewpoint of treating a ``nonlinear term'' as a ``linear term'', we also apply this result to the 3D incompressible viscoelastic system with Hookean elasticity, and then prove the global existence of small solutions without the physical assumption (div-curl structure) required in previous works. \end{abstract}
\maketitle
\section{introduction}
The Oldroyd-B model describes the motion of some viscoelastic flows, for example, the system coupling fluids and polymers. It presents a typical constitutive law which does not obey the Newtonian law (a linear relationship between stress and the gradient of velocity in fluids). Such non-Newtonian property may arise from the memorability of some fluids. Formulations about viscoelastic flows of Oldroyd-B type are first introduced by Oldroyd \cite{Oldroyd} and are extensively discussed in \cite{BCAH}.
The 3D incompressible Oldroyd-B model can be written as follows
\begin{equation}\label{eq:1.1} \begin{cases} u_t + u\cdot \nabla u - \mu \Delta u + \nabla p = \mu_1 \nabla \cdot \tau, \quad\quad (t, x) \in \mathbb{R}^+ \times \mathbb{R}^3,\\ \tau_t + u\cdot \nabla \tau + a \tau + Q(\tau, \nabla u) = \mu_2 D (u),\\ \nabla \cdot u = 0, \end{cases} \end{equation} with initial data \begin{equation}\nonumber u(0,x) = u_0(x), \quad \tau(0,x) = \tau_0(x), \quad x \in \mathbb{R}^3. \end{equation} Here $u=(u_1,u_2, u_3)^{\top}$ denotes the velocity and $p$ is the scalar pressure of the fluid. $\tau$ is the non-Newtonian part of the stress tensor, which can be seen as a symmetric matrix here. $D(u)$ is the symmetric part of $\nabla u$, \begin{equation}\label{du} D(u) = \frac{1}{2} \big( \nabla u + (\nabla u)^{\top} \big), \end{equation} and $ Q$ is a given bilinear form which can be chosen as \begin{equation}\label{defnQ} Q(\tau, \nabla u)= \tau \Omega(u) - \Omega(u) \tau + b(D(u) \tau + \tau D(u)), \end{equation} where $\Omega(u)$ is the skew-symmetric part of $\nabla u$, namely \begin{equation}\label{omegau} \Omega(u) = \frac{1}{2} \big( \nabla u - (\nabla u)^{\top} \big). \end{equation} The coefficients $\mu, a, \mu_1, \mu_2$ are assumed to be non-negative constants. When $a=0$, the system becomes the Oldroyd-B model without damping, which is the subject of this paper. $b \in [-1, 1]$ is a parameter; if $b=0$, the system is called corotational.
For a self-contained presentation, we shall give a brief derivation of system \eqref{eq:1.1}. Following \cite{CM}, the differential form of momentum conservation for homogenous and incompressible fluid can be written as \begin{equation}\nonumber \partial_t u + u\cdot \nabla u = \nabla \cdot \sigma, \end{equation}
with the associated incompressible condition $\nabla \cdot u = 0$. The stress tensor $\sigma$ is usually written as $$\sigma = - p I + \tau_{total}.$$
$\tau_{total}$ contains the viscosity and other stresses. In a classical elastic solid, the stress tensor depends on the deformation, while for a classical viscous fluid it depends on the rate of deformation. For the Oldroyd-B model, the constitutive law is selected as \begin{equation}\nonumber \tau_{total} + \lambda_1 \frac{\mathcal{D}\tau_{total}}{\mathcal{D}t} = 2\eta \big ( D(u) + \lambda_2 \frac{\mathcal{D} D(u)}{\mathcal{D}t} \big), \end{equation} where, for any tensor $f(x,t)$, we have \begin{equation}\nonumber \frac{\mathcal{D}f}{\mathcal{D}t} = \partial_t f+ u\cdot \nabla f + Q(f, \nabla u). \end{equation} Here $\lambda_1$ denotes the relaxation time and $\lambda_2$ the retardation time, with $0\leq \lambda_2 \leq \lambda_1$. We can decompose $\tau_{total}$ into two parts, a Newtonian part and an elastic part, i.e., $\tau_{total} = \mathcal{N} + \tau$. One finds that $\mathcal{N} = (2 \eta \lambda_2 /\lambda_1)\, D(u)$ and thus $\tau$ satisfies the second equation of system \eqref{eq:1.1} with $$ a = \frac{1}{\lambda_1}, \quad \mu_2 = \frac{2\eta}{\lambda_1}(1-\frac{\lambda_2}{\lambda_1}), \quad \mu_1 = 1. $$ For a more detailed derivation, we refer to \cite{Oldroyd, BCAH, CM}.
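A quick way to recover these coefficients (a sketch of the computation, added for the reader's convenience): substitute the ansatz $\mathcal{N} = c\,D(u)$, $\tau_{total} = \mathcal{N} + \tau$, into the constitutive law; since $\frac{\mathcal{D}}{\mathcal{D}t}$ is linear in its argument, this gives

```latex
\[
  c\,D(u) + \tau
  + \lambda_1 c\,\frac{\mathcal{D}D(u)}{\mathcal{D}t}
  + \lambda_1 \frac{\mathcal{D}\tau}{\mathcal{D}t}
  \;=\; 2\eta\, D(u) + 2\eta\lambda_2\,\frac{\mathcal{D}D(u)}{\mathcal{D}t}.
\]
Matching the $\frac{\mathcal{D}D(u)}{\mathcal{D}t}$ terms forces
$c = 2\eta\lambda_2/\lambda_1$, and dividing what remains by $\lambda_1$ yields
\[
  \tau_t + u\cdot\nabla\tau + Q(\tau,\nabla u) + \frac{1}{\lambda_1}\,\tau
  \;=\; \frac{2\eta}{\lambda_1}\Bigl(1-\frac{\lambda_2}{\lambda_1}\Bigr) D(u),
\]
```

which is the second equation of \eqref{eq:1.1} with the stated values of $a$, $\mu_1$ and $\mu_2$.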
As one of the most popular constitutive laws, the Oldroyd-B model of viscoelastic fluids has attracted much attention and many excellent works have been done. Naturally, for a partial differential system, one key question is the local and global existence of solutions. As shown by Guillop\'{e} and Saut \cite{GS, GS2}, local strong solutions exist and are unique. They also showed that these solutions are global provided that the coupling parameter and the initial data are small enough. Beyond the Hilbert spaces $H^s$ considered in \cite{GS, GS2}, the corresponding result in the $L^s$-$L^r$ framework was studied in \cite{FGO}. In the corotational case ($b = 0$), based on the basic energy equality, Lions and Masmoudi \cite{LM} proved the global existence of weak solutions. However, the case $b \neq 0$ is still not clear by now. The theory of local solutions and global small solutions in (or near) critical Besov spaces was first studied by Chemin and Masmoudi \cite{CM}. Some delicate blow-up criteria were also shown in \cite{CM}, and in \cite{LMZ}, Lei, Masmoudi and Zhou improved the criterion. For more global existence results in generalized spaces, we refer to \cite{CM2, ZFZ}. Furthermore, global well-posedness with a class of large initial data was given by Fang and Zi \cite{FZ}. We should point out that the above results for global smooth solutions always require $a > 0$ (namely the system with damping), at least for non-trivial initial data.
In the case $\mu = 0$ and the equation of $\tau$ contains viscous term $-\Delta \tau$, Elgindi and Rousset \cite{ER} proved the global existence of smooth solutions with small initial data in 2D. When $Q = 0$, they also derived the similar result with general data. The key idea of \cite{ER} is to study a new quantity $\Gamma = \omega - \mathcal{R}\tau$, where $ \omega = \text{curl} \;u$ and $ \mathcal{R} = \Delta^{-1}\text{curl div}.$ For the 3D case, the small initial data result was obtained by Elgindi and Liu \cite{EL}. Recently, applying direct energy method based on the coupling structure of system, an improvement without the damping term ($a = 0$) was given by the author \cite{Zhu}.
Besides, we would like to mention that global regularity of solutions to 2D Oldroyd-B model with diffusive stress was obtained by Constantin and Kliegl \cite{CK}. And some interesting results for related Oldroyd type models of viscoelastic fluids can be found in \cite{LinLZ, LZ, LeiLZ, LeiLZ2, LinZ, CZ, QZ, ZF}. We shall review these results later in Section 2.
Now, let us give the main result of this paper. We focus on the Oldroyd-B model in the case $a = 0$ (without damping term). More precisely, we prove the following theorem.
\begin{theorem}\label{thm}
Let $\mu, \mu_1, \mu_2 >0$ and $ a = 0$. Suppose that $\nabla \cdot u = 0, (\tau_0)_{ij} =( \tau_0)_{ji}$ and initial data $|\nabla|^{-1}u_0, |\nabla|^{-1} \tau_0 \in H^3(\mathbb{R}^3)$. Then there exists a small constant $\varepsilon$ such that system \eqref{eq:1.1} admits a unique global classical solution provided that
$$ \||\nabla|^{-1}u_0\|_{H^3} + \||\nabla|^{-1}\tau_0\|_{H^3} \leq \varepsilon, $$
where $|\nabla| = (-\Delta)^\frac{1}{2}$. \end{theorem}
\begin{remark} The assumption that the initial data lie in the negative order Sobolev space $\dot H^{-1}$ could be removed by considering fractional order time-weighted energies in the energy framework \eqref{energy2}. To best illustrate our idea and keep the paper neat, we do not pursue fractional order energies here.
\end{remark}
The key point in proving Theorem \ref{thm} is to obtain an $L^1$ estimate in time of $\|\nabla u(t, \cdot)\|_{L^\infty_x}$, as well as of some higher order norms of $u$. This helps preserve the regularity of solutions from the initial data. However, the lack of full dissipation in the stress tensor $\tau$ brings the main difficulty. We first analyse the following linearized system (without loss of generality, set $\mu = \mu_1 = \mu_2 = 1$) \begin{equation}\nonumber \begin{cases} u_t - \Delta u = \mathbb{P} \nabla \cdot \tau, \\ \tau_t = D(u). \end{cases} \end{equation}
Here, $\mathbb{P}$ is the projection operator used to deal with pressure.
Noticing the fact that
$$ \mathbb{P} \nabla \cdot D(u) = \frac{1}{2}\Delta u$$ (which holds since $\nabla \cdot u = 0$), we can decouple the linearized system and find that both $u$ and $\mathbb{P} \nabla \cdot \tau$ satisfy the following damped wave equation \begin{equation}\label{lin} W_{tt} - \Delta W_t - \frac{1}{2}\Delta W = 0. \end{equation} Indeed, differentiating the equation for $u$ in time and substituting $\tau_t = D(u)$ gives $u_{tt} - \Delta u_t = \mathbb{P}\nabla\cdot D(u) = \frac{1}{2}\Delta u$. It seems that we can gain enough decay at least in the linearized system \eqref{lin}. In fact, we only expect the partial dissipation in $\tau$, namely of $\mathbb{P} \nabla \cdot \tau$. Based on the special dissipative mechanism of system \eqref{eq:1.1} we set up two types of energies (see \eqref{energy1} and \eqref{energy2}). To close the energy estimates with only partial dissipation of $\tau$, we make full use of the structure of the system (some cancellations of linear terms) and the fact that the time derivative of $\nabla \cdot \tau$ consists essentially of quadratic terms. In addition, to deal with the wildest term $\mathbb{P} \nabla \cdot (u \cdot \nabla \tau)$, we introduce a proposition related to a $\big[\mathbb{P}\; \text{div} , u\cdot \nabla \big]$ type commutator: \begin{equation}\nonumber \mathbb{P} \nabla \cdot(u\cdot \nabla \tau) = \mathbb{P} (u\cdot \nabla \mathbb{P}\nabla \cdot \tau)+ \text{some terms containing}\; \nabla u. \end{equation} For more details, we refer to Proposition \ref{prop} and the estimate \eqref{eqM4}. ~\\~ \section{Application to Hookean elastic materials}
In this section, we will apply Theorem \ref{thm} to the following three dimensional incompressible viscoelastic system with Hookean elasticity: \begin{equation}\label{Hk} \begin{cases} u_t+ u\cdot \nabla u - \Delta u + \nabla p = \nabla \cdot (F F^\top), \quad\quad (t, x) \in \mathbb{R}^+ \times \mathbb{R}^3,\\ F_t + u \cdot \nabla F = \nabla u F,\\ \nabla \cdot u = 0,\\ F(0,x) = F_0(x), \quad u(0,x) = u_0(x). \end{cases} \end{equation} Here $u=(u_1,u_2, u_3)^{\top}$ is the velocity, $p$ presents the scalar pressure of fluid and $F$ denotes the deformation tensor. We adopt the following notations \begin{equation}\nonumber [\nabla u]_{ij} = \partial_j u_i, \quad [\nabla \cdot F]_{i} = \sum_{j} \partial_j F_{ij}. \end{equation} For detailed physical background, we refer to \cite{Larson, LinLZ} and references therein.
We consider the case where $F$ is near the identity matrix and let $U = F - I$. Through the analysis of the linearized system of $(u, U)$, we can derive a decay estimate for $\nabla \cdot U$. However, there is not much more information about $\nabla \times U$. To overcome this problem, Lin, Liu and Zhang \cite{LinLZ} studied an auxiliary vector field under the physical assumption $\nabla \cdot U_0^\top = 0$. They proved the global existence of classical small solutions in the 2D case (we refer to \cite{LZ, LeiLZ2} for different approaches). For the 3D case, Lei, Liu and Zhou \cite{LeiLZ} found a curl structure which is physical and compatible with system \eqref{Hk}. They treat $\nabla \times U_0$ as a higher order term and then proved global small solution results in both 2D and 3D. We also refer to Chen and Zhang \cite{CZ} for a curl-free structure of $F_0^{-1}- I$. The initial-boundary value problem was treated by Lin and Zhang \cite{LinZ}. Qian and Zhang \cite{QZ} generalized the results to the compressible case. The result in the critical $L^p$ framework was given by Zhang and Fang \cite{ZF}.
Through a simple analysis of system \eqref{Hk} we know that the nonlinear term $\nabla \cdot (F F^\top)$ may be the most difficult one. The excellent works above make essential use of the physical assumption which presents the div-curl structure of the deformation tensor. Now, we instead try to treat the ``nonlinear term'' $\nabla \cdot (F F^\top)$ as a ``linear term'' and give the following formulation. By considering $(u, F F^\top)$, we have \begin{equation}\nonumber \begin{cases} u_t+ u\cdot \nabla u - \Delta u + \nabla p = \nabla \cdot (F F^\top),\\ (FF^\top)_t + u \cdot \nabla (FF^\top) = \nabla u F F^\top + FF^\top (\nabla u)^\top ,\\ \nabla \cdot u = 0. \end{cases} \end{equation} Denoting $G = F F^\top- I$ and noticing \eqref{du} and \eqref{omegau}, we have \begin{equation}\nonumber \begin{split} \nabla u\, G + G(\nabla u)^\top &= (D(u) + \Omega(u)) G +G (D(u)-\Omega(u)) \\ &= \Omega(u) G - G \Omega(u)+D(u)G + G D(u). \\ \end{split} \end{equation} Thus, the system for $(u,G)$ can be written as \begin{equation}\label{eqG} \begin{cases} u_t+ u\cdot \nabla u - \Delta u + \nabla p = \nabla \cdot G,\\ G_t + u \cdot \nabla G + Q(G, \nabla u)= 2 D(u), \\ \nabla \cdot u = 0. \end{cases} \end{equation} This is exactly the Oldroyd-B model \eqref{eq:1.1} introduced in Section 1, with $Q$ as in \eqref{defnQ} for $b=-1$ and with $\mu = \mu_1 = 1$, $\mu_2 = 2$, $a = 0$. For more relations between these models we refer to \cite{Larson, LinLZ}. Hence, we have the following corollary.
\begin{corollary}\label{cor}
Suppose that $\nabla \cdot u_0 = 0$ and that the initial data satisfy $|\nabla|^{-1} u_0, |\nabla|^{-1}(F_0 - I) \in H^3(\mathbb{R}^3)$. Then there exists a small constant $\varepsilon > 0$ such that system \eqref{Hk} admits a unique global classical solution provided that
$$ \||\nabla|^{-1}u_0\|_{H^3} + \||\nabla|^{-1}(F_0 - I)\|_{H^3} \leq \varepsilon.$$ \end{corollary}
Notice that here we do not need any div-curl structural assumption on the initial data; in this sense, this corollary also generalizes the results in \cite{CZ, LeiLZ}.
\begin{remark} To prove this corollary, we first notice that \begin{equation}\nonumber \begin{split}
&\||\nabla|^{-1}(F_0F_0^\top - I)\|_{H^3}\\
\lesssim& \||\nabla|^{-1}\big((F_0 - I + I)(F_0 - I + I)^\top - I \big)\|_{H^3} \\
\lesssim& \||\nabla|^{-1}(F_0 - I)(F_0 - I)^\top\|_{H^3} + \||\nabla|^{-1}(F_0 - I)\|_{H^3} \\
\lesssim& \|(F_0 - I)(F_0 - I)^\top\|_{L^\frac{6}{5}} + \|(F_0 - I)(F_0 - I)^\top\|_{H^2} + \||\nabla|^{-1}(F_0 - I)\|_{H^3}\\
\lesssim& \|(F_0 - I)\|_{L^2}\|(F_0 - I)^\top\|_{L^3}+ \|(F_0 - I)\|_{H^2}^2 + \||\nabla|^{-1}(F_0 - I)\|_{H^3}\\
\lesssim & \||\nabla|^{-1}(F_0 - I)\|_{H^3} + \|(F_0 - I)\|_{H^3}^2. \end{split} \end{equation} Then, applying Theorem \ref{thm} to the reformulated system \eqref{eqG}, we obtain the global regularity of the velocity (see \eqref{energy1} and \eqref{energy2}). It is then easy to obtain the global regularity of $F$. \end{remark}
\begin{remark} In \cite{leizhen, leizhen2}, Lei proved the global existence of classical solutions to the 2D Hookean incompressible viscoelastic model with smallness assumptions only on the size of the initial strain tensor. Indeed, a non-singular matrix $F$ can be decomposed uniquely in the form $F=(I+V)R$, where $V$ denotes the strain matrix and $R$ the rotation matrix. He showed that smallness of the strain tensor $V_0$ ensures the global regularity of the system. In the proof of Corollary \ref{cor}, we present a similar result from a different viewpoint: in proving the global existence of classical solutions to the 3D Hookean incompressible viscoelastic model, we only need smallness of $F_0F_0^\top - I$ rather than of $F_0-I$. Obviously, the smallness of $V_0$ ensures the smallness of $F_0F_0^\top - I$. \end{remark}
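To see the last claim quantitatively, note that if $R_0$ is a rotation, $R_0 R_0^\top = I$, and $V_0$ is symmetric (as in the decomposition above), then a direct computation gives
\begin{equation}\nonumber
F_0F_0^\top - I = (I+V_0)R_0 R_0^\top (I+V_0)^\top - I = (I+V_0)^2 - I = 2V_0 + V_0^2.
\end{equation}
Hence the smallness of $V_0$ implies the smallness of $F_0F_0^\top - I$, while no smallness of $F_0 - I$ is implied, since the rotation $R_0$ may be far from the identity.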
\section{Energy estimate}
\subsection{Preliminary} Without loss of generality, set $\mu = \mu_1 = \mu_2 = 1$. Before starting the proof, we introduce the energy framework. Based on our analysis in Section 1, we define some time-weighted energies for system \eqref{eq:1.1}. First, we give the basic energy as follows, \begin{equation}\label{energy1} \begin{split}
\mathcal{E}_0(t) =& \sup_{0 \leq t' \leq t} (\||\nabla|^{-1}u(t')\|_{H^3}^2 + \||\nabla|^{-1}\tau(t')\|_{H^3}^2) + \int_{0}^{t} \| u(t')\|_{H^3}^2 + \||\nabla|^{-1}\mathbb{P}\nabla \cdot \tau \|_{H^2}^2 \;dt', \end{split} \end{equation} where $\mathbb{P} = \mathbb{I}- \Delta^{-1} \nabla \text{div}$ is the projection operator. For any smooth divergence-free vector field $v$, we have $\mathbb{P} v = v$, while for a scalar function $\phi$ we have $\mathbb{P} \nabla \phi = 0$. The projection operator $\mathbb{P}$ is used to eliminate the pressure term $p$, which satisfies
$$\Delta p = \partial_i \partial_j (\tau_{ij}-u_i u_j).$$ Also, we define two time-weighted energies which reflect the dissipative structure of system \eqref{eq:1.1}, \begin{equation}\label{energy2} \begin{split}
\mathcal{E}_1(t) =& \sup_{0 \leq t' \leq t} (1+t')\big(\|u(t')\|_{H^2}^2 + 2\||\nabla|^{-1}\mathbb{P}\nabla \cdot \tau(t')\|_{H^2}^2 \big) \\
&+ \int_{0}^{t} (1+t')\Big[\|\nabla u(t')\|_{H^2}^2 + \| \mathbb{P}\nabla \cdot \tau \|_{H^1}^2\Big] \;dt',\\
\mathcal{E}_2(t) =& \sup_{0 \leq t' \leq t} (1+t')^2\big(\|\nabla u(t')\|_{H^1}^2 + 2\| \mathbb{P}\nabla \cdot \tau(t')\|_{H^1}^2\big) \\
&+ \int_{0}^{t} (1+t')^2\Big[\|\nabla^2 u(t')\|_{H^1}^2 + \|\nabla \mathbb{P}\nabla \cdot \tau \|_{L^2}^2\Big] \;dt'. \end{split} \end{equation} Using an interpolation inequality, we easily see that
\begin{equation}\nonumber
\mathcal{E}_1(t) \lesssim \mathcal{E}_0^\frac{1}{2}(t)\mathcal{E}_2^\frac{1}{2}(t).
\end{equation} Hence, we only need to derive the estimates of $\mathcal{E}_0(t)$ and $\mathcal{E}_2(t)$.
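Schematically, this interpolation follows from the Cauchy--Schwarz inequality in both the supremum and the integral parts: for any $h \geq 0$,
\begin{equation}\nonumber
(1+t')\, h(t') = \big(h(t')\big)^\frac{1}{2}\big((1+t')^{2} h(t')\big)^\frac{1}{2}, \qquad \int_{0}^{t} (1+t')\, h(t')\; dt' \leq \Big(\int_{0}^{t} h(t')\; dt'\Big)^\frac{1}{2}\Big(\int_{0}^{t} (1+t')^{2}\, h(t')\; dt'\Big)^\frac{1}{2}.
\end{equation}
Combined with standard interpolation between the norms appearing in $\mathcal{E}_0(t)$ and $\mathcal{E}_2(t)$ (for instance $\|u\|_{L^2} \leq \||\nabla|^{-1}u\|_{L^2}^\frac{1}{2}\|\nabla u\|_{L^2}^\frac{1}{2}$), this yields the stated bound term by term.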
Next, we introduce a useful proposition to deal with $\big[\mathbb{P}\; \text{div} , u\cdot \nabla \big]$ type commutators.
\begin{proposition}\label{prop} For any smooth tensor $[\tau_{ij}]_{3 \times 3}$ and three-dimensional vector field $u$, it always holds that \begin{equation}\nonumber \mathbb{P} \nabla \cdot(u\cdot \nabla \tau) = \mathbb{P} (u\cdot \nabla \mathbb{P}\nabla \cdot \tau)+ \mathbb{P} (\nabla u\cdot \nabla \tau) - \mathbb{P}(\nabla u\cdot \nabla \Delta^{-1} \nabla \cdot \nabla \cdot \tau), \end{equation} where the $i$-th component of $\nabla u\cdot \nabla \tau$ is \begin{equation}\nonumber [\nabla u\cdot \nabla \tau]_i = \sum_{j}\partial_j u \cdot \nabla \tau_{ij}, \end{equation} and also \begin{equation}\nonumber [\nabla u\cdot \nabla \Delta^{-1} \nabla \cdot \nabla \cdot \tau]_i = \partial_i u\cdot \nabla \Delta^{-1} \nabla \cdot \nabla \cdot \tau. \end{equation} \end{proposition}
\begin{proof} By direct computation we have \begin{equation}\nonumber [ \nabla \cdot(u\cdot \nabla \tau)]_i = \sum_{j}\partial_j (u\cdot \nabla \tau_{ij}) =\sum_{j}(\partial_ju\cdot \nabla \tau_{ij}) + \sum_{j}(u\cdot \nabla \partial_j\tau_{ij}). \end{equation} With the notation of the proposition, we can write \begin{equation}\label{eq:2.10} \mathbb{P} \nabla \cdot(u\cdot \nabla \tau) = \mathbb{P} (\nabla u\cdot \nabla \tau) + \mathbb{P} (u\cdot \nabla \nabla \cdot \tau). \end{equation} Denoting $\mathbb{P}^\perp = \Delta^{-1} \nabla \text{div}$, we now compute the second part of \eqref{eq:2.10} as follows \begin{equation}\nonumber \begin{split} \mathbb{P} (u\cdot \nabla \nabla \cdot \tau) =& \mathbb{P} (u\cdot \nabla \mathbb{P}\nabla \cdot \tau) + \mathbb{P}(u\cdot \nabla \mathbb{P}^\perp \nabla \cdot \tau)\\ =&\mathbb{P} (u\cdot \nabla \mathbb{P}\nabla \cdot \tau) + \mathbb{P}(u\cdot \nabla \Delta^{-1} \nabla \nabla \cdot \nabla \cdot \tau)\\ =&\mathbb{P} (u\cdot \nabla \mathbb{P}\nabla \cdot \tau) + \mathbb{P}\nabla(u\cdot \nabla \Delta^{-1} \nabla \cdot \nabla \cdot \tau) - \mathbb{P}(\nabla u\cdot \nabla \Delta^{-1} \nabla \cdot \nabla \cdot \tau)\\ =&\mathbb{P} (u\cdot \nabla \mathbb{P}\nabla \cdot \tau) - \mathbb{P}(\nabla u\cdot \nabla \Delta^{-1} \nabla \cdot \nabla \cdot \tau). \end{split} \end{equation} Hence, we have \begin{equation}\nonumber \mathbb{P} \nabla \cdot(u\cdot \nabla \tau) = \mathbb{P} (\nabla u\cdot \nabla \tau) + \mathbb{P} (u\cdot \nabla \mathbb{P}\nabla \cdot \tau) - \mathbb{P}(\nabla u\cdot \nabla \Delta^{-1} \nabla \cdot \nabla \cdot \tau). \end{equation} \end{proof}
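In the last equality of the proof above, the term $\mathbb{P}\nabla(u\cdot \nabla \Delta^{-1} \nabla \cdot \nabla \cdot \tau)$ vanishes because $\mathbb{P}$ annihilates gradients. Indeed, for any suitably regular scalar function $\phi$,
\begin{equation}\nonumber
\mathbb{P} \nabla \phi = \nabla \phi - \Delta^{-1}\nabla \, \text{div} (\nabla \phi) = \nabla \phi - \Delta^{-1}\nabla \Delta \phi = 0,
\end{equation}
which we apply with the scalar $\phi = u\cdot \nabla \Delta^{-1} \nabla \cdot \nabla \cdot \tau$.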
\subsection{\textit{A priori} estimate} In this subsection, we shall derive the \textit{a priori} estimate of $\mathcal{E}_0(t)$ and $\mathcal{E}_2(t)$ respectively. First, we consider the basic energy $\mathcal{E}_0(t)$ and give the following lemma.
\begin{lemma}\label{lem1} Let the energies be defined as in \eqref{energy1} and \eqref{energy2}. Then we have \begin{equation}\nonumber \mathcal{E}_0(t) \lesssim \mathcal{E}_0(0) + \mathcal{E}_0^\frac{3}{2}(t) + \mathcal{E}_2^\frac{3}{2}(t). \end{equation} \end{lemma}
\begin{proof} We divide the proof of the lemma into two parts. Define $\mathcal{E}_{0,1}$ and $\mathcal{E}_{0,2}$ as follows \begin{align*}
\mathcal{E}_{0,1}(t) = &\sup_{0 \leq t' \leq t} (\||\nabla|^{-1}u(t')\|_{H^3}^2 + \||\nabla|^{-1}\tau(t')\|_{H^3}^2) + \int_{0}^{t} \| u(t')\|_{H^3}^2 \;d t', \\
\mathcal{E}_{0,2} (t) = &\int_{0}^{t} \||\nabla|^{-1}\mathbb{P}\nabla \cdot \tau \|_{H^2}^2 \;dt' . \end{align*} Then $\mathcal{E}_0 = \mathcal{E}_{0,1} + \mathcal{E}_{0,2}$; we first give the estimate of $\mathcal{E}_{0,1}$. ~\\~\\~ \textbf{First Step:} ~\\~\\~
Apply the operator $\nabla^k |\nabla|^{-1}$ $(k = 0, \cdots, 3)$ to system \eqref{eq:1.1}, take the inner product of the first equation with $\nabla^k |\nabla|^{-1} u$ and of the second equation with $\nabla^k |\nabla|^{-1} \tau$, and add the results to obtain \begin{equation}\label{eq:2.2}
\frac{1}{2} \frac{d}{dt}(\||\nabla|^{-1} u\|_{H^3}^2 + \||\nabla|^{-1}\tau\|_{H^3}^2) + \| u\|_{H^3}^2 = N_1 + N_2 + N_3 + N_4, \end{equation} where \begin{equation}\nonumber \begin{split}
N_1 =& \sum_{k = 0}^{3} \int \Big( \nabla^k |\nabla|^{-1} \nabla \cdot \tau \nabla^k |\nabla|^{-1}u + \nabla^k |\nabla|^{-1} D(u) \nabla^k |\nabla|^{-1} \tau \Big) \;dx,\\
N_2 =& -\sum_{k = 0}^{3} \int \nabla^k |\nabla|^{-1} (u\cdot \nabla u) \nabla^k|\nabla|^{-1} u \;dx,\\
N_3 =& -\int |\nabla|^{-1}(u \cdot \nabla \tau) |\nabla|^{-1} \tau \;dx
-\sum_{k = 0}^{2} \int \nabla^k(u \cdot \nabla \tau) \nabla^k \tau\; dx,\\
N_4 =& -\sum_{k = 0}^{3} \int \nabla^k|\nabla|^{-1} Q(\tau, \nabla u) \nabla^k |\nabla|^{-1}\tau \;dx . \end{split} \end{equation}
For the first term $N_1$, using integration by parts and the symmetry $\tau_{ij} = \tau_{ji}$ we have \begin{equation}\label{eqN1} \begin{split}
N_1 =& \sum_{k = 0}^{3} \int \Big( \nabla^k |\nabla|^{-1} \nabla \cdot \tau \nabla^k |\nabla|^{-1} u + \sum_{i,j=1}^3 \nabla^k |\nabla|^{-1} \frac{\partial_j u_i + \partial_i u_j}{2}\nabla^k |\nabla|^{-1}\tau_{ij} \Big) dx\\
=& \sum_{k = 0}^{3} \int \nabla^k |\nabla|^{-1}\nabla \cdot \tau \nabla^k |\nabla|^{-1} u \; dx \\
& - \sum_{k = 0}^{3}\sum_{i,j=1}^3 \int \frac{\nabla^k |\nabla|^{-1} u_i \nabla^k |\nabla|^{-1}\partial_j\tau_{ij} + \nabla^k |\nabla|^{-1} u_j\nabla^k |\nabla|^{-1} \partial_i \tau_{ij}}{2} \;dx\\
=& \sum_{k = 0}^{3} \int \Big( \nabla^k |\nabla|^{-1} \nabla \cdot \tau \nabla^k |\nabla|^{-1} u - \sum_{i,j=1}^3 \nabla^k |\nabla|^{-1} u_i \nabla^k |\nabla|^{-1}\partial_j\tau_{ij} \Big) dx\\ =& 0. \end{split} \end{equation}
For the second term $N_2$, by the divergence-free condition, the H\"{o}lder inequality and the Sobolev imbedding theorem, we directly obtain \begin{equation}\nonumber \begin{split}
N_2 \lesssim& \|u\otimes u\|_{H^3} \||\nabla|^{-1}u\|_{H^3}\\
\lesssim& \|u\|_{L^\infty}\|u\|_{H^3}\||\nabla|^{-1}u\|_{H^3}\\
\lesssim& \|u\|_{H^3}^2\||\nabla|^{-1}u\|_{H^3}. \end{split} \end{equation} Hence, \begin{equation}\label{eqN2} \begin{split}
\int_{0}^{t} |N_2(t')| dt' \lesssim & \sup_{0 \leq t' \leq t}\||\nabla|^{-1}u(t')\|_{H^3} \int_{0}^{t} \| u\|_{H^3}^2 dt'\\ \lesssim & \mathcal{E}_0^\frac{3}{2}(t). \end{split} \end{equation}
Similarly, for the next term $N_3$, noticing the divergence-free condition $\nabla \cdot u = 0$, we get \begin{equation}\nonumber \begin{split}
|N_3| \lesssim& \|u \otimes \tau \|_{L^2} \||\nabla|^{-1} \tau\|_{L^2} +\sum_{k = 1}^{2}\Big| \int \nabla^{k-1}(\nabla u \cdot \nabla \tau) \nabla^k \tau \; dx \Big|\\
\lesssim& \|u\|_{L^\infty} \||\nabla|^{-1} \tau\|_{H^1}^2 +\big(\|\nabla u\|_{L^\infty}\|\nabla \tau\|_{H^1}+ \|\nabla^2 u\|_{L^6}\|\nabla \tau\|_{L^3} \big)\|\nabla \tau\|_{H^1}\\
\lesssim& \big(\|\nabla u\|_{L^2}^\frac{1}{2}\|\nabla^2 u\|_{L^2}^\frac{1}{2}+\|\nabla^2 u\|_{H^1} \big)\||\nabla|^{-1} \tau\|_{H^3}^2. \end{split} \end{equation} Thus, we have the following estimate \begin{equation}\label{eqN3} \begin{split}
\int_{0}^{t}|N_3(t')| \;dt' \lesssim& \sup_{0 \leq t' \leq t} \||\nabla|^{-1} \tau(t')\|_{H^3}^2 \cdot \Big\{ \int_{0}^{t}(1+t')^{-\frac{3}{4}} (1+t')^\frac{1}{4}\|\nabla u\|_{L^2}^\frac{1}{2}(1+t')^\frac{1}{2}\|\nabla^2 u\|_{L^2}^\frac{1}{2} dt'\\
&+ \int_{0}^{t}\|\nabla^2 u\|_{H^1} dt' \Big\}\\ \lesssim & \mathcal{E}_0(t)\cdot \big(\mathcal{E}_1^\frac{1}{4}(t) \mathcal{E}_2^\frac{1}{4}(t) + \mathcal{E}_2^\frac{1}{2}(t)\big)\\ \lesssim & \mathcal{E}_0(t)\cdot \big(\mathcal{E}_0^\frac{1}{8}(t) \mathcal{E}_2^\frac{3}{8}(t) + \mathcal{E}_2^\frac{1}{2}(t)\big)\\ \lesssim &\mathcal{ E}_0(t)\cdot \big(\mathcal{E}_0^\frac{1}{2}(t) + \mathcal{E}_2^\frac{1}{2}(t)\big)\\ \lesssim &\mathcal{E}_0^\frac{3}{2}(t) + \mathcal{E}_2^\frac{3}{2}(t). \end{split} \end{equation}
Now, we turn to the last term $N_4$. Using the H\"{o}lder inequality and the Sobolev imbedding theorem, we obtain \begin{equation}\nonumber \begin{split}
|N_4| \lesssim& \||\nabla|^{-1} Q\|_{L^2}\||\nabla|^{-1} \tau\|_{L^2} + \|Q\|_{H^2}\|\tau\|_{H^2}\\
\lesssim& \|Q\|_{L^\frac{6}{5}} \||\nabla|^{-1} \tau\|_{L^2}+ \big(\|\nabla u\|_{L^\infty}\|\tau\|_{H^2} + \|\nabla^2 u\|_{L^6} \|\tau\|_{W^{1,3}} + \|\nabla^3 u\|_{L^2} \|\tau\|_{L^\infty} \big )\|\tau\|_{H^2}\\
\lesssim& \|\nabla u\|_{L^3}\|\tau\|_{L^2}\||\nabla|^{-1} \tau\|_{L^2}+\|\nabla^2 u\|_{H^1} \|\tau\|_{H^2}^2\\
\lesssim& (\|\nabla u\|_{L^2}^{\frac{1}{2}} \|\nabla^2 u\|_{L^2}^{\frac{1}{2}} + \|\nabla^2 u\|_{H^1} )\||\nabla|^{-1}\tau\|_{H^3}^2, \end{split} \end{equation} which implies \begin{equation}\label{eqN4} \begin{split}
\int_{0}^{t} |N_4(t')| \;dt' \lesssim& \sup_{0 \leq t' \leq t} \||\nabla|^{-1}\tau(t')\|_{H^3}^2 \int_{0}^{t} \|\nabla u\|_{L^2}^{\frac{1}{2}} \|\nabla^2 u\|_{L^2}^{\frac{1}{2}} + \|\nabla^2 u\|_{H^1} \; dt'\\ \lesssim& \mathcal{E}_0(t)(\mathcal{E}_1^\frac{1}{4}(t) \mathcal{E}_2^\frac{1}{4}(t)+ \mathcal{E}_2^\frac{1}{2}(t))\\ \lesssim& \mathcal{E}_0^\frac{3}{2}(t) + \mathcal{E}_2^\frac{3}{2}(t). \end{split} \end{equation}
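For the reader's convenience, we record the exponent bookkeeping used in the last steps of \eqref{eqN3} and \eqref{eqN4}: by the interpolation $\mathcal{E}_1(t) \lesssim \mathcal{E}_0^\frac{1}{2}(t)\mathcal{E}_2^\frac{1}{2}(t)$ and the Young inequality $a^{\theta}b^{1-\theta} \leq \theta a + (1-\theta)b$,
\begin{equation}\nonumber
\mathcal{E}_0(t)\,\mathcal{E}_1^\frac{1}{4}(t)\,\mathcal{E}_2^\frac{1}{4}(t) \lesssim \mathcal{E}_0^\frac{9}{8}(t)\,\mathcal{E}_2^\frac{3}{8}(t) = \big(\mathcal{E}_0^\frac{3}{2}(t)\big)^\frac{3}{4}\big(\mathcal{E}_2^\frac{3}{2}(t)\big)^\frac{1}{4} \leq \frac{3}{4}\,\mathcal{E}_0^\frac{3}{2}(t) + \frac{1}{4}\,\mathcal{E}_2^\frac{3}{2}(t),
\end{equation}
and the term $\mathcal{E}_0(t)\,\mathcal{E}_2^\frac{1}{2}(t)$ is handled in the same way, since its exponents also sum to $\frac{3}{2}$.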
Integrating \eqref{eq:2.2} in time and using the estimates of $N_1 \sim N_4$, i.e., \eqref{eqN1}, \eqref{eqN2}, \eqref{eqN3}, \eqref{eqN4}, it holds that \begin{equation}\label{eqE01} \mathcal{E}_{0,1} (t) \lesssim \; \mathcal{E}_0(0) + \mathcal{E}_0^\frac{3}{2}(t) + \mathcal{E}_2^\frac{3}{2}(t). \end{equation} \textbf{Second Step:} ~\\~\\~ Next, we deal with the remaining part $\mathcal{E}_{0,2} (t)$. Applying $\mathbb{P}$ to the first equation of system \eqref{eq:1.1} and recalling that $\mathbb{P} = \mathbb{I}- \Delta^{-1}\nabla \text{div}$, we have \begin{equation}\nonumber u_t + \mathbb{P}(u \cdot \nabla u) - \Delta u = \mathbb{P} \nabla \cdot \tau. \end{equation}
Applying the operator $\nabla^k |\nabla|^{-1}$ $(k = 0, 1, 2)$ to the above equation and taking the inner product with $\nabla^k |\nabla|^{-1}\mathbb{P} \nabla \cdot \tau$, we get \begin{equation}\label{eq:2.3}
\||\nabla|^{-1}\mathbb{P}\nabla \cdot \tau \|_{H^2}^2 = N_5 + N_6 + N_7, \end{equation} where, \begin{equation}\nonumber \begin{split}
N_5 =& \sum_{k = 0}^2 \int -\nabla^k |\nabla|^{-1}\Delta u \nabla^k |\nabla|^{-1}\mathbb{P} \nabla \cdot \tau \;dx,\\
N_6 =& \sum_{k = 0}^2 \int \nabla^k |\nabla|^{-1}\mathbb{P}(u \cdot \nabla u) \nabla^k |\nabla|^{-1}\mathbb{P} \nabla \cdot \tau \;dx,\\
N_7 =& \sum_{k = 0}^2 \int \nabla^k |\nabla|^{-1}u_t \nabla^k |\nabla|^{-1}\mathbb{P} \nabla \cdot \tau \;dx. \end{split} \end{equation}
For the first term $N_5$, using H\"{o}lder inequality we have \begin{equation}\nonumber N_5
\lesssim \|\nabla u\|_{H^3} \| |\nabla|^{-1}\mathbb{P} \nabla \cdot \tau\|_{H^2}. \end{equation} Hence, we can obtain \begin{equation}\label{eqN5} \begin{split}
\int_{0}^t |N_5(t')| dt' \lesssim& \int_{0}^ t \|\nabla u\|_{H^3} \||\nabla|^{-1} \mathbb{P} \nabla \cdot \tau\|_{H^2} \; dt'\\ \lesssim& \mathcal{E}_{0,1}^\frac{1}{2}(t)\mathcal{E}_{0,2}^\frac{1}{2}(t). \end{split} \end{equation}
In the estimate of $N_6$, we use the fact that the Riesz operator $\mathcal{R}_i = (-\Delta)^{-\frac{1}{2}} \nabla_i$ is bounded on $L^2$. Hence, for any vector field $v$ it holds that $ \|\mathbb{P}v \|_{L^2} \lesssim \|v \|_{L^2}$. Using the H\"{o}lder inequality and the Sobolev imbedding theorem we get \begin{equation}\nonumber \begin{split}
|N_6| \lesssim& \|u \otimes u\|_{H^2} \||\nabla|^{-1}\mathbb{P}\nabla \cdot \tau\|_{H^2},\\
\lesssim& \|u\|_{L^\infty} \|u\|_{H^2} \||\nabla|^{-1}\mathbb{P}\nabla \cdot \tau\|_{H^2},\\
\lesssim& \|u\|_{H^2} ^2 \||\nabla|^{-1}\mathbb{P}\nabla \cdot \tau\|_{H^2}. \end{split} \end{equation} Obviously, we can derive the estimate of $N_6$ as follows \begin{equation}\label{eqN6} \begin{split}
\int_{0}^t |N_6(t')| dt' \lesssim & \sup_{0 \leq t' \leq t} \|u(t')\|_{H^2}\int_{0}^t \|u\|_{H^2} \||\nabla|^{-1}\mathbb{P}\nabla \cdot \tau\|_{H^2 } dt'\\ \lesssim& \mathcal{E}_0^\frac{3}{2}(t). \end{split} \end{equation}
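The $L^2$-boundedness of $\mathbb{P}$ used above can be seen directly on the Fourier side: the symbol of $\mathbb{P}$ satisfies
\begin{equation}\nonumber
\widehat{\mathbb{P} v}(\xi) = \Big(\mathbb{I} - \frac{\xi \xi^\top}{|\xi|^2}\Big)\hat{v}(\xi),
\end{equation}
and for each $\xi \neq 0$ the matrix $\mathbb{I} - \xi\xi^\top/|\xi|^2$ is the orthogonal projection onto $\{\xi\}^\perp$, hence has operator norm at most $1$, so Plancherel's theorem gives $\|\mathbb{P} v\|_{L^2} \leq \|v\|_{L^2}$. Since this symbol is a real symmetric matrix, $\mathbb{P}$ is also self-adjoint on $L^2$, a fact used whenever $\mathbb{P}$ is moved from one factor of an inner product to the other.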
Now, we turn to the last term $N_7$. Using the self-adjointness of $\mathbb{P}$, the fact $\mathbb{P} u = u$ and the Leibniz rule in time, we rewrite $N_7$ as two parts, \begin{equation}\label{eq:2.1} \begin{split}
N_7 =& \sum_{k = 0}^2 \int \nabla^k |\nabla|^{-1}u_t \nabla^k |\nabla|^{-1}\nabla \cdot \tau \;dx\\
=&\sum_{k = 0}^2 \frac{d}{dt} \int \nabla^k |\nabla|^{-1}u \nabla^k |\nabla|^{-1} \nabla \cdot \tau \;dx- \sum_{k = 0}^2 \int \nabla^k |\nabla|^{-1}u \nabla^k |\nabla|^{-1} \nabla \cdot \tau_t \;dx. \end{split} \end{equation} According to the second equation of system \eqref{eq:1.1}, we have the following equality \begin{equation}\label{eqq} \nabla \cdot \tau_t + \nabla \cdot (u\cdot \nabla \tau) + \nabla \cdot Q(\tau, \nabla u) = \frac{1}{2}\Delta u. \end{equation} Applying this equality to the last part in \eqref{eq:2.1}, we shall get \begin{align*}
&\sum_{k = 0}^2 \int \nabla^k |\nabla|^{-1}u \nabla^k |\nabla|^{-1} \nabla \cdot \tau_t \;dx \\
= &\sum_{k = 0}^2 \int \nabla^k |\nabla|^{-1} u \nabla^k |\nabla|^{-1}\Big[ \frac{1}{2}\Delta u - \nabla \cdot (u\cdot \nabla \tau) - \nabla \cdot Q(\tau, \nabla u) \Big]dx. \end{align*} Then, applying H\"{o}lder inequality and Sobolev imbedding theorem, we obtain \begin{equation}\nonumber \begin{split}
&\Big|\sum_{k = 0}^2 \int \nabla^k |\nabla|^{-1}u \nabla^k |\nabla|^{-1}\nabla \cdot \tau_t \;dx \Big|\\
\lesssim& \|u\|_{H^2}^2 + \|u\|_{H^2}\|u\otimes \tau\|_{H^2} + \||\nabla|^{-1} u\|_{H^2}\|Q\|_{H^2}\\
\lesssim& \| u\|_{H^2}^2 + \| u\|_{H^2}^2\|\tau\|_{H^2}\\
&+ \||\nabla|^{-1} u\|_{H^2}\big(\|\nabla u\|_{L^\infty}\|\tau\|_{H^2} + \|\nabla^2 u\|_{L^6}\|\tau\|_{W^{1,3}} + \|\nabla^3 u\|_{L^2}\| \tau\|_{L^\infty}\big)\\
\lesssim& \| u\|_{H^2}^2 + \| u\|_{H^2}^2\|\tau\|_{H^2}+ \||\nabla|^{-1} u\|_{H^2}\|\nabla^2 u\|_{H^1}\|\tau\|_{H^2}. \end{split} \end{equation} Thus, we get \begin{equation}\label{eqN7} \begin{split}
\Big | \int_{0}^{t} N_7(t') dt' \Big | \lesssim& \sup_{0\leq t' \leq t} \||\nabla|^{-1} u(t')\|_{H^2} \|\tau(t')\|_{H^2} + \int_0^t \| u\|_{H^2}^2 dt'\\
&+\sup_{0 \leq t' \leq t}\|\tau(t')\|_{H^2} \int_0^t \| u\|_{H^2}^2 dt' \\
&+ \sup_{0\leq t' \leq t} \||\nabla|^{-1} u(t')\|_{H^2}\|\tau(t')\|_{H^2} \int_0^t \|\nabla^2 u\|_{H^1} dt'\\
\lesssim& \mathcal{E}_{0,1}(t) + \mathcal{E}_0^\frac{3}{2}(t) + \mathcal{E}_0(t)\mathcal{E}_2^\frac{1}{2}(t)\\ \lesssim& \mathcal{E}_{0,1}(t) + \mathcal{E}_0^\frac{3}{2}(t) + \mathcal{E}_2^\frac{3}{2}(t). \end{split} \end{equation}
Integrating \eqref{eq:2.3} with time, according to the estimates \eqref{eqN5}, \eqref{eqN6}, \eqref{eqN7} and Young inequality, we can get the estimate of $\mathcal{E}_{0,2}(t)$ as follows \begin{equation}\label{eqE02} \begin{split}
\mathcal{E}_{0,2}(t) =& \int_{0}^t \||\nabla|^{-1}\mathbb{P}\nabla \cdot \tau \|_{H^2}^2 dt'\\ \lesssim& \mathcal{E}_{0,1}(t) + \mathcal{E}_0^\frac{3}{2}(t) + \mathcal{E}_2^\frac{3}{2}(t). \end{split} \end{equation} We now combine the estimates of $\mathcal{E}_{0,1}(t)$ and $\mathcal{E}_{0,2}(t)$ to finish the proof of the lemma. Multiplying \eqref{eqE01} by a suitably large constant and adding \eqref{eqE02}, we finally obtain \begin{equation}\nonumber \mathcal{E}_0(t) \lesssim \mathcal{E}_0(0) + \mathcal{E}_0^\frac{3}{2}(t) + \mathcal{E}_2^\frac{3}{2}(t). \end{equation}
\end{proof}
Next, we consider the time-weighted energy $\mathcal{E}_2(t)$, which encodes the decay properties of higher-order norms of the solution, and give the following lemma. ~\\~\\~ \begin{lemma}\label{lem2} Let the energies be defined as in \eqref{energy1} and \eqref{energy2}. Then we have \begin{equation}\nonumber \mathcal{E}_2(t) \lesssim \mathcal{E}_0(t) + \mathcal{E}_0^\frac{3}{2}(t) + \mathcal{E}_2^\frac{3}{2}(t). \end{equation} \end{lemma}
\begin{proof} As in the proof of Lemma \ref{lem1}, we divide the proof into two parts. Define $\mathcal{E}_{2,1}$ and $\mathcal{E}_{2,2}$ as follows \begin{align*}
\mathcal{E}_{2,1}(t) =& \sup_{0 \leq t' \leq t} (1+t')^2(\|\nabla u(t')\|_{H^1}^2 + 2\| \mathbb{P}\nabla \cdot \tau(t')\|_{H^1}^2) + \int_{0}^{t} (1+t')^2\|\nabla^2 u(t')\|_{H^1}^2 \;dt', \\
\mathcal{E}_{2,2}(t) =& \int_{0}^{t} (1+t')^2 \|\nabla \mathbb{P}\nabla \cdot \tau \|_{L^2}^2 \; dt' . \end{align*} Then $\mathcal{E}_2 = \mathcal{E}_{2,1} + \mathcal{E}_{2,2}$; we first give the estimate of $\mathcal{E}_{2,1}$. ~\\~\\~ \textbf{First Step:} ~\\~\\~ Applying $\nabla^{k+1}$ $(k = 0, 1)$ to the first equation of system \eqref{eq:1.1} and $\nabla^k \mathbb{P} \nabla \cdot$ to the second equation of system \eqref{eq:1.1}, we get the following system \begin{equation}\label{eq:2.4} \begin{cases} \nabla^{k+1}u_t + \nabla^{k+1}(u\cdot \nabla u) - \nabla^{k+1}\Delta u + \nabla^{k+1}\nabla p = \nabla^{k+1}\nabla \cdot \tau, \\ \nabla^k \mathbb{P} \nabla \cdot\tau_t + \nabla^k \mathbb{P} \nabla \cdot(u\cdot \nabla \tau) + \nabla^k \mathbb{P} \nabla \cdot Q(\tau, \nabla u) = \frac{1}{2}\nabla^k \Delta u . \end{cases} \end{equation} In view of the coefficients in the above system, we take the inner product of the first equation of \eqref{eq:2.4} with $\nabla^{k+1} u$ and of the second equation with $2\nabla^k \mathbb{P} \nabla \cdot \tau$. Adding the time weight $(1+t)^2$, we get \begin{equation}\label{eq:2.5}
\frac{1}{2}\frac{d}{dt}(1+t)^2 \big(\|\nabla u(t)\|_{H^1}^2 + 2\| \mathbb{P}\nabla \cdot \tau(t)\|_{H^1}^2 \big)
+ (1+t)^2\|\nabla^2 u(t)\|_{H^1}^2 = M_1 + M_2 + M_3 + M_4 + M_5, \end{equation} where, \begin{equation}\nonumber \begin{split} M_1 =& (1+t)^2\sum_{k = 0}^1 \int \nabla^{k+1}\nabla \cdot \tau \nabla^{k+1} u + \nabla^k \Delta u \nabla^k \mathbb{P} \nabla \cdot \tau \; dx ,\\
M_2 =& (1+t)\big(\|\nabla u(t)\|_{H^1}^2 + 2\| \mathbb{P}\nabla \cdot \tau(t)\|_{H^1}^2 \big),\\ M_3 =& - (1+t)^2\sum_{k = 0}^1 \int \nabla^{k+1}(u\cdot \nabla u) \nabla^{k+1} u \;dx,\\ M_4 =& - 2(1+t)^2\sum_{k = 0}^1 \int \nabla^k \mathbb{P} \nabla \cdot(u\cdot \nabla \tau) \nabla^k \mathbb{P} \nabla \cdot \tau \;dx,\\ M_5 =& - 2(1+t)^2\sum_{k = 0}^1 \int \nabla^k \mathbb{P} \nabla \cdot Q(\tau, \nabla u) \nabla^k \mathbb{P} \nabla \cdot \tau \;dx. \end{split} \end{equation}
Similarly, we estimate each term on the right-hand side of \eqref{eq:2.5}. First, for the term $M_1$, using integration by parts, the self-adjointness of $\mathbb{P}$ and the divergence-free condition $\nabla \cdot u = 0$, we can compute \begin{equation}\label{eqM1} \begin{split} M_1 =& (1+t)^2\sum_{k = 0}^1 \int -\nabla^k \nabla \cdot \tau \nabla^k \Delta u + \nabla^k \Delta \mathbb{P} u \nabla^k \nabla \cdot \tau \;dx\\ =& (1+t)^2\sum_{k = 0}^1 \int -\nabla^k \nabla \cdot \tau \nabla^k \Delta u + \nabla^k \Delta u \nabla^k \nabla \cdot \tau \;dx\\ =& 0. \end{split} \end{equation}
For the term $M_2$, we can directly derive \begin{equation}\label{eqM2} \begin{split}
\int_{0}^t |M_2(t')| dt' \lesssim& \int_{0}^t (1+t')\big(\|\nabla u(t')\|_{H^1}^2 + 2\| \mathbb{P}\nabla \cdot \tau(t')\|_{H^1}^2\big) \;dt'\\ \lesssim& \mathcal{E}_1(t)\\ \lesssim& \mathcal{E}_0^\frac{1}{2}(t) \mathcal{E}_2^\frac{1}{2}(t). \end{split} \end{equation}
Using integration by parts, H\"{o}lder inequality and Sobolev imbedding theorem, we get the estimate of $M_3$ as follows \begin{equation}\nonumber \begin{split}
|M_3| \lesssim& (1+t)^2\|u \cdot \nabla u\|_{H^1} \|\nabla^2 u\|_{H^1}\\
\lesssim& (1+t)^2\|u\|_{W^{1,3}}\|\nabla u\|_{W^{1,6}}\|\nabla^2 u\|_{H^1}\\
\lesssim& (1+t)^2\|u\|_{H^2} \|\nabla^2 u\|_{H^1} ^2. \end{split} \end{equation} Hence, \begin{equation}\label{eqM3} \begin{split}
\int_0^t |M_3(t')| dt' \lesssim& \sup_{0 \leq t' \leq t} \|u(t')\|_{H^2}\int_0^t (1+t')^2 \|\nabla^2 u\|_{H^1} ^2 \;dt'\\ \lesssim& \mathcal{E}_0^\frac{1}{2}(t)\mathcal{E}_2(t). \end{split} \end{equation}
Now, we turn to the most delicate term $M_4$. Our strategy is to apply Proposition \ref{prop} and divide $M_4$ into three more tractable parts. We have $M_4 = M_{4,1}+M_{4,2}+M_{4,3}$, where \begin{equation}\nonumber \begin{split} M_{4,1} =& - 2(1+t)^2\sum_{k = 0}^1\int \nabla^k \mathbb{P} (u\cdot \nabla \mathbb{P}\nabla \cdot \tau) \nabla^k \mathbb{P} \nabla \cdot \tau dx,\\ M_{4,2} =& - 2(1+t)^2\sum_{k = 0}^1\int \nabla^k \mathbb{P} (\nabla u\cdot \nabla \tau) \nabla^k \mathbb{P} \nabla \cdot \tau dx,\\ M_{4,3} =& 2(1+t)^2\sum_{k = 0}^1\int \nabla^k \mathbb{P}\big(\nabla u\cdot \nabla \Delta^{-1} \nabla \cdot \nabla \cdot \tau \big) \nabla^k \mathbb{P} \nabla \cdot \tau dx. \end{split} \end{equation} Noticing that $\mathbb{P} \mathbb{P} = \mathbb{P}$ and that $\mathbb{P}$ is self-adjoint, and using integration by parts and the divergence-free condition, we can compute $M_{4,1}$ as follows \begin{equation}\nonumber \begin{split} M_{4,1} =& - 2(1+t)^2\sum_{k = 0}^1\int \nabla^k (u\cdot \nabla \mathbb{P}\nabla \cdot \tau) \nabla^k \mathbb{P}\mathbb{P} \nabla \cdot \tau \;dx \\ =& - 2(1+t)^2\sum_{k = 0}^1\int \nabla^k (u\cdot \nabla \mathbb{P}\nabla \cdot \tau) \nabla^k \mathbb{P} \nabla \cdot \tau \; dx\\ =& - 2(1+t)^2\int \nabla u\cdot \nabla \mathbb{P}\nabla \cdot \tau \; \nabla \mathbb{P} \nabla \cdot \tau \;dx. \end{split} \end{equation} Then, applying the H\"{o}lder inequality and the Sobolev imbedding theorem, we derive \begin{equation}\nonumber \begin{split}
|M_{4,1} | \lesssim& (1+t)^2\|\nabla u\cdot \nabla \mathbb{P}\nabla \cdot \tau\|_{L^2} \|\nabla \mathbb{P} \nabla \cdot \tau\|_{L^2}\\
\lesssim& (1+t)^2\|\nabla u\|_{L^\infty} \|\nabla \mathbb{P} \nabla \cdot \tau\|_{L^2}^2\\
\lesssim& (1+t)^2\|\nabla^2 u\|_{H^1} \|\nabla \mathbb{P} \nabla \cdot \tau\|_{L^2}^2 . \end{split} \end{equation}
For the estimate of $M_{4,2}$, we use the fact that the Riesz operator $\mathcal{R}_i = (-\Delta)^{-\frac{1}{2}} \nabla_i$ is bounded on $L^r$ for $1< r < \infty$. Hence, it also holds that $ \|\mathbb{P}v \|_{L^r} \lesssim \|v \|_{L^r}$ for any suitably regular vector field $v$. Using the divergence-free condition, the H\"{o}lder inequality and the Sobolev imbedding theorem we get \begin{equation}\nonumber \begin{split}
|M_{4,2}| \lesssim & (1+t)^2\big(\|\nabla u \; \tau\|_{L^2} \| \nabla \mathbb{P} \nabla \cdot \tau\|_{L^2} + \|\nabla(\nabla u \cdot \nabla \tau)\|_{L^2} \|\nabla \mathbb{P} \nabla \cdot \tau \|_{L^2}\big)\\
\lesssim& (1+t)^2 \|\tau\|_{H^1}\|\nabla^2 u\|_{L^2}\|\nabla \mathbb{P} \nabla \cdot \tau\|_{L^2}\\
&+(1+t)^2 \big(\|\nabla u\|_{L^\infty}\|\nabla^2 \tau\|_{L^2}+ \|\nabla^2 u\|_{L^6}\|\nabla \tau\|_{L^3} \big)\|\nabla \mathbb{P} \nabla \cdot \tau \|_{L^2}\\
\lesssim& (1+t)^2 \|\tau\|_{H^2} \|\nabla^2 u\|_{H^1} \|\nabla \mathbb{P} \nabla \cdot \tau \|_{L^2}. \end{split} \end{equation} We can use the same method in the estimate of $M_{4,3}$, \begin{equation}\nonumber
|M_{4,3}| \lesssim (1+t)^2 \|\tau\|_{H^2} \|\nabla^2 u\|_{H^1} \|\nabla \mathbb{P} \nabla \cdot \tau \|_{L^2}. \end{equation} Hence, \begin{equation}\label{eqM4} \begin{split}
\int_0^t |M_4(t')| dt' \lesssim& \sup_{0 \leq t'\leq t} \|\tau(t')\|_{H^2} \int_0^t (1+t')^2\|\nabla^2 u\|_{H^1} \|\nabla \mathbb{P} \nabla \cdot \tau \|_{L^2} \;dt'\\ \lesssim& \mathcal{E}_0^\frac{1}{2}(t) \mathcal{E}_2(t). \end{split} \end{equation}
For the last term $M_5$, using integration by parts, H\"{o}lder inequality and Sobolev imbedding theorem, we have \begin{equation}\nonumber \begin{split}
|M_5| \lesssim& (1+t)^2\big( \|Q\|_{L^2}\|\nabla \mathbb{P}\nabla \cdot \tau\|_{L^2}+ \|\nabla^2 Q\|_{L^2}\|\nabla \mathbb{P}\nabla \cdot \tau\|_{L^2}\big)\\
\lesssim& (1+t)^2\big(\|\nabla u\|_{L^\infty}\|\tau\|_{H^2}+ \|\nabla^2 u\|_{L^6} \|\nabla \tau\|_{L^3}+ \|\nabla^3 u\|_{L^2}\|\tau\|_{L^\infty}\big)\|\nabla \mathbb{P}\nabla \cdot \tau\|_{L^2}\\
\lesssim& (1+t)^2 \|\tau\|_{H^2}\|\nabla^2 u\|_{H^1}\|\nabla \mathbb{P}\nabla \cdot \tau\|_{L^2}. \end{split} \end{equation} Thus, we get the following \begin{equation}\label{eqM5} \begin{split}
\int_0^t |M_5(t')| dt' \lesssim& \sup_{0 \leq t' \leq t} \|\tau(t')\|_{H^2}\int_0^t (1+t')^2 \|\nabla^2 u\|_{H^1}\|\nabla \mathbb{P}\nabla \cdot \tau\|_{L^2} \;dt'\\ \lesssim& \mathcal{E}_0^\frac{1}{2}(t)\mathcal{E}_2(t). \end{split} \end{equation}
Integrating \eqref{eq:2.5} with time and applying the estimates of $M_1 \sim M_5$, i.e., \eqref{eqM1}, \eqref{eqM2}, \eqref{eqM3}, \eqref{eqM4} and \eqref{eqM5}, it holds that \begin{equation}\label{eqE20} \begin{split}
\mathcal{E}_{2,1}(t) =& \sup_{0 \leq t' \leq t} (1+t')^2 \big(\|\nabla u(t')\|_{H^1}^2 + 2\| \mathbb{P}\nabla \cdot \tau(t')\|_{H^1}^2 \big) + \int_{0}^{t} (1+t')^2 \|\nabla^2 u(t')\|_{H^1}^2 dt'\\ \lesssim& \mathcal{E}_0(t) + \mathcal{E}_0^\frac{1}{2}(t)\mathcal{E}_2^\frac{1}{2}(t) + \mathcal{E}_0^\frac{3}{2}(t) + \mathcal{E}_2^\frac{3}{2}(t). \end{split} \end{equation} ~\\~\\~ \textbf{Second Step:} ~\\~\\~ It remains to estimate $\mathcal{E}_{2,2}(t)$. Applying $\nabla \mathbb{P}$ to the first equation of system \eqref{eq:1.1}, we have the following equality \begin{equation}\nonumber \nabla u_t + \nabla \mathbb{P}(u\cdot \nabla u) - \nabla \Delta u = \nabla \mathbb{P}\nabla \cdot \tau. \end{equation} Then, taking the $L^2$ inner product with $(1+t)^2\nabla \mathbb{P} \nabla \cdot \tau$, we get \begin{equation}\label{eq:2.6}
(1+t)^2\|\nabla \mathbb{P}\nabla \cdot \tau\|_{L^2}^2 = M_6 + M_7 + M_8, \end{equation} where, \begin{equation}\nonumber \begin{split} M_6 =& -(1+t)^2 \int \nabla\Delta u \nabla\mathbb{P}\nabla \cdot \tau \;dx,\\ M_7=& (1+t)^2 \int \nabla\mathbb{P}(u\cdot \nabla u)\nabla\mathbb{P}\nabla \cdot \tau \;dx,\\ M_8=& (1+t)^2 \int \nabla u_t \nabla\mathbb{P}\nabla \cdot \tau \; dx. \end{split} \end{equation}
For the first term $M_6$, by H\"{o}lder inequality we directly know that \begin{equation}\nonumber
|M_6| \lesssim (1+t)^2\|\nabla^3 u\|_{L^2} \|\nabla \mathbb{P} \nabla \cdot \tau\|_{L^2}. \end{equation} Thus we have the following estimate \begin{equation}\label{eqM6}
\int_0^t |M_6(t')| dt' \lesssim \mathcal{E}_{2,1}^\frac{1}{2}(t) \mathcal{E}_{2,2}^\frac{1}{2}(t). \end{equation}
Obviously, we can get \begin{equation}\nonumber \begin{split}
|M_7| \lesssim& (1+t)^2 \big(\|\nabla u \nabla u\|_{L^2} + \|u\nabla^2 u\|_{L^2}\big)\|\nabla \mathbb{P}\nabla \cdot \tau\|_{L^2}\\
\lesssim& (1+t)^2\big(\|\nabla u\|_{L^\infty}\|\nabla u\|_{L^2}+ \|u\|_{L^\infty}\|\nabla^2 u\|_{L^2} \big)\|\nabla \mathbb{P}\nabla \cdot \tau\|_{L^2}\\
\lesssim& (1+t)^2\|u\|_{H^2} \|\nabla^2 u\|_{H^1}\|\nabla \mathbb{P}\nabla \cdot \tau\|_{L^2}. \end{split} \end{equation} Thus, \begin{equation}\label{eqM7} \begin{split}
\int_0^t |M_7(t')| dt' \lesssim& \sup_{0 \leq t' \leq t}\|u(t')\|_{H^2} \int_{0}^{t} (1+t')^2 \|\nabla^2 u\|_{H^1}\|\nabla \mathbb{P}\nabla \cdot \tau\|_{L^2} dt'\\ \lesssim& \mathcal{E}_0^\frac{1}{2}(t) \mathcal{E}_2(t). \end{split} \end{equation}
Now, we turn to the last term $M_8$. Using the Leibniz rule in time, we first rewrite this term in the following form \begin{equation}\nonumber \begin{split} M_8 =& \frac{d}{dt}\Big[ (1+t)^2 \int \nabla u \; \nabla \mathbb{P}\nabla \cdot \tau \;dx \Big] -2(1+t) \int \nabla u \;\nabla\mathbb{P}\nabla \cdot \tau \;dx\\ &-(1+t)^2 \int \nabla u \;\nabla\mathbb{P}\nabla \cdot \tau_t \;dx. \end{split} \end{equation} Applying \eqref{eqq} to the last part of $M_8$, it becomes \begin{align*} & -(1+t)^2 \int \nabla u \; \nabla\mathbb{P}\nabla \cdot \tau_t \;dx \\ = &-(1+t)^2 \int \nabla u \; \nabla \Big[\frac{1}{2}\Delta u - \mathbb{P}\nabla \cdot (u\cdot \nabla \tau) - \mathbb{P} \nabla \cdot Q(\tau,\nabla u)\Big] \;dx. \end{align*} Hence, using integration by parts, the H\"{o}lder inequality and the Sobolev imbedding theorem, we have the following estimates \begin{equation}\nonumber \begin{split}
&(1+t)^2 \Big| \int \nabla u \; \nabla\mathbb{P}\nabla \cdot \tau_t \;dx \Big|\\
\lesssim& (1+t)^2\|\nabla^2 u\|_{L^2}\big(\|\nabla^2 u\|_{L^2}+\|\mathbb{P}\nabla\cdot (u\cdot \nabla \tau)\|_{L^2}+\|\nabla \cdot Q\|_{L^2}\big)\\
\lesssim& (1+t)^2\|\nabla^2 u\|_{L^2}\big(\|\nabla^2 u\|_{L^2}+\|\mathbb{P}\nabla\cdot (u\cdot \nabla \tau)\|_{L^2}+
\|\nabla u\|_{L^\infty}\|\nabla \tau\|_{L^2}+\|\nabla^2 u\|_{L^2}\|\tau\|_{L^\infty}\big)\\
\lesssim& (1+t)^2\|\nabla^2 u\|_{L^2}\big(\|\nabla^2 u\|_{L^2}+\|\mathbb{P}\nabla\cdot (u\cdot \nabla \tau)\|_{L^2}+
\|\nabla^2 u\|_{H^1}\| \tau\|_{H^2}\big). \end{split} \end{equation}
Using the same strategy as in the estimate of $M_4$, we apply Proposition \ref{prop} to $\|\mathbb{P}\nabla\cdot (u\cdot \nabla \tau)\|_{L^2}$ and get the following \begin{equation}\nonumber \begin{split}
\|\mathbb{P}\nabla\cdot (u\cdot \nabla \tau)\|_{L^2} \lesssim &
\|u\cdot \nabla \mathbb{P}\nabla \cdot \tau\|_{L^2}+\|\nabla u\cdot \nabla \tau\|_{L^2}
+\|\nabla u\cdot \nabla \Delta^{-1} \nabla \cdot \nabla \cdot \tau\|_{L^2}\\
\lesssim& \|u\|_{L^\infty}\|\nabla \mathbb{P}\nabla \cdot \tau\|_{L^2}+\|\nabla u\|_{L^\infty} \|\nabla \tau\|_{L^2}\\
\lesssim& \|u\|_{H^2}\|\nabla \mathbb{P}\nabla \cdot \tau\|_{L^2}+\|\nabla \tau\|_{L^2}\|\nabla^2 u\|_{H^1}. \end{split} \end{equation} Thus, \begin{equation}\nonumber \begin{split}
&(1+t)^2 \Big| \int \nabla u \; \nabla \mathbb{P}\nabla \cdot \tau_t \;dx \Big|\\
\lesssim& (1+t)^2\|\nabla^2 u\|_{L^2}\big(\|\nabla^2 u\|_{L^2}+\|u\|_{H^2}\|\nabla \mathbb{P}\nabla \cdot \tau\|_{L^2}+
\|\nabla^2 u\|_{H^1}\| \tau\|_{H^2}\big). \end{split} \end{equation} And then we can get the estimate of $M_8$ \begin{equation}\label{eqM8} \begin{split}
\Big|\int_0^t (1+t')^2 M_8(t') \;dt'\Big| \lesssim & \sup_{0\leq t'\leq t} (1+t')^2 \|\nabla u\|_{L^2}\|\nabla \mathbb{P}\nabla \cdot \tau\|_{L^2}\\
&+ \int_0^t (1+t') \|\nabla u\|_{L^2} \|\nabla \mathbb{P}\nabla \cdot \tau\|_{L^2} \;dt'\\
&+\int_0^t (1+t')^2 \|\nabla^2 u\|_{L^2}^2\; dt'\\
&+ \sup_{0 \leq t'\leq t} \|u(t')\|_{H^2}\int_0^{t} (1+t')^2 \|\nabla^2 u\|_{L^2}\|\nabla \mathbb{P}\nabla \cdot \tau\|_{L^2}\;dt'\\
&+\sup_{0 \leq t'\leq t} \|\tau(t')\|_{H^2}\int_0^{t} (1+t')^2 \|\nabla^2 u\|_{H^1}^2 \;dt'\\
\lesssim& \mathcal{E}_{2,1}(t) + \mathcal{E}_0^\frac{1}{2}(t)\mathcal{E}_2^\frac{1}{2}(t) + \mathcal{E}_0^\frac{1}{2}(t)\mathcal{E}_2(t). \end{split} \end{equation}
Integrating \eqref{eq:2.6} in time, applying the estimates of $M_6 \sim M_8$, i.e., \eqref{eqM6}, \eqref{eqM7}, \eqref{eqM8}, and the Young inequality, we finally get \begin{equation}\label{eqE21} \mathcal{E}_{2,2}(t) \lesssim \mathcal{E}_{2,1}(t) + \mathcal{E}_0(t) + \mathcal{E}_0^\frac{1}{2}(t) \mathcal{E}_2^\frac{1}{2}(t) + \mathcal{E}_0^\frac{1}{2}(t)\mathcal{E}_2(t). \end{equation} We now combine the estimates of $\mathcal{E}_{2,1}(t)$ and $\mathcal{E}_{2,2}(t)$ to finish the proof of this lemma. Multiplying \eqref{eqE20} by a suitably large number and adding \eqref{eqE21}, we get \begin{equation}\nonumber \mathcal{E}_2(t) \lesssim \mathcal{E}_0(t) + \mathcal{E}_0^\frac{1}{2}(t) \mathcal{E}_2^\frac{1}{2}(t) + \mathcal{E}_0^\frac{3}{2}(t) + \mathcal{E}_2^\frac{3}{2}(t). \end{equation} We complete the proof of this lemma by applying the Young inequality to the above inequality. \end{proof}
\section{Proof of the Theorem \ref{thm}} In this section, we will combine the above \textit{a priori} estimates of $\mathcal{E}_0$ and $\mathcal{E}_2$ and give the proof of Theorem \ref{thm}. First, we define the total energy $\mathcal{E}(t) = \mathcal{E}_0(t) + \mathcal{E}_2(t)$. By the estimates in Lemma \ref{lem1} and Lemma \ref{lem2}, we have \begin{equation}\label{eq:4.1} \mathcal{E}(t) \leq C_1 \mathcal{E}_0(0) + C_1\mathcal{E}^\frac{3}{2}(t), \end{equation}
for some positive constant $C_1$. Under the assumptions on the initial data in Theorem \ref{thm}, there exists a small enough number $\varepsilon$ such that $\mathcal{E}(0), C_1\mathcal{E}_0(0) \leq \varepsilon$. By the local existence theory, which can be established through a standard energy method (see \cite{CM} for instance), there exists a positive time $T$ such that \begin{equation}\label{eqEtotal2} \mathcal{ E}(t) \leq 2 \varepsilon , \quad \forall \; t \in [0, T]. \end{equation} Let $T^{*}$ be the largest time $T$ for which \eqref{eqEtotal2} holds. Now, we only need to show $T^{*} = \infty$. By the estimate of the total energy \eqref{eq:4.1}, we can use
a standard continuation argument to show $T^{*} = \infty$, provided that $\varepsilon$ is small enough. We omit the details here. Hence, we finish the proof of Theorem \ref{thm}.
\end{document}
Solving some constraints on a subgroup of a Lie group [closed]
Let $M$ be a $3\times 3$ matrix. I am interested in finding all group elements $g$ of the Lie group SU(3) such that
$$ g^T M g =M. $$
Example 1. Let $$ M[1]= \left( \begin{array}{ccc} 0 & 1 & 0 \\ -1 & 0 & 0 \\ 0 & 0 & 0 \\ \end{array} \right),$$ then we can find at least a subgroup $SU(2) \subset SU(3)$ whose elements $g$ satisfy $$ g^T M[1] g =M[1], $$ with $$ g = \exp\left(\theta\sum_{k=1}^{3} i t_k \frac{\sigma_k}{2}\right) =\cos(\frac{\theta}{2})+i \sum_{k=1}^{3} t_k \sigma_k\sin(\frac{\theta}{2})$$
\begin{align} \sigma_1 = \sigma_x = \begin{pmatrix} 0&1 & 0\\ 1&0 & 0\\ 0&0 & 0 \end{pmatrix}, \sigma_2 = \sigma_y = \begin{pmatrix} 0&-i& 0\\ i&0& 0\\ 0 & 0& 0 \end{pmatrix}, \sigma_3 = \sigma_z = \begin{pmatrix} 1&0& 0\\ 0&-1& 0\\ 0 & 0 & 0 \end{pmatrix} \,. \end{align} Notice that any group element of $SU(2)$ can be parametrized by some $\theta$ and a unit vector $(t_1,t_2,t_3)$. Also, $\theta$ has periodicity $[0,4 \pi)$.
Example 2. Let $$ M[1]= \left( \begin{array}{ccc} 0 & 1 & 0 \\ -1 & 0 & 0 \\ 0 & 0 & 0 \\ \end{array} \right), M[2]= \left( \begin{array}{ccc} 0 & 0 & 0 \\ 0 & 0 & 1 \\ 0 & -1 & 0 \\ \end{array} \right), M[3]=\left( \begin{array}{ccc} 0 & 0 & 1 \\ 0 & 0 & 0 \\ -1 & 0 & 0 \\ \end{array} \right),$$
Can we find a subgroup $G \subset SU(3)$ such that for every $g \in G$ $$ g^T \{M[1], M[2], M[3]\} g =\{M[1], M[2], M[3]\}? $$ This means that $g^TM[a]g=M[b]$, which may transform $a$ to a different value $b$, but overall the full set $ \{M[1], M[2], M[3]\}$ is invariant under the transformation.
Example 3. Let $$ M[1]= \left( \begin{array}{ccc} 0 & 1 & 0 \\ -1 & 0 & 0 \\ 0 & 0 & 0 \\ \end{array} \right), M[2]= \left( \begin{array}{ccc} 0 & 0 & 0 \\ 0 & 0 & 1 \\ 0 & -1 & 0 \\ \end{array} \right), M[3]=\left( \begin{array}{ccc} 0 & 0 & 1 \\ 0 & 0 & 0 \\ -1 & 0 & 0 \\ \end{array} \right),$$ $$ M[4]= -\left( \begin{array}{ccc} 0 & 1 & 0 \\ -1 & 0 & 0 \\ 0 & 0 & 0 \\ \end{array} \right), M[5]= -\left( \begin{array}{ccc} 0 & 0 & 0 \\ 0 & 0 & 1 \\ 0 & -1 & 0 \\ \end{array} \right), M[6]=-\left( \begin{array}{ccc} 0 & 0 & 1 \\ 0 & 0 & 0 \\ -1 & 0 & 0 \\ \end{array} \right),$$
Can we find a subgroup $G \subset SU(3)$ such that for every $g \in G$ $$ g^T \{M[1], M[2], M[3],M[4], M[5], M[6]\} g =\{M[1], M[2], M[3],M[4], M[5], M[6]\}? $$ This means that $g^TM[a]g=M[b]$, which may transform $a$ to a different value $b$, but overall the full set $ \{M[1], M[2], M[3],M[4], M[5], M[6]\}$ is invariant under the transformation.
How can we write a Mathematica .nb file to solve this?
(Of course, there is always a trivial solution, the identity $g=\left( \begin{array}{ccc} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \\ \end{array} \right)$, but what are the other solutions?)
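Although the question asks for a Mathematica notebook, the invariance claimed in Example 1 is easy to sanity-check numerically. The following Python/NumPy sketch (an illustration, not a Mathematica solution) builds a random element of the embedded $SU(2)$ and verifies $g^T M[1] g = M[1]$. The identity holds because for any $2\times 2$ block $A$ with $\det A = 1$, one has $A^T \epsilon A = \det(A)\,\epsilon = \epsilon$, where $\epsilon$ is the upper $2\times 2$ block of $M[1]$.

```python
import numpy as np

# 2x2 Pauli matrices
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

M1 = np.array([[0, 1, 0], [-1, 0, 0], [0, 0, 0]], dtype=complex)

rng = np.random.default_rng(0)
theta = rng.uniform(0, 4*np.pi)
t = rng.normal(size=3)
t /= np.linalg.norm(t)  # unit vector (t1, t2, t3)

# SU(2) element embedded in SU(3): it acts on the first two coordinates only
block = np.cos(theta/2)*np.eye(2) + 1j*np.sin(theta/2)*(t[0]*sx + t[1]*sy + t[2]*sz)
g = np.eye(3, dtype=complex)
g[:2, :2] = block

print(np.allclose(g.T @ M1 @ g, M1))  # True
```

The same check, looped over the sets in Examples 2 and 3, would test whether a candidate $g$ permutes the matrices $M[a]$.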
equation-solving algebraic-manipulation algebra group-theory constraint
wonderich
closed as off-topic by Daniel Lichtblau, Henrik Schumacher, m_goldberg, José Antonio Díaz Navas, Edmund Apr 10 '18 at 0:01
The question does not concern the technical computing software Mathematica by Wolfram Research. Please see the help center to find out about the topics that can be asked here.
Lie operad
In mathematics, the Lie operad is an operad whose algebras are Lie algebras. The notion (at least one version) was introduced by Ginzburg & Kapranov (1994) in their formulation of Koszul duality.
Definition à la Ginzburg–Kapranov
Fix a base field k and let ${\mathcal {Lie}}(x_{1},\dots ,x_{n})$ denote the free Lie algebra over k with generators $x_{1},\dots ,x_{n}$ and ${\mathcal {Lie}}(n)\subset {\mathcal {Lie}}(x_{1},\dots ,x_{n})$ the subspace spanned by all the bracket monomials containing each $x_{i}$ exactly once. The symmetric group $S_{n}$ acts on ${\mathcal {Lie}}(x_{1},\dots ,x_{n})$ by permutations of the generators and, under that action, ${\mathcal {Lie}}(n)$ is invariant. The operadic composition is given by substituting expressions (with renumbered variables) for variables. Then, ${\mathcal {Lie}}=\{{\mathcal {Lie}}(n)\}$ is an operad.[1]
Koszul dual
The Koszul-dual of ${\mathcal {Lie}}$ is the commutative-ring operad, an operad whose algebras are the commutative rings over k.
Notes
1. Ginzburg & Kapranov 1994, § 1.3.9.
References
• Ginzburg, Victor; Kapranov, Mikhail (1994), "Koszul duality for operads", Duke Mathematical Journal, 76 (1): 203–272, doi:10.1215/S0012-7094-94-07608-4, MR 1301191
External links
• Todd Trimble, Notes on operads and the Lie operad
• https://ncatlab.org/nlab/show/Lie+operad
# Limits and Continuity
Limits and continuity are fundamental concepts in calculus. Understanding them is crucial for solving problems and applying calculus in various fields.
A function is said to be continuous at a point $a$ if the values of the function approach $f(a)$ as $x$ approaches $a$. In mathematical terms, a function $f(x)$ is continuous at $a$ if:
$$\lim_{x \to a} f(x) = f(a)$$
Continuity is a local property of a function. A function is said to be continuous on an interval if it is continuous at each point of the interval.
Consider the function $f(x) = \frac{x^2 - 1}{x - 1}$. We want to determine if $f(x)$ is continuous at $x = 1$.

$$\lim_{x \to 1} f(x) = \lim_{x \to 1} \frac{x^2 - 1}{x - 1}$$

Factoring the numerator as $(x - 1)(x + 1)$ and canceling the common factor, we can find the limit:

$$\lim_{x \to 1} \frac{x^2 - 1}{x - 1} = \lim_{x \to 1} (x + 1) = 2$$

Although the limit exists, $f(1)$ itself is undefined: substituting $x = 1$ gives the indeterminate form $\frac{0}{0}$. Since $\lim_{x \to 1} f(x)$ does not equal a defined value $f(1)$, the function $f(x)$ is not continuous at $x = 1$; it has a removable discontinuity there.
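This limit can be checked symbolically with SymPy (a quick sketch; SymPy is introduced in detail later in this text):

```python
from sympy import symbols, limit

x = symbols('x')
f = (x**2 - 1) / (x - 1)

print(limit(f, x, 1))  # 2 -- the limit exists
print(f.subs(x, 1))    # nan -- direct substitution gives the 0/0 form
```

The limit is 2, but direct substitution fails, which is exactly the signature of a removable discontinuity.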
## Exercise
Determine if the following function is continuous at $x = 0$:
$$f(x) = \begin{cases} 1 & \text{if } x \text{ is rational} \\ 0 & \text{if } x \text{ is irrational} \end{cases}$$
### Solution
The function $f(x)$ is not continuous at $x = 0$. To see this, consider the sequences $x_n = \frac{1}{n}$ and $y_n = \sqrt{2} \cdot \frac{1}{n}$. Both sequences approach $0$ as $n \to \infty$, but $f(x_n) = 1$ for all $n$, while $f(y_n) = 0$ for all $n$. Since the limits of $f(x_n)$ and $f(y_n)$ are different, $f(x)$ is not continuous at $x = 0$.
# Derivatives: Definition, Properties, and Applications
Derivatives are a fundamental concept in calculus. They measure the rate of change of a function and have numerous applications in various fields.
The derivative of a function $f(x)$ with respect to $x$ is denoted as $f'(x)$ and is defined as the limit of the difference quotient:
$$f'(x) = \lim_{h \to 0} \frac{f(x + h) - f(x)}{h}$$
The derivative of a function has several important properties. For example, the product rule states that if $u(x)$ and $v(x)$ are functions, then:
$$(uv)' = u'v + uv'$$
The chain rule states that if $u(x)$ and $v(x)$ are functions, and $y = u(v(x))$, then:
$$y' = u'(v(x))v'(x)$$
The power rule states that if $f(x) = x^n$, then:
$$f'(x) = nx^{n-1}$$
Find the derivative of the function $f(x) = x^3 + 2x^2 - 3x$.
Using the power rule, we can find the derivative:
$$f'(x) = 3x^2 + 4x - 3$$
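As a quick check, SymPy (introduced later in this text) confirms the result:

```python
from sympy import symbols, diff

x = symbols('x')
result = diff(x**3 + 2*x**2 - 3*x, x)
print(result)  # 3*x**2 + 4*x - 3
```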
## Exercise
Find the derivative of the function $f(x) = \sin(2x)$.
Using the chain rule, we can find the derivative:
$$f'(x) = 2\cos(2x)$$
# Integrals: Definition, Properties, and Applications
Integrals are a fundamental concept in calculus. They measure the area under a curve and have numerous applications in various fields.
The definite integral of a function $f(x)$ with respect to $x$ over the interval $[a, b]$ is denoted as $\int_a^b f(x) dx$ and is defined as the limit of a sum:
$$\int_a^b f(x) dx = \lim_{n \to \infty} \sum_{i=1}^n f(x_i) \Delta x$$
where $\Delta x = \frac{b - a}{n}$ and $x_i = a + i\Delta x$.
The definite integral has several important properties. For example, the fundamental theorem of calculus states that if $f(x)$ is continuous on $[a, b]$ and $F(x)$ is its antiderivative, then:
$$\int_a^b f(x) dx = F(b) - F(a)$$
The second part of the fundamental theorem of calculus states that if $f(x)$ is continuous on $[a, b]$, then the function $F(x) = \int_a^x f(t) dt$ is an antiderivative of $f$; that is:

$$\frac{d}{dx}\int_a^x f(t) dt = f(x)$$
Find the definite integral of the function $f(x) = x^2$ over the interval $[0, 2]$.
Using the power rule, we can find the definite integral:
$$\int_0^2 x^2 dx = \frac{2^3}{3} - \frac{0^3}{3} = \frac{8}{3}$$
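SymPy confirms the value (a quick sketch; SymPy is introduced later in the text):

```python
from sympy import symbols, integrate, Rational

x = symbols('x')
result = integrate(x**2, (x, 0, 2))
print(result)  # 8/3
```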
## Exercise
Find the definite integral of the function $f(x) = \sin(x)$ over the interval $[0, \pi]$.
Using the fundamental theorem of calculus, we can find the definite integral:
$$\int_0^\pi \sin(x) dx = -(\cos(\pi) - \cos(0)) = 2$$
# Techniques of Integration
There are several techniques for integrating functions in calculus. These techniques include substitution, integration by parts, partial fractions, and trigonometric integrals.
Substitution is a technique for integrating functions that contain a composite expression whose inner derivative also appears as a factor. For example, to integrate the function $f(x) = 2xe^{x^2}$, we can substitute $u = x^2$ and $du = 2x dx$, which gives $\int e^u du = e^{x^2} + C$.
Integration by parts is a technique for integrating functions that involve a product of two functions. For example, to integrate the function $f(x) = x\sin(x)$, we can integrate by parts with $u = x$ and $dv = \sin(x) dx$.
Partial fractions is a technique for integrating functions that involve rational functions. For example, to integrate the function $f(x) = \frac{x^2 + 1}{x(x + 1)}$, we can use partial fractions to rewrite the function as a sum of simpler functions.
Trigonometric integrals are a class of integrals that involve trigonometric functions. For example, to integrate the function $f(x) = \sin^2(x)$, we can use the identity $\sin^2(x) = \frac{1 - \cos(2x)}{2}$.
Integrate the function $f(x) = \frac{x^2 + 1}{x(x + 1)}$.
Since the numerator and denominator have the same degree, we first divide and then use partial fractions to rewrite the function:

$$f(x) = \frac{x^2 + 1}{x(x + 1)} = 1 + \frac{1 - x}{x(x + 1)} = 1 + \frac{1}{x} - \frac{2}{x + 1}$$

Now, we can integrate each term separately:

$$\int 1 \, dx = x + C, \qquad \int \frac{1}{x} dx = \ln|x| + C, \qquad \int \frac{2}{x + 1} dx = 2\ln|x + 1| + C$$

so that

$$\int f(x) \, dx = x + \ln|x| - 2\ln|x + 1| + C$$
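SymPy's `apart` function performs the partial-fraction decomposition automatically, and `integrate` confirms the antiderivative (a quick sketch):

```python
from sympy import symbols, apart, integrate, diff, simplify

x = symbols('x')
f = (x**2 + 1) / (x*(x + 1))

decomposed = apart(f)
antiderivative = integrate(f, x)
print(decomposed)      # 1 + 1/x - 2/(x + 1)
print(antiderivative)  # x + log(x) - 2*log(x + 1)
```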
## Exercise
Integrate the function $f(x) = x\sin(x)$.
Using integration by parts with $u = x$ and $dv = \sin(x) dx$ (so $du = dx$ and $v = -\cos(x)$), we can integrate the function:

$$\int x\sin(x) dx = -x\cos(x) + \int \cos(x) dx = -x\cos(x) + \sin(x) + C$$
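SymPy confirms the integration-by-parts result:

```python
from sympy import symbols, sin, cos, integrate

x = symbols('x')
result = integrate(x*sin(x), x)
print(result)  # -x*cos(x) + sin(x)
```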
# Applications of Calculus: Optimization, Related Rates, and Sequences
Optimization is the process of finding the maximum or minimum value of a function. For example, to find the maximum value of the function $f(x) = x^2 - 4x + 3$ on the interval $[1, 3]$, we can use calculus to find the critical points and check the endpoints.
Related rates is a concept in calculus that deals with the relationship between two variables that change at different rates. For example, if a car is moving at a constant speed and a person is running at a constant speed, we can use calculus to determine the rate at which the distance between the car and the person is changing.
Sequences are a type of mathematical object that consists of a list of numbers. Calculus can be used to study the behavior of sequences, including the convergence of sequences and the limit of a sequence.
Find the maximum value of the function $f(x) = x^2 - 4x + 3$ on the interval $[1, 3]$.
Using calculus, we can find the critical points and check the endpoints:

$$f'(x) = 2x - 4$$

Setting $f'(x) = 0$ gives the single critical point $x = 2$, which lies inside $[1, 3]$. Evaluating $f$ at the critical point and at the endpoints:

$$f(1) = 0, \qquad f(2) = -1, \qquad f(3) = 0$$

Since the parabola opens upward, $x = 2$ is a minimum. The maximum value of $f(x)$ on $[1, 3]$ is therefore $0$, attained at both endpoints $x = 1$ and $x = 3$.
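The endpoint-and-critical-point analysis can be automated with SymPy (a sketch):

```python
from sympy import symbols, diff, solve

x = symbols('x')
f = x**2 - 4*x + 3

critical = [c for c in solve(diff(f, x), x) if 1 <= c <= 3]
candidates = [1, 3] + critical        # endpoints plus interior critical points
values = {c: f.subs(x, c) for c in candidates}
print(values)  # f(1) = 0, f(3) = 0, f(2) = -1 -> maximum 0, minimum -1
```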
## Exercise
Find the limit of the sequence $a_n = \frac{1}{n}$.
Using calculus, we can determine the limit of the sequence:
$$\lim_{n \to \infty} \frac{1}{n} = 0$$
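SymPy can evaluate this sequence limit directly (treating $n$ as a continuous positive variable):

```python
from sympy import symbols, limit, oo

n = symbols('n', positive=True)
print(limit(1/n, n, oo))  # 0
```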
# Introduction to SymPy and its Integration with Calculus
SymPy is a Python library for symbolic mathematics. It allows us to perform various mathematical computations, including calculus.
To use SymPy in Python, you first need to install it. You can do this using pip:
```
pip install sympy
```
Once you have installed SymPy, you can import it into your Python script:
```python
from sympy import symbols, diff, integrate, limit, solve, dsolve, Eq, Function, sin, exp
```
Now you can use SymPy to perform calculus tasks, such as finding derivatives, integrals, limits, and solving equations.
Find the derivative of the function $f(x) = x^2 + 2x - 3$ using SymPy.
```python
x = symbols('x')
f = x**2 + 2*x - 3
f_prime = diff(f, x)
print(f_prime)
```
## Exercise
Find the definite integral of the function $f(x) = x^2$ over the interval $[0, 2]$ using SymPy.
```python
x = symbols('x')
f = x**2
F = integrate(f, (x, 0, 2))
print(F)
```
# Solving Differential Equations with SymPy
SymPy can also be used to solve differential equations. Differential equations are equations that involve derivatives of functions.
To solve a differential equation using SymPy, you first need to define the function and its derivatives, and then use SymPy's dsolve function to find the solution (the solve function handles algebraic equations, not differential ones).
Solve the differential equation $y'' - 4y' + 3y = 0$ using SymPy.
```python
x = symbols('x')
y = Function('y')
y_prime = diff(y(x), x)
y_double_prime = diff(y_prime, x)
eq = Eq(y_double_prime - 4*y_prime + 3*y(x), 0)
sol = dsolve(eq, y(x))
print(sol)
```
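A solution returned by `dsolve` can be verified by substituting it back into the equation with SymPy's `checkodesol` (a quick sketch):

```python
from sympy import symbols, Function, Eq, diff, dsolve, checkodesol

x = symbols('x')
y = Function('y')
eq = Eq(diff(y(x), x, 2) - 4*diff(y(x), x) + 3*y(x), 0)
sol = dsolve(eq, y(x))
print(sol)                   # y(x) = C1*exp(x) + C2*exp(3*x)
print(checkodesol(eq, sol))  # (True, 0)
```

`checkodesol` returns `(True, 0)` when the residual of the substituted solution simplifies to zero.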
## Exercise
Solve the differential equation $y' - 2y = 0$ using SymPy.
```python
x = symbols('x')
y = Function('y')
y_prime = diff(y(x), x)
eq = Eq(y_prime - 2*y(x), 0)
sol = dsolve(eq, y(x))
print(sol)
```
# Numerical Integration and Differentiation with SymPy
SymPy can also be used for numerical integration and differentiation. Numerical integration involves approximating the value of an integral using numerical methods, while numerical differentiation involves approximating the value of a derivative using numerical methods.
To perform numerical integration or differentiation using SymPy, you can use the `integrate` and `diff` functions with the `evalf` method.
Find the numerical value of the integral of the function $f(x) = x^2 + 2x - 3$ over the interval $[0, 2]$ using SymPy.
```python
x = symbols('x')
f = x**2 + 2*x - 3
F = integrate(f, (x, 0, 2))
F_num = F.evalf()
print(F_num)
```
## Exercise
Find the numerical value of the derivative of the function $f(x) = x^2$ at $x = 2$ using SymPy.
```python
x = symbols('x')
f = x**2
f_prime = diff(f, x)
f_prime_num = f_prime.subs(x, 2).evalf()
print(f_prime_num)
```
# Taylor Series and SymPy
Taylor series are a mathematical concept that involves approximating a function as a power series. SymPy can be used to find the Taylor series of a function.
To find the Taylor series of a function using SymPy, you first need to define the function, and then use SymPy's series function to find the Taylor series.
Find the Taylor series of the function $f(x) = e^x$ centered at $x = 0$ using SymPy.
```python
x = symbols('x')
f = exp(x)
series = f.series(x, 0, 5)
print(series)
```
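We can compare the truncated Taylor polynomial with the true value of $e^x$ at a nearby point (a quick numeric sketch):

```python
import math
from sympy import symbols, exp

x = symbols('x')
poly = exp(x).series(x, 0, 5).removeO()  # 1 + x + x**2/2 + x**3/6 + x**4/24
approx = float(poly.subs(x, 0.1))
print(approx, math.exp(0.1))  # both approximately 1.10517
```

The degree-4 polynomial already matches $e^{0.1}$ to about seven decimal places, since the remainder is of order $x^5$.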
## Exercise
Find the Taylor series of the function $f(x) = \sin(x)$ centered at $x = 0$ using SymPy.
```python
x = symbols('x')
f = sin(x)
series = f.series(x, 0, 5)
print(series)
```
# Linear Algebra with SymPy
SymPy can also be used for linear algebra tasks, such as finding the determinant of a matrix, solving linear equations, and performing matrix operations.
To perform linear algebra tasks using SymPy, you can use the `Matrix` class and its associated methods.
Find the determinant of the matrix $A = \begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix}$ using SymPy.
```python
from sympy import Matrix
A = Matrix([[1, 2], [3, 4]])
det_A = A.det()
print(det_A)
```
## Exercise
Solve the linear system $Ax = b$ using SymPy, where $A = \begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix}$ and $b = \begin{bmatrix} 5 \\ 6 \end{bmatrix}$.
```python
from sympy import Matrix
A = Matrix([[1, 2], [3, 4]])
b = Matrix([5, 6])
x = A.solve(b)
print(x)
```
# Differential Equations and Their Applications
Differential equations are equations that involve derivatives of functions. They have numerous applications in various fields, including physics, engineering, and economics.
SymPy can be used to solve differential equations and find their solutions.
Solve the differential equation $y'' - 4y' + 3y = 0$ using SymPy.
```python
x = symbols('x')
y = Function('y')
y_prime = diff(y(x), x)
y_double_prime = diff(y_prime, x)
eq = Eq(y_double_prime - 4*y_prime + 3*y(x), 0)
sol = dsolve(eq, y(x))
print(sol)
```
## Exercise
Solve the differential equation $y' - 2y = 0$ using SymPy.
```python
x = symbols('x')
y = Function('y')
y_prime = diff(y(x), x)
eq = Eq(y_prime - 2*y(x), 0)
sol = dsolve(eq, y(x))
print(sol)
```
# Ordinary Differential Equations and Their Applications
Ordinary differential equations (ODEs) are a class of differential equations that involve derivatives of functions with respect to a single variable. They have numerous applications in various fields, including physics, engineering, and economics.
SymPy can be used to solve ordinary differential equations and find their solutions.
Solve the ordinary differential equation $y' - 2y = 0$ using SymPy.
```python
x = symbols('x')
y = Function('y')
y_prime = diff(y(x), x)
eq = Eq(y_prime - 2*y(x), 0)
sol = dsolve(eq, y(x))
print(sol)
```
## Exercise
Solve the ordinary differential equation $y'' - 4y' + 3y = 0$ using SymPy.
```python
x = symbols('x')
y = Function('y')
y_prime = diff(y(x), x)
y_double_prime = diff(y_prime, x)
eq = Eq(y_double_prime - 4*y_prime + 3*y(x), 0)
sol = dsolve(eq, y(x))
print(sol)
```
# Partial Differential Equations and Their Applications
Partial differential equations (PDEs) are a class of differential equations that involve derivatives of functions with respect to multiple variables. They have numerous applications in various fields, including physics, engineering, and economics.
SymPy's built-in support for solving PDEs directly is limited: the `pdsolve` function handles certain first-order PDEs, but there is no general PDE solver. However, you can use SymPy to find the characteristics of a PDE, which can help you solve the PDE numerically.
Find the characteristics of the PDE $u_x + u_y = 0$ using SymPy.
```python
x, y = symbols('x y')
u = Function('u')
u_x = diff(u(x, y), x)
u_y = diff(u(x, y), y)
eq = Eq(u_x + u_y, 0)
sol = solve(eq, (u_x, u_y))
print(sol)
```
## Exercise
Find the characteristics of the PDE $u_{xx} - u_{xy} = 0$ using SymPy.
```python
x, y = symbols('x y')
u = Function('u')
u_xx = diff(u(x, y), x, x)
u_xy = diff(u(x, y), x, y)
eq = Eq(u_xx - u_xy, 0)
sol = solve(eq, (u_xx, u_xy))
print(sol)
```
# Numerical Methods for Solving Differential Equations
Numerical methods are algorithms that approximate the solution of a differential equation using numerical techniques. SymPy can be used to implement various numerical methods for solving differential equations.
Some common numerical methods for solving differential equations include the Euler method, the Runge-Kutta method, and the finite difference method.
Implement the Euler method for the initial value problem $y' = 2y$, $y(0) = 1$ (equivalently $y' - 2y = 0$), and compare the result at $t = 1$ with the exact value $e^2$.
```python
# plain-Python Euler iteration for y' = 2*y, y(0) = 1, on [0, 1]
n = 1000
h = 1.0 / n
y = 1.0  # initial condition y(0) = 1
for i in range(n):
    y += h * 2 * y  # Euler step
print(y)  # (1 + 2/n)**n ~ 7.374, close to the exact e**2 ~ 7.389
```
## Exercise
Implement the fourth-order Runge-Kutta method for the same initial value problem $y' = 2y$, $y(0) = 1$, and compare the result at $t = 1$ with $e^2$.
```python
# classical RK4 for y' = 2*y, y(0) = 1, on [0, 1]
n = 100
h = 1.0 / n
f = lambda t, y: 2 * y  # right-hand side of y' = 2*y
t, y = 0.0, 1.0
for i in range(n):
    k1 = f(t, y)
    k2 = f(t + h/2, y + h*k1/2)
    k3 = f(t + h/2, y + h*k2/2)
    k4 = f(t + h, y + h*k3)
    y += h * (k1 + 2*k2 + 2*k3 + k4) / 6
    t += h
print(y)  # ~ 7.389056, very close to e**2
```
# Applications of Differential Equations in Various Fields
Differential equations have numerous applications in various fields, including physics, engineering, and economics.
Some examples of differential equations in physics include the wave equation, the heat equation, and the Schrödinger equation.
Some examples of differential equations in engineering include the heat transfer equation, the diffusion equation, and the Navier-Stokes equations.
Some examples of differential equations in economics include the supply and demand model, the consumption-savings model, and the capital accumulation model.
Set up the wave equation $u_{tt} - c^2u_{xx} = 0$ in SymPy and recall its general (d'Alembert) solution.
```python
x, t, c = symbols('x t c')
u = Function('u')
u_t = diff(u(x, t), t)
u_tt = diff(u_t, t)
u_x = diff(u(x, t), x)
u_xx = diff(u_x, x)
eq = Eq(u_tt - c**2*u_xx, 0)
print(eq)
# pdsolve cannot handle second-order PDEs like this one; the general
# solution is d'Alembert's u(x, t) = F(x - c*t) + G(x + c*t)
```
## Exercise
Set up the heat equation $u_t - c^2u_{xx} = 0$ in SymPy.
```python
x, t, c = symbols('x t c')
u = Function('u')
u_t = diff(u(x, t), t)
u_x = diff(u(x, t), x)
u_xx = diff(u_x, x)
eq = Eq(u_t - c**2*u_xx, 0)
print(eq)
# this second-order PDE is also beyond pdsolve; it is usually solved by
# separation of variables or numerically
```
# Applications of Differential Equations in Real-World Problems
Differential equations can be used to model various real-world problems, including the motion of objects, the spread of diseases, and the population dynamics of organisms.
Some examples of real-world problems that can be modeled using differential equations include the motion of a ball thrown in the air, the spread of a fire in a forest, and the population growth of a species.
Model the motion of a ball thrown vertically: its velocity satisfies the differential equation $v'(t) = -g$, whose solution is $v(t) = v_0 - gt$, where $v_0$ is the initial velocity.
```python
t, g = symbols('t g')
v = Function('v')
eq = Eq(diff(v(t), t), -g)  # constant gravitational deceleration
sol = dsolve(eq, v(t))
print(sol)  # v(t) = C1 - g*t; the constant C1 plays the role of v0
```
## Exercise
Model the spread of a fire in a forest using the differential equation $u_t = Du_x$ with $D$ representing the spread rate of the fire.
```python
from sympy import pdsolve  # handles first-order linear PDEs such as this one
x, t, D = symbols('x t D')
u = Function('u')
u_t = diff(u(x, t), t)
u_x = diff(u(x, t), x)
eq = Eq(u_t - D*u_x, 0)
sol = pdsolve(eq, u(x, t))
print(sol)  # u is constant along the characteristics x + D*t = const
```
# Applications of Differential Equations in Data Science and Machine Learning
Differential equations can also be used in data science and machine learning to model various phenomena and make predictions.
Some examples of differential equations in data science and machine learning include the logistic growth model, the Lotka-Volterra model, and the Black-Scholes model.
Model the logistic growth of a population using the differential equation $u_t = r(1 - u/ | Textbooks |
A note on negative λ-binomial distribution
Yuankui Ma1 &
Taekyun Kim1,2
In this paper, we introduce one discrete random variable, namely the negative λ-binomial random variable. We deduce the expectation of the negative λ-binomial random variable. We also get the variance and explicit expression for the moments of the negative λ-binomial random variable.
In a sequence of independent Bernoulli trials, let the random variable X denote the trial at which the rth success occurs, where r is a fixed positive integer. Then
$$ P(X=x)={\binom{x-1}{r-1}}p^{r}(1-p)^{x-r}, \quad x=r,r+1,r+2,\ldots , $$
and we say that X has a negative binomial distribution with parameters \((r,p)\) (see [1–3, 12, 13]).
The negative binomial distribution is sometimes defined in terms of the random variable Y, the number of failures before the rth success. This formulation is statistically equivalent to one given above in terms of X denoting the trial at which the rth success occurs, since \(Y=X-r\). The alternative form of the negative binomial distribution is
$$ p(k)=P(Y=k)={\binom{r+k-1}{k}}p^{r}(1-p)^{k},\quad k=0,1,2,\ldots , $$
where p is the probability of success in the trial (see [1, 3, 12, 13]).
It is known that the degenerate exponential function is defined by
$$ e_{\lambda }^{x}(t)=(1+\lambda t)^{\frac{x}{\lambda }}=\sum_{n=0}^{ \infty }(x)_{n,\lambda } \frac{t^{n}}{n!}, \quad \lambda \in \mathbb{R}, $$
$$ (x)_{0,\lambda }=1,\qquad (x)_{n,\lambda }=x(x-\lambda ) \cdots \bigl(x-(n-1) \lambda \bigr)\quad (n\ge 1)\ (\text{see [5--7, 10, 11]}). $$
Recently, λ-analogue of binomial coefficients was considered by Kim to be
$$ {\binom{x}{0}}_{\lambda }=1,\qquad { \binom{x}{n}}_{\lambda }= \frac{(x)_{n,\lambda }}{n!}= \frac{x(x-\lambda )\cdots (x-(n-1)\lambda )}{n!}\quad (n\ge 1)\ ( \text{see [6, 8, 9]}). $$
In this paper, we consider the negative λ-binomial distribution and obtain expressions for its moments.
Negative λ-binomial distribution
\(Y_{\lambda }\) is the negative λ-binomial random variable if the probability mass function of \(Y_{\lambda }\) with parameters \((r,p)\) is given by
$$ p_{\lambda }(k)=P_{\lambda }(Y_{\lambda }=k)={ \binom{r+(k-1)\lambda }{k}}_{\lambda }e_{\lambda }^{r}(p-1) (1-p)^{k}, $$
where λ∈ (0,1) and p is the probability of success in the trials.
$$ {\binom{r+(k-1)\lambda }{k}}_{\lambda }=(-1)^{k}{ \binom{-r}{k}}_{\lambda },\quad k\ge 0\ (\text{see [4]}) $$
$$\begin{aligned} \sum_{k=0}^{\infty }p_{\lambda }(k) =& \sum_{k=0}^{\infty }{ \binom{r+(k-1)\lambda }{k}}_{\lambda }(1-p)^{k} e_{\lambda }^{r}(p-1) \\ =&e_{\lambda }^{r}(p-1)e_{\lambda }^{-r}(p-1)=1. \end{aligned}$$
From (4), we note that
$$ \lim_{\lambda \rightarrow 1}p_{\lambda }(k) $$
is the probability mass function of the negative binomial random variable with parameters \((r,p)\), and

$$ \lim_{\lambda \rightarrow 0}p_{\lambda }(k) $$

is the probability mass function of the Poisson random variable with parameter \(r(1-p)\).
Let X be a discrete random variable, and let \(f(x)\) be a real-valued function. Then we have
$$ E\bigl(f(X)\bigr)=\sum_{x}f(x)p(x), $$
where \(p(x)\) is the probability mass function of \(X\).
$$\begin{aligned} E(Y_{\lambda }) =&\sum_{k=0}^{\infty }kp_{\lambda }(k)= \sum_{k=0}^{ \infty }k{\binom{r+(k-1)\lambda }{k}}_{\lambda }(1-p)^{k} e_{\lambda }^{r}(p-1) \\ =&\frac{r}{e_{\lambda }^{\lambda }(p-1)}\sum_{k=1}^{\infty } \frac{(r+(k-1)\lambda )\cdots (r+\lambda )}{(k-1)!}(1-p)^{k} e_{\lambda }^{r+\lambda }(p-1) \\ =&\frac{r}{e_{\lambda }^{\lambda }(p-1)}\sum_{k=0}^{\infty } \frac{(r+k\lambda )\cdots (r+\lambda )}{k!}(1-p)^{k+1} e_{\lambda }^{r+ \lambda }(p-1) \\ =&\frac{r(1-p)}{e_{\lambda }^{\lambda }(p-1)}\sum_{k=0}^{\infty }{ \binom{r+\lambda +(k-1)\lambda }{k}}_{\lambda }(1-p)^{k} e_{\lambda }^{r+ \lambda }(p-1) \\ =&\frac{r(1-p)}{e_{\lambda }^{\lambda }(p-1)}e_{\lambda }^{-(r+\lambda )}(p-1)e_{\lambda }^{r+\lambda }(p-1) \\ =&\frac{r(1-p)}{e_{\lambda }^{\lambda }(p-1)}. \end{aligned}$$
Therefore, by (10), we obtain the following theorem.
Let \(Y_{\lambda }\) be a negative λ-binomial random variable with parameters \((r,p)\). Then we have
$$ E(Y_{\lambda })=\frac{r(1-p)}{e_{\lambda }^{\lambda }(p-1)}. $$
$$ \lim_{\lambda \rightarrow 1}E(Y_{\lambda })=\frac{r(1-p)}{p}=E(Y), $$
where Y is the negative binomial random variable with parameters \((r,p)\).
$$ \lim_{\lambda \rightarrow 0}E(Y_{\lambda })=r(1-p)=E(Y), $$
where Y is the Poisson random variable with parameter \(r(1-p)\).
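As a numerical sanity check (an illustration, not part of the original derivation), the following Python sketch builds the probability masses $p_\lambda(k)$ from the ratio $p_\lambda(k+1)/p_\lambda(k) = (r + k\lambda)(1-p)/(k+1)$, starting from $p_\lambda(0) = e_\lambda^r(p-1) = (1+\lambda(p-1))^{r/\lambda}$, and confirms that they sum to $1$ and that the mean matches Theorem 2.1, using $e_\lambda^\lambda(p-1) = 1+\lambda(p-1)$.

```python
def neg_lambda_binomial_pmf(r, p, lam, kmax):
    # p_lambda(0) = e_lambda^r(p-1) = (1 + lam*(p-1))**(r/lam)
    probs = [(1 + lam*(p - 1))**(r/lam)]
    for k in range(kmax):
        # ratio p(k+1)/p(k) = (r + k*lam)*(1 - p)/(k + 1)
        probs.append(probs[-1] * (r + k*lam) * (1 - p) / (k + 1))
    return probs

r, p, lam = 3, 0.6, 0.3
probs = neg_lambda_binomial_pmf(r, p, lam, 400)
total = sum(probs)
mean = sum(k*q for k, q in enumerate(probs))
print(total)                                  # ~ 1.0
print(mean, r*(1 - p) / (1 + lam*(p - 1)))    # both ~ 1.363636
```

The truncation at kmax = 400 is harmless here, since the tail decays geometrically like $(\lambda(1-p))^k$.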
Now, we observe that
$$\begin{aligned} E\bigl(Y^{2}_{\lambda }\bigr) =&\sum _{k=0}^{\infty }k^{2}p_{\lambda }(k)= \sum_{k=0}^{ \infty }k(k+1-1)p_{\lambda }(k) \\ =&\sum_{k=0}^{\infty }k(k-1)p_{\lambda }(k)+ \sum_{k=0}^{\infty }kp_{\lambda }(k) \\ =&\sum_{k=0}^{\infty }k(k-1){ \binom{r+(k-1)\lambda }{k}}_{\lambda }(1-p)^{k} e_{\lambda }^{r}(p-1)+E(Y_{\lambda }) \\ =&\frac{r(r+\lambda )}{e_{\lambda }^{2\lambda }(p-1)}\sum_{k=2}^{ \infty } \frac{(r+(k-1)\lambda )\cdots (r+2\lambda )}{(k-2)!}(1-p)^{k} e_{\lambda }^{r+2\lambda }(p-1)+E(Y_{\lambda }) \\ =&\frac{r(r+\lambda )}{e_{\lambda }^{2\lambda }(p-1)}\sum_{k=0}^{ \infty }{ \binom{r+(k+1)\lambda }{k}}_{\lambda }(1-p)^{k+2} e_{\lambda }^{r+2 \lambda }(p-1)+E(Y_{\lambda }) \\ =&\frac{r(r+\lambda )(1-p)^{2}}{e_{\lambda }^{2\lambda }(p-1)}\sum_{k=0}^{ \infty }{ \binom{r+2\lambda +(k-1)\lambda }{k}}_{\lambda }(1-p)^{k} e_{\lambda }^{r+2\lambda }(p-1)+E(Y_{\lambda }) \\ =&\frac{r(r+\lambda )(1-p)^{2}}{e_{\lambda }^{2\lambda }(p-1)}e_{\lambda }^{-(r+2\lambda )}(p-1)e_{\lambda }^{r+2\lambda }(p-1)+E(Y_{\lambda }) \\ =&\frac{r(r+\lambda )(1-p)^{2}}{e_{\lambda }^{2\lambda }(p-1)}+ \frac{r(1-p)}{e_{\lambda }^{\lambda }(p-1)}. \end{aligned}$$
The variance of random variable X is defined by
$$ \operatorname{Var}(X)=E\bigl(X^{2}\bigr)-\bigl[E(X) \bigr]^{2}\quad (\text{see [1, 3]}). $$
From Theorem 2.1, (11), and (12), we note that
$$\begin{aligned} \operatorname{Var}(Y_{\lambda }) =&E\bigl(Y_{\lambda }^{2} \bigr)-\bigl[E(Y_{\lambda })\bigr]^{2} \\ =&\frac{r(r+\lambda )(1-p)^{2}}{e_{\lambda }^{2\lambda }(p-1)}+ \frac{r(1-p)}{e_{\lambda }^{\lambda }(p-1)}- \frac{r^{2}(1-p)^{2}}{e_{\lambda }^{2\lambda }(p-1)} \\ =&\frac{r(1-p)^{2}}{e_{\lambda }^{2\lambda }(p-1)}(r+\lambda -r)+ \frac{r(1-p)}{e_{\lambda }^{\lambda }(p-1)} \\ =&\lambda \frac{r(1-p)^{2}}{e_{\lambda }^{2\lambda }(p-1)}+ \frac{r(1-p)}{e_{\lambda }^{\lambda }(p-1)}. \end{aligned}$$
Therefore, we obtain the following theorem.
Theorem 2.2 Let \(Y_{\lambda }\) be the negative λ-binomial random variable with parameters \((r,p)\). Then we have
$$ \operatorname{Var}(Y_{\lambda })=\lambda \frac{r(1-p)^{2}}{e_{\lambda }^{2\lambda }(p-1)}+ \frac{r(1-p)}{e_{\lambda }^{\lambda }(p-1)}. $$
$$ \lim_{\lambda \rightarrow 1}\operatorname{Var}(Y_{\lambda })= \frac{r(1-p)}{p^{2}}=\operatorname{Var}(Y), $$
where Y is the negative binomial random variable with parameters \((r,p)\).
$$ \lim_{\lambda \rightarrow 0}\operatorname{Var}(Y_{\lambda })=r(1-p)= \operatorname{Var}(Y), $$
where Y is the Poisson random variable with parameter \(r(1-p)\).
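As a quick numerical sanity check of the mean and variance formulas above (this sketch is ours, not part of the paper): it assumes the standard degenerate exponential \(e_{\lambda }^{x}(t)=(1+\lambda t)^{x/\lambda }\) and the pmf \(p_{\lambda }(k)\) of the negative λ-binomial random variable, which this excerpt uses but does not restate; the parameter values are arbitrary, subject to \(\lambda (1-p)<1\) for convergence of the series.

```python
# Sanity check of E(Y_lambda) and Var(Y_lambda) (a sketch; it assumes the
# standard definitions e_lambda^x(t) = (1 + lambda*t)^(x/lambda) and
# p_lambda(k) = binom(r+(k-1)lambda, k)_lambda (1-p)^k e_lambda^r(p-1)).
r, p, lam = 2.0, 0.6, 0.5            # need lam*(1-p) < 1 for convergence

e_lam = lambda x, t: (1.0 + lam * t) ** (x / lam)   # degenerate exponential

# Build the pmf iteratively: the term ratio is p(k+1)/p(k) = (r+k*lam)(1-p)/(k+1).
pmf, term = [], e_lam(r, p - 1.0)    # term starts at p_lambda(0)
for k in range(400):
    pmf.append(term)
    term *= (r + k * lam) * (1.0 - p) / (k + 1)

total = sum(pmf)
mean = sum(k * q for k, q in enumerate(pmf))
second = sum(k * k * q for k, q in enumerate(pmf))
var = second - mean ** 2

mean_formula = r * (1 - p) / e_lam(lam, p - 1.0)                 # Theorem 2.1
var_formula = (lam * r * (1 - p) ** 2 / e_lam(2 * lam, p - 1.0)
               + r * (1 - p) / e_lam(lam, p - 1.0))              # Theorem 2.2

assert abs(total - 1.0) < 1e-10      # pmf sums to 1
assert abs(mean - mean_formula) < 1e-8
assert abs(var - var_formula) < 1e-8
```

For these particular parameters the closed forms give mean \(=1\) and variance \(=1.25\), and the truncated series reproduces both to machine precision.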
It is well known that
$$ k^{n}=\sum_{l=0}^{n}S_{2}(n,l) (k)_{l}, $$
where \(S_{2}(n,l)\) is the Stirling number of the second kind, and
$$ (k)_{0}=1,\qquad (k)_{l}=k(k-1)\cdots (k-l+1)\quad (l\ge 1)\ (\text{see [14, 15]}). $$
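Since the derivation below leans on this identity, here is a small self-contained check (ours, not the authors'): it verifies \(k^{n}=\sum_{l}S_{2}(n,l)(k)_{l}\) by computing \(S_{2}\) from its standard recurrence. The helper names are illustrative only.

```python
def stirling2(n, l):
    """Stirling number of the second kind via the standard recurrence
    S(n, l) = l*S(n-1, l) + S(n-1, l-1)."""
    if n == 0 and l == 0:
        return 1
    if n == 0 or l == 0:
        return 0
    return l * stirling2(n - 1, l) + stirling2(n - 1, l - 1)

def falling(k, l):
    """Falling factorial (k)_l = k(k-1)...(k-l+1), with (k)_0 = 1."""
    out = 1
    for j in range(l):
        out *= k - j
    return out

# k^n equals sum_{l=0}^{n} S_2(n, l) * (k)_l for all n and k
for n in range(7):
    for k in range(10):
        assert k ** n == sum(stirling2(n, l) * falling(k, l)
                             for l in range(n + 1))
```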
From (13), we note that
$$\begin{aligned} E\bigl(Y^{n}_{\lambda }\bigr) =&\sum _{k=0}^{\infty }k^{n}p_{\lambda }(k)= \sum_{l=0}^{n}S_{2}(n,l) \sum_{k=l}^{\infty }(k)_{l} p_{\lambda }(k) \\ =&\sum_{l=0}^{n}S_{2}(n,l) \sum_{k=l}^{\infty }(k)_{l} { \binom{r+(k-1)\lambda }{k}}_{\lambda }(1-p)^{k} e_{\lambda }^{r}(p-1) \\ =&\sum_{l=0}^{n}S_{2}(n,l) \frac{r(r+\lambda )\cdots (r+(l-1)\lambda )}{e_{\lambda }^{l\lambda }(p-1)} \\ &{}\times\sum_{k=l}^{\infty } \frac{(r+(k-1)\lambda )\cdots (r+l\lambda )}{(k-l)!}(1-p)^{k} e_{\lambda }^{r+l\lambda }(p-1) \\ =&\sum_{l=0}^{n}S_{2}(n,l) \frac{r(r+\lambda )\cdots (r+(l-1)\lambda )}{e_{\lambda }^{l\lambda }(p-1)} \\ &{}\times\sum_{k=0}^{\infty } \frac{(r+(k+l-1)\lambda )\cdots (r+l\lambda )}{k!}(1-p)^{k+l} e_{\lambda }^{r+l\lambda }(p-1) \\ =&\sum_{l=0}^{n}S_{2}(n,l) \frac{r(r+\lambda )\cdots (r+(l-1)\lambda )}{e_{\lambda }^{l\lambda }(p-1)} \\ &{}\times\sum_{k=0}^{\infty }{ \binom{r+(k+l-1)\lambda }{k}}_{\lambda }(1-p)^{k+l}e_{\lambda }^{r+l\lambda }(p-1) \\ =&\sum_{l=0}^{n}S_{2}(n,l) \frac{r(r+\lambda )\cdots (r+(l-1)\lambda )(1-p)^{l}}{e_{\lambda }^{l\lambda }(p-1)} \\ &{}\times\sum_{k=0}^{\infty }{ \binom{r+l\lambda + (k-1)\lambda }{k}}_{\lambda }(1-p)^{k} e_{\lambda }^{r+l\lambda }(p-1) \\ =&\sum_{l=0}^{n}S_{2}(n,l) \frac{r(r+\lambda )\cdots (r+(l-1)\lambda )(1-p)^{l}}{e_{\lambda }^{l\lambda }(p-1)} e_{\lambda }^{-r-l\lambda }(p-1) e_{\lambda }^{r+l\lambda }(p-1) \\ =&\sum_{l=0}^{n}S_{2}(n,l) \frac{r(r+\lambda )\cdots (r+(l-1)\lambda )(1-p)^{l}}{e_{\lambda }^{l\lambda }(p-1)} \\ =&\sum_{l=0}^{n}S_{2}(n,l) \frac{(r+(l-1)\lambda )_{l,_{\lambda }}(1-p)^{l}}{e_{\lambda }^{l\lambda }(p-1)}. \end{aligned}$$
Therefore, we obtain the following theorem.
Theorem 2.3 Let \(Y_{\lambda }\) be the negative λ-binomial random variable with parameters \((r,p)\). Then we have
$$ E\bigl(Y^{n}_{\lambda }\bigr)=\sum _{l=0}^{n}S_{2}(n,l) \frac{(r+(l-1)\lambda )_{l,\lambda }(1-p)^{l}}{e_{\lambda }^{l\lambda }(p-1)}. $$
$$ \lim_{\lambda \rightarrow 1}E\bigl(Y^{n}_{\lambda }\bigr)= \sum_{l=0}^{n}S_{2}(n,l) \frac{(r+(l-1))_{l}(1-p)^{l}}{p^{l}}=E\bigl(Y^{n}\bigr), $$
where Y is the negative binomial random variable with parameters \((r,p)\) (see [4, 12]).
$$ \lim_{\lambda \rightarrow 0}E\bigl(Y^{n}_{\lambda }\bigr)= \sum_{l=0}^{n}S_{2}(n,l) \bigl(r(1-p)\bigr)^{l}=E\bigl(Y^{n}\bigr), $$
where Y is the Poisson random variable with parameter \(r(1-p)\) (see [16]).
$$\begin{aligned} E\bigl(Y^{n}_{\lambda }\bigr) =&\sum _{k=0}^{\infty }k^{n} p_{\lambda }(k) \\ =&\sum_{k=0}^{\infty }k^{n}{ \binom{r+(k-1)\lambda }{k}}_{\lambda }(1-p)^{k} e_{\lambda }^{r}(p-1) \\ =&\sum_{k=1}^{\infty }k^{n-1} \frac{(r+(k-1)\lambda )\cdots (r+\lambda )r}{(k-1)!}(1-p)^{k} e_{\lambda }^{r}(p-1) \\ =&\sum_{k=0}^{\infty }(k+1)^{n-1} \frac{(r+k\lambda )\cdots (r+\lambda )r}{k!}(1-p)^{k+1} e_{\lambda }^{r}(p-1) \\ =&r(1-p)\sum_{k=0}^{\infty }\sum _{i=0}^{n-1}{\binom{n-1}{i}}k^{i} \frac{(r+k\lambda )\cdots (r+\lambda )}{k!}(1-p)^{k} e_{\lambda }^{r}(p-1) \\ =&\frac{r(1-p)}{e_{\lambda }^{\lambda }(p-1)}\sum_{i=0}^{n-1}{ \binom{n-1}{i}}\sum_{k=0}^{\infty }k^{i}{ \binom{r+\lambda +(k-1)\lambda }{k}}_{\lambda }(1-p)^{k} e_{\lambda }^{r+ \lambda }(p-1) \\ =&\frac{r(1-p)}{e_{\lambda }^{\lambda }(p-1)}\sum_{i=0}^{n-1}{ \binom{n-1}{i}}E\bigl(Z^{i}_{\lambda }\bigr), \end{aligned}$$
where \(Z_{\lambda }\) is the negative λ-binomial random variable with parameters \((r+\lambda ,p)\).
Let \(Y_{\lambda }\), \(Z_{\lambda }\) be two negative λ-binomial random variables with parameters \((r,p)\), \((r+\lambda ,p)\) respectively. Then we have
$$ E\bigl(Y^{n}_{\lambda }\bigr)=\frac{r(1-p)}{e_{\lambda }^{\lambda }(p-1)}\sum _{i=0}^{n-1}{ \binom{n-1}{i}}E \bigl(Z^{i}_{\lambda }\bigr). $$
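The recursion in the theorem above can likewise be checked numerically. The following is an illustrative sketch (ours, not the authors'), again assuming the series definition of the pmf and the degenerate exponential \(e_{\lambda }^{x}(t)=(1+\lambda t)^{x/\lambda }\) from earlier in the paper; the truncation bound and parameter values are arbitrary.

```python
from math import comb

def moments(r, p, lam, nmax, kmax=400):
    """Moments E(X^n), n = 0..nmax, of a negative lambda-binomial variable,
    computed by truncating the series (assumes lam*(1-p) < 1)."""
    term = (1.0 + lam * (p - 1.0)) ** (r / lam)   # p_lambda(0) = e_lambda^r(p-1)
    mom = [0.0] * (nmax + 1)
    for k in range(kmax):
        for n in range(nmax + 1):
            mom[n] += (k ** n) * term
        term *= (r + k * lam) * (1.0 - p) / (k + 1)
    return mom

r, p, lam = 2.0, 0.6, 0.5
mY = moments(r, p, lam, 4)           # moments of Y_lambda with (r, p)
mZ = moments(r + lam, p, lam, 3)     # moments of Z_lambda with (r+lam, p)
factor = r * (1.0 - p) / (1.0 + lam * (p - 1.0))   # r(1-p)/e_lambda^lambda(p-1)

# Check E(Y^n) = factor * sum_i C(n-1, i) E(Z^i) for n = 1..4.
for n in range(1, 5):
    rhs = factor * sum(comb(n - 1, i) * mZ[i] for i in range(n))
    assert abs(mY[n] - rhs) < 1e-7
```

For \(n=2\) the recursion reduces to \(E(Y_{\lambda }^{2})=\frac{r(1-p)}{e_{\lambda }^{\lambda }(p-1)}\,(1+E(Z_{\lambda }))\), which agrees with the variance computed from Theorem 2.2.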
In this paper, we introduced a new discrete random variable, namely the negative λ-binomial random variable. The details and results are as follows. We defined the negative λ-binomial random variable with parameters \((r,p)\) in (4) and deduced its expectation in Theorem 2.1. We also obtained its variance in Theorem 2.2 and derived an explicit expression for the moments of the negative λ-binomial random variable in Theorem 2.3.
Alexander, H.W.: Recent publications: introduction to probability and mathematical statistics. Am. Math. Mon. 70(2), 222–223 (1963)
Bayad, A., Chikhi, J.: Apostol–Euler polynomials and asymptotics for negative binomial reciprocals. Adv. Stud. Contemp. Math. (Kyungshang) 24(1), 33–37 (2014)
Carlitz, L.: Comment on the paper "Some probability distributions and their associated structures". Math. Mag. 37(1), 51–53 (1964)
Funkenbusch, W.: On writing the general term coefficient of the binomial expansion to negative and fractional powers, in tri-factorial form. Natl. Math. Mag. 17(7), 308–310 (1943)
Kim, D.S., Kim, T.: A note on a new type of degenerate Bernoulli numbers. Russ. J. Math. Phys. 27(2), 227–235 (2020)
Kim, T.: λ-Analogue of Stirling numbers of the first kind. Adv. Stud. Contemp. Math. (Kyungshang) 27(3), 423–429 (2017)
Kim, T., Kim, D.S.: Degenerate Laplace transform and degenerate gamma function. Russ. J. Math. Phys. 24(2), 241–248 (2017)
Kim, T., Kim, D.S.: Degenerate Bernstein polynomials. Rev. R. Acad. Cienc. Exactas Fís. Nat., Ser. A Mat. 113(3), 2913–2920 (2019)
Kim, T., Kim, D.S.: Correction to: Degenerate Bernstein polynomials. Rev. R. Acad. Cienc. Exactas Fís. Nat., Ser. A Mat. 113(3), 2921–2922 (2019)
Kim, T., Kim, D.S.: Note on the degenerate gamma function. Russ. J. Math. Phys. 27 (3), 352–358 (2020)
Kim, T., Kim, D.S., Jang, L.C., Kim, H.Y.: A note on discrete degenerate random variables. Proc. Jangjeon Math. Soc. 23(1), 125–135 (2020)
Rider, P.R.: Classroom notes: the negative binomial distribution and the incomplete beta function. Am. Math. Mon. 69(4), 302–304 (1962)
Ross, S.M.: Introduction to Probability Models. Twelfth edition of [MR0328973]. Academic Press, London (2019). ISBN 978-0-12-814346-9
Simsek, Y.: Identities on the Changhee numbers and Apostol-type Daehee polynomials. Adv. Stud. Contemp. Math. (Kyungshang) 27(2), 199–212 (2017)
Simsek, Y.: Combinatorial inequalities and sums involving Bernstein polynomials and basis functions. J. Inequal. Spec. Funct. 8(3), 15–24 (2017)
Theodorescu, R., Borwein, J.M.: Problems and solutions: solutions: moments of the Poisson distribution: 10738. Am. Math. Mon. 107(7), 659 (2000)
The authors thank Jangjeon Institute for Mathematical Science for the support of this research.
This research was funded by the National Natural Science Foundation of China (No. 11871317, 11926325, 11926321).
School of Science, Xi'an Technological University, Xi'an, 710021, Shaanxi, People's Republic of China
Yuankui Ma & Taekyun Kim
Department of Mathematics, Kwangwoon University, Seoul, 139-701, Republic of Korea
Taekyun Kim
All authors contributed equally to the manuscript and typed, read, and approved the final manuscript.
Correspondence to Taekyun Kim.
The authors declare that there are no ethical issues regarding the publication of this paper.
All authors consent to the publication of this paper in this journal.
Ma, Y., Kim, T. A note on negative λ-binomial distribution. Adv Differ Equ 2020, 569 (2020). https://doi.org/10.1186/s13662-020-03030-z
What does "non-linear processing" mean, exactly?
In the following sentence from "How does the brain solve visual object recognition?" by DiCarlo et al.:
In sum, our view is that the "output" of the ventral stream is reflexively expressed in neuronal firing rates across a short interval of time (~50 ms), is an "explicit" object representation (i.e., object identity is easily decodable), and the rapid production of this representation is consistent with a largely feedforward, non-linear processing of the visual input.
I'm familiar with "feed-forward" but not with the meaning of "non-linear processing" in a neuroscience context. What does "non-linear processing" mean, exactly?
cognitive-neuroscience
Seanny123
J.Todd
$\begingroup$ Do you know what non-linear functions are and how they differ from linear functions? $\endgroup$ – Seanny123 Oct 22 '17 at 19:38
$\begingroup$ @Seanny123 Probably not in the mathematical sense. I think of science from a physical standpoint rather than a mathematical one. $\endgroup$ – J.Todd Oct 22 '17 at 20:22
$\begingroup$ Ok I read up on the difference between linear functions and non-linear functions in math (Yeah, I know, I should know that already), but I'm not exactly clear on how this translates to neural patterns. $\endgroup$ – J.Todd Oct 22 '17 at 21:25
$\begingroup$ It's not really the neural patterns that are non-linear, as much as the function being computed by the neurons that are non-linear. Basically, there's a non-linear mapping between the input (visual stimuli) and the output (object). It's really just a way of saying "complicated" in this context. $\endgroup$ – Seanny123 Oct 22 '17 at 22:10
The idea of linear/non-linear in neuroscience is the same as in mathematics. A process $f(x)$ is linear if $f(\alpha x) = \alpha f(x)$ and $f(x+y) = f(x)+f(y)$ for all $x$, $y$, and $\alpha$.
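To make the two conditions concrete, here is a small self-contained sketch (mine, not part of the original answer) that tests them numerically for two toy input–output functions: a linear gain, and a ReLU-style rectification, a standard textbook stand-in for a non-linear neuronal transfer function (firing rates cannot go negative).

```python
def is_linear(f, probes, tol=1e-9):
    """Numerically test f(a*x) == a*f(x) and f(x+y) == f(x)+f(y)
    on a handful of probe values (necessary, not sufficient)."""
    for x in probes:
        for y in probes:
            if abs(f(x + y) - (f(x) + f(y))) > tol:
                return False
        for a in (-2.0, 0.5, 3.0):
            if abs(f(a * x) - a * f(x)) > tol:
                return False
    return True

gain = lambda x: 4.2 * x        # linear: output is a scaled copy of the input
relu = lambda x: max(0.0, x)    # rectification: clips negative inputs to zero

probes = [-3.0, -1.0, 0.0, 0.5, 2.0]
print(is_linear(gain, probes))  # True
print(is_linear(relu, probes))  # False: relu(-1 + 2) != relu(-1) + relu(2)
```

The rectifier fails both conditions (e.g. scaling by a negative factor), which is exactly the sense in which a mapping from stimulus to response is "non-linear".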
$\begingroup$ Could you explain it in a physical manner? Unlike many (who prefer it explained this way), I think of science purely in the physical sense, for example: when I think about a chemical reaction, I think about how the molecules are interacting, not about a formula representing it. Same with neuroscience. The math is fine, most scientists prefer to speak in terms of math, but I've never cared to use it, instead focusing on the actual physical states and interactions. $\endgroup$ – J.Todd Oct 22 '17 at 20:26
$\begingroup$ @Viziionary I could, but there are tons of books that will do a better job of it. The thing I found important in your question was the fact that you wanted to know about linear/non-linear systems in the context of neuroscience, for which the answer is that it is the same as in all science and any textbook you like will work. $\endgroup$ – StrongBad Oct 22 '17 at 20:28
$\begingroup$ Ok. So you're saying it would take a text-book chapter, not an SE answer, to explain it in that manner? I'm not arguing, just asking for clarification. $\endgroup$ – J.Todd Oct 22 '17 at 20:29
$\begingroup$ +1 for your answer, but, perhaps, could you add a source to the answer to allow users to background read on this? Linear systems theory is quite an essential topic for many disciplines and a good web source or reference as a background would really help I guess, given OP's comments. $\endgroup$ – AliceD♦ Oct 22 '17 at 21:42
$\begingroup$ I did read up on the difference between linear functions and non-linear, but I was still confused about how that related to the neural patterns in this context. Seanny123's comment really answered my question: "It's not really the neural patterns that are non-linear, as much as the function being computed by the neurons that are non-linear. Basically, there's a non-linear mapping between the input (visual stimuli) and the output (object). It's really just a way of saying "complicated" in this context." $\endgroup$ – J.Todd Oct 22 '17 at 22:21
No hedgehog in the product?
Assuming OCA (the Open Coloring Axiom), we shall prove that for some pairs of Fréchet $\alpha_4$-spaces $X, Y$, the Fréchetness of the product $X\times Y$ implies that $X\times Y$ is $\alpha_4$. Assuming MA (Martin's Axiom), we shall construct a pair of spaces satisfying the assumptions of the theorem.
\begin{document}
\title{\bf Optimal distributed control of a nonlocal Cahn--Hilliard/Navier--Stokes system in 2D}
\author{
Sergio Frigeri\footnote{Weierstrass Institute for Applied Analysis and Stochastics, Mohrenstrasse 39, D-10117 Berlin, Germany, E-mail {\tt [email protected]}},
Elisabetta Rocca\footnote{Weierstrass Institute for Applied Analysis and Stochastics, Mohrenstrasse~39, D-10117 Berlin, Germany, E-mail {\tt [email protected]}, and Dipartimento di Matematica, Universit\`a di Milano, Via Saldini 50, 20133 Milano, Italy, E-mail {\tt [email protected]}},
and J\"urgen Sprekels\footnote{Weierstrass Institute for Applied Analysis and Stochastics, Mohrenstrasse~39, D-10117 Berlin, Germany, E-mail {\tt [email protected]}, and Institut f\"ur Mathematik der Humboldt--Universit\"at zu Berlin, Unter den Linden 6, D-10099 Berlin, Germany\newline {\bf Acknowledgement.} The work of S.F. and of E.R. was supported by the
FP7-IDEAS-ERC-StG \#256872 (EntroPhase) and by GNAMPA (Gruppo Nazionale per l'Analisi Matematica, la Probabilit\`a e le loro Applicazioni) of INdAM (Istituto Nazionale di Alta Matematica). } }
\maketitle
\noindent {\bf Abstract.} We study a diffuse interface model for incompressible isothermal mixtures of two immiscible fluids coupling the Navier--Stokes system with a convective nonlocal Cahn--Hilliard equation in two dimensions of space. We apply recently proved well-posedness and regularity results in order to establish existence of optimal controls as well as first-order necessary optimality conditions for an associated optimal control problem in which a distributed control is applied to the fluid flow.
\noindent {\bf Key words:} Distributed optimal control, first-order necessary optimality conditions, nonlocal models, integrodifferential equations, Navier--Stokes system, Cahn--Hilliard equation, phase separation.
\noindent {\bf AMS (MOS) subject clas\-si\-fi\-ca\-tion:} 49J20, 49J50, 35R09, 45K05, 74N99.
\section{Introduction}
In this paper, we consider the nonlocal Cahn--Hilliard/Navier--Stokes system \begin{align} &\varphi_t+\boldsymbol{u}\cdot\nabla\varphi=\Delta\mu,\label{sy1}\\ &\mu=a\,\varphi-K\ast\varphi+F'(\varphi),\label{sy2}\\ &\boldsymbol{u}_t\,-\,2\,\mbox{div}\,\big(\nu(\varphi)\,D\boldsymbol{u}\big)+(\boldsymbol{u}\cdot\nabla)\boldsymbol{u}+\nabla\pi=\mu\,\nabla\varphi+\boldsymbol{v},\label{sy3}\\ &\mbox{div}(\boldsymbol{u})=0,\label{sy4} \end{align} in $Q:=\Omega\times(0,T)$, where $\Omega\subset\mathbb{R}^2$ is a bounded smooth domain with boundary $\,\partial \Omega\,$ and outward unit normal field $\,\boldsymbol{n}$, and where $T>0$ is a prescribed final time. Moreover, $D$ denotes the symmetric gradient, which is defined by $D\boldsymbol{u}:=\big(\nabla \boldsymbol{u}+\nabla^T\boldsymbol{u} \big)/2$.
This system models the flow and phase separation of an isothermal mixture of two incompressible immiscible fluids with matched densities (normalized to unity), where nonlocal interactions between the molecules are taken into account. In this connection, $\boldsymbol{u}$ is the (averaged) velocity field, $\varphi$ is the order parameter (relative concentration of one of the species), $\pi$ is the pressure, and $\boldsymbol{v}$ is the external volume force density. The mobility in \eqref{sy1} is assumed to be constant and equal to $1$ for simplicity, while in \eqref{sy3} we allow the viscosity $\nu$ to be $\varphi$-dependent. The chemical potential $\mu$ contains the spatial convolution ${K}\ast\varphi$ over $\Omega$, defined by $$({K}\ast\varphi)(x):=\int_\Omega K(x-y)\varphi(y)\, dy, \quad x\in \Omega, $$ of the order parameter $\varphi$ with a sufficiently smooth interaction kernel $K$ that satisfies ${K}(z)=K(-z)$. Moreover, $\,a\,$ is given by $$a(x):=\int_\Omega K(x-y)\,dy,$$ for
$x\in\Omega$, and $F$ is a double-well potential, which, in general, may be regular or singular (e.g., of logarithmic or double obstacle type); in this paper, we have to confine ourselves to the regular case.
The system \eqref{sy1}--\eqref{sy4} is complemented by the boundary and initial conditions \begin{align} &\frac{\partial\mu}{\partial\boldsymbol{n}}=0,\qquad\boldsymbol{u}=0,\qquad\mbox{on }\:\Sigma:=\partial\Omega\times(0,T),\label{bcs}\\ &\boldsymbol{u}(0)=\boldsymbol{u}_0,\qquad\varphi(0)=\varphi_0,\qquad\mbox{in }\:\Omega,\label{ics} \end{align} where, as usual, $\partial\mu/\partial\boldsymbol{n}\,$ denotes the directional derivative of $\,\mu\,$ in the direction of $\,\boldsymbol{n}$.
Problem \eqref{sy1}--\eqref{ics} is the nonlocal version of the so-called ``Model H'' which is known from the literature
(cf., e.\,g., \cite{AMW,GPV,HMR,HH,JV,Kim2012,LMM}). The main difference between local and nonlocal models is given by the choice of the interaction potential. Typically, the nonlocal contribution to the free energy has the form $\,\int_\Omega \widetilde{K}(x,y)\,|\varphi(x) - \varphi(y)|^2\, dy\,$, with a given symmetric kernel $\widetilde{K}$ defined on $\Omega\times \Omega$; its local Ginzburg--Landau counterpart is given by $\,(\sigma/2)|\nabla\varphi(x)|^2$, where the positive parameter $\,\sigma\,$ is a measure for the thickness of the interface.
Although the physical relevance of nonlocal interactions was already pointed out in the pioneering paper \cite{Ro} (see also \cite[4.2]{Em} and the references therein) and studied (in case of constant velocity) in, e.g., \cite{BH1, CKRS, GZ, GL1, GL2, GLM, KRS, KRS2}, and, while the classical (local) Model H has been investigated by several authors (see, e.g., \cite{A1,A2,B,CG,GG1,GG2,GG3,HHK,LS,S,ZWH,ZF} and also \cite{ADT,Bos,GP,KCR} for models with shear dependent viscosity), its nonlocal version has been tackled (from the analytical viewpoint concerning well-posedness and related questions) only more recently (cf., e.g., \cite{CFG,FGG,FG1,FG2,FGK,FGR}).
In particular, the following cases have been studied: regular potential $F$ associated with constant mobility in \cite{CFG,FGG,FG1,FGK}; singular potential associated with constant mobility in \cite{FG2}; singular potential and degenerate mobility in \cite{FGR}; the case of nonconstant viscosity in \cite{FGG}. In the two-dimensional case it was shown in \cite{FGK} that for regular potentials and constant mobilities the problem \eqref{sy1}--\eqref{ics} enjoys a unique strong solution. Recently, uniqueness was proved also for weak solutions (see \cite{FGG}).
With the well-posedness results of \cite{FGK} and in \cite{FGG} at hand, the road is paved for studying optimal control problems associated with \eqref{sy1}--\eqref{ics} at least in the two-dimensional case. This is the purpose of this paper. To our best knowledge, this has never been done before in the literature; in fact, while there exist recent contributions to associated optimal control problems for the time-discretized local version of the system (cf. \cite{HW2,HW3}) and to numerical aspects of the control problem (see \cite{HK}), it seems that a rigorous analysis for the full problem without time discretization has never been performed before. Even for the much simpler case of the convective Cahn--Hilliard equation, that is, if the velocity is prescribed so that the Navier--Stokes equation (\ref{sy3}) is not present, only very few contributions exist that deal with optimal control problems; in this connection, we refer to \cite{ZL1,ZL2} for local models in one and two space dimensions and to the recent paper \cite{RS}, in which first-order necessary optimality conditions were derived for the nonlocal convective Cahn--Hilliard system in 3D in the case of degenerate mobilities and singular potentials.
More precisely, the control problem under investigation in this paper reads as follows:
\textbf{(CP)} Minimize the tracking type cost functional \begin{align} \mathcal{J}(y,\boldsymbol{v})&:=\frac{\beta_1}{2}\Vert\boldsymbol{u}-\boldsymbol{u}_Q\Vert_{L^2(Q)^2}^2+\frac{\beta_2}{2}\Vert\varphi-\varphi_Q\Vert_{L^2(Q)}^2 +\frac{\beta_3}{2}\Vert\boldsymbol{u}(T)-\boldsymbol{u}_\Omega\Vert_{L^2(\Omega)^2}^2\nonumber\\ &+\frac{\beta_4}{2}\Vert\varphi(T)-\varphi_\Omega\Vert_{L^2(\Omega)}^2+\frac{\gamma}{2}\Vert\boldsymbol{v}\Vert_{L^2(Q)^2}^2,\label{costfunct} \end{align} where $y:=[\boldsymbol{u},\varphi]$ solves problem \eqref{sy1}-\eqref{ics}. We assume throughout the paper without further reference that in the cost functional (\ref{costfunct}) the quantities $\boldsymbol{u}_Q\in L^2(0,T;G_{div})$, $\varphi_Q\in L^2(Q)$, $\boldsymbol{u}_\Omega\in G_{div}$, and $\varphi_\Omega\in L^2(\Omega)$, are given target functions, while $\beta_i$, $i=1\dots 4$, and $\gamma$ are some fixed nonnegative constants that do not vanish simultaneously. Moreover, the external body force density $\boldsymbol{v}$, which plays the role of the control, is postulated to belong to a suitable closed, bounded and convex subset (which will be specified later) of the space of controls \begin{align} &\mathcal{V}:=L^2(0,T;G_{div}),\nonumber \end{align} where \begin{align} &G_{div}:=\overline{\big\{\boldsymbol{u}\in C^\infty_0(\Omega)^2:\mbox{div}(\boldsymbol{u})=0\big\}}^{L^2(\Omega)^2}.\nonumber \end{align} We recall that the spaces $G_{div}$ and \begin{align} &V_{div}:=\big\{\boldsymbol{u}\in H^1_0(\Omega)^2:\mbox{div}(\boldsymbol{u})=0\big\}\nonumber \end{align} are the classical Hilbert spaces for the incompressible Navier--Stokes equations with no-slip boundary conditions (see, e.g., \cite{T}).
We remark that controls in the form of volume force densities can occur in many technical applications. For instance, they may be induced in the fluid flow from stirring devices, from the application of acoustic fields (ultrasound, say) or, in the case of electrically conducting fluids, from the application of magnetic fields.
The plan of the paper is as follows: in the next Section 2, we collect some preliminary results concerning the well-posedness of system \eqref{sy1}--\eqref{ics}, and we prove some stability estimates which are necessary for the analysis of the control problem. In Section 3, we prove the main results of this paper, namely, the existence of a solution to the optimal control problem {\bf (CP)}, the Fr\'echet differentiability of the control-to-state operator, as well as the first-order necessary optimality conditions for {\bf (CP)}.
\section{Preliminary results}
In this section, we first summarize some results from \cite{CFG,FGG,FGK} concerning the well-posedness of solutions to the system \eqref{sy1}--\eqref{ics}.
We also establish a stability estimate that later will turn out to be crucial for showing the differentiability of the associated control-to-state mapping.
Before going into this, we introduce some notation.
Throughout the paper, we set $H:=L^2(\Omega)$, $V:=H^1(\Omega)$, and we denote by $\Vert\,\cdot\,\Vert$ and $(\cdot\,,\,\cdot)$ the standard norm and the scalar product, respectively, in $H$ and $G_{div}$, as well as in $L^2(\Omega)^2$ and $L^2(\Omega)^{2\times 2}$. The notations $\langle\cdot\,,\,\cdot\rangle_{X}$ and $\Vert\,\cdot\,\Vert_X$ will stand for the duality pairing between a Banach space $X$ and its dual $X'$, and for the norm of $X$, respectively.
Moreover, the space $V_{div}$ is endowed with the scalar product \begin{align} &(\boldsymbol{u}_1,\boldsymbol{u}_2)_{V_{div}}:=(\nabla\boldsymbol{u}_1,\nabla\boldsymbol{u}_2)=2\big(D\boldsymbol{u}_1,D\boldsymbol{u}_2\big) \qquad\forall\,\boldsymbol{u}_1,\boldsymbol{u}_2\in V_{div}.\nonumber \end{align} We also introduce the Stokes operator $\,A\,$ with no-slip boundary condition (see, e.g., \cite{T}). Recall that $\,A:D(A)\subset G_{div}\to G_{div}\,$ is defined as $\,A:=-P\Delta$, with domain $\,D(A)=H^2(\Omega)^2\cap V_{div}$, where $\,P:L^2(\Omega)^2\to G_{div}\,$ is the Leray projector. Moreover, $\,A^{-1}:G_{div}\to G_{div}$ is a selfadjoint compact operator in $G_{div}$. Therefore, according to classical results, $A$ possesses a sequence of eigenvalues $\{\lambda_j\}_{j\in\mathbb{N}}$ with $0<\lambda_1\leq\lambda_2\leq\cdots$ and $\lambda_j\to\infty$, and a family $\{\boldsymbol{w}_j\}_{j\in \mathbb{N}}\subset D(A)$ of associated eigenfunctions which is orthonormal in $G_{div}$. We also recall Poincar\'{e}'s inequality \begin{align} &\lambda_1\,\Vert\boldsymbol{u}\Vert^2\leq\Vert\nabla\boldsymbol{u}\Vert^2\qquad\forall\,\boldsymbol{u}\in V_{div}\,.\nonumber \end{align} The trilinear form $\,b\,$ appearing in the weak formulation of the Navier--Stokes equations is defined as usual, namely, \begin{equation*} b(\boldsymbol{u},\boldsymbol{v},\boldsymbol{w}):=\int_{\Omega}(\boldsymbol{u}\cdot\nabla)\boldsymbol{v}\cdot \boldsymbol{w} \,dx\qquad\forall\, \boldsymbol{u},\boldsymbol{v},\boldsymbol{w}\in V_{div}\,. \end{equation*} We recall that we have $$b(\boldsymbol{u},\boldsymbol{w},\boldsymbol{v})\,=\,-\,b(\boldsymbol{u},\boldsymbol{v},\boldsymbol{w}) \qquad\forall\,\boldsymbol{u},\boldsymbol{v},\boldsymbol{w}\in V_{div},$$ and that in two dimensions of space there holds the estimate \begin{align*}
&|b(\boldsymbol{u},\boldsymbol{v},\boldsymbol{w})|\,\leq\, \widehat C_1\,\|\boldsymbol{u}\|^{1/2}\,\|\nabla \boldsymbol{u}\|^{1/2}\,
\|\nabla \boldsymbol{v}\|\,\|\boldsymbol{w}\|^{1/2}\,\|\nabla \boldsymbol{w}\|^{1/2}\qquad\forall\, \boldsymbol{u},\boldsymbol{v},\boldsymbol{w}\in V_{div}, \end{align*} with a constant $\widehat C_1>0$ that only depends on $\Omega$.
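\begin{oss} {\upshape For the reader's convenience, we sketch the standard argument behind the antisymmetry of $b$; this is classical material (see, e.g., \cite{T}) and is included here only as a reminder. For $\boldsymbol{u}\in V_{div}$ and $\boldsymbol{v}\in H^1_0(\Omega)^2$, an integration by parts and the incompressibility of $\boldsymbol{u}$ yield \begin{align*} b(\boldsymbol{u},\boldsymbol{v},\boldsymbol{v})=\int_{\Omega}(\boldsymbol{u}\cdot\nabla)\boldsymbol{v}\cdot\boldsymbol{v}\,dx =\frac{1}{2}\int_{\Omega}\boldsymbol{u}\cdot\nabla\vert\boldsymbol{v}\vert^2\,dx =-\,\frac{1}{2}\int_{\Omega}\mbox{div}(\boldsymbol{u})\,\vert\boldsymbol{v}\vert^2\,dx=0, \end{align*} the boundary term vanishing since $\boldsymbol{u}=0$ on $\partial\Omega$. The identity $b(\boldsymbol{u},\boldsymbol{w},\boldsymbol{v})=-b(\boldsymbol{u},\boldsymbol{v},\boldsymbol{w})$ then follows by expanding $b(\boldsymbol{u},\boldsymbol{v}+\boldsymbol{w},\boldsymbol{v}+\boldsymbol{w})=0$. } \end{oss}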
We will also need to use the operator $\,B:=-\Delta+I\,$ with homogeneous Neumann boundary condition. It is well known that $\,B:D(B)\subset H\to H\,$ is an unbounded linear operator in $\,H\,$ with the domain $$D(B)=\big\{\varphi\in H^2(\Omega):\:\:\partial\varphi/\partial\boldsymbol{n}=0\,\, \mbox{ on }\partial\Omega\big\},$$ and that $B^{-1}:H\to H$ is a selfadjoint compact operator on $H$. By a classical spectral theorem there exist a sequence of eigenvalues $\mu_j$ with $0<\mu_1\leq\mu_2\leq\cdots$ and $\mu_j\to\infty$, and a family of associated eigenfunctions $w_j\in D(B)$ such that $Bw_j=\mu_j\, w_j\,$ for all $j\in \mathbb{N}$. The family $\,\{w_j\}_{j \in\mathbb{N}}\,$ forms an orthonormal basis in $H$ and is also orthogonal in $V$ and $D(B)$.
Finally, we recall two inequalities, which are valid in two dimensions of space and will be used repeatedly in the course of our analysis, namely the particular case of the Gagliardo-Nirenberg inequality (see, e.g., \cite{BIN}) \begin{align} \label{GN} &\Vert v\Vert_{L^4(\Omega)}\,\leq\,\widehat C_2\,\Vert v\Vert^{1/2}\,\Vert v\Vert_V^{1/2}\qquad\forall\, v\in V, \end{align} as well as Agmon's inequality (see \cite{AG}) \begin{align} &\Vert v\Vert_{L^\infty(\Omega)}\,\leq\,\widehat C_3\,\Vert v\Vert^{1/2}\, \Vert v\Vert_{H^2(\Omega)}^{1/2}\qquad\forall\, v\in H^2(\Omega). \label{Agmon} \end{align} In both these inequalities, the positive constants $\widehat C_2,\widehat C_3$ depend only on $\,\Omega \subset\mathbb{R}^2$.
We are now ready to state the general assumptions on
the data of the state system. We remark that for the well-posedness results cited below not always all of these assumptions are needed in every case; however, they seem to be indispensable for the analysis of the control problem. Since we focus on the control aspects here, we confine ourselves to these assumptions and refer the interested reader to \cite{CFG,FGG,FGK} for further details. We postulate: \begin{description} \item[(H1)] \,\,It holds $\,\boldsymbol{u}_0\in V_{div}\,$ and $\,\varphi_0\in H^2(\Omega)$. \item[(H2)] \,\,$F\in C^4(\mathbb{R})$ satisfies the following conditions: \begin{align} \label{F1} & \exists\, \hat c_1>0: \quad F^{\prime\prime}(s)+a(x)\geq \hat c_1 \,\mbox{ for all $\,s\in\mathbb{R}\,$ and a.\,e. }\,x\in\Omega. \\ \label{F2} & \exists\, \hat c_2>0, \,\hat c_3>0, \,p>2: \quad F^{\prime\prime}(s)+a(x)\geq \hat c_2\,\vert s\vert^{p-2} - \hat c_3\, \mbox{ for all $\,s\in\mathbb{R}\,$ and a.\,e. }\,x\in\Omega.\qquad \\ \label{F3} & \exists\, \hat c_4>0, \,\hat c_5\geq0, \,r\in(1,2]: \quad
|F^\prime(s)|^r\leq \hat c_4\,|F(s)|+\hat c_5\,\mbox{ for all $\,s\in\mathbb{R}$.} \end{align} \item[(H3)] \,\,$\nu\in C^2(\mathbb{R})$, and there are constants $\,\hat\nu_1>0,\,\hat\nu_2>0$ such that \begin{equation} \label{nu} \hat \nu_1\,\leq\,\nu(s)\,\leq\, \hat\nu_2\,\quad\forall\, s\in\mathbb{R}. \end{equation}
\item[(H4)] \,\,The kernel $\,K\,$ satisfies $\,K(x)=K(-x)\,$ for all $\,x\,$
in its domain, as well as $\,a(x)=\int_\Omega K(x-y)\,dy\,\ge\,0\,$ for a.\,e.
$\,x\in\Omega$. Moreover, one of the following two conditions is fulfilled:\\[2mm]
(i) \,\,It holds $\,K\in W^{2,1}(B_\rho)$,
where $\rho:={\rm diam\,}\Omega\,$
and $\,B_\rho:=\{z\in \mathbb{R}^2:\,\,|z|<\rho\}$.\\[2mm] (ii) \,$K\,$ is a so-called {\em admissible\,} kernel, which (cf. \cite[Definition 1]{BRB}) for the two-dimensional case means that we have \vspace*{-2mm} \begin{align} \label{K1} & K\in W^{1,1}_{loc}(\mathbb{R}^2)\cap C^3(\mathbb{R}^2 \setminus \{0\}); \\ \label{K2} & K \,\mbox{ is radially symmetric, $\,K(x) = \widetilde K(\vert x\vert)$,
and $\,\widetilde K$\, is non-increasing};\qquad\quad\,\, \\
\label{K3} & \mbox{$\widetilde K^{\prime\prime}(r)\,$ and $\,\widetilde K^\prime(r)/r\,$ are monotone functions on $\,(0,r_0)\,$ for some $\,r_0>0$}; \\ \label{K4} & \vert D^3 K(x) \vert \,\leq\,\hat c_6\,\vert x\vert^{-3}\, \mbox{ for some }\, \hat c_6>0. \end{align} \end{description}
\begin{oss} \label{kernrem} {\upshape Notice that both the physically relevant two-dimensional Newtonian and Bessel kernels do not fulfill the condition (i) in {\bf (H4)}; they are however known to be admissible in the sense of (ii). The advantage of dealing with admissible kernels is due to the fact that such kernels have the property (cf. \cite[Lemma 2]{BRB}) that for all $\,p\in (1,+\infty)$ there exists some constant $C_p>0$ such that \begin{equation}\label{adm} \Vert\nabla (\nabla K\ast \psi)\Vert_{L^p(\Omega)^{2\times 2}} \,\leq\, C_p\, \Vert \psi \Vert_{L^p(\Omega)} \quad\,\forall\,\psi\in L^p(\Omega). \end{equation} We also observe that under the hypothesis {\bf (H4)} we have
$\,a\in W^{1,\infty}(\Omega)$. } \end{oss} The following result combines results that have been shown in the papers \cite{CFG, FGG, FGK}; in particular, we refer to \cite[Thms. 5 and 6]{FGG} and \cite[Thm. 2 and Remarks 2 and 5]{FGK}. \begin{thm} \label{thm1} Suppose that {\bf (H1)}--{\bf (H4)} are fulfilled. Then the state system {\rm (\ref{sy1})--(\ref{ics})} has for every $\,\boldsymbol{v}\in L^2(0,T;G_{div})\,$ a unique strong solution $[\boldsymbol{u},\varphi]$ with the regularity properties \begin{align} \label{regu} &\boldsymbol{u}\in C^0([0,T];V_{div})\cap L^2(0,T;H^2(\Omega)^2), \,\quad \boldsymbol{u}_t \in L^2(0,T;G_{div}), \\ \label{regphi} &\varphi \in C^0([0,T];H^2(\Omega)), \,\quad \varphi_t\in C^0([0,T];H)
\cap L^2(0,T;V), \\ \label{regmu} &\mu:=a\,\varphi-K\ast\varphi+F'(\varphi)\in C^0([0,T];H^2(\Omega)). \end{align}
Moreover, there exists a continuous and nondecreasing function $\,\mathbb{Q}_1: [0,+\infty)\to [0,+\infty)$, which only depends on the data $F$, $K$, $\nu$, $\Omega$, $T$, $\boldsymbol{u}_0$ and $\varphi_0$, such that \begin{align} \label{bound1} &
\|\boldsymbol{u}\|_{C^0([0,T];V_{div})\cap L^2(0,T;H^2(\Omega)^2)} \,+\,\|\boldsymbol{u}_t\|_{L^2(0,T;G_{div})}
\,+\,\|\varphi\|_{C^0([0,T];H^2(\Omega))}\,+\,\|\varphi_t\|_{C^0([0,T];H)
\cap L^2(0,T;V)}\nonumber \\ &
\le\,\mathbb{Q}_1\!\left(\|\boldsymbol{v}\|_{L^2(0,T;G_{div})}\right) . \end{align} \end{thm} From Theorem \ref{thm1} it follows that the {\em control-to-state operator} $\,{\cal S}: \boldsymbol{v}\mapsto {\cal S}(\boldsymbol{v}):=[\boldsymbol{u},\varphi]\,$ is well defined as a mapping from $\,L^2(0,T;G_{div})\,$ into the Banach space defined by the regularity properties of $[\boldsymbol{u},\varphi]$ as given by (\ref{regu}) and (\ref{regphi}).
We now establish some global stability estimates for the strong solutions to problem \eqref{sy1}--\eqref{ics}. Let us begin with the following result (see \cite[Thm. 6 and Lemma 2]{FGG}).
\begin{lem} \label{stabest1} Suppose that {\bf (H1)}--{\bf (H4)} are fulfilled, and assume that controls $\,\boldsymbol{v}_i\in L^2(0,T; G_{div})$, $i=1,2$, are given and that $[\boldsymbol{u}_i,\varphi_i]:={\cal S}(\boldsymbol{v}_i)$, $i=1,2$, are the associated solutions to {\rm \eqref{sy1}--\eqref{ics}}. Then there is a continuous function $\,\mathbb{Q}_2:[0,+\infty)^2\to [0,+\infty)$, which is nondecreasing in both its arguments and only depends on the data $F$, $K$, $\nu$, $\Omega$, $T$, $\boldsymbol{u}_0$ and $\varphi_0$, such that we have for every $t\in (0,T]$ the estimate \begin{align} & \Vert\boldsymbol{u}_2-\boldsymbol{u}_1\Vert_{C^0([0,t];G_{div})}^2
\,+\,\Vert\boldsymbol{u}_2-\boldsymbol{u}_1\Vert_{L^2(0,t;V_{div})}^2\,+\,\Vert\varphi_2-\varphi_1\Vert_{C^0([0,t];H)}^2
\,+\,\Vert\nabla(\varphi_2-\varphi_1)\Vert_{L^2(0,t;H)}^2\nonumber \\[1mm] \label{stabi1} & \leq\,\mathbb{Q}_2\big(\Vert\boldsymbol{v}_1\Vert_{L^2(0,T;G_{div})},\Vert\boldsymbol{v}_2\Vert_{L^2(0,T;G_{div})} \big)\,\Vert\boldsymbol{v}_2-\boldsymbol{v}_1\Vert_{L^2(0,T;(V_{div})')}^2\,. \end{align} \end{lem} \begin{proof} We follow the lines of the proof of \cite[Thm. 6]{FGG} (see also \cite[Lemma 2]{FGG}), just sketching the main steps. We test the difference between \eqref{sy3}, written for each of the two solutions, by $\boldsymbol{u}:=\boldsymbol{u}_2-\boldsymbol{u}_1$ in $G_{div}$, and the difference between \eqref{sy1}, \eqref{sy2}, written for each solution, by $\varphi:=\varphi_2-\varphi_1$ in $H$. Adding the resulting identities, and arguing exactly as in the proof of \cite[Thm. 6]{FGG}, we are led to a differential inequality of the form \begin{align} & \frac{1}{2}\,\frac{d}{dt}\,\big(\Vert\boldsymbol{u}(t)\Vert^2\,+\, \Vert\varphi(t)\Vert^2\big)\,+\,\frac{\hat\nu_1}{4}\,\Vert\nabla\boldsymbol{u}(t)\Vert^2 \,+\,\frac{\hat c_1}{4}\,\Vert\nabla\varphi(t)\Vert^2\nonumber \\[1mm] & \leq \gamma(t)\,\big(\Vert\boldsymbol{u}(t)\Vert^2\,+\,\Vert\varphi(t)\Vert^2\big)\,+\, \frac{1}{\hat\nu_1}\,\Vert\boldsymbol{v}(t)\Vert_{(V_{div})'}^2 \quad\,\mbox{for a.\,e. }\, t\in (0,T), \nonumber \end{align} where $\gamma\in L^1(0,T)$ is given by \begin{align*} & \gamma(t) =c\,\big(1\,+\,\Vert \nabla\boldsymbol{u}_{1}(t)\Vert ^{2}\, \Vert \boldsymbol{u}_{1}(t)\Vert_{H^{2}(\Omega)}^{2}\,+\, \Vert \nabla \boldsymbol{u}_{2}(t)\Vert ^{2}\,+\,\Vert \varphi _{1}(t)\Vert _{L^{4}(\Omega)}^{2}\,+\,\Vert \varphi _{2}(t)\Vert _{L^{4}(\Omega)}^{2} \\[1mm] & \hspace*{16mm}+\,\Vert \varphi _{1}(t)\Vert _{H^{2}(\Omega)}^{2}+\Vert \nabla \varphi _{1}(t)\Vert ^{2}\,\Vert \varphi _{1}(t) \Vert _{H^{2}(\Omega)}^{2}\big). \end{align*} The desired stability estimate then follows from applying Gronwall's lemma to the above differential inequality. 
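Here, Gronwall's lemma is applied in the standard differential form: if a nonnegative function $\,y\in W^{1,1}(0,T)\,$ satisfies $\,y'(t)\leq \gamma(t)\,y(t)+g(t)\,$ for a.\,e. $\,t\in (0,T)$, with nonnegative $\,\gamma, g\in L^1(0,T)$, then
\begin{align}
y(t)\,\leq\,\exp\Big(\int_0^t\gamma(s)\,ds\Big)\Big(y(0)+\int_0^t g(s)\,ds\Big) \quad\mbox{for all }\,t\in [0,T].\nonumber
\end{align}
In our situation, $\,y=\Vert\boldsymbol{u}\Vert^2+\Vert\varphi\Vert^2\,$ and $\,y(0)=0$, since both solutions emanate from the same initial data; moreover, by \eqref{bound1}, the exponential factor is bounded by a constant of the type $\mathbb{Q}_2$.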
\end{proof} Lemma \ref{stabest1} already implies that the control-to-state mapping $\,{\cal S}\,$ is locally Lipschitz continuous as a mapping from $\,L^2(0,T;(V_{div})')\,$ (and, a fortiori, also from $L^2(0,T; V_{div})$) into the space $[C^0([0,T];G_{div})
\cap L^2(0,T;V_{div})]\times [C^0([0,T];H)
\cap L^2(0,T;V)]$. Since this result is not yet sufficient to establish differentiability, we need to improve the stability estimate. The following higher order stability estimate for the solution component $\,\varphi\,$ will turn out to be the key tool for the proof of differentiability of the control-to-state mapping.
\begin{lem} \label{stabest2} Suppose that the assumptions of Lemma \ref{stabest1} are fulfilled. Then there is a continuous function $\,\mathbb{Q}_3:[0,+\infty)^2\to [0,+\infty)$, which is nondecreasing in both its arguments and only depends on the data $F$, $K$, $\nu$, $\Omega$, $T$, $\boldsymbol{u}_0$ and $\varphi_0$, such that we have for every $t\in (0,T]$ the estimate \begin{align} & \Vert\boldsymbol{u}_2-\boldsymbol{u}_1\Vert_{{C^0([0,t];G_{div})}}^2
\,+\,\Vert\boldsymbol{u}_2-\boldsymbol{u}_1\Vert_{L^2(0,t;V_{div})}^2\,+\,\Vert\varphi_2-\varphi_1\Vert_{{C^0([0,t];V)}}^2
\,+\,\Vert \varphi_2-\varphi_1\Vert_{L^2(0,t;H^2(\Omega))}^2\nonumber \\[1mm] \label{stabi2} & +\,{\Vert\varphi_2}-\varphi_1\Vert_{H^1(0,t;H)}^2\,\leq\,\mathbb{Q}_3\big(\Vert\boldsymbol{v}_1\Vert_{L^2(0,T;G_{div})},\Vert\boldsymbol{v}_2\Vert_{L^2(0,T;G_{div})} \big)\,\Vert\boldsymbol{v}_2-\boldsymbol{v}_1\Vert_{L^2(0,T;(V_{div})')}^2\,. \end{align}
\end{lem}
\begin{proof} For the sake of a shorter exposition, we will in the following avoid writing the time variable $t$ as an argument of the functions involved; no confusion will arise from this notational convention.
Set $\boldsymbol{u}:=\boldsymbol{u}_2-\boldsymbol{u}_1$ and $\varphi:=\varphi_2-\varphi_1$. Then it follows from \eqref{sy1}, \eqref{sy2} that \begin{align} &\varphi_t=\Delta\widetilde{\mu}-\boldsymbol{u}\cdot\nabla\varphi_1-\boldsymbol{u}_2\cdot\nabla\varphi,\label{diff1}\\ &\widetilde{\mu}:=a\,\varphi-{{K}}\ast\varphi+F'(\varphi_2)-F'(\varphi_1).\label{diff2} \end{align} We multiply \eqref{diff1} by $\widetilde{\mu}_t$ in $H$ and integrate by parts, using the first boundary condition of \eqref{bcs} (which holds also for $\widetilde{\mu}$). We obtain the identity \begin{align} &\frac{1}{2}\,\frac{d}{dt}\,\Vert\nabla\widetilde{\mu}\Vert^2\,+\, (\varphi_t,\widetilde{\mu}_t)=-\,(\boldsymbol{u}\cdot\nabla\varphi_1,\widetilde{\mu}_t) -(\boldsymbol{u}_2\cdot\nabla\varphi,\widetilde{\mu}_t).\label{diffid} \end{align} Thanks to \eqref{diff2}, we can first rewrite the second term on the left-hand side of \eqref{diffid} as follows: \begin{align} (\varphi_t,\widetilde{\mu}_t)=\,&\big(\varphi_t,a\,\varphi_t-{K}\ast\varphi_t+(F''(\varphi_2)-F''(\varphi_1))\varphi_{2,t}+F''(\varphi_1)\varphi_t\big)\nonumber\\ =\,&\int_\Omega\big(a+F''(\varphi_1)\big)\varphi_t^2\,dx \,+\,\big(\Delta\widetilde{\mu}-\boldsymbol{u}\cdot\nabla\varphi_1-\boldsymbol{u}_2\cdot\nabla\varphi,-{{K}}\ast\varphi_t\big)\nonumber\\ &+\big(\varphi_t,(F''(\varphi_2)-F''(\varphi_1))\varphi_{2,t}\big)\nonumber\\ =\,&\int_\Omega\big(a+F''(\varphi_1)\big)\varphi_t^2\,dx
+(\nabla\widetilde{\mu},\nabla {{K}}\ast\varphi_t)-(\boldsymbol{u}\varphi_1,\nabla{K}\ast\varphi_t)-(\boldsymbol{u}_2\varphi,\nabla {K}\ast\varphi_t)\nonumber\\ &+\big(\varphi_t,(F''(\varphi_2)-F''(\varphi_1))\varphi_{2,t}\big).\label{2left} \end{align} Here we have employed \eqref{diff1} in the second identity of \eqref{2left}, while in the third identity integrations by parts have been performed using the boundary conditions $\,\partial\widetilde{\mu}/\partial\boldsymbol{n}=0\,$ and $\,\boldsymbol{u}_i=0\,$ on $\,\Sigma$, as well as the incompressibility conditions for $\boldsymbol{u}_i$, $i=1,2$.
We now estimate the last four terms on the right-hand side of \eqref{2left}. Using Young's inequality for convolution integrals, we have, for every $\,\epsilon>0$, \begin{align} &
|(\nabla\widetilde{\mu},\nabla K\ast\varphi_t)|\,\le\,
\|\nabla\widetilde\mu\|\,\|\nabla K\ast\varphi_t\|
\,\le\,\|\nabla\widetilde\mu\|\,\|\nabla K\|_{L^1(B_\rho)}\,\|\varphi_t\|\, \leq \,\epsilon\,\Vert\varphi_t\Vert^2 \,+\,C_{\epsilon,K}\,\Vert\nabla\widetilde{\mu}\Vert^2\,.\label{est4} \end{align}
Here, and throughout this proof, we use the following notational convention: by $C_\sigma$ we denote positive constants that may depend on the global data and on the quantities indicated by the index $\sigma$; however, $C_\sigma$ does not depend on the norms of the data of the two solutions. The actual value of $C_\sigma$ may change from line to line or even within lines. On the other hand, $\Gamma_\sigma$ will denote positive constants that may not only depend on the global data and on the quantities indicated by the index $\sigma$, but also on $\boldsymbol{v}_1$ and $\boldsymbol{v}_2$. More precisely, we have \begin{align} \Gamma_\sigma=\widehat\Gamma\big(\Vert\boldsymbol{v}_1\Vert_{L^2(0,T;G_{div})}, \Vert\boldsymbol{v}_2\Vert_{L^2(0,T;G_{div})}\big)\nonumber \end{align} with a continuous function $\widehat\Gamma:[0,+\infty)^2\to [0,+\infty)$ which is nondecreasing in both its variables. Also the actual value of $\Gamma_\sigma$ may change even within the same line. Now, again using Young's inequality for convolution integrals, as well as H\"older's inequality, we have \begin{align} &
|(\boldsymbol{u}\,\varphi_1,\nabla {{K}}\ast\varphi_t)| \,\leq\, C_K\,\Vert\boldsymbol{u}\Vert_{L^4(\Omega)^2}\,\Vert\varphi_1\Vert_{L^4(\Omega)} \,\Vert\varphi_t\Vert\,\leq\,\epsilon\,\Vert\varphi_t\Vert^2\,+\,\Gamma_{\epsilon,K}\,\Vert\nabla\boldsymbol{u}\Vert^2, \label{est5} \\[1mm] &
|(\boldsymbol{u}_2\,\varphi,\nabla {{K}}\ast\varphi_t)| \,\leq \,C_K\,\Vert\boldsymbol{u}_2\Vert_{L^4(\Omega)^2}\,\Vert\varphi\Vert_{L^4(\Omega)} \,\Vert\varphi_t\Vert\, \leq\,\epsilon\,\Vert\varphi_t\Vert^2\,+\, \Gamma_{\epsilon,K}\,\Vert\varphi\Vert_V^2\,. \label{est6} \end{align} Moreover, invoking {\bf (H2)}, (\ref{bound1}) and the Gagliardo-Nirenberg inequality (\ref{GN}), we infer that \begin{align} &
\big|\big(\varphi_t,(F''(\varphi_2)-F''(\varphi_1))\,\varphi_{2,t}\big)\big| \,\leq\,\Vert\varphi_t\Vert\,\Vert F''(\varphi_2)-F''(\varphi_1)\Vert_{L^4(\Omega)}\, \Vert\varphi_{2,t}\Vert_{L^4(\Omega)}\nonumber \\[1mm] & \leq \,{\Gamma_F}\,\Vert\varphi_t\Vert\,\Vert\varphi\Vert_{L^4(\Omega)}\, \Vert\varphi_{2,t}\Vert_{L^4(\Omega)}\, \leq\, {\Gamma_F}\,\Vert\varphi_t\Vert\,\Vert\varphi\Vert^{1/2}\,\Vert\varphi\Vert_V^{1/2}\, \Vert\varphi_{2,t}\Vert^{1/2}\,\Vert\varphi_{2,t}\Vert_V^{1/2}\nonumber \\[1mm] & \leq\,\epsilon\,\Vert\varphi_t\Vert^2\,+\,{\Gamma_{\epsilon,F}}\,\Vert\varphi_{2,t}\Vert_V^2\,\Vert\varphi\Vert^2 \,+\,{\Gamma_{\epsilon,F}}\,\Vert\varphi\Vert_V^2\,. \label{est7} \end{align}
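For the reader's convenience, we note that Young's inequality for convolution integrals has been applied in \eqref{est4}--\eqref{est6} in the form
\begin{align}
\Vert f\ast g\Vert_{L^r}\,\leq\,\Vert f\Vert_{L^p}\,\Vert g\Vert_{L^q} \quad\mbox{whenever }\,\frac1p+\frac1q=1+\frac1r,\nonumber
\end{align}
here with $p=1$ and $q=r=2$, which yields $\,\Vert\nabla K\ast\varphi_t\Vert\leq \Vert\nabla K\Vert_{L^1}\,\Vert\varphi_t\Vert\,$ upon extending $\varphi_t$ by zero outside of $\Omega$.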
As far as the terms on the right-hand side of \eqref{diffid} are concerned, we can in view of \eqref{diff2} write \begin{align} &(\boldsymbol{u}\cdot\nabla\varphi_1,\widetilde{\mu}_t)= \big(\boldsymbol{u}\cdot\nabla\varphi_1,a\,\varphi_t-{{K}}\ast\varphi_t+(F''(\varphi_2)-F''(\varphi_1))\,\varphi_{2,t}+F''(\varphi_1)\,\varphi_t\big), \label{right1}\\ &(\boldsymbol{u}_2\cdot\nabla\varphi,\widetilde{\mu}_t) =\big(\boldsymbol{u}_2\cdot\nabla\varphi,a\,\varphi_t-{{K}}\ast\varphi_t+(F''(\varphi_2)-F''(\varphi_1))\,\varphi_{2,t}+F''(\varphi_1)\,\varphi_t\big), \label{right2} \end{align} where the terms on the right-hand side of \eqref{right1}, \eqref{right2} can be estimated in the following way: \begin{align}
&\big|\big(\boldsymbol{u}\cdot\nabla\varphi_1,a\,\varphi_t-{{K}}\ast\varphi_t\big)\big| \,\leq\, C_K\,\Vert\boldsymbol{u}\Vert_{L^4(\Omega)^2}\,\Vert\varphi_1\Vert_{H^2(\Omega)}\, \Vert\varphi_t\Vert \,\leq\,\epsilon\,\Vert\varphi_t\Vert^2\,+\,\Gamma_{\epsilon,K}\,\Vert\nabla\boldsymbol{u}\Vert^2\,,\label{est8}\\[4mm]
&\big|\big(\boldsymbol{u}\cdot\nabla\varphi_1,(F''(\varphi_2)-F''(\varphi_1))\,\varphi_{2,t}\big)\big| \,\leq\,{\Gamma_F}\,\Vert\boldsymbol{u}\Vert\,\Vert\varphi_1\Vert_{H^2(\Omega)}\,\Vert\varphi\Vert_{L^6(\Omega)}\,\Vert\varphi_{2,t}\Vert_{L^6(\Omega)}\nonumber\\[1mm] &\leq \,{\Gamma_F}\, \Vert\boldsymbol{u}\Vert\,\Vert\varphi\Vert_V\,\Vert\varphi_{2,t}\Vert_V \,\leq\, {\Gamma_F}\,\Vert\varphi_{2,t}\Vert_V^2\,\Vert\boldsymbol{u}\Vert^2\,+\,{\Gamma_F}\, \Vert\varphi\Vert_V^2\,,\label{est9} \\[4mm]
&\big|\big(\boldsymbol{u}\cdot\nabla\varphi_1,F''(\varphi_1)\,\varphi_t\big)\big| \,\leq\, {\Gamma_F}\, \Vert\boldsymbol{u}\Vert_{L^4(\Omega)^2}\,\Vert\varphi_1\Vert_{H^2(\Omega)}\,\Vert\varphi_t\Vert\,\leq\,\epsilon\,\Vert\varphi_t\Vert^2\, +\,{\Gamma_{\epsilon,F}}\,\Vert\nabla\boldsymbol{u}\Vert^2\,,\label{est10} \\[4mm]
&\big|\big(\boldsymbol{u}_2\cdot\nabla\varphi,a\,\varphi_t-{{K}}\ast\varphi_t\big)\big| \,\leq\, C_K\,\Vert\boldsymbol{u}_2\Vert_{L^4(\Omega)^2}\,\Vert\nabla\varphi\Vert_{L^4(\Omega)^2}\,\Vert\varphi_t\Vert \,\leq\,\Gamma_K\, \Vert\nabla\varphi\Vert^{1/2}\,\Vert\nabla\varphi\Vert_V^{1/2}\,\Vert\varphi_t\Vert \nonumber\\[1mm] &\leq\,\epsilon\,\Vert\varphi_t\Vert^2\,+\,\Gamma_{\epsilon,{{K}}}\,\Vert\nabla\varphi\Vert\,\Vert\varphi\Vert_{H^2(\Omega)} \,\leq\,\epsilon\,\Vert\varphi_t\Vert^2\,+\,\epsilon\,\Vert\varphi\Vert_{H^2(\Omega)}^2 \,+\,\Gamma_{\epsilon,K}\,\Vert\nabla\varphi\Vert^2\,,\label{est1} \\[4mm]
&\big|\big(\boldsymbol{u}_2\cdot\nabla\varphi,(F''(\varphi_2)-F''(\varphi_1))\,\varphi_{2,t}\big)\big| \,\leq\,{\Gamma_F}\,\Vert\boldsymbol{u}_2\Vert_{L^4(\Omega)^2}\,\Vert\nabla\varphi\Vert_{L^4(\Omega)^2}\, \Vert\varphi\Vert_{L^4(\Omega)}\,\Vert\varphi_{2,t}\Vert_{L^4(\Omega)} \nonumber\\[1mm] &\leq\,{\Gamma_F}\,\Vert\varphi\Vert_{H^2(\Omega)}\, \Vert\varphi\Vert^{1/2}\,\Vert\varphi\Vert_V^{1/2}\,\Vert\varphi_{2,t}\Vert^{1/2}\,\Vert\varphi_{2,t}\Vert_V^{1/2} \,\leq\,\epsilon\,\Vert\varphi\Vert_ {H^2(\Omega)}^2\,+\,{\Gamma_{\epsilon,F}}\,\Vert\varphi\Vert\, \Vert\varphi\Vert_V\,\Vert\varphi_{2,t}\Vert_V \nonumber\\[1mm] &\leq\,\epsilon\,\Vert\varphi\Vert_{H^2(\Omega)}^2\,+\,{\Gamma_{\epsilon,F}}\,\Vert\varphi\Vert_V^2 \,+\,{\Gamma_{\epsilon,F}}\,\Vert\varphi_{2,t}\Vert_V^2\,\Vert\varphi\Vert^2\,,\label{est2} \\[4mm]
&\big|\big(\boldsymbol{u}_2\cdot\nabla\varphi,F''(\varphi_1)\,\varphi_t\big)\big| \,\leq\,{\Gamma_F}\,\Vert\boldsymbol{u}_2\Vert_{L^4(\Omega)^2}\,\Vert\nabla\varphi\Vert_{L^4(\Omega)^2}\,\Vert\varphi_t\Vert \,\leq\,{\Gamma_F}\,\Vert\nabla\varphi\Vert^{1/2}\,\Vert\nabla\varphi\Vert_V^{1/2}\,\Vert\varphi_t\Vert\nonumber\\[1mm] &\leq\,\epsilon\,\Vert\varphi_t\Vert^2\,+\,{\Gamma_{\epsilon,F}}\,\Vert\nabla\varphi\Vert\,\Vert\varphi\Vert_{H^2(\Omega)}\, \leq\,\epsilon\Vert\varphi_t\Vert^2\,+\,\epsilon\,\Vert\varphi\Vert_{H^2(\Omega)}^2 \,+\,{\Gamma_{\epsilon,F}}\,\Vert\nabla\varphi\Vert^2\,, \label{est3} \end{align} where we have used the H\"older and Gagliardo-Nirenberg inequalities and \eqref{bound1} again.
We now insert the estimates \eqref{est4}--\eqref{est7} and \eqref{est8}--\eqref{est3} in \eqref{diffid}, taking \eqref{2left}, \eqref{right1} and \eqref{right2} into account. By the assumption
(\ref{F1}) in hypothesis {\bf (H2)}, and choosing $\,\epsilon>0\,$ small enough (i.\,e., $\epsilon\leq {{\hat c_1/16}}$), we obtain the estimate \begin{align} \frac{d}{dt}\,\Vert\nabla\widetilde{\mu}\Vert^2+{{\hat c_1}}\,\Vert\varphi_t\Vert^2 \,&\leq\, C_{\epsilon,{{K}}}\,\Vert\nabla\widetilde{\mu}\Vert^2\,+\, {\Gamma_{\epsilon,K,F}}\,\big(\Vert\nabla\boldsymbol{u}\Vert^2+\Vert\varphi\Vert_V^2\big)\nonumber\\[1mm] &\quad+\,{\Gamma_{\epsilon,F}}\,\Vert\varphi_{2,t}\Vert_V^2\,\big(\Vert\boldsymbol{u}\Vert^2+\Vert\varphi\Vert^2\big)\,+\,6\,\epsilon\,\Vert\varphi\Vert_{H^2(\Omega)}^2\,. \label{diffineq} \end{align} Next, we aim to show that the $H^2$ norm of $\varphi$ can be controlled by the $H^2$ norm of $\widetilde{\mu}$. To this end, we take the second-order derivatives of \eqref{diff2} to find that
\begin{align}
\partial_{ij}^2\widetilde{\mu}
&\,=\,a\,\partial_{ij}^2\varphi+\partial_i a\,\partial_j\varphi+\partial_j a\,\partial_i\varphi
+\varphi\,\partial_i(\partial_j a)-\partial_i\big(\partial_j {{K}}\ast\varphi\big)\nonumber\\
&\quad \,\,+\big(F''(\varphi_2)-F''(\varphi_1)\big)\partial_{ij}^2\varphi_2+F''(\varphi_1)\,\partial_{ij}^2\varphi\nonumber\\
&\quad\,\, +\big(F'''(\varphi_2)-F'''(\varphi_1)\big)\,\partial_i\varphi_2\,\partial_j\varphi_2
+F'''(\varphi_1)\,(\partial_i\varphi_2\,\partial_j\varphi+\partial_i\varphi\,\partial_j\varphi_1)\,.
\label{secderiv}
\end{align} Let us multiply \eqref{secderiv} by $\partial_{ij}^2\varphi$ in $H$ and then estimate the terms on the right-hand side of the resulting equality. We have, invoking (\ref{F1}), \begin{align} \Big(\big(a+F''(\varphi_1)\big)\,\partial_{ij}^2\varphi,\partial_{ij}^2\varphi\Big) &\,\geq\,{{\hat c_1}}\,\Vert\partial_{ij}^2\varphi\Vert^2, \label{est12} \end{align} and, for every $\delta>0$ (to be fixed later), \begin{align} &\big(\partial_i a\,\partial_j\varphi+\partial_j a\,\partial_i\varphi,\partial_{ij}^2\varphi\big) \,\leq\,C_{{{K}}}\,\Vert\nabla\varphi\Vert\,\Vert\partial_{ij}^2\varphi\Vert\, \leq\,\delta\,\Vert\partial_{ij}^2\varphi\Vert^2\,+\,C_{\delta,{{K}}}\,\Vert\nabla\varphi\Vert^2, \label{est13}\\[2mm] &\big(\varphi\,\partial_i(\partial_j a)-\partial_i(\partial_j {{K}}\ast\varphi),\partial_{ij}^2\varphi\big) \,\leq\, C_{{{K}}}\,\Vert\varphi\Vert\,\Vert\partial_{ij}^2\varphi\Vert \,\leq\,\delta\,\Vert\partial_{ij}^2\varphi\Vert^2\,+\,C_{\delta,{{K}}}\,\Vert\varphi\Vert^2,\label{est14} \end{align} \noindent where the first inequality in the estimate \eqref{est14} follows from (\ref{adm}) if $K$ is admissible, while in the case $\,K\in W^{2,1}(B_\rho)\,$ the first term in the product on the left-hand side of \eqref{est14} can be rewritten as $\,\varphi\,\partial_{ij}^2 a-\partial_{ij}^2 K\ast\varphi\,$ so that \eqref{est14} follows immediately from Young's inequality for convolution integrals. 
Moreover, invoking Agmon's inequality (\ref{Agmon}) and (\ref{bound1}), we have \begin{align} &\Big(\big(F''(\varphi_2)-F''(\varphi_1)\big)\,\partial_{ij}^2\varphi_2,\partial_{ij}^2\varphi\Big) \,\leq\, {\Gamma_F}\,\Vert\varphi\Vert_{L^\infty(\Omega)}\,\Vert\varphi_2\Vert_{H^2(\Omega)}\,\Vert\partial_{ij}^2\varphi\Vert \nonumber\\[1mm] &\,\leq\,{\Gamma_F}\,\Vert\varphi\Vert^{1/2}\,\Vert\varphi\Vert_{H^2(\Omega)}^{1/2}\,\Vert\partial_{ij}^2\varphi\Vert \,\leq\, {\Gamma_F}\Vert\varphi\Vert^{1/2}\,\Vert\varphi\Vert_{H^2(\Omega)}^{3/2}\, \,\leq\, \delta\,\Vert\varphi\Vert_{H^2(\Omega)}^2\,+\,{\Gamma_{\delta,F}}\,\Vert\varphi\Vert^2.\label{est15} \end{align} In addition, by virtue of H\"older's inequality and (\ref{bound1}), we have \begin{align} &\Big(\big(F'''(\varphi_2)-F'''(\varphi_1)\big)\,\partial_i\varphi_2\,\partial_j\varphi_2,\partial_{ij}^2\varphi\Big) \,\leq\,{\Gamma_F}\,\Vert\varphi\Vert_{L^6(\Omega)}\,\Vert\partial_i\varphi_2\Vert_{L^6(\Omega)}\, \Vert\partial_j\varphi_2\Vert_{L^6(\Omega)}\, \Vert\partial_{ij}^2\varphi\Vert\nonumber\\[1mm] &\leq\,{\Gamma_F}\, \Vert\varphi\Vert_V\,\Vert\varphi_2\Vert_{H^2(\Omega)}^2\,\Vert\partial_{ij}^2\varphi\Vert \,\leq\, \delta\,\Vert\partial_{ij}^2\varphi\Vert^2\,+\,{\Gamma_{\delta,F}}\,\Vert\varphi\Vert_V^2,\label{est16} \end{align} and, invoking the Gagliardo-Nirenberg inequality (\ref{GN}) and (\ref{bound1}), \begin{align} &\Big(F'''(\varphi_1)\,(\partial_i\varphi_2\,\partial_j\varphi+\partial_i\varphi\,\partial_j\varphi_1),\partial_{ij}^2\varphi\Big) \nonumber\\ &\leq\,{\Gamma_F}\,\big(\Vert\partial_i\varphi_2\Vert_{L^4(\Omega)}\,\Vert\partial_j\varphi\Vert_{L^4(\Omega)} \,+\,\Vert\partial_i\varphi\Vert_{L^4(\Omega)}\,\Vert\partial_j\varphi_1\Vert_{L^4(\Omega)}\big)\,\Vert\partial_{ij}^2\varphi\Vert\nonumber\\ &\leq\,{\Gamma_F}\,\big(\Vert\varphi_1\Vert_{H^2(\Omega)}+\Vert\varphi_2\Vert_{H^2(\Omega)}\big) \,\Vert\nabla\varphi\Vert_{L^4(\Omega)^2}\,\Vert\partial_{ij}^2\varphi\Vert\, 
\leq\,{\Gamma_F}\,\Vert\nabla\varphi\Vert^{1/2}\,\Vert\nabla\varphi\Vert_V^{1/2}\, \Vert\partial_{ij}^2\varphi\Vert\nonumber\\ &\leq\,{\Gamma_F}\,\Vert\nabla\varphi\Vert^{1/2}\,\Vert\varphi\Vert_{H^2(\Omega)}^{3/2}\, \leq\,\delta\Vert\varphi\Vert_{H^2(\Omega)}^2\,+\,{\Gamma_{\delta,F}}\,\Vert\nabla\varphi\Vert^2.\label{est17} \end{align} Hence, by means of \eqref{est12}--\eqref{est17}, we obtain that \begin{align} &\big(\partial_{ij}^2\widetilde{\mu},\partial_{ij}^2\varphi\big) \,\geq\,\frac{{{\hat c_1}}}{2}\,\Vert\partial_{ij}^2\varphi\Vert^2\,-\,2\,\delta\,\Vert\varphi\Vert_{H^2(\Omega)}^2 -\Gamma_{\delta,{{K}}}\,\Vert\varphi\Vert_V^2, \nonumber \end{align} provided we choose $0<\delta\leq {{\hat c_1/6}}$. On the other hand, we have \begin{align} &\big(\partial_{ij}^2\widetilde{\mu},\partial_{ij}^2\varphi\big)\,\leq\,\frac{{{\hat c_1}}}{4}\, \Vert\partial_{ij}^2\varphi\Vert^2 \,+\,\frac{1}{{{\hat c_1}}}\,\Vert\partial_{ij}^2\widetilde{\mu}\Vert^2, \nonumber \end{align} and, by combining the last two estimates, we find that \begin{align} &\Vert\partial_{ij}^2\widetilde{\mu}\Vert^2\,\geq\,\frac{{{\hat c_1^2}}}{4}\,\Vert\partial_{ij}^2\varphi\Vert^2-2\,{{\hat c_1}}\,\delta\,\Vert\varphi\Vert_{H^2(\Omega)}^2 -{\Gamma_{\delta,K,F}}\,\Vert\varphi\Vert_V^2, \nonumber \end{align} where the factor $\hat c_1$ is absorbed in the constant ${\Gamma_{\delta,K,F}}$. From this, taking the sum over $i,j=1,2$, and fixing $0<\delta\leq \hat c_1/64$, we get the desired control, \begin{align} &\Vert\widetilde{\mu}\Vert_{H^2(\Omega)}^2\,\geq\,\frac{{{\hat c_1}}^2}{8}\,\Vert\varphi\Vert_{H^2(\Omega)}^2-\Gamma_{{K,F}}\Vert\varphi\Vert_V^2. \label{est18} \end{align} Let us now prove that the $H^2$ norm of $\widetilde{\mu}$ can be controlled in terms of the $L^2$ norm of $\varphi_t$. 
Indeed, from \eqref{diff1} we obtain, invoking the H\"older and Gagliardo-Nirenberg inequalities, \begin{align} \Vert\Delta\widetilde{\mu}\Vert &\,\leq\,\Vert\varphi_t\Vert+\Vert\boldsymbol{u}\Vert_{L^4(\Omega)^2}\,\Vert\nabla\varphi_1\Vert_{L^4(\Omega)^2}+ \Vert\boldsymbol{u}_2\Vert_{L^4(\Omega)^2}\,\Vert\nabla\varphi\Vert_{L^4(\Omega)^2}\nonumber\\ &\,\leq\,\Vert\varphi_t\Vert+C\,\Vert\nabla\boldsymbol{u}\Vert\Vert\varphi_1\Vert_{H^2(\Omega)}+ C\,\Vert\boldsymbol{u}_2\Vert_{L^4(\Omega)^2}\Vert\nabla\varphi\Vert^{1/2}\Vert\varphi\Vert_{H^2(\Omega)}^{1/2}. \label{est19} \end{align} Thanks to a classical elliptic regularity result (notice that $\partial\widetilde{\mu}/\partial\boldsymbol{n}=0$ on $\partial\Omega$), we can infer from (\ref{diff2}), \eqref{est19} and (\ref{GN}) the estimate \begin{align} \Vert\widetilde{\mu}\Vert_{H^2(\Omega)} &\,\leq\, c_e\Vert-\Delta\widetilde{\mu}+\widetilde{\mu}\Vert\,\leq\, c_e\Vert\Delta\widetilde{\mu}\Vert+{\Gamma_{K,F}}\,\Vert\varphi\Vert\nonumber\\ &\,\leq\, c_e\,\Vert\varphi_t\Vert+\Gamma\,\Vert\nabla\boldsymbol{u}\Vert+\Gamma\,\Vert\nabla\varphi\Vert^{1/2}\Vert\varphi\Vert_{H^2(\Omega)}^{1/2} +{\Gamma_{K,F}}\,\Vert\varphi\Vert\,, \label{est20} \end{align} where $c_e>0$ depends only on $\Omega$. Combining \eqref{est18} with \eqref{est20}, we then deduce that \begin{align} &\frac{{{\hat c_1}}}{4}\Vert\varphi\Vert_{H^2(\Omega)}\,\leq\, c_e\,\Vert\varphi_t\Vert+{\Gamma_{K,F}}\,\big(\Vert\nabla\boldsymbol{u}\Vert+\Vert\varphi\Vert_V\big). 
\label{est21} \end{align} With \eqref{est21} now available, we can go back to \eqref{diffineq} and fix $\epsilon>0$ small enough (i.e., $\epsilon\leq\epsilon_\ast$, where $\epsilon_\ast>0$ depends only on ${{\hat c_1}}$ and $c_e$) to arrive at the differential inequality \begin{align} &\frac{d}{dt}\,\Vert\nabla\widetilde{\mu}\Vert^2+\frac{{{\hat c_1}}}{2}\Vert\varphi_t\Vert^2 \,\leq\, C_{{{K}}}\,\Vert\nabla\widetilde{\mu}\Vert^2+{\Gamma_{K,F}}\, \big(\Vert\nabla\boldsymbol{u}\Vert^2+\Vert\varphi\Vert_V^2\big) +{\Gamma_{F}}\,\Vert\varphi_{2,t}\Vert_V^2\,\big(\Vert\boldsymbol{u}\Vert^2+\Vert\varphi\Vert^2\big). \label{diffineq2} \end{align} Now observe that $\widetilde\mu(0)=0$. Thus, applying Gronwall's lemma to \eqref{diffineq2}, and using \eqref{bound1} for $\varphi_{2,t}$, we obtain, for every $t\in [0,T]$, \begin{align} \Vert\nabla\widetilde{\mu}(t)\Vert^2 & \,\leq\, \Gamma\Big( \int_0^t\big(\Vert\nabla\boldsymbol{u}(\tau)\Vert^2+\Vert\varphi(\tau)\Vert_V^2\big)d\tau\nonumber \\ & \,\,\,\qquad+\big(\Vert\boldsymbol{u}\Vert_{{C^0([0,t];G_{div})}}^2
+\Vert\varphi\Vert_{{C^0([0,t];H)}}^2
\big)\int_0^t\Vert\varphi_{2,t}(\tau)\Vert_V^2 d\tau\Big)\,,\nonumber \end{align} where, for the sake of a shorter notation, we have omitted the indices $K$ and $F$ in the constant $\Gamma$. Hence, using the stability estimate of Lemma \ref{stabest1}, we obtain from the last two inequalities that \begin{align} &\Vert\nabla\widetilde{\mu}(t)\Vert^2 \,\leq\, \Gamma\,\Vert\boldsymbol{v}_2-\boldsymbol{v}_1\Vert_{L^2(0,T;(V_{div})')}^2. \label{est11} \end{align} Now, taking the gradient of \eqref{diff2}, and arguing as in the proof of \cite[Lemma 2]{FGG},
it is not difficult to see that we have \begin{align} &(\nabla\widetilde{\mu},\nabla\varphi)\,\geq\,\frac{{{\hat c_1}}}{4}\,\Vert\nabla\varphi\Vert^2 -\Gamma\, \Vert\varphi\Vert^2,\nonumber \end{align} and this estimate, together with \begin{align} &(\nabla\widetilde{\mu},\nabla\varphi)\,\leq\, \frac{{{\hat c_1}}}{8}\,\Vert\nabla\varphi\Vert^2+\frac{2} {{{\hat c_1}}}\,\Vert\nabla\widetilde{\mu}\Vert^2,\notag \end{align} yields \begin{align} \Vert\nabla\widetilde{\mu}\Vert^2\,\geq\,\frac{{{\hat c_1}}^2}{16}\, \Vert\nabla\varphi\Vert^2-\Gamma\,\Vert\varphi\Vert^2\,, \nonumber \end{align} where the factor $\hat c_1/2$ is again absorbed in the constant $\Gamma$. This last estimate, combined with \eqref{est11}, gives \begin{align} &\Vert\varphi(t)\Vert_V^2\,\leq\, \Gamma\,{\Vert\boldsymbol{v}_2-\boldsymbol{v}_1\Vert_{L^2(0,T;(V_{div})')}^2}. \label{est22} \end{align} By integrating \eqref{diffineq2} in time over $[0,t]$, and using \eqref{est11} and the stability estimate of Lemma \ref{stabest1} again, we also get \begin{align} &{{\hat c_1}}\int_0^t\Vert\varphi_t(\tau)\Vert^2\, d\tau \,\leq\, \Gamma\,{\Vert\boldsymbol{v}_2-\boldsymbol{v}_1\Vert_{L^2(0,T;(V_{div})')}^2}. \label{est23} \end{align} The stability estimate \eqref{stabi2} now follows from \eqref{est22}, \eqref{est23}, \eqref{est21} and Lemma \ref{stabest1}.
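In fact, squaring \eqref{est21} and integrating over $(0,t)$, we obtain
\begin{align}
\int_0^t\Vert\varphi(\tau)\Vert_{H^2(\Omega)}^2\,d\tau\,\leq\,\Gamma\int_0^t\big(\Vert\varphi_t(\tau)\Vert^2 +\Vert\nabla\boldsymbol{u}(\tau)\Vert^2+\Vert\varphi(\tau)\Vert_V^2\big)\,d\tau\,, \nonumber
\end{align}
where the first summand on the right-hand side is controlled by \eqref{est23} and the remaining two by the stability estimate of Lemma \ref{stabest1}; this yields the bound for $\,\Vert\varphi_2-\varphi_1\Vert_{L^2(0,t;H^2(\Omega))}^2\,$ asserted in \eqref{stabi2}.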
\end{proof}
\section{Optimal control}
We now study the optimal control problem \textbf{(CP)}, where throughout this section we assume that the cost functional $\,{\cal J}\,$ is given by (\ref{costfunct}) and that the general hypotheses {\bf (H1)}--{\bf (H4)} are fulfilled. Moreover, we assume that
the set of admissible controls $\mathcal{V}_{ad}$ is given by \begin{align} \label{Vad} &\mathcal{V}_{ad}:=\big\{\boldsymbol{v}\in L^2(0,T;G_{div}):\:\: v_{a,i}(x,t)\leq v_i(x,t)\leq v_{b,i}(x,t),\:\:\mbox{a.e. }(x,t)\in Q,\:\: i=1,2\big\}, \end{align} with prescribed functions $\boldsymbol{v}_a,\boldsymbol{v}_b\in L^2(0,T;G_{div})\cap L^\infty(Q)^2$. According to Theorem \ref{thm1}, the control-to-state mapping \begin{align} &{\cal S}:\mathcal{V}\to\mathcal{H},\quad\,\boldsymbol{v}\in\mathcal{V}\mapsto {\cal S}(\boldsymbol{v}):=[\boldsymbol{u},\varphi]\in\mathcal{H}, \label{control-state} \end{align} where the space $\mathcal{H}$ is given by \begin{align} \mathcal{H} &\,:=\,\big[H^1(0,T;G_{div})\cap C^0([0,T];V_{div})\cap L^2(0,T;H^2(\Omega)^2)\big]\nonumber \\ &\qquad\times \big[C^1([0,T];H)
\cap H^1(0,T;V)\cap C^0([0,T];H^2(\Omega))\big], \end{align} is well defined and locally bounded. Moreover, it follows from Lemma \ref{stabest2} that $\,{\cal S}\,$ is locally Lipschitz continuous from ${\cal V}$ into the space \begin{align} & \mathcal{W}:=\big[ { C^0([0,T];G_{div})}
\cap L^2(0,T;V_{div})\big]\times\big[H^1(0,T;H)\cap C^0([0,T];V)\cap L^2(0,T;H^2(\Omega))\big]. \end{align} Notice also that problem \textbf{(CP)} is equivalent to the minimization problem \begin{align} &\min_{\boldsymbol{v}\in\mathcal{V}_{ad}} f(\boldsymbol{v}),\nonumber \end{align} for the reduced cost functional defined by $f(\boldsymbol{v}):=\mathcal{J}\big({\cal S}(\boldsymbol{v}),\boldsymbol{v}\big)$, for every $\boldsymbol{v}\in\mathcal{V}$.
We have the following existence result. \begin{thm} Assume that the hypotheses {\bf(H1)}--{\bf (H4)} are satisfied and that ${\cal V}_{ad}$ is given by {\rm (\ref{Vad})}. Then the optimal control problem {\bf (CP)} admits a solution. \end{thm} \begin{proof} Take a minimizing sequence $\{\boldsymbol{v}_n\}\subset\mathcal{V}_{ad}$ for \textbf{(CP)}. Since ${\cal V}_{ad}$ is bounded in ${\cal V}$, we may assume without loss of generality that \begin{align} &\boldsymbol{v}_n\to\overline{\boldsymbol{v}}\,\quad\mbox{weakly in }\,L^2(0,T;G_{div}),\nonumber \end{align} for some $\overline{\boldsymbol{v}}\in {\cal V}$. Since $\mathcal{V}_{ad}$ is convex and closed in $\mathcal{V}$, and thus weakly sequentially closed, we have $\overline{\boldsymbol{v}}\in\mathcal{V}_{ad}$.
Moreover, since ${\cal S}$ is a locally bounded mapping from ${\cal V}$ into ${\cal H}$, we may without loss of generality assume that the sequence $\,[\boldsymbol{u}_n,\varphi_n]={\cal S}(\boldsymbol{v}_n)$, $n\in \mathbb{N}$, satisfies with appropriate limit points $[\overline{\boldsymbol{u}},\overline{\varphi}]$ the convergences \begin{align} & \boldsymbol{u}_n\to\overline{\boldsymbol{u}},\,\quad\mbox{weakly$^\ast$ in $L^\infty(0,T;V_{div})$ and weakly in $H^1(0,T;G_{div})\cap L^2(0,T;H^2(\Omega)^2)$}, \label{wconv1}\\ &\varphi_n\to\overline{\varphi},\,\quad\mbox{weakly$^\ast$ in $L^\infty(0,T;H^2(\Omega))$ and in $W^{1,\infty}(0,T;H)$, and weakly in $H^1(0,T;V)$}. \label{wconv2} \end{align} In particular, it follows from the compactness of the embedding $H^1(0,T;V)\cap L^\infty(0,T;H^2(\Omega))\linebreak \subset C^0([0,T];{H^s(\Omega))}$ for $0\le s<2$, that $\,\varphi_n\to \overline{\varphi}$ strongly in $C^0(\overline{Q})$, whence we conclude that also \begin{align} & \mu_n:=a\,\varphi_n-K\ast\varphi_n+F'(\varphi_n)\to \overline{\mu}:=a\,\overline{\varphi} -K\ast\overline{\varphi}+F'(\overline{\varphi}) \quad\,\mbox{strongly in }\,C^0(\overline{Q}),\nonumber \\ & \nu(\varphi_n)\to \nu(\overline{\varphi}) \quad\,\mbox{strongly in }\,C^0(\overline{Q}). \end{align} We also have, by compact embedding, \begin{align*} &\boldsymbol{u}_{n}\to\overline{\boldsymbol{u}}\,\quad\mbox{strongly in }\,L^2(0,T;G_{div}), \end{align*} and it obviously holds \begin{align} & \boldsymbol{u}_{n}(t)\to\overline{\boldsymbol{u}}(t) \,\quad\mbox{weakly in $G_{div}$, \,\, for all $t\in [0,T]$.} \label{wconv3} \end{align}
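The compactness of the embedding used for the sequence $\{\varphi_n\}$ above is a consequence of the Aubin--Lions--Simon lemma: if $\,X\hookrightarrow B\hookrightarrow Z\,$ are Banach spaces such that the embedding $X\hookrightarrow B$ is compact, then the space $\,\{u\in L^\infty(0,T;X):\,u_t\in L^2(0,T;Z)\}\,$ embeds compactly into $C^0([0,T];B)$; here, it suffices to take $X=H^2(\Omega)$, $B=H^s(\Omega)$ with $0\le s<2$, and $Z=H$, observing that $\{\varphi_{n,t}\}$ is bounded in $L^2(0,T;V)\subset L^2(0,T;H)$.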
Now, by passing to the limit in the weak formulation of problem \eqref{sy1}--\eqref{ics}, written for each solution $[\boldsymbol{u}_n,\varphi_n]={\cal S}(\boldsymbol{v}_n)$, $n\in\mathbb{N}$, and using the above weak and strong convergences (in particular, we can use \cite[Lemma 1]{CFG} in order to pass to the limit in the nonlinear term $-\,2\,\mbox{div}(\nu(\varphi_n)D\boldsymbol{u}_n)$),
it is not difficult to see that $[\overline{\boldsymbol{u}},\overline{\varphi}]$ satisfies the weak formulation corresponding to $\overline{\boldsymbol{v}}$. Hence, we have $[\overline{\boldsymbol{u}},\overline{\varphi}]={\cal S}(\overline{\boldsymbol{v}})$, that is, the pair $([\overline{\boldsymbol{u}},\overline{\varphi}], \overline{\boldsymbol{v}})$ is admissible for \textbf{(CP)}.
Finally, thanks to the weak sequential lower semicontinuity of $\mathcal{J}$ and to the weak convergences \eqref{wconv1}, \eqref{wconv2}, \eqref{wconv3}, we infer that $\overline{\boldsymbol{v}}\in\mathcal{V}_{ad}$, together with the associated state $[\overline{\boldsymbol{u}},\overline{\varphi}]={\cal S}(\overline{\boldsymbol{v}})$, is a solution to \textbf{(CP)}. \end{proof}
\noindent \textbf{The linearized system.} Suppose that the general hypotheses {\bf (H1)}--{\bf (H4)} are fulfilled. We assume that a fixed $\overline{\boldsymbol{v}}\in {\cal V}$ is given, that $[\overline{\boldsymbol{u}},\overline{\varphi}] :={\cal S}(\overline{\boldsymbol{v}})\in {\cal H}$ is the associated solution to the state system \eqref{sy1}-\eqref{ics} according to Theorem 1, and that $\boldsymbol{h}\in\mathcal{V}$ is given. In order to show that the control-to-state operator is differentiable at $\overline{\boldsymbol{v}}$, we first consider the following system, which is obtained by linearizing the state system \eqref{sy1}-\eqref{ics} at $[\overline{\boldsymbol{u}},\overline{\varphi}]={\cal S}(\overline{\boldsymbol{v}})$: \begin{align} &\boldsymbol\xi_t\,-\,2\,\mbox{div}\big(\nu(\overline{\varphi})D\boldsymbol\xi\big)-2\,\mbox{div}\big(\nu'(\overline{\varphi})\,\eta\, D\overline{\boldsymbol{u}}\big)+(\overline{\boldsymbol{u}}\cdot\nabla)\boldsymbol\xi+(\boldsymbol\xi\cdot\nabla)\overline{\boldsymbol{u}}+\nabla\widetilde{\pi}\nonumber\\ &=\,\big(a\,\eta-{{K}}\ast\eta+F''(\overline{\varphi})\,\eta\big)\nabla\overline{\varphi}+\overline{\mu}\,\nabla\eta+\boldsymbol{h} \quad\,\mbox{in $Q$},\label{linsy1}\\ &\eta_t+\overline{\boldsymbol{u}}\cdot\nabla\eta\,=\,-\boldsymbol\xi\cdot\nabla\overline{\varphi}+\Delta\big(a\,\eta-{{K}}\ast\eta+F''(\overline{\varphi})\,\eta\big) \quad\,\mbox{in $Q$},\label{linsy2}\\ &\mbox{div}(\boldsymbol\xi)=0 \quad\,\mbox{in $Q$},\label{linsy3}\\ &\boldsymbol\xi=[0,0]^{T},\quad\, \frac{\partial}{\partial\boldsymbol{n}}\big(a\,\eta-{K}\ast\eta+F''(\overline{\varphi})\,\eta\big)=0 \,\quad\mbox{on $\Sigma$}, \label{linbcs}\\ &\boldsymbol\xi(0)=[0,0]^{T},\,\quad\eta(0)=0, \,\quad\mbox{in $\Omega$},\label{linics} \end{align} where \begin{equation} \overline{\mu}=a\overline{\varphi}-{{K}}\ast\overline{\varphi}+F'(\overline{\varphi}). \end{equation}
We first prove that \eqref{linsy1}--\eqref{linics} has a unique weak solution. \begin{prop}\label{linthm} Suppose that the hypotheses {\bf(H1)}--{\bf (H4)} are satisfied. Then problem \eqref{linsy1}--\eqref{linics} has for every $\boldsymbol{h}\in\mathcal{V}$ a unique weak solution $\,[\boldsymbol\xi,\eta]\,$ such that \begin{align} &\boldsymbol\xi\in H^1(0,T;(V_{div})') \cap C^0([0,T];G_{div})\cap L^2(0,T;V_{div}), \nonumber\\ &\eta\in H^1(0,T;V')\cap C^0([0,T];H)\cap L^2(0,T;V).\label{reglin} \end{align} \end{prop}
\begin{proof} We will make use of a Faedo-Galerkin approximating scheme. Following the lines of \cite{CFG}, we introduce the family $\{\boldsymbol{w}_j\}_{j\in\mathbb{N}}$ of the eigenfunctions to the Stokes operator $A$ as a Galerkin basis in $V_{div}$ and the family $\{\psi_j\}_{j\in\mathbb{N}}$ of the eigenfunctions to
the Neumann operator $B:=-\Delta+I$ as a Galerkin basis in $V$.
Both these eigenfunction families $\{\boldsymbol{w}_j\}_{j\in\mathbb{N}}$ and $\{\psi_j\}_{j\in\mathbb{N}}$ are assumed to be suitably ordered and normalized.
Moreover, recall that, since $\boldsymbol{w}_j\in D(A)$, we have ${\rm div}(\boldsymbol{w}_j)=0$. Then we look for two functions of the form \begin{align} &\boldsymbol\xi_n(t):=\sum_{j=1}^n a^{(n)}_j(t)\boldsymbol{w}_j\,,\qquad\eta_n(t):=\sum_{j=1}^n b^{(n)}_j(t) \psi_j\,,\nonumber \end{align} that solve the following approximating problem: \begin{align} &\langle \partial_t\boldsymbol\xi_n(t),\boldsymbol{w}_i\rangle_{V_{div}}\,+\,2\,\big(\nu(\overline{\varphi}(t)) \,D\boldsymbol\xi_n(t),D\boldsymbol{w}_i\big) \,+\, 2\,\big(\nu'(\overline{\varphi}(t))\,\eta_n(t)\, D\overline{\boldsymbol{u}}(t),D\boldsymbol{w}_i\big) \nonumber\\ &+\,b(\overline{\boldsymbol{u}}(t),\boldsymbol\xi_n(t),\boldsymbol{w}_i)\,+\,b(\boldsymbol\xi_n(t),\overline{\boldsymbol{u}}(t),\boldsymbol{w}_i)\nonumber\\ &=\,\big((a\,\eta_n(t)-{{K}}\ast\eta_n(t)\,+\,F''(\overline{\varphi}(t))\,\eta_n(t))\,\nabla\overline{\varphi}(t),\boldsymbol{w}_i\big) +(\overline{\mu}(t)\,\nabla\eta_n(t),\boldsymbol{w}_i)\,+\, (\boldsymbol{h}(t),\boldsymbol{w}_i) \,,\label{FaGa1}\\[1mm] &\langle \partial_t\eta_{n}(t),\psi_i\rangle_V \,=\,-\big(\nabla(a\,\eta_n-{{K}}\ast\eta_n+F''(\overline{\varphi})\,\eta_n)(t),\nabla\psi_i\big) +(\overline{\boldsymbol{u}}(t)\,\eta_n(t),\nabla\psi_i)\nonumber\\ &\hspace*{30mm}+(\boldsymbol\xi_n(t)\, \overline{\varphi}(t),\nabla\psi_i),\label{FaGa2}\\ &\boldsymbol\xi_n(0)= [0,0]^T, \,\quad\eta_n(0)=0,\label{FaGa3} \end{align} for $i=1,\dots,n$, and for almost every $t\in (0,T)$. Evidently, this is nothing but a Cauchy problem for a system of $2n$ linear ordinary differential equations in the $2n$ unknowns $a^{(n)}_i$, $b^{(n)}_i$, in which, owing to the regularity properties of $[\overline{\boldsymbol{u}},\overline{\varphi}]$, all of the coefficient functions belong to $\,L^2(0,T)$.
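In abstract form, and writing (only for this remark) $M_n$ for the coefficient matrix and $\boldsymbol{f}_n$ for the forcing vector assembled from the Galerkin bases, the system \eqref{FaGa1}--\eqref{FaGa3} can be sketched as

```latex
% Abstract form of the Galerkin system (notation M_n, f_n introduced only here):
\frac{d}{dt}\begin{pmatrix}\boldsymbol{a}^{(n)}(t)\\[1mm]\boldsymbol{b}^{(n)}(t)\end{pmatrix}
\,=\,M_n(t)\begin{pmatrix}\boldsymbol{a}^{(n)}(t)\\[1mm]\boldsymbol{b}^{(n)}(t)\end{pmatrix}
\,+\,\boldsymbol{f}_n(t),
\qquad
\begin{pmatrix}\boldsymbol{a}^{(n)}(0)\\[1mm]\boldsymbol{b}^{(n)}(0)\end{pmatrix}\,=\,\boldsymbol{0},
```

where $M_n\in L^2(0,T;\mathbb{R}^{2n\times 2n})$ collects the terms built from $\overline{\boldsymbol{u}}$, $\overline{\varphi}$, $\overline{\mu}$, and $\boldsymbol{f}_n\in L^2(0,T;\mathbb{R}^{2n})$ collects the terms built from $\boldsymbol{h}$.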
Thanks to Carath\'{e}odory's theorem, we can conclude that this problem enjoys a unique solution $\boldsymbol{a}^{(n)}:=(a^{(n)}_1,\cdots,a^{(n)}_n)^T$, $\boldsymbol{b}^{(n)}:=(b^{(n)}_1,\cdots,b^{(n)}_n)^T$ such that $\boldsymbol{a}^{(n)},\boldsymbol{b}^{(n)}\in H^1(0,T;\mathbb{R}^n)$.
We now aim to derive a priori estimates for $\boldsymbol\xi_n$ and $\eta_n$ that are uniform in $n\in\mathbb{N}$. For the sake of keeping the exposition at a reasonable length, we will always omit the argument $t$. To begin with, let us multiply \eqref{FaGa1} by $a^{(n)}_i$, \eqref{FaGa2} by $b^{(n)}_i$, sum over $i=1,\cdots,n$, and add the resulting identities. We then obtain, almost everywhere in $(0,T)$, \begin{align} &\frac{1}{2}\,\frac{d}{dt}\,\big(\Vert\boldsymbol\xi_n\Vert^2+\Vert\eta_n\Vert^2\big)\,+\, 2\,\big(\nu(\overline{\varphi})\,D\boldsymbol\xi_n,D\boldsymbol\xi_n\big)\,+\, \big((a+F''(\overline{\varphi}))\,\nabla\eta_n,\nabla\eta_n\big) \nonumber \\ &=\,-b(\boldsymbol\xi_n,\overline{\boldsymbol{u}},\boldsymbol\xi_n)\,-\,2\,\big(\nu'(\overline{\varphi})\,\eta_n\, D\overline{\boldsymbol{u}},D\boldsymbol\xi_n\big)\,+\,\big((a\,\eta_n-{{K}}\ast\eta_n+ F''(\overline{\varphi})\,\eta_n)\, \nabla\overline{\varphi},\boldsymbol\xi_n\big)\nonumber\\ &\quad\,\,+(\overline{\mu}\,\nabla\eta_n,\boldsymbol\xi_n)\,+\, (\boldsymbol{h},\boldsymbol\xi_n) -\big(\eta_n\nabla a- \nabla {{K}}\ast\eta_n,\nabla\eta_n\big)-(\eta_n \,F'''(\overline{\varphi})\,\nabla\overline{\varphi},\nabla\eta_n\big) \nonumber\\ &\quad\,\, +(\overline{\boldsymbol{u}}\,\eta_n,\nabla\eta_n)+(\boldsymbol\xi_n\,\overline{\varphi},\nabla\eta_n).\label{diffid2} \end{align}
Let us now estimate the terms on the right-hand side of this equation individually. In the remainder of this proof, we use the following abbreviating notation: the letter $\,C\,$ will stand for positive constants that depend only on the global data of the system \eqref{sy1}--\eqref{ics}, on $\overline{\boldsymbol{v}}$, and on $[\overline{\boldsymbol{u}},\overline{\varphi}]$, but not on $n\in\mathbb{N}$; moreover, by $C_\sigma$ we denote constants that in addition depend on the quantities indicated by the index $\sigma$, but not on $n\in\mathbb{N}$. Both $C$ and $C_\sigma$ may change
within formulas and even within lines.
We have, using H\"older's inequality, the elementary Young's inequality, and the global bounds (\ref{bound1}) as main tools, the following series of estimates: \begin{align} &
|b(\boldsymbol\xi_n,\overline{\boldsymbol{u}},\boldsymbol\xi_n)| \,\leq\,\Vert\boldsymbol\xi_n\Vert_{L^4(\Omega)^2}\, \Vert\nabla\overline{\boldsymbol{u}}\Vert_{L^4(\Omega)^{2\times 2}}\,\Vert\boldsymbol\xi_n\Vert \,\le\, C\,\Vert\nabla\boldsymbol\xi_n\Vert\,\Vert\overline{\boldsymbol{u}}\Vert_{H^2(\Omega)^2}\,\Vert\boldsymbol\xi_n\Vert \nonumber \\[1mm] & \leq\,\epsilon\,\Vert\nabla\boldsymbol\xi_n\Vert^2+C_\epsilon\,\Vert\overline{\boldsymbol{u}}\Vert_{H^2(\Omega)^2}^2\,\Vert\boldsymbol\xi_n\Vert^2, \label{est25} \\[4mm] &
\big|2\,\big(\nu'(\overline{\varphi})\,\eta_n \,D\overline{\boldsymbol{u}},D\boldsymbol\xi_n\big)\big|\, \leq\,C\,\Vert\eta_n\Vert_{L^4(\Omega)}\,\Vert D\overline{\boldsymbol{u}}\Vert_{L^4(\Omega)^{2\times 2}}\, \Vert\nabla\boldsymbol\xi_n\Vert \nonumber \\[1mm] & \leq\,\epsilon\,\Vert\nabla\boldsymbol\xi_n\Vert^2\,+\,C_\epsilon\,\big(\Vert\eta_n\Vert^2 +\Vert\eta_n\Vert\,\Vert\nabla\eta_n\Vert\big)\,\Vert D\overline{\boldsymbol{u}}\Vert_{L^4(\Omega)^{2\times 2}}^2 \nonumber \\[1mm] & \leq\,\epsilon\,\Vert\nabla\boldsymbol\xi_n\Vert^2\,+\,C_\epsilon\,\Vert\overline{\boldsymbol{u}}\Vert_{H^2(\Omega)^2}^2\,\Vert\eta_n\Vert^2\, +\,\epsilon'\,\Vert\nabla\eta_n\Vert^2\,+\,C_{\epsilon,\epsilon'}\,\Vert\nabla\overline{\boldsymbol{u}}\Vert^2 \,\Vert\overline{\boldsymbol{u}}\Vert_{H^2(\Omega)^2}^2\, \Vert\eta_n\Vert^2 \nonumber \\[1mm] & \leq\,\epsilon\,\Vert\nabla\boldsymbol\xi_n\Vert^2\,+\,\epsilon'\,\Vert\nabla\eta_n\Vert^2 \,+\,C_{\epsilon,\epsilon'}\,\Vert\overline{\boldsymbol{u}}\Vert_{H^2(\Omega)^2}^2\,\Vert\eta_n\Vert^2, \\[4mm] &
\big|\big((a\,\eta_n-K\ast\eta_n+F''(\overline{\varphi})\,\eta_n)\,\nabla\overline{\varphi},\boldsymbol\xi_n\big)\big| \,\leq\,C\,\Vert\eta_n\Vert\,\Vert\overline{\varphi}\Vert_{H^2(\Omega)}\,\Vert\boldsymbol\xi_n\Vert_{L^4(\Omega)^2}\nonumber \\[1mm] & \leq\,C\,\Vert\eta_n\Vert\,\Vert\nabla\boldsymbol\xi_n\Vert\, \leq\,\epsilon\,\Vert\nabla\boldsymbol\xi_n\Vert^2\,+\,C_{\epsilon}\,\Vert\eta_n\Vert^2, \\[4mm] &
|(\overline{\mu}\,\nabla\eta_n,\boldsymbol\xi_n)|\,=\,|(\eta_n\,\nabla\overline{\mu},\boldsymbol\xi_n)| \,\leq\,\Vert\nabla\overline{\mu}\Vert_{L^4(\Omega)^{2}}\,\Vert\eta_n\Vert\,\Vert \boldsymbol\xi_n\Vert_{L^4(\Omega)^2}\,\leq\,C\,\Vert\nabla\boldsymbol\xi_n\Vert\,\Vert\eta_n\Vert \nonumber \\[1mm] & \leq\,\epsilon\,\Vert\nabla\boldsymbol\xi_n\Vert^2\,+\,C_\epsilon\,\Vert\eta_n\Vert^2, \\[4mm]
&|(\boldsymbol{h},\boldsymbol\xi_n)| \,\leq \,C\,\Vert\boldsymbol\xi_n\Vert^2\,+\,C\,\Vert\boldsymbol{h}\Vert_\mathcal{V}^2, \\[4mm]
&\big|\big(\eta_n\nabla a- \nabla K\ast\eta_n,\nabla\eta_n\big)\big| \,\leq\, C\,\Vert\eta_n\Vert\,\Vert\nabla\eta_n\Vert\,\leq\,\epsilon'\,\Vert\nabla\eta_n\Vert^2\,+\, C_{\epsilon'}\,\Vert\eta_n\Vert^2. \end{align} Moreover, also employing the Gagliardo-Nirenberg inequality (\ref{GN}), we find that \begin{align} &
|(\eta_n \,F'''(\overline{\varphi})\,\nabla\overline{\varphi},\nabla\eta_n\big)| \,\leq\,C\,\Vert\eta_n\Vert_{L^4(\Omega)}\,\Vert\nabla\overline{\varphi}\Vert_{L^4(\Omega)^2}\, \Vert\nabla\eta_n\Vert\nonumber \\[1mm] & \leq\,C\,(\Vert\eta_n\Vert\,+\,\Vert\eta_n\Vert^{1/2}\,\Vert\nabla\eta_n\Vert^{1/2}\big) \,\Vert\nabla\eta_n\Vert\,\leq\,\epsilon'\,\Vert\nabla\eta_n\Vert^2\,+\,C_{\epsilon'} \,\Vert\eta_n\Vert^2, \label{est50}\\[4mm] &
|(\overline{\boldsymbol{u}}\,\eta_n,\nabla\eta_n)| \,\leq\,\Vert\overline{\boldsymbol{u}}\Vert_{L^4(\Omega)^2}\,\Vert\eta_n\Vert_{L^4(\Omega)}\, \Vert\nabla\eta_n\Vert\,\leq\,C\,\big(\Vert\eta_n\Vert\,+\,\Vert\eta_n\Vert^{1/2}\, \Vert\nabla\eta_n\Vert^{1/2}\big)\,\Vert\nabla\eta_n\Vert\nonumber \\[1mm] & \leq\,\epsilon'\,\Vert\nabla\eta_n\Vert^2\,+\,C_{\epsilon'}\,\Vert\eta_n\Vert^2, \\[4mm] &
|(\boldsymbol\xi_n\,\overline{\varphi},\nabla\eta_n)| \,\leq\, C\,\Vert\overline{\varphi}\Vert_{H^2(\Omega)}\,\Vert\boldsymbol\xi_n\Vert\,\Vert\nabla\eta_n\Vert \,\leq\,\epsilon'\,\Vert\nabla\eta_n\Vert^2\,+\,C_{\epsilon'}\,\Vert\boldsymbol\xi_n\Vert^2. \label{est26} \end{align} Hence, inserting the estimates \eqref{est25}--\eqref{est26} in \eqref{diffid2}, applying the conditions (\ref{F1}) in {\bf (H2)} and (\ref{nu}) in {\bf (H3)}, respectively, to the second and third terms on the left-hand side of \eqref{diffid2}, and choosing $\epsilon>0$ and $\epsilon'>0$ small enough, we obtain the estimate \begin{align} &\frac{d}{dt}\big(\Vert\boldsymbol\xi_n\Vert^2+\Vert\eta_n\Vert^2\big)+{{\hat\nu_1}}\,\Vert\nabla\boldsymbol\xi_n\Vert^2\,+\, {{\hat c_1}} \,\Vert\nabla\eta_n\Vert^2 \leq {{C}}\,\big(1+\Vert\overline{\boldsymbol{u}}\Vert_{H^2(\Omega)^2}^2\big)\big(\Vert\boldsymbol\xi_n\Vert^2 +\Vert\eta_n\Vert^2\big) + C\,\Vert\boldsymbol{h}\Vert_\mathcal{V}^2.\label{Gr} \end{align}
Since, owing to (\ref{bound1}), the mapping $\,t\mapsto \|\overline{\boldsymbol{u}}(t)\|_{H^2(\Omega)^2}^2\,$ belongs to $L^1(0,T)$, we may employ Gronwall's lemma to conclude the estimate \begin{align} & \Vert\boldsymbol\xi_n\Vert_{L^\infty(0,T;G_{div})\cap L^2(0,T;V_{div})}\,\leq\, C\,\Vert\boldsymbol{h}\Vert_\mathcal{V}, \,\quad\Vert\eta_n\Vert_{L^\infty(0,T;H)\cap L^2(0,T;V)}\,\leq\,C\,\Vert\boldsymbol{h}\Vert_\mathcal{V} \,\quad\mbox{for all }\,n\in \mathbb{N}\,. \label{est24} \end{align} Moreover, by comparison in \eqref{FaGa1}, \eqref{FaGa2}, we can easily deduce also the estimates for the time derivatives $\partial_t\boldsymbol\xi_n$ and $\partial_t\eta_n$. Indeed, we have \begin{align} & \Vert\partial_t \boldsymbol\xi_n\Vert_{L^2(0,T;(V_{div})')}\,\leq\,C\,\Vert\boldsymbol{h}\Vert_\mathcal{V}, \,\quad\Vert\partial_t\eta_n\Vert_{L^2(0,T;V')}\,\leq\,C\,\Vert\boldsymbol{h}\Vert_\mathcal{V} \,\quad\mbox{for all }\,n\in \mathbb{N}. \label{est28} \end{align}
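For the reader's convenience, the Gronwall step leading to \eqref{est24} can be spelled out as follows (the auxiliary notation $y_n$, $\alpha$ is introduced only here):

```latex
% Gronwall step applied to the differential inequality above:
y_n(t):=\Vert\boldsymbol\xi_n(t)\Vert^2+\Vert\eta_n(t)\Vert^2,\qquad
\alpha(t):=C\,\big(1+\Vert\overline{\boldsymbol{u}}(t)\Vert_{H^2(\Omega)^2}^2\big)\in L^1(0,T),
\\[2mm]
y_n'(t)\,\le\,\alpha(t)\,y_n(t)\,+\,C\,\Vert\boldsymbol{h}\Vert_{\mathcal V}^2,
\quad y_n(0)=0
\quad\Longrightarrow\quad
y_n(t)\,\le\,C\,\Vert\boldsymbol{h}\Vert_{\mathcal V}^2
\int_0^t e^{\int_s^t\alpha(r)\,dr}\,ds
\,\le\,C\,\Vert\boldsymbol{h}\Vert_{\mathcal V}^2
\qquad\mbox{for all }\,t\in[0,T].
```

Integrating \eqref{Gr} over $(0,T)$ then supplies the $L^2(0,T;V_{div})$ and $L^2(0,T;V)$ bounds on $\boldsymbol\xi_n$ and $\eta_n$ appearing in \eqref{est24}.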
From \eqref{est24}, \eqref{est28} we deduce the existence of two functions $\boldsymbol\xi$, $\eta$ satisfying \eqref{reglin} and of two (not relabelled) subsequences $\{\boldsymbol\xi_n\}$, $\{\eta_n\}$ (and $\{\partial_t \boldsymbol\xi_n\}$, $\{\partial_t\eta_n\}$) converging weakly respectively to $\boldsymbol\xi$, $\eta$ (and to $\boldsymbol\xi_t$, $\eta_t$) in the spaces where the bounds given by \eqref{est24} (and by \eqref{est28}) hold.
Then, by means of standard arguments, we can pass to the limit as $n\to\infty$ in \eqref{FaGa1}--\eqref{FaGa3} and prove that $\boldsymbol\xi$, $\eta$ satisfy the weak formulation of problem \eqref{linsy1}--\eqref{linics}. Notice that we actually have the regularity \eqref{reglin}, since the space $H^1(0,T;(V_{div})') \cap L^2(0,T;V_{div})$ is continuously embedded in $C^0([0,T];G_{div})$; similarly we obtain that $\eta\in C^0([0,T];H)$.
Finally, in order to prove that the solution $\boldsymbol\xi,\eta$ is unique, we can test the difference between \eqref{linsy1}, \eqref{linsy2}, written for two solutions $\boldsymbol\xi_1,\eta_1$ and $\boldsymbol\xi_2,\eta_2$, by $\boldsymbol\xi:=\boldsymbol\xi_1-\boldsymbol\xi_2$ and by $\eta:=\eta_1-\eta_2$, respectively. Since the problem is linear, the argument is straightforward, and we may leave the details to the reader.
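A sketch of the resulting uniqueness estimate: since the system is linear, the differences $\boldsymbol\xi$, $\eta$ solve \eqref{linsy1}--\eqref{linics} with $\boldsymbol{h}=\boldsymbol{0}$, and the very same testing procedure that led to \eqref{Gr} gives

```latex
\frac{d}{dt}\big(\Vert\boldsymbol\xi\Vert^2+\Vert\eta\Vert^2\big)
\,\le\,C\,\big(1+\Vert\overline{\boldsymbol{u}}\Vert_{H^2(\Omega)^2}^2\big)
\big(\Vert\boldsymbol\xi\Vert^2+\Vert\eta\Vert^2\big),
\qquad
\Vert\boldsymbol\xi(0)\Vert^2+\Vert\eta(0)\Vert^2\,=\,0,
```

so that Gronwall's lemma yields $\boldsymbol\xi=\boldsymbol{0}$ and $\eta=0$ almost everywhere in $Q$.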
\end{proof}
\begin{oss}{\upshape By virtue of the weak sequential lower semicontinuity of norms, we can conclude from the estimates (\ref{est24}) and (\ref{est28}) that the linear mapping $\,\boldsymbol{h}\mapsto [\boldsymbol\xi^{\boldsymbol{h}}, \eta^{\boldsymbol{h}}]\,$, which assigns to each $\boldsymbol{h}\in {\cal V}$ the corresponding unique weak solution pair $\,[\boldsymbol\xi^{\boldsymbol{h}}, \eta^{\boldsymbol{h}}]:=[\boldsymbol\xi,\eta]\,$ to the linearized system \eqref{linsy1}--\eqref{linics}, is continuous as a mapping between the spaces ${\cal V}$ and $\big[H^1(0,T;(V_{div})')\cap C^0([0,T];G_{div})\cap L^2(0,T;V_{div})\big] \times \big[H^1(0,T;V')\cap C^0([0,T];H)\cap L^2(0,T;V) \big]$. } \end{oss}
\noindent \textbf{Differentiability of the control-to-state operator.} We now prove the following result:
\begin{thm} \label{diffcontstat} Suppose that the hypotheses {\bf (H1)}--{\bf (H4)} are fulfilled. Then the control-to-state operator ${\cal S}:{\cal V}\to {\cal H}$ is Fr\'echet differentiable on ${\cal V}$ when viewed as a mapping between the spaces ${\cal V}$ and ${\cal Z}$, where
\begin{align} &\mathcal{Z}:=\big[C([0,T];G_{div})\cap L^2(0,T;V_{div})\big]\times\big[C([0,T];H)\cap L^2(0,T;V)\big].\nonumber \end{align} Moreover, for any $\overline{\boldsymbol{v}}\in {\cal V}$ the Fr\'echet derivative $\,{\cal S}'(\overline{\boldsymbol{v}}) \in {\cal L}({\cal V},{\cal Z})\,$ is given by $\,{\cal S}'(\overline{\boldsymbol{v}})\boldsymbol{h}=[\boldsymbol\xi^{\boldsymbol{h}},\eta^{\boldsymbol{h}}],$ for all $\,\boldsymbol{h}\in\mathcal{V}$, where $[\boldsymbol\xi^{\boldsymbol{h}},\eta^{\boldsymbol{h}}]$ is the unique weak solution to the linearized system \eqref{linsy1}--\eqref{linics} at $[\overline{\boldsymbol{u}},\overline{\varphi}]={\cal S}(\overline{\boldsymbol{v}})$ that corresponds to $\boldsymbol{h}\in\mathcal{V}$. \end{thm}
\begin{proof} Let $\overline{\boldsymbol{v}}\in {\cal V}$ be fixed and $[\overline{\boldsymbol{u}},\overline{\varphi}]={\cal S}(\overline{\boldsymbol{v}})$. Recalling Remark 2, we first note that the linear mapping $\boldsymbol{h}\mapsto[\boldsymbol\xi^{\boldsymbol{h}},\eta^{\boldsymbol{h}}]$ belongs to $\mathcal{L}(\mathcal{V},\mathcal{Z})$.
Now let $\Lambda>0$ be fixed. In the following, we consider perturbations $\boldsymbol{h} \in {\cal V}$ such that
$\,\|\boldsymbol{h}\|_{\cal V}\,\le\,\Lambda$. For any such perturbation $\boldsymbol{h}$, we put \begin{align} &[\boldsymbol{u}^{\boldsymbol{h}},\varphi^{\boldsymbol{h}}]:={\cal S}(\overline{\boldsymbol{v}}+\boldsymbol{h}),\qquad \boldsymbol{p}^{\boldsymbol{h}}:=\boldsymbol{u}^{\boldsymbol{h}}-\overline{\boldsymbol{u}}-\boldsymbol\xi^{\boldsymbol{h}},\qquad q^{\boldsymbol{h}}:=\varphi^{\boldsymbol{h}}-\overline{\varphi}-\eta^{\boldsymbol{h}}.\nonumber \end{align} Notice that we have the regularity \begin{align} & \boldsymbol{p}^{\boldsymbol{h}}\in H^1(0,T;(V_{div})')\cap C^0([0,T];G_{div})\cap L^2(0,T;V_{div}), \nonumber \\ & q^{\boldsymbol{h}} \in H^1(0,T;V')\cap C^0([0,T];H)\cap L^2(0,T;V)\,. \end{align} By virtue of (\ref{bound1}) in Theorem 1 and of (\ref{stabi2}) in Lemma 2, there is a constant $C_1^*>0$, which may depend on the data of the problem and on $\Lambda$, such that we have:
for every $\boldsymbol{h}\in {\cal V}$ with $\|\boldsymbol{h}\|_{\cal V}\le\Lambda$ it holds \begin{align} &
\left\|[\boldsymbol{u}^{\boldsymbol{h}},\varphi^{\boldsymbol{h}}]\right\|_{\cal H}\,\le\,C_1^*\,,\quad\,
\|\varphi^{\boldsymbol{h}}\|_{C^0(\overline{Q})}\,\le\,C_1^*\,, \label{bound2}\\[2mm] & \Vert\boldsymbol{u}^{\boldsymbol{h}}-\overline{\boldsymbol{u}}\Vert_{{C^0([0,t];G_{div})}
\cap L^2(0,t;V_{div})}^2\,+\,\Vert\varphi^{\boldsymbol{h}}-\overline{\varphi}\Vert_{H^1(0,t;H)\cap C^0([0,t];V) \cap L^2(0,t;H^2(\Omega))}^2
\le \,C_1^*\,\|\boldsymbol{h}\|_{{\cal V}}^2\nonumber \\ & \mbox{for every }\,t\in (0,T]\,. \label{bound3} \end{align}
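With these notations, the Fr\'echet differentiability asserted in Theorem \ref{diffcontstat} amounts to establishing the remainder estimate

```latex
\frac{\big\Vert\mathcal{S}(\overline{\boldsymbol{v}}+\boldsymbol{h})
-\mathcal{S}(\overline{\boldsymbol{v}})
-[\boldsymbol\xi^{\boldsymbol{h}},\eta^{\boldsymbol{h}}]\big\Vert_{\mathcal Z}}
{\Vert\boldsymbol{h}\Vert_{\mathcal V}}
\,=\,\frac{\big\Vert[\boldsymbol{p}^{\boldsymbol{h}},q^{\boldsymbol{h}}]\big\Vert_{\mathcal Z}}
{\Vert\boldsymbol{h}\Vert_{\mathcal V}}
\;\longrightarrow\;0
\qquad\mbox{as }\ \Vert\boldsymbol{h}\Vert_{\mathcal V}\to 0,
```

which is what the subsequent estimates are designed to achieve.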
Now, after some easy computations, we can see that $\boldsymbol{p}^{\boldsymbol{h}}, q^{\boldsymbol{h}}$ (which, for simplicity, shall henceforth be denoted by $\boldsymbol{p},q$) is a solution to the weak analogue of the following problem: \begin{align} &\boldsymbol{p}_t-2\,\mbox{div}\big(\nu(\overline{\varphi})D\boldsymbol{p}\big) -2\,\mbox{div}\big((\nu(\varphi^{\boldsymbol{h}})-\nu(\overline{\varphi}))D(\boldsymbol{u}^{\boldsymbol{h}}-\overline{\boldsymbol{u}})\big) -2\,\mbox{div}\big((\nu(\varphi^{\boldsymbol{h}})-\nu(\overline{\varphi})-\nu'(\overline{\varphi})\eta^{\boldsymbol{h}})D\overline{\boldsymbol{u}}\big)\nonumber\\ & +(\boldsymbol{p}\cdot\nabla)\overline{\boldsymbol{u}}+(\overline{\boldsymbol{u}}\cdot\nabla)\boldsymbol{p} +\big((\boldsymbol{u}^{\boldsymbol{h}}-\overline{\boldsymbol{u}})\cdot\nabla\big)(\boldsymbol{u}^{\boldsymbol{h}}-\overline{\boldsymbol{u}})+\nabla\pi^{\boldsymbol{h}}\nonumber\\ &=a\,(\varphi^{\boldsymbol{h}}-\overline{\varphi})\,\nabla(\varphi^{\boldsymbol{h}}-\overline{\varphi}) -\big({{K}}\ast(\varphi^{\boldsymbol{h}}-\overline{\varphi})\big)\,\nabla(\varphi^{\boldsymbol{h}}-\overline{\varphi}) +(a\,q-{{K}}\ast q)\,\nabla\overline{\varphi}\nonumber\\ &\quad +(a\,\overline{\varphi}-{K}\ast\overline{\varphi})\,\nabla q +\big(F'(\varphi^{\boldsymbol{h}})-F'(\overline{\varphi})\big)\nabla(\varphi^{\boldsymbol{h}}-\overline{\varphi}) +F'(\overline{\varphi})\,\nabla q\nonumber \\ &\quad +\big(F'(\varphi^{\boldsymbol{h}})-F'(\overline{\varphi})-F''(\overline{\varphi}) \,\eta^{\boldsymbol{h}}\big)\,\nabla\overline{\varphi}\, \quad\mbox{in $\,Q$},\label{peq}\\[2mm] &q_t\,+\,(\boldsymbol{u}^{\boldsymbol{h}}-\overline{\boldsymbol{u}})\cdot\nabla(\varphi^{\boldsymbol{h}}-\overline{\varphi}) +\boldsymbol{p}\cdot\nabla\overline{\varphi}+\overline{\boldsymbol{u}}\cdot\nabla q\nonumber \,=\,\Delta\big(a\,q-{{K}}\ast q+F'(\varphi^{\boldsymbol{h}})-F'(\overline{\varphi})-F''(\overline{\varphi})\eta^{\boldsymbol{h}}\big)\nonumber\\ &\quad\mbox{in 
\,$Q$},\label{qeq} \\[1mm] &\mbox{div}(\boldsymbol{p})=0\,\quad\mbox{in \,$Q$},\label{divp}\\[2mm] &\boldsymbol{p}=[0,0]^{T},\qquad \frac{\partial}{\partial\boldsymbol{n}}\big(aq-{{K}}\ast q+F'(\varphi^{\boldsymbol{h}})-F'(\overline{\varphi})-F''(\overline{\varphi})\eta^{\boldsymbol{h}}\big)=0,\qquad\mbox{on }\Sigma, \label{bcondpq}\\[1mm] &\boldsymbol{p}(0)=[0,0]^{T},\quad q(0)=0, \,\quad\mbox{in $\Omega$.}\label{icspq} \end{align}
\noindent That is, $\boldsymbol{p}$ and $q$ solve the following variational problem (where we omit the argument $t$ of the involved functions): \begin{align} &\langle\boldsymbol{p}_t,\boldsymbol{w}\rangle_{{{V_{div}}}} \,+\,2\, \big(\nu(\overline{\varphi})D\boldsymbol{p},D\boldsymbol{w}\big) \,+\,2\,\big((\nu(\varphi^{\boldsymbol{h}})-\nu(\overline{\varphi}))D(\boldsymbol{u}^{\boldsymbol{h}}-\overline{\boldsymbol{u}}),D\boldsymbol{w}\big) \nonumber\\[1mm] &+\,2\,\big((\nu(\varphi^{\boldsymbol{h}})-\nu(\overline{\varphi})-\nu'(\overline{\varphi})\eta^{\boldsymbol{h}})D\overline{\boldsymbol{u}},D\boldsymbol{w}\big) +b(\boldsymbol{p},\overline{\boldsymbol{u}},\boldsymbol{w})+b(\overline{\boldsymbol{u}},\boldsymbol{p},\boldsymbol{w})\nonumber\\[1mm] & +b(\boldsymbol{u}^{\boldsymbol{h}}-\overline{\boldsymbol{u}},\boldsymbol{u}^{\boldsymbol{h}}-\overline{\boldsymbol{u}},\boldsymbol{w})\nonumber\\[1mm] & =\,\big(a\,(\varphi^{\boldsymbol{h}}-\overline{\varphi})\nabla(\varphi^{\boldsymbol{h}}-\overline{\varphi}),\boldsymbol{w}\big) -\big(\big({K}\ast(\varphi^{\boldsymbol{h}}-\overline{\varphi})\big)\nabla(\varphi^{\boldsymbol{h}}-\overline{\varphi}),\boldsymbol{w}\big)\nonumber\\[1mm] &\quad\,+\big((a\,q-{{K}}\ast q)\nabla\overline{\varphi},\boldsymbol{w}\big)+\big((a\overline{\varphi}-K\ast\overline{\varphi})\nabla q,\boldsymbol{w}\big) +\big(\big(F'(\varphi^{\boldsymbol{h}})-F'(\overline{\varphi})\big)\nabla(\varphi^{\boldsymbol{h}}-\overline{\varphi}),\boldsymbol{w}\big)\nonumber\\[1mm] &\quad+\big(F'(\overline{\varphi})\nabla q,\boldsymbol{w}\big) +\big(\big(F'(\varphi^{\boldsymbol{h}})-F'(\overline{\varphi})-F''(\overline{\varphi})\eta^{\boldsymbol{h}}\big)\nabla\overline{\varphi},\boldsymbol{w}\big)\,,\label{wfpeq}\\ &\langle q_t,\psi\rangle_{{{V}}} +\big((\boldsymbol{u}^{\boldsymbol{h}}-\overline{\boldsymbol{u}})\cdot\nabla(\varphi^{\boldsymbol{h}}-\overline{\varphi}),\psi\big)
+\big(\boldsymbol{p}\cdot\nabla\overline{\varphi},\psi\big)+\big(\overline{\boldsymbol{u}}\cdot\nabla q,\psi\big) \nonumber\\[1mm] &=\,-\big(\nabla\big(a\,q-{{K}}\ast q+F'(\varphi^{\boldsymbol{h}})-F'(\overline{\varphi})-F''(\overline{\varphi})\eta^{\boldsymbol{h}}\big),\nabla\psi\big),\label{wfqeq} \end{align} for every $\boldsymbol{w}\in V_{div}$, every $\psi\in V$, and almost every $t\in(0,T)$.
We choose $\,\boldsymbol{w}=\boldsymbol{p}(t)\in V_{div}\,$ and $\,\psi=q(t)\in V\,$ as test functions in (\ref{wfpeq}) and (\ref{wfqeq}), respectively, to obtain the equations (where we will again suppress the argument $t$ of the involved functions) \begin{align} &\frac{1}{2}\,\frac{d}{dt}\,\Vert\boldsymbol{p}\Vert^2 \,+2\int_\Omega\nu(\overline{\varphi})D\boldsymbol{p}:D\boldsymbol{p} \,dx \,+2\int_\Omega(\nu(\varphi^{\boldsymbol{h}})-\nu(\overline{\varphi}))\,D(\boldsymbol{u}^{\boldsymbol{h}}-\overline{\boldsymbol{u}}):D\boldsymbol{p}\,dx\nonumber\\[1mm] &+\,2\int_\Omega\nu'(\overline{\varphi})\,q\, D\overline{\boldsymbol{u}}:D\boldsymbol{p}\,dx \,+\int_\Omega\nu''(\sigma^{\boldsymbol{h}}_1)\,(\varphi^{\boldsymbol{h}}-\overline{\varphi})^2\, D\overline{\boldsymbol{u}}:D\boldsymbol{p}\,dx \,+\int_\Omega(\boldsymbol{p}\cdot\nabla)\overline{\boldsymbol{u}}\cdot\boldsymbol{p}\,dx \nonumber\\[1mm] &+\int_\Omega\big((\boldsymbol{u}^{\boldsymbol{h}}-\overline{\boldsymbol{u}})\cdot\nabla\big)(\boldsymbol{u}^{\boldsymbol{h}}-\overline{\boldsymbol{u}})\cdot\boldsymbol{p}\,dx \nonumber\\[1mm] & =\int_\Omega a\,(\varphi^{\boldsymbol{h}}-\overline{\varphi})\nabla(\varphi^{\boldsymbol{h}}-\overline{\varphi})\cdot\boldsymbol{p}\,dx -\int_\Omega\big(K\ast(\varphi^{\boldsymbol{h}}-\overline{\varphi})\big)\nabla(\varphi^{\boldsymbol{h}}-\overline{\varphi})\cdot\boldsymbol{p}\,dx \nonumber\\[1mm] &\quad\,\,+\int_{\Omega}(a\, q-{{K}}\ast q)\nabla\overline{\varphi}\cdot\boldsymbol{p} \,dx \,+\int_\Omega(a\,\overline{\varphi}-{{K}}\ast\overline{\varphi})\nabla q\cdot\boldsymbol{p}\,dx \nonumber\\[1mm] &\quad\,\,+\int_\Omega\big(F'(\varphi^{\boldsymbol{h}})-F'(\overline{\varphi})\big)\nabla(\varphi^{\boldsymbol{h}}-\overline{\varphi})\cdot\boldsymbol{p}\,dx\, +\int_\Omega F'(\overline{\varphi})\nabla q\cdot\boldsymbol{p}\,dx \nonumber\\[1mm] &\quad\,\,+\int_\Omega F''(\overline{\varphi}) q\nabla\overline{\varphi}\cdot\boldsymbol{p}\,dx +\frac{1}{2}\int_\Omega
F'''(\sigma_2^{\boldsymbol{h}})(\varphi^{\boldsymbol{h}}-\overline{\varphi})^2\,\nabla\overline{\varphi}\cdot\boldsymbol{p}\,dx\,,\label{pid} \\[2mm] &\frac{1}{2}\,\frac{d}{dt}\,\Vert q\Vert^2 \,+\int_\Omega\big((\boldsymbol{u}^{\boldsymbol{h}}-\overline{\boldsymbol{u}})\cdot\nabla(\varphi^{\boldsymbol{h}}-\overline{\varphi})\big)\,q\,dx\, +\int_\Omega(\boldsymbol{p}\cdot\nabla\overline{\varphi})\,q\,dx \nonumber\\[1mm] &=-\int_\Omega\nabla q\cdot\nabla\big(a\,q-K\ast q+F'(\varphi^{\boldsymbol{h}})-F'(\overline{\varphi})-F''(\overline{\varphi})\eta^{\boldsymbol{h}}\big)\,dx\,.\label{qid} \end{align} In \eqref{pid}, we have used Taylor's formula \begin{align} &\nu(\varphi^{\boldsymbol{h}})=\nu(\overline{\varphi})+\nu'(\overline{\varphi})(\varphi^{\boldsymbol{h}}-\overline{\varphi}) +\frac{1}{2}\nu''(\sigma_1^{\boldsymbol{h}})(\varphi^{\boldsymbol{h}}-\overline{\varphi})^2,\nonumber\\ &F'(\varphi^{\boldsymbol{h}})=F'(\overline{\varphi})+F''(\overline{\varphi}) (\varphi^{\boldsymbol{h}}-\overline{\varphi}) +\frac{1}{2} F'''(\sigma_2^{\boldsymbol{h}})(\varphi^{\boldsymbol{h}}-\overline{\varphi})^2,\nonumber \end{align} where $$\sigma_i^{\boldsymbol{h}}=\theta_i^{\boldsymbol{h}}\varphi^{\boldsymbol{h}}+(1-\theta_i^{\boldsymbol{h}})\overline{\varphi}, \quad \theta_i^{\boldsymbol{h}}=\theta_i^{\boldsymbol{h}}(x,t)\in (0,1), \quad\mbox{for \,$i=1,2$.} $$ Moreover, in the integration by parts on the right-hand side of \eqref{qid} we employed
the second boundary condition in \eqref{bcondpq}, which is a consequence of $\,\partial\mu^{\boldsymbol{h}}/\partial\boldsymbol{n}= \partial\overline{\mu}/\partial\boldsymbol{n}=0\,$ on $\,\Sigma\,$ and of \eqref{linbcs} (where $\mu^{\boldsymbol{h}}:=a\,\varphi^{\boldsymbol{h}}-{K}\ast\varphi^{\boldsymbol{h}}+F'(\varphi^{\boldsymbol{h}})$).
We now begin to estimate all the terms in \eqref{pid}. In this process, we will make repeated use of the global estimates
(\ref{bound2}), (\ref{bound3}), and of the Gagliardo-Nirenberg inequality (\ref{GN}). Again, we denote by $C$ positive constants that may depend on the data of the system, but not on the choice of $\boldsymbol{h}\in {\cal V}$ with $\|\boldsymbol{h}\|_{\cal V}\le\Lambda$, while $C_\sigma$ denotes a positive constant that also depends on the quantity indicated by the index $\,\sigma$. We have, with constants $\epsilon>0$ and $\epsilon'>0$ that will be fixed later, the following series of estimates: \begin{align}
&\Big|2\int_\Omega(\nu(\varphi^{\boldsymbol{h}})-\nu(\overline{\varphi}))D(\boldsymbol{u}^{\boldsymbol{h}}-\overline{\boldsymbol{u}})\!:\!D\boldsymbol{p}\,dx\Big|
\,=\,\Big|2\int_\Omega\nu'(\sigma_3^{\boldsymbol{h}})(\varphi^{\boldsymbol{h}}-\overline{\varphi})D(\boldsymbol{u}^{\boldsymbol{h}}-\overline{\boldsymbol{u}})\!:\!D\boldsymbol{p}\,dx\Big|\nonumber\\[1mm] & \quad\leq\,C\,\Vert\varphi^{\boldsymbol{h}}-\overline{\varphi}\Vert_{L^4(\Omega)}\,\Vert D(\boldsymbol{u}^{\boldsymbol{h}}-\overline{\boldsymbol{u}})\Vert_{L^4(\Omega)^{2\times 2}}\, \Vert D\boldsymbol{p}\Vert\nonumber\\[1mm] &\quad\leq\,\epsilon\,\Vert\nabla\boldsymbol{p}\Vert^2\, +\,C_\epsilon\,\Vert\varphi^{\boldsymbol{h}}-\overline{\varphi}\Vert_V^2\, \Vert\nabla(\boldsymbol{u}^{\boldsymbol{h}}-\overline{\boldsymbol{u}})\Vert\,\big(\Vert\boldsymbol{u}^{\boldsymbol{h}}\Vert_{H^2(\Omega)^2}+\Vert\overline{\boldsymbol{u}}\Vert_{H^2(\Omega)^2}\big) \nonumber\\[1mm] & \quad\leq\,\epsilon\,\Vert\nabla\boldsymbol{p}\Vert^2\, +\,C_\epsilon\, \Vert\nabla(\boldsymbol{u}^{\boldsymbol{h}}-\overline{\boldsymbol{u}})\Vert\,\big(\Vert\boldsymbol{u}^{\boldsymbol{h}}\Vert_{H^2(\Omega)^2}+\Vert\overline{\boldsymbol{u}}\Vert_{H^2(\Omega)^2}\big)
\,\|\boldsymbol{h}\|_{\cal V}^2\,, \label{est51} \end{align} as well as \begin{align} &
\Big|2\int_\Omega\nu'(\overline{\varphi})\,q\, D\overline{\boldsymbol{u}}\!:\!D\boldsymbol{p}\,dx\Big| \,\leq\, C\,\Vert q\Vert_{L^4(\Omega)}\, \Vert D\overline{\boldsymbol{u}}\Vert_{L^4(\Omega)^{2\times 2}}\,\Vert\nabla\boldsymbol{p}\Vert \nonumber\\[1mm] & \quad\le\, \epsilon\,\Vert\nabla\boldsymbol{p}\Vert^2 +C_{\epsilon}\,\Vert q\Vert\,\Vert q\Vert_V\,\Vert\nabla\overline{\boldsymbol{u}}\Vert\,\Vert\overline{\boldsymbol{u}}\Vert_{H^2(\Omega)^2} \nonumber \\[1mm] & \quad\leq\,\epsilon\,\Vert\nabla\boldsymbol{p}\Vert^2\,+\,\epsilon'\,\Vert\nabla q\Vert^2 \,+\,C_{\epsilon,\epsilon'}\,\big(1+\Vert\overline{\boldsymbol{u}}\Vert_{H^2(\Omega)^2}^2\big)\,\Vert q\Vert^2\,. \label{est52} \end{align} Moreover, by similar reasoning, \begin{align} &
\Big|\int_\Omega\nu''(\sigma^{\boldsymbol{h}}_1)\,(\varphi^{\boldsymbol{h}}-\overline{\varphi})^2\, D\overline{\boldsymbol{u}}\!:\!D\boldsymbol{p}\,dx\Big| \,\leq\,C\,\Vert\varphi^{\boldsymbol{h}}-\overline{\varphi}\Vert_{L^8(\Omega)}^2\, \Vert D\overline{\boldsymbol{u}}\Vert_{L^4(\Omega)^{2\times 2}}\,\Vert\nabla\boldsymbol{p}\Vert \nonumber \\[1mm] & \quad\leq\,\epsilon\,\Vert\nabla\boldsymbol{p}\Vert^2 \,+\, C_\epsilon\,\Vert\varphi^{\boldsymbol{h}}-\overline{\varphi}\Vert_V^4\,\Vert\overline{\boldsymbol{u}}\Vert_{H^2(\Omega)^2}^2 \,\leq\,\epsilon\,\Vert\nabla\boldsymbol{p}\Vert^2 \,+\, C_\epsilon\,\Vert\overline{\boldsymbol{u}}\Vert_{H^2(\Omega)^2}^2\,
\|\boldsymbol{h}\|_{\cal V}^4\,, \label{est53} \\[4mm]
&\Big|\int_\Omega(\boldsymbol{p}\cdot\nabla)\overline{\boldsymbol{u}}\cdot\boldsymbol{p}\,dx\Big| \,\leq\,\Vert\boldsymbol{p}\Vert_{L^4(\Omega)^2}\,\Vert\nabla\overline{\boldsymbol{u}}\Vert_{L^4(\Omega)^{2\times 2}}\,\Vert\boldsymbol{p}\Vert\,\,\leq\,\epsilon\,\Vert\nabla\boldsymbol{p}\Vert^2\,+\,C_\epsilon\, \Vert\overline{\boldsymbol{u}}\Vert_{H^2(\Omega)^2}^2\,\Vert\boldsymbol{p}\Vert^2\,, \label{est29} \\[4mm]
&\Big|\int_\Omega\big((\boldsymbol{u}^{\boldsymbol{h}}-\overline{\boldsymbol{u}})\cdot\nabla\big)(\boldsymbol{u}^{\boldsymbol{h}}-\overline{\boldsymbol{u}})\cdot\boldsymbol{p}\,dx\Big|
\,=\,\Big|\int_\Omega\big((\boldsymbol{u}^{\boldsymbol{h}}-\overline{\boldsymbol{u}})\cdot\nabla\big)\boldsymbol{p}\cdot
(\boldsymbol{u}^{\boldsymbol{h}}-\overline{\boldsymbol{u}})\,dx\Big|\nonumber \\[1mm] &\quad\leq\,\epsilon\,\Vert\nabla\boldsymbol{p}\Vert^2\,+\,C_\epsilon\,\Vert\boldsymbol{u}^{\boldsymbol{h}}-\overline{\boldsymbol{u}}\Vert_{L^4(\Omega)^2}^4 \,\le\,\epsilon\,\Vert\nabla\boldsymbol{p}\Vert^2\,+\,C_\epsilon\,\Vert\boldsymbol{u}^{\boldsymbol{h}}-\overline{\boldsymbol{u}}\Vert^2 \,\Vert\nabla(\boldsymbol{u}^{\boldsymbol{h}}-\overline{\boldsymbol{u}})\Vert^2 \nonumber\\[1mm] & \quad\leq\,\epsilon\,\Vert\nabla\boldsymbol{p}\Vert^2\,+\,C_\epsilon\,
\Vert\nabla(\boldsymbol{u}^{\boldsymbol{h}}-\overline{\boldsymbol{u}})\Vert^2\,\|\boldsymbol{h}\|_{\cal V}^2\,, \label{est30}\\[4mm] &\int_\Omega a\,(\varphi^{\boldsymbol{h}}-\overline{\varphi})\nabla(\varphi^{\boldsymbol{h}}-\overline{\varphi}) \cdot\boldsymbol{p}\,dx \,=\,-\int_\Omega \frac{(\varphi^{\boldsymbol{h}}-\overline{\varphi})^2}{2}\,\nabla a\cdot\boldsymbol{p}\,dx\nonumber \\[1mm] &\quad\leq C\,\Vert\boldsymbol{p}\Vert\,\Vert\varphi^{\boldsymbol{h}}-\overline{\varphi}\Vert_{L^4(\Omega)}^2 \,\leq\, \Vert\boldsymbol{p}\Vert^2\,+\,C\,\Vert\varphi^{\boldsymbol{h}}-\overline{\varphi}\Vert_V^4
\,\leq\, \Vert\boldsymbol{p}\Vert^2\,+\,C\,\|\boldsymbol{h}\|_{\cal V}^4\,, \label{est31}\\[4mm] & -\int_\Omega\big({K}\ast(\varphi^{\boldsymbol{h}}-\overline{\varphi})\big)\nabla(\varphi^{\boldsymbol{h}}-\overline{\varphi})\cdot\boldsymbol{p}\,dx \,=\,\int_\Omega\big(\nabla {{K}}\ast(\varphi^{\boldsymbol{h}}-\overline{\varphi})\big)(\varphi^{\boldsymbol{h}}-\overline{\varphi})\cdot\boldsymbol{p}\,dx\nonumber \\[1mm] &\quad\leq\, C\,\Vert\varphi^{\boldsymbol{h}}-\overline{\varphi}\Vert_{L^4(\Omega)}\, \Vert\varphi^{\boldsymbol{h}}-\overline{\varphi}\Vert\,\Vert\boldsymbol{p}\Vert_{L^4(\Omega)^2} \,\leq\,\epsilon\,\Vert\nabla\boldsymbol{p}\Vert^2\,+\,C_{\epsilon}\, \Vert\varphi^{\boldsymbol{h}}-\overline{\varphi}\Vert^2\, \Vert\varphi^{\boldsymbol{h}}-\overline{\varphi}\Vert_V^2\nonumber \\[1mm] & \quad\leq\,\epsilon\,\Vert\nabla\boldsymbol{p}\Vert^2\,+\,C_{\epsilon}\,
\|\boldsymbol{h}\|_{\cal V}^4\,, \label{est32} \\[4mm] & \int_{\Omega}(a\, q-{{K}}\ast q)\,\nabla\overline{\varphi}\cdot\boldsymbol{p}\,dx \,\leq\, C\,\Vert q\Vert\,\Vert\nabla\overline{\varphi}\Vert_{L^4(\Omega)^2}\,\Vert\boldsymbol{p}\Vert_{L^4(\Omega)^2} \,\leq\,\epsilon\,\Vert\nabla\boldsymbol{p}\Vert^2\,+\,C_{\epsilon}\,\Vert q\Vert^2\,, \label{est33} \\[4mm] & \int_\Omega(a\,\overline{\varphi}-{{K}}\ast\overline{\varphi})\,\nabla q\cdot\boldsymbol{p}\,dx \,\leq\, C\, \Vert\overline{\varphi}\Vert_{H^2(\Omega)}\,\Vert\nabla q\Vert\,\Vert\boldsymbol{p}\Vert \,\leq\,\epsilon'\,\Vert\nabla q\Vert^2\,+\,C_{\epsilon'}\,\Vert\boldsymbol{p}\Vert^2\,, \label{est34} \\[4mm] & \int_\Omega\big(F'(\varphi^{\boldsymbol{h}})-F'(\overline{\varphi})\big)\nabla(\varphi^{\boldsymbol{h}}-\overline{\varphi})\cdot\boldsymbol{p}\,dx \,=\,\int_\Omega F''(\sigma_4^{\boldsymbol{h}})\,(\varphi^{\boldsymbol{h}}-\overline{\varphi})\nabla(\varphi^{\boldsymbol{h}}-\overline{\varphi})\cdot\boldsymbol{p}\,dx \nonumber \\[1mm] &\quad\leq\,C\,\Vert\varphi^{\boldsymbol{h}}-\overline{\varphi}\Vert_{L^4(\Omega)}\,\Vert\nabla(\varphi^{\boldsymbol{h}}-\overline{\varphi})\Vert\, \Vert\boldsymbol{p}\Vert_{L^4(\Omega)^2} \,\leq\,\epsilon\,\Vert\nabla\boldsymbol{p}\Vert^2\,+\,C_{\epsilon}\, \Vert\varphi^{\boldsymbol{h}}-\overline{\varphi}\Vert_V^4\nonumber\\[1mm] &
\quad\le\,\epsilon\,\|\nabla\boldsymbol{p}\|^2\,+\,C_{\epsilon}\,\|\boldsymbol{h}\|_{\cal V}^4\,, \label{est35} \\[4mm] & \int_\Omega F'(\overline{\varphi})\nabla q\cdot\boldsymbol{p}\,dx \,\leq\,C\,\Vert\nabla q\Vert\,\Vert\boldsymbol{p}\Vert\,\leq\,\epsilon'\,\Vert\nabla q\Vert^2\,+\,C_{\epsilon'}\,\Vert\boldsymbol{p}\Vert^2\,,\label{est36} \\[4mm] &\int_\Omega F''(\overline{\varphi})\, q\nabla\overline{\varphi}\cdot\boldsymbol{p}\,dx \,\leq\, C\,\Vert q\Vert\,\Vert\nabla\overline{\varphi}\Vert_{L^4(\Omega)^2}\, \Vert\boldsymbol{p}\Vert_{L^4(\Omega)^2}\,\leq\,\epsilon\,\Vert\nabla\boldsymbol{p}\Vert^2 \,+\,C_\epsilon\,\Vert q\Vert^2\,,\label{est37} \\[4mm] &\frac{1}{2}\int_\Omega F'''(\sigma_4^{\boldsymbol{h}})\,(\varphi^{\boldsymbol{h}}-\overline{\varphi})^2\,\nabla\overline{\varphi}\cdot\boldsymbol{p}\,dx \,\leq\, C\,\Vert\varphi^{\boldsymbol{h}}-\overline{\varphi}\Vert_{L^4(\Omega)}^2\, \Vert\nabla\overline{\varphi}\Vert_{L^4(\Omega)^2}\, \Vert\boldsymbol{p}\Vert_{L^4(\Omega)^2}\nonumber \\[1mm] &\quad\leq\,\epsilon\,\Vert\nabla\boldsymbol{p}\Vert^2\,+\,C_\epsilon\, \Vert\varphi^{\boldsymbol{h}}-\overline{\varphi}\Vert_V^4
\,\le\,\epsilon\,\|\nabla\boldsymbol{p}\|^2\,+\,C_\epsilon\,\|\boldsymbol{h}\|_{\cal V}^4\,. \label{est38} \end{align}
\noindent Observe that in the derivation of \eqref{est30}, \eqref{est31}, and \eqref{est32}, we have used \eqref{divp} and the first boundary condition in \eqref{bcondpq}, while in \eqref{est51}, \eqref{est35}, and \eqref{est38}, we have set $\,\sigma_j^{\boldsymbol{h}}:=\theta_j^{\boldsymbol{h}}\varphi^{\boldsymbol{h}}+(1-\theta_j^{\boldsymbol{h}})\overline{\varphi}$, where $\theta_j^{\boldsymbol{h}}=\theta_j^{\boldsymbol{h}}(x,t)\in(0,1)$, for $j=3,4$.
Let us now estimate all the terms in \eqref{qid}. At first, we have \begin{align} &
\Big|\int_\Omega\big((\boldsymbol{u}^{\boldsymbol{h}}-\overline{\boldsymbol{u}})\cdot\nabla(\varphi^{\boldsymbol{h}}-\overline{\varphi})\big)
\,q\,dx\Big| \,\leq\Vert\boldsymbol{u}^{\boldsymbol{h}}-\overline{\boldsymbol{u}}\Vert_{L^4(\Omega)^2}\,\Vert\nabla(\varphi^{\boldsymbol{h}}-\overline{\varphi})\Vert\,\Vert q\Vert_{L^4(\Omega)}\nonumber\\ &\quad\leq\, C\Vert\nabla(\boldsymbol{u}^{\boldsymbol{h}}-\overline{\boldsymbol{u}})\Vert\, \Vert\boldsymbol{h}\Vert_{\cal V} \,\big(\Vert\nabla q\Vert\,+\,\Vert q\Vert\big) \,\leq\,\epsilon'\,\Vert\nabla q\Vert^2\,+\,\Vert q\Vert^2 \,+\,C_{\epsilon'}\Vert\nabla(\boldsymbol{u}^{\boldsymbol{h}}-\overline{\boldsymbol{u}})\Vert^2 \Vert\boldsymbol{h}\Vert^2_{\cal V}\,, \label{est44} \\[4mm]
&\Big|\int_\Omega(\boldsymbol{p}\cdot\nabla\overline{\varphi})\,q\,dx\Big| \,\leq\,\Vert\boldsymbol{p}\Vert_{L^4(\Omega)^2}\, \Vert\nabla\overline{\varphi}\Vert _{L^4(\Omega)^2}
\,\Vert q\Vert\,\le\,C\,\|\boldsymbol{p}\|_{L^4(\Omega)^2}\,\|\overline{\varphi}\|_{H^2(\Omega)}\,
\|q\| \nonumber \\[1mm] &\quad\leq\epsilon\,\Vert\nabla\boldsymbol{p}\Vert^2\,+\,C_\epsilon\,\Vert q\Vert^2\,.\label{est45} \end{align}
As far as the term on the right-hand side of \eqref{qid} is concerned, we first observe that we can write \begin{align} &F'(\varphi^{\boldsymbol{h}})-F'(\overline{\varphi})-F''(\overline{\varphi})\,\eta^{\boldsymbol{h}}= (\varphi^{\boldsymbol{h}}-\overline{\varphi})\int_0^1\big[ F''(\tau\varphi^{\boldsymbol{h}}+(1-\tau)\overline{\varphi})-F''(\overline{\varphi})\big]\,d\tau +F''(\overline{\varphi})\,q.\nonumber \end{align} Therefore, we have \begin{align} &\nabla\big(F'(\varphi^{\boldsymbol{h}})-F'(\overline{\varphi})-F''(\overline{\varphi})\eta^{\boldsymbol{h}}\big) =\nabla(\varphi^{\boldsymbol{h}}-\overline{\varphi}) \int_0^1\big[ F''(\tau\varphi^{\boldsymbol{h}}+(1-\tau)\overline{\varphi})-F''(\overline{\varphi})\big]d\tau\nonumber\\ &+(\varphi^{\boldsymbol{h}}-\overline{\varphi}) \int_0^1 \big[F'''(\tau\varphi^{\boldsymbol{h}}+(1-\tau)\overline{\varphi})(\tau\nabla\varphi^{\boldsymbol{h}}+(1-\tau)\nabla\overline{\varphi}) -F'''(\overline{\varphi})\nabla\overline{\varphi}\big]d\tau\nonumber\\ &+\,F''(\overline{\varphi})\nabla q\,+\,q\,F'''(\overline{\varphi})\nabla\overline{\varphi}\nonumber\\ &=\nabla(\varphi^{\boldsymbol{h}}-\overline{\varphi})\int_0^1 \!\!\int_0^1 F'''\big(s(\tau\varphi^{\boldsymbol{h}}+(1-\tau)\overline{\varphi})+(1-s)\overline{\varphi}\big) (\tau\varphi^{\boldsymbol{h}}+(1-\tau)\overline{\varphi}-\overline{\varphi})ds\,d\tau\nonumber\\ &+(\varphi^{\boldsymbol{h}}-\overline{\varphi}) \int_0^1 \Big[F'''(\tau\varphi^{\boldsymbol{h}}+(1-\tau)\overline{\varphi})\tau\nabla(\varphi^{\boldsymbol{h}}-\overline{\varphi})\nonumber\\ &+\nabla\overline{\varphi}\int_0^1 F^{(4)}\big(s(\tau\varphi^{\boldsymbol{h}}+(1-\tau)\overline{\varphi})+(1-s)\overline{\varphi}\big) (\tau\varphi^{\boldsymbol{h}}+(1-\tau)\overline{\varphi}-\overline{\varphi})\,ds\Big]d\tau\nonumber\\ &+\,F''(\overline{\varphi})\nabla q\,+\,q\,F'''(\overline{\varphi})\nabla\overline{\varphi}\nonumber \\[2mm] 
&=\overline{A}_{\boldsymbol{h}}\,(\varphi^{\boldsymbol{h}}-\overline{\varphi})\nabla(\varphi^{\boldsymbol{h}}-\overline{\varphi})+ \overline{B}_{\boldsymbol{h}}\,(\varphi^{\boldsymbol{h}}-\overline{\varphi})^2\nabla\overline{\varphi}+F''(\overline{\varphi})\nabla q+qF'''(\overline{\varphi})\nabla\overline{\varphi},\label{est39} \end{align} where we have set \begin{align} &\overline{A}_{\boldsymbol{h}}:=\int_0^1\tau \int_0^1 F'''(s\tau\varphi^{\boldsymbol{h}}+(1-s\tau)\overline{\varphi}) \,ds\,d\tau\,+ \int_0^1\tau \,F'''(\tau\varphi^{\boldsymbol{h}}+(1-\tau)\overline{\varphi})\, d\tau\,,\nonumber\\ &\overline{B}_{\boldsymbol{h}}:=\int_0^1\tau \int_0^1 F^{(4)}(s\tau\varphi^{\boldsymbol{h}}+(1-s\tau)\overline{\varphi}) \,ds\,d\tau\,.\nonumber \end{align} Observe that in view of the global bounds (\ref{bound2}) we have \begin{align} &
\|\overline{A}_{\boldsymbol{h}}\|_{L^\infty(Q)}\,+\,\|\overline{B}_{\boldsymbol{h}}\|_{L^\infty(Q)}\,\le \, C_2^*\,, \label{aquer} \end{align} with a constant $C_2^*>0$ that does not depend on the choice of $\boldsymbol{h}\in {\cal V}$ with
$\|\boldsymbol{h}\|_{\cal V}\le\Lambda$.
Now, on account of \eqref{est39}, the expression on the right-hand side of \eqref{qid} takes the form \begin{align} &-\int_\Omega\nabla q\cdot\nabla\big(a\,q-{{K}}\ast q+F'(\varphi^{\boldsymbol{h}})-F'(\overline{\varphi})-F''(\overline{\varphi})\eta^{\boldsymbol{h}}\big)\,dx\nonumber \\[1mm] &=-\big(\nabla q,(a+F''(\overline{\varphi}))\nabla q\big)-\big(\nabla q,q\,F'''(\overline{\varphi})\nabla\overline{\varphi}\big) -\big(\nabla q,q\,\nabla a-\nabla {{K}}\ast q\big)\nonumber\\ &\hspace*{4.5mm}-\big(\nabla q,\overline{A}_{\boldsymbol{h}}\,(\varphi^{\boldsymbol{h}}-\overline{\varphi})\nabla(\varphi^{\boldsymbol{h}}-\overline{\varphi})\big) -\big(\nabla q,\overline{B}_{\boldsymbol{h}}\, (\varphi^{\boldsymbol{h}}-\overline{\varphi})^2\,\nabla\overline{\varphi}\big),\label{est40} \end{align} and the last four terms in \eqref{est40} can be estimated in the following way: \begin{align}
&\big|\big(\nabla q,qF'''(\overline{\varphi})\nabla\overline{\varphi}\big)\big| \,\leq\,C\,\Vert\nabla q\Vert\,\Vert q\Vert_{L^4(\Omega)}\,\Vert\nabla\overline{\varphi}\Vert_{L^4(\Omega)^2}\nonumber \\[1mm] &\leq\,C\,\Vert\nabla q\Vert\,\big(\Vert q\Vert+\Vert q\Vert^{1/2}\,\Vert\nabla q\Vert^{1/2}\big)\, \leq\, \epsilon'\,\Vert\nabla q\Vert^2\,+\,C_{\epsilon'}\,\Vert q\Vert^2\,,\label{est46} \\[4mm]
&\big|\big(\nabla q,q\,\nabla a-\nabla {{K}}\ast q\big)\big| \,\leq C\,\Vert\nabla q\Vert\,\Vert q\Vert\,\le\, \epsilon'\,\Vert\nabla q\Vert^2\,+\,C_{\epsilon'}\,\Vert q\Vert^2\,,\label{est47} \\[4mm] &
\big|\big(\nabla q,\overline{A}_{\boldsymbol{h}}\,(\varphi^{\boldsymbol{h}}-\overline{\varphi})\nabla(\varphi^{\boldsymbol{h}}-\overline{\varphi})\big)\big| \,\leq\,C\,\Vert\nabla(\varphi^{\boldsymbol{h}}-\overline{\varphi}) \Vert_{L^4(\Omega)^2}\,\Vert\varphi^{\boldsymbol{h}}-\overline{\varphi}\Vert_{L^4(\Omega)}\,\Vert\nabla q\Vert\nonumber\\[1mm] &\leq\,\epsilon'\,\Vert\nabla q\Vert^2\,+\,C_{\epsilon'}\, \Vert\varphi^{\boldsymbol{h}}-\overline{\varphi}\Vert_V^2\, \Vert\varphi^{\boldsymbol{h}}-\overline{\varphi}\Vert_{H^2(\Omega)}^2 \,\le\, \epsilon'\,\Vert\nabla q\Vert^2\,+\,C_{\epsilon'}\, \Vert\varphi^{\boldsymbol{h}}-\overline{\varphi}\Vert_{H^2(\Omega)}^2
\,\|\boldsymbol{h}\|_{\cal V}^2\,, \label{est42} \\[4mm] &
\big|\big(\nabla q,\overline{B}_{\boldsymbol{h}}\,(\varphi^{\boldsymbol{h}}-\overline{\varphi})^2\,\nabla\overline{\varphi}\big)\big| \,\leq\,C\,\Vert\nabla q\Vert\,\Vert\varphi^{\boldsymbol{h}}-\overline{\varphi}\Vert_{L^8(\Omega)}^2\,\Vert\nabla\overline{\varphi}\Vert_{L^4(\Omega)^2}\nonumber \\[1mm] &\leq\,\epsilon'\,\Vert\nabla q\Vert^2\,+\,C_{\epsilon'}\, \Vert\varphi^{\boldsymbol{h}}-\overline{\varphi}\Vert_V^4\, \leq\,\epsilon'\,\Vert\nabla q\Vert^2\,+\,C_{\epsilon'}\, \Vert\boldsymbol{h}\Vert_{\cal V}^4\,. \label{est43} \end{align}
We now insert the estimates \eqref{est51}--\eqref{est38} in \eqref{pid} and the estimates \eqref{est44}, \eqref{est45} and \eqref{est46}--\eqref{est43} in \eqref{qid} and recall \eqref{est40} and the {conditions} (\ref{F1}) and (\ref{nu}). Adding the resulting inequalities, and fixing $\epsilon>0$ and $\epsilon'>0$ small enough (i.e., $\,\epsilon\leq \hat\nu_1/22$ and $\epsilon'\leq \hat c_1/16$), we obtain that almost everywhere in $(0,T)$ we have the inequality \begin{align} &\frac{d}{dt}\big(\Vert\boldsymbol{p}^{\boldsymbol{h}}\Vert^2+\Vert q^{\boldsymbol{h}}\Vert^2\big)+ \hat\nu_1 \,\Vert\nabla\boldsymbol{p}^{\boldsymbol{h}}\Vert^2+ \hat c_1 \,\Vert\nabla q^{\boldsymbol{h}}\Vert^2 \leq\,\alpha\,\big(\Vert\boldsymbol{p}^{\boldsymbol{h}}\Vert^2+\Vert q^{\boldsymbol{h}}\Vert^2\big)\,+\,\beta_{\boldsymbol{h}},\label{est48} \end{align} where the functions $\alpha,\beta_{\boldsymbol{h}}\in L^1(0,T)$ are given by \begin{align*} \alpha(t)&:= C\,\big(1+\Vert\overline{\boldsymbol{u}}(t)\Vert_{H^2(\Omega)^2}^2\big),\\[2mm] \beta_{\boldsymbol{h}}(t)&:=
C\,\|\boldsymbol{h}\|_{\cal V}^4\,\big(1+\|\overline{\boldsymbol{u}}(t)\|^2_{H^2(\Omega)^2}\big)\,+\, C\,\|\boldsymbol{h}\|_{\cal V}^2\,\Big(\|\nabla(\boldsymbol{u}^{\boldsymbol{h}}-\overline{\boldsymbol{u}})(t)\|^2 \,+\,\Vert(\varphi^{\boldsymbol{h}}-\overline{\varphi})(t)\Vert^2_{H^2(\Omega)}\nonumber \\ &\quad\,\, \,\, +\,
\|\nabla(\boldsymbol{u}^{\boldsymbol{h}}-\overline{\boldsymbol{u}})(t)\|\, \big(\Vert\boldsymbol{u}^{\boldsymbol{h}}(t)\Vert_{H^2(\Omega)^2}+ \Vert\overline{\boldsymbol{u}}(t)\Vert_{H^2(\Omega)^2}\big) \Big)\,. \end{align*}
Now, since $\|\boldsymbol{h}\|_{\cal V}\le\Lambda$, it follows from the global bounds (\ref{bound2}) and (\ref{bound3}) that \begin{align} &\int_0^T \beta_{\boldsymbol{h}}(t)\,dt \,\leq\,C\,\Vert\boldsymbol{h}\Vert_{\mathcal{V}}^3.\nonumber \end{align} Taking \eqref{icspq} into account, we therefore can infer from Gronwall's lemma that \begin{align} &\Vert\boldsymbol{p}^{\boldsymbol{h}}\Vert_{C^0([0,T];G_{div})}^2\,+\,\Vert\boldsymbol{p}^{\boldsymbol{h}}\Vert_{L^2(0,T;V_{div})}^2 \,+\,\Vert q^{\boldsymbol{h}}\Vert_{C^0([0,T];H)}^2 \,+\,\Vert q^{\boldsymbol{h}}\Vert_{L^2(0,T;V)}^2 \,\leq\,C\,\Vert\boldsymbol{h}\Vert_{\mathcal{V}}^3.\nonumber \end{align} Hence, it holds \begin{align*} &\frac{\Vert S(\overline{\boldsymbol{v}}+\boldsymbol{h})-S(\overline{\boldsymbol{v}})-[\boldsymbol\xi^{\boldsymbol{h}},\eta^{\boldsymbol{h}}]\Vert_{\mathcal{Z}}}{\Vert\boldsymbol{h}\Vert_{\mathcal{V}}} \,=\,\frac{\Vert[\boldsymbol{p}^{\boldsymbol{h}},q^{\boldsymbol{h}}]\Vert_{\mathcal{Z}}}{\Vert\boldsymbol{h}\Vert_{\mathcal{V}}}\, \leq\,C\,\Vert\boldsymbol{h}\Vert_{\mathcal{V}}^{1/2} \to 0, \end{align*}
as $\|\boldsymbol{h}\|_{\cal V}\to 0$, which concludes the proof of the theorem. \end{proof}
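The Gronwall argument that closes the proof can be illustrated numerically. The following Python sketch is purely illustrative: the coefficients `alpha` and `beta` are arbitrary nonnegative choices, not the functions $\alpha$, $\beta_{\boldsymbol{h}}$ from the proof. It integrates the extremal case $y'=\alpha(t)\,y+\beta(t)$ of the differential inequality by the explicit Euler method and checks the result against the Gronwall bound $\big(y(0)+\int_0^T\beta\big)\exp\big(\int_0^T\alpha\big)$, which is the form of bound used in \eqref{est48} above and in \eqref{diffineq3} below.

```python
import math

# Illustrative nonnegative coefficients (assumptions, not from the proof).
alpha = lambda t: 1.0 + math.sin(t) ** 2
beta = lambda t: 0.5 * math.exp(-t)

def euler_solution(y0, T, n):
    """Explicit Euler for y' = alpha(t)*y + beta(t), the extremal case of
    the differential inequality y' <= alpha*y + beta."""
    dt = T / n
    y, t = y0, 0.0
    for _ in range(n):
        y += dt * (alpha(t) * y + beta(t))
        t += dt
    return y

def gronwall_bound(y0, T, n):
    """The bound (y0 + int_0^T beta) * exp(int_0^T alpha), with both
    integrals approximated by the trapezoidal rule."""
    dt = T / n
    ts = [i * dt for i in range(n + 1)]
    trap = lambda f: dt * (sum(f(t) for t in ts) - 0.5 * (f(0.0) + f(T)))
    return (y0 + trap(beta)) * math.exp(trap(alpha))

y_num = euler_solution(1.0, 2.0, 20000)
bound = gronwall_bound(1.0, 2.0, 20000)
assert 0.0 < y_num <= bound  # the solution never exceeds the Gronwall bound
```

The same mechanism, absorbing the gradient terms by Young's inequality and applying Gronwall's lemma to the remaining $L^1$-in-time coefficients, drives the estimates in this section.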
\noindent \textbf{First-order necessary optimality conditions.} From Theorem \ref{diffcontstat} we can deduce the following necessary optimality condition:
\begin{cor} Suppose that the general hypotheses {\bf (H1)}--{\bf (H4)} are fulfilled, and assume that $\overline{\boldsymbol{v}}\in\mathcal{V}_{ad}$ is an optimal control for {\bf (CP)} with associated state $[\overline{\boldsymbol{u}},\overline{\varphi}]={\cal S}(\overline{\boldsymbol{v}})$. Then it holds \begin{align} &\beta_1\int_0^T\!\!\int_\Omega(\overline{\boldsymbol{u}}-\boldsymbol{u}_Q)\cdot\boldsymbol\xi^{\boldsymbol{h}}\,dx\,dt \,+\,\beta_2\int_0^T\!\!\int_\Omega(\overline{\varphi}-\varphi_Q)\,\eta^{\boldsymbol{h}}\,dx\,dt \,+\,\beta_3\int_\Omega(\overline{\boldsymbol{u}}(T)-\boldsymbol{u}_\Omega)\cdot\boldsymbol\xi^{\boldsymbol{h}}(T)\,dx\nonumber\\ &+\,\beta_4\int_\Omega(\overline{\varphi}(T)-\varphi_\Omega)\,\eta^{\boldsymbol{h}}(T)\,dx \,+\,\gamma\int_0^T\!\!\int_\Omega\overline{\boldsymbol{v}}\cdot(\boldsymbol{v}-\overline{\boldsymbol{v}})\,dx\,dt\,\geq\, 0\qquad\forall\,\boldsymbol{v}\in\mathcal{V}_{ad}, \label{nec.opt.cond} \end{align} where $[\boldsymbol\xi^{\boldsymbol{h}},\eta^{\boldsymbol{h}}]$ is the unique solution to the linearized system \eqref{linsy1}--\eqref{linics} corresponding to $\boldsymbol{h}=\boldsymbol{v}-\overline{\boldsymbol{v}}$. \end{cor}
\begin{proof} Introducing the reduced cost functional $f:\mathcal{V}\to[0,\infty)$ given by $f(\boldsymbol{v}):=\mathcal{J}\big({\cal S}(\boldsymbol{v}),\boldsymbol{v}\big)$, for all $\boldsymbol{v}\in\mathcal{V}$, where $\mathcal{J}:\mathcal{Z}\times\mathcal{V}\to[0,\infty)$ is given by \eqref{costfunct}, and invoking the convexity of $\mathcal{V}_{ad}$, we have (see, e.g., \cite[Lemma 2.21]{Tr}) \begin{align} &f'(\overline{\boldsymbol{v}})(\boldsymbol{v}-\overline{\boldsymbol{v}})\geq 0\qquad\forall\,\boldsymbol{v}\in\mathcal{V}_{ad}.\label{abstcond} \end{align} Obviously, by the chain rule, \begin{align} &f'(\boldsymbol{v})=\mathcal{J}_y'\big({\cal S}(\boldsymbol{v}),\boldsymbol{v}\big)\circ{\cal S}'(\boldsymbol{v})+\mathcal{J}_{\boldsymbol{v}}' \big({\cal S}(\boldsymbol{v}),\boldsymbol{v}\big),\label{Fr1} \end{align} where, for every fixed $\boldsymbol{v}\in\mathcal{V}$, $\mathcal{J}_y'\big(y,\boldsymbol{v}\big)\in\mathcal{Z}'$
is the Fr\'echet derivative of $\mathcal{J}=\mathcal{J}(y,\boldsymbol{v})$ with respect to $y$ at $y\in\mathcal{Z}$ and, for every fixed $y\in\mathcal{Z}$, $\mathcal{J}_{\boldsymbol{v}}'\big(y,\boldsymbol{v}\big)\in\mathcal{V}'$ is the Fr\'echet derivative of $\mathcal{J}=\mathcal{J}(y,\boldsymbol{v})$ with respect to $\boldsymbol{v}$ at $\boldsymbol{v}\in\mathcal{V}$. We have \begin{align} &\mathcal{J}_y'\big(y,\boldsymbol{v}\big)(\zeta)=\beta_1\int_0^T\!\!\int_\Omega(\boldsymbol{u}-\boldsymbol{u}_Q)\cdot\boldsymbol{\zeta}_1\,dx\,dt \,+\beta_2\int_0^T\!\!\int_\Omega(\varphi-\varphi_Q)\,\zeta_2\,dx\,dt\nonumber\\ &\quad+\beta_3\int_\Omega(\boldsymbol{u}(T)-\boldsymbol{u}_\Omega)\cdot\boldsymbol{\zeta}_1(T)\,dx\,+\beta_4\int_\Omega(\varphi(T)-\varphi_\Omega)\,\zeta_2(T)\,dx \qquad\forall\,\zeta=[\boldsymbol{\zeta}_1,\zeta_2]\in\mathcal{Z},\label{Fr2} \end{align} where $y=[\boldsymbol{u},\varphi]$. Moreover, \begin{align} &\mathcal{J}_{\boldsymbol{v}}'\big(y,\boldsymbol{v}\big){(\boldsymbol{w})}=\gamma\int_0^T\!\!\int_\Omega \boldsymbol{v}\cdot \boldsymbol{w}\,dx\,dt\qquad\forall\,\boldsymbol{w}\in\mathcal{V}.\label{Fr3} \end{align} Hence, \eqref{nec.opt.cond} follows from \eqref{abstcond}--\eqref{Fr3} on account of the fact that, thanks to Theorem 3, we have \begin{align*} &{\cal S}'(\overline{\boldsymbol{v}})(\boldsymbol{v}-\overline{\boldsymbol{v}})=[\boldsymbol\xi^{\boldsymbol{h}},\eta^{\boldsymbol{h}}], \end{align*} where $[\boldsymbol\xi^{\boldsymbol{h}},\eta^{\boldsymbol{h}}]$ is the unique solution to the linearized system \eqref{linsy1}--\eqref{linics} corresponding to $\boldsymbol{h}=\boldsymbol{v}-\overline{\boldsymbol{v}}$. \end{proof}
\noindent \textbf{The adjoint system and first-order necessary optimality conditions.} We now aim to eliminate the variables $\,[\boldsymbol\xi^{\boldsymbol{h}},\eta^{\boldsymbol{h}}]\,$ from the variational inequality (\ref{nec.opt.cond}). To this end, let
us introduce the following {\itshape adjoint system}: \begin{align} \widetilde{\boldsymbol{p}}_t\,=\,&-\,2\,\mbox{div}\big(\nu(\overline{\varphi})\,D\widetilde{\boldsymbol{p}}\big)
-(\overline{\boldsymbol{u}}\cdot\nabla)\,\widetilde{\boldsymbol{p}}+(\widetilde{\boldsymbol{p}}\cdot\nabla^T)\,\overline{\boldsymbol{u}} \,+\,\widetilde{q}\,\nabla\overline{\varphi}-\beta_1(\overline{\boldsymbol{u}}-\boldsymbol{u}_Q)\,,\label{adJ1}\\ \widetilde{q}_t\,=\,&-(a\,\Delta\widetilde{q}\,+\,\nabla {{K}}\dot{\ast}\nabla\widetilde{q}\,+\,F''(\overline{\varphi})\,\Delta\widetilde{q})-\overline{\boldsymbol{u}}\cdot\nabla\widetilde{q} \,+\,2\,\nu'(\overline{\varphi})\,D\overline{\boldsymbol{u}}:D\widetilde{\boldsymbol{p}} \nonumber\\ &-\big(a\,\widetilde{\boldsymbol{p}}\cdot\nabla\overline{\varphi}-K\ast(\widetilde{\boldsymbol{p}}\cdot\nabla\overline{\varphi})+F''(\overline{\varphi})\,\widetilde{\boldsymbol{p}}\cdot\nabla\overline{\varphi}\big) +\widetilde{\boldsymbol{p}}\cdot\nabla\overline{\mu}-\beta_2(\overline{\varphi}-\varphi_Q)\,,\label{adJ2}\\ \mbox{div}(&\widetilde{\boldsymbol{p}})=0,\label{adJ3}\\ \widetilde{\boldsymbol{p}}=&0,\qquad\frac{\partial\widetilde{q}}{\partial\boldsymbol{n}}=0\quad\mbox{on }\Sigma,\label{adJ4}\\ \widetilde{\boldsymbol{p}}(T&)=\beta_3(\overline{\boldsymbol{u}}(T)-\boldsymbol{u}_\Omega),\qquad\widetilde{q}(T)=\beta_4(\overline{\varphi}(T)-\varphi_\Omega).\label{adJics} \end{align} Here, we have set $$(\nabla {{K}}\dot{\ast}\nabla\widetilde{q})(x):=\int_\Omega\nabla {{K}}(x-y)\cdot\nabla\widetilde{q}(y) \,dy\, \quad\mbox{for a.\,e. }\,x\in\Omega\,. $$
Since $\boldsymbol{u}_\Omega\in G_{div}$, $\varphi_\Omega\in H$, the solution to \eqref{adJ1}--\eqref{adJics} can only be expected to enjoy the regularity \begin{align} \widetilde{\boldsymbol{p}}&\in H^1(0,T;(V_{div})') \cap C([0,T];G_{div})\cap L^2(0,T;V_{div}),\nonumber\\ \widetilde{q}&\in H^1(0,T;V') \cap C([0,T];H)\cap L^2(0,T;V).\label{reg.adJ.sol} \end{align} Hence, the pair $[\widetilde{\boldsymbol{p}},\widetilde{q}]$ must be understood as a solution to the following weak formulation of the system \eqref{adJ1}--\eqref{adJ4} (where the argument $t$ is again omitted): \begin{align} &\langle\widetilde{\boldsymbol{p}}_t,\boldsymbol{z}\rangle_{{{V_{div}}}}-2\,\big(\nu(\overline{\varphi})D\widetilde{\boldsymbol{p}},D\boldsymbol{z}\big) \,=\,-b(\overline{\boldsymbol{u}},\widetilde{\boldsymbol{p}},\boldsymbol{z})+b(\boldsymbol{z},\overline{\boldsymbol{u}},\widetilde{\boldsymbol{p}}) +\big(\widetilde{q}\nabla\overline{\varphi},\boldsymbol{z}\big)-\beta_1\big((\overline{\boldsymbol{u}}-\boldsymbol{u}_Q),\boldsymbol{z}\big),\label{wfadj1}\\ &\langle\widetilde{q}_t,\chi\rangle_{{{V}}}-\big((a+F''(\overline{\varphi}))\nabla\widetilde{q},\nabla\chi\big) \,=\,\big(\nabla a+F'''(\overline{\varphi})\nabla\overline{\varphi},\chi\nabla\widetilde{q}\big) -\big(\nabla K \dot{\ast}\nabla\widetilde{q},\chi\big) -\big(\overline{\boldsymbol{u}}\cdot\nabla\widetilde{q},\chi\big)\nonumber\\ &\quad +2\,\big(\nu'(\overline{\varphi})D\overline{\boldsymbol{u}}:D\widetilde{\boldsymbol{p}},\chi\big) -\big((a\widetilde{\boldsymbol{p}}\cdot\nabla\overline{\varphi}- {K}\ast(\widetilde{\boldsymbol{p}}\cdot\nabla\overline{\varphi})+F''(\overline{\varphi})\widetilde{\boldsymbol{p}}\cdot\nabla\overline{\varphi}),\chi\big) \nonumber\\ &\quad +\big(\widetilde{\boldsymbol{p}}\cdot\nabla\overline{\mu},\chi\big)-\beta_2\big((\overline{\varphi}-\varphi_Q),\chi\big),\label{wfadj2} \end{align} for every $\boldsymbol{z}\in V_{div}$, every $\chi\in V$ and almost every $t\in (0,T)$. 
We have the following result.
\begin{prop} Suppose that the hypotheses {\bf (H1)}--{\bf (H4)} are fulfilled. Then the adjoint system \eqref{adJ1}--\eqref{adJics} has a unique weak solution $[\widetilde{\boldsymbol{p}},\widetilde{q}]$ satisfying \eqref{reg.adJ.sol}. \end{prop}
\begin{proof} We only give a sketch of the proof, which can be carried out in a similar way to the proof of Proposition \ref{linthm}. In particular, we omit the implementation of the Faedo-Galerkin scheme and only derive the basic estimates that weak solutions must satisfy. To this end, we insert $\widetilde{\boldsymbol{p}}(t)\in V_{div}$ in \eqref{wfadj1} and $\widetilde{q}(t)\in V$ in (\ref{wfadj2}), and add the resulting equations, observing that we have $\,\,b(\overline{\boldsymbol{u}}(t),\widetilde{\boldsymbol{p}}(t),\widetilde{\boldsymbol{p}}(t))=(\overline{\boldsymbol{u}}(t)\cdot\nabla\widetilde{q}(t),\widetilde{q}(t))=0$. Omitting the argument $t$ again, we now estimate the resulting terms on the right-hand side individually. We denote by $C$ positive constants that only depend on the global data and on $[\overline{\boldsymbol{u}},\overline{\varphi}]$, while $C_\sigma$ stands for positive constants that also depend on the quantity indicated by the index $\sigma$. Using the elementary Young's inequality, the H\"older and Gagliardo-Nirenberg inequalities, Young's inequality for convolution integrals, as well as the hypotheses {\bf (H1)}--{\bf (H4)} and the global bound (\ref{bound1}), we obtain (with positive constants $\epsilon$ and $\epsilon'$ that will be fixed later) the following series of estimates: \begin{align*}
&\Big|\int_\Omega(\widetilde{\boldsymbol{p}}\cdot\nabla^T)\overline{\boldsymbol{u}}\cdot\widetilde{\boldsymbol{p}}\,dx\Big| \,\leq\,\Vert\widetilde{\boldsymbol{p}}\Vert\,\Vert\nabla\overline{\boldsymbol{u}}\Vert_{L^4(\Omega)^{2\times 2}}\,\,\Vert\widetilde{\boldsymbol{p}}\Vert_{L^4(\Omega)^2} \,\leq\,\epsilon\,\Vert\nabla\widetilde{\boldsymbol{p}}\Vert^2+C_\epsilon\,\Vert\overline{\boldsymbol{u}}\Vert_{H^2(\Omega)^2}^2\,\Vert\widetilde{\boldsymbol{p}}\Vert^2,\\[2mm]
&\Big|\int_\Omega\widetilde{q}\,\nabla\overline{\varphi}\cdot\widetilde{\boldsymbol{p}}\,dx\Big| \,\leq\,\Vert\widetilde{q}\Vert\,\Vert\nabla\overline{\varphi}\Vert_{L^4(\Omega)^2}\,\Vert\widetilde{\boldsymbol{p}}\Vert_{L^4(\Omega)^2} \,\leq\,\epsilon\,\Vert\nabla\widetilde{\boldsymbol{p}}\Vert^2+C_\epsilon\,\Vert\widetilde{q}\Vert^2,\\[2mm]
&\Big|\beta_1\int_\Omega(\overline{\boldsymbol{u}}-\boldsymbol{u}_Q)\cdot\widetilde{\boldsymbol{p}}\,dx\Big| \,\leq\,\beta_1\,\Vert\overline{\boldsymbol{u}}-\boldsymbol{u}_Q\Vert\, \Vert\widetilde{\boldsymbol{p}}\Vert\,\leq\,\Vert\widetilde{\boldsymbol{p}}\Vert^2 +\frac{\beta_1^2}{4}\,\Vert\overline{\boldsymbol{u}}-\boldsymbol{u}_Q\Vert^2,\\[2mm] &
\Big|\int_\Omega\widetilde{q}\,\nabla\widetilde{q}\cdot\big(\nabla a+F'''(\overline{\varphi})\nabla\overline{\varphi}\big)\,dx \Big| \le\, C_K\,\Vert\widetilde{q}\Vert\,\Vert\nabla\widetilde{q}\Vert \,+\,C\,\Vert\widetilde{q}\Vert_{L^4(\Omega)} \,\Vert\nabla\widetilde{q}\Vert\,\Vert\nabla\overline{\varphi}\Vert_{L^4(\Omega)^2} \nonumber \\[2mm] & \quad\le\, C_K\,\Vert\widetilde{q}\Vert\,\Vert\nabla\widetilde{q}\Vert \,+\,C\,\big(\Vert\widetilde{q}\Vert +\Vert\widetilde{q}\Vert^{1/2}\,\Vert\nabla\widetilde{q}\Vert^{1/2}\big)\,\Vert\nabla\widetilde{q}\Vert\,\Vert\overline{\varphi}\Vert_{H^2(\Omega)} \nonumber\\[2mm] & \quad\leq\, \epsilon'\,\Vert\nabla\widetilde{q}\Vert^2 + C_{\epsilon'}\,\Vert\widetilde{q}\Vert^2, \\[2mm]
&\Big|\int_\Omega\big(\nabla {{K}}\dot{\ast}\nabla\widetilde{q}\big)\,\widetilde{q}\,dx\Big| \,\leq\, C_K\,\Vert\nabla\widetilde{q}\Vert\,\Vert\widetilde{q}\Vert\,\leq\,\epsilon'\,\Vert\nabla\widetilde{q}\Vert^2+C_{\epsilon'}\,\Vert\widetilde{q}\Vert^2,\\[2mm]
&\Big|2\int_\Omega\big(\nu'(\overline{\varphi})\,D\overline{\boldsymbol{u}}\!:\!D\widetilde{\boldsymbol{p}}\big)\,\widetilde{q}
\,dx\Big| \,\leq\, C\,\Vert D\overline{\boldsymbol{u}}\Vert_{L^4(\Omega)^{2\times 2}}\,\Vert D\widetilde{\boldsymbol{p}}\Vert\, \Vert \widetilde{q}\Vert_{L^4(\Omega)} \nonumber \\[2mm] &\quad\le\, C\,\Vert D\overline{\boldsymbol{u}}\Vert_{L^4(\Omega)^{2\times 2}}\,\Vert D\widetilde{\boldsymbol{p}}\Vert\,\big (\Vert\widetilde{q}\Vert+ \Vert\widetilde{q}\Vert^{1/2}\,\Vert\nabla\widetilde{q}\Vert^{1/2}\big) \nonumber\\[2mm] &\quad\leq\,\epsilon\,\Vert\nabla\widetilde{\boldsymbol{p}}\Vert^2+\epsilon'\,\Vert\nabla\widetilde{q}\Vert^2 +C_{\epsilon,\epsilon'}\,\big(1+\Vert\overline{\boldsymbol{u}}\Vert_{H^2(\Omega)^2}^2\big) \,\Vert\widetilde{q}\Vert^2,\\[2mm]
&\Big|\int_\Omega(a\,\widetilde{\boldsymbol{p}}\cdot\nabla\overline{\varphi})\,\widetilde{q}\,dx\Big|\,\leq\, C_K\,\|\widetilde{\boldsymbol{p}}\|_{{L^4(\Omega)^2}}\,\|\nabla\overline{\varphi}\|_{L^4(\Omega)^2}\,\|\widetilde{q}\| \,\le\,C\,\Vert\nabla\widetilde{\boldsymbol{p}}\Vert\,\Vert\widetilde{q}\Vert \nonumber\\[2mm] &\quad \leq\,\epsilon\,\Vert\nabla\widetilde{\boldsymbol{p}}\Vert^2+C_{\epsilon}\,\Vert\widetilde{q}\Vert^2, \\[2mm]
&\Big|\int_\Omega K\ast(\widetilde{\boldsymbol{p}}\cdot\nabla\overline{\varphi})\,\widetilde{q}\,dx\Big|
\,\le\, C_K\,\|\widetilde{\boldsymbol{p}}\|_{{L^4(\Omega)^2}}\,\|\nabla\overline{\varphi}\|_{L^4(\Omega)^2}\,\|\widetilde{q}\| \,\leq\,C\,\Vert\nabla\widetilde{\boldsymbol{p}}\Vert\Vert\,\widetilde{q}\Vert\nonumber\\[2mm] &\quad\leq\,\epsilon\,\Vert\nabla\widetilde{\boldsymbol{p}}\Vert^2+C_{\epsilon}\,\Vert\widetilde{q}\Vert^2,\\[2mm]
&\Big|\int_\Omega F''(\overline{\varphi})(\widetilde{\boldsymbol{p}}\cdot\nabla\overline{\varphi})\,\widetilde{q}\,dx \Big| \,\leq\,C\,\Vert\nabla\widetilde{\boldsymbol{p}}\Vert\,\Vert\widetilde{q}\Vert \,\leq\,\epsilon\,\Vert\nabla\widetilde{\boldsymbol{p}}\Vert^2+C_{\epsilon}\,\Vert\widetilde{q}\Vert^2,\\[2mm]
&\Big|\int_\Omega(\widetilde{\boldsymbol{p}}\cdot\nabla\overline{\mu})\,\widetilde{q}\,dx\Big|\,\leq\,
\|\widetilde{\boldsymbol{p}}\|_{L^4(\Omega)^2}\,\|\nabla\overline{\mu}\|_{L^4(\Omega)^2}\,\|\widetilde{q}\| \,\le\, C\,\Vert\overline{\mu}\Vert_{H^2(\Omega)}\,\Vert\nabla\widetilde{\boldsymbol{p}}\Vert\,\Vert\widetilde{q}\Vert \,\leq\,\epsilon\,\Vert\nabla\widetilde{\boldsymbol{p}}\Vert^2+C_{\epsilon}\,\Vert\widetilde{q}\Vert^2,\\[2mm]
&\Big|\beta_2\int_\Omega(\overline{\varphi}-\varphi_Q)\,\widetilde{q}\,dx\Big|\,\leq\,\beta_2\,\Vert\overline{\varphi}-\varphi_Q\Vert\,\Vert\widetilde{q}\Vert \,\leq\,\Vert\widetilde{q}\Vert^2+\frac{\beta_2^2}{4}\,\Vert\overline{\varphi}-\varphi_Q\Vert^2\,. \end{align*} Fixing now $\epsilon>0$ and $\epsilon'>0$ small enough (in particular, $\,7\epsilon\leq \hat\nu_1/2\,$ and $\,3\epsilon'\leq \hat c_1/2$), and using (\ref{F1}) and (\ref{nu}), we arrive at the following differential inequality: \begin{align} &\frac{d}{dt}\,\big(\Vert\widetilde{\boldsymbol{p}}\Vert^2+\Vert\widetilde{q}\Vert^2\big)\,+\,\sigma\,\big(\Vert\widetilde{\boldsymbol{p}}\Vert^2+\Vert\widetilde{q}\Vert^2\big)+\theta\, \geq\,\hat \nu_1\,\Vert\nabla\widetilde{\boldsymbol{p}}\Vert^2+\hat c_1\,\Vert\nabla\widetilde{q}\Vert^2,\label{diffineq3} \end{align} where the functions $\,\sigma,\theta\in L^1(0,T)\,$ are given by \begin{align*} &\sigma(t):=C\,\big(1+\Vert\overline{\boldsymbol{u}}(t)\Vert_{H^2(\Omega)^2}^2\big), \qquad\theta(t):=C\,\big(\beta_1^2\,\Vert(\overline{\boldsymbol{u}}-\boldsymbol{u}_Q)(t)\Vert^2+\,\beta_2^2\,\Vert(\overline{\varphi}-\varphi_Q)(t)\Vert^2\big). \end{align*} By applying the (backward) Gronwall lemma to \eqref{diffineq3}, we obtain \begin{align*} &\Vert\widetilde{\boldsymbol{p}}(t)\Vert^2+\Vert\widetilde{q}(t)\Vert^2\leq\Big[\Vert\widetilde{\boldsymbol{p}}(T)\Vert^2+\Vert\widetilde{q}(T)\Vert^2 +\int_t^T\theta(\tau)d\tau\Big]e^{\int_t^T\sigma(\tau)d\tau}\nonumber\\ &\leq\,C\,\Big[\Vert\widetilde{\boldsymbol{p}}(T)\Vert^2+\Vert\widetilde{q}(T)\Vert^2+ \beta_1^2\,\Vert\overline{\boldsymbol{u}}-\boldsymbol{u}_Q\Vert_{L^2(0,T;G_{div})}^2 +\beta_2^2\,\Vert\overline{\varphi}-\varphi_Q\Vert_{L^2(Q)}^2\Big], \end{align*} for all $t\in[0,T]$. 
From this estimate, and by integrating \eqref{diffineq3} over $[t,T]$, we can deduce the estimates for $\widetilde{\boldsymbol{p}}$ and $\widetilde{q}$ in $C^0([0,T];G_{div})\cap L^2(0,T;V_{div})$ and in $C^0([0,T];H)\cap L^2(0,T;V)$, respectively. By a comparison argument in \eqref{adJ1}, \eqref{adJ2}, we also obtain the estimates for $\widetilde{\boldsymbol{p}}_t$ and $\widetilde{q}_t$ in $L^2(0,T;V_{div}')$ and in $L^2(0,T;V')$, respectively. Therefore we deduce the existence of a weak solution to system \eqref{adJ1}--\eqref{adJics} satisfying \eqref{reg.adJ.sol}. The proof of uniqueness is rather straightforward, and we therefore may omit the details here. \end{proof}
Using the adjoint system, we can now eliminate $\boldsymbol\xi^{\boldsymbol{h}},\eta^{\boldsymbol{h}}$ from \eqref{nec.opt.cond}. Indeed, we have the following result. \begin{thm} Suppose that the hypotheses {\bf (H1)}--{\bf (H4)} are fulfilled. Let $\overline{\boldsymbol{v}}\in\mathcal{V}_{ad}$ be an optimal control for the control problem {\bf (CP)} with associated state $[\overline{\boldsymbol{u}},\overline{\varphi}]={\cal S}(\overline{\boldsymbol{v}})$ and adjoint state $[\widetilde{\boldsymbol{p}},\widetilde{q}]$. Then the following variational inequality holds:
\begin{align} &\gamma\int_0^T\!\!\int_\Omega\overline{\boldsymbol{v}}\cdot(\boldsymbol{v}-\overline{\boldsymbol{v}})\,dx\,dt\,+\int_0^T\!\!\int_\Omega\widetilde{\boldsymbol{p}}\cdot(\boldsymbol{v}-\overline{\boldsymbol{v}})\,dx\,dt \,\geq\, 0 \,\quad\forall\,\boldsymbol{v}\in\mathcal{V}_{ad}.\label{nec.opt.cond2} \end{align} \end{thm}
\begin{proof} Note that, thanks to \eqref{adJics}, we have for the sum (which we denote by $\mathcal{I}$)
of the first four terms on the left-hand side of \eqref{nec.opt.cond} \begin{align} &\mathcal{I}:=\beta_1\int_0^T\!\!\int_\Omega(\overline{\boldsymbol{u}}-\boldsymbol{u}_Q) \cdot\boldsymbol\xi^{\boldsymbol{h}}\,dx\,dt+\beta_2\int_0^T\!\!\int_\Omega(\overline{\varphi}-\varphi_Q)\eta^{\boldsymbol{h}} \,dx\,dt +\beta_3\int_\Omega(\overline{\boldsymbol{u}}(T)-\boldsymbol{u}_\Omega)\cdot\boldsymbol\xi^{\boldsymbol{h}}(T)\,dx\nonumber\\ &+\beta_4\int_\Omega(\overline{\varphi}(T)-\varphi_\Omega)\eta^{\boldsymbol{h}}(T)\,dx\,=\, \beta_1\!\!\int_0^T\int_\Omega(\overline{\boldsymbol{u}}-\boldsymbol{u}_Q)\cdot\boldsymbol\xi^{\boldsymbol{h}}\,dx\,dt\,+\beta_2\int_0^T\!\!\int_\Omega(\overline{\varphi}-\varphi_Q)\eta^{\boldsymbol{h}}\,dx\,dt\nonumber\\ &+\int_0^T\big(\langle\widetilde{\boldsymbol{p}}_t(t),\boldsymbol\xi^{\boldsymbol{h}}(t)\rangle_{{{V_{div}}}}\,+ \langle\boldsymbol\xi^{\boldsymbol{h}}_t(t),\widetilde{\boldsymbol{p}}(t)\rangle_{{V_{div}}}\big)\,dt +\int_0^T\big(\langle\widetilde{q}_t(t),\eta^{\boldsymbol{h}}(t)\rangle_{{{V}}} +\langle\eta^{\boldsymbol{h}}_t(t),\widetilde{q}(t)\rangle_{{{V}}}\big) \,dt\,.\label{proofadJ1} \end{align}
Now, recalling the weak formulation of the linearized system \eqref{linsy1}--\eqref{linics} for $\boldsymbol{h}=\boldsymbol{v}- \overline{\boldsymbol{v}}$, we obtain, omitting the argument $t$,
\begin{align} \langle\boldsymbol\xi^{\boldsymbol{h}}_t,\widetilde{\boldsymbol{p}}\rangle_{{{V_{div}}}}\,&= \,-2\,\big(\nu(\overline{\varphi})\,D\boldsymbol\xi^{\boldsymbol{h}}, D\widetilde{\boldsymbol{p}}\big)\,-\,2\,\big(\nu'(\overline{\varphi})\,\eta^{\boldsymbol{h}}\,D\overline{\boldsymbol{u}},D\widetilde{\boldsymbol{p}})\,-\,b(\overline{\boldsymbol{u}}, \boldsymbol\xi^{\boldsymbol{h}},\widetilde{\boldsymbol{p}})\nonumber \\[1mm] & \,\,\quad\, -\,b(\boldsymbol\xi^{\boldsymbol{h}},\overline{\boldsymbol{u}},\widetilde{\boldsymbol{p}})\,+\,\big((a\,\eta^{\boldsymbol{h}}- K\ast\eta^{\boldsymbol{h}} + F''(\overline{\varphi})\,\eta^{\boldsymbol{h}})\,\nabla \overline{\varphi},\widetilde{\boldsymbol{p}}\big) \nonumber\\[1mm] &\,\,\,\quad +\, \big( \overline{\mu}\,\nabla \eta^{\boldsymbol{h}},\widetilde{\boldsymbol{p}}\big)\,+\,(\boldsymbol{v}-\overline{\boldsymbol{v}},\widetilde{\boldsymbol{p}})\,, \label{proofadJ2} \\[3mm] \langle\eta^{\boldsymbol{h}}_t,\widetilde{q}\rangle_{{{V}}}\,&=\, -\,\big(\nabla \big( a\,\eta^{\boldsymbol{h}}-K\ast\eta^{\boldsymbol{h}}+ F''(\overline{\varphi})\,\eta^{\boldsymbol{h}}\big),\nabla \widetilde{q} \big) \,+\,(\overline{\boldsymbol{u}}\,\eta^{\boldsymbol{h}},\nabla\widetilde{q}) \nonumber\\[1mm] & \,\,\,\quad +\, (\boldsymbol\xi^{\boldsymbol{h}}\,\overline{\varphi}, \nabla\widetilde{q})\,. \label{proofadJ3} \end{align} Now, we insert these two equalities as well as (\ref{wfadj1}) and (\ref{wfadj2}) in (\ref{proofadJ1}). 
Integrating by parts, using the boundary conditions for the involved quantities and the fact that $\boldsymbol\xi^{\boldsymbol{h}}$ and $\widetilde{\boldsymbol{p}}$ are divergence-free vector fields, and observing that the symmetry of the kernel $K$ implies the identity \begin{align*} &\int_\Omega ({{K}}\ast\eta)\,\omega\,dx\,=\,\int_\Omega ({{K}}\ast\omega)\,\eta\,dx \,\quad\forall\,\eta,\omega\in H, \end{align*} we arrive after a straightforward standard calculation (which can be omitted here) at the conclusion that $\mathcal{I}$ can be rewritten as \begin{align*} &\mathcal{I}\,=\,\int_0^T\!\!\int_\Omega\widetilde{\boldsymbol{p}}\cdot(\boldsymbol{v}-\overline{\boldsymbol{v}})\,dx\,dt\,. \end{align*} Therefore, (\ref{nec.opt.cond2}) follows from this identity and \eqref{nec.opt.cond}. \end{proof}
\begin{oss}{\upshape The system \eqref{sy1}--\eqref{ics}, written for $[\overline{\boldsymbol{u}},\overline{\varphi}]$, the adjoint system \eqref{adJ1}--\eqref{adJics} and the variational inequality \eqref{nec.opt.cond2} together form the first-order necessary optimality conditions. Moreover, since $\mathcal{V}_{ad}$ is a nonempty, closed and convex subset of $L^2(Q)^2$, in the case $\gamma>0$ the inequality \eqref{nec.opt.cond2} is equivalent to the following condition for the optimal control $\overline{\boldsymbol{v}}\in\mathcal{V}_{ad}$: \begin{align}\nonumber &\overline{\boldsymbol{v}}=\mathbb{P}_{\mathcal{V}_{ad}}\Big(-\frac{\widetilde{\boldsymbol{p}}}{\gamma}\Big), \end{align} where $\mathbb{P}_{\mathcal{V}_{ad}}$ is the orthogonal projector in $L^2(Q)^2$ onto $\mathcal{V}_{ad}$. By standard arguments, this projection property yields the pointwise condition \begin{align}\nonumber & \overline{v}_i(x,t)\,=\,\max\,\left\{v_{a,i}(x,t),\,\min\,\left \{-\gamma^{-1}\,\widetilde{p}_i(x,t), \,v_{b,i}(x,t) \right\}\right\}, \quad i=1,2, \quad\mbox{for a.\,e. }\,(x,t)\in Q\,, \end{align} where $\widetilde{p}_i$ denotes the $i$-th component of $\widetilde{\boldsymbol{p}}$, $i=1,2$. } \end{oss}
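The max--min representation above is nothing but a componentwise clamp of $-\gamma^{-1}\widetilde{p}_i$ onto the admissible interval $[v_{a,i},v_{b,i}]$. A minimal numerical sketch (the adjoint values and bounds below are made up for illustration):

```python
import numpy as np

# Minimal sketch (made-up data): the pointwise optimality condition
#   v_i = max{ v_{a,i} , min{ -p_i/gamma , v_{b,i} } }
# is a componentwise clamp of -p/gamma onto the box [v_a, v_b].
gamma = 0.5
p_tilde = np.array([-2.0, 0.1, 1.5])   # hypothetical adjoint values at sample points
v_a = np.full(3, -1.0)                 # hypothetical lower control bounds
v_b = np.full(3,  1.0)                 # hypothetical upper control bounds

v_opt = np.clip(-p_tilde / gamma, v_a, v_b)
print(v_opt)   # -p/gamma = [4., -0.2, -3.] clamped onto [-1, 1]
```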
\end{document}
\begin{document}
\title{\Large \textsc{
The Discrete-Dual Minimal-Residual Method (\textsf{DDMRes}) for Weak Advection--Reaction Problems in Banach Spaces}}
\author{I.~Muga\footnotemark[2], \ M.~Tyler\footnotemark[3] \ and \ K.G.~van der Zee\footnotemark[4]}
\footnotetext[2]{Pontificia Universidad Cat\'olica de Valpara\'iso, Instituto de Matem\'aticas,\\ \hspace*{2em}\mbox{[email protected]}} \footnotetext[3]{University of Nottingham, School of Mathematical Sciences,\\ \hspace*{2em}\mbox{[email protected]}} \footnotetext[4]{University of Nottingham, School of Mathematical Sciences,\\ \hspace*{2em}\mbox{[email protected]}} \renewcommand{\thefootnote}{\arabic{footnote}}
\begin{abstract} \noindent \normalsize We propose and analyse a minimal-residual method in discrete dual norms for approximating the solution of the advection--reaction equation in a weak Banach-space setting. The weak formulation allows for the direct approximation of solutions in the Lebesgue $L^p$-space, $1<p<\infty$. The greater generality of this weak setting is natural when dealing with rough data and highly irregular solutions, and when enhanced qualitative features of the approximations are needed. \par We first present a rigorous analysis of the well-posedness of the underlying continuous weak formulation, under natural assumptions on the advection--reaction coefficients. The main contribution is the study of several discrete subspace pairs that guarantee the discrete stability of the method and quasi-optimality in~$L^p$. We provide numerical illustrations of these findings, including the elimination of Gibbs phenomena, the computation of optimal test spaces, and an application to 2-D advection.
\end{abstract}
{\footnotesize \parbox[t]{\textwidth}{ \textbf{Keywords~} Residual minimization $\cdot$~Discrete dual norms $\cdot$~DDMRes $\cdot$~Advection--reaction $\cdot$~Banach spaces $\cdot$~Fortin condition $\cdot$~Compatible FE pairs $\cdot$~Petrov--Galerkin method \\[.5\baselineskip] \textbf{Mathematics Subject Classification~} 41A65 $\cdot$~65J05 $\cdot$~46B20
$\cdot$~65N12 $\cdot$~65N15 $\cdot$~35L02 $\cdot$~35J25 } }
\setcounter{tocdepth}{2} { \tableofcontents }
\pagestyle{myheadings} \thispagestyle{plain} \markboth{\small \textit{I.~Muga, M.~Tyler and K.G.~van der Zee}}{\small \textit{Weak Advection--Reaction in Banach Spaces}}
\newcommand{\mbox{\textsf{DDMRes}}}{\mbox{\textsf{DDMRes}}}
\section{Introduction} \label{sec:intro}
Residual minimization encapsulates the idea that an approximation to the solution~$u\in\mathbb{U}$ of an (infinite-dimensional) operator equation $Bu = f$, can be found by minimizing the norm of the residual $f-Bw_n$ amongst all $w_n$ in some finite-dimensional subspace~$\mathbb{U}_n\subset \mathbb{U}$. This powerful idea provides a stable and convergent discretization method under quite general assumptions, i.e., when $B:\mathbb{U}\rightarrow \mathbb{V}^*$ is any linear continuous bijection from Banach space~$\mathbb{U}$~onto the dual~$\mathbb{V}^*$ of a Banach space~$\mathbb{V}$, $f\in \mathbb{V}^*$, and $\operatorname{dist}(u,\mathbb{U}_n) \rightarrow 0$ as $n\rightarrow \infty$; see, e.g.,~Guermond~\cite[Section~2]{GueSINUM2004} for details. Note that this applies to well-posed weak formulations of linear partial differential equations (PDEs), in which case~$B$ is induced by the underlying bilinear form (i.e., $\dual{Bw,v} = b(w,v) \,, \,\forall w\in \mathbb{U}, \,\forall v\in \mathbb{V}$). As such, residual minimization is essentially an ideal methodology for non-coercive and/or nonsymmetric problems.
\par
However, for many weak formulations of PDEs, $\mathbb{V}^*$ is a \emph{negative} space (such as $H^{-m}(\Omega)$, or more generally $W^{-m,p}(\Omega)$, which is the dual of the Sobolev space~$W^{m,q}_0(\Omega)$, where $1<p<\infty$, $p^{-1}+q^{-1}=1$, $m=1,2,\ldots,$ or the dual of a~graph space). In that case, this requires the minimization of the residual in the \emph{non-computable} dual norm~$\norm{\cdot}_{\mathbb{V}^*}$. To make this tractable, one can instead minimize in a \emph{discrete} dual norm. In other words, one aims to find an approximation~$u_n\in\mathbb{U}_n$ such that~$\bignorm{f-B u_n}_{(\mathbb{V}_m)^*}$ is minimal, where~$\mathbb{V}_m$ is some finite-dimensional subspace of~$\mathbb{V}$. We refer to this discretization method as residual minimization in discrete dual norms, or simply as the~\mbox{\textsf{DDMRes}}{} method (\emph{Discrete-Dual Minimal-Residual} method).
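In the Hilbert-space case, the discrete dual norm admits a concrete linear-algebra expression: if $G$ is the Gram matrix of a basis of~$\mathbb{V}_m$ in the $\mathbb{V}$-inner product and $r$ collects the residual evaluations against that basis, then $\norm{f-Bu_n}_{(\mathbb{V}_m)^*}^2 = r^{\mathsf{T}}G^{-1}r$, and residual minimization becomes a weighted least-squares problem. The following is a minimal sketch with randomly generated (made-up) matrices, not the implementation used in the paper:

```python
import numpy as np

# Hypothetical small example: B maps an n-dim trial space into the dual of an
# m-dim test space, with Bmat[j, i] = <B phi_i, psi_j>, fvec[j] = <f, psi_j>,
# and G the SPD Gram matrix of the test basis in the V-inner product.
rng = np.random.default_rng(0)
n, m = 3, 5
Bmat = rng.standard_normal((m, n))
fvec = rng.standard_normal(m)
A = rng.standard_normal((m, m))
G = A @ A.T + m * np.eye(m)            # SPD Gram matrix (made up)

def dual_norm_sq(res):
    # ||res||_{(V_m)^*}^2 = res^T G^{-1} res
    return res @ np.linalg.solve(G, res)

# Residual minimizer via the normal equations  B^T G^{-1} B u = B^T G^{-1} f.
Ginv_f = np.linalg.solve(G, fvec)
Ginv_B = np.linalg.solve(G, Bmat)
u_n = np.linalg.solve(Bmat.T @ Ginv_B, Bmat.T @ Ginv_f)
r_opt = fvec - Bmat @ u_n              # discrete residual at the minimizer
```

Any perturbation of `u_n` can only increase the discrete dual residual norm, which is what characterizes the minimizer.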
\par
In this paper, we consider the~\mbox{\textsf{DDMRes}}{} method when applied to a canonical linear first-order PDE in weak Banach-space settings. In particular, we consider the advection--reaction operator $u\mapsto \bs{\beta} \cdot \nabla u + \mu \,u$, with~$\bs{\beta}:\Omega\rightarrow \mathbb{R}^d$ and $\mu:\Omega\rightarrow \mathbb{R}$ given advection--reaction coefficients, in a functional setting for which the solution space~$\mathbb{U}$ is~$L^p(\Omega)$, $1<p<\infty$, and $\mathbb{V}$ is a suitable Banach graph space (see Section~\ref{sec:weakAdvRea} for details).
This weak setting allows for the direct approximation of \emph{irregular} solutions, while the greater generality of Banach spaces (over more common Hilbert spaces) is useful for example in the extension to nonlinear hyperbolic PDEs~\cite{HolRisBOOK2015},
\footnote{Cf.~\cite{ChaDemMosCF2014, CarBriHelWri2017, CanHeuHAL2018} for nonlinear PDEs examples in Hilbert-space settings using a DPG approach.}
as well as in approximating solutions with discontinuities (allowing the elimination of Gibbs phenomena; see further details below).
\par
It has recently become clear that many methods are equivalent to~\mbox{\textsf{DDMRes}}, the most well-known being the \emph{discontinuous Petrov--Galerkin} (DPG) method (for which $B$~corresponds to a hybrid formulation of the underlying problem, so that $\mathbb{V}$ is a broken Sobolev-type space), see Demkowicz and Gopalakrishnan~\cite{DemGopBOOK-CH2014}, and the \emph{Petrov--Galerkin method with projected optimal test spaces}, see~Dahmen et al.~\cite{DahHuaSchWelSINUM2012}. While these methods require~$\mathbb{U}$ and $\mathbb{V}$ to be Hilbert spaces, in more general Banach spaces the \mbox{\textsf{DDMRes}}{} method is equivalent to certain (inexact) \emph{nonlinear Petrov--Galerkin} methods, or equivalently, mixed methods with monotone nonlinearity, where the nonlinearity originates from the nonlinear \emph{duality map}~$J_\mathbb{V}:\mathbb{V}\rightarrow \mathbb{V}^*$; see Muga~\& Van~der~Zee~\cite{MugZeeARXIV2018} for details, including a schematic overview of connections to other methods.
\par
The numerical analysis of the \mbox{\textsf{DDMRes}}{} method has been carried out abstractly by Gopalakrishnan~\& Qiu~\cite{GopQiuMOC2014} in Hilbert spaces (see also~\cite[Section~3]{DahHuaSchWelSINUM2012}), and by Muga~\& Van~der~Zee~\cite{MugZeeARXIV2018} in smooth Banach spaces. A key requirement in these analyses is the \emph{Fortin} compatibility condition on the family of discrete subspace pairs~$(\mathbb{U}_n,\mathbb{V}_m)$ under consideration, which, once established, implies stability and quasi-optimal convergence of the method. In some sense, the Fortin condition is rather mild, since for a given~$\mathbb{U}_n$, there is the expectation that it will be satisfied for a sufficiently large~$\mathbb{V}_m$ (thereby making the discrete dual norm~$\norm{\cdot}_{(\mathbb{V}_m)^*}$ sufficiently close to~$\norm{\cdot}_{\mathbb{V}^*}$). Of course, whether this can be established depends crucially on the operator~$B$, therefore also on the particular weak formulation of the PDE that is being studied.
\par
The main contribution of this paper consists in the study of several elementary discrete subspace pairs~$(\mathbb{U}_n,\mathbb{V}_m)$ for the \mbox{\textsf{DDMRes}}{} method for weak advection--reaction, including proofs of Fortin compatibility in the above-mentioned Banach-space setting. It thereby provides the first application and corresponding analysis of~\mbox{\textsf{DDMRes}}{} in genuine (non-Hilbert) Banach spaces. In particular, for the given compatible pairs, \mbox{\textsf{DDMRes}}{} is thus a quasi-optimal method providing a near-best approximation in~$L^p(\Omega)$.
Note that our results do not cover DPG-type hybrid weak formulations (with a broken graph space~$\mathbb{V}$), so that our discrete spaces~$\mathbb{V}_m$ are globally conforming. Broken Banach-space settings will be treated in forthcoming work.
\par
We now briefly discuss some details of our results. To be able to carry out the analysis, our results focus on discrete subspace pairs~$(\mathbb{U}_n,\mathbb{V}_m)$, where $\mathbb{U}_n$ is a lowest-order finite element space on mesh~$\mcal{T}_n$ in certain specialized settings.
We first consider \emph{continuous linear} finite elements in combination with continuous finite elements of degree~$k$, i.e., $\mathbb{U}_n = \mathbb{P}^1_{\mathrm{cont}}(\mcal{T}_n)$ and~$\mathbb{V}_m = \mathbb{P}^k_{\mathrm{cont}}(\mcal{T}_n)$. The Fortin condition holds when $k\ge 2$, assuming, e.g.,~incompressible pure advection ($\operatorname{div} \bs{\beta} = \mu = 0$) in a one-dimensional setting. Interestingly, we demonstrate that the notorious \emph{Gibbs phenomenon} of spurious numerical over- and undershoots, commonly encountered while approximating discontinuous solutions with continuous approximations, can be \emph{eliminated} with the \mbox{\textsf{DDMRes}}{}~method upon $p\rightarrow 1^+$ (see Section~\ref{sec:Gibbs}), which is in agreement with previous findings on $L^1$-methods~\cite{GueSINUM2004, LavSINUM1989}.
\par
We then consider $\mathbb{U}_n = \mathbb{P}^0(\mcal{T}_n)$, that is, \emph{discontinuous piecewise-constant} approximations on arbitrary partitionings of the domain~$\Omega$ in~$\mathbb{R}^d$, $d\ge 1$.
It turns out that it is possible to define an \emph{optimal} test space~$\mathbb{S}_n := B^{-*}\mathbb{U}_n$ and subsequently prove Fortin's condition for any~$\mathbb{V}_m \supseteq \mathbb{S}_n$. This result essentially hinges on the fact that $\mathbb{U}_n$ is invariant under the $L^p$~duality map (see proof of Proposition~\ref{prop:AdvReac_compatible}).
Since the optimal test space is, however, not explicit, it requires in general the computation of an explicit basis (see Section~\ref{sec:optimalTestSpace}). Such computations may not be feasible in practice; in those cases, as an alternative, one could resort to a sufficiently-rich~$\mathbb{V}_m$, e.g.,~continuous linear finite elements on a sufficiently-refined submesh of the original mesh (cf.~\cite{BroDahSteMOC2018}).
\par
Interestingly however, under certain special, yet nontrivial situations, the optimal test space~$\mathbb{S}_n$ happens to coincide with a convenient finite element space. For example, in 2-D in the incompressible pure advection case with $\bs{\beta}$ piecewise constant on some partition, if $\mcal{T}_n$ is a triangular mesh of~$\Omega$ (compatible with the partition) and all triangles are flow-aligned, then we prove that $\mathbb{S}_n = \mathbb{P}^1_{\mathrm{conf}}(\mcal{T}_n)$, where $\mathbb{P}^1_{\mathrm{conf}}(\mcal{T}_n)$ refers to the space of piecewise-linear functions that are \emph{conforming} with respect to the graph space~$\mathbb{V}$. Numerical experiments in 2-D indeed confirm in this case the quasi-optimality of~\mbox{\textsf{DDMRes}}{} (see Section~\ref{sec:2DflowAlignedMesh}).
\par
In recent years, several similar methods for weak advection--reaction have appeared, all of which were in Hilbert-space settings (i.e., the solution space is~$L^2(\Omega)$) and use a broken weak formulation. Indeed, these include some of the initial DPG methods~\cite{DemGopCMAME2010, DemGopNMPDE2011, BuiDemGhaMOC2013}, which were proposed before the importance of Fortin's condition was clarified. Recently however, Broersen, Dahmen \&~Stevenson~\cite{BroDahSteMOC2018} studied a higher-order pair using standard finite-element spaces for the DPG method of weak advection--reaction. Under mild conditions on~$\bs{\beta}$, they proved Fortin's condition when~$\mathbb{U}_n$ consists of piecewise polynomials of degree~$k$, and~$\mathbb{V}_m$ consists of piecewise polynomials of higher degree over a sufficiently-deep refinement of the trial mesh. The extension of their proof, based on approximating the optimal test space, to any Banach-space setting seems nontrivial, since currently the concept of an optimal test space is in general absent in \mbox{\textsf{DDMRes}}{} in Banach spaces (cf.~\cite{MugZeeARXIV2018}), exceptions notwithstanding (such as the lowest-order piecewise-constant case discussed above).
\par
Let us finally point out that methods for weak advection--reaction are quite distinct from methods for \emph{strong} advection--reaction (which has its residual in~$L^p(\Omega)$ and a~priori demands more regularity on its solution). Indeed, there is a plethora of methods in the strong case; see, e.g., Ern \&~Guermond~\cite[Chapter~5]{ErnGueBOOK2004} and Guermond~\cite{GueSINUM2004}, all of which typically exhibit \emph{suboptimal} convergence behaviour when measured in~$L^p(\Omega)$. In the context of \emph{strong} advection--reaction the results by Guermond~\cite{GueM2AN1999} are noteworthy, who proved the Fortin condition for several pairs, consisting of a low-order finite element space and its enrichment with bubbles. These results however do not apply to weak advection--reaction. Similarly for the stability result by Chan, Evans \&~Qiu~\cite{ChaEvaQiuCAMWA2014}.
\par
The remainder of this paper is arranged as follows.
In Section~\ref{sec:prelim}, we first present preliminaries for the advection--reaction equation, allowing us to recall in Section~\ref{sec:weakAdvRea} the specifics of the well-posed Banach-space setting (cf.~Cantin~\cite{CanCR2017}). In particular, we provide a self-contained proof of the continuous inf-sup conditions using various properties of the $L^p$~duality map. Then, in Section~\ref{sec:AdvRea_discrete}, we consider the discrete problem corresponding to the \mbox{\textsf{DDMRes}}{}~method in the equivalent form of the monotone mixed method, and establish stability and quasi-optimality of the method, provided the Fortin condition holds. In Section~\ref{sec:applications}, we consider particular discrete subspace pairs~$(\mathbb{U}_n,\mathbb{V}_m)$. This section contains several proofs of Fortin conditions, as well as some illustrative numerical examples pertaining to the Gibbs phenomena (Section~\ref{sec:Gibbs}), optimal test space basis (Section~\ref{sec:optimalTestSpace}), and quasi-optimal convergence for 2-D advection (Section~\ref{sec:2DflowAlignedMesh}).
\section{Advection--reaction preliminaries} \label{sec:prelim}
For any dimension $d\geq1$, let $\Omega\subset\mathbb R^d$ be an open bounded domain, with Lipschitz boundary $\partial\Omega$ oriented by a unit outward normal vector $\bs{n}$. Let $\bs{\beta}\in L^\infty(\Omega)^d$ be an advection-field such that $\operatorname{div} \bs{\beta}\in L^\infty(\Omega)$ and let $\mu\in L^\infty(\Omega)$ be a (space-dependent) reaction coefficient. The advection-field splits the boundary~$\partial\Omega$ into an \emph{inflow}, \emph{outflow} and \emph{characteristic} part, which for continuous~$\bs{\beta}$ corresponds to
\begin{alignat*}{4} & \partial\Omega_- &&:=\big\{x\in\partial\Omega:& \; \bs{\beta}(x)\cdot\bs{n}(x) & <0 & \big\}\,, \\ & \partial\Omega_+ &&:=\big\{x\in\partial\Omega:& \; \bs{\beta}(x)\cdot\bs{n}(x) & >0 & \big\}\,, \\ & \partial\Omega_0 &&:=\big\{x\in\partial\Omega:& \; \bs{\beta}(x)\cdot\bs{n}(x) & =0 & \big\}\,, \end{alignat*}
respectively; see~\cite[Section~2]{BroDahSteMOC2018} for the definition of the parts in the more general case~$\bs{\beta}, \operatorname{div} \bs{\beta} \in L^\infty(\Omega)$ (which is based on the integration-by-parts formula~\eqref{eq:IntByP}).
\par
Given a~possibly \emph{rough} source~$f_\circ$ and inflow data~$g$, the advection--reaction model is:
\begin{subequations} \label{eq:strong_advreact} \begin{empheq}[left=\left\{\,,right=\right.,box=]{alignat=2} \bs{\beta}\cdot \nabla u + \mu u &= f_\circ &\qquad & \text{in } \Omega\,, \\
u &= g && \text{on } \partial\Omega_-. \end{empheq} \end{subequations}
Before we give a weak formulation for this model and discuss its well-posedness, we first introduce relevant assumptions and function spaces. We have in mind a weak setting where~$u\in L^p(\Omega)$, for any~$p$ in~$(1,\infty)$. Therefore, throughout this section, let~$1<p<\infty$ and let~$q\in (1,\infty)$ denote the \emph{conjugate} exponent of~$p$, satisfying the relation ${p}^{-1}+{q}^{-1}=1$.
\par
The following assumptions are natural extensions of the classical ones in the Hilbert case.
\begin{assumption}[Friedrichs' positivity assumption] \label{assump:mu_0} There exists a constant~$\mu_0>0$ for which: \begin{equation}\label{eq:beta-mu} \mu(x) -{1\over p}\operatorname{div}\bs{\beta}(x)\geq \mu_0, \quad\hbox{ a.e. $x$ in } \Omega. \end{equation} \end{assumption}
\begin{assumption}[Well-separated in- and outflow] \label{assump:split}
The in- and outflow boundaries are \emph{well-separated}, i.e., $\overline{\partial\Omega_-}\cap\overline{\partial\Omega_+}=\emptyset$ and, by partition of unity, there exists a function \begin{equation}\label{eq:cutoff} \left\{ \begin{array}{l} \phi\in C^\infty(\overline\Omega) \hbox{ such that:}\\ \quad \phi(x) =1, \quad\forall x\in \partial\Omega_-,\\ \quad \phi(x) =0, \quad\forall x\in \partial\Omega_+. \end{array}\right. \end{equation} \end{assumption}
\par
For brevity we use the following notation for norms and duality pairings:
\begin{subequations} \label{eq:s-norm} \begin{alignat}{4}
& \|\cdot\|_\infty &&= \operatorname*{ess~sup}_{x\in \Omega}|\cdot(x)|\,, \\
& \|\cdot\|_\rho &&=\bigg(\int_\Omega |\cdot|^\rho\bigg)^{1/\rho}\,, &\qquad & \text{for } 1\le \rho < \infty\,, \\ & \dual{\cdot,\cdot}_{\rho,\sigma} &&= \bigdual{\cdot,\cdot}_{L^\rho(\Omega),L^\sigma(\Omega)}\,, &\qquad & \text{for } 1< \rho < \infty\,, \quad \sigma = \frac{\rho}{\rho-1}\,. \end{alignat} \end{subequations}
\begin{definition}[Graph space]
For $1\le \rho \le \infty$, the \emph{graph space} is defined by $$
W^\rho(\bs{\beta};\Omega) :=\Big\{w\in L^\rho(\Omega) : \bs{\beta}\cdot\nabla w\in L^\rho(\Omega) \Big\},
$$
endowed with the norm
\begin{alignat*}{2}
\|w\|^2_{\rho,\bs{\beta}}:=
\|w\|^2_\rho+\|\bs{\beta}\cdot\nabla w\|^2_\rho\,. \end{alignat*}
The ``adjoint'' norm is defined by
\begin{alignat}{2} \label{eq:beta-norm}
\enorm{w}^2_{\rho,\bs{\beta}}:=\|w\|^2_\rho+\|\operatorname{div}(\bs{\beta} w) \|^2_\rho\,\,. \end{alignat}
These norms are equivalent, which can be shown by means of the identity
\begin{alignat}{2}\label{eq:beta_identity} \operatorname{div}(\bs{\beta} w)=\operatorname{div}(\bs{\beta})w + \bs{\beta}\cdot\nabla w\,. \end{alignat}
\end{definition}
\begin{remark}[Graph-spaces and traces] \label{rem:traces} As a consequence of Assumption~\ref{assump:split}, traces on the space~$W^\rho(\bs{\beta};\Omega)$ are well-defined as functions in the space
\begin{alignat*}{2}
L^\rho(|\bs{\beta}\cdot\bs{n}|;\partial\Omega):=\left\{w \hbox{ measurable in } \partial\Omega: \int_{\partial\Omega}|\bs{\beta}\cdot\bs{n}|\,|w|^\rho < +\infty\right\}, \end{alignat*}
and, moreover, for all $w\in W^\rho(\bs{\beta};\Omega)$ and all $v\in W^\sigma(\bs{\beta};\Omega)$, the following integration-by-parts formula holds:
\begin{alignat}{2} \label{eq:IntByP} \int_\Omega \Big( (\bs{\beta}\cdot\nabla w)v+(\bs{\beta}\cdot\nabla v)w +\operatorname{div}(\bs{\beta})wv \Big) = \int_{\partial\Omega} (\bs{\beta}\cdot\bs{n})wv\,. \end{alignat}
The proof of these results is a straightforward extension of the Hilbert-space case given in, e.g., Di~Pietro~\& Ern~\cite[Section~2.1.5]{DipErnBOOK2012} (cf.~Dautray~\& Lions~\cite[Chapter~XXI, \S{}2, Section~2.2]{DauLioBOOK1993}~and~Cantin~\cite[Lemma~2.2]{CanCR2017}). We can thus define the following two closed subspaces, which are relevant for prescribing boundary conditions at $\partial\Omega_+$ or $\partial\Omega_-$:
\begin{alignat}{2}\label{eq:Wbeta} W^\rho_{0,\pm}(\bs{\beta};\Omega)
:=\Big\{w\in W^\rho(\bs{\beta};\Omega) : w\big|_{\partial\Omega_\pm}=0 \Big\}\,. \end{alignat}
\end{remark}
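The integration-by-parts formula \eqref{eq:IntByP} can be sanity-checked symbolically in one dimension, where $\Omega=(0,1)$ and $\bs{n}=\pm 1$, since the integrand is an exact derivative. The smooth functions below are of course made up for illustration:

```python
import sympy as sp

# 1-D check of the integration-by-parts formula: for Omega = (0,1),
#   int_0^1 ( (b w') v + (b v') w + b' w v ) dx = [b w v]_0^1.
x = sp.symbols('x')
b = 1 + x**2          # advection field beta (made up, smooth)
w = sp.sin(x)         # element of the graph space (made up)
v = sp.exp(-x)        # element of the graph space (made up)

integrand = b*sp.diff(w, x)*v + b*sp.diff(v, x)*w + sp.diff(b, x)*w*v
exact_derivative = sp.diff(b*w*v, x)            # the integrand is d/dx (b w v)
lhs = sp.integrate(exact_derivative, (x, 0, 1))
rhs = (b*w*v).subs(x, 1) - (b*w*v).subs(x, 0)   # boundary term
```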
\begin{remark}[Non-separated in- and outflow]
The requirement of separated in- and outflow can be removed, but different trace operators have to be introduced~\cite{GopMonSepCAMWA2015}.
\end{remark}
\par
The case when $\mu\equiv 0$ and $\operatorname{div}\bs{\beta}\equiv 0$ is special,
since Assumption~\ref{assump:mu_0} is not satisfied. An important tool for the analysis of this case is the so-called \emph{curved Poincar\'e-Friedrichs inequality}; see Lemma~\ref{lem:Poincare_Friedrichs} below. Its proof relies on the following assumption (cf.~\cite{AzePhD1996, AzePouCR1996}).
\begin{assumption}[$\Omega$-filling advection]\label{ass:omega-filling}
Let~$1<\rho<\infty$. If $\mu\equiv 0$ and $\operatorname{div}\bs{\beta}\equiv 0$, the
advection-field~$\bs{\beta}$ is \emph{$\Omega$-filling}, by which we mean that there exist $z_+,z_- \in W^\infty(\bs{\beta};\Omega)$ with~$\norm{z_+}_\infty, \norm{z_-}_\infty>0$,
such that
\begin{equation}\left\{ \begin{array}{rl} \label{eq:omega-filling} -\bs{\beta}\cdot\nabla z_\pm = \rho & \quad \text{in } \Omega\,, \\
z_{\pm} = 0 &\quad \text{on } \partial\Omega_{\pm}\,. \end{array}\right.
\end{equation}
\end{assumption}
\par
\begin{remark}[Method of characteristics]
Assumption~\ref{ass:omega-filling} holds, for example, if $\bs{\beta}$ is regular enough so that the method of characteristics can be employed to solve for~$z$ (cf.~Dahmen et al.~\cite[Remark 2.2]{DahHuaSchWelSINUM2012}).
\end{remark}
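As a concrete (hypothetical) instance: for the constant field $\bs{\beta}=(1,0)$ on the unit square, integrating \eqref{eq:omega-filling} along characteristics gives $z_+(x,y)=\rho\,(1-x)$, which vanishes on the outflow boundary $\{x=1\}$. A quick symbolic check:

```python
import sympy as sp

# Method-of-characteristics check (made-up constant field): for beta = (1, 0)
# on the unit square, z_+(x, y) = rho*(1 - x) satisfies -beta . grad z_+ = rho
# with z_+ = 0 on the outflow boundary {x = 1}.
x, y, rho = sp.symbols('x y rho', positive=True)
z = rho * (1 - x)
beta = (sp.Integer(1), sp.Integer(0))
residual = -(beta[0]*sp.diff(z, x) + beta[1]*sp.diff(z, y)) - rho
```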
\begin{lemma}[Curved Poincar\'e--Friedrichs inequality] \label{lem:Poincare_Friedrichs} Let~$1<\rho<\infty$. Under the hypothesis that Assumption~\ref{ass:omega-filling} holds true, there exists a constant~$C_{\text{\tiny$\mathrm{PF}$}}>0$ such that \begin{alignat}{2}
\|w\|_\rho \leq C_{\text{\tiny$\mathrm{PF}$}} \|\bs{\beta}\cdot\nabla w\|_\rho\,\,, \quad \forall w\in W^\rho_{0,\pm}(\bs{\beta};\Omega)\,. \end{alignat} \end{lemma}
\begin{proof}
For the Hilbert-space case ($\rho=2$), the proof can be found in~\cite{AzePouCR1996}. For completeness, we reproduce here the general $\rho$-version. \par Without loss of generality take $w\in W^\rho_{0,-}(\bs{\beta};\Omega)$, and let~$z=z_+\in W^\infty_{0,+}(\bs{\beta};\Omega)$ as in Assumption~\ref{ass:omega-filling}.
Notice the important identity
\begin{alignat}{2} \label{eq:rho_id}
z \,\bs{\beta}\cdot\nabla(|w|^\rho)=\operatorname{div}(\bs{\beta} z |w|^\rho)-|w|^\rho\bs{\beta}\cdot\nabla z . \end{alignat} Let $\sigma={\rho/(\rho-1)}$.
Take $\phi_w= \rho z |w|^{\rho-1}\operatorname{sign}(w)\in L^{\sigma}(\Omega)$, which satisfies \begin{alignat}{2}\label{eq:phi_u}
\|\phi_w\|_{\sigma}\leq \rho \,\| z \|_\infty
\|w\|_\rho^{\rho-1}\,\,. \end{alignat} Thus, \begin{alignat}{2} \tag{by duality}
\|\bs{\beta}\cdot\nabla w\|_\rho & = \sup_{0\neq \phi\in L^{\sigma}(\Omega)}
{\bigdual{\bs{\beta}\cdot\nabla w,\phi}_{\rho,\sigma}\over \|\phi\|_{\sigma}}\\ \tag{since $\phi_w\in L^{\sigma}(\Omega)$}
& \geq
{\bigdual{\bs{\beta}\cdot\nabla w,\phi_w}_{\rho,\sigma}\over \|\phi_w\|_{\sigma}}\\
\tag{by \eqref{eq:rho_id} and~$\int_{\partial\Omega} (\bs{\beta}\cdot\bs{n}) z |w|^\rho = 0$} & = {\displaystyle
-\int_\Omega|w|^\rho\bs{\beta}\cdot\nabla z \over \|\phi_w\|_{\sigma}}\\ \tag{by Assumption~\ref{ass:omega-filling} and~\eqref{eq:phi_u}}
& \geq {\|w\|_\rho\over \| z \|_\infty}. \end{alignat}
Hence, $C_{\text{\tiny$\mathrm{PF}$}}=\| z \|_\infty$. If $w\in W^\rho_{0,+}(\bs{\beta};\Omega)$, take~$z=z_{-}\in W^\infty_{0,-}(\bs{\beta};\Omega)$. \end{proof}
The proof of Lemma~\ref{lem:Poincare_Friedrichs} shows that~$C_{\text{\tiny$\mathrm{PF}$}}=\norm{z_\pm}_\infty$, with~$z_\pm$ defined in~\eqref{eq:omega-filling}, hence $C_{\text{\tiny$\mathrm{PF}$}}$ depends on~$\Omega$, $\bs{\beta}$ and~$\rho$.
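In one dimension with $\bs{\beta}=1$ on $(0,1)$ (so $\operatorname{div}\bs{\beta}=0$), Assumption~\ref{ass:omega-filling} gives $z_+(x)=\rho\,(1-x)$ and hence $C_{\text{\tiny$\mathrm{PF}$}}=\rho$, and the inequality reads $\|w\|_\rho\le\rho\,\|w'\|_\rho$ for $w(0)=0$. This can be checked numerically; the test functions below are made up:

```python
import numpy as np

# 1-D check of the curved Poincare--Friedrichs inequality for beta = 1 on (0,1):
# z_+(x) = rho*(1 - x) solves -z' = rho with z(1) = 0, so C_PF = ||z||_inf = rho.
rho = 3.0
x = np.linspace(0.0, 1.0, 20001)
dx = x[1] - x[0]

def lp_norm(f):
    # Riemann-sum approximation of the L^rho norm on (0,1)
    return (np.sum(np.abs(f)**rho) * dx)**(1.0/rho)

ok = True
for w in [np.sin(np.pi*x/2), x**2, x*np.exp(x)]:   # each vanishes at the inflow x = 0
    dw = np.gradient(w, x)                         # beta . grad w = w' in 1-D
    ok = ok and (lp_norm(w) <= rho * lp_norm(dw) + 1e-8)
```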
\begin{remark}[Weaker statements] Under a weaker condition than Assumption~\ref{assump:mu_0}, Lemma~\ref{lem:Poincare_Friedrichs} can be generalized to the following situations: \begin{alignat}{2}
\|w\|_p & \lesssim \|\mu w + \bs{\beta}\cdot\nabla w\|_p\,, \quad \forall w\in W^p_{0,-}(\bs{\beta};\Omega)\,,\\
\|v\|_q & \lesssim \|\mu v - \operatorname{div}(\bs{\beta} v)\|_q\,, \quad \forall v\in W^q_{0,+}(\bs{\beta};\Omega)\,. \end{alignat} Indeed, it is enough to verify the existence of a constant $\mu_0^*>0$ and a Lipschitz continuous function $\zeta(x)$ such that: \begin{alignat}{2}\label{eq:weak beta-mu}
\mu(x) -{1\over p}\operatorname{div}\bs{\beta}(x) - {1\over p}\bs{\beta}(x)\cdot\nabla\zeta(x)
\geq \mu_0^*\,, \quad\hbox{ a.e. $x$ in } \Omega. \end{alignat} These statements can be inferred from the recent work of Cantin~\cite{CanCR2017}. Notice that if Assumption~\ref{assump:mu_0} is satisfied, then~\eqref{eq:weak beta-mu} holds with $\zeta(x)\equiv 0$ and $\mu_0^*=\mu_0$. \end{remark}
\section{A weak setting for advection--reaction}\label{sec:weakAdvRea}
The weak setting for the advection--reaction problem~\eqref{eq:strong_advreact} considers a trial space $\mathbb U:=L^p(\Omega)$ endowed with the $\|\cdot\|_p$-norm (see~\eqref{eq:s-norm}), and a test space $\mathbb V:=W^q_{0,+}(\bs{\beta};\Omega)$ endowed with the norm $\enorm{\cdot}_{q,\bs{\beta}}$ (see~\eqref{eq:beta-norm}). The weak formulation reads as follows:
\begin{empheq}[left=\left\{\,,right=\right.,box=]{alignat=2} \notag & \text{Find } u\in \mathbb U=L^p(\Omega) : \\ \label{eq:weak_advection-reaction} & \quad \dual{Bu,v}_{\mathbb{V}^*,\mathbb{V}}
= \dual{f,v}_{\mathbb V^*,\mathbb V}\,\,,\quad \forall v\in \mathbb V=W^q_{0,+}(\bs{\beta};\Omega)\,,
\end{empheq}
where $B:\mathbb{U}\to\mathbb{V}^*$ is defined by \begin{equation}\label{eq:B_AdvReac} \dual{Bw,v}_{\mathbb{V}^*,\mathbb{V}}:=\int_\Omega w\big (\mu v - \operatorname{div}(\bs{\beta} v) \big)\,,\qquad\forall w\in \mathbb{U}, \,\forall v\in\mathbb{V}, \end{equation} and
the right-hand side~$f$ is related to the original PDE data~$(f_\circ,g)$ via:
\begin{alignat*}{2}
\dual{f,v}_{\mathbb V^*,\mathbb V}
= \int_\Omega f_\circ v + \int_{\partial\Omega_-} |\bs{\beta}\cdot\bs{n}| g v \,, \end{alignat*}
where $f_\circ$ is given in (for example) $L^p(\Omega)$ and $g$ is given in $L^p(|\bs{\beta}\cdot\bs{n}|;\partial\Omega)$. A rougher~$f_\circ$ is allowed, as long as $f\in [W^q_{0,+}(\bs{\beta};\Omega)]^*$.
\begin{remark}[Boundedness] \label{rem:mu_continuity} The bilinear form in~\eqref{eq:weak_advection-reaction} is bounded with constant
$M_\mu:=\sqrt{1+\|\mu\|_\infty^2}$. Indeed,
\begin{alignat*}{2}
\left|\displaystyle\int_\Omega u\left(\mu v-\operatorname{div}(\bs{\beta} v)\right)\right|\leq \|u\|_p \bignorm{\mu v-\operatorname{div}(\bs{\beta} v)}_q
\leq M_\mu\,\|u\|_p\,\enorm{v}_{q,\bs{\beta}}\,. \end{alignat*}
\end{remark}
\par
The following result states the well-posedness of problem~\eqref{eq:weak_advection-reaction}. Although this result can be inferred from the recent result by Cantin~\cite{CanCR2017}, we provide a slightly alternative proof based on establishing the adjoint inf-sup conditions using properties of the $L^p$~duality map. For a classical proof of well-posedness in a similar Banach-space setting, we refer to~Beir\~{a}o Da Veiga~\cite{BeiRDM1987, BeiRSMUP1988} (cf.~\cite{BarLerNedCPDE1979} and~\cite[Chapter~XXI]{DauLioBOOK1993}).
\begin{theorem}[Weak advection--reaction: Well-posedness] \label{thm:avdreact_wellposed} Let~$1<p<\infty$ and $p^{-1} + q^{-1} = 1$. Let $\Omega\subset \mathbb{R}^d$ be an open bounded domain with Lipschitz boundary. Let $\bs{\beta}:\Omega\to \mathbb{R}^d$ and $\mu:\Omega\to \mathbb{R}$ be advection and reaction coefficients (respectively) satisfying either Friedrichs' positivity Assumption~\ref{assump:mu_0}, or the $\Omega$-filling Assumption~\ref{ass:omega-filling}. Assume further that the in- and outflow boundaries are well~separated (Assumption~\ref{assump:split}). \begin{enumerate}[(i)] \item For any $f\in \mathbb V^*=[W^q_{0,+}(\bs{\beta};\Omega)]^*$, there exists a unique solution $u\in L^p(\Omega)$ to the weak advection--reaction problem~\eqref{eq:weak_advection-reaction}.
\item In the case that Assumption~\ref{assump:mu_0} holds true, we have the following a priori bound: \begin{equation}\label{eq:C_mu}
\|u\|_p\leq \frac{1}{\gamma_{B}}\,\|f\|_{\mathbb V^*}\,, \qquad\hbox{ with} \quad \gamma_B = \sqrt{\mu_0^2\over
{1+(\mu_0+\|\mu\|_\infty)^2}} \end{equation} and $\mu_0>0$ being the constant in Assumption~\ref{assump:mu_0}. \item[(ii$\star$)] On the other hand, in the case where Assumption~\ref{ass:omega-filling} holds true, we also have the a~priori bound~\eqref{eq:C_mu}, but $\gamma_B$ in~\eqref{eq:C_mu} must be replaced by the constant $1/(1+C_{\text{\tiny$\mathrm{PF}$}})$, where $C_{\text{\tiny$\mathrm{PF}$}}>0$ is the Poincar\'e--Friedrichs constant in Lemma~\ref{lem:Poincare_Friedrichs}.
\end{enumerate} ~ \end{theorem}
\begin{proof} See Section~\ref{sec:avdreact_wellposed}. \end{proof}
\section{The general discrete problem} \label{sec:AdvRea_discrete} We now consider the approximate solution of~\eqref{eq:weak_advection-reaction} given by the \mbox{\textsf{DDMRes}}{} method, i.e., given finite-dimensional subspaces $\mathbb U_n\subset \mathbb U=L^p(\Omega)$ and $\mathbb{V}_m\subset\mathbb{V}=W^q_{0,+}(\bs{\beta};\Omega)$, we aim to find $u_n\in\mathbb U_n$ such that: \begin{equation} \label{eq:discrete_residual} u_n=\arg\min_{w_n\in \mathbb{U}_n}\bignorm{f-Bw_n}_{(\mathbb{V}_m)^*}\,\,, \end{equation}
where the discrete dual norm is given by
\begin{alignat*}{2}
\norm{\cdot}_{(\mathbb{V}_m)^*} =
\sup_{0\neq v_m\in \mathbb{V}_m}{\left< \,\cdot\, ,v_m\right>_{(\mathbb{V}_m)^*,\mathbb{V}_m}\over \|v_m\|_{\mathbb{V}}}
\,. \end{alignat*}
As proven in~\cite[Theorem~4.A]{MugZeeARXIV2018}, the minimization problem~\eqref{eq:discrete_residual} is equivalent to the following monotone mixed method:
\begin{subequations} \label{eq:AdvReac_discrete} \begin{empheq}[left=\left\{\,,right=\right.,box=]{alignat=3}\notag & \text{Find } (r_m,u_n)\in \mathbb V_m\times\mathbb U_n \text{ such that} \negquad\negquad \\ \label{eq:AdvReac_discrete_a} & \quad \bigdual{J_\mathbb V(r_m),v_m}_{\mathbb V^*,\mathbb V}
+\<Bu_n,v_m\right>_{\mathbb V^*,\mathbb V}
&& =\dual{f,v_m}_{\mathbb V^*,\mathbb V}\,,
&\quad & \forall v_m\in \mathbb V_m\,, \\\label{eq:AdvReac_discrete_b} & \quad \<Bw_n,r_m\right>_{\mathbb V^*,\mathbb V}
&& = 0\,,
&\quad & \forall w_n\in \mathbb U_n\,. \end{empheq} \end{subequations} In~\eqref{eq:AdvReac_discrete_a},
$J_\mathbb V:\mathbb{V}\to\mathbb{V}^*$ denotes the (monotone and nonlinear) duality map of $\mathbb V=W^q_{0,+}(\bs{\beta};\Omega)$ defined by the action:
\begin{alignat}{2} \label{adv:J} \bigdual{J_\mathbb V(r),v}_{\mathbb V^*,\mathbb V} :=\bigdual{J_q(r),v}_{p,q} +\bigdual{J_q\big(\operatorname{div}(\bs{\beta} r)\big),\operatorname{div}(\bs{\beta} v)}_{p,q} \qquad \forall r,v\in \mathbb{V}\,, \end{alignat}
where $J_q(v):= \|v\|_q^{2-q}|v|^{q-1}\operatorname{sign} v \in L^p(\Omega)$ is the duality map of~$L^q(\Omega)$. The solution $u_n\in\mathbb{U}_n$ of~\eqref{eq:AdvReac_discrete} is exactly the residual minimizer of~\eqref{eq:discrete_residual}, while $r_m\in\mathbb{V}_m$ is a \emph{representative of the discrete residual}, i.e., $J_\mathbb{V}(r_m)=f-Bu_n$ in $(\mathbb{V}_m)^*$.
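The defining properties of the $L^q$ duality map, namely $\dual{J_q(v),v}=\norm{v}_q^2$ and $\norm{J_q(v)}_p=\norm{v}_q$, are easy to verify in a discrete setting. The sketch below treats a plain vector as a function on a grid with unit quadrature weights (an illustrative simplification, not the paper's discretization):

```python
import numpy as np

# Discrete sketch of the L^q duality map  J_q(v) = ||v||_q^{2-q} |v|^{q-1} sign(v).
# With unit weights, <.,.> is the plain dot product and ||.||_q a vector q-norm.
q = 3.0
p = q / (q - 1.0)                      # conjugate exponent

def J(v, q):
    nq = np.sum(np.abs(v)**q)**(1.0/q)
    return nq**(2.0-q) * np.abs(v)**(q-1.0) * np.sign(v)

v = np.array([1.0, -2.0, 0.5, 3.0])    # made-up grid values
norm_q = np.sum(np.abs(v)**q)**(1.0/q)
norm_Jp = np.sum(np.abs(J(v, q))**p)**(1.0/p)
```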
The well-posedness of the discrete method~\eqref{eq:AdvReac_discrete} relies on the well-posedness of the continuous problem~\eqref{eq:weak_advection-reaction} (see Theorem~\ref{thm:avdreact_wellposed}), together with the following \emph{Fortin} assumption. \begin{assumption}[Fortin condition] \label{assumpt:Fortin} Let $B:\mathbb{U}\to\mathbb{V}^*$ be a bounded linear operator and let $\{(\mathbb{U}_n,\mathbb{V}_m)\}$ be a family of \emph{discrete} subspace pairs, where $\mathbb{U}_n\subset\mathbb{U}$ and $\mathbb{V}_m\subset\mathbb{V}$. For each pair $(\mathbb{U}_n,\mathbb{V}_m)$ in this family, there exists an operator $\Pi_{n,m}:\mathbb{V}\to \mathbb{V}_m$ and a constant $C_\Pi>0$ (independent of $n$ and $m$) such that the following conditions are satisfied:
\begin{subequations} \label{eq:Fortin} \begin{empheq}[left=\left\{,right=\right.,box=]{alignat=2}
\label{eq:Fortin_a} \,\, &
\|\Pi_{n,m} v\|_\mathbb V\leq C_\Pi \|v\|_\mathbb V\,\,, & \quad & \forall v\in\mathbb V\,,
\\ \label{eq:Fortin_c} & \<Bw_n,v-\Pi_{n,m} v\right>_{\mathbb{V}^\ast,\mathbb{V}}=0, && \forall w_n\in \mathbb{U}_n,\,\forall v\in \mathbb{V}. \end{empheq} \end{subequations} For simplicity, we write $\Pi$ instead of~$\Pi_{n,m}$.
\end{assumption}
\begin{theorem}[Weak advection--reaction: \mbox{\textsf{DDMRes}}{} method]\label{thm:AdvReac_discrete} Under the conditions of Theorem~\ref{thm:avdreact_wellposed}, let the pair $(\mathbb{U}_n,\mathbb{V}_m)$ satisfy the (Fortin) Assumption~\ref{assumpt:Fortin} with operator~$B$ given by~\eqref{eq:B_AdvReac}.
\begin{enumerate}[(i)] \item There exists a unique solution $(r_m,u_n)$ to~\eqref{eq:AdvReac_discrete}, which satisfies the a~priori bounds: \begin{equation}\label{eq:AdvReac_apriori_bounds}
\enorm{r_m}_{q,\bs{\beta}}\leq \|f\|_{\mathbb{V}^*}\qquad\hbox{ and }\qquad
\|u_n\|_p \leq \widetilde{C}\, \|f\|_{\mathbb{V}^*}\,, \end{equation} with $\widetilde{C} := C_\Pi\,\left(1+C_{\text{\tiny$\mathrm{AO}$}}(\mathbb{V})\right)/ \gamma_B\,$. \item Moreover, we have the a~priori error estimate: \begin{alignat}{2}\label{eq:apriori_AdvReac}
\|u-u_n\|_p & \leq C \inf_{w_n\in\mathbb U_n}\|u-w_n\|_p\,,
\end{alignat} where
$\ds{C=\min\Big\{ 2^{\left|{2\over p}-1\right|} M_\mu \widetilde{C} \, , \, 1+M_\mu\widetilde{C} \Big\}}\,$.
\end{enumerate} The constants involved are: $C_\Pi$ which is given in~Assumption~\ref{assumpt:Fortin}, the boundedness constant $M_\mu$ given in Remark~\ref{rem:mu_continuity}, the stability constant $\gamma_B$ given in~\eqref{eq:C_mu} (see also the statement (ii$\star$) in Theorem~\ref{thm:avdreact_wellposed}), and the geometrical constant $C_{\text{\tiny$\mathrm{AO}$}}(\mathbb{V})$ (for $\mathbb{V}=W^q_{0,+}(\bs{\beta};\Omega) $) defined in~\cite[Definition~2.14]{MugZeeARXIV2018}. \end{theorem}
\begin{proof}
Statement~(i) directly follows from~\cite[Theorem~4.B]{MugZeeARXIV2018} applied to the current situation, while statement~(ii) follows from~\cite[Theorem~4.D]{MugZeeARXIV2018}, which can be applied since the spaces $\mathbb U=L^p(\Omega)$ and $\mathbb{V}=W^q_{0,+}(\bs{\beta};\Omega)$ are strictly convex and reflexive for $1<p,q<+\infty$, as are their dual spaces $\mathbb U^*$ and $\mathbb V^*$. The factor $2^{\left|{2\over p}-1\right|}$ is the value of the Banach--Mazur constant~$C_{\text{\tiny$\mathrm{BM}$}}(\mathbb{U})$ (appearing in~\cite[Theorem~4.D]{MugZeeARXIV2018}) for $\mathbb{U} = L^p(\Omega)$; see~\cite[Section~5]{SteNM2015}.
\end{proof}
\begin{remark}[Finite elements] \label{rem:advFEM} Theorem~\ref{thm:AdvReac_discrete} implies \emph{optimal convergence rates} in~$L^p(\Omega)$ for finite element subspaces $\mathbb{U}_n \equiv \mathbb{U}_h$, provided $C_\Pi$ is uniformly bounded.
For example, on a sequence of approximation subspaces $\{\mathbb{P}^k(\mcal{T}_h)\}_{h>0}$ of piecewise polynomials of fixed degree~$k$ on quasi-uniform shape-regular meshes~$\mcal{T}_h$ with mesh-size parameter~$h$, well-known best-approximation estimates (see, e.g.,~\cite[Section~4.4]{BreScoBOOK2008}, \cite[Section~1.5]{ErnGueBOOK2004} and~\cite{ErnGueM2AN2017}) imply
\begin{alignat*}{2}
\norm{u-u_n}_p \lesssim
\inf_{w_h\in\mathbb U_h}\|u-w_h\|_p\lesssim h^s \snorm{u}_{W^{s,p}(\Omega)}\,,
\qquad \text{for } 0\leq s\leq k+1\,, \end{alignat*}
where $\snorm{\cdot}_{W^{s,p}(\Omega)}$ denotes a standard semi-norm of~$W^{s,p}(\Omega)$ (e.g., of Sobolev--Slobodeckij type). For a relevant regularity result in~$W^{1,\rho}(\Omega)$, with~$\rho \ge 2$, see Girault \&~Tartar~\cite{GirTarCR2010} (see also~\cite{PiaARXIV2016}).
\end{remark}
\section{Applications} \label{sec:applications}
In this section we apply the general discrete method~\eqref{eq:AdvReac_discrete} to particular choices of discrete subspace pairs~$(\mathbb{U}_n, \mathbb{V}_m)$ involving low-order finite-element spaces.
\par
For simplicity, throughout this section $\Omega \subset \mathbb{R}^d$ will be a \emph{polyhedral} domain and $\mcal{T}_n$ will denote a finite partition of~$\Omega$, i.e., $\mcal{T}_n = \{T\}$ consists of a finite number of non-overlapping elements~$T$ for which~$\overline{\Omega} = \bigcup_{T\in \mcal{T}_n} \overline{T}$.
\subsection{The pair $\mathbb{P}^1_\mathrm{cont}(\mcal{T}_n)$~-~$\mathbb{P}^k_\mathrm{cont}(\mcal{T}_n)$: Eliminating the Gibbs phenomena} \label{sec:Gibbs}
By first considering \emph{continuous} finite elements for~$\mathbb{U}_n$, we briefly illustrate how the discrete method~\eqref{eq:AdvReac_discrete} eliminates the well-known \emph{Gibbs phenomenon} when approximating discontinuous solutions.
For simplicity, consider the advection--reaction problem~\eqref{eq:strong_advreact} with $\Omega \equiv (-1,1)\subset \mathbb{R}$, $\beta = 1$, $\mu = 0$, $g=-1$ and let the source~$f_\circ$ be $2 \delta_0$, where $\delta_0$ is the \emph{Dirac delta} at~$x=0$, i.e.,
\begin{subequations} \label{eq:heavi} \begin{empheq}[left=\left\{\,,right=\right.,box=]{alignat=2}
u'(x) &= 2 \delta_0(x)\,, \qquad \forall x\in (-1,1) \,, \\
u(-1) &=-1\,. \end{empheq} \end{subequations}
Notice that the exact solution of~\eqref{eq:heavi} corresponds to the sign of~$x$:
\begin{alignat*}{2}
u(x) & = \operatorname{sign}(x) := \left\{
\begin{array}{ll}
-1 & \hbox{if } x<0,\\
\phantom{-} 1 & \hbox{if } x>0.
\end{array}
\right. \end{alignat*}
\par
We endow $\mathbb{V} = W^q_{0,+}(\bs{\beta};\Omega) = W^{1,q}_{0,\{1\}}(\Omega)$ with the norm $\|\cdot\|_\mathbb{V}=\|(\cdot)'\|_q$, which simplifies the duality map~\eqref{adv:J} to a normalized \emph{$q$-Laplace operator}:
\begin{alignat}{2} \label{adv:J1D}
\bigdual{J_\mathbb{V}(r),v}_{\mathbb{V}^*,\mathbb{V}} =
\bigdual{J_q(r'),v'}_{p,q}
= \norm{r'}_q^{2-q} \Bigdual{ |r'|^{q-1} \operatorname{sign}(r')\,,\, v' }_{p,q} \,. \end{alignat}
Moreover, in this setting, it is not difficult to show that residual minimization in $[W^{1,q}_{0,\{1\}}(\Omega)]^*$ now coincides with finding the \emph{best $L^p$-approximation} to~$\operatorname{sign} (x)$. The Gibbs phenomenon for best $L^p$-approximation was studied analytically by Saff~\& Tashev~\cite{SafTasEJA1999}, using continuous piecewise-linear approximations on $n$~equal-sized elements. They showed that the overshoot next to a discontinuity \emph{remains} as $n\rightarrow \infty$ whenever~$p>1$; remarkably, however, the \emph{overshoot tends to zero as~$p \rightarrow 1^+$}.
\par
\begin{figure}
\caption{The best $L^2$-approximation of $u(x) = \operatorname{sign}(x)$ displays the Gibbs phenomenon: The overshoot next to the discontinuity persists on any mesh.}
\label{fig:Gibbs1}
\end{figure}
To illustrate these findings, we plot in Figure~\ref{fig:Gibbs1} the best $L^2$-approximation using continuous piecewise-linears for various mesh-size parameters~$h$. Clearly, the overshoots remain present, signifying the Gibbs phenomenon. Next, in Figure~\ref{fig:Gibbs2} we plot the solution to ideal residual minimization (i.e., in the so-called \emph{ideal} case where $\mathbb{V}_m = \mathbb{V}$) on a fixed mesh consisting of nine elements, for different values of~$p>1$.\footnote{These plots were obtained by using the analytical results by Saff~\& Tashev for the $L^p$-approximation of~$\operatorname{sign}(x)$.} In Figure~\ref{fig:Gibbs2}, we also plot the corresponding ideal residual~$r'(x)$ as defined by the mixed formulation~\eqref{eq:AdvReac_discrete} in the case where $\mathbb{V}_m = \mathbb{V}$.
It can be shown that in this ideal 1-D situation $r' =\|u_n-u\|_p^{2-p}|u_n-u|^{p-1}\operatorname{sign}(u_n-u)$. The plots in Figure~\ref{fig:Gibbs2} clearly illustrate the elimination of the Gibbs phenomenon as~$p\rightarrow 1^+$.
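The persistent overshoot of the best $L^2$-approximation (Figure~\ref{fig:Gibbs1}) is easy to reproduce: for $p=2$, residual minimization reduces to the linear normal equations with the piecewise-linear mass matrix. A minimal numpy sketch (uniform mesh with the jump at a node; illustrative only, not the code used for the figures):

```python
import numpy as np

def l2_projection_sign(n):
    """Best L^2-approximation of sign(x) on (-1,1) by continuous
    piecewise linears on a uniform mesh of n elements (n even, so the
    jump x=0 is a mesh node); solves the normal equations M u = b."""
    h = 2.0 / n
    M = np.zeros((n + 1, n + 1))
    b = np.zeros(n + 1)
    for e in range(n):                       # element loop
        M[e:e+2, e:e+2] += (h / 6.0) * np.array([[2.0, 1.0], [1.0, 2.0]])
        xm = -1.0 + (e + 0.5) * h            # element midpoint
        b[e:e+2] += np.sign(xm) * h / 2.0    # exact: int sign(x) * hat
    return np.linalg.solve(M, b)

u = l2_projection_sign(16)
print(u.max() > 1.0)    # True: the overshoot persists on any mesh
```

For this setting the computed maximum is approximately $1.27$, essentially independent of the mesh size, consistent with a persistent overshoot for $p=2$.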
\par
We note that the elimination of the Gibbs phenomenon was also observed for residual minimization of the \emph{strong form} of the advection--reaction residual in~$L^1(\Omega)$~\cite{GueSINUM2004}, the explanation of which remains somewhat elusive.
\begin{figure}
\caption{Vanishing Gibbs phenomenon as~$p\rightarrow 1^+$ for approximations to the discontinuous solution $u(x) = \operatorname{sign} (x)$ given by ideal residual-minimization of weak advection.
}
\label{fig:Gibbs2}
\end{figure}
\par
Next, consider the \mbox{\textsf{DDMRes}}{} method~\eqref{eq:AdvReac_discrete} with the discrete space pair $(\mathbb{U}_n,\mathbb{V}_m)$ defined by:
\begin{subequations} \begin{alignat}{6} \label{eq:P1cont}
\mathbb{U}_n
&\subseteq \mathbb{P}^1_{\mathrm{cont}}(\mcal{T}_n)
&&:= \Big\{ &w_n &\in \mcal{C}^0[-1,1] \,:\,
&& {w_n}|_{T} \in \mathbb{P}^1(T) \,,\,&& \forall T\in \mcal{T}_n
\Big\}, && \\ \notag
\mathbb{V}_m
&\supseteq \mathbb{P}^k_{\mathrm{cont},0,\{1\}}(\mcal{T}_n)
&&:= \Big\{ &\phi_m &\in \mcal{C}^0[-1,1] \,:\,
&& {\phi_m}|_{T} \in \mathbb{P}^k(T)\,,\,&& \forall T\in \mcal{T}_n \text{ and } \\ \label{eq:Pkcont}
&&&&&&&&& \quad \phi_m(1)=0
\Big\}, && \end{alignat} \end{subequations}
where $k$~is the polynomial degree of the test space, and $\mcal{T}_n$ is any partition of the interval $(-1,1)$.
\begin{proposition}[1-D advection: Compatible pair] \label{prop:advContPair}
Let $\mathbb{U} = L^p(-1,1)$, $\mathbb{V} = W^{1,q}_{0,\{1\}}(-1,1)$ and~$(\mathbb{U}_n,\mathbb{V}_m)$ be defined as above. If~$k\ge 2$, then the Fortin condition (Assumption~\ref{assumpt:Fortin}) holds for the operator $B:\mathbb{U}\rightarrow \mathbb{V}^*$ defined by: \begin{alignat*}{2} \bigdual{ B w, v }_{\mathbb{V}^*,\mathbb{V}} =-\int_{-1}^1 w\, v'\,. \end{alignat*}
\end{proposition}
\begin{proof}
See Section~\ref{sec:advContPairProof}.
\end{proof}
\par
\begin{remark}[Solution of nonlinear system] \label{rem:nonlinSolve}
The \mbox{\textsf{DDMRes}}{} method leads to a discrete (nonlinear) $q$-Laplace mixed system, which, for $p$~moderately close to~$2$, can be solved with, e.g., Newton's or Picard's method. For~$p$ close to~$1$ or much larger than~$2$ the nonlinear problem becomes more tedious to solve, and we have resorted to continuation techniques (with respect to~$p$) or a descent method for the equivalent constrained-minimization formulation.
\end{remark}
\par
Figure~\ref{fig:Gibbs3} displays numerical results obtained using the \mbox{\textsf{DDMRes}}{} method~\eqref{eq:AdvReac_discrete} with the above discrete spaces and for~$p=1.01$ (hence $q=101$). We plot~$u_n$ and~$r_m'$ for various test-space degrees~$k\ge 2$. While the method is stable for any~$k\ge 2$ (owing to Proposition~\ref{prop:advContPair}), there is no reason for the \mbox{\textsf{DDMRes}}{} method to directly inherit any qualitative feature of the exact residual minimization (i.e., $\mathbb{V}_m=\mathbb{V}$). Indeed, overshoot is present for small~$k$.
However, results are qualitatively converging once the test space~$\mathbb{V}_m$ starts resolving the ideal~$r$. This is expected by~\cite[Proposition~4.2]{MugZeeARXIV2018}, which states that the ideal~$u_n$ is obtained if the ideal~$r$ happens to be in~$\mathbb{V}_m$. The lines indicated by ``ideal'' in Figure~\ref{fig:Gibbs3} correspond to the case that~$r$ is fully resolved.
\begin{figure}\label{fig:Gibbs3}
\end{figure}
\par
Interestingly, the results seem to indicate that different values of~$k$ in each element would be needed for efficiently addressing the Gibbs phenomenon. This is reminiscent of the idea of \emph{adaptive stabilization}~\cite{CohDahWelM2AN2012}, in which for a given~$\mathbb{U}_n$ a sufficiently large test-space~$\mathbb{V}_m$ is found in an adaptive manner so as to achieve stability.
\par
\subsection{The pair~$\mathbb{P}^0(\mcal{T}_n)$~-~$B^{-*}(\mathbb{P}^0(\mcal{T}_n))$: An optimal compatible pair} \label{sec:optimalTestSpace} In the remainder of Section~\ref{sec:applications}, we consider for~$\mathbb{U}_n$ piecewise-constant functions on mesh-partitions~$\mcal{T}_n$ of~$\Omega\subset\mathbb{R}^d$, i.e.,
\begin{subequations} \begin{alignat}{2}\label{eq:P0Un}
\mathbb{U}_n \subseteq \mathbb{P}^0(\mcal{T}_n)
:= \Big\{ w_n \in L^\infty(\Omega) \,:\, {w_n}|_T \in \mathbb{P}^0(T)
\,,\, \forall T\in \mcal{T}_n \Big\} \subset \mathbb{U}. \end{alignat}
For the discrete test space~$\mathbb{V}_m \subset \mathbb{V}$, we assume that it includes the following \emph{optimal space}~$\mathbb{S}_n := B^{-*} \big( \mathbb{P}^0(\mcal{T}_n) \big) \subset \mathbb{V}$, i.e.,
\begin{alignat}{2}\label{eq:S(T_n)}
\mathbb{V}_m \supseteq \mathbb{S}_n
&:= B^{-*} \big( \mathbb{P}^0(\mcal{T}_n) \big) = \Big\{
\phi_n \in \mathbb{V} \,:\,
B^* \phi_n = \chi_n\,,\, \hbox{for some } \chi_n\in \mathbb{P}^0(\mcal{T}_n)
\Big\}. \end{alignat} \end{subequations}
Note that $\dim \mathbb{U}_n \le \dim \mathbb{P}^0(\mcal{T}_n) = \dim \mathbb{S}_n \le \dim \mathbb{V}_m$.
\par Without any further assumptions, the following striking result shows that this pair satisfies the Fortin condition. Its proof hinges on the fact that the $L^p$~duality map sends any $w_n\in \mathbb{P}^0(\mcal{T}_n)$ to an element of~$\mathbb{P}^0(\mcal{T}_n)$.
\begin{proposition}[Weak advection--reaction: Compatible pair] \label{prop:AdvReac_compatible}
Let $\mathbb{U} = L^p(\Omega)$, $\mathbb{V} = W^{q}_{0,+}(\bs{\beta};\Omega)$ and the discrete pair~$(\mathbb{U}_n,\mathbb{V}_m)$ be defined as in~\eqref{eq:P0Un} and~\eqref{eq:S(T_n)}. Then the Fortin condition (Assumption~\ref{assumpt:Fortin}) holds for~$B$ defined in~\eqref{eq:B_AdvReac}, with $C_\Pi = M_\mu/\gamma_B$, where $M_\mu$ is the continuity constant of~$B$ (see Remark~\ref{rem:mu_continuity}), and $\gamma_B$ the bounded-below constant of~$B$ (see Theorem~\ref{thm:avdreact_wellposed}).
\end{proposition}
\begin{proof}
In view of the equivalence between the discrete inf-sup condition and the Fortin condition (see Ern \&~Guermond~\cite{ErnGueCR2016}), we prove this proposition by directly establishing the discrete inf-sup condition.
\par
Let~$w_n \in \mathbb{U}_n\subseteq\mathbb{P}^0(\mcal{T}_n)$, then
\begin{alignat}{2} \notag
\sup_{v_m\in \mathbb{V}_m} \frac{\dual{B w_n, v_m}_{\mathbb{V}^*,\mathbb{V}}}{ \norm{v_m}_\mathbb{V} }
&\ge
\sup_{\phi_n\in \mathbb{S}_n} \frac{\dual{B w_n, \phi_n}_{\mathbb{V}^*,\mathbb{V}}}{ \norm{\phi_n}_\mathbb{V} } =
\sup_{\chi_n\in \mathbb{P}^0(\mcal{T}_n)}
\frac{\dual{w_n, \chi_n}_{p,q}}{ \norm{B^{-*}\chi_n}_\mathbb{V}}\,. \end{alignat}
Let $J_p(w_n) := \norm{w_n}_p^{2-p} |w_n|^{p-1} \operatorname{sign} (w_n)$ denote the $L^p$~duality map of~$w_n$, and notice that it is also in~$\mathbb{P}^0(\mcal{T}_n)$. Furthermore, we have the duality-map property $\dual{w_n, J_p(w_n)}_{p,q}=\|w_n\|_p\|J_p(w_n)\|_q\,$. Therefore,
\begin{alignat}{2} \notag
\sup_{\chi_n\in \mathbb{P}^0(\mcal{T}_n)}
\frac{\dual{w_n, \chi_n}_{p,q}}{ \norm{B^{-*}\chi_n}_\mathbb{V} }
&\ge
\frac{\dual{w_n, J_p(w_n)}_{p,q}}{ \norm{B^{-*}J_p(w_n)}_\mathbb{V} }
=
\frac{\norm{w_n}_p \norm{J_p(w_n)}_q}{ \norm{B^{-*}J_p(w_n)}_\mathbb{V} }
\ge
\gamma_B \norm{w_n}_p \,, \end{alignat}
where, in the last step, we used that $\norm{B^{-*} \chi}_\mathbb{V} \le \gamma_B^{-1} \norm{\chi}_{q}$ for all $\chi\in L^q(\Omega)$ (this is nothing but the dual counterpart of Theorem~\ref{thm:avdreact_wellposed}).
Finally, \cite[Theorem~1]{ErnGueCR2016} implies the existence of a Fortin operator $\Pi:\mathbb{V}\to\mathbb{V}_m$
with $C_\Pi = M_\mu/\gamma_B$.
\end{proof}
\begin{remark}[Petrov--Galerkin method] \label{rem:PG}
If~$(\mathbb{U}_n,\mathbb{V}_m) \equiv (\mathbb{P}^0(\mcal{T}_n) ,\mathbb{S}_n)$, then $\dim \mathbb{U}_n = \dim \mathbb{V}_m$, and Proposition~\ref{prop:AdvReac_compatible} together with~\eqref{eq:AdvReac_discrete_b} implies that $r_m = 0$. Thus we obtain from~\eqref{eq:AdvReac_discrete_a} that the approximation~$u_n$ satisfies the Petrov--Galerkin statement (cf.~\cite[Section~5]{MugZeeARXIV2018}): \begin{equation} \label{eq:PG} \dual{B u_n,v_n}_{\mathbb{V}^*,\mathbb{V}} = \dual{ f, v_n}_{\mathbb{V}^*,\mathbb{V}} \qquad \forall v_n \in \mathbb{S}_n\,. \end{equation}
\end{remark}
\begin{remark}[Cell average] \label{rem:cellAve}
If~$(\mathbb{U}_n,\mathbb{V}_m) \equiv (\mathbb{P}^0(\mcal{T}_n) ,\mathbb{S}_n)$, the approximation~$u_n$ is in fact the element average of the exact solution~$u$, i.e.,
\begin{alignat}{2} \label{eq:elemAve}
{u_n}|_T = |T|^{-1}\int_T u
\,,
\qquad \forall T\in \mcal{T}_n\,. \end{alignat}
To prove~\eqref{eq:elemAve}, note that~\eqref{eq:PG} can be written as \begin{equation*}
\dual{u_n,B^* v_n} = \dual{u,B^* v_n} \qquad \forall v_n \in \mathbb{S}_n\,. \end{equation*}
Let $\chi_T$ be the characteristic function of the element $T$. Then, the test function~$v_T = B^{-*} \chi_{T}$ determines ${u_n}|_T$. Indeed,
\begin{alignat*}{2}
|T|\, {u_n}|_T = \dual{u_n,\chi_T}_{p,q}
= \dual{u_n,B^* v_T}_{p,q} = \dual{u, B^* v_T}_{p,q} = \dual{u, \chi_T}_{p,q}
= \int_T u \,. \end{alignat*}
\end{remark}
\begin{remark}[Quasi-uniform meshes] \label{rem:advUnifConv}
In the case that $\mathbb{U}_n = \mathbb{P}^0(\mcal{T}_n)$ where the partitions~$\{\mcal{T}_n\}$ are quasi-uniform shape-regular meshes with mesh-size parameter~$h$, the following a~priori error estimate is immediate (apply Remark~\ref{rem:advFEM} with~$k=0$), provided that $u\in W^{s,p}(\Omega)$ for $0\le s\le 1$:
\begin{alignat*}{2}
\norm{u-u_n}_p \lesssim h^s \snorm{u}_{W^{s,p}(\Omega)} \,. \end{alignat*}
\end{remark}
\begin{example}[Quasi-optimality: Solution with jump discontinuity] \label{ex:qOptJumpSol}
To illustrate the convergence of approximations given by the compatible pair~$(\mathbb{U}_n, \mathbb{V}_m) \equiv (\mathbb{P}^0(\mcal{T}_n) , \mathbb{S}_n)$ for~$\Omega \equiv (0,1)$ on uniform meshes using $n=2,4,8,\ldots$~elements of size~$h=1/n$, consider the following exact solution with jump discontinuity (never aligned with the mesh):
\footnote{ The approximations are given by~\eqref{eq:elemAve}, or can be obtained by solving the nonlinear discrete problem (see Remark~\ref{rem:nonlinSolve}). }
\begin{alignat*}{2}
u(x) = \operatorname{sign}\big(x-\tfrac{\sqrt{2}}{2}\big)\,, \qquad \text{for } x\in (0,1)\,. \end{alignat*}
It can be shown (e.g.~by computing the Sobolev--Slobodeckij norm) that~$u \in W^{s,p}(0,1)$ for any $0<s<1/p$, but not~$s=1/p$. Figure~\ref{fig:cellAverage} shows the convergence of~$\norm{u-u_n}_p$ with respect to~$h$, for various~$p$. The observed convergence behavior, as anticipated in Remark~\ref{rem:advUnifConv}, is indeed close to~$O(h^{1/p})$.
\end{example}
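Because the approximation is exactly the vector of cell averages (Remark~\ref{rem:cellAve}), the error study of Example~\ref{ex:qOptJumpSol} can be reproduced in closed form: only the element containing the jump contributes to~$\norm{u-u_n}_p$. A short numpy sketch (illustrative, assuming exact element averages):

```python
import numpy as np

c = np.sqrt(2.0) / 2.0          # jump location; never a dyadic mesh node

def cell_average_error(n, p):
    """||u - u_n||_p on (0,1) for u(x) = sign(x - c), with u_n the
    element averages on a uniform mesh of n elements.  Only the element
    containing the jump contributes, so the error is available in
    closed form."""
    h = 1.0 / n
    theta = c * n - np.floor(c * n)   # relative jump position in its element
    g = theta * (2.0 - 2.0 * theta) ** p + (1.0 - theta) * (2.0 * theta) ** p
    return (h * g) ** (1.0 / p)

for p in (1.5, 2.0, 4.0):
    errs = [cell_average_error(n, p) for n in (4, 16, 64, 256)]
    print(p, [round(e, 4) for e in errs])   # decays roughly like h^{1/p}
```

The oscillation of the relative jump position $\theta$ with~$n$ explains why the observed decay is only \emph{close} to~$O(h^{1/p})$.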
\begin{figure}
\caption{Approximating an advection--reaction problem with a discontinuous solution using the optimal pair ($\mathbb{P}^0(\mcal{T}_n),\mathbb{S}_n$): The convergence in~$\norm{u-u_n}_p$ is close to~$O(h^{1/p})$, which is optimal for near-best approximations.}
\label{fig:cellAverage}
\end{figure}
\begin{example}[Basis for optimal test space] Let us illustrate the discrete test space $\mathbb{S}_n$ in 1-D for the particular case where the (scalar-valued) advection~$\beta(x)$ is space-dependent and $\mu\equiv 0$. Let $\Omega=(0,1)$ and let $\beta$ be a strictly decreasing and positive function such that $\beta'(x)$ is bounded away from zero (hence Assumption~\ref{assump:mu_0} is valid). The space~$\mathbb{V}$ is given by \begin{alignat*}{2} \mathbb{V}= \Big\{v\in L^q(0,1): (\beta v)'\in L^q(0,1) \text{ and } v(1)=0\Big\}. \end{alignat*} Let $0=x_0<x_1<\dots<x_n=1$ be a partition of $\Omega$ and
define $\mcal{T}_n=\{T_j\}$ where $T_j=(x_{j-1},x_j)$. Let $\chi_{T_j}$ be the characteristic function of $T_j$ and $h_j=|T_j|$. The discrete test space $\mathbb{S}_n$ is defined as the span of the functions $v_j\in\mathbb{V}$ such that $-(\beta v_j)'=\chi_{T_j}$, which upon integrating over the interval $[x,1]$ gives : \begin{alignat*}{2} v_j(x)= & \left\{ \begin{array}{ll} h_j/\beta(x) & \hbox{if } x\leq x_{j-1}\\ (x_j-x)/\beta(x) & \hbox{if } x\in T_j\\ 0 & \hbox{if } x\geq x_{j}\,. \end{array}\right. \end{alignat*} Moreover, we can combine them in order to produce the local, nodal basis functions: \begin{equation}\nonumber \widetilde v_1(x)= \beta(x_1){v_1(x)\over h_1} \quad\hbox{and} \quad \widetilde v_j(x)= \beta(x_j)\left({v_j(x)\over h_j}-{v_{j-1}(x)\over h_{j-1}}\right), \quad j\geq 2. \end{equation}
See Figure~\ref{fig:inflowoutflowcont}(a) for an illustration of these basis functions with~$h_j = 0.2$ for all~$j$ and~$\beta(x) = 1.001-x$.
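The closed-form expression for~$v_j$ can be checked directly against its defining equation. A small numpy sketch for the data of panel~(a) ($h_j=0.2$ and $\beta(x)=1.001-x$; illustrative parameters only):

```python
import numpy as np

# Optimal test functions v_j solving -(beta v_j)' = chi_{T_j}, v_j(1)=0,
# on a uniform mesh of (0,1) with h_j = 0.2 and beta(x) = 1.001 - x
# (the data of panel (a)); parameters are illustrative.
nodes = np.linspace(0.0, 1.0, 6)          # x_0 < x_1 < ... < x_5
beta = lambda x: 1.001 - x

def v(j, x):
    """v_j from the closed-form expression, with T_j = (x_{j-1}, x_j)."""
    xl, xr = nodes[j - 1], nodes[j]
    h = xr - xl
    return np.where(x <= xl, h / beta(x),
           np.where(x < xr, (xr - x) / beta(x), 0.0))

# verify -(beta v_j)' = chi_{T_j} by finite differences, inside and
# outside the supporting element T_3 = (x_2, x_3)
inside = np.linspace(nodes[2] + 1e-3, nodes[3] - 1e-3, 101)
outside = np.linspace(1e-3, nodes[2] - 1e-3, 101)
d_in = np.gradient(beta(inside) * v(3, inside), inside)
d_out = np.gradient(beta(outside) * v(3, outside), outside)
print(np.allclose(-d_in, 1.0), np.allclose(d_out, 0.0))   # True True
```

Since $\beta v_j$ is piecewise linear, the finite-difference check is exact up to round-off.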
\begin{figure}
\caption{Basis for the optimal test space~$\mathbb{S}_n$ that is compatible with~$\mathbb{U}_n = \mathbb{P}^0(\mcal{T}_n)$, in the case of two different space-dependent advection fields~$\beta(x)$ corresponding to (a)~left-sided inflow and (b)~two-sided inflow, respectively.}
\label{fig:inflowoutflowcont}
\end{figure}
\par
Another interesting example is when we have two inflows, each on one side of the interval $\Omega=(0,1)$. This is possible by means of a strictly decreasing $\beta(x)$ such that $\beta(0)>0$ and $\beta(1)<0$, and such that $\beta'(x)$ is bounded away from zero. The solution $u\in L^{p}(\Omega)$ of problem~\eqref{eq:weak_advection-reaction} may be singular at the point $\tilde x\in \Omega$ for which $\beta(\tilde x)=0$, even for smooth right-hand sides. The test functions computed by solving $-(\beta v_j)'=\chi_{T_j}$ may be discontinuous when $\tilde x$ matches one of the mesh points. This is illustrated in Figure~\ref{fig:inflowoutflowcont}(b) for~$\beta(x) = \frac{2}{5}-x\,$. \end{example}
\begin{example}[A practical alternative to~$\mathbb{S}_n$]
In practice it may not be feasible to explicitly compute a basis for~$\mathbb{S}_n$. Practical alternatives consist of, for example, continuous piecewise polynomials of sufficiently-high degree~$k$ on~$\mcal{T}_n$,
or continuous piecewise linear polynomials on $\mathsf{Refine}_\ell(\mcal{T}_n)$, which is the submesh obtained from the original mesh~$\mcal{T}_n$ by performing~$\ell$ uniform refinements of all elements (see~\cite{BroDahSteMOC2018} for a similar alternative in a DPG setting).
\par To illustrate the latter alternative for the \mbox{\textsf{DDMRes}}{} method, consider the domain~$\Omega = (0,1)$, coefficients $\beta(x) = 1- 12 x$ and $\mu(x) = -4$, source~$f_\circ(x) = 0$, and inflow data~$g$ such that the exact solution is $u(x) =|1-12x|^{-\sfrac{1}{3}}$ for all~$x\in \Omega\setminus \{ \sfrac{1}{12} \}$. Note that $u$ has a singularity and that $u\in L^r(\Omega)$ for any~$1\le r< 3$, but not for~$r\ge 3$. \par In the method, we take~$p=2$, $\mathbb{U}_n = \mathbb{P}^0(\mcal{T}_n)$ and $\mathbb{V}_m = \mathbb{P}^1_{\mathrm{cont}}(\mathsf{Refine}_\ell(\mcal{T}_n))$, where $\mcal{T}_n$ is a mesh of uniform elements of size~$h = 1/n$, and $\mathsf{Refine}_\ell(\mcal{T}_n)$ is an $\ell$-refined submesh with uniform elements of size~$h_{\ell} = h/(2^\ell)$. \par Figure~\ref{fig:P0P1Level:hconv} plots the convergence of the $\norm{u-u_n}_2$ versus~$h$ for~$\ell = 1$, $2$ and~$4$ (error plots are actually similar for all~$\ell\ge 1$). We note that~$\ell = 0$ is in general not sufficiently rich, as it leads to a singular matrix for~$h=1/2$, while the results for~$\ell \ge 1$ did not show any instabilities. To anticipate the rate of convergence, note the Sobolev embedding result $W^{s,2}(\Omega) \subset L^r(\Omega)$ for $s\ge \frac{1}{2} - \frac{1}{r}$ and $r\ge 2$. Therefore, one expects a convergence of~$O(h^s)$ with~$s = \frac{1}{6}$, which is indeed consistent with the numerical observation in~Figure~\ref{fig:P0P1Level:hconv}. The oscillations are caused by the singularity location ($x=\sfrac{1}{12}$) being closer to the left or right element edge depending on~$h$.
\par
To investigate for a fixed mesh with~$h = \sfrac{1}{16}$ the convergence of the obtained approximations~$u_n$ with respect to~$\ell$, we consider~$\beta(x)=2-x$, $\mu(x) = 0$ and exact solution~$u(x) = 1+2x$ for $x\in \Omega$. Figure~\ref{fig:P0P1Level:lconv} plots the error~$\norm{u_n{}_{|\infty} - u_n{}_{|\ell}}$ with respect to~$h_\ell = h / (2^\ell)$, where~$u_n{}_{|\infty}$ denotes the ideal approximation ($\mathbb{V}_m = \mathbb{V}$). For this error we observe a rate of convergence~$O(h_\ell^2)$.
\end{example}
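For $p=q=2$ the mixed system~\eqref{eq:AdvReac_discrete} is linear, and the $\mathbb{P}^0$--$\mathbb{P}^1_{\mathrm{cont}}(\mathsf{Refine}_\ell(\mcal{T}_n))$ pair is easy to assemble in 1-D. The following sketch uses simpler illustrative data than the example above ($\beta=1$, $\mu=0$, smooth exact solution $u(x)=1+2x$); it is a sketch of the construction, not the code behind the figures:

```python
import numpy as np

def ddmres_linear(n, ell=1):
    """DDMRes mixed system in the linear case p = q = 2, for u' = f on
    (0,1) with beta = 1, mu = 0 and exact solution u(x) = 1 + 2x
    (illustrative data).  Trial space: P0 on n uniform elements; test
    space: continuous P1 with v(1) = 0 on the 2^ell-times refined
    submesh; V-norm ||v'||_2.  Returns the exact error ||u - u_n||_2."""
    u = lambda x: 1.0 + 2.0 * x
    m = n * 2 ** ell                        # number of fine elements
    hf, hc = 1.0 / m, 1.0 / n
    # Gram (stiffness) matrix of the test space; dofs = fine nodes 0..m-1
    G = np.zeros((m, m))
    for e in range(m):                      # fine element (nodes e, e+1)
        for a, b in ((0, 0), (0, 1), (1, 0), (1, 1)):
            if e + a < m and e + b < m:
                G[e + a, e + b] += (1.0 if a == b else -1.0) / hf
    # B[i, j] = <B chi_{T_j}, phi_i> = -int_{T_j} phi_i'
    B = np.zeros((m, n))
    for j in range(n):
        l, r = 2 ** ell * j, 2 ** ell * (j + 1)
        B[l, j] += 1.0
        if r < m:
            B[r, j] -= 1.0
    # F_i = <f, phi_i> = -int u phi_i' (midpoint rule, exact for linear u)
    F = np.zeros(m)
    for e in range(m):
        umid = u((e + 0.5) * hf)
        F[e] += umid                        # phi_e' = -1/hf on element e
        if e + 1 < m:
            F[e + 1] -= umid                # phi_{e+1}' = +1/hf on element e
    # saddle-point system for the residual representative r_m and u_n
    K = np.block([[G, B], [B.T, np.zeros((n, n))]])
    un = np.linalg.solve(K, np.concatenate([F, np.zeros(n)]))[m:]
    xm = (np.arange(n) + 0.5) * hc          # coarse element midpoints
    # exact elementwise L2 error against the linear exact solution
    return np.sqrt(np.sum(hc * (u(xm) - un) ** 2 + hc ** 3 / 3.0))

errs = [ddmres_linear(n) for n in (8, 16, 32)]
print([round(e, 4) for e in errs])          # O(h) decay
```

The constraint block $B^\mathsf{T} r = 0$ forces the residual representative to vanish at the coarse nodes, and the errors halve under mesh refinement, as expected for $\mathbb{P}^0$ trial functions.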
\begin{figure}\label{fig:P0P1Level:hconv}
\end{figure}
\begin{figure}\label{fig:P0P1Level:lconv}
\end{figure}
\subsection{The pair $\mathbb{P}^0(\mcal{T}_n)$~-~$\mathbb{P}^1_\mathrm{conf}(\mcal{T}_n)$: An optimal pair in special situations} \label{sec:2DflowAlignedMesh}
\par
As a last application, we consider a special multi-dimensional situation such that the optimal test space~$\mathbb{S}_n$ defined in~\eqref{eq:S(T_n)} reduces to a convenient finite element space. We focus on a \mbox{2-D}~setting, and assume $\Omega \subset \mathbb{R} ^2$ is polygonal and $\mcal{T}_n$ is a
simplicial mesh (triangulation) of~$\Omega$.
Let~$\mcal{F}_n = \{F\}$ denote all mesh interior faces.
\footnote{i.e., $\operatorname{length}(F) >0$, and $F = \partial T_1 \cap \partial T_2$ for distinct~$T_1$ and $T_2$ in~$\mcal{T}_n$.}
Assume that $\mu\equiv 0$, $\operatorname{div}\bs{\beta}\equiv 0$ and that the hypothesis of Assumption~\ref{ass:omega-filling} is fulfilled. Assume additionally that $\bs{\beta}$ is piecewise constant on some partition of~$\Omega$, and let the mesh~$\mcal{T}_n$ be compatible with this partition, i.e.,
\begin{alignat*}{2}
&\bs{\beta}|_{T} \in \mathbb{P}^0(T)\times\mathbb{P}^0(T), \qquad &&\forall T\in \mcal{T}_n\,,
\\
&\jump{ \bs{\beta}\cdot \bs{n}_F }_F = 0, \qquad &&\forall F\in \mcal{F}_n\,, \end{alignat*}
where~$\jump{\cdot} = (\cdot)_+ - (\cdot)_-$ denotes the jump.
Finally, assume that the mesh is \emph{flow-aligned} in the sense that each triangle~$T\in \mcal{T}_n$ has exactly one tangential-flow face~$F \subset \partial T$ for which $\bs{\beta} \cdot \bs{n}_T = 0$ on~$F$. Necessarily, the other two faces of~$T$ correspond to in- and out-flow on which $\bs{\beta}|_T \cdot \bs{n}_T < 0$ and $\bs{\beta}|_{T} \cdot \bs{n}_T > 0$, respectively.
\par
The main result for this special situation is the following characterization of~$\mathbb{S}_n$:
\begin{proposition}[Optimal space~$\mathbb{S}_n$: Flow-aligned case]
Under the above assumptions,
\begin{alignat*}{2}
\mathbb{S}_n = \mathbb{P}^1_{\mathrm{conf}}(\mcal{T}_n)
:= \Big\{
\phi_n \in \mathbb{V} = W^q_{0,+}(\bs{\beta};\Omega)
\,:\,
{\phi_n}|_{T} \in \mathbb{P}^1(T)\,,\, \forall T \in \mcal{T}_n
\Big \}\,. \end{alignat*}
\end{proposition}
Note that~$\mathbb{P}^1_{\mathrm{conf}}(\mcal{T}_n)$ consists of $W^q_{0,+}(\bs{\beta};\Omega)$-conforming, piecewise-linear functions, which can be discontinuous across tangential-flow faces, but must be continuous across the other faces. Furthermore, they are zero on~$\partial\Omega_+$.
\begin{proof}
The proof follows upon demonstrating that~$B^*\mathbb{P}^1_{\mathrm{conf}}(\mcal{T}_n) = \mathbb{P}^0(\mcal{T}_n)$. First note (under the above assumptions) that $B^* = -\bs{\beta}\cdot \nabla_n$, where~$\nabla_n$ is the element-wise (or broken) gradient, i.e.,
$(\nabla_n \phi)|_{T} = \nabla(\phi|_{T})$ for all~$T\in \mcal{T}_n$. Since functions in~$\mathbb{P}^1_{\mathrm{conf}}(\mcal{T}_n)$ are element-wise linear, we thus have that $B^* \mathbb{P}^1_{\mathrm{conf}}(\mcal{T}_n) \subset \mathbb{P}^0(\mcal{T}_n)$.
\par
We next show that $\mathbb{P}^0(\mcal{T}_n)\subset B^* \mathbb{P}^1_{\mathrm{conf}}(\mcal{T}_n)$. Note that $\mathbb{P}^0(\mcal{T}_n) = \operatorname{Span} \{\chi_T , T\in \mcal{T}_n\}$, where~$\chi_T$ is the characteristic function for~$T$. Let~$\phi_T$ be the unique solution in~$\mathbb{V}$ such that~$B^* \phi_T = \chi_T$. The $\Omega$-filling assumption (see Assumption~\ref{ass:omega-filling}) guarantees that $\bs{\beta} \neq \bs 0$ a.e.~in $\Omega$ (otherwise we would have $-\bs{\beta}\cdot \nabla z_\pm=0$ in some element, contradicting~\eqref{eq:omega-filling}). Thus, for a.e.~$x\in\Omega$ consider the polygonal path $\Gamma(x)\subset \overline\Omega$ that starts from $x$ and moves along the advection field $\bs{\beta}$. By the $\Omega$-filling assumption, the path $\Gamma(x)$ has to end at some point on the out-flow boundary $\partial\Omega_+$ (otherwise it would stay forever within $\Omega$, contradicting the existence of a bounded function $z_\pm\in W^\infty(\bs{\beta};\Omega)$ whose absolute value grows linearly along $\Gamma(x)$). Hence, we can construct $\phi_T$ by integrating $\chi_T$ along the polygonal path $\Gamma(x)$ from $\partial\Omega_+$ to $x$. By construction, $\phi_T$ is a piecewise linear polynomial, which can be discontinuous only across $\{F\in\mcal{F}_n: \bs{\beta}\cdot\bs{n}_F=0 \}$. Besides, $\phi_T$ satisfies the homogeneous boundary condition over $\partial\Omega_+$. Hence $\phi_T\in \mathbb{P}^1_{\mathrm{conf}}(\mcal{T}_n)$ and $\chi_T\in B^*\mathbb{P}^1_{\mathrm{conf}}(\mcal{T}_n)$.
\end{proof}
\begin{example}[2-D numerical illustration]
To illustrate the above setting with a numerical example, let $\Omega = (0,1)\times (0,2)\subset \mathbb{R}^2$, $f_\circ = 0$, and $g$ be nonzero on the inflow boundary~$\partial\Omega_- = \{(x,0)\,,\, x\in (0,1)\}$. Let an initial triangulation of the domain be as in Figure~\ref{fig:2Dsmooth} (top-left mesh). The advection~$\bs{\beta}$ is such that, for the bottom, left, right and top boundary, we have that $\bs{\beta} \cdot \bs{n}$ is~$-1$, $0$, $0$ and~$1$, respectively. Next, within each triangle, $\bs{\beta}$ is some constant vector with a positive vertical component, while satisfying the above requirements (i.e., $\jump{\bs{\beta} \cdot \bs{n}_F}_F = 0$ on each interior face~$F$, and each triangle has a tangential-flow, in-flow and out-flow face). \footnote{For a given mesh, such a~$\bs{\beta}$ can be constructed by traversing through the mesh in an element-by-element fashion, starting at the inflow boundary, and assigning $\bs{\beta}$ in each element so as to satisfy the requirements.} By Remark~\ref{rem:PG}, the \mbox{\textsf{DDMRes}}{} method with spaces~$\mathbb{P}^0(\mcal{T}_n)$ and $\mathbb{P}^1_{\mathrm{conf}}(\mcal{T}_n)$ can be implemented as a Petrov--Galerkin method.
\par
We first consider the smooth inflow boundary condition~$g(x,0) = \sin (\pi x)$ for~$x\in (0,1)$. Figure~\ref{fig:2Dsmooth} (top row) shows the approximations for~$u_n$ obtained on the initial triangulation and three finer meshes. The finer meshes were obtained by uniform refinements of the initial triangulation using so-called \emph{red}-refinement~\cite[Section~2.1.2]{VerBOOK2013} (splitting each triangle into four similar triangles), which preserves the above flow-aligned mesh requirement. The approximations nicely illustrate the cell-average property mentioned in Remark~\ref{rem:cellAve} (the exact solution is simply found by traversing $g$ along the characteristics). In Figure~\ref{fig:2Dsmooth} (bottom) the convergence of~$\norm{u-u_n}_p$ is shown to be optimal (rate is $O(h)$) for various values of~$p$.
\par
Figure~\ref{fig:2Dnonsmooth} shows the same results as before, but now for a discontinuous inflow boundary condition~$g(x,0) = \sin(\pi x) \, \operatorname{sign}(x-\sfrac{1}{3})$ for~$x\in (0,1)$. Again the \mbox{\textsf{DDMRes}}{} method provides a near-best approximation; as anticipated, the observed rate of convergence is~$O(h^{1/p})$ (cf.~discussion in Example~\ref{ex:qOptJumpSol}). \end{example}
\newcommand{\includetwodfig}[1]{{\includegraphics[height=0.4\textwidth,viewport=0 0 160 340,clip]{#1}}} \newcommand{\includetwodfigcb}[1]{\raisebox{-0.005\textwidth}{{\includegraphics[height=0.4\textwidth,viewport=160 0 220 340,clip]{#1}}}}
\begin{figure}\label{fig:2Dsmooth}
\end{figure}
\begin{figure}\label{fig:2Dnonsmooth}
\end{figure}
\appendix \section{Proofs of the main results} \subsection{Proof of Theorem~\ref{thm:avdreact_wellposed}}\label{sec:avdreact_wellposed}
In this section, we give the proof of Theorem~\ref{thm:avdreact_wellposed} by means of the so-called \emph{Banach-Ne\v{c}as-Babu\v{s}ka $\inf$-$\sup$ conditions} (see~\cite[Theorem~2.6]{ErnGueBOOK2004}): \begin{alignat}{2} \tag{BNB1}\label{eq:BNB1}
& \|w\|_\mathbb{U} \lesssim \sup_{0\neq v\in \mathbb{V}}{|b(w,v)|\over \|v\|_\mathbb{V}},\,\, \forall w\in\mathbb{U},\\ \tag{BNB2}\label{eq:BNB2} & \big\{ v\in\mathbb{V} : b(w,v)=0, \,\,\forall w \in \mathbb U\big\} = \{0\}. \end{alignat} Our technique is similar to the one used by Cantin~\cite{CanCR2017}, but note that we prove~\eqref{eq:BNB1}-\eqref{eq:BNB2} on the adjoint bilinear form. Recall that the primal operator is a continuous bijection if and only if the adjoint operator is a continuous bijection, in which case both inf-sup constants are the same.
The following proof is also analogous to the proof in Hilbert spaces given by Di~Pietro~\& Ern~\cite[Section 2.1]{DipErnBOOK2012}. We start by collecting some properties needed in the Banach-space setting.
Let $J_q(v)=\|v\|_q^{2-q} |v|^{q-1}\operatorname{sign}(v)\in L^p(\Omega)=\mathbb U$ be the duality map of $L^q(\Omega)$, i.e., \begin{alignat}{2}\label{eq:J_q}
\left<J_q(v),v\right>_{p,q}=\|v\|_q^2 \quad \hbox{ and } \quad \|J_q(v)\|_p=\|v\|_q,\qquad\forall v\in L^q(\Omega). \end{alignat} Additionally, for any $v\in \mathbb V=W^q_{0,+}(\bs{\beta};\Omega) \subset L^q(\Omega)$ notice the following identity: \begin{equation}\label{eq:beta_identity2}
\bs{\beta}\cdot\nabla v\,\,|v|^{q-1}\operatorname{sign}(v)={1\over q}\operatorname{div}(\bs{\beta}|v|^q)-{1\over q}\operatorname{div}(\bs{\beta})|v|^q,\quad\forall v\in \mathbb V. \end{equation} We will use these definitions and properties also for their analogous ``$p$'' version, i.e., obtained by replacing $q$ by~$p$.
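The identity~\eqref{eq:beta_identity2} is a formal chain-rule computation, valid a.e.\ and extended to all of~$\mathbb V$ by density; the two steps are:

```latex
% Product rule for the divergence, then the a.e. chain rule
% \nabla(|v|^q) = q\,|v|^{q-1}\operatorname{sign}(v)\,\nabla v:
\begin{alignat*}{2}
\operatorname{div}(\bs{\beta}\,|v|^q)
  &= \operatorname{div}(\bs{\beta})\,|v|^q + \bs{\beta}\cdot\nabla\big(|v|^q\big) \\
  &= \operatorname{div}(\bs{\beta})\,|v|^q
     + q\,\bs{\beta}\cdot\nabla v\,\,|v|^{q-1}\operatorname{sign}(v)\,;
\end{alignat*}
% dividing by q and rearranging yields the identity.
```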
\subsubsection{Proof of $\inf$-$\sup$ condition~\eqref{eq:BNB1} on the adjoint}
Let $b:\mathbb U\times\mathbb V\rightarrow\mathbb{R}$ be the bilinear form corresponding to the weak form in~\eqref{eq:weak_advection-reaction}, i.e.,
\begin{alignat}{2}\label{eq:AdvRea_bform}
b(w,v) = \int_\Omega w \big( \mu v - \operatorname{div} (\bs{\beta} v) \big)\,. \end{alignat}
For any $0\neq v\in \mathbb V$ we have: \begin{alignat*}{2} \notag
\sup_{0\neq w\in\mathbb U}{|b(w,v)|\over \|w\|_p} &\geq
{|b(J_q(v),v)|\over \|J_q(v)\|_p} \\ \tag{by~\eqref{eq:J_q}~and~\eqref{eq:AdvRea_bform}}
&=
\|v\|_q^{1-q}\left|\int_\Omega|v|^{q-1}\operatorname{sign}(v)\left(\mu v-\operatorname{div}(\bs{\beta} v)\right) \right|
\\ \tag{by~\eqref{eq:beta_identity}} &=
\|v\|_q^{1-q}\left|\int_\Omega|v|^{q-1}\operatorname{sign}(v)\left(\mu v-\operatorname{div}(\bs{\beta})v-\bs{\beta}\cdot\nabla v\right) \right| \\ \tag{by~\eqref{eq:beta_identity2}} &=
\|v\|_q^{1-q}\left|\int_\Omega|v|^{q}\left(\mu-{1\over p}\operatorname{div}(\bs{\beta})\right)-{1\over q}\operatorname{div}(\bs{\beta}|v|^q)\right| \\ \tag{by~\eqref{eq:beta-mu}} & \geq
\mu_0\|v\|_q + \|v\|_q^{1-q}\,{1\over q}\int_{\partial\Omega^-}|\bs{\beta}\cdot\bs{n}||v|^q \\
& \geq \mu_0\|v\|_q\,. \end{alignat*} Hence, we obtain control on $v$ in the $\norm{\cdot}_q$-norm.\footnote{This result is an extension of the 1-D result with constant advection in~\cite[Chapter~XVII~A, \S{}3, Section~3.7]{DauLioBOOK1992}.} To control the entire graph norm~$\enorm{\cdot}_{q,\bs{\beta}}$, we also need to control the divergence part: \begin{alignat*}{2} \tag{by duality}
\|\operatorname{div}(\bs{\beta} v)\|_q &= \displaystyle\sup_{0\neq w\in\mathbb U}
{\left<w,\operatorname{div}(\bs{\beta} v)\right>_{p,q}\over \|w\|_p} \\ \tag{by~\eqref{eq:AdvRea_bform}} &= \displaystyle\sup_{0\neq w\in\mathbb U}
{\left|b(w,v)-\int_\Omega\mu w v\right|\over \|w\|_p} \\ \notag & \leq \displaystyle\sup_{0\neq w\in\mathbb U}
{|b(w,v)|\over \|w\|_p}+\sup_{0\neq w\in\mathbb U}{\left|\int_\Omega\mu w v\right|\over \|w\|_p} \\ \tag{by H\"older's ineq.} &\leq \displaystyle\sup_{0\neq w\in\mathbb U}
{|b(w,v)|\over \|w\|_p}+\|\mu\|_\infty\|v\|_q \\ \tag{using the previous bound}
&\leq \displaystyle\left(1+{\|\mu\|_\infty\over\mu_0}\right)\sup_{0\neq w\in\mathbb U}
{|b(w,v)|\over \|w\|_p}. \end{alignat*} Combining both bounds we have \begin{equation}\label{eq:inf-sup_adjoint}
\enorm{v}_{q,\bs{\beta}}\leq {\sqrt{1+(\mu_0+\|\mu\|_\infty)^2\over\mu_0^2}} \sup_{0\neq w\in\mathbb U}
{|b(w,v)|\over \|w\|_p}. \end{equation}
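For completeness, the constant in~\eqref{eq:inf-sup_adjoint} follows by combining the two bounds above; writing~$S := \sup_{0\neq w\in\mathbb U} |b(w,v)|/\|w\|_p$, and assuming the graph norm is the Euclidean combination $\enorm{v}_{q,\bs{\beta}}^2 = \|v\|_q^2+\|\operatorname{div}(\bs{\beta} v)\|_q^2$ (an assumption of this remark, consistent with the square root in~\eqref{eq:inf-sup_adjoint}):

```latex
\begin{alignat*}{2}
\|v\|_q \le \frac{1}{\mu_0}\,S\,,
\qquad
\|\operatorname{div}(\bs{\beta} v)\|_q \le \frac{\mu_0+\|\mu\|_\infty}{\mu_0}\,S
\qquad\Longrightarrow\qquad
\enorm{v}_{q,\bs{\beta}}^2 \le \frac{1+(\mu_0+\|\mu\|_\infty)^2}{\mu_0^2}\,S^2\,.
\end{alignat*}
```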
The case when $\mu\equiv 0$ and $\operatorname{div}\bs{\beta}\equiv 0$ (under Assumption~\ref{ass:omega-filling}) is simpler, since by Lemma~\ref{lem:Poincare_Friedrichs}, we immediately have \begin{alignat}{2}\notag
\|v\|_q \,\leq\, C_{\text{\tiny$\mathrm{PF}$}}
\|\bs{\beta}\cdot\nabla v\|_q \,=\, C_{\text{\tiny$\mathrm{PF}$}}
\|\operatorname{div}(\bs{\beta} v)\|_q \,=\, C_{\text{\tiny$\mathrm{PF}$}} \sup_{0\neq w\in\mathbb U}
{|b(w,v)|\over \|w\|_p} \,. \end{alignat} Hence \begin{equation}\label{eq:inf-sup_adjoint_mu=0} \enorm{v}_{q,\bs{\beta}}\leq (1+C_{\text{\tiny$\mathrm{PF}$}}) \sup_{0\neq w\in\mathbb U}
{|b(w,v)|\over \|w\|_p}\,. \end{equation}
$\ensuremath{_\blacksquare}$
\subsubsection{Proof of $\inf$-$\sup$ condition~\eqref{eq:BNB2} on the adjoint} \label{sec:surjec_adjoint} Next, we prove~\eqref{eq:BNB2} for the adjoint, which corresponds to injectivity of the primal operator. In other words, we need to show that $w=0$ if $w\in L^p(\Omega)$ is such that
\begin{alignat}{2} \label{eq:adv-infsup2}
b(w,v)=0, \qquad \forall v\in \mathbb V=W^q_{0,+}(\bs{\beta};\Omega)\,. \end{alignat}
\par
We first take $v\in C_0^\infty(\Omega)$ to obtain that $\bs{\beta}\cdot\nabla w + \mu w =0$ in the sense of distributions and hence $\bs{\beta}\cdot\nabla w = -\mu w \in L^p(\Omega)$, which implies $w\in W^p(\bs{\beta};\Omega)$.
\par
This means that $w$ has sufficient regularity so that traces make sense (see Remark~\ref{rem:traces}). Hence, going back to~\eqref{eq:adv-infsup2} and integrating by parts we have: \begin{equation}\label{eq:vanish_inflow} \int_{\partial\Omega^-}{\bs{\beta}\cdot\bs{n}}\,w\,v=0\, , \qquad\forall v \in W^q_{0,+}(\bs{\beta};\Omega). \end{equation} To show that $w\in W^p_{0,-}(\bs{\beta};\Omega)$, we consider
$\widetilde J_p(w):=|w|^{p-1}\operatorname{sign}(w)\in L^q(\Omega)$. The fact that $\widetilde J_p(w)$ is actually in~$W^q(\bs{\beta};\Omega)$ is proven in Lemma~\ref{lem:beta_nabla} below. For the function $\phi\in C^\infty(\overline\Omega)$ defined in~\eqref{eq:cutoff}, we then have that $\phi\,\widetilde J_p(w)$ belongs to $W^q_{0,+}(\bs{\beta};\Omega)$ (since $\phi$ vanishes on $\partial\Omega_+$).
Using $v=\phi\,\widetilde J_p(w)$ in~\eqref{eq:vanish_inflow} we immediately obtain: \begin{equation}\label{eq:zero_trace}
\int_{\partial\Omega^-}{\bs{\beta}\cdot\bs{n}}\,\,|w|^p=0\,, \end{equation} and hence~$w\in W^p_{0,-}(\bs{\beta};\Omega)$.
\par
Finally, we conclude using an energy argument: \begin{alignat*}{2} \notag 0 &= \displaystyle\int_\Omega (\bs{\beta}\cdot\nabla w +\mu w)J_p(w) \\ \tag{by~\eqref{eq:beta_identity2} and~\eqref{eq:zero_trace}}
&= \|w\|^{2-p}_p\left[\displaystyle\int_\Omega |w|^p\Big(\mu-{1\over p} \operatorname{div}(\bs{\beta})\Big)+
\int_{\partial\Omega_+}{\bs{\beta}\cdot\bs{n}}\,|w|^p\right] \\ \tag{by~\eqref{eq:beta-mu}}
&\geq \mu_0\|w\|_p^2+\|w\|^{2-p}_p
\int_{\partial\Omega_+}{\bs{\beta}\cdot\bs{n}}\,|w|^p \\ \notag
&\geq \mu_0\|w\|_p^2\,. \end{alignat*} Hence $w=0$. \par On the other hand, the case when $\mu\equiv 0$ and $\operatorname{div}\bs{\beta}\equiv 0$ is straightforward (under Assumption~\ref{ass:omega-filling}) since $\bs{\beta}\cdot\nabla w=0$ implies \begin{alignat}{2} \tag{by Lemma~\ref{lem:Poincare_Friedrichs}}
0 = \|\bs{\beta}\cdot\nabla w\|_p\geq {1\over C_{\text{\tiny$\mathrm{PF}$}}}\|w\|_p\,\,. \end{alignat}
$\ensuremath{_\blacksquare}$
\par
We are left with a proof of the statement~$\widetilde J_p(w) \in W^q(\bs{\beta};\Omega)$:
\begin{lemma}[Regularity of $|w|^{p-1} \operatorname{sign} w$]\label{lem:beta_nabla} Let $\mu,\bs{\beta}\in L^\infty(\Omega)$ and $w\in L^p(\Omega)$ satisfy the homogeneous advection--reaction equation $$ \bs{\beta}\cdot\nabla w +\mu w= 0 \qquad \hbox{in } L^p(\Omega). $$
Then the function $\widetilde J_p(w):=|w|^{p-1}\operatorname{sign}(w)\in L^q(\Omega)$ satisfies: $$ \bs{\beta}\cdot\nabla\widetilde J_p(w) \in L^q(\Omega). $$ \end{lemma}
\begin{proof} First observe that $\widetilde J_p(w)$ has a G\^ateaux derivative in the direction $\bs{\beta}\cdot\nabla w$. Indeed,
\begin{alignat*}{2} {\widetilde J}_p'(w)[\bs{\beta}\cdot\nabla w] &= \displaystyle\lim_{t\to0}{\widetilde J_p(w+t\bs{\beta}\cdot\nabla w)-\widetilde J_p(w)\over t} \\ &= \displaystyle\lim_{t\to0}{\widetilde J_p(w-t\mu w)-\widetilde J_p(w)\over t} \\
&= \left(\displaystyle\lim_{t\to0}{|1-t\mu|^{p-2}(1-t\mu)-1\over t}\right)|w|^{p-1}\operatorname{sign}(w) \\
&= -(p-1)\,\mu\,|w|^{p-1}\operatorname{sign}(w). \end{alignat*}
Hence, ${\widetilde J}_p'(w)[\bs{\beta}\cdot\nabla w]\in L^q(\Omega)$. The conclusion of the lemma follows from the identity: $$ \bs{\beta}\cdot\nabla\widetilde J_p(w) ={\widetilde J}_p'(w)[\bs{\beta}\cdot\nabla w]\qquad \hbox{a.e. in } \Omega, $$ which is straightforward to verify. \end{proof}
\subsection{Proof of Proposition~\ref{prop:advContPair}} \label{sec:advContPairProof}
We explicitly construct a Fortin operator $\Pi:\mathbb{V}\to\mathbb{V}_m$ satisfying Assumption~\ref{assumpt:Fortin}. This construction is similar to the 1-D version of the proof of~\cite[Lemma~4.20, p.~190]{ErnGueBOOK2004}.
\par
Let $-1=x_0<x_1<\dots<x_n=1$ be the set of nodes defining the partition $\mcal{T}_n$. Over each element $T_j=(x_{j-1},x_j)\in \mcal{T}_n$ we define $\Pi$ to be the linear interpolant $\Pi_1$ plus a quadratic bubble, i.e., $$
\Pi(v)\Big|_{T_j}=\Pi_1(v)\Big|_{T_j} + \alpha_j Q_j(x) \in \mathbb{P}^2(T_j), \qquad\forall v\in \mathbb{V}, $$
where $\Pi_1(v)\Big|_{T_j}= |T_{j}|^{-1}\big(v(x_{j-1})(x_j-x) + v(x_j)(x-x_{j-1})\big)$ and $Q_j(x)=(x-x_{j-1})(x-x_j)$. The coefficient $\alpha_j$ multiplying the bubble $Q_j(x)$ is selected in order to fulfill the equation: \begin{alignat}{2}\label{eq:element_Pi} \int_{T_j}\Pi(v)=\int_{T_j}v\,. \end{alignat} Observe that $\Pi(v)\in \mathbb{P}^2_{\mathrm{cont},0,\{1\}}(\mcal{T}_n)\subseteq \mathbb{P}^k_{\mathrm{cont},0,\{1\}}(\mcal{T}_n)$ since $k\ge 2$, and for all $w_n\in \mathbb{U}_n$ we have: \begin{alignat*}{2} \tag{by integration by parts}
b(w_n,\Pi(v)) & = \sum_{j=1}^n\int_{T_j}w_n'\Pi(v) - w_n\Pi(v)\Big|_{x_{j-1}}^{x_j} \\ \tag{since $w_n\in\mathbb{P}^1(T_j)$}
& = \sum_{j=1}^nw_n'\int_{T_j}\Pi(v) - w_n\Pi(v)\Big|_{x_{j-1}}^{x_j}\\ \tag{by interpolation and~\eqref{eq:element_Pi}}
& = \sum_{j=1}^nw_n'\int_{T_j}v - w_nv\Big|_{x_{j-1}}^{x_j}\\ \tag{by integration by parts} & = b(w_n,v)\,. \end{alignat*}
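In the computation above, the bubble coefficient~$\alpha_j$ determined by~\eqref{eq:element_Pi} can be written explicitly; since $\int_{T_j} Q_j = -|T_j|^3/6$, condition~\eqref{eq:element_Pi} reads:

```latex
\begin{alignat*}{2}
\alpha_j \int_{T_j} Q_j = \int_{T_j} \big( v - \Pi_1(v) \big)
\qquad\Longrightarrow\qquad
\alpha_j = -\frac{6}{|T_j|^3} \int_{T_j} \big( v - \Pi_1(v) \big)\,.
\end{alignat*}
```

This expression underlies the bound for~$|\alpha_j|$ used in verifying~\eqref{eq:Fortin_a}.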
Hence, the requirement~\eqref{eq:Fortin_c} is satisfied. Now we recall that $\|(\cdot)\|_\mathbb{V}:=\|(\cdot)'\|_q\,$. Therefore to obtain the requirement~\eqref{eq:Fortin_a}
(i.e.~the boundedness of the operator $\Pi$), we note that on each element: \begin{alignat*}{2}
|\alpha_j| & \leq {6\over|T_j|^3}\int_{T_j}|v-\Pi_1(v)|
\leq {6\over|T_j|^{3-{1\over p}}}\|v-\Pi_1(v)\|_q
\leq {6\over|T_j|^{2-{1\over p}}}\|v'-\Pi_1(v)'\|_q\\
\|\Pi_1(v)'\|_q & = {|v(x_j)-v(x_{j-1})|\over |T_j|^{1-{1\over q}}}={1\over |T_j|^{1-{1\over q}}}\left|\int_{T_j}v'\right|
\leq \|v'\|_q\\
\|Q_j'\|_q & = {|T_j|^{1+{1\over q}}\over (q+1)^{1\over q}}. \end{alignat*} Thus, on each element (and therefore globally) we have: \begin{alignat*}{2}
\|\Pi(v)'\|_q & \leq \|\Pi_1(v)'\|_q+|\alpha_j|\|Q_j'\|_q\\
& \leq \|v'\|_q + C_q \|v'-\Pi_1(v)'\|_q\\
& \leq (1+2C_q)\|v'\|_q, \end{alignat*} where the constant $C_q=6/(q+1)^{1\over q}$ is mesh-independent.
$\ensuremath{_\blacksquare}$
\iffalse
\subsection{Compatible pair (futile attempt)}
We now give an example of a compatible pair~$(\mathbb{U}_n, \mathbb{V}_m)$.
\par
Let $\Omega$ be polyhedral and let $\{\mcal{T}_n\}_{n\ge 1}$ be a shape-regular family of affine simplicial meshes of~$\Omega$, so that $\overline{\Omega} = \bigcup_{K\in \mcal{T}_n} \overline{K}$. We consider the case of piecewise-constant approximations:
\begin{alignat*}{2}
\mathbb{U}_n := \Big\{ w_n \in L^p(\Omega) \,:\, {w_n}_{|K} \in \mathbb{P}^0(K)
\,,\, \forall K\in \mcal{T}_n \Big\} \end{alignat*}
Let~$\mcal{F}_n = \{F\}$ denote all mesh faces,
\footnote{i.e., $\operatorname{meas}_{d-1}(F) >0$, and $F$ is an interior face~$F = \partial K_1 \cap \partial K_2$ for distinct~$K_1$ and $K_2$ in~$\mcal{T}_n$ or a boundary face~$F = \partial K \cap \partial\Omega$.}
and, for any face~$F\in \mcal{F}_n$, define the set of neighbouring elements~$\mcal{T}_F$, and corresponding patch~$\omega_F$ by
\begin{alignat*}{2}
\mcal{T}_F &:= \big\{ K\in \mcal{T}_n \,:\, F\subset \partial K \big\}
\quad \text{ and } \quad
\omega_F &:= \bigcup_{K \in \mcal{T}_F} \overline{K} \end{alignat*}
respectively. Define the \emph{relevant} faces as the subset of in-flow faces and interior faces with non-tangential flow:
\begin{alignat*}{2}
\widehat{\mcal{F}}_n := \Big\{ F \in \mcal{F}_n \,:\,
F \nsubseteq \partial\Omega^+\,,\, \norm{\bs{\beta} \cdot \bs{n}_F}_{L^\infty(F)} > 0
\Big\}
\end{alignat*}
\par
For traces to make sense on each face, we introduce the following assumption.
\begin{assumption}[Flow-admissible mesh] \label{assump:flowCompMesh}
\marginnote{@Ignacio: I've made up this assumption, which seems logical to have well-defined face-traces\ldots What do you think?}
For each~$F\in \mcal{F}_n$, there is a subdomain~$\Omega_F\subset \Omega$ such that:~$F\subset \partial \Omega_F$ and the in- and out-flow parts of~$\Omega_F$ are well-separated, i.e.,
\begin{alignat*}{2}
\operatorname{dist}(\partial \Omega_F^+,\partial\Omega_F^-)>0\,. \end{alignat*}
\end{assumption}
\par
This Assumption is nonstandard and we introduce it to construct a Fortin operator for the following~$\mathbb{V}_m$ space. For each~$F\in \mcal{F}_n$, let~$\psi_F \in W^{q}(\operatorname{div}(\bs{\beta} \,\cdot\,);\omega_F)$ denote a face bubble function such that
\begin{subequations} \label{eq:psiF} \begin{alignat}{2} \label{eq:psiFa}
\Big| \int_F \bs{\beta}\cdot \bs{n} \, \psi_F \Big| &= 1 \\ \label{eq:psiFb}
\psi_F &= 0 \qquad \text{on } \partial K \setminus F \,,\, \forall K\in \mcal{T}_F \\ \label{eq:psiFc}
\int_{\omega_F}
|\psi_F|^q + h_F^q
\int_{\omega_F} \big|\operatorname{div}(\bs{\beta} \psi_F) \big|^q
& \le C_q h_F^{d-(d-1)q}
\bigg| \int_F \bs{\beta} \cdot \bs{n} \, \psi_F
\bigg|^q \end{alignat} \end{subequations}
We proof the existence of~$\psi_F$ in Lemma~\ref{lem:faceBubble}. \marginnote{I haven't included this Lemma yet, but it's a $p$-extension of DiPietro, Ern, \cite[Lemma~2.11]{DipErnBOOK2012}}.
\par
We now set the bubble space
\begin{alignat*}{2}
\mathbb{B}_n
&:=
\operatorname{Span} \Big\{ \psi_F \,,\, \forall F\in \widehat{\mcal{F}}_n \Big\}
\subset W^q_{0,\partial\Omega^+}(\operatorname{div}(\bs{\beta}\,\cdot\,);\Omega) \end{alignat*}
and assume that~$\mathbb{V}_m$ contains~$\mathbb{B}_n$.
\begin{proposition}
Under Assumption~\ref{assump:flowCompMesh}, let~$\mathbb{V}_m \supseteq \mathbb{B}_n$, then the Fortin condition~\ref{assumpt:Fortin} holds.
\end{proposition}
\begin{proof}
We define the Fortin operator~$\Pi:\mathbb{W}_{0,\partial\Omega^+}^q(\operatorname{div}(\bs{\beta} \,\cdot\,);\Omega) \rightarrow \mathbb{V}_m$ by
\begin{alignat}{2} \label{eq:weakAdvPi}
\Pi v = \sum_{F\in \widehat{\mcal{F}}_n} \alpha_F \psi_F \in \mathbb{B}_n \subseteq \mathbb{V}_m \end{alignat}
where
\begin{alignat*}{2}
\alpha_F
:= \frac{\int_F \bs{\beta}\cdot\bs{n}\, v }{\int_F \bs{\beta}\cdot\bs{n} \, \psi_F } \end{alignat*}
\par
\begin{remark} \marginnote{NOTE!} The above construction does not seem to work, because of the missing~$L^q$-control (see below). Perhaps better to try something like (following [Ern/Guermond, p.~191]):
\begin{alignat*}{2}
\Pi v = \mcal{C}_n(v) + \sum_{F\in \widehat{\mcal{F}}_n} \alpha_F \psi_F \in
\mathbb{P}_n^1 \oplus \mathbb{B}_n \subseteq \mathbb{V}_m \end{alignat*}
where
\begin{alignat*}{2}
\alpha_F
:= \frac{\int_F \bs{\beta}\cdot\bs{n}\, (v-\mcal{C}_n(v)) }{\int_F \bs{\beta}\cdot\bs{n} \, \psi_F } \end{alignat*}
\marginnote{Biggest concern: Is $\norm{\operatorname{div}(\bs{\beta} \mcal{C}_n(v)}_q \le C \norm{\operatorname{div}(\bs{\beta} v}_q $?? Do we need a special interpolant?}
with~$\mcal{C}_n(v)$ the Clement interpolant.
\end{remark}
\par
The term $\int_F \bs{\beta}\cdot\bs{n} \,v $ makes sense, because of Assumption~\ref{assump:flowCompMesh}. Indeed, let~$\phi_{\Omega_F}^+, \phi_{\Omega_F}^- \in C^\infty(\Omega)$ such that~$\phi_{\Omega_F}^+ + \phi_{\Omega_F}^- = 1$, $\phi_{\Omega_F}^+ = 0$ on~$\partial\Omega_F^-$ and $\phi_{\Omega_F}^- = 0$ on~$\partial\Omega_F^+$. Then
\begin{alignat*}{2}
\notag C h_F^{-(d-1)(q-1)} \bigg| \int_{F} \bs{\beta} \cdot\bs{n}\, v \bigg|^q
&\le
\int_{F} |\bs{\beta} \cdot\bs{n}|\, |v|^q \\ \notag
&\le
\int_{\partial\Omega_F} |\bs{\beta} \cdot\bs{n}|\, |v|^q \\ \notag
&= \int_{\partial\Omega_F} (\phi^+ - \phi^-)\, \bs{\beta} \cdot\bs{n}\, |v|^q \\ \notag
&= \int_{\Omega_F} \operatorname{div} \big( |v|^q
(\phi^+ - \phi^-)\, \bs{\beta} \big) \\ \notag
&\le C \norm{v}_{W^q(\operatorname{div}(\bs{\beta}\,\cdot\,);\Omega)}^q \end{alignat*}
\par
To prove condition~\eqref{FIX}, let~$w_n \in\mathbb{U}_n$, and denote~$\jump{w_n}_F = {w_n}_{|F^+} - {w_n}_{|F^-}$ on interior faces~$F$ and $\jump{w_n}_F = {w_n}_{|F}$ on boundary faces~$F$. Then
\begin{alignat}{2} \notag
\int_\Omega \operatorname{div}(\bs{\beta} \Pi v)\, w_n
&= \sum_K \int_K \operatorname{div}(\bs{\beta} \Pi v)\, w_n \\ \notag
&= \sum_K \int_{\partial K} \bs{\beta} \cdot \bs{n} \, \Pi v \, w_n \\ \notag
&= \sum_{F\in \mcal{F}_n} \int_F \bs{\beta} \cdot \bs{n} \, \Pi v \, \jump{ w_n }_F \\ \label{eq:divbetaPiv}
&= \sum_{\widehat{F} \in \widehat{\mcal{F}}_n} \jump{ w_n }_F \int_{\widehat{F}} \bs{\beta} \cdot \bs{n} \, \Pi v \end{alignat}
since~$\jump{w_n}_F$ is constant on each~$F$, and since $\bs{\beta}\cdot \bs{n} \, \Pi v = 0 $ on $F\in \mcal{F}_n\setminus \widehat{\mcal{F}}_n$. Next, from~\eqref{eq:weakAdvPi}, note that
\begin{alignat*}{2}
{\Pi v}_{|\widehat{F}}
= \sum_{F\in \widehat{\mcal{F}}_n} \alpha_F {\psi_F}_{|\widehat{F}}
= \alpha_{\widehat{F}} {\psi_{\widehat{F}}}_{|\widehat{F}}
= \frac{\int_{\widehat{F}} \bs{\beta}\cdot\bs{n}\, v }
{\int_{\widehat{F}} \bs{\beta}\cdot\bs{n} \, \psi_{\widehat{F}} }
{\psi_{\widehat{F}}}_{|\widehat{F}} \end{alignat*}
hence,
\begin{alignat*}{2} \sum_{\widehat{F} \in \widehat{\mcal{F}}_n} \jump{ w_n }_F \int_{\widehat{F}} \bs{\beta} \cdot \bs{n} \, \Pi v
&=
\sum_{\widehat{F} \in \widehat{\mcal{F}}_n} \jump{ w_n }_F
\int_{\widehat{F}} \bs{\beta} \cdot \bs{n} \, v \end{alignat*}
Reversing the steps which led to~\eqref{eq:divbetaPiv}, but now with~$v$ instead of~$\Pi v$, yields
\begin{alignat*}{2}
\int_\Omega \operatorname{div}(\bs{\beta} \Pi v)\, w_n = \int_\Omega \operatorname{div}(\bs{\beta} v)\, w_n \end{alignat*}
\par
Next we prove condition~\eqref{FIXX}. For any~$K\in \mcal{T}_n$, let $\widehat{\mcal{F}}_K := \big\{ F \in \widehat{\mcal{F}}_n \,:\, F \subset \partial K\big\}$ denote the set of relevant faces of element~$K$. Then
\begin{alignat*}{2}
h_F^q \bignorm{\operatorname{div}(\bs{\beta} \Pi v)}_{q,K}^q
&= h_F^q \int_K \big| \operatorname{div}(\bs{\beta} \Pi v) \big|^q \\ \tag{$\omega_F$ overlaps $K$}
&= h_F^q \int_K \big| \operatorname{div}(\bs{\beta} \sum_{F\in \widehat{\mcal{F}}_K} \alpha_F \psi_F) \big|^q \\
&\le
(d+1)^q h_F^q \int_K \max_{F\in \widehat{\mcal{F}}_K}
\big| \alpha_F \operatorname{div}(\bs{\beta} \psi_F) \big|^q \\
&=
(d+1)^q h_F^q \max_{F\in \widehat{\mcal{F}}_K}
\bigg| \int_F \bs{\beta}\cdot\bs{n}\, v \bigg|^q
\frac{ \int_K \left| \operatorname{div}(\bs{\beta} \psi_F) \right|^q}
{\left| \int_F \bs{\beta}\cdot\bs{n} \, \psi_F \right|^q} \\ \tag{by~\eqref{eq:psiFc}}
&\le (d+1)^q \, C_q h_F^{d-(d-1)q}
\max_{F\in \widehat{\mcal{F}}_K}
\bigg| \int_F \bs{\beta}\cdot\bs{n}\, v \bigg|^q \end{alignat*}
Similarly,
\begin{alignat*}{2}
\bignorm{\Pi v}_{q,K}^q
&= \int_K \big| \Pi v \big|^q \\ \tag{$\omega_F$ overlaps $K$}
&= \int_K \Big| \sum_{F\in \widehat{\mcal{F}}_K} \alpha_F \psi_F \Big|^q \\
&\le
(d+1)^q \int_K \max_{F\in \widehat{\mcal{F}}_K}
\big| \alpha_F \psi_F \big|^q \\
&=
(d+1)^q \max_{F\in \widehat{\mcal{F}}_K}
\bigg| \int_F \bs{\beta}\cdot\bs{n}\, v \bigg|^q
\frac{ \int_K \left| \psi_F \right|^q}
{\left| \int_F \bs{\beta}\cdot\bs{n} \, \psi_F \right|^q} \\ \tag{by~\eqref{eq:psiFc}}
&\le (d+1)^q \, C_q h_F^{d-(d-1)q}
\max_{F\in \widehat{\mcal{F}}_K}
\bigg| \int_F \bs{\beta}\cdot\bs{n}\, v \bigg|^q \end{alignat*}
We now use the following trace inequality, for any $F\in \widehat{\mcal{F}}_K$,
\begin{alignat*}{2}
C h_F^{d-(d-1)q} \bigg| \int_F \bs{\beta}\cdot\bs{n}\, v \bigg|^q
&\le h_F \int_F |\bs{\beta}\cdot\bs{n}| \, |v|^q \\
&\le h_F \sum_{\widehat{F}\in \widehat{\mcal{F}}_K}
\int_{\widehat{F}} |\bs{\beta}\cdot\bs{n}| \, |v|^q \\
&= h_F \int_{\partial K} |\bs{\beta}\cdot\bs{n}| \, |v|^q \\
&\le C \Big( \norm{v}_{q,K}^q
+ h_F^{q} \norm{\operatorname{div}(\bs{\beta} v)}_{q,K}^q \Big) \end{alignat*}
where the last inequality can be proven using a scaling argument.
\begin{alignat*}{2}
... \end{alignat*}
\marginnote{How to bound the full graph norm of~$\Pi v$? Seems we pick up $h^{-q} \norm{v}_q$}
\end{proof}
\fi
\end{document}
A multifunctional thermophilic glycoside hydrolase from Caldicellulosiruptor owensensis with potential applications in production of biofuels and biochemicals
Xiaowei Peng, Hong Su, Shuofu Mi & Yejun Han
Biotechnology for Biofuels, volume 9, Article number: 98 (2016)
Thermophilic enzymes have attracted much attention for their advantages of high reaction velocity, exceptional thermostability, and decreased risk of contamination. Exploring efficient thermophilic glycoside hydrolases will accelerate the industrialization of biofuels and biochemicals.
A multifunctional glycoside hydrolase (GH) CoGH1A, belonging to the GH1 family with high activities of β-d-glucosidase, exoglucanase, β-d-xylosidase, β-d-galactosidase, and transgalactosylation, was cloned and expressed from the extremely thermophilic bacterium Caldicellulosiruptor owensensis. The enzyme exhibits excellent thermostability, retaining 100 % activity after 12-h incubation at 75 °C. The catalytic coefficients (kcat/Km) of the enzyme against pNP-β-D-galactopyranoside, pNP-β-D-glucopyranoside, pNP-β-D-cellobioside, pNP-β-D-xylopyranoside, and cellobiose were, respectively, 7450.0, 2467.5, 1085.4, 90.9, and 137.3 mM−1 s−1. When CoGH1A was supplemented at the dosage of 20 U (on cellobiose) g−1 biomass for hydrolysis of the pretreated corn stover, compared with the control, the glucose and xylose yields were increased by 37.9 and 42.1 %, respectively, indicating that the enzyme contributed not only to glucose but also to xylose release. The efficiencies of lactose decomposition and synthesis of galactooligosaccharides (GalOS) by CoGH1A were investigated at low (40 g L−1) and high (500 g L−1) initial lactose concentrations. At the low lactose concentration, 83 % of the lactose was decomposed within 10 min, much shorter than the 2–10 h reported for reaching such a decomposition rate. At the high lactose concentration, after 50 min of catalysis, the GalOS concentration reached 221 g L−1 with a productivity of 265.2 g L−1 h−1. This productivity is at least 12-fold higher than those reported in the literature.
The multifunctional glycoside hydrolase CoGH1A has high capabilities in saccharification of lignocellulosic biomass, decomposition of lactose, and synthesis of galactooligosaccharides. It is a promising enzyme to be used for bioconversion of carbohydrates in industrial scale. In addition, the results of this study indicate that the extremely thermophilic bacteria are potential resources for screening highly efficient glycoside hydrolases for the production of biofuels and biochemicals.
Glycoside hydrolases (GHs) are enzymes hydrolyzing the glycosidic bond between carbohydrates or between a carbohydrate and a non-carbohydrate moiety [1]. GHs are widely applied in biological and chemical industries. Among them, cellulases (including endoglucanase, exoglucanase, and β-glucosidase) are of great interest for application in biomass degradation for biofuel and biochemical production [2]. In addition, some GHs have the capability of transglycosylation and are used for synthesis of oligosaccharides and glycosides, such as galactooligosaccharides (GalOS) (synthesized by β-galactosidase) [3] and octyl glucoside (synthesized by β-glucosidase) [4]. Production of efficient GHs is urgently required to improve the economic feasibility of the related industrial products. In particular, efficient cellulase production has become a significant bottleneck for the biofuel industry [1]. Research efforts have recently focused on extremely thermophilic microorganisms for exploring novel cellulases and other GHs to improve the current situation [5–7], due to the advantages of these thermophilic enzymes, such as higher reaction velocity, excellent thermostability, and decreased risk of contamination [8]. Many thermophilic GHs, such as β-d-glucosidase [9, 10], a bifunctional cellulolytic enzyme (endo- and exoglucanases) [11], β-d-xylosidase [12], and β-d-galactosidase [3, 13], have been cloned, heterologously expressed, and biochemically characterized for the purpose of uncovering the catalytic mechanisms and evaluating the possibility of industrial applications. Among them, the genus Caldicellulosiruptor has recently attracted high interest because it produces a diverse set of GHs for deconstruction of lignocellulosic biomass [7, 14, 15]. The open Caldicellulosiruptor pangenome encodes 106 glycoside hydrolases from 43 GH families [7].
Our previous work [16] found that the extremely thermophilic bacterium Caldicellulosiruptor owensensis has a comprehensive hemicellulase and cellulase system with potential application for bioconversion of lignocellulosic biomass. Moreover, the catalytic mechanisms of some enzymes from Caldicellulosiruptor differ from those of general GHs. For example, the cellulase CelA produced from Caldicellulosiruptor bescii could hydrolyze microcrystalline cellulose not only from the surface, as common cellulases do, but also by excavating extensive cavities into the surface of the substrate [17]. The information about their genetic, biochemical, and biophysical characteristics suggests that more efficient GHs remain to be explored in the extremely thermophilic microorganisms of the genus Caldicellulosiruptor.
In the present work, a multifunctional glycoside hydrolase (GH) was cloned and expressed from C. owensensis. The characteristics of the enzyme and its potential applications in saccharification of lignocellulosic biomass and synthesis of galactooligosaccharides (GalOS) were evaluated.
Cloning and expression of CoGH1A
The gene Calow_0296 consists of a 1359-bp fragment encoding 452 amino acids, which belongs to glycoside hydrolase family 1 (GH1) and was named CoGH1A. The predicted molecular weight of CoGH1A was 53.2 kDa. The SDS-PAGE analysis agreed with the predicted size (Fig. 1b). The quaternary structure of purified CoGH1A was analyzed through gel filtration chromatography coupled with SDS-PAGE. The results are shown in Fig. 1. The molecular mass (MW) of CoGH1A at peak 1 (160.4 kDa) was almost three times that at peak 2 (53.3 kDa) (Fig. 1a). However, fractions collected from the two peaks showed the same band in SDS-PAGE (Fig. 1b). This suggests that CoGH1A exists as a monomer and a homotrimer in buffer. Two thermostable β-glucosidases, one belonging to GH1 from the termite Nasutitermes takasagoensis [18] and the other belonging to GH3 from Thermoascus aurantiacus [19], were also homotrimers but not monomers in their native form. It is interesting that both the monomer and the homotrimer of CoGH1A exist in the native form. It seems that the monomer and homotrimer collectively function on the substrate.
Gel filtration chromatography and SDS-PAGE analysis of CoGH1A. a Quaternary structure analysis of CoGH1A by gel filtration chromatography. b SDS-PAGE of CoGH1A fractions collected from gel filtration chromatography. The bands marked with Arabic numerals 1–10 corresponded to eluents from a
Optimum temperature, pH and thermostability of CoGH1A
The effects of temperature and pH on the activity of CoGH1A using pNP-β-d-galactopyranoside as the substrate are shown in Fig. 2a, b. The figure shows that the optimum temperature of CoGH1A was from 75 to 85 °C, which is in accordance with the optimum temperature for C. owensensis growth at 75 °C [20]. At 70 and 90 °C, the activities of CoGH1A were more than 80 % of the maximum, while below 60 °C the enzyme activity decreased to less than 50 % of the maximum. This indicates that CoGH1A is an extremely thermophilic enzyme and has broad temperature adaptability. The optimum pH of CoGH1A was 5.5. At pH 5.0 and 6.0, the activities of CoGH1A were about 80 % of the maximum. At pH 4.5 and 7.0, the activities of CoGH1A decreased to about 20 % of the maximum. It is better to control the pH from 5.0 to 6.0 for application of CoGH1A. This is a common pH range for most glycoside hydrolases.
Effect of temperature (a) and pH (b) on activity and thermostability (c) of CoGH1A. Values are averages of three independent measurements; error bars represent standard deviations
The thermostability of CoGH1A is shown in Fig. 2c. After 12 h of incubation in pH 5.5 citrate buffer at 65 and 75 °C, the activities of the enzyme remained the same as the initial activity. These results show that CoGH1A exhibits excellent thermostability at temperatures below 75 °C. The half-lives of CoGH1A were about 11 and 1.5 h when incubated at 80 and 85 °C, respectively. The half-lives of the β-glucosidase CbBgl1A from C. bescii [9] at 80 and 85 °C were 20 and 8 min, respectively. The half-lives of the β-galactosidase from C. saccharolyticus [21] at 75 and 80 °C were 17 and 2 h, respectively. This indicates that, among the enzymes from Caldicellulosiruptor species, CoGH1A is a robust candidate for industrial application.
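The half-lives above translate directly into residual activities after a given incubation time; a minimal sketch, assuming single-exponential (first-order) thermal inactivation — the decay model itself is an assumption of this sketch, while the half-life values come from the text:

```python
# Residual activity under first-order thermal inactivation:
#   A(t) / A0 = 2 ** (-t / t_half)
# Half-lives from the text: about 11 h at 80 C and 1.5 h at 85 C.
def residual_activity(t_hours, t_half_hours):
    return 2.0 ** (-t_hours / t_half_hours)

after_12h_at_80C = residual_activity(12.0, 11.0)  # roughly 0.47
after_12h_at_85C = residual_activity(12.0, 1.5)   # roughly 0.004
```

So under this model the enzyme would still retain nearly half of its activity after a 12-h process at 80 °C, but almost none at 85 °C.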
Effect of ions on the activity of CoGH1A
The effect of cations on the activity of CoGH1A was detected at pH 5.5 and 75 °C (Table 1). On the whole, at the concentrations of both 5 and 10 mM, the effects of cations on the activity of CoGH1A were not significant. Mn2+ and Ni2+ activated the enzyme, with relative activities of 135 and 119 %, respectively, at the concentration of 10 mM. K+ gave relative activities of 114 and 105 % at the concentrations of 5 and 10 mM, respectively. The other cations, such as Fe3+, Zn2+, Co2+, Mg2+, Cu2+, Na+, and NH4+, slightly inhibited the enzyme, with relative activities of 71–96 %. These results indicate that CoGH1A has resistance to cations and additional cations are not necessary for activating the enzyme. The effects of cations on CoGH1A were similar to those on some other glycoside hydrolases from Caldicellulosiruptor species, such as the xylanase from C. kronotskyensis [22] and the xylanase and xylosidase from C. owensensis [12]. They were very different from those on the β-galactosidase produced by Lactobacillus delbrueckii [23]. K+ and Na+ activated that β-galactosidase, with the activities increased almost 5- and 12-fold, respectively, at the ion concentration of 50 mM, while Zn2+ significantly inhibited the activity of that β-galactosidase, with the activity decreased to almost zero at the Zn2+ concentration of 10 mM.
Table 1 Effect of cations on the activity of CoGH1A
Specific activities and kinetic parameters of CoGH1A on different substrates
The activities of CoGH1A on various substrates were tested at 75 °C and pH 5.5. The results (Table 2) show that the enzyme exhibited broad substrate specificity. The highest specific activity was 3215 U mg−1 with pNP-β-d-galactopyranoside (pNPGal) as the substrate, followed by 1621, 603, 280, 140, and 130 U mg−1 with pNP-β-d-glucopyranoside (pNPGlu), pNP-β-d-cellobioside (pNPC), lactose, pNP-β-d-xylopyranoside (pNPX), and cellobiose, respectively, as substrates. The enzyme also displayed activity on soluble polysaccharides such as synanthrin (2.4 U mg−1) and locust bean gum (1.2 U mg−1), and it even hydrolyzed insoluble substrates, with slight activities on cotton (<0.1 U mg−1) and filter paper (<0.1 U mg−1). However, it exhibited no activity against carboxymethyl cellulose (CMC) or pNP-α-l-arabinofuranoside (pNPAr).
Table 2 Specific activities of CoGH1A on different substrates
The kinetic parameters of CoGH1A were determined on several preferred substrates at 75 °C and pH 5.5 (Table 3). The K m values against pNPGal, pNPGlu, pNPC, pNPX, and cellobiose were 0.61, 1.52, 0.87, 7.18, and 15.65 mM, respectively. The catalytic coefficients (k cat/K m) on the five substrates were, respectively, 7450.0, 2467.5, 1085.4, 90.9, and 137.3 mM−1 s−1.
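The reported K m and k cat/K m values are tied together by the Michaelis–Menten rate law; a small sketch follows. Note that the k cat for pNPGal used here (7450 × 0.61 ≈ 4544.5 s−1) is back-calculated from the two reported parameters, not a value stated directly in the text:

```python
def michaelis_menten_rate(s_mM: float, kcat_per_s: float, km_mM: float) -> float:
    """Turnover rate v/[E] = kcat * S / (Km + S), in s^-1."""
    return kcat_per_s * s_mM / (km_mM + s_mM)

# CoGH1A on pNPGal (Table 3): Km = 0.61 mM, kcat/Km = 7450 mM^-1 s^-1,
# hence kcat = 7450 * 0.61 = 4544.5 s^-1 (back-calculated).
km = 0.61
kcat = 7450.0 * km
# At S = Km the rate is exactly kcat / 2:
print(round(michaelis_menten_rate(km, kcat, km), 2))  # 2272.25
```

At substrate concentrations far below K m the rate reduces to (k cat/K m)·S, which is why k cat/K m is the natural figure of merit for comparing the enzymes in Tables 3–5.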
Table 3 Kinetic parameters of CoGH1A on different substrates
A complete cellulase system consists of at least three related enzymes: endoglucanases (EC 3.2.1.4), exoglucanases (or cellobiohydrolases) (EC 3.2.1.91), and β-glucosidases (EC 3.2.1.21) [24]. They cooperate in releasing glucose. Endoglucanases randomly hydrolyze internal glycosidic bonds to decrease the length of the cellulose chain and multiply polymer ends. Exoglucanases split off cellobiose or glucose from cellulose termini, and β-glucosidases hydrolyze cellobiose and oligomers to release glucose [24]. Among them, β-glucosidases are essential for efficient hydrolysis of cellulosic biomass, as they relieve the inhibition of cellobiohydrolases and endoglucanases by reducing cellobiose accumulation [25]. Commercial cellulase preparations are mainly based on mutant strains of Trichoderma reesei, which are usually characterized by low secretion of β-glucosidase [26]. Thus, T. reesei cellulase preparations have to be supplemented with β-glucosidase for more efficient saccharification of cellulosic substrates [26, 27]. Much effort has been devoted to finding highly efficient β-glucosidases for bioconversion of cellulosic substrates. Some β-glucosidases with relatively high activity are listed in Table 4. The V max of CoGH1A was 4027 ± 75 and 2424 ± 48 μmol mg−1 min−1 with pNPGlu and cellobiose, respectively, as substrates, which was lower than that of the β-glucosidase from Pholiota adiposa (4390 and 3460 μmol mg−1 min−1 with pNPGlu and cellobiose, respectively) [28]. However, the thermostability of CoGH1A was better than that of the β-glucosidase from Pholiota adiposa: the former kept 100 % activity after incubation at 75 °C for 12 h (Fig. 2c), whereas the latter kept 50 % activity after incubation at 70 °C for 8.5 h [28].
The catalytic coefficient (k cat/K m) of the β-glucosidase from the hyperthermophilic bacterium Thermotoga petrophila was 30,800 mM−1 s−1 with pNPGlu as a substrate [10]. To the best of our knowledge, this is by far the highest reported β-glucosidase activity with pNPGlu as a substrate. However, the capability of this β-glucosidase to hydrolyze cellobiose was very low, with a specific activity of only 2.3 U mg−1 [10], which will limit its application in saccharification of lignocellulosic biomass. Another β-glucosidase, from C. bescii, a strain of the same genus as the host of CoGH1A, was studied in detail [9]. The catalytic coefficients (k cat/K m) of that β-glucosidase were 84.0 and 87.3 mM−1 s−1 with pNPGlu and cellobiose, respectively, as the substrate [9], much lower than those (2467.5 and 137.3 mM−1 s−1) of CoGH1A. These results show that CoGH1A is a potential β-glucosidase candidate for industrial application.
Table 4 CoGH1A and some reported microbial β-glucosidases with relatively high activity
Exoglucanases or cellobiohydrolases (CBHs) preferentially hydrolyze β-1,4-glycosidic bonds from chain ends, producing cellobiose as the main product. CBHs have been shown to create a substrate-binding tunnel with their extended loops, which surround the cellulose [29, 30]. Besides microcrystalline cellulose and cotton cellulose, pNPC has also been used as a substrate for detecting exoglucanase (CBH) activity [31, 32]. The specific activity and V max of CoGH1A were, respectively, 603 U mg−1 and 1065 μmol mg−1 min−1 using pNPC as the substrate (Tables 2, 3), much higher than those of most reported enzymes. For example, Gao et al. purified a novel cellobiohydrolase from Penicillium decumbens with a specific activity of 1.9 U mg−1 against pNPC [32]. Lee et al. purified a cellobiohydrolase from Penicillium purpurogenum with a specific activity of 10.8 U mg−1 against pNPC [33]. Bok et al. [11] purified two enzymes (CelA and CelB) with both endo- and exoglucanase activities; the V max of CelA and CelB were, respectively, 69.2 and 18.4 μmol mg−1 min−1 with pNPC as a substrate. The high specific activity against pNPC indicates that CoGH1A has a high ability to split off cellobiose, or glucose units one by one, from pNPC. However, with filter paper or cotton as the substrate the activities were very low (<0.1 U mg−1, Table 2). This is not surprising, because CoGH1A has only a catalytic domain (CD) and lacks a carbohydrate-binding module (CBM) that would facilitate binding of the enzyme to the substrate. Therefore, CoGH1A works primarily on soluble oligosaccharides.
β-Galactosidases catalyze the hydrolysis of the β-1,4-d-glycosidic linkage of lactose and structurally related substrates. They have two main technological applications in the food industry, namely the removal of lactose from milk and dairy products [34] and the production of galactooligosaccharides (GalOS) from lactose by transglycosylation [3].
CoGH1A exhibits very high β-galactosidase activity, with a catalytic coefficient (k cat/K m) of 7450.0 mM−1 s−1 on pNPGal. This is by far the highest catalytic coefficient among all reported β-galactosidases on pNPGal (Table 5), much higher than the second highest value of 1462.8 mM−1 s−1, achieved by the enzyme from the thermoacidophilic bacterium Alicyclobacillus acidocaldarius [35]. The catalytic coefficient of another β-galactosidase, from the same-genus strain Caldicellulosiruptor saccharolyticus, was 149 mM−1 s−1 [21], much lower than that of CoGH1A, although the optimum temperature and pH of the two enzymes are similar. Table 5 also shows that most of the β-galactosidases with higher activity belong to GH1. The optimum pH and temperature of these GH1 β-galactosidases are 5.5–6.5 and 65–85 °C, respectively. In general, higher temperatures can be beneficial for higher oligosaccharide yields, and the problem of microbial contamination can also be solved by increasing the catalysis temperature [34]. CoGH1A, with its high β-galactosidase activity and high optimum temperature (75–85 °C), will be a promising enzyme in the food industry.
Table 5 Kinetic parameters of CoGH1A and some reported microbial β-galactosidases with relatively high activity
Supplementation of CoGH1A for lignocellulosic biomass hydrolysis
Because CoGH1A is a multifunctional glycoside hydrolase, and especially because of its high β-glucosidase activity, its capacity for saccharification of lignocellulosic biomass was determined. The hydrolysis of steam-exploded (SE) corn stover by the commercial enzyme CTec2 (Novozymes) supplemented with CoGH1A was carried out at 60 °C and pH 5.5. As shown in Fig. 3, supplementation with CoGH1A markedly enhanced saccharification of the SE corn stover. With CoGH1A supplemented at doses of 10 and 20 Ucellobiose g−1 biomass, after 72-h hydrolysis the concentrations of glucose increased from 1.95 (hydrolysis by CTec2 only) to 2.53 and 2.69 g L−1, increases of 29.7 and 37.9 %, respectively. In addition, the concentrations of xylose increased from 0.38 to 0.47 and 0.54 g L−1, increases of 23.7 and 42.1 %, respectively. This indicates that CoGH1A contributed not only to cellulose but also to hemicellulose hydrolysis, owing to its multiple glycoside hydrolase activities (Tables 2, 3). Although the catalytic temperature of 60 °C in this experiment was optimal neither for CTec2 (50 °C) nor for CoGH1A (75–85 °C), and the activity of CoGH1A at 60 °C was only about 50 % of the maximum (Fig. 2), this experiment demonstrated the high ability of CoGH1A in saccharification of lignocellulosic biomass. CoGH1A, possessing β-glucosidase, exoglucanase, and β-xylosidase activities, may be a promising enzyme for industrial application in bioconversion of lignocellulosic biomass.
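The percentage increases quoted above follow directly from the reported sugar concentrations; a quick arithmetic check in Python:

```python
def percent_increase(before_g_per_L: float, after_g_per_L: float) -> float:
    """Relative increase in product concentration, in percent."""
    return (after_g_per_L - before_g_per_L) / before_g_per_L * 100

# Glucose after 72 h: 1.95 g/L (CTec2 alone) vs 2.53 or 2.69 g/L with CoGH1A
print(round(percent_increase(1.95, 2.53), 1))  # 29.7
print(round(percent_increase(1.95, 2.69), 1))  # 37.9
# Xylose after 72 h: 0.38 g/L (CTec2 alone) vs 0.47 or 0.54 g/L with CoGH1A
print(round(percent_increase(0.38, 0.47), 1))  # 23.7
print(round(percent_increase(0.38, 0.54), 1))  # 42.1
```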
Synergetic hydrolysis of steam-exploded corn stover by CTec2 and CoGH1A. This experiment was carried out in pH 5.5 acetate buffer at 60 °C with an initial biomass loading of 20 g L−1. The CTec2 loading rate was 10 FPU g−1 biomass, and the loading rate of the supplemented CoGH1A was 10 or 20 Ucellobiose g−1 biomass. Values are averages of three independent measurements; error bars represent the standard deviation
Lactose transformation and synthesis of galactooligosaccharides
Since CoGH1A has very high β-galactosidase activity, its potential industrial applications were further investigated at low (40 g L−1) and high (500 g L−1) initial lactose concentrations by incubation in pH 5.5 citrate buffer at 70 °C with an enzyme dosage of 2.5 Ulactose mL−1. Figure 4a shows the time course of the enzymatic catalysis at the initial lactose concentration of 40 g L−1, which is very close to the lactose concentration in fresh milk. After a 10-min reaction, more than 83 % of the lactose was degraded, and the yields of GalOS, glucose, and galactose were 17.0, 35.6, and 30.6 %, with concentrations of 6.80, 14.24, and 12.24 g L−1, respectively. When the reaction continued to 30 min, about 92 % of the lactose was converted, and the yields of GalOS, glucose, and galactose were 11.0, 41.9, and 39.0 %, with concentrations of 4.40, 16.46, and 15.6 g L−1, respectively. On further prolonging the reaction, the concentrations of lactose and GalOS gradually decreased, while those of glucose and galactose gradually increased.
Time course of the enzymatic catalysis of lactose degradation and GalOS synthesis by CoGH1A at initial lactose concentrations of 40 g L−1 (a) and 500 g L−1 (b). Values are averages of three independent measurements; error bars represent the standard deviation
In the literature, reaching lactose hydrolysis rates of 70–95 % required reaction times of 2–10 h [34, 36–38]. In this research, however, CoGH1A needed only 10 min. The high efficiency of CoGH1A makes it a potential candidate for the milk and milk-product industries. Moreover, CoGH1A also synthesizes GalOS, with a 10–20 % GalOS yield during lactose hydrolysis. This will upgrade the quality and value of milk and milk products, since GalOS are non-digestible oligosaccharides that are used as prebiotic food ingredients.
When the initial lactose concentration was 500 g L−1 (Fig. 4b), the GalOS yield was much higher than at the low initial lactose concentration (40 g L−1, Fig. 4a). This agrees with the fact that higher initial lactose concentrations can be beneficial for higher oligosaccharide yields [34]. The GalOS yield was 38.6 % after 30 min of catalysis and gradually increased to 44.2 % at 50 min. On further catalysis to 70 min, the GalOS yield decreased slightly to 39.2 %, indicating that a 50-min reaction is sufficient to achieve the highest GalOS yield. The highest GalOS concentration reached 221 g L−1, corresponding to a productivity of 265.2 g L−1 h−1.
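The quoted productivity is simply the peak GalOS concentration divided by the time taken to reach it; a one-line check:

```python
def productivity(conc_g_per_L: float, time_min: float) -> float:
    """Volumetric productivity in g L^-1 h^-1."""
    return conc_g_per_L * 60.0 / time_min

# 221 g/L of GalOS reached at 50 min
print(round(productivity(221.0, 50.0), 1))  # 265.2
```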
Figure 4 also shows that at the low initial lactose concentration (Fig. 4a), glucose and galactose were the main products (together about 80 %), and the yield of glucose was almost the same as that of galactose. This indicates that, at low initial lactose concentration, the velocity of lactose decomposition was much higher than that of GalOS synthesis. During catalysis at the high initial lactose concentration (Fig. 4b), the lactose concentration decreased quickly, by more than 80 % after 50 min; however, the yield of galactose, which is produced by lactose decomposition, remained below 11.6 %, because galactose was consumed for GalOS synthesis. This shows that, at high initial lactose concentration, the velocity of lactose decomposition was almost the same as that of GalOS synthesis.
The characteristics of GalOS synthesis by CoGH1A at the initial lactose concentration of 500 g L−1, compared with some other β-galactosidases in batch processes, are shown in Table 6. The GalOS productivity of CoGH1A was more than 12 times higher than those of the β-galactosidases from Lactobacillus delbrueckii subsp. bulgaricus (19.8 g L−1 h−1) [23], Bifidobacterium breve (14.7 g L−1 h−1) [51], a marine metagenomic library (20.6 g L−1 h−1) [52], and Thermotoga maritima (18.20 g L−1 h−1) [39]. The time to reach the maximum yield with these four β-galactosidases was 5–10 h (Table 6), at least fivefold longer than with CoGH1A. Although the enzyme loading rate of CoGH1A (2.5 Ulactose mL−1) in this study was higher than those of the enzymes from Lactobacillus delbrueckii subsp. bulgaricus (1.5 Ulactose mL−1) and Thermotoga maritima (1.5 Ulactose mL−1), the actual protein loading rate of CoGH1A was 8.9 μg mL−1, much lower than those of the other β-galactosidases (Table 6). Moreover, CoGH1A is highly thermostable, retaining 100 % activity after incubation at 75 °C for 12 h (Fig. 2c). The problem of microbial contamination can also be alleviated or avoided at such a high temperature. These advantages make CoGH1A a potential candidate for GalOS synthesis.
Table 6 Characteristics of GalOS synthesis by different β-galactosidases in batch processes
Multiple sequence alignment and the possible role in host bacterium
The structure and active-site residues of the enzyme were predicted to analyze the differences between CoGH1A and other GHs whose structures have been resolved. Three proteins with relatively high identity to CoGH1A were found: 3AHX (a β-glucosidase from Clostridium cellulovorans) [40], 4PTV (a β-glucosidase from Halothermothrix orenii) [41], and 1OD0 (a β-glucosidase from Thermotoga maritima) [42]. CoGH1A shares 53, 50, and 48 % identity with them, respectively. However, the catalytic coefficients (k cat/K m) of 3AHX, 4PTV, and 1OD0 were, respectively, 340, 187, and 102 mM−1 s−1 with pNPGlu (for 3AHX and 4PTV) [40, 41] or 2,4-dinitrophenyl-β-d-glucopyranoside (for 1OD0) [42] as a substrate, much lower than those of CoGH1A. Multiple sequence alignment of CoGH1A with the aforesaid proteins (Fig. 5) was performed using the ClustalX2 software and depicted online with ESPript 3.0 (http://www.espript.ibcp.fr/). The catalytic acid/base and catalytic nucleophile of CoGH1A are predicted to be Glu163 and Glu361, respectively (blue arrows in Fig. 5). The residue Glu414 is predicted to participate in substrate binding. As Fig. 5 shows, both the catalytic glutamic acid residues and the amino acid residues next to them are conserved among the four enzymes. This suggests that the catalytic coefficients of these enzymes are determined by residues other than the conserved catalytic glutamic acids.
Multiple sequence alignment of Calow_0296 (CoGH1A) and the three resolved proteins with relatively high identity. The blue arrows indicate the conserved catalytic glutamic acid residues
Caldicellulosiruptor owensensis, the host bacterium of CoGH1A, was isolated from a shallow freshwater pond located in the Owens Lake bed area [20]. It can grow on a wide variety of carbon sources, including arabinose, glucose, cellobiose, cellulose, xylan, xylose, galactose, dextrin, fructose, lactose, glycogen, inositol, mannitol, mannose, maltose, pectin, raffinose, rhamnose, ribose, starch, sucrose, and tagatose [20]. In its native environment, polysaccharides (or glycans) are probably the most readily available carbon sources for C. owensensis, and decomposition of these polysaccharides to monosaccharides is necessary for the bacterium to assimilate them. The multifunctional glycoside hydrolase CoGH1A, which can efficiently convert various soluble oligosaccharides to monosaccharides, may contribute importantly to the capability of C. owensensis to utilize such a wide variety of carbon sources. The thermostability of CoGH1A and its resistance to cations may also help the host bacterium grow in changing environments.
A multifunctional glycoside hydrolase, named CoGH1A, has been cloned from the extremely thermophilic bacterium C. owensensis and expressed. It possesses high β-d-glucosidase, exoglucanase, β-d-xylosidase, β-d-galactosidase, and transgalactosylation activities. Moreover, it exhibits excellent thermostability, retaining 100 % activity after 12-h incubation at 75 °C. The enzyme contributed not only to glucose but also to xylose release when supplemented for hydrolysis of corn stover. Additionally, the catalytic efficiencies of this enzyme in lactose decomposition and galactooligosaccharide synthesis were at least 5- and 12-fold, respectively, those of the reported glycoside hydrolases. The multifunctional glycoside hydrolase CoGH1A is a promising enzyme for bioconversion of biomass and carbohydrates. This research also indicates that extremely thermophilic bacteria are potential resources for screening highly efficient GHs for the production of biofuels and biochemicals.
The extremely thermophilic cellulolytic bacterium C. owensensis DSM 13100 was purchased from the DSMZ (German Collection of Microorganisms and Cell Cultures). The substrates p-nitrophenyl β-d-galactopyranoside (pNPGal), p-nitrophenyl β-d-glucopyranoside (pNPGlu), p-nitrophenyl β-d-cellobioside (pNPC), p-nitrophenyl β-d-xylopyranoside (pNPX), p-nitrophenyl β-d-mannopyranoside (pNPM), p-nitrophenyl α-l-arabinofuranoside (pNPAr), carboxymethylcellulose (CMC), locust bean gum, and synanthrin were purchased from Sigma. The chemicals and other substrates were purchased from Sinopharm Chemical Reagent Beijing Co., Ltd or Sigma. The steam-exploded corn straw was obtained by pretreatment at 1.5 MPa for 3 min; its composition was glucan 46.8 %, xylan 4.3 %, araban 0 %, and lignin 27.4 %. The competent cells used for cloning and expression were Escherichia coli Top10 (TianGen, China) and E. coli BL21 (DE3), respectively. The E. coli BL21 (DE3) competent cells were prepared as described in Molecular Cloning: A Laboratory Manual [43].
Cloning the gene of CoGH1A
Caldicellulosiruptor owensensis cells were grown at 75 °C and 75 rpm for 24 h in DSMZ medium 640 (the detailed list of ingredients is supplied by DSMZ, http://www.dsmz.de). The genomic DNA of C. owensensis was extracted from 3 mL of fermentation broth using the TIANamp Bacteria DNA Kit (TianGen, China). Gene Calow_0296 was amplified from the genomic DNA of C. owensensis using the primers CoGH1A-F 5′-GCCGCGCGGCAGCATGAGTTTTCCAAAAG-3′ and CoGH1A-R 5′-GCGGCCGCAAGCGTTTATGAATTTTCCTTTAT-3′. The PCR product was then purified and treated with 0.5 IU of T4 DNA polymerase (Takara) and 5 mM dATP at 37 °C for 30 min. The T4 DNA polymerase was then inactivated by incubation at 75 °C for 20 min, and the vector pETke and the treated insert gene were annealed at 22 °C for 15 min. After that, the recombinant plasmids were transformed into TOP10 competent cells, which were then cultured on Lysogeny Broth (LB) agar plates with kanamycin (50 μg mL−1). After colony PCR and sequencing validation (Sangon, Shanghai, China), the target fragments were confirmed to have been inserted into the vectors. The positive recombinant plasmid was then extracted with the TIANprep Mini Plasmid Kit (TianGen) and transformed into E. coli BL21 (DE3).
Heterologous expression and purification of CoGH1A
The transformed cells were incubated overnight on an LB agar plate with kanamycin at 37 °C. A single colony was picked from the plate and cultivated in liquid LB medium with kanamycin in an incubator shaker (220 rpm, 37 °C) for 12 h. The recombinant bacteria were inoculated into fresh liquid LB medium (1 L) with kanamycin and incubated in the incubator shaker (220 rpm, 37 °C) until the OD600 reached about 0.6–0.8. Isopropyl β-d-thiogalactopyranoside (IPTG) was then added to the broth to a final concentration of 0.1 mM for a further 12-h cultivation at 16 °C and 160 rpm. Cells were collected, resuspended in lysis buffer (50 mM Tris–HCl, pH 7.5, 300 mM NaCl), and lysed at 4 °C with a Selecta Sonopuls sonicator (JY92-IN, Ningbo Scientz Biotechnology, Ningbo, China). After centrifugation (20,000 rpm for 20 min), the supernatant was applied to a His-tag Ni-affinity resin (National Engineering Research Centre for Biotechnology, Beijing, China) pre-equilibrated with binding buffer (50 mM Tris–HCl pH 7.5, 300 mM NaCl). The column was washed with 10 mL of binding buffer to remove nonspecifically bound proteins. The fusion protein (CoGH1A) was eluted with elution buffer (50 mM Tris–HCl, pH 7.5, 300 mM NaCl, and 150 mM imidazole). The purified CoGH1A was confirmed by sodium dodecyl sulfate-polyacrylamide gel electrophoresis (SDS-PAGE). The protein concentration was measured as described by Bradford [44] using bovine serum albumin as the standard. The quaternary structure of the purified CoGH1A was analyzed by gel filtration chromatography, and the molecular mass (MW) was estimated from a calibration curve of log (MW) vs. elution volume.
Assay activities of CoGH1A on different substrates
The substrate specificity of CoGH1A was estimated by incubating the diluted enzyme in 50 mM citrate buffer (pH 6.0) containing 1 mM p-nitrophenol glycosides (pNPGal, pNPGlu, pNPC, pNPX, pNPAr, or pNPM), disaccharides (cellobiose or lactose), or 1 % (w/v) glycan (locust bean gum, synanthrin, cotton, filter paper, or carboxymethyl cellulose) at 75 °C for 5–30 min. The released p-nitrophenol (from p-nitrophenol glycosides), glucose (from cellobiose and lactose), and reducing sugars (from glycans) were measured by spectrophotometry (at 400 nm) [16], high-performance liquid chromatography (HPLC), and the DNS method [45], respectively. One unit of enzyme activity was defined as the amount of protein releasing 1 μmol of pNP, glucose, or reducing sugars from the substrates per minute (or 2 μmol of glucose from cellobiose).
Characterization of CoGH1A
The optimum pH of CoGH1A was determined over the range 4–8 at 75 °C in citrate buffer, and the optimum temperature over the range 40–100 °C at pH 5.5 in citrate buffer. Thermostability was investigated by incubating the samples at different temperatures in pH 5.5 citrate buffer; samples were withdrawn at appropriate time intervals and the residual activity was measured as described above. The effects of metal ions on CoGH1A activity were determined at 75 °C and pH 5.5, with 1 mM pNPGal as the substrate. The kinetic parameters on the substrates pNPGal, pNPGlu, pNPC, pNPX, and cellobiose were determined at 75 °C and pH 5.5 using various concentrations of each substrate in the enzyme activity assay.
Hydrolysis of the steam-exploded corn stover
Synergetic hydrolysis by the commercial enzyme cocktail Cellic CTec2 (Novozymes) and CoGH1A was carried out in pH 5.5 acetate buffer with an initial steam-exploded corn stover concentration of 20 g L−1. The reaction volume was 1 mL in a 2-mL Eppendorf tube, which was sealed by winding parafilm around the closed lid and placed in a water bath at 60 °C for 72 h. The loading rates of CTec2 and CoGH1A were, respectively, 10 FPU g−1 biomass and 10 or 20 Ucellobiose g−1 biomass. Hydrolysis of steam-exploded corn stover using CTec2 alone under the same conditions was performed as a control. The produced sugars were quantified by high-performance liquid chromatography (HPLC) with an LC-20AT pump (Shimadzu, Japan) and a refractive index detector (RID-10A, Shimadzu, Japan). A Hi-Plex Ca column (7.7 × 300 mm, Agilent Technology, USA) was used for the HPLC analysis, with HPLC-grade water as the mobile phase at a flow rate of 0.6 mL min−1 at 85 °C.
Lactose hydrolysis and transgalactosylation
Lactose hydrolysis and transgalactosylation by CoGH1A were carried out at initial lactose concentrations of 40 and 500 g L−1, respectively, in citrate buffer of pH 5.5 at 70 °C with a β-galactosidase dosage of 2.5 Ulactose mL−1. Quantitative analysis of the sugars was performed by HPLC. Glucose, galactose, and lactose in the mixtures were identified by comparison of their retention times with those of standards and quantified from the peak areas calibrated against sugar standards. In this HPLC setup, disaccharides such as lactose, galactobiose, and allolactose could not be separated, as their retention times were almost identical; therefore, the total disaccharides were quantified using lactose as the standard. The GalOS yield was calculated according to the following formula:
$$\text{GalOS yield}\,(\%) = \frac{\text{mass of initial lactose} - \text{mass of (disaccharides} + \text{glucose} + \text{galactose)}}{\text{mass of initial lactose}} \times 100\,\%.$$
This formula is based on the fact that the sum of all sugars (glucose, galactose, disaccharides, and GalOS) is approximately equal to the mass of the initial lactose [52]. Accordingly, the GalOS yields given in this work do not include the disaccharides.
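The mass-balance formula can be coded directly, as in the sketch below. Note that the residual-disaccharide value used in the example (6.72 g/L) is back-calculated from the reported 10-min data of Fig. 4a, not a number stated in the text:

```python
def galos_yield_percent(initial_lactose: float, disaccharides: float,
                        glucose: float, galactose: float) -> float:
    """GalOS yield by mass balance: sugar mass (g/L) not accounted for as
    disaccharides, glucose, or galactose is attributed to GalOS."""
    unaccounted = initial_lactose - (disaccharides + glucose + galactose)
    return unaccounted / initial_lactose * 100

# Fig. 4a at 10 min: 40 g/L initial lactose, 14.24 g/L glucose,
# 12.24 g/L galactose; 6.72 g/L residual disaccharides (back-calculated).
print(round(galos_yield_percent(40.0, 6.72, 14.24, 12.24), 1))  # 17.0
```

With these inputs the formula reproduces the reported 17.0 % GalOS yield, and at time zero (all sugar still present as lactose) it correctly gives 0 %.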
Structure and active site prediction
The structure and active site were predicted to analyze the differences between CoGH1A and other GHs whose structures have been resolved. The amino acid sequence of CoGH1A was searched by BLAST against the Protein Data Bank (PDB) database (http://www.rcsb.org).
Multiple sequence alignment of CoGH1A with the resolved proteins was performed using the ClustalX2 software and depicted online with ESPript 3.0 (http://www.espript.ibcp.fr/).
FPA: filter paper activity
CMC: carboxymethylcellulose
pNPGal: p-nitrophenyl β-d-galactopyranoside
pNPGlu: p-nitrophenyl β-d-glucopyranoside
pNPC: p-nitrophenyl β-d-cellobioside
pNPX: p-nitrophenyl β-d-xylopyranoside
pNPM: p-nitrophenyl β-d-mannopyranoside
pNPAr: p-nitrophenyl α-l-arabinofuranoside
HPLC: high-performance liquid chromatography
GalOS: galactooligosaccharides
Garvey M, Klose H, Fischer R, et al. Cellulases for biomass degradation: comparing recombinant cellulase expression platforms. Trends Biotechnol. 2013;31:581–93.
Reyes-Ortiz V, Heins RA, Cheng G, et al. Addition of a carbohydrate-binding module enhances cellulase penetration into cellulose substrates. Biotechnol Biofuels. 2013;6:1–13.
Park AR, Oh DK. Galacto-oligosaccharide production using microbial β-galactosidase: current state and perspectives. Appl Microbiol Biotechnol. 2010;85:1279–86.
Zou ZZ, Yu HL, Li CX, et al. A new thermostable β-glucosidase mined from Dictyoglomus thermophilum: properties and performance in octyl glucoside synthesis at high temperatures. Bioresour Technol. 2012;118:425–30.
Bauer MW, Driskill LE, Kelly RM. Glycosyl hydrolases from hyperthermophilic microorganisms. Curr Opin Biotechnol. 1998;9:141–5.
Blumer-Schuette SE, Kataeva I, Westpheling J, et al. Extremely thermophilic microorganisms for biomass conversion: status and prospects. Curr Opin Biotechnol. 2008;19:210–7.
Blumer-Schuette SE, Giannone RJ, Zurawski JV, et al. Caldicellulosiruptor core and pangenomes reveal determinants for noncellulosomal thermophilic deconstruction of plant biomass. J Bacteriol. 2012;194:4015–28.
Bhalla A, Bansal N, Kumar S, et al. Improved lignocellulose conversion to biofuels with thermophilic bacteria and thermostable enzymes. Bioresour Technol. 2013;128:751–9.
Bai A, Zhao X, Jin Y, et al. A novel thermophilic β-glucosidase from Caldicellulosiruptor bescii: characterization and its synergistic catalysis with other cellulases. J Mol Catal B Enzym. 2013;85:248–56.
Haq IU, Khan MA, Muneer B, et al. Cloning, characterization and molecular docking of a highly thermostable β-1, 4-glucosidase from Thermotoga petrophila. Biotechnol Lett. 2012;34:1703–9.
Bok JD, Yernool DA, Eveleigh DE. Purification, characterization, and molecular analysis of thermostable cellulases CelA and CelB from Thermotoga neapolitana. Appl Environ Microbiol. 1998;64:4774–81.
Mi S, Jia X, Wang J, et al. Biochemical characterization of two thermostable xylanolytic enzymes encoded by a gene cluster of Caldicellulosiruptor owensensis. PLoS One. 2014;9:e106482.
Kong F, Wang Y, Cao S, et al. Cloning, purification and characterization of a thermostable β-galactosidase from Thermotoga naphthophila RUK-10. Process Biochem. 2014;49:775–82.
Blumer-Schuette SE, Lewis DL, Kelly RM. Phylogenetic, microbiological, and glycoside hydrolase diversities within the extremely thermophilic, plant biomass-degrading genus Caldicellulosiruptor. Appl Environ Microbiol. 2010;76:8084–92.
Blumer-Schuette SE, Ozdemir I, Mistry D, et al. Complete genome sequences for the anaerobic, extremely thermophilic plant biomass-degrading bacteria Caldicellulosiruptor hydrothermalis, Caldicellulosiruptor kristjanssonii, Caldicellulosiruptor kronotskyensis, Caldicellulosiruptor owensenis and Caldicellulosiruptor lactoaceticus. J Bacteriol. 2011;193:1483–4.
XP and YH designed the study. XP performed all the experiments, analyzed the data, and drafted the manuscript. HS participated in structure analysis. SM participated in DNA extraction. All authors read and approved the final manuscript.
We would like to thank the financial support from the National High Technology Research and Development Program of China (863 Project No: 2014AA021905) and 100 Talents Program of Institute of Process Engineering, Chinese Academy of Sciences.
National Key Laboratory of Biochemical Engineering, Institute of Process Engineering, Chinese Academy of Sciences, Beijing, China
Xiaowei Peng, Hong Su, Shuofu Mi & Yejun Han
Correspondence to Yejun Han.
Peng, X., Su, H., Mi, S. et al. A multifunctional thermophilic glycoside hydrolase from Caldicellulosiruptor owensensis with potential applications in production of biofuels and biochemicals. Biotechnol Biofuels 9, 98 (2016). https://doi.org/10.1186/s13068-016-0509-y
β-d-glucosidase
Exoglucanase
β-d-xylosidase
β-d-galactosidase
Transglycosylation
Lignocellulose | CommonCrawl |
Solve for $x$ in the equation $ \frac35 \cdot \frac19 \cdot x = 6$.
Multiplying both sides by $\frac{5}{3}$ gives $\frac{1}{9} \cdot x = 6\cdot \frac53 = 10$, and then multiplying by 9 gives $x = \boxed{90}$. | Math Dataset |
Tool to apply the gaussian elimination method and get the row reduced echelon form, with steps, details, inverse matrix and vector solution.
Gaussian Elimination - dCode
What is the Gaussian Elimination method?
The Gaussian elimination algorithm (also called Gauss-Jordan, or pivot method) makes it possible to find the solutions of a system of linear equations, and to determine the inverse of a matrix.
The algorithm works on the rows of the matrix by exchanging two rows, multiplying a row by a nonzero factor, or adding a multiple of one row to another.
At each step, the algorithm aims to introduce zero values into the elements of the matrix outside the diagonal.
How to calculate the solutions of a linear equation system with Gauss?
From a system of linear equations, the first step is to convert the equations into a matrix.
Example: $$ \left\{ \begin{array}{} x&-&y&+&2z&=&5\\3x&+&2y&+&z&=&10\\2x&-&3y&-&2z&=&-10\\\end{array} \right. $$ can be written under matrix multiplication form: $$ \left( \begin{array}{ccc} 1 & -1 & 2 \\ 3 & 2 & 1 \\ 2 & -3 & -2 \end{array} \right) . \left( \begin{array}{c} x \\ y \\ z \end{array} \right) = \left( \begin{array}{c} 5 \\ 10 \\ -10 \end{array} \right) $$ that corresponds to the (augmented) matrix $$ \left( \begin{array}{ccc|c} 1 & -1 & 2 & 5 \\ 3 & 2 & 1 & 10 \\ 2 & -3 & -2 & -10 \end{array} \right) $$
Then, for each element outside the non-zero diagonal, perform an adequate calculation by adding or subtracting the other lines so that the element becomes 0.
Example: Subtract 3 times (Row 1) from (Row 2) so that the element in row 2, column 1 becomes 0: $$ \left( \begin{array}{ccc|c} 1 & -1 & 2 & 5 \\ 0 & 5 & -5 & -5 \\ 2 & -3 & -2 & -10 \end{array} \right) $$

Subtract 2 times (Row 1) from (Row 3) so that the element in row 3, column 1 becomes 0: $$ \left( \begin{array}{ccc|c} 1 & -1 & 2 & 5 \\ 0 & 5 & -5 & -5 \\ 0 & -1 & -6 & -20 \end{array} \right) $$

Add 1/5 times (Row 2) to (Row 3) so that the element in row 3, column 2 becomes 0: $$ \left( \begin{array}{ccc|c} 1 & -1 & 2 & 5 \\ 0 & 5 & -5 & -5 \\ 0 & 0 & -7 & -21 \end{array} \right) $$

Add 1/5 times (Row 2) to (Row 1) so that the element in row 1, column 2 becomes 0: $$ \left( \begin{array}{ccc|c} 1 & 0 & 1 & 4 \\ 0 & 5 & -5 & -5 \\ 0 & 0 & -7 & -21 \end{array} \right) $$

Subtract 5/7 times (Row 3) from (Row 2) and add 1/7 times (Row 3) to (Row 1) so that the remaining off-diagonal elements in column 3 become 0: $$ \left( \begin{array}{ccc|c} 1 & 0 & 0 & 1 \\ 0 & 5 & 0 & 10 \\ 0 & 0 & -7 & -21 \end{array} \right) $$

Simplify each line by dividing it by the value on its diagonal
Example: $$ \left( \begin{array}{ccc|c} 1 & 0 & 0 & 1 \\ 0 & 1 & 0 & 2 \\ 0 & 0 & 1 & 3 \end{array} \right) $$
The result vector is the last column.
Example: $ {1,2,3} $ that corresponds to $ {x,y,z} $ so $ x=1, y=2, z=3 $
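The worked example above can be checked numerically. The following Python sketch (NumPy assumed available; this is an illustrative implementation, not the tool's source code) performs the same Gauss-Jordan reduction on the augmented matrix, with partial pivoting for stability, and recovers x=1, y=2, z=3:

```python
import numpy as np

def gauss_jordan(aug):
    """Reduce an augmented matrix [A | b] to reduced row echelon form
    and return the solution vector (assumes A is square and invertible)."""
    a = aug.astype(float).copy()
    n = a.shape[0]
    for col in range(n):
        # Partial pivoting: swap in the row with the largest candidate pivot.
        pivot = np.argmax(np.abs(a[col:, col])) + col
        a[[col, pivot]] = a[[pivot, col]]
        a[col] /= a[col, col]            # scale the pivot row so the pivot is 1
        for row in range(n):             # zero out the column everywhere else
            if row != col:
                a[row] -= a[row, col] * a[col]
    return a[:, -1]                      # last column holds the solution

aug = np.array([[1, -1,  2,   5],
                [3,  2,  1,  10],
                [2, -3, -2, -10]])
print(gauss_jordan(aug))  # → [1. 2. 3.]
```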
Source : https://www.dcode.fr/gaussian-elimination | CommonCrawl |
\begin{document}
\label{firstpage}
\begin{abstract}
We consider a Bayesian functional data analysis for observations measured as extremely long sequences. Splitting the sequence into a number of small windows with manageable length, the windows may not be independent especially when they are neighboring to each other. We propose to utilize Bayesian smoothing splines to estimate individual functional patterns within each window and to establish transition models for parameters involved in each window to address the dependent structure between windows. The functional difference of groups of individuals at each window can be evaluated by Bayes Factor based on Markov Chain Monte Carlo samples in the analysis. In this paper, we examine the proposed method through simulation studies and apply it to identify differentially methylated genetic regions in TCGA lung adenocarcinoma data.
\blfootnote{$^{\dagger}$Corresponding author}
\end{abstract}
\begin{keywords} Bayesian smoothing splines, Dynamic weighted particle filter, Differentially methylated region \end{keywords}
\maketitle
\section{Introduction} \label{s:intro}
DNA methylation is an epigenetic modification of human DNA in which a methyl group attaches to the cytosine of a CG dinucleotide, called a CpG site, forming a methylcytosine. Ideally these locations are free from any methylation, which in turn allows the normal process of gene transcription, expression and regulation \citep{2}. Under CpG methylation this usual process is hindered, causing disruption to regular cell functioning \citep{3,4}.
DNA hypomethylation and hypermethylation cause the activation of tumor-inducing oncogenes and the silencing of tumor suppressor genes, respectively. Several studies have shown the association of differential DNA methylation patterns with various cancers as well as with other chronic diseases \citep{1, 5, 6, 7, 8}.
The detection of differentially methylated regions (DMRs) has become fairly important in the last decade for locating the genes in such DMRs. One approach is to identify single sites that show differential methylation. Although methods for site-wise detection are more developed, analyses based on genomic regions may lead to substantially improved results \citep{7}. Examining regions known to impact transcriptional regulation, such as promoters and enhancers, would further improve the likelihood of regions to be identified as being differentially methylated which mediate a biological pathway or function \citep{7}.
The primary objective of several microarray based studies is to understand how the structural pattern of association \citep{60} among certain variables changes over the genomic locations.
The literature on DMR identification methods includes \cite{22, 23, 24, 25, 26, 27, 28}, among others.
With the advancement in microarray technology, methylation rates can be observed in much higher resolutions, e.g., over 450K CpG locations.
Although highly popular in methylation studies, the packages summarized in \cite{37} come with several disadvantages. Most of them use traditional multivariate techniques to detect DMRs that are incapable of handling the high dimensionality, high measurement error, missing values and high degree of correlation in methylation rates among neighboring CpG sites \citep{eckhardt11}, which are inherent features of methylation data.
Hence, the traditional multivariate methods may result in spurious statistical analysis and low powered statistical tests by disregarding smooth functional behavior of the underlying data generating process.
The functional data analysis (FDA) proposed in \cite{10}, views these measurements as realizations of continuous smooth dynamic function evolving over space or time. In particular, the FDA fits these multivariate methylation data to smooth curves represented as linear combinations of suitably chosen basis functions \citep{11}. Such smooth curve enables imputation of missing values, helps in removal of the high noise, models data with irregular time sampling schedules, and accounts for any significant correlation among the observations \citep{Ryu16}. The Bayesian FDA fits a flexible Bayesian nonparametric regression model to the sequence of methylation rates and conducts statistical inference based on the fitted curves within a Bayesian framework.
To identify DMRs addressing the issues of large dimensionality, missingness and correlation of methylation rates from neighboring CpG sites, we consider genomic windows containing CpG sites by segmenting whole genome according to a certain base pairs (bp) distance between neighboring CpG sites. By examining the differential methylation profiles among two groups found in these genomic windows we identify DMRs.
It should be noted that the methylation rates from a window may associate with those from neighboring windows since the methylation rates on individual CpG sites can be dependent on each other, as \cite{eckhardt11} pointed out.
There are several methods proposed to model the dependency among the parameters in a dynamic process by transition models \citep{60} that include the auto-correlation, cross-lagged structural models \citep{40}, the auto-regressive moving averages, the cross spectral or coherence analysis \citep{54}, the dynamic factor analysis \citep{56} and the structural equation models.
We consider the transition models for parameters that control the functional pattern of methylation rates at each window, and utilize the dynamically weighted particle filter \citep{Liang2002,Ryu2013} for efficient computation.
Based on populations of parameters from the particle filter, we examine the Bayes factor to determine if the window is differentially methylated.
We performed simulation studies to benchmark the performance of our method against the popular existing bump-hunting method \citep{jaffe}. As an application of the proposed model, we also perform a DMR identification analysis over the whole genome on TCGA lung adenocarcinoma data. The results from both the simulation studies and the real data analysis show that our proposed approach is effective in finding the true DMRs while effectively controlling the number of false positives.
The rest of the article is structured as follows. Section 2 introduces the proposed novel methods to model methylation values. Section 3 shows a comparison of the simulated data for the proposed methods with bumphunter. Section 4 shows the data analysis and important findings upon the application of the proposed dependent method to lung Adenocarcinoma data. Finally, a discussion is provided in Section 5.
\section{Bayesian Functional Data Analysis} \label{sec:model}
In this section, we propose a Bayesian functional data analysis (BFDA) within and between windows and an identification of differentially methylated region (DMR) by utilizing Bayes factor at each window.
\subsection{BFDA within a window} \label{sec:withinBFDA}
We configure the windows by splitting a long sequence of data collected from each individual.
Regarding the DNA methylation data measured on CpG sites over whole human genome, the genomic window of CpG sites can be determined by the number of CpG sites in the window, the gap of genomic coordinates of neighboring CpG sites or other reasonable rules.
We consider the methylation data measured at a window with $n$ CpG sites from $m_k$ individuals in the group $k$, $k=1,\ldots,G$.
Let $Y_{ijk}$ denote the log-transformed methylation rate, referred to as the M-value in \cite{25}, at the CpG site $i$ from the individual $j$ in the group $k$ as \begin{eqnarray}\label{transform} Y_{ijk} =\log\left(\frac{\beta_{ijk}+c}{1-\beta_{ijk}+c}\right),~~~i=1,\ldots,n,~j=1,\ldots,m_k,~k=1,\ldots,G, \end{eqnarray} where $\beta_{ijk}$ is the methylation rate and $c$ is an offset value.
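For illustration, with an arbitrarily chosen methylation rate $\beta_{ijk}=0.8$ and offset $c=0.01$, the transformation in \eqref{transform} gives
\begin{equation*}
Y_{ijk}=\log\left(\frac{0.8+0.01}{1-0.8+0.01}\right)=\log\left(\frac{0.81}{0.21}\right)\approx 1.35,
\end{equation*}
so methylation rates near $1$ map to large positive M-values, rates near $0$ map to large negative M-values, and the offset $c$ keeps the ratio finite at the boundaries.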
We utilize a Bayesian nonparametric regression in the BFDA for the sequence of measured methylation rates over CpG sites, $Y_{1jk},\ldots,Y_{njk}$, from the individual $j$ in group $k$. The typical features of methylation rates include high variability, nonperiodic behavior, correlation and complex patterns over CpG sites.
Without assumption of specific functional form, denoting the mean functional value of methylation rate as $g_k(x_i)$ at the CpG site $i$ over the individuals in group $k$, we can model the transformed methylation rates by the following regression model: \begin{equation}
\label{Equation11}
Y_{ijk}=g_k(x_i)+\delta_{ik} + \epsilon_{ijk},~
i=1,\ldots,n,~
j=1,\ldots,m_k,~
~k=1,\ldots,G, \end{equation} where $\delta_{ik}$ is the random component induced by the group $k$, or the discrepancy of $g_k(\cdot)$, and $\epsilon_{ijk}$ is the random error from the individual $j$ in the group $k$, and $\delta_{ik}$ and $\epsilon_{ijk}$ are mutually independent with zero means and constant variances, $\sigma_k^2$ and $\sigma_{jk}^2$, respectively.
In this paper, to investigate the functional pattern of methylation rates over CpG sites, we use the order of CpG site in the window, that is $x_i=i$, as the predictor in the model instead of its genomic coordinate.
Using the conventional natural cubic smoothing splines, the mean function can be described by $g_k(x)=\sum_{t=1}^Ta_{kt}B_t(x-\mu_t)$, for the natural cubic basis functions $B_t(\cdot)$ and unique knot points $\mu_t$, $t=1,\ldots,T$, where $a_{kt}$ are the coefficients of basis functions.
The smoothing splines allow only one response at a unique design point. Using the group mean as the response, $\overline Y_{ik}=\frac1{m_k}\sum_{j=1}^{m_k}Y_{ijk}$, for $i=1,\ldots,n$ and $k=1,\ldots,G$, the fitted curve of the smoothing splines $g_k(\cdot)$ can be found by minimizing the following penalized residual sum of square \begin{equation}
\sum_{i=1}^n \left\{\overline Y_{ik}-g_k(x_{i})\right\}^2 + \alpha_k\int_{u\in \mathcal{R}} g_k^{\prime\prime}(u)^2 du, \end{equation} where $\alpha_k$ is a given positive smoothing penalty, $g_k^{\prime\prime}(\cdot)$ is the second order derivative of $g_k(\cdot)$ and $\mathcal{R}$ is the range of design points. Denoting the vector of functional value of the smoothing splines as $\boldsymbol g_k=[g_k(x_1),\ldots,g_k(x_n)]^T$, the penalty term can be expressed by $\int_{u\in \mathcal{R}}g_{k}^{\prime\prime}(u)^2du = \boldsymbol g_k^T\boldsymbol K\boldsymbol g_k$, where $\boldsymbol K$ is an $n\times n$ dimensional matrix with rank $n-2$. Refer to \cite{17} for details of construction of $\boldsymbol K$.
Here, it should be noted that all individuals share the same design points that are the CpG sites and hence the group mean functions share the same $\boldsymbol K$.
As mentioned in \cite{20, Ryu11,42}, for a Bayesian approach, we take all design points as knot points and assign a singular normal prior on $\boldsymbol g_k$ that has the probability density function proportional to $\left(\frac{\alpha_k}{\sigma_k^2}\right)^{(n-2)/2}\!\!\exp\left\{-\frac{\alpha_k}{\sigma_k^2} {\boldsymbol g_k}^{T}{\boldsymbol{K}}{\boldsymbol g_k}\right\}$, where $\alpha_k$ is a smoothing penalty and $\sigma_k^2$ is the variance of the discrepancy of the mean function. Without loss of generality, we use $\tau_k=\frac{\alpha_k}{\sigma_k^2}$ and assign a conjugate Gamma prior, $\tau_k\sim G(A_t,B_t)$. We assign conjugate inverse Gamma priors on $\sigma_k^2$ and $\sigma_{jk}^2$, respectively, $\sigma_k^2\sim IG(A_s,B_s)$ and $\sigma_{jk}^2\sim IG(A_{s}^*,B_{s}^*)$.
Denoting $\boldsymbol y_{jk}=(y_{1jk},\ldots,y_{njk})^T$ and $\overline{\boldsymbol y}_k=(\overline Y_{1k},\ldots,\overline Y_{nk})^T$, the full conditional distributions of the parameters are given by \begin{eqnarray*}
\boldsymbol g_k\vert\cdot
&\sim&
N\left[(\boldsymbol I+\alpha_k\boldsymbol K)^{-1}\overline{\boldsymbol y}_k,\,(\boldsymbol I+\alpha_k\boldsymbol K)^{-1}\sigma_k^2\right],~~~k=1,\ldots,G\\
\tau_k\vert\cdot
&\sim&
G\left[\frac{n-2}2+A_t,\,\frac12\boldsymbol g_k^T\boldsymbol K\boldsymbol g_k\right],\\
\sigma_k^2\vert\cdot
&\sim&
IG\left[\frac n2+A_s,\,\frac12(\overline{\boldsymbol y}_k-\boldsymbol g_k)^T(\overline{\boldsymbol y}_k-\boldsymbol g_k)+B_s\right],\\
\sigma_{jk}^2\vert\cdot
&\sim&
IG\left[\frac n2+A_s^*,\,\frac12(\boldsymbol y_{jk}-\overline{\boldsymbol y}_k)^T(\boldsymbol y_{jk}-\overline{\boldsymbol y}_k)+B_s^*\right],~~~j=1,\ldots,m_k, \end{eqnarray*} where $\boldsymbol I$ is the $n\times n$ identity matrix. Using the Gibbs sampler with $N$ iterations after a burn-in of $B$ iterations, we generate MCMC samples of $\boldsymbol g_k$ and other nuisance parameters.
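As a computational illustration (not the exact implementation used in this paper), the full conditionals above can be sampled directly. The sketch below uses a second-order difference penalty matrix as a simplified stand-in for the natural-spline matrix $\boldsymbol K$ of \cite{17}, assumes the Gamma/inverse-Gamma priors are parameterized by rate, and performs one Gibbs update of $\boldsymbol g_k$, $\tau_k$ and $\sigma_k^2$:

```python
import numpy as np

rng = np.random.default_rng(0)

def second_diff_penalty(n):
    """Simplified stand-in for the smoothing-spline penalty matrix K:
    K = D'D with D the (n-2) x n second-difference operator (rank n-2)."""
    D = np.zeros((n - 2, n))
    for i in range(n - 2):
        D[i, i:i + 3] = [1.0, -2.0, 1.0]
    return D.T @ D

def gibbs_update(ybar, tau, sigma2, K, rng, A_t=1.0, A_s=1.0, B_s=1.0):
    """One sweep of the full conditionals for (g, tau, sigma2)."""
    n = len(ybar)
    alpha = tau * sigma2                      # smoothing penalty: alpha = tau * sigma^2
    P = np.linalg.inv(np.eye(n) + alpha * K)  # (I + alpha K)^{-1}
    P = (P + P.T) / 2.0                       # symmetrize against round-off
    g = rng.multivariate_normal(P @ ybar, P * sigma2)
    tau = rng.gamma((n - 2) / 2 + A_t, 1.0 / (0.5 * g @ K @ g))
    resid = ybar - g
    sigma2 = 1.0 / rng.gamma(n / 2 + A_s, 1.0 / (0.5 * resid @ resid + B_s))
    return g, tau, sigma2

n = 50
K = second_diff_penalty(n)
ybar = np.sin(np.linspace(0, 3, n)) + 0.1 * rng.standard_normal(n)
g, tau, sigma2 = gibbs_update(ybar, tau=1.0, sigma2=0.1, K=K, rng=rng)
```

Iterating `gibbs_update` after a burn-in yields posterior draws of the fitted curve and its smoothness parameters for one group in one window.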
\subsection{BFDA between windows over whole genome} \label{sec:betweenBFDA}
When the windows of CpG sites are independent we may apply the BFDA discussed in the previous subsection to each window. However, the windows can be associated with each other, especially when they are adjacent. In this subsection, we model the dependent structure of consecutive windows by using a parameter transition model and propose to utilize the dynamically weighted particle filter (DWPF) for efficient computation as in \cite{Ryu2013}.
Regarding two adjacent windows, the correlation between them may not be apparent because there are several ways to pair a CpG site from one window with another CpG site from the other window and furthermore two windows may not have the same number of CpG sites. Instead, for the dependent windows, we consider the association of the curves that fit the responses at each window. Specifically, because the smoothing penalty and the variance of the discrepancy characterize the fitted curve in the window, we let those parameters take into account the dependent windows. Suppose there are $T$ windows and let $\tau_{t,k}$ denote the smoothing penalty and $\sigma_{t,k}^2$ denote the variance of the discrepancy, for the window $t$, $t=1,\ldots,T$, and the group $k$, $k=1,\ldots,G$. Then, we may consider the following linear transition models between window $t-1$ and window $t$:
\begin{equation}\label{lintrans}
\begin{split}
\log(1/\tau_{t,k}) &= \log(1/\tau_{t-1,k}) + U_{t,k},\\
\log(\sigma_{t,k}^2) &= \log(\sigma_{t-1,k}^2) + V_{t,k},
\end{split} \end{equation} where $U_{t,k}$ and $V_{t,k}$ are Gaussian random errors with zero mean and constant variance, respectively.
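The transition models in \eqref{lintrans} are Gaussian random walks on the log scale. As a small illustrative sketch (the step standard deviations below are chosen arbitrarily, not estimated), simulating them forward shows how the smoothness and discrepancy-variance parameters drift between neighboring windows while remaining positive after back-transformation:

```python
import numpy as np

def simulate_transitions(T, tau0=1.0, sigma2_0=0.1, sd_u=0.2, sd_v=0.2, rng=None):
    """Simulate log(1/tau_t) and log(sigma2_t) as Gaussian random walks."""
    rng = rng or np.random.default_rng(42)
    log_inv_tau = np.empty(T)
    log_sigma2 = np.empty(T)
    log_inv_tau[0] = np.log(1.0 / tau0)
    log_sigma2[0] = np.log(sigma2_0)
    for t in range(1, T):
        log_inv_tau[t] = log_inv_tau[t - 1] + rng.normal(0.0, sd_u)  # U_{t,k}
        log_sigma2[t] = log_sigma2[t - 1] + rng.normal(0.0, sd_v)    # V_{t,k}
    tau = 1.0 / np.exp(log_inv_tau)   # back-transform: always positive
    sigma2 = np.exp(log_sigma2)
    return tau, sigma2

tau, sigma2 = simulate_transitions(T=10)
```

The log-scale parameterization guarantees positivity of $\tau_{t,k}$ and $\sigma_{t,k}^2$ at every window, which is why the random walk is placed on $\log(1/\tau_{t,k})$ and $\log(\sigma_{t,k}^2)$ rather than on the parameters themselves.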
For whole genome, regarding $T$ windows, let $\boldsymbol y_{t,k}$ denote the mean methylation rates and $\boldsymbol g_{t,k}$ denote the vector of functional values of methylation curve at all CpG sites in window $t$, $t=1,\ldots,T$, and group $k$, $k=1,\ldots,G$. Utilizing the dynamically weighted particle filter (DWPF) for efficient computing as \cite{Ryu2013} did, we apply DWPF for parameters $\tau_{t,k}$ and $\sigma_{t,k}^2$. Let $\boldsymbol\lambda_t=(\tau_{t,1},\ldots,\tau_{t,G},\sigma_{t,1}^2,\ldots,\sigma_{t,G}^2)$ denote the vector of those parameters and $\boldsymbol y_t=(\boldsymbol y_{t,1},\ldots,\boldsymbol y_{t,G})$ and $\boldsymbol g_t=(\boldsymbol g_{t,1},\ldots,\boldsymbol g_{t,G})$ denote the vector of methylation rates and the vector of functional values over all groups, respectively. Regarding the population of $N_t$, for $i=1,\ldots,N_t$, let $\boldsymbol\lambda_t^{(i)}$ denote the particle $i$ of $\boldsymbol\lambda_t$ and $w_t^{(i)}$ denote the weight of the particle $i$, and $\boldsymbol\Lambda_t=(\boldsymbol\lambda_t^{(1)},\ldots,\boldsymbol\lambda_t^{(N_t)})$ denote the population of all particles with weights $\boldsymbol W_{\!\!t} = (w_t^{(1)},\ldots,w_t^{(N_t)})$.
In DWPF, we use the dynamically weighted importance sampling (DWIS) algorithm with $W$-type move in the dynamic weighting step and the adaptive pruned-enriched population control scheme in the population control step. Further, we denote the lower and upper population size control bounds as $N_\mathrm{low}$ and $N_\mathrm{up}$, respectively, and the lower and upper limiting population sizes as $N_\mathrm{min}$ and $N_\mathrm{max}$, respectively. We also denote the lower and upper weight control bounds for all windows as $W_\mathrm{low}$ and $W_\mathrm{up}$, respectively.
Denoting $\boldsymbol y_{1:t}=(\boldsymbol y_1,\ldots,\boldsymbol y_t)$ and $\boldsymbol\lambda_{1:t}=(\boldsymbol\lambda_1,\ldots,\boldsymbol\lambda_t)$ and suppressing the design points that are the order of CpG sites within a window, we use the following algorithm to collect the particles of $\boldsymbol\lambda_t$ with corresponding weights $w_t$ for window $t$, $t=1,\ldots,T$: \begin{itemize}
\item []\underline{window 1}
\begin{itemize}\setlength{\itemindent}{-1em}
\item[]Sample:
Collect $N_1$ MCMC samples of $\boldsymbol\lambda_1$ from the posterior distribution by applying the BFDA discussed in the previous subsection after $B$ burn-in iterations, and set the MCMC samples at each iteration as $\widehat{\boldsymbol\lambda}_1^{(i)}$ with weight $\widehat w_1^{(i)}=1$, $i=1,\ldots,N_1=20000$. It establishes $\widehat{\boldsymbol\Lambda}_1$ and $\widehat{\boldsymbol W}_{\!\!1}$.
\item[]DWIS: Generate $(\boldsymbol\Lambda_1,\boldsymbol W_{\!\!1})$ from $(\widehat{\boldsymbol\Lambda}_1,\widehat{\boldsymbol W}_{\!\!1})$
using DWIS, with the marginal posterior distribution $p(\boldsymbol\lambda_1\vert\boldsymbol y_1)$ as the target distribution.
\end{itemize}
\item []\underline{window 2}
\begin{itemize}\setlength{\itemindent}{-1em}
\item[] Extrapolation: Generate $\widehat{\boldsymbol\lambda}_2^{(i)}$ from $\boldsymbol\lambda_1^{(i)}$, with the
extrapolation operator $q(\boldsymbol\lambda_2\vert\boldsymbol\lambda_1^{(i)}\!\!,\,\boldsymbol y_{1:2})$, and set
$
\widehat w_2^{(i)} = w_1^{(i)}
\frac{p(\boldsymbol\lambda_1^{(i)}\!\!,\,\widehat{\boldsymbol\lambda}_2^{(i)}\vert\boldsymbol y_{1:2})}
{p(\boldsymbol\lambda_1^{(i)}|\boldsymbol y_1)
q(\widehat{\boldsymbol\lambda}_2^{(i)}|\boldsymbol\lambda_1^{(i)}\!\!,\,\boldsymbol y_{1:2})}
$
for each $i=1,2,\ldots, N_1$, to establish $(\widehat{\boldsymbol\Lambda}_2,\widehat{\boldsymbol W}_{\!\!2})$.
\item[] DWIS: Generate $(\boldsymbol\Lambda_2,\boldsymbol W_{\!\!2})$ from $(\widehat{\boldsymbol\Lambda}_2,\widehat{\boldsymbol W}_{\!\!2})$
using DWIS, with the target $p(\boldsymbol\lambda_{1:2} | \boldsymbol y_{1:2})$.
\end{itemize}
\item[]~~~~~$\vdots$
\item[]\underline{window $T$}
\begin{itemize}\setlength{\itemindent}{-1em}
\item[] Extrapolation: Generate $\widehat{\boldsymbol\lambda}_t^{(i)}$ from $\boldsymbol\lambda_{t-1}^{(i)}$,
with the extrapolation operator $q(\boldsymbol\lambda_t\vert\boldsymbol\lambda_{1:t-1}^{(i)},\boldsymbol y_{1:t})$
and set
$
\widehat w_{t}^{(i)} = w_{t-1}^{(i)}
\frac{p(\boldsymbol\lambda_{1:t-1}^{(i)},\widehat{\boldsymbol\lambda}_t^{(i)} | \boldsymbol y_{1:t})}
{p(\boldsymbol\lambda_{1:t-1}^{(i)}\vert\boldsymbol y_{1:t-1})
q(\widehat{\boldsymbol\lambda}_t^{(i)}\vert\boldsymbol\lambda_{1:t-1}^{(i)},\boldsymbol y_{1:t})}
$
for each $i=1,2,\ldots, N_{t-1}$, to establish $(\widehat{\boldsymbol\Lambda}_{t},\widehat{\boldsymbol W}_{\!\!t})$.
\item[] DWIS: Generate $({\boldsymbol\Lambda}_t,{\boldsymbol W}_{\!\!t})$ from $(\widehat{\boldsymbol\Lambda}_t,\widehat{\boldsymbol W}_{\!\!t})$
using DWIS, with the target $p(\boldsymbol\lambda_{1:t} \vert \boldsymbol y_{1:t})$.
\end{itemize} \end{itemize} At each window, the functional values of methylation rates $\boldsymbol g_t$ can be generated from its full conditional distribution $p(\boldsymbol g_t\vert\cdot)$, $t=1,\ldots,T$. See \cite{Ryu2013} for details of DWPF.
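The population-control step keeps particle weights within $[W_\mathrm{low}, W_\mathrm{up}]$. As a rough sketch of one common prune/enrich variant (the adaptive scheme of \cite{Liang2002} additionally enforces the population-size bounds $N_\mathrm{min}$ to $N_\mathrm{max}$, which are omitted here; the function below is an illustrative assumption, not the exact scheme), overweight particles are split into two half-weight copies, and underweight particles survive with probability $w/W_\mathrm{low}$ with their weight reset to $W_\mathrm{low}$, preserving the expected weight:

```python
import numpy as np

def prune_enrich(particles, weights, w_low, w_up, rng=None):
    """One prune/enrich weight-control step (simplified).
    Splitting preserves total weight; pruning preserves it in expectation."""
    rng = rng or np.random.default_rng(7)
    new_p, new_w = [], []
    for p, w in zip(particles, weights):
        if w > w_up:                       # enrich: split into two copies
            new_p += [p, p]
            new_w += [w / 2.0, w / 2.0]
        elif w < w_low:                    # prune probabilistically
            if rng.uniform() < w / w_low:
                new_p.append(p)
                new_w.append(w_low)
        else:                              # weight already inside the band
            new_p.append(p)
            new_w.append(w)
    return new_p, new_w

# particles carrying weights far outside the [w_low, w_up] band
p, w = prune_enrich([0.1, 0.2, 0.3], [200.0, 1.0, 1e-4],
                    w_low=np.exp(-5), w_up=np.exp(5))
```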
\subsection{Identification of differentially methylated regions using Bayes factor}
We examine the differential methylation by groups at each window. In model \eqref{Equation11}, when different group mean functions are desirable to model the methylation rates, we may assess the window to be differentially methylated. Otherwise, the window is not differentially methylated and one group mean function will be enough to model the methylation rates in the window. We consider the following two models $M_1$ and $M_2$: \begin{equation*}
\begin{aligned}
M_1: &~\mbox{the window has one group mean function, $G=1$ in model \eqref{Equation11},} \\
M_2: &~\mbox{the window has more than one (say, $K$) group mean functions, $G=K$ in model \eqref{Equation11}.}
\end{aligned} \end{equation*}
If $M_1$ is preferred to model the methylation rates in the window, we may conclude the window is not a differentially methylated region (DMR); whereas, if $M_2$ is preferred, we can conclude the window is a DMR.
To choose a good model for the methylation rates in the window, we utilize the posterior Bayes factor that provides more consistent results than the Bayes factor does, as \cite{Aitkin91} mentioned. Let $\boldsymbol\Theta_l$ denote all unknown quantities in model $M_l$ and $\boldsymbol y$ denote the log-transformed methylation rates in the window with the likelihood $p(\boldsymbol y\vert\boldsymbol\Theta_l)$, $l=1,2$, then the posterior Bayes factor to compare $M_1$ and $M_2$ can be calculated by the ratio of the marginal likelihoods as follows \begin{eqnarray*}
BF(M_1, M_2)
=
\frac{\int_{\boldsymbol\Theta_1} p(\boldsymbol y\vert\boldsymbol\Theta_1)p(\boldsymbol\Theta_1\vert\boldsymbol y)d\boldsymbol\Theta_1}
{\int_{\boldsymbol\Theta_2} p(\boldsymbol y\vert\boldsymbol\Theta_2)p(\boldsymbol\Theta_2\vert\boldsymbol y)d\boldsymbol\Theta_2}, \end{eqnarray*}
where $p(\boldsymbol\Theta_l\vert\boldsymbol y)$, $l=1,2$, indicates the posterior density. Under model \eqref{Equation11}, $p(\boldsymbol y\vert\boldsymbol\Theta_l)$ is given by a product of Gaussian densities.
Utilizing the particles and weights for $p(\boldsymbol\Theta_l\vert\boldsymbol y)$ discussed in the previous subsection, we obtain the marginal likelihoods by taking the weighted average of the likelihoods and calculate the Bayes factor for each window. To avoid computational difficulties we use the log-scaled Bayes factor. If the log Bayes factor is less than a threshold value, we prefer $M_2$ over $M_1$ and identify the window as a DMR.
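As an illustration of this computation (a minimal sketch under assumed names and particle formats, not the authors' implementation), the marginal likelihood of each model can be approximated by the weighted average of the particle likelihoods, evaluated stably on the log scale:

```python
import numpy as np

def log_marginal_likelihood(log_liks, log_weights):
    """Approximate the log posterior marginal likelihood
    sum_n w_n * p(y | Theta_n) from particle log-likelihoods and
    (unnormalised) log-weights, using log-sum-exp for stability."""
    log_w = log_weights - np.logaddexp.reduce(log_weights)  # normalise weights
    return np.logaddexp.reduce(log_w + log_liks)

def log_bayes_factor(log_liks1, log_w1, log_liks2, log_w2):
    """log BF(M1, M2); values below the chosen threshold favour M2,
    i.e. the window is called a DMR."""
    return (log_marginal_likelihood(log_liks1, log_w1)
            - log_marginal_likelihood(log_liks2, log_w2))

# Toy call with equally weighted particles (threshold of -5 as in Section 3).
lbf = log_bayes_factor(np.array([-10.0, -11.0]), np.zeros(2),
                       np.array([-2.0, -3.0]), np.zeros(2))
is_dmr = lbf < -5
```

In practice the two log-likelihood arrays would be evaluated on the particle sets of $M_1$ and $M_2$ for the same window, and the threshold is the cut-off used in the simulation studies.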
\subsection{Parameter values for simulation and real data analysis} In this paper, for our simulation studies as well as the real data analysis, we consider two groups to identify differential methylation, a cancer (case) group and a normal (control) group, i.e., we have $G=2$. In Equation (1) we set the offset $c=0.01$. We set the hyper-parameters for the Gamma and inverse Gamma priors as $A_t=1$, $B_t=1000$ and $A_s=B_s=A_{s}^*=B_{s}^*=1$. We use a Gibbs sampler with $N=20,000$ iterations, of which the first $B=1000$ iterations are discarded as burn-in. In the DWPF, we set the lower and upper population size control bounds as $N_\mathrm{low}=15000$ and $N_\mathrm{up}=25000$, respectively, and the lower and upper limiting population sizes as $N_\mathrm{min}=10000$ and $N_\mathrm{max}=30000$, respectively. We also set the lower and upper weight control bounds for all windows as $W_\mathrm{low}=e^{-5}$ and $W_\mathrm{up}=e^{5}$, respectively.
\section{Simulation Studies} \label{sec:simul}
We examine the performance of the proposed functional data analysis for the identification of differentially methylated regions (DMRs) through simulation studies. We consider 25 subjects from the control group and 50 subjects from the case group and simulate methylation rates over 10 dependent windows. At each window the sequence of methylation rates for each subject is generated by a random function plus autocorrelated random errors at 100 equally spaced CpG sites. The random function for each subject is obtained by adding random fluctuations to its group mean function.
For the subject at window $t$ from group $k$, we consider the following sinusoidal group mean functions, $g_{t,k}(x)$, $t=1,\ldots,10$;~$k=1,2$, \begin{equation*} \begin{array}{cclccl}
g_{1,1}(x) &\!\!=\!\!& 1/[1\!+\!\exp\{-\sin(2\pi x)\}],&
g_{1,2}(x) &\!\!=\!\!& 1/[1\!+\!\exp\{-\sin(2\pi x)\}],\\
g_{2,1}(x) &\!\!=\!\!& 1/[1\!+\!\exp\{-\sin(\pi x)\}],&
g_{2,2}(x) &\!\!=\!\!& 1/[1\!+\!\exp\{-\sin(2\pi x)\}],\\
g_{3,1}(x) &\!\!=\!\!& 1/[1\!+\!\exp\{-\sin(\pi+\pi x)\}],&
g_{3,2}(x) &\!\!=\!\!& 1/[1\!+\!\exp\{\sin(\pi+\pi x)+1\}],\\
g_{4,1}(x) &\!\!=\!\!& 1/[1\!+\!\exp\{-\sin(\frac{\pi x}{2})\}],&
g_{4,2}(x) &\!\!=\!\!& 1/[1\!+\!\exp\{\sin(\frac{\pi x}{2})-1\}],\\
g_{5,1}(x) &\!\!=\!\!& 1/[1\!+\!\exp\{-\sin(\frac{\pi+\pi x}{2})\}],&
g_{5,2}(x) &\!\!=\!\!& 1/[1\!+\!\exp\{\frac{\sin(\frac{\pi}{2}+2\pi x)-1}{2}\}],\\
g_{6,1}(x) &\!\!=\!\!& 1/[1\!+\!\exp\{-\sin(\pi+\frac{\pi x}{2})\}],&
g_{6,2}(x) &\!\!=\!\!& 1/[1\!+\!\exp\{-\sin(\pi+\pi x)\}],\\
g_{7,1}(x) &\!\!=\!\!& 1/[1\!+\!\exp\{-\sin(\frac{3\pi+\pi x}{2})\}],&
g_{7,2}(x) &\!\!=\!\!& 1/[1\!+\!\exp\{\sin(\frac{3\pi+\pi x}{2})+1\}],\\
g_{8,1}(x) &\!\!=\!\!& 0.5,&
g_{8,2}(x) &\!\!=\!\!& 0.5,\\
g_{9,1}(x) &\!\!=\!\!& 0.75,&
g_{9,2}(x) &\!\!=\!\!& 0.25,\\
g_{10,1}(x) &\!\!=\!\!& 0.25,&
g_{10,2}(x) &\!\!=\!\!& 0.4, \end{array} \end{equation*} where $x=x_1,\ldots,x_{100}$ and $x_i$, $i=1,\ldots,100$, are equally spaced sites from $x_1=0$ to $x_{100}=1$.
To generate individual random functions, we start from the logit-transformed group mean functions, add an independent normal random error from $N(0,0.2^2)$ at each site, apply wavelet denoising with Daubechies 10 and parameter $\alpha=3$, and transform the denoised functions back to the original scale. The decomposition levels in the denoising are 5, 4, 3, 2, 3, 4, 5, 4, 3 and 2 from window 1 to window 10, where a larger level yields a smoother curve. By adding random errors from an AR(1) process to the individual functions, with variances $0.4^2$, $0.6^2$, $0.8^2$, $1$, $0.8^2$, $0.6^2$, $0.4^2$, $0.6^2$, $0.8^2$ and $1$ for windows 1 through 10, respectively, we simulate the sequence of methylation rates at the sites over the 10 windows for each subject. It should be noted that neighboring windows are dependent on each other through the systematic choice of decomposition levels and error variances as well as through the autocorrelated errors. Figure 1 demonstrates an example of random functions and simulated data for the 10 windows.
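To make the generating scheme concrete, the following sketch simulates one subject's sequence for a single window (hypothetical names; the wavelet-denoising smoothing step is omitted for brevity, while the logit/logistic maps and AR(1) errors follow the description above):

```python
import numpy as np

def simulate_subject(n_sites=100, freq=2 * np.pi, rho=0.5,
                     sigma_ar=0.4, sigma_ind=0.2, rng=None):
    """One subject's methylation rates at equally spaced sites:
    sinusoidal group mean on the logit scale, independent N(0, sigma_ind^2)
    site-level fluctuation (the wavelet-denoising step of the paper is
    omitted here), and AR(1) errors with autocorrelation rho."""
    rng = np.random.default_rng() if rng is None else rng
    x = np.linspace(0.0, 1.0, n_sites)
    logit_mean = np.sin(freq * x)          # e.g. g(x) = 1/[1+exp{-sin(2*pi*x)}]
    logit_subj = logit_mean + rng.normal(0.0, sigma_ind, n_sites)
    err = np.zeros(n_sites)                # AR(1): e_t = rho * e_{t-1} + innovation
    innov = rng.normal(0.0, sigma_ar, n_sites)
    for t in range(1, n_sites):
        err[t] = rho * err[t - 1] + innov[t]
    return x, 1.0 / (1.0 + np.exp(-(logit_subj + err)))  # back to (0, 1)
```

Repeating this for 25 control and 50 case subjects, with the window-specific mean functions and error variances listed above, reproduces the structure of the simulated data up to the omitted denoising step.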
We generated 100 sets of simulated methylation rates and compare the performance of three methods in identifying DMRs from the generated data. First, assuming independent windows, we apply the proposed method, denoted the Independent method. Next, allowing dependency among neighboring windows, we examine the proposed method denoted the Dependent method. Finally, as a conventional counterpart, we use the bump-hunting approach \citep{jaffe}, denoted Bumphunter.
\begin{figure}
\caption{From left to right and top to bottom, the panels represent windows 1 through 10. Blue and red indicate the control group and the case group, respectively. Dotted lines describe the group mean functions; circles and crosses indicate the data generated from the random functions.}
\label{figure1}
\end{figure}
\par In the Independent method and the Dependent method, we use a suitable threshold $c$ to identify DMRs using the log Bayes factor $\log BF(M_1,M_2)$, where $M_1$ is the model with one group mean function and $M_2$ is the model with two group mean functions.
The plots in Figure 2 show the simulation results for the two proposed methods. Figure 2 shows the distribution of the Bayes factors over the 100 simulated data sets for each window under the proposed approaches for the four data sets with respective autocorrelations 0, 0.3, 0.5 and 0.7 among the windows. In each plot the blue horizontal line represents our Bayes factor cut-off value $c$. Any window with $\log BF(M_1,M_2)$ below this cut-off is detected as a DMR. Further, in the Web Appendix, Supplementary Figure 1 shows the distribution of the number of bumps found in each window over the 100 simulations. There, the blue horizontal line represents the average number of bumps, such that any window with at least that many bumps is considered a DMR.
It is clear from Figure 2 that the Independent method performs best when the CpG sites and the consecutive windows are uncorrelated, and its performance deteriorates as the correlation increases. Table 1 reports the misclassification rates for the identification of DMRs over the 100 simulations for the three methods. As we can see, the Independent method performs fairly well in detecting DMRs for windows 1, 2, 7, 9 and 10, but it frequently misclassifies windows 3, 4, 5, 6 and 8, and the misclassification rate increases with the correlation among the CpG sites.
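A minimal sketch of how misclassification rates like those in Table 1 can be tabulated (array shapes and names are illustrative assumptions, not the authors' code):

```python
import numpy as np

def misclassification_rates(log_bfs, truth, threshold=-5.0):
    """log_bfs: (n_sims, n_windows) array of log Bayes factors;
    truth: length n_windows boolean array, True where the window is a DMR.
    A window is called a DMR in a simulation when its log BF falls below
    the threshold; the rate per window is the fraction of simulations
    where the call disagrees with the truth."""
    calls = log_bfs < threshold               # (n_sims, n_windows) DMR calls
    return np.mean(calls != truth[None, :], axis=0)
```

For the Bumphunter comparison the same routine applies with the number of bumps and its cut-off in place of the log Bayes factor.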
\begin{table} \centering \renewcommand{\arraystretch}{2} \caption{Misclassification rates of the independent, dependent and bumphunter methods over 100 simulations for each of the 4 data sets, which were created with varying correlations R=0, 0.3, 0.5, 0.7 among the CpG sites. The computation time depicts the average time taken to complete the analysis of each window.} \label{tab:1} \resizebox{\textwidth}{!}{ \begin{tabular}{ccccccccccccc}
\toprule
\multirow{2}{*}{\textbf{Method}} & \multirow{2}{*}{\textbf{Correlation}} & \multicolumn{10}{c}{\textbf{Window}} & \multirow{2}{*}{\textbf{Time (minutes)}}\\
\cline{3-12}
& & \textbf{1} & \textbf{2} & \textbf{3} & \textbf{4} & \textbf{5} & \textbf{6} & \textbf{7} & \textbf{8} & \textbf{9} & \textbf{10} & \\
\midrule
\multirow{4}{*}{independent} & \textbf{R=0} & 0\% & 0\% & 0\% & 3\% & 20\% & 15\% & 0\% & 2\% & 0\% & 4\% & 112.45 \\
& \textbf{R=0.3} & 0\% & 0\% & 1\% & 6\% & 22\% & 22\% & 0\% & 25\% & 0\% & 5\% & 115.12 \\
& \textbf{R=0.5} & 0\% & 0\% & 9\% & 14\% & 32\% & 24\% & 0\% & 39\% & 0\% & 21\% & 113.58 \\
& \textbf{R=0.7} & 0\% & 8\% & 23\% & 38\% & 41\% & 28\% & 6\% & 56\% & 0\% & 23\% & 111.33 \\
\midrule
\multirow{4}{*}{dependent} & \textbf{R=0} & 0\% & 0\% & 0\% & 1\% & 9\% & 7\% & 0\% & 1\% & 0\% & 2\% & 22.49 \\
& \textbf{R=0.3} & 0\% & 0\% & 0\% & 3\% & 10\% & 10\% & 0\% & 11\% & 0\% & 2\% & 23.024 \\
& \textbf{R=0.5} & 0\% & 0\% & 4\% & 6\% & 14\% & 10\% & 0\% & 17\% & 0\% & 9\% & 22.716 \\
& \textbf{R=0.7} & 0\% & 3\% & 10\% & 17\% & 18\% & 12\% & 3\% & 24\% & 0\% & 10\% & 22.266 \\
\midrule
\multirow{4}{*}{bumphunter} & \textbf{R=0} & 0\% & 81\% & 0\% & 0\% & 28\% & 24\% & 52\% & 0\% & 100\% & 0\% & 7.49 \\
& \textbf{R=0.3} & 0\% & 89\% & 1\% & 0\% & 33\% & 36\% & 61\% & 0\% & 100\% & 0\% & 7.12 \\
& \textbf{R=0.5} & 0\% & 84\% & 1\% & 0\% & 26\% & 36\% & 68\% & 0\% & 100\% & 0\% & 6.99 \\
& \textbf{R=0.7} & 0\% & 90\% & 16\% & 5\% & 28\% & 41\% & 78\% & 0\% & 100\% & 2\% & 8.25 \\
\bottomrule
\end{tabular}} \end{table}
The misclassifications in windows 4, 5 and 6 are of particular concern, since inferring a true DMR as a non-DMR leads to missing important genes in the neighborhood of the methylated regions; capturing a DMR indicates a strong association between the nearby genes and the disease.
The Dependent method, on the other hand, not only outperforms the Independent method in detecting true DMRs in the presence of significant correlation among CpG sites and consecutive windows, but is also computationally at least 5 times more efficient.
This further confirms that the proposed Dependent method not only provides accurate model fittings by flexible non-parametric smooth functions accounting for the highly auto-correlated CpG sites, but also acknowledges the associations among the CpG sites from the neighboring windows through the linear transition models in a very efficient and robust way.
We also observe that the Bumphunter method performs much worse than the other two methods, because it is overly sensitive to sudden small variations and insufficiently sensitive in the presence of large variation and sparsity in the measurements. This property of the Bumphunter method shows up particularly in DMR detection for windows 2, 7 and 9, which have large variability in the mean methylation between the normal and cancer groups. The Bumphunter method misclassifies these windows as non-DMRs almost always, in over 90\% of the cases. Also, since window 8 has very low variability between the two group means, the Bumphunter method has 100\% accuracy in detecting this window as a non-DMR.
Summarizing the simulation results, the performance of both the Independent method and the Bumphunter method drops as the correlation increases significantly among the CpG sites, whereas, the Dependent approach clearly sets a higher benchmark in consistently detecting DMRs even in the presence of strong correlation among CpG sites and the neighboring windows.
\begin{figure}
\caption{Performance of the independent and dependent methods. The plots show the distribution of the logarithm of the Bayes factors for windows 1 through 10 over 100 simulations in the 4 data sets, from top left to bottom right, with respective correlations $\rho$=0, 0.3, 0.5, 0.7. The horizontal blue line indicates the threshold value set at -5; any window below this threshold is detected as a DMR.}
\label{figure2}
\end{figure}
Moreover, the proposed method utilizes the functional data analysis that provides a flexible way of dimension reduction and results in more powerful detection of DMRs with the Bayes factor than the conventional multivariate approaches.
Although the Dependent method takes 3 times longer than the Bumphunter method in computation time, it is 5 times faster than the Independent method, which further demonstrates its advantage in scalability, an issue often encountered in Bayesian modeling.
\section{Identification of Differentially Methylated Regions for Lung Adenocarcinoma}
We illustrate our proposed approach to DMR detection using the 450K methylation data on Lung Adenocarcinoma obtained from The Cancer Genome Atlas (TCGA) portal. These data contained more than 485,512 CpG loci, encompassing 1.5\% of the total genomic CpG sites \citep{sandoval9}. There were 254 samples from cancer patients and 32 samples from normal patients, with methylation rates, denoted $\beta$-values, calculated from the intensities of the methylated and unmethylated alleles. In the following we present the detailed analysis of DMR detection across the whole genome using our proposed approaches.
\subsection{DNA Methylation Data Analysis}
The following real data analysis is conducted on the whole genome. Following our notation, we have $G=2$ groups, namely normal and cancer subjects. First, we split the genomic regions in every chromosome into windows of 100 CpG sites. We then performed Bayesian NCS fitting on the observed methylation data of the 254 cancer and 32 normal patients and obtained the underlying smooth functions. In the Web Appendix, Supplementary Figure 2 depicts the functional data visualization of the observed multivariate measurements genome-wide, across all 22 autosomes together with the combined sex chromosomes, denoted as chromosome 23 hereafter. In each plot the green and red curves represent the smoothed mean functions for the normal and cancer patients, respectively, while the blue curves denote the smoothed mean functions of all 286 patients.
We started with window 1 by performing the smoothing spline estimation and generated 20,000 MCMC particles of $\sigma_{1,k}^2$ and $\tau_{1,k}$, $k=1,2$. For the following windows, we applied our dependent method using the transition model \eqref{lintrans}, which projects the particles from window $t$ to window $t+1$, for $t = 1, \dots, T-1$. As mentioned in Section \ref{sec:model}, in order to account for the additional variation for individual $j$ in group $k$, we add $\sigma_{jk}^2$, $j=1,\dots,m_k$, to the projected values of $\widehat{\sigma}_{t+1,k}^2$ across all samples for each CpG site. We obtained Bayes factors for inference in every window as described in Section 2.3. Using a suitable threshold, we determine whether the data provide sufficient evidence of the presence of two distinct groups and hence whether a window is detected as a DMR.
Figure 3 shows the DMRs detected in each window in all 23 chromosomes by our dependent FDA approach. We compare our findings with those of the bumphunter approach, which detects a DMR based on the number of bumps in each window. The red regions in the plots indicate the regions detected as DMRs, while the blue regions depict non-DMR regions, with darker color shades indicating extreme Bayes factor values or extreme numbers of bumps in a region. As can be clearly seen from the figures, the Bumphunter method detects several DMRs in almost all the chromosomes, while our proposed dependent method, on the other hand, identifies comparatively few regions as DMRs. This further validates the overestimation problem of the bumphunter method and shows that the performance of our approach is quite robust. In particular, the number of DMRs detected by our approach varied from as low as 5 in chromosome 21 to as high as 81 in chromosome 1, with the average being around 28.
We further zoomed in on the regions in each chromosome with the highest concentration of DMRs and annotated the genes present in those regions. We obtained the genes, along with their gene symbols, in those detected genomic locations using the information available on the \href{https://genome.ucsc.edu/cgi-bin/hgTables}{UCSC} genome browser.
\begin{figure}
\caption{Comparison between the dependent and Bumphunter methods in detecting DMRs for the LUAD data across chromosomes 1--23. The red regions represent DMRs while blue regions are non-DMRs.}
\label{figure3}
\end{figure}
In particular, a few genes among those annotated are of interest, as they have been heavily cited in the genomic literature and are known to be associated with lung cancer under differential methylation of their promoter regions. These include the Epidermal Growth Factor Receptor, which encodes a protein acting as a receptor for the trans-membrane glycoprotein family \citep{81}; the Kirsten RAS oncogene, which encodes a protein of the small GTPase superfamily \citep{83}; Serine/threonine kinase 11, which encodes a protein regulating cell polarity and is itself a tumor suppressor gene \citep{85}; NeuroFibromin 1, a tumor suppressor gene whose product negatively regulates the ras signal transduction pathway \citep{84}; and Low-density lipo-protein Receptor-related protein 1B, which encodes a protein acting as a receptor of the low-density lipo-protein receptor family \citep{79}. Hypermethylation of these genes is known to be associated with lung cancer, which further validates the relevance of our approach for genome-wide identification of differentially methylated regions.
\section{Discussion}
This research is motivated by a growing body of literature focusing on epigenetic features that may be associated with disparities in Non-Small Cell Lung Cancer progression and survival outcomes. Hypermethylation of CpG island sequences located in the promoter regions of genes is increasingly being used to study the impact of epigenetic modifications. In this article we proposed a Bayesian functional data analysis model to identify, select, and jointly model differential methylation features from methylation array data. To date, applications of FDA in the genomic or public health domain remain very scarce, and there are still many areas in genomics that can make use of such a powerful and robust method. The proposed functional modeling approach for detecting DMRs is parsimonious enough to address the large dimensionality of whole-genome sequencing and incorporates potential correlation among neighboring regions. We proposed a dynamically weighted particle filter with Bayesian non-parametric smoothing splines for modeling individual functional patterns, followed by identification of differentially methylated regions.
We used simulation studies to compare the performance of our method with the popular existing bumphunter method. First we proposed our independent approach to DMR detection, which fits a Bayesian NCS in individual windows without taking into account the correlation among CpG sites from neighboring windows. As our simulation results indicated, although this approach outperformed the existing bumphunter method, its performance deteriorated with increasing correlation among CpG sites from neighboring windows. Moreover, this approach was also challenged by high computational time. To remedy these two immediate problems associated with the independent approach, we next proposed our dependent approach. In this approach we used transition models to account for the dependency that inherently exists between two genomic regions. We used an efficient sequential Monte Carlo method, the dynamically weighted particle filter, to obtain parameter estimates for subsequent regions without fitting the non-parametric regression function in every window. This maneuver not only made the approach computationally very efficient but also yielded very robust performance in detecting DMRs, as our simulation results indicated. This further marks a milestone for the use of functional data modeling on genomics data.
We applied our dependent approach to identify genome-wide differential methylation in the Lung Adenocarcinoma cancer patient data from The Cancer Genome Atlas (TCGA) Program portal. We identified several DMRs along the whole genome and successfully annotated several genes in those regions that have been reported in the literature to be associated with lung adenocarcinoma and other cancers under hyper-methylation of their promoter regions. These biological findings can further be translated into clinical research, and thus we see great promise for functional data analysis in genomics data applications.
\backmatter
\section*{Supporting Information} Web Appendix A, referenced in Section, is available with this paper at the Biometrics website on Wiley Online Library.\vspace*{-8pt}
\label{lastpage}
\end{document} | arXiv |
\begin{document}
\title{Stratifications of parameter spaces for complexes by cohomology types}
\begin{abstract} We study a collection of stability conditions (in the sense of Schmitt) for complexes of sheaves over a smooth complex projective variety indexed by a positive rational parameter. We show that the Harder--Narasimhan filtration of a complex for small values of this parameter encodes the Harder--Narasimhan filtrations of the cohomology sheaves of this complex. Finally we relate a stratification into locally closed subschemes of a parameter space for complexes associated to these stability parameters with the stratification by Harder--Narasimhan types. \end{abstract}
\section{Introduction}
Let $X$ be a smooth complex projective variety and $\cO_X(1)$ be an ample invertible sheaf on $X$. We consider the moduli of (isomorphism classes of) complexes of sheaves on $X$, or equivalently moduli of $Q$-sheaves over $X$ where $Q$ is the quiver \[ \bullet \ra \bullet \ra \cdots \cdots \ra \bullet \ra \bullet \] with relations imposed to ensure the boundary maps square to zero. Moduli of quiver sheaves have been studied in \cite{ac,acgp,gothen_king,schmitt05}. There is a construction of moduli spaces of S-equivalence classes of \lq semistable' complexes due to Schmitt \cite{schmitt05} as a geometric invariant theory quotient of a reductive group $G$ acting on a parameter space $\mathfrak{T}$ for complexes with fixed invariants. The notion of semistability is determined by a choice of stability parameters and the motivation comes from physics; it is closely related to a notion of semistability coming from a Hitchin--Kobayashi correspondence for quiver bundles due to \'{A}lvarez-C\'onsul and Garc\'ia-Prada \cite{acgp}. The stability parameters are also used to determine a linearisation of the action. The notion of S-equivalence is weaker than isomorphism and arises from the GIT construction of these moduli spaces which results in some orbits being collapsed.
As the notion of stability depends on a choice of parameters, we can ask if certain parameters reveal information about the cohomology sheaves of a complex. We show that there is a collection of stability parameters which can be used to study the cohomology sheaves of a complex. Analogously to the case of sheaves, every unstable complex has a unique maximally destabilising filtration known as its Harder--Narasimhan filtration. In this paper we give a collection of stability parameters indexed by a rational parameter $\epsilon >0$ and show for all sufficiently small values of $\epsilon$ the Harder--Narasimhan filtration of a given complex with respect to these parameters encodes the Harder--Narasimhan filtrations of the cohomology sheaves in this complex. We then go on to study a stratification of the parameter space $\mathfrak{T}$ associated to these stability parameters.
Given an action of a reductive group $G$ on a projective scheme $B$ with respect to an ample linearisation $\cL$, there is an associated stratification $\{ S_\beta : \beta \in \cB \}$ of $B$ into $G$-invariant locally closed subschemes for which the open stratum is the geometric invariant theory (GIT) semistable set $B^{ss}$ \cite{hesselink,kempf_ness,kirwan}. When $B$ is a smooth variety, this stratification comes from a Morse type stratification associated to the norm square of the moment map for this action. This stratification has a completely algebraic description which can be extended to the above situation of a linearised action on a projective scheme (cf. \cite{hoskinskirwan} $\S$4). We give a brief summary of the algebraic description given in \cite{kirwan} as this will be used later on.
If we choose a compact maximal torus $T$ of $G$ and positive Weyl chamber $\mathfrak{t}_+$ in the Lie algebra $\mathfrak{t}$ of $T$, then the index set $\cB$ can be identified with a finite set of rational weights in $\mathfrak{t}_+$ as follows. By fixing an invariant inner product on the Lie algebra $\mathfrak{K}$ of the maximal compact subgroup $K$ of $G$, we can identify characters and cocharacters as well as weights and coweights. There are a finite number of weights for the action of $T$ on $B$ and the index set $\mathcal{B}$ can be identified with the set of rational weights in $\mathfrak{t}_+$ which are the closest points to $0$ of the convex hull of a subset of these weights.
We say a map $\lambda : \CC^* \ra G$ (which is not necessarily a group homomorphism) is a rational one-parameter subgroup if $\lambda( \CC^*)$ is a subgroup of $G$ and there is an integer $N$ such that $\lambda^N$ is a one-parameter subgroup (1-PS) of $G$. Associated to $\beta$ there are a parabolic subgroup $P_\beta \subset G$, a rational 1-PS $\lambda_\beta : \CC^* \ra T_{\CC}$ and a rational character $\chi_\beta : T_{\CC} \ra \CC^*$ which extends to a character of $P_\beta$.
Let $Z_\beta$ be the components of the fixed point locus of $\lambda_\beta$ acting on $B$ on which $\lambda_\beta$ acts with weight $|| \beta ||^2$ and $Z_\beta^{ss}$ be the GIT semistable subscheme for the action of the reductive part $\mathrm{Stab} \: \beta$ of $P_\beta$ on $Z_\beta$ with respect to the linearisation $\mathcal{L}^{\chi_{-\beta}}$ (which is the original linearisation $\cL$ twisted by the character $\chi_{-\beta} : \mathrm{Stab} \: \beta \ra \CC^*$). Then $Y_\beta$ (resp. $Y_\beta^{ss}$) is defined to be the subscheme of $B$ consisting of points whose limit under the action of $\lambda_\beta(t)$ as $t \to 0$ lies in $Z_\beta$ (resp. $Z_\beta^{ss}$). There is a retraction $p_\beta : Y_\beta \ra Z_\beta$ given by taking a point to its limit under $\lambda_\beta$.
By \cite{kirwan}, for $\beta \neq 0 $ we have \[ S_\beta = G Y_\beta^{ss} \cong G \times^{P_\beta} Y_\beta^{ss}. \] The definition of $S_\beta$ makes sense for any rational weight $\beta$, although $S_\beta$ is nonempty if and only if $\beta$ is an index.
This stratification has a description in terms of Kempf's notion of adapted 1-PSs due to Hesselink \cite{hesselink}. Recall that the Hilbert--Mumford criterion states a point $b \in B$ is semistable if and only if it is semistable for every 1-PS $\lambda$ of $G$; that is, $\mu^{\cL}(b, \lambda) \geq 0$ where $\mu^{\cL}(b,\lambda) $ is equal to minus the weight of the $\CC^*$-action induced by $\lambda$ on the fibre of $\cL$ over $\lim_{t \to 0} \lambda(t) \cdot b$. In \cite{kempf} Kempf defines a non-divisible 1-PS to be adapted to an unstable point $b \in B - B^{ss}$ if it minimises the normalised Hilbert--Mumford function:
\[ \frac{\mu^{\cL}(b, \lambda)}{|| \lambda ||} = \min_{\lambda'} \frac{\mu^{\cL}(b, \lambda')}{|| \lambda'||}. \] Hesselink used these adapted 1-PSs to stratify the unstable locus, and this stratification agrees with the stratification described above. In fact if $\beta$ is a nonzero index, then the associated 1-PS $\lambda_\beta$ is adapted to every point in $Y_\beta^{ss}$.
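As a standard illustration of these notions (not taken from the source; see e.g.\ Mumford's GIT for the sign conventions), consider a 1-PS acting diagonally on projective space:

```latex
% Diagonal C^*-action on P^n, linearised on O(1).
Let $\lambda(t) \cdot [x_0 : \cdots : x_n] = [t^{r_0} x_0 : \cdots : t^{r_n} x_n]$.
The limit $\lim_{t \to 0} \lambda(t) \cdot b$ is supported on the nonzero
coordinates of $b$ of minimal weight, and
\[ \mu^{\cO(1)}(b, \lambda) = \max \{ -r_i : x_i \neq 0 \}
   = - \min \{ r_i : x_i \neq 0 \} . \]
Running over all 1-PSs of the maximal torus and its conjugates, one recovers
the classical statement that $b$ is semistable if and only if $0$ lies in the
convex hull of the weights occurring in $b$.
```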
In this paper we study the stratification obtained in this way from a suitable action of a group $G$ on a parameter space for complexes using the above collection of stability parameters (for very small $\epsilon$) which are related to cohomology. We show that for a given Harder--Narasimhan type $\tau$, the setup of the parameter scheme can be chosen so that all complexes with Harder--Narasimhan type $\tau$ are parametrised by a locally closed subscheme $R_\tau$ of the parameter space $\mathfrak{T}$. Moreover, $R_\tau$ is a union of connected components of a stratum $S_{\beta(\tau)}$ in the associated stratification. The scheme $R_\tau$ has the nice property that it parametrises complexes whose cohomology sheaves are of a fixed Harder--Narasimhan type.
The layout of this paper is as follows. In $\S$\ref{schmitt construction} we give a summary of the construction of Schmitt of moduli spaces of complexes and study the action of 1-PSs of $G$. In $\S$\ref{sec on stab} we give the collection of stability conditions indexed by $\epsilon >0$ and show that the Harder--Narasimhan filtration of a complex (for very small $\epsilon$) encodes the Harder--Narasimhan filtration of the cohomology sheaves. Then in $\S$\ref{sec on strat} we study the associated GIT stratification of the parameter space for complexes and relate this to the stratification by Harder--Narasimhan types. Finally, in $\S$\ref{sec on quot} we consider the problem of taking a quotient of the $G$-action on a Harder--Narasimhan stratum $R_\tau$.
\subsection*{Notation and conventions} Throughout we let $X$ be a smooth complex projective variety and $\cO_X(1)$ be an ample invertible sheaf on $X$. All Hilbert polynomials of sheaves over $X$ will be calculated with respect to $\cO_X(1)$. We use the term complex to mean a bounded cochain complex of torsion free sheaves. We say a complex $\mathcal{E}^\cdot$ is concentrated in $[m_1,m_2]$ if $\mathcal{E}^i = 0$ for $i < m_1 $ and $i>m_2$.
\section{Schmitt's construction}\label{schmitt construction}
In this section we give a summary of the construction due to Schmitt \cite{schmitt05} of the moduli space of S-equivalence classes of semistable complexes over $X$. We also make some important calculations about the weights of $\CC^*$-actions (some details of which can also be found in \cite{schmitt05} Section 2.1).
If we have an isomorphism between two complexes $\mathcal{E}^\cdot$ and $\cxF$, then for each $i$ we have an isomorphism between the sheaves $\mathcal{E}^i$ and $\cF^i$ and thus an equality of Hilbert polynomials $P(\mathcal{E}^i) = P(\cF^i)$. Therefore we can fix a collection of Hilbert polynomials $P = (P^i)_{i \in \ZZ}$ such that $P^i=0$ for all but finitely many $i$ and study complexes with these invariants. In fact we can assume $P$ is concentrated in $[m_1,m_2]$ and write $P = (P^{m_1},\d,P^{m_2})$.
\subsection{Semistability}\label{stab defn}
The moduli spaces of complexes only parametrise a certain collection of complexes with invariants $P$, and this collection is determined by a notion of (semi)stability. Schmitt introduces a notion of (semi)stability for complexes which depends on a collection of stability parameters $(\us, \uc)$ where $\uc := \delta \ue$ and \begin{itemize} \item $\us=(\sigma_i \in \ZZ_{>0})_{i \in\ZZ}$, \item $\ue=(\eta_i \in \mathbb{Q})_{i \in\ZZ}$, \item $\delta$ is a positive rational polynomial such that $\deg \delta =\max(\dim X-1,0)$. \end{itemize}
\begin{defn} The reduced Hilbert polynomial of a complex $\cxF$ with respect to the parameters $(\us, \uc)$, where $\uc = (\chi_i)_{i \in \ZZ}$ with $\chi_i = \delta \eta_i$, is defined as \[P_{\us, \uc}^{\mathrm{red}}(\cxF):= \frac{\sum_{i \in \ZZ} \left( \sigma_i P(\cF^i) - \chi_i \rk{\cF^i} \right)}{\sum_{i \in \ZZ} \sigma_i \rk \cF^i} \] where $P(\cF^i)$ and $ \rk{\cF^i}$ are the Hilbert polynomial and rank of the sheaf $\cF^i$.
We say a nonzero complex $\cxF$ is $(\us, \uc)$-semistable if for any nonzero proper subcomplex $\mathcal{E}^\cdot \subset \cxF$ we have an inequality of polynomials \[P_{\us,\uc}^{\mathrm{red}}(\mathcal{E}^\cdot) \leq P_{\us, \uc}^{\mathrm{red}}(\cxF). \] By an inequality of polynomials $R \leq Q$ we mean $R(x) \leq Q(x)$ for all $x \gg 0$. We say the complex is $(\us, \uc)$-stable if this inequality is strict for all such subcomplexes. \end{defn}
\begin{rmk}\label{normalise} Observe that for any rational number $C$, if we let $\ue' = \ue - C\us$ and $\uc' = \delta \ue'$, then the notions of $(\us, \uc)$-semistability and $(\us, \uc')$-semistability are equivalent. For invariants $P=(P^{m_1},\cdots,P^{m_2})$ and any stability parameters $(\us,\uc)$, we can let \[C = \frac{\sum_{i=m_1}^{m_2} \eta_i r^i}{\sum_{i=m_1}^{m_2} \sigma_i r^i}\] where $r^i$ is the rank determined by the leading coefficient of $P^i$ and consider the associated stability parameters $(\us,\uc')$ for $P$ which satisfy $\sum_{i=m_1}^{m_2} \eta_i' r^i = 0$. As we have fixed $P$ in this section, we may assume we have stability parameters which satisfy $\sum_{i=m_1}^{m_2} \eta_i r^i = 0$. \end{rmk}
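The equivalence in this remark follows from a direct computation: for every nonzero complex $\mathcal{E}^\cdot$ we have \[ P^{\mathrm{red}}_{\us,\uc'}(\mathcal{E}^\cdot) = \frac{\sum_i \sigma_i P(\mathcal{E}^i) - \delta(\eta_i - C\sigma_i)\rk \mathcal{E}^i}{\sum_i \sigma_i \rk \mathcal{E}^i} = P^{\mathrm{red}}_{\us,\uc}(\mathcal{E}^\cdot) + C\delta , \] so passing from $(\us,\uc)$ to $(\us,\uc')$ shifts every reduced Hilbert polynomial by the same polynomial $C\delta$ and all the inequalities defining (semi)stability are preserved.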
\subsection{The parameter space}\label{param sch const}
The set of sheaves occurring in a $(\us,\uc)$-semistable complex $\mathcal{E}^\cdot$ with invariants $P$ is bounded by the usual arguments (see \cite{simpson}, Theorem 1.1) and so we may choose $n \gg 0$ so that all these sheaves are $n$-regular. Fix complex vector spaces $V^i$ of dimension $P^i(n)$ and let $Q^i$ be the open subscheme of the quot scheme $\mathrm{Quot}(V^i \otimes \cO_X(-n), P^i)$ consisting of torsion free quotient sheaves $q^i: V^i \otimes \cO_X(-n) \rightarrow \mathcal{E}^i$ such that $H^0(q^i(n))$ is an isomorphism. The parameter scheme $\mathfrak{T}$ for $(\us,\uc)$-semistable complexes with invariants $P$ is constructed as a locally closed subscheme of a projective bundle $\fD$ over the product $Q:=Q^{m_1} \times \cdots \times Q^{m_2}$.
Given a $(\us,\uc)$-semistable complex $\mathcal{E}^\cdot$ with Hilbert polynomials $P$ we can use the evaluation maps \[ H^0(\mathcal{E}^i(n)) \otimes \cO_X(-n) \ra \mathcal{E}^i \] along with a choice of isomorphism $V^i \cong H^0(\mathcal{E}^i(n))$ to parametrise the $i$th sheaf $\mathcal{E}^i$ by a point $q^i : V^i \otimes \cO_X(-n) \ra \mathcal{E}^i $ in $Q^i$. From the boundary morphisms $d^i : \mathcal{E}^i \ra \mathcal{E}^{i+1}$ we can construct a homomorphism \[ \psi:=H^0(d(n)) \circ (\oplus_i H^0(q^i(n))) : \oplus_i V^i \ra \oplus_i H^0(\mathcal{E}^i(n)) \] where $d : \oplus_i \mathcal{E}^i \rightarrow \oplus_i\mathcal{E}^i$ is the morphism determined by the boundary maps $d^i$. Such homomorphisms $\psi$ correspond to points in the fibres of the sheaf \[ \cR:=(\oplus_i V^i)^\vee \otimes p_* \left(\mathcal{U} \otimes (\pi_X^{Q \times X})^* \cO_X(n)\right) \] over $Q$ where $p : Q \times X \ra Q$ is the projection and $\oplus_iV^i \otimes (\pi_X^{Q \times X})^*\cO_X(-n) \ra \mathcal{U}$ is the quotient sheaf over $Q \times X$ given by taking the direct sum of the pullbacks of the universal quotients $V^i \otimes (\pi_X^{Q^i \times X})^* \cO_X(-n) \rightarrow \mathcal{U}^i$ on $Q^i \times X$ to $Q \times X$.
Note that $\cR$ is locally free for $n$ sufficiently large and so we can consider the projective bundle $\fD :=\PP(\cR \oplus \cO_Q)$ over $Q$. A point of $\fD$ over $q=(q^i:V^i \otimes \cO_X(-n) \ra \mathcal{E}^i)_i \in Q$ is given by a pair $(\psi: \oplus_i V^i \ra \oplus_i H^0(\mathcal{E}^i(n)),\zeta \in \CC)$ defined up to scalar multiplication. The parameter scheme $\mathfrak{T}$ consists of points $(q,[\psi : \zeta])$ in $\fD$ such that: \begin{enumerate} \renewcommand{\labelenumi}{\roman{enumi})} \item $\psi =H^0(d(n)) \circ (\oplus_i H^0(q^i(n)))$ where $d : \oplus_i \mathcal{E}^i \ra \oplus_i\mathcal{E}^i$ is given by morphisms $d^i : \mathcal{E}^i \ra \mathcal{E}^{i+1}$ which satisfy $d^i \circ d^{i-1} = 0$, \item $ \zeta\neq 0$. \end{enumerate} The conditions given in i) are all closed (they are cut out by the vanishing locus of homomorphisms of locally free sheaves) and condition ii) is open; therefore $\mathfrak{T}$ is a locally closed subscheme of $\fD$. We let $\fD'$ denote the closed subscheme of $\fD$ given by points which satisfy condition i). We will write points of $\mathfrak{T}$ as $(q,d)$ where $q=(q^i:V^i \otimes \cO_X(-n) \ra \mathcal{E}^i)_i \in Q$ and $d$ is given by $d^{i} : \mathcal{E}^i \rightarrow \mathcal{E}^{i+1}$ which satisfy $d^{i} \circ d^{i-1}=0$.
\begin{rmk} The construction of the parameter scheme $\mathfrak{T}$ depends on the choice of $n$ and the Hilbert polynomials $P$. We write $\mathfrak{T}_{P}$ or $\mathfrak{T}(n)$ if we wish to emphasise its dependence on $P$ or $n$. \end{rmk}
\subsection{The group action}\label{gp act}
For $m_1 \leq i \leq m_2$ we have fixed vector spaces $V^i$ of dimension ${P^i(n)}$. The reductive group $\Pi_i \mathrm{GL}(V^i)$ acts on both $Q$ and $\fD$: if $g = (g_{m_1}, \dots, g_{m_2}) \in \Pi_i \mathrm{GL}(V^i)$ and $z = ((q^i : V^i \otimes \cO_X(-n) \ra \mathcal{E}^i)_i,[\psi : \zeta]) \in \fD$, then \[ g \cdot z = ((g_i \cdot q^i : V^i \otimes \cO_X(-n) \ra \mathcal{E}^i)_i, [g \cdot \psi : \zeta]) \] where \[\xymatrix@1{ g_i \cdot q^i : & V^i \otimes \cO_X(-n) \ar[r]^{g_i^{-1 }\cdot} & V^i \otimes \cO_X(-n) \ar[r]^>>>>>{q^i} & \mathcal{E}^i }\]
and \[\xymatrix@1{ g \cdot \psi : & \oplus_i V^i \ar[r]^{g^{-1 }\cdot} & \oplus_i V^i \ar[r]^>>>>>{\psi} & \oplus_i H^0(\mathcal{E}^i(n))}.\]
If instead we consider $\tilde{\psi}:= (\oplus_i H^0(q^i(n)))^{-1} \circ \psi : \oplus_iV^i \ra \oplus_iV^i$ then this action corresponds to conjugating $\tilde{\psi}$ by $g$; that is, \[ g \circ \tilde{\psi} \circ g^{-1} = \widetilde{g \cdot \psi} .\]
This action preserves the parameter scheme $\mathfrak{T}$ and the orbits correspond to isomorphism classes of complexes. As the subgroup $\CC^* ( I_{V^{m_1}}, \dots ,I_{V^{m_2}})$ acts trivially on $\fD$, we are really interested in the action of $(\Pi_i \mathrm{GL}(V^i))/ \CC^* $. Given integers $\us =(\sigma_{m_1} , \dots , \sigma_{m_2})$ we can define a character \[ \begin{array}{cccc} \det_{\us} : &\Pi_i \mathrm{GL}(V^i) & \ra & \CC^* \\ &(g_i) & \mapsto & \Pi_i \det g_i^{\sigma_i} \end{array}\] and instead consider the action of the group $G=G_{\us}:= \ker \det_{\us}$ which maps with finite kernel onto $(\Pi_i \mathrm{GL}(V^i))/ \CC^* $.
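To see that the kernel is finite, note that a tuple of scalars $(sI_{V^{m_1}}, \dots ,sI_{V^{m_2}})$ lies in $G_{\us}$ if and only if \[ s^{\sum_{i=m_1}^{m_2} \sigma_i \dim V^i} = 1 , \] so the kernel of the map $G_{\us} \ra (\Pi_i \mathrm{GL}(V^i))/\CC^*$ is the finite group of roots of unity $\mu_N$ where $N = \sum_{i=m_1}^{m_2} \sigma_i P^i(n)$.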
\subsection{The linearisation}\label{linearisation schmitt}
Schmitt uses the stability parameters $(\us,\uc):=(\us,\delta\ue)$ to determine a linearisation of the $G$-action on the parameter space $\mathfrak{T}$ in three steps. The first step is given by using the parameters $\us$ to construct a proper injective morphism from $\fD$ to another projective bundle $\fB_{\us}$ over $Q$. The parameters $\us$ are used to associate to each point $z = (q,[\psi : \zeta]) \in \fD$ a nonzero decoration \[ \varphi_{\us}(z) : ( V_{\us}^{\otimes r_{\us}})^{\oplus 2} \otimes \cO_X(-r_{\us}n) \rightarrow \det \mathcal{E}_{\us} \] (defined up to scalar multiplication) where $r_{\us} = \sum_i \sigma_i r^i$ and $V_{\us}:= \oplus_i (V^{i})^{\oplus \sigma_i}$ and $\mathcal{E}_{\us}:= \oplus_i (\mathcal{E}^{i})^{\oplus \sigma_i}$. The fibre of $\fB_{\us}$ over $q \in Q$ parametrises such homomorphisms $\varphi_{\us}$ up to scalar multiplication and the morphism $\fD \ra \fB_{\us}$ is given by sending $z = (q,[\psi : \zeta]) \in \fD$ to $(q, [\varphi_{\us}(z)]) \in \fB_{\us}$. The group $G \cong\SL(V_{\us}) \cap \Pi_i \mathrm{GL}(V^i)$ acts on $\fB_{\us}$ by acting on $Q$ and $V_{\us}$ and $\fD \ra \fB_{\us}$ is equivariant with respect to this action.
The second step is given by constructing a projective embedding $\fB_{\us} \ra B_{\us}$. This embedding is essentially given by taking the projective embedding of each $Q^i$ used by Gieseker \cite{gieseker_sheaves}. Recall that Gieseker gave an embedding of $Q^i$ into a projective bundle $B_i$ over the components $R_i$ of the Picard scheme of $X$ which contain the determinant of a sheaf $\mathcal{E}^i$ parametrised by $Q^i$. This embedding is given by sending a quotient sheaf $q^i : V^i \otimes \cO_X(-n) \ra \mathcal{E}^i$ to a homomorphism $\wedge^{r^i} V^i \ra H^0(\det \mathcal{E}^i(r^in))$ which represents a point in a projective bundle $B_i$ over $R_i$. The group $\SL(V^i)$ acts naturally on $B_i$ by acting on the vector space $\wedge^{r^i} V^i$ and the morphism $Q^i \ra B_i$ is equivariant with respect to this action. In a similar way Schmitt also constructs an equivariant morphism $\fB_{\us} \ra B'_{\us}$ where $B_{\us}'$ is a projective bundle over the product $\Pi_i R_i$. Let $B_{\us} = B_{m_1} \times \cdots \times B_{m_2} \times B'_{\us} $; then the map $ \fB_{\us} \ra B_{\us}$ is an equivariant, injective and proper morphism (cf. \cite{schmitt05}, Section 2.1).
The final step is given by choosing a linearisation on $B_{\us}$ and pulling this back to the parameter scheme $\mathfrak{T}$ via \[ \mathfrak{T} \hookrightarrow \fD \hookrightarrow \fB_{\us} \hookrightarrow B_{\us} = B_{m_1} \times \cdots \times B_{m_2} \times B'_{\us} .\] The schemes $B_i$ and $B'_{\us}$ have natural ample linearisations given by $\cL_i:=\cO_{B_i}(1)$ and $\cL':=\cO_{B'_{\us}}(1)$. The linearisation on $B_{\us}$ is given by taking a weighted tensor product of these linearisations and twisting by a character $\rho$ of $G=G_{\us}$. The character $\rho : G \ra \CC^*$ is the character determined by the rational numbers \[ c_i := \left[ \sigma_i \left( \frac{P_{\us}(n)}{r_{\us} \delta (n)} - 1\right) \left( \frac{r_{\us}}{P_{\us}(n)} - \frac{r^i}{P^i(n)}\right) - \frac{ r^i \eta_i}{P^i(n)} \right] \] where $P_{\us}:= \sum_i \sigma_i P^i$; that is, if these are integral we define \[ \rho (g_{m_1},\cdots,g_{m_2})= \Pi_{i=m_1}^{m_2} \det g_i^{c_i} \] and if not we can scale everything by a positive integer so that they become integral. We assume $n$ is sufficiently large so that $a_i = \sigma_i ({P_{\us}(n)} - r_{\us} \delta (n))/ r_{\us} \delta (n) + \eta_i $ is positive; these positive rational numbers $\underline{a}=(a_{m_1}, \dots , a_{m_2}, 1)$ are used to define a very ample linearisation \[ \cL_{\underline{a}}:=\bigotimes_i \cL_i^{\otimes a_i} \otimes \cL' \] on $B_{\us}$ (where again if the $a_i$ are not integral we scale everything so that this is the case). The linearisation $\cL=\cL(\us,\uc)$ on $\mathfrak{T}$ is equal to the pullback of the very ample linearisation $\cL_{\underline{a}}^{\rho}$ on $B_{\us}$ where $\cL_{\underline{a}}^{\rho}$ denotes the linearisation obtained by twisting $\cL_{\underline{a}}$ by the character $\rho$. Of course this can also be viewed as a linearisation on the schemes $\fD'$ and $\fD$ too.
\subsection{Jordan-H\"older filtrations and S-equivalence}\label{sequiv sect}
The moduli space of ($\underline{\sigma}, \underline{\chi}$)-semistable complexes with invariants $P$ is constructed as an open subscheme of the projective GIT quotient \[ \fD' /\!/_{\cL} G \] given by the locus where $\zeta \neq 0$ (by definition $\mathfrak{T}$ is the open subscheme of $\fD'$ given by this condition). Recall that the GIT quotient is topologically the semistable set modulo S-equivalence, where two orbits are S-equivalent if their orbit closures meet in the semistable locus. This notion can be expressed in terms of Jordan-H\"older filtrations as follows:
\begin{defn}\label{sequiv} A Jordan--H\"{o}lder filtration of a $(\us,\uc)$-semistable complex $\mathcal{E}^\cdot$ is a filtration by subcomplexes \[ 0_\cdot= \mathcal{E}^\cdot_{[0]} \subsetneqq \mathcal{E}^\cdot_{[1]} \subsetneqq \cdots \subsetneqq \mathcal{E}^\cdot_{[k]} = \mathcal{E}^\cdot \] such that the successive quotients $\mathcal{E}^\cdot_{[i]}/ \mathcal{E}^\cdot_{[i-1]}$ are $(\us,\uc)$-stable and \[ P^\mathrm{red}_{\us,\uc}(\mathcal{E}^\cdot_{[i]}/ \mathcal{E}^\cdot_{[i-1]}) =P^\mathrm{red}_{\us,\uc}(\mathcal{E}^\cdot) .\] This filtration is in general not canonical but the associated graded object \[ \mathrm{gr}_{ (\underline{\sigma}, \underline{\chi})}(\mathcal{E}^\cdot) := \bigoplus_{j=1}^k \mathcal{E}^\cdot_{[j]}/ \mathcal{E}^\cdot_{[j-1]}\] is canonically associated to $\mathcal{E}^\cdot$ up to isomorphism. We say two ($\underline{\sigma}, \underline{\chi}$)-semistable complexes are {S-equivalent} if their associated graded objects with respect to ($\underline{\sigma}, \underline{\chi}$) are isomorphic. \end{defn}
Jordan--H\"{o}lder filtrations of ($\underline{\sigma}, \underline{\chi}$)-semistable complexes exist in exactly the same way as they do for semistable sheaves (for example, see \cite{gieseker_sheaves}).
\subsection{The moduli space} We are now able to state the result of \cite{schmitt05} that is most important for us: the existence of moduli spaces of $(\us,\uc)$-semistable complexes. Recall that there is a parameter scheme $\mathfrak{T}=\mathfrak{T}_{P}$ (which is the open subscheme of $\fD'$ cut out by the condition $\zeta \neq 0$) with an action by a reductive group $G=G_{\us}$ such that the orbits correspond to isomorphism classes of complexes and the stability parameters determine a linearisation $\cL$ of this action. The moduli space is given by taking the open subscheme of the projective GIT quotient $\fD'/\!/_{\mathcal{L}} G$ given by $\zeta \neq 0$.
\begin{thm}(\cite{schmitt05}, p.~3)\label{schmitt theorem} Let $X$ be a smooth complex projective variety, let $P$ be a collection of Hilbert polynomials of degree $\dim X$ and let $(\us,\uc)$ be stability parameters. There is a quasi-projective coarse moduli space \[M^{(\underline{\sigma}, \underline{\chi})-ss}(X,{P})\] for S-equivalence classes of $(\us,\uc)$-semistable complexes over $X$ with Hilbert polynomials $P$. \end{thm}
\subsection{The Hilbert-Mumford criterion}\label{calc HM for cx}
The Hilbert-Mumford criterion allows us to determine GIT semistable points by studying the actions of one-parameter subgroups (1-PSs); that is, nontrivial homomorphisms $\lambda : \CC^* \ra G$. In this section we give some results about the action of 1-PSs of $G=G_{\us}$ on the parameter space $\mathfrak{T}$ for complexes (see also \cite{schmitt05}, Section 2.1).
We firstly study the limit of a point $z = (q,[\psi : 1]) \in \mathfrak{T}$ under the action of a 1-PS $\lambda : \CC^* \rightarrow G$. For this limit to exist we need to instead work with a projective completion $\overline{\mathfrak{T}}$ of $\mathfrak{T}$. We take a projective completion which is constructed as a closed subscheme of a projective bundle $\overline{\fD}$ over the projective scheme $\overline{Q} := \Pi_i \overline{Q}^i$ where $\overline{Q}^i$ is the closure of $Q^i$ in the relevant quot scheme. The points of $\overline{\fD}$ over $q=(q^i : V^i \otimes \cO_X(-n) \ra \mathcal{E}^i)_i \in \overline{Q}$ are nonzero pairs $[ \psi : \zeta]$ defined up to scalar multiplication where $\psi : \oplus_i V^i \ra \oplus_i H^0(\mathcal{E}^i(n))$ and $\zeta \in \CC$. Then $\overline{\mathfrak{T}}$ is the subscheme of points $(q,[\psi : \zeta]) \in \overline{\fD}$ such that $\psi =H^0(d(n)) \circ (\oplus_i H^0(q^i(n)))$ where $d : \oplus_i \mathcal{E}^i \ra \oplus_i \mathcal{E}^i $ is given by $d^i : \mathcal{E}^i \ra \mathcal{E}^{i+1}$ which satisfy $d^i \circ d^{i-1} =0$. It is clear that the group action and linearisation $\mathcal{L}$ extend to this projective completion.
Recall that $G \cong \SL(V_{\us}) \cap \Pi_i \mathrm{GL}(V^i)$ where $V_{\us} = \oplus_i (V^i)^{\oplus \sigma_i}$ and so a 1-PS $\lambda : \CC^* \ra G$ is given by a collection of 1-PSs $\lambda_i : \CC^* \to \mathrm{GL}(V^i)$ which satisfy \[\Pi_i \det \lambda_i(t)^{\sigma_i}= 1.\] A 1-PS $\lambda_i$ of $\mathrm{GL}(V^i)$ gives rise to a weight space decomposition $V^i = \oplus_{j=1}^s V^i_j$ indexed by a finite collection of integers $k_1 > \cdots > k_s$ where $V^i_j=\{ v \in V^i : \lambda_i(t) \cdot v =t^{k_j} v \}$. This gives a filtration \[ 0 \subsetneq V^i_{(1)}\subsetneq \cdots \subsetneq V^i_{(s)} = V^i\] where $V^i_{(j)} := V^i_1 \oplus \cdots \oplus V^i_j$ and if we take a basis of $V^i$ which is compatible with this filtration then \[ \lambda_i(t) = \left( \begin{array}{ccc}t^{k_1} I_{V^i_1} & & \\ & \ddots & \\ && t^{k_s}I_{V^i_s} \end{array}\right) \] is diagonal. We can diagonalise each of these 1-PSs $\lambda_i$ simultaneously, so there is a decreasing sequence of integers $k_1 > \cdots > k_s$ and for each $i$ we have a decomposition $V^i = \oplus_{j=1}^s V^i_j$ (where we may have $V_j^i =0$) and a filtration \[ 0 \subset V^i_{(1)} \subset V^i_{(2)} \subset \cdots \subset V^i_{(s)} = V^i \] for which $\lambda_i$ is diagonal.
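For example, if each $V^i$ has even dimension we can take $s=2$ with weights $k_1 = 1 > k_2 = -1$ and \[ \lambda_i(t) = \left( \begin{array}{cc} t\, I_{V^i_1} & \\ & t^{-1} I_{V^i_2} \end{array} \right) \] where $\dim V^i_1 = \dim V^i_2$; then $\det \lambda_i(t) = 1$ for every $i$, so the condition $\Pi_i \det \lambda_i(t)^{\sigma_i} = 1$ holds for any choice of $\us$, and the induced filtrations are $0 \subset V^i_{(1)} = V^i_1 \subset V^i$.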
Let $z=(q, [\psi : 1])$ be a point in $ \mathfrak{T}$ where $q = (q^i : V^i \otimes \cO_X(-n) \ra \mathcal{E}^i)_i \in Q$; then we can consider its limit \[ \overline{z} := \lim_{t \to 0} \lambda(t) \cdot z \] under the 1-PS $\lambda$. By \cite{huybrechts} Lemma 4.4.3, \[\overline{q}^i:= \lim_{t \to 0} \lambda_i(t) \cdot q^i = \oplus_{j=1}^s q_j^i : \oplus_{j=1}^s V_j^i \otimes \cO_X(-n) \rightarrow \oplus_{j=1}^s \mathcal{E}^i_j \] where $\mathcal{E}^i_j$ are the successive quotients in the filtration \[ 0 \subset \mathcal{E}^i_{(1)} \subset \cdots \subset \mathcal{E}^i_{(j)} := q^i (V^i_{(j)} \otimes \mathcal{O}_X(-n) ) \subset \cdots \subset \mathcal{E}^i_{(s)}= \mathcal{E}^i \] induced by $\lambda_i$. For each $i$ we have a filtration of the corresponding sheaf $\mathcal{E}^i$ induced by $\lambda_i$, and the boundary maps may or may not preserve these filtrations. If they do, we say $\lambda$ induces a filtration of the point $z$ (or of the corresponding complex $\mathcal{E}^\cdot$) by subcomplexes. It is easy to check that the limit depends on whether $\lambda$ induces a filtration by subcomplexes or not:
\begin{lemma}\label{lemma on fixed pts} Let $z=(q, [\psi : 1])$ be a point in $ \mathfrak{T}$ and $\lambda$ be a 1-PS of $G$ as above with weights $k_1> \cdots > k_s$. Then the limit \[ \overline{z}:=\lim_{t \to 0} \lambda(t) \cdot z =(\overline{q}, [\overline{\psi}: \overline{\zeta}])\] is given by $\overline{q}$ as above and $\overline{\psi} =H^0( \overline{d}(n)) \circ (\oplus_i H^0(\overline{q}^i(n)))$ where $\overline{d}$ is given by $\overline{d}^i : \oplus_j \mathcal{E}^i_j \ra \oplus_j \mathcal{E}^{i+1}_j$. Moreover: \begin{enumerate} \renewcommand{\labelenumi}{\roman{enumi})} \item If $\lambda$ induces a filtration by subcomplexes, then $\overline{\zeta} = 1$ and $\overline{d}^{i} = \oplus_{j=1}^s (d^{i}_j: \mathcal{E}^i_j \ra \mathcal{E}^{i+1}_j)$. In particular $\overline{z} \in \mathfrak{T}$ and the corresponding complex is the graded complex associated to the filtration induced by $\lambda$. \item If $\lambda$ does not induce a filtration by subcomplexes, let \[N := \min_{i,j,l} \{ k_l - k_j : d^{i}(\mathcal{E}^i_{(j)}) \nsubseteq \mathcal{E}^{i+1}_{(l-1)} \} < 0.\] Then $\overline{\zeta} =0$ and we have $\overline{d}^{i}(\mathcal{E}^i_j) \cap \mathcal{E}^{i+1}_l = 0$ unless $k_l - k_j = N$. In particular, the limit $\overline{z}=(\overline{q},[\overline{\psi}:0])$ is not in the parameter scheme $\mathfrak{T}$. \end{enumerate} \end{lemma}
\begin{rmk}\label{rmk on fixed pts} Let $z=(q, [\psi : \zeta])$ be a point in $\overline{\mathfrak{T}}$ given by $q=(q^i : V^i \otimes \cO_X(-n) \ra \mathcal{E}^i)_i \in \overline{Q}$ and $\psi = H^0({d}(n)) \circ \oplus_i H^0({q}^i(n))$ where $d$ is defined by homomorphisms $d^i : \mathcal{E}^i \ra \mathcal{E}^{i+1}$. If $z$ is fixed by $\lambda$, then the quotient sheaves are direct sums $q^i = \oplus_j q^i_j : V^i \otimes \cO_X(-n) \ra \oplus_j \mathcal{E}^i_j$ and so the boundary map $d^i$ can be written as $d^i = \oplus_{j,l} d^i_{l,j}$ where $d^i_{l,j}: \mathcal{E}^i_j \ra \mathcal{E}^{i+1}_l$. The fixed point locus of a 1-PS $\lambda : \CC^* \ra G$ acting on $\overline{\mathfrak{T}}$ decomposes into 3 pieces (each piece being a union of connected components) where these pieces are given by: \begin{itemize}
\item A diagonal piece consisting of points $z$ where $d^i=\oplus_j d^i_{j,j} $ is diagonal for all $i$ and $\zeta \in \CC$. \item A strictly lower triangular piece consisting of points $z$ where $d^i = \oplus_{j<l} d^i_{l,j} $ is strictly lower triangular for all $i$ and $\zeta = 0$. \item A strictly upper triangular piece consisting of points $z$ where $d^i = \oplus_{j>l} d^i_{l,j} $ is strictly upper triangular for all $i$ and $\zeta = 0$. \end{itemize} Note that by Lemma \ref{lemma on fixed pts} above, if we have a point $z \in \mathfrak{T}$ then its limit under $\lambda(t)$ as $t \to 0$ is in either the diagonal or strictly lower triangular piece. In fact, we have $\lim_{t \to 0} \lambda(t) \cdot z \in \mathfrak{T}$ if and only if $\lambda$ induces a filtration of $z$ by subcomplexes. \end{rmk}
Now that we understand the limit points of $\CC^*$-actions, we can compute the weight of the $\CC^*$-action at these fixed points. By definition the Hilbert-Mumford function \[ \mu^{\cL}(z,\lambda) = \mu^{\cL}(\lim_{t \to 0} \lambda(t) \cdot z , \lambda)\] is equal to minus the weight of the $\lambda(\CC^*)$-action on the fibre of $\cL$ over $\overline{z}:=\lim_{t \to 0} \lambda(t) \cdot z$. By the construction of $\mathcal{L}$ we have \begin{equation}\label{form1 for HM} \mu^\mathcal{L}({z}, \lambda)=\mu^{\mathcal{L}'}(\varphi_{\us}({z}), \lambda)+ \sum_{i=m_1}^{m_2} a_i \mu^{\mathcal{L}_i}({q}^i, \lambda_i) - \rho \cdot \lambda \end{equation} where $\varphi_{\us}({z})$ is the decoration associated to ${z}$ and $a_i$ and $\rho$ are the rational numbers and character used to define $\cL$ (cf. $\S$\ref{linearisation schmitt}). We let $P_{\us} = \sum_i \sigma_i P^i$ and $r_{\us} = \sum_i \sigma_i r^i$.
\begin{lemma}(\cite{schmitt05}, Section 2.1)\label{HM prop} Let $\lambda$ be a 1-PS of $G$ which corresponds to integers $k_1> \cdots > k_s$ and decompositions $V^i = \oplus_{j=1}^s V^i_j$ for $m_1 \leq i \leq m_2$ as above. Let $z=(q,[\psi : 1]) \in \mathfrak{T}$ where $q=(q^i : V^i \otimes \mathcal{O}_X(-n) \rightarrow \mathcal{E}^i)_i \in Q$; then \begin{enumerate} \renewcommand{\labelenumi}{\roman{enumi})}
\item If $\lambda$ induces a filtration of $z$ by subcomplexes, then \[ {\mu^{\cL}(z, \lambda )} = \sum_{i=m_1}^{m_2} \sum_{j=1}^s k_j \left( \sigma_i \frac{P_{\us}(n)}{r_{\us}\delta(n)} + \eta_i \right) \rk \mathcal{E}^i_j \] where $ \mathcal{E}^i_j =\mathcal{E}^i_{(j)} / \mathcal{E}^i_{(j-1)}$ and $\mathcal{E}^i_{(j)} = q^i( V^i_{(j)} \otimes \cO_X(-n))$. \item If $\lambda$ does not induce a filtration of $z$ by subcomplexes, then \[ {\mu^{\mathcal{L}}(z, \lambda )} = \sum_{i=m_1}^{m_2} \sum_{j=1}^s k_j \left( \sigma_i \frac{P_{\us}(n)}{r_{\us}\delta(n)} + \eta_i \right) \rk \mathcal{E}^i_j - N \] where $N$ is the negative integer given in Lemma \ref{lemma on fixed pts}. \end{enumerate} \end{lemma} \begin{proof} The weight of the action of $\lambda_i$ on $Q^i$ with respect to $\cL_i$ was calculated by Gieseker: \[\mu^{\mathcal{L}_i}(q^i,\lambda_i) =\sum_{j=1}^s k_j \left(\rk\mathcal{E}^i_j - \dim V^i_j \frac{r^i}{P^i(n)} \right) .\] We can insert this into the formula (\ref{form1 for HM}) along with the exact values of $a_i$ and $c_i$ and use the fact that $\lambda$ is a 1-PS of $\SL(\oplus_i (V^i)^{\oplus \sigma_i})$ to reduce this to \[ \mu^{\cL}(z, \lambda) =\mu^{\cL'}(\varphi_{\us}(z),\lambda) + \sum_{i=m_1}^{m_2} \sum_{j=1}^s k_j \left( \sigma_i \frac{P_{\us}(n)}{r_{\us}\delta(n)} -\sigma_i + \eta_i \right) \rk \mathcal{E}^i_j . 
\] Finally, by studying the construction of the decoration $\varphi_{\us}(z)$ associated to $z$ (for details see \cite{schmitt05}), we see that \[\mu^{\cL'}(\varphi_{\us}(z),\lambda) = \left\{ \begin{array}{ll} \sum_{i=m_1}^{m_2} \sum_{j=1}^s k_j \sigma_i \rk \mathcal{E}^i_j & \text{if } \lambda \text{ induces a filtration by subcomplexes,} \\ \sum_{i=m_1}^{m_2} \sum_{j=1}^s k_j \sigma_i \rk \mathcal{E}^i_j - N & \text{otherwise,} \end{array} \right.\] where $N$ is the negative integer of Lemma \ref{lemma on fixed pts}. \end{proof}
\begin{rmk}\label{only need to worry about subcxs} Schmitt observes that we can rescale the stability parameters by picking a sufficiently large integer $K$ and replacing $(\delta,\ue)$ with $(K \delta,\ue / K)$, so that for GIT semistability we need only worry about 1-PSs which induce filtrations by subcomplexes (cf. \cite{schmitt05}, Theorem 1.7.1). This explains why the test objects for (semi)stability in Definition \ref{stab defn} are subcomplexes rather than weighted sheaf filtrations. \end{rmk}
\section{Stability conditions relating to cohomology}\label{sec on stab}
In this section we study these notions of stability for complexes in greater depth. As we are now studying complexes with varying invariants $P$ we do not impose any condition on $\ue$ (such as $\sum_i \eta_i r^i = 0$). An important property of these stability conditions is that we can describe any complex (of torsion free sheaves) as a finite sequence of extensions of semistable complexes by studying its Harder--Narasimhan filtration.
In this section we describe a collection of stability conditions indexed by a small positive rational number $\epsilon$ which can be used to study the cohomology sheaves of a given complex. The stability parameters we are interested in are of the form $(\underline{1}, \delta \ue/\epsilon)$ where $\underline{1}$ is the constant vector and $\eta_i$ are strictly increasing rational numbers. For a given complex $\cxF$ with torsion free cohomology sheaves \[ \cH^i(\cxF) := \ker d^i / \im d^{i-1}\] we show that the Harder--Narasimhan filtration of this complex encodes the Harder--Narasimhan filtration of the cohomology sheaves in this complex provided $\epsilon>0$ is sufficiently small.
\subsection{Harder--Narasimhan filtrations}
Given a choice of stability parameters $(\us, \uc)$ every complex has a unique maximal destabilising filtration known as its Harder--Narasimhan filtration:
\begin{defn}\label{HN filtr} Let $\cxF$ be a complex and $(\us,\uc)$ be stability parameters. A {Harder--Narasimhan filtration} for $\cxF$ with respect to $(\us,\uc)$ is a filtration by subcomplexes \[ 0_\cdot= \cxF_{(0)} \subsetneqq \cxF_{(1)} \subsetneqq \cdots \subsetneqq \cxF_{(s)} = \cxF \] such that the successive quotients $\cxF_j=\cxF_{(j)} / \cxF_{(j-1)}$ are complexes of torsion free sheaves which are $(\us,\uc)$-semistable and have decreasing reduced Hilbert polynomials with respect to $(\us,\uc)$: \[ P_{\us, \uc}^{\mathrm{red}}(\cxF_{1}) > P_{\us, \uc}^{\mathrm{red}}(\cxF_2) > \cdots > P_{\us, \uc}^{\mathrm{red}}(\cxF_s) .\] The Harder--Narasimhan type of $\cxF$ with respect to $(\us, \uc )$ is given by $\tau =({P}_{1}, \cdots, {P}_{s})$ where ${P}_{j} = (P_j^i)_{i \in \ZZ}$ is the tuple of Hilbert polynomials of the complex $\cxF_j$ so that \[ P^i_j := P(\cF^i_j)=P(\cF^i_{(j)}/\cF^i_{(j-1)}). \] \end{defn}
The Harder--Narasimhan filtration can be constructed inductively from the maximal destabilising subcomplex:
\begin{defn} Let $\cxF$ be a complex and $(\us,\uc)$ be stability parameters. A subcomplex $\cxF_1 \subset \cxF$ is a \emph{maximal destabilising subcomplex} for $\cxF$ with respect to $(\us,\uc)$ if \begin{enumerate} \renewcommand{\labelenumi}{\roman{enumi})} \item The complex $\cxF_1$ is $(\us,\uc)$-semistable, \item For every subcomplex $\mathcal{E}^\cdot$ of $\cxF$ such that $\cxF_1 \subsetneq \mathcal{E}^\cdot$ we have \[ P_{\us,\uc}^{\mathrm{red}}(\cxF_1) > P_{\us, \uc}^{\mathrm{red}}(\mathcal{E}^\cdot). \] \end{enumerate} \end{defn}
The existence and uniqueness of the maximal destabilising subcomplex follows in exactly the same way as the original proof for vector bundles of Harder and Narasimhan \cite{harder}.
\subsection{The limit as $\epsilon$ tends to zero }
Recall that we are interested in studying the collection of parameters $(\underline{1}, \delta \ue/\epsilon) $ indexed by a small positive rational number $\epsilon$ where $\underline{1}$ is the constant vector, $\eta_i$ are strictly increasing rational numbers and $\delta$ is a positive rational polynomial of degree $\max(\dim X -1 , 0)$. In this section we study the limit as $\epsilon$ tends to zero.
Observe that \[ P_{\underline{1}, \delta \ue/\epsilon}^{\mathrm{red}}(\mathcal{E}^\cdot) \leq P_{\underline{1}, \delta \ue/\epsilon}^{\mathrm{red}}(\cxF) \] is equivalent to \[ \epsilon \frac{\sum_i P(\mathcal{E}^i)}{\sum_i \rk \mathcal{E}^i} - \delta \frac{\sum_i \eta_i \rk \mathcal{E}^i }{\sum_i \rk \mathcal{E}^i} \leq \epsilon \frac{\sum_i P(\cF^i)}{\sum_i \rk \cF^i} - \delta \frac{\sum_i \eta_i \rk \cF^i }{\sum_i \rk \cF^i}\] and if we take the limit as $\epsilon \to 0$ and use the positivity of $\delta$ we get \begin{equation}\label{eps is zero ineq} \frac{\sum_i \eta_i \rk \mathcal{E}^i}{\sum_i \rk\mathcal{E}^i} \geq \frac{\sum_i \eta_i \rk \cF^i}{\sum_i \rk\cF^i}. \end{equation} We say $\cxF$ is $(\underline{0}, \delta \ue)$-semistable if all nonzero proper subcomplexes $\mathcal{E}^\cdot \subset \cxF$ satisfy the inequality (\ref{eps is zero ineq}). This is a slight generalisation of the parameters considered by Schmitt in \cite{schmitt05}, as we now allow $\sigma_i$ to be zero. These generalised stability parameters will no longer define an ample linearisation on the parameter space (cf. $\S$\ref{linearisation schmitt}), but we can still study the corresponding notion of semistability.
\begin{lemma}\label{ss for sigma0} Suppose $\eta_i < \eta_{i+1}$ for all integers $i$; then the only $(\underline{0}, \delta \ue)$-semistable complexes (of torsion free sheaves) are shifts of (torsion free) sheaves and complexes which are isomorphic to a shift of the cone on the identity morphism of a (torsion free) sheaf. \end{lemma} \begin{proof} If $\cxF$ is a shift of a torsion free sheaf $\cF^k$, then a subcomplex $\mathcal{E}^\cdot$ of $\cxF$ is just a subsheaf and it is trivial to verify that $\cxF$ is $(\underline{0}, \delta \ue)$-semistable. If $\cxF$ is isomorphic to a shift of the cone on the identity morphism of a torsion free sheaf, then there is an integer $k$ such that $d^k$ is an isomorphism and $\cF^i= 0$ unless $i = k$ or $k+1$. A subcomplex $\mathcal{E}^\cdot$ of $\cxF$ is either concentrated in position $k+1$ or concentrated in $[k,k+1]$. In the second case we must have $\rk \mathcal{E}^k \leq \rk \mathcal{E}^{k+1}$ and in both cases it is easy to verify the inequality for $(\underline{0}, \delta \ue)$-semistability using the fact that the $\eta_i$ are strictly increasing.
Now suppose $\cxF$ is $(\underline{0}, \delta \ue)$-semistable. If all the boundary morphisms $d^i$ are zero, then each nonzero sheaf $\cF^k$ is both a subcomplex and quotient complex and so by semistability \[ \eta_k = \frac{\sum_i \eta_i \rk \cF^i}{\sum_i \rk\cF^i}. \] As the $\eta_i$ are strictly increasing, there can be at most one $k$ such that $\cF^k$ is nonzero. If there is a nonzero boundary map $d^k$ then the image of this boundary map can be viewed as a quotient complex (in position $k$) and a subcomplex (in position $k+1$) so that \begin{equation}\label{eqn for eta 1} \eta_{k} \leq \frac{\sum \eta_i \rk \cF^i}{\sum \rk\cF^i} \leq \eta_{k+1} .\end{equation} As the $\eta_i$ are strictly increasing, there can be at most one $k$ such that $d^k$ is nonzero. From above, we see that $\cF^i = 0$ unless $i =k$ or $ k+1$. As the $\eta_i$ are increasing, we see that the inequalities of (\ref{eqn for eta 1}) must be strict. We can consider the kernel and cokernel of $d^k$ as a subcomplex and quotient complex respectively and by comparing the inequalities obtained from semistability with (\ref{eqn for eta 1}), we see that $d^k$ must be an isomorphism and so $\cxF$ is isomorphic to a shift of the cone on the identity morphism of $\cF^{k+1}$. \end{proof}
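\begin{rmk}
For a concrete instance of the lemma, place a torsion free sheaf $\cF$ of rank $r>0$ in positions $[k,k+1]$ with $d^k = \mathrm{id}$ and take $\eta_k = 0$ and $\eta_{k+1}=1$. Then \[ \frac{\sum_i \eta_i \rk \cF^i}{\sum_i \rk \cF^i} = \frac{0\cdot r + 1\cdot r}{2r} = \frac{1}{2}. \] A subcomplex concentrated in position $k+1$ has $\eta$-average $1 > 1/2$, while a subcomplex $\mathcal{E}^\cdot$ concentrated in $[k,k+1]$ satisfies $\rk \mathcal{E}^k \leq \rk \mathcal{E}^{k+1}$ as $d^k$ is injective, so that \[ \frac{\eta_k \rk \mathcal{E}^k + \eta_{k+1}\rk \mathcal{E}^{k+1}}{\rk \mathcal{E}^k + \rk \mathcal{E}^{k+1}} = \frac{\rk \mathcal{E}^{k+1}}{\rk \mathcal{E}^k + \rk \mathcal{E}^{k+1}} \geq \frac{1}{2}; \] hence the inequality (\ref{eps is zero ineq}) holds for every nonzero proper subcomplex.
\end{rmk}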
\begin{lemma}\label{HN filtr for sigma0}
Suppose $\eta_i < \eta_{i+1}$ for all integers $i$ and $\cxF $ is a complex. Let $k$ be the minimal integer for which $\cF^k$ is nonzero. Then the maximal destabilising subcomplex $\cxF_{(1)}$ of $\cxF $ with respect to $(\underline{0},\delta \ue)$ is \[ \cxF_{(1)} = \left\{ \begin{array}{cccccccccl} \cdots \ra & 0 & \ra & \ker d^k & \ra & 0 &\ra & 0 & \ra \cdots & \quad \mathrm{if} \: \ker d^k \neq 0, \\ \cdots \ra & 0 & \ra &\cF^k & \ra & \im d^k &\ra & 0 & \ra \cdots & \quad \mathrm{if} \: \ker d^k = 0. \end{array} \right. \] \end{lemma} \begin{proof} By Lemma \ref{ss for sigma0} these complexes are both $(\underline{0},\delta \ue)$-semistable. In order to prove this gives the maximal destabilising subcomplex we need to show that if $\cxF_{(1)} \subsetneq \mathcal{E}^\cdot \subset \cxF$, then \[ \frac{\sum_i \eta_i \rk \mathcal{E}^i}{\sum_i \rk \mathcal{E}^i} > \frac{\sum_i \eta_i \rk \cF_{(1)}^i}{\sum_i \rk \cF_{(1)}^i}. \] As $\mathcal{E}^\cdot \neq \cxF_{(1)}$, the set \[ I:= \{ i \in \mathbb{Z} : \mathcal{E}^i \neq \cF^i_{(1)} \} \] is nonempty. We note that if $i \in I$, then $\mathcal{E}^i \neq 0 $.
Suppose $\ker d^k \neq 0$. If $k \in I$, then also $k+1 \in I$ as $\ker d^k \subsetneq \mathcal{E}^k$ and so $0 \neq d^k(\mathcal{E}^k) \subset \mathcal{E}^{k+1}$. As the $\eta_i$ are strictly increasing we have $ \eta_{k} \rk \mathcal{E}^k + \eta_{k+1} \rk \mathcal{E}^{k+1} > \eta_k(\rk \mathcal{E}^{k} + \rk \mathcal{E}^{k+1} ). $ If $i > k+1$ and belongs to $I$, then $ \eta_{i} \rk \mathcal{E}^i > \eta_k\rk \mathcal{E}^{i} $. So \[ \sum_{i \in I} \eta_i \rk \mathcal{E}^i > \sum_{i \in I} \eta_k \rk \mathcal{E}^{i} \quad \mathrm{and} \quad \sum_{i \notin I} \eta_i \rk \mathcal{E}^i = \sum_{i \notin I} \eta_k \rk \mathcal{E}^{i}; \] hence \[ \frac{\sum_i \eta_i \rk \mathcal{E}^i}{\sum_i \rk \mathcal{E}^i} >\eta_k = \frac{\sum_i \eta_i \rk \cF_{(1)}^i}{\sum_i \rk \cF_{(1)}^i} .\]
The case when $\ker d^k = 0$ is proved in the same way.
\end{proof}
\begin{cor}\label{HNF corr coh rmk} If $\cxF$ has torsion free cohomology sheaves, then its Harder--Narasimhan filtration with respect to these stability parameters picks out the kernels and images of the boundary maps successively: \[\begin{array}{cccccccccc} \cxF_{(1)} : \quad & \cdots \rightarrow 0 & \rightarrow & \ker d^k & \rightarrow & 0 & \rightarrow & 0 & \rightarrow & 0 \cdots \\ & & & \cap & & \cap & & \cap & & \\ \cxF_{(2)} : \quad & \cdots \rightarrow 0 & \rightarrow & \cF^k & \rightarrow & \im d^k & \rightarrow & 0 & \rightarrow & 0 \cdots \\ & & & \cap & & \cap & & \cap & & \\ \cxF_{(3)} : \quad & \cdots \rightarrow 0 & \rightarrow & \cF^k & \rightarrow & \ker d^{k+1} & \rightarrow & 0 & \rightarrow & 0 \cdots \\ & & & \cap & & \cap & & \cap & & \\ \cxF_{(4)} : \quad & \cdots \rightarrow 0 & \rightarrow & \cF^k & \rightarrow & \cF^{k+1} & \rightarrow & \im d^{k+1} & \rightarrow & 0 \cdots \\ & & & \vdots & & \vdots & & \vdots & & \end{array} \] In particular, the successive quotients are $\cH^i(\cxF)[-i]$ or isomorphic to $\mathrm{Cone}(\mathrm{Id}_{\im d^i})[-(i+1)] $. \end{cor}
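\begin{rmk}
For example, if $\cxF$ is a two-term complex $\cF^k \stackrel{d^k}{\rightarrow} \cF^{k+1}$ with torsion free cohomology, the above filtration is simply \[ 0 \subset \ker d^k [-k] \subset (\cF^k \rightarrow \im d^k) \subset \cxF \] with successive quotients $\cH^k(\cxF)[-k]$, $\mathrm{Cone}(\mathrm{Id}_{\im d^k})[-(k+1)]$ and $\cH^{k+1}(\cxF)[-(k+1)]$, where any step with vanishing quotient is omitted.
\end{rmk}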
\subsection{Semistability with respect to $(\underline{1}, \delta \ue/\epsilon)$ } Recall that a torsion free sheaf $\cF$ is semistable (in the sense of Gieseker) if for all subsheaves $\mathcal{E} \subset \cF$ we have \[ \frac{P(\mathcal{E})}{\rk \mathcal{E}} \leq \frac{P(\cF)}{\rk \cF}. \] A torsion free sheaf can be viewed as a complex (by placing it in any position $k$) and it is easy to see that $(\us, \uc)$-semistability of the associated complex is equivalent to (Gieseker) semistability of the sheaf. For $\epsilon$ a small positive rational number we consider the stability parameters $(\underline{1}, \delta \ue/\epsilon)$ where the $\eta_i$ are strictly increasing.
\begin{lemma}\label{lemma X} Suppose $\cxF$ is a complex for which there is an $\epsilon_0>0$ such that for all positive rational $\epsilon < \epsilon_0$ this complex is $(\underline{1}, \delta \ue/\epsilon)$-semistable. Then $\cxF$ is either a shift of a Gieseker semistable torsion free sheaf or isomorphic to a shift of the cone on the identity morphism of a semistable torsion free sheaf. \end{lemma} \begin{proof} By studying the limit as $\epsilon$ tends to zero we see that $\cxF$ is $(\underline{0}, \delta \ue)$-semistable and so $\cxF$ is either a shift of a torsion free sheaf or isomorphic to a shift of the cone on the identity morphism of a torsion free sheaf by Lemma \ref{ss for sigma0}. If $\cxF$ is the shift of a sheaf, then $(\underline{1}, \delta \ue/\epsilon)$-semistability for any $0<\epsilon < \epsilon_0 $ implies this sheaf must be Gieseker semistable. If $\cxF$ is a shift of the cone on the identity morphism of a torsion free sheaf $\cF$, concentrated in positions $[k,k+1]$ say, then $\cF$ must be semistable: for any subsheaf $\mathcal{F}' \subset \mathcal{F}$ we can consider $\mathrm{Cone}(\mathrm{id}_{\cF'})$ as a subcomplex and so \[ \frac{P(\cF')}{\rk \cF'} - \delta \frac{\eta_k + \eta_{k+1}}{2\epsilon} \leq \frac{P(\mathcal{F})}{\rk \mathcal{F}} - \delta \frac{\eta_k + \eta_{k+1}}{2 \epsilon} \] by $(\underline{1}, \delta \ue/\epsilon)$-semistability for any $0 < \epsilon < \epsilon_0$.
\end{proof}
\begin{rmk} \label{rmk on sigma0} Conversely, a shift of a semistable torsion free sheaf or a shift of a cone on the identity morphism of a semistable torsion free sheaf is $(\underline{1}, \delta \ue/\epsilon)$-semistable for any $\epsilon >0$.
\end{rmk}
\begin{rmk} As $(\underline{1}, \delta \ue/\epsilon)$-semistability of a complex associated to a torsion free sheaf $\cF$ is equivalent to (Gieseker) semistability of $\cF$, it follows that the Harder--Narasimhan filtration of the associated complex with respect to $(\underline{1}, \delta \ue/\epsilon)$ is given by the Harder--Narasimhan filtration of the sheaf. Similarly we see that the Harder--Narasimhan filtration of $\mathrm{Cone}(\mathrm{id}_{\cF})$ with respect to $(\underline{1}, \delta \ue/\epsilon) $ is given by taking cones on the identity morphism of each term in the Harder--Narasimhan filtration of the sheaf $\cF$. \end{rmk}
We have seen that the Harder--Narasimhan filtration of $\cxF$ with respect to $(\underline{0}, \delta \ue)$ picks out the successive kernels and images of each boundary map. In particular, the successive quotients are either of the form $\cH^i(\cxF)[-i]$ or isomorphic to $\mathrm{Cone}(\mathrm{Id}_{\im d^i})[-(i+1)] $.
\begin{thm}\label{HN filtrations for epsiloneta} Let $\cxF$ be a complex concentrated in $[m_1,m_2]$ with torsion free cohomology sheaves. There is an $\epsilon_0 >0$ such that for all rational $0<\epsilon < \epsilon_0$ the Harder--Narasimhan filtration of $\cxF$ with respect to $(\underline{1}, \delta \ue/\epsilon)$ is given by refining the Harder--Narasimhan filtration of $\cxF$ with respect to $(\underline{0}, \delta \ue)$ by the Harder--Narasimhan filtrations of the cohomology sheaves $\cH^i(\cxF)$ and image sheaves $\im d^i$. \end{thm} \begin{proof} Firstly, we note that if $\dim X = 0$ then every sheaf is Gieseker semistable (all sheaves have the same reduced Hilbert polynomial) and so any choice of $\epsilon_0$ will work. Therefore, we assume $d = \dim X >0$. Let $\cH^i(\cxF)_j$ for $1 \leq j \leq s_i$ (resp. $\im d^i_j$ for $1 \leq j \leq t_i$) denote the successive quotient sheaves in the Harder--Narasimhan filtration of $\cH^i(\cxF)$ (resp. $\im d^i$). The successive quotients in the refined filtration are either shifts of the sheaves $\cH^i(\cxF)_j$ or isomorphic to shifts of the cones on the identity morphisms of the sheaves $\im d^i_j$, and so by Remark \ref{rmk on sigma0} these successive quotients are $(\underline{1}, \delta \ue/\epsilon)$-semistable for any rational $\epsilon >0$. 
Thus it suffices to show there is an $\epsilon_0$ such that for all $0 <\epsilon <\epsilon_0$ we have inequalities \[ \frac{P(\cH^{m_1}(\cxF)_1)}{\rk \cH^{m_1}(\cxF)_1} - \delta \frac{\eta_{m_1}}{\epsilon} > \dots > \frac{P(\cH^{m_1}(\cxF)_{s_{m_1}})}{\rk \cH^{m_1}(\cxF)_{s_{m_1}}} - \delta \frac{\eta_{m_1}}{\epsilon} > \frac{P(\im d^{m_1}_1)}{\rk \im d^{m_1}_1} - \delta \frac{ \eta_{m_1} + \eta_{m_1 +1}}{2\epsilon} > \cdots \] \[> \frac{P(\im d^{m_1}_{t_{m_1}})}{\rk \im d^{m_1}_{t_{m_1}}} - \delta \frac{ \eta_{m_1} + \eta_{m_1 +1}}{2\epsilon} > \frac{P(\cH^{m_1+1}(\cxF)_1)}{\rk \cH^{m_1 +1}(\cxF)_1} - \delta \frac{\eta_{m_1 +1}}{\epsilon} > \cdots > \frac{P(\cH^{m_2}(\cxF)_{s_{m_2}})}{\rk \cH^{m_2}(\cxF)_{s_{m_2}}} - \delta \frac{\eta_{m_2}}{\epsilon}. \] Since we know that the reduced Hilbert polynomials of the successive quotients in the Harder--Narasimhan filtrations of the cohomology and image sheaves are decreasing, it suffices to show for $m_1 \leq i \leq m_2-1$ that: \begin{equation*}\label{defining eps0} \begin{split} 1) \: \: & \epsilon \frac{P(\cH^i(\cxF)_{s_i})}{\rk \cH^i(\cxF)_{s_i}} - \delta \eta_i > \epsilon \frac{P(\im d^i_1)}{\rk \im d^i_1} - \delta \frac{ \eta_i + \eta_{i+1}}{2} \: \: \mathrm{and}\\ 2) \: \: & \epsilon \frac{P(\im d^i_{t_i})}{\rk \im d^i_{t_i}} - \delta \frac{ \eta_{i} + \eta_{i+1}}{2} > \epsilon \frac{P(\cH^{i+1}(\cxF)_1)}{\rk \cH^{i+1}(\cxF)_1} - \delta \eta_{i+1}. \end{split} \end{equation*} These polynomials all have the same top coefficient and we claim we can pick $\epsilon_0$ so that if $0 < \epsilon < \epsilon_0$ we have strict inequalities in the second to top coefficients. Let $\mu(\mathcal{A})$ denote the second to top coefficient of the reduced Hilbert polynomial of $\mathcal{A}$, which is (up to multiplication by a positive constant) the slope of $\mathcal{A}$, and let $\delta^{\mathrm{top}} >0$ be the coefficient of $x^{d-1}$ in $\delta$. 
For $m_1 \leq i \leq m_2 -1$ let \[ M_i := \max\left\{\mu(\im d^i_{1}) - \mu(\mathcal{H}^i(\cxF)_{s_i}), \mu(\mathcal{H}^{i+1}(\cxF)_{1}) - \mu(\im d^i_{t_i}) \right\}. \] We pick $\epsilon_0 >0$ so that if $M_i >0$ then $ \epsilon_0 < \delta^{\mathrm{top}} ({\eta_{i+1} - \eta_i})/({2M_i}) $. \end{proof}
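\begin{rmk}
To spell out the final step in the proof of Theorem \ref{HN filtrations for epsiloneta}: the top (degree $d$) coefficients of the polynomials in inequality $1)$ agree, so a strict inequality between the degree $d-1$ coefficients suffices, and this holds if and only if \[ \epsilon \left( \mu(\cH^i(\cxF)_{s_i}) - \mu(\im d^i_1)\right) + \delta^{\mathrm{top}}\, \frac{\eta_{i+1}-\eta_i}{2} > 0. \] This is automatic when $\mu(\im d^i_1) \leq \mu(\cH^i(\cxF)_{s_i})$, and otherwise it is equivalent to \[ \epsilon < \frac{\delta^{\mathrm{top}}(\eta_{i+1}-\eta_i)}{2\left(\mu(\im d^i_1) - \mu(\cH^i(\cxF)_{s_i})\right)}. \] Inequality $2)$ is treated in the same way, and taking the maximum $M_i$ of the two slope differences yields the stated bound on $\epsilon_0$.
\end{rmk}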
\section{The stratification of the parameter space}\label{sec on strat}
In the introduction we described a stratification of a projective $G$-scheme $B$ (with respect to an ample linearisation $\mathcal{L}$) by $G$-invariant subvarieties $\{S_\beta : \beta \in \mathcal{B} \}$ which is described in \cite{hesselink,kempf_ness,kirwan}. If we fix a compact maximal torus $T$ of $G$ and positive Weyl chamber $\mathfrak{t}_+$ in the Lie algebra $\mathfrak{t}$ of $T$, then the indices $\beta$ can be viewed as rational weights in $\mathfrak{t}_+$. Associated to $\beta $ there is a parabolic subgroup $P_\beta \subset G$, a rational 1-PS $\lambda_\beta : \CC^* \ra T_{\CC}$ and a rational character $\chi_\beta : T_{\CC} \ra \CC^*$ which extends to a character of $P_\beta$.
By definition $Z_\beta$ is the union of the components of the fixed point locus of $\lambda_\beta$ acting on $B$ on which $\lambda_\beta$ acts with weight $|| \beta ||^2$, and $Z_\beta^{ss}$ is the GIT semistable subscheme for the action of the reductive part $\mathrm{Stab} \: \beta$ of $P_\beta$ on $Z_\beta$ with respect to the linearisation $\mathcal{L}^{\chi_{-\beta}}$. Then $Y_\beta$ (resp. $Y_\beta^{ss}$) is defined to be the subscheme of $B$ consisting of points whose limit under $\lambda_\beta(t)$ as $t \to 0$ lies in $Z_\beta$ (resp. $Z_\beta^{ss}$). By \cite{kirwan}, for $\beta \neq 0$ we have $S_\beta = G Y_\beta^{ss} \cong G \times^{P_\beta} Y_\beta^{ss}$.
In this section we study the stratifications of the parameter space $\mathfrak{T}_P(n)$ for complexes associated to the collection of stability conditions $(\underline{1}, \delta \ue/\epsilon)$ given in $\S$\ref{sec on stab}. We relate these stratifications to the natural stratification by Harder--Narasimhan types (see Theorem \ref{HN strat is strat} below).
\subsection{GIT set up}\label{GIT set up}
We consider the collection of stability parameters $(\underline{1}, \delta \ue/\epsilon)$ indexed by a small positive rational parameter $\epsilon$, where $\delta$ is a positive rational polynomial and the $\eta_i$ are strictly increasing rational numbers. In $\S$\ref{sec on stab} we studied semistability and Harder--Narasimhan filtrations with respect to these parameters when $\epsilon$ is very small. The Harder--Narasimhan filtration of a complex with torsion free cohomology sheaves with respect to $(\underline{1}, \delta \ue/\epsilon)$ tells us about the Harder--Narasimhan filtrations of the cohomology sheaves of this complex provided $\epsilon >0$ is chosen sufficiently small (see Theorem \ref{HN filtrations for epsiloneta}).
Recall that the parameter space $\mathfrak{T} = \mathfrak{T}_{P}(n)$ for $(\underline{1},\delta \ue /\epsilon)$-semistable complexes with invariants $P$ is a locally closed subscheme of a projective bundle $\fD$ over a product $Q=Q^{m_1} \times \cdots \times Q^{m_2}$ of open subschemes $Q^i$ of quot schemes. There is an action of a reductive group $G$ on $\mathfrak{T}$ where \[ G = \SL(\oplus_i V^i) \cap \prod_i \mathrm{GL}(V^i) \] and $V^i$ are fixed vector spaces of dimension $P^i(n)$. The linearisation of this action is determined by the stability parameters (see $\S$\ref{linearisation schmitt} or \cite{schmitt05} for details). We also described a natural projective completion $\overline{\mathfrak{T}}$ of $\mathfrak{T}$ which is a closed subscheme of a projective bundle $\overline{\fD}$ over a projective scheme $\overline{Q}$ in $\S$\ref{calc HM for cx}. Note that the group $G$ and parameter scheme $\mathfrak{T}$ depend on the choice of a sufficiently large integer $n$. For any $n \gg 0$ and $\epsilon >0$, associated to this action we have a stratification of $\overline{\mathfrak{T}}$ into $G$-invariant locally closed subschemes such that the open stratum is the GIT semistable subscheme. 
As we are primarily interested in complexes with torsion free cohomology sheaves, which form an open subscheme ${\mathfrak{T}}^{tf}$ of the parameter space $\mathfrak{T}$, we restrict this stratification to the closure $\overline{\mathfrak{T}}^{tf}$ of ${\mathfrak{T}}^{tf}$ in $\overline{\mathfrak{T}}$: \begin{equation}\label{snail} \overline{ \mathfrak{T}}^{tf} = \bigsqcup_{\beta \in \mathcal{B}} S_\beta. \end{equation} Every point in $\mathfrak{T}^{tf}$ represents a complex which has a unique Harder--Narasimhan type with respect to $(\underline{1}, \delta \ue/\epsilon)$ and so we can write \[ \mathfrak{T}^{tf} = \bigsqcup_{\tau} R_\tau \] where the union is over all Harder--Narasimhan types $\tau$.
Let us fix a complex $\cxF$ with torsion free cohomology sheaves and invariants $\underline{P}$. We may assume we have picked $\epsilon$ sufficiently small as given by Theorem \ref{HN filtrations for epsiloneta}, so that the successive quotients appearing in its Harder--Narasimhan filtration with respect to $(\underline{1}, \delta \ue/\epsilon)$ are defined using the successive quotients in the Harder--Narasimhan filtrations of $\cH^i(\cxF)$ and $\im d^i$. Let $\tau$ be the Harder--Narasimhan type of $\cxF$ with respect to these parameters; we assume this is a nontrivial Harder--Narasimhan type (i.e.\ $\cxF$ is unstable with respect to $(\underline{1}, \delta \ue/\epsilon)$).
Let $H_{i,j}$ (resp. $I_{i,j}$) denote the Hilbert polynomial of the $j$th successive quotient $\cH^i(\cxF)_j$ (resp. $\im d^i_j$) in the Harder--Narasimhan filtration of the sheaf $\cH^i(\cxF)$ (resp. $\im d^i$) for $1 \leq j \leq s_i$ (resp. $1 \leq j \leq t_i$). We also let ${H}_{i,j} =( H^{k}_{i,j})_{k \in \ZZ}$ and ${I}_{i,j} =(I^{k}_{i,j})_{k \in \ZZ}$ denote the collection of Hilbert polynomials given by \[ H^{k}_{i,j} = \left\{ \begin{array}{ll} H_{i,j} & \mathrm{if} \: k = i, \\ 0 & \mathrm{otherwise}, \end{array} \right. \quad \mathrm{and} \quad I^{k}_{i,j}= \left\{ \begin{array}{ll} I_{i,j} & \mathrm{if} \: k = i,i+1, \\ 0 & \mathrm{otherwise}. \end{array} \right.\] Then the Harder--Narasimhan type of $\cxF$ is given by \[ \tau = ( {H}_{m_1,1}, \dots, {H}_{m_1,s_{m_1}}, {I}_{m_1,1}, \dots, {I}_{m_1,t_{m_1}}, {H}_{m_1+1,1}, \dots, {H}_{m_2,s_{m_2}} ) \] which we will frequently abbreviate to $\tau = ({H}, {I})$ where ${H}=(H^{k}_{i,j})_{i,j,k \in \ZZ}$ and ${I}=(I^{k}_{i,j})_{i,j,k \in \ZZ}$.
\begin{ass}\label{ass on epsilon}
We may also assume $\epsilon$ is sufficiently small so that the only $(\underline{1}, \delta \ue/\epsilon)$-semistable complexes with Hilbert polynomials ${I}_{i,j}$ are isomorphic to cones on the identity morphism of a torsion free semistable sheaf. \end{ass}
\subsection{Boundedness}
We first give a general boundedness result for complexes of fixed Harder--Narasimhan type:
\begin{lemma}\label{cxs HN type bdd} The set of sheaves occurring in a complex of torsion free sheaves with Harder--Narasimhan type $({P}_{1}, \dots ,{P}_{s})$ with respect to $(\underline{\sigma}, \underline{\chi})$ is bounded. \end{lemma} \begin{proof} This follows from a result of Simpson (see \cite{simpson} Theorem 1.1) which states that a collection of torsion free sheaves on $X$ of fixed Hilbert polynomial is bounded if the slopes of their subsheaves are bounded above by a fixed constant. Recall that the slope of a sheaf is (up to multiplication by a positive constant) the second to top coefficient in its reduced Hilbert polynomial. Let $\mathcal{E}^\cdot$ be a complex with this Harder--Narasimhan type; then for any subcomplex $\cxG$ of $\mathcal{E}^\cdot$ we have \[ P_{\underline{\sigma}, \underline{\chi}}^{\mathrm{red}}(\cxG) \leq P_{\underline{\sigma}, \underline{\chi}}^{\mathrm{red}}(\mathcal{E}^\cdot_{1}) = \frac{\sum_i \sigma_i P^i_1 - \delta \sum_i \eta_i r^i_1}{\sum_i \sigma_i r^i_1}=:R\] where $\mathcal{E}^\cdot_1$ is the maximal destabilising subcomplex of $\mathcal{E}^\cdot$ which has Hilbert polynomials specified by ${P}_1$. Suppose $\cG$ is a subsheaf of $\mathcal{E}^i$ and consider the subcomplex \[ \cxG : \quad 0 \to \cdots \to 0 \to \cG \to \mathcal{E}^{i+1} \to \cdots \to \mathcal{E}^{m_2}; \] then we have an inequality of polynomials
\[ \frac{\sigma_i P(\cG) - \delta \eta_i \rk \cG+\sum_{j >i} \sigma_j P^j_1 - \delta \sum_{j>i} \eta_j r^j_1}{\sigma_i \rk \cG+ \sum_{j>i} \sigma_j r^j_1} \leq R. \] The top coefficients agree and so we have an inequality
\begin{equation*} \frac{\deg \cG}{ \rk \cG}\leq
\frac{\delta^{\mathrm{top}}}{\sigma_i} \left(\eta_i + \frac{ \sum_{j>i} \eta_j r^j_1}{ \rk \cG}\right) - \frac{\sum_{j>i}\sigma_j d^j_1 }{\sigma_i \rk \cG} + \left( 1 + \frac{\sum_{j>i}\sigma_j r^j_1}{\sigma_i \rk \cG}\right) \left(\frac{ \sum_j \sigma_j d^j_1 - \delta^{\mathrm{top}} \sum_j \eta_j r^j_1}{\sum_j \sigma_j r^j_1} \right) \end{equation*} where $d^j_1$ is (up to multiplication by a positive constant) the second to top coefficient of $P^j_1$ and $\delta^\mathrm{top}$ is (up to multiplication by a positive constant) the leading coefficient of $\delta$. Since the rank of $\cG$ is bounded, it follows that the slopes of the subsheaves $\cG \subset \mathcal{E}^i$ are bounded above. \end{proof}
As the set of sheaves occurring in complexes of a given Harder--Narasimhan type are bounded, we can pick $n$ so that they are all $n$-regular and thus parametrised by $\mathfrak{T}} \def\fD{\mathfrak{D}} \def\fB{\mathfrak{B}(n)$.
\begin{cor}\label{how to pick n cx} Let $({P}_{1}, \cdots ,{P}_{s})$ be a Harder--Narasimhan type with respect to $(\underline{\sigma}, \underline{\chi})$. Then we can choose $n$ sufficiently large so that for $1\leq i_{1} < \dots < i_k \leq s$ all the sheaves occurring in a complex of torsion free sheaves with Harder--Narasimhan type $({P_{i_1}}, \cdots ,{P_{i_k}})$ are $n$-regular. \end{cor}
\begin{ass}\label{assum on n} Let $\tau=({H}, {I})$ be the Harder--Narasimhan type of the complex $\cxF$ we fixed in $\S$\ref{GIT set up}. We assume $n$ is sufficiently large so that the statement of Corollary \ref{how to pick n cx} holds for this Harder--Narasimhan type. In particular this means every complex $\mathcal{E}^\cdot$ with Harder--Narasimhan type $\tau$ is parametrised by $\mathfrak{T}^{tf}$. \end{ass}
\subsection{The associated index}
In this section we associate to the complex $\cxF$ with Harder--Narasimhan type $\tau$ a rational weight $\beta(\tau,n)$ which we will show later on is an index for an unstable stratum in the stratification defined at (\ref{snail}) when $n$ is sufficiently large.
Let $z=(q,[\psi : 1]) $ be a point in $\mathfrak{T}^{tf}$ which parametrises the fixed complex $\cxF$ with Harder--Narasimhan type $\tau$. As mentioned in the introduction, the stratification can also be described by using Kempf's notion of adapted 1-PSs and so rather than searching for a rational weight $\beta$ we look for a (rational) 1-PS $\lambda_{\beta}$ which is adapted to $z$. By definition, a 1-PS $\lambda$ is adapted to $z$ if it minimises the (normalised) Hilbert--Mumford function
\[ \frac{\mu^{\cL} (z, \lambda)}{||\lambda||} = \min_{\lambda'} \frac{ \mu^{\cL}(z, \lambda')}{|| \lambda'||}\]
and is therefore most responsible for the instability of $z$. It is natural to expect that $\lambda$ should induce a filtration of $\cxF$ which is most responsible for the instability of this complex; that is, its Harder--Narasimhan filtration. To distinguish between the cohomology and image parts of this Harder--Narasimhan filtration we write the Harder--Narasimhan filtration of $\cxF$ as \[ 0 \subsetneq \mathcal{A}^\cdot_{m_1,(1)} \subsetneq \cdots \subsetneq \mathcal{A}^\cdot_{m_1,(s_{m_1})} \subsetneq \mathcal{B}^\cdot_{m_1,(1)} \subsetneq \cdots \subsetneq \mathcal{B}^\cdot_{m_1,(t_{m_1})} \subsetneq \mathcal{A}^\cdot_{m_1 +1,(1)} \subsetneq \cdots \subsetneq \mathcal{A}^\cdot_{m_2,(s_{m_2})}=\cxF \] where the quotient $\mathcal{A}^\cdot_{k,j}$ (resp. $\cB^\cdot_{k,j}$) of $\mathcal{A}^{\cdot}_{k,(j)}$ (resp. $\cB^\cdot_{k,(j)}$) by its predecessor is isomorphic to $\cH^k(\cxF)_j[-k]$ (resp. $\mathrm{Cone}(\mathrm{id}_{\im d^k_j})[-(k+1)]$). Such a filtration induces filtrations of the vector space $V^i$ for $m_1 \leq i \leq m_2$: \begin{equation}\label{filtr of Vi} 0 \subset V^i_{m_1,(1)} \subset \cdots \subset V^i_{m_1,(s_{m_1})} \subset W^i_{m_1,(1)} \subset \cdots \subset W^i_{m_1,(t_{m_1})} \subset \dots \subset V^i_{m_2,(s_{m_2})} = V^i \end{equation} where \[V^i_{k,(j)} := H^0(q^i(n))^{-1}H^0(\mathcal{A}^i_{k,(j)}(n)) \quad \mathrm{and} \quad W^i_{k,(j)} := H^0(q^i(n))^{-1}H^0(\mathcal{B}^i_{k,(j)}(n)).\] Let $V^i_{k,j}$ (respectively $W^i_{k,j}$) denote the quotient of $V^i_{k,(j)}$ (respectively $W^i_{k,(j)}$) by its predecessor in this filtration. Note that by the construction of the Harder--Narasimhan filtration (see Theorem \ref{HN filtrations for epsiloneta}) we have that $V^i_{k,j} =0$ unless $k=i$ and $W^i_{k,j} =0$ unless $k = i,i-1$ and we also have an isomorphism $W^i_{i,j} \cong W^{i+1}_{i,j}$.
Given integers $a_{k,j}$ for $m_1\leq k \leq m_2$ and $1 \leq j \leq s_k$ and integers $b_{k,j}$ for $m_1\leq k \leq m_2 -1$ and $1 \leq j \leq t_k$ which satisfy \begin{equation}\begin{split}\label{decr weights} i) & \quad a_{m_1,1} > \dots > a_{m_1,s_{m_1}} > b_{m_1,1} > \dots > b_{m_1,t_{m_1}} > a_{m_1 +1,1} > \dots > a_{m_2, s_{m_2}} \\ ii) & \quad \sum_{i=m_1}^{m_2} \sum_{j=1}^{s_i} a_{i,j} \dim V^i_{i,j} + 2 \sum_{i=m_1}^{m_2-1} \sum_{j=1}^{t_i} b_{i,j} \dim W^i_{i,j} =0, \end{split}\end{equation} we can define a 1-PS $\lambda(\underline{a}, \underline{b})$ of $G$ as follows. Let $V^i_k = \oplus_{j=1}^{s_k} V^i_{k,j}$ and $W^i_k = \oplus_{j=1}^{t_k} W^i_{k,j}$; then define 1-PSs $\lambda_{k}^{H,i} : \mathbb{C}^* \rightarrow \mathrm{GL}(V^i_k)$ and $\lambda_{k}^{I,i} : \mathbb{C}^* \rightarrow \mathrm{GL}(W^i_k)$ by \[ \lambda^{H,i}_k (t)= \left( \begin{array}{ccc} t^{a_{k,1}}I_{V^i_{k,1}} & & \\ & \ddots & \\ & & t^{a_{k,s_k}}I_{V^i_{k,s_k}} \end{array} \right) \quad \lambda_{k}^{I,i} (t)= \left( \begin{array}{ccc} t^{b_{k,1}}I_{W^i_{k,1}} & & \\ & \ddots & \\ & & t^{b_{k,t_k}}I_{W^i_{k,t_k}} \end{array} \right) \] Then $\lambda(\underline{a}, \underline{b}):= (\lambda_{m_1}, \dots, \lambda_{m_2})$ is given by \begin{equation}\label{1ps def} \lambda_i(t): = \left( \begin{array}{ccc} \lambda_{i-1}^{I,i}(t) & & \\ & \lambda^{H,i}_i(t) & \\ & & \lambda^{I,i}_i(t) \end{array} \right) \in \mathrm{GL}(V^i)=\mathrm{GL}(W^i_{i-1} \oplus V^i_i \oplus W^i_i). \end{equation}
For all pairs $(\underline{a}, \underline{b})$ the associated 1-PS $\lambda(\underline{a}, \underline{b})$ of $G$ induces the Harder--Narasimhan filtration of $\cxF$ and so by Proposition \ref{HM prop} \begin{equation*}\begin{split} {\mu^{\mathcal{L}}(z, \lambda(\underline{a}, \underline{b}) )} = & \sum_{i=m_1}^{m_2} \sum_{j=1}^{s_i} a_{i,j} \left( \frac{P_{\underline{1}}(n)}{r_{\underline{1}}\delta(n)} + \frac{{\eta}'_i}{\epsilon} \right) \rk \mathcal{H}^i(\mathcal{F}_\cdot)_j \\ & + \sum_{i=m_1}^{m_2-1} \sum_{j=1}^{t_i} b_{i,j} \left( 2\frac{P_{\underline{1}}(n)}{r_{\underline{1}}\delta(n)} + \frac{{\eta}_i' + {\eta}'_{i+1}}{\epsilon} \right) \rk \im d^i_j \end{split}\end{equation*} where $P_{\underline{1}} = \sum_i P^i$ and $r_{\underline{1}} = \sum_i r^i$ and $(\underline{1}, \delta \ue'/\epsilon)$ are the stability parameters associated to $(\underline{1}, \delta \ue/\epsilon)$ which satisfy $\sum_i \eta_i' r^i = 0$ (cf. Remark \ref{normalise}).
We define \[ a_{i,j} := \frac{1}{\delta(n)} -\left( \frac{P_{\underline{1}}(n)}{r_{\underline{1}} \delta(n)} + \frac{{\eta}'_i}{\epsilon} \right) \frac{\rk(H_{i,j})}{H_{i,j}(n)} \quad \mathrm{and} \quad b_{i,j} := \frac{1}{\delta(n)} -\left( \frac{P_{\underline{1}}(n)}{r_{\underline{1}} \delta(n)} + \frac{{\eta}'_i+{\eta}'_{i+1}}{2\epsilon} \right) \frac{\rk(I_{i,j})}{I_{i,j}(n)} \] where $\rk(H_{i,j})$ and $\rk(I_{i,j})$ are the ranks determined by the leading coefficients of the polynomials $H_{i,j}$ and $I_{i,j}$.
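\begin{rmk} The weights defined above do satisfy condition $ii$) of (\ref{decr weights}): since $\dim V^i_{i,j} = H_{i,j}(n)$ and $\dim W^i_{i,j} = I_{i,j}(n)$, substituting the definitions of $a_{i,j}$ and $b_{i,j}$ gives \[ \sum_{i=m_1}^{m_2} \sum_{j=1}^{s_i} a_{i,j} H_{i,j}(n) + 2 \sum_{i=m_1}^{m_2-1} \sum_{j=1}^{t_i} b_{i,j} I_{i,j}(n) = \frac{P_{\underline{1}}(n)}{\delta(n)} - \frac{P_{\underline{1}}(n)}{r_{\underline{1}}\delta(n)} \left( \sum_{i,j} \rk H_{i,j} + 2 \sum_{i,j} \rk I_{i,j} \right) - \frac{1}{\epsilon} \sum_{i=m_1}^{m_2} \eta'_i r^i \] where we use $\sum_{i,j} H_{i,j}(n) + 2\sum_{i,j} I_{i,j}(n) = \sum_i \dim V^i = P_{\underline{1}}(n)$ and regroup the $\eta'$-terms using $r^i = \sum_j \rk H_{i,j} + \sum_j \rk I_{i,j} + \sum_j \rk I_{i-1,j}$. As $\sum_{i,j} \rk H_{i,j} + 2\sum_{i,j} \rk I_{i,j} = r_{\underline{1}}$ and $\sum_i \eta'_i r^i =0$, the right hand side vanishes. \end{rmk}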
The rational numbers $(\underline{a}, \underline{b})$ defined above are those which minimise the normalised Hilbert--Mumford function subject to condition $ii$) of (\ref{decr weights}) where $z \in \mathfrak{T}$ is the point which represents the complex $\cxF$ with Harder--Narasimhan type $\tau$. The choice of $\epsilon$ given by Theorem \ref{HN filtrations for epsiloneta} ensures that $(\underline{a}, \underline{b})$ also satisfy the inequalities $i$) of (\ref{decr weights}) for all sufficiently large $n$.
We choose a maximal torus $T_i$ of the maximal compact subgroup $\mathrm{U}(V^i)$ of $\mathrm{GL}(V^i)$ as follows. Take a basis of $V^i$ which is compatible with the filtration of $V^i$ defined at (\ref{filtr of Vi}) and define $T_i$ to be the maximal torus of $\mathrm{U}(V^i)$ given by taking diagonal matrices with respect to this basis. We pick the positive Weyl chamber \[ \mathfrak{t_i}_+ := \{ i \mathrm{diag}(a_1, \dots, a_{\dim V^i}) \in \mathfrak{t_i}: a_1 \geq \dots \geq a_{\dim V^i} \} .\]
Let $T$ be the maximal torus of the maximal compact subgroup of $G$ determined by the maximal tori $T_i$ and let $\mathfrak{t}_+$ be the positive Weyl chamber associated to the $\mathfrak{t_i}_+$.
\begin{defn}\label{defn of beta cxs}
We define $\beta = \beta(\tau,n) \in \mathfrak{t}_+$ to be the point defined by the rational weights \[\beta_i = i \mathrm{diag} (b_{i-1,1}, \dots, b_{i-1,t_{i-1}}, a_{i,1}, \dots, a_{i,s_i}, b_{i,1}, \dots, b_{i,t_i}) \in \mathfrak{t_i}_+\] where $a_{i,j}$ appears $H_{i,j}(n)$ times and $b_{k,j}$ appears $I_{k,j}(n)$ times. This rational weight defines a rational 1-PS $ \lambda_{\beta}$ of $G$ by $\lambda_{\beta} = \lambda(\underline{a}, \underline{b})$. \end{defn}
\subsection{Describing components of $Z_\beta$}
By Remark \ref{rmk on fixed pts}, the $\lambda_{\beta}(\CC^*)$-fixed point locus of $\overline{\mathfrak{T}}^{tf}$ decomposes into three pieces: a diagonal piece, a strictly upper triangular piece and a strictly lower triangular piece. Each of these pieces decomposes further in terms of the Hilbert polynomials of the direct summands of each sheaf in this complex. We are interested in the component(s) of the diagonal part which may contain the graded object
associated to the Harder--Narasimhan filtration of a complex $\mathcal{E}^\cdot$ of Harder--Narasimhan type $\tau$.
We consider the closed subscheme $F_\tau$ of $\overline{\mathfrak{T}}^{tf}$ consisting of $z = (q, [\psi : \zeta])$ where as usual $\psi$ is determined by boundary maps $d^i$ and we have decompositions \[ q^i = \bigoplus_{j=1}^{t_{i-1}} p^i_{i-1,j} \oplus \bigoplus_{j=1}^{s_i} q^i_{i,j} \oplus \bigoplus_{j=1}^{t_i} p^i_{i,j} \quad \mathrm{and} \quad d^i = \oplus_{j=1}^{t_i}d^i_j\] where $q^i_{i,j} : V^i_{i,j} \otimes \mathcal{O}_X(-n) \rightarrow \mathcal{E}^i_{i,j}$ is a point in $\mathrm{Quot}(V^i_{i,j} \otimes \mathcal{O}_X(-n), H^i_{i,j})$ and $p^i_{k,j}: W^i_{k,j} \otimes \mathcal{O}_X(-n) \rightarrow \mathcal{G}^i_{k,j}$ is a point in $\mathrm{Quot}(W^i_{k,j} \otimes \mathcal{O}_X(-n), I^i_{k,j})$ and $d^i_j : \mathcal{G}^i_{i,j} \rightarrow \mathcal{G}_{i,j}^{i+1}$.
Following the discussion above we have:
\begin{lemma}\label{fixed pt locus} $F_\tau$ is a union of connected components of the fixed point locus of $\lambda_{\beta}$ acting on $\overline{\mathfrak{T}}^{tf}$ which is contained completely in the diagonal part of this fixed point locus.
\end{lemma}
\begin{rmk}\label{descr of F and Ttau for cx} Every point in $\mathfrak{T}_{(\tau)}:=F_\tau \cap \mathfrak{T}^{tf}$ is a direct sum of complexes with Hilbert polynomials specified by $\tau$ and hence we have an isomorphism \[ \mathfrak{T}_{{H}_{m_1,1}} \times \cdots \times \mathfrak{T}_{{H}_{m_1,s_{m_1}}} \times \mathfrak{T}^{tf}_{{I}_{m_1,1}} \times \cdots \times \mathfrak{T}^{tf}_{{I}_{m_1,t_{m_1}}} \times \mathfrak{T}_{{H}_{m_1 +1,1}}\times \cdots \times \mathfrak{T}_{{H}_{m_2,s_{m_2}}} \cong \mathfrak{T}_{(\tau)}.\] \end{rmk}
Let $Z_\beta$ and $Y_\beta$ denote the subschemes of the stratum $S_\beta$ as defined in the introduction.
\begin{lemma}\label{descr of Zbeta} Let $ z \in \mathfrak{T}_{(\tau)}:=F_\tau \cap \mathfrak{T}^{tf} $; then $z \in Z_\beta$. \end{lemma} \begin{proof} Let $z = ( q, [\psi: 1]) $ be a point of $\mathfrak{T}_{(\tau)}$ as above. The weight of the action of $\lambda_{\beta}$ on $z$ is equal to $-\mu^{\mathcal{L}}(z, \lambda_\beta )$ and by Proposition \ref{HM prop} we have \begin{equation*} {\mu^{\mathcal{L}}(z, \lambda_\beta )} = \sum_{i=m_1}^{m_2} \sum_{j=1}^{s_i} a_{i,j} \left( \frac{P_{\underline{1}}(n)}{r_{\underline{1}}\delta(n)} + \frac{{\eta}'_i}{\epsilon} \right) \rk \mathcal{E}^i_{i,j} + \sum_{i=m_1}^{m_2-1} \sum_{j=1}^{t_i} b_{i,j} \left( 2\frac{P_{\underline{1}}(n)}{r_{\underline{1}}\delta(n)} + \frac{{\eta}'_i + {\eta}'_{i+1}}{\epsilon} \right) \rk \mathcal{G}^i_{i,j}. \end{equation*}
By definition $Z_\beta$ is the union of the connected components of the fixed point locus for the action of $\lambda_{\beta}$ on which $\lambda_{\beta}$ acts with weight $|| \beta||^2$ and it is easy to check that the choice of rational numbers $(\underline{a}, \underline{b})$ ensures that $|| \beta ||^2=-\mu^{\mathcal{L}}(z, \lambda_\beta )$ which completes the proof. \end{proof}
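\begin{rmk} The identity $|| \beta ||^2 = -\mu^{\mathcal{L}}(z, \lambda_\beta )$ can be verified directly (here $||\beta||^2$ is computed with respect to the standard invariant inner product, so that it is the sum of the squares of the diagonal weights of $\beta$). From the definitions of $a_{i,j}$ and $b_{i,j}$ we have \[ \left( \frac{P_{\underline{1}}(n)}{r_{\underline{1}}\delta(n)} + \frac{{\eta}'_i}{\epsilon} \right) \rk \mathcal{E}^i_{i,j} = \left( \frac{1}{\delta(n)} - a_{i,j} \right) H_{i,j}(n) \quad \mathrm{and} \quad \left( 2\frac{P_{\underline{1}}(n)}{r_{\underline{1}}\delta(n)} + \frac{{\eta}'_i + {\eta}'_{i+1}}{\epsilon} \right) \rk \mathcal{G}^i_{i,j} = 2\left( \frac{1}{\delta(n)} - b_{i,j} \right) I_{i,j}(n) \] and therefore \[ \mu^{\mathcal{L}}(z, \lambda_\beta ) = \frac{1}{\delta(n)} \left( \sum_{i,j} a_{i,j} H_{i,j}(n) + 2\sum_{i,j} b_{i,j} I_{i,j}(n) \right) - \left( \sum_{i,j} a_{i,j}^2 H_{i,j}(n) + 2\sum_{i,j} b_{i,j}^2 I_{i,j}(n) \right) = -||\beta||^2 \] since the first bracket vanishes by condition $ii$) of (\ref{decr weights}) and the second bracket is $||\beta||^2$, each weight $a_{i,j}$ appearing $H_{i,j}(n)$ times in $\beta_i$ and each $b_{i,j}$ appearing $I_{i,j}(n)$ times in both $\beta_i$ and $\beta_{i+1}$. \end{rmk}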
\begin{cor} The scheme $ \mathfrak{T}_{(\tau)} $ is a union of connected components of $Z_\beta \cap \mathfrak{T}^{tf}$. \end{cor}
Let $F$ be the union of connected components of $Z_\beta$ meeting $\mathfrak{T}_{(\tau)}$; then $F $ is a closed subscheme of $\overline{\mathfrak{T}}^{tf}$ which is completely contained in the diagonal part of $Z_{\beta}$. Consider the subgroup \[ \mathrm{Stab} \: \beta = \left( \prod_{i=m_1}^{m_2} \prod_{j=1}^{s_i} \mathrm{GL}(V^i_{i,j}) \times \prod_{i=m_1}^{m_2-1} \prod_{j=1}^{t_i} \mathrm{GL}(W^i_{i,j}) \times \mathrm{GL}(W^{i+1}_{i,j}) \right) \cap \mathrm{SL}(\oplus_{i=m_1}^{m_2} V^i) \] of $G$ which is the stabiliser of $\beta$ under the adjoint action of $G$.
\begin{lemma}\label{descr of Ghat} $ \mathrm{Stab} \: \beta$ has a central subgroup \[ \hat{G}= \left\{ (u_{i,j}, w_{i,j}) \in (\mathbb{C}^*)^{\sum_{i=m_1}^{m_2} s_i +\sum_{i=m_1}^{m_2-1} t_i} : \prod_{i=m_1}^{m_2} \prod_{j=1}^{s_i} (u_{i,j})^{H_{i,j}(n)} \times \prod_{i=m_1}^{m_2-1} \prod_{j=1}^{t_i} (w_{i,j})^{2I_{i,j}(n)} =1 \right\} \] which fixes every point of $F$. This subgroup acts on the fibre of $\mathcal{L}$ over any point of $F$ by a character ${\chi}_F : \hat{G} \rightarrow \mathbb{C}^* $ given by \[ {\chi}_F(u_{i,j},w_{i,j})=\prod_{i=m_1}^{m_2} \prod_{j=1}^{s_i} (u_{i,j})^{- \rk H_{i,j} \left(\frac{P_{\underline{1}}(n)}{r_{\underline{1}} \delta (n)} + \frac{\eta'_i}{\epsilon} \right) } \prod_{i=m_1}^{m_2-1} \prod_{j=1}^{t_i} (w_{i,j})^{- \rk I_{i,j} \left(\frac{2P_{\underline{1}}(n)}{r_{\underline{1}} \delta (n)} + \frac{\eta'_i + \eta'_{i+1}}{\epsilon} \right) }.\] \end{lemma} \begin{proof} The inclusion of $\hat{G}$ into $ \mathrm{Stab} \: \beta$ is given by \[(u_{i,j} \: , \: w_{i,j}) \mapsto (u_{i,j} I_{V^i_{i,j}} \: , \: w_{i,j} I_{W^i_{i,j}} \: , \: w_{i,j} I_{W^{i+1}_{i,j}}) .\] Let $z = (q, [ \psi : \zeta]) $ be a point of $F$; then we have a decomposition \[ q^i = \bigoplus_{j=1}^{t_{i-1}} p^i_{i-1,j} \oplus \bigoplus_{j=1}^{s_i} q^i_{i,j} \oplus \bigoplus_{j=1}^{t_i} p^i_{i,j} . \]
A copy of $\mathbb{C}^*$ acts trivially on each quot scheme and so the central subgroup $\hat{G}$ fixes this quotient sheaf. The boundary maps are of the form $d^{i} = \oplus_{j=1}^{t_i} d^{i}_j$ where $d^{i}_j : \mathcal{G}^i_{i,j} \rightarrow \mathcal{G}^{i+1}_{i,j}$. As $ (u_{i,j} \: , \: w_{i,j}) \in \hat{G}$ acts on both $\mathcal{G}^i_{i,j}$ and $\mathcal{G}^{i+1}_{i,j}$ by multiplication by $w_{i,j}$, the boundary maps are also fixed by the action of $\hat{G}$.
To calculate the character ${\chi}_F : \hat{G} \rightarrow \mathbb{C}^* $ with which this torus acts we fix $(u_{i,j} \: , \: w_{i,j}) \in \hat{G}$ and calculate the weight of the action of this element on the fibre over a point $z \in F$ by modifying the calculations for $\mathbb{C}^*$-actions in $\S$\ref{calc HM for cx} to general torus actions. \end{proof}
Let $\cL^{\chi_{-\beta}}$ denote the linearisation of the $\mathrm{Stab} \: \beta$ action on $Z_\beta$ given by twisting the original linearisation $\mathcal{L}$ by the character $\chi_{-\beta}$ associated to $-\beta$. Recall that $Z_\beta^{ss}$ is the GIT semistable set with respect to this linearisation. Consider the subgroup \begin{equation*} \begin{aligned} G' &:= \prod_{i=m_1}^{m_2} \prod_{j=1}^{s_i} \SL(V^i_{i,j}) \times \prod_{i=m_1}^{m_2-1} \prod_{j=1}^{t_i} (\mathrm{GL}(W^i_{i,j}) \times \mathrm{GL}(W^{i+1}_{i,j}) )\cap \SL(W^i_{i,j} \oplus W^{i+1}_{i,j}) \\ &= \left\{ (g^i_j, h^i_{i,j}, h^{i+1}_{i,j} ) \in \mathrm{Stab} \: \beta : \det g^i_j = 1 \: \mathrm{and} \: \det h^i_{i,j} \det h^{i+1}_{i,j} =1 \right\}. \end{aligned}\end{equation*}
\begin{prop}\label{descr of Fss} Let $F$ be the components of $Z_\beta$ which meet $\mathfrak{T}_{(\tau)}$; then
\[ F^{\mathrm{Stab} \: \beta-ss}(\mathcal{L}^{\chi_{-\beta}}) = F^{G'-ss}(\mathcal{L}). \] \end{prop} \begin{proof} There is a surjective homomorphism $\Phi $ from $ \mathrm{Stab} \: \beta $ to the central subgroup $ \hat{G} $ defined in Lemma \ref{descr of Ghat}: \[ \Phi( g_{j}^i, h^{i}_{i,j}, h^{i+1}_{i,j} ) = ( (\det g^{i}_j)^{D/H_{i,j}(n)} , (\det h^i_{i,j} \det h^{i+1}_{i,j})^{D/2I_{i,j}(n)} ) \] where $D= \prod_{i=m_1}^{m_2} \prod_{j=1}^{s_i} H_{i,j}(n) \times \prod_{i=m_1}^{m_2-1} \prod_{j=1}^{t_i} 2I_{i,j}(n)$. The composition of the inclusion of $\hat{G}$ into $\mathrm{Stab} \: \beta$ with $\Phi$ is \[ (u_{i,j}, w_{i,j}) \mapsto ( u_{i,j}^D, w_{i,j}^D). \] Therefore, $\ker \Phi \times \hat{G}$ surjects onto $\mathrm{Stab} \: \beta$ with finite kernel and, since GIT semistability is unchanged under surjective homomorphisms with finite kernel, we have $ F^{\mathrm{Stab} \: \beta -ss} (\mathcal{L}^{\chi_{-\beta}}) = F^{\ker \Phi \times \hat{G} -ss} (\mathcal{L}^{\chi_{-\beta}}) .$
Observe that the restriction of $\chi_\beta$ to the central subgroup $\hat{G}$ \[ \chi_\beta (u_{i,j},w_{i,j}) = \prod_{i=m_1}^{m_2} \prod_{j=1}^{s_i} u_{i,j}^{a_{i,j} H_{i,j}(n)} \times \prod_{i=m_1}^{m_2-1} \prod_{j=1}^{t_i} w_{i,j}^{b_{i,j} 2I_{i,j}(n)} \] is equal to the character $\chi_F : \hat{G} \rightarrow \mathbb{C}^*$ defined in Lemma \ref{descr of Ghat}. As we are considering the action of $\ker \Phi \times \hat{G}$ on $F$ linearised by $\mathcal{L}^{\chi_{-\beta}}$, the effects of the action of $\hat{G}$ and the modification by the character corresponding to $-\beta$ cancel so that $ F^{\ker \Phi \times \hat{G} -ss} (\mathcal{L}^{\chi_{-\beta}}) = F^{\ker \Phi -ss} (\mathcal{L}) .$ Finally note that $G'$ injects into $\ker \Phi$ with finite cokernel and so $F^{\ker \Phi -ss} (\mathcal{L}) = F^{G' -ss} (\mathcal{L})$ which completes the proof.
\end{proof}
\subsection{A description of the stratification}\label{sec with thm}
Consider the semistable subscheme \[\mathfrak{T}_{(\tau)}^{ss} := \mathfrak{T}_{(\tau)}^{\mathrm{Stab} \: \beta -ss}(\mathcal{L}^{\chi_{-\beta}})\] for the $\mathrm{Stab} \: \beta$-action on $ \mathfrak{T}_{(\tau)}$ with respect to $\mathcal{L}^{\chi_{-\beta}}$. Recall from Remark \ref{descr of F and Ttau for cx} that we have an isomorphism \[\mathfrak{T}_{(\tau)} \cong \mathfrak{T}_{{H}_{m_1,1}} \times \cdots \times \mathfrak{T}_{{H}_{m_1,s_{m_1}}} \times \mathfrak{T}^{tf}_{{I}_{m_1,1}} \times \cdots \times \mathfrak{T}^{tf}_{{I}_{m_1,t_{m_1}}} \times \mathfrak{T}_{{H}_{m_1 +1,1}}\times \cdots \times \mathfrak{T}_{{H}_{m_2,s_{m_2}}}.\] Let \[z = \bigoplus_{i=m_1}^{m_2} \bigoplus_{j=1}^{s_i} z_{i,j} \oplus \bigoplus_{i=m_1}^{m_2-1} \bigoplus_{j=1}^{t_i} y_{i,j} \]
be a point in $\mathfrak{T}_{(\tau)}$; that is, $z_{i,j}=(q^i_{i,j}, [0:1])$ is a point in $\mathfrak{T}_{{H}_{i,j}}$ corresponding to a complex $\mathcal{H}^\cdot_{i,j}$ concentrated in degree $i$ and $y_{i,j}=(p^i_{i,j}, p^{i+1}_{i,j}, [\varphi^{i}_j:1])$ is a point in $\mathfrak{T}^{tf}_{{I}_{i,j}}$ corresponding to a complex $\mathcal{I}^\cdot_{i,j}$ concentrated in degrees $i$ and $i+1$. By Proposition \ref{descr of Fss}, we have \[\mathfrak{T}_{(\tau)}^{ss} =\mathfrak{T}_{(\tau)}^{G'-ss}(\mathcal{L}|_{\mathfrak{T}_{(\tau)}});\] therefore, $z$ is in $\mathfrak{T}_{(\tau)}^{ss}$ if and only if $\mu^{\mathcal{L}}(z,{\lambda}) \geq 0$ for every 1-PS $\lambda$ of $G'$. A 1-PS $\lambda$ of $G'$ is given by \begin{itemize} \item 1-PSs $\lambda^H_{i,j}$ of $\mathrm{SL}(V^i_{i,j})$ and
\item 1-PSs $\lambda^I_{i,j} =(\lambda^{I,i}_{i,j}, \lambda^{I,i+1}_{i,j})$ of $(\mathrm{GL}(W^i_{i,j}) \times \mathrm{GL}(W^{i+1}_{i,j}) )\cap \mathrm{SL}(W^i_{i,j} \oplus W^{i+1}_{i,j})$. \end{itemize}
\begin{lemma}\label{lemma 1} Suppose $n$ is sufficiently large. Then for any $z \in \mathfrak{T}_{(\tau)}$ as above for which a direct summand $\mathcal{H}^\cdot_{i,j}$ or $\mathcal{I}^\cdot_{i,j}$ is $(\underline{1}, \delta \ue /\epsilon)$-unstable, there is a 1-PS $\lambda$ of $G'$ such that $\mu^{\mathcal{L}}(z,{\lambda}) < 0$. \end{lemma} \begin{proof} We suppose $n$ is sufficiently large so that Gieseker semistability of a torsion free sheaf with Hilbert polynomial $H_{i,j}$ (respectively $I_{i,j}$) is equivalent to GIT-semistability of a point in the relevant quot scheme representing this sheaf with respect to the linearisation given by Gieseker. We may also assume $n$ is sufficiently large so that for $m_1 \leq i \leq m_2-1$ and $1 \leq j \leq t_i$, we have that $(\underline{1}, \delta \underline{\eta}/\epsilon)$-semistability of a complex with Hilbert polynomials ${I}_{i,j}$ is equivalent to GIT semistability of a point in $\mathfrak{T}_{{I}_{i,j}}$ for the linearisation defined by these stability parameters.
Firstly suppose $\mathcal{H}^{i}_{i,j}$ is unstable for some $i$ and $j$; then there exists a subsheaf $\mathcal{H}^{i,1}_{i,j} \subset \mathcal{H}^{i}_{i,j}$ such that \begin{equation}\label{eqn H} \frac{H^0(\mathcal{H}^{i,1}_{i,j}(n))}{\rk \mathcal{H}^{i,1}_{i,j}} > \frac{H_{i,j}(n)}{\rk H_{i,j}} .\end{equation} We construct a 1-PS $\lambda= (\lambda^H_{i,j}, \lambda^I_{i,j})$ of $G'$ with three weights $\gamma_1 > \gamma_2 = 0 > \gamma_3 $. Let \[V^{i,1}_{i,j}= H^0(q^i_{i,j}(n))^{-1}H^0(\mathcal{H}^{i,1}_{i,j}(n))\] and let $V^{i,3}_{i,j}$ be an orthogonal complement to $V^{i,1}_{i,j} \subset V^{i}_{i,j}$. Define \[\lambda^H_{i,j} = \left( \begin{array}{cc} t^{\gamma_1} I_{V^{i,1}_{i,j}} & \\ & t^{\gamma_3} I_{V^{i,3}_{i,j}} \end{array} \right)\] and define all the other parts of $\lambda$ to be trivial (the weights $\gamma_1$ and $\gamma_3$ should be chosen so that $\lambda^H_{i,j}$ has determinant 1). Then by Proposition \ref{HM prop} \[ \mu^{\mathcal{L}}(z,\lambda) = \left(\frac{ P_{\underline{1}}(n)}{r_{\underline{1}} \delta(n)} + \frac{{\eta}'_i}{\epsilon}\right) \left[ H_{i,j}(n) \rk \mathcal{H}^{i,1}_{i,j} - H^0(\mathcal{H}^{i,1}_{i,j}(n)) \rk H_{i,j} \right] < 0 .\]
Secondly suppose $\mathcal{I}^\cdot_{i,j}$ is unstable with respect to $(\underline{1}, \delta\underline{\eta}/\epsilon)$; then by our assumption on $\epsilon$ it is not isomorphic to the cone on the identity map of a semistable sheaf. Let $d: \mathcal{I}^{i}_{i,j} \rightarrow \mathcal{I}_{i,j}^{i+1}$ denote the boundary morphism of this complex. If $d = 0$, then we can choose the 1-PS $\lambda$ to pick out the subcomplex $\mathcal{I}^i_{i,j} \rightarrow 0$. For example, let $\lambda$ have three weights $1 > 0 > -1$ and let the only nontrivial part be \[\lambda^I_{i,j} = (tI_{W^i_{i,j}},t^{-1}I_{W^{i+1}_{i,j}});\] then $\mu^{\mathcal{L}}(z,\lambda) < 0 $. If $d \neq 0$ but has nonzero kernel, then consider the reduced Hilbert polynomial of this kernel. If the kernel has reduced Hilbert polynomial strictly larger than that of $\mathcal{I}^i_{i,j}$, then choose $\lambda$ to pick out the subcomplex $\ker d \rightarrow 0$. If the kernel has reduced Hilbert polynomial strictly smaller than that of $\mathcal{I}^i_{i,j}$, then choose $\lambda$ to pick out the subcomplex $0 \rightarrow \im d$. If the kernel has reduced Hilbert polynomial equal to $I_{i,j}/ \rk I_{i,j}$, then choose $\lambda$ to pick out the subcomplex $\mathcal{I}^i_{i,j} \rightarrow \im d$. In all three cases we see that $\mu^{\mathcal{L}}(z,\lambda) < 0 $. Finally, if $d$ is an isomorphism but $\mathcal{I}^i_{i,j}$ is not Gieseker semistable, then let $\mathcal{I}^{i,1}_{i,j}$ be its maximal destabilising subsheaf. A 1-PS which picks out the subcomplex $\mathcal{I}^{i,1}_{i,j} \rightarrow d^i_j(\mathcal{I}^{i,1}_{i,j})$ will destabilise $z$. \end{proof}
\begin{lemma}\label{lemma 2} Suppose $n$ is sufficiently large and let $z$ be a point in $\mathfrak{T}_{(\tau)}$ such that all the direct summands $\mathcal{H}^\cdot_{i,j}$ and $\mathcal{I}^\cdot_{i,j}$ are semistable with respect to $(\underline{1}, \delta \ue /\epsilon)$. If $\lambda$ is a 1-PS of $G'$ which induces a filtration of $z$ by subcomplexes, then $\mu^{\mathcal{L}}(z,{\lambda}) \geq 0$. \end{lemma} \begin{proof} We suppose $n$ is chosen as in Lemma \ref{lemma 1}. The 1-PS $\lambda$ of $G'$ is given by 1-PSs $\lambda_{i,j}^H$ of $\mathrm{SL}(V^i_{i,j})$ and $\lambda^I_{i,j} =(\lambda^{I,i}_{i,j}, \lambda^{I,i+1}_{i,j})$ of $(\mathrm{GL}(W^i_{i,j}) \times \mathrm{GL}(W^{i+1}_{i,j}) )\cap \mathrm{SL}(W^i_{i,j} \oplus W^{i+1}_{i,j})$. We can diagonalise these 1-PSs simultaneously to get decreasing integers $\gamma_1 > \cdots > \gamma_u$ and decompositions $V^i_{i,j} = V^{i,1}_{i,j} \oplus \cdots \oplus V^{i,u}_{i,j}$ and similarly $W^i_{i,j} = W^{i,1}_{i,j} \oplus \cdots \oplus W^{i,u}_{i,j}$ and $W^{i+1}_{i,j}= W^{i+1,1}_{i,j} \oplus \cdots \oplus W^{i+1,u}_{i,j}$ such that \[\lambda^H_{i,j}(t) = \left( \begin{array}{ccc} t^{\gamma_1} I_{V^{i,1}_{i,j}} & & \\ & \ddots & \\ & & t^{\gamma_u} I_{V^{i,u}_{i,j}} \end{array} \right)\] and similarly for $\lambda^I_{i,j}$. The corresponding filtrations of these vector spaces give rise to filtrations of the sheaves $\mathcal{H}^i_{i,j}$, $\mathcal{I}^i_{i,j}$ and $\mathcal{I}^{i+1}_{i,j}$ and we let $\mathcal{H}^{i,k}_{i,j}$, $\mathcal{I}^{i,k}_{i,j}$ and $\mathcal{I}^{i+1,k}_{i,j}$ denote the successive quotients. 
As $\lambda$ induces a filtration by subcomplexes we have from Proposition \ref{HM prop} that \begin{equation}\begin{split} \mu^{\mathcal{L}}(z,\lambda) = \sum_{i=m_1}^{m_2-1} \sum_{j=1}^{t_i} \sum_{k=1}^u \gamma_k &\left[\left( \frac{ P_{\underline{1}}(n)}{r_{\underline{1}} \delta(n)} +\frac{{\eta}'_i}{\epsilon} \right) \rk \mathcal{I}^{i,k}_{i,j} + \left( \frac{ P_{\underline{1}}(n)}{r_{\underline{1}} \delta(n)} +\frac{{\eta}'_{i+1}}{\epsilon} \right) \rk \mathcal{I}^{i+1,k}_{i,j} \right] \\ &+ \sum_{i=m_1}^{m_2} \sum_{j=1}^{s_i} \sum_{k=1}^u \gamma_k \left( \frac{ P_{\underline{1}}(n)}{r_{\underline{1}} \delta(n)} +\frac{{\eta}'_i}{\epsilon} \right) \rk \mathcal{H}^{i,k}_{i,j}. \end{split} \end{equation} By construction of the linearisation (cf. the definition of $a_i$ in $\S$\ref{linearisation schmitt}), the numbers $\frac{P_{\underline{1}}(n)}{r_{\underline{1}}\delta(n)} + \frac{\eta_i'}{\epsilon}$ are positive. As $\mathcal{H}^i_{i,j}$, $\mathcal{I}^i_{i,j}$ and $\mathcal{I}^{i+1}_{i,j}$ are Gieseker semistable sheaves, \[ \sum_{k=1}^u \gamma_k \rk \mathcal{H}^{i,k}_{i,j} \geq 0 \quad \mathrm{and} \quad \sum_{k=1}^u \gamma_k \left( \rk \mathcal{I}^{l,k}_{i,j} - \frac{\rk I_{i,j}}{I_{i,j}(n)} \dim W^{l,k}_{i,j} \right) \geq 0 \quad \mathrm{for} \: l = i, i+1. 
\] Therefore \begin{equation*} \begin{split} \mu^{\mathcal{L}}(z,\lambda) & \geq \frac{\rk I_{i,j}}{I_{i,j}(n)} \sum_{i=m_1}^{m_2-1} \sum_{j=1}^{t_i} \sum_{k=1}^u \gamma_k \left[\left( \frac{ P_{\underline{1}}(n)}{r_{\underline{1}} \delta(n)} +\frac{{\eta}'_i}{\epsilon} \right)\dim W^{i,k}_{i,j} + \left( \frac{ P_{\underline{1}}(n)}{r_{\underline{1}} \delta(n)} +\frac{{\eta}'_{i+1}}{\epsilon} \right) \dim W^{i+1,k}_{i,j} \right] \\ & = \frac{\rk I_{i,j}}{I_{i,j}(n)} \sum_{i=m_1}^{m_2-1} \sum_{j=1}^{t_i} \sum_{k=1}^u \gamma_k \left(\frac{{\eta}'_i}{\epsilon} \dim W^{i,k}_{i,j} + \frac{{\eta}'_{i+1}}{\epsilon} \dim W^{i+1,k}_{i,j} \right) \end{split} \end{equation*} where the equality comes from the fact that $\lambda^I_{i,j}$ is a 1-PS of $ \mathrm{SL}(W^i_{i,j} \oplus W^{i+1}_{i,j})$ and so the weights satisfy $ \sum_{k=1}^u \gamma_k(\dim W^{i,k}_{i,j} + \dim {W}^{i+1,k}_{i,j}) =0.$ As $\lambda$ induces a filtration by subcomplexes, \[\dim (W^{i,1}_{i,j} \oplus \dots \oplus W^{i,k}_{i,j}) \leq \dim (W^{i+1,1}_{i,j} \oplus \dots \oplus W^{i+1,k}_{i,j})\]
and it follows that $- \sum_{k=1}^u \gamma_k\dim W^{i,k}_{i,j} =\sum_{k=1}^u \gamma_k\dim W^{i+1,k}_{i,j} \geq 0$. Therefore
\[ \mu^{\mathcal{L}}(z,\lambda) \geq \frac{\rk I_{i,j}}{I_{i,j}(n)} \sum_{i=m_1}^{m_2-1} \sum_{j=1}^{t_i}\frac{({\eta}'_{i+1} - {\eta}'_{i}) }{\epsilon}\sum_{k=1}^u \gamma_k \dim W^{i+1,k}_{i,j} \geq 0 .\] \end{proof}
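\begin{rmk} The last step in the above proof is summation by parts: setting $c_k := \dim W^{i+1,k}_{i,j} - \dim W^{i,k}_{i,j}$, the partial sums $C_l := \sum_{k=1}^{l} c_k$ are nonnegative because $\lambda$ induces a filtration by subcomplexes, and $C_u = 0$ since $\dim W^i_{i,j} = \dim W^{i+1}_{i,j} = I_{i,j}(n)$. Hence \[ \sum_{k=1}^u \gamma_k c_k = \sum_{l=1}^{u-1} (\gamma_l - \gamma_{l+1}) C_l \geq 0 \] as the weights $\gamma_k$ are strictly decreasing, and combining this with $\sum_{k=1}^u \gamma_k (\dim W^{i,k}_{i,j} + \dim W^{i+1,k}_{i,j}) = 0$ gives $\sum_{k=1}^u \gamma_k \dim W^{i+1,k}_{i,j} = \frac{1}{2} \sum_{k=1}^u \gamma_k c_k \geq 0$. \end{rmk}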
Let $\mathfrak{T}^{ss}_{{H}_{i,j}} $ (resp. $\mathfrak{T}^{ss}_{{I}_{i,j}} $) be the subscheme of $\mathfrak{T}_{{H}_{i,j}} $ (resp. $\mathfrak{T}^{tf}_{{I}_{i,j}} $) which parametrises $(\underline{1}, \delta\ue /\epsilon)$-semistable complexes with Hilbert polynomials ${H}_{i,j}$ (resp. ${I}_{i,j}$).
\begin{prop}\label{descr of Ttauss}
For $n$ sufficiently large and by replacing $(\delta, \ue)$ by $(K\delta , \ue/K)$ for a sufficiently large integer $K$ we have an isomorphism \[\mathfrak{T}_{(\tau)}^{ss} \cong \mathfrak{T}^{ss}_{{H}_{m_1,1}} \times \cdots \times \mathfrak{T}^{ss}_{{H}_{m_1,s_{m_1}}} \times \mathfrak{T}^{ss}_{{I}_{m_1,1}} \times \cdots \times \mathfrak{T}^{ss}_{{I}_{m_1,t_{m_1}}} \times \mathfrak{T}^{ss}_{{H}_{m_1 + 1,1}}\times \cdots \times \mathfrak{T}^{ss}_{{H}_{m_2,s_{m_2}}}.\] \end{prop} \begin{proof} We suppose $n$ is chosen as in Lemma \ref{lemma 1} and let $z$ be a point in $\mathfrak{T}_{(\tau)}$. By Proposition \ref{descr of Fss}
\[ \mathfrak{T}_{(\tau)}^{ss} := \mathfrak{T}_{(\tau)}^{\mathrm{Stab} \: \beta -ss}(\mathcal{L}^{\chi_{-\beta}}) = \mathfrak{T}_{(\tau)}^{G'-ss}(\mathcal{L}|_{\mathfrak{T}_{(\tau)}}) \] and so $z$ is in $\mathfrak{T}_{(\tau)}^{ss}$ if and only if $\mu^{\mathcal{L}}(z,{\lambda}) \geq 0$ for every 1-PS $\lambda$ of $G'$.
If a direct summand $\mathcal{H}^\cdot_{i,j}$ or $\mathcal{I}^\cdot_{i,j}$ of $z$ is $(\underline{1}, \delta \ue /\epsilon)$-unstable, then $z \notin \mathfrak{T}_{(\tau)}^{ss}$ by Lemma \ref{lemma 1}. By Lemma \ref{lemma 2} we have seen that if each of the direct summands $\mathcal{H}^\cdot_{i,j}$ and $\mathcal{I}^\cdot_{i,j}$ of $z$ is $(\underline{1}, \delta \ue /\epsilon)$-semistable and $\lambda$ induces a filtration by subcomplexes, then $\mu^{\mathcal{L}}(z,{\lambda}) \geq 0$. It follows from \cite{schmitt05} Theorem 1.7.1 (see also Remark \ref{only need to worry about subcxs}) that by rescaling $(\delta, \ue)$ to $(K\delta, \ue/K)$ for $K$ a large integer, we can verify GIT-semistability by only checking 1-PSs which induce filtrations by subcomplexes. It follows that if each of the direct summands $\mathcal{H}^\cdot_{i,j}$ and $\mathcal{I}^\cdot_{i,j}$ of $z$ is $(\underline{1}, \delta \ue /\epsilon)$-semistable, then $z \in \mathfrak{T}_{(\tau)}^{ss}$. Therefore, $z \in \mathfrak{T}^{ss}_{(\tau)}$ if and only if all the direct summands $\mathcal{H}^\cdot_{i,j}$ and $\mathcal{I}^\cdot_{i,j}$ of $z$ are $(\underline{1}, \delta \ue /\epsilon)$-semistable. In particular, the above isomorphism comes from restricting the isomorphism given in Remark \ref{descr of F and Ttau for cx} to $\mathfrak{T}^{ss}_{(\tau)}$. \end{proof}
Recall that there is a retraction $p_\beta : Y_\beta \rightarrow Z_\beta$ where \[ p_\beta(y) = \lim_{t \to 0} \lambda_{\beta}(t) \cdot y. \]
\begin{lemma}\label{descr of pbeta inv of nice sch} Let $F^{ss}$ denote the connected components of $Z_\beta^{ss}$ meeting $\mathfrak{T}_{(\tau)}^{ss}$; then for $n$ sufficiently large \[ p_\beta^{-1}(F^{ss}) \cap \mathfrak{T}^{tf} = p_\beta^{-1}(\mathfrak{T}_{(\tau)}^{ss}). \] \end{lemma} \begin{proof} Let $n$ be chosen as in Proposition \ref{descr of Ttauss}. Let $ y \in p_\beta^{-1}(\mathfrak{T}_{(\tau)}^{ss})$ so that \[p_\beta (y)=\lim_{t \to 0} \lambda_{\beta}(t) \cdot y \in \mathfrak{T}^{ss}_{(\tau)} \subset F^{ss}.\] If $y \notin \mathfrak{T}^{tf}$, then for all $t \neq 0$ we have $\lambda_{\beta}(t) \cdot y \notin \mathfrak{T}^{tf}$ which would contradict the openness of $\mathfrak{T}^{tf} \cap F^{ss}$ in $F^{ss}$.
Conversely suppose $y = (q^{m_1}, \dots , q^{m_2}, [\varphi :1]) \in \mathfrak{T}^{tf}$ and $z=p_\beta(y) \in F^{ss}$ where $q^i : V^i \otimes \mathcal{O}_X(-n) \rightarrow \mathcal{E}^i$ and $\varphi$ is given by $d^{i}: \mathcal{E}^i \rightarrow \mathcal{E}^{i+1}$. The scheme $F^{ss}$ is contained in the diagonal components of $Z_\beta^{ss}$; therefore the 1-PS $\lambda_\beta$ induces a filtration of $y$ by subcomplexes and the associated graded point is \[z=(\oplus_{i,j} z_{i,j}) \oplus (\oplus_{i,j} y_{i,j} )\] where $ z_{i,j}=(q^i_{i,j},[0:1])$ and $y_{i,j} = (p^i_{i,j}, p^{i+1}_{i,j}, [d^{i}_j : 1])$ both represent complexes (cf. Lemma \ref{lemma on fixed pts}) and so $z = p_\beta (y) \in \mathfrak{T}_{(\tau)}$. By Proposition \ref{descr of Fss}, the limit $z$ is in the GIT semistable set for the action of $G'$ on $Z_\beta$ with respect to $\mathcal{L}$. We can apply the arguments used in the proof of Lemma \ref{lemma 1} to show that $z_{i,j} \in \mathfrak{T}^{ss}_{{H}_{i,j}} $ and $y_{i,j} \in \mathfrak{T}^{ss}_{{I}_{i,j}}$.
\end{proof}
Recall that we have fixed Schmitt stability parameters $(\underline{1}, \delta \underline{\eta} / \epsilon)$ for complexes over $X$ where $\epsilon > 0$ is a rational number, $\eta_i$ are strictly increasing rational numbers indexed by the integers and $\delta$ is a positive rational polynomial such that $\deg \delta = \max (\dim X -1,0)$. We may also assume that $\delta$ is sufficiently large (so that the scaling of Proposition \ref{descr of Ttauss} above has been done). We have assumed $\epsilon$ is very small and that $\tau$ is the Harder--Narasimhan type with respect to $(\underline{1}, \delta \underline{\eta} / \epsilon)$ of a complex $\cxF$ with torsion free cohomology sheaves and Hilbert polynomials $P=(P^{m_1}, \dots, P^{m_2})$. Let $\beta = \beta(\tau,n)$ be the rational weight given in Definition \ref{defn of beta cxs}. Provided $n$ is sufficiently large, all complexes with Harder--Narasimhan type $\tau$ may be represented by points in the scheme $\mathfrak{T}^{tf}=\mathfrak{T}^{tf}(n)$. We defined $R_\tau$ to be the set of points in $\mathfrak{T}^{tf}$ which parametrise complexes with this Harder--Narasimhan type $\tau$ and the following theorem provides $R_\tau$ with a scheme structure. There is an action of \[G = \prod_{i=m_1}^{m_2} \mathrm{GL}(P^i(n)) \cap \mathrm{SL}\left(\sum_{i=m_1}^{m_2} P^i(n)\right)\] on this parameter scheme and the stability parameters determine a linearisation $\mathcal{L}$ of this action. Associated to this action there is a stratification $\{ S_\beta : \beta \in \mathcal{B} \}$ of the projective completion $\overline{\mathfrak{T}}^{tf}$ indexed by a finite set $\mathcal{B}$ of rational weights.
\begin{thm}\label{HN strat is strat} For $n$ sufficiently large we have: \begin{enumerate} \renewcommand{\labelenumi}{\roman{enumi})} \item $\beta = \beta(\tau,n)$ belongs to the index set $\mathcal{B}$ for the stratification $\{ S_\beta : \beta \in \mathcal{B} \}$ of $\overline{\mathfrak{T}}^{tf}$, \item $R_\tau = G p_\beta^{-1}(\mathfrak{T}_{(\tau)}^{ss})$, and \item the subscheme $R_\tau = G p_\beta^{-1}(\mathfrak{T}_{(\tau)}^{ss})$ of the parameter scheme $\mathfrak{T}^{tf}$ parametrising complexes with Harder--Narasimhan type $\tau$ is a union of connected components of $S_\beta \cap \mathfrak{T}^{tf}$. \end{enumerate} \end{thm} \begin{proof} Suppose $n$ is sufficiently large as in Proposition \ref{descr of Ttauss}. We defined $\beta$ by fixing a point $z = (q^{m_1}, \dots, q^{m_2} , [\varphi:1]) \in R_\tau$ corresponding to the complex $\mathcal{F}^\cdot$ with Harder--Narasimhan type $\tau$. We claim that $\overline{z} :=p_\beta(z) \in Z_\beta^{ss}$ which implies i). The 1-PS $\lambda_{\beta}$ induces the Harder--Narasimhan filtration of $\cxF$ and
$\overline{z}= \lim_{t \to 0} \lambda_\beta(t) \cdot z $ is the graded object associated to this filtration. By Proposition \ref{descr of Ttauss} it suffices to show that each summand in the associated graded object is $(\underline{1}, \delta \underline{\eta} / \epsilon)$-semistable, but this follows by definition of the Harder--Narasimhan filtration.
In fact the above argument shows that $p_\beta^{-1}(\mathfrak{T}_{(\tau)}^{ss}) \subset R_\tau$ and since $R_\tau$ is $G$-invariant we have $G p_\beta^{-1}(\mathfrak{T}_{(\tau)}^{ss}) \subset R_\tau$. To show ii) suppose $y = (q^{m_1}, \dots, q^{m_2} ,[\varphi : 1]) \in R_\tau$ corresponds to a complex $\mathcal{E}^\cdot$ with Harder--Narasimhan filtration \[ 0 \subsetneq \mathcal{H}^\cdot_{m_1,(1)} \subsetneq \cdots \subsetneq \mathcal{H}^\cdot_{m_1,(s_{m_1})} \subsetneq \mathcal{I}^\cdot_{m_1,(1)} \subsetneq \cdots \subsetneq \mathcal{I}^\cdot_{m_1,(t_{m_1})} \subsetneq \cdots \subsetneq \mathcal{H}^\cdot_{m_2,(s_{m_2})}=\mathcal{E}^\cdot \] of type $\tau$. Then this filtration induces a filtration of each vector space $V^i$ and we can choose a change of basis matrix $g$ which takes this filtration to the filtration of $V^i$ given at (\ref{filtr of Vi}) used to define $\beta$. Then $g \cdot y \in p_\beta^{-1}(\mathfrak{T}_{(\tau)}^{ss}) $ which completes the proof of ii).
Since $F^{ss}$ is a union of connected components of $Z_\beta^{ss}$, the scheme $Gp_\beta^{-1}(F^{ss}) $ is a union of connected components of $S_\beta$. Therefore, $Gp_\beta^{-1}(F^{ss}) \cap \mathfrak{T}^{tf} $ is a union of connected components of $S_\beta \cap \mathfrak{T}^{tf}$. By ii) and Lemma \ref{descr of pbeta inv of nice sch} \[R_\tau = G p_\beta^{-1}(\mathfrak{T}_{(\tau)}^{ss})= Gp_\beta^{-1}(F^{ss}) \cap \mathfrak{T}^{tf} \] which proves iii). \end{proof}
\section{Quotients of the Harder--Narasimhan strata}\label{sec on quot}
In the previous section we saw that for $\epsilon$ very small and a fixed Harder--Narasimhan type $\tau$ with respect to $(\underline{1}, \delta \ue/\epsilon)$, there is a parameter space $R_\tau$ for complexes of this Harder--Narasimhan type and that $R_\tau$ is a union of connected components of a stratum $S_{\beta(\tau)} \cap \mathfrak{T}^{tf}(n)$ when $n$ is sufficiently large. The action of $G$ on $\mathfrak{T}$ restricts to an action on $R_\tau$ such that the orbits correspond to isomorphism classes of complexes of Harder--Narasimhan type $\tau$. In this section we consider the problem of constructing a quotient of the $G$-action on this Harder--Narasimhan stratum $R_\tau$. If a suitable quotient existed, then it would provide a moduli space for complexes of this Harder--Narasimhan type. In particular, it would have the desirable property that for two complexes to represent the same point it is necessary that their cohomology sheaves have the same Harder--Narasimhan type.
By \cite{hoskinskirwan} Proposition 3.6, any stratum in a stratification associated to a linearised $G$-action on a projective scheme $B$ has a categorical quotient. We can apply this to our situation and produce a categorical quotient of the $G$-action on $R_\tau$.
\begin{prop} The categorical quotient of the $G$-action on $R_\tau$ is isomorphic to the product \[ \prod_{i=m_1}^{m_2} \prod_{j=1}^{s_i}M^{(\underline{1}, \delta \ue/\epsilon)-ss}(X,{H}_{i,j}) \times \prod_{i=m_1}^{m_2 -1} \prod_{j=1}^{t_i}M^{(\underline{1}, \delta \ue/\epsilon)-ss}(X,{I}_{i,j}) \] where $M^{(\underline{1}, \delta \ue/\epsilon)-ss}(X,{P})$ denotes the moduli space of $(\underline{1}, \delta \ue/\epsilon)$-semistable complexes with invariants $P$. Moreover: \begin{enumerate} \item A complex with invariants $H_{i,j}$ is just a shift of a sheaf and it is $(\underline{1}, \delta \ue/\epsilon)$-semistable if and only if the corresponding sheaf is Gieseker semistable. \item A complex with invariants $I_{i,j}$ is concentrated in degrees $[i,i+1]$ and it is $(\underline{1}, \delta \ue/\epsilon)$-semistable if and only if it is isomorphic to a shift of the cone on the identity morphism of a Gieseker semistable sheaf. \end{enumerate} \end{prop} \begin{proof} It follows from \cite{hoskinskirwan} Proposition 3.6 that the categorical quotient is equal to the GIT quotient of $\mathrm{Stab} \: \beta$ acting on $\mathfrak{T}_{(\tau)}$ with respect to the twisted linearisation $\cL^{\chi_{-\beta}}$. It follows from Proposition \ref{descr of Fss} that this is the same as the GIT quotient of \[G'= \prod_{i=m_1}^{m_2} \prod_{j=1}^{s_i} \SL(V^i_{i,j}) \times \prod_{i=m_1}^{m_2-1} \prod_{j=1}^{t_i} (\mathrm{GL}(W^i_{i,j}) \times \mathrm{GL}(W^{i+1}_{i,j}) )\cap \SL(W^i_{i,j} \oplus W^{i+1}_{i,j})\] acting on $\mathfrak{T}_{(\tau)}$ with respect to $\cL$. By Theorem \ref{schmitt theorem}, this is the product of moduli spaces of $(\underline{1}, \delta \ue/\epsilon)$-semistable complexes with invariants given by $\tau$. The final statement follows from Lemma \ref{lemma X}, Remark \ref{rmk on sigma0} and the assumption on $\epsilon$ (cf. Assumption \ref{ass on epsilon}). \end{proof}
In general this categorical quotient has lower dimension than expected and so is not a suitable quotient of the $G$-action on $R_\tau$. Instead, we suggest the quotient should be taken with respect to a perturbation of the linearisation used to produce the categorical quotient. However, as discussed in \cite{hoskinskirwan}, finding a way to perturb this linearisation and obtain an ample linearisation is not always possible. As $R_\tau = GY_{(\tau)}^{ss} \cong G \times^{P_\beta} Y_{(\tau)}^{ss}$ where $Y_{(\tau)}^{ss} :=p_\beta^{-1}(\mathfrak{T}_{(\tau)}^{ss})$, a categorical quotient of $G$ acting on $R_\tau$ is equivalent to a categorical quotient of $P_\beta$ acting on $Y_{(\tau)}^{ss}$. If we instead consider $P_\beta$ acting on $Y_{(\tau)}^{ss}$, then there are perturbed linearisations which are ample, although $P_\beta$ is not reductive. A possible future direction is to follow the ideas of \cite{hoskinskirwan} and take a quotient of the reductive part $\mathrm{Stab} \: \beta$ of $P_\beta$ acting on $Y_{(\tau)}^{ss}$ with respect to an ample perturbed linearisation, and so obtain a moduli space for complexes of Harder--Narasimhan type $\tau$ with some additional data.
\end{document}
Solve for: (1)/(4) * (x+1)=3(1-x)+(1)/(2)x
Expression: $\frac{ 1 }{ 4 } \times \left( x+1 \right)=3\left( 1-x \right)+\frac{ 1 }{ 2 }x$
Multiply both sides of the equation by $4$
$x+1=12\left( 1-x \right)+2x$
Distribute $12$ through the parentheses
$x+1=12-12x+2x$
Collect like terms
$x+1=12-10x$
Move the variable to the left-hand side and change its sign
$x+1+10x=12$
Move the constant to the right-hand side and change its sign
$x+10x=12-1$
Collect like terms
$11x=12-1$
Subtract the numbers
$11x=11$
Divide both sides of the equation by $11$
$x=1$
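The result can be double-checked by substitution. A minimal sketch using only the Python standard library (the function names `lhs` and `rhs` are ours); exact fractions avoid any floating-point round-off on $\frac{1}{4}$ and $\frac{1}{2}$:

```python
from fractions import Fraction

def lhs(x):
    # Left-hand side: (1/4) * (x + 1)
    return Fraction(1, 4) * (x + 1)

def rhs(x):
    # Right-hand side: 3 * (1 - x) + (1/2) * x
    return 3 * (1 - x) + Fraction(1, 2) * x

x = 1
assert lhs(x) == rhs(x)  # both sides equal 1/2 at x = 1
# The equation is linear with different slopes on each side
# (1/4 versus -5/2), so x = 1 is the unique solution.
print("x = 1 checks out; both sides =", lhs(x))
```

Substituting any other value, e.g. $x=0$, gives $\frac{1}{4}$ on the left and $3$ on the right, confirming uniqueness.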
Evaluate: 15-(-8)
Calculate: (2+6i)-(5-2i)
Solve for: x^2+8=-6x
Evaluate: integral of 2x cos(7x) dx
Calculate: log_{4}(x)+log_{4}(z)
Calculate: 2x^2+22x+48
Evaluate: -(2)/(3)+y=-(4)/(7)
Calculate: log_{10}(x-1)-log_{10}(sqrt(5x))=0
Solve for: \prod _{k=1}^3k+2^k
Evaluate: 1000 * 1.02^3
\begin{document}
\author{Krishna B. Athreya} \address{Krishna B. Athreya \\ Department of Mathematics and Statistics\\ Iowa State University\\ Ames, Iowa, 50011, U.S.A.} \email{[email protected]}
\author{Siva R. Athreya} \address{Siva R. Athreya \\ 8th Mile Mysore Road\\ Indian Statistical Institute
\\Bangalore 560059, India.} \email{[email protected]}
\author{Srikanth K. Iyer} \address{Srikanth K. Iyer\\ Department of Mathematics\\ Indian Institute of Science\\ Bangalore 560012, India.} \email{[email protected]}
\keywords{Age dependent, Branching, Ancestral times, Measure-valued, Empirical distribution} \subjclass[2000]{Primary: 60G57 Secondary: 60H30}
\title[Age Dependent Branching Markov Processes]{Critical Age Dependent Branching Markov Processes and their Scaling Limits}
\begin{abstract} This paper studies: (i) the long time behaviour of the empirical distribution of age and normalised position of an age dependent critical branching Markov process conditioned on non-extinction; and (ii) the super-process limit of a sequence of age dependent critical branching Brownian motions. \end{abstract}
\maketitle
\section{Introduction}
Consider an age dependent branching Markov process where i) each particle lives for a random length of time and during its lifetime moves according to a Markov process and ii) upon its death it gives rise to a random number of offspring. We assume that the system is critical, i.e. the mean of the offspring distribution is one.
We study three aspects of such a system. First, at time $t,$ conditioned on non-extinction (as such systems die out w.p. $1$) we consider a randomly chosen individual from the population. We show that asymptotically (as $t \rightarrow \infty$), the joint distribution of the position (appropriately scaled) and age (unscaled) of the randomly chosen individual decouples (See Theorem \ref{oneparticle}). Second, it is shown that conditioned on non-extinction at time $t,$ the empirical distribution of the age and the normalised position of the population converges as $t \rightarrow \infty$ in law to a random measure characterised by its moments (See Theorem \ref{em}). Third, we establish a super-process limit of such branching Markov processes where the motion is Brownian (See Theorem \ref{super}).
The rest of the paper is organised as follows. In Section \ref{sbps} we define the branching Markov process precisely and in Section \ref{mainresult} we state the three main theorems of this paper and make some remarks on various possible generalisations of our results.
In Section \ref{rbp} we prove four propositions on age-dependent Branching processes which are used in proving Theorem \ref{oneparticle} (See Section \ref{pop}). In Section \ref{rbp} we also show that the joint distribution of ancestral times for a sample of $k \geq 1$ individuals chosen at random from the population at time $t$ converges as $ t \rightarrow \infty$ (See Theorem \ref{cab}). This result is of independent interest and is a key tool that is needed in proving Theorem \ref{em} (See Section \ref{pemp}).
In Section \ref{psp}, we prove Theorem \ref{super}, the key idea being to scale the age and motion parameters differently. Given this, the proof uses standard techniques for such limits. Theorem \ref{oneparticle} is used in establishing the limiting log-Laplace equation. Tightness of the underlying particle system is shown in Proposition \ref{tight} and the result follows by the method prescribed in \cite{D}.
\section{Statement of Results}
\subsection{The Model} \label{sbps}
\mbox{\hspace{1in}}
Each particle in our system will have two parameters, age in ${\mathbb R}_{+}$ and location in ${\mathbb R}$. We begin with the description of the particle system.
\begin{enumerate} \item {\bf Lifetime Distribution $G(\cdot)$:} Let $G(\cdot)$ be a cumulative distribution function on $[0,\infty)$, with $G(0) = 0.$ Let $\mu = {\int_0^\infty sdG(s)} < \infty.$
\item {\bf Offspring Distribution ${\bf p}$ :} Let ${\bf p} \equiv \{p_k\}_{k \geq 0}$ be a probability distribution such that $p_0 < 1$, $m = \sum_{k=0}^\infty k p_k =1$ and that $\sigma^2 = \sum_{k=0}^\infty k^2 p_k - 1 < \infty$.
\item {\bf Motion Process $\eta(\cdot)$:} Let $\eta(\cdot)$ be a ${\mathbb R}$ valued Markov process starting at $0$. \end{enumerate}
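For concreteness, consider the illustrative critical offspring law (the particular choice is ours, not from the text) $p_0 = \frac{1}{4}$, $p_1 = \frac{1}{2}$, $p_2 = \frac{1}{4}$. Then \[ m = 0\cdot\tfrac{1}{4} + 1\cdot\tfrac{1}{2} + 2\cdot\tfrac{1}{4} = 1, \qquad \sigma^2 = \sum_{k=0}^\infty k^2 p_k - 1 = \tfrac{3}{2} - 1 = \tfrac{1}{2}, \] so this law satisfies the criticality and finite variance assumptions above.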
{\bf Branching Markov Process $(G, {\bf p}, \eta)$:} Suppose we are given a realisation of an age-dependent branching process with offspring distribution ${\bf p}$ and lifetime distribution $G$ (See Chapter IV of \cite{an} for a detailed description). We construct a branching Markov process by allowing each individual to execute an independent copy of $\eta$ during its lifetime $\tau$ starting from where its parent died.
Let $N_t$ be the number of particles alive at time $t$ and \eq{ \label{confi} {\mathcal C}_t = \{ (a^{i}_t, X^{i}_t) : i = 1,2, \ldots, N_t\}}
denote the age and position configuration of all the individuals alive at time $t.$ Since $m=1$ and $G(0)=0$, there is no explosion in finite time (i.e. $P(N_t < \infty) =1$) and consequently ${\mathcal C}_t$ is well defined for each $0 \leq t < \infty$ (See \cite{an}).
Let ${\mathcal B}({\mathbb R}_+)$ (and ${\mathcal B}({\mathbb R})$) be the Borel $\sigma$-algebra on ${\mathbb R}_+$ (and ${\mathbb R}$). Let $M({\mathbb R}_+ \times {\mathbb R})$ be the space of finite Borel measures on ${\mathbb R}_{+} \times {\mathbb R}$ equipped with the weak topology. Let $M_a({\mathbb R}_+ \times {\mathbb R}) := \{ \nu \in M({\mathbb R}_+ \times {\mathbb R}) : \nu = \sum_{i=1}^n \delta_{a_i, x_i}(\cdot,\cdot), n \in {\mathbb N}, a_i \in {\mathbb R}_+, x_i \in {\mathbb R} \}.$ For any set $A \in {\mathcal B}({\mathbb R}_+)$ and $B \in {\mathcal B}({\mathbb R}),$ let $Y_t(A \times B) $ be the number of particles at time $t$ whose age is in $A$ and position is in $B$. As pointed out earlier, $m<\infty$, $G(0) = 0$ implies that $Y_t \in M_a({\mathbb R}_+ \times {\mathbb R})$ for all $t > 0$ if $Y_0$ does so. Fix a function $ \phi \in C^+_{b}({\mathbb R}_{+} \times {\mathbb R}),$ (the set of all bounded, continuous and positive functions from ${\mathbb R}_+\times{\mathbb R}$ to ${\mathbb R}_+$), and define \begin{equation} \label{bd} \langle Y_t, \phi \rangle = \int \phi \,dY_t = \sum_{i=1}^{N_t} \phi(a^{i}_t, X^{i}_t). \end{equation}
Since $\eta(\cdot)$ is a Markov process, it can be seen that $\{Y_t: t \geq 0\}$ is a Markov process and we shall call $Y \equiv \{Y_t : t \geq 0 \}$ {\it the $(G,{\bf p},\eta)$- branching Markov process}.
Note that ${\mathcal C}_t$ determines $Y_t$ and conversely. The Laplace functional of $Y_t$ is given by
\eq{\label{lf} L_t \phi(a,x) := E_{a,x} [e^{- \langle \phi, Y_t \rangle}] \equiv
E[ e^{- \langle \phi, Y_t \rangle} \mid Y_0 = \delta_{a,x}].}
From the independence intrinsic in $\{Y_t: t \geq 0 \}$, we have: \eq{ E_{\nu_1 + \nu_2}[e^{-\langle \phi,Y_t \rangle}] = (E_{\nu_1}[e^{-\langle \phi,Y_t \rangle}]) (E_{\nu_2}[e^{-\langle \phi,Y_t \rangle }]),} for any $\nu_i \in M_a({\mathbb R}_+ \times {\mathbb R}) $ where $E_{\nu_i}[e^{-\langle \phi,Y_t \rangle }] := E[e^{-\langle \phi,Y_t \rangle }\mid Y_0 = \nu_i]$ for $i = 1,2$. This is usually referred to as the branching property of $Y$ and can be used to define the process $Y$ as the unique measure valued Markov process with state space $M_a({\mathbb R}_+ \times {\mathbb R})$ satisfying $L_{t+s} \phi(a,x) = L_t (L_s (\phi))(a,x)$ for all $t,s \geq 0.$
\subsection{The Results} \label{mainresult}
\mbox{\hspace{1in}}
In this section we describe the main results of the paper. Let $A_t$ be the event $\{ N_t > 0 \},$ where $N_t$ is the number of particles alive at time $t$. As $p_0 < 1,$ $P(A_t) > 0$ for all $ 0 \leq t < \infty$ provided $P(N_0 = 0) \ne 1.$
\begin{theorem} \label{oneparticle} {\bf (Limiting behaviour of a randomly chosen particle)} \\ On the event $A_t = \{N_t > 0 \}$, let $(a_t, X_t)$ be the age and position of a randomly chosen particle from those alive at time $t$. Assume that $\eta(\cdot)$ is such that for all $ 0 \leq t < \infty$ \begin{eqnarray} \label{et1} && E(\eta(t)) = 0, v(t) \equiv E(\eta^2(t)) < \infty, ~ \sup_{0\leq s \leq t}v(s) < \infty, \\ && \mbox{ and } \psi \equiv \int_0^\infty v(s)G(ds) < \infty. \nonumber \end{eqnarray}
Then, conditioned on $A_t$, $(a_t,\frac{X_t}{\sqrt{t}})$ converges, as $ t \rightarrow \infty$, to $(U,V)$ in distribution, where $U$ and $V$ are independent, with $U$ a strictly positive absolutely continuous random variable with density proportional to $(1 - G(\cdot))$ and $V$ normally distributed with mean $0$ and variance $ \frac{\psi}{\mu}$. \end{theorem}
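As a concrete illustration of the theorem (the particular choices of $G$ and $\eta$ are ours): let $G$ be exponential with rate $\lambda$, so $\mu = \frac{1}{\lambda}$, and let $\eta$ be a standard Brownian motion, so $v(t) = t$. Then \[ \psi = \int_0^\infty v(s)\, G(ds) = \int_0^\infty s\, \lambda e^{-\lambda s}\, ds = \frac{1}{\lambda} = \mu, \] so $V \sim N(0, \psi/\mu) = N(0,1)$, while the density of $U$ is proportional to $1-G(a) = e^{-\lambda a}$, i.e. $U$ is again exponential with mean $\mu$.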
Next consider the scaled empirical measure $\tilde{Y}_t \in M_a({\mathbb R}_+ \times {\mathbb R})$ given by $\tilde{Y}_t(A \times B) = Y_t(A \times \sqrt{t}B),$ $A \in {\mathcal B}({\mathbb R}_+), B \in {\mathcal B}({\mathbb R}).$
\begin{theorem} \label{em} {\bf (Empirical Measure)}\\ Assume (\ref{et1}). Then, conditioned on $A_t = \{ N_t > 0\},$ the random measure $\frac{\tilde{Y}_t}{N_t}$ converges as $t \rightarrow \infty$ in distribution to a random measure $\nu,$ characterised by its moment sequence $m_k(\phi) \equiv E[\nu(\phi)^k],$ for $\phi \in C_b({\mathbb R}_+ \times {\mathbb R}),$ $k \geq 1$. \end{theorem}
An explicit formula for $m_k(\phi)$ is given in (\ref{mkpf}) below.
Our third result is on the super-process limit. We consider a sequence of branching Markov processes $(G_n, {\bf p}_n, \eta_n)_{\{n \geq 1\}}$ denoted by $\{Y_t^n: t \geq 0 \}_{\{n \geq 1\}}$ satisfying the following:
\begin{enumerate} \item[(a)] {\bf Initial measure:} For $n \geq 1$, $Y^n_0 = \pi_{n \nu}$, where $\pi_{n \nu}$ is a Poisson random measure with intensity $n \nu,$ for some $\nu = \alpha \times \mu \in M({\mathbb R}_+ \times {\mathbb R}).$
\item[(b)] {\bf Lifetime $G^n(\cdot)$:} For all $n \geq 1$, $G^n$ is an exponential distribution with mean $\frac{1}{\lambda}$.
\item[(c)] {\bf Offspring Distribution ${\bf p}_n$:} For $n \geq 1$, let $F_n(u) = \sum_{k=0}^\infty p_{n,k} u^k$ be the generating function of the offspring distribution ${\bf p}_n \equiv \{p_{n,k}\}_{k \geq 0}$. We shall assume that $F_n$ satisfies,
\eq{ \lim_{n \rightarrow \infty} \sup_{0 \leq u \leq N} \parallel
n^2(F_n(1 - u/n) - (1 - u/n)) - u^2 \parallel \rightarrow 0, \label{asf}}
for all $N > 0.$
\item[(d)] {\bf Motion Process $\eta_n(\cdot)$:} For all $n \geq 1$, \eq{\eta_n(t) = \frac{1}{\sqrt{n}} \int_0^t \sigma(u )dB(u), \qquad t \geq 0, \label{mtn} } where $\{B(t): t \geq 0\}$ is a standard Brownian motion starting at $0$ and $\sigma: {\mathbb R}_+ \rightarrow {\mathbb R}$ is a continuous function such that $\int_0^\infty \sigma^2(s) dG(s) < \infty $. It follows that for each $n \geq 1,$ $\eta_n$ satisfies (\ref{et1}).
\end{enumerate}
\begin{defi} Let ${\mathcal E}$ be an independent exponential random variable with mean $\frac{1}{\lambda}$, $ 0 < \lambda < \infty$. For $f \in C_l^+( {\mathbb R}_+ \times {\mathbb R})$ let $U_t f (x) = E(f({\mathcal E}, x + \sqrt{\lambda \psi} B_t))$ where $\psi $ is defined in (\ref{et1}). For $t \geq 0$, let $u_t(f)$ be the unique solution of the non linear integral equation \eq{ u_t f(x) = U_t f(x) - \lambda \int_0^t U_{t-s}(u_s( f)^2) (x) ds. \label{R}}
Let $\{{\mathcal Y}_t : t \geq 0\} $ be a $M({\mathbb R}_+ \times {\mathbb R})$ valued Markov process whose Laplace functional is given by
\eq{ E_{{\mathcal E} \times \mu} [e^{-\langle f, {\mathcal Y}_t \rangle}] = e^{-\langle V_t f, \mu \rangle}, \label{log_Lap_super}}
where $f\in C_l^+({\mathbb R}_{+} \times {\mathbb R})$ (the set of all continuous functions from ${\mathbb R}_+\times{\mathbb R}$ to ${\mathbb R}$ with finite limits as $(a,x)\rightarrow \infty$) and $V_t(f)(x) \equiv u_t(f(x))$ for $x \in {\mathbb R}$ (See \cite{D} for the existence of ${\mathcal Y}$ satisfying (\ref{log_Lap_super})). \end{defi}
Note that in the process $\{{\mathcal Y}_t: t \geq 0 \}$ defined above, the distribution of the age (i.e. the first coordinate) is deterministic. The spatial evolution behaves like that of a super-process where the motion of particles is like that of a Brownian motion with variance equal to the average variance of the age-dependent particle displacement over its lifetime. Also, $u_s(f)$ in second term of (\ref{R}) is interpreted in the natural way as a function on ${\mathbb R}_+ \times {\mathbb R}$ with $u_s(f)(a,x) = u_s(f)(x)$ for all $a >0, x \in {\mathbb R}.$
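Heuristically (a formal observation, not used in the proofs): since $U_t$ is the semigroup of a Brownian motion with variance parameter $\lambda \psi$, a sufficiently regular solution of (\ref{R}) solves the reaction--diffusion equation \[ \frac{\partial u_t}{\partial t} = \frac{\lambda \psi}{2} \frac{\partial^2 u_t}{\partial x^2} - \lambda u_t^2, \qquad u_0(x) = E f({\mathcal E}, x), \] of which (\ref{R}) is the mild (integral) form; this is the familiar log-Laplace equation of a super-Brownian motion with branching rate $\lambda$.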
\begin{theorem} \label{super} {\bf (Age Structured Super-process)} \\ Let $\epsilon > 0$. Let $\{Y_t^n: t \geq 0\}$ be the sequence of branching Markov processes defined above (i.e. in (a), (b), (c), (d)). Then as $n \rightarrow \infty$, $\{{\mathcal Y}_t^n \equiv \frac{1}{n} Y_{nt}^n, t \geq \epsilon \}$ converges weakly on the Skorokhod space $D([\epsilon,\infty), M({\mathbb R}_+ \times {\mathbb R})) $ to $ \{{\mathcal Y}_t : t \geq \epsilon\}$. \end{theorem}
\subsection{Remarks} \mbox{\hspace{1in}}
(a) If $\eta(\cdot)$ is not Markov then $\tilde{C}_t = \{ a^i_t, X^i_t , \tilde{\eta}_{t,i}\equiv\{ \eta_{t,i}(u) : 0 \leq u \leq a^i_t\} : i = 1,2, \ldots, N_t \}$ is a Markov process, where $\{ \tilde{\eta}_{t,i} (u) : 0 \leq u \leq a^i_t\} $ is the history of $\eta(\cdot)$ of the individual $i$ during its lifetime. Theorem \ref{oneparticle} and Theorem \ref{em} extend to this case.
(b) Most of the above results also carry over to the case when the motion process is ${\mathbb R}^d$ valued ($ d \geq 1$) or is Polish space valued and where the offspring distribution is age-dependent.
(c) Theorem \ref{oneparticle} and Theorem \ref{em} can also be extended to the case when $\eta(L_1)$, with $L_1 \stackrel{d}{=} G,$ is in the domain of attraction of a stable law of index $0 < \alpha \leq 2.$
(d) In Theorem \ref{super} the convergence should hold on $D([0,\infty), M({\mathbb R}_+ \times {\mathbb R}))$ if we take $\alpha$ in the sequence of branching Markov processes to be ${\mathcal E}$ (i.e. Exponential with mean $\frac{1}{\lambda}$).
(e) The super-process limit obtained in Theorem \ref{super} has been considered in two special cases in the literature. One is in \cite{bk} where an age-dependent Branching process is rescaled (i.e. the particles do not perform any motion). The other is in \cite{DGL} where a general non-local super-process limit is obtained when the offspring distribution is given by $p_1 = 1$. In our results, to obtain a super-process limit the age-parameter is scaled differently when compared to the motion parameter giving us an age-structured super-process.
(f) Limit theorems for critical branching Markov processes where the motion depends on the age does not seem to have been considered in the literature before.
\section{Results on Branching Processes} \label{rbp}
Let $\{ N_t : t \geq 0\}$ be an age-dependent branching process with offspring distribution $\{p_k\}_{k \geq 0}$ and lifetime distribution $G$ (see \cite{an} for detailed discussion). Let $\{\zeta_k\}_{k\geq 0}$ be the embedded discrete time Galton-Watson branching process with $\zeta_k$ being the size of the $k$th generation, $k \geq 0$. Let $A_t$ be the event $\{N_t > 0\}$. On this event, choose an individual uniformly from those alive at time $t$. Let $M_t$ be the generation number and $a_t$ be the age of this individual.
\begin{proposition} \label{lemma0} Let $A_t,a_t, M_t$ and $N_t$ be as above. Let $\mu$ and $\sigma$ be as in Section~\ref{sbps}. Then \begin{eqnarray*}
(a)&&\lim_{t \rightarrow \infty} t P(A_t) = \frac{2 \mu}{\sigma^2}\\
(b)&& \mbox{For all } x >0 ,\, \, \lim_{t \rightarrow \infty} P(\frac{N_t}{t} > x|A_t) = e^{-\frac{2 \mu x}{\sigma^2}},\\
(c)&& \mbox{For all } \epsilon >0 ,\,\, \lim_{t \rightarrow \infty} P( |\frac{M_t}{t} - \frac{1}{\mu} | > \epsilon |A_t) = 0\\
(d) && \mbox{For all } x >0, \,\, \lim_{t \rightarrow \infty} P(a_t \leq x | A_t) = \frac{1}{\mu} \int_0^x (1-G(s))ds. \end{eqnarray*} \end{proposition} {\em Proof :} For (a) and (b) see Chapter IV in \cite{an}. For (c) see \cite{durrett} and for (d) see \cite{athreya}.
$\Box$
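To illustrate part (d) with a specific lifetime law (our own choice): let $G$ be uniform on $[0,2]$, so $\mu = 1$. Then \[ \lim_{t \rightarrow \infty} P(a_t \leq x \mid A_t) = \int_0^x \left(1 - \frac{s}{2}\right) ds = x - \frac{x^2}{4}, \qquad 0 \leq x \leq 2, \] which indeed equals $1$ at $x = 2$; and since $x - \frac{x^2}{4} \geq \frac{x}{2} = G(x)$ on $[0,2]$, the limiting age is stochastically smaller than a full lifetime drawn from $G$.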
\begin{proposition} \label{llnh} (Law of large numbers) Let $\epsilon >0$ be given. For the randomly chosen individual at time $t$, let $\{ L_{ti} : 1 \leq i \leq M_t\}$, be the lifetimes of its ancestors. Let $h: [0,\infty) \rightarrow {\mathbb R}$ be Borel measurable and $E(\mid h(L_1) \mid) < \infty$ with $L_1 \stackrel{d}{=} G.$ Then, as $t \rightarrow \infty$
$$P( | \frac{1}{M_t} \sum_{i=1}^{M_t} h(L_{ti}) - E(h(L_1)) | > \epsilon | A_t) \rightarrow 0.$$ \end{proposition}
{\em Proof :} Let $\epsilon $ and $\epsilon_1 >0$ be given and let $k_1(t) = t(\frac{1}{\mu} -\epsilon)$ and $k_2(t) = t(\frac{1}{\mu} + \epsilon).$ By Proposition \ref{lemma0} there exists $\delta >0$, $\eta >0 $ and $t_0 > 0$ such that for all $t \geq t_0$, \eq{\label{1l}t P(N_t >0) > \delta
\mbox{ and } P(N_t \leq t \eta| A_t) < \epsilon_1 ;} \eq{\label{2l} P(
M_t \in [k_1(t), k_2(t)]^c |A_t) < \epsilon_1.} Also by the law of large numbers for any $\{L_i\}_{i \geq 1}$ i.i.d. $G$ with $E |h(L_1)|
< \infty$ \eq{\label{3l} \lim_{k \rightarrow \infty} P (\sup_{j \geq k}\frac{1}{j} |\sum_{i=1}^{j} h(L_{i}) - E(h(L_1)) | > \epsilon) = 0.} Let $\{\zeta_{k}\}_{k \geq 0}$ be the embedded Galton-Watson process. For each $t > 0$ and $k \geq 1$ let $\zeta_{kt}$ denote the number of lines of descent in the $k$-th generation alive at time $t$ (i.e. the successive life times $\{L_i\}_{i \geq 1}$ of the individuals in that line of descent satisfying $\sum_{i=1}^k L_i \leq t \leq \sum_{i=1}^{k+1}L_i$). Denote the lines of descent of these individuals by $\{\zeta_{ktj}: 1 \leq j \leq \zeta_{kt}\}$. Call $\zeta_{ktj}$ {\em bad} if \eq{
\label{bad} | \frac{1}{k} \sum_{i=1}^{k} h(L_{ktji}) - E(h(L_1)) | > \epsilon,} where $\{ L_{ktji} \}_{i \geq 1}$ are the successive lifetimes in the line of descent $\zeta_{ktj}$ starting from the ancestor. Let $\zeta_{kt,b}$ denote the cardinality of the set $\{ \zeta_{ktj} : 1 \leq j \leq \zeta_{kt} \mbox{ and } \zeta_{ktj} \mbox{
is bad} \}.$ Now, \begin{eqnarray} \lefteqn{P( | \frac{1}{M_t} \sum_{i=1}^{M_t}
h(L_{ti}) - E(h(L_1)) | > \epsilon | A_t)} \nonumber\\&& \nonumber\\ &=& P( \mbox{ The chosen line of descent at time $t$ is {\em bad }} | A_t) \nonumber \\ && \nonumber \\ &\leq& P( \mbox{ The chosen line of descent at time $t$ is {\em bad}}, M_t \in
[k_1(t), k_2(t)] | A_t)\nonumber \\ && + P( M_t \in [k_1(t), k_2(t)]^c
|A_t) \nonumber\\ &=& \frac{1}{P(N_t > 0)} E(\frac{ \sum_{j=k_1(t)}^{k_2(t)}\zeta_{jt,b}}{N_t}; A_t) + P( M_t \in
[k_1(t), k_2(t)]^c |A_t) \nonumber\\ &=& \frac{1}{P(N_t > 0)} E(\frac{
\sum_{j=k_1(t)}^{k_2(t)}\zeta_{jt,b}}{N_t}; N_t > t \eta) + \nonumber \\ &&
+ \frac{1}{P(N_t > 0)} E(\frac{ \sum_{j=k_1(t)}^{k_2(t)}\zeta_{jt,b}}{N_t}; N_t \leq t \eta) + P( M_t \in [k_1(t), k_2(t)]^c |A_t) \nonumber\\ &\leq& \frac{1}{P(N_t > 0)} E( \frac{\sum_{j=k_1(t)}^{k_2(t)}\zeta_{jt,b}}{t \eta}; N_t > t \eta) + \nonumber \\ && \hspace{1in} +\frac{P( N_t \leq t \eta)}{P(N_t >0)} + P( M_t \in [k_1(t), k_2(t)]^c
|A_t)\nonumber \\ &=& \frac{1}{t \eta P(N_t > 0)} \sum_{j=k_1(t)}^{k_2(t)}
E(\zeta_{jt,b} ) + \nonumber \\ && + P( N_t \leq t \eta | N_t >0) + P( M_t \in [k_1(t), k_2(t)]^c |A_t) \nonumber\\\label{is2} \end{eqnarray} For $t \geq t_0$ by (\ref{2l}) and (\ref{3l}), the last two terms in (\ref{is2}) are less than $\epsilon_1$. The first term is equal to \begin{eqnarray} \frac{1}{t \eta P(N_t > 0)} \sum_{j=k_1(t)}^{k_2(t)} E(\zeta_{jt,b} ) &=&\frac{1}{t \eta P(N_t > 0)} \sum_{j=k_1(t)}^{k_2(t)} E(\sum_{i=1}^{\zeta_j} 1_{\{\zeta_{jti} \mbox{ is bad} \}}) \nonumber\end{eqnarray}
\frac{1}{j} |\sum_{i=1}^{j} h(L_{i}) - E(h(L_1)) | > \epsilon \right), \nonumber \\ &&\nonumber\\ && \mbox{where the $\{L_i \}_{i \geq 1}$ are i.i.d. $G$.}\nonumber \end{eqnarray} Using (\ref{1l}) and (since $m=1$) $E(\zeta_j) = E(\zeta_0)$ we can conclude that
\begin{eqnarray} \lefteqn{ \frac{1}{t \eta P(N_t > 0)} \sum_{j=k_1(t)}^{k_2(t)} E(\zeta_{jt,b} ) } \nonumber\\ &\leq& E(\zeta_0) \frac{P (\sup_{j \geq k_1(t)}\frac{1}{j}
|\sum_{i=1}^{j} h(L_{i}) - E(h(L_1)) | > \epsilon)}{t \eta P(N_t > 0)} \nonumber\\ &\leq& E(\zeta_0) \frac{P (\sup_{j \geq k_1(t)}\frac{1}{j}
|\sum_{i=1}^{j} h(L_{i}) - E(h(L_1)) | > \epsilon)}{\eta \delta}, \nonumber \\ \label{is3} \end{eqnarray} which by (\ref{3l}) goes to zero. So we have shown that for $ t \geq t_0$,
$${P( | \frac{1}{M_t} \sum_{i=1}^{M_t} h(L_{ti}) -
E(h(L_1)) | > \epsilon | A_t)} < 3 \epsilon_1. $$ Since $\epsilon_1 >0$ is arbitrary, the proof is complete.
$\Box$
\begin{proposition} \label{clt} Assume (\ref{et1}) holds. Let $\{L_i\}_{i \geq 1}$ be i.i.d $G$ and $\{\eta_i\}_{i \geq 1}$ be i.i.d copies of $\eta$ and independent of the $\{L_{i}\}_{i \geq 1}$. For $\theta \in {\mathbb R}, t \geq 0$ define $\phi(\theta,t) = E e^{i \theta \eta(t)}.$ Then there exists an event $D,$ with $P(D) =1$ and on $D$ for all $\theta \in {\mathbb R},$ $$ \prod_{j=1}^n \phi\left(\frac{\theta}{\sqrt{n}}, L_j \right) \rightarrow e^{\frac{-\theta^2\psi}{2}}, \qquad \mbox{ as } n \rightarrow \infty,$$
where $\psi$ is as in (\ref{et1}).\end{proposition}
{\em Proof:} Recall from (\ref{et1}) that $v(t) = E(\eta^2(t))$ for $ t \geq 0$. Consider $$X_{ni} = \frac{\eta_i(L_i)}{\sqrt{\sum_{j=1}^n v(L_j)}}\mbox{ for } 1 \leq i \leq n$$ and ${\mathcal F} = \sigma(L_i : i \geq1).$ Given ${\mathcal F},$ $\{X_{ni} : 1 \leq i \leq n\}$ is a triangular array of independent random variables such that for $1 \leq i
\leq n$, $E(X_{ni} | {\mathcal F}) = 0,$ $\sum_{i=1}^n E(X^2_{ni}
| {\mathcal F}) = 1.$
Let $\epsilon >0$ be given. Let $$L_n(\epsilon) = \sum_{i=1}^n E
\left( X_{ni}^2 \, ; X_{ni}^2 > \epsilon | {\mathcal F} \right).$$ By the strong law of large numbers, \begin{equation}\frac{\sum_{j=1}^n v(L_j)}{n} \rightarrow \psi \hspace{1in} \mbox{ w.p. 1.} \label{llnv}\end{equation}
Let $D$ be the event on which (\ref{llnv}) holds. Then on $D$ \begin{eqnarray*} \limsup_{n \rightarrow \infty} L_n(\epsilon) &\leq& \limsup_{n \rightarrow \infty} \frac{2}{n\psi} \sum_{i=1}^n E( |\eta_i(L_i)|^2 ; |\eta_i(L_i)|^2 > \frac{\epsilon n\psi}{2} \mid {\mathcal F})\\ &\leq& \limsup_{k \rightarrow \infty} \frac{2}{\psi} E(|\eta_1(L_1)|^2 ; |\eta_1(L_1)|^2 > k) \\ &=&0. \end{eqnarray*} Thus the Lindeberg--Feller Central Limit Theorem (see \cite{al}) implies that on $D$, for all $\theta \in {\mathbb R}$
$$ \prod_{i=1}^n \phi\left(\frac{\theta}{\sqrt{\sum_{j=1}^n v(L_j)}}, L_i \right) = E(e^{i\theta \sum_{j=1}^n X_{nj}} | {\mathcal F}) \rightarrow e^{\frac{-\theta^2}{2}}.$$ Combining this with (\ref{llnv}) yields the result.
$\Box$
\begin{proposition} \label{p3.4} For the randomly chosen individual at time $t$, let \\$\{ L_{ti}, \{\eta_{ti}(u) : 0 \leq u \leq L_{ti}\}: 1 \leq i \leq M_t\}$, be the lifetimes and motion processes of its ancestors. Let $Z_{t1} = {\frac{1}{\sqrt{M_t}}\sum_{i=1}^{M_t} \eta_{ti}(L_{ti})},$ and \\${\mathcal L}_t = \sigma \{M_t, L_{ti} : 1 \leq i
\leq M_t\}.$ Then \begin{equation} E\left( | E(e^{i \theta Z_{t1}} | {\mathcal L}_t )
- e^{-\frac{\theta^2 \psi}{2}}| \; | A_t \right) \rightarrow 0 \quad \mbox{ as } t \rightarrow \infty. \end{equation} \end{proposition}
{\em Proof:} Fix $\theta \in {\mathbb R}, \epsilon_1 >0$ and $\epsilon >0$. Replace the definition of ``bad'' in (\ref{bad}) by
\eq{\label{badc} |\prod_{i=1}^{k} \phi(\frac{\theta}{\sqrt{k}}, L_{ktji})
- e^{-\frac{\theta^2 \psi}{2}} | > \epsilon}
By Proposition \ref{clt} we have, \eq{\label{4l} \lim_{k \rightarrow
\infty} P (\sup_{j \geq k}|\prod_{i=1}^{j}
\phi(\frac{\theta}{\sqrt{j}}, L_i) - e^{-\frac{\theta^2 \psi}{2}} | >
\epsilon) =0.}
Using this in place of (\ref{3l}) and imitating the proof of
Proposition \ref{llnh}, (since the details mirror that proof we
avoid repeating them here), we obtain that for $t$ sufficiently
large \eq{\label{clt1} P( |\prod_{i=1}^{M_t}
\phi(\frac{\theta}{\sqrt{M_t}}, L_{ti}) - e^{-\frac{\theta^2 \psi}{2}} | >
\epsilon_1 | A_t) < \epsilon .} Now for all $ \theta \in {\mathbb R}$,
$$ E(e^{i \theta Z_{t1}} | {\mathcal L}_t ) = \prod_{i=1}^{M_t} \phi(\frac{\theta}{\sqrt{M_t}}, L_{ti}).$$ So,
\begin{eqnarray*} \lefteqn{\limsup_{t \rightarrow \infty} E(| E(e^{i\theta \frac{1}{\sqrt{M_t}}
\sum_{i=1}^{M_t} \eta_{ti}(L_{ti})}| {\mathcal L}_t ) - e^{-\frac{\theta^2 \psi}{2}}| | A_t)}\\
&= & \limsup_{t \rightarrow \infty}E(|\prod_{i=1}^{M_t}
\phi(\frac{\theta}{\sqrt{M_t}}, L_{ti}) - e^{-\frac{\theta^2 \psi}{2}} |
| A_t)\\ &<& \epsilon_1 + 2 \limsup_{t \rightarrow \infty} P( |\prod_{i=1}^{M_t} \phi(\frac{\theta}{\sqrt{M_t}}, L_{ti}) - e^{-\frac{\theta^2 \psi}{2}} |
> \epsilon_1 | A_t)\\ &=& \epsilon_1 + 2 \epsilon. \end{eqnarray*} Since $\epsilon >0, \epsilon_1 >0$ are arbitrary we have the result.
$\Box$
The above four Propositions will be used in the proof of Theorem \ref{oneparticle}. For the proof of Theorem \ref{em} we will need a result on coalescing times of the lines of descent.
Fix $k \geq 2$. On the event $A_t = \{ N_t >0 \},$ pick $k$ individuals $C_1, C_2, \ldots, C_k$ from those alive at time $t$ by simple random sampling without replacement. For any two particles $C_i, C_j$, let $\tau_{C_j,C_i,t}$ be the birth time of their most recent common ancestor. Let $\tau_{k-1,t} = \sup \{ \tau_{C_j,C_i,t} : i \neq j, 1 \leq i,j \leq k \}$. Thus $\tau_{k-1,t}$ is the first time there are $k-1$ ancestors of the $k$ individuals $C_1, C_2, \ldots, C_k.$ More generally, for $1\leq j \leq k-1$ define $\tau_{j,t}$ to be the first time there are $j$ ancestors of the $k$ individuals $C_1, C_2, \ldots, C_k$.
\begin{theorem} \label{cab}\mbox{}
\begin{enumerate} \item For any $i,j$, $\lim_{t \rightarrow \infty } P(
\frac{\tau_{C_i,C_j, t}}{t} \leq x | A_t) \equiv H(x)$ exists for all $x \geq 0$ and $H(\cdot)$ is an absolutely continuous distribution function on $[0,\infty)$. \item Conditioned on $A_t$, the vector $\tilde{\tau}_t = \frac{1}{t} (\tau_{j,t} : 1 \leq j \leq k-1)$ converges in distribution, as $ t\rightarrow \infty$, to a random vector $\tilde{T} = (T_1,\ldots, T_{k-1})$ with $0 < T_1<T_2< \ldots < T_{k-1} <1$ and having an absolutely continuous distribution on $[0,1]^{k-1}$. \end{enumerate} \end{theorem}
{\em Proof :} The proof of (i) and (ii) for cases $k=2,3$ is in \cite{durrett}. The following is an outline of a proof of (ii) for the case $k > 3$ (for a detailed proof see \cite{athreya}).
Below, for $1 \leq j \leq k-1,$ $\tau_{j,t} $ will be denoted by $\tau_j.$ It can be shown that it suffices to show that for any $ 1 \leq i_1 < i_2 \ldots < i_p <k$ and $ 0 < r_1 < r_2< \ldots < r_p <1$, $$\lim_{t \rightarrow \infty } P( \frac{\tau_{i_1}}{t} < r_1 < \frac{\tau_{i_2}}{t} < r_2 < \ldots < \frac{\tau_{i_p}}{t} < r_p <
\frac{\tau_{k-1}}{t} < r_{k-1} < 1 | A_t)$$ exists. We shall now condition on the population size at time $t r_1$. Suppose that at time $t r_1$ there are $n_{11}$ particles of which $k_{11}$ have descendants that survive till time $tr_2$. For each $1 \leq j \leq
k_{11},$ suppose there are $n_{2j}$ descendants alive at time $tr_2$ and for each such $j$, let $k_{2j}$ out of the $n_{2j}$ have descendants that survive till time $tr_3$. Let $k_2 = (k_{21},
\ldots, k_{2|k_1|})$ and $|k_2 | = \sum_{j=1}^{|k_1|} k_{2j}.$
Inductively at time $t r_i$, there are $n_{ij}$ descendants for the $j$-th particle, $1 \leq j \leq | k_{i-1}|. $ For each such $j$, let $k_{ij}$ out of $n_{ij}$ have descendants that survive up till time $t r_{i+1}$ (See Figure \ref{trp} for an illustration).
It will be useful to use the following notation: Let $$n_{11} , k_{11} \in {\mathbb N}, \quad k_{11} \leq n_{11}, \quad
\mid k_{1} \mid = k_{11}, \quad n_{1} = (n_{11}).$$ For $i = 2, \ldots, i_p$ let $ (n_i , k_i) \in {\mathbb N}_i $, where ${\mathbb N}_i \equiv {\mathbb N}^{\mid k_{i-1}\mid } \times {\mathbb N}^{\mid k_{i-1} \mid}$,
$$ k_{ij} \leq n_{ij}, \mid k_i \mid \equiv \sum_{j=1}^{|
k_{i-1}|} k_{ij}, {{n_i}\choose{k_i}} \equiv \prod_{j=1}^{|k_{i-1}|} {{n_{ij}}\choose{k_{ij}}}.$$
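In this multi-index notation, ${{n_i}\choose{k_i}}$ is simply a product of ordinary binomial coefficients, one factor per surviving ancestor. A short illustration with arbitrary values:

```python
from math import comb

# Multi-index binomial: C(n_i, k_i) = prod_j C(n_ij, k_ij),
# with |k_i| = sum_j k_ij  (one factor per surviving ancestor j).
n_i = [5, 3, 4]   # n_ij: descendants of ancestor j at time t*r_i (arbitrary values)
k_i = [2, 1, 3]   # k_ij: those with descendants surviving to t*r_{i+1}

multi_binom = 1
for n, k in zip(n_i, k_i):
    multi_binom *= comb(n, k)

abs_k_i = sum(k_i)            # |k_i|
print(multi_binom, abs_k_i)   # 10 * 3 * 4 = 120, and |k_i| = 6
```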
Let $f_s = P(N_s > 0)$. Now,
\begin{eqnarray*} \lefteqn{P( \frac{\tau_{i_1}}{t} < r_1 < \frac{\tau_{i_2}}{t} < r_2 < \ldots < \frac{\tau_{i_p}}{t} < r_p <
\frac{\tau_{k-1}}{t} < r_{k-1} < 1 \mid A_t)} \\ &=& \frac{f_{tr_1}}{f_t}\sum_{(n_i , k_i) \in {\mathbb N}_i}\left ({{n_{11}}\choose{k_{11}}}(f_{tr_1})^{k_{11}} (1-f_{tr_1})^{n_{11} -k_{11}} \right )
\frac{P(N_{tr_1} = n_1)}{f_{tr_1}} \times \\ &\times& \prod_{i=1}^{p+1} \prod_{j=1}^{|k_{i-1}|} {{n_{ij}}\choose{k_{ij}}}(f_{tu_i})^{k_{ij}} (1-f_{tu_i})^{n_{ij} -
k_{ij}} P(N^{j}_{tu_{i}} = n_{i,j}| N^{j}_{t u_{i}} > 0)\times \\ && \times g({\bf k}) E \frac{\prod_{j=1}^k X^j }{S^k}, \end{eqnarray*}
with $u_i = r_{i+1} - r_{i}, i= 1,2,\ldots, p-1,$ $u_p = 1-r_{p}$, where $N^{j}_{tu_{i}}$ is the number of particles alive at time $tu_i$ of the age-dependent branching process starting with the single particle $j$,
$g({\bf k}) = g(k_1, \ldots, k_p)$ is the proportion of configurations that have the desired number of ancestors corresponding to the given event, $X^j \stackrel{d}{=} N^{j}_{tu_p} | N^{j}_{t u_p} > 0$ and $S
= \sum_{j=1}^{| k_{p+1}|}X^j.$
Let $q_i = \frac{u_{i}}{u_{i+1}}$ for $1 \leq i \leq p-1$. Then following \cite{durrett} and using Proposition \ref{lemma0} (i), (ii) repeatedly we can show that $P( \frac{\tau_{i_1}}{t} < r_1 < \frac{\tau_{i_2}}{t} < r_2 < \ldots < \frac{\tau_{i_p}}{t} < r_p <
\frac{\tau_{k-1}}{t} < r_{k-1} < 1 \mid A_t)$ converges to \begin{eqnarray*} \lefteqn{\frac{1}{q_1} \sum_{k_i \in {\mathbb N}^{\mid k_{i-1}\mid}} \int dx e^{-x} (q_1 x)^{k_{11}} \frac{1}{k_{11}!}e^{-x q_1}\times }\\ &&
\times \prod_{i=2}^{p+1}\prod_{j=1}^{|k_{i-1}|} \int dx e^{-x} \frac{(q_i x)^{k_{ij}}}{k_{ij}!} e^{-x q_i} \, g({\bf k})\\ &&\times \int \prod_{i=1}^{k+1}dx_i \left (\frac{\prod_{i=1}^k x_i}{(\sum_{i=1}^{k+1}x_i)^k}\right) e^{-\sum_{i=1}^{k+1}x_i}
\frac{(x_{k+1})^{|k_{p+1}| -k}}{(|k_{p+1}| -k)!}\end{eqnarray*} \begin{eqnarray*}
&=&\frac{1}{q_1}\sum_{k_i \in {\mathbb N}^{\mid k_{i-1}\mid}} \prod_{i=1}^{p+1}
\frac{(q_i)^{|k_i|}}{(1+q_i)^{|k_i| - | k_{i-1}|}}g({\bf k}) \times \\ && \hspace{1in} \times\int \prod_{i=1}^{k+1}dx_i \left (\frac{\prod_{i=1}^k x_i}{(\sum_{i=1}^{k+1}x_i)^k}\right) e^{-\sum_{i=1}^{k+1}x_i}
\frac{(x_{k+1})^{|k_{p+1}| -k}}{(|k_{p+1}| -k)!}. \end{eqnarray*}
Consequently, we have shown that the random vector $\tilde{\tau}_t$ converges in distribution to a random vector $\tilde{T}$. From the above limiting quantity, one can show that $\tilde{T}$ has an absolutely continuous distribution on $[0,1]^{k-1}$. See \cite{athreya} for a detailed proof.
$\Box$
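The content of part (ii) is that the coalescence times are genuinely spread out on the macroscopic scale. As a sanity check, the following Monte Carlo sketch uses a critical Galton--Watson tree as a discrete-time stand-in for the age-dependent process (the geometric offspring law and all parameters here are illustrative choices, not taken from the model above): it conditions on survival to generation $n$, samples two individuals, and records the generation of their most recent common ancestor, scaled by $n$.

```python
import random

random.seed(1)

def gw_parents(n):
    """Critical Galton-Watson tree to generation n, geometric offspring law
    P(k) = 2^{-(k+1)} (mean 1).  Returns parents[g][i] = index in generation g
    of the parent of individual i in generation g+1, or None on extinction."""
    parents, z = [], 1
    for _ in range(n):
        gen = []
        for parent in range(z):
            while random.random() < 0.5:    # geometric number of children
                gen.append(parent)
        if not gen:
            return None                     # extinct before generation n
        parents.append(gen)
        z = len(gen)
    return parents

def mrca_fraction(n):
    """Scaled MRCA generation of two individuals sampled at generation n,
    conditioned on at least two survivors."""
    while True:
        tr = gw_parents(n)
        if tr is not None and len(tr[-1]) >= 2:
            break
    i, j = random.sample(range(len(tr[-1])), 2)
    g = n
    while i != j:                           # walk both lineages towards the root
        i, j = tr[g - 1][i], tr[g - 1][j]
        g -= 1
    return g / n

samples = [mrca_fraction(40) for _ in range(100)]
print(sum(samples) / len(samples))  # spread over (0,1), not degenerate at 0 or 1
```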
\begin{figure}
\caption{Tracking particles surviving at various times}
\label{trp}
\end{figure}
\section{Proof of Theorem \ref{oneparticle}} \label{pop}
For the individual chosen, let $(a_t, X_t)$ be the age and position at time $t$. As in Proposition \ref{p3.4}, let $\{ L_{ti}, \{\eta_{ti}(u), 0 \leq u \leq L_{ti}\} : 1 \leq i \leq M_t\},$ be the lifetimes and the motion processes of the ancestors of this individual and $\{ \eta_{t(M_t+1)}(u) : 0 \leq u \leq t-\sum_{i=1}^{M_t} L_{ti}\}$ be the motion of this individual. Let ${\mathcal L}_t = \sigma(M_t, L_{ti}, 1 \leq i \leq M_t).$ It is immediate from the construction of the process that: $$ a_t = t - \sum_{i=1}^{M_t} L_{ti}, $$ whenever $M_t >0$ and is equal to $a+t$ otherwise; and that $$ X_t = X_0 + \sum_{i=1}^{M_t} \eta_{ti}(L_{ti}) + \eta_{t(M_t+1)}(a_t). $$
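Dividing the last display by $\sqrt{t}$ gives $\frac{X_t}{\sqrt{t}} = \frac{X_0}{\sqrt{t}} + \sqrt{\frac{M_t}{t}}\, Z_{t1} + Z_{t2}$ with $Z_{t1} = \sum_{i=1}^{M_t} \eta_{ti}(L_{ti})/\sqrt{M_t}$ and $Z_{t2} = \eta_{t(M_t+1)}(a_t)/\sqrt{t}$. This is pure algebra; a quick numerical check with arbitrary made-up values (none of the numbers below come from the model):

```python
import math

# Arbitrary illustrative values (not from the model)
t, X0, Mt = 100.0, 1.3, 37
etas = [0.7, -1.2, 0.4] * 12 + [0.5]   # 37 ancestral displacements eta_ti(L_ti)
eta_last = -0.8                        # displacement over the current lifetime

Xt = X0 + sum(etas) + eta_last
Zt1 = sum(etas) / math.sqrt(Mt)
Zt2 = eta_last / math.sqrt(t)

lhs = Xt / math.sqrt(t)
rhs = X0 / math.sqrt(t) + math.sqrt(Mt / t) * Zt1 + Zt2
print(abs(lhs - rhs) < 1e-12)   # True: the two sides agree identically
```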
Rearranging the terms, we obtain $$ (a_t, \frac{X_t}{\sqrt{t}}) = (a_t, \sqrt{\frac{1}{\mu}} Z_{t1}) + (0, \left (\sqrt{\frac{M_t}{t}} -\sqrt{\frac{1}{\mu}} \right)Z_{t1}) + (0,
\frac{X_0}{\sqrt{t}} + Z_{t2}) ,$$ where $Z_{t1} = \frac{\sum_{i=1}^{M_t} \eta_{ti}(L_{ti})}{\sqrt{M_t}}$ and $Z_{t2} = \frac{1}{\sqrt{t}} \eta_{t(M_t +1)}(a_t)$. Let $\epsilon >0$ be given.
\begin{eqnarray*} P(|Z_{t2} | > \epsilon | A_t) &\leq & P(|Z_{t2} | > \epsilon, a_t \leq k | A_t) + P(|Z_{t2} | > \epsilon, a_t > k | A_t)\\
&\leq &P(|Z_{t2} | > \epsilon, a_t \leq k | A_t) + P(a_t > k | A_t)\\
&\leq& \frac{E( |Z_{t2}|^2 I_{a_t \leq k} | A_t)}{\epsilon^2} + P(a_t > k | A_t)\\ \end{eqnarray*}
By Proposition \ref{lemma0} and the ensuing tightness, for any $\eta >0$ there is a $k_\eta$ such that
$$ P( a_t > k | A_t) < \frac{\eta}{2} $$ for all $ k \geq k_\eta, t \geq 0$. Next,
\begin{eqnarray*} E( |Z_{t2}|^2 I_{a_t \leq k_\eta} | A_t) &=& E( I_{a_t\leq k_\eta} E(|Z_{t2}|^2 | {\mathcal L}_t) | A_t) \\ &=& E( I_{a_t\leq k_\eta}
\frac{v(a_t)}{t} | A_t)\\ &\leq & \frac{\sup_{u \leq k_\eta} v(u)}{t}.
\end{eqnarray*} Hence, \begin{eqnarray*} P(|Z_{t2} | > \epsilon | A_t) & \leq &
\frac{\sup_{u \leq k_\eta} v(u)}{t \epsilon^2} + \frac{\eta}{2}. \end{eqnarray*} Since $\epsilon >0 $ and $\eta>0$ are arbitrary, this shows that as $ t \rightarrow \infty$ \eq{\label{itl2} Z_{t2} | A_t \stackrel{d}{\longrightarrow} 0.}
Now, for $\lambda >0, \theta \in {\mathbb R}$, as $a_t$ is ${\mathcal L}_t$ measurable we have \begin{eqnarray*} E(e^{-\lambda a_t} e^{-i
\frac{\theta}{\sqrt{\mu}} Z_{t1}} | A_t) &= & E( e^{-\lambda a_t}(
E(e^{-i \frac{\theta}{\sqrt{\mu}} Z_{t1}} | {\mathcal L}_t) - e^{-\frac{\theta^2 \psi}{2 \mu}})
| A_t) + \\ && + \, e^{-\frac{\theta^2 \psi}{2 \mu}} E(e^{-\lambda a_t} | A_t).
\end{eqnarray*}
Proposition \ref{clt} shows that the first term above converges to zero and using Proposition \ref{lemma0} we can conclude that as $ t \rightarrow \infty$ \begin{equation} \label{itl3}
(a_t,\frac{1}{\sqrt{\mu}}Z_{t1}) | A_t \stackrel{d}{\longrightarrow} (U,V) \end{equation} As $X_0$ is a constant, by Proposition \ref{lemma0} (c), (\ref{itl3}), (\ref{itl2}) and Slutsky's Theorem, the proof is complete.
$\Box$
\section{Proof of Theorem \ref{em}} \label{pemp}
Let $\phi \in C_b({\mathbb R}_+ \times {\mathbb R})$. We shall show that, for each $k \geq 1$, the moments $E(\frac{<\tilde{Y}_t, \phi>^k}{N^k_t} | A_t)$ converge as $t \rightarrow \infty$. Then by Theorem 16.16 in \cite{ka} the result follows.
The case $k=1$ follows from Theorem \ref{oneparticle} and the bounded
convergence theorem. We shall next consider the case $k=2.$ Pick two
individuals $C_1,C_2$ at random (i.e. by simple random sampling
without replacement) from those alive at time $t$. Let the age and
position of the two individuals be denoted by $(a^i_t,X^i_t), i
=1,2.$ Let $\tau_t = \tau_{C_1,C_2,t}$ be the birth time of their most recent
common ancestor, say $D$, whose position we denote by
$\tilde{X}_{\tau_t}$. Let the net displacement of $C_1$ and $C_2$
from $D$ be denoted by $X^i_{t-\tau_t}, i=1,2$ respectively. Then
$X^i_t = \tilde{X}_{\tau_t} + X^{i}_{t -\tau_t}, i =1,2$.
Next, conditioned on ${\mathcal G}_t$, the history up to the birth of $D$, the random variables $(a^i_t, X^i_{t-\tau_t}), i =1,2$ are independent. By Theorem \ref{cab} (i), conditioned on $A_t$, $\frac{\tau_t}{t}$ converges in distribution to an absolutely continuous random variable $T$ (say) in $[0,1]$. Also by Theorem \ref{oneparticle}, conditioned on ${\mathcal G}_t$ and $A_t$, $\{ (a^i_t,\frac{X^i_{t-\tau_t}}{\sqrt{t-\tau_t}}), i = 1,2\} $ converges in distribution to $\{(U_i,V_i), i =1,2\}$ which are i.i.d. with distribution $(U,V)$ as in Theorem \ref{oneparticle}. Also $\frac{\tilde{X}_{\tau_t}}{\sqrt{\tau_t}}$ conditioned on $A_{\tau_t}$ converges in distribution to a random variable $S$ distributed as $V$.
Combining these one can conclude that $\{ (a^i_t, \frac{X^i_t}{\sqrt{t}}), i =1,2\}$ conditioned on $A_t$ converges in distribution to $\{(U_i, \sqrt{T}S + \sqrt{(1-T)}V_i), i =1,2\}$ where $U_1, U_2, S, V_1, V_2$ are all independent. Thus for any $\phi \in C_b ({\mathbb R}_+ \times {\mathbb R})$ we have, by the bounded convergence theorem,
\eq{ \lim_{t \rightarrow \infty} E( \prod_{i=1}^2 \phi (a^i_t, \frac{X^i_t}{\sqrt{t}}) | A_t) = E \prod_{i=1}^2\phi(U_i, \sqrt{T}S+ \sqrt{(1-T)}V_i) \equiv m_2(\phi) \mbox{ (say)}}
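The limit law $\sqrt{T}S + \sqrt{1-T}\,V_i$ mixes a common component $S$ (the shared ancestral displacement) with independent components $V_i$. For illustration only, take $S, V_1, V_2$ standard normal and $T$ uniform on $[0,1]$ (the actual laws of $V$ and $T$ are the ones given by Theorems \ref{oneparticle} and \ref{cab}); then each coordinate again has the one-particle marginal, while the pair is correlated through $T$:

```python
import random, math

random.seed(0)
m = 200_000
w1w1 = w1w2 = 0.0
for _ in range(m):
    T = random.random()          # common ancestral time (illustrative law)
    S = random.gauss(0, 1)       # common ancestral displacement (illustrative law)
    V1, V2 = random.gauss(0, 1), random.gauss(0, 1)
    W1 = math.sqrt(T) * S + math.sqrt(1 - T) * V1
    W2 = math.sqrt(T) * S + math.sqrt(1 - T) * V2
    w1w1 += W1 * W1
    w1w2 += W1 * W2

print(w1w1 / m)   # ~1: each coordinate keeps the one-particle marginal
print(w1w2 / m)   # ~E[T] = 0.5: correlation induced by the shared ancestor
```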
Now,
\begin{eqnarray*} E ( \left(\frac{\tilde{Y}_t(\phi )}{N_t} \right)^2 | A_t ) &=& E(\frac{(\phi(a_t, \frac{X_t}{\sqrt{t}}) )^2 }{N_t}| A_t)\\ && + E( \prod_{i=1}^2 \phi (a^i_t, \frac{X^i_t}{\sqrt{t}}) \frac{N_t (N_t-1)}{N_t^2} | A_t) \end{eqnarray*}
Using Proposition \ref{lemma0} (b) and the fact that $\phi$ is bounded we have \\$\lim_{t \rightarrow \infty} E ( (\frac{\tilde{Y}_t(\phi)}{N_t}
)^2 | A_t ) $ exists in $(0,\infty)$ and equals $m_2(\phi)$. The case $k > 2$ can be proved in a similar manner but we use Theorem \ref{cab} (ii) as outlined below. First we observe that as $\phi$ is bounded,
$$ E \left (\frac{<\tilde{Y}_t, \phi>^k}{N^k_t} | A_t \right )
= \sum_{{\bf i}} E\left( h(N_t,k) \prod_{j=1}^k \phi(a^{i_j}_t, \frac{X^{i_j}_t}{\sqrt{t}}) \,\Big|\, A_t \right)+ g( \phi,{\mathcal C}_t, N_t),$$
where $ h(N_t,k) \rightarrow 1$ and $g( \phi,{\mathcal C}_t, N_t) \rightarrow 0$ as $ t \rightarrow \infty$; and ${\bf i} = \{i_1,i_2,\ldots, i_k\}$ is the index of $k$ particles sampled without replacement from ${\mathcal C}_t$ (see (\ref{confi})). Consider one such sample, and re-trace the genealogical tree ${\mathcal T}_{\bf i} \in {\mathcal T}(k)$ (${\mathcal T}(k)$ is the collection of all possible trees with $k$ leaves given by ${\bf i}$), until their most recent common ancestor. For any leaf $i_j$ in ${\mathcal T}_{\bf i}$, let $1 = n(i_j,1) < n(i_j,2) < \cdots < n(i_j,N_{i_j})$ be the labels of the internal nodes on the path from leaf $i_j$ to the root. We list the ancestral times on this path as $\{\tau_{1}, \tau_{n(i_j,2)}, \ldots, \tau_{n(i_j,N_{i_j})}\}.$ Finally we denote the net displacement of the ancestors in the time intervals $$[0,\tau_1], [\tau_{1},\tau_{n(i_j,2)} ], \ldots, [\tau_{n(i_j,N_{i_j}-1)},
\tau_{n(i_j,N_{i_j} )}], [\tau_{n(i_j,N_{i_j})}, t]$$ by
\[\tilde{\eta}^1_{i_j}({\tau_1}), \tilde{\eta}_{i_j}^2(\tau_{n(i_j,2)},\tau_1), \ldots, \tilde{\eta}_{i_j}^{N_{i_j}}(\tau_{n(i_j,N_{i_j} )},\tau_{n(i_j,N_{i_j}-1) }),\tilde{\eta}_{i_j}^\prime(t,\tau_{n(i_j,N_{i_j})}).\] Given the above notation we have: \begin{eqnarray*}
E \left (\prod_{j=1}^k \phi(a^{i_j}_t, \frac{X^{i_j}_t}{\sqrt{t}}) \,\Big|\, A_t \right) &=& E \left( \sum_{ T \in {\mathcal T}_{{\bf i }}} \prod_{j=1}^k f(\phi,j,t) \,\Big|\, A_t \right), \end{eqnarray*} where $$f(\phi,j,t) = \phi\left(a^{i_j}_t, \frac{1}{\sqrt{t}} \left(\tilde{\eta}_{i_j}^1({\tau_1}) + \sum_{m=2}^{N_{i_j}} \tilde{\eta}_{i_j}^m(\tau_{n(i_j,m)},\tau_{n(i_j,m-1)}) + \tilde{\eta}_{i_j}^\prime(t,\tau_{n(i_j,N_{i_j})})\right)\right).$$ Now by Theorem \ref{cab}, $$\frac{(\tau_1,\tau_{n(i_j,2)}, \ldots,
\tau_{n(i_j,N_{i_j})})}{t} \Big|A_t \stackrel{d}{\longrightarrow} (T_1,T_{n(i_j,2)}, \ldots, T_{n(i_j,N_{i_j})}).$$ So by Theorem \ref{oneparticle} {
\begin{eqnarray} \lim_{t \rightarrow \infty} E ( \left(\frac{\tilde{Y}_t(\phi )}{N_t} \right)^k | A_t ) = E \left (\sum_{{\bf i}} \sum_{ T \in {\mathcal T}_{{\bf i }}} \prod_{j=1}^k g(\phi,j) | A_t \right) \nonumber \equiv m_k(\phi) \nonumber \\\label{mkpf} \end{eqnarray} }where \begin{eqnarray*} \lefteqn{g(\phi,j) = }\\ &&\phi\left (U, S \sqrt{T_1} + \sum_{m=2}^{N_{i_j}} Z_{i_j}^m \sqrt{T_{n(i_j,m)}- T_{n(i_j,m-1)}} + Z_{i_j}^\prime \sqrt{1- T_{n(i_j,N_{i_j})}} \right )\end{eqnarray*} with $S, Z^\prime_{i_j}, Z^m_{i_j}$, $m=2,\ldots ,N_{i_j},$ i.i.d. with the distribution of $V$, $U$ an independent random variable given in Theorem \ref{oneparticle} and the $T_i$'s as in Theorem \ref{cab} (ii). Since $\phi$ is bounded, the sequence $\{ m_k(\phi) \equiv \lim_{t \rightarrow \infty } E(\frac{<\tilde{Y}_t, \phi>^k}{N^k_t} | A_t )\}$ is necessarily a moment sequence of a probability distribution on ${\mathbb R}$. This being true for each $\phi$, by Theorem 16.16 in \cite{ka} we are done.
$\Box$
\section{ Proof of Theorem \ref{super}} \label{psp}
Let $Z$ be the branching Markov process $Y$ described earlier, with lifetime $G$ exponential with parameter $\lambda$, $p_1 =1$ and $\eta \stackrel{d}{=} \eta_1$ (see (\ref{mtn})). Then it is easy to see that for any bounded continuous function $\phi$, $S_t\phi(a,x) = E_{(a,x)}< Z_t, \phi> = E_{(a,x)} \phi(a_t,X_t)$ satisfies the following equation: \begin{equation} S_t\phi(a,x) = e^{-\lambda t} W_t\phi(a,x) + \int_0^t ds \lambda e^{-\lambda s} W_{s}(S_{t-s}(\phi)(0,\cdot))(a,x), \end{equation}
where $W_t$ is the semi-group associated to $\eta_1.$ Let ${\mathcal L}$ be the generator of $\eta_1$. Making a change of variable $s \rightarrow t-s$ in the second term of the above and then differentiating it with respect to $t$, we have \begin{eqnarray} \lefteqn{\frac{d}{dt}S_t(\phi)(a,x) = -\lambda e^{-\lambda t} W_t\phi(a,x) + e^{-\lambda t}{\mathcal L} W_t \phi(a,x) + \lambda S_{t}(\phi)(0,x)} \nonumber \\&& \hspace{1in} +\int_0^t ds \lambda(-\lambda e^{-\lambda (t-s)}) W_{t-s}(S_{s}(\phi)(0,\cdot))(a,x) \nonumber \\&& \hspace{1in} +\int_0^t ds\lambda e^{-\lambda (t-s)} {\mathcal L} W_{t-s}(S_{s}(\phi)(0,\cdot))(a,x)\nonumber \\ &=& \lambda S_{t}(\phi)(0,x) \nonumber \\ &&+ ({\mathcal L} -\lambda )\left [e^{-\lambda t} W_t\phi(a,x) + \int_0^t ds \lambda e^{-\lambda (t-s)} W_{t-s}(S_{s}(\phi)(0,\cdot))(a,x) \right ]\nonumber \\ &=& \lambda S_{t}(\phi)(0,x) + ({\mathcal L}-\lambda) S_t(\phi)(a,x) \nonumber \\ &=& \frac{\partial S_t\phi}{\partial a}(a,x) + \frac{\sigma^2(a)}{2}{\Delta S_t\phi}(a,x) + \lambda (S_{t}(\phi)(0,x) - S_t(\phi)(a,x)) \nonumber, \end{eqnarray}
For each $n \geq 1$ define (another semigroup) $R^n_t\phi(a,x) = E_{a,0}(\phi(a_t, x + \frac{X_t}{\sqrt{n}})).$ Now note that, \begin{eqnarray*} R^n_t\phi(a,x) &=& E_{a,0}(\phi(a_t, x + \frac{X_t}{\sqrt{n}}))\\
&=& E_{a,\sqrt{n}x}(\phi(a_t, \frac{X_t}{\sqrt{n}})) \\
&=& S_{t}\phi_n(a,\sqrt{n}x), \end{eqnarray*} where $\phi_n(a,x) = \phi(a, \frac{x}{\sqrt{n}}).$ Differentiating w.r.t. $t$, we have that the generator of $R^n_t$ is
\eq{ {\mathcal R}^n\phi(a,x) = \frac{\partial \phi}{\partial a}(a,x) + \frac{\sigma^2(a)}{2n}{\Delta\phi}(a,x) + \lambda (\phi(0,x) - \phi(a,x)).}
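The age part of this generator can be checked directly: over a short time $h$ the age either grows linearly (no death, probability $e^{-\lambda h}$) or is reset to $0$ at the first death. The sketch below compares a one-jump expansion of $E_a\phi(a_h)$ (terms with two or more deaths are $O(h^2)$) against $\frac{\partial \phi}{\partial a} + \lambda(\phi(0)-\phi(a))$ for the test function $\phi(a)=e^{-a}$; the values of $\lambda$, $a$, $h$ are illustrative, and the Brownian term $\frac{\sigma^2(a)}{2n}\Delta$ is omitted since it acts only on $x$.

```python
import math

lam, a, h = 2.0, 0.7, 1e-4
phi = lambda u: math.exp(-u)

# One-jump expansion of E_a[phi(a_h)]: either no death by time h, or one death
# at time s followed by age h - s.  Closed form of
#   int_0^h lam*exp(-lam*s) * phi(h-s) ds   for phi(u) = e^{-u}, lam != 1:
jump_term = lam * math.exp(-h) * (math.exp((1 - lam) * h) - 1) / (1 - lam)
semigroup = math.exp(-lam * h) * phi(a + h) + jump_term

numeric = (semigroup - phi(a)) / h
exact = -math.exp(-a) + lam * (1 - math.exp(-a))  # d(phi)/da + lam*(phi(0)-phi(a))
print(abs(numeric - exact) < 1e-3)   # True
```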
\begin{proposition} \label{bcp} Let $\epsilon >0$ and $t \geq \epsilon$. Let $\phi \in C_l^+ ({\mathbb R}_+ \times {\mathbb R})$. Then, \eq{\sup_{(a,x) \in {\mathbb R}_+ \times {\mathbb R}} \mid R^n_{nt}(\phi)(a,x)-
U_t(\phi)(x)\mid \rightarrow 0 \mbox{ as } n \rightarrow \infty.} \end{proposition}
{\em Proof:} Let $t\geq \epsilon$. Applying Theorem \ref{oneparticle} to the process $Z$, we have $(a_{nt}, \frac{X_{nt}}{\sqrt{n}}) \stackrel{d}{\longrightarrow} (U,V)$ as $n \rightarrow \infty$. The proposition is then immediate from the bounded convergence theorem and the fact that $\phi \in C_l^+ ({\mathbb R}_+ \times{\mathbb R})$.
$\Box$
\begin{proposition}
\label{logl} Let $\pi_{n\nu}$ be a Poisson random measure with intensity $n \nu$ and let $ t \geq 0$. The log-Laplace functional of ${\mathcal Y}_t^n$ satisfies
\eq{\label{scll} E_{\pi_{n \nu}} [e^{-\langle \phi, {\mathcal Y}_t^n \rangle}] = e^{- \langle u_t^n \phi, \nu \rangle}, }
where \eq{ \label{neq} u^n_t \phi(a,x) = R^n_{nt}n(1-e^{-\frac{\phi}{n}})(a,x) - \lambda \int_0^t ds R^n_{n(t -s)}(n^2 \Psi_n(\frac{u^n_s \phi}{n}))(a,x),} where \[ \Psi_n (\phi)(a,x) := \left[ F_n(1-\phi(0,x)) -(1-\phi(0,x)) \right]. \] \end{proposition}
{\em Proof:} For any $n\in {\mathbb N}$, let $Y^n_t$ be the sequence of branching Markov processes defined in Section \ref{mainresult}. It can be shown that its log-Laplace functional $L^{n}_t$ satisfies \begin{equation} L^n_{nt} \phi (a,x) = e^{-\lambda n t}W^n_{nt}[e^{-\phi}](a,x) + \int_0^{nt} ds \lambda e^{-\lambda s}W^n_{s} \left [ F_n(L^{n}_{nt-s}\phi(0,\cdot)) \right](a,x), \end{equation} where $ t \geq 0$ and $W^n_t$ is the semigroup associated with $\eta_n$. Using the fact that $e^{-\lambda u} = 1 - \int_0^u ds \lambda e^{-\lambda s}$ for all $u \geq 0$ and a routine simplification, as done in \cite{dy}, we obtain \eq{ L^n_{nt} \phi(a,x) = W^n_{nt}[e^{-\phi}](a,x) + \lambda \int_0^{nt} W^n_{nt -s}( F_n(L^n_{s}\phi(0,\cdot)) - L^n_{s}\phi )(a,x) ds.} Therefore $v^n_{nt}(\phi)(a,x) = 1- L^n_{nt} \phi(a,x)$ satisfies \eq{\label{ev} v^n_{nt} \phi (a,x) = W^n_{nt} (1 - e^{-\phi})(a,x) + \lambda \int_0^{nt} ds\, W^n_{nt-s} \left( (1 - v^n_{s}\phi) - F_n((1-v^n_{s}\phi)(0,\cdot)) \right)(a,x).} Let ${\mathcal L}^n$ be the generator of $\eta_n$. Then for $0 \leq s < t$, \begin{eqnarray*} \lefteqn{\frac{d}{ds} R^n_{n(t-s)}(v^n_{ns}(\phi))(a,x)}\\ &=& -(n{\mathcal R}^n)R^n_{n(t-s)}\left( v^n_{ns} (\phi) \right)(a,x) + R^n_{n(t-s)} \left( \frac{\partial}{\partial s} v^n_{ns} (\phi) \right)(a,x)\\ &=& -(n{\mathcal R}^n)R^n_{n(t-s)}\left( v^n_{ns} (\phi) \right)(a,x) \\ && + R^n_{n(t-s)} \left( n {\mathcal L}^n W^n_{ns}(1-e^{-\phi}) + n\lambda \left((1 - v^n_{ns}\phi) - F_n((1-v^n_{ns}\phi)(0,\cdot))\right) \right )(a,x)\\ && + R^n_{n(t-s)}\left (\int_0^{ns} dr\, n{\mathcal L}^n W^n_{ns-r}\left((1-v^n_r(\phi)) -F_n((1-v^n_r\phi)(0,\cdot))\right) \right)(a,x)\\ &=& R^n_{n(t-s)}\, n\lambda\left( -( v^n_{ns}(\phi)(0,\cdot) -v^n_{ns}(\phi)) + (1 - v^n_{ns}\phi) - F_n((1-v^n_{ns}\phi)(0,\cdot)) \right )(a,x)\\ &=& -R^n_{n(t-s)}(n\lambda\Psi_n(v^n_{ns}\phi))(a,x). \end{eqnarray*}
Integrating both sides with respect to $s$ from $0$ to $t$, we obtain that \begin{equation}\label{kv} v^n_{nt}(\phi)(a,x) = R^n_{nt}(1-e^{-\phi})(a,x) -\lambda\int_0^{t} ds R^n_{n(t-s)}(n\Psi_n(v^n_{ns}\phi))(a,x). \end{equation} If $\pi_{n \nu}$ is a Poisson random measure with intensity $n\nu$, then \begin{eqnarray*} E_{\pi_{n \nu}} [e^{-\langle \phi, {\mathcal Y}^n_t \rangle}] = E_{\pi_{n \nu}} [e^{-\langle \frac{\phi}{n}, {Y}^n_{nt} \rangle}]= e^{\langle L^n_{nt}(\frac{\phi}{n})-1,n\nu \rangle}= e^{ -\langle nv^n_{nt}(\frac{\phi}{n}),\nu \rangle}. \end{eqnarray*} We therefore set $u^n_t (\phi) \equiv nv^n_{nt}(\frac{\phi}{n})$. From (\ref{kv}), it is easy to see that $u^n_t(\phi)$ satisfies (\ref{neq}).
$\Box$
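The second equality in the computation of $E_{\pi_{n \nu}} [e^{-\langle \phi, {\mathcal Y}^n_t \rangle}]$ above is the standard Laplace functional of a Poisson random measure, $E_{\pi_\mu}[e^{-\langle f, \pi \rangle}] = e^{-\langle 1-e^{-f}, \mu\rangle}$. A Monte Carlo sanity check of that formula, with an illustrative choice of intensity ($\mu$ = total mass $2$ spread uniformly on $[0,1]$) and $f(x)=x$:

```python
import math, random

random.seed(0)
mass, trials = 2.0, 100_000

# pi: Poisson(mass) many points, each uniform on [0,1]; f(x) = x.
acc = 0.0
for _ in range(trials):
    # sample N ~ Poisson(mass) by inversion of the cdf
    n, p, u = 0, math.exp(-mass), random.random()
    c = p
    while u > c:
        n += 1
        p *= mass / n
        c += p
    acc += math.exp(-sum(random.random() for _ in range(n)))
empirical = acc / trials

integral = math.exp(-1)               # int_0^1 (1 - e^{-x}) dx = e^{-1}
exact = math.exp(-mass * integral)    # e^{-<1 - e^{-f}, mu>}
print(abs(empirical - exact) < 0.01)  # True (well within Monte Carlo error)
```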
For any $f: {\mathbb R}_+ \times {\mathbb R} \rightarrow {\mathbb R}$, we let $\parallel f \parallel_\infty = \sup_{(a,x) \in {\mathbb R}_+ \times {\mathbb R}} \mid f(a,x)\mid.$ With a little abuse of notation we shall let $\parallel f \parallel_\infty = \sup_{x \in {\mathbb R}} \mid f(x)\mid$ when $f: {\mathbb R} \rightarrow {\mathbb R}$ as well.
\begin{proposition} \label{ucc} Let $ \epsilon >0$, let $\phi \in C_l^+({\mathbb R}_+\times{\mathbb R})$, let $u^n_t(\phi)$ be as in Proposition \ref{logl} and let $u_t(\phi)$ be as in Theorem \ref{super}. Then for $t \geq \epsilon$, \begin{equation}
\sup_{(a,x) \in {\mathbb R}_+ \times {\mathbb R}} \mid u^n_t(\phi)(a,x) - u_t(\phi)(x)\mid \rightarrow 0 \mbox{ as } n \rightarrow \infty. \end{equation} \end{proposition}
{\em Proof:} For $u \in {\mathbb R}$, define $\varepsilon_n(u) = \lambda n^2(F_n(1-\frac{u}{n}) - (1-\frac{u}{n})) - u^2. $ So, \begin{eqnarray*} \lefteqn{u^n_t(\phi)(a,x) = R^n_{nt}n(1-e^{-\frac{\phi}{n}})(a,x) -\lambda \int_0^t ds R^n_{n(t -s)}(n^2 \Psi_n(\frac{u^n_s \phi}{n}))(a,x)}\\ &=&R^n_{nt}n(1-e^{-\frac{\phi}{n}})(a,x) - \int_0^t ds R^n_{n(t -s)}(\varepsilon_n(u^n_s(\phi(0,\cdot))))(a,x)\\ && \hspace{1in} - \lambda\int_0^t ds R^n_{n(t -s)} (u^n_{s}\phi(0,\cdot)^2)(a,x) \\ \end{eqnarray*} Now
\begin{eqnarray*} \lefteqn{u^n_t(\phi)(a,x) - u_t(\phi)(x) =}\\&& R^n_{nt}(n(1-e^{-\frac{\phi}{n}}))(a,x) -U_t(\phi)(x)\\ & - & \int_0^t ds R^n_{n(t -s)}(\varepsilon_n(u^n_s(\phi(0,\cdot))))(a,x) \\ &+& \lambda \int_0^t ds \left( U_{t -s}( (u_{s}\phi)^2)(x)- R^n_{n(t -s)} (u^n_{s}\phi(0,\cdot)^2)(a,x) \right) \end{eqnarray*} \begin{eqnarray*} &=& R^n_{nt}(n(1-e^{-\frac{\phi}{n}}))(a,x) -U_t(\phi)(x) - \int_0^t ds R^n_{n(t -s)}(\varepsilon_n(u^n_s(\phi(0,\cdot))))(a,x) \\&& + \lambda \int_0^t ds R^n_{n(t -s)}(( u_{s}\phi)^2-u^n_{s}\phi(0,\cdot)^2)(a,x) \\ && \hspace{1in} + \lambda \int_0^t ds \left (U_{t-s}((u_{s}\phi)^2)(x) -R^n_{n(t -s)} ((u_{s}\phi)^2)(a,x) \right ) \\ \end{eqnarray*} Observe that $R^n_{\cdot}$ is a contraction, $\parallel u^n_\cdot(\phi)\parallel_\infty \leq \parallel \phi \parallel_\infty$ and $\parallel u_\cdot(\phi) \parallel_\infty \leq \parallel \phi \parallel_\infty$ for $\phi \in C_l({\mathbb R}_+\times {\mathbb R}).$ Therefore, we have \begin{eqnarray*} \parallel u^n_t(\phi) - u_t(\phi) \parallel_{\infty} &\leq &\parallel R^n_{nt}(n(1-e^{-\frac{\phi}{n}})) -U_t(\phi) \parallel_{\infty} + t \sup_{s \leq t} \parallel \varepsilon_n(u^n_s\phi(0,\cdot))\parallel_\infty \\ &&+ 2\lambda \parallel \phi \parallel_\infty\int_0^t ds \parallel u^n_s (\phi) - u_s(\phi) \parallel_\infty \\ &&+ \lambda \int_0^t ds \parallel (U_{t-s} -R^n_{n(t -s)})(u_{s}\phi)^2 \parallel_\infty. \end{eqnarray*} For $\phi \in C_l({\mathbb R}_+\times{\mathbb R})$, note that since $U_t$ is a strongly continuous semi-group, $u_s(\phi)$ is a uniformly continuous function. So using Proposition \ref{bcp} the first term and the last term go to zero. By our assumption on $F_n$, $\sup_{s \leq t} \parallel \varepsilon_n(u^n_s\phi(0,\cdot))\parallel_\infty $ will go to zero as $n \rightarrow \infty.$ Now using the standard Gronwall argument we have the result.
$\Box$
\begin{proposition} \label{tight} Let $\epsilon >0$. The processes ${\mathcal Y}^n_\cdot$ are tight in $D([\epsilon,\infty), M({\mathbb R}_+ \times {\mathbb R}))$. \end{proposition}
{\em Proof:} By Theorem 3.7.1 and Theorem 3.6.5 (Aldous Criterion) in \cite{D}, it is enough to show \eq{\label{tight0}\langle {\mathcal Y}^n_{\tau_n + \delta_n}, \phi\rangle - \langle {\mathcal Y}^n_{\tau_n}, \phi\rangle \stackrel{d}{\longrightarrow} 0,} where $\phi \in C^+_l({\mathbb R}_+\times {\mathbb R})$, $\delta_n$ is a sequence of positive numbers that converges to $0$ and $\tau_n$ is any stopping time of the process ${\mathcal Y}^n$ with respect to the canonical filtration, satisfying $0 < \epsilon \leq \tau_n \leq T$ for some $T < \infty$.
First we note that, as $\langle {\mathcal Y}^n_t, 1 \rangle$ is a martingale, for $\gamma >0$, by Chebyshev's inequality and Doob's maximal inequality we have \eq{ \label{tight1} P( \langle {\mathcal Y}^n_{\tau_n}, \phi \rangle > \gamma) \leq \frac{1}{\gamma} c_1 \parallel \phi \parallel_\infty E(\sup_{\epsilon \leq t \leq T} \langle {\mathcal Y}^n_{t}, 1\rangle) \leq \frac{1}{\gamma} c_2 \parallel \phi \parallel_\infty. }
By the strong Markov Property applied to the process ${\mathcal Y}^n$ we obtain that for $\alpha, \beta \geq 0$, \begin{eqnarray*} L_n(\delta_n; \alpha, \beta) &=& E(\exp(-\alpha \langle {\mathcal Y}^n_{\tau_n + \delta_n}, \phi\rangle - \beta\langle {\mathcal Y}^n_{\tau_n}, \phi\rangle)) \\&=& E(\exp(-\langle{\mathcal Y}^n_{\tau_n}, u^n_{\delta_n}(\alpha\phi) + \beta\phi \rangle)) \\ &=&E(\exp(-\langle{\mathcal Y}^n_{\tau_n-\epsilon}, u^n_\epsilon(u^n_{\delta_n}(\alpha\phi) + \beta\phi )\rangle)) \end{eqnarray*} Therefore \begin{eqnarray*} \lefteqn{\mid L_n(0; \alpha, \beta) - L_n(\delta_n; \alpha, \beta)\mid } \\& \leq& \parallel u^n_\epsilon(u^n_{\delta_n}(\alpha\phi ) + \beta\phi)- u^n_\epsilon((\alpha + \beta)\phi) \parallel_\infty E(\sup_{t\leq T} \langle {\mathcal Y}^n_t, 1 \rangle)\\ &\leq & c_1 \parallel u^n_\epsilon(u^n_{\delta_n}(\alpha\phi ) + \beta\phi)- u^n_\epsilon((\alpha + \beta)\phi) \parallel_\infty \end{eqnarray*} where the last inequality is by Doob's maximal inequality.
Now, \begin{eqnarray*} \lefteqn{\parallel u^n_\epsilon(u^n_{\delta_n}(\alpha\phi ) + \beta\phi)- u^n_\epsilon((\alpha + \beta)\phi) \parallel_\infty \leq \parallel R^n_{n\epsilon}(u^n_{\delta_n}(\alpha\phi) - \alpha\phi) \parallel_\infty }\\ && + c_2 \parallel \phi \parallel_\infty\int_0^\epsilon da \parallel u^n_{a}( u^n_{\delta_n}(\alpha\phi) +\beta\phi)- u^n_{a}((\alpha + \beta)\phi) \parallel_\infty + d_n (\phi), \end{eqnarray*} where $d_n(\phi) = \lambda \int_0^\epsilon da \parallel \varepsilon_n( u^n_{a}( u^n_{\delta_n}(\alpha\phi) + \beta\phi)) + \varepsilon_n(u^n_{a}((\alpha + \beta)\phi)) \parallel_\infty.$ Observe that \begin{eqnarray*} \lefteqn{\parallel R^n_{n\epsilon}(u^n_{\delta_n}(\alpha\phi) - \alpha\phi) \parallel_\infty ~~\leq~~ \parallel R^n_{n\epsilon}(u^n_{\delta_n}(\alpha\phi) - R^n_{n\delta_n}(\alpha\phi))\parallel_\infty } \\ && \hspace{1in} \hspace{1in} + \parallel R^n_{n\epsilon}(R^n_{n\delta_n}(\alpha\phi)-\alpha\phi )\parallel_\infty \\ &\leq& \parallel u^n_{\delta_n}(\alpha\phi) - R^n_{n\delta_n}(\alpha\phi) \parallel_\infty + \parallel R^n_{n(\epsilon +\delta_n)}(\alpha\phi)- R^n_{n\epsilon}(\alpha\phi )\parallel_\infty\\ &\leq & \parallel R^n_{n\delta_n}(n(1-e^{-\frac{\alpha \phi}{n}})- \alpha\phi) \parallel_\infty + \int_0^{\delta_n} da \parallel R^n_{n(\delta_n -a)}(n^2 \Psi_n(\frac{u^n_a \phi}{n})) \parallel_\infty \\ && \hspace{1in} + \parallel R^n_{n(\epsilon +\delta_n)}(\alpha\phi)- R^n_{n\epsilon}(\alpha\phi )\parallel_\infty\\ &\leq& \parallel n(1-e^{-\frac{\alpha \phi}{n}})- \alpha\phi \parallel_\infty + \delta_n c_2 (\parallel \phi \parallel^2_\infty + 1) + \parallel R^n_{n(\epsilon +\delta_n)}(\alpha\phi)- R^n_{n\epsilon}(\alpha\phi )\parallel_\infty\\ && \equiv e_n(\phi). \end{eqnarray*} Consequently, \begin{eqnarray*} \lefteqn{\parallel u^n_\epsilon(u^n_{\delta_n}(\alpha\phi ) + \beta\phi)- u^n_\epsilon((\alpha+\beta)\phi) \parallel_\infty \leq e_n(\phi) + d_n (\phi)} \\ && + c_2 \parallel \phi \parallel_\infty \int_0^\epsilon da\parallel u^n_{a}(
u^n_{\delta_n}(\alpha\phi) +\beta\phi)- u^n_{a}((\alpha + \beta)\phi) \parallel_\infty. \end{eqnarray*} By Proposition \ref{bcp}, $e_n(\phi) \rightarrow 0$, and $d_n(\phi) \rightarrow 0$ by our assumption on $F_n.$ Hence by a standard Gronwall argument we have that \begin{equation} \label{nel} \mid L_n(0; \alpha, \beta) - L_n(\delta_n; \alpha, \beta)\mid \rightarrow 0. \end{equation}
By (\ref{tight1}), $\{\langle {\mathcal Y}^n_{\tau_n}, \phi \rangle; n = 1,2,\ldots \}$ is tight in ${\mathbb R}_+$. Take an arbitrary subsequence. Then there is a further subsequence of it indexed by $\{n_k; k = 1,2, \ldots\}$ such that $\langle {\mathcal Y}^{n_k}_{\tau_{n_k}}, \phi \rangle$ converges in distribution to some random limit $b$. Thus we get $$( {\mathcal Y}^{n_k}_{\tau_{n_k}}(\phi), {\mathcal Y}^{n_k}_{\tau_{n_k}} (\phi))\stackrel{d}{\longrightarrow} (b,b) \mbox{ as } k \rightarrow \infty.$$ But (\ref{nel}) implies that $$( {\mathcal Y}^{n_k}_{\tau_{n_k}}(\phi), {\mathcal Y}^{n_k}_{\tau_{n_k} + \delta_{n_k}} (\phi))\stackrel{d}{\longrightarrow} (b,b) \mbox{ as } k \rightarrow \infty.$$
This implies that $\langle {\mathcal Y}^{n_k}_{\tau_{n_k} + \delta_{n_k}}, \phi\rangle - \langle {\mathcal Y}^{n_k}_{\tau_{n_k}}, \phi\rangle \stackrel{d}{\longrightarrow} 0 \mbox{ as } k \rightarrow \infty.$ So (\ref{tight0}) holds and the proof is complete.
$\Box$
{\bf Proof of Theorem \ref{super}} Proposition \ref{ucc} shows that the log-Laplace functionals of the processes ${\mathcal Y}^n_t$ converge to those of ${\mathcal Y}_t$ for every $t \geq \epsilon $. Proposition \ref{tight} implies tightness for the processes. As the solution to (\ref{R}) is unique, we are done.
$\Box$
\end{document}
\begin{document}
\title{A photon-photon quantum gate based on a single atom in an optical resonator}
\author{Bastian~Hacker} \thanks{These authors contributed equally to this work.} \author{Stephan~Welte} \thanks{These authors contributed equally to this work.} \author{Gerhard~Rempe} \author{Stephan~Ritter} \email[To whom correspondence should be addressed. Email: ]{[email protected]}
\affiliation{Max-Planck-Institut f\"ur Quantenoptik, Hans-Kopfermann-Strasse 1, 85748 Garching, Germany}
\maketitle \textbf{Two photons in free space pass each other undisturbed. This is ideal for the faithful transmission of information, but prohibits an interaction between the photons as required for a plethora of applications in optical quantum information processing \cite{Kok2010}. The long-standing challenge here is to realise a deterministic photon-photon gate. This requires an interaction so strong that the two photons can shift each other's phase by $\boldsymbol{\pi}$. For polarisation qubits, this amounts to the conditional flipping of one photon's polarisation to an orthogonal state. So far, only probabilistic gates \cite{Knill2001} based on linear optics and photon detectors could be realised \cite{OBrien2003}, as ``no known or foreseen material has an optical nonlinearity strong enough to implement this conditional phase shift [\ldots]'' \cite{OBrien2007}. Meanwhile, tremendous progress in the development of quantum-nonlinear systems has opened up new possibilities for single-photon experiments \cite{Chang2014}. Platforms range from Rydberg blockade in atomic ensembles \cite{Gorshkov2011} to single-atom cavity quantum electrodynamics \cite{Reiserer2015}. Applications like single-photon switches \cite{Baur2014} and transistors \cite{Tiarks2014,Gorniaczyk2014}, two-photon gateways \cite{Kubanek2008}, nondestructive photon detectors \cite{Reiserer2013b}, photon routers \cite{Shomroni2014} and nonlinear phase shifters \cite{Turchette1995,Tiecke2014,Volz2014,Beck2015,Tiarks2016} have been demonstrated, but none of them with the ultimate information carriers, optical qubits in discriminable modes. Here we employ the strong light-matter coupling provided by a single atom in a high-finesse optical resonator to realise the Duan-Kimble protocol \cite{Duan2004} of a universal controlled phase flip (CPF, $\boldsymbol{\pi}$ phase shift) photon-photon quantum gate.
We achieve an average gate fidelity of $\boldsymbol{\overline{F}=(76.2\pm3.6)\%}$ and specifically demonstrate the capability of conditional polarisation flipping as well as entanglement generation between independent input photons. Being the key quantum logic element, our gate could readily perform most of the hitherto existing two-photon operations. It also discloses avenues towards new quantum information processing applications where photons are essential, especially long-distance quantum communication and scalable quantum computing.}
The perhaps simplest idea to realise a photonic quantum gate is to overlap the two photons in a nonlinear medium. However, it has been argued that this cannot ensure full mutual information transfer between the qubits for locality and causality reasons \cite{Shapiro2006,Gea2010}. Instead, a viable strategy is to keep the two photons separate, change the medium with the first one, use this change to affect the second photon, and, finally, make the first photon interact with the medium again to ensure gate reciprocity. These three subsequent interactions enable full mutual information exchange between the two qubits, as required for a gate, even though the photons never meet directly.
Our experimental realisation of a CPF photon-photon gate builds on the proposal by Duan and Kimble \cite{Duan2004}. The medium is a single atom strongly coupled to a cavity and the interactions happen upon reflection of each photon off the atom-cavity system \cite{Reiserer2014}. While the proposal considers three reflections, we replace the second reflection of the first photon by a measurement of the atomic state and classical phase feedback on the first photon (analogous to a proposal \cite{Duan2005} where the roles of light and matter are interchanged). In practice, this allows us to achieve better fidelities, higher efficiencies and to use a simpler setup compared to that of the proposed scheme.
\begin{figure}\label{fig:setup}
\end{figure}
We employ a single $^{87}$Rb atom trapped in a three-dimensional optical lattice \cite{Reiserer2013} at the centre of a one-sided optical high-finesse cavity \cite{Reiserer2013b} (Fig.\,\figref{fig:setup}). The measured cavity quantum electrodynamics parameters on the relevant transition $\ket{{\uparrow}}=\ket{F{=}2,m_F{=}2}\leftrightarrow \ket{\text{e}}=\ket{F{=}3,m_F{=}3}$ of the D$_2$ line are $(g,\kappa,\gamma)=2\pi\,(7,2.5,3)$\,MHz. The atom takes the role of an ancilla qubit, implemented in the basis $\ket{{\downarrow}}=\ket{F{=}1,m_F{=}1}$ and $\ket{{\uparrow}}$, with the quantisation axis along the cavity axis. Both photonic qubits are individually encoded in the polarisation using the notation $\ket{\mathrm L}$ and $\ket{\mathrm R}$ for a left- and a right-handed photon, respectively. They are consecutively coupled into the cavity beam path via a non-polarising beam splitter (98.5\% transmission) which plays the role of a polarisation-independent circulator. The photons as well as the empty cavity are on resonance with the transition $\ket{{\uparrow}}\leftrightarrow\ket{\text{e}}$ at $780\unit{nm}$. Only the atom in $\ket{{\uparrow}}$ and the photon in $\ket{\mathrm R}$ are strongly coupled, because the $\ket{{\downarrow}}\leftrightarrow\ket{\text{e}}$ transition is detuned by the ground-state hyperfine splitting of 6.8\,GHz, and the left-circularly polarised transition $\ket{{\uparrow}}\leftrightarrow \ket{F{=}3,m_F{=}1}$ is shifted out of resonance by a dynamical Stark shift induced by the laser that traps the atom. The strong light-matter coupling between $\ket{{\uparrow}}$ and $\ket{\mathrm R}$ shifts the phase of a reflected photon by $\pi$ compared to the cases where the atom occupies $\ket{{\downarrow}}$ or the photon is in $\ket{\text{L}}$. Thus, each reflection constitutes a bidirectional controlled-Z (CZ) interaction \cite{Reiserer2014} between the atomic and photonic qubit (red boxes in Fig.\,\figref{fig:scheme}a).
Figure \figref{fig:scheme}a depicts the experimental implementation of the photon-photon gate as a quantum circuit diagram. In short, the protocol starts with arbitrary photonic input qubits $\ket{p_1}$ and $\ket{p_2}$ and the atom optically pumped to $\ket{{\uparrow}}$. After this initialisation, two consecutive atomic-qubit rotations combined with CZ atom-photon quantum gates are performed. The purpose of the rotations is to maximize the effect of the subsequent gates. Note that up to this point the first photon has the capability to act via the atom onto the second photon. To implement a back-action of the second photon onto the first one, the protocol ends with a measurement of the atomic qubit and feedback onto the first photon. This measurement has the additional advantage that it removes any possible entanglement of the atom with the photons, as required for an ancillary qubit. A longer and \hyperref[methods:composition]{detailed stepwise analysis} of the above protocol as well as the \hyperref[methods:rotations]{characterisation of the Raman lasers} used for the implementation of the atomic-state rotations can be found in the Methods.
\begin{figure}\label{fig:scheme}
\end{figure}
To apply this scheme in practice, the qubits have to be stored and controlled in an appropriately timed sequence: After the first photon $p_1$ is reflected, it directly enters a $1.2\unit{km}$ delay fibre. The delay time of $6\unit{\ensuremath{\textnormal\textmu} s}$ is sufficient to allow for reflection of both photons from the cavity, two coherent spin rotations, and state detection on the atom (Fig.\,\figref{fig:scheme}b). The two photon wave packets are in independent spatio-temporal modes which can in principle be arbitrarily shaped. The only requirement is that the frequency spectrum should fall within the acceptance bandwidth of the cavity ($0.7\unit{MHz}$ for $\pm0.1\pi$ phase shift accuracy). We used Gaussian-like envelopes of $0.6\unit{\ensuremath{\textnormal\textmu} s}$ full width at half maximum (FWHM) within individual time windows of $1.3\unit{\ensuremath{\textnormal\textmu} s}$ width, such that the corresponding FWHM bandwidth of $0.7\unit{MHz}$ leads to an acceptable phase-shift spread.
After the last spin rotation, Purcell-enhanced fluorescence state detection of the atomic qubit is performed. This is achieved within $1.2\unit{\ensuremath{\textnormal\textmu} s}$ with a laser beam resonant with the $\ket{{\uparrow}}\leftrightarrow\ket{\text{e}}$ transition and impinging perpendicular to the cavity axis (blue beam in Fig.\,\figref{fig:setup}). This yields zero fluorescence photons for $\ket{{\downarrow}}$ and a near-Poissonian-distributed photon number with an average of 4 for $\ket{{\uparrow}}$, resulting in a discrimination fidelity of 96\%. The fluorescence light shares the same spatial mode as the gate photons and needs to be detected before the first photon leaves the delay fibre. Separation of the fluorescence light from the qubit photons is achieved with an efficient free-space acousto-optical deflector (AOD, labelled `Switch' in Fig.\,\figref{fig:setup}). Qubit photons pass the deactivated AOD straight towards the delay fibre, whereas state-detection photons are deflected into the first diffraction order directed at a single-photon detector (SPD). The corresponding detection events are evaluated in real time by a field programmable gate array (FPGA), which activates a $\pi$ phase shift on the $\ket{\mathrm R}$ component of the first gate photon if the atom was detected in $\ket{{\uparrow}}$. No phase shift is applied if the atom was found in $\ket{{\downarrow}}$. This conditional phase shift is performed by an electro-optical modulator (EOM) with a switching time of $0.1\unit{\ensuremath{\textnormal\textmu} s}$, which is ready when $p_1$ leaves the delay fibre and is reset before $p_2$ appears at the end of the fibre. The experiment runs at a rate of $500\unit{Hz}$, with each execution preceded by atom cooling, atomic state preparation via optical pumping and probing of cavity transmission to confirm success of the initialisation. 
All experiments with one detected qubit photon in each of the two temporal output modes are evaluated without further post-selection.
If both input photons are circularly polarised, the photon-photon gate appears as a CPF gate (see \hyperref[methods:composition]{Methods}) characterised by: \begin{align*} &\ket{\mathrm{RR}}\rightarrow\ket{\mathrm{RR}} &&\ket{\mathrm{LR}}\rightarrow-\ket{\mathrm{LR}}\\ &\ket{\mathrm{RL}}\rightarrow\ket{\mathrm{RL}} &&\ket{\mathrm{LL}}\rightarrow\ket{\mathrm{LL}}. \end{align*} As with any quantum gate, it can also be expressed in other bases. We define the linear polarisation bases as $\ket{\mathrm H}=\frac{1}{\sqrt{2}}(\ket{\mathrm R}{+}\ket{\mathrm L})$, $\ket{\mathrm V}=\frac{1}{\sqrt{2}}(\ket{\mathrm R}{-}\ket{\mathrm L})$, $\ket{\mathrm D}=\frac{1}{\sqrt{2}}(\ket{\mathrm R}{+}i\ket{\mathrm L})$, and $\ket{\mathrm A}=\frac{-1}{\sqrt{2}}(i\ket{\mathrm R}{+}\ket{\mathrm L})$, respectively. With one of the photons being circularly and the other one linearly polarised, the gate will act as a controlled-NOT (CNOT) gate with the circular qubit being the control and the linear one being the target qubit. When both photons enter in linear polarisation states, the gate will turn the two separable inputs into a maximally entangled state.
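This change of picture is easy to verify numerically. The following sketch (illustrative, not from this work) applies the CPF matrix, written in the circular two-photon basis, to mixed circular/linear inputs and confirms the CNOT truth-table action: the linear target is flipped exactly when the control is $\ket{\mathrm L}$ (up to a phase on the $\ket{\mathrm L}$ branch that is unobservable in the truth table):

```python
import numpy as np

# CPF in the circular two-photon basis |p1 p2>, ordered RR, RL, LR, LL
CPF = np.diag([1, 1, -1, 1]).astype(complex)

R, L = np.array([1, 0], complex), np.array([0, 1], complex)
H, V = (R + L) / np.sqrt(2), (R - L) / np.sqrt(2)

def ket(a, b):
    """Two-photon product state |a b>."""
    return np.kron(a, b)

# Control R leaves the linear target unchanged; control L flips H <-> V
for ctrl, tgt, tgt_out in [(R, H, H), (R, V, V), (L, H, V), (L, V, H)]:
    out = CPF @ ket(ctrl, tgt)
    assert np.isclose(abs(np.vdot(ket(ctrl, tgt_out), out)), 1.0)
```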
We characterised the gate by applying it to various pairs of separable input-qubit combinations and by measuring the average outcome from a large set of repeated trials. The inputs were two independent weak coherent pulses, each impinging onto the cavity with an average photon number of $\overline{n}=0.17$. The choice of $\overline{n}$ is a compromise between measurement time and measured gate fidelity. While lowering $\overline{n}$ reduces the data rate because of the high probability of zero-photon events in either of the two photon modes, increasing $\overline{n}$ raises the multi-photon probability per pulse, thereby deteriorating the measured gate fidelity.
First, we processed the four different input states of a CNOT basis, i.e.\ all combinations of photon $p_1$ in the circular basis and $p_2$ in a linear basis, and analysed them in the corresponding measurement bases. The resulting truth table is depicted in Fig.\,\figref{fig:truthtable} and shows an overlap with the case of an ideal CNOT gate of $F_\textnormal{CNOT}=(76.9\pm1.5)\%$.
\begin{figure}
\caption{ \textbf{Truth table of the CNOT photon-photon gate.} The gate flips the linear polarisation of the target photon $p_2$ if the control photon $p_1$ is in the state $\ket{\mathrm L}$, while it leaves the target qubit unchanged if the control photon is in $\ket{\mathrm R}$. The vertical axis gives the probability to measure a certain output state given the designated input state. The truth table for an ideal CNOT gate is indicated by the four light transparent bars with $\text{P}=1$. The black T-shaped bars represent statistical errors on each entry (rms 2.2\%), computed via linear error propagation assuming independent photon statistics.}
\label{fig:truthtable}
\end{figure}
A decisive property of a quantum gate that distinguishes it from its classical counterpart is its capability to generate entanglement. For both input photons in the linear polarisation state $\ket{\mathrm D}$, the gate ideally creates the maximally entangled Bell state $\ket{\Psi^+}=\frac{1}{\sqrt{2}}(\ket{\mathrm{DL}}+\ket{\mathrm{AR}})$. We reconstructed the output of the gate for the input state $\ket{\mathrm{DD}}$ from 1378 detected photon pairs via linear inversion and obtained the density matrix $\rho$ depicted in Fig.\,\figref{fig:densitymatrix}. It has a fidelity $F_{\Psi^+}=\langle\Psi^+|\rho|\Psi^+\rangle=(72.9\pm 2.8)\%$ with the ideal Bell state (unbiased linear estimate). The generation of this entangled state from a separable input state directly sets a non-tight bound for the entangling capability (smallest eigenvalue of the partially transposed density matrix)\cite{Poyatos1997} of our gate, $\mathcal{C}\leq-0.242\pm0.028$, which is $-0.5$ for the ideal CPF gate and where a negative $\mathcal{C}$ denotes that the gate is entangling. We remark that the total data set can be separated into two subsets of equal size corresponding to the outcome of the atomic state detection being $\ket{{\downarrow}}$ or $\ket{{\uparrow}}$. The respective fidelities are $F^{{\downarrow}}_{\Psi^+}=(74.4\pm 3.9)\%$ and $F^{{\uparrow}}_{\Psi^+}=(71.5\pm 4.2)\%$, i.e.\ the gate works comparably well in both cases.
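The ideal value of this bound can be reproduced directly. A short sketch (assumed, for illustration) builds the Bell state $\ket{\Psi^+}$ from the basis definitions above, takes the partial transpose of its density matrix on the second photon and recovers the ideal entangling capability of $-0.5$:

```python
import numpy as np

# Circular basis vectors and the diagonal/antidiagonal superpositions
R, L = np.array([1, 0], complex), np.array([0, 1], complex)
D = (R + 1j * L) / np.sqrt(2)
A = -(1j * R + L) / np.sqrt(2)

# Ideal gate output for the separable input |DD>: (|DL> + |AR>)/sqrt(2)
psi = (np.kron(D, L) + np.kron(A, R)) / np.sqrt(2)
rho = np.outer(psi, psi.conj())

# Partial transpose on the second photon: reshape to 2x2x2x2 and swap its indices
rho_pt = rho.reshape(2, 2, 2, 2).transpose(0, 3, 2, 1).reshape(4, 4)

# Smallest eigenvalue of the partially transposed density matrix
assert np.isclose(np.linalg.eigvalsh(rho_pt).min(), -0.5)
```

A negative smallest eigenvalue certifies entanglement; $-0.5$ is the extremal value reached only by maximally entangled two-qubit states.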
\begin{figure}\label{fig:densitymatrix}
\end{figure}
As an overall measure of the gate performance we determined the average gate fidelity $\overline{F}$, which is equal to the average fidelity of the $6\times6=36$ output states generated from input states on all canonical polarisation axes (H, V, D, A, R, L) with the theoretically expected ideal outcomes \cite{Bagan2003}. All 36 state fidelities were estimated linearly and bias-free with randomised tomographically complete basis settings. Although we collected only limited statistics of 80 detected photon pairs for each of the output states, their combination gives a meaningful measure of $\overline{F}=(76.2\pm3.6)\%$. The deviation from unity is well understood for our system and results from technical imperfections which we discuss below.
The efficiency of the presented gate, which is the combined transmission probability for two photons, is unity for the ideal scheme, but gets reduced by several experimental imperfections. It is polarisation-independent because all optical elements including the cavity have near-equal losses for all polarisations. The two main loss channels are the long delay fibre (transmission $T=40.4\%$) and the limited cavity reflectivity ($R=67\%$). The latter results from the cavity not being perfectly single-sided and a finite cooperativity of $C=3.3$. All other optical elements have a combined transmission of $81\%$, dominated by the fibre-coupling efficiency and absorption of the AOD switch. This yields a total experimental gate efficiency of $(22\%)^2=4.8\%$. Despite the transmission losses, characteristic of all photonic devices, the protocol itself is deterministic. The largest potential improvement is offered by eliminating the fibre-induced losses, for instance by a free-space delay line, a delay cavity or an efficient optical quantum memory.
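The quoted numbers combine per photon and are squared for the photon pair; a quick check (values taken from the text):

```python
# Per-photon efficiency budget (numbers from the text)
T_fibre  = 0.404   # 1.2 km delay-fibre transmission
R_cavity = 0.67    # limited cavity reflectivity
T_optics = 0.81    # all remaining optics (fibre coupling, AOD switch, ...)

eta_photon = T_fibre * R_cavity * T_optics   # single-photon transmission
eta_gate   = eta_photon ** 2                 # both photons must survive

assert round(eta_photon, 2) == 0.22
assert round(eta_gate, 3) == 0.048
```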
We have modelled all known sources of error (see \hyperref[methods:simulation]{Methods}) to reproduce the deviation of the experimental gate fidelity from unity. Here, we quote the reductions in fidelity that each individual effect would introduce to an otherwise perfect gate. The largest contribution stems from using weak coherent pulses to characterise the gate and is therefore not intrinsic to the performance of the gate itself. First, there is a significant probability of having two photons in one qubit mode if it is populated, resulting in a phase flip of $2\pi$ instead of $\pi$, causing an overall reduction of the gate fidelity by 12\%. Second, the probability to have both qubit modes populated is small, such that detector dark counts contribute 2\% error. The measured gate fidelity could therefore be greatly improved by employing a true single-photon source \cite{Reiserer2015}.
The relatively short delay introduced by the optical fibre restricts the temporal windows for the photon pulses and atomic state detection. The resulting bandwidth of the photons reduces the gate fidelity by 6\%. The obvious solution is to choose a longer delay. Further errors can be attributed to the characteristics of the optical cavity (5\%), the state of the atom (6\%), and other optical elements (2\%). The cavity has a polarisation-eigenmode splitting of $420\unit{kHz}$ that could be eliminated by mirror selection \cite{Uphoff2015}. Neither the resonance frequency of the cavity nor the spatial overlap between its mode and the fibre mode are perfectly controlled (see \hyperref[methods:modematching]{Methods}). The latter could be improved with additional or better optical elements. Fidelity reductions associated with the state of the atom are due to imperfect state preparation, manipulation and detection, and decoherence. Improvements are expected from the application of cavity-enhanced state detection to herald successful state preparation, Raman sideband cooling to eliminate variations in the Stark shift of the atom, and composite pulses to optimise the state rotations. The limited precision of polarisation settings and polarisation drifts inside the delay fibre are the main contribution from other optical elements. The latter could be improved using active stabilization. The wealth of realistic suggestions for improvement given above shows that progress towards even higher fidelities is certainly feasible for the presented gate implementation.
The photon-photon gate as first demonstrated here follows a deterministic protocol and could therefore be a scalable building block for new photon-processing tasks such as those required by quantum repeaters \cite{Briegel1998}, for the generation of photonic cluster states \cite{Raussendorf2001} or quantum computers \cite{Ladd2010}. The gate's ability to entangle independent photons could be a resource for quantum communication. Moreover, our gate could serve as the central processing unit of an all-optical quantum computer, envisioned to process pairs of photonic qubits that are individually stored in and retrieved from an in principle arbitrarily large quantum cache. Such a cache would consist of an addressable array of quantum memories, individually connected to the gate via optical fibres. Eventually, such an architecture might even be implemented with photonic waveguides on a chip.
We thank Norbert Kalb, Andreas Neuzner, Andreas Reiserer and Manuel Uphoff for fruitful discussions and support throughout the experiment. This work was supported by the European Union (Collaborative Pro\-ject SIQS) and by the Bundesministerium f\"ur Bildung und Forschung via IKT 2020 (Q.com-Q) and by the Deutsche Forschungsgemeinschaft via the excellence cluster Nanosystems Initiative Munich (NIM). S.\,W. was supported by the doctorate program Exploring Quantum Matter (ExQM).
\begin{thebibliography}{10} \expandafter\ifx\csname url\endcsname\relax
\def\url#1{\texttt{#1}}\fi \expandafter\ifx\csname urlprefix\endcsname\relax\def\urlprefix{URL }\fi \providecommand{\bibinfo}[2]{#2}
\bibitem{Kok2010} \bibinfo{author}{Kok, P.} \& \bibinfo{author}{Lovett, B.~W.} \newblock \emph{\bibinfo{title}{Introduction to Optical Quantum Information
Processing}} (\bibinfo{publisher}{Cambridge University Press},
\bibinfo{year}{2010}).
\bibitem{Knill2001} \bibinfo{author}{Knill, E.}, \bibinfo{author}{Laflamme, R.} \&
\bibinfo{author}{Milburn, G.~J.} \newblock \bibinfo{title}{A scheme for efficient quantum computation with
linear optics}. \newblock
\href{http://dx.doi.org/10.1038/35051009}{\emph{\bibinfo{journal}{Nature}}
\textbf{\bibinfo{volume}{409}}, \bibinfo{pages}{46--52}}
(\bibinfo{year}{2001}).
\bibitem{OBrien2003} \bibinfo{author}{{O'Brien}, J.~L.}, \bibinfo{author}{Pryde, G.~J.},
\bibinfo{author}{White, A.~G.}, \bibinfo{author}{Ralph, T.~C.} \&
\bibinfo{author}{Branning, D.} \newblock \bibinfo{title}{Demonstration of an all-optical quantum
controlled-{NOT} gate}. \newblock
\href{http://dx.doi.org/10.1038/nature02054}{\emph{\bibinfo{journal}{Nature}}
\textbf{\bibinfo{volume}{426}}, \bibinfo{pages}{264--267}}
(\bibinfo{year}{2003}).
\bibitem{OBrien2007} \bibinfo{author}{{O'Brien}, J.~L.} \newblock \bibinfo{title}{Optical Quantum Computing}. \newblock
\href{http://dx.doi.org/10.1126/science.1142892}{\emph{\bibinfo{journal}{Science}}
\textbf{\bibinfo{volume}{318}}, \bibinfo{pages}{1567--1570}}
(\bibinfo{year}{2007}).
\bibitem{Chang2014} \bibinfo{author}{Chang, D.~E.}, \bibinfo{author}{Vuleti\'{c}, V.} \&
\bibinfo{author}{Lukin, M.~D.} \newblock \bibinfo{title}{{Quantum nonlinear optics --- photon by photon}}. \newblock
\href{http://dx.doi.org/10.1038/nphoton.2014.192}{\emph{\bibinfo{journal}{Nature
Photon.}} \textbf{\bibinfo{volume}{8}}, \bibinfo{pages}{685--694}}
(\bibinfo{year}{2014}).
\bibitem{Gorshkov2011} \bibinfo{author}{Gorshkov, A.~V.}, \bibinfo{author}{Otterbach, J.},
\bibinfo{author}{Fleischhauer, M.}, \bibinfo{author}{Pohl, T.} \&
\bibinfo{author}{Lukin, M.~D.} \newblock \bibinfo{title}{{Photon-photon interactions via Rydberg blockade}}. \newblock
\href{http://dx.doi.org/10.1103/PhysRevLett.107.133602}{\emph{\bibinfo{journal}{Phys.
Rev. Lett.}} \textbf{\bibinfo{volume}{107}}, \bibinfo{pages}{133602}}
(\bibinfo{year}{2011}).
\bibitem{Reiserer2015} \bibinfo{author}{Reiserer, A.} \& \bibinfo{author}{Rempe, G.} \newblock \bibinfo{title}{Cavity-based quantum networks with single atoms and
optical photons}. \newblock
\href{http://dx.doi.org/10.1103/RevModPhys.87.1379}{\emph{\bibinfo{journal}{Rev.
Mod. Phys.}} \textbf{\bibinfo{volume}{87}}, \bibinfo{pages}{1379}}
(\bibinfo{year}{2015}).
\bibitem{Baur2014} \bibinfo{author}{Baur, S.}, \bibinfo{author}{Tiarks, D.},
\bibinfo{author}{Rempe, G.} \& \bibinfo{author}{D\"{u}rr, S.} \newblock \bibinfo{title}{{Single-photon switch based on Rydberg blockade.}} \newblock
\href{http://dx.doi.org/10.1103/PhysRevLett.112.073901}{\emph{\bibinfo{journal}{Phys.
Rev. Lett.}} \textbf{\bibinfo{volume}{112}}, \bibinfo{pages}{073901}}
(\bibinfo{year}{2014}).
\bibitem{Tiarks2014} \bibinfo{author}{Tiarks, D.}, \bibinfo{author}{Baur, S.},
\bibinfo{author}{Schneider, K.}, \bibinfo{author}{D{\"u}rr, S.} \&
\bibinfo{author}{Rempe, G.} \newblock \bibinfo{title}{Single-photon transistor using a F{\"o}rster
resonance}. \newblock
\href{http://dx.doi.org/10.1103/PhysRevLett.113.053602}{\emph{\bibinfo{journal}{Phys.
Rev. Lett.}} \textbf{\bibinfo{volume}{113}}, \bibinfo{pages}{053602}}
(\bibinfo{year}{2014}).
\bibitem{Gorniaczyk2014} \bibinfo{author}{Gorniaczyk, H.}, \bibinfo{author}{Tresp, C.},
\bibinfo{author}{Schmidt, J.}, \bibinfo{author}{Fedder, H.} \&
\bibinfo{author}{Hofferberth, S.} \newblock \bibinfo{title}{Single-Photon Transistor Mediated by Interstate
Rydberg Interactions}. \newblock
\href{http://dx.doi.org/10.1103/PhysRevLett.113.053601}{\emph{\bibinfo{journal}{Phys.
Rev. Lett.}} \textbf{\bibinfo{volume}{113}}, \bibinfo{pages}{053601}}
(\bibinfo{year}{2014}).
\bibitem{Kubanek2008} \bibinfo{author}{Kubanek, A.} \emph{et~al.} \newblock \bibinfo{title}{{Two-Photon Gateway in One-Atom Cavity Quantum
Electrodynamics}}. \newblock
\href{http://dx.doi.org/10.1103/PhysRevLett.101.203602}{\emph{\bibinfo{journal}{Phys.
Rev. Lett.}} \textbf{\bibinfo{volume}{101}}, \bibinfo{pages}{203602}}
(\bibinfo{year}{2008}).
\bibitem{Reiserer2013b} \bibinfo{author}{Reiserer, A.}, \bibinfo{author}{Ritter, S.} \&
\bibinfo{author}{Rempe, G.} \newblock \bibinfo{title}{Nondestructive Detection of an Optical Photon}. \newblock
\href{http://dx.doi.org/10.1126/science.1246164}{\emph{\bibinfo{journal}{Science}}
\textbf{\bibinfo{volume}{342}}, \bibinfo{pages}{1349--1351}}
(\bibinfo{year}{2013}).
\bibitem{Shomroni2014} \bibinfo{author}{Shomroni, I.} \emph{et~al.} \newblock \bibinfo{title}{All-optical routing of single photons by a one-atom
switch controlled by a single photon}. \newblock
\href{http://dx.doi.org/10.1126/science.1254699}{\emph{\bibinfo{journal}{Science}}
\textbf{\bibinfo{volume}{345}}, \bibinfo{pages}{903--906}}
(\bibinfo{year}{2014}).
\bibitem{Turchette1995} \bibinfo{author}{Turchette, Q.~A.}, \bibinfo{author}{Hood, C.~J.},
\bibinfo{author}{Lange, W.}, \bibinfo{author}{Mabuchi, H.} \&
\bibinfo{author}{Kimble, H.~J.} \newblock \bibinfo{title}{Measurement of Conditional Phase Shifts for Quantum
Logic}. \newblock
\href{http://dx.doi.org/10.1103/PhysRevLett.75.4710}{\emph{\bibinfo{journal}{Phys.
Rev. Lett.}} \textbf{\bibinfo{volume}{75}}, \bibinfo{pages}{4710--4713}}
(\bibinfo{year}{1995}).
\bibitem{Tiecke2014} \bibinfo{author}{Tiecke, T.~G.} \emph{et~al.} \newblock \bibinfo{title}{Nanophotonic quantum phase switch with a single
atom}. \newblock
\href{http://dx.doi.org/10.1038/nature13188}{\emph{\bibinfo{journal}{Nature}}
\textbf{\bibinfo{volume}{508}}, \bibinfo{pages}{241--244}}
(\bibinfo{year}{2014}).
\bibitem{Volz2014} \bibinfo{author}{Volz, J.}, \bibinfo{author}{Scheucher, M.},
\bibinfo{author}{Junge, C.} \& \bibinfo{author}{Rauschenbeutel, A.} \newblock \bibinfo{title}{Nonlinear $\pi$ phase shift for single fibre-guided
photons interacting with a single resonator-enhanced atom}. \newblock
\href{http://dx.doi.org/10.1038/nphoton.2014.253}{\emph{\bibinfo{journal}{Nature
Photon.}} \textbf{\bibinfo{volume}{8}}, \bibinfo{pages}{965--970}}
(\bibinfo{year}{2014}).
\bibitem{Beck2015} \bibinfo{author}{Beck, K.~M.}, \bibinfo{author}{Hosseini, M.},
\bibinfo{author}{Duan, Y.} \& \bibinfo{author}{Vuleti{\'c}, V.} \newblock \bibinfo{title}{Large conditional single-photon cross-phase
modulation}. \newblock
\href{http://dx.doi.org/10.1073/pnas.1524117113}{\emph{\bibinfo{journal}{Proc. Natl. Acad. Sci. U.S.A.}} \textbf{\bibinfo{volume}{113}}, \bibinfo{pages}{9740--9744}}
(\bibinfo{year}{2016}).
\bibitem{Tiarks2016} \bibinfo{author}{Tiarks, D.}, \bibinfo{author}{Schmidt, S.},
\bibinfo{author}{Rempe, G.} \& \bibinfo{author}{D{\"u}rr, S.} \newblock \bibinfo{title}{Optical $\pi$ phase shift created with a single-photon pulse}. \newblock
\href{http://dx.doi.org/10.1126/sciadv.1600036}{\emph{\bibinfo{journal}{Sci. Adv.}}
\textbf{\bibinfo{volume}{2}}, \bibinfo{pages}{e1600036}}
(\bibinfo{year}{2016}).
\bibitem{Duan2004} \bibinfo{author}{Duan, L.-M.} \& \bibinfo{author}{Kimble, H.~J.} \newblock \bibinfo{title}{Scalable Photonic Quantum Computation through
Cavity-Assisted Interactions}. \newblock
\href{http://dx.doi.org/10.1103/PhysRevLett.92.127902}{\emph{\bibinfo{journal}{Phys.
Rev. Lett.}} \textbf{\bibinfo{volume}{92}}, \bibinfo{pages}{127902}}
(\bibinfo{year}{2004}).
\bibitem{Shapiro2006} \bibinfo{author}{Shapiro, J.~H.} \newblock \bibinfo{title}{Single-photon Kerr nonlinearities do not help quantum
computation}. \newblock
\href{http://dx.doi.org/10.1103/PhysRevA.73.062305}{\emph{\bibinfo{journal}{Phys.
Rev. A}} \textbf{\bibinfo{volume}{73}}, \bibinfo{pages}{062305}}
(\bibinfo{year}{2006}).
\bibitem{Gea2010} \bibinfo{author}{Gea-Banacloche, J.} \newblock \bibinfo{title}{Impossibility of large phase shifts via the giant
Kerr effect with single-photon wave packets}. \newblock
\href{http://dx.doi.org/10.1103/PhysRevA.81.043823}{\emph{\bibinfo{journal}{Phys.
Rev. A}} \textbf{\bibinfo{volume}{81}}, \bibinfo{pages}{043823}}
(\bibinfo{year}{2010}).
\bibitem{Reiserer2014} \bibinfo{author}{Reiserer, A.}, \bibinfo{author}{Kalb, N.},
\bibinfo{author}{Rempe, G.} \& \bibinfo{author}{Ritter, S.} \newblock \bibinfo{title}{A quantum gate between a flying optical photon and a
single trapped atom}. \newblock
\href{http://dx.doi.org/10.1038/nature13177}{\emph{\bibinfo{journal}{Nature}}
\textbf{\bibinfo{volume}{508}}, \bibinfo{pages}{237--240}}
(\bibinfo{year}{2014}).
\bibitem{Duan2005} \bibinfo{author}{Duan, L.-M.}, \bibinfo{author}{Wang, B.} \&
\bibinfo{author}{Kimble, H.~J.} \newblock \bibinfo{title}{Robust quantum gates on neutral atoms with
cavity-assisted photon scattering}. \newblock
\href{http://dx.doi.org/10.1103/PhysRevA.72.032333}{\emph{\bibinfo{journal}{Phys.
Rev. A}} \textbf{\bibinfo{volume}{72}}, \bibinfo{pages}{032333}}
(\bibinfo{year}{2005}).
\bibitem{Reiserer2013} \bibinfo{author}{Reiserer, A.}, \bibinfo{author}{N{\"o}lleke, C.},
\bibinfo{author}{Ritter, S.} \& \bibinfo{author}{Rempe, G.} \newblock \bibinfo{title}{Ground-State Cooling of a Single Atom at the Center
of an Optical Cavity}. \newblock
\href{http://dx.doi.org/10.1103/PhysRevLett.110.223003}{\emph{\bibinfo{journal}{Phys.
Rev. Lett.}} \textbf{\bibinfo{volume}{110}}, \bibinfo{pages}{223003}}
(\bibinfo{year}{2013}).
\bibitem{Poyatos1997} \bibinfo{author}{Poyatos, J.~F.}, \bibinfo{author}{Cirac, J.~I.} \&
\bibinfo{author}{Zoller, P.} \newblock \bibinfo{title}{Complete Characterization of a Quantum Process: The
Two-Bit Quantum Gate}. \newblock
\href{http://dx.doi.org/10.1103/PhysRevLett.78.390}{\emph{\bibinfo{journal}{Phys.
Rev. Lett.}} \textbf{\bibinfo{volume}{78}}, \bibinfo{pages}{390--393}}
(\bibinfo{year}{1997}).
\bibitem{Bagan2003} \bibinfo{author}{Bagan, E.}, \bibinfo{author}{Baig, M.} \&
\bibinfo{author}{Mu\~{n}oz{-}Tapia, R.} \newblock \bibinfo{title}{{Minimal measurements of the gate fidelity of a qudit
map}}. \newblock
\href{http://dx.doi.org/10.1103/PhysRevA.67.014303}{\emph{\bibinfo{journal}{Phys.
Rev. A}} \textbf{\bibinfo{volume}{67}}, \bibinfo{pages}{014303}}
(\bibinfo{year}{2003}).
\bibitem{Uphoff2015} \bibinfo{author}{Uphoff, M.}, \bibinfo{author}{Brekenfeld, M.},
\bibinfo{author}{Rempe, G.} \& \bibinfo{author}{Ritter, S.} \newblock \bibinfo{title}{{Frequency splitting of polarization eigenmodes in
microscopic Fabry-Perot cavities}}. \newblock
\href{http://dx.doi.org/10.1088/1367-2630/17/1/013053}{\emph{\bibinfo{journal}{New
J. Phys.}} \textbf{\bibinfo{volume}{17}}, \bibinfo{pages}{013053}}
(\bibinfo{year}{2015}).
\bibitem{Briegel1998} \bibinfo{author}{Briegel, H.-J.}, \bibinfo{author}{D{\"u}r, W.},
\bibinfo{author}{Cirac, J.~I.} \& \bibinfo{author}{Zoller, P.} \newblock \bibinfo{title}{Quantum Repeaters: The Role of Imperfect Local
Operations in Quantum Communication}. \newblock
\href{http://dx.doi.org/10.1103/PhysRevLett.81.5932}{\emph{\bibinfo{journal}{Phys.
Rev. Lett.}} \textbf{\bibinfo{volume}{81}}, \bibinfo{pages}{5932--5935}}
(\bibinfo{year}{1998}).
\bibitem{Raussendorf2001} \bibinfo{author}{Raussendorf, R.} \& \bibinfo{author}{Briegel, H.~J.} \newblock \bibinfo{title}{A One-Way Quantum Computer}. \newblock
\href{http://dx.doi.org/10.1103/PhysRevLett.86.5188}{\emph{\bibinfo{journal}{Phys.
Rev. Lett.}} \textbf{\bibinfo{volume}{86}}, \bibinfo{pages}{5188--5191}}
(\bibinfo{year}{2001}).
\bibitem{Ladd2010} \bibinfo{author}{Ladd, T.~D.} \emph{et~al.} \newblock \bibinfo{title}{Quantum computers}. \newblock
\href{http://dx.doi.org/10.1038/nature08812}{\emph{\bibinfo{journal}{Nature}}
\textbf{\bibinfo{volume}{464}}, \bibinfo{pages}{45--53}}
(\bibinfo{year}{2010}).
\end{thebibliography}
This work: \newenvironment{extrabibitem}{\list{}{\leftmargin=1em\rightmargin=0em}\item[]}{\endlist} \begin{extrabibitem} \small Hacker, B., Welte, S., Rempe, G. \& Ritter, S. A photon-photon quantum gate based on a single atom in an optical resonator. \href{http://dx.doi.org/10.1038/nature18592}{\emph{Nature} \textbf{536}, 193--196} (2016). \end{extrabibitem}
\renewcommand{\figurename}{\textbf{Extended Data Figure}} \renewcommand{\thefigure}{\arabic{figure}} \makeatletter
\@addtoreset{figure}{section} \makeatother
\unnumsec{Methods}
\noindent\textbf{\phantomsection\label{methods:composition}Composition of the photon-photon CPF gate.} The action of the quantum circuit diagram depicted in Fig.\,\figref{fig:scheme}a can be computed in the eight-dimensional Hilbert space spanned by the atomic ancilla qubit and the two photonic qubits. The atomic single-qubit rotations by $\pi/2$ and $-\pi/2$ are described by the operators $\frac{1}{\sqrt{2}}\left(\begin{smallmatrix}1&-1\\1&1\end{smallmatrix}\right)$ and $\frac{1}{\sqrt{2}}\left(\begin{smallmatrix}1&1\\-1&1\end{smallmatrix}\right)$, respectively, in the basis $\{\ket{{\uparrow}},\ket{{\downarrow}}\}$. The atom-photon CZ-gate is described by $U_{ap}=\mathrm{diag}(-1,1,1,1)$ in the basis $\{\ket{{\uparrow}\mathrm R},\ket{{\uparrow}\mathrm L},\ket{{\downarrow}\mathrm R},\ket{{\downarrow}\mathrm L}\}$. As indicated in Fig.\,\figref{fig:scheme}a, the atom is initially prepared in $\ket{{\uparrow}}$. Any input state of the two photonic qubits, including entangled states, can be written as \[ \ket{p_1p_2} = c_{\mathrm{RR}}\ket{\mathrm{RR}} + c_{\mathrm{RL}}\ket{\mathrm{RL}} + c_{\mathrm{LR}}\ket{\mathrm{LR}} + c_{\mathrm{LL}}\ket{\mathrm{LL}}, \] defined by the four complex numbers $c_{\mathrm{RR}}$, $c_{\mathrm{RL}}$, $c_{\mathrm{LR}}$ and $c_{\mathrm{LL}}$. Henceforth, we will use the compact notation $\ket{rr}:=c_{\mathrm{RR}}\ket{\mathrm{RR}}$, $\ket{rl}:=c_{\mathrm{RL}}\ket{\mathrm{RL}}$, $\ket{lr}:=c_{\mathrm{LR}}\ket{\mathrm{LR}}$, and $\ket{ll}:=c_{\mathrm{LL}}\ket{\mathrm{LL}}$. Therefore, any photon-photon gate operation starts in the collective initial state \[ \ket{{\uparrow}}(\ket{rr}+\ket{rl}+\ket{lr}+\ket{ll}). 
\] The first $\pi/2$ rotation brings the atom into a superposition \[ \textstyle\frac{1}{\sqrt{2}}(\ket{{\uparrow}}+\ket{{\downarrow}})\ (\ket{rr}+\ket{rl}+\ket{lr}+\ket{ll}), \] followed by a CZ-interaction between the atom and the first photon, which flips the sign of all states with the atom in $\ket{{\uparrow}}$ and the first photon in $\ket{\mathrm R}$: \[ \textstyle\frac{1}{\sqrt{2}}\bigl((-\ket{{\uparrow}}+\ket{{\downarrow}})(\ket{rr}+\ket{rl})+(\ket{{\uparrow}}+\ket{{\downarrow}})(\ket{lr}+\ket{ll})\bigr). \] Subsequent rotation of the atom by $-\pi/2$ creates the state \[ \ket{{\downarrow}}(\ket{rr}+\ket{rl}) + \ket{{\uparrow}}(\ket{lr}+\ket{ll}). \] Reflection of the second photon flips the sign of all states with the atom in $\ket{{\uparrow}}$ and the second photon in $\ket{\mathrm R}$: \[ \ket{{\downarrow}}(\ket{rr}+\ket{rl}) + \ket{{\uparrow}}(-\ket{lr}+\ket{ll}). \] The final rotation of the atom by $\pi/2$ yields \[ \textstyle\frac{1}{\sqrt{2}}\bigl((-\ket{{\uparrow}}+\ket{{\downarrow}})(\ket{rr}+\ket{rl})\; + \; (\ket{{\uparrow}}+\ket{{\downarrow}})(-\ket{lr}+\ket{ll})\bigr). \] At this point the state of the atom is measured. There are two equally probable outcomes projecting the two-photon state accordingly: \begin{align*} \ket{{\uparrow}}{:}\quad& -\ket{rr}-\ket{rl}-\ket{lr}+\ket{ll},\\ \ket{{\downarrow}}{:}\quad& +\ket{rr}+\ket{rl}-\ket{lr}+\ket{ll}. \end{align*} Following detection of the atom $\ket{{\uparrow}}$, an additional $\pi$ phase is imprinted on the $\ket{\mathrm R}$-part of the first photon, i.e.\ a sign flip on $\ket{rr}$ and $\ket{rl}$, whereas the photonic state is left unaltered upon detection of $\ket{{\downarrow}}$. Thereby, the final photonic state becomes \[ \ket{rr}+\ket{rl}-\ket{lr}+\ket{ll}, \] independent of the outcome of the atomic state detection. It differs from the input state by a minus sign on $\ket{lr}$ only. 
Hence, the total circuit acts as a pure photonic CPF gate: \begin{align*} &\ket{\mathrm{RR}}\rightarrow\ket{\mathrm{RR}} &&\ket{\mathrm{LR}}\rightarrow-\ket{\mathrm{LR}}\\ &\ket{\mathrm{RL}}\rightarrow\ket{\mathrm{RL}} &&\ket{\mathrm{LL}}\rightarrow\ket{\mathrm{LL}}. \end{align*}
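The stepwise analysis above can also be verified numerically. The following sketch (an assumed simulation of the circuit of Fig.\,\figref{fig:scheme}a, not the laboratory control code) propagates an arbitrary two-photon state through the full eight-dimensional protocol and checks that both atomic measurement outcomes, with and without the feedback phase, yield the same photonic CPF gate:

```python
import numpy as np

I4 = np.eye(4, dtype=complex)
# Atomic pi/2 and -pi/2 rotations in the basis {up, down}, embedded on atom x p1 x p2
Rp = np.kron(np.array([[1, -1], [1, 1]], complex) / np.sqrt(2), I4)
Rm = np.kron(np.array([[1,  1], [-1, 1]], complex) / np.sqrt(2), I4)

# Atom-photon CZ gates; basis index i = 4*atom + 2*p1 + p2 with up = R = 0
CZ1, CZ2 = np.eye(8, dtype=complex), np.eye(8, dtype=complex)
for a in range(2):
    for p1 in range(2):
        for p2 in range(2):
            i = 4 * a + 2 * p1 + p2
            if a == 0 and p1 == 0:
                CZ1[i, i] = -1      # reflection of the first photon
            if a == 0 and p2 == 0:
                CZ2[i, i] = -1      # reflection of the second photon

CPF = np.diag([1, 1, -1, 1]).astype(complex)   # target photonic gate (RR,RL,LR,LL)
F   = np.diag([-1, -1, 1, 1]).astype(complex)  # feedback: pi phase on R of photon 1

rng = np.random.default_rng(0)
psi_in = rng.normal(size=4) + 1j * rng.normal(size=4)  # arbitrary photon pair
psi_in /= np.linalg.norm(psi_in)

psi = np.kron(np.array([1, 0], complex), psi_in)       # atom prepared in up
psi = Rp @ CZ2 @ Rm @ CZ1 @ Rp @ psi                   # rotations and reflections

for outcome, feedback in [(0, F), (1, I4)]:            # atom found up / down
    branch = psi.reshape(2, 4)[outcome]                # projected photonic state
    branch = feedback @ (branch / np.linalg.norm(branch))
    assert np.isclose(abs(np.vdot(CPF @ psi_in, branch)), 1.0)
```

Both branches occur with probability $1/2$ and, after the conditional feedback, coincide with the CPF output, in agreement with the derivation above.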
\noindent\textbf{\phantomsection\label{methods:rotations}Calibration of atomic single-qubit rotations.} In order to calibrate relevant experimental parameters, we employ a Ramsey-like sequence of three subsequent rotation pulses. The pulses are exactly timed as in the gate sequence (see Fig.\,\figref{fig:scheme}), but the two photon pulses interleaved between the Raman pulses are turned off.
\begin{figure}\label{fig:threepulse}
\end{figure}
Initially, the atom is prepared in $\ket{{\uparrow}}$. The Raman pair is red-detuned by $131\unit{GHz}$ from the D$_1$ line of $^{87}$Rb. Employing an acousto-optic modulator, we scan one of the Raman lasers over $2.5\unit{MHz}$ while the frequency of the other is fixed. Thus, we effectively scan the two-photon detuning. Extended Data Fig.\,\figref{fig:threepulse} shows a spectrum depicting the population in $\ket{{\uparrow}}$ as a function of the two-photon detuning. Ideally, the gate experiments are performed on two-photon resonance. In this case, the second pulse compensates the first and the third one brings the atom into the superposition state $(\ket{{\uparrow}}+\ket{{\downarrow}})/\sqrt{2}$, such that $50\%$ population in $\ket{{\uparrow}}$ is obtained.
To determine the experimental parameters that guarantee this situation, a theoretical model is fitted to the spectrum. It allows us to simultaneously access several mutually dependent fit parameters useful to calibrate the frequency as well as the intensity of our Raman beams. The fit reveals the Rabi frequency for the transition between $\ket{{\downarrow}}$ and $\ket{{\uparrow}}$, which we tune to $250\unit{kHz}$ to obtain $\pi/2$ pulses in $1\unit{\ensuremath{\textnormal\textmu} s}$. The two-photon detuning is also extractable from the fit and we find a light shift of $40\unit{kHz}$ due to the Raman lasers. To compensate for it, we choose different two-photon detunings when the pulses are on and off, such that two-photon resonance is guaranteed during the entire sequence.
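The resonant condition can be illustrated with a simple two-level model of the three-pulse sequence. In the sketch below the Rabi frequency ($250\unit{kHz}$) and pulse duration ($1\unit{\ensuremath{\textnormal\textmu} s}$) follow the text, while the gap between pulses is an assumed value; on two-photon resonance the sequence indeed leaves $50\%$ population in $\ket{{\uparrow}}$, irrespective of the gap durations:

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], complex)
sz = np.array([[1, 0], [0, -1]], complex)

def prop(t, wx, wz):
    """Propagator exp(-i t (wx*sx + wz*sz)/2) for a two-level system."""
    w = np.hypot(wx, wz)
    if w == 0:
        return np.eye(2, dtype=complex)
    n = (wx * sx + wz * sz) / w
    return np.cos(w * t / 2) * np.eye(2) - 1j * np.sin(w * t / 2) * n

def p_up(delta_kHz, rabi_kHz=250.0, t_pulse=1.0, t_gap=1.3):
    """Population in |up> after the pi/2, -pi/2, pi/2 sequence.

    Times in microseconds; t_gap is an assumed, not a measured, value."""
    d = 2 * np.pi * delta_kHz * 1e-3   # two-photon detuning, rad/us
    w = 2 * np.pi * rabi_kHz * 1e-3    # Rabi frequency, rad/us
    psi = np.array([1, 0], complex)    # atom prepared in |up>
    for U in [prop(t_pulse, +w, d), prop(t_gap, 0, d),
              prop(t_pulse, -w, d), prop(t_gap, 0, d),
              prop(t_pulse, +w, d)]:
        psi = U @ psi
    return abs(psi[0]) ** 2

# On resonance the first two pulses cancel and the third creates the
# equal superposition, giving 50% population in |up>:
assert np.isclose(p_up(0.0), 0.5)
```

Scanning `delta_kHz` over the experimental range reproduces the qualitative shape of the spectrum in Extended Data Fig.\,\figref{fig:threepulse}, which is what the fit model exploits.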
\noindent\textbf{\phantomsection\label{methods:modematching}Transverse optical mode matching.} Good overlap between the transverse mode profiles of the incoming wave packet and the optical cavity is essential for the performance of the gate. To this end, the qubit-carrying photon pulses are taken from a single-mode fibre with its mode matched to the cavity. In a characterisation measurement we determined that 92\% of probe light emanating from the cavity is coupled into this input fibre. Therefore, 8\% of the impinging light may arrive in an orthogonal mode that does not interact with the atom-cavity system. Light in this mode deteriorates the fidelity of the gate if it is collected at the output. This problem is overcome because the delay fibre also acts as a filter for the transverse mode profile after the cavity. The mode overlap between cavity and delay fibre is 84\%, partially suffering from mode distortion by the acousto-optic deflector (AOD) used for path switching. From an analysis of cavity reflection spectra we can estimate the amount of light that did not interact with the cavity but is still coupled from the input fibre into the delay fibre. It is below 1\% of the gate output, such that the resulting reduction of the gate fidelity is also well below 1\%.
A small misalignment, e.g.\ due to slow temperature drifts, reduces the positive filtering effect described above. Therefore, optimal mode matching is essential to maintain maximum gate fidelity. In the experiment, reflection spectra of the empty cavity were constantly monitored and, whenever necessary, data taking was interrupted to reestablish optimal mode overlap.
\noindent\textbf{\phantomsection\label{methods:simulation}Simulation of imperfections.} In order to understand the imperfections encountered in the experiment, we have set up a model of both photonic qubits and the atomic ancilla qubit in terms of their three-particle density matrix $\rho$. Under ideal conditions, the density matrix transforms via sequential unitary transformations $U$ as $\rho\rightarrow U\rho U^\dagger$, and known error sources can be introduced at each specific step. Finally, the fidelity of $\rho$ with the desired target state is calculated for comparison with the experimental value.
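As a rough illustration of this kind of density-matrix bookkeeping, the following minimal sketch (our own construction with illustrative parameters, not the code used for the analysis) applies a CZ-type gate whose conditional phase deviates by $\Delta\varphi=0.15\pi$ from the ideal value to a two-qubit state via $\rho\rightarrow U\rho U^\dagger$, and evaluates the overlap with the ideal target state:

```python
import numpy as np

# Minimal sketch (not the authors' code): evolve a two-qubit density matrix
# through an imperfect CZ gate, rho -> U rho U^dagger, and compute its
# fidelity with the ideal target state.

def cz(phase):
    """CZ-type gate whose conditional phase may deviate from pi."""
    return np.diag([np.exp(1j * phase), 1, 1, 1])

# Input state |+>|+>, a natural choice for entangling via CZ.
plus = np.array([1.0, 1.0]) / np.sqrt(2)
psi_in = np.kron(plus, plus)
rho = np.outer(psi_in, psi_in.conj())

# Ideal target: the perfect CZ (phase pi) applied to the input.
target = cz(np.pi) @ psi_in

# Imperfect evolution: conditional phase off by delta_phi = 0.15*pi.
U = cz(np.pi + 0.15 * np.pi)
rho_out = U @ rho @ U.conj().T

# Fidelity <target| rho_out |target>; here (10 + 6 cos(0.15 pi))/16 ~ 0.96.
fidelity = np.real(target.conj() @ rho_out @ target)
```

The other error channels described above (state-preparation errors, dark counts, decoherence) would enter the same bookkeeping as incoherent admixtures or damped off-diagonal elements rather than as unitaries.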
In this scenario, an unnoticed, incorrect preparation of the atom creates an incoherent admixture of the wrong initial state. Errors in the atomic state detection lead to an exchange of the photonic submatrices corresponding to each atomic state. Detector dark counts are modeled as an admixture of a fully mixed state and decoherence effects are taken into account as reductions in off-diagonal elements of $\rho$. Cases where photons do not enter the cavity because of geometric mode mismatch are included with a phase shift of zero, and the case of an undetected additional photon in one of the weak pulses is incorporated with a phase shift of $2\pi$, i.e.\ twice the ideal value. Interestingly, most deteriorations of the atom-photon interaction, like fluctuations of the atomic, cavity and photon frequencies, all condense into a variation, $\Delta\varphi=\pm0.15\pi$, of the conditional phase shift. Considering this together with the polarisation rotation $R_p(\xi)$ a photon experiences due to the residual cavity birefringence by an angle of $\xi=0.06\pi$ in case of $\ket{{\downarrow}}$, the ideal atom-photon CZ-gate $U_{ap}=\mathrm{diag}(-1,1,1,1)$ in the basis $\{\ket{{\uparrow}\mathrm R},\ket{{\uparrow}\mathrm L},\ket{{\downarrow}\mathrm R},\ket{{\downarrow}\mathrm L}\}$ must be replaced by: \[ U_{ap} = \left[\begin{array}{cccc} e^{i(\pi+\Delta\varphi)}&0&0&0\\ 0&1&0&0\\ \begin{array}{c}0\\0\end{array}&\begin{array}{c}0\\0\end{array}&\multicolumn{2}{c}{R_p(\xi)} \end{array}\right] \] Random fluctuations in some of the parameters enter our model by integrating the resulting density matrix over the assumed Gaussian distribution function. \end{document}
\end{document} | arXiv |
\begin{definition}[Definition:Steiner Inellipse]
Let $\triangle ABC$ be a triangle whose sides have midpoints $D, E, F$.
Then the unique ellipse inscribed in $\triangle ABC$ which is tangent to the sides of $\triangle ABC$ at their midpoints $D, E, F$ is called the '''Steiner inellipse'''.
{{namedfor|Jakob Steiner}}
Category:Definitions/Geometry
\end{definition}
\begin{document}
\title{Strengthening topological colorful results\\ for graphs\footnote{\copyright~2017. This manuscript version is made available under the CC-BY-NC-ND 4.0 license \url{http://creativecommons.org/licenses/by-nc-nd/4.0}.}}
\begin{abstract} Various results ensure the existence of large complete and colorful bipartite graphs in properly colored graphs when some condition related to a topological lower bound on the chromatic number is satisfied. We generalize three theorems of this kind, respectively due to Simonyi and Tardos ({\em Combinatorica}, 2006), Simonyi, Tardif, and Zsb\'an ({\em The Electronic Journal of Combinatorics}, 2013), and Chen ({\em Journal of Combinatorial Theory, Series A}, 2011). As a consequence of the generalization of Chen's theorem, we get new families of graphs whose chromatic number equals their circular chromatic number and that satisfy Hedetniemi's conjecture for the circular chromatic number. \end{abstract}
\section{Introduction}\label{intro}
The famous proof of the Kneser conjecture by Lov\'asz opened the door to the use of topological tools in combinatorics, especially for finding topological obstructions for a graph $G$ to have a small chromatic number. These obstructions actually often ensure the existence of large complete bipartite subgraphs with many colors in any proper coloring of $G$. Finding new conditions for the existence of such bipartite graphs has in particular become an active stream of research relying on the use of topological tools.
Apart from being elegant, these results have also turned out to be useful, e.g., for providing lower bounds for various types of chromatic numbers. The {\em zig-zag theorem}, the {\em colorful $K_{\ell,m}$ theorem}, and the {\em alternative Kneser coloring lemma} are among the most prominent results of this stream, and our main objective is to provide generalizations of them. \\
The topological obstructions often take the form of a lower bound provided by a topological invariant attached to the graph $G$. One of these topological invariants is the ``coindex of the box-complex'' $\operatorname{coind}(B_0(G))$ (the exact definition of this topological invariant, as well as others, is recalled later, in Section~\ref{subsubsec:graphcomplex}). It has been shown that $$\chi(G)\geq\operatorname{coind}(B_0(G))+1.$$ The zig-zag theorem, due to Simonyi and Tardos~\cite{SiTa06}, is a generalization of this inequality and ensures that any proper coloring of $G$ contains a heterochromatic complete bipartite subgraph $K_{\lceil\frac t 2\rceil,\lfloor \frac t 2\rfloor}$, where $t=\operatorname{coind}(B_0(G))+1$. {\em Heterochromatic} means that the vertices get pairwise distinct colors. In Section~\ref{sec:zigzag}, we provide the exact statement of the zig-zag theorem and prove a result (Theorem~\ref{thm:path_colorful}) that extends it roughly as follows. Two complete bipartite subgraphs are {\em adjacent} if one is obtained from the other by removing a vertex and adding a vertex, possibly the same if the vertex is added to the other side of the bipartite graph. We prove that in any proper coloring of $G$, there is a sequence of adjacent ``almost'' heterochromatic complete bipartite subgraphs, which starts with a heterochromatic complete bipartite graph as in the original statement of the zig-zag theorem and ends at the same bipartite graph, with the two sides interchanged. \\
While the zig-zag theorem holds for any properly colored graph, the colorful $K_{\ell,m}$ theorem, also due to Simonyi and Tardos~\cite{SiTa07}, requires $\chi(G)$ to be equal to $\operatorname{coind}(B_0(G))+1$. Several families of graphs satisfy this equality, e.g., Kneser, Schrijver, and Borsuk graphs, and the Mycielski construction keeps the equality when applied to a graph satisfying it, see the aforementioned paper for details. Assume that such a graph $G$ is properly colored with a minimum number of colors and consider a bipartition $I,J$ of the color set. The colorful $K_{\ell,m}$ theorem then ensures that there is a complete bipartite subgraph with the colors in $I$ on one side, and the colors in $J$ on the other side. The name of the theorem reflects the fact that, contrary to the zig-zag theorem, one can choose the sizes of the sides of the bipartite graph. {\em Colorful}, here, and in the remainder of the paper, means that all colors appear in the subgraph.
In Section~\ref{sec:colorful}, we show that the colorful $K_{\ell,m}$ theorem remains true when $\operatorname{coind}(B_0(G))+1$ is replaced by the ``cross-index of the Hom complex'', a combinatorial invariant providing a better bound on the chromatic number. This answers an open question by Simonyi, Tardif, and Zsb\'an, see p.6 of \cite{SiTaZs13}.\\
A hypergraph $\mathcal{H}$ is a pair $(V,E)$ where $V$ -- the {\em vertex set} -- is a nonempty finite set and where $E$ -- the {\em edge set} -- is a finite collection of subsets of $V$. A hypergraph is {\em nonempty} if it has at least one edge. A {\em singleton} is an edge with a single vertex. Given a hypergraph $\mathcal{H}$, the {\em general Kneser graph} $\operatorname{KG}(\mathcal{H})$ is a graph whose vertices are the edges of $\mathcal{H}$ and in which two vertices are connected if the corresponding edges are disjoint. The {\em $2$-colorability defect} of $\mathcal{H}$, denoted by $\operatorname{cd}_2(\mathcal{H})$, is the minimal number of vertices to remove from $\mathcal{H}$ such that the remaining partial hypergraph is $2$-colorable:
$$\operatorname{cd}_2(\mathcal{H})=\min\left\{|U|: \left(V(\mathcal{H})\setminus U,\{e\in E(\mathcal{H}):\;e\cap U=\varnothing\}\right)\mbox{ is $2$-colorable}\right\}.$$ A hypergraph is {\it $2$-colorable} whenever it is possible to color its vertices using $2$ colors in such a way that no edge is monochromatic. Dol'nikov~\cite{Do88} proved that the $2$-colorability defect of $\mathcal{H}$ is a lower bound on the chromatic number of $\operatorname{KG}(\mathcal{H})$: \begin{equation}\label{eq:dolnikov} \chi(\operatorname{KG}(\mathcal{H}))\geq\operatorname{cd}_2(\mathcal{H}). \end{equation} Dol'nikov's inequality actually provides a lower bound for the chromatic number of every graph since for every simple graph $G$, there exists a hypergraph $\mathcal{H}$, a {\em Kneser representation of $G$}, such that $\operatorname{KG}(\mathcal{H})$ and $G$ are isomorphic. Denote by $K_{q,q}^*$ the bipartite graph obtained from the complete bipartite graph $K_{q,q}$ by removing a perfect matching.
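These definitions are small enough to check by brute force. The following sketch (helper names are ours, not from the paper) computes $\operatorname{cd}_2$ directly from its definition for the complete $2$-uniform hypergraph on $[5]$; removing fewer than three vertices always leaves a monochromatic pair, so the defect is $3$:

```python
from itertools import combinations, product

# Minimal sketch (names are ours): brute-force the 2-colorability defect
# from its definition, for the complete 2-uniform hypergraph on [5].

def is_2_colorable(vertices, edges):
    """True iff some 2-coloring of the vertices leaves no edge monochromatic."""
    vertices = list(vertices)
    for colors in product([0, 1], repeat=len(vertices)):
        col = dict(zip(vertices, colors))
        if all(len({col[v] for v in e}) > 1 for e in edges):
            return True
    return False

def cd2(vertices, edges):
    """Minimum |U| whose removal leaves a 2-colorable partial hypergraph."""
    for k in range(len(vertices) + 1):
        for U in combinations(vertices, k):
            rest = set(vertices) - set(U)
            kept = [e for e in edges if not (e & set(U))]
            if is_2_colorable(rest, kept):
                return k

V = list(range(1, 6))
E = [frozenset(e) for e in combinations(V, 2)]
defect = cd2(V, E)  # equals 3 = 5 - 2*2 + 2 here
```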
In Section~\ref{sec:alternative}, we prove the following result, which shows that when Dol'nikov's inequality is an equality, any proper coloring with a minimal number of colors presents a special pattern. \begin{corollary}\label{cor:cd2} Let $\mathcal{H}$ be a nonempty hypergraph with no singletons such that $\chi(\operatorname{KG}(\mathcal{H}))=\operatorname{cd}_2(\mathcal{H})=t$. Any proper coloring of $\operatorname{KG}(\mathcal{H})$ with $t$ colors contains a colorful copy of $K_{t,t}^*$ with all colors appearing on each side. \end{corollary} We actually prove a more general result (Theorem~\ref{thm:main}) ensuring the existence of such a $K_{t,t}^*$ in the categorical product of an arbitrary number of general Kneser graphs, where the ``categorical product'' is a way to multiply graphs (it is the product involved in the famous Hedetniemi conjecture).
The alternative Kneser coloring lemma, found by Chen in 2011~\cite{Ch11}, is the special case of Corollary~\ref{cor:cd2} when $\mathcal{H}$ is the hypergraph $\left([n],{{[n]}\choose k}\right)$ (the complete $k$-uniform hypergraph with $[n]$ as the vertex set) with $k\geq 2$. For this special case, the graph $\operatorname{KG}(\mathcal{H})$ is a ``usual'' Kneser graph, denoted by $\operatorname{KG}(n,k)$, which plays an important role in combinatorics and in graph theory, see for instance~\cite{Ha10,SiTa06,SiTa07,St76,VaVe05}. When $k\geq 2$, the usual Kneser graph satisfies the condition of Corollary~\ref{cor:cd2} with $t=n-2k+2$ (the fact that the chromatic number is $n-2k+2$ is the Lov\'asz theorem~\cite{Lo79}, originally conjectured by Kneser~\cite{kneser1955}). Note that Corollary~\ref{cor:cd2} is a strict improvement on the colorful $K_{\ell,m}$ theorem for general Kneser graphs satisfying its condition.
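For $n=5$ and $k=2$, $\operatorname{KG}(5,2)$ is the Petersen graph and the numbers are small enough to verify directly; the sketch below (a brute-force check of our own, not part of any proof) recomputes $\chi(\operatorname{KG}(5,2))=5-4+2=3$:

```python
from itertools import combinations, product

# Sketch: chi(KG(5,2)) by exhaustive search.  Vertices are the 2-subsets
# of [5]; two vertices are adjacent iff the subsets are disjoint.

verts = [frozenset(e) for e in combinations(range(1, 6), 2)]
edges = [(i, j) for i in range(len(verts)) for j in range(i + 1, len(verts))
         if not (verts[i] & verts[j])]

def properly_colorable(q):
    """True iff KG(5,2) admits a proper coloring with q colors."""
    return any(all(col[i] != col[j] for i, j in edges)
               for col in product(range(q), repeat=len(verts)))

chi = next(q for q in range(1, 11) if properly_colorable(q))  # finds 3
```

Together with the previous computation of $\operatorname{cd}_2$, this confirms $\chi(\operatorname{KG}(\mathcal{H}))=\operatorname{cd}_2(\mathcal{H})=3$ for this hypergraph, i.e., the hypothesis of Corollary~\ref{cor:cd2} with $t=3$.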
The existence of such colorful bipartite subgraphs has immediate consequences for the circular chromatic number, an important notion in graph coloring theory, see Section~\ref{subsec:circ} for the definition of the circular chromatic number and for these consequences. \\
Motivated by Corollary~\ref{cor:cd2}, we end the paper in Section~\ref{sec:cd} with a discussion on hypergraphs $\mathcal{H}$ such that $\chi(\operatorname{KG}(\mathcal{H}))=\operatorname{cd}_2(\mathcal{H})$. This question turns out to be difficult. We can describe procedures to build such hypergraphs, but a general characterization is still missing. We do not even know the complexity of deciding whether $\mathcal{H}$ satisfies $\chi(\operatorname{KG}(\mathcal{H}))=\operatorname{cd}_2(\mathcal{H})$.\\
Before stating and proving these results, Section~\ref{sec:basic} presents the main topological tools used in the paper. It contains no new results but we assume that the reader has basic knowledge in topological combinatorics. More details on that topic can be found for instance in the books by Matou{\v{s}}ek~\cite{Matoubook} or De Longueville~\cite{DeLongbook}.
\section{Topological tools}\label{sec:basic}
\subsection{Alternation number}\label{subsec:alt_cd}
An improvement on the $2$-colorability defect is the alternation number, which provides a better lower bound on the chromatic number of general Kneser graphs. It has been defined by the first and second authors~\cite{AlHa15}. Let $\boldsymbol{x}=(x_1,\ldots,x_n)\in\{+,-,0\}^n$. An {\em alternating subsequence} of $\boldsymbol{x}$ is a sequence $x_{i_1},\ldots,x_{i_{\ell}}$ of nonzero terms of $\boldsymbol{x}$, with $i_1<\cdots<i_{\ell}$, such that any two consecutive terms of this sequence are different. We denote by $\operatorname{alt}(\boldsymbol{x})$ the length of a longest alternating subsequence of $\boldsymbol{x}$. In the case $\boldsymbol{x}=\boldsymbol{0}$, we define $\operatorname{alt}(\boldsymbol{x})$ to be $0$. For a nonempty hypergraph $\mathcal{H}$ and a bijection $\sigma:[n]\rightarrow V(\mathcal{H})$, we define $$\operatorname{alt}_{\sigma}(\mathcal{H})=\max_{\boldsymbol{x}\in\{+,-,0\}^n}\big\{\operatorname{alt}(\boldsymbol{x}):\;\mbox{none of $\sigma(\boldsymbol{x}^+)$ and $\sigma(\boldsymbol{x}^-)$ contains an edge of }\mathcal{H}\big\},$$ where $$\boldsymbol{x}^+=\{i:\;x_i=+\}\qquad\mbox{and}\qquad\boldsymbol{x}^-=\{i:\;x_i=-\}.$$
The {\em alternation number} of $\mathcal{H}$ is the quantity $\operatorname{alt}(\mathcal{H})=\min_{\sigma}\operatorname{alt}_{\sigma}(\mathcal{H})$, where the minimum is taken over all bijections $\sigma:[n]\rightarrow V(\mathcal{H})$. The following inequality is proved in the same paper \begin{equation}\label{eq:alter} \chi(\operatorname{KG}(\mathcal{H}))\geq n-\operatorname{alt}(\mathcal{H}). \end{equation} Note that this implies $\chi(\operatorname{KG}(\mathcal{H}))\geq n-\operatorname{alt}_\sigma(\mathcal{H})$ for any bijection $\sigma: [n]\rightarrow V(\mathcal{H})$ and thus it implies Dol'nikov's inequality~\eqref{eq:dolnikov} since $n-\operatorname{alt}_{\sigma}(\mathcal{H})\geq\operatorname{cd}_2(\mathcal{H})$ holds for every bijection $\sigma:[n]\rightarrow V(\mathcal{H})$.
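The quantities involved can again be checked mechanically on a small instance. The sketch below (function names are ours) computes $\operatorname{alt}(\boldsymbol{x})$ and $\operatorname{alt}_\sigma(\mathcal{H})$ for $\sigma$ the identity on the complete $2$-uniform hypergraph on $[5]$:

```python
from itertools import combinations, product

# Sketch (names are ours): alt(x) for a sign vector, and alt_sigma for the
# identity bijection on the complete 2-uniform hypergraph on [5].

def alt(x):
    """Length of a longest alternating subsequence of the nonzero entries."""
    count, prev = 0, None
    for s in x:
        if s != 0 and s != prev:
            count, prev = count + 1, s
    return count

def alt_identity(n, edges):
    """alt_sigma(H) for sigma = identity: maximize alt(x) over admissible x."""
    best = 0
    for x in product([1, -1, 0], repeat=n):
        plus = {i + 1 for i, s in enumerate(x) if s == 1}
        minus = {i + 1 for i, s in enumerate(x) if s == -1}
        if any(e <= plus or e <= minus for e in edges):
            continue  # x^+ or x^- contains an edge: x is not admissible
        best = max(best, alt(x))
    return best

E = [frozenset(e) for e in combinations(range(1, 6), 2)]
a = alt_identity(5, E)  # admissible x have |x^+|, |x^-| <= 1, so a = 2
```

Here $n-\operatorname{alt}_\sigma(\mathcal{H})=5-2=3$, matching both Dol'nikov's bound and the chromatic number of the Petersen graph $\operatorname{KG}(5,2)$.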
The first two authors have applied inequality~\eqref{eq:alter} on various families of graphs~\cite{2014arXiv1401.0138A,AlHa14,2014arXiv1407.8035A,AlHa15_arxiv,2015arXiv150708456A}.
\subsection{Box complex, Hom complex, and indices}
\subsubsection{Some topological notions}
A {\em $\mathbb{Z}_2$-space} $X$ is a topological space with a homeomorphism $\nu:X\rightarrow X$, the {\em $\mathbb{Z}_2$-action on $X$}, such that $\nu\circ\nu=\operatorname{id}_X$. If $\nu$ has no fixed points, then $\nu$ is {\em free}. In that case, $X$ is also called free. A {\em $\mathbb{Z}_2$-map} $f:X\rightarrow Y$ between two $\mathbb{Z}_2$-spaces is a continuous map such that $f\circ\nu=\omega\circ f$, where $\nu$ and $\omega$ are the $\mathbb{Z}_2$-actions on $X$ and $Y$ respectively. Given two $\mathbb{Z}_2$-spaces $X$ and $Y$, we write $X \stackrel{\mathbb{Z}_2}{\longrightarrow} Y$ if there exists a $\mathbb{Z}_2$-map $X\rightarrow Y$.
The {\em $\mathbb{Z}_2$-index} and {\em $\mathbb{Z}_2$-coindex} of a $\mathbb{Z}_2$-space $X$ are defined as follows: $$\operatorname{ind}(X)=\min\{d\geq 0:\; X \stackrel{\mathbb{Z}_2}{\longrightarrow}\mathcal{S}^d\}\qquad\mbox{and}\qquad\operatorname{coind}(X)=\max\{d\geq 0:\; \mathcal{S}^d \stackrel{\mathbb{Z}_2}{\longrightarrow} X\}, $$ where the $\mathbb{Z}_2$-action on the $d$-dimensional sphere $\mathcal{S}^d$ is the (free) antipodal map $-:\boldsymbol{x}\rightarrow -\boldsymbol{x}$. The celebrated Borsuk-Ulam theorem states that $\operatorname{ind}(\mathcal{S}^d)=\operatorname{coind}(\mathcal{S}^d)=d$. It also implies that $\operatorname{ind}(X)\geq\operatorname{coind}(X)$. \\
These definitions extend naturally to simplicial complexes. A {\em simplicial $\mathbb{Z}_2$-complex} $\mathsf{K}$ is a simplicial complex with a simplicial map $\nu:\mathsf{K}\rightarrow\mathsf{K}$, the {\em $\mathbb{Z}_2$-action on $\mathsf{K}$}, such that $\nu\circ\nu=\operatorname{id}_{\mathsf{K}}$. The underlying space of $\mathsf{K}$, denoted by $\|\mathsf{K}\|$, becomes a $\mathbb{Z}_2$-space if we take the affine extension of $\nu$ as a $\mathbb{Z}_2$-action on $\|\mathsf{K}\|$. If $\nu$ has no fixed simplices, then $\nu$ is {\em free} and $\mathsf{K}$ is a {\em free simplicial $\mathbb{Z}_2$-complex}. If we take the affine extension of $\nu$ as a $\mathbb{Z}_2$-action on $\|\mathsf{K}\|$, then $\mathsf{K}$ is free if and only if $\|\mathsf{K}\|$ is free. A {\em simplicial $\mathbb{Z}_2$-map} $\lambda:\mathsf{K}\rightarrow\mathsf{L}$ is a simplicial map between two simplicial $\mathbb{Z}_2$-complexes such that $\lambda\circ\nu=\omega\circ\lambda$, where $\nu$ and $\omega$ are the $\mathbb{Z}_2$-actions of $\mathsf{K}$ and $\mathsf{L}$ respectively. Given two simplicial $\mathbb{Z}_2$-complexes $\mathsf{K}$ and $\mathsf{L}$, we write $\mathsf{K} \stackrel{\mathbb{Z}_2}{\longrightarrow} \mathsf{L}$ if there exists a simplicial $\mathbb{Z}_2$-map $\mathsf{K}\rightarrow\mathsf{L}$. Note that if $\mathsf{K} \stackrel{\mathbb{Z}_2}{\longrightarrow} \mathsf{L}$, then $\|\mathsf{K}\| \stackrel{\mathbb{Z}_2}{\longrightarrow}\|\mathsf{L}\|$, but the converse is in general not true.
The $\mathbb{Z}_2$-index and $\mathbb{Z}_2$-coindex of a simplicial $\mathbb{Z}_2$-complex $\mathsf{K}$ are defined as $\operatorname{ind}(\mathsf{K})=\operatorname{ind}(\|\mathsf{K}\|)$ and $\operatorname{coind}(\mathsf{K})=\operatorname{coind}(\|\mathsf{K}\|)$, where again the $\mathbb{Z}_2$-action on $\|\mathsf{K}\|$ is the affine extension of the $\mathbb{Z}_2$-action on $\mathsf{K}$. \\
A {\em free $\mathbb{Z}_2$-poset} $P$ is an ordered set with a fixed-point free automorphism $-:P\rightarrow P$ of order $2$ (being understood that this automorphism is order-preserving). An {\em order-preserving $\mathbb{Z}_2$-map} between two free $\mathbb{Z}_2$-posets
$P$ and $Q$ is an order-preserving map $\phi:P\rightarrow Q$ such that $\phi(-x)=-\phi(x)$ for every $x\in P$. If there exists such a map, we write $P \stackrel{\mathbb{Z}_2}{\longrightarrow} Q$. For a nonnegative integer $n$, let $Q_n$ be the free $\mathbb{Z}_2$-poset with elements $\{\pm 1,\ldots,\pm (n+1)\}$ such that for any two members $x$ and $y$ in $Q_n$, we have $x<_{Q_n}y$ if $|x|<|y|$. For a free $\mathbb{Z}_2$-poset $P$, the {\em cross-index} of $P$, denoted by $\operatorname{Xind}(P)$, is the minimum $n$ such that there is a $\mathbb{Z}_2$-map from $P$ to $Q_n$. If $P$ has no elements, we define $\operatorname{Xind}(P)$ to be $-1$.
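For very small posets the cross-index can even be found by exhaustive search. The sketch below (entirely our own construction) checks that $\operatorname{Xind}(\operatorname{Hom}(K_2,K_3))=1$, consistent with the bound $\chi(K_3)=3\geq\operatorname{Xind}(\operatorname{Hom}(K_2,K_3))+2$ stated later:

```python
from itertools import combinations, product

# Sketch (names ours): Xind of the Z2-poset Hom(K2, K3).  Since K3 is
# complete, its elements are all pairs (A, B) of nonempty disjoint subsets
# of {1,2,3}, ordered by componentwise inclusion, with -(A, B) = (B, A).

def hom_k2_k3():
    subs = [frozenset(c) for r in (1, 2) for c in combinations({1, 2, 3}, r)]
    return [(A, B) for A in subs for B in subs if not A & B]

def leq(p, q):
    return p[0] <= q[0] and p[1] <= q[1]

def has_z2_map_to_Qn(P, n):
    """Order-preserving Z2-map P -> Q_n = {+-1, ..., +-(n+1)}, by search."""
    reps, seen = [], set()
    for p in P:                      # one representative per Z2-orbit
        if p not in seen:
            reps.append(p)
            seen.update({p, (p[1], p[0])})
    labels = [s * m for m in range(1, n + 2) for s in (1, -1)]
    for choice in product(labels, repeat=len(reps)):
        phi = {}
        for p, v in zip(reps, choice):
            phi[p], phi[(p[1], p[0])] = v, -v
        if all(phi[p] == phi[q] or abs(phi[p]) < abs(phi[q])
               for p in P for q in P if p != q and leq(p, q)):
            return True
    return False

P = hom_k2_k3()          # 12 elements
xind = next(n for n in range(0, 3) if has_z2_map_to_Qn(P, n))  # equals 1
```

There is no equivariant order-preserving map to $Q_0$ because the comparability graph of $\operatorname{Hom}(K_2,K_3)$ connects each element to its negative, which would force $\phi(x)=-\phi(x)$.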
Given a poset $P$, its {\em order complex}, denoted by $\Delta P$, is the simplicial complex whose simplices are the chains of $P$. If $P$ is a free $\mathbb{Z}_2$-poset, then $\Delta P$ is a free simplicial $\mathbb{Z}_2$-complex, i.e., no chains are fixed under the automorphism $-$. Any order-preserving $\mathbb{Z}_2$-map between two free $\mathbb{Z}_2$-posets $P$ and $Q$ lifts naturally to a $\mathbb{Z}_2$-simplicial map between $\Delta P$ and $\Delta Q$. Since there is a $\mathbb{Z}_2$-map from $\|\Delta Q_n\|$ to $\mathcal{S}^n$, we have $\operatorname{Xind}(P)\geq\operatorname{ind}(\Delta P)$ for any free $\mathbb{Z}_2$-poset $P$. In the sequel, we write $\operatorname{ind}(P)$ for $\operatorname{ind}(\Delta P)$.
\subsubsection{Complexes for graphs}\label{subsubsec:graphcomplex}
Let $G=(V,E)$ be a graph. The {\em box complex of $G$}, denoted by $B_0(G)$, has $V \uplus V = V\times[2]$ as the vertex set and $$\{A\uplus B:\ A,B\subseteq V,\ A\cap B=\varnothing,\ G[A,B] \mbox{ is a complete bipartite graph} \}$$ as the simplex set, where $G[A,B]$ is the bipartite graph whose sides are $A$ and $B$ and whose edges are those of $G$ having one endpoint in $A$ and the other in $B$.
The $\mathbb{Z}_2$-action $\nu\colon V\uplus V\rightarrow V \uplus V$ on $B_0(G)$ is given by interchanging the two copies of $V$; that is, $\nu(v,1)=(v,2)$ and $\nu(v, 2)=(v, 1)$, for any $v \in V $. Clearly, this $\mathbb{Z}_2$-action makes $B_0(G)$ a free simplicial $\mathbb{Z}_2$-complex.
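To make the definition concrete, one can enumerate the simplices of $B_0(G)$ directly; the sketch below (helper names ours) does so for the single-edge graph $K_2$, where the eight nonempty simplices include the vacuous pairs $(A,\varnothing)$ and $(\varnothing,B)$:

```python
from itertools import combinations

# Sketch (names ours): all nonempty simplices A + B of the box complex
# B0(K2), straight from the definition; G[A, B] must be complete bipartite,
# which holds vacuously when A or B is empty.

def powerset(s):
    s = list(s)
    return [frozenset(c) for r in range(len(s) + 1) for c in combinations(s, r)]

def box_complex(V, edges):
    adj = {v: {u for e in edges for u in e if v in e and u != v} for v in V}
    sims = []
    for A in powerset(V):
        for B in powerset(set(V) - A):
            if (A or B) and all(b in adj[a] for a in A for b in B):
                sims.append((A, B))
    return sims

sims = box_complex({"a", "b"}, [("a", "b")])  # 8 nonempty simplices
```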
There is also another box complex, denoted by $B(G)$, which differs from $B_0(G)$ in that if one of $A$ or $B$ is empty, the vertices in the other one must still have a common neighbor. Since we do not use this complex any further, we give no more details on it.
The {\em Hom complex} $\operatorname{Hom}(K_2, G)$ of a graph $G$ is a free $\mathbb{Z}_2$-poset consisting of all pairs $(A,B)$ such that $A$ and $B$ are nonempty disjoint subsets of $V$ and such that $G[A,B]$ is a complete bipartite graph. The partial order of this poset is the inclusion $\subseteq$ extended to pairs of subsets of $V$: $$(A,B)\subseteq (A',B')\quad\mbox{if $A\subseteq A'$ and $B\subseteq B'$}.$$ The fixed-point free automorphism $-$ on $\operatorname{Hom}(K_2, G)$ is defined by $-(A,B)=(B,A)$. The definitions of the box complex $B_0(G)$ and of the Hom complex have some similarity. To understand their relationships further, see~\cite{SiTa06} for instance.
These complexes can be used to provide lower bounds on the chromatic number of $G$, see \cite{AlHa14,AlHa15,MaZi04,SiTaZs13}. They are related by the following chain of inequalities, \begin{equation}\label{lbchrom} \begin{array}{lll} \chi(G) & \geq & \operatorname{Xind}(\operatorname{Hom}(K_2, G))+2 \geq \operatorname{ind}(\operatorname{Hom}(K_2,G))+2\geq \operatorname{ind}(B_0(G))+1\\
& \geq & \operatorname{coind}(B_0(G))+1 \geq |V(\mathcal{H})|-\operatorname{alt}(\mathcal{H}) \geq\operatorname{cd}_2(\mathcal{H}), \end{array} \end{equation} where $\mathcal{H}$ is any Kneser representation of $G$.
\subsection{Fan's lemma and its variations}\label{subsec:kyfan}
Fan's lemma, which plays an important role in topology and in combinatorics, is the following theorem.
\begin{theorem}[Fan lemma~\cite{Fa56}]\label{thm:kyfan} Consider a centrally-symmetric triangulation $\mathsf{T}$ of $\mathcal{S}^d$ and let $\lambda:V(\mathsf{T})\rightarrow\{\pm 1,\ldots,\pm m\}$ be a labeling satisfying the following properties: \begin{itemize} \item it is antipodal: $\lambda(-v)=-\lambda(v)$ for every $v\in V(\mathsf{T})$ \item it has no complementary edges: $\lambda(u)+\lambda(v)\neq 0$ for every pair $uv$ of adjacent vertices of $\mathsf{T}$. \end{itemize} Then there is an odd number of $d$-dimensional simplices $\sigma$ of $\mathsf{T}$ such that $\lambda(V(\sigma))$ is of the form $$\{-j_1,+j_2,\ldots,(-1)^{d+1}j_{d+1}\}$$ for some $1\leq j_1<\cdots<j_{d+1}\leq m$. In particular, $m\geq d+1$. \end{theorem} Such simplices $\sigma$ are {\em negative-alternating simplices}.
This theorem is especially useful to combinatorics in the special case when $\mathsf{T}$ is the first barycentric subdivision of the boundary of the $n$-dimensional cross-polytope. It then takes a purely combinatorial form. Before stating this special case, we define a partial order that plays a role in the remainder of the paper. As used in oriented matroid theory, we define $\preceq$ to be the following partial order on $\{+,-,0\}$: $$0\preceq +,\quad 0\preceq -,\quad+\mbox{ and }-\mbox{ are not comparable.}$$ We extend it to sign vectors by simply taking the product order: for $\boldsymbol{x},\boldsymbol{y}\in\{+,-,0\}^n$, we have $\boldsymbol{x}\preceq\boldsymbol{y}$ if the following implication holds for every $i\in[n]$ $$x_i\neq 0\Longrightarrow x_i=y_i.$$
\begin{lemma}[Octahedral Fan lemma]\label{lem:kyfan_oct} Let $m$ and $n$ be positive integers and $$\lambda:\{+,-,0\}^n\setminus \{\boldsymbol{0}\} \longrightarrow\{\pm 1,\ldots,\pm m\}$$ be a map satisfying the following properties: \begin{itemize} \item $\lambda(-\boldsymbol{x})=-\lambda(\boldsymbol{x})$ for all $\boldsymbol{x}$, \item $\lambda(\boldsymbol{x})+\lambda(\boldsymbol{y})\neq 0$ when $\boldsymbol{x}\preceq\boldsymbol{y}$. \end{itemize} Then there is an odd number of chains $\boldsymbol{x}_1\preceq \cdots \preceq \boldsymbol{x}_n$ such that $\displaystyle\left\{\lambda(\boldsymbol{x}_1),\ldots,\lambda(\boldsymbol{x}_n)\right\}$ is of the form $$\{-j_1,+j_2,\ldots,(-1)^nj_n\}$$ for some $1\leq j_1<\cdots<j_n\leq m$. In particular, $m \geq n$. \end{lemma}
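The parity statement can be verified mechanically in tiny cases. The following sketch (the labeling is our own choice, not from the paper) takes $n=m=2$ with $\lambda(\boldsymbol{x})$ determined by the last nonzero coordinate, a labeling that is antipodal and admits no complementary pair $\boldsymbol{x}\preceq\boldsymbol{y}$, and counts the negative-alternating chains:

```python
from itertools import permutations, product

# Sketch: octahedral Fan lemma for n = m = 2.  Each nonzero sign vector is
# labeled by (sign of its last nonzero entry) * (that entry's position).

n = 2
vecs = [x for x in product([1, -1, 0], repeat=n) if any(x)]

def lab(x):
    i = max(j for j in range(n) if x[j] != 0)
    return (1 if x[i] > 0 else -1) * (i + 1)

def below(x, y):  # the partial order on sign vectors
    return all(xi == 0 or xi == yi for xi, yi in zip(x, y))

count = 0
for chain in permutations(vecs, n):
    if not all(below(chain[i], chain[i + 1]) for i in range(n - 1)):
        continue
    ls = sorted((lab(x) for x in chain), key=abs)
    # label set of the form {-j1, +j2, ...} with j1 < j2 < ...
    if (len({abs(v) for v in ls}) == n
            and all(ls[k] * (-1) ** (k + 1) > 0 for k in range(n))):
        count += 1
# count is odd, as the lemma predicts (here exactly one such chain).
```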
The following result is proved with the help of Lemma~\ref{lem:kyfan_oct}. It is implicitly used in the proof of the alternative Kneser coloring lemma in \cite{Ch11}. Here, we state it as a lemma, with a proof for the sake of completeness.
\begin{lemma}\label{lem:chen}
Consider an order-preserving $\mathbb{Z}_2$-map $\lambda:\left(\{+,-,0\}^n\setminus\{\boldsymbol{0}\},\preceq\right)\rightarrow Q_{n-1}$. Suppose moreover that there is a $\gamma\in[n]$ such that when $\boldsymbol{x}\prec\boldsymbol{y}$, at most one of $|\lambda(\boldsymbol{x})|$ and $|\lambda(\boldsymbol{y})|$ is equal to $\gamma$. Then there are two chains $$\boldsymbol{x}_{1}\preceq\cdots\preceq\boldsymbol{x}_{n}\quad\mbox{and}\quad\boldsymbol{y}_{1}\preceq\cdots\preceq\boldsymbol{y}_{n}$$ such that $$\lambda(\boldsymbol{x}_{i})=(-1)^ii\quad \mbox{for all $i$}\qquad\mbox{and }\qquad\lambda(\boldsymbol{y}_{i})=(-1)^ii \quad\mbox{for $i\neq\gamma$}$$ and such that $\boldsymbol{x}_{\gamma}=-\boldsymbol{y}_{\gamma}$. \end{lemma}
\begin{proof}
Let $\Gamma$ be the set of all vectors $\boldsymbol{x}\in \{+,-,0\}^n\setminus\{\boldsymbol{0}\}$ such that $|\lambda(\boldsymbol{x})|=\gamma$. For $\boldsymbol{x}\in \Gamma$, define $\alpha(\boldsymbol{x},\lambda)$ to be the number of chains $\boldsymbol{x}_{1}\preceq\cdots\preceq\boldsymbol{x}_{n}$ that contain $\boldsymbol{x}$ and such that $\lambda(\boldsymbol{x}_{i})=(-1)^ii\quad \mbox{for all $i$}$. Such a chain contains exactly one $\boldsymbol{x}$ of $\Gamma$. Thus, using Lemma~\ref{lem:kyfan_oct}, we get that $\displaystyle\sum_{\boldsymbol{x}\in \Gamma}\alpha(\boldsymbol{x},\lambda)$ is an odd integer. It implies that there is at least one $\boldsymbol{z}\in \Gamma$ such that $\alpha(\boldsymbol{z},\lambda)$ is odd. Now, define $\lambda':(\{+,-,0\}^n\setminus\{\boldsymbol{0}\},\preceq)\rightarrow Q_{n-1}$ such that $$\lambda'(\boldsymbol{x})=\left\{ \begin{array}{ll} -\lambda(\boldsymbol{x}) & \mbox{if } \boldsymbol{x}\in\{\boldsymbol{z},-\boldsymbol{z}\}\\ \lambda(\boldsymbol{x}) & \mbox{otherwise.} \end{array}\right.$$
Because of the property enjoyed by $\gamma$, when $\boldsymbol{x}$ is equal to $\boldsymbol{z}$ or $-\boldsymbol{z}$, any $\boldsymbol{y}$ comparable with $\boldsymbol{x}$ is such that $|\lambda(\boldsymbol{y})|\neq\gamma$. Hence, the map $\lambda'$ still satisfies the second condition of Lemma~\ref{lem:kyfan_oct}. Since it also clearly satisfies the first condition, the conclusion of Lemma~\ref{lem:kyfan_oct} holds for $\lambda'$ and $\displaystyle\sum_{\boldsymbol{x}\in \Gamma}\alpha(\boldsymbol{x},\lambda')$ is an odd integer. This implies $$\displaystyle\sum_{\boldsymbol{x}\in \Gamma}\alpha(\boldsymbol{x},\lambda) \equiv\displaystyle\sum_{\boldsymbol{x}\in \Gamma}\alpha(\boldsymbol{x},\lambda')\quad (\operatorname{mod} 2).$$ Since $\alpha(\boldsymbol{x},\lambda)=\alpha(\boldsymbol{x},\lambda')$ for every $\boldsymbol{x}\in \Gamma\setminus\{\boldsymbol{z},-\boldsymbol{z}\}$, we have $$\alpha(\boldsymbol{z},\lambda)+\alpha(-\boldsymbol{z},\lambda)\equiv\alpha(\boldsymbol{z},\lambda')+\alpha(-\boldsymbol{z},\lambda')\quad (\operatorname{mod} 2).$$ We have $\alpha(-\boldsymbol{z},\lambda)=\alpha(\boldsymbol{z},\lambda')=0$ because $\lambda(-\boldsymbol{z})=\lambda'(\boldsymbol{z})$ is not of the right sign. Since $\alpha(\boldsymbol{z},\lambda)$ is odd, we have $\alpha(-\boldsymbol{z},\lambda')>0$. The inequality $\alpha(\boldsymbol{z},\lambda)>0$ implies that there exists a chain $\boldsymbol{x}_{1}\preceq\cdots\preceq\boldsymbol{x}_{n}$ such that $\boldsymbol{x}_{\gamma}=\boldsymbol{z}$ and $\lambda(\boldsymbol{x}_{i})=(-1)^ii\quad \mbox{for all $i$}$. The inequality $\alpha(-\boldsymbol{z},\lambda')>0$ implies that there exists a chain $\boldsymbol{y}_{1}\preceq\cdots\preceq\boldsymbol{y}_{n}$ such that $\boldsymbol{y}_{\gamma}=-\boldsymbol{z}$ and $\lambda'(\boldsymbol{y}_{i})=(-1)^ii\quad \mbox{for all $i$}$.
We have $\boldsymbol{x}_{\gamma}=-\boldsymbol{y}_{\gamma}=\boldsymbol{z}$ and $\lambda'(\boldsymbol{y}_{i})=\lambda(\boldsymbol{y}_{i})=(-1)^ii$ for all $i\in[n]\setminus\{\gamma\}$. Therefore, the two chains $$\boldsymbol{x}_{1}\preceq\cdots\preceq\boldsymbol{x}_{n}\quad\mbox{and}\quad\boldsymbol{y}_{1}\preceq\cdots\preceq\boldsymbol{y}_{n}$$ are the desired chains. \end{proof}
\section{A ``path'' of heterochromatic subgraphs}\label{sec:zigzag}
The exact statement of the zig-zag theorem goes as follows.
\begin{theorem}[Zig-zag theorem~\cite{SiTa06}]\label{thm:zigzag} Let $G$ be a properly colored graph. Assume that the colors are linearly ordered. Then $G$ contains a heterochromatic copy of $K_{\lceil\frac t 2\rceil,\lfloor \frac t 2\rfloor}$, where $t=\operatorname{coind}(B_0(G))+1$, such that the colors appear alternating on the two sides of the bipartite subgraph with respect to their order. \end{theorem}
Simonyi, Tardif, and Zsb\'an~\cite{SiTaZs13} showed that the statement remains true when $t=\operatorname{Xind}(\operatorname{Hom}(K_2,G)) + 2$. It is a generalization since we have the inequality $\operatorname{Xind}(\operatorname{Hom}(K_2,G)) + 2\geq\operatorname{coind}(B_0(G))+1$, which can moreover be strict.
A subgraph $H$ of a properly colored graph $G$ is {\em almost heterochromatic} if it contains at least $|V(H)|-1$ colors. A bipartite graph is {\em balanced} if the sizes of its two sides differ by at most two. We remind the reader that two complete bipartite subgraphs are {\em adjacent} if one is obtained from the other by removing a vertex and then adding a vertex, possibly the same one if it is added to the other side of the bipartite graph. The following theorem is our generalization of Theorem~\ref{thm:zigzag}.
\begin{theorem}\label{thm:path_colorful} Any properly colored graph $G$ contains a sequence of adjacent almost heterochromatic complete and balanced bipartite subgraphs with $\operatorname{coind}(B_0(G))+1$ vertices, which starts at a heterochromatic subgraph and ends at the same heterochromatic subgraph with the two sides interchanged. \end{theorem}
Actually, the proof also shows that the bipartite subgraph on which the sequence starts and ends has the same property as stated in Theorem~\ref{thm:zigzag}, namely that the colors appear alternating on the two sides with respect to their order (assuming the colors being linearly ordered). Note that without the statement about the interchange of the two sides, Theorem~\ref{thm:path_colorful} would be a consequence of Theorem~\ref{thm:zigzag}, since the sequence with only one subgraph would satisfy the conclusion of the theorem.
In other words, under the condition of the theorem, there is a sequence $(A_1,B_1),\ldots,(A_s,B_s)$ of pairs of disjoint subsets of $V(G)$, such that $A_1=B_s$ and $A_s=B_1$ with $\big||A_1|-|B_1|\big|\leq 1$, and such that for every $i$ \begin{itemize} \item $G[A_i,B_i]$ is complete
\item $|c(A_1\cup B_1)|=t$
\item $|A_i\cup B_i|=t$
\item $|A_i\setminus A_{i+1}|+|B_i\setminus B_{i+1}|=|A_{i+1}\setminus A_i|+|B_{i+1}\setminus B_i|=1$ (adjacency)
\item $\big||A_i|-|B_i|\big|\leq 2$ (balancedness) \item $|c(A_i\cup B_i)|\geq t-1$ (almost heterochromaticity), \end{itemize} where $c$ is the proper coloring of $G$ and $t=\operatorname{coind}(B_0(G))+1$.
The proof of Theorem~\ref{thm:path_colorful} relies on the following lemma. As already defined in Section~\ref{subsec:kyfan}, a $d$-dimensional simplex $\sigma$ with a labeling $\lambda$ is negative-alternating if $\lambda(V(\sigma))$ is of the form $$\{-j_1,+j_2,\ldots,(-1)^{d+1}j_{d+1}\}$$ for some $1\leq j_1<\cdots<j_{d+1}$. It is {\em positive-alternating} if it is of the same form, with the $-$'s and $+$'s interchanged. It is {\em alternating} if it is negative-alternating or positive-alternating. It is {\em almost-alternating} if it has a facet (a $(d-1)$-dimensional face) that is alternating, while not being itself alternating.
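As a small illustration, with arbitrarily chosen labels, consider a $2$-dimensional simplex whose three vertices receive the labels $-2$, $+5$, and $-7$. Listing the labels by increasing absolute value gives the sign pattern $-,+,-$, so this simplex is negative-alternating (here $j_1=2<j_2=5<j_3=7$). If the labels were $-2$, $+5$, $+7$ instead, the simplex itself would not be alternating, but its facet carrying the labels $\{-2,+5\}$ would be, so the simplex would be almost-alternating.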
Assuming that $\mathcal{S}^d$ is embedded in $\mathbb{R}^{d+1}$, the two {\em hemispheres} of $\mathcal{S}^d$ are the sets of points of $\mathcal{S}^d$ whose last coordinate is respectively nonnegative and nonpositive. The {\em equator} is the intersection of these two hemispheres. (In other words, the equator is the set of points whose last coordinate is equal to $0$). A triangulation of $\mathcal{S}^d$ {\em refines the hemispheres} if each simplex is contained in at least one of the hemispheres of $\mathcal{S}^d$.
We follow the terminology of Schrijver's book~\cite{Sch03}: a {\em circuit} is a connected graph whose vertices are all of degree $2$.
\begin{lemma}\label{lem:gen_kyfan} Consider a centrally symmetric triangulation $\mathsf{T}$ of $\mathcal{S}^d$ that refines the hemispheres, with an antipodal labeling $\lambda:V(\mathsf{T})\rightarrow\{\pm 1,\ldots,\pm m\}$ such that no adjacent vertices get opposite labels. Let $H$ be the graph whose vertices are the barycenters of the almost-alternating and alternating $d$-simplices and whose edges connect vertices if the corresponding simplices share a common alternating facet. Then $H$ contains a centrally symmetric circuit with at least two alternating $d$-simplices. \end{lemma}
\begin{proof} An alternating $(d-1)$-simplex is the facet of two $d$-simplices that are alternating or almost-alternating. An alternating or almost-alternating $d$-simplex has exactly two facets that are alternating. Thus $H$ is a collection of vertex-disjoint circuits. Denote by $\nu$ the central symmetry. Suppose for a contradiction that for every circuit $C$, the circuits $C$ and $\nu(C)$ are distinct. Let $o^-(C)$ (resp. $o^+(C)$) be the number of negative-alternating (resp. positive-alternating) $(d-1)$-simplices crossed by $C$ on the equator. Each circuit intersects the equator an even number of times (this number being possibly $0$), hence $o^-(C)+o^+(C)$ is even. Since $o^-(\nu(C))=o^+(C)$, we have an even number of negative-alternating $(d-1)$-simplices of the equator contained in $C$ and $\nu(C)$ together. Since two such circuits are disjoint and since an alternating $(d-1)$-simplex is contained in exactly one circuit, we have an even number of negative-alternating $(d-1)$-simplices on the equator. This contradicts Theorem~\ref{thm:kyfan} applied to the equator, which is homeomorphic to $\mathcal{S}^{d-1}$.
There is thus a circuit which is equal to its image by $\nu$. This circuit contains at least two centrally symmetric alternating $d$-simplices, since the sign of the smallest label of the almost-alternating $d$-simplices must change along it. \end{proof}
\begin{proof}[Proof of Theorem~\ref{thm:path_colorful}] Let $c$ be a proper coloring of $G$. Let $t=\operatorname{coind}(B_0(G))+1$. There is a $\mathbb{Z}_2$-map from $\mathcal{S}^{t-1}$ to $B_0(G)$. Thus there exists a simplicial $\mathbb{Z}_2$-map $\mu:\mathsf{T}\rightarrow B_0(G)$ for some centrally symmetric triangulation $\mathsf{T}$ of $\mathcal{S}^{t-1}$ that refines the hemispheres (according to the equivariant simplicial approximation theorem, mentioned for instance in Theorem 2 of the paper~\cite{DeLZi06} by De Longueville and \v{Z}ivaljevi\'c). Now, for a vertex $v$ of $\mathsf{T}$, define $\lambda(v)$ to be $+c(\mu(v))$ if $\mu(v)$ is in the first copy of $V(G)$ and to be $-c(\mu(v))$ if $\mu(v)$ is in the second copy of $V(G)$. The map $\lambda$ satisfies the condition of Lemma~\ref{lem:gen_kyfan}. There is thus a centrally symmetric circuit $C$ containing two centrally symmetric alternating $(t-1)$-simplices $\tau$ and $\nu(\tau)$. Now, start at $\tau$, and follow the circuit in an arbitrary direction, until reaching $\nu(\tau)$. Consider the images by $\mu$ of the alternating and almost-alternating $(t-1)$-simplices met and keep only those images that are $(t-1)$-dimensional. This is the sought sequence of bipartite graphs. \end{proof}
\section{Colorful subgraphs and cross-index}\label{sec:colorful}
The following theorem is a positive answer to the question by Simonyi, Tardif, and Zsb\'an mentioned in the introduction.
\begin{theorem}\label{thm:xind}
Let $G$ be a graph for which $$\chi(G) = \operatorname{Xind}(\operatorname{Hom}(K_2,G))+2 = t.$$ Assume that $G$ is properly colored with $[t]$ as the color set and let $I,J\subseteq[t]$ form a bipartition of the color set, i.e., $I\cup J =[t]$ and $I\cap J =\varnothing$. Then there exists a colorful copy of $K_{|I|,|J|}$ in $G$ with the colors in $I$ on one side, and the colors in $J$ on the other side. \end{theorem}
The original colorful $K_{\ell,m}$ theorem is the same theorem with $\operatorname{coind}(B_0(G))+1$ in place of $\operatorname{Xind}(\operatorname{Hom}(K_2,G))+2$. A first improvement with $\operatorname{ind}(\operatorname{Hom}(K_2,G))+2$ was obtained by Simonyi, Tardif, and Zsb\'an~\cite{SiTaZs13}. Because of the inequalities~\eqref{lbchrom}, Theorem~\ref{thm:xind} implies the first two versions of the colorful $K_{\ell,m}$ theorem. It is actually not clear that we have obtained a true generalization of the version due to Simonyi, Tardif, and Zsb\'an, since it is not known whether there exists a graph $G$ with $\operatorname{Xind}(\operatorname{Hom}(K_2,G))$ strictly larger than $\operatorname{ind}(\operatorname{Hom}(K_2,G))$.
Let $P$ be a free $\mathbb{Z}_2$-poset and let $\phi:P\rightarrow Q_s$ be any map for some positive integer $s$. We define an {\em alternating chain} as a chain $p_1<_P\cdots<_Pp_k$ with alternating signs, i.e., the signs of $\phi(p_i)$ and $\phi(p_{i+1})$ differ for every $i\in[k-1]$.
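For instance, with hypothetical values, suppose a chain $p_1<_Pp_2<_Pp_3$ satisfies $$\phi(p_1)=-1,\quad\phi(p_2)=+3,\quad\phi(p_3)=-4.$$ The signs $-,+,-$ alternate, so this is an alternating chain of length $3$; if instead $\phi(p_2)=-3$, the signs would read $-,-,-$ and the chain would not be alternating.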
\begin{lemma}\label{lem:alter} Suppose that $\phi:P\rightarrow Q_s$ is an order-preserving $\mathbb{Z}_2$-map. Then there exists in $P$ at least one alternating chain of length $\operatorname{Xind}(P)+1$. Moreover, if $s=\operatorname{Xind}(P)$, then for any $(s+1)$-tuple $(\varepsilon_1,\ldots,\varepsilon_{s+1})\in \{+,-\}^{s+1}$, there exists at least one chain $p_1<_P\cdots<_Pp_{s+1}$ such that $\phi(p_i)=\varepsilon_i i$ for each $i\in[s+1]$. \end{lemma} \begin{proof} First, we prove that there exists at least one alternating chain of length $\operatorname{Xind}(P)+1$. For a contradiction, we suppose that the length of a longest alternating chain is at most $\operatorname{Xind}(P)$. For each $p\in P$, let $\ell(p)$ be the length of a longest alternating chain ending at $p$. The function $\ell$ takes its values between $1$ and $\operatorname{Xind}(P)$. Define the map $\bar{\phi}:P\rightarrow Q_{\operatorname{Xind}(P)-1}$ by $\bar\phi(p)=\pm \ell(p)$ with the sign of $\phi(p)$. The map $\bar\phi$ is an order-preserving $\mathbb{Z}_2$-map from $P$ to $Q_{\operatorname{Xind}(P)-1}$, which is in contradiction with the definition of $\operatorname{Xind}(P)$. This completes the proof of the first part.
Assume now that $s=\operatorname{Xind}(P)$. To complete the proof we shall prove that for any $(s+1)$-tuple $(\varepsilon_1,\ldots,\varepsilon_{s+1})\in \{+,-\}^{s+1}$, there exists at least one chain $p_1<_P\cdots<_Pp_{s+1}$ such that $\phi(p_i)=\varepsilon_i i$ for each $i\in[s+1]$. To this end, define $\phi':P\rightarrow Q_s$ such that for each $p\in P$,
$\phi'(p)=(-1)^i\varepsilon_i\phi(p)$ where $i=|\phi(p)|$. The map $\phi'$ is an order-preserving $\mathbb{Z}_2$-map. In view of the prior discussion, there is an alternating chain of length $s+1$ in $P$ with respect to the map $\phi'$ and which starts with a $-$ sign. If we consider this chain with respect to the map $\phi$, one can see that this chain is the desired one. \end{proof}
\begin{proof}[Proof of Theorem~\ref{thm:xind}] Let $c:V(G)\rightarrow[t]$ be a proper coloring of $G$. When $I$ or $J$ is empty, the assertion trivially holds. Therefore, without loss of generality, we may assume that $1\in I$ and $2\in J$ (we can rename colors if necessary). Define $\phi:\operatorname{Hom}(K_2, G)\rightarrow Q_{t-2}$ so that for each $(A,B)\in\operatorname{Hom}(K_2, G)$, we have $\phi(A,B)=\pm\big(\max \left(c(A)\cup c(B)\right)-1\big)$ and the sign is positive if $\max \left(c(A)\cup c(B)\right)\in c(A)$ and is negative otherwise. The map $\phi$ is an order-preserving $\mathbb{Z}_2$-map. For each $i\in[t-1]$, set $$\varepsilon_i=\left\{\begin{array}{ll} + & \mbox{if $i+1\in I$}\\ - & \mbox{if $i+1\in J$.} \end{array}\right.$$ According to Lemma~\ref{lem:alter}, there is a chain of pairs of nonempty disjoint vertex subsets $(A_1,B_1)\subset \cdots\subset (A_{t-1},B_{t-1})$ such that $\phi(A_i,B_i)=\varepsilon_i i$. This implies that for $j\in I\setminus\{1\}$, we have $j\in c(A_{j-1})$, and thus $j\in c(A_{t-1})$. It similarly implies that for $j\in J$, we have $j\in c(B_{j-1})\subseteq c(B_{t-1})$. In particular, we have $2\in c(B_1)$. Since $\varepsilon_1=-$, we have $\max c(A_1)<2$, and thus $1\in c(A_1)$. Therefore $I\subseteq c(A_{t-1})$ and $J\subseteq c(B_{t-1})$. As $I$ and $J$ form a partition of $[t]$, we get $c(A_{t-1})=I$ and $c(B_{t-1})=J$, which completes the proof. \end{proof}
\subsection*{Remark} Lemma~\ref{lem:alter} leads to a generalization of Fan's lemma (Theorem~\ref{thm:kyfan}) which holds for any free simplicial $\mathbb{Z}_2$-complex. The generalization is the following (without the oddness assertion).
\begin{proposition} Let $\mathsf{K}$ be a free simplicial $\mathbb{Z}_2$-complex and let $\lambda$ be an antipodal labeling with no complementary edges. Then there exists at least one alternating simplex of dimension $\operatorname{ind}(\mathsf{K})$. \end{proposition}
We explain how Lemma~\ref{lem:alter} implies this generalization; it is worth noting that this proof does not require any induction. Apply the lemma with $P=\operatorname{sd}\mathsf{K}$ and $\phi(\sigma)=\max_{v\in V(\sigma)}\lambda(v)$, where the maximum is taken according to the partial order $\leq_{Q_s}$. The map $\phi$ is an order-preserving $\mathbb{Z}_2$-map from $P$ to $Q_s$. Therefore, there is an alternating chain $\sigma_1\subset\cdots\subset\sigma_{\ell}$ in $P$ of length $\ell=\operatorname{Xind}(\operatorname{sd}\mathsf{K})+1\geq\operatorname{ind}(\operatorname{sd}^2\mathsf{K})+1=\operatorname{ind}(\mathsf{K})+1$. It is then easy to check that $\sigma_{\ell}$ is an alternating simplex of dimension $\operatorname{ind}(\mathsf{K})$.
It is not too difficult to find instances for which the number of negative-alternating simplices of dimension $\operatorname{ind}(\mathsf{K})$ is even. We thus cannot expect a full generalization with the oddness assertion.
\section{Generalization of the alternative Kneser coloring lemma}\label{sec:alternative}
\subsection{The generalization}
A hypergraph $\mathcal{H}$ is {\em nice} if it is nonempty (i.e., it has at least one edge), if it has no singletons, and if there is a bijection $\sigma:[n]\rightarrow V(\mathcal{H})$ for which \begin{itemize} \item $\chi(\operatorname{KG}(\mathcal{H})) = n-\operatorname{alt}_{\sigma}(\mathcal{H})$
\item every sign vector $\boldsymbol{x}\in\{+,-,0\}^n$ with $\operatorname{alt}(\boldsymbol{x})\geq \operatorname{alt}_{\sigma}(\mathcal{H})$ and $|\boldsymbol{x}|> \operatorname{alt}_{\sigma}(\mathcal{H})$ is such that at least one of $\sigma(\boldsymbol{x}^+)$ and $\sigma(\boldsymbol{x}^-)$ contains some edge of $\mathcal{H}$. \end{itemize}
We use the notation $|\boldsymbol{x}|$ to denote the cardinality of the support of $\boldsymbol{x}$, i.e., the number of $x_i$ being nonzero. Note that the chromatic number of a general Kneser graph built upon a nice hypergraph matches the lower bound of \eqref{eq:alter}. Even if the definition of a nice hypergraph is quite technical, there are some special cases that are easy to figure out, see Lemmas~\ref{lem:cd2} and~\ref{lem:PartitionMatroid} below.
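To fix ideas, consider the sign vector $\boldsymbol{x}=(+,0,-,-,0,+)\in\{+,-,0\}^6$. Its support has cardinality $|\boldsymbol{x}|=4$, with $\boldsymbol{x}^+=\{1,6\}$ and $\boldsymbol{x}^-=\{3,4\}$; its nonzero entries read $+,-,-,+$, a sequence whose longest alternating subsequence, e.g. $+,-,+$, has length $3$, so $\operatorname{alt}(\boldsymbol{x})=3$ (with $\operatorname{alt}$ as recalled in Section~\ref{subsec:alt_cd}).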
The {\em categorical product} of graphs $G_1,\ldots,G_s$, denoted by $G_1\times\cdots\times G_s$, is the graph with vertex set $V(G_1)\times\cdots\times V(G_s)$ and such that two vertices $(u_1,\ldots,u_s)$ and $(v_1,\ldots,v_s)$ are adjacent if $u_iv_i\in E(G_i)$ for each $i$. It is a widely studied graph product, see for instance~\cite{He79,Lo67,Lo71,Ta08,Zh98}.
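As a toy example, the product $K_2\times K_2$ has vertex set $\{(1,1),(1,2),(2,1),(2,2)\}$ and exactly the two edges $(1,1)(2,2)$ and $(1,2)(2,1)$: it is the disjoint union of two copies of $K_2$. This already illustrates that the categorical product of connected graphs need not be connected.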
\begin{theorem}\label{thm:main} Let $\mathcal{H}_1,\ldots,\mathcal{H}_s$ be nice hypergraphs and let $t=\min_j\{\chi(\operatorname{KG}(\mathcal{H}_j))\}$. Then any proper coloring of $\operatorname{KG}(\mathcal{H}_1)\times\cdots\times\operatorname{KG}(\mathcal{H}_s)$ with $t$ colors contains a colorful copy of $K^*_{t,t}$ with all colors appearing on each side. \end{theorem}
The quantity $t=\min_j\{\chi(\operatorname{KG}(\mathcal{H}_j))\}$ is actually the chromatic number of $\operatorname{KG}(\mathcal{H}_1)\times\cdots\times\operatorname{KG}(\mathcal{H}_s)$ according to a result in~\cite{HaMe16}.
We postpone the proof of Theorem~\ref{thm:main} to the end of the section. The following two lemmas give sufficient conditions for a hypergraph to be nice.
\begin{lemma}\label{lem:cd2} Any nonempty hypergraph $\mathcal{H}$ with $\chi(\operatorname{KG}(\mathcal{H}))=\operatorname{cd}_2(\mathcal{H})$ and no singletons is nice. \end{lemma}
\begin{proof}
Let $|V(\mathcal{H})|=n$ and consider an arbitrary bijection $\sigma:[n]\rightarrow V(\mathcal{H})$. The inequalities $\chi(\operatorname{KG}(\mathcal{H}))\geq n-\operatorname{alt}_\sigma(\mathcal{H})\geq \operatorname{cd}_2(\mathcal{H})$ hold, see Section~\ref{subsec:alt_cd}. We have thus $\chi(\operatorname{KG}(\mathcal{H}))=n-\operatorname{alt}_\sigma(\mathcal{H})$. Consider now a sign vector $\boldsymbol{x}\in\{+,-,0\}^n$ with $|\boldsymbol{x}|>\operatorname{alt}_{\sigma}(\mathcal{H})$. Since $\operatorname{alt}_\sigma(\mathcal{H})=n-\operatorname{cd}_2(\mathcal{H})$ and since $|\boldsymbol{x}|=|\sigma(\boldsymbol{x}^+)|+|\sigma(\boldsymbol{x}^-)|$, we get $|\sigma(\boldsymbol{x}^+)|+|\sigma(\boldsymbol{x}^-)|>n-\operatorname{cd}_2(\mathcal{H})$. Therefore at least one of $\sigma(\boldsymbol{x}^+)$ and $\sigma(\boldsymbol{x}^-)$ contains some edge of $\mathcal{H}$, which completes the proof. \end{proof}
\begin{lemma}\label{lem:PartitionMatroid}
Let $U_1,\ldots,U_m$ be a partition of $[n]$ and let $k,r_1,\ldots,r_m$ be positive integers. Assume that $|U_i|\neq 2r_i$ for every $i$ and that $k\geq 2$. Let $\mathcal{H}$ be the hypergraph defined by $$\begin{array}{rcl} V(\mathcal{H}) & = & [n] \\
E(\mathcal{H}) & = & \displaystyle{\left\{A\in {[n]\choose k}:\;|A\cap U_i|\leq r_i\;\mbox{for every $i$}\right\}}. \end{array}\footnote{The edges of such a hypergraph are the bases of a truncation of a partition matroid.}$$ If $\mathcal{H}$ has at least two disjoint edges, then it is nice. \end{lemma}
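To give a concrete instance of Lemma~\ref{lem:PartitionMatroid}, with numbers chosen only for illustration, take $n=6$, $m=2$, $U_1=\{1,2,3\}$, $U_2=\{4,5,6\}$, $r_1=r_2=1$, and $k=2$. The conditions $|U_i|\neq 2r_i$ and $k\geq 2$ hold, and $E(\mathcal{H})$ consists of the nine pairs having one element in $U_1$ and one in $U_2$. Since $\{1,4\}$ and $\{2,5\}$ are two disjoint edges, the lemma asserts that $\mathcal{H}$ is nice.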
The proof of Lemma~\ref{lem:PartitionMatroid} is long and technical. To ease the reading of the paper, we omit it. It can however be found at the following address:
{\footnotesize\noindent\url{http://facultymembers.sbu.ac.ir/hhaji/documents/Publications_files/Partition-Matroids-Are-Nice.pdf}.}
Theorem~\ref{thm:main} combined with Lemma~\ref{lem:cd2} shows in particular that if a nonempty hypergraph $\mathcal{H}$ has no singletons and is such that $\chi(\operatorname{KG}(\mathcal{H}))=\operatorname{cd}_2(\mathcal{H})$, then a colorful copy of $K^*_{t,t}$ exists. This is exactly the statement of Corollary~\ref{cor:cd2}. We have already mentioned that this corollary implies immediately the alternative Kneser coloring lemma with the help of the Lov\'asz theorem. Actually, the alternative Kneser coloring lemma can also be obtained from Theorem~\ref{thm:main} (with $s=1$) and Lemma~\ref{lem:PartitionMatroid} together (either by taking $m=n$ and $r_i=|U_i|=1$ for every $i$, or by taking $m=1$ and $r_1=k$).
\subsection{Circular chromatic number}\label{subsec:circ}
Let $G=(V,E)$ be a graph. For two integers $p\geq q\geq 1$, a {\em $(p,q)$-coloring} of $G$ is a mapping $c:V\rightarrow[p]$ such that $q\leq|c(u)-c(v)|\leq p-q$ for every edge $uv$ of $G$. The {\em circular chromatic number} of $G$ is $$\chi_c(G)=\inf\{p/q:\; G\mbox{ admits a $(p,q)$-coloring}\}.$$ The inequalities $\chi(G)-1<\chi_c(G)\leq\chi(G)$ hold. Moreover, the infimum in the definition is actually a minimum, i.e., $\chi_c(G)$ is attained for some $(p,q)$-coloring (and thus the circular chromatic number is always rational), see \cite{Zh01} for details. The question of determining which graphs $G$ are such that $\chi_c(G)=\chi(G)$ has received considerable attention~\cite{Zh99,Zhusurvey2006}. In particular, a conjecture by Johnson, Holroyd, and Stahl~\cite{JHS1997} stated that the circular chromatic number of $\operatorname{KG}(n,k)$ is equal to its chromatic number. It is known (cf.~\cite{CLZ13} or~\cite{HaTa10}) and easy to prove from the definition of circular coloring that if $G$ is $t$-colorable and every proper coloring with $t$ colors of $G$ contains a $K^*_{t,t}$ with all $t$ colors appearing on each side, then $\chi_c(G)=\chi(G)$. With his colorful theorem, Chen was able to prove the Johnson-Holroyd-Stahl conjecture, after partial results obtained by Hajiabolhassan and Zhu~\cite{HaZh03}, Meunier~\cite{Me05}, and Simonyi and Tardos~\cite{SiTa06}.
\begin{corollary}\label{cor:circ} Let $\mathcal{H}$ be a nonempty hypergraph such that $\chi(\operatorname{KG}(\mathcal{H}))=\operatorname{cd}_2(\mathcal{H})$. Then $\chi(\operatorname{KG}(\mathcal{H}))=\chi_c(\operatorname{KG}(\mathcal{H}))$. \end{corollary}
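To see the definition of a $(p,q)$-coloring at work on a small example outside the Kneser family, the $5$-cycle $C_5$ with vertices $v_0,\ldots,v_4$ admits the $(5,2)$-coloring $c(v_i)=(2i\bmod 5)+1$, i.e., the colors $1,3,5,2,4$ in cyclic order: every edge $uv$ satisfies $2\leq|c(u)-c(v)|\leq 3$. Hence $\chi_c(C_5)\leq 5/2$, and in fact $\chi_c(C_5)=5/2<3=\chi(C_5)$, a classical example showing that the inequality $\chi_c(G)\leq\chi(G)$ can be strict.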
If $\mathcal{H}$ has a singleton, it is not nice and we cannot apply Theorem~\ref{thm:main}. Nevertheless, its conclusion holds since $\operatorname{KG}(\mathcal{H})$ is then homomorphic to a graph with a vertex adjacent to any other vertex -- a so-called {\em universal} vertex -- and it is known that every graph with a universal vertex has its chromatic number equal to its circular chromatic number \cite{Gu93,Zh92bis}.
Note moreover that Theorem~\ref{thm:main} implies that all these graphs satisfy the Hedetniemi conjecture for the circular chromatic number, proposed by Zhu~\cite{Zh92bis}.
\subsection{Proof of Theorem~\ref{thm:main}}
The original proof of the alternative Kneser coloring lemma given by Chen~\cite{Ch11} was simplified by Chang, Liu, and Zhu~\cite{CLZ13}. A further simplification was subsequently obtained by Liu and Zhu~\cite{LZ15}. In our approach, we reuse ideas proposed in these papers, as well as techniques developed by the last two authors to deal with the categorical product of general Kneser graphs~\cite{HaMe16}.
The proof relies on Lemma~\ref{lem:chen} in a crucial way but it is rather intriguing that the argument does not work when $\min_{j\in[s]}\chi(\operatorname{KG}(\mathcal{H}_j))\leq 2$. We split therefore the proof into two parts: the first part is about the case $\min_{j\in[s]}\chi(\operatorname{KG}(\mathcal{H}_j))\leq 2$ and the second part is about the case $\min_{j\in[s]}\chi(\operatorname{KG}(\mathcal{H}_j))\geq 3$. The first part is rather straightforward and does not use other results.
Within the proof, for a nice hypergraph $\mathcal{H}_j$, we denote by $n_j$ the cardinality of $V(\mathcal{H}_j)$ and by $\sigma_j$ the bijection $[n_j]\rightarrow V(\mathcal{H}_j)$ whose existence is ensured by the niceness of $\mathcal{H}_j$. Note that because of the inequality~\eqref{eq:alter}, we have $\operatorname{alt}_{\sigma_j}(\mathcal{H}_j)=\operatorname{alt}(\mathcal{H}_j)$ for every $j$. Moreover, we identify each element of $V(\mathcal{H}_j)$ with its preimage by $\sigma_j$. Doing this, we get that for every $j$, the set $V(\mathcal{H}_j)$ is $[n_j]$ and that $\sigma_j$ is the identity permutation. Thus, if $\boldsymbol{x}\in\{+,-,0\}^{n_j}$ is such that neither $\boldsymbol{x}^+$ nor $\boldsymbol{x}^-$ contains an edge of $\mathcal{H}_j$, then we have $\operatorname{alt}(\boldsymbol{x})\leq\operatorname{alt}(\mathcal{H}_j)$. This will be used throughout the proof without further mention.
We set $n:=\sum_jn_j$.
\begin{proof}[Proof of Theorem~\ref{thm:main} when $\min_{j\in[s]}\chi(\operatorname{KG}(\mathcal{H}_j))\leq 2$]Suppose first that $\min_{j\in[s]}\chi(\operatorname{KG}(\mathcal{H}_j))=1$ holds, i.e., at least one of the $\operatorname{KG}(\mathcal{H}_j)$ is a stable set, say $\operatorname{KG}(\mathcal{H}_1)$ without loss of generality. Since $\mathcal{H}_1$ is nice and $\chi(\operatorname{KG}(\mathcal{H}_1))=1$, we have $\operatorname{alt}(\mathcal{H}_1)=n_1-1$. Hence, according to the definition of $\operatorname{alt}(\mathcal{H}_1)$, there exists an $\boldsymbol{x}\in\{+,-,0\}^{n_1}$ with no edge in $\boldsymbol{x}^+$ and no edge in $\boldsymbol{x}^-$ and such that $\operatorname{alt}(\boldsymbol{x})=|\boldsymbol{x}|=n_1-1$. Moreover, this $\boldsymbol{x}$ is such that making the $0$ entry of $\boldsymbol{x}$ a $+$ creates an edge $e^+$ of $\mathcal{H}_1$ in $\boldsymbol{x}^+$, and similarly making this $0$ a $-$ creates an edge $e^-$ in $\boldsymbol{x}^-$. These two edges $e^+$ and $e^-$ are distinct because $\mathcal{H}_1$ has no singletons. Since $\operatorname{KG}(\mathcal{H}_1)\times\cdots\times\operatorname{KG}(\mathcal{H}_s)$ is in this case also a stable set, with at least two vertices whose existence is ensured by $e^+$ and $e^-$, we are done: there is a $K_{1,1}^*$.
Let us suppose now that $\min_{j\in[s]}\chi(\operatorname{KG}(\mathcal{H}_j))=2$. Again, assume without loss of generality that the minimum is attained for $j=1$. Since $\mathcal{H}_1$ is nice and $\chi(\operatorname{KG}(\mathcal{H}_1))=2$, we have $\operatorname{alt}(\mathcal{H}_1)=n_1-2$. There thus exists an $\boldsymbol{x}\in\{+,-,0\}^{n_1}$ with no edge in $\boldsymbol{x}^+$ and no edge in $\boldsymbol{x}^-$ and such that $\operatorname{alt}(\boldsymbol{x})=|\boldsymbol{x}|=n_1-2$. Similarly, as before, using the $0$ entries of $\boldsymbol{x}$, we can find in $\mathcal{H}_1$ four distinct edges $e^+, e^-, f^+, f^-$ such that on the one hand $e^+$ and $f^-$ are disjoint and on the other hand $e^-$ and $f^+$ are disjoint. $\operatorname{KG}(\mathcal{H}_1)$ being bipartite, $\operatorname{KG}(\mathcal{H}_1)\times\cdots\times\operatorname{KG}(\mathcal{H}_s)$ is also bipartite, with at least two disjoint edges. There is thus a $K_{2,2}^*$, and we are done since any $2$-coloring of $\operatorname{KG}(\mathcal{H}_1)\times\cdots\times\operatorname{KG}(\mathcal{H}_s)$ provides a colorful $K_{2,2}^*$. \end{proof}
\begin{proof}[Proof of Theorem~\ref{thm:main} when $\min_{j\in[s]}\chi(\operatorname{KG}(\mathcal{H}_j))\geq 3$]Define $t=\min_{j\in[s]}\chi(\operatorname{KG}(\mathcal{H}_j))$. Note that since the $\mathcal{H}_j$'s are nice, we have $t\leq n_j-\operatorname{alt}(\mathcal{H}_j)$ for every $j$. This will be used in the proof without further mention. Let $c$ be a proper coloring of $\operatorname{KG}(\mathcal{H}_1)\times\cdots\times\operatorname{KG}(\mathcal{H}_s)$ with $[t]$ as the color set, which exists because of the easy direction of Hedetniemi's conjecture. We suppose throughout the proof that $n-t$ is even. This can be done without loss of generality simply by adding a dummy vertex $n_j+1$ to any of the hypergraphs $\mathcal{H}_j$'s. Doing this, we do~not change the general Kneser graphs and thus $t$ remains the same, $n$ increases by exactly one, and the modified $\mathcal{H}_j$ remains nice.
We will define a map $\lambda:\{+,-,0\}^n\setminus \{\boldsymbol{0}\}\rightarrow\{\pm 1,\ldots,\pm n\}$ for which we will apply Chen's lemma (Lemma~\ref{lem:chen}). To do this, we need to introduce some notation. For $\boldsymbol{x}\in\{+,-,0\}^n\setminus \{\boldsymbol{0}\}$, we define $\boldsymbol{x}(1)\in\{+,-,0\}^{n_1}$ to be the first $n_1$ coordinates of $\boldsymbol{x}$, $\boldsymbol{x}(2)\in\{+,-,0\}^{n_2}$ to be the next $n_2$ coordinates of $\boldsymbol{x}$, and so on, up to $\boldsymbol{x}(s)\in\{+,-,0\}^{n_s}$ to be the last $n_s$ coordinates of $\boldsymbol{x}$. We define moreover $A_j(\boldsymbol{x})$ to be the set of signs $\varepsilon\in\{+,-\}$ such that $\boldsymbol{x}(j)^{\varepsilon}$ contains at least one edge of $\mathcal{H}_j$. To avoid any ambiguity, let us make this notation explicit: $\boldsymbol{x}(j)^{\varepsilon}$ is the set of $i\in[n_j]$ such that $x_{i+\sum_{j'=1}^{j-1}n_{j'}}=\varepsilon$.
We define $\lambda(\boldsymbol{x})$ by defining its sign $s(\boldsymbol{x})\in\{+,-\}$ and its absolute value $v(\boldsymbol{x})$. In other words, $\lambda(\boldsymbol{x}):=s(\boldsymbol{x})v(\boldsymbol{x})$. Two cases have to be distinguished. We proceed similarly as in \cite{HaMe16}. \\
\noindent {\bf First case:} $\bigcap_j A_j(\boldsymbol{x})=\varnothing$.
Set $$\begin{array}{rcl}
v(\boldsymbol{x}) & = & \displaystyle{\sum_{j:\;|A_j(\boldsymbol{x})|=0}\operatorname{alt}(\boldsymbol{x}(j))+\sum_{j:\;|A_j(\boldsymbol{x})|=2}|\boldsymbol{x}(j)|} \\ \\
& & +\displaystyle{\sum_{j:\;|A_j(\boldsymbol{x})|=1}\left(1+\max\{\operatorname{alt}(\widetilde{\boldsymbol{y}}):\;\widetilde{\boldsymbol{y}}\preceq\boldsymbol{x}(j)\mbox{ and }E(\mathcal{H}_j[\widetilde{\boldsymbol{y}}^+])=E(\mathcal{H}_j[\widetilde{\boldsymbol{y}}^-])=\varnothing\}\right)}, \end{array}$$ where $\mathcal{H}_j[\widetilde{\boldsymbol{y}}^+]$ (resp. $\mathcal{H}_j[\widetilde{\boldsymbol{y}}^-]$) denotes the restriction of $\mathcal{H}_j$ to $\widetilde{\boldsymbol{y}}^+$ (resp. $\widetilde{\boldsymbol{y}}^-$). In this formula, for $j$ with $A_j(\boldsymbol{x})$ of cardinality one, we are looking for a $\widetilde{\boldsymbol{y}}\preceq\boldsymbol{x}(j)$ with the longest alternating subsequence such that neither $\widetilde{\boldsymbol{y}}^+$ nor $\widetilde{\boldsymbol{y}}^-$ contains an edge of $\mathcal{H}_j$. The sign $s(\boldsymbol{x})$ is defined to be the first nonzero coordinate of $\boldsymbol{x}$ if none of the $A_j(\boldsymbol{x})$'s is of cardinality one, and to be the sign in the $A_j(\boldsymbol{x})$ of cardinality one with the smallest possible $j$, otherwise. \\
\noindent {\bf Second case:} $\bigcap_j A_j(\boldsymbol{x})\neq\varnothing$.
Define $$c^{\varepsilon}(\boldsymbol{x})=\max\{c(e_1,\ldots,e_s):\;e_j\subseteq\boldsymbol{x}(j)^{\varepsilon}\;\mbox{and}\;e_j\in E(\mathcal{H}_j)\;\mbox{for all $j\in[s]$}\}$$ where it takes the value $-\infty$ if there is no such $s$-tuple $(e_1,\ldots,e_s)$. Define moreover $c(\boldsymbol{x})=\max\left(c^+(\boldsymbol{x}),c^-(\boldsymbol{x})\right)$. Since $\bigcap_j A_j(\boldsymbol{x})\neq\varnothing$, at least one of $c^+(\boldsymbol{x})$ and $c^-(\boldsymbol{x})$ is finite. Now, set $$v(\boldsymbol{x})=n-t+c(\boldsymbol{x}).$$ The sign $s(\boldsymbol{x})$ is $+$ if $c^+(\boldsymbol{x})>c^-(\boldsymbol{x})$, and $-$ otherwise. Note that since $c$ is a proper coloring, we have $c^+(\boldsymbol{x})\neq c^-(\boldsymbol{x})$.
\begin{claim}\label{claim:vt} Let $\boldsymbol{x}\in\{+,-,0\}^n\setminus\{\boldsymbol{0}\}$. We have $$\bigcap_j A_j(\boldsymbol{x})=\varnothing\quad\Longleftrightarrow\quad v(\boldsymbol{x})\leq n-t.$$ Moreover, if $v(\boldsymbol{x})=n-t$, then there exists $j_0\in[s]$ such that \begin{itemize}
\item $A_{j_0}(\boldsymbol{x})=\varnothing$ and $\operatorname{alt}(\boldsymbol{x}(j_0))=|\boldsymbol{x}(j_0)|=\operatorname{alt}(\mathcal{H}_{j_0})=n_{j_0}-t$,
\item $|A_j(\boldsymbol{x})|=2$ and $|\boldsymbol{x}(j)|=n_j\quad\forall j\neq j_0$. \end{itemize} \end{claim}
\begin{proof-claim} Let $\boldsymbol{x}\in\{+,-,0\}^n\setminus\{\boldsymbol{0}\}$. Let us prove the equivalence. If $\bigcap_j A_j(\boldsymbol{x})\neq\varnothing$, then according to the definition of $\lambda$, we necessarily have $v(\boldsymbol{x})\geq n-t+1$. Now, suppose that $\bigcap_j A_j(\boldsymbol{x})=\varnothing$. At least one set $A_j(\boldsymbol{x})$ is not of cardinality two, otherwise $\bigcap_jA_j(\boldsymbol{x})$ would be nonempty. If there are at least two sets $A_j(\boldsymbol{x})$ not of cardinality two, say $A_{j_1}(\boldsymbol{x})$ and $A_{j_2}(\boldsymbol{x})$, then the maximum value we can get for $v(\boldsymbol{x})$ is at most $n+\sum_{\ell=1}^2(1+\operatorname{alt}(\mathcal{H}_{j_{\ell}})-n_{j_\ell})$, which is at most $n-2t+2$, which is itself at most $n-t-1$ since we suppose $t\geq 3$. If there is only one such set, say $A_{j_0}(\boldsymbol{x})$, it is necessarily empty (otherwise $\bigcap_jA_j(\boldsymbol{x})$ would be again nonempty), each $A_j(\boldsymbol{x})$ is of cardinality two except for $j=j_0$, and we have
$$v(\boldsymbol{x})=\operatorname{alt}(\boldsymbol{x}({j_0}))+\sum_{j\neq j_0}|\boldsymbol{x}(j)|\leq\operatorname{alt}(\mathcal{H}_{j_0})+\sum_{j\neq j_0}n_j\leq n-t,$$ which finishes the proof of the equivalence.
We have actually proved that if $v(\boldsymbol{x})=n-t$, then the sets $A_j(\boldsymbol{x})$ are all of cardinality two for $j\neq j_0$, the set $A_{j_0}(\boldsymbol{x})$ is empty, and
$$\operatorname{alt}(\boldsymbol{x}({j_0}))+\sum_{j\neq j_0}|\boldsymbol{x}(j)|=\operatorname{alt}(\mathcal{H}_{j_0})+\sum_{j\neq j_0}n_j=n-t.$$
Since $\operatorname{alt}(\boldsymbol{x}({j_0}))\leq\operatorname{alt}(\mathcal{H}_{j_0})$ and $|\boldsymbol{x}(j)|\leq n_j$ by definition, these inequalities are actually equalities. The hypergraph $\mathcal{H}_{j_0}$ being nice, the equality $\operatorname{alt}(\boldsymbol{x}({j_0}))=\operatorname{alt}(\mathcal{H}_{j_0})$ and the fact that $A_{j_0}(\boldsymbol{x})$ is empty imply that $|\boldsymbol{x}(j_0)|=\operatorname{alt}(\boldsymbol{x}({j_0}))$. We get thus the second part of the statement. \end{proof-claim}
The map $\lambda$ is defined on the elements of the poset $(\{+,-,0\}^n\setminus\{\boldsymbol{0}\},\preceq)$ and takes its values in $\{\pm 1,\ldots,\pm n\}$.
\begin{claim} $\lambda$ satisfies the condition of Lemma~\ref{lem:chen} with $\gamma=n-t$. \end{claim} \begin{proof-claim} The fact that $\lambda$ is antipodal is direct from the definition. Let us check that $\lambda$ is an order-preserving map $(\{+,-,0\}^n\setminus\{\boldsymbol{0}\},\preceq)\rightarrow Q_{n-1}$. To do this, take $\boldsymbol{x}$ and $\boldsymbol{y}$ such that $\boldsymbol{x}\preceq\boldsymbol{y}$. We have $\bigcap_j A_j(\boldsymbol{x})\subseteq\bigcap_j A_j(\boldsymbol{y})$. If $\bigcap_j A_j(\boldsymbol{x})$ and $\bigcap_j A_j(\boldsymbol{y})$ are both empty or both nonempty, it is clear from the definition that $v(\boldsymbol{x})\leq v(\boldsymbol{y})$. If $\bigcap_j A_j(\boldsymbol{x})$ is empty and $\bigcap_j A_j(\boldsymbol{y})$ is nonempty, then according to Claim~\ref{claim:vt}, we have $v(\boldsymbol{x})\leq n-t$ and $v(\boldsymbol{y})\geq n-t+1$. This shows that $v(\boldsymbol{x})\leq v(\boldsymbol{y})$ when $\boldsymbol{x}\preceq\boldsymbol{y}$.
To finish checking that $\lambda$ is order-preserving, we have to show that when $\boldsymbol{x}\preceq\boldsymbol{y}$, the quantity $\lambda(\boldsymbol{x})+\lambda(\boldsymbol{y})$ is not equal to $0$. Assume for a contradiction that there exist such $\boldsymbol{x}$ and $\boldsymbol{y}$ with $\boldsymbol{x}\preceq\boldsymbol{y}$ and $\lambda(\boldsymbol{x})+\lambda(\boldsymbol{y})=0$. Since $v$ is nondecreasing, we can assume that $\boldsymbol{y}$ differs from $\boldsymbol{x}$ only by one entry, i.e., making a $0$ entry of $\boldsymbol{x}$ a nonzero one leads to $\boldsymbol{y}$. Let us suppose that $\bigcap_j A_j(\boldsymbol{x})=\varnothing$. Since $v(\boldsymbol{x})=v(\boldsymbol{y})$, the entry that differs between $\boldsymbol{x}$ and $\boldsymbol{y}$ belongs to an $\boldsymbol{x}(j)$ with $|A_j(\boldsymbol{x})|\neq 2$. If $|A_j(\boldsymbol{x})|=0$, then $A_j(\boldsymbol{y})$ is still empty, and it is easy to check that the sign cannot change. If $|A_j(\boldsymbol{x})|=1$, then $A_j(\boldsymbol{y})$ is still of cardinality one, and the sign does not change either. We see thus that the assumption implies that we necessarily have $\bigcap_j A_j(\boldsymbol{x})\neq\varnothing$. But then again, the sign cannot change when we go from $\boldsymbol{x}$ to $\boldsymbol{y}$ since $c$ is a proper coloring.
Finally, let us check that there are no $\boldsymbol{x}\prec\boldsymbol{y}$ such that $|\lambda(\boldsymbol{x})|=|\lambda(\boldsymbol{y})|=\gamma$. Suppose for a contradiction that such $\boldsymbol{x}$ and $\boldsymbol{y}$ exist. Claim~\ref{claim:vt} applied on $\boldsymbol{x}$ and $\boldsymbol{y}$ shows that there is a unique $j_0$ such that $A_{j_0}(\boldsymbol{x})$ is empty and a unique $j_0'$ such that $A_{j_0'}(\boldsymbol{y})$ is empty. Since $\boldsymbol{x}\prec\boldsymbol{y}$, we have $j_0=j_0'$. Claim~\ref{claim:vt} again shows then that $|\boldsymbol{x}(j)|=|\boldsymbol{y}(j)|$ for all $j\neq j_0$. Hence $\boldsymbol{x}=\boldsymbol{y}$, and we get a contradiction.
\end{proof-claim}
We can thus apply Lemma~\ref{lem:chen}. There are two chains $$\boldsymbol{x}_{1}\preceq\cdots\preceq\boldsymbol{x}_{n}\quad\mbox{and}\quad\boldsymbol{y}_{1}\preceq\cdots\preceq\boldsymbol{y}_{n}$$ such that $$\lambda(\boldsymbol{x}_{i})=(-1)^ii\quad \mbox{for all $i$}\qquad\mbox{and }\qquad\lambda(\boldsymbol{y}_{i})=(-1)^ii \quad\mbox{for $i\neq\gamma$}$$ and such that $\boldsymbol{x}_{\gamma}=-\boldsymbol{y}_{\gamma}$, with $\gamma=n-t$. We use now $j_0$ as the element of $[s]$ whose existence is ensured by Claim~\ref{claim:vt} applied on $\boldsymbol{x}=\boldsymbol{x}_{\gamma}$.
\begin{claim}\label{claim:prog} On the one hand, we have $\boldsymbol{x}_{\gamma}(j)=\cdots=\boldsymbol{x}_n(j)$ for every $j\neq j_0$. On the other hand, $\boldsymbol{x}_{i+1}(j_0)$ is obtained from $\boldsymbol{x}_i(j_0)$ by replacing exactly one of its zero entries by a nonzero one, for $i=\gamma,\ldots,n-1$. The same assertion holds for the $\boldsymbol{y}_i$'s. \end{claim}
\begin{proof-claim}
According to Claim~\ref{claim:vt}, $\boldsymbol{x}_{\gamma}(j)$ has no zero entries for $j\neq j_0$. Because of the chain $\boldsymbol{x}_{\gamma}\preceq\cdots\preceq\boldsymbol{x}_n$, we get the first part of the statement. According to Claim~\ref{claim:vt} again, the number of zero entries of $\boldsymbol{x}_{\gamma}(j_0)$ is exactly $t$ since $|\boldsymbol{x}_{\gamma}(j_0)|=n_{j_0}-t$. Since $\boldsymbol{x}_{\gamma},\ldots,\boldsymbol{x}_n$ get distinct images by $\lambda$ and since $\boldsymbol{x}_{\gamma}(j),\ldots,\boldsymbol{x}_n(j)$ do not differ when $j\neq j_0$, all $\boldsymbol{x}_{\gamma}(j_0),\ldots,\boldsymbol{x}_n(j_0)$ are pairwise distinct. The second part of the statement follows from $n-\gamma=t$.
The assertion for the $\boldsymbol{y}_i$'s is proved similarly. \end{proof-claim}
We define $S$ to be $\boldsymbol{x}_{\gamma}(j_0)^+$ and $T$ to be $\boldsymbol{x}_{\gamma}(j_0)^-$. Note that $S$ and $T$ are disjoint and that neither of them contains an edge of $\mathcal{H}_{j_0}$.
\begin{claim}\label{claim:defab} There exist integers $a_{\gamma+1},\ldots,a_n,b_{\gamma+1},\ldots,b_n\in[n_{j_0}]\setminus(S\cup T)$ such that \begin{itemize} \item for odd $i\geq\gamma+1$: $$\boldsymbol{x}_i(j_0)^-=T\cup\{a_{\gamma+1},a_{\gamma+3},\ldots,a_i\}\quad\mbox{and}\quad\boldsymbol{y}_i(j_0)^-=S\cup\{b_{\gamma+1},b_{\gamma+3},\ldots,b_i\}$$ \item for even $i\geq\gamma+2$: $$\boldsymbol{x}_i(j_0)^+=S\cup\{a_{\gamma+2},a_{\gamma+4},\ldots,a_i\}\quad\mbox{and}\quad\boldsymbol{y}_i(j_0)^+=T\cup\{b_{\gamma+2},b_{\gamma+4},\ldots,b_i\}.$$ \end{itemize} Moreover, the $a_i$'s are pairwise distinct and so are the $b_i$'s. \end{claim} \begin{proof-claim} According to Claim~\ref{claim:prog}, $\boldsymbol{x}_{i-1}(j_0)$ and $\boldsymbol{x}_i(j_0)$ differ by exactly one entry for $i\in\{\gamma+1,\ldots,n\}$. Define $a_i$ to be the entry index of $\boldsymbol{x}_{i-1}(j_0)$ that becomes nonzero in $\boldsymbol{x}_i(j_0)$.
We have $s(\boldsymbol{x}_{\gamma+1})=-$ and $v(\boldsymbol{x}_{\gamma+1})=\gamma+1$. It implies that there is an edge of $\mathcal{H}_{j_0}$ in $\boldsymbol{x}_{\gamma+1}^-$ that gets the color $1$. Since $A_{j_0}(\boldsymbol{x}_{\gamma})=\varnothing$, this edge contains necessarily $a_{\gamma+1}$ and thus $\boldsymbol{x}_{\gamma+1}^-(j_0)=T\cup\{a_{\gamma+1}\}$.
We have $s(\boldsymbol{x}_{\gamma+2})=+$ and $v(\boldsymbol{x}_{\gamma+2})=\gamma+2$. It implies that there is an edge of $\mathcal{H}_{j_0}$ in $\boldsymbol{x}_{\gamma+2}^+$ that gets the color $2$. Since $A_{j_0}(\boldsymbol{x}_{\gamma+1})=\{-\}$ (this has just been proved), this edge contains necessarily $a_{\gamma+2}$ and thus $\boldsymbol{x}_{\gamma+2}^+(j_0)=S\cup\{a_{\gamma+2}\}$.
For odd $i\geq\gamma+3$, we have $s(\boldsymbol{x}_i)=-$ and $v(\boldsymbol{x}_i)=i$. It implies that there is an edge of $\mathcal{H}_{j_0}$ in $\boldsymbol{x}_i^-$ that gets the color $i-\gamma$. Since the map $v$ is obtained by taking the maximum possible color, we know that neither $\boldsymbol{x}_{i-1}^-$, nor $\boldsymbol{x}_{i-1}^+$ contain an edge with color $i-\gamma$. It implies that the $a_i$th entry of $\boldsymbol{x}_i(j_0)$ is a $-$. The case with even $i\geq\gamma+4$ is treated similarly.
The statement for the $\boldsymbol{y}_i(j_0)$'s is obtained with almost the same proof, once being noted that $\boldsymbol{y}_{\gamma}(j_0)^+=T$ and $\boldsymbol{y}_{\gamma}(j_0)^-=S$. \end{proof-claim}
For odd $i\geq\gamma+1$, there is an edge $e_{j_0}^i\in E(\mathcal{H}_{j_0})$ such that $e_{j_0}^i\subseteq T\cup\{a_i\}$. Indeed, let $\boldsymbol{z}$ be the sign vector of $\{+,-,0\}^{n_{j_0}}$ obtained from $\boldsymbol{x}_{\gamma}(j_0)$ by making its $a_i$th entry a $-$. We have then $\operatorname{alt}(\boldsymbol{z})\geq\operatorname{alt}(\boldsymbol{x}_{\gamma}(j_0))=\operatorname{alt}(\mathcal{H}_{j_0})$ and $|\boldsymbol{z}|>\operatorname{alt}(\mathcal{H}_{j_0})$. Since $\mathcal{H}_{j_0}$ is nice and since neither $S$, nor $T$ contains an edge of $\mathcal{H}_{j_0}$, we get the claimed existence of the edge. The same holds for $S\cup\{b_i\}$: there is an edge $f_{j_0}^i\in E(\mathcal{H}_{j_0})$ such that $f_{j_0}^i\subseteq S\cup\{b_i\}$. For even $i\geq\gamma+2$, we have similarly the existence of edges $e_{j_0}^i,f_{j_0}^i\in E(\mathcal{H}_{j_0})$ such that $e_{j_0}^i\subseteq T\cup\{b_i\}$ and $f_{j_0}^i\subseteq S\cup\{a_i\}$. Note that since $\mathcal{H}$ has no singleton, the edges $e_{j_0}^i$ and $f_{j_0}^i$ are distinct.
Because of Claim~\ref{claim:vt}, we know also that there exist edges $e_j,f_j\in E(\mathcal{H}_j)$ such that $e_j\subseteq\boldsymbol{x}_{\gamma}(j)^-$ and $f_j\subseteq\boldsymbol{x}_{\gamma}(j)^+$ for every $j\neq j_0$, since $|A_j(\boldsymbol{x}_{\gamma})|=2$.
We define now $2t$ vertices of $\operatorname{KG}(\mathcal{H}_1)\times\cdots\times\operatorname{KG}(\mathcal{H}_s)$. We will check that they induce a graph containing a colorful copy of $K_{t,t}^*$. For $i=\gamma+1,\ldots,n$, we define \begin{itemize} \item $\boldsymbol{e}_i$ to be the $s$-tuple whose $j$th entry is $e_j$, except the $j_0$th one, which is $e_{j_0}^i$. \item $\boldsymbol{f}_i$ to be the $s$-tuple whose $j$th entry is $f_j$, except the $j_0$th one, which is $f_{j_0}^i$. \end{itemize} They are vertices of $\operatorname{KG}(\mathcal{H}_1)\times\cdots\times\operatorname{KG}(\mathcal{H}_s)$. \begin{claim}\label{claim:aisb} For each $i\in\{\gamma+1,\ldots,n\}$, we have $a_i=b_i$ (defined according to Claim~\ref{claim:defab}) and $c(\boldsymbol{e}_i)=c(\boldsymbol{f}_i)=i-\gamma$. \end{claim} \begin{proof-claim}
We first prove the cases $i=\gamma+1$ and $i=\gamma+2$. Let us start with $i=\gamma+1$. By definition of $\boldsymbol{e}_{\gamma+1}$, we have $1\leq c(\boldsymbol{e}_{\gamma+1})\leq c(\boldsymbol{x}^-_{\gamma+1})$. Since $\lambda(\boldsymbol{x}_{\gamma+1})=-(\gamma+c(\boldsymbol{x}_{\gamma+1}))=-(\gamma+1)$, the equality $c(\boldsymbol{x}^-_{\gamma+1})=1$ holds, and hence $c(\boldsymbol{e}_{\gamma+1})=1$. Similarly, since $\lambda(\boldsymbol{y}_{\gamma+1})=-(\gamma+c(\boldsymbol{y}_{\gamma+1}))=-(\gamma+1)$, the equality $c(\boldsymbol{f}_{\gamma+1})=1$ holds. The fact that $\boldsymbol{e}_{\gamma+1}$ and $\boldsymbol{f}_{\gamma+1}$ get the same color implies moreover that they are not adjacent as the vertices of $\operatorname{KG}(\mathcal{H}_1)\times\cdots\times\operatorname{KG}(\mathcal{H}_s)$. Since $e_j$ and $f_j$ are disjoint for $j\neq j_0$, it implies that $e_{j_0}^{\gamma+1}\cap f_{j_0}^{\gamma+1}\neq\varnothing$, which implies that $a_{\gamma+1}=b_{\gamma+1}$, as $S$ and $T$ are disjoint.
Now, let us look at the case $i=\gamma+2$. By definition of $\boldsymbol{e}_{\gamma+2}$, we have $1\leq c(\boldsymbol{e}_{\gamma+2})\leq c(\boldsymbol{y}^+_{\gamma+2})$. Since $v(\boldsymbol{y}_{\gamma+2})=\gamma+2$, the inequality $c(\boldsymbol{y}^+_{\gamma+2})\leq 2$ holds, and hence $c(\boldsymbol{e}_{\gamma+2})\in\{1,2\}$. Similarly, $c(\boldsymbol{f}_{\gamma+2})\in\{1,2\}$. Since $e_j$ and $f_j$ are disjoint for $j\neq j_0$ and since $e_{j_0}^{\gamma+2}$ and $f_{j_0}^{\gamma+1}$ are disjoint, $\boldsymbol{e}_{\gamma+2}\boldsymbol{f}_{\gamma+1}$ is an edge of $\operatorname{KG}(\mathcal{H}_1)\times\cdots\times\operatorname{KG}(\mathcal{H}_s)$ and thus $c(\boldsymbol{e}_{\gamma+2})\neq 1$. Therefore $c(\boldsymbol{e}_{\gamma+2})=2$. Similarly, $c(\boldsymbol{f}_{\gamma+2})=2$. Now, we use the same technique as for $i=\gamma+1$ to prove that $a_{\gamma+2}=b_{\gamma+2}$: the fact that $\boldsymbol{e}_{\gamma+2}$ and $\boldsymbol{f}_{\gamma+2}$ get the same color implies that $e_{j_0}^{\gamma+2}$ and $f_{j_0}^{\gamma+2}$ get a nonempty intersection. This is possible only if $a_{\gamma+2}=b_{\gamma+2}$.
For the remaining values of $i$, we proceed by induction. Let $i$ be an odd integer such that $i\geq\gamma+3$. We have $1\leq c(\boldsymbol{e}_i)\leq c(\boldsymbol{x}^-_i)$. Since $v(\boldsymbol{x}_i)=i$, we get on the one hand $c(\boldsymbol{e}_i)\leq i-\gamma$. On the other hand, the induction implies that $\{a_{\gamma+1},\ldots,a_{i-1}\}=\{b_{\gamma+1},\ldots,b_{i-1}\}$. Thus $e_{j_0}^i$ and $f_{j_0}^{i'}$ are disjoint for $i'=\gamma+1,\ldots,i-1$. Since $e_j$ and $f_j$ are disjoint for $j\neq j_0$, we get that $\boldsymbol{e}_i$ and $\boldsymbol{f}_{i'}$ are adjacent as the vertices of $\operatorname{KG}(\mathcal{H}_1)\times\cdots\times\operatorname{KG}(\mathcal{H}_s)$ for $i'=\gamma+1,\ldots,i-1$. Then the induction implies that $c(\boldsymbol{e}_i)\geq i-\gamma$. Therefore, $c(\boldsymbol{e}_i)=i-\gamma$. Similarly, we prove $c(\boldsymbol{f}_i)=i-\gamma$. Now, we use the same technique again to prove $a_i=b_i$: the fact that $\boldsymbol{e}_i$ and $\boldsymbol{f}_i$ get the same color implies that $e_{j_0}^i$ and $f_{j_0}^i$ get a nonempty intersection. This is possible only if $a_i=b_i$. The case when $i$ is an even integer such that $i\geq \gamma+4$ is dealt with similarly. \end{proof-claim}
The vertices $\boldsymbol{e}_i$ and $\boldsymbol{f}_{i'}$ of $\operatorname{KG}(\mathcal{H}_1)\times\cdots\times\operatorname{KG}(\mathcal{H}_s)$ are adjacent when $i\neq i'$. Indeed, $e_j$ and $f_j$ are obviously disjoint for every $j\neq j_0$, and so are $e_{j_0}^i$ and $f_{j_0}^{i'}$, as we explain now. For odd $i$, we have $e_{j_0}^i\subseteq S\cup\{a_i\}$ while $f_{j_0}^{i'}\subseteq T\cup\{b_{i'}\}$, with $b_{i'}=a_{i'}\neq a_i$ (this is a consequence of Claim~\ref{claim:aisb}), and we have $e_{j_0}^i$ and $f_{j_0}^{i'}$ disjoint. For even $i$, we have $e_{j_0}^i\subseteq S\cup\{b_i\}$ while $f_{j_0}^{i'}\subseteq T\cup\{a_{i'}\}$, with $a_{i'}=b_{i'}\neq b_i$ (this is again a consequence of Claim~\ref{claim:aisb}), and we have $e_{j_0}^i$ and $f_{j_0}^{i'}$ disjoint, again. Moreover since $\boldsymbol{e}_i$ and $\boldsymbol{f}_i$ are distinct for every $i$, they induce a subgraph containing a $K_{t,t}^*$ in $\operatorname{KG}(\mathcal{H}_1)\times\cdots\times\operatorname{KG}(\mathcal{H}_s)$. The statement about the colors of this $K_{t,t}^*$ is also a consequence of Claim~\ref{claim:aisb}. \end{proof}
\subsection*{Remark} Theorem~\ref{thm:main} remains true under the weaker condition that $n_j-\operatorname{alt}(\mathcal{H}_j) \geq t$ for all $j$ and that the $\mathcal{H}_j$'s for which the equality holds are nice. Indeed, when $t\geq 3$, we use the fact that the $\mathcal{H}_j$'s are nice only twice: in the proof of Claim~\ref{claim:vt} and right after the proof of Claim~\ref{claim:defab}, and in both places $j=j_0$ (which is such that $\chi(\operatorname{KG}(\mathcal{H}_{j_0}))=t$). When $t\leq 2$, we also use the fact that the $\mathcal{H}_j$'s are nice only for those $j$ such that $\chi(\operatorname{KG}(\mathcal{H}_j))=t$.
\section{Hypergraphs $\mathcal{H}$ such that $\chi(\operatorname{KG}(\mathcal{H}))=\operatorname{cd}_2(\mathcal{H})$}\label{sec:cd}
Motivated by Corollary~\ref{cor:circ}, we try to better understand the hypergraphs $\mathcal{H}$ such that $\chi(\operatorname{KG}(\mathcal{H}))=\operatorname{cd}_2(\mathcal{H})$. We find this question interesting in its own right.
\subsection{A construction}
Let $n\geq 2k-1$ and $m\geq 1$. We define the set system $$F_{n,m,k}={[n] \choose k}\cup\big\{\{i,j\}:\;i\in[n],\;j\in[n+m]\setminus[n]\big\}\cup{[n+m]\setminus[n] \choose k}.$$
\begin{proposition} The hypergraph $\mathcal{H}_{n,m,k}$ with vertex set $[n+m]$ and edge set $F_{n,m,k}$ is such that $\chi(\operatorname{KG}(\mathcal{H}_{n,m,k}))=\operatorname{cd}_2(\mathcal{H}_{n,m,k})=n+m-2k+2$. \end{proposition}
\begin{proof} We color $\operatorname{KG}(\mathcal{H}_{n,m,k})$ as follows. Let $A$ be one of its vertices. If $A\subseteq[n]$, we give to $A$ the color $\min(\min A, n-2k+2)$ (usual coloring of Kneser graphs). Otherwise, we give to $A$ the color $\min(A\cap[n+m]\setminus[n])$. This is clearly a proper coloring with $n+m-2k+2$ colors.
To obtain a 2-colorable hypergraph from $\mathcal{H}_{n,m,k}$, there are three possibilities, which we study in turn. Either we delete $[n+m]\setminus[n]$ completely, or we delete $[n]$ completely, or we delete none of them completely.
If we delete $[n+m]\setminus[n]$ completely, we still have to delete at least $n-2k+2$ elements of $[n]$. If we delete $[n]$ completely and $m\geq 2k-1$, then we still have to delete $m-2k+2$ vertices on the other part. If we delete $[n]$ completely and $m< 2k-1$, then $n\geq n+m-2k+2$, and so we again delete at least $n+m-2k+2$ points. Now assume that we do not delete either part completely. Then we have to delete all but $k-1$ vertices on both parts, which again gives $n+m-2k+2$ vertices for deletion or more if $m<k-1$. This proves $\operatorname{cd}_2(\mathcal{H}_{n,m,k})\geq n+m-2k+2$.
Dol'nikov's theorem, i.e., Equation~\eqref{eq:dolnikov}, allows us to conclude. \end{proof}
\subsection{When $\mathcal{H}$ is a graph}
We provide in this section a necessary condition for a graph $G$ to be such that $\chi(\operatorname{KG}(G))=\operatorname{cd}_2(G)$, via an elementary proof of Dol'nikov's theorem (Equation~\eqref{eq:dolnikov}) in this case.
Let $G$ be a graph. A proper coloring of $\operatorname{KG}(G)$ is a partition of the edges of $G$ into triangles $T_i$ and stars $S_j$. The quantity $\operatorname{cd}_2(G)$ is the minimal number of vertices to remove from $G$ so that the remaining vertices induce a bipartite graph.
\begin{proposition} Let $G$ be a graph such that $\chi(\operatorname{KG}(G))=\operatorname{cd}_2(G)$. Then there is an optimal proper coloring of $\operatorname{KG}(G)$ having at least one triangle and whose triangles $T_i$ are pairwise vertex disjoint. \end{proposition}
\begin{proof} Take an optimal coloring with a minimum number of triangles. We claim that in such a coloring, every circuit is either a $T_i$, or contains at least one edge belonging to a star color class $S_j$. (Here again, by circuit, we mean a connected graph whose vertices are all of degree $2$.)
Indeed, suppose not. Take the shortest circuit $C$ contradicting this claim. Each edge of $C$ belongs to a $T_i$, and each $T_i$ has at most one edge on $C$ (this second statement is because of the minimality of $C$). We change the coloring as follows. We remove all triangles met by $C$ from the coloring, and for each vertex $v$ of $C$, put the star whose center is $v$. It provides a new coloring, with the same number of colors: the number of triangles that have been removed is equal to the number of vertices of $C$, which is equal to the number of new stars. This contradicts the minimality assumption on the number of triangles.
Now, take an optimal coloring with a minimum number of triangles.
Suppose first for a contradiction that there are no triangles. Then removing the centers of all stars but one, and all edges incident to them, leads to a graph with only one star, which is a bipartite graph. The number of vertices that have been removed is the number of colors minus $1$, although we have obtained a bipartite graph. This is in contradiction with $\chi(\operatorname{KG}(G))=\operatorname{cd}_2(G)$.
Suppose now, still for a contradiction, that two triangles have a vertex in common. Remove the centers of the stars $S_j$, and all edges incident to them. Remove also the common vertex to the two triangles, as well as the edges incident to it. For the remaining triangles, pick an arbitrary vertex from each of them and remove also all incident edges to these vertices. The remaining graph is a forest (and is in particular bipartite): every circuit not being a $T_i$ has lost at least one edge when the star centers have been removed, and every triangle $T_i$ has lost at least two edges. The number of vertices that have been removed is the number of colors minus $1$, although we have obtained a bipartite graph. This is again in contradiction with $\chi(\operatorname{KG}(G))=\operatorname{cd}_2(G)$. \end{proof}
The starting claim in the proof, namely that in an optimal proper coloring with a minimum number of triangles, every circuit is either a triangle, or contains an edge from a star, provides an elementary proof of Dol'nikov's theorem when $\mathcal{H}$ is a graph $G$: removing one vertex per triangle and the center of each star leads necessarily to a forest, and the number of removed vertices is at most $\chi(\operatorname{KG}(G))$. It also shows that if $G$ is such that $\chi(\operatorname{KG}(G))=\operatorname{cd}_2(G)$, then we never have to remove more vertices to get a matching than what we have to remove to get a bipartite graph.
\subsection{Complexity questions}
As we already mentioned, we do not know the complexity status of deciding whether $\chi(\operatorname{KG}(\mathcal{H}))=\operatorname{cd}_2(\mathcal{H})$, even if $\mathcal{H}$ is a graph. We nevertheless state and prove two related results. The proof of the first proposition is sketched in \cite{Me14}.
\begin{proposition} Computing $\operatorname{cd}_2(\mathcal{H})$ is $\operatorname{NP}$-hard, even when $\mathcal{H}$ is a graph. \end{proposition}
\begin{proof}
Let $G=(V,E)$ be a non-bipartite graph. Let $H$ be the graph obtained by taking the join of two disjoint copies of $G$. Now consider an induced bipartite subgraph $H'$ of $H$. If $H'$ has vertices from both copies of $G$, then a vertex cover of $G$ has been removed from each of these copies. Otherwise, all vertices of one copy and at least $\operatorname{cd}_2(G)$ vertices of the other copy have been removed. Thus, we have $\operatorname{cd}_2(H)=\min(2(|V|-\alpha(G)),|V|+\operatorname{cd}_2(G))$, where $\alpha(G)$ is the independence number of $G$ (recall that $|V|-\alpha(G)$ is the vertex cover number of $G$).
We have \begin{equation}\label{eq:cd}
|V|\leq\operatorname{cd}_2(G)+2\alpha(G). \end{equation} Indeed, let $X$ be a subset of vertices removed from $V$ such that $G[V\setminus X]$ is bipartite. Each part of $G[V\setminus X]$ is an independent set of $G$. Thus
$|V|=|X|+|V\setminus X|\leq |X|+2\alpha(G)$ and we get inequality~\eqref{eq:cd}. This inequality implies then that $2(|V|-\alpha(G))\leq|V|+\operatorname{cd}_2(G)$ and consequently
$\operatorname{cd}_2(H) = 2(|V|-\alpha(G))$. Since computing $\alpha(G)$ for a nonbipartite graph is $\operatorname{NP}$-hard, we get the result. \end{proof}
\begin{proposition}\label{prop:chi} Computing $\chi(\operatorname{KG}(\mathcal{H}))$ is $\operatorname{NP}$-hard, even when $\mathcal{H}$ is a graph. \end{proposition}
\begin{proof} Let $G$ be a triangle-free graph. A coloring of $\operatorname{KG}(G)$ is a partition of the edges of $G$ into stars. Thus $\chi(\operatorname{KG}(G))$ is the vertex cover number of $G$, which is $\operatorname{NP}$-hard to compute, even when $G$ is triangle-free~\cite{Po74}. \end{proof}
\subsection*{Acknowledgments} We are grateful to G\'abor Simonyi for interesting discussions that were especially beneficial to Section~\ref{sec:cd}. We also thank the reviewers for their thorough reading of our work and for all their remarks. Part of this work was done when the first author was visiting the Universit\'e Paris Est. He would like to acknowledge professor Fr\'ed\'eric Meunier for his generous support and hospitality. Also, the research of Hossein Hajiabolhassan was in part supported by a grant from IPM (No. 94050128).
\end{document} | arXiv |
# The concept of entropy and its relation to complexity
Entropy is a fundamental concept in information theory that is closely related to complexity. It measures the amount of uncertainty or randomness in a system. The higher the entropy, the more complex the system is considered to be.
In information theory, entropy is often defined as the average amount of information needed to describe an event or a random variable. It can be thought of as a measure of the "surprise" or "unpredictability" of an event. For example, if a coin is fair and unbiased, the entropy of the outcome of a single toss is 1 bit, because there are two equally likely outcomes (heads or tails) and it takes one bit of information to specify which outcome occurred.
Entropy is closely related to the concept of probability. The more probable an event is, the less information is needed to describe it, and therefore the lower its entropy. Conversely, if an event is highly improbable, it requires more information to describe it, and therefore has higher entropy.
Entropy can also be used to measure the complexity of a system. In general, the more complex a system is, the higher its entropy. This is because complex systems tend to have more possible states or configurations, and therefore require more information to describe them.
Entropy is often used in fields such as physics, computer science, and biology to quantify the complexity of systems and to study their behavior. It provides a mathematical framework for understanding and analyzing complex systems, and has applications in areas such as data compression, cryptography, and network analysis.
In the following sections, we will explore different aspects of entropy and its role in measuring complexity. We will discuss Shannon's entropy, algorithmic complexity, Kolmogorov complexity, and their applications in various fields. We will also examine the relationship between entropy and algorithmic complexity, and explore how entropy can be used to measure complexity in real-world systems. Finally, we will discuss the challenges and limitations of using information theory to measure complexity, and explore future directions and advancements in the field.
# Shannon's entropy and its role in measuring information
Shannon's entropy is a measure of the average amount of information needed to describe an event or a random variable. It was introduced by Claude Shannon in his landmark paper "A Mathematical Theory of Communication" in 1948.
Shannon's entropy is defined as the sum, over all possible outcomes of an event, of each outcome's probability multiplied by the logarithm of the reciprocal of that probability. Mathematically, it can be expressed as:
$$H(X) = -\sum P(x) \log P(x)$$
where $H(X)$ is the entropy of the random variable $X$, $P(x)$ is the probability of outcome $x$, and the sum is taken over all possible outcomes.
Intuitively, Shannon's entropy measures the average amount of "surprise" or "uncertainty" associated with the outcomes of a random variable. If all outcomes are equally likely, the entropy is maximized and the system is considered to be in a state of maximum uncertainty. On the other hand, if one outcome is much more likely than others, the entropy is minimized and the system is considered to be in a state of low uncertainty.
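To make the formula concrete, here is a minimal Python sketch; the function name and the example distributions are our own, chosen purely for illustration:

```python
import math

def shannon_entropy(probs, base=2):
    """Shannon entropy H(X) = -sum p * log(p), in bits by default.

    Outcomes with probability 0 contribute nothing, since p*log(p) -> 0.
    """
    return -sum(p * math.log(p, base) for p in probs if p > 0)

# A fair coin: two equally likely outcomes, maximum uncertainty.
print(shannon_entropy([0.5, 0.5]))  # 1.0

# A heavily biased coin is less "surprising", so its entropy is lower.
print(shannon_entropy([0.9, 0.1]))  # ~0.469
```

Note that the fair coin attains the maximum entropy for two outcomes, matching the earlier observation that a system with equally likely outcomes is in a state of maximum uncertainty.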
Shannon's entropy has many applications in information theory and related fields. It is used to quantify the amount of information in a message or a data stream, and to measure the efficiency of data compression algorithms. It is also used in cryptography to quantify the strength of encryption schemes, and in machine learning to measure the purity of a classification model.
# Algorithmic complexity and its applications in computer science
Algorithmic complexity, also known as computational complexity, is a measure of the resources required to solve a computational problem. It quantifies the amount of time, space, or other resources needed to execute an algorithm as a function of the size of the input.
There are different ways to measure algorithmic complexity, but one commonly used measure is the time complexity, which estimates the number of elementary operations performed by an algorithm as a function of the input size. The time complexity is usually expressed using big O notation, which provides an upper bound on the growth rate of the algorithm's running time.
For example, consider the problem of sorting a list of numbers. One algorithm that can be used to solve this problem is bubble sort, which compares adjacent elements and swaps them if they are in the wrong order. The time complexity of bubble sort is O(n^2), where n is the number of elements in the list. This means that the running time of bubble sort grows quadratically with the input size.
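The bubble sort algorithm described above can be written in a few lines; a minimal Python sketch (the early-exit flag is a common refinement, not part of the basic algorithm):

```python
def bubble_sort(items):
    """Sort a list in place by repeatedly swapping adjacent out-of-order
    pairs. Worst-case time complexity: O(n^2) comparisons."""
    n = len(items)
    for i in range(n):
        swapped = False
        # After pass i, the last i elements are already in their final position.
        for j in range(n - 1 - i):
            if items[j] > items[j + 1]:
                items[j], items[j + 1] = items[j + 1], items[j]
                swapped = True
        if not swapped:  # no swaps on a full pass: the list is already sorted
            break
    return items

print(bubble_sort([5, 1, 4, 2, 8]))  # [1, 2, 4, 5, 8]
```

The nested loops make the quadratic growth visible: each of the $n$ passes scans up to $n-1$ adjacent pairs.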
Algorithmic complexity is an important concept in computer science because it helps us understand the efficiency and scalability of algorithms. By analyzing the complexity of different algorithms, we can choose the most efficient one for a given problem and optimize the performance of our programs.
In addition to time complexity, there are other measures of algorithmic complexity, such as space complexity, which estimates the amount of memory required by an algorithm, and communication complexity, which quantifies the amount of communication needed between different parts of a distributed algorithm.
# Kolmogorov complexity and its role in measuring the complexity of data
Kolmogorov complexity, also known as algorithmic complexity or descriptive complexity, is a measure of the amount of information needed to describe an object or a piece of data. It quantifies the complexity of data by considering the length of the shortest possible program that can generate the data.
The idea behind Kolmogorov complexity is that the more complex an object is, the more information is needed to describe it. For example, a simple object like a single letter or a short string of numbers can be described with a few bits of information. On the other hand, a complex object like a detailed image or a long sequence of random numbers requires much more information to describe.
The Kolmogorov complexity of an object is defined as the length of the shortest program (in a given programming language) that can produce the object as output. This program must be able to generate the object from scratch, without any additional input or assumptions.
It is important to note that Kolmogorov complexity is a theoretical concept and cannot be computed in practice: no algorithm can determine, for every object, the length of the shortest program that produces it. Even an exhaustive search over programs fails, because deciding whether a candidate program ever halts and outputs the object runs into the halting problem.
Despite this limitation, the concept of Kolmogorov complexity has important applications in various fields, including computer science, information theory, and artificial intelligence. It provides a formal framework for measuring the complexity of data and can be used to analyze the compressibility of data, study the limits of computation, and explore the nature of randomness.
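Although the shortest program cannot be found in general, the underlying intuition is easy to demonstrate. In the minimal Python sketch below, a Python expression serves as an informal stand-in for a "program" in a fixed description language:

```python
data = "ab" * 5000          # a 10,000-character string

# A tiny program (here, a Python expression) that regenerates the whole
# string is a far shorter *description* of it than the string itself.
program = "'ab' * 5000"
assert eval(program) == data

print(len(data))     # 10000
print(len(program))  # 11
```

A typical random string of the same length admits no comparably short description: as far as we know, its shortest program is not much shorter than the string itself. In this informal sense, the repetitive string has low Kolmogorov complexity and the random one has high Kolmogorov complexity.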
# The relationship between entropy and algorithmic complexity
Entropy and algorithmic complexity are two related concepts in information theory that provide different perspectives on the complexity of data.
Entropy, as we discussed earlier, measures the average amount of information needed to describe the possible outcomes of a random variable. It quantifies the uncertainty or randomness of a system. In the context of data, entropy measures the amount of information needed to describe the distribution of symbols or patterns in the data.
On the other hand, algorithmic complexity, also known as Kolmogorov complexity, measures the amount of information needed to describe a specific object or piece of data. It quantifies the complexity of an object by considering the length of the shortest possible program that can generate the object.
While entropy and algorithmic complexity are distinct concepts, they are connected in several ways. One way to understand this relationship is to consider the compressibility of data.
If a piece of data has low entropy, it means that the distribution of symbols or patterns in the data is highly predictable. In other words, there is a lot of redundancy in the data, and it can be compressed efficiently. This implies that the algorithmic complexity of the data is relatively low, as a short program can generate the data.
On the other hand, if a piece of data has high entropy, it means that the distribution of symbols or patterns in the data is highly unpredictable. In this case, there is little redundancy in the data, and it cannot be compressed efficiently. This implies that the algorithmic complexity of the data is relatively high, as a longer program is needed to generate the data.
In summary, while entropy measures the average amount of information needed to describe the distribution of symbols in data, algorithmic complexity measures the amount of information needed to describe a specific object or piece of data. The relationship between entropy and algorithmic complexity provides insights into the complexity and compressibility of data.
# Measuring complexity in real-world systems
Information theory provides a framework for measuring complexity in real-world systems. By applying concepts such as entropy and algorithmic complexity, we can gain insights into the complexity of various systems and phenomena.
One way to measure complexity is by analyzing the patterns and structures present in a system. For example, in a biological system, we can examine the DNA sequences or protein structures to understand the complexity of the system. By quantifying the amount of information needed to describe these patterns, we can estimate the complexity of the biological system.
Another approach is to analyze the behavior of a system over time. For example, in a financial market, we can study the fluctuations in stock prices or trading volumes to assess the complexity of the market. By analyzing the information content of these fluctuations, we can gain insights into the complexity of the market dynamics.
In addition to analyzing patterns and behaviors, information theory can also be used to study the interactions and relationships within a system. For example, in a social network, we can analyze the connections between individuals and the flow of information to measure the complexity of the network. By quantifying the amount of information needed to describe these interactions, we can assess the complexity of the social system.
Overall, measuring complexity in real-world systems involves analyzing patterns, behaviors, and interactions using information theory concepts. By applying these concepts, we can gain a deeper understanding of the complexity of various systems and phenomena.
For example, let's consider the complexity of a transportation system. We can analyze the patterns of traffic flow, the distribution of transportation modes, and the interactions between different components of the system. By quantifying the amount of information needed to describe these patterns and interactions, we can estimate the complexity of the transportation system.
## Exercise
Consider a biological system, such as a cell. How would you measure its complexity using information theory concepts?
### Solution
To measure the complexity of a biological system like a cell, we can analyze the patterns of DNA sequences, protein structures, and metabolic pathways. By quantifying the amount of information needed to describe these patterns and interactions, we can estimate the complexity of the cell. Additionally, we can study the behavior of the cell over time, such as gene expression patterns or cell signaling pathways, to gain further insights into its complexity.
# Applications of information theory in fields such as biology and physics
Information theory has found numerous applications in various fields, including biology and physics. By applying information theory concepts, researchers have been able to gain insights into complex phenomena and systems in these fields.
In biology, information theory has been used to study genetic sequences and understand the complexity of DNA. By analyzing the information content of DNA sequences, researchers can gain insights into the function and evolution of genes. Information theory has also been applied to study protein structures and interactions, helping researchers understand the complexity of protein folding and protein-protein interactions.
In physics, information theory has been used to study the behavior of complex systems such as fluids and gases. By analyzing the information content of the system's dynamics, researchers can gain insights into the complexity of the system's behavior. Information theory has also been applied to study quantum systems, helping researchers understand the complexity of quantum entanglement and quantum information processing.
Overall, information theory has provided valuable tools for studying complex phenomena in biology and physics. By applying information theory concepts, researchers have been able to gain a deeper understanding of the complexity of these systems and phenomena.
For example, in biology, information theory has been used to study the complexity of gene regulatory networks. By analyzing the information content of gene expression patterns, researchers can gain insights into the complexity of gene regulation and the interactions between genes.
## Exercise
Consider a physics system, such as a fluid flow. How would you apply information theory concepts to study its complexity?
### Solution
To study the complexity of a physics system like a fluid flow, we can analyze the information content of the system's dynamics, such as the patterns of fluid motion and the interactions between fluid particles. By quantifying the amount of information needed to describe these patterns and interactions, we can estimate the complexity of the fluid flow. Additionally, we can study the behavior of the fluid flow over time, such as the formation of vortices or turbulence, to gain further insights into its complexity.
# Quantifying complexity in networks and graphs
Information theory has also been used to quantify complexity in networks and graphs. By analyzing the structure and connectivity of networks, researchers can gain insights into the complexity of the network.
One way to quantify complexity in networks is by analyzing the information content of the network's topology. This involves studying the patterns of connections between nodes and quantifying the amount of information needed to describe these connections. By doing so, researchers can estimate the complexity of the network's structure and connectivity.
Another approach is to analyze the dynamics of information flow in networks. This involves studying how information propagates and spreads through the network, and quantifying the amount of information needed to describe these dynamics. By doing so, researchers can gain insights into the complexity of the network's information processing capabilities.
In addition to analyzing structure and dynamics, information theory can also be used to study the resilience and robustness of networks. This involves analyzing how the network responds to perturbations and disruptions, and quantifying the amount of information needed to describe these responses. By doing so, researchers can assess the complexity of the network's resilience and adaptability.
Overall, quantifying complexity in networks and graphs involves analyzing the structure, dynamics, and resilience of the network using information theory concepts. By applying these concepts, researchers can gain a deeper understanding of the complexity of various networks and graphs.
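One simple structural measure along these lines is the Shannon entropy of a network's degree distribution: a network in which every node has the same degree needs almost no information to describe its degree pattern, while a heterogeneous network needs more. A minimal Python sketch (the tiny edge lists, and the choice of degree entropy as the complexity proxy, are illustrative assumptions rather than a standard fixed definition):

```python
import math
from collections import Counter

def degree_entropy(edges):
    """Shannon entropy (in bits) of the degree distribution of an
    undirected graph given as a list of (u, v) edges."""
    degree = Counter()
    for u, v in edges:
        degree[u] += 1
        degree[v] += 1
    # How many nodes have each degree value
    histogram = Counter(degree.values())
    total = sum(histogram.values())
    return -sum((c / total) * math.log2(c / total)
                for c in histogram.values())

# A ring (every node has degree 2) vs. a star (one hub, four leaves)
ring = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]
star = [(0, 1), (0, 2), (0, 3), (0, 4)]
```

The ring's degree distribution is perfectly uniform, so its entropy is zero; the star's mix of hub and leaf degrees gives positive entropy, reflecting its more heterogeneous structure.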
For example, let's consider a social network such as Facebook. We can analyze the structure of the network, including the patterns of connections between individuals and groups. By quantifying the amount of information needed to describe these connections, we can estimate the complexity of the social network. We can also analyze the dynamics of information flow in the network, such as the spread of news or viral content. By quantifying the amount of information needed to describe these dynamics, we can gain insights into the complexity of the network's information processing capabilities.
## Exercise
Consider a network or graph that you are familiar with, such as a transportation network or a computer network. How would you quantify its complexity using information theory concepts?
### Solution
To quantify the complexity of a network or graph, such as a transportation network or a computer network, we can analyze the structure and connectivity of the network. This involves studying the patterns of connections between nodes and quantifying the amount of information needed to describe these connections. Additionally, we can analyze the dynamics of information flow in the network, such as the flow of traffic or data. By quantifying the amount of information needed to describe these dynamics, we can gain insights into the complexity of the network or graph.
# The impact of information theory on cryptography and data compression
Information theory has had a significant impact on the fields of cryptography and data compression. By understanding the principles of information theory, researchers and practitioners have been able to develop more secure encryption algorithms and more efficient data compression techniques.
Cryptography is the practice of securing communication by converting information into a form that is unintelligible to unauthorized parties. Information theory provides a mathematical framework for analyzing the security of cryptographic systems. It allows researchers to quantify the amount of information that an attacker would need to break the encryption and recover the original message. By understanding the principles of information theory, researchers have been able to develop encryption algorithms that are resistant to attacks and provide a high level of security.
Data compression, on the other hand, is the practice of reducing the size of data files without losing any information. Information theory provides a theoretical limit on the amount of compression that can be achieved for a given dataset. This limit, known as the entropy of the dataset, represents the minimum number of bits needed to represent the data. By understanding the principles of information theory, researchers have been able to develop compression algorithms that approach this theoretical limit and achieve high levels of compression.
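To make this limit concrete, the following Python sketch (the sample messages are illustrative) computes the empirical entropy of a message in bits per symbol, which by Shannon's source coding theorem lower-bounds the average codeword length of any lossless symbol-by-symbol code:

```python
import math
from collections import Counter

def entropy_bits_per_symbol(message):
    """Empirical Shannon entropy H = -sum(p * log2(p)) of a message."""
    counts = Counter(message)
    n = len(message)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())
```

A constant message such as "aaaa" has zero entropy (it is perfectly compressible), a balanced two-symbol message needs one bit per symbol, and four equally likely symbols need two bits per symbol.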
For example, in cryptography, the principles of information theory have guided the design of encryption algorithms such as the Advanced Encryption Standard (AES). AES is a widely used block cipher that provides a high level of security. Its design draws on Shannon's information-theoretic principles of confusion and diffusion, which describe how a cipher should obscure the statistical relationship between the key, the plaintext, and the ciphertext.
In data compression, the principles of information theory have been used to develop compression algorithms such as the Huffman coding algorithm. Huffman coding is a lossless compression algorithm that achieves compression by assigning shorter codes to more frequently occurring symbols. The algorithm is grounded in information theory: Shannon's source coding theorem shows that no lossless code can use fewer bits per symbol, on average, than the entropy of the source, and Huffman codes are provably optimal among symbol-by-symbol prefix codes.
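The algorithm itself is short. The following Python sketch builds a Huffman code with a min-heap (the sample message is illustrative); note how the most frequent symbol ends up with the shortest codeword:

```python
import heapq
from collections import Counter

def huffman_code(message):
    """Return {symbol: bitstring} built by repeatedly merging the two
    least frequent subtrees (assumes at least two distinct symbols)."""
    freq = Counter(message)
    # Heap entries: (frequency, tie-breaker, {symbol: partial codeword})
    heap = [(f, i, {s: ""}) for i, (s, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    tie = len(heap)
    while len(heap) > 1:
        f1, _, left = heapq.heappop(heap)
        f2, _, right = heapq.heappop(heap)
        merged = {s: "0" + w for s, w in left.items()}
        merged.update({s: "1" + w for s, w in right.items()})
        heapq.heappush(heap, (f1 + f2, tie, merged))
        tie += 1
    return heap[0][2]

code = huffman_code("aaabbc")
```

For "aaabbc" the frequent symbol "a" gets a one-bit codeword while the rarer "b" and "c" get two bits each, so the encoded message takes 3·1 + 2·2 + 1·2 = 9 bits instead of the 12 bits a fixed two-bit code would need.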
## Exercise
Consider a scenario where you need to securely transmit a sensitive document over the internet. How would you use the principles of information theory to ensure the security of the document?
### Solution
To ensure the security of the document, I would use encryption algorithms based on the principles of information theory. First, I would encrypt the document using a strong encryption algorithm such as AES. This would convert the document into a form that is unintelligible to unauthorized parties. Then, I would transmit the encrypted document over the internet. To further enhance security, I would also use secure communication protocols such as Transport Layer Security (TLS) to protect the transmission of the encrypted document.
# Challenges and limitations of using information theory to measure complexity
While information theory provides a valuable framework for measuring complexity, there are several challenges and limitations that need to be considered. These challenges arise from the inherent complexity of real-world systems and the limitations of information theory itself.
One challenge is that information theory assumes a certain level of independence and randomness in the data. However, many real-world systems exhibit complex dependencies and patterns that cannot be easily captured by information theory. For example, biological systems often have intricate interactions between different components, making it difficult to accurately measure their complexity using information theory alone.
Another challenge is that information theory measures complexity based on the amount of information needed to describe a system. However, this measure may not always align with our intuitive understanding of complexity. For example, a simple algorithm that produces a complex output may be considered more complex than a complex algorithm that produces a simple output. Information theory does not capture this distinction.
Furthermore, information theory assumes that all information is equally important and relevant. In reality, different types of information may have different levels of importance and relevance. For example, in a biological system, certain genetic information may be more critical for determining the complexity of the system than other genetic information. Information theory does not provide a way to account for this variability in importance.
Finally, information theory has limitations in dealing with dynamic and evolving systems. Many real-world systems are not static, but rather change and evolve over time. Information theory does not provide a comprehensive framework for measuring complexity in such systems.
Despite these challenges and limitations, information theory remains a valuable tool for understanding and measuring complexity. It provides a quantitative measure that can be applied across different domains and disciplines. However, it is important to recognize these challenges and limitations and consider them when applying information theory to measure complexity in real-world systems.
## Exercise
Consider a social network with millions of users and billions of connections between them. What challenges and limitations of information theory might arise when trying to measure the complexity of this social network?
### Solution
One challenge is that the social network exhibits complex dependencies and interactions between users. Information theory assumes independence and randomness, which may not accurately capture the complexity of the social network.
Another challenge is that different types of information in the social network may have different levels of importance and relevance. Information theory does not provide a way to account for this variability in importance.
Furthermore, the social network is dynamic and constantly evolving. Information theory has limitations in dealing with dynamic systems and may not provide a comprehensive framework for measuring complexity in this context.
# Future directions and advancements in information theory
Information theory has been a rapidly evolving field since its inception, and there are several exciting future directions and advancements on the horizon. These advancements aim to address some of the challenges and limitations of current information theory and further our understanding of complexity.
One area of future research is the development of new measures of complexity that can capture the intricate dependencies and patterns present in real-world systems. This involves finding ways to quantify and analyze the complex interactions between different components of a system, such as in biological or social networks. By developing more nuanced measures of complexity, we can gain deeper insights into the workings of these systems.
Another direction of research is the exploration of information theory in the context of quantum systems. Quantum information theory is a rapidly growing field that seeks to understand how information is processed and transmitted in quantum systems. This includes studying quantum entanglement, quantum communication, and quantum cryptography. The application of information theory to quantum systems has the potential to revolutionize fields such as computing and communication.
Furthermore, advancements in machine learning and artificial intelligence are opening up new possibilities for information theory. Machine learning algorithms can be used to analyze complex data and extract meaningful information. By combining machine learning techniques with information theory, we can develop more sophisticated models and algorithms for measuring and understanding complexity.
Lastly, the application of information theory to interdisciplinary fields such as biology, physics, and economics is an area of great potential. By applying information theory concepts and tools to these fields, we can gain new insights and uncover hidden patterns and structures. This interdisciplinary approach has the potential to drive innovation and discovery in a wide range of domains.
In conclusion, the future of information theory is bright and full of exciting possibilities. Advancements in measuring complexity, quantum information theory, machine learning, and interdisciplinary applications will further our understanding of complex systems and pave the way for new discoveries and advancements. As researchers continue to push the boundaries of information theory, we can expect to gain deeper insights into the nature of complexity and its role in the world around us. | Textbooks |
# Design guidelines for nonlinear control systems
One of the fundamental principles in nonlinear control design is to use the linear approximation of the nonlinear system. This approach simplifies the design process by considering the system's behavior around an operating point.
Another important aspect of nonlinear control design is the selection of the control input. The control input should be chosen to minimize the error between the desired and actual system behavior. This can be achieved by using techniques such as Lyapunov theory or H-infinity control design.
Designing a nonlinear control system also involves considering the system's robustness and robustness margins. Robustness refers to the system's ability to maintain its desired behavior in the presence of disturbances and uncertainties. Robustness margins are used to quantify the system's performance under different operating conditions.
## Exercise
Consider a nonlinear control system with the following dynamics:
$$
\dot{x} = f(x, u) = x^2 + u
$$
Design a control input $u$ to minimize the error between the desired and actual system behavior.
### Solution
The exercise does not specify a desired trajectory, so we take the goal to be driving the state to the origin.
One approach is feedback linearization: choose the control input $u = -x^2 - kx$ with $k > 0$. The $-x^2$ term cancels the plant nonlinearity exactly, leaving the linear closed-loop dynamics $\dot{x} = -kx$, so the state decays to zero exponentially from any initial condition. The simpler linear feedback $u = -x$ also stabilizes the origin, but only locally: the closed loop becomes $\dot{x} = x^2 - x$, which diverges for initial conditions $x(0) > 1$.
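A quick numerical check that feedback can drive this plant's state to zero (a sketch: forward-Euler integration, and the exactly-cancelling control $u = -x^2 - x$, are illustrative choices):

```python
def simulate(x0, control, dt=1e-3, steps=5000):
    """Forward-Euler simulation of the plant xdot = x**2 + u
    under the state feedback u = control(x)."""
    x = x0
    for _ in range(steps):
        x += dt * (x**2 + control(x))
    return x

# With u = -x^2 - x the closed loop is exactly xdot = -x
x_final = simulate(0.5, lambda x: -x**2 - x)
```

Starting from $x(0) = 0.5$, the closed-loop state decays essentially as $0.5\,e^{-t}$ and is near zero after five simulated seconds.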
# Linearization of nonlinear systems
Linearizing a nonlinear system is a crucial step in the design of nonlinear control systems. It simplifies the system's dynamics by considering the system's behavior around an operating point.
To linearize a nonlinear system, we need to find a linear approximation of the system's dynamics near the operating point. This can be done using the Taylor series expansion of the nonlinear functions.
The linearized system, written in terms of the deviation variables $\tilde{x} = x - x_0$ and $\tilde{u} = u - u_0$ from the operating point $(x_0, u_0)$, is given by:
$$
\dot{\tilde{x}} = A \tilde{x} + B \tilde{u}
$$
where $A = \left.\frac{\partial f}{\partial x}\right|_{(x_0, u_0)}$ and $B = \left.\frac{\partial f}{\partial u}\right|_{(x_0, u_0)}$ are the constant Jacobian matrices of the nonlinear function $f(x, u)$ evaluated at the operating point.
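The Jacobians can also be checked numerically. A minimal sketch using central finite differences on the scalar example $\dot{x} = x^2 + u$ (the step size and the example system are assumptions for illustration):

```python
def jacobians(f, x0, u0, eps=1e-6):
    """Central-difference estimates of A = df/dx and B = df/du
    at the operating point (x0, u0), for scalar x and u."""
    A = (f(x0 + eps, u0) - f(x0 - eps, u0)) / (2 * eps)
    B = (f(x0, u0 + eps) - f(x0, u0 - eps)) / (2 * eps)
    return A, B

A, B = jacobians(lambda x, u: x**2 + u, 0.0, 0.0)
```

At $x = 0$ this recovers $A = 2x|_{x=0} = 0$ and $B = 1$, so the linearization of this particular example is simply $\dot{x} = u$.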
## Exercise
Consider a nonlinear system with the following dynamics:
$$
\dot{x} = x^2 + u
$$
Linearize the system around the operating point $x = 0$.
### Solution
To linearize the system, we evaluate the Jacobians of the nonlinear function $f(x, u) = x^2 + u$ at the operating point $x = 0$, $u = 0$.
The Jacobian with respect to $x$ is:
$$
A = \left.\frac{\partial f}{\partial x}\right|_{x=0} = \left.2x\right|_{x=0} = 0
$$
The Jacobian with respect to $u$ is:
$$
B = \frac{\partial f}{\partial u} = 1
$$
The linearized system around the operating point $x = 0$ is therefore:
$$
\dot{x} = 0 \cdot x + 1 \cdot u = u
$$
Note that the quadratic term contributes nothing to first order at $x = 0$, so the linearization has a single pole at the origin.
# State space representation for nonlinear systems
The state space representation is a mathematical tool used to describe the dynamics of a nonlinear system. It provides a unified framework for analyzing and designing control systems.
The state space representation of a nonlinear system can be obtained by linearizing the system around an operating point and writing the linearized system in matrix form.
For a system linearized about an operating point, the standard state space representation is given by:
$$
\dot{x} = A x + B u
$$
$$
y = C x + D u
$$
where $x$ is the state vector, $y$ is the output vector, $u$ is the control input, and $A$, $B$, $C$, and $D$ are constant matrices, namely the Jacobians of the state and output equations evaluated at the operating point.
## Exercise
Consider a nonlinear system with the following dynamics:
$$
\dot{x} = x^2 + u
$$
Write the state space representation of the system.
### Solution
Linearizing around the operating point $x = 0$ gives $A = \left.2x\right|_{x=0} = 0$ and $B = 1$. Taking the state itself as the output ($C = 1$, $D = 0$), the state space representation is:
$$
\dot{x} = 0 \cdot x + 1 \cdot u = u
$$
$$
y = x
$$
# Routh-Hurwitz stability analysis
Routh-Hurwitz stability analysis is a method used to determine the stability of a linear time-invariant system. It is based on the Routh-Hurwitz criterion, which is a necessary and sufficient condition for the stability of a system.
The Routh-Hurwitz test determines whether all roots of the system's characteristic polynomial lie in the open left-half complex plane (the condition for asymptotic stability) without computing the roots themselves. It does so by building the Routh array from the polynomial's coefficients and inspecting the signs of the array's first column.
The Routh-Hurwitz criterion states that the system is stable if and only if every entry in the first column of the Routh array has the same sign; more precisely, the number of sign changes in the first column equals the number of roots in the right-half plane.
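The array construction is mechanical enough to sketch in a few lines of Python (a sketch only: it assumes a positive leading coefficient and that no zero pivot arises in the first column; the special rules for those degenerate cases are omitted):

```python
def routh_stable(coeffs):
    """Routh-Hurwitz test on descending polynomial coefficients.
    Returns True iff every root lies in the open left half-plane.
    Assumes coeffs[0] > 0 and no zero pivot in the first column."""
    m = len(coeffs[0::2])
    row1 = list(coeffs[0::2])
    row2 = list(coeffs[1::2]) + [0.0] * (m - len(coeffs[1::2]))
    rows = [row1, row2]
    for _ in range(len(coeffs) - 2):
        prev, cur = rows[-2], rows[-1]
        # Standard Routh recurrence for the next row of the array
        nxt = [(cur[0] * prev[j + 1] - prev[0] * cur[j + 1]) / cur[0]
               for j in range(m - 1)] + [0.0]
        rows.append(nxt)
    return all(row[0] > 0 for row in rows)
```

For $s^2 + 3s + 2$ the first column is $(1, 3, 2)$ with no sign changes, so the polynomial is stable; flipping the middle sign to $s^2 - 3s + 2$ introduces sign changes and the test fails.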
## Exercise
Consider a linear time-invariant system with the following transfer function:
$$
G(s) = \frac{s + 2}{s^2 + 3s + 2}
$$
Use the Routh-Hurwitz stability analysis to determine the stability of the system.
### Solution
The stability of the system is determined by its poles, i.e., the roots of the characteristic polynomial given by the denominator of the transfer function:
$$
p(s) = s^2 + 3s + 2 = (s + 1)(s + 2)
$$
The Routh array built from the coefficients $(1, 3, 2)$ is
$$
\begin{array}{c|cc}
s^2 & 1 & 2 \\
s^1 & 3 & 0 \\
s^0 & 2 &
\end{array}
$$
The first column $(1, 3, 2)$ contains no sign changes, so by the Routh-Hurwitz criterion all roots lie in the left-half complex plane. Indeed, the roots are $s_1 = -1$ and $s_2 = -2$, and the system is stable.
# Pole placement for nonlinear control systems
Pole placement is a technique used to design nonlinear control systems by placing the system's poles at desired locations. This approach allows for the design of robust and stable control systems.
In pole placement, the desired poles are chosen based on the desired system behavior and stability requirements. The control input is then designed to place the system's poles at the desired locations.
Pole placement can be applied to both linear and nonlinear control systems. In the case of nonlinear control systems, the linearized system is used to perform the pole placement.
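For SISO systems, the feedback gain can be computed explicitly with Ackermann's formula, $K = \begin{bmatrix}0 & \cdots & 0 & 1\end{bmatrix}\mathcal{C}^{-1}p(A)$, where $\mathcal{C}$ is the controllability matrix and $p$ is the desired characteristic polynomial. A NumPy sketch (the double-integrator example and pole choices are illustrative, and controllability of the pair $(A, B)$ is assumed):

```python
import numpy as np

def acker(A, B, poles):
    """Ackermann's formula for the SISO state-feedback gain K such
    that eig(A - B K) equals the desired pole locations."""
    A = np.atleast_2d(np.asarray(A, dtype=float))
    B = np.asarray(B, dtype=float).reshape(A.shape[0], 1)
    n = A.shape[0]
    # Controllability matrix [B, AB, ..., A^(n-1) B]
    C = np.hstack([np.linalg.matrix_power(A, i) @ B for i in range(n)])
    p = np.poly(poles)  # desired characteristic polynomial, descending
    pA = sum(c * np.linalg.matrix_power(A, n - i) for i, c in enumerate(p))
    e_last = np.zeros((1, n))
    e_last[0, -1] = 1.0
    return e_last @ np.linalg.inv(C) @ pA

# Double integrator with desired closed-loop poles at -1 and -2
A = [[0.0, 1.0], [0.0, 0.0]]
B = [[0.0], [1.0]]
K = acker(A, B, [-1.0, -2.0])
```

Here the desired polynomial is $s^2 + 3s + 2$, the formula yields $K = [2\ \ 3]$, and the eigenvalues of $A - BK$ are exactly $-1$ and $-2$.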
## Exercise
Consider a nonlinear control system with the following dynamics:
$$
\dot{x} = x^2 + u
$$
Design a control input $u$ to place the system's poles at the desired locations.
### Solution
First linearize the system around the operating point $x = 0$, which gives $A = 0$ and $B = 1$, so the linearized dynamics are $\dot{x} = u$.
With the state feedback $u = -kx$, the closed-loop linearized system is $\dot{x} = -kx$, whose single pole sits at $s = -k$. Choosing the gain therefore places the pole directly: for example, $u = -x$ places the pole at $s = -1$, giving an exponentially stable closed loop near the origin.
# MATLAB implementation of pole placement
To implement pole placement in MATLAB, we can use the `place` function. This function takes the linearized system matrices $A$ and $B$ together with the vector of desired pole locations, and returns the state-feedback gain matrix $K$ such that the eigenvalues of $A - BK$ are the desired poles; the control input is then $u = -Kx$.
## Exercise
Consider a nonlinear control system with the following dynamics:
$$
\dot{x} = x^2 + u
$$
Linearize the system around the operating point $x = 0$ and use MATLAB's `place` function to design a control input $u$ that places the system's poles at the desired locations.
### Solution
To implement pole placement in MATLAB, we can use the following code:
```matlab
% Linearized dynamics of xdot = x^2 + u about x = 0:
% A = d(x^2 + u)/dx at x = 0 is 0, and B = d(x^2 + u)/du is 1
A = [0];
B = [1];

% Define the desired closed-loop pole
desired_poles = [-1];

% Use the place function to compute the feedback gain K
% so that eig(A - B*K) = desired_poles (here K = 1)
K = place(A, B, desired_poles);

% The control input is given by the feedback gain matrix K
u = -K * x;
```
# Application of nonlinear control design in real-world systems
Nonlinear control design has numerous applications in real-world systems, such as robotics, aerospace, and automotive engineering.
In robotics, nonlinear control design is used to design controllers for robotic systems, such as manipulators and mobile robots. These controllers are designed to achieve precise and efficient motion, as well as to handle uncertainties and disturbances.
In aerospace engineering, nonlinear control design is used to design controllers for aircraft and spacecraft. These controllers are designed to achieve precise and efficient trajectory tracking, as well as to handle uncertainties and disturbances.
In automotive engineering, nonlinear control design is used to design controllers for automotive systems, such as powertrains and suspension systems. These controllers are designed to achieve precise and efficient performance, as well as to handle uncertainties and disturbances.
## Exercise
Design a nonlinear control system for a robotic arm using pole placement.
### Solution
To design a nonlinear control system for a robotic arm, we can follow these steps:
1. Linearize the robotic arm's dynamics around the operating point.
2. Define the desired poles for the control system.
3. Use MATLAB's `place` function to design a control input that places the system's poles at the desired locations.
4. Implement the control system in the robotic arm's control software.
# Case studies and examples
Case study 1: PID controller for a servo system
In this case study, we will design a PID controller for a servo system using pole placement.
Case study 2: Robotic arm controller
In this case study, we will design a nonlinear control system for a robotic arm using pole placement.
Case study 3: Aerospace trajectory tracking
In this case study, we will design a nonlinear control system for trajectory tracking in aerospace systems using pole placement.
## Exercise
Design a nonlinear control system for a servo system using pole placement.
### Solution
To design a nonlinear control system for a servo system, we can follow these steps:
1. Linearize the servo system's dynamics around the operating point.
2. Define the desired poles for the control system.
3. Use MATLAB's `place` function to design a control input that places the system's poles at the desired locations.
4. Implement the control system in the servo system's control software.
# Summary and future directions
In this textbook, we have covered the fundamentals of nonlinear control design with MATLAB. We have explored the design guidelines for nonlinear control systems, linearization of nonlinear systems, state space representation for nonlinear systems, Routh-Hurwitz stability analysis, pole placement for nonlinear control systems, MATLAB implementation of pole placement, application of nonlinear control design in real-world systems, and case studies and examples.
Future research directions in nonlinear control design include the development of new control techniques, the application of machine learning and artificial intelligence in nonlinear control system design, and the integration of nonlinear control design with other engineering disciplines, such as optimization and game theory.
# References and resources
For further study on nonlinear control design, the following references and resources are recommended:
1. Sontag, E. D. (1998). Mathematical control theory: deterministic finite dimensional systems (2nd ed.). Springer.
2. Åström, K. J., & Murray, R. M. (2008). Feedback systems: an introduction for scientists and engineers. Princeton University Press.
3. Slotine, J.-J. E., & Li, W. (1991). Applied nonlinear control. Prentice Hall.
4. MATLAB documentation: https://www.mathworks.com/help/releases/R2021a/matlab.html
## Exercise
Review the references and resources listed in this section and choose one that you would like to explore further.
### Solution
I would like to explore the reference "Applied nonlinear control" by Slotine and Li further. This book provides a comprehensive treatment of nonlinear control techniques, including feedback linearization and sliding mode control, which are directly relevant to the design methods covered in this textbook.
Does this formula correspond to a series representation of the Dirac delta function $\delta(x)$?
Consider the following formula which defines a piece-wise function which I believe corresponds to a series representation for the Dirac delta function $\delta(x)$. The parameter $f$ is the evaluation frequency and assumed to be a positive integer, and the evaluation limit $N$ must be selected such that $M(N)=0$ where $M(x)=\sum\limits_{n\le x}\mu(n)$ is the Mertens function.
(1) $\quad\delta(x)=\underset{N,f\to\infty}{\text{lim}}\ 2\left.\sum\limits_{n=1}^N\frac{\mu(n)}{n}\sum\limits_{k=1}^{f\ n}\ \left(\left\{ \begin{array}{cc} \begin{array}{cc} \cos \left(\frac{2 k \pi (x+1)}{n}\right) & x\geq 0 \\ \cos \left(\frac{2 k \pi (x-1)}{n}\right) & x<0 \\ \end{array} \\ \end{array} \right.\right.\right),\quad M(N)=0$
The following figure illustrates formula (1) above evaluated at $N=39$ and $f=4$. The red discrete dots in figure (1) below illustrate the evaluation of formula (1) at integer values of $x$. I believe formula (1) always evaluates to exactly $2\ f$ at $x=0$ and exactly to zero at other integer values of $x$.
Figure (1): Illustration of formula (1) for $\delta(x)$
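As a numerical sanity check of the claimed values at integers, the following Python sketch evaluates formula (1) at $N=39$, $f=4$; it computes $\mu(n)$ by trial-division factorization and, per the constraint above, first confirms $M(39)=0$:

```python
import math

def mobius(n):
    """Mobius function mu(n) by trial-division factorization."""
    if n == 1:
        return 1
    mu, d = 1, 2
    while d * d <= n:
        if n % d == 0:
            n //= d
            if n % d == 0:      # squared prime factor
                return 0
            mu = -mu
        d += 1
    return -mu if n > 1 else mu

def delta(x, N=39, f=4):
    """Formula (1): truncated series approximation of delta(x)."""
    shift = 1.0 if x >= 0 else -1.0
    total = 0.0
    for n in range(1, N + 1):
        mu = mobius(n)
        if mu == 0:
            continue
        s = sum(math.cos(2 * math.pi * k * (x + shift) / n)
                for k in range(1, f * n + 1))
        total += mu / n * s
    return 2 * total

assert sum(mobius(n) for n in range(1, 40)) == 0   # M(39) = 0
```

With $f = 4$ the sum evaluates to $2f = 8$ at $x = 0$ (only the $n = 1$ term survives, since the cosine sums over full periods vanish for $n \ge 2$) and cancels to zero, up to rounding error, at the other integers, matching the red dots in Figure (1).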
Now consider formula (2) below derived from the integral $f(0)=\int_{-\infty}^{\infty}\delta(x)\ f(x)\, dx$ where $f(x)=e^{-\left| x\right|}$ and formula (1) above for $\delta(x)$ was used to evaluate the integral. Formula (2) below can also be evaluated as illustrated in formula (3) below.
(2) $\quad e^{-\left| 0\right|}=1=\underset{N,f\to\infty}{\text{lim}}\ 4\sum\limits_{n=1}^N\mu(n)\sum\limits_{k=1}^{f\ n}\frac{n\ \cos\left(\frac{2\ \pi\ k}{n}\right)-2\ \pi\ k\ \sin\left(\frac{2\ \pi\ k}{n}\right)}{4\ \pi^2\ k^2+n^2}\,,\quad M(N)=0$
(3) $\quad e^{-\left| 0\right|}=1=\underset{N\to\infty}{\text{lim}}\ \mu(1)\left(\coth\left(\frac{1}{2}\right)-2\right)+4\sum\limits_{n=2}^N\frac{\mu(n)}{4 e \left(e^n-1\right) n}\\\\$ $\left(-2 e^{n+1}+e^n n+e^2 n-e \left(e^n-1\right) \left(e^{-\frac{2 i \pi }{n}}\right)^{\frac{i n}{2 \pi }} B_{e^{-\frac{2 i \pi }{n}}}\left(1-\frac{i n}{2 \pi },-1\right)+e \left(e^n-1\right) \left(e^{-\frac{2 i \pi }{n}}\right)^{-\frac{i n}{2 \pi }} B_{e^{-\frac{2 i \pi }{n}}}\left(\frac{i n}{2 \pi }+1,-1\right)+\left(e^n-1\right) \left(B_{e^{\frac{2 i \pi }{n}}}\left(1-\frac{i n}{2 \pi },-1\right)-e^2 B_{e^{\frac{2 i \pi }{n}}}\left(\frac{i n}{2 \pi }+1,-1\right)\right)+2 e\right),\quad M(N)=0$
The following table illustrates formula (3) above evaluated for several values of $N$ corresponding to zeros of the Mertens function $M(x)$. Note formula (3) above seems to converge to $e^{-\left| 0\right|}=1$ as the magnitude of the evaluation limit $N$ increases.
$$\begin{array}{ccc} n & \text{N = $n^{th}$ zero of $M(x)$} & \text{Evaluation of formula (3) for $e^{-\left| 0\right|}$} \\ 10 & 150 & 0.973479\, +\, 5.498812\times 10^{-17}\, i \\ 20 & 236 & 0.982236\, -\, 5.786048\times 10^{-17}\, i \\ 30 & 358 & 0.988729\, -\, 6.577234\times 10^{-17}\, i \\ 40 & 407 & 0.989363\, +\, 2.688919\times 10^{-17}\, i \\ 50 & 427 & 0.989387\, +\, 4.472005\times 10^{-17}\, i \\ 60 & 785 & 0.995546\, +\, 6.227858\times 10^{-18}\, i \\ 70 & 825 & 0.995466\, -\, 1.660692\times 10^{-17}\, i \\ 80 & 893 & 0.995653\, -\, 1.188229\times 10^{-17}\, i \\ 90 & 916 & 0.995653\, -\, 3.521051\times 10^{-17}\, i \\ 100 & 1220 & 0.997431\, -\, 1.254901\times 10^{-16}\, i \\ \end{array}$$
Finally consider the following three formulas derived from the Fourier convolution $f(y)=\int\limits_{-\infty}^\infty\delta(x)\ f(y-x)\ dx$ where all three convolutions were evaluated using formula (1) above for $\delta(x)$.
(4) $\quad e^{-\left|y\right|}=\underset{N,f\to\infty}{\text{lim}}\ 4\sum\limits_{n=1}^N\mu(n)\sum\limits_{k=1}^{f\ n}\frac{1}{4\ \pi^2\ k^2+n^2}\ \left(\left\{ \begin{array}{cc} \begin{array}{cc} n \cos\left(\frac{2\ k\ \pi\ (y+1)}{n}\right)-2\ k\ \pi\ e^{-y} \sin\left(\frac{2\ k\ \pi}{n}\right) & y\geq 0 \\ n \cos\left(\frac{2\ k\ \pi\ (y-1)}{n}\right)-2\ k\ \pi\ e^y \sin\left(\frac{2\ k\ \pi}{n}\right) & y<0 \\ \end{array} \\ \end{array}\right.\right),\ M(N)=0$
(5) $\quad e^{-y^2}=\underset{N,f\to\infty}{\text{lim}}\ \sqrt{\pi}\sum\limits_{n=1}^N\frac{\mu(n)}{n}\\\\$ $\ \sum\limits_{k=1}^{f\ n}e^{-\frac{\pi\ k\ (\pi\ k+2\ i\ n\ y)}{n^2}}\ \left(\left(1+e^{\frac{4\ i\ \pi\ k\ y}{n}}\right) \cos\left(\frac{2\ \pi\ k}{n}\right)-\sin\left(\frac{2\ \pi\ k}{n}\right) \left(\text{erfi}\left(\frac{\pi\ k}{n}+i\ y\right)+e^{\frac{4\ i\ \pi\ k\ y}{n}} \text{erfi}\left(\frac{\pi\ k}{n}-i\ y\right)\right)\right),\ M(N)=0$
(6) $\quad\sin(y)\ e^{-y^2}=\underset{N,f\to\infty}{\text{lim}}\ \frac{1}{2} \left(i \sqrt{\pi }\right)\sum\limits_{n=1}^{N} \frac{\mu(n)}{n}\sum\limits_{k=1}^{f n} e^{-\frac{(2 \pi k+n)^2+8 i \pi k n y}{4 n^2}} \left(-\left(e^{\frac{2 \pi k}{n}}-1\right) \left(-1+e^{\frac{4 i \pi k y}{n}}\right) \cos\left(\frac{2 \pi k}{n}\right)+\right.\\\\$ $\left.\sin\left(\frac{2 \pi k}{n}\right) \left(\text{erfi}\left(\frac{\pi k}{n}+i y+\frac{1}{2}\right)-e^{\frac{4 i \pi k y}{n}} \left(e^{\frac{2 \pi k}{n}} \text{erfi}\left(-\frac{\pi k}{n}+i y+\frac{1}{2}\right)+\text{erfi}\left(\frac{\pi k}{n}-i y+\frac{1}{2}\right)\right)+e^{\frac{2 \pi k}{n}} \text{erfi}\left(-\frac{\pi k}{n}-i y+\frac{1}{2}\right)\right)\right),\qquad M(N)=0$
Formulas (4), (5), and (6) defined above are illustrated in the following three figures, where the blue curves are the reference functions, the orange curves represent formulas (4), (5), and (6) above evaluated at $f=4$ and $N=39$, and the green curves represent formulas (4), (5), and (6) above evaluated at $f=4$ and $N=101$. The three figures below illustrate that formulas (4), (5), and (6) above seem to converge to the corresponding reference functions for $x\in\mathbb{R}$ as the evaluation limit $N$ is increased. Note that formula (6) above for $\sin(y)\ e^{-y^2}$, illustrated in Figure (4) below, seems to converge much faster than formulas (4) and (5) above, perhaps because formula (6) represents an odd function whereas formulas (4) and (5) both represent even functions.
Figure (2): Illustration of formula (4) for $e^{-\left|y\right|}$ evaluated at $N=39$ (orange curve) and $N=101$ (green curve) overlaid on the reference function in blue
Figure (3): Illustration of formula (5) for $e^{-y^2}$ evaluated at $N=39$ (orange curve) and $N=101$ (green curve) overlaid on the reference function in blue
Figure (4): Illustration of formula (6) for $\sin(y)\ e^{-y^2}$ evaluated at $N=39$ (orange curve) and $N=101$ (green curve) overlaid on the reference function in blue
Question (1): Is it true formula (1) above is an example of a series representation of the Dirac delta function $\delta(x)$?
Question (2): What is the class or space of functions $f(x)$ for which the integral $f(0)=\int\limits_{-\infty}^\infty\delta(x)\ f(x)\ dx$ and Fourier convolution $f(y)=\int\limits_{-\infty}^\infty\delta(x)\ f(y-x)\ dx$ are both valid when using formula (1) above for $\delta(x)$ to evaluate the integral and Fourier convolution?
Question (3): Is formula (1) above for $\delta(x)$ an example of what is referred to as a tempered distribution, or is formula (1) for $\delta(x)$ more general than a tempered distribution?
Formula (1) for $\delta(x)$ above is based on the nested Fourier series representation of $\delta(x+1)+\delta(x-1)$ defined in formula (7) below. Whereas the Fourier convolution $f(y)=\int\limits_{-\infty}^\infty\delta(x)\ f(y-x)\ dx$ evaluated using formula (1) above seems to converge for $y\in\mathbb{R}$, Mellin convolutions such as $f(y)=\int\limits_0^\infty\delta(x-1)\ f\left(\frac{y}{x}\right)\ \frac{dx}{x}$ and $f(y)=\int\limits_0^\infty\delta(x-1)\ f(y\ x)\ dx$ evaluated using formula (7) below typically seem to converge on the half-plane $\Re(y)>0$. I'll note that in general formulas derived from Fourier convolutions evaluated using formula (1) above seem to be more complicated than formulas derived from Mellin convolutions evaluated using formula (7) below, which I suspect is at least partially due to the piecewise nature of formula (1) above.
(7) $\quad\delta(x+1)+\delta(x-1)=\underset{N,f\to\infty}{\text{lim}}\ 2\sum\limits_{n=1}^N\frac{\mu(n)}{n}\sum\limits_{k=1}^{f\ n}\cos\left(\frac{2 k \pi x}{n}\right),\quad M(N)=0$
The conditional convergence requirement $M(N)=0$ stated for formulas (1) to (7) above is because the nested Fourier series representation of $\delta(x+1)+\delta(x-1)$ defined in formula (7) above only evaluates to zero at $x=0$ when $M(N)=0$. The condition $M(N)=0$ is required when evaluating formula (7) above and formulas derived from the two Mellin convolutions defined in the preceding paragraph using formula (7) above, but I'm not sure it's really necessary when evaluating formula (1) above or formulas derived from the Fourier convolution $f(y)=\int\limits_{-\infty}^\infty\delta(x)\ f(y-x)\ dx$ using formula (1) above (e.g. formulas (4), (5), and (6) above). Formula (1) above is based on the evaluation of formula (7) above at $|x|\ge 1$, so perhaps formula (1) above is not as sensitive to the evaluation of formula (7) above at $x=0$. Formula (1) above can be seen as taking formula (7) above, cutting out the strip $-1\le x<1$, and then gluing the two remaining halves together at the origin. Nevertheless I usually evaluate formula (1) above and formulas derived from the Fourier convolution $f(y)=\int\limits_{-\infty}^\infty\delta(x)\ f(y-x)\ dx$ using formula (1) above at $M(N)=0$ since it doesn't hurt anything to restrict the selection of $N$ to this condition and I suspect this restriction may perhaps lead to faster and/or more consistent convergence.
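The admissible values of the evaluation limit $N$ (those with $M(N)=0$) are easy to enumerate numerically. The following Python sketch is my own illustrative helper (a simple Möbius sieve followed by prefix sums), not code from the question; it lists the zeros of the Mertens function up to 200, which include the values $N=39$ and $N=101$ used in the figures above.

```python
def mobius_sieve(limit):
    """Compute mu(0..limit): sieve primes, flip signs, zero out non-squarefree."""
    mu = [1] * (limit + 1)
    mu[0] = 0
    is_prime = [True] * (limit + 1)
    for p in range(2, limit + 1):
        if is_prime[p]:
            for m in range(p, limit + 1, p):
                if m > p:
                    is_prime[m] = False
                mu[m] = -mu[m]          # one more prime factor
            for m in range(p * p, limit + 1, p * p):
                mu[m] = 0               # p^2 | m, so mu(m) = 0
    return mu

def mertens_zeros(limit):
    """Values of N <= limit with M(N) = sum_{n<=N} mu(n) = 0."""
    mu = mobius_sieve(limit)
    zeros, total = [], 0
    for n in range(1, limit + 1):
        total += mu[n]
        if total == 0:
            zeros.append(n)
    return zeros

print(mertens_zeros(200))  # includes 39 and 101, the limits used in the figures
```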
See this answer I posted to one of my own questions on Math StackExchange for more information on the nested Fourier series representation of $\delta(x+1)+\delta(x-1)$ and examples of formulas derived from Mellin convolutions using this representation. See my Math StackExchange question related to nested Fourier series representation of $h(s)=\frac{i s}{s^2-1}$ for information on the more general topic of nested Fourier series representations of other non-periodic functions.
analytic-number-theory
fourier-analysis
sequences-and-series
schwartz-distributions
edited Sep 30, 2020 at 2:33
Steven Clark
$\begingroup$ Since this question mentions the Mertens function, rather than several analysis tags I would tag it as (analytic) number theory $\endgroup$
– Mizar
$\begingroup$ @Mizar Thanks for your suggestion. I changed the fourier-transform tag to analytic-number-theory. $\endgroup$
– Steven Clark
$\begingroup$ $$\int_{-\infty}^\infty \delta(x)f(y-x)\,dx$$ makes no sense in traditional math (e.g. see encyclopediaofmath.org/wiki/Generalized_function). $\endgroup$
$\begingroup$ @user64494 $g(x)\to\delta(x)$ if $\forall\,f(x)\in C^\infty_c(\Bbb{R}), \int_{-\infty}^\infty g(x)f(x)dx\to f(0)$. Most representations of $\delta(x)$ are limit representations (e.g. see formulas 34-40 at mathworld.wolfram.com/DeltaFunction.html and functions.wolfram.com/GeneralizedFunctions/DiracDelta/09). Formula (1) above is of interest to me because it is a series representation. $\endgroup$
$$\sum_k e^{2i\pi kx} = \sum_m \delta(x-m)$$
Convergence in the sense of distributions
$$\lim_{N\to \infty,M(N)=0}\sum_{n=1}^N \frac{\mu(n)}{n} \sum_k e^{2i\pi kx/n} =\lim_{N\to \infty,M(N)=0}\sum_{n=1}^N \mu(n) \sum_m\delta(x-mn)$$ $$=\lim_{N\to \infty,M(N)=0}\sum_{l\ge 1}(\delta(x+l)+\delta(x-l))\sum_{d\mid l,\,d\le N} \mu(d) =\delta(x+1)+\delta(x-1)$$
answered Jun 9, 2020 at 2:40
reuns
$\begingroup$ Can you ground your "Convergence in the sense of distributions". TIA. $\endgroup$
I suspect the original formula for $\delta(x)$ defined in my question above is not quite correct as the associated derived formula for $\delta'(x)$ has a discontinuity at $x=0$. The definition of $\delta(x)$ in formula (1) below eliminates the piecewise nature of my original formula which resolves this problem and also seems to provide simpler results for formulas derived via the Fourier convolution defined in formula (2) below. The formula for $\delta(x)$ defined in formula (1) below also seems to provide the ability to derive formulas for a wider range of functions via the Fourier convolution defined in formula (2) below. The evaluation limit $f$ in formula (1) below is the evaluation frequency and assumed to be a positive integer. When evaluating formula (1) below (and all formulas derived from it) the evaluation limit $N$ must be selected such that $M(N)=0$ where $M(x)$ is the Mertens function. Formula (1) is illustrated in Figure (1) further below. I believe the series representation of $\delta(x)$ defined in formula (1) below converges in a distributional sense.
(1) $\quad\delta(x)=\underset{\underset{M(N)=0}{N,f\to\infty}}{\text{lim}}\quad\sum\limits_{n=1}^N\frac{\mu(n)}{n}\left(\sum\limits_{k=1}^{f\ n}\left(\cos\left(\frac{2 \pi k (x-1)}{n}\right)+\cos\left(\frac{2 \pi k (x+1)}{n}\right)\right)-\frac{1}{2}\sum\limits_{k=1}^{2\ f\ n}\cos\left(\frac{\pi k x}{n}\right)\right)$
(2) $\quad g(y)=\int\limits_{-\infty}^\infty\delta(x)\,g(y-x)\,dx$
Formula (1) for $\delta(x)$ above leads to formulas (3a) and (3b) for $\theta(x)$ below (illustrated in Figures (2) and (3) further below) and formula (4) for $\delta'(x)$ below (illustrated in Figure (4) further below). Note formula (3b) for $\theta(x)$ below contains a closed form representation of the two nested sums over $k$ in formula (3a) for $\theta(x)$ below.
(3a) $\quad\theta(x)=\underset{\underset{M(N)=0}{N,f\to\infty}}{\text{lim}}\quad\frac{1}{2}+\frac{1}{\pi}\sum\limits_{n=1}^N\mu(n)\left(\sum\limits_{k=1}^{f\ n}\frac{\cos\left(\frac{2 \pi k}{n}\right) \sin\left(\frac{2 \pi k x}{n}\right)}{k}-\frac{1}{2}\sum\limits_{k=1}^{2\ f\ n} \frac{\sin\left(\frac{\pi k x}{n}\right)}{k}\right)$
(3b) $\quad\theta(x)=\underset{\underset{M(N)=0}{N\to\infty}}{\text{lim}}\quad\frac{1}{2}+\frac{i}{4 \pi}\sum\limits_{n=1}^N\mu(n) \left(\log\left(1-e^{\frac{2 i \pi (x-1)}{n}}\right)-\log\left(1-e^{\frac{i \pi x}{n}}\right)+\log\left(1-e^{\frac{2 i \pi (x+1)}{n}}\right)-\log\left(1-e^{-\frac{2 i \pi (x-1)}{n}}\right)+\log\left(1-e^{-\frac{i \pi x}{n}}\right)-\log\left(1-e^{-\frac{2 i \pi (x+1)}{n}}\right)\right)$
(4) $\quad\delta'(x)=\underset{\underset{M(N)=0}{N,f\to\infty}}{\text{lim}}\quad\pi\sum\limits_{n=1}^N\frac{\mu(n)}{n^2}\left(\sum\limits_{k=1}^{f\ n} -2 k \left(\sin \left(\frac{2 \pi k (x-1)}{n}\right)+\sin \left(\frac{2 \pi k (x+1)}{n}\right)\right)+\frac{1}{2}\sum\limits_{k=1}^{2\ f\ n} k\ \sin\left(\frac{\pi k x}{n}\right)\right)$
The following formulas are derived from the Fourier convolution defined in formula (2) above using the series representation of $\delta(x)$ defined in formula (1) above. All of the formulas defined below seem to converge for $x\in\mathbb{R}$. Note one of the two nested sums over $k$ in formula (6) below for $e^{-y^2}$ has a closed form representation. Both of the nested sums over $k$ in formulas (5), (8), and (9) below have closed form representations which were not included below because they're fairly long and complex.
(5) $\quad e^{-|y|}=\underset{\underset{M(N)=0}{N,f\to\infty}}{\text{lim}}\quad\sum\limits_{n=1}^N\mu(n)\ n\left(\sum\limits_{k=1}^{f\ n}\frac{2 \left(\cos\left(\frac{2 \pi k (y-1)}{n}\right)+\cos\left(\frac{2 \pi k (y+1)}{n}\right)\right)}{4 \pi^2 k^2+n^2}-\sum\limits_{k=1}^{2\ f\ n}\frac{\cos\left(\frac{\pi k y}{n}\right)}{\pi^2 k^2+n^2}\right)$
(6) $\quad e^{-y^2}=\underset{\underset{M(N)=0}{N,f\to\infty}}{\text{lim}}\quad\sqrt{\pi}\sum\limits_{n=1}^N\frac{\mu(n)}{n}\left(\sum\limits_{k=1}^{f\ n} e^{-\frac{\pi^2 k^2}{n^2}} \left(\cos\left(\frac{2 \pi k (y-1)}{n}\right)+\cos\left(\frac{2 \pi k (y+1)}{n}\right)\right)-\frac{1}{4}\sum\limits_{k=1}^{2\ f\ n} \left(e^{-\frac{\pi k (\pi k+4 i n y)}{4 n^2}}+e^{-\frac{\pi k (\pi k-4 i n y)}{4 n^2}}\right)\right)$
$\qquad\quad=\underset{\underset{M(N)=0}{N,f\to\infty}}{\text{lim}}\quad\sqrt{\pi}\sum\limits_{n=1}^N\frac{\mu (n)}{n}\left(\frac{1}{2} \left(\vartheta_3\left(\frac{\pi (y-1)}{n},e^{-\frac{\pi^2}{n^2}}\right)+\vartheta_3\left(\frac{\pi (y+1)}{n},e^{-\frac{\pi^2}{n^2}}\right)-2\right)-\frac{1}{4} \sum\limits_{k=1}^{2\ f\ n} \left(e^{-\frac{\pi k (\pi k+4 i n y)}{4 n^2}}+e^{-\frac{\pi k (\pi k-4 i n y)}{4 n^2}}\right)\right)$
(7) $\quad\sin(y)\ e^{-y^2}=\underset{\underset{M(N)=0}{N,f\to\infty}}{\text{lim}}\quad\sqrt{\pi } \sum\limits_{n=1}^N\frac{\mu (n)}{n}\left(2 \sum\limits_{k=1}^{f\ n} e^{-\frac{\pi^2 k^2}{n^2}-\frac{1}{4}} \cos\left(\frac{2 \pi k}{n}\right) \sinh\left(\frac{\pi k}{n}\right) \sin\left(\frac{2 \pi k y}{n}\right)-\frac{1}{2}\sum\limits_{k=1}^{2\ f\ n} e^{-\frac{\pi^2 k^2}{4 n^2}-\frac{1}{4}} \sinh\left(\frac{\pi k}{2 n}\right) \sin\left(\frac{\pi k y}{n}\right)\right)$
(8) $\quad\frac{1}{y^2+1}=\underset{\underset{M(N)=0}{N,f\to\infty}}{\text{lim}}\quad\pi\sum\limits_{n=1}^N\frac{\mu (n)}{n}\left(2 \sum\limits_{k=1}^{f\ n} e^{-\frac{2 \pi k}{n}} \cos\left(\frac{2 \pi k}{n}\right) \cos\left(\frac{2 \pi k y}{n}\right)-\frac{1}{2}\sum\limits_{k=1}^{2\ f\ n} e^{-\frac{\pi k}{n}} \cos\left(\frac{\pi k y}{n}\right)\right)$
(9) $\quad\frac{y}{y^2+1}=\underset{\underset{M(N)=0}{N,f\to\infty}}{\text{lim}}\quad\pi\sum\limits_{n=1}^N\frac{\mu(n)}{n}\left(2\sum\limits_{k=1}^{f\ n} e^{-\frac{2 \pi k}{n}} \cos\left(\frac{2 \pi k}{n}\right) \sin\left(\frac{2 \pi k y}{n}\right)-\frac{1}{2}\sum\limits_{k=1}^{2\ f\ n} e^{-\frac{\pi k}{n}} \sin\left(\frac{\pi k y}{n}\right)\right)$
The remainder of this answer illustrates formula (1) for $\delta(x)$ above and some of the other formulas defined above all of which were derived from formula (1). The observational convergence of these derived formulas provides evidence of the validity of formula (1) above.
Figure (1) below illustrates formula (1) for $\delta(x)$ evaluated at $f=4$ and $N=39$. The discrete portion of the plot illustrates that formula (1) for $\delta(x)$ evaluates exactly to $2 f$ times the step size of $\theta(x)$ at integer values of $x$ when $|x|<N$.
Figure (2) below illustrates the reference function $\theta(x)$ in blue and formulas (3a) and (3b) for $\theta(x)$ in orange and green respectively where formula (3a) is evaluated at $f=4$ and formulas (3a) and (3b) are both evaluated at $N=39$.
Figure (2): Illustration of formulas (3a) and (3b) for $\theta(x)$ (orange and green)
Figure (3) below illustrates the reference function $\theta(x)$ in blue and formula (3b) for $\theta(x)$ evaluated at $N=39$ and $N=101$ in orange and green respectively.
Figure (3): Illustration of formula (3b) for $\theta(x)$ evaluated at $N=39$ and $N=101$ (orange and green)
Figures (2) and (3) above illustrate that formulas (3a) and (3b) above evaluate at a slope compared to the reference function $\theta(x)$, and Figure (3) above illustrates that the magnitude of this slope decreases as the magnitude of the evaluation limit $N$ increases. This slope is given by $-\frac{3}{4}\sum\limits_{n=1}^N\frac{\mu(n)}{n}$, which corresponds to $-0.0378622$ at $N=39$ and $-0.0159229$ at $N=101$. Since $-\frac{3}{4}\sum\limits_{n=1}^\infty\frac{\mu(n)}{n}=0$, formulas (3a) and (3b) above converge to the reference function $\theta(x)$ as $N\to\infty$ (and as $f\to\infty$ for formula (3a)).
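The slope values quoted above are easy to reproduce numerically. The sketch below is my own illustrative check (the Möbius sieve is a hypothetical helper, and exact rational arithmetic via `fractions` avoids rounding noise); it evaluates $-\frac{3}{4}\sum_{n=1}^N \mu(n)/n$ at $N=39$ and $N=101$.

```python
from fractions import Fraction

def mobius_sieve(limit):
    """Compute mu(0..limit) with a simple sieve."""
    mu = [1] * (limit + 1)
    mu[0] = 0
    is_prime = [True] * (limit + 1)
    for p in range(2, limit + 1):
        if is_prime[p]:
            for m in range(p, limit + 1, p):
                if m > p:
                    is_prime[m] = False
                mu[m] = -mu[m]
            for m in range(p * p, limit + 1, p * p):
                mu[m] = 0
    return mu

def theta_slope(N):
    """Slope -(3/4) * sum_{n<=N} mu(n)/n of formulas (3a)/(3b) at limit N."""
    mu = mobius_sieve(N)
    s = sum(Fraction(mu[n], n) for n in range(1, N + 1))
    return float(Fraction(-3, 4) * s)

print(theta_slope(39), theta_slope(101))  # ~ -0.0378622 and ~ -0.0159229
```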
Figure (4) below illustrates formula (4) for $\delta'(x)$ above evaluated at $f=4$ and $N=39$. The red discrete portion of the plot illustrates the evaluation of formula (4) for $\delta'(x)$ at integer values of $x$.
Figure (4): Illustration of formula (4) for $\delta'(x)$
Figure (5) below illustrates the reference function $\frac{y}{y^2+1}$ in blue and formula (9) for $\frac{y}{y^2+1}$ above evaluated at $f=4$ and $N=101$.
Figure (5): Illustration of formula (9) for $\frac{y}{y^2+1}$
answered Jan 3, 2021 at 2:42
My question above, original answer, and this new answer are all based on analytic formulas for
$$u(x)=-1+\theta(x+1)+\theta(x-1)\tag{1}\,.$$
$$u'(x)=\delta(x+1)+\delta(x-1)\tag{2}\,.$$
My question and original answer are both based on the analytic formula
$$u'(x)=\delta(x+1)+\delta(x-1)=\underset{\underset{M(N)=0}{N,f\to\infty}}{\text{lim}}\left(\sum\limits_{n=1}^N\frac{\mu(n)}{n}\,\left(1+2\sum\limits_{k=1}^{f\,n}\cos\left(\frac{2\,\pi\,k\,x}{n}\right)\right)\right)\tag{3}$$
where the evaluation frequency $f$ is assumed to be a positive integer and
$$M(N)=\sum\limits_{n=1}^N \mu(n)\tag{4}$$
is the Mertens function.
Formula (3) above simplifies to
$$u'(x)=\delta(x+1)+\delta(x-1)=\underset{\underset{M(N)=0}{N,f\to\infty}}{\text{lim}}\left(2\sum\limits_{n=1}^N\frac{\mu(n)}{n}\sum\limits_{k=1}^{f\ n}\cos\left(\frac{2 k \pi x}{n}\right)\right)\tag{5}$$
because
$$\sum\limits_{n=1}^\infty\frac{\mu(n)}{n}=\frac{1}{\zeta(1)}=0\,.\tag{6}$$
I originally defined formulas (3) and (5) above in this answer I posted to one of my own questions on Math StackExchange.
The formula in my question above wasn't quite right as it wasn't a smooth function at $x=0$ (there's a discontinuity in the first-order derivative corresponding to $\delta'(x)$). My original answer fixed this problem but still required the upper evaluation limit $N$ be selected such that $M(N)=0$.
This new answer is based on the analytic formula
$$u'(x)=\delta(x+1)+\delta(x-1)=\underset{N,f\to\infty}{\text{lim}}\left(\sum\limits_{n=1}^N \mu(n) \left(-2 f \text{sinc}(2 \pi f x)+\frac{1}{n}\sum\limits_{k=1}^{f\,n} \left(\cos\left(\frac{2 \pi (k-1) x}{n}\right)+\cos\left(\frac{2 \pi k x}{n}\right)\right)\right)\right)=\underset{N,f\to\infty}{\text{lim}}\left(\sum\limits_{n=1}^N \mu(n) \left(-2 f\,\text{sinc}(2 \pi f x)+\frac{\sin(2 \pi f x) \cot\left(\frac{\pi x}{n}\right)}{n}\right)\right)\tag{7}$$
which no longer requires $N$ to be selected such that $M(N)=0$.
Formula (7) above is a result related to this answer I posted to another one of my questions on Math Overflow and this answer I posted to a related question on Math StackExchange.
I believe formula (7) above is exactly equivalent to
$$u'(x)=\delta(x+1)+\delta(x-1)=\underset{f\to\infty}{\text{lim}}\left(2 f\ \text{sinc}(2 \pi f (x+1))+2 f\ \text{sinc}(2 \pi f (x-1))\right)\tag{8}$$
in that formulas (7) and (8) above both have the same Maclaurin series.
My original answer and this new answer are based on the relationship
$$\delta(x)=\frac{1}{2}\left(u'(x+1)+u'(x-1)-\frac{1}{2} u'\left(\frac{x}{2}\right)\right)\tag{9}$$
which using formula (7) above for $u'(x)$ leads to
$\delta(x)=\underset{N,f\to\infty}{\text{lim}}\left(\sum\limits_{n=1}^N\mu(n) \Bigg(f (-\text{sinc}(2 \pi f (x+1))-\text{sinc}(2 \pi f (x-1))+\text{sinc}(2 \pi f x))+\right.$ $\left.\frac{1}{2 n}\left(\sum\limits_{k=1}^{f\,n}\left(\cos\left(\frac{2 \pi (k-1) (x+1)}{n}\right)+\cos\left(\frac{2 \pi k (x+1)}{n}\right)+\cos\left(\frac{2 \pi (k-1) (x-1)}{n}\right)+\cos\left(\frac{2 \pi k (x-1)}{n}\right)\right)-\frac{1}{2} \sum\limits_{k=1}^{2 f\,n}\left(\cos\left(\frac{\pi (k-1) x}{n}\right)+\cos\left(\frac{\pi k x}{n}\right)\right)\right)\Bigg)\right)$
$$=\underset{N,f\to\infty}{\text{lim}}\left(\sum\limits_{n=1}^N\mu(n)\left(f (-\text{sinc}(2 \pi f (x+1))-\text{sinc}(2 \pi f (x-1))+\text{sinc}(2 \pi f x))+\frac{\sin(2 \pi f (x+1)) \cot\left(\frac{\pi (x+1)}{n}\right)+\sin(2 \pi f (x-1)) \cot\left(\frac{\pi (x-1)}{n}\right)-\frac{1}{2} \sin(2 \pi f x) \cot\left(\frac{\pi x}{2 n}\right)}{2 n}\right)\right)\tag{10}$$
I believe the formula (10) above is exactly equivalent to the integral representation
$$\delta(x)=\underset{f\to\infty}{\text{lim}}\left(\int\limits_{-f}^f e^{2 i \pi t x}\,dt\right)=\underset{f\to\infty}{\text{lim}}\left(2 f\ \text{sinc}(2 \pi f x)\right)\tag{11}$$
in that formulas (10) and (11) above both have the same Maclaurin series.
Now consider the slightly simpler analytic formula
$$u'(x)=\delta(x+1)+\delta(x-1)=\underset{N,f\to\infty}{\text{lim}}\left(\sum\limits_{n=1}^N\frac{\mu(2 n-1)}{2 n-1}\left(\frac{1}{2}+\sum\limits_{k=1}^{2 f (2 n-1)} (-1)^k \cos\left(\frac{\pi k x}{2 n-1}\right)\right)\right)=\underset{N,f\to\infty}{\text{lim}}\left(\frac{1}{2}\sum\limits_{n=1}^N\frac{\mu(2 n-1)}{2 n-1} \sec\left(\frac{\pi x}{4 n-2}\right) \cos\left(\pi x \left(2 f+\frac{1}{4 n-2}\right)\right)\right)\tag{12}$$
which also no longer requires $N$ to be selected such that $M(N)=0$.
I believe formula (12) above is exactly equivalent to formulas (7) and (8) above in that all three formulas have the same Maclaurin series.
The Maclaurin series terms for formula (12) above can be derived based on the relationship
$$\sum\limits_{n=1}^\infty\frac{\mu(2 n-1)}{(2 n-1)^s}=\frac{1}{\lambda(s)}\,,\quad\Re(s)\ge 1\tag{13}$$
where $\lambda(s)=\left(1-2^{-s}\right)\,\zeta(s)$ is the Dirichlet lambda function. I believe formula (13) above is valid for $\Re(s)>\frac{1}{2}$ assuming the Riemann hypothesis.
The relationship in formula (9) above and formula (12) for $u'(x)$ above leads to
$\delta(x)=\underset{N,f\to\infty}{\text{lim}}\left(\frac{1}{2}\sum\limits_{n=1}^N\frac{\mu(2 n-1)}{2 n-1}\left(\frac{3}{4}+\sum\limits_{k=1}^{2 f (2 n-1)} (-1)^k \left(\cos\left(\frac{\pi k (x+1)}{2 n-1}\right)+\cos\left(\frac{\pi k (x-1)}{2 n-1}\right)\right)\right.\right.$ $\left.\left.-\frac{1}{2}\sum\limits_{k=1}^{4 f (2 n-1)} (-1)^k \cos\left(\frac{\pi k x}{2 (2 n-1)}\right)\right)\right)$
$=\underset{N,f\to\infty}{\text{lim}}\left(\frac{1}{4}\sum\limits_{n=1}^N\frac{\mu(2 n-1)}{2 n-1} \left(\sec\left(\frac{\pi (x+1)}{4 n-2}\right) \cos\left(\pi (x+1) \left(2 f+\frac{1}{4 n-2}\right)\right)+\sec\left(\frac{\pi (x-1)}{4 n-2}\right) \cos\left(\pi (x-1) \left(2 f+\frac{1}{4 n-2}\right)\right)-\frac{1}{2} \sec\left(\frac{\pi x}{2 (4 n-2)}\right) \cos\left(\frac{1}{2} \pi x \left(4 f+\frac{1}{4 n-2}\right)\right)\right)\right)\tag{14}$
which I believe is exactly equivalent to formula (10) above and the integral representation in formula (11) above in that all three formulas have the same Maclaurin series.
Directed acyclic graph
In mathematics, particularly graph theory, and computer science, a directed acyclic graph (DAG) is a directed graph with no directed cycles. That is, it consists of vertices and edges (also called arcs), with each edge directed from one vertex to another, such that following those directions will never form a closed loop. A directed graph is a DAG if and only if it can be topologically ordered, by arranging the vertices as a linear ordering that is consistent with all edge directions. DAGs have numerous scientific and computational applications, ranging from biology (evolution, family trees, epidemiology) to information science (citation networks) to computation (scheduling).
Directed acyclic graphs are sometimes instead called acyclic directed graphs[1] or acyclic digraphs.[2]
Definitions
A graph is formed by vertices and by edges connecting pairs of vertices, where the vertices can be any kind of object that is connected in pairs by edges. In the case of a directed graph, each edge has an orientation, from one vertex to another vertex. A path in a directed graph is a sequence of edges having the property that the ending vertex of each edge in the sequence is the same as the starting vertex of the next edge in the sequence; a path forms a cycle if the starting vertex of its first edge equals the ending vertex of its last edge. A directed acyclic graph is a directed graph that has no cycles.[1][2][3]
A vertex v of a directed graph is said to be reachable from another vertex u when there exists a path that starts at u and ends at v. As a special case, every vertex is considered to be reachable from itself (by a path with zero edges). If a vertex can reach itself via a nontrivial path (a path with one or more edges), then that path is a cycle, so another way to define directed acyclic graphs is that they are the graphs in which no vertex can reach itself via a nontrivial path.[4]
Mathematical properties
Reachability relation, transitive closure, and transitive reduction
A DAG
Its transitive reduction
The reachability relation of a DAG can be formalized as a partial order ≤ on the vertices of the DAG. In this partial order, two vertices u and v are ordered as u ≤ v exactly when there exists a directed path from u to v in the DAG; that is, when u can reach v (or v is reachable from u).[5] However, different DAGs may give rise to the same reachability relation and the same partial order.[6] For example, a DAG with two edges u → v and v → w has the same reachability relation as the DAG with three edges u → v, v → w, and u → w. Both of these DAGs produce the same partial order, in which the vertices are ordered as u ≤ v ≤ w.
The transitive closure of a DAG is the graph with the most edges that has the same reachability relation as the DAG. It has an edge u → v for every pair of vertices (u, v) in the reachability relation ≤ of the DAG, and may therefore be thought of as a direct translation of the reachability relation ≤ into graph-theoretic terms. The same method of translating partial orders into DAGs works more generally: for every finite partially ordered set (S, ≤), the graph that has a vertex for every element of S and an edge for every pair of elements in ≤ is automatically a transitively closed DAG, and has (S, ≤) as its reachability relation. In this way, every finite partially ordered set can be represented as a DAG.
The transitive reduction of a DAG is the graph with the fewest edges that has the same reachability relation as the DAG. It has an edge u → v for every pair of vertices (u, v) in the covering relation of the reachability relation ≤ of the DAG. It is a subgraph of the DAG, formed by discarding the edges u → v for which the DAG also contains a longer directed path from u to v. Like the transitive closure, the transitive reduction is uniquely defined for DAGs. In contrast, for a directed graph that is not acyclic, there can be more than one minimal subgraph with the same reachability relation.[7] Transitive reductions are useful in visualizing the partial orders they represent, because they have fewer edges than other graphs representing the same orders and therefore lead to simpler graph drawings. A Hasse diagram of a partial order is a drawing of the transitive reduction in which the orientation of every edge is shown by placing the starting vertex of the edge in a lower position than its ending vertex.[8]
Topological ordering
A topological ordering of a directed acyclic graph: every edge goes from earlier in the ordering (upper left) to later in the ordering (lower right). A directed graph is acyclic if and only if it has a topological ordering.
Adding the red edges to the blue directed acyclic graph produces another DAG, the transitive closure of the blue graph. For each red or blue edge u → v, v is reachable from u: there exists a blue path starting at u and ending at v.
A topological ordering of a directed graph is an ordering of its vertices into a sequence, such that for every edge the start vertex of the edge occurs earlier in the sequence than the ending vertex of the edge. A graph that has a topological ordering cannot have any cycles, because the edge into the earliest vertex of a cycle would have to be oriented the wrong way. Therefore, every graph with a topological ordering is acyclic. Conversely, every directed acyclic graph has at least one topological ordering. The existence of a topological ordering can therefore be used as an equivalent definition of directed acyclic graphs: they are exactly the graphs that have topological orderings.[2] In general, this ordering is not unique; a DAG has a unique topological ordering if and only if it has a directed path containing all the vertices, in which case the ordering is the same as the order in which the vertices appear in the path.[9]
The family of topological orderings of a DAG is the same as the family of linear extensions of the reachability relation for the DAG,[10] so any two graphs representing the same partial order have the same set of topological orders.
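The correspondence between topological orderings and linear extensions can be illustrated by brute force on a tiny example. The following Python sketch (an illustrative enumeration of my own, not an efficient algorithm) filters all permutations of the vertices down to those consistent with every edge direction:

```python
from itertools import permutations

def topological_orders(vertices, edges):
    """All vertex orderings in which every edge goes from earlier to later."""
    orders = []
    for perm in permutations(vertices):
        pos = {v: i for i, v in enumerate(perm)}  # position of each vertex
        if all(pos[u] < pos[v] for u, v in edges):
            orders.append(perm)
    return orders

# The DAG u -> v, u -> w has two topological orders: (u, v, w) and (u, w, v).
print(topological_orders("uvw", [("u", "v"), ("u", "w")]))
# The path u -> v -> w contains all vertices, so its ordering is unique.
print(topological_orders("uvw", [("u", "v"), ("v", "w")]))
```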
Combinatorial enumeration
The graph enumeration problem of counting directed acyclic graphs was studied by Robinson (1973).[11] The number of DAGs on n labeled vertices, for n = 0, 1, 2, 3, … (without restrictions on the order in which these numbers appear in a topological ordering of the DAG) is
1, 1, 3, 25, 543, 29281, 3781503, … (sequence A003024 in the OEIS).
These numbers may be computed by the recurrence relation
$a_{n}=\sum _{k=1}^{n}(-1)^{k-1}{n \choose k}2^{k(n-k)}a_{n-k}.$[11]
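The recurrence above translates directly into a short Python sketch (a straightforward implementation assuming $a_0 = 1$ for the empty graph), which reproduces the sequence quoted earlier:

```python
from math import comb

def count_dags(n_max):
    """a_n = number of DAGs on n labeled vertices, via Robinson's recurrence."""
    a = [1]  # a_0 = 1: the empty graph
    for n in range(1, n_max + 1):
        a.append(sum((-1) ** (k - 1) * comb(n, k) * 2 ** (k * (n - k)) * a[n - k]
                     for k in range(1, n + 1)))
    return a

print(count_dags(6))  # [1, 1, 3, 25, 543, 29281, 3781503]
```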
Eric W. Weisstein conjectured,[12] and McKay et al. (2004) proved, that the same numbers count the (0,1) matrices for which all eigenvalues are positive real numbers. The proof is bijective: a matrix A is an adjacency matrix of a DAG if and only if A + I is a (0,1) matrix with all eigenvalues positive, where I denotes the identity matrix. Because a DAG cannot have self-loops, its adjacency matrix must have a zero diagonal, so adding I preserves the property that all matrix coefficients are 0 or 1.[13]
Related families of graphs
A multitree, a DAG in which the subgraph reachable from any vertex induces an undirected tree (e.g. in red)
A polytree, a DAG formed by orienting the edges of an undirected tree
A multitree (also called a strongly unambiguous graph or a mangrove) is a DAG in which there is at most one directed path between any two vertices. Equivalently, it is a DAG in which the subgraph reachable from any vertex induces an undirected tree.[14]
A polytree (also called a directed tree) is a multitree formed by orienting the edges of an undirected tree.[15]
An arborescence is a polytree formed by orienting the edges of an undirected tree away from a particular vertex, called the root of the arborescence.
Computational problems
Topological sorting and recognition
Topological sorting is the algorithmic problem of finding a topological ordering of a given DAG. It can be solved in linear time.[16] Kahn's algorithm for topological sorting builds the vertex ordering directly. It maintains a list of vertices that have no incoming edges from other vertices that have not already been included in the partially constructed topological ordering; initially this list consists of the vertices with no incoming edges at all. Then, it repeatedly adds one vertex from this list to the end of the partially constructed topological ordering, and checks whether its neighbors should be added to the list. The algorithm terminates when all vertices have been processed in this way.[17] Alternatively, a topological ordering may be constructed by reversing a postorder numbering of a depth-first search graph traversal.[16]
It is also possible to check whether a given directed graph is a DAG in linear time, either by attempting to find a topological ordering and then testing for each edge whether the resulting ordering is valid[18] or alternatively, for some topological sorting algorithms, by verifying that the algorithm successfully orders all the vertices without meeting an error condition.[17]
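The steps of Kahn's algorithm described above can be sketched as follows (a minimal Python version; returning `None` when not all vertices can be ordered doubles as the linear-time acyclicity test mentioned in the previous paragraph):

```python
from collections import deque

def kahn_topological_sort(vertices, edges):
    """Kahn's algorithm: a topological order of a DAG, or None if a cycle exists."""
    indegree = {v: 0 for v in vertices}
    succ = {v: [] for v in vertices}
    for u, v in edges:
        succ[u].append(v)
        indegree[v] += 1
    queue = deque(v for v in vertices if indegree[v] == 0)  # no incoming edges
    order = []
    while queue:
        u = queue.popleft()
        order.append(u)
        for v in succ[u]:               # u is removed; update its neighbors
            indegree[v] -= 1
            if indegree[v] == 0:
                queue.append(v)
    # If a cycle remains, its vertices never reach in-degree 0.
    return order if len(order) == len(vertices) else None

print(kahn_topological_sort("abcd", [("a", "b"), ("b", "c"), ("a", "c"), ("c", "d")]))
print(kahn_topological_sort("ab", [("a", "b"), ("b", "a")]))  # cyclic -> None
```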
Construction from cyclic graphs
Any undirected graph may be made into a DAG by choosing a total order for its vertices and directing every edge from the earlier endpoint in the order to the later endpoint. The resulting orientation of the edges is called an acyclic orientation. Different total orders may lead to the same acyclic orientation, so an n-vertex graph can have fewer than n! acyclic orientations. The number of acyclic orientations is equal to |χ(−1)|, where χ is the chromatic polynomial of the given graph.[19]
Any directed graph may be made into a DAG by removing a feedback vertex set or a feedback arc set, a set of vertices or edges (respectively) that touches all cycles. However, the smallest such set is NP-hard to find.[20] An arbitrary directed graph may also be transformed into a DAG, called its condensation, by contracting each of its strongly connected components into a single supervertex.[21] When the graph is already acyclic, its smallest feedback vertex sets and feedback arc sets are empty, and its condensation is the graph itself.
Transitive closure and transitive reduction
The transitive closure of a given DAG, with n vertices and m edges, may be constructed in time O(mn) by using either breadth-first search or depth-first search to test reachability from each vertex.[22] Alternatively, it can be solved in time O(nω) where ω < 2.373 is the exponent for matrix multiplication algorithms; this is a theoretical improvement over the O(mn) bound for dense graphs.[23]
In all of these transitive closure algorithms, it is possible to distinguish pairs of vertices that are reachable by at least one path of length two or more from pairs that can only be connected by a length-one path. The transitive reduction consists of the edges that form length-one paths that are the only paths connecting their endpoints. Therefore, the transitive reduction can be constructed in the same asymptotic time bounds as the transitive closure.[24]
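Both operations can be illustrated with a small, deliberately simple Python sketch. This is my own quadratic-time illustration (DFS from every vertex, and edge-by-edge testing for the reduction), not the asymptotically faster algorithms cited above; the edge-removal test for the reduction is only valid because the input is a DAG, where the reduction is unique.

```python
def transitive_closure(vertices, edges):
    """All pairs (u, v), u != v, with a directed path from u to v (DFS per vertex)."""
    succ = {v: set() for v in vertices}
    for u, v in edges:
        succ[u].add(v)
    closure = set()
    for s in vertices:
        stack, seen = [s], set()
        while stack:
            u = stack.pop()
            for v in succ[u]:
                if v not in seen:
                    seen.add(v)
                    closure.add((s, v))
                    stack.append(v)
    return closure

def transitive_reduction(vertices, edges):
    """Drop each edge u -> v that is implied by a longer directed path u -> ... -> v."""
    kept = set()
    for u, v in edges:
        others = [e for e in edges if e != (u, v)]
        if (u, v) not in transitive_closure(vertices, others):
            kept.add((u, v))
    return kept

# u -> v -> w: the closure adds u -> w; the reduction of all three edges drops it.
print(sorted(transitive_closure("uvw", [("u", "v"), ("v", "w")])))
print(sorted(transitive_reduction("uvw", [("u", "v"), ("v", "w"), ("u", "w")])))
```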
Closure problem
The closure problem takes as input a vertex-weighted directed acyclic graph and seeks the minimum (or maximum) weight of a closure – a set of vertices C, such that no edges leave C. The problem may be formulated for directed graphs without the assumption of acyclicity, but with no greater generality, because in this case it is equivalent to the same problem on the condensation of the graph. It may be solved in polynomial time using a reduction to the maximum flow problem.[25]
Path algorithms
Some algorithms become simpler when used on DAGs instead of general graphs, based on the principle of topological ordering. For example, it is possible to find shortest paths and longest paths from a given starting vertex in DAGs in linear time by processing the vertices in a topological order, and calculating the path length for each vertex to be the minimum or maximum length obtained via any of its incoming edges.[26] In contrast, for arbitrary graphs the shortest path may require slower algorithms such as Dijkstra's algorithm or the Bellman–Ford algorithm,[27] and longest paths in arbitrary graphs are NP-hard to find.[28]
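The linear-time single-source path computation described above can be sketched directly: topologically order the vertices, then relax each edge once, taking the minimum for shortest paths or the maximum for longest paths.

```python
from collections import deque

def dag_paths(n, weighted_edges, source, longest=False):
    """Single-source shortest (or longest) path lengths in a DAG.

    Processes vertices in topological order, relaxing each outgoing
    edge exactly once -- O(n + m) total.
    """
    adj = [[] for _ in range(n)]
    indeg = [0] * n
    for u, v, w in weighted_edges:
        adj[u].append((v, w))
        indeg[v] += 1
    # Kahn-style topological ordering.
    order, q = [], deque(v for v in range(n) if indeg[v] == 0)
    while q:
        u = q.popleft()
        order.append(u)
        for v, _ in adj[u]:
            indeg[v] -= 1
            if indeg[v] == 0:
                q.append(v)
    inf = float('-inf') if longest else float('inf')
    better = max if longest else min
    dist = [inf] * n
    dist[source] = 0
    for u in order:
        if dist[u] == inf:
            continue  # u is unreachable from the source
        for v, w in adj[u]:
            dist[v] = better(dist[v], dist[u] + w)
    return dist

edges = [(0, 1, 2), (0, 2, 5), (1, 2, 1), (2, 3, 2)]
print(dag_paths(4, edges, 0))                # shortest: [0, 2, 3, 5]
print(dag_paths(4, edges, 0, longest=True))  # longest:  [0, 2, 5, 7]
```

The same routine with `longest=True` computes the critical path used in PERT-style scheduling, discussed below.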
Applications
Scheduling
Directed acyclic graph representations of partial orderings have many applications in scheduling for systems of tasks with ordering constraints.[29] An important class of problems of this type concern collections of objects that need to be updated, such as the cells of a spreadsheet after one of the cells has been changed, or the object files of a piece of computer software after its source code has been changed. In this context, a dependency graph is a graph that has a vertex for each object to be updated, and an edge connecting two objects whenever one of them needs to be updated earlier than the other. A cycle in this graph is called a circular dependency, and is generally not allowed, because there would be no way to consistently schedule the tasks involved in the cycle. Dependency graphs without circular dependencies form DAGs.[30]
For instance, when one cell of a spreadsheet changes, it is necessary to recalculate the values of other cells that depend directly or indirectly on the changed cell. For this problem, the tasks to be scheduled are the recalculations of the values of individual cells of the spreadsheet. Dependencies arise when an expression in one cell uses a value from another cell. In such a case, the value that is used must be recalculated earlier than the expression that uses it. Topologically ordering the dependency graph, and using this topological order to schedule the cell updates, allows the whole spreadsheet to be updated with only a single evaluation per cell.[31] Similar problems of task ordering arise in makefiles for program compilation[31] and instruction scheduling for low-level computer program optimization.[32]
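The spreadsheet-recalculation idea can be sketched with Python's standard-library `graphlib`. The cell names and formulas here are invented for illustration; the point is that one topological pass evaluates every cell exactly once, after all of its dependencies.

```python
from graphlib import TopologicalSorter  # Python 3.9+

# A hypothetical mini-spreadsheet: each cell is a function of the
# already-computed cells; deps maps each cell to the cells it reads.
formulas = {
    "A1": lambda cells: 2,
    "A2": lambda cells: 3,
    "B1": lambda cells: cells["A1"] + cells["A2"],
    "C1": lambda cells: cells["B1"] * 10,
}
deps = {"A1": set(), "A2": set(), "B1": {"A1", "A2"}, "C1": {"B1"}}

# TopologicalSorter takes predecessor sets; a circular dependency
# would raise graphlib.CycleError instead of producing an order.
cells = {}
for name in TopologicalSorter(deps).static_order():
    cells[name] = formulas[name](cells)

print(cells["B1"], cells["C1"])  # 5 50
```

The same pattern underlies build tools such as make: targets are evaluated in a topological order of the dependency graph, so each one is built only once.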
A somewhat different DAG-based formulation of scheduling constraints is used by the program evaluation and review technique (PERT), a method for management of large human projects that was one of the first applications of DAGs. In this method, the vertices of a DAG represent milestones of a project rather than specific tasks to be performed. Instead, a task or activity is represented by an edge of a DAG, connecting two milestones that mark the beginning and completion of the task. Each such edge is labeled with an estimate for the amount of time that it will take a team of workers to perform the task. The longest path in this DAG represents the critical path of the project, the one that controls the total time for the project. Individual milestones can be scheduled according to the lengths of the longest paths ending at their vertices.[33]
Data processing networks
A directed acyclic graph may be used to represent a network of processing elements. In this representation, data enters a processing element through its incoming edges and leaves the element through its outgoing edges.
For instance, in electronic circuit design, static combinational logic blocks can be represented as an acyclic system of logic gates that computes a function of an input, where the input and output of the function are represented as individual bits. In general, the output of these blocks cannot be fed back as an input unless it is captured by a register or state element, which preserves the acyclic structure.[34] Electronic circuit schematics, whether on paper or in a database, are a form of directed acyclic graph in which instances of components form directed references to lower-level components. Electronic circuits themselves are not necessarily acyclic or directed.
Dataflow programming languages describe systems of operations on data streams, and the connections between the outputs of some operations and the inputs of others. These languages can be convenient for describing repetitive data processing tasks, in which the same acyclically-connected collection of operations is applied to many data items. They can be executed as a parallel algorithm in which each operation is performed by a parallel process as soon as another set of inputs becomes available to it.[35]
In compilers, straight line code (that is, sequences of statements without loops or conditional branches) may be represented by a DAG describing the inputs and outputs of each of the arithmetic operations performed within the code. This representation allows the compiler to perform common subexpression elimination efficiently.[36] At a higher level of code organization, the acyclic dependencies principle states that the dependencies between modules or components of a large software system should form a directed acyclic graph.[37]
Feedforward neural networks are another example: each unit's output feeds only into units in later layers, so the connection graph forms a DAG.
Causal structures
Main article: Bayesian network
Graphs in which vertices represent events occurring at a definite time, and in which each edge points from its earlier vertex to its later vertex, are necessarily directed and acyclic. The lack of a cycle follows because the time associated with a vertex always increases as you follow any path in the graph, so you can never return to a vertex on a path. This reflects our natural intuition that causality means events can only affect the future, never the past, and thus we have no causal loops. Examples of this type of directed acyclic graph are encountered in the causal set approach to quantum gravity, though in that case the graphs considered are transitively complete. In the version history example below, each version of the software is associated with a unique time, typically the time the version was saved, committed or released. In the citation graph examples below, the documents are published at one time and can only refer to older documents.
Sometimes events are not associated with a specific physical time. Provided that pairs of events have a purely causal relationship, that is edges represent causal relations between the events, we will have a directed acyclic graph.[38] For instance, a Bayesian network represents a system of probabilistic events as vertices in a directed acyclic graph, in which the likelihood of an event may be calculated from the likelihoods of its predecessors in the DAG.[39] In this context, the moral graph of a DAG is the undirected graph created by adding an (undirected) edge between all parents of the same vertex (sometimes called marrying), and then replacing all directed edges by undirected edges.[40] Another type of graph with a similar causal structure is an influence diagram, the vertices of which represent either decisions to be made or unknown information, and the edges of which represent causal influences from one vertex to another.[41] In epidemiology, for instance, these diagrams are often used to estimate the expected value of different choices for intervention.[42][43]
The converse is also true. That is, in any application represented by a directed acyclic graph there is a causal structure: either an explicit order or time in the example, or an order that can be derived from the graph structure. This follows because all directed acyclic graphs have a topological ordering, i.e. there is at least one way to put the vertices in an order such that all edges point in the same direction along that order.
Genealogy and version history
Family trees may be seen as directed acyclic graphs, with a vertex for each family member and an edge for each parent-child relationship.[44] Despite the name, these graphs are not necessarily trees because of the possibility of marriages between relatives (so a child has a common ancestor on both the mother's and father's side) causing pedigree collapse.[45] The graphs of matrilineal descent (mother-daughter relationships) and patrilineal descent (father-son relationships) are trees within this graph. Because no one can become their own ancestor, family trees are acyclic.[46]
The version history of a distributed revision control system, such as Git, generally has the structure of a directed acyclic graph, in which there is a vertex for each revision and an edge connecting pairs of revisions that were directly derived from each other. These are not trees in general due to merges.[47]
In many randomized algorithms in computational geometry, the algorithm maintains a history DAG representing the version history of a geometric structure over the course of a sequence of changes to the structure. For instance in a randomized incremental algorithm for Delaunay triangulation, the triangulation changes by replacing one triangle by three smaller triangles when each point is added, and by "flip" operations that replace pairs of triangles by a different pair of triangles. The history DAG for this algorithm has a vertex for each triangle constructed as part of the algorithm, and edges from each triangle to the two or three other triangles that replace it. This structure allows point location queries to be answered efficiently: to find the location of a query point q in the Delaunay triangulation, follow a path in the history DAG, at each step moving to the replacement triangle that contains q. The final triangle reached in this path must be the Delaunay triangle that contains q.[48]
Citation graphs
In a citation graph the vertices are documents with a single publication date. The edges represent the citations from the bibliography of one document to other, necessarily earlier, documents. The classic example comes from the citations between academic papers, as pointed out in the 1965 article "Networks of Scientific Papers"[49] by Derek J. de Solla Price, who went on to produce the first model of a citation network, the Price model.[50] In this case the citation count of a paper is just the in-degree of the corresponding vertex of the citation network. This is an important measure in citation analysis. Court judgements provide another example, as judges support their conclusions in one case by recalling other earlier decisions made in previous cases. A final example is provided by patents, which must refer to earlier prior art: earlier patents which are relevant to the current patent claim. By taking the special properties of directed acyclic graphs into account, one can analyse citation networks with techniques not available when analysing the general graphs considered in many studies using network analysis. For instance, transitive reduction gives new insights into the citation distributions found in different applications, highlighting clear differences in the mechanisms creating citation networks in different contexts.[51] Another technique is main path analysis, which traces the citation links and suggests the most significant citation chains in a given citation graph.
The Price model is too simple to be a realistic model of a citation network but it is simple enough to allow for analytic solutions for some of its properties. Many of these can be found by using results derived from the undirected version of the Price model, the Barabási–Albert model. However, since Price's model gives a directed acyclic graph, it is a useful model when looking for analytic calculations of properties unique to directed acyclic graphs. For instance, the length of the longest path, from the n-th node added to the network to the first node in the network, scales as $\ln(n)$.[52]
Data compression
Directed acyclic graphs may also be used as a compact representation of a collection of sequences. In this type of application, one finds a DAG in which the paths form the given sequences. When many of the sequences share the same subsequences, these shared subsequences can be represented by a shared part of the DAG, allowing the representation to use less space than it would take to list out all of the sequences separately. For example, the directed acyclic word graph is a data structure in computer science formed by a directed acyclic graph with a single source and with edges labeled by letters or symbols; the paths from the source to the sinks in this graph represent a set of strings, such as English words.[53] Any set of sequences can be represented as paths in a tree, by forming a tree vertex for every prefix of a sequence and making the parent of one of these vertices represent the sequence with one fewer element; the tree formed in this way for a set of strings is called a trie. A directed acyclic word graph saves space over a trie by allowing paths to diverge and rejoin, so that a set of words with the same possible suffixes can be represented by a single tree vertex.[54]
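The prefix-sharing half of this idea is easy to sketch with a dict-of-dicts trie; a directed acyclic word graph would go one step further and also merge isomorphic suffix subtrees (that minimization step is omitted here for brevity).

```python
def build_trie(words):
    """Dict-of-dicts trie; the special key '$' marks the end of a word."""
    root = {}
    for w in words:
        node = root
        for ch in w:
            node = node.setdefault(ch, {})
        node["$"] = {}
    return root

def count_nodes(node):
    """Total nodes in the trie, counting the node itself."""
    return 1 + sum(count_nodes(child) for child in node.values())

words = ["tap", "taps", "top", "tops"]
trie = build_trie(words)
# The trie shares the prefixes "ta" and "to", but it still duplicates
# the suffix structure "p" / "ps"; a DAWG would merge those as well,
# representing both word endings by a single shared subgraph.
print(count_nodes(trie))  # 12
```

Here the four words use 12 trie nodes; merging the two identical "p"/"ps" suffix subtrees, as a DAWG does, would save a further three nodes.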
The same idea of using a DAG to represent a family of paths occurs in the binary decision diagram,[55][56] a DAG-based data structure for representing binary functions. In a binary decision diagram, each non-sink vertex is labeled by the name of a binary variable, and each sink and each edge is labeled by a 0 or 1. The function value for any truth assignment to the variables is the value at the sink found by following a path, starting from the single source vertex, that at each non-sink vertex follows the outgoing edge labeled with the value of that vertex's variable. Just as directed acyclic word graphs can be viewed as a compressed form of tries, binary decision diagrams can be viewed as compressed forms of decision trees that save space by allowing paths to rejoin when they agree on the results of all remaining decisions.[57]
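Evaluating a binary decision diagram is exactly the path-following procedure described above. The node table below is a hand-built toy BDD for f(x, y) = x AND y (node names and representation are invented for illustration).

```python
# A tiny BDD for f(x, y) = x AND y, stored as a node table.
# Non-sink nodes: (variable, low_child, high_child); sinks are 0 and 1.
# Because children may be shared, the structure is a DAG, not a tree.
bdd = {
    "n_x": ("x", 0, "n_y"),   # if x == 0 -> sink 0, else test y
    "n_y": ("y", 0, 1),       # if y == 0 -> sink 0, else sink 1
}

def evaluate(node, assignment):
    """Follow the single source-to-sink path chosen by the variables."""
    while node not in (0, 1):
        var, low, high = bdd[node]
        node = high if assignment[var] else low
    return node

for x in (0, 1):
    for y in (0, 1):
        print(x, y, evaluate("n_x", {"x": x, "y": y}))
```

Each evaluation touches at most one node per variable, regardless of how many truth assignments share the same sub-paths.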
References
1. Thulasiraman, K.; Swamy, M. N. S. (1992), "5.7 Acyclic Directed Graphs", Graphs: Theory and Algorithms, John Wiley and Son, p. 118, ISBN 978-0-471-51356-8.
2. Bang-Jensen, Jørgen (2008), "2.1 Acyclic Digraphs", Digraphs: Theory, Algorithms and Applications, Springer Monographs in Mathematics (2nd ed.), Springer-Verlag, pp. 32–34, ISBN 978-1-84800-997-4.
3. Christofides, Nicos (1975), Graph theory: an algorithmic approach, Academic Press, pp. 170–174.
4. Mitrani, I. (1982), Simulation Techniques for Discrete Event Systems, Cambridge Computer Science Texts, vol. 14, Cambridge University Press, p. 27, ISBN 9780521282826.
5. Kozen, Dexter (1992), The Design and Analysis of Algorithms, Monographs in Computer Science, Springer, p. 9, ISBN 978-0-387-97687-7.
6. Banerjee, Utpal (1993), "Exercise 2(c)", Loop Transformations for Restructuring Compilers: The Foundations, Springer, p. 19, Bibcode:1993ltfr.book.....B, ISBN 978-0-7923-9318-4.
7. Bang-Jensen, Jørgen; Gutin, Gregory Z. (2008), "2.3 Transitive Digraphs, Transitive Closures and Reductions", Digraphs: Theory, Algorithms and Applications, Springer Monographs in Mathematics, Springer, pp. 36–39, ISBN 978-1-84800-998-1.
8. Jungnickel, Dieter (2012), Graphs, Networks and Algorithms, Algorithms and Computation in Mathematics, vol. 5, Springer, pp. 92–93, ISBN 978-3-642-32278-5.
9. Sedgewick, Robert; Wayne, Kevin (2011), "4.2.25 Unique topological ordering", Algorithms (4th ed.), Addison-Wesley, pp. 598–599, ISBN 978-0-13-276256-4.
10. Bender, Edward A.; Williamson, S. Gill (2005), "Example 26 (Linear extensions – topological sorts)", A Short Course in Discrete Mathematics, Dover Books on Computer Science, Courier Dover Publications, p. 142, ISBN 978-0-486-43946-4.
11. Robinson, R. W. (1973), "Counting labeled acyclic digraphs", in Harary, F. (ed.), New Directions in the Theory of Graphs, Academic Press, pp. 239–273. See also Harary, Frank; Palmer, Edgar M. (1973), Graphical Enumeration, Academic Press, p. 19, ISBN 978-0-12-324245-7.
12. Weisstein, Eric W., "Weisstein's Conjecture", MathWorld
13. McKay, B. D.; Royle, G. F.; Wanless, I. M.; Oggier, F. E.; Sloane, N. J. A.; Wilf, H. (2004), "Acyclic digraphs and eigenvalues of (0,1)-matrices", Journal of Integer Sequences, 7: 33, arXiv:math/0310423, Bibcode:2004JIntS...7...33M, Article 04.3.3.
14. Furnas, George W.; Zacks, Jeff (1994), "Multitrees: enriching and reusing hierarchical structure", Proc. SIGCHI conference on Human Factors in Computing Systems (CHI '94), pp. 330–336, doi:10.1145/191666.191778, ISBN 978-0897916509, S2CID 18710118.
15. Rebane, George; Pearl, Judea (1987), "The recovery of causal poly-trees from statistical data", in Proc. 3rd Annual Conference on Uncertainty in Artificial Intelligence (UAI 1987), Seattle, WA, USA, July 1987 (PDF), pp. 222–228.
16. Cormen, Thomas H.; Leiserson, Charles E.; Rivest, Ronald L.; Stein, Clifford (2001) [1990], Introduction to Algorithms (2nd ed.), MIT Press and McGraw-Hill, ISBN 0-262-03293-7 Section 22.4, Topological sort, pp. 549–552.
17. Jungnickel (2012), pp. 50–51.
18. For depth-first search based topological sorting algorithm, this validity check can be interleaved with the topological sorting algorithm itself; see e.g. Skiena, Steven S. (2009), The Algorithm Design Manual, Springer, pp. 179–181, ISBN 978-1-84800-070-4.
19. Stanley, Richard P. (1973), "Acyclic orientations of graphs" (PDF), Discrete Mathematics, 5 (2): 171–178, doi:10.1016/0012-365X(73)90108-8.
20. Garey, Michael R.; Johnson, David S. (1979), Computers and Intractability: A Guide to the Theory of NP-Completeness, W. H. Freeman, ISBN 0-7167-1045-5, Problems GT7 and GT8, pp. 191–192.
21. Harary, Frank; Norman, Robert Z.; Cartwright, Dorwin (1965), Structural Models: An Introduction to the Theory of Directed Graphs, John Wiley & Sons, p. 63.
22. Skiena (2009), p. 495.
23. Skiena (2009), p. 496.
24. Bang-Jensen & Gutin (2008), p. 38.
25. Picard, Jean-Claude (1976), "Maximal closure of a graph and applications to combinatorial problems", Management Science, 22 (11): 1268–1272, doi:10.1287/mnsc.22.11.1268, MR 0403596.
26. Cormen et al. 2001, Section 24.2, Single-source shortest paths in directed acyclic graphs, pp. 592–595.
27. Cormen et al. 2001, Sections 24.1, The Bellman–Ford algorithm, pp. 588–592, and 24.3, Dijkstra's algorithm, pp. 595–601.
28. Cormen et al. 2001, p. 966.
29. Skiena (2009), p. 469.
30. Al-Mutawa, H. A.; Dietrich, J.; Marsland, S.; McCartin, C. (2014), "On the shape of circular dependencies in Java programs", 23rd Australian Software Engineering Conference, IEEE, pp. 48–57, doi:10.1109/ASWEC.2014.15, ISBN 978-1-4799-3149-1, S2CID 17570052.
31. Gross, Jonathan L.; Yellen, Jay; Zhang, Ping (2013), Handbook of Graph Theory (2nd ed.), CRC Press, p. 1181, ISBN 978-1-4398-8018-0.
32. Srikant, Y. N.; Shankar, Priti (2007), The Compiler Design Handbook: Optimizations and Machine Code Generation (2nd ed.), CRC Press, pp. 19–39, ISBN 978-1-4200-4383-9.
33. Wang, John X. (2002), What Every Engineer Should Know About Decision Making Under Uncertainty, CRC Press, p. 160, ISBN 978-0-8247-4373-4.
34. Sapatnekar, Sachin (2004), Timing, Springer, p. 133, ISBN 978-1-4020-7671-8.
35. Dennis, Jack B. (1974), "First version of a data flow procedure language", Programming Symposium, Lecture Notes in Computer Science, vol. 19, pp. 362–376, doi:10.1007/3-540-06859-7_145, ISBN 978-3-540-06859-4.
36. Touati, Sid; de Dinechin, Benoit (2014), Advanced Backend Optimization, John Wiley & Sons, p. 123, ISBN 978-1-118-64894-0.
37. Garland, Jeff; Anthony, Richard (2003), Large-Scale Software Architecture: A Practical Guide using UML, John Wiley & Sons, p. 215, ISBN 9780470856383.
38. Gopnik, Alison; Schulz, Laura (2007), Causal Learning, Oxford University Press, p. 4, ISBN 978-0-19-803928-0.
39. Shmulevich, Ilya; Dougherty, Edward R. (2010), Probabilistic Boolean Networks: The Modeling and Control of Gene Regulatory Networks, Society for Industrial and Applied Mathematics, p. 58, ISBN 978-0-89871-692-4.
40. Cowell, Robert G.; Dawid, A. Philip; Lauritzen, Steffen L.; Spiegelhalter, David J. (1999), "3.2.1 Moralization", Probabilistic Networks and Expert Systems, Springer, pp. 31–33, ISBN 978-0-387-98767-5.
41. Dorf, Richard C. (1998), The Technology Management Handbook, CRC Press, p. 9-7, ISBN 978-0-8493-8577-3.
42. Boslaugh, Sarah (2008), Encyclopedia of Epidemiology, Volume 1, SAGE, p. 255, ISBN 978-1-4129-2816-8.
43. Pearl, Judea (1995), "Causal diagrams for empirical research", Biometrika, 82 (4): 669–709, doi:10.1093/biomet/82.4.669.
44. Kirkpatrick, Bonnie B. (April 2011), "Haplotypes versus genotypes on pedigrees", Algorithms for Molecular Biology, 6 (10): 10, doi:10.1186/1748-7188-6-10, PMC 3102622, PMID 21504603.
45. McGuffin, M. J.; Balakrishnan, R. (2005), "Interactive visualization of genealogical graphs" (PDF), IEEE Symposium on Information Visualization (INFOVIS 2005), pp. 16–23, doi:10.1109/INFVIS.2005.1532124, ISBN 978-0-7803-9464-3, S2CID 15449409.
46. Bender, Michael A.; Pemmasani, Giridhar; Skiena, Steven; Sumazin, Pavel (2001), "Finding least common ancestors in directed acyclic graphs", Proceedings of the Twelfth Annual ACM-SIAM Symposium on Discrete Algorithms (SODA '01), Philadelphia, PA, USA: Society for Industrial and Applied Mathematics, pp. 845–854, ISBN 978-0-89871-490-6.
47. Bartlang, Udo (2010), Architecture and Methods for Flexible Content Management in Peer-to-Peer Systems, Springer, p. 59, Bibcode:2010aamf.book.....B, ISBN 978-3-8348-9645-2.
48. Pach, János; Sharir, Micha, Combinatorial Geometry and Its Algorithmic Applications: The Alcalá Lectures, Mathematical surveys and monographs, vol. 152, American Mathematical Society, pp. 93–94, ISBN 978-0-8218-7533-9.
49. Price, Derek J. de Solla (July 30, 1965), "Networks of Scientific Papers" (PDF), Science, 149 (3683): 510–515, Bibcode:1965Sci...149..510D, doi:10.1126/science.149.3683.510, PMID 14325149.
50. Price, Derek J. de Solla (1976), "A general theory of bibliometric and other cumulative advantage processes", Journal of the American Society for Information Science, 27 (5): 292–306, doi:10.1002/asi.4630270505, S2CID 8536863.
51. Clough, James R.; Gollings, Jamie; Loach, Tamar V.; Evans, Tim S. (2015), "Transitive reduction of citation networks", Journal of Complex Networks, 3 (2): 189–203, arXiv:1310.8224, doi:10.1093/comnet/cnu039, S2CID 10228152.
52. Evans, T.S.; Calmon, L.; Vasiliauskaite, V. (2020), "The Longest Path in the Price Model", Scientific Reports, 10 (1): 10503, arXiv:1903.03667, Bibcode:2020NatSR..1010503E, doi:10.1038/s41598-020-67421-8, PMC 7324613, PMID 32601403
53. Crochemore, Maxime; Vérin, Renaud (1997), "Direct construction of compact directed acyclic word graphs", Combinatorial Pattern Matching, Lecture Notes in Computer Science, vol. 1264, Springer, pp. 116–129, CiteSeerX 10.1.1.53.6273, doi:10.1007/3-540-63220-4_55, ISBN 978-3-540-63220-7, S2CID 17045308.
54. Lothaire, M. (2005), Applied Combinatorics on Words, Encyclopedia of Mathematics and its Applications, vol. 105, Cambridge University Press, p. 18, ISBN 9780521848022.
55. Lee, C. Y. (1959), "Representation of switching circuits by binary-decision programs", Bell System Technical Journal, 38 (4): 985–999, doi:10.1002/j.1538-7305.1959.tb01585.x.
56. Akers, Sheldon B. (1978), "Binary decision diagrams", IEEE Transactions on Computers, C-27 (6): 509–516, doi:10.1109/TC.1978.1675141, S2CID 21028055.
57. Friedman, S. J.; Supowit, K. J. (1987), "Finding the optimal variable ordering for binary decision diagrams", Proc. 24th ACM/IEEE Design Automation Conference (DAC '87), New York, NY, USA: ACM, pp. 348–356, doi:10.1145/37888.37941, ISBN 978-0-8186-0781-3, S2CID 14796451.
External links
• Weisstein, Eric W., "Acyclic Digraph", MathWorld
• DAGitty – an online tool for creating DAGs
\begin{definition}[Definition:Torus (Topology)]
A '''torus''' is a surface obtained by identifying both pairs of opposite sides, one with the other, of a square, while retaining the orientation:
:(Figure: a square with vertices $A$, $B$, $C$, $D$, whose opposite sides are identified in pairs with matching orientation.)
Thus in the above diagram, $AB$ is identified with $DC$ and $CB$ with $DA$.
\end{definition}
Fodor's lemma
In mathematics, particularly in set theory, Fodor's lemma states the following:
If $\kappa $ is a regular, uncountable cardinal, $S$ is a stationary subset of $\kappa $, and $f:S\rightarrow \kappa $ is regressive (that is, $f(\alpha )<\alpha $ for any $\alpha \in S$, $\alpha \neq 0$) then there is some $\gamma $ and some stationary $S_{0}\subseteq S$ such that $f(\alpha )=\gamma $ for any $\alpha \in S_{0}$. In modern parlance, the nonstationary ideal is normal.
The lemma was first proved by the Hungarian set theorist, Géza Fodor in 1956. It is sometimes also called "The Pressing Down Lemma".
Proof
We can assume that $0\notin S$ (by removing 0, if necessary). If Fodor's lemma is false, for every $\alpha <\kappa $ there is some club set $C_{\alpha }$ such that $C_{\alpha }\cap f^{-1}(\alpha )=\emptyset $. Let $C=\Delta _{\alpha <\kappa }C_{\alpha }$. The club sets are closed under diagonal intersection, so $C$ is also club and therefore there is some $\alpha \in S\cap C$. By the definition of the diagonal intersection, $\alpha \in C_{\beta }$ for each $\beta <\alpha $, and so there can be no $\beta <\alpha $ such that $\alpha \in f^{-1}(\beta )$; hence $f(\alpha )\geq \alpha $, contradicting the assumption that $f$ is regressive.
Fodor's lemma also holds for Thomas Jech's notion of stationary sets as well as for the general notion of stationary set.
Fodor's lemma for trees
Another related statement, also known as Fodor's lemma (or Pressing-Down-lemma), is the following:
For every non-special tree $T$ and regressive mapping $f:T\rightarrow T$ (that is, $f(t)<t$, with respect to the order on $T$, for every $t\in T$), there is a non-special subtree $S\subset T$ on which $f$ is constant.
References
• G. Fodor, Eine Bemerkung zur Theorie der regressiven Funktionen, Acta Sci. Math. Szeged, 17(1956), 139-142 .
• Karel Hrbacek & Thomas Jech, Introduction to Set Theory, 3rd edition, Chapter 11, Section 3.
• Mark Howard, Applications of Fodor's Lemma to Vaught's Conjecture. Ann. Pure and Appl. Logic 42(1): 1-19 (1989).
• Simon Thomas, The Automorphism Tower Problem. PostScript file at
• S. Todorcevic, Combinatorial dichotomies in set theory. pdf at
This article incorporates material from Fodor's lemma on PlanetMath, which is licensed under the Creative Commons Attribution/Share-Alike License.
| Wikipedia |
Reddy, SNS and Leonard, DN and Wiggins, LB and Jacob, KT (2005) Internal Displacement Reactions in Multicomponent Oxides: Part I. Line Compounds with Narrow Homogeneity Range. In: Metallurgical and Materials Transactions A, 36A (10). pp. 2695-2703.
As a model of an internal displacement reaction involving a ternary oxide line compound, the following reaction was studied at 1273 K as a function of time, t: $Fe+NiTiO_3 = Ni + FeTiO_3$. Both polycrystalline and single-crystal materials were used as the starting $NiTiO_3$ oxide. During the reaction, the Ni in the oxide compound is displaced by Fe and precipitates as a $\gamma$-(Ni-Fe) alloy. The reaction preserves the starting ilmenite structure. The product oxide has a constant Ti concentration across the reaction zone, with variation in the concentration of Fe and Ni, consistent with ilmenite composition. In the case of single-crystal $NiTiO_3$ as the starting oxide, the $\gamma$ alloy has a layered structure and the layer separation is suggestive of Liesegang-type precipitation. In the case of polycrystalline $NiTiO_3$ as the starting oxide, the alloy precipitates mainly along grain boundaries, with some particles inside the grains. A concentration gradient exists in the alloy across the reaction zone and the composition is >95 at. pct Ni at the reaction front. The parabolic rate constant for the reaction is $k_p = 1.3 \times 10^{-12}\ \mathrm{m^2\,s^{-1}}$ and is nearly the same for both single-crystal and polycrystalline oxides.
The copyright for this article belongs to Minerals Metals and Materials Society. | CommonCrawl |
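As a rough back-of-the-envelope illustration of what a parabolic rate constant of this magnitude implies (not taken from the paper — the growth-law convention $x^2 = k_p t$ and the 10-hour reaction time are assumptions chosen here for illustration):

```python
import math

# Assumed parabolic growth law x^2 = k_p * t (some texts use x^2 = 2*k_p*t)
k_p = 1.3e-12   # m^2/s, parabolic rate constant from the abstract
t = 10 * 3600.0  # 10 hours of reaction, an illustrative choice

x = math.sqrt(k_p * t)  # reaction-zone thickness
print(f"reaction-zone thickness after 10 h: {x * 1e6:.0f} micrometres")
```

This gives a zone on the order of a couple of hundred micrometres, a plausible scale for solid-state displacement reactions at this temperature.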
Pierre Deligne
Pierre René, Viscount Deligne (French: [dəliɲ]; born 3 October 1944) is a Belgian mathematician. He is best known for work on the Weil conjectures, leading to a complete proof in 1973. He is the winner of the 2013 Abel Prize, 2008 Wolf Prize, 1988 Crafoord Prize, and 1978 Fields Medal.
Pierre Deligne
Deligne in March 2005
Born: 3 October 1944
Etterbeek, Belgium
Nationality: Belgian
Alma mater: Université libre de Bruxelles
Known for: Proof of the Weil conjectures
Perverse sheaves
Concepts named after Deligne
Awards: Abel Prize (2013)
Wolf Prize (2008)
Balzan Prize (2004)
Crafoord Prize (1988)
Fields Medal (1978)
Scientific career
Fields: Mathematics
Institutions: Institute for Advanced Study
Institut des Hautes Études Scientifiques
Doctoral advisor: Alexander Grothendieck
Doctoral students: Lê Dũng Tráng
Miles Reid
Michael Rapoport
Early life and education
Deligne was born in Etterbeek, attended school at Athénée Adolphe Max and studied at the Université libre de Bruxelles (ULB), writing a dissertation titled Théorème de Lefschetz et critères de dégénérescence de suites spectrales (Theorem of Lefschetz and criteria of degeneration of spectral sequences). He completed his doctorate at the University of Paris-Sud in Orsay in 1972 under the supervision of Alexander Grothendieck, with a thesis titled Théorie de Hodge.
Career
Starting in 1972, Deligne worked with Grothendieck at the Institut des Hautes Études Scientifiques (IHÉS) near Paris, initially on the generalization within scheme theory of Zariski's main theorem. In 1968, he also worked with Jean-Pierre Serre; their work led to important results on the l-adic representations attached to modular forms, and the conjectural functional equations of L-functions. Deligne also focused on topics in Hodge theory. He introduced the concept of weights and tested them on objects in complex geometry. He also collaborated with David Mumford on a new description of the moduli spaces for curves. Their work came to be seen as an introduction to one form of the theory of algebraic stacks, and recently has been applied to questions arising from string theory. But Deligne's most famous contribution was his proof of the third and last of the Weil conjectures. This proof completed a programme initiated and largely developed by Alexander Grothendieck lasting for more than a decade. As a corollary he proved the celebrated Ramanujan–Petersson conjecture for modular forms of weight greater than one; weight one was proved in his work with Serre. Deligne's 1974 paper contains the first proof of the Weil conjectures. Deligne's contribution was to supply the estimate of the eigenvalues of the Frobenius endomorphism, considered the geometric analogue of the Riemann hypothesis. It also led to a proof of the hard Lefschetz theorem and to old and new estimates of classical exponential sums, among other applications. Deligne's 1980 paper contains a much more general version of the Riemann hypothesis.
From 1970 until 1984, Deligne was a permanent member of the IHÉS staff. During this time he did much important work outside of his work on algebraic geometry. In joint work with George Lusztig, Deligne applied étale cohomology to construct representations of finite groups of Lie type; with Michael Rapoport, Deligne worked on the moduli spaces from the 'fine' arithmetic point of view, with application to modular forms. He received a Fields Medal in 1978. In 1984, Deligne moved to the Institute for Advanced Study in Princeton.
Hodge cycles
In terms of the completion of some of the underlying Grothendieck program of research, he defined absolute Hodge cycles, as a surrogate for the missing and still largely conjectural theory of motives. This idea allows one to get around the lack of knowledge of the Hodge conjecture, for some applications. The theory of mixed Hodge structures, a powerful tool in algebraic geometry that generalizes classical Hodge theory, was created by applying weight filtration, Hironaka's resolution of singularities and other methods, which he then used to prove the Weil conjectures. He reworked the Tannakian category theory in his 1990 paper for the "Grothendieck Festschrift", employing Beck's theorem – the Tannakian category concept being the categorical expression of the linearity of the theory of motives as the ultimate Weil cohomology. All this is part of the yoga of weights, uniting Hodge theory and the l-adic Galois representations. The Shimura variety theory is related, by the idea that such varieties should parametrize not just good (arithmetically interesting) families of Hodge structures, but actual motives. This theory is not yet a finished product, and more recent trends have used K-theory approaches.
Perverse sheaves
With Alexander Beilinson, Joseph Bernstein, and Ofer Gabber, Deligne made definitive contributions to the theory of perverse sheaves.[1] This theory plays an important role in the recent proof of the fundamental lemma by Ngô Bảo Châu. It was also used by Deligne himself to greatly clarify the nature of the Riemann–Hilbert correspondence, which extends Hilbert's twenty-first problem to higher dimensions. Deligne's paper was preceded by work on the problem in Zoghman Mebkhout's 1980 thesis and by Masaki Kashiwara's work through D-module theory (though published in the 1980s).
Other works
In 1974 at the IHÉS, Deligne's joint paper with Phillip Griffiths, John Morgan and Dennis Sullivan on the real homotopy theory of compact Kähler manifolds was a major piece of work in complex differential geometry which settled several important questions of both classical and modern significance. The input from the Weil conjectures, Hodge theory, variations of Hodge structures, and many geometric and topological tools was critical to its investigations. His work in complex singularity theory generalized Milnor maps into an algebraic setting and extended the Picard–Lefschetz formula beyond its general format, generating a new method of research in this subject. His paper with Ken Ribet on abelian L-functions and their extensions to Hilbert modular surfaces and p-adic L-functions forms an important part of his work in arithmetic geometry. Other important research achievements of Deligne include the notion of cohomological descent, motivic L-functions, mixed sheaves, nearby vanishing cycles, central extensions of reductive groups, geometry and topology of braid groups, the work in collaboration with George Mostow on examples of non-arithmetic lattices and the monodromy of hypergeometric differential equations in two- and three-dimensional complex hyperbolic spaces, etc.
Awards
He was awarded the Fields Medal in 1978, the Crafoord Prize in 1988, the Balzan Prize in 2004, the Wolf Prize in 2008, and the Abel Prize in 2013, "for seminal contributions to algebraic geometry and for their transformative impact on number theory, representation theory, and related fields". He was elected a foreign member of the Academie des Sciences de Paris in 1978.
In 2006 he was ennobled by the Belgian king as viscount.[2]
In 2009, Deligne was elected a foreign member of the Royal Swedish Academy of Sciences[3] and a residential member of the American Philosophical Society.[4] He is a member of the Norwegian Academy of Science and Letters.[5]
Selected publications
• Deligne, Pierre (1974). "La conjecture de Weil: I". Publications Mathématiques de l'IHÉS. 43: 273–307. doi:10.1007/bf02684373. S2CID 123139343.
• Deligne, Pierre (1980). "La conjecture de Weil : II". Publications Mathématiques de l'IHÉS. 52: 137–252. doi:10.1007/BF02684780. S2CID 189769469.
• Deligne, Pierre (1990). "Catégories tannakiennes". Grothendieck Festschrift Vol II. Progress in Mathematics. 87: 111–195.
• Deligne, Pierre; Griffiths, Phillip; Morgan, John; Sullivan, Dennis (1975). "Real homotopy theory of Kähler manifolds". Inventiones Mathematicae. 29 (3): 245–274. Bibcode:1975InMat..29..245D. doi:10.1007/BF01389853. MR 0382702. S2CID 1357812.
• Deligne, Pierre; Mostow, George Daniel (1993). Commensurabilities among Lattices in PU(1,n). Princeton, N.J.: Princeton University Press. ISBN 0-691-00096-4.
• Quantum fields and strings: a course for mathematicians. Vols. 1, 2. Material from the Special Year on Quantum Field Theory held at the Institute for Advanced Study, Princeton, NJ, 1996–1997. Edited by Pierre Deligne, Pavel Etingof, Daniel S. Freed, Lisa C. Jeffrey, David Kazhdan, John W. Morgan, David R. Morrison and Edward Witten. American Mathematical Society, Providence, RI; Institute for Advanced Study (IAS), Princeton, NJ, 1999. Vol. 1: xxii+723 pp.; Vol. 2: pp. i–xxiv and 727–1501. ISBN 0-8218-1198-3.
Hand-written letters
Deligne wrote multiple hand-written letters to other mathematicians in the 1970s. These include
• "Deligne's letter to Piatetskii-Shapiro (1973)" (PDF). Archived from the original (PDF) on 7 December 2012. Retrieved 15 December 2012.
• "Deligne's letter to Jean-Pierre Serre (around 1974)". 15 December 2012.
• "Deligne's letter to Looijenga (1974)" (PDF). Retrieved 20 January 2020.
• "Deligne's letter to Millson (1986)" (PDF). Retrieved 11 November 2021.
Concepts named after Deligne
The following mathematical concepts are named after Deligne:
• Deligne–Lusztig theory
• Deligne–Mumford moduli space of curves
• Deligne–Mumford stacks
• Fourier–Deligne transform
• Deligne cohomology
• Deligne motive[6]
• Deligne tensor product of abelian categories (denoted $\boxtimes $)[7]
• Deligne's theorem
• Langlands–Deligne local constant
• Weil-Deligne group
Additionally, many different conjectures in mathematics have been called the Deligne conjecture:
• Deligne's conjecture on Hochschild cohomology.
• The Deligne conjecture on special values of L-functions is a formulation of the hope for algebraicity of L(n) where L is an L-function and n is an integer in some set depending on L.
• There is a Deligne conjecture on 1-motives arising in the theory of motives in algebraic geometry.
• There is a Gross–Deligne conjecture in the theory of complex multiplication.
• There is a Deligne conjecture on monodromy, also known as the weight monodromy conjecture, or purity conjecture for the monodromy filtration.
• There is a Deligne conjecture in the representation theory of exceptional Lie groups.
• There is a conjecture named the Deligne–Grothendieck conjecture for the discrete Riemann–Roch theorem in characteristic 0.
• There is a conjecture named the Deligne–Milnor conjecture for the differential interpretation of a formula of Milnor for Milnor fibres, as part of the extension of nearby cycles and their Euler numbers.
• The Deligne–Milne conjecture is formulated as part of motives and Tannakian categories.
• There is a Deligne–Langlands conjecture of historical importance in relation with the development of the Langlands philosophy.
• Deligne's conjecture on the Lefschetz trace formula[8] (now called Fujiwara's theorem for equivariant correspondences).[9]
See also
• Brumer–Stark conjecture
• E7½
• Hodge–de Rham spectral sequence
• Logarithmic form
• Kodaira vanishing theorem
• Moduli of algebraic curves
• Motive (algebraic geometry)
• Perverse sheaf
• Riemann–Hilbert correspondence
• Serre's modularity conjecture
• Standard conjectures on algebraic cycles
References
1. Mark Andrea A. de Cataldo, Luca Migliorini: The Decomposition theorem, perverse sheaves and the topology of algebraic maps. In: Bulletin of the American Mathematical Society. Band 46, Nr. 4, 2009, S. 535–633, (Online).
2. Official announcement ennoblement – Belgian Federal Public Service. 18 July 2006 Archived 30 October 2007 at the Wayback Machine
3. Royal Swedish Academy of Sciences: Many new members elected to the Academy, press release on 12 February 2009 Archived 10 July 2018 at the Wayback Machine
4. "APS Member History". search.amphilsoc.org. Retrieved 23 April 2021.
5. "Gruppe 1: Matematiske fag" (in Norwegian). Norwegian Academy of Science and Letters. Retrieved 2 August 2022.{{cite web}}: CS1 maint: url-status (link)
6. motive in nLab
7. Deligne tensor product of abelian categories in nLab
8. Yakov Varshavsky (2005), "A proof of a generalization of Deligne's conjecture", p. 1.
9. Martin Olsson, "Fujiwara's Theorem for Equivariant Correspondences", p. 1.
External links
Wikiquote has quotations related to Pierre Deligne.
Wikinews has related news:
• Norwegian Academy of Science and Letters awards Belgian mathematician Pierre Deligne with Abel prize of 2013
• O'Connor, John J.; Robertson, Edmund F., "Pierre Deligne", MacTutor History of Mathematics Archive, University of St Andrews
• Pierre Deligne at the Mathematics Genealogy Project
• Roberts, Siobhan (19 June 2012). "Simons Foundation: Pierre Deligne". Simons Foundation. – Biography and extended video interview.
• Pierre Deligne's home page at Institute for Advanced Study
• Katz, Nick (June 1980), "The Work of Pierre Deligne", Proceedings of the International Congress of Mathematicians, Helsinki 1978 (PDF), Helsinki, pp. 47–52, ISBN 951-410-352-1, archived from the original (PDF) on 12 July 2012{{citation}}: CS1 maint: location missing publisher (link) An introduction to his work at the time of his Fields medal award.
Fields Medalists
• 1936 Ahlfors
• Douglas
• 1950 Schwartz
• Selberg
• 1954 Kodaira
• Serre
• 1958 Roth
• Thom
• 1962 Hörmander
• Milnor
• 1966 Atiyah
• Cohen
• Grothendieck
• Smale
• 1970 Baker
• Hironaka
• Novikov
• Thompson
• 1974 Bombieri
• Mumford
• 1978 Deligne
• Fefferman
• Margulis
• Quillen
• 1982 Connes
• Thurston
• Yau
• 1986 Donaldson
• Faltings
• Freedman
• 1990 Drinfeld
• Jones
• Mori
• Witten
• 1994 Bourgain
• Lions
• Yoccoz
• Zelmanov
• 1998 Borcherds
• Gowers
• Kontsevich
• McMullen
• 2002 Lafforgue
• Voevodsky
• 2006 Okounkov
• Perelman
• Tao
• Werner
• 2010 Lindenstrauss
• Ngô
• Smirnov
• Villani
• 2014 Avila
• Bhargava
• Hairer
• Mirzakhani
• 2018 Birkar
• Figalli
• Scholze
• Venkatesh
• 2022 Duminil-Copin
• Huh
• Maynard
• Viazovska
Laureates of the Wolf Prize in Mathematics
1970s
• Israel Gelfand / Carl L. Siegel (1978)
• Jean Leray / André Weil (1979)
1980s
• Henri Cartan / Andrey Kolmogorov (1980)
• Lars Ahlfors / Oscar Zariski (1981)
• Hassler Whitney / Mark Krein (1982)
• Shiing-Shen Chern / Paul Erdős (1983/84)
• Kunihiko Kodaira / Hans Lewy (1984/85)
• Samuel Eilenberg / Atle Selberg (1986)
• Kiyosi Itô / Peter Lax (1987)
• Friedrich Hirzebruch / Lars Hörmander (1988)
• Alberto Calderón / John Milnor (1989)
1990s
• Ennio de Giorgi / Ilya Piatetski-Shapiro (1990)
• Lennart Carleson / John G. Thompson (1992)
• Mikhail Gromov / Jacques Tits (1993)
• Jürgen Moser (1994/95)
• Robert Langlands / Andrew Wiles (1995/96)
• Joseph Keller / Yakov G. Sinai (1996/97)
• László Lovász / Elias M. Stein (1999)
2000s
• Raoul Bott / Jean-Pierre Serre (2000)
• Vladimir Arnold / Saharon Shelah (2001)
• Mikio Sato / John Tate (2002/03)
• Grigory Margulis / Sergei Novikov (2005)
• Stephen Smale / Hillel Furstenberg (2006/07)
• Pierre Deligne / Phillip A. Griffiths / David B. Mumford (2008)
2010s
• Dennis Sullivan / Shing-Tung Yau (2010)
• Michael Aschbacher / Luis Caffarelli (2012)
• George Mostow / Michael Artin (2013)
• Peter Sarnak (2014)
• James G. Arthur (2015)
• Richard Schoen / Charles Fefferman (2017)
• Alexander Beilinson / Vladimir Drinfeld (2018)
• Jean-François Le Gall / Gregory Lawler (2019)
2020s
• Simon K. Donaldson / Yakov Eliashberg (2020)
• George Lusztig (2022)
• Ingrid Daubechies (2023)
Abel Prize laureates
• 2003 Jean-Pierre Serre
• 2004 Michael Atiyah
• Isadore Singer
• 2005 Peter Lax
• 2006 Lennart Carleson
• 2007 S. R. Srinivasa Varadhan
• 2008 John G. Thompson
• Jacques Tits
• 2009 Mikhail Gromov
• 2010 John Tate
• 2011 John Milnor
• 2012 Endre Szemerédi
• 2013 Pierre Deligne
• 2014 Yakov Sinai
• 2015 John Forbes Nash Jr.
• Louis Nirenberg
• 2016 Andrew Wiles
• 2017 Yves Meyer
• 2018 Robert Langlands
• 2019 Karen Uhlenbeck
• 2020 Hillel Furstenberg
• Grigory Margulis
• 2021 László Lovász
• Avi Wigderson
• 2022 Dennis Sullivan
• 2023 Luis Caffarelli
| Wikipedia |
Conservation of Angular Momentum Examples | Physics 11
By eSaral
Here are some examples of conservation of angular momentum:
(i) A point mass is tied to one end of a cord whose other end passes through a vertical hollow tube held in one hand. The point mass is rotated in a horizontal circle, and the cord is then pulled down so that the radius of the circle reduces.
Here the force on the point mass due to the cord is radial, and hence the torque about the centre of rotation is zero; therefore, the angular momentum must remain constant as the cord is shortened.
As the cord is pulled down, the point mass rotates faster than before. The reason is that by shortening the radius of the circle, the moment of inertia of the point mass about the axis of rotation decreases:
$\mathrm{I}_{\mathrm{i}} \omega_{\mathrm{i}}=\mathrm{I}_{\mathrm{f}} \omega_{\mathrm{f}} \quad \mathrm{I}_{\mathrm{i}}>\mathrm{I}_{\mathrm{f}} \quad$ so $\omega_{\mathrm{i}}<\omega_{\mathrm{f}}$
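A quick numerical sketch of this relation for a point mass on a shortening cord (the numbers are illustrative, not from the text):

```python
# Conservation of angular momentum, I_i * w_i = I_f * w_f,
# for a point mass whose circle radius is halved.
m = 0.5              # kg, point mass (illustrative)
r_i, r_f = 1.0, 0.5  # m, radius before and after pulling the cord
w_i = 2.0            # rad/s, initial angular velocity

I_i = m * r_i**2     # moment of inertia of a point mass: m*r^2
I_f = m * r_f**2

w_f = I_i * w_i / I_f
print(w_f)  # 8.0 rad/s: halving the radius quarters I, so omega quadruples
```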
(ii) A man with his arms outstretched, holding heavy dumbbells in each hand, is standing at the centre of a rotating table. When the man pulls in his arms, the speed of rotation of the table increases. The reason is that on pulling in the arms, the distance R of the dumbbells from the axis of rotation decreases, and so the moment of inertia decreases. Therefore, by conservation of angular momentum, the angular velocity increases.
(iii) When a diver jumps into water from a height, he does not keep his body straight but pulls in his arms and legs towards the centre of his body. On doing so, the moment of inertia I of his body decreases. But since the angular momentum I $\omega$ remains constant, his angular velocity $\omega$ correspondingly increases.
Hence during jumping he can rotate his body in the air.
(iv) In the same way, ice skaters and ballet dancers increase or decrease the angular velocity of spin about a vertical axis by pulling in or extending out their limbs.
(v) A man holding a spinning wheel sits on a rotating chair. A: the wheel's axis is horizontal; the angular momentum about the vertical axis $=0$. B: the axis is turned vertical; the angular momentum about the vertical axis must still be zero, so the man and chair spin in the direction opposite to the spin of the wheel.
(vi) The angular velocity of a planet around the Sun increases when it comes near the Sun. When a planet revolving around the Sun in an elliptical orbit comes near the Sun, the moment of inertia of the planet about the Sun decreases. In order to conserve angular momentum, the angular velocity increases. Similarly, when the planet is far from the Sun, there is a decrease in the angular velocity.
(vii) The speed of the inner layers of the whirlwind in a tornado is alarmingly high. In a tornado, the moment of inertia of the air goes on decreasing as the air moves towards the centre. This is accompanied by an increase in angular velocity such that the angular momentum is conserved.
Ex. A cockroach of mass m starts moving with velocity v along the circumference of a disc of mass M and radius R, both initially at rest. What will be the angular velocity of the disc?
Initial total angular momentum $=$ final total angular momentum:
$0+0=m v R+\frac{M R^{2}}{2} \omega$
$\omega=-\frac{2 m v}{M R}$; the negative angular velocity indicates rotation in the direction opposite to the cockroach.
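A short numerical check of this result (the masses and speeds are illustrative, not from the text):

```python
# Total angular momentum stays zero: m*v*R + (M*R^2/2)*omega = 0
m, v = 0.05, 0.4   # kg, m/s: cockroach mass and speed (illustrative)
M, R = 1.0, 0.2    # kg, m: disc mass and radius (illustrative)

L_cockroach = m * v * R
I_disc = M * R**2 / 2          # solid disc about its centre
omega = -L_cockroach / I_disc  # disc spins opposite to the cockroach
print(omega)  # ≈ -0.2 rad/s, i.e. -2*m*v/(M*R)
```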
Ex. A rotating table has angular velocity $\omega$ and moment of inertia $I_1$. A person of mass m stands at the centre of the rotating table. If the person moves a distance r from the centre, what will be the angular velocity of the rotating table?
$\mathrm{I}_{1} \omega=\left(\mathrm{I}_{1}+\mathrm{mr}^{2}\right) \omega_{2} \quad$ or $\quad \omega_{2}=\frac{\mathrm{I}_{1} \omega}{\mathrm{I}_{1}+\mathrm{mr}^{2}}$
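The same formula evaluated numerically (all values illustrative, not from the text):

```python
# I1*w = (I1 + m*r^2)*w2: the person walking outward adds m*r^2 to the
# system's moment of inertia, so the table slows down.
I1 = 0.8          # kg·m^2, moment of inertia of the table (illustrative)
m, r = 60.0, 0.4  # kg, m: person's mass and distance from the centre
w = 3.0           # rad/s, initial angular velocity

w2 = I1 * w / (I1 + m * r**2)
print(w2)  # smaller than w, as expected
```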
Ex. A horizontal disc is rotating about a vertical axis passing through its centre at the rate of 100 rev/min. A blob of wax, of mass 20 gm, falls on the disc and sticks to it at a distance of 5 cm from the axis. If the moment of inertia of the disc about the given axis is $2 \times 10^{-4}\ \mathrm{kg}$-$\mathrm{m}^{2}$, find the new frequency of rotation of the disc.
The M.I. of the disc, $\mathrm{I}_{1}=2 \times 10^{-4}\ \mathrm{kg}$-$\mathrm{m}^{2}$
The M.I. of the blob of wax, $\mathrm{I}_{2}=20 \times 10^{-3} \times(0.05)^{2}=0.5 \times 10^{-4}\ \mathrm{kg}$-$\mathrm{m}^{2}$
Let the initial angular speed of the disc be $\omega=2 \pi n$ and let the final angular speed of the disc and blob of wax be $\omega^{\prime}=2 \pi n^{\prime}$
\text { Then, } \quad \mathrm{I}_{1} \omega=\left(\mathrm{I}_{1}+\mathrm{I}_{2}\right) \omega^{\prime}
$\mathrm{I}_{1} \times 2 \pi \mathrm{n}=\left(\mathrm{I}_{1}+\mathrm{I}_{2}\right) 2 \pi \mathrm{n'}$ (The law of conservation of angular momentum)
$\therefore 2 \times 10^{-4} \times 100=\left(2 \times 10^{-4}+0.5 \times 10^{-4}\right) \times n'$
so, $n^{\prime}=\frac{2}{2.5} \times 10^{2}=80 \mathrm{rev} / \mathrm{min}$
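The same computation, using the numbers from the worked example (note that the $2\pi$ factors cancel, so we can work directly in rev/min):

```python
I1 = 2e-4            # kg·m^2, moment of inertia of the disc
m, r = 20e-3, 0.05   # kg, m: blob of wax and its distance from the axis
I2 = m * r**2        # 0.5e-4 kg·m^2
n = 100              # rev/min, initial rotation rate

# I1*(2*pi*n) = (I1 + I2)*(2*pi*n'): the 2*pi factors cancel
n_new = I1 * n / (I1 + I2)
print(n_new)  # ≈ 80 rev/min, matching the worked answer
```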
Ex. A solid cylinder of mass M and radius R is rotating about its axis with angular velocity $\omega$ without friction. A particle of mass m moving at v m/s collides against the cylinder and sticks to it. Calculate the angular velocity and angular momentum of the cylinder, and the initial and final kinetic energy of the system.
Initial angular momentum of cylinder $=\mathrm{I} \omega \quad$ Initial angular momentum of particle $=\mathrm{m} \mathrm{v} \mathrm{R}$
Before sticking total angular momentum $\mathrm{J}_{1}=\mathrm{I} \omega+\mathrm{mvR}$
After sticking total angular momentum $\quad \mathrm{J}_{2}=\left(\mathrm{I}+\mathrm{mR}^{2}\right) \omega^{\prime}$
if $\tau=0 \quad$ then $\quad J_{1}=J_{2}$
Angular velocity $\quad \omega^{\prime}=\frac{I \omega+m v R}{I+m R^{2}}$
Initial kinetic energy of system $=\frac{1}{2} \mathrm{I} \omega^{2}+\frac{1}{2} \mathrm{mv}^{2}$
Final kinetic energy of system $\quad=\frac{1}{2}\left(\mathrm{I}+\mathrm{mR}^{2}\right) \omega'^{2}$
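A numeric sketch of this sticking collision (values illustrative, not from the text) also shows that kinetic energy is lost, as expected for a perfectly inelastic collision, even though angular momentum is conserved:

```python
M, R = 2.0, 0.5   # kg, m: cylinder mass and radius (illustrative)
m, v = 0.1, 3.0   # kg, m/s: particle mass and speed (illustrative)
w = 4.0           # rad/s, initial angular velocity of the cylinder

I = M * R**2 / 2                                # solid cylinder about its axis
w_new = (I * w + m * v * R) / (I + m * R**2)    # conservation of angular momentum

KE_i = 0.5 * I * w**2 + 0.5 * m * v**2
KE_f = 0.5 * (I + m * R**2) * w_new**2
assert KE_f < KE_i  # sticking (inelastic) collision dissipates kinetic energy
print(w_new, KE_i - KE_f)
```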
Ex. If the earth were suddenly to shrink to half its size (its mass remaining constant), what would be the length of a day?
Sol. $\quad \mathrm{I}_{1} \omega_{1}=\mathrm{I}_{2} \omega_{2} \quad$ so $\quad \frac{2}{5} \mathrm{MR}^{2} \times \frac{2 \pi}{\mathrm{T}_{1}}=\frac{2}{5} \mathrm{M}\left(\frac{\mathrm{R}}{2}\right)^{2} \frac{2 \pi}{\mathrm{T}_{2}}$
hence $\mathrm{T}_{1}=24 \mathrm{hr} \quad \mathrm{T}_{2}=6 \mathrm{hr}$
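The arithmetic, following the solution above (the $\frac{2}{5}M$ and $2\pi$ factors cancel, leaving $T_2 = T_1 (R_2/R_1)^2$):

```python
# I ∝ R^2 for a uniform sphere of fixed mass, and I1*(2*pi/T1) = I2*(2*pi/T2)
T1 = 24.0     # hours, current day length
ratio = 0.5   # radius shrinks to half

T2 = T1 * ratio**2
print(T2)  # 6.0 hours
```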
Ex. In which of the following cases is it most difficult to rotate the rod?
Sol. $\quad(\mathrm{c})$ as $\mathrm{I}$ is maximum.
Ex. A thin metre scale is kept vertical by placing one end on the floor. Keeping the end in contact with the floor stationary, it is allowed to fall. Find the velocity of its upper end when it hits the floor.
Sol. $\quad \mathrm{mgh}=\frac{1}{2} \mathrm{I} \omega^{2} \quad$ where $\mathrm{h}=\frac{\ell}{2}, \mathrm{I}=\frac{\mathrm{m} \ell^{2}}{3}, \omega=\frac{\mathrm{v}}{\ell}$
[Loss is P.E. = Rotational K.E.]
$\frac{\mathrm{mg} \ell}{2}=\frac{1}{2} \times \frac{\mathrm{m} \ell^{2}}{3} \times \frac{\mathrm{v}^{2}}{\ell^{2}} \quad \mathrm{v}=\sqrt{3 \mathrm{g} \ell}$
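Evaluating $v = \sqrt{3g\ell}$ for a metre scale (taking $g = 9.8\ \mathrm{m/s^2}$ as an assumed value):

```python
import math

# mg(l/2) = (1/2)(m*l^2/3)*omega^2 with v = omega*l  =>  v = sqrt(3*g*l)
g = 9.8  # m/s^2, assumed
l = 1.0  # m, a metre scale

v = math.sqrt(3 * g * l)
print(v)  # ≈ 5.42 m/s
```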
Ex. A particle is moving in $x-y$ plane and the components of its velocity along $x$ and $y$ axis are $v_{x}$ and $v_{y}$. Find the angular momentum about the origin.
We know that angular momentum of a particle $\vec{J}=\overrightarrow{\mathrm{r}} \times \overrightarrow{\mathrm{p}}=\overrightarrow{\mathrm{r}} \times \mathrm{m} \overrightarrow{\mathrm{v}}=\mathrm{m}(\overrightarrow{\mathrm{r}} \times \overrightarrow{\mathrm{v}})$
$=\mathrm{m}\left|\begin{array}{ccc}{\hat{\mathrm{i}}} & {\hat{\mathrm{j}}} & {\hat{\mathrm{k}}} \\ {\mathrm{x}} & {\mathrm{y}} & {\mathrm{z}} \\ {\mathrm{v}_{\mathrm{x}}} & {\mathrm{v}_{\mathrm{y}}} & {\mathrm{v}_{\mathrm{z}}}\end{array}\right|=\mathrm{m}\left|\begin{array}{ccc}{\hat{\mathrm{i}}} & {\hat{\mathrm{j}}} & {\hat{\mathrm{k}}} \\ {\mathrm{x}} & {\mathrm{y}} & {0} \\ {\mathrm{v}_{\mathrm{x}}} & {\mathrm{v}_{\mathrm{y}}} & {0}\end{array}\right|=\mathrm{m}\left(\mathrm{x} \mathrm{v}_{\mathrm{y}}-\mathrm{y} \mathrm{v}_{\mathrm{x}}\right) \hat{\mathrm{k}}$
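A check of this determinant expansion with the full component-wise cross product (the particular numbers are illustrative):

```python
# J = m (r × v) for motion in the x-y plane reduces to J_z = m*(x*v_y - y*v_x).
m = 2.0
x, y, z = 3.0, 1.0, 0.0
vx, vy, vz = 0.5, 2.0, 0.0

# full cross product r × v, written out component by component
cx = y * vz - z * vy
cy = z * vx - x * vz
cz = x * vy - y * vx

Jz = m * (x * vy - y * vx)
assert cx == 0.0 and cy == 0.0  # in-plane motion: only the k-component survives
print(Jz)  # 11.0 for these numbers
```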
Ex. A ring of mass $10 \mathrm{kg}$ and diameter 0.4 $\mathrm{m}$ is rotating about its geometrical axis at 1200 rot./min. Find its moment of inertia and angular momentum.
M.I. of a ring about its geometrical axis $=$ M.I. of the ring about an axis passing through its centre and perpendicular to its plane
$=M R^{2}=10(0.2)^{2}=10 \times 0.04=0.4 \mathrm{kg}-\mathrm{m}^{2}$
Now angular momentum, $J=I \omega=I\, \frac{2 \pi n}{60}=0.4 \times \frac{2 \pi \times 1200}{60}=50.24\ \mathrm{J\text{-}sec}$
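The same computation with the numbers from the example (using full-precision $\pi$; the text rounds $\pi \approx 3.14$, giving 50.24):

```python
import math

M, D = 10.0, 0.4   # kg, m: ring mass and diameter
R = D / 2
n = 1200           # rotations per minute

I = M * R**2                  # ring about its geometrical axis
omega = 2 * math.pi * n / 60  # rad/s
J = I * omega
print(I, J)  # ≈ 0.4 kg·m^2 and ≈ 50.27 J·s
```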
Ex. A homogeneous rod $\mathrm{AB}$ of length $\mathrm{L}$ and mass $\mathrm{M}$ is pivoted at the centre O in such a way that it can rotate freely in a vertical plane. The rod $\mathrm{AB}$ is initially in the horizontal position. An insect $\mathrm{S}$ of the same mass falls vertically with a speed $\mathrm{V}$ at the point C, midway between points O and $\mathrm{B}$. Immediately after the insect falls, the angular velocity of the system is
Sol. By Law of conservation of angular momentum
$\left(\sum \mathrm{mvr}\right)_{\text {about } \mathrm{O}}=\left(\mathrm{I}_{\text {system about } \mathrm{O}}\right) \omega$
$\Rightarrow M V \frac{L}{4}=\left[\frac{M L^{2}}{12}+M\left(\frac{L}{4}\right)^{2}\right] \omega \Rightarrow \quad M V \frac{L}{4}=M L^{2}\left(\frac{1}{12}+\frac{1}{16}\right) \omega$
$\Rightarrow \mathrm{MV} \frac{\mathrm{L}}{4}=\mathrm{ML}^{2}\left(\frac{4+3}{48}\right) \omega \quad \Rightarrow \quad \omega=\frac{12}{7} \frac{\mathrm{V}}{\mathrm{L}}$
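Evaluating the result numerically (M, L and V chosen for illustration; the answer $\omega = \frac{12}{7}\frac{V}{L}$ is independent of M, since it cancels):

```python
# M*V*(L/4) = [M*L^2/12 + M*(L/4)^2] * omega  =>  omega = (12/7)*V/L
M, L, V = 0.2, 1.0, 2.0  # kg, m, m/s (illustrative)

I = M * L**2 / 12 + M * (L / 4)**2  # rod about centre + insect at L/4
omega = M * V * (L / 4) / I
print(omega)  # equals 12*V/(7*L) ≈ 3.43 rad/s here
```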
Introduction to Rotational Dynamics
Moment of Inertia
Moment of Inertia: Perpendicular and Parallel axis theorem
Radius of Gyration
Law of Conservation of Angular Momentum
Conservation of Angular Momentum Examples
Kinetic Energy of a Rotating Body
Work done in rotatory Motion
Rotational Power
Combine Translational and Rotational Motion
Rolling without slipping
Rolling on a plane surface
Rolling on a Inclined Plane
John Chambers (statistician)
John McKinley Chambers is the creator of the S programming language, and core member of the R programming language project. He was awarded the 1998 ACM Software System Award for developing S.[1]
John Chambers
Born
John McKinley Chambers
Alma mater: University of Toronto (BS)
Harvard University (MA, PhD)
Known for: R programming language
Awards
• ACM Software System Award (1998)
• Fellow of the American Statistical Association
• Fellow of the American Association for the Advancement of Science
• Fellow of the Institute of Mathematical Statistics
Scientific career
Fields: Statistical computing
Institutions
• Bell Labs (1966–2005)
• Stanford University (2008–present)
Website: statweb.stanford.edu/~jmc4/
Early life
Chambers received a Bachelor of Science from the University of Toronto in 1963. He received a Master of Arts in 1965 and a PhD degree in 1966, both in statistics, from Harvard University.[1][2][3]
Career
Chambers started at Bell Laboratories in 1966 as a member of its technical staff.[1][3] From 1981 to 1983, he was the head of its Advanced Software Department and from 1983 to 1989 he was the head of its Statistics and Data Analysis Research Department.[1][3] In 1989, he moved back to full-time research and in 1995, he became a distinguished member of the technical staff.[1][3] In 1997, he was made the first Fellow of Bell Labs and was cited for "pioneering contributions to the field of statistical computing".[1] He remained a distinguished member of the technical staff and a Fellow until his retirement from Bell Labs in 2005.[3]
After retiring from Bell Labs, Chambers became a visiting professor at the University of Auckland, University of California, Los Angeles and Stanford University.[3][4] Since 2008, he has been active at Stanford, currently serving as Senior Advisor of its data science program and an adjunct professor in Stanford's Department of Statistics.[3]
Chambers is a Fellow of the American Statistical Association, the American Association for the Advancement of Science and the Institute of Mathematical Statistics.[3][2]
Awards and accomplishments
Chambers has received the following awards:
• 1998, awarded the ACM Software System Award for developing the S programming language. The award was presented on May 15, 1999.[1]
• 2004, awarded an honorary Doctor of Mathematics degree from the University of Waterloo[3]
John M. Chambers Statistical Software Award
Following his 1998 ACM Software System Award, Chambers donated his prize money (US$10,000) to the American Statistical Association to endow an award for novel statistical software, the John M. Chambers Statistical Software Award.[5]
Bibliography
• Chambers, John M. (1977). Computational methods for data analysis. New York: Wiley. ISBN 0-471-02772-3.
• Chambers, John M. (1983). Graphical methods for data analysis. Belmont, Calif: Wadsworth International Group. ISBN 0-534-98052-X.
• Chambers, John M. (1984). Compstat lectures: lectures in computational statistics. Heidelberg: Physica. ISBN 3-7051-0006-8.
• Becker, R.A.; Chambers, J.M. (1984). S: An Interactive Environment for Data Analysis and Graphics. Pacific Grove, CA, USA: Wadsworth & Brooks/Cole. ISBN 0-534-03313-X.
• Becker, R.A.; Chambers, J.M. (1985). Extending the S System. Pacific Grove, CA, USA: Wadsworth & Brooks/Cole. ISBN 0-534-05016-6.
• Becker, R.A.; Chambers, J.M.; Wilks, A.R. (1988). The New S Language: A Programming Environment for Data Analysis and Graphics. Pacific Grove, CA, USA: Wadsworth & Brooks/Cole. ISBN 0-534-09192-X.
• Chambers, J.M.; Hastie, T.J. (1991). Statistical Models in S. Pacific Grove, CA, USA: Wadsworth & Brooks/Cole. p. 624. ISBN 0-412-05291-1.
• Chambers, John M. (1998). Programming with data: a guide to the S language. Berlin: Springer. ISBN 0-387-98503-4.
• Chambers, John M. (2008). Software for data analysis programming with R. Berlin: Springer. ISBN 978-0-387-75935-7.
• Chambers, John M. (2016). Extending R. Florida: Chapman and Hall/CRC. p. 382. ISBN 978-1498775717.
References
1. "ACM honors Dr. John M. Chambers of Bell Labs with the 1998 ACM Software System Award for creating "S System" software". Association for Computing Machinery. September 21, 2008. Archived from the original on August 27, 2015. Retrieved May 26, 2021.
2. "John M. Chambers - Vitae". Bell Laboratories. Archived from the original on July 7, 2011.
3. "John M. Chambers - Vitae". Stanford University. Retrieved May 26, 2021.
4. Stanford University Department of Statistics Page for John M. Chambers. Accessed January 16, 2010.
5. "John M. Chambers Statistical Software Award. ASA Statistics Computing and Graphics".
BMC Medical Informatics and Decision Making
The impact of a computerized physician order entry system implementation on 20 different criteria of medication documentation—a before-and-after study
Viktoria Jungreithmayr1,2,
Andreas D. Meid1,
Implementation Team,
Walter E. Haefeli1,2 &
Hanna M. Seidling1,2
BMC Medical Informatics and Decision Making volume 21, Article number: 279 (2021) Cite this article
The medication process is complex and error-prone. To avoid medication errors, a medication order should fulfil certain criteria, such as good readability and comprehensiveness. In this context, a computerized physician order entry (CPOE) system can be helpful. This study aims to investigate the distinct effects of a CPOE system, implemented on general wards in a large tertiary care hospital, on the quality of prescription documentation.
In a retrospective analysis, the prescriptions of two groups of 160 patients each were evaluated, with data collected before and after the introduction of a CPOE system. According to nationally available recommendations on prescription documentation, it was assessed whether each prescription fulfilled the established 20 criteria for a safe, complete, and actionable prescription. The resulting fulfilment scores (prescription-Fscores) were compared between the pre-implementation and the post-implementation group and a multivariable analysis was performed to identify the effects of further covariates, i.e., the prescription category, the ward, and the number of concurrently prescribed drugs. Additionally, the fulfilment of the 20 criteria was assessed at an individual criterion-level (denoted criteria-Fscores).
The overall mean prescription-Fscore increased from 57.4% ± 12.0% (n = 1850 prescriptions) before to 89.8% ± 7.2% (n = 1592 prescriptions) after the implementation (p < 0.001). At the level of individual criteria, criteria-Fscores significantly improved in most criteria (n = 14), with 6 criteria reaching a total score of 100% after CPOE implementation. Four criteria showed no statistically significant difference and in two criteria, criteria-Fscores deteriorated significantly. A multivariable analysis confirmed the large impact of the CPOE implementation on prescription-Fscores which was consistent when adjusting for the confounding potential of further covariates.
While the quality of prescription documentation generally increases with implementation of a CPOE system, certain criteria are difficult to fulfil even with the help of a CPOE system. This highlights the need to accompany a CPOE implementation with a thorough evaluation that can provide important information on possible improvements of the software, training needs of prescribers, or the necessity of modifying the underlying clinical processes.
The occurrence of medication errors in hospitals is known to be a common and potentially serious threat to patient safety [1, 2]. While medication errors can occur at all stages of the medication process, prescribing errors are particularly common [3] and often caused by incorrect documentation of intended medication orders [4]. Manual, paper-based prescribing is still a significant error source as many errors are due to illegible handwriting or omitted data, such as missing dosage information, forgotten units of measure, or an incomplete route of administration. These errors are either due to prescribing oversights or to a lack of information or knowledge [5]. There are a number of guidelines defining the minimum standards that a drug prescription should meet, e.g., regarding comprehensive prescription documentation [4, 6,7,8]. These guideline standards can often be met through the implementation of a computerized physician order entry (CPOE) system. Therefore, these systems are frequently proposed as an important element to increase medication safety [5, 9,10,11,12,13,14,15].
Until now, the benefits of CPOE systems have often been assessed by evaluating medication errors. However, the reduction of medication error rates commonly stagnates at 50%, suggesting that CPOE systems can eliminate some but not all errors. In several retrospective analyses, the implementation of CPOE systems most frequently decreased the medication error rate by eliminating illegible orders [16]. Furthermore, CPOE systems have a significant potential to reduce ambiguous prescriptions and omission of data as common sources of error [12]. On the other hand, CPOE systems can introduce new errors, especially due to system-user interface deficiencies, misleading computer screen displays, incorrect workflows, or poor use of the system [17, 18]. Medication errors fostered by CPOE systems can result from incorrect medication selection from drop-down menus, incorrect data placement [19, 20], and failures to set default CPOE settings [19]. In summary, since extensive knowledge on the quantity and type of medication errors has already been gathered, we wanted to take an even deeper look into distinct quality criteria of medication documentation. This gives more insight into potential error sources and omissions in medication documentation and allows appropriate preventive measures to be identified. These insights are highly valuable for further improving the safety of CPOE systems and their usage.
The aim of this study was to comprehensively assess the impact of a CPOE system on medication documentation, to determine which documentation criteria can be improved by the implementation of a CPOE and to examine how this change is influenced by concomitant factors.
This study was conducted at Heidelberg University Hospital, a large tertiary care hospital where an electronic health record (EHR; Cerner® i.s.h.med (SAP release EhP8, Support Package 016-024)) was newly equipped with a CPOE system including an integrated clinical decision support system (CDSS) in December 2018. The study was approved by the responsible Ethics Committee of the Medical Faculty of Heidelberg (S-453/2019) and by the staff council of Heidelberg University Hospital. Informed consent could be dispensed with since analyses focused on prescription data (thereby using routinely documented information) and did not assess any outcomes on patient or prescriber level.
Seven out of 71 general wards are currently equipped with the CPOE system and were included in this evaluation. The seven evaluated pilot wards have a maximum capacity of 184 beds, divided among radio-oncology wards (60 beds), surgical-orthopaedic wards (specialised in endoprosthetics and spine surgery, 52 beds) and internal medicine wards (specialised in endocrinology, cardiology, and psychosomatic medicine; 72 beds). The workflow of medication documentation differed between the wards before the CPOE implementation, with both nurses and physicians being involved. Prescriptions on paper charts were either documented by physicians themselves or by nurses, based on instructions by a physician. Changes to the current prescriptions were likewise either documented by physicians themselves or by nurses, based on instructions by a physician. Discharge medication was documented by the physicians in an electronic system that automatically transferred medication prescriptions to the discharge letters. In contrast, after the CPOE implementation medication documentation is solely a physician's task and is performed in the same way on every ward. Physicians are responsible for documenting all medication prescriptions, changes to them, and the discharge medication in the CPOE system.
We conducted a retrospective data analysis considering in-house drug prescriptions of 160 patients before (pre-implementation cohort) and of another set of 160 patients after CPOE implementation (post-implementation cohort). On each ward, prescriptions from 20 patients per time point were included in the study. An exception was made for one exceptionally large ward with twice the number of beds where prescriptions of 40 patients were analysed. One to three months before the CPOE implementation, successive patients with at least one drug prescription and an available scan of their paper chart in the electronic archive were included as baseline assessment. One to three and a half months after implementation, successive patients with at least one drug prescription and an available electronic chart were included. To include the fixed number of patients, screening periods differed between wards, with shorter screening periods on wards with a high patient throughput and longer screening periods on wards where patients stayed longer. To ensure comparability of the cohorts, the post-implementation cohort included only patients whose number of total prescriptions, standard peroral prescriptions, prescriptions with a risky administration route, prescriptions as needed, and other prescriptions were within one standard deviation of the average calculated from the pre-implementation cohort.
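The inclusion rule for the post-implementation cohort (every prescription count within one standard deviation of the pre-implementation cohort mean) can be sketched as follows; the category names and data layout are hypothetical, not taken from the study's actual tooling:

```python
from statistics import mean, stdev

# Hypothetical per-patient prescription counts, one key per category
CATEGORIES = ["total", "standard_peroral", "risky_route", "as_needed", "other"]

def eligible(patient: dict, pre_cohort: list) -> bool:
    """Post-implementation inclusion rule: every prescription count must
    lie within one standard deviation of the pre-cohort mean."""
    for cat in CATEGORIES:
        counts = [p[cat] for p in pre_cohort]
        m, sd = mean(counts), stdev(counts)
        if not (m - sd <= patient[cat] <= m + sd):
            return False
    return True
```

This is a per-category filter; a patient failing the band in any single category is excluded.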
For all included patients, the prescription data documented on their second inpatient treatment day were extracted. Demographic data collected included age, sex, weight, renal function (serum creatinine and estimated glomerular filtration rate (calculated by means of CKD-EPI equation)), and the ward to which the patient was admitted. To compile the pre-implementation dataset, the electronic archive was screened and prescription and demographic data (age, sex, weight, ward) were manually extracted from scanned paper charts. Additionally, information on patients' renal function was extracted from the electronic laboratory system. Post-implementation, logged prescription data from the CPOE system were retrieved along with manually extracted demographic data from the electronic chart (age, sex, weight, ward) and the electronic laboratory system (renal function). Total prescriptions per patient and their distribution across different prescription categories (standard peroral prescriptions, prescriptions with a risky administration route, prescriptions as needed, and other prescriptions) were counted. Any medication prescribed "as needed" was counted in the prescriptions-as-needed group, regardless of the administration route. All prescriptions with a regular administration scheme were classified into one of the other groups, based on the route of administration. This means that every regular prescription with a peroral administration route was classified as "standard peroral prescription", every regular prescription with a risky administration route as defined in Table 1 was classified as "prescription with a risky administration route" and every regular prescription with another than peroral or risky administration route (e.g., transdermal, ocular, nasal) was classified as "other prescription".
Table 1 Administration routes classified as "risky administration route"
Data appraisal
The prescriptions were assessed according to the recommendation "Good prescribing practice in drug therapy" published by the Aktionsbündnis Patientensicherheit e.V. (English: Alliance for Patient Safety, APS) [21], a German interprofessional non-profit organization advocating measures to enhance patient safety. The recommendation on good prescribing practice is based on international guidelines and consists of 20 explicit criteria that every prescription should fulfil to be safe and actionable. The first five criteria ask for the presence of relevant patient data (allergies and intolerances, age in years, weight, renal function and drug history) that is needed to evaluate the adequacy of a prescription. Criteria #6 to #15 ask for formal requirements of a complete prescription (e.g., validity, readability, provision of comprehensive information on the drug and the dosage). Criteria #16 to #20 pose clinical questions and ask for information to enable safe administration of a drug. They also determine whether the prescription poses any risks for the patient and whether the prescription is actionable and unambiguous for the person supposed to administer the drug. Each criterion listed in the recommendation was either rated as met, not met, or not applicable. Based on this rating, two different scores indicating the fulfilment of criteria were calculated: one at the prescription level (prescription-Fscore) and one at the criteria level (criteria-Fscore). A score that indicates the percentage of fulfilled criteria per prescription (fulfilment score, denoted prescription-Fscore henceforth) was calculated as \(\frac{\text{number of met criteria}}{\text{number of applicable criteria}}\).
This prescription-Fscore was used for the comparison between the different time points of the analysis (before and after the CPOE implementation) for all prescriptions and separately for every individual prescription category. Additionally, all prescription-Fscores were included in a multivariable analysis (for more details, see statistical analysis section). Moreover, an assessment at criteria level was performed to gain insight into whether the CPOE implementation influenced the fulfilment of any of the 20 criteria in a positive or negative way. This score (denoted criteria-Fscore henceforth) was calculated for every criterion at each time point as \(\frac{\text{number of prescriptions that met this criterion}}{\text{number of prescriptions for which this criterion was applicable}}\). The analysis was performed by the principal investigator (VJ), and 10% of the data were double-checked by sub-investigators. When the evaluation of the double-checked prescriptions revealed discrepancies, these were discussed. If this resulted in changes to the general evaluation scheme, all relevant evaluations were changed accordingly. In case of any unforeseen deviation from the evaluation scheme, the double check was extended to the entire data set. The detailed assessment scheme can be found under Supplementary Information (Additional file 1). All prescriptions were reviewed for drug-drug interactions, allergies, duplicate prescriptions, potentially inappropriate medication for the elderly, dose adjustment for renal function, and maximum approved dose (AiDKlinik®, Dosing GmbH, Heidelberg, Germany, data version 01.12.2019).
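The two fulfilment scores can be illustrated with a short sketch; the rating labels and function names are ours, not from the study software:

```python
from typing import List

MET, NOT_MET, NOT_APPLICABLE = "met", "not met", "n/a"

def prescription_fscore(ratings: List[str]) -> float:
    """Share of met criteria among the applicable ones for one prescription."""
    applicable = [r for r in ratings if r != NOT_APPLICABLE]
    return sum(r == MET for r in applicable) / len(applicable)

def criteria_fscore(cohort: List[List[str]], criterion: int) -> float:
    """Share of prescriptions meeting a given criterion, among those
    prescriptions for which the criterion is applicable."""
    col = [p[criterion] for p in cohort if p[criterion] != NOT_APPLICABLE]
    return sum(r == MET for r in col) / len(col)

# One prescription rated on 5 of the 20 criteria: 3 met out of 4 applicable
print(prescription_fscore([MET, MET, NOT_MET, NOT_APPLICABLE, MET]))  # 0.75
```

Note that "not applicable" ratings are excluded from both the numerator and the denominator, mirroring the score definitions above.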
Statistical analysis
Standard statistical methods were applied to describe population characteristics. Comparisons of prescription-Fscores were tested with the Mann–Whitney U-test, and frequency distributions of the fulfilment of individual criteria (criteria-Fscores) with the Chi-squared test.
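As an illustration of the criterion-level comparison, the Pearson chi-squared statistic for a 2×2 met/not-met table can be computed directly (the study used standard statistical software; this pure-Python version is only a sketch):

```python
def chi2_2x2(a: int, b: int, c: int, d: int) -> float:
    """Pearson chi-squared statistic (without continuity correction) for a
    2x2 contingency table [[a, b], [c, d]], e.g. counts of prescriptions
    meeting / not meeting a criterion before vs. after implementation."""
    n = a + b + c + d
    num = n * (a * d - b * c) ** 2
    den = (a + b) * (c + d) * (a + c) * (b + d)
    return num / den

# A perfectly balanced table gives a statistic of 0 (no association)
print(chi2_2x2(10, 10, 10, 10))  # 0.0
```

The statistic would then be compared against the chi-squared distribution with one degree of freedom to obtain a p-value.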
All prescription-Fscores are included as outcome variables in a multivariable analysis with each prescription as the observation unit. The prescription-Fscore is a proportion and thus bounded at both ends of the scale and potentially skewed. The beta distribution not only fits such data distributions better than the normal distribution; beta regression models also account for the boundedness of the outcome variable [22]. To overcome the potential limitation of values at the boundaries, we chose the common continuous transformation [23, 24] to transform the prescription-Fscore in our sample of N = 3442 observations in total:
$$\left[\, prescriptionFscore \cdot (N - 1) + 0.5 \,\right] / N$$
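A minimal sketch of this transformation, with N = 3442 as in this study:

```python
N = 3442  # total number of assessed prescriptions in this study

def squeeze(fscore: float, n: int = N) -> float:
    """Map a proportion from [0, 1] into the open interval (0, 1),
    as required by the beta distribution."""
    return (fscore * (n - 1) + 0.5) / n

# Boundary values are pulled slightly inside the interval;
# the midpoint 0.5 is left unchanged.
print(squeeze(0.0), squeeze(0.5), squeeze(1.0))
```

For large N the transformed values differ only marginally from the raw proportions, but exact 0s and 1s become representable under the beta likelihood.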
With regard to the particular prescription-Fscores, the observation units (assessed prescriptions) are clustered within the sampling units (patients) so that assessments within the same patient are typically correlated (and thus violate the basic assumption of conditionally independent observations). In particular, we observe the fulfilment score \(prescriptionFscore_{ij}\) for \(j\) medications nested within \(i\) patients. Extensions of beta regression models to beta-distributed generalized linear mixed models (GLMM) allow adding \(b_{i}\) as a patient-specific random effect to account for intra-patient correlations [24, 25]:
$$\log \left( \frac{prescriptionFscore_{ij}}{1 - prescriptionFscore_{ij}} \right) = x_{ij}^{T} \beta + z_{ij}^{T} b_{i} \quad \text{with } b_{i} \sim N\left( 0, G \right)$$
\(x_{ij}^{T}\) and \(z_{ij}^{T}\) denote vectors of data (covariates) for the estimation of fixed parameter effects \(\beta\) and within-patient correlations \(b_{i}\) (with their covariance matrix G). Data variables in our random-intercept model were time point (post-implementation versus pre-implementation), prescription categories (reference: standard peroral prescriptions), the discrete number of comedications and the effect-coded ward indicator (weighted for the relative number of medications from the respective ward) [26].
It follows that parameter estimates in the beta regression model can be expressed and interpreted in terms of odds ratios (OR); we thus calculated the odds ratios for improving the ratio between the prescription-Fscore and the difference to the perfect score (1 − prescription-Fscore). Random effects were estimated as standard deviations to explain the source of correlation [27]. Because the estimated effects are adjusted for individual differences and thus refer to within-individual change, we additionally visualized the effect by predicting the covariate-adjusted prescription-Fscore for each observation in the data set.
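Because the model uses a logit link, a fixed-effect coefficient β corresponds to the odds ratio exp(β), and an odds ratio acts multiplicatively on the odds Fscore/(1 − Fscore). A sketch of this arithmetic (illustrative only; the function names are ours):

```python
import math

def odds_ratio(beta: float) -> float:
    """Odds ratio implied by a fixed-effect coefficient under a logit link."""
    return math.exp(beta)

def apply_odds_ratio(fscore: float, odds_ratio_: float) -> float:
    """Fscore obtained after multiplying the odds Fscore / (1 - Fscore)
    by a given odds ratio, then mapping back to a proportion."""
    odds = fscore / (1.0 - fscore) * odds_ratio_
    return odds / (1.0 + odds)

# Illustration with the reported CPOE effect (OR = 10.11) applied to the
# unadjusted pre-implementation mean of 0.574; the paper's adjusted means
# differ because of covariates and random effects.
print(apply_odds_ratio(0.574, 10.11))
```

An OR of 1 (β = 0) leaves the score unchanged; ORs above 1 shift it towards the perfect score of 1.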
Statistical analyses were conducted using IBM® SPSS® Statistics (Version 25) and the R software environment in version 4.0.2 (R Foundation for Statistical Computing, Vienna, Austria) with the key packages betareg (version 3.1-3) and glmmTMB (version 1.0.2.1).
Patient characteristics
In total, 3442 prescriptions from 320 patients were evaluated (Table 2). The pre-implementation and post-implementation cohorts did not differ with regard to age, sex, number of standard peroral prescriptions, and number of prescriptions with a risky administration route. However, more prescriptions were collected in the pre-cohort, mainly due to more "as needed" prescriptions and more "other prescriptions".
Table 2 General characteristics and demographics of the pre-implementation and post-implementation cohort
Analysis of prescription-fulfilment scores
A prescription-Fscore was calculated for all 3442 prescriptions. The average number of criteria that were not applicable was 3.4 criteria (± 0.8) per prescription. The quality assurance measures did not reveal any unforeseen discrepancies.
The prescription-Fscores for all prescriptions increased significantly (p < 0.001) from 57.4% ± 12.0% (n = 1850 prescriptions) before to 89.8% ± 7.2% (n = 1592 prescriptions) after CPOE implementation. After CPOE implementation, a significant (p < 0.001) increase in prescription-Fscores was observed in each individual prescription category (Table 3, Fig. 1). A significant (p < 0.001) increase in prescription-Fscores with a large effect size (r > 0.5) could be seen on every ward after the CPOE implementation.
Table 3 Prescription-Fscores of all prescriptions and of individual prescription categories
Comparison of prescription-Fscores in the pre-implementation cohort and post-implementation cohort for individual prescription categories. After: post-implementation cohort; before: pre-implementation cohort; CPOE: computerized physician order entry; prescription-Fscore: fulfilment score per prescription
Multivariable analysis of prescription-Fscores
Multivariable adjustment for potential confounders confirmed the large impact of the intervention on the prescription-Fscores as was already visible in the descriptive analyses (Table 4). Adjusted for the influence of the prescription category, ward, and number of concurrently prescribed drugs, the drugs prescribed with the CPOE system were over ten times (OR = 10.11 [95% CI 8.49–12.05]) more likely to achieve a higher prescription-Fscore when compared to paper-based prescriptions. Administration forms other than the standard peroral administration route were associated with lower prescription-Fscores ("risky route", OR = 0.76 [95% CI 0.73–0.79]; "as needed", OR = 0.59 [95% CI 0.57–0.61]; "other", OR = 0.87 [95% CI 0.80–0.93]). Net absolute interventional effects expressed as differences in predicted group means were 29.6% (standard peroral prescriptions), 33.5% (prescriptions with a risky administration route), 39.3% (prescriptions as needed), and 35.4% (other prescriptions), respectively (Fig. 2). We also noted that the prescription-Fscore is a ward-dependent variable. The prescription-Fscores were significantly lower than the global (weighted) average of all wards at one ward (ward 3, OR = 0.53 [95% CI 0.43–0.67]), whereas three other wards (ward 5, OR = 1.29 [95% CI 1.03–1.63]; ward 6, OR = 1.44 [95% CI 1.11–1.86]; and ward 7, OR = 1.21 [95% CI 1.05–1.40]) had significantly higher prescription-Fscores than the global (weighted) average of all wards.
Table 4 Multivariable analysis of prescription-Fscores (outcome variables) estimated from a beta-distributed generalized linear mixed models (GLMM)
Boxplot of model-predicted prescription-Fscores in actual medications stratified for the categorized prescription categories. Pre-implementation group: open boxes; post-implementation group: grey boxes; prescription-Fscore: fulfilment score per prescription
Analysis of individual criteria
When analysing the individual criteria, four criteria (#4, #15, #17, and #18) were unchanged after CPOE implementation, two criteria (#1 and #9) deteriorated in their criteria-Fscores, and fourteen criteria (#2, #3, #5, #6, #7, #8, #10, #11, #12, #13, #14, #16, #19, and #20) improved, of which six (#2, #6, #7, #8, #14, and #16) reached the maximum score of 100% (Table 5).
Table 5 Analysis of the criteria-Fscores
Change patterns of individual criteria
Taking a deeper look into the criteria-Fscores of individual criteria, distinct differences in the change patterns between different wards can be found. Depending on the criterion, criteria-Fscores of different wards could show differences in starting points, end points, and gradient direction; different starting and end points but same gradient direction; different starting points but same end points and gradient direction; or same starting points, end points and gradient direction (Table 5). Among the criteria with different starting points, end points, and gradient direction the criteria-Fscores of individual wards differed and either increased, stayed the same or deteriorated with time. This shows that the CPOE implementation can result in different effects depending on the ward and its underlying process flows (Fig. 3).
Individual criteria with different starting points, end points, and gradient direction. Criterion #17 is not reported here because this criterion was not applicable to every ward
This study showed that the implementation of a CPOE system—after adjusting for the influence of additional covariates—led to a substantial improvement of medication documentation quality. The novelty of these study results lies in their depth of detail that allows to draw direct conclusions with respect to the measures needed to further improve medication documentation quality.
Interestingly, two criteria deteriorated after introduction of electronic prescribing, namely the documentation of allergies and intolerances and the avoidance of abbreviations for active ingredients. While much is known about the acceptance of allergy alerts [28], knowledge on changes in allergy documentation due to CPOE implementation is scarce; in two studies, CPOE implementation improved the documentation of allergies [29, 30]. The reason for this apparent discrepancy between these studies and our findings is unclear, which is why it is important to consider the underlying processes of allergy documentation. The deterioration in our setting may be due to the change in workflow from the hand-written documentation of allergies on paper to the structured entry in the CPOE system. On paper, allergies are entered as free text, whereas in the CPOE system, a structured entry of drugs or drug classes from an allergy list is needed. Further training measures for prescribers may improve acceptance of the allergy documentation tool contained in the CPOE system.
The second criterion showing a significant deterioration in the criteria-Fscore was criterion #9 (The prescription does not contain any abbreviations for the active substance). The use of abbreviations in drug prescribing is common—especially in handwritten prescriptions—and can easily lead to misinterpretation and even serious medication errors [31]. For this reason, there are a number of institutions, such as the Institute for Safe Medication Practices, that have published lists of error-prone abbreviations that should not be used in the communication of medical information [32]. The implemented CPOE system displays the prescribed drugs by their brand name. Surprisingly and unfortunately, manufacturers tend to use abbreviations in the trade names of their drugs, especially if they are generics, which explains the deterioration of the fulfilment value of criterion #9. Since it is difficult to influence the naming of drugs by manufacturers, one possible solution to the problem would be to always add the active ingredient to the display of prescriptions in the CPOE system.
The clinical relevance of the assessed criteria certainly varies; whether well-known abbreviations of active substances are a potential error source (e.g., 5-FU for fluorouracil or MCP for metoclopramide) is debatable, as is the absence of age in years (criterion #2) when the date of birth is clearly documented instead. However, we did not do any weighting according to the clinical relevance of the criteria due to the lack of validated standards. Our assessment was conservative in the sense that the predefined evaluation scheme was strictly followed and the calculation of fulfilment scores was based solely on the fulfilment and applicability of the criteria.
The study showed that different wards had variable prescription-Fscores often diverging from the global average of all wards. Additionally, the change patterns of individual criteria differed substantially between individual wards. This is most likely due to the different workflows of the respective wards, which had different general procedures at baseline. These procedures were harmonized through the introduction of the CPOE system. As an example, the documentation of allergies or drug history taking differed between the pilot wards; it was either the nurse's, the assistant's, or the physician's responsibility to enter allergies or drug history into the patient chart, whereas after CPOE implementation this task fell uniformly to the physician. Additionally, not only did the templates for paper-based charts vary between the wards; this was also true for established documentation methods and comprehensibility, both of which were reflected in the degree of criteria fulfilment. Task switching and alteration of process flows due to CPOE implementation are common, and the impact of CPOE on clinical workflow is known to be double-edged [33, 34]. It has been shown that users of CPOE systems may adopt work-arounds that are error-prone if the system's usability is poor or the handling is deemed cumbersome [35]. It is therefore important to closely monitor process changes, suggest improvements to clinical workflows, and assist clinical staff in adapting to the changes introduced by CPOE implementation. Furthermore, continuous observation and follow-up on workflow changes is important in order to detect whether suggested adaptations resulted in an improvement or a deterioration.
The study has several limitations: First, despite aiming for comparable patient cohorts before and after implementation, the post-implementation cohort showed a smaller number of "total", "as needed", and "other" prescriptions. However, the multivariable analysis accounted for such imbalances, suggesting that imbalances in prescription categories can be deemed negligible. Moreover, we only assessed the medication regimens of 320 patients at one time frame before and after implementation, which might limit the transferability of the results to other settings or other CPOE systems and neglects potential learning curves. We only adjusted for the influence of a number of well-known covariates. However, there might be other influential factors, such as physician experience, physician workload, or physician attitude towards the CPOE system, that may influence the quality of prescription documentation. Additionally, certain patient characteristics, such as age, sex, type of medication, clinical condition, diagnoses, or the time period of admission, could affect the quality of prescription documentation. The hierarchical model with a random intercept on patient characteristics and adjustment for further variables accounts for such confounding influences whenever possible, although residual confounding cannot be ruled out. Another potential confounder may be distinct underlying prescribing workflows that may differ not only between wards but even, on a smaller level, between different prescribers. Therefore, a precise analysis of workflows before and after CPOE implementation is needed, especially when there is a need to compensate for the negative effects of CPOE implementation. However, given the large magnitude of the CPOE effect estimate, the results can be considered robust even with further potential confounders unavailable for adjustment.
This is in line with a very large E-value of 19.7 corresponding to our effect estimate; this means that, to explain away the observed effect, a (set of) unmeasured confounder(s) would have to increase the likelihood of improvement nearly 20-fold and would have to be correspondingly unequally distributed between the intervention and control groups [36]. Ideally, the observed improvement in prescription documentation quality would also translate into improved patient outcomes. Whether this is the case in this setting should be the subject of further prospective studies.
This study provides a clear description of the influences of a CPOE system on detailed aspects of prescription documentation. It shows that the quality of prescription documentation increases substantially with the implementation of the CPOE system. However, there are also aspects—even in the documentation of the prescription—that are difficult to fulfil with a CPOE system. As the effects of a CPOE implementation have proven to be double-edged, precise insights into the nature of these effects are needed in order to derive recommendations for improving CPOE systems and their usage. The in-depth analysis of distinct quality criteria allowed us to identify specific issues where prescriber training, improvement of software, or adaptation of clinical workflows can lead to a better use of the CPOE system and, potentially, to a further improvement of clinical documentation.
The datasets used and analysed during the current study are available from the corresponding author on reasonable request.
Change pattern with different starting points, end points, and gradient direction
Change pattern with different starting and end points but same gradient direction
Change pattern with different starting points but same end points and gradient direction
Change pattern with same starting points, end points, and gradient direction
APS: Alliance for patient safety
CDSS: Clinical decision support system
CPOE: Computerized physician order entry
Criteria-Fscore: Fulfilment score per criterion
EHR: Electronic health record
GLMM: Generalized linear mixed model
OR: Odds ratio
Prescription-Fscore: Fulfilment score per prescription
r: Pearson r correlation
Kohn LT, Corrigan JM, Donaldson MS. Errors in health care: a leading cause of death and injury. In: To err is human: building a safer health system. National Academies Press (US); 2000. https://doi.org/10.17226/9728.
Alqenae FA, Steinke D, Keers RN. Prevalence and nature of medication errors and medication-related harm following discharge from hospital to community settings: a systematic review. Drug Saf. 2020;43(6):517–37.
Bates DW, Cullen DJ, Laird N, Petersen LA, Small SD, Servi D, et al. Incidence of adverse drug events and potential adverse drug events—implications for prevention: ADE prevention study group. JAMA. 1995;274(1):29–34.
Medication Management Guideline. Health Care Association of New Jersey. 2012. https://www.hcanj.org/files/2013/09/hcanjbp_medmgmt13_050113_1.pdf. Accessed 12 Feb 2021.
Armada ER, Villamanan E, Lopez-de-Sa E, Rosillo S, Rey-Blas JR, Testillano ML, et al. Computerized physician order entry in the cardiac intensive care unit: effects on prescription errors and workflow conditions. J Crit Care. 2014;29(2):188–93.
Good prescribing practice. Medical Council of New Zealand. 2020. https://www.mcnz.org.nz/assets/standards/ceae513c85/Statement-on-good-prescribing-practice.pdf. Accessed 12 Feb 2021.
Guidelines for Good Prescribing in Primary Care. Lancashire Medicines Management Group. 2019. https://www.lancsmmg.nhs.uk/media/1162/primary-care-good-prescribing-guide-version-22.pdf. Accessed 12 Feb 2021.
Medicines optimisation: the safe and effective use of medicines to enable the best possible outcomes: NICE guideline [NG5]. National Institute for Health and Care Excellence (NICE). 2015. https://www.nice.org.uk/guidance/ng5. Accessed 12 Feb 2021.
Bates DW, Leape LL, Cullen DJ, Laird N, Petersen LA, Teich JM, et al. Effect of computerized physician order entry and a team intervention on prevention of serious medication errors. JAMA. 1998;280(15):1311–6.
Bates DW, Teich JM, Lee J, Seger D, Kuperman GJ, Ma'Luf N, et al. The impact of computerized physician order entry on medication error prevention. J Am Med Inform Assoc JAMIA. 1999;6(4):313–21.
Colpaert K, Claus B, Somers A, Vandewoude K, Robays H, Decruyenaere J. Impact of computerized physician order entry on medication prescription errors in the intensive care unit: a controlled cross-sectional trial. Critical care (London, England). 2006;10(1):R21.
Hernandez F, Majoul E, Montes-Palacios C, Antignac M, Cherrier B, Doursounian L, et al. An observational study of the impact of a computerized physician order entry system on the rate of medication errors in an orthopaedic surgery unit. PLoS ONE. 2015;10(7):e0134101.
Rouayroux N, Calmels V, Bachelet B, Sallerin B, Divol E. Medication prescribing errors: a pre- and post-computerized physician order entry retrospective study. Int J Clin Pharm. 2019;41(1):228–36.
Shulman R, Singer M, Goldstone J, Bellingan G. Medication errors: a prospective cohort study of hand-written and computerised physician order entry in the intensive care unit. Critical care (London, England). 2005;9(5):R516–21.
Westbrook JI, Baysari MT, Li L, Burke R, Richardson KL, Day RO. The safety of electronic prescribing: manifestations, mechanisms, and rates of system-related errors associated with two commercial systems in hospitals. J Am Med Inform Assoc JAMIA. 2013;20(6):1159–67.
Prgomet M, Li L, Niazkhani Z, Georgiou A, Westbrook JI. Impact of commercial computerized provider order entry (CPOE) and clinical decision support systems (CDSSs) on medication errors, length of stay, and mortality in intensive care units: a systematic review and meta-analysis. J Am Med Inform Assoc JAMIA. 2017;24(2):413–22.
Brown CL, Mulcaster HL, Triffitt KL, Sittig DF, Ash JS, Reygate K, et al. A systematic review of the types and causes of prescribing errors generated from using computerized provider order entry systems in primary and secondary care. J Am Med Inform Assoc JAMIA. 2017;24(2):432–40.
Koppel R, Metlay JP, Cohen A, Abaluck B, Localio AR, Kimmel SE, et al. Role of computerized physician order entry systems in facilitating medication errors. JAMA. 2005;293(10):1197–203.
Korb-Savoldelli V, Boussadi A, Durieux P, Sabatier B. Prevalence of computerized physician order entry systems-related medication prescription errors: a systematic review. Int J Med Informatics. 2018;111:112–22.
Villamanan E, Larrubia Y, Ruano M, Velez M, Armada E, Herrero A, et al. Potential medication errors associated with computer prescriber order entry. Int J Clin Pharm. 2013;35(4):577–83.
Gute Verordnungspraxis in der Arzneimitteltherapie. Aktionsbündnis Patientensicherheit e.V. 2020. https://www.aps-ev.de/handlungsempfehlungen/. Accessed 12 Feb 2021.
Ferrari S, Cribari-Neto F. Beta regression for modelling rates and proportions. J Appl Stat. 2004;31(7):799–815.
Smithson M, Verkuilen J. A better lemon squeezer? Maximum-likelihood regression with beta-distributed dependent variables. Psychol Methods. 2006;11(1):54–71.
Zimprich D. Modeling change in skewed variables using mixed beta regression models. Res Hum Dev. 2010;7(1):9–26.
Verkuilen J, Smithson M. Mixed and mixture regression models for continuous bounded responses using the beta distribution. J Educ Behav Stat. 2012;37(1):82–113.
Te Grotenhuis M, Pelzer B, Eisinga R, Nieuwenhuis R, Schmidt-Catran A, Konig R. When size matters: advantages of weighted effect coding in observational studies. Int J Public Health. 2017;62(1):163–7.
Hu FB, Goldberg J, Hedeker D, Flay BR, Pentz MA. Comparison of population-averaged and subject-specific approaches for analyzing repeated binary outcomes. Am J Epidemiol. 1998;147(7):694–703.
Poly TN, Islam MM, Yang HC, Li YJ. Appropriateness of overridden alerts in computerized physician order entry: systematic review. JMIR Med Inform. 2020;8(7):e15653.
Mills PR, Weidmann AE, Stewart D. Hospital electronic prescribing system implementation impact on discharge information communication and prescribing errors: a before and after study. Eur J Clin Pharmacol. 2017;73(10):1279–86.
Tsyben A, Gooding N, Kelsall W. Assessing the impact of a newly introduced electronic prescribing system across a paediatric department: lessons learned. Arch Dis Childhood. 2016;101(9):e2.
Dooley MJ, Wiseman M, Gu G. Prevalence of error-prone abbreviations used in medication prescribing for hospitalised patients: multi-hospital evaluation. Intern Med J. 2012;42(3):e19-22.
ISMP List of error-prone abbreviations, symbols and dose designations. Institute for Safe Medication Practices. 2021. https://www.ismp.org/recommendations/error-prone-abbreviations-list. Accessed 15 Feb 2021.
Niazkhani Z, Pirnejad H, Berg M, Aarts J. The impact of computerized provider order entry systems on inpatient clinical workflow: a literature review. J Am Med Inform Assoc JAMIA. 2009;16(4):539–49.
Asaro PV, Boxerman SB. Effects of computerized provider order entry and nursing documentation on workflow. Acad Emerg Med Off J Soc Acad Emerg Med. 2008;15(10):908–15.
Cresswell KM, Lee L, Mozaffar H, Williams R, Sheikh A. Sustained user engagement in health information technology: the long road from implementation to system optimization of computerized physician order entry and clinical decision support systems for prescribing in hospitals in England. Health Serv Res. 2017;52(5):1928–57.
VanderWeele TJ, Ding P. Sensitivity analysis in observational research: introducing the E-value. Ann Intern Med. 2017;167(4):268–74.
We thank Larissa Schiller for the acquisition of logged prescription data from the CPOE system. We thank the Aktionsbündnis Patientensicherheit (APS) for the provision of the recommendation "Good prescribing practice in drug therapy". Furthermore, we thank Frauke Brückmann, Katharina Hamburg, and David Schrey for double-checking 10 % of the data. We thank Claudia Marquart for revising the manuscript on syntax, structure, and grammar. We thank all members of the implementation team for their efforts and motivation to implement the CPOE system: Centre of Information and Medical Technology: Janina Bittmann, Markus Fabian, Ulrike Klein, Silvia Kugler, Martin Löpprich, Oliver Reinhard, Lucienne Scholz, Birgit Zeeh; Department of Internal Medicine: Wolfgang Bitz, Till Bugaj, Lars Kihm, Stefan Kopf, Anja Liemann, Petra Wagenlechner, Johanna Zemva; Department of Orthopaedics, Traumatology and Spinal Cord Injury: Claudia Benkert, Christian Merle; Department of Radiation Oncology: Sergej Roman, Stefan Welte.
Andreas D. Meid is funded by the Physician-Scientist Programme of the Medical Faculty of Heidelberg University. The funding body did not play any role in the design of the study, collection, analysis, and interpretation of data, and in writing the manuscript. This research did not receive any specific grant from funding agencies in the public, commercial, or not-for-profit sectors. Open Access funding enabled and organized by Projekt DEAL.
Department of Clinical Pharmacology and Pharmacoepidemiology, Heidelberg University Hospital, Im Neuenheimer Feld 410, 69120, Heidelberg, Germany
Viktoria Jungreithmayr, Andreas D. Meid, Walter E. Haefeli & Hanna M. Seidling
Cooperation Unit Clinical Pharmacy, Heidelberg University Hospital, Im Neuenheimer Feld 410, 69120, Heidelberg, Germany
Viktoria Jungreithmayr, Walter E. Haefeli & Hanna M. Seidling
Viktoria Jungreithmayr
Andreas D. Meid
Walter E. Haefeli
Hanna M. Seidling
Janina Bittmann, Markus Fabian, Ulrike Klein, Silvia Kugler, Martin Löpprich, Oliver Reinhard, Lucienne Scholz, Birgit Zeeh, Wolfgang Bitz, Till Bugaj, Lars Kihm, Stefan Kopf, Anja Liemann, Petra Wagenlechner, Johanna Zemva, Claudia Benkert, Christian Merle, Sergej Roman & Stefan Welte
VJ, ADM, implementation team (IT), WEH, HMS. Conception and design of the study: VJ and HMS. Acquisition of data: VJ, IT, and HMS. Analysis of data: VJ, ADM, and HMS. Interpretation of data: VJ, ADM, WEH and HMS. Drafting the manuscript: VJ. Revising the manuscript: VJ, ADM, WEH and HMS. Approval of the final version of the manuscript as submitted: VJ, ADM, IT, WEH, and HMS. All authors read and approved the final manuscript.
Correspondence to Hanna M. Seidling.
The study was approved by the responsible Ethics Committee of the Medical Faculty of Heidelberg (S-453/2019). Informed consent was not applicable due to the retrospective nature of the data analysis.
Hanna M. Seidling is a member of the Editorial Board of BMC Medical Informatics and Decision Making. The authors declare that they have no further competing interests with regard to this work.
Additional file 1.
Assessment scheme for categorisation of prescriptions. Assessment scheme in accordance to the recommendation "Good prescribing practice in drug therapy" (APS). APS: alliance for patient safety; CPOE: computerized physician order entry; eGFR: estimated glomerular filtration rate.
Jungreithmayr, V., Meid, A.D., Implementation Team. et al. The impact of a computerized physician order entry system implementation on 20 different criteria of medication documentation—a before-and-after study. BMC Med Inform Decis Mak 21, 279 (2021). https://doi.org/10.1186/s12911-021-01607-6
Computerized physician order entry system
Evaluation results
Medication prescription
Documentation errors
MSC Classifications
MSC 2010: Statistical Mechanics, Structure of Matter
82Cxx
46 results in 82Cxx
Logarithmic Sobolev inequalities in discrete product spaces
MSC 2010: Time-dependent statistical mechanics (dynamic and nonequilibrium)
MSC 2010: Markov processes
MSC 2010: General convexity
Katalin Marton
Journal: Combinatorics, Probability and Computing , First View
Published online by Cambridge University Press: 13 June 2019, pp. 1-17
The aim of this paper is to prove an inequality between relative entropy and the sum of average conditional relative entropies of the following form: for a fixed probability measure $q$ on a product space $\Omega^n$ (where $\Omega$ is a finite set), and any probability measure $p$ on $\Omega^n$, (*)
$$D(p\|q) \le C \cdot \sum_{i=1}^n \mathbb{E}_p\, D\bigl(p_i(\cdot \mid Y_1,\ldots,Y_{i-1},Y_{i+1},\ldots,Y_n)\,\big\|\,q_i(\cdot \mid Y_1,\ldots,Y_{i-1},Y_{i+1},\ldots,Y_n)\bigr),$$
where $p_i(\cdot \mid y_1,\ldots,y_{i-1},y_{i+1},\ldots,y_n)$ and $q_i(\cdot \mid x_1,\ldots,x_{i-1},x_{i+1},\ldots,x_n)$ denote the local specifications for $p$ resp. $q$, that is, the conditional distributions of the $i$th coordinate, given the other coordinates. The constant $C$ depends on (the local specifications of) $q$.
The inequality (*) is meaningful in product spaces, in both the discrete and the continuous case, and can be used to prove a logarithmic Sobolev inequality for $q$, provided uniform logarithmic Sobolev inequalities are available for $q_i(\cdot \mid x_1,\ldots,x_{i-1},x_{i+1},\ldots,x_n)$, for all fixed $i$ and fixed $(x_1,\ldots,x_{i-1},x_{i+1},\ldots,x_n)$. Inequality (*) directly implies that the Gibbs sampler associated with $q$ is a contraction for relative entropy.
In this paper we derive inequality (*), and thereby a logarithmic Sobolev inequality, in discrete product spaces, by proving inequalities for an appropriate Wasserstein-like distance.
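As a concrete illustration of the left-hand side of (*), the relative entropy between two distributions on a finite set can be computed directly (a generic sketch, not code from the paper):

```python
import math

def relative_entropy(p, q):
    """D(p||q) = sum_x p(x) * log(p(x)/q(x)) for distributions on a
    finite set; requires q(x) > 0 wherever p(x) > 0."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

p = [0.5, 0.3, 0.2]
q = [0.4, 0.4, 0.2]
print(relative_entropy(p, q))  # strictly positive; zero iff p == q
```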
Uniform moment propagation for the Becker--Döring equations
MSC 2010: Equations of mathematical physics and other areas of application
MSC 2010: Stability theory
José A. Cãnizo, Amit Einav, Bertrand Lods
Journal: Proceedings of the Royal Society of Edinburgh Section A: Mathematics , First View
Published online by Cambridge University Press: 27 December 2018, pp. 1-21
We show uniform-in-time propagation of algebraic and stretched exponential moments for the Becker--Döring equations. Our proof is based upon a suitable use of the maximum principle together with known rates of convergence to equilibrium.
Multi-point correlations for two-dimensional coalescing or annihilating random walks
MSC 2010: Special processes
James Lukins, Roger Tribe, Oleg Zaboronski
Journal: Journal of Applied Probability / Volume 55 / Issue 4 / December 2018
Published online by Cambridge University Press: 16 January 2019, pp. 1158-1185
In this paper we consider an infinite system of instantaneously coalescing rate 1 simple symmetric random walks on $\mathbb{Z}^2$, started from the initial condition with all sites of $\mathbb{Z}^2$ occupied. Two-dimensional coalescing random walks are a 'critical' model of interacting particle systems: unlike coalescence models in dimension three or higher, the fluctuation effects are important for the description of large-time statistics in two dimensions, manifesting themselves through the logarithmic corrections to the 'mean field' answers. Yet the fluctuation effects are not as strong as for the one-dimensional coalescence, in which case the fluctuation effects modify the large-time statistics at the leading order. Unfortunately, unlike its one-dimensional counterpart, the two-dimensional model is not exactly solvable, which explains a relative scarcity of rigorous analytic answers for the statistics of fluctuations at large times. Our contribution is to find, for any $N \ge 2$, the leading asymptotics for the correlation functions $\rho_N(x_1,\ldots,x_N)$ as $t \to \infty$. This generalises the results for $N=1$ due to Bramson and Griffeath (1980) and confirms a prediction in the physics literature for $N>1$. An analogous statement holds for instantaneously annihilating random walks. The key tools are the known asymptotic $\rho_1(t) \sim \log t / (\pi t)$ due to Bramson and Griffeath (1980), and the noncollision probability $p_{\mathrm{NC}}(t)$, that no pair of a finite collection of $N$ two-dimensional simple random walks meets by time $t$, whose asymptotic $p_{\mathrm{NC}}(t) \sim c_0 (\log t)^{-\binom{N}{2}}$ was found by Cox et al. (2010). We re-derive the asymptotics, and establish new error bounds, both for $\rho_1(t)$ and $p_{\mathrm{NC}}(t)$, by proving that these quantities satisfy effective rate equations; that is, approximate differential equations at large times. This approach can be regarded as a generalisation of the Smoluchowski theory of renormalised rate equations to multi-point statistics.
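The decay of the particle density under coalescence can be observed in a toy simulation (a discrete-time, finite-torus sketch of my own, not the continuous-time rate-1 dynamics analysed in the paper):

```python
import random

def coalescing_density(L=30, steps=50, seed=1):
    """Synchronous coalescing simple random walks on an L x L torus,
    started from full occupancy; returns the occupied-site density
    after each step."""
    rng = random.Random(seed)
    moves = [(1, 0), (-1, 0), (0, 1), (0, -1)]
    occupied = {(x, y) for x in range(L) for y in range(L)}
    densities = [1.0]
    for _ in range(steps):
        stepped = set()
        for (x, y) in occupied:
            dx, dy = rng.choice(moves)
            stepped.add(((x + dx) % L, (y + dy) % L))  # collisions coalesce
        occupied = stepped
        densities.append(len(occupied) / (L * L))
    return densities

d = coalescing_density()
# at large times the density decays on the order of log(t) / (pi * t)
```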
A CLASS OF GROWTH MODELS RESCALING TO KPZ
MSC 2010: Stochastic analysis
MSC 2010: Miscellaneous topics - Partial differential equations
MARTIN HAIRER, JEREMY QUASTEL
Journal: Forum of Mathematics, Pi / Volume 6 / 2018
Published online by Cambridge University Press: 19 November 2018, e3
We consider a large class of $(1+1)$-dimensional continuous interface growth models and we show that, in both the weakly asymmetric and the intermediate disorder regimes, these models converge to Hopf–Cole solutions to the KPZ equation.
Noise-induced mixing and multimodality in reaction networks
MSC 2010: Physiological, cellular and medical topics
MSC 2010: Thermodynamics and heat transfer
MSC 2010: Applications - Dynamical systems and ergodic theory
MSC 2010: Functional-differential and differential-difference equations
TOMISLAV PLESA, RADEK ERBAN, HANS G. OTHMER
Journal: European Journal of Applied Mathematics , First View
Published online by Cambridge University Press: 18 September 2018, pp. 1-25
We analyse a class of chemical reaction networks under mass-action kinetics involving multiple time scales, whose deterministic and stochastic models display qualitative differences. The networks are inspired by gene-regulatory networks and consist of a slow subnetwork, describing conversions among the different gene states, and fast subnetworks, describing biochemical interactions involving the gene products. We show that the long-term dynamics of such networks can consist of a unique attractor at the deterministic level (unistability), while the long-term probability distribution at the stochastic level may display multiple maxima (multimodality). The dynamical differences stem from a phenomenon we call noise-induced mixing, whereby the probability distribution of the gene products is a linear combination of the probability distributions of the fast subnetworks which are 'mixed' by the slow subnetworks. The results are applied in the context of systems biology, where noise-induced mixing is shown to play a biochemically important role, producing phenomena such as stochastic multimodality and oscillations.
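The "linear combination" structure behind noise-induced mixing can be illustrated with a toy mixture (my own illustration, not a model from the paper): each fast subnetwork contributes a unimodal copy-number distribution, and the slow subnetwork mixes them into a multimodal one.

```python
import math

def poisson_pmf(lam, k):
    """P(X = k) for a Poisson random variable with mean lam."""
    return math.exp(-lam) * lam ** k / math.factorial(k)

def mixture_pmf(k, lams=(2, 20), weights=(0.5, 0.5)):
    """Mixture of two Poisson laws: each component is unimodal, but the
    weighted combination is bimodal - a toy analogue of noise-induced
    multimodality."""
    return sum(w * poisson_pmf(l, k) for w, l in zip(weights, lams))

# modes near k = 2 and k = 20, with a trough in between
print(mixture_pmf(2), mixture_pmf(10), mixture_pmf(20))
```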
Frog models on trees through renewal theory
Sandro Gallo, Pablo M. Rodriguez
Journal: Journal of Applied Probability / Volume 55 / Issue 3 / September 2018
We study a class of growing systems of random walks on regular trees, known as frog models with geometric lifetime in the literature. With the help of results from renewal theory, we derive new bounds for their critical parameters. Our approach also improves the existing bounds for the critical parameter of a percolation model on trees known as cone percolation.
Large deviations for randomly connected neural networks: II. State-dependent interactions
MSC 2010: Mathematical biology in general
MSC 2010: Probability theory on algebraic and topological structures
MSC 2010: Limit theorems
Tanguy Cabana, Jonathan D. Touboul
Journal: Advances in Applied Probability / Volume 50 / Issue 3 / September 2018
Published online by Cambridge University Press: 16 November 2018, pp. 983-1004
We continue the analysis of large deviations for randomly connected neural networks used as models of the brain. The originality of the model relies on the fact that the directed impact of one particle onto another depends on the state of both particles, and they have random Gaussian amplitude with mean and variance scaling as the inverse of the network size. Similarly to the spatially extended case (see Cabana and Touboul (2018)), we show that under sufficient regularity assumptions, the empirical measure satisfies a large deviations principle with a good rate function achieving its minimum at a unique probability measure, implying, in particular, its convergence in both averaged and quenched cases, as well as a propagation of a chaos property (in the averaged case only). The class of model we consider notably includes a stochastic version of the Kuramoto model with random connections.
Large deviations for randomly connected neural networks: I. Spatially extended systems
In a series of two papers, we investigate the large deviations and asymptotic behavior of stochastic models of brain neural networks with random interaction coefficients. In this first paper, we take into account the spatial structure of the brain and consider first the presence of interaction delays that depend on the distance between cells and then the Gaussian random interaction amplitude with a mean and variance that depend on the position of the neurons and scale as the inverse of the network size. We show that the empirical measure satisfies a large deviations principle with a good rate function reaching its minimum at a unique spatially extended probability measure. This result implies an averaged convergence of the empirical measure and a propagation of chaos. The limit is characterized through a complex non-Markovian implicit equation in which the network interaction term is replaced by a nonlocal Gaussian process with a mean and covariance that depend on the statistics of the solution over the whole neural field.
How much market making does a market need?
Vít Peržina, Jan M. Swart
We consider a simple model for the evolution of a limit order book in which limit orders of unit size arrive according to independent Poisson processes. The frequencies of buy limit orders below a given price level, respectively sell limit orders above a given level, are described by fixed demand and supply functions. Buy (respectively, sell) limit orders that arrive above (respectively, below) the current ask (respectively, bid) price are converted into market orders. There is no cancellation of limit orders. This model has been independently reinvented by several authors, including Stigler (1964) and Luckock (2003); the latter calculated the equilibrium distribution of the bid and ask prices. We extend the model by introducing market makers that simultaneously place both a buy and a sell limit order at the current bid and ask price. We show that introducing market makers reduces the spread, which in the original model was unrealistically large. In particular, we calculate the exact rate at which market makers need to place orders in order to close the spread completely. If this rate is exceeded, we show that the price settles at a random level that, in general, does not correspond to the Walrasian equilibrium price.
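The basic model without market makers is easy to sketch (a minimal illustration with uniformly distributed order prices; names and parameters are my own, not the paper's):

```python
import random

def stigler_luckock(n_orders=10000, seed=0):
    """Minimal sketch of the Stigler/Luckock order-book model: unit-size
    buy/sell limit orders arrive at uniform random prices; an order that
    crosses the current best quote on the other side executes as a
    market order. Returns the final (best bid, best ask)."""
    rng = random.Random(seed)
    bids, asks = [], []  # bids kept descending, asks ascending
    for _ in range(n_orders):
        price = rng.random()
        if rng.random() < 0.5:  # buy order
            if asks and price >= asks[0]:
                asks.pop(0)  # crosses the ask: trade, no resting order
            else:
                bids.append(price); bids.sort(reverse=True)
        else:  # sell order
            if bids and price <= bids[0]:
                bids.pop(0)  # crosses the bid: trade
            else:
                asks.append(price); asks.sort()
    return (bids[0] if bids else None), (asks[0] if asks else None)

bid, ask = stigler_luckock()
# without cancellation or market making the resting spread stays wide
```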
Speeding up non-Markovian first-passage percolation with a few extra edges
MSC 2010: Operations research and management science
MSC 2010: Graph theory
Alexey Medvedev, Gábor Pete
One model of real-life spreading processes is that of first-passage percolation (also called the SI model) on random graphs. Social interactions often follow bursty patterns, which are usually modelled with independent and identically distributed heavy-tailed passage times on edges. On the other hand, random graphs are often locally tree-like, and spreading on trees with leaves might be very slow due to bottleneck edges with huge passage times. Here we consider the SI model with passage times following a power-law distribution $\mathbb{P}(\xi > t) \sim t^{-\alpha}$ with infinite mean. For any finite connected graph $G$ with a root $s$, we find the largest number of vertices $\kappa(G,s)$ that are infected in finite expected time, and prove that for every $k \le \kappa(G,s)$, the expected time to infect $k$ vertices is at most $O(k^{1/\alpha})$. Then we show that adding a single edge from $s$ to a random vertex in a random tree $\mathcal{T}$ typically increases $\kappa(\mathcal{T},s)$ from a bounded variable to a fraction of the size of $\mathcal{T}$, thus severely accelerating the process. We examine this acceleration effect on some natural models of random graphs: critical Galton–Watson trees conditioned to be large, uniform spanning trees of the complete graph, and on the largest cluster of near-critical Erdős–Rényi graphs. In particular, at the upper end of the critical window, the process is already much faster than exactly at criticality.
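Passage times with such a power-law tail can be sampled by inverse transform: if $U$ is uniform on $(0,1)$, then $U^{-1/\alpha}$ satisfies $\mathbb{P}(\xi > t) = t^{-\alpha}$ for $t \ge 1$ (a generic sketch, not code from the paper):

```python
import random

def heavy_tail_passage_time(alpha, rng=random):
    """Sample xi with P(xi > t) = t**(-alpha) for t >= 1 via inverse
    transform; for alpha <= 1 the mean is infinite."""
    return rng.random() ** (-1.0 / alpha)

rng = random.Random(42)
samples = [heavy_tail_passage_time(0.5, rng) for _ in range(100000)]
# the empirical tail P(xi > 4) should be close to 4**(-0.5) = 0.5
print(sum(x > 4 for x in samples) / len(samples))
```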
STOCHASTIC MODELS FOR OPTICALLY TRAPPED NANOWIRES
MSC 2010: Applications to specific types of physical systems
MSC 2010: Random dynamical systems
IGNACIO ORTEGA-PIWONKA
Journal: Bulletin of the Australian Mathematical Society / Volume 98 / Issue 2 / October 2018
Degree-dependent threshold-based random sequential adsorption on random trees
MSC 2010: Computer system organization
Thomas M. M. Meyfroyt
Journal: Advances in Applied Probability / Volume 50 / Issue 1 / March 2018
We consider a special version of random sequential adsorption (RSA) with nearest-neighbor interaction on infinite tree graphs. In classical RSA, starting with a graph with initially inactive nodes, each of the nodes of the graph is inspected in a random order and is irreversibly activated if none of its nearest neighbors are active yet. We generalize this nearest-neighbor blocking effect to a degree-dependent threshold-based blocking effect. That is, each node of the graph is assumed to have its own degree-dependent threshold and if, upon inspection of a node, the number of active direct neighbors is less than that node's threshold, the node will become irreversibly active. We analyze the activation probability of nodes on an infinite tree graph, given the degree distribution of the tree and the degree-dependent thresholds. We also show how to calculate the correlation between the activity of nodes as a function of their distance. Finally, we propose an algorithm which can be used to solve the inverse problem of determining how to set the degree-dependent thresholds in infinite tree graphs in order to reach some desired activation probabilities.
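The inspection procedure itself is straightforward to sketch for any finite graph (my own illustration; with all thresholds equal to 1 it reduces to classical RSA):

```python
import random

def threshold_rsa(adjacency, thresholds, seed=0):
    """Degree-dependent threshold RSA: visit nodes in uniformly random
    order; irreversibly activate a node iff the number of its already
    active neighbours is below that node's threshold."""
    rng = random.Random(seed)
    order = list(adjacency)
    rng.shuffle(order)
    active = set()
    for v in order:
        if sum(u in active for u in adjacency[v]) < thresholds[v]:
            active.add(v)
    return active

# classical RSA (all thresholds 1) on the path 0-1-2 yields a maximal
# independent set: either {1} or {0, 2}, depending on the visit order
path = {0: [1], 1: [0, 2], 2: [1]}
print(threshold_rsa(path, {0: 1, 1: 1, 2: 1}))
```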
Overly determined agents prevent consensus in a generalized Deffuant model on ℤ with dispersed opinions
MSC 2010: Stochastic processes
Timo Hirscher
Published online by Cambridge University Press: 08 September 2017, pp. 722-744
During the last decades, quite a number of interacting particle systems have been introduced and studied in the crossover area of mathematics and statistical physics. Some of these can be seen as simplistic models for opinion formation processes in groups of interacting people. In the model introduced by Deffuant et al. (2000), agents that are neighbors on a given network graph, randomly meet in pairs and approach a compromise if their current opinions do not differ by more than a given threshold value θ. We consider the two-sided infinite path ℤ as the underlying graph and extend existing models to a setting in which opinions are given by probability distributions. Similar to what has been shown for finite-dimensional opinions, we observe a dichotomy in the long-term behavior of the model, but only if the initial narrow mindedness of the agents is restricted.
Comprehensive Studies on Rarefied Jet and Jet Impingement Flows with Gaskinetic Methods
MSC 2010: Probabilistic methods, simulation and stochastic differential equations
MSC 2010: Rarefied gas flows, Boltzmann equation
Chunpei Cai, Xin He, Kai Zhang
Journal: Communications in Computational Physics / Volume 22 / Issue 3 / September 2017
This paper presents comprehensive studies on two closely related problems of high speed collisionless gaseous jet from a circular exit and impinging on an inclined rectangular flat plate, where the plate surface can be diffuse or specular reflective. Gaskinetic theories are adopted to study the problems, and several crucial geometry-location and velocity-direction relations are used. The final complete results include flowfield properties such as density, velocity components, temperature and pressure, and impingement surface properties such as coefficients of pressure, shear stress and heat flux. Also included are the averaged coefficients for pressure, friction, heat flux, moment over the whole plate, and the averaged distance from the moment center to the plate center. The final results include complex but accurate integrations involving the geometry and specific speed ratios, inclination angle, and the temperature ratio. Several numerical simulations with the direct simulation Monte Carlo method validate these analytical results, and the results are essentially identical. Exponential, trigonometric, and error functions are embedded in the solutions. The results illustrate that the past simple cosine function approach is rather crude, and should be used cautiously. The gaskinetic method and processes are heuristic and can be used to investigate other external high Knudsen number impingement flow problems, including the flowfield and surface properties for high Knudsen number jet from an exit and flat plate of arbitrary shapes. The results are expected to find many engineering applications.
A Flux-Corrected Phase-Field Method for Surface Diffusion
Yujie Zhang, Wenjing Ye
Journal: Communications in Computational Physics / Volume 22 / Issue 2 / August 2017
Phase-field methods with a degenerate mobility have been widely used to simulate surface diffusion motion. However, apart from the motion induced by surface diffusion, adverse effects such as shrinkage, coarsening, and false merging have been observed in the results obtained from current phase-field methods, which largely affect their accuracy and numerical stability. In this paper, a flux-corrected phase-field method is proposed to improve the performance of phase-field methods for simulating surface diffusion. The three effects were numerically studied for the proposed method and compared with those observed in two existing methods, the original phase-field method and the profile-corrected phase-field method. Results show that, compared to the original phase-field method, the shrinkage effect in the profile-corrected phase-field method has been significantly reduced. However, coarsening and false merging effects are still present and can be significant in some cases. The flux-corrected phase-field method performs best in terms of eliminating the shrinkage and coarsening effects. The false merging effect still exists when the diffuse regions of different interfaces overlap with each other, but it is much reduced compared to the other two methods.
A spatial model for selection and cooperation
Peter Czuppon, Peter Pfaffelhuber
Journal: Journal of Applied Probability / Volume 54 / Issue 2 / June 2017
We study the evolution of cooperation in an interacting particle system with two types. The model we investigate is an extension of a two-type biased voter model. One type (called defector) has a (positive) bias α with respect to the other type (called cooperator). However, a cooperator helps a neighbor (either defector or cooperator) to reproduce at rate γ. We prove that the one-dimensional nearest-neighbor interacting dynamical system exhibits a phase transition at α = γ. A special choice of interaction kernels yields that for α > γ cooperators always die out, whereas for γ > α cooperation is the winning strategy.
Linear Stability of Hyperbolic Moment Models for Boltzmann Equation
MSC 2010: Partial differential equations on manifolds; differential operators
MSC 2010: Partial differential equations
Yana Di, Yuwei Fan, Ruo Li, Lingchao Zheng
Journal: Numerical Mathematics: Theory, Methods and Applications / Volume 10 / Issue 2 / May 2017
Grad's moment models for the Boltzmann equation were recently regularized to globally hyperbolic systems, so that the regularized models attain local well-posedness for Cauchy data. The hyperbolic regularization is related only to the convection term in the Boltzmann equation. In this paper we study the regularized models in the presence of collision terms. It is proved that the regularized models are linearly stable at the local equilibrium and satisfy Yong's first stability condition with commonly used approximate collision terms, and in particular with Boltzmann's binary collision model.
Analysis of the Closure Approximation for a Class of Stochastic Differential Equations
MSC 2010: Numerical linear algebra
Yunfeng Cai, Tiejun Li, Jiushu Shao, Zhiming Wang
Motivated by the numerical study of spin-boson dynamics in quantum open systems, we present a convergence analysis of the closure approximation for a class of stochastic differential equations. We show that the naive Monte Carlo simulation of the system by direct temporal discretization is not feasible through variance analysis and numerical experiments. We also show that the Wiener chaos expansion exhibits very slow convergence and high computational cost. Though efficient and accurate, the rationale of the moment closure approach remains mysterious. We rigorously prove that the low moments in the moment closure approximation of the considered model are of exponential convergence to the exact result. It is further extended to more general nonlinear problems and applied to the original spin-boson model with similar structure.
Recovering the Damping Rates of Cyclotron Damped Plasma Waves from Simulation Data
MSC 2010: Astronomy and astrophysics
MSC 2010: Numerical methods in Fourier analysis
MSC 2010: Partial differential equations, initial value and time-dependent initial-boundary value problems
Cedric Schreiner, Patrick Kilian, Felix Spanier
Journal: Communications in Computational Physics / Volume 21 / Issue 4 / April 2017
Plasma waves with frequencies close to the particular gyrofrequencies of the charged particles in the plasma lose energy due to cyclotron damping. We briefly discuss the gyro-resonance of low frequency plasma waves and ions, particularly with regard to particle-in-cell (PiC) simulations. A setup is outlined which uses artificially excited waves in the damped regime of the wave mode's dispersion relation to track the damping of the wave's electromagnetic fields. Extracting the damping rate directly from the field data in real or Fourier space is an intricate and non-trivial task. We therefore present a simple method of obtaining the damping rate Γ from the simulation data. This method is described in detail, focusing on a step-by-step explanation of the course of actions. In a first application to a test simulation we find that the damping rates obtained from this simulation are generally in good agreement with theoretical predictions. We then compare the results of one-, two- and three-dimensional simulation setups and simulations with different physical parameter sets.
The entirely coupled region of supercritical contact processes
Achillefs Tzioufas
Published online by Cambridge University Press: 24 October 2016, pp. 925-929
We consider translation-invariant, finite-range, supercritical contact processes. We show the existence of unbounded space-time cones within which the descendancy of the process from full occupancy may with positive probability be identical to that of the process from the single site at its apex. The proof comprises an argument that leans upon refinements of a successful coupling among these two processes, and is valid in d dimensions.
\begin{definition}[Definition:Well-Founded Relation/Definition 1]
Let $\struct {S, \RR}$ be a relational structure.
$\RR$ is a '''well-founded relation on $S$''' {{iff}}:
:$\forall T \subseteq S: T \ne \O: \exists z \in T: \forall y \in T \setminus \set z: \tuple {y, z} \notin \RR$
where $\O$ is the empty set.
That is, $\RR$ is a '''well-founded relation on $S$''' {{iff}}:
:for every non-empty subset $T$ of $S$, there exists an element $z$ in $T$ such that for all $y \in T \setminus \set z$, it is not the case that $y \mathrel \RR z$.
\end{definition}
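The defining condition is directly checkable on finite relational structures. As an illustration (the function name and the example relations below are ours, not part of the definition), a brute-force Python check over all non-empty subsets:

```python
# Sketch: checking the well-foundedness condition on a finite (S, R),
# with R given as a set of ordered pairs (y, z) meaning y R z.
from itertools import chain, combinations

def is_well_founded(S, R):
    """True iff every non-empty subset T of S contains an element z such
    that for all y in T \\ {z}, (y, z) is not in R."""
    subsets = chain.from_iterable(combinations(S, r) for r in range(1, len(S) + 1))
    for T in subsets:
        if not any(all((y, z) not in R for y in T if y != z) for z in T):
            return False
    return True

S = {0, 1, 2}
# The strict order < on {0, 1, 2} is well-founded: min(T) always works.
R_lt = {(y, z) for y in S for z in S if y < z}
print(is_well_founded(S, R_lt))   # True
# A 2-cycle is not: T = {0, 1} has no qualifying element z.
R_cyc = {(0, 1), (1, 0)}
print(is_well_founded(S, R_cyc))  # False
```

Note the check is exponential in |S|, which is harmless for small illustrative structures.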
\begin{document}
\def\spacingset#1{\renewcommand{\baselinestretch}
{#1}\small\normalsize} \spacingset{1}
{\title{\bf Adaptive Estimation for Non-stationary Factor Models And A Test for Static Factor Loadings}
\author{\small Weichi Wu \\
\small Center for Statistical Science, \small \\\small Department of Industrial Engineering,\\
\small Tsinghua University\\
\and
\small Zhou Zhou \\
\small Department of Statistical Sciences \\
\small University of Toronto\\
}
\maketitle
}
\spacingset{1.5}
\begin{abstract} This paper considers the estimation and testing of a class of high-dimensional non-stationary time series factor models with evolutionary temporal dynamics. In particular, the entries and the dimension of the factor loading matrix are allowed to vary with time while the factors and the idiosyncratic noise components are locally stationary. We propose an adaptive sieve estimator for the span of the varying loading matrix and the locally stationary factor processes. A uniformly consistent estimator of the effective number of factors is investigated via eigenanalysis of a non-negative definite time-varying matrix. A high-dimensional bootstrap-assisted test for the hypothesis of static factor loadings is proposed by comparing the kernels of the covariance matrices of the whole time series with their local counterparts. We examine our estimator and test via simulation studies and real data analysis. \end{abstract}
\textbf{Keywords:} Time series factor model, local stationarity, high dimensional time series, test of static factor loadings, adaptive estimation.
\section{Introduction} Technology advancement has made it easy to record simultaneously a large number of stochastic processes of interest over a relatively long period of time where the underlying data generating mechanisms of the processes are likely to evolve over the long observation time span. As a result both high dimensional time series analysis (\cite{wei2018multivariate}) and non-stationary time series analysis (\cite{dahlhaus2012locally}) have undergone unprecedented developments over the last two decades. This paper focuses on the following evolutionary linear factor model for a multivariate non-stationary time series: \begin{eqnarray}\label{eq:fm} \mathbf x_{i,n}=\mathbf A(i/n)\mathbf z_{i,n}+\mathbf e_{i,n}, \end{eqnarray} where $\{\mathbf x_{i,n}\}_{i=1}^n$ is a $p$-dimensional observed time series, $\mathbf A(t)$: $[0,1]\rightarrow \field{R}^{p\times d(t)}$ is a matrix-valued function of possibly time-varying factor loadings and the number of factors $d(t)$ is assumed to be a piecewise constant function of time, $\{\mathbf z_{i,n}\}_{i=1}^n$ is a $d(i/n)$-dimensional unobserved sequence of common factors and $\{\mathbf e_{i,n}\}_{i=1}^n$ are the idiosyncratic components. Here $p=p_n$ may diverge to infinity with the time series length $n$ and $d(t)$ is typically much smaller than $p$ uniformly over $t$. Note that $\mathbf x_{i,n}$, $\mathbf z_{i,n}$ and $\mathbf e_{i,n}$ are allowed to be locally-stationary processes for which the generating mechanism varies with time, see \eqref{eq-2} for detailed formulation. Throughout the article we assume that $\{\mathbf e_{i,n}\}$ and $\{\mathbf z_{i,n}\}$ are centered.
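To fix ideas, model \eqref{eq:fm} is easy to simulate. The following sketch is our own toy construction (the loading function, the time-varying AR coefficient and the noise law are illustrative choices, not the paper's simulation design): a smooth $p\times d$ loading matrix is applied to a locally stationary AR(1)-type factor process.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, d = 500, 20, 2                      # sample size, dimension, number of factors

def A(t):
    """Toy p x d loading matrix with entries Lipschitz in t (our choice)."""
    base = np.outer(np.linspace(1.0, 2.0, p), np.ones(d))
    return base * np.array([np.cos(np.pi * t), 1.0 + 0.5 * t])

# locally stationary factors: AR(1) whose coefficient varies smoothly with i/n
z = np.zeros((n, d))
for i in range(1, n):
    phi = 0.3 + 0.4 * (i / n)
    z[i] = phi * z[i - 1] + rng.standard_normal(d)

e = rng.standard_normal((n, p))           # idiosyncratic components (i.i.d. here)
x = np.stack([A(i / n) @ z[i] + e[i] for i in range(n)])
print(x.shape)                            # (500, 20)
```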
The version of model \eqref{eq:fm} with constant loadings is among the most popular dimension reduction tools for the analysis of multivariate stationary time series (\cite{stock2011dynamic}, \cite{tsay2013multivariate}, \cite{wei2018multivariate}). According to the model assumptions adapted and estimation methods used, it seems that recent literature on linear time series factor models mainly falls into two types. The celebrated cross-sectional averaging method (summarized in \cite{stock2011dynamic}) which is popular in the econometric literature of linear factor models, exploits the assumption of weak dependence among the vector components of $\mathbf e_{i,n}$ and hence achieves de-noising via cross-sectional averaging. See for instance \cite{stock2002forecasting}, \cite{bai2002determining}, \cite{bai2003inferential} and \cite{forni2000generalized} among many others. One advantage of the cross-sectional averaging method is that it allows for a very high dimensionality. In general the method requires that $p$ diverges to achieve consistency and the estimation accuracy improves as $p$ gets larger under the corresponding model assumptions. On the other hand, the linear factor model can also be fitted by exploring the relationship between the factor loading space and the auto-covariance or the spectral density matrices of the time series under appropriate assumptions. This method dates back at least to the works of \cite{anderson1963use}, \cite{brillinger2001time} and \cite{pena1987identifying} among others for fixed dimensional multivariate time series and is extended to the high dimensional setting by the recent works of \cite{lam2011estimation}, \cite{lam2012factor}, \cite{wang2019factor} and others. The latter method allows for stronger contemporary dependence among the vector components and is consistent when $p$ is fixed under the requirement that the idiosyncratic components form a white noise.
To date, the literature on evolutionary linear factor models is scarce and the existing results are focused on extensions of the cross-sectional averaging method. Among others, \cite{breitung2011testing} and \cite{yamamoto2015testing} considered testing for structural changes in the factor loadings and \cite{motta2011locally} and \cite{su2017time} considered evolutionary model \eqref{eq:fm} using the cross-sectional averaging method. \cite{eichler2011fitting} and \cite{barigozzi2021time} studied non-stationary dynamic factor models. In this paper, we shall extend the second estimation method mentioned in the last paragraph to the case of evolutionary factor loadings with locally stationary factor and idiosyncratic component time series whose data generating mechanisms change smoothly over time. Using this framework, our approach to the factor model estimation contributes to the literature mainly in the following two aspects. \begin{description}
\item (a) To estimate the time-varying loading matrix, the prevailing approach in the literature is the local-constant kernel estimator, see for example \cite{motta2011locally}, \cite{su2017time}. It seems that it is difficult to extend the local-constant method to general local polynomial methods for factor models under the cross-sectional averaging set-up and therefore the estimation accuracy of the existing methods is not adaptive to the smoothness (with respect to time) of the factor loading matrix function. In this paper, we propose an alternative adaptive estimation method based on the method of sieves (\cite{grenander1981abstract},\cite{chen2007large}). The sieve method is computationally simple to implement and has the advantage of being {\it adaptive to the unknown smoothness of the target function} if certain linear sieves such as the Fourier basis (for periodic functions), the Legendre polynomials or the orthogonal wavelets are used (\cite{chen2007large}, \cite{wang2012convergence}). Specifically, we adapt the method of sieves to estimate the high-dimensional auto-covariance matrices of $\mathbf x_{i,n}$ at each time point and subsequently estimate the span of the loadings $\mathbf A(t)$ at each $t$ exploiting the relationship between $\mathbf A(\cdot)$ and the kernel of the latter local auto-covariance matrices. We will show that the span of $\mathbf A(\cdot)$ can be estimated at a rate independent of $p$ uniformly over time provided that all factors are strong with order $p^{1/2}$ Euclidean norms, extending the corresponding result for factor models with static loadings established in \cite{lam2011estimation}.
\item (b) In most literature for time-varying factor models such as \cite{motta2011locally}, \cite{su2017time}, to estimate the time-varying loading matrix, it is assumed that the number of factors is constant over time. Typically further requirements on the factor process such as independence or time-invariance of its covariance matrix had to be added. In this paper, we model the factor process as a general locally stationary time series and allow the number of factors to be time-varying. Uniform consistency of the estimated span of the loading matrix as well as the number of factors will be established without assuming that the positive eigenvalues of the corresponding matrices are distinct which is commonly posited in the literature of factor models.
\end{description}
Testing whether $\mathbf A(\cdot)$ is constant over time is important in the application of \eqref{eq:fm}. In the literature, among others \cite{breitung2011testing} proposed LR, LM and Wald statistics for testing static factor model against an alternative of piece-wise constant loadings, and \cite{yamamoto2015testing} improved the power of \cite{breitung2011testing} by maximizing the test statistic over possible numbers of the original factors. Assuming piece-wise stationarity, \cite{barigozzi2018simultaneous} estimated the change points of a factor model via wavelet transformations. \cite{su2017time} considered an ${\cal L}^2$ test of static factor loadings under the cross-sectional averaging method assuming that each component of $\mathbf e_{i,n}$ is a martingale difference sequence. Here we shall propose a high-dimensional ${\cal L}^\infty$ or maximum deviation test on the time-invariance of the span of $\mathbf A(\cdot)$ which utilizes the observation that the kernel of the full-sample auto-covariance matrices coincides with all of its local counterparts under the null hypothesis of static span of loadings while the latter observation is likely to fail when the span of $\mathbf A(\cdot)$ is time-varying. Using the uniform convergence rates of the estimated factor loadings established in this paper, the test statistic will be shown to be asymptotically equivalent to the maximum deviation of the sum of a high-dimensional locally stationary time series under some mild conditions. A multiplier bootstrap procedure with overlapping blocks is adapted to approximate the critical values of the test. The bootstrap will be shown to be asymptotically correct under the null and powerful under a large class of local alternatives. The assumptions of our testing procedure are different from those of the existing literature in the following two aspects. \begin{description}
\item (c) Under the null hypothesis of constant $\mathbf A(\cdot)$, the common components of the time series, i.e. $\mathbf A(i/n)\mathbf z_{i,n}$, considered in the above-mentioned works are stationary or have time-invariant variance-covariance. With locally stationary $\mathbf z_{i,n}$ assumed in this paper, under the null hypothesis, the common components are allowed to be locally stationary where their variance-covariance matrices can be smoothly time-varying.
\item (d) The validity of the tests of the above works is built on the divergence of both the length of time series $n$ and the dimension of the time series $p$. In contrast, our proposed test is proved to be asymptotically correct when $p$ is fixed or diverging slowly with $n$. Therefore, our test is suitable to investigate the loading matrix of fixed or moderately high-dimensional vector time series such as the COVID 19 death data investigated in Section \ref{Covid19-US}, which records the daily COVID 19 deaths for $n=591$ days in $57$ places of the U.S. and three countries in the compacts of Free Association with the U.S (hence $p=60$). \end{description}
The paper is organized as follows. Section \ref{sec::notation} introduces some notation. Section \ref{ex1} explains the locally stationarity of the factor model \eqref{eq:fm}. The assumptions of the model are investigated in Section \ref{Sec::Pre}. Sections \ref{Sec:estimate} and \ref{sec:test_loading} discuss the estimation of the evolutionary factor loading matrices and the test of static factor loadings, respectively, and the corresponding theoretical results are contained in Section \ref{Sec:load}. Section \ref{Sec4} investigates the power performance of the test of the static factor loading. Section \ref{Sec::Tuning} gives out methods for tuning parameter selection. Numerical studies are displayed in Section \ref{Simu-Results}. Section \ref{data-ana} illustrates a real world financial data application and a COVID 19 data analysis. Section \ref{appendix} provides the proofs of
Theorem \ref{Thm-approx}, Theorem \ref{Space-Distance}, Theorem \ref{eigentheorem}, Proposition \ref{Jan23-Lemma6} and Theorem \ref{Jan23-Thm4}. The proofs of the remaining theorems, propositions, lemmas and corollaries are relegated to the online supplemental material.
\section{Notation}\label{sec::notation}
For two series $a_n$ and $b_n$, write $a_n \asymp b_n$
if there exist $0<m_1<m_2<\infty$ such that $m_1\leq \liminf \frac{|a_n|}{|b_n|}\leq \limsup \frac{|a_n|}{|b_n|}\leq m_2<\infty$. Write $a_n\lessapprox b_n$ ($a_n\gtrapprox b_n$) if there exists a uniform constant $M$ such that $a_n\leq Mb_n$ ($a_n\geq Mb_n$). Let $A:=B$ represent `$A$ is defined as $B$'. For any $p$ dimensional (random) vector $\mathbf v=(v_1,...,v_p)^\top$, write $\|\mathbf v\|_u=(\sum_{s=1}^p|v_s|^u)^{1/u}$, and the corresponding $\mathcal L^v$ norm $\|\mathbf v\|_{\mathcal L^v}=(\field{E}(\|\mathbf v\|^v_2))^{1/v}$ for $v\ge 1$. For any matrix $\mathbf F$ let $\lambda_{max}(\mathbf F)$ be its largest eigenvalue, and $\lambda_{min}(\mathbf F)$ be its smallest eigenvalue. Let $\|\mathbf F\|_F=(\text{trace} (\mathbf F^\top\mathbf F))^{1/2}$ denote the Frobenius norm, and $\|\mathbf F\|_{m}$ be the smallest nonzero singular value of $\mathbf F$. Denote by $vec(\mathbf F)$ the vector obtained by stacking the columns of $\mathbf F$. Let $\|\mathbf F\|_2=\sqrt{\lambda_{max}(\mathbf F\mathbf F^\top)}$. In particular, if $\mathbf F$ is a vector, then $\|\mathbf F\|_2=\|\mathbf F\|_F$. For any vector or matrix $\mathbf A=(a_{ij})$ let $|\mathbf A|_{\infty}=\max_{i,j}|a_{ij}|$. For any integer $v$ denote by $\mathbf I_v$ the $v\times v$ identity matrix. Let $|\mathcal I|$ denote the length of the interval $\mathcal I$. Let $\mathcal C^{K}(\tilde M)[0,1]$ be the collection of functions $f$ defined on $[0,1]$ such that the $K_{th}$ order derivative of $f$ is Lipschitz continuous with Lipschitz constant $\tilde M$, $\tilde M>0$. Regarding the number of factors, let $d=\sup_{0\leq t\leq 1}d(t)$. \section{Local stationarity of Model \eqref{eq:fm}}\label{ex1}\setcounter{equation}{0}
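The matrix norms introduced above are easy to verify numerically; in particular $\|\mathbf F\|_m$ is the smallest *nonzero* singular value, which differs from the smallest singular value for rank-deficient matrices. A small sketch with a toy rank-2 matrix of our choosing:

```python
import numpy as np

# rank-deficient 3 x 3 example (ours): singular values are 3, 1, 0
F = np.array([[3.0, 0.0, 0.0],
              [0.0, 1.0, 0.0],
              [0.0, 0.0, 0.0]])

s = np.linalg.svd(F, compute_uv=False)   # singular values, descending
fro  = np.linalg.norm(F, 'fro')          # ||F||_F = sqrt(9 + 1) = sqrt(10)
spec = s[0]                              # ||F||_2 = largest singular value = 3
F_m  = s[s > 1e-12].min()                # ||F||_m = smallest NONZERO singular value = 1

print(fro, spec, F_m)
```

Here the smallest singular value is 0, so $\|\mathbf F\|_m = 1$ illustrates why the "nonzero" qualifier matters for rank-deficient loading matrices.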
We allow the number of factors and the dimension of the loading matrix of model \eqref{eq:fm} to be time-varying. Since the number and the dimension are integers, it is delicate to define a `smoothly changing' factor number or `smoothly changing' dimensions directly, whereas the concept of `smooth change' is the key assumption of locally stationary models and of nonparametric smoothing approaches. Moreover, in the current literature many assumptions, including stationarity and dependence strength, are not directly applicable to time series with possibly changing dimensions such as $\mathbf z_{i,n}$ in model \eqref{eq:fm}. On the other hand, the dimension of $\mathbf x_{i,n}$ is time invariant, so $\mathbf x_{i,n}$ can conveniently be assumed locally stationary. In the following we show that the evolutionary factor model \eqref{eq:fm} can be derived from a factor model with a fixed factor dimension $d$ while the rank of the loading matrix varies with time. The latter representation is beneficial for the theoretical investigation of the model and provides insight into how smooth changes in the loading matrix cause the dimensionality of the factor space to change.
Consider a $p\times d$ matrix $\mathbf A^*(t)=(\mathbf a^*_1(t),....,\mathbf a^*_{d}(t))$ where $\mathbf a^*_s(t), 1\leq s\leq d$ are $p$ dimensional smooth functions, $\sup_{t\in [0,1]}\|\mathbf a^*_s(t)\|^2_2\asymp p^{1-\delta}$, and $\inf_{t\in [0,1]}\|\mathbf A^*(t)\|_F\asymp p^{\frac{1-\delta}{2}},\sup_{t\in [0,1]}\|\mathbf A^*(t)\|_F\asymp p^{\frac{1-\delta}{2}}$. The parameter $\delta$ refers to the factor strength and will be discussed in condition (C1) later. Let $\mathbf z^*_{i,n}$ and $\mathbf e_{i,n}$ be $d$ and $p$ dimensional locally stationary time series, respectively. Here local stationarity refers to a slowly or smoothly time-varying data generating mechanism of a time series, for which we refer to \cite{dahlhaus1997fitting} and \cite{zhou2009local} among others for rigorous treatments. Consider the following model with a {\it fixed} dimensional loading matrix \begin{align}\label{eq:fix} \mathbf x_{i,n}=\mathbf A^*(i/n)\mathbf z^*_{i,n}+\mathbf e_{i,n}. \end{align} Using singular value decomposition (SVD), model \eqref{eq:fix} can be written as \begin{align}\label{SVD} \mathbf x_{i,n}=p^{\frac{1-\delta}{2}}\mathbf U(i/n)\boldsymbol \Sigma(i/n)\mathbf { W^{\top}}(i/n)\mathbf z^*_{i,n}+\mathbf e_{i,n},
\end{align} where $\boldsymbol\Sigma(i/n)$ is a $p\times d$ rectangular diagonal matrix with diagonal $\Sigma_{uu}(i/n)=\sigma_u(i/n)$ for $1\leq u\leq d$, $(\sigma_u(i/n))_{1\leq u\leq d}$ are singular values of $\mathbf A^*(i/n)/p^{\frac{1-\delta}{2}}$, $\mathbf U(i/n)\mathbf U^{\top}(i/n)=\mathbf I_p$ and $\mathbf W^{\top}(i/n)\mathbf W(i/n)=\mathbf I_d$. Therefore $\max_{1\leq u\leq d}\sup_{t\in (0,1]}\sigma_u(t)$ is bounded. Further, the $(\sigma_u(i/n))_{1\leq u\leq d}$ are ordered such that $\sigma_1(i/n)\geq...\geq \sigma_d(i/n)\geq 0$. Let $d(t)$ be the number of nonzero singular values $\sigma_u(t)$ and equation \eqref{SVD} can be further written as \begin{align}\label{eq-equive} \mathbf x_{i,n}=p^{\frac{1-\delta}{2}}\tilde{\mathbf U}(i/n)\tilde{\boldsymbol \Sigma}(i/n)\tilde{\mathbf { W}}^{\top}(i/n)\mathbf z^*_{i,n}+\mathbf e_{i,n}, \end{align}
where $\tilde{\boldsymbol \Sigma}(i/n)$ is the matrix obtained by deleting the rows and columns of $\boldsymbol\Sigma(i/n)$ whose diagonal entries are zero, and $\tilde{\mathbf U}(i/n)$ and $\tilde{\mathbf { W}}^{\top}(i/n)$ are the matrices resulting from the deletion of the corresponding columns and rows of ${\mathbf U}(i/n)$ and ${\mathbf { W}}^{\top}(i/n)$, respectively. Let $\mathbf A(i/n)= p^{\frac{1-\delta}{2}}\tilde{\mathbf U}(i/n)\tilde{\boldsymbol \Sigma}(i/n)$ and $\mathbf z_{i,n}=\tilde{\mathbf { W}}^{\top}(i/n)\mathbf z^*_{i,n}$; then $\mathbf A(i/n)$ is a $p\times d(i/n)$ matrix and $\mathbf z_{i,n}$ is a length $d(i/n)$ locally stationary vector. Notice that $\tilde{\mathbf z}_{i,n}:=\mathbf {W^{\top}}(i/n)\mathbf z^*_{i,n}$ in \eqref{SVD} is the collection of all factors in the sense that all the entries of $\mathbf z_{i,n}$ are elements of $\tilde{\mathbf z}_{i,n}$ for each $i$. The fact that model \eqref{eq:fm} can be obtained from model \eqref{eq-equive} has the following implications. First, though we allow $d(t)$ to jump from one integer to another over time, it is appropriate to assume that $\mathbf x_{i,n}$ of \eqref{eq:fm} is locally stationary, since model \eqref{eq:fix} is locally stationary as long as $\mathbf z^*_{i,n}$ and $\mathbf e_{i,n}$ are locally stationary processes and each element of $\mathbf A^*(t)$ is Lipschitz continuous. Notice that the dimensions of $\mathbf z^*_{i,n}$ and $\mathbf A^*(t)$ are time-invariant. Second, we observe from the SVD that $\|\mathbf A^*(i/n)\|_m=\|\mathbf A(i/n)\|_m$ will be close to zero in a small neighborhood around the time when $d(t)$ of model \eqref{eq:fm} changes, in which case it is difficult to estimate the rank of $\mathbf A^*(\cdot)$ or the number of factors accurately. We will exclude such small neighborhoods in our asymptotic investigations.
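The reduction from \eqref{SVD} to \eqref{eq-equive} can be mimicked numerically: compute the SVD of the loading matrix at time $t$, drop the zero singular values, and read off $d(t)$ as the number retained. A hedged sketch (the function name and tolerance are ours; the scaling $p^{\frac{1-\delta}{2}}$ is absorbed into the singular values here):

```python
import numpy as np

def effective_loading(A_star, tol=1e-10):
    """Given A*(t) (p x d), return the reduced loading A(t) = U~ Sigma~,
    the effective number of factors d(t), and the retained rows of W^T."""
    U, s, Wt = np.linalg.svd(A_star, full_matrices=False)
    keep = s > tol
    d_t = int(keep.sum())                  # number of nonzero singular values
    A_t = U[:, keep] * s[keep]             # p x d(t) reduced loading matrix
    return A_t, d_t, Wt[keep]              # z_{i,n} = (retained W^T) z*_{i,n}

# rank-deficient toy example: the second column is twice the first
A_star = np.array([[1.0, 2.0],
                   [2.0, 4.0],
                   [3.0, 6.0]])
A_t, d_t, Wt = effective_loading(A_star)
print(d_t)                                 # 1
print(np.allclose(A_t @ Wt, A_star))       # True: same column space is spanned
```

The reconstruction check at the end mirrors the fact that \eqref{SVD} and \eqref{eq-equive} describe the same common component.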
The varying $d(t)$ has been considered in piecewise stationary factor models and factor models with piecewise constant loading matrices, where the jump points of $d(t)$ correspond to the change points of the factor models; see for instance \cite{barigozzi2018simultaneous} and \cite{ma2018estimation} among others.
\section{Model Assumptions} \label{Sec::Pre} Let $\tilde {\mathbf z}_{i,n}$ be the vector consisting of all factors which appear during the considered period, and the components of $\mathbf z_{i,n}$ are always elements of $\tilde{\mathbf z}_{i,n}$. Adapting the formulation in \cite{zhou2009local}, we model the locally stationary time series $\mathbf x_{i,n}$, $\tilde{\mathbf z}_{i,n}$ and $\mathbf e_{i,n}$, $1\leq i\leq n$ as follows: \begin{align}\label{eq-2} \mathbf x_{i,n}=\mathbf G(i/n,\mathcal{F}_i),~ \tilde{\mathbf z}_{i,n}=\mathbf Q(i/n,\mathcal{F}_i),~ \mathbf e_{i,n}=\mathbf H(i/n,\mathcal{F}_i) \end{align} where the filtration $\mathcal{F}_i=(\boldsymbol \epsilon_{-\infty},...,\boldsymbol \epsilon_{i-1},\boldsymbol \epsilon_i)$ with $\{\boldsymbol \epsilon_i\}_{i\in \mathbb Z}$ i.i.d. random elements, and $\mathbf G$, $\mathbf Q$ and $\mathbf H$ are $p$, $d$ and $p$ dimensional measurable nonlinear filters. The $j_{th}$ entry of the time series $\mathbf x_{i,n}$, $\tilde{\mathbf z}_{i,n}$ and $\mathbf e_{i,n}$ can be written as $x_{i,j,n}=G_j(i/n,\mathcal{F}_i)$, $\tilde z_{i,j,n}=Q_j(i/n,\mathcal{F}_i)$ and $e_{i,j,n}=H_j(i/n,\mathcal{F}_i)$. Let $\{\boldsymbol \epsilon'_i\}_{i\in \mathbb Z}$ be an independent copy of $\{\boldsymbol \epsilon_i\}_{i\in \mathbb Z}$ and denote by $\mathcal{F}^{(h)}_{i}=(\boldsymbol \epsilon_{-\infty},...\boldsymbol \epsilon_{h-1},\boldsymbol \epsilon_h',\boldsymbol \epsilon_{h+1},...,\boldsymbol \epsilon_i)$ for $h\leq i$, and $\mathcal{F}^{(h)}_{i}=\mathcal{F}_i$ otherwise. The dependence measures for $\mathbf x_{i,n}$, $\tilde{\mathbf z}_{i,n}$ (or $\mathbf z_{i,n}$ equivalently) and $\mathbf e_{i,n}$ in $\mathcal L^l$ norm are defined as (\cite{zhou2009local}) \begin{align*}
\delta_{G,l}(k):=\sup_{t\in[0,1],i\in \mathbb Z, 1\leq j\leq p}\delta_{G,l,j}(k):=\sup_{t\in[0,1],i\in \mathbb Z, 1\leq j\leq p}\field{E}^{1/l}(|G_j(t,\mathcal{F}_i)-G_j(t,\mathcal{F}_i^{(i-k)})|^l),\\
\delta_{Q,l}(k):=\sup_{t\in[0,1],i\in \mathbb Z, 1\leq j\leq d}\delta_{Q,l,j}(k):=\sup_{t\in[0,1],i\in \mathbb Z, 1\leq j\leq d}\field{E}^{1/l}(|Q_j(t,\mathcal{F}_i)-Q_j(t,\mathcal{F}_i^{(i-k)})|^l),\\
\delta_{H,l}(k):=\sup_{t\in[0,1],i\in \mathbb Z, 1\leq j\leq p}\delta_{H,l,j}(k):=\sup_{t\in[0,1],i\in \mathbb Z, 1\leq j\leq p}\field{E}^{1/l}(|H_j(t,\mathcal{F}_i)-H_j(t,\mathcal{F}_i^{(i-k)})|^l), \end{align*} which quantify the magnitude of change of the systems $\mathbf G, \mathbf Q, \mathbf H$ in $\mathcal L^l$ norm when the inputs of the systems $k$ steps ahead are replaced by their $i.i.d.$ copies. To formally state the requirements of model \eqref{eq:fm}, we define some further notation. At time $i/n$, the $d(i/n)$ dimensional factors $\mathbf z_{i,n}=( z_{i,l_1,n},..., z_{i,l_{d(i/n)},n})^\top$ where the index set $\{l_1,...,l_{d(i/n)}\}\subset \{1,2,...,d\}$. Let $\underline{\mathbf Q}(t,\mathcal{F}_i)=(Q_{l_1}(t,\mathcal{F}_i),....,Q_{l_{d(t)}}(t,\mathcal{F}_i))^\top$, and let $\boldsymbol \Sigma_{z}(t,k)=\field{E}(\underline{\mathbf Q}(t,\mathcal{F}_{i+k})\underline{\mathbf Q}^\top(t,\mathcal{F}_i))$ and $\boldsymbol \Sigma_e(t,k)=\field{E}(\mathbf H(t,\mathcal{F}_{i+k})\mathbf H^\top(t,\mathcal{F}_i))$ denote the $k_{th}$ order auto-covariance of $\mathbf z_{i,n}$ and $\mathbf e_{i,n}$ at time $t$, respectively. Let $\boldsymbol \Sigma_{ze}(t,k)=\field{E}(\underline{\mathbf Q}(t,\mathcal{F}_{i+k})\mathbf H^\top(t,\mathcal{F}_i))$, $\boldsymbol \Sigma_{ez}(t,k)=\field{E}(\mathbf H(t,\mathcal{F}_{i+k})\underline{\mathbf Q}^\top(t,\mathcal{F}_i))$ and $\boldsymbol \Sigma_{x}(t,k)=\field{E}(\mathbf G(t,\mathcal{F}_{i+k})\mathbf G^\top(t,\mathcal{F}_i))$. By Section \ref{ex1} it is natural to assume that $\mathbf x_{i,n}$ is locally stationary, which implies that $\boldsymbol \Sigma_{x}(t,k)$ is smooth with respect to $t$.
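For a concrete linear filter the coupling-based dependence measure admits a closed form that can be verified by Monte Carlo. Taking the toy causal filter $X_i=\sum_{j\ge 0}a^j\epsilon_{i-j}$ with i.i.d. standard normal $\epsilon_i$ (our example, not a model from the paper), replacing $\epsilon_{i-k}$ by an i.i.d. copy changes $X_i$ by $a^k(\epsilon_{i-k}-\epsilon'_{i-k})$, so the $\mathcal L^2$ dependence measure is $\delta_2(k)=\sqrt{2}\,|a|^k$:

```python
import numpy as np

rng = np.random.default_rng(1)
a, k, m, reps = 0.6, 3, 50, 200_000       # AR weight, lag, truncation, MC size

# X_i = sum_{j=0}^{m} a^j eps_{i-j}; couple by replacing the lag-k input
eps  = rng.standard_normal((reps, m + 1))
epsp = eps.copy()
epsp[:, k] = rng.standard_normal(reps)    # i.i.d. copy of eps_{i-k}
w = a ** np.arange(m + 1)
X, Xp = eps @ w, epsp @ w

delta_mc = np.sqrt(np.mean((X - Xp) ** 2))
delta_th = np.sqrt(2.0) * a ** k          # closed form sqrt(2) |a|^k
print(delta_mc, delta_th)
```

The geometric decay of $\delta_2(k)$ in this example is the prototype of the short-range dependence condition (M1) below.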
We assume the following conditions for model \eqref{eq:fm}. \begin{description}
\item (A1) Let $a_{ij}(t)$, $1\leq i\leq p$, $1\leq j\leq d(t)$ be the $(i,j)_{th}$ element of $\mathbf A(t)$. We assume there exists a sufficiently large constant $M$ such that
\begin{align}
\sup_{t\in [0,1]}|a^{}_{ij}(t)|\leq M.
\end{align}
\item (A2) Let $\sigma_{x,i,j}(t,k)$ be the $(i,j)_{th}$ element of $\boldsymbol \Sigma_x(t,k)$. Assume that $\sigma_{x,i,j}(t,k)$, $1\leq i\leq p$, $1\leq j \leq p$, $1\leq k\leq k_0$, belong to a common functional space $\Omega$ which is equipped with an orthonormal basis $\{B_j(t)\}$, i.e. $\int_{0}^1 B_m(t)B_n(t)dt=\mathbf 1(m=n)$, where $\mathbf 1(\cdot)$ is the indicator function. Assume $\Omega \subset \mathcal C^{K}(\tilde M)[0,1]$ for some $K\geq 2$. Moreover
\begin{align}\label{Basisapprox}
\max_{1\le i\leq p,1\leq j\leq p}\sup_{t\in [0,1]}|\sigma_{x,i,j}(t,k)-\sum_{u=1}^{J_n} \tilde \sigma_{x,i,j,u}(k)B_u(t)|=O_p(g_{J_n,K, \tilde M}),
\end{align}
where $\tilde \sigma_{x,i,j,u}(k)=\int_0^1 \sigma_{x,i,j}(t,k)B_u(t)dt$, and $g_{J_n,K, \tilde M}\rightarrow 0$ as $J_n\rightarrow \infty$.
\end{description}
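Condition (A2) can be illustrated with the shifted Legendre polynomials $B_u(t)=\sqrt{2u+1}\,P_u(2t-1)$, which form an orthonormal basis of $L^2[0,1]$ and are among the linear sieves mentioned in the Introduction. The sketch below (the toy target function, grid size and truncation level are our choices) expands a smooth function and reports the sup-norm approximation error corresponding to \eqref{Basisapprox}:

```python
import numpy as np
from numpy.polynomial import legendre as L

def B(u, t):
    """Shifted, normalized Legendre basis on [0,1] (orthonormal)."""
    c = np.zeros(u + 1)
    c[u] = 1.0
    return np.sqrt(2 * u + 1) * L.legval(2 * t - 1, c)

sigma = lambda t: np.sin(2 * np.pi * t) + t ** 2   # toy smooth entry of Sigma_x(t,k)

t = (np.arange(20000) + 0.5) / 20000               # midpoint grid on [0,1]
J = 12
coef = [np.mean(sigma(t) * B(u, t)) for u in range(J)]   # tilde sigma_u (midpoint rule)
approx = sum(c * B(u, t) for u, c in enumerate(coef))

print(np.max(np.abs(sigma(t) - approx)))           # sup-norm error, tiny for smooth targets
```

As (A2) suggests, for smooth targets the error decays rapidly in $J_n$; here $J=12$ already gives a negligible sup-norm error.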
Condition (A1) concerns the boundedness of the loading matrix, while (A2) means that $\boldsymbol \Sigma_x(t,k)$ can be approximated by a basis expansion. The approximation error rate $g_{J_n,K, \tilde M}$ diminishes as $J_n$ increases. Often higher differentiability yields a more accurate approximation rate.
We present condition (C1) on factor strength (cf. Section 2.3 of \cite{lam2011estimation}) as follows. \begin{description}
\item (C1) Write $\mathbf A(t)=(\mathbf a_1(t),....,\mathbf a_{d(t)}(t))$ where $\mathbf a_s(t), 1\leq s\leq d(t)$ are $p$ dimensional vectors. Then $\sup_{t\in [0,1]}\|\mathbf a_s(t)\|^2_2\asymp p^{1-\delta}$
for $1\leq s\leq d(t)$ for some constant $\delta\in [0,1]$.
Besides,
the matrix norm of $\mathbf A(t)$ satisfies
\begin{align}
\inf_{t\in [0,1]}\|\mathbf A(t)\|_F\asymp p^{\frac{1-\delta}{2}},\sup_{t\in [0,1]}\|\mathbf A(t)\|_F\asymp p^{\frac{1-\delta}{2}},
\inf_{t\in \mathcal T_{\eta_n}}\|\mathbf A(t)\|_m\geq \eta^{1/2}_n p^{\frac{1-\delta}{2}}
\end{align}
for a positive sequence $\eta_n=O(1)$ on a collection of intervals $\mathcal T_{\eta_n}\subset [0,1]$. \end{description}
As in \cite{lam2011estimation} and \cite{lam2012factor}, $\delta=0$ and $\delta>0$ in (C1) correspond to strong and weak factor strengths, respectively. Assumption (C1) means that the $d(t)$ factors in the model are of equal strength $\delta$ on $[0,1]$, and $\|\mathbf A(t)\|_m$ is at least of the order $\eta^{1/2}_n p^{\frac{1-\delta}{2}}$ on $\mathcal T_{\eta_n}$ which enables us to correctly identify the number of factors on $\mathcal T_{\eta_n}$. By our discussions of model \eqref{eq:fix}, $\mathcal T_{\eta_n}$ excludes small neighborhoods around the change points of the number of factors.
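In the static-loading special case, the relationship between the span of $\mathbf A$ and the lag-$k$ auto-covariances underlies the eigenanalysis estimator of the number of factors in \cite{lam2011estimation}: one forms the non-negative definite matrix $\mathbf M=\sum_{k=1}^{k_0}\boldsymbol\Sigma_x(k)\boldsymbol\Sigma_x^\top(k)$ and counts its dominant eigenvalues. The following is a simplified sketch in that spirit (the eigenvalue-ratio rule and all tuning choices are our own simplifications, not the time-varying estimator proposed in this paper):

```python
import numpy as np

def estimate_num_factors(x, k0=2):
    """Eigen-ratio estimate of the number of factors from lag-k
    auto-covariances (static-loading sketch; details simplified)."""
    n, p = x.shape
    xc = x - x.mean(axis=0)
    M = np.zeros((p, p))
    for k in range(1, k0 + 1):
        S = xc[k:].T @ xc[:-k] / n          # sample lag-k auto-covariance
        M += S @ S.T                        # non-negative definite by construction
    lam = np.sort(np.linalg.eigvalsh(M))[::-1]
    ratios = lam[1:p // 2] / lam[:p // 2 - 1]
    return int(np.argmin(ratios)) + 1       # largest relative eigen-gap

# static-loading toy data: d = 2 strong AR(1) factors plus white noise
rng = np.random.default_rng(2)
n, p, d = 2000, 10, 2
A = rng.standard_normal((p, d))
z = np.zeros((n, d))
for i in range(1, n):
    z[i] = 0.7 * z[i - 1] + rng.standard_normal(d)
x = z @ A.T + 0.5 * rng.standard_normal((n, p))
print(estimate_num_factors(x))             # 2 with high probability
```

Since the idiosyncratic part is white noise, its population lag-$k$ auto-covariances vanish, so the rank of $\mathbf M$ reveals the number of factors; the ratio rule simply locates the resulting eigen-gap.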
See Remark \ref{remarketa} of Section \ref{Sec:load} for the use of $\eta_n$ in practice. The assumptions for $\mathbf z_{i,n}$ and $\mathbf e_{i,n}$ are now in order. \begin{description}
\item (M1) The short-range dependence conditions hold for both $\mathbf z_{i,n}$ and $\mathbf e_{i,n}$ in the $\mathcal L^l$ norm, i.e. \begin{align}
\Delta_{Q,l,m}:=\max_{1\leq j\leq d}\sum_{k=m}^\infty \delta_{Q,l,j}(k)=o(1),\ \ \Delta_{H,l,m}:=\max_{1\leq j\leq p}\sum_{k=m}^\infty \delta_{H,l,j}(k)=o(1)
\end{align} as $m\rightarrow \infty$
for some constant $l\geq 4$.
\item (M2) There exists a constant $M$ such that
\begin{align*}
\sup_{t\in[0,1]}\max_{1\leq u\leq d}\field{E} | Q_u(t,\mathcal{F}_0)|^4\leq M, ~\sup_{t\in [0,1]}\max_{1\leq v\leq p}\field{E} |H_v(t,\mathcal{F}_0)|^4\le M.
\end{align*}
\item (M3) For $t,s\in [0,1]$, there exists a constant $M$ such that
\begin{align}&\left(\field{E} |Q_u(t,\mathcal{F}_0)-Q_u(s,\mathcal{F}_0)|^2\right)^{1/2}\leq M|t-s|, 1\leq u\leq d,\label{8-11-10}
\\&\left(\field{E} |H_v(t,\mathcal{F}_0)-H_v(s,\mathcal{F}_0)|^2\right)^{1/2}\leq M|t-s|, 1\leq v\leq p,\label{8-11-11}\\
&\left(\field{E} |G_v(t,\mathcal{F}_0)-G_v(s,\mathcal{F}_0)|^2\right)^{1/2}\leq M|t-s|, 1\leq v\leq p.\label{8-11-12}
\end{align} \end{description}
Conditions (M1)-(M3) mean that each coordinate process of $\tilde{\mathbf z}_{i,n}$ and $\mathbf e_{i,n}$ is a standard short-memory locally stationary time series as defined in the literature; see for instance \cite{zhou2009local}. Furthermore, the moment condition (M2) implies that for $0\leq t\leq 1$, all elements of the matrices $\mathbf \Sigma_{ze}(t,k)$ and $\boldsymbol \Sigma_e(t,k)$ are bounded in the ${\cal L}^4$ norm.
We then postulate the following assumptions on the covariance matrices of the common factors $\mathbf z_{i,n}$ and the idiosyncratic components $\mathbf e_{i,n}$, which are needed for spectral decomposition: \begin{description}
\item(S1) For $t\in [0,1]$ and $k=1,...,k_0$, all components of $\mathbf \Sigma_{e}(t,k)$ are $0$.
\item (S2) For $k=0,1,...,k_0$, $\mathbf \Sigma_z(t,k)$ is of full rank on some sub-interval $\mathcal I_0$ of $[0,1]$.
\item (S3) For $t\in [0,1]$ and $k=1,...,k_0$, all components of $\mathbf \Sigma_{ez}(t,k)$ are $0$.
\item (S4) For $t\in [0,1]$ and $1\leq k\leq k_0$, $\|\mathbf \Sigma_{ze}(t,k)\|_{F}=o(\eta^{1/2}_n p^{\frac{1-\delta}{2}})$, where $\eta_n$ is the sequence defined in condition (C1).
\end{description}
Condition (S1) indicates that $(\mathbf e_{i,n})$ does not have auto-covariance up to order $k_0$, which is slightly weaker than the requirement used in the literature that $(\mathbf e_{i,n})$ is a white noise process. Condition (S2) implies that for $1\leq i\leq n$, there exists a period during which no linear combination of the components of $\mathbf z_{i,n}$ is white noise that could be absorbed into $\mathbf e_{i,n}$. (S3) implies that $\mathbf z_{i,n}$ and $\mathbf e_{i+k,n}$ are uncorrelated for any $k\ge 0$. Condition (S4) requires a weak correlation between $\mathbf z_{i+k,n}$ and $\mathbf e_{i,n}$; in fact it is the non-stationary extension of Condition (i) in Theorem 1 of \cite{lam2011estimation} and condition (C6) of \cite{lam2012factor}. Though (C6) of \cite{lam2012factor} assumes a rate of $o(p^{1-\delta})$, it requires standardization of the factor loading matrix $\mathbf A$. If $d(t)$ is piecewise constant with a bounded number of change points, then $|\mathcal T_{\eta_n}|\rightarrow 1$ as $n\rightarrow \infty$ and $\eta_n\rightarrow 0$. If $d(t)\equiv d$ we can assume that $\mathcal T_{\eta_n}=[0,1]$ for some sufficiently small positive $\eta_n:=\eta>0$. \begin{remark}\label{remark-s-1}We now discuss the equivalent assumptions on the fixed-dimensional loading matrix model \eqref{eq:fix}. Define $\boldsymbol \Sigma_{z^*}(t,k)$, $\boldsymbol \Sigma_{z^*e^*}(t,k)$, $\boldsymbol \Sigma_{e^*}(t,k)$, $\boldsymbol \Sigma_{e^*z^*}(t,k)$ similarly to $\boldsymbol \Sigma_{z}(t,k)$, $\boldsymbol \Sigma_{ze}(t,k)$, $\boldsymbol \Sigma_{e}(t,k)$, $\boldsymbol \Sigma_{ez}(t,k)$ and assume (S) for these quantities. By construction conditions (M) are satisfied. In particular, for model \eqref{eq:fix}, \eqref{8-11-10} and \eqref{8-11-11} imply \eqref{8-11-12}. Conditions (A) are satisfied if each element of $\mathbf A^*(t)$, $\boldsymbol \Sigma_{z^*}(t,k)$ and $\mathbf \Sigma_{z^*e^*}(t,k)$, $k=0,...,k_0$ has a bounded $K$-th order derivative.
Condition (C1) is satisfied with $\mathcal T_{\eta_n}$ corresponding to the period when the smallest nonzero eigenvalue of $\boldsymbol \Sigma(\cdot)$ exceeds $\eta_n$.
\end{remark}
\section{Model Estimation}\label{Sec:estimate}\setcounter{equation}{0}
Observe from equation (\ref{eq:fm}) and conditions (S1)-(S4) that \begin{eqnarray}\label{Lambdanull} \boldsymbol \Sigma_x(t,k)=\mathbf A(t)\boldsymbol \Sigma_z(t,k)\mathbf A^\top(t)+\mathbf A(t)\boldsymbol \Sigma_{ze}(t,k),\quad k\ge 1. \end{eqnarray} Further define $\mathbf \Lambda(t)=\sum_{k=1}^{k_0}\boldsymbol \Sigma_x(t,k)\boldsymbol \Sigma^\top_x(t,k)$ for some pre-specified integer $k_0$ and we have \begin{align}\label{DefLambda} \mathbf \Lambda(t)=\mathbf A(t)\Big[\sum_{k=1}^{k_0}(\boldsymbol \Sigma_z(t,k)\mathbf A^\top(t)+\boldsymbol \Sigma_{ze}(t,k))(\mathbf A(t)\boldsymbol \Sigma^\top_z(t,k)+\boldsymbol \Sigma^\top_{ze}(t,k))\Big]\mathbf A^\top(t). \end{align} Therefore in principle $\mathbf A(t)$ can be identified through the null space of $\mathbf \Lambda(t)$. The use of $\mathbf \Lambda(t)$ was advocated in \cite{lam2011estimation}, and in this paper we aim at estimating a time-varying orthonormal basis of this time-varying null space, which is identifiable up to rotation, to characterize $\mathbf A(t)$. Therefore we do not impose the condition utilized by \cite{lam2011estimation} that the loading matrix is normalized for identification. As we discussed in the introduction, fitting factor models using relationships between the factor space and the null space of the auto-covariance matrices has a long history.
In the following we shall propose a nonparametric sieve-based method for time-varying loading matrix estimation which is adaptive to the smoothness (with respect to $t$) of the covariance function $\boldsymbol \Sigma_x(t,k)$. For a pre-selected set of orthonormal basis functions $\{B_j(t)\}_{j=1}^\infty$ satisfying the basis approximation condition (A2), we shall approximate $\boldsymbol \Sigma_x(t,k)$ by a finite but diverging order basis expansion \begin{align}\boldsymbol \Sigma_x(t,k)\approx \sum_{j=1}^{J_n}\int_0^1\boldsymbol \Sigma_x(u,k)B_j(u)duB_j(t),\label{sieveapprox}\end{align} where the order $J_n$ diverges to infinity. The speed of divergence is determined by the smoothness of $\boldsymbol \Sigma_x(t,k)$ with respect to $t$. Motivated by \eqref{sieveapprox} we propose to estimate $\mathbf \Lambda(t)$ by the following $\hat{\mathbf \Lambda}(t)$:\begin{align} \hat {\mathbf \Lambda}(t)=\sum_{k=1}^{k_0}\hat {\mathbf M}(J_n,t,k)\hat {\mathbf M}^\top(J_n,t,k),~~ \text{where}~~ \hat {\mathbf M}(J_n,t,k)=\sum_{j=1}^{J_n} \tilde {\mathbf \Sigma}_{x,j,k}B_j(t) \label{11-12-11},\\\label{11-12-10} \tilde{\boldsymbol \Sigma}_{x,j,k}=\frac{1}{n}\sum_{i=1}^{n-k}\mathbf x_{i+k}\mathbf x_{i}^\top B_j(\frac{i}{n}). \end{align} Let $\lambda_1(\hat {\mathbf \Lambda}(t))\geq \lambda_2(\hat {\mathbf \Lambda}(t))\geq...\geq\lambda_p(\hat {\mathbf \Lambda}(t))$ denote the eigenvalues of $\hat {\mathbf \Lambda}(t)$, and $\hat {\mathbf V}(t)=(\hat {\bf v}_1(t),...,\hat {\bf v}_{d(t)}(t))$ where $\hat {\bf v}_i(t)'s$ are the eigenvectors of $\hat{\mathbf \Lambda}(t) $ corresponding to $\lambda_1(\hat {\mathbf \Lambda}(t))$,...,$\lambda_{d(t)}(\hat {\mathbf \Lambda}(t))$. Then we estimate the column space of $\mathbf A(t)$ by \begin{align}\label{span-estimate}Span(\hat{\mathbf v}_1(t),...,\hat {\mathbf v}_{d(t)}(t)).\end{align}
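To fix ideas, the estimator $\hat{\mathbf \Lambda}(t)$ in \eqref{11-12-11}-\eqref{11-12-10} can be sketched in a few lines of code. The following Python fragment is purely illustrative: the synthetic data and the choices of $n$, $p$, $k_0$ and $J_n$ are our own, and a normalized Legendre basis is used for $\{B_j\}$.

```python
import numpy as np
from numpy.polynomial import legendre as leg

rng = np.random.default_rng(0)
n, p, k0, Jn = 400, 6, 2, 4
X = rng.standard_normal((n, p))   # placeholder for the observations x_{i,n}

def B(t, j):
    """Normalized Legendre polynomial B_j, orthonormal on [0, 1]."""
    c = np.zeros(j); c[-1] = 1.0
    return np.sqrt(2 * j - 1) * leg.legval(2 * np.asarray(t) - 1, c)

def Lambda_hat(t):
    """hat Lambda(t) = sum_k hat M(J_n, t, k) hat M(J_n, t, k)^T."""
    Lam = np.zeros((p, p))
    for k in range(1, k0 + 1):
        M = np.zeros((p, p))
        for j in range(1, Jn + 1):
            # tilde Sigma_{x,j,k} = (1/n) sum_i x_{i+k} x_i^T B_j(i/n)
            w = B(np.arange(1, n - k + 1) / n, j)
            Sigma_jk = (X[k:].T * w) @ X[:n - k] / n
            M += Sigma_jk * B(t, j)
        Lam += M @ M.T
    return Lam

Lam = Lambda_hat(0.5)
w_eig, V = np.linalg.eigh(Lam)        # eigenvalues in ascending order
w_eig, V = w_eig[::-1], V[:, ::-1]    # reorder to decreasing, as in the paper
```

The column space of $\mathbf A(t)$ is then estimated by the leading $d(t)$ columns of \texttt{V}, as in \eqref{span-estimate}.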
To implement this approach it is required to determine $d(t)$, which we estimate using a modified version of the eigen-ratio statistic advocated by \cite{lam2012factor}; that is, we estimate \begin{align}\label{Feb4-hatd} \hat d_n(t)=\mathop{\mbox{argmin}}_{1\leq i\leq R(t)}\lambda_{i+1}(\hat {\mathbf \Lambda}(t)) /\lambda_{i}(\hat {\mathbf \Lambda}(t)), \end{align} where $R(t)$ is the largest integer less than or equal to $p/2$ such that \begin{align}\label{R-def}
\frac{ |\lambda_{R(t)} (\hat{ \boldsymbol \Lambda}(t))|}{
\sqrt{\sum_{i=1}^p \lambda^2_{i} (\hat{ \boldsymbol \Lambda}(t)) }}\geq C_0\eta_n/\log n \end{align} for some positive constant $C_0$. Asymptotic theory established in Section \ref{Sec:load} guarantees that the percentage of variance explained by each of the first $d(t)$ eigenvalues will be much larger than $\eta^2_n/\log^2 n$ on $\mathcal T_{\eta_n}$ with high probability, which motivates us to restrict the upper bound of the search range $R(t)$ via \eqref{R-def}.
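The eigen-ratio rule \eqref{Feb4-hatd} with the search bound \eqref{R-def} can be sketched as follows; the constants $C_0$ and $\eta_n$ and the toy spectrum below are illustrative choices of ours, assuming the eigenvalues of $\hat{\mathbf \Lambda}(t)$ are available in decreasing order.

```python
import numpy as np

def estimate_d(lam, n, C0=0.1, eta_n=0.5):
    """Eigen-ratio estimate of d(t) from eigenvalues lam (decreasing)."""
    p = len(lam)
    thresh = C0 * eta_n / np.log(n)       # threshold in the bound on R(t)
    denom = np.sqrt(np.sum(lam ** 2))
    # R(t): largest i <= p/2 whose normalized eigenvalue exceeds the threshold
    R = max([i for i in range(1, p // 2 + 1)
             if abs(lam[i - 1]) / denom >= thresh], default=1)
    ratios = lam[1:R + 1] / lam[:R]       # lambda_{i+1}/lambda_i, i = 1..R
    return int(np.argmin(ratios)) + 1

# Toy spectrum with two dominant eigenvalues: the ratio dips after i = 2.
lam = np.array([10.0, 8.0, 0.05, 0.04, 0.02, 0.01])
d_hat = estimate_d(lam, n=500)            # recovers d = 2 here
```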
\section{Test for Static Factor Loadings}\label{sec:test_loading}\setcounter{equation}{0} It is of practical interest to test $H_0$: $\mbox{span}(\mathbf A(t))=\mbox{span}(\mathbf A)$, where $\mathbf A$ is a $p\times d$ matrix. In other words, one can find a time-invariant matrix $\mathbf A$ to represent the factor loading matrices throughout time. Without loss of generality, we shall assume that $\mathbf A(t)=\mathbf A$ under the null hypothesis throughout the rest of the paper if no confusion arises. Under the null hypothesis, $\boldsymbol \Sigma_x(t,k)=\mathbf A\boldsymbol \Sigma_z(t,k)\mathbf A^\top+\mathbf A\boldsymbol \Sigma_{ze}(t,k)$ for $k\neq 0.$ Observe that testing $H_0$ is more subtle than testing covariance stationarity of $\mathbf x_{i,n}$ as both $\mathbf z_{i,n}$ and $\mathbf e_{i,n}$ can be non-stationary under the null. By equation \eqref{eq:fm}, we have, under $H_0$, $$\int_{0}^1\boldsymbol \Sigma_x(t,k)\,dt=\mathbf A\int_{0}^1(\boldsymbol \Sigma_z(t,k)\mathbf A^\top+\boldsymbol \Sigma_{ze}(t,k))\,dt,\quad k> 0.$$ Consider the following quantity $\boldsymbol\Gamma_k$ and its estimate $\hat {\boldsymbol\Gamma}_k$: \begin{align*} \mathbf \Gamma_k=\int_0^1\boldsymbol \Sigma_x(t,k)\,dt \int_{0}^1\boldsymbol \Sigma^\top_x(t,k)\,dt, ~\hat {\mathbf \Gamma}_k=(\sum_{i=1}^{n-k}\mathbf x_{i+k,n}\mathbf x^\top_{i,n}/n)(\sum_{i=1}^{n-k}\mathbf x_{i+k,n}\mathbf x^\top_{i,n}/n)^\top. \end{align*}
Let $\mathbf \Gamma=\sum_{k=1}^{k_0} \mathbf \Gamma_k$ and $\hat {\mathbf \Gamma}=\sum_{k=1}^{k_0} \hat {\mathbf \Gamma}_k$. Then the kernel space of $\mathbf A$ can be estimated by the kernel of $\hat {\mathbf \Gamma}$ under $H_0$. Let $\hat {\mathbf f}_{i}$ and ${\mathbf f}_{i}$, $i=1,...,p-d$ be the orthonormal eigenvectors of $\hat{\mathbf \Gamma}$ and $\mathbf \Gamma$ w.r.t. ($\lambda_{d+1}(\hat {\mathbf \Gamma})$,...,$\lambda_p(\hat {\mathbf \Gamma})$) and ($\lambda_{d+1}({\mathbf \Gamma})$,...,$\lambda_p( {\mathbf \Gamma})$), respectively. Write $\mathbf F=(\mathbf f_1,...,\mathbf f_{p-d})$, $\hat{\mathbf F}=(\hat{\mathbf f}_1,...,\hat {\mathbf f}_{p-d})$.
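The quantities $\hat{\mathbf \Gamma}_k$, $\hat{\mathbf \Gamma}$ and the estimated kernel basis $\hat{\mathbf F}$ can be computed directly, as the following sketch shows; the data-generating step (a static loading matrix, so that $H_0$ holds) is purely illustrative and all sizes are our own toy choices.

```python
import numpy as np

rng = np.random.default_rng(1)
n, p, d, k0 = 300, 5, 2, 2
A = rng.standard_normal((p, d))                 # static loading (H_0 holds)
Z = rng.standard_normal((n, d)).cumsum(0) / 10  # persistent toy factors
X = Z @ A.T + 0.1 * rng.standard_normal((n, p))

Gamma = np.zeros((p, p))
for k in range(1, k0 + 1):
    S = X[k:].T @ X[:n - k] / n   # sum_i x_{i+k} x_i^T / n
    Gamma += S @ S.T              # hat Gamma_k, accumulated over k

# hat F: orthonormal eigenvectors of hat Gamma associated with the
# p - d smallest eigenvalues, spanning the estimated kernel space
F_hat = np.linalg.eigh(Gamma)[1][:, :p - d]
```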
The test is constructed by segmenting the time series into non-overlapping equal-sized blocks of size $m_n$. Without loss of generality, consider $n-k_0=m_nN_n$ for integers $m_n$ and $N_n$. Define for $1\leq h\leq N_n$ the index set $b_h=\{(h-1)m_n+1,...,hm_n\}$ and the statistic \begin{align} \hat{\boldsymbol\Sigma}^x_{h,k}=\sum_{i\in b_h}\mathbf x_{i+k,n}\mathbf x^\top_{i,n}/m_n.
\end{align} Then we define the test statistic $\hat T_n$
\begin{align}\label{eq15hatTn}\hat T_n=\sqrt{m_n}\max_{1\le k\le k_0}\max_{1\leq h\leq N_n}\max_{1\le i\le p-\hat d_n}|\hat{\mathbf f}^\top_i\hat{\boldsymbol \Sigma}^{x}_{h,k}|_\infty, \end{align} where $\hat d_n$ is an estimate of $d$. The test $\hat T_n$ utilizes the observation that the kernel space of the full sample statistic $\hat{\mathbf \Gamma}$ coincides with that of $\boldsymbol \Sigma_{x}(t,k)$ for each $t$ under the null, while the coincidence is likely to fail under the alternative. Hence the products of $\hat{\mathbf f}_i^\top$, $1\leq i\leq p-d$, with $\hat{\boldsymbol \Sigma}^{x}_{h,k}$ should be small in the ${\cal L}^\infty$ norm uniformly in $h$ under the null, while some of these products are likely to be large under the alternative. We then propose a multiplier bootstrap procedure with overlapping blocks to determine the critical values of $\hat T_n$. Define for $1\leq i\leq m_n$, $1\leq h\leq N_n$, $1\leq k\leq k_0$, the $(p-d)p$-dimensional vector \begin{align}\label{hatz} \hat{\boldsymbol l}_{i,h,k}=vec((\hat{\mathbf F}^\top \mathbf x_{(h-1)m_n+i+k,n}\mathbf x_{(h-1)m_n+i,n}^\top)^\top), \end{align} and the $k_0(p-d)p$-dimensional vector $\hat{\boldsymbol l}_{i,h,\cdot}=(\hat{\boldsymbol l}_{i,h,1}^\top,\dots, \hat{\boldsymbol l}_{i,h,k_0}^\top)^\top.$ Further define \begin{align}\label{new.eq11} \hat{\boldsymbol l}_i=(\hat{\boldsymbol l}^\top_{i,1,\cdot},...,\hat{\boldsymbol l}^\top_{i,N_n,\cdot})^\top \end{align}
for $1\leq i\leq m_n$. Notice that $\hat T_n=|\frac{\sum_{i=1}^{m_n}\hat {\boldsymbol l}_i}{\sqrt{m_n}}|_\infty$. For given $m_n$, let $ \hat {\mathbf s}_{j,w_n}=\sum_{r=j}^{j+w_n-1}\hat {\boldsymbol l}_r$ for $1\leq j\leq m_n-w_n+1$ and $ \hat {\mathbf s}_{m_n}=\sum_{r=1}^{m_n}\hat {\boldsymbol l}_r$, where $w_n=o(m_n)$ and $w_n\rightarrow \infty$ is the window size. Define \begin{align}\label{kappan} \boldsymbol \kappa_n=\frac{1}{\sqrt{w_n(m_n-w_n+1)}}\sum_{j=1}^{m_n-w_n+1}(\hat {\mathbf s}_{j,w_n}-\frac{w_n}{m_n}\hat {\mathbf s}_{m_n})R_j \end{align} where $\{R_i\}_{i\in\mathbb Z}$ are $i.i.d.$ $N(0,1)$ independent of $\{ {\mathbf x}_{i,n},1\leq i\leq n\}$. Then we have the following algorithm for testing static factor loadings:\\\ \\ \noindent {\bf Algorithm for implementing the multiplier bootstrap:} \begin{description}
\item (1) Select $m_n$ and $w_n$ by the Minimal Volatility (MV) method that will be described in Section \ref{selectmn}.
\item (2) Generate $B$ (say 1000) conditionally i.i.d. copies of $K_r=|\boldsymbol \kappa_n^{(r)}|_\infty$, $r=1,\ldots,B$, where $\boldsymbol \kappa_n^{(r)}$ is obtained by \eqref{kappan} via the $r$-th copy of $i.i.d.$ standard normal random variables $\{R_i^{(r)}\}_{i\in \mathbb Z}$.
\item (3) Let $K_{(r)}, 1\leq r\leq B$ be the order statistics of $K_r, 1\leq r\leq B$. Then we reject $H_0$ at level $\alpha$ if $\hat T_n\geq K_{(\lfloor(1-\alpha)B \rfloor)}$. Let $B^*=\min \{r: K_{(r)}\geq \hat T_n\}$; the corresponding $p$-value of the test can be approximated by $1-B^*/B$. \end{description} To implement our test, $d$ will be estimated by \begin{align}\label{eigen-ratio-hatd} \hat d_n=\mathop{\mbox{argmin}}_{1\leq i\leq R}\lambda_{i+1}(\hat {\mathbf \Gamma}) /\lambda_{i}(\hat {\mathbf \Gamma}), \end{align} for some constant $R$ satisfying \eqref{R-def} with $\hat{\mathbf \Lambda} (t)$ there replaced by $\hat{\mathbf \Gamma}$.
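To make the procedure concrete, the statistic $\hat T_n$ in \eqref{eq15hatTn} and the bootstrap statistic $\boldsymbol \kappa_n$ in \eqref{kappan} can be sketched as below. The Python fragment is purely illustrative: the synthetic data, the choices of $m_n$, $w_n$ and $B$, and the crude computation of $\hat{\mathbf F}$ are our own assumptions (in practice $m_n$ and $w_n$ are selected by the MV method in step (1)).

```python
import numpy as np

rng = np.random.default_rng(2)
n, p, d, k0 = 240, 4, 1, 1
mn, wn, B = 40, 8, 200              # block size, window size, bootstrap reps
Nn = (n - k0) // mn                 # number of non-overlapping blocks

A = rng.standard_normal((p, d))     # static loading: H_0 holds in this toy
X = rng.standard_normal((n, d)) @ A.T + rng.standard_normal((n, p))

# hat F: kernel basis of hat Gamma (here k0 = 1, so one lag suffices)
S = X[k0:].T @ X[:n - k0] / n
F_hat = np.linalg.eigh(S @ S.T)[1][:, :p - d]

# hat l_i: stack vec((F^T x_{(h-1)m_n+i+k} x_{(h-1)m_n+i}^T)^T) over h and k
l_hat = np.zeros((mn, k0 * Nn * (p - d) * p))
for i in range(mn):
    pieces = [(F_hat.T @ np.outer(X[h * mn + i + k], X[h * mn + i])).T.ravel()
              for h in range(Nn) for k in range(1, k0 + 1)]
    l_hat[i] = np.concatenate(pieces)

T_hat = np.abs(l_hat.sum(0) / np.sqrt(mn)).max()   # the statistic hat T_n

# kappa_n with i.i.d. N(0,1) multipliers R_j; K_r = |kappa_n^{(r)}|_inf
s_full = l_hat.sum(0)
s_win = np.array([l_hat[j:j + wn].sum(0) for j in range(mn - wn + 1)])
K = np.empty(B)
for r in range(B):
    R = rng.standard_normal(mn - wn + 1)
    kap = (R @ (s_win - wn / mn * s_full)) / np.sqrt(wn * (mn - wn + 1))
    K[r] = np.abs(kap).max()

crit = np.quantile(K, 0.95)         # bootstrap critical value at level 5%
reject = T_hat >= crit
```

Comparing \texttt{T\_hat} with the empirical $95\%$ quantile of the bootstrap copies mimics step (3) of the algorithm.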
\section{Asymptotic Results }\label{Sec:load}\setcounter{equation}{0}
\subsection{Theoretical Results for Model Estimation} Theorem \ref{Thm-approx} provides the estimation accuracy of $\hat{\boldsymbol \Lambda}(t)$ by the sieve method. \begin{theorem}\label{Thm-approx}
Assume conditions (A1), (A2), (C1), (M1), (M2), (M3) and (S1)--(S4) hold. Define $\iota_n=\sup_{1\leq j\leq J_n}Lip_j+\sup_{t,1\leq j\leq J_n}|B_j(t)|$, where $Lip_j$ is the Lipschitz constant of the basis function $B_j(t)$. Write
$\nu_n=\frac{J_n\sup_{t,1\leq j\leq J_n}|B_j(t)|^2}{\sqrt n}+\frac{J_n\sup_{t,1\leq j\leq J_n}|B_j(t)|\iota_n}{n}+g_{J_n,K,\tilde M}$, where the quantity $g_{J_n,K,\tilde M}$ is defined in condition (A2). Assume $\nu_np^\delta=o(1).$ Then we have
\begin{align*}
\Big\|\sup_{t\in [0,1]}\Big\|\hat {\mathbf \Lambda}(t)-\mathbf \Lambda(t)\Big\|_F\Big\|_{\mathcal L^1}=O(p^{2-\delta}\nu_n).
\end{align*} \end{theorem}
From the proof, we shall see that $\|\mathbf \Lambda(t)\|_F$ is of the order $p^{2-2\delta}$ uniformly for $t\in[0,1]$. Hence if we further assume
that $p^\delta \nu_n=o(1)$, then the approximation error of $\hat {\mathbf \Lambda}(t)$ is negligible compared with the magnitude of $\mathbf \Lambda(t)$. For orthonormal Legendre polynomials and trigonometric polynomials it is easy to derive that $Lip_j=O(j^2)$. Similar calculations can be performed for a large class of frequently-used basis functions. The first term of $\nu_n$ is due to the stochastic variation of $\hat {\mathbf M}(J_n,t,k)$, while the second and last terms are due to the basis approximation. \begin{remark}\label{6-26-2020-remark}Consider the case that for $1\leq k\leq k_0$, $1\leq i,j\leq p$, the derivatives $\sigma^{(1)}_{x,i,j}(t,k),\ldots, \sigma_{x,i,j}^{(v-1)}(t,k)$ are
absolutely continuous in $t$, where $\sigma_{x,i,j}^{(m)}(t,k)=\frac{\partial^m\sigma_{x,i,j}(t,k)}{\partial t^m}$ and $v$ is a positive integer. We are able to specify the rate $g_{J_n,v+1,\tilde M}$ for various basis functions.
\begin{description}
\item (a) Assume $\max_{1\leq i,j\leq p,1\leq k\leq k_0}\int_{0}^1\frac{\sigma^{(v+1)}_{x,i,j}(t,k)}{\sqrt{1-t^2}}dt\leq M<\infty$ for some constant $M$ and that $\sigma^{(v)}_{x,i,j}(t,k)$ is of bounded variation for all $i,j$; then the uniform approximation rate $g_{J_n,v+1,\tilde M}$ is $J_n^{-v+1/2}$ for the normalized Legendre polynomial basis. If the $\sigma_{x,i,j}(t,k)$'s are analytic inside and on a Bernstein ellipse then the approximation rate decays geometrically in $J_n$. See for instance \cite{wang2012convergence} for more details.
\item (b) If $\sigma_{x,i,j}^{(v)}(t,k)$ is Lipschitz continuous, then the uniform approximation rate $g_{J_n,v+1,\tilde M}$ is $J_n^{-v}$ when
trigonometric polynomials (if all $\sigma_{x,i,j}(t,k)$ can be extended to periodic functions) or orthogonal wavelets generated by sufficiently high order father and mother wavelets are used. See \cite{chen2007large} for more details.
\end{description} \end{remark}
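The geometric decay claimed for analytic functions in part (a) can be checked numerically. The sketch below is our own toy illustration: \texttt{B} implements the normalized Legendre basis on $[0,1]$, and the target function plays the role of a single smooth entry $\sigma_{x,i,j}(\cdot,k)$.

```python
import numpy as np
from numpy.polynomial import legendre as leg

def B(t, j):
    """Normalized Legendre polynomial B_j, orthonormal on [0, 1]."""
    c = np.zeros(j); c[-1] = 1.0
    return np.sqrt(2 * j - 1) * leg.legval(2 * t - 1, c)

N = 4000
t = (np.arange(N) + 0.5) / N                 # midpoint grid on [0, 1]
f = np.exp(np.sin(2 * np.pi * t))            # an analytic toy "sigma(t, k)"

Jn = 15
# Basis coefficients int_0^1 f(u) B_j(u) du, via the midpoint rule
coef = np.array([(f * B(t, j)).sum() / N for j in range(1, Jn + 1)])
approx = sum(coef[j - 1] * B(t, j) for j in range(1, Jn + 1))
err = np.max(np.abs(f - approx))             # uniform approximation error
```

On this example the trailing coefficients are markedly smaller than the leading one and the uniform error is already small at $J_n=15$, consistent with fast decay of $g_{J_n,K,\tilde M}$ for analytic functions.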
Write $\hat {\mathbf B}(t)=(\hat{\mathbf b}_{d(t)+1}(t),...,\hat{\mathbf b}_{p}(t))$ where $\hat {\mathbf b}_{s}(t)$, $d(t)+1\leq s\leq p$ are eigenvectors of $\hat{\boldsymbol \Lambda}(t)$ corresponding to $\lambda_{d(t)+1}(\hat {\mathbf \Lambda}(t)),\ldots, \lambda_p(\hat {\mathbf \Lambda}(t))$. Obviously $ (\hat{\mathbf v}_1(t),..., \hat{\mathbf v}_{d(t)}(t), \hat {\mathbf b}_{d(t)+1}(t),..., \hat{\mathbf b}_p(t))$ form an orthonormal basis of $\mathbb R^p$.
Define $\mathbf V(t) =(\mathbf v_1(t),...,\mathbf v_{d(t)}(t))$ where the $\mathbf v_i(t)$'s are the eigenvectors of $\boldsymbol \Lambda(t)$ corresponding to the positive eigenvalues $\lambda_i(\mathbf \Lambda(t))$, and $ {\mathbf B}(t)=( \mathbf b_{d(t)+1}(t),...,\mathbf b_{p}(t))$ with $ \mathbf b_{s}(t)$, $d(t)+1\leq s\leq p$ an orthonormal basis of the null space of $\mathbf \Lambda(t)$. Therefore $(\mathbf v_1(t),..., \mathbf v_{d(t)}(t), \mathbf b_{d(t)+1}(t),...,\mathbf b_p(t))$ form an orthonormal basis of $\mathbb R^p$.
\begin{theorem}\label{Space-Distance}
Under conditions of Theorem \ref{Thm-approx}, we have \begin{description}
\item (i) For each $t \in \mathcal T_{\eta_n}$ there exist orthogonal matrices $\hat {\mathbf O}_1(t)\in \mathbb R^{d(t)\times d(t)}$ and $\hat {\mathbf O}_2(t)\in \mathbb R^{(p-d(t))\times (p-d(t))}$ such that
\begin{align*}
\|\sup_{t\in \mathcal T_{\eta_n}}\|\hat {\mathbf V}(t) \hat{\mathbf O}_1(t)-\mathbf V(t)\|_F\|_{\mathcal L^1}=O(\eta_n^{-1} p^{\delta}\nu_n),\\
\|\sup_{t\in \mathcal T_{\eta_n}}\|\hat {\mathbf B}(t) \hat{\mathbf O}_2(t)-\mathbf B(t)\|_F\|_{\mathcal L^1}=O(\eta_n^{-1}p^{\delta}\nu_n).
\end{align*}
\item (ii) Furthermore, if $\limsup_p\sup_{t\in(0,1)}\lambda_{max}(\field{E}(\mathbf H(t,\mathcal{F}_{i})\mathbf H^\top(t,\mathcal{F}_i)))<\infty$ we have that \begin{align*}p
^{-1/2}\max_{1\leq i\leq n, \frac{i}{n}\in \mathcal T_{\eta_n}}\|\hat{\mathbf V}(i/n)\hat{\mathbf V}^\top(i/n)\mathbf x_{i,n}-\mathbf A(i/n)\mathbf z_{i,n}\|_2=O_p( \eta_n^{-1}p^{\delta/2}\nu_n+p^{-1/2}).\end{align*}
\end{description}
\end{theorem}
Assertion (i) follows from Theorem \ref{Thm-approx} and a variant of the Davis-Kahan theorem (\cite{yu2015useful}) which does not require the separation of all non-zero eigenvalues. Assertion (i) involves the orthogonal matrices $\hat {\mathbf O}_1(t)$ and $\hat {\mathbf O}_2(t)$ since it allows multiple eigenvalues at certain time points, which yields the non-uniqueness of the eigen-decomposition. Moreover, when all the factors are strong ($\delta=0$) the rate in (i) is independent of $p$ and reduces to the uniform nonparametric sieve estimation rate for univariate smooth functions, which coincides with the well-known `blessing of dimension' phenomenon for stationary factor models, see for example \cite{lam2011estimation}.
Further assuming $p^{\delta}\nu_n=o(\eta_n)$, Theorem \ref{Space-Distance} guarantees the uniform consistency of the eigen-space estimator on $ \mathcal T_{\eta_n}$. Through studying the estimated eigenvectors, Theorem \ref{Thm-approx} and Theorem \ref{Space-Distance} demonstrate the validity of the estimator \eqref{span-estimate}. The next theorem discusses the properties of the estimated eigenvalues $\lambda_i(\hat{\boldsymbol \Lambda}(t))$, $i=1,\ldots,p$.
\begin{theorem}\label{eigentheorem}
Assume $\nu_np^{\delta}=o(1)$ and that there exists a sequence $g_n\rightarrow \infty$ such that $\frac{\eta_n}{(g_n\nu_np^{\delta})^{1/2}}\rightarrow \infty.$ Then under conditions of Theorem \ref{Space-Distance},
we have that
\begin{description}
\item (i) $\|\sup_{t\in (0,1)}\max_{1\leq j\leq d(t)}| \lambda_j(\hat{\mathbf \Lambda}(t))-\lambda_j(\mathbf \Lambda (t))|\|_{\mathcal L^1}=O(p^{2-\delta}\nu_n)$. \item (ii) $\field{P}(\sup_{t\in \mathcal T_{\eta_n}}\max_{j=d(t)+1,...,p}\lambda_j(\hat {\mathbf \Lambda}(t))\geq g_n^2\eta_n^{-2}\nu_n^2p^2)=O(\frac{1}{g_n})$.
\item (iii)
$1-\field{P}( \frac{ \lambda_{j+1}(\hat{\mathbf \Lambda}( t))}{\lambda_{j}(\hat {\mathbf \Lambda}(t))}\gtrapprox \eta_n, j=1,...,d(t)-1, \forall t\in \mathcal T_{\eta_n})=O(\frac{\nu_n p^{\delta}}{\eta_n})$.
\item (iv)
$\field{P}(\sup_{t\in \mathcal T_{\eta_n}}\frac{ \lambda_{d(t)+1}(\hat{\mathbf \Lambda}( t))}{\lambda_{d(t)}(\hat {\mathbf \Lambda}(t))}\gtrapprox p^{2\delta}g_n^2\nu_n^2/\eta_n^3)=O(\frac{1}{g_n}+\frac{\nu_np^{\delta}}{\eta_n }).$
\end{description}
\end{theorem}
Notice that the term $p^{2\delta}g_n^2\nu_n^2/\eta_n^3$ in (iv) is asymptotically negligible compared with the term $\eta_n$ in (iii). If $d(t)$ has a bounded number of changes, (iii) and (iv) of Theorem \ref{eigentheorem} indicate that the eigen-ratio estimator \eqref{Feb4-hatd} is able to consistently identify the {\it time-varying} number of factors $d(t)$ on intervals with total length approaching $1$.
We remark that most results on linear factor models require that the positive eigenvalues are distinct. Theorems \ref{Space-Distance} and \ref{eigentheorem} are sufficiently flexible to allow multiple eigenvalues of $\boldsymbol \Lambda(t)$. Theorem \ref{eigentheorem} yields the following consistency result for the eigen-ratio estimator \eqref{Feb4-hatd}.
\begin{remark}\label{Rate-basis}
If we assume that $\sigma_{x,i,j}(t,k)'s$ are real analytic and normalized Legendre polynomials or trigonometric polynomials (when all $\sigma_{x,i,j}(t,k)$ can be extended to periodic functions) are used as basis, we shall take $J_n=M\log n$ for some large constant $M$. Then Theorem \ref{Thm-approx} will yield an approximation rate of $\frac{p^{2-\delta}\log n}{\sqrt n}$. Consider $\eta_n=\log^{-c} n$ for some fixed constant $c>0$.
For Theorem \ref{Space-Distance} the approximation rate of (i) is then $\frac{p^{\delta} \log^{1+c} n}{\sqrt n}$ and of (ii) is $p^{-1/2}+\frac{p^{\delta/2} \log^{1+c} n}{\sqrt n}$. Our rate matches that of \cite{lam2011estimation} for stationary high dimensional time series up to a logarithmic factor. For Theorem \ref{eigentheorem}
the approximation rate is $\frac{p^{2-\delta}\log n}{\sqrt n}$ for (i), and the lower-bounds are arbitrarily close to $\frac{p^2\log^{2+2c} n}{n}$ for (ii) and $\frac{p^{2\delta}\log^{2+3c} n}{n}$ for (iv).
These rates coincide with those of Theorem 1 and Corollary 1 of \cite{lam2012factor} up to a logarithmic factor. The above findings are consistent with the results in nonparametric sieve estimation for analytic functions, where the uniform convergence rates of the sieve estimators (when certain adaptive sieve bases are used for the estimation) fall short of the $n^{-1/2}$ parametric rate only by a logarithmic factor. We shall also point out that the sieve approximation rates are adaptive to the smoothness: they become slower when the $\sigma_{x,i,j}(t,k)$'s are less smooth, in which case $g_{J_n,K, \tilde M}$ converges to zero at a slower rate as $J_n$ increases.
\end{remark}
\begin{proposition}\label{hatdrate}
Assume $\nu_np^{\delta}=o(1)$. Under the conditions of Theorem \ref{Space-Distance}, we have
\begin{align}
\field{P}(\exists t\in \mathcal{T}_{\eta_n}, \hat d_n(t)\neq d(t))=O(\frac{\nu_n p^{\delta}\log n}{\eta_n^2}).
\end{align} \end{proposition} Hence $\hat d_n(t)$ is uniformly consistent on ${\cal T}_{\eta_n}$ if $\frac{\nu_n p^{\delta}\log n}{\eta_n^2}=o(1)$.
\begin{remark} \label{remarketa}
In practice, if the estimated number of factors $\hat d(t)$ does not change over time, then according to our discussions in Section 2, $\mathcal T_{\eta_n}=[0,1]$. Otherwise one can set
\begin{align}\label{Tetan}
\hat {\mathcal T}_{\eta_n}=(0,1)\setminus\bigcup_{s=1}^r\left(\hat t_s-\frac{1}{\log^{2} n}, \hat t_s+\frac{1}{\log^{2} n}\right)
\end{align}
where $\hat t_s$, $s=1,\ldots,r$ are the time points at which $\hat d(t)$ changes. In fact, by condition (C1), equation \eqref{Tetan} corresponds to choosing $\eta_n\asymp\frac{1}{\log^4 n}$ when the eigenvalues of $\mathbf A(t)/p^{(1-\delta)/2}$ are Lipschitz continuous. \end{remark}
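A small helper of our own construction makes \eqref{Tetan} concrete: it removes $\log^{-2} n$ neighbourhoods of the estimated change points of $\hat d(t)$ from $(0,1)$.

```python
import numpy as np

def T_hat_eta(change_points, n):
    """Intervals of (0, 1) remaining after deleting log^{-2}(n)
    neighbourhoods of the estimated change points of hat d(t)."""
    h = 1.0 / np.log(n) ** 2
    keep, left = [], 0.0
    for t_s in sorted(change_points):
        if t_s - h > left:
            keep.append((left, t_s - h))
        left = max(left, t_s + h)
    if left < 1.0:
        keep.append((left, 1.0))
    return keep

intervals = T_hat_eta([0.5], n=1000)
total = sum(b - a for a, b in intervals)   # total length close to 1
```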
\subsection{Theoretical Results for Testing Static Factor Loadings}\label{Sec:null-static}
We discuss the limiting behaviour of $\hat T_n$ of \eqref{eq15hatTn} under $H_0$ in this subsection. First, the following proposition indicates that $\hat d_n$ equals $d$ with probability tending to one under $H_0$.
\begin{proposition}\label{prop1}
Assume conditions (A1), (A2), (C1), (M1), (M2), (M3) and conditions (S1)-(S4) hold, and that $\frac{p^{\delta}\log n}{\sqrt n}=o(1)$. Then we have, under $H_0$, $\field{P}(\hat d_n \neq d)=O(\frac{p^\delta}{\sqrt n}\log n)=o(1)$ as $n\rightarrow \infty$.
\end{proposition} Next we define a quantity $\tilde T_n$ using the true quantities $\mathbf F$ and $d$ to approximate $\hat T_n$, \begin{align}\label{tildeT}
\tilde T_n=\sqrt{m_n}\max_{1\le k\le k_0}\max_{1\leq h\leq N_n}\max_{1\le i\le p-d}|{\mathbf f}^\top_i\hat{\boldsymbol \Sigma}^{x}_{h,k}|_\infty. \end{align}
Recall the definition of $\hat{\mathbf F}$ and $\mathbf F$ in Section \ref{sec:test_loading}. We shall see from Proposition \ref{Jan23-Lemma6} below that $\hat T_n-\tilde T_n$ is asymptotically negligible under some mild conditions due to the small magnitude of $\|\hat {\mathbf F}-\mathbf F\|_F$.
\begin{proposition}\label{Jan23-Lemma6} Let conditions (A1), (A2), (C1), (M1), (M3) and (S) be satisfied, and assume that $\frac{p^\delta}{\sqrt n}\log n=o(1)$. Assume further the following condition ($M'$):
\begin{description}
\item ($M'$) There exists an integer $l>2$ and a universal constant $M$ such that
$\max_{1\leq j\leq d}\sum_{k=1}^\infty \delta_{Q,2l,j}(k)<\infty,\ \ \max_{1\leq j\leq p}\sum_{k=1}^\infty \delta_{H,2l,j}(k)<\infty$
and \begin{align*}
\sup_{t\in[0,1]}\max_{1\leq u\leq d}\field{E} | Q_u(t,\mathcal{F}_0)|^{2l}\leq M,~~ \sup_{t\in [0,1]}\max_{1\leq v\leq p}\field{E} |H_v(t,\mathcal{F}_0)|^{2l}\le M.
\end{align*}
\end{description}
Then there exists an orthonormal basis $\mathbf F$ of the null space of $\boldsymbol \Gamma$ such that for any sequence $g_n\rightarrow \infty$
\begin{align}\label{ratehatT}
\field{P}(|\hat T_n-\tilde T_n|\geq g_n^2 \frac{p^{\delta}\sqrt {m_n}}{\sqrt n} \Omega_n) =O(\frac{1}{g_n}+\frac{p^\delta}{\sqrt n}\log n)
\end{align}
where $\Omega_n=(np/m_n)^{1/l}\sqrt p$.
\end{proposition}
When $l$ is sufficiently large, the order of the rate in \eqref{ratehatT} is close to $p^{\delta+1/2}\frac{\sqrt {m_n}}{\sqrt n}$. Hence, in order for the error in \eqref{ratehatT} to vanish, $p$ can be as large as $O(n^{a})$ for any $a<1$ when $\delta=0$. Furthermore, under the null $\tilde T_n$ is equivalent to the $\mathcal L^\infty$ norm of the averages of a high-dimensional time series. Specifically, define $\tilde {\boldsymbol l}_i$ by replacing $\hat{\mathbf F}$ with $\mathbf F$ in the definition of $\hat {\boldsymbol l}_i$ (c.f. \eqref{new.eq11}), with its $j$-th element denoted by $\tilde{l}_{i,j}$. Then straightforward calculations indicate that $\tilde T_n=|\frac{\sum_{i=1}^{m_n}\tilde{\boldsymbol l}_i}{\sqrt{m_n}}|_\infty$. Therefore we can approximate $\tilde T_n$ by the $\mathcal L^\infty$ norm of a certain mean-zero Gaussian process via recent developments in high dimensional Gaussian approximation theory; see for instance \cite{chernozhukov2013gaussian} and \cite{zhang2018gaussian}. Let $\mathbf y_i=(y_{i1},\ldots,y_{i(k_0N_n(p-d) p)})$ be centered $k_0N_n(p-d) p$ dimensional Gaussian random vectors that preserve the auto-covariance structure of $\tilde {\boldsymbol l}_i$ for $1\leq i\leq m_n$, and write $\mathbf y=\sum_{i=1}^{m_n}\mathbf y_i/\sqrt{m_n}$.
\begin{theorem}\label{Jan23-Thm4}
Assume conditions of Proposition \ref{Jan23-Lemma6} hold, $\frac{p^{\delta}\sqrt {m_n\log n}}{\sqrt n} \Omega_n=o(1)$ and assume the following conditions \emph{(a)}-\emph{(f)}:
\begin{description}
\item \emph{(a)} $\|\mathbf A\|_{\infty}\lessapprox 1$.
\item \emph{(b)} $l\geq 4$.
\item \emph{(c)} $(N_np^2)^{1/l}\lessapprox m_n^{\frac{3-25\zeta}{32}} $ and $k_0N_n(p-d)p\lessapprox \exp(m_n^{\zeta})$ for some $0\leq \zeta< 1/11$.
\item \emph{(d)} Let $\sigma_{j,j}=\sum_{i,l=1}^{m_n}cov(\tilde { l}_{i,j},\tilde { l}_{l,j})/m_n$. There exists some constant $\eta>0$ such that $\min_j \sigma_{j,j}\geq \eta$, i.e., each
component series of $\tilde {\boldsymbol l}_i$ is non-degenerate.
\item \emph{(e)} The dependence measure for $\mathbf x_{i,n}$ satisfies \begin{align} \delta_{G,2l}(k)=O(((k+1)\log (k+1))^{-2}). \end{align}
\item \emph{(f)} There exists a constant $M_{2l}$ depending on $l$ such that for $1\leq i\leq n$, and for all $p$ dimensional vectors $\mathbf c$ such that $\|\mathbf c\|_2=1$, the inequality $\|\mathbf c^\top\mathbf e_{i,n}\|_{\mathcal L^{2l}}\leq M_{2l}\|\mathbf c^\top\mathbf e_{i,n}\|_{\mathcal L^2}$ holds.
Also $\max_{1\leq i\leq n}\lambda_{\max}(\field{E}(\mathbf e_{i,n}\mathbf e^\top_{i,n}))$ is uniformly bounded as $n$ and $p$ diverge.
\end{description} Then under the null hypothesis, we have
\begin{align}\label{Jan23-83}
\sup_{t\in \mathbb R}|\field{P}(\tilde T_n\leq t)-\field{P}(|\mathbf y|_{\infty}\leq t)|\lessapprox \iota(m_n,k_0N_np(p-d),2l, (N_np^2)^{1/l}),
\end{align} where $\iota(\cdot)$ is defined as
\begin{align}\label{S8}\iota(n,p,q, D_n)=\min(n^{-1/8}M^{1/2}l_n^{7/8}+\gamma+(n^{1/8}M
^{-1/2}l_n^{-3/8})^{q/(1+q)}(\sum_{j=1}^p\Theta^q_{M,j,q})^{1/(1+q)}\notag\\+\Xi_M^{1/3}(1\vee \log (p/\Xi_M))^{2/3}),\end{align} where $\Theta_{M,j,q}=\sum_{k=M}^\infty \delta_{G,q,j}(k)$ and $\Xi_{M}=\max_{1\leq j\leq p}$ $\sum_{k=M}^\infty k\delta_{G,2,j}(k)$, and the minimum is taken over all possible values of $\gamma$ and $M$ subject to $$n^{3/8}M^{-1/2}l_n^{-5/8}\geq \max\{D_n(n/\gamma)^{1/4},l_n^{1/2}\},$$ with $l_n=\log (pn/\gamma)\vee 1$. Furthermore,
\begin{align}\label{Jan23-84}
\sup_{t\in \mathbb R}|\field{P}(\hat T_n\leq t)-\field{P}(|\mathbf y|_{\infty}\leq t)|= O\left(\left(\frac{p^{\delta}\sqrt {m_n\log n}}{\sqrt n} \Omega_n\right)^{1/3}+\iota(m_n,k_0N_np(p-d),2l, (N_np^2)^{1/l})\right).
\end{align}
\end{theorem}
Conditions (a)-(e) control the magnitude of the loading matrix, the dimension and dependence of the time series ${\bf x}_{i,n}$ as well as the non-degeneracy of the process $\tilde {\boldsymbol l}_i$. These conditions are standard in the literature on high dimensional Gaussian approximation; see for instance \cite{zhang2018gaussian}. Notice that if the dependence measures of $\mathbf x_{i,n}$ in condition (e) satisfy $\delta_{G,2l}(k)=O((k+1)^{-(1+\alpha)})$ for $\alpha>\frac{6}{ul-8}\vee 1$ with $u=\frac{\log m_n/\log n}{1-\log m_n/\log n+2\log p/\log n}$, then $\iota(m_n,k_0N_np(p-d),2l, (N_np^2)^{1/l})=o(1)$. If further $\delta_{G,2l}(k)$ decays geometrically as $k$ increases, then $\iota(m_n,k_0N_np(p-d),2l, (N_np^2)^{1/l})$ reduces to $m_n^{-(1-11\zeta)/8}$ for the constant $\zeta$ defined in condition (c). Condition (f) controls the magnitude of the $\mathcal L^{2l}$ norm of projections of $\mathbf e_{i,n}$ by their ${\cal L}^{2}$ norm, which essentially requires that the dependence among the components of $\mathbf e_{i,n}$ cannot be too strong. (f) is mild in general and is satisfied, for instance, if the components of $\mathbf e_{i,n}$ are independent or $\mathbf e_{i,n}$ is a sub-Gaussian random vector with a variance proxy that does not depend on $p$ for each $i$. Finally, we comment that condition (d) is a mild non-degeneracy condition. For example, assuming (i) $\mathbf e_{s,n}$ is independent of $\mathbf e_{l,n}$, $\mathbf z_{l,n}$ for $s\geq l$, $\min_{1\le i\leq n, 1\leq j\leq p} \field{E} X^2_{i,j}\geq \eta'>0$, $ \min_{1\leq i\leq n}\lambda_{\min}(\field{E}(\mathbf e_{i,n}\mathbf e^\top_{i,n}))\geq \eta''>0$ for some constants $\eta'$ and $\eta''$, and (ii) conditions (a)-(c), (e) and (f) hold, then condition (d) holds.
To see this, note that under the null hypothesis, for $1\leq s\leq p-d$, $1\leq k\leq k_0$, $1\leq i,l\leq n$ and $1\leq v\leq p$, we have for $i\neq l$, \begin{align} \field{E}(\mathbf f_s^\top\mathbf x_{i+k,n}x_{i,v,n} \mathbf f_s^\top\mathbf x_{l+k,n}x_{l,v,n})= \field{E}(\mathbf f_s^\top\mathbf e_{i+k,n}x_{i,v,n} \mathbf f_s^\top\mathbf e_{l+k,n}x_{l,v,n})=0,\label{eq25}\\ \field{E}((\mathbf f_s^\top\mathbf x_{i+k,n}x_{i,v,n})^2) =\field{E}((\mathbf f_s^\top\mathbf e_{i+k,n}x_{i,v,n})^2)=\mathbf f_s^\top(\field{E} (\mathbf e_{i+k,n}\mathbf e^\top_{i+k,n}))\mathbf f_s \field{E}(x^2_{i,v,n})\geq \eta'\eta''\label{eq26}. \end{align} Consequently, \eqref{eq25} and \eqref{eq26} imply condition (d).
\subsection{Block Multiplier Bootstrap}\label{boots}
The validity of the bootstrap procedure is supported by the following theorem:
\begin{theorem}\label{Boots-thm5} Let $W_{n,p}=(k_0 N_n(p-d)p)^2$. Assume that the conditions of Theorem \ref{Jan23-Thm4} hold, that $w_n\rightarrow \infty$, $w_n/m_n=o(1)$, and that there exist $q^*\geq l$ and $\epsilon>0$ such that $ \Theta_n:=w_n^{-1}+\sqrt{w_n/m_n}W_{n,p}^{1/{q^*}}\lessapprox W_{n,p}^{-\epsilon}$ and \begin{description}
\item (i) $\max_{1\leq i\leq n}\max_{1\leq j\leq p} \|x_{i,j,n}\|_{\mathcal L^{2q^*}}\leq M$ for some sufficiently large constant $M$.
\item (ii) Condition (e) of Theorem \ref{Jan23-Thm4} holds with $l$ replaced by $q^*$.
\end{description} Then we have that conditional on $\mathbf x_{i,n}$ and under $H_0$, \begin{align}\label{Oct16_26}
\sup_{t\in \mathbb R}|&\field{P}(\hat T_n\leq t)-\field{P}(|\boldsymbol \kappa_n|_{\infty}\leq t|{\mathbf x}_{i,n},1\leq i\leq n)|\notag\\&=O_p\left(\left(\frac{p^{\delta}\sqrt {m_n\log n}}{\sqrt n} \Omega_n\right)^{1/3}+\iota(m_n,k_0N_np(p-d),2l, (N_np^2)^{1/l})+\Theta_n^{1/3}\log ^{2/3}(\frac{W_{n,p}}{\Theta_n})\right). \end{align}
\end{theorem}
The condition $w_n^{-1}+\sqrt{w_n/m_n}W_{n,p}^{1/{q^*}}\lessapprox W_{n,p}^{-\epsilon}$ holds if $w_n=o(n^c)$ for some $c>0$ and $\sqrt{\frac{w_n}{m_n}}W_{n,p}^{\frac{1}{q^*}+\epsilon}=o(1).$ The convergence rate in \eqref{Oct16_26} goes to zero, and hence the bootstrap is asymptotically consistent under $H_0$, if the dependence of $\{\mathbf x_{i,n}\}$ is weak enough that $\iota(m_n,k_0N_np(p-d),2l, (N_np^2)^{1/l})=o(1)$; see the discussion below \eqref{Jan23-84} in Section \ref{Sec:null-static}.
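For intuition, the block multiplier bootstrap can be sketched as follows. This is a generic illustration rather than the paper's exact algorithm: it forms overlapping block sums $\hat{\mathbf s}_{j,w}$ of the score vectors $\hat{\boldsymbol l}_i$ of Section \ref{sec:test_loading}, centers them by $\frac{w}{m_n}\hat{\mathbf s}_{m_n}$, and perturbs them with i.i.d.\ $N(0,1)$ multipliers; the normalization $\sqrt{w(m_n-w+1)}$ is borrowed from the variance proxy appearing in Section \ref{selectmn} and is an assumption here.

```python
import numpy as np

def block_multiplier_bootstrap(l_hat, w, B=1000, rng=None):
    """Generic block multiplier bootstrap for a max-type statistic.

    l_hat : (m, q) array of score vectors l_1, ..., l_m.
    w     : block window size.
    Returns B bootstrap draws of the max-norm statistic |kappa|_inf.
    """
    rng = np.random.default_rng(rng)
    m, q = l_hat.shape
    # Overlapping block sums s_{j,w} = l_j + ... + l_{j+w-1}, j = 1, ..., m-w+1.
    cs = np.vstack([np.zeros(q), np.cumsum(l_hat, axis=0)])
    s_jw = cs[w:] - cs[:-w]            # shape (m - w + 1, q)
    s_m = cs[-1]                       # full sum s_m
    centered = s_jw - (w / m) * s_m    # centered block sums
    draws = np.empty(B)
    for b in range(B):
        R = rng.standard_normal(len(centered))  # i.i.d. N(0,1) multipliers
        kappa = centered.T @ R / np.sqrt(w * (m - w + 1))
        draws[b] = np.max(np.abs(kappa))
    return draws
```

The bootstrap critical value at level $\alpha$ is then the empirical $(1-\alpha)$ quantile of the returned draws.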
\section{Power}\label{Sec4} \setcounter{equation}{0} In this section we discuss the local power of our bootstrap-assisted testing algorithm of Section \ref{boots} for testing static factor loadings. For two matrices $\mathbf A$ and $\mathbf B$, let $\mathbf A\circ \mathbf B$ be the Hadamard product of $\mathbf A$ and $\mathbf B$. Let $\mathbf J$ be a $p\times d$ matrix with all entries equal to $1$, where $d=rank(\mathbf A)$. To simplify the proof, we further assume the following condition (G): \begin{description}
\item (G1): $\mathbf e_{i,n}$ is independent of $\mathbf z_{s,n}$, $\mathbf e_{s,n}$ for $i>s$.
\item (G2): the dependence measure of $\mathbf z_{i,n}$ satisfies
\begin{align}
\max_{1\leq j\leq d} \delta_{Q,4,j}(k)=O(((k+1)\log (k+1))^{-2}).
\end{align}
\item (G3): the dependence measures of $\mathbf e_{i,n}$ satisfy
\begin{align}
\sum_{k=1}^\infty k\sup_{0\leq t\leq 1}\|\mathbf f^\top\mathbf H(t,\mathcal{F}_i)-\mathbf f^\top\mathbf H(t,\mathcal{F}_i^{(i-k)})\|_{\mathcal L^4}=O(\log ^2p)
\end{align}
for all $p$-dimensional vectors $\mathbf f$ such that $\|\mathbf f\|_F=1$. \end{description} Conditions (G1) and (G2) are mild assumptions on the error and factor processes. Condition (G3) controls the strength of temporal dependence of projections of the error process $(\mathbf e_{i,n})$. (G3) essentially requires that each component of $\mathbf e_{i,n}$ is a short memory process and that the dependence among the components of $\mathbf e_{i,n}$ is not too strong. Elementary calculations show that condition (G3) is satisfied under two important scenarios. The first is that the dependence measures satisfy $ \max_{1\leq j\leq p}\delta_{H,4,j}(k)=O(\chi^k)$ for some $\chi \in (0,1)$; see Lemma \ref{LemmaC1} below.
The second is that the components of process $(\mathbf e_{i,n})$ are independent of each other, i.e., $H_{j_1}(\cdot, \mathcal{F}_u)$ is independent of $H_{j_2}(\cdot, \mathcal{F}_v)$ for all $u,v\in \mathbb N$ as long as $j_1\neq j_2$, $1\leq j_1,j_2\leq p$, and $\sum_{k=1}^\infty k \max_{1\leq j\leq p}\delta_{H,4,j}(k)=O(1)$.
\begin{lemma}\label{LemmaC1}
Under conditions (f), ($M'$) and $ \max_{1\leq j\leq p}\delta_{H,4,j}(k)=O(\chi^k)$ for some $\chi \in (0,1)$, (G3) holds. \end{lemma}
We consider the following class of local alternatives: \begin{align}\label{alternative_A}H_A: \mathbf A(t)=\mathbf A_n(t):=\mathbf A\circ(\mathbf J+\rho_n\mathbf D(t)),\end{align} where $\mathbf D(t)=(d_{ij}(t))$ is a $p\times d$ matrix, $\max_{i,j}\sup_t|d_{ij}(t)|\lessapprox 1$, $\sup_{t\in [0,1]}\|\mathbf D(t) \circ \mathbf A(t)\|_F=O(p^{\frac{1-\delta_1}{2}})$ for some $ \delta_1\in[\delta, 1]$, and $\rho_n\downarrow 0$ controls the magnitude of the deviation from the null. Notice that, under the alternative hypothesis, $\mathbf A_n(t)$ is a function of $\rho_n$ and therefore we write $$\int_{0}^1\boldsymbol \Sigma_x(t,k)\,dt=\int_{0}^1\mathbf A_n(t)(\boldsymbol \Sigma_z(t,k)\mathbf A_n(t)^\top+\boldsymbol \Sigma_{ze}(t,k))dt:=\boldsymbol \gamma(\rho_n,k),\quad k> 0.$$ Let $\boldsymbol \Gamma_k(\rho_n)=\boldsymbol \gamma(\rho_n,k)\boldsymbol \gamma(\rho_n,k)^\top$ and $\boldsymbol \Gamma(\rho_n)=\sum_{k=1}^{k_0}\boldsymbol \Gamma_k(\rho_n)$. To simplify notation, write $\boldsymbol \Gamma(\rho_n)$, $\rho_n>0$, as $\boldsymbol \Gamma_n$, with orthogonal basis $(\mathbf f_{1,n},...,\mathbf f_{p-d,n}):=\mathbf F_n$. Recall the centered $k_0N_n(p-d)p$-dimensional Gaussian random vectors $\mathbf y$ and $\mathbf y_i$, and the definition of $\tilde {\boldsymbol l}_i$ in Section \ref{Sec:null-static}. Define $\check{\boldsymbol l}_i$ analogously to $\tilde {\boldsymbol l}_i$ with $\mathbf F$ replaced by $\mathbf F_n$.
\begin{theorem}\label{Power}
Consider the alternative hypothesis \eqref{alternative_A}. Assume in addition that for $t\in [0,1]$, $rank(\mathbf A(t))=d$, $p^\delta n^{-1/2}m_n^{1/2}\Omega_n=o(1)$, and $\Delta_n=|\sum_{i=1}^{m_n}\field{E} \check {\boldsymbol l}_i/m_n|_\infty\asymp \rho_np^{\frac{1-\delta_1}{2}}$.
Then, assuming the conditions of Theorem \ref{Boots-thm5} and condition (G) hold, we have the following results:
\begin{description}
\item(a) Under the considered alternative hypothesis, if $\rho_n\sqrt{m_n\log n}\Omega_n=o(1)$ then
\begin{align}\label{S98}
&\sup_{t\in \mathbb R}|\field{P}(\hat T_n\leq t)-\field{P}(|\mathbf y|_{\infty}\leq t)|\\&=O\Bigg(\left((\rho_np^{\frac{\delta-\delta_1}{2}}+\frac{p^\delta}{\sqrt n}){\sqrt {m_n\log n}} \Omega_n\right)^{1/3}+
\left(\rho_n{\sqrt {m_n\log n}} \Omega_n\right)^{1/2}
\notag\\\notag&~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~+\iota(m_n,k_0N_np(p-d),2l, (N_np^2)^{1/l})\Bigg),
\end{align}
where the function $\iota$ is defined in \eqref{S8}.
Furthermore, if $\sqrt m_n\rho_np^{\frac{1-\delta_1}{2}}\rightarrow \infty$ then
$\hat T_n$ diverges at the rate of $\sqrt m_n\rho_np^{\frac{1-\delta_1}{2}}$.
\item (b) Assume
$\Theta_n^{1/3}\log ^{2/3}(\frac{W_{n,p}}{\Theta_n})=o(1)$, then
\begin{align}\label{NewC4}
\sup_{t\in \mathbb R}&|\field{P}(|\mathbf y|_{\infty}\leq t)-\field{P}(|\boldsymbol \kappa_n|_{\infty}\leq t|{\mathbf x}_{i,n},1\leq i\leq n)|\notag\\&=O_p\Big(\Theta_n^{1/3}\log ^{2/3}(\frac{W_{n,p}}{\Theta_n})+\big((\frac{p^\delta}{\sqrt n}+\rho_n)\sqrt{w_n}\Omega_n\big)^{l/(1+l)}\Big).
\end{align}
Furthermore, $|\boldsymbol\kappa_n|_{\infty}=O_p(\sqrt {w_n}\rho_np^{\frac{1-\delta_1}{2}} \sqrt{\log n})$.
\end{description}
Therefore, by (a) and (b), if $\frac{\sqrt{m_n}}{\sqrt{w_n\log n}}\rightarrow \infty$ and $\sqrt{ m_n}\rho_np^{\frac{1-\delta_1}{2}}\rightarrow \infty$, the power of the test approaches $1$ as $n\rightarrow\infty$.
\end{theorem}
We introduce $\delta_1$ to allow $\mathbf D(t)$ to have a different level of strength than that of $\mathbf A$. For instance, if $\mathbf D(t)$ has a bounded number of non-zero entries, then $\delta_1=1$ while $\delta$ can be any number in $[0,1]$. The condition that $\Delta_n\asymp \rho_n p^{\frac{1-\delta_1}{2} }$ is mild. In fact, it is shown in equation \eqref{S.108} in the proof of Proposition \ref{powerprop} in the supplemental material that $|\field{E}\check{\boldsymbol l}_i|_\infty=O(\rho_n p^{\frac{1-\delta_1}{2}})$, which means $\Delta_n= O(\rho_np^{\frac{1-\delta_1}{2}})$. On the other hand, from the proof of Proposition \ref{powerprop}, the condition $\Delta_n\asymp \rho_n p^{\frac{1-\delta_1}{2} }$ is satisfied if at least one eigenvector of the null space of $\sum_{k=1}^{k_0}\boldsymbol \Gamma_k$ has an order-$p$ number of entries of the same order of magnitude. \section{Selection of Tuning Parameters}\label{Sec::Tuning} \setcounter{equation}{0} \subsection{Selection of $J_n$ for the estimation of time-varying factor loading matrices} We discuss the selection of $J_n$ for the estimation of time-varying factors. Since in practice $g_{J_n,K,\tilde M}$ is unknown, a data-driven method to select $J_n$ is desired. By Theorem \ref{Space-Distance}, the residuals are $\hat {\mathbf e}_{i,n}=\mathbf x_{i,n}-\hat {\mathbf V}(\frac{i}{n})\hat {\mathbf V}^\top(\frac{i}{n})\mathbf x_{i,n}$, where $\hat {\mathbf e}_{i,n}=(\hat e_{i,1,n},...,\hat e_{i,p,n})^\top$ is a $p$-dimensional vector. We select $J_n$ as the minimizer of the following cross-validation criterion $CV(J)$, \begin{align} CV(J)=\sum_{i=1}^n\sum_{s=1}^p\frac{\hat e^2_{i,s,n}(J)}{(1-v_{i,s}(J))^2} \end{align} where $v_{i,s}(J)$ is the $s_{th}$ diagonal element of $\hat {\mathbf V}(\frac{i}{n})\hat {\mathbf V}^\top(\frac{i}{n})$ obtained by setting $J_n=J$, and $\hat e_{i,s,n}(J),$ $1\leq i\leq n$, $1\leq s \leq p$ are the residuals calculated when $J_n=J$. Cross-validation has been widely used in the literature on sieve nonparametric estimation and has been advocated by, for example, \cite{hansen2014nonparametric}. We choose $J_n=\mathop{\mbox{argmin}}_J(CV(J))$ in our empirical studies. \subsection{Selection of tuning parameters $m_n$ and $w_n$ for testing static factor loadings}\label{selectmn} We select $m_n$ by first selecting $N_n$ and letting $m_n=\lfloor (n-k_0)/N_n \rfloor$. The parameter $N_n$ is chosen by the minimal volatility method as follows. For a given data set, let \begin{align}
\hat T(N_n)=\sqrt{m_n}\max_{1\le k\le k_0}\max_{1\leq h\leq N_n}\max_{1\le i\le p-\hat d_n}|\hat{\mathbf f}^\top_i\hat{\boldsymbol \Sigma}^{x}_{h,k}|_\infty \end{align} be the test statistic obtained by using the integer parameter $N_n$, with $\hat d_n=\mathop{\mbox{argmin}}_{1\leq i\leq R}\lambda_{i+1}(\hat {\mathbf \Gamma}) /\lambda_{i}(\hat {\mathbf \Gamma})$, where $R$ is defined in \eqref{eigen-ratio-hatd}. Consider a set of possible values for $N_n$, denoted by $\{J_1,...,J_s\}$, where the $J_v$ are positive integers. For each $J_v$, $1\leq v\leq s$, we calculate $\hat T(J_v)$ and hence the local standard error \begin{align} SE(\hat T(J_l),h)=\left(\frac{1}{2h}\sum_{u=-h}^h \left(\hat T(J_{u+l})-\frac{1}{2h+1}\sum_{u=-h}^h\hat T(J_{u+l})\right)^2\right)^{1/2}\label{S105} \end{align} where $1+h\leq l\leq s-h$ and $h$ is a positive integer, say $h=1$. We then select $N_n$ by \begin{align}\label{S116} \mathop{\mbox{argmin}}_{1+h\leq l\leq s-h} SE(\hat T(J_l),h), \end{align} which stabilizes the test statistic. The idea behind the minimal volatility method is that the test statistic should behave stably as a function of $N_n$ when the latter parameter is in an appropriate range. In our empirical studies we find that the proposed method performs reasonably well, and the results are not sensitive to the choice of $N_n$ as long as the $N_n$ used is not very different from that chosen by \eqref{S116}.
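The minimal volatility selection of $N_n$ in \eqref{S105}-\eqref{S116} can be sketched as follows; `stat_fn` is a hypothetical callable that returns $\hat T(J)$ for a candidate value $J$, standing in for the computation described above.

```python
import numpy as np

def local_se(values, l, h):
    """Local standard error SE(T(J_l), h) over the window J_{l-h}, ..., J_{l+h}."""
    window = values[l - h : l + h + 1]
    # Outer normalization 1/(2h), inner mean over the 2h+1 window points.
    return np.sqrt(np.sum((window - window.mean()) ** 2) / (2 * h))

def minimal_volatility_select(candidates, stat_fn, h=1):
    """Pick the candidate whose test statistic is locally most stable."""
    t_vals = np.array([stat_fn(J) for J in candidates])
    s = len(candidates)
    ses = [local_se(t_vals, l, h) for l in range(h, s - h)]
    return candidates[h + int(np.argmin(ses))]
```

For instance, `minimal_volatility_select([2, 3, 4, 5, 6], stat_fn)` returns the candidate whose statistic varies least over its $h$-neighborhood.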
After choosing $N_n$ and hence $m_n$, we then further choose $w_n$, again by the minimal volatility method. In this case we first obtain the $k_0 N_n(p-d)p$ dimensional vectors \{$\hat {\boldsymbol l}_i$, $1\leq i\leq m_n$\} defined in Section \ref{sec:test_loading}. Then we select $w_n$ by a multivariate extension of the minimal volatility method in \cite{zhou2013heteroscedasticity} as follows. We consider choosing $w_n$ from a grid $w_1\leq ...\leq w_r$. For each $w_n=w_i$, $1\leq i\leq r$, we calculate a $k_0 N_n(p-d)p$ dimensional vector $\mathbf b^o_{i,u}=\frac{1}{w_i(m_n-w_i+1)}\sum_{j=1}^{u}(\hat {\mathbf s}_{j,w_i}-\frac{w_i}{m_n}\hat {\mathbf s}_{m_n})^{\circ 2}$, where $\circ$ represents the Hadamard product and $1\leq u\leq m_n-w_r+1$. Let $\mathbf B^o_i=(\mathbf b^{o\top}_{i,1},...,\mathbf b^{o\top}_{i,m_n-w_r+1})^\top$ be a $k_0 N_n(p-d)p (m_n-w_r+1)$ dimensional vector, and let $\mathbf B$ be a $k_0 N_n(p-d)p (m_n-w_r+1)\times r$ matrix whose $i_{th}$ column is $\mathbf B^o_i$. Then for each row, say the $i_{th}$ row $\mathbf B_{i,\cdot}$ of $\mathbf B$, we calculate the local standard error $SE(\mathbf B_{i,\cdot},h)$ for a given window size $h$ (see \eqref{S105} for the definition of $SE$) and therefore obtain an $(r-2h)$-length row vector $(SE(\mathbf B_{i,h+1},h),...,SE(\mathbf B_{i,r-h},h))$. Stacking these row vectors we get a new $k_0 N_n(p-d)p (m_n-w_r+1)\times (r-2h)$ matrix $\mathbf B^\dag$. Let $colmax(\mathbf B^\dag)$ be an $(r-2h)$-length vector whose $i_{th}$ element is the maximum entry of the $i_{th}$ column of $\mathbf B^\dag$. Then we choose $w_n=w_{k+h}$ if the smallest entry of $colmax(\mathbf B^\dag)$ is its $k_{th}$ element. \section{Simulation Studies}\label{Simu-Results}
\subsection{Estimating the time-varying factor models}\label{Simupart1}
In this subsection we shall examine the performance of our proposed estimator \eqref{span-estimate} for time-varying factor models, and compare it with that of \cite{lam2011estimation}. The latter is equivalent to fixing $J_n=0$ in \eqref{sieveapprox}. We use normalized Legendre polynomials as our basis throughout our empirical studies. The method studied in \cite{lam2011estimation} is developed under the assumption of stationarity with static factor loadings, and hence the purpose of our simulation is to illustrate that the methodology developed under stationarity does not directly carry over to the non-stationary setting. To demonstrate the advantage of the adaptive sieve method, our method is also compared with a simple local estimator of $\mathbf {\Lambda}(t)$, which was considered in the data analysis section of \cite{lam2011estimation}; we shall call it the local PCA method in our paper. Specifically, for each $i$, $\mathbf {\Lambda}(\frac{i}{n})$ is consistently estimated by
\begin{align}
\hat{\mathbf {\Lambda}}(\frac{i}{n})=\sum_{k=1}^{k_0}\hat{\mathbf M}(\frac{i}{n},k)\hat{\mathbf M}^\top(\frac{i}{n},k), ~~\hat{\mathbf M}(\frac{i}{n},k)=\frac{1}{2m+1}\sum_{j=i-m}^{i+m}\mathbf x_{j+k}\mathbf x_j^\top
\end{align}
where $m$ is the window size such that $m\rightarrow \infty$ and $m=o(n)$. The $J_n$ of our method is selected by cross-validation, while the $m$ of the local PCA method is selected by the minimal volatility method.
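A minimal sketch of the local PCA estimator above: since $\hat{\mathbf \Lambda}(i/n)$ is symmetric positive semi-definite by construction, its leading $d$ eigenvectors span the estimated loading space.

```python
import numpy as np

def local_pca(X, i, m, k0, d):
    """Local PCA estimate of the factor loading space at time i.

    X  : (n, p) array of observations x_1, ..., x_n.
    i  : time index (must satisfy m <= i < n - m - k0).
    m  : local window half-width.
    k0 : number of lags.
    d  : number of factors.
    Returns a (p, d) orthonormal basis of the estimated loading space.
    """
    n, p = X.shape
    Lam = np.zeros((p, p))
    for k in range(1, k0 + 1):
        # M(i/n, k) = (2m+1)^{-1} sum_{j=i-m}^{i+m} x_{j+k} x_j^T
        M = sum(np.outer(X[j + k], X[j]) for j in range(i - m, i + m + 1)) / (2 * m + 1)
        Lam += M @ M.T
    # Leading d eigenvectors of the symmetric matrix Lambda(i/n).
    vals, vecs = np.linalg.eigh(Lam)
    return vecs[:, ::-1][:, :d]
```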
Define the following smooth functions: \begin{align*} g_0(t)=0.4(0.4-0.2t), \alpha_1(t)=1.3\exp(t)-1,\alpha_2(t)=0.6\cos(\frac{\pi t}{3}),\\ \alpha_3(t)=-(0.5+2t^2), \alpha_4(t)=2\cos(\frac{\pi t}{3})+0.6t. \end{align*} Let $\mathbf A=(a_1,...,a_p)^\top$ be a $p\times 1$ matrix with $a_i=1+0.2(i/p)^{0.5}$. Define the locally stationary process $ z_i=G_1(i/n,\mathcal{F}_i)$, where $G_1(t,\mathcal{F}_i)=\sum_{j=0}^\infty g^j_0(t)\epsilon_{i-j}$, with filtration $\mathcal{F}_i=(\epsilon_{-\infty},...,\epsilon_i)$ and $(\epsilon_i)_{i\in \mathbb Z}$ a sequence of $i.i.d.$ $N(0,1)$ random variables. We then define the time-varying matrix
\begin{align}\label{new} \mathbf A(t)=\begin{pmatrix} \mathbf A_1\alpha_1(t)\\\mathbf A_2\alpha_2(t)\\\mathbf A_3\alpha_3(t)\\\mathbf A_4\alpha_4(t) \end{pmatrix} \end{align} where $\mathbf A_{1}$, $\mathbf A_2$, $\mathbf A_3$ and $\mathbf A_4$ are the sub-matrices of $\mathbf A$ consisting of the first $round(p/5)$ rows, the $(round(p/5)+1)_{th}$ to $round(2p/5)_{th}$ rows, the $(round(2p/5)+1)_{th}$ to $round(3p/5)_{th}$ rows and the $(round(3p/5)+1)_{th}$ to $p_{th}$ rows of $\mathbf A$, respectively. Let $\mathbf e_{i,n}=(e_{i,1},...,e_{i,p})^\top$ be a $p\times 1$ vector with $(e_{i,j})_{i\in \mathbb Z, 1\leq j\leq p}$ $i.i.d.$ $N(0,0.5^2)$, and $(e_{i,j})_{i\in \mathbb Z, 1\leq j\leq p}$ independent of $(\epsilon_i)_{i\in \mathbb Z}$.
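The data-generating process of this subsection can be simulated as follows (a sketch; the infinite MA expansion of $z_i$ is truncated at a finite lag `trunc`, which is accurate here since $\sup_t|g_0(t)|\leq 0.16$):

```python
import numpy as np

def simulate_model(n, p, rng=None, trunc=100):
    """Simulate x_i = A(i/n) z_i + e_i for the model in \\eqref{new}."""
    rng = np.random.default_rng(rng)
    g0 = lambda t: 0.4 * (0.4 - 0.2 * t)
    alphas = [lambda t: 1.3 * np.exp(t) - 1,
              lambda t: 0.6 * np.cos(np.pi * t / 3),
              lambda t: -(0.5 + 2 * t ** 2),
              lambda t: 2 * np.cos(np.pi * t / 3) + 0.6 * t]
    a = 1 + 0.2 * (np.arange(1, p + 1) / p) ** 0.5     # the column A
    # Row blocks: first p/5, next p/5, next p/5, last 2p/5 rows of A.
    bounds = [0, round(p / 5), round(2 * p / 5), round(3 * p / 5), p]
    eps = rng.standard_normal(n + trunc)               # factor innovations
    X = np.empty((n, p))
    for i in range(n):
        t = (i + 1) / n
        # Locally stationary factor z_i = sum_j g0(t)^j eps_{i-j}, truncated.
        z = sum(g0(t) ** j * eps[trunc + i - j] for j in range(trunc))
        At = np.empty(p)
        for b in range(4):
            At[bounds[b]:bounds[b + 1]] = a[bounds[b]:bounds[b + 1]] * alphas[b](t)
        X[i] = At * z + rng.normal(0, 0.5, p)          # add idiosyncratic errors
    return X
```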
We consider the cases $p=50,100,200,500$ and $n=1000, 1500$. The performance of the methods is measured in terms of the Root-Mean-Square Error (RMSE) and the average principal angle. The RMSE of the estimation is defined as \begin{align*}
RMSE=\left(\frac{1}{np}\sum_{i=1}^n\|\hat{\mathbf V}(i/n)\hat{\mathbf V}^\top(i/n)\mathbf x_{i,n}-\mathbf A(i/n)\mathbf z_{i,n}\|_2^2\right)^{1/2}.
\end{align*} The principal angle between $\mathbf A(i/n)$ and its estimate $\hat {\mathbf A}(i/n)$ is defined via $\boldsymbol \Upsilon_{i,n}:=\hat {\mathbf A}_{i,n}^\top \mathbf A_{i,n}$, where $\mathbf A_{i,n}$ and $\hat {\mathbf A}_{i,n}$ are the normalized $\mathbf A(i/n)$ and $\hat {\mathbf A}(i/n)$, respectively. The principal angle is a well-defined distance between the spaces $span(\mathbf {A}(i/n))$ and $span(\hat{\mathbf A}(i/n))$. Finally, the average principal angle is defined as $\bar {\boldsymbol \Upsilon}=\frac{1}{n}\sum_{i=1}^n\boldsymbol \Upsilon_{i,n}$. We present the RMSE and the average principal angle of the three estimators over 100 simulation samples in Tables \ref{Table-RMSE} and \ref{Table-angle}, respectively. Our method achieves the smallest RMSE and average principal angle among the three estimators in all simulation scenarios. We choose $k_0=3$ in our simulation. Other choices $k_0=1,2,4$ yield similar results and are not reported here. As predicted by Theorem \ref{Space-Distance} when $\delta=0$, the RMSE in Table \ref{Table-RMSE} decreases as $n$ and $p$ increase, and the average principal angle decreases as $n$ increases and is independent of $p$.
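The principal angles between the true and estimated loading spaces can be computed from the singular values of $\hat{\mathbf A}_{i,n}^\top \mathbf A_{i,n}$ after orthonormalizing the columns. The sketch below returns the largest principal angle, one standard way to turn the matrix $\boldsymbol \Upsilon_{i,n}$ into a scalar distance; the exact scalarization used for the tables is not specified here, so this is an illustrative choice.

```python
import numpy as np

def principal_angle(A, A_hat):
    """Largest principal angle between span(A) and span(A_hat)."""
    Q1, _ = np.linalg.qr(A)        # orthonormalize the columns
    Q2, _ = np.linalg.qr(A_hat)
    s = np.linalg.svd(Q1.T @ Q2, compute_uv=False)
    # Singular values of Q1^T Q2 are the cosines of the principal angles.
    return float(np.arccos(np.clip(s.min(), -1.0, 1.0)))
```

Identical subspaces give angle $0$; orthogonal directions give angle $\pi/2$.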
\begin{table}[htbp]
\centering
\caption{ Mean and standard errors (in brackets) of simulated RMSE for our sieve method, the static loading method ($J_n=0$) and Local PCA for model \eqref{new}. The results are multiplied by $1000$.}
\begin{tabular}{l|lll||lll}
\hline
& \multicolumn{3}{c||}{$n=1000$} & \multicolumn{3}{c}{$n=1500$} \\
\hline
& Sieve & $J_n=0$ & Local PCA & Sieve & $J_n=0$ & Local PCA \\
\hline
$ p=50$ &$228_{(4.5)}$ & $509_{(5.9)}$ & $251_{(3.1)}$ & $199_{(3.3)}$ & $492_{(2.9)}$ & $237_{(2.9)}$ \\
$p=100$ & $222_{(4.4)}$ &$ 494_{(3.9)}$ & $241_{(3.2)} $& $199_{(3.6)}$ & $495_{(4.2)}$ & $229_{(2.5)}$ \\
$p=200$ & $221_{(4.0)}$ & $501_{(3.8)}$ & $242_{(2.7)}$ & $203_{(4.3)}$ & $494_{(3.9)}$ & $225_{(2.8)}$ \\
$p=500$ & $218_{(4.2)}$ & $490_{(3.3)}$ & $241_{(3.1)}$ & $196_{(3.9)}$ & $487_{(2.5)}$ & $226_{(2.9)}$ \\
\hline
\end{tabular}
\label{Table-RMSE} \end{table} \begin{table}[htbp]
\centering
\caption{ Mean and standard errors (in brackets) of simulated principal angles for our sieve method, the static loading method ($J_n=0$) and Local PCA for model \eqref{new}. The results are multiplied by $1000$.}
\begin{tabular}{l|lll||lll}
\hline
& \multicolumn{3}{c||}{$n=1000$} & \multicolumn{3}{c}{$n=1500$} \\
\hline
& Sieve & $J_n=0$ & Local PCA & Sieve & $J_n=0$ & Local PCA \\
\hline
$p=50$ &$13.4_{(0.6)}$ & $66.7_{(1.4)}$ & $16.1_{(0.4)}$ & $9.7_{(0.4)}$ & $61.9_{(0.5)}$ & $13.8_{(0.4)}$ \\
$p=100$ & $13.5_{(0.6)}$ &$ 64.8_{(0.8)}$ & $15.6_{(0.4)} $& $10.3_{(0.4)}$ & $63.0_{(0.8)}$ & $13.8_{(0.3)}$ \\
$p=200$ & $13.5_{(0.6)}$ & $65.9_{(0.9)}$ & $15.8_{(0.4)}$ & $11.3_{(0.5)}$ & $63.1_{(0.8)}$ & $13.6_{(0.3)}$ \\
$p=500$ & $13.7_{(0.6)}$ & $65.3_{(0.9)}$ & $15.9_{(0.4)}$ & $10.6_{(0.4)}$ & $61.6_{(0.5)}$ & $13.8_{(0.3)}$ \\
\hline
\end{tabular}
\label{Table-angle} \end{table} \subsection{Testing static factor loadings: type I error} We now examine our testing procedure in Section \ref{sec:test_loading} to test the hypothesis of static factor loadings via $B=1000$ bootstrap samples. Define \begin{align} g_1(t)=0.1+0.06t^2, g_2(t)=0.12+0.04t^2, g_3(t)\equiv 0.15,\\ \alpha_1(t)\equiv 0.8, \alpha_2(t)\equiv 0.9, \alpha_3(t)\equiv 1. \end{align}
Let $\mathbf A$ be a $p\times 3$ matrix with each element generated from $2U(-1,1)$, and \begin{align} \mathbf A(t)=\begin{pmatrix} \mathbf A_1 \alpha_1(t)\\\mathbf A_2\alpha_2(t)\\\mathbf A_3 \alpha_3(t) \end{pmatrix} \end{align} where $\mathbf A_{1}$, $\mathbf A_2$ and $\mathbf A_3$ are the sub-matrices of $\mathbf A$ consisting of the first $round(p/3)$ rows, the $(round(p/3)+1)_{th}$ to $round(2p/3)_{th}$ rows, and the $(round(2p/3)+1)_{th}$ to $p_{th}$ rows, respectively. The factors are $\mathbf z_{i,n}=(z_{i,1,n},z_{i,2,n},z_{i,3,n})^\top$, where $z_{i,k,n}=2\sum_{j=0}^\infty g_k^{j}(i/n)\epsilon_{i-j,k}$ for $k=1,2,3$, with $\{\epsilon_{i,k}\}$ $i.i.d.$ standard normal. We consider the following sub-models for $\boldsymbol e_{i,n}=(e_{i,1,n},...,e_{i,p,n})^\top$:
\begin{description}
\item (Model I) $(e_{i,s,n})_{i\in \mathbb Z}$, $1\leq s\leq p$, are $i.i.d.$ with distribution $\frac{16}{25}t(5)/\sqrt{5/4}$.
\item (Model II) $e_{i,s,n}=\frac{16}{25}\tilde e_{i-1,s,n}\tilde e_{i,s,n}$, where $(\tilde e_{i,s,n})_{i\in \mathbb Z}$, $1\leq s\leq p$, are $i.i.d.$ $N(0,1)$. \end{description} Both Model I and Model II are non-Gaussian. Furthermore, Model II is a stationary white noise process; see \cite{zhou2012measuring}. Table \ref{Type1} reports the simulated Type I errors based on $300$ simulations for Models I and II with different combinations of the dimension $p$ and time series length $n$. From Table \ref{Type1}, we see that the simulated Type I errors approximate their nominal levels reasonably well.
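The two error models can be generated as follows (a minimal sketch):

```python
import numpy as np

def errors_model_I(n, p, rng=None):
    """Model I: i.i.d. scaled t(5) errors, (16/25) t(5) / sqrt(5/4)."""
    rng = np.random.default_rng(rng)
    return (16 / 25) * rng.standard_t(5, size=(n, p)) / np.sqrt(5 / 4)

def errors_model_II(n, p, rng=None):
    """Model II: e_{i,s} = (16/25) e~_{i-1,s} e~_{i,s}, a stationary
    white-noise process (uncorrelated but not independent over time)."""
    rng = np.random.default_rng(rng)
    tilde = rng.standard_normal((n + 1, p))
    return (16 / 25) * tilde[:-1] * tilde[1:]
```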
\begin{table}[htbp]
\centering
\caption{Simulated Type I errors for Model I and Model II}
\begin{tabular}{|l|llrr||rrrr|}
\hline
& \multicolumn{4}{c||}{Model I} & \multicolumn{4}{c|}{Model II} \\
\hline
& \multicolumn{2}{c}{$n=1000$} & \multicolumn{2}{c||}{$n=1500$} & \multicolumn{2}{c}{$n=1000$} & \multicolumn{2}{c|}{$n=1500$} \\
\hline
level & \multicolumn{1}{r}{5\%} & \multicolumn{1}{r}{10\%} & 5\% & 10\% & 5\% & 10\% & 5\% & 10\% \\
\hline
$p=20$ & \multicolumn{1}{r}{4.30\%} & \multicolumn{1}{r}{10.00\%} & 5.67\% & 10.67\% & 5.33\% & 11.00\% & 5.67\% & 11.33\% \\
$p=50$ & \multicolumn{1}{r}{4.67\%} & \multicolumn{1}{r}{11.33\%} & 5.00\% & 10.00\% & 4.00\% & 8.33\% & 5.33\% & 9.67\% \\
$p=100$ & \multicolumn{1}{r}{2.67\%} &\multicolumn{1}{r}{5.33\% } & 5.33\% & 10.67\% & 3.00\% & 5.67\% & 5.00\% & 11.33\% \\
\hline
\end{tabular}
\label{Type1} \end{table}
\subsection{Testing static factor loadings: power} In this subsection, we examine the power performance of our testing procedure in Section \ref{sec:test_loading} via $B=1000$ bootstrap samples. Let $\tilde {\mathbf A}$ be a $p\times 1$ matrix with each entry $1$, and let $\tilde {\mathbf A}_{i}$, $1\leq i\leq 5$, be the sub-matrices of $\tilde {\mathbf A}$ consisting of the $(round(\frac{(i-1)p}{5})+1)_{th}$ to $round(\frac{ip}{5})_{th}$ rows of $\tilde {\mathbf A}$. We then consider a time-varying normalized loading matrix $\mathbf A(t)$ as follows. For a given $D\in \mathbb R$, let \begin{align*} \alpha_1(t,D)=1-Dt, \alpha_2(t,D)=1-2Dt, \alpha_3(t,D)=1-Dt^2,\\ \alpha_4(t,D)=1-2Dt^2, \alpha_5(t,D)\equiv 1, \end{align*}
and we define the normalized loading matrix $\mathbf A(t)=\tilde {\mathbf A}_D(t)(\tilde {\mathbf A}_D^\top (t)\tilde {\mathbf A}_D (t))^{-1/2}$ where \begin{align}
\tilde{\mathbf A}_D(t)=\begin{pmatrix}
\tilde {\mathbf A}_1\alpha_1(t,D)\\\tilde {\mathbf A}_{2}\alpha_2(t,D)\\\tilde {\mathbf A}_{3}\alpha_3(t,D)\\\tilde {\mathbf A}_{4}\alpha_4(t,D)\\\tilde {\mathbf A}_{5}\alpha_5(t,D)
\end{pmatrix}.
\end{align} Let $(e_{i,s,n})_{i\in \mathbb Z}$, $1\leq s\leq p$, be $i.i.d.$ $N(0,0.5^2)$, and $\mathbf e_{i,n}=(e_{i,s,n},1\leq s\leq p)^\top$. Let $(\epsilon_i)_{i\in \mathbb Z}$ be $i.i.d.$ $N(0,1)$ random variables that are independent of $(e_{i,s,n})_{i\in \mathbb Z}$, $1\leq s\leq p$. The locally stationary process $z_{i,n}$ we consider in this subsection is $z_{i,n}=4\sum_{j=0}^\infty g^j(i/n)\epsilon_{i-j}$,
where the coefficient function is $g(t)=0.2+0.2t$. Observe that $D=0$ corresponds to the situation where $\mathbf A(t)$ is time-invariant, and as $D$ increases we have larger deviations from the null. We examine the cases $p=25, 50, 100, 150$ and $n=1500$ while increasing $D$ from $0$ to $0.5$. The results are displayed in Table \ref{table:power}, which shows that our test has decent power. Each simulated rejection rate is calculated from $100$ simulated samples, while for each sample the critical value is generated by the block bootstrap algorithm of Section \ref{sec:test_loading} using $B=1000$ bootstrap samples.
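The normalized alternative loading matrix $\mathbf A(t)=\tilde{\mathbf A}_D(t)(\tilde{\mathbf A}_D^\top(t)\tilde{\mathbf A}_D(t))^{-1/2}$ of this power study can be constructed as follows (a sketch; since $d=1$ here, the normalization reduces to dividing by the Euclidean norm):

```python
import numpy as np

def loading_A(t, D, p):
    """Time-varying normalized p x 1 loading matrix of the power study."""
    alphas = [1 - D * t, 1 - 2 * D * t, 1 - D * t ** 2, 1 - 2 * D * t ** 2, 1.0]
    tilde = np.empty((p, 1))
    for i in range(5):
        # Rows round(i*p/5)+1, ..., round((i+1)*p/5) get coefficient alpha_{i+1}.
        lo, hi = round(i * p / 5), round((i + 1) * p / 5)
        tilde[lo:hi, 0] = alphas[i]
    # Normalize so that A(t)^T A(t) = 1 (the d = 1 case).
    return tilde / np.linalg.norm(tilde)
```

For $D=0$ all blocks share the coefficient $1$, recovering a time-invariant loading.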
\begin{table}[htbp]
\centering
\caption{Simulated rejection rates (in \%) at different alternatives, with $B=1000$}
\begin{tabular}{|l|rr|rr|rr|rr|rr|rr|}
\hline
$D$& \multicolumn{2}{c|}{0} & \multicolumn{2}{c|}{0.1} & \multicolumn{2}{c|}{0.2} & \multicolumn{2}{c|}{0.3} & \multicolumn{2}{c|}{0.4} & \multicolumn{2}{c|}{0.5} \\
\hline
level & 5\% & 10\% & 5\% & 10\% & 5\% & 10\% & 5\% & 10\% & 5\% & 10\% & 5\% & 10\% \\
\hline
$p=25$ & 6 & 13 & 18 & 35 & 33 & 41 & 96 & 97 & 100 & 100 & 100 & 100 \\
$p=50$ & 7 & 12 & 14 & 27 & 21 & 39 & 91 & 95 & 100 & 100 & 100 & 100 \\
$p=100$ & 4 & 11 &17 & 33 & 49 & 64 & 79 & 87 & 100 & 100 & 100 & 100 \\
$p=150$ & 7 & 12 & 9 & 25 & 23 & 32 & 55 & 65 & 100 & 100 & 100 & 100 \\
\hline
\end{tabular}
\label{table:power} \end{table}
\section{Data Analysis}\label{data-ana} \subsection{Analysis of option data}
To illustrate the usefulness of our method we investigate the implied volatility surface of call options on Microsoft during the period June 1st, 2014 to June 1st, 2019, with 1260 trading days in total. The data are obtained from OptionMetrics through the WRDS database. For each day we observe the volatility $W_t(u_i,v_j)$, where $u_i$ is the time to maturity and $v_j$ is the delta. A similar type of data has been studied by \cite{lam2011estimation}.
We first examine whether the considered data set has a static loading matrix by performing our test procedure in Section \ref{sec:test_loading}. Specifically, we consider $u_i$ to take values $30$, $60$, $91$, $122$, $152$, $182$, $273$, $365$, $547$ and $730$ for $i=1,...,10$, and $v_j$ to take values $0.35$, $0.40$, $0.45$, $0.50$, $0.55$ for $j=1,...,5$. Then we write $\tilde {\mathbf x}_t=vec(W_t(u_i,v_j)_{1\leq i\leq 10, 1\leq j\leq 5}).$ Due to the well-documented unit root non-stationarity of $\tilde{\mathbf x}_t$ (see e.g. \cite{fengler2007dynamic}) we study the $50$-dimensional time series $$\mathbf x_t:=\tilde {\mathbf x}_t-\tilde {\mathbf x}_{t-1}.$$
In our data analysis we choose $k_0=3$. Using the minimal volatility method we select $N_n=4$, where $N_n=\frac{T-3}{m_n}$ is the number of non-overlapping equal-sized blocks, and choose window size $w_n=5$. Via $B=10000$ bootstrap samples we obtain a $p$-value of $6.08\%$. The small $p$-value provides moderate to strong evidence against the null hypothesis of static factor loadings.
We then apply our sieve estimator in Section \ref{Sec:estimate} to estimate the time-varying loading matrix. The cross-validation method suggests the use of the Legendre polynomial basis up to the $6_{th}$ order. We find that during the considered period the number of factors, or the rank of the loading matrix, is either one or two at each time point. In Figure \ref{eigen-plot} we display the estimated number of factors at each time, and in Figure \ref{explainvar} we show the percentage of the trace of $\mathbf \Lambda(t)$ that is explained by the eigenvectors corresponding to the first and second largest eigenvalues, which reflects the time-varying structure of the loading matrix. The results indicate that the loading matrix is time varying, and that the two leading eigenvalues of $\mathbf {\Lambda}(t)$ dominate the sum of all eigenvalues of $\mathbf \Lambda(t)$.
\begin{figure}
\caption{Number of factors for the Microsoft call option data}
\label{eigen-plot}
\caption{Proportion of the trace of $\Lambda(t)$ explained by the leading two eigenvectors for the Microsoft call option data}
\label{explainvar}
\end{figure} \subsection{Analysis of COVID-19 deaths in the U.S.}\label{Covid19-US} We consider the daily state-level new deaths of COVID-19 in the United States. The data can be obtained from https://data.cdc.gov/Case-Surveillance/United-States-COVID-19-Cases-and-Deaths-by-State. The dataset reports COVID-19 time series from 60 places, which include the 50 states, the District of Columbia, New York City, the U.S. territories of American Samoa, Guam, the Commonwealth of the Northern Mariana Islands, Puerto Rico, and the U.S. Virgin Islands, as well as three independent countries in compacts of free association with the United States: the Federated States of Micronesia, the Republic of the Marshall Islands, and the Republic of Palau. New York City's reported case and death counts are recorded separately from New York State's counts. We consider the series of new deaths starting from Feb. 26, 2020, since Feb. 27, 2020 is the first day on which a death appears in this dataset. The series considered ends on Oct. 8, 2021. We denote this $60$-dimensional vector time series of length $591$ by $\tilde{\mathbf y}_t$, $t=1,2,\cdots,591$. We study the differenced death series \begin{align} \mathbf y_t=\tilde{\mathbf y}_t-\tilde{\mathbf y}_{t-1} \end{align} by our non-stationary factor model. We apply our test in Section \ref{sec:test_loading} to examine whether the loading matrix of $\mathbf y_t$ is static. We choose $k_0=3$. Selecting the number of non-overlapping equal-sized blocks $N_n=7$ and the window size $w_n=24$ via the minimal volatility method, through $B=10000$ bootstrap samples we obtain a $p$-value of $0.073\%$, which provides strong evidence against the null hypothesis of static factor loadings. \begin{figure}
\caption{Number of factors for the state-level differenced new deaths in the USA.}
\label{covid19-eigen-plot}
\caption{Proportion of the trace of $\Lambda(t)$ explained by the leading two eigenvectors for the state-level differenced new deaths in the USA.}
\label{covid19-explainvar}
\end{figure}
The sieve estimator in Section \ref{Sec:estimate} is then applied to estimate the time-varying loading matrix for $\mathbf y_t$, with the cross-validation method suggesting the use of the Legendre polynomial basis up to the $7_{th}$ order. During the considered period the number of factors varies from $1$ to $5$. For the state-level differenced new deaths of COVID-19 in the USA, we depict the estimated number of factors in Figure \ref{covid19-eigen-plot} and the percentage of the trace of $\mathbf \Lambda(t)$ that is explained by the two leading eigenvalues in Figure \ref{covid19-explainvar}. The results indicate that the loading matrix, as well as the number of factors, is time varying, and that the two leading eigenvalues of $\mathbf {\Lambda}(t)$ do not dominate the sum of all eigenvalues of $\mathbf \Lambda(t)$ during certain periods.
\section*{Online supplement} The online supplement contains the proofs of the auxiliary lemmas for Theorems \ref{Thm-approx}, \ref{Space-Distance} and \ref{eigentheorem}, as well as the proofs of Proposition \ref{hatdrate}, Proposition \ref{prop1}, Theorem \ref{Boots-thm5}, Lemma \ref{LemmaC1} and Theorem \ref{Power}.
\renewcommand{\theequation}{S.\arabic{equation}}
\section{Appendix} \label{appendix} \setcounter{equation}{0} We now present some additional notation for the proofs. For any random variable $W=W(t,\mathcal{F}_i)$, write $W^{(g)}=W(t,\mathcal{F}_i^{(g)})$ where $\mathcal{F}_i^{(g)}=(\boldsymbol \epsilon_{-\infty},...,\boldsymbol \epsilon_{g-1},\boldsymbol \epsilon_g',\boldsymbol \epsilon_{g+1},...,\boldsymbol \epsilon_i)$, and $(\boldsymbol \epsilon_i')_{i\in \mathbb Z}$ is an i.i.d copy of $(\boldsymbol \epsilon_i)_{i\in\mathbb Z}$.
Let $\mathcal{P}_j=\field{E}(\cdot|\mathcal{F}_j)-\field{E}(\cdot|\mathcal{F}_{j-1})$ be the projection operator.
Let $a_{uv}(t)=0$ if $v\not \in \{l_1,...,l_{d(t)}\}$, where $l_i, 1\leq i\leq d(t)$, is the index set of factors defined in Section \ref{Sec::Pre}.
\noindent {\bf Proof of Theorem \ref{Thm-approx}.}
Notice that for each $k\in \{1,...,k_0\}$, we have that \begin{align}\label{11-29-eq6}
\sup_{t\in[0,1]}\|\hat {\mathbf M}(J_n,t,k)\hat {\mathbf M}^\top(J_n,t,k)-&\boldsymbol \Sigma_x(t,k)\boldsymbol \Sigma_x^\top(t,k)\|_{F}\leq \\&2\|\boldsymbol \Sigma_x(t,k)\|_{F}\|\hat {\mathbf M}(J_n,t,k)-\boldsymbol \Sigma_x(t,k)\|_{F}+\|\hat {\mathbf M}(J_n,t,k)-\boldsymbol \Sigma_x(t,k)\|^2_{F}.\notag \end{align} Notice that by Jensen's inequality, the Cauchy--Schwarz inequality and conditions (C1), (M2), (S1)--(S4), it follows that uniformly for $t\in [0,1]$, \begin{align}
\notag\|\boldsymbol \Sigma_x(t,k)\|_F^2\leq &2\sum_{u=1}^p\sum_{v=1}^p\Big(\sum_{u'=1}^d\sum_{v'=1}^d a_{uu'}(t)\field{E}(Q_{u'}(t,\mathcal{F}_{i+k})Q_{v'}(t,\mathcal{F}_i))a_{vv'}(t)\Big)^2
+2\|\mathbf A(t)\boldsymbol \Sigma_{ze}(t,k)\|^2_F\\\leq & C \sum_{u=1}^p\sum_{v=1}^p\left[\sum_{u'=1}^d\sum_{v'=1}^d a_{uu'}(t)a_{vv'}(t)\right]^2+o(p^{2-2\delta})\leq C' d^2 p^{2-2\delta},\label{11-29-eq7} \end{align} for some sufficiently large constants $C$ and $C'$ which depend on the constant $M$ in condition (M2). On the other hand, by Lemmas \ref{11-29-lemma3}, \ref{TildeSigma} and \ref{11-29-lemma5} in the online supplement we have that \begin{align}
\|\sup_{t\in [0,1]}\|\hat {\mathbf M}(J_n,t,k)-\boldsymbol \Sigma_x(t,k)\|_F\|_{\mathcal L^2}=O(p\nu_n).\label{11-29-eq8} \end{align} Then the theorem follows from equations \eqref{11-29-eq6}, \eqref{11-29-eq7} and \eqref{11-29-eq8}.
$\Box$
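As a numerical aside (illustrative only; random inputs with no connection to the data of the paper), the elementary Frobenius-norm inequality behind \eqref{11-29-eq6}, which follows from $\hat{\mathbf M}\hat{\mathbf M}^\top-\boldsymbol\Sigma\boldsymbol\Sigma^\top=(\hat{\mathbf M}-\boldsymbol\Sigma)(\hat{\mathbf M}-\boldsymbol\Sigma)^\top+(\hat{\mathbf M}-\boldsymbol\Sigma)\boldsymbol\Sigma^\top+\boldsymbol\Sigma(\hat{\mathbf M}-\boldsymbol\Sigma)^\top$ and submultiplicativity, can be checked directly:

```python
# Illustrative check (random inputs) of the Frobenius-norm inequality used in
# the first step of the proof: for conformable matrices Mhat and Sigma,
#   ||Mhat Mhat^T - Sigma Sigma^T||_F
#     <= 2 ||Sigma||_F ||Mhat - Sigma||_F + ||Mhat - Sigma||_F^2.
import numpy as np

rng = np.random.default_rng(0)
for _ in range(100):
    p = int(rng.integers(2, 10))
    Sigma = rng.standard_normal((p, p))
    Mhat = Sigma + 0.1 * rng.standard_normal((p, p))
    lhs = np.linalg.norm(Mhat @ Mhat.T - Sigma @ Sigma.T, "fro")
    diff = np.linalg.norm(Mhat - Sigma, "fro")
    rhs = 2 * np.linalg.norm(Sigma, "fro") * diff + diff ** 2
    assert lhs <= rhs + 1e-10
```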
\noindent {\bf Proof of Theorem \ref{Space-Distance}.}
We first prove (i). It suffices to show that the $d(t)_{th}$ largest eigenvalue of $\mathbf \Lambda(t)$ satisfies \begin{align} \inf_{t\in\mathcal T_{\eta_n}}\lambda_{d(t)}(\mathbf \Lambda(t)) \gtrapprox \eta_n p^{2-2\delta}.\label{lambdad}\end{align} Then the theorem follows from Theorem \ref{Thm-approx}, \eqref{lambdad} and Theorem 1 of \cite{yu2015useful}. We now show \eqref{lambdad}.
Consider the QR decomposition of $\mathbf A(t)$ such that $\mathbf A(t)=\mathbf V(t)\mathbf R(t)$ where $\mathbf V(t)^\top\mathbf V(t)=\mathbf I_{d(t)}$ and $\mathbf I_{d(t)}$ is the $d(t)\times d(t)$ identity matrix. Here $\mathbf V(t)$ is a $p\times d(t)$ matrix and $\mathbf R(t)$ is a $d(t)\times d(t)$ matrix. Then \eqref{DefLambda} of the main article can be written as \begin{align}\label{Lambda1} \mathbf \Lambda(t)=\mathbf V(t) \tilde{\mathbf \Lambda}(t)\mathbf V^\top(t), \end{align} where \begin{align} \tilde{ \mathbf \Lambda}(t)=\mathbf R(t)\Big[\sum_{k=1}^{k_0}(\mathbf \Sigma_z(t,k)\mathbf A^\top(t)+\mathbf \Sigma_{ze}(t,k))(\mathbf A(t)\mathbf \Sigma^\top_z(t,k)+\mathbf \Sigma^\top_{ze}(t,k))\Big]\mathbf R^\top(t). \end{align}
\noindent We now discuss the $\lambda_{d(t)}( \tilde{\mathbf \Lambda} (t))$. Write \begin{align} \mathbf \Sigma_{Rz}(t,k)=\mathbf R(t)\mathbf \Sigma_z(t,k)\mathbf R^\top(t),\mathbf \Sigma_{Rz,e}(t,k)=\mathbf R(t)\mathbf \Sigma_{ze}(t,k),\\ \mathbf \Sigma^*_{Rz}(t,k_0)=(\mathbf \Sigma_{Rz}(t,1),...,\mathbf \Sigma_{Rz}(t,k_0)),\mathbf \Sigma^*_{Rz,e}(t,k_0)=(\mathbf \Sigma_{Rz,e}(t,1),...,\mathbf \Sigma_{Rz,e}(t,k_0)). \end{align} Therefore we have \begin{align}\label{new.S9} \tilde {\mathbf \Lambda}(t)=(\mathbf \Sigma^*_{Rz}(t,k_0)(\mathbf I_{k_0}\otimes \mathbf V^\top(t))+\mathbf \Sigma^*_{Rz,e}(t,k_0))(\mathbf \Sigma^*_{Rz}(t,k_0)(\mathbf I_{k_0}\otimes \mathbf V^\top(t))+\mathbf \Sigma^*_{Rz,e}(t,k_0))^\top \end{align} where $\otimes$ denotes the Kronecker product. On the other hand, by (C1) we have
$$\sup_{t\in [0,1]}\|\mathbf R(t)\|_2\asymp p^{\frac{1-\delta}{2}},\inf_{t\in [0,1]}\|\mathbf R(t)\|_2\asymp p^{\frac{1-\delta}{2}}, \sup_{t\in \mathcal T_{\eta_n}}\|\mathbf R(t)\|_m\gtrapprox \eta_n p^{\frac{1-\delta}{2}}, \inf_{t\in \mathcal T_{\eta_n}}\|\mathbf R(t)\|_m\gtrapprox \eta_n p^{\frac{1-\delta}{2}}.$$ Then by conditions (M1), (M2), (M3), (S2) and (S4) we have \begin{align*}
\sup_{t\in [0,1]}\|\mathbf \Sigma^*_{Rz,e}(t,k_0)\|_2=o(p^{1-\delta}), \inf_{t\in [0,1]}\|\mathbf \Sigma^*_{Rz,e}(t,k_0)\|_2=o(p^{1-\delta}),\\ \sup_{t\in \mathcal T_{\eta_n}}\sigma_{d(t)}(\mathbf \Sigma^*_{Rz}(t,k_0)) \gtrapprox \eta_n p^{1-\delta}, \inf_{t\in \mathcal T_{\eta_n}}\sigma_{d(t)}(\mathbf \Sigma^*_{Rz}(t,k_0)) \gtrapprox \eta_n p^{1-\delta}, \end{align*} where $\sigma_{d(t)}(\boldsymbol\Sigma)$ denotes the $d(t)_{th}$ largest singular value of a matrix $\boldsymbol\Sigma$. Therefore using \eqref{new.S9} and the arguments in the proof of Theorem 1 of \cite{lam2011estimation} we have that \begin{align}\label{lambdaminD}\inf_{t\in \mathcal T_{\eta_n}}\lambda_{d(t)}(\tilde{\mathbf \Lambda}(t))\gtrapprox \eta_n p^{2-2\delta}.\end{align} This shows \eqref{lambdad}. Therefore assertion (i) of the theorem follows. Assertion (ii) follows from result (i), condition (C1) and a similar argument to the proof of Theorem 3 of \cite{lam2011estimation}. Details are omitted for the sake of brevity.
$\Box$
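A minimal numerical check (synthetic inputs) of the reduction \eqref{Lambda1}: for a $p\times d$ matrix $\mathbf V$ with orthonormal columns and a symmetric $d\times d$ matrix $\tilde{\mathbf\Lambda}$, the spectrum of $\mathbf V\tilde{\mathbf\Lambda}\mathbf V^\top$ is that of $\tilde{\mathbf\Lambda}$ padded with $p-d$ zeros, which is why it suffices to bound $\lambda_{d(t)}(\tilde{\mathbf\Lambda}(t))$:

```python
# Synthetic check: for V (p x d) with orthonormal columns and symmetric Ltil,
# eig(V Ltil V^T) = eig(Ltil) together with p - d zeros, so only the small
# d x d matrix Ltil matters for the nonzero spectrum of Lambda = V Ltil V^T.
import numpy as np

rng = np.random.default_rng(1)
p, d = 8, 3
V, _ = np.linalg.qr(rng.standard_normal((p, d)))    # orthonormal columns
S = rng.standard_normal((d, d))
Ltil = S @ S.T                                      # symmetric positive semi-definite
full = np.sort(np.linalg.eigvalsh(V @ Ltil @ V.T))  # ascending eigenvalues
small = np.sort(np.linalg.eigvalsh(Ltil))
assert np.allclose(full[:p - d], 0.0, atol=1e-8)    # p - d zero eigenvalues
assert np.allclose(full[p - d:], small, atol=1e-8)  # remaining ones match Ltil
```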
\noindent{{\bf Proof of Theorem \ref{eigentheorem}.}}
(i) follows immediately from Lemma \ref{Bhatia} in the online supplement and Theorem \ref{Thm-approx}. For (ii), notice that by Theorem \ref{Space-Distance} we have that
\begin{align}\|\sup_{t\in{\mathcal T_{\eta_n}}}\|\hat {\mathbf B}(t)-\mathbf B(t) \hat{\mathbf O}^\top_2(t)\|_F\|_{\mathcal L^1}= O(\eta_n^{-1}p^\delta \nu_n),\label{Bbound}\end{align} where $\mathbf B(t)$, $\hat {\mathbf B}(t)$ and ${\hat {\mathbf O}}_2(t)$ are defined in Theorem \ref{Space-Distance}. Since $\hat {\mathbf O}_2(t)$ is a $(p-d(t))\times (p-d(t))$ orthogonal matrix, it is easy to check that the columns of $\mathbf B(t)\hat{\mathbf O}^\top_2(t)$ form an orthonormal basis of the null space of $\mathbf \Lambda(t)$. Using this fact, we then follow the decomposition (A.5) of \cite{lam2012factor} to obtain that uniformly for $j=d(t)+1,...,p$ \begin{align} \lambda_j(\hat {\mathbf \Lambda}(t))=K_{1,j}(t)+K_{2,j}(t)+K_{3,j}(t), \end{align} with \begin{align}
\label{New.S13}\sup_{t\in \mathcal T_{\eta_n}}\max_{d(t)+1\leq j\leq p}|K_{1,j}(t)|\leq \sum_{k=1}^{k_0}\sup_{t\in \mathcal T_{\eta_n}}\|\hat {\mathbf M}(J_n,t,k)-\boldsymbol \Sigma_x(t,k)\|^2_F,\\
\sup_{t\in \mathcal T_{\eta_n}}\max_{d(t)+1\leq j\leq p}|K_{2,j}(t)|\leq M_1\sup_{t\in \mathcal T_{\eta_n}}\|\hat {\mathbf \Lambda}(t)-{\mathbf \Lambda(t)}\|_F\|\hat {\mathbf B}(t)-\mathbf B(t)\hat{\mathbf O}^\top_2(t)\|_F, \label{New.S14}\\
\sup_{t\in \mathcal T_{\eta_n}}\max_{d(t)+1\leq j\leq p}|K_{3,j}(t)|\leq M_2\sup_{t\in \mathcal T_{\eta_n}}\|\hat {\mathbf B}(t)-\mathbf B(t)\hat{\mathbf O}^\top_2(t)\|^2_F\|\mathbf \Lambda(t)\|_F\label{New.S15} \end{align} for some sufficiently large constants $M_1$ and $M_2$. For $K_{1,j}(t)$, by \eqref{11-29-eq8} and \eqref{New.S13}, we have that \begin{align}\label{K1j}
\|\sup_{t\in\mathcal {T}_{\eta_n}}\max_{d(t)+1\leq j\leq p}|K_{1,j}(t)|\|_{\mathcal L^1}= O(p^2\nu^2_n). \end{align} Notice that for any two random variables $X$, $Y$ with finite first moments and any $c_1,c_2>0$, the following inequality holds by the Markov inequality \begin{align}\label{eqtail}
\field{P}(|XY|\geq c_1c_2\field{E}|X|\field{E}|Y|)
\leq \field{P}(|X|\geq c_1\field{E}|X|)+\field{P}(|Y|\geq c_2\field{E}|Y|)\leq 1/c_1+1/c_2. \end{align} Meanwhile $\eta_n^{-1}(p\nu_n)^2=p^{2-\delta}\nu_n(\eta_n^{-1} p^\delta\nu_n)$. Therefore by \eqref{eqtail} and the results of Theorem \ref{Thm-approx}, Theorem \ref{Space-Distance} and \eqref{New.S14}, we have that\begin{align}\label{K2j}
\field{P}\left(\sup_{t\in\mathcal T_{\eta_n}}\max_{d(t)+1\leq j\leq p}|K_{2,j}(t)|\geq \eta_n^{-1}\left(p\nu_n\right)^2 g_n^2\right)=O(1/g_n). \end{align}
Finally, by \eqref{11-29-eq7} we shall see that \begin{align}\label{lambdaF}
\sup_{t\in \mathcal T_{\eta_n}}\|\mathbf \Lambda(t)\|_F\asymp p^{2-2\delta}, \end{align} which together with Theorem \ref{Thm-approx}, \eqref{New.S15} and \eqref{eqtail} provides the probabilistic bound for $K_{3,j}(t)$
\begin{align}\label{K3j}
\field{P}\Big(\sup_{t\in\mathcal T_{\eta_n}}\max_{d(t)+1\leq j\leq p}|K_{3,j}(t)|\geq g_n^2 \eta_n^{-2}(p\nu_n)^2\Big)=O(1/g_n). \end{align} As a result, (ii) follows from \eqref{K1j}, \eqref{K2j} and \eqref{K3j}.
For (iii), notice that by \eqref{lambdaminD} and \eqref{lambdaF} it follows that \begin{align}\label{lambdaF1}
\inf_{t\in \mathcal T_{\eta_n}}\|\mathbf \Lambda(t)\|_m\gtrapprox \eta_n p^{2-2\delta}, \sup_{t\in \mathcal T_{\eta_n}}\|\mathbf \Lambda(t)\|_m\gtrapprox \eta_n p^{2-2\delta}, \inf_{t\in \mathcal T_{\eta_n}}\|\mathbf \Lambda(t)\|_2\asymp p^{2-2\delta}, \sup_{t\in \mathcal T_{\eta_n}}\|\mathbf \Lambda(t)\|_2\asymp p^{2-2\delta}. \end{align} Then by (i) \begin{align}\label{S-40-11-19} &\field{P}(\frac{\lambda_{j+1}(\hat {\mathbf \Lambda}(t))}{\lambda_j(\hat {\mathbf \Lambda}(t))}\geq \eta_n, j=1,...,d(t)-1, \forall t\in \mathcal T_{\eta_n})\geq
\field{P}(\sup_{t\in \mathcal T_{\eta_n}}\max_{1\leq j\leq d(t)}|\lambda_j(\hat {\mathbf \Lambda}(t))- \lambda_j({\mathbf \Lambda(t)})|\leq \frac{\eta_np^{2-2\delta}}{\tilde M_0}), \end{align} where $\tilde M_0$ is a sufficiently large constant such that $\eta_np^{2-2\delta}/{\tilde M_0}<(1/2) \inf_{t\in \mathcal T_{\eta_n}}\lambda_{d(t)}(\mathbf \Lambda(t))$. Therefore (iii) holds by (i) and the Markov inequality. Finally, (iv) follows from (i), (ii), \eqref{lambdaF1} and \eqref{eqtail}.
$\Box$\\
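Parts (iii) and (iv) are what make the eigenvalue-ratio choice of the number of factors work. The following is a small synthetic illustration (a sketch with made-up matrices, not the estimator of the main article verbatim) of the ratio gap at $j=d$:

```python
# Synthetic illustration of the eigenvalue-ratio gap: a rank-d matrix plus a
# small symmetric perturbation has lambda_{j+1}/lambda_j bounded away from 0
# for j < d and for leading noise indices, but the ratio is tiny at j = d.
import numpy as np

rng = np.random.default_rng(2)
p, d = 50, 3
V, _ = np.linalg.qr(rng.standard_normal((p, d)))
Lam = V @ np.diag([30.0, 20.0, 10.0]) @ V.T       # rank-d "population" matrix
E = 0.01 * rng.standard_normal((p, p))
Lam_hat = Lam + (E + E.T) / 2                     # small symmetric perturbation
lam = np.sort(np.linalg.eigvalsh(Lam_hat))[::-1]  # descending eigenvalues
ratios = np.abs(lam[1:]) / np.abs(lam[:-1])
d_hat = int(np.argmin(ratios[:10])) + 1           # search over a bounded range
assert d_hat == d
```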
\noindent{\bf Proof of Proposition \ref{Jan23-Lemma6}.}
By Proposition \ref{prop1}, it suffices to prove everything on the event $A_n:=\{\hat d_n=d\}$. Observe that $\hat T_n=|\frac{\sum_{i=1}^{m_n}\hat {\boldsymbol l}_i}{\sqrt{m_n}}|_\infty$ and $\tilde T_n=|\frac{\sum_{i=1}^{m_n}\tilde {\boldsymbol l}_i}{\sqrt{m_n}}|_\infty$. Notice that for $1\leq s_1\leq N_n-1$, $1\leq s_2\leq p-d$, $1\leq s_3\leq p$, $1\leq w\leq k_0$, the $(k_0(p-d)ps_1+(w-1)p(p-d)+(s_2-1)p+s_3)_{th}$ entries of $\tilde{\boldsymbol l}_i$ and $\hat{\boldsymbol l}_i$ are $\mathbf f_{s_2}^\top \mathbf x_{s_1m_n+i+w,n}x_{s_1m_n+i,s_3}$ and $\hat{\mathbf f}_{s_2}^\top \mathbf x_{s_1m_n+i+w,n}x_{s_1m_n+i,s_3}$, respectively, where $x_{i,s}$ denotes the $s_{th}$ entry of $\mathbf x_{i,n}$. Then on the event $A_n$, \begin{align}
|\hat T_n-\tilde T_n|&=\max_{1\leq s_1\leq N_n-1, 1\leq s_3\leq p, 1\leq w\leq k_0, 1\leq s_2\leq p-d}|(\mathbf f_{s_2}^\top-\hat{\mathbf f}_{s_2}^\top)\sum_{i=1}^{m_n} \mathbf x_{s_1m_n+i+w,n}x_{s_1m_n+i,s_3}|/\sqrt{m_n}\notag\\
&\leq\max_{1\leq s_1\leq N_n-1, 1\leq s_3\leq p, 1\leq w\leq k_0, 1\leq s_2\leq p-d}\sum_{i=1}^{m_n}|(\mathbf f_{s_2}^\top-\hat{\mathbf f}_{s_2}^\top) \mathbf x_{s_1m_n+i+w,n}x_{s_1m_n+i,s_3}|/\sqrt{m_n}\label{8-11-S65}. \end{align}
By the triangle inequality, condition ($M'$) and Cauchy inequality, it follows that
\begin{align}
&\max_{1\leq s_2\leq p-d}|(\mathbf f_{s_2}^\top-\hat{\mathbf f}_{s_2}^\top)\sum_{i=1}^{m_n} \mathbf x_{s_1m_n+i+w,n}x_{s_1m_n+i,s_3}|\notag\\&= \max_{1\leq s_2\leq p-d}|\sum_{u=1}^p(f_{s_2,u}-\hat f_{s_2,u})\sum_{i=1}^{m_n} x_{s_1m_n+i+w,u}x_{s_1m_n+i,s_3}|\notag\\
&\leq\max_{1\leq s_2\leq p-d}\sqrt{\sum_{u=1}^p(f_{s_2,u}-\hat f_{s_2,u})^2}\sqrt{\sum_{u=1}^p \big(\sum_{i=1}^{m_n}x_{s_1m_n+i+w,u}x_{s_1m_n+i,s_3}\big)^2}\notag\\&\leq \|\hat{\mathbf F}-\mathbf F\|_F|\sum_{u=1}^p(\sum_{i=1}^{m_n}x_{s_1m_n+i+w,u}x_{s_1m_n+i,s_3})^2|^{1/2},\label{8-11-S66} \end{align} where $f_{s_2,u}$ and $\hat f_{s_2,u}$ are the $u_{th}$ entries of $\mathbf f_{s_2}$ and $\hat {\mathbf f}_{s_2}$, respectively. Moreover, for $1\leq s_1\leq N_n-1$, $1\leq s_3\leq p$, $1\leq w\leq k_0$, \begin{align}\label{New.S25}
\big\||\sum_{u=1}^p(\sum_{i=1}^{m_n}x_{s_1m_n+i+w,u}x_{s_1m_n+i,s_3})^2|^{1/2}\big\|_{\mathcal L^l}&=\big\|\sum_{u=1}^p(\sum_{i=1}^{m_n}x_{s_1m_n+i+w,u}x_{s_1m_n+i,s_3})^2\big\|^{1/2}_{\mathcal L^{l/2}}\notag \\&\lessapprox \sqrt pm_n. \end{align} Using the fact that \begin{align}
\field{E}\Big(\max_{\substack{1\leq s_1\leq N_n-1,\\ 1\leq s_3\leq p,\\ 1\leq w\leq k_0}}&\Big(\Big|\sum_{u=1}^p(\sum_{i=1}^{m_n}x_{s_1m_n+i+w,u}x_{s_1m_n+i,s_3})^2\Big|^{1/2}\Big)^l\Big)\leq \notag\\&\sum_{\substack{1\leq s_1\leq N_n-1,\\ 1\leq s_3\leq p,\\ 1\leq w\leq k_0}}\left\|\left|\sum_{u=1}^p(\sum_{i=1}^{m_n}x_{s_1m_n+i+w,u}x_{s_1m_n+i,s_3})^2\right|^{1/2}\right\|_{\mathcal L^l}^l, \end{align} together with \eqref{New.S25} we have \begin{align}
\|\max_{\substack{1\leq s_1\leq N_n-1,\\ 1\leq s_3\leq p,\\ 1\leq w\leq k_0}}|\sum_{u=1}^p(\sum_{i=1}^{m_n}x_{s_1m_n+i+w,u}x_{s_1m_n+i,s_3})^2|^{1/2}\|_{\mathcal L^l }=O((N_np)^{1/l}\sqrt pm_n).\label{8-11-S69} \end{align} Now the proposition follows from Proposition \ref{prop1}, \eqref{8-11-S65}, \eqref{8-11-S66} and \eqref{8-11-S69}, Corollary \ref{Corol4} in the online supplement and an application of \eqref{eqtail}.
$\Box$
\noindent{ \bf Proof of Theorem \ref{Jan23-Thm4}.}
We first show \eqref{Jan23-83} by verifying the conditions of Corollary 2.2 of \cite{zhang2018gaussian}. Recall from the proof of Proposition \ref{Jan23-Lemma6} that
for $1\leq s_1\leq N_n-1$, $1\leq s_2\leq p-d$, $1\leq s_3\leq p$, $1\leq w\leq k_0$, the $(k_0(p-d)ps_1+(w-1)p(p-d)+(s_2-1)p+s_3)_{th}$ entry of $\tilde{\boldsymbol l}_i$ is $\mathbf f_{s_2}^\top \mathbf x_{s_1m_n+i+w,n}x_{s_1m_n+i,s_3}$, where $x_{i,s}$ denotes the $s_{th}$ entry of $\mathbf x_{i,n}$. Let $\tilde l_{i,s}$ be the $s_{th}$ entry of $\tilde {\boldsymbol l}_i$. By the proof of Lemma \ref{Lemma-Bound1} in the online supplement, condition ($M'$), condition (a) and the definition of $\mathbf x_{i,n}$, using the fact that $\|\mathbf f_i\|_F=1$ for $1\leq i\leq p-d$, we get \begin{align}\label{Jan23-86}
\delta_{\tilde{\boldsymbol l}}(j,k,l):=\max_{1\leq i\leq m_n}\max_{1\leq s\leq N_nk_0(p-d)p}\field{E}^{1/l}(|\tilde{ l}_{i,s}-\tilde{ l}^{(i-k)}_{i,s}|^l)=O(((k+1)\log (k+1))^{-2}). \end{align} Notice that \eqref{Jan23-86} satisfies (10) of Assumption 2.3 of \cite{zhang2018gaussian}. Furthermore, by the triangle inequality, condition ($M'$) and Cauchy inequality, under null hypothesis, we have
\begin{align*}
\|\mathbf f_{s_2}^\top \mathbf x_{s_1m_n+i+w,n}x_{s_1m_n+i,s_3}\|_{\mathcal L^l}&\leq \|\mathbf f^\top_{s_2} \mathbf e_{s_1m_n+i+w}\|_{\mathcal L^{2l}}\|x_{s_1m_n+i,s_3}\|_{\mathcal L^{2l}}. \end{align*}
By the triangle inequality and condition ($M'$), $\|x_{s_1m_n+i,s_3}\|_{\mathcal L^{2l}}$ is bounded uniformly in $s_1$, $i$ and $s_3$. By condition (f) we have that \begin{align}
\|\mathbf f^\top_{s_2} \mathbf e_{s_1m_n+i+w}\|_{\mathcal L^{2l}}\leq M_{2l}\|\mathbf f^\top_{s_2} \mathbf e_{s_1m_n+i+w}\|_{\mathcal L^{2}},\\
\|\mathbf f^\top_{s_2} \mathbf e_{s_1m_n+i+w}\|^2_{\mathcal L^{2}} =\mathbf f^\top_{s_2}\field{E}(\mathbf e_{s_1m_n+i+w}\mathbf e_{s_1m_n+i+w}^\top)\mathbf f_{s_2}\leq \max_{1\leq i\leq n}\lambda_{\max}(cov(\mathbf e_{i,n}\mathbf e^\top_{i,n}))\lessapprox 1,\label{NewF19} \end{align} where the $\lessapprox$ in \eqref{NewF19} holds uniformly in $i$, $n$ and $p$, which is due to condition (f). This leads to \begin{align}\label{2020-July-14-01}
&\max_{1\leq i\leq m_n}\field{E}(\max_{1\leq s\leq N_nk_0(p-d)p}|\tilde{ l}_{i,s}|^4)\notag\\&= \max_{1\leq i\leq m_n}\field{E}(\max_{1\leq s_1\leq N_n-1, 1\leq s_3\leq p, 1\leq w\leq k_0, 1\leq s_2\leq p-d}|\mathbf f_{s_2}^\top \mathbf x_{s_1m_n+i+w,n}x_{s_1m_n+i,s_3}|^4)\lessapprox (N_np^{2})^{4/l}, \end{align} which with condition (c) shows that equation (12) of Corollary 2.2 of \cite{zhang2018gaussian} holds. Finally, using \eqref{Jan23-86}, \eqref{2020-July-14-01} and by similar arguments of Lemma \ref{Lemma-Bound1} in the online supplement and Lemma 5 of \cite{zhou2010simultaneous}, we shall see that $\max_{1\leq j\leq k_0N_n(p-d)p}\sigma_{j,j}\leq M<\infty$ for some constant $M$. This verifies (9) of \cite{zhang2018gaussian}, which proves the first claim of the theorem.
To see the second claim of the theorem, write $\delta_0=g_n^2\frac{p^\delta\sqrt m_n}{\sqrt n}\Omega_n$, where $g_n$ is a diverging sequence. Notice that \begin{align}\label{S95_Nov29}
\field{P}(\hat T_n\geq t)&\leq \field{P}(\tilde T_n\geq t-\delta_0)+\field{P}(|\tilde T_n-\hat T_n|\geq \delta_0),\\
\field{P}(\hat T_n\geq t)&\geq \field{P}(\tilde T_n\geq t+\delta_0, |\tilde T_n-\hat T_n|\leq \delta_0)\notag\\&=
\field{P}(\tilde T_n\geq t+\delta_0)-\field{P}(\tilde T_n\geq t+\delta_0, |\tilde T_n-\hat T_n|\geq \delta_0)\notag
\\&\geq \field{P}(\tilde T_n\geq t+\delta_0)-\field{P}( |\tilde T_n-\hat T_n|\geq \delta_0). \end{align} Therefore, by the first assertion of Theorem \ref{Jan23-Thm4} and Proposition \ref{Jan23-Lemma6}, we have \begin{align}\label{S97-Nov-21}
\sup_{t\in \mathbb R}|\field{P}(\hat T_n\geq t)-\field{P}(|\mathbf y|_\infty\geq t)|\leq
\sup_{t}|\field{P}(|\mathbf y|_\infty\geq t-\delta_0)-\field{P}(|\mathbf y|_\infty\geq t+\delta_0)|\notag\\+O(\frac{1}{g_n}+\frac{p^\delta}{\sqrt n}\log n+\iota(m_n,k_0N_np(p-d),2l, (N_np^2)^{1/l})). \end{align}
Since $|\mathbf y|_\infty=\max(\mathbf y, -\mathbf y)$, by Corollary 1 of \cite{chernozhukov2015comparison}, we have that \begin{align}\label{S98-Nov-21}
\sup_{t}|\field{P}(|\mathbf y|_\infty\geq t-\delta_0)-\field{P}(|\mathbf y|_\infty\geq t+\delta_0)|=O(\delta_0\sqrt{\log (n/\delta_0)}). \end{align} Combining \eqref{S97-Nov-21} and \eqref{S98-Nov-21}, and letting $g_n=(\frac{p^\delta \sqrt{m_n\log n}}{\sqrt n}\Omega_n)^{-1/3}$, the second claim of Theorem \ref{Jan23-Thm4} follows.
$\Box$
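The sandwich bounds in \eqref{S95_Nov29} are pointwise set inclusions, so they hold exactly under empirical distributions as well; a deterministic check with simulated (purely hypothetical) statistics $\hat T_n$ and $\tilde T_n$:

```python
# Deterministic check of the sandwich bounds: the set inclusions
#   {That >= t} is contained in {Ttil >= t - delta} union {|That-Ttil| >= delta},
#   {Ttil >= t + delta} inter {|That-Ttil| <= delta} is contained in {That >= t},
# hold pointwise, hence the bounds hold exactly for empirical frequencies.
import numpy as np

rng = np.random.default_rng(4)
Ttil = np.abs(rng.standard_normal(10000))
That = Ttil + 0.05 * rng.standard_normal(10000)  # perturbed version of Ttil
delta = 0.1
perturb = np.mean(np.abs(That - Ttil) >= delta)
for t in np.linspace(0.0, 3.0, 31):
    upper = np.mean(Ttil >= t - delta) + perturb
    lower = np.mean(Ttil >= t + delta) - perturb
    assert lower <= np.mean(That >= t) <= upper
```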
\def\spacingset#1{\renewcommand{\baselinestretch}
{#1}\small\normalsize} \spacingset{1}
\begin{center} { \bf Supplemental Material for ``Adaptive Estimation for Non-stationary Factor Models And A Test for Static Factor Loadings''}
\end{center}
\spacingset{1.5}
\setcounter{equation}{0}
\def\theequation{\Alph{section}.\arabic{equation}}
\def\thetheorem{\Alph{section}.\arabic{theorem}} \def\thelemma{\Alph{section}.\arabic{lemma}} \def\thecorollary{\Alph{section}.\arabic{corollary}} \def\theremark{\Alph{section}.\arabic{remark}} \def\theproposition{\Alph{section}.\arabic{proposition}} \setcounter{equation}{0} \setcounter{section}{0} \begin{abstract}
Section \ref{Sec::Est} contains the proofs of the auxiliary lemmas for Theorems \ref{Thm-approx}, \ref{Space-Distance} and \ref{eigentheorem}, which concern estimation of the null space of the evolutionary loading matrix, as well as the proof of Proposition \ref{hatdrate}. Section \ref{Sec3proof} includes the proofs of Proposition \ref{prop1} and Theorem \ref{Boots-thm5} for testing static factor loadings. Finally, Section \ref{Proof-Power} proves Lemma \ref{LemmaC1} and Theorem \ref{Power}. \end{abstract}
\section{Proofs of Proposition \ref{hatdrate} and Auxiliary Lemmas for Theorems \ref{Thm-approx}, \ref{Space-Distance} and \ref{eigentheorem}}\label{Sec::Est} \setcounter{equation}{0}
\noindent{\bf Proof of Proposition \ref{hatdrate}.} Consider $g_n=\frac{\eta_n^2}{\nu_np^{\delta}\log n}$. Then the conditions of Theorem \ref{eigentheorem} hold. Since $\sum_{j=1}^p\lambda^2_j(\hat{\mathbf \Lambda}(t))=\|\hat{\mathbf \Lambda}(t)\|^2_F$, we shall discuss the magnitude of $\|\hat{\mathbf \Lambda}(t)\|_F$. First, notice that by a similar but simpler argument to the proof of Theorem \ref{Space-Distance} it follows that $\inf_{t\in \mathcal T_{\eta_n}}\|\mathbf \Lambda(t)\|_F\asymp p^{2-2\delta}$. By Theorem \ref{Thm-approx} and \eqref{lambdaF} in the main article we shall see that \begin{align}\label{hatLambda1}
\sup_{t\in \mathcal T_{\eta_n}}\|\hat{\mathbf \Lambda}(t)\|_F\asymp p^{2-2\delta},~~ \inf_{t\in \mathcal T_{\eta_n}}\|\hat{\mathbf \Lambda}(t)\|_F\asymp p^{2-2\delta}. \end{align} Define the events \begin{align}
A_n=\{\sup_{t\in \mathcal T_{\eta_n}}\max_{1\leq j\leq d(t)}|\lambda_j(\hat {\mathbf \Lambda}(t))- \lambda_j({\mathbf \Lambda(t)})|\leq \frac{\eta_np^{2-2\delta}}{\tilde M_0}\}, ~~B_n=\{\sup_{t\in \mathcal T_{\eta_n}}\frac{ \lambda_{d(t)+1}(\hat{\mathbf \Lambda}( t))}{\lambda_{d(t)}(\hat {\mathbf \Lambda}(t))}\leq p^{2\delta}g_n^2\nu_n^2/\eta_n^3\}, \end{align} where as in the proof of Theorem \ref{eigentheorem} in the main article, $\tilde M_0$ is a sufficiently large constant such that $\eta_np^{2-2\delta}/{\tilde M_0}<(1/2) \inf_{t\in \mathcal T_{\eta_n}}\lambda_{d(t)}(\mathbf \Lambda(t))$. By the proof of Theorem \ref{eigentheorem}, we shall see that \begin{align} \field{P}(A_n)=1-O(\frac{\nu_np^{\delta}}{\eta_n}),~~\field{P}(B_n)=1-O(1/g_n)-O(\frac{\nu_np^{\delta}}{\eta_n}). \end{align} Notice that on $A_n$, by the choice of $R(t)$, \begin{align} \inf_{t\in \mathcal T_{\eta_n}}\min_{d(t)+1\leq j\leq R(t)}\frac{\lambda_{j+1}(\hat{\mathbf \Lambda}(t))}{\lambda_j(\hat{\mathbf \Lambda}(t))} \geq \inf_{t\in \mathcal T_{\eta_n}}\min_{d(t)+1\leq j\leq R(t)}\frac{\lambda_{j+1}(\hat{\mathbf \Lambda}(t))}{\lambda_{d(t)}(\hat{\mathbf \Lambda}(t))}&\geq C_0 \frac{\eta_n p^{2-2\delta}}{\eta_np^{2-2\delta}\log n}\notag\\&=C_0/\log n \end{align} for some sufficiently small positive constant $C_0$, where the last $\geq$ is due to \eqref{hatLambda1}. Because on $A_n$, $\inf_{t\in T_{\eta_n}}\min_{1\leq j\leq d(t)}\frac{\lambda_{j+1}(\hat{\mathbf \Lambda}(t))}{\lambda_{j}(\hat{\mathbf \Lambda}(t))}\geq \eta_n$ (cf.\ \eqref{S-40-11-19} in the main article), and the fact that $p^{2\delta}g_n^2\nu_n^2/\eta_n^3=o(\min(\eta_n, (\log n)^{-1}))$, we have that \begin{align} \field{P}(\hat d_n(t)=d(t), \forall t\in \mathcal T_{\eta_n})\geq \field{P}(A_n\cap B_n)=1-O(\frac{1}{g_n})-O(\frac{\nu_np^{\delta}}{\eta_n}). \end{align} Therefore the proposition holds by plugging $g_n=\frac{\eta_n^2}{\nu_np^{\delta}\log n}$ into the above equation.
$\Box$ \subsection{Auxiliary Lemmas}
\noindent The following lemma from \cite{bhatia1982analysis} is useful for proving Theorem \ref{eigentheorem}.
\begin{lemma}\label{Bhatia}
Let $M(n)$ be the space of all $n \times n$ (complex) matrices. A norm $\|\cdot\|$ on $M(n)$ is said to be unitarily invariant if $\|A\|=\|U A V\|$ for any two unitary matrices $U$ and $V$. We denote by $\operatorname{Eig} A$ the unordered $n$-tuple consisting of the eigenvalues of $A$, each counted as many times as its multiplicity. Let $D(A)$ be a diagonal matrix whose diagonal entries are the elements of $\operatorname{Eig} A$. For any norm on $M(n)$ define
$$
\|(\operatorname{Eig} A, \operatorname{Eig} B)\|=\min _{W}\left\|D(A)-W D(B) W^{-1}\right\|
$$
where the minimum is taken over all permutation matrices $W$.
If $A, B$ are Hermitian matrices, we have for all unitarily invariant norms (including the Frobenius norm) the inequality
$$
\|(\operatorname{Eig} A, \operatorname{Eig} B)\| \leqslant\|A-B\|.
$$ \end{lemma} In the following, recall that $a_{uv}(t)=0$ if $v\not \in \{l_1,...,l_{d(t)}\}$, where $l_i, 1\leq i\leq d(t)$, is the index set of factors defined in Section \ref{Sec::Pre} of the main article. \begin{lemma}\label{LS-G}Recall from the main article that \begin{align}
\delta_{G,l}(k)=\sup_{t\in[0,1],i\in\mathbb Z, 1\leq j\leq p}\|G_j(t,\mathcal{F}_i)-G_j(t,\mathcal{F}_i^{(i-k)})\|_{\mathcal L^l}.
\end{align}
Under conditions (A1), and (M1)--(M3), there exists a sufficiently large constant $M_0$, such that uniformly for $1\leq u\leq p$ and $t\in [0,1]$,\begin{align}
\field{E} | G_u(t,\mathcal{F}_0)|^4\leq M_0, \label{state1}\\
\delta_{G,l}(k)=O(d\delta_{Q,l}(k)+\delta_{H,l}(k)).\label{state2}
\end{align}
\end{lemma} {\it Proof.} By definition we have that for $1\leq u\leq p$, \begin{align*} G_u(t,\mathcal{F}_i)=\sum_{v=1}^da_{uv}(t)Q_v(t,\mathcal{F}_i)+H_u(t,\mathcal{F}_i). \end{align*} Notice that here $d=\sup_{0\leq t\leq 1}d(t)$ is fixed. Therefore, assumptions (A1), (M2) and the triangle inequality yield the fourth-moment bound \eqref{state1}. Finally, (A1) and (M3) lead to the assertion \eqref{state2}.
$\Box$
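In the Frobenius norm, Lemma \ref{Bhatia} specialised to Hermitian matrices is the Hoffman--Wielandt inequality, for which matching both spectra in sorted order attains the optimal permutation; a quick numerical check on random real symmetric matrices:

```python
# Numerical check of the Frobenius-norm case of the eigenvalue perturbation
# bound (Hoffman-Wielandt): with both spectra in ascending order, the l2
# distance between the eigenvalue tuples is at most ||A - B||_F.
import numpy as np

rng = np.random.default_rng(3)
for _ in range(100):
    n = int(rng.integers(2, 8))
    A = rng.standard_normal((n, n)); A = (A + A.T) / 2
    B = rng.standard_normal((n, n)); B = (B + B.T) / 2
    # eigvalsh already returns eigenvalues in ascending order
    eig_dist = np.linalg.norm(np.linalg.eigvalsh(A) - np.linalg.eigvalsh(B))
    assert eig_dist <= np.linalg.norm(A - B, "fro") + 1e-10
```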
\begin{lemma}\label{Lemma-Bound1}
Consider the process $z_{i,u,n}z_{i+k,v,n}$ for some $k>0$, and $1\leq u,v\leq d$. Then under conditions (C1), (M1)--(M3) we have:\\ i) $\zeta_{i,u,v} =:\zeta_{u,v}(\frac{i}{n},\mathcal{F}_i)=z_{i,u,n}z_{i+k,v,n}$ is a locally stationary process with associated dependence measures \begin{align}
\delta_{\zeta_{u,v},2}(h)\leq C(\delta_{Q,4}(h)+\delta_{Q,4}(h+k))
\end{align}
for some universal constant $C>0$ independent of $u,v$ and any integer $h$;\\ (ii) For any series of numbers $a_i, 1\leq i\leq n$, we have
\begin{align}
\Big( \field{E}\Big|\frac{1}{n}\sum_{i=1}^na_i(\zeta_{i,u,v}-\field{E} \zeta_{i,u,v})\Big|^2\Big)^{1/2}\leq \frac{2C}{n}\big(\sum_{i=1}^na_i^2\big)^\frac{1}{2}\Delta_{Q,4,0}.
\end{align} \end{lemma} {\it Proof}. i) is a consequence of the Cauchy-Schwarz inequality, triangle inequality and conditions $(M1)$, $(M2)$. For ii), notice that \begin{align}\label{Nov-24-1} \sum_{i=1}^na_i(\zeta_{i,u,v}-\field{E} \zeta_{i,u,v})=\sum_{i=1}^na_i(\sum_{u=0}^\infty \mathcal{P}_{i+k-u}\zeta_{i,u,v})=\sum_{u=0}^\infty\sum_{i=1}^na_i\mathcal{P}_{i+k-u}\zeta_{i,u,v}. \end{align} By the property of martingale difference and i) of this lemma we have \begin{align}\label{Nov-24-2}
\|\sum_{i=1}^na_i\mathcal{P}_{i+k-u}\zeta_{i,u,v}\|^2_{\mathcal L^2}=\sum_{i=1}^na_i^2 \|\mathcal{P}_{i+k-u}\zeta_{i,u,v}\|_{\mathcal L^2}^2\leq C^2\sum_{i=1}^na_i^2(\delta_{Q,4}(u)+\delta_{Q,4}(u-k))^2. \end{align} By the triangle inequality, inequalities \eqref{Nov-24-1} and \eqref{Nov-24-2}, the lemma follows.
$\Box$
\begin{corol}\label{Corol-Bound1}
Under conditions (C1), (M1) and (M2) we have for each fixed $k>0$, $1\leq u,v\leq p$ and $1\leq w\leq d$, $\psi_i=:\psi(\frac{i}{n},\mathcal{F}_i)=e_{i+k,u,n}e_{i,v,n}$,
$\phi_i=:\phi(\frac{i}{n},\mathcal{F}_i)=z_{i+k,w,n}e_{i,v,n}$ and
$\iota_i=:\iota(\frac{i}{n},\mathcal{F}_i)=e_{i+k,u,n}z_{i,w,n}$ are locally stationary processes with associated dependence measures
\begin{align}
\max(\delta_{\psi,2}(h),\delta_{\phi,2}(h),\delta_{\iota,2}(h))\leq C(\delta_{Q,4}(h)+\delta_{H,4}(h)+\delta_{Q,4}(h+k)+\delta_{H,4}(h+k)).
\end{align}
for some universal constant $C>0$ independent of $u$, $v$ and $w$. \end{corol} {\it Proof.} The corollary follows by the same argument as in the proof of Lemma \ref{Lemma-Bound1}.
$\Box$
To save notation in the following proofs, for given $J_n,k$ write $\hat {\mathbf M}(J_n,t,k)$ as $\hat {\mathbf M}$ if no confusion arises. Recall the definition of $\tilde {\boldsymbol \Sigma}_{x,j,k}$ and $\hat {\mathbf M}(J_n,t,k)$ in \eqref{11-12-10} and \eqref{11-12-11} in the main article. Observe the following decompositions \begin{align}\label{Nov-23-1} \tilde {\boldsymbol \Sigma}_{x,j,k}=\mathbf V_{1,j,k}+\mathbf V_{2,j,k}+\mathbf V_{3,j,k}+\mathbf V_{4,j,k}, \\\mathbf V_{1,j,k}=\frac{1}{n}\sum_{i=1}^{n-k}\mathbf A(\frac{i+k}{n})\mathbf {z}_{i+k,n}\mathbf {z}^\top_{i,n}\mathbf A^\top(\frac{i}{n})B_j(\frac{i}{n}),\\\label{Nov-23-2} \mathbf V_{2,j,k}=\frac{1}{n}\sum_{i=1}^{n-k}\mathbf A(\frac{i+k}{n})\mathbf z_{i+k,n}\mathbf e_{i,n}^\top B_j(\frac{i}{n}), \\\label{Nov-23-3} \mathbf V_{3,j,k}=\frac{1}{n}\sum_{i=1}^{n-k}\mathbf e_{i+k,n}\mathbf z^\top_{i,n} \mathbf A^\top(\frac{i}{n})B_j(\frac{i}{n}),\mathbf V_{4,j,k}=\frac{1}{n}\sum_{i=1}^{n-k}\mathbf e_{i+k,n}\mathbf e^\top_{i,n}B_j(\frac{i}{n}). \end{align}
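The decomposition \eqref{Nov-23-1}--\eqref{Nov-23-3} is exact algebra once $\mathbf x_{i,n}=\mathbf A(i/n)\mathbf z_{i,n}+\mathbf e_{i,n}$ is substituted into $\tilde{\boldsymbol\Sigma}_{x,j,k}$. A direct check with made-up smooth loadings and one hypothetical basis function:

```python
# Direct algebra check of the decomposition: substituting
# x_i = A(i/n) z_i + e_i into (1/n) * sum_i outer(x_{i+k}, x_i) * B_j(i/n)
# reproduces V1 + V2 + V3 + V4 exactly.  A(t) and B_j(t) are made up.
import numpy as np

rng = np.random.default_rng(5)
n, p, d, k = 40, 5, 2, 2
A = lambda t: np.outer(np.ones(p), [1.0 + t, 2.0 - t])  # hypothetical p x d loading
Bj = lambda t: np.cos(np.pi * t)                         # hypothetical basis function
z = rng.standard_normal((n + k, d))
e = rng.standard_normal((n + k, p))
x = np.array([A(i / n) @ z[i] + e[i] for i in range(n + k)])

lhs = sum(np.outer(x[i + k], x[i]) * Bj(i / n) for i in range(n)) / n
V1 = sum(np.outer(A((i + k) / n) @ z[i + k], A(i / n) @ z[i]) * Bj(i / n)
         for i in range(n)) / n
V2 = sum(np.outer(A((i + k) / n) @ z[i + k], e[i]) * Bj(i / n) for i in range(n)) / n
V3 = sum(np.outer(e[i + k], A(i / n) @ z[i]) * Bj(i / n) for i in range(n)) / n
V4 = sum(np.outer(e[i + k], e[i]) * Bj(i / n) for i in range(n)) / n
assert np.allclose(lhs, V1 + V2 + V3 + V4)
```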
\begin{lemma}\label{11-29-lemma3}
Under conditions (C1), (M1), (M2) and (M3) we have that $$\|\sup_{t\in [0,1]}\|\hat {\mathbf M}(J_n,t,k)-\field{E} \hat {\mathbf M}(J_n,t,k)\|_F\|_{\mathcal L^2}=O(\frac{J_np\sup_{t,1\leq j\leq J_n}|B_j(t)|^2}{\sqrt n}).$$ \end{lemma} {\it Proof.} Using equations \eqref{Nov-23-1}-\eqref{Nov-23-3} we have that \begin{align} \hat {\mathbf M}(J_n,t,k)-\field{E} \hat {\mathbf M}(J_n,t,k)=\sum_{j=1}^{J_n}\sum_{s=1}^4(\mathbf V_{s,j,k}-\field{E}(\mathbf V_{s,j,k}))B_j(t):=\sum_{s=1}^4\tilde {\mathbf V}_{s}(t), \end{align} where $\tilde{\mathbf V}_s(t)=\sum_{j=1}^{J_n} (\mathbf V_{s,j,k}-\field{E}(\mathbf V_{s,j,k}))B_j(t)$, for $s=1,2,3,4$. Consider the $s=1$ case and then \begin{align}\notag &\tilde {\mathbf V}_1(t):=\sum_{j=1}^{J_n} (\mathbf V_{1,j,k}-\field{E}(\mathbf V_{1,j,k}))B_j(t)\\=&\frac{1}{n}\sum_{j=1}^{J_n}B_j(t)\sum_{i=1}^{n-k}\mathbf A(\frac{i+k}{n})(\mathbf z_{i+k,n}\mathbf z^\top_{i,n}-\field{E}(\mathbf z_{i+k,n}\mathbf z^\top_{i,n})) \mathbf A^\top(\frac{i}{n})B_j(\frac{i}{n}). \end{align} Consider $\tilde { \mathbf M}_j=\frac{1}{n}\sum_{i=1}^{n-k}\mathbf A(\frac{i+k}{n})(\mathbf z_{i+k,n}\mathbf z^\top_{i,n}-\field{E}(\mathbf z_{i+k,n}\mathbf z^\top_{i,n})) \mathbf A^\top(\frac{i}{n})B_j(\frac{i}{n})$. Its $(u,v)_{th}$, $1\leq u\leq p$, $1\leq v\leq p$ element is \begin{align} \tilde M_{j,u,v}=\frac{1}{n}\sum_{i=1}^{n-k}\sum_{u'=1}^d\sum_{v'=1}^d a_{uu'}(\frac{i+k}{n})( z_{i+k,u',n} z_{i,v',n}-\field{E}(z_{i+k,u',n}z_{i,v'n})) a_{vv'}(\frac{i}{n})B_j(\frac{i}{n}). \end{align} Therefore it follows from the triangle inequality and Lemma \ref{Lemma-Bound1} that, \begin{align}
\Big\|\tilde M_{j,u,v}\Big\|_{\mathcal L^2}\leq \frac{C\Delta_{Q,4,0}\sup_{j,t}|B_j(t)|}{n}\sum_{v'=1}^d\sum_{u'=1}^d\sqrt {\sum_{i=1}^{n-k}a^2_{uu'}(\frac{i+k}{n}) a^2_{vv'}(\frac{i}{n})} \end{align} for some sufficiently large constant $C$. Consequently by (C1) and Jensen's inequality, we get \begin{align}\label{tildeMF}
\field{E}\left(\|\tilde {\mathbf M}_j\|^2_F\right) &\leq \frac{C^2\Delta^2_{Q,4,0}\sup_{j,t}|B_j(t)|^2}{n^2}\sum_{u=1}^p\sum_{v=1}^p\left(\sum_{v'=1}^d\sum_{u'=1}^d\sqrt {\sum_{i=1}^{n-k}a^2_{uu'}(\frac{i+k}{n}) a^2_{vv'}(\frac{i}{n})}\right)^2\notag
\\&\leq\frac{C^2\Delta^2_{Q,4,0}d^2\sup_{j,t}|B_j(t)|^2}{n^2}\sum_{i=1}^{n-k}\sum_{u=1}^p\sum_{v=1}^p\sum_{v'=1}^d\sum_{u'=1}^da^2_{uu'}(\frac{i+k}{n}) a^2_{vv'}(\frac{i}{n})\notag\\&\asymp \frac{\Delta^2_{Q,4,0}d^4\sup_{j,t}|B_j(t)|^2p^{2-2\delta}}{n} \end{align} for $1\leq j\leq J_n$.
On the other hand, the $(u,v)_{th}$ element of $\tilde{\mathbf V}_1(t)$, denoted by $\tilde V_{1,u,v}(t)$, satisfies \begin{align} \tilde V_{1,u,v}(t)=\sum_{j=1}^{J_n}B_j(t)\tilde M_{j,u,v}. \end{align} Therefore by Jensen's inequality it follows that \begin{align}
\sup_{t\in [0,1]}\|\tilde{\mathbf V}_1(t)\|_F^2&=\sup_{t\in [0,1]}\sum_{u=1}^p\sum_{v=1}^p\left(\sum_{j=1}^{J_n}B_j(t)\tilde M_{j,u,v}\right)^2\notag\\
&\leq \sup_{t, 1\leq j\le J_n }|B_j(t)|^2\sum_{u=1}^p\sum_{v=1}^p(\sum_{j=1}^{J_n}|\tilde M_{j,u,v}|)^2\notag\\
&\leq \sup_{t, 1\leq j\le J_n}|B_j(t)|^2\sum_{u=1}^p\sum_{v=1}^pJ_n\sum_{j=1}^{J_n}|\tilde M_{j,u,v}|^2\notag\\
&\leq \sup_{t, 1\leq j\le J_n}|B_j(t)|^2J_n\sum_{j=1}^{J_n}\|\tilde {\mathbf M}_j\|^2_F. \end{align} Therefore we have \begin{align}
\field{E}(\sup_{t\in [0,1]}\|\tilde{\mathbf V}_1(t)\|_F^2)\leq \sup_{t, 1\leq j\le J_n}|B_j(t)|^2J_n\sum_{j=1}^{J_n}\field{E}(\|\tilde {\mathbf M}_j\|^2_F). \end{align} Combining this with \eqref{tildeMF} we have that \begin{align}\label{2019-Nov-25-1}
\field{E}(\sup_{t\in [0,1]}\|\tilde{\mathbf V}_1(t)\|_F^2)^{1/2}=O\left(\frac{J_n\sup_{t, 1\leq j\le J_n}|B_j(t)|^2p^{1-\delta}}{\sqrt n}\right). \end{align}
Similarly using Corollary \ref{Corol-Bound1} we have that \begin{align}\label{2019-Nov-25-2}
\field{E}\big(\sup_{t\in [0,1]}\|\tilde {\mathbf V}_s(t)\|_F^2\big)^{\frac{1}{2}}=O\left( \frac{J_n\sup_{t, 1\leq j\le J_n}|B_j(t)|^2p^{1-\delta/2}}{\sqrt n}\right), s=2,3, \end{align} and \begin{align}\label{2019-Nov-25-3}
\field{E}\big(\sup_{t\in [0,1]}\|\tilde {\mathbf V}_4(t)\|_F^2\big)^{\frac{1}{2}}=O\left(\frac{J_np\sup_{t, 1\leq j\le J_n}|B_j(t)|^2}{\sqrt n}\right). \end{align} Then the lemma follows from \eqref{2019-Nov-25-1}, \eqref{2019-Nov-25-2}, \eqref{2019-Nov-25-3} and triangle inequality.
$\Box$ \begin{lemma}\label{TildeSigma}
Under conditions (A1), (A2), (C1), (M1), (M2) and (M3) we have that
\begin{align*}
\sup_{t\in[0,1]}\|\field{E} \hat {\mathbf M}(J_n,t,k)-\mathbf \Sigma^*_k(t)\|_F=O(J_n\sup_{t,1\leq j\leq J_n}|B_j(t)|^2kp/n)
\end{align*}
where $\mathbf \Sigma^*_k(t)=\frac{1}{n}\sum_{j=1}^{J_n}\sum_{i=1}^{n}\field{E}(\mathbf G(\frac{i}{n},\mathcal{F}_{i+k})\mathbf G(\frac{i}{n},\mathcal{F}_{i})^\top)B_j(\frac{i}{n})B_j(t)$. \end{lemma} {\it Proof.} Consider the $(u,v)_{th}$ element of $(\field{E} \hat {\mathbf M}(J_n,t,k)-\mathbf \Sigma^*_k(t))_{u,v}$. By definition, we have for $1\leq u,v\leq p$, \begin{align} (\field{E} \hat {\mathbf M}(J_n,t,k)-\mathbf \Sigma^*_k(t))_{uv} &=\frac{1}{n}\sum_{j=1}^{J_n}\sum_{i=1}^{n-k}\field{E}\big(\big(G_u(\frac{i+k}{n},\mathcal{F}_{i+k})-G_u(\frac{i}{n},\mathcal{F}_{i+k})\big)G_v(\frac{i}{n},\mathcal{F}_i)\big)B_j(\frac{i}{n})B_j(t)\notag\\ &+\frac{1}{n}\sum_{j=1}^{J_n}\sum_{i=n-k+1}^{n}\field{E}\big(G_u(\frac{i}{n},\mathcal{F}_{i+k})G_v(\frac{i}{n},\mathcal{F}_i)\big)B_j(\frac{i}{n})B_j(t). \end{align}
By condition (M3), Lemma \ref{LS-G} and the proof of Corollary 3.1 in \cite{dette2020prediction} we have that uniformly for $1\leq u,v\leq p$,
\begin{align}\sup_{t\in [0,1]}|(\field{E} \hat {\mathbf M}(J_n,t,k)-\mathbf \Sigma^*_k(t))_{uv}|\leq M' J_n\sup_{t,1\leq j\leq J_n}|B_j(t)|^2k/n\end{align} for some sufficiently large constant $M'$ independent of $u$ and $v$. Therefore the lemma follows from the definition of the Frobenius norm.
$\Box$
\begin{lemma}\label{11-29-lemma5}
Let $\iota_n=\sup_{1\leq j\leq J_n}Lip_j+\sup_{t,1\leq j\leq J_n}|B_j(t)|$ where $Lip_j$ is the Lipschitz constant of the basis function $B_j(t)$. Then under conditions (A1), (A2), (C1), (M1)--(M3) we have that
\begin{align*}
\sup_{t\in [0,1]}\|\boldsymbol \Sigma_k^*(t)-\boldsymbol \Sigma_x(t,k)\|_F=O\Big(\frac{J_n\sup_{t,1\leq j\leq J_n} |B_j(t)|p\iota_n}{n}+pg_{J_n,K, \tilde M}\Big),
\end{align*}
where $\boldsymbol \Sigma_k^*$ is defined in Lemma \ref{TildeSigma}. \end{lemma} {\it Proof.} Notice that by definition we have that \begin{align} \boldsymbol \Sigma_k^*(t)=\frac{1}{n}\sum_{j=1}^{J_n}\sum_{i=1}^n\boldsymbol \Sigma_x(\frac{i}{n},k)B_j(\frac{i}{n})B_j(t). \end{align} Define $\tilde{\boldsymbol \Sigma}_k^*(t)=\sum_{j=1}^{J_n}\int_0^1 \boldsymbol \Sigma_x(s,k)B_j(s)dsB_j(t)$. Notice that the $(u,v)_{th}$ element of $\boldsymbol \Sigma_k^*(t)-\tilde {\boldsymbol \Sigma}_k^*(t)$ is \begin{align} (\boldsymbol \Sigma_k^*(t)-\tilde {\boldsymbol\Sigma}_k^*(t))_{u,v} =\sum_{j=1}^{J_n}\Big(\frac{1}{n}\sum_{i=1}^n \field{E}(G_u(\frac{i}{n},\mathcal{F}_{i+k})G_v(\frac{i}{n},\mathcal{F}_{i}))B_j(\frac{i}{n})\notag\\ -\int_0^1 \field{E}(G_u(s,\mathcal{F}_{i+k})G_v(s,\mathcal{F}_{i}))B_j(s)ds\Big)B_j(t). \end{align} Notice that Lemma \ref{LS-G} and Condition (A2) imply that there exists a sufficiently large constant $M'$ depending on $M_0$ of Lemma \ref{LS-G}, such that the Lipschitz constants of the functions \begin{align*} \field{E}(G_u(s,\mathcal{F}_{i+k})G_v(s,\mathcal{F}_{i}))B_j(s) \end{align*} are bounded by $M'\iota_n$ for all $1\leq k\leq k_0$, $1\leq u,v\leq p$. Then, using a similar argument to the proof of Lemma \ref{TildeSigma}, we obtain that \begin{align}
\sup_{t\in [0,1]}\|\boldsymbol\Sigma_k^*(t)-\tilde {\boldsymbol\Sigma}_k^*(t)\|_F=O\Big(\frac{J_n\sup_{t,1\leq j\leq J_n} |B_j(t)|p\iota_n}{n}\Big). \end{align} Similarly by using basis expansion \eqref{Basisapprox} in condition (A2) of the main article we have that \begin{align}
\sup_{t\in [0,1]}\|\tilde{\boldsymbol \Sigma}_k^*(t)-\boldsymbol \Sigma_x(t,k)\|_F=O(pg_{J_n,K,\tilde M}) \end{align} which completes the proof.
$\Box$
\section{Proof of Proposition \ref{prop1}, Theorem \ref{Boots-thm5} and Auxiliary Results}\label{Sec3proof}\setcounter{equation}{0} In this section we define $\hat {\mathbf w}_i$, $i=1,...,d$ as the orthonormal eigenvectors of $\sum_{k=1}^{k_0}\hat {\boldsymbol \Gamma}_k$ w.r.t.\ its largest $d$ eigenvalues: ($\lambda_1(\sum_{k=1}^{k_0}\hat {\boldsymbol \Gamma}_k)$,...,$\lambda_d(\sum_{k=1}^{k_0}\hat {\boldsymbol \Gamma}_k)$). Let $ \mathbf w_i$, $i=1,...,d$ be the orthonormal eigenvectors of $\sum_{k=1}^{k_0} \boldsymbol \Gamma_k$ with respect to its $d$ positive eigenvalues: ($\lambda_1(\sum_{k=1}^{k_0} \boldsymbol \Gamma_k)$,...,$\lambda_d(\sum_{k=1}^{k_0}\boldsymbol \Gamma_k)$). Let $\mathbf W=(\mathbf w_1,...,\mathbf w_d)$, $\mathbf F=(\mathbf f_1,...,\mathbf f_{p-d})$, $\hat {\mathbf W}=( \hat{\mathbf w}_1,...,\hat {\mathbf w}_d)$ and $\hat {\mathbf F}=(\hat {\mathbf f}_1,..., \hat {\mathbf f}_{p-d})$, where $\mathbf F$ and $\hat{\mathbf F}$ are bases of the null spaces of $\boldsymbol \Gamma$ and $\hat{\boldsymbol \Gamma}$, respectively. Therefore $((\mathbf w_i, 1\leq i\leq d), (\mathbf f_i, 1\leq i\leq p-d))$ and $((\hat {\mathbf w}_i, 1\leq i\leq d), (\hat {\mathbf f}_i, 1\leq i\leq p-d))$ are both orthonormal bases for $\mathbb R^p$. Recall that $\hat {\boldsymbol \Gamma}=\sum_{k=1}^{k_0}\hat {\boldsymbol \Gamma}_k$ and $ \boldsymbol \Gamma=\sum_{k=1}^{k_0} \boldsymbol \Gamma_k$ in Section \ref{sec:test_loading} of the main article.
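Concretely, the construction of $(\hat{\mathbf W},\hat{\mathbf F})$ from $\hat{\boldsymbol\Gamma}$ is a single symmetric eigendecomposition followed by a split of the eigenvectors. The following is a minimal numerical sketch under our own naming (not the authors' code); it assumes numpy and a symmetric input matrix:

```python
import numpy as np

def split_eigenspaces(gamma_hat, d):
    """Split R^p into span(w_hat), spanned by the eigenvectors of the
    d largest eigenvalues of the symmetric matrix gamma_hat, and the
    orthogonal complement span(f_hat)."""
    vals, vecs = np.linalg.eigh(gamma_hat)   # eigenvalues in ascending order
    vecs = vecs[:, ::-1]                     # reorder columns to descending eigenvalues
    w_hat = vecs[:, :d]                      # analogue of (w_1, ..., w_d)
    f_hat = vecs[:, d:]                      # orthonormal basis of the complement
    return w_hat, f_hat
```

Since $\hat{\boldsymbol\Gamma}=\sum_{k=1}^{k_0}\hat{\boldsymbol\Gamma}_k$ is symmetric positive semi-definite, `eigh` is the appropriate routine, and the columns of `w_hat` and `f_hat` together form an orthonormal basis of $\mathbb R^p$, mirroring the display above.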
Since we are testing the null hypothesis of static factor loadings, and in this section consider local alternatives under which $\mathbf A(\cdot)$ deviates only slightly from the null, we have assumed in the main article that $d(t)\equiv d$ for testing the static factor loadings, i.e., the number of factors is fixed. Therefore we assume that $\eta_n$ in conditions (C1) and (S) satisfies $\eta_n\geq \eta_0>0$ for some positive constant $\eta_0$, and that $\mathcal T_{\eta_n}=(0,1)$, which coincides with the discussion in Remark \ref{remarketa} of the main article.
\begin{corollary}\label{Jan-Corol1}
Assume conditions (A1), (A2), (C1), (M1)--(M3). Then
\begin{align}
\| \|\boldsymbol \Gamma-\hat {\boldsymbol\Gamma}\|_F\|_{\mathcal L^1}=O(\frac{p^{2-\delta}}{\sqrt n} )
\end{align} \end{corollary} {\it Proof.} It suffices to show uniformly for $1\leq k\leq k_0$, \begin{align}\label{gamma_k}
\|\|\boldsymbol \Gamma_k-\hat {\boldsymbol \Gamma}_k\|_F\|_{\mathcal L^1}=O(\frac{p^{2-\delta}}{\sqrt n} ). \end{align} By the proof of Lemma \ref{11-29-lemma3}, it follows that for $1\leq k\leq k_0$, \begin{align}\label{newF3}
\left\|\left\|\frac{1}{n}\sum_{i=1}^{n-k}\mathbf x_{i+k,n}\mathbf x_{i,n}^\top-\frac{1}{n}\field{E}(\sum_{i=1}^{n-k}\mathbf x_{i+k,n}\mathbf x_{i,n}^\top)\right\|_F\right\|_{\mathcal L^2}=O(\frac{p}{\sqrt n}). \end{align} By the proof of Lemma \ref{TildeSigma}, it follows that \begin{align}\label{Jan20-52}
\left\|\frac{1}{n}\sum_{i=1}^{n-k}\left(\field{E}(\mathbf x_{i+k,n}\mathbf x_{i,n}^\top)-\boldsymbol \Sigma_x(\frac{i}{n},k)\right)\right\|_F=O(\frac{p}{n}),\\
\left\|\frac{1}{n}\sum_{i=1}^{n-k}\boldsymbol \Sigma_x(\frac{i}{n},k)-\int_0^1\boldsymbol \Sigma_x(t,k)dt\right\|_F=O(\frac{p}{n}),\label{Jan20-54} \end{align} where to prove \eqref{Jan20-54} we have used Lemma \ref{LS-G}. Then by \eqref{newF3} to \eqref{Jan20-54} we have that \begin{align}
\left\|\frac{1}{n}\sum_{i=1}^{n-k}\mathbf x_{i+k,n}\mathbf x_{i,n}^\top-\int_0^1\boldsymbol \Sigma_x(t,k)dt\right\|_F=O(\frac{p}{\sqrt n}), \end{align} which together with \eqref{11-29-eq7} in the main article and the definition of $\boldsymbol \Gamma_k$ proves \eqref{gamma_k}. Therefore the corollary holds.
$\Box$ \begin{corollary}\label{Jan20-Corol2}
Assume conditions (A1), (A2), (C1), (M1), (M2), (M3) and conditions (S1)--(S4). Then there exist orthogonal matrices $\hat {\mathbf O}_3\in \mathbb R^{d\times d}$, $\hat {\mathbf O}_4\in \mathbb R^{(p-d)\times (p-d)}$ such that
\begin{align*}
\| \|\hat {\mathbf W} \hat{\mathbf O}_3-\mathbf W\|_F\|_{\mathcal L^1}=O(\frac{p^\delta}{\sqrt n}),\\
\| \|\hat {\mathbf F} \hat{\mathbf O}_4-\mathbf F\|_F\|_{\mathcal L^1}=O(\frac{p^\delta}{\sqrt n}).
\end{align*} \end{corollary} {\it Proof.} By \eqref{11-29-eq7} in the main article and the triangle inequality we have that \begin{align}\label{2020-7-13-1}
\sum_{k=1}^{k_0}\|(\int_0^1 \boldsymbol \Sigma_x(t,k)dt)(\int_0^1 \boldsymbol \Sigma_x(t,k)dt)^\top\|_F\asymp p^{2-2\delta}. \end{align} On the other hand, by Weyl's inequality (\cite{bhatia1997matrix}) and (12) of \cite{lam2011estimation} we have that \begin{align}\label{2020-7-13-2}
\Big\|\sum_{k=1}^{k_0}\Big(\int_0^1 \boldsymbol \Sigma_x(t,k)dt\Big)\Big(\int_0^1 \boldsymbol \Sigma_x(t,k)dt\Big)^\top\Big\|_m\geq \Big(\inf_{t\in(0,1)}\|\boldsymbol \Sigma_x(t,1)\|_m\Big)^2. \end{align} By \eqref{2020-7-13-1}, \eqref{2020-7-13-2} and the proof of Theorem \ref{Space-Distance} we have that \begin{align}\label{2020-7-13-3} \lambda_d(\boldsymbol \Gamma)\asymp p^{2-2\delta}. \end{align} As a result, the corollary follows from Corollary \ref{Jan-Corol1}, \eqref{2020-7-13-3} and Theorem 1 of \cite{yu2015useful}. Details are omitted for the sake of brevity.
$\Box$
\noindent{\bf Proof of Proposition \ref{prop1}}. \\ {\it Proof.} The proposition follows from Corollaries \ref{Jan-Corol1} and \ref{Jan20-Corol2}, and the proof of Proposition \ref{hatdrate}.
$\Box$
\begin{corollary}\label{Corol4}
Assume that the conditions of Proposition \ref{prop1} hold. Then under the null hypothesis there exists an orthonormal basis $\{\mathbf f_i, 1\leq i\leq p-d\}$ of the null space of $\boldsymbol \Gamma$ such that
\begin{align}
\| \|\hat {\mathbf F}-\mathbf F\|_F\|_{\mathcal L^1}=O(\frac{p^\delta}{\sqrt n}),
\end{align}
where $\mathbf F=(\mathbf f_1,...,\mathbf f_{p-d})$. \end{corollary} {\it Proof.} Notice that by Corollary \ref{Jan20-Corol2}, there exists an orthonormal basis $\mathbf G=(\mathbf G_1,...,\mathbf G_{p-d})$ of the null space of $\boldsymbol \Gamma$ together with a $(p-d)\times (p-d)$ orthogonal matrix $\hat{\mathbf O}_4$ such that \begin{align*}
\| \|\hat {\mathbf F} \hat{\mathbf O}_4-\mathbf G\|_F\|_{\mathcal L^1}=O(\frac{p^\delta}{\sqrt n}). \end{align*} Take $\mathbf F=\mathbf G\hat{\mathbf O}^\top_4$ and the corollary is proved.
$\Box$
Recall in Section \ref{Sec:null-static} of the main article the $k_0 N_n(p-d)p$ dimensional vectors $\tilde {\boldsymbol l}_i$, which replace $\hat{\mathbf F}$ in $\hat {\boldsymbol l}_i$ of \eqref{hatz} with $ {\mathbf F}$. In the following proofs, define\begin{align} {\mathbf s}_{j,w_n}=\sum_{r=j}^{j+w_n-1}\tilde {\boldsymbol l}_r ~~~\text{for}~~ 1\leq j\leq m_n-w_n+1, ~~\text{and}~{\mathbf s}_{m_n}=\sum_{i=1}^{m_n}\tilde {\boldsymbol l}_i. \end{align} Notice that by conditions (S1) and (S3), $\tilde {\boldsymbol l}_i$, $1\leq i\leq m_n$ are mean-zero $k_0N_n(p-d)p$ dimensional vectors.
\noindent {\bf Proof of Theorem \ref{Boots-thm5}}
Define \begin{align} \boldsymbol \upsilon_n=\frac{1}{\sqrt{w_n(m_n-w_n+1)}}\sum_{j=1}^{m_n-w_n+1}( {\mathbf s}_{j,w_n}-\frac{w_n}{m_n} {\mathbf s}_{m_n})R_j. \end{align} We shall show the following two assertions: \begin{align}
\sup_{t\in \mathbb R}|\field{P}(|\mathbf y|_\infty\leq t)-\field{P}(|\boldsymbol \upsilon_n|_{\infty}\leq t| {\mathbf x}_{i,n},1\leq i\leq n)|=O_p(\Theta_n^{1/3}\log ^{2/3}(\frac{W_{n,p}}{\Theta_n})),\label{Junly14-S86}\\
\sup_{t\in \mathbb R }|\field{P}(|\boldsymbol \upsilon_n|_\infty\leq t|\mathbf x_{i,n},1\leq i\leq n)-\field{P}(|\boldsymbol \kappa_n|_\infty\leq t|\mathbf x_{i,n}, 1\leq i\leq n)|\notag\\=O_p((\frac{p^{\delta+1/2}\sqrt{w_n\log n}}{\sqrt n}(N_np)^{1/l})^{l/(l+1)}). \label{Junly15-S85} \end{align} The theorem then follows from \eqref{Junly14-S86}, \eqref{Junly15-S85} and Theorem \ref{Jan23-Thm4}. In the remaining proofs we abbreviate conditioning on $\{\mathbf x_{i,n}, 1\leq i\leq n\}$ as conditioning on $\mathbf x_{i,n}$ when no confusion arises.
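As a computational aside, the bootstrap vector $\boldsymbol\upsilon_n$ defined above lends itself to a vectorized implementation via prefix sums. The sketch below is our own hypothetical code (numpy assumed); `l_tilde` stacks the vectors $\tilde{\boldsymbol l}_i$ as rows, and the function returns one conditional draw of $|\boldsymbol\upsilon_n|_\infty$:

```python
import numpy as np

def upsilon_inf(l_tilde, w_n, rng):
    """One Gaussian-multiplier bootstrap draw of |upsilon_n|_inf.

    l_tilde : (m_n, q) array; row i plays the role of the vector l~_{i+1}.
    w_n     : block length.
    """
    m_n, q = l_tilde.shape
    # prefix sums so that S[j] = sum_{r=j}^{j+w_n-1} l~_r (0-indexed blocks)
    csum = np.vstack([np.zeros((1, q)), np.cumsum(l_tilde, axis=0)])
    S = csum[w_n:] - csum[:-w_n]            # (m_n - w_n + 1, q) block sums s_{j,w_n}
    s_total = csum[-1]                      # the grand sum s_{m_n}
    R = rng.standard_normal(m_n - w_n + 1)  # i.i.d. standard normal multipliers R_j
    upsilon = (S - (w_n / m_n) * s_total).T @ R
    upsilon /= np.sqrt(w_n * (m_n - w_n + 1))
    return np.max(np.abs(upsilon))
```

Repeating this draw many times with fresh multipliers yields the conditional quantiles used for the test; note that only the multipliers $R_j$ are resampled, while the block sums are computed once.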
Step (i): Proof of \eqref{Junly14-S86}. To show \eqref{Junly14-S86}, we shall show that \begin{align}\label{S.94}
\|\max_{1\leq u,v\leq k_0N_np(p-d)}|\sigma^\upsilon_{u,v}-\sigma^Y_{u,v}|\|_{\mathcal L^{q^*}}=O\left(w_n^{-1}+\sqrt{w_n/m_n}W_{n,p}^{1/{q^*}}\right), \end{align} where $\sigma^\upsilon_{u,v}$ and $\sigma^Y_{u,v}$ are the $(u,v)_{th}$ entries of the conditional covariance matrix of $\boldsymbol\upsilon_n$ given $\tilde {\boldsymbol l}_i$ and of the covariance matrix of $\mathbf y$, respectively. Notice that \eqref{S.94} together with condition (d) implies that there exists a constant $\eta_0>0$ such that \begin{align}\label{S103-Nov-29} \field{P}(\max_{1\leq u,v\leq k_0N_np(p-d)}\sigma^\upsilon_{u,v}\geq \eta_0)\geq 1-O\left(\left(w_n^{-1}+\sqrt{w_n/m_n}W_{n,p}^{1/{q^*}}\right)^{q^*}\right). \end{align} Since by assumption $w_n^{-1}+\sqrt{w_n/m_n}W_{n,p}^{1/{q^*}}=o(1)$, it suffices to consider the conditional Gaussian approximation on the $\{\mathbf x_{i,n}\}$-measurable event $\{\max_{1\leq u,v\leq k_0N_np(p-d)}\sigma^\upsilon_{u,v}\geq \eta_0\}$. Then by the construction of $\mathbf y$ and Theorem 2 of \cite{chernozhukov2015comparison} (taking $a_p=\sqrt{2\log p}$ there), \eqref{Junly14-S86} will follow.
Now we prove \eqref{S.94}. Let $ S_{j,w_n,s}$ and $ S_{m_n,s}$ be the $s_{th}$ elements of the vectors $\mathbf s_{j,w_n}$ and $\mathbf s_{m_n}$, respectively. By our construction, we have \begin{align} \sigma_{u,v}^\upsilon=\frac{1}{w_n(m_n-w_n+1)}\left(\sum_{j=1}^{m_n-w_n+1}( S_{j,w_n,u}-\frac{w_n}{m_n} S_{m_n,u})( S_{j,w_n,v}-\frac{w_n}{m_n} S_{m_n,v})\right), \sigma_{u,v}^Y=\field{E}\Big(\frac{ S_{m_n,u} S_{m_n,v}}{m_n}\Big). \end{align} On the one hand, following the proof of Lemma 4 of \cite{zhou2013heteroscedasticity} using conditions (i), (ii) and Lemma 5 of \cite{zhou2010simultaneous} we have that \begin{align}
\max_{u,v}|\field{E}\sigma_{u,v}^\upsilon-\sigma_{u,v}^Y|=O\left(w_n^{-1}+\sqrt{\frac{w_n}{m_n}}\right).\label{Feb28-99} \end{align}
Now using (i), (ii), a similar argument to the proof of Lemma 1 of \cite{zhou2013heteroscedasticity}, the Cauchy-Schwarz inequality and the fact that $\max_{1\leq i\leq m_n}|X_i|^{q^*}\leq \sum_{i=1}^{ m_n}|X_i|^{q^*}$ for any random variables $\{X_i\}_{1\leq i\leq m_n}$, we obtain that \begin{align}
\bigg\|\max_{u,v}\bigg|\frac{1}{w_n(m_n-w_n+1)}\sum_{j=1}^{m_n-w_n+1} (S_{j,w_n,u} S_{j,w_n,v}-\field{E}( S_{j,w_n,u} S_{j,w_n,v}))\bigg|\bigg\|_{\mathcal L^{q^*}}\label{Feb28-100} =O(\sqrt{w_n/m_n}W_{n,p}^{1/{q^*}}),\\
\bigg\|\max_{u,v}\bigg|\frac{w_n}{m_n^2(m_n-w_n+1)}\sum_{j=1}^{m_n-w_n+1} (S_{m_n,u} S_{m_n,v}-\field{E}( S_{m_n,u} S_{m_n,v}))\bigg|\bigg\|_{\mathcal L^{q^*}} =O(\sqrt{w_n/m_n}W_{n,p}^{1/{q^*}}),\\
\bigg\|\max_{u,v}\bigg|\frac{1}{m_n(m_n-w_n+1)}\sum_{j=1}^{m_n-w_n+1} (S_{j,w_n,u} S_{m_n,v}- \field{E}(S_{j,w_n,u} S_{m_n,v}))\bigg|\bigg\|_{\mathcal L^{q^*}} =O(\sqrt{w_n/m_n}W_{n,p}^{1/{q^*}})\label{Feb28-101}.
\end{align} Combining \eqref{Feb28-100}-\eqref{Feb28-101} we have \begin{align}\label{S.101-new}
\|\max_{u,v}|\sigma_{u,v}^\upsilon-\field{E}\sigma_{u,v}^\upsilon|\|_{\mathcal L^{q^*}}=O(\sqrt{w_n/m_n}W_{n,p}^{1/{q^*}}). \end{align} Therefore \eqref{S.94} follows from \eqref{Feb28-99} and \eqref{S.101-new}.
Step (ii): Proof of \eqref{Junly15-S85}. It suffices to work on the event $\{\hat d_n=d\}$. We first show that for $\epsilon\in (0,\infty)$, \begin{align}\field{P}(|\boldsymbol \upsilon_n-\boldsymbol \kappa_n|_{\infty}\geq \epsilon|\mathbf x_{i,n})=O_p((\epsilon^{-1}\frac{p^{\delta+1/2}\sqrt{w_n}}{\sqrt n}(N_np)^{1/l})^l)\label{S111-Nov-29}.\end{align} We then show that for $\epsilon\in (0,\infty)$, \begin{align}
\sup_{t\in \mathbb R}\field{P}\big(\big||\boldsymbol \upsilon_n|_\infty-t\big|\leq \epsilon\big|\mathbf x_{i,n}\big)=O_p(\epsilon\sqrt{\log( n/\epsilon)} ).\label{S112-Nov-29} \end{align} Combining \eqref{S111-Nov-29} and \eqref{S112-Nov-29}, and following the argument of \eqref{S95_Nov29} to \eqref{S97-Nov-21} in the main article, we have \begin{align}\label{NewF39}
\sup_{t\in \mathbb R }|\field{P}(|\boldsymbol \upsilon_n|_\infty\leq t|\mathbf x_{i,n},1\leq i\leq n)-\field{P}(|\boldsymbol \kappa_n|_\infty\leq t|\mathbf x_{i,n}, 1\leq i\leq n)|\notag\\=O_p((\epsilon^{-1}\frac{p^{\delta+1/2}\sqrt{w_n}}{\sqrt n}(N_np)^{1/l})^l+\epsilon\sqrt{\log( n/\epsilon)}). \end{align} Taking $\epsilon=(\frac{p^{\delta+1/2}\sqrt{w_n}}{\sqrt n}(N_np)^{1/l})^{l/(l+1)}\log^{-1/(2l+2)} n$ yields \eqref{Junly15-S85}.
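The balancing step behind this choice of $\epsilon$ can be spelled out as follows (a sketch, with constants and the distinction between $\log n$ and $\log(n/\epsilon)$ suppressed). Writing $a_n=\frac{p^{\delta+1/2}\sqrt{w_n}}{\sqrt n}(N_np)^{1/l}$, the two terms in \eqref{NewF39} are balanced when

```latex
(\epsilon^{-1}a_n)^{l} \asymp \epsilon\sqrt{\log n}
\quad\Longleftrightarrow\quad
\epsilon^{l+1} \asymp a_n^{l}\,\log^{-1/2} n
\quad\Longleftrightarrow\quad
\epsilon \asymp a_n^{l/(l+1)}\,\log^{-1/(2l+2)} n ,
```

and substituting this $\epsilon$ into either term gives $\epsilon\sqrt{\log n}\asymp (a_n\sqrt{\log n})^{l/(l+1)}$, which is exactly the rate claimed in \eqref{Junly15-S85}.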
To show \eqref{S111-Nov-29} it suffices to prove that \begin{align}
\field{E}\big(\big|\frac{1}{\sqrt{w_n(m_n-w_n+1)}}\sum_{j=1}^{m_n-w_n+1}( \hat{\mathbf s}_{j,w_n}- {\mathbf s}_{j,w_n})R_j\big|^l_\infty\big|\mathbf x_{i,n}\big)^{1/l}=O_p(\frac{p^{\delta+1/2}\sqrt{w_n}}{\sqrt n}(N_np)^{1/l}),\label{July15-S87}\\
\field{E}\big(\big|\frac{1}{\sqrt{w_n(m_n-w_n+1)}}\sum_{j=1}^{m_n-w_n+1}\frac{w_n}{m_n}( \hat{\mathbf s}_{m_n}- {\mathbf s}_{m_n})R_j\big|^l_\infty|\mathbf x_{i,n}\big)^{1/l}=O_p(\frac{p^{\delta+1/2}\sqrt{w_n}}{\sqrt n}(N_np)^{1/l}).\label{July15-S88} \end{align} We now show \eqref{July15-S87}, and \eqref{July15-S88} follows mutatis mutandis. Define $\hat S_{j,w_n,r}$ and $S_{j,w_n,r}$ as the $r_{th}$ component of the $k_0N_n(p-d)p$ dimensional vectors $\hat {\mathbf s}_{j,w_n}$ and $\mathbf s_{j,w_n}$. Using the notation of proof of Theorem \ref{Jan23-Thm4}, it follows that \begin{align}\label{July15-S89}
&\big|\sum_{j=1}^{m_n-w_n+1}( \hat{\mathbf s}_{j,w_n}- {\mathbf s}_{j,w_n})R_j\big|_\infty\notag \\&=\max_{\substack{1\leq s_1\leq N_n-1,\\ 1\leq s_3\leq p,\\ 1\leq w\leq k_0}}\max_{1\leq s_2\leq p-d}\big|\sum_{j=1}^{m_n-w_n+1}( \sum_{i=j}^{j+w_n-1}(\hat {\mathbf f}_{s_2}^\top-\mathbf f_{s_2}^\top) \mathbf x_{s_1m_n+i+w,n}x_{s_1m_n+i,s_3})R_j\big|. \end{align} Notice that \begin{align}\label{July15-S90}
&\max_{1\leq s_2\leq p-d}\big|\sum_{j=1}^{m_n-w_n+1}( \sum_{i=j}^{j+w_n-1}(\hat {\mathbf f}_{s_2}^\top-\mathbf f_{s_2}^\top) \mathbf x_{s_1m_n+i+w,n}x_{s_1m_n+i,s_3})R_j\big|^2\notag
\\&\leq \sum_{1\leq s_2\leq p-d}\|\hat {\mathbf f}_{s_2}^\top-\mathbf f_{s_2}^\top\|_2^2\Big\|
\sum_{j=1}^{m_n-w_n+1} \sum_{i=j}^{j+w_n-1} \mathbf x_{s_1m_n+i+w,n}x_{s_1m_n+i,s_3}R_j\big\|_2^2\notag\\&=
\|\hat{\mathbf F}-\mathbf F\|_F^2\Big\|
\sum_{j=1}^{m_n-w_n+1} \sum_{i=j}^{j+w_n-1} \mathbf x_{s_1m_n+i+w,n}x_{s_1m_n+i,s_3}R_j\big\|_2^2. \end{align} Therefore, by using Corollary \ref{Corol4}, \eqref{July15-S89}, \eqref{July15-S90} we have \begin{align}\label{July-91}
\big|\sum_{j=1}^{m_n-w_n+1}( \hat{\mathbf s}_{j,w_n}- {\mathbf s}_{j,w_n})R_j\big|_\infty\leq \frac{p^\delta}{\sqrt n}\max_{\substack{1\leq s_1\leq N_n-1,\\ 1\leq s_3\leq p,\\ 1\leq w\leq k_0}}\Big\|
\sum_{j=1}^{m_n-w_n+1} \sum_{i=j}^{j+w_n-1} \mathbf x_{s_1m_n+i+w,n}x_{s_1m_n+i,s_3}R_j\big\|_2. \end{align}
Notice that, conditional on $\mathbf x_{i,n}$, $\sum_{j=1}^{m_n-w_n+1} \sum_{i=j}^{j+w_n-1} \mathbf x_{s_1m_n+i+w,n}x_{s_1m_n+i,s_3}R_j$ is a $p$ dimensional Gaussian random vector for $1\leq s_1\leq N_n-1$, $1\leq s_3\leq p$, $1\leq w\leq k_0$, whose extreme probabilistic behavior depends on its covariance structure. Therefore we shall study $$\big\|\sum_{j=1}^{m_n-w_n+1} \sum_{i=j}^{j+w_n-1} \mathbf x_{s_1m_n+i+w,n}x_{s_1m_n+i,s_3}\big\|_2.$$ For this purpose, for any random variable $Y$ write $(\field{E}(|Y|^l|\mathbf x_{i,n}))^{1/l}=\|Y\|_{\mathcal L^l,\mathbf x_{i,n}}$; then \begin{align}\label{July-92}
\notag&\field{E}\left(\max_{\substack{1\leq s_1\leq N_n-1,\\ 1\leq s_3\leq p,\\ 1\leq w\leq k_0}}\left(\Big\|
\sum_{j=1}^{m_n-w_n+1} \sum_{i=j}^{j+w_n-1} \mathbf x_{s_1m_n+i+w,n}x_{s_1m_n+i,s_3}R_j\big\|_2^2\right )^{l/2}\Big|\mathbf x_{i,n}\right)\\
&\leq \sum_{\substack{1\leq s_1\leq N_n-1,\\ 1\leq s_3\leq p,\\ 1\leq w\leq k_0}}\field{E}\left(\left(\sum_{s=1}^p\left(\sum_{j=1}^{m_n-w_n+1} \sum_{i=j}^{j+w_n-1} x_{s_1m_n+i+w,s}x_{s_1m_n+i,s_3}R_j\right)^2\right)^{l/2}\Big|\mathbf x_{i,n}\right)\notag\\&\leq \sum_{\substack{1\leq s_1\leq N_n-1,\\ 1\leq s_3\leq p,\\ 1\leq w\leq k_0}}\left(\sum_{s=1}^p\left\|\left(\sum_{j=1}^{m_n-w_n+1} \sum_{i=j}^{j+w_n-1} x_{s_1m_n+i+w,s}x_{s_1m_n+i,s_3}R_j\right)^2\right\|_{\mathcal L^{l/2},\mathbf x_{i,n}}\right)^{l/2}\notag\\&
= \sum_{\substack{1\leq s_1\leq N_n-1,\\ 1\leq s_3\leq p,\\ 1\leq w\leq k_0}}\left(\sum_{s=1}^p\left\|\left(\sum_{j=1}^{m_n-w_n+1} \sum_{i=j}^{j+w_n-1} x_{s_1m_n+i+w,s}x_{s_1m_n+i,s_3}R_j\right)\right\|^2_{\mathcal L^{l},\mathbf x_{i,n}}\right)^{l/2}. \end{align} By Burkholder inequality, conditions ($M'$), (M1) and (M3), we have that \begin{align}\label{July-93}
\left\|\left\|\left(\sum_{j=1}^{m_n-w_n+1} \sum_{i=j}^{j+w_n-1} x_{s_1m_n+i+w,s}x_{s_1m_n+i,s_3}R_j\right)\right\|_{\mathcal L^l,\mathbf x_{i,n}}\right\|_{\mathcal L^l}\lessapprox w_n\sqrt{m_n-w_n+1}. \end{align} Therefore by straightforward calculations and equations \eqref{July-91}--\eqref{July-93}, we obtain \eqref{July15-S87}. As a consequence, \eqref{Junly15-S85} follows. Finally \eqref{S112-Nov-29} follows from Corollary 1 of \cite{chernozhukov2015comparison} and \eqref{S103-Nov-29}, which completes the proof.
$\Box$ \section{Proof of Lemma \ref{LemmaC1} and Theorem \ref{Power}}\label{Proof-Power}\setcounter{equation}{0}
\noindent{\bf Proof of Lemma \ref{LemmaC1}}
Let $\mathbf f=(f_1,...,f_p)^\top$ be a $p$ dimensional vector such that $\|\mathbf f\|_F=1.$ Notice that by condition (f), there exists a constant $M_1$ such that \begin{align}\label{Dec-28-1}
\|\mathbf f^\top\mathbf H(t,\mathcal{F}_i)-\mathbf f^\top\mathbf H(t,\mathcal{F}_i^{(i-k)})\|_{\mathcal L^4}\leq 2\|\mathbf f^\top\mathbf H(t,\mathcal{F}_i)\|_{\mathcal L^4}\leq M_1. \end{align} On the other hand, by triangle inequality \begin{align}\label{Dec-28-2}
\|\mathbf f^\top\mathbf H(t,\mathcal{F}_i)-\mathbf f^\top\mathbf H(t,\mathcal{F}_i^{(i-k)})\|_{\mathcal L^4}
\leq \sum_{j=1}^p|f_j|\| H_j(t,\mathcal{F}_i)- H_j(t,\mathcal{F}_i^{(i-k)})\|_{\mathcal L^4}=O(\sqrt p\chi^k), \end{align} where the last equality is due to the Cauchy-Schwarz inequality and condition ($M'$). As a consequence, \eqref{Dec-28-1} and \eqref{Dec-28-2} lead to \begin{align}\label{Dec-28-3}
\sup_{0\leq t\leq 1}\|\mathbf f^\top\mathbf H(t,\mathcal{F}_i)-\mathbf f^\top\mathbf H(t,\mathcal{F}_i^{(i-k)})\|_{\mathcal L^4}=O(1\wedge \sqrt p\chi^k). \end{align} Notice that when $k=\lfloor a\log p\rfloor$ for some sufficiently large constant $a$, $\sqrt p\chi^k=O(1)$. Therefore by \eqref{Dec-28-3}, \begin{align*}
\sum_{k=1}^{\lfloor a\log p\rfloor }k\sup_{0\leq t\leq 1}\|\mathbf f^\top\mathbf H(t,\mathcal{F}_i)-\mathbf f^\top\mathbf H(t,\mathcal{F}_i^{(i-k)})\|_{\mathcal L^4}+\sum_{k=\lfloor a\log p\rfloor+1 }^\infty k\sup_{0\leq t\leq 1}\|\mathbf f^\top\mathbf H(t,\mathcal{F}_i)-\mathbf f^\top\mathbf H(t,\mathcal{F}_i^{(i-k)})\|_{\mathcal L^4}\\=O(\log ^2p)+O(\sum_{k=\lfloor a\log p\rfloor+1 }^\infty k\chi^k)=O(\log ^2p), \end{align*} which finishes the proof.
$\Box$
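The bound just established can be checked numerically: the quantity controlled above is dominated by $\sum_{k\geq 1} k\,(1\wedge \sqrt p\,\chi^k)$, which indeed grows like $\log^2 p$. A small sketch (illustrative only; the function name is ours, and the cutoff $a\log p$ is implicit in the minimum):

```python
import math

def weighted_tail_sum(p, chi, k_max=20000):
    """Evaluate sum_{k=1}^{k_max} k * min(1, sqrt(p) * chi**k), the sum
    bounded by O(log^2 p) in the proof above; chi must lie in (0, 1)."""
    return sum(k * min(1.0, math.sqrt(p) * chi ** k) for k in range(1, k_max + 1))
```

For fixed $\chi$, say $\chi=1/2$, the ratio of this sum to $\log^2 p$ stays bounded as $p$ grows over many orders of magnitude, consistent with the stated bound.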
Recall the quantities $\check {\boldsymbol l}_i$ and $\tilde {\boldsymbol l}_i$ defined in Section \ref{Sec4}, and $\hat {\boldsymbol l}_i$ in \eqref{new.eq11} in the main article. To prove the power result in Theorem \ref{Power} we need to evaluate the magnitude of $\field{E}\hat {\boldsymbol l}_i$ under the local alternatives, which is determined by $\field{E}\tilde{\boldsymbol l}_i$ and $\field{E}\check{\boldsymbol l}_i$. Define $\mathbf x_{i,n}^*=\mathbf A\mathbf z_{i,n}+\mathbf e_{i,n}$, and further define $\tilde {\boldsymbol l}_i^*$ by replacing $\mathbf x_{i,n}$ in $\tilde {\boldsymbol l}_i$ with $\mathbf x_{i,n}^*$, where under the local alternative \eqref{alternative_A} in the main article, $\mathbf x_{i,n}=\mathbf A\circ(\mathbf J+\rho_n\mathbf D(t))\mathbf z_{i,n}+\mathbf e_{i,n}.$ Notice that, under the considered alternative, $\mathbf y_i$ in fact preserves the auto-covariance structure of $\tilde {\boldsymbol l}_i^*$. Then we prove Theorem \ref{Power} by arguments similar to the proofs of Theorems \ref{Jan23-Thm4} and \ref{Boots-thm5} but under local alternatives. We first present the auxiliary Propositions \ref{PropG1} and \ref{powerprop}.
\begin{proposition} \label{PropG1} Write $\boldsymbol \Gamma(0)$ as $\boldsymbol \Gamma$. Under conditions of Theorem \ref{Power}, we have that
for each $n$ there exists an orthonormal basis $\mathbf F=(\mathbf f_{1},...,\mathbf f_{p-d})$ such that
\begin{align}\label{S102}
\|\mathbf F_n-\mathbf F\|_F=O(\rho_np^{(\delta-\delta_1)/2}),
\end{align}
where $\mathbf F$ is a basis of the null space of $\boldsymbol\Gamma$. Furthermore,
\begin{align}\label{S103}
\|\|\hat {\mathbf F}-\mathbf F_n\|_F\|_{\mathcal L^1}=O(\frac{p^\delta}{\sqrt n}),~~ \|\|\hat {\mathbf F}-\mathbf F\|_F\|_{\mathcal L^1}=O(\rho_np^{(\delta-\delta_1)/2}+\frac{p^\delta}{\sqrt n}).
\end{align} \end{proposition}
{\it Proof.} By the definitions of $\boldsymbol\Gamma$ and $\boldsymbol \Gamma_n$, it follows that $\|\boldsymbol\Gamma_n-\boldsymbol\Gamma\|_F=O(\rho_np^{\frac{1-\delta_1}{2}}p^{\frac{3(1-\delta)}{2}})$. Therefore by the proofs of Corollaries \ref{Jan20-Corol2} and \ref{Corol4}, for each $n$ there exists an orthonormal basis $\mathbf F=(\mathbf f_{1},...,\mathbf f_{p-d})$ such that \begin{align}\label{newC6}
\|\mathbf F_n-\mathbf F\|_F=O(\frac{\|\boldsymbol \Gamma_n-\boldsymbol \Gamma\|_F}{p^{2-2\delta}})=O(\rho_np^{(\delta-\delta_1)/2}), \end{align} which shows \eqref{S102}. By the similar argument to Corollary \ref{Jan-Corol1} and triangle inequality, we have that \begin{align}
\|\|\hat {\boldsymbol \Gamma}-\boldsymbol\Gamma_n\|_F\|_{\mathcal L^1} =O(\frac{p^\delta}{\sqrt n}p^{2-2\delta}),\\
\| \|\hat {\boldsymbol \Gamma}-\boldsymbol\Gamma\|_F\|_{\mathcal L^1}\leq \|\|\hat {\boldsymbol \Gamma}-\boldsymbol\Gamma_n\|_F\|_{\mathcal L^1}+ \|\| {\boldsymbol \Gamma}_n-\boldsymbol\Gamma\|_F\|_{\mathcal L^1}=O((\rho_np^{(\delta-\delta_1)/2}+\frac{p^\delta}{\sqrt n})p^{2-2\delta}), \end{align} which further by the argument of \eqref{newC6} yield that \begin{align}
\|\|\hat {\mathbf F}-\mathbf F_n\|_F\|_{\mathcal L^1}=O(\frac{\|\|\hat{\boldsymbol \Gamma}-\boldsymbol \Gamma_n\|_F\|_{\mathcal L^1}}{p^{2-2\delta}})=O(\frac{p^\delta}{\sqrt n}),\\
\|\|\hat {\mathbf F}-\mathbf F\|_F\|_{\mathcal L^1}=O(\frac{\|\|\hat{\boldsymbol \Gamma}-\boldsymbol \Gamma\|_F\|_{\mathcal L^1}}{p^{2-2\delta}})=O(\rho_np^{(\delta-\delta_1)/2}+\frac{p^\delta}{\sqrt n}). \end{align} Therefore \eqref{S103} holds.
$\Box$
\begin{proposition}\label{powerprop}
Under conditions of Theorem \ref{Power}, we have that \begin{align}\label{S.108}
|\field{E}\tilde{\boldsymbol l}_i|_{\infty}=O(\rho_np^{\frac{1-\delta_1}{2}}), ~|\field{E}\check{\boldsymbol l}_i|_{\infty}=O(\rho_np^{\frac{1-\delta_1}{2}}).
\end{align} \end{proposition} {\it Proof.} To show the first equation of \eqref{S.108}, notice that for $1\leq s_1\leq N_n-1$, $1\leq s_2\leq p-d$, $1\leq s_3\leq p$, $1\leq w\leq k_0$, the $(k_0(p-d)ps_1+(w-1)p(p-d)+(s_2-1)p+s_3)_{th}$ entry of $\field{E}\tilde{\boldsymbol l}_i$ is $\field{E}(\mathbf f_{s_2}^\top \mathbf x_{s_1m_n+i+w,n}x_{s_1m_n+i,s_3})$ where $x_{i,s_3}$ is the $(s_3)_{th}$ entry of $\mathbf x_{i,n}$. Then by the Cauchy-Schwarz inequality we have that there exists a constant $M$ such that uniformly for $1\leq s_1\leq N_n-1$, $1\leq s_2\leq p-d$, $1\leq s_3\leq p$, $1\leq w\leq k_0$,
\begin{align}\label{S.107}
|\field{E}(\mathbf f^\top_{s_2} &\mathbf x_{s_1m_n+i+w,n}x_{s_1m_n+i,s_3})|\notag\\
&=|\field{E}(\mathbf f^\top_{s_2}( \rho_n\mathbf D(t_{i,w,n})\circ \mathbf A \mathbf z_{s_1m_n+i+w,n}+\mathbf e_{s_1m_n+i+w,n})x_{s_1m_n+i,s_3})|\leq \rho_n M p^{\frac{1-\delta_1}{2}}, \\\notag &\text{where $t_{i,w,n}=(s_1m_n+i+w)/n$.} \end{align}
In the above argument we have used the fact that $\|\mathbf f_{u}\|_F=1$ for $1\leq u\leq p-d$. Therefore the first equation of \eqref{S.108} follows from \eqref{S.107}. Write $ \mathbf h_{s_2}=\mathbf f_{s_2}-\mathbf f_{s_2,n}$. Similarly to \eqref{S.107}, there exists a constant $M$ such that uniformly for $1\leq s_1\leq N_n-1$, $1\leq s_2\leq p-d$, $1\leq s_3\leq p$, $1\leq w\leq k_0$, \begin{align*}
& |\field{E}(\mathbf h^\top_{s_2} \mathbf x_{s_1m_n+i+w,n}x_{s_1m_n+i,s_3})|\notag\\
&=|\field{E}(\mathbf h^\top_{s_2}( \mathbf A(t_{i,w,n}) \mathbf z_{s_1m_n+i+w,n}+\mathbf e_{s_1m_n+i+w,n})x_{s_1m_n+i,s_3})|\\&\leq |\field{E}(\mathbf h^\top_{s_2}( \mathbf A(t_{i,w,n}) \mathbf z_{s_1m_n+i+w,n})|+|\field{E}(\mathbf h^\top_{s_2}\mathbf e_{s_1m_n+i+w,n}x_{s_1m_n+i,s_3})| \\&=O( \rho_n M p^{\frac{1-\delta_1}{2}}), \end{align*}
where the last equality is due to \eqref{S102}, condition (C1), condition (f), the Cauchy-Schwarz inequality and the submultiplicativity of the Frobenius norm. Therefore we have $|\field{E}\check{\boldsymbol l}_i-\field{E} \tilde{\boldsymbol l}_i|_{\infty}=O(\rho_np^{\frac{1-\delta_1}{2}})$. Together with the first equation of \eqref{S.108} we get \begin{align}
|\field{E}\check{\boldsymbol l}_i|_{\infty}=O(\rho_np^{\frac{1-\delta_1}{2}}), \end{align} and the proof is complete.
$\Box$
\noindent{\bf Proof of Theorem \ref{Power}.}
We first show (a).
A careful inspection of the proof of Proposition \ref{Jan23-Lemma6} shows that it still holds under the alternative hypothesis considered in Theorem \ref{Jan23-Thm4}. Recall $\hat T_n=\left|\frac{\sum_{i=1}^{m_n}\hat {\boldsymbol l}_i}{\sqrt{m_n}}\right|_\infty. $ Define
$$\tilde T_n=\left|\frac{\sum_{i=1}^{m_n}\tilde {\boldsymbol l}_i}{\sqrt{m_n}}\right|_\infty, ~~\tilde T^*_n=\left|\frac{\sum_{i=1}^{m_n}\tilde {\boldsymbol l}_i^*}{\sqrt{m_n}}\right|_\infty.$$ Using Proposition \ref{PropG1} and following the proof of Proposition \ref{Jan23-Lemma6} we have that for any sequence $g_n\rightarrow \infty$,
\begin{align}\label{S103-new}\field{P}(|\tilde T_n-\hat T_n|\geq g_n^2(\rho_np^{(\delta-\delta_1)/2}+\frac{p^\delta}{\sqrt n}){m^{1/2}_n}\Omega_n)=O(\frac{1}{g_n}+\frac{p^\delta}{\sqrt n}\log n).\end{align} On the other hand, similarly to the proof of Proposition \ref{Jan23-Lemma6}, we have that on the event $\{\hat d_n=d\}$, \begin{align}
|\tilde T_n-\tilde T^*_n|\leq\max_{\substack{1\leq s_1\leq N_n-1,\\ 1\leq s_3\leq p,\\ 1\leq w\leq k_0, 1\leq s_2\leq p-d}}\sum_{i=1}^{m_n}|\mathbf f_{s_2}^\top( \mathbf x_{s_1m_n+i+w,n}x_{s_1m_n+i,s_3}-\mathbf x^*_{s_1m_n+i+w,n}x^*_{s_1m_n+i,s_3})|/\sqrt{m_n}\notag\\
\leq \max_{\substack{1\leq s_1\leq N_n-1,\\ 1\leq s_3\leq p,\\ 1\leq w\leq k_0}} |\sum_{u=1}^p(\sum_{i=1}^{m_n}(x_{s_1m_n+i+w,u}x_{s_1m_n+i,s_3}-x^*_{s_1m_n+i+w,u}x^*_{s_1m_n+i,s_3}))^2|^{1/2}, \end{align} where $x_{i,j}^*$ denotes the $j_{th}$ entry of $\mathbf x_{i,n}^*$. Notice that \begin{align}\notag
\big\||\sum_{u=1}^p(\sum_{i=1}^{m_n}(x_{s_1m_n+i+w,u}x_{s_1m_n+i,s_3}-x^*_{s_1m_n+i+w,u}x^*_{s_1m_n+i,s_3}))^2|^{1/2}\big\|_{\mathcal L^l}\\=\big\|\sum_{u=1}^p(\sum_{i=1}^{m_n}x_{s_1m_n+i+w,u}x_{s_1m_n+i,s_3}-x^*_{s_1m_n+i+w,u}x^*_{s_1m_n+i,s_3})^2\big\|^{1/2}_{\mathcal L^{l/2}}, \end{align}
which can be further bounded by the sum of $$\big\|2\sum_{u=1}^p(\sum_{i=1}^{m_n}x_{s_1m_n+i+w,u}(x_{s_1m_n+i,s_3}-x^*_{s_1m_n+i,s_3}))^2\big\|^{1/2}_{\mathcal L^{l/2}}$$ and
$$\big\|2\sum_{u=1}^p(\sum_{i=1}^{m_n}x^*_{s_1m_n+i,s_3}(x_{s_1m_n+i+w,u}-x^*_{s_1m_n+i+w,u}))^2\big\|^{1/2}_{\mathcal L^{l/2}}.$$ Therefore, using the definition of $\mathbf x^*_{i,n}$, straightforward calculations show that \begin{align}\label{NewG15}
\big\||\sum_{u=1}^p(\sum_{i=1}^{m_n}(x_{s_1m_n+i+w,u}x_{s_1m_n+i,s_3}-x^*_{s_1m_n+i+w,u}x^*_{s_1m_n+i,s_3}))^2|^{1/2}\big\|_{\mathcal L^l}\lessapprox \sqrt{p}m_n\rho_n. \end{align} As a consequence, by the proof of Proposition \ref{Jan23-Lemma6}, for any sequence $g_n\rightarrow \infty$,
\begin{align}\label{S104-new}\field{P}(|\tilde T^*_n-\tilde T_n|\geq g_n\rho_n{m^{1/2}_n}\Omega_n)=O(\frac{1}{g_n}+\frac{p^\delta}{\sqrt n}\log n).\end{align}
Notice that $\mathbf y_i$ preserves the autocovariance structure of $\tilde{\boldsymbol l}_i^*$. Then it follows from \eqref{S103-new}, \eqref{NewG15} and the proof of Theorem \ref{Jan23-Thm4} that under the alternative hypothesis,
\begin{align}\label{S105-new}
&\sup_{t\in \mathbb R}|\field{P}(\hat T_n\leq t)-\field{P}(|\mathbf y|_{\infty}\leq t)|\notag\\&=O\left(\left((\rho_np^{\frac{\delta-\delta_1}{2}}+\frac{p^\delta}{\sqrt n}){\sqrt {m_n\log n}} \Omega_n\right)^{1/3}+ \left(\rho_n{\sqrt {m_n\log n}} \Omega_n\right)^{1/2} +\iota(m_n,k_0N_np(p-d),2l, (N_np^2)^{1/l})\right), \end{align} which proves \eqref{S98}.
We now show the divergence case in (a).
Define $\check{\mathbf T}=\frac{\sum_{i=1}^{m_n}\check {\boldsymbol l}_i}{\sqrt{m_n}}$ and notice that $\hat {\mathbf T}=\frac{\sum_{i=1}^{m_n}\hat {\boldsymbol l}_i}{\sqrt{m_n}}$. Following the proof of Proposition \ref{Jan23-Lemma6} we have that
\begin{align}\label{2020-Sep-4}|\check {\mathbf T}-\hat {\mathbf T}|_\infty=O_p(\frac{p^\delta}{\sqrt n}{m^{1/2}_n}\Omega_n).\end{align} Notice that for $1\leq s_1\leq N_n-1$, $1\leq s_2\leq p-d$, $1\leq s_3\leq p$, $1\leq w\leq k_0$, the $(k_0(p-d)ps_1+(w-1)p(p-d)+(s_2-1)p+s_3)_{th}$ entry of $\check{\boldsymbol l}_i$ is $\mathbf f_{s_2,n}^\top \mathbf x_{s_1m_n+i+w,n}x_{s_1m_n+i,s_3}$ where $x_{i,s_3}$ is the $(s_3)_{th}$ entry of $\mathbf x_{i,n}$. It is easy to see that
\begin{align}\label{S111-Sep10}|\check {\mathbf T}|_\infty\geq \frac{1}{\sqrt {m_n}}|\sum_{i=1}^{m_n}\mathbf f_{s_2,n}^\top \mathbf x_{s_1m_n+i+w,n}x_{s_1m_n+i,s_3}|,\end{align} where, by the definition of $\Delta_n$ and the conditions on it, the indices $s_1,s_2,s_3,w$ can be chosen such that \begin{align}\label{S112-Sep10}\frac{1}{\sqrt {m_n}}\sum_{i=1}^{m_n}\field{E}(\mathbf f_{s_2,n}^\top \mathbf x_{s_1m_n+i+w,n}x_{s_1m_n+i,s_3})\asymp \sqrt{m_n}\rho_np^{\frac{1-\delta_1}{2}}. \end{align} On the other hand, notice that \begin{align}\label{S113-Sep10} \frac{1}{ m_n}Var\Big(\sum_{i=1}^{m_n}(\mathbf f_{s_2,n}^\top \mathbf x_{s_1m_n+i+w,n}x_{s_1m_n+i,s_3})\Big)\notag \\=\frac{1}{m_n}\sum_{i_1,i_2}Cov(\mathbf f_{s_2,n}^\top \mathbf x_{s_1m_n+i_1+w,n}x_{s_1m_n+i_1,s_3},\mathbf f_{s_2,n}^\top \mathbf x_{s_1m_n+i_2+w,n}x_{s_1m_n+i_2,s_3}). \end{align} Consider the case when $i_1>i_2$. Now using condition (G) and that \begin{align*}\mathbf x_{s_1m_n+i_1+w,n}=\mathbf A(\frac{{s_1m_n+i_1+w}}{n})\mathbf z_{s_1m_n+i_1+w,n}+\mathbf e_{s_1m_n+i_1+w,n},\\ \mathbf x_{s_1m_n+i_2+w,n}=\mathbf A(\frac{{s_1m_n+i_2+w}}{n})\mathbf z_{s_1m_n+i_2+w,n}+\mathbf e_{s_1m_n+i_2+w,n},\end{align*} we have (let $i_1'=s_1m_n+i_1$, $i_2'=s_1m_n+i_2$ for short) \begin{align} Cov(\mathbf f_{s_2,n}^\top \mathbf x_{i_1'+w}x_{i_1',s_3},\mathbf f_{s_2,n}^\top \mathbf x_{i_2'+w}x_{i_2',s_3}) &=Cov(\mathbf f_{s_2,n}^\top \mathbf A(\frac{i_2'+w}{n})\mathbf z_{i_1'+w}x_{i_1',s_3},\mathbf f_{s_2,n}^\top \mathbf A(\frac{i_2'+w}{n}) \mathbf z_{i_2'+w}x_{i_2',s_3})\notag\\&+ Cov(\mathbf f_{s_2,n}^\top \mathbf A(\frac{i_2'+w}{n})\mathbf z_{i_1'+w}x_{i_1',s_3},\mathbf f_{s_2,n}^\top \mathbf e_{i_2'+w}x_{i_2',s_3}). \end{align} Notice that \eqref{S102} and condition (C1) lead to \begin{align}
\|\mathbf f_{s_2,n}^\top \mathbf A(\frac{k}{n})\|_F=O(\rho_np^\frac{1-\delta_1}{2}) \end{align} uniformly for $1\leq s_2\leq p-d$ and $1\leq k\leq n$. Therefore, \begin{align}\label{S115} &Cov(\mathbf f_{s_2,n}^\top \mathbf A(\frac{i_2'+w}{n})\mathbf z_{i_1'+w}x_{i_1',s_3},\mathbf f_{s_2,n}^\top \mathbf A(\frac{i_2'+w}{n})\mathbf z_{i_2'+w}x_{i_2',s_3})\notag \\&=
\|\mathbf f_{s_2,n}^\top \mathbf A(\frac{i_2'+w}{n})Cov(\mathbf z_{i_1'+w}x_{i_1',s_3},\mathbf z_{i_2'+w}x_{i_2',s_3})(\mathbf f_{s_2,n}^\top \mathbf A(\frac{i_2'+w}{n}))^\top\|_F
\notag \\&\leq \rho_n^2p^{1-\delta_1}\|Cov(\mathbf z_{i_1'+w}x_{i_1',s_3},\mathbf z_{i_2'+w}x_{i_2',s_3})\|_F. \end{align} On the other hand, using condition (G2) and condition (ii) of Theorem \ref{Boots-thm5}, by the proof of Lemma \ref{Lemma-Bound1} it follows that for $1\leq u\leq d$, $1\leq v\leq p$, $\Upsilon_{i,u,v}:=\Upsilon_{u,v}(\frac{i}{n},\mathcal{F}_i)=z_{i,u,n}x_{i+k,v,n}$ is a locally stationary process with associated dependence measures \begin{align} \delta_{\Upsilon_{u,v},2}(h)=O(((h+1)\log (h+1))^{-2}). \end{align} Therefore by Lemma 5 of \cite{zhou2010simultaneous}, uniformly for $1\leq i_1, i_2\leq m_n$ and $1\leq s_3\leq p$, \begin{align}
\|Cov(\mathbf z_{i_1'+w}x_{i_1',s_3},\mathbf z_{i_2'+w}x_{i_2',s_3})\|_F=O((|i_2'-i_1'|\log(|i_2'-i_1'|))^{-2}). \end{align} Together with \eqref{S115} we have \begin{align}\label{S119-Sep10} \frac{1}{m_n}\sum_{i_1',i_2'}Cov(\mathbf f_{s_2,n}^\top \mathbf A(\frac{i_2'+w}{n})\mathbf z_{i_1'+w}x_{i_1'},\mathbf f_{s_2,n}^\top \mathbf A(\frac{i_2'+w}{n})\mathbf z_{i_2'+w}x_{i_2'})=O(\rho_n^2p^{1-\delta_1}\vee 1). \end{align} By similar arguments using the proof of Lemma 5 of \cite{zhou2010simultaneous} and condition (G3) we have \begin{align}\label{S120-Sep10} \frac{1}{m_n}\sum_{i_1',i_2'}Cov(\mathbf f_{s_2,n}^\top \mathbf A(\frac{i_2'+w}{n})\mathbf z_{i_1'+w}x_{i_1'},\mathbf f_{s_2,n}^\top \mathbf e_{i_2'+w}x_{i_2'})=O(\rho_np^{\frac{1-\delta_1}{2}}\log ^2 p\vee 1). \end{align} By \eqref{S111-Sep10}--\eqref{S113-Sep10}, \eqref{S119-Sep10} and \eqref{S120-Sep10} we show that $\hat T_n$ diverges at the rate $\sqrt{m_n}\rho_np^{\frac{1-\delta_1}{2}}$, which proves (a).
\noindent To show (b), we define \begin{align} {\mathbf s^*}_{j,w_n}=\sum_{i=j}^{j+w_n-1}\tilde {\boldsymbol l}^*_i,~~{\mathbf s^*}_{m_n}=\sum_{i=1}^{m_n}\tilde {\boldsymbol l}^*_i ~~\text{for}~~ 1\leq j\leq m_n-w_n+1,\\ \check{\mathbf s}_{j,w_n}=\sum_{i=j}^{j+w_n-1}\check {\boldsymbol l}_i,~~\check{\mathbf s}_{m_n}=\sum_{i=1}^{m_n}\check {\boldsymbol l}_i ~~\text{for}~~ 1\leq j\leq m_n-w_n+1. \end{align} Then we define \begin{align} \boldsymbol \upsilon^*_n=\frac{1}{\sqrt{w_n(m_n-w_n+1)}}\sum_{j=1}^{m_n-w_n+1}( {\mathbf s}^*_{j,w_n}-\frac{w_n}{m_n} {\mathbf s}^*_{m_n})R_j,\\ \tilde{\boldsymbol \kappa}_n=\frac{1}{\sqrt{w_n(m_n-w_n+1)}}\sum_{j=1}^{m_n-w_n+1}( \tilde {\mathbf s}_{j,w_n}-\frac{w_n}{m_n} \tilde {\mathbf s}_{m_n})R_j.
\end{align}
Then by the proof of Theorem \ref{Boots-thm5} we have that \begin{align}\label{S112}
\sup_{t\in \mathbb R}|\field{P}(|\mathbf y|_{\infty}\leq t)-\field{P}(|\boldsymbol \upsilon^*_n|_{\infty}\leq t|{\mathbf x}_{i,n},1\leq i\leq n)|=O_p(\Theta_n^{1/3}\log ^{2/3}(\frac{W_{n,p}}{\Theta_n})). \end{align} Straightforward calculations using similar arguments to the proofs of step (ii) of Theorem \ref{Boots-thm5} and equation \eqref{S103} further show that under the considered local alternative hypothesis, \begin{align}
\|\||\boldsymbol \upsilon_n^*-\tilde {\boldsymbol \kappa}_n|_\infty\|_{\mathcal L^l,\mathbf x_{i,n}}\|_{\mathcal L^l}=O(\rho_n\sqrt{w_n}\Omega_n),\\
\|\||\boldsymbol \kappa_n-\tilde{\boldsymbol \kappa}_n|_\infty\|_{\mathcal L^l,\mathbf x_{i,n}}\|_{\mathcal L^l}=O((\frac{p^\delta}{\sqrt n}+\rho_np^{\frac{\delta-\delta_1}{2}})\sqrt{w_n}\Omega_n). \end{align}
Therefore \begin{align}\label{S113}
\| \| |\boldsymbol \upsilon_n^*-\boldsymbol \kappa_n|_\infty\|_{\mathcal L^l,\mathbf x_{i,n}}\|_{\mathcal L^l}=O((\frac{p^\delta}{\sqrt n}+\rho_n)\sqrt{w_n}\Omega_n). \end{align} As a result, by the arguments of \eqref{NewF39}, \eqref{NewC4} in the main article follows.
To evaluate the order of magnitude of $|{\boldsymbol \kappa}_n|_\infty$, notice that ${\boldsymbol \kappa}_n$ is a Gaussian vector given $\{\mathbf x_{i,n}, 1\leq i\leq n\}$. Therefore, almost surely, \begin{align}\label{NewG35}
\field{E}(|\boldsymbol \kappa_n|_\infty|\mathbf x_{i,n},1\leq i\leq n)\leq |(\sum_{j=1}^{m_n-w_n+1}(\hat {\mathbf s}_{j,w_n}-\frac{w_n}{m_n}\hat {\mathbf s}_n)^{\circ 2})^{\circ \frac{1}{2}}|_\infty\sqrt{2\log n}/\sqrt{w_n(m_n-w_n+1)}. \end{align} In the following we consider the event $\{\hat d_n=d\}$. Observe that \begin{align}
|(\sum_{j=1}^{m_n-w_n+1}(\hat {\mathbf s}_{j,w_n}-\frac{w_n}{m_n}\hat {\mathbf s}_n)^{\circ 2})^{\circ \frac{1}{2}}|_\infty \leq M_1|(\sum_{j=1}^{m_n-w_n+1}(\hat {\mathbf s}_{j,w_n}-\check {\mathbf s}_{j,w_n}-\frac{w_n}{m_n}(\hat {\mathbf s}_n-\check {\mathbf s}_n))^{\circ 2})^{\circ \frac{1}{2}}|_\infty \\
+M_1|(\sum_{j=1}^{m_n-w_n+1}(\check {\mathbf s}_{j,w_n}-\field{E} \check {\mathbf s}_{j,w_n}-\frac{w_n}{m_n}(\check {\mathbf s}_n-\field{E} \check {\mathbf s}_n))^{\circ 2})^{\circ \frac{1}{2}}|_\infty+M_1|(\sum_{j=1}^{m_n-w_n+1}(\field{E} \check {\mathbf s}_{j,w_n}-\frac{w_n}{m_n}\field{E}\check {\mathbf s}_n)^{\circ 2})^{\circ \frac{1}{2}}|_\infty\notag\\:=I+II+III\notag \end{align} for some large positive constant $M_1$, where $I$, $II$ and $III$ are defined in an obvious way.
For $I$, notice that similarly to the proof of Theorem \ref{Boots-thm5}, using Proposition \ref{PropG1} we have \begin{align} I/\sqrt{w_n(m_n-w_n+1)}=O_p(\frac{p^\delta}{\sqrt n}\sqrt{w_n}\Omega_n). \end{align}
For $II$, by \eqref{S119-Sep10}, \eqref{S120-Sep10} and a similar argument to the proof of Theorem \ref{Boots-thm5} applied to $\check {\mathbf s}_{j,w_n}$ and $\check {\mathbf s}_{n}$, we shall see that \begin{align} II/\sqrt{w_n(m_n-w_n+1)}=O_p(\rho_np^{\frac{1-\delta_1}{2}}\log ^2 p\vee\rho_n^2p^{1-\delta_1}\vee 1). \end{align} For $III$, by Proposition \ref{powerprop} \begin{align}\label{NewG39} III/\sqrt{w_n(m_n-w_n+1)}=O(\sqrt{w_n}\rho_np^{\frac{1-\delta_1}{2}}). \end{align}
Therefore from \eqref{NewG35} to \eqref{NewG39}, $|\boldsymbol \kappa_n|_\infty=O_p(\sqrt{w_n}\rho_np^{\frac{1-\delta_1}{2}}\sqrt{\log n})$, which completes the proof.
$\Box$
\end{document}
Let $X$ be the Klein bottle. One construction of the Klein bottle is as follows. Take the unit box $[0, 1] \times [0, 1]$. Identify one pair of opposite edges in the same orientation, and the other pair of opposite edges in the opposite orientation to obtain a quotient space. Use the Seifert-van Kampen theorem to find $\pi_1(X, x)$.
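A hedged sanity check of the expected answer (the function names and relation matrix below are our own illustration, not part of the exercise): the standard Seifert-van Kampen computation gives the presentation $\pi_1(X) = \langle a, b \mid abab^{-1}\rangle$. Abelianizing this presentation should recover $H_1(X) \cong \mathbb{Z} \oplus \mathbb{Z}/2$, and the Smith normal form of the relation matrix exhibits exactly that quotient.

```python
# Assumed answer: pi_1(Klein bottle) = <a, b | a b a b^{-1}>.
# The single relator has exponent sums (a: 1+1 = 2, b: 1-1 = 0), so the
# abelianization is Z^2 / <(2, 0)>; its Smith normal form shows this is
# Z/2 (+) Z, the known first homology of the Klein bottle.
from sympy import Matrix, ZZ
from sympy.matrices.normalforms import smith_normal_form

relations = Matrix([[2], [0]])   # rows: generators a, b; column: relator abab^-1
snf = smith_normal_form(relations, domain=ZZ)
# snf has invariant factor 2, i.e. the quotient is Z/2 (+) Z
```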
Grothendieck's relative point of view
Grothendieck's relative point of view is a heuristic applied in certain abstract mathematical situations, with a rough meaning of taking for consideration families of 'objects' explicitly depending on parameters, as the basic field of study, rather than a single such object. It is named after Alexander Grothendieck, who made extensive use of it in treating foundational aspects of algebraic geometry. Outside that field, it has been influential particularly on category theory and categorical logic.
In the usual formulation, the point of view treats, not objects X of a given category C, but morphisms
f: X → S
where S is a fixed object. This idea is made formal in the idea of the slice category of objects of C 'above' S. To move from one slice to another requires a base change; from a technical point of view base change becomes a major issue for the whole approach (see for example Beck–Chevalley conditions).
A base change 'along' a given morphism
g: T → S
is typically given by the fiber product, producing an object over T from one over S. The 'fiber' terminology is significant: the underlying heuristic is that X over S is a family of fibers, one for each 'point' of S; the fiber product is then the family over T whose fiber at each point of T is the fiber of X at that point's image in S. This set-theoretic language is certainly too naïve to fit the context required by algebraic geometry. It combines, though, with the use of the Yoneda lemma to replace the idea of a 'point' with that of treating an object, such as S, as 'as good as' the representable functor it sets up.
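At the naïve set level described above, base change along g: T → S can be written out directly (the finite sets and maps below are illustrative choices, not part of the article):

```python
# Set-level sketch of base change: an object over S is a map f: X -> S, and
# base change along g: T -> S is the fiber product X x_S T, an object over T
# via the projection (x, t) -> t.
def fiber_product(X, f, T, g):
    """X x_S T = {(x, t) : f(x) == g(t)}."""
    return [(x, t) for x in X for t in T if f(x) == g(t)]

# A toy family over S = {0, 1}: f sends three points of X to the two points of S.
X = ["a", "b", "c"]
f = {"a": 0, "b": 0, "c": 1}.get
T = ["t0", "t1"]
g = {"t0": 0, "t1": 1}.get            # the base change map g: T -> S

P = fiber_product(X, f, T, g)
# The fiber of P over a point t of T is exactly the fiber of f over g(t) in S.
fiber_over_t0 = [x for (x, t) in P if t == "t0"]
```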
The Grothendieck–Riemann–Roch theorem from about 1956 is usually cited as the key moment for the introduction of this circle of ideas. The more classical types of Riemann–Roch theorem are recovered in the case where S is a single point (i.e. the final object in the working category C). Using other S is a way to have versions of theorems 'with parameters', i.e. allowing for continuous variation, for which the 'frozen' version reduces the parameters to constants.
In other applications, this way of thinking has been used in topos theory, to clarify the role of set theory in foundational matters. Assuming that we do not have a commitment to one 'set theory' (all topoi are in some sense equally set theories for some intuitionistic logic), it is possible to state everything relative to some given set theory that acts as a base topos.
See also
• Fiber product of schemes
Laplace's equation
In mathematics and physics, Laplace's equation is a second-order partial differential equation named after Pierre-Simon Laplace, who first studied its properties. This is often written as
$\nabla ^{2}\!f=0$
or
$\Delta f=0,$
where $\Delta =\nabla \cdot \nabla =\nabla ^{2}$ is the Laplace operator,[note 1] $\nabla \cdot $ is the divergence operator (also symbolized "div"), $\nabla $ is the gradient operator (also symbolized "grad"), and $f(x,y,z)$ is a twice-differentiable real-valued function. The Laplace operator therefore maps a scalar function to another scalar function.
If the right-hand side is specified as a given function, $h(x,y,z)$, we have
$\Delta f=h.$
This is called Poisson's equation, a generalization of Laplace's equation. Laplace's equation and Poisson's equation are the simplest examples of elliptic partial differential equations. Laplace's equation is also a special case of the Helmholtz equation.
The general theory of solutions to Laplace's equation is known as potential theory. The twice continuously differentiable solutions of Laplace's equation are the harmonic functions,[1] which are important in multiple branches of physics, notably electrostatics, gravitation, and fluid dynamics. In the study of heat conduction, the Laplace equation is the steady-state heat equation.[2] In general, Laplace's equation describes situations of equilibrium, or those that do not depend explicitly on time.
Forms in different coordinate systems
In rectangular coordinates,[3]
$\nabla ^{2}f={\frac {\partial ^{2}f}{\partial x^{2}}}+{\frac {\partial ^{2}f}{\partial y^{2}}}+{\frac {\partial ^{2}f}{\partial z^{2}}}=0.$
In cylindrical coordinates,[3]
$\nabla ^{2}f={\frac {1}{r}}{\frac {\partial }{\partial r}}\left(r{\frac {\partial f}{\partial r}}\right)+{\frac {1}{r^{2}}}{\frac {\partial ^{2}f}{\partial \phi ^{2}}}+{\frac {\partial ^{2}f}{\partial z^{2}}}=0.$
In spherical coordinates, using the $(r,\theta ,\varphi )$ convention,[3]
$\nabla ^{2}f={\frac {1}{r^{2}}}{\frac {\partial }{\partial r}}\left(r^{2}{\frac {\partial f}{\partial r}}\right)+{\frac {1}{r^{2}\sin \theta }}{\frac {\partial }{\partial \theta }}\left(\sin \theta {\frac {\partial f}{\partial \theta }}\right)+{\frac {1}{r^{2}\sin ^{2}\theta }}{\frac {\partial ^{2}f}{\partial \varphi ^{2}}}=0.$
More generally, in arbitrary curvilinear coordinates (ξi),
$\nabla ^{2}f={\frac {\partial }{\partial \xi ^{j}}}\left({\frac {\partial f}{\partial \xi ^{k}}}g^{kj}\right)+{\frac {\partial f}{\partial \xi ^{j}}}g^{jm}\Gamma _{mn}^{n}=0,$
or
$\nabla ^{2}f={\frac {1}{\sqrt {|g|}}}{\frac {\partial }{\partial \xi ^{i}}}\!\left({\sqrt {|g|}}g^{ij}{\frac {\partial f}{\partial \xi ^{j}}}\right)=0,\qquad (g=\det\{g_{ij}\})$
where gij is the Euclidean metric tensor relative to the new coordinates and Γ denotes its Christoffel symbols.
Boundary conditions
See also: Boundary value problem
The Dirichlet problem for Laplace's equation consists of finding a solution φ on some domain D such that φ on the boundary of D is equal to some given function. Since the Laplace operator appears in the heat equation, one physical interpretation of this problem is as follows: fix the temperature on the boundary of the domain according to the given specification of the boundary condition. Allow heat to flow until a stationary state is reached in which the temperature at each point on the domain no longer changes. The temperature distribution in the interior will then be given by the solution to the corresponding Dirichlet problem.
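The heat-flow picture above can be sketched numerically (the grid size and sweep count here are illustrative choices): fix boundary "temperatures" from a known harmonic function, relax each interior point to the mean of its four neighbours, and the stationary state recovers the harmonic function in the interior.

```python
# Jacobi relaxation for the Dirichlet problem on the unit square.
# Boundary data is taken from the harmonic function u = x^2 - y^2, so the
# stationary state of the iteration should match it in the interior.
import numpy as np

n = 41
coords = np.linspace(0.0, 1.0, n)
X, Y = np.meshgrid(coords, coords, indexing="ij")
exact = X**2 - Y**2                       # a harmonic function supplying boundary values

u = np.zeros((n, n))
u[0, :], u[-1, :] = exact[0, :], exact[-1, :]
u[:, 0], u[:, -1] = exact[:, 0], exact[:, -1]

for _ in range(5000):                     # Jacobi sweeps: discrete mean value property
    u[1:-1, 1:-1] = 0.25 * (u[2:, 1:-1] + u[:-2, 1:-1] + u[1:-1, 2:] + u[1:-1, :-2])

max_err = np.abs(u - exact).max()         # interior converges to the harmonic function
```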
The Neumann boundary conditions for Laplace's equation specify not the function φ itself on the boundary of D but its normal derivative. Physically, this corresponds to the construction of a potential for a vector field whose effect is known at the boundary of D alone. For the example of the heat equation it amounts to prescribing the heat flux through the boundary. In particular, at an adiabatic boundary, the normal derivative of φ is zero.
Solutions of Laplace's equation are called harmonic functions; they are all analytic within the domain where the equation is satisfied. If any two functions are solutions to Laplace's equation (or any linear homogeneous differential equation), their sum (or any linear combination) is also a solution. This property, called the principle of superposition, is very useful. For example, solutions to complex problems can be constructed by summing simple solutions.
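The superposition principle just stated can be illustrated symbolically (the particular harmonic functions chosen below are illustrative):

```python
# x^2 - y^2 and x*y each satisfy the two-dimensional Laplace equation, and
# so does an arbitrary linear combination a*(x^2 - y^2) + b*(x*y).
import sympy as sp

x, y, a, b = sp.symbols("x y a b", real=True)

def laplacian(f):
    return sp.simplify(sp.diff(f, x, 2) + sp.diff(f, y, 2))

u1, u2 = x**2 - y**2, x * y
lap_combo = laplacian(a * u1 + b * u2)    # vanishes identically
```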
In two dimensions
Laplace's equation in two independent variables in rectangular coordinates has the form
${\frac {\partial ^{2}\psi }{\partial x^{2}}}+{\frac {\partial ^{2}\psi }{\partial y^{2}}}\equiv \psi _{xx}+\psi _{yy}=0.$
Analytic functions
The real and imaginary parts of a complex analytic function both satisfy the Laplace equation. That is, if z = x + iy, and if
$f(z)=u(x,y)+iv(x,y),$
then the necessary condition that f(z) be analytic is that u and v be differentiable and that the Cauchy–Riemann equations be satisfied:
$u_{x}=v_{y},\quad v_{x}=-u_{y}.$
where ux is the first partial derivative of u with respect to x. It follows that
$u_{yy}=(-v_{x})_{y}=-(v_{y})_{x}=-(u_{x})_{x}.$
Therefore u satisfies the Laplace equation. A similar calculation shows that v also satisfies the Laplace equation. Conversely, given a harmonic function, it is the real part of an analytic function, f(z) (at least locally). If a trial form is
$f(z)=\varphi (x,y)+i\psi (x,y),$
then the Cauchy–Riemann equations will be satisfied if we set
$\psi _{x}=-\varphi _{y},\quad \psi _{y}=\varphi _{x}.$
This relation does not determine ψ, but only its increments:
$d\psi =-\varphi _{y}\,dx+\varphi _{x}\,dy.$
The Laplace equation for φ implies that the integrability condition for ψ is satisfied:
$\psi _{xy}=\psi _{yx},$
and thus ψ may be defined by a line integral. The integrability condition and Stokes' theorem implies that the value of the line integral connecting two points is independent of the path. The resulting pair of solutions of the Laplace equation are called conjugate harmonic functions. This construction is only valid locally, or provided that the path does not loop around a singularity. For example, if r and θ are polar coordinates and
$\varphi =\log r,$
then a corresponding analytic function is
$f(z)=\log z=\log r+i\theta .$
However, the angle θ is single-valued only in a region that does not enclose the origin.
The close connection between the Laplace equation and analytic functions implies that any solution of the Laplace equation has derivatives of all orders, and can be expanded in a power series, at least inside a circle that does not enclose a singularity. This is in sharp contrast to solutions of the wave equation, which generally have less regularity.
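The claims above can be checked symbolically for a concrete analytic function (the choice f(z) = z³ is an illustrative example): its real and imaginary parts satisfy both the Cauchy–Riemann equations and Laplace's equation, so they are conjugate harmonic functions.

```python
# For f(z) = z^3 with z = x + iy: u = x^3 - 3xy^2, v = 3x^2 y - y^3.
import sympy as sp

x, y = sp.symbols("x y", real=True)
f = sp.expand((x + sp.I * y) ** 3)
u, v = sp.re(f), sp.im(f)

cauchy_riemann = (sp.simplify(sp.diff(u, x) - sp.diff(v, y)),   # u_x = v_y
                  sp.simplify(sp.diff(u, y) + sp.diff(v, x)))   # u_y = -v_x
lap_u = sp.simplify(sp.diff(u, x, 2) + sp.diff(u, y, 2))
lap_v = sp.simplify(sp.diff(v, x, 2) + sp.diff(v, y, 2))
```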
There is an intimate connection between power series and Fourier series. If we expand a function f in a power series inside a circle of radius R, this means that
$f(z)=\sum _{n=0}^{\infty }c_{n}z^{n},$
with suitably defined coefficients whose real and imaginary parts are given by
$c_{n}=a_{n}+ib_{n}.$
Therefore
$f(z)=\sum _{n=0}^{\infty }\left[a_{n}r^{n}\cos n\theta -b_{n}r^{n}\sin n\theta \right]+i\sum _{n=1}^{\infty }\left[a_{n}r^{n}\sin n\theta +b_{n}r^{n}\cos n\theta \right],$
which is a Fourier series for f. These trigonometric functions can themselves be expanded, using multiple angle formulae.
Fluid flow
Let the quantities u and v be the horizontal and vertical components of the velocity field of a steady incompressible, irrotational flow in two dimensions. The continuity condition for an incompressible flow is that
$u_{x}+v_{y}=0,$
and the condition that the flow be irrotational is that
$\nabla \times \mathbf {V} =v_{x}-u_{y}=0.$
If we define the differential of a function ψ by
$d\psi =v\,dx-u\,dy,$
then the continuity condition is the integrability condition for this differential: the resulting function is called the stream function because it is constant along flow lines. The first derivatives of ψ are given by
$\psi _{x}=v,\quad \psi _{y}=-u,$
and the irrotationality condition implies that ψ satisfies the Laplace equation. The harmonic function φ that is conjugate to ψ is called the velocity potential. The Cauchy–Riemann equations imply that
$\varphi _{x}=-u,\quad \varphi _{y}=-v.$
Thus every analytic function corresponds to a steady incompressible, irrotational, inviscid fluid flow in the plane. The real part is the velocity potential, and the imaginary part is the stream function.
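The dictionary above can be verified for one concrete analytic function (taking f(z) = z², a plane stagnation-point flow, as an illustrative choice): the stream function ψ = 2xy yields an incompressible, irrotational velocity field, and φ = x² − y² is the conjugate velocity potential.

```python
# Stream function psi gives (v, -u) = (psi_x, psi_y); check continuity,
# irrotationality, and the potential relations phi_x = -u, phi_y = -v.
import sympy as sp

x, y = sp.symbols("x y", real=True)
psi = 2 * x * y                            # stream function (Im of z^2)
v = sp.diff(psi, x)                        # psi_x = v
u = -sp.diff(psi, y)                       # psi_y = -u

continuity = sp.simplify(sp.diff(u, x) + sp.diff(v, y))     # u_x + v_y = 0
vorticity = sp.simplify(sp.diff(v, x) - sp.diff(u, y))      # v_x - u_y = 0

phi = x**2 - y**2                          # velocity potential (Re of z^2)
potential_check = (sp.simplify(sp.diff(phi, x) + u),        # phi_x = -u
                   sp.simplify(sp.diff(phi, y) + v))        # phi_y = -v
```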
Electrostatics
According to Maxwell's equations, an electric field (u, v) in two space dimensions that is independent of time satisfies
$\nabla \times (u,v,0)=(v_{x}-u_{y}){\hat {\mathbf {k} }}=\mathbf {0} ,$
and
$\nabla \cdot (u,v)=\rho ,$
where ρ is the charge density. The first Maxwell equation is the integrability condition for the differential
$d\varphi =-u\,dx-v\,dy,$
so the electric potential φ may be constructed to satisfy
$\varphi _{x}=-u,\quad \varphi _{y}=-v.$
The second of Maxwell's equations then implies that
$\varphi _{xx}+\varphi _{yy}=-\rho ,$
which is the Poisson equation. The Laplace equation can be used in three-dimensional problems in electrostatics and fluid flow just as in two dimensions.
In three dimensions
Fundamental solution
A fundamental solution of Laplace's equation satisfies
$\Delta u=u_{xx}+u_{yy}+u_{zz}=-\delta (x-x',y-y',z-z'),$
where the Dirac delta function δ denotes a unit source concentrated at the point (x′, y′, z′). No function has this property: in fact it is a distribution rather than a function; but it can be thought of as a limit of functions whose integrals over space are unity, and whose support (the region where the function is non-zero) shrinks to a point (see weak solution). It is common to take a different sign convention for this equation than one typically does when defining fundamental solutions. This choice of sign is often convenient to work with because −Δ is a positive operator. The definition of the fundamental solution thus implies that, if the Laplacian of u is integrated over any volume that encloses the source point, then
$\iiint _{V}\nabla \cdot \nabla u\,dV=-1.$
The Laplace equation is unchanged under a rotation of coordinates, and hence we can expect that a fundamental solution may be obtained among solutions that only depend upon the distance r from the source point. If we choose the volume to be a ball of radius a around the source point, then Gauss' divergence theorem implies that
$-1=\iiint _{V}\nabla \cdot \nabla u\,dV=\iint _{S}{\frac {du}{dr}}\,dS=\left.4\pi a^{2}{\frac {du}{dr}}\right|_{r=a}.$
It follows that
${\frac {du}{dr}}=-{\frac {1}{4\pi r^{2}}},$
on a sphere of radius r that is centered on the source point, and hence
$u={\frac {1}{4\pi r}}.$
Note that, with the opposite sign convention (used in physics), this is the potential generated by a point particle, for an inverse-square law force, arising in the solution of Poisson equation. A similar argument shows that in two dimensions
$u=-{\frac {\log(r)}{2\pi }}.$
where log(r) denotes the natural logarithm. Note that, with the opposite sign convention, this is the potential generated by a pointlike sink (see point particle), which is the solution of the Euler equations in two-dimensional incompressible flow.
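Both fundamental solutions are harmonic away from the source point, which can be confirmed symbolically (a sketch; the positivity assumption on the symbols just keeps the square roots simple):

```python
# The Laplacians of 1/(4*pi*r) in 3D and -log(r)/(2*pi) in 2D vanish for r != 0.
import sympy as sp

x, y, z = sp.symbols("x y z", positive=True)

r3 = sp.sqrt(x**2 + y**2 + z**2)
u3 = 1 / (4 * sp.pi * r3)
lap3 = sp.simplify(sp.diff(u3, x, 2) + sp.diff(u3, y, 2) + sp.diff(u3, z, 2))

r2 = sp.sqrt(x**2 + y**2)
u2 = -sp.log(r2) / (2 * sp.pi)
lap2 = sp.simplify(sp.diff(u2, x, 2) + sp.diff(u2, y, 2))
```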
Green's function
A Green's function is a fundamental solution that also satisfies a suitable condition on the boundary S of a volume V. For instance,
$G(x,y,z;x',y',z')$
may satisfy
$\nabla \cdot \nabla G=-\delta (x-x',y-y',z-z')\qquad {\text{in }}V,$
$G=0\qquad {\text{if }}(x,y,z){\text{ is on }}S.$
Now if u is any solution of the Poisson equation in V:
$\nabla \cdot \nabla u=-f,$
and u assumes the boundary values g on S, then we may apply Green's identity, (a consequence of the divergence theorem) which states that
$\iiint _{V}\left[G\,\nabla \cdot \nabla u-u\,\nabla \cdot \nabla G\right]\,dV=\iiint _{V}\nabla \cdot \left[G\nabla u-u\nabla G\right]\,dV=\iint _{S}\left[Gu_{n}-uG_{n}\right]\,dS.\,$
The notations un and Gn denote normal derivatives on S. In view of the conditions satisfied by u and G, this result simplifies to
$u(x',y',z')=\iiint _{V}Gf\,dV+\iint _{S}G_{n}g\,dS.\,$
Thus the Green's function describes the influence at (x′, y′, z′) of the data f and g. For the case of the interior of a sphere of radius a, the Green's function may be obtained by means of a reflection (Sommerfeld 1949): the source point P at distance ρ from the center of the sphere is reflected along its radial line to a point P' that is at a distance
$\rho '={\frac {a^{2}}{\rho }}.\,$
Note that if P is inside the sphere, then P′ will be outside the sphere. The Green's function is then given by
${\frac {1}{4\pi R}}-{\frac {a}{4\pi \rho R'}},\,$
where R denotes the distance to the source point P and R′ denotes the distance to the reflected point P′. A consequence of this expression for the Green's function is the Poisson integral formula. Let ρ, θ, and φ be spherical coordinates for the source point P. Here θ denotes the angle with the vertical axis, which is contrary to the usual American mathematical notation, but agrees with standard European and physical practice. Then the solution of the Laplace equation with Dirichlet boundary values g inside the sphere is given by (Zachmanoglou 1986, p. 228)
$u(P)={\frac {1}{4\pi }}a^{3}\left(1-{\frac {\rho ^{2}}{a^{2}}}\right)\int _{0}^{2\pi }\int _{0}^{\pi }{\frac {g(\theta ',\varphi ')\sin \theta '}{(a^{2}+\rho ^{2}-2a\rho \cos \Theta )^{\frac {3}{2}}}}d\theta '\,d\varphi '$
where
$\cos \Theta =\cos \theta \cos \theta '+\sin \theta \sin \theta '\cos(\varphi -\varphi ')$
is the cosine of the angle between (θ, φ) and (θ′, φ′). A simple consequence of this formula is that if u is a harmonic function, then the value of u at the center of the sphere is the mean value of its values on the sphere. This mean value property immediately implies that a non-constant harmonic function cannot assume its maximum value at an interior point.
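The mean value property can be checked numerically in its two-dimensional analogue (the harmonic function, centre, and radius below are arbitrary illustrative choices): averaging a harmonic function over a circle reproduces its value at the centre.

```python
# Average u = x^2 - y^2 + 3x over a circle and compare with the centre value.
import numpy as np

def u(px, py):
    return px**2 - py**2 + 3.0 * px        # harmonic in the plane

cx, cy, radius = 0.7, -0.2, 0.5
theta = np.linspace(0.0, 2.0 * np.pi, 10000, endpoint=False)
mean_on_circle = u(cx + radius * np.cos(theta), cy + radius * np.sin(theta)).mean()
centre_value = u(cx, cy)
```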
Laplace's spherical harmonics
Main article: Spherical harmonics § Laplace's spherical harmonics
Laplace's equation in spherical coordinates is:[4]
$\nabla ^{2}f={\frac {1}{r^{2}}}{\frac {\partial }{\partial r}}\left(r^{2}{\frac {\partial f}{\partial r}}\right)+{\frac {1}{r^{2}\sin \theta }}{\frac {\partial }{\partial \theta }}\left(\sin \theta {\frac {\partial f}{\partial \theta }}\right)+{\frac {1}{r^{2}\sin ^{2}\theta }}{\frac {\partial ^{2}f}{\partial \varphi ^{2}}}=0.$
Consider the problem of finding solutions of the form f(r, θ, φ) = R(r) Y(θ, φ). By separation of variables, two differential equations result by imposing Laplace's equation:
${\frac {1}{R}}{\frac {d}{dr}}\left(r^{2}{\frac {dR}{dr}}\right)=\lambda ,\qquad {\frac {1}{Y}}{\frac {1}{\sin \theta }}{\frac {\partial }{\partial \theta }}\left(\sin \theta {\frac {\partial Y}{\partial \theta }}\right)+{\frac {1}{Y}}{\frac {1}{\sin ^{2}\theta }}{\frac {\partial ^{2}Y}{\partial \varphi ^{2}}}=-\lambda .$
The second equation can be simplified under the assumption that Y has the form Y(θ, φ) = Θ(θ) Φ(φ). Applying separation of variables again to the second equation gives way to the pair of differential equations
${\frac {1}{\Phi }}{\frac {d^{2}\Phi }{d\varphi ^{2}}}=-m^{2}$
$\lambda \sin ^{2}\theta +{\frac {\sin \theta }{\Theta }}{\frac {d}{d\theta }}\left(\sin \theta {\frac {d\Theta }{d\theta }}\right)=m^{2}$
for some number m. A priori, m is a complex constant, but because Φ must be a periodic function whose period evenly divides 2π, m is necessarily an integer and Φ is a linear combination of the complex exponentials e±imφ. The solution function Y(θ, φ) is regular at the poles of the sphere, where θ = 0, π. Imposing this regularity in the solution Θ of the second equation at the boundary points of the domain is a Sturm–Liouville problem that forces the parameter λ to be of the form λ = ℓ (ℓ + 1) for some non-negative integer ℓ with ℓ ≥ |m|; this is also explained below in terms of the orbital angular momentum. Furthermore, a change of variables t = cos θ transforms this equation into the Legendre equation, whose solution is a multiple of the associated Legendre polynomial Pℓm(cos θ). Finally, the equation for R has solutions of the form R(r) = A rℓ + B r−ℓ − 1; requiring the solution to be regular throughout R3 forces B = 0.[note 2]
Here the solution was assumed to have the special form Y(θ, φ) = Θ(θ) Φ(φ). For a given value of ℓ, there are 2ℓ + 1 independent solutions of this form, one for each integer m with −ℓ ≤ m ≤ ℓ. These angular solutions are a product of trigonometric functions, here represented as a complex exponential, and associated Legendre polynomials:
$Y_{\ell }^{m}(\theta ,\varphi )=Ne^{im\varphi }P_{\ell }^{m}(\cos {\theta })$
which fulfill
$r^{2}\nabla ^{2}Y_{\ell }^{m}(\theta ,\varphi )=-\ell (\ell +1)Y_{\ell }^{m}(\theta ,\varphi ).$
Here Yℓm is called a spherical harmonic function of degree ℓ and order m, Pℓm is an associated Legendre polynomial, N is a normalization constant, and θ and φ represent colatitude and longitude, respectively. In particular, the colatitude θ, or polar angle, ranges from 0 at the North Pole, to π/2 at the Equator, to π at the South Pole, and the longitude φ, or azimuth, may assume all values with 0 ≤ φ < 2π. For a fixed integer ℓ, every solution Y(θ, φ) of the eigenvalue problem
$r^{2}\nabla ^{2}Y=-\ell (\ell +1)Y$
is a linear combination of Yℓm. In fact, for any such solution, rℓ Y(θ, φ) is the expression in spherical coordinates of a homogeneous polynomial that is harmonic (see below), and so counting dimensions shows that there are 2ℓ + 1 linearly independent such polynomials.
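The dimension count just stated can be verified directly (a sketch; the helper function below is our own illustration): the Laplacian maps homogeneous degree-ℓ polynomials in three variables onto those of degree ℓ − 2, and its kernel, the harmonic homogeneous polynomials, has dimension 2ℓ + 1.

```python
# Count harmonic homogeneous polynomials of degree l in (x, y, z) as the
# kernel of the Laplacian, viewed as a linear map between monomial bases.
import sympy as sp

x, y, z = sp.symbols("x y z")

def harmonic_dim(l):
    mons = [x**a * y**b * z**(l - a - b) for a in range(l + 1) for b in range(l + 1 - a)]
    if l < 2:
        return len(mons)                  # the Laplacian kills everything of degree < 2
    targets = [x**a * y**b * z**(l - 2 - a - b)
               for a in range(l - 1) for b in range(l - 1 - a)]
    lap = lambda f: sp.expand(sp.diff(f, x, 2) + sp.diff(f, y, 2) + sp.diff(f, z, 2))
    M = sp.Matrix([[sp.Poly(lap(m), x, y, z).coeff_monomial(t) for t in targets]
                   for m in mons])
    return len(mons) - M.rank()           # dimension of the kernel

dims = [harmonic_dim(l) for l in range(5)]   # 2l + 1 for each l
```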
The general solution to Laplace's equation in a ball centered at the origin is a linear combination of the spherical harmonic functions multiplied by the appropriate scale factor rℓ,
$f(r,\theta ,\varphi )=\sum _{\ell =0}^{\infty }\sum _{m=-\ell }^{\ell }f_{\ell }^{m}r^{\ell }Y_{\ell }^{m}(\theta ,\varphi ),$
where the fℓm are constants and the factors rℓ Yℓm are known as solid harmonics. Such an expansion is valid in the ball
$r<R={\frac {1}{\limsup _{\ell \to \infty }|f_{\ell }^{m}|^{{1}/{\ell }}}}.$
For $r>R$, the solid harmonics with negative powers of $r$ are chosen instead. In that case, one needs to expand the solution of known regions in Laurent series (about $r=\infty $), instead of Taylor series (about $r=0$), to match the terms and find $f_{\ell }^{m}$.
Electrostatics
Let $\mathbf {E} $ be the electric field, $\rho $ be the electric charge density, and $\varepsilon _{0}$ be the permittivity of free space. Then Gauss's law for electricity (Maxwell's first equation) in differential form states[5]
$\nabla \cdot \mathbf {E} ={\frac {\rho }{\varepsilon _{0}}}.$
Now, the electric field can be expressed as the negative gradient of the electric potential $V$,
$\mathbf {E} =-\nabla V,$
if the field is irrotational, $\nabla \times \mathbf {E} =\mathbf {0} $. The irrotationality of $\mathbf {E} $ is also known as the electrostatic condition.[5]
$\nabla \cdot \mathbf {E} =\nabla \cdot (-\nabla V)=-\nabla ^{2}V$
$\nabla ^{2}V=-\nabla \cdot \mathbf {E} $
Plugging this relation into Gauss's law, we obtain Poisson's equation for electricity,[5]
$\nabla ^{2}V=-{\frac {\rho }{\varepsilon _{0}}}.$
In the particular case of a source-free region, $\rho =0$ and Poisson's equation reduces to Laplace's equation for the electric potential.[5]
If the electrostatic potential $V$ is specified on the boundary of a region ${\mathcal {R}}$, then it is uniquely determined. If ${\mathcal {R}}$ is surrounded by a conducting material with a specified charge density $\rho $, and if the total charge $Q$ is known, then $V$ is also unique.[6]
A potential that does not satisfy both Laplace's equation and the boundary condition is not a valid electrostatic potential.
Gravitation
Let $\mathbf {g} $ be the gravitational field, $\rho $ the mass density, and $G$ the gravitational constant. Then Gauss's law for gravitation in differential form is[7]
$\nabla \cdot \mathbf {g} =-4\pi G\rho .$
The gravitational field is conservative and can therefore be expressed as the negative gradient of the gravitational potential:
${\begin{aligned}\mathbf {g} &=-\nabla V,\\\nabla \cdot \mathbf {g} &=\nabla \cdot (-\nabla V)=-\nabla ^{2}V,\\\implies \nabla ^{2}V&=-\nabla \cdot \mathbf {g} .\end{aligned}}$
Using the differential form of Gauss's law of gravitation, we have
$\nabla ^{2}V=4\pi G\rho ,$
which is Poisson's equation for gravitational fields.[7]
In empty space, $\rho =0$ and we have
$\nabla ^{2}V=0,$
which is Laplace's equation for gravitational fields.
In the Schwarzschild metric
S. Persides[8] solved the Laplace equation in Schwarzschild spacetime on hypersurfaces of constant t. Using the canonical variables r, θ, φ the solution is
$\Psi (r,\theta ,\varphi )=R(r)Y_{l}(\theta ,\varphi ),$
where Yl(θ, φ) is a spherical harmonic function, and
$R(r)=(-1)^{l}{\frac {(l!)^{2}r_{s}^{l}}{(2l)!}}P_{l}\left(1-{\frac {2r}{r_{s}}}\right)+(-1)^{l+1}{\frac {2(2l+1)!}{(l!)^{2}r_{s}^{l+1}}}Q_{l}\left(1-{\frac {2r}{r_{s}}}\right).$
Here Pl and Ql are Legendre functions of the first and second kind, respectively, while rs is the Schwarzschild radius. The parameter l is an arbitrary non-negative integer.
See also
• 6-sphere coordinates, a coordinate system under which Laplace's equation becomes R-separable
• Helmholtz equation, a general case of Laplace's equation.
• Spherical harmonic
• Quadrature domains
• Potential theory
• Potential flow
• Bateman transform
• Earnshaw's theorem uses the Laplace equation to show that stable static ferromagnetic suspension is impossible
• Vector Laplacian
• Fundamental solution
Notes
1. The delta symbol, Δ, is also commonly used to represent a finite change in some quantity, for example, $\Delta x=x_{1}-x_{2}$. Its use to represent the Laplacian should not be confused with this use.
2. Physical applications often take the solution that vanishes at infinity, making A = 0. This does not affect the angular portion of the spherical harmonics.
References
1. Stewart, James. Calculus : Early Transcendentals. 7th ed., Brooks/Cole, Cengage Learning, 2012. Chapter 14: Partial Derivatives. p. 908. ISBN 978-0-538-49790-9.
2. Zill, Dennis G, and Michael R Cullen. Differential Equations with Boundary-Value Problems. 8th edition / ed., Brooks/Cole, Cengage Learning, 2013. Chapter 12: Boundary-value Problems in Rectangular Coordinates. p. 462. ISBN 978-1-111-82706-9.
3. Griffiths, David J. Introduction to Electrodynamics. 4th ed., Pearson, 2013. Inner front cover. ISBN 978-1-108-42041-9.
4. The approach to spherical harmonics taken here is found in (Courant & Hilbert 1966, §V.8, §VII.5).
5. Griffiths, David J. Introduction to Electrodynamics. 4th ed., Pearson, 2013. Chapter 2: Electrostatics. pp. 83–84. ISBN 978-1-108-42041-9.
6. Griffiths, David J. Introduction to Electrodynamics. 4th ed., Pearson, 2013. Chapter 3: Potentials. pp. 119–121. ISBN 978-1-108-42041-9.
7. Chicone, C.; Mashhoon, B. (2011-11-20). "Nonlocal Gravity: Modified Poisson's Equation". Journal of Mathematical Physics. 53 (4): 042501. arXiv:1111.4702. doi:10.1063/1.3702449. S2CID 118707082.
8. Persides, S. (1973). "The Laplace and poisson equations in Schwarzschild's space-time". Journal of Mathematical Analysis and Applications. 43 (3): 571–578. Bibcode:1973JMAA...43..571P. doi:10.1016/0022-247X(73)90277-1.
Further reading
• Evans, L. C. (1998). Partial Differential Equations. Providence: American Mathematical Society. ISBN 978-0-8218-0772-9.
• Petrovsky, I. G. (1967). Partial Differential Equations. Philadelphia: W. B. Saunders.
• Polyanin, A. D. (2002). Handbook of Linear Partial Differential Equations for Engineers and Scientists. Boca Raton: Chapman & Hall/CRC Press. ISBN 978-1-58488-299-2.
• Sommerfeld, A. (1949). Partial Differential Equations in Physics. New York: Academic Press.
• Zachmanoglou, E. C.; Thoe, Dale W. (1986). Introduction to Partial Differential Equations with Applications. New York: Dover. ISBN 9780486652511.
External links
• "Laplace equation", Encyclopedia of Mathematics, EMS Press, 2001 [1994]
• Laplace Equation (particular solutions and boundary value problems) at EqWorld: The World of Mathematical Equations.
• Example initial-boundary value problems using Laplace's equation from exampleproblems.com.
• Weisstein, Eric W. "Laplace's Equation". MathWorld.
• Find out how boundary value problems governed by Laplace's equation may be solved numerically by boundary element method
\begin{document}
\title{Estimating the Variance of Measurement Errors in Running Variables of Sharp Regression Discontinuity Designs}
\author{Kota Mori \thanks{[email protected]}}
\begin{titlepage}
\maketitle
\begin{abstract} \noindent Estimation of a treatment effect by a regression discontinuity design faces a severe challenge when the running variable contains measurement errors since the errors smoothen the discontinuity on which the identification depends. The existing studies show that the variance of the measurement errors plays a vital role in both bias correction and identification under such situations.
However, the methodologies to estimate the variance from data are relatively undeveloped. This paper proposes two estimators for the variance of measurement errors of running variables of sharp regression discontinuity designs. The proposed estimators can be constructed merely from data of the observed running variable and treatment assignment, and do not require any other external source of information. \end{abstract} \thispagestyle{empty} \end{titlepage}
\section{Introduction}
Regression discontinuity design (RDD) is a frequently-used framework for estimating the causal effect of a binary treatment variable on an outcome measurement.
An RDD depends on a critical assumption that there exists a variable such that the treatment is assigned if and only if that variable exceeds a known threshold. A variable with this property is called a running variable.
Given an RDD framework, one compares the treated and untreated samples around the threshold of the running variable. Assuming that other covariates are continuously distributed at that point, those slightly above the threshold and those slightly below are arguably similar except that only the former receives the treatment. Therefore the difference in the outcome measurement between the two is attributable to the impact of treatment.
Identification using an RDD faces a challenge when the observed running variable contains measurement errors. Theoretically, even a small magnitude of measurement error would invalidate the estimation of the treatment effect based on an RDD. This is because the measurement errors smooth out the discontinuity of the assignment at the threshold, which breaks the RDD assumption.
Note that an RDD with a mismeasured running variable does not form a fuzzy RDD; a fuzzy RDD assumes that the assignment probability is discontinuous at a threshold, while measurement errors of the running variable smoothen the discontinuity.
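This smoothing effect is easy to see in a toy simulation (an illustration only; the Gaussian distributions, cutoff, and bin width below are arbitrary choices): under the true running variable, the treated share jumps from 0 to 1 at the cutoff, whereas under the mismeasured variable it is strictly between 0 and 1 near the cutoff.

```python
import numpy as np

rng = np.random.default_rng(0)
n, c = 200_000, 0.0
x = rng.normal(size=n)             # true running variable
u = rng.normal(scale=0.3, size=n)  # measurement error
w = x + u                          # observed running variable
d = (x > c).astype(float)          # sharp assignment based on the true X

def share_treated(r, lo, hi):
    # fraction treated among observations with r in (lo, hi]
    m = (r > lo) & (r <= hi)
    return d[m].mean()

# Just below the cutoff: no one is treated when binning on X,
# but a sizable share is treated when binning on the noisy W.
below_x = share_treated(x, c - 0.1, c)
below_w = share_treated(w, c - 0.1, c)
print(below_x, below_w)
```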
\citet{Davezies2014} showed that the standard local polynomial regression yields a biased estimate for the treatment effect if the running variable is mismeasured. They then proposed an alternative estimator that is less susceptible to the measurement errors and examined the magnitude of the bias. \citet{Yanagi2014} also studied a similar estimator and proposed a method to alleviate the bias of the estimator. Finally, \citet{Pei2017} proposed a series of identification strategies that overcome measurement errors in the running variable.
These studies agree that the variance of the measurement errors plays an essential role in the bias correction as well as in the identification of the treatment effect. The analysis by \citet{Davezies2014} shows that their estimator would be more biased when the running variable contains measurement errors of a larger magnitude. \cites{Yanagi2014} bias correction approach requires that the variance be known from an external source. One of the estimators proposed by \citet{Pei2017} also utilizes external knowledge of the variance (see Approach 3 in \S 4.1). Despite its utility, only a handful of discussions have been devoted to how one can obtain or estimate the variance of the measurement errors. \citet{Yanagi2014} suggests that the variance can be estimated using auxiliary data that provide the accurate distribution of the running variable (but are not tied with the treatment assignment). If such data are available, the variance of the measurement errors can be estimated by subtracting the {\em true} variance of the running variable from the variance of the mismeasured running variable. Such auxiliary data, however, might not be available in many applications.
This paper proposes two estimators for the variance of the measurement errors. Neither estimator requires any additional source of information; the estimation only requires data of the observed running variable and treatment assignment, which are naturally available in virtually all RDD studies. The first estimator assumes that both the running variable and the measurement error follow the Gaussian distribution. Under this assumption, the conditional likelihood function has an analytic formula, which can be optimized efficiently by standard numerical methods. The second estimator relaxes the Gaussian assumption and allows both the running variable and measurement error to follow arbitrary distributions characterized by a finite number of parameters. Unlike the Gaussian case, the likelihood function under this assumption cannot be expressed by a simple formula, and its direct optimization is numerically unstable. Instead, the likelihood can be maximized by a variant of the expectation-maximization (EM) algorithm, which is computationally efficient and robust.
The results of the simulation experiments are also reported. All estimators successfully recover the true variance when the model assumption matches the data generation process. The estimators exhibit different degrees of robustness against misspecification. The methods have been implemented as a library for the R language \citep{RCoreTeam2017} and are freely available on the GitHub repository (\url{https://github.com/kota7/rddsigma}).
\section{Model}
Let $D \in \{0,1\}$ denote the binary variable that indicates the assignment of treatment and $X \in \mathbf{R}$ the running variable for $D$. Suppose that $X$ and $D$ form a sharp regression discontinuity design, {\it i.e.}, $D = \mathbf{1}\{X>c\}$, with a known constant $c$.
Assume that $X$ is only observed with an additive error: \[ W = X + U, \;\;\;\;\;\; X \indep U \] where $W$ is the observed running variable, for which data are available. We assume that $U$ is continuous, has a zero mean and a finite variance $\sigma^2$. Our goal is to estimate $\sigma$ using a random sample of $\{w_i, d_i\}_{i=1}^{n}$, where $w_i$ and $d_i$ represent the observations corresponding to $W$ and $D$ respectively.
\subsection{Gaussian-Gaussian Case}
Consider a case where both $X$ and $U$ follow the Gaussian distribution. The independence assumption of the two implies that they are jointly Gaussian. Therefore, their sum, $W$, is also Gaussian.
Let $\mathbf{E}(X) = \mu_x$ and $\mathrm{Var}(X) = \sigma_{x}^{2}$. Then, $\mathbf{E}(W) = \mu_x$ and $\mathrm{Var}(W) = \sigma_{x}^{2} + \sigma^2 \equiv \sigma_{w}^{2}$. By the property of the multivariate Gaussian distribution \citep[see \textit{e.g.},][\S 2.3.1]{Bishop2006}, the conditional distribution of $U$ given $W$ is also Gaussian and its parameters can be explicitly written as follows: \begin{equation}
\mathbf{E}(U | W) = \mu_{u|w} = \frac{\sigma^2}{\sigma_{w}^{2}}(W - \mu_x) \label{eq:condmean} \end{equation} and \begin{equation}
\mathrm{Var}(U | W) = \sigma_{u|w}^2 =
\left(1-\frac{\sigma^2}{\sigma_{w}^{2}} \right)\sigma^2. \label{eq:condvar} \end{equation}
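These conditional-moment formulas can be cross-checked by simulation: the population regression slope of $U$ on $W$ equals $\sigma^2/\sigma_w^2$, and the variance of the regression residual equals \eqref{eq:condvar}. The sketch below is an illustration only; the parameter values are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500_000
mu_x, sigma_x, sigma = 2.0, 1.0, 0.5
x = rng.normal(mu_x, sigma_x, n)
u = rng.normal(0.0, sigma, n)
w = x + u

sigma_w2 = sigma_x**2 + sigma**2
slope = np.cov(u, w)[0, 1] / np.var(w)       # empirical slope of E(U | W)
resid_var = np.var(u - slope * (w - mu_x))   # empirical Var(U | W)

print(slope, sigma**2 / sigma_w2)                        # both close to 0.2
print(resid_var, (1 - sigma**2 / sigma_w2) * sigma**2)   # both close to 0.2
```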
We can construct the conditional likelihood function using \eqref{eq:condmean} and \eqref{eq:condvar}. Consider $p(D | W; \theta)$, that is, the conditional distribution of $D$ given $W$, where $\theta = (\mu_x, \sigma_w, \sigma)$. Since $D = \mathbf{1}\{X>c\} = \mathbf{1}\{U < W - c\}$, we have
\[ p(D| W; \theta) =
\begin{cases}
\Phi((W - c - \mu_{u|w})/\sigma_{u|w}) & \text{if $D = 1$} \\
1 - \Phi((W - c - \mu_{u|w})/\sigma_{u|w}) & \text{if $D = 0$}
\end{cases} \]
where $\Phi$ is the cumulative distribution function of the standard Gaussian distribution.
Although the likelihood function depends on three parameters, $(\mu_x, \sigma_w, \sigma)$, the first two can be estimated separately by the sample mean and standard deviation of $W$. We can substitute these estimates into the likelihood function, and estimate $\sigma$ by the maximum likelihood. Notice that this estimation process is a two-step maximum likelihood, and hence the variance of estimators needs to be adjusted appropriately \citep{Murphy1985, Newey1994}.
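A minimal sketch of this two-step procedure follows (an illustration, not the paper's R implementation: the sample size, true parameter values, and use of scipy are my choices, and the second-step standard errors are not adjusted).

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import norm

rng = np.random.default_rng(0)
n, c = 20_000, 1.0
sigma_true = 0.5
x = rng.normal(0.0, 1.0, n)
u = rng.normal(0.0, sigma_true, n)
w, d = x + u, (x > c)

# Step 1: estimate mu_x and sigma_w from the marginal distribution of W
mu_x, sigma_w = w.mean(), w.std()

# Step 2: maximize the conditional likelihood of D given W over sigma.
# D = 1{X > c} = 1{U < W - c}, so P(D=1 | W) = Phi((W - c - mu_{u|w}) / sigma_{u|w}).
def neg_loglik(sigma):
    mu_uw = sigma**2 / sigma_w**2 * (w - mu_x)              # eq. (condmean)
    s_uw = np.sqrt((1 - sigma**2 / sigma_w**2) * sigma**2)  # eq. (condvar)
    z = (w - c - mu_uw) / s_uw
    return -(norm.logcdf(z)[d].sum() + norm.logsf(z)[~d].sum())

res = minimize_scalar(neg_loglik, bounds=(1e-3, sigma_w - 1e-3), method="bounded")
print(res.x)   # should be close to sigma_true
```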
\subsection{Non-Gaussian Case}
In this section, we relax the Gaussian assumption in the previous section. Assume instead that $X$ and $U$ follow some parametric distributions characterized by a finite number of parameters. Unlike the Gaussian case, we do not have an explicit expression for the conditional likelihood under this assumption in general. Instead, we consider the estimation using the marginal likelihood function.
Let $p_x$ and $p_u$ denote the probability density functions of $X$ and $U$ and suppose that they depend on parameters $\theta_x$ and $\theta_u$ respectively. We can write the full likelihood function for a pair $(W, D)$ as \[ \begin{split} \log p(W, D; \theta) &= D \log \int_{c}^{\infty} p_x(x; \theta_x) p_u(W-x; \theta_u) dx\\ + &(1-D) \log \int_{-\infty}^{c} p_x(x; \theta_x) p_u(W-x; \theta_u) dx \end{split} \]
Our objective is to maximize the sum of log-likelihood with respect to the parameters: \[ \hat{\theta} \equiv \argmax_\theta \sum_{i=1}^{n} \log p(w_i, d_i; \theta) \]
Due to the complex expressions inside integrals, the direct maximization of this objective function by numerical routines tends to be computationally demanding and unstable. Instead, we employ a variant of the expectation-maximization (EM) algorithm, which turns out to be computationally more efficient and robust. Define the Q-function as below. \[ \begin{split}
Q(\theta, \theta' | W, D) &=
D \int_{c}^{\infty} h(\theta', x | W, D=1)
\left( \log p_x(x; \theta_x) + \log p_u(W-x; \theta_u) \right) dx\\ +
&(1-D) \int_{-\infty}^{c} h(\theta', x | W, D=0)
\left( \log p_x(x; \theta_x) + \log p_u(W-x; \theta_u) \right) dx \end{split} \]
where the function $h$ is defined as \begin{align}
h(\theta, x | W, D=1) &=
\frac{p_x(x; \theta_x) p_u(W-x; \theta_u)}
{\int_{c}^{\infty} p_x(x; \theta_x) p_u(W-x; \theta_u) dx} \label{eq:h1}\\
h(\theta, x | W, D=0) &=
\frac{p_x(x; \theta_x) p_u(W-x; \theta_u)}
{\int_{\infty}^{c} p_x(x; \theta_x) p_u(W-x; \theta_u) dx}. \label{eq:h2} \end{align}
We can show that, for any $(W, D)$ and $\theta, \theta'$, \begin{equation} \label{eq:ineq}
\log p(W, D; \theta) - \log p(W, D; \theta') \ge Q(\theta, \theta' | W, D) - Q(\theta', \theta' | W, D). \end{equation} See the Appendix \ref{sec:proof} for the proof of this inequality.
The inequality \eqref{eq:ineq} motivates a variant of the EM algorithm where the parameters are updated so as to maximize the sum of the Q-functions: \begin{equation}
\theta^{(t+1)} \leftarrow \argmax_\theta \sum_{i=1}^{n} Q(\theta, \theta^{(t)} | w_i, d_i), \label{eq:maxstep} \end{equation} where $\theta^{(0)}$ is initialized outside the loop. By the inequality \eqref{eq:ineq}, the objective function increases monotonically along with the iterations, and hence converges to a local maximum provided that it is bounded. Note that, since the algorithm only ensures the convergence to a local maximum, the outcome may vary by choice of the initial value, $\theta^{(0)}$.
Iterative maximization of the Q-function tends to be computationally more efficient and stable than maximizing the likelihood function directly.
In particular, for the distributions such that the maximum likelihood parameter estimator is analytically solvable, the update \eqref{eq:maxstep} also has a closed-form expression. We illustrate a case where $X$ follows the Gaussian distribution and $U$ the Laplace distribution below.
\begin{example}
Suppose $X$ follows the Gaussian distribution and $U$ the Laplace distribution, {\it i.e.},
\begin{align*} p_x(x; \mu_x, \sigma_x) &= \frac{1}{\sqrt{2\pi\sigma_{x}^{2}}} \exp\left(-\frac{(x-\mu_x)^2}{2\sigma_{x}^{2}}\right)\\ p_u(u; \sigma) &= \frac{\sqrt{2}}{2 \sigma}
\exp\left(-\frac{\sqrt{2}|u|}{\sigma}\right). \end{align*} Note that $\mathrm{Var}(U) = \sigma^2$.
We have three parameters to estimate, $\mu_x$, $\sigma_x$, and $\sigma$. Since $\mu_x$ can be estimated by the sample average of $W$, we estimate the two standard deviations by the algorithm presented. The Q-function is written as follows. \[ \begin{split}
Q(\theta, \theta'| w_i, d_i) &=
d_i \int_{c}^{\infty} h_i(\theta')
\left[ \log p_x(x; \sigma_x) + \log p_u(w_i-x; \sigma) \right] dx\\ + & (1-d_i) \int_{-\infty}^{c} h_i(\theta')
\left[ \log p_x(x; \sigma_x) + \log p_u(w_i-x; \sigma) \right] dx.
\end{split} \]
Note that the $h$ function is obtained by substituting the density functions into \eqref{eq:h1} and \eqref{eq:h2}. Setting $\sum_{i=1}^{n}\frac{\partial Q(\theta, \theta'| w_i, d_i) }{ \partial \theta }= 0$ yields the first order conditions for the parameters: \begin{align} \sigma &= \frac{\sqrt{2}}{n} \sum_{i=1}^{n} \Bigg\{
d_i \int_{c}^{\infty} h_i(\theta') |w_i-x| dx +
(1-d_i) \int_{-\infty}^{c} h_i(\theta') |w_i-x| dx \Bigg\} \label{eq:focsigma} \\ \sigma_{x}^{2} &= \frac{1}{n} \sum_{i=1}^{n} \Bigg\{ d_i \int_{c}^{\infty} h_i(\theta') (x-\mu_x)^2 dx + (1-d_i) \int_{-\infty}^{c} h_i(\theta') (x-\mu_x)^2 dx \Bigg\}. \label{eq:focsigmax} \end{align}
The integrals are weighted averages of $|w_i-x|$ and $(x-\mu_x)^2$ respectively (with $h_i(\theta')$ as the weight), analogous to the variance estimators for the Laplace and the Gaussian distributions. Thanks to the explicit formulas \eqref{eq:focsigma} and \eqref{eq:focsigmax}, the parameters can be updated at each iteration without relying on a numerical optimization routine. This reduces the computation time and enhances the stability of the algorithm. Analogous formulas can be obtained for many other parametric distributions, particularly for those belonging to the exponential family. \end{example}
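A compact numerical sketch of this EM variant follows (an illustration only: $\mu_x$ is held at its true value rather than estimated from the sample mean of $W$, and the grid quadrature, truncation ranges, starting values, and iteration count are my choices).

```python
import numpy as np

rng = np.random.default_rng(0)
n, c, mu_x = 4_000, 1.0, 0.0
sigma_true, sigma_x_true = 0.5, 1.0
x = rng.normal(mu_x, sigma_x_true, n)
u = rng.laplace(0.0, sigma_true / np.sqrt(2), n)  # Laplace scale b gives Var = 2 b^2
w, d = x + u, x > c

def p_x(t, sx):
    return np.exp(-(t - mu_x) ** 2 / (2 * sx**2)) / np.sqrt(2 * np.pi * sx**2)

def p_u(t, s):
    return np.sqrt(2) / (2 * s) * np.exp(-np.sqrt(2) * np.abs(t) / s)

# quadrature grids for x on (c, inf) and (-inf, c), truncated in the tails
g1 = np.linspace(c, c + 8, 801)
g0 = np.linspace(c - 8, c, 801)

s, sx = 0.8, 1.2                      # initial guesses
for _ in range(150):
    num_s = num_sx = 0.0
    for g, m in ((g1, d), (g0, ~d)):
        dx = g[1] - g[0]
        wi = w[m][:, None]                       # (m, 1) against the grid (801,)
        dens = p_x(g, sx) * p_u(wi - g, s)
        h = dens / (dens.sum(axis=1, keepdims=True) * dx)      # eqs. (h1)/(h2)
        num_s += ((h * np.abs(wi - g)).sum(axis=1) * dx).sum()     # eq. (focsigma)
        num_sx += ((h * (g - mu_x) ** 2).sum(axis=1) * dx).sum()   # eq. (focsigmax)
    s, sx = np.sqrt(2) * num_s / n, np.sqrt(num_sx / n)

print(s, sx)   # should approach sigma_true and sigma_x_true
```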
\section{Simulation}
This section reports the result of the simulation experiments of the estimators introduced in the previous section. The methods have been implemented as an R library and are freely available on the GitHub repository (\url{https://github.com/kota7/rddsigma}).
We generate data from various combinations of distributions to examine the robustness of the estimators against misspecification. $X$ has been generated from the Gaussian distribution and the exponential distribution, while $U$ has been generated from the Gaussian and Laplace distribution. For each pair of distributions, we set the variance of $X$ to one, and the variance of $U$, $\sigma$, to 0.2 and 1.2. The sample size is 500, and the cutoff point $c$ is set to one for all cases. As a result, we have eight simulation configurations, as summarized in Table \ref{tab:setup}.
\begin{table}[ht] \begin{center} \caption{Simulation setup} \label{tab:setup}
\begin{tabular}{c|ccccccc} \hline ID & N & $c$ & $p_x$ & $p_u$ & $\mathbf{E}(X)$ & $\mathrm{Var}(X)$ & $\mathrm{Var}(U)$ \\ \hline\hline 1 & 500 & 1 & Gaussian & Gaussian & 0 & 1 & 0.2 \\ 2 & 500 & 1 & Gaussian & Laplace & 0 & 1 & 0.2 \\ 3 & 500 & 1 & Exponential & Gaussian & 1 & 1 & 0.2 \\ 4 & 500 & 1 & Exponential & Laplace & 1 & 1 & 0.2 \\
5 & 500 & 1 & Gaussian & Gaussian & 0 & 1 & 1.2 \\ 6 & 500 & 1 & Gaussian & Laplace & 0 & 1 & 1.2 \\ 7 & 500 & 1 & Exponential & Gaussian & 1 & 1 & 1.2 \\ 8 & 500 & 1 & Exponential & Laplace & 1 & 1 & 1.2 \\ \hline \end{tabular} \end{center} \end{table}
For each setup, we generate 200 random datasets. Using a generated dataset, we estimate $\sigma$ and other parameters by three methods: (A) Gaussian-Gaussian estimator, (B) non-Gaussian estimator with $X$ and $U$ following the Gaussian distribution, and (C) non-Gaussian estimator with $X$ following the Gaussian and $U$ following the Laplace distribution. Notice that in many cases the models are ``misspecified'' in the sense that the true data generating process does not follow the distributions assumed by the model. This allows us to examine the robustness of the estimators against deviations from the assumptions.
The results are summarized in Figure \ref{fig:sim}. The numbers on the horizontal axis correspond to the IDs given in Table \ref{tab:setup}, and each panel corresponds to an estimation method. The Gaussian-Gaussian estimator, labeled (A), consistently recovers the true parameter for all cases. IDs 1 and 5 satisfy the model assumptions, and the estimated $\sigma$ is distributed around the true parameter as expected. Even for the other cases, where the model is misspecified, the estimates are centered around the true parameter.
The estimator (B), the non-Gaussian estimator with the assumption that $X$ and $U$ follow the Gaussian distribution, also estimates the parameters correctly in most cases. It tends, however, to be unstable for setups 3 and 7, where $X$ is generated from the exponential distribution.
The estimator (C), the non-Gaussian estimator with the assumption that $X$ follows the Gaussian and $U$ follows the Laplace distribution, performs well for IDs 2 and 6, which satisfy the model assumptions. However, it exhibits a relatively high sensitivity to misspecification compared with the other two methods. Instability is particularly prominent for the cases where $X$ is generated from the exponential distribution.
\begin{figure}
\caption{Distribution of estimated $\sigma$. Each boxplot comprises 200 independent estimates. The numbers in the horizontal axis indicate the IDs of the data generating process given in Table \ref{tab:setup}. True parameters are 0.2 in the panels in the left column and 1.2 in the right. Each row uses a different estimation method: (A) Gaussian-Gaussian estimator, (B) non-Gaussian estimator with $X$ and $U$ following the Gaussian distribution, and (C) non-Gaussian estimator with $X$ following the Gaussian and $U$ following the Laplace distribution.}
\label{fig:sim}
\end{figure}
\section{Concluding Remarks}
This paper introduces two estimators for estimating the variance of measurement errors in running variables of sharp regression discontinuity designs. The first estimator is constructed under the assumption that both the running variable and the measurement error follow the Gaussian distribution. Under this assumption, the conditional likelihood function has an explicit formula, and the parameters can be estimated efficiently using a numerical optimization routine. Despite the strong assumptions on the variable distributions, the estimator exhibits robustness against misspecification in the simulation exercises.
The second estimator relaxes the Gaussian assumption and allows both $X$ and $U$ to follow arbitrary distributions characterized by a finite number of parameters. A variant of the expectation-maximization (EM) algorithm is introduced, which optimizes the likelihood function efficiently compared with a direct application of a standard numerical optimization routine. This estimator performs as well as the first when the model is correctly specified. However, the simulation experiments find that the estimator can become unstable and biased when the model assumptions deviate from the data generating process.
The first estimator would be practical in many cases for estimating the variance of measurement errors. It is easy to implement, is computationally efficient, and tends to be robust against misspecification. The second estimator can be preferred in domains where the distributions of the variables are understood well. It would also serve as a robustness check for the first estimator.
\appendix
\section{Proof} \label{sec:proof}
We provide a proof for the inequality \eqref{eq:ineq}: \[
\log p(W, D; \theta) - \log p(W, D; \theta') \ge Q(\theta, \theta' | W, D) - Q(\theta', \theta' | W, D). \] To do so, we introduce the following lemma.
\begin{lemma*} Let $J(\theta) = \int_{x \in \mathcal{X}} g(x; \theta)\, dx$, where $g$ is a positive-valued function and $\mathcal{X}$ is a subset of the domain of $g$. Define the corresponding Q-function by \[ Q(\theta, \theta') = \int_{x \in \mathcal{X}} h(x; \theta') \log g(x; \theta)\, dx \] where \[ h(x; \theta) = \frac{g(x; \theta)}{\int_{y \in \mathcal{X}} g(y; \theta) dy}. \] Then, \[ \log J(\theta) - \log J(\theta') \ge Q(\theta,\theta') - Q(\theta', \theta'). \] \end{lemma*}
\begin{proof} \begin{align*} & \log J(\theta) - Q(\theta, \theta')\\ =& \log \int_{x \in \mathcal{X}} g(x; \theta)dx - \int_{x \in \mathcal{X}} h(x; \theta') \log g(x; \theta)dx \\ =& \int_{y \in \mathcal{X}} h(y; \theta') \log \left( \int_{x \in \mathcal{X}} g(x; \theta)dx \right) dy - \int_{y \in \mathcal{X}} h(y; \theta') \log g(y; \theta)dy \\ =& \int_{y \in \mathcal{X}} h(y; \theta') \log \frac{\int_{x \in \mathcal{X}} g(x; \theta)dx}{g(y; \theta)}\, dy \\ =& - \int_{y \in \mathcal{X}} h(y; \theta') \log h(y; \theta)\, dy, \end{align*} where the second equality uses $\int_{y \in \mathcal{X}} h(y; \theta')\, dy = 1$. Writing the same identity with $\theta = \theta'$ and subtracting it from both sides, \begin{align*} & \log J(\theta) - \log J(\theta') - Q(\theta, \theta') + Q(\theta', \theta') \\ =& \int_{y \in \mathcal{X}} h(y; \theta') \log \frac{h(y; \theta')}{h(y; \theta)}\, dy \\ \ge& \; 0, \end{align*} where the last step is Gibbs' inequality. Hence, \[ \log J(\theta) - \log J(\theta') \ge Q(\theta,\theta') - Q(\theta', \theta'). \] \end{proof}
To derive the inequality \eqref{eq:ineq}, apply the lemma with $g(x; \theta) = p_x(x; \theta_x)p_u(W-x; \theta_u)$ and $\mathcal{X} = (c, \infty)$. Then, we obtain \begin{equation} \label{eq:d1} \begin{split} &\log \int_{c}^{\infty} p_x(x; \theta_x)p_u(W-x; \theta_u)\, dx - \log \int_{c}^{\infty} p_x(x; \theta'_x)p_u(W-x; \theta'_u)\, dx \\ \ge\; & \int_{c}^{\infty} h(\theta', x | W, D=1) \log \frac{p_x(x; \theta_x)\, p_u(W-x; \theta_u)}{p_x(x; \theta'_x)\, p_u(W-x; \theta'_u)}\, dx. \end{split} \end{equation} Similarly, applying the lemma with the same $g$ function and $\mathcal{X} = (-\infty, c)$, \begin{equation} \label{eq:d0} \begin{split} &\log \int_{-\infty}^{c} p_x(x; \theta_x)p_u(W-x; \theta_u)\, dx - \log \int_{-\infty}^{c} p_x(x; \theta'_x)p_u(W-x; \theta'_u)\, dx \\ \ge\; & \int_{-\infty}^{c} h(\theta', x | W, D=0) \log \frac{p_x(x; \theta_x)\, p_u(W-x; \theta_u)}{p_x(x; \theta'_x)\, p_u(W-x; \theta'_u)}\, dx. \end{split} \end{equation} Multiplying \eqref{eq:d1} by $D$, multiplying \eqref{eq:d0} by $1-D$, and adding the two inequalities yields \eqref{eq:ineq}.
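The lemma can also be spot-checked numerically (an illustration only; the Gaussian choice of $g$, the truncated interval standing in for $\mathcal{X}$, and the random parameter draws are all arbitrary).

```python
import numpy as np

# g(x; theta) > 0 on X = (1, inf), truncated at 12 for quadrature
grid = np.linspace(1.0, 12.0, 4001)
dx = grid[1] - grid[0]

def g(theta):
    return np.exp(-((grid - theta) ** 2) / 2.0)

def J(theta):
    # J(theta) = integral of g over X (rectangle rule)
    return g(theta).sum() * dx

def Q(theta, theta_p):
    h = g(theta_p) / J(theta_p)          # density h(x; theta') on X
    return (h * np.log(g(theta))).sum() * dx

rng = np.random.default_rng(0)
for _ in range(100):
    t, tp = rng.uniform(-1, 3, size=2)
    lhs = np.log(J(t)) - np.log(J(tp))
    rhs = Q(t, tp) - Q(tp, tp)
    assert lhs >= rhs - 1e-9
print("inequality holds on all random draws")
```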
\end{document}
A general neuro-space mapping technique for microwave device modeling
Lin Zhu ORCID: orcid.org/0000-0002-7356-57021,
Jian Zhao1,
Zhuo Li2,3,
Wenyuan Liu4,
Lei Pan1,
Haifeng Wu5 &
Deliang Liu6
Accurate modeling of nonlinear microwave devices is critical for the reliable design of microwave circuits and systems. In this paper, a more general neuro-space mapping (Neuro-SM) method is proposed to meet the needs of increased modeling complexity. The proposed technique retains the capability of the existing dynamic Neuro-SM in modifying the dynamic voltage relationship between the coarse model and the desired model. The proposed Neuro-SM also considers dynamic current mapping besides voltage mapping. In this way, the proposed Neuro-SM generalizes the previously published Neuro-SM methods and has the potential to produce a more accurate model of microwave devices with more dynamics and nonlinearity. A new formulation and a new sensitivity analysis technique are derived to train the general Neuro-SM with dc, small-, and large-signal data. A new gradient-based training algorithm is also proposed to speed up the training. The validity and efficiency of the general Neuro-SM method are demonstrated through a real 2 × 50 μm GaAs pseudomorphic high-electron mobility transistor (pHEMT) modeling example. The proposed general Neuro-SM model can be conveniently implemented in circuit simulators.
Microwave transistors are key components in next-generation wireless communication systems [1,2,3,4], such as cognitive multiple-input multiple-output (MIMO) systems [5,6,7] and cognitive relay networks [8, 9]. With the increasing complexity of communication circuit and system structures, designers rely more heavily on computer-aided design (CAD) software to achieve efficient design. Microwave device models are essential to CAD software. The accuracy of these models can even decide whether a communication circuit and system design is successful or not. Due to rapid technology development in the semiconductor industry, new microwave devices constantly emerge. Models suitable for previous devices may not fit new devices well. There is an ongoing need for new, accurate models.
In recent years, the neuro-space mapping (Neuro-SM) technique [10], which combines artificial neural networks [11] with space mapping [12], has been recognized in microwave device modeling for its good efficiency and accuracy. In Neuro-SM, neural networks are used to automatically map and modify an existing equivalent circuit model, also called the coarse model, into a desired/accurate model through a process named training. In order to meet the needs of increased modeling complexity and the industry's increasing need for tighter accuracy, several improvements on the basis of [10] were subsequently studied to enhance the modeling accuracy and efficiency, such as Neuro-SM with the output mapping [13], dynamic Neuro-SM [14], and analytical Neuro-SM with sensitivity analysis [15]. Neuro-SM with the output mapping [13] was introduced, through the incorporation of a new output/current mapping, for the modeling of microwave devices. Compared to the Neuro-SM presented in [10], Neuro-SM with the output mapping is more suitable for modeling nonlinear devices with more nonlinearity due to the additional and useful degrees of freedom provided by the output mapping neural network. In order to accurately model nonlinear devices that have higher-order dynamic effects (e.g., capacitive or non-quasi-static effects) than the coarse model, dynamic Neuro-SM was introduced [14]. However, when the devices being modeled have both more nonlinearity and higher-order dynamics, the match between the trained Neuro-SM models and the device data may still not be good enough, even when the existing Neuro-SM [13, 14] is used to map the coarse model towards the device data. More effective Neuro-SM methods need to be investigated to overcome the accuracy limits of the Neuro-SM presented in [13, 14].
In this paper, we propose a more general Neuro-SM approach that includes not only static mapping but also dynamic mapping, and that considers both voltage mapping and current mapping for the first time. This paper is a further expansion of the work in [13, 14]. Compared to [13], where only static mapping is used, the proposed technique is more suitable for modeling nonlinear devices with higher-order dynamic effects and non-quasi-static effects that may be missing in the coarse model, due to the inclusion of dynamic mapping. Compared to [14], the general Neuro-SM considers not only input voltage mapping but also output current mapping, further refining the existing coarse model. In this way, a well-trained general Neuro-SM model can represent the dynamic behavior and large-signal nonlinearity of microwave devices more accurately than the coarse model, the Neuro-SM model with the output mapping [13], and the dynamic Neuro-SM model [14]. The modeling results of a real 2 × 50 μm GaAs pseudomorphic high-electron mobility transistor (pHEMT) demonstrate the correctness and validity of the proposed general Neuro-SM technique.
Concept of the general Neuro-SM model
Suppose the existing equivalent circuit model is a rough approximation of the behavior of the microwave device. We call this existing model the coarse model. Let the desired model that accurately matches the device data be called the fine model. Taking field-effect transistor (FET) modeling as an example, let the gate and drain voltages and currents of the coarse model be defined as v c = [vc1, vc2]T and i c = [ic1, ic2]T, respectively. Let the terminal voltages and currents of the fine model be defined as v f = [vf1, vf2]T and i f = [if1, if2]T, respectively.
Suppose the total numbers of voltage delay buffers at the gate and drain are the same and both equal to N v . Let τ be the time delay parameter. To represent time-domain behavior, the time parameter t is introduced. Figure 1 illustrates the signal flow of the general Neuro-SM model. First, the present voltages of the fine model v f (t) as well as their history v f (t − τ), v f (t − 2τ), …, and v f (t − N v τ) are mapped into the coarse model voltages v c (t). Because the formula of the mapping is unknown and usually nonlinear, a neural network is used to learn and represent the mapping. While the Neuro-SM presented in [10] uses a static neural network such as a multilayer perceptron (MLP), we propose to use a time delay neural network (TDNN) to map the coarse model to the fine model. In functional form, v c (t) can be described as
$$ {\boldsymbol{v}}_c(t)={\boldsymbol{f}}_{\mathrm{ANN}}\left({\boldsymbol{v}}_f(t),{\boldsymbol{v}}_f\left(t-\tau \right),\dots, {\boldsymbol{v}}_f\left(t-{N}_v\tau \right),{\boldsymbol{w}}_1\right),{N}_v\ge 0\kern0.2em $$
where fANN represents the input/voltage mapping neural network, and w 1 is a vector containing all the weights of the input mapping neural network. As seen from Eq. (1), the voltages at the gate and drain of the coarse model depend not only on the present voltages of the fine model, but also on their history signals, making the proposed technique more suitable for modeling the dynamic behavior of nonlinear devices. Then, after the coarse model computation, the coarse model currents i c (t) can be obtained. Suppose the total numbers of current delay buffers at the gate and drain are the same and both equal to Nc. Finally, i c (t) and their history i c (t − τ), …, i c (t − N c τ), as well as the present voltages of the fine model v f (t), are mapped by another TDNN to the external currents as
$$ {\boldsymbol{i}}_f(t)={\boldsymbol{h}}_{\mathrm{ANN}}\left({\boldsymbol{v}}_f(t),{\boldsymbol{i}}_c(t),{\boldsymbol{i}}_c\left(t-\tau \right),\dots, {\boldsymbol{i}}_c\left(t-{N}_c\tau \right),{\boldsymbol{w}}_2\right),{N}_c\ge 0\kern0.1em $$
where hANN represents the output/current mapping neural network, and vector w2 contains all the output mapping neural network weights. Compared to [14], the new output neural network mapping further refines the coarse model current signals to produce the fine model outputs. The combination of the dynamic voltage mapping neural network, the coarse model, and the dynamic current mapping neural network is called the general Neuro-SM model.
Signal flow of the general Neuro-SM model
The proposed general Neuro-SM is more general than the Neuro-SM techniques presented in [10, 13, 14]. When N v = 0, the general Neuro-SM model without the output mapping reduces to the static Neuro-SM model [10]. When N v = 0 and N c = 0, the general Neuro-SM model reduces to the Neuro-SM model with the output mapping [13]. When N v > 0, the general Neuro-SM model without the output mapping reduces to the dynamic Neuro-SM model [14]. In this way, the proposed general Neuro-SM generalizes the previously published Neuro-SM techniques. Furthermore, when N v > 0 and N c > 0, a new Neuro-SM technique is obtained for the first time. Compared to the Neuro-SM introduced in [10, 13, 14], the new Neuro-SM is more suitable for modeling microwave devices with high-order dynamics and nonlinearity because it includes dynamic mappings as well as a current mapping.
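The signal flow of Eqs. (1)–(3) can be sketched in code. The following is a minimal illustration, not the authors' implementation: the TDNNs are stood in for by one-hidden-layer tanh networks, the coarse model is an arbitrary callable, and all names (`neuro_sm_step`, `f_params`, `h_params`) are hypothetical.

```python
import numpy as np

def tdnn(x, W1, b1, W2, b2):
    """One-hidden-layer tanh network, standing in for f_ANN / h_ANN."""
    return W2 @ np.tanh(W1 @ x + b1) + b2

def neuro_sm_step(vf_hist, ic_hist, coarse_model, f_params, h_params):
    """One time step of the general Neuro-SM signal flow (Eqs. (1)-(3)).

    vf_hist : concatenated [v_f(t), v_f(t-tau), ..., v_f(t-Nv*tau)], each in R^2
    ic_hist : concatenated [i_c(t-tau), ..., i_c(t-Nc*tau)], each in R^2
    """
    vc_t = tdnn(vf_hist, *f_params)      # Eq. (1): dynamic voltage mapping
    ic_t = coarse_model(vc_t)            # coarse model evaluation
    z = np.concatenate([vf_hist[:2], ic_t, ic_hist])
    if_t = tdnn(z, *h_params)            # Eq. (3): dynamic current mapping
    return if_t, ic_t
```

In a transient simulation, `vf_hist` and `ic_hist` would be shifted by one buffer position after every step, which is how the delay lines of Fig. 1 are realized.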
Proposed analytical formulation of the general Neuro-SM model for training
The general Neuro-SM model will not be accurate unless the dynamic voltage and dynamic current mapping neural networks are trained suitably. In order to train the general Neuro-SM efficiently with typical types of transistor modeling data, the relationships between the dynamic voltage and current mapping neural networks and typical types of transistor data, such as DC, bias-dependent S parameter, and large-signal harmonic data, need to be derived.
In the DC case, the present voltage signals of the fine model v f (t) as well as their history, i.e., v f (t − τ), …, v f (t − N v τ), are all equal and defined as Vf, DC. Similarly, the present current signals of the coarse model i c (t) as well as their history, i.e., i c (t − τ), …, i c (t − N c τ), are all equal and defined as Ic, DC. The response of the general Neuro-SM model at Vf, DC can be generally described as
$$ {\boldsymbol{I}}_{f,\mathrm{DC}}={\boldsymbol{I}}_f\left({\boldsymbol{V}}_{f,\mathrm{DC}}\right)={\boldsymbol{h}}_{\mathrm{ANN}}\left({\boldsymbol{V}}_{f,\mathrm{DC}},\overset{N_c+1}{\overbrace{{\left.{\boldsymbol{I}}_{c,\mathrm{DC}}\right|}_{{\boldsymbol{V}}_{c,\mathrm{DC}}},{\left.{\boldsymbol{I}}_{c,\mathrm{DC}}\right|}_{{\boldsymbol{V}}_{c,\mathrm{DC}}},\dots, {\left.{\boldsymbol{I}}_{c,\mathrm{DC}}\right|}_{{\boldsymbol{V}}_{c,\mathrm{DC}}}}},{\boldsymbol{w}}_2\right) $$
$$ {\boldsymbol{V}}_{c,\mathrm{DC}}={\boldsymbol{f}}_{\mathrm{ANN}}\left(\overset{N_v+1}{\overbrace{{\boldsymbol{V}}_{f,\mathrm{DC}},{\boldsymbol{V}}_{f,\mathrm{DC}},\dots, {\boldsymbol{V}}_{f,\mathrm{DC}}}},{\boldsymbol{w}}_1\right)\kern0.1em $$
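Equations (3) and (4) say that DC evaluation amounts to loading every voltage delay buffer with the same bias Vf, DC and every current input with the same Ic, DC. A minimal sketch of this specialization, with illustrative names and a one-hidden-layer tanh network standing in for each mapping:

```python
import numpy as np

def tdnn(x, W1, b1, W2, b2):
    """One-hidden-layer tanh network, standing in for f_ANN / h_ANN."""
    return W2 @ np.tanh(W1 @ x + b1) + b2

def dc_response(Vf_dc, Nv, Nc, coarse_dc, f_params, h_params):
    """DC evaluation per Eqs. (3)-(4): all Nv+1 voltage buffers hold the same
    bias V_f,DC, and all Nc+1 current inputs hold the same I_c,DC."""
    Vc_dc = tdnn(np.tile(Vf_dc, Nv + 1), *f_params)   # Eq. (4)
    Ic_dc = coarse_dc(Vc_dc)                          # coarse-model DC current
    z = np.concatenate([Vf_dc, np.tile(Ic_dc, Nc + 1)])
    return tdnn(z, *h_params)                         # Eq. (3)
```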
The small-signal S parameter of the general Neuro-SM model can be calculated by transforming its Y parameters Y f , which can be obtained by mapping Y parameters of the coarse model Y c . In functional form, Y f can be described as
$$ {\displaystyle \begin{array}{l}{\boldsymbol{Y}}_f\left(\omega \right)\\ {}={\left(\sum \limits_{l=0}^{N_c}{e}^{- j\omega l\tau}\cdot {\left.\frac{\partial {\boldsymbol{h}}_{\mathrm{ANN}}^T\left({\boldsymbol{v}}_f(t),{\boldsymbol{i}}_c(t),{\boldsymbol{i}}_c\left(t-\tau \right),\dots, {\boldsymbol{i}}_c\left(t-{N}_c\tau \right),{\boldsymbol{w}}_2\right)}{\partial {\boldsymbol{i}}_c\left(t- l\tau \right)}\right|}_{\begin{array}{l}{\boldsymbol{v}}_f={\boldsymbol{V}}_{f,\mathrm{Bias}}\\ {}{\boldsymbol{i}}_c(t)={\boldsymbol{i}}_c\left(t-\tau \right)=\cdots ={\boldsymbol{i}}_c\left(t-{N}_c\tau \right)={\left.{\boldsymbol{I}}_c\right|}_{{\boldsymbol{V}}_{c,\mathrm{Bias}}}\end{array}}\right)}^T\\ {}\kern0.5em \cdot {\left.{\boldsymbol{Y}}_c\left(\omega \right)\right|}_{{\boldsymbol{V}}_{c,\mathrm{Bias}}}\cdot {\left(\sum \limits_{k=0}^{N_v}{e}^{- j\omega k\tau}\cdot {\left.\frac{\partial {\boldsymbol{f}}_{\mathrm{ANN}}^T\left({\boldsymbol{v}}_f(t),{\boldsymbol{v}}_f\left(t-\tau \right),\dots, {\boldsymbol{v}}_f\left(t-{N}_v\tau \right),{\boldsymbol{w}}_1\right)}{\partial {\boldsymbol{v}}_f\left(t- k\tau \right)}\right|}_{{\boldsymbol{v}}_f(t)={\boldsymbol{v}}_f\left(t-\tau \right)=\cdots ={\boldsymbol{v}}_f\left(t-{N}_v\tau \right)={\boldsymbol{V}}_{f,\mathrm{Bias}}}\right)}^T\\ {}\kern0.5em +{\left({\left.\frac{\partial {\boldsymbol{h}}_{\mathrm{ANN}}^T\left({\boldsymbol{v}}_f(t),{\boldsymbol{i}}_c(t),{\boldsymbol{i}}_c\left(t-\tau \right),\dots, {\boldsymbol{i}}_c\left(t-{N}_c\tau \right),{\boldsymbol{w}}_2\right)}{\partial {\boldsymbol{v}}_f(t)}\right|}_{\begin{array}{l}{\boldsymbol{v}}_f={\boldsymbol{V}}_{f,\mathrm{Bias}}\\ {}{\boldsymbol{i}}_c(t)={\boldsymbol{i}}_c\left(t-\tau \right)=\cdots ={\boldsymbol{i}}_c\left(t-{N}_c\tau \right)={\left.{\boldsymbol{I}}_c\right|}_{{\boldsymbol{V}}_{c,\mathrm{Bias}}}\end{array}}\right)}^T\end{array}} $$
$$ {\boldsymbol{V}}_{c,\mathrm{Bias}}={\boldsymbol{f}}_{\mathrm{ANN}}\left(\overset{N_v+1}{\overbrace{{\boldsymbol{V}}_{f,\mathrm{Bias}},{\boldsymbol{V}}_{f,\mathrm{Bias}},\dots, {\boldsymbol{V}}_{f,\mathrm{Bias}}}},{\boldsymbol{w}}_1\right) $$
where the first-order derivatives of fANN and hANN can be obtained at the bias Vf, Bias using the adjoint neural network method [15]. The indices k and l enumerate the voltage and current delay buffers, respectively. Equation (5) consists of two parts. The first part is a product of three matrices: the output/current Y-mapping matrix, i.e., the sum of products of e−jωlτ and ∂hANN/∂i c ; the Y parameter matrix of the coarse model Y c ; and the input/voltage Y-mapping matrix, i.e., the sum of products of e−jωkτ and ∂fANN/∂v f . The other part is the sensitivity matrix of h ANN with respect to the fine model voltages. Equation (5) is more general than the small-signal Y parameter formulas of the Neuro-SM models in [10, 13, 14] because it accounts for the new effects of the current mapping and the dynamic mappings. For the large-signal case, we need to derive the relationship between the HB computation and the dynamic voltage and current mapping neural networks so that model training can be performed with harmonic data. Let the harmonic currents of the general Neuro-SM model and the coarse model at a generic harmonic frequency ω k be I f (ω k ) and I c (ω k ), respectively. Then I f (ω k ) can be evaluated as
$$ {\boldsymbol{I}}_f\left({\omega}_k\right)=\frac{1}{N_T}\sum \limits_{n=0}^{N_T-1}{\boldsymbol{h}}_{\mathrm{ANN}}\left({\boldsymbol{v}}_f\left({t}_n\right),{\left.{\boldsymbol{i}}_c\left({t}_n\right)\right|}_{{\boldsymbol{v}}_c\left({t}_n\right)},{\left.{\boldsymbol{i}}_c\left({t}_n-\tau \right)\right|}_{{\boldsymbol{v}}_c\left({t}_n\right)},\dots, {\left.{\boldsymbol{i}}_c\left({t}_n-{N}_c\tau \right)\right|}_{{\boldsymbol{v}}_c\left({t}_n\right)},{\boldsymbol{w}}_2\right)\cdot {W}_N\left(n,k\right) $$
$$ {\boldsymbol{v}}_c\left({t}_n\right)={\boldsymbol{f}}_{\mathrm{ANN}}\left({\boldsymbol{v}}_f\left({t}_n\right),{\boldsymbol{v}}_f\left({t}_n-\tau \right),\dots, {\boldsymbol{v}}_f\left({t}_n-{N}_v\tau \right),{\boldsymbol{w}}_1\right) $$
$$ {\boldsymbol{v}}_f\left({t}_n- m\tau \right)=\sum \limits_{k=0}^{N_H}{\boldsymbol{V}}_f\left({\omega}_k\right)\cdot {e}^{- jm{\omega}_k\tau}\cdot {W}_N^{\ast}\left(n,k\right),m=0,1,\dots, {N}_v $$
where the subscript k represents the index of the harmonic frequency, k = 0, 1, 2, …, N H , where N H is the number of harmonics considered in the HB simulation. N T is the number of time sampling points, W N (n, k) is the Fourier coefficient for the n-th time sample and the k-th harmonic, the superscript * denotes the complex conjugate, and m represents the index of the voltage delay buffers, m = 0, 1, …, N v . As seen from (7)–(9), apart from changing the nonlinearity of the coarse model, the dynamic voltage and current neural network mappings can also change the dynamic order, so the proposed general Neuro-SM has the potential to model microwave devices with high-order dynamics and nonlinearity.
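The outer operation of Eq. (7), accumulating the time samples against the Fourier coefficients W N (n, k), is sketched below for a single port, under the common assumption W N (n, k) = e−j2πnk/NT (the paper does not fix this convention, so it is stated here as an assumption):

```python
import numpy as np

def harmonic_currents(if_time):
    """Eq. (7) for one port: harmonic currents I_f(omega_k) from N_T uniform
    time samples of i_f(t_n), assuming W_N(n, k) = exp(-j*2*pi*n*k/N_T)."""
    NT = len(if_time)
    n = np.arange(NT)
    k = np.arange(NT)[:, None]
    W = np.exp(-2j * np.pi * k * n / NT)   # Fourier coefficient matrix
    return (W @ if_time) / NT
```

As a sanity check, a pure cosine at the fundamental frequency yields a harmonic amplitude of 0.5 at k = 1 and zero DC content.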
Sensitivity analysis of the general Neuro-SM model with respect to mapping neural network weights
Let the numbers of hidden neurons of the dynamic voltage and current mapping neural networks be Nhv and Nhc, respectively. Let the generic symbols w1, i (i = 1, 2, …, Nhv) and w2, i (i = 1, 2, …, Nhc) denote internal weights of the voltage and current mapping neural networks, respectively; w1, i and w2, i are the i-th components of the vectors w1 and w2. In order to train the general Neuro-SM efficiently, gradient information provided by the sensitivities of the model with respect to w1, i and w2, i is needed [16].
(1) DC sensitivity: let the DC output at gate and drain of the general Neuro-SM model be If, DC. The sensitivities of If, DC with respect to w1, i and w2, i are described in functional form as
$$ {\displaystyle \begin{array}{l}\frac{\partial {\boldsymbol{I}}_{f,\mathrm{DC}}}{\partial {w}_{1,i}}={\left(\frac{\partial {\boldsymbol{I}}_{f,\mathrm{DC}}^T}{\partial {\boldsymbol{I}}_{c,\mathrm{DC}}}\right)}^T\cdot {\left(\frac{\partial {\boldsymbol{I}}_{c,\mathrm{DC}}^T}{\partial {\boldsymbol{V}}_{c,\mathrm{DC}}}\right)}^T\cdot \frac{\partial {\boldsymbol{V}}_{c,\mathrm{DC}}}{\partial {w}_{1,i}}\\ {}={\left(\frac{\partial {\boldsymbol{h}}_{\mathrm{ANN}}^T\left({\boldsymbol{V}}_{f,\mathrm{DC}},\overset{N_c+1}{\overbrace{{\left.{\boldsymbol{I}}_{c,\mathrm{DC}}\right|}_{{\boldsymbol{V}}_{c,\mathrm{DC}}},{\left.{\boldsymbol{I}}_{c,\mathrm{DC}}\right|}_{{\boldsymbol{V}}_{c,\mathrm{DC}}},\dots, {\left.{\boldsymbol{I}}_{c,\mathrm{DC}}\right|}_{{\boldsymbol{V}}_{c,\mathrm{DC}}}}},{\boldsymbol{w}}_2\right)}{\partial {\boldsymbol{I}}_{c,\mathrm{DC}}}\right)}^T\cdot {\boldsymbol{G}}_c\cdot \frac{\partial {\boldsymbol{f}}_{\mathrm{ANN}}\left(\overset{N_v+1}{\overbrace{{\boldsymbol{V}}_{f,\mathrm{DC}},{\boldsymbol{V}}_{f,\mathrm{DC}},\dots, {\boldsymbol{V}}_{f,\mathrm{DC}}}},{\boldsymbol{w}}_1\right)}{\partial {w}_{1,i}}\end{array}} $$
$$ \frac{\partial {\boldsymbol{I}}_{f,\mathrm{DC}}}{\partial {w}_{2,i}}=\frac{\partial {\boldsymbol{h}}_{\mathrm{ANN}}\left({\boldsymbol{V}}_{f,\mathrm{DC}},\overset{N_c+1}{\overbrace{{\left.{\boldsymbol{I}}_{c,\mathrm{DC}}\right|}_{{\boldsymbol{V}}_{c,\mathrm{DC}}},{\boldsymbol{I}}_{c,\mathrm{DC}}{\left|{}_{{\boldsymbol{V}}_{c,\mathrm{DC}}},\dots, {\boldsymbol{I}}_{c,\mathrm{DC}}\right|}_{{\boldsymbol{V}}_{c,\mathrm{DC}}}}},{\boldsymbol{w}}_2\right)\kern0.1em }{\partial {w}_{2,i}} $$
where \( {\boldsymbol{G}}_c={\left(\partial {\boldsymbol{I}}_{c,\mathrm{DC}}^T/\partial {\boldsymbol{V}}_{c,\mathrm{DC}}\right)}^T \) is the DC conductance matrix of the existing coarse model, and the first-order derivatives ∂fANN/∂w1, i and ∂hANN/∂w2, i can be calculated by neural network backpropagation [17].
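The chain-rule structure of Eq. (10), the output-mapping Jacobian times the coarse DC conductance G c times the input-mapping weight derivative, can be checked on a toy example. Here every map is made linear so that the chain-rule product is exact; all numerical values and names are illustrative only:

```python
import numpy as np

# Toy DC chain for Eq. (10): V_c = w * V_f, I_c = Gc @ V_c, I_f = H @ I_c.
# Because every map is linear, the chain-rule product is exact and can be
# verified against a central finite difference in the single weight w.
Gc = np.array([[0.02, 0.00],
               [0.05, 0.01]])          # coarse-model DC conductance matrix
H = np.array([[1.1, 0.0],
              [0.2, 0.9]])             # output-mapping Jacobian dh/dI_c

def If_dc(w, Vf):
    return H @ (Gc @ (w * Vf))         # h_ANN(coarse(f_ANN(V_f; w)))

Vf = np.array([0.6, 3.0])
analytic = H @ Gc @ Vf                 # Eq. (10): dI_f/dw = dh/dIc . Gc . df/dw
eps = 1e-6
numeric = (If_dc(1.0 + eps, Vf) - If_dc(1.0 - eps, Vf)) / (2 * eps)
```

In the actual model, the two Jacobians come from adjoint back-propagation through hANN and fANN rather than from fixed matrices.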
(2) S parameter sensitivity: the S parameter sensitivity can be obtained by converting the Y parameter sensitivity. The small-signal Y parameter sensitivities of the general Neuro-SM model with respect to w1, i and w2, i are shown in Eqs. (12) and (13), respectively. These two equations can be obtained by differentiating (5) with respect to w1, i and w2, i, respectively.
$$ {\displaystyle \begin{array}{l}\frac{\partial {\boldsymbol{Y}}_f\left(\omega \right)}{\partial {w}_{1,i}}\\ {}=\sum \limits_{r=1,2}\sum \limits_{m=0}^{N_c}\left(\begin{array}{l}{\left(\sum \limits_{l=0}^{N_c}{\left.{e}^{- j\omega l\tau}\cdot \frac{\partial^2{\boldsymbol{h}}_{\mathrm{ANN}}^T\left({\boldsymbol{v}}_f(t),{\boldsymbol{i}}_c(t),{\boldsymbol{i}}_c\left(t-\tau \right),\dots, {\boldsymbol{i}}_c\left(t-{N}_c\tau \right),{\boldsymbol{w}}_2\right)}{\partial {\boldsymbol{i}}_c\left(t- l\tau \right)\partial {i}_{cr}\left(t- m\tau \right)}\right|}_{\begin{array}{l}{\boldsymbol{v}}_f={\boldsymbol{V}}_{f,\mathrm{Bias}}\\ {}{\left.{\boldsymbol{i}}_c(t)={\boldsymbol{i}}_c\left(t-\tau \right)=\dots ={\boldsymbol{i}}_c\left(t-{N}_c\tau \right)={\boldsymbol{I}}_c\right|}_{{\boldsymbol{V}}_{c,\mathrm{Bias}}}\end{array}}\right)}^T\\ {}{\left.\cdot {e}^{- j\omega m\tau}\cdot \sum \limits_{p=1,2}{Y}_{c, rp}\right|}_{{\boldsymbol{V}}_{c,\mathrm{Bias}}}\cdot {\left.\frac{\partial {f}_{\mathrm{ANN}p}\left({\boldsymbol{v}}_f(t),{\boldsymbol{v}}_f\left(t-\tau \right),\dots, {\boldsymbol{v}}_f\left(t-{N}_v\tau \right),{\boldsymbol{w}}_1\right)}{\partial {w}_{1,i}}\right|}_{{\boldsymbol{v}}_f={\boldsymbol{V}}_{f.\mathrm{Bias}}}\kern0.1em \end{array}\right)\\ {}\kern0.5em \cdot {\left.{\boldsymbol{Y}}_c\left(\omega \right)\right|}_{{\boldsymbol{V}}_{c,\mathrm{Bias}}}\cdot {\left(\sum \limits_{k=0}^{N_v}{e}^{- j\omega k\tau}\cdot {\left.\frac{\partial {\boldsymbol{f}}_{\mathrm{ANN}}^T\left({\boldsymbol{v}}_f(t),{\boldsymbol{v}}_f\left(t-\tau \right),\dots, {\boldsymbol{v}}_f\left(t-{N}_v\tau \right),{\boldsymbol{w}}_1\right)}{\partial {\boldsymbol{v}}_f\left(t- k\tau \right)}\right|}_{{\boldsymbol{v}}_f(t)={\boldsymbol{v}}_f\left(t-\tau \right)=\dots ={\boldsymbol{v}}_f\left(t-{N}_v\tau \right)={\boldsymbol{V}}_{f,\mathrm{Bias}}}\right)}^T\\ {}+{\left(\sum \limits_{l=0}^{N_c}{\left.{e}^{- j\omega l\tau}\cdot \frac{\partial 
{\boldsymbol{h}}_{\mathrm{ANN}}^T\left({\boldsymbol{v}}_f(t),{\boldsymbol{i}}_c(t),{\boldsymbol{i}}_c\left(t-\tau \right),\dots, {\boldsymbol{i}}_c\left(t-{N}_c\tau \right),{\boldsymbol{w}}_2\right)}{\partial {\boldsymbol{i}}_c\left(t- l\tau \right)}\right|}_{\begin{array}{l}{\boldsymbol{v}}_f={\boldsymbol{V}}_{f.\mathrm{Bias}}\\ {}{\left.{\boldsymbol{i}}_c(t)={\boldsymbol{i}}_c\left(t-\tau \right)=\dots ={\boldsymbol{i}}_c\left(t-{N}_c\tau \right)={\boldsymbol{I}}_c\right|}_{{\boldsymbol{V}}_{c,\mathrm{Bias}}}\end{array}}\right)}^T\cdot {\left.{\boldsymbol{Y}}_c\left(\omega \right)\right|}_{{\boldsymbol{V}}_{c,\mathrm{Bias}}}\kern0.1em \\ {}\kern0.3em \cdot {\left(\sum \limits_{k=0}^{N_v}{e}^{- j\omega k\tau}\cdot {\left.\frac{\partial^2{\boldsymbol{f}}_{\mathrm{ANN}}^T\left({\boldsymbol{v}}_f(t),{\boldsymbol{v}}_f\left(t-\tau \right),\dots, {\boldsymbol{v}}_f\left(t-{N}_v\tau \right),{\boldsymbol{w}}_1\right)}{\partial {\boldsymbol{v}}_f\left(t- k\tau \right)\partial {w}_{1,i}}\right|}_{{\boldsymbol{v}}_f(t)={\boldsymbol{v}}_f\left(t-\tau \right)=\dots ={\boldsymbol{v}}_f\left(t-{N}_v\tau \right)={\boldsymbol{V}}_{f,\mathrm{Bias}}}\right)}^T\\ {}+{\left(\sum \limits_{l=0}^{N_c}{\left.{e}^{- j\omega l\tau}\cdot \frac{\partial {\boldsymbol{h}}_{\mathrm{ANN}}^T\left({\boldsymbol{v}}_f(t),{\boldsymbol{i}}_c(t),{\boldsymbol{i}}_c\left(t-\tau \right),\dots, {\boldsymbol{i}}_c\left(t-{N}_c\tau \right),{\boldsymbol{w}}_2\right)}{\partial {\boldsymbol{i}}_c\left(t- l\tau \right)}\right|}_{\begin{array}{l}{\boldsymbol{v}}_f={\boldsymbol{V}}_{f.\mathrm{Bias}}\\ {}{\left.{\boldsymbol{i}}_c(t)={\boldsymbol{i}}_c\left(t-\tau \right)=\dots ={\boldsymbol{i}}_c\left(t-{N}_c\tau \right)={\boldsymbol{I}}_c\right|}_{{\boldsymbol{V}}_{c,\mathrm{Bias}}}\end{array}}\right)}^T\\ {}\cdot \left(\sum \limits_{r=1,2}{\left.\frac{\partial {\boldsymbol{Y}}_c}{\partial {v}_{cr}}\right|}_{{\boldsymbol{V}}_{c,\mathrm{Bias}}}\cdot {\left.\frac{\partial 
{f}_{\mathrm{ANN}r}\left({\boldsymbol{v}}_f(t),{\boldsymbol{v}}_f\left(t-\tau \right),\dots, {\boldsymbol{v}}_f\left(t-{N}_v\tau \right),{\boldsymbol{w}}_1\right)}{\partial {w}_{1,i}}\right|}_{{\boldsymbol{v}}_f(t)={\boldsymbol{v}}_f\left(t-\tau \right)=\dots ={\boldsymbol{v}}_f\left(t-{N}_v\tau \right)={\boldsymbol{V}}_{f,\mathrm{Bias}}}\right)\\ {}\kern0.6em \cdot {\left(\sum \limits_{k=0}^{N_v}{e}^{- j\omega k\tau}\cdot {\left.\frac{\partial {\boldsymbol{f}}_{\mathrm{ANN}}^T\left({\boldsymbol{v}}_f(t),{\boldsymbol{v}}_f\left(t-\tau \right),\dots, {\boldsymbol{v}}_f\left(t-{N}_v\tau \right),{\boldsymbol{w}}_1\right)}{\partial {\boldsymbol{v}}_f\left(t- k\tau \right)}\right|}_{{\boldsymbol{v}}_f(t)={\boldsymbol{v}}_f\left(t-\tau \right)=\dots ={\boldsymbol{v}}_f\left(t-{N}_v\tau \right)={\boldsymbol{V}}_{f,\mathrm{Bias}}}\right)}^T\kern3.199999em \\ {}\kern1em \end{array}} $$
$$ {\displaystyle \begin{array}{l}\frac{\partial {\boldsymbol{Y}}_f\left(\omega \right)}{\partial {w}_{2,i}}\\ {}={\left(\sum \limits_{l=0}^{N_c}{\left.{e}^{- j\omega l\tau}\cdot \frac{\partial^2{\boldsymbol{h}}_{\mathrm{ANN}}^T\left({\boldsymbol{v}}_f(t),{\boldsymbol{i}}_c(t),{\boldsymbol{i}}_c\left(t-\tau \right),\dots, {\boldsymbol{i}}_c\left(t-{N}_c\tau \right),{\boldsymbol{w}}_2\right)}{\partial {\boldsymbol{i}}_c\left(t- l\tau \right)\partial {w}_{2,i}}\right|}_{\begin{array}{l}{\boldsymbol{v}}_f={\boldsymbol{V}}_{f.\mathrm{Bias}}\\ {}{\left.{\boldsymbol{i}}_c(t)={\boldsymbol{i}}_c\left(t-\tau \right)=\dots ={\boldsymbol{i}}_c\left(t-{N}_c\tau \right)={\boldsymbol{I}}_c\right|}_{{\boldsymbol{V}}_{c,\mathrm{Bias}}}\end{array}}\right)}^T\cdot {\left.{\boldsymbol{Y}}_c\left(\omega \right)\right|}_{{\boldsymbol{V}}_{c,\mathrm{Bias}}}\\ {}\kern0.6em \cdot {\left(\sum \limits_{k=0}^{N_v}{e}^{- j\omega k\tau}\cdot {\left.\frac{\partial {\boldsymbol{f}}_{\mathrm{ANN}}^T\left({\boldsymbol{v}}_f(t),{\boldsymbol{v}}_f\left(t-\tau \right),\dots, {\boldsymbol{v}}_f\left(t-{N}_v\tau \right),{\boldsymbol{w}}_1\right)}{\partial {\boldsymbol{v}}_f\left(t- k\tau \right)}\right|}_{{\boldsymbol{v}}_f(t)={\boldsymbol{v}}_f\left(t-\tau \right)=\dots ={\boldsymbol{v}}_f\left(t-{N}_v\tau \right)={\boldsymbol{V}}_{f,\mathrm{Bias}}}\right)}^T\\ {}\kern0.5em +{\left({\left.\frac{\partial^2{\boldsymbol{h}}_{\mathrm{ANN}}^T\left({\boldsymbol{v}}_f(t),{\boldsymbol{i}}_c(t),{\boldsymbol{i}}_c\left(t-\tau \right),\dots, {\boldsymbol{i}}_c\left(t-{N}_c\tau \right),{\boldsymbol{w}}_2\right)}{\partial {\boldsymbol{v}}_f(t)\partial {w}_{2,i}}\right|}_{\begin{array}{l}{\boldsymbol{v}}_f={\boldsymbol{V}}_{f.\mathrm{Bias}}\\ {}{\left.{\boldsymbol{i}}_c(t)={\boldsymbol{i}}_c\left(t-\tau \right)=\dots ={\boldsymbol{i}}_c\left(t-{N}_c\tau \right)={\boldsymbol{I}}_c\right|}_{{\boldsymbol{V}}_{c,\mathrm{Bias}}}\end{array}}\right)}^T\end{array}} $$
where the second-order derivatives of the dynamic voltage and current mapping neural networks fANN and hANN, i.e., the derivatives of the Jacobian matrices \( \partial {\boldsymbol{h}}_{\mathrm{ANN}}^T/\partial {\boldsymbol{i}}_c\left(t- l\tau \right) \) and \( \partial {\boldsymbol{f}}_{\mathrm{ANN}}^T/\partial {\boldsymbol{v}}_f\left(t- k\tau \right) \) with respect to w2, i and w1, i, respectively, can be obtained by adjoint neural network back-propagation [17].
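The delay-weighted Jacobian sums that recur in Eqs. (5), (12), and (13) can be assembled from per-buffer Jacobians. The sketch below uses illustrative names; with identity mappings and no direct feed-through, the assembled Y f must reproduce Y c , which gives a quick sanity check:

```python
import numpy as np

def delay_weighted_jacobian(omega, tau, jacobians):
    """Sum over delay buffers of exp(-j*omega*l*tau) * J_l, as appears in
    Eqs. (5), (12), and (13); jacobians[l] is the 2x2 Jacobian for buffer l."""
    return sum(np.exp(-1j * omega * l * tau) * J
               for l, J in enumerate(jacobians))

def map_y(omega, tau, Yc, dh_dic, df_dvf, dh_dvf):
    """Eq. (5): Y_f = H(omega) @ Y_c @ F(omega) + dh/dv_f at the bias point."""
    Hm = delay_weighted_jacobian(omega, tau, dh_dic)
    Fm = delay_weighted_jacobian(omega, tau, df_dvf)
    return Hm @ Yc @ Fm + dh_dvf
```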
(3) HB sensitivity: the sensitivities of the large-signal harmonic current of the general Neuro-SM model with respect to w1, i and w2, i at a generic harmonic frequency ω k , k = 0, 1, 2, …, N H can be described in functional form as
$$ \frac{\partial {\boldsymbol{I}}_f\left({\omega}_k\right)}{\partial {w}_{1,i}}=\frac{1}{N_T}\sum \limits_{n=0}^{N_T-1}\left(\sum \limits_{m=0}^{N_c}\frac{\partial {\boldsymbol{h}}_{\mathrm{ANN}}\left({\boldsymbol{v}}_f\left({t}_n\right),{\left.{\boldsymbol{i}}_c\left({t}_n\right)\right|}_{{\boldsymbol{v}}_c\left({t}_n\right)},\dots, {\left.{\boldsymbol{i}}_c\left({t}_n-{N}_c\tau \right)\right|}_{{\boldsymbol{v}}_c\left({t}_n\right)},{\boldsymbol{w}}_2\right)}{\partial {\boldsymbol{i}}_c\left({t}_n- m\tau \right)}\cdot {e}^{- j\omega m\tau}\cdot {\left.{\boldsymbol{G}}_c\left({t}_n\right)\right|}_{{\boldsymbol{v}}_c\left({t}_n\right)}\cdot \frac{\partial {\boldsymbol{f}}_{\mathrm{ANN}}\left({\boldsymbol{v}}_f\left({t}_n\right),{\boldsymbol{v}}_f\left({t}_n-\tau \right),\dots, {\boldsymbol{v}}_f\left({t}_n-{N}_v\tau \right),{\boldsymbol{w}}_1\right)}{\partial {w}_{1,i}}\cdot {W}_N\left(n,k\right)\right) $$
$$ \frac{\partial {\boldsymbol{I}}_f\left({\omega}_k\right)}{\partial {w}_{2,i}}=\frac{1}{N_T}\sum \limits_{n=0}^{N_T-1}\frac{\partial {\boldsymbol{h}}_{\mathrm{ANN}}\left({\boldsymbol{v}}_f\left({t}_n\right),{\left.{\boldsymbol{i}}_c\left({t}_n\right)\right|}_{{\boldsymbol{v}}_c\left({t}_n\right)},\dots, {\left.{\boldsymbol{i}}_c\left({t}_n-{N}_c\tau \right)\right|}_{{\boldsymbol{v}}_c\left({t}_n\right)},{\boldsymbol{w}}_2\right)}{\partial {w}_{2,i}}\cdot {W}_N\left(n,k\right) $$
where G c (t n ), evaluated at the mapped coarse model voltage v c (t n ), is the nonlinear conductance matrix of the existing coarse model at time point t n .
Sensitivity analysis of the general Neuro-SM model with respect to coarse model parameters
Let x be a generic variable in the coarse model. In case a coarse model parameter needs to be treated as a variable in circuit optimization, it is useful to obtain the sensitivities of the DC, bias-dependent S parameter, and large-signal HB responses of the general Neuro-SM model with respect to the generic optimization variable x.
(1) DC sensitivity: the sensitivity of If, DC with respect to x is derived as
$$ {\displaystyle \begin{array}{l}\frac{\partial {\boldsymbol{I}}_{f,\mathrm{DC}}}{\partial x}={\left(\frac{\partial {\boldsymbol{I}}_{f,\mathrm{DC}}^T}{\partial {\boldsymbol{I}}_{c,\mathrm{DC}}}\right)}^T\cdot \frac{\partial {\boldsymbol{I}}_{c,\mathrm{DC}}^T}{\partial x}\\ {}={\left(\frac{\partial {\boldsymbol{h}}_{\mathrm{ANN}}^T\left({\boldsymbol{V}}_{f,\mathrm{DC}},\overset{N_c+1}{\overbrace{{\left.{\boldsymbol{I}}_{c,\mathrm{DC}}\right|}_{{\boldsymbol{V}}_{c,\mathrm{DC}}},{\boldsymbol{I}}_{c,\mathrm{DC}}{\left|{}_{{\boldsymbol{V}}_{c,\mathrm{DC}}},\dots, {\boldsymbol{I}}_{c,\mathrm{DC}}\right|}_{{\boldsymbol{V}}_{c,\mathrm{DC}}}}},{\boldsymbol{w}}_2\right)\kern0.6em }{\partial {\boldsymbol{I}}_{c,\mathrm{DC}}}\right)}^T\cdot {\left.\frac{\partial {\boldsymbol{I}}_{c,\mathrm{DC}}^T}{\partial x}\right|}_{{\boldsymbol{V}}_{c,\mathrm{DC}}}\end{array}} $$
where \( \partial {\boldsymbol{I}}_{c,\mathrm{DC}}^T/\partial x \) is the DC current response due to changes in coarse model variable x evaluated at the mapped bias Vc, DC.
(2) S parameter sensitivity: the S parameter sensitivity with respect to the coarse model variable x can also be calculated by converting the Y parameter sensitivity. The Y parameter sensitivity is given as
$$ {\displaystyle \begin{array}{l}\frac{\partial {\boldsymbol{Y}}_f\left(\omega \right)}{\partial x}\\ {}=\sum \limits_{r=1,2}\sum \limits_{m=0}^{N_c}\left({\left(\sum \limits_{l=0}^{N_c}{\left.{e}^{- j\omega l\tau}\cdot \frac{\partial^2{\boldsymbol{h}}_{\mathrm{ANN}}^T\left({\boldsymbol{v}}_f(t),{\boldsymbol{i}}_c(t),{\boldsymbol{i}}_c\left(t-\tau \right),\dots, {\boldsymbol{i}}_c\left(t-{N}_c\tau \right),{\boldsymbol{w}}_2\right)}{\partial {\boldsymbol{i}}_c\left(t- l\tau \right)\partial {i}_{cr}\left(t- m\tau \right)}\right|}_{\begin{array}{l}{\boldsymbol{v}}_f={\boldsymbol{V}}_{f.\mathrm{Bias}}\\ {}{\left.{\boldsymbol{i}}_c(t)={\boldsymbol{i}}_c\left(t-\tau \right)=\dots ={\boldsymbol{i}}_c\left(t-{N}_c\tau \right)={\boldsymbol{I}}_c\right|}_{{\boldsymbol{V}}_{c,\mathrm{Bias}}}\end{array}}\right)}^T\cdot {e}^{- j\omega m\tau}\cdot \frac{\partial {i}_{cr}}{\partial x}\right)\\ {}\kern0.7em \cdot {\left.{\boldsymbol{Y}}_c\left(\omega \right)\right|}_{{\boldsymbol{V}}_{c,\mathrm{Bias}}}\cdot {\left(\sum \limits_{k=0}^{N_v}{e}^{- j\omega k\tau}\cdot {\left.\frac{\partial {\boldsymbol{f}}_{\mathrm{ANN}}^T\left({\boldsymbol{v}}_f(t),{\boldsymbol{v}}_f\left(t-\tau \right),\dots, {\boldsymbol{v}}_f\left(t-{N}_v\tau \right),{\boldsymbol{w}}_1\right)}{\partial {\boldsymbol{v}}_f\left(t- k\tau \right)}\right|}_{{\boldsymbol{v}}_f(t)={\boldsymbol{v}}_f\left(t-\tau \right)=\dots ={\boldsymbol{v}}_f\left(t-{N}_v\tau \right)={\boldsymbol{V}}_{f,\mathrm{Bias}}}\right)}^T\\ {}+{\left(\sum \limits_{l=0}^{N_c}{\left.{e}^{- j\omega l\tau}\cdot \frac{\partial {\boldsymbol{h}}_{\mathrm{ANN}}^T\left({\boldsymbol{v}}_f(t),{\boldsymbol{i}}_c(t),{\boldsymbol{i}}_c\left(t-\tau \right),\dots, {\boldsymbol{i}}_c\left(t-{N}_c\tau \right),{\boldsymbol{w}}_2\right)}{\partial {\boldsymbol{i}}_c\left(t- l\tau \right)}\right|}_{\begin{array}{l}{\boldsymbol{v}}_f={\boldsymbol{V}}_{f.\mathrm{Bias}}\\ {}{\left.{\boldsymbol{i}}_c(t)={\boldsymbol{i}}_c\left(t-\tau \right)=\dots 
={\boldsymbol{i}}_c\left(t-{N}_c\tau \right)={\boldsymbol{I}}_c\right|}_{{\boldsymbol{V}}_{c,\mathrm{Bias}}}\end{array}}\right)}^T\cdot {\left.\frac{\partial {\boldsymbol{Y}}_c\left(\omega \right)}{\partial x}\right|}_{{\boldsymbol{V}}_{c,\mathrm{Bias}}}\\ {}\kern0.6em \cdot {\left(\sum \limits_{k=0}^{N_v}{e}^{- j\omega k\tau}\cdot {\left.\frac{\partial {\boldsymbol{f}}_{\mathrm{ANN}}^T\left({\boldsymbol{v}}_f(t),{\boldsymbol{v}}_f\left(t-\tau \right),\dots, {\boldsymbol{v}}_f\left(t-{N}_v\tau \right),{\boldsymbol{w}}_1\right)}{\partial {\boldsymbol{v}}_f\left(t- k\tau \right)}\right|}_{{\boldsymbol{v}}_f(t)={\boldsymbol{v}}_f\left(t-\tau \right)=\dots ={\boldsymbol{v}}_f\left(t-{N}_v\tau \right)={\boldsymbol{V}}_{f,\mathrm{Bias}}}\right)}^T\end{array}} $$
where ∂Y c /∂x is the sensitivity of the Y parameters of the coarse model with respect to x, and ∂i cr /∂x, r = 1, 2, is the derivative of the coarse model current with respect to x, which can be calculated by coarse model sensitivity analysis.
(3) HB sensitivity: the sensitivity of the harmonic current of the general Neuro-SM model with respect to x at a generic harmonic frequency ω k , k = 0, 1, …, N H is shown in Eq. (18), where ∂i c (t n )/∂x is the sensitivity of the nonlinear current of the coarse model with respect to x at time sample t n .
$$ \frac{\partial {\boldsymbol{I}}_f\left({\omega}_k\right)}{\partial x}=\frac{1}{N_T}\sum \limits_{n=0}^{N_T-1}\left(\sum \limits_{m=0}^{N_c}\frac{\partial {\boldsymbol{h}}_{\mathrm{ANN}}\left({\boldsymbol{v}}_f\left({t}_n\right),{\left.{\boldsymbol{i}}_c\left({t}_n\right)\right|}_{{\boldsymbol{v}}_c\left({t}_n\right)},{\left.{\boldsymbol{i}}_c\left({t}_n-\tau \right)\right|}_{{\boldsymbol{v}}_c\left({t}_n\right)},\dots, {\left.{\boldsymbol{i}}_c\left({t}_n-{N}_c\tau \right)\right|}_{{\boldsymbol{v}}_c\left({t}_n\right)},{\boldsymbol{w}}_2\right)}{\partial {\boldsymbol{i}}_c\left({t}_n- m\tau \right)}\cdot {e}^{- j\omega m\tau}\cdot \frac{\partial {\boldsymbol{i}}_c\left({t}_n\right)}{\partial x}\cdot {W}_N\left(n,k\right)\right) $$
Proposed training algorithm for the general Neuro-SM model
Training is the key step in determining the general Neuro-SM model. The model development process consists of two phases: initial training and formal training.
Initial training
Before the nonlinear device data from simulation or measurement are used for formal training, the general Neuro-SM model is first initialized to be equal to the original coarse model. In this case, the dynamic voltage and current mapping neural networks are initialized to learn unit mappings, i.e., to learn the relationships vc1(t) = vf1(t), vc2(t) = vf2(t), ic1(t) = if1(t), and ic2(t) = if2(t) over the entire operating range of the nonlinear device.
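One simple way to realize such a unit-mapping initialization for a one-hidden-layer tanh network is to scale the hidden layer by a small ε so that tanh operates in its linear region, and undo the scaling at the output. This is an illustrative construction, not necessarily the training procedure used by the authors:

```python
import numpy as np

def init_unit_mapping(dim, hidden, eps=1e-3, seed=0):
    """Weights making a tanh network approximate the identity map x -> x,
    so the untrained Neuro-SM initially equals the coarse model.

    tanh(eps*u) ~ eps*u for small eps, so W1 = eps*A and W2 = pinv(A)/eps
    give W2 @ tanh(W1 @ x) ~ pinv(A) @ A @ x = x (A with full column rank,
    which requires hidden >= dim)."""
    rng = np.random.default_rng(seed)
    A = rng.normal(size=(hidden, dim))
    return eps * A, np.zeros(hidden), np.linalg.pinv(A) / eps, np.zeros(dim)

def tdnn(x, W1, b1, W2, b2):
    return W2 @ np.tanh(W1 @ x + b1) + b2
```

In practice, the unit mappings could also be learned by briefly training each network on samples (x, x) drawn from the device's operating range.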
Formal training
In this phase, the weights of the dynamic voltage and current mapping neural networks, i.e., w 1 and w 2 , are trained such that the overall training error of the general Neuro-SM model is reduced to satisfy the specifications. The overall training error for combined DC, small-signal S parameter, and large-signal HB training is defined as the total difference between the nonlinear device data and the corresponding responses of the general Neuro-SM model:
$$ {\displaystyle \begin{array}{l}E\left({\boldsymbol{w}}_1,{\boldsymbol{w}}_2\right)=\frac{1}{2}\sum \limits_{k=1}^{N_{V_{f2}}}\sum \limits_{l=1}^{N_{V_{f1}}}{\left\Vert \boldsymbol{A}\cdot \left(\boldsymbol{I}\left({V}_{Gl},{V}_{Dk},{\boldsymbol{w}}_1,{\boldsymbol{w}}_2\right)-{\boldsymbol{I}}_{Dl}^k\right)\right\Vert}^2\\ {}\kern1.6em +\frac{1}{2}\sum \limits_{k=1}^{N_{V_{f2}}}\sum \limits_{l=1}^{N_{V_{f1}}}\sum \limits_{j=1}^{N_{\mathrm{freq}}}{\left\Vert \boldsymbol{B}\cdot \left(\boldsymbol{S}\left({V}_{Gl},{V}_{Dk},{\omega}_j,{\boldsymbol{w}}_1,{\boldsymbol{w}}_2\right)-{\boldsymbol{S}}_{Dlj}^k\right)\right\Vert}^2\\ {}\kern1.6em +\frac{1}{2}\sum \limits_{k=1}^{N_{V_{f2}}}\sum \limits_{l=1}^{N_{V_{f1}}}\sum \limits_{m=1}^{N_H}\sum \limits_{n=1}^{N_P}{\left\Vert \boldsymbol{C}\cdot \left(\boldsymbol{HB}\left({V}_{Gl},{V}_{Dk},{\omega}_m,{P}_n,{\boldsymbol{w}}_1,{\boldsymbol{w}}_2\right)-{\boldsymbol{HB}}_{Dkl}^{mn}\right)\right\Vert}^2\end{array}} $$
where I(.), S(.), and HB(.) are the DC, bias-dependent S parameter, and HB responses of the general Neuro-SM model, respectively. Taking FET modeling as an example, the vector I(.) contains the gate and drain currents If1 and If2, which can be computed by Eq. (3). The vector S(.) is obtained from the Y matrix defined by Eq. (5). The HB responses of the general Neuro-SM model, i.e., HB(.), can be calculated by Eq. (7). I D , S D , and HB D represent the DC current, small-signal S parameter, and large-signal HB responses of the modeled device, respectively. The subscripts k \( \left(k=1,2,\dots, {N}_{V_{f2}}\right) \), l \( \left(l=1,2,\dots, {N}_{V_{f1}}\right) \), j (j = 1, 2, …, Nfreq), m (m = 1, 2, …, N H ), and n (n = 1, 2, …, N P ) denote the indices of Vf2, Vf1, frequency, harmonic frequency, and input power level, respectively. \( {N}_{V_{f1}} \), \( {N}_{V_{f2}} \), Nfreq, N H , and N P are the total numbers of Vf1 points, Vf2 points, frequencies, harmonic frequencies, and input power levels, respectively. The diagonal matrices A, B, and C contain the scaling factors, defined as the inverses of the minimum-to-maximum ranges of the I D data, S D data, and HB D data, respectively. The training error calculation of the general Neuro-SM model is further illustrated in Fig. 2: Fig. 2a shows the error calculation for combined DC and small-signal S parameter training, and Fig. 2b for large-signal HB training.
Block diagram for error calculation of the general Neuro-SM model. a Error calculation for combined dc and small-signal S parameter training. b Error calculation for large-signal HB training
The objective of model training is to minimize the error E defined in (19) by optimizing w 1 and w 2 . In general, a gradient-based training algorithm is used. After training, the general Neuro-SM model with appropriate numbers of hidden neurons and delay buffers can accurately represent the nonlinear behavior of the modeled device.
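The scalar objective of Eq. (19) can be sketched as a sum of scaled squared residuals. In this illustrative simplification the diagonal scaling matrices A, B, and C are collapsed to one per-quantity scalar each, taken as the inverse of the data's min-to-max magnitude range:

```python
import numpy as np

def neuro_sm_error(pairs):
    """Eq. (19)-style objective: 0.5 * sum of squared scaled residuals over
    the DC, S-parameter, and HB (model, data) array pairs.

    Each data set is scaled by the inverse of its min-to-max magnitude range,
    mimicking the diagonal scaling matrices A, B, and C."""
    total = 0.0
    for model, data in pairs:
        span = np.ptp(np.abs(data))           # min-to-max range of |data|
        scale = 1.0 / span if span > 0 else 1.0
        total += 0.5 * np.sum(np.abs(scale * (model - data)) ** 2)
    return total
```

The scaling keeps the DC, S parameter, and HB terms at comparable magnitudes so that no single data type dominates the gradient during training.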
The proposed Neuro-SM model, after being trained over a specific range, represents the nonlinear behavior of the microwave device very well within the training region. However, when the model is used over a wider range than the training range, inappropriate derivative information outside the training range may mislead the iterative process into slow convergence or even divergence during large-signal simulation. One way to solve this divergence problem is to use an appropriate extrapolation technique. For the general Neuro-SM technique, a simple and effective extrapolation technique is used to improve the convergence of the model [18].
For simplicity, the proposed general Neuro-SM technique is formulated for 2-port field-effect transistor (FET) modeling. The approach can be further extended to an n-port network, where all the notations and equations are extended accordingly. After this generalization, the proposed general Neuro-SM technique has the potential to be used for developing models of microwave devices with trapping effects.
The format of the general Neuro-SM model presented so far maps the voltage input signals between the coarse and fine models. Hence, the approach presented so far is applicable to modeling voltage-controlled devices, such as FETs and HEMTs. It is possible to extend the method to a mixed input mapping case, where the dynamic input mappings act on a mixture of port voltage and current signals. In that way, the approach can be extended to modeling current-controlled devices, such as HBTs.
The frequency limit of the proposed general Neuro-SM model depends on the frequency limit of the training data. For example, if the frequency in the training data extends to millimeter-wave bands, the proposed general mapping becomes even more important because of the need to capture capacitive effects, non-quasi-static effects, and nonlinear effects in the model. In this case, more hidden neurons and time delay buffers may be needed to guarantee the accuracy of the proposed general Neuro-SM model.
A pHEMT modeling using the proposed general Neuro-SM method
This example illustrates the use of the general Neuro-SM for modeling a real 2 × 50 μm GaAs pHEMT device. The training and test data are obtained from measurement. An enhanced Angelov model, including a thermal subcircuit to model the self-heating effect of the device proposed in [19], is used as the existing coarse model. Even though the parameters of the enhanced Angelov model are extracted as accurately as possible, there are still distinct differences between the model and the measured data. Thus, Neuro-SM is used to bridge the gap between the coarse model and the measured data. We then apply the previously published Neuro-SM techniques, such as Neuro-SM with the output mapping [13] and dynamic Neuro-SM [14], to obtain more accurate models. After training, the accuracy of the two Neuro-SM models is clearly improved compared to that of the coarse model, as shown in Fig. 3. However, the previous Neuro-SM techniques at their best are still insufficient to achieve the desired accuracy. Therefore, our proposed general Neuro-SM is used to obtain a more accurate model.
Comparison between the pHEMT device data, coarse model, and three Neuro-SM models. a DC. b–e S parameters at two test bias points (0.7 V, 2.4 V) and (0.3 V, 5.2 V). f HB at different input power levels (−10 to 3 dBm)
Training was first done in NeuroModelerPlus [20] using DC and bias-dependent S-parameter data for 400 iterations. Then, training refinement was done using combined DC, bias-dependent S-parameter, and HB data at 189 different biases for 3600 iterations. Harmonic data used for HB training were measured at a 7.5 GHz fundamental frequency and different input power levels (−10 to 3 dBm). The time delay parameters are both 0.008 ns. The number of hidden neurons for both the voltage and current mapping neural networks is 30.
After training, we compared the DC, bias-dependent S-parameter, and large-signal HB responses of the pHEMT device with those computed from the coarse model, Neuro-SM with the output mapping [13], dynamic Neuro-SM [14] with 5 delay buffers and 30 hidden neurons, and the proposed general Neuro-SM model with 5 delay buffers and 30 hidden neurons for both the dynamic voltage and current mapping neural networks, as shown in Fig. 3. Figure 3a, b–e, and f show the comparisons of DC responses, S parameters at two test bias points (0.7 V, 2.4 V) and (0.3 V, 5.2 V), and HB responses at different input power levels (−10 to 3 dBm), respectively. As observed from Fig. 3, the responses computed from the proposed general Neuro-SM are closest to the data among all four models in this comparison. We obtain further improvement in model accuracy using the general Neuro-SM technique because of the additional and useful degrees of freedom provided by the new dynamic current mappings at the gate and the drain in the general model. The increased accuracy of the general Neuro-SM model helps to improve the accuracy of circuit and system simulation, such as simulation to predict the power performance and linearity of high-frequency PA designs.
Two important factors impact the accuracy of the dynamic Neuro-SM model and the proposed general Neuro-SM model: the number of hidden neurons and the number of delay buffers. To investigate this further, we compared the training and test errors of the dynamic Neuro-SM and general Neuro-SM with different numbers of delay buffers and hidden neurons, as shown in Table 1. As seen in Table 1, a general Neuro-SM with 30 hidden neurons and 5 delay buffers for both the dynamic voltage and current mapping neural networks is suitable for this example.
Table 1 Training and test error comparison of coarse model, dynamic Neuro-SM model, and the proposed general Neuro-SM model after combined DC, S parameter, and HB training
The proposed general Neuro-SM model can be conveniently implemented into the existing circuit simulators such as Keysight ADS for high-level circuit and system design. Figure 4 shows the proposed general Neuro-SM model structure in ADS. The time delay parameter is 0.08 ns. In this figure, the dynamic voltage mapping neural networks are embedded as the functions in two 7-port symbolically defined devices (SDDs), i.e., SDD7P1, and SDD7P2. Similarly, the dynamic current mapping neural networks are embedded as the functions in two 9-port SDDs, i.e., SDD9P1 and SDD9P2. Time delay voltage and current signals can be obtained using voltage controlled voltage sources with delay parameters, i.e., SRC1~SRC8. After implementing the general Neuro-SM model into ADS, we have also compared simulation speed between coarse model, dynamic Neuro-SM, and the proposed general Neuro-SM model on an Intel i5-3230M 2.6 GHz computer as shown in Table 2. The simulation was performed by Monte Carlo analysis of 200 HB simulations. As seen in Table 2, the simulation time is 48.32 s using coarse model, compared to 57.17 s using general Neuro-SM, showing that the simulation speed of the proposed general Neuro-SM is acceptable in view of its good accuracy.
Structure of the general Neuro-SM model with two delay buffers in ADS
Table 2 Model simulation time comparison between coarse model, dynamic NEURO-SM, and the general Neuro-SM model
This paper has presented a general Neuro-SM technique for nonlinear device modeling. By modifying the dynamic current and dynamic voltage relationships in the existing coarse model, the proposed general Neuro-SM model can exceed the accuracy limit over the coarse model, the Neuro-SM model with the output mapping, and the dynamic Neuro-SM model. Compared to previously published Neuro-SM, the proposed general Neuro-SM has demonstrated much improved performance in terms of accuracy by a pHEMT modeling example. The general Neuro-SM model can be applied to microwave circuit and system design.
Z Li, Y Chen, H Shi, et al., NDN-GSM-R: a novel high-speed railway communication system via named data networking. EURASIP J. Wirel. Commun. Netw. 48(1), 1–5 (2016)
H Shi, Z Li, D Liu, et al., Efficient method of two-dimensional DOA estimation for coherent signals. EURASIP J. Wirel. Commun. Netw. 60, 1–10 (2017)
F Zhao, L Wei, H Chen, Optimal time allocation for wireless information and power transfer in wireless powered communication systems. IEEE Trans. Veh. Technol. 65(3), 1830–1835 (2016)
D Liu, Z Li, X Guo, et al., DOA estimation for wideband chip with a few snapshots. EURASIP J. Wirel. Commun. Netw. 28, 1–7 (2017)
F Zhao, B Li, H Chen, et al., Joint beamforming and power allocation for cognitive MIMO systems under imperfect CSI based on game theory. Wirel. Pers. Commun. 73(3), 679–694 (2013)
F Zhao, W Wang, H Chen, et al., Interference alignment and game-theoretic power allocation in MIMO heterogeneous sensor networks communications. Signal Process. 126(9), 173–179 (2016)
Z Li, L Song, H Shi, Approaching the capacity of K-user MIMO interference channel with interference counteraction scheme. Ad Hoc Netw. 58(4), 286–291 (2017)
F Zhao, H Nie, H Chen, Group buying spectrum auction algorithm for fractional frequency reuses cognitive cellular systems. Ad Hoc Netw. 58(4), 239–246 (2017)
F Zhao, X Sun, H Chen, et al., Outage performance of relay-assisted primary and secondary transmissions in cognitive relay networks. EURASIP J. Wirel. Commun. Netw. 60(1) (2014)
L Zhang, J Xu, M Yagoub, et al., Neuro-space mapping technique for nonlinear device modeling and large-signal simulation (IEEE MIT-S Int. Microwave Symp, Philadelphia, PA, 2003), pp. 173–176
Q Zhang, K Gupta, V Devabhaktuni, Artificial neural networks for RF and microwave design: From theory to practice. IEEE Transaction on Microwave Theory and Techniques 51(4), 1339–1350 (2003)
J Bandler, Q Cheng, S Dakroury, et al., Space mapping: the state of the art. IEEE Transaction on Microwave Theory and Techniques 52(1), 337–361 (2004)
L Zhu, K Liu, Q Zhang, et al., An enhanced analytical neuro-space mapping method for large-signal microwave device modeling (IEEE MIT-S Int. Microwave Symp. Dig, Montreal, QC, 2012), pp. 1–3
L Zhu, Q Zhang, K Liu, et al., A novel dynamic neuro-space mapping approach for nonlinear microwave device modeling. IEEE Microwave and Wireless Components Letters 26(2), 131–133 (2016)
L Zhang, J Xu, MC Yagoub, et al., Efficient analytical formulation and sensitivity analysis of neuro-space mapping for nonlinear microwave device modeling. IEEE Transaction on Microwave Theory and Techniques 53(9), 2752–2767 (2005)
Q Song, J Spall, Y Soh, et al., Robust neural network tracking controller using simultaneous perturbation stochastic approximation. IEEE Transaction on Neural Network 19(5), 817–835 (2008)
Q Zhang, K Gupta, Neural networks for RF and microwave design (Artech House, Boston, MA, 2000)
L Zhang, QJ Zhang, Simple and effective extrapolation technique for neural-based microwave modeling. IEEE Microwave Component Letter 20(6), 301–303 (2010)
L Liu, J Ma, G Ng, Electrothermal large-signal model of III-V FETs including frequency dispersion and charge conservation. IEEE Transaction on Microwave Theory and Techniques 57(12), 3106–3117 (2009)
NeuroModelPlus_V2.1E, Q. J. Zhang, Dept. of Electronics, Carleton University, Ottawa, ON, Canada.
The authors would like to thank Prof. Q. J. Zhang at Carleton University, Ottawa, ON, Canada, for valuable discussions and insights throughout this work.
This work is supported by the Fundamental Research Funds for Universities in Tianjin (No. 2016CJ13), partly supported by the Key project of Tianjin Natural Science Foundation (No. 16JCZDJC38600), National Natural Science Foundation of China (No. 61601494, 61602346), and the Research Forums Cooperation Project of ZTE Corporation (2016ZTE04-09).
The training and test data of the microwave transistor is obtained from measurement and can be shared if it is necessary.
School of Control and Mechanical Engineering, Tianjin Chengjian University, Tianjin, 300384, China
Lin Zhu, Jian Zhao & Lei Pan
Tianjin Key Laboratory of Wireless Mobile Communications and Power Transmission, Tianjin Normal University, Tianjin, 300387, China
Zhuo Li
College of Electronic and Communication Engineering, Tianjin Normal University, Tianjin, 300387, China
School of Microelectronics, Tianjin University, Tianjin, 300072, China
Wenyuan Liu
Chengdu Ganide Technology Co., Ltd., Chengdu, 610091, China
Haifeng Wu
Shijiazhuang Mechanical Engineering College, Shijiazhuang, 050003, China
Deliang Liu
Jian Zhao
Lei Pan
The authors have contributed jointly to all parts on the preparation of this manuscript. LZ (first author) and JZ contributed to the structure and sensitivity analysis of the general Neuro-SM model. LZ (third author), WL and LP contributed to the training algorithm development. HW and DL contributed to the analysis of simulation results. All authors read and approved the final manuscript.
Correspondence to Lin Zhu.
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
Zhu, L., Zhao, J., Li, Z. et al. A general neuro-space mapping technique for microwave device modeling. J Wireless Com Network 2018, 37 (2018). https://doi.org/10.1186/s13638-018-1034-4
Space mapping
Neuro-SM
Microwave device modeling
Radar and Sonar Networks
Photovoltaic solar power
The physics of photovoltaics
Photovoltaic solar power or PV is a form of solar power in which solar radiation is converted into direct current electricity using semiconductors that exhibit the photovoltaic effect. Photovoltaic power generation employs solar panels with 'cells' containing these semiconductors.
As of 2011, photovoltaic solar power had a capacity of 70 gigawatts. Only a fraction of this capacity is actually used, thanks to clouds and night. However, between 2004 and 2009, the capacity increased at an annual average rate of 60%. From 2009 to 2010 the capacity rose from 23 to 40 gigawatts, an increase of 74%. From 2010 to 2011 it rose from 40 to 70 gigawatts, an increase of 75%.
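The percentage increases quoted above follow directly from the capacity figures; a quick arithmetic check:

```python
# Verify the year-on-year growth percentages quoted in the text.
def pct_increase(old_gw, new_gw):
    return 100.0 * (new_gw - old_gw) / old_gw

growth_2009_2010 = pct_increase(23, 40)   # ~73.9%, quoted as 74%
growth_2010_2011 = pct_increase(40, 70)   # 75%
```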
The following chart shows the rapid growth:
This chart is from the 2010 report here:
REN21, Renewables 2010 Global Status Report, page 19.
and the more recent figures are from:
Quoting from the 2010 report:
Cumulative global PV installations are now nearly six times what they were at the end of 2004. Analysts expect even higher growth in the next four to five years. Thin film's share of the global market increased from 14 percent in 2008 to 19 percent in 2009 for cells, and from 16 to 22 percent for modules.
Germany again became the primary driver of PV installations, more than making up for the Spanish gap with 3.8 GW added — about 54 percent of the global market. This was far above Spain's prior record-breaking addition of 2.4 GW in 2008, and brought Germany's capacity to 9.8 GW by the end of 2009, amounting to 47 percent of existing global solar PV capacity. While Germany has played a major role in advancing PV and driving down costs, its importance will decline as other countries step up their demand and reduce the industry's reliance on a single market.
After its record-breaking year in 2008, the Spanish PV market plummeted to an estimated 70 MW added in 2009, due to a cap on subsidies after the national solar target was exceeded. But there were other sunny spots in Europe. Italy came in a distant second after Germany, installing 710 MW and more than doubling its 2008 additions due to high feed-in tariffs and a good national solar resource; such strong growth is expected to continue. Japan reemerged as a serious player, coming in third with 485 MW installed after reinstating residential rebates and introducing a buyback program for residential rooftop systems.
The United States added an estimated 470 MW of solar PV in 2009, including 40 MW of off-grid PV, bringing cumulative capacity above the 1 GW mark. California accounted for about half of the total, followed by New Jersey with 57 MW added; several other states are expected to pass the 50 MW per year mark in the near future. Residential installations came to 156 MW, a doubling from 2008 thanks in part to removal of the $2,000 cap on the federal Investment Tax Credit and to a 10 percent drop in installed costs relative to 2008.
In Without the Hot Air, David MacKay writes (pp. 39ff) that the typical efficiency of solar power cells is 10%, and double that for the expensive ones on which he bases his UK calculations. There is a theoretical upper bound on efficiency of 68%.
Here is an NREL chart of the efficiency of various experimental solar cells:
Frazer Armstrong and Katherine Blundell, Energy Beyond Oil, Oxford U. Press.
has a photovoltaics chapter written by Michael Graetzel, a PV researcher and advocate. This includes diagrams of total global production of photovoltaic cells up to 2003 (744 MW that year). Laboratory-model PV cells have around 35% efficiency (see Fig 8.3), but you lose 0.5% efficiency per °C in commercial installations (p. 124). Graetzel then talks about generation-1 thin film, single-crystal silicon, generation-2 (low cost), and generation-3 (>33% efficiency) cells, e.g. multi-gap tandem cells, quantum dot cells, and hot electron converters. Generation-2 PV cells would have a payback time of < 1 year compared to > 4 years for generation-1 PV. DSCs (dye-sensitized solar cells) are a generation-3 technology with 11% efficiency today, but they can also be produced cheaply, and they allow conversion of the power to hydrogen (using solar photolysis).
The efficiency and cost tradeoffs are illustrated in the following diagram which comes from the book
Martin A Green Springer 2005 3rd generation Photovoltaics, Advanced Solar Energy Conversion
Harry Atwater of Caltech gave a talk on photovoltaic solar power at this conference:
Conference on the Mathematics of Environmental Sustainability and Green Technology, 29-30 January, 2010, Harvey Mudd College, Claremont, California. Organized by Rachel Levy.
What follows are some notes on that, taken by John Baez as part of week293 of This Week's Finds:
The efficiency of silicon crystal solar cells peaked out at 24% in 2000. Fancy "multijunctions" get up to 40% and are still improving. But they use fancy materials like gallium arsenide, gallium indium phosphide, and rare earth metals like tellurium. The world currently uses 13 terawatts of power. The US uses 3. But building just 1 terawatt of these fancy photovoltaics would use up more rare substances than we can get our hands on:
Gordon B. Haxel, James B. Hedrick, and Greta J. Orris, Rare earth elements - critical resources for high technology, US Geological Survey Fact Sheet 087-02.
For more details, see Mineral resources.
So, if we want solar power, we need to keep thinking about silicon and use as many tricks as possible to boost its efficiency.
There are some limits. In 1961, Shockley and Queisser wrote a paper on the limiting efficiency of a solar cell. It's limited for thermodynamical reasons! Since anything that can absorb energy can also emit it, any solar cell also acts as a light-emitting diode, turning electric power back into light:
W. Shockley and H. J. Queisser, Detailed balance limit of efficiency of p-n junction solar cells, J. Appl. Phys. 32 (1961) 510-519.
Shockley–Queisser limit, Wikipedia.
What are the tricks used to approach this theoretical efficiency? Multijunctions use layers of different materials to catch photons of different frequencies. The materials are expensive, so people use a lens to focus more sunlight on the photovoltaic cell. The same is true even for silicon - see the Umuwa Solar Power Station in Australia. But then the cells get hot and need to be cooled.
Roughening the surface of a solar cell promotes light trapping, by a large factor. Light bounces around ergodically and has more chances to get absorbed and turned into useful power. There are theoretical limits on how well this trick works. But those limits were derived using ray optics, where we assume light moves in straight lines. So, we can beat those limits by leaving the regime where the ray-optics approximation holds good. In other words, make the surface complicated at length scales comparable to the wavelength at light.
For example: we can grow silicon wires from vapor. They can form densely packed structures that absorb more light:
B. M. Kayes, H. A. Atwater, and N. S. Lewis, Comparison of the device physics principles of planar and radial p-n junction nanorod solar cells, J. Appl. Phys. 97 (2005), 114302.
James R. Maiolo III, Brendan M. Kayes, Michael A. Filler, Morgan C. Putnam, Michael D. Kelzenberg, Harry A. Atwater and Nathan S. Lewis, High aspect ratio silicon wire array photoelectrochemical cells, J. Am. Chem. Soc. 129 (2007), 12346-12347.
Also, with such structures the charge carriers don't need to travel so far to get from the n-type material to the p-type material. This also boosts efficiency.
There are other tricks, still just under development. Using quasiparticles called "surface plasmons" we can adjust the dispersion relations to create materials with really low group velocity. Slow light has more time to get absorbed! We can also create "meta-materials" whose refractive index is really strange - like $n = -5$!
Recall that the refractive index of a substance is the inverse of the speed of light in that substance - in units where the speed of light in vacuum equals 1. When light passes from material 1 to material 2, it takes the path of least time - at least in the ray-optics approximation. Using this you can show Snell's law:
$$\frac{\sin(\theta_1)}{\sin(\theta_2)} = \frac{n_2}{n_1}$$

where $n_i$ is the index of refraction in the $i$th material and $\theta_i$ is the angle between the light's path and the line normal to the interface between materials:
Air has an index of refraction close to 1. Glass has an index of refraction greater than 1. So, when light passes from air to glass, it "straightens out": its path becomes closer to perpendicular to the air-glass interface. When light passes from glass to air, the reverse happens: the light bends more. But the sine of an angle can never exceed 1 - so sometimes Snell's law has no solution. Then the light gets stuck! More precisely, it's forced to bounce back into the glass. This is called "total internal reflection", and the easiest way to see it is not with glass, but water. Dive into a swimming pool and look up from below. You'll only see the sky in a limited disk. Outside that, you'll see total internal reflection.
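Snell's law and the total-internal-reflection condition described here are easy to compute directly. The sketch below uses n ≈ 1.33 for water, a standard approximate value chosen for illustration:

```python
import math

def refraction_angle(theta1_deg, n1, n2):
    """Snell's law: return the refracted angle in degrees, or None when
    there is no solution (total internal reflection)."""
    s = n1 * math.sin(math.radians(theta1_deg)) / n2
    if abs(s) > 1.0:
        return None
    return math.degrees(math.asin(s))

# Light passing from water (n ~ 1.33) up into air (n ~ 1.0):
critical = math.degrees(math.asin(1.0 / 1.33))  # ~48.8 deg: edge of the visible "disk of sky"
shallow = refraction_angle(30.0, 1.33, 1.0)     # refracts, bending away from the normal
trapped = refraction_angle(60.0, 1.33, 1.0)     # beyond the critical angle -> None
```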
But negative indices of refraction are much weirder! The light entering such a material will bend backward:
Materials with a negative index of refraction also exhibit a reversed version of the ordinary Goos-Hänchen effect. In the ordinary version, light "slips" a little before reflecting during total internal reflection. The "slip" is actually a slight displacement of the light's wave crests from their expected location — a "phase slip". But for a material of negative refractive index, the light slips backwards. This allows for resonant states where light gets trapped in thin films. Maybe this can be used to make better solar cells.
The First Solar Agua Caliente photovoltaic power plant, with an 290 megawatt alternating current rating, is slated to be the single largest solar generating plant in the world when it starts up in 2013.
Located 65 miles east of the city of Yuma on the former White Wing Ranch, this plant will produce sufficient electricity to power about 100,000 average homes per year, displacing approximately 220,000 metric tons of carbon dioxide per year — the equivalent of taking about 40,000 cars off the road.
First Solar, Agua Caliente Solar Project.
category: energy
Revised on April 10, 2013 02:08:43 by John Baez (99.11.157.81)
Jacobi triple product
In mathematics, the Jacobi triple product is the mathematical identity:
$\prod _{m=1}^{\infty }\left(1-x^{2m}\right)\left(1+x^{2m-1}y^{2}\right)\left(1+{\frac {x^{2m-1}}{y^{2}}}\right)=\sum _{n=-\infty }^{\infty }x^{n^{2}}y^{2n},$
for complex numbers x and y, with |x| < 1 and y ≠ 0.
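Since |x| < 1 makes both sides converge rapidly, the identity can be checked numerically by truncating the product and the sum; for example (in Python):

```python
# Truncated numerical check of the Jacobi triple product for |x| < 1, y != 0.
def jtp_product(x, y, M=200):
    p = 1.0
    for m in range(1, M + 1):
        p *= (1 - x**(2*m)) * (1 + x**(2*m - 1) * y**2) * (1 + x**(2*m - 1) / y**2)
    return p

def jtp_sum(x, y, N=50):
    return sum(x**(n*n) * y**(2*n) for n in range(-N, N + 1))

lhs = jtp_product(0.3, 0.7)
rhs = jtp_sum(0.3, 0.7)
# lhs and rhs agree to double precision
```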
It was introduced by Jacobi (1829) in his work Fundamenta Nova Theoriae Functionum Ellipticarum.
The Jacobi triple product identity is the Macdonald identity for the affine root system of type A1, and is the Weyl denominator formula for the corresponding affine Kac–Moody algebra.
Properties
The basis of Jacobi's proof relies on Euler's pentagonal number theorem, which is itself a specific case of the Jacobi Triple Product Identity.
Let $x=q{\sqrt {q}}$ and $y^{2}=-{\sqrt {q}}$. Then we have
$\phi (q)=\prod _{m=1}^{\infty }\left(1-q^{m}\right)=\sum _{n=-\infty }^{\infty }(-1)^{n}q^{\frac {3n^{2}-n}{2}}.$
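This specialization can be checked numerically in the same truncated fashion (the exponent (3n² − n)/2 is always an integer):

```python
# Truncated numerical check of Euler's pentagonal number theorem.
def euler_product(q, M=200):
    p = 1.0
    for m in range(1, M + 1):
        p *= 1 - q**m
    return p

def pentagonal_sum(q, N=50):
    # 3*n*n - n is always even, so integer division keeps the exponents exact.
    return sum((-1)**n * q**((3*n*n - n) // 2) for n in range(-N, N + 1))
```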
The Jacobi Triple Product also allows the Jacobi theta function to be written as an infinite product as follows:
Let $x=e^{i\pi \tau }$ and $y=e^{i\pi z}.$
Then the Jacobi theta function
$\vartheta (z;\tau )=\sum _{n=-\infty }^{\infty }e^{\pi {\rm {i}}n^{2}\tau +2\pi {\rm {i}}nz}$
can be written in the form
$\sum _{n=-\infty }^{\infty }y^{2n}x^{n^{2}}.$
Using the Jacobi Triple Product Identity we can then write the theta function as the product
$\vartheta (z;\tau )=\prod _{m=1}^{\infty }\left(1-e^{2m\pi {\rm {i}}\tau }\right)\left[1+e^{(2m-1)\pi {\rm {i}}\tau +2\pi {\rm {i}}z}\right]\left[1+e^{(2m-1)\pi {\rm {i}}\tau -2\pi {\rm {i}}z}\right].$
There are many different notations used to express the Jacobi triple product. It takes on a concise form when expressed in terms of q-Pochhammer symbols:
$\sum _{n=-\infty }^{\infty }q^{\frac {n(n+1)}{2}}z^{n}=(q;q)_{\infty }\;\left(-{\tfrac {1}{z}};q\right)_{\infty }\;(-zq;q)_{\infty },$
where $(a;q)_{\infty }$ is the infinite q-Pochhammer symbol.
It enjoys a particularly elegant form when expressed in terms of the Ramanujan theta function. For $|ab|<1$ it can be written as
$\sum _{n=-\infty }^{\infty }a^{\frac {n(n+1)}{2}}\;b^{\frac {n(n-1)}{2}}=(-a;ab)_{\infty }\;(-b;ab)_{\infty }\;(ab;ab)_{\infty }.$
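This form can also be verified numerically with a truncated q-Pochhammer product; the sample values a = 0.3, b = 0.2 (so |ab| < 1) are arbitrary choices for illustration:

```python
# Truncated numerical check of the Ramanujan theta form of the identity.
def qpochhammer(a, q, M=200):
    """(a; q)_infinity, truncated to M factors."""
    p = 1.0
    for k in range(M):
        p *= 1 - a * q**k
    return p

def ramanujan_sum(a, b, N=50):
    return sum(a**(n*(n + 1)//2) * b**(n*(n - 1)//2) for n in range(-N, N + 1))

a, b = 0.3, 0.2   # |ab| = 0.06 < 1
lhs = qpochhammer(-a, a*b) * qpochhammer(-b, a*b) * qpochhammer(a*b, a*b)
rhs = ramanujan_sum(a, b)
```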
Proof
Let $f_{x}(y)=\prod _{m=1}^{\infty }\left(1-x^{2m}\right)\left(1+x^{2m-1}y^{2}\right)\left(1+x^{2m-1}y^{-2}\right)$
Substituting xy for y and multiplying the new terms out gives
$f_{x}(xy)={\frac {1+x^{-1}y^{-2}}{1+xy^{2}}}f_{x}(y)=x^{-1}y^{-2}f_{x}(y)$
Since $f_{x}$ is meromorphic for $|y|>0$, it has a Laurent series
$f_{x}(y)=\sum _{n=-\infty }^{\infty }c_{n}(x)y^{2n}$
which satisfies
$\sum _{n=-\infty }^{\infty }c_{n}(x)x^{2n+1}y^{2n}=xf_{x}(xy)=y^{-2}f_{x}(y)=\sum _{n=-\infty }^{\infty }c_{n+1}(x)y^{2n}$
so that
$c_{n+1}(x)=c_{n}(x)x^{2n+1}=\dots =c_{0}(x)x^{(n+1)^{2}}$
and hence
$f_{x}(y)=c_{0}(x)\sum _{n=-\infty }^{\infty }x^{n^{2}}y^{2n}$
Evaluating $c_{0}(x)$
Showing that $c_{0}(x)=1$ is technical. One way is to set $y=1$ and show both the numerator and the denominator of
${\frac {1}{c_{0}(e^{2i\pi z})}}={\frac {\sum \limits _{n=-\infty }^{\infty }e^{2i\pi n^{2}z}}{\prod \limits _{m=1}^{\infty }(1-e^{2i\pi mz})(1+e^{2i\pi (2m-1)z})^{2}}}$
are weight 1/2 modular under $z\mapsto -{\frac {1}{4z}}$, since they are also 1-periodic and bounded on the upper half plane the quotient has to be constant so that $c_{0}(x)=c_{0}(0)=1$.
Other proofs
A different proof is given by G. E. Andrews based on two identities of Euler.[1]
For the analytic case, see Apostol.[2]
References
1. Andrews, George E. (1965-02-01). "A simple proof of Jacobi's triple product identity". Proceedings of the American Mathematical Society. 16 (2): 333. doi:10.1090/S0002-9939-1965-0171725-X. ISSN 0002-9939.
2. Chapter 14, theorem 14.6 of Apostol, Tom M. (1976), Introduction to analytic number theory, Undergraduate Texts in Mathematics, New York-Heidelberg: Springer-Verlag, ISBN 978-0-387-90163-3, MR 0434929, Zbl 0335.10001
• Peter J. Cameron, Combinatorics: Topics, Techniques, Algorithms, (1994) Cambridge University Press, ISBN 0-521-45761-0
• Jacobi, C. G. J. (1829), Fundamenta nova theoriae functionum ellipticarum (in Latin), Königsberg: Borntraeger, ISBN 978-1-108-05200-9, Reprinted by Cambridge University Press 2012
• Carlitz, L (1962), A note on the Jacobi theta formula, American Mathematical Society
• Wright, E. M. (1965), "An Enumerative Proof of An Identity of Jacobi", Journal of the London Mathematical Society, London Mathematical Society: 55–57, doi:10.1112/jlms/s1-40.1.55
Columnar Thermal Barrier Coatings Produced by Different Thermal Spray Processes
Nitish Kumar1,
Mohit Gupta ORCID: orcid.org/0000-0002-4201-668X1,
Daniel E. Mack2,
Georg Mauer2 &
Robert Vaßen2
Journal of Thermal Spray Technology volume 30, pages 1437–1452 (2021)
Suspension plasma spraying (SPS) and plasma spray-physical vapor deposition (PS-PVD) are the only thermal spray technologies shown to be capable of producing TBCs with columnar microstructures similar to the electron beam-physical vapor deposition (EB-PVD) process but at higher deposition rates and relatively lower costs. The objective of this study was to achieve fundamental understanding of the effect of different columnar microstructures produced by these two thermal spray processes on their insulation and lifetime performance and propose an optimized columnar microstructure. Characterization of TBCs in terms of microstructure, thermal conductivity, thermal cyclic fatigue lifetime and burner rig lifetime was performed. The results were compared with TBCs produced by the standard thermal spray technique, atmospheric plasma spraying (APS). Bondcoats deposited by the emerging high-velocity air fuel (HVAF) spraying were compared to the standard vacuum plasma-sprayed (VPS) bondcoats to investigate the influence of the bondcoat deposition process as well as topcoat–bondcoat interface topography. The results showed that the dense PS-PVD-processed TBC had the highest lifetime, although at an expense of the highest thermal conductivity. The reason for this behavior was attributed to the dense intracolumnar structure, wide intercolumnar gaps and high column density, thus improving the strain tolerance and fracture toughness.
Thermal barrier coatings (TBCs) play a crucial role in modern gas turbine engines used in aero-engine, power generation, and marine applications, protecting the underlying metal substrate from high working temperatures by facilitating a temperature gradient. Recent developments in turbines for the power generation and aviation sectors have led to a point where operating conditions exceed the upper limits of most conventional TBCs (Ref 1). In order to meet the increasing demands of gas turbine technology, one focus of researchers is the development of new TBC architectures. TBCs are typically a bilayer material system consisting of a ceramic topcoat (TC) layer and a metallic bondcoat (BC) layer. The main purpose of the metallic BC is to improve the adhesion between the underlying substrate and the TC and to provide resistance to oxidation (Ref 2,3,4). The ceramic TC is the main insulating layer of the system. 6-8% Yttria-stabilized zirconia (YSZ) is the state-of-the-art TC material used in TBCs. Due to the porosity and also the good ionic conductivity, oxygen can easily diffuse through the ceramic TC; as a result, a slow-growing aluminum oxide film known as the thermally grown oxide (TGO) layer is formed at high operation temperatures from an aluminum-enriched composition of the BC (Ref 5, 6). There are several ways to deposit TBCs. The two most widely used methods to deposit ceramic TCs are atmospheric plasma spray (APS) and electron beam-physical vapor deposition (EB-PVD). TCs deposited by APS typically have a lamellar microstructure with micro-cracks and globular pores; on the other hand, the TCs in EB-PVD-processed TBCs have a strain-tolerant columnar microstructure. EB-PVD-processed columnar-microstructure TBCs exhibit high in-plane strain tolerance, which makes them of high interest.
APS-deposited coatings show lower thermal conductivity than EB-PVD coatings due to the presence of globular pores and interlamellar (micro-)cracks present in the coatings (Ref 7). As compared to APS, EB-PVD TBCs have been reported to show a higher thermal cyclic lifetime due to the presence of a strain-tolerant columnar microstructure (Ref 8). However, APS process shows more operational robustness and economic viability than EB-PVD (Ref 9).
Suspension plasma spray (SPS) is an emerging process that comes with a possibility to deposit coatings with the strain-tolerant columnar microstructure similar to EB-PVD, but with lower thermal conductivity (Ref 10). Plasma spray-physical vapor deposition (PS-PVD) is another evolving technique that evaporates the feedstock to form a coating with a columnar microstructure from the gas phase similar to EB-PVD. SPS and partially also PS-PVD are of commercial interest since these techniques are considerably cheaper than EB-PVD in terms of both running cost and equipment cost (Ref 11,12,13). SPS is a modification of the APS process where the feedstock is in the form of suspension instead of powder. The suspension is made of fine-sized particles of ceramics suspended in a solvent (typically water or ethanol). In conventional APS, it is not possible to deposit powder particle with nanometric or sub-micrometric size due to limitations such as agglomeration of powder particles during storing and feeding into the equipment, and also fine powder particles would not impart enough momentum to penetrate the high-velocity plasma stream (Ref 14, 15). Bernard et al. demonstrated that the SPS TBCs showed lower thermal conductivity compared to EB-PVD as well as APS TBCs (Ref 11). Kaßner et al. reported that SPS TBCs could exhibit a wide range of porosity levels (up to 40%) unlike APS, which greatly reduces the thermal conductivity of SPS TBCs as compared to APS TBCs (Ref 16). Also, Lima et al. tested and compared erosion performance of SPS, EB-PVD and APS coatings and concluded that under the used conditions SPS outperformed EB-PVD and APS coatings (Ref 17).
PS-PVD is a hybrid technique that was developed by Oerlikon Metco AG (Switzerland). This technology is based on low-pressure plasma spraying (LPPS), also known as vacuum plasma spraying (VPS), which is carried out at a pressure of 5-20 kPa (Ref 13). When the pressure in the LPPS system is further reduced to 50-200 Pa, the process is known as very low-pressure plasma spraying (VLPPS), which is used to deposit uniform and thin coatings with a large area of coverage. The PS-PVD system was developed by adding an enhanced electric power input of up to 180 kW to VLPPS; together with the low chamber pressure, this lengthens the plasma jet to more than 2 m with a diameter in the range of 200-400 mm (Ref 18, 19). Using this PS-PVD setup, it is possible to evaporate the powder feedstock material with specific process parameters so that nano-sized condensates are deposited (Ref 18). Since the plasma stream vaporizes the feedstock, PS-PVD permits non-line-of-sight deposition, unlike conventional thermal spray techniques, which can be favorable for coating complex-shaped components (Ref 13). Vapor deposition of the feedstock powder enables the formation of a coating with a strain-tolerant columnar microstructure. In order to obtain the desired columnar microstructures, it is necessary to provide moderate powder feeding rates, a special selection of gases, a powder with low granularity, a large spraying distance and specific gun traverses with the required gas flow characteristics (Ref 19). Góral et al. demonstrated that coatings obtained by PS-PVD have better erosion resistance than conventional APS coatings but lower than EB-PVD-processed TBCs (Ref 19). Similar results were obtained by von Niessen et al. (Ref 20). Also, thermal cyclic fatigue tests were performed by von Niessen et al.
where it was found that TBCs obtained by PS-PVD showed a longer lifetime than EB-PVD-processed TBCs (Ref 20). The burner rig lifetime of PS-PVD TBCs was found to be two times higher than that of conventionally sprayed APS TBCs (Ref 13).
Certainly, the specific results depend on the specific columnar microstructure of the SPS and PS-PVD coatings. In general, SPS and PS-PVD offer high potential for better performance than state-of-the-art TBC manufacturing processes such as APS and, in part, EB-PVD. SPS and PS-PVD are the only thermal spray techniques that can yield a strain-tolerant columnar microstructure similar to the EB-PVD process. This has motivated efforts to identify and distinguish the properties of SPS and PS-PVD TBCs, such as microstructure, thermal conductivity, thermal cyclic fatigue (TCF) lifetime and burner rig lifetime. The objective of this study was to perform a structured comparative analysis of SPS and PS-PVD TBCs with APS TBCs as a reference and to achieve a fundamental understanding of the effect of different columnar microstructures on their performance. Finally, a design for an optimized columnar microstructure is proposed. The BCs in this study were produced by the emerging high-velocity air fuel (HVAF) spraying process and by standard VPS for comparison. It has been highlighted in previous studies that the BC surface topography and deposition technique can have a significant influence on TBC lifetime (Ref 13, 21,22,23). The effect of the BC deposition process (HVAF and VPS) and interface topography on TGO growth and failure mechanisms is discussed in each case.
Experimental Methods
In this study, Inconel 738LC was used as the substrate material. Button-shaped substrates with dimensions of 25.4 mm diameter and 3 mm thickness were utilized for the microstructure analysis and TCF testing. For thermal conductivity measurements, plate-shaped substrates with dimensions of 50 mm × 30 mm × 1.54 mm were used. For burner rig lifetime testing, specific button-shaped substrates with dimensions of 30 mm diameter and 3 mm thickness were used. Nine different microstructures were deposited in this study, as listed in Table 1. TC and BC thicknesses of about 300 μm and 200 μm, respectively, were targeted. The BCs were deposited by the HVAF and VPS processes using NiCoCrAlYHfSi (AMDRY 386, Oerlikon Metco, Switzerland) feedstock powder at University West, Sweden, and Forschungszentrum Jülich, Germany, respectively. The HVAF BCs were sprayed using an M3 supersonic HVAF spray gun (UniqueCoat, Richmond, USA). The VPS BCs and APS TCs were sprayed using a Multicoat system (Oerlikon Metco, Wohlen, Switzerland) and an F4-VB plasma torch at Forschungszentrum Jülich. The SPS TCs were sprayed using a Mettech Axial III gun with a Nanofeed 350 suspension feeder (Northwest Mettech Corp., Vancouver, Canada) at University West. The feedstock material for the SPS TCs was a YSZ suspension in ethanol with a 25% solid load and d50 = 500 nm (Treibacher Industrie AG, Austria). The APS YSZ powder was a 7YSZ Amperit powder (HC Starck Amperit 827.006, d10 = 54 μm, d50 = 80 μm and d90 = 112 μm), and for PS-PVD, the feedstock powder was a 7YSZ produced by Oerlikon Metco designated as M6700. The spray system for the PS-PVD TCs was the same Multicoat system as used for VPS at Forschungszentrum Jülich, however, using the more powerful 03CP torch (Oerlikon Metco). Before depositing the TC, all BCs underwent vacuum heat treatment at 1120 °C and 845 °C for 2 h and 24 h, respectively, in sequence.
Polishing of the bondcoat surface was performed on a semiautomatic single wheel grinder Saphir 550 (ATM Qness GmbH, Mammelzen, Germany) using SiC grinding paper (mesh 1200) and a pressing force of 20 N at a wheel speed of 150 rpm. The final arithmetic mean surface roughness achieved was Ra = 0.05-0.1 µm.
Table 1 Samples analyzed in this study
The sample abbreviations in this study follow a BC (BC property)–TC (TC property) format. In Table 1, sample 1, i.e., V-A(s), is the standard reference sample produced by thermal spraying for comparison with the other TBCs. V-S(p) can be compared with V-A(s) to study the behavior of APS and SPS porous TCs with VPS BCs in the various tests carried out in this study. The behavior of VPS and HVAF BCs during high-temperature exposure can be studied by comparing V-A(s) to H-A(s) and V-S(p) to H-S(p), where the TCs are similar and the only difference is the BC spray process. A comparison of SPS porous and SPS dense TCs can be made from H-S(p) and H-S(d), where both samples have the same HVAF-sprayed BC. Similarly, PS-PVD TCs with porous and dense microstructures can be compared in terms of microstructure and the results obtained from high-temperature exposure.
Microstructure Characterization
For the microstructural characterization of the TBCs, as-sprayed as well as failed samples were first cold mounted with low-viscosity epoxy resin, sectioned along the cross section and then mounted again with high-viscosity epoxy resin, followed by grinding and polishing. The grinding and polishing were carried out using a semiautomatic Buehler AutoMet 300 Pro (Buehler, IL, USA) grinder–polisher system. The microstructure of the polished samples was analyzed by scanning electron microscopy (SEM) using a HITACHI TM3000 (Japan) microscope. For the characterization of the burner rig samples, a SEM from Zeiss (Ultra 55 FEG-SEM, Carl Zeiss Microscopy GmbH, Germany) was used. Top-view SEM micrographs of as-sprayed and failed samples were also taken. The BC and TC thicknesses were measured from cross-sectional SEM micrographs captured at 200× magnification. Ten values were measured at different positions throughout the coating for each of the BC and TC thickness calculations.
Porosity Analysis
The porosity in the as-sprayed TBCs was measured by an image analysis technique using the free public-domain software ImageJ (Ref 24) at two different scales, due to the inherently wide pore size distribution in SPS TBCs (Ref 25). SEM micrographs were captured at 500× and 5000× magnification for all of the TBCs. In the case of columnar TBCs, the low-magnification (500×) micrographs capture the microstructural features (intercolumnar gaps, micrometric pores, large cracks) that contribute to the coarse porosity, whereas the higher-magnification (5000×) micrographs capture the fine-scale porosity inside the columns. Ten cross-sectional SEM micrographs were taken at each magnification for all the TBCs. These micrographs were then imported into the software and converted from grayscale to binary images. The 500× micrographs were processed to retain only porous features larger than 2 μm2 in area, and the 5000× micrographs were processed to retain only fine-scale porous features smaller than 2 μm2 in area. The fine-scale and coarse porosity contents were added to obtain the total porosity.
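The actual analysis above was performed in ImageJ; purely as an illustration of the area-based separation of coarse and fine pore features, the following Python sketch applies the same idea to a binary pore mask using only NumPy. The pixel scale and the example mask are hypothetical, and the connected-component labeling is a deliberately simple 4-connected BFS.

```python
import numpy as np
from collections import deque

def label_regions(binary):
    """4-connected component labeling of a boolean 2-D pore mask (simple BFS)."""
    labels = np.zeros(binary.shape, dtype=int)
    current = 0
    for i, j in zip(*np.nonzero(binary)):
        if labels[i, j]:
            continue                      # pixel already belongs to a feature
        current += 1
        labels[i, j] = current
        queue = deque([(i, j)])
        while queue:
            y, x = queue.popleft()
            for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                if (0 <= ny < binary.shape[0] and 0 <= nx < binary.shape[1]
                        and binary[ny, nx] and not labels[ny, nx]):
                    labels[ny, nx] = current
                    queue.append((ny, nx))
    return labels, current

def porosity_by_scale(binary_pores, um2_per_px, cutoff_um2=2.0):
    """Split the pore area fraction into coarse (> cutoff) and fine features."""
    labels, _ = label_regions(binary_pores)
    areas_px = np.bincount(labels.ravel())[1:]   # pixels per pore feature
    areas_um2 = areas_px * um2_per_px
    total_px = binary_pores.size
    coarse = areas_px[areas_um2 > cutoff_um2].sum() / total_px
    fine = areas_px[areas_um2 <= cutoff_um2].sum() / total_px
    return coarse, fine
```

In the real workflow, the coarse fraction would come from the 500× images and the fine fraction from the 5000× images; here both are read off one mask only to keep the sketch short.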
BC Surface Topography
Two-dimensional surface topography of the BC was measured using a stylus-based surface profilometer, Surftest SJ-301 (Mitutoyo Europe GmbH, Germany), following the ISO 4288 standard. On each BC sample, ten measurements were taken to obtain the average roughness (Ra) values. 3D images of the BC surface were captured using SEM at 500× magnification. This method of 3D image capturing is used only for visualization and is not reliable for quantitative roughness measurements.
Column Density Measurement
The column density of the SPS and PS-PVD TC samples was measured using SEM micrographs taken at 200× magnification along the cross section of the coatings. Five SEM micrographs per coating were used to measure the column density. A straight line was drawn at the mid-height of each TC, and the number of column boundaries (vertical cracks longer than half the coating thickness) intersecting the line was counted. Equation 1 was used to calculate the column density (Ref 25).
$$\text{Column density} = \frac{\text{No. of column boundaries intersecting the line} - 1}{\text{True length of the line}}.$$
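As a simple illustration, Eq 1 amounts to the following one-liner; the boundary count and line length in the example are hypothetical, not measured values from this study.

```python
def column_density(n_boundaries, line_length_mm):
    """Eq 1: columns per mm along a line drawn at mid-coating height."""
    return (n_boundaries - 1) / line_length_mm

# Hypothetical example: 13 column boundaries crossing a 1.5 mm line
print(column_density(13, 1.5))  # 8.0 columns/mm
```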
Thermal Conductivity Measurement
Thermal conductivity was obtained from the thermal diffusivity, which was measured using a Netzsch Laser Flash Apparatus LFA 427 system (Netzsch Gerätebau GmbH, Germany). Measurements were taken on coatings in the as-sprayed state at room temperature. Samples for the LFA measurements were prepared by water-jet cutting 10-mm-diameter samples from the square plates. The samples were coated with a thin layer of graphite before the measurement to enhance absorption by preventing direct transmission of the infrared light pulse through the coating, owing to the coating's transparency at the wavelength used in the laser flash experiment. As the laser pulse is fired at the substrate, it travels through the sample, leading to an increase in temperature that is measured by an InSb infrared detector. This signal is normalized and thus gives the thermal diffusivity according to Eq 2 (Ref 27).
$$\alpha = \frac{0.1388\,L^{2}}{t_{0.5}}$$
where \(\alpha\) is the thermal diffusivity, L is the thickness of the sample and \(t_{0.5}\) is the time taken for the rear-face temperature to reach one-half of its maximum rise. Five such measurements were taken for each sample.
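Eq 2 is the classical half-rise flash formula, and it translates directly into code; the sample thickness and half-rise time below are illustrative numbers, not measured data from this study.

```python
def thermal_diffusivity(thickness_mm, t_half_s):
    """Eq 2 (flash method): alpha = 0.1388 * L^2 / t_0.5.

    thickness_mm : sample thickness L, in mm
    t_half_s     : time for the rear face to reach half its maximum
                   temperature rise, in s
    Returns the thermal diffusivity in mm^2/s.
    """
    return 0.1388 * thickness_mm ** 2 / t_half_s

# Illustrative numbers: a 2 mm sample with t_0.5 = 0.5 s
print(thermal_diffusivity(2.0, 0.5))  # 1.1104 mm^2/s
```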
A three-layer Cowan model (substrate + BC + TC) was used to determine the thermal diffusivity of the TC. The Cowan model treats the TC as an unknown layer in the system and the BC and substrate as known layers. The specific heat capacity and thermal diffusivity used for Inconel 738LC were 0.419 J/g K and 3.5292 mm2/s, respectively; for the HVAF bond coat, the values were 0.476 J/g K and 2.997 mm2/s; and for the VPS bond coat, 0.627 J/g K and 2.133 mm2/s. These values were taken from earlier measurements on bare substrates and on substrates with only a BC, based on separate investigations carried out by the authors' group. The calculation of the thermal conductivity of the topcoat from its thermal diffusivity requires properties such as the specific heat capacity and the coating density. The specific heat capacity of YSZ was previously measured as 0.45 J/g K (Ref 28). The density of the YSZ topcoat was calculated using Eq 3.
$$\rho_{a} = \rho_{b}\left( 1 - P \right)$$
where P is the porosity content of the coating, \(\rho_{a}\) is the apparent density and \(\rho_{b}\) is the bulk density of YSZ (6.1 g/cm3) (Ref 29). The thermal conductivity \(\lambda\) \(\left( \mathrm{W\,m^{-1}\,K^{-1}} \right)\) was then calculated using Eq 4.
$$\lambda = \rho_{a} \cdot \alpha \cdot C_{p}$$
where \(C_{p}\) \(\left( \mathrm{J\,kg^{-1}\,K^{-1}} \right)\) is the specific heat capacity (Ref 22).
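Equations 3 and 4 combine into a short calculation. A convenient detail of the unit choice used here is that g/cm3 × mm2/s × J/(g K) comes out directly in W/(m K), so no conversion factor is needed. The sketch below uses the YSZ constants quoted above; the porosity and diffusivity in the example call are illustrative, not measured values.

```python
YSZ_BULK_DENSITY = 6.1  # g/cm^3, bulk density of YSZ (Ref 29)
YSZ_CP = 0.45           # J/(g K), specific heat capacity of YSZ (Ref 28)

def apparent_density(porosity):
    """Eq 3: rho_a = rho_b * (1 - P), with P as a fraction in [0, 1]."""
    return YSZ_BULK_DENSITY * (1.0 - porosity)

def thermal_conductivity(alpha_mm2_s, porosity, cp=YSZ_CP):
    """Eq 4: lambda = rho_a * alpha * Cp, returned in W/(m K).

    Unit check: g/cm^3 * mm^2/s * J/(g K) = 1e-3 J/(mm s K) = 1 W/(m K),
    so the product needs no conversion factor.
    """
    return apparent_density(porosity) * alpha_mm2_s * cp

# Illustrative: a 20% porous topcoat with alpha = 0.3 mm^2/s
print(thermal_conductivity(0.3, 0.20))  # ~0.66 W/(m K)
```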
Thermal Cyclic Fatigue Lifetime
TCF testing is an accelerated test performed to analyze the performance of TBCs under cyclic heating and cooling. Since the TCF test involves long exposure of the TBCs to high temperatures, significant oxidation of the BC can be observed, leading to the formation of the TGO layer. Due to the heating and cooling cycles, stresses develop in the TBCs resulting from the difference in thermal expansion coefficients of the different layers in the TBC system. The growth of the oxide layer and the CTE mismatch of the different layers are the main driving factors for the failure of TBCs in TCF testing. In this study, the TCF test was performed in an automated furnace (ENTECH ECF 14/16, Ängelholm, Sweden). The samples were heated in the furnace at 1100 °C for 1 h and then rapidly cooled to around 100 °C using compressed air for 10 min. These heating and cooling steps constitute one complete TCF cycle. When the cooling step is completed, the samples return to the furnace for another cycle, and this continues until failure is observed. After each cycle, a camera captures an image of the samples. The TBCs are considered failed when the spallation exceeds 20% of the coated surface. Three samples from each set of coatings were analyzed for the TCF lifetime.
Burner Rig Tests
Burner rig testing is used to mimic the complex thermo-mechanical loading in the gas turbine environment, in which temperature gradient conditions at elevated temperatures are coupled with cyclic heating and cooling at substantial transient rates. This allows studying both the temperature-induced aging of each layer of the TBC system at relevant temperatures and the impact of gradient conditions on the effective stress levels arising from CTE mismatches. During the heating phase, the TC surface of the button-type specimen was exposed to a CH4/O2 flame, while the backside of the substrate was cooled by pressurized air. After each 5 min of heating, the gas burner was removed and the front surface was also cooled by pressurized air, while the backside cooling continued. After 2 min of cooling, the cycle was repeated until failure of the coating, defined as spallation of at least 30% of the coated area. The temperatures of the surface and of the substrate were monitored by means of an LWIR pyrometer and a thermocouple at the substrate's center position, respectively. The fluxes in the gas burner and cooling nozzle were controlled to keep the surface temperature at 1400 °C and the substrate temperature at the thermocouple position at 1050 °C during the heating dwell times. Heating and cooling between the maximum temperature and temperatures below 50 °C were achieved within the order of one minute. At the beginning of each cooling phase, an inversion of the temperature gradient across the TC occurred, with the surface temperature well below the temperature at the TC/BC interface. The temperature at the TC/BC interface was calculated from logged readings of the surface and substrate temperatures by means of a one-dimensional heat flux approximation, considering the thicknesses and thermal conductivities of the TCs, the bond coat and the substrates. Two samples from each set of coatings were analyzed.
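The one-dimensional heat flux approximation treats the layers between the pyrometer and the thermocouple as thermal resistances in series. The sketch below illustrates the idea only; the layer thicknesses and conductivities in the example stack are hypothetical placeholders, not the values used in the actual evaluation.

```python
def interface_temperature(t_surface, t_substrate, layers):
    """Steady-state 1-D series-resistance model.

    layers : list of (thickness_m, conductivity_W_mK), ordered from the TC
             surface down to the thermocouple position.
    Returns the temperature at the interface below the first layer
    (here: the TC/BC interface).
    """
    resistances = [d / k for d, k in layers]          # K m^2 / W per layer
    q = (t_surface - t_substrate) / sum(resistances)  # heat flux, W/m^2
    return t_surface - q * resistances[0]

# Hypothetical stack: 300 um TC (1.0 W/mK), 200 um BC (10 W/mK) and
# 1.5 mm of substrate (20 W/mK) down to the thermocouple position
layers = [(300e-6, 1.0), (200e-6, 10.0), (1.5e-3, 20.0)]
t_int = interface_temperature(1400.0, 1050.0, layers)
print(round(t_int, 1))  # TC/BC interface temperature in deg C
```

With a highly insulating TC, most of the 350 °C drop occurs across the topcoat, which is why the gradient test probes the TC/BC interface at a much lower temperature than the surface.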
Bondcoat Topography
It is essential to analyze the surface topography of the BC to understand its effect on TCF lifetime and failure mechanisms. Different spraying processes impart different surface topographies and roughness to the BC (Ref 22). Figure 1 shows the BC surface profiles (3D images) captured using SEM and the top views of the BCs produced by HVAF and VPS. It can be seen that all three surfaces have unique and distinct surface features. The dark gray regions in Fig. 1(b) and (c) indicate the formation of alumina during the vacuum heat treatment, while the black regions indicate porosity. From Fig. 1(a) and (a-1), it can be noted that the unpolished HVAF BC shows the presence of unmolten particles along with hemispherical hills uniformly spread throughout the BC surface, as indicated by the arrow marks. The reason behind this could be the low process temperature of HVAF, which leads to insufficient melting of larger particles and thus results in poor deformation of the particles (Ref 30). The VPS BC surface is shown in Fig. 1(b) and (b-1). From the 3D topography image, it can be seen that the surface has small sharp hills, as indicated by arrows, that are uniformly spread throughout the surface. The peaks and valleys on the surface of the VPS BC (Fig. 1b-1) might be attributed to the splashing of completely molten particles in combination with the lower kinetic energy of the molten particles in the VPS process (Ref 30). PS-PVD TC deposition requires a smooth or polished BC surface, because a rough BC surface could hinder the growth of a homogeneous columnar microstructure. Thus, polishing of the HVAF BC was performed prior to the deposition of the PS-PVD TCs. For the polished HVAF bond coat, shown in Fig. 1(c) and (c-1), a nearly flat surface with very low roughness can be observed.
Top-view SEM micrographs of BC sprayed by (a) HVAF, (b) VPS along with (c) HVAF polished and 3D profile showing the surface topography of BC, (a-1) HVAF, (b-1) VPS and (c-1) HVAF polished
Figure 2 shows the surface roughness of the BCs. It can be seen that the VPS- and HVAF-processed BCs have similar average roughness (Ra) values, whereas the polished HVAF BC has a value close to 0.1 μm. It should be noted that although the Ra values for the HVAF and VPS BCs are similar, their surface topographies differ.
Surface roughness of BC samples
Coating Microstructure
The as-sprayed SEM micrographs of the cross sections of all the coatings produced in this study are shown in Fig. 3. As expected, the TCs produced by the SPS, PS-PVD and APS processes show different microstructural features. Broadly, it can be observed from Fig. 3(a), (b) and (d) that the APS process results in the formation of a coating with a lamellar microstructure, whereas the SPS and PS-PVD spraying processes led to the formation of coatings with a columnar microstructure. Figure 3(c), (e) and (g) shows typical SPS porous columnar microstructures, and Fig. 3(f) shows the SPS dense/vertically cracked microstructure. While spraying the SPS dense TC, the spray distance and suspension feed rate were kept lower, and the energy (power) supplied was comparatively higher, than for the SPS porous TCs. Utilization of higher energy while spraying leads to strong atomization and complete melting of the particles inside the plasma plume (Ref 31). Also, due to the lower spraying distance, the molten particles arrive at the substrate sooner, at a very high velocity. This leads to a planar deposition structure that induces tensile stresses in the coatings. These tensile stresses are the main driving force for vertical crack growth in the TC (Ref 25). The PS-PVD process results in the formation of a quasi-columnar microstructure, as depicted in Fig. 3(h) and (i). The columnar structure in PS-PVD consists of fine needle-like structures, which is quite distinct from the columnar microstructure obtained by SPS. Both PS-PVD coatings were sprayed at the same net plasma power, however, with different plasma gas compositions, leading to different porosities in the columns, intercolumnar gaps and deposition efficiencies (Ref 32). The roughness of the polished BC seemed to be too low for the particles to adhere to the surface in the SPS process, which resulted in partial spallation of the TC from the BC during the spraying process.
This indicates that a too smooth surface may not be appropriate for SPS coating deposition and that light grit blasting may be necessary to provide the required anchoring. Thus, H(pl)-S(p) was excluded from the further characterization of the TBCs in this study.
As-sprayed SEM micrographs of the TBCs
The column density of the SPS and PS-PVD TBCs is shown in Fig. 4. The difference in column density of the SPS TBCs can be attributed to the difference in TC deposition parameters. The higher column density of PS-PVD TBCs can be attributed to their much narrower column width due to the different deposition process.
Column density of the SPS- and PS-PVD-sprayed TCs
The porosity of the coatings measured at two different scales is summarized in Fig. 5. It can be seen that the fine porosity values for the SPS porous TCs are higher than for the rest of the coatings. Figure 6 shows the nano-sized and submicron pores that contribute to the fine porosity in the SPS porous TC of V-S(p), along with the micron-sized pores that contribute to the coarse porosity. It can be observed that the total porosity contents of H-S(p) and H(pl)-P(p) are the same, but when the contributions of coarse and fine-scale porosity are compared, significant variations can be observed in the fine-scale porosity content. H-S(d) proved to be the TBC with the lowest porosity content. The presence of uniform inter-pass porosity bands was observed in H-S(d); the number of spraying passes directly corresponds to the inter-pass porosity bands across the coating (Ref 26). This shows the ability of the SPS process to produce submicron and nanoscale porosity as well as very dense coatings simply by varying the spray process parameters. A detailed description of the effect of process parameters on the pore size distribution is given in previous investigations carried out by the authors' group (Ref 33).
Comparative distribution of porosity content at two different scales for all the TBCs
Cross-sectional SEM micrographs showing microstructural features of coatings that contribute to coarse and fine scale of porosity (a) H-S(p), (b) high-magnification SEM micrograph of H-S(p), (c) H(pl)-P(p)
In the case of the PS-PVD TBCs, the column gaps are the main contributor to the overall porosity, as shown in Fig. 6. The difference between the fine-scale porosity in H(pl)-P(p) and H(pl)-P(d) appears to be small. The same holds for the APS TCs, where the coarse porosity is mainly due to oblate spheroids and cracks (Ref 34). It has to be stated here that the measurement technique influences the results of the porosity evaluation. This can be clearly seen in the APS coatings: although high magnification is used, and hence most of the fine features are probably detected, features such as micro-cracks are counted toward the large-sized porosity regime. Mercury porosimetry results of the standard coating V-A(s), for example, clearly show that a large number of submicron pores/cracks are present in the coatings (more than 50% of the total porosity) (Ref 35).
Figure 7 shows the thermal conductivity of the as-sprayed TBCs measured at room temperature. From Fig. 7, it is apparent that the microstructural features of the TBCs strongly affect the thermal conductivity. Among all the TBCs in this study, H(pl)-P(p) showed the lowest thermal conductivity [0.5 W/(mK)], whereas H(pl)-P(d) showed the highest [1.6 W/(mK)]. The V-S(p) and H-S(p) TBCs showed lower thermal conductivity than H-S(d) due to their porous columnar microstructure compared to the dense, vertically cracked microstructure of H-S(d). Also, between V-S(p) and H-S(p), V-S(p) has the lower thermal conductivity owing to the higher porosity content of its topcoat, which formed due to the different roughness profile of the VPS bond coat, as observed in previous work (Ref 30). It was discussed in earlier studies that thermal conduction in zirconia occurs mainly by lattice vibrations (phonons) or by radiation (photons) (Ref 38). Radiative heat transfer is significant only at high temperatures (> 1000 K) (Ref 39). Since all the measurements were taken at room temperature, the prime mode of heat transfer is phonon conduction. As mentioned earlier, in APS TBCs the lamellar microstructure has porosity contributed mainly by intra-splat globular pores and by cracks that exist between the flattened lamellae. These features in APS TBCs interrupt phonon conduction by scattering the phonons. In SPS coatings, the wide range (fine-scale and coarse) of porosity affects the thermal conductivity, as heat flux is not possible through the pore volume (Ref 29). Thus, the lower thermal conductivity values of the SPS porous and APS TBCs can be attributed to their microstructural features, as discussed above. It is interesting to note that the higher porosity of the porous APS coating does not influence the thermal conductivity values radically. This might be related to the higher amount of fine micro-cracks in the standard coating, which reduce thermal conductivity more effectively than globular pores.
H-S(d), i.e., the SPS dense coating, showed a thermal conductivity of around 1.16 W/(mK), which can be attributed to its denser microstructure, as a lower porosity content provides fewer phonon-scattering interfaces and thus leads to increased thermal conductivity. The high thermal conductivity of H(pl)-P(d) [1.6 W/(mK)] can be attributed to its low porosity level, which consists mainly of coarse porosity with almost negligible fine porosity.
Thermal conductivity values of the TBCs
The TCF results are summarized in Fig. 8. As mentioned earlier, the failure criterion in this study was 20% spallation of the TC. In the case of H(pl)-P(d), the testing was stopped after around 1800 cycles, as the substrate started to deform due to severe oxidation before the TBC could fail. It is noteworthy that, except for H(pl)-P(p) with the polished HVAF BC, all samples with HVAF BCs showed longer lifetimes than those with VPS BCs.
TCF lifetime of TBCs
There could be various reasons for failure during the TCF test, such as the TGO growth rate, the TC–BC interface topography and the TC microstructure (Ref 30). Figure 9 shows cross-sectional SEM micrographs of failed samples, comparing the HVAF and VPS BCs along with their respective TGO layers. From Fig. 9, it can be seen that the TGO layer is much thicker and has grown more unevenly in the case of the TBC with the VPS BC, V-A(s), compared to the coating with the HVAF BC, H-A(s). Earlier studies have clearly demonstrated that an excessively clean processing of the BCs (as may occur in VPS) can lead to an excess amount of free reactive elements such as Y, which is detrimental to performance (Ref 40). This is underlined by the much higher oxygen content in the HVAF coatings (~ 3600 ppm) compared to the VPS coatings (typically 800 ppm). A similar effect could be demonstrated in high-velocity APS BCs (Ref 41). A slow-growing, dense and more uniform TGO layer can be seen in the case of H-A(s), which could potentially protect the BC from further oxidation. The dark gray phase in Fig. 9(a) and (b) shows the remaining β-phase in the HVAF and VPS BCs, as indicated. The amount of β-phase remaining in the BC indicates the aluminum reserve present in the BC that is necessary for alumina layer formation. The β-phase content gradually decreases as the oxidation of the BC continues. At some point, when the β-phase in the BC is completely depleted, no further formation of the protective alumina layer is possible. Instead, rapid oxidation of other constituents of the BC, such as nickel and chromium, begins, appearing as a light gray color in SEM, indicated by the solid arrow mark in Fig. 9(a-1) (Ref 42). These mixed oxides tend to grow rapidly and generate additional stresses near the interface, resulting in failure of the TBCs. The dashed arrow marks in Fig. 9 indicate the alumina (TGO) layer, appearing as a dark gray color in SEM.
Thus, it can be said that the HVAF BC tends to delay the formation of detrimental mixed oxides by forming a thin, uniform, slow-growing alumina layer and by avoiding over-doping effects in the bond coat.
SEM micrographs of failed TCF samples showing the BC and TGO layer of (a), (a-1) V-A(s) and (b), (b-1) H-A(s)
Figure 10 shows SEM micrographs of the cross sections of the failed TBC samples after TCF testing. Vertical cracks can be seen in the TCs of V-A(p), V-S(p) and H-S(p) in Fig. 10. A horizontal crack propagating through the PS-PVD porous TC can also be observed. It should be noted that the TC in the case of H(pl)-P(d) detached from the BC during sample preparation and not due to failure.
Cross-sectional SEM micrographs of failed TBC samples from TCF test, (a) V-A(s), (b) V-A(p), (c) V-S(p), (d) H-A(s), (e) H-S(p), (f) H-S(d), (g) H(pl)-P(p), (h) H(pl)-P(d), (i) magnified image of H-S(d)
It is to be noted that the first few layers in the TC of H(pl)-P(p) were the prime area where failure occurred. The reason for this failure mechanism is deemed to be the relatively dense initial layer, formed due to the spraying conditions, which retarded the columnar growth, as discussed in more detail in Sect. 3.6. The vertically cracked microstructure of the H-S(d) coating shows the presence of horizontal cracks that seem to propagate through the inter-pass porosity bands present in the TC, as shown in Fig. 11. From Fig. 8, it can be observed that H-A(s) and H-S(d) showed the longest lifetimes among the failed samples. It was demonstrated by Dwivedi et al. that the fracture toughness of TBCs depends on the porosity content, material composition and microstructure (Ref 36). Zhou et al. showed the same for SPS TBCs (Ref 37). In general, it is observed that as the porosity increases, the hardness and fracture toughness decrease. The long lifetimes might be correlated with the high fracture toughness of both coatings, as a higher fracture toughness inhibits crack propagation through the TBCs. The porous APS TBC, V-A(p), showed a higher average lifetime than the standard reference APS TBC, V-A(s). As the failure occurred at the TC–BC interface, it is not surprising that TGO growth plays a major role in the failure of the investigated TBCs.
Higher-magnification SEM cross section of TCF-failed TBCs (a) V-A(s), (b) V-A(p), (c) V-S(p), (d) H-S(d), (e) H(pl)-P(d) and (f) H(pl)-P(p) and fracture view of TCF-failed TBCs (a-1) V-A(s), (b-1) V-A(p), (c-1) V-S(p), (d-1) H-S(d), (e-1) H(pl)-P(d) and (f-1) H(pl)-P(p)
Long exposure of the ceramic TC to high temperatures can lead to excessive sintering of the coatings. In APS TBCs, sintering promotes the closure of pores and cracks and thus reduces the overall porosity of the coating, which can lead to increased thermal conductivity (Ref 25) and considerable stiffening, and hence an increase in stress (Ref 43). In the case of SPS TBCs, sintering of the fine particles leads to densification of the pores and shrinkage of the columns, which in turn leads to widened intercolumnar gaps (Ref 44). This can result in an increase in thermal conductivity (Ref 42). Figure 11 shows higher-magnification SEM micrographs of the failed TBCs. Sintering is considered an outcome of a diffusion process (Ref 45). It can be recognized from the shape of the pores: the material around the pores redistributes, rearranging the pores from faceted to more spherical shapes. This phenomenon can be seen in the investigated TBCs in the SEM cross sections when compared to the as-sprayed condition. No such changes were observed in the case of H-S(d), as dense SPS TBCs are more resistant to sintering due to their dense microstructure. Since there was no spallation of the coating in the case of H(pl)-P(d), no fracture view was taken.
In Fig. 12, the results of the burner rig tests for the different systems are shown. As explained in the experimental section, the temperature in the middle of the substrate as well as the surface temperature were measured, and the BC temperatures were calculated using the thermal conductivities from the measurements in Fig. 7. Indicated by gray shading is the lifetime range of a reference system of type V-A (different APS feedstock), derived from a regression of six test results and the corresponding error band width, which is regularly used as a laboratory standard in burner rig testing (Ref 46). As TC degradation is strongly influenced by thermally activated processes such as TGO growth and sintering, an Arrhenius-type dependence on the interface temperature is typically observed. The samples V-A(s) showed lifetimes of 1037 and 1127 cycles. The more porous coatings V-A(p) had similar lifetimes of 915 and 1040 cycles. Thus, the increased porosity levels, which are known to affect mechanical properties such as Young's modulus and fracture toughness, astonishingly do not lead to a significant lifetime increase. A reason could be the interaction of different failure mechanisms such as sintering, TGO growth and phase transformation (Ref 35). In contrast, the system H-A(s) revealed significantly longer lifetimes of 1661 and 1746 cycles. These results are similar to those found in the TCF tests. In Fig. 13, the microstructures of the different APS coatings at the BC–TC interface are shown. Obviously, the TGO in the VPS TBCs is considerably thicker than in the HVAF coatings, although the time at temperature was shorter for the VPS TBCs. The reasons have already been discussed in Sect. 3.6. It is therefore assumed that in the burner rig tests, too, the slower TGO growth led to the increased lifetime of the HVAF systems. In Fig. 13, the results of the SPS systems are also shown. The major result is the considerably reduced lifetime of the SPS systems compared to the APS systems.
As discussed by Zhou et al., it is assumed that the specific microstructure related to the growth mechanism of the SPS columns leads to an easier crack propagation at the interface (Ref 47). In Fig. 13(d)-(f), the microstructures at the BC–TC interface of the failed SPS samples are shown. Compared to the APS coatings, the TGOs are rather thin which is correlated with the reduced lifetime and the lower resistance to crack propagation.
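The Arrhenius-type dependence of lifetime on interface temperature mentioned above can be sketched numerically. The following is a minimal illustration only, not the authors' analysis: the cycle counts and temperatures are hypothetical placeholder values, and the regression simply fits ln(N) against 1/T as the Arrhenius form suggests.

```python
import math

# Hypothetical (cycles-to-failure, TGO-level temperature in K) pairs --
# illustrative values only, not data from this study.
data = [(1700, 1273.0), (1100, 1323.0), (700, 1373.0), (450, 1423.0)]

# Arrhenius-type model: N = A * exp(Q / (R*T))  =>  ln N is linear in 1/T.
xs = [1.0 / T for _, T in data]
ys = [math.log(N) for N, _ in data]

# Ordinary least-squares fit of ln N against 1/T.
n = len(data)
mx, my = sum(xs) / n, sum(ys) / n
slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
intercept = my - slope * mx

def predicted_life(T_kelvin):
    """Cycles to failure predicted by the fitted Arrhenius-type law."""
    return math.exp(intercept + slope / T_kelvin)
```

A positive slope corresponds to the expected behavior: lifetime falls steeply as the interface temperature rises.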
Cycles to failure as a function of the average temperature at the TGO level calculated from surface/substrate temperatures logged throughout the steady-state period of heating phases in burner rig experiments
SEM micrographs of the cycled samples: (a) V-A(s), (b) V-A(p), (c) H-A(s), (d) V-S(p), (e) H-S(p), (f) H-S(d), (g) H(pl)-P(p), (h, i) H(pl)-P(d)
Although there is an explanation for the reduced lifetime of the SPS samples in the burner rigs, the question remains why the TCF lifetimes of the SPS samples are similar to those of the APS coatings. There are three major differences between the burner rig and the furnace tests. Firstly, the burner rig establishes a gradient throughout the TBC system. The higher surface temperature may introduce significant sintering. However, as the columns should open at elevated temperatures, they probably will not sinter together; in fact, the spacing between the columns was even found to increase. The second difference is the fast cooling and heating. These transient loadings can generate higher energy release rates than the isothermal tests (Ref 35). It is assumed that this high thermo-mechanical loading, in combination with the third difference, the short cycle lengths, leads to earlier failure of the SPS coatings. This is also supported by the observation that the lifetime of the V-S(p) system is only insignificantly lower than that of the H-S(p) system, despite a significantly higher TGO thickness. This may indicate that the influence of the TGO thickness on the crack-driving stresses is less pronounced in the burner rig experiment than in the TCF, or that the stress peaks within the TC during the transients have a major impact (Ref 47). Detailed finite element calculations will be performed to gain further insight into the failure mechanisms.
Similar to the results of TCF testing, the performance of the PS-PVD TCs turned out to be very different. While H(pl)-P(d) demonstrated outstanding cyclic life, the specimen of system H(pl)-P(p) failed within less than 200 cycles. While depositing the TCs of the H(pl)-P(p) system, the onset of columnar growth was retarded. Initially, a relatively dense layer was formed, showing clear indications that BC elements had diffused into it, as shown in Fig. 13(g). Although the columns were porous, as single coating passes could be distinguished, the resulting low thermal conductivity did not yield beneficial lifetimes. The burner rig test was carried out at a surface temperature of 1270 °C, while the calculated BC temperature was 1062 °C. A higher surface temperature could not be properly adjusted because the TC thickness was only 300 µm. After the burner rig test, fatal crack formation was observed in the columns above the dense layer. There are also some cracks along the TGO; however, apparently these did not lead to the failure of the sample. In contrast, the onset of columnar growth in the H(pl)-P(d) system occurred immediately at the BC interface, as shown in Fig. 13(h). There was no indication of interdiffusion. The columns were denser than in the H(pl)-P(p) system, since single coating passes could not be distinguished. The parameters of the burner rig test were a surface temperature of 1296 °C and a calculated BC temperature of 1096 °C. The TC thickness was 330 µm. After the burner rig test, fatal crack formation was observed at the TGO interface due to severe Al depletion of the BC, leading to internal oxidation and pore formation, as H(pl)-P(d) was tested at a higher BC temperature than H(pl)-P(p) and failed only after a very long lifetime. As a result, the TGO separated from the BC; in part, the BC was even stripped from the substrate. The ceramic TC did not spall off completely as it was obviously very compliant, as shown in Fig. 13(i), as already reported for such columnar coatings (Ref 13).
Design of Columnar Microstructure
The focus of this work has been to investigate the performance of different columnar microstructures produced by SPS and PS-PVD and design an ideal columnar microstructure that shows a combination of relatively low thermal conductivity and high cyclic lifetime. The results in Sect. 3 show that while both processes can produce columnar microstructures similar to EB-PVD, the microstructural characteristics vary a lot. Figure 14 shows the typical differences among the columnar microstructural features in a coating deposited by SPS, PS-PVD and EB-PVD in terms of column density, intracolumnar porosity and intercolumnar gap. While SPS coatings typically show high intracolumnar porosity, resulting in lower thermal conductivity, the high column density and medium intercolumnar gaps result in lower strain tolerance. PS-PVD columnar structures typically show wide intercolumnar gaps with high column density improving their strain tolerance, but show high thermal conductivity due to their porosity levels and wide intercolumnar gaps. EB-PVD coatings also have high column density but low intracolumnar porosity, resulting in even better strain tolerance and fracture toughness, but at the cost of higher thermal conductivity.
A schematic showing the features of a typical columnar microstructure produced by (a) SPS, (b) PS-PVD and (c) EB-PVD. (d) An ideal columnar microstructure exhibiting both low thermal conductivity and high cyclic lifetime
An ideal columnar microstructure, shown in Fig. 14(d), would thus have a combination of these features. The desired features in this structure would be high column density, medium intracolumnar porosity and narrow intercolumnar gaps providing the ideal combination of low thermal conductivity and high lifetime due to high strain tolerance and fracture toughness. It should be noted that the BC and TC–BC interface would need to be optimized appropriately in order to achieve high lifetime.
Achieving this ideal microstructure could of course be challenging and may not be technically feasible. In previous work on SPS TBCs, shot peening and grit blasting the BC surface produced a columnar microstructure with higher column density (~25 columns/mm compared to ~12 columns/mm in this study), resulting in dramatic improvements in TCF lifetime (Ref 48). Further tests need to be performed in order to assess their full potential.
In this study, nine different sets of TBCs with TCs produced by SPS, PS-PVD and APS and BCs produced by HVAF and APS were investigated. The TBCs were characterized in terms of microstructural features, thermal conductivity, and TCF and burner rig lifetime. The SPS TCs showed a lower column density compared to the PS-PVD TCs. While both processes produced coatings with similar total porosity levels, the PS-PVD TCs showed a much lower amount of fine porosity due to their denser columns with wider intercolumnar gaps. The dense PS-PVD TBC showed the highest cyclic lifetime in both TCF and burner rig testing due to its highly strain-tolerant microstructure, though it also showed the highest thermal conductivity among all samples. It was also observed that, of the HVAF and VPS BCs, the HVAF BC showed a lower oxidation rate and thus could improve the cyclic lifetime of the TBCs.
Based on these results and previous findings, an ideal columnar TC microstructure was proposed that could exhibit both low thermal conductivity and high cyclic lifetime. This microstructure would consist of high column density, medium intracolumnar porosity and narrow intercolumnar gaps as compared to typical columnar microstructures produced by SPS, PS-PVD and EB-PVD.
X.Q. Cao, R. Vassen and D. Stoever, Ceramic materials for thermal barrier coatings, J. Eur. Ceram. Soc., 2004, 24(1), p 1–10.
A.S. Khan and M. Nazmy, MCrAlY bond coating and method of depositing said MCrAlY bond coating, Google Patents, 2007.
H.E. Evans and M.P. Taylor, Oxidation of high-temperature coatings, Proc. Inst. Mech. Eng. Part G J. Aerosp. Eng., 2006, 220(1), p 1–10.
G.W. Goward, Progress in coatings for gas turbine airfoils, Surf. Coat. Technol., 1998, 108, p 73–79.
D.R. Mumm, A.G. Evans and I.T. Spitsberg, Characterization of a cyclic displacement instability for a thermally grown oxide in a thermal barrier system, Acta Mater., 2001, 49(12), p 2329–2340.
N.M. Yanar, F.S. Pettit and G.H. Meier, Failure characteristics during cyclic oxidation of yttria stabilized zirconia thermal barrier coatings deposited via electron beam physical vapor deposition on platinum aluminide and on NiCoCrAlY bond coats with processing modifications for improved performances, Metall. Mater. Trans. A, 2006, 37(5), p 1563–1580.
N. Curry, N. Markocsan, X.-H. Li, A. Tricoire and M. Dorfman, Next generation thermal barrier coatings for the gas turbine industry, J. Therm. Spray Technol., 2011, 20(1–2), p 108–115.
T. Strangman, D. Raybould, A. Jameel and W. Baker, Damage mechanisms, life prediction, and development of EB-PVD thermal barrier coatings for turbine airfoils, Surf. Coat. Technol., 2007, 202(4), p 658–664.
A. Feuerstein, J. Knapp, T. Taylor, A. Ashary, A. Bolcavage and N. Hitchman, Technical and economical aspects of current thermal barrier coating systems for gas turbine engines by thermal spray and EBPVD: a review, J. Therm. Spray Technol., 2008, 17(2), p 199–213.
L. Łatka, A. Cattini, L. Pawlowski, S. Valette, B. Pateyron, J.-P. Lecompte et al., Thermal diffusivity and conductivity of yttria stabilized zirconia coatings obtained by suspension plasma spraying ☆, Surf. Coat Technol., 2012, 208, p 87–91.
B. Bernard, A. Quet, L. Bianchi, A. Joulia, A. Malié, V. Schick et al., Thermal insulation properties of YSZ coatings: Suspension Plasma Spraying (SPS) versus Electron Beam Physical Vapor Deposition (EB-PVD) and Atmospheric Plasma Spraying (APS), Surf. Coat. Technol., 2017, 318, p 122–128.
A. Ganvir, N. Curry, S. Govindarajan and N. Markocsan, Characterization of thermal barrier coatings produced by various thermal spray techniques using solid powder, suspension, and solution precursor feedstock material, Int. J. Appl. Ceram. Technol., 2016, 13(2), p 324–332.
S. Rezanka, G. Mauer and R. Vaßen, Improved thermal cycling durability of thermal barrier coatings manufactured by PS-PVD, J. Therm. Spray Technol., 2014, 23(1–2), p 182–189.
A. Ganvir, N. Curry, N. Markocsan, P. Nylén and F.-L. Toma, Comparative study of suspension plasma sprayed and suspension high velocity oxy-fuel sprayed YSZ thermal barrier coatings, Surf. Coat. Technol., 2015, 268, p 70–76.
H. Tabbara and S. Gu, A study of liquid droplet disintegration for the development of nanostructured coatings, AIChE J., 2012, 58(11), p 3533–3544.
H. Kaßner, A. Stuke, M. Rödig, R. Vaßen and D. Stöver, Influence of porosity on thermal conductivity and sintering in suspension plasma sprayed thermal barrier coatings, Adv. Ceram. Coat. Interfaces III Ceram. Eng. Sci. Proc., 2009, 46, p 147.
R.S. Lima, B.M.H. Guerreiro and M. Aghasibeig, Microstructural characterization and room-temperature erosion behavior of As-deposited SPS, EB-PVD and APS YSZ-based TBCs, J. Therm. Spray Technol., 2019, 28(1–2), p 223–232.
G. Mauer, M.O. Jarligo, S. Rezanka, A. Hospach and R. Vaßen, Novel opportunities for thermal spray by PS-PVD, Surf. Coat. Technol., 2015, 268, p 52–57.
M. Goral, S. Kotowski and J. Sieniawski, The technology of plasma spray physical vapour deposition, High Temp. Mater. Process., 2013, 32(1), p 33–39.
K. von Niessen and M. Gindrat, Plasma spray-PVD: a new thermal spray process to deposit out of the vapor phase, J. Therm. Spray Technol., 2011, 20(4), p 736–743.
A. Scrivani, U. Bardi, L. Carrafiello, A. Lavacchi, F. Niccolai and G. Rizzi, A comparative study of high velocity oxygen fuel, vacuum plasma spray, and axial plasma spray for the deposition of CoNiCrAlY bond coat alloy, J. Therm. Spray Technol., 2003, 12(4), p 504–507.
N. Curry, K. VanEvery, T. Snyder and N. Markocsan, Thermal conductivity analysis and lifetime testing of suspension plasma-sprayed thermal barrier coatings, Coatings, 2014, 4(3), p 630–650.
M. Gupta, N. Markocsan, X.-H. Li and R.L. Peng, Improving the lifetime of suspension plasma sprayed thermal barrier coatings, Surf. Coat. Technol., 2017, 332, p 550–559.
ImageJ [Internet]. [cited 2020 Aug 10]. Available from: https://imagej.nih.gov/ij/
A. Ganvir, N. Curry, S. Björklund, N. Markocsan and P. Nylén, Characterization of microstructure and thermal properties of YSZ coatings obtained by axial suspension plasma spraying (ASPS), J. Therm. Spray Technol., 2015, 24(7), p 1195–1204.
A.G. Evans and E.A. Charles, Fracture toughness determinations by indentation, J. Am. Ceram. Soc., 1976, 59(7–8), p 371–372.
R.E. Taylor, Thermal conductivity determinations of thermal barrier coatings, Mater. Sci. Eng. A., 1998, 245(2), p 160–167.
N. Curry and J. Donoghue, Evolution of thermal conductivity of dysprosia stabilised thermal barrier coating systems during heat treatment, Surf. Coat. Technol., 2012, 209, p 38–43.
S. Mahade, N. Curry, S. Björklund, N. Markocsan and P. Nylén, Thermal conductivity and thermal cyclic fatigue of multilayered Gd2Zr2O7/YSZ thermal barrier coatings processed by suspension plasma spray, Surf Coat Technol., 2015, 283, p 329–336.
M. Gupta, N. Markocsan, X.-H. Li and L. Östergren, Influence of Bondcoat spray process on lifetime of suspension plasma-sprayed thermal barrier coatings, J. Therm. Spray Technol., 2018, 27(1), p 84–97.
B. Siebert, C. Funke, R. Vaβen and D. Stöver, Changes in porosity and Young's Modulus due to sintering of plasma sprayed thermal barrier coatings, J. Mater. Process. Technol., 1999, 92–93, p 217–223.
W. He, G. Mauer, A. Schwedt, O. Guillon and R. Vaßen, Advanced crystallographic study of the columnar growth of YSZ coatings produced by PS-PVD, J. Eur. Ceram. Soc., 2018, 38(5), p 2449–2453.
A. Ganvir, S. Joshi, N. Markocsan and R. Vassen, Tailoring columnar microstructure of axial suspension plasma sprayed TBCs for superior thermal shock performance, Mater. Des., 2018, 144, p 192–208.
F. Cernuschi, P. Bison and A. Moscatelli, Microstructural characterization of porous thermal barrier coatings by laser flash technique, Acta Mater., 2009, 57(12), p 3460–3471.
R. Vaßen, E. Bakan, D. Mack, S. Schwartz-Lückge, D. Sebold, Y. Jung Sohn et al., Performance of YSZ and Gd2Zr2O7/YSZ double layer thermal barrier coatings in burner rig tests, J. Eur. Ceram. Soc., 2020, 40(2), p 480–490.
G. Dwivedi, V. Viswanathan, S. Sampath, A. Shyam and E. Lara-Curzio, Fracture toughness of plasma-sprayed thermal barrier ceramics: influence of processing, microstructure, and thermal aging, J. Am. Ceram. Soc., 2014, 97(9), p 2736–2744.
D. Zhou, O. Guillon and R. Vaßen, Development of YSZ thermal barrier coatings using axial suspension plasma spraying, Coatings, 2017, 7(8), p 120.
J.R. Nicholls, K.J. Lawson, A. Johnstone and D.S. Rickerby, Methods to reduce the thermal conductivity of EB-PVD TBCs, Surf Coat Technol., 2002, 151–152, p 383–391.
I.O. Golosnoy, S.A. Tsipas and T.W. Clyne, An analytical model for simulation of heat flow in plasma-sprayed thermal barrier coatings, J. Therm. Spray Technol., 2005, 14(2), p 205–214.
M. Subanovic et al., Effect of manufacturing related parameters on oxidation properties of MCrAlY-bondcoats, Mater. Corros., 2008, https://doi.org/10.1002/maco.200804128
E. Hejrani, D. Sebold, W.J. Nowak, G. Mauer, D. Naumenko, R. Vaßen et al., Isothermal and cyclic oxidation behavior of free standing MCrAlY coatings manufactured by high-velocity atmospheric plasma spraying, Surf. Coat. Technol., 2017, 313, p 191–201.
B. Bernard, A. Quet, L. Bianchi, V. Schick, A. Joulia, A. Malié et al., Effect of suspension plasma-sprayed YSZ columnar microstructure and bond coat surface preparation on thermal barrier coating properties, J. Therm. Spray Technol., 2017, 26(6), p 1025–1037.
M. Ahrens, R. Vaßen, D. Stöver and S. Lampenscherf, Sintering and creep processes in plasma-sprayed thermal barrier coatings, J. Therm. Spray Technol., 2004, 13(3), p 432–442.
O. Aranke, M. Gupta, N. Markocsan, X.-H. Li and B. Kjellman, Microstructural evolution and sintering of suspension plasma-sprayed columnar thermal barrier coatings, J. Therm. Spray Technol., 2019, 28(1–2), p 198–211.
B. Xiao, X. Huang, T. Robertson, Z. Tang and R. Kearsey, Sintering resistance of suspension plasma sprayed 7YSZ TBC under isothermal and cyclic oxidation, J. Eur. Ceram. Soc., 2020, 40(5), p 2030–2041.
C. Nordhorn, R. Mücke, D.E. Mack and R. Vaßen, Probabilistic lifetime model for atmospherically plasma sprayed thermal barrier coating systems, Mech. Mater., 2016, 93, p 199–208.
D. Zhou, D.E. Mack, P. Gerald, O. Guillon and R. Vaßen, Architecture designs for extending thermal cycling lifetime of suspension plasma sprayed thermal barrier coatings, Ceram. Int., 2019, 45(15), p 18471–18479.
M. Gupta, X.-H. Li, N. Markocsan and B. Kjellman, Design of high lifetime suspension plasma sprayed thermal barrier coatings, J. Eur. Ceram. Soc., 2020, 40(3), p 768–779.
This work was supported by the Helmholtz Association, Germany, and the Knowledge Foundation, Sweden. The authors would like to thank Mr. F. Kurze and Mr. R. Laufs for performing the plasma spraying experiments and Mr. M. Tandler for carrying out the thermal cycling tests. The authors also acknowledge funding from the Deutsche Forschungsgemeinschaft (DFG) via the project VA 163/8, and gratefully acknowledge the SEM analysis by Dr. Doris Sebold. The authors would also like to thank Mr. Stefan Björklund for the HVAF and SPS spraying at University West.
Open access funding provided by University West.
University West, 46186, Trollhättan, Sweden
Nitish Kumar & Mohit Gupta
Forschungszentrum Jülich GmbH, D-52425, Jülich, Germany
Daniel E. Mack, Georg Mauer & Robert Vaßen
Correspondence to Mohit Gupta.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
Kumar, N., Gupta, M., Mack, D.E. et al. Columnar Thermal Barrier Coatings Produced by Different Thermal Spray Processes. J Therm Spray Tech 30, 1437–1452 (2021). https://doi.org/10.1007/s11666-021-01228-5
Revised: 08 June 2021
Issue Date: August 2021
burner rig testing
columnar microstructure
thermal barrier coatings
thermal cyclic fatigue | CommonCrawl |
Oscillatory Solutions to the Heat Equation
2019-10-03 Last updated on 2019-10-07 4 min read mathematical diversions
As we all learned in a first course in PDEs, for "reasonable" initial data $u_0$, the solution to the linear heat equation \begin{equation} \partial_t u = \triangle u \end{equation} exists classically and converges to zero uniformly.
The proof can be given using the explicit Gaussian kernel \begin{equation} K_t(x) = \frac{1}{(4\pi t)^{d/2}} \exp( -|x|^2 / 4t ) \end{equation} for which \[ u(t,x) = K_t \star u_0 \] solves the heat equation. The fact that $K_t$ is a Schwartz function for every $t > 0$, and that it depends smoothly on $t$, means that as long as $u_0$ is locally integrable and has no more than polynomial growth (for example), $u(t,x)$ is well-defined everywhere and smooth.
Furthermore, by Hölder's inequality we see \begin{equation} |u(t,x)| \leq |K_t|_{L^p} |u_0|_{L^q} \end{equation} where $p$ and $q$ are conjugate exponents. Observe that by explicit computation, $K_t$ always has $L^1$ norm $1$, but its $L^p$ norm for every $p > 1$ decreases in $t$. This shows that, in particular, if $u_0 \in L^q$ for any $q\in [1,\infty)$, then the classical solution converges uniformly to zero.
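The behaviour of the $L^p$ norms of $K_t$ is easy to check numerically. The sketch below is illustrative only (dimension $d=1$, midpoint rule on a truncated domain with ad hoc quadrature parameters): it confirms that the $L^1$ norm stays equal to $1$ while the $L^2$ norm decays like $t^{-1/4}$, matching the closed form $|K_t|_{L^2} = (8\pi t)^{-1/4}$ in one dimension.

```python
import math

def heat_kernel(x, t):
    # Gaussian heat kernel K_t on the real line (d = 1).
    return (4.0 * math.pi * t) ** -0.5 * math.exp(-x * x / (4.0 * t))

def lp_norm(t, p, L=40.0, n=40001):
    # Midpoint-rule approximation of the L^p norm of K_t over [-L, L].
    h = 2.0 * L / n
    s = sum(heat_kernel(-L + (i + 0.5) * h, t) ** p for i in range(n)) * h
    return s ** (1.0 / p)
```

With these parameters, `lp_norm(t, 1)` stays at $1$ for both $t=1$ and $t=16$, while `lp_norm(16, 2) / lp_norm(1, 2)` comes out as $16^{-1/4} = 1/2$.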
Notice that the theory leaves out $L^\infty$. In particular, we see easily that if $u_0 \equiv C$, then the convolution defines the classical solution $u(t,x) \equiv C$ as the solution to the heat equation, and this does not converge to $0$.
An obvious question is: for bounded, locally integrable initial data $u_0$, do solutions to the linear heat equation always converge to a constant?
On compact manifolds
Our case is bolstered by the situation on compact manifolds. When solving the heat equation on compact manifolds, if the initial data is bounded, it is also in $L^2$. So we can decompose the initial data using the eigen-expansion relative to the Laplace–Beltrami operator $\triangle$. The system then diagonalizes into decoupled ordinary differential equations which, except for the kernel of $\triangle$, have exponentially decaying solutions.
In fact, this is sufficient to tell us that on compact manifolds, solutions to the heat equation converge exponentially to their mean.
We immediately, however, see a difference with the case on $\mathbb{R}^d$. On Euclidean space, solutions frequently decay only polynomially to zero when the data is in $L^p$. One can check this by letting $u_0$ be a Gaussian function, for which the solution to the heat equation decays at the rate $t^{-d/2}$.
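For concreteness, in one dimension the Gaussian data $u_0(x) = e^{-x^2/(4s)}$ gives the closed-form value $u(t,0) = (s/(s+t))^{1/2}$, which decays like $t^{-1/2}$, i.e. $t^{-d/2}$ with $d=1$. A quick numerical sketch (illustrative; quadrature parameters chosen ad hoc) confirms that the convolution formula matches the closed form:

```python
import math

def u_exact(t, s=1.0):
    # Closed-form value u(t, 0) for Gaussian data u0(x) = exp(-x^2/(4s)), d = 1.
    return (s / (s + t)) ** 0.5

def u_numeric(t, s=1.0, L=50.0, n=50001):
    # Midpoint-rule quadrature of the convolution (K_t * u0)(0) on [-L, L].
    h = 2.0 * L / n
    total = 0.0
    for i in range(n):
        z = -L + (i + 0.5) * h
        total += (4.0 * math.pi * t) ** -0.5 * math.exp(-z * z / (4.0 * t)) * math.exp(-z * z / (4.0 * s))
    return total * h
```

For instance, with $s=1$ and $t=3$ the exact value is $\sqrt{1/4} = 1/2$, and quadrupling $t$ (for large $t$) halves $u(t,0)$, exhibiting the $t^{-1/2}$ rate.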
Solutions are asymptotically locally constant
On the positive side, we do have that, for every $u_0$ that is bounded, the solutions are asymptotically locally constant at every time. This is because if $u_0$ is bounded (say by the number $b$), we have that \[ u(t,x) - u(t,y) = \int (K_t(x-z) - K_t(y-z))u_0(z) ~\mathrm{d}z \] so \begin{equation} |u(t,x) - u(t,y)| \leq b \int |K_t(x-z) - K_t(y-z) | ~\mathrm{d}z \end{equation} Performing an explicit change of variables we get \begin{equation} |u(t,x) - u(t,y)| \leq \frac{b}{\pi^{d/2}} \int | \exp(- |z|^2) - \exp(- |z - (y-x)/ (2\sqrt{t})|^2) | ~\mathrm{d}z. \end{equation} For $(y-x)/\sqrt{t}$ sufficiently small, the integral can be very roughly bounded by $O(|y-x| / \sqrt{t})$. This shows that on every compact set $K$, \begin{equation} \sup_K u(t,\cdot) - \inf_K u(t,\cdot) \lesssim t^{-1/2} \end{equation} giving that the solutions are asymptotically locally constant.
This argument, however, says nothing about whether the "constant" value is the same between different times.
Oscillatory solutions
In fact, it is fairly easy to construct bounded initial data for which the solution, evaluated at $x = 0$, fails to converge as $t \to \infty$. Performing the explicit change of variable we get \begin{equation} u(t,0) = \int \frac{1}{(4\pi)^{d/2}} \exp( - |z|^2/4) u_0( - \sqrt{t} z) \mathrm{d}z. \end{equation} Denoting by $\mu_G$ the Gaussian measure $(4\pi)^{-d/2} \exp( - |z|^2/4) \mathrm{d}z$ (which we observe has total mass 1), we see that we can write $u(t,0)$ as the integral of a rescaling of $u_0$ against $\mu_G$.
First choose $\epsilon \ll 1$. We can choose $0 < \lambda < \Lambda < \infty$ such that \[ \int_{ |z| \leq \lambda} \mu_G < \epsilon, \quad \int_{|z| \geq \Lambda} \mu_G < \epsilon. \] Let $\kappa = \Lambda / \lambda$. Let $u_0$ be defined by \begin{equation} u_0(x) = \begin{cases} 0 & x = 0 \newline 1 & |x| \in [\lambda \kappa^n, \lambda \kappa^{n+1}), n \text{ is even}\newline 0 & |x| \in [\lambda \kappa^m, \lambda \kappa^{m+1}), m \text{ is odd} \end{cases} \end{equation} with $n$ and $m$ ranging over all integers. Observe that this function satisfies \[ u_0(\kappa^{2n} x) = u_0(x) \] and \[ u_0(x) + u_0(\kappa x) \equiv 1.\] By construction, there exists a value $\gamma \in [1 - 2\epsilon, 1]$ such that \[ \int u_0(x) \mu_G = \gamma, \quad \int u_0(\kappa x) \mu_G = 1 - \gamma.\] Returning to the formula for $u(t,0)$, we see that \begin{equation} u(\kappa^{2n},0) = \begin{cases} \gamma & n \text{ is even} \newline 1 - \gamma & n \text{ is odd} \end{cases} \end{equation} This exhibits oscillatory behavior at $x = 0$, showing that the limit of $u(t,0)$ as $t\to \infty$ does not exist.
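The construction can be verified numerically in one dimension. The sketch below is my illustration of the argument, with the ad hoc choices $\lambda = 10^{-2}$ and $\Lambda = 10$ (so $\kappa = 10^3$); it sums the Gaussian mass of the bands on which $u_0 = 1$ via the error function. Since $u_0$ is even, the sign in $u_0(-\sqrt{t}\,z)$ is immaterial.

```python
import math

LAM, KAP = 1e-2, 1e3   # lambda = 0.01, Lambda = 10, so kappa = Lambda / lambda = 1000

def band_mass(a, b):
    # Mass of the Gaussian measure (4 pi)^(-1/2) exp(-z^2/4) dz on {a <= |z| < b}.
    return math.erf(b / 2.0) - math.erf(a / 2.0)

def u_at_origin(t):
    # u(t, 0): total Gaussian mass of the bands on which u0(sqrt(t) z) = 1.
    s = math.sqrt(t)
    total = 0.0
    for n in range(-20, 21, 2):  # even n: u0 = 1 on lam*kap^n <= |x| < lam*kap^(n+1)
        a = min(LAM * KAP ** n / s, 50.0)
        b = min(LAM * KAP ** (n + 1) / s, 50.0)
        total += band_mass(a, b)
    return total
```

Evaluating at $t = \kappa^{2n}$, the value at the origin jumps between roughly $\gamma \approx 0.994$ (even $n$) and $1-\gamma \approx 0.006$ (odd $n$), confirming the oscillation.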
On the other hand, this particular solution has the nice property that, on any compact set $K$, and for any fixed $t_0 > 0$, the sequence of functions $v_n(x) = u(t_0 \kappa^{4n}, x)$ converges uniformly to $u(t_0,0)$ as $n \to \infty$.
math.AP parabolic PDE
Journal of Fluid Mechanics
Volume 869 - 25 June 2019
Effect of background mean flow on PSI of internal wave beams
Boyu Fan, T. R. Akylas
Published online by Cambridge University Press: 23 April 2019, R1
An asymptotic model is developed for the parametric subharmonic instability (PSI) of finite-width nearly monochromatic internal gravity wave beams in the presence of a background constant horizontal mean flow. The subharmonic perturbations are taken to be short-scale wavepackets that may extract energy via resonant triad interactions while in contact with the underlying beam, and the mean flow is assumed to be small so that its advection effect on the perturbations is as important as dispersion, triad nonlinearity and viscous dissipation. In this 'distinguished limit', the perturbation dynamics are governed by the same evolution equations as those derived in Karimi & Akylas (J. Fluid Mech., vol. 757, 2014, pp. 381–402), except for a mean flow term that affects the group velocity of the perturbations and imposes an additional necessary condition for PSI, which stabilizes very short-scale perturbations. As a result, it is possible for a small amount of mean flow to weaken PSI dramatically.
Stability analysis of a particle band on the fluid–fluid interface
Alireza Hooshanginejad, Benjamin C. Druecke, Sungyon Lee
We present experiments and theory for viscous fingering of a suspension of non-colloidal particles undergoing radial flow in a Hele-Shaw cell. As the suspension displaces air, shear-induced migration causes particles to move faster than the average suspension velocity and to accumulate on the suspension–air interface. The resultant particle accumulation generates a pattern in which low-concentration, low-viscosity suspension displaces high-concentration, high-viscosity suspension and is unstable due to the classic Saffman–Taylor instability mechanism. While the destabilising mechanism is well-understood, what remains unknown is the stabilising mechanism that suppresses fine fingers characteristic of miscible fingering. In this work, we demonstrate how the stable suspension–air interface interacts with the unstable miscible interface to set the critical wavelength. We present a linear stability analysis for the time-dependent radial flow and show that the wavenumber predicted by the analysis is in good agreement with parametric experiments investigating the effect of suspension concentration and gap thickness of the Hele-Shaw cell.
Dip-coating with a particulate suspension
Sergio Palma, Henri Lhuissier
The coating of a plate withdrawn from a bath of a suspension of non-Brownian, monodisperse and neutrally buoyant spherical particles suspended in a Newtonian liquid has been studied. Using laser profilometry, particle tracking and local sample weighing we have quantified the thickness $h$ and the particle content of the film for various particle diameters $d$ and volume fractions ($0.10\leqslant \phi\leqslant 0.50$). Three coating regimes have been observed as the withdrawal velocity is increased: (i) no particle entrainment ($h\lesssim d$), (ii) a monolayer of particles ($h\sim d$), and (iii) a thick film ($h\gtrsim d$), where the suspension behaves as an effective viscous fluid following the Landau–Levich–Derjaguin law. We discuss the boundaries between these regimes, as well as the evolution of the liquid and solid content of the coating over the whole range of withdrawal capillary number and volume fractions.
$Nu\sim Ra^{1/2}$ scaling enabled by multiscale wall roughness in Rayleigh–Bénard turbulence
Xiaojue Zhu, Richard J. A. M. Stevens, Olga Shishkina, Roberto Verzicco, Detlef Lohse
In turbulent Rayleigh–Bénard (RB) convection with regular, mono-scale, surface roughness, the scaling exponent $\beta$ in the relationship between the Nusselt number $Nu$ and the Rayleigh number $Ra$, $Nu\sim Ra^{\beta}$, can be ${\approx}1/2$ locally, provided that $Ra$ is large enough to ensure that the thermal boundary layer thickness $\lambda_{\theta}$ is comparable to the roughness height. However, at even larger $Ra$, $\lambda_{\theta}$ becomes thin enough to follow the irregular surface and $\beta$ saturates back to the value for smooth walls (Zhu et al., Phys. Rev. Lett., vol. 119, 2017, 154501). In this paper, we prevent this saturation by employing multiscale roughness. We perform direct numerical simulations of two-dimensional RB convection using an immersed boundary method to capture the rough plates. We find that, for rough boundaries that contain three distinct length scales, a scaling exponent of $\beta=0.49\pm 0.02$ can be sustained for at least three decades of $Ra$. The physical reason is that the threshold $Ra$ at which the scaling exponent $\beta$ saturates back to the smooth wall value is pushed to larger $Ra$, when the smaller roughness elements fully protrude through the thermal boundary layer. The multiscale roughness employed here may better resemble the irregular surfaces that are encountered in geophysical flows and in some industrial applications.
Turbulent Rayleigh–Bénard convection in an annular cell
Xu Zhu, Lin-Feng Jiang, Quan Zhou, Chao Sun
We report an experimental study of turbulent Rayleigh–Bénard (RB) convection in an annular cell of water (Prandtl number $Pr=4.3$) with a radius ratio $\eta\simeq 0.5$. Global quantities, such as the Nusselt number $Nu$ and the Reynolds number $Re$, and local temperatures were measured over the Rayleigh range $4.2\times 10^{9}\leqslant Ra\leqslant 4.5\times 10^{10}$. It is found that the scaling behaviours of $Nu(Ra)$, $Re(Ra)$ and the temperature fluctuations remain the same as those in the traditional cylindrical cells; both the global and local properties of turbulent RB convection are insensitive to the change of cell geometry. A visualization study, as well as local temperature measurements, shows that in spite of the lack of the cylindrical core, there also exists a large-scale circulation (LSC) in the annular system: thermal plumes organize themselves with the ascending hot plumes on one side and the descending cold plumes on the opposite side. Near the upper and lower plates, the mean flow moves along the two circular branches. Our results further reveal that the dynamics of the LSC in this annular geometry is different from that in the traditional cylindrical cell, i.e. the orientation of the LSC oscillates in a narrow azimuthal angle range, and no cessations, reversals or net rotation were detected.
Fractal features of turbulent/non-turbulent interface in a shock wave/turbulent boundary-layer interaction flow
Yi Zhuang, Huijun Tan, Weixing Wang, Xin Li, Yunjie Guo
Fractal features of the turbulent/non-turbulent interface (TNTI) in shock wave/turbulent boundary-layer interaction (SWBLI) flows are essential in understanding the physics of the SWBLI and the supersonic turbulent boundary layer, yet have received almost no attention previously. Accordingly, this study utilises a high spatiotemporal resolution visualisation technique, ice-cluster-based planar laser scattering (IC-PLS), to acquire the TNTI downstream of the reattachment in a SWBLI flow. Evolution of the fractal features of the TNTI in this SWBLI flow is analysed by comparing the parameters of the TNTI acquired in this study with those from a previous result (Zhuang et al., J. Fluid Mech., vol. 843, 2018a).
Diffusion of inertia-gravity waves by geostrophic turbulence
Hossein A. Kafiabad, Miles A. C. Savva, Jacques Vanneste
The scattering of inertia-gravity waves by large-scale geostrophic turbulence in a rapidly rotating, strongly stratified fluid leads to the diffusion of wave energy on the constant-frequency cone in wavenumber space. We derive the corresponding diffusion equation and relate its diffusivity to the wave characteristics and the energy spectrum of the turbulent flow. We check the predictions of this equation against numerical simulations of the three-dimensional Boussinesq equations in initial-value and forced scenarios with horizontally isotropic wave and flow fields. In the forced case, wavenumber diffusion results in a $k^{-2}$ wave energy spectrum consistent with as-yet-unexplained features of observed atmospheric and oceanic spectra.
JFM Papers
Effect of wind turbine nacelle on turbine wake dynamics in large wind farms
Daniel Foti, Xiaolei Yang, Lian Shen, Fotis Sotiropoulos
Wake meandering, a phenomenon of large-scale lateral oscillation of the wake, has significant effects on the velocity deficit and turbulence intensities in wind turbine wakes. Previous studies of a single turbine (Kang et al., J. Fluid. Mech., vol. 774, 2014, pp. 374–403; Foti et al., Phys. Rev. Fluids, vol. 1 (4), 2016, 044407) have shown that the turbine nacelle induces large-scale coherent structures in the near field that can have a significant effect on wake meandering. However, whether nacelle-induced coherent structures at the turbine scale impact the emergent turbine wake dynamics at the wind farm scale is still an open question of both fundamental and practical significance. We take on this question by carrying out large-eddy simulation of atmospheric turbulent flow over the Horns Rev wind farm using actuator surface parameterisations of the turbines without and with the turbine nacelle taken into account. While the computed mean turbine power output and the mean velocity field away from the nacelle wake are similar for both cases, considerable differences are found in the turbine power fluctuations and turbulence intensities. Furthermore, wake meandering amplitude and area defined by wake meanders, which indicates the turbine wake unsteadiness, are larger for the simulations with the turbine nacelle. The wake influenced area computed from the velocity deficit profiles, which describes the spanwise extent of the turbine wakes, and the spanwise growth rate, on the other hand, are smaller for some rows in the simulation with the nacelle model. Our work shows that incorporating the nacelle model in wind farm scale simulations is critical for accurate predictions of quantities that affect the wind farm levelised cost of energy, such as the dynamics of wake meandering and the dynamic loads on downwind turbines.
Non-periodic phase-space trajectories of roughness-driven secondary flows in high-$Re_{\tau}$ boundary layers and channels
W. Anderson
Published online by Cambridge University Press: 18 April 2019, pp. 27-84
Turbulent flows respond to bounding walls with a predominant spanwise heterogeneity – that is, a heterogeneity parallel to the prevailing transport direction – with formation of Reynolds-averaged turbulent secondary flows. Prior experimental and numerical work has determined that these secondary rolls occur in a variety of arrangements, contingent only upon the existence of a spanwise heterogeneity (i.e. from complex, multiscale roughness with a predominant spanwise heterogeneity, to canonical step changes, to different roughness elements). These secondary rolls are known to be a manifestation of Prandtl's secondary flow of the second kind: driven and sustained by the existence of spatial heterogeneities in the Reynolds (turbulent) stresses, all of which vanish in the absence of spanwise heterogeneity. Herein, we show results from a suite of large-eddy simulations and complementary experimental measurements of flow over spanwise-heterogeneous surfaces. Although the resultant secondary cell location is clearly correlated with the surface characteristics, which ultimately dictates the Reynolds-averaged flow patterns, we show the potential for instantaneous sign reversals in the rotational sense of the secondary cells. This is accomplished with probability density functions and conditional sampling. In order to address this further, a base flow representing the streamwise rolls is introduced. The base flow intensity – based on a leading-order Galerkin projection – is allowed to vary in time through the introduction of time-dependent parameters. Upon substitution of the base flow into the streamwise momentum and streamwise vorticity transport equations, and via use of a vortex forcing model, we are able to assess the phase-space evolution (orbit) of the resulting system of ordinary differential equations. The system resembles the Lorenz system, but the forcing conditions differ intrinsically. 
Nevertheless, the system reveals that chaotic, non-periodic trajectories are possible for sufficient inertial conditions. Poincaré projection is used to assess the conditions needed for chaos, and to estimate the fractal dimension of the attractor. Its simplicity notwithstanding, the propensity for chaotic, non-periodic trajectories in the base flow model suggests similar dynamics is responsible for the large-scale reversals observed in the numerical and experimental datasets.
Pressure-driven gas flow in viscously deformable porous media: application to lava domes
David M. Hyman, M. I. Bursik, E. B. Pitman
Published online by Cambridge University Press: 18 April 2019, pp. 85-109
The behaviour of low-viscosity, pressure-driven compressible pore fluid flows in viscously deformable porous media is studied here with specific application to gas flow in lava domes. The combined flow of gas and lava is shown to be governed by a two-equation set of nonlinear mixed hyperbolic–parabolic type partial differential equations describing the evolution of gas pore pressure and lava porosity. Steady state solution of this system is achieved when the gas pore pressure is magmastatic and the porosity profile accommodates the magmastatic pressure condition by increased compaction of the medium with depth. A one-dimensional (vertical) numerical linear stability analysis (LSA) is presented here. As a consequence of the pore-fluid compressibility and the presence of gravitation compaction, the gradients present in the steady-state solution cause variable coefficients in the linearized equations which generate instability in the LSA despite the diffusion-like and dissipative terms in the original system. The onset of this instability is shown to be strongly controlled by the thickness of the flow and the maximum porosity, itself a function of the mass flow rate of gas. Numerical solutions of the fully nonlinear system are also presented and exhibit nonlinear wave propagation features such as shock formation. As applied to gas flow within lava domes, the details of this dynamics help explain observations of cyclic lava dome extrusion and explosion episodes. Because the instability is stronger in thicker flows, the continued extrusion and thickening of a lava dome constitutes an increasing likelihood of instability onset, pressure wave growth and ultimately explosion.
Mass transfer around bubbles flowing in cylindrical microchannels
Javier Rivero-Rodriguez, Benoit Scheid
This work focuses on the mass transfer around unconfined bubbles in cylindrical microchannels when they are arranged in a train. We characterise how the mass transfer, quantified by the Sherwood number, $Sh$ , is affected by the channel and bubble sizes, distance between bubbles, diffusivity, mean flow velocity, deformation of the bubble, the presence of surfactants in the limit of rigid interface and off-centred positions of the bubbles. We analyse the influence of the dimensionless numbers and especially the distance between bubbles and the Péclet number, $Pe$ , which we vary over eight decades, identifying five different mass transfer regimes. We show different concentration patterns and the dependence of the Sherwood numbers. These regimes can be classified by either the importance of the diffusion along the streamlines or the interaction between bubbles. For small $Pe$ the diffusion along the streamlines is not negligible as compared to convection, whereas for large $Pe$ convection dominates in the streamlines direction and, thus, crosswind diffusion becomes crucial in governing the mass transfer through boundary layers or the remaining wake behind the bubbles. Interaction of bubbles occurs for very small $Pe$ where the mass transfer is purely diffusive, or for very large $Pe$ where long wakes eventually reach the following bubble. We also observe that the bubble deformability mainly affects the $Sh$ in the regime for very large $Pe$ in which bubbles interaction matters, and also that the rigid interface affects the boundary layer and the remaining wake. The effect of off-centred position of the bubble, determined by the transverse force balance, is also limited to large $Pe$ . The boundary layers on rigid bubble surfaces are thicker than those on stress-free bubble surfaces, and thus the mass transfer is weaker. For centred bubbles, the influence of inertia on the mass transfer is negligible. 
Finally, we discuss the implication of our results on the dissolution of bubbles.
A dynamical systems view of granular flow: from monoclinal flood waves to roll waves
Dimitrios Razis, Giorgos Kanellopoulos, Ko van der Weele
On the basis of the Saint-Venant equations for flowing granular matter, we study the various travelling waveforms that are encountered in chute flow for growing Froude number. Generally, for $Fr<2/3$ one finds either a uniform flow of constant thickness or a monoclinal flood wave, i.e. a shock structure monotonically connecting a thick region upstream to a shallower region downstream. For $Fr>2/3$ both the uniform flow and the monoclinal wave cease to be stable; the flow now organizes itself in the form of a train of roll waves. From the governing Saint-Venant equations we derive a dynamical system that elucidates the transition from monoclinal waves to roll waves. It is found that this transition involves several intermediate stages, including an undular bore that had hitherto not been reported for granular flows.
A comparative study of the velocity and vorticity structure in pipes and boundary layers at friction Reynolds numbers up to $10^{4}$
S. Zimmerman, J. Philip, J. Monty, A. Talamelli, I. Marusic, B. Ganapathisubramani, R. J. Hearst, G. Bellani, R. Baidya, M. Samie, X. Zheng, E. Dogan, L. Mascotelli, J. Klewicki
This study presents findings from a first-of-its-kind measurement campaign that includes simultaneous measurements of the full velocity and vorticity vectors in both pipe and boundary layer flows under matched spatial resolution and Reynolds number conditions. Comparison of canonical turbulent flows offers insight into the role(s) played by features that are unique to one or the other. Pipe and zero pressure gradient boundary layer flows are often compared with the goal of elucidating the roles of geometry and a free boundary condition on turbulent wall flows. Prior experimental efforts towards this end have focused primarily on the streamwise component of velocity, while direct numerical simulations are at relatively low Reynolds numbers. In contrast, this study presents experimental measurements of all three components of both velocity and vorticity for friction Reynolds numbers $Re_{\tau}$ ranging from 5000 to 10 000. Differences in the two transverse Reynolds normal stresses are shown to exist throughout the log layer and wake layer at Reynolds numbers that exceed those of existing numerical data sets. The turbulence enstrophy profiles are also shown to exhibit differences spanning from the outer edge of the log layer to the outer flow boundary. Skewness and kurtosis profiles of the velocity and vorticity components imply the existence of a 'quiescent core' in pipe flow, as described by Kwon et al. (J. Fluid Mech., vol. 751, 2014, pp. 228–254) for channel flow at lower $Re_{\tau}$, and characterize the extent of its influence in the pipe. Observed differences between statistical profiles of velocity and vorticity are then discussed in the context of a structural difference between free-stream intermittency in the boundary layer and 'quiescent core' intermittency in the pipe that is detectable to wall distances as small as 5 % of the layer thickness.
Energetics and mixing in buoyancy-driven near-bottom stratified flow
Pranav Puthan, Masoud Jalali, Vamsi K. Chalamalla, Sutanu Sarkar
Turbulence and mixing in a near-bottom convectively driven flow are examined by numerical simulations of a model problem: a statically unstable disturbance at a slope with inclination $\beta$ in a stable background with buoyancy frequency $N$. The influence of slope angle and initial disturbance amplitude are quantified in a parametric study. The flow evolution involves energy exchange between four energy reservoirs, namely the mean and turbulent components of kinetic energy (KE) and available potential energy (APE). In contrast to the zero-slope case where the mean flow is negligible, the presence of a slope leads to a current that oscillates with $\omega=N\sin \beta$ and qualitatively changes the subsequent evolution of the initial density disturbance. The frequency, $N\sin \beta$, and the initial speed of the current are predicted using linear theory. The energy transfer in the sloping cases is dominated by an oscillatory exchange between mean APE and mean KE with a transfer to turbulence at specific phases. In all simulated cases, the positive buoyancy flux during episodes of convective instability at the zero-velocity phase is the dominant contributor to turbulent kinetic energy (TKE) although the shear production becomes increasingly important with increasing $\beta$. Energy that initially resides wholly in mean available potential energy is lost through conversion to turbulence and the subsequent dissipation of TKE and turbulent available potential energy. A key result is that, in contrast to the explosive loss of energy during the initial convective instability in the non-sloping case, the sloping cases exhibit a more gradual energy loss that is sustained over a long time interval. 
The slope-parallel oscillation introduces a new flow time scale $T=2\pi/(N\sin \beta)$ and, consequently, the fraction of initial APE that is converted to turbulence during convective instability progressively decreases with increasing $\beta$. For moderate slopes with $\beta<10^{\circ}$, most of the net energy loss takes place during an initial, short ($Nt\approx 20$) interval with periodic convective overturns. For steeper slopes, most of the energy loss takes place during a later, long ($Nt>100$) interval when both shear and convective instability occur, and the energy loss rate is approximately constant. The mixing efficiency during the initial period dominated by convectively driven turbulence is found to be substantially higher (exceeds 0.5) than the widely used value of 0.2. The mixing efficiency at long time in the present problem of a convective overturn at a boundary varies between 0.24 and 0.3.
Weakly nonlinear theory for a gate-type curved array in waves
S. Michele, E. Renzi, P. Sammarco
We analyse the effect of gate surface curvature on the nonlinear behaviour of an array of gates in a semi-infinite channel. Using a perturbation-harmonic expansion, we show the occurrence of new detuning and damping terms in the Ginzburg–Landau evolution equation, which are not present in the case of flat gates. Unlike the case of linearised theories, synchronous excitation of trapped modes is now possible because of interactions between the wave field and the curved boundaries at higher orders. Finally, we apply the theory to the case of surging wave energy converters (WECs) with curved geometry and show that the effects of nonlinear synchronous resonance are substantial for design purposes. Conversely, in the case of subharmonic resonance we show that the effects of surface curvature are not always beneficial as previously thought.
Drag of a heated sphere at low Reynolds numbers in the absence of buoyancy
Swetava Ganguli, Sanjiva K. Lele
Fully resolved simulations are used to quantify the effects of heat transfer in the absence of buoyancy on the drag of a spatially fixed heated spherical particle at low Reynolds numbers ( $Re$ ) in the range $10^{-3}\leqslant Re\leqslant 10$ in a variable-property fluid. The case where buoyancy is present is analysed in a subsequent paper. This analysis is carried out without making any assumptions on the amount of heat addition from the sphere and thus encompasses both the heating regime where the Boussinesq approximation holds and the regime where it breaks down. The particle is assumed to have a low Biot number, which means that the particle is uniformly at the same temperature and has no internal temperature gradients. Large deviations in the value of the drag coefficient as the temperature of the sphere increases are observed. When $Re<O(10^{-2})$ , these deviations are explained using a low-Mach-number perturbation analysis as irrotational corrections to a Stokes–Oseen base flow. Correlations for the drag and Nusselt number of a heated sphere are proposed for the range of Reynolds numbers $10^{-3}\leqslant Re\leqslant 10$ which fit the computationally obtained values with less than 1 % and 3 % errors, respectively. These correlations can be used in simulations of gas–solid flows where the accuracy of the drag law affects the prediction of the overall flow behaviour. Finally, an analogy to incompressible flow over a modified sphere is demonstrated.
Multiphase plumes in a stratified ambient
Nicola Mingotti, Andrew W. Woods
We report on experiments of turbulent particle-laden plumes descending through a stratified environment. We show that provided the characteristic plume speed $(B_{0}N)^{1/4}$ exceeds the particle fall speed, where the plume buoyancy flux is $B_{0}$ and the Brunt–Väisälä frequency is $N$ , then the plume is arrested by the stratification and initially intrudes at the neutral height associated with a single-phase plume of the same buoyancy flux. If the original fluid phase in the plume has density equal to that of the ambient fluid at the source, then as the particles sediment from the intruding fluid, the fluid finds itself buoyant and rises, ultimately intruding at a height of about $0.58\pm 0.03$ of the original plume height, consistent with new predictions we present based on classical plume theory. We generalise this result, and show that if the buoyancy flux at the source is composed of a fraction $F_{s}$ associated with the buoyancy of the source fluid, and a fraction $1-F_{s}$ from the particles, then following the sedimentation of the particles, the plume fluid intrudes at a height $(0.58+0.22F_{s}\pm 0.03)H_{t}$ , where $H_{t}$ is the maximum plume height. This is key for predictions of the environmental impact of any material dissolved in the plume water which may originate from the particle load. We also show that the particles sediment at their fall speed through the fluid below the maximum depth of the plume as a cylindrical column whose area scales as the ratio of the particle flux at the source to the fall speed and concentration of particles in the plume at the maximum depth of the plume before it is arrested by the stratification. We demonstrate that there is negligible vertical transport of fluid in this cylindrical column, but a series of layers of high and low particle concentration develop in the column with a vertical spacing which is given by the ratio of the buoyancy of the particle load and the background buoyancy gradient. 
Small fluid intrusions develop at the side of the column associated with these layers, as dense parcels of particle-laden fluid convect downwards and then outward once the particles have sedimented from the fluid, with a lateral return flow drawing in ambient fluid. As a result, the pattern of particle-rich and particle-poor layers in the column gradually migrates upwards owing to the convective transport of particles between the particle-rich layers superposed on the background sedimentation. We consider the implications of the results for mixing by bubble plumes, for submarine blowouts of oil and gas and for the fate of plumes of waste particles discharged at the ocean surface during deep-sea mining.
Retrogressive failure of a static granular layer on an inclined plane
A. S. Russell, C. G. Johnson, A. N. Edwards, S. Viroulet, F. M. Rocha, J. M. N. T. Gray
When a layer of static grains on a sufficiently steep slope is disturbed, an upslope-propagating erosion wave, or retrogressive failure, may form that separates the initially static material from a downslope region of flowing grains. This paper shows that a relatively simple depth-averaged avalanche model with frictional hysteresis is sufficient to capture a planar retrogressive failure that is independent of the cross-slope coordinate. The hysteresis is modelled with a non-monotonic effective basal friction law that has static, intermediate (velocity decreasing) and dynamic (velocity increasing) regimes. Both experiments and time-dependent numerical simulations show that steadily travelling retrogressive waves rapidly form in this system and a travelling wave ansatz is therefore used to derive a one-dimensional depth-averaged exact solution. The speed of the wave is determined by a critical point in the ordinary differential equation for the thickness. The critical point lies in the intermediate frictional regime, at the point where the friction exactly balances the downslope component of gravity. The retrogressive wave is therefore a sensitive test of the functional form of the friction law in this regime, where steady uniform flows are unstable and so cannot be used to determine the friction law directly. Upper and lower bounds for the existence of retrogressive waves in terms of the initial layer depth and the slope inclination are found and shown to be in good agreement with the experimentally determined phase diagram. For the friction law proposed by Edwards et al. (J. Fluid. Mech., vol. 823, 2017, pp. 278–315, J. Fluid. Mech., 2019, (submitted)) the magnitude of the wave speed is slightly under-predicted, but, for a given initial layer thickness, the exact solution accurately predicts an increase in the wave speed with higher inclinations. The model also captures the finite wave speed at the onset of retrogressive failure observed in experiments.
Direct numerical simulations of hypersonic boundary-layer transition for a flared cone: fundamental breakdown
Christoph Hader, Hermann F. Fasel
Direct numerical simulations (DNS) were carried out to investigate the laminar–turbulent transition for a flared cone at Mach 6 at zero angle of attack. The cone geometry of the flared cone experiments in the Boeing/AFOSR Mach 6 Quiet Tunnel (BAM6QT) at Purdue University was used for the simulations. In the linear regime, the largest integrated spatial growth rates ( $N$ -factors) for the primary instability were obtained for a frequency of approximately $f=300~\text{kHz}$ . Low grid-resolution simulations were carried out in order to identify the azimuthal wavenumber that led to the strongest growth rates with respect to the secondary instability for a fundamental and subharmonic resonance scenario. It was found that for the BAM6QT conditions the fundamental resonance is much stronger compared to the subharmonic resonance. Subsequently, for the case which led to the strongest fundamental resonance onset, detailed investigations were carried out using high-resolution DNS. The simulation results exhibit streamwise streaks of very high skin friction and of high heat transfer at the cone surface. Streamwise 'hot' streaks on the flared cone surface were also observed in the experiments carried out at the BAM6QT facility using temperature sensitive paint. The presented findings provide strong evidence that the fundamental breakdown is a dominant and viable path to transition for the BAM6QT conditions.
A new insight into understanding the Crow and Champagne preferred mode: a numerical study
A. Boguslawski, K. Wawrzak, A. Tyliszczak
The paper presents a new insight into understanding a mechanism to trigger the Crow and Champagne preferred mode. It is shown on the basis of numerical simulations that the preferred mode is established as a result of nonlinear interactions of primary structures generated by the Kelvin–Helmholtz instability. These interactions form larger coherent vortices characterized with frequency equal to half of the frequency of the primary perturbation. The paper shows that the shear-layer thickness at the nozzle exit constitutes a key parameter that influences significantly the jet response to an external forcing. The simulations were performed for jets with different shear-layer thicknesses. For the thicker shear layer the classical Kelvin–Helmholtz instability is observed. In this case the jet response to an external varicose forcing seems to be very similar to the experimental results of Crow and Champagne. The results presented shed new light on the preferred mode and the frequency selection mechanism confirming the suggestion of Crow and Champagne that nonlinearity is responsible for the preferred frequency. Significantly different results were obtained for a jet characterized by a thin shear layer. In this case the jet could be introduced into a self-sustained regime. External forcing with a frequency equal to the frequency of the natural self-sustained mode or with its subharmonic has practically no effect on the jet dynamics. The jet response to the forcing with frequencies different from the natural one depends on the forcing amplitude. A weak forcing disturbs the self-sustained mode leading to an interaction of two different modes that is observed in spectra with many frequencies related to both the self-sustained mode and the oscillations triggered by forcing. A stronger forcing suppresses the self-sustained mode and only the frequency components related to the stimulation are observed in the spectra. 
A mechanism responsible for the jet response to an external forcing under the self-sustained regime has not been extensively studied so far and a full understanding of these phenomena needs further studies and careful analysis. | CommonCrawl |
# Algebraic varieties and their properties
In algebraic geometry, an algebraic variety is a set of solutions to a system of polynomial equations. These equations can be defined over any field, but for the purposes of this textbook, we will focus on varieties defined over the real numbers.
An algebraic variety can be described as the set of common zeros of a collection of polynomials. For example, the variety defined by the single equation $x^2 + y^2 = 1$ is the unit circle in the plane, while the variety defined by the pair of equations $x^2 + y^2 = 1$ and $x + y = 1$ is the intersection of the circle with a line: just the two points $(1, 0)$ and $(0, 1)$.
The dimension of an algebraic variety is a measure of its complexity. It is the maximum number of independent parameters needed to describe a point on the variety. For example, a line in the plane has dimension 1, while a plane in three-dimensional space has dimension 2.
An algebraic variety is said to be irreducible if it cannot be expressed as the union of two proper subvarieties. In other words, an irreducible variety cannot be broken down into smaller pieces. For example, the unit circle is an irreducible variety, while the union of two lines in the plane is not.
Smoothness is another important property of algebraic varieties. A variety is said to be smooth if it has no singular points. A singular point is a point of the variety at which all the partial derivatives of the defining polynomial vanish, so the variety has no well-defined tangent there. For example, the origin $(0, 0)$ is a singular point of the cuspidal curve defined by the equation $y^2 - x^3 = 0$.
Consider the variety defined by the equation $x^2 + y^2 - 1 = 0$. This equation represents the unit circle in the plane. This variety is irreducible, because it cannot be expressed as the union of two proper subvarieties. It is also smooth, because it has no singular points.
## Exercise
Consider the variety defined by the equation $x^2 - y^2 = 0$. Is this variety irreducible? Is it smooth?
### Solution
This variety is not irreducible, because it can be expressed as the union of two proper subvarieties: the lines $y = x$ and $y = -x$. It is also not smooth, because it has a singular point at the origin (0, 0).
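This singularity test can be made concrete in code. The following is an illustrative sketch (the dictionary representation and helper names are my own, not part of the text): a polynomial in $x$ and $y$ is stored as a dict mapping exponent pairs $(i, j)$ to coefficients, and a point is declared singular when the polynomial and both partial derivatives vanish there.

```python
# Illustrative sketch (representation and names are assumptions): a polynomial
# in x and y is stored as a dict mapping exponent pairs (i, j) to coefficients,
# so f(x, y) = sum of c * x**i * y**j over the entries.

def eval2(poly, x, y):
    """Evaluate the two-variable polynomial at (x, y)."""
    return sum(c * x**i * y**j for (i, j), c in poly.items())

def partial(poly, var):
    """Partial derivative with respect to 'x' or 'y'."""
    out = {}
    for (i, j), c in poly.items():
        if var == 'x' and i > 0:
            out[(i - 1, j)] = out.get((i - 1, j), 0) + i * c
        elif var == 'y' and j > 0:
            out[(i, j - 1)] = out.get((i, j - 1), 0) + j * c
    return out

def is_singular(poly, x, y):
    """True when (x, y) lies on the variety and the gradient vanishes there."""
    fx, fy = partial(poly, 'x'), partial(poly, 'y')
    return eval2(poly, x, y) == 0 and eval2(fx, x, y) == 0 and eval2(fy, x, y) == 0

# f(x, y) = x^2 - y^2, the union of the lines y = x and y = -x
f = {(2, 0): 1, (0, 2): -1}
print(is_singular(f, 0, 0))  # True: the origin is a singular point
print(is_singular(f, 1, 1))  # False: (1, 1) is on the curve but smooth there
```

Running it on $x^2 - y^2$ confirms the answer above: the origin is singular, while other points of the curve, such as $(1, 1)$, are smooth.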
# Polynomials and their fundamental properties
Polynomials are a fundamental concept in algebraic geometry. They are expressions that involve variables and coefficients, combined using addition, subtraction, and multiplication. For example, the polynomial $2x^3 + 3x^2 - 5x + 1$ is a polynomial in the variable $x$ with coefficients 2, 3, -5, and 1.
Polynomials can have multiple variables, such as $x^2 + y^2 - 1$, which is a polynomial in the variables $x$ and $y$. The degree of a polynomial is the highest power of the variable that appears in the polynomial. For example, the polynomial $2x^3 + 3x^2 - 5x + 1$ has degree 3.
Polynomials have several fundamental properties that are important in algebraic geometry. One property is that polynomials can be added and multiplied together to create new polynomials. For example, if we have the polynomials $2x^2 + 3x - 1$ and $x - 2$, we can add them together to get $2x^2 + 4x - 3$, or multiply them together to get $2x^3 - x^2 - 7x + 2$.
Another important property of polynomials is that they can be evaluated at specific values of the variables. For example, if we have the polynomial $2x^2 + 3x - 1$, we can evaluate it at $x = 2$ to get $2(2)^2 + 3(2) - 1 = 13$.
Consider the polynomials $f(x) = 2x^2 + 3x - 1$ and $g(x) = x - 2$. We can add these polynomials together to get $f(x) + g(x) = 2x^2 + 4x - 3$, or multiply them together to get $f(x) \cdot g(x) = 2x^3 - x^2 - 7x + 2$. We can also evaluate $f(x)$ at $x = 2$ to get $f(2) = 2(2)^2 + 3(2) - 1 = 13$.
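These operations translate directly into code. The following is a minimal sketch (the list representation and function names are my own, not from the text) that stores a polynomial as a list of coefficients, with index $i$ holding the coefficient of $x^i$:

```python
# Illustrative sketch: a polynomial stored as a list of coefficients,
# where index i holds the coefficient of x**i (lowest degree first).

def poly_add(p, q):
    """Add two polynomials given as coefficient lists."""
    n = max(len(p), len(q))
    p = p + [0] * (n - len(p))
    q = q + [0] * (n - len(q))
    return [a + b for a, b in zip(p, q)]

def poly_mul(p, q):
    """Multiply two polynomials given as coefficient lists."""
    result = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            result[i + j] += a * b
    return result

def poly_eval(p, x):
    """Evaluate a polynomial at x using Horner's rule."""
    value = 0
    for c in reversed(p):
        value = value * x + c
    return value

# f(x) = 2x^2 + 3x - 1 and g(x) = x - 2, stored lowest degree first
f = [-1, 3, 2]
g = [-2, 1]
print(poly_add(f, g))   # [-3, 4, 2]      i.e. 2x^2 + 4x - 3
print(poly_mul(f, g))   # [2, -7, -1, 2]  i.e. 2x^3 - x^2 - 7x + 2
print(poly_eval(f, 2))  # 13
```

Storing the coefficients lowest degree first makes multiplication especially simple: the coefficient of $x^{i+j}$ in the product accumulates every pairing of an $x^i$ term with an $x^j$ term.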
## Exercise
Consider the polynomials $f(x) = 3x^2 + 2x + 1$ and $g(x) = 2x - 1$.
1. Add these polynomials together.
2. Multiply these polynomials together.
3. Evaluate $f(x)$ at $x = 1$.
### Solution
1. $f(x) + g(x) = 3x^2 + 2x + 1 + 2x - 1 = 3x^2 + 4x$
2. $f(x) \cdot g(x) = (3x^2 + 2x + 1)(2x - 1) = 6x^3 - 3x^2 + 4x^2 - 2x + 2x - 1 = 6x^3 + x^2 - 1$
3. $f(1) = 3(1)^2 + 2(1) + 1 = 6$
# Roots of polynomials and their significance
The roots of a polynomial are the values of the variable that make the polynomial equal to zero. For example, the roots of the polynomial $x^2 - 4$ are $2$ and $-2$, because when we substitute $2$ or $-2$ for $x$, the polynomial evaluates to zero.
The significance of the roots of a polynomial lies in their relationship to the polynomial's graph. The roots of a polynomial correspond to the x-intercepts of its graph. In other words, they are the values of $x$ where the graph intersects the x-axis.
The number of roots a polynomial has depends on its degree. A polynomial of degree $n$ can have at most $n$ distinct roots. For example, a quadratic polynomial, which has a degree of $2$, can have at most $2$ distinct roots.
It is also possible for a polynomial to have repeated roots, where the same value of $x$ corresponds to multiple roots. For example, the polynomial $(x - 1)^2$ has a repeated root of $1$, because when we substitute $1$ for $x$, the polynomial evaluates to zero twice.
Consider the polynomial $f(x) = x^3 - 3x + 2$. To find the roots of this polynomial, we set $f(x)$ equal to zero and solve for $x$.

$x^3 - 3x + 2 = 0$
By factoring, we can rewrite the equation as:
$(x - 1)(x - 1)(x + 2) = 0$
This equation has three roots: $1$, $1$, and $-2$. However, since $1$ is a repeated root, the polynomial $f(x)$ has only two distinct roots: $1$ and $-2$.
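We can confirm this factorization numerically. The sketch below (NumPy; variable names illustrative) expands $(x - 1)(x - 1)(x + 2)$ and recovers its roots:

```python
import numpy as np

# Expand (x - 1)(x - 1)(x + 2); coefficients are listed from the highest power down.
poly = np.polymul(np.polymul([1, -1], [1, -1]), [1, 2])
print(poly)  # coefficients of x^3 - 3x + 2

# np.roots returns all roots counted with multiplicity.
roots = np.sort(np.roots(poly).real)
print(roots)  # approximately [-2, 1, 1]
```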
## Exercise
Consider the polynomial $g(x) = x^4 - 5x^2 + 4$.

1. Find the roots of this polynomial.

2. How many distinct roots does $g(x)$ have?

### Solution

1. To find the roots of $g(x)$, we set $g(x)$ equal to zero and solve for $x$:

$x^4 - 5x^2 + 4 = 0$

By factoring, we can rewrite the equation as:

$(x - 2)(x + 2)(x - 1)(x + 1) = 0$

This equation has four roots: $2$, $-2$, $1$, and $-1$.

2. $g(x)$ has four distinct roots: $2$, $-2$, $1$, and $-1$.
# Solving algorithms for finding roots of polynomials
There are several algorithms for finding the roots of polynomials. One common algorithm is the Newton-Raphson method. This method starts with an initial guess for a root and iteratively refines the guess until it converges to a more accurate root.
The Newton-Raphson method works by using the tangent line to the graph of the polynomial at a given point to find a better approximation of the root. The formula for the iteration step is:
$$x_{n+1} = x_n - \frac{f(x_n)}{f'(x_n)}$$
where $x_n$ is the current approximation of the root, $f(x_n)$ is the value of the polynomial at $x_n$, and $f'(x_n)$ is the derivative of the polynomial evaluated at $x_n$.
Another algorithm for finding roots of polynomials is the bisection method. This method works by repeatedly halving an interval in which the root lies, narrowing down the interval until the root is approximated to the desired accuracy. The bisection method requires that the polynomial changes sign over the interval.

At each step, the method checks the sign of the polynomial at the midpoint of the interval. It then keeps the half-interval whose endpoints have opposite signs, since that half must contain a root. This process is repeated until the interval becomes small enough to approximate the root.
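The bisection loop described above can be sketched in a few lines of Python (function and variable names are illustrative):

```python
def bisect(f, lo, hi, tol=1e-9):
    """Find a root of f in [lo, hi], assuming f(lo) and f(hi) have opposite signs."""
    if f(lo) * f(hi) > 0:
        raise ValueError("f must change sign over [lo, hi]")
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if f(lo) * f(mid) <= 0:   # keep the half over which f changes sign
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2

# Root of x^2 - 4 in the interval [0, 3]:
root = bisect(lambda x: x**2 - 4, 0, 3)
print(root)  # approximately 2.0
```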
Let's use the Newton-Raphson method to find the root of the polynomial $f(x) = x^3 - 2x^2 - 5$. We'll start with an initial guess of $x_0 = 2$.
First, we need to calculate the derivative of $f(x)$, which is $f'(x) = 3x^2 - 4x$.
Using the formula for the iteration step, we can calculate the next approximation of the root:
$$x_1 = x_0 - \frac{f(x_0)}{f'(x_0)} = 2 - \frac{f(2)}{f'(2)} = 2 - \frac{2^3 - 2(2)^2 - 5}{3(2)^2 - 4(2)} = 2 - \frac{-5}{4} = \frac{13}{4}$$
We can continue this process to refine our approximation of the root.
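The iteration itself is short to implement. Here is a sketch (illustrative names) applied to the example above, with the derivative supplied by hand:

```python
def newton(f, fprime, x0, tol=1e-12, max_iter=50):
    """Newton-Raphson iteration starting from the initial guess x0."""
    x = x0
    for _ in range(max_iter):
        step = f(x) / fprime(x)
        x -= step
        if abs(step) < tol:
            break
    return x

f = lambda x: x**3 - 2*x**2 - 5
fprime = lambda x: 3*x**2 - 4*x

root = newton(f, fprime, 2)
print(root)  # approximately 2.6906, the real root of x^3 - 2x^2 - 5
```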
## Exercise
Use the bisection method to find the root of the polynomial $g(x) = x^2 - 4$ within the interval $[-2, 2]$.
### Solution
To use the bisection method, we need to check the sign of $g(x)$ at the endpoints of the interval.
$g(-2) = (-2)^2 - 4 = 0$
$g(2) = (2)^2 - 4 = 0$
The polynomial evaluates to zero at both endpoints, so there is no sign change within the interval $[-2, 2]$ and the bisection method cannot be applied here. In fact, the endpoints $x = -2$ and $x = 2$ are themselves the roots of $g(x)$.
# Systems of equations and their solutions
A system of equations is a set of equations that are solved simultaneously. Each equation in the system represents a relationship between variables, and the solution to the system is a set of values for the variables that satisfy all of the equations.
There are different methods for solving systems of equations, including substitution, elimination, and matrix methods. In this section, we will focus on the elimination method.
The elimination method involves manipulating the equations in the system to eliminate one variable at a time. This is done by adding or subtracting the equations in such a way that one variable cancels out. The goal is to reduce the system to a simpler form with fewer variables, until we can solve for the remaining variables.
To illustrate the elimination method, let's consider the following system of equations:
$$
\begin{align*}
2x + 3y &= 7 \\
4x - 2y &= 2 \\
\end{align*}
$$
We can eliminate the variable $y$ by multiplying the first equation by $2$ and the second equation by $3$, and then adding the equations together:
$$
\begin{align*}
4x + 6y &= 14 \\
12x - 6y &= 6 \\
\end{align*}
$$
Adding these equations gives us:
$$16x = 20$$
Dividing both sides by $16$, we find that $x = \frac{5}{4}$.
We can substitute this value of $x$ back into one of the original equations to solve for $y$. Let's use the first equation:
$$2\left(\frac{5}{4}\right) + 3y = 7$$
Simplifying, we get:
$$\frac{5}{2} + 3y = 7$$
Subtracting $\frac{5}{2}$ from both sides, we find that $3y = \frac{9}{2}$. Dividing both sides by $3$, we get $y = \frac{3}{2}$.
Therefore, the solution to the system of equations is $x = \frac{5}{4}$ and $y = \frac{3}{2}$.
Consider the following system of equations:
$$
\begin{align*}
3x + 2y &= 8 \\
5x - 3y &= 1 \\
\end{align*}
$$
To eliminate the variable $y$, we can multiply the first equation by $3$ and the second equation by $2$, and then add the equations:

$$
\begin{align*}
9x + 6y &= 24 \\
10x - 6y &= 2 \\
\end{align*}
$$

Adding these equations gives us:

$$19x = 26$$

Dividing both sides by $19$, we find that $x = \frac{26}{19}$.

We can substitute this value of $x$ back into one of the original equations to solve for $y$. Let's use the first equation:

$$3\left(\frac{26}{19}\right) + 2y = 8$$

Simplifying, we get:

$$\frac{78}{19} + 2y = 8$$

Subtracting $\frac{78}{19}$ from both sides, we find that $2y = \frac{74}{19}$. Dividing both sides by $2$, we get $y = \frac{37}{19}$.

Therefore, the solution to the system of equations is $x = \frac{26}{19}$ and $y = \frac{37}{19}$.
## Exercise
Solve the following system of equations using the elimination method:
$$
\begin{align*}
2x - 3y &= 4 \\
3x + 4y &= 1 \\
\end{align*}
$$
### Solution
To eliminate the variable $x$, we can multiply the first equation by $3$ and the second equation by $2$, and then subtract the equations:
$$
\begin{align*}
6x - 9y &= 12 \\
6x + 8y &= 2 \\
\end{align*}
$$
Subtracting these equations gives us:
$$-17y = 10$$
Dividing both sides by $-17$, we find that $y = -\frac{10}{17}$.
We can substitute this value of $y$ back into one of the original equations to solve for $x$. Let's use the first equation:
$$2x - 3\left(-\frac{10}{17}\right) = 4$$
Simplifying, we get:
$$2x + \frac{30}{17} = 4$$
Subtracting $\frac{30}{17}$ from both sides, we find that $2x = \frac{38}{17}$. Dividing both sides by $2$, we get $x = \frac{19}{17}$.
Therefore, the solution to the system of equations is $x = \frac{19}{17}$ and $y = -\frac{10}{17}$.
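As a cross-check on the elimination above, the same system can be handed to a numerical linear solver. A sketch using NumPy (variable names illustrative):

```python
import numpy as np

# 2x - 3y = 4
# 3x + 4y = 1
A = np.array([[2.0, -3.0],
              [3.0, 4.0]])
b = np.array([4.0, 1.0])

x, y = np.linalg.solve(A, b)
print(x, y)  # approximately 19/17 and -10/17
```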
# The fundamental theorem of algebra
The fundamental theorem of algebra is a fundamental result in mathematics that states that every non-constant polynomial equation with complex coefficients has at least one complex root. In other words, every polynomial equation of the form $a_nx^n + a_{n-1}x^{n-1} + \ldots + a_1x + a_0 = 0$, where $a_n \neq 0$ and $n \geq 1$, has at least one complex solution.
This theorem has important implications for real algebraic geometry, as it guarantees the existence of solutions to polynomial equations. It also provides a foundation for many algorithms and computational techniques in real algebraic geometry.
To understand the fundamental theorem of algebra, let's consider a simple example. Suppose we have the polynomial equation $x^2 + 1 = 0$. This equation has no real solutions, as there is no real number whose square is equal to $-1$. However, according to the fundamental theorem of algebra, this equation must have at least one complex solution.
In fact, the complex solutions to this equation are $x = i$ and $x = -i$, where $i$ is the imaginary unit defined as $\sqrt{-1}$. These complex solutions satisfy the equation $x^2 + 1 = 0$.
Consider the polynomial equation $x^3 - 2x^2 + 3x - 2 = 0$. According to the fundamental theorem of algebra, this equation must have at least one complex solution.

To find the solutions to this equation, we can use various methods, such as factoring, synthetic division, or numerical methods. In this example, let's use factoring.

By testing small integer values, we can see that $x = 1$ is a solution, since $1 - 2 + 3 - 2 = 0$. This means that $(x - 1)$ is a factor of the polynomial. Using polynomial long division to divide the polynomial by $(x - 1)$, we obtain the quotient:

$$x^2 - x + 2$$

The remaining two solutions are the roots of $x^2 - x + 2 = 0$. By the quadratic formula, they are $x = \frac{1 \pm i\sqrt{7}}{2}$, so together with $x = 1$ the equation has the three complex solutions guaranteed by the fundamental theorem of algebra.
## Exercise
Consider the polynomial equation $x^4 + 4x^3 + 6x^2 + 4x + 1 = 0$. Use the fundamental theorem of algebra to determine the number of complex solutions this equation has.
### Solution
According to the fundamental theorem of algebra, a polynomial equation of degree $n$ has exactly $n$ complex solutions, counting multiplicity. In this case, the degree of the polynomial equation is $4$, so it has exactly $4$ complex solutions.
Note that the solutions can be real or complex. In this case, the polynomial factors as $(x + 1)^4$, so the equation has the single solution $x = -1$, counted with multiplicity $4$.
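Numerically, `np.roots` returns all four complex roots counted with multiplicity. For this polynomial, which factors as $(x + 1)^4$, the computed roots cluster around $-1$; a repeated root is numerically ill-conditioned, so small deviations are expected:

```python
import numpy as np

# x^4 + 4x^3 + 6x^2 + 4x + 1 = (x + 1)^4
roots = np.roots([1, 4, 6, 4, 1])
print(len(roots))  # 4 roots, counted with multiplicity
print(roots)       # all approximately -1
```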
# Real algebraic geometry and its relation to complex algebraic geometry
Real algebraic geometry is a branch of mathematics that studies the geometric properties of solutions to polynomial equations with real coefficients. It is closely related to complex algebraic geometry, which studies the geometric properties of solutions to polynomial equations with complex coefficients.
Real algebraic geometry provides a framework for understanding and analyzing real-world problems that can be modeled using polynomial equations. It has applications in various fields, including computer science, robotics, economics, and physics.
The relationship between real algebraic geometry and complex algebraic geometry is based on two facts: every complex number $a + bi$ corresponds to the point $(a, b)$ in the real plane, and the non-real solutions of a polynomial equation with real coefficients come in complex-conjugate pairs. This correspondence allows us to study the geometric properties of solutions in the complex plane and then translate them back to the real plane.
To illustrate the relationship between real and complex algebraic geometry, let's consider the polynomial equation $x^2 + 1 = 0$. This equation has no real solutions, as there is no real number whose square is equal to $-1$. However, according to the fundamental theorem of algebra, this equation must have at least one complex solution.
In the complex plane, the solutions to this equation are $x = i$ and $x = -i$, where $i$ is the imaginary unit defined as $\sqrt{-1}$. Under the correspondence above, these complex solutions correspond to the points $(0, 1)$ and $(0, -1)$ in the real plane.
By studying the geometric properties of the solutions in the complex plane, we can gain insights into the behavior of the solutions in the real plane. This allows us to analyze the polynomial equation and understand its properties, such as the number of real solutions or the existence of certain geometric structures.
Consider the polynomial equation $x^3 - 2x^2 + 3x - 1 = 0$. This equation has three complex solutions in the complex plane. By analyzing the geometric properties of these solutions, we can determine the behavior of the solutions in the real plane.
Let's compute the solutions and plot them in the complex plane:

```python
import numpy as np
import matplotlib.pyplot as plt

# Coefficients of x^3 - 2x^2 + 3x - 1, listed from the highest power down.
roots = np.roots([1, -2, 3, -1])
print(roots)  # one real root and a complex-conjugate pair

plt.scatter(roots.real, roots.imag, color='red', label='Solutions of $x^3 - 2x^2 + 3x - 1 = 0$')
plt.axhline(0, color='black', linewidth=0.5)
plt.axvline(0, color='black', linewidth=0.5)
plt.legend()
plt.xlabel('Real part')
plt.ylabel('Imaginary part')
plt.title('Solutions in the complex plane')
plt.grid(True)
plt.show()
```

From the plot, we can see that there is one real solution at approximately $x \approx 0.4302$ and a complex-conjugate pair at approximately $x \approx 0.7849 \pm 1.3071i$. The real solution lies on the real axis, and the conjugate pair is placed symmetrically about it; the real solution corresponds to the single x-intercept of the graph of $x^3 - 2x^2 + 3x - 1$ in the real plane.
By studying the geometric properties of these solutions, such as their distances and angles, we can gain insights into the behavior of the solutions in the real plane. This allows us to analyze the polynomial equation and understand its properties in the context of real algebraic geometry.
## Exercise
Consider the polynomial equation $x^2 - 4x + 4 = 0$. Use the fundamental theorem of algebra to determine the number of complex solutions this equation has.
### Solution
According to the fundamental theorem of algebra, a polynomial equation of degree $n$ has exactly $n$ complex solutions, counting multiplicity. In this case, the degree of the polynomial equation is $2$, so it has exactly $2$ complex solutions.
Note that the solutions can be real or complex. In this case, factoring gives $(x - 2)^2 = 0$, so the equation has the single real solution $x = 2$, counted with multiplicity $2$.
# Elimination techniques for solving systems of equations
When solving systems of equations, elimination techniques can be used to simplify the system and find the solutions. These techniques involve manipulating the equations to eliminate one variable at a time, until a single equation with one variable remains.
One common elimination technique is the method of substitution. In this method, one equation is solved for one variable in terms of the other variables, and then this expression is substituted into the other equations. This process eliminates one variable from the system, reducing the number of equations by one.
Another elimination technique is the method of addition or subtraction. In this method, the equations are added or subtracted in such a way that one variable is eliminated. This is achieved by multiplying one or both equations by suitable constants, so that when the equations are added or subtracted, one of the variables cancels out.
Elimination techniques can be used to solve systems of linear equations, as well as systems of nonlinear equations. These techniques are especially useful when the number of equations and variables is large, as they can simplify the system and make it easier to solve.
To illustrate the method of substitution, let's consider the following system of equations:
$$
\begin{align*}
2x + 3y &= 7 \\
4x - 5y &= -3 \\
\end{align*}
$$
We can solve the first equation for $x$ in terms of $y$:
$$
2x = 7 - 3y \\
x = \frac{7 - 3y}{2}
$$
Substituting this expression for $x$ into the second equation, we get:

$$
4\left(\frac{7 - 3y}{2}\right) - 5y = -3 \\
2(7 - 3y) - 5y = -3 \\
14 - 6y - 5y = -3 \\
-11y = -17 \\
y = \frac{17}{11}
$$

Substituting this value of $y$ back into the expression for $x$, we can solve for $x$:

$$
x = \frac{7 - 3\left(\frac{17}{11}\right)}{2} = \frac{7 - \frac{51}{11}}{2} = \frac{\frac{26}{11}}{2} = \frac{13}{11}
$$

Therefore, the solution to the system of equations is $x = \frac{13}{11}$ and $y = \frac{17}{11}$.
Consider the following system of equations:
$$
\begin{align*}
3x + 2y &= 10 \\
4x - 3y &= 7 \\
\end{align*}
$$
To eliminate the variable $x$, we can multiply the first equation by $4$ and the second equation by $3$, and then subtract the second equation from the first:
$$
\begin{align*}
4(3x + 2y) &= 4(10) \\
3(4x - 3y) &= 3(7) \\
12x + 8y &= 40 \\
12x - 9y &= 21 \\
\end{align*}
$$
Subtracting the second equation from the first, we get:
$$
12x + 8y - (12x - 9y) = 40 - 21 \\
12x + 8y - 12x + 9y = 19 \\
17y = 19 \\
y = \frac{19}{17}
$$
Substituting this value of $y$ back into the first equation, we can solve for $x$:
$$
3x + 2\left(\frac{19}{17}\right) = 10 \\
3x + \frac{38}{17} = 10 \\
3x = 10 - \frac{38}{17} \\
3x = \frac{170 - 38}{17} \\
3x = \frac{132}{17} \\
x = \frac{44}{17}
$$
Therefore, the solution to the system of equations is $x = \frac{44}{17}$ and $y = \frac{19}{17}$.
## Exercise
Solve the following system of equations using elimination techniques:
$$
\begin{align*}
2x + 3y &= 5 \\
5x - 4y &= 7 \\
\end{align*}
$$
### Solution
To eliminate the variable $x$, we can multiply the first equation by $5$ and the second equation by $2$, and then subtract the second equation from the first:
$$
\begin{align*}
5(2x + 3y) &= 5(5) \\
2(5x - 4y) &= 2(7) \\
10x + 15y &= 25 \\
10x - 8y &= 14 \\
\end{align*}
$$
Subtracting the second equation from the first, we get:
$$
10x + 15y - (10x - 8y) = 25 - 14 \\
10x + 15y - 10x + 8y = 11 \\
23y = 11 \\
y = \frac{11}{23}
$$
Substituting this value of $y$ back into the first equation, we can solve for $x$:
$$
2x + 3\left(\frac{11}{23}\right) = 5 \\
2x + \frac{33}{23} = 5 \\
2x = 5 - \frac{33}{23} \\
2x = \frac{115 - 33}{23} \\
2x = \frac{82}{23} \\
x = \frac{41}{23}
$$
Therefore, the solution to the system of equations is $x = \frac{41}{23}$ and $y = \frac{11}{23}$.
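Since the answers are exact rational numbers, the elimination can also be carried out with exact arithmetic. The sketch below uses Python's `fractions` module and Cramer's rule, which is the elimination above written in closed form (function and variable names are illustrative):

```python
from fractions import Fraction

def solve_2x2(a, b, c, d, e, f):
    """Solve ax + by = e, cx + dy = f exactly, via Cramer's rule."""
    a, b, c, d, e, f = (Fraction(v) for v in (a, b, c, d, e, f))
    det = a * d - b * c
    if det == 0:
        raise ValueError("system is singular")
    x = (e * d - b * f) / det
    y = (a * f - e * c) / det
    return x, y

# 2x + 3y = 5, 5x - 4y = 7:
x, y = solve_2x2(2, 3, 5, -4, 5, 7)
print(x, y)  # 41/23 11/23
```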
# Computational algebraic geometry and its role in algorithmic approaches
Computational algebraic geometry is a field that combines algebraic geometry with computer science and algorithms. It involves the development and implementation of algorithms for solving problems in algebraic geometry using computational methods.
One of the main goals of computational algebraic geometry is to find efficient algorithms for solving systems of polynomial equations. These systems arise in many areas of mathematics and science, and their solutions can provide valuable insights and solutions to a wide range of problems.
Computational algebraic geometry plays a crucial role in algorithmic approaches to real algebraic geometry. It provides the tools and techniques necessary to solve problems and compute solutions in real algebraic geometry. By using computational methods, we can analyze and understand the geometric properties of real algebraic varieties, and develop algorithms for solving problems in this field.
One important application of computational algebraic geometry is in robotics and motion planning. By using algorithms from computational algebraic geometry, we can analyze the geometry of robot configurations and plan their motions in a given environment. This can help us design efficient and safe robotic systems.
Another application is in computer-aided geometric design (CAGD). Computational algebraic geometry provides the mathematical foundations and algorithms for designing and manipulating geometric objects, such as curves and surfaces. This is used in computer graphics, animation, and industrial design.
Computational algebraic geometry also has applications in cryptography and coding theory. It provides algorithms for solving problems in number theory and algebraic coding theory, which are used to secure communication and protect data.
Overall, computational algebraic geometry plays a crucial role in algorithmic approaches to real algebraic geometry. It provides the necessary tools and techniques to solve problems and compute solutions in this field, and has applications in various areas of mathematics, science, and engineering.
One example of the use of computational algebraic geometry in real algebraic geometry is the computation of the real solutions of a system of polynomial equations. Given a system of polynomial equations, we can use algorithms from computational algebraic geometry to compute the real solutions of the system.
For example, consider the following system of polynomial equations:
$$
\begin{align*}
x^2 + y^2 &= 1 \\
x^3 - y &= 0 \\
\end{align*}
$$
We can use algorithms such as the Gröbner basis algorithm or the cylindrical algebraic decomposition algorithm to compute the real solutions of this system. These algorithms involve manipulating the polynomials and their coefficients to obtain a simplified representation of the solutions.
In this case, the second equation gives $y = x^3$, so the first equation becomes $x^2 + x^6 = 1$, with real solutions $x \approx \pm 0.8260$. The real solutions of the system are therefore approximately $(x, y) \approx (0.8260, 0.5636)$ and $(x, y) \approx (-0.8260, -0.5636)$. These solutions are the points where the unit circle and the curve $y = x^3$ intersect.
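This particular system can also be checked with elementary numerics: substituting $u = x^2$ turns $x^2 + x^6 = 1$ into the cubic $u^3 + u - 1 = 0$. The sketch below (NumPy; names illustrative) recovers the intersection points:

```python
import numpy as np

# Substitute y = x^3 into x^2 + y^2 = 1, then u = x^2: u^3 + u - 1 = 0.
u_roots = np.roots([1, 0, 1, -1])
u = [r.real for r in u_roots if abs(r.imag) < 1e-9 and r.real > 0][0]

x = np.sqrt(u)
y = x**3
print((x, y), (-x, -y))  # approximately (0.826, 0.564) and (-0.826, -0.564)

# Both points satisfy the original system:
for px, py in [(x, y), (-x, -y)]:
    assert abs(px**2 + py**2 - 1) < 1e-9
    assert abs(px**3 - py) < 1e-9
```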
## Exercise
Consider the following system of polynomial equations:
$$
\begin{align*}
x^2 + y^2 &= 4 \\
x^3 - y &= 0 \\
\end{align*}
$$
Use computational algebraic geometry to compute the real solutions of this system.
### Solution
To compute the real solutions of the system, we can use algorithms from computational algebraic geometry, such as the Gröbner basis algorithm or the cylindrical algebraic decomposition algorithm.

Here the second equation gives $y = x^3$, so the first equation becomes $x^2 + x^6 = 4$, with real solutions $x \approx \pm 1.1742$. The real solutions of the system are therefore approximately $(x, y) \approx (1.1742, 1.6190)$ and $(x, y) \approx (-1.1742, -1.6190)$. These solutions are the points where the circle of radius $2$ centered at the origin and the curve $y = x^3$ intersect.
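A quick numerical check (NumPy; names illustrative) substitutes $y = x^3$ and solves the resulting one-variable equation $x^6 + x^2 - 4 = 0$, keeping only the real roots:

```python
import numpy as np

# x^2 + (x^3)^2 = 4  ->  x^6 + x^2 - 4 = 0, coefficients from the highest power down.
roots = np.roots([1, 0, 0, 0, 1, 0, -4])
real_x = sorted(r.real for r in roots if abs(r.imag) < 1e-9)
print(real_x)  # approximately [-1.1742, 1.1742]

solutions = [(x, x**3) for x in real_x]
print(solutions)  # approximately [(-1.1742, -1.6190), (1.1742, 1.6190)]
```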
# Applications of algorithmic approaches in real algebraic geometry
Algorithmic approaches in real algebraic geometry have a wide range of applications in various fields of mathematics, science, and engineering. These approaches provide powerful tools for solving problems and analyzing geometric properties of real algebraic varieties.
One application is in optimization and control theory. Algorithmic approaches can be used to solve optimization problems involving real algebraic constraints. For example, in control theory, we can use these approaches to design controllers that satisfy certain performance specifications and stability conditions.
Another application is in computer-aided geometric design (CAGD). Algorithmic approaches can be used to design and manipulate geometric objects, such as curves and surfaces, with desired properties. This is used in computer graphics, animation, and industrial design.
Algorithmic approaches also have applications in robotics and motion planning. By using these approaches, we can analyze the geometry of robot configurations and plan their motions in a given environment. This can help us design efficient and safe robotic systems.
Furthermore, algorithmic approaches in real algebraic geometry have applications in computer vision and image processing. They can be used to analyze and recognize geometric shapes and structures in images, and to extract meaningful information from visual data.
In addition, these approaches have applications in computational biology and bioinformatics. They can be used to analyze and model biological systems, such as protein folding and gene regulatory networks, and to understand their geometric properties.
Overall, algorithmic approaches in real algebraic geometry have a wide range of applications in various fields. They provide powerful tools for solving problems and analyzing geometric properties, and have the potential to make significant contributions to the advancement of science and technology.
One example of the application of algorithmic approaches in real algebraic geometry is in computer-aided geometric design. By using algorithms from real algebraic geometry, we can design and manipulate geometric objects with desired properties.
For example, consider the problem of designing a smooth curve that passes through a set of given points. We can use algorithms such as Bézier curves or B-splines, which are based on real algebraic geometry, to construct curves that satisfy these constraints.
These algorithms involve solving systems of polynomial equations to determine the control points and coefficients of the curves. By manipulating these equations and their solutions, we can design curves with desired properties, such as smoothness, shape control, and interpolation of given points.
This application of algorithmic approaches in real algebraic geometry is used in computer graphics, animation, and industrial design. It allows us to create visually appealing and mathematically precise curves and surfaces, which are essential in these fields.
## Exercise
Consider the problem of designing a smooth curve that passes through the points $(0, 0)$, $(1, 1)$, and $(2, 0)$. Use algorithmic approaches in real algebraic geometry to design a curve that satisfies these constraints.
### Solution
To design a curve that passes through the given points, we can use algorithms such as Bézier curves or B-splines.

For example, we can use a quadratic Bézier curve with control points $(0, 0)$, $(1, 2)$, and $(2, 0)$. This curve is defined by the equation:

$$
C(t) = (1-t)^2(0, 0) + 2t(1-t)(1, 2) + t^2(2, 0)
$$

where $t$ is a parameter that varies between 0 and 1.

A Bézier curve always passes through its first and last control points, so $C(0) = (0, 0)$ and $C(1) = (2, 0)$. The middle control point is not interpolated in general; it only shapes the curve. Choosing $(1, 2)$ as the middle control point makes the curve pass through the remaining data point as well, since $C\left(\tfrac{1}{2}\right) = \tfrac{1}{4}(0, 0) + \tfrac{1}{2}(1, 2) + \tfrac{1}{4}(2, 0) = (1, 1)$. The resulting curve is smooth and interpolates all three given points.
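A quadratic Bézier curve interpolates only its endpoint control points, so to pass through $(1, 1)$ at $t = \tfrac{1}{2}$ the middle control point must be placed at $(1, 2)$. A minimal numerical check (the function name is illustrative):

```python
def quadratic_bezier(p0, p1, p2, t):
    """Evaluate the quadratic Bezier curve with control points p0, p1, p2 at parameter t."""
    x = (1 - t)**2 * p0[0] + 2 * t * (1 - t) * p1[0] + t**2 * p2[0]
    y = (1 - t)**2 * p0[1] + 2 * t * (1 - t) * p1[1] + t**2 * p2[1]
    return (x, y)

p0, p1, p2 = (0, 0), (1, 2), (2, 0)
print(quadratic_bezier(p0, p1, p2, 0.0))  # (0.0, 0.0)
print(quadratic_bezier(p0, p1, p2, 0.5))  # (1.0, 1.0)
print(quadratic_bezier(p0, p1, p2, 1.0))  # (2.0, 0.0)
```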
# The future of algorithmic approaches in real algebraic geometry
The future of algorithmic approaches in real algebraic geometry looks promising. With advancements in computational power and algorithms, we can expect even more efficient and powerful tools for solving problems and analyzing geometric properties of real algebraic varieties.
One area of future development is in the design of algorithms that can handle larger and more complex problems. As the size and complexity of real algebraic varieties increase, it becomes important to develop algorithms that can handle these challenges. This includes developing faster algorithms for solving systems of polynomial equations, as well as algorithms for analyzing and manipulating higher-dimensional varieties.
Another area of future development is in the integration of algorithmic approaches with other areas of mathematics and computer science. Real algebraic geometry has connections to many other fields, such as optimization, control theory, and computer vision. By integrating algorithmic approaches with these fields, we can develop new techniques and applications that can solve real-world problems more effectively.
Furthermore, the development of algorithmic approaches in real algebraic geometry can benefit from advancements in machine learning and artificial intelligence. These fields can provide new insights and techniques for solving problems in real algebraic geometry, and can help automate the process of designing algorithms and analyzing geometric properties.
Overall, the future of algorithmic approaches in real algebraic geometry holds great potential. With continued research and development, we can expect to see even more powerful and efficient tools for solving problems and analyzing geometric properties. These tools will have a wide range of applications in various fields, and will contribute to the advancement of science and technology.
For how many real values of $c$ do we have $|3-ci| = 7$?
We have $|3-ci| = \sqrt{3^2 + (-c)^2} = \sqrt{c^2 + 9}$, so $|3-ci| = 7$ gives us $\sqrt{c^2 + 9} = 7$. Squaring both sides gives $c^2 + 9 = 49$, so $c^2=40$. Taking the square root of both sides gives $c = 2\sqrt{10}$ and $c=-2\sqrt{10}$ as solutions, so there are $\boxed{2}$ real values of $c$ that satisfy the equation.
We also could have solved this equation by noting that $|3-ci| = 7$ means that the complex number $3-ci$ is 7 units from the origin in the complex plane. Therefore, it is on the circle centered at the origin with radius 7. The complex number $3-ci$ is also on the vertical line that intersects the real axis at 3, which is inside the aforementioned circle. Since this line goes inside the circle, it must intersect the circle at $\boxed{2}$ points, which correspond to the values of $c$ that satisfy the original equation.
\begin{document}
\title{Logspace computations in {C}oxeter groups and graph groups} \author{Volker Diekert\inst{1} \and Jonathan Kausch\inst{1} \and Markus Lohrey\inst{2}} \institute{
FMI, Universit\"at Stuttgart, Germany \and Insitut f\"ur Informatik, Universit\"at Leipzig, Germany}
\maketitle
\begin{abstract} Computing normal forms in groups (or monoids) is in general harder than solving the word problem (equality testing). However, normal form computation has a much wider range of applications. It is therefore interesting to investigate the complexity of computing normal forms for important classes of groups.
For Coxeter groups we show that the following algorithmic tasks can be solved by a deterministic Turing machine using logarithmic work space, only: 1. Compute the length of any geodesic normal form. 2. Compute the set of letters occurring in any geodesic normal form. 3. Compute the Parikh-image of any geodesic normal form in case that all defining relations have even length (i.e., in even Coxeter groups). 4. For right-angled Coxeter groups we can actually compute the short length normal form in logspace. (Note that short length normal forms are geodesic.)
Next, we apply the results to right-angled Artin groups. They are also known as free partially commutative groups or as graph groups. As a consequence of our result on right-angled Coxeter groups we show that shortlex normal forms in graph groups can be computed in logspace, too. Graph groups play an important r{\^o}le in group theory, and they have a close connection to concurrency theory. As an application of our results we show that the word problem for free partially commutative inverse monoids is in logspace. This result generalizes a result of Ondrusch and the third author on free inverse monoids. Concurrent systems which are deterministic and co-deterministic can be studied via inverse monoids.
\end{abstract}
\section{Introduction} The study of group theoretical decision problems, like the word problem (Is a given word equal to $1$ in the group?), the conjugacy problem (Are two given words conjugated in the group?), and the isomorphism problem (Do two given group presentations yield isomorphic groups?), is a classical topic in combinatorial group theory with a long history dating back to the beginning of the 20th century, see the survey \cite{Miller} for more details.
With the emergence of computational complexity theory, the complexity of these decision problems in various classes of groups has developed into an active research area, where algebraic methods as well as computer science techniques complement one another in a fruitful way.
In this paper we are interested in group theoretical problems which can be solved efficiently in parallel. More precisely, we are interested in {\em deterministic logspace}, called simply {\em logspace} in the following. Note that logspace is at a lower level in the $\mathsf{NC}$-hierarchy of parallel complexity classes:\footnote{$\mathsf{NC}^i$ denotes the class of languages that can be accepted by (uniform) boolean circuits of polynomial size and depth $O(\log^i(n))$, where all gates have fan-in $\leq 2$, see \cite{Vollmer99} for more details. We will not use the $\mathsf{NC}$-hierarchy in the rest of this paper.} $$ \mathsf{NC}^1 \subseteq \mathsf{LOGSPACE} \subseteq \mathsf{NC}^2 \subseteq \mathsf{NC}^3 \subseteq \cdots \subseteq \mathsf{NC} = \bigcup_{i \geq 1} \mathsf{NC}^i \subseteq \mathsf{P} $$
It is a standard conjecture in complexity theory that $\mathsf{NC}$ is strictly contained in $\mathsf{P}$.
A fundamental result in this context was shown in \cite{lz77,Sim79}: The word problem of finitely generated linear groups belongs to logspace. In \cite{lz77}, Lipton and Zalcstein proved this result for fields of characteristic $0$. The case of a field of prime characteristic was considered in \cite{Sim79} by Simon. The class of groups with a word problem in logspace is further investigated in \cite{Waack81}. Another important result is Cai's $\mathsf{NC}^2$ algorithm for the word problem of a hyperbolic group \cite{cai92stoc}. In \cite{Lo05ijfcs} this result was improved to {\sf LOGCFL}, which is the class of all languages that are logspace-reducible to a context-free language. {\sf LOGCFL} is a subclass of $\mathsf{NC}^2$ and hence in the intersection of the class of problems which can be decided in polynomial time and the class of problems which can be decided in space $\log^2(n)$. As a parallel complexity class {\sf LOGCFL} coincides with the (uniform) class ${\sf SAC}^1$.
Often, it is not enough to solve the word problem, but one has to compute a normal form for a given group element. Fix a finite generating set $\Gamma$ (w.l.o.g. closed under inverses) for the group $G$. Then, a {\em geodesic} for $g \in G$ is a shortest word over $\Gamma$ that represents $g$. By choosing the lexicographically smallest (w.r.t. a fixed ordering on $\Gamma$) word among all geodesics for $g$, one obtains the {\em shortlex normal form} of $g$. The problem of computing geodesics and various related problems were studied in \cite{DLS,elder2010,ElRe2010,MyRoUsVe08,PRaz}. It turned out that there are groups with an easy word problem (in logspace), but where simple questions related to geodesics are computationally hard. For instance, every metabelian group embeds (effectively) into a direct product of linear groups; hence its word problem can be solved in logspace. On the other hand, it is shown in \cite{DLS} that the question whether a given element $x$ of the wreath product $\mathbb{Z}/ 2 \mathbb{Z} \wr (\mathbb{Z} \times \mathbb{Z})$ (a metabelian group) has geodesic length at most $n$ is {\sf NP}-complete. A corresponding result was shown in \cite{MyRoUsVe08} for the free metabelian group of rank 2. These results show that in general one cannot compute shortlex normal forms in metabelian groups in polynomial time (unless $\mathsf{P} = \mathsf{NP}$). On the positive side, for {\em shortlex automatic groups} \cite{ech92} (i.e., automatic groups where the underlying regular set of representatives is the set of shortlex normal forms) shortlex normal forms can be computed in quadratic time. Examples of shortlex automatic groups are Coxeter groups, Artin groups of large type, and hyperbolic groups. So for all these classes, shortlex normal forms can be computed in quadratic time. In \cite{MyRoUsVe08}, it is also noted that geodesics in nilpotent groups (which are in general not automatic) can be computed in polynomial time.
In this paper, we deal with the problem of computing geodesics and shortlex normal forms in logspace. A function can be computed in logspace, if it can be computed by a deterministic \emph{logspace transducer}. The latter is a Turing machine with three tapes: (i) a read-only input tape, (ii) a read/write work tape of length $\mathcal{O}(\log n)$, and (iii) a write-only output tape. The output is written sequentially from left to right onto the output tape. Every logspace transducer can be transformed into an equivalent deterministic polynomial time machine. Still better, it can be simulated by a Boolean circuit of polynomial size and $\mathcal{O}(\log^2 n)$ depth. Although it is not completely obvious, the class of logspace computable functions is closed under composition. (See e.g. the textbook \cite{Papa} for these facts.)
Recently, the class of groups, where geodesics and shortlex normal forms can be computed in logspace, attracted attention, see \cite{ElderElstonOstheimer11}, where it was noted among other results that shortlex normal forms in free groups can be computed in logspace. (Implicitly, this result was also shown in \cite{LohOnd07}.) In this paper, we deal with the problem of computing shortlex normal forms for Coxeter groups. Coxeter groups are discrete reflection groups and play an important role in many parts of mathematics, see \cite{bjofra05,davis08}. Every Coxeter group is linear and therefore has a logspace word problem \cite{bjofra05,davis08}. Moreover, as mentioned above, Coxeter groups are shortlex automatic \cite{BrinkHowlett93,Casselman94}. Therefore shortlex normal forms can be computed in quadratic time. However, no efficient parallel algorithms are known so far. In particular, it is open whether shortlex normal forms in Coxeter groups can be computed in logspace. We do not solve this problem in this paper, but we are able to compute in logspace some important invariants of geodesics. More precisely, we are able to compute in logspace (i) the length of the shortlex normal form of a given element (\refthm{thm-geodesic-length}) and (ii) the alphabet of symbols that appear in the shortlex normal form of a given element (\refthm{thm:allcoxoderwas}). The proofs for these results combine non-trivial results for Coxeter groups with some advanced tools from computational algebra. More precisely, we use the following results: \begin{itemize} \item The Chinese remainder representation of a given number $m$ (which is the tuple of remainders $m \text{ mod } p_i$ for the first $k$ primes $p_1, \ldots, p_k$, where $m< p_1 p_2 \cdots p_k$) can be transformed in logspace into the binary representation of $m$ \cite{ChDaLi01,HeAlBa02}. This result is the key for proving that iterated multiplication and division can be computed in logspace. 
\item Arbitrary algebraic constants can be approximated in logspace up to polynomially many bits. This result was recently shown in \cite{DaPra11,Jer11}. \end{itemize} For the case of even Coxeter groups, i.e., Coxeter groups where all defining relations have even length, we can combine \refthm{thm-geodesic-length} and \refthm{thm:allcoxoderwas} in one more general result, saying that the Parikh-image of the shortlex normal form can be computed in logspace (\refthm{thm-parikh}). The Parikh-image of a word $w \in \Sigma^*$ is the image of $w$
under the canonical homomorphism from $\Sigma^*$ to $\mathbb{N}^{|\Sigma|}$.
As mentioned above, it remains open, whether shortlex normal forms in Coxeter groups can be computed in logspace. In the second part of this paper, we prove that for the important subclass of {\em right-angled Coxeter groups} shortlex normal forms can be computed in logspace (\refthm{thm:lexshort}). A right-angled Coxeter group is defined by a finite undirected graph $(\Sigma, I)$ by taking $\Sigma$ as the set of group generators and adding the defining relations $a^2=1$ for all $a \in \Sigma$ and $ab=ba$ for all edges $(a,b) \in I$. We use techniques from the theory of Mazurkiewicz traces \cite{die90}. More precisely, we describe right-angled Coxeter groups by strongly confluent length-reducing trace rewriting systems. Moreover, using the geometric representation of right-angled Coxeter groups, we provide an elementary proof that the alphabet of symbols that appear in a geodesic for $g$ can be computed in logspace from $g$ (\refcor{cor:nixoderwas}).\footnote{In contrast, the proof of \refthm{thm:allcoxoderwas}, which generalizes \refcor{cor:nixoderwas} to all Coxeter groups, is more difficult in the sense that it uses geometry and more facts {}from \cite{bjofra05}.} In contrast to general Coxeter groups, for right-angled Coxeter groups this alphabetic information suffices in order to compute shortlex normal forms in logspace.
Right-angled Coxeter groups are tightly related to graph groups, which are also known as \emph{free partially commutative groups} or as \emph{right-angled Artin groups}. A graph group is defined by a finite undirected graph $(\Sigma, I)$ by taking $\Sigma$ as the set of group generators and adding the defining relations $ab=ba$ for all edges $(a,b) \in I$. Hence, a right-angled Coxeter group is obtained from a graph group by adding the relations $a^2=1$ for all generators $a$. In recent years, graph groups have received a lot of attention in group theory because of their rich subgroup structure \cite{BesBr97,CrWi04,GhPe07}. On the algorithmic side, (un)decidability results were obtained for many important group-theoretic decision problems in graph groups \cite{CrGoWi09,dm06}. There is a standard embedding of a graph group into a right-angled Coxeter group \cite{hw99}. Hence, graph groups are also linear and have logspace word problems. Using the special properties of this embedding, we can show that also for graph groups, shortlex normal forms can be computed in logspace (\refthm{thm:lexshort}). We remark that this is an optimal result in the sense that already for free groups, logspace is the smallest known complexity class for the word problem. Clearly, computing shortlex normal forms is at least as difficult as solving the word problem.
Finally, we apply \refthm{thm:lexshort} to {\em free partially commutative inverse monoids}. These monoids arise naturally in the context of deterministic and co-determin\-istic concurrent systems. This includes many real systems, because they can be viewed as deterministic concurrent systems with \emph{undo}-operations. In \cite{DiekertLohreyMiller08} it was shown that the word problem for a free partially commutative inverse monoid can be solved in time ${\cal O}(n \log(n))$. (Decidability of the word problem is due to Da Costa \cite{vel03}.) Using our logspace algorithm for computing shortlex normal forms in a graph group, we can show that the word problem for a free partially commutative inverse monoid can be solved in logspace (\refthm{space}). Again, with state-of-the-art techniques, this can be viewed as an optimal result. It also generalizes a corresponding result for free inverse monoids {}from \cite{LohOnd07}. Let us emphasize that in order to obtain \refthm{space} we have to be able to compute shortlex normal forms in graph groups in logspace; knowing only that the word problem is in logspace would not have been sufficient for our purposes.
Let us remark that for all our results it is crucial that the group (resp., the free partially commutative inverse monoids) is fixed and not part of the input. For instance, it is not clear whether for a given undirected graph $(\Sigma,I)$ and a word $w$ over $\Sigma \cup \Sigma^{-1}$ one can check in logspace whether $w = 1$ in the graph group defined by the graph $(\Sigma,I)$.
The work on this paper started at the AMS Sectional Meeting, Las Vegas, May 2011, and was motivated by the lecture of Gretchen Ostheimer \cite{ElderElstonOstheimer11}. A preliminary version of our results appeared as a conference abstract at the Latin American Symposium on Theoretical Informatics (LATIN 2012), \cite{dkl12latin}. In contrast to the conference abstract this paper provides full proofs and it contains new material about even Coxeter groups and how to compute geodesic lengths in all Coxeter groups.
\section{Notation} Throughout $\Sigma$ (resp.{} $\GG$) denotes a finite \emph{alphabet}. This is a finite set, sometimes equipped with a linear order. An element of $\Sigma$ is called a \emph{letter}. By $\Sigma^*$ we denote the free monoid over $\Sigma$. For a word $w\in \Sigma^*$ we denote by $\alpha(w)$ the {\em alphabet of $w$}: it is the set of letters occurring in $w$.
With $|w|$ we denote the length of $w$. The {\em empty word } has length $0$; and it is denoted by 1 as other neutral elements in monoids or groups.
All groups and monoids $M$ in this paper are assumed to be finitely generated; and they come with a monoid homomorphism $\pi: \Sigma^* \to M$. Frequently we assume that $M$ comes with an involution\footnote{An {\em involution} on a set $\Gamma$ is a permutation $a \mapsto \ov a$ such that $\ov{\ov a} = a$. An involution of a monoid satisfies in addition $\ov {xy}= \ov y\; \ov x$.} $x \mapsto \ov x$ on $M$, and then we require that $\pi(\Sigma) \cup \ov{\pi( \Sigma)}$ generates $M$ as a monoid.
If the monoid $M$ is a group $G$, then the involution is always given by taking inverses, thus $\ov x = \oi x$. Moreover, $G$ becomes a factor group of the \emph{free group} $F(\Sigma)$ thanks to $\pi: \Sigma^* \to G$.
Let $\ov \Sigma$ be a disjoint copy of $\Sigma$ and $\GG = \Sigma \cup \ov \Sigma$. There is a unique extension of the natural mapping $\Sigma \to \ov \Sigma: \, a \mapsto \ov a$ such that $\GG^*$ becomes a monoid with involution: We let $\ol{\ol a} = a$ and $\ov{a_1 \cdots a_n} = \ov {a_n} \cdots \ov {a_1}$. Hence, we can lift our homomorphism $\pi: \Sigma^* \to M$ to a surjective monoid homomorphism $\pi: \GG^* \to M$ which respects the involution, i.e., $\pi(\ov{x}) = \ov{\pi(x)}$.
Given a surjective homomorphism $\pi: \GG^* \to M$ and a linear order on $\GG$ we can define the geodesic length and the shortlex normal form for elements in $M$ as follows. For $w \in M$, the \emph{geodesic length} $\Abs{w}$ is the length of a shortest word in $\pi^{-1}(w)$. The \emph{shortlex normal form} of $w$ is the lexicographical first word in the finite set $\set{u \in \pi^{-1}(w)}{\abs{u}= \Abs{w}}$. By a \emph{geodesic} we mean any word in the finite set $\set{u \in \pi^{-1}(w)}{\abs{u}= \Abs{w}}$.
\section{Coxeter groups}\label{sec:cox} A {\em Coxeter group} $G$ is given by a generating set $\Sigma = \os{a_1 \lds a_n}$ of $n$ generators and a symmetric $n \times n$ matrix $M = (m_{i,j})_{1 \leq i , j \leq n}$ over $\mathbb{N}$ such that $m_{i,j} = 1 \iff i = j$. The defining relations are $(a_i a_j)^{m_{i,j}} = 1$ for $1 \leq i,j \leq n$. In particular, $a_i^2 = 1$ for $1 \leq i \leq n$. Traditionally, one writes the entry $\infty$ instead of $0$ in the Coxeter matrix $M$, and then $m_{i,j}$ becomes the order of the element $a_ia_j$.
A Coxeter group is called \emph{even}, if all $m_{i,j}$ are even numbers for $i \neq j$. It is called \emph{right-angled}, if $m_{i,j} \in \{0,1,2\}$ for all $i,j$. The defining relations of a right-angled Coxeter group can be rewritten in the following form: $a_i^2 = 1$ for $1 \leq i \leq n$ and $a_i a_j = a_j a_i$ for $(i,j)\in I$ where $I$ denotes a symmetric and irreflexive relation $I \subseteq \{1,\ldots ,n\} \times \{1,\ldots ,n\}$. Thus, one could say that a right-angled Coxeter group is a \emph{free partially commutative} Coxeter group. Readers interested only in right-angled Coxeter groups or in the application to graph groups (i.e., right-angled Artin groups) may proceed directly to \refsec{mazu}.
\subsection{Computing the geodesic alphabet and geodesic length}\label{sec:allcox} Throughout this subsection $G$ denotes a Coxeter group given by a fixed $n \times n$ matrix $M$ as above. One can show that if $u$ and $v$ are geodesics with $u = v$ in $G$ then $\alpha(u) = \alpha(v)$ \cite[Cor.~1.4.8]{bjofra05}. (Recall that $\alpha(x)$ denotes the alphabet of the word $x$.) We will show how to compute this alphabet in logspace. Moreover, we will show that the geodesic length $\Abs{w}$ of a given $w \in G$ can also be computed in logspace.
Let $\mathbb{R}^{\Sigma}$ be the $n$-dimensional real vector space where the letter $a$ is identified with the $a$-th unit vector. Thus, vectors will be written as formal sums $\sum_{b\in \Sigma} \lambda_b b$ with real coefficients $\lambda_b\in \mathbb{R}$. We fix the standard geometric representation $\sigma: G \to \mathrm{GL}(n,\mathbb{R})$, where we write $\sigma_w$ for the mapping $\sigma(w)$, see e.g. \cite[Sect.~4.2]{bjofra05}: \begin{equation} \label{eq-geo-repr} \sigma_{a_i}({a_j}) = \begin{cases} {a_j} + 2 \cos(\pi/m_{i,j}) \cdot {a_i} & \text{ if } m_{i,j} \neq 0 \\ {a_j} + 2 \cdot {a_i} & \text{ if } m_{i,j} = 0 \end{cases} \end{equation} Note that for $a \in \Sigma$, $\sigma_w(a)$ cannot be the zero vector, since $\sigma_w$ is invertible. We write $\sum_{b\in \Sigma} \lambda_b b \geq 0$ if $\lambda_b \geq 0$ for all $b\in \Sigma$. The following fundamental lemma can be found in \cite[Prop. 4.2.5]{bjofra05}:
\begin{lemma}\label{lem:geqzero} Let $w \in G$ and $a\in \Sigma$. We have $$ \Abs{wa} = \begin{cases}
\Abs{w}+1 & \text{ if } \sigma_w(a) \geq 0 \\ \Abs{w}-1 & \text{ if } \sigma_w(a) \leq 0 \end{cases} $$ \end{lemma}
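As a quick sanity check (not part of the logspace machinery developed below), one can verify numerically that the matrices defined by \eqref{eq-geo-repr} satisfy the Coxeter relations. A minimal Python sketch, where the Coxeter matrix $M$ is given as a list of rows and the entry $0$ stands for $\infty$:

```python
import math

def sigma(i, M):
    """Matrix of sigma_{a_i} in the standard geometric representation;
    column j holds the image of the basis vector a_j, i.e.
    a_j + 2*cos(pi/m_ij) * a_i  (resp. a_j + 2*a_i if m_ij = 0)."""
    n = len(M)
    A = [[float(r == c) for c in range(n)] for r in range(n)]
    for j in range(n):
        A[i][j] += 2.0 * math.cos(math.pi / M[i][j]) if M[i][j] else 2.0
    return A

def matmul(A, B):
    n = len(A)
    return [[sum(A[r][k] * B[k][c] for k in range(n)) for c in range(n)]
            for r in range(n)]
```

For instance, for $M = \left(\begin{smallmatrix} 1 & 3 \\ 3 & 1 \end{smallmatrix}\right)$ both $\sigma_a^2$ and $(\sigma_a \sigma_b)^3$ come out as the identity matrix, up to rounding.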
\begin{lemma} \label{lemma-main} For a given $w \in G$ and $a,b\in \Sigma$, one can check in logspace, whether $\lambda_b \geq 0$, where $\sum_{b\in \Sigma} \lambda_b b = \sigma_w(a)$. \end{lemma}
In order to prove \reflem{lemma-main}, we need several tools. Let $p_i$ denote the $i^{\text{th}}$ prime number. It is well-known from number theory that the $i^{\text{th}}$ prime requires $O(\log(i))$ bits in its binary representation. For a number $0 \leq M < \prod_{i=1}^m p_i$ we define the {\em Chinese remainder representation} $\mathsf{CRR}_m(M)$ as the $m$-tuple $$ \mathsf{CRR}_m(M) = (M \text{ mod } p_i)_{1 \leq i \leq m} . $$ By the Chinese remainder theorem, the mapping $M \mapsto \mathsf{CRR}_m(M)$ is a bijection from the interval $[0, \prod_{i=1}^m p_i-1]$ to $\prod_{i=1}^m [0,p_i-1]$. By the following theorem, we can transform a {\sf CRR}-representation very efficiently into binary representation.
\begin{theorem}[{\cite[Thm.~3.3]{ChDaLi01}}] \label{theorem hesse und co} For a given tuple $(r_1,\ldots, r_m) \in \prod_{i=1}^m [0,p_i-1]$, we can compute in logspace the binary representation of the unique number $M \in [0, \prod_{i=1}^m p_i-1]$ such that $\mathsf{CRR}_m(M) = (r_1, \ldots, r_m)$. \end{theorem} By \cite{HeAlBa02}, the transformation from the {\sf CRR}-representation to the binary representation can be even computed by $\mathsf{DLOGTIME}$-uniform $\mathsf{TC}^0$-circuits.
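To illustrate the Chinese remainder representation itself (the content of \refthm{theorem hesse und co} is that the conversion is possible in logspace, which the following naive sketch does not model), here is the bijection together with the standard CRT reconstruction in Python:

```python
def crr(M, primes):
    """Chinese remainder representation of M w.r.t. the given primes."""
    return [M % p for p in primes]

def from_crr(residues, primes):
    """Reconstruct the unique M in [0, p_1*...*p_k - 1] with the given CRR,
    using modular inverses (pow(., -1, p) needs Python >= 3.8)."""
    N = 1
    for p in primes:
        N *= p
    x = 0
    for r, p in zip(residues, primes):
        Ni = N // p                   # product of the remaining primes
        x += r * Ni * pow(Ni, -1, p)  # inverse of Ni modulo p
    return x % N
```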
Our second tool is a gap theorem for values $p(\zeta)$, where $p(x) \in \mathbb{Z}[x]$ and $\zeta$ is a root of unity. For a polynomial $p(x) = \sum_{i=0}^n a_i x^i$ with integer coefficients $a_i$ let $|p(x)| = \sum_{i=0}^n |a_i|$. \begin{theorem}[{\cite[Thm.~3]{Litow2010}}] \label{thm:canny}
Let $p(x) \in \mathbb{Z}[x]$ and let $\zeta$ be a $d^{th}$ root of unity such that $p(\zeta) \neq 0$. Then $|p(\zeta)| > |p(x)|^{-d}$. \end{theorem}
Finally, our third tool for the proof of \reflem{lemma-main} is the following result, which was recently shown (independently) in \cite{DaPra11,Jer11}.
\begin{theorem}[{\cite[Thm.~2]{DaPra11}},{\cite[Cor.~4.6]{Jer11}}] \label{thm-approx-alg} For every fixed algebraic number $\alpha\in\mathbb{R}$ the following problem can be solved in logspace:
\noindent INPUT: A unary coded number $n$.
\noindent OUTPUT:
A binary representation of the integer $\floor{2^n\alpha}$. \end{theorem}
\begin{remark}\label{rem:jer11} The result of \cite{Jer11} is actually stronger showing that the output in \refthm{thm-approx-alg} can be computed in uniform $\mathsf{TC}^0$. \end{remark} {\em Proof of \reflem{lemma-main}.} We decompose the
logspace algorithm into several logspace computations. The linear mapping $\sigma_w$ can be written as a product of matrices $A_1 A_2 \cdots A_{|w|}$, where every $A_i$ is an $(n \times n)$-matrix with entries from $\{0, 1,2\} \cup \{2 \cos(\pi/m_{i,j})\mid m_{i,j}\neq 0 \}$ (which is the set of
coefficients that appear in \eqref{eq-geo-repr}). Then, we have to check whether this matrix product applied to the unit vector $a$ has a non-negative value in the $b$-coordinate. This value is the entry $(A_1 A_2 \cdots A_{|w|})_{a,b}$ of the product matrix $A_1 A_2 \cdots A_{|w|}$.
Let $m$ be the least common multiple of all $m_{i,j} \neq 0$; this is still a constant. Let $\zeta = e^{\pi i/m} $, which is a primitive $(2m)^{th}$ root of unity. If $m = m_{i,j} \cdot k$, we have $$ 2 \cdot \cos \bigg( \frac{\pi}{m_{i,j}} \bigg) = \zeta^k + \zeta^{2m-k} . $$ Hence, we can assume that every $A_i$ is an $(n \times n)$-matrix over $\mathbb{Z}[\zeta]$. We now replace $\zeta$ by a variable $X$ in all matrices
$A_1, \ldots, A_{|w|}$; let us denote the resulting matrices over
the ring $\mathbb{Z}[X]$ with $B_1, \ldots, B_{|w|}$. Each entry in one of these matrices is a polynomial of degree $<2m$ with coefficients bounded by $2$. More precisely, for every entry $p(X)$
of a matrix $B_i$ we have $|p(X)| \leq 2$. Let $|B_i|$ be the sum of all $|p(X)|$ taken over all entries of the matrix $B_i$. Hence, $|B_i| \leq 2 n^2$.
\noindent {\em Step 1.}
In a first step, we show that the product $B_1 \cdots B_{|w|}$ can be computed in logspace in the ring $\mathbb{Z}[X]/(X^{2m}-1)$ (keeping in mind that $\zeta^{2m}=1$).
Every entry in the product $B_1 \cdots B_{|w|}$ is a polynomial of degree $<2m$ with coefficients bounded in absolute value by
$|B_1| \cdots |B_{|w|}| \leq (2n^2)^{|w|}$.
Here $n$ is a fixed constant. Hence, every coefficient in the matrix $B_1 \cdots B_{|w|}$ can be represented
with $O(|w|)$ bits. In logspace, one can compute a list of
the first $k$ prime numbers $p_1, p_2, \ldots, p_k$, where $k \in O(|w|)$
is chosen such that $\prod_{i=1}^k p_i > (2n^2)^{|w|}$ \cite{ChDaLi01}. Each $p_i$ is bounded by $|w|^{O(1)}$.
For every $1 \leq i \leq k$, we can compute in logspace
the matrix product $B_1 \cdots B_{|w|}$ in
$\mathbb{F}_{p_i}[X]/(X^{2m}-1)$, i.e., we compute the coefficient of each polynomial in $B_1 \cdots B_{|w|}$ modulo $p_i$. In the language of
\cite{ChDaLi01}: For each coefficient of a polynomial in $B_1 \cdots B_{|w|}$, we compute its Chinese remainder representation. From this representation, we can compute in logspace by \refthm{theorem hesse und co} the binary representation of the coefficient. This shows that the product $B = B_1 \cdots B_{|w|}$ can be computed in logspace in the ring $\mathbb{Z}[X]/(X^{2m}-1)$.
\noindent
{\em Step 2.} We know that if $X$ is substituted by $\zeta$ in the matrix $B$, then we obtain the product $A = A_1 \cdots A_{|w|}$ (the matrix we are actually interested in), which is a matrix over $\mathbb{R}$. Every entry of the matrix $A$ is of the form $\sum_{j=0}^{2m-1} a_j \zeta^j$, where
$a_j$ is a number with $O(|w|)$ bits that we have computed in Step 1. If $\sum_{j=0}^{2m-1} a_j \zeta^j \neq 0$, then by \refthm{thm:canny}, we have $$
\left|\sum_{j=0}^{2m-1} a_j \zeta^j \right| >
\left(\sum_{j=0}^{2m-1} |a_j|\right)^{-2m} . $$
Since $m$ is a constant, and $|a_j| \leq 2^{O(|w|)}$, we have $$ \sum_{j=0}^{2m-1} a_j \zeta^j = 0 \quad \text{ or } \quad
\left|\sum_{j=0}^{2m-1} a_j \zeta^j \right| > 2^{-c|w|} $$ for a constant $c$. Therefore, to check whether $\sum_{j=0}^{2m-1} a_j \zeta^j \geq 0$ or
$\sum_{j=0}^{2m-1} a_j \zeta^j \leq 0$, it suffices to approximate this sum up to $c|w|$ many fractional bits. This is the goal of the second step.
Since we know that $\sum_{j=0}^{2m-1} a_j \zeta^j \in \mathbb{R}$, we can replace the sum symbolically by its real part, which is $\sum_{j=0}^{2m-1} a_j \cos(j\pi/m)$. In order to approximate
this sum up to $c|w|$ many fractional bits, it suffices to approximate each
$\cos(j\pi/m)$ up to $d |w|$ many fractional bits (recall that $|a_j| \leq 2^{O(|w|)}$), where the constant $d$ is large enough.
Every number $\cos(q \cdot \pi)$ for $q \in \mathbb{Q}$ is algebraic; this seems to be folklore and follows easily from de Moivre's formula ($(\cos\theta+i\sin\theta)^n=\cos(n\theta)+i\sin(n\theta)$). Therefore, by \refthm{thm-approx-alg}, every number
$\cos(j\pi/m)$ ($0 \leq j \leq 2m-1$) can be approximated in logspace up to $d |w|$ many fractional bits. This concludes the proof. \qed
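The identity $2 \cos(\pi/m_{i,j}) = \zeta^k + \zeta^{2m-k}$ from Step 1 is easily confirmed numerically; a small Python check with the illustrative values $m_{i,j} = 3$ and $k = 4$ (so $m = 12$):

```python
import cmath, math

m_ij, k = 3, 4                        # illustrative values; m = m_ij * k
m = m_ij * k
zeta = cmath.exp(1j * math.pi / m)    # primitive (2m)-th root of unity
lhs = 2.0 * math.cos(math.pi / m_ij)
# zeta^k = e^{i*pi/m_ij} and zeta^(2m-k) = e^{-i*pi/m_ij}, so the sum is real
rhs = zeta ** k + zeta ** (2 * m - k)
```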
\noindent \reflem{lemma-main} can be used in order to compute in logspace the geodesic length $\Abs{w}$ for a given group element $w \in G$:
\begin{theorem} \label{thm-geodesic-length} For a given word $w \in \Sigma^*$, the geodesic length $\Abs w$ can be computed in logspace. \end{theorem}
\begin{proof} By \reflem{lem:geqzero}, the following algorithm correctly computes $\Abs w$ for $w = a_1 \cdots a_k$.
\noindent $\ell := 0$;\\ {\bf for} $i=1$ {\bf to} $k$ {\bf do} \\ \hspace*{.5cm} {\bf if} $\sigma_{a_1 \cdots a_{i-1}}(a_i) \geq 0$ {\bf then}\\ \hspace*{1cm} $\ell := \ell+1$\\ \hspace*{.5cm} {\bf else}\\ \hspace*{1cm} $\ell := \ell-1$\\ \hspace*{.5cm} {\bf endif} \\ {\bf endfor} \\ {\bf return} $\ell$.
\noindent By \reflem{lemma-main} it can be implemented in logspace. \qed \end{proof}
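This loop admits a direct implementation. The following Python sketch is of course not a logspace algorithm (it stores the full matrix product and uses floating-point arithmetic instead of the exact methods of \reflem{lemma-main}), but it illustrates the procedure on small examples; letters are given as indices into the Coxeter matrix $M$, with the entry $0$ standing for $\infty$:

```python
import math

def sigma(i, M):
    """Matrix of sigma_{a_i}; column j is the image of the basis vector a_j."""
    n = len(M)
    A = [[float(r == c) for c in range(n)] for r in range(n)]
    for j in range(n):
        A[i][j] += 2.0 * math.cos(math.pi / M[i][j]) if M[i][j] else 2.0
    return A

def geodesic_length(word, M, eps=1e-9):
    """Geodesic length of the element represented by `word` (a list of
    letter indices), following the loop above."""
    n = len(M)
    P = [[float(r == c) for c in range(n)] for r in range(n)]  # sigma of the prefix
    ell = 0
    for a in word:
        # sigma_prefix(a) is column a of P; it is >= 0 or <= 0 by the lemma
        ell += 1 if all(P[r][a] >= -eps for r in range(n)) else -1
        A = sigma(a, M)
        # extend the prefix: sigma_{wa} = sigma_w composed with sigma_a
        P = [[sum(P[r][k] * A[k][c] for k in range(n)) for c in range(n)]
             for r in range(n)]
    return ell
```

For the Coxeter matrix $M = \left(\begin{smallmatrix} 1 & 3 \\ 3 & 1 \end{smallmatrix}\right)$ (the symmetric group $S_3$), the word $abab$ gets geodesic length $2$ (it equals $ba$) and $(ab)^3$ gets length $0$.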
We finally apply \reflem{lemma-main} in order to compute in logspace the set of all letters that occur in a geodesic for a given group element $w \in G$. As remarked before, this alphabet is independent of the concrete geodesic for $w$.
Introduce a new letter $x\not\in\Sigma$ with $x^2 =1$, but no other new defining relation. This yields the Coxeter group $G' = G * (\mathbb{Z}/ 2\mathbb{Z})$ generated by $\Sigma'= \Sigma \cup \os{x}$. Thus, $ax$ is of infinite order in $G'$ for all $a \in \Sigma$. Clearly, $\Abs{wx} > \Abs{w}$ for all $w \in G$. Hence, $\sigma_w(x) \geq 0$ for all $w \in G$ by \reflem{lem:geqzero}.
\begin{lemma}\label{lem:x} Let $w \in G$ and $\sigma_w(x) = \sum_{b\in \Sigma'} \lambda_b b $. Then for all $b \in \Sigma$ we have $\lambda_b \neq 0$ if and only if\xspace the letter $b$ appears in the shortlex normal form of $w$. \end{lemma}
\begin{proof}
We may assume that $w$ is a geodesic in $G$. We prove the result by induction on $\Abs w = |w|$. If $w=1$, then the assertion is trivial. If $b \in \Sigma$ does not occur as a letter in $w$, then it is clear that $\lambda_b = 0$. Thus, we may assume that $b \in \alpha(w)$ and we have to show that $\lambda_b \neq 0$. We may write $w= ua$ with $\Abs{uax} > \Abs{ua} > \Abs{u}$.
We have $\sigma_w(x) = \sigma_u\sigma_a(x) = \sigma_u(x + 2a) = \sigma_u(x) + 2\sigma_u(a)$. The standard geometric representation yields moreover $\sigma_w(x) = x + \sum_{c\in \Sigma} \lambda_c c$, where $\lambda_c \geq 0$ for all $c\in \Sigma$ by \reflem{lem:geqzero}. As $\Abs{ua } > \Abs{u}$ we get $\sigma_u(a)\geq 0$ by \reflem{lem:geqzero}. Moreover, by induction (and the fact $\Abs{ux} > \Abs{u}$), we know that for all letters $c\in \alpha(u)$ the corresponding coefficient in $\sigma_u(x)$ is strictly positive. Thus, we are done if $b \in \alpha(u)$. So, the remaining case is that $b = a \not\in\alpha(u)$. However, in this case $\sigma_u(a) = a + \sum_{c\in \Sigma \setminus\{a\}} \mu_c c$. Hence $\lambda_a \geq 2$. \qed \end{proof}
\begin{theorem}\label{thm:allcoxoderwas} There is a logspace transducer which on input $w \in \Sigma^*$ computes the set of letters occurring in the shortlex normal form of $w$. \end{theorem}
\begin{proof} By \reflem{lem:x}, we have to check for every letter $b \in \Sigma$, whether $\lambda_b = 0$, where $\sum_{b\in \Sigma'} \lambda_b b = \sigma_w(x)$. By \reflem{lemma-main} (applied to the Coxeter group
$G'$) this is possible in logspace. \qed \end{proof}
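Leaving the logspace aspect aside, \reflem{lem:x} directly yields a simple procedure: extend the Coxeter matrix by a row and column for the fresh generator $x$ (all new off-diagonal entries are $0$, i.e. $\infty$) and read off the support of $\sigma_w(x)$. A floating-point Python sketch, with letters given as indices into the Coxeter matrix $M$ and the entry $0$ standing for $\infty$:

```python
import math

def geodesic_alphabet(word, M, eps=1e-9):
    """Indices of the letters occurring in the shortlex normal form of
    `word`, read off from the support of sigma_w(x) for a fresh generator
    x with no relations."""
    n = len(M)
    # extended Coxeter matrix: m(x, x) = 1 and m(x, a) = 0 (infinite order)
    Mx = [row + [0] for row in M] + [[0] * n + [1]]
    # v = sigma_w(x); since sigma_{a_1...a_k} = sigma_{a_1} ... sigma_{a_k},
    # apply the letters to the basis vector x from the right
    v = [0.0] * n + [1.0]
    for a in reversed(word):
        # sigma_a adds sum_j v_j * c_{a,j} to the a-coordinate, where
        # c_{a,j} = 2*cos(pi/m_{a,j}), resp. 2 if m_{a,j} = 0
        c = sum(v[j] * (2.0 * math.cos(math.pi / Mx[a][j]) if Mx[a][j] else 2.0)
                for j in range(n + 1))
        v[a] += c
    return {b for b in range(n) if abs(v[b]) > eps}
```

For $M = \left(\begin{smallmatrix} 1 & 3 \\ 3 & 1 \end{smallmatrix}\right)$, the word $aa$ yields the empty alphabet, whereas $aba$ yields $\{a, b\}$.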
Let us remark that the use of \reflem{lemma-main} in the proof of \refthm{thm:allcoxoderwas} can be avoided, using the technique from \cite{lz77} and \reflem{lem:x}. Every $\lambda_b$ belongs to the ring $\mathbb{Z}[\zeta] \cong \mathbb{Z}[X]/\Phi(X)$, where $\zeta$ is a primitive $(2m)^{th}$ root of unity, $\Phi(X)$ is the $(2m)^{th}$ cyclotomic polynomial, and $m$ is the least common multiple of all $m_{i,j} \neq 0$. In order to check whether $\lambda_b = 0$, we can check whether the value is zero $\bmod \; r$ with respect to all $r$ up to a polynomial threshold.
\subsection{Computing the geodesic Parikh-image in even Coxeter groups}\label{sec:evencox}
In this section we assume that $G$ is an even Coxeter group. Thus, the entries $m_{i,j}$ are even for all $i \neq j$.
For a letter $a \in \Sigma$ and a word $w \in \Sigma^*$, we denote by $\abs{w}_a$ the number of occurrences of $a$ in $w$. The {\em Parikh-image} of $w$ is the vector $[\, \abs{w}_a\, ]_{a \in \Sigma} \in \mathbb{N}^\Sigma$. In other words, the Parikh-image of $w$ is the image of $w$ under the canonical homomorphism from the free monoid $\Sigma^*$ to the free commutative monoid $\mathbb{N}^{\Sigma}$.
We show that for even Coxeter groups, the Parikh-image of geodesics can be computed in logspace. Actually, all geodesics for a given group element of an even Coxeter group have the same Parikh-image:
\begin{lemma}\label{welldef} Let $G$ be an even Coxeter group and let $u,v \in \Sigma^*$ be geodesics with $u=v$ in $G$. Then we have $\abs{u}_a = \abs{v}_a$ for all $a \in \Sigma$. \end{lemma}
\begin{proof} Let $a,b \in \Sigma$ be letters such that $(ab)^m=1$ for some $m\geq 2$. Since $G$ is even, all such values $m$ are even and we obtain the relation $(ab)^{m/2}=(ba)^{m/2}$, which does not affect the Parikh-image. Now, it follows from a well-known result about Tits' rules (cf.~\cite{bjofra05}) that geodesics can be transformed into each other by using only the relations $(ab)^{m/2}=(ba)^{m/2}$. Consequently, $\abs{u}_a = \abs{v}_a$ for all $a \in \Sigma$. \qed \end{proof}
\begin{lemma}\label{lemma-parikh} Let $G$ be an even Coxeter group, $a \in \Sigma$, and let $u, w$ be geodesics such that $w a = u$ in $G$. Then there exists $\varepsilon\in \{1,-1\}$
such that $|u| = |w|+\varepsilon$ and $\abs{u}_a = \abs{w}_a+ \varepsilon$. For all $b \in \Sigma\setminus\{a\}$ we have $\abs{u}_b = \abs{w}_b$. \end{lemma}
\begin{proof} By \reflem{lem:geqzero} there exists $\varepsilon\in \{1,-1\}$
with $|u| = |w|+\varepsilon$. Moreover, since $a^2=1$, we have $ua=w$ and $wa=u$ in $G$.
Hence, if $|w| = |u|+1$ (resp., $|u| = |w|+1$), then $ua$ and $w$ (resp., $wa$ and $u$) are geodesics defining the same group element in $G$. \reflem{welldef} implies that $\abs{ua}_c = \abs{w}_c$ (resp., $\abs{wa}_c = \abs{u}_c$) for all $c \in \Sigma$.
This implies the conclusion of the lemma. \qed \end{proof}
\begin{theorem} \label{thm-parikh} Let $G$ be an even Coxeter group. For a given word $w \in \Sigma^*$, the Parikh-image of the shortlex normal form for $w$ can be computed in logspace.
\end{theorem}
\begin{proof} \reflem{lemma-parikh} shows that the following straightforward modification of the logspace algorithm in (the proof of) \refthm{thm-geodesic-length} computes the Parikh-image of the shortlex normal form for $w$ correctly.
Let $w = a_1 \cdots a_k$ be the input word.
\noindent {\bf for all} $a \in \Sigma$ {\bf do} $\ell_a := 0$;\\ {\bf for} $i=1$ {\bf to} $k$ {\bf do} \\ \hspace*{.5cm} {\bf if} $\sigma_{a_1 \cdots a_{i-1}}(a_i) \geq 0$ {\bf then}\\ \hspace*{1cm} $\ell_{a_i} := \ell_{a_i}+1$\\ \hspace*{.5cm} {\bf else}\\ \hspace*{1cm} $\ell_{a_i} := \ell_{a_i}-1$\\ \hspace*{.5cm} {\bf endif} \\ {\bf endfor}\\ {\bf return} $[\, \ell_a\, ]_{a \in \Sigma}$. \qed \end{proof}
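A direct (non-logspace, floating-point) Python sketch of this modified loop, with letters given as indices into the Coxeter matrix $M$ and the entry $0$ standing for $\infty$:

```python
import math

def sigma(i, M):
    """Matrix of sigma_{a_i}; column j is the image of the basis vector a_j."""
    n = len(M)
    A = [[float(r == c) for c in range(n)] for r in range(n)]
    for j in range(n):
        A[i][j] += 2.0 * math.cos(math.pi / M[i][j]) if M[i][j] else 2.0
    return A

def parikh_of_normal_form(word, M, eps=1e-9):
    """Parikh-image of the shortlex normal form, valid for even Coxeter
    groups: one counter per letter, updated by the sign of sigma_prefix(a)."""
    n = len(M)
    P = [[float(r == c) for c in range(n)] for r in range(n)]  # sigma of the prefix
    ell = [0] * n
    for a in word:
        ell[a] += 1 if all(P[r][a] >= -eps for r in range(n)) else -1
        A = sigma(a, M)
        P = [[sum(P[r][k] * A[k][c] for k in range(n)) for c in range(n)]
             for r in range(n)]
    return ell
```

For the even Coxeter matrix $M = \left(\begin{smallmatrix} 1 & 4 \\ 4 & 1 \end{smallmatrix}\right)$, the word $(ab)^3$ has shortlex normal form $ba$ (since $(ab)^4 = 1$) and hence Parikh-image $(1,1)$.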
\section{Mazurkiewicz traces and graph groups}\label{mazu}
In the rest of the paper, we will deal with right-angled Coxeter groups. As explained in Section~\ref{sec:cox}, a right-angled Coxeter group can be specified by a finite undirected graph $(\Sigma,I)$. The set $\Sigma$ is the generating set and the relations are $a^2=1$ for all $a \in \Sigma$ and $ab=ba$ for all $(a,b) \in I$. Hence, $I$ specifies a partial commutation relation, and elements of a right-angled Coxeter group can be represented by partially commutative words, also known as Mazurkiewicz traces. In this section, we will introduce some basic notions from the theory of Mazurkiewicz traces, see \cite{die90,dr95} for more details.
An \emph{independence alphabet} is a pair $(\Sigma, I)$, where $\Sigma$ is a finite set (or \emph{alphabet}) and $I \subseteq \Sigma \times \Sigma$ is an irreflexive and symmetric relation, called the \emph{independence relation}. Thus, $(\Sigma, I)$ is a finite undirected graph. The complementary relation $D = ( \Sigma \times \Sigma) \setminus I$ is called a \emph{dependence relation}. It is reflexive and symmetric. We extend $(\Sigma, I)$ to a graph $(\GG, I_\Gamma)$, where $\GG = \Sigma \cup \ov \Sigma$ with $\Sigma \cap \ov \Sigma = \emptyset$, and $I_\Gamma$ is the minimal independence relation with $I \subseteq I_\Gamma$ and such that $(a,b)\in I_\Gamma$ implies $(a,\ov b)\in I_\Gamma$. The independence alphabet $(\Sigma, I)$ defines a \emph{free partially commutative monoid} (or {\em trace monoid}) $M(\Sigma, I)$ and a \emph{free partially commutative group} $G(\Sigma, I)$ by: \begin{align*} M(\Sigma, I) &= \Sigma^*/ \set{ab = ba}{(a,b) \in I}, \\ G(\Sigma, I) &= F(\Sigma)/ \set{ab = ba}{(a,b) \in I}. \end{align*} Free partially commutative groups are also known as {\em right-angled Artin groups} or {\em graph groups}. Elements of $M(\Sigma, I)$ are called \emph{({M}azur\-kie\-wicz\xspace) traces}. They have a unique description as {\em dependence graphs}, which are node-labelled acyclic graphs defined as follows. Let $u = a_1\cdots a_n \in \Sigma^*$ be a word. The vertex set of the dependence graph $\DG(u)$
is $\{1,\ldots,n\}$ and vertex $i$ is labelled with $a_i \in \Sigma$. There is an arc from vertex $i$ to $j$ if and only if $i<j$ and $(a_i,a_j) \in D$. Now, two words define the same trace in $M(\Sigma,I)$ if and only if their dependence graphs are isomorphic. A dependence graph is acyclic, so its transitive closure is a labelled partial order $\prec$, which can be uniquely represented by its {\em Hasse diagram}: There is an arc from $i$ to $j$ in the Hasse diagram, if $i \prec j$ and there does not exist $k$ with $i \prec k \prec j$.
A trace $u\in M(\Sigma,I)$ is a \emph{factor} of $v\in M(\Sigma,I)$, if $v \in M(\Sigma,I) u M(\Sigma,I)$. The set of letters occurring in a trace $u$ is denoted by $\alpha(u)$. The independence relation $I$ is extended to traces by letting $(u,v)\in I$, if $\alpha(u) \times \alpha(v) \subseteq I$. We also write $I(a) = \set{b \in \Sigma}{(a,b) \in I}$. A trace $u$ is called a \emph{prime} if $\DG(u)$ has exactly one maximal element. Thus, if $u$ is a prime, then we can write $u$ as $u=va$ in $M(\Sigma,I)$, where $a\in \Sigma$ and $v \in M(\Sigma,I)$ are uniquely defined. Moreover, this property characterizes primes. A \emph{prime prefix} of a trace $u$ is a prime trace $v$ such that $u = vx$ in $M(\Sigma,I)$ for some trace $x$. We will use the following simple fact.
\begin{lemma} \label{lemma-prim-prefixes} Let $(\Sigma,I)$ be a fixed independence relation. There is a logspace transducer that on input $u \in M(\Sigma,I)$ outputs a list of all prime prefixes of $u$. \end{lemma}
\begin{proof} The prime prefixes of $u$ correspond to the downward-closed subsets of the dependence graph $\DG(u)$ that have a unique maximal element. Assume that $u = a_1 a_2 \cdots a_n$ with $a_i \in \Sigma$. Our logspace transducer works in $n$ phases. In the $i$-th phase it outputs the sequence of all symbols $a_j$ ($j \leq i$) such that there exists a path in $\DG(u)$ from $j$ to $i$.
Note that there exists a path from $j$ to $i$ in $\DG(u)$ if and only if there is such a path of length at most $|\Sigma|$. Since $\Sigma$ is fixed, the existence of such a path can be checked in logspace by examining all sequences $1 \leq i_1 < i_2 < \cdots < i_k = i$ with $k \leq
|\Sigma|$. Such a sequence can be stored in logarithmic space since
$|\Sigma|$ is a constant. \qed \end{proof}

We use standard notation from the theory of rewriting systems, cf.~\cite{bo93springer}. Let $M = M(\Sigma, I)$. A \emph{trace rewriting system} is a finite set of rules $S \subseteq M\times M$. A rule is often written in the form $\ell \ra r$. The system $S$ defines a one-step rewriting relation $\RAS{}{S} \; \subseteq M \times M$ by $x\RAS{}{S}y$ if there exist $(\ell,r)\in S$ and $u,v \in M$ with $x = u\ell v$ and $y = ur v$ in $M$. By $\RAS{*}{S}$, we denote the reflexive and transitive closure of $\RAS{}{S}$. By $\IRR(S)$ we denote the set of traces to which no rule of $S$ applies. If $S$ is confluent and terminating, then for every $u \in M$ there is a unique $\wh u \in \IRR(S)$ with $u \RAS{*}{S} \wh u$, and $\IRR(S)$ is a set of normal forms for the quotient monoid $M/S$.
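The transducer of Lemma~\ref{lemma-prim-prefixes} above can be mimicked directly (without the logspace restriction): for each position $i$ of the input word, output the labels of all positions from which there is a path to $i$ in $\DG(u)$. A Python sketch of this idea (our own code and naming):

```python
def prime_prefixes(u, I):
    """For each position i of the word u, output the prime prefix of the
    trace [u] whose unique maximal element is i: the labels of all
    positions j <= i with a path j -> i in DG(u), in word order."""
    n = len(u)
    dependent = lambda x, y: frozenset((x, y)) not in I
    # reach[j][i]: there is a (possibly empty) path from j to i in DG(u)
    reach = [[False] * n for _ in range(n)]
    for j in range(n):
        reach[j][j] = True
        for i in range(j + 1, n):
            reach[j][i] = any(reach[j][k] and dependent(u[k], u[i])
                              for k in range(j, i))
    return ["".join(u[j] for j in range(i + 1) if reach[j][i])
            for i in range(n)]
```

For example, over $\Sigma = \{a,b,c\}$ with only $(a,b) \in I$, the word $abc$ has the prime prefixes $a$, $b$ and $abc$ (the prefix $ab$ has two maximal elements, so it is not prime).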
If, in addition, $S$ is length-reducing (i.e., $|\ell| > |r|$ for all $(\ell,r) \in S$), then $\Abs{\pi(u)} = \abs{\wh u}$ for the canonical homomorphism $\pi: M \to M/S$.
\begin{example}\label{ex:gg} The system $S_G = \set{a \ov a \ra 1}{a \in \GG}$ is (strongly) confluent and length-reducing over $M(\GG, I_\GG)$ \cite{die90}. The quotient monoid $M(\GG, I_\GG)/S_G$ is the graph group $G(\Sigma, I)$. \end{example} By \refex{ex:gg} elements in {graph group}s have a unique description as {\em dependence graphs}, too. A trace belongs to $\IRR(S_G)$ if and only if it does not contain a factor $a \ov a $ for $a \in \Gamma$. In the dependence graph, this means that the Hasse diagram does not contain any arc from a vertex labeled $a$ to a vertex labeled $\ov a$ with $a \in \GG$. Moreover, a word $u \in \Gamma^*$ represents a trace from $\IRR(S_G)$ if and only if $u$ does not contain a factor of the form $a v \ov a$ with $a \in \GG$ and $\alpha(v) \subseteq I(a)$.
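The word characterization above turns into a simple fixpoint procedure for computing normal forms in $G(\Sigma,I)$ (nowhere near logspace, but handy for experiments). In this sketch, which is our own, $\ov a$ is encoded as the corresponding upper-case letter, so the rule $a \ov a \ra 1$ becomes cancellation of a letter against its swapped-case partner:

```python
def reduce_graph_group(word, I):
    """Normal form in G(Sigma, I): repeatedly delete a factor x ... xbar
    whose in-between letters are all independent of x (rule x xbar -> 1).
    xbar is encoded as x.swapcase(); I is given on the base alphabet."""
    indep = lambda x, y: frozenset((x.lower(), y.lower())) in I
    w = list(word)
    changed = True
    while changed:
        changed = False
        for i in range(len(w)):
            for j in range(i + 1, len(w)):
                if w[j] == w[i].swapcase() and all(
                        indep(w[k], w[i]) for k in range(i + 1, j)):
                    del w[j], w[i]   # delete position j first, then i
                    changed = True
                    break
            if changed:
                break
    return "".join(w)
```

Each deletion shortens the word by two letters, so the loop terminates, and by confluence of $S_G$ the resulting word represents the unique irreducible trace.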
\section{Right-angled {C}oxeter groups}\label{sec:c} Some of the results on right-angled Coxeter groups in this section are covered by more general statements in \refsec{sec:cox}. However, that section used quite involved tools from computational algebra and an advanced theory of Coxeter groups. In contrast, the results we prove here on right-angled Coxeter groups are purely combinatorial. Hence, we can give simple and elementary proofs, which makes this section fully self-contained. Moreover, in contrast to the general case, in the right-angled case we will succeed in computing shortlex normal forms in logspace.
Recall that a \emph{right-angled Coxeter group} is specified by a finite undirected graph $(\Sigma,I)$, i.e., an independence alphabet. The set $\Sigma$ is the generating set and the relations are $a^2=1$ for all $a \in \Sigma$ and $ab=ba$ for all $(a,b) \in I$. We denote this right-angled Coxeter group by $C(\Sigma, I)$. Similarly to the graph group $G(\Sigma, I)$, the right-angled Coxeter group $C(\Sigma,I)$ can be defined by a (strongly) confluent and length-reducing trace rewriting system (this time on $M(\Sigma,I)$ instead of $M(\Gamma,I_\Gamma)$). Let $$ S_C = \{ a^2 \to 1 \mid a \in \Sigma \} . $$ Then $S_C$ is indeed (strongly) confluent and length-reducing on $M(\Sigma,I)$ and the quotient $M(\Sigma,I)/S_C$ is $C(\Sigma,I)$. Hence we have two closely related (strongly) confluent and length-reducing trace rewriting systems: $S_G$ defines the graph group $G(\Sigma, I)$ and $S_C$ defines the right-angled {C}oxeter group\xspace $C(\Sigma, I)$. Both systems define unique normal forms of geodesic length: $\widehat u \in M(\GG, I_\Gamma)$ for $S_G$ and $\widehat u \in M(\Sigma, I)$ for $S_C$. Note that there are no explicit commutation rules as they are \emph{built-in} in trace theory. There is a linear time algorithm for computing $\widehat u$; see \cite{die90} for a more general result of this type.
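For experimentation (this is our own sketch, not the linear-time algorithm of \cite{die90}), $\widehat u$ with respect to $S_C$ can be computed by the analogous fixpoint cancellation on words: delete two equal letters whose in-between letters all commute with them, until no such pair remains:

```python
def coxeter_normal_form(u, I):
    """Geodesic representative of u in C(Sigma, I): apply a^2 -> 1 in the
    trace monoid, i.e. delete a pair of equal letters a ... a whose
    in-between letters are all independent of a, until no rule applies."""
    w = list(u)
    changed = True
    while changed:
        changed = False
        for i in range(len(w)):
            for j in range(i + 1, len(w)):
                if w[j] == w[i] and all(
                        frozenset((w[k], w[i])) in I
                        for k in range(i + 1, j)):
                    del w[j], w[i]   # delete position j first, then i
                    changed = True
                    break
            if changed:
                break
    return "".join(w)
```

The result is a word representing the unique Coxeter-trace $\widehat u$; in particular its length is the geodesic length $\Abs{u}$.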
It is well known that a graph group $G(\Sigma,I)$ can be embedded into a right-angled {C}oxeter group\xspace \cite{hw99}. For this, one has to duplicate each letter from $\Sigma$. Formally, we can take the right-angled {C}oxeter group\xspace $C(\GG, I_\Gamma)$ (in which $\ov a$ does not denote the inverse of $a$). Consider the mapping $\varphi(a) = a\ov a$ {}from $\GG$ to $\GG^*$. Obviously, $\varphi$ induces a homomorphism from $G(\Sigma, I)$ to the Coxeter group $C(\GG, I_\Gamma)$. As $\IRR(S_G) \subseteq M(\Gamma,I_\Gamma)$ is mapped to $\IRR(S_C) \subseteq M(\GG, I_\Gamma)$, we recover the well-known fact that $\varphi$ is injective. Actually we see more. Assume that $\wh w$ is the shortlex normal form of some $\varphi(g)$. Then replacing in $\wh w$ factors $a\ov a$ with $a$ and replacing factors $\ov a a$ with $\ov a$ yields a logspace reduction of the problem of computing shortlex normal forms in graph groups to the problem of computing shortlex normal forms in right-angled {C}oxeter group\xspace{}s. Thus, for our purposes it is enough to calculate shortlex normal forms for {right-angled {C}oxeter group\xspace}s of type $C(\Sigma, I)$ in logspace. For the latter, it suffices to compute in logspace on input $u \in \Sigma^*$ some trace (or word) $v$ such that $u = v$ in $C(\Sigma, I)$ and $\abs{v} = \Abs u$. Then, the shortlex normal form for $u$ is the lexicographic normal form of the trace $v$, which can be easily computed in logspace from $u$.
A trace in $M(\Sigma, I)$ is called a {\em Coxeter-trace}, if it does not have any factor $a^2$ where $a \in \Sigma$. It follows that every element in $C(\Sigma, I)$ has a unique representation as a Coxeter-trace\xspace. Let $a \in \Sigma$. A trace $u$ is called $a$-short, if during the derivation $u \RAS*{S_C} \wh u \in \IRR(S_C)$ the rule $a^2 \ra 1$ is not applied. Thus, $u$ is $a$-short if and only if\xspace the number of occurrences of the letter $a$ is the same in the trace $u$ and its Coxeter-trace\xspace $\wh u$. We are interested in the set of letters which survive the reduction process. By $\wh\alpha(u) = \alpha(\wh u)$ we denote the alphabet of the unique Coxeter-trace\xspace $\wh u$ with $u = \wh u$ in $C(\Sigma, I)$. Here is a crucial observation:
\begin{lemma}\label{lem:otto} A trace $u$ is $a$-short if and only if\xspace $u$ has no factor $ava$ such that $\wh \alpha( v) \subseteq I(a)$. \end{lemma}
\begin{proof}
If $u$ contains a factor $ava$ such that $\wh \alpha( v) \subseteq I(a)$, then $u$ is clearly not $a$-short. We prove the other direction by induction on the length of $u$. Write $u = a_1 \cdots a_n$ with $a_i\in \Sigma$. We identify $u$ with its dependence graph $\DG(u)$, which has vertex set $\oneset{1 \lds n}$. Assume that $u$ is not $a$-short. Then, during the derivation $u \RAS{*}{S_C} \wh u$, for the first time a vertex $i$ with label $a_i= a$ is canceled with a vertex $j$ with label $a_j= a$ and $i <j$. It is enough to show that $\wh\alpha(a_{i+1} \cdots a_{j-1}) \subseteq I(a)$. If the cancellation of $i$ and $j$ happens in the first step of the rewriting process, then $\alpha(a_{i+1} \cdots a_{j-1}) \subseteq I(a)$ and we are done. So, let the first step cancel vertices $k$ and $\ell$ with labels $a_k = a_\ell= b$ and $k<\ell$. Clearly, $\oneset{i,j}\cap \oneset{k,\ell} = \emptyset$. The set $\wh\alpha(a_{i+1} \cdots a_{j-1})$ can change only if either $i<k<j< \ell$ or $k<i< \ell<j$. However, in both cases we must have $(b,a) \in I$, and we are done by induction. \qed \end{proof}

In the right-angled case, the standard geometric representation (see \eqref{eq-geo-repr})
$\sigma: C(\Sigma, I) \to \GL(n,\mathbb{Z})$ (where $n = |\Sigma|$) can be defined as follows, where again we write $\sigma_a$ for the mapping $\sigma(a)$: \begin{eqnarray*} \sigma_a(a) & = & -a, \\ \sigma_a(b) & = & \begin{cases}
b & \text{ if $(a,b) \in I$,} \\
b +2a & \text{ if $(a,b) \in D$ and $a \neq b$.} \end{cases} \end{eqnarray*} In this definition, $a,b$ are letters. We identify $\mathbb{Z}^n = \mathbb{Z}^{\Sigma}$ and vectors from $\mathbb{Z}^n$ are written as formal sums $\sum_{b} \lambda_b b$. One can easily verify that $\sigma_{ab}(c) = \sigma_{ba}(c)$ for $(a,b) \in I$ and $\sigma_{aa}(b) = b$. Thus, $\sigma$ defines indeed a homomorphism from $C(\Sigma,I)$ to $\GL(n,\mathbb{Z})$ (as well as homomorphisms from $\Sigma^*$ and {}from $M(\Sigma,I)$ to $\GL(n,\mathbb{Z})$). Note that if $w = uv$ is a trace and $(b,v) \in I$ for a symbol $b$, then $\sigma_w(b) = \sigma_u(b)$. The following proposition is fundamental for understanding how the internal structure of $w$ is reflected by letting $\sigma_w$ act on letters (called \emph{simple roots} in the literature). For lack of a reference for this variant (of a well-known general fact) and since the proof is rather easy in the right-angled case (in contrast to the general case), we give a proof, which is purely combinatorial.
\begin{proposition}\label{prop:hugo} Let $wd$ be a Coxeter-trace\xspace, $\sigma_w(d) = \sum_{b} \lambda_b b$ and $wd = udv$ where $ud$ is prime and $(d,v) \in I$. Then it holds: \begin{enumerate}[(1)] \item $\lambda_b \neq 0 \iff b \in \alpha(ud)$. Moreover, $\lambda_b > 0$ for all $b \in \alpha(ud)$. \item Let $b,c \in \alpha(ud)$, $b \neq c$, and assume that the first $b$ in $\DG(ud)$ appears before the first $c$ in $\DG(ud)$. Then we have $\lambda_b > \lambda_c >0$. \end{enumerate} \end{proposition}
\begin{proof} We prove both statements of the proposition by induction on $\abs{u}$. For $\abs{u}= 0$ both statements are clear. Hence, let $u = a u'$ and $\sigma_{u'}(d)= \sum_{b} \mu_b b$. Thus, $$\sigma_{u}(d)= \sum_{b} \lambda_b b = \sigma_{a}(\sum_{b} \mu_b b) = \sum_{b} \mu_b \sigma_{a}(b).$$ Note that $\mu_b = \lambda_b$ for all $b \neq a$. Hence, by induction $\lambda_b = 0$ for all $b \notin \alpha(ud)$ and $\lambda_b > 0$ for all $b \in \alpha(ud) \setminus \{a\}$.
Let us now prove (2) for the trace $u$ (it implies $\lambda_a > 0$ and hence (1)). Consider $b,c \in \alpha(ud)$, $b \neq c$, such that the first $b$ in $\DG(ud)$ appears before the first $c$ in $\DG(ud)$. Clearly, this implies $c \neq a$. For $b \neq a$ we obtain that the first $b$ in $\DG(u'd)$ appears before the first $c$ in $\DG(u'd)$. Hence, by induction we get $\mu_b > \mu_c > 0$. Claim (2) follows since $b \neq a \neq c$ implies $\mu_b = \lambda_b$ and $\mu_c = \lambda_c$.
Thus, let $a=b$. As there is a path from the first $a$ to every $c$ in $\DG(ud)$, we may replace $c$ by the first letter we meet on such a path. Hence, we may assume that $a$ and $c$ are dependent. Note that $a \neq c$ because $u$ is a Coxeter-trace\xspace. Hence, $\lambda_c = \mu_c > 0$ and it is enough to show $\lambda_a > \mu_c$. But $\lambda_a \geq 2 \mu_c - \mu_a$ by the definition of $\sigma_a$. If $\mu_a= 0$, then $\lambda_a \geq 2 \mu_c$, which implies $\lambda_a > \mu_c$, since $\mu_c > 0$. Thus, we may assume $\mu_a> 0$. By induction, we get $a \in \alpha(u'd)$. Here comes the crucial point: the first $c$ in $\DG(u'd)$ must appear before the first $a$ in $u'd$. Thus, $\mu_c > \mu_a$ by induction, which finally implies $\lambda_a \geq 2 \mu_c - \mu_a = \mu_c + (\mu_c - \mu_a) > \mu_c$.
\qed \end{proof}
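As a quick sanity check on the definition of $\sigma$ and on \refprop{prop:hugo}, the action on $\mathbb{Z}^{\Sigma}$ can be implemented directly. The following Python sketch (our own code; vectors are letter-to-integer dictionaries, the independence relation is a set of unordered letter pairs) applies $\sigma_w = \sigma_{a_1} \circ \cdots \circ \sigma_{a_n}$ by processing $w = a_1 \cdots a_n$ from right to left:

```python
def sigma_letter(a, vec, I):
    """sigma_a applied to vec = sum_b vec[b]*b."""
    out = dict(vec)
    out[a] = -vec.get(a, 0) + 2 * sum(
        coeff for b, coeff in vec.items()
        if b != a and frozenset((a, b)) not in I)  # b dependent on a, b != a
    return out

def sigma_word(w, d, I):
    """sigma_w(d) for a letter d, as a dict keeping nonzero coefficients."""
    vec = {d: 1}
    for a in reversed(w):        # sigma_w = sigma_{a_1} o ... o sigma_{a_n}
        vec = sigma_letter(a, vec, I)
    return {b: c for b, c in vec.items() if c != 0}
```

For example, with $(a,b) \in I$ one checks $\sigma_{aa} = \mathrm{id}$ and $\sigma_{ab} = \sigma_{ba}$, and for dependent letters $c \neq a$ the coefficients of $\sigma_c(a) = a + 2c$ are ordered exactly as \refprop{prop:hugo} predicts.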
\begin{corollary}\label{cor:nixoderwas} Let $C(\Sigma,I)$ be a fixed right-angled {C}oxeter group\xspace. Then on input $w \in \Sigma^*$ we can calculate in logspace the alphabet $\wh\alpha(w)$ of the corresponding Coxeter-trace\xspace $\wh w$. \end{corollary}
\begin{proof}
Introduce a new letter $x$ which depends on all other letters from $\Sigma$.
We have $\sigma_w(x)= \sigma_{\wh w}(x) = \sum_{b} \lambda_b b$. As $\wh
wx $ is a Coxeter-trace\xspace and prime, we have for all $b \in \Sigma$:
$$b \in
\wh\alpha(w) \Longleftrightarrow b \in \alpha(\wh w x)
\Longleftrightarrow \lambda_b \neq 0,
$$ where the last equivalence follows from \refprop{prop:hugo}. Whether $\lambda_b \neq 0$ can be checked in logspace, by computing $\lambda_b \;\mathrm{mod}\; m$ for all numbers $m \leq \abs w$, since the least common multiple of the first $n$ numbers is larger than $2^n$ (if $n \geq 7$) and the $\lambda_b$ are integers with $\abs{\lambda_b} \leq 2^{\abs{w}}$. See also \cite{lz77} for an analogous statement in the general context of linear groups. \qed \end{proof} The hypothesis in \refcor{cor:nixoderwas} being a right-angled {C}oxeter group\xspace is actually not necessary as we have seen in \refthm{thm:allcoxoderwas}. It remains open whether this hypothesis can be removed in the following theorem.
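In code, \refcor{cor:nixoderwas} amounts to the following (our own self-contained sketch; we use exact integer arithmetic, since the mod-$m$ computation above is only needed for the logspace bound, and the fresh letter is hard-coded as "x"):

```python
def alphabet_of_normal_form(w, I):
    """hat-alpha(w) in C(Sigma, I): the nonzero coordinates (besides the
    fresh letter "x") of sigma_w(x), where x is dependent on everything."""
    assert "x" not in w              # "x" plays the role of the fresh letter
    vec = {"x": 1}
    for a in reversed(w):            # apply sigma_a, right to left
        vec = dict(vec)
        vec[a] = -vec.get(a, 0) + 2 * sum(
            c for b, c in vec.items()
            if b != a and (b == "x" or frozenset((a, b)) not in I))
    return {b for b, c in vec.items() if c != 0} - {"x"}
```

For instance, with only $(a,b) \in I$ the word $aba$ reduces to $b$, the word $abab$ reduces to the empty trace, and $aca$ is already reduced; the function reads this off from $\sigma_w(x)$ without ever rewriting the word.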
\begin{theorem}\label{thm:lexshort} Let $G$ be a fixed graph group or a fixed right-angled {C}oxeter group\xspace. Then we can calculate in logspace shortlex normal forms in $G$. \end{theorem}
\begin{proof} As remarked earlier, it is enough to consider a right-angled {C}oxeter group\xspace $G = C(\Sigma,I)$. Fix a letter $a \in \Sigma$. We first construct a logspace transducer, which computes for an input trace $w \in M(\Sigma,I)$ a trace $u \in M(\Sigma,I)$ with the following properties: (i) $u = w$ in $C(\Sigma,I)$, (ii) $u$ is $a$-short, and (iii) for all $b \in \Sigma$, if $w$ is $b$-short, then also $u$ is $b$-short. Having such a logspace transducer for every $a \in \Sigma$, we can compose all of them in an arbitrary order (note that
$|\Sigma|$ is a constant) to obtain a logspace transducer which computes for a given input trace $w \in M(\Sigma,I)$ a trace $u$ such that $w = u$ in $C(\Sigma,I)$ and $u$ is $a$-short for all $a \in \Sigma$, i.e., $u \in \IRR(S_C)$. Thus $u = \wh w$. From $u$ we can compute easily in logspace the Hasse diagram of $\DG(u)$ and then the shortlex normal form.
So, let us fix a letter $a \in \Sigma$ and an input trace $w= a_1 \cdots a_n$, where $a_1, \ldots, a_n \in \Sigma$. We remove from left to right positions (or vertices) labeled by the letter $a$ which cancel and which therefore do not appear in $\wh w$. We read $a_1 \cdots a_n$ from left to right. In the $i$-th stage do the following: If $a_i \neq a$ output the letter $a_i$ and switch to the $(i+1)$-st stage. If however $a_i = a$, then compute in logspace (using \refcor{cor:nixoderwas}) the maximal index $j>i$ (if it exists) such that $a_j = a$ and $\wh \alpha(a_{i+1} \cdots a_{j-1}) \subseteq I(a)$. If no such index $j$ exists, then append the letter $a_i$ to the output tape and switch to the $(i+1)$-st stage. If $j$ exists, then append the word $a_{i+1} \cdots a_{j-1}$ to the output tape, but omit all $a$'s. After that switch immediately to stage $j+1$. Here is a pseudo code description of the algorithm, where $\pi_{\Sigma \setminus \{a\}} : \Sigma^* \to (\Sigma \setminus \{a\})^*$ denotes the homomorphism that deletes all occurrences of $a$.
\noindent $i := 1$; \\ $w := 1$
(the content of the output tape of the transducer) \\ {\bf while} $i \leq n$ {\bf do} \\ \hspace*{.5cm} {\bf if} $a_i \neq a$ {\bf then} \\ \hspace*{1cm} $w := wa_i$; \\ \hspace*{1cm} $i := i+1$ \\ \hspace*{.5cm} {\bf else} \\ \hspace*{1cm} $j := \text{undefined}$\\ \hspace*{1cm} {\bf for} $k=i+1$ {\bf to} $n$ {\bf do} \\ \hspace*{1.5cm} {\bf if} $a_k = a$ and $\wh \alpha(a_{i+1} \cdots a_{k-1}) \subseteq I(a)$ {\bf then}\\ \hspace*{2cm} $j := k$\\ \hspace*{1.5cm} {\bf endif} \\ \hspace*{1cm} {\bf endfor}\\ \hspace*{1cm} {\bf if} $j = \text{undefined}$ {\bf then}\\ \hspace*{1.5cm} $w := wa_i$; \\ \hspace*{1.5cm} $i := i+1$ \\ \hspace*{1cm} {\bf else} \\ \hspace*{1.5cm} $w := w\,\pi_{\Sigma \setminus \{a\}}(a_i \cdots a_{j-1})$; \\ \hspace*{1.5cm} $i := j+1$ \\ \hspace*{1cm} {\bf endif} \\ \hspace*{.5cm} {\bf endif}\\ {\bf endwhile}\\ {\bf return}(w)
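Ignoring the logspace bookkeeping, the pseudocode translates directly into the following Python sketch (our own code; the helper alpha_hat recomputes $\wh\alpha$ by brute-force reduction instead of invoking \refcor{cor:nixoderwas}):

```python
def alpha_hat(u, I):
    """Alphabet of the Coxeter normal form of u, by brute-force reduction."""
    w = list(u)
    changed = True
    while changed:
        changed = False
        for i in range(len(w)):
            for j in range(i + 1, len(w)):
                if w[j] == w[i] and all(frozenset((w[k], w[i])) in I
                                        for k in range(i + 1, j)):
                    del w[j], w[i]
                    changed = True
                    break
            if changed:
                break
    return set(w)

def a_reduction(word, a, I):
    """One pass of the transducer: output an equivalent a-short word."""
    out, i, n = [], 0, len(word)
    in_I_a = lambda b: frozenset((a, b)) in I
    while i < n:
        if word[i] != a:
            out.append(word[i])
            i += 1
            continue
        j = None                     # maximal cancelling partner of position i
        for k in range(i + 1, n):
            if word[k] == a and all(in_I_a(b)
                                    for b in alpha_hat(word[i + 1:k], I)):
                j = k
        if j is None:
            out.append(word[i])
            i += 1
        else:                        # emit the interior without any a's
            out.extend(b for b in word[i + 1:j] if b != a)
            i = j + 1
    return "".join(out)
```

Composing one such pass per letter of $\Sigma$ (in any order) reduces a word to a representative of its Coxeter-trace, mirroring the composition of transducers in the proof.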
\noindent Let $w_s$ be the content of the output tape at the beginning of stage $s$, i.e., when the algorithm checks the condition of the while-loop and variable $i$ has value $s$ (hence, $w_1 = 1$ and $w_{n+1}$ is the final output). The invariant of the algorithm is that \begin{itemize} \item $w_s = a_1 \cdots a_{s-1}$ in $C(\Sigma,I)$, \item $w_s$ is $a$-short, and \item if $a_1 \cdots a_{s-1}$ is $b$-short, then also $w_s$ is $b$-short. \end{itemize} The proof of this fact uses \reflem{lem:otto}. \qed \end{proof}
\section{Free partially commutative inverse monoids}\label{sec:fp}
A monoid $M$ is \emph{inverse}, if for every $x \in M$ there is $\inv{x} \in M$ with \begin{equation} \label{INV 1,2,3}
x \inv{x} x = x, \quad
\inv{x} x \inv{x} = \inv{x}, \text{ and } \quad x\inv{x}\, y\inv{y} = y\inv{y}\, x\inv{x}. \end{equation} The element $\ov x$ is uniquely defined by these properties and it is called the \emph{inverse} of $x$. Thus, we may also use the notation $\ov x = x^{-1}$. It is easy to see that every idempotent element in an inverse monoid has the form $xx^{-1}$, and every element of this form is idempotent. Using equations \eqref{INV 1,2,3} for all $x,y \in \GG^*$ as defining relations, we obtain the \emph{free inverse monoid} $\ensuremath{\mathrm{FIM}}(\Sigma)$, which has been widely studied in the literature. More details on inverse monoids can be found in \cite{Law99}.
An \emph{inverse monoid over an independence alphabet $(\Sigma,I)$} is an inverse monoid $M$ together with a mapping $\varphi: \Sigma \to M$ such that $\varphi(a)\varphi(b) = \varphi(b)\varphi(a)$ and $\ov {\varphi(a)}\varphi(b) = \varphi(b)\ov{\varphi( a)}$ for all $(a,b) \in I$. We define the {\em free partially commutative inverse monoid over} $(\Sigma,I)$ as the quotient monoid $$\ensuremath{\mathrm{FIM}}(\Sigma,I) =
\ensuremath{\mathrm{FIM}}(\Sigma)/\{ab=ba, \inv{a}b=b\inv{a} \mid (a,b)\in I\}.$$ It is an inverse monoid over $(\Sigma,I)$. Da~Costa has studied $\ensuremath{\mathrm{FIM}}(\Sigma,I)$ in his PhD~thesis \cite{vel03}. He proved that $\ensuremath{\mathrm{FIM}}(\Sigma,I)$ has a decidable word problem, but he did not show any complexity bound. The first upper complexity bound for the word problem is due to \cite{DiekertLohreyMiller08}, where it was shown to be solvable in time $O(n\log(n))$ on a RAM. The aim of this section is to show that the space complexity of the word problem of $\ensuremath{\mathrm{FIM}}(\Sigma,I)$ is very low, too.
\begin{theorem} \label{space} The word problem of $\ensuremath{\mathrm{FIM}}(\Sigma,I)$ can be solved in logspace. \end{theorem}
\begin{proof} For a word $u = a_1 \cdots a_n$ ($a_1, \ldots, a_n \in \Gamma$) let $u_i \in M(\Gamma,I_\Gamma)$ ($1 \leq i \leq n$) be the trace represented by the prefix $a_1 \cdots a_i$ and define the following subset of the trace monoid $M(\Gamma,I_\Gamma)$. \begin{equation} \label{M(u)} M(u) = \bigcup_{i=1}^n \{ p \mid p \text{ is a prime prefix of } \wh{u_i} \} \subseteq M(\Gamma,I_\Gamma) . \end{equation} (This set is a partial commutative analogue of the classical notion of \emph{Munn tree} introduced in \cite{Munn:74}.) It is shown in \cite[Sect. 3]{DiekertLohreyMiller08} that for all words $u,v \in \Gamma^*$, $u=v$ in $\ensuremath{\mathrm{FIM}}(\Sigma,I)$ if and only if \begin{enumerate}[(a)] \item $u = v$ in the graph group $G(\Sigma,I)$ and \item $M(u) = M(v)$. \end{enumerate} Since $G(\Sigma,I)$ is linear, condition (a) can be checked in logspace \cite{lz77,Sim79}. For (b), it suffices to show that the set $M(u)$ from \eqref{M(u)} can be computed in logspace from the word $u$ (then $M(u) = M(v)$ can be checked in logspace, since the word problem for the trace monoid $M(\Gamma,I_\Gamma)$ belongs to uniform $\mathsf{TC}^0$ \cite{AlGa91} and hence to logspace). By \refthm{thm:lexshort} we can compute in logspace a list of all normal forms $\wh{u_i}$ ($1 \leq i \leq n$), where $u_i$ is the prefix of $u$ of length $i$. By composing this logspace transducer with a logspace transducer for computing prime prefixes (see \reflem{lemma-prim-prefixes}), we obtain a logspace transducer for computing the set $M(u)$. \qed \end{proof}
\section{Concluding remarks and open problems}
We have shown that shortlex normal forms can be computed in logspace for graph groups and right-angled Coxeter groups. For general Coxeter groups, we are able to compute in logspace the length of the shortlex normal form and the set of letters appearing in the shortlex normal form. For even Coxeter groups we can do better and enhance the general result since we can compute the Parikh-image of geodesics. An obvious open problem is, whether for every (even) Coxeter group shortlex normal forms can be computed in logspace. We are tempted to believe that this is indeed the case. A more general question is, whether shortlex normal forms can be computed in logspace for automatic groups. Here, we are more sceptical. It is not known whether the word problem of an arbitrary automatic group can be solved in logspace. In \cite{Lo05ijfcs}, an automatic {\em monoid} with a $\mathsf{P}$-complete word problem was constructed. In fact, it is even open, whether the word problem for a hyperbolic group belongs to logspace. The best current upper bound is {\sf LOGCFL} \cite{Lo05ijfcs}. So, one might first try to lower this bound e.g. to {\sf LOGDCFL} (the class of all languages that are logspace-reducible to a deterministic context-free language). M.~Kapovich pointed out that there are non-linear hyperbolic groups. Hence the fact that linear groups have logspace word problems (\cite{lz77,Sim79}) does not help here.
\newcommand{\Ju}{Ju}\newcommand{\Ph}{Ph}\newcommand{\Th}{Th}\newcommand{\Ch}{Ch}\newcommand{\Yu}{Yu}\newcommand{\Zh}{Zh}
\end{document}
Asian Pacific Journal of Cancer Prevention
Pages 8009-8013
Asian Pacific Organization for Cancer Prevention (아시아태평양암예방학회)
Clinical Practice of Blood Transfusion in Orthotopic Organ Transplantation: A Single Institution Experience
Tsai, Huang-Wen (Department of Surgery, Far Eastern Memorial Hospital) ;
Hsieh, Fu-Chien (Department of Surgery, Far Eastern Memorial Hospital) ;
Chang, Chih-Chun (Department of Clinical Pathology, Far Eastern Memorial Hospital) ;
Su, Ming-Jang (Department of Clinical Pathology, Far Eastern Memorial Hospital) ;
Chu, Fang-Yeh (Department of Clinical Pathology, Far Eastern Memorial Hospital) ;
Chen, Kuo-Hsin (Department of Surgery, Far Eastern Memorial Hospital) ;
Jeng, Kuo-Shyang (Department of Surgery, Far Eastern Memorial Hospital) ;
Chen, Yun (Department of Surgery, Far Eastern Memorial Hospital)
https://doi.org/10.7314/APJCP.2015.16.17.8009
Background: Orthotopic organ transplantation, a treatment option for irreversible organ dysfunction due to organ failure, a severely damaged organ or malignancy in situ, is usually accompanied by massive blood loss, so transfusion is required. We aimed to evaluate the adverse impact of blood transfusion on solid organ transplantation. Materials and Methods: From January 2009 to December 2014, patients who received orthotopic organ transplantation at the Far Eastern Memorial Hospital medical center were enrolled. Clinical data regarding anemia status and red blood cell (RBC) transfusion before, during and after the operation, as well as patient outcomes, were collected for univariate analysis. Results: A total of 105 patients who underwent orthotopic transplantation of the liver, kidney or small intestine were registered. The mean hemoglobin (Hb) levels upon admission and before operation were 11.6 ± 1.8 g/dL and 11.7 ± 1.7 g/dL, respectively; the nadir Hb level after the operation and the final Hb level before discharge were 8.3 ± 1.6 g/dL and 10.2 ± 1.6 g/dL, respectively. The median (interquartile range) numbers of RBC units transfused in the pre-operative, peri-operative and post-operative periods were 0 (0-0), 2 (0-12), and 2 (0-6), respectively. Furthermore, the median (interquartile range) lengths of hospital stay (LHS) from admission to discharge and from operation to discharge were 28 (17-44) and 24 (16-37) days, respectively. Both peri-operative and post-operative RBC transfusion were associated with longer LHS from admission to discharge and from operation to discharge. Furthermore, transfusion increased the risk of post-operative septicemia, while peri-operative RBC transfusion elevated the risk of acute graft rejection in patients who received orthotopic transplantation. Conclusions: A worse outcome can be anticipated in patients who receive massive RBC transfusion during the transplantation operation. Hence, peri-operative RBC transfusion should be avoided as much as possible.
Peri-operative transfusion; red blood cells; liver transplantation; kidney transplantation; small intestine
Impact of Peri-Operative Anemia and Blood Transfusions in Patients with Gastric Cancer Receiving Gastrectomy vol.17, pp.3, 2016, https://doi.org/10.7314/APJCP.2016.17.3.1427
Transfusion of Older Red Blood Cells Increases the Risk of Acute Kidney Injury After Orthotopic Liver Transplantation vol.127, pp.1, 2018, https://doi.org/10.1213/ANE.0000000000002437
genotype and perioperative blood transfusion are not conducive to the prognosis of patients with gastric cancer vol.32, pp.7, 2018, https://doi.org/10.1002/jcla.22443 | CommonCrawl |
Proceedings of the Tenth International Workshop on Data and Text Mining in Biomedical Informatics
TIGERi: modeling and visualizing the responses to perturbation of a transcription factor network
Namshik Han1,3,
Harry A. Noyes2 &
Andy Brass3
Transcription factor (TF) networks play a key role in controlling the transfer of genetic information from gene to mRNA. Much progress has been made on understanding and reverse-engineering TF network topologies using a range of experimental and theoretical methodologies. Less work has focused on using these models to examine how TF networks respond to changes in the cellular environment.
In this paper, we have developed a simple, pragmatic methodology, TIGERi (Transcription-factor-activity Illustrator for Global Explanation of Regulatory interaction), to model the response of an inferred TF network to changes in cellular environment. The methodology was tested using publicly available data comparing gene expression profiles of a mouse p38α (Mapk14) knock-out line to the original wild-type.
Using the model, we have examined changes in the TF network resulting from the presence or absence of p38α. Part of this network was confirmed by experimental work in the original paper. Additional relationships were identified by our analysis, for example between p38α and HNF3, and between p38α and SOX9; these are strongly supported by published evidence. FXR and MYC were also identified in our analysis as two novel interaction partners of p38α. To provide the biomedical community with a more user-friendly interface to this computational methodology, we also developed standalone GUI (graphical user interface) software for TIGERi, freely available at https://github.com/namshik/tigeri/.
We therefore believe that our computational approach can identify new members of networks and new interactions between members that are supported by published data but have not been integrated into existing network models. Moreover, researchers who want to analyze their own data with TIGERi can use the software without any command-line experience. This work could therefore accelerate research into transcriptional gene regulation in higher eukaryotes.
Integrated functional genomics attempts to utilize the vast wealth of data produced by modern large scale genomic and post-genomic projects to understand the functions of cells and organisms [1]. The rapidly increasing amount of high throughput sequencing data makes it essential to develop new analytical tools that can systematically process and integrate those datasets. This presents both challenges and opportunities to the computer science community.
Transcription factor (TF) proteins bind to promoter elements on genomic DNA at TF binding sites (TFBS), to help control the transfer of genetic information from gene to mRNA [2]. Understanding the mechanisms underlying mRNA transcription is one of the "grand challenges" in modern biology. Experimental techniques allow direct measurement of individual gene transcription, but the contribution of multiple TFs is hard to determine [3,4,5]. Measuring the concentration of TF proteins and their affinity for the promoter region of genes is difficult because concentrations are low and protein-DNA interactions are subject to multiple controls, resulting in measurement artifacts [6,7,8]. Post transcriptional regulation compounds these difficulties because other molecules modify mRNA stability and hence the signals from the TFs [3, 9,10,11]. In such a complex environment, in-silico techniques can provide insights and hypotheses into the underlying TF regulatory activity, although they clearly have limitations.
Reverse-engineering of TF network and TFBS information
A number of techniques are available to uncover the topology of the TF network—the network of complex reactions and interactions in the cell that control transcript levels [12]. One strategy is to use principles of reverse-engineering and use gene expression data to infer regulatory interactions [13, 14]. Various reverse-engineering methods can reduce the dimensionality of the classic combinatorial search problem and utilize genome sequence data to enhance the sensitivity and specificity of predictions. However, they have difficulties in describing regulatory control by mechanisms other than TFs. Reverse-engineering of TF networks in the lower eukaryotes is well developed [15,16,17]. However, the problems in mapping the regulatory mechanisms in cells of higher eukaryotes have made such global studies either impossible or impractical. Some recent studies have begun to address this issue [18,19,20], but have tended to focus only on understanding which TFs bind to which genes—not looking in detail at the nature of the TF/TFBS interaction. A recent study [21] identified key biological features in transcriptional changes; however, this method has difficulties in inferring the dynamics of the interactions. Furthermore, TF concentrations were not considered during the identification of the features.
To address this issue, TFBS information is required to complement the gene expression data. We used a list of 132,654 TFBSs between 20,920 genes and 174 TFs that had been identified by searching an alignment of four mammalian species for conserved 5' and 3' regions [22]. Connectivity data is notorious for high false positive rates; however, our connectivity data is robust against this problem because it extracts binding information from well-conserved upstream regions. A more detailed explanation is given in the Methods and Results sections, and a schematic diagram of the connectivity data is presented in Fig. 1.
Schematic diagrams for understanding the basic concepts of this study. a The four components of transcription. The transcriptional regulators interact with their target genes to regulate gene expression at the mRNA level. The cellular environment controls the concentration of TFs, \( \mathbf{C} \). The TFs bind to specific sites close to the target genes, described in the model by the connection matrix, \( \mathbf{T} \). The TFs bind to their different target genes with varying strengths to regulate transcription. The strength of each of these pair-wise interactions is described by a weight matrix \( \mathbf{W} \). This all finally results in the transcription of mRNA at a particular concentration, ɛ. b A schematic of a transcriptional regulatory circuit. The circuit takes trans- and cis-inputs to transform the genetic information at the mRNA level. The four components of transcription (as described above) are the key elements of the circuit
Identifying regulation type by combining TFA and TFC analysis
Transcription factor activities (TFAs) are the intensities of the interactions between a certain transcription factor (TF) and its targets at a certain experimental point [23]. Thus, the estimated strength of the TFA between each TF and its target gene is useful for knowing which TF is acting on which gene at a given time point or experimental condition. However, simply knowing the regulatory activities under a single experimental condition provides limited information about the transcriptional network. To understand the mechanism of regulatory interactions, we developed a method that identifies statistically significant differences in TFAs under two different conditions. The significant differences indicate the changing level of TFAs between the two conditions, so varying trends of TFAs across the whole experimental process are easily detected and can be used to identify TF-specific regulatory patterns (up- and down-regulation).
A TF at high concentration induces more gene expression than one at low concentration. High-affinity binding sites induce gene expression at any level of TF concentration (TFC), but low-affinity binding sites require a high TF concentration for induction [24]. Thus, TF concentration is an important factor when investigating TFAs, and a TFA investigation that accounts for TFC provides more reliable and accurate results, closer to the complex reality of the biology. To address this problem we proposed a probabilistic variational inference method to infer the concentration of each TF protein (TFC) and the regulatory intensity (TFA) of each TF-gene pair [4].
Aside from that method, there have been some notable attempts to infer TFAs by integrating gene expression data and TFBS information. These approaches use various well-known statistical inference techniques such as network component analysis [25], support vector machines [26], multivariate regression with backward variable selection [27] and partial least squares [28]. However, the TFAs inferred by these methods do not contain any information on the strength or the sign of the physical interaction between a TF and its target genes. Moreover, the regulatory interactions can change easily in response to changing experimental conditions and over time. Since the methods are not fully probabilistic, they are not ideal for investigating these stochastic interactions. A linear-regression-based probabilistic method to model the full probability distribution of each TFA on each gene was developed [23]. The limitation of this method, however, is that it does not infer the TFAs and TFCs separately, which is a serious problem for subsequent analysis and prediction.
Transcription regulatory circuits and mathematical model
Transcription regulatory circuits can be thought of as having trans- and cis-inputs that are transformed into genetic information at the mRNA level [29]. These circuits are a key component in the regulation of mRNA levels in the cell, and have a number of components (shown in Fig. 1): TFs, whose concentration can change, bind to TFBSs upstream of genes with a strength that is a function of the particular TF-gene interaction, to control the concentration of mRNA produced. A number of mathematical models have been developed which attempt to describe these interactions [15,16,17, 19, 21, 30]. For example, Sanguinetti et al. [4] model the log gene expression in the form:
$$ \mathbf{e}=\mathbf{T}\,\mathbf{W}\,\mathbf{c}+\mathbf{v} $$
\( \mathbf{e} \) is the vector of logged gene expression measurements.
\( \mathbf{T} \) is a binary matrix capturing the connection topology—the specific set of TFBSs upstream of genes and the TFs that bind to them. If TF f binds upstream of gene g then \( T_{gf}=1 \).
\( \mathbf{W} \) is a weight matrix that captures the nature of the interaction strengths between TF-gene pairs in regulating expression of a specific gene.
\( \mathbf{c} \) is the vector of concentrations of each of the TFs.
\( \mathbf{v} \) is a vector of independent and identically distributed variables modeling the noise in the system. The model assumes that a spherical Gaussian term can explain all noise in the gene expression profiling data.
Typically, we have knowledge of \( \mathbf{e} \) (from gene expression profiling experiments, such as microarray or RNA-seq) and would like to infer the set of TFCs \( \mathbf{c} \) and TFAs \( \mathbf{W} \) giving rise to this signal. Given \( \mathbf{T} \) and \( \mathbf{e} \), Sanguinetti et al. [4] show how it is possible to solve for \( \mathbf{c} \) and \( \mathbf{W} \) using a discrete-time state-space model (Eq. 1) with an expectation-maximization (EM) algorithm. In the model, elements of \( \mathbf{c} \) indicate the concentration level of a given TF protein (TFC) at a specific time. Elements of \( \mathbf{W} \) represent the regulatory intensity (TFA) of a given TF protein's binding to its target genes. The baseline expression level is the mean vector. The measurement noise \( \mathbf{v} \) follows zero-mean i.i.d. Gaussian noise. To estimate \( \mathbf{c} \) and \( \mathbf{W} \), the model uses posterior estimation via Bayes' theorem, with the EM algorithm allowing an efficient approximation of the log likelihood. However, it is rare to have complete knowledge of \( \mathbf{T} \)—we simply do not know the binding sites for all TFs in a typical higher eukaryotic cell.
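The generative model of Eq. 1 can be sketched in a few lines of NumPy. All dimensions and values here are illustrative assumptions, not the paper's data; \( \mathbf{W} \) is pre-masked by \( \mathbf{T} \) so that the product sums \( T_{gf}W_{gf}c_f \) over the TFs connected to each gene:

```python
import numpy as np

rng = np.random.default_rng(0)
n_genes, n_tfs = 6, 3

# Connection topology T: T[g, f] = 1 if TF f has a binding site upstream of gene g.
T = rng.integers(0, 2, size=(n_genes, n_tfs)).astype(float)

# Interaction strengths W (only meaningful where T is 1) and TF concentrations c.
W = rng.normal(size=(n_genes, n_tfs)) * T
c = rng.uniform(0.5, 2.0, size=n_tfs)

# Measurement noise v: zero-mean i.i.d. Gaussian, as the model assumes.
v = rng.normal(scale=0.1, size=n_genes)

# Log expression: e_g = sum_f T_gf * W_gf * c_f + v_g (W already carries the mask).
e = W @ c + v
print(e.shape)  # (6,)
```

Inverting this — recovering \( \mathbf{c} \) and \( \mathbf{W} \) from \( \mathbf{e} \) and \( \mathbf{T} \) — is the job of the variational EM procedure of Sanguinetti et al. [4], which this sketch does not attempt.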
Recent experimental techniques, such as ChIP-chip and ChIP-seq, can provide useful data to help construct the connection topology \( \mathbf{T} \) [31]; however, they have clear limitations if we are looking for a complete topology [32, 33]. A number of theoretical techniques are also available to uncover the connection topology [34,35,36]. The techniques generally use principles of reverse-engineering and use gene expression and genome sequence data to infer regulatory interactions.
Gene expression datasets were downloaded from Gene Expression Omnibus (accession number GSE7342 for p38α and GSE36890 for STAT5) [37, 38]. The expression profiling data of the GSE7342 dataset were normalized by the robust multi-array average (RMA) method. The read counts of the GSE36890 dataset were normalized to reads per kilobase of exon per megabase of library size (RPKM).
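The RPKM normalization mentioned above can be sketched as follows, assuming the standard definition (reads per kilobase of exon per million reads of library size); the function and values are illustrative, not taken from the datasets:

```python
def rpkm(counts, gene_lengths_bp, library_size):
    """Reads per kilobase of exon per million reads of library size.

    counts: raw read counts per gene; gene_lengths_bp: total exon length in
    base pairs per gene; library_size: total mapped reads in the library.
    """
    return [
        c / ((l / 1_000) * (library_size / 1_000_000))
        for c, l in zip(counts, gene_lengths_bp)
    ]

# 1000 reads on a 2 kb gene in a 10-million-read library: 1000 / (2 * 10) = 50
print(rpkm([1000], [2000], 10_000_000))  # [50.0]
```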
Generating \( \mathbf{T} \), the connection topology
In this paper we have taken a conservative strategy for generating \( \mathbf{T} \), which looks at upstream regions of genes that are well conserved in multiple mammalian genomes. We used a published catalogue of common regulatory motifs that were overrepresented in gene upstream regions [22]. These motifs were identified by constructing genome-wide alignments for four mammalian species in promoter regions and 3' UTRs relating to well-annotated genes from the RefSeq database. The same TFs were assumed to bind the same TFBSs in mice, since the TFBSs had been discovered in an alignment of human, mouse, rat and dog promoter regions. TFBSs upstream of 13,330 human RefSeq genes were predicted. Mouse genes corresponding to the published list of human genes [22] were identified using the Ensembl mouse gene annotation.
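A connection topology of this kind can be assembled from a TFBS catalogue as a simple binary matrix. The TF and gene names below are hypothetical placeholders, not entries from the published catalogue:

```python
import numpy as np

# Hypothetical TFBS edge list: (TF name, target gene) pairs from a catalogue.
tfbs_pairs = [("HNF3", "GeneA"), ("HNF3", "GeneB"), ("SOX9", "GeneB"), ("MYC", "GeneC")]

tfs = sorted({tf for tf, _ in tfbs_pairs})
genes = sorted({g for _, g in tfbs_pairs})
tf_idx = {tf: i for i, tf in enumerate(tfs)}
gene_idx = {g: i for i, g in enumerate(genes)}

# Binary connection topology: rows are genes, columns are TFs;
# T[g, f] = 1 if TF f has a predicted binding site upstream of gene g.
T = np.zeros((len(genes), len(tfs)), dtype=np.int8)
for tf, g in tfbs_pairs:
    T[gene_idx[g], tf_idx[tf]] = 1
```

At the scale reported in the paper (132,654 TFBSs over 20,920 genes and 174 TFs) this matrix is very sparse, so a sparse representation would be the natural choice in practice.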
Estimation of statistically significant changes
We were specifically interested in any TF activity that exhibits statistically significant changes between the two conditions. In particular, we are interested in changes that may be due to a change in the activity of the TF, and not just in its concentration. We therefore scaled the TFA by the predicted TFC as a measure of changes in activity [24]. A joint analysis of TFA and TFC should provide more robust predictions of those TFs whose activity has changed for reasons beyond a simple change in concentration. To compare the two conditions, the TFA normalized by TFC under the knock-out condition was subtracted from the normalized TFA under the wild-type condition. We therefore determined those interactions for which:
$$ {\left[\left|\frac{W_{gf}}{c_f}\right|\right]}_{W T}-{\left[\left|\frac{W_{gf}}{c_f}\right|\right]}_{KO}>\boldsymbol{Cutoff} $$
The value of the Cutoff was chosen such that all differences beyond the 95% confidence interval (±2 standard deviations) were considered significant. The ±2 SD limit is widely chosen because it corresponds to the conventional 95% confidence interval used in hypothesis testing.
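The test in Eq. 2 can be sketched as follows. This is a simplified two-sided variant (flagging both up- and down-regulation, as visualized in Fig. 3), with the cutoff taken as two standard deviations of the difference matrix; the exact centering used in the paper may differ:

```python
import numpy as np

def significant_changes(W_wt, c_wt, W_ko, c_ko, n_sd=2.0):
    """Flag TF-gene pairs whose concentration-normalized activity |W_gf / c_f|
    differs between wild-type and knock-out by more than n_sd standard
    deviations (a simplified reading of Eq. 2). Rows are genes, columns TFs.
    Returns the difference matrix and a boolean significance mask."""
    diff = np.abs(W_wt / c_wt) - np.abs(W_ko / c_ko)  # c broadcasts over genes
    cutoff = n_sd * diff.std()
    return diff, np.abs(diff) > cutoff

# Toy example: one TF-gene pair changes strongly after the knock-out.
W_wt = np.array([[1.0, 0.0], [0.0, 1.0]])
W_ko = np.array([[1.0, 0.0], [0.0, 5.0]])
c = np.array([1.0, 1.0])
diff, mask = significant_changes(W_wt, c, W_ko, c)
print(mask.sum())  # 1: only the strongly changed pair is flagged
```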
Gene Ontology (GO) analysis
GO analysis was performed using DAVID [39]. The sets of genes showing significant changes identified by Eq. 2 were submitted to DAVID with the default parameters in order to obtain the GO term classification of each gene. Our computational pipeline utilized the results to investigate the functionality of the genes and their regulatory TFs. The detailed methods and result figures of the GO analysis are supplied in Additional file 1: Supplementary Text and Figures S1–S3.
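DAVID performs the enrichment testing itself; as a rough self-contained illustration of the kind of over-representation test involved, a hypergeometric upper-tail probability can be computed with the standard library (a generic sketch, not DAVID's actual algorithm):

```python
from math import comb

def enrichment_p(n_annotated_in_set, set_size, n_annotated_total, n_genes_total):
    """Hypergeometric upper-tail probability that a gene set of `set_size`
    drawn from `n_genes_total` genes contains at least `n_annotated_in_set`
    genes carrying a given GO term, when `n_annotated_total` genes carry it."""
    p = 0.0
    upper = min(set_size, n_annotated_total)
    for k in range(n_annotated_in_set, upper + 1):
        p += (comb(n_annotated_total, k)
              * comb(n_genes_total - n_annotated_total, set_size - k)
              / comb(n_genes_total, set_size))
    return p
```

For example, with 5 of 10 genes annotated, a 2-gene set in which both genes carry the term has p = C(5,2)/C(10,2) = 2/9.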
Estimating the responses to perturbation of transcription networks
We have developed a strategy which used forward-engineering to construct the connection topology (see Fig. 1, Methods, and Additional file 1: Supplementary Text and Figures S1–S3), based on a previous study of regions upstream of genes conserved in multiple mammalian genomes [22]. The structure of this network of transcriptional regulatory interactions between TFs and the genes whose transcription they control is described by a binary matrix \( \mathbf{T} \in \Re^{n\times m} \), where n is the number of TFs and m is the number of genes; an element (i, j) of the matrix is '1' if TF i binds to the upstream control region of gene j, and '0' otherwise. We have then employed a mathematical model to integrate the connection topology data and a gene expression dataset from a higher eukaryote in which we are interested in modeling the changes that occur in the TF network in response to a change in the cellular environment (Fig. 2). Our approach can be seen as complementary to 'integrative methods', as defined in [40], as it provides a strategy for creating an approximate connection topology if more detailed information is not available. The connection topology used for this analysis contains many approximations and is certainly incomplete. However, it should be noted that we are looking at the differences between the models, for example between a wild-type and a knock-out state, and those differences will be in parts of the model for which we do have data.
Overview of our strategy and work-flow of our computational pipeline with a plain example. Our strategy uses a computational pipeline based on a reverse-engineering technique. The pipeline takes as inputs the results of transcription (gene expression data ɛ and connectivity information \( \mathbf{T} \)) and outputs the sources of transcription (strengths \( \mathbf{W} \) and concentrations \( \mathbf{C} \)). The pipeline is composed of five parts: Construction: RMA normalization of the gene expression profiling data ɛ, and construction of a binary matrix containing the connection topology \( \mathbf{T} \) using a forward-engineering strategy. Computation: The gene expression profiling data and connectivity data are utilized to infer TF-gene interaction strengths \( \mathbf{W} \) and TF concentration levels \( \mathbf{C} \). Investigation: Once the strengths and concentrations are inferred, the actual TF activities are estimated by normalizing the strengths by the concentrations. The statistically significant changes in the TF-gene interaction strengths, TF concentration levels, and TF activities are calculated. Illustration: The changes are illustrated in a round limpet-like plot or in scatter plots that show the changes between individual TFs and genes. Identification: The candidate TFs are identified, and Gene Ontology (GO) analysis is performed on the genes that are regulated by the candidate TFs. The literature is reviewed to find supporting evidence, and the individual links between the candidate TFs and their potential biological functions are identified and summarized in a table. Based on the table, we finally construct the comprehensive TF network for p38α
The results of our approach provide a set of TFs and their target genes which are related by significant up- or down-regulation in transcription. It provides a clear indication of the changes in TFA and TFC of the TFs that are controlling transcriptional regulatory mechanisms in response to a specific stimulus. We therefore showed that an "integrated" approach to network inference, based on a forward-engineered connection topology, can produce plausible and testable hypotheses about the responses to perturbation of transcription networks in higher eukaryotes.
Illustrating interpretable images of complex data
The visualization tools make patterns apparent that would be difficult to detect in numerical data (Fig. 2). Distinguishing regulation patterns between different experimental conditions requires recognizing them at a glance. However, the computed results take the form of large numerical matrices, so it is not only difficult to navigate through the whole matrix but also impossible to present the results on one page.
Figure 3 shows a graphical representation of the significant changes in the TFA matrices \( \mathbf{W} \) (n by m) and TFC vectors \( \mathbf{c} \) (n) obtained from this analysis. The patterns of the responses to perturbation of the TF network are readily observed in this single image, which presents approximately 2000 significant changes of TF activity across the 132,654 TFBSs after deleting p38α. In the upper part of the plots, the TFs are placed in functional-group order. The genes that have at least one significant interaction with a TF are located in the bottom part of the plots. A line in the plots represents a regulatory interaction (TFA normalized by TFC) between a TF and its target gene, and the line color indicates a significant difference between the strengths of the regulatory interaction under the two conditions. For example, we can easily see in the visualization (Fig. 3) that TF group three has distinct patterns (down-regulation at E13.5, up-regulation at E15.5) between the two time points.
Global view of the significant changes in TF activities. Our visualization tools make it possible to distinguish specific features and trends in each condition. a The changes in TF activities underlying the absence of p38α are presented in the limpet-like plots. In the upper part of the limpet plots, the TFs are placed in order of functional group (Fig. 3c). Genes that have at least one significant change are located at the bottom of the plots. A line presents how much the TF activity of a certain gene changed between the wild-type mice and the knock-out mice. If the value of the change is greater than zero, it is displayed in blue, indicating that the TF-gene pair has significantly higher TF activation in the wild-type mice (down-regulation after deleting p38α); if the change is less than zero, it is displayed in red, indicating that the pair has significantly higher TF activation in the knock-out mice (up-regulation after deleting p38α). b The legend for the line color is presented. c The perimeters of the plots are broken into different colored regions corresponding to the different functional groups listed in the key
Modelling the changes of transcription factor network in p38α deficient mice
The computational pipeline highlighted in Fig. 2 was applied to a published study of the effect of p38α knock-out in mouse embryos [37]. This study developed four gene expression profiling datasets (Gene Expression Omnibus, accession number GSE7342) comprising two time points at days 13.5 and 15.5 of embryonic development (E13.5 and E15.5) for p38α knock-outs and their wild-type controls. This data set was chosen for this study because it includes experimental measurements of gene expression in the wild-type and knock-out mice and showed that p38α-deficient mice have a significantly different phenotype. Thus, the experimental datasets were used as positive controls for our theoretical study.
The TF-gene interaction strengths (TFAs) \( \mathbf{W} \) and TF concentration levels (TFCs) \( \mathbf{c} \) in each of these four data sets were then inferred to produce four weight matrices of TFAs:
$$ {\left[\mathbf{W}\right]}_{WT@E13.5},\kern0.5em {\left[\mathbf{W}\right]}_{WT@E15.5},\kern0.5em {\left[\mathbf{W}\right]}_{KO@E13.5},\kern0.5em {\left[\mathbf{W}\right]}_{KO@E15.5} $$
and four concentration vectors of TFCs:
$$ {\left[\mathbf{c}\right]}_{WT@E13.5},\ {\left[\mathbf{c}\right]}_{WT@E15.5},\ {\left[\mathbf{c}\right]}_{KO@E13.5},\ {\left[\mathbf{c}\right]}_{KO@E15.5} $$
From the TFA weight matrix \( \mathbf{W} \) and the connection topology matrix \( \mathbf{T} \), the average strength of TFA, \( S_f \), for each TF in each dataset was calculated.
By comparing the average strengths between wild-type and knock-out mice, it is possible to see which of the TFs have significantly changed as a consequence of the removal of p38α.
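The text does not reproduce the formula for \( S_f \) here; a plausible reading, in which \( S_f \) is the mean interaction weight over the genes that TF f is connected to in the topology, can be sketched as:

```python
import numpy as np

def average_strength(W, T):
    """Average TFA strength per TF: mean of W_gf over the genes g connected
    to TF f in the topology T (an assumed form; the paper's exact formula is
    not reproduced in this section). Rows are genes, columns are TFs."""
    targets = T.sum(axis=0)        # number of target genes per TF
    total = (W * T).sum(axis=0)    # mask by topology, then sum over genes
    return np.divide(total, targets,
                     out=np.zeros_like(total, dtype=float),
                     where=targets > 0)  # TFs with no targets get strength 0

# Toy example with two genes (rows) and two TFs (columns).
W = np.array([[1.0, 2.0], [3.0, 0.0]])
T = np.array([[1, 1], [1, 0]])
S = average_strength(W, T)
```

Computing this separately for the wild-type and knock-out weight matrices and comparing the two per-TF values then gives the kind of per-TF changes plotted in Fig. 4a, b.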
Figure 4a, b show the changes in TFA strengths between wild-type and knock-out mice at E13.5 and E15.5. It can be seen that a number of TFs show a significant signal (>2 s.d.) in this data. These are shown in more detail in Table 1. Figure 4c, d show the inferred TFCs \( \mathbf{c} \) obtained for the E13.5 and E15.5 time points. Again, from this graph it is possible to see that a number of TFs appear to be responding to the p38α status. These are shown in more detail in Table 2.
a, b The average strength of changes in TF-gene interaction \( \mathbf{S} \) and TF concentration levels \( \mathbf{c} \) between wild-type and knock-out mice. The figures clearly show not only which TFs have strong interaction strengths or high concentrations (gray-colored TFs) but also which TFs have significant changes in their interaction pattern or concentration (blue- or red-colored TFs). The dotted lines indicate ±2 standard deviations centered on the median value of the straight lines (a for time point E13.5 and b for time point E15.5). Five TFs (shown in red) interact particularly strongly with their target genes in the p38α knock-out mice. In contrast, six TFs (shown in blue) interact less strongly in the p38α knock-out than in the wild-type. c, d The TF concentration levels \( \mathbf{c} \) of wild-type and knock-out mice. The strongest signal was observed at E13.5 only. Deleting p38α induces a down-regulation of AREB6, PITX2, STAT1 and SOX9
Table 1 TFs showing significant changes in interaction strength between wild-type and knock-out mice
Table 2 TFs showing significant changes in concentration between wild-type and knock-out mice
Transcriptional regulatory network for p38α
Gene Ontology (GO) analysis of the target genes of the TFs with strongly changed activity showed enrichment for three GO terms and provided insight into the functional roles of the TFs (see Methods, Tables 1 and 2, and Additional file 1: Supplementary Text and Figures S1–S3). The three GO terms are regulation of apoptosis (programmed cell death), the developmental process, and immune system development. The JNK-c-Jun pathway stimulates apoptosis, and the I-kB kinase/NF-kB cascade acts as a suppressor of the JNK-c-Jun pathway [41]. Inhibition of p38α MAPK retards the NF-kB cascade, another inhibitor of the JNK-c-Jun pathway, but promotes the JNK-c-Jun pathway, which induces apoptosis through expression of the Bcl2 protein family [20, 42]. On the other hand, developmental-process-related genes are down-regulated in the p38α knock-out mice. The study of p38α MAPK [37] reported that the p38α knock-out mice die within days after birth. We do not have enough gene expression profiling data (from other time points in the embryonic period or from the postnatal period) to investigate TFAs over the whole developmental process of the p38α knock-out mice; we cannot confirm, but suppose, that this might be the reason for the death of the knock-out mice. Furthermore, these genes interact with TFs that are reported to be crucial in the developmental process and in immune system development. Our results are therefore in broad accordance with the experimentally validated results, confirming that our pipeline produces reliable results.
Combining the data in Tables 1 and 2 with those obtained from the literature, it is possible to build a putative model for the effects of p38α knock-out (Fig. 5). This figure shows TFs with a strong response in our analysis as nodes, with links that demonstrate regulatory interactions between them. The TF network therefore comprehensively shows the biological consequences of p38α knock-out at the transcriptional level.
A comprehensive transcriptional regulatory network for p38α. The TFs depicted in gray are already known to interact with p38α [41, 51]. The TFs identified in our analysis were then added to the figure, colored blue if their activity was down-regulated in the absence of p38α and orange if their activity was up-regulated in the absence of p38α. Lines in the figure represent interactions that are known in the literature (listed in more detail in Tables 1 and 2)
We have developed a novel strategy for discovering changes in the transcriptional regulatory networks of higher eukaryotes. It integrates methods for inferring TF-gene interaction strengths (TFAs) and TF concentration levels (TFCs); identifying statistically significant changes in TFAs and TFCs; analyzing the changes; classifying TFs into functional groups; and visualizing the changes. To our knowledge, this is the first ensemble approach for characterizing the transcriptional function of TF proteins and their target genes in higher eukaryotes. Reverse-engineering of TF networks is well developed in the lower eukaryotes [15, 17]. However, the problems in mapping the regulatory mechanisms in cells of higher eukaryotes have made such global studies either impossible or impractical. Some recent studies have begun to address this issue [16, 19, 30], but have tended to focus only on understanding which TFs bind to which genes—not looking in detail at the nature of the TF-gene interaction. Other studies [5, 21] identified key biological features in transcriptional changes; however, these methods have difficulties in inferring the dynamics of the interactions. A recent review [40] has categorized techniques for network inference and listed their limitations.
We validated our computational pipeline using the p38α gene expression profiling data and our connectivity data. The study of p38α MAPK [37] used various experimental methods, including gene expression profiling analysis, to show that p38α negatively regulates cell proliferation by antagonizing the JNK-c-Jun pathway. We utilized the published gene expression profiling dataset from their study to demonstrate that our computational pipeline can infer from the gene expression profiling data the same in-silico conclusions that the authors obtained from their in-vitro experiments. Our analysis therefore focused on the JNK-c-Jun pathway to validate the accuracy, robustness and reliability of our strategy. The results are consistent with the experimentally validated inhibitory effect of p38α on transcriptional networks [37]. Their published data confirmed that the most important TF involved in the response to the knock-out was c-Jun, with a clear change observed in both its activation and concentration. In our theoretical work, we also showed a significant change in the TFA of c-Jun, but we did not see any corresponding change in the predicted TFC, which is disappointing.
The p38α MAPK is one of many signal transduction pathways and works in both a cell-type-specific and a cell-context-specific manner. It plays a pivotal role in converting extra-cellular signals into a wide range of cellular responses [43]. We classified the set of TFs that responded to the deletion of p38α into functional groups (Tables 1 and 2, Fig. 3c); these are either developmental factors (group 2) or extra-cellular-signal-dependent factors (group 3). Developmental factors are also dependent on extra-cellular signals, because cells may require such signals to generate developmental factors [44]. Fig. 3a shows that the main factors that responded to the knock-out are the extra-cellular-signal-dependent factors. None of the TFs that significantly respond in the knock-out are constitutive factors. Our results are consistent with recent publications on the JNK-c-Jun pathway (see citations in Tables 1 and 2).
Our analyses generated a comprehensive transcriptional regulatory network for p38α. The network and a detailed description are shown in Fig. 5. The nodes in the graph were generated from our analysis of responding TFs. The edges in this network were derived from the literature or GO analysis (citations in Tables 1 and 2). The edges or links in the network of p38α-regulated TFs have mostly been reported previously, but none of the reports had integrated all these p38α-related TFs into a single comprehensive network diagram. Together these results predict a set of TFs that are in some way regulated by p38α, a set somewhat larger than that identified in the original paper. For example, we predict that Foxm1 (HNF3) responds to the p38α status. Recent papers, published since the original study, provide some support for this hypothesis [45, 46]. Most parts of the network are reported in numerous biological studies. However, our network reveals novel links such as p38α─FXR and p38α─MYC. Most of the inferred links are supported by direct experimental evidence, validating the approach; in addition, novel links have been proposed that are now testable.
The data shown in Tables 1 and 2 and visualized in Fig. 5 provide evidence that the methodology described in this paper is capable of generating plausible hypotheses about linkage between p38α and a range of different TFs. The hypotheses presented in these tables were generated solely from our input data (the connection topology data and gene expression profiling data), but are well supported by the literature. The methodology has therefore demonstrated that it can produce plausible and testable hypotheses, even if the specific details of those interactions may not be completely accurate. This is not surprising given that we have only an incomplete model of the transcription process. Any in-silico technique that uses predicted TF/TFBS interactions can provide only a limited view of the complete complexity of transcription control, owing to the nature of the binding between the TF and the TFBS and the complex effect of gene expression on the TFBS (for example, dependence on epigenetic factors, such as the pattern of histones or DNA methylation at the binding site), as well as the state and concentration of the TF itself. Analysis is further complicated by other processes in the cell that act to control mRNA concentration, such as the rate of RNAi-regulated mRNA degradation [9, 10] or susceptibility to attack by RNAses [3, 11]. TFBSs can be hidden by histones [7, 8], or made more accessible by genomic uncoiling [6]. Furthermore, most TF binding may be cell- or species-specific; not all sites are functional even if occupied, and many functional sites have low levels of conservation [47]. This rather undermines the commonly accepted assumption that TFBSs can be discovered by conservation [22]. However, although the exact binding sites may not be conserved, the set of TFs that bind a gene somewhere probably is.
p38α-deficient mice show a significantly different phenotype, which indicates that the role of p38α is critical. The p38α study also provided gene expression profiling datasets for wild-type as well as p38α-deficient mice, so we could apply our pipeline to the dataset to investigate TFAs and TFCs. This allowed us to compare our in-silico results directly to the experimental in-vitro results, and it validated our findings. However, the experiment covered only two time-points, which could limit our validation. We therefore tested our pipeline on a larger dataset from a recent study of the STAT5 transcription factor [38], which consists of 18 samples at five time-points. This study showed the critical role of the STAT5 tetramer in the immune system. To do this, the authors made STAT5-tetramer-deficient mice by generating STAT5A-STAT5B double-knockin mice. Interleukin 2 (IL2) and IL15 are two well-known upstream regulators of STAT5A-STAT5B, so they measured IL2- and IL15-induced gene expression profiles in both wild-type and STAT5-tetramer-deficient mice. We downloaded the RNA-seq gene expression dataset from this study and analyzed it with our pipeline. TF activities were decreased in STAT5-tetramer-deficient mice (both IL2- and IL15-induced), particularly at 4, 24 and 48 h (Fig. 6). This general trend corresponds well to the experimental findings, as the authors reported that IL2- and IL15-induced gene expression was down-regulated in STAT5-tetramer-deficient mice. Moreover, most TFs were re-activated at the last time-point, which is exactly the observation made in the experimental results. In particular, STAT5A is in our TF list, so we closely investigated its activation pattern. STAT5A showed weak activity and concentration levels in the control sample, but was dramatically de-activated at 4, 24 and 48 h and then re-activated at 72 h. More interestingly, all TFs in the cytoplasmic factor group (marked as number 6 in Fig. 6), including STAT5A, show the same activation pattern as STAT5.
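The screen described above, de-activation at the intermediate time-points followed by re-activation at the last one, can be sketched as a simple test over the knockout-minus-wild-type TFA differences. The time-points follow the study design, but the numerical values, threshold and function name below are invented for illustration and are not taken from the dataset.

```python
import numpy as np

# Hypothetical sketch: flag TFs whose activity drops in the knockout at
# intermediate time-points and recovers at the final one, mirroring the
# STAT5 pattern described in the text. Values are made up for illustration.

timepoints = ["0h", "4h", "24h", "48h", "72h"]
# rows: TFs; columns: difference (knockout - wild-type) in estimated TFA
tfa_diff = np.array([
    [0.1, -1.2, -1.5, -1.1, 0.2],   # STAT5A-like: down, then recovers
    [0.0, -0.9, -1.1, -0.8, 0.3],   # follows the same pattern
    [0.2,  0.1,  0.3,  0.2, 0.1],   # unaffected TF
])

def follows_stat5_pattern(row, thresh=0.5):
    """Down-regulated at 4-48 h, re-activated at the final time-point."""
    mid_down = np.all(row[1:4] < -thresh)
    late_up = row[4] > -thresh
    return bool(mid_down and late_up)

flags = [follows_stat5_pattern(r) for r in tfa_diff]
print(flags)  # [True, True, False]
```

A screen of this kind is how candidate TFs such as SP1 or NFY, discussed below, could be picked out as co-varying with STAT5 even though they fall outside the cytoplasmic factor group.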
Moreover, there are a few interesting TFs that are not classified as cytoplasmic factors but nevertheless followed the same up- and down-regulation patterns as STAT5 (e.g. SP1, NFY, E12, MEIS1, PAX4, AP1, NRF1, TCF11, AP4, GABP, TATA, E4F1). We consider these to be new findings that could lead to new insights and testable hypotheses.
TFA and TFC changes in STAT5-tetramer-deficient mice. TFA and TFC of 65 TFs were estimated from IL2- and IL15-induced RNA-seq datasets and compared between wild-type and STAT5-tetramer-deficient mice. The TFA or TFC of a given TF is shown in red if it is higher in STAT5-tetramer-deficient mice than in wild-type mice; the higher the level, the darker the color. The numbers on the right side of the heat-map indicate the TF functional group (see legend in Fig. 3c)
Our objective was to develop an effective computational pipeline that produces reliable and explicit models of transcriptional regulatory networks. Even though TFBS information is incomplete, owing to the difficulties in identifying binding sites, our pipeline predicts new biological hypotheses on a genome-wide scale by combining TFBS and gene expression information. TIGERi is publicly available as stand-alone GUI software, so researchers with their own gene expression profiling data can easily use it to analyze those data. This should facilitate transcriptional gene-regulation research in the biomedical community.
Our approach can be applied to other gene expression datasets to display transcriptional regulatory networks and to identify novel candidate genes and TFs underlying specific phenotypes. For example, our methodology has been successfully applied in three recent studies [48,49,50]. The pipeline would be particularly valuable if run on large-scale, multi-time-point genomic data. We would also expect the method to become increasingly predictive with improved connection topologies created from large-scale, experimentally validated TF/TFBS datasets rather than those generated from simple conservation data.
Olivera DJ, Nikolaub B, Wurtelea ES. Functional genomics: high-throughput mRNA, protein, and metabolite analyses. Metab Eng. 2002;4(1):98–106.
Scott MP. Development: the natural history of genes. Cell. 2000;100(1):27–40.
Amrani N, Sachs MS, Jacobson A. Early nonsense: mRNA decay solves a translational problem. Nat Rev Mol Cell Biol. 2006;7:415–25.
Sanguinetti G, Lawrence ND, Rattray M. Probabilistic inference of transcription factor concentrations and gene-specific regulatory activities. Bioinformatics. 2006;22(22):2775–81.
Cheng C, Yan X, Sun F, Li LM. Inferring activity changes of transcription factors by binding association with sorted expression profiles. BMC Bioinformatics. 2007;8:452.
Strick TR, Croquette V, Bensimon D. Single-molecule analysis of DNA uncoiling by a type II topoisomerase. Nature. 2000;404:901–4.
Jagannathan I, Cole HA, Hayes JJ. Base excision repair in nucleosome substrates. Chromosome Res. 2006;14:27–37.
Sonehara H, Nagata M, Aoki F. Roles of the first and second round of DNA replication in the regulation of zygotic gene activation in mice. J Reprod Dev. 2008;54(5):381–4.
Keene JD. RNA regulons: coordination of post-transcriptional events. Nat Rev Genet. 2007;8:533–43.
Anderson P, Kedersha N. RNA granules: post-transcriptional and epigenetic modulators of gene expression. Nat Rev Mol Cell Biol. 2009;10:430–6.
Brogna S, Wen J. Nonsense-mediated mRNA decay (NMD) mechanisms. Nat Struct Mol Biol. 2009;16:107–13.
Marko-Varga G. Pathway proteomics: global and focused approaches. Am J Pharmacogenomics. 2005;5(2):113–22.
Gardner TS, Faith JJ. Reverse-engineering transcription control networks. Phys Life Rev. 2005;2(1):65–88.
Bansal M, Belcastro V, Ambesi-Impiombato A, Bernardo D. How to infer gene networks from expression profiles. Mol Syst Biol. 2007;3(78):78–87.
Herrgård MJ, Lee B-S, Portnoy V, Palsson BØ. Integrated analysis of regulatory and metabolic networks reveals novel regulatory mechanisms in Saccharomyces cerevisiae. Genome Res. 2006;16(5):627–35.
Ramsey SA, Klemm SL, Zak DE, Kennedy KA, Thorsson V, Li B, Gilchrist M, Gold ES, Johnson CD, Litvak V, et al. Uncovering a macrophage transcriptional program by integrating evidence from motif scanning and expression dynamics. PLoS Comput Biol. 2008;4(3):e1000021.
Ye C, Galbraith SJ, Liao JC, Eskin E. Using network component analysis to dissect regulatory networks mediated by transcription factors in yeast. PLoS Comput Biol. 2009;5(3):e1000311.
Pham H, Ferrari R, Cokus SJ, Kurdistani SK, Pellegrini M. Modeling the regulatory network of histone acetylation in Saccharomyces cerevisiae. Mol Syst Biol. 2007;3:153.
Gatta GD, Bansal M, Ambesi-Impiombato A, Antonini D, Missero C, Bernardo D. Direct targets of the TRP63 transcription factor revealed by a combination of gene expression profiling and reverse engineering. Genome Res. 2008;18(6):939–48.
Cai B, Chang SH, Becker EBE, Bonni A, Xia Z. p38 MAP kinase mediates apoptosis through phosphorylation of BimEL at ser-65. J Biol Chem. 2006;281(35):25215–22.
Oliveira AP, Patil KR, Nielsen J. Architecture of transcriptional regulatory circuits is knitted over the topology of bio-molecular interaction networks. BMC Syst Biol. 2008;2:17.
Xie X, Lu J, Kulbokas EJ, Golub TR, Mootha V, Lindblad-Toh K, Lander ES, Kellis M. Systematic discovery of regulatory motifs in human promoters and 3' UTRs by comparison of several mammals. Nature. 2005;434:338–45.
Sanguinetti G, Lawrence ND, Rattray M. A probabilistic dynamical model for quantitative inference of the regulatory mechanism of transcription. Bioinformatics. 2006;22(14):1753–9.
Driever W, Thoma G, Nüsslein-Volhard C. Determination of spatial domains of zygotic gene expression in the drosophila embryo by the affinity of binding sites for the bicoid morphogen. Nature. 1989;340:363–7.
Liao JC, Boscolo R, Yang Y-L, Tran LM, Sabatti C, Roychowdhury VP. Network component analysis: reconstruction of regulatory signals in biological systems. Proc Natl Acad Sci. 2003;100(26):15522–7.
Alter O, Golub GH. Integrative analysis of genome-scale data by using pseudoinverse projection predicts novel correlation between DNA replication and RNA transcription. Proc Natl Acad Sci. 2004;101:16577–82.
Gao F, Foat BC, Bussemaker HJ. Defining transcriptional networks through integrative modeling of mRNA expression and transcription factor binding data. BMC Bioinformatics. 2004;5(31):31.
Boulesteix A-L, Strimmer K. Predicting transcription factor activities from combined analysis of microarray and ChIP data: a partial least squares approach. Theor Biol Med Model. 2005;2(23):23.
Kim HD, Shay T, O'Shea EK, Regev A. Transcriptional regulatory circuits - predicting numbers from alphabets. Science. 2009;325:429–32.
Honkela A, Girardot C, Gustafson EH, Liu Y-H, Furlong EEM, Lawrence ND, Rattray M. Model-based method for transcription factor target identification with limited data. Proc Natl Acad Sci U S A. 2010;107(17):7793–8.
Gerstein MB, Kundaje A, Hariharan M, Landt SG, Yan K-K, Cheng C, Mu XJ, Khurana E, Rozowsky J, Alexander R, et al. Architecture of the human regulatory network derived from ENCODE data. Nature. 2012;489(7414):91–100.
Buck MJ, Lie JD. ChIP-chip: considerations for the design, analysis, and application of genome-wide chromatin immunoprecipitation experiments. Genomics. 2004;83(3):349–60.
Park PJ. ChIP-seq: advantages and challenges of a maturing technology. Nat Rev Genet. 2009;10(10):669–80.
Zhu Z, Shendure J, Church GM. Discovering functional transcription-factor combinations in the human cell cycle. Genome Res. 2005;15:848–55.
Chang L-W, Nagarajan R, Magee JA, Milbrandt J, Stormo GD. A systematic model to predict transcriptional regulatory. Genome Res. 2006;16:405–13.
Davies SR, Chang L-W, Patra D, Xing X, Posey K, Hecht J, Stormo GD, Sandell LJ. Computational identification and functional validation of regulatory motifs in cartilage-expressed genes. Genome Res. 2007;17:1438–47.
Hui L, Bakiri L, Mairhorfer A, Schweifer N, Haslinger C, Kenner L, Komnenovic V, Scheuch H, Beug H, Wagner EF. p38alpha suppresses normal and cancer cell proliferation by antagonizing the JNK–c-Jun pathway. Nat Genet. 2007;39:741–9.
Lin JX, Li P, Liu D, Jin HT, He J, Ata Ur Rasheed M, Rochman Y, Wang L, Cui K, Liu C, et al. Critical role of STAT5 transcription factor tetramerization for cytokine responses and normal immune function. Immunity. 2012;36:586–99.
Huang DW, Sherman BT, Lempicki RA. Systematic and integrative analysis of large gene lists using DAVID bioinformatics resources. Nat Protoc. 2009;4(1):44–57.
Smet RD, Marchal K. Advantages and limitations of current network inference methods. Nat Rev Microbiol. 2010;8(10):717–29.
Perkins ND. Integrating cell-signalling pathways with NF-kappaB and IKK function. Nat Rev Mol Cell Bio. 2007;8:49–62.
Brichese L, Cazettes G, Valette A. JNK is associated with Bcl-2 and PP1 in mitochondria: paclitaxel induces its activation and its association with the phosphorylated form of Bcl-2. Cell Cycle. 2004;3(10):1312–9.
Wagner EF, Nebreda ÁR. Signal integration by JNK and p38 MAPK pathways in cancer development. Nat Rev Cancer. 2009;9(8):537–49.
Brivanlou AH, Darnell JE Jr. Signal transduction and the control of gene expression. Science. 2002;295(5556):813–8.
Behren A, Muhlen S, Sanhueza GAA, Schwager C, Plinkert PK, Huber PE, Abdollahi A, Simon C. Phenotype-assisted transcriptome analysis identifies FOXM1 downstream from Ras-MKK3-p38 to regulate in vitro cellular invasion. Oncogene. 2010;29(10):1519–30.
Sanchez-Calderon H, Rosa LR-d, Milo M, Pichel JG, Holley M, Varela-Nieto I. RNA microarray analysis in prenatal mouse cochlea reveals novel IGF-I target genes: implication of MEF2 and FOXM1 transcription factors. PLoS One. 2010;5(1):e8699.
Schmidt D, Wilson MD, Ballester B, Schwalie PC, Brown GD, Marshall A, Kutter C, Watt S, Martinez-Jimenez CP, Mackay S, et al. Five-vertebrate ChIP-seq reveals the evolutionary dynamics of transcription factor binding. Science. 2010;328(5981):1036–40.
McMaster A, Jangani M, Sommer P, Han N, Brass A, Beesley S, Lu W, Berry A, Loudon A, Donn R, et al. Ultradian cortisol pulsatility encodes a distinct, biologically important signal. PLoS One. 2011;6(1):e15766.
Han N, Dol Z, Vasieva O, Hyde R, Liloglou T, Raji O, Brambilla E, Brambilla C, Martinet Y, Sozzi G, et al. Progressive lung cancer determined by expression profiling and transcriptional regulation. Int J Oncol. 2012;41(1):242–52.
Darieva Z, Han N, Warwood S, Doris KS, Morgan BA, Sharrocks AD. Protein kinase C regulates late cell cycle-dependent gene expression. Mol Cell Biol. 2012;32(22):4651–61.
Papa S, Bubici C, Zazzeroni F, Pham CG, Kuntzen C, Knabb JR, Dean K, Franzoso G. The NF-kappaB-mediated control of the JNK cascade in the antagonism of programmed cell death in health and disease. Cell Death Differ. 2006;13:712–29.
Guo F, Tanzer S, Busslinger M, Weih F. Lack of nuclear factor-kappa B2/p100 causes a RelB-dependent block in early B lymphopoiesis. Blood. 2008;112(3):551–9.
Thonel A, Vandekerckhove J, Lanneau D, Selvakumar S, Courtois G, Hazoume A, Brunet M, Maurel S, Hammann A, Ribeil JA, et al. HSP27 controls GATA-1 protein level during erythroid cell differentiation. Blood. 2010;116(1):85–96.
Stassen M, Klein M, Becker M, Bopp T, Neudorfl C, Richter C, Heib V, Klein-Hessling S, Serfling E, Schild H, et al. p38 MAP kinase drives the expression of mast cell-derived IL-9 via activation of the transcription factor GATA-1. Mol Immunol. 2007;44(5):926–33.
Angelis L, Zhao J, Andreucci JJ, Olson EN, Cossu G, McDermott JC. Regulation of vertebrate myotome development by the p38 MAP kinase–MEF2 signaling pathway. Dev Biol. 2005;283(1):171–9.
Ramsauer K, Sadzak I, Porras A, Pilz A, Nebreda AR, Decker T, Kovarik P. p38 MAPK enhances STAT1-dependent transcription independently of Ser-727 phosphorylation. Proc Natl Acad Sci. 2002;99(20):12859–64.
Bradham C, McClay DR. p38 MAPK in development and cancer. Cell Cycle. 2006;5(8):824–8.
Li S, Gallup M, Chen Y-T, McNamara NA. Molecular mechanism of proinflammatory cytokine-mediated squamous metaplasia in human corneal epithelial cells. Invest Ophthalmol Vis Sci. 2010;51(5):2466–75.
Tew SR, Hardingham TE. Regulation of SOX9 mRNA in human articular chondrocytes involving p38 MAPK activation and mRNA stabilization. J Biol Chem. 2006;281(51):39471–9.
Zhang R, Murakami S, Coustry F, Wang Y, Crombrugghe B. Constitutive activation of MKK6 in chondrocytes of transgenic mice inhibits proliferation and delays endochondral bone formation. Proc Natl Acad Sci. 2006;103:352–70.
Akiyama H, Chaboissier M-C, Martin JF, Schedl A, Crombrugghe B. The transcription factor Sox9 has essential roles in successive steps of the chondrocyte differentiation pathway and is required for expression of Sox5 and Sox6. Genes Dev. 2002;16:2813–28.
Seymour PA, Freude KK, Tran MN, Mayes EE, Jensen J, Kist R, Scherer G, Sander M. SOX9 is required for maintenance of the pancreatic progenitor cell pool. Proc Natl Acad Sci. 2007;104(6):1865–70.
Acharya M, Lingenfelter DJ, Huang L, Gage PJ, Walter MA. Human PRKC apoptosis WT1 regulator is a novel PITX2-interacting protein that regulates PITX2 transcriptional activity in ocular cells. J Biol Chem. 2009;284(50):34829–38.
Toro R, Saadi I, Kuburas A, Nemer M, Russo AF. Cell-specific activation of the atrial natriuretic factor promoter by PITX2 and MEF2A. J Biol Chem. 2004;279(50):52087–94.
Li C-S, Chae S-C, Lee J-H, Zhang Q, Chung H-T. Identification of single nucleotide polymorphisms in FOXJ1 and their association with allergic rhinitis. J Hum Genet. 2006;51(4):292–7.
Gottardi AD, Dumonceau J-M, Bruttin F, Vonlaufen A, Morard I, Spahr L, Rubbia-Brandt L, Frossard J-L, Dinjens WN, Rabinovitch PS, et al. Expression of the bile acid receptor FXR in Barrett's esophagus and enhancement of apoptosis by guggulsterone in vitro. Mol Cancer. 2006;5(48):48.
Fujino T, Murakami K, Ozawa I, Minegishi Y, Kashimura R, Akita T, Saitou S, Atsumi T, Sato T, Ando K, et al. Hypoxia downregulates farnesoid X receptor via a hypoxia-inducible factor-independent but p38 mitogen-activated protein kinase-dependent pathway. FEBS J. 2009;276(5):1319–32.
Desbiens KM, Deschesnes RG, Labrie MM, Desfosses Y, Lambert H, Landry J, Bellmann AK. c-Myc potentiates the mitochondrial pathway of apoptosis by acting upstream of apoptosis signal-regulating kinase 1 (Ask1) in the p38 signalling cascade. Biochem J. 2003;372:631–41.
Shuai K, Liu B. Regulation of JAK-STAT signalling in the immune system. Nat Rev Immunol. 2003;3(11):900–11.
Our thanks to Magnus Rattray and Guido Sanguinetti for many constructive discussions.
This work was supported by a European Research Council CRIPTON Grant (RG59701) and by institutional funding to the Gurdon Institute by Wellcome Trust Core Grant (092096) and Cancer Research UK Grant (C6946/A14492). The publication charges for this article were funded by the European Research Council CRIPTON Grant.
TIGERi is a standalone GUI software for all platforms. The TIGERi software and manual can be freely downloaded at https://github.com/namshik/tigeri/.
Conceptualization, NH, HAN and AB; Methodology, NH; Software, NH; Analysis, NH and AB; Writing, NH, HAN and AB; Supervision, AB; Funding Acquisition, NH and AB. All authors have read and approved the final manuscript.
This article has been published as part of BMC Bioinformatics Volume 18 Supplement 7, 2017: Proceedings of the Tenth International Workshop on Data and Text Mining in Biomedical Informatics. The full contents of the supplement are available online at https://bmcbioinformatics.biomedcentral.com/articles/supplements/volume-18-supplement-7.
Gurdon Institute, University of Cambridge, Cambridge, UK
Namshik Han
School of Biological Sciences, University of Liverpool, Liverpool, UK
Harry A. Noyes
School of Computer Science and School of Health Sciences, University of Manchester, Manchester, UK
Namshik Han & Andy Brass
Correspondence to Namshik Han or Andy Brass.
Supplementary Text and Figures. (DOC 7227 kb)
Han, N., Noyes, H.A. & Brass, A. TIGERi: modeling and visualizing the responses to perturbation of a transcription factor network. BMC Bioinformatics 18, 260 (2017). https://doi.org/10.1186/s12859-017-1636-6
Transcriptional regulatory network
Transcription factor binding site
\begin{document}
\title{\LARGE \bf Mutually Quadratically Invariant Information Structures in \\Two-Team Stochastic Dynamic Games} \author{Marcello Colombino$^\dag$, Roy S.\ Smith$^\dag$, and Tyler H.\ Summers$^\ddag$ \thanks{$^\dag$M. Colombino and R. Smith are with the Automatic Control Laboratory, ETH Zurich, Switzerland. $^\ddag$T. Summers is with the Department of Mechanical Engineering, University of Texas at Dallas, USA. E-mail addresses: \{\texttt{mcolombi}, \texttt{rsmith}\}\texttt{@control.ee.ethz.ch}, \texttt{[email protected]}. This research is supported by the National Science Foundation under grant CNS-1566127 and partially supported by the Swiss National Science Foundation grant 2--773337--12.} }
\maketitle \thispagestyle{empty} \pagestyle{empty}
\begin{abstract} We formulate a two-team linear quadratic stochastic dynamic game featuring two opposing teams, each with a decentralized information structure. We introduce the concept of mutual quadratic invariance (MQI), which, analogously to quadratic invariance in (single-team) decentralized control, defines a class of interacting information structures for the two teams under which optimal linear feedback control strategies are easy to compute. We show that, for zero-sum two-team dynamic games, structured state feedback Nash (saddle-point) equilibrium strategies can be computed from equivalent structured disturbance feedforward saddle-point equilibrium strategies. However, for nonzero-sum games we show via a counterexample that a similar equivalence fails to hold. The results are illustrated with a simple yet rich numerical example that demonstrates the importance of the information structure for dynamic games. \end{abstract} \section{Introduction}
Future cyber-physical systems (CPS) will feature cooperative networks of autonomous decision making agents equipped with embedded sensing, computation, communication, and actuation capabilities. These capabilities promise to significantly enhance performance, but also render the network vulnerable by increasing the number of access and influence points available to attackers. ``Red team-blue team'' scenarios, in which a defending team seeks to operate the network efficiently and securely while the attacking team seeks to disrupt network operation, have been used to qualitatively assess and improve security in military and intelligence organizations, but have not received formal mathematical analysis in a CPS context. Here we will study some fundamental properties in two-team stochastic dynamic games in cyber-physical networks, with a focus on interactions of the information structures of each team.
Dynamic game theory \cite{basar1995dynamic} offers a general framework for the study of optimal decision making in stochastic and non-cooperative environments. The theory can be viewed as a marriage of game theory \cite{von1944theory}, with a focus on interactions of multiple decision making agents,
and optimal control theory \cite{bellman1952theory,pontryagin1959optimal},
with a focus on dynamics and feedback. The main elements of dynamic game theory are (1) a dynamical system along with a set of agents whose actions influence the state evolution of the system, (2) an objective function to be optimized associated with each agent, and (3) an information structure that specifies information sets for each agent, i.e., who knows what and when.
Several special classes of dynamic games have been extensively studied. In team decision theory all agents cooperate to optimize the same objective function. Static team theory traces back to the seminal work of Marschak and Radner \cite{marschak1955elements,radner1962team,marschak1972economic}. Decentralized control theory has developed from team decision theory and control theory and introduces dynamics. The presence of dynamics makes available information depend on the actions of agents and significantly complicates the problem. Dynamic aspects were studied in important early work by Witsenhausen \cite{witsenhausen1968counterexample,witsenhausen1971separation,witsenhausen1971information} and Ho \cite{ho1972team,ho1980team}. Witsenhausen's famous counterexample \cite{witsenhausen1968counterexample} vividly demonstrated the computational difficulties associated with team decision making in dynamic and stochastic environments. This still-unsolved counterexample described a simple team decision problem in which a nonlinear strategy strictly outperforms the optimal linear strategy and established deep connections between control, communication, and information theory.
Research on decentralized and distributed control theory has continued, and there has been a recent resurgence of interest driven by the advent of large-scale cyber-physical networks. Recent work has elaborated on connections with communication and information theory \cite{grover2013approximately,grover2015information} and focused on computational and structural issues \cite{rotkowitz2006characterization,nayyar2013decentralized,yuksel2013stochastic,swigart2014optimal,lessard2015optimal,gattami2012robust}.
These important structural information aspects arising from cooperating agents have received much less attention in the dynamic game literature, which tends to focus only on non-cooperative and adversarial behavior. The most well studied case is the two-player problem, which features two opposing agents who have centralized information structures and has connections to robust control \cite{bacsar2008h}. What is currently underexplored is a comprehensive study of information structure aspects when there are \emph{both} non-cooperative elements, as in general dynamic game theory, and cooperative elements, as in decentralized control theory. These aspects can be captured by a two-team stochastic dynamic game framework.
Two-team stochastic dynamic games feature two opposing teams with decentralized information structures for both the attacking and defending teams: each agent must act based on partial information measured or received locally in a way that coordinates its actions with team members and counters against the opposing team. This framework mathematically formalizes ``red team-blue team'' scenarios that qualitatively assess network security and resilience. In comparison to decentralized control theory, a team adversarial element is added. In comparison to general dynamic game theory, a sharp contrast between cooperation with teammates and conflict against adversaries is preserved. Further, the stochastic element (modeled by a ``chance'' or ``Nature'' player in the game) allows the inclusion of random component failures and disturbance signals.
There is currently a lack of deep theoretical and computational understanding of this class of dynamic games. A static version of the problem was studied in \cite{colombino2015quadratic}. Many fundamental questions that have been answered in static or single-team decentralized control settings do not have counterparts in the two-team setting.
The main contributions of the present paper are as follows. We formulate a two-team stochastic dynamic game problem and introduce a concept of mutual quadratic invariance, which defines a class of interacting information structures for the two teams under which optimal linear feedback control strategies are easy to compute. This is analogous to the concept of quadratic invariance in (single team) decentralized control \cite{rotkowitz2006characterization}. We show that for zero-sum two-team dynamic games, structured state feedback saddle point equilibrium strategies can be computed from equivalent structured disturbance feedforward saddle point equilibrium strategies. However, for nonzero-sum games we show via a counterexample that a similar equivalence fails to hold for structured Nash equilibrium strategies. Finally, we present a numerical example, which illustrates the importance of the information structure on the value of the game.
The rest of the paper is structured as follows. Section II provides preliminaries on static team games. Section III formulates a two-team stochastic dynamic game. Sections IV and V develop results on disturbance feedforward and state feedback strategies and introduce the concept of mutual quadratic invariance. Section VI presents illustrative numerical experiments, and Section VII gives some concluding remarks and future research directions.
\section{Two-team stochastic static games}
In this section we review basic results for a two-team stochastic static game. In this setting, two teams who both know the distribution parameters of a Gaussian random vector $w$ need to decide strategies to compute vectors $u\in{\mathbb R}^m$ and $v\in{\mathbb R}^q$ as a function of the realization $w\in{\mathbb R}^n$ in order to minimize the expectation of \textit{different} quadratic forms in $w,u,v$. Each team is composed of multiple agents, each of which observes a different linear function of $w$ and decides a portion of the vectors $u$ or $v$.
More formally, given a vector $w\sim\mathcal{N}(m_w,\Sigma_w)$, where $\Sigma_w\succ0$, consider the following game \begin{equation}\label{eqn:team:game} T_1: \left\{\begin{split} \min_{\kappa_i(\cdot)} &\,\,\mathbb{E}_w\left(J_1(w,u,v)\right)\\ \text{s. t. } &\; u_i=\kappa_i(C_iw)\\ & \forall i\in{\mathbb Z}_{[1,N]} \end{split}\right., \;\; T_2: \left\{\begin{split} \min_{\lambda_j(\cdot)} &\,\,\mathbb{E}_w\left(J_2(w,u,v)\right)\\ \text{s. t. } &\; v_j=\lambda_j(\Gamma_jw) \\ & \forall j\in{\mathbb Z}_{[1,M]} \end{split}\right. \end{equation} where $J_i(w,u,v):=$ \begin{multline*} \left[ \begin{array}{c}
w \\
u \\
v \end{array} \right]^\top \left[ \begin{array}{ccc}
\mathcal H_{i\;ww} & \mathcal H_{i\;wu} & \mathcal H_{i\;wv} \\
\mathcal H_{i\;wu}^\top & \mathcal H_{i\;uu} & \mathcal H_{i\;uv} \\
\mathcal H_{i\;wv}^\top & \mathcal H_{i\;uv}^\top & \mathcal H_{i\;vv} \end{array} \right] \left[ \begin{array}{c}
w \\
u \\
v \end{array} \right],\\ i\in\{1,2\} \end{multline*} are the objective functions of each team, and $\kappa_i(\cdot), i\in{\mathbb Z}_{[1,N]}$ and $ \lambda_j(\cdot), j\in{\mathbb Z}_{[1,M]}$ are Borel measurable functions corresponding to the decision strategies of agents on team 1 and 2, respectively.
\begin{assumption}\label{ass:saddle} We assume $$ \left[ \begin{array}{cc}
\mathcal H_{1\,uu} & \mathcal H_{1\,uv}\\
\mathcal H_{1\,uv}^\top& \mathcal H_{2\,vv} \end{array} \right]\succ 0, \left[ \begin{array}{cc}
\mathcal H_{1\,uu} & \mathcal H_{2\,uv}\\
\mathcal H_{2\,uv}^\top & \mathcal H_{2\,vv} \end{array} \right]\succ 0. $$ \end{assumption} Note that in the zero-sum case $(J_1=-J_2)$, Assumption~\ref{ass:saddle} is standard to guarantee the existence of a saddle point equilibrium in the game without a decentralized information structure~\cite[condition 6.3.9]{hassibi1999indefinite}. If $J_1=J_2$, Assumption~\ref{ass:saddle} reduces to the standard positive definite assumption of team theory \cite{radner1962team}.
We now define the set of Nash optimal strategies for the game in~\eqref{eqn:team:game}.
\begin{definition}\label{def:nash} A pair of strategies $(\kappa^\star(\cdot),\lambda^\star(\cdot))$ of the form $[\kappa_1^{\star\top}(C_1\,\cdot),\dots,\kappa_N^{\star\top}(C_N\,\cdot)]^\top$ and $[\lambda_1^{\star\top}(\Gamma_1\,\cdot),\dots,\lambda_M^{\star\top}(\Gamma_M\,\cdot)]^\top$ is Nash optimal for the game in~\eqref{eqn:team:game} if \begin{equation}\label{eqn:nash}\left\{ \begin{array}{l} \displaystyle \kappa^\star(\cdot)\in\arg \min_{\kappa(\cdot)}\mathbb E_w J_1(w,\kappa(w),\lambda^\star(w))\\ \displaystyle \lambda^\star(\cdot)\in\arg \min_{\lambda(\cdot)} \mathbb E_w J_2(w,\kappa^\star(w),\lambda(w)). \end{array} \right. \end{equation} \end{definition}
Under Assumption~\ref{ass:saddle}, the game in~\eqref{eqn:team:game} admits a unique set of linear Nash optimal strategies, which can be computed by solving a set of linear equations derived from stationarity conditions \cite{colombino2015quadratic}. This turns out to be a special case of a general multi-player, multi-objective linear quadratic static game considered in \cite{basar1978decentralized}.
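To make the stationarity conditions concrete, the following Python sketch (our own illustration, not code from the cited references) solves the coupled first-order conditions $\mathcal H_{1\,uu}u+\mathcal H_{1\,uv}v=-\mathcal H_{1\,wu}^\top w$ and $\mathcal H_{2\,uv}^\top u+\mathcal H_{2\,vv}v=-\mathcal H_{2\,wv}^\top w$ in the centralized special case, where both teams observe $w$ directly; the variable names mirror the partition of $\mathcal H_i$.

```python
import numpy as np

def centralized_nash_gains(H1, H2, n, m):
    """Solve the coupled stationarity conditions for linear Nash gains
    u = K w, v = L w in the full-information static game."""
    # blocks of H_i = [[Hww, Hwu, Hwv], [Hwu', Huu, Huv], [Hwv', Huv', Hvv]]
    H1wu, H1uu, H1uv = H1[:n, n:n+m], H1[n:n+m, n:n+m], H1[n:n+m, n+m:]
    H2wv, H2uv, H2vv = H2[:n, n+m:], H2[n:n+m, n+m:], H2[n+m:, n+m:]
    M = np.block([[H1uu, H1uv], [H2uv.T, H2vv]])
    rhs = -np.vstack([H1wu.T, H2wv.T])
    KL = np.linalg.solve(M, rhs)               # stacked gains [K; L]
    return KL[:m], KL[m:]

rng = np.random.default_rng(1)
n, m, q = 3, 2, 2                              # dims of w, u, v

def rand_H():
    G = rng.standard_normal((n + m + q,) * 2)
    H = (G + G.T) / 2
    H[n:, n:] += (n + m + q) * np.eye(m + q)   # well-conditioned decision block
    return H

H1, H2 = rand_H(), rand_H()
K, L = centralized_nash_gains(H1, H2, n, m)
# each gain is the best response to the other: the Nash property
H1wu, H1uu, H1uv = H1[:n, n:n+m], H1[n:n+m, n:n+m], H1[n:n+m, n+m:]
H2wv, H2uv, H2vv = H2[:n, n+m:], H2[n:n+m, n+m:], H2[n+m:, n+m:]
assert np.allclose(K, -np.linalg.solve(H1uu, H1wu.T + H1uv @ L))
assert np.allclose(L, -np.linalg.solve(H2vv, H2wv.T + H2uv.T @ K))
```

With decentralized observation maps $C_i$ and $\Gamma_j$, the same stationarity idea leads to a larger linear system over the structured gains.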
\section{Two-team stochastic dynamic games} Problem~\eqref{eqn:team:game} is a static game: there is no concept of time or causality in the information pattern. In this section we formulate a dynamic game, where two teams can influence the state evolution of a dynamical system. The agents on each team decide a portion of an input signal based on different observations of the system state over time. Decisions must be causal: each player is only allowed to use past or, at most, present information.
Our focus will be on the role of information structures for both teams in determining equilibrium strategies. Dynamic games offer a rich variety of information structures. Specific instances have been considered in the literature, with much work on various types of centralized structures \cite{basar1995dynamic} and some work on structures defined by spatiotemporal decentralization patterns. For example, a one-step-delay observation sharing pattern was shown in \cite{basar1978decentralized} to admit unique linear optimal strategies. There has been recent progress in (single team) decentralized control on information structure issues, including a characterization of information structures called quadratically invariant that yield convex control design problems \cite{rotkowitz2006characterization}. Here we seek an analogous result in a two-team game setting.
Consider the system \begin{equation}\label{eqn:dynamics} \begin{split} x(t+1) &=Ax(t)+B_1u(t)+B_2v(t)+w(t), \end{split} \end{equation} where $x(t) \in{\mathbb R}^n$ is the system state at time $t$ with $x(0)\sim \mathcal{N}(0,\Sigma_0)$, $u(t) \in {\mathbb R}^{m_1}$ is the input for team 1 at time $t$, $v(t) \in {\mathbb R}^{m_2}$ is the input for team 2 at time $t$, and $w(t)\sim \mathcal{N}(0,\Sigma_t)$ is a random disturbance. The cost functions for each team are given by \begin{multline}\label{eqn:cost:function} J_i := \mathbb E \left(\sum_{t=0}^{N-1}x(t)^\top M_i(t) x(t) + u(t)^\top R_i(t) u(t) \right .\\+ v(t)^\top V_i(t) v(t) \Bigg) + x(N)^\top M_i(N) x(N),\quad
i\in\{1,2\}, \end{multline} where $M_i(0)=\bold 0_{n\times n}$, $M_i(t)$ $=$ $M_i(t)^\top\in{\mathbb R}^{n\times n}$, $R_i(t)$ $=$ $R_i(t)^\top \in{\mathbb R}^{m_1\times m_1}$ and $V_i(t)$ $=$ $V_i(t)^\top \in{\mathbb R}^{m_2\times m_2}$. By defining the matrices $\mathcal A$ $=$ $\blockdiag(A,\dots,A)$ $\in$ ${\mathbb R}^{n(N+1)\times n(N+1)}$, $$ \mathcal B_i = \left[ \begin{array}{ccc}
B_i & 0 & 0 \\
0 & \ddots & 0 \\
\vdots & & B_i\\
0 & \cdots & 0 \end{array} \right]\in{\mathbb R}^{n(N+1)\times m_iN}, \quad i\in\{1,2\}, $$ $\mathcal M_i$ $=$ $\blockdiag(0,M_i(1),\dots,M_i(N))$ $\in$ ${\mathbb R}^{n(N+1)\times n(N+1)}$, $\mathcal R_i$ $=$ $\blockdiag(R_i(0),\dots,R_i(N-1))$ $\in$ ${\mathbb R}^{m_1N\times m_1N}$ and $\mathcal V_i$ $=$ $\blockdiag(V_i(0),\dots,V_i(N-1))$ $\in$ ${\mathbb R}^{m_2N\times m_2N}$ for $i$~$\in$~$\{1,2\}$, the vectors $\bold x = (x(0),...,x(N))\in{\mathbb R}^{n(N+1)}$, $\bold u = (u(0),...,u(N-1))\in{\mathbb R}^{m_1N}$, $\bold v = (v(0),...,v(N-1))\in{\mathbb R}^{m_2N}$ and $\bold w = (x(0),w(0),...,w(N-1))\in{\mathbb R}^{n(N+1)}$, and the shift matrix $$ \mathcal Z:= \left[ \begin{array}{cccc} 0 & & & \\
I & \ddots & & \\
& \ddots & \ddots & \\
& & I & 0
\end{array}\right]\in{\mathbb R}^{n(N+1)\times n(N+1)}, $$ we can write system~\eqref{eqn:dynamics} as \begin{equation}\label{eqn:team:game:rewritten} \begin{split} \bold x &=\mathcal Z \mathcal A\bold x+ \mathcal Z \mathcal B_1\bold u + \mathcal Z \mathcal B_2 \bold v+ \bold w. \end{split} \end{equation} The system in~\eqref{eqn:team:game:rewritten} can be rewritten compactly as \begin{equation}\label{eqn:team:game:rerewritten} \bold{x} = \left[ \begin{array}{ccc}
\mathcal P_{11} & \mathcal P_{12} & \mathcal P_{13} \end{array} \right] \left[ \begin{array}{c} \bold{w} \\ \bold{u} \\ \bold{v} \end{array} \right], \end{equation} where $\mathcal P_{11}=(I-\mathcal Z\mathcal A)^{-1} $, $\mathcal P_{12}=(I-\mathcal Z\mathcal A)^{-1} \mathcal Z \mathcal B_1$ and $\mathcal P_{13}=(I-\mathcal Z\mathcal A)^{-1} \mathcal Z \mathcal B_2$. The cost functions in~\eqref{eqn:cost:function} can be written as a function of the vectorized inputs as \begin{equation*} \begin{split} &\bold J_i(\bold u,\bold v)= \mathbb E_{\mathbf w}\left( \left[ \begin{array}{c} \mathbf w \\ \mathbf u \\ \mathbf v \end{array} \right]^\top \mathcal H_i \left[ \begin{array}{c} \mathbf w \\ \mathbf u \\ \mathbf v \end{array} \right] \right),\quad i\in\{1,2\}, \end{split} \end{equation*} where \begin{multline*} \mathcal H_i =\\ \left[ \begin{array}{ccc} \mathcal P_{11}^\top\mathcal M_i \mathcal P_{11}& \mathcal P_{11}^\top\mathcal M_i\mathcal P_{12} & \mathcal P_{11}^\top\mathcal M_i\mathcal P_{13} \\
\mathcal P_{12}^\top\mathcal M_i \mathcal P_{11} &\mathcal P_{12}^\top\mathcal M_i\mathcal P_{12} + \mathcal R_i &\mathcal P_{12}^\top\mathcal M_i\mathcal P_{13} \\
\mathcal P_{13}^\top\mathcal M_i \mathcal P_{11} &\mathcal P_{13}^\top\mathcal M_i\mathcal P_{12} &\mathcal P_{13}^\top\mathcal M_i\mathcal P_{13} + \mathcal V_i \end{array} \right], \end{multline*} for $i\in\{1,2\}$.
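To make the lifted construction concrete, the following sketch (illustrative only; variable names mirror $\mathcal Z$, $\mathcal A$, $\mathcal B_i$ and $\mathcal P_{1j}$) builds the block matrices for a small random system and verifies~\eqref{eqn:team:game:rerewritten} against the recursion~\eqref{eqn:dynamics}.

```python
import numpy as np

def lifted_system(A, B1, B2, N):
    """Build the lifted maps P11, P12, P13 for a horizon-N system."""
    n = A.shape[0]
    m1, m2 = B1.shape[1], B2.shape[1]
    Z = np.kron(np.diag(np.ones(N), -1), np.eye(n))   # block down-shift matrix
    calA = np.kron(np.eye(N + 1), A)                  # blockdiag(A, ..., A)
    # calB_i: block-diagonal B_i on the first N block-rows, zero last block-row
    calB1 = np.vstack([np.kron(np.eye(N), B1), np.zeros((n, m1 * N))])
    calB2 = np.vstack([np.kron(np.eye(N), B2), np.zeros((n, m2 * N))])
    P11 = np.linalg.inv(np.eye(n * (N + 1)) - Z @ calA)
    return P11, P11 @ Z @ calB1, P11 @ Z @ calB2

# sanity check against x(t+1) = A x(t) + B1 u(t) + B2 v(t) + w(t)
rng = np.random.default_rng(0)
n, m1, m2, N = 2, 1, 1, 3
A = rng.standard_normal((n, n))
B1, B2 = rng.standard_normal((n, m1)), rng.standard_normal((n, m2))
P11, P12, P13 = lifted_system(A, B1, B2, N)
u, v = rng.standard_normal(m1 * N), rng.standard_normal(m2 * N)
w = rng.standard_normal(n * (N + 1))   # stacked (x(0), w(0), ..., w(N-1))
x = [w[:n]]
for t in range(N):
    x.append(A @ x[-1] + B1 @ u[t*m1:(t+1)*m1]
             + B2 @ v[t*m2:(t+1)*m2] + w[n*(t+1):n*(t+2)])
assert np.allclose(np.concatenate(x), P11 @ w + P12 @ u + P13 @ v)
```

Note that $(I-\mathcal Z\mathcal A)$ is block unit lower triangular, so the inverse always exists.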
We are interested in the finite horizon, two-team stochastic dynamic game where: \begin{itemize} \item Team 1 minimizes $\bold J_1(\bold u,\bold v)$ \item Team 2 minimizes $\bold J_2(\bold u,\bold v)$ \item Each team chooses a structured causal state feedback strategy of the form $$ \bold u = \mathcal K_1 (\bold x), \quad \bold v = \mathcal K_2 (\bold x), \quad \mathcal K_1\in\mathcal S_1,\;\mathcal K_2\in \mathcal S_2, $$ where $\mathcal K_i:{\mathbb R}^{n(N+1)}\to{\mathbb R}^{m_iN}$, for $i\in\{1,2\}$, are measurable functions and $\mathcal S_1$ and $\mathcal S_2$ define an information structure for each team. \end{itemize}
We define an information structure $\mathcal S_i\in\{0,1\}^{m_iN\times n(N+1)}$ as a binary matrix. The notation $\mathcal K_i\in\mathcal S_i$ indicates that if $[\mathcal S_i]_{jk}=0$, then the $j^\text{th}$ component of $\mathcal K_i$ is not a function of $\bold x_k$. By choosing the information structures one can enforce causality and a prescribed spatiotemporal structure on the controller strategies.
\section{Mutual Quadratic Invariance} \label{sec.dis.feedfarward} In decentralized control with quadratically invariant information structures, the controller structure can be enforced on an affine parameter that defines the achievable set of closed-loop systems, and a structured feedback controller can then be recovered. We now follow a similar approach in the two-team setting.
\subsection{Disturbance feedforward strategies} By searching for measurable disturbance feedforward strategies of the type $\bold u=\mathcal Q_1(\mathcal P_{11} \bold w)$ and $\bold v=\mathcal Q_2(\mathcal P_{11} \bold w)$, where $\mathcal Q_1\in \mathcal S_1$ and $\mathcal Q_2\in \mathcal S_2$, we recover the formulation of~\eqref{eqn:team:game}. Provided that Assumption~\ref{ass:saddle} is satisfied, there exists a unique Nash equilibrium of the form $\bold u=\bar {\mathcal Q}_1\mathcal P_{11} \bold w,~ \bold v= \bar{ \mathcal Q}_2\mathcal P_{11} \bold w$ in the space of linear strategies \cite{colombino2015quadratic}. The matrices $\bar {\mathcal Q}_1, \bar{ \mathcal Q}_2$ can be easily computed by solving a linear system of equations or a sequence of semidefinite programs~\cite{colombino2015quadratic,basar1978decentralized}. Assumption~\ref{ass:saddle} for the dynamic game problem becomes \begin{align*} \left[ \begin{array}{cc} \mathcal P_{12}^\top\mathcal M_1\mathcal P_{12} +\mathcal R_1 &\mathcal P_{12}^\top\mathcal M_1\mathcal P_{13}\\ \mathcal P_{13}^\top\mathcal M_1\mathcal P_{12} & \mathcal P_{13}^\top\mathcal M_2\mathcal P_{13} + \mathcal V_2 \end{array} \right]\succ 0, \\ \left[ \begin{array}{cc} \mathcal P_{12}^\top\mathcal M_2\mathcal P_{12} +\mathcal R_2 &\mathcal P_{12}^\top\mathcal M_2\mathcal P_{13}\\ \mathcal P_{13}^\top\mathcal M_2\mathcal P_{12} & \mathcal P_{13}^\top\mathcal M_1\mathcal P_{13} + \mathcal V_1 \end{array} \right]\succ 0. \end{align*}
We can define new cost functions that depend on the matrices $\mathcal Q_1$ and $\mathcal Q_2$ describing linear disturbance feedforward strategies as \begin{equation}\label{eq.cost.q} \mathcal J_i\left( \left[ \begin{array}{c} {\mathcal Q}_1\\ {\mathcal Q}_2 \end{array} \right]
\right) = \bold J_i(\bold u,\bold v)\bigg|_{\bold u=\mathcal Q_1\mathcal P_{11} \bold w, \bold v=\mathcal Q_2\mathcal P_{11} \bold w}. \end{equation} In particular, \begin{multline*} \mathcal J_i\left( \left[ \begin{array}{c} {\mathcal Q}_1\\ {\mathcal Q}_2 \end{array} \right]
\right) = \|\mathcal M_i^{1\over 2} \left( I+\mathcal P_{12}\mathcal Q_1+\mathcal P_{13}\mathcal Q_2\right)\mathcal P_{11}\Sigma_{\bold w}^{1\over 2} \|^2_F \\
+ \| \mathcal R_i^{1\over 2} \mathcal Q_1\mathcal P_{11}\Sigma_{\bold w}^{1\over 2} \|^2_F
+ \|\mathcal V_i^{1\over 2} \mathcal Q_2 \mathcal P_{11}\Sigma_{\bold w}^{1\over 2}\|^2_F, \end{multline*} where $\Sigma_{\bold w}$ is the covariance of $\bold w$ and $\Sigma_{\bold w}^{1\over 2}$ denotes its symmetric square root. \subsection{Equivalent state feedback strategies}
It is easy to show that there exists a bijective relationship between a pair of linear disturbance feedforward strategies ($\mathcal Q_1, \mathcal Q_2$) and an equivalent pair of linear state feedback strategies described by the matrices ($\mathcal K_1, \mathcal K_2$). More precisely, using~\eqref{eqn:team:game:rerewritten} we obtain
\begin{equation}\label{eq.equivalent.k} \begin{split} \left[ \begin{array}{c} \bold {u}\\ \bold {v} \end{array} \right]& =\left[ \begin{array}{c} \mathcal Q_1\\ \mathcal Q_2 \end{array} \right]\mathcal P_{11}\bold w \\ & = \left[ \begin{array}{c} \mathcal Q_1\\ \mathcal Q_2 \end{array} \right]\bold x - \left[ \begin{array}{c} \mathcal Q_1\\ \mathcal Q_2 \end{array} \right]\left[ \begin{array}{cc} \mathcal P_{12} & \mathcal P_{13} \end{array} \right]\left[ \begin{array}{c} \bold u\\ \bold v \end{array} \right]. \end{split} \end{equation}
We can define the function $g$ such that
\begin{align*}
\left( \left[ \begin{array}{c} {\mathcal K}_1\\ {\mathcal K}_2 \end{array} \right] \right)
= g
\left(
\left[ \begin{array}{c} {\mathcal Q}_1\\ {\mathcal Q}_2 \end{array} \right]
\right),
\end{align*}
where
\begin{multline*}
g\left( \left[ \begin{array}{c} {\mathcal Q}_1\\ {\mathcal Q}_2 \end{array} \right]
\right) = \left( I + \left[ \begin{array}{c} \mathcal Q_1\\ \mathcal Q_2 \end{array} \right] \left[ \begin{array}{cc} \mathcal P_{12} & \mathcal P_{13} \end{array} \right]\right)^{-1}
\left[ \begin{array}{c} \mathcal Q_1\\ \mathcal Q_2 \end{array} \right]. \end{multline*}
Using a similar approach to~\eqref{eq.equivalent.k}, one can construct the inverse mapping that, given a pair of feedback strategies, recovers the equivalent feedforward strategies.
$$
\left[ \begin{array}{c} {\mathcal Q}_1\\ {\mathcal Q}_2 \end{array} \right]
= g^{-1}\left( \left[ \begin{array}{c} {\mathcal K}_1\\ {\mathcal K}_2 \end{array} \right] \right), $$ where the map $g^{-1}$ takes the form \begin{multline*}
g^{-1}\left( \left[ \begin{array}{c} {\mathcal K}_1\\ {\mathcal K}_2 \end{array} \right]
\right)=\\ \left[ \begin{array}{c} {\mathcal K}_1\\ {\mathcal K}_2 \end{array} \right]\left( I - \left[ \begin{array}{cc} \mathcal P_{12} & \mathcal P_{13} \end{array} \right] \left[ \begin{array}{c} {\mathcal K}_1\\ {\mathcal K}_2 \end{array} \right]\right)^{-1}. \end{multline*}
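A quick numerical check of this bijection (our own sketch, not part of the formal development; $P$ plays the role of $[\,\mathcal P_{12}\;\;\mathcal P_{13}\,]$, and $Q$, $K$ are the stacked feedforward and feedback gains):

```python
import numpy as np

rng = np.random.default_rng(2)
nx, nu = 6, 4        # dims of the lifted state and of the stacked inputs (u, v)
P = rng.standard_normal((nx, nu)) / nx   # stands in for [P12  P13]
Q = rng.standard_normal((nu, nx)) / nx   # stacked feedforward gains [Q1; Q2]

def g(Q, P):
    """Feedforward -> feedback: K = (I + Q P)^{-1} Q."""
    return np.linalg.solve(np.eye(Q.shape[0]) + Q @ P, Q)

def g_inv(K, P):
    """Feedback -> feedforward: Q = K (I - P K)^{-1}."""
    return K @ np.linalg.inv(np.eye(K.shape[1]) - P @ K)

K = g(Q, P)
assert np.allclose(g_inv(K, P), Q)   # round trip recovers the feedforward gains
# both parameterizations generate the same input signal for any disturbance
P11w = rng.standard_normal(nx)       # plays the role of P11 w
x = P11w + P @ (Q @ P11w)            # closed-loop lifted state
assert np.allclose(Q @ P11w, K @ x)
```

In the dynamic game, $\mathcal P_{12}$ and $\mathcal P_{13}$ are strictly block lower triangular, so the inverses above always exist; for the random dense matrices used here the small scaling keeps them invertible.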
Given a pair of linear feedback strategies $\mathcal K_1$ and $\mathcal K_2$, the cost for player $i$ can be evaluated by considering the equivalent feedforward strategies as $$
\bold J_i(\bold u,\bold v)\bigg|_{\bold u=\mathcal K_1\bold x, \bold v=\mathcal K_2\bold x} = \mathcal J_i \left(g^{-1}\left( \left[ \begin{array}{c} {\mathcal K}_1\\ {\mathcal K}_2 \end{array} \right]
\right)\right), $$ where $\mathcal J_i$ is defined in~\eqref{eq.cost.q}.
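Since all signals are linear in $\bold w$, this evaluation reduces to traces of weighted covariances. As an illustrative check (not part of the paper's development), the sketch below confirms that the expected quadratic cost under $\bold u=\mathcal Q_1\mathcal P_{11}\bold w$, $\bold v=\mathcal Q_2\mathcal P_{11}\bold w$, computed via $\mathbb E[a^\top S a]=\operatorname{tr}(F^\top S F\,\Sigma_{\bold w})$ for $a=F\bold w$, matches a squared-Frobenius-norm evaluation.

```python
import numpy as np

rng = np.random.default_rng(3)
nx, m1, m2 = 5, 2, 2
# (I - ZA)^{-1} is block unit lower triangular; mimic that here
P11 = np.eye(nx) + np.tril(rng.standard_normal((nx, nx)), -1)
P12, P13 = rng.standard_normal((nx, m1)), rng.standard_normal((nx, m2))
M, R, V = [X @ X.T + np.eye(k) for X, k in
           [(rng.standard_normal((nx, nx)), nx),
            (rng.standard_normal((m1, m1)), m1),
            (rng.standard_normal((m2, m2)), m2)]]
Sigma = np.eye(nx)                       # unit disturbance covariance
Q1, Q2 = rng.standard_normal((m1, nx)), rng.standard_normal((m2, nx))

# closed-loop maps from w to (x, u, v) under u = Q1 P11 w, v = Q2 P11 w
Fx = (np.eye(nx) + P12 @ Q1 + P13 @ Q2) @ P11
Fu, Fv = Q1 @ P11, Q2 @ P11

# expected cost via E[a' S a] = tr(F' S F Sigma) for a = F w
J_trace = (np.trace(Fx.T @ M @ Fx @ Sigma)
           + np.trace(Fu.T @ R @ Fu @ Sigma)
           + np.trace(Fv.T @ V @ Fv @ Sigma))
# the same cost via squared Frobenius norms (Sigma = I here)
frob = lambda S, F: np.linalg.norm(np.linalg.cholesky(S).T @ F) ** 2
J_frob = frob(M, Fx) + frob(R, Fu) + frob(V, Fv)
assert np.isclose(J_trace, J_frob)
```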
Now that we have a way to construct feedback strategies which are equivalent to any set of linear feedforward strategies, we need to establish a condition that guarantees that such equivalent feedback strategies will preserve the desired structure.
\subsection{Mutual Quadratic Invariance}
We know from the quadratic invariance literature~\cite{rotkowitz2006characterization,swigart2010explicit} that the equivalence ($\mathcal Q_1\in \mathcal S_1$ and $\mathcal Q_2\in \mathcal S_2$) $\iff$ ($\mathcal K_1\in \mathcal S_1$ and $\mathcal K_2\in \mathcal S_2$) holds if and only if for all $ \left(\mathcal K_1, \mathcal K_2\right ) \in \mathcal S_1 \times \mathcal S_2$ it holds that \begin{equation}\label{eqn:mut:quad:inv:1} \left[ \begin{array}{c} \mathcal K_1\\ \mathcal K_2 \end{array} \right] \left[ \begin{array}{cc} \mathcal P_{12} & \mathcal P_{13} \end{array} \right]\left[ \begin{array}{c} \mathcal K_1\\ \mathcal K_2 \end{array} \right]\in \mathcal S_1 \times \mathcal S_2, \end{equation} in other words, $\mathcal S_1 \times \mathcal S_2$ is quadratically invariant under $\left[ \;\mathcal P_{12} \quad \mathcal P_{13} \; \right ]$. We call this property \textbf{mutual quadratic invariance} (MQI). We can expand~\eqref{eqn:mut:quad:inv:1} as \begin{equation}\label{eqn:mut:quad:inv:2} \left[ \begin{array}{c} \mathcal K_1\\ \mathcal K_2 \end{array} \right]\in \mathcal S_1 \times \mathcal S_2\implies \left\{ \begin{array}{c} \mathcal K_1\mathcal P_{12} \mathcal K_1 , \quad \mathcal K_1\mathcal P_{13} \mathcal K_2 \in\mathcal S_1\\ \mathcal K_2\mathcal P_{12} \mathcal K_1 , \quad \mathcal K_2\mathcal P_{13} \mathcal K_2 \in\mathcal S_2. \end{array} \right. \end{equation}
By inspecting~\eqref{eqn:mut:quad:inv:1} we note that MQI is equivalent to quadratic invariance (QI) for a control problem in which both decisions $\bold u$ and $\bold v$ are taken by a single decision maker. MQI information structures will allow us to compute equilibrium strategies in two-team games.
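Because MQI is a purely structural condition, it can be tested directly on sparsity patterns. The sketch below (a toy illustration with one scalar state and one scalar input per stage, not the paper's example) checks the four closure conditions of~\eqref{eqn:mut:quad:inv:2} with boolean matrix products.

```python
import numpy as np

def mqi(S1, S2, T12, T13):
    """Pattern-level test of mutual quadratic invariance.

    S1, S2 are binary controller structures; T12, T13 are the binary
    sparsity patterns of P12 and P13.  Checking the closure conditions
    on the patterns implies them for every pair of conforming gains."""
    prod = lambda A, B: (A.astype(int) @ B.astype(int)) > 0
    inside = lambda X, S: not np.any(X & ~S.astype(bool))  # pattern(X) in S
    S1b, S2b = S1.astype(bool), S2.astype(bool)
    return (inside(prod(prod(S1b, T12), S1b), S1b)      # K1 P12 K1 in S1
        and inside(prod(prod(S1b, T13), S2b), S1b)      # K1 P13 K2 in S1
        and inside(prod(prod(S2b, T12), S1b), S2b)      # K2 P12 K1 in S2
        and inside(prod(prod(S2b, T13), S2b), S2b))     # K2 P13 K2 in S2

# toy patterns: one scalar state/input per stage, strictly causal plant
N = 4
T = np.tril(np.ones((N + 1, N)), -1)     # x(t+1) depends on inputs up to time t
S_full = np.tril(np.ones((N, N + 1)))    # causal full-information controller
S_mem = np.eye(N, N + 1)                 # memoryless: u(t) sees only x(t)
assert mqi(S_full, S_full, T, T)         # causal FI structures are MQI
assert not mqi(S_mem, S_mem, T, T)       # the memoryless structure is not
```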
\section{Computing Equilibrium Strategies} We have seen in Section~\ref{sec.dis.feedfarward} that, given structures $\mathcal S_1$ and $\mathcal S_2$ that are mutually quadratically invariant under $\mathcal P_{12}$ and $\mathcal P_{13}$, one can easily obtain a Nash equilibrium in the disturbance feedforward strategies. Furthermore, once we recover the equivalent state feedback strategies, the structure is preserved. There is still a nontrivial question that needs to be answered.
\begin{problem}\label{prob.nqnk} Given a Nash equilibrium in the feedforward strategies ($\bar {\mathcal Q}_1, \bar{ \mathcal Q}_2$), are the equivalent feedback strategies \begin{align}\label{eqn:k_bar}
\left( \left[ \begin{array}{c} \bar{\mathcal K}_1\\ \bar{\mathcal K}_2 \end{array} \right] \right)
= g
\left( \left[ \begin{array}{c} \bar{\mathcal Q}_1\\ \bar{\mathcal Q}_2 \end{array} \right]
\right),
\end{align}
a Nash equilibrium in the feedback strategies? \end{problem}
In order to understand why the answer to Problem~\ref{prob.nqnk} is nontrivial, we first present a counterexample for a nonzero-sum game.
\subsection{Nonzero-sum game} Consider the following problem instance with \begin{equation} \begin{aligned} &N = 2, \quad A = 2, \quad B_1 = 0.4, \quad B_2 = 0.1, \\ &\Sigma_0 = 1, \quad \Sigma_t = 1 \ \forall t, \\ &M_1(t) = R_1(t) = V_1(t) = 1 \ \forall t, \\ &M_2(t) = 70, \quad R_2(t) = V_2(t) = 1 \ \forall t. \end{aligned} \end{equation} This is a single-state, two-stage, two-player problem, and we will consider centralized, causal strategies, which are readily verified to be mutually quadratically invariant. Using disturbance feedforward strategies, the problem can be reduced to a static game whose unique Nash equilibrium strategies are readily computed using methods from~\cite{basar1978decentralized,colombino2015quadratic}. This yields the Nash pair $(\bar{\mathcal Q}_1,\bar{ \mathcal Q}_2)$, where
\begin{equation} \begin{aligned} \bar {\mathcal Q}_1 &= \left[\begin{array}{ccc}-0.6795 & 0 & 0 \\0.6283 & -0.4301 & 0\end{array}\right], \\ \bar{\mathcal Q}_2 &= \left[\begin{array}{ccc}-11.890 & 0 & 0 \\10.996 & -7.5269 & 0\end{array}\right]. \end{aligned} \end{equation} The corresponding equilibrium value for player 1 is $\mathcal J_1^*\left( \left[ \begin{array}{c} \bar{\mathcal Q}_1\\ \bar{\mathcal Q}_2 \end{array} \right] \right) = 220$. The corresponding state feedback strategies are \begin{equation} \begin{aligned} \bar{\mathcal K}_1 &= \left[\begin{array}{ccc}-0.6795 & 0 & 0 \\ 0 & -0.4301 & 0\end{array}\right], \\ \bar{\mathcal K}_2 &= \left[\begin{array}{ccc}-11.890 & 0 & 0 \\ 0 & -7.5269 & 0\end{array}\right]. \end{aligned} \end{equation} However, $(\bar{\mathcal K}_1 ,\bar{\mathcal K}_2 )$ is not a Nash equilibrium in state feedback strategies since it is readily verified that $\mathcal J_1\left(g^{-1}\left( \left[ \begin{array}{c} { \hat{\mathcal K}}_1\\ \bar{ \mathcal K}_2 \end{array} \right]
\right)\right) = 206.1$, where \begin{equation} \hat{\mathcal K}_1 = \left[\begin{array}{ccc}-1.853 & 0 & 0 \\ 0 & -0.4301 & 0\end{array}\right]. \end{equation} Although the sparsity structure is preserved between the disturbance feedforward and state feedback strategies (since the information structure is mutually quadratically invariant), the Nash equilibrium property is not. Thus, for this particular nonzero-sum dynamic game, the answer to the question posed in Problem~\ref{prob.nqnk} is negative. We show next that this situation does not occur in zero-sum games.
\subsection{Zero-sum game} Let us now consider the zero-sum case, where the objective function of one team is precisely the negative of that of the other team. Such problems can be rewritten as a min-max problem of the form
\begin{equation}\label{eq.zsg} \begin{split} \min_{\bold u}\max_{\bold v} \quad& \bold J(\bold u,\bold v) := \mathbb E_{\mathbf w}\left( \left[ \begin{array}{c} \mathbf w \\ \mathbf u \\ \mathbf v \end{array} \right]^\top \mathcal H \left[ \begin{array}{c} \mathbf w \\ \mathbf u \\ \mathbf v \end{array} \right] \right)\end{split} \end{equation}
As before, we are interested in strategies of the form $$ \bold u = \mathcal K_1 (\bold x), \quad \bold v = \mathcal K_2 (\bold x), \quad \mathcal K_1\in\mathcal S_1,\;\mathcal K_2\in \mathcal S_2, $$ where $\mathcal S_1$ and $\mathcal S_2$ are prescribed sets of structured causal controllers. As zero-sum games are a special case of nonzero-sum games, provided Assumption~\ref{ass:saddle} holds, we can find structured Nash equilibria (rather, saddle point equilibria in the zero-sum context) if we consider linear strategies of the form $\bold u=\mathcal Q_1\mathcal P_{11} \bold w$ and $\bold v=\mathcal Q_2\mathcal P_{11} \bold w$. For the zero-sum game of the form~\eqref{eq.zsg}, Assumption~\ref{ass:saddle} reads $$ \left[ \begin{array}{cc} \mathcal P_{12}^\top\mathcal M\mathcal P_{12} +\mathcal R &\mathcal P_{12}^\top\mathcal M\mathcal P_{13}\\ \mathcal P_{13}^\top\mathcal M\mathcal P_{12} & -\left(\mathcal P_{13}^\top\mathcal M\mathcal P_{13} + \mathcal V\right) \end{array} \right]\succ 0. $$ As before, given the saddle point equilibrium in the feedforward strategies, when $\mathcal S_1$ and $\mathcal S_2$ are mutually quadratically invariant, we can compute equivalent linear feedback strategies that preserve the structure. In the zero-sum case, however, we can relate the equilibrium property of the feedforward strategies to that of the state feedback strategies.
We start with a result showing that the maps $g$ and $g^{-1}$ preserve stationary points, noting first that for a zero-sum game $\mathcal J_1=-\mathcal J_2$. \begin{lemma}\label{lem.lemma_stationarity} Given \begin{align*}
\left[ \begin{array}{c} \bar {\mathcal Q}_1\\ \bar {\mathcal Q}_2 \end{array} \right]
& = g^{-1}\left(
\left[ \begin{array}{c} \bar{\mathcal K}_1\\ \bar{\mathcal K}_2 \end{array} \right]
\right) \end{align*} then \begin{equation} \begin{split}\label{thm:thesis} \frac{\partial \mathcal J_1\left ( \left[ \begin{array}{c} {\mathcal Q}_1\\ {\mathcal Q}_2 \end{array} \right]
\right )}{\partial \mathcal Q_1}\bigg|_{\mathcal Q_1 = \bar {\mathcal Q}_1 ,\mathcal Q_2 = \bar {\mathcal Q}_2} & \in \mathcal S_1^\perp, \\ \frac{\partial \mathcal J_2\left( \left[ \begin{array}{c} {\mathcal Q}_1\\ {\mathcal Q}_2 \end{array} \right]
\right )}{\partial \mathcal Q_2}\bigg|_{\mathcal Q_1 = \bar {\mathcal Q}_1 ,\mathcal Q_2 = \bar {\mathcal Q}_2} & \in \mathcal S_2^\perp, \end{split} \end{equation} if and only if \begin{equation} \begin{split}\label{thm:thesis2} \frac{\partial \mathcal J_1\left ( g^{-1}\left(\left[ \begin{array}{c} {\mathcal K}_1\\ {\mathcal K}_2 \end{array} \right] \right)
\right )}{\partial \mathcal K_1}\bigg|_{\mathcal K_1 = \bar {\mathcal K}_1 ,\mathcal K_2 = \bar {\mathcal K}_2} & \in \mathcal S_1^\perp, \\ \frac{\partial \mathcal J_2\left( g^{-1}\left(\left[ \begin{array}{c} {\mathcal K}_1\\ {\mathcal K}_2 \end{array} \right] \right)
\right )}{\partial \mathcal K_2}\bigg|_{\mathcal K_1 = \bar {\mathcal K}_1 ,\mathcal K_2 = \bar {\mathcal K}_2} & \in \mathcal S_2^\perp, \end{split} \end{equation} \end{lemma} \begin{proof} Let us simplify the notation and define $$
{\mathcal Q} := \left[ \begin{array}{c}
{\mathcal Q}_1\\
{\mathcal Q}_2 \end{array} \right],\quad {\mathcal K} := \left[ \begin{array}{c}
{\mathcal K}_1\\
{\mathcal K}_2 \end{array} \right], \quad
{\mathcal P} := \left[ \begin{array}{cc} \mathcal P_{12} & \mathcal P_{13} \end{array} \right]. $$ We start by proving the \emph{only if} part. Note that since $\mathcal J_1 = -\mathcal J_2$, $$ \frac{\partial \mathcal J_2\left( \left[ \begin{array}{c} {\mathcal Q}_1\\ {\mathcal Q}_2 \end{array} \right]
\right )}{\partial \mathcal Q_2}\bigg|_{\mathcal Q_1 = \bar {\mathcal Q}_1 ,\mathcal Q_2 = \bar {\mathcal Q}_2} \in \mathcal S_2^\perp $$ if and only if $$ \frac{\partial \mathcal J_1\left( \left[ \begin{array}{c} {\mathcal Q}_1\\ {\mathcal Q}_2 \end{array} \right]
\right )}{\partial \mathcal Q_2}\bigg|_{\mathcal Q_1 = \bar {\mathcal Q}_1 ,\mathcal Q_2 = \bar {\mathcal Q}_2} \in \mathcal S_2^\perp. $$ Assume~\eqref{thm:thesis2} holds, and suppose~\eqref{thm:thesis} does not hold. Then there exists $\tilde{\mathcal {Q}}\in \mathcal S_1 \times \mathcal S_2$, with $\tilde{\mathcal {Q}}\neq0$, such that $$ \lim_{\varepsilon \to 0}\frac{\mathcal J_1(\bar{\mathcal Q}+\varepsilon \tilde{\mathcal Q})-\mathcal J_1(\bar{\mathcal Q})}{\varepsilon} = \kappa \neq 0, $$ or equivalently \begin{equation} \label{eqn:g_ginv} \lim_{\varepsilon \to 0}\frac{\mathcal J_1\left(g^{-1}\left(g\left(\bar{\mathcal Q}+\varepsilon \tilde{\mathcal Q}\right)\right )\right)-\mathcal J_1\left(\bar{\mathcal Q}\right)}{\varepsilon} = \kappa \neq 0. \end{equation} We know that \begin{equation} \begin{split}\label{eq.variation.g} & g\left(\bar{\mathcal Q}+\varepsilon \tilde{\mathcal Q}\right) = \left (I +\left (\bar{\mathcal Q}+\varepsilon \tilde{\mathcal Q}\right)\mathcal P\right )^{-1}\left (\bar{\mathcal Q}+\varepsilon \tilde{\mathcal Q}\right) \\ & = \left (I +\bar{\mathcal Q}\mathcal P+\varepsilon \tilde{\mathcal Q}\mathcal P\right )^{-1}\left (\bar{\mathcal Q}+\varepsilon \tilde{\mathcal Q}\right)\\ & = \left [ (I +\bar{\mathcal Q}\mathcal P)^{-1} - \varepsilon (I +\bar{\mathcal Q}\mathcal P)^{-1} \tilde{\mathcal Q} \mathcal P (I +\bar{\mathcal Q}\mathcal P)^{-1}\right.\\ & + \mathcal O(\varepsilon ^2) \Big]\left (\bar{\mathcal Q}+\varepsilon \tilde{\mathcal Q}\right)\\ & = \bar{\mathcal K} + \varepsilon \tilde {\mathcal K} + \mathcal O(\varepsilon ^2), \end{split} \end{equation} where $$ \tilde {\mathcal K} = (I +\bar{\mathcal Q}\mathcal P)^{-1}\tilde{\mathcal Q} - (I +\bar{\mathcal Q}\mathcal P)^{-1} \tilde{\mathcal Q} \mathcal P (I +\bar{\mathcal Q}\mathcal P)^{-1} \bar{\mathcal Q}. $$
Using~\cite[Theorems 14 and 26]{rotkowitz2006characterization} and mutual quadratic invariance, it is easy to conclude that $\tilde {\mathcal K}\in\mathcal S_1 \times \mathcal S_2$. Substituting~\eqref{eq.variation.g} in~\eqref{eqn:g_ginv} and using the fact that $\bar{\mathcal{Q}} = g^{-1}\left (\bar{\mathcal K}\right )$ one obtains \begin{equation*} \label{eqn:g_g_K} \begin{split} &\lim_{\varepsilon \to 0}\frac{\mathcal J_1\left(g^{-1}\left( \bar{\mathcal K} + \varepsilon \tilde {\mathcal K} + \mathcal O(\varepsilon ^2)\right )\right)-\mathcal J_1\left( g^{-1}\left (\bar{\mathcal K}\right )\right)}{\varepsilon} = \\ &\lim_{\varepsilon \to 0}\frac{\mathcal J_1\left(g^{-1}\left( \bar{\mathcal K} + \varepsilon \tilde {\mathcal K} \right )\right)-\mathcal J_1\left( g^{-1}\left (\bar{\mathcal K}\right )\right)}{\varepsilon}
= \kappa \neq 0, \end{split} \end{equation*} which, since $\tilde {\mathcal K}\in\mathcal S_1 \times \mathcal S_2$, is in contradiction with~\eqref{thm:thesis2} and thus proves the claim.
The converse direction can be proven analogously. \end{proof}
Note that Lemma~\ref{lem.lemma_stationarity} only holds for zero-sum games, as the proof relies heavily on the fact that $\mathcal J_1 = -\mathcal J_2$. We are now ready to state our main result, which allows us to construct a saddle point equilibrium in the space of linear feedback strategies.
\begin{theorem}\label{thm.nash.equivalent} Let ($\bar {\mathcal Q}_1,\bar {\mathcal Q}_2$) be the unique saddle point equilibrium in the disturbance feedforward strategies. Then if there exists a saddle point equilibrium in state feedback strategies, it is unique and given by \begin{align*}
\left[ \begin{array}{c} \bar {\mathcal K}_1\\ \bar {\mathcal K}_2 \end{array} \right]
& = g\left(
\left[ \begin{array}{c} \bar{\mathcal Q}_1\\ \bar{\mathcal Q}_2 \end{array} \right]
\right). \end{align*} \end{theorem}
\begin{proof}
Let $(\hat {\mathcal K}_1,\hat {\mathcal K}_2)$ be any saddle point equilibrium in linear state feedback strategies, and let $(\hat {\mathcal Q}_1,\hat {\mathcal Q}_2)$ be the corresponding disturbance feedforward policy. Since $(\hat {\mathcal K}_1,\hat {\mathcal K}_2)$ is stationary, so is $(\hat {\mathcal Q}_1,\hat {\mathcal Q}_2)$ by Lemma~\ref{lem.lemma_stationarity}. Since $\mathbf J$ is convex quadratic in ${\mathcal Q}_1$ and concave quadratic in ${\mathcal Q}_2$, it follows from Assumption~\ref{ass:saddle} that the stationary point is unique. Thus, $(\hat {\mathcal Q}_1,\hat {\mathcal Q}_2) = (\bar {\mathcal Q}_1,\bar {\mathcal Q}_2)$, and since $g$ is bijective, $(\hat {\mathcal K}_1,\hat {\mathcal K}_2) = (\bar {\mathcal K}_1,\bar {\mathcal K}_2)$. Hence any saddle point equilibrium in linear state feedback strategies coincides with $(\bar {\mathcal K}_1,\bar {\mathcal K}_2)$, which proves the claim. \end{proof}
This result allows computation of structured equilibrium feedback strategies in two-team stochastic dynamic games with mutually quadratically invariant information structures.
\section{Numerical example}
We now present a simple illustrative numerical example. The mutual quadratic invariance property allows us to compute equilibrium strategies and values under different information structures, and thereby to assess the importance of the information structure in dynamic games. We consider the two-player system depicted in Figure~\ref{fig:int}, which can be interpreted as a simple transportation network.
\begin{figure}
\caption{Two player network}
\label{fig:int}
\end{figure}
Each subsystem consists of a buffer with single integrator dynamics. System 1 stores $x_1$ and controls the input $u$, which transfers part of the good stored in its buffer to System 2. System 2 stores $x_2$ and controls the input $v$, which discards part of the good. Both systems are affected by random disturbances, which are normally distributed with zero mean and unit variance. The dynamics of the system are
\begin{multline}\label{eq.game.dyn} \left[ \begin{array}{c} {x_1^+} \\ {x_2^+} \end{array} \right] = \left[ \begin{array}{cc} 1 & 0 \\ 0 & 1 \end{array} \right] \left[ \begin{array}{c} {x_1} \\ {x_2} \end{array} \right] + \\ \left[ \begin{array}{c} -1 \\ 1 \end{array} \right] {u} + \left[ \begin{array}{c} 0 \\ -1 \end{array} \right] {v} + \left[ \begin{array}{c} {w_1} \\ {w_2} \end{array} \right]. \end{multline}
Given the dynamics in~\eqref{eq.game.dyn} with $w_1(t),w_2(t), x(0)\sim\mathcal N(0,1)$, we consider the following zero-sum dynamic game
\begin{equation}\label{eq.game.example} \begin{array}{rl} \displaystyle \min_{{\bold u}} \max_{{\bold v}} & \displaystyle\sum_{t=1}^{10} 2 \, {\mathbb E x_1^2(t)} + {\mathbb E u^2(t-1)} + \\ &\quad \quad - \, {\mathbb E x_2^2(t)} -2\,{\mathbb E v^2(t-1)} \\ \\ \displaystyle \textnormal{s. t.} & \bold u = \mathcal K_1 \bold x,\quad \mathcal K_1\in\mathcal S_1 \\ \displaystyle & \bold v = \mathcal K_2 \bold x,\quad \mathcal K_2\in\mathcal S_2, \end{array} \end{equation} where $\bold x = (x(0),...,x(10))$, $\bold u = (u(0),...,u(9))$, $\bold v = (v(0),...,v(9))$. Both players benefit from keeping the variance of their state and input low and from increasing the variance of the opponent's state and input. We will compare the results for different information structures $\mathcal S_1$ and $\mathcal S_2$, all of which are mutually quadratically invariant. In particular, we consider:
\begin{itemize} \item Causal controllers with full information (FI). That is, both players have access to all past and present information. $$
{\mathcal S_1 = \mathcal S_2 = \left[ \begin{array}{ccccccccccccccccc} \star & \star & \\ \star & \star &\star & \star &\\ \star & \star &\star & \star &\star & \star &\\ \star & \star &\star & \star &\star & \star &\star & \star &\\ \star & \star &\star & \star &\star & \star &\star & \star &\star & \star & \\ &&&&&\vdots \\ \end{array} \right]. } $$
\item One step delay information sharing (1SDIS). At time $t$ both players do not know the opponent's current state but have full information up to time $t-1$. $$
{\mathcal S_1 = \left[ \begin{array}{ccccccccccccccccc} \star & 0 & \\ \star & \star & \star & 0 &\\ \star & \star &\star & \star &\star & 0 &\\ \star & \star &\star & \star &\star & \star &\star & 0 &\\ \star & \star &\star & \star &\star & \star &\star & \star &\star & 0 & \\ &&&&&\vdots \\ \end{array} \right], } $$ $$
{\mathcal S_2 = \left[ \begin{array}{ccccccccccccccccc} 0 & \star & \\ \star & \star & 0 & \star &\\ \star & \star &\star & \star &0 & \star &\\ \star & \star &\star & \star &\star & \star &0 & \star &\\ \star & \star &\star & \star &\star & \star &\star & \star &0 & \star & \\ &&&&&\vdots \\ \end{array} \right]. } $$
\item Decentralized control for Player 1 (DP1). Player 1 only has access to present and past information about its own state, Player 2 has full information.
$$
{\mathcal S_1 = \left[ \begin{array}{ccccccccccccccccc} \star & 0 & \\ \star & 0 & \star & 0 &\\ \star & 0 &\star & 0 &\star & 0 &\\ \star & 0 &\star & 0 &\star & 0 &\star & 0 &\\ \star & 0 &\star & 0 &\star & 0 &\star & 0 &\star & 0 & \\ &&&&&\vdots \\ \end{array} \right], } $$
and $\mathcal S_2$ has full information. \end{itemize}
It is easily verifiable that all such structures respect the mutual quadratic invariance assumption for the given system. We computed the Nash equilibrium feedforward strategies ($\bar {\mathcal Q}_1,\bar {\mathcal Q}_2$) for the three different information structures using the method proposed in~\cite{colombino2015quadratic}. We applied Theorem~\ref{thm.nash.equivalent} to compute the linear saddle point equilibrium in the state feedback strategies as \begin{align*}
\left[ \begin{array}{c} \bar {\mathcal K}_1\\ \bar {\mathcal K}_2 \end{array} \right]
& = g\left(
\left[ \begin{array}{c} \bar{\mathcal Q}_1\\ \bar{\mathcal Q}_2 \end{array} \right]
\right). \end{align*} In Table~\ref{table:Table} we observe the different costs at equilibrium. Note the large differences in equilibrium cost across the different information structures. Using the full information structure as a baseline, as we expect, Player 1 is penalized by using decentralized information (DP1). On the other hand, Player 1 obtains a great advantage with the one step delay information sharing (1SDIS) structure. \begin{table} \caption{The cost at the Nash Equilibrium for the three different information structures. A smaller cost indicates an advantage for Player 1, who is minimizing in~\eqref{eq.game.example}, while a larger cost indicates an advantage for Player 2.} \label{table:Table} \begin{center}
\begin{tabular}{ @{} c|c@{} } \bottomrule \multicolumn{2}{>{\columncolor[gray]{.95}}c}{Equilibrium cost for the different information structures} \\ \toprule
Structure & Equilibrium Cost~\eqref{eq.game.example} \\ \toprule FI & -1.58 \\%
\\ 1SDIS & -10.02 \\%
\\ DP1 & 0.00 \\ \toprule \bottomrule \end{tabular} \end{center} \end{table}
Figure~\ref{fig:sym} shows the breakdown of the cost function of~\eqref{eq.game.example} for the different information structures and helps explain why certain information structures are more beneficial for one player or the other. For example, under 1SDIS we notice that Player 1 can exploit the fact that its opponent has no information on Player 1's current state and input, and it uses this to dramatically increase the variance of $x_2$. To do so, Player 1 needs to `spend' some variance in $u$, which also increases. The net gain, however, is clearly in favor of Player 1.
\begin{figure}
\caption{Breakdown of the cost function of~\eqref{eq.game.example} for the three information structures}
\label{fig:sym}
\end{figure}
\section{Conclusion}
We have considered a two-team linear quadratic stochastic dynamic game with decentralized information structures for both teams. We introduced the concept of Mutual Quadratic Invariance (MQI), which defines a class of interacting team information structures for which equilibrium strategies can be easily computed. We demonstrated an equivalence of disturbance feedforward and state feedback saddle point equilibrium strategies that facilitates this computation in zero-sum games, and we showed such an equivalence fails to hold for Nash equilibrium strategies in nonzero-sum games. A numerical example showed how mutually quadratically invariant information structures can be evaluated and how different structures can lead to significantly different equilibrium values.
Many fundamental questions remain open in two-team stochastic dynamic games. For example, issues involving infinite horizon and boundedness of the equilibrium value (stability), separation and certainty equivalence, games with incomplete model information, and design of information structures can be considered. Some of these results may take inspiration from recent progress on information structure issues in decentralized control.
\end{document}
Compound of two icosahedra
This uniform polyhedron compound is a composition of 2 icosahedra. It has octahedral symmetry Oh. As a holosnub, it is represented by Schläfli symbol β{3,4} and Coxeter diagram .
Compound of two icosahedra
TypeUniform compound
IndexUC46
Schläfli symbolsβ{3,4}
βr{3,3}
Coxeter diagrams
Polyhedra2 icosahedra
Faces16+24 triangles
Edges60
Vertices24
Symmetry groupoctahedral (Oh)
Subgroup restricting to one constituentpyritohedral (Th)
The triangles in this compound decompose into two orbits under action of the symmetry group: 16 of the triangles lie in coplanar pairs in octahedral planes, while the other 24 lie in unique planes.
It shares its vertex arrangement with a nonuniform truncated octahedron whose irregular hexagonal faces alternate long and short edges.
Nonuniform and uniform truncated octahedra. The first shares its vertex arrangement with this compound.
The icosahedron, as a uniform snub tetrahedron, is similar to these snub-pair compounds: compound of two snub cubes and compound of two snub dodecahedra.
Together with its convex hull, it represents the icosahedron-first projection of the nonuniform snub tetrahedral antiprism.
Cartesian coordinates
Cartesian coordinates for the vertices of this compound are all the permutations of
(±1, 0, ±τ)
where τ = (1+√5)/2 is the golden ratio (sometimes written φ).
Compound of two dodecahedra
The dual compound has two dodecahedra as pyritohedra in dual positions:
References
• Skilling, John (1976), "Uniform Compounds of Uniform Polyhedra", Mathematical Proceedings of the Cambridge Philosophical Society, 79 (3): 447–457, doi:10.1017/S0305004100052440, MR 0397554.
See also
• Compound of two snub cubes
External links
• VRML model:
# Understanding data analysis
Data analysis is the process of inspecting, cleaning, transforming, and modeling data in order to discover useful information, draw conclusions, and support decision-making. It is a crucial step in any research or business project, as it helps us make sense of the vast amount of data that is available to us.
In this section, we will explore the fundamentals of data analysis, including the different types of data, the importance of data quality, and the basic steps involved in the data analysis process. We will also discuss the role of data analysis in various fields, such as business, healthcare, and social sciences.
Data can come in various forms, such as numerical data, categorical data, and text data. Numerical data consists of numbers and can be further divided into continuous and discrete data. Categorical data, on the other hand, consists of categories or labels and can be further divided into nominal and ordinal data. Text data, as the name suggests, consists of textual information.
Data quality is another important aspect of data analysis. High-quality data is accurate, complete, consistent, and relevant to the problem at hand. It is important to ensure that the data we are analyzing is reliable and free from errors or biases.
The data analysis process typically involves several steps, including data collection, data cleaning, data exploration, data modeling, and data visualization. Each step is important and contributes to the overall analysis.
For example, let's say we are analyzing sales data for a retail company. The first step would be to collect the data, which could include information such as the date of purchase, the product sold, the quantity sold, and the price. Once we have collected the data, we would then clean it by removing any duplicate entries or errors.
Next, we would explore the data by calculating summary statistics, such as the mean, median, and standard deviation. We might also create visualizations, such as histograms or scatter plots, to better understand the patterns and relationships in the data.
Once we have explored the data, we can then model it using statistical techniques, such as regression analysis or time series analysis. These models can help us make predictions or uncover underlying patterns in the data.
Finally, we can visualize our findings using charts, graphs, or interactive dashboards. This allows us to communicate our results effectively and make data-driven decisions.
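The cleaning and exploration steps above can be sketched in a few lines of R. The table below is made up purely for illustration (the column names `quantity` and `price` are hypothetical, not from a real dataset):

```r
# Small made-up sales table (illustration only)
sales <- data.frame(
  quantity = c(2, 5, 3, 8, 1, 4, 6, 2),
  price    = c(9.99, 4.50, 7.25, 9.99, 12.00, 4.50, 7.25, 9.99)
)

# Cleaning: drop exact duplicate rows
sales <- unique(sales)

# Exploration: summary statistics for the quantity sold
mean(sales$quantity)
median(sales$quantity)
sd(sales$quantity)

# Visualization: a histogram of the quantities
hist(sales$quantity, main = "Quantity sold per purchase")
```

Note that `unique()` removes the duplicated last row here, so the summary statistics are computed on the cleaned data.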
## Exercise
Imagine you are analyzing data on customer satisfaction for a hotel chain. The data includes variables such as the customer's age, gender, length of stay, and satisfaction rating.
1. What type of data is the customer's age?
2. What type of data is the customer's satisfaction rating?
### Solution
1. The customer's age is numerical data, specifically discrete data.
2. The customer's satisfaction rating is numerical data, specifically ordinal data.
# Hypothesis testing and its importance
Hypothesis testing is a fundamental concept in statistics that allows us to make inferences and draw conclusions about a population based on a sample of data. It involves formulating a hypothesis, collecting data, and using statistical methods to evaluate the evidence against the hypothesis.
In this section, we will explore the importance of hypothesis testing in statistical analysis. We will discuss the steps involved in hypothesis testing, including formulating null and alternative hypotheses, selecting a significance level, conducting the test, and interpreting the results. We will also cover common types of hypothesis tests, such as t-tests and chi-square tests.
The main purpose of hypothesis testing is to determine whether there is enough evidence to support or reject a claim about a population parameter. A population parameter is a numerical characteristic of a population, such as the mean or proportion.
Hypothesis testing helps us make decisions based on evidence rather than intuition or personal beliefs. It provides a systematic and objective approach to analyzing data and drawing conclusions.
The hypothesis testing process involves several steps. First, we formulate a null hypothesis (H0) and an alternative hypothesis (Ha). The null hypothesis represents the status quo or the claim that we want to test. The alternative hypothesis represents the claim that we are trying to find evidence for.
Next, we select a significance level (alpha), which determines the threshold for rejecting the null hypothesis. The significance level is typically set at 0.05 or 0.01, but it can vary depending on the context and the desired level of confidence.
For example, let's say we want to test whether a new weight loss program is effective in helping people lose weight. Our null hypothesis (H0) could be that the weight loss program has no effect, while our alternative hypothesis (Ha) could be that the weight loss program is effective.
We would then collect data from a sample of individuals who have participated in the weight loss program. We would calculate a test statistic, such as a t-statistic or a chi-square statistic, and compare it to a critical value from the appropriate statistical distribution.
If the test statistic falls in the rejection region, which is determined by the significance level, we would reject the null hypothesis and conclude that there is enough evidence to support the alternative hypothesis. If the test statistic does not fall in the rejection region, we would fail to reject the null hypothesis.
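To make the procedure concrete, here is a sketch of the weight-loss example as a one-sample t-test in R. The weight changes are invented for illustration; negative values mean weight was lost:

```r
# Made-up weight changes (kg) after the program (illustration only)
weight_change <- c(-2.1, -3.4, 0.5, -1.8, -2.9, -0.7, -4.0, -1.2, -2.5, -3.1)

# H0: mean change is 0; Ha: mean change is below 0 (the program works)
result <- t.test(weight_change, mu = 0, alternative = "less")

result$statistic  # the t-statistic
result$p.value    # compare to the significance level, e.g. 0.05

if (result$p.value < 0.05) {
  message("Reject H0: evidence that the program reduces weight")
} else {
  message("Fail to reject H0")
}
```

Because nine of the ten made-up changes are negative, the t-statistic is strongly negative and the p-value falls well below 0.05, so the null hypothesis would be rejected for this sample.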
## Exercise
Imagine you are conducting a hypothesis test to determine whether a new drug is effective in reducing blood pressure.
1. What would be an appropriate null hypothesis for this test?
2. What would be an appropriate alternative hypothesis for this test?
### Solution
1. An appropriate null hypothesis for this test could be that the new drug has no effect on blood pressure.
2. An appropriate alternative hypothesis for this test could be that the new drug reduces blood pressure.
# Linear regression and its applications
Linear regression is a statistical technique used to model the relationship between a dependent variable and one or more independent variables. It is a powerful tool for understanding and predicting the behavior of a variable based on other variables.
In this section, we will explore the concept of linear regression and its applications in practical data analysis. We will learn how to fit a linear regression model, interpret the coefficients, and make predictions using the model. We will also discuss the assumptions and limitations of linear regression and how to assess the goodness of fit.
Linear regression can be used in a wide range of fields, including economics, finance, social sciences, and engineering. It is particularly useful when we want to understand the relationship between a continuous dependent variable and one or more continuous or categorical independent variables.
The basic idea behind linear regression is to find the best-fitting line that represents the relationship between the dependent variable (also known as the response variable) and the independent variables (also known as the predictor variables). This line is determined by estimating the coefficients of the regression equation.
The regression equation can be written as:
$$
Y = \beta_0 + \beta_1X_1 + \beta_2X_2 + ... + \beta_nX_n + \epsilon
$$
where:
- Y is the dependent variable
- $\beta_0$ is the intercept (the value of Y when all the independent variables are zero)
- $\beta_1, \beta_2, ..., \beta_n$ are the coefficients (representing the change in Y for a one-unit change in the corresponding independent variable)
- $X_1, X_2, ..., X_n$ are the independent variables
- $\epsilon$ is the error term (representing the random variation in Y that is not explained by the independent variables)
For example, let's say we want to understand the relationship between a person's height (dependent variable) and their weight and age (independent variables). We collect data on the heights, weights, and ages of a sample of individuals.
We can then use linear regression to estimate the coefficients of the regression equation and determine how much the height changes for a one-unit change in weight or age. We can also use the model to make predictions about a person's height based on their weight and age.
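As a purely numeric sketch of how the fitted equation is used for prediction, suppose the estimated coefficients turned out to be the made-up values below (they are not from real data):

```r
# Made-up estimated coefficients for height ~ weight + age (cm, kg, years)
b0 <- 100   # intercept
b1 <- 0.5   # change in height per extra kg
b2 <- 0.3   # change in height per extra year

# Predicted height for a person weighing 70 kg and aged 30 years
height_hat <- b0 + b1 * 70 + b2 * 30
height_hat  # 100 + 35 + 9 = 144
```

This is exactly the regression equation with $\epsilon$ set to zero: each coefficient scales its predictor, and the terms are summed.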
## Exercise
Suppose we have a dataset that includes information about the number of hours studied and the corresponding test scores of a group of students. We want to use linear regression to model the relationship between the number of hours studied and the test scores.
1. What would be the dependent variable in this case?
2. What would be the independent variable in this case?
### Solution
1. The dependent variable in this case would be the test scores.
2. The independent variable in this case would be the number of hours studied.
# Building regression models in R
R is a powerful programming language and software environment for statistical computing and graphics. It provides a wide range of functions and packages for building regression models and analyzing data.
In this section, we will learn how to build regression models in R using the `lm()` function. We will explore the syntax and options of the `lm()` function, fit different types of regression models, and interpret the output. We will also learn how to assess the goodness of fit and make predictions using the fitted models.
To build a regression model in R, we first need to load the necessary data into R. This can be done using the `read.csv()` or `read.table()` functions, depending on the format of the data file. Once the data is loaded, we can use the `lm()` function to fit the regression model.
The basic syntax of the `lm()` function is:
```R
model <- lm(formula, data)
```
where:
- `model` is the name of the model object
- `formula` is a formula specifying the regression model
- `data` is the name of the data frame containing the variables used in the model
The formula in the `lm()` function specifies the relationship between the dependent variable and the independent variables. It is written using the tilde (~) operator, with the dependent variable on the left side and the independent variables on the right side.
For example, if we want to build a linear regression model with height as the dependent variable and weight and age as the independent variables, the formula would be:
```R
formula <- height ~ weight + age
```
We can then fit the regression model using the `lm()` function:
```R
model <- lm(formula, data)
```
Once the model is fitted, we can use various functions and methods to analyze the model, such as `summary()`, `coef()`, `predict()`, and `plot()`.
For example, let's say we have a dataset called `students` that includes information about the number of hours studied and the corresponding test scores of a group of students. We want to build a regression model to predict the test scores based on the number of hours studied.
First, we load the data into R:
```R
students <- read.csv("students.csv")
```
Then, we fit the regression model:
```R
model <- lm(test_scores ~ hours_studied, data = students)
```
We can use the `summary()` function to get a summary of the model:
```R
summary(model)
```
This will provide information about the coefficients, standard errors, t-values, p-values, and other statistics of the model.
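A self-contained sketch (with made-up numbers rather than the `students.csv` file) showing how `coef()` and `predict()` are used after fitting:

```r
# Made-up study-time data (illustration only)
students <- data.frame(
  hours_studied = c(1, 2, 3, 4, 5, 6, 7, 8),
  test_scores   = c(52, 55, 61, 64, 70, 72, 79, 83)
)

model <- lm(test_scores ~ hours_studied, data = students)

coef(model)   # fitted intercept and slope

# Point prediction for a student who studied 9 hours
predict(model, newdata = data.frame(hours_studied = 9))

# The same prediction with a 95% prediction interval
predict(model, newdata = data.frame(hours_studied = 9), interval = "prediction")
```

The `newdata` argument must be a data frame whose column names match the predictors used in the formula.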
## Exercise
Suppose we have a dataset called `sales` that includes information about the advertising budget and the corresponding sales of a company. We want to build a regression model to predict the sales based on the advertising budget.
1. Load the `sales` dataset into R using the `read.csv()` function.
2. Fit a regression model to predict the sales based on the advertising budget.
3. Use the `summary()` function to get a summary of the model.
### Solution
```R
sales <- read.csv("sales.csv")
model <- lm(sales ~ advertising_budget, data = sales)
summary(model)
```
# ANOVA and its role in statistical analysis
Analysis of Variance (ANOVA) is a statistical technique used to compare the means of two or more groups. It is a powerful tool for determining whether there are any significant differences between the means and identifying which groups are significantly different from each other.
In this section, we will explore the concept of ANOVA and its role in statistical analysis. We will learn how to perform ANOVA in R using the `aov()` function, interpret the output, and make conclusions based on the results. We will also discuss the assumptions and limitations of ANOVA and how to assess the goodness of fit.
ANOVA is commonly used in experimental and observational studies to compare the means of different treatments or groups. It can be used to analyze data from a wide range of fields, including psychology, biology, medicine, and social sciences.
The basic idea behind ANOVA is to partition the total variation in the data into two components: the variation between groups and the variation within groups. The variation between groups represents the differences between the group means, while the variation within groups represents the random variation within each group.
The ANOVA test statistic, called the F-statistic, is calculated by comparing the variation between groups to the variation within groups. If the variation between groups is significantly larger than the variation within groups, it suggests that there are significant differences between the group means.
The null hypothesis in ANOVA is that all the group means are equal, while the alternative hypothesis is that at least one group mean is different. If the p-value associated with the F-statistic is less than the chosen significance level (usually 0.05), we reject the null hypothesis and conclude that there are significant differences between the group means.
For example, let's say we have a dataset that includes the test scores of students from three different schools. We want to determine whether there are any significant differences in the mean test scores between the schools.
We can perform ANOVA in R using the `aov()` function:
```R
model <- aov(test_scores ~ school, data = students)
```
We can use the `summary()` function to get a summary of the ANOVA model:
```R
summary(model)
```
This will provide information about the F-statistic, degrees of freedom, p-value, and other statistics of the model. We can then make conclusions based on the results, such as which groups are significantly different from each other.
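Here is a self-contained version of the three-schools comparison with made-up scores, so the whole pipeline can be run end to end:

```r
# Made-up test scores for three schools (5 students each, illustration only)
students <- data.frame(
  school      = factor(rep(c("A", "B", "C"), each = 5)),
  test_scores = c(70, 72, 68, 75, 71,   # school A
                  80, 83, 79, 85, 82,   # school B
                  60, 62, 65, 58, 61)   # school C
)

model <- aov(test_scores ~ school, data = students)
summary(model)   # F-statistic and p-value for the school effect
```

With group means this far apart relative to the within-school spread, the p-value comes out far below 0.05, so we would reject the null hypothesis of equal means.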
## Exercise
Suppose we have a dataset that includes the test scores of students from four different classes. We want to determine whether there are any significant differences in the mean test scores between the classes.
1. Fit an ANOVA model to compare the mean test scores between the classes.
2. Use the `summary()` function to get a summary of the ANOVA model.
### Solution
```R
model <- aov(test_scores ~ class, data = students)
summary(model)
```
# Exploring different types of ANOVA
ANOVA can be used to compare the means of two or more groups. However, there are different types of ANOVA depending on the design of the study and the number of factors involved.
In this section, we will explore the different types of ANOVA and when to use each type. We will learn about one-way ANOVA, two-way ANOVA, and factorial ANOVA, and how to perform them in R using the appropriate functions. We will also discuss the assumptions and limitations of each type of ANOVA.
One-way ANOVA is used when there is one categorical independent variable (also known as a factor) and one continuous dependent variable. It compares the means of two or more groups to determine whether there are any significant differences between the groups.
Two-way ANOVA is used when there are two categorical independent variables and one continuous dependent variable. It allows us to examine the main effects of each independent variable and the interaction effect between the two independent variables.
Factorial ANOVA is used when there are two or more categorical independent variables and one continuous dependent variable. It allows us to examine the main effects of each independent variable and the interaction effects between all possible combinations of the independent variables.
To perform one-way ANOVA in R, we can use the `aov()` function:
```R
model <- aov(dependent_variable ~ factor, data = dataset)
```
To perform two-way ANOVA in R, we can use the `aov()` function with two factors:
```R
model <- aov(dependent_variable ~ factor1 * factor2, data = dataset)
```
To perform factorial ANOVA in R, we can use the `lm()` function with an interaction term:
```R
model <- lm(dependent_variable ~ factor1 * factor2, data = dataset)
```
We can use the `summary()` function to get a summary of the ANOVA model and interpret the results.
For example, let's say we have a dataset that includes the test scores of students from different schools and different classes. We want to determine whether there are any significant differences in the mean test scores between the schools and the classes.
We can perform two-way ANOVA in R using the `aov()` function with two factors:
```R
model <- aov(test_scores ~ school * class, data = students)
```
We can use the `summary()` function to get a summary of the ANOVA model:
```R
summary(model)
```
This will provide information about the main effects of each factor and the interaction effect between the factors.
## Exercise
Suppose we have a dataset that includes the test scores of students from different schools and different genders. We want to determine whether there are any significant differences in the mean test scores between the schools, the genders, and the interaction between the two.
1. Fit a factorial ANOVA model to compare the mean test scores between the schools, the genders, and the interaction between the two.
2. Use the `summary()` function to get a summary of the ANOVA model.
### Solution
```R
model <- lm(test_scores ~ school * gender, data = students)
summary(model)
```
# Interpreting ANOVA results
Interpreting the results of ANOVA involves understanding the output of the ANOVA model and making conclusions based on the statistical tests and the effect sizes.
In this section, we will learn how to interpret the results of ANOVA in R. We will explore the output of the `summary()` function, including the F-statistic, degrees of freedom, p-values, and effect sizes. We will also discuss how to make conclusions based on the results and how to report the findings.
The F-statistic in ANOVA measures the ratio of the variation between groups to the variation within groups. A larger F-statistic indicates a larger difference between the group means and a smaller within-group variation, which suggests that there are significant differences between the groups.
The p-value associated with the F-statistic measures the probability of obtaining the observed F-statistic, or a more extreme value, assuming that the null hypothesis is true. If the p-value is less than the chosen significance level (usually 0.05), we reject the null hypothesis and conclude that there are significant differences between the group means.
Effect sizes in ANOVA measure the magnitude of the differences between the group means. They provide a standardized measure of the practical significance of the differences and allow us to compare the effect sizes across different studies.
One commonly used effect size in ANOVA is eta-squared (η²), which measures the proportion of the total variation in the dependent variable that is accounted for by the independent variable. It ranges from 0 to 1, with larger values indicating a larger effect.
Another commonly used effect size in ANOVA is partial eta-squared (η²p), which measures the proportion of the variation in the dependent variable that is accounted for by the independent variable, taking into account the other independent variables in the model. It is a more conservative measure of the effect size.
For example, let's say we have performed a one-way ANOVA to compare the mean test scores between three different schools. The output of the ANOVA model is as follows:
```
Df Sum Sq Mean Sq F value Pr(>F)
school 2 123.4 61.7 12.34 0.00012 ***
Residuals 97 567.8 5.9
```
The F-statistic is 12.34, and the p-value is 0.00012. This means that there are significant differences between the school means, as the p-value is less than 0.05.
The effect size, eta-squared (η²), is 123.4 / (123.4 + 567.8) ≈ 0.178, which indicates that the school variable accounts for about 17.8% of the total variation in the test scores.
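The effect-size arithmetic can be reproduced directly from the sums of squares in the ANOVA table above:

```r
# Sums of squares read off the ANOVA table above
ss_school    <- 123.4   # between-groups (school)
ss_residuals <- 567.8   # within-groups (residuals)

eta_squared <- ss_school / (ss_school + ss_residuals)
eta_squared   # approximately 0.178
```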
## Exercise
Suppose we have performed a two-way ANOVA to compare the mean test scores between different schools and different classes. The output of the ANOVA model is as follows:
```
Df Sum Sq Mean Sq F value Pr(>F)
school 2 123.4 61.7 12.34 0.00012 ***
class 3 345.6 115.2 23.04 0.00001 ***
Residuals 94 456.8 4.9
```
1. Are there any significant differences between the school means?
2. Are there any significant differences between the class means?
3. What is the effect size of the school variable?
4. What is the effect size of the class variable?
### Solution
1. Yes, there are significant differences between the school means, as the p-value is less than 0.05.
2. Yes, there are significant differences between the class means, as the p-value is less than 0.05.
3. The effect size of the school variable is 123.4 / (123.4 + 345.6 + 456.8) ≈ 0.133.
4. The effect size of the class variable is 345.6 / (123.4 + 345.6 + 456.8) ≈ 0.373.
# Assumptions and limitations in regression and ANOVA
Regression and ANOVA are powerful statistical techniques, but they come with certain assumptions and limitations that need to be considered when interpreting the results.
In this section, we will discuss the assumptions and limitations of regression and ANOVA. We will learn about the assumptions of linearity, independence, homoscedasticity, and normality in regression. We will also discuss the assumptions of independence, homogeneity of variances, and normality in ANOVA.
The assumptions of regression and ANOVA are important because violating these assumptions can lead to biased estimates, incorrect inferences, and invalid conclusions. It is therefore crucial to assess the assumptions and take appropriate measures to address any violations.
The assumptions of regression include:
- Linearity: The relationship between the dependent variable and the independent variables is linear.
- Independence: The observations are independent of each other.
- Homoscedasticity: The variance of the errors is constant across all levels of the independent variables.
- Normality: The errors are normally distributed with mean zero and constant variance.
The assumptions of ANOVA include:
- Independence: The observations are independent of each other.
- Homogeneity of variances: The variances of the dependent variable are equal across all levels of the independent variable(s).
- Normality: The dependent variable is normally distributed within each group.
Violations of these assumptions can be detected using diagnostic plots, such as scatterplots, residual plots, and Q-Q plots. If violations are detected, appropriate measures, such as data transformations or non-parametric tests, can be taken to address the violations.
For example, let's say we have performed a linear regression to model the relationship between a person's height (dependent variable) and their weight and age (independent variables). The diagnostic plots show a non-linear relationship between the height and the independent variables.
This violates the assumption of linearity in regression. To address this violation, we can consider transforming the variables or using non-linear regression techniques, such as polynomial regression or generalized additive models.
## Exercise
Suppose we have performed an ANOVA to compare the mean test scores between different schools. The diagnostic plots show heterogeneity of variances, with larger variances in some schools compared to others.
1. What assumption of ANOVA is violated in this case?
2. What measures can be taken to address this violation?
### Solution
1. The assumption of homogeneity of variances is violated in this case.
2. To address this violation, we can consider using a transformation, such as a log transformation or a square root transformation, or using non-parametric tests, such as the Kruskal-Wallis test.
# Model diagnostics and improvement techniques
Model diagnostics and improvement techniques are important steps in the regression and ANOVA analysis. They help us assess the goodness of fit of the models, identify any problems or violations of assumptions, and make improvements to the models.
In this section, we will learn about model diagnostics and improvement techniques in regression and ANOVA. We will explore diagnostic plots, such as scatterplots, residual plots, and Q-Q plots, and learn how to interpret them. We will also discuss techniques for model improvement, such as variable transformations, outlier removal, and model selection.
Diagnostic plots are graphical tools that allow us to visually inspect the residuals of the models. Residuals are the differences between the observed values and the predicted values of the dependent variable. By examining the patterns in the residuals, we can identify any problems or violations of assumptions in the models.
Model improvement techniques involve making changes to the models based on the diagnostic plots and other diagnostic measures. This can include transforming the variables, removing outliers, adding or removing variables, or using different modeling techniques.
Scatterplots are useful for visualizing the relationship between the dependent variable and each independent variable. They can help us identify any non-linear relationships or outliers in the data.
Residual plots are used to examine the patterns in the residuals. They can help us identify violations of assumptions, such as non-linearity, heteroscedasticity, or outliers.
Q-Q plots are used to assess the normality assumption of the models. They compare the observed residuals to the expected residuals under the assumption of normality. If the observed residuals deviate significantly from the expected residuals, it suggests a violation of the normality assumption.
Model improvement techniques can include transforming the variables using mathematical functions, such as logarithmic or power transformations, to achieve linearity or equal variances. Outliers can be identified and removed based on their influence on the models. Model selection techniques, such as stepwise regression or information criteria, can be used to select the most appropriate variables for the models.
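As a small illustration of how a transformation can achieve linearity, the sketch below uses a hypothetical power-law relationship: on the raw scale the Pearson correlation is noticeably below 1, while on the log-log scale it becomes exactly linear.

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical curved relationship: y = 2 * x^3
x = [1, 2, 3, 4, 5, 6, 7, 8]
y = [2 * v ** 3 for v in x]

r_raw = pearson_r(x, y)                       # linear fit leaves curvature
r_log = pearson_r([math.log(v) for v in x],
                  [math.log(v) for v in y])   # log-log relation is linear

print(round(r_raw, 3), round(r_log, 3))
```

In a real diagnostic workflow one would confirm the improvement by re-examining the residual plot after the transformation.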
For example, let's say we have performed a linear regression to model the relationship between a person's height (dependent variable) and their weight and age (independent variables). The scatterplot shows a non-linear relationship between the height and the weight variable, with a curved pattern.
To address this non-linearity, we can consider transforming the weight variable using a logarithmic transformation. We can then fit the regression model with the transformed variable and examine the diagnostic plots to assess the goodness of fit and make further improvements if needed.
## Exercise
Suppose we have performed an ANOVA to compare the mean test scores between different schools. The residual plot shows a funnel-shaped pattern, indicating heteroscedasticity.
1. What assumption of ANOVA is violated in this case?
2. What measures can be taken to address this violation?
### Solution
1. The assumption of homoscedasticity is violated in this case.
2. To address this violation, we can consider transforming the dependent variable using a mathematical function, such as a logarithmic or square root transformation, or using weighted least squares regression.
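To see why a log transformation addresses this kind of heteroscedasticity, consider hypothetical groups with multiplicative (proportional) error, so that the spread grows with the group mean. The sketch below compares the ratio of the largest to the smallest group variance before and after the transformation.

```python
import math

def variance(xs):
    """Sample variance (n-1 denominator)."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

# Hypothetical groups with multiplicative error: each value is
# mean * (1 + e), so the raw variance scales with mean^2.
errors = [-0.2, -0.1, 0.0, 0.1, 0.2]
groups = {mean: [mean * (1 + e) for e in errors] for mean in (10, 50, 250)}

raw_vars = [variance(v) for v in groups.values()]
log_vars = [variance([math.log(x) for x in v]) for v in groups.values()]

ratio_raw = max(raw_vars) / min(raw_vars)   # large: variance tied to the mean
ratio_log = max(log_vars) / min(log_vars)   # ~1: log transform stabilizes it
print(round(ratio_raw), round(ratio_log, 3))  # 625 1.0
```

The same logic motivates weighted least squares: instead of transforming the response, observations are down-weighted in proportion to their error variance.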
# Comparing and contrasting regression and ANOVA
Regression and ANOVA are both statistical techniques used to analyze data and make inferences about population parameters. While they have similarities, they also have distinct differences in terms of their goals, assumptions, and applications.
In this section, we will compare and contrast regression and ANOVA. We will learn about their goals, assumptions, and applications, and when to use each technique. We will also discuss the relationship between regression and ANOVA and how they can be used together in certain situations.
Regression is primarily used to model the relationship between a dependent variable and one or more independent variables. It aims to understand and predict the behavior of the dependent variable based on the independent variables. Regression can be used to analyze both continuous and categorical dependent variables.
ANOVA, on the other hand, is primarily used to compare the means of two or more groups. It aims to determine whether there are any significant differences between the group means and identify which groups are significantly different from each other. ANOVA is typically used with a categorical independent variable and a continuous dependent variable.
Regression and ANOVA have different assumptions and limitations. Regression assumes a linear relationship between the dependent variable and the independent variables, while ANOVA assumes independence, homogeneity of variances, and normality. Violations of these assumptions can lead to biased estimates and incorrect inferences.
Regression and ANOVA can be used together in certain situations. For example, if we have a categorical independent variable with more than two levels, we can use regression with dummy variables to perform ANOVA. This allows us to examine the main effects of the independent variable and make pairwise comparisons between the levels.
In summary, regression and ANOVA are both valuable techniques in statistical analysis. Regression is used to model the relationship between a dependent variable and one or more independent variables, while ANOVA is used to compare the means of two or more groups. Understanding their similarities and differences can help us choose the appropriate technique for our analysis.
For example, let's say we have a dataset that includes the test scores of students from different schools and different classes. We want to determine whether there are any significant differences in the mean test scores between the schools and the classes, and also examine the main effects of the schools and the classes.
In this case, we can use a two-way ANOVA to compare the mean test scores between the schools and the classes. This will allow us to determine whether there are any significant differences between the groups and make pairwise comparisons between the levels.
We can also use regression with dummy variables to perform the same analysis. This will allow us to examine the main effects of the schools and the classes and make pairwise comparisons between the levels, similar to ANOVA.
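The equivalence can be checked numerically: with dummy coding, the OLS fit reproduces the group means, so the regression F statistic equals the one-way ANOVA F statistic. A minimal Python sketch with hypothetical test scores from three schools:

```python
def one_way_anova_f(groups):
    """Classical one-way ANOVA F = MS_between / MS_within."""
    all_vals = [x for g in groups for x in g]
    grand = sum(all_vals) / len(all_vals)
    ss_between = sum(len(g) * ((sum(g) / len(g)) - grand) ** 2 for g in groups)
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    df_b, df_w = len(groups) - 1, len(all_vals) - len(groups)
    return (ss_between / df_b) / (ss_within / df_w)

def regression_f(groups):
    """F statistic of OLS with k-1 dummy variables, via R^2.

    Dummy-coded OLS fits each group's mean exactly, so the fitted
    values are simply the group means.
    """
    all_vals = [x for g in groups for x in g]
    grand = sum(all_vals) / len(all_vals)
    preds = [sum(g) / len(g) for g in groups for _ in g]
    ss_tot = sum((x - grand) ** 2 for x in all_vals)
    ss_res = sum((x - p) ** 2 for x, p in zip(all_vals, preds))
    r2 = 1 - ss_res / ss_tot
    k, n = len(groups), len(all_vals)
    return (r2 / (k - 1)) / ((1 - r2) / (n - k))

schools = [[78, 82, 85, 80], [70, 74, 69, 73], [88, 90, 86, 91]]
print(round(one_way_anova_f(schools), 4) == round(regression_f(schools), 4))  # True
```

This identity is what allows the main effects and pairwise comparisons from regression with dummy variables to mirror the ANOVA results.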
## Exercise
Suppose we have a dataset that includes the test scores of students from different schools and different genders. We want to determine whether there are any significant differences in the mean test scores between the schools and the genders, and also examine the main effects of the schools and the genders.
1. What technique(s) can be used to perform this analysis?
2. How can regression and ANOVA be used together in this case?
### Solution
1. We can use a two-way ANOVA to compare the mean test scores between the schools and the genders. We can also use regression with dummy variables to perform the same analysis.
2. Regression with dummy variables can be used to examine the main effects of the schools and the genders and make pairwise comparisons between the levels, similar to ANOVA.
# Real-world examples and case studies
Example 1: Predicting Housing Prices
In this example, we will use regression to predict housing prices based on various features such as square footage, number of bedrooms, and location. We will use a dataset that includes information about different houses and their corresponding sale prices.
First, we will load the dataset into R and explore its structure and variables. Then, we will preprocess the data by handling missing values and transforming variables if necessary. Next, we will split the data into training and testing sets.
After preprocessing, we will build a regression model using the training data. We will use techniques such as variable selection and model evaluation to create an accurate and interpretable model. Finally, we will use the model to predict housing prices for the testing data and assess its performance.
Example 2: Comparing Treatment Effects
In this example, we will use ANOVA to compare the effects of different treatments on a specific outcome. We will use a dataset that includes information about patients who received different treatments for a medical condition.
First, we will load the dataset into R and explore its structure and variables. Then, we will preprocess the data by handling missing values and transforming variables if necessary. Next, we will conduct an ANOVA analysis to compare the mean outcome values across the different treatment groups.
After conducting the ANOVA analysis, we will interpret the results and determine whether there are any significant differences in the mean outcome values between the treatment groups. We will also perform post-hoc tests to make pairwise comparisons between the groups and identify which groups are significantly different from each other.
## Exercise
1. Think of a real-world scenario where regression or ANOVA can be applied. Describe the scenario briefly.
2. Explain how regression or ANOVA can be used to analyze the data in your scenario.
### Solution
1. Scenario: A company wants to analyze the impact of different marketing strategies on sales. They have collected data on sales revenue and various marketing variables such as advertising expenditure, social media engagement, and promotional offers.
2. Regression can be used to model the relationship between sales revenue (dependent variable) and the marketing variables (independent variables). This will allow the company to understand how each marketing variable influences sales and make data-driven decisions to optimize their marketing strategies.
ANOVA, in turn, can be used to compare the mean sales revenue across the different marketing strategies (treatment groups). This will help the company determine whether there are any significant differences in the effectiveness of the marketing strategies and identify which strategies are more effective in driving sales.
Genome Biology
BitPhylogeny: a probabilistic framework for reconstructing intra-tumor phylogenies
Ke Yuan, Thomas Sakoparnig, Florian Markowetz & Niko Beerenwinkel
Genome Biology volume 16, Article number: 36 (2015)
Cancer has long been understood as a somatic evolutionary process, but many details of tumor progression remain elusive. Here, we present BitPhylogeny, a probabilistic framework to reconstruct intra-tumor evolutionary pathways. Using a full Bayesian approach, we jointly estimate the number and composition of clones in the sample as well as the most likely tree connecting them. We validate our approach in the controlled setting of a simulation study and compare it against several competing methods. In two case studies, we demonstrate how BitPhylogeny reconstructs tumor phylogenies from methylation patterns in colon cancer and from single-cell exomes in myeloproliferative neoplasm.
Cancer is a somatic evolutionary process. Tumors are complex mixtures of heterogeneous subclones, and the genetic and epigenetic diversity within tumors can be a major cause of drug resistance, treatment failure, and tumor relapse [1,2]. Profiles of somatic mutations or DNA methylation can reveal the structure of the tumor cell population and contain traces of its past proliferative history [3-8]. Tumor subclones often display a cellular differentiation hierarchy inherited from their tissue of origin, and epigenetic changes are particularly informative about these relationships [9]. While tumor heterogeneity has been observed widely [10], an in-depth understanding of the underlying evolutionary and (perturbed) differentiation processes is lagging behind since phylogenetic trees describing the population structure of tumors are typically constructed manually [6].
Rigorous and accurate phylogenetic methods to infer automatically tumor 'life histories' and differentiation hierarchies from molecular profiles could have a profound impact on cancer research. For example, such methods would make it possible to infer early driver events on a large scale, to test whether evolutionary trajectories are predictive of clinical outcomes, and to compare the mode and speed of evolution between primary and metastatic tumors. Many clinical studies are currently measuring cancer heterogeneity, and robust intra-tumor phylogenetic methods are essential to interpret these data and to allow for reliable conclusions.
The intra-tumor phylogeny problem
Single-cell studies offer the most direct evidence of tumor heterogeneity, but are often limited to either a small number of genetic markers [11] and genes [12] or a small number of sequenced cells [13] with generally high error rates and high allelic dropout rates [14,15]. Thus, today, the main data source for evolutionary inference is bulk sequencing of mixed tumor samples [3,5,6], which is also the most readily available type of data for clinical applications of evolutionary methods in translational medicine. Whether obtained from single-cell or bulk sequencing, we assume in the following that the sequencing reads provide a statistical sample of the genomes of the underlying cell population. The intra-tumor phylogeny problem is to reconstruct the population structure of a tumor from these data. The problem consists of two tasks, namely (i) identifying the tumor subclones and (ii) estimating their evolutionary relationships (Figure 1).
The intra-tumor phylogeny problem. (A) Molecular profiles obtained from a bulk sequenced heterogeneous tumor are shown. They consist, in this example, of three clones (red squares, blue triangles, and green discs) and normal cells (small grey discs). The intra-tumor phylogeny problem is to infer the population structure of the tumor, i.e., to identify the different clones and to elucidate how they relate to each other. (B) Classical phylogenetic trees and hierarchical clustering methods place the observed molecular profiles at the leaf nodes of a tree, while the inner nodes represent unobserved common ancestors. Here, leaf nodes are defined as the nodes without any child nodes and inner nodes as the nodes that have at least one child node. (C) Unlike classical phylogenetic tree models, BitPhylogeny clusters molecular profiles to identify subclones and places them as both inner (blue triangle) and leaf nodes (red square, green disc) of a tree describing the hierarchy of the tumor cell population.
Here, we present a unified approach to the intra-tumor phylogeny problem, called BitPhylogeny, which addresses both subproblems simultaneously. Instead of sequentially clustering and tree building, we combine both steps into a single model. Our unified model jointly solves both parts of the intra-tumor phylogeny problem and automatically (i) estimates the number of clones and (ii) places them at the leaves and inner nodes of a phylogenetic tree that reflects their evolutionary relationships. Our approach is based on non-parametric Bayesian mixture modeling using a tree-structured stick-breaking process (TSSB) [16], similar to a previous model for somatic mutations [17].
Our framework is very flexible and can be adapted to the specific requirements of the data, as we demonstrate in two case studies, one using methylation patterns and the other whole-exome single-nucleotide variant (SNV) patterns as markers of evolution. In the first case study, we focus on patterns of CpG methylation, referred to here as methyltypes, observed in read data obtained from bisulfite sequencing of a mixed tumor sample. DNA methylation is a somatic change that accumulates in a clock-like fashion during aging [18]. It is a particularly precise marker of cell fate because the error rate is several orders of magnitude higher than for nucleotide substitutions [19]. In neutral genomic regions, the number of somatic errors increases linearly with the number of cell divisions [9]. Neutral methylation patterns thus act as a molecular clock [18]. In the context of cancer, such molecular clocks have been used to study intra-tumor evolution in a wide range of cancers, including lymphomas [20], brain cancer [21], prostate cancer [22] and colon cancer [23-25]. Here, we use data from Sottoriva et al. [9], who collected methylation profiles of 40 glands from five colorectal tumors. For each tumor, two regions were sectioned from opposite sides of the tumor, and in each region, three to five glands were microdissected. The authors analyzed these data by a combination of spatial agent modeling and statistical analysis. They presented tumor-specific lineage trees, in which the leaf nodes (i.e., the nodes without children) are the individual methylation patterns, but they did not identify clones. Our method differs from this approach in two important aspects: BitPhylogeny identifies clonal subpopulations from a mixture model and it organizes them into leaf and inner nodes of an evolutionary tree, which directly represents the differentiation hierarchy of the tissue (Figure 1).
In the second case study, we applied BitPhylogeny to SNVs called from whole-exome sequencing of single cells. The mutational patterns from single cells, referred to as genotypes, are used to identify clones. Therefore, we directly define a clone as a set of cells sharing the same genotype. This is a key difference to using allele frequencies of SNVs from bulk-sequenced tumor samples as we discuss below. We use data from Hou et al. [14], which includes SNV genotypes of 58 single cancer cells derived from one tumor of a patient with myeloproliferative neoplasm. In this case, the phylogeny was built based on the accumulation of mutations in the entire exome, demonstrating the scalability of BitPhylogeny to large genomic regions.
The intra-tumor phylogeny problem involves subclone identification and phylogeny reconstruction from noisy observations. For short-read DNA sequencing data obtained from mixed samples, inferring subclonal structure alone has been addressed by first calling SNVs and then grouping the SNVs into clones according to their estimated frequencies using mixture models. PyClone [26] and THetA [27] both follow this strategy and also correct SNV frequencies for copy number variations. The outcome of these analyses is the number of clones and their frequencies, but the evolutionary history of the clones remains unknown, such that some studies relied on visual inspection and manual placement of the clones in a phylogenetic tree [6].
A common approach to phylogenetic inference in human genetics and also in cancer genomics is to use a perfect phylogeny for the evolutionary relationships among haplotypes, which assumes infinite sites and no recombination. Efficient algorithms exist for computing (approximate) perfect phylogenies [28-31]. In principle, these methods can also be used for reconstructing intra-tumor phylogenies. However, perfect phylogeny algorithms typically lack a probabilistic interpretation and they assume a given and fixed number of haplotypes. For tumor phylogenies, these assumptions need to be relaxed. In a recent attempt to achieve this goal, SNV data were used iteratively to construct perfect phylogenies, identify subclones and improve SNV calling [32]. Other recent applications of classical phylogenetic inference methods [33] place mutation patterns [14,15], copy-number aberrations [34] or methylation profiles [9] at the leaves of a tree, without clustering them into clones or placing them at inner nodes (i.e., the nodes with children).
Because tumor subclones may also occur at inner nodes of phylogenetic trees [2,6], the applicability of classical phylogenetic methods is limited. A recent approach, called TrAp [35], addresses subclonal deconvolution and phylogenetic inference jointly by solving a highly constrained matrix inversion problem. It takes as input the population frequencies of a limited number of aberrations (up to 25) and deconvolves them in a linear combination of subclones that are connected in a tree. In principle, TrAp could be used as a follow-up step to clustering SNVs with an approach like PyClone. However, the clustering and tree-building steps are not independent and decoupling them may result in suboptimal performance if initially established clusters need to be spread out over different parts of the tree [6]. Additionally, TrAp cannot be easily applied to methylation data, as we use here, because it does not take back-mutations into account.
A first unified approach to combine clustering and tree inference with clones at inner nodes is PhyloSub [17], which is based on non-parametric Bayesian mixture modeling using a TSSB [16]. PhyloSub uses SNV frequency data to inform the tree topology and is thus subject to the limitations that this type of data exhibits. For example, clusters with few observations, i.e., small subclones, are difficult to identify and their placement in the tree topology is highly uncertain if only frequency constraints are used for tree construction. For example, in Nik-Zainal et al. [6], one of the clusters was spread over three branches of the tree after manual construction of the tree and refinement by incorporating information from the few reads which covered more than one SNV.
The inherent limitation of using SNV data for phylogenetic inference is that the phasing of mutations relies only on their frequencies. SNV frequencies are difficult to estimate from noisy sequencing data, especially if the coverage is low, and in general different clones may have identical or very similar frequencies. The phasing of SNVs can be improved with longer reads, such that multiple co-occurring SNVs are observed on the same read. For example, for RNA viruses, which display much more genetic diversity than tumors, overlapping reads have been used successfully to reconstruct long-range haplotypes from mixed samples [36]. With constantly improving sequencing technologies, including increasing read length and single-cell approaches, we expect that more data with these characteristics will be widely available in the near future.
For methylation patterns in cancer, computational modeling dates back at least to Siegmund et al. [37], who assumed that unique methylation patterns are generated from a hidden phylogenetic tree. Such trees were reconstructed from data using approximate Bayesian computation. A recently proposed strategy [38] uses a given phylogenetic tree structure to estimate the true methylation patterns under the assumption that observed methylation patterns are noisy and sometimes completely missing.
In addition to SNV and methylation data, gene expression profiles have been used to reconstruct phylogenies of tumor samples [39,40]. In particular, Desper et al. [39] regard tumor samples as representing different stages in the progression of the disease, and reconstruct the history of these stages using phylogenetic approaches. Riester et al. [40] built phylogenies based on gene expression profiles from cancer subtypes. Along this line of research, attempts have been made to cluster samples together and to build phylogenies based on the clusters. Hierarchical clustering [41], the Dirichlet process [42] and matrix factorization [43] have been proposed for clustering. To arrange clusters into a tree, all of these methods constructed minimum spanning trees.
At the single-cell level, Pennington et al. [44] employed fluorescent in situ hybridization (FISH) data for phylogenetic inference. Their tree-building method allows internal (so-called Steiner) nodes. Finally, copy number data from single cells were employed to construct phylogenies using the neighbor-joining method [45]. Both approaches built phylogenetic trees for single cells, whereas here, we focus on building such trees for clones, i.e., for collections of cells.
We have developed BitPhylogeny (Bayesian inference of intra-tumor phylogenies), an integrated approach to address the intra-tumor phylogeny problem. The statistical model is based on simultaneously assigning markers of evolution to clones, which are represented as both inner nodes and leaves of a phylogenetic tree, and on learning the topology and the parameters of the tree. We use a TSSB to construct a prior distribution over trees and a Markov chain Monte Carlo (MCMC) inference scheme for sampling from the joint posterior. The relationships between parent and child nodes are derived from a classical phylogeny model. The model is formally defined in Materials and methods and depicted as a graphical model in Figure 2. In the following, we benchmark BitPhylogeny in simulation studies and discuss its application to colon cancer methylation data.
BitPhylogeny as a graphical model. Each of a total of N observed marker patterns is denoted by x_n (shaded node). The clone membership of each observation is denoted by ε_n and generated by a tree-structured stick-breaking process with variables ν_ε (clone size) and φ_ε (branching probability), and parameters λ, α_0 and γ. For each clone, t_ε and θ_ε are the branch length and clone parameter, respectively, which determine the local probability distribution of observing a marker pattern from this clone. The function pa(·) denotes the parent of each clone in the tree, except for the root clone ∅. The transition probabilities p(θ_ε | θ_pa(ε)) have hyperparameters β_m, β_u, Λ and μ.
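The TSSB prior can be understood generatively: each observation walks down the tree, stopping at a node with probability given by that node's stick ν, and otherwise descending into a child chosen by a second stick-breaking over branches. Below is a highly simplified Python sketch of drawing clone assignments from such a prior; it uses a fixed stopping concentration (ignoring the depth-decay parameter λ) and omits all likelihood terms, so it is an illustration of the idea, not the authors' implementation.

```python
import random

random.seed(1)
ALPHA, GAMMA = 1.0, 0.5  # stopping / branching concentrations (illustrative)

class Node:
    """A lazily instantiated node of the stick-breaking tree."""
    def __init__(self, depth):
        self.depth = depth
        self.nu = random.betavariate(1, ALPHA)  # prob. of stopping here
        self.psis = []                          # branch sticks, grown on demand
        self.children = []

    def pick_child(self):
        # Stick-breaking over a potentially unbounded number of children:
        # instantiate new sticks until the drawn uniform falls in one.
        u, cum, rem, i = random.random(), 0.0, 1.0, 0
        while True:
            if i == len(self.psis):
                self.psis.append(random.betavariate(1, GAMMA))
                self.children.append(Node(self.depth + 1))
            take = self.psis[i] * rem
            if u < cum + take:
                return self.children[i]
            cum, rem, i = cum + take, rem - take, i + 1

def assign(root):
    # Walk down from the root; at each node stop with probability nu.
    node = root
    while random.random() >= node.nu:
        node = node.pick_child()
    return node

root = Node(0)
assignments = [assign(root) for _ in range(200)]
print(len(assignments), max(n.depth for n in assignments))
```

Note how clones can sit at inner nodes as well as at leaves: an observation may stop at a node that other observations pass through on their way deeper into the tree.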
Validation in simulation studies
We have assessed the performance of BitPhylogeny in the controlled settings of five simulation studies. Based on the review article by Navin and Hicks [46], we chose the simulated trees to be representative of different modes of evolution (Figure 3). One tree reflects monoclonal evolution, three trees are based on a polyclonal mode of evolution, and one tree assumes a mutator phenotype.
Simulation study with five trees (A-E). (First column) Sankey plots of the trees used for simulations. For each node, the width of the in-edge is proportional to the clone frequency. The colors denote different layers of the tree (tree depths). Plots were produced with the R package riverplot. (Second column) Performance of clustering methods for the simulation studies at four different noise levels; the box plots summarize performance measures over 10,000 MCMC samples. The MPEAR-summarized predictions (marked as BitPhylogeny) outperform the baseline competitors in all data sets with noise. (Third column) Comparison in terms of the summary statistics maximum tree depth and number of clones. For hierarchical clustering and k-centroids, the trees are constructed as minimum spanning trees from estimated clonal methyltypes.
For monoclonal evolution, we constructed a tree that has only two clones. The root node represents healthy cells and the child clone of the root represents a monoclonal tumor (Figure 3A). A monoclonal hierarchy has been predicted to be the dominant mode of evolution in a JAK2-negative myeloproliferative neoplasm single-cell sequencing study [14]. For the polyclonal trees, we first constructed a tree similar to the manually reconstructed tree presented in [5] (polyclonal-low, Figure 3B). This tree was originally based on whole-genome sequencing data with an average coverage of only 188 and hence limited power to detect small clones [5]. Since many more small clones are expected in tumors [47], we added small clones to the tree. Specifically, we added another tree level with five clones; three had relative frequency 0.03 and the other two had 0.02 (polyclonal-medium, Figure 3C). This tree was further extended to contain 18 clones, most of them having a very low frequency (around 0.02) (polyclonal-high, Figure 3D). Finally, the mutator phenotype mode of evolution is represented by a star-like tree with many leaf nodes (Figure 3E). Evidence for this model, which is driven by high mutation rates, has been found in multiple neoplastic tissues [48].
For each of the five trees, we simulated eight marker sites, sampled 2,000 observations, and added observation errors by flipping every site with a fixed error probability of 0, 0.01, 0.02 or 0.05 in four separate simulation runs. We subsequently applied BitPhylogeny and evaluated its performance for each MCMC iteration separately. The first 30,000 iterations were discarded as a burn-in phase, and the next 50,000 samples were collected and thinned out by a factor of 5. The following analyses are based on the resulting 10,000 samples.
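The noise model described above is straightforward to reproduce. The sketch below simulates observations from hypothetical clone methyltypes (not the trees of Figure 3) and flips each of the eight marker sites independently with a fixed error probability.

```python
import random

random.seed(0)
N_SITES, N_OBS, FLIP_P = 8, 2000, 0.02

# Hypothetical clone methyltypes and their relative frequencies
clones = {
    (0, 0, 0, 0, 0, 0, 0, 0): 0.5,
    (1, 1, 0, 0, 0, 0, 0, 0): 0.3,
    (1, 1, 1, 1, 0, 0, 0, 0): 0.2,
}
prototypes, weights = list(clones), list(clones.values())

# Sample clean observations, then flip each site with probability FLIP_P
clean = [random.choices(prototypes, weights)[0] for _ in range(N_OBS)]
noisy = [tuple(b ^ (random.random() < FLIP_P) for b in obs) for obs in clean]

flips = sum(a != b for c, m in zip(clean, noisy) for a, b in zip(c, m))
rate = flips / (N_OBS * N_SITES)
print(round(rate, 3))  # empirical flip rate, close to FLIP_P
```

The inference task is then to recover the prototypes, their frequencies, and the tree relating them from the noisy observations alone.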
Clustering performance
The clustering performance of BitPhylogeny was compared to that of two baseline methods: k-centroids and hierarchical clustering with model selection using silhouette scores (see Materials and methods). We used the v-measure to compare the clustering performance of the three methods (see Materials and methods). The BitPhylogeny MCMC samples (called the trace) are summarized by the MPEAR approach, which generates an optimal clustering configuration from the samples (see Materials and methods). For perfect data without errors, all three methods achieved perfect performance for the monoclonal tree, and BitPhylogeny achieved almost perfect clustering for the mutator tree (Figure 3A,E, second column). For the three polyclonal trees (low, medium and high), the two baseline methods achieve perfect clustering, while BitPhylogeny comes very close to perfect performance with a v-measure of around 0.9 (Figure 3B,C,D, second column). This is because some of the clones have very similar marker profiles (e.g., a single differing marker site) and highly asymmetric sizes; in such cases, BitPhylogeny tends to underestimate the number of clusters, since a merged clone is preferred over separate clones. Importantly, however, the advantage of using BitPhylogeny is very clear in the presence of noise: the MCMC traces perform similarly to k-centroids and hierarchical clustering, and the MPEAR-summarized results outperform the baseline methods at all noise levels.
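The v-measure is the harmonic mean of homogeneity (each predicted cluster contains members of a single true clone) and completeness (each true clone is assigned to a single cluster). A minimal self-contained implementation, equivalent in spirit to `sklearn.metrics.v_measure_score`, with hypothetical labelings:

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy (natural log) of a label sequence."""
    n = len(labels)
    return -sum((c / n) * math.log(c / n) for c in Counter(labels).values())

def cond_entropy(a, b):
    """Conditional entropy H(A | B) of label sequence a given b."""
    by_b = {}
    for x, y in zip(a, b):
        by_b.setdefault(y, []).append(x)
    n = len(a)
    return sum((len(g) / n) * entropy(g) for g in by_b.values())

def v_measure(true_labels, pred_labels):
    h_c, h_k = entropy(true_labels), entropy(pred_labels)
    hom = 1.0 if h_c == 0 else 1 - cond_entropy(true_labels, pred_labels) / h_c
    com = 1.0 if h_k == 0 else 1 - cond_entropy(pred_labels, true_labels) / h_k
    return 0.0 if hom + com == 0 else 2 * hom * com / (hom + com)

truth = [0, 0, 0, 1, 1, 1, 2, 2]
print(v_measure(truth, truth))                               # 1.0 (perfect)
print(round(v_measure(truth, [0, 0, 0, 1, 1, 1, 1, 1]), 3))  # 0.759 (two clones merged)
```

Merging two true clones leaves completeness perfect but reduces homogeneity, which is exactly the failure mode described above when clones with very similar marker profiles are collapsed.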
Tree reconstruction performance
We compared BitPhylogeny to minimum spanning trees constructed from the clustering results obtained by the two baseline clustering methods. The third column of Figure 3 shows tree depth and number of nodes as summary statistics for all trees at all noise levels in all simulation studies. Overall, the trees reconstructed from the baseline clustering methods have too many clusters and are much deeper than the true tree (Additional file 1: Figure S2).
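The baseline trees are minimum spanning trees over the estimated clonal methyltypes. A minimal sketch of this construction, using Prim's algorithm over Hamming distances and hypothetical four-site methyltypes:

```python
def hamming(a, b):
    """Number of differing sites between two methyltypes."""
    return sum(x != y for x, y in zip(a, b))

def prim_mst(nodes):
    """Minimum spanning tree over pairwise Hamming distances (Prim)."""
    in_tree = {0}
    edges = []  # (i, j, weight) with i inside the tree, j newly added
    while len(in_tree) < len(nodes):
        w, i, j = min((hamming(nodes[i], nodes[j]), i, j)
                      for i in in_tree
                      for j in range(len(nodes)) if j not in in_tree)
        edges.append((i, j, w))
        in_tree.add(j)
    return edges

# Hypothetical clonal methyltypes estimated by a baseline clustering method
clones = [(0, 0, 0, 0), (0, 0, 0, 1), (0, 0, 1, 1), (0, 1, 1, 1)]
mst = prim_mst(clones)
print(len(mst), sum(w for _, _, w in mst))  # 3 3
```

Note that the MST imposes no notion of ancestry or clone size: it only connects nearby methyltypes, which is one reason the resulting trees can be systematically deeper than the truth.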
To compare the tree topologies explicitly, we developed a distance measure called the consensus node-based shortest path distance (see Materials and methods for details). The performance of BitPhylogeny is examined based on the empirical MAP solution (see Materials and methods). The results for all synthetic data sets (five clone compositions and four noise levels) are presented in Figure 4. For all clonal compositions and noise levels, BitPhylogeny constructs trees that are much closer to the true tree than both baseline methods. For the monoclonal tree, all three methods are able to reconstruct the two clones accurately. However, as the clonal composition becomes more complex, the performance of the two baseline methods degrades quickly: they overestimate the number of clones and produce much deeper trees for most synthetic data sets, and consequently perform poorly as the complexity of the clone composition increases.
Consensus node-based shortest path distances for all simulated trees. Each box plot is summarized for the distance measures across four noise levels (0%, 1%, 2% and 5%). The suffixes L, M and H for the polyclonal tree type refer to the polyclonal-low, -medium and -high trees in Figure 3. BitPhylogeny consistently outperforms the two baseline methods.
Case study 1: colon cancer development
We applied BitPhylogeny to 454/Roche bisulfite sequencing data of the IRX2 locus of colon tumors from [9]. The tumors from different patients are denoted as CT, CU, CX and HA. Between three and five samples are available from two spatially separated sides of each tumor (denoted left and right). Each sample is denoted by the tumor, the tumor side and a number. For example, CT_R1 is the first sample from the right side of tumor CT. On average, there are more than 1,500 reads per cancer gland. The methylation tag sequencing fidelity was 99.6%, i.e., the data exhibit an error rate of 0.4% [9].
To compare trees from different samples and tumors, we analyzed a number of topological features of the BitPhylogeny trees. The number of big clones (carrying more than 1% of the tumor mass) and the total branch length are measures of the heterogeneity of the samples. The maximum depth of the tree and the root-to-leaf mass distribution are indicators of the level of differentiation of the tumor sample. The trees of most samples have between two and four node levels, or tree layers. In general, we find that the number of big clones correlates with the maximum depth of the trees. The root clones of the trees contain up to 30% of the tumor mass with a strong bias for small root nodes (median 2.3% of the tumor mass). A notable exception is CX_R6 with 36%. With a median of 62% across all samples, most of the mass is usually assigned to the second layer of the trees. However, this value ranges from 7% to 98%. The third layer has a median mass of 25% and the fourth layer 2%.
Analysis of CT and the CX samples
Among the CT samples, we find that the left side displays more topological diversity than the right side (Figure 5). The samples from the right side have most of their mass assigned to levels two and three; on the left side, the tumor mass of one sample, CT_L7, is mostly at the second level. In CT_L3, most of the tumor mass is at the third level, and in CT_L2 a considerable proportion of the tumor mass, about 18%, is assigned to the fourth level. The fourth sample of the left side, CT_L8, exhibits a mass distribution similar to the right side. The pairwise symmetrized Kullback–Leibler divergences, a measure of topological diversity, among samples from the left colon side were larger than those within the right side (P=0.002, Wilcoxon rank sum test).
Analysis of CT samples. (A) Level-wise mass distribution of CT samples. For each tree, the bars show the level sums of the mixture model masses for all eight samples of the CT tumor. The red bars correspond to the posterior means of the root masses. The blue, green and pink bars correspond to the means of the sums of the second, third and fourth tree levels, respectively. (B) Maximum depth of trees of the individual samples. Turquoise densities are from the right side of the tumor and the pink ones are from the left side. Trees from the left side of the tumor have peaked posterior densities at a depth of either 3 or 4, while the posterior densities from the right side are less peaked. (C) Total branch length of trees of individual samples. The trees from the left side, which peak at depth 3 in (B), have shorter total branch lengths than the tree that peaks at depth 4 or the trees from the right side of the tumor.
In terms of maximum tree depth, two samples from the right side show a uniform behavior between 3 and 4 and the remaining two have a bias towards a maximum depth of 3. The samples from the left side, however, all have a strong tendency for a maximum depth of either 3 (three samples) or 4 (one sample: CT_L2). The variance of the posterior of the maximum depth is bigger for the right side than for the left side (P=0.014, Wilcoxon rank sum test). The same behavior can be observed for total branch length. The same three samples that show a bias towards three node levels also have a shorter total branch length than the samples from the right side (Figure 5). These summary statistics indicate that the right side of the tumor evolved at a more homogeneous speed than the left side.
For the CX sample, we observed that the left side can be well separated from the right side in terms of maximum tree depth (P=0.027, t-test; Figure 6). Less pronounced, this separation was also found for the total branch length as well as for the number of clones and big clones. In the original study [9], the CX sample was identified as the largest tumor in the study. The size of this tumor could be a reason for the separation of the evolutionary history of the right side from the left side. Additionally, they identified CX as the tumor with the highest cancer stem cell fraction. We did not observe such a clear separation of left and right sides in any of the other samples. The left side of the tumor appears to evolve faster than the right side, because it has deeper trees with more clones and longer total branch lengths than the right side.
Analysis of CX samples and joint analysis of all samples. Turquoise densities are for samples from the right side of the tumor and the pink ones are for the left side. (A) Maximum depth of trees. Trees from the right side have posterior maximum depth between 2 and 3, while trees from the left side have posterior maximum depth between 3 and 4. (B) Total branch length of trees. Trees from the right side have slightly shorter total branch lengths than the trees from the left side. (C) Number of clones in a tree. Trees from the right side contain fewer clones than trees from the left side. (D) Mean number of clones versus mean maximum depth of trees. With these two summary statistics of trees, samples from the left and right can be separated very clearly.
The phylogenetic trees in Sottoriva et al. [9] do not contain an error model, i.e., every distinct methylation pattern is considered a clone, and therefore they cannot be compared to our BitPhylogeny trees. The authors did not attempt to reconstruct hierarchical relationships between clonal subpopulations, but rather estimated the fraction of cancer stem cells in the tumor and the degree of heterogeneity. They found a large degree of intra-tumor and intra-sample heterogeneity as well.
Case study 2: myeloproliferative neoplasm
We used BitPhylogeny to analyze single-cell sequencing data of a JAK2-negative myeloproliferative neoplasm [14]. The data set consists of 712 SNVs detected by sequencing the exomes of 58 cancer cells. In addition to single cells, the data set also contains genotypes from bulk-sequenced normal and tumor tissue. Hou et al. [14] estimated the average allele dropout rate to be 43.09% and the false discovery rate to be 6.04×10−5. Thus, a large amount of data is missing (on average 56.13% per cell). We binarized the SNV profile by writing 0 for the wild-type allele and 1 for a heterozygous mutation.
Figure 7A presents the results of the BitPhylogeny analysis. It shows a tree structure with a major clone (labeled as clone c) containing 33 out of all 58 cells. This clone is the most progressed clone since it has the longest total branch length from the root clone. One distinct feature of the reconstructed tree is that it captures both clonal progression (e.g., clone b to c) and binary branching with unobserved common ancestors (e.g., clones d and e). As a validation, genotypes from both bulk-sequenced normal and cancer cells are included in the analysis. The normal genotype is correctly identified as the root of the tree (clone a in Figure 7A). The genotype of the bulk-sequenced tumor is assigned to the most progressed clone c. While the analysis of the data with classical phylogenetic models in the original study only showed evidence for monoclonal evolution, BitPhylogeny reveals an additional structure of the tumor phylogeny involving in total half of all cells analyzed.
Reconstructed tree and mutation profiles from single-cell exome sequencing data. (A) Reconstructed phylogeny. Non-empty clones are labeled a through i followed by the number of cells they contain. The vertical distance represents the evolutionary distance between clones. (B) Estimated probabilities of six SNVs in key genes across all cells. The error bars summarize 50,000 MCMC samples and are color-coded according to clone membership.
In addition to the phylogeny, BitPhylogeny provides genotype estimates for all loci and cells. This is useful due to the high amount of missing data and the high error rate of the sequencing data. Hou et al. [14] identified eight key genes that are mutated within the tumor of the patient. With BitPhylogeny, we could gain more insight into the role of these mutations in the progression of the disease. Figure 7B shows the genotype estimations of six SNVs that are located in these key genes. The full profiles of genotype estimations for all eight genes are provided in Additional file 1: Figure S3. SESN2, which is related to DNA damage and genetic instability, was the top gene of interest in the analysis of Hou et al. [14]. Our results confirm this finding as the gene is mutated in most cells. Interestingly, mutations in NTRK1 and ABCB5 may play a role during the expansion from clone b to c. The lower bounds of the error bars for clone b of these two genes are below 0.25, indicating only a 25% chance of being mutated. In contrast, clone c clearly has these two SNVs. The genotype profiles also suggest that the SNV in FRG1 (which may be involved in pre-mRNA splicing [14]) is private to the clones in subtree 3. Unlike Hou et al., our results suggest that ASNS is not mutated, because the estimated probability of mutation for the corresponding SNV is very low for most cells.
The SNV in gene ST13 (Figure 7B) is mutated in clone b, but not in clone c. Since clone b precedes clone c, this violates the infinite sites assumption for point mutations (which does not allow back mutations). An explanation for this phenomenon could be the following: BitPhylogeny accounts for two different sources of uncertainty, namely the stochasticity of the evolutionary process itself and observational noise. At the evolutionary level, BitPhylogeny does not allow mutations to revert back to normal. At the observational level, however, base changes can be due to sequencing errors or missing entirely. Therefore, a violation of the infinite sites model can occur in the tree if the observational likelihood outweighs the evolutionary model. In such cases, the error bars may indicate how the data are explained by the model. In the present case, the error bars for the genotype of clone b are relatively large. The lower bound reaches 0.5, indicating only a 50% chance of being mutated. In addition, it could also be the case that the mutation is present in the cells of clone c, but is miscalled because of insufficient coverage at the site.
Modes of tumor progression
The topology of intra-tumor phylogenetic trees can provide insights into the mode of evolution of the tumor [46]. Like classical phylogenetic trees, BitPhylogeny trees are expected to show distinct structural features for different modes of tumor progression.
For monoclonal tumors (Figure 3A), the BitPhylogeny tree would be expected to consist of a single homogeneous clone. In both methylation and single-cell data, we did not find trees supporting this mode of tumor progression in any of the samples we analyzed. However, we observed patterns reminiscent of monoclonal evolution at the subtree level. For example, in subtree 1 of the single-cell tree (Figure 7), progression from clone b to c can be regarded as a monoclonal evolutionary pattern. Clone c arises from clone b through the acquisition of additional SNVs, some of which may provide a selective advantage. The linear subtree structure without branching suggests that clone c replaces clone b in a clonal expansion. In contrast, a polyclonal mode of tumor progression (Figure 3B,C,D) would result in BitPhylogeny trees with a moderate number of clones distributed over a small number of tree levels, and most of the samples we analyzed fell into this category.
A mutator phenotype (Figure 3E) would generate an extreme case of polyclonality with a very large number of clones, each carrying a very small proportion of the tumor mass. We did not find any trees based on methylation data that would be consistent with this mode of tumor progression. For the single-cell SNV tree (Figure 7), there are several small clones (d to i) with similar distances to the root, which could indicate a mutator phenotype given the total number of inferred clones. However, considering that measurements have been exome-wide, the number of small clones does not appear to imply a mutator phenotype for this tumor. Confirmation of this mode of tumor progression would require data from a considerably larger population of single cells.
The cancer stem cell model has been widely discussed and the original study of Sottoriva et al. [9] is based on this assumption. Under a stem cell model, our trees capture the developmental hierarchies of stem cells. In this case, the marker profiles mainly reveal the differences among stem cell generations. The non-stem cell descendants are expected to have similar marker profiles as their immediate stem cell ancestors, which means that the stem cell lineage is driving tumorigenesis [9]. Thus, each clone would be a mixture of stem cells and their descendants, and the phylogenetic trees represent the stem cell hierarchies.
BitPhylogeny provides a probabilistic framework for inferring intra-tumor phylogenies from observed sequence data. It jointly estimates the subclonal structure of a tumor and the evolutionary relationship of its clones using a full Bayesian approach. In two case studies, we have shown that the method can be useful for reconstructing the life histories of tumors and for making inference about the mode of tumor evolution. Unlike previous methods for unphased short-read data, BitPhylogeny does not rely on mutation frequencies, but rather considers haplotype sequences, specifically patterns of co-occurring genetic or epigenetic variations. More phased data can be expected to be widely available soon due to technological advances increasing the length of reads. BitPhylogeny is also applicable to other data types, like somatic copy number aberrations, which have been used for classical phylogenetic inference [34].
The present study has also revealed limitations of our approach. The data from Sottoriva et al. [9] is unique, and while the IRX2 locus has been specifically selected for its suitability as a molecular clock, it is important to note that the information contained in a single locus might not reflect the complete evolutionary history of a tumor. Additionally, the MCMC inference scheme we use is state-of-the-art but slow for large instances, such that improved (approximate) inference methods should also be considered. Another limitation is that we assume independence of sites, which might be violated even for well-chosen molecular clocks. This limitation is shared with most classical phylogenetics methods. It can be overcome using hidden Markov models [49] or a finite-state transducer [34,50], and we plan to extend our method in this direction in the future.
A major challenge we identified is how to compare evolutionary trees with different numbers of nodes and different node labels and compositions. In this study, we have used simple summary statistics, since a general distance measure between trees of tumor evolution is lacking. Ideally, such a measure would combine the overlap between node content, as computed, for example, by the v-measure [51], with a measure of graph similarity, for example, by adapting graph alignment approaches [52]. Advances in tree comparison would not only be important for method comparison, but also for constructing and comparing evolutionary trajectories of tumors in time. Combined with tree comparison methods, approaches like BitPhylogeny could then be applied to large collections of molecular tumor profiles to identify conserved evolutionary trajectories or developmental pathways differentiating good from poor responders, which may lead to further insights into cancer evolution and progression and eventually inform treatment decisions.
BitPhylogeny employs a non-parametric Bayesian clustering approach for reconstructing intra-tumor phylogenies from observed sequence data. To integrate the assignment of sequences to clones with the organization of clones into a tree, it uses the TSSB as a prior [16]. In this model, the observed data (i.e., sequences) are associated with all nodes of the tree, rather than only to the leaves as in classical phylogenetic models. The TSSB is a probabilistic mixture model with an infinite number of hierarchically organized components, but only a finite number of components have a non-zero weight. In addition to this prior, the full generative probabilistic model underlying BitPhylogeny is defined by node-wise data distributions, a transition kernel and the root prior.
Let \(i\in\{1,\ldots,M\}\) denote the considered marker sites and \(n\in\{1,\ldots,N\}\) index the observed marker profiles or sequences (methyltypes or genotypes). We denote the marker state at site \(i\) of sequence \(n\) by \(x_{i,n}\in\{0,1\}\), where \(x_{i,n}=1\) indicates methylation or mutation. To assign sequences to clones, each clone has a label. Following the notation in [16], the clone label is defined as \(\boldsymbol{\epsilon}=(\epsilon_{1},\ldots,\epsilon_{K})\), where \(\epsilon_{k} \in \mathbb{N}\) for all \(k\in\{1,\ldots,K\}\). Each label is a sequence of natural numbers indicating the location of the clone in the phylogenetic tree. For example, the clone \(\boldsymbol{\epsilon}=(1,2)\) is located in the second layer of the tree, and its lineage trace is \(\emptyset\rightarrow 1\rightarrow 2\), where the numbers are node labels in each level and the root node is labeled by the empty sequence, \(\emptyset\). The length \(K=|\boldsymbol{\epsilon}|\) is the depth of clone \(\boldsymbol{\epsilon}\); for example, \(|(1,2)|=2\). The root clone has depth \(|\emptyset|:=0\). The probability of an observed marker pattern originating from clone \(\boldsymbol{\epsilon}\) is denoted by \(\pi_{\boldsymbol{\epsilon}}\). This parameter specifies the proportion of observations explained by clone \(\boldsymbol{\epsilon}\) (Figure 1C).
Tree-structured stick-breaking process
The TSSB generates an infinite series of clone proportions \(\pi_{\boldsymbol{\epsilon}}\), which sum to one, by interleaving two stick-breaking processes [16],
$$ \begin{aligned} \nu_{\boldsymbol{\epsilon}} &\sim \text{Beta}(1,\lambda^{|\boldsymbol{\epsilon}|} \alpha_{0}), \qquad \psi_{\boldsymbol{\epsilon}} \sim \text{Beta}(1,\gamma), \\ \varphi_{\boldsymbol{\epsilon}\epsilon_{i}} &= \psi_{\boldsymbol{\epsilon}\epsilon_{i}} \prod\limits_{j=1}^{\epsilon_{i}-1}(1-\psi_{\boldsymbol{\epsilon}j}), \qquad \pi_{\emptyset} = \nu_{\emptyset}, \\ \pi_{\boldsymbol{\epsilon}} &= \nu_{\boldsymbol{\epsilon}} \varphi_{\boldsymbol{\epsilon}} \prod\limits_{\{\boldsymbol{\epsilon}^{'}\}}\varphi_{\boldsymbol{\epsilon}^{\prime}} (1 - \nu_{\boldsymbol{\epsilon}^{\prime}}). \end{aligned} $$
((1))
The beta-distributed random variables \(\nu_{\boldsymbol{\epsilon}}\) and \(\psi_{\boldsymbol{\epsilon}}\) define the clone size \(\pi_{\boldsymbol{\epsilon}}\) by distributing mass between each node and its descendants and among its siblings, respectively. The index \(\boldsymbol{\epsilon}\epsilon_{i}\) denotes the \(\epsilon_{i}\)-th child of clone \(\boldsymbol{\epsilon}\), and in the second product, the index \(\boldsymbol{\epsilon}'\) runs over all ancestors of clone \(\boldsymbol{\epsilon}\) in the tree. The clone size depends on the depth and a decay constant \(\lambda\). We refer to the distribution of clone sizes \(\boldsymbol{\pi}=\{\pi_{\boldsymbol{\epsilon}}\}\) and their hierarchical structures generated by (1) as TSSB\((\lambda, \alpha_{0}, \gamma)\). Using this tree partition prior of unbounded depth and width, the infinite mixture model for the observed methylation or mutation data \(\mathbf{x}_{n} = \{x_{i,n}\}_{i=1}^{M}\) is defined as
$$ \begin{aligned} &\boldsymbol{\pi} \sim \text{TSSB} (\lambda, \alpha_{0}, \gamma), \qquad \epsilon_{n} \mid \boldsymbol{\pi} \sim \text{Discrete}(\boldsymbol{\pi}),\\ &\boldsymbol{\theta}_{\epsilon} \sim p(\cdot \mid \boldsymbol{\theta}_{\text{pa}(\epsilon)}), \qquad \mathbf{x}_{n} \sim p(\cdot \mid \epsilon_{n},\, \boldsymbol{\theta}_{\epsilon}), \end{aligned} $$
where \(\boldsymbol{\theta}_{\epsilon}=\{\theta_{i,\epsilon}\}_{i=1}^{M}\) are the parameters of the local distribution \(p(\mathbf{x}_{n}\mid\boldsymbol{\theta}_{\epsilon})\) of the data emitted from clone \(\boldsymbol{\epsilon}\). Each clonal parameter is sampled from a transition distribution that depends on the parent clonal parameter \(\boldsymbol{\theta}_{\text{pa}(\epsilon)}\). We denote the distribution of the root parameter by \(p(\boldsymbol{\theta}_{\emptyset})\).
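To make the stick-breaking construction concrete, the following is a minimal truncated sketch of the TSSB prior in Equation (1). The fixed truncation depth, branching factor and the function name tssb_weights are illustrative choices, not part of the BitPhylogeny implementation, which handles the unbounded tree lazily.

```python
import numpy as np

def tssb_weights(depth=3, branch=2, lam=2.0, alpha0=0.3, gamma=0.1, rng=None):
    """Truncated sketch of the TSSB prior (Equation 1): sample the nu and
    psi sticks for every node up to a fixed depth/branching and return the
    clone sizes pi, keyed by node label (a tuple of child indices)."""
    rng = rng or np.random.default_rng(0)
    pi = {}

    def recurse(label, mass):
        # nu_eps ~ Beta(1, lam^|eps| * alpha0): fraction of mass kept here
        nu = rng.beta(1.0, (lam ** len(label)) * alpha0)
        pi[label] = mass * nu
        rest = mass * (1.0 - nu)
        # psi sticks split the remaining mass among the children (varphi)
        remaining = 1.0
        for child in range(1, branch + 1):
            psi = rng.beta(1.0, gamma)
            phi = psi * remaining       # product form of varphi in Eq. (1)
            remaining *= (1.0 - psi)
            if len(label) + 1 <= depth:
                recurse(label + (child,), rest * phi)

    recurse((), 1.0)
    return pi

pi = tssb_weights()
# pi maps labels such as () or (1, 2) to non-negative clone proportions
```

Because the tree is truncated, the sampled proportions sum to slightly less than one; the remainder belongs to unexplored descendants.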
Customized methylation model
To model methylation data, we specify the local probability distributions \(p(\mathbf{x}_{n}\mid\boldsymbol{\theta}_{\epsilon})\) and the transition probabilities \(p(\boldsymbol{\theta}_{\epsilon}\mid\boldsymbol{\theta}_{\text{pa}(\epsilon)})\). For each clone \(\boldsymbol{\epsilon}\), the local data distribution is a Bernoulli distribution with its parameter given by a sigmoid transformation. The parameter \(\theta_{i,\epsilon} \in \mathbb{R}\) controls the probability of observing a methylation event at locus \(i\). Assuming independence among loci, we set
$$ \begin{aligned} p(\mathbf{x}_{n} \mid \boldsymbol{\theta}_{\epsilon}) = \prod\limits_{i=1}^{M} \sigma(\theta_{i,\epsilon})^{x_{i,n}} (1-\sigma(\theta_{i,\epsilon}))^{1-x_{i,n}}, \end{aligned} $$
where \(\sigma(\theta_{i,\epsilon})=1/(1+\exp(-\theta_{i,\epsilon}))\). Here, we assume the CpG sites are independent, which is appropriate for the IRX2 molecular clock (verified in Additional file 1: Figure S1).
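Under this independence assumption, the clone likelihood is a product of sigmoid-transformed Bernoulli terms and is most conveniently evaluated in log space. A minimal sketch (the function name is ours; a production version would guard against extreme values of theta):

```python
import numpy as np

def log_lik(x, theta):
    """Log-likelihood of a binary marker profile x under a clone with
    parameters theta: independent Bernoulli terms per locus, with the
    success probability given by the sigmoid of theta."""
    p = 1.0 / (1.0 + np.exp(-np.asarray(theta, dtype=float)))
    x = np.asarray(x, dtype=float)
    return float(np.sum(x * np.log(p) + (1 - x) * np.log1p(-p)))
```

For example, with all parameters at zero every locus has probability 0.5, so any profile of length M has log-likelihood M log(0.5).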
For the transition probabilities, we use a mixture of two Laplace distributions to model the parent–child relation,
$$ p\left(\boldsymbol{\theta}_{\epsilon} \mid \boldsymbol{\theta}_{\text{pa}(\epsilon)}, \mu, \Lambda, \mathbf{w}\right) = \prod\limits_{i=1}^{M} \left[ w_{i} \, \text{Laplace}(\mu, \Lambda) + (1-w_{i})\,\text{Laplace}(-\mu, \Lambda) \right], $$
where \(\mu\) defines the location of a positive and a negative mode and \(\mathbf{w} = \{w_{i}\}_{i=1}^{M}\). Intuitively, the positive mode generates parameters that give a high probability of observing methylation events, whereas the negative mode has the opposite effect. The hyperparameter \(\Lambda\) models variation within the modes. The weights \(w_{i}\) and \(1-w_{i}\) of the two Laplace densities specify the probabilities of either mode being selected for sampling the child parameter. The Laplace densities have the effect of pushing the sampled parameters close to the modes \(\mu\) or \(-\mu\).
The dependency on the parent parameter is introduced through the weights \(w_{i}\) as
$$ w_{i} =\left\{ \begin{array}{ll} P(\theta_{i,\epsilon} \geq \eta \mid \theta_{i,\text{pa}(\epsilon)} \geq \eta) & \text{if} \,\,\,\theta_{i,\text{pa}(\epsilon)} \geq \eta \\ P(\theta_{i,\epsilon} \geq \eta \mid \theta_{i,\text{pa}(\epsilon)} < \eta) &\text{if} \,\,\,\theta_{i,\text{pa}(\epsilon)} < \eta \end{array} \right. $$
where \(\eta\) is a fixed threshold. We set \(\eta=1\), which results in very conservative methylation calls.
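The parent-to-child step can be sketched as follows: per locus, the weight w_i of Equation (5) is chosen according to whether the parent parameter exceeds the threshold, and the child parameter is then drawn from the selected Laplace mode. The values of mu, the scale b and the two conditional probabilities are illustrative, not the paper's estimates.

```python
import numpy as np

def sample_child(theta_parent, mu=5.0, b=1.0, eta=1.0,
                 p_stay_m=0.95, p_gain_m=0.05, rng=None):
    """Sketch of the transition kernel: pick the positive Laplace mode
    with probability w_i (which depends on whether the parent parameter
    exceeds eta, as in Equation 5), then draw the child parameter."""
    rng = rng or np.random.default_rng(1)
    theta_parent = np.asarray(theta_parent, dtype=float)
    # w_i = P(methylated child | methylated parent) or
    #       P(methylated child | unmethylated parent)
    w = np.where(theta_parent >= eta, p_stay_m, p_gain_m)
    pick_pos = rng.random(theta_parent.shape) < w
    centre = np.where(pick_pos, mu, -mu)
    return rng.laplace(loc=centre, scale=b)
```

With a large separation between the modes, children of methylated loci stay near +mu and children of unmethylated loci near -mu, which is exactly the "pushing towards the modes" behavior described above.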
The probabilities in Equation (5) provide a link to evolutionary models used in classical phylogeny. We define a transition probability matrix \(\mathbf{P}_{\epsilon}\) to describe the state transition from parent to child. Let \(m\) denote the methylated state, defined by \(\theta_{i,\epsilon}\geq\eta\), and \(u\) the unmethylated state. Then the matrix \(\mathbf{P}_{\epsilon}\) can be written as
$$ \mathbf{P}_{\epsilon} = \left(\begin{array}{ll} P_{u\rightarrow u} & P_{u\rightarrow m}\\ P_{m\rightarrow u} & P_{m\rightarrow m} \end{array} \right), $$
and according to Equation (5), we have
$$ w_{i} =\left\{ \begin{array}{ll} P_{m\rightarrow m} &\text{if} \,\,\,\theta_{i,\text{pa}(\epsilon)} \ge \eta\\ P_{u \rightarrow m} &\text{if} \,\,\,\theta_{i,\text{pa}(\epsilon)} < \eta. \end{array} \right. $$
The transition matrix is obtained from a rate matrix \(\mathbf{A}\) as the matrix exponential \(\mathbf{P}(t)=\exp(\mathbf{A}t)\). The rate matrix \(\mathbf{A}\) is parameterized as
$$ \mathbf{A} = \left(\begin{array}{rr} -\beta_{m} &\beta_{m}\\ \beta_{u} & -\beta_{u} \end{array} \right) \rho, $$
where \(\beta_{m}\) and \(\beta_{u}\) are the equilibrium frequencies of the methylated and unmethylated states, respectively, and \(\beta_{m}+\beta_{u}=1\). The scaling factor \(\rho\) is set to ensure that the average rate of methylation is one, i.e., \(2\beta_{u}\beta_{m}\rho=1\). For each clone \(\boldsymbol{\epsilon}\neq\emptyset\), we denote its branch length by \(t_{\boldsymbol{\epsilon}}\). Finally, the root prior is defined as
$$ p(\boldsymbol{\theta}_{\emptyset}) = \prod\limits^{M}_{i = 1} \text{Laplace}(-\mu, \Lambda). $$
This prior favors clones with unmethylated states at all loci as the root.
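Because the chain has only two states and \(\beta_u+\beta_m=1\), the matrix exponential \(\mathbf{P}(t)=\exp(\mathbf{A}t)\) has a simple closed form: the chain relaxes towards the stationary distribution \((\beta_u, \beta_m)\) at rate \(\rho\). A sketch (the function name is ours):

```python
import numpy as np

def methylation_P(t, beta_m):
    """Closed-form exp(A t) for the two-state methylation rate matrix of
    Equation (8); rows/columns ordered (u, m). The scaling rho enforces
    an average methylation rate of one: 2 * beta_u * beta_m * rho = 1."""
    beta_u = 1.0 - beta_m
    rho = 1.0 / (2.0 * beta_u * beta_m)
    e = np.exp(-rho * t)
    # P(t) = stationary part + exponentially decaying deviation
    return np.array([[beta_u + beta_m * e, beta_m * (1.0 - e)],
                     [beta_u * (1.0 - e), beta_m + beta_u * e]])
```

At t = 0 this reduces to the identity, and for long branches both rows converge to the equilibrium frequencies, as expected for a reversible two-state chain.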
The complete generative probabilistic model underlying BitPhylogeny, including all parameters and all conditional independencies, is depicted in Figure 2 as a graphical model.
Customized single-nucleotide variant model
Single-cell data sets often have high rates of missing data. This can be the result of allele dropout and low depth at some regions of the genome. In [14], the authors reported that the allele dropout rate of their sequencing technique is independent of the location or base type of the locus. This matches the statistical description of data as missing completely at random. In our likelihood-based approach, the missing data can be handled by simply ignoring the locus in each cell where it is missing.
To specify the transition model for SNV data, we employ the following rate matrix,
$$ \mathbf{A} = \left(\begin{array}{rr} -\beta_{m} &\beta_{m}\\ 0 & 0 \end{array} \right) \rho, $$
((10))
where \(\beta_{m}\) is the frequency of a locus being mutated. The scaling factor \(\rho\) is set to ensure \(\beta_{m}(1-\beta_{m})\rho=1\). After matrix exponentiation, the probability of transition from mutated to normal is 0, which reflects the common assumption that mutations are irreversible during tumor evolution.
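For this rate matrix the matrix exponential also has a closed form, which makes the irreversibility explicit: the mutated state is absorbing. A sketch (the function name is ours):

```python
import numpy as np

def snv_P(t, beta_m):
    """Closed-form exp(A t) for the SNV rate matrix of Equation (10);
    rows/columns ordered (normal, mutated). The zero bottom row makes
    the mutated state absorbing: P[mutated -> normal] = 0 for all t."""
    rho = 1.0 / (beta_m * (1.0 - beta_m))
    e = np.exp(-rho * beta_m * t)
    return np.array([[e, 1.0 - e],
                     [0.0, 1.0]])
```

A normal locus mutates with probability 1 - exp(-rho * beta_m * t) over a branch of length t, while a mutated locus stays mutated with probability one.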
For statistical inference, we pursue a Bayesian approach and estimate the full posterior probability distribution of all model parameters, including clone assignment and tree structure. For approximating the joint posterior
$$ p\! \left(\{ \epsilon_{n}\}, \{\boldsymbol{\theta}_{\epsilon}\}, \{t_{\epsilon} \},\, \{\nu_{\epsilon}\},\, \{\varphi_{\epsilon}\},\, \mu,\, \Lambda \left| \{\mathbf{x}_{n}\},\, \beta_{u},\, \beta_{m},\, \lambda,\, \alpha_{0}, \gamma \right.\right), $$
((11))
we fix \(\lambda=2\), \(\alpha_{0}=0.3\) and \(\gamma=0.1\). The equilibrium state frequencies \(\beta_{u}\) and \(\beta_{m}\) are estimated directly from the population as the average frequencies of unmethylated and methylated states, respectively. To sample from the target distribution (11), we use a Gibbs sampler, which iteratively generates samples from the full conditional distribution of each variable of interest. The sampling procedure follows the one described in [16] with one exception: we integrated a new 'swap clone' move as an additional step. In this move, the parameters, assigned data and masses of two random nodes in the tree are swapped, and the structure-related parameters \(\{\nu_{\epsilon}\}\) and \(\{\varphi_{\epsilon}\}\) are then resampled. The swap is accepted with a probability defined by a Metropolis ratio. If, after a swap, the root node has no assigned data points, further swap moves involving the root are conducted until the root node has at least one data point assigned. In other words, we consider empty root nodes as invalid.
We used the maximum posterior expected adjusted rand (MPEAR) method from [53] to compute summary labels from MCMC samples. The method first computes the posterior similarity matrix from the labels of the MCMC samples. The posterior similarity matrix is an \(N\times N\) matrix in which each entry is the posterior probability of two data points being clustered together. Given the posterior similarity matrix, the posterior expected adjusted rand (PEAR) index can be used to assess the performance of a proposed label configuration. The labels corresponding to the highest PEAR are chosen as the summary cluster configuration. We used the MPEAR implementation in the R package mcclust.
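The first step of the MPEAR summary, the posterior similarity matrix, can be sketched as follows (the PEAR maximization itself, done here via the R package mcclust, is omitted; the function name is ours):

```python
import numpy as np

def posterior_similarity(label_samples):
    """N x N posterior similarity matrix from MCMC clone-label samples:
    entry (i, j) is the fraction of samples in which data points i and j
    are assigned to the same clone."""
    L = np.asarray(label_samples)          # shape (n_samples, N)
    n_samples, N = L.shape
    S = np.zeros((N, N))
    for row in L:
        # outer equality: 1 where two points share a label in this sample
        S += (row[:, None] == row[None, :])
    return S / n_samples
```

For instance, for the two label samples [0, 0, 1] and [0, 1, 1], points 0 and 1 co-cluster in half of the samples, so the corresponding entry is 0.5.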
At the end of each MCMC run, the reconstructed tree structure is obtained as follows. For each sample, we check the number of clones that have weights \(\pi_{\epsilon}>0.01\); we call this number the big node number. All samples can then be grouped by their unique big node numbers. For each unique big node number group, we record the tree structure with the highest complete data likelihood (integrating out \(\{\epsilon_{n}\}\)):
$$ p \left(\{\mathbf{x}_{n}\},\, \{\boldsymbol{\theta}_{\epsilon}\},\, \{t_{\epsilon}\} \mid \{\nu_{\epsilon}\},\, \{\varphi_{\epsilon}\},\, \mu,\, \Lambda,\, \beta_{u},\, \beta_{m},\, \lambda,\, \alpha_{0},\, \gamma \right). $$
Finally, we report the recorded tree from the most frequent unique big node number group.
Baseline methods
We used hierarchical clustering and k-centroids as baseline clustering methods. For both methods, the R function dist with option 'binary' was called to compute the Jaccard distance matrix of the observed sequences. The Jaccard distance matrix was then used to perform hierarchical clustering (hclust) and k-centroids (pam). To select the number of clusters, both methods were executed with a range of possible cluster numbers from 2 to 20, and the cluster number with the highest silhouette coefficient was selected. We computed silhouette coefficients with the silhouette function from the R package cluster. The coefficient is computed from the mean distances within clusters and the mean distances between clusters. It does not require the true cluster labels. The silhouette coefficient takes values between −1 and 1, with higher values indicating better clustering performance. We then estimated the methyltype sequences (methylation patterns) of each clone. For hierarchical clustering, each methyltype was computed by thresholding the mean of all sequences assigned to the clone. For k-centroids, the methyltypes were defined as the medoids. Given the estimated methyltypes, we computed minimum spanning trees from the Hamming distance matrices. We used the minimum.spanning.tree function from R package igraph. Finally, we defined the clone with the least number of methylated states to be the root clone and directed the tree accordingly. Hierarchical clustering and minimum spanning tree have been used as parts of the SPADE pipeline, which extracts a cellular hierarchy from high-dimensional cytometry data [54]. The baseline methods we used here differ from SPADE by adding a model selection step.
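The final step of the baseline pipeline, the minimum spanning tree over clone methyltypes under Hamming distance, can be sketched in Python with Prim's algorithm (the actual analysis used the R function minimum.spanning.tree from igraph; this standalone version is for illustration only):

```python
import numpy as np

def hamming_mst(methyltypes):
    """Prim's algorithm over the Hamming distance matrix of the clone
    methyltypes; returns the MST edge list as (parent_index, child_index),
    grown from clone 0."""
    X = np.asarray(methyltypes)
    # pairwise Hamming distances between all methyltypes
    D = (X[:, None, :] != X[None, :, :]).sum(-1)
    n = len(X)
    in_tree, edges = {0}, []
    while len(in_tree) < n:
        # cheapest edge leaving the current tree
        best = min(((D[i, j], i, j)
                    for i in in_tree for j in range(n) if j not in in_tree),
                   key=lambda e: e[0])
        edges.append((best[1], best[2]))
        in_tree.add(best[2])
    return edges
```

Rooting the tree at the clone with the fewest methylated states, as described above, then amounts to directing these edges away from that clone.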
The clustering performance was assessed by the v-measure [51], which computes the harmonic mean of two conditional entropies, namely homogeneity and completeness. Homogeneity measures how much each cluster contains only members of a single class, while completeness measures whether all members of a given class are assigned to the same cluster. The v-measure takes values in [0, 1], with 0 and 1 indicating the worst and best clustering performance, respectively.
Tree distance
We considered all markers that were present in the ground truth tree and in all three inferred trees. For these shared markers, we computed their pairwise shortest-path distance matrix in each tree. The lower triangular part of the distance matrices of all inferred trees were then compared to the ground truth using the sum of the absolute values of all differences between matrix entries. We named this distance measure the consensus node-based shortest path distance.
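A sketch of this distance for two small trees, representing each tree as an undirected adjacency map over the shared markers (a simplification of the clone-level trees used in the paper; the function names are ours):

```python
import numpy as np
from collections import deque

def shortest_path_matrix(adj, nodes):
    """All-pairs shortest-path matrix (BFS, unit edge lengths) over the
    listed nodes of a tree given as an undirected adjacency dict."""
    idx = {n: k for k, n in enumerate(nodes)}
    D = np.full((len(nodes), len(nodes)), np.inf)
    for s in nodes:
        dist, q = {s: 0}, deque([s])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        for t, d in dist.items():
            if t in idx:
                D[idx[s], idx[t]] = d
    return D

def consensus_distance(adj_a, adj_b, shared):
    """Consensus node-based shortest path distance: sum of absolute
    differences between the lower-triangular shortest-path matrices
    restricted to the shared markers."""
    A = shortest_path_matrix(adj_a, shared)
    B = shortest_path_matrix(adj_b, shared)
    tri = np.tril_indices(len(shared), -1)
    return float(np.abs(A[tri] - B[tri]).sum())
```

For example, a chain a-b-c and a star centered at a differ only in the a-c and b-c path lengths, giving a distance of 2.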
Software and data availability
Based on the TSSB implementation [16], BitPhylogeny has been implemented in Python and R and is freely available under a GPL3 license [55]. For 2,000 data points (observed marker patterns), a single MCMC iteration takes on average about 1 s on a standard single-core computer. Additional file 2 contains an R Markdown file for reproducing all the figures in this manuscript. All data, including synthetic, methylation and single-cell data sets, are provided in the BitPhylogeny software package. The sequencing data from the single-cell study are stored in NCBI Sequence Read Archive [56] under the accession number [SRA:SRA050202].
MCMC:
Markov chain Monte Carlo
MPEAR:
maximum posterior expected adjusted rand
PEAR:
posterior expected adjusted rand
SNV:
single-nucleotide variant
TSSB:
tree-structured stick-breaking
Nowell PC. The clonal evolution of tumor cell populations. Science. 1976; 194:23–8.
Beerenwinkel N, Schwarz RF, Gerstung M, Markowetz F. Cancer evolution: mathematical models and computational inference. Syst Biol. 2015; 64:e1–25. http://dx.doi.org/10.1093/sysbio/syu081.
Shah SP, Morin RD, Khattra J, Prentice L, Pugh T, Burleigh A, et al. Mutational evolution in a lobular breast tumour profiled at single nucleotide resolution. Nature. 2009; 461:809–13. http://dx.doi.org/10.1038/nature08489.
Gerlinger M, Rowan AJ, Horswell S, Larkin J, Endesfelder D, Gronroos E, et al. Intratumor heterogeneity and branched evolution revealed by multiregion sequencing. N Engl J Med. 2012; 366:883–92. http://dx.doi.org/10.1056/NEJMoa1113205.
Nik-Zainal S, Alexandrov LB, Wedge DC, Van Loo P, Greenman CD, Raine K, et al. Mutational processes molding the genomes of 21 breast cancers. Cell. 2012; 149:979–93.
Nik-Zainal S, Van Loo P, Wedge DC, Alexandrov LB, Greenman CD, Lau KW, et al. The life history of 21 breast cancers. Cell. 2012; 149:994–1007.
Aparicio S, Caldas C. The implications of clonal genome evolution for cancer medicine. N Engl J Med. 2013; 368:842–51. http://dx.doi.org/10.1056/NEJMra1204892.
Shibata D, Tavaré S. Counting divisions in a human somatic cell tree: how, what and why?. Cell Cycle. 2006; 5:610–4.
Sottoriva A, Spiteri I, Shibata D, Curtis C, Tavaré S. Single-molecule genomic data delineate patient-specific tumor profiles and cancer stem cell organization. Cancer Res. 2013; 73:41–9. http://dx.doi.org/10.1158/0008-5472.CAN-12-2273.
Burrell RA, McGranahan N, Bartek J, Swanton C. The causes and consequences of genetic heterogeneity in cancer evolution. Nature. 2013; 501:338–45. http://dx.doi.org/10.1038/nature12625.
Almendro V, Cheng YK, Randles A, Itzkovitz S, Marusyk A, Ametller E, et al. Inference of tumor evolution during chemotherapy by computational modeling and in situ analysis of genetic and phenotypic cellular diversity. Cell Rep. 2014; 6:514–27. http://www.pubmedcentral.nih.gov/articlerender.fcgi?artid=39.
Chowdhury SA, Shackney SE, Heselmeyer-Haddad K, Ried T, Schäffer AA, Schwartz R. Phylogenetic analysis of multiprobe fluorescence in situ hybridization data from tumor cell populations. Bioinformatics. 2013; 29:i189–98. http://dx.doi.org/10.1093/bioinformatics/btt205.
Shapiro E, Biezuner T, Linnarsson S. Single-cell sequencing-based technologies will revolutionize whole-organism science. Nat Rev Genet. 2013; 14:618–30. http://dx.doi.org/10.1038/nrg3542.
Hou Y, Song L, Zhu P, Zhang B, Tao Y, Xu X, et al. Single-cell exome sequencing and monoclonal evolution of a JAK2-negative myeloproliferative neoplasm. Cell. 2012; 148:873–85. http://dx.doi.org/10.1016/j.cell.2012.02.028.
Xu X, Hou Y, Yin X, Bao L, Tang A, Song L, et al. Single-cell exome sequencing reveals single-nucleotide mutation characteristics of a kidney tumor. Cell. 2012; 148:886–95. http://dx.doi.org/10.1016/j.cell.2012.02.025.
Adams RP, Ghahramani Z, Jordan MI. Tree-structured stick breaking processes for hierarchical data. Adv Neural Inform Proc Syst (NIPS). 2010; 23:19–27.
Jiao W, Vembu S, Deshwar AG, Stein L, Morris Q. Inferring clonal evolution of tumors from single nucleotide somatic mutations. BMC Bioinform. 2014; 15:35.
Kim JY, Tavaré S, Shibata D. Counting human somatic cell replications: methylation mirrors endometrial stem cell divisions. Proc Nat Acad Sci USA. 2005; 102:17739–44. http://www.pnas.org/content/102/49/17739.long.
Shibata D. Mutation and epigenetic molecular clocks in cancer. Carcinogenesis. 2011; 32:123–8. http://dx.doi.org/10.1093/carcin/bgq239.
De S, Shaknovich R, Riester M, Elemento O, Geng H, Kormaksson M, et al. Aberration in DNA methylation in B-cell lymphomas has a complex origin and increases with disease severity. PLoS Genet. 2013; 9:e1003137. http://dx.plos.org/10.1371/journal.pgen.1003137.
Sottoriva A, Spiteri I, Piccirillo SGM, Touloumis A, Collins VP, Marioni JC, et al. Intratumor heterogeneity in human glioblastoma reflects cancer evolutionary dynamics. Proc Nat Acad Sci USA. 2013; 110:4009–14. http://europepmc.org/articles/PMC3593922/?report=abstract.
Brocks D, Assenov Y, Minner S, Bogatyrova O, Simon R, Koop C, et al. Intratumor DNA methylation heterogeneity reflects clonal evolution in aggressive prostate cancer. Cell Rep. 2014; 8:798–806. http://www.cell.com/article/S221112471400552X/fulltext.
Graham TA, Humphries A, Sanders T, Rodriguez-Justo M, Tadrous PJ, Preston SL, et al. Use of methylation patterns to determine expansion of stem cell clones in human colon tissue. Gastroenterology. 2011; 140:1241–50. http://www.gastrojournal.org/article/S0016508510018810/fulltext.
Humphries A, Cereser B, Gay LJ, Miller DSJ, Das B, Gutteridge A, et al. Lineage tracing reveals multipotent stem cells maintain human adenomas and the pattern of clonal expansion in tumor evolution. Proc Nat Acad Sci USA. 2013; 110:E2490–9. http://www.pnas.org/content/110/27/E2490.abstract.
Luo Y, Wong CJ, Kaz AM, Dzieciatkowski S, Carter KT, Morris SM, et al. Differences in DNA methylation signatures reveal multiple pathways of progression from adenoma to colorectal cancer. Gastroenterology. 2014; 147:418–29.e8. http://www.sciencedirect.com/science/article/pii/S0016508514005952.
Roth A, Khattra J, Yap D, Wan A, Justina Biele EL, Ha G, et al. PyClone: statistical inference of clonal population structure in cancer. Nat Methods. 2014; 11:396–8.
Oesper L, Mahmoody A, Raphael B. THetA: Inferring intra-tumor heterogeneity from high-throughput DNA sequencing data. Genome Biol. 2013; 14:R80.
Gusfield D. Efficient algorithms for inferring evolutionary trees. Networks. 1991; 21:19–28.
Bafna V, Gusfield D, Lancia G, Yooseph S. Haplotyping as perfect phylogeny: a direct approach. J Comput Biol. 2003; 10:323–40. http://dx.doi.org/10.1089/10665270360688048.
Pe'er I, Pupko T, Shamir R, Sharan R. Incomplete directed perfect phylogeny. SIAM J Comput. 2004; 33:590–607.
Halperin E, Eskin E. Haplotype reconstruction from genotype data using Imperfect Phylogeny. Bioinformatics. 2004; 20:1842–9. http://dx.doi.org/10.1093/bioinformatics/bth149.
Salari R, Saleh SS, Kashef-Haghighi D, Khavari D, Newburger DE, West RB, et al. Inference of tumor phylogenies with improved somatic mutation discovery. J Comput Biol. 2013; 20:933–44. http://dx.doi.org/10.1089/cmb.2013.0106.
Felsenstein J. Inferring phylogenies. Sunderland (MA): Sinauer Associates; 2004.
Schwarz RF, Trinh A, Sipos B, Brenton JD, Goldman N, Markowetz F. Phylogenetic quantification of intra-tumour heterogeneity. PLoS Comput Biol. 2014; 10:e1003535. http://dx.doi.org/10.1371/journal.pcbi.1003535.
Strino F, Parisi F, Micsinai M, Kluger Y. TrAp: a tree approach for fingerprinting subclonal tumor composition. Nucleic Acids Res. 2013; 41:e165. http://dx.doi.org/10.1093/nar/gkt641.
Beerenwinkel N, Günthard HF, Roth V, Metzner KJ. Challenges and opportunities in estimating viral genetic diversity from next-generation sequencing data. Front Microbio. 2012; 3:239. http://www.frontiersin.org/virology/10.3389/fmicb.2012.00329.
Siegmund KD, Marjoram P, Shibata D. Modeling DNA methylation in a population of cancer cells. Stat Appl Genet Mol Biol. 2008; 7:Article 18. http://dx.doi.org/10.2202/1544-6115.1374.
Capra JA, Kostka D. Modeling DNA methylation dynamics with approaches from phylogenetics. Bioinformatics. 2014; 30:i408–14. http://bioinformatics.oxfordjournals.org/content/30/17/i408.
Desper R, Khan J, Schäffer AA. Tumor classification using phylogenetic methods on expression data. J Theor Biol. 2004; 228:477–96. http://www.sciencedirect.com/science/article/pii/S0022519304.
Riester M, Stephan-Otto Attolini C, Downey RJ, Singer S, Michor F. A differentiation-based phylogeny of cancer subtypes. PLoS Comput Biol. 2010; 6:e1000777. http://www.ploscompbiol.org/article/info%3Adoi%2F10.1371%.
Qiu P, Gentles AJ, Plevritis SK. Discovering biological progression underlying microarray samples. PLoS Comput Biol. 2011; 7:e1001123. http://www.ploscompbiol.org/article/info%3Adoi%2F10.1371%.
Park Y, Shackney S, Schwartz R. Network-based inference of cancer progression from microarray data. IEEE/ACM Trans Comput Biol Bioinform/IEEE, ACM. 2009; 6:200–12. http://www.ncbi.nlm.nih.gov/pubmed/19407345.
Schwartz R, Shackney S E. Applying unmixing to gene expression data for tumor phylogeny inference. BMC Bioinform. 2010; 11:42. http://www.pubmedcentral.nih.gov/articlerender.fcgi?artid=28.
Pennington G, Smith CA, Shackney S, Schwartz R. Expectation- maximization method for reconstructing tumor phylogenies from single-cell data. Comput Syst Bioinform Conf. 2006; 5:371–80. http://www.ncbi.nlm.nih.gov/pubmed/17369656.
Navin N, Kendall J, Troge J, Andrews P, Rodgers L, McIndoo J, et al. Tumour evolution inferred by single-cell sequencing. Nature. 2011; 472:90–4. http://dx.doi.org/10.1038/nature09807.
Navin NE, Hicks J. Tracing the tumor lineage. Mol Oncol. 2010; 4:267–83. http://dx.doi.org/10.1016/j.molonc.2010.04.010.
Gerstung M, Beisel C, Rechsteiner M, Wild P, Schraml P, Moch H, et al. Reliable detection of subclonal single-nucleotide variants in tumour cell populations. Nat Commun. 2012; 3:811. http://dx.doi.org/10.1038/ncomms1814.
Bielas JH, Loeb KR, Rubin BP, True LD, Loeb LA. Human cancers express a mutator phenotype. Proc Nat Acad Sci USA. 2006; 103:18238–42. http://www.pnas.org/content/103/48/18238.full.
Siepel A, Haussler D. Phylogenetic hidden Markov models In: Nielsen R, editor. Statistical methods in molecular evolution. New York: Springer: 2005. p. 325–51.
Schwarz RF, Fletcher W, Förster F, Merget B, Wolf M, Schultz J, et al. Evolutionary distances in the twilight zone–a rational kernel approach. PLoS One. 2010; 5:e15788. http://dx.doi.org/10.1371/journal.pone.0015788.
Rosenberg A, Hirschberg J. V-measure: a conditional entropy-based external cluster evaluation measure. In: Proceedings of the 2007 Joint Conference on, Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP-CoNLL). Prague, Czech Republic: Association for Computational Linguistics: 2007. p. 410–20.
Kelley BP, Sharan R, Karp RM, Sittler T, Root DE, Stockwell BR, et al. Conserved pathways within bacteria and yeast as revealed by global protein network alignment. Proc Natl Acad Sci USA. 2003; 100:11394–9. http://dx.doi.org/10.1073/pnas.1534710100.
Fritsch A, Ickstadt K. Improved criteria for clustering based on the posterior similarity matrix. Bayesian Anal. 2009; 4:367–92.
Qiu P, Simonds EF, Bendall SC, Gibbs KD, Bruggner RV, Linderman MD, et al. Extracting a cellular hierarchy from high-dimensional cytometry data with SPADE. Nat Biotechnol. 2011; 29:886–91. http://www.pubmedcentral.nih.gov/articlerender.fcgi?artid=3196363&tool=pmcentrez&rendertype=abstract.
BitPhylogeny software. https://bitbucket.org/ke_yuan/bitphylogeny.
NCBI Sequence Read Archive. http://www.ncbi.nlm.nih.gov/sra.
KY and FM would like to acknowledge the support of the University of Cambridge, Cancer Research UK and Hutchison Whampoa Limited. The authors would like to acknowledge Wei Liu for preparing the Sankey plot script, Edith Ross for converting SNV profiles to binary data, Andrea Sottoriva for helpful discussions and providing the processed methylation patterns, and Ryan Adams for sharing the TSSB code.
University of Cambridge, Cancer Research UK Cambridge Institute, Cambridge, UK
Ke Yuan & Florian Markowetz
Department of Biosystems Science and Engineering, ETH Zurich, Basel, Switzerland
Thomas Sakoparnig & Niko Beerenwinkel
SIB Swiss Institute of Bioinformatics, Basel, Switzerland
Current address: Biozentrum, University of Basel, Basel, Switzerland
Thomas Sakoparnig
Ke Yuan
Florian Markowetz
Niko Beerenwinkel
Correspondence to Florian Markowetz or Niko Beerenwinkel.
FM and NB planned and outlined the study. KY and TS developed and implemented the model and simulated and analyzed the data. KY, TS, FM and NB wrote the manuscript. KY and TS contributed equally and are joint first authors. FM and NB are co-last and co-corresponding authors. All authors read and approved the final manuscript.
Ke Yuan and Thomas Sakoparnig contributed equally to this work.
Additional file 1
Supplementary figures. A PDF file with three supplementary figures (Figures S1, S2 and S3).
R markdown file. A PDF file with BitPhylogeny package details and figure reproduction.
Yuan, K., Sakoparnig, T., Markowetz, F. et al. BitPhylogeny: a probabilistic framework for reconstructing intra-tumor phylogenies. Genome Biol 16, 36 (2015). https://doi.org/10.1186/s13059-015-0592-6
Genomics of cancer progression and heterogeneity
Joint probability distribution as functions
Suppose $X$ and $Y$ are correlated random variables in a finite set ${\mathcal A}$, and let $f, g$ be functions that map elements from ${\mathcal A}$ to ${\mathcal B}$ for some finite set ${\mathcal B}$.
Assume the following:
$f(X)$ is independent of $Y$
$g(Y)$ is independent of $X$
Can we say that there exist independent variables $A, B, C$ and functions $h_1, h_2$ such that the joint distribution of $(f(X), g(Y), X, Y)$ is the same as that of $(A, B, h_1(A, C), h_2(B, C))$?
The intuition behind the claim is that conditioned on $f(X)$ and $g(Y)$, the joint distribution of $X, Y$ should be such that $X$ depends only on $f(X)$, and $Y$ depends only on $g(Y)$. The marginal distribution of one of $X$ or $Y$ conditioned on $f(X)$ and $g(Y)$ of course satisfies this property by the given independence assumptions. The question is whether we can show such a statement for the joint distribution.
If not, then is there a counter-example? Of course, coming up with a counter-example might be hard since one needs to show that it is not true for any choice of $A$, $B$, $C$, $h_1$ and $h_2$, but maybe there is some intuitive reasoning why this is not true.
co.combinatorics pr.probability probability-distributions
No. Here is a counterexample. Let $\mathcal{A} = \{1,2,3,4\}$, $\mathcal{B} = \{1,2\}$, and $f(x) = g(x) = \lceil\frac{x}{2}\rceil$. Let the joint probability mass function of $X$ and $Y$ be given by the matrix \[ P = \frac{1}{8}\begin{bmatrix} 1 & 0 & 1 & 0 \\ 0 & 1 & 0 & 1 \\ 1 & 0 & 0 & 1 \\ 0 & 1 & 1 & 0\end{bmatrix}. \] Note that the distribution of $(f(X),Y)$ is uniform over $\mathcal{B}\times\mathcal{A}$ and the distribution over $(X,g(Y))$ is uniform over $\mathcal{A}\times\mathcal{B}$, so the independence assumptions hold.
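The two claimed independences can be verified mechanically. A small sketch with exact rational arithmetic (code and names are ours, only a check of the answer's matrix):

```python
from fractions import Fraction

# Joint pmf of (X, Y) over {1,2,3,4}^2, from the answer's matrix P (entries / 8).
P = [[Fraction(e, 8) for e in row] for row in
     [[1, 0, 1, 0],
      [0, 1, 0, 1],
      [1, 0, 0, 1],
      [0, 1, 1, 0]]]

f = lambda x: (x + 1) // 2  # f(x) = g(x) = ceil(x/2) on {1, 2, 3, 4}

# Joint pmf of (f(X), Y): sum P over the rows x with f(x) fixed.
fX_Y = {}
for x in range(1, 5):
    for y in range(1, 5):
        fX_Y[(f(x), y)] = fX_Y.get((f(x), y), Fraction(0)) + P[x - 1][y - 1]

# Joint pmf of (X, g(Y)): sum P over the columns y with g(y) fixed.
X_gY = {}
for x in range(1, 5):
    for y in range(1, 5):
        X_gY[(x, f(y))] = X_gY.get((x, f(y)), Fraction(0)) + P[x - 1][y - 1]

# Both distributions put mass 1/8 on each of their 2*4 cells, i.e. they are
# uniform products, so f(X) is independent of Y and g(Y) is independent of X.
print(all(p == Fraction(1, 8) for p in fX_Y.values()),
      all(p == Fraction(1, 8) for p in X_gY.values()))  # True True
```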
In searching for a factorization of the type requested, we can assume without loss of generality that $C$ takes values in a finite set (because $\mathcal{A}$ is finite). If a factorization of the desired type existed, then $X' := h_1(A,C)$ and $Y' := h_2(B,C)$ would be independent conditioned on $C$, since $A$ and $B$ would be independent.
Suppose there were some value $c$ taken by $C$ with positive probability such that, conditioned on $C=c$, each of $X'$ and $Y'$ took at least two values with positive probability. Let $i\neq j$ be two such values for $X'$ and $k\neq l$ two such values for $Y'$. Then conditioned on $C=c$ the pair $(X',Y')$ could take any of the four values $(i,k), (j,k), (i,l), (j,l)$ with positive probability. Hence the same statement would be true unconditionally. But there are no such values with $P_{ik},P_{jk},P_{il},P_{jl}$ all positive.
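The combinatorial fact used in this step, namely that no quadruple $P_{ik}, P_{jk}, P_{il}, P_{jl}$ with $i \neq j$ and $k \neq l$ is entirely positive, can be confirmed by a brute-force scan over all row and column pairs (illustrative code, names ours):

```python
from itertools import combinations

# Numerators of the pmf; the common factor 1/8 is irrelevant for positivity.
P = [[1, 0, 1, 0],
     [0, 1, 0, 1],
     [1, 0, 0, 1],
     [0, 1, 1, 0]]

def has_positive_2x2(M):
    """Does M contain rows i < j and columns k < l with all four entries positive?"""
    for i, j in combinations(range(len(M)), 2):
        for k, l in combinations(range(len(M[0])), 2):
            if M[i][k] > 0 and M[i][l] > 0 and M[j][k] > 0 and M[j][l] > 0:
                return True
    return False

print(has_positive_2x2(P))  # False: no such quadruple of positive entries exists
```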
Therefore conditioned on $C$, we know for sure either the value of $X'$ or of $Y'$. From there we know for sure either the value of $A = f(X')$ or $B=g(Y')$. But $A,B,C$ are independent, so either $A$ or $B$ is deterministic, contradicting the assumption that each is uniform over $\mathcal{B}$.
(Edit / ad: For lots more on distributions like this in a game-theoretic context, see my paper "Structure of Extreme Correlated Equilibria: a Zero-Sum Example and its Implications.")
Noah SteinNoah Stein
$\mathcal B$ might be a singleton, or $f$ and $g$ constant, in which case $X$ and $Y$ would both be functions of $C$, which need not be the case, e.g. if they have a joint density.
mikemike
$\begingroup$ Hmm, I think $C$ could be represented as a pair $(X',Y')$ with the same joint distribution as $(X,Y)$, with $h_1(A,X',Y') = X'$ and similarly for $h_2$. $\endgroup$ – usul Mar 27 '14 at 16:02
$\begingroup$ Yes, in particular $C$ can carry any information about the joint distribution of $X$, $Y$ as long as $C$ is independent of $f(X)$, $g(Y)$. So, setting $f(X)$ and/or $g(Y)$ to a constant can only help. $\endgroup$ – user47772 Mar 27 '14 at 16:33
Not the answer you're looking for? Browse other questions tagged co.combinatorics pr.probability probability-distributions or ask your own question.
Sparse representation of a distribution with independent and correlated variables
Joint distribution of Ito integral and its quadratic varation
Tracking down locality assumption in CHSH inequality
Proving that a property holds for random sequences with given marginal distribution by rearrangement
Joint distribution from multiple marginals
Maximization of a total variation distance subject to another total variation distance in Markov chain
Probability of collision of some family of hash functions
Joint distribution of integrals of diffusion and driving noise
Controlling Mean Difference Between Product and Joint Distributions Using Optimal Transportation
Exchangeable Bernoulli random variables with bounded summation implies negative correlation? | CommonCrawl |
November 2018, 38(11): 5379-5387. doi: 10.3934/dcds.2018237
Topological obstructions to dominated splitting for ergodic translations on the higher dimensional torus
Pedro Duarte 1, and Silvius Klein 2,
Departamento de Matemática and CMAFCIO, Faculdade de Ciências, Universidade de Lisboa, Campo Grande, Edifício C6, Piso 2, 1749-016 Lisboa, Portugal
Departamento de Matemática, Pontifícia Universidade Católica do Rio de Janeiro (PUC-Rio), Rua Marquês de São Vicente 225, Rio de Janeiro, RJ, 22430-060, Brazil
Received April 2017 Revised October 2017 Published August 2018
Fund Project: The first author was supported by Fundação para a Ciência e a Tecnologia, under the project: UID/MAT/04561/2013. The second author was supported by the Norwegian Research Council project no. 213638, "Discrete Models in Mathematical Analysis"
Consider the space of analytic, quasi-periodic cocycles on the higher dimensional torus. We provide examples of cocycles with nontrivial Lyapunov spectrum, whose homotopy classes do not contain any cocycles satisfying the dominated splitting property. This shows that the main result in the recent work "Complex one-frequency cocycles" by A. Avila, S. Jitomirskaya and C. Sadel does not hold in the higher dimensional torus setting.
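For orientation, the top Lyapunov exponent of a cocycle over a torus translation can be estimated numerically by iterating the cocycle along an orbit and renormalizing. The sketch below uses a standard Schrödinger-type $\mathrm{SL}(2,\mathbb{R})$ cocycle over an irrational circle rotation; it is purely illustrative and is not taken from the paper (all parameter choices are ours).

```python
import math

def lyapunov_estimate(coupling, alpha, n_iter=100000, x0=0.1):
    """Estimate the top Lyapunov exponent of the Schrodinger-type cocycle
    A(x) = [[coupling*cos(2*pi*x), -1], [1, 0]] over x -> x + alpha (mod 1),
    renormalizing the iterated vector at each step to avoid overflow."""
    x, v = x0, (1.0, 0.0)
    log_sum = 0.0
    for _ in range(n_iter):
        a = coupling * math.cos(2 * math.pi * x)
        w = (a * v[0] - v[1], v[0])  # apply A(x) to v
        norm = math.hypot(*w)
        log_sum += math.log(norm)
        v = (w[0] / norm, w[1] / norm)
        x = (x + alpha) % 1.0
    return log_sum / n_iter

alpha = (math.sqrt(5) - 1) / 2  # golden-mean frequency
# With coupling = 20 (i.e. 2*lambda with lambda = 10), Herman's lower bound
# guarantees a Lyapunov exponent of at least log(10) ~ 2.30.
print(lyapunov_estimate(20.0, alpha))
```

At small coupling the same estimate is close to zero, reflecting the absence of uniform hyperbolicity in that regime.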
Keywords: Analytic cocycle, torus translation, dominated splitting, Lyapunov exponents, homotopy class.
Mathematics Subject Classification: Primary: 37D30, 37C55; Secondary: 57T15, 37F99.
Citation: Pedro Duarte, Silvius Klein. Topological obstructions to dominated splitting for ergodic translations on the higher dimensional torus. Discrete & Continuous Dynamical Systems - A, 2018, 38 (11) : 5379-5387. doi: 10.3934/dcds.2018237
A. Avila, Density of positive Lyapunov exponents for $\text {SL}(2, \mathbb R)$-cocycles, J. Am. Math. Soc., 24 (2011), 999-1014. doi: 10.1090/S0894-0347-2011-00702-9.
A. Avila, S. Jitomirskaya and C. Sadel, Complex one-frequency cocycles, J. Eur. Math. Soc. (JEMS), 16 (2014), 1915-1935. doi: 10.4171/JEMS/479.
J. Bochi, Genericity of zero Lyapunov exponents, Ergodic Theory Dynam. Systems, 22 (2002), 1667-1696. doi: 10.1017/S0143385702001165.
J. Bochi, Cocycles of isometries and denseness of domination, Q. J. Math., 66 (2015), 773-798. doi: 10.1093/qmath/hav020.
J. Bochi and M. Viana, The Lyapunov exponents of generic volume-preserving and symplectic maps, Ann. of Math. (2), 161 (2005), 1423-1485. doi: 10.4007/annals.2005.161.1423.
P. Duarte and S. Klein, Continuity, positivity and simplicity of the Lyapunov exponents for quasi-periodic cocycles, to appear in J. Eur. Math. Soc. (JEMS), https://arXiv.org/abs/1603.06851
P. Duarte and S. Klein, Lyapunov Exponents of Linear Cocycles: Continuity via Large Deviations, Atlantis Studies in Dynamical Systems, vol. 3, Atlantis Press, 2016. doi: 10.2991/978-94-6239-124-6.
P. Griffiths and J. Harris, Principles of Algebraic Geometry, Wiley Classics Library, John Wiley & Sons, Inc., New York, 1994. doi: 10.1002/9781118032527.
A. Hatcher, Algebraic Topology, Cambridge University Press, Cambridge, 2002.
L. I. Nicolaescu, An Invitation to Morse Theory, Universitext, Springer, 2007.
M. Viana, Lectures on Lyapunov Exponents, Cambridge Studies in Advanced Mathematics, Cambridge University Press, 2014. doi: 10.1017/CBO9781139976602.
Fermat curve
In mathematics, the Fermat curve is the algebraic curve in the complex projective plane defined in homogeneous coordinates (X:Y:Z) by the Fermat equation
$X^{n}+Y^{n}=Z^{n}.\ $
Therefore, in terms of the affine plane its equation is
$x^{n}+y^{n}=1.\ $
An integer solution to the Fermat equation would correspond to a nonzero rational number solution to the affine equation, and vice versa. But by Fermat's Last Theorem it is now known that (for n > 2) there are no nontrivial integer solutions to the Fermat equation; therefore, the Fermat curve has no nontrivial rational points.
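As an illustration (not, of course, a proof), one can search exhaustively for small solutions; the brute-force sketch below finds Pythagorean triples for n = 2 and nothing for n = 3 or n = 4, consistent with Fermat's Last Theorem. Code and names are ours.

```python
def fermat_solutions(n, bound):
    """Brute-force search for integer solutions of x^n + y^n = z^n
    with 1 <= x <= y and z <= bound."""
    hits = []
    powers = {k ** n: k for k in range(1, bound + 1)}  # z^n -> z lookup
    for x in range(1, bound + 1):
        for y in range(x, bound + 1):
            z = powers.get(x ** n + y ** n)
            if z is not None:
                hits.append((x, y, z))
    return hits

print(fermat_solutions(2, 30))   # Pythagorean triples: (3, 4, 5), (5, 12, 13), ...
print(fermat_solutions(3, 200))  # [] -- consistent with Fermat's Last Theorem
```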
The Fermat curve is non-singular and has genus
$(n-1)(n-2)/2.\ $
This means genus 0 for the case n = 2 (a conic) and genus 1 only for n = 3 (an elliptic curve). The Jacobian variety of the Fermat curve has been studied in depth. It is isogenous to a product of simple abelian varieties with complex multiplication.
The Fermat curve also has gonality
$n-1.\ $
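For concreteness, the genus and gonality formulas above give the following values for the first few degrees (a trivial sanity check; illustrative code, not part of the article):

```python
def genus(n):
    """Genus of the smooth plane Fermat curve of degree n."""
    return (n - 1) * (n - 2) // 2

def gonality(n):
    """Gonality of the Fermat curve of degree n."""
    return n - 1

for n in range(2, 7):
    print(n, genus(n), gonality(n))
# n = 2, 3, 4, 5, 6 give genus 0, 1, 3, 6, 10 and gonality 1, 2, 3, 4, 5
```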
Fermat varieties
Fermat-style equations in more variables define, as projective varieties, the Fermat varieties.
Related studies
• Baker, Matthew; Gonzalez-Jimenez, Enrique; Gonzalez, Josep; Poonen, Bjorn (2005), "Finiteness results for modular curves of genus at least 2", American Journal of Mathematics, 127 (6): 1325–1387, arXiv:math/0211394, doi:10.1353/ajm.2005.0037, JSTOR 40068023
• Gross, Benedict H.; Rohrlich, David E. (1978), "Some Results on the Mordell-Weil Group of the Jacobian of the Fermat Curve" (PDF), Inventiones Mathematicae, 44 (3): 201–224, doi:10.1007/BF01403161, archived from the original (PDF) on 2011-07-13
• Klassen, Matthew J.; Debarre, Olivier (1994), "Points of Low Degree on Smooth Plane Curves", Journal für die reine und angewandte Mathematik, 1994 (446): 81–88, doi:10.1515/crll.1994.446.81
• Tzermias, Pavlos (2004), "Low-Degree Points on Hurwitz-Klein Curves", Transactions of the American Mathematical Society, 356 (3): 939–951, doi:10.1090/S0002-9947-03-03454-8, JSTOR 1195002
Kyoto Journal of Mathematics
Kyoto J. Math.
Volume 59, Number 1 (2019), 195-235.
Brane involutions on irreducible holomorphic symplectic manifolds
Emilio Franco, Marcos Jardim, and Grégoire Menet
In the context of irreducible holomorphic symplectic manifolds, we say that (anti)holomorphic (anti)symplectic involutions are brane involutions since their fixed point locus is a brane in the physicists' language, that is, a submanifold which is either a complex or Lagrangian submanifold with respect to each of the three Kähler structures of the associated hyper-Kähler structure. Starting from a brane involution on a K3 or Abelian surface, one can construct a natural brane involution on its moduli space of sheaves. We study these natural involutions and their relation with the Fourier–Mukai transform. Later, we recall the lattice-theoretical approach to mirror symmetry. We provide two ways of obtaining a brane involution on the mirror, and we study the behavior of the brane involutions under both mirror transformations, giving examples in the case of a K3 surface and K3[2]-type manifolds.
Kyoto J. Math., Volume 59, Number 1 (2019), 195-235.
Received: 5 July 2016
First available in Project Euclid: 8 January 2019
Permanent link to this document
https://projecteuclid.org/euclid.kjm/1546916422
Digital Object Identifier
doi:10.1215/21562261-2018-0009
Primary: 14J28: $K3$ surfaces and Enriques surfaces
Secondary: 14J33: Mirror symmetry [See also 11G42, 53D37] 14J50: Automorphisms of surfaces and higher-dimensional varieties
K3 surfaces involutions mirror symmetry
Franco, Emilio; Jardim, Marcos; Menet, Grégoire. Brane involutions on irreducible holomorphic symplectic manifolds. Kyoto J. Math. 59 (2019), no. 1, 195--235. doi:10.1215/21562261-2018-0009. https://projecteuclid.org/euclid.kjm/1546916422
Condensed Matter > Statistical Mechanics
Title: Low-temperature spectrum of correlation lengths of the XXZ chain in the antiferromagnetic massive regime
Authors: Maxime Dugave, Frank Göhmann, Karol K. Kozlowski, Junji Suzuki
(Submitted on 29 Apr 2015 (v1), last revised 16 Jun 2015 (this version, v2))
Abstract: We consider the spectrum of correlation lengths of the spin-$\frac{1}{2}$ XXZ chain in the antiferromagnetic massive regime. These are given as ratios of eigenvalues of the quantum transfer matrix of the model. The eigenvalues are determined by integrals over certain auxiliary functions and by their zeros. The auxiliary functions satisfy nonlinear integral equations. We analyse these nonlinear integral equations in the low-temperature limit. In this limit we can determine the auxiliary functions and the expressions for the eigenvalues as functions of a finite number of parameters which satisfy finite sets of algebraic equations, the so-called higher-level Bethe Ansatz equations. The behaviour of these equations, if we send the temperature $T$ to zero, is different for zero and non-zero magnetic field $h$. If $h$ is zero the situation is much like in the case of the usual transfer matrix. Non-trivial higher-level Bethe Ansatz equations remain which determine certain complex excitation parameters as functions of hole parameters which are free on a line segment in the complex plane. If $h$ is non-zero, on the other hand, a remarkable restructuring occurs, and all parameters which enter the description of the quantum transfer matrix eigenvalues can be interpreted entirely in terms of particles and holes which are freely located on two curves when $T$ goes to zero.
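For context, the "ratios of eigenvalues" statement in the abstract corresponds to the standard quantum-transfer-matrix relation (a textbook fact of the formalism, not specific to this paper): the inverse correlation length attached to a subdominant eigenvalue $\Lambda_n$ is

$$\frac{1}{\xi_n} = \ln\left|\frac{\Lambda_0}{\Lambda_n}\right| ,$$

where $\Lambda_0$ denotes the dominant eigenvalue of the quantum transfer matrix.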
Comments: 38 pages, dedicated to Prof. R. J. Baxter on the occasion of his 75th birthday, v2: minor typos corrected
Subjects: Statistical Mechanics (cond-mat.stat-mech); Strongly Correlated Electrons (cond-mat.str-el); High Energy Physics - Theory (hep-th); Mathematical Physics (math-ph)
Journal reference: J. Phys. A 48 (2015) 334001 (38pp)
DOI: 10.1088/1751-8113/48/33/334001
Cite as: arXiv:1504.07923 [cond-mat.stat-mech]
(or arXiv:1504.07923v2 [cond-mat.stat-mech] for this version)
From: Frank Göhmann
[v1] Wed, 29 Apr 2015 16:36:29 GMT (1833kb,D)
[v2] Tue, 16 Jun 2015 13:52:13 GMT (1833kb,D)
Completing the square
In elementary algebra, completing the square is a technique for converting a quadratic polynomial of the form $ax^{2}+bx+c$ to the form $a(x+h)^{2}+k$.
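Matching coefficients in $a(x+h)^2+k = ax^2+bx+c$ gives $h = b/(2a)$ and $k = c - b^2/(4a)$. That conversion can be sketched in a few lines of Python (an illustrative sketch, not part of the original page; exact arithmetic via the standard-library `Fraction` type):

```python
from fractions import Fraction

def complete_the_square(a, b, c):
    """Rewrite a*x^2 + b*x + c as a*(x + h)^2 + k.

    Expanding a*(x + h)^2 + k and matching coefficients gives
    2*a*h = b and a*h**2 + k = c, hence h = b/(2a) and
    k = c - b^2/(4a).  Fractions keep the result exact.
    """
    if a == 0:
        raise ValueError("not a quadratic: a must be non-zero")
    a, b, c = Fraction(a), Fraction(b), Fraction(c)
    h = b / (2 * a)
    k = c - b * b / (4 * a)
    return a, h, k

# Worked example: x^2 + 6x - 133 = (x + 3)^2 - 142
a, h, k = complete_the_square(1, 6, -133)
print(f"{a}*(x + {h})^2 + {k}")  # prints "1*(x + 3)^2 + -142"
```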
Mikhail Katz
Mikhail "Mischa" Gershevich Katz (born 1958, in Chișinău)[1] is an Israeli mathematician, a professor of mathematics at Bar-Ilan University. His main interests are differential geometry, geometric topology and mathematics education; he is the author of the book Systolic Geometry and Topology, which is mainly about systolic geometry. The Katz–Sabourau inequality is named after him and Stéphane Sabourau.[2][3]
Born: 1958, Chișinău, Moldavian SSR
Nationality: Israeli
Education: Harvard University; Columbia University
Fields: Mathematics
Institutions: Bar-Ilan University
Thesis: Jung's Theorem in Complex Projective Geometry
Doctoral advisors: Troels Jørgensen; Mikhail Gromov
Website: http://u.cs.biu.ac.il/~katzmik/
Biography
Mikhail Katz was born in Chișinău in 1958. His mother was Clara Katz (née Landman). In 1976, he moved with his mother to the United States.[4][5]
Katz earned a bachelor's degree in 1980 from Harvard University.[1] He did his graduate studies at Columbia University, receiving his Ph.D. in 1984 under the joint supervision of Troels Jørgensen and Mikhael Gromov.[6] His thesis title is Jung's Theorem in Complex Projective Geometry.
He moved to Bar-Ilan University in 1999, after previously holding positions at the University of Maryland, College Park, Stony Brook University, Indiana University Bloomington, the Institut des Hautes Études Scientifiques, the University of Rennes 1, Henri Poincaré University, and Tel Aviv University.[1]
Work
Katz has performed research in systolic geometry in collaboration with Luigi Ambrosio, Victor Bangert, Mikhail Gromov, Steve Shnider, Shmuel Weinberger, and others. He has authored research publications appearing in journals including Communications on Pure and Applied Mathematics, Duke Mathematical Journal, Geometric and Functional Analysis, and Journal of Differential Geometry. Along with these papers, Katz was a contributor to the book "Metric Structures for Riemannian and Non-Riemannian Spaces".[7] Marcel Berger in his article "What is... a Systole?"[8] lists the book (Katz, 2007) as one of two books he cites in systolic geometry.
More recently, Katz has also contributed to the study of mathematics education,[9] including work that provides an alternative interpretation of the number 0.999....[10]
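A one-equation sketch of that interpretation (following the Katz–Katz paper cited here; the framing is a paraphrase, not a quotation): in the real numbers the infinite decimal is a limit,

$$0.\overline{9} = \lim_{n\to\infty}\left(1 - 10^{-n}\right) = 1 ,$$

whereas terminating the string of 9s at an infinite hypernatural rank $H$ produces a hyperreal number falling infinitesimally short of 1:

$$0.\underbrace{9\ldots 9}_{H} = 1 - 10^{-H} < 1 .$$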
Selected publications
• Bair, Jacques; Błaszczyk, Piotr; Ely, Robert; Henry, Valérie; Kanovei, Vladimir; Katz, Karin; Katz, Mikhail; Kutateladze, Semen; McGaffey, Thomas; Schaps, David; Sherry, David; Shnider, Steve (2013), "Is mathematical history written by the victors?" (PDF), Notices of the American Mathematical Society, 60 (7): 886–904, arXiv:1306.5973, doi:10.1090/noti1001.
• Katz, Mikhail G.; Sherry, David (2012), "Leibniz's laws of continuity and homogeneity", Notices of the American Mathematical Society, 59 (11): 1550–1558, arXiv:1211.7188, Bibcode:2012arXiv1211.7188K, doi:10.1090/noti921, S2CID 42631313.
• Katz, Mikhail G.; Schaps, David; Shnider, Steve (2013), "Almost Equal: The Method of Adequality from Diophantus to Fermat and Beyond", Perspectives on Science, 21 (3): 283–324, arXiv:1210.7750, Bibcode:2012arXiv1210.7750K, doi:10.1162/POSC_a_00101, S2CID 57569974.
• Borovik, Alexandre; Jin, Renling; Katz, Mikhail G. (2012), "An Integer Construction of Infinitesimals: Toward a Theory of Eudoxus Hyperreals", Notre Dame Journal of Formal Logic, 53 (4): 557–570, arXiv:1210.7475, Bibcode:2012arXiv1210.7475B, doi:10.1215/00294527-1722755, S2CID 14850847.
• Kanovei, Vladimir; Katz, Mikhail G.; Mormann, Thomas (2013), "Tools, Objects, and Chimeras: Connes on the Role of Hyperreals in Mathematics", Foundations of Science, 18 (2): 259–296, arXiv:1211.0244, doi:10.1007/s10699-012-9316-5, S2CID 7631073.
• Katz, Mikhail; Tall, David (2012), "A Cauchy-Dirac delta function", Foundations of Science, 18: 107–123, arXiv:1206.0119, Bibcode:2012arXiv1206.0119K, doi:10.1007/s10699-012-9289-4, S2CID 119167714.
• Katz, Mikhail; Sherry, David (2012), "Leibniz's Infinitesimals: Their Fictionality, Their Modern Implementations, and Their Foes from Berkeley to Russell and Beyond", Erkenntnis, 78 (3): 571–625, arXiv:1205.0174, Bibcode:2012arXiv1205.0174K, doi:10.1007/s10670-012-9370-y, S2CID 119329569.
• Błaszczyk, Piotr; Katz, Mikhail; Sherry, David (2012), "Ten misconceptions from the history of analysis and their debunking", Foundations of Science, 18: 43–74, arXiv:1202.4153, Bibcode:2012arXiv1202.4153B, doi:10.1007/s10699-012-9285-8, S2CID 119134151.
• Katz, Mikhail; Tall, David (2012), Tension between Intuitive Infinitesimals and Formal Mathematical Analysis, Bharath Sriraman, Editor. Crossroads in the History of Mathematics and Mathematics Education. The Montana Mathematics Enthusiast Monographs in Mathematics Education 12, Information Age Publishing, Inc., Charlotte, NC, pp. 71–89, arXiv:1110.5747, Bibcode:2011arXiv1110.5747K.
• Katz, Karin Usadi; Katz, Mikhail G. (2011), "Meaning in Classical Mathematics: Is it at Odds with Intuitionism?", Intellectica, 56 (2): 223–302, arXiv:1110.5456, Bibcode:2011arXiv1110.5456U.
• Borovik, Alexandre; Katz, Mikhail G. (2012), "Who gave you the Cauchy—Weierstrass tale? The dual history of rigorous calculus", Foundations of Science, 17 (3): 245–276, arXiv:1108.2885, doi:10.1007/s10699-011-9235-x, S2CID 119320059.
• Katz, Karin Usadi; Katz, Mikhail G. (2011), "Cauchy's continuum", Perspectives on Science, 19 (4): 426–452, arXiv:1108.4201, doi:10.1162/POSC_a_00047, MR 2884218, S2CID 57565752.
• Katz, Karin Usadi; Katz, Mikhail G.; Sabourau, Stéphane; Shnider, Steven; Weinberger, Shmuel (2011), "Relative systoles of relative-essential 2-complexes", Algebraic & Geometric Topology, 11 (1): 197–217, arXiv:0911.4265, doi:10.2140/agt.2011.11.197, MR 2764040, S2CID 20087785.
• Katz, Karin Usadi; Katz, Mikhail G. (2012), "Stevin numbers and reality", Foundations of Science, 17 (2): 109–123, arXiv:1107.3688, doi:10.1007/s10699-011-9228-9, S2CID 119167692.
• Katz, Karin Usadi; Katz, Mikhail G. (2012), "A Burgessian critique of nominalistic tendencies in contemporary mathematics and its historiography", Foundations of Science, 17 (1): 51–89, arXiv:1104.0375, doi:10.1007/s10699-011-9223-1, MR 2896999, S2CID 119250310.
• Katz, Mikhail G. (2007), Systolic geometry and topology, Mathematical Surveys and Monographs, vol. 137, Providence, RI: American Mathematical Society, ISBN 978-0-8218-4177-8, MR 2292367. With an appendix by J. Solomon.
• Katz, Karin Usadi; Katz, Mikhail G. (2010), "When is .999... less than 1?", The Montana Mathematics Enthusiast, 7 (1): 3–30, arXiv:1007.3018, Bibcode:2010arXiv1007.3018U, doi:10.54870/1551-3440.1381, S2CID 11544878, archived from the original on 2011-07-20.
• Katz, Karin Usadi; Katz, Mikhail G. (2010), "Zooming in on infinitesimal 1–.9.. in a post-triumvirate era", Educational Studies in Mathematics, 74 (3): 259–273, arXiv:1003.1501, Bibcode:2010arXiv1003.1501K, doi:10.1007/s10649-010-9239-4, S2CID 115168622.
• Bangert, Victor; Katz, Mikhail G. (2003), "Stable systolic inequalities and cohomology products", Communications on Pure and Applied Mathematics, 56 (7): 979–997, arXiv:math/0204181, doi:10.1002/cpa.10082, MR 1990484, S2CID 14485627.
• Katz, Mikhail G.; Rudyak, Yuli B. (2006), "Lusternik–Schnirelmann category and systolic category of low-dimensional manifolds", Communications on Pure and Applied Mathematics, 59 (10): 1433–1456, arXiv:dg-ga/9708007, doi:10.1002/cpa.20146, MR 2248895, S2CID 15470409.
• Bangert, Victor; Katz, Mikhail G.; Shnider, Steven; Weinberger, Shmuel (2009), "E7, Wirtinger inequalities, Cayley 4-form, and homotopy", Duke Mathematical Journal, 146 (1): 35–70, arXiv:math.DG/0608006, doi:10.1215/00127094-2008-061, MR 2475399, S2CID 2575584.
• Croke, Christopher B.; Katz, Mikhail G. (2003), "Universal volume bounds in Riemannian manifolds", in Yau, S. T. (ed.), Surveys in Differential Geometry VIII, Lectures on Geometry and Topology held in honor of Calabi, Lawson, Siu, and Uhlenbeck at Harvard University, May 3–5, 2002, Int. Press, Somerville, MA, pp. 109–137, arXiv:math.DG/0302248, MR 2039987.
• Katz, Mikhail G. (1983), "The filling radius of two-point homogeneous spaces", Journal of Differential Geometry, 18 (3): 505–511, doi:10.4310/jdg/1214437785, MR 0723814.
References
1. Curriculum vitae, retrieved 2011-05-23.
2. Kalogeropoulos, Nikolaos (2017). "Systolic aspects of black hole entropy". arXiv:1711.09963 [gr-qc].
3. Riemannian Geometry: A Modern Introduction, by Isaac Chavel, pg. 235 https://books.google.com/books?id=3Gjp4vQ_mPkC&pg=PA235
4. "Clara Katz, a Soviet émigré who saved her ailing granddaughter, dies at 85 – The Boston Globe". archive.boston.com. Retrieved 2018-01-10.
5. "Grandmother bucked the Soviet system – Obituaries – smh.com.au". www.smh.com.au. 12 October 2006. Retrieved 2018-01-10.
6. Mikhail Katz at the Mathematics Genealogy Project
7. Gromov, Misha: Metric structures for Riemannian and non-Riemannian spaces. Based on the 1981 French original. With appendices by M. Katz, P. Pansu and S. Semmes. Translated from the French by Sean Michael Bates. Progress in Mathematics, 152. Birkhäuser Boston, Inc., Boston, MA, 1999. xx+585 pp. ISBN 0-8176-3898-9
8. Berger, M.: What is... a Systole? Notices of the AMS 55 (2008), no. 3, 374–376.
9. Katz & Katz (2010).
10. Stewart, I. (2009) Professor Stewart's Hoard of Mathematical Treasures, Profile Books, p. 174.
External links
• Mikhail Katz's home page
• Mikhail Katz publications indexed by Google Scholar